Configure Mosquitto MQTT broker user authentication in Docker running on Synology NAS

Today I enabled user authentication for my Mosquitto MQTT broker running in a Docker container on my Synology NAS.

Here’s my shared folder for use with Docker, it’s under /volume1/docker:

mqtt
├── data
├── log
│   └── mosquitto.log
├── mosquitto.conf
└── mosquitto.passwd

The mqtt folder needs to be accessible by the mosquitto process running inside the container, which uses UID/GID 1883, e.g. by using:

sudo chown -R 1883:1883 mqtt/

The content of my docker-compose.yml:

version: '3'
services:
  mosquitto:
    hostname: mosquitto
    image: eclipse-mosquitto:latest
    restart: always
    volumes:
      - /volume1/docker/mqtt/mosquitto.conf:/mosquitto/config/mosquitto.conf:ro
      - /volume1/docker/mqtt/mosquitto.passwd:/mosquitto/config/mosquitto.passwd
      - /volume1/docker/mqtt/log/mosquitto.log:/mosquitto/log/mosquitto.log
      - /volume1/docker/mqtt/data:/mosquitto/data
    ports:
      - "1883:1883"

The mapped files in the volumes section need to be present, otherwise Docker will complain during startup of the container.

Also make sure to spell mosquitto with a double t. I forgot this, used only one t, and wondered why nothing worked the way I expected.

Here’s the content of my mosquitto.conf:

pid_file /var/run/mosquitto.pid

persistence true
persistence_location /mosquitto/data/

log_dest file /mosquitto/log/mosquitto.log
log_dest stdout

password_file /mosquitto/config/mosquitto.passwd
allow_anonymous false
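
One note if you pull eclipse-mosquitto:latest today: starting with Mosquitto 2.0 the broker only listens on localhost unless a listener is declared explicitly, so on that version the config needs one additional line:

listener 1883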

You can create the mosquitto.passwd file with the mosquitto_passwd tool, either from inside the Docker container or from a local installation of mosquitto.

mosquitto_passwd -c /mosquitto/config/mosquitto.passwd <username>

It will ask you twice for the password for the username. If you want to add more users, omit the -c parameter so that the existing file won’t be overwritten.
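
For reference, this is how running the tool inside the container could look. The container name is an assumption, since Compose derives it from the project and service name; check docker ps for the actual name:

docker exec -it mqtt_mosquitto_1 mosquitto_passwd -c /mosquitto/config/mosquitto.passwd <username>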

The “allow_anonymous false” line disables anonymous access to the broker.

You can now SSH to your Synology and start the docker container using the docker-compose file:

docker-compose -f docker-compose.yml up -d

This will look for the docker-compose.yml in the current folder and start the container in detached mode. Thanks to the restart: always policy, the container will also come back up automatically when your Synology restarts (e.g. after system updates).
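
To verify that the authentication works, you can subscribe with and without credentials from another machine with the mosquitto clients installed (the topic test/# is just an example):

mosquitto_sub -h <NAS IP> -p 1883 -u <username> -P <password> -t 'test/#' -v

The same command without -u and -P should now be rejected with a “not authorised” error.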

How to control a Xiaomi Robot Vacuum without the app using Valetudo

I’ve tinkered with my Xiaomi Robot Vacuum before, but returned to the official Xiaomi app since the existing solutions felt uncomfortable. I even worked on adding Mac support for the dustcloud software, but stopped using the rooted firmware.

A few days ago I read about Valetudo. Valetudo is a web interface for the Xiaomi robot that is self-hosted on the robot itself. It allows easy extraction of the necessary control token and stops the robot from reporting cleaning and location data to Xiaomi. There’s also MQTT support, so you can integrate it into existing home automation systems.

I followed the instructions on creating a rooted firmware, ran into a few problems, and want to share my solutions:

  • The firmware builder creates a firmware package along with SSH keys supplied during the build process. I could not log in using those SSH keys and had to use the SSH key directly from the ~/.ssh folder of my user.
  • Flashing inside a VirtualBox Ubuntu VM doesn’t work, even when you use a bridged network interface. You may be able to request the device token, but the flash command always fails.
  • Flashing the robot may fail if it isn’t completely reset to its defaults. You can reset the robot to factory defaults by pressing the home and reset buttons until you hear the Chinese voice.
  • You should flash the robot while it is inside its charging station.
  • If you’re using a Mac, you can install Python 3 and the required Python packages. This will allow you to flash the firmware directly from your Mac (see the sketch after this list).
  • Keep your machine close to the robot during the flashing process, because it might otherwise time out.
  • Since I’m using a Chinese version of the robot, I only heard the Chinese voice. In this case you’ll need to convert the robot to a European version following these instructions. Once the robot has rebooted you’ll hear the English voice and can verify the change in the Valetudo interface.
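
For the Mac setup mentioned above, here’s a minimal sketch, assuming Homebrew is installed and that you work with python-miio as the dustcloud instructions suggest (check the firmware builder’s README for the exact requirements):

brew install python3
pip3 install python-miio
# with the robot in setup mode and the Mac joined to the robot's Wi-Fi,
# the handshake reveals the device token needed for flashing
mirobo discover --handshake 1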

Now you’re ready to use Valetudo. I’ve added a link to the Valetudo homepage on my smartphone. It now replaces the Xiaomi app, while still providing access to the cleaning map, the maintenance hours for replacing parts, as well as automated clean-up plans. All in all it’s a really nice piece of software!

Monitor Fritz!Box connection statistics with Grafana, InfluxDB and Raspberry Pi

I recently stumbled over an article in the German magazine c’t about visualising your Fritz!Box’s connection. The solution looked quite boring and outdated, since it used MRTG for graph creation.

I started searching for a better solution using Grafana, InfluxDB and my Raspberry Pi and found this great blog post. I’ve already explained how to install Grafana and InfluxDB in this post, so I’ll concentrate on the Fritz!Box related parts:

Start with the installation of fritzcollectd, a plugin for collectd:

sudo apt-get install -y python-pip
sudo apt-get install -y libxml2-dev libxslt1-dev
sudo pip install fritzcollectd

Now create a user account in the Fritz!Box for collectd. Go to System > Fritz!Box Users and create a new user with a password and with access from the internet disabled. The important part is to enable the “Fritz!Box settings” permission.

Additionally, make sure that your Fritz!Box is configured to allow connection queries via UPnP. You can configure this under “Home Network > Network > Network Settings”. Select “Allow access for applications” as well as “Status information via UPnP”.

The next part is the installation and configuration of collectd:

sudo apt-get install -y collectd
sudo nano /etc/collectd/collectd.conf

Enable the python and network plugins by removing the leading # characters:

LoadPlugin python
[...]
LoadPlugin network

Scroll down until you see the plugin configuration section and configure the IP and port for collectd:

<Plugin network>
    Server "127.0.0.1" "25826"
</Plugin>

Configure the fritzcollectd module with the username and password of the user you’ve just created. Also make sure to use the right address:

<Plugin python>
    Import "fritzcollectd"

    <Module fritzcollectd>
        Address "fritz.box"
        Port 49000
        User "user"
        Password "password"
        Hostname "FritzBox"
        Instance "1"
        Verbose "False"
    </Module>
</Plugin>
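
Before moving on, you can let collectd check the configuration; the -t switch only parses the config file and reports errors without starting the daemon:

sudo collectd -t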

Since you’ve already got a running InfluxDB, you just need to enable collectd as a data source:

sudo nano /etc/influxdb/influxdb.conf

Search for the [[collectd]] section and replace it with:

[[collectd]]
  enabled = true
  bind-address = "127.0.0.1:25826"
  database = "collectd"
  typesdb = "/usr/share/collectd/types.db"

Restart collectd and InfluxDB to activate the changes:

sudo systemctl restart collectd
sudo systemctl restart influxdb
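
After a minute or two you can check whether data is arriving, e.g. with the influx CLI (assuming no authentication on the InfluxDB):

influx -database collectd -execute 'SHOW MEASUREMENTS'

If fritzcollectd is working, this should list the measurements created by collectd.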

Log in to your Grafana installation and configure a new data source. Make sure to set the collectd database. If you’re using credentials for the InfluxDB, you can add them now. If you’re not using authentication, you can disable the “With credentials” checkbox.

Check if your configuration is working by clicking on “Save & Test”.

If everything worked, you can proceed to importing the Fritz!Box dashboard from Grafana.com. The dashboard ID is 713. Make sure to select the right InfluxDB data source during the import setup.

After clicking on import, you should be able to see your new dashboard. It might take a few minutes or even hours until you’ve gathered enough data to properly display the graphs.

Be aware though that if you start gathering this much data, you might end up with “insufficient memory” errors. You might want to tweak your InfluxDB settings accordingly.

Auto mount NFS shares on Raspbian

I’m using InfluxDB on my Raspberry Pi in combination with an NFS mount. The NFS mount is on my Synology NAS and should store the database data of InfluxDB. The reason for this setup is that I fear the SD card won’t survive the many read/write cycles caused by a database writing to it.

The shared folder on my Synology is configured to be accessible by various IPs in my network.

On Raspbian I first tried to auto mount the NFS share on startup via /etc/fstab, so that the influxdb service can write directly to the NFS mount.

I used this line in my /etc/fstab to mount the volume automatically:

<DS IP>:/volume1/databases /mnt/databases nfs auto,user,rw,nolock,nosuid 0 0

This doesn’t work reliably: influxdb is often dead after a restart, even though the NFS volume shows up as properly mounted when I check the mounted volumes. Apparently the mount happens too late in the boot process for influxdb.

However, there’s a tool called autofs which already helped me with a similar problem on my Mac when I moved my iTunes library to the Synology share.

Install autofs using

sudo apt-get install autofs

Open the file /etc/auto.master and add something like this

/mnt    /etc/auto.databases     -nosuid,noowners

Now create a file called /etc/auto.databases with this content

databases       -fstype=nfs,user,nolock,nosuid,rw <DS IP>:/volume1/databases

Unmount the existing NFS share and remove/comment out the line for the nfs mount in your /etc/fstab so that it doesn’t conflict with autofs. For example:
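
sudo umount /mnt/databases
sudo nano /etc/fstab

Then restart autofs with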

sudo service autofs restart

Now check the content of your mount point with e.g.

ls /mnt/databases

Autofs should now automatically mount the NFS share on first access. The ls command might take a moment to return, which is a good sign that the mount is being loaded. You can also verify with

mount

that your NFS share is mounted to e.g. /mnt/databases. If you restart now, influxdb should come up fine: when it tries to start and accesses the folder, autofs notices the access and mounts the NFS share, so influxdb can start up properly.

Configure InfluxDB to store its data in a different folder

The default location of the InfluxDB data is /var/lib/influxdb. If you want to change the location, you’ll need to configure three directories to be in a different place. The changes should be made in the file /etc/influxdb/influxdb.conf:

...
[meta]
  # Where the metadata/raft database is stored
  #dir = "/var/lib/influxdb/meta"
  dir = "/mnt/databases/influxdb/meta"
...
[data]
  # The directory where the TSM storage engine stores TSM files.
  #dir = "/var/lib/influxdb/data"
  dir = "/mnt/databases/influxdb/data"

  # The directory where the TSM storage engine stores WAL files.
  #wal-dir = "/var/lib/influxdb/wal"
  wal-dir = "/mnt/databases/influxdb/wal"

I’m using this to store the data on an NFS share which is mounted automatically. If you want to keep your existing data, move the existing content of /var/lib/influxdb to the new location.

Make sure that the new location is owned by the influxdb user and group.
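
A short sketch of the move, assuming the /mnt/databases/influxdb location from above (note that chown on an NFS share only works if the export doesn’t squash permissions):

sudo systemctl stop influxdb
sudo mv /var/lib/influxdb /mnt/databases/influxdb
sudo chown -R influxdb:influxdb /mnt/databases/influxdb
sudo systemctl start influxdb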