Howto mass delete old Tweets on Twitter

There’s unfortunately no way to mass delete old Tweets you’ve posted on Twitter. There are some online services that promise to delete your data for you, but since you’d have to grant them access to your account, I had a bad feeling about that and wanted to do things on my own.

Last year I tried a Windows-only tool called Twitter Archive Eraser. Back then it was a GitHub project which you could compile locally and run against your account. It’s now free only for a limited number of tweets and also only works with tweets no older than two years. To remove these restrictions you have to pay a small amount for a license.

You’ll need to download your complete tweet archive for the deletion process. Once you’ve got the data from Twitter, you might as well write a little script which deletes the old messages for you using the Twitter post IDs.

Luckily, I found this blog post by Kris Shaffer. He explains how he deleted a large number of his tweets using Python, so I started to try this for myself. There was also a different blog which explained the process in a more beginner-friendly way. However, I had problems with misformatted characters, so I’ve decided to post the code I used as a gist on GitHub:
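
The approach boils down to something like this (a minimal sketch, assuming the 2018-era archive format with a tweets.csv file and tweepy 3.x; the keys are placeholders for the ones from your developer app):

# Sketch: delete archived tweets in a date range using tweepy 3.x
import csv
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# tweets.csv comes from the downloaded archive; its timestamp column
# looks like "2017-03-14 09:21:05 +0000", so comparing the date prefix
# as a string is good enough for filtering
with open("tweets.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        if "2017-01-01" <= row["timestamp"][:10] <= "2018-06-30":
            try:
                api.destroy_status(row["tweet_id"])
                print("deleted", row["tweet_id"])
            except tweepy.TweepError as err:
                print("skipped", row["tweet_id"], err)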

To use this, I did the following:

  • Requested and downloaded my account data from Twitter
  • Created a Twitter developer account
  • Created a new app to get API keys and access tokens
  • Installed Python 3 on my Mac with Homebrew: 'brew install python3'
  • Installed tweepy with pip3: 'pip3 install tweepy'
  • Created a virtual environment for this script (see the commands after this list)
  • Copied the lines in blocks into the Python 3 interactive shell
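
The virtual environment setup looks like this ('tweet-cleanup' is just an example name):

python3 -m venv tweet-cleanup
source tweet-cleanup/bin/activate
pip3 install tweepy    # tweepy must also be installed inside the environment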

Please be aware that the above gist only deletes tweets from 2017 to June 2018. For other scenarios (e.g. deleting only mentions in a given time frame), please refer to Kris’s blog post.

Auto mount NFS shares on Raspbian

I’m using InfluxDB on my Raspberry Pi in combination with an NFS mount. The NFS mount is on my Synology NAS and should store the database data of InfluxDB. The reason for this setup is that I fear the SD card won’t survive the many read/write cycles caused by a database writing to it.

The shared folder on my Synology is configured to be accessible only by certain IPs in my network.

On Raspbian, I tried to auto mount the NFS share at startup so that the influxdb service can directly write to the NFS mount.

I’ve used these settings in my /etc/fstab to mount the volume automatically:

<DS IP>:/volume1/databases /mnt/databases nfs auto,user,rw,nolock,nosuid 0 0

This doesn’t work reliably: InfluxDB is often dead after a restart, even though the NFS volume shows up as properly mounted when I check the mounted volumes. Apparently the share is only mounted after InfluxDB has already tried to start.

However, there’s a tool called autofs which already helped me with a similar problem on my Mac when I moved my iTunes library to the Synology share.

Install autofs using

sudo apt-get install autofs

Open the file /etc/auto.master and add something like this

/mnt    /etc/auto.databases     -nosuid,noowners

Now create a file called /etc/auto.databases with this content

databases       -fstype=nfs,user,nolock,nosuid,rw <DS IP>:/volume1/databases

Unmount the existing NFS share and remove or comment out the line for the NFS mount in your /etc/fstab so that it doesn’t conflict with autofs. Then restart autofs with

sudo service autofs restart

Now check the content of your mount point with e.g.

ls /mnt/databases

Autofs should now automatically mount the NFS share. The ls command might take a moment to return; that’s a good sign, because it means autofs is mounting the share on first access. You can also verify with

mount

that your NFS share is mounted at e.g. /mnt/databases. If you restart now, InfluxDB should come up happily: when the service tries to start and accesses the mount point, autofs sees the access and mounts the NFS share before InfluxDB needs it.

Configure InfluxDB to store its data in a different folder

The default location of the InfluxDB data is /var/lib/influxdb. If you want to change the location, you’ll need to configure three folders to be in a different place. The changes should be made in the file /etc/influxdb/influxdb.conf:

...
[meta]
  # Where the metadata/raft database is stored
  #dir = "/var/lib/influxdb/meta"
  dir = "/mnt/databases/influxdb/meta"
...
[data]
  # The directory where the TSM storage engine stores TSM files.
  #dir = "/var/lib/influxdb/data"
  dir = "/mnt/databases/influxdb/data"

  # The directory where the TSM storage engine stores WAL files.
  #wal-dir = "/var/lib/influxdb/wal"
  wal-dir = "/mnt/databases/influxdb/wal"

I’m using this to store the data on an NFS share which is mounted automatically. If you want to keep your existing data, move the existing content of /var/lib/influxdb to the new location.

Make sure that the new location is owned by the influxdb user and group.
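
A possible sequence, assuming the paths from above (stop InfluxDB first so no files change while moving):

sudo service influxdb stop
sudo mkdir -p /mnt/databases/influxdb
sudo mv /var/lib/influxdb/meta /var/lib/influxdb/data /var/lib/influxdb/wal /mnt/databases/influxdb/
sudo chown -R influxdb:influxdb /mnt/databases/influxdb
sudo service influxdb start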

Improve OpenVPN security on Synology DiskStations

I’m using OpenVPN on my Synology DiskStation with certificates instead of pre-shared keys. A few days ago I wanted to log in to my VPN and it wasn’t working. After checking the log file I saw that there were some issues with the OpenVPN configuration file:

Tue Nov 20 23:04:27 2018 Cipher algorithm 'TLS-DHE-RSA-WITH-AES-256-GCM-SHA384:TLS-DHE-RSA-WITH-AES-256-CB' not found
Tue Nov 20 23:04:27 2018 Exiting due to fatal error

How can this be? The configuration had worked for months without problems. Then I remembered that I had started to increase the security of my OpenVPN configuration using a few parameters; the cipher algorithm is one of them. This page describes some of the changes I made (unfortunately only in German).

I had added the tls-cipher and tls-auth options as the last parameter lines in my configuration file. The Synology web UI then tried to parse those parameters as the cipher and auth parameters when showing those values in the DSM UI.

I reordered the tls-auth and tls-cipher parameters to be above the auth and cipher parameters, and the DSM UI is now able to show those values correctly. This also enables you to restart the OpenVPN service from the web UI without needing to log in via SSH.
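
With tls-auth and tls-cipher moved up, the relevant part of the configuration looks roughly like this (a sketch; the cipher names and the key path are only examples, pick values that your installation supports):

# TLS options first, so the DSM UI parses cipher and auth correctly
tls-cipher TLS-DHE-RSA-WITH-AES-256-GCM-SHA384
tls-auth /path/to/ta.key 0
cipher AES-256-CBC
auth SHA512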

How do you find the supported values for auth, cipher and tls-cipher, you might wonder? Just execute

openvpn --show-tls

to get the supported tls-cipher values; multiple ciphers can be combined in one tls-cipher line, separated by colons (:).

openvpn --show-digests

shows you the allowed values for auth and

openvpn --show-ciphers

will show the allowed values for cipher. However, cipher and auth can also be preselected from the DSM UI.

Don’t forget to use the same values in your OpenVPN configuration on your VPN client as well, otherwise the connection won’t work.

Howto install InfluxDB and Grafana on a Raspberry Pi 3

Inspired by a friend, I decided to install InfluxDB and Grafana on my Raspberry Pi 3. InfluxDB is a database optimized for storing time-related data like the measurements of my recently installed particle sensor. Grafana is used to create beautiful graphs to display the stored data.

The InfluxDB installation can be done in a few simple steps:
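
A sketch of the steps, assuming Raspbian Stretch and the InfluxData Debian repository (check the InfluxData downloads page for the current repository line and key):

curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -
echo "deb https://repos.influxdata.com/debian stretch stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
sudo apt-get update
sudo apt-get install influxdb
sudo systemctl enable influxdb
sudo systemctl start influxdb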

This will install InfluxDB without any users or access rights. You can read up further on that topic; ideally you should set up a user for authentication, but since some IoT devices don’t support this, I’m not going to explain it here.
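
You’ll also need a database to write to; I use one called topic later in this post. It can be created with the influx command line client:

influx
> CREATE DATABASE topic
> exit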

The Grafana installation is similarly simple:

Please make sure to get the most current version from GitHub and use it in the wget command:
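
Roughly like this, assuming the fg2it grafana-on-raspberry builds from GitHub (the version number is only a placeholder, use the latest release):

wget https://github.com/fg2it/grafana-on-raspberry/releases/download/v5.1.4/grafana_5.1.4_armhf.deb
sudo dpkg -i grafana_5.1.4_armhf.deb
sudo apt-get install -f
sudo systemctl enable grafana-server
sudo systemctl start grafana-server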

Now you’re ready to log in to Grafana for the first time. Go to http://<ip-of-grafana-machine>:3000 and set up a new username and password for the web interface. The default login is admin/admin.

Configure InfluxDB as a data source in Grafana:

You need to configure a data source under http://<ip-of-grafana-machine>:3000/datasources

As the name, enter the name of the database you created earlier; in this case it was topic.

The type of the database is InfluxDB.

The HTTP connection URL is http://localhost:8086

Once you’ve configured everything to your liking, hit Save & Test. The connection to the database should now work.