Collecting server backups with my Pi

I have two spare SSDs at home and wanted to do something useful with them. Now I can even do something useful with them together with my Raspberry Pi!

I will use it to temporarily store my server backups. First I thought about a push from every server, but I didn’t want to put private keys on webservers. It just doesn’t feel right, so I abandoned that idea. A pull seemed like a better idea, but then you have to guess when the server is finished with its backup and assume the file is actually there. That’s about as elegant as a ballet-dancing heavyweight lifter, so I abandoned this idea, too. Unfortunately there’s only push and pull, so I decided to take a nap.

After my nap the following idea knocked at my door:

  • Create backups of the web, config and database folders and encrypt them (GPG)
  • Notify the Raspberry Pi with a simple web hook
  • The Raspberry Pi downloads the files

That sounded pretty sleek to me, so I extended the idea a bit. As I have 4 servers, which would all finish their backups at roughly the same time, I need something to throttle the downloads. I decided to go with a job queue: beanstalkd.

So after the backup script has finished its job, it sends a notification (the hook URL here is a placeholder):

```bash
$ http POST https://raspberry.example/hook \
    server=webserver1 \
    file=2015-04-26-www-webserver1.tar.gz \
    type=www \
    --auth backup:password
```
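
The hook code isn’t shown in the post; here is a minimal sketch of how it could look. The credentials, port and host are placeholders, and instead of a beanstalkd client library it speaks the queue’s plain-text protocol directly (a `put` command on the default tube).

```python
"""Sketch of the hook: an HTTP endpoint that checks Basic Auth and
drops the JSON payload into beanstalkd. Credentials are placeholders."""
import base64
import json
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

EXPECTED_AUTH = "backup:password"  # placeholder credentials


def check_auth(header):
    """Validate an 'Authorization: Basic ...' header value."""
    if not header or not header.startswith("Basic "):
        return False
    try:
        decoded = base64.b64decode(header[6:]).decode()
    except Exception:
        return False
    return decoded == EXPECTED_AUTH


def frame_put(body, pri=1024, delay=0, ttr=300):
    """Frame a beanstalkd 'put' command for the given job body."""
    data = body.encode()
    head = f"put {pri} {delay} {ttr} {len(data)}\r\n".encode()
    return head + data + b"\r\n"


def enqueue(body, host="127.0.0.1", port=11300):
    """Send one job to beanstalkd over its plain-text protocol."""
    with socket.create_connection((host, port)) as s:
        s.sendall(frame_put(body))
        return s.recv(1024)  # e.g. b"INSERTED <id>\r\n"


class Hook(BaseHTTPRequestHandler):
    def do_POST(self):
        if not check_auth(self.headers.get("Authorization")):
            self.send_response(401)
            self.end_headers()
            return
        length = int(self.headers.get("Content-Length", 0))
        # Payload shape: {"server": ..., "file": ..., "type": ...}
        payload = json.loads(self.rfile.read(length))
        enqueue(json.dumps(payload))
        self.send_response(200)
        self.end_headers()


# Usage (not executed here):
#   HTTPServer(("", 8080), Hook).serve_forever()
```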

The JSON payload is inserted into the job queue, and by starting a reasonable number of consumers I can easily throttle the downloads. I might start with two consumers first and see whether that saturates my internet connection or not.
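
The consumer isn’t written yet, but one could sketch it like this. The beanstalkd replies are parsed from the raw protocol, and the download uses scp with placeholder paths and user names; running two of these processes caps the number of parallel downloads at two.

```python
"""Sketch of one consumer: reserve a job from beanstalkd, fetch the
backup with scp, then delete the job. Hosts and paths are placeholders."""
import json
import socket
import subprocess


def parse_reserved(resp):
    """Parse a 'RESERVED <id> <bytes>\r\n<body>\r\n' reply into (id, body)."""
    head, _, rest = resp.partition(b"\r\n")
    parts = head.split()
    if parts[0] != b"RESERVED":
        raise ValueError(head.decode())
    job_id, size = int(parts[1]), int(parts[2])
    return job_id, rest[:size].decode()


def build_scp_command(job, dest="/mnt/backups"):
    """Turn a job payload into the scp invocation that fetches the file."""
    return ["scp",
            f"backup@{job['server']}:/var/backups/{job['file']}",
            f"{dest}/{job['type']}/{job['file']}"]


def consume(host="127.0.0.1", port=11300):
    """Blocking loop: reserve, download, delete.
    (A real client would buffer recv() until the full body arrives.)"""
    with socket.create_connection((host, port)) as s:
        while True:
            s.sendall(b"reserve\r\n")
            job_id, body = parse_reserved(s.recv(65536))
            job = json.loads(body)
            subprocess.run(build_scp_command(job), check=True)
            s.sendall(f"delete {job_id}\r\n".encode())
            s.recv(1024)  # b"DELETED\r\n"


# Usage: start two consumers to cap parallel downloads at two:
#   $ python consumer.py &
#   $ python consumer.py &
```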

The code for the hook and the backup script is already done; tomorrow I’ll write the consumer and then push everything to GitHub. Maybe someone (of my 0 readers) is interested.
