Project: Automated offsite backups for an NSLU2 -- part 12

Posted on 16 November 2006 in NSLU2 offsite backup project

Previously in this series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7, Part 8, Part 9, Part 10, Part 11.

I'm setting up automated offsite backups from my NSLU2 to Amazon S3. With surprisingly little effort, I've managed to get a tool called s3sync running on the "slug" (as it's known). s3sync is a Ruby script, so in order to run it, I had to install Ruby, which in turn meant that I had to replace the slug's firmware with a different version of Linux, called Unslung. Once all of that was done, I just had to set up the appropriate directory structures and certificates so that the sync tool could use SSL, and write a simple upload/download script. All of this worked pretty much as advertised in the tools' respective documentation -- for the details, see the previous posts in this series.
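For a rough idea of what that upload script involves: it's essentially a thin wrapper around s3sync.rb. Here's a sketch -- with placeholder keys, bucket name and paths, so not necessarily exactly what mine looks like:

#!/bin/sh
# Placeholder credentials and certificate directory -- use your own.
export AWS_ACCESS_KEY_ID=mykey
export AWS_SECRET_ACCESS_KEY=mysecret
export SSL_CERT_DIR=/home/s3sync/certs
cd /home/s3sync
# -r recurses into directories, -v logs each item as it goes, and
# --ssl encrypts the transfer using the certificates set up earlier.
ruby s3sync.rb -r -v --ssl /share/hdd/data/ mybucket:backup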

The final step in my last post was to set up a cron job to synchronise several gigabytes of data up to S3 overnight. This post covers what I found the next day.

Here is the line from the crontab file:

42 22 * * * root /home/s3sync/upload.sh &> /tmp/s3sync.log
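In other words: at 22:42 every night, run the upload script as root, with bash's &> shorthand sending everything it prints -- standard output and standard error alike -- to /tmp/s3sync.log. Or at least, that was the intention.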

Checking this morning brought some bad news. Nothing had been written to the log file, and the bucket I'd set up to receive the backup on S3 had only 6MB of data -- as compared to a total of 4GB+ that was there to be backed up.

Clearly something had gone wrong.

I figured it was best to try again, this time eliminating the cron job from the equation by simply running the backup script from a command prompt. After all, I had run the script from the command line previously, and had seen some useful logging information.

This time it at least seemed to be logging something:

-bash-3.1# /home/s3sync/upload.sh
Create node Giles

...

I left it for an hour or so, after which it had uploaded 141.25MB, logging all the while. Clearly (a) there was something wrong with the way I had set up logging from the crontab, and (b) something had interrupted the sync when it ran the previous night. After a little thought, I came to the conclusion that it might not be a great idea to have something in the crontab that could take multiple hours to run; there could well be a limit on how long a job is allowed to run, at least in the version of the cron daemon that lives on the NSLU2, and the sync process might have been killed before it was able to write its output to the log file. That said, I could find no mention of such a limit on the obvious page on the NSLU2-Linux site. I decided to ask the site's mailing list, to see if anyone knew for sure whether this was the answer; in the meantime, I watched the sync from the command line as it reached 683 items and 270MB.
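One other thought worth noting for next time (pure speculation until I can confirm it): &> is a bash extension rather than standard Bourne shell syntax. If the slug's cron daemon runs its jobs under a plain POSIX /bin/sh, my crontab line would be parsed as two separate things -- "run the script in the background" and "truncate /tmp/s3sync.log" -- which would explain the empty log file rather neatly. The portable way to write the redirection would be:

42 22 * * * root /home/s3sync/upload.sh > /tmp/s3sync.log 2>&1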

Next: Further investigations.