Project: Automated offsite backups for an NSLU2 – part 13

17 November 2006

Previously in this series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7, Part 8, Part 9, Part 10, Part 11, Part 12.

I’m setting up automated offsite backups from my NSLU2 to Amazon S3. With surprisingly little effort, I’ve managed to get a tool called s3sync running on the “slug” (as it’s known). s3sync is a Ruby script, so in order to run it, I had to install Ruby, which in turn meant that I had to replace the slug’s firmware with a different version of Linux, called Unslung. Once all of this was done, I just had to set up the appropriate directory structures and certificates so that the sync tool could use SSL, and write a simple upload/download script. All of this worked pretty much as advertised in the tools’ respective documentation – for the details, see the previous posts in this series.
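For context, the upload script amounts to little more than invoking s3sync.rb with SSL enabled. The sketch below is illustrative only – the install path, source directory, bucket name, and credentials are placeholders, not the ones from my setup:

```shell
#!/bin/sh
# Hypothetical upload script for the slug; all paths and names below
# are placeholders. s3sync reads its AWS credentials and the CA
# certificate directory from environment variables.
export AWS_ACCESS_KEY_ID=yourkeyid
export AWS_SECRET_ACCESS_KEY=yoursecretkey
export SSL_CERT_DIR=/opt/s3sync/certs

# -r: recurse into subdirectories; --ssl: encrypt the transfer using
# the certificates set up earlier; --progress: report per-file progress.
ruby /opt/s3sync/s3sync.rb -r --ssl --progress /share/backups/ mybucket:backups
```

A matching download script would simply swap the source and destination arguments.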

My final step had been to set up a cron job to run the upload script, but it had failed without logging anything. In order to debug, I ran the upload script directly from the command line, and left it to run overnight, copying a large set of directories to S3.
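One lesson from the silent failure: redirect the job’s output somewhere, so the next failure at least leaves a trace. An illustrative crontab entry (the schedule and paths are placeholders) might look like:

```shell
# Run the upload at 2am daily, appending both stdout and stderr to a
# log file so that a failure under cron is no longer silent.
0 2 * * * /opt/s3sync/upload.sh >> /var/log/s3sync-upload.log 2>&1
```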

13 hours later, it had completed. From the JetS3t Cockpit, I checked how much data was present in the bucket; it told me I had 1.61Gb, split over 2774 items. This seemed a little on the low side, but I had to get back to my workstation to be sure. And there, the same program told me that I had 1.71Gb, split over 2770 items. Checking the console showed that the command had, it thought, succeeded with no errors – and yet the directory that was meant to be synced claimed to be about 4Gb in size!

A quick investigation showed that there were certainly files missing from S3. I decided to see what would happen if I ran it again – would it start uploading where it left off?

I suspect sorting this problem out may take a certain amount of poking around over a number of days, so I won’t post again in this series until I’ve found the solution.

[Update] Still hard at work on this; it looks like there’s a problem with s3sync that makes it cut out after some amount of transfer, so I’m trying to diagnose the problem – which is tricky when each experiment takes 24 hours :-/ Final results will be posted here when I have them.