Tag Archives: backup

Backup Script: A Love Story

I have written a few backup scripts by now. Every time I do I find a new interesting challenge somewhere in the task. As such I’d like to talk a little about my most recent backup script and offer my script to the community at large.

First of all, I tend to use rsync for backups. It’s powerful and it works well. You can use rsync to back up any file system, so it is also very flexible across a network or on a virtual machine. I’m not going to dive into the man page for rsync here, but you will want to take a look at it (man rsync) because there is a wealth of information about the various switches available for that command. I have selected the switches that fit my purposes, and those are what you will see below.

Next, where possible I prefer to leave my backup drives unmounted until they are actually needed for the backup process. I consider my current system for dealing with this imperfect, but again that is what you will see in my current script.

Finally, the server I have written this script for is running the desktop version of Ubuntu. This is not likely important in any way except that (as you will see) if I had been running the server version I would not likely have had the same problems when I did my test run.

Ok, so here is my current script.

# Simple Back Up Script
# by JamesIsIn from JamesIsIn.com
# Do something nice today.

# run in cron Mondays at 4 AM
# [0 4 * * 1 /home/[scriptpath] >/dev/null 2>&1]

# redirect from script --> sends all STDERR to log file

exec 2> /home/[username]/Desktop/$(date +%Y%m%d)
## exec 2> routes the script's standard error to a dated log file at the specified location; standard output is discarded by the cron entry above

## backup [DriveA]

if ! mountpoint -q /media/[BUDriveA] ; then
   mount -t [filesystem] -U [UUID goes here; no brackets or quotes] /media/[BUDriveA]
fi

# Note rsync -a copies permissions but will not copy owner:group if not run as root

rsync -ailS --delete --progress /media/[DriveA]/[Folder] /media/[BUDriveA]

umount /media/[BUDriveA]

## backup [DriveB]

if ! mountpoint -q /media/[BUDriveB] ; then
   mount -t [filesystem] -U [UUID goes here; no brackets or quotes] /media/[BUDriveB]
fi

# Note rsync -a copies permissions but will not copy owner:group if not run as root

rsync -ailS --delete --progress /media/[DriveB]/[Y] /media/[BUDriveB]
rsync -ailS --delete --progress /media/[DriveB]/[Z] /media/[BUDriveB]

umount /media/[BUDriveB]


As you can see I like to keep my scripts well documented. I encourage you to do the same. Memory is fallible, after all.

(Note: Everything in square brackets [] will be replaced in your script.)


[filesystem]

This is where you specify the file system type you are using; it is called out by the -t argument. For my script it was ext3.


[UUID]

This is a unique identifier for a drive; it is called out by the -U argument. I prefer using UUIDs because other drive or partition designations can change (for instance, sda1 can become sdb1 if you add a new drive). (For information on adding drives to your system see this post.)
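If you are not sure what a drive's UUID is, you can list them. Either of these (both present on a stock Ubuntu install) will show the mapping from device to UUID:

```shell
# Print every block device with its UUID and file system type
sudo blkid

# Or inspect the by-uuid symlinks, which point each UUID at its device node
ls -l /dev/disk/by-uuid/
```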

My backup script manages the backups for three directories (recursive) over two drives. I first test to see if a backup drive is mounted and if it is not I mount it. That’s the job of the if statements. (This is what I consider imperfect and will seek to improve as time moves forward.) It works well enough, and if you don’t want your backup drives to be mounted all the time this is a decent way of dealing with the matter.

You will also note that I unmount each drive as I finish with it (umount).

You will also see that in the DriveB example I back up (synchronize) two directories (Y & Z). This also helps to make it clear that you do not need to specify the name of the source directory at the backup location.

Lastly, you see my note about -a copying permissions but requiring root in order to also copy the owner and group information. For that reason I run this script as a cron job under root.


It’s easy enough to do. Just open root’s crontab for editing:

sudo crontab -e

You will be prompted for your password. You can learn more here. You can see my cron entry for this script in a comment in my script above. The man page for cron (man cron) will help you understand how 0 4 * * 1 means every Monday at four in the morning.
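For reference, here is how that entry's five time fields line up (the script path is a placeholder):

```shell
# minute  hour  day-of-month  month  day-of-week   command
#    0     4         *          *         1     =  04:00 every Monday
0 4 * * 1 /home/[scriptpath] >/dev/null 2>&1
```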

Let’s talk about some of the mistakes I made.

The first big snag was not having the if statements correct. I left out the UUID’s and so the script did not mount the drive for the first sync operation. The if statement tested to see if DriveA was mounted and found that it was not. Then it ran the mount command which failed because no valid drive (UUID) was specified.

Because the drive was not mounted at the mount point rsync began synchronizing data to the mount folder and not the mounted drive. This caused the script to fail (once the OS drive containing / was full—about 105 GB later) and borked the / partition.

I was not able to restart Gnome (Gnome, the desktop environment, requires some free space on your / partition to function, and mine was 100% full). I ssh’d into the machine from a Windows box nearby (using Cygwin), and maybe two hours later I had sorted out what I had done. I was able to remove (using the rm command) the offending folder after making absolutely certain DriveA was in fact not mounted. After that I was able to reboot and get back into Gnome. (However, I did have to run fsck on the drive, probably because the drive had filled completely. If you boot your system and get a shell stating you cannot log in, try running fsck and answering y to all the fixit questions.)

Oops. This is why we make test runs, right?

So I fixed the if statements (specifically the mount and umount commands) and that took care of that.

Then the script ran fine through the backup for DriveA but finished DriveB in a few seconds. Not possible. I looked back at the script and realized that I had specified the backup location for both the source and the destination. Damn it. Fixed that and DriveB was synchronizing properly.

I hope this helps you out.

(Thanks to Ian over at the Ubuntu Forums for his suggestions while I was troubleshooting.)

Happy scripting.


BAShing sudo through a Pipe

I fixed my dad’s Windows box and after getting it all set up I figured I would create a backup image based on the completed build, installations, and updates.  A nice best-of restore point, eh?

So I booted the machine into an Ubuntu 10.04 Live CD, created the necessary partition using gparted, and mounted the partition onto which I would create the backup.

I tried using dd to create the backup image, but found that my backup space was too small for the full OS partition.  To build the backup I ran this:

# dd command from drive to file

sudo dd if=/dev/hdb of=/path/to/image


Not such a problem, because I could of course pipe my dd output into gzip and compress my backup image.  Typically, if you want to compress the image, you merely pipe the dd output into gzip, like this:

# normal format for piping dd into gzip

dd if=/dev/hdb | gzip > /path/to/image.gz


The problem is that when I put sudo in front of this full command I hit a permission denied error.  This is because sudo applies only to the first command (dd) and not to any subsequent commands (in this case gzip); sudo does not flow through the pipe.  However, I pulled out my trusty BASH Cookbook and found a way to run the whole thing under sudo:

# proper way to manage using dd, gzip, and sudo

sudo bash -c 'dd if=/dev/hdb | gzip > /path/to/image.gz'


This runs a non-interactive bash shell as root for the duration of the quoted command and then returns to the interactive terminal I had been using.  Worked great.
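Another way to skin this, if you would rather not run the whole pipeline as root, is to give root only to the two ends that need it: dd for reading the raw device, and tee for the privileged write. (This is a common idiom, not the cookbook's recipe.)

```shell
# dd reads the device as root; gzip runs as the normal user;
# tee runs as root to perform the privileged write of the image file.
sudo dd if=/dev/hdb | gzip | sudo tee /path/to/image.gz > /dev/null
```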

(The drive I backed up was /dev/hdb; adjust your code accordingly.)
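And when restore day comes, the same trick works with the pipeline reversed (again assuming /dev/hdb; triple-check the target device, because dd will overwrite whatever you point it at):

```shell
# Decompress the image and stream it back onto the drive, all as root
sudo bash -c 'gunzip -c /path/to/image.gz | dd of=/dev/hdb'
```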

Thanks, O’Reilly.