Bash em Down - part 2

In our first instalment on bash scripting, we created a simple script to copy the home directory to another location. Let's expand on that script and explore some other options - specifically, how the rsync and date commands can improve our backup.

The real key to writing good bash scripts is understanding the tools available to you. Our current script uses the simple cp (copy) command. Well, cp has limitations - first, we have to tell it to recurse into subdirectories, and more importantly it will ALWAYS copy every file in the path. If your destination folder is not on the local box, or even on the local network, you'll see bandwidth issues crop up. Is there another tool that can address this problem? Of course there is, we're on Linux after all... :) There's always more than one way to do a task, and we get to pick whichever one suits us best. In this case I'm thinking of rsync, which is installed by default on, or at least packaged for, most distributions.

The command syntax for rsync is similar to the cp command - you issue the command, then give the source file/directory and the destination file/directory. The difference between the two is that rsync will only copy files that have changed. It works this out by comparing each file's size and modification time, which is why we also pass -t (preserve modification times) alongside -r. So, we might change the cp command in our original script to look like this:

rsync -rt /home /backup

As it turns out we only need to change the command itself, and suddenly we're minimizing our impact on the available bandwidth. Because of this, I like to use rsync for my backup scripts. The other big reason I like rsync is that it can sync files from or to a remote computer via ssh. We would do this with a command something like

rsync -rt me@remotebox:/home /backup

where the 'me' is the username to use to access the 'remotebox' computer.

So, now we have a nice backup of our home folder. Can we restore a file? Yes we can. Can we restore a file from last week? No, we can't. We are replacing the backup copies every time the script is run. Wouldn't it be nice if we could create directories based on a date, and recover files from any day in the past week? Or month? Enter the date command.

We can enter 'date' on the command line and get the current date spit out at us. But with some creative use of the parameters for date, we can then build a backup routine that suits our needs. Issuing the command "date '+%a'" will return an abbreviated name for the day of the week. So we could create directories with these abbreviations like so:

mkdir `date '+%a'`

Notice the quotes. The first and last quote are back-ticks (usually found above the Tab key on English keyboards). The inner quotes are regular single quotes. A back-tick pair is special - it tells Bash to replace the enclosed command with its output. So, if that command were run on a Monday, it would result in "mkdir Mon".
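As an aside, Bash also has a second, more modern spelling of command substitution, $(...), which does the same job and nests more cleanly. A quick sketch showing the two forms side by side:

```shell
# Both forms substitute the command's output in place.
today=`date '+%a'`        # back-tick form, as used in this article
also_today=$(date '+%a')  # modern form; easier to nest and to read
echo "back-ticks gave: $today"
echo "dollar-paren gave: $also_today"
```

Both lines print the same abbreviation, so feel free to use whichever you prefer in your own scripts.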

The +%a above tells date to apply a format to the date - in this case an abbreviated name for the day. Read up on the date command to learn all the fun you can have with it. (er, there might be subtle differences on different distributions in how the format parameter is issued...)
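A few format codes worth knowing as a starting point (these are standard codes, but as noted above, check the man page on your own distribution):

```shell
date '+%a'         # abbreviated weekday name, e.g. Mon
date '+%d'         # day of the month, 01-31
date '+%m'         # month number, 01-12
date '+%Y'         # four-digit year
date '+%Y-%m-%d'   # codes combine into one string, e.g. 2024-01-15
```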

So, now we have enough info to expand our backup script to be more robust:

#!/bin/bash
#copy everything in the /home folder to the /backup folder
mkdir -p /backup/`date '+%a'`
rsync -rt /home /backup/`date '+%a'`

We can get away with this approach because mkdir with the -p flag quietly does nothing if the directory already exists (plain mkdir would complain about it). So when we run this script (remember to make it executable if needed), we create a daily backup directory if it didn't already exist, and then synchronize the /home directory into the corresponding daily backup directory.
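The -p flag is what lets the script run day after day without fuss: mkdir -p succeeds silently when the directory is already there (and creates any missing parent directories along the way). A quick demonstration in a throwaway location (the path is just an example):

```shell
demo=/tmp/backup-demo            # throwaway path, purely for illustration
mkdir -p "$demo/`date '+%a'`"    # first run: creates the directory
mkdir -p "$demo/`date '+%a'`"    # second run: succeeds silently, thanks to -p
ls "$demo"                       # shows today's abbreviation, e.g. Mon
rm -r "$demo"                    # clean up the demo
```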