yeah. it’s been a while…
Tag: backup
finally backing up
almost 7 months since my last full backup. oops.
gotta remember to run time machine…
oops. it’s pretty easy to forget to plug the external drive into the laptop every now and then so it can get backed up automatically. On the desktop at work, I leave the time machine drive plugged in, so it’s always up to date. Need to be less forgetful with the home laptop, considering that’s where all of our family photos live…
Check your backups…
I just tried to navigate around in my Time Machine backup, only to find that it was woefully out of date. I’d set it to automatically back everything up, and had assumed that it would, you know, automatically back things up.
It hadn’t been doing it.
Since December 5, 2007.
Over a month ago.
And I’d assumed that I had a reliable hourly/daily/weekly/monthly backup.
Except the last month got dropped.
Frack. If you have a backup solution, check it. If you don’t have one, get one. And check it regularly. I’ve turned Time Machine back on, and will let it chew overnight to make sure I get a decent backup. And now I’ve got to figure out how in hell the Time Machine switch got thrown to “OFF” in the first place. I sure as frack didn’t do that. Time Machine has been the biggest feature of Leopard for me (or so I thought…)
Update: Just to be clear – I’m not saying there’s something wrong with Time Machine, or that there’s some nasty bug that borked things. All I’m saying is “periodically verify that your backup system is functioning as you expect it” – nothing more, nothing less…
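For what it’s worth, newer versions of Mac OS X ship a command-line tool, tmutil, that can tell you when the last backup finished (it didn’t exist back on Leopard, where this bit me). A rough sketch of a check script, assuming the backup disk is mounted:
#!/bin/sh
# Sanity-check the age of the most recent Time Machine backup.
# tmutil ships with Mac OS X 10.7 and later; the backup disk must be mounted.
MAX_AGE_DAYS=2
LATEST=`tmutil latestbackup`
echo "latest backup: $LATEST"
# Backup folder names end in a YYYY-MM-DD-HHMMSS timestamp; grab the date part.
BACKUP_DATE=`basename "$LATEST" | cut -c1-10`
CUTOFF=`date -v-${MAX_AGE_DAYS}d +%Y-%m-%d`
if expr "$BACKUP_DATE" \< "$CUTOFF" > /dev/null; then
    echo "WARNING: last backup is more than $MAX_AGE_DAYS days old"
fi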
Be Careful With rsync --delete
I’ve been using an rsync-based script to back up my iPhoto library to another computer, just in case my Powerbook blows up or something, so I don’t lose every photo I’ve taken over the last 4 years. It’s worked flawlessly, run every now and then from home to squirt changes in my Pictures directory over to my desktop at work, where I further back it up on an external drive using another rsync script.
But now that I’ve added Aperture to my work desktop, the script I had been using suddenly became quite dangerous. As in, destructively dangerous. And I hadn’t thought to check the script in a while.
So, I ran it last night. It faithfully chugged through my ‘book’s ~/Pictures directory, so I walked away as it started working.
Then, late last night, I did a quick scroll through the rsync log, and to my horror found a whole bunch of stuff like “Deleting: ~/Pictures/Aperture Library.aplibrary/...”
Holy. Crap.
I’d accidentally told my computer to nuke my Aperture library on the work desktop.
So, I cracked open the “rsyncpics” script, and lo and behold, I’d left in the --delete flag. Which wasn’t a problem – it was actually desired – when I was only using iPhoto. But since Aperture isn’t on the Powerbook, the --delete flag told rsync to nuke anything on the desktop that isn’t on the Powerbook. Like the Aperture library. Doh.
Thankfully, I also periodically back up my work desktop’s home directory to an external drive, so I had a slightly out-of-date version of the Aperture library that I could just copy back into place. But it was missing everything after Ken Ryba’s session from a week ago. I’d copied the best of the “Campus Tour” photos to my home iPhoto library, so it’s not fatal, but a good lesson learned.
The moral of the story is: be careful with rsync, especially when using --delete. I’ve learned my lesson, and have resurrected most of the lost photos. I’ve also added an Aperture Vault on the external drive. Aperture offers these backup tools, so why not use them?
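If you’re running a similar script, a dry run before letting --delete loose is cheap insurance. Something along these lines (the paths, hostname, and excluded library below are just placeholders for illustration):
#!/bin/sh
# Preview what rsync would delete before doing it for real.
# The source, destination, and excluded library here are placeholders.
SRC="$HOME/Pictures/"
DEST="user@desktop.example.com:Pictures/"

# --dry-run (-n) lists everything rsync would transfer or delete, without touching a thing.
rsync -av --delete --dry-run "$SRC" "$DEST"

# Once the output looks sane, run it for real, excluding anything that only
# lives on the destination (like an Aperture library that isn't on the laptop).
rsync -av --delete --exclude 'Aperture Library.aplibrary' "$SRC" "$DEST"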
Update: Between the various locations I’d copied files, and a healthy application of the awesome Flickr Backup utility, all photos worth saving have been restored. Whew.
Automatically backing up multiple Subversion repositories
We’re using Subversion to manage files for all of our projects in the Teaching & Learning Centre. More projects means more Subversion repositories to back up. Instead of maintaining a list of projects and repositories, we stick all repositories in a common root directory, and I’ve just put together a dead simple script to automatically dump all of them to a directory of my choosing. I’ve added this script to the crontab for the www user on the server, and it runs svnadmin dump on all repositories, gzipping the output for archiving (and possible restore).
The output is stored in a specified backup directory, which is then picked up via rsync from my desktop Mac, and copied to the external backup drive.
#!/bin/sh
# Dump every Subversion repository found under the root directory,
# gzipping each dump into the backup directory.
SVN_REPOSITORIES_ROOT_DIR="/svn_repositories/"
BACKUP_DIRECTORY="/Users/Shared/backup/svn/"

for REPOSITORY in `ls -1 "$SVN_REPOSITORIES_ROOT_DIR"`
do
    echo "dumping repository: $REPOSITORY"
    /usr/local/bin/svnadmin dump "$SVN_REPOSITORIES_ROOT_DIR$REPOSITORY" | gzip > "$BACKUP_DIRECTORY$REPOSITORY.gz"
done
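For reference, the crontab entry and a restore are both one-liners. These are just sketches – the script name, schedule, and repository paths below are made up, so use whatever matches your setup:
# Hypothetical crontab entry for the www user: dump all repositories nightly at 3:15am
15 3 * * * /Users/www/bin/backup_svn_repositories.sh > /dev/null

# Restoring a single repository from its gzipped dump into a fresh repository
/usr/local/bin/svnadmin create /svn_repositories/myproject
gunzip -c /Users/Shared/backup/svn/myproject.gz | /usr/local/bin/svnadmin load /svn_repositories/myproject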
More on MySQL backups
I’m just putting some additional refinements into my automated server backup process, and have rolled together a handy script to back up each database into its own backup file (so I can restore a single database, rather than blowing them all away to restore from an --all-databases backup).
I’m going to work on making a fancier / more dynamic script based on MySQL’s show databases command, to get all databases backed up individually without having to remember to add them to the backup script. In the meantime, here’s how I’m backing up my databases.
In a script creatively named “backup_databases.sh” – which has been added to the crontab on the server – I have this:
Update: A much better script was provided in the comments for this post. Definitely use that one rather than this one that I cobbled together. I’m leaving this script up in case it comes in handy, but have switched my servers to use the script provided by Jon.
#!/bin/sh
# Customize these variables to match what you have
MYSQL_ACCOUNT=root
MYSQL_PASSWORD=password
BACKUP_DIRECTORY="/Users/Shared/backup/mysql/"
backupdb() {
    DATABASE_NAME=$1
    FILE_NAME=${2:-$DATABASE_NAME}
    echo "dumping database " $DATABASE_NAME " to " $FILE_NAME
    /usr/bin/mysqldump -u $MYSQL_ACCOUNT -p$MYSQL_PASSWORD -q $DATABASE_NAME | gzip > $BACKUP_DIRECTORY$FILE_NAME.sql.gz
}
# add lines for each database you want to back up
backupdb "first_database"
backupdb "database2"
# keep adding databases...
# finish up by grabbing the whole works Just In Case
backupdb "--all-databases" "mysqldump"
The script has a function that is called for each database you want to back up, passing in the database name, and optionally the name of the output file. I’ll be tweaking the script over the next few days to make it more robust and flexible, but it’s a decent starting point anyway.
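For the curious, the show databases approach mentioned above could look roughly like this. It’s only a sketch, using the same credentials and paths as the script above and skipping information_schema – the script Jon posted in the comments is the one to actually use:
#!/bin/sh
# Sketch: back up every database individually, driven by `show databases`
# instead of a hard-coded list. Same account, password, and backup directory as above.
MYSQL_ACCOUNT=root
MYSQL_PASSWORD=password
BACKUP_DIRECTORY="/Users/Shared/backup/mysql/"

# -N skips the column header, -B gives plain batch output
DATABASES=`/usr/bin/mysql -u $MYSQL_ACCOUNT -p$MYSQL_PASSWORD -N -B -e 'show databases'`

for DATABASE_NAME in $DATABASES
do
    # information_schema is a virtual schema; there's nothing useful to dump
    if [ "$DATABASE_NAME" = "information_schema" ]; then
        continue
    fi
    echo "dumping database $DATABASE_NAME"
    /usr/bin/mysqldump -u $MYSQL_ACCOUNT -p$MYSQL_PASSWORD -q "$DATABASE_NAME" | gzip > "$BACKUP_DIRECTORY$DATABASE_NAME.sql.gz"
done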
Of course, if you don’t need to restore individual databases, you can simply call
mysqldump -u USER -pPASSWORD -q --all-databases | gzip > mysqldump.sql.gz
Update: A much better, more flexible, and robust script was provided by Jon in the comments for this post. I’m using that script now. Thanks!
How to back up multiple servers
Just writing down the process so I don’t forget. If anyone else gets some use out of it, that’s cool too…
Here’s how I just set up my Mac to automatically back up 2 servers, as well as my home directory, to an external firewire drive. The process uses stuff that’s included with Mac OS X, so it won’t cost a dime. And it’s automatable, so I won’t forget to run it.
Set up SSH to allow automated connection
Following these instructions, boiled down to bare essentials below. Run this stuff from the “client” machine (in my case, my desktop box in my cube) – where all data will wind up.
% ssh-keygen -t dsa
% scp ~/.ssh/id_dsa.pub [USERNAME]@[SERVER]:~/.ssh/authorized_keys2
repeat the scp step for each server
This will allow your account on that client machine to SSH, rsync and SCP without being prompted for a password every time, making it possible to automate the process.
Create a shell script to automate backups
The script uses rsync to copy directories from the server(s) to a local volume (preferably an external drive). I created a file at ~/bin/backup_servers.sh, but it can live anywhere. Replace [SERVER] with the IP/domain of the server to be backed up. Replace [DIRECTORY] with the directory on the server to be backed up (could be something like /Library/WebServer). Replace [DEST_DIRECTORY] with the directory that will contain the backup (could be something like /Volumes/BackupDrive/[SERVER]/[DIRECTORY]).
#!/bin/sh
echo "Backing up [SERVER] [DIRECTORY] directory"
rsync -rtlzv --ignore-errors -e ssh [USERNAME]@[SERVER]:[DIRECTORY] [DEST_DIRECTORY] > [DEST_DIRECTORY]/backup.log
Tweak directories as needed, but this should create a backup (without deleting missing files) of the server directory to the external hard drive. If you want to back up more than one server, or more than one directory, just keep repeating the echo/rsync lines as needed (changing values for each server/directory, of course). I have 5 entries in my script, copying a couple of directories from 2 servers, and also backing up my home directory on my Desktop machine.
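If repeating those lines gets tedious, the same thing can be done as a loop over the sources. A rough sketch – the server names and paths below are placeholders:
#!/bin/sh
# Sketch: back up several server directories in one loop instead of repeating
# the echo/rsync pair. The server names and paths here are placeholders.
BACKUP_ROOT="/Volumes/BackupDrive"

for SOURCE in \
    "me@server1.example.com:/Library/WebServer" \
    "me@server2.example.com:/var/www"
do
    # Pull the server name out of user@server:/path to build the destination directory
    SERVER=`echo $SOURCE | cut -d@ -f2 | cut -d: -f1`
    DEST="$BACKUP_ROOT/$SERVER"
    mkdir -p "$DEST"
    echo "Backing up $SOURCE to $DEST"
    rsync -rtlzv --ignore-errors -e ssh "$SOURCE" "$DEST" > "$DEST/backup.log"
done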
Automate it via cron
I have my cron tasks defined in a file at ~/mycrontab, so I just added this to the end of that file:
30 4 * * * ~/bin/backup_servers.sh > /dev/null
So now, every morning at 4:30AM, the servers and directories that I’ve specified in ~/bin/backup_servers.sh will get backed up to my external firewire drive (assuming it’s on and mounted). I’m sure I could be doing fancier things to make this process even smoother, but it seems to be working fine now.
At the moment, I have stuff backed up from the 2 servers, and the important stuff from my Powerbook (iPhoto library, iTunes library, Documents, etc…) gets copied to the desktop, where it gets backed up automatically to the external drive.
LaCie Big Disk Extreme back in action
I got the LaCie Big Disk Extreme 500GB backup drive back today. LaCie is suggesting it was just a firmware issue, so they thoughtfully updated the drive, nuked the data that was on it, and returned it. Over a month after it died. Going a month without any kind of backups is a little scary. Here’s hoping it was just a firmware issue – the drive checks out OK now, and already has over 100GB of backed up data on it (and growing – likely will be close to 200GB by the end of the day) – and that’s only 2 servers backed up…
Update: A few hours of unattended backing up later, and I’ve already burned through 214.79GB of space on the drive – including a full Carbon Copy Cloner image of my laptop’s drive, and all critical files from the two servers that I care about (and can still access). Whew!
LaCie Big Disk Extreme Failure
Well, not sure if it’s an “extreme failure” or just a failure of a LaCie Big Disk Extreme. Either way, my shiny new 500GB backup drive decided to go on strike yesterday. Maybe the most depressing day of the year got to it. Maybe it thought, as a 500GB drive, that it should have been doing more exciting things than just backing up my crap.
I was copying over the latest MySQL dumps from our production servers, and got an interesting error – “write error” – meaning, of course, that the file could not be copied to the BDE. Mwaaaaah? So I fire up Disk Utility, and it greets me with a screen full of disk errors.
Great. I grab our copy of Disk Warrior, hoping to throw its advanced repair ninja mojo at the problem. No joy. It keeps finding errors, and complaining that it’s running slowly because of a disk error. Great. So, I decide to reformat the drive to start fresh. Blowing away 110GB of backups to try to fix the drive that’s supposed to be saving me this grief. I make 2 new partitions – one for a backup system, and one for data. I tell Disk Utility to verify each new partition. It chokes on the first one.
At this point, I’m guessing it’s a hardware failure, so I pack up the drive to be replaced. Now I’m sitting here without backups, hoping that Nothing Bad Happens™ before the drive is replaced and I have a chance to rebuild the backups… To add insult to injury, the SMART status on my desktop’s system drive temporarily flickered into “DRIVE FAILING” status, then returned to OK.
Update: I’m turning comments off for this post. I think it’s just aggregating unhappy customers, rather than fairly representing what LaCie products are like in the real world. And my blog isn’t a therapy group for disgruntled consumers. LaCie fixed my drive very quickly, and it’s been running as expected ever since. Give their support folks a shot – they did the right thing by me. That should be more productive than ranting into the night about a failed hard drive.