More on MySQL backups

I’m just putting some additional refinements into my automated server backup process, and have rolled together a handy script to back up each database into its own backup file (so I can restore a single database, rather than blowing them all away to restore from an --all-databases backup).

I’m going to work on making a fancier / more dynamic script based on MySQL’s show databases command to get all databases backed up individually without having to remember to add them to the backup script. In the meantime, here’s how I’m backing up my databases.
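As a rough sketch of that dynamic idea (untested here, and assuming the same account/password/directory variables as the script below; the mysql client’s -N flag suppresses the column header so the loop sees only database names):

```shell
#!/bin/sh
# Sketch: dump every database reported by `show databases`
# into its own gzipped file, no hand-maintained list needed.
MYSQL_ACCOUNT=root
MYSQL_PASSWORD=password
BACKUP_DIRECTORY="/Users/Shared/backup/mysql/"

for DB in $(mysql -u "$MYSQL_ACCOUNT" -p"$MYSQL_PASSWORD" -N -e 'show databases')
do
    mysqldump -u "$MYSQL_ACCOUNT" -p"$MYSQL_PASSWORD" -q "$DB" \
        | gzip > "$BACKUP_DIRECTORY$DB.sql.gz"
done
```

This needs a live MySQL server to do anything useful, so treat it as a starting point rather than a drop-in replacement.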

In a script creatively named “backup_databases.sh” – which has been added to the crontab on the server – I have this:
Update: A much better script was provided in the comments for this post. Definitely use that one rather than this one that I cobbled together. I’m leaving this script up in case it comes in handy, but have switched my servers to use the script provided by Jon.

#!/bin/sh
# Customize these variables to match what you have
MYSQL_ACCOUNT=root
MYSQL_PASSWORD=password
BACKUP_DIRECTORY="/Users/Shared/backup/mysql/"

backupdb() {
    DATABASE_NAME=$1
    FILE_NAME=${2:-$DATABASE_NAME}
    echo "dumping database $DATABASE_NAME to $FILE_NAME"
    /usr/bin/mysqldump -u "$MYSQL_ACCOUNT" -p"$MYSQL_PASSWORD" -q "$DATABASE_NAME" | gzip > "$BACKUP_DIRECTORY$FILE_NAME.sql.gz"
}

# add lines for each database you want to back up
backupdb "first_database"
backupdb "database2"

# keep adding databases...

# finish up by grabbing the whole works Just In Case
backupdb "--all-databases" "mysqldump"

The script has a function that is called for each database you want to back up, passing in the database name, and optionally the name of the output file. I’ll be tweaking the script over the next few days to make it more robust and flexible, but it’s a decent starting point anyway.
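Restoring a single database from one of these per-database dumps is then just the reverse pipe (the database name and path here are examples; adjust to your own, and note that -p will prompt for the password):

```shell
# load a gzipped per-database dump back into the server
gunzip -c /Users/Shared/backup/mysql/first_database.sql.gz \
    | mysql -u root -p first_database
```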

Of course, if you don’t need to restore individual databases, you can simply call
mysqldump -u USER -pPASSWORD -q --all-databases | gzip > mysqldump.sql.gz
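For reference, the crontab entry driving the backup script might look like this (the path is an assumption; point it at wherever the script actually lives):

```shell
# run the database backup script every night at 2:00 a.m.
0 2 * * * /usr/local/bin/backup_databases.sh
```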

Update: A much better, more flexible, and robust script was provided by Jon in the comments for this post. I’m using that script now. Thanks!


8 thoughts on “More on MySQL backups”

  1. Cornelius – not sure if MySQL can verify the dumps. I stopped playing with FB a couple of years ago – can’t justify the cost to get access to the full set of DB tools – so I can’t compare.

    A quick Google didn’t turn up anything on verifying MySQL database dumps, but that doesn’t mean it’s not possible…

  2. Hi Guys,

    Nice scripts.
    I mostly use FrontBase, and it lets you check its catalog file for the validity / integrity of the dump. So I basically make a backup of the db to a file in a directory nonverifiedbackups, then run FBListContents on the file. If it is invalid, an email is sent to the admin; if it isn’t, it is moved to a directory verified backups.
    I’m not sure how to make the code frame above; I’ll send the script on request.

  3. […] Update: One of the things I’m loving about hosting at Dreamhost is that backups are trivial. I have a script in my hosted account that I can trigger via SSH, and it will rsync my entire home directory (including all files needed to host my domain and any subdomains) to my desktop box on campus for backup. And, because rsync is so cool, it only takes a few seconds, since only changed files are sent. So, I can make sure all of my 300+MB of files are backed up in about 10 seconds. Add a scripted mysqldump into the mix, and all data backups are up to date as well. […]

  4. D’Arcy,

    Here is what I run on my servers:

    #!/bin/sh
    # mysqlback.sh -- do mysql database dumps
    #########################
    # write the db-list query
    # to a temp file
    #########################
    cat <<EOF > tmpsql$$
    show databases;
    EOF
    /usr/local/mysql/bin/mysql -u root --password=******* < tmpsql$$ | sed '/Database/d' > dblist$$
    #########################
    # back each one up in turn;
    # --opt locks tables during
    # backup, so no need to shut
    # down mysql; but access will
    # be denied during backup
    #
    # we also pipe through gzip
    #########################
    for DB in `cat dblist$$`
    do /usr/local/mysql/bin/mysqldump -u root --password=******** --opt --flush-logs $DB | gzip > $DB.`date '+%m.%d.%y'`.gz
    done
    ###################
    # empty the logfile
    ###################
    rm /usr/local/mysql/logs/mysql.log.*
    cat /dev/null > /usr/local/mysql/logs/mysql.log

    #######
    # email
    #######
    ls -l $BACKDIR | /usr/bin/mail -s "MySQL Backup Completed" user@domain
    ##########
    # clean up
    ##########
    rm dblist$$ tmpsql$$

    Not sure how it compares.

  5. Oh, sure. NOW I see your script 🙂 I’ll totally be ripping this sucker off first thing in the morning. Thanks! The part where you talk with mysql to get the db list would have stumped me for longer than I’d care to admit…

    (comment edited to fix silly qwerty typos. must change all layouts to dvorak…)

  6. Glad I could help. I found it in a google search and modified it a bit. Mr. Evans should get the credit.

    On my Xserve I run this script along with a script to tar the WebServer directory. The script then tars the MySQL backup directory and sftps both tar files to a server which is backed up to a SAN. I think you discovered the sftp using key pairs in an earlier post. I’d known about that for a while. It was on O’Reilly a few years ago (well, the key pair thing for SSH was), and that is how I have secured SSH on my servers. I actually require a password to use the key when doing SSH, so that no one could log in even if they got my laptop (which is the only machine that can log in to the server).

    If there is anything else you’re trying to do, let me know. I may have a few more tricks up my sleeve (mostly borrowed from scripts found by googling or on OS X Hints).

  7. Jon – serious thanks for that script. MUCH better than mine. I’ve deployed trivially tweaked versions of it to both servers, and it appears to work perfectly. Love the 7-day backup snapshots as well.

Comments are closed.