how to repair all tables in all databases on a mysql server

This comes in handy, and I have to google it every time I need it.[1][2] So, here’s a copy for reference later…

mysqlcheck --repair --use-frm --all-databases

Run it as root, with MySQL running. It’ll repair every table in every database. Give it time to chew for a while. It spews out the status of every table as it works. Here’s what it found with my Fever˚ database tables (which now work just fine):

dnorman_fever.fever__config
warning  : Number of rows changed from 0 to 1
status   : OK
dnorman_fever.fever_favicons
warning  : Number of rows changed from 0 to 408
status   : OK
dnorman_fever.fever_feeds
warning  : Number of rows changed from 0 to 240
status   : OK
dnorman_fever.fever_feeds_groups
warning  : Number of rows changed from 0 to 305
status   : OK
dnorman_fever.fever_groups
warning  : Number of rows changed from 0 to 17
status   : OK
dnorman_fever.fever_items
warning  : Number of rows changed from 0 to 13660
status   : OK
dnorman_fever.fever_links
warning  : Number of rows changed from 0 to 46208
status   : OK

Better.

Looks like it doesn’t like InnoDB tables, throwing this:

note     : The storage engine for the table doesn't support repair

So, if you’re using MyISAM tables, this should do the trick. Not sure how to fix the InnoDB tables, or if they even need fixing…
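If you’re not sure which tables are even candidates for repair, something like this should list the storage engine for each one (a quick sketch, assuming a MySQL version with information_schema – 5.0 or later):

# List each table and its storage engine – mysqlcheck --repair only helps the MyISAM ones
mysql -u root -p -e "SELECT table_schema, table_name, engine FROM information_schema.tables WHERE table_schema NOT IN ('information_schema', 'mysql');"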

  [1] usually coming up with the top-voted answer for a question posted to stackoverflow.com
  [2] actually, I use DuckDuckGo, so I get the tip inline in the search results…

Hippie Hosting server now has room to grow

I’d been getting nervous, seeing the storage on our Hippie Hosting Co-op server filling up. We were over 80% full, with less than 18GB left until we were in serious trouble. So, I did some digging. I was getting ready to start deleting some of the bigger video files in my web hosting account, to make space. Turns out, that wasn’t necessary.

I use this to find directories that have lots of stuff in them:

du -Psckx * | sort -nr

For bonus marks, run that as root. For extra-special bonus marks, set it as an alias – I have this in my .bash_profile on several servers:

alias dus='du -Psckx * | sort -nr'

Start where you suspect trouble (I started in the /var/www/vhosts directory, thinking one of the stinking hippies was filling the place up with their free love. Turns out, the hippies are only using just over 30GB of space on a 100GB volume. No problem.) So, I moved to the root directory / and tried it there. /var was the biggest directory, so I moved into it and ran it from there. Two big directories in it – /var/lib and /var/www. I’d already checked out /var/www (where /var/www/vhosts is), so I popped into /var/lib and ran it again. /var/lib/mysql. Metric buttloads of files in there. Ruh roh.

I took a look, and saw LOTS of BIG mysql-bin files in there. Some quick poking around the great MediaTemple documentation site, and I came across this piece on mysql binary logging. It’s used for replication and disaster recovery. We don’t replicate the server, and aren’t using the binary logs for disaster recovery. That’s what backups are for. I’ve had to disable binary logging on servers before, so it wasn’t a surprise. Well, I was surprised that it was enabled by default, but yeah…

The reporting command described on the MediaTemple documentation site dumped this:

MySQL binlog consuming 34.98 Gigabytes of disk space

35 GB. On a 100GB drive. When the next step up in server specs is about triple the monthly cost. Yeah… We don’t need binary logs that badly…

So I disabled binary logging, restarted the mysql server, and nuked the binary log files. Hey, presto! We’re now back under 50% of storage space used, with LOTS of room to grow. Awesome.
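For reference, the sequence looks roughly like this (a sketch only – the config file location, the log-bin setting name, and the service name vary by distribution; purging before disabling lets MySQL clean up after itself rather than having the files rm’d out from under it):

# 1. While binary logging is still enabled, have MySQL throw away the old logs itself
mysql -u root -p -e "PURGE BINARY LOGS BEFORE NOW();"
# 2. Comment out the binary logging line in the MySQL config (e.g. /etc/my.cnf):
#      # log-bin=mysql-bin
# 3. Restart MySQL so logging stays off
/etc/init.d/mysqld restart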

Enterprise-Class WordPress

I’d been thinking that WordPress might be tricky to scale, but between WP-Cache and the newly announced HyperDB, I think WP might well have some legs.

WP-Cache stores pages as static files, and dramatically reduces the load on the database. This makes sites more responsive, and at least theoretically able to survive a Slashdotting or Digging.

Matt just announced the other side of the equation. Enterprise-level database connectivity. They’re releasing the (previously custom) database class that was developed for WordPress.com. It obviously works, as WordPress.com has something like 47 quajillion blogs hosted, with pretty decent performance.

Matt’s notes on the HyperDB release describe features including:

  • Replication
  • Failover
  • Redundant (public/private) networks
  • Local and remote datacenters
  • Partitioning
  • Different tables on different DBs
  • Advanced stats for profiling
  • More…?

So, it supports spreading databases across a bunch of servers, making it easier to set up server clusters to scale WordPress (and WPMU) up to any level you want. Might be handy for, oh, I don’t know… an institutional blogging platform?

WordPress just got a bunch more interesting, from a CMS perspective…

How to migrate from Drupal 5 to WordPress 2

I migrated my blog from Drupal 5 to WordPress 2 nearly 2 weeks ago. The process wasn’t as painful as I thought it would be, thanks to a handy howto via vrypan.net. Another resource I refer to every time I get into tweaking MySQL rows is UrbanMainframe’s MySQL search and replace tipsheet. Thanks to both of these great resources for helping me through the migration.

This guide is intended only to document what I did. It’s not a polished howto or manual. There is no warranty. If you blow up your database because you didn’t work with offline backup copies, I won’t be able to help you. Actually, if you’re that silly, I won’t be willing to help you, either. Your mileage may vary.

Use a backup copy of your Drupal database, and a fresh WordPress database.

The basic process I followed was:

  1. Work from offline copies of the databases. Dump your Drupal 5 database, import it into a fresh database (say “drupalmigration” or something creative). I did the migration on a separate machine from my “live” server – I used my desktop box, with local copies of MySQL and Apache. Anything with a decent version of MySQL on it will do.
  2. Install WordPress (you don’t need to install Drupal for this migration), using the same database server that contains the copy of your Drupal database (use “wordpress” as the name of the new WordPress database).
  3. Now that you’ve got WordPress and Drupal running on the same database server, in separate databases, run this MySQL script, which I modified slightly from the file provided by vrypan.net. The script assumes the WordPress database is called “wordpress” and the Drupal database is called “drupalmigration” – feel free to modify the script to match the database names you need to use, if they differ.
  4. That should do it. Log into your WordPress site. You might have to hand tweak the usernames, but all posts and comments should be there…

One additional thing I had to do was fix the comment_ID values in the wp_comments table. After migration, they were too big for the data type, and things went poopy. I’m sure there’s an elegant way to renumber rows in MySQL. I used brute force, by dumping the table to a .sql file and opening that in a text editor to do a search-and-replace to lower the numbers used as primary keys. I then moved the old table out of the way (renaming it to “wp_comments_old”) and imported the new wp_comments table definition and content. It was a funky thing to have to do, but it solved all kinds of comment-related misbehaviour.
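If you hit the same thing, an SQL-level alternative might save you the text-editor step (a hedged sketch – I didn’t test this against this migration, and the column definition should match whatever your WordPress version ships with):

# Widen the column so the migrated comment IDs fit as-is
mysql -u root -p wordpress -e "ALTER TABLE wp_comments MODIFY comment_ID bigint(20) unsigned NOT NULL AUTO_INCREMENT;"
# (Renumbering from 1 is also possible – e.g. by dropping and re-adding the
# auto-increment column – but that throws away the old comment_ID values.)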

Test things out in the migrated database. If all seems well, go ahead and dump the wordpress database to a .sql file, and import it into a new database on your server. Install and configure WordPress on your server to use this new database. You may need to manually change the URL used by the blog, so that it matches the “live” server rather than whatever you used as a staging/migration server. The values are in the wp_options table, with option_name of “siteurl” and “home” – change those values to whatever matches the root URL for your blog.
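That change is a one-liner if you’d rather do it in SQL (assuming the default wp_ table prefix, a database named “wordpress”, and http://example.com standing in for the real address):

# Point the migrated blog at its new home – substitute the real URL
mysql -u root -p wordpress -e "UPDATE wp_options SET option_value = 'http://example.com' WHERE option_name IN ('siteurl', 'home');"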

You’ll also have to make sure all files are in the proper place, so URL references from the old Drupal content don’t point off to 404 error pages. That’s an exercise left to the reader. I just SSHed into my server and used a lot of cp -R drupal/directoryname wordpress/directoryname – being paranoidly careful to copy files rather than just moving them. Always keep backups.

Update: I’ve updated the SQL script to automatically set the comment and category counts, so it should appear to work better now.

Help – Slow MySQL Insert?

I’ve been struggling with this for what feels like months. On a project, we’re using a third party hosting provider, who offers us space on a managed server, complete with everything we need to run Drupal in a shared hosting environment. We’re running a copy of Provisionator on the server to help us deploy lots of Drupal sites easily.

Here’s where it gets messy. We can create new databases just fine, but importing a .sql file takes for freaking ever. Imports that take 3 seconds on my Powerbook can take 90 – 300 seconds on this server. Running the import on a dual G5 XServe with the same version of MySQL finishes the job in about a second.

I’ve tried removing variables from the equation. Using full-on Provisionator. Using a separate custom .php script. Using phpMyAdmin. Each takes so long that browsers often time out before completing the import (leaving partially imported databases). I’ve tried command-line mysql directly on the server, with the same range of very slow times.

I can’t seem to find anything that might make imports of a smallish .sql (~400K) file take so long. Unfortunately, it’s making the process of deploying new sites essentially useless for now.

The curious thing is that once a database has (eventually) been populated, the select queries seem to run at normalish speed.

Any ideas on how to get the server back up to speed? It’s running MySQL 4.1.20 with MyISAM tables on a RedHat Linux box. I don’t have access to tools like top, ps, netstat, or even env, so exact details of the box’s config are shrouded in a veil of mystery and obscurity.
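For what it’s worth, this is roughly the kind of poking I’d try next, privileges permitting (a sketch – USER, DATABASE, and site_dump.sql are placeholders, and a shared host may block some of these):

# Time a raw command-line import, to take the web stack out of the equation
time mysql -u USER -p DATABASE < site_dump.sql
# See whether anything else is hammering the server while an import runs
mysql -u USER -p -e "SHOW FULL PROCESSLIST;"
# Check a couple of server settings that affect MyISAM insert speed
mysql -u USER -p -e "SHOW VARIABLES LIKE 'key_buffer_size'; SHOW VARIABLES LIKE 'bulk_insert_buffer_size';"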

Estimating blog feed subscribers in Drupal

Guesstimating the size of an RSS feed audience is always a huge shot in the dark, but sometimes I get curious about how many people subscribe to this silly blog. If I was willing to surrender my feeds to Feedburner, I could get some pretty detailed stats. But, I don’t want to hand that over.

So, I thought about digging into the accesslog that’s stored in Drupal’s database. I’ve set my copy to store access logs for the past 2 weeks, and it dutifully records which pages are viewed, as well as the IP address the request came from. It’s just a subset of a typical webserver log, so there isn’t any privacy issue here (if you’re really worried about being tracked online, you’re already using an anonymizing proxy…)

A quick MySQL poke-and-test session, and I came up with a quick query that spits out a WAG about the number of feed subscribers. It’s not accurate, because people might be accessing the feed from multiple locations (recording different IP addresses), and services like Bloglines might be sending many readers in under the cover of a single IP address (at the moment, Bloglines claims 334 folks are subscribing to various feeds published by this blog). Also, it makes no distinction between bots (Google and friends) and actual human-proxying-agents. Whatever.

Here’s my brain-dead-simple query. It just looks for all requests for feed-related paths, and counts up the number of unique IP addresses.

select count(distinct hostname) from accesslog where path like '%/feed%'

YMMV. IANAP. YHBH. Wait. Not the last one.

According to my Drupal log, there have been 955 unique IP addresses requesting the RSS/atom feeds on this blog in the last 2 weeks. That may be higher or lower than the number of actual humans reading the blog. Still, ballpark order-of-magnitude WAG at roughly 1,000. That kind of boggles my mind.

Update: Oops. My rudimentary query forgot subscribers of “rss.xml” – which turned out to be more than the various /feed subscribers! Also, thanks to a tip from Bér Kessels, I added the cool XStatistics module, which takes care of the guesswork. According to it, there are 2062 subscribers to the various feeds on this blog. Wow.
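For the record, a version of the query that catches the rss.xml subscribers too would look something like this (same caveats about bots and shared IP addresses apply):

select count(distinct hostname) from accesslog where path like '%/feed%' or path like '%rss.xml%'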

Upgrading MySQL on MacOSX Server

I just upgraded our TLC development/staging/small-deployment server from MySQL 4.0 to 5.0.22. I'd never upgraded a MySQL server before (always just installed a fresh copy on a new box, or updated along with MacOSX) so I wanted to do some testing before making the plunge on a deployment server. We've got a bunch of databases on that box, running everything from weblogs.ucalgary.ca to some custom apps written here.

I did a quick RTFM, but the MySQL manual recommended not jumping right from 4.0 to 5.0 using the normal upgrade process. It said to go to 4.1 first, then to 5.0. So, I did some more Googling, and realized that a full mysqldump and restore would do the trick, without requiring the milk run through 4.1.

I installed a fresh copy of MySQL 5.0.x on my desktop box, did a full mysqldump from the server using:

mysqldump -u root -p --opt -Q --add-drop-table --all-databases > mysqldump.sql

That gave me a 400+MB SQL file, handy for a backup, or for migrating between copies of MySQL. I copied the file to my desktop, and ran this:

mysql -u root -p < mysqldump.sql

mysqladmin -u root -p flush-privileges

And, it Just Worked™ – so the data should safely migrate this way. I did some testing with apps – Drupal, etc. – and things behaved as expected. It takes longer to dump/restore than just updating the MySQL server, but it's safe. And all database users and privileges were intact, so everything should hook up just fine.

Then, I did the same on the server. I killed MySQL, which was still running version 4.0. I ran the installer (both MySQL 5.0 and the StartupItem) via SSH (sudo installer…) and it took care of setting stuff up. But, the old MySQL didn't shut down cleanly, so I had to do the unthinkable – reboot the box. I know there's a cleaner, more "official" way to clean stuff up, but the box had been up for a couple of months, so a reboot wasn't the end of the world.

When the server came back up, MySQL was running as expected. Then it was simple. First, I told my shell to use the new copy of MySQL, then imported the data:

alias mysql=/usr/local/mysql/bin/mysql

alias mysqladmin=/usr/local/mysql/bin/mysqladmin

mysql -u root -p < mysqldump.sql

mysqladmin -u root -p flush-privileges

And all of our apps are up and running on a shiny new copy of MySQL. The previous version of MySQL is still on the server (but not running) with all data still safe in its own directory, so if something catastrophic happens, reverting will be trivial.
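As a final sanity check after a jump like this, something along these lines is probably worth running (a sketch – --check-upgrade needs the MySQL 5.0 client tools):

# Confirm the server really is running the new version
mysql -u root -p -e "SELECT VERSION();"
# Flag any tables that need attention after the version jump
mysqlcheck --check-upgrade --all-databases -u root -p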

Now, to upgrade my copy of Moodle to 1.6, which is what prompted the whole MySQL upgrade process in the first place…

More on MySQL backups

I’m just putting some additional refinements to my automated server backup process, and have rolled together a handy script to back up each database into its own backup file (so I can restore a single database, rather than blowing them all away to restore from an --all-databases backup).

I’m going to work on making a fancier / more dynamic script based on MySQL’s show databases command, to get all databases backed up individually without having to remember to add them to the backup script (a rough sketch of that approach follows the script below). In the meantime, here’s how I’m backing up my databases.

In a script creatively named “backup_databases.sh” – which has been added to the crontab on the server – I have this:
Update: A much better script was provided in the comments for this post. Definitely use that one rather than this one that I cobbled together. I’m leaving this script up in case it comes in handy, but have switched my servers to use the script provided by Jon.

#!/bin/sh
# Customize these variables to match what you have
MYSQL_ACCOUNT=root
MYSQL_PASSWORD=password
BACKUP_DIRECTORY="/Users/Shared/backup/mysql/"

backupdb() {
    DATABASE_NAME=$1
    FILE_NAME=${2:-$DATABASE_NAME}
    echo "dumping database " $DATABASE_NAME " to " $FILE_NAME
    /usr/bin/mysqldump -u $MYSQL_ACCOUNT -p$MYSQL_PASSWORD -q $DATABASE_NAME | gzip > $BACKUP_DIRECTORY$FILE_NAME.sql.gz
}

# add lines for each database you want to back up
backupdb "first_database"
backupdb "database2"

# keep adding databases...

# finish up by grabbing the whole works Just In Case
backupdb "--all-databases" "mysqldump" 

The script has a function that is called for each database you want to back up, passing in the database name, and optionally the name of the output file. I’ll be tweaking the script over the next few days to make it more robust and flexible, but it’s a decent starting point anyway.
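In the meantime, here’s roughly what that show databases-driven version could look like (an untested sketch using the same credential variables as above – you’ll probably want to filter out any system databases you don’t care about):

#!/bin/sh
# Back up every database individually, driven by SHOW DATABASES
MYSQL_ACCOUNT=root
MYSQL_PASSWORD=password
BACKUP_DIRECTORY="/Users/Shared/backup/mysql/"

for DB in $(mysql -u $MYSQL_ACCOUNT -p$MYSQL_PASSWORD --skip-column-names -e "SHOW DATABASES"); do
    echo "dumping database $DB"
    /usr/bin/mysqldump -u $MYSQL_ACCOUNT -p$MYSQL_PASSWORD -q "$DB" | gzip > "$BACKUP_DIRECTORY$DB.sql.gz"
done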

Of course, if you don’t need to restore individual databases, you can simply call
mysqldump -u USER -pPASSWORD -q --all-databases | gzip > mysqldump.sql.gz

Update: A much better, more flexible, and robust script was provided by Jon in the comments for this post. I’m using that script now. Thanks!

B2 to WordPress Migration is Not Smooth

A friend of mine wanted to migrate from an aging B2 installation to a shiny new WordPress setup. I figured it should be a simple process, given that WP was spawned from B2’s loins and all. It should have been relatively trivial – tweak some database fields, massage some data, and done. Not so fast, smartass.

Turns out that all of the tips I found were, well, a bit short. They either didn’t work at all, or sorta worked, but not enough to be useful. Crap. So, I started eyeballing the B2 schema. Turns out the biggest difference between the B2 tables and the WP tables is – wait for it – the table names. Aside from that, it’s some trivial stuff like changing int into bigint, etc…
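To give a flavour of the massaging involved, the statements look roughly like this (hypothetical example only – the real table and column names depend on your B2 and WP versions):

RENAME TABLE b2posts TO wp_posts;
ALTER TABLE wp_posts MODIFY ID bigint(20) unsigned NOT NULL AUTO_INCREMENT;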

So, here’s the process I followed. Update – this didn’t work, after all… read down to the bottom for the steps that did work.

  1. Install WordPress on my Powerbook. Create a new database for it.
  2. Import the B2 database into a new database on my Powerbook.
  3. Massage the imported B2 database – change table names to match WP, and make all fields match what WP expects. I used CocoaMySQL because it rocks so much.
  4. Dump the data from the imported/modified database.
  5. Import the mysqldump of massaged B2 data into the shiny new WP database.
  6. Verify that it’s working in WP.
  7. Mysqldump the shiny new WP database, and import it into the database server used by my friend.
  8. Cross fingers and hope it works fine on his server…

Now, why on earth doesn’t the stock “import-b2.php” file included with WordPress just take care of this? I tried it, and it barfed miserably. I tried several other documented techniques, and they barfed equally violently. Manual tweakage appears to have worked.

Hopefully, he’ll have the blog up and running by the end of the weekend. At least he’ll be spared manual copy-and-paste operations to import the nearly 90 posts in his database, as well as the comments…

Update: OK. The manual tweak basically didn’t work after all. I ended up starting over from scratch (again), following the instructions here, followed by an additional call to upgrade.php, and it appears all is well.

So, the real B2–>WP migration process that worked here is:

  1. Back up the B2 tables into a fresh database – call it something creative like “b2migration”.
  2. Install WordPress, using the “b2migration” database to store the automatically-created WordPress tables.
  3. Follow the instructions, essentially running this on the “b2migration” database:
    DROP TABLE wp_categories;
    DROP TABLE wp_comments;
    DROP TABLE wp_posts;
    DROP TABLE wp_settings;
    DROP TABLE wp_users;
    RENAME TABLE b2categories TO wp_categories;
    RENAME TABLE b2comments TO wp_comments;
    RENAME TABLE b2posts TO wp_posts;
    RENAME TABLE b2settings TO wp_settings;
    RENAME TABLE b2users TO wp_users;
  4. Run /wp-admin/import-b2.php
  5. Run /wp-admin/upgrade.php
  6. Dump the tables and move them wherever you need them – you didn’t do this on your live database, did you?

Godaddy Database Goofup

So, it’s 4:50pm Friday afternoon. I’m about to click “Publish” on a post about the random image rotator dealie. Boom. WordPress throws up a big old “Error establishing a database connection” error screen. Crap.

I log in to my GoDaddy account page, hit the database manager, and phpMyAdmin can connect. The database is there, and running. WTF. I notice my ShortStat table has ballooned to over 100MB of data. That’s insane, so I truncate it. I’ll remove the plugin. I check again – maybe I was over my DB quota or something – and it’s still no joy from WP. I try Referrer Karma – it uses the same MySQL database, and it throws a scarier – but more helpful – error message: “Warning: mysql_connect(): User ‘dnorman’ has exceeded the ‘max_connections’ resource (current value: 50)”

Ahah! Something wrong with the database server. But… I haven’t added any new apps or anything, and traffic is relatively low (compared to, say, the week of WWDC), so my blog shouldn’t be crippling the database server.
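If the host allowed it, this is the sort of thing that would confirm whether connections really were piling up (a sketch – without the PROCESS privilege, a shared-hosting account will only see its own threads):

# How many connections does the server allow, and how many are open right now?
mysql -u dnorman -p -e "SHOW VARIABLES LIKE 'max_connections'; SHOW STATUS LIKE 'Threads_connected';"
# And what are those connections doing?
mysql -u dnorman -p -e "SHOW PROCESSLIST;"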

I decide to go to the GoDaddy support page – they’ll have a “Status” section that indicates that Apache is up, MySQL is up, etc… Right? No. OK, so I fill in the email form, even though it says they won’t be able to respond to email for an estimated 10 hours. About 1 minute after clicking “Send”, I break into a cold sweat.

My blog is dead in the water. So, I call their phone support line. In the back of my head, I’m thinking “So, this is how they keep their rates so low – I’ll get gouged on long distance support, since they don’t offer a toll-free line.” At least it isn’t a 1-900 number or anything. I get connected, pass through voicemail menu hell (why does it ask me to type in my account number if the person who eventually picks up will just be asking me to repeat it anyway?). I eventually get through, and the support tech is nice enough. He doesn’t know what’s wrong, but would I mind if he put me on hold while he goes to talk to someone with a clue? Sure. The hold music doesn’t suck.

13 minutes on the phone, long distance, to GoDaddy tech support. Only to hear “oh, yeah. that’s a known issue. They’ve got their wrenches and hammers out, and are working on it. It should only be a couple of hours. Uh, yeah, your data is probably safe. Can I send you an email survey so you can let us know you were satisfied with this call?”

3 hours later, and it looks like things are back up and running. How about sending me a survey about that?
