For a very long time, fixing MySQL slaves was a tedious task. Slaves died, replication failed, tables got out of sync, and you had to re-import all the (consistent) data from the master database. Depending on the importance of the slave, this could mean downtime for your master while exporting.
mysqldump is usually the tool used for this task, locking the tables while exporting. For me, having a database read-only is downtime. However you look at it, if your website is not 100% usable, it is experiencing downtime.
I came across Percona’s xtrabackup utility a while back and wanted to share it, because it has saved me a lot of trouble and downtime.
Xtrabackup is basically a hot-backup utility. It performs a backup of a running MySQL server without blocking anything, using the multi-version concurrency control (MVCC) inside InnoDB. I could go into this, but Wikipedia does a far better job.
Here’s a step-by-step guide to restore a MySQL slave using xtrabackup:
1. Install xtrabackup on both the master and slave
You can use their preconfigured rpm package to add their yum or apt repository, so installing the utilities is a breeze. I’ll be using a yum repo from here on out (check here for an apt guide).
If you’re using a 32bit system, replace x86_64 with i386:
rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
Now we can install xtrabackup:
# yum install -y xtrabackup
2. Snapshot the master data directory
We’ll use innobackupex to create the actual backup. It reads your MySQL configuration from /etc/my.cnf (and may complain about settings you’re missing – just apply any changes it suggests to /etc/my.cnf).
# innobackupex /home/backup/
It will run through its backup; depending on the amount of data you have, this could take a while. When it’s done, it will have created a date-tagged directory inside /home/backup/ containing the backup of your current master databases.
3. Apply the transaction logs
The backup that was just created is a hot backup of your databases. Any transactions that were in flight are probably incomplete or not yet flushed to disk. We need a consistent backup, so we apply InnoDB’s transaction log to the backup to make it consistent (that’s what --apply-log does; the binary log coordinates are recorded separately for step 6):
# innobackupex --apply-log /home/backup/2013-01-01_14-02-12/
4. Copy backup to slave
After applying the logs, we can copy the backup to the slave. You can choose to tar or gzip (or both) the backup directory, depending on its size. Here, I’ll just copy the directory.
# scp -r /home/backup/2013-01-01_14-02-12 [email protected]:/home/dumps/
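If you’d rather compress the backup first, a transfer might look like this sketch (the paths and slave host are the placeholders from the example above – substitute your own):

```shell
# Compress the backup directory, copy it over, then unpack on the slave
tar czf /home/backup/2013-01-01_14-02-12.tar.gz -C /home/backup 2013-01-01_14-02-12
scp /home/backup/2013-01-01_14-02-12.tar.gz [email protected]:/home/dumps/
# on the slave:
#   tar xzf /home/dumps/2013-01-01_14-02-12.tar.gz -C /home/dumps
```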
5. Apply backup to slave
What you need to understand is that the backup you just copied is a full-fledged MySQL data directory. We’re going to use it to replace the slave’s data directory.
Make sure the MySQL daemon is stopped and replace the datadir:
# service mysqld stop
# mv /var/lib/mysql /var/lib/mysql_old_2013_01_01
# mv /home/dumps/2013-01-01_14-02-12 /var/lib/mysql
# service mysqld start
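One gotcha worth noting: after the move, the files are still owned by whoever copied them, and mysqld will refuse to start if it can’t write its own datadir. Assuming the standard mysql user and group, fix the ownership before starting the service:

```shell
# Give mysqld ownership of its new data directory
# (assumes the default mysql user/group; run before `service mysqld start`)
chown -R mysql:mysql /var/lib/mysql
```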
6. Restart replication
Once you have updated the slave’s data, you need to restart replication. Xtrabackup records the master’s binlog coordinates when it creates the backup, so we know the exact binlog file and position to use.
Check master status:
# cat /var/lib/mysql/xtrabackup_binlog_info
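The file holds two tab-separated fields: the binlog file and the position at the moment of the snapshot. A small sketch of pulling them into shell variables (the sample line is inlined here for illustration, mirroring the values used in the CHANGE MASTER statement below):

```shell
# xtrabackup_binlog_info contains a line like:
#   mysql-bin.000046    252752601
# Parse the two fields with awk (sample data inlined for illustration):
info="mysql-bin.000046	252752601"
BINLOG_FILE=$(echo "$info" | awk '{print $1}')
BINLOG_POS=$(echo "$info" | awk '{print $2}')
echo "file=$BINLOG_FILE pos=$BINLOG_POS"
```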
And update the replication:
mysql> CHANGE MASTER TO MASTER_HOST='ip.of.master', MASTER_USER='user', MASTER_PASSWORD='password', MASTER_LOG_FILE='mysql-bin.000046', MASTER_LOG_POS=252752601;
mysql> START SLAVE;
After this, check that replication is running again.
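A quick way to verify is SHOW SLAVE STATUS on the slave; both replication threads should report Yes:

```shell
# Both Slave_IO_Running and Slave_SQL_Running should say "Yes",
# and Seconds_Behind_Master should be heading toward 0:
mysql -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_(IO|SQL)_Running|Seconds_Behind_Master'
```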