How to write the MySQL binary log position of the master when doing a mysqldump from a slave?

I am currently running mysqldump on a MySQL slave to back up our database. This has worked fine for backing up the data itself, but what I would like to supplement it with is the master's binary log position that corresponds to the data generated by the mysqldump.

Doing this would allow us to restore our slave (or set up new slaves) without having to do a separate mysqldump on the main database where we grab the binary log position of the master. We would just take the data generated by the mysqldump, combine it with the binary log information we generated, and voila... be resynced.

So far, my research has gotten me very CLOSE to being able to accomplish this goal, but I can't seem to figure out an automated way to pull it off. Here are the "almosts" I've uncovered:

  • If we were running mysqldump from the main database, we could use the "--master-data" parameter with mysqldump to log the master's binary position along with the dump data (I presume this would probably also work if we started generating binary logs from our slave, but that seems like overkill for what we want to accomplish)
  • If we wanted to do this in a non-automated way, we could log into the slave's database and run "STOP SLAVE SQL_THREAD;" followed by "SHOW SLAVE STATUS;" (http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html); see the sketch after this list. But this isn't going to do us any good unless we know in advance that we want to back something up from the slave.
  • If we had $500/year to blow, we could use the InnoDB hot backup plugin and just run our mysqldumps from the main DB. But we don't have that money, and I don't want to add any extra I/O on our main DB anyway.
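
A minimal sketch of that second, slave-only procedure (placeholder credentials, not a tested script):

# Pause the slave SQL thread so the master coordinates stop moving
mysql --user=<username> --password=<password> -e 'STOP SLAVE SQL_THREAD;'

# Read the master's binary log coordinates from the slave
mysql --user=<username> --password=<password> -e 'SHOW SLAVE STATUS \G' | grep -E 'Master_Log_File|Master_Log_Pos'

# ... run mysqldump here ...

# Resume replication
mysql --user=<username> --password=<password> -e 'START SLAVE;'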

This seems like something common enough that somebody must have figured it out before; hopefully that somebody is using Stack Overflow?

Psychoanalysis answered 5/10, 2009 at 23:51 Comment(0)

The following shell script will run from cron or periodic; replace the variables as necessary (defaults are written for FreeBSD):

#!/bin/sh

# MySQL executable location
mysql=/usr/local/bin/mysql

# MySQLDump location
mysqldump=/usr/local/bin/mysqldump

# MySQL Username and password
userpassword=" --user=<username> --password=<password>"

# MySQL dump options
dumpoptions=" --quick --add-drop-table --add-locks --extended-insert"

# Databases
databases="db1 db2 db3"

# Backup Directory
backupdir=/usr/backups

# Stop the slave SQL thread so the master coordinates stop advancing
$mysql $userpassword -e 'STOP SLAVE SQL_THREAD;'

# Capture the date as positional parameters: $1=year, $2=month, $3=day
set `date +'%Y %m %d'`

# Binary Log Positions
masterlogfile=`$mysql $userpassword -e 'SHOW SLAVE STATUS \G' | grep '[^_]Master_Log_File'`
masterlogpos=`$mysql $userpassword -e 'SHOW SLAVE STATUS \G' | grep 'Read_Master_Log_Pos'`

# Write Binlog Info
echo $masterlogfile >> ${backupdir}/info-$1-$2-$3.txt
echo $masterlogpos >> ${backupdir}/info-$1-$2-$3.txt

# Dump all of our databases
echo "Dumping MySQL Databases"
for database in $databases
do
$mysqldump $userpassword $dumpoptions $database | gzip - > ${backupdir}/${database}-$1-$2-$3.sql.gz
done

# Restart replication
$mysql $userpassword -e 'START SLAVE;'

echo "Dump Complete!"

exit 0
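
To have this "run in cron" as mentioned above, a crontab entry along these lines would schedule it nightly (the script path and schedule are my assumptions, not part of the answer):

# Hypothetical crontab entry: run the backup script at 03:30 every night
30 3 * * * /usr/local/bin/slave-backup.sh >> /var/log/slave-backup.log 2>&1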
Oleviaolfaction answered 6/10, 2009 at 1:19 Comment(9)
Yup, that is similar to my second scenario, above. If the MySQL docs are to be believed, you can get the binary position of the master from the slave by stopping the slave thread and showing the slave's status. This doesn't require locking the master. But I'm hoping to find an automated solution that will automatically store the bin log position in the course of running our everyday backup. – Psychoanalysis
Hey, had totally forgotten that the master status can be retrieved from the slave! Cheers for the reminder. I've added the information to the shell script that performs daily backups, so we should have binary log information written out now alongside the backups. I'll add the info to my answer; it will only be directly applicable if you're using a *nix system, but I'm sure if you're working on a Windows system you have your own way of doing it :) – Oleviaolfaction
The OP really doesn't want to do a FLUSH TABLES WITH READ LOCK on the master. Nor does anybody, really. – Lotus
That's not a read lock on the master, it's a read lock on the slave. – Oleviaolfaction
Hey Ross, great job, that almost works for me. If you could make the following modifications I'll accept this as the answer: * All of the "-u <username> -p<password>" bits should be replaced with the $userpassword variable you declared * Paths to mysql and mysqldump should be declared as variables in the top area, alongside the username and such. In Debian (which is what I'm running) they're in a different location. Other than that, this script has successfully dumped position+database in my test. Going to test re-importing it a bit later to verify it's all kosher, but it looks very promising. – Psychoanalysis
Also, it'd probably be best to just remove the top bit about locking the master. Your second, slave-only solution is exactly what the question calls for. – Psychoanalysis
Hey, thanks for the cleanup tips, should be good now! Glad it helped. – Oleviaolfaction
One thing to note: running mysql $userpassword -e 'FLUSH TABLES WITH READ LOCK' will NOT maintain the lock! The lock is released as soon as that command finishes running, and not maintaining the lock will result in data that is not in sync with other tables. Since the lock is released immediately after the command completes, the follow-up UNLOCK TABLES command is unnecessary. Also, several of the mysqldump options are not necessary either, since they are on by default via the --opt option. Check the mysqldump man page. – Carefree
mysqldump --user=*** --password=*** --lock-all-tables --dump-slave=2 database > /backup.sql will add the master position and log file to the dump. – Overpay

Although Ross's script is on the right track, @joatis is right when he says to stop the slave before checking the master log position. The reason is that the READ LOCK will not freeze the Read_Master_Log_Pos that is retrieved with SHOW SLAVE STATUS.

To see that this is the case, log into MySQL on your slave and run:

FLUSH TABLES WITH READ LOCK

SHOW SLAVE STATUS \G

Note the Read_Master_Log_Pos

Wait a few seconds and once again run:

SHOW SLAVE STATUS \G

You should notice that the Read_Master_Log_Pos has changed.
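
To watch this from the shell instead, a quick sketch (placeholder credentials; assumes writes are still arriving from the master) is to poll the value while the read lock is held in another session:

# Poll Read_Master_Log_Pos a few times; it keeps advancing because
# FLUSH TABLES WITH READ LOCK does not stop the slave IO thread
for i in 1 2 3; do
    mysql --user=<username> --password=<password> -e 'SHOW SLAVE STATUS \G' | grep 'Read_Master_Log_Pos'
    sleep 2
done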

Since the backup is initiated quickly after we check the status, the log position recorded by the script may happen to be accurate. However, it's preferable to follow the procedure here: http://dev.mysql.com/doc/refman/5.0/en/replication-solutions-backups-mysqldump.html

And run STOP SLAVE SQL_THREAD; instead of FLUSH TABLES WITH READ LOCK for the duration of the backup.

When done, start replication again with START SLAVE

Also, if you wish to back up the bin-logs for incremental backups or as an extra safety measure, it is useful to append --flush-logs to the $dumpoptions variable above.
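
Putting that together, a minimal sketch of the sequence this answer recommends (placeholder credentials, database name, and backup path):

# Stop the SQL thread, dump with --flush-logs, then resume replication
mysql --user=<username> --password=<password> -e 'STOP SLAVE SQL_THREAD;'
mysqldump --user=<username> --password=<password> --quick --flush-logs db1 | gzip > /usr/backups/db1.sql.gz
mysql --user=<username> --password=<password> -e 'START SLAVE;'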

Francis answered 16/12, 2010 at 1:56 Comment(0)

Your second option looks like the right track.

I had to figure out a way to do differential backups using mysqldump. I ended up writing a script that chose which databases to back up and then executed mysqldump. Couldn't you create a script that follows the steps mentioned in http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_master-data and call it from a cron job?

  1. connect to mysql and "stop slave"
  2. execute SHOW SLAVE STATUS
  3. store file_name, file_pos in variables
  4. dump and restart the slave.

Just a thought but I'm guessing you could append the "CHANGE MASTER TO" line to the dumpfile and it would get executed when you restored/setup the new slave.
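
A rough sketch of that idea (the parsing of SHOW SLAVE STATUS is my assumption, not code from this answer; run it while the slave is stopped as in step 1):

# Extract the master coordinates from the slave, then append a
# CHANGE MASTER TO statement to the dump file for a new slave
file=`mysql --user=<username> --password=<password> -e 'SHOW SLAVE STATUS \G' | awk '$1 == "Master_Log_File:" {print $2}'`
pos=`mysql --user=<username> --password=<password> -e 'SHOW SLAVE STATUS \G' | awk '$1 == "Exec_Master_Log_Pos:" {print $2}'`
echo "CHANGE MASTER TO MASTER_LOG_FILE='$file', MASTER_LOG_POS=$pos;" >> dump.sql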

Mooneyham answered 6/10, 2009 at 21:23 Comment(0)

Using Read_Master_Log_Pos as the position to continue from the master means you can end up with missing data.

The Read_Master_Log_Pos variable is the position in the master binary log file that the slave IO thread is up to.

The problem here is that even in the small amount of time between stopping the slave SQL thread and retrieving the Read_Master_Log_Pos, the IO thread may have received more data from the master which hasn't yet been applied by the stopped SQL thread.

This results in the Read_Master_Log_Pos being further ahead than the data returned in the mysqldump, leaving a gap in the data when imported and continued on another slave.

The correct value to use on the slave is Exec_Master_Log_Pos, which is the position in the master binary log file that the slave SQL thread last executed, meaning there is no data gap between the mysqldump and the Exec_Master_Log_Pos.

Using Ross's script above, the correct usage would be:

#!/bin/sh

# MySQL executable location
mysql=/usr/bin/mysql

# MySQLDump executable location
mysqldump=/usr/bin/mysqldump

# MySQL Username and password
userpassword=" --user=<username> --password=<password>"

# MySQL dump options
dumpoptions=" --quick --add-drop-table --add-locks --extended-insert"

# Databases to dump
databases="db1 db2 db3"

# Backup Directory
# You need to create this dir
backupdir=~/mysqldump


# Stop slave sql thread

echo -n "Stopping slave SQL_THREAD... "
$mysql $userpassword -e 'STOP SLAVE SQL_THREAD;'
echo "Done."

# Capture the date as positional parameters: $1=year, $2=month, $3=day
set `date +'%Y %m %d'`

# Get Binary Log Positions

echo "Logging master status..."
masterlogfile=`$mysql $userpassword -e 'SHOW SLAVE STATUS \G' | grep '[^_]Master_Log_File'`
masterlogpos=`$mysql $userpassword -e 'SHOW SLAVE STATUS \G' | grep 'Exec_Master_Log_Pos'`

# Write log Info

echo $masterlogfile
echo $masterlogpos
echo $masterlogfile >> ${backupdir}/$1-$2-$3_info.txt
echo $masterlogpos >> ${backupdir}/$1-$2-$3_info.txt

# Dump the databases

echo "Dumping MySQL Databases..."
for database in $databases
do
echo -n "$database... "
$mysqldump $userpassword $dumpoptions $database | gzip - > ${backupdir}/$1-$2-$3_${database}.sql.gz
echo "Done."
done

# Start slave again

echo -n "Starting slave... "
$mysql $userpassword -e 'START SLAVE'
echo "Done."

echo "All complete!"

exit 0
Hausner answered 12/2, 2013 at 18:12 Comment(0)

mysqldump (on 5.6) seems to have an option, --dump-slave, that when executed on a slave records the binary log coordinates of the master that the node was a slave of. The intent of such a dump is exactly what you are describing.

(I am late, I know.)
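
A hedged example of that option (placeholder credentials, database name, and output path; --dump-slave=2 writes the CHANGE MASTER TO statement into the dump as a comment, while =1 writes it as an executable statement):

# Dump from the slave, embedding the master's binlog coordinates
mysqldump --user=<username> --password=<password> --dump-slave=2 db1 > /usr/backups/db1.sql

Per the MySQL documentation, mysqldump stops the slave's SQL thread before the dump and restarts it afterwards when this option is used.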

Bechtel answered 14/3, 2017 at 8:54 Comment(0)
