Run MySQLDump without Locking Tables

14

519

I want to copy a live production database into my local development database. Is there a way to do this without locking the production database?

I'm currently using:

mysqldump -u root --password=xxx -h xxx my_db1 | mysql -u root --password=xxx -h localhost my_db1

But it's locking each table as it runs.

Longe answered 19/9, 2008 at 19:7 Comment(1)
Another late solution: You could also use Percona XtraBackup to dump your production database without interrupting transaction processing. It lets you take a hot backup, i.e. it doesn't affect current activities. See here: percona.com/software/mysql-database/percona-xtrabackup (I have no affiliation in any way with Percona.)Schmidt
742

Does the --lock-tables=false option work?

According to the man page, if you are dumping InnoDB tables you can use the --single-transaction option:

--lock-tables, -l

Lock all tables before dumping them. The tables are locked with READ
LOCAL to allow concurrent inserts in the case of MyISAM tables. For
transactional tables such as InnoDB and BDB, --single-transaction is
a much better option, because it does not need to lock the tables at
all.

For an InnoDB database:

mysqldump --single-transaction=TRUE -u username -p DB
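
Applied to the pipeline from the question, that would look something like this (a sketch; hosts and credentials are placeholders, as in the question):

mysqldump --single-transaction -u root --password=xxx -h xxx my_db1 | mysql -u root --password=xxx -h localhost my_db1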
Formal answered 19/9, 2008 at 19:9 Comment(7)
for innodb DB mysqldump --single-transaction=TRUE -u username -p DBMicrotone
What if you have innodb and myisam?Marine
Is this on by default?Marine
obviously on (ie. locked) ?Cortisone
I don't think you can pass a value to --lock-tables, so I don't think --lock-tables=false would workEntrails
This does not work for me. I've tried it with a 58Gb database and the sql file still contains LOCK TABLES and UNLOCK TABLES and fails to restore using redirection. I'm using innodb engine.Fda
--lock-tables=false worked for me, thanks! (running innodb)Cheekbone
313

This is ages too late, but good for anyone searching the topic. If you're not using InnoDB, and you're not worried about locking while you dump, simply use the option:

--lock-tables=false
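
For example, dumping straight to a file (a sketch; credentials and names are placeholders — note that the comments below disagree on whether the =false form is honored, and --skip-lock-tables is the documented way to disable the option):

mysqldump --lock-tables=false --quick -u root -p my_db1 > my_db1.sql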
Zilla answered 11/8, 2009 at 22:20 Comment(7)
Thanks for the response Warren, this was very helpful and worked like a charm.Saintmihiel
using '--lock-table=false --quick' uses the least server resourcesPleasing
But you should be worried about locking tables. If multiple tables are written to while mysqldump is running (and you use foreign keys), your dump may be inconsistent. You won't know until you restore it and happen to run JOIN queries on the inconsistent data. It may take a while for the inconsistent data to be discovered, because the JOINs are done by your application, not by MySQL (with MyISAM tables); the restore will work just fine, and MySQL will not warn you about the inconsistencies. So: MyISAM -> always lock your tables. InnoDB -> use --single-transaction.Envelope
@Envelope I don't think locking tables is even sufficient for MyISAM tables. If mysqldump locks the tables in between queries executed by the application then you wind up with the same inconsistencies. The answer is even simpler: MyISAM -> use InnoDB instead.Organon
@Envelope you definitely should be worried about locking tables, but only if you do need a consistent dump. There are some rare cases when you don't. For example, a crude fgrep on a database-wide dump (debugging): I bet one doesn't want users to wait ~20 minutes for a dump of the production database to be created (true story). If the point is to get a dump that is not only ASAP but also CONSISTENT, one should dump a replicated slave or use lower-level snapshotting (lvm, zfs, btrfs, etc.), keeping in mind the FLUSH TABLES WITH READ LOCK stuff.Karlee
Warning: when @Saintmihiel typed lock-table he meant --lock-tables (plural)Inhospitality
--lock-tables=false doesn't work sometimes. See the documentation of --lock-tables: "Some options, such as --opt, automatically enable --lock-tables. If you want to override this, use --skip-lock-tables at the end of the option list."Fleece
49

The answer varies depending on what storage engine you're using. The ideal scenario is if you're using InnoDB. In that case you can use the --single-transaction flag, which will give you a coherent snapshot of the database at the time that the dump begins.
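
If you're not sure which engine each table uses, a quick check along these lines can settle it (a sketch; my_db1 is a placeholder database name):

mysql -u root -p -e "SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'my_db1'"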

Padraic answered 19/9, 2008 at 20:19 Comment(0)
44

--skip-add-locks helped for me

Seddon answered 16/12, 2010 at 11:43 Comment(6)
or also --compact, which includes --skip-add-locks along with other optimizations.Blasphemy
This removes LOCK TABLES and UNLOCK TABLES statements from the dump file, it does not affect the locking during export.Hager
No, it's not what you're looking for! See dabest1's comment. This does NOTHING to keep your tables from getting locked while doing a mysqldump. This is NOT an answer to the question.Spectroscopy
@dabest and @Spectroscopy are correct: --skip-add-locks would just make the dump restore faster. This is not a correct answer.Openminded
Thankfully this is what I needed for my specific use-case. AWS aurora, when dumping data with mysql packaged mysqldump (I'm on driver 5.7.~something) will lock tables on inserts, including log tables. This will raise error You can't use locks with log tables. while re-inserting. I'm only dumping data for a migration so I think this will elegantly solve my issue.Wyly
Also note that if --opt is in the arguments it will override this flag.Wyly
18

To dump large tables, you should combine the --single-transaction option with --quick.

http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_single-transaction
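
For instance (a sketch; names are placeholders), so that rows are streamed one at a time instead of being buffered in memory:

mysqldump --single-transaction --quick -u root -p big_db > big_db.sql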

Oft answered 4/5, 2012 at 19:10 Comment(2)
Option --quick is part of option --opt now. And --opt is enabled by default.Afraid
still didn't work for me on a 58Gb database. I see lots of LOCK TABLES and then insert fail during the restore.Fda
17

For InnoDB tables, use the --single-transaction flag:

it dumps the consistent state of the database at the time when BEGIN was issued without blocking any applications

MySQL DOCS

http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_single-transaction

Considerable answered 10/1, 2013 at 6:40 Comment(1)
Maybe along with --skip-lock-tables.Birck
13

This is about as late, compared to the guy who said he was late, as he was to the original answer, but in my case (MySQL via WAMP on Windows 7), I had to use:

--skip-lock-tables
Betulaceous answered 23/5, 2012 at 19:48 Comment(2)
This is what worked for me to dump information_schema without having the error "Access denied for user 'debian-sys-maint'@'localhost' to database 'information_schema' when using LOCK TABLES"Dah
@miken32 you mean the answer that suggested --skip-add-locks? That's definitely not the same as --skip-lock-tables. Thanks for taking the time to incorrectly point this out 9 years later though. :)Betulaceous
12

Honestly, I would set up replication for this, because if you don't lock the tables you will get inconsistent data out of the dump.

If the dump takes a long time, tables that were dumped early might have changed by the time the last tables are dumped.

So either lock the tables or use replication.
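
A common way to seed the replica is a consistent dump that also records the binary log coordinates (a sketch; names are placeholders, the server must have binary logging enabled, and --master-data has been renamed --source-data in recent MySQL versions):

mysqldump --single-transaction --master-data=2 -u root -p -h xxx my_db1 > seed.sql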

Mayne answered 19/9, 2008 at 19:44 Comment(4)
This whole DB is almost entirely read only so I'm not too worried about it changing.Longe
This comment is incorrect. MVCC allows for reading consistent state without locks on InnoDB.Nosepiece
If you don't have replication already set up, then you need to do a dump to set it up. Same problem exists.Public
If you don't have replication already set up, then you will need to lock the tables to do the dump to ensure data integrity, though. So it's a catch-22.Rawdan
6
    mysqldump -uuid -ppwd --skip-opt --single-transaction --max_allowed_packet=1G -q db | mysql -u root --password=xxx -h localhost db
Elbertine answered 1/5, 2012 at 9:7 Comment(2)
Upvote, this one worked for me; just add the parameters --skip-opt --single-transaction --max_allowed_packet=1GMopes
I don't recommend "--skip-opt" for this purpose. That does a lot more than what the original question asked for. It turns off quick mode, it doesn't include the charset, etc etc.Spectroscopy
3

When using MySQL Workbench, under Data Export, click Advanced Options and uncheck the "lock-tables" option.


Antisocial answered 28/2, 2019 at 18:8 Comment(0)
1

Per https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_lock-tables:

Some options, such as --opt (which is enabled by default), automatically enable --lock-tables. If you want to override this, use --skip-lock-tables at the end of the option list.
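
In practice that means putting the override after everything else (a sketch; my_db is a placeholder, and --opt is shown explicitly only to illustrate what is being overridden):

mysqldump --opt -u root -p --skip-lock-tables my_db > my_db.sql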

Hirundine answered 17/10, 2018 at 14:19 Comment(0)
1

If you use Percona XtraDB Cluster, I found that adding --skip-add-locks to the mysqldump command allows the cluster to run the dump file without issues from the LOCK TABLES commands in it.

Azarria answered 14/1, 2021 at 10:35 Comment(0)
0

Another late answer:

If you are trying to make a hot copy of a server database (in a Linux environment) and the database engine of all tables is MyISAM, you should use mysqlhotcopy.

According to the documentation:

It uses FLUSH TABLES, LOCK TABLES, and cp or scp to make a database backup. It is a fast way to make a backup of the database or single tables, but it can be run only on the same machine where the database directories are located. mysqlhotcopy works only for backing up MyISAM and ARCHIVE tables.

The LOCK TABLES time depends on how long the server takes to copy the MySQL files (it doesn't make a dump).
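
Invocation looks roughly like this (a sketch; credentials and the target directory are placeholders — and note that mysqlhotcopy was removed in MySQL 5.7):

mysqlhotcopy --user=root --password=xxx my_db1 /var/backups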

Franciskus answered 12/8, 2017 at 12:43 Comment(0)
-3

As none of these approaches worked for me, I simply did a:

mysqldump [...] | grep -v "LOCK TABLE" | mysql [...]

It will filter out both the LOCK TABLES <x> and UNLOCK TABLES statements.

Note: Hopefully your data doesn't contain that string in it!
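
A slightly safer variant (a sketch) anchors the patterns to the start of the line, so data rows that merely contain the string are less likely to be dropped:

mysqldump [...] | grep -v -e "^LOCK TABLES " -e "^UNLOCK TABLES" | mysql [...]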

Golden answered 18/10, 2016 at 16:48 Comment(1)
--skip-add-locks during the dump does that tooTyrannicide
