How to duplicate a MySQL database on the same server

I have a large MySQL database, let's call it live_db, which I want to replicate on the same machine to provide a test system to play around with (test_db), including table structure and data. At regular intervals I want to update test_db with the content of live_db, incrementally if possible.

Is there some built-in mechanism in MySQL to do that? I think master-slave replication is not what I want, since it should be possible to alter data in test_db. Those changes do not need to be preserved, though.

Regards,

CGD

Neddra answered 11/7, 2011 at 14:59 Comment(2)
12.9 GB at the moment and growing. – Neddra
In the unlikely case that you don't have auto_increment keys to worry about, you could set up replication but keep the slave stopped most of the time. Periodically, then, you START SLAVE; and it brings itself up to date. If you have any auto_increment fields, though, this will result in key collisions. – Cnidus
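
A minimal sketch of that periodic catch-up, assuming a slave is already configured (credentials are placeholders):

mysql --user=username --password=passwd -e 'START SLAVE;'
# wait until SHOW SLAVE STATUS reports Seconds_Behind_Master: 0
mysql --user=username --password=passwd -e 'SHOW SLAVE STATUS\G'
mysql --user=username --password=passwd -e 'STOP SLAVE;'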

The mysql command line client will accept a stream of SQL statements from standard input. You can therefore pipe the output of mysqldump directly into mysql on the command line. Doing this as a cron job will regularly overwrite your test data with updated live data:

mysql --user=username --password=passwd -e 'DROP DATABASE test_db;'
mysql --user=username --password=passwd -e 'CREATE DATABASE test_db;'
mysqldump --user=username --password=passwd live_db | mysql --user=username --password=passwd test_db

Note that since your data is large, it will take a long time.
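
For example, a crontab entry along these lines would refresh test_db nightly at 03:00 (a sketch; the credentials are placeholders, and putting them in ~/.my.cnf instead of on the command line is safer):

0 3 * * * mysqldump --user=username --password=passwd live_db | mysql --user=username --password=passwd test_db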

Cnidus answered 11/7, 2011 at 15:4 Comment(18)
This is the best way that I have found. I'm looking forward to hearing of a better way. – Enriqueenriqueta
As I stated in the comment above, the DB is currently 12.9 GB and growing. Doing a full dump takes an awful lot of time. – Neddra
OK, I did it this way. Creating the dump is actually quite fast. Importing it, though, is a different matter. The whole process takes about 15 minutes, which is not sooo bad... I also looked into mysqlhotcopy but decided against it. Thanks all! – Neddra
Do you really need the DROP/CREATE database? mysqldump will include DROP TABLE statements anyway, so unless you generate extra tables in the test db (which would therefore still be there unless manually DROPped), you don't need the first two lines. – Cord
I've used mysqldbcopy: dev.mysql.com/doc/mysql-utilities/1.3/en/mysqldbcopy.html – Footnote
And there should be a space between -u and username. – Espy
@Espy Most platforms accept it without a space, certainly any *nix - does Windows not? Anyway, to avoid confusion with the password, the space is left out here. The -p must not have a space: "The password of the MySQL account. As described earlier, the password value is optional, but if given, there must be no space between -p or --password= and the password following it. The default is to send no password." dev.mysql.com/doc/refman/5.0/en/connecting.html – Cnidus
I'm not talking about -p, but about -u. – Espy
@Espy I understand you're talking about -u. I am trying to say that -u was deliberately presented without a space here to be consistent and to avoid confusion with -p, since -p must not have a space. I am asking you to confirm that there is a platform or MySQL version that will not correctly parse -uusername (Windows, perhaps?), because I have not encountered one to my recollection. Either way, though, I'll just change them both to the long form --user, --password above. – Cnidus
Yay, looks like it's working with -uusername too! I take my words (above) back then :) – Espy
Please note: do NOT use the --databases orig_db parameter on the left-hand side if you want to restore to a different database. mysqldump will include a "USE orig_db;" statement in its output, and you'll not be writing to the db you specified on the right-hand side of the pipe. Learned that one the hard way. :-/ – Benzoin
I do this and it works fine, but when I want to open the database as another user it seems that it doesn't copy all the permissions of the source database to the target database... any suggestions? :/ – Prissie
Grants/permissions are not stored with the database, so dumping the database won't include them. You must separately run SHOW GRANTS and replay its output on the new database server: serverfault.com/questions/8860/… – Cnidus
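
A sketch of that replay, where 'appuser'@'localhost' and testhost are hypothetical placeholders (-N suppresses the column header, and the sed adds the trailing semicolons that SHOW GRANTS omits):

mysql -N --user=root --password=passwd -e "SHOW GRANTS FOR 'appuser'@'localhost';" | sed 's/$/;/' | mysql --host=testhost --user=root --password=passwd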
This is incredibly slow. Though I can mysqldump in about 1-2 seconds, it takes 35 minutes to import into the new db with the mysql command line. What about mysqldbcopy? I haven't tried it yet, but perhaps it's faster. What I'd really like to know is whether I can just copy a data folder for very fast copying. – Barnstorm
@ButtleButkus 35 minutes is unusually and unreasonably slow. With extensive indexing, the load time would be significantly longer than the dump time, but not by hundreds of times. Do you have query logging enabled on the target server? Bin logs for replication? Anything else unusual? Try mysqldbcopy to see if it makes a difference for you. If not, I might suspect a problem with the filesystem on the target server. – Cnidus
@MichaelBerkowski It's a large database. Some tables have 300,000 rows; a few even have millions of rows. Using mysql to import is slow because it runs all of those insert queries, right? My impression is that mysql is the bottleneck. But why even export and import if it might be possible just to copy files in the file system? The server is CentOS 6 with a fast processor and 32 GB RAM. Trying to load the same db on a local VM would take hours. – Barnstorm
It is possible under some circumstances: dev.mysql.com/doc/refman/5.6/en/… and serverfault.com/questions/367255/… – Cnidus
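
For what it's worth, a sketch of the raw file copy, assuming MyISAM tables and a cleanly stopped server (paths and service name are placeholders; InnoDB tables need the transportable-tablespace procedure from the links above instead):

sudo service mysqld stop
sudo cp -a /var/lib/mysql/live_db /var/lib/mysql/test_db
sudo chown -R mysql:mysql /var/lib/mysql/test_db
sudo service mysqld start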
MySQL provides for incremental backups, which should make the transfers much faster... – Vilberg

Michael's answer above works well but does not copy events, stored procedures, or triggers.

To copy those, a few more switches are needed for mysqldump: --events --triggers --routines

To complement a copy that has already been made:

mysqldump --user=username --password=passwd --no-data --no-create-info --no-create-db --events --triggers --routines live_db | mysql --user=username --password=passwd test_db
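
Alternatively, to include those objects in the full copy from the start (a sketch based on the accepted answer's command; note that --triggers is on by default in mysqldump, while --events and --routines are not):

mysqldump --user=username --password=passwd --events --triggers --routines live_db | mysql --user=username --password=passwd test_db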

Berkshire answered 20/4, 2015 at 10:15 Comment(0)

I have been doing this for several years in different contexts, with small to medium databases (1 GB to 100 GB). The quick-and-dirty mysqldump works for smaller data sets; the smaller they are, the better it goes.

When you exceed 5-10 GB, depending on the MySQL load, quick and dirty does not cut it anymore.

Why mysqldump might not be enough

The problem with mysqldump is that while it dumps, the live database is either unusable, very awkward to use, or the backup will not be consistent. The exception is when you have a wide enough time window in which the unusability of the live database does not matter because it need not be in use anyway (for example, late at night).

The default options (the reasons why are discussed elsewhere) make the database next to unusable while it's being dumped, unless the usage is just reading data, and little of that. On a busy e-commerce site, you're looking at a client pile-up crash.

So you use InnoDB and the modern options (not defaults, as far as I know):

--single-transaction --skip-lock-tables

which allow the site to run, albeit slower than normal, during the dump. Depending on what the usage is, it might be noticeably slower.

While you're at it, also dump other data which might be important:

--events --triggers --routines

(...oh, and this still won't have dumped user permissions. For use as a test database, perhaps that isn't so important.)
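
Putting those options together, the whole dump-and-load might look like this (a sketch; credentials and database names are placeholders):

mysqldump --user=username --password=passwd --single-transaction --skip-lock-tables --events --triggers --routines live_db | mysql --user=username --password=passwd test_db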

There is a workaround I've found "advised" (!) as a "great hack", which basically disables transactional integrity, allowing the database to run at full speed while it's being dumped. It's somewhat like removing the brakes from your car to lighten it and make it run faster: yes, it will work, but it will have side effects that you might not notice immediately. You will almost surely notice them sooner or later, and just like brakes, it will be when you need them most, and it won't be pretty.

However, for a test database, it could still work.

Xtrabackup

If you have a "serious" database, what's the reason not to have a "serious" backup?
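
With Percona XtraBackup, for example, a physical backup is a two-step affair (a sketch; the target directory is a placeholder, and the exact invocation varies between versions):

xtrabackup --backup --target-dir=/data/backups/live
xtrabackup --prepare --target-dir=/data/backups/live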

Slave replication

Another possibility, if you have space to spare - and nowadays, 20 GB is not that much - is to use an ancillary database.

You can install a second copy of MySQL Server on the same machine on a different port and have it be the slave (the server will take a performance hit, storage-speed-wise). You will then have two identical databases (the live master and the live slave). The first time, you will still have to run a full dump to get them in sync, with all the problems that involves.

When you need to refresh the test database, stop the slave replication - the live slave will now remain "frozen" in time - and back up the live slave to the test db, using mysqlbackup or just copying the data files. Once done, restart the replication.
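
A sketch of that cycle, assuming the slave listens on port 3307 of the same machine (host, port, and credentials are placeholders):

mysql --host=127.0.0.1 --port=3307 --user=username --password=passwd -e 'STOP SLAVE;'
mysqldump --host=127.0.0.1 --port=3307 --user=username --password=passwd live_db | mysql --user=username --password=passwd test_db
mysql --host=127.0.0.1 --port=3307 --user=username --password=passwd -e 'START SLAVE;'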

The impact on the live master is negligible, and the slave can actually be used for non-update-critical selects.

Oui answered 29/5, 2019 at 6:29 Comment(0)

If you prefer the MySQL Migration Toolkit, you may double-click the schema name in the Data Mapping step and change the target schema name.

Baeza answered 9/10, 2014 at 18:12 Comment(0)

For all the Mac users: with Sequel Pro, all you need to do is go to the Database menu -> Duplicate Database. Done!

Brietta answered 29/5, 2019 at 6:8 Comment(0)
