I have been doing this for several years in different contexts, with small to medium databases (1 GB to 100 GB). The quick-and-dirty mysqldump works for smaller data sets; the smaller they are, the better it works.
When you exceed 5-10 GB, depending on the MySQL load, quick and dirty does not cut it anymore.
Why mysqldump might not be enough
The problem with mysqldump is that while it dumps, the live database is either unusable, very awkward to use, or the backup will not be consistent, unless you have a wide enough time window in which it does not matter that the live database is unusable, because it need not be in use anyway (for example, late at night).
The default options make the database next to unusable while it's being dumped, unless the usage is just reading data, and little of that. On a busy e-commerce site, you're looking at a client pile-up crash.
So you use InnoDB and the modern options (not defaults, as far as I know):
--single-transaction --skip-lock-tables
which allow the site to run, albeit slower than normal, during the dump. Depending on what the usage is, it might be noticeably slower.
While you're at it, also dump other data which might be important:
--events --triggers --routines
(...oh, and this still won't have dumped user permissions. For use as a test database, perhaps that's not so important.)
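Putting it all together, a typical invocation might look like the sketch below; backup_user, shopdb and the output file are placeholders of my own:

mysqldump --single-transaction --skip-lock-tables \
    --events --triggers --routines \
    -u backup_user -p shopdb > shopdb.sql

Remember that --single-transaction only gives you a consistent snapshot for InnoDB tables; any MyISAM tables in the mix can still change under your feet during the dump.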
There is a workaround I've found "advised" (!) as a "great hack", which basically disables transactional integrity, allowing the database to run at full speed while it's being dumped. It's somewhat like removing the brakes from your car to lighten it and make it go faster: yes, it will work, but it will have side effects that you might not notice immediately. You will almost surely notice them sooner or later, and, just like brakes, it will be when you need them most, and it won't be pretty.
However, for a test database, it could still work.
Xtrabackup
If you have a "serious" database, what's the reason not to have a "serious" backup?
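For reference, a minimal Percona XtraBackup run might look like this sketch (paths and credentials are placeholders; older versions shipped an innobackupex wrapper instead). The backup is taken while the server keeps running, and a separate "prepare" step makes the copied files consistent:

xtrabackup --backup --user=backup_user --password='...' --target-dir=/backups/full
xtrabackup --prepare --target-dir=/backups/full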
Slave replication
Another possibility, if you have space to spare (and, nowadays, 20 GB is not that much), is to use an ancillary database.
You can install a second copy of MySQL Server on the same machine, on a different port, and have it be the slave (the server will take a performance hit, storage-speed-wise). Then you will have two identical databases (the live master and a live slave). The first time, you will still have to run a full dump to get them in sync, with all the problems that involves.
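Pointing the second instance at the master is then a matter of a CHANGE MASTER TO statement. A sketch, assuming the slave listens on port 3307, a replication user named repl already exists, and the log file and position values come from SHOW MASTER STATUS on the master:

mysql -h 127.0.0.1 -P 3307 -u root -p -e "
    CHANGE MASTER TO
        MASTER_HOST='127.0.0.1',
        MASTER_PORT=3306,
        MASTER_USER='repl',
        MASTER_PASSWORD='...',
        MASTER_LOG_FILE='mysql-bin.000001',
        MASTER_LOG_POS=4;
    START SLAVE;"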
When you need to clone the test database, stop the slave replication - the live slave will now remain "frozen" in time - and back up the live slave to the test DB, using mysqlbackup or just copying the data files. Once done, you restart the replication.
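The cycle itself is easily scripted; a sketch, again assuming the slave on port 3307 and shopdb as a placeholder name:

# Freeze the slave at a consistent point in time
mysql -h 127.0.0.1 -P 3307 -u root -p -e "STOP SLAVE;"

# Clone the frozen slave; the live master is untouched
mysqldump -h 127.0.0.1 -P 3307 -u root -p \
    --events --triggers --routines shopdb > test_clone.sql

# Resume replication; the slave catches up on its own
mysql -h 127.0.0.1 -P 3307 -u root -p -e "START SLAVE;"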
The impact on the live master is negligible, and the slave can actually be used for non-update-critical selects.
auto_increment
If you have no auto_increment keys to worry about, you could set up replication but keep the slave stopped most of the time. Periodically, then, you START SLAVE; and it brings itself up to date. If you have any auto_increment fields, though, this will result in key collisions. – Cnidus