mysqldump | mysql yields 'too many open files' error. Why?

I have a RHEL 5 system with a fresh new hard drive I just dedicated to the MySQL server. To get things started, I used "mysqldump --host otherhost -A | mysql", even though I noticed the manpage never explicitly recommends trying this (dumping to a file first is a no-go; we're talking 500 GB of database).

This process fails at random intervals, complaining that too many files are open (at which point mysqld gets the relevant signal, and dies and respawns).

I tried raising the limits with sysctl and ulimit, but the problem persists. What can I do about it?
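
For reference, the kind of changes I mean, with placeholder values rather than the exact ones I used:

  ulimit -n 65535                 # per-process open-file limit
  sysctl -w fs.file-max=200000    # system-wide file handle ceiling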

Inigo answered 17/9, 2008 at 13:33 Comment(0)

mysqldump by default performs a per-table lock of all involved tables. If you have many tables, that can exceed the number of file descriptors available to the mysql server process. Try --skip-lock-tables, or, if locking is imperative, --lock-all-tables.
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html

--lock-all-tables, -x

Lock all tables across all databases. This is achieved by acquiring a global read lock for the duration of the whole dump. This option automatically turns off --single-transaction and --lock-tables.
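
Applied to the pipeline from the question, that would look roughly like one of these (host name copied from the question; pick one flag or the other):

  # skip per-table locking entirely (the dump may be inconsistent if writes continue)
  mysqldump --host otherhost -A --skip-lock-tables | mysql

  # or take one global read lock instead of opening and locking every table
  mysqldump --host otherhost -A --lock-all-tables | mysql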

Woaded answered 17/9, 2008 at 13:45 Comment(0)

mysqldump has been reported to yield that error for larger databases (1, 2, 3). Explanation and workaround from MySQL Bugs:

[3 Feb 2007 22:00] Sergei Golubchik: This is not really a bug.

mysqldump by default has --lock-tables enabled, which means it tries to lock all tables to be dumped before starting the dump. And doing LOCK TABLES t1, t2, ... for a really big number of tables will inevitably exhaust all available file descriptors, as LOCK needs all tables to be opened.

Workarounds: --skip-lock-tables will disable such locking completely. Alternatively, --lock-all-tables will make mysqldump use FLUSH TABLES WITH READ LOCK, which locks all tables in all databases (without opening them). In this case mysqldump will automatically disable --lock-tables because it makes no sense when --lock-all-tables is used.

Edit: Please check Dave's workaround for InnoDB in the comment below.

Hundredweight answered 17/9, 2008 at 13:46 Comment(1)
Or if you are using InnoDB tables, try --single-transaction, which avoids both the problem of running out of file handles and locking all your tables. – Marquis
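
For an all-InnoDB server, that suggestion would look roughly like this (a sketch reusing the command from the question; it takes a consistent snapshot without per-table locks):

  mysqldump --host otherhost -A --single-transaction | mysql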

If your database is that large, you've got a few issues:

  1. You have to lock the tables to dump the data.

  2. mysqldump will take a very, very long time, and your tables will need to be locked during this time.

  3. Importing the data on the new server will also take a long time.

Since your database is going to be essentially unusable while #1 and #2 are happening, I would actually recommend stopping the database and using rsync to copy the files to the other server. It's faster than using mysqldump and much faster than importing, because you don't have the added I/O and CPU of generating indexes.
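
A rough sketch of that stop-and-copy approach, assuming the default RHEL datadir and a placeholder target host:

  # on the source server
  service mysqld stop                                   # nothing may write to the datadir during the copy
  rsync -av /var/lib/mysql/ newhost:/var/lib/mysql/     # "newhost" stands in for the destination machine
  service mysqld start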

In production environments on Linux, many people put MySQL data on an LVM partition. Then they stop the database, take an LVM snapshot, start the database, and copy off the state of the stopped database at their leisure.
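
A minimal sketch of that LVM routine, with hypothetical volume names and a snapshot sized to absorb the writes that happen while the copy runs:

  service mysqld stop
  lvcreate --snapshot --size 10G --name mysql-snap /dev/vg0/mysql-data
  service mysqld start                                  # downtime ends here
  mkdir -p /mnt/mysql-snap
  mount /dev/vg0/mysql-snap /mnt/mysql-snap
  rsync -av /mnt/mysql-snap/ newhost:/var/lib/mysql/    # copy the frozen state at leisure
  umount /mnt/mysql-snap
  lvremove -f /dev/vg0/mysql-snap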

Gaughan answered 17/9, 2008 at 21:5 Comment(0)

I just restarted the MySQL server and then I could use the mysqldump command flawlessly.

Thought this might be a helpful tip here.
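
On a RHEL-style system that restart is typically something like (the service name can differ between installs):

  service mysqld restart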

Jannajannel answered 20/7, 2019 at 18:2 Comment(0)
