Getting "Lost connection to MySQL server" when using mysqldump, even with the max_allowed_packet parameter

I want to dump a specific table from my remote server's database, which generally works fine, but one of the tables has 9 million rows and I get:

Lost connection to MySQL server during query when dumping table `table_name` at row: 2002359

So after reading online, I understood that I need to increase max_allowed_packet, and that it's possible to add it to my command.

So I'm running the following command to dump my table:

mysqldump -uroot -h my.host -p'mypassword' --max_allowed_packet=512M db_name table_name | gzip  > dump_test.sql.gz

And for some reason, I still get:

Lost connection to MySQL server during query when dumping table `table_name` at row: 2602499

Am I doing something wrong?

It's weird; it's only 9 million records... not too big.

Arlenearles answered 31/10, 2018 at 20:47 Comment(0)

Try adding the --quick option to your mysqldump command; it works better with large tables. It streams rows from the result set to the output rather than reading the whole table into memory and then writing it out.

 mysqldump -uroot -h my.host -p'mypassword' --quick --max_allowed_packet=512M db_name table_name | \
 gzip  > dump_test.sql.gz

You can also try adding the --compress option to your mysqldump command. That makes it use the more network-friendly compressed connection protocol to your MySQL server. Note that you still need the gzip pipe; MySQL's compressed protocol doesn't cause the dump to come out of mysqldump compressed.
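Combined with --quick from above, that gives something like this (a sketch reusing the question's placeholder host and credentials; it needs a reachable MySQL server to actually run):

```shell
mysqldump -uroot -h my.host -p'mypassword' --quick --compress \
          --max_allowed_packet=512M db_name table_name | \
          gzip > dump_test.sql.gz
```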

It's also possible the server is timing out its connection to the mysqldump client. You can try resetting the timeout durations. Connect to your server via some other means and issue these queries, then run your mysqldump job.

These set the timeouts to one calendar day.

    SET GLOBAL wait_timeout=86400;
    SET GLOBAL interactive_timeout=86400;
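If you'd rather do it from the shell in one step, the same statements can be sent through the mysql client (a sketch reusing the question's placeholder credentials; the account needs the privilege to set global variables, e.g. SUPER or SYSTEM_VARIABLES_ADMIN):

```shell
mysql -uroot -h my.host -p'mypassword' \
      -e "SET GLOBAL wait_timeout=86400; SET GLOBAL interactive_timeout=86400;"
```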

Finally, if your server is far away from your machine (through routers and firewalls) something may be disrupting mysqldump's connection. Some inferior routers and firewalls have time limits on NAT (network address translation) sessions. They're supposed to keep those sessions alive while they are in use, but some don't. Or maybe you're hitting a time or size limit configured by your company for external connections.

Try logging into a machine closer to the server and running mysqldump on it. Then use some other means (sftp?) to copy your gz file to your own machine.

Or, you may have to segment the dump of this file. You can do something like this (not debugged).

mysqldump -uroot -h my.host -p'mypassword' \
          --skip-create-options --skip-add-drop-table \
          --where="id>=0 AND id < 1000000" \
          db_name table_name | \
          gzip ...

Then repeat that with these lines.

          --where="id>=1000000 AND id < 2000000" | \

          --where="id>=2000000 AND id < 3000000" | \
          ...

until you get all the rows. Pain in the neck, but it will work.
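A small loop can generate those chunks automatically. This is a sketch, not debugged against a live server: the chunk size and the 9,000,000 upper bound are assumptions (find the real maximum with SELECT MAX(id) FROM table_name), and the mysqldump command is only echoed as a dry run; pipe this script's output to sh, or drop the echo, to actually perform the dumps.

```shell
#!/bin/sh
# Dump a big table in 1M-row chunks keyed on an integer id column.
# CHUNK and MAX_ID are assumptions; adjust them for the real table.
CHUNK=1000000
MAX_ID=9000000
start=0
while [ "$start" -lt "$MAX_ID" ]; do
  end=$((start + CHUNK))
  # Dry run: echo the command instead of executing it.
  echo "mysqldump -uroot -h my.host -p'mypassword' \
    --skip-create-options --skip-add-drop-table \
    --where='id>=$start AND id < $end' db_name table_name \
    | gzip > dump_part_$start.sql.gz"
  start=$end
done
```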

Sideboard answered 31/10, 2018 at 21:22 Comment(3)
it survived a bit longer, but still got Lost connection to MySQL server during query when dumping table `table_name` at row: 2926704. It drives me nuts... (Arlenearles)
Please see my enlarged answer. (Sideboard)
Thanks buddy, I think that --compress did the difference :) Appreciate it @O. Jones (Arlenearles)

For me, everything worked fine when I skipped locking the tables:

 mysqldump -u xxxxx --password=xxxxx --quick --max_allowed_packet=512M --skip-lock-tables --verbose   -h xxx.xxx.xxx.xxx > db.sql

It may create problems with consistency, but it allowed me to back up a 5 GB database without any issue.

Octuple answered 25/9, 2020 at 18:6 Comment(0)

Another option to try:

net_read_timeout=3600 
net_write_timeout=3600

in my.ini/my.cnf, or via SET GLOBAL ...
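In the config file, these go in the server section (a minimal fragment; the server must be restarted for file changes to take effect, whereas SET GLOBAL applies immediately to new connections):

```ini
[mysqld]
net_read_timeout=3600
net_write_timeout=3600
```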

Adamina answered 28/9, 2019 at 20:54 Comment(0)

Using JohnBigs's comment above, the --compress flag was what worked for me.

I had previously tried --single-transaction, --skip-extended-insert, and --quick, without success.

Callaghan answered 17/6, 2019 at 20:54 Comment(0)

Also, make sure your mysql.exe client is the same version as your MySQL server.

So if your server version is 8.0.23 but your client version is 8.0.17 or 8.0.25, you may have issues. I ran into this problem using a version 8.0.17 client against a MySQL 8.0.23 server; changing the client version to match the server version resolved the issue.

Gnostic answered 11/6, 2021 at 17:45 Comment(1)
This was it for me: using mysqldump version 8 connecting to a MySQL 5.6 server. (Trigger)

I had a similar problem on my server, where MySQL would apparently restart during the nightly backups. It was always the same database, but the actual table sometimes varied.

I tried several suggestions from the other answers here, but in the end it was just some cron job executing queries that never finished. It caused too little CPU and RAM usage to trigger the monitoring, but apparently enough that compressing the dump made the OOM killer kick in. I fixed the cron job, and the next backup was OK again.

Things to look for:

  • OOM? dmesg | grep invoked
  • Process killed? grep killed /var/log/kern.log
Starflower answered 3/5, 2021 at 8:55 Comment(0)

If none of the others work, you can use mysqldump's --where feature to break your huge dump into multiple smaller queries.

It might be tedious but it would most likely work.

e.g.

"C:\Program Files\MySQL\MySQL Workbench 8.0 CE\mysqldump.exe" --defaults-file="C:\...\my_password.cnf" ^
    --host=localhost --protocol=tcp --user=mydbuser --compress=TRUE --port=16861 ^
    --default-character-set=utf8 --quick --complete-insert --replace ^
    --where="last_modify > '2022-01-01 00:00:00'" ^
    > "C:\...\dump.txt"

my_password.cnf

[client]
password=xxxxxxxx

[mysqldump]
ignore-table=db.table1
ignore-table=db.table2

Then you just adjust the last_modify cutoff for each run, and your huge table is split into many small dumps.

Batsman answered 17/1, 2023 at 7:49 Comment(0)

For me, --result-file worked like a charm. This is in PowerShell on Windows, where you should use --result-file anyway instead of redirecting the output, because of encoding issues (see the mysqldump docs).

example: mysqldump -u xxxxx --password=xxxxx --max_allowed_packet=512M --lock-tables --host xxx.xxx.xxx.xxx --result-file="dump.sql" database-name

Gomel answered 2/2, 2024 at 11:15 Comment(0)

© 2022 - 2025 — McMap. All rights reserved.