How to limit bandwidth used by mysqldump

I have to dump a large database over a network pipe that doesn't have much bandwidth and that other people need to use concurrently. If I just run it, it soaks up all the bandwidth, latency soars, and everyone else gets messed up.

I'm aware of the --compress flag to mysqldump, which helps somewhat.

How can I do this without soaking up all the bandwidth over this connection?

Update:

The suggestion to copy a dumpfile using scp with the -l flag is a good one, but I should note that I don't have SSH access to the database server.

Mattox answered 22/12, 2008 at 17:49 Comment(1)
Could you please give me a more detailed description of your environment, as well as which servers you have access to and in what manner? – Stylish

trickle?

trickle is a portable lightweight userspace bandwidth shaper

You don't mention how you are actually transferring the DB dump, but if the transfer happens over TCP/IP, trickle should work. For example, if the receiving machine listens with nc (for example: nc -l 1234 > backup.sql), the following command will transfer the backup at no more than 20 KB/s:

mysqldump [database name] | trickle -u 20 nc backup.example.com 1234
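
If you also compress the stream before it hits the shaped pipe, the cap hurts less. A rough sketch, assuming gzip is available on both ends (the host, port and rate are the same placeholders as above):

# Receiving end: listen on port 1234 and decompress as the data arrives
nc -l 1234 | gunzip > backup.sql

# Sending end: compress first, then cap nc's upload to roughly 20 KB/s
mysqldump [database name] | gzip | trickle -u 20 nc backup.example.com 1234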
Salivation answered 29/12, 2008 at 11:17 Comment(0)

You will need access to a Linux machine (sorry, I'm a Linuxy sort of person).

An ingress policy can decrease the amount of incoming traffic, but the server on the other side needs to have a fairly well-behaved TCP/IP stack.

# Attach an ingress qdisc so incoming packets can be policed
tc qdisc add dev eth0 handle ffff: ingress
# Police traffic arriving from the database server to 256 kbit/s,
# dropping whatever exceeds the burst allowance
tc filter add dev eth0 parent ffff: protocol ip prio 50 \
  u32 match ip src server.ip.address/32 police rate 256kbit \
  burst 10k drop flowid :1
# Shape outgoing traffic to 256 kbit/s with a token bucket filter
tc qdisc add dev eth0 root tbf \
  rate 256kbit latency 25ms burst 10k
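
Once the dump has finished you will probably want to remove the shaping again; a small addition on my part, assuming the same interface:

# Delete the ingress policer and the root token bucket filter
tc qdisc del dev eth0 ingress
tc qdisc del dev eth0 root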

You can find more information on ingress filters in the Advanced Routing HOWTO.

http://www.linux.org/docs/ldp/howto/Adv-Routing-HOWTO/index.html


If you are doing it in linux, you can dump the file locally, compress it and use scp to copy the file with the -l switch to limit the bandwidth used:

-l limit Limits the used bandwidth, specified in Kbit/s.

e.g.

scp -l 16 dumpfile remotehost:filepathandname
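
Putting it together, a rough sketch (the database name, paths and limit are just placeholders):

# Dump and compress locally first, so nothing touches the network yet
mysqldump mydb | gzip > /tmp/mydb.sql.gz
# Then copy it across at no more than 16 Kbit/s
scp -l 16 /tmp/mydb.sql.gz remotehost:/backups/mydb.sql.gz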

Stylish answered 23/12, 2008 at 11:52 Comment(1)
This is a good suggestion, but sadly I don't have SSH access to this server. – Busk

One trick I've used is to dump in CSV format rather than as INSERT statements. It doesn't change how much bandwidth you use per unit of time, but it can reduce the total number of bytes you're pulling out.
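
A rough sketch of that, using mysqldump's --tab option (the directory and database name are placeholders; note that --tab makes the MySQL server itself write the data files, so it only works when the server can write to that directory):

# Produces one .sql file (schema) and one comma-separated .txt file (data) per table
mysqldump --tab=/tmp/dump --fields-terminated-by=',' mydb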

Tepic answered 22/12, 2008 at 17:56 Comment(0)

If you send it over TCP, the bandwidth will be shared equally between all parties. If you want to lower the speed even more, you need to shape your device to only allow a certain amount of data going out.

Biff answered 22/12, 2008 at 17:53 Comment(0)

On the client, you can run a proxy that will limit the speed of the download. You can also control the number of connections, etc.

If you are on windows, this should work nicely:

http://www.youngzsoft.net/ccproxy/index.html

Lakin answered 22/12, 2008 at 19:27 Comment(0)

Are you using a transactional table engine like InnoDB? Is this your master database? Be very careful! mysqldump will hold table locks and disrupt the production use of your database, and slowing down the backup will only make that period of disruption longer. Always mysqldump to a local disk, and then copy the dump from there.
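
If the tables are InnoDB, one way to avoid holding locks for the whole dump (a sketch of my own, not something prescribed above; the database name and path are placeholders) is to take a consistent snapshot inside a transaction:

# Safe only for transactional engines such as InnoDB
mysqldump --single-transaction --skip-lock-tables mydb > /tmp/mydb.sql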

One other approach might be to set up a replication slave at your remote site, and take your backups from that. Then database updates will trickle over your contended link instead of coming down in one big lump.

Another alternative: do your backups when no one else is using the network :)

Lambert answered 29/12, 2008 at 11:59 Comment(1)
In this case it's a production database, but it's read-only from production, so shared locks won't affect anything. You can give mysqldump flags not to use any locking, though of course this could cause all sorts of other data integrity issues were the tables being updated. – Busk
