Docker parallel operations limit
Asked Answered
Is there a limit to the number of parallel Docker push/pull operations you can run?

E.g. if you thread docker pull / docker push commands so that they pull/push different images at the same time, what is the upper limit on the number of parallel pushes/pulls?

Or alternatively:

On one terminal you run docker pull ubuntu, on another docker pull httpd, etc. What limit would Docker support?

Palacios answered 18/4, 2017 at 18:25 Comment(0)

The limits are set in the daemon configuration file (on a Linux-based OS it is located at /etc/docker/daemon.json, and on Windows at C:\ProgramData\docker\config\daemon.json).

Open /etc/docker/daemon.json (if it doesn't exist, create it).

Add the values for pushes/pulls to set the parallel operations limit:

{
    "max-concurrent-uploads": 1,
    "max-concurrent-downloads": 1
}

Restart the daemon: sudo service docker restart (or sudo systemctl restart docker on systemd-based distributions)
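If you script this change, it is safer to merge the keys into any existing daemon.json rather than overwrite the file (see the comment below about accidentally clobbering the default config). A minimal sketch, assuming the path and the helper name are your own choices:

```python
import json
from pathlib import Path

def set_concurrency_limits(config_path, downloads=3, uploads=5):
    """Merge the concurrency keys into daemon.json, preserving any existing settings."""
    path = Path(config_path)
    # Load the existing config if the file is present; start from an empty dict otherwise.
    config = json.loads(path.read_text()) if path.exists() else {}
    config["max-concurrent-downloads"] = downloads
    config["max-concurrent-uploads"] = uploads
    path.write_text(json.dumps(config, indent=4))
    return config
```

After running it against /etc/docker/daemon.json (with sufficient permissions), restart the daemon for the new limits to take effect.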

Bouillon answered 24/12, 2018 at 16:19 Comment(5)
Should be the accepted answer, this was much easier to follow. – Dissidence
Good to know. I answered because the accepted one was too long 😁 – Bouillon
for Docker Desktop on mac, you need to go to Preferences | Docker Engine and specify the config there – Jennettejenni
"(If it doesn't exist, create it)" Hint: if it doesn't exist, it's most likely that your current user doesn't have permission to read /etc/docker, rather than the file actually being absent. This assumption is dangerous because you may accidentally overwrite the default config. – Mesopotamia
docs.docker.com/config/daemon "Edit the daemon.json file, which is usually located in /etc/docker/. You may need to create this file, if it does not yet exist." πŸ€” – Bouillon

The docker daemon (dockerd) has two flags:

  --max-concurrent-downloads int          Set the max concurrent downloads for each pull
                                          (default 3)
  --max-concurrent-uploads int            Set the max concurrent uploads for each push
                                          (default 5)

The upper limit will likely depend on the number of open files you permit for the process (ulimit -n). There will be some overhead of other docker file handles, and I expect that each push and pull opens multiple handles, one for the remote connection, and another for the local file storage.

To compound the complication of this, each push and pull of an image will open multiple connections, one per layer, up to the concurrent limit. So if you run a dozen concurrent pulls, you may have 50-100 potential layers to pull.
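As a back-of-envelope illustration (the layer count here is a made-up assumption, not a measurement): with the default max-concurrent-downloads of 3 applied per pull, a dozen simultaneous pulls of images averaging 8 layers each queue roughly 96 layer downloads, of which up to 36 can be in flight at once:

```python
# Hypothetical estimate; the average layer count is an assumption.
pulls = 12                     # simultaneous `docker pull` commands
layers_per_image = 8           # assumed average layers per image
max_concurrent_downloads = 3   # dockerd default, applied per pull

# Total layer downloads queued across all pulls.
total_layers = pulls * layers_per_image
# Concurrent layer downloads: each pull runs at most
# max_concurrent_downloads layer transfers at a time.
in_flight = pulls * min(layers_per_image, max_concurrent_downloads)

print(total_layers)  # 96 layer downloads queued overall
print(in_flight)     # up to 36 in flight at once
```

Each of those in-flight transfers holds at least one network connection and one file handle, which is why the ulimit mentioned above becomes the practical ceiling.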

While docker does allow these limits to be increased, there's a practical point of diminishing (if not negative) returns from opening more concurrent connections. Assuming the bandwidth to the remote registry is limited, more connections will simply split that bandwidth, and docker itself will wait until the very first layer finishes before it starts unpacking that transmission. Also, any aborted docker pull or push loses the partial transmission of each layer, so more concurrent connections increase the amount of data you may need to retransmit.

The default limits are well suited for a development environment, and if you find the need to adjust them, I'd recommend measuring the performance improvement before trying to find the max number of concurrent sessions.

Heliacal answered 19/4, 2017 at 3:16 Comment(3)
Thanks - this is so helpful. That said, if you had to stress-test a Docker Registry endpoint to see how well it scales, how many concurrent connections it can support, and how it performs under load in general, would it be wise to fiddle with these parameters and hit the Docker Registry from multiple endpoints? Grateful if you could shed some light on this topic too. – Palacios
If you control both the registry, the network, and the endpoints, this could be optimized. But I fear the return on the effort will be small, unless your use case is unusual. Most would see much more benefit to optimizing the images themselves. – Heliacal
For Windows user, this can be done by creating a daemon.json file in C:\ProgramData\Docker\config. See docs.docker.com/engine/reference/commandline/dockerd/… – Ironic

For anyone using Docker Desktop for Windows with WSL2: you can (and should) set these options under Settings | Docker Engine:

[Screenshot: Docker Desktop Docker Engine settings]

Byelostok answered 20/5, 2021 at 14:27 Comment(0)

© 2022 - 2024 β€” McMap. All rights reserved.