The docker daemon (dockerd) has two flags for this:
    --max-concurrent-downloads int   Set the max concurrent downloads for each pull (default 3)
    --max-concurrent-uploads int     Set the max concurrent uploads for each push (default 5)
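If you do need to change these, they can also be set persistently in the daemon's configuration file rather than on the dockerd command line. A minimal sketch, assuming the default /etc/docker/daemon.json location and using 6 for both values purely as an example:

    {
      "max-concurrent-downloads": 6,
      "max-concurrent-uploads": 6
    }

Restart the daemon afterwards (e.g. sudo systemctl restart docker on a systemd host) so the new settings take effect.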
The upper limit will likely depend on the number of open files you permit for the process (ulimit -n). There will be some overhead from docker's other file handles, and I expect each push and pull opens multiple handles: one for the remote connection and another for the local file storage.
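To see how much headroom you have before adjusting anything, you can compare the daemon's open-file limit with what it currently has open. A rough sketch, assuming a typical Linux host where dockerd runs as root:

    # soft/hard limit on open files for the running daemon
    sudo grep "Max open files" /proc/$(pidof dockerd)/limits

    # how many file handles the daemon has open right now
    sudo ls /proc/$(pidof dockerd)/fd | wc -l

On systemd hosts, the usual place to raise that limit is a LimitNOFILE setting in the docker service unit (or an override file).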
Compounding this, each push or pull of an image opens multiple connections, one per layer, up to the concurrent limit. So if you run a dozen concurrent pulls, you may have 50-100 layers waiting to transfer.
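To get a feel for that number, you can check how many layers a given image has from its local metadata; the image name below is only an illustration:

    # count the layers in a locally stored image
    docker image inspect --format '{{len .RootFS.Layers}}' ubuntu:22.04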
While docker does allow these limits to be increased, there's a practical point of diminishing, if not negative, returns from opening more concurrent connections. Assuming the bandwidth to the remote registry is limited, more connections simply split that bandwidth, and docker will not start unpacking a pull until the very first layer has finished downloading. Any aborted docker pull or push also loses the partial transmission of each in-flight layer, so more concurrent connections increase the amount of data you may need to retransmit.
The default limits are well suited to a development environment, and if you find the need to adjust them, I'd recommend measuring the performance improvement at each step rather than simply searching for the maximum number of concurrent sessions.
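For that measurement, a crude but workable approach is to time the same pull under each setting, removing the image between runs so the transfer isn't served from the local cache (layers shared with other images will still be cached). The image name here is only an example:

    # clear the local copy, then time a full pull
    docker image rm ubuntu:22.04
    time docker pull ubuntu:22.04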