Docker push takes a really long time

I have a deployment setup with Docker that works as follows:

  1. Build an image on my dev machine via a Dockerfile
  2. Push the image to a registry (I tried both Docker Hub and Quay.io)
  3. Pull this image to the deployment server, and restart the container.

I'd like to do these steps as quickly as possible, but they take an incredibly long time. Even for an image of modest size (750MiB, including the standard ubuntu and friends), after a small modification, it takes 17 minutes to deploy. I optimized the order of items in my Dockerfile, so it actually hits the cached images most of the time. This doesn't seem to make a difference.
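
For context, the layer ordering I mean is roughly this (a simplified sketch, not my real Dockerfile; the package names and file paths are placeholders):

FROM ubuntu:14.04

# Rarely-changing system packages first, so these layers stay cached
RUN apt-get update && apt-get install -y python python-pip

# Dependency manifest next; rebuilt only when requirements.txt changes
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt

# Application code last, since it changes on almost every build
COPY . /app
CMD ["python", "/app/app.py"]

With this ordering, a small code change should only rebuild (and ideally only push) the final layers.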

The main culprit is the docker push step. For both Docker Hub and Quay.io, it takes an unreasonably long time to push images. In one simple benchmark, I executed docker push twice back to back, so all of the layers were already on the registry and I only saw lines like these:

...
bf84c1d841244f: Image already pushed, skipping
...

But if I time the push, the performance is horrendous. Pushing to Quay.io takes 3.5 minutes when all the images are already on the server! Pushing to Docker Hub takes about 12 minutes!
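
For reference, the timing is nothing more elaborate than this (the image name is a placeholder for my actual repository):

time docker push myuser/myimage:latest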

There is clearly something wrong somewhere: many people are using Docker in production, and these times are the exact opposite of continuous delivery.

How can I make this run quicker? Do others also see this kind of performance? Is it down to the registry services, or is it somehow related to my local machine?

I am using Docker under Mac OS X.

Bartle answered 28/10, 2015 at 14:20 Comment(0)

Just a note: I run my own Docker registry, which is local to the machine I am issuing the docker push command on, and it still takes an inordinate amount of time. It is definitely not a disk I/O issue, since the disks are backed by SSDs (and to be clear, they sustain ~500+ MB/s for everything else that uses them). Yet docker push seems to take just as long as if I were sending the image to a remote site. I think there is something beyond bandwidth going on. My suspicion is that, regardless of the fact that my registry is local, docker is still attempting to use the NIC to transfer the data (which seems to make sense, given that the push destination is a URI and the registry is itself a container).

That being said, I can copy the same file(s) to where they will ultimately reside in the local registry's storage orders of magnitude faster than the push command can put them there. Perhaps the workaround is simply that. In any case, one thing is clear: the problem is not bandwidth per se, but more likely the data path in general.
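
To give a rough idea of the comparison I mean (the image name and registry port are placeholders, not my exact setup), pushing through the registry API versus simply writing the same image data to local disk:

time docker push localhost:5000/myimage:latest
time docker save -o /tmp/myimage.tar myimage:latest

The second command writes the image layers straight to disk and finishes far sooner, which is what points me at the data path rather than raw disk or network throughput.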

At any rate, running a local registry will likely not (totally) solve the OP's issue. While I have only just started to investigate, I suspect a code change to docker would be needed to resolve this. I don't think it is a bug so much as a design challenge: URIs and host-to-host communication require a network stack, even when the source and destination are the same machine/host/container.

Surfacetoair answered 6/2, 2018 at 19:10 Comment(3)
Your post, though useful, does not completely answer the question. You could try making it shorter and adding it as a comment. – Joy
I actually meant it to be a comment, sorry. I am kind of new to iPosts. I'm a data hermit. – Surfacetoair
No harm done :-). You can add a comment by clicking on add a comment under the question. You can also delete your answer by clicking delete under your answer. Answers are voted on and should address the question and meet a minimum quality bar. See here for what an answer should look like. – Joy

As was said in the previous answer, you could use a local registry. It is not very hard to install and use; here you can find the information on how to get started with it. It could be much faster, because you are not limited by the upload speed of your provider. By the way, you can always push an image from a local registry to Docker Hub or to another local registry (for example, one installed in your customer's network).
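
A minimal sketch of starting a local registry and pushing to it might look like this (registry:2 on port 5000 is the usual default; myimage and myuser are placeholders):

docker run -d -p 5000:5000 --name registry registry:2
docker tag myimage:latest localhost:5000/myimage:latest
docker push localhost:5000/myimage:latest

If you later want the same image on Docker Hub, you can retag it and push again:

docker tag localhost:5000/myimage:latest myuser/myimage:latest
docker push myuser/myimage:latest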

One more thing I could suggest, in terms of continuous integration and delivery, is to use a continuous integration server that automatically builds your images on a Linux host, where you don't need boot2docker or docker-machine. For test and development purposes, you can build your images locally without pushing to a remote registry.
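
For local testing, that can be as simple as this (the image tag is arbitrary):

docker build -t myimage:dev .
docker run --rm myimage:dev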

Toh answered 28/10, 2015 at 14:53 Comment(6)
Yeah, the 3.5 minutes to Quay is probably due to the OP's broadband upload bandwidth. – Sanguinolent
@stanislav Thanks for the replies. I will look into running a local registry, but just to be clear, you think the issue is that Quay and Docker Hub do not have enough capacity for everyone? – Bartle
@AdrianMouat Thanks for the comment, but how can my upload bandwidth be the bottleneck if, for every single image, it says "skipping"? Are you suggesting that Docker uploads the entire image, computes the checksum on the registry, compares it with the previous checksum, and then re-uploads it if they are different? – Bartle
Oh, sorry, I didn't realise it was an empty upload. No, I think it should just upload the metadata and compare. Annoying that it takes so long. – Sanguinolent
But relying on the Docker Hub for production is probably a bad idea; the service is often a bit up and down. – Sanguinolent
@Bartle it's just a suggestion. I used Docker Hub for a while, just for learning purposes and never for production. But I can imagine that during peak load these services could see reduced performance. I saw some issues about slow pushes to Docker Hub on the docker GitHub account, so I would prefer not to rely on it without a good reason. – Toh

For this reason (slow pushes and pulls over the public internet), organizations typically run their own registries on the local network. This also keeps organizations in control of their own data and avoids relying on an external service.

You will also find that cloud hosts such as Google Container Engine and the Amazon Container Service offer hosted registries to provide users with fast, local downloads.
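
Whichever registry you use, the workflow is the same as with Docker Hub: tag the image with the registry's hostname and push it (the hostname and repository below are placeholders, not a real endpoint):

docker tag myimage:latest registry.example.com/myproject/myimage:latest
docker push registry.example.com/myproject/myimage:latest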

Sanguinolent answered 28/10, 2015 at 14:36 Comment(1)
I'm afraid this answer causes some confusion. The points about "controlling my own data" and "relying on external services" are irrelevant: all the code runs entirely in the cloud, and my source code is on GitHub anyway. Second, I don't see how cloud providers can have fast, local registries; from what I understand, those are just registries devoted to a single company/account. So are you suggesting that the source of this performance issue is that Quay and Docker are not running their services with enough capacity? – Bartle
