How do I force Kubernetes to re-pull an image?
I have the following replication controller in Kubernetes on GKE:

apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 2
  selector:
    app: myapp
    deployment: initial
  template:
    metadata:
      labels:
        app: myapp
        deployment: initial
    spec:
      containers:
      - name: myapp
        image: myregistry.com/myapp:5c3dda6b
        ports:
        - containerPort: 80
      imagePullPolicy: Always
      imagePullSecrets:
        - name: myregistry.com-registry-key

Now, if I say

kubectl rolling-update myapp --image=us.gcr.io/project-107012/myapp:5c3dda6b

the rolling update is performed, but no re-pull. Why?

Dressler answered 13/10, 2015 at 21:17 Comment(14)
You should use a different image when updating.Kynewulf
I gave a different image, just with the same tag. If it is necessary to give a different tag, well, I see no point in the imagePullPolicy field.Dressler
Out of interest, why would you want to do this? The only reason I can think of is using latest but if you use latest, it always pulls anyway.Hilary
I want to use a specific tag, but its newest version.Dressler
@TorstenBronger I think this is a breaking change in Kubernetes/Docker theory. The idea that you could pull image:tag (other than latest) at two different times and get two different images would be problematic. A tag is akin to a version number. It would be better practice to always change the tag when the image changes.Tiannatiara
It depends. There is software with a very stable API but security updates. Then, I want the latest version without having to say so explicitly.Dressler
I am running into this issue now. The reason I want to have the same tag is to make a distinction between my staging and production environments without creating separate projects. And I'm making sure that cloudbuild.yaml gets the branch name to create the image version. Is that bad practice?Sandstorm
@TorstenBronger Regarding using latest: don't do it. latest will pull the, well, most recent image with the latest tag. What you want is a SemVer range, ~1.2.3 for example. This will pull images with tags in the range >= 1.2.3 and < 1.3.0. As long as the image vendor follows SemVer, you know (and this is the important part) that no backward-breaking changes were added (on purpose) and that no new features were added (a possible security concern). Please, please never use latest in production systems.Supermarket
The question if and when to use latest is a different story. There are circumstance where it makes sense.Dressler
You could alternatively delete the deployment with the kubectl delete command and then reapply it, if this is a development-time activity.Videogenic
@TorstenBronger please mark the question as answered if the answer is clear to you.Chafer
But this question is marked answered for a long time already.Dressler
I wrote a script: `kubectl patch deployment $1 -p '{"spec":{"template":{"spec":{"containers":[{"name":"'$1'","imagePullPolicy":"Always"}]}}}}'; sleep 30; kubectl rollout restart deployment $1; sleep 120; kubectl patch deployment $1 -p '{"spec":{"template":{"spec":{"containers":[{"name":"'$1'","imagePullPolicy":"IfNotPresent"}]}}}}'`Kutz
gist.github.com/smyth64/8a32bb02a7354220234425e5a03dcffa I wrote a simple bash script, check it out :)Kutz

One has to put imagePullPolicy inside the container definition, not directly under the pod spec. However, I filed an issue about this because I find it odd; besides, there is no error message.

So, this spec snippet works:

spec:
  containers:
  - name: myapp
    image: myregistry.com/myapp:5c3dda6b
    ports:
    - containerPort: 80
    imagePullPolicy: Always
  imagePullSecrets:
    - name: myregistry.com-registry-key
Dressler answered 14/10, 2015 at 9:46 Comment(8)
imagePullPolicy (or tagging :latest) is good if you want to always pull, but it doesn't solve the question of pulling on demand.Jotting
Yes, I want to always pull, as stated in the question.Dressler
Using imagePullPolicy: Always inside the container definition will have kubernetes fetch images tagged with :latest whenever a newer version of them is pushed to the registry?Dupuy
@Dupuy No. imagePullPolicy: Always simply tells Kubernetes to always pull the image from the registry. Which image it pulls is configured by the image attribute. If you configure it as image: your-image:latest, then it will always pull the your-image image with the latest tag.Vania
I just had the same issue here with a cronjob. The "latest" tag was ignored, and only setting the job spec to the Always pull policy made k8s reload the image for the next execution (= container creation). Something seems to differ between these two options, despite every documentation treating them as equal.Pillow
@RomanGruber I have a similar issue with a cronjob: the pod (in Completed status) apparently didn't take the latest Docker image. Will it take it when the cronjob executes again, or do I need to recreate it? imagePullPolicy: AlwaysYardmaster
@Yardmaster - I seem to not be notified about all comments... Anyhow, it worked on my end; when I set the policy to "Always", it did pull the image again upon the next execution.Pillow
This is part of the solution. After this you need to trigger kubectl rollout restart deploy <name>Complemental
J
293

Kubernetes will pull upon Pod creation if either (see updating-images doc):

  • Using images tagged :latest
  • imagePullPolicy: Always is specified

This is great if you want to always pull. But what if you want to pull on demand: for example, if you want to use some-public-image:latest but only pull a newer version manually when you ask for it. You can currently:

  • Set imagePullPolicy to IfNotPresent or Never and pre-pull: manually pull the images on each cluster node so the latest is cached, then do a kubectl rolling-update or similar to restart the Pods (an ugly, easily broken hack!)
  • Temporarily change imagePullPolicy, do a kubectl apply, restart the pod (e.g. kubectl rolling-update), revert imagePullPolicy, redo a kubectl apply (ugly!)
  • Pull and push some-public-image:latest to your private repository and do a kubectl rolling-update (heavy!)

No good solution for on-demand pull. If that changes, please comment; I'll update this answer.
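The second workaround (temporarily flipping imagePullPolicy) can be sketched as a small script. This is an illustrative sketch, not a tested recipe: the myapp deployment/container names are placeholders, and `kubectl rollout restart` requires v1.15+; on older clusters, substitute deleting the pods or a rolling-update for step 2.

```shell
#!/bin/sh
# Sketch of the "temporarily change imagePullPolicy" workaround.
# DEPLOY/CONTAINER are placeholder names.
DEPLOY=myapp
CONTAINER=myapp

# 1. Force Always so the next pod creation re-pulls the tag.
kubectl patch deployment "$DEPLOY" -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"'"$CONTAINER"'","imagePullPolicy":"Always"}]}}}}'

# 2. Recreate the pods -- this is the step that actually triggers the pull.
kubectl rollout restart deployment "$DEPLOY"
kubectl rollout status deployment "$DEPLOY"

# 3. Revert so ordinary pod restarts use the cached image again.
kubectl patch deployment "$DEPLOY" -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"'"$CONTAINER"'","imagePullPolicy":"IfNotPresent"}]}}}}'
```

The patch touches only imagePullPolicy because a strategic merge patch merges the containers list by the name key rather than replacing it.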

Jotting answered 11/3, 2016 at 13:21 Comment(8)
You say Kubernetes will pull on Pod creation when using :latest - what about patching? Does it also always pull the newest/latest image? It seems not to work for me :(Hemicellulose
It depends if your patch forces the re-creation of a Pod or not. Most likely not, then it'll not pull again. You may kill the Pod manually, or tag with something unique and patch with that updated tag.Jotting
This is an answer to a different question. I asked for forcing a re-pull.Dressler
This allowed me to force a new pull from GCR. I had a :latest tag which pointed at a new image, and the kubectl rolling-update worked to update the pods.Peggypegma
Thanks. Went for the Pull & Push approach. Automated as much of it as possible with bash scripts but agreed, it's heavy :)Hluchy
Setting both those options does not work; I experimented with it. Kubernetes never pulls the new image, although the logs show that it is pulling the image.Cockadoodledoo
How about having a label for each environment, like "prod", "stage", "test", leaving imagePullPolicy at "Always", and pushing the label to whichever image shall be deployed?Bk
Do this only if your image tag is latest; otherwise be careful with the tag, for example if your image tag comes from your CI environment.Millhon

There is a command to do that directly:

Create a new kubectl rollout restart command that does a rolling restart of a deployment.

The pull request got merged. It is part of the version 1.15 (changelog) or higher.

Fsh answered 30/4, 2019 at 5:30 Comment(7)
Yes part of Issue: github.com/kubernetes/kubernetes/issues/13488Pomerleau
Yes, this is the best way to trigger an update in the new Kubernetes version 1.15.Pageantry
No, there isn't a command to do that directly. This only works with imagePullPolicy: Always set.Production
@Production together with kubectl rollout restart deploy <name>Complemental
Life saver of an answer!! Shocked it took me this long to sort this out. Thank you.Phenformin
@melroy-van-den-berg This is easy and perfect. Thanks.Surtout
Thank you :) It took me a while myself to figure out to use kubectl rollout restart deployComplemental

My hack during development is to change my Deployment manifest to add the latest tag and always pull like so

image: etoews/my-image:latest
imagePullPolicy: Always

Then I delete the pod manually

kubectl delete pod my-app-3498980157-2zxhd

Because it's a Deployment, Kubernetes will automatically recreate the pod and pull the latest image.
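A small variation on the same hack: since the generated pod name (my-app-3498980157-2zxhd above) changes on every restart, you can delete by label selector instead. The app=my-app label below is an assumption; use whatever labels your Deployment's pod template actually sets.

```shell
# Delete by label selector instead of the generated pod name.
# app=my-app is an assumed label -- check with: kubectl get pods --show-labels
kubectl delete pod -l app=my-app
# The Deployment controller recreates the pods, and with the :latest tag
# plus imagePullPolicy: Always the image is pulled again.
```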

Ichang answered 27/10, 2017 at 16:53 Comment(4)
I like taking advantage of the "desired state" premises of the "deployment" object... thanks for the suggestion!Munich
It's worth noting that strategy is viable only if failures in the service and downtime are tolerable. For development it seems reasonable, but I would never carry this strategy over for a production deploy.Blender
Edit the deployment, changing the imagePullPolicy to always and deleting the pod was enough for me, as Everett suggested. This is a development environment though. kubernetes.io/docs/concepts/containers/imagesSecund
The "Always" imagePullPolicy is the default for tags named "latest" or no tag. Therefore you don't need to specify it in this exampleDevorahdevore

A popular workaround is to patch the deployment with a dummy annotation (or label):

kubectl patch deployment <name> -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"

Assuming your deployment meets these requirements, this will cause K8s to pull any new image and redeploy.
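The same patch can be spelled out with a variable instead of the escaped inline quotes (myapp is a placeholder deployment name):

```shell
# Build the dummy-annotation patch in a variable for readability.
# date +%s yields a value that changes every second, so each run modifies
# the pod template and therefore starts a new rollout.
DEPLOY=myapp
PATCH="{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"$(date +%s)\"}}}}}"
kubectl patch deployment "$DEPLOY" -p "$PATCH"
kubectl rollout status deployment "$DEPLOY"   # watch the new pods come up
```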

Ectogenous answered 18/3, 2019 at 12:15 Comment(3)
Yes, I use an annotation for this.Dressler
what annotation?Cairistiona
Another sophisticated solution would be a combination of both, i.e. adding an annotation and setting imagePullPolicy to Always. Annotations like deployment.kubernetes.io/revision: "v-someversion" and kubernetes.io/change-cause: the reason can be quite helpful and head towards immutable deployments.Thoracotomy

Now, the command kubectl rollout restart deploy YOUR-DEPLOYMENT combined with an imagePullPolicy: Always policy will allow you to restart all your pods with the latest version of your image.

Biocatalyst answered 20/9, 2019 at 15:10 Comment(0)
  1. Specify the strategy as:

     strategy:
       type: Recreate
       rollingUpdate: null

  2. Make sure you have a different annotation for each deployment. Helm does it like:

     template:
       metadata:
         labels:
           app.kubernetes.io/name: AppName
           app.kubernetes.io/instance: ReleaseName
         annotations:
           rollme: {{ randAlphaNum 5 | quote }}

  3. Specify the image pull policy as Always:

     containers:
       - name: {{ .Chart.Name }}
         image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
         imagePullPolicy: Always
Chafer answered 22/8, 2020 at 8:58 Comment(1)
Warning: changing the annotation value leads to the pod recreation even if the docker image has not been changed!Etti
# Linux

kubectl patch deployment <name> -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"

# Windows (PowerShell)

kubectl patch deployment <name> -p (-join("{\""spec\"":{\""template\"":{\""metadata\"":{\""annotations\"":{\""date\"":\""" , $(Get-Date -Format o).replace(':','-').replace('+','_') , "\""}}}}}"))
Scanties answered 29/12, 2019 at 8:11 Comment(0)

This answer aims to force an image pull in a situation where your node has already downloaded an image with the same name: even though you push a new image to the container registry, when you spin up pods the node says "image already present".

For a case in Azure Container Registry (AWS and GCP probably provide this too):

  1. Look at your Azure Container Registry and, by checking the manifest creation date, identify the most recent image.

  2. Copy its digest hash (which has the format sha256:xxx...xxx).

  3. Scale down your current replicas by running the command below. Note that this will obviously stop your containers and cause downtime.

kubectl scale --replicas=0 deployment <deployment-name> -n <namespace-name>

  4. Get a copy of the deployment.yaml by running:

kubectl get deployments.apps <deployment-name> -o yaml > deployment.yaml

  5. Change the image field from <image-name>:<tag> to <image-name>@sha256:xxx...xxx, save the file, and apply it with kubectl apply -f deployment.yaml.

  6. Now you can scale up your replicas again. The new image will be pulled via its unique digest.

Note: it is assumed that imagePullPolicy: Always is set on the container.

Geerts answered 26/2, 2021 at 9:51 Comment(0)

Having gone through all the other answers and not being satisfied, I found much better solution here: https://cloud.google.com/kubernetes-engine/docs/how-to/updating-apps

It works without using the latest tag or imagePullPolicy: Always. It also works if you push a new image to the same tag, because you specify the image's sha256 digest.

Steps:

  1. get image SHA256 from docker hub (see image below)
  2. find your deployment using kubectl get deployments
  3. kubectl set image deployment/<your-deployment> <your_container_name>=<some/image>@sha256:<your sha>
  4. kubectl scale deployment <your-deployment> --replicas=0
  5. kubectl scale deployment <your-deployment> --replicas=<original replica count>

Note: Rollout might also work instead of scale but in my case we don't have enough hardware resources to create another instance and k8s gets stuck.

(image: Docker Hub sha256 digest location)
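Since changing the pod template's image already triggers a rolling update, steps 4-5 can usually be replaced by the set image command alone, avoiding the scale-to-zero downtime asked about in the comments. A sketch with placeholder names (keep the digest you copied from the registry; the value below is not filled in):

```shell
# No-downtime variant: pinning the deployment to the digest starts a
# rolling update by itself, so no scaling to zero is needed.
DEPLOY=my-deployment          # placeholder
CONTAINER=my-container        # placeholder
DIGEST='sha256:<your sha>'    # paste the digest copied from the registry
kubectl set image deployment/"$DEPLOY" "$CONTAINER=some/image@$DIGEST"
kubectl rollout status deployment/"$DEPLOY"
```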

Unifoliolate answered 29/7, 2021 at 8:29 Comment(2)
Small correction. In the second point, it should be <your_container_name> instead of <your-pod-name>Crankcase
Setting replicas to 0 will cause a service outage. Is there a way to do this without taking the app down?Misunderstanding

Apparently now when you run a rolling-update with the --image argument the same as the existing container image, you must also specify an --image-pull-policy. The following command should force a pull of the image when it is the same as the container image:

kubectl rolling-update myapp --image=us.gcr.io/project-107012/myapp:5c3dda6b --image-pull-policy Always

Counterpane answered 19/12, 2016 at 23:6 Comment(1)
Since Kubernetes 1.18 this feature is removed, as stated here: v1-18.docs.kubernetes.io/docs/setup/release/notes/#kubectlFsh

The rolling update command, when given an image argument, assumes that the image is different than what currently exists in the replication controller.

Harken answered 14/10, 2015 at 3:7 Comment(7)
Does this mean the image tag (aka name) must be different?Dressler
Yes, the image name must be different if you pass the --image flag.Harken
As my own answer says, it also works if the image name is the same. It was simply that the imagePullPolicy was in the wrong place. In my defence, the k8s 1.0 docs are erroneous in this respect.Dressler
Gotta love when the docs are out of sync with the behavior. :/Harken
The URL is outdated, use this one -> github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/… (Not sure which line, though)Williamson
That url is outdated too.Expertize
kubectl has been moved into the "staging" part of the kubernetes repository (in preparation for moving to a separate repo in the future). The current link to the file is github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/…Harken

You can define imagePullPolicy: Always in your deployment file.

Automatism answered 23/8, 2019 at 7:32 Comment(1)
Works for a dev environment, but for prod use a rolling-update strategy.Nutbrown

I have used kubectl rollout restart for my Spring Boot API and it works.

kubectl rollout restart -f pod-staging.yml --namespace test

Yaml for the Deployment:

apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "my-api"
  labels:
    app: "my-api"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "my-api"
  template:
    metadata:
      labels:
        app: "my-api"
    spec:
      containers:
        - name: my-api
          image: harbor.url.com/mycompany/my-api:staging
          ports:
            - containerPort: 8099
              protocol: TCP
          imagePullPolicy: Always
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8099
            initialDelaySeconds: 90
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 8099
            initialDelaySeconds: 90
            periodSeconds: 5
          envFrom:
            - configMapRef:
                name: "my-api-configmap"
          env:
            - name: "TOKEN_VALUE"
              valueFrom:
                secretKeyRef:
                  name: "my-api-secret"
                  key: "TOKEN_VALUE"
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "2048Mi"
              cpu: "1000m"
      imagePullSecrets:
        - name: "my-ci-user"
Samford answered 2/8, 2022 at 14:51 Comment(0)

Defining imagePullPolicy: Always in the deployment would do.
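For completeness, a minimal sketch of where that line belongs (names are placeholders; as the accepted answer notes, it must sit at the container level, not the pod spec level):

```yaml
spec:
  template:
    spec:
      containers:
      - name: myapp                          # placeholder
        image: myregistry.com/myapp:stable   # placeholder
        imagePullPolicy: Always              # container-level, next to image
```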

Men answered 3/2 at 5:6 Comment(1)
Your answer could be improved with additional supporting information. Please edit to add further details, such as citations or documentation, so that others can confirm that your answer is correct. You can find more information on how to write good answers in the help center.Proulx

imagePullPolicy: Always ensures the image is pulled every single time a new pod is created (in any case: scaling the replicas up, or a pod dying and a new one being created).

But if you want to update the image of a currently running pod, a Deployment is the best way. It gives you a flawless update without any problem (mainly when you have a persistent volume attached to the pod) :)

Labe answered 4/6, 2020 at 23:35 Comment(0)

The below solved my problem:

kubectl rollout restart deployment/<deployment-name>
Haihaida answered 17/9, 2022 at 16:6 Comment(0)

If you want to perform a direct image update on a specific resource, you can also use kubectl set image.

https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

Arbitrament answered 5/9, 2021 at 11:50 Comment(1)
Please add further details to expand on your answer, such as working code or documentation citations.Proulx

Either delete all the pods manually so that they get recreated, pulling the image again,

or

run the command below:

kubectl rollout restart deployment/<deployment-name>
e.g. kubectl rollout restart deployment/nginx

This command recreates all the pods.

For both scenarios, imagePullPolicy should be set to Always.

Denti answered 27/1, 2022 at 13:3 Comment(0)

A one-liner solution based on invalidating Deployment hash by adding some new unique data, here: a timestamp-based environment variable (just like adding a "volatile" ENV to bust docker cache during image builds):

kubectl set env deployment/nginx REDEPLOY_TIME="$(date)"

or when using oc Client Tools under OCP/OKD:

oc set env dc/nginx REDEPLOY_TIME="$(date)"

It will trigger an automatic rolling re-deployment/re-pull even in older installations of k8s (not just in v1.15 or above, where kubectl rollout restart is the correct solution as described in this answer). In fact I verified this workaround even in the archaic Openshift 3.11 based on k8s 1.11 from mid-2018!

Note we need the usual prerequisites of imagePullPolicy: Always and a "rolling" container image tag such as latest.

Note: kudos and the original idea (using a YAML Deployment manifest file and sed) go back to this comment in the rather long-running k8s issue devoted to this opinionated choice initially made by the k8s devs, now thankfully gone.

Input answered 9/5, 2023 at 11:57 Comment(0)

I developed a lightweight Kubernetes tool, URunner, to automatically restart deployment resources while keeping the same tag (e.g. :latest).

URunner can also be installed using Helm (Artifacthub link).

It uses the Docker V2 API standard to continuously check whether a tag's digest has changed and, if so, performs the needed restarts. It is therefore compatible with ALL available container registries (e.g. AWS ECR, GCP, Harbor, DigitalOcean registry...).

Leclaire answered 13/1 at 14:30 Comment(0)