How to retry the image pull in a Kubernetes Pod?

I am new to Kubernetes. I have an issue with one of my pods. When I run the command

 kubectl get pods

Result:

NAME                   READY     STATUS             RESTARTS   AGE
mysql-apim-db-1viwg    1/1       Running            1          20h
mysql-govdb-qioee      1/1       Running            1          20h
mysql-userdb-l8q8c     1/1       Running            0          20h
wso2am-default-813fy   0/1       ImagePullBackOff   0          20h

Due to an issue with the "wso2am-default-813fy" pod, I need to restart it. Any suggestions?

Procurator answered 26/10, 2016 at 9:58 Comment(0)

Usually, in case of "ImagePullBackOff", the pull is retried after a few seconds/minutes. If you want to retry manually, you can delete the old pod and recreate it. The one-line command to delete and recreate the pod is:

kubectl replace --force -f <yml_file_describing_pod>
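
For illustration, a minimal manifest that the placeholder could point at; the name and image here are hypothetical, not the actual wso2am spec:

cat <<'EOF' > pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: wso2am-default
spec:
  containers:
  - name: wso2am
    image: wso2/wso2am   # placeholder; use the image your pod actually needs
EOF
kubectl replace --force -f pod.yaml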
Verdi answered 26/10, 2016 at 10:25 Comment(5)
If you have a replica set/replication controller managing this pod, a new pod should be automatically created after killing it. (Assert)
^^ Absolutely. I'd be very worried if killing a pod made it disappear for good. (Infold)
I believe kubectl replace --force -f ... would be equivalent to delete followed by create. (Actinic)
If your pod was created via a Deployment, then just delete the pod; a new one will be created automatically. (Kenaz)
Why is the --force option required? (Projection)

In case you don't have the YAML file:

kubectl get pod PODNAME -n NAMESPACE -o yaml | kubectl replace --force -f -
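
For example, applied to the failing pod from the question (assuming it lives in the default namespace; adjust -n if not):

kubectl get pod wso2am-default-813fy -n default -o yaml | kubectl replace --force -f -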

Governance answered 12/7, 2017 at 0:13 Comment(0)

$ kubectl replace --force -f <resource-file>

If all goes well, you should see something like:

<resource-type> <resource-name> deleted
<resource-type> <resource-name> replaced

Details of this can be found in the Kubernetes documentation, on the "manage-deployment" and kubectl cheat sheet pages, at the time of writing.

Iconography answered 1/6, 2017 at 8:2 Comment(1)
How do I know what the resource file for the pod should look like? I saw the ./pod.json file but the link doesn't mention any template or similar. (Ansermet)
T
11

If the Pod is managed by a controller such as a Deployment or ReplicaSet, deleting it will cause a replacement Pod to be created and, potentially, scheduled onto another node:

$ kubectl delete po $POD_NAME

Replace it if it's an individual Pod:

$ kubectl get po -n $namespace $POD_NAME -o yaml | kubectl replace --force -f -
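
For the controller-managed case, a sketch of watching the replacement come up (pod name taken from the question; assumes a ReplicaSet or similar owns it):

$ kubectl delete po wso2am-default-813fy
$ kubectl get po -w   # -w watches until the controller creates a replacement pod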

Trusty answered 1/8, 2018 at 20:24 Comment(0)
C
4

Try deleting the pod; it will pull the image again:

kubectl delete pod <pod_name> -n <namespace_name>

Cordellcorder answered 23/8, 2019 at 9:27 Comment(0)
H
0

First try to see what's wrong with the pod:

kubectl logs -p <your_pod>
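
Note that with ImagePullBackOff the container has never started, so there may be no logs at all; in that case the pull error appears in the pod's events instead:

kubectl describe pod <your_pod>   # check the Events section for the pull failure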

In my case it was a problem with the YAML file.

So, I needed to correct the configuration file and replace it:

kubectl replace --force -f <yml_file_describing_pod>
Huffman answered 30/3, 2019 at 13:48 Comment(0)
T
0

Most probably, the ImagePullBackOff is due either to the image not being available in the registry or to an issue with the pod's YAML file.

What I would do is this:

kubectl get pod -n $namespace $POD_NAME -o yaml > pod.yaml
kubectl apply -f pod.yaml

I would also inspect pod.yaml to see why the earlier pod didn't work.

Topcoat answered 21/7, 2020 at 0:19 Comment(0)
F
0

There is also the possibility that the pull policy is not defined, or that Kubernetes is configured to pull from Docker Hub but fails due to network issues. Try setting up a local secure registry and executing a pull from it; that should work.
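
As a sketch, the pull policy is set per container in the pod spec; the registry host and image below are hypothetical:

spec:
  containers:
  - name: wso2am
    image: localhost:5000/wso2am:latest   # hypothetical local registry image
    imagePullPolicy: IfNotPresent         # pull only if the image is not cached locally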

Fernand answered 26/4, 2021 at 9:56 Comment(0)
