My kubernetes pods keep crashing with "CrashLoopBackOff" but I can't find any log

This is what I keep getting:

[root@centos-master ~]# kubectl get pods
NAME               READY     STATUS             RESTARTS   AGE
nfs-server-h6nw8   1/1       Running            0          1h
nfs-web-07rxz      0/1       CrashLoopBackOff   8          16m
nfs-web-fdr9h      0/1       CrashLoopBackOff   8          16m

Below is the output of kubectl describe pods:

Events:
  FirstSeen LastSeen    Count   From                SubobjectPath       Type        Reason      Message
  --------- --------    -----   ----                -------------       --------    ------      -------
  16m       16m     1   {default-scheduler }                    Normal      Scheduled   Successfully assigned nfs-web-fdr9h to centos-minion-2
  16m       16m     1   {kubelet centos-minion-2}   spec.containers{web}    Normal      Created     Created container with docker id 495fcbb06836
  16m       16m     1   {kubelet centos-minion-2}   spec.containers{web}    Normal      Started     Started container with docker id 495fcbb06836
  16m       16m     1   {kubelet centos-minion-2}   spec.containers{web}    Normal      Started     Started container with docker id d56f34ae4e8f
  16m       16m     1   {kubelet centos-minion-2}   spec.containers{web}    Normal      Created     Created container with docker id d56f34ae4e8f
  16m       16m     2   {kubelet centos-minion-2}               Warning     FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "web" with CrashLoopBackOff: "Back-off 10s restarting failed container=web pod=nfs-web-fdr9h_default(461c937d-d870-11e6-98de-005056040cc2)"

I have two pods, nfs-web-07rxz and nfs-web-fdr9h, but if I run kubectl logs nfs-web-07rxz, with or without the -p option, I don't see any logs for either pod.

[root@centos-master ~]# kubectl logs nfs-web-07rxz -p
[root@centos-master ~]# kubectl logs nfs-web-07rxz

This is my ReplicationController YAML file:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web
spec:
  replicas: 2
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
      - name: web
        image: eso-cmbu-docker.artifactory.eng.vmware.com/demo-container:demo-version3.0
        ports:
          - name: web
            containerPort: 80
        securityContext:
          privileged: true

My Docker image was made from this simple Dockerfile:

FROM ubuntu
RUN apt-get update
RUN apt-get install -y nginx
RUN apt-get install -y nfs-common

I am running my Kubernetes cluster on CentOS 1611, kube version:

[root@centos-master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0", GitCommit:"86dc49aa137175378ac7fba7751c3d3e7f18e5fc", GitTreeState:"clean", BuildDate:"2016-12-15T16:57:18Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0", GitCommit:"86dc49aa137175378ac7fba7751c3d3e7f18e5fc", GitTreeState:"clean", BuildDate:"2016-12-15T16:57:18Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}

If I run the Docker image with docker run, it runs without any issue; only through Kubernetes do I get the crash.

Can someone help me out? How can I debug this without seeing any logs?

Clevelandclevenger answered 12/1, 2017 at 3:13 Comment(3)
Can you try adding a command to the pod yaml?Pichardo
Check the logs with kubectl logs -f <pod_name>; it could be a (server/container) startup issue.Strained
You could also run kubectl get events to see what is causing the crash loop.Linton
157

As @Sukumar commented, your Dockerfile needs a command (CMD or ENTRYPOINT) to run, or your ReplicationController needs to specify one.

The pod is crashing because it starts up and then immediately exits, so Kubernetes restarts it and the cycle continues.
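
For example, a minimal sketch of either fix; the nginx invocation is an assumption based on the Dockerfile in the question, which installs nginx. In the Dockerfile, keep nginx in the foreground:

FROM ubuntu
RUN apt-get update && apt-get install -y nginx nfs-common
CMD ["nginx", "-g", "daemon off;"]

Or specify the command in the ReplicationController's container spec:

containers:
- name: web
  image: eso-cmbu-docker.artifactory.eng.vmware.com/demo-container:demo-version3.0
  command: ["nginx", "-g", "daemon off;"]
  ports:
  - name: web
    containerPort: 80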

Cryolite answered 18/1, 2017 at 2:50 Comment(2)
If we have a proper Dockerfile and are still getting the error, what may be the reason? I am getting the same error even though I properly added the Command. And when I test the standalone docker image without a kubernetes deployment, I do get the output, so it is not a problem with the Dockerfile. Is it something related to the deployment? Here I added the whole issue that I am facing, #56001852. Can you please look at that?Lynnalynne
There is a really good blog that goes in depth on what a CrashLoopBackoff means and the various cases where this can happen: managedkube.com/kubernetes/pod/failure/crashloopbackoff/k8sbot/…Xanthochroism
122
# Show details of specific pod
kubectl describe pod <pod name> -n <namespace-name>

# View logs for specific pod
kubectl logs <pod name> -n <namespace-name>
Gigue answered 4/6, 2018 at 8:54 Comment(2)
Although these commands might (or might not) solve the problem, a good answer should always contain an explanation of how the problem is solved.Overstudy
The first command, kubectl -n <namespace-name> describe pod <pod name>, describes your pod and can be used to spot any error in creating and running the pod, like lack of resources, etc. The second command, kubectl -n <namespace-name> logs -p <pod name>, shows the logs of the application running in the pod.Juratory
28

If you have an application that takes longer to bootstrap, it could be related to the initial values of the readiness/liveness probes. I solved my problem by increasing the value of initialDelaySeconds to 120s, as my Spring Boot application deals with a lot of initialization. The documentation does not mention the default of 0 (https://kubernetes.io/docs/api-reference/v1.9/#probe-v1-core).

service:
  livenessProbe:
    httpGet:
      path: /health/local
      scheme: HTTP
      port: 8888
    initialDelaySeconds: 120
    periodSeconds: 5
    timeoutSeconds: 5
    failureThreshold: 10
  readinessProbe:
    httpGet:
      path: /admin/health
      scheme: HTTP
      port: 8642
    initialDelaySeconds: 150
    periodSeconds: 5
    timeoutSeconds: 5
    failureThreshold: 10

A very good explanation of those values is given in What is the default value of initialDelaySeconds.

The health or readiness check algorithm works like this:

  1. wait for initialDelaySeconds
  2. perform the check and wait up to timeoutSeconds for a timeout; if the number of consecutive successes is greater than successThreshold, return success
  3. if the number of consecutive failures is greater than failureThreshold, return failure; otherwise wait periodSeconds and start a new check

In my case, my application now has enough time to bootstrap cleanly, so I know I will not get a periodic CrashLoopBackOff from occasionally hitting the limit of those thresholds.

Analyzer answered 17/11, 2018 at 23:47 Comment(2)
you saved me a lot of hours! Thank you. My probe time was 90s and it wouldn't even let the pod start.Gelhar
Lol, mine was 1s so it crashed immediately. Switched to 300 and it's running fine now!Returnable
25

My pod kept crashing and I was unable to find the cause. Luckily, Kubernetes saves all the events that occurred before my pod crashed.

To see these events (sorted by timestamp), run the command:

kubectl get events --sort-by=.metadata.creationTimestamp

Make sure to add a --namespace mynamespace argument to the command if needed.
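
For example, with an explicit namespace (mynamespace is a placeholder):

kubectl get events --sort-by=.metadata.creationTimestamp --namespace mynamespace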

The events shown in the output of the command showed me why my pod kept crashing.

Crustacean answered 11/11, 2019 at 15:43 Comment(2)
Thanks! This tip helped me detect there was a problem mounting the volume with secret.Fda
Also helped me to discover assigned managed identity on the pod was incorrect.Ampulla
21

I needed to keep a pod running for subsequent kubectl exec calls, and as the comments above pointed out, my pod was getting killed by my k8s cluster because it had completed running all its tasks. I managed to keep my pod running by simply starting it with a command that would not stop automatically, as in:

kubectl run YOUR_POD_NAME -n YOUR_NAMESPACE --image SOME_PUBLIC_IMAGE:latest --command tailf /dev/null
Longshoreman answered 13/6, 2017 at 17:23 Comment(3)
tailf did not work for me but this did (on Alpine linux): --command /usr/bin/tail -- -f /dev/nullLadon
it's not pod name. it's deployment name.kubectl run <deployment name> -n <namespace> --image <image> --command tailf /dev/nullHanna
Perfect! Was looking for this for really longDogma
15

According to this page, the container dies after running everything correctly, but the pod crashes because all of its commands have ended. Either you make your services run in the foreground, or you create a keep-alive script. By doing so, Kubernetes will show that your application is running. Note that in the Docker environment this problem is not encountered; it is only Kubernetes that wants a running app.

Update (an example):

Here's how to avoid CrashLoopBackOff when launching a Netshoot container:

kubectl run netshoot --image nicolaka/netshoot -- sleep infinity
Sergeant answered 11/6, 2018 at 9:33 Comment(0)
7

In your yaml file, add command and args lines:

...
containers:
- name: api
  image: localhost:5000/image-name
  command: [ "sleep" ]
  args: [ "infinity" ]
...

Works for me.

Mannie answered 30/5, 2020 at 7:31 Comment(1)
what happens to command specified in the Dockerfile, does it still execute it? is this reliable or a hack?Mysterious
6

I observed the same issue and added the command and args block to my yaml file. I am copying a sample of my yaml file for reference:

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: ubuntu
  name: ubuntu
  namespace: default
spec:
  containers:
  - image: gcr.io/ow/hellokubernetes/ubuntu
    imagePullPolicy: Never
    name: ubuntu
    resources:
      requests:
        cpu: 100m
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello; sleep 10;done"]
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
Comestible answered 3/8, 2020 at 7:34 Comment(1)
arguably this should be done in a script run inside the container, not on the masterDusen
5

As mentioned in the posts above, the container exits upon creation.

If you want to test this without using a yaml file, you can pass the sleep command to the kubectl create deployment statement. The double hyphen -- indicates a command, which is the equivalent of command: in a Pod or Deployment yaml file.

The below command creates a deployment for debian with sleep 1234, so it doesn't exit immediately.

kubectl create deployment deb --image=debian:buster-slim -- "sh" "-c" "while true; do sleep 1234; done"

You can then create a service etc., or, to test the container, you can kubectl exec -it <pod-name> -- sh (or -- bash) into the container you just created.

Millenarian answered 10/6, 2021 at 8:46 Comment(0)
3

I solved this problem by increasing the memory resources:

resources:
  limits:
    cpu: 1
    memory: 1Gi
  requests:
    cpu: 100m
    memory: 250Mi
Reyreyes answered 30/4, 2020 at 15:44 Comment(0)
2

In my case the problem was what Steve S. mentioned:

The pod is crashing because it starts up then immediately exits, thus Kubernetes restarts and the cycle continues.

Namely, I had a Java application whose main threw an exception (and something overrode the default uncaught exception handler so that nothing was logged). The solution was to wrap the body of main in try { ... } catch and print out the exception. That way I could find out what was wrong and fix it.

(Another cause could be something in the app calling System.exit; you could use a custom SecurityManager with an overridden checkExit to prevent (or log the caller of) exit; see https://mcmap.net/q/103064/-preventing-system-exit-from-api.)

Sykes answered 15/1, 2019 at 12:40 Comment(0)
2

kubectl logs -f POD will only produce logs from a running container. Append --previous to the command to get logs from a previous container instance. It is used mainly for debugging. Hope this helps.
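
For instance (the pod name is a placeholder):

# Logs from the currently running container
kubectl logs -f <pod-name>

# Logs from the previous, crashed container instance
kubectl logs --previous <pod-name>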

Trevortrevorr answered 25/1, 2023 at 20:18 Comment(0)
1

Whilst troubleshooting the same issue, I found no logs when using kubectl logs <pod_id>. Therefore I SSHed into the node instance to try to run the container using plain Docker. To my surprise, this failed as well.

When entering the container with:

docker exec -it faulty:latest /bin/sh

and poking around I found that it wasn't the latest version.

A faulty version of the docker image was already available on the instance.

When I removed the faulty:latest instance with:

docker rmi faulty:latest

everything started to work.

Zoroastrian answered 17/1, 2019 at 15:29 Comment(0)
1

I had the same issue and finally resolved it. I am not using a docker-compose file. I just added this line to my Dockerfile and it worked:

ENV CI=true

Reference: https://github.com/GoogleContainerTools/skaffold/issues/3882

Exhibitionist answered 9/6, 2020 at 22:30 Comment(0)
1

Try rerunning the pod and running

 kubectl get pods --watch

to watch the status of the pod as it progresses.

In my case, I would only see the end result, 'CrashLoopBackOff,' but the docker container ran fine locally. So I watched the pods using the above command, and I saw the container briefly progress into an OOMKilled state, which meant to me that it required more memory.

Trommel answered 21/7, 2020 at 6:23 Comment(0)
1

In my case this error was specific to the hello-world docker image. I used the nginx image instead of the hello-world image and the error was resolved.

Somite answered 21/1, 2021 at 17:8 Comment(0)
0

I solved this problem by removing the space between the quotes and the command value inside the array. This happened because the container exited after it started and there was no executable command present to be run inside the container.

['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
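
In context, that array usually sits under command: (or args:) in the container spec; a minimal sketch, assuming a busybox image:

containers:
- name: demo
  image: busybox
  command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']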
Venettavenezia answered 8/9, 2020 at 12:22 Comment(1)
this is a time bombDusen
0

I had a similar issue, but it got solved when I corrected my zookeeper.yaml file, which had a mismatch between the Service name and the Deployment's container name. It got resolved by making them the same.

apiVersion: v1
kind: Service
metadata:
  name: zk1
  namespace: nbd-mlbpoc-lab
  labels:
    app: zk-1
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zk-1
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: zk-deployment
  namespace: nbd-mlbpoc-lab
spec:
  template:
    metadata:
      labels:
        app: zk-1
    spec:
      containers:
      - name: zk1
        image: digitalwonderland/zookeeper
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: zk1
Sago answered 19/10, 2020 at 4:24 Comment(0)
0

In my case, the issue was a malformed list of command-line arguments. I was doing this in my deployment file:

...
args:
  - "--foo 10"
  - "--bar 100"

Instead of the correct approach:

...
args:
  - "--foo"
  - "10"
  - "--bar"
  - "100"
Woke answered 1/2, 2021 at 15:32 Comment(0)
0

I finally found the cause when I executed the docker run xxx command directly and got the error there: it was caused by an incomplete platform.

Tatyanatau answered 1/5, 2021 at 16:57 Comment(0)
0

It seems there can be a lot of reasons why a Pod ends up in a CrashLoopBackOff state.

In my case, one of the containers was terminating continuously due to a missing environment value.

So, the best way to debug is to (see the consolidated command sketch after this list):

1. check the Pod description output, i.e. kubectl describe pod abcxxx
2. check the events generated for the Pod, i.e. kubectl get events | grep abcxxx
3. check whether Endpoints have been created for the Pod, i.e. kubectl get ep
4. check whether dependent resources are in place, e.g. CRDs, ConfigMaps, or any other resource that may be required
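
A consolidated sketch of those checks (abcxxx is a placeholder pod name, and kubectl get configmaps is just one example of checking a dependent resource):

# 1. Pod description, including recent events and container states
kubectl describe pod abcxxx

# 2. Events related to the pod
kubectl get events | grep abcxxx

# 3. Endpoints created for the pod's service
kubectl get ep

# 4. Dependent resources, e.g. ConfigMaps the pod may mount
kubectl get configmaps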
Sandbank answered 16/11, 2021 at 10:6 Comment(0)
0

The CrashLoopBackOff error mainly comes up when your pod gets created and exits immediately, most probably due to resource availability. I would recommend using a readinessProbe for this; refer to the yaml below:


spec:
  containers:
  - name: liveness
    image: registry.k8s.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5

Redo answered 23/2 at 7:41 Comment(0)
