Pod status as CreateContainerConfigError in Minikube cluster

I am trying to run a SonarQube service using the following Helm chart.

The setup starts a MySQL service and a SonarQube service in the Minikube cluster, and the SonarQube service talks to the MySQL service to store its data.

When I do helm install followed by kubectl get pods, I see the MySQL pod status as Running, but the SonarQube pod status shows as CreateContainerConfigError. I reckon it has to do with mounting volumes: link. I am not quite sure how to fix it, though (pretty new to the Kubernetes environment and still learning :) )
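Roughly what that looks like (the chart reference and the MySQL pod name are placeholders; the SonarQube pod name matches the one mentioned in the comments):

    helm install <sonarqube-chart>
    kubectl get pods

    NAME                                    READY   STATUS                       RESTARTS   AGE
    sonar-mysql-xxxxxxxxxx-xxxxx            1/1     Running                      0          2m
    sonar-play-sonarqube-6ffdff74d4-w2pvs   0/1     CreateContainerConfigError   0          2m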

Sable answered 19/5, 2018 at 11:23 Comment(5)
Hi, can you add the logs of the container? kubectl logs POD_NAME - Levitical
Since the pod is in CreateContainerConfigError, kubectl logs returns: Error from server (BadRequest): container "sonar-play-sonarqube" in pod "sonar-play-sonarqube-6ffdff74d4-w2pvs" is waiting to start: CreateContainerConfigError - Sable
Yes, it could be caused by the mounted volumes. As I understand it, the init container makes sure those directories are available for the app container. Is it possible to get the logs of the init containers with kubectl logs podname -c init-container-name? - Levitical
What does 'kubectl describe pod <pod name>' show? Logs will show you application errors, but your containers haven't booted up, so the application hasn't started yet. 'Describe pod' will give you insight into k8s config errors. - Accoucheur
In my case of the same error, there was a ConfigMap linked to the CronJob but the actual ConfigMap was missing. - Owing

This can be caused in various ways; I suggest starting with kubectl describe pod <pod-name>, which will usually show you why the container you've been trying to start is failing. In my case, I found that some of my key-value pairs were missing from the ConfigMap used by the deployment.
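For illustration, an env entry that points at a ConfigMap key which was never created leaves the pod stuck in CreateContainerConfigError (the names app-config and mode here are made up):

      env:
        - name: APP_MODE
          valueFrom:
            configMapKeyRef:
              name: app-config   # the ConfigMap must exist in the pod's namespace
              key: mode          # and must contain this key

kubectl describe pod <pod-name> then reports something like Error: couldn't find key mode in ConfigMap default/app-config in the Events section.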

Wisteria answered 11/3, 2019 at 12:0 Comment(0)

I ran into this problem myself today as I was trying to create secrets and use them in my pod definition YAML file. It would help to check the output of kubectl get secrets and kubectl get configmaps, if you are using either, and validate that the number of data items you wanted is listed correctly.

I realized that in my case the problem was with creating secrets that have multiple data items: the output of kubectl get secrets <secret_name> showed only 1 data item, while I had specified 2 items in my secret_name_definition.yaml. This is because of the difference between kubectl create -f secret_name_definition.yaml and kubectl create secret generic <secret_name> --from-file=secret_name_definition.yaml. With the former, every item listed in the data section of the YAML is treated as a key-value pair, so kubectl get secrets <secret_name> shows the correct number of items. With the latter, the whole file is stored as a single data item keyed by the filename, so kubectl get secrets <secret_name> shows only 1 data item, and this is when we see the error "CreateContainerConfigError".
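A quick way to see the difference, assuming secret_name_definition.yaml declares two keys under its data section:

    # manifest route: every key under data: becomes its own item
    kubectl create -f secret_name_definition.yaml
    kubectl get secrets <secret_name>     # DATA column shows 2

    # --from-file route with the same file: the whole file is stored as a
    # single item keyed by the filename, so only one item shows up
    kubectl create secret generic <secret_name> --from-file=secret_name_definition.yaml
    kubectl get secrets <secret_name>     # DATA column shows 1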

Note that this problem wouldn't occur if we used kubectl create secret generic <secret_name> with --from-literal=, because then we have to repeat the --from-literal= prefix for every key-value pair we want to define.

Similarly, with the --from-file= option we still have to repeat the prefix once per key-value pair; the difference is that with --from-literal we pass the raw value inline, whereas with --from-file kubectl reads the value from the file's contents. In both cases kubectl base64-encodes the value for us; it is only when writing the data section of a manifest (used with kubectl create -f) that we have to encode the values ourselves, e.g. echo -n raw_value | base64.
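For example (the secret name cloudsql-db-credentials matches the deployment snippet below; the credential values and file names are placeholders):

    # one --from-literal per key; the raw values are encoded for you
    kubectl create secret generic cloudsql-db-credentials \
      --from-literal=username=my-user \
      --from-literal=password=my-password

    # or one --from-file per key; each file's contents become that key's value
    kubectl create secret generic cloudsql-db-credentials \
      --from-file=username=./username.txt \
      --from-file=password=./password.txt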

For example, say the keys are "username" and "password": if creating the secret with kubectl create -f secret_definition.yaml, we need to have the values for both "username" and "password" base64-encoded, as mentioned in the "Create a Secret" section of https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/
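A sketch of such a manifest, using the placeholder credentials from above (the data values are simply their base64 encodings):

    echo -n 'my-user' | base64        # bXktdXNlcg==
    echo -n 'my-password' | base64    # bXktcGFzc3dvcmQ=

    apiVersion: v1
    kind: Secret
    metadata:
      name: cloudsql-db-credentials
    type: Opaque
    data:
      username: bXktdXNlcg==
      password: bXktcGFzc3dvcmQ=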

I would like to highlight the "Note:" section in https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/ Also, https://kubernetes.io/docs/concepts/configuration/secret/ has a very clear explanation of creating secrets

Also make sure that the deployment.yaml now has the correct definition for this container:

      env:
        - name: DB_HOST
          value: 127.0.0.1
        # These secrets are required to start the pod.
        # [START cloudsql_secrets]
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: username
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: password
        # [END cloudsql_secrets]

As mentioned by others, "kubectl describe pod pod_name" would help, but in my case I only understood that the container wasn't being created in the first place, and the output of "kubectl logs pod_name -c container_name" didn't help much.

Fidelis answered 28/6, 2018 at 4:45 Comment(2)
It is important to mention here to always use echo -n raw_value | base64. The -n suppresses the newline character which would otherwise be appended to the raw_value of the secret, saving you some keyboard banging later on. - Marji
But how do you even debug that this is the case? Where can I see an actual error that mentions secrets/configmaps? - Sudoriferous

Recently, I encountered the same CreateContainerConfigError, and after a little debugging I found out that it was because I was using a Kubernetes Secret in my Deployment YAML which was not actually present/created in the namespace where the pods were being created.

Also, after reading the previous answer, I think it is safe to say that this particular error is centered around Kubernetes secrets!
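A quick way to check is to list the secrets in the pod's namespace and look at the pod's events (the names here are placeholders):

    kubectl get secrets -n <namespace>               # is the referenced Secret listed?
    kubectl -n <namespace> describe pod <pod-name>   # the Events section shows something like
                                                     #   Error: secret "db-credentials" not found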

Shool answered 16/11, 2018 at 16:13 Comment(3)
Can you explain what you mean by "not actually present/created in that namespace"? I got the same error message, CreateContainerConfigError. - Tocology
If you are using a secret in your Deployment which is not present in the said namespace, then it will give you this error. - Shool
This was similar for me. I created the secret "db-secret" but then referenced "db-secrets" (with a trailing s). Kubernetes should really do something about their error verbosity ^^ - Calvillo

Check that your Secrets and ConfigMaps (kubectl get [secrets|configmaps]) already exist and are correctly referenced in the YAML descriptor file; in both cases an incorrect secret/configmap (not created, misspelled, etc.) results in CreateContainerConfigError.

As already pointed out in the other answers, you can check the error with kubectl describe pod [pod name], and something like this should appear at the bottom of the output:

  Warning  Failed     85s (x12 over 3m37s)  kubelet, gke-****-default-pool-300d3c89-9jkz
  Error: configmaps "config-map-1" not found

UPDATE: From @alexis-wilke

The list of events can be ephemeral in some versions, and this message disappears quickly. As a rule of thumb, check the events list immediately when booting a pod; and if you get CreateContainerConfigError without any events, double-check the secrets and config maps, as they can leave the pod in this state with no trace at some point.
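While the events are still around, you can also pull them up directly (the pod name is a placeholder):

    kubectl get events --sort-by=.metadata.creationTimestamp
    # or only the events involving one pod:
    kubectl get events --field-selector involvedObject.name=<pod-name>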

Devisable answered 16/5, 2019 at 17:58 Comment(2)
Great. Thank you for showing the potential error. In my case... no error at the bottom. Just a recap of an event which doesn't look like an error. Still, the pod won't start. - Khachaturian
If you wouldn't mind updating your answer, it would be even more useful to let the user know that events do not stay around for very long. After a little while, you won't get the error in the event list. The only way to know what is still wrong is to start with a new pod (strange design!). - Khachaturian

I also ran into this issue; the problem was due to an environment variable using a field ref on one controller, while the other controller and the worker were able to resolve the reference. We didn't have time to track down the cause of the issue and wound up tearing down the cluster and rebuilding it.

          - name: DD_KUBERNETES_KUBELET_HOST
            valueFrom:
              fieldRef:
                fieldPath: status.hostIP
Apr 02 16:35:46 ip-10-30-45-105.ec2.internal sh[1270]: E0402 16:35:46.502567    1270 pod_workers.go:186] Error syncing pod 3eab4618-5564-11e9-a980-12a32bf6e6c0 ("datadog-datadog-spn8j_monitoring(3eab4618-5564-11e9-a980-12a32bf6e6c0)"), skipping: failed to "StartContainer" for "datadog" with CreateContainerConfigError: "host IP unknown; known addresses: [{Hostname ip-10-30-45-105.ec2.internal}]"
Towhaired answered 3/4, 2019 at 20:43 Comment(0)

Try using the option --from-env-file instead of --from-file and see if the problem disappears. I got the same error, and looking into the pod events suggested that the key-value pairs inside the mysecrets.txt file were not being read properly. If you have only one line, Kubernetes takes the content of the file as the value and the filename as the key. To avoid this issue, you need to load the file as an environment-variable file, as shown below.

mysecrets.txt:

MYSQL_PASSWORD=dfsdfsdfkhk

For example:

kubectl create secret generic secret-name --from-env-file=mysecrets.txt
kubectl create configmap configmap-name --from-env-file=myconfigs.txt
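To confirm the key landed as intended, describe the secret; with the example above the data key should be MYSQL_PASSWORD rather than the filename:

    kubectl describe secret secret-name
    # ...
    # Data
    # ====
    # MYSQL_PASSWORD:  11 bytes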
Basin answered 2/7, 2019 at 15:47 Comment(0)
