How do I force Kubernetes CoreDNS to reload its Config Map after a change?

I'm running Kubernetes 1.11 and am trying to configure the cluster to check a local name server first. I read the instructions on the Kubernetes site for customizing CoreDNS and used the Dashboard to edit the system ConfigMap for CoreDNS. The resulting Corefile value is:

.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
       pods insecure
       upstream 192.168.1.3 209.18.47.61
       fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    proxy . /etc/resolv.conf
    cache 30
    reload
}

You can see the local address as the first upstream name server. My problem is that this doesn't seem to have made any impact. I have a container running with ping & nslookup, and neither will resolve names from the local name server.
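
For anyone reproducing the test, a quick way to query the cluster DNS from a throwaway pod is shown below (a sketch; busybox:1.28 and myhost.example are placeholders, so substitute a name your local name server should resolve):

# One-off pod that queries the cluster DNS and is removed when the command exits
kubectl run -it --rm --restart=Never dnstest --image=busybox:1.28 -- nslookup myhost.example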

I've worked around the problem for the moment by specifying the name server configuration in a few pod specifications that need it, but I don't like the workaround.
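
For reference, the per-pod workaround looks roughly like this (a sketch; the pod name, image, and search domain are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  # Skip the cluster DNS for this pod and talk to the local name server directly
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 192.168.1.3        # local name server
    searches:
      - example.internal   # placeholder search domain
  containers:
    - name: app
      image: busybox:1.28
      command: ["sleep", "3600"]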

How do I force CoreDNS to update based on the changed ConfigMap? I can see that it is a Deployment in the kube-system namespace, but I haven't found any docs on how to get it to reload or otherwise respond to a changed configuration.

Mainstream answered 27/11, 2018 at 11:14 Comment(1)
You can just delete the coredns pod; it will then be recreated automatically with the new configuration. – Emilio

You can edit it from the command line:

kubectl edit cm coredns -n kube-system

Save and exit; CoreDNS should pick up the change.

If it does not reload, delete the coredns pods, as Emruz Hossain advised:

kubectl get pods -n kube-system -o name | grep coredns | xargs kubectl delete -n kube-system
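
Before deleting pods, you can also check whether the reload plugin actually picked up the change by looking at the CoreDNS logs (a sketch, assuming the pods carry the default k8s-app=kube-dns label):

# Look for reload messages from the reload plugin
kubectl logs -n kube-system -l k8s-app=kube-dns | grep -i reload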

Mylan answered 27/11, 2018 at 15:5 Comment(2)
I have the same issue on EKS. Deleting the pods didn't help, nor did terminating all nodes with all pods. I also tried applying a kube-dns ConfigMap with upstreamNameservers instead, as suggested here: kubernetes.io/docs/tasks/administer-cluster/… None of these helped. – Fluorescein
As of coredns:v1.8.4-eksbuild.1, I did kubectl edit cm coredns -n kube-system and had to wait 60 seconds (even though it says cache 30) before the changes took effect. There was no need to reload or delete the pods. – Goofy

One way to apply ConfigMap changes is to do a rolling restart of the CoreDNS Deployment:

kubectl rollout restart -n kube-system deployment/coredns
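
If you want to wait until the restarted pods are ready, the standard rollout status check works here too:

kubectl rollout status -n kube-system deployment/coredns
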
Heterosexual answered 9/7, 2020 at 13:25 Comment(2)
This worked for me. Editing the Deployment/coredns object did not work, and killing the individual pods didn't work either (surprisingly) on Amazon EKS. – Renascent
It worked for me; all DNS resolution was blocked and returned a timeout. – Pursley

CoreDNS will reload itself within roughly 30 to 45 seconds, because the reload plugin is enabled in the ConfigMap: https://coredns.io/plugins/reload/

If you want the change picked up immediately after editing the ConfigMap, you can either delete all the CoreDNS pods or do a rolling restart of the Deployment.
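
If 30 to 45 seconds is too long, the reload plugin also accepts an explicit check interval in the Corefile (a sketch; 15s is just an example value, see the plugin docs for the exact syntax and defaults):

.:53 {
    # ... other plugins as in the question ...
    reload 15s
}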

Obmutescence answered 16/3, 2023 at 13:33 Comment(0)
