I'm trying to set up a Kubernetes cluster, but I cannot get CoreDNS running. I've run the following to start the cluster:
sudo swapoff -a
sudo sysctl net.bridge.bridge-nf-call-iptables=1
sudo kubeadm init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s- version=$(kubectl version | base64 | tr -d '\n')"
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
When I check the pods with kubectl get pods --all-namespaces, I get:
NAMESPACE     NAME                                    READY   STATUS             RESTARTS   AGE
kube-system   coredns-68fb79bcf6-6s5bp                0/1     CrashLoopBackOff   6          10m
kube-system   coredns-68fb79bcf6-hckxq                0/1     CrashLoopBackOff   6          10m
kube-system   etcd-myserver                           1/1     Running            0          79m
kube-system   kube-apiserver-myserver                 1/1     Running            0          79m
kube-system   kube-controller-manager-myserver        1/1     Running            0          79m
kube-system   kube-proxy-9ls64                        1/1     Running            0          80m
kube-system   kube-scheduler-myserver                 1/1     Running            0          79m
kube-system   kubernetes-dashboard-77fd78f978-tqt8m   1/1     Running            0          80m
kube-system   weave-net-zmhwg                         2/2     Running            0          80m
So CoreDNS keeps crashing. The only error messages I could find were in /var/log/syslog:
Oct 4 18:06:44 myserver kubelet[16397]: E1004 18:06:44.961409 16397 pod_workers.go:186] Error syncing pod c456a48b-c7c3-11e8-bf23-02426706c77f ("coredns-68fb79bcf6-6s5bp_kube-system(c456a48b-c7c3-11e8-bf23-02426706c77f)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-68fb79bcf6-6s5bp_kube-system(c456a48b-c7c3-11e8-bf23-02426706c77f)"
and from kubectl logs coredns-68fb79bcf6-6s5bp -n kube-system:
.:53
2018/10/04 11:04:55 [INFO] CoreDNS-1.2.2
2018/10/04 11:04:55 [INFO] linux/amd64, go1.11, eb51e8b
CoreDNS-1.2.2
linux/amd64, go1.11, eb51e8b
2018/10/04 11:04:55 [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
2018/10/04 11:04:55 [FATAL] plugin/loop: Seen "HINFO IN 3256902131464476443.1309143030470211725." more than twice, loop detected
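As far as I understand it, the loop plugin fires when CoreDNS's upstream resolver ends up pointing back at CoreDNS itself. Two checks I used to see where the upstream comes from (a sketch, assuming the stock kubeadm Corefile, which forwards unmatched queries to the node's /etc/resolv.conf):

# Inspect the Corefile that kubeadm generated; the default config proxies
# everything it doesn't own to whatever the node's /etc/resolv.conf lists:
kubectl -n kube-system get configmap coredns -o yaml

# On the node, check whether /etc/resolv.conf points at a local stub
# resolver such as 127.0.0.53 (systemd-resolved) or 127.0.1.1 (dnsmasq);
# if it does, the forwarded query comes straight back to CoreDNS and the
# loop is detected:
cat /etc/resolv.conf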
Some solutions I found are to issue
kubectl -n kube-system get deployment coredns -o yaml | \
sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' | \
kubectl apply -f -
and to modify /etc/resolv.conf to point to an actual DNS server rather than to localhost, which I tried as well.
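For reference, the variant of that fix I understand to be the usual one is to leave /etc/resolv.conf alone and instead point kubelet at a resolv.conf that lists the real upstream servers. A sketch, assuming systemd-resolved keeps them in /run/systemd/resolve/resolv.conf and that kubelet reads KUBELET_EXTRA_ARGS from /etc/default/kubelet (both are assumptions about this particular host):

# Hand pods a resolv.conf with real upstream servers instead of the local stub.
# The paths below are assumptions: adjust them to wherever your resolver
# actually writes its upstream list and wherever your kubelet reads extra args.
echo 'KUBELET_EXTRA_ARGS="--resolv-conf=/run/systemd/resolve/resolv.conf"' | sudo tee /etc/default/kubelet

sudo systemctl daemon-reload
sudo systemctl restart kubelet

# Recreate the CoreDNS pods so they are started with the new resolv.conf:
kubectl -n kube-system delete pod -l k8s-app=kube-dns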
The issue is described at https://kubernetes.io/docs/setup/independent/troubleshooting-kubeadm/#pods-in-runcontainererror-crashloopbackoff-or-error-state, and I have tried several different pod network add-ons, but none of them helped.
I've run sudo kubeadm reset && rm -rf ~/.kube/ && sudo kubeadm init several times.
I'm running Ubuntu 16.04, Kubernetes 1.12 and Docker 17.03. Any ideas?
I tried kubeadm init --config config.yaml, but kubelet/CoreDNS didn't pick up the file. I'd still be interested in how to specify an alternative location of resolv.conf to kubeadm. – Cabbageworm
kubectl patch deployment -n=kube-system coredns -p '{"spec": {"template": {"spec":{"containers":[{"image":"k8s.gcr.io/coredns:1.2.6", "name":"coredns","resources":{"limits":{"memory":"1Gi"},"requests":{"cpu":"100m","memory":"70Mi"}}}]}}}}' – Crabbing
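For reference, the shape of config.yaml I was experimenting with for the approach mentioned in the first comment: a KubeletConfiguration document with a resolvConf field alongside the kubeadm configuration. The API group versions and the resolv.conf path below are my assumptions for Kubernetes 1.12, so treat this as a sketch rather than a verified recipe.

# Sketch of a kubeadm config that asks kubelet to use an alternative
# resolv.conf. API versions and the resolvConf path are assumptions.
cat <<'EOF' > config.yaml
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: v1.12.0
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
resolvConf: /run/systemd/resolve/resolv.conf
EOF

sudo kubeadm init --config config.yaml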