Container runtime network not ready: cni config uninitialized [closed]

I'm installing Kubernetes (kubeadm) on a CentOS VM running inside VirtualBox, so with yum I installed kubeadm, kubelet, and docker.

Now, while trying to set up the cluster with kubeadm init --pod-network-cidr=192.168.56.0/24 --apiserver-advertise-address=192.168.56.33/32, I run into the following errors:

Unable to update cni config: No networks found in /etc/cni/net.d

Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

So I checked: there is no cni folder in /etc, even though kubernetes-cni-0.6.0-0.x86_64 is installed. I tried commenting out KUBELET_NETWORK_ARGS in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, but it didn't work.

PS:

  • I'm installing behind a proxy.

  • I have multiple network adapters:

    • NAT : 10.0.2.15/24 for Internet

    • Host Only : 192.168.56.33/32

    • And docker interface : 172.17.0.1/16

Docker version: 17.12.1-ce
kubectl version: Major: "1", Minor: "9", GitVersion: "v1.9.3"
CentOS 7

Herculaneum answered 5/3, 2018 at 14:9 Comment(0)
38

Add a pod network add-on, e.g. Weave Net:

kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
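
To confirm the add-on came up, a quick check (plain kubectl; the name=weave-net label is what the Weave manifest uses):

kubectl get pods -n kube-system -l name=weave-net   # should reach Running
kubectl get nodes                                   # nodes flip to Ready once the CNI is up
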
Appalachia answered 7/10, 2019 at 16:55 Comment(1)
If the above calico doesn't work, then go to the link github.com/projectcalico/calico/blob/master/manifests/…, copy and paste it into a new calico file, and apply it; it should work. – Tiro
24

Stopping and disabling AppArmor and restarting the containerd service on that node will solve your issue:

root@node:~# systemctl stop apparmor
root@node:~# systemctl disable apparmor 
root@node:~# systemctl restart containerd.service
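
After the restart, the kubelet should report the node Ready within a minute or so; to verify:

systemctl status containerd --no-pager
kubectl get nodes
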
Fiscal answered 21/9, 2022 at 21:6 Comment(7)
This, this one works. Thanks. – Untune
Thanks, this unlocked it (kubeadm 1.26.3, Calico CNI, containerd, docker.io). Would be great to know the reason why. – Worms
This did it for me (kubernetesVersion 1.27.3 using flannel), followed by a systemctl restart kubelet. – Homophonous
@Worms AppArmor is a Linux kernel security module that allows the system administrator to restrict programs' capabilities with per-program profiles. Profiles can allow capabilities like network access, raw socket access, and permission to read, write, or execute files on matching paths; you need to configure AppArmor if you want to allow the k8s services. – Fiscal
Restarting containerd.service worked for me. – Decarburize
Now the 4th time I'm here :) What does AppArmor have to do with containerd? – Corwun
If you are using RKE2, you should restart the RKE2 server instead of containerd: systemctl restart rke2-server.service – Archiplasm
13

There are several points to remember when setting up the cluster with "kubeadm init", and they are clearly documented on the Kubernetes site kubeadm cluster create:

  • Run "kubeadm reset" if you have already created a previous cluster
  • Remove the ".kube" folder from the home or root directory
  • (Also, stopping the kubelet with systemctl will allow for a smooth setup)
  • Disable swap permanently on the machine, especially if you reboot your Linux system
  • And don't forget to install a pod network add-on according to the instructions provided on the add-on's site (not the Kubernetes site)
  • Follow the post-initialization steps printed in the terminal by kubeadm.

If all these steps are followed correctly, your cluster will run properly; a condensed sketch of the sequence follows below.
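
A condensed sketch of that sequence on a typical systemd-based distro (all commands are standard kubeadm/kubectl; the pod CIDR is only an example and must match your chosen add-on):

sudo kubeadm reset -f                        # wipe any previous cluster state
rm -rf $HOME/.kube                           # remove the stale kubeconfig
sudo systemctl stop kubelet                  # stop the kubelet for a clean init
sudo swapoff -a                              # disable swap now...
sudo sed -i '/ swap / s/^/#/' /etc/fstab     # ...and across reboots
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube                         # post-init steps printed by kubeadm
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# then install a pod network add-on (Weave, Calico, flannel, ...)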

And don't forget to run the following command to enable scheduling on the created cluster:

kubectl taint nodes --all node-role.kubernetes.io/master-
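
On newer clusters the control-plane taint replaced the master taint, so the equivalent command there is:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-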

For how to install from behind a proxy, you may find this useful:

install using proxy

Myiasis answered 14/9, 2018 at 16:2 Comment(0)
6

Check this answer.

Use this PR (until it is approved):

kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

It's a known issue: coreos/flannel#1044

Lorikeet answered 20/2, 2019 at 6:9 Comment(0)
5

I could not see the helm server version:

$ helm version --tiller-namespace digital-ocean-namespace
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Error: could not find a ready tiller pod

The kubectl describe node kubernetes-master --namespace digital-ocean-namespace command was showing the message:

NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

The nodes were not ready:

$ kubectl get node --namespace digital-ocean-namespace
NAME                  STATUS     ROLES    AGE   VERSION
kubernetes-master     NotReady   master   82m   v1.14.1
kubernetes-worker-1   NotReady   <none>   81m   v1.14.1

I had a version compatibility issue between Kubernetes and the flannel network.

My k8s version was 1.14, as seen in the output of:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}

After re-installing the flannel network with the command:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

I could then see the helm server version:

$ helm version --tiller-namespace digital-ocean-namespace
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Hamstring answered 23/4, 2019 at 14:22 Comment(1)
kubectl apply -f raw.githubusercontent.com/coreos/flannel/master/Documentation/… worked for me, thanks. – Whopping
4

Resolved this issue by installing the Calico CNI plugin using the following commands:

curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
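
For newer Kubernetes versions, the same manifest has moved into the projectcalico/calico repository; a hedged alternative (same plugin, current upstream location; check the Calico docs for the version matching your cluster):

kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/master/manifests/calico.yaml
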
Girardi answered 24/7, 2021 at 18:21 Comment(1)
For me the above calico worked till Kubernetes version v1.25.4. For the newer versions I used the link github.com/projectcalico/calico/blob/master/manifests/…, copied and pasted it into a new calico file, and applied it. It worked for me. – Tiro
3

I solved this by installing a pod network add-on. I used the Flannel pod network, which is a very simple overlay network that satisfies the Kubernetes requirements.

You can do it with this command:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

You can read more about this in the kubernetes documentation

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
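
Note that this flannel manifest assumes the 10.244.0.0/16 pod CIDR, so the cluster should be initialized to match; the project has also since moved to the flannel-io organization (both commands below follow flannel's own README):

sudo kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml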

Compelling answered 4/5, 2021 at 14:35 Comment(0)
2

It was a proxy error, as mentioned on GitHub: https://github.com/kubernetes/kubernetes/issues/34695

They suggested using kubeadm init --use-kubernetes-version v1.4.1, but I changed my network entirely (no proxy) and managed to set up my cluster.

After that, we can set up the pod network with kubectl apply -f ...; see https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network
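
If you must stay behind the proxy instead, the usual fix is to exempt the cluster's own networks from proxying so kubeadm and the kubelet can reach the API server directly. A sketch, using the adapters from the question (the proxy address is hypothetical; 10.96.0.0/12 is kubeadm's default service CIDR):

export HTTP_PROXY=http://proxy.example.com:3128   # hypothetical proxy address
export HTTPS_PROXY=$HTTP_PROXY
export NO_PROXY=localhost,127.0.0.1,10.0.2.15,192.168.56.33,192.168.56.0/24,10.96.0.0/12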

Herculaneum answered 6/3, 2018 at 14:57 Comment(0)
1

I faced the same errors, and it seemed that systemd had a problem. I don't remember my last systemd version, but updating it solved the problem for me.

Celestinecelestite answered 2/9, 2019 at 16:23 Comment(0)
0

I faced the same errors; I was seeing the issue after the slave node joined the cluster. The slave node was showing status 'NotReady' after joining.

I checked kubectl describe node ksalve and observed the mentioned issue. After digging deeper, I found that the cgroup driver was different on the master and the slave node: on the master I had configured the systemd cgroup driver, whereas the slave had the default cgroupfs only.

Once I removed the systemd cgroup driver setting from the master node, the slave status immediately changed to Ready.
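
For context, the kubelet and the container runtime must agree on one cgroup driver, and the mismatch can be fixed in either direction. A minimal sketch for the Docker side, assuming Docker as the runtime (the daemon.json path and exec-opts key are Docker's standard configuration):

cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker kubelet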

Costmary answered 8/5, 2020 at 13:40 Comment(1)
Can you elaborate on "I removed the systemd from master node"? – Lois
0

My problem was that I was updating the hostname after the cluster was created. By doing that, it's as if the master didn't know it was the master.

I am still running:

sudo hostname $(curl 169.254.169.254/latest/meta-data/hostname) [1][2]

but now I run it before the cluster initialization.

The error that led me to this, from running sudo journalctl -u kubelet:

Unable to register node "ip-10-126-121-125.ec2.internal" with API server: nodes "ip-10-126-121-125.ec2.internal" is forbidden: node "ip-10-126-121-125" cannot modify node "ip-10-126-121-125.ec2.internal"
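
A minimal sketch of the corrected ordering (the metadata URL is from the answer above; --node-name is a standard kubeadm flag):

sudo hostname "$(curl -s 169.254.169.254/latest/meta-data/hostname)"   # set the hostname first
sudo kubeadm init --node-name "$(hostname)"                            # then initialize the cluster
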
Anjaanjali answered 8/12, 2020 at 16:19 Comment(0)
0

In my case, it was because I forgot to open port 8285. Port 8285 is used by flannel, and you need to open it in the firewall.

For example, if you use the flannel add-on and your OS is CentOS:

firewall-cmd --permanent --add-port=8285/tcp 
firewall-cmd --reload
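
Note that flannel's backends actually use UDP: 8285 for the udp backend and 8472 for the default vxlan backend (port numbers from the flannel documentation), so you may need:

firewall-cmd --permanent --add-port=8285/udp
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --reload
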
Waterspout answered 2/11, 2021 at 12:57 Comment(0)
0

In my case, I restarted Docker and the status changed to Ready:

sudo systemctl stop docker
sudo systemctl start docker
Kilometer answered 28/10, 2022 at 7:6 Comment(0)
-3

This is for the AWS VPC CNI:

  1. kubectl get mutatingwebhookconfigurations -oyaml > mutating.txt

  2. kubectl delete -f mutating.txt

  3. Restart the node.

  4. You should see that the node is ready.

  5. Install the mutatingwebhookconfiguration back (see the note below).
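
For step 5, you can reuse the file exported in step 1 (kubectl apply is standard; mutating.txt is the file created above):

kubectl apply -f mutating.txt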

Galleywest answered 11/1, 2021 at 9:34 Comment(0)
