OpenShift 3.11 install fails - Unable to update cni config: No networks found in /etc/cni/net.d
Asked Answered
I'm trying to install Openshift 3.11 on a one master, one worker node setup.

The installation fails, and I can see in journalctl -r:

2730 kubelet.go:2101] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
2730 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
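(When sifting through journalctl -r output, filtering for the CNI-related lines makes the failure easier to spot. A minimal sketch, using the two log lines above as sample input; on the master you would pipe journalctl -r in instead of the here-string:)

```shell
# Sample of the journal output quoted above (two kubelet/cni lines)
journal_sample='2730 kubelet.go:2101] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
2730 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d'

# Count the CNI-related lines; on a real host: journalctl -r | grep -c 'cni'
printf '%s\n' "$journal_sample" | grep -c 'cni'   # prints 2
```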

Things I've tried:

  1. Rebooting the master node
  2. Ensuring that hostname and hostname -f return the same value on all nodes
  3. Disabling IP forwarding on the master node, as described in https://github.com/openshift/openshift-ansible/issues/7967#issuecomment-405196238 and https://linuxconfig.org/how-to-turn-on-off-ip-forwarding-in-linux
  4. Applying kube-flannel on the master node, as described in https://mcmap.net/q/326456/-container-runtime-network-not-ready-cni-config-uninitialized-closed
  5. Unsetting http_proxy and https_proxy on the master node, as described in https://github.com/kubernetes/kubernetes/issues/54918#issuecomment-385162637
  6. Modifying /etc/resolv.conf to use nameserver 8.8.8.8, as described in https://github.com/kubernetes/kubernetes/issues/48798#issuecomment-452172710
  7. Creating a file /etc/cni/net.d/80-openshift-network.conf with content { "cniVersion": "0.2.0", "name": "openshift-sdn", "type": "openshift-sdn" }, as described in https://mcmap.net/q/1916227/-okd-3-11-installation-failed-quot-control-plane-pods-didn-39-t-come-up-quot-quot-network-plugin-is-not-ready-cni-config-uninitialized-quot
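(For step 7, it is worth validating the JSON before restarting the node service, since a malformed CNI config is silently ignored. A minimal sketch; it writes to a temp directory here, whereas on the real master the target would be /etc/cni/net.d:)

```shell
# Recreate the step-7 workaround in a scratch directory first
NETDIR=$(mktemp -d)   # stand-in for /etc/cni/net.d on the master
cat > "$NETDIR/80-openshift-network.conf" <<'EOF'
{ "cniVersion": "0.2.0", "name": "openshift-sdn", "type": "openshift-sdn" }
EOF

# Sanity-check that the file parses as JSON before deploying it
python3 -m json.tool "$NETDIR/80-openshift-network.conf" >/dev/null && echo "valid JSON"
```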

The last step does appear to have allowed the master node to become Ready; however, the openshift-ansible installer still fails with "Control plane pods didn't come up".

For a more detailed description of the problem see https://github.com/openshift/openshift-ansible/issues/11874

Jacquard answered 30/8, 2019 at 0:44 Comment(5)
What's your architecture, ARM or AMD? – Fanchan
Not entirely sure. The hardware is managed by VMware, and I've installed OpenShift on the same hardware previously without issue. – Jacquard
Have you tried another CNI, e.g. Weave Net? If there are issues like that, it's good to test another CNI to prove it's not an unknown incompatibility. – Fanchan
No, I haven't tried another CNI. I've installed OpenShift 3.11 before and never needed to. – Jacquard
That error can be a red herring, as it is thrown when other issues are present. Are the static pods for the control-plane components coming up? Run docker ps -a on the master hosts. You can find the logs for the static pods in /var/log/containers. Usually the issue is the master API pod not coming up and the rest of the pods failing because of that. I would suggest walking carefully through the installation section of the official documentation and making sure everything is in order: firewalls/DNS/LBs. – Photocompose
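(Following the debugging advice in the last comment, the control-plane containers can be picked out of docker ps -a output by name. A minimal sketch; the k8s_api/k8s_controllers/k8s_etcd name prefixes are assumed from the usual OpenShift 3.11 static-pod naming, and the sample names below are illustrative:)

```shell
# Given `docker ps -a --format '{{.Names}}'` output on stdin, keep only
# the control-plane static-pod containers (name convention assumed)
filter_control_plane() {
  grep -E 'k8s_(api|controllers|etcd)' || true
}

# Illustrative container names, as they might appear on a master host
printf '%s\n' \
  'k8s_api_master-api-host_kube-system_abc' \
  'k8s_etcd_master-etcd-host_kube-system_def' \
  'k8s_POD_something_else' | filter_control_plane
```

On a real master this would be `docker ps -a --format '{{.Names}}' | filter_control_plane`, followed by reading the matching logs under /var/log/containers.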

The error was caused by using too recent a version of Ansible.

Downgrading to Ansible 2.6 fixed the problem.
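(Since openshift-ansible for 3.11 requires the Ansible 2.6 series, a guard in a wrapper script can catch this before the playbooks run. A minimal sketch; the version strings passed in below are illustrative, and in practice you would feed it the first line of `ansible --version`:)

```shell
# Check an "ansible --version" first line against the 2.6 series
# required by openshift-ansible release-3.11
check_ansible_version() {
  # $1 is e.g. "ansible 2.6.20"
  case "$1" in
    ansible\ 2.6.*) echo "ok" ;;
    *) echo "unsupported" ;;
  esac
}

check_ansible_version "ansible 2.6.20"   # ok
check_ansible_version "ansible 2.8.4"    # unsupported
```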

Jacquard answered 4/9, 2019 at 23:39 Comment(2)
Can you explain this in more detail? What did Ansible do that caused this? – Darcydarda
It's not so much that Ansible caused this; it's that the openshift-ansible project requires Ansible 2.6. – Jacquard

Along with Step 6: make sure that hostname and hostname -f both return the FQDN for your hosts.

https://github.com/openshift/openshift-ansible/issues/10798
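(The check above can be scripted so it is easy to run on every host. A minimal sketch; the example.com hostnames are illustrative, and on a real host you would call it as `check_fqdn "$(hostname)" "$(hostname -f)"`:)

```shell
# Verify that `hostname` and `hostname -f` agree and look like an FQDN
check_fqdn() {
  # $1 = output of `hostname`, $2 = output of `hostname -f`
  # An FQDN should contain at least one dot, and both values should match
  if [ "$1" = "$2" ] && [ "${2#*.}" != "$2" ]; then
    echo "ok"
  else
    echo "mismatch"
  fi
}

check_fqdn "master.example.com" "master.example.com"   # ok
check_fqdn "master" "master.example.com"               # mismatch
```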

Christabella answered 3/9, 2019 at 2:15 Comment(1)
Yes, hostname and hostname -f are the same. – Jacquard
