Kubespray fails with "Found multiple CRI sockets, please use --cri-socket to select one"

Problem encountered

When deploying a cluster with Kubespray, CRI-O, and Cilium, I get an error about having multiple CRI sockets to choose from.

Full error

fatal: [p3kubemaster1]: FAILED! => {"changed": true, "cmd": " mkdir -p /etc/kubernetes/external_kubeconfig &&  /usr/local/bin/kubeadm  init phase   kubeconfig admin --kubeconfig-dir /etc/kubernetes/external_kubeconfig  --cert-dir /etc/kubernetes/ssl --apiserver-advertise-address 10.10.3.15 --apiserver-bind-port 6443  >/dev/null && cat /etc/kubernetes/external_kubeconfig/admin.conf && rm -rf /etc/kubernetes/external_kubeconfig ", "delta": "0:00:00.028808", "end": "2019-09-02 13:01:11.472480", "msg": "non-zero return code", "rc": 1, "start": "2019-09-02 13:01:11.443672", "stderr": "Found multiple CRI sockets, please use --cri-socket to select one: /var/run/dockershim.sock, /var/run/crio/crio.sock", "stderr_lines": ["Found multiple CRI sockets, please use --cri-socket to select one: /var/run/dockershim.sock, /var/run/crio/crio.sock"], "stdout": "", "stdout_lines": []}

Interesting part

kubeadm  init phase kubeconfig admin --kubeconfig-dir /etc/kubernetes/external_kubeconfig [...] >/dev/null,"stderr": "Found multiple CRI sockets, please use --cri-socket to select one: /var/run/dockershim.sock, /var/run/crio/crio.sock"}
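A quick way to see why kubeadm complains is to list the candidate sockets on the node (a minimal check, using the two paths from the error):

# Both sockets existing at once is what triggers kubeadm's auto-detection error:
ls -l /var/run/dockershim.sock /var/run/crio/crio.sock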

What I've tried

  • 1) I've tried to set the --cri-socket flag inside /var/lib/kubelet/kubeadm-flags.env:
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --cri-socket=/var/run/crio/crio.sock"

=> Makes no difference (the kubelet drop-in sketch after this list shows why)

  • 2) I've checked /etc/kubernetes/kubeadm-config.yaml, but it already contains the following section:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.10.3.15
  bindPort: 6443
certificateKey: 9063a1ccc9c5e926e02f245c06b8d9f2ff3xxxxxxxxxxxx
nodeRegistration:
  name: p3kubemaster1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  criSocket: /var/run/crio/crio.sock

=> It already ends with the criSocket key, so nothing to change there...

  • 3) I've tried to edit the Ansible script to add --cri-socket to the existing command, but it fails with error: unknown flag: --cri-socket

Existing:

{% if kubeadm_version is version('v1.14.0', '>=') %}
    init phase

Tried:

{% if kubeadm_version is version('v1.14.0', '>=') %}
    init phase --cri-socket /var/run/crio/crio.sock
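Regarding point 1: kubeadm never reads /var/lib/kubelet/kubeadm-flags.env; that file is only sourced by the kubelet's systemd drop-in, which is why setting --cri-socket there changes nothing. An abridged sketch of the drop-in (exact path and contents vary by distro and kubeadm version):

# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (abridged sketch)
[Service]
# kubeadm writes its detected runtime flags here; only the kubelet consumes them
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS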

Theories

It seems that the problem comes from the kubeadm init phase command, which does not accept the --cri-socket flag... (see point 3)

Even though the correct socket is set in the config file (see point 2), kubeadm init phase does not use it.
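One quick way to test this theory on the master is to ask the subcommand itself which flags it accepts (harmless to run, it only prints help):

# On the affected version, --cri-socket is not in the flag list, but --config is:
/usr/local/bin/kubeadm init phase kubeconfig admin --help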

Any ideas would be appreciated ;-)
Thanks!

Glassman answered 10/9, 2019 at 9:58 Comment(0)

I finally got it!

The initial kubespray command was:
kubeadm init phase kubeconfig admin --kubeconfig-dir {{ kube_config_dir }}/external_kubeconfig

⚠️ It seems that with only the --kubeconfig-dir flag, kubeadm ignores the configured CRI socket and trips over the multiple sockets it detects.

So I changed the line to:
kubeadm init phase kubeconfig admin --config /etc/kubernetes/kubeadm-config.yaml


For people having similar issues:

The InitConfiguration section that made it work on the master is the following:

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.10.3.15
  bindPort: 6443
certificateKey: 9063a1ccc9c5e926e02f245c06b8d9f2ff3c1eb2dafe5fbe2595ab4ab2d3eb1a
nodeRegistration:
  name: p3kubemaster1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  criSocket: /var/run/crio/crio.sock

In Kubespray you must update the file roles/kubernetes/client/tasks/main.yml, around line 57.

You'll have to comment out the initial --kubeconfig-dir section and replace it with the path of the InitConfiguration file.

For me it was generated by Kubespray in /etc/kubernetes/kubeadm-config.yaml on the kube master. Check that this file exists on your side and that it contains the criSocket key in the nodeRegistration section.
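In case it helps, here is roughly what the edit in roles/kubernetes/client/tasks/main.yml boils down to; the surrounding task and Jinja templating differ between Kubespray versions, so treat the exact lines as illustrative:

# Before (auto-detection trips over the multiple CRI sockets):
#   kubeadm init phase kubeconfig admin --kubeconfig-dir {{ kube_config_dir }}/external_kubeconfig
# After (kubeadm reads criSocket from nodeRegistration in the config file):
kubeadm init phase kubeconfig admin --config /etc/kubernetes/kubeadm-config.yaml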

Glassman answered 18/9, 2019 at 11:57 Comment(0)

This worked for me with multiple CRI sockets:

kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket=unix:///var/run/cri-dockerd.sock

To pull the images before initialization when multiple CRI sockets are present:

kubeadm config images pull --cri-socket=unix:///var/run/cri-dockerd.sock

You can choose the CRI socket path from the following table (see the original kubeadm documentation):

Runtime                             Path to Unix domain socket
containerd                          unix:///var/run/containerd/containerd.sock
CRI-O                               unix:///var/run/crio/crio.sock
Docker Engine (using cri-dockerd)   unix:///var/run/cri-dockerd.sock
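For example, on a CRI-O node the two commands together look like this (same flow as above, just with CRI-O's socket path from the table):

# Pre-pull the control-plane images, then initialize, both pinned to CRI-O:
kubeadm config images pull --cri-socket=unix:///var/run/crio/crio.sock
kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket=unix:///var/run/crio/crio.sock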
Haught answered 6/7, 2022 at 6:20 Comment(0)

I did some research and came upon this GitHub thread, which then pointed me to another one.

This seems to be a kubeadm issue that was already fixed, and the fix is available in v1.15. Could you upgrade to that version (I am not sure which one you are using, based on both of your questions that I have worked on) and see if the problem still persists?
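To check which kubeadm Kubespray actually installed on the node (the binary path is the one from the error output above):

# Prints just the version string, e.g. v1.15.3:
/usr/local/bin/kubeadm version -o short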

Inchmeal answered 13/9, 2019 at 9:22 Comment(3)
Thanks for your help. I tracked down which version of kubeadm is used by Kubespray: it's kubeadm v1.15.3, the latest one it seems. :-/ – Glassman
I tried kubeadm init on the master and it failed as expected. Next I ran kubeadm init phase --cri-socket /var/run/crio/crio.sock and it worked. So the issue reported on GitHub IS resolved. But in my case it's kubeadm init phase kubeconfig admin that needs the CRI socket, and that command fails with error: unknown flag: --cri-socket – Glassman
The issue remains how to combine the init phase command with the --cri-socket flag... – Glassman
