Kubernetes installation and kube-dns: open /run/flannel/subnet.env: no such file or directory

Overview

kube-dns can't start (SetupNetworkError) after kubeadm init and network setup:

Error syncing pod, skipping: failed to "SetupNetwork" for 
"kube-dns-654381707-w4mpg_kube-system" with SetupNetworkError: 
"Failed to setup network for pod 
\"kube-dns-654381707-w4mpg_kube-system(8ffe3172-a739-11e6-871f-000c2912631c)\" 
using network plugins \"cni\": open /run/flannel/subnet.env: 
no such file or directory; Skipping pod"

Kubernetes version

Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.4", GitCommit:"3b417cc4ccd1b8f38ff9ec96bb50a81ca0ea9d56", GitTreeState:"clean", BuildDate:"2016-10-21T02:48:38Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.4", GitCommit:"3b417cc4ccd1b8f38ff9ec96bb50a81ca0ea9d56", GitTreeState:"clean", BuildDate:"2016-10-21T02:42:39Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}

Environment

VMware Fusion for Mac

OS

NAME="Ubuntu"
VERSION="16.04.1 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.1 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial

Kernel (e.g. uname -a)

Linux ubuntu-master 4.4.0-47-generic #68-Ubuntu SMP Wed Oct 26 19:39:52 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

What is the problem

kube-system   kube-dns-654381707-w4mpg                0/3       ContainerCreating   0          2m
FirstSeen     LastSeen        Count   From                    SubobjectPath   Type            Reason          Message
  ---------     --------        -----   ----                    -------------   --------        ------          -------
  3m            3m              1       {default-scheduler }                    Normal          Scheduled       Successfully assigned kube-dns-654381707-w4mpg to ubuntu-master
  2m            1s              177     {kubelet ubuntu-master}                 Warning         FailedSync      Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-654381707-w4mpg_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-654381707-w4mpg_kube-system(8ffe3172-a739-11e6-871f-000c2912631c)\" using network plugins \"cni\": open /run/flannel/subnet.env: no such file or directory; Skipping pod"
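
For reference, output like the above comes from listing the pods and describing the affected one (the pod name is taken from the listing):

kubectl get pods --all-namespaces
kubectl describe pod kube-dns-654381707-w4mpg --namespace=kube-system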

What I expected to happen

kube-dns Running

How to reproduce it

root@ubuntu-master:~# kubeadm init
Running pre-flight checks
<master/tokens> generated token: "247a8e.b7c8c1a7685bf204"
<master/pki> generated Certificate Authority key and certificate:
Issuer: CN=kubernetes | Subject: CN=kubernetes | CA: true
Not before: 2016-11-10 11:40:21 +0000 UTC Not After: 2026-11-08 11:40:21 +0000 UTC
Public: /etc/kubernetes/pki/ca-pub.pem
Private: /etc/kubernetes/pki/ca-key.pem
Cert: /etc/kubernetes/pki/ca.pem
<master/pki> generated API Server key and certificate:
Issuer: CN=kubernetes | Subject: CN=kube-apiserver | CA: false
Not before: 2016-11-10 11:40:21 +0000 UTC Not After: 2017-11-10 11:40:21 +0000 UTC
Alternate Names: [172.20.10.4 10.96.0.1 kubernetes kubernetes.default     kubernetes.default.svc kubernetes.default.svc.cluster.local]
Public: /etc/kubernetes/pki/apiserver-pub.pem
Private: /etc/kubernetes/pki/apiserver-key.pem
Cert: /etc/kubernetes/pki/apiserver.pem
<master/pki> generated Service Account Signing keys:
Public: /etc/kubernetes/pki/sa-pub.pem
Private: /etc/kubernetes/pki/sa-key.pem
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready
<master/apiclient> all control plane components are healthy after 14.053453 seconds
<master/apiclient> waiting for at least one node to register and become ready
<master/apiclient> first node is ready after 0.508561 seconds
<master/apiclient> attempting a test deployment
<master/apiclient> test deployment succeeded
<master/discovery> created essential addon: kube-discovery, waiting for it to become ready
<master/discovery> kube-discovery is ready after 1.503838 seconds
<master/addons> created essential addon: kube-proxy
<master/addons> created essential addon: kube-dns

Kubernetes master initialised successfully!

You can now join any number of machines by running the following on each node:

kubeadm join --token=247a8e.b7c8c1a7685bf204 172.20.10.4
root@ubuntu-master:~# 
root@ubuntu-master:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS              RESTARTS   AGE
kube-system   dummy-2088944543-eo1ua                  1/1       Running             0          47s
kube-system   etcd-ubuntu-master                      1/1       Running             3          51s
kube-system   kube-apiserver-ubuntu-master            1/1       Running             0          49s
kube-system   kube-controller-manager-ubuntu-master   1/1       Running             3          51s
kube-system   kube-discovery-1150918428-qmu0b         1/1       Running             0          46s
kube-system   kube-dns-654381707-mv47d                0/3       ContainerCreating   0          44s
kube-system   kube-proxy-k0k9q                        1/1       Running             0          44s
kube-system   kube-scheduler-ubuntu-master            1/1       Running             3          51s
root@ubuntu-master:~# 
root@ubuntu-master:~# kubectl apply -f https://git.io/weave-kube
daemonset "weave-net" created
root@ubuntu-master:~# 
root@ubuntu-master:~# 
root@ubuntu-master:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS              RESTARTS   AGE
kube-system   dummy-2088944543-eo1ua                  1/1       Running             0          47s
kube-system   etcd-ubuntu-master                      1/1       Running             3          51s
kube-system   kube-apiserver-ubuntu-master            1/1       Running             0          49s
kube-system   kube-controller-manager-ubuntu-master   1/1       Running             3          51s
kube-system   kube-discovery-1150918428-qmu0b         1/1       Running             0          46s
kube-system   kube-dns-654381707-mv47d                0/3       ContainerCreating   0          44s
kube-system   kube-proxy-k0k9q                        1/1       Running             0          44s
kube-system   kube-scheduler-ubuntu-master            1/1       Running             3          51s
kube-system   weave-net-ja736                         2/2       Running             0          1h
Asked 10/11, 2016 at 18:43

Answer (18 votes)

It looks like you configured flannel before running kubeadm init. You can try to fix this by removing the flannel configuration (it may be sufficient to remove the CNI config file: rm -f /etc/cni/net.d/*flannel*), but it is best to start fresh.
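
A minimal sketch of that cleanup, assuming the default CNI config directory, a systemd-managed kubelet, and the pod name shown in the question's listing:

# Remove any stale flannel CNI config left over from the earlier setup
rm -f /etc/cni/net.d/*flannel*

# Restart the kubelet so it re-reads the CNI configuration
systemctl restart kubelet

# Delete the stuck kube-dns pod (name from the listing above);
# its Deployment recreates it with the new network configuration
kubectl --namespace=kube-system delete pod kube-dns-654381707-w4mpg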

Snaky answered 12/11, 2016 at 2:54

Comments:
If there is a misconfiguration you will see more than one file; move the stray one (in my case 10-flannel..) to a different location (for example your home directory) and then restart the pods :-) – Utoaztecan
Should this be done on all the nodes, or just the master node? – Adore
Just a note, as it took a while for the penny to drop: in my case the flannel files were not cleaned up after attempting to uninstall a separate Kubernetes distribution (I moved from kurl.sh, which used flannel, to k0s, which does not). So it was not a misconfiguration of the current distribution, just a failed cleanup after a previous attempt to uninstall Kubernetes. https://mcmap.net/q/692399/-kubernetes-cannot-cleanup-flannel/498253 got me the rest of the way and augments this answer nicely. – Boling
Answer (5 votes)

Open the file below (create it if it does not exist) and paste in the following data:

vim /run/flannel/subnet.env

FLANNEL_NETWORK=10.240.0.0/16
FLANNEL_SUBNET=10.240.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
Taverner answered 9/2, 2023 at 8:29

Comments:
Make sure to use the same CIDR that you used with "kubeadm init", e.g. "kubeadm init --pod-network-cidr=10.235.0.0/16". – Extemporary
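
A non-interactive sketch of the same fix; the values below are the example values from this answer and should be replaced to match the pod network CIDR your cluster was initialised with (the --pod-network-cidr passed to kubeadm init, as the comment above notes). It also assumes the DNS pods carry the usual k8s-app=kube-dns label, as they do in standard kubeadm clusters.

# Create the file that flannel would normally write at startup
mkdir -p /run/flannel
cat > /run/flannel/subnet.env <<'EOF'
FLANNEL_NETWORK=10.240.0.0/16
FLANNEL_SUBNET=10.240.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF

# Delete the stuck DNS pod so it is retried with the file in place
kubectl --namespace=kube-system delete pod -l k8s-app=kube-dns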
