K8s coredns and flannel nameserver limit exceeded [closed]
I have been trying to set up k8s on a single node. Everything installed fine, but when I checked the status of my kube-system pods:

The CNI (flannel) pod has crashed with the reason: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: x.x.x.x x.x.x.x x.x.x.x

The CoreDNS pods are stuck in ContainerCreating.

In my office, the server is configured with a static IP, and when I checked /etc/resolv.conf, this is the output:

# Generated by NetworkManager
search ORGDOMAIN.BIZ
nameserver 192.168.1.12
nameserver 192.168.2.137
nameserver 192.168.2.136
# NOTE: the libc resolver may not support more than 3 nameservers.
# The nameservers listed below may not be recognized.
nameserver 192.168.1.10
nameserver 192.168.1.11

I'm unable to find the root cause. What should I be looking at?

Insensitive answered 24/1, 2020 at 5:32 Comment(1)
Rolling back to an older version helped me. Full explanation here: GitHub: kube-proxy pods continuously CrashLoopBackOff #118461. sudo apt-get install -y kubelet=1.23.17-00 kubeadm=1.23.17-00 kubectl=1.23.17-00 on Ubuntu 22.04 LTS. – Phenobarbital

In short, you have too many entries in /etc/resolv.conf.

This is a known issue:

Some Linux distributions (e.g. Ubuntu), use a local DNS resolver by default (systemd-resolved). Systemd-resolved moves and replaces /etc/resolv.conf with a stub file that can cause a fatal forwarding loop when resolving names in upstream servers. This can be fixed manually by using kubelet’s --resolv-conf flag to point to the correct resolv.conf (With systemd-resolved, this is /run/systemd/resolve/resolv.conf). kubeadm (>= 1.11) automatically detects systemd-resolved, and adjusts the kubelet flags accordingly.
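
For instance, on a kubeadm-based node you can verify (and, if necessary, override) which resolv.conf kubelet reads. A minimal sketch, assuming the standard kubeadm config path /var/lib/kubelet/config.yaml:

# Check which resolv.conf kubelet currently consumes
grep resolvConf /var/lib/kubelet/config.yaml
# If it still points at the systemd-resolved stub, switch it to the real file
sudo sed -i 's|^resolvConf:.*|resolvConf: /run/systemd/resolve/resolv.conf|' /var/lib/kubelet/config.yaml
sudo systemctl restart kubelet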

Also

Linux’s libc is impossibly stuck (see this bug from 2005) with limits of just 3 DNS nameserver records and 6 DNS search records. Kubernetes needs to consume 1 nameserver record and 3 search records. This means that if a local installation already uses 3 nameservers or uses more than 3 searches, some of those settings will be lost. As a partial workaround, the node can run dnsmasq which will provide more nameserver entries, but not more search entries. You can also use kubelet’s --resolv-conf flag.
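
Building on that, a common workaround is to hand kubelet a trimmed copy of resolv.conf containing at most three nameservers. A sketch, where the target file name /etc/kubernetes/resolv.conf is my own arbitrary choice:

# Keep the search line and only the first three nameservers
{ grep '^search' /etc/resolv.conf; grep '^nameserver' /etc/resolv.conf | head -n 3; } | sudo tee /etc/kubernetes/resolv.conf
# Point kubelet at the trimmed file, then restart it
sudo sed -i 's|^resolvConf:.*|resolvConf: /etc/kubernetes/resolv.conf|' /var/lib/kubelet/config.yaml
sudo systemctl restart kubelet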

If you are using Alpine version 3.3 or earlier as your base image, DNS may not work properly owing to a known issue with Alpine. Check here for more information.

You could possibly change those limits in the Kubernetes source, but I wouldn't rely on that; they are set to those values on purpose.

The code can be located here:

const (
    // Limits on various DNS parameters. These are derived from
    // restrictions in Linux libc name resolution handling.
    // Max number of DNS name servers.
    MaxDNSNameservers = 3
    // Max number of domains in search path.
    MaxDNSSearchPaths = 6
    // Max number of characters in search path.
    MaxDNSSearchListChars = 256
)
Fenestra answered 24/1, 2020 at 13:39 Comment(2)
Note that /run/systemd/resolve/resolv.conf will be auto-recreated after each k8s restart (it is apparently not modifiable despite such errors?) – Insensible
No, this is not helpful at all, because /run/systemd/resolve/resolv.conf itself has more than three entries (actually two, but duplicated because of IPv6). So not useful? – Guardsman

I have the same issue but only three entries in my resolv.conf.

Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 1.1.1.1

My resolv.conf:

nameserver 10.96.0.10 
nameserver 1.1.1.1
nameserver 1.0.0.1
options timeout:1

But indeed my /run/systemd/resolve/resolv.conf had redundant DNS entries:

nameserver 10.96.0.10
nameserver 1.1.1.1
nameserver 1.0.0.1
# Too many DNS servers configured, the following entries may be ignored.
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 2606:4700:4700::1111
nameserver 2606:4700:4700::1001
search .

When I erase all the 1.1.1.1 and 1.0.0.1 entries, they reappear, duplicated, as soon as the systemd-resolved service restarts...
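
One thing worth trying (an assumption on my part, since the duplicates look like systemd-resolved merging global and per-link DNS settings) is to pin the resolvers once in /etc/systemd/resolved.conf:

# Pin the resolvers globally; duplicates coming from per-link (DHCP/netplan)
# configuration may still need to be removed at their source
sudo sed -i 's|^#\?DNS=.*|DNS=1.1.1.1 1.0.0.1|' /etc/systemd/resolved.conf
sudo systemctl restart systemd-resolved
resolvectl status   # verify which servers are actually applied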

Matins answered 29/9, 2023 at 8:32 Comment(1)
Did you manage to fix it? – Figurate
