CoreDNS does not resolve service names correctly
I use Kubernetes v1.11.3, which uses CoreDNS to resolve hostnames and service names, but I find that inside a pod name resolution does not work correctly.

# kubectl get services --all-namespaces -o wide
NAMESPACE     NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE       SELECTOR
default       kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP          50d       <none>
kube-system   calico-etcd   ClusterIP   10.96.232.136   <none>        6666/TCP         50d       k8s-app=calico-etcd
kube-system   kube-dns      ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP    50d       k8s-app=kube-dns
kube-system   kubelet       ClusterIP   None            <none>        10250/TCP        32d       <none>
testalex      grafana       NodePort    10.96.51.173    <none>        3000:30002/TCP   2d        app=grafana
testalex      k8s-alert     NodePort    10.108.150.47   <none>        9093:30093/TCP   13m       app=alertmanager
testalex      prometheus    NodePort    10.96.182.108   <none>        9090:30090/TCP   16m       app=prometheus

The following command gets no response:

# kubectl exec -it k8s-monitor-7ddcb74b87-n6jsd -n testalex /bin/bash
[root@k8s-monitor-7ddcb74b87-n6jsd /]# ping k8s-alert
PING k8s-alert.testalex.svc.cluster.local (10.108.150.47) 56(84) bytes of data.

and CoreDNS produces no log output:

# kubectl logs coredns-78fcdf6894-h78sd -n kube-system

I think something is wrong, but I cannot locate the problem. Another question: why are both CoreDNS pods on the master node? I would expect one on each node.

UPDATE

It seems CoreDNS works fine, but I do not understand why the ping command gets no reply.

[root@k8s-monitor-7ddcb74b87-n6jsd yum.repos.d]# nslookup kubernetes.default
Server:         10.96.0.10
Address:        10.96.0.10#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.96.0.1

[root@k8s-monitor-7ddcb74b87-n6jsd yum.repos.d]# cat /etc/resolv.conf
nameserver 10.96.0.10
search testalex.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

# kubectl get ep kube-dns --namespace=kube-system

NAME       ENDPOINTS                                                        AGE
kube-dns   192.168.121.3:53,192.168.121.4:53,192.168.121.3:53 + 1 more...   50d

Also, the DNS server cannot be reached:

# kubectl exec -it k8s-monitor-7ddcb74b87-n6jsd -n testalex /bin/bash
[root@k8s-monitor-7ddcb74b87-n6jsd /]# cat /etc/resolv.conf
nameserver 10.96.0.10
search testalex.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
[root@k8s-monitor-7ddcb74b87-n6jsd /]# ping 10.96.0.10
PING 10.96.0.10 (10.96.0.10) 56(84) bytes of data.
^C
--- 10.96.0.10 ping statistics ---
9 packets transmitted, 0 received, 100% packet loss, time 8000ms

I think I may have misconfigured the network. This is my cluster init command:

 kubeadm init --kubernetes-version=v1.11.3  --apiserver-advertise-address=10.100.1.20 --pod-network-cidr=172.16.0.0/16 

and this is the Calico IP pool:

# kubectl exec -it calico-node-77m9l -n kube-system /bin/sh
Defaulting container name to calico-node.
Use 'kubectl describe pod/calico-node-77m9l -n kube-system' to see all of the containers in this pod.
/ # cd /tmp
/tmp # ls
calicoctl  tunl-ip
/tmp # ./calicoctl get ipPool
CIDR
192.168.0.0/16
Perseverance answered 16/11, 2018 at 6:56
You can start by checking whether DNS is working.

Run nslookup on kubernetes.default from inside the pod k8s-monitor-7ddcb74b87-n6jsd and check whether it works:

[root@k8s-monitor-7ddcb74b87-n6jsd /]# nslookup kubernetes.default
Server:     10.96.0.10
Address:    10.96.0.10#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.96.0.1

If this returns output, then CoreDNS is working. If not, look at /etc/resolv.conf inside the pod k8s-monitor-7ddcb74b87-n6jsd; it should look something like this:

[root@metrics-master-2 /]# cat /etc/resolv.conf 
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local ec2.internal
options ndots:5

Finally, check that the CoreDNS endpoints are exposed:

kubectl get ep kube-dns --namespace=kube-system
NAME       ENDPOINTS                       AGE
kube-dns   10.180.3.17:53,10.180.3.17:53    1h

You can verify whether queries are being received by CoreDNS by adding the log plugin to the CoreDNS configuration (the Corefile). The Corefile is held in a ConfigMap named coredns.
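For reference, a rough sketch of what that edit looks like (the surrounding plugins shown here are the kubeadm v1.11-era defaults and may differ in your cluster):

# kubectl -n kube-system edit configmap coredns
.:53 {
    errors
    log                       # add this line to log every query CoreDNS receives
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        upstream
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    proxy . /etc/resolv.conf
    cache 30
    reload
}

If the reload plugin is not enabled, restart the CoreDNS pods to pick up the change, e.g. kubectl -n kube-system delete pod -l k8s-app=kube-dns.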

Hope this helps.

EDIT:

You might be having this issue; please have a look:

https://github.com/kubernetes/kubeadm/issues/1056
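Your own outputs also hint at a likely candidate: kubeadm was initialized with --pod-network-cidr=172.16.0.0/16, but the Calico IP pool is 192.168.0.0/16. If that mismatch is the cause, this is a rough sketch of aligning the two (the env var below is from the stock calico.yaml manifest; adjust for your Calico version):

# Option 1: make Calico use the CIDR you gave kubeadm.
# In calico.yaml, before applying it, set:
            - name: CALICO_IPV4POOL_CIDR
              value: "172.16.0.0/16"   # must match kubeadm's --pod-network-cidr

# Option 2: re-initialize the cluster with Calico's default pool instead:
kubeadm init --kubernetes-version=v1.11.3 --apiserver-advertise-address=10.100.1.20 --pod-network-cidr=192.168.0.0/16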

Gradate answered 16/11, 2018 at 7:17
You cannot always ping the IP address or hostname of a service, because its cluster IP is a virtual IP.

A service's cluster IP is a virtual IP and only has meaning when combined with the service port. You can try the same via the SRV record (the combination of virtual IP and port); see Kubernetes in Action by Marko Lukša.
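Since the cluster IP only answers on the service port, here is a rough sketch of checks that should work where ping does not (port 9093 is taken from the k8s-alert service in the question; the SRV lookup assumes the service port is named, here hypothetically "web"):

nslookup k8s-alert.testalex.svc.cluster.local                        # A record: returns the virtual IP
nslookup -type=SRV _web._tcp.k8s-alert.testalex.svc.cluster.local    # SRV record: virtual IP plus port ("web" is a hypothetical port name)
curl -v http://k8s-alert.testalex.svc.cluster.local:9093             # probe the actual service port instead of ICMP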

Galah answered 10/11, 2019 at 11:32
Thanks for the answer. This is the output (the IPs are certainly not the real ones).

[root@master ~]# nslookup kubernetes.default
Server:         203.150.92.12
Address:        203.150.92.12#53

** server can't find kubernetes.default: NXDOMAIN

[root@master ~]# kubectl cluster-info
Kubernetes master is running at https://203.150.72.81:6443
coredns is running at https://203.150.72.81:6443/api/v1/namespaces/kube-system/services/coredns:dns/proxy
kubernetes-dashboard is running at https://203.150.72.81:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
metrics-server is running at https://203.150.72.81:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@master ~]# cat /etc/resolv.conf
search invalid
nameserver 203.150.92.12
nameserver 203.150.92.10
nameserver 1111:c207::2:55
[root@master ~]# kubectl get ep kube-dns --namespace=kube-system
Error from server (NotFound): endpoints "kube-dns" not found
[root@master ~]#
Papeterie answered 25/12, 2018 at 10:35
I think the reason you cannot get ping to work is that iptables is used to redirect requests for the service cluster IP to the correct pods. The iptables rules only redirect traffic sent to the service cluster IP on the exposed ports; the ICMP request is never redirected to the real endpoints.
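A rough sketch of how you can confirm this on a node (the chain name and rule layout are the kube-proxy defaults in iptables mode):

# List the NAT rules kube-proxy installed for the k8s-alert ClusterIP:
iptables -t nat -S KUBE-SERVICES | grep 10.108.150.47
# The matching rule specifies "-p tcp --dport 9093" before jumping to a KUBE-SVC-* chain,
# so only TCP traffic to that port is DNAT-ed to a pod; ICMP echo requests match no rule
# and are never forwarded to a real endpoint.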

Maturate answered 30/7, 2019 at 6:30
Comment from Lavalava: I have posted a similar question; could you take a look? #72002087
