Can't see logs of kubernetes pod

After installing a Kubernetes cluster following this guide, I decided to check the logs of the system pod kube-scheduler to make sure everything works fine:

kubectl logs --namespace kube-system kube-scheduler-user223225-pc

but I got the following error message:

Error from server: Get https://10.2.2.131:10250/containerLogs/kube-system/kube-scheduler-user-pc/kube-scheduler: dial tcp 10.2.2.131:10250: getsockopt: no route to host
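From what I understand, this error means the API server cannot reach the kubelet on the node at 10.2.2.131:10250, the port it proxies to when fetching container logs. A quick sanity check of which address the cluster has recorded for the node (node name taken from the pod name above):

kubectl get nodes -o wide
kubectl describe node user223225-pc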

I tried to get logs from other pods and got the same error.

I run the cluster on Ubuntu 16.04 and chose the flannel network, which I installed using the following commands:

kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml
kubectl create --namespace kube-system -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
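To check whether the flannel and other kube-system pods actually came up, one can list them with (output omitted here):

kubectl get pods --namespace kube-system -o wide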

Maybe I missed something. I also saw people suggest reconfiguring the firewall, but it didn't help me:

sudo systemctl stop kubelet
sudo systemctl stop docker
sudo ifconfig cni0 down
sudo ifconfig flannel.1 down
sudo ifconfig docker0 down

sudo service docker start
sudo service kubelet start

sudo iptables -A FORWARD -i cni0 -j ACCEPT
sudo iptables -A FORWARD -o cni0 -j ACCEPT
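To confirm those rules actually landed in the FORWARD chain, they can be listed with the command below. Note that rules added this way do not survive a reboot unless persisted (e.g. with the iptables-persistent package on Ubuntu):

sudo iptables -L FORWARD -v -n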

Does anyone know how to fix the issue with getting logs? Thank you in advance.

Camacho answered 23/6, 2017 at 8:56 Comment(0)

The Kubernetes process logs are written to the node's syslog; you can look at the /var/log/syslog file.
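For example, to pull only the kubelet entries out of syslog (plain grep; the path assumes a default Ubuntu setup):

grep kubelet /var/log/syslog | tail -n 20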

To validate the cluster configuration, use the kubectl command.

e.g.

kubectl get nodes
kubectl get pods -o wide

You can also install the dashboard UI to inspect the cluster.
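At the time of this answer, the dashboard was typically deployed from a manifest in the kubernetes/dashboard repository and accessed through kubectl proxy, roughly as follows (the manifest URL has moved over the years, so check that project's README for the current one):

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
kubectl proxy

With the proxy running, the UI is served through localhost:8001.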

Unhair answered 23/6, 2017 at 13:0 Comment(10)
I have tried through the dashboard UI too, but I got the same message. All other commands such as kubectl get nodes or kubectl get pods work fine. – Camacho
Did you check the master node's /var/log/syslog file? – Unhair
Yes, I checked and saw nothing suspicious after running the logs command. – Camacho
Log messages look like this: Jun 23 18:47:40 user223225-pc kubelet[5185]: I0623 18:47:40.065041 5185 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/5da95967-57ec-11e7-a9de-002522f9706f-flannel-token-bq608" (spec.Name: "flannel-token-bq608") pod "5da95967-57ec-11e7-a9de-002522f9706f" (UID: "5da95967-57ec-11e7-a9de-002522f9706f"). – Camacho
I think you have a network issue with your cluster. The command 'kubectl logs -n kube-system kube-scheduler-cm-01' works for me in my cluster. Can you give some more details about your environment? Are you using Vagrant? – Unhair
No, I don't use Vagrant. What kind of information do you need about my environment? – Camacho
Can you post the output of 'kubectl get pods -n kube-system -o wide'? Does the kube-scheduler IP match the master node IP? – Unhair
No, it doesn't match: kube-scheduler-user223225-pc 1/1 Running 3 2d 10.2.2.131 user223225-pc, while ifconfig shows enp5s0 Link encap:Ethernet inet addr:10.2.3.216 – Camacho
You pointed me in the right direction :) The problem was with /etc/hosts -- the old IP address was set against the machine name. I've changed it and reinstalled with kubeadm, and after that the logs started working! – Camacho
Update your answer based on our discussion and I'll accept it. – Camacho

From the discussion between @sfgroups and @Kirill Liubun:

  • The root cause of the issue: an old IP address was set in /etc/hosts and was used during the Kubernetes cluster set-up. As a result, there was a mismatch between the IP address recorded for the kube-scheduler pod and the master node's actual IP.

  • The issue was resolved by fixing /etc/hosts (changing the IP address to the correct one) and reinstalling the Kubernetes cluster using kubeadm, as sketched below.
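A minimal sketch of the fix, using the addresses from the comment thread above (10.2.2.131 was the stale /etc/hosts entry, 10.2.3.216 the machine's actual address on enp5s0):

# /etc/hosts: replace the stale entry for the machine name
#   10.2.2.131  user223225-pc    <- old, stale
10.2.3.216  user223225-pc

# then rebuild the cluster so all components pick up the correct IP
sudo kubeadm reset
sudo kubeadm init --apiserver-advertise-address=10.2.3.216

The --apiserver-advertise-address flag is optional here; it just makes the chosen address explicit instead of relying on kubeadm's default interface lookup.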

Cousteau answered 16/3, 2021 at 18:52 Comment(0)
