How to troubleshoot metrics-server on kubeadm?

I have a working 1.15.1 Kubernetes cluster created with kubeadm on bare metal, and I just deployed metrics-server as described in the docs:

git clone https://github.com/kubernetes-incubator/metrics-server.git
kubectl create -f metrics-server/deploy/1.8+/

After some time, when I try kubectl top node, I get the response:

error: metrics not available yet

Also when I try kubectl top pods I get:

W0721 20:01:31.786615 21232 top_pod.go:266] Metrics not available for pod default/pod-deployment-57b99df6b4-khh84, age: 27h31m59.78660593s error: Metrics not available for pod default/pod-deployment-57b99df6b4-khh84, age: 27h31m59.78660593s

I checked the pod and service for metrics-server and both are running fine. Where should I look to find the problem?
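
For reference, this is roughly how I checked (assuming metrics-server was deployed into the kube-system namespace, as the manifests above do):

# pod and service status
kubectl -n kube-system get pods -l k8s-app=metrics-server
kubectl -n kube-system get svc metrics-server

# API registration and container logs
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl -n kube-system logs deploy/metrics-server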

Richers answered 21/7, 2019 at 23:22 Comment(1)
Can you add the logs of the metrics-server to the question? – Road

Edit the metrics-server deployment as described in Subramanian Manickam's answer; you can also do it with

$ kubectl edit deploy -n kube-system metrics-server

That will open the deployment YAML in a text editor, where you can make the following changes:

Under spec.template.spec.containers, on the same level as name: metrics-server, add

args:
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --metric-resolution=30s

and then under spec.template.spec at the same level as containers I also had to add:

hostNetwork: true

to get metrics-server working with the CNI (Calico in my case).
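
If you prefer not to edit the deployment interactively, the same two changes can be applied with a JSON patch. This is only a sketch of the idea; it assumes metrics-server is the first (and only) container in the pod template:

kubectl -n kube-system patch deployment metrics-server --type=json -p='[
  {"op": "add", "path": "/spec/template/spec/containers/0/args", "value": [
    "--kubelet-insecure-tls",
    "--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname",
    "--metric-resolution=30s"]},
  {"op": "add", "path": "/spec/template/spec/hostNetwork", "value": true}
]'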

Afterwards your deployment yaml should look something like this:

[...]
spec:
  [...]
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: metrics-server
      name: metrics-server
    spec:
      containers:
      - args:
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-insecure-tls
        - --metric-resolution=30s
        image: k8s.gcr.io/metrics-server-amd64:v0.3.3
        imagePullPolicy: Always
        name: metrics-server
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      dnsPolicy: ClusterFirst
      hostNetwork: true
[...]

After that it took about 10-15s for kubectl top pods to return some data.
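
To check whether the metrics API itself is reachable, you can also inspect the APIService registration; once everything works, its Available condition should be True:

kubectl get apiservice v1beta1.metrics.k8s.io
kubectl get apiservice v1beta1.metrics.k8s.io -o yaml   # full status, including any error message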

Heterolysis answered 25/7, 2019 at 10:13 Comment(3)
I am also using Calico with hostNetwork... where do you see the difference if you add hostNetwork? – Richers
When a pod is configured with hostNetwork: true, the applications running in it can directly see the network interfaces of the host machine where the pod was started. For me that fixed it, because as far as I understood there was an issue reaching the metrics-server even after adding the args. When running kubectl get apiservice v1beta1.metrics.k8s.io -o yaml without hostNetwork enabled, there was an error reaching the service; with it, there isn't. – Heterolysis
While this answer works, I'm not sure that it should be followed. The Kubernetes Metrics Server GitHub page clearly indicates --kubelet-insecure-tls should be used for testing purposes only. As this is the #1 answer on Google, could anyone please elaborate on this? – Subtype

You have to add this command section after line #33 in the metrics-server-deployment.yaml file.

  command:
    - /metrics-server
    - --kubelet-preferred-address-types=InternalIP
    - --kubelet-insecure-tls

Once you have updated the file, you have to re-deploy the pod.
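
A minimal way to do that, assuming you cloned the repository as in the question and edited metrics-server/deploy/1.8+/metrics-server-deployment.yaml in place:

# re-apply the edited manifest; the changed pod template triggers a new rollout
kubectl apply -f metrics-server/deploy/1.8+/metrics-server-deployment.yaml

# wait for the new pod to become ready
kubectl -n kube-system rollout status deployment metrics-server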

Typhoon answered 22/7, 2019 at 8:52 Comment(0)
