Prometheus node-exporter on Kubernetes

I have deployed Prometheus on a Kubernetes cluster (EKS). I was able to successfully scrape Prometheus and Traefik with the following:

scrape_configs:
  # A scrape configuration containing exactly one endpoint to scrape:

  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['prometheus.kube-monitoring.svc.cluster.local:9090']

  - job_name: 'traefik'
    static_configs:
      - targets: ['traefik.kube-system.svc.cluster.local:8080']

But node-exporter, deployed as a DaemonSet with the following definition, is not exposing the node metrics:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-monitoring
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      name: node-exporter
      labels:
        app: node-exporter
    spec:
      hostNetwork: true
      hostPID: true
      containers:
      - name: node-exporter
        image: prom/node-exporter:v0.18.1
        args:
        - "--path.procfs=/host/proc"
        - "--path.sysfs=/host/sys"
        ports:
        - containerPort: 9100
          hostPort: 9100
          name: scrape
        resources:
          requests:
            memory: 30Mi
            cpu: 100m
          limits:
            memory: 50Mi
            cpu: 200m
        volumeMounts:
        - name: proc
          readOnly: true
          mountPath: /host/proc
        - name: sys
          readOnly: true
          mountPath: /host/sys
      tolerations:
        - effect: NoSchedule
          operator: Exists
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: sys
        hostPath:
          path: /sys

and the following scrape_configs in Prometheus:

scrape_configs:
  - job_name: 'kubernetes-nodes'
    scheme: http
    kubernetes_sd_configs:
    - role: node
    relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)
    - target_label: __address__
      replacement: kubernetes.kube-monitoring.svc.cluster.local:9100
    - source_labels: [__meta_kubernetes_node_name]
      regex: (.+)
      target_label: __metrics_path__
      replacement: /api/v1/nodes/${1}/proxy/metrics 

I also tried to curl http://localhost:9100/metrics from one of the containers, but got: curl: (7) Failed to connect to localhost port 9100: Connection refused

What am I missing here with the configuration?

After the suggestion to install Prometheus with Helm, I installed it on a test cluster and compared my original configuration with the Helm-installed Prometheus.

The following pods were running:

NAME                                                     READY   STATUS    RESTARTS   AGE
alertmanager-prometheus-prometheus-oper-alertmanager-0   2/2     Running   0          4m33s
prometheus-grafana-66c7bcbf4b-mh42x                      2/2     Running   0          4m38s
prometheus-kube-state-metrics-7fbb4697c-kcskq            1/1     Running   0          4m38s
prometheus-prometheus-node-exporter-6bf9f                1/1     Running   0          4m38s
prometheus-prometheus-node-exporter-gbrzr                1/1     Running   0          4m38s
prometheus-prometheus-node-exporter-j6l9h                1/1     Running   0          4m38s
prometheus-prometheus-oper-operator-648f9ddc47-rxszj     1/1     Running   0          4m38s
prometheus-prometheus-prometheus-oper-prometheus-0       3/3     Running   0          4m23s

I didn't find any configuration for node-exporter in the pod prometheus-prometheus-prometheus-oper-prometheus-0 at /etc/prometheus/prometheus.yml.

Astrosphere answered 9/7, 2019 at 19:40 Comment(1)
Seems like you are using the Prometheus Operator. Did you create a ServiceMonitor for node-exporter? Run kubectl get servicemonitors --all-namespaces to figure it out. - Eversion

The previous advice to use Helm is sound; I would also recommend it.

Regarding your issue: the thing is that you are not scraping the nodes directly; you are using node-exporter for that. So role: node is incorrect; you should use role: endpoints instead. For that, you also need to create a Service that covers all the pods of your DaemonSet.
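
For example, a minimal Service for the DaemonSet in the question might look like the sketch below. It is an illustration, not a definitive manifest: the Service label app: exporter-node and the port name metrics are assumptions chosen so that the keep rules in the scrape config that follows will match (adjust either side until they agree), and clusterIP: None (a headless Service) is optional but common for exporters, since Prometheus scrapes the individual endpoints anyway.

apiVersion: v1
kind: Service
metadata:
  name: node-exporter
  namespace: kube-monitoring
  labels:
    app: exporter-node   # matched by the __meta_kubernetes_service_label_app keep rule below
spec:
  clusterIP: None        # headless; Prometheus scrapes each pod endpoint directly
  selector:
    app: node-exporter   # must match the pod labels of the DaemonSet
  ports:
  - name: metrics        # matched by the __meta_kubernetes_endpoint_port_name keep rule below
    port: 9100
    targetPort: 9100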

Here is a working example from my environment (installed by Helm):

- job_name: monitoring/kube-prometheus-exporter-node/0
  scrape_interval: 15s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - monitoring
  relabel_configs:
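  # keep only endpoints whose Service carries the label app=exporter-node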
  - source_labels: [__meta_kubernetes_service_label_app]
    separator: ;
    regex: exporter-node
    replacement: $1
    action: keep
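  # keep only the endpoint port named 'metrics'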
  - source_labels: [__meta_kubernetes_endpoint_port_name]
    separator: ;
    regex: metrics
    replacement: $1
    action: keep
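  # the remaining rules copy service-discovery metadata into target labels (namespace, pod, service, job, endpoint)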
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: job
    replacement: ${1}
    action: replace
  - separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: metrics
    action: replace
Bromic answered 10/7, 2019 at 7:39 Comment(0)

How did you deploy Prometheus? Whenever I used the Helm chart (https://github.com/helm/charts/tree/master/stable/prometheus), the node-exporter was deployed as well. Maybe this is a simpler solution.

Sobriquet answered 9/7, 2019 at 20:4 Comment(2)
I did not use Helm. I know most of the tutorials use Helm. - Astrosphere
My advice: use it. It's no fun to do by hand everything that Helm does for you. Believe me, you don't want to maintain several standard application deployments. - Sobriquet

I was stuck at a similar place, but in my case the node-exporters are not part of a Helm deployment, since we get the node-exporter add-on from Tanzu Kubernetes Grid (a k8s cluster). I created the ServiceMonitor, and in service discovery I can now see the expected count. In the Targets section, however, it says 0/4 up. I am not able to see the node metrics, yet when I curl localhost:9100/metrics I can see the data. Somewhere I am missing the logic.

I checked the Helm-deployed node-exporter configuration and it looks the same, so what am I missing here?

Please excuse any indentation issues; this was copy-pasted on mobile.

- job_name: node-exporter
  scrape_interval: 15s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - monitoring
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_label_app]
    separator: ;
    regex: exporter-node
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_port_name]
    separator: ;
    regex: metrics
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: job
    replacement: ${1}
    action: replace
  - separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: metrics
    action: replace
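
For reference, a ServiceMonitor like the one mentioned above typically looks like the sketch below. The names, namespace, and labels here are illustrative assumptions, not taken from the cluster above; in particular, the metadata labels must match the serviceMonitorSelector of your Prometheus resource, and spec.selector must match the labels on the node-exporter Service, otherwise the operator either ignores the ServiceMonitor or discovers no endpoints.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: node-exporter
  namespace: monitoring
  labels:
    release: prometheus   # assumption: must match the Prometheus serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: exporter-node  # assumption: must match the labels on the node-exporter Service
  namespaceSelector:
    matchNames:
    - monitoring
  endpoints:
  - port: metrics         # the named port on the Service
    interval: 15s

Note that 0/4 up usually means discovery is working (the selectors match) and it is the scrape itself that fails, so in that case the address or port is more likely unreachable from the Prometheus pod.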
Aggappora answered 16/2, 2021 at 2:23 Comment(0)
