How to configure kube-prometheus-stack helm installation to scrape a Kubernetes service?
I have installed kube-prometheus-stack as a dependency of my Helm chart on a local Docker for Mac Kubernetes cluster (v1.19.7). I can view the default Prometheus targets provided by kube-prometheus-stack.

I have a Python Flask service that exposes metrics, which I can view successfully within the Kubernetes cluster using kubectl port-forward.

However, I am unable to get these metrics displayed on the Prometheus targets web interface.

The kube-prometheus-stack documentation states that annotation-based discovery of services via prometheus.io/scrape is not supported. Instead, the reader is referred to the concept of ServiceMonitors and PodMonitors.

So, I have configured my service as follows:

---
kind:                       Service
apiVersion:                 v1  
metadata:
  name:                     flask-api-service                    
  labels:
    app:                    flask-api-service
spec:
  ports:
    - protocol:             TCP 
      port:                 4444
      targetPort:           4444
      name:                 web 
  selector:
    app:                    flask-api-service                    
    tier:                   backend 
  type:                     ClusterIP
---
apiVersion:                 monitoring.coreos.com/v1
kind:                       ServiceMonitor
metadata:
  name:                     flask-api-service
spec:
  selector:
    matchLabels:
      app:                  flask-api-service
  endpoints:
  - port:                   web 

Subsequently, I set up a port forward to view the metrics:

kubectl port-forward prometheus-flaskapi-kube-prometheus-s-prometheus-0 9090

Then I visited the Prometheus web page at http://localhost:9090.

When I select the Status->Targets menu option, my flask-api-service is not displayed.

I know that the service is up and running, and I have checked that I can view the metrics for a pod of my flask-api-service using kubectl port-forward <pod name> 4444.

Looking at a similar issue, it appears there is a configuration value, serviceMonitorSelectorNilUsesHelmValues, that defaults to true. Setting this to false supposedly makes the operator look for ServiceMonitors outside of its own Helm release labels.

I tried adding this to the values.yaml of my Helm chart, in addition to the extraScrapeConfigs configuration value shown below. However, the flask-api-service still does not appear as an additional target on the Prometheus web page when I click the Status->Targets menu option.

prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
  extraScrapeConfigs: |
    - job_name: 'flaskapi'
      static_configs:
        - targets: ['flask-api-service:4444']
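
As an aside, the kube-prometheus-stack chart appears to accept static scrape configs under prometheusSpec.additionalScrapeConfigs rather than a top-level extraScrapeConfigs key (the latter belongs to the standalone prometheus chart), and since the chart is installed as a dependency, its values presumably need to be nested under a kube-prometheus-stack key in the parent chart's values.yaml. A sketch of what I believe the equivalent would look like, with the same job name and target:

kube-prometheus-stack:
  prometheus:
    prometheusSpec:
      serviceMonitorSelectorNilUsesHelmValues: false
      additionalScrapeConfigs:
        - job_name: 'flaskapi'
          static_configs:
            - targets: ['flask-api-service:4444']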

How do I get my flask-api-service recognised on the Prometheus targets page at http://localhost:9090?

I am installing kube-prometheus-stack as a dependency via my Helm chart, with default values, as shown below:

Chart.yaml

apiVersion: v2
appVersion: "0.0.1"
description: A Helm chart for flaskapi deployment
name: flaskapi
version: 0.0.1
dependencies:
- name: kube-prometheus-stack
  version: "14.4.0"
  repository: "https://prometheus-community.github.io/helm-charts"
- name: ingress-nginx
  version: "3.25.0"
  repository: "https://kubernetes.github.io/ingress-nginx"
- name: redis
  version: "12.9.0"
  repository: "https://charts.bitnami.com/bitnami"

values.yaml

docker_image_tag: dcs3spp/
hostname: flaskapi-service
redis_host: flaskapi-redis-master.default.svc.cluster.local 
redis_port: "6379"

prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
  extraScrapeConfigs: |
    - job_name: 'flaskapi'
      static_configs:
        - targets: ['flask-api-service:4444']
Imprint answered 30/3, 2021 at 16:51. Comments (2):
Please share the values.yaml file (just the values you override) that you used to install Prometheus via the kube-prometheus-stack Helm chart. - Crosscheck
Thanks, details added to the question. - Imprint

The Prometheus custom resource definition has a field called serviceMonitorSelector, and Prometheus only watches ServiceMonitors matched by that selector. In the case of a Helm deployment, the selector matches on your release name:

release: {{ $.Release.Name | quote }}

So adding this label to your ServiceMonitor should solve the issue. Your ServiceMonitor manifest file will then be:


apiVersion:                 monitoring.coreos.com/v1
kind:                       ServiceMonitor
metadata:
  name:                     flask-api-service
  labels:
    release:              <your_helm_release_name>
spec:
  selector:
    matchLabels:
      app:                  flask-api-service
  endpoints:
  - port:                   web 
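
If the ServiceMonitor is rendered from your own chart's templates, you can also template the label instead of hardcoding the release name. A sketch, assuming the manifest lives under your chart's templates/ directory:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: flask-api-service
  labels:
    release: {{ .Release.Name | quote }}
spec:
  selector:
    matchLabels:
      app: flask-api-service
  endpoints:
  - port: web

You can confirm which labels your Prometheus instance selects on by inspecting the custom resource, e.g. kubectl get prometheus -o yaml, and checking spec.serviceMonitorSelector.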
Ohmage answered 30/3, 2021 at 17:33. Comments (9):
Magic, much appreciated, that did the trick. Many thanks! Accepting answer and upvoting :) - Imprint
Happy to help. :) - Ohmage
@PulakKantiBhowmick You must be referring to this field: github.com/prometheus-community/helm-charts/blob/main/charts/… Where do you see its value being the release name by default? - Crosscheck
Please check here: github.com/prometheus-community/helm-charts/blob/main/charts/… - Ohmage
Thanks. For the sake of discussion, if the default matchLabels is release: {{ $.Release.Name | quote }}, do you think the statement here is wrong: github.com/prometheus-community/helm-charts/blob/main/charts/… - Crosscheck
Seems OK to me. For that, the user has to set serviceMonitorSelectorNilUsesHelmValues: true. - Ohmage
Sorry to heckle you more on this, but did you mean to say serviceMonitorSelectorNilUsesHelmValues needs to be set to false for this statement (github.com/prometheus-community/helm-charts/blob/main/charts/…) to be true? Only then would it set serviceMonitorSelector to {} and select all ServiceMonitors, as per the condition here: github.com/prometheus-community/helm-charts/blob/main/charts/… - Crosscheck
Let us continue this discussion in chat. - Crosscheck
In my case, the 'release' label had to be set not only on the ServiceMonitor but at the Service level as well. - Fioritura
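
For reference, a sketch of the setup described in the last comment, assuming a hypothetical release named flaskapi and a ServiceMonitor whose matchLabels also includes the release label:

kind: Service
apiVersion: v1
metadata:
  name: flask-api-service
  labels:
    app: flask-api-service
    release: flaskapi
spec:
  ports:
    - protocol: TCP
      port: 4444
      targetPort: 4444
      name: web
  selector:
    app: flask-api-service
    tier: backend
  type: ClusterIP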
