I have the following docker-compose file:
version: '3.4'
services:
  serviceA:
    image: <image>
    command: <command>
    labels:
      servicename: "service-A"
    ports:
      - "8080:8080"
  serviceB:
    image: <image>
    command: <command>
    labels:
      servicename: "service-B"
    ports:
      - "8081:8081"
  prometheus:
    image: prom/prometheus:v2.32.1
    container_name: prometheus
    volumes:
      - ./prometheus:/etc/prometheus
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--storage.tsdb.retention.time=200h'
      - '--web.enable-lifecycle'
    restart: unless-stopped
    expose:
      - 9090
    labels:
      org.label-schema.group: "monitoring"
volumes:
  prometheus_data: {}
The docker-compose file also contains a Prometheus instance with the following configuration (prometheus.yml):
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090', 'serviceA:8080', 'serviceB:8081']
ServiceA and ServiceB expose Prometheus metrics, each one on its own port.
When there is a single instance of each service, everything works fine. But when I scale the services and run more than one instance per service, the metrics collection gets messed up: the scrapes appear to mix data from the different replicas and the stored data is corrupted.
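To be concrete, I scale roughly like this (the exact invocation probably doesn't matter; I remove the fixed host-port mappings first, otherwise the extra replicas can't bind the same host port):

docker-compose up -d --scale serviceA=3 --scale serviceB=3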
I looked for a Docker Compose service-discovery mechanism for Prometheus to handle this, but didn't find a suitable one.
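One idea I had was to replace the static targets with Prometheus' dns_sd_configs, so that each Compose service name is resolved to the IPs of all of its replicas on the Compose network, roughly like the sketch below (untested; the job names and ports are just taken from my setup):

scrape_configs:
  - job_name: 'serviceA'
    scrape_interval: 5s
    dns_sd_configs:
      - names: ['serviceA']   # Compose service name, resolved by Docker's embedded DNS
        type: 'A'
        port: 8080
  - job_name: 'serviceB'
    scrape_interval: 5s
    dns_sd_configs:
      - names: ['serviceB']
        type: 'A'
        port: 8081

Is this the right approach for plain docker-compose, or how else can I solve this so that Prometheus scrapes every replica separately?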