Currently I am working with Prometheus and getting good results. The difficulty I am facing is that if the service restarts, all of my old data is lost. Is there any way to permanently store the Prometheus data in a database like MySQL or PostgreSQL?
You can't write Prometheus data directly to a relational database (or any external database, for that matter). You have two choices:
- mount an external disk on your machine and configure Prometheus to write its data to that mount location
- write a small web service that translates Prometheus' remote-write format into whatever storage format you want, then configure Prometheus to send data to that service (a sketch follows below)
More information can be found in the Prometheus docs.
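For the second option, here is a minimal sketch of such a receiver in Go. It uses the DecodeWriteRequest helper from the Prometheus Go module to unpack the remote-write payload; the :8080 port, the /receive path, and the log call standing in for a database INSERT are all placeholder choices:

package main

import (
	"log"
	"net/http"

	"github.com/prometheus/prometheus/storage/remote"
)

func main() {
	http.HandleFunc("/receive", func(w http.ResponseWriter, r *http.Request) {
		// Remote-write payloads are snappy-compressed protobuf;
		// DecodeWriteRequest handles both decompression and unmarshalling.
		req, err := remote.DecodeWriteRequest(r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		for _, ts := range req.Timeseries {
			for _, s := range ts.Samples {
				// Replace this log call with an INSERT into MySQL, PostgreSQL, etc.
				log.Printf("labels=%v value=%f ts=%d", ts.Labels, s.Value, s.Timestamp)
			}
		}
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}

Point Prometheus at it with a remote_write entry:

remote_write:
  - url: 'http://localhost:8080/receive'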
There is now PostgreSQL support for Prometheus as well, via TimescaleDB:
https://blog.timescale.com/prometheus-ha-postgresql-8de68d19b6f5
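The setup described there runs Timescale's prometheus-postgresql-adapter between Prometheus and PostgreSQL. Assuming the adapter is listening on its default localhost:9201, the Prometheus side is just a remote read/write pair (adjust host and port to your deployment):

remote_write:
  - url: 'http://localhost:9201/write'
remote_read:
  - url: 'http://localhost:9201/read'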
Traditional databases like MySQL and PostgreSQL aren't optimized for the time series data that Prometheus collects. Better solutions exist that require less storage space and are faster on both inserts and selects.
Prometheus supports remote storage. When enabled, it stores all new data in both local storage and remote storage. Multiple databases can serve as the remote storage, with various tradeoffs. I'd recommend trying VictoriaMetrics. It natively supports Prometheus' query language, PromQL, so it can easily be used as a Prometheus datasource in Grafana.
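For example, with a single-node VictoriaMetrics instance the Prometheus side is a single remote_write entry (the hostname is a placeholder; 8428 is VictoriaMetrics' default port):

remote_write:
  - url: 'http://victoriametrics:8428/api/v1/write'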
InfluxDB would be another option:
https://www.influxdata.com/blog/influxdb-now-supports-prometheus-remote-read-write-natively/
Just configure remote_write and remote_read in your Prometheus config and you are ready to go:
remote_write:
  - url: 'http://{YOUR_INFLUX-DB}:{YOUR_INFLUX-DB_PORT}/api/v1/prom/write?db=metrics'
remote_read:
  - url: 'http://{YOUR_INFLUX-DB}:{YOUR_INFLUX-DB_PORT}/api/v1/prom/read?db=metrics'
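One caveat, assuming InfluxDB 1.x: the target database has to exist before Prometheus can write to it, which is a one-liner with the influx CLI:

influx -execute 'CREATE DATABASE metrics'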
Sling can help with this as well:
https://blog.slingdata.io/export-data-from-prometheus-into-any-database
Here is a configuration example:
# replication.yaml
source: prometheus
target: postgres

defaults:
  object: prometheus.{stream_name}
  mode: full-refresh

streams:
  gc_duration_by_job:
    sql: |
      sum(go_gc_duration_seconds)
        by (job, instance, quantile)
      # {"start": "now-2M", "end": "now-1d", "step": "1d"}

  # incremental load, last 2 days of data, hourly
  go_memstats_alloc_bytes_total:
    sql: |
      sum(go_memstats_alloc_bytes_total)
        by (job, instance, quantile)
      # {"start": "now-2d"}
    primary_key: [timestamp, job, instance, quantile]
    update_key: timestamp
    mode: incremental
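Assuming the sling CLI is installed, the replication is then kicked off with the -r flag pointing at the file above:

sling run -r replication.yaml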