Is there any way to store Prometheus data in an external database like MySQL or PostgreSQL? [closed]
B

5

7

Currently I am working with Prometheus and getting good results. The difficulty I am facing is that if the service restarts, all my old data will be lost. Is there any way to permanently store the Prometheus data in databases like MySQL or PostgreSQL?

Bawdry answered 16/7, 2018 at 13:49 Comment(0)
T
4

You can't write Prometheus data directly to a relational database (or any external database, for that matter). You have two choices:

  1. Mount an external disk on your machine and configure Prometheus to write its data to that mount location.
  2. Write a small web service that translates the Prometheus export format into whatever storage format you want, then configure Prometheus to send data to that service.

More information can be found in the Prometheus docs.
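For option 2, the translation step can be sketched. Prometheus' text exposition format is line-oriented (`metric_name{labels} value`), so a small parser can turn it into rows for a relational table. A minimal sketch, assuming you fetch the text from a `/metrics` endpoint yourself; the function name and row shape are illustrative, not from the Prometheus docs:

```python
import re

# Matches one sample line of the text exposition format:
# metric_name{optional="labels"} value
_SAMPLE_RE = re.compile(r'^([a-zA-Z_:][a-zA-Z0-9_:]*)(\{(.*)\})?\s+(\S+)')

def parse_exposition(text):
    """Parse Prometheus text exposition output into (name, labels, value) rows."""
    rows = []
    for line in text.splitlines():
        line = line.strip()
        # Skip blank lines and # HELP / # TYPE comments
        if not line or line.startswith("#"):
            continue
        m = _SAMPLE_RE.match(line)
        if not m:
            continue
        name, _, labels, value = m.groups()
        rows.append((name, labels or "", float(value)))
    return rows
```

Each row could then be inserted into a table such as `(metric TEXT, labels TEXT, value DOUBLE PRECISION, scraped_at TIMESTAMP)` with an ordinary SQL `INSERT`. Note this sketch ignores histogram/summary semantics and timestamps; a production adapter would use Prometheus' remote-write protocol instead.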

Tonsure answered 19/7, 2018 at 20:0 Comment(2)
Great news: we can now store Prometheus data in PostgreSQL using github.com/timescale/pg_Prometheus, and can use TimescaleDB with PostgreSQL as the storage medium – Bawdry
Maybe you forgot to mention the existing adapters? And yes, the PostgreSQL adapter is now ready – Backrest
B
4

PostgreSQL support for Prometheus is now available here:

https://blog.timescale.com/prometheus-ha-postgresql-8de68d19b6f5

Bawdry answered 15/1, 2019 at 4:41 Comment(1)
Promscale was deprecated on April 30, 2023: timescale.com/blog/important-news-about-promscale – Impanation
C
3

Traditional databases like MySQL and PostgreSQL aren't optimized for the time series data that Prometheus collects. Better solutions exist that require less storage space and handle both inserts and selects faster.

Prometheus supports remote storage. When enabled, it writes all new data to both local storage and remote storage. Multiple remote storage databases exist, with various tradeoffs. I'd recommend trying VictoriaMetrics. It natively supports Prometheus' query language, PromQL, so it can easily be used as a Prometheus datasource in Grafana.
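As a concrete illustration of the remote storage setup, a single `remote_write` entry in `prometheus.yml` is enough; the hostname below is a placeholder, while the path and port are VictoriaMetrics' defaults:

```yaml
# prometheus.yml excerpt -- "victoria-metrics" is a hypothetical hostname
remote_write:
  - url: "http://victoria-metrics:8428/api/v1/write"
```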

Chatelain answered 29/10, 2018 at 9:46 Comment(1)
Great news: we can now store Prometheus data in PostgreSQL using github.com/timescale/pg_Prometheus, and can use TimescaleDB with PostgreSQL as the storage medium – Bawdry
S
1

InfluxDB would be another option:

https://www.influxdata.com/blog/influxdb-now-supports-prometheus-remote-read-write-natively/

Just configure `remote_write` and `remote_read` in your Prometheus config and you are ready to go:

remote_write:
  - url: 'http://{YOUR_INFLUX-DB}:{YOUR_INFLUX-DB_PORT}/api/v1/prom/write?db=metrics'
remote_read:
  - url: 'http://{YOUR_INFLUX-DB}:{YOUR_INFLUX-DB_PORT}/api/v1/prom/read?db=metrics'
Sweettalk answered 13/3, 2020 at 15:41 Comment(0)
S
0

Sling can help with this as well:

https://blog.slingdata.io/export-data-from-prometheus-into-any-database

Here is a configuration example:

# replication.yaml
source: prometheus
target: postgres

defaults:
  object: prometheus.{stream_name}
  mode: full-refresh

streams:
  gc_duration_by_job:
    sql: |
      sum(go_gc_duration_seconds)
      by (job, instance, quantile)
      # {"start": "now-2M", "end": "now-1d", "step": "1d"}

  # incremental load, last 2 days of data, hourly
  go_memstats_alloc_bytes_total:
    sql: |
      sum(go_memstats_alloc_bytes_total)
      by (job, instance, quantile)
      # {"start": "now-2d"}
    primary_key: [timestamp, job, instance, quantile]
    update_key: timestamp
    mode: incremental
Shipowner answered 17/4 at 14:0 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.