How to correctly scrape and query metrics in Prometheus every hour

I would like Prometheus to scrape metrics every hour and display these hourly scrape events in a table in a Grafana dashboard. I have set the global scrape interval to 1h in prometheus.yml. From the Prometheus visualizer, it seems that Prometheus scrapes around the 43-minute mark of every hour. However, the data also appears to be valid for only about 3 minutes: [Prometheus graph]

My situation, then, is this: in a Grafana table, I set the min step of a query on this metric to 1h, but this causes the table to say there are no data points. However, if I set the min step to 5 minutes, it displays the hourly scrape events with timestamps on the 45-minute mark. My guess as to why this happens is that Grafana starts on the dot of some hour and steps either forward or backward by the min step.

This does achieve what I would like, but it also has the potential for incorrect behavior if Prometheus ever does something like what can be seen at the beginning of the earlier graph. I also know that I can add a time shift, but it seems to always be relative to the current time rather than to an absolute time.

Is it possible to increase the amount of time that scraped data stays valid in Prometheus without having to scrape again every 3 minutes? Or maybe tell Prometheus to scrape at the 00-minute mark of every hour? Or, if neither is possible, can I add a relative time shift to the table so that it goes from the 45-minute mark instead of the 00-minute mark?

On a side note, in the above Prometheus graph, the irregular data was scraped shortly after Prometheus was started. I had started Prometheus around 18:30 on the 22nd, but it didn't scrape until 23:30, and then it scraped at irregular intervals until it stabilized around 2:43 on the 23rd. Does anybody know why?

Superaltar answered 26/8, 2019 at 18:47 Comment(1)
I am not entirely sure this will solve your problem, but using the average value over the period set in Grafana might help fix all the 'empty' values, since the results aren't valid for long enough. – Strode

Your data disappears because of the staleness strategy implemented in Prometheus: once a sample has been ingested, the series is considered stale 5 minutes after it. I didn't find any configuration to change that value.

Scraping every hour is not really the philosophy of Prometheus. If you really need to scrape at such a low frequency, it could be a better idea to schedule a job that sends the data to a push gateway, or that writes a .prom file read by the node exporter's textfile collector (if that makes sense for your setup). You can then scrape that endpoint every 1-2 minutes.
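As a sketch of the textfile-collector route: an hourly scheduled job (cron, systemd timer) writes the sample into the directory the node exporter's textfile collector watches, and Prometheus keeps scraping the node exporter every 1-2 minutes. The directory path and metric name below are placeholders; the atomic-replace detail is the important part.

```python
import os
import tempfile

def write_textfile_metric(directory, name, value):
    """Atomically write a single gauge sample in Prometheus text format.

    The atomic os.replace matters: the node exporter must never read a
    half-written .prom file.
    """
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(f"# TYPE {name} gauge\n")
            f.write(f"{name} {value}\n")
        os.replace(tmp_path, os.path.join(directory, f"{name}.prom"))
    except Exception:
        os.unlink(tmp_path)  # don't leave temp files behind on failure
        raise
```

Run this from the hourly job; every subsequent 1-2 minute scrape of the node exporter then re-ingests a fresh sample, so the series never goes stale.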

You could also roll your own exporter that caches the last scrape and collects fresh data only when the cached data is more than one hour old. (That's the solution I would prefer.)
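A minimal sketch of that caching idea, under stated assumptions: the expensive collection is an arbitrary callable, the one-hour threshold is illustrative, and a real exporter would serve the cached string from a /metrics HTTP endpoint.

```python
import time

class CachingCollector:
    """Serve a cached scrape result; refresh only once it exceeds max_age.

    Prometheus can then scrape the exporter every minute, while the
    expensive collection behind it still runs at most once per hour.
    """
    def __init__(self, collect_fn, max_age_seconds=3600):
        self._collect = collect_fn       # expensive data collection
        self._max_age = max_age_seconds
        self._cached = None
        self._fetched_at = None

    def metrics(self, now=None):
        """Return the metrics text, refreshing the cache if it is stale."""
        now = time.monotonic() if now is None else now
        stale = (self._fetched_at is None
                 or now - self._fetched_at >= self._max_age)
        if stale:
            self._cached = self._collect()
            self._fetched_at = now
        return self._cached
```

Every scrape calls metrics(), but only a stale cache triggers the underlying collection, so Prometheus sees a fresh-enough sample on every scrape and staleness never kicks in.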

Now, as a quick solution, you can request the data over the last hour and average over it. That way, the last (old) scrape is taken into account:

avg_over_time(old_metric[1h])

It should work, though you may see transient incorrect values if there is jitter in the scheduling of the scrapes.

Regarding the late scraping you observed, I suspect the scrapes failed at those times; Prometheus only retries at the next scheduled scrape (1h in your case).

Kenwee answered 3/9, 2019 at 20:13 Comment(2)
Your answer was very helpful to me. I am facing a similar problem to the OP: I am waking sensors from deep sleep every minute and taking a measurement. Would you be kind enough to give a little extra detail about your preferred solution? Do you mean having some temporary storage that only exposes the metrics every minute, or only allowing Prometheus to scrape every minute? I am starting to feel like Prometheus may not be a good fit for these use cases, and maybe InfluxDB would be better. Do you have any advice? – Swett
I'd prefer the buffer solution because then you can control everything from Prometheus. Using a batch job (node exporter or push gateway) requires scheduling the batch and is therefore an additional configuration point. – Kenwee

If a metric is scraped at intervals exceeding 5 minutes, Prometheus returns gaps to Grafana because of its staleness mechanism. These gaps can be filled with the last raw sample value by wrapping the queried time series in the last_over_time function. Just specify a lookbehind window in square brackets that equals or exceeds the interval between samples. For example, the following query fills gaps for the my_gauge time series scraped with a one-hour interval between samples:

last_over_time(my_gauge[1h])

See the Prometheus documentation on time durations for the formats that can be used in square brackets.

Billow answered 20/4, 2022 at 20:54 Comment(0)