Monitoring log files using some metrics exporter + Prometheus + Grafana
I need to monitor very different log files for errors, success status, etc., grab the corresponding metrics with Prometheus, and show them in Grafana, plus set up some alerting on them. Prometheus + Grafana are OK; I already use them a lot with different exporters like node_exporter or mysql_exporter. Alerting in the new Grafana 4.x also works very well.

But I am having quite a problem finding a suitable exporter/program that can analyze log files "on the fly" and extract metrics from them.

So far I tried:

  • mtail (https://github.com/google/mtail) - works, but the current version cannot easily monitor multiple files: in general you cannot bind a specific mtail program (the recipe for analysis) to a specific log file, and I cannot easily add the log file name as a label
  • grok_exporter (https://github.com/fstab/grok_exporter) - works, but I can extract only limited information, and one instance can monitor only one log file, which means I would have to start more instances exporting on more ports and configure all of them in Prometheus - too many new points of failure
  • fluentd prometheus exporter (https://github.com/kazegusuri/fluent-plugin-prometheus) - works, but it seems I can extract only very simple metrics and cannot do any advanced regexp analysis of lines from the log file
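For context, an mtail "program" is a small file of regex/action rules that mtail applies to tailed logs and exports as Prometheus metrics. A minimal sketch counting error lines (the metric name and pattern here are made up, not from any of the projects above) looks roughly like this:

```
# count every line containing the word ERROR
counter error_count
/ERROR/ {
  error_count++
}
```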

Does anyone here have a really working solution for monitoring advanced metrics from log files using "some exporter" + Prometheus + Grafana? Or, instead of an exporter, some program whose results I could push through the Prometheus Pushgateway. Thanks.

Hiers answered 15/12, 2016 at 9:40 Comment(2)
Are the logs shipped/processed somewhere? It might be easier to hook into that process somehow. – Approximate
Logs are collected by fluentd, so I tried that, but the metrics I can get via the fluentd prometheus exporter seem very simple and limited. I tried adding external processing with my bash scripts inside fluentd, but for some reason it was incredibly slow there, with long lags; outside fluentd the scripts were OK. – Hiers

Take a look at Telegraf. It supports tailing logs via the logparser and tail input plugins. To expose the metrics as a Prometheus endpoint, use the prometheus_client output plugin. You can also apply aggregations on the fly. I've found it simpler to configure for multiple log files than grok_exporter or mtail.
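As a rough sketch (the file paths, port, and grok pattern below are illustrative assumptions, not from the original answer), a Telegraf config tailing several logs and exposing one Prometheus endpoint could look like this:

```toml
# Tail several log files at once; Telegraf tags each metric with a
# "path" tag, which covers the "log file name as a label" requirement.
[[inputs.tail]]
  files = ["/var/log/app1.log", "/var/log/app2.log"]
  from_beginning = false
  data_format = "grok"
  # Assumed log layout: "2016-12-15T09:40:00Z ERROR something broke"
  grok_patterns = ["%{TIMESTAMP_ISO8601:timestamp} %{WORD:level:tag} %{GREEDYDATA:message}"]

# Expose everything on a single Prometheus scrape endpoint.
[[outputs.prometheus_client]]
  listen = ":9273"
```

Because all files flow through one Telegraf instance, Prometheus gets a single scrape target instead of one port per log file.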

Knave answered 23/11, 2017 at 12:11 Comment(1)
Yes, you are right - I implemented Telegraf and it satisfied what I needed. – Hiers

Slightly newer answer:

I went looking for the same thing and found "Loki", which is the Grafana log aggregator, and "Promtail", which collects the log files and pushes them to Loki. In effect, Loki is like Prometheus for log files.
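A minimal Promtail config shipping files to a local Loki might look like the sketch below (the job name, paths, and ports are illustrative assumptions):

```yaml
server:
  http_listen_port: 9080
positions:
  filename: /tmp/positions.yaml   # where Promtail remembers read offsets
clients:
  - url: http://localhost:3100/loki/api/v1/push
scrape_configs:
  - job_name: app-logs
    static_configs:
      - targets: [localhost]
        labels:
          job: app
          __path__: /var/log/app/*.log
```

In Grafana you can then derive metrics from the stored logs with LogQL, e.g. `count_over_time({job="app"} |= "ERROR" [5m])`, and alert on those queries.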

Dialyse answered 10/1, 2023 at 20:59 Comment(0)

Those are currently the three main approaches for getting log data into Prometheus.

You could also look into getting whatever is producing the logs to expose Prometheus metrics directly.

Lemmons answered 15/12, 2016 at 10:7 Comment(4)
The problem is that the processes are quite different (bash scripts, Go programs), but the main problem is that a lot of them are legacy things no one wants to fiddle with. So the safest way is to process the log files. – Hiers
Another problem is that I have at least 5+ very different log files for every instance/server, and 15+ instances. – Hiers
@JosMac: Then you want centralized logging (e.g. with Graylog) and to export metrics from there. – Erskine
Thanks, @MartinSchröder - Graylog looks interesting, but it is a complex solution, and I just need some "tailing extractor" that can calculate metrics on the fly and either expose them as a web service or push them into the Prometheus Pushgateway, because I need to feed metrics from log files into our overall monitoring and alerting in Grafana. – Hiers

Try prometheus-python-exporter and write your own customized exporter in Python, grepping for whatever you want in your log files, then expose the desired metrics. There are several tutorials to help you.
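As a sketch of that idea, here is a minimal, stdlib-only exporter (in practice you would use the official prometheus_client library's Counter and start_http_server instead of hand-rolling the HTTP side); the metric name, regexp, and file paths are assumptions for illustration:

```python
import re
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

ERROR_RE = re.compile(r'\bERROR\b')   # assumed error marker in the log format
counters = {}                          # logfile path -> error line count

def process_line(path, line):
    """Increment the per-file error counter when a line matches."""
    if ERROR_RE.search(line):
        counters[path] = counters.get(path, 0) + 1

def render_metrics():
    """Render the counters in the Prometheus text exposition format."""
    out = ['# TYPE log_errors_total counter']
    for path, n in sorted(counters.items()):
        out.append('log_errors_total{logfile="%s"} %d' % (path, n))
    return '\n'.join(out) + '\n'

def follow(path):
    """Yield lines as they are appended to the file, like `tail -f`."""
    with open(path) as f:
        f.seek(0, 2)          # start at the current end of the file
        while True:
            line = f.readline()
            if line:
                yield line
            else:
                time.sleep(0.5)

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = render_metrics().encode()
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain; version=0.0.4')
        self.end_headers()
        self.wfile.write(body)

# Usage (blocks forever): tail a file in a background thread, serve on :9200
#   import threading
#   t = threading.Thread(
#       target=lambda: [process_line('/var/log/app.log', l)
#                       for l in follow('/var/log/app.log')],
#       daemon=True)
#   t.start()
#   HTTPServer(('', 9200), MetricsHandler).serve_forever()
```

Prometheus then scrapes the endpoint like any other exporter, so no Pushgateway is needed for this case.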

Barden answered 28/6, 2021 at 18:53 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.