Combine HAProxy stats?
I have two instances of HAProxy. Both instances have stats enabled and are working fine.

I am trying to combine the stats from both instances into one view, so that I can see the frontend/backend stats for both from a single place. I've tried having the stats listener on the same port for both HAProxy instances, but that doesn't work. I've also tried the stats socket interface, but it only reports on one of the instances as well.

Any ideas?

One of the HAProxy config files looks like this:

global
    daemon
    maxconn 256
    log 127.0.0.1 local0 debug
    log-tag haproxy
    stats socket /tmp/haproxy

defaults
    log global
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:8000
    default_backend servers
    log global
    option httplog clf

backend servers
    balance roundrobin
    server ws8001 localhost:8001
    server ws8002 localhost:8002
    log global

listen admin
    bind *:7000
    stats enable
    stats uri /

The other HAProxy config is the same, except that the frontend/backend server IPs are different.

Tobolsk answered 24/6, 2014 at 19:20 Comment(0)
This can't work. HAProxy keeps its stats separate in each process; it has no capability to combine the stats of multiple processes.

That said, you are of course free to use external monitoring tools (like Munin, Graphite, or even Nagios) that can aggregate the CSV data from multiple stats sockets and display it in unified graphs. Such tools are, however, outside the scope of core HAProxy; a rough sketch of that approach follows.
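
As a concrete illustration of that external-aggregation idea, here is a minimal sketch in Python (standard library only). The socket paths are placeholders; each instance would need its own path matching its stats socket directive. It sends show stat to each socket, parses the CSV, and sums a few counters per proxy/server:

import csv
import socket
from collections import defaultdict

# Placeholder paths -- point these at each instance's 'stats socket'.
SOCKETS = ["/tmp/haproxy1.sock", "/tmp/haproxy2.sock"]

def show_stat(path):
    """Send 'show stat' to one stats socket and return the raw CSV text."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(path)
        s.sendall(b"show stat\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode()

def parse(csv_text):
    """HAProxy prefixes the CSV header with '# '; strip it before parsing."""
    lines = csv_text.strip().splitlines()
    lines[0] = lines[0].lstrip("# ")
    return list(csv.DictReader(lines))

# Sum a few counters across both instances, keyed by proxy/server name.
totals = defaultdict(lambda: defaultdict(int))
for path in SOCKETS:
    for row in parse(show_stat(path)):
        key = (row["pxname"], row["svname"])
        for field in ("stot", "bin", "bout"):
            totals[key][field] += int(row[field] or 0)

for (pxname, svname), counters in sorted(totals.items()):
    print(pxname, svname, dict(counters))

The merged numbers can then be pushed into whichever graphing tool you already run; HAProxy itself will never render them for you.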

Schmitz answered 24/6, 2014 at 21:2 Comment(2)
Thank you for the comment... but wouldn't using the same socket allow multiple HAProxy instances to write to it, and potentially one process to (just) read from it? – Tobolsk
HAProxy doesn't read stats from sockets and aggregate them. While it might be possible to implement something like that, it is not available in HAProxy, and thus it can't work. Also, Unix sockets work differently from what you assume: they are more like a network endpoint where two processes create a bidirectional communication channel. As with an open network socket, only one process (or multiple processes that inherit the open socket descriptor across forks) can listen for new connections. – Schmitz
While perhaps not an exact answer to this specific question, I've seen this kind of question often enough that I think it deserves an answer.

When running with nbproc greater than 1, the Stack Exchange folks have a neat solution. They have a listen section that receives SSL traffic and then uses send-proxy to 127.0.0.1:80. They then have a frontend that binds to that address with bind 127.0.0.1:80 accept-proxy. Inside that frontend they pin it to a single process with bind-process 1, and in the global section they do the following:

global
    stats socket /var/run/haproxy-t1.stat level admin
    stats bind-process 1

The advantage of this is that they get multiple cores for SSL offloading plus a single core dedicated to load balancing the traffic. All traffic ultimately flows through that one frontend, so its stats accurately cover everything; a rough config sketch follows.
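
Purely as an illustration of that layout (the nbproc count, section names, ports, and certificate path below are placeholders, not Stack Exchange's actual config), the whole pattern might look roughly like this:

global
    nbproc 4
    stats socket /var/run/haproxy-t1.stat level admin
    stats bind-process 1

# TLS termination spread across the remaining processes
listen ssl-offload
    mode tcp
    bind *:443 ssl crt /etc/haproxy/example.pem
    bind-process 2-4
    server local-http 127.0.0.1:80 send-proxy

# Everything funnels through this single-process frontend,
# so its stats cover all traffic
frontend fe-main
    bind 127.0.0.1:80 accept-proxy
    bind-process 1
    default_backend servers

Since the stats socket is also pinned to process 1, querying it gives a complete picture of the traffic.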

Gonorrhea answered 29/8, 2015 at 0:22 Comment(0)
