Layer4 "Connection refused" with haproxy

I need some advice on how to set up HAProxy. I have two web servers up and running. For testing they run a simple node server on port 8080.

Now on my haproxy server I start haproxy which gives me the following:

$> /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg 
[WARNING] 325/202628 (16) : Server node-backend/server-a is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 325/202631 (16) : Server node-backend/server-b is DOWN, reason: Layer4 timeout, check duration: 2001ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 325/202631 (16) : backend 'node-backend' has no server available!

Just one note: If I do:

haproxy$> wget server-a:8080

I get the response from the node server.

Here is my haproxy.cfg:

#---------------------------------------------------------------------  
# Global settings
#---------------------------------------------------------------------
global
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy

    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    tcp
    log                     global
    option                  tcplog
    option                  dontlognull
    option http-server-close
#   option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# main frontend which proxies to the backends
#---------------------------------------------------------------------
frontend  www
    bind                        *:80
    default_backend             node-backend

#---------------------------------------------------------------------
# round robin balancing between the various backends
#--------------------------------------------------------------------
backend node-backend
   balance roundrobin
   mode tcp
   server server-a 172.19.0.2:8080 check
   server server-b 172.19.0.3:8080 check

If I remove the check option it seems to work. Any suggestions on how I can fix HAProxy's health-check mechanism?

Obi answered 21/11, 2016 at 20:51 Comment(0)

You need to get the exact IP addresses of your servers with the help of the command

ifconfig

and correct the addresses below in your haproxy.cfg file:

172.19.0.2:8080
172.19.0.3:8080 

or modify the lines as below, so that HAProxy uses the hostnames instead:

server server-a server-a:8080 check
server server-b server-b:8080 check
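
Either way, it can help to compare what the hostnames actually resolve to with the addresses configured in the backend; a quick check along these lines (assuming getent and nc are available on the HAProxy host):

# what do the hostnames resolve to from the HAProxy host?
getent hosts server-a server-b

# is port 8080 reachable on the addresses hard-coded in haproxy.cfg?
nc -zv 172.19.0.2 8080
nc -zv 172.19.0.3 8080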
Talent answered 1/2, 2019 at 6:35 Comment(2)
If using Docker, remember that you must set the networking to --network host if you wish to use 127.0.0.1. That's if it's on the same computer. – Niu
@Niu it could be helpful if you can give an example – Nummulite
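
A minimal sketch of what that could look like, assuming the official haproxy image with its default config path (image tag, container name, and host config path are illustrative):

# run HAProxy on the host's network stack so that 127.0.0.1 inside the
# container refers to the host itself (Linux only)
docker run -d --name my-haproxy --network host \
  -v /etc/haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
  haproxy:2.8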

Another note related to the Docker comment above. If you are proxying into a Docker (docker-compose) setup (namely where you have multiple instances of a service running), you likely need to define the Docker resolver and use it for DNS resolution in your backend:

The resolvers section:

resolvers docker_resolver
    nameserver dns 127.0.0.11:53

backend usage of resolver:

backend main
    balance roundrobin
    option http-keep-alive
    server haproxyapp app:80 check inter 10s resolvers docker_resolver resolve-prefer ipv4
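
If in doubt whether the service name is being resolved through Docker's embedded DNS at all, a quick check from inside the HAProxy container can confirm it (container and service names here are just placeholders matching the snippet above):

# resolve the backend service name via the container's configured DNS
# (Docker's embedded DNS listens on 127.0.0.11 inside user-defined networks)
docker exec my-haproxy getent hosts app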
Aeniah answered 10/11, 2020 at 17:42 Comment(1)
Can you please explain the backend main section, especially the last line? – Sore

Remove mode tcp and change it to mode http.

I'm just guessing here, but I suppose HAProxy is doing a TCP check against your web server and the web server cannot respond to it.

In mode http it checks the web server over HTTP: it expects a 200 response for the L4 check, and expects a string (whatever you defined) for the L7 check.

L4:

backend node-backend
   balance roundrobin
   mode http #(NOT NEEDED IF DEFINED IN DEFAULTS)
   option httpchk
   server server-a 172.19.0.2:8080 check
   server server-b 172.19.0.3:8080 check

L7:

backend node-backend
   balance roundrobin
   mode http #(NOT NEEDED IF DEFINED IN DEFAULTS)
   option httpchk GET /SOME_URI
   http-check expect status 200
   server server-a 172.19.0.2:8080 check
   server server-b 172.19.0.3:8080 check
Preventive answered 22/11, 2016 at 15:40 Comment(3)
I thought that L4 is set using mode tcp. So what exactly defines L4 or L7? – Obi
L4 is a Layer 4 check (OSI model), L7 is a Layer 7 check. So L4 would reply with status codes 500, 404, 200, 301, etc.; L7 would look at the "content" returned by the request: HTTP headers, JSON strings, whatever is in the body of the result. – Preventive
For clarification, Layer 4 checks do not return HTTP status codes like 500, 404, 200, or 301. These status codes are part of the application layer (Layer 7). A Layer 4 check typically involves checking whether the network connection to a service is open and responsive, which would involve TCP control packets like SYN, ACK, FIN, etc. – Haerr
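
To make the distinction concrete, here is a rough shell analogy of the two kinds of check (assuming nc and curl are installed; the address and path are taken from the examples above):

# L4-style check: can a TCP connection to the port be opened at all?
nc -zv 172.19.0.2 8080

# L7-style check: does the application answer the request with HTTP 200?
curl -s -o /dev/null -w '%{http_code}\n' http://172.19.0.2:8080/SOME_URI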

I tried all these answers and nothing worked for me. Only putting the gateway IP of the network worked; for the default bridge it is 172.17.0.1.

In the server lines, put <gateway-ip>:<port>, and with this HAProxy connects successfully.

My example of a custom network with fixed IPs and a gateway:

----- haproxy config

backend be_pe_8545
    mode http
    balance     roundrobin

    server p1 172.20.0.254:18545 check inter 10s
    server p2 172.20.0.254:28545 check inter 10s

----- docker app / network
services:
  docker_app:
    ...
    networks:
      public_network:
        ipv4_address: 172.20.0.50

networks:
  public_network:
    name: public_network
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.20.0.0/24
          gateway: 172.20.0.254
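
To confirm which gateway address a given network actually uses (the value that goes into the server lines above), inspecting the network should show it (network names as above):

# gateway of the default bridge network (typically 172.17.0.1)
docker network inspect bridge | grep -i gateway

# gateway of the custom network defined in the compose file above
docker network inspect public_network | grep -i gateway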
Powerful answered 14/10, 2022 at 19:12 Comment(0)
