The following should work on your load balancer if you are able to run some NginX alongside HaProxy. NginX is (ab)used as a pure SSL terminator, not as a full-featured web server, so no content is served by this NginX.
Warning: This was done in a hurry, so it is not verified that this really works. Also, some examples are missing, so sorry for the missing links.
I named this idea after the famous picture of Munchhausen pulling himself and his horse out of a mire:
The Munchhausen Method
First, do an H2 setup in HaProxy like in the answer of Scott Farrell, with the following tweaks:
    frontend http-in
        mode http
        bind *:80
        option forwardfor
        default_backend nodes-http

    frontend https-in
        mode tcp
        bind *:443 ssl crt /etc/ssl/dummy.pem alpn h2,http/1.1
        use_backend nodes-http2 if { ssl_fc_alpn -i h2 }
        default_backend nodes-http

    frontend http-lo
        mode http
        bind 127.0.0.1:82
        #http-request set-header X-Forwarded-For %[req.hdr_ip(X-Forwarded-For)]
        default_backend nodes-http

    backend nodes-http
        mode http
        server node1 web.server:80 check

    backend nodes-http2
        mode tcp
        server loadbalancer 127.0.0.1:81 check send-proxy
This loops the HTTP/2 connection back to your loadbalancer machine, and the decoded requests enter loadbalancing again via http-lo.
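Since nothing here is verified, at least let HaProxy check that the config parses before you reload it (the path is just an assumption, use wherever your config actually lives):

    haproxy -c -f /etc/haproxy/haproxy.cfg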
Now, on the LB itself, start an NginX instance listening on port 81 (as in the config above) to terminate the HTTP/2 connection and proxy it back to your loadbalancer again.
In NginX, be sure to:
- accept the PROXY protocol that HaProxy sends (send-proxy)
- terminate the SSL (with HTTP/2) in NginX
- proxy everything transparently (aka dumb) back to HaProxy on port 82
# Sorry, example `NginX`-config is missing here,
# but it includes something like:
proxy_pass http://127.0.0.1:82;
Do not forget to include the client IP via the X-Forwarded-For header in the proxy request (I do not know how to configure NginX to use the PROXY protocol, i.e. "send-proxy", on outgoing proxy requests).
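Since the example config is missing, here is a rough, untested sketch of what such an NginX server block might look like, based only on the list above. The cert paths are placeholders, and if HaProxy already terminates TLS on *:443 (as with the ssl crt bind above), drop the ssl parts and listen for plain HTTP/2 instead:

    server {
        # Port 81, speaking the PROXY protocol that HaProxy's "send-proxy" emits
        listen 127.0.0.1:81 ssl http2 proxy_protocol;

        ssl_certificate     /etc/ssl/dummy.pem;   # placeholder
        ssl_certificate_key /etc/ssl/dummy.pem;   # placeholder

        # Take the real client IP from the PROXY protocol header
        set_real_ip_from 127.0.0.1;
        real_ip_header   proxy_protocol;

        location / {
            # Proxy everything transparently (dumb) back to HaProxy on port 82
            proxy_pass http://127.0.0.1:82;
            proxy_http_version 1.1;
            proxy_set_header Host              $host;
            proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
        }
    }

The real_ip lines are what make $remote_addr (and therefore the X-Forwarded-For header NginX adds) carry the original client IP instead of 127.0.0.1.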
Note that this setup is mostly static. The part that changes is all those domains and their TLS certs.
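If the certs live on the HaProxy side (as in the https-in frontend above), one way to keep that changing part manageable is to point crt at a directory; this is just a suggestion, not part of the setup above. HaProxy then loads every cert in the directory and picks the right one per SNI:

    bind *:443 ssl crt /etc/haproxy/certs/ alpn h2,http/1.1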
ASCII picture of the HTTP/2 request flow:
    Browser
      |  HTTP/2
      V
    Loadbalancer HaProxy *:443
      |  frontend https-in
      |  backend nodes-http2
      |  send-proxy
      |  TCP (transparent, HTTP/2)
      V
    Loadbalancer NginX 127.0.0.1:81
      |  HTTP/2 termination
      |  proxy_protocol
      |  proxy_pass 127.0.0.1:82
      |  add header X-Forwarded-For
      |  HTTP
      V
    Loadbalancer HaProxy 127.0.0.1:82
      |  frontend http-lo
      |  forward header X-Forwarded-For
      |  backend nodes-http
      |  # DO YOUR LOADBALANCING HERE
      |  HTTP
      V
    web.server:80
Yes, it loops through HaProxy twice, but HaProxy is so fast that this still works lightning fast.
The really inefficient part is decompressing the HTTP/2 headers into plain HTTP headers.