HTTP/2 behind reverse proxy

So far all the tutorials tell me that I need to enable SSL on my server to have HTTP/2 support.

In the given scenario, we have nginx in front of the backend Tomcat/Jetty server(s), and even though performance-wise it's worth enabling HTTP/2 on the backend, the requirement to have HTTPS there as well seems like overkill.

HTTPS is not needed security-wise (only nginx is exposed), and is a bit cumbersome from the operational perspective - we'd have to add our certificates to each of the Docker containers that run the backend servers.

Isn't there a workaround that provides HTTP/2 support all the way through (or at least similar performance), and is less involved to set up?

Bathysphere answered 2/8, 2016 at 20:55 Comment(1)
> So far all the tutorials tell me that I need to enable SSL on my server to have HTTP/2 support. Presumably, the reason for that is that browsers only support HTTP/2 over SSL: caniuse.com/#feat=http2 (see the #2 note) – Emphasis

The typical setup that we recommend is to put HAProxy in front of Jetty, and configure HAProxy to offload TLS and Jetty to speak clear-text HTTP/2.

With this setup, you get the benefits of an efficient TLS offloading (done by HAProxy via OpenSSL), and you get the benefits of a complete end-to-end HTTP/2 communication.

In particular, the latter allows Jetty to push content via HTTP/2, something that isn't possible if the backend communication is HTTP/1.1.

Additional benefits include less resource usage, fewer conversion steps (no need to convert from HTTP/2 to HTTP/1.1 and vice versa), and the ability to fully use HTTP/2 features such as stream resetting all the way to the application. None of these benefits are available if there is a translation to HTTP/1.1 in the chain.

If Nginx is only used as a reverse proxy to Jetty, it is not adding any benefit and it is actually slowing down your system, having to convert requests to HTTP/1.1 and responses back to HTTP/2.

HAProxy does not do any conversion so it's way more efficient, and allows a full HTTP/2 stack with all the benefits that it brings with respect to HTTP/1.1.
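
A minimal haproxy.cfg sketch of this kind of setup might look like the following; the certificate path, ports, and backend address are placeholders I'm assuming for illustration, not taken from the answer. HAProxy terminates TLS, advertises h2 via ALPN, and forwards the decrypted bytes in TCP mode to a Jetty connector speaking clear-text HTTP/2:

```
# Hypothetical haproxy.cfg sketch: TLS offloading in front of clear-text HTTP/2 Jetty.
defaults
    timeout connect 10s
    timeout client  60s
    timeout server  60s

frontend fe_https
    mode tcp
    # Terminate TLS and advertise h2 (with http/1.1 as a fallback) via ALPN.
    bind *:443 ssl crt /etc/haproxy/certs/site.pem alpn h2,http/1.1
    default_backend be_jetty

backend be_jetty
    mode tcp
    # Forward the decrypted bytes as-is; a Jetty clear-text HTTP/2 (h2c)
    # connector is assumed to be listening on port 8080.
    server jetty1 127.0.0.1:8080
```

On the Jetty side, clear-text HTTP/2 would then be enabled via Jetty's h2c support (the http2c module in Jetty 9.3+); the exact port and module setup above are assumptions, so check the Jetty documentation for your version.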

Shake answered 3/8, 2016 at 12:47 Comment(7)
Is it possible to use nginx for TLS offloading? – Bathysphere
Interesting! However if HAProxy terminates SSL then it presumably sets up a new HTTP/2 connection to Jetty. Is it possible to use all features (e.g. push, stream resetting, etc.) across two different HTTP/2 connections? If so then your setup seems a very good one! – Hotien
@BazzaDP, yes it is possible. This is the setup that we use to serve webtide.com and cometd.org. HAProxy just forwards the bytes that it decrypts to the backend; it has no knowledge that they are HTTP/2 bytes. Jetty on the backend serves clear-text HTTP/2 and leverages its advanced HTTP/2 push capabilities. I have detailed the HAProxy and Jetty configuration here. – Shake
Very nice. Plus one! – Hotien
And yes @Bathysphere it's possible and common to TLS offload in Nginx if you want to keep that instead, but then there will be two connections. – Hotien
There are no benefits in "complete end-to-end HTTP/2 communication" (and you're wrong about the stream resetting feature), but by passing HTTP/2 through to the application you lose the ability to load-balance the loading of multiple resources carried over one HTTP/2 connection. – Porty
There are obvious benefits to end-to-end HTTP/2, starting with avoiding the translation to HTTP/1.1 and back, and the capability of server-side applications to perform HTTP/2 push. The stream resetting feature is being used by clients to reset long requests, especially when the server-side application is non-blocking, which is a common trend. @VBart, just read the StackOverflow questions of people who are having trouble with HTTP/2-to-legacy-HTTP translation, for example: #38879380 – Shake

You don't need to speak HTTP/2 all the way through.

HTTP/2 primarily addresses latency issues which will affect your client->Nginx connections. Server to server connections (e.g. Nginx to Tomcat/Jetty) will presumably be lower latency and therefore have less to gain from HTTP/2.

So just enable HTTPS and HTTP/2 on Nginx and then have it continue to talk HTTP/1.1 to Tomcat/Jetty.

There's also a question of whether everything supports HTTP/2 all the way through (e.g. Nginx proxy_pass directive and Tomcat/Jetty), which again is less of an issue if only using HTTP/2 at the edge of your network.
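
As a rough sketch of that approach (the server name, certificate paths, and backend address below are placeholders I'm assuming, not details from the answer), an nginx server block that terminates TLS and HTTP/2 at the edge and proxies plain HTTP/1.1 to the backend could look like this:

```
# Hypothetical nginx sketch: HTTP/2 only on the client-facing side.
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/nginx/certs/example.com.pem;
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    location / {
        # proxy_pass speaks HTTP/1.1 to Tomcat/Jetty; only the
        # browser-to-nginx leg uses HTTP/2.
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }
}
```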

Hotien answered 2/8, 2016 at 21:27 Comment(0)
