HTTP/2 with Node.js behind an nginx proxy
I have a Node.js server running behind an nginx proxy. Node.js runs an HTTP/1.1 (no SSL) server on port 3000. Both run on the same machine.

I recently set up nginx to use HTTP/2 with SSL (h2). HTTP/2 appears to be enabled and working.

However, I want to know whether the fact that the proxy connection (nginx <--> node.js) uses HTTP/1.1 affects performance. That is, am I missing out on HTTP/2's speed benefits because my internal connection is HTTP/1.1?

Fulvia answered 13/1, 2017 at 14:40 Comment(2)
Good question, which also applies to containerized setups like Docker SwarmNortheast
Hi, just curious, could you please share your nginx configuration? I'm having trouble replicating the same behaviour in an Elastic Beanstalk environment.Kimball

In general, the biggest immediate benefit of HTTP/2 is the speed increase offered by multiplexing for browser connections, which are often hampered by high latency (i.e. slow round-trip times). Multiplexing also reduces the need for (and expense of) multiple connections, which is a workaround used in HTTP/1.1 to try to achieve similar performance benefits.

For internal connections (e.g. between a webserver acting as a reverse proxy and back-end app servers) the latency is typically very, very low, so the speed benefits of HTTP/2 are negligible. Additionally, each app server will typically already be a separate connection, so again there are no gains here.

So you will get most of your performance benefit from supporting HTTP/2 just at the edge. This is a fairly common setup, similar to the way HTTPS is often terminated on the reverse proxy/load balancer rather than going all the way through.

However, there are potential benefits to supporting HTTP/2 all the way through. For example, it could allow server push all the way from the application. There are also potential benefits from reduced packet size on that last hop, due to the binary nature of HTTP/2 and header compression, though, as with latency, bandwidth is typically less of an issue for internal connections, so the importance of this is arguable. Finally, some argue that a reverse proxy does less work connecting an HTTP/2 connection to another HTTP/2 connection than it would to an HTTP/1.1 connection, as there is no need to convert one protocol to the other, though I'm sceptical whether that's even noticeable since they are separate connections (unless it's acting simply as a TCP pass-through proxy).

So, to me, the main reason for end-to-end HTTP/2 is to allow end-to-end server push, but even that is probably better handled with HTTP Link headers and 103 Early Hints, due to the complications of managing push across multiple connections. I'm also not aware of any HTTP proxy server that would support this (few enough support HTTP/2 at the backend, never mind chaining HTTP/2 connections like this), so you'd need a layer-4 load balancer forwarding TCP packets rather than chaining HTTP requests, which brings other complications.

For now, while servers are still adding support and server push usage is low (and still being experimented with to define best practice), I would recommend having HTTP/2 only at the end point. At the time of writing, nginx also doesn't support HTTP/2 for proxy_pass connections (though Apache does), and has no plans to add this, and the nginx developers make an interesting point about whether a single HTTP/2 connection might introduce slowness (emphasis mine):

Is HTTP/2 proxy support planned for the near future?

Short answer:

No, there are no plans.

Long answer:

There is almost no sense to implement it, as the main HTTP/2 benefit is that it allows multiplexing many requests within a single connection, thus [almost] removing the limit on number of simultaneous requests - and there is no such limit when talking to your own backends. Moreover, things may even become worse when using HTTP/2 to backends, due to single TCP connection being used instead of multiple ones.

On the other hand, implementing HTTP/2 protocol and request multiplexing within a single connection in the upstream module will require major changes to the upstream module.

Due to the above, there are no plans to implement HTTP/2 support in the upstream module, at least in the foreseeable future. If you still think that talking to backends via HTTP/2 is something needed - feel free to provide patches.

Finally, it should also be noted that, while browsers require HTTPS for HTTP/2 (h2), most servers don't, and so could support this final hop over plain HTTP (h2c). So there would be no need for end-to-end encryption if that is not present on the Node side (as it often isn't). Though, depending on where the backend server sits in relation to the front-end server, using HTTPS even for this connection is perhaps something that should be considered if traffic will travel across an unsecured network (e.g. CDN to origin server across the internet).

EDIT AUGUST 2021

HTTP/1.1 being text-based rather than binary does make it vulnerable to various request smuggling attacks. At DEF CON 2021, PortSwigger demonstrated a number of real-life attacks, mostly related to issues when downgrading front-end HTTP/2 requests to back-end HTTP/1.1 requests. These could probably mostly be avoided by speaking HTTP/2 all the way through, but given the current level of support in front-end servers and CDNs for speaking HTTP/2 to the backend, and in backends for accepting HTTP/2, it seems it will take a long time for this to be common, and front-end HTTP/2 servers ensuring these attacks aren't exploitable seems like the more realistic solution.

Jellyfish answered 14/1, 2017 at 8:6 Comment(12)
Thanks for the extensive reply. Your comments on "translating" between the protocols and on the overall effectiveness of multiplexing in my setup were mostly the things I was looking for.Fulvia
Hi, would you mind sharing how you implement server push with a reverse proxy in front of a backend service? I tried Node.js with spdy and with the native http2 module; both require SSL to work (and it looks like that is a hard requirement for HTTP/2 regardless of library or platform). I don't see how to combine a reverse proxy with a backend service, because as far as I can tell we normally use SSL only on the reverse proxy, yet the backend now says it needs it too. And I agree it's a waste to do end-to-end encryption.Hamsun
Well, for a start nginx doesn't support server push, but if using Apache, for instance, you can have HTTP/2 to the client and HTTP/1.1 to Node. To implement server push you just add a Link header from Node in the response. Apache will see the response, see that Link header, and automatically request the resource and push it to the client.Jellyfish
Thanks! I think Cloudflare uses this approach to provide server push too. Since this is used fairly widely (we have seen at least two instances), it looks like the reason nginx doesn't support server push is not a technical one.Hamsun
There was a comment at the bottom of the original Cloudflare blog post (blog.cloudflare.com/announcing-support-for-http-2-server-push-2) asking if they would donate the code to Nginx but that wasn’t answered.Jellyfish
@BarryPollard if there is a benefit in multiplexing at the front end, then the same benefit exists towards the back end. The limit on the number of connections from browsers to front ends with HTTP/1.1 is naturally removed when a front end talks to a back end, but the same logic applies to HTTP/2: nowhere is it mandated that a front end cannot open multiple HTTP/2 connections to the same back-end server, if that is better than just one. Only browsers are somehow forced to open just one HTTP/2 connection.Burette
I disagree @Burette - The main benefit for the front end exists BECAUSE latency is high and bandwidth is limited - and HTTP/2 addresses this. That's usually much less of an issue for backend connections. And over low-latency/high-bandwidth connections HTTP/2 will not perform much better than HTTP/1. Similarly, multiple HTTP/2 connections have no real benefit over multiple HTTP/1 connections (and similar downsides). I'm not saying HTTP/2 is WORSE than HTTP/1 for backend connections - I'm just saying it's not better and, given limited support for this anyway, it may not be worth it.Jellyfish
And btw @Burette apologies for misspelling your name in above and thanks for correcting. I believe that’s twice I’ve done that to you :-(Jellyfish
NGINX now supports HTTP/2 push! I have set it up as you mentioned, with NGINX doing the push and Node.js sitting behind the proxy, and it works beautifully! The big-ticket items I am pushing are a big minified .css and minified .js file.Circumnutate
Yes, NGINX supports push, but it still does not support HTTP/2 to a backend like Node. However, like you are doing, the better way to push is probably using Link headers (i.e. getting nginx to push), so the lack of HTTP/2 proxying doesn't really matter.Jellyfish
I can think of three reasons why you would want to forward HTTP/2 on to the backend: 1. gRPC uses HTTP/2 (although probably not relevant in this case) 2. Allow the backend to use HTTP/2's server-push feature 3. Preserve priority and stream information that is lost during an h2 to HTTP/1 conversion.Goatsbeard
1. gRPC is not quite the same as HTTP/2 (nginx supports it on the backend but not normal HTTP/2, for example). 2. Server push is covered in the answer - it gets complex when chaining connections. 3. Possibly, though when mixing resources (e.g. static content served from the webserver and dynamic content from the back end) this gets complicated anyway. Also, nothing says this is preserved.Jellyfish

NGINX now supports HTTP/2 push for proxy_pass and it's awesome...

Here I am pushing favicon.ico, minified.css, minified.js, register.svg and purchase_litecoin.svg from my static subdomain too. It took me some time to realize I could push from a subdomain.

location / {
    http2_push_preload              on;
    add_header                      Link "<//static.yourdomain.io/css/minified.css>; as=style; rel=preload";
    add_header                      Link "<//static.yourdomain.io/js/minified.js>; as=script; rel=preload";
    add_header                      Link "<//static.yourdomain.io/favicon.ico>; as=image; rel=preload";
    add_header                      Link "<//static.yourdomain.io/images/register.svg>; as=image; rel=preload";
    add_header                      Link "<//static.yourdomain.io/images/purchase_litecoin.svg>; as=image; rel=preload";
    proxy_hide_header               X-Frame-Options;
    proxy_http_version              1.1;
    proxy_redirect                  off;
    proxy_set_header                Upgrade $http_upgrade;
    proxy_set_header                Connection "upgrade";
    proxy_set_header                X-Real-IP $remote_addr;
    proxy_set_header                Host $http_host;
    proxy_set_header                X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header                X-Forwarded-Proto $scheme;
    proxy_pass                      http://app_service;
}
Circumnutate answered 3/9, 2018 at 0:11 Comment(3)
I just bookmarked this question and want to add an official announcement link - Introducing HTTP/2 Server Push with NGINX 1.13.9 - to your answer; it contains several useful examples.Mulvaney
@IvanShatsky the page you refer to says one should not push resources that are likely cached. A server cannot know what a client has cached and the most common resources, the ones most likely cached (because they are on every page), are exactly the resources you would want to push. Push does not bypass the browser cache AFAIK.Blais
Is HTTP/2 server push still a thing? developer.chrome.com/blog/removing-pushAthwartships

In case someone is looking for a solution when it is not convenient to make the services themselves HTTP/2 compatible: here is a basic nginx configuration you can use to expose an HTTP/1.1 service as an HTTP/2 service.

server {
  listen [::]:443 ssl http2;
  listen 443 ssl http2;

  server_name localhost;
  # "ssl on;" is deprecated; the "ssl" parameter on the listen directives above is sufficient
  ssl_certificate /Users/xxx/ssl/myssl.crt;
  ssl_certificate_key /Users/xxx/ssl/myssl.key;

  location / {
    proxy_pass http://localhost:3001;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
  }
}
Tunstall answered 16/1, 2019 at 22:25 Comment(0)

NGINX does not support HTTP/2 as a client. As they're running on the same server and there is no meaningful latency or bandwidth limit, I don't think it would make a huge difference either way. I would make sure you are using keepalives between nginx and Node.js.

https://www.nginx.com/blog/tuning-nginx/#keepalive
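A minimal sketch of such a keepalive setup (the upstream name and port are illustrative assumptions): the upstream block needs a keepalive pool, and the proxied requests must use HTTP/1.1 with the Connection header cleared, since nginx defaults to "Connection: close" towards upstreams.

```nginx
# Illustrative sketch: upstream keepalive between nginx and the Node backend.
upstream node_backend {
    server 127.0.0.1:3000;
    keepalive 16;                        # pool of idle keepalive connections
}

server {
    # ... listen/ssl directives as in the other answers ...

    location / {
        proxy_pass http://node_backend;
        proxy_http_version 1.1;          # keepalive requires HTTP/1.1
        proxy_set_header Connection "";  # clear the default "Connection: close"
    }
}
```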

Miserere answered 13/1, 2017 at 17:22 Comment(3)
NGINX does now support HTTP/2 with proxy_pass.Blais
@Blais I think this is incorrect.Rafat
You could be right. Not sure where I got this info. Possibly nginx plus…Blais

You are not losing performance in general, because nginx matches the request multiplexing the browser does over HTTP/2 by creating multiple simultaneous requests to your Node backend. (One of the major performance improvements of HTTP/2 is that it allows the browser to make multiple simultaneous requests over the same connection, whereas in HTTP/1.1 only one request per connection can be in flight at a time. Browsers also limit the number of connections.)

Lachesis answered 13/1, 2017 at 21:20 Comment(0)
