Can nginx be used as a reverse proxy for a backend websocket server?

We're working on a Ruby on Rails app that needs to take advantage of html5 websockets. At the moment, we have two separate "servers" so to speak: our main app running on nginx+passenger, and a separate server using Pratik Naik's Cramp framework (which is running on Thin) to handle the websocket connections.

Ideally, when it comes time for deployment, we'd have the rails app running on nginx+passenger, and the websocket server would be proxied behind nginx, so we wouldn't need to have the websocket server running on a different port.

Problem is, in this setup it seems that nginx is closing the connections to Thin too early. The connection is successfully established to the Thin server, then immediately closed with a 200 response code. Our guess is that nginx doesn't realize that the client is trying to establish a long-running connection for websocket traffic.

Admittedly, I'm not all that savvy with nginx config, so, is it even possible to configure nginx to act as a reverse proxy for a websocket server? Or do I have to wait for nginx to offer support for the new websocket handshake stuff? Assuming that having both the app server and the websocket server listening on port 80 is a requirement, might that mean I have to have Thin running on a separate server without nginx in front for now?

Thanks in advance for any advice or suggestions. :)

-John

Sapowith answered 10/3, 2010 at 18:7
Anyone still reading this: do not accept the current answer below. The TCP proxy module works well, and an answer below includes a link on how to set it up: github.com/yaoweibin/nginx_tcp_proxy_module and letseehere.com/reverse-proxy-web-sockets — Soot

You can't use nginx for this currently [edit: this is no longer true], but I would suggest looking at HAProxy. I have used it for exactly this purpose.

The trick is to set long timeouts so that the socket connections are not closed. Something like:

timeout client  86400000 # In the frontend
timeout server  86400000 # In the backend

If you want to serve, say, a Rails and a Cramp application on the same port, you can use ACL rules to detect a WebSocket connection and use a different backend. So your HAProxy frontend config would look something like:

frontend all 0.0.0.0:80
  timeout client    86400000
  default_backend   rails_backend
  acl websocket hdr(Upgrade)    -i WebSocket
  use_backend   cramp_backend   if websocket

For completeness, the backend would look like:

backend cramp_backend
  timeout server  86400000
  server cramp1 localhost:8090 maxconn 200 check
Hewes answered 8/4, 2010 at 14:51
This is great, thank you! I haven't used HAProxy before, but I've always been meaning to learn. Looks like I've got a good reason to do so now. :) — Sapowith
This answer is no longer true (not surprising as it's 3 years old). Check out @mak's answer further down (at present) for how to configure this on nginx >= 1.3.13 — Soult

How about using my nginx_tcp_proxy_module?

This module is designed for general TCP proxying with Nginx. I think it's also suitable for WebSockets, and I just added tcp_ssl_module in the development branch.

Cavan answered 14/9, 2010 at 9:3
You think, but haven't tested it with WebSocket? — Dekko
@Jonas: I don't know whether he'd tested this at the time he made that comment, but I can confirm that his TCP proxy now does explicitly support websockets. — Redshank
This article explains how to set up, test, and use yaoweibin's module to host WebSocket connections: letseehere.com/reverse-proxy-web-sockets — Prentiss
I tested the module and it works well. However, you have to know that if you plan to serve HTTP content with Node and nginx on the standard port 80, then you can't use this module, as one of the two will use port 80 and the other must use a different port. Go with the HAProxy solution (as described by @mloughran) instead if this is your situation. — Penney

nginx (>= 1.3.13) now supports reverse proxying websockets.

# the upstream server doesn't need a prefix!
# no need for wss:// or http://, because nginx will upgrade to HTTP/1.1 in the config below
upstream app_server {
    server localhost:3000;
}

server {
    # ...

    location / {
        proxy_pass http://app_server;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        proxy_redirect off;
    }
}
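As an aside, the nginx documentation also recommends a `map`-based variant, so that the `Connection` header is only set to "upgrade" when the client actually sends an `Upgrade` header; ordinary HTTP requests through the same location then keep their normal behavior. A sketch (both blocks live inside the `http` context; `app_server` matches the upstream above):

```nginx
# Forward "Connection: upgrade" only when the client sent an Upgrade header;
# otherwise send "close" (per the nginx WebSocket proxying docs).
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    location / {
        proxy_pass http://app_server;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
```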
Chalk answered 20/2, 2013 at 10:0
@mak, while this works well for HTTP, I have an issue with HTTPS: I somehow get a 301. I had successfully set up nginx with WebSockets over HTTP, but over SSL I get a 301. github.com/websocket-rails/websocket-rails/issues/333 is the issue I created. Let me know if you can help. Thanks — Sodality

Out of the box (i.e. official sources), Nginx can establish only HTTP 1.0 connections to an upstream (= backend), which means no keepalive is possible: Nginx will select an upstream server, open a connection to it, proxy, cache (if you want), and close the connection. That's it.

This is the fundamental reason frameworks requiring persistent connections to the backend would not work through Nginx (no HTTP/1.1 = no keepalive and no websockets, I guess). Despite this disadvantage, there is an evident benefit: Nginx can choose among several upstreams (load balancing) and fail over to a live one in case some of them fail.

Edit: Nginx has supported HTTP 1.1 to backends and keepalive since version 1.1.4. "fastcgi" and "proxy" upstreams are supported. Here are the docs.
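To illustrate the edit above, a minimal sketch of backend keepalive in nginx >= 1.1.4, following the nginx "keepalive" documentation (the upstream name and port here are placeholders):

```nginx
upstream backend {
    server localhost:8080;
    keepalive 16;                       # keep up to 16 idle connections to the backend per worker
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;         # keepalive to upstreams requires HTTP/1.1
        proxy_set_header Connection ""; # clear the "close" header nginx would otherwise send
    }
}
```

Note this only enables connection reuse, not WebSockets; WebSocket proxying additionally needs the Upgrade/Connection header handling added in 1.3.13.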

Maduro answered 10/3, 2010 at 19:48
Got it, thanks. Essentially then, what I'm trying to do is currently impossible. Maybe someday nginx will support HTTP/1.1 keepalives to backends, but for now I'll have to come up with an alternate solution. Thanks for the response. — Sapowith

For anyone wondering about the same problem: nginx now officially supports HTTP 1.1 upstream. See the nginx documentation for "keepalive" and "proxy_http_version 1.1".

Lysias answered 15/5, 2012 at 6:23
Yes, but it won't support websockets until version 1.3 — Soult
Indeed, and it should be noted that it hasn't made it into 1.3 yet either, even though it's released. Their roadmap will give some info on the status of the WebSocket implementation (currently planned for 1.3.x): trac.nginx.org/nginx/roadmap — Calista

How about Nginx with the new HTTP Push module: http://pushmodule.slact.net/? It takes care of the connection juggling (so to speak) that one might have to worry about with a reverse proxy. It is certainly a viable alternative to WebSockets, which are not fully in the mix yet. I know the developer of the HTTP Push module is still working on a fully stable version, but it is in active development, and there are versions of it being used in production codebases. To quote the author: "A useful tool with a boring name."

Valona answered 10/3, 2010 at 20:8
Thanks, that's a good suggestion. We actually were using that very module to achieve server push for a while, but now we're wanting to support bi-directional communication... And since we only need to support WebKit browsers for our application, we're hoping to go with a pure WebSocket approach now. But I appreciate the response! :) — Sapowith

I use nginx to reverse proxy to a comet-style server with long-polling connections, and it works great. Make sure you configure proxy_send_timeout and proxy_read_timeout to appropriate values. Also make sure the back-end server that nginx is proxying to supports HTTP 1.0, because I don't think nginx's proxy module does HTTP 1.1 yet.
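For reference, a sketch of the timeout settings mentioned above (the location path and backend address are placeholders):

```nginx
# Long-polling endpoint: raise the proxy timeouts so nginx doesn't close
# the connection while the backend is deliberately holding it open.
location /comet {
    proxy_pass http://localhost:8090;
    proxy_buffering off;         # push backend responses to the client immediately
    proxy_read_timeout 3600s;    # max time to wait for the backend to send data
    proxy_send_timeout 3600s;    # max time to wait while sending the request to the backend
}
```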

Just to clear up some confusion in a few of the answers: keepalive allows a client to reuse a connection to send another HTTP request. It does not have anything to do with long polling or holding connections open until an event occurs, which is what the original question was asking about. So it doesn't matter that nginx's proxy module only supports HTTP 1.0, which does not have keepalive.

Letti answered 26/5, 2010 at 11:51
