EventSource / Server-Sent Events through Nginx
On the server side I'm using Sinatra with a stream block:

require 'sinatra'

connections = []  # open client streams

get '/stream', :provides => 'text/event-stream' do
  stream :keep_open do |out|
    connections << out
    out.callback { connections.delete(out) }  # drop closed connections
  end
end

On client side:

var es = new EventSource('/stream');
es.onmessage = function(e) { $('#chat').append(e.data + "\n") };

When I use the app directly, via http://localhost:9292/, everything works perfectly: the connection is persistent and all messages are passed to all clients.

However, when it goes through Nginx, at http://chat.dev, the connections are dropped and a reconnection fires every second or so.

The Nginx setup looks OK to me:

upstream chat_dev_upstream {
  server 127.0.0.1:9292;
}

server {
  listen       80;
  server_name  chat.dev;

  location / {
    proxy_pass http://chat_dev_upstream;
    proxy_buffering off;
    proxy_cache off;
    proxy_set_header Host $host;
  }
}

I tried keepalive 1024 in the upstream section, as well as proxy_set_header Connection keep-alive; in the location block.

Nothing helps :(

There are no persistent connections, and messages are not passed to any clients.

Teletype answered 2/12, 2012 at 19:12
Your Nginx config is correct; you're just missing a few lines.

Here is the "magic trio" that makes EventSource work through Nginx:

proxy_set_header Connection '';
proxy_http_version 1.1;
chunked_transfer_encoding off;

Place them in the location section and it should work.

You may also need to add

proxy_buffering off;
proxy_cache off;
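Putting it together, the location block from the question with these directives added would look something like this (a sketch based on the question's config; the upstream name and server_name are from the question):

```nginx
upstream chat_dev_upstream {
  server 127.0.0.1:9292;
}

server {
  listen       80;
  server_name  chat.dev;

  location / {
    proxy_pass http://chat_dev_upstream;
    proxy_set_header Host $host;

    # SSE essentials: HTTP/1.1 upstream, no "Connection: close",
    # no chunked re-encoding, and no response buffering/caching.
    proxy_set_header Connection '';
    proxy_http_version 1.1;
    chunked_transfer_encoding off;
    proxy_buffering off;
    proxy_cache off;
  }
}
```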

This isn't an official way of doing it; I ended up here through trial and error plus some googling :)

Cass answered 2/12, 2012 at 20:05

Comments:
- Having the server respond with an "X-Accel-Buffering: no" header helps a lot! (see: wiki.nginx.org/X-accel#X-Accel-Buffering) – Belong
- Have you had any luck with this and websockets? The websocket example on the nginx site automatically closes the connection header if nothing is set... – Rolfe
- That didn't work for me until I also added the following: proxy_buffering off; proxy_cache off; – Machzor
- Your trial-and-error + my first Google hit = I love Stack Overflow. Thanks! – Wreath
- I needed proxy_buffering off; and proxy_cache off; to get it working. Thanks to @MalcolmSparks – Ablaze
- I'm having a similar issue, but the provided solution didn't work for me. Maybe take a look? – Ankara
- You just did it the official way, great job! nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive – Hugohugon
- It works for proxying to webpack-hot-middleware. Thanks! – Dibble
- It worked for me after adding proxy_buffer_size 0; Thanks!! – Kookaburra
- Is proxy_http_version 1.1; absolutely required? With it, browser connections will be limited to only 6. – Blinni
- I made it work just by adding "X-Accel-Buffering: no" to the response header. – Lumenhour
- In the context of an nginx ingress inside k8s, this answer provides the solution: #58560548 – Bushy
- While using the official nginx Docker image, either proxy_http_version 1.1; or proxy_buffering off; was sufficient on its own. – Kawasaki
Another option is to include an 'X-Accel-Buffering' header with the value 'no' in your response. Nginx treats it specially; see http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering
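In Sinatra terms this is just another response header. A minimal sketch (the sse_headers helper name is mine; the headers other than X-Accel-Buffering are the usual SSE boilerplate, not anything Nginx-specific):

```ruby
# Headers for an SSE response behind Nginx. 'X-Accel-Buffering: no'
# disables proxy buffering for this one response, without touching
# the nginx config.
def sse_headers
  {
    'Content-Type'      => 'text/event-stream',
    'Cache-Control'     => 'no-cache',
    'X-Accel-Buffering' => 'no'
  }
end

# In a Sinatra route you would apply them with the `headers` helper:
#   headers sse_headers
```

The advantage over the config-side fix is that buffering stays on for all other responses, which is usually what you want.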

Pretense answered 29/10, 2015 at 12:20

Comment:
- Thanks, this helped me solve the problem without changing the nginx config. – Gluttonous
Don't write this from scratch yourself. Nginx is a wonderful evented server and has modules that will handle SSE for you without any performance degradation of your upstream server.

Check out https://github.com/wandenberg/nginx-push-stream-module

The way it works: the subscriber (a browser using SSE) connects to Nginx, and the connection stops there. The publisher (your server behind Nginx) sends a POST to Nginx at the corresponding route, and at that moment Nginx immediately forwards it to the waiting EventSource listeners in the browsers.

This method is much more scalable than having your Ruby webserver handle these long-polling SSE connections itself.
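For reference, a minimal subscriber/publisher setup with that module looks roughly like this (a sketch based on the module's documentation; the /sub and /pub paths and the id query parameter are arbitrary choices, not required names):

```nginx
location /sub {
  push_stream_subscriber eventsource;   # speak SSE to the browser
  push_stream_channels_path $arg_id;    # channel taken from the ?id=... query arg
}

location /pub {
  push_stream_publisher admin;          # your backend POSTs messages here
  push_stream_channels_path $arg_id;
}
```

Browsers then open an EventSource against /sub?id=chat, and your server publishes by POSTing to /pub?id=chat.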

Schaaff answered 14/2, 2015 at 20:42

Comment:
- How can you make this highly available? E.g. deploying two nginx instances so that when you post a message to one of them, it is published to clients subscribed to either? – Position
Elevating this comment from Did to an answer: this was the only thing I needed when streaming from Django using StreamingHttpResponse through Nginx. None of the other switches above helped, but this header did.

Having the server respond with a "X-Accel-Buffering: no" header helps a lot! (see: wiki.nginx.org/X-accel#X-Accel-Buffering) – Did Jul 1, 2013 at 16:24

Quash answered 5/5, 2023 at 4:23