Server Scalability - HTML5 WebSockets vs Comet

Many Comet implementations, such as Caplin's Liberator, provide scalable server solutions.

The following is one of the statistics from the Caplin site:

A single instance of Caplin Liberator can support up to 100,000 clients, each receiving 1 message per second, with an average latency of less than 7ms.

How does this compare to HTML5 WebSockets on any web server? Can anyone point me to any HTML5 WebSocket statistics?

Trochlear asked 2/2, 2012 at 5:1 Comment(2)
A comment on 'HTML 5 websockets vs Comet': as stated in other comments below Caplin's Liberator, along with a number of other 'Comet' servers, support WebSockets as a connection mechanism. When does a server stop being a Comet server? If it uses WebSockets is it still a Comet server? Is Comet an umbrella term for HTTP-Long Polling and HTTP Streaming? I'd recommend reading The Rumours of Comet’s Death Have Been Greatly Exaggerated.Outlaw
I'm going to keep this as a comment for now. But you really should consider EventSource (aka Server-Sent Events) as an option too. It makes scaling to many servers a lot easier because it's unidirectional (push-only).Unattended
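
As a footnote to the EventSource comment above, here is a minimal client sketch in TypeScript; the /events endpoint is hypothetical. The API is push-only and the browser reconnects automatically, which is part of why it spreads easily across many servers.

    // Minimal Server-Sent Events client; the "/events" endpoint is hypothetical.
    const source = new EventSource("/events");

    source.onmessage = (event: MessageEvent) => {
      console.log("update:", event.data);  // one server-to-client message
    };

    source.onerror = () => {
      // The browser retries automatically; no manual reconnect loop is needed.
      console.warn("SSE connection lost; the browser will reconnect");
    };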

Disclosure - I work for Caplin.

There is a bit of misinformation on this page, so I'd like to try to make it clearer.

I think we can split the methods we are talking about into three camps:

  1. Comet HTTP polling - including long polling
  2. Comet HTTP streaming - server to client messages use a single persistent socket with no HTTP header overhead after initial setup
  3. Comet WebSocket - single bidirectional socket

I see them all as Comet, since Comet is just a paradigm, but since WebSocket came along some people want to treat it as though it is different from, or replaces, Comet - yet it is just another technique. Unless you are happy supporting only the latest browsers, you cannot rely on WebSocket alone.

As far as performance is concerned, most benchmarks concentrate on server-to-client messages: number of users, number of messages per second, and the latency of those messages. For this scenario there is no fundamental difference between HTTP streaming and WebSocket - both write messages down an open socket with little or no header overhead.

Long polling can give good latency if the frequency of messages is low. However, if two messages (server to client) arrive in quick succession, the second will not reach the client until a new request is made after the first message has been received.
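
To make that concrete, here is a minimal sketch of a long-polling loop, assuming a hypothetical /poll endpoint that holds each request open until a message is available. A second message arriving at the server mid-cycle has to wait for the next request.

    // Hypothetical long-polling loop: every message costs a full
    // request/response round trip, headers included.
    async function longPoll(url: string): Promise<void> {
      while (true) {
        try {
          const res = await fetch(url);  // held open until a message exists
          console.log("received:", await res.text());
          // A message arriving at the server right now must wait until
          // the next iteration's request reaches the server.
        } catch {
          await new Promise((r) => setTimeout(r, 1000));  // back off on errors
        }
      }
    }

    longPoll("/poll");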

I think someone touched on HTTP keep-alive. This can obviously improve long polling - you still have the overhead of the round trip and headers, but not always the socket creation.

WebSocket should improve upon HTTP streaming in scenarios where there are more client-to-server messages. Relating these scenarios to the real world creates slightly more arbitrary setups, compared to the simple-to-understand 'send lots of messages to lots of clients', which everyone can grasp. For example, in a trading application it is easy to create a scenario that includes users executing trades (i.e. client-to-server messages), but the results are a bit less meaningful than the basic server-to-client scenarios. Traders are not trying to do 100 trades/sec, so you end up with results like '10,000 users receiving 100 messages/sec while also sending a client message once every 5 minutes'. The more interesting part of the client-to-server message is the latency, since the number of messages required is usually insignificant compared to the server-to-client messages.
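
To illustrate the bidirectional case, here is a sketch of a single WebSocket carrying a price stream down and occasional trades up; the URL and message shapes are invented for the example.

    // One socket, both directions: no new HTTP round trip per message.
    const ws = new WebSocket("wss://example.com/trading");  // hypothetical endpoint

    ws.onmessage = (event: MessageEvent) => {
      console.log("tick:", JSON.parse(event.data));  // server-to-client stream
    };

    function executeTrade(symbol: string, qty: number): void {
      // Client-to-server message over the same open socket.
      ws.send(JSON.stringify({ type: "trade", symbol, qty }));
    }

    ws.onopen = () => executeTrade("ABC", 100);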

Another point someone made above concerned 64k clients. You do not need to do anything clever to support more than 64k sockets on a server, other than configuring the number of file descriptors and so on. Trying to make 64k connections from a single client machine is a totally different matter, since each connection needs its own local port number. On the server end it is fine though: that is the listening end, and each connection is distinguished by the client's address and port as well, so you can go well above 64k sockets.

Yoshi answered 3/2, 2012 at 9:56 Comment(1)
+1, exactly what I was referring to in my previous comments. Implementations of HTTP streaming and WebSockets are functionally identical, but with streaming you lose the full-duplex capability of WebSockets. Long polling and short polling aren't really fair comparisons with WebSockets, as the connection is re-initialized continuously.Donetsk

In theory, WebSockets can scale much better than HTTP, but there are some caveats and some ways to address those caveats too.

The complexity of handshake header processing for HTTP and WebSockets is about the same. The HTTP (and initial WebSocket) handshake can easily be over 1K of data (due to cookies, etc). The important difference is that with HTTP that header overhead recurs for every message; once a WebSocket connection is established, the overhead per message is only 2-14 bytes.
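
The 2-14 byte range falls out of the RFC 6455 framing rules: a 2-byte base header, then 0, 2 or 8 extra bytes depending on payload length, plus 4 mask bytes on client-to-server frames only. A quick sketch of the arithmetic:

    // WebSocket frame header size per RFC 6455.
    function frameHeaderBytes(payloadLength: number, masked: boolean): number {
      let size = 2;                             // FIN/opcode byte + mask/length byte
      if (payloadLength > 65535) size += 8;     // 64-bit extended length
      else if (payloadLength > 125) size += 2;  // 16-bit extended length
      if (masked) size += 4;                    // masking key (client-to-server only)
      return size;
    }

    frameHeaderBytes(100, false);       // 2: small server-to-client frame
    frameHeaderBytes(1_000_000, true);  // 14: large client-to-server frame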

The excellent Jetty benchmark links posted in @David Titarenco's answer (1, 2) show that WebSockets can easily achieve more than an order of magnitude better latency when compared to Comet.

See this answer for more information on scaling of WebSockets vs HTTP.

Caveats:

  • WebSocket connections are long-lived unlike HTTP connections which are short-lived. This significantly reduces the overhead (no socket creation and management for every request/response), but it does mean that to scale a server above 64k separate simultaneous client hosts you will need to use tricks like multiple IP addresses on the same server.

  • Due to security concerns with web intermediaries, browser-to-server WebSocket messages have all payload data XOR-masked. This adds some CPU utilization to the server to decode the messages. However, XOR is one of the most efficient operations in most CPU architectures and there is often hardware assist available. Server-to-browser messages are not masked, and since many uses of WebSockets don't require large amounts of data sent from browser to server, this isn't a big issue (see the sketch after this list).
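
To show how cheap the unmasking step is, here is a sketch of the per-frame operation RFC 6455 requires of the server: one XOR per payload byte against a repeating 4-byte key.

    // RFC 6455 payload unmasking: XOR each byte with the 4-byte masking
    // key, cycling through the key. Runs once per received client frame.
    function unmask(payload: Uint8Array, maskingKey: Uint8Array): Uint8Array {
      const out = new Uint8Array(payload.length);
      for (let i = 0; i < payload.length; i++) {
        out[i] = payload[i] ^ maskingKey[i % 4];
      }
      return out;
    }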

Doggone answered 2/2, 2012 at 14:1 Comment(6)
In HTTP streaming (what Caplin uses), the HTTP handshake doesn't happen for every message - I've implemented it many times myself. It's essentially just an open (one-directional) socket connection. HTTP streaming performance will be very comparable to WebSockets (but there are several caveats, one of which is the lack of duplex).Donetsk
Caplin now offer WebSocket support. HTTP streaming is a good option (as we've both stated), but in browsers that don't support a content-type of multipart/x-mixed-replace (everything other than Firefox?) the XHR.responseText continues to grow, and at some point the streaming connection has to be dropped and restarted or the browser would eventually run out of memory.Outlaw
@DavidTitarenco, I would be interested in your opinion on why the latencies are almost 2 orders of magnitude apart for Comet/long-poll vs WebSocket in that benchmark.Doggone
@leggetter, do you have any data on Caplin's HTTP streaming latencies (round-trip) vs Caplin WebSockets? Curious minds want to know.Doggone
@Doggone unfortunately, no. But I'll ask.Outlaw
There are different ways of implementing Comet (which is somewhat of an umbrella term). My opinion is that if you implemented Comet via HTTP streaming (using chunked-encoding), you would get comparable, if not identical, latencies as a websocket implementation. With long-polling (also known as HTTP pushing), this goes out the window as the connection needs to be re-initialized after every message is received.Donetsk

It's hard to know how that compares to anything, because we don't know how big the (average) payload size is. Under the hood (as in how the server is implemented), HTTP streaming and WebSockets are virtually identical - apart from the initial handshake, which is obviously more complicated when done with HTTP.

If you wrote your own WebSocket server in C (à la Caplin), you could probably reach those numbers without too much difficulty. Most WebSocket implementations are done through existing server packages (like Jetty), so the comparison wouldn't really be fair.

Some benchmarks:
http://webtide.intalio.com/2011/09/cometd-2-4-0-websocket-benchmarks/
http://webtide.intalio.com/2011/08/prelim-cometd-websocket-benchmarks/

However, if you look at C event lib benchmarks, like libev and libevent, the numbers look significantly sexier:
http://libev.schmorp.de/bench.html

Donetsk answered 2/2, 2012 at 5:15 Comment(6)
Great links, thanks! Actually, HTTP and WebSockets are quite different. The WebSocket handshake is designed to be compatible with HTTP so that both services can share the same port. They are often implemented in the same server, but after that they are very different. Once the WebSocket connection is established, it is a raw channel that is full-duplex and bidirectional (more akin to a regular socket). And in some ways the WebSocket handshake is actually more complicated than plain HTTP, because it allows CORS validation and there is a SHA-1 challenge-response required as part of the handshake.Doggone
The latency, in terms of delivering messages from server to client, will be very similar if not identical after the initial connection. WebSocket's benefit, as @Doggone points out, is that after the connection has been established the channel is full-duplex and bi-directional. So, if you were to benchmark bi-directional messaging using HTTP streaming, where a second short-lived HTTP request is required for client-to-server comms, against a WebSocket connection, the WebSocket option is likely to be vastly superior. Caplin now offer WebSocket support, so I'm confident they can beat 100k connections.Outlaw
@leggetter, my understanding is that with long-poll over HTTP/1.0 you have socket setup and HTTP request/response headers in both directions. Long-poll over HTTP/1.1 with keep-alive allows the socket to be re-used, but my understanding was that HTTP request/response headers are still sent/received. It's more difficult than I expected to find a conclusive answer to this. I would be very interested in wire dumps comparing various Comet/AJAX/long-poll solutions so that I can see exactly what's happening.Doggone
@Doggone My understanding of HTTP Long-Polling over HTTP/1.0 and HTTP/1.1 is the same as yours. However, HTTP Streaming is different as the connection is kept open between messages. If multipart-replace is used then there is additional data sent to delimit the message parts. But standard streaming just sends new data over the wire with no additional headers so I think it'll be as efficient as a WebSocket connection once established. The problem is that responseText buffer grows and the connection needs to be dropped and reconnected eventually.Outlaw
@Doggone According to Wikipedia's entry on HTTP pipelining, pipelining is only enabled by default in Opera. Found via this post about SPDY.Outlaw
You don't have to use pipelining, you can just use chunked encoding, per: en.wikipedia.org/wiki/Chunked_transfer_encoding - This is how I've done it before, and all you need to send per message is [chunk size in hex]\r\n[message]\r\n. You can evaluate the stream in several ways. The easiest is to check the XHR status flag and interpret new information as you get it. This does not work in IE, however (there are several messy iframe workarounds). As an addendum, I've never run into any browser closing a connection due to responseText getting too big. The limit may be set to several MB.Donetsk
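
For readers unfamiliar with the chunk framing described in the last comment, here is a sketch of what a server writes per message under chunked transfer encoding; the encoding itself is standard, but the helper name is made up.

    // Frame one message as an HTTP chunk: hex byte length, CRLF, payload, CRLF.
    function encodeChunk(message: string): string {
      const byteLength = new TextEncoder().encode(message).length;
      return byteLength.toString(16) + "\r\n" + message + "\r\n";
    }

    encodeChunk("hello");  // "5\r\nhello\r\n"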

Ignoring any form of polling, which, as explained elsewhere, can introduce latency when the update rate is high, the three most common techniques for JavaScript streaming are:

  1. WebSocket
  2. Comet XHR/XDR streaming
  3. Comet Forever IFrame

WebSocket is by far the cleanest solution, but there are still issues in terms of browser and network infrastructure not supporting it. The sooner it can be relied upon the better.

XHR/XDR & Forever IFrame are both fine for pushing data to clients from the server, but require various hacks to work consistently across all browsers. In my experience these Comet approaches are always slightly slower than WebSockets, not least because there is a lot more client-side JavaScript code required to make them work; from the server's perspective, however, sending data over the wire happens at the same speed.
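
As a sketch of the kind of client-side code the XHR streaming approach needs (the /stream endpoint is hypothetical): consume responseText incrementally, remembering how much has already been processed.

    // Hypothetical XHR streaming reader: the browser appends to
    // responseText as chunks arrive; we track what we've consumed.
    const xhr = new XMLHttpRequest();
    let seen = 0;

    xhr.open("GET", "/stream");
    xhr.onreadystatechange = () => {
      if (xhr.readyState >= 3) {  // LOADING: partial body available
        const fresh = xhr.responseText.slice(seen);
        seen = xhr.responseText.length;
        if (fresh) console.log("new data:", fresh);
      }
    };
    xhr.send();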

Here are some more WebSocket benchmark graphs, this time for our product my-Channels Nirvana.

Skip past the multicast and binary data graphs, down to the last graph on the page (JavaScript High Update Rate).

In summary - the results show Nirvana WebSocket delivering 50 events/sec to 2,500 users with 800 microsecond latency. At 5,000 users (a total of 250k events/sec streamed) the latency is 2 milliseconds.

Oriente answered 9/2, 2012 at 14:6 Comment(0)
