WebSockets protocol vs HTTP

There are many blogs and discussions about WebSocket and HTTP, and many developers and sites strongly advocate WebSockets, but I still cannot understand why.

For example (arguments of WebSocket lovers):

HTML5 Web Sockets represents the next evolution of web communications—a full-duplex, bidirectional communications channel that operates through a single socket over the Web. - websocket.org

HTTP supports streaming: request body streaming (you use it when uploading large files) and response body streaming.

Once the connection is made with WebSocket, the client and server exchange data per frame, which costs 2 bytes each, compared to the 8 kilobytes of HTTP headers sent when you do continuous polling.

Why don't those 2 bytes include the overhead of TCP and the protocols below TCP?

GET /about.html HTTP/1.1
Host: example.org

This is a ~48-byte HTTP header.

HTTP chunked encoding (chunked transfer encoding):

23
This is the data in the first chunk
1A
and this is the second one
3
con
8
sequence
0
  • So the overhead per chunk is not big.
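
To make that concrete, here is a minimal sketch (my illustration, not part of the original question) of how a sender frames a single chunk; it assumes an ASCII payload so character count equals byte count:

// Sketch: frame one payload as an HTTP/1.1 chunk (chunked transfer encoding).
// Per-chunk overhead = the hex length digits + CRLF + trailing CRLF.
function encodeChunk(payload: string): string {
  // Assumes ASCII, so string length equals byte length.
  return payload.length.toString(16).toUpperCase() + "\r\n" + payload + "\r\n";
}

const chunk = encodeChunk("This is the data in the first chunk");
console.log(JSON.stringify(chunk));
// "23\r\nThis is the data in the first chunk\r\n" -> 6 bytes of overhead for this chunk
// The stream ends with a zero-length chunk: "0\r\n\r\n".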

Also, both protocols work over TCP, so all TCP issues with long-lived connections are still there.

Questions:

  1. Why is the WebSockets protocol better?
  2. Why was it implemented instead of updating the HTTP protocol?
Pennoncel asked 5/2, 2013 at 9:5 Comment(7)
What is your question?Innerdirected
@Jonas, 1) Why is the WebSockets protocol better? 2) Why was it implemented instead of updating the HTTP protocol? 3) Why are WebSockets so heavily promoted?Pennoncel
@JoachimPileborg, you can do it with TCP sockets or http too for desktop applications; and you have to use WebRTC to make browser-to-browser communication for websitePennoncel
@JoachimPileborg, it is webRTC for browser-to-browser, not websocketsPennoncel
@4esn0k, WS is not better; they are different, and better for some specific tasks. 3) It's a new feature that people should be aware of, and it opens up new possibilities for the WebInnerdirected
@JoachimPileborg: That's wrong, Websockets are a client server technology and not P2P.Innerdirected
...exactly what I am thinking ...Jacobian

1) Why is the WebSockets protocol better?

WebSockets is better for situations that involve low-latency communication, especially low latency for client-to-server messages. For server-to-client data you can get fairly low latency using long-held connections and chunked transfer. However, this doesn't help with client-to-server latency, which requires a new connection to be established for each client-to-server message.

Your 48-byte HTTP handshake is not realistic for real-world HTTP browser connections, where several kilobytes of data are often sent as part of the request (in both directions), including many headers and cookie data. Here is an example of a request/response captured using Chrome:

Example request (2800 bytes including cookie data, 490 bytes without cookie data):

GET / HTTP/1.1
Host: www.cnn.com
Connection: keep-alive
Cache-Control: no-cache
Pragma: no-cache
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.68 Safari/537.17
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie: [[[2428 bytes of cookie data]]]

Example response (355 bytes):

HTTP/1.1 200 OK
Server: nginx
Date: Wed, 13 Feb 2013 18:56:27 GMT
Content-Type: text/html
Transfer-Encoding: chunked
Connection: keep-alive
Set-Cookie: CG=US:TX:Arlington; path=/
Last-Modified: Wed, 13 Feb 2013 18:55:22 GMT
Vary: Accept-Encoding
Cache-Control: max-age=60, private
Expires: Wed, 13 Feb 2013 18:56:54 GMT
Content-Encoding: gzip

Both HTTP and WebSockets have comparably sized initial connection handshakes, but with a WebSocket connection the initial handshake is performed once, and then small messages only have 6 bytes of overhead (2 for the header and 4 for the mask value). The latency overhead is not so much from the size of the headers, but from the logic to parse/handle/store those headers. In addition, TCP connection setup latency is probably a bigger factor than the size of, or processing time for, each request.
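
To make the 6-byte figure concrete, here is a rough sketch (my own, not from the original answer) of how a client frames a short text message per RFC 6455: a 2-byte header, a 4-byte masking key, and the XOR-masked payload. Server-to-client frames omit the mask, so they carry only 2 bytes of overhead for small payloads.

// Sketch: build a masked client-to-server WebSocket frame (RFC 6455) for a
// short text payload (< 126 bytes). Browsers do this internally; this only
// shows where the 2 (header) + 4 (mask) = 6 bytes of overhead come from.
function buildClientTextFrame(text: string): Uint8Array {
  const payload = new TextEncoder().encode(text);
  if (payload.length > 125) throw new Error("sketch handles only short payloads");

  const mask = crypto.getRandomValues(new Uint8Array(4)); // 4-byte masking key
  const frame = new Uint8Array(2 + 4 + payload.length);

  frame[0] = 0x81;                  // FIN = 1, opcode 0x1 (text frame)
  frame[1] = 0x80 | payload.length; // MASK bit set, 7-bit payload length
  frame.set(mask, 2);               // bytes 2..5: masking key
  for (let i = 0; i < payload.length; i++) {
    frame[6 + i] = payload[i] ^ mask[i % 4]; // masked payload
  }
  return frame;
}

console.log(buildClientTextFrame("hi").length); // 8 bytes: 6 overhead + 2 payload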

2) Why was it implemented instead of updating the HTTP protocol?

There are efforts to re-engineer the HTTP protocol to achieve better performance and lower latency, such as SPDY, HTTP 2.0 and QUIC. This will improve the situation for normal HTTP requests, but it is likely that WebSockets and/or the WebRTC DataChannel will still have lower latency for client-to-server data transfer than the HTTP protocol (or HTTP would have to be used in a mode that looks a lot like WebSockets anyway).

Update:

Here is a framework for thinking about web protocols:

  • TCP: a low-level, bi-directional, full-duplex, guaranteed-order transport layer. No browser support (except via plugin/Flash).

  • HTTP 1.0: request-response transport protocol layered on TCP. The client makes one full request, the server gives one full response, and then the connection is closed. The request methods (GET, POST, HEAD) have specific transactional meaning for resources on the server.

  • HTTP 1.1: maintains the request-response nature of HTTP 1.0, but allows the connection to stay open for multiple full requests/full responses (one response per request). Still has full headers in the request and response but the connection is re-used and not closed. HTTP 1.1 also added some additional request methods (OPTIONS, PUT, DELETE, TRACE, CONNECT) which also have specific transactional meanings. However, as noted in the introduction to the HTTP 2.0 draft proposal, HTTP 1.1 pipelining is not widely deployed so this greatly limits the utility of HTTP 1.1 to solve latency between browsers and servers.

  • Long-poll: sort of a "hack" to HTTP (either 1.0 or 1.1) where the server does not respond immediately (or only responds partially with headers) to the client request. After a server response, the client immediately sends a new request (using the same connection if over HTTP 1.1). (A minimal browser-side sketch of long-poll and of Server-Sent Events follows after this list.)

  • HTTP streaming: a variety of techniques (multipart/chunked response) that allow the server to send more than one response to a single client request. The W3C is standardizing this as Server-Sent Events using a text/event-stream MIME type. The browser API (which is fairly similar to the WebSocket API) is called the EventSource API.

  • Comet/server push: this is an umbrella term that includes both long-poll and HTTP streaming. Comet libraries usually support multiple techniques to try and maximize cross-browser and cross-server support.

  • WebSockets: a transport layer built on TCP that uses an HTTP-friendly Upgrade handshake. Unlike TCP, which is a streaming transport, WebSockets is a message-based transport: messages are delimited on the wire and are re-assembled in full before delivery to the application. WebSocket connections are bi-directional, full-duplex and long-lived. After the initial handshake request/response, there are no transactional semantics and there is very little per-message overhead. The client and server may send messages at any time and must handle message receipt asynchronously.

  • SPDY: a Google-initiated proposal to extend HTTP using a more efficient wire protocol while maintaining all HTTP semantics (request/response, cookies, encoding). SPDY introduces a new framing format (with length-prefixed frames) and specifies a way to layer HTTP request/response pairs onto the new framing layer. Headers can be compressed and new headers can be sent after the connection has been established. There are real-world implementations of SPDY in browsers and servers.

  • HTTP 2.0: has similar goals to SPDY: reduce HTTP latency and overhead while preserving HTTP semantics. The current draft is derived from SPDY and defines an upgrade handshake and data framing that is very similar to the WebSocket standard's handshake and framing. An alternate HTTP 2.0 draft proposal (httpbis-speed-mobility) actually uses WebSockets for the transport layer and adds the SPDY multiplexing and HTTP mapping as a WebSocket extension (WebSocket extensions are negotiated during the handshake).

  • WebRTC/CU-WebRTC: proposals to allow peer-to-peer connectivity between browsers. This may enable lower average and maximum latency communication because the underlying transport is SDP/datagram rather than TCP. This allows out-of-order delivery of packets/messages, which avoids the TCP issue of latency spikes caused by dropped packets delaying delivery of all subsequent packets (to guarantee in-order delivery).

  • QUIC: an experimental protocol aimed at reducing web latency over that of TCP. On the surface, QUIC is very similar to TCP+TLS+SPDY implemented on UDP. QUIC provides multiplexing and flow control equivalent to HTTP/2, security equivalent to TLS, and connection semantics, reliability, and congestion control equivalent to TCP. Because TCP is implemented in operating system kernels and middlebox firmware, making significant changes to TCP is next to impossible. However, since QUIC is built on top of UDP, it suffers from no such limitations. QUIC is designed and optimised for HTTP/2 semantics.
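
To make the long-poll and HTTP streaming (Server-Sent Events) entries above concrete, here is a minimal browser-side sketch. The /poll and /events endpoints are hypothetical; the fetch and EventSource APIs are standard.

// Long-poll sketch: the server holds /poll open until it has data; the client
// then immediately issues the next request (real code would back off on errors).
async function longPollLoop(): Promise<void> {
  while (true) {
    const res = await fetch("/poll"); // hypothetical endpoint
    if (res.ok) handleUpdate(await res.text());
  }
}
longPollLoop();

// HTTP streaming via Server-Sent Events: one long-lived response carrying
// many messages, server-to-client only.
const source = new EventSource("/events"); // hypothetical endpoint
source.onmessage = (e: MessageEvent) => handleUpdate(e.data);

function handleUpdate(data: string): void {
  console.log("update:", data);
}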


Pittman answered 5/2, 2013 at 15:54 Comment(10)
>> However, this doesn't help with client to server latency which requires a new connection to be established for each client to server message. - What about streaming of the response body? I know the XMLHttpRequest API does not allow this, but it exists. With streaming to the server you can stream from the client side.Pennoncel
@4esn0k, I'm not aware of any way to do client->server HTTP streaming. All the solutions I know of are for server->client HTTP streaming. It's unlikely that a client->server streaming solution would ever be adopted or standardized because this would break HTTP semantics (even server->client streaming is questionable for many). WebSockets is the solution that was developed to allow low client->server (and thus round-trip) latency without adhering to HTTP semantics.Pittman
@kanaka: I feel sorry for you. You looked up all those sources to get the 150 rep, but the only reason OP put up the bounty was because he wanted someone to tell him he's right.Thieve
@Philipp, he asked a question that I had been wanting to research and document thoroughly anyway. The question of WebSockets vs other HTTP based mechanism comes up fairly often though so now there is a good reference to link to. But yes, it does seem likely the asker was looking for evidence to back up a preconceived notion about WebSockets vs HTTP particularly since he never selected an answer nor awarded the bounty.Pittman
Thanks for this very clear answer. Do you know if there is a kind of 'compatibility matrix' for these protocol? When I am developing a mobile website, can I use all of them?Valorous
@WardC caniuse.com give browser compatibility information (including mobile).Pittman
I remember hearing that a websocket uses a lot of bandwidth in order to keep the connection alive. Is that true?Freesia
@www139, no, at the WebSocket protocol level the connection stays open until one side or the other side closes the connection. You might also have to worry about TCP timeouts (a problem with any TCP-based protocol), but any sort of traffic every minute or two will keep the connection open. In fact, the WebSocket protocol definition specifies a ping/pong frame type, although even without that you could send a single byte (plus two byte header) and that would keep the connection open. 2-3 bytes every couple of minutes is not a significant bandwidth impact at all.Pittman
which protocol would be a great fit for online multiplayer gaming?Obliteration
Strictly speaking, is Websockets not rather an application layer protocol than a transport protocol?Rixdollar

You seem to assume that WebSocket is a replacement for HTTP. It is not. It's an extension.

The main use case of WebSockets is JavaScript applications which run in the web browser and receive real-time data from a server. Games are a good example.

Before WebSockets, the only method for JavaScript applications to interact with a server was through XmlHttpRequest. But these have a major disadvantage: The server can't send data unless the client has explicitly requested it.

But the new WebSocket feature allows the server to send data whenever it wants. This makes it possible to implement browser-based games with much lower latency and without having to use ugly hacks like AJAX long-polling or browser plugins.
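
A minimal browser-side illustration of this (my sketch; the endpoint URL and message shapes are hypothetical): the client opens the socket once, can send at any time, and simply reacts to whatever the server pushes.

// Sketch: browser WebSocket client. URL and message format are made up.
const ws = new WebSocket("wss://example.com/game");

ws.onopen = () => {
  // The client can send whenever it wants, with only framing overhead.
  ws.send(JSON.stringify({ type: "join", room: "lobby" }));
};

// The server can push data at any time; no prior request is needed.
ws.onmessage = (event: MessageEvent) => {
  const update = JSON.parse(event.data);
  console.log("server pushed:", update);
};

ws.onclose = () => console.log("connection closed");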

So why not use normal HTTP with streamed requests and responses?

In a comment on another answer you suggested just streaming the client request and response body asynchronously.

In fact, WebSockets are basically that. An attempt to open a WebSocket connection from the client looks like an HTTP request at first, but a special directive in the header (Upgrade: websocket) tells the server to start communicating in this asynchronous mode. The first drafts of the WebSocket protocol weren't much more than that, plus some handshaking to ensure that the server actually understands that the client wants to communicate asynchronously. But then it was realized that proxy servers would be confused by that, because they are used to the usual request/response model of HTTP. A potential attack scenario against proxy servers was discovered. To prevent this it was necessary to make WebSocket traffic look unlike any normal HTTP traffic. That's why the masking keys were introduced in the final version of the protocol.

Thieve answered 5/2, 2013 at 14:54 Comment(10)
>> The server can't send data unless the client has explicitly requested it. - The web browser should initiate the WebSockets connection... same as for XMLHttpRequestPennoncel
@Pennoncel The browser does initiate a websocket connection. But after it is established, both sides can send data whenever they want. That's not the case for XmlHttpRequest.Thieve
WHY is this not possible with HTTP?Pennoncel
@Pennoncel I added a new section to my answerThieve
@Philipp, games are a good example where WebSockets shine. However, it's not real-time data from the server where you get the biggest win. You can get almost as good server->client latency using HTTP streaming/long-held connections. And with long-held requests servers can effectively send whenever they have data because the client has already sent the request and the server is "holding the request" until it has data. The biggest win for WebSockets is with client->server latency (and therefore round-trip). The client being able to send whenever it wants without request overhead is the real key.Pittman
@Philipp, another note: there are other methods in addition to XMLHttpRequest and WebSockets for JavaScript to interact with the server including hidden iframes and long-poll script tags. See the Comet wikipedia page for more details: en.wikipedia.org/wiki/Comet_(programming)Pittman
@Pittman I think you meant - The server being able to send whenever it wants without request overhead is the real key.Monopolist
How real time does "real-time data" have to be to justify using WebSockets?Legalism
If a server sends data to a client whenever it wants instead of only when the client has explicitly requested it, then how does the server know when the client needs data to be sent? Or does the client subscribe to the server and then data is sent to the client whenever an event occurs?Caduceus
Thinking about this some more. I think that if a javascript event in the browser is satisfied then the server sends data to the client.Caduceus

A regular REST API uses HTTP as the underlying protocol for communication, which follows the request and response paradigm, meaning the communication involves the client requesting some data or resource from a server, and the server responding to that client. However, HTTP is a stateless protocol, so every request-response cycle ends up having to repeat the header and metadata information. This incurs additional latency in the case of frequently repeated request-response cycles.

[Image: HTTP request/response flow]

With WebSockets, although the communication still starts off with an initial HTTP handshake, it is then upgraded to follow the WebSockets protocol (provided both the server and the client are compliant with the protocol, as not all entities support the WebSockets protocol).

Now with WebSockets, it is possible to establish a full-duplex and persistent connection between the client and a server. This means that unlike a request and a response, the connection stays open for as long as the application is running (i.e. it's persistent), and since it is full-duplex, two-way simultaneous communication is possible, i.e. the server is now capable of initiating communication and 'pushing' data to the client when new data (that the client is interested in) becomes available.

[Image: WebSocket full-duplex connection]

The WebSockets protocol is stateful and allows you to implement the Publish-Subscribe (or Pub/Sub) messaging pattern, which is the primary concept used in real-time technologies, where you are able to get new updates in the form of server push without the client having to request them (refresh the page) repeatedly. Examples of such applications are Uber car location tracking, push notifications, stock market prices updating in real time, chat, multiplayer games, live online collaboration tools, etc.
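
As a rough sketch of the Pub/Sub idea over a raw WebSocket (the endpoint, channel name, and message shapes here are hypothetical conventions, not part of the protocol; real applications usually rely on a library or service for this):

// Sketch: a made-up subscribe/publish message convention over WebSocket.
const feed = new WebSocket("wss://example.com/realtime"); // hypothetical endpoint

feed.onopen = () => {
  // Subscribe once; afterwards the server pushes matching events unprompted.
  feed.send(JSON.stringify({ action: "subscribe", channel: "stock-prices" }));
};

feed.onmessage = (event: MessageEvent) => {
  const msg = JSON.parse(event.data);
  if (msg.channel === "stock-prices") {
    console.log(`${msg.symbol}: ${msg.price}`); // e.g. a live ticker update
  }
};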

You can check out a deep-dive article on WebSockets which explains the history of this protocol, how it came into being, what it's used for and how you can implement it yourself.

Here's a video from a presentation I did about WebSockets and how they are different from using the regular REST APIs: Standardisation and leveraging the exponential rise in data streaming

Gautea answered 25/4, 2019 at 15:0 Comment(2)
Many thanks for this clear explanation @Shrushtika.Paniculate
This is an excellent explanation of the differences between HTTP and websockets. So websockets are used when real-time data is needed? I think AJAX can be used to get real-time data so why use a websocket instead?Caduceus

For the TL;DR, here are my 2 cents and a simpler version of the answers to your questions:

  1. WebSockets provides these benefits over HTTP:

    • A persistent, stateful connection for the duration of the session
    • Low latency: near-real-time communication between server and client, with no overhead of re-establishing connections for each request as HTTP requires.
    • Full duplex: both server and client can send/receive simultaneously
  2. The WebSocket and HTTP protocols were designed to solve different problems, i.e. WebSocket was designed to improve bi-directional communication whereas HTTP was designed to be stateless and distributed, using a request/response model. Other than sharing the ports for legacy reasons (firewall/proxy penetration), there isn't much common ground to combine them into one protocol.

Jade answered 26/8, 2015 at 14:45 Comment(1)
Important that you mentioned the term stateful and stateless in your comparison (Y)Rodgerrodgers

Why is the WebSockets protocol better?

I don't think we can compare them side by side and say which one is better. That wouldn't be a fair comparison, simply because they solve two different problems. Their requirements are different. It would be like comparing apples to oranges.

HTTP is a request-response protocol. The client (browser) wants something, and the server gives it. That's it. If the data the client wants is big, the server might stream it to avoid unwanted buffering problems. Here the main requirement, or problem, is how clients make requests and how the server responds with the resources (hypertext) they request. That is where HTTP shines.

In HTTP, only the client makes requests. The server only responds.

WebSocket is not a request-response protocol where only the client can make requests. It is a socket (very similar to a TCP socket), meaning that once the connection is open, either side can send data until the underlying TCP connection is closed. It is just like a normal socket. The only difference from a TCP socket is that WebSocket can be used on the web. On the web, we have many restrictions on a normal socket. Most firewalls block ports other than 80 and 443, which HTTP uses. Proxies and intermediaries are problematic as well. So to make the protocol easier to deploy to existing infrastructure, WebSocket uses an HTTP handshake to upgrade. That means when the connection is opened for the first time, the client sends an HTTP request to tell the server, "This is not an HTTP request, please upgrade to the WebSocket protocol".

Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==
Sec-WebSocket-Protocol: chat, superchat
Sec-WebSocket-Version: 13

Once the server understands the request and upgrades to the WebSocket protocol, HTTP no longer applies.
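
For context, here is a small sketch (mine, not the answer author's) of the handshake math defined by RFC 6455, assuming a Node.js environment for the crypto module: the server concatenates the client's Sec-WebSocket-Key with a fixed GUID, SHA-1 hashes it, and returns the Base64 result as Sec-WebSocket-Accept to prove it understood the upgrade.

// Sketch: derive Sec-WebSocket-Accept from Sec-WebSocket-Key (RFC 6455).
import { createHash } from "node:crypto";

const WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"; // fixed by the RFC

function acceptValue(secWebSocketKey: string): string {
  return createHash("sha1").update(secWebSocketKey + WS_GUID).digest("base64");
}

// For the Sec-WebSocket-Key shown in the request above:
console.log(acceptValue("x3JJHMbDL1EzLkh9GBhXDw=="));
// -> HSmrc0sMlYUkAGmm5OPpG2HaGWk=  (sent back in the 101 Switching Protocols response)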

So my answer is: neither one is better than the other. They are completely different.

Why was it implemented instead of updating the HTTP protocol?

Well, we could put everything under the name HTTP as well. But should we? If they are two different things, I would prefer two different names. So did Hickson and Michael Carter.

Heartstrings answered 19/1, 2018 at 19:25 Comment(0)

The other answers do not seem to touch on a key aspect here: you make no mention of requiring a web browser as the client. Most of the limitations of plain HTTP noted above assume you are working with browser/JS implementations.

The HTTP protocol is fully capable of full-duplex communication; it is legal to have a client perform a POST with a chunked transfer encoding, and the server return a response with a chunked-encoding body. This reduces the header overhead to just the initial exchange.

So if all you're looking for is full-duplex, you control both client and server, and you are not interested in the extra framing/features of WebSockets, then I would argue that HTTP is a simpler approach with lower latency/CPU (although the latency would really only differ by microseconds or less for either).
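
As a rough sketch of that chunked POST / chunked response idea (assuming a Node.js client and a cooperating server at a hypothetical address; browser APIs would not let you stream the request body like this):

// Sketch: full-duplex over plain HTTP/1.1 with Node's http client.
// The client streams a chunked request body while concurrently reading a
// chunked response body. Host, port and path are hypothetical.
import * as http from "node:http";

const req = http.request({
  host: "localhost",
  port: 8080,
  path: "/duplex",
  method: "POST",
  headers: { "Transfer-Encoding": "chunked" },
});

// Read the (chunked) response as it arrives from the server.
req.on("response", (res) => {
  res.on("data", (chunk: Buffer) => console.log("server says:", chunk.toString()));
});

// Keep writing request chunks over time; each write becomes one chunk on the wire.
const timer = setInterval(() => req.write(`client message at ${Date.now()}\n`), 1000);

// End the request (sends the terminating zero-length chunk) after 10 seconds.
setTimeout(() => { clearInterval(timer); req.end(); }, 10_000);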

Debatable answered 12/7, 2017 at 18:51 Comment(0)
