HTTP header compression

HTTP headers aren't very efficient: even a minimal request and its response carry dozens of bytes of header overhead beyond what is strictly necessary.

Has there been any proposal to standardize a binary or compressed format for HTTP?

Is there a similar standard besides HTTP which is better suited to interactive mobile applications?

Excoriation answered 17/3, 2011 at 0:16 Comment(0)

As referenced in Stack Overflow - How to compress HTTP response headers?:

See Google's SPDY research project.

From the SPDY whitepaper:

The role of header compression

Header compression resulted in an ~88% reduction in the size of request headers and an ~85% reduction in the size of response headers. On the lower-bandwidth DSL link, in which the upload link is only 375 Kbps, request header compression in particular, led to significant page load time improvements for certain sites (i.e. those that issued large number of resource requests). We found a reduction of 45 - 1142 ms in page load time simply due to header compression.
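
To make those numbers concrete, here is a minimal sketch (mine, not from the whitepaper) that deflate-compresses a typical request header block with Python's zlib. SPDY's header compression additionally primes the compressor with a preset dictionary of common header names and values, which this sketch omits, so real savings are larger than what you'll see here.

    # Rough illustration of SPDY-style header compression using zlib (deflate).
    # The preset dictionary SPDY uses is omitted, so actual savings are larger.
    import zlib

    headers = (
        b"GET /index.html HTTP/1.1\r\n"
        b"Host: www.example.com\r\n"
        b"User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)\r\n"
        b"Accept: text/html,application/xhtml+xml\r\n"
        b"Accept-Encoding: gzip, deflate\r\n"
        b"Accept-Language: en-US,en;q=0.9\r\n"
        b"Cookie: session=abc123; theme=dark\r\n\r\n"
    )

    compressor = zlib.compressobj()
    compressed = compressor.compress(headers) + compressor.flush(zlib.Z_SYNC_FLUSH)

    print(f"plain:      {len(headers)} bytes")
    print(f"compressed: {len(compressed)} bytes "
          f"({100 - 100 * len(compressed) // len(headers)}% smaller)")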

Tailing answered 17/3, 2011 at 0:24 Comment(1)
SPDY looks pretty good and comprehensive, +1. I need to see if it's lightweight…Excoriation

HTTP/2.0, currently in a drafting phase, is an evolution of SPDY designed to address these issues.

Specifically, it replaces the request lines and headers with a compact binary format. It adds a server push facility and multiplexes streams over a single connection, avoiding the overhead of multiple connections and head-of-line blocking. There are various other goodies.
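
For a feel of what the compact binary format buys, here is a small sketch (my illustration, not part of this answer's implementation) of the fixed 9-octet HTTP/2 frame header: a 24-bit payload length, an 8-bit frame type, an 8-bit flags field, and a reserved bit plus 31-bit stream identifier. Compare that with a text request line and dozens of bytes of repeated header names.

    # Sketch of the 9-octet HTTP/2 frame header (RFC 7540, section 4.1):
    # 24-bit length, 8-bit type, 8-bit flags, 1 reserved bit + 31-bit stream id.
    import struct

    def pack_frame_header(length, frame_type, flags, stream_id):
        # The 24-bit length is split into a high byte and a 16-bit low part;
        # the reserved top bit of the stream identifier must be zero when sent.
        return struct.pack(">BHBBI",
                           (length >> 16) & 0xFF,
                           length & 0xFFFF,
                           frame_type,
                           flags,
                           stream_id & 0x7FFFFFFF)

    # A HEADERS frame (type 0x1) with END_STREAM|END_HEADERS flags (0x05)
    # on stream 1, carrying a 17-byte HPACK-encoded header block.
    header = pack_frame_header(length=17, frame_type=0x1, flags=0x05, stream_id=1)
    print(header.hex(), "-", len(header), "bytes of framing")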

I am working on a lightweight/cut-to-fit C++ implementation.

Excoriation answered 14/5, 2014 at 8:9 Comment(0)

This is an old question and I think it needs an update. Although I have no deep understanding of this topic myself, I stumbled upon this very good article, which explains the HPACK compression used in HTTP/2.

In short, it says:

  • SPDY was vulnerable to the CRIME attack, so no one really used its header compression
  • HTTP/2 supports a new dedicated header compression algorithm, called HPACK
  • HPACK is resilient to CRIME
  • HPACK uses three methods of compression: Static Dictionary, Dynamic Dictionary, Huffman Encoding (a simplified sketch follows below)
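
As a rough illustration of how those three pieces fit together, here is a toy encoder (my sketch, not the article's): it checks a small static dictionary first, falls back to a per-connection dynamic dictionary, and otherwise emits a literal. The real RFC 7541 format uses a fixed 61-entry static table, prefix-coded integers, and a canonical Huffman code for the literals, all of which are simplified away here.

    # Toy illustration of HPACK's ideas (RFC 7541), not the real wire format.
    STATIC_TABLE = [            # a few entries in the spirit of the static table
        (":method", "GET"),
        (":path", "/"),
        (":scheme", "https"),
        ("accept-encoding", "gzip, deflate"),
    ]

    def encode(headers, dynamic_table):
        """Encode headers as (kind, payload) tokens instead of raw bytes."""
        out = []
        for name, value in headers:
            entry = (name.lower(), value)
            if entry in STATIC_TABLE:                  # 1. static dictionary
                out.append(("static-index", STATIC_TABLE.index(entry)))
            elif entry in dynamic_table:               # 2. dynamic dictionary
                out.append(("dynamic-index", dynamic_table.index(entry)))
            else:                                      # 3. literal; real HPACK
                dynamic_table.append(entry)            #    Huffman-codes the string
                out.append(("literal", entry))
        return out

    dyn = []
    req = [(":method", "GET"), (":path", "/"), ("cookie", "session=abc123")]
    print(encode(req, dyn))   # the cookie goes out as a literal the first time
    print(encode(req, dyn))   # a repeat request hits the dynamic table instead
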
Syllabogram answered 20/12, 2016 at 15:2 Comment(1)
CRIME was much talked-about in the development of HPACK, but it's not so clear that users were actually concerned about it. It was a very difficult attack to mount and prevention is a matter of total website design, not only the protocol. Yeah, this question needs a new answer on HPACK.Excoriation

In short, I would say no and no. HTTP was invented, IMHO, to do away with proprietary server/client communication. Does that mean you can't still do proprietary server/client communication? No. Go ahead and write your own server and protocol, open up whatever port you want, and have a ball.

Frostwork answered 17/3, 2011 at 0:23 Comment(4)
Binary does not mean proprietary. There are RFCs for IP header compression, and I want the equivalent for the next level up the stack.Excoriation
I stand corrected then. If the RFCs do exist, they are not widely adopted, which makes them only a step above proprietary for current practical use.Frostwork
Even uncompressed headers are binary at lower levels (TCP, IP, and lower) than HTTP. The transition in the stack between text and binary is arbitrary. The motivation for standardization is to draw on others' experience and operate with existing tools, however few in number, so running with a large herd is not a factor.Excoriation
Operating within existing tools is the point I was making, not the large herd; poor word choice on my part. I meant that if it takes a custom Apache plugin, or an unsupported one, to accomplish compressed HTTP headers, it is sub-ideal.Frostwork
