Maximum on HTTP header values?

7

428

Is there an accepted maximum allowed size for HTTP headers? If so, what is it? If not, is this something that's server specific or is the accepted standard to allow headers of any size?

Cholula answered 26/3, 2009 at 15:15 Comment(0)
414

No, HTTP does not define any limit. However, most web servers do limit the size of the headers they accept. For example, the default limit in Apache is 8KB and in IIS it's 16KB. The server will return a 413 Entity Too Large error if the header size exceeds that limit.
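
If you run into the Apache limit, the directives that control it can be raised in the server configuration. A rough sketch, assuming Apache 2.x; the values below are illustrative, not recommendations:

# httpd.conf -- per-header-line and request-line limits (both default to 8190 bytes)
LimitRequestFieldSize 16380
LimitRequestLine 16380
# maximum number of request header fields (default 100)
LimitRequestFields 100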

Related question: How big can a user agent string get?

Raceme answered 26/3, 2009 at 15:20 Comment(6)
This answer states the maximum header size accepted by the server. But what is the maximum header size the web server (e.g. Apache) is capable of sending?Explanation
@Pacerier: It looks like that is 8175 bytes for Apache, but I'm still searching. Also do not expect useful error messages if you run into such a limit, from whatever backend it is.Rosati
@hakre: IIRC, 8K for the whole line, counting the whole header lines (headers' names, white-space and headers' payloads).Raceme
IIS probably allows up to 16K because of the requirements of the SPNEGO and Kerberos protocols, which are often used for "Windows authentication".Boutte
Be aware of firewall limits! We had a bunch of users suddenly unable to log in. Apparently on June 9 Fortiguard updated their IPS definitions for HTTP.Server.Authorization.Buffer.Overflow to limit the length of an Authorization header - see fortiguard.com/encyclopedia/ips/12351. We had to guess at what length our Authorization header could be due to lack of documentation; it ended up being okay at around 350 characters.Wizardry
I'm not sure it's always 413 — I see nodejs at least returns 431 Request Header Fields Too Large.Eternal
272

As vartec says above, the HTTP spec does not define a limit; however, many servers do by default. This means that, practically speaking, the lower limit is 8K. For most servers, this limit applies to the sum of the request line and ALL header fields (so keep your cookies short).

It's worth noting that nginx uses the system page size by default, which is 4K on most systems. You can check with this tiny program:

pagesize.c:

#include <unistd.h>
#include <stdio.h>

int main() {
    int pageSize = getpagesize();
    printf("Page size on your system = %i bytes\n", pageSize);
    return 0;
}

Compile with gcc -o pagesize pagesize.c, then run ./pagesize. My Ubuntu server from Linode dutifully informs me the answer is 4K.
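
If you control nginx and need to accept larger headers, the relevant buffers are tunable. A minimal sketch using standard nginx core directives; the sizes are just examples:

# nginx.conf, inside the http { } block
# buffer used for the request line and typical headers
client_header_buffer_size 4k;
# count and size of the fallback buffers used when a line doesn't fit
large_client_header_buffers 4 16k;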

Begrime answered 24/12, 2011 at 6:15 Comment(6)
For apache2, URL length is controlled by LimitRequestLine, and LimitRequestFieldSize applies to each HTTP header line individually... not the "sum of..."Frog
Cookies have a separate total size limit of 4093 bytes. #641438Unchaste
No need to write code to get the page size. From a terminal: getconf PAGESIZEFitz
This has probably changed since this answer was written, but the linked nginx page doesn't match the answer. The nginx page indicates that the default buffer size is 8k, and that the request can use 4 buffers by default (the buffer size itself limits the size of the request line and each individual header). So this suggests that nginx allows for somewhere between 16-32k (I'm assuming one line can't be split across two buffers, so the buffers may not be filled all the way up).Factitive
Adding the value for Apache 2.4, which remains the same (httpd.apache.org/docs/2.4/mod/core.html#limitrequestfieldsize): Apache 2.0, 2.2, 2.4: 8K.Simonne
How can I verify whether the updated LimitRequestFieldSize setting is actually in effect?Louettalough
22

Here are the limits of the most popular web servers:

  • Apache - 8K
  • Nginx - 4K-8K
  • IIS - 8K-16K
  • Tomcat - 8K – 48K
  • Node (<13) - 8K; (>13) - 16K (configurable; see the example below)
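
These defaults can usually be raised. For Node, for example, the limit is configurable per process with a command-line flag. A sketch; the size is arbitrary and server.js stands in for your own entry point:

# accept request headers up to 32 KB instead of the default
node --max-http-header-size=32768 server.js
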
Palpebrate answered 10/3, 2020 at 18:24 Comment(0)
9

HTTP does not place a predefined limit on the length of each header field or on the length of the header section as a whole, as described in Section 2.5. Various ad hoc limitations on individual header field length are found in practice, often depending on the specific field semantics.

HTTP header values are restricted by server implementations; the HTTP specification itself doesn't restrict header size.

A server that receives a request header field, or set of fields, larger than it wishes to process MUST respond with an appropriate 4xx (Client Error) status code. Ignoring such header fields would increase the server's vulnerability to request smuggling attacks (Section 9.5).

Most servers will return a 413 Entity Too Large or another appropriate 4xx error when this happens.
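
For illustration, an over-limit request is typically rejected as soon as the headers have been read, with a minimal response along these lines (the exact status line varies by server, e.g. 413 or 431):

HTTP/1.1 431 Request Header Fields Too Large
Connection: close
Content-Length: 0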

A client MAY discard or truncate received header fields that are larger than the client wishes to process if the field semantics are such that the dropped value(s) can be safely ignored without changing the message framing or response semantics.

Uncapped HTTP header size keeps the server exposed to attacks and can bring down its capacity to serve organic traffic.

Source

Emerald answered 17/7, 2016 at 4:36 Comment(0)
6

RFC 6265, dated 2011, prescribes specific limits on cookies.

6.1. Limits

Practical user agent implementations have limits on the number and size of cookies that they can store. General-use user agents SHOULD provide each of the following minimum capabilities:

  • At least 4096 bytes per cookie (as measured by the sum of the length of the cookie's name, value, and attributes).

  • At least 50 cookies per domain.

  • At least 3000 cookies total.

Servers SHOULD use as few and as small cookies as possible to avoid reaching these implementation limits and to minimize network bandwidth due to the Cookie header being included in every request.

Servers SHOULD gracefully degrade if the user agent fails to return one or more cookies in the Cookie header because the user agent might evict any cookie at any time on orders from the user.

The RFC describes what must be supported by a user agent or a server. It appears that, to tune your server to accept everything the browser is allowed to send, you would need to configure a limit of 4096 * 50 = 204,800 bytes (roughly 200 KB) per domain. As the text following the limits suggests, this is far in excess of what a typical web application needs. It would be useful to compare the memory and I/O consequences of your current limit against the RFC's upper limit.

Fonteyn answered 20/4, 2021 at 13:59 Comment(1)
Btw, MS IE11 and MS Edge (pre-Chromium builds) have 10 KB browser cookie limits, which I find far more relevant than the server settings, which appear to be for the most part configurable.Fonteyn
3

I also found that in some cases a 502/400 response can be caused by a large number of headers, regardless of their size. From the HAProxy docs:

tune.http.maxhdr
Sets the maximum number of headers in a request. When a request comes with a number of headers greater than this value (including the first line), it is rejected with a "400 Bad Request" status code. Similarly, too large responses are blocked with "502 Bad Gateway". The default value is 101, which is enough for all usages, considering that the widely deployed Apache server uses the same limit. It can be useful to push this limit further to temporarily allow a buggy application to work by the time it gets fixed. Keep in mind that each new header consumes 32bits of memory for each session, so don't push this limit too high.

https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#3.2-tune.http.maxhdr
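
The quoted directive belongs in HAProxy's global section. A minimal sketch of raising it; the value is just an example:

# haproxy.cfg
global
    # allow up to 200 header lines per message (default is 101)
    tune.http.maxhdr 200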

Aftermath answered 5/2, 2017 at 15:48 Comment(0)
1

If you are going to use a DDoS protection provider like Akamai, they have a maximum limit of 8K on the response header size. So essentially, try to keep your response header size below 8K.

Trespass answered 7/12, 2020 at 13:45 Comment(0)
