HTTP/2 ERR_CONNECTION_CLOSED (Too much overhead)

We are developing a project using Angular on the front end and Spring on the back end. Nothing new. But we have set up the backend to use HTTP/2, and from time to time we run into weird problems.

Today I started playing with "Network Log Export" from Chrome, and I found this interesting piece of information in the HTTP2_SESSION line of the log.

t=43659 [st=41415]    HTTP2_SESSION_RECV_GOAWAY
                  --> active_streams = 4
                  --> debug_data = "Connection [263], Too much overhead so the connection will be closed"
                  --> error_code = "11 (ENHANCE_YOUR_CALM)"
                  --> last_accepted_stream_id = 77
                  --> unclaimed_streams = 0
t=43659 [st=41415]    HTTP2_SESSION_CLOSE
                  --> description = "Connection closed"
                  --> net_error = -100 (ERR_CONNECTION_CLOSED)
t=43661 [st=41417]    HTTP2_SESSION_POOL_REMOVE_SESSION
t=43661 [st=41417] -HTTP2_SESSION

It looks like the root of the ERR_CONNECTION_CLOSED problem is that the server decides there is too much overhead from the same client and closes the connection.

The question is: can we tune the server to accept overhead up to a certain limit? How? I believe this is something we should be able to tune in Spring, Tomcat, or somewhere in that stack.

Cheers Ignacio

Gutshall answered 24/4, 2020 at 17:21 Comment(2)
I think there is a bug in Tomcat's HTTP/2 implementation and this is the source of the problem. See this comment: github.com/apache/tomcat/commit/…Gutshall
Did you ever manage to solve this/configure this correctly? I'm getting similar issues. Posted in the GitHub thread.Bilious

The overhead protection was put in place in response to a collection of CVEs reported against HTTP/2 in mid-2019. While Tomcat wasn't directly affected (the malicious input didn't trigger excessive load), we did take steps to block input that matched the malicious profile.

From your GitHub comment, you are seeing issues with POSTs. That strongly suggests that the client is sending the POST data in multiple small packets rather than a smaller number of larger packets. Some clients (e.g. Chrome) are known to do this occasionally due to the way they buffer data.

A number of the HTTP/2 DoS attacks could be summarized as sending more overhead than data. While Tomcat wasn't directly affected, we took the decision to monitor for clients operating in this way and to drop connections if any were found, on the grounds that the client was likely to be malicious.

Generally, data packets reduce the overhead count, non-data packets increase the overhead count and (potentially) malicious packets increase the overhead count significantly. The idea is that an established, generally well-behaved, connection should be able to survive the occasional 'suspect' packet but any more than that will quickly trigger the connection to be closed.

In terms of small POST packets, the key configuration settings are:

  • overheadCountFactor
  • overheadDataThreshold
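
These are attributes of Tomcat's Http2Protocol. Below is a minimal sketch of setting them on Spring Boot's embedded Tomcat; it assumes Spring Boot 2.x with server.http2.enabled=true so that Http2Protocol is already registered on the connector, and the values shown are illustrative, not recommendations:

    import org.apache.coyote.UpgradeProtocol;
    import org.apache.coyote.http2.Http2Protocol;
    import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
    import org.springframework.boot.web.server.WebServerFactoryCustomizer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class Http2OverheadConfig {

        @Bean
        public WebServerFactoryCustomizer<TomcatServletWebServerFactory> http2OverheadCustomizer() {
            return factory -> factory.addConnectorCustomizers(connector -> {
                // Find the HTTP/2 upgrade protocol registered on the connector and tune it.
                for (UpgradeProtocol protocol : connector.findUpgradeProtocols()) {
                    if (protocol instanceof Http2Protocol) {
                        Http2Protocol http2 = (Http2Protocol) protocol;
                        http2.setOverheadCountFactor(10);    // illustrative value
                        http2.setOverheadDataThreshold(512); // illustrative value
                    }
                }
            });
        }
    }

On a standalone Tomcat, the same two names are attributes of the <UpgradeProtocol> element in server.xml.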

The overhead count starts at -10. For every DATA frame received, it is reduced by 1. For every SETTINGS, PRIORITY, and PING frame, it is increased by overheadCountFactor. If the overhead count goes above 0, the connection is closed.
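
As a sketch of that accounting (illustrative Java, not Tomcat's actual implementation; the factor value is just an example):

    class OverheadCounter {

        enum FrameType { DATA, SETTINGS, PRIORITY, PING, OTHER }

        private long overheadCount = -10;       // starting value, per the description above
        private final int overheadCountFactor;  // e.g. 10

        OverheadCounter(int overheadCountFactor) {
            this.overheadCountFactor = overheadCountFactor;
        }

        // Returns true when the connection should be closed
        // (GOAWAY with ENHANCE_YOUR_CALM, as in the log above).
        boolean onFrame(FrameType type) {
            switch (type) {
                case DATA:
                    overheadCount -= 1;
                    break;
                case SETTINGS:
                case PRIORITY:
                case PING:
                    overheadCount += overheadCountFactor;
                    break;
                default:
                    break;
            }
            return overheadCount > 0;
        }
    }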

In addition, if the average size of a received non-final DATA frame and the previously received DATA frame (on that same stream) is less than overheadDataThreshold then the overhead count is increased by overheadDataThreshold/(average size of current and previous DATA frames). In this way, the smaller the DATA frame, the greater the increase in the overhead. A small number of small non-final DATA frames should be enough to trigger connection closure.
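
For example, with the default overheadDataThreshold of 1024: two consecutive non-final DATA frames of 100 and 156 bytes average 128 bytes, so the overhead count increases by 1024 / 128 = 8. Starting from -10, a handful of such frames is enough to push the count above 0 and close the connection.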

The averaging is there so that buffering such as that exhibited by Chrome does not trigger the overhead protection.

To diagnose this problem you need to look at the logs to see what size non-final DATA frames are being sent by the client. I suspect that will show a series of non-final DATA frames with size less than 1024 (the default for overheadDataThreshold).
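
If you are on Spring Boot (an assumption about your setup), one way to surface that detail is to raise the log level for Tomcat's HTTP/2 code, e.g. logging.level.org.apache.coyote.http2=DEBUG in application.properties; the debug output should include the size of each DATA frame received.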

To fix the issue my recommendation is to look at the client first. Why is it sending small non-final DATA frames and what can be done to stop it?

If you need an immediate mitigation then you can reduce overheadDataThreshold. The information you get on DATA frame sizes sent by the client should guide you as to what to set this to. It needs to be smaller than DATA frames being sent by the client. In extremis you can set overheadDataThreshold to zero to disable the protection.

Dunkirk answered 27/4, 2020 at 8:20 Comment(4)
In our case, a user goes through a dialog in which he sets different properties and provides information to complete a task. Once the information is collected, the browser sends a (small) POST to create an object (the task) on the server. So far everything goes well. Once that object has been successfully created, the front end can perform a variable number of actions, ranging from 3 to 15 depending on the previous answers in the form. The actions are of only 2 types: a POST to upload a file to the task, and a PUT to set a property on the task. The problem is that there can be up to 15 PUTsGutshall
The number of requests is not the issue. Lots of small PUTs/POSTs won't trigger the protection provided any non-final DATA frames are a suitable size. Lots of small PUTs/POSTs with small, final DATA frames are fine. Again, you need to look at the size of the non-final DATA frames the client is sending.Dunkirk
Why would PINGs lead to more overhead? Can you not have a long-duration session kept alive by PINGs so that you can transfer data in the same session, but after a long period of time?Sudor
Repeated PINGs can be a sign of malicious behaviour (CVE-2019-9512). Tomcat isn't vulnerable to that CVE, but it still considers PINGs to be overhead as a way of identifying and closing down such attacks. If this were an issue for users (no one has reported that it is) then there is always scope to modify the protection to only treat pings within x milliseconds of the previous ping as overhead.Dunkirk
