Do we still need a connection pool for microservices talking HTTP/2?

As HTTP/2 supports multiplexing, do we still need a pool of connections for microservice communication? If yes, what are the benefits of having such a pool?

Example: Service A => Service B

Both the above services have only one instance available.

Multiple connections may help overcome the OS buffer size limitation of a single connection (socket)? What else?

Henbane asked 4/5, 2019 at 18:33. Comments (4):
The first question is how the microservices use HTTP/2. Are they multiplexing requests? Pools will always be a good thing in my opinion and could be the separation layer needed. Pools could make use of multiplexing, but your app will still need a pool. - Allgood
Yes, the microservices are multiplexing requests. Besides the separation layer, are there any performance benefits? - Henbane
For sure, I guess; I have not used HTTP/2 myself yet. - Allgood
I can see there can be some benefits. For example, a single connection (socket) will have some limit on the socket buffer size (OS provided). I just want to know what the other benefits are. - Henbane

Yes, you still need a connection pool in a client contacting a microservice.

First, in general it's the server that controls the amount of multiplexing. A particular microservice server may decide that it cannot allow more than a small amount of multiplexing.
If a client wants to use that microservice under a higher load, it needs to be prepared to open multiple connections, and this is where the connection pool comes in handy. This is also useful to handle load spikes.
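
Here is a minimal sketch of such a pooled HTTP/2 client for the Service A => Service B example, assuming Jetty 9.4.x-style APIs; the destination host, port, path and pool size are made up for the illustration:

```java
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.ContentResponse;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;

public class ServiceAClient
{
    public static void main(String[] args) throws Exception
    {
        // High-level HttpClient running over the HTTP/2 transport.
        HTTP2Client http2Client = new HTTP2Client();
        HttpClient httpClient = new HttpClient(new HttpClientTransportOverHTTP2(http2Client));

        // Even with multiplexing, allow a small pool of connections per destination,
        // so that load beyond the server's multiplexing limit (and load spikes)
        // can spill over onto additional connections.
        httpClient.setMaxConnectionsPerDestination(4);

        httpClient.start();

        // Clear-text HTTP/2 request from Service A to Service B (hypothetical endpoint).
        ContentResponse response = httpClient.GET("http://service-b:8080/api/items");
        System.out.println(response.getStatus());

        httpClient.stop();
    }
}
```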

Second, HTTP/2 has flow control, and that may severely limit the data throughput on a single connection. If the flow control windows are small (the default defined by the HTTP/2 specification is 65535 bytes, which is typically very small for microservices), then client and server will spend a considerable amount of time exchanging WINDOW_UPDATE frames to enlarge the flow control windows, and this is detrimental to throughput.
To overcome this, you either need more connections (and again a client should be prepared for that), or you need larger flow control windows.
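
To make the impact concrete: with the default 65535-byte window and, say, a 50 ms round-trip time, a single stream cannot carry more than roughly 65535 bytes / 0.05 s ≈ 1.3 MB/s. Below is a sketch of enlarging the client-side receive windows, assuming the Jetty 9.4.x HTTP2Client configuration methods; the window sizes are illustrative, not recommendations:

```java
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;

public class TunedHttp2ClientFactory
{
    public static HttpClient create()
    {
        HTTP2Client http2Client = new HTTP2Client();

        // Enlarge the per-session and per-stream receive windows so that large
        // response payloads are not throttled by the 65535-byte default while
        // WINDOW_UPDATE frames catch up.
        http2Client.setInitialSessionRecvWindow(8 * 1024 * 1024);
        http2Client.setInitialStreamRecvWindow(4 * 1024 * 1024);

        return new HttpClient(new HttpClientTransportOverHTTP2(http2Client));
    }
}
```

Note that these settings only enlarge the windows the client advertises for data it receives (downloads); the windows governing uploads are advertised by the server's own configuration.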

Third, in the case of large HTTP/2 flow control windows, you may hit TCP congestion (and this is different from the socket buffer size) because the consumer is slower than the producer. It may be a slow server for a client upload (REST request with a large payload), or a slow client for a server download (REST response with a large payload).
Again, the solution to overcome TCP congestion is to open multiple connections.

Comparing HTTP/1.1 with HTTP/2 for the microservice use case, it's typical that the HTTP/1.1 connection pools are way larger (e.g. 10x-50x) than HTTP/2 connection pools, but you still want connection pools in HTTP/2 for the reasons above.

[Disclaimer: I'm the HTTP/2 implementer in Jetty.]
We had an initial implementation where the Jetty HttpClient was using the HTTP/2 transport with a hardcoded single connection per domain, because that's what HTTP/2 preached for browsers.
When exposed to real world use cases - especially microservices - we quickly realized how bad an idea that was, and switched back to using connection pooling for HTTP/2 (like HttpClient always did for HTTP/1.1).

Cabala answered 4/5, 2019 at 22:01. Comments (2):
Thanks a lot for such a detailed answer. Also, can you point me to any good examples of using the Jetty HTTP/2 client? And do you recommend running internal microservices with TLS? (As most of the servers out there with HTTP/2 have it by default.) It should increase the time to create a connection due to the added handshake and negotiation. Is it the same in practice, or is it not noticeable? - Henbane
Jetty HTTP/2 client examples: eclipse.org/jetty/documentation/9.4.x/…. Running internal microservices over TLS depends on how much confidentiality you want. Without TLS you're faster, but have no confidentiality. The Jetty server can understand clear-text HTTP/2 without problems (and that's how it's typically deployed in such cases). Creating an HTTP/2 connection is costly in general (with or without TLS), but the cost is typically amortized by using the connection for many requests. - Cabala
