Communication between REST Microservices: Latency

The problem I'm trying to solve is latency in microservice-to-microservice communication on the backend. Scenario: the client makes a request to service A, which calls service B, which in turn calls service C; the response then travels from C back to B, from B to A, and from A back to the client.

Request: Client -> A -> B -> C
Response: C -> B -> A -> Client

The microservices expose a REST interface accessed over HTTP, and each new HTTP connection opened between services to submit a request adds overhead. I'm looking for ways to reduce this overhead without bringing another transport mechanism into the mix (i.e. sticking to HTTP and REST as much as possible). Some answers suggest Apache Thrift, but I'd like to avoid that. Message queues are another possible solution, but I'd like to avoid those too, to keep operational complexity down.

Does anyone have experience with microservice communication using HTTP connection pooling or HTTP/2? The system is deployed on AWS, where service groups are fronted by an ELB.

Celin answered 17/4, 2016 at 17:59 Comment(2)
What contributes to the latency? You think it's establishing TCP connections? – Reproval
Yes. Mainly just the new connection overhead. – Celin

HTTP/1.0's working mode was to open a new connection for each request and close the connection after each response.

Using HTTP/1.0 from remote clients and from clients inside microservices (e.g. those in A that call B, and those in B that call C) should be avoided, because the cost of opening a connection for each request can account for most of the latency.

HTTP/1.1's working mode is to open a connection and then leave it open until either peer explicitly requests to close it. This allows the connection to be reused for multiple requests, and it's a big win because it reduces latency, uses fewer resources, and is in general more efficient.

Fortunately nowadays both remote clients (e.g. browsers) and clients inside microservices support HTTP/1.1 well, or even HTTP/2.

Browsers certainly have connection pooling, and any decent HTTP client that you may use inside your microservices has connection pooling as well.

Remote clients and microservices clients should be using at least HTTP/1.1 with connection pooling.
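
As a concrete illustration, below is a minimal sketch of a pooled HTTP/1.1 client using Java's built-in java.net.http.HttpClient (available since Java 11); the service hostname and path are made up. The key is to create one long-lived client and share it, so repeated requests reuse pooled connections instead of paying the TCP handshake each time:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class PooledClientExample {
        // One shared, long-lived client: it maintains a pool of persistent
        // HTTP/1.1 connections and reuses them across requests.
        private static final HttpClient CLIENT = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_1_1)
                .build();

        public static void main(String[] args) throws Exception {
            HttpRequest request = HttpRequest
                    .newBuilder(URI.create("http://service-b.internal:8080/api/resource"))
                    .GET()
                    .build();
            // Two sequential calls: the first opens a TCP connection, the
            // second reuses it from the pool, skipping the handshake.
            for (int i = 0; i < 2; i++) {
                HttpResponse<String> response =
                        CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println(response.statusCode());
            }
        }
    }

The anti-pattern to avoid is constructing a new client (and therefore a new connection) per request.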

Regarding HTTP/2, while I am a big promoter of HTTP/2 for browser-to-server usage, for REST microservice calls inside data centers I would benchmark the parameters you are interested in for both HTTP/1.1 and HTTP/2, and then see how they fare. I expect HTTP/2 to be on par with HTTP/1.1 in most cases, if not slightly better.

The way I would do it using HTTP/2 (disclaimer: I'm a Jetty committer) would be to offload TLS from remote clients using HAProxy, and then use clear-text HTTP/2 between microservices A, B and C using Jetty's HttpClient with the HTTP/2 transport.

I'm not sure whether AWS ELB supports HTTP/2 at the time of this writing, but if it does not, please be sure to drop a message to Amazon asking them to support it (many others already did). As I said, alternatively you can use HAProxy.
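
For reference, a minimal HAProxy sketch of that setup might look like the following; the names, addresses, certificate path, and the clear-text HTTP/2 (h2c) backend support (which requires a reasonably recent HAProxy build) are assumptions, not a tested configuration:

    # Terminate TLS from remote clients; negotiate HTTP/2 via ALPN.
    frontend public_tls
        mode http
        bind *:443 ssl crt /etc/haproxy/certs/example.pem alpn h2,http/1.1
        default_backend service_a

    # Forward clear-text HTTP/2 (h2c) to the first microservice tier.
    backend service_a
        mode http
        server a1 10.0.0.10:8080 proto h2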

For communication between microservices, you can use HTTP/2 no matter what protocol is used by remote clients. With Jetty's HttpClient you can very easily switch between the HTTP/1.1 and HTTP/2 transports, which gives you maximum flexibility.
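
As a sketch of what that switch looks like (assuming a Jetty 10.x-style API; the service URL is invented), the transport is chosen at construction time:

    import org.eclipse.jetty.client.HttpClient;
    import org.eclipse.jetty.client.api.ContentResponse;
    import org.eclipse.jetty.http2.client.HTTP2Client;
    import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;

    public class Http2TransportExample {
        public static void main(String[] args) throws Exception {
            // Clear-text HTTP/2 (h2c) transport for service-to-service calls;
            // using "new HttpClient()" instead falls back to HTTP/1.1.
            HTTP2Client http2Client = new HTTP2Client();
            HttpClient httpClient =
                    new HttpClient(new HttpClientTransportOverHTTP2(http2Client));
            httpClient.start();

            // Connections are pooled, and HTTP/2 multiplexes many concurrent
            // requests over a single TCP connection.
            ContentResponse response =
                    httpClient.GET("http://service-b.internal:8080/api/resource");
            System.out.println(response.getStatus());

            httpClient.stop();
        }
    }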

Sadiras answered 18/4, 2016 at 8:32 Comment(0)

If latency is really an issue for you, then you should probably not be using service calls between your components. Rather, you should minimize the number of times control passes to an out-of-band resource and make the calls in-process, which is much faster.

However, in most cases the overheads incurred by the service "wrappers" (channel construction, serialisation, marshalling, etc.) are negligible, and still well within the latency tolerances adequate for the business process being supported.

So you should ask yourself:

  1. Is latency really an issue for you with respect to the business process? In my experience only engineers care about latency. Your business customers do not.
  2. If latency is an issue, then can the latency definitively be attributed to the cost of making the service calls? Could there be another reason the calls are taking a long time?
  3. If it is the services, then you should look at consuming the service code as an assembly, rather than out-of-band (see the sketch below).
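
To make point 3 concrete, here is a hypothetical Java sketch (all names invented) of the in-process alternative: the downstream service's logic is linked in as a library and invoked directly, with no connection, serialisation, or network hop at all:

    public class InProcessExample {
        public static void main(String[] args) {
            // Instead of issuing an HTTP request to a pricing service, link
            // its code as a dependency and call it directly.
            PriceCalculator calculator = new PriceCalculator();
            System.out.println(calculator.applyDiscount(1000));
        }
    }

    // Stand-in for the business logic that would otherwise live behind a
    // separate service's REST endpoint.
    class PriceCalculator {
        long applyDiscount(long basePrice) {
            return Math.round(basePrice * 0.9);
        }
    }
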
Shriek answered 18/4, 2016 at 8:46 Comment(2)
Oh, right. Customers never care about latency. I should have known that I shouldn't care that every time I perform even the tiniest action on Jira it takes several seconds to complete. Drag and drop a story to schedule? 5 seconds. Edit a title? 5 seconds. You get the point. High latency is one of the most painful user experiences on the face of the planet. – Cw
@Cw while it's nice to know that you personally care quite a lot about latency, the fact remains that it is probably the least valued non-functional requirement in most, if not all, large projects I've worked on, and any concerns around latency are usually ignored until the last possible minute. Unless a lack of speed or responsiveness results in a negative financial impact, business people simply don't care about it. I'm not expounding my own opinion about the importance of latency here, I'm just telling how it is in the real world. Don't hate the player... ;) – Shriek

For the benefit of others running into this problem: apart from using HTTP/2, SSL/TLS offloading, and co-location, consider using caching where you can. This not only improves performance but also reduces dependency on downstream services. Also consider data formats that perform well.
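
For instance, here is a minimal TTL-cache sketch in Java (the names and the 30-second TTL are arbitrary) showing how a service might memoize responses from a downstream service so that repeated calls within the TTL never leave the process:

    import java.time.Duration;
    import java.time.Instant;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    public class ResponseCache {
        private record Entry(String value, Instant expiresAt) {}

        private final Map<String, Entry> cache = new ConcurrentHashMap<>();
        private final Duration ttl;

        public ResponseCache(Duration ttl) {
            this.ttl = ttl;
        }

        public String get(String key, Function<String, String> loader) {
            Entry e = cache.get(key);
            if (e != null && Instant.now().isBefore(e.expiresAt())) {
                return e.value(); // cache hit: the downstream call is skipped
            }
            String fresh = loader.apply(key); // miss: e.g. the HTTP call downstream
            cache.put(key, new Entry(fresh, Instant.now().plus(ttl)));
            return fresh;
        }

        public static void main(String[] args) {
            ResponseCache cache = new ResponseCache(Duration.ofSeconds(30));
            Function<String, String> callDownstream = k -> "response-for-" + k; // stand-in
            System.out.println(cache.get("order-42", callDownstream)); // miss: loader runs
            System.out.println(cache.get("order-42", callDownstream)); // hit: served locally
        }
    }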

Lachrymatory answered 10/5, 2018 at 15:2 Comment(0)

Latency in microservice-to-microservice communication is an issue for low-latency applications; however, the number of calls can be minimized with a hybrid between microservices and a monolith.

The emerging C++ microservices framework CppMicroServices is well suited to low-latency applications: https://github.com/CppMicroServices/CppMicroServices

Digiovanni answered 6/3, 2019 at 3:39 Comment(0)
