In akka-http, you can:
- Set `akka.http.server.max-connections`, which caps the number of concurrently open connections; once the limit is reached, further clients get connection timeouts.
- Set `akka.http.server.pipelining-limit`, which caps how many requests a single connection may have outstanding at once; exceeding it means clients get socket timeouts. (A sketch of setting both is just below.)
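For reference, a minimal sketch of setting both knobs; the values here are arbitrary examples, not the library defaults, and the same block could instead live in `application.conf`:

```scala
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

// Example values only; tune to your workload.
val tuned = ConfigFactory.parseString(
  """
    |akka.http.server {
    |  max-connections  = 512
    |  pipelining-limit = 4
    |}
  """.stripMargin).withFallback(ConfigFactory.load())

implicit val system: ActorSystem = ActorSystem("tuned-server", tuned)
```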
Both settings are a form of backpressure from the HTTP server to the client, but they are very low level and only indirectly related to your server's actual performance.
What seems better would be to apply backpressure at the HTTP level, based on the request rate as seen by the server, probably by returning 429 Too Many Requests. Request rate is arguably an indirect measure of performance too, but it seems closer than the number of connections.
This seems like a fairly reasonable thing, but I'm having trouble finding any existing patterns. This is the closest reference I can find: https://github.com/akka/akka-http/issues/411
From what I can tell, the best approach would be to grab the `Flow` you turn your `Route` into and insert it into a graph that has a global measure of request rate (or maybe a single processing queue), plus a short-circuit that bypasses the `Route` entirely (by returning 429 or whatever).
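For illustration, here is a rough sketch of that short-circuit idea, though applied at the `Route` level rather than by rewiring the materialized `Flow` (a graph using `Partition`/`Merge` stages would be the flow-level equivalent). The fixed-window counter, the limit value, and the `rateLimited` wrapper are placeholders I made up, not an established akka-http pattern:

```scala
import java.util.concurrent.atomic.AtomicLong

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.StatusCodes
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route
import akka.stream.ActorMaterializer

object RateLimitedServer extends App {
  implicit val system: ActorSystem = ActorSystem("rate-limited")
  implicit val materializer: ActorMaterializer = ActorMaterializer()
  import system.dispatcher

  // Placeholder policy: a coarse, fixed one-second window counted across all requests.
  val maxRequestsPerSecond = 100L
  private val windowStart = new AtomicLong(System.currentTimeMillis())
  private val windowCount = new AtomicLong(0L)

  // True when the current window has already seen more than the allowed number of requests.
  private def overLimit(): Boolean = {
    val now = System.currentTimeMillis()
    val start = windowStart.get()
    if (now - start >= 1000L && windowStart.compareAndSet(start, now))
      windowCount.set(0L)
    windowCount.incrementAndGet() > maxRequestsPerSecond
  }

  // Wrap an inner Route: answer 429 immediately when over the limit, otherwise run the inner Route.
  def rateLimited(inner: Route): Route = { ctx =>
    if (overLimit()) ctx.complete(StatusCodes.TooManyRequests)
    else inner(ctx)
  }

  val route: Route = path("hello") {
    get {
      complete("hello")
    }
  }

  // The wrapped Route is converted to a Flow[HttpRequest, HttpResponse, _] by the usual implicit conversion.
  Http().bindAndHandle(rateLimited(route), "localhost", 8080)
}
```

Because the wrapper runs before routing, rejected requests never touch the inner `Route` at all. A real version would presumably want a smoother measure than a fixed window (e.g. a token bucket) and probably a `Retry-After` header on the 429.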
Are there better ideas?