What is the optimal --max-requests setting for Starman?
I am running a Dancer (v1.3202) app with Starman (v0.4014) and nginx as a front-end proxy. I am noticing a huge latency spike in my load balancer every couple of hours and wonder whether it's the workers reaching their request limit and restarting: average latency jumps from 30ms to 1000ms or more. I checked MongoDB and there are no long-running queries. What does --max-requests actually do regarding the workers, and what happens when a worker reaches this limit?

Bevin answered 28/10, 2016 at 20:27 Comment(2)
FWIW, I strongly recommend uWSGI over Starman. Deeper in every way and more reliable in my experience. – Carolus
My experience is different: I have a Starman setup which has been happily serving more than 100,000,000 requests per day for many months without problems. – Martella
What does the --max-requests setting do?

From starman --help:

--max-requests Number of the requests to process per one worker process. Defaults to 1000.

What this means is that each worker will exit after it processes that many requests. The master process will then launch a brand new worker for each worker that exits, maintaining the number of workers according to the --workers setting.
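To make the recycling behavior concrete, here is a minimal conceptual sketch in Python (not Starman's actual Perl implementation; the function names are invented). Each worker process exits after handling its quota, and the master forks a fresh worker in its place:

```python
def simulate(max_requests, total_requests):
    """Count how many worker processes serve a request stream when each
    worker exits after max_requests requests (the --max-requests behavior)."""
    workers_used = 0
    served = 0
    while served < total_requests:
        workers_used += 1                          # master forks a fresh worker
        batch = min(max_requests, total_requests - served)
        served += batch                            # worker handles its quota, then exits
    return workers_used

# 5000 requests with --max-requests 1000 cycle through 5 worker processes
print(simulate(max_requests=1000, total_requests=5000))  # → 5
```

The pool size itself never shrinks: from the client's point of view there are always --workers processes accepting connections; only the process IDs change over time.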

Using --max-requests is usually a good thing, especially if your app isn't the only thing running on the box, because Perl (notoriously) does not return memory it has used to the operating system. Recycling worker processes is how Starman gives memory back for other processes to use. If your app actually leaks memory, recycling also keeps it performing well, as opposed to the app eventually consuming all available memory and being killed by the OS.

What is the optimal value for the --max-requests setting?

You should leave it at its default value of 1,000 unless you have a good reason to change it. If your app is the only thing running on the box and you're sure that it's not leaky, you could try using a higher value to recycle workers less often. If you know your app is leaky, you may want to use a lower value to recycle workers more often. However, generally this setting should actually have very little impact on performance.
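For reference, a typical invocation might look like the following (the port, worker count, and app path are placeholders for your own setup):

```shell
# Keep 10 workers, each recycled after the default 1000 requests;
# lower --max-requests for a leaky app, raise it for a clean one.
starman --listen :5000 --workers 10 --max-requests 1000 app.psgi
```
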

That said, recycling workers could be responsible for sporadic slow requests if your workers cache things in memory, because each new worker needs to spend some time rebuilding those caches. There are many other possible explanations, though, so you'll need to do some profiling to find out what's actually causing the specific slowness you're seeing.
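The cold-cache effect can be illustrated with a toy Python sketch (the class, keys, and "rebuild" cost here are all invented for illustration): a recycled worker starts with an empty in-process cache and pays the rebuild cost again on its first requests.

```python
class CachingWorker:
    """Toy worker whose in-process cache dies with the process."""
    def __init__(self):
        self.cache = {}
        self.rebuilds = 0    # counts the "slow" cache-miss paths taken

    def handle(self, key):
        if key not in self.cache:       # fresh worker: cache miss
            self.rebuilds += 1          # stands in for an expensive rebuild
            self.cache[key] = len(key)  # placeholder for the computed result
        return self.cache[key]

warm = CachingWorker()
for _ in range(100):
    warm.handle("config")               # warm worker: only 1 rebuild total

recycled = CachingWorker()              # recycled worker starts cold
recycled.handle("config")               # first request pays the rebuild again
print(warm.rebuilds, recycled.rebuilds)  # → 1 1
```

With a real per-worker cache, that first-request rebuild is one plausible source of the periodic latency spikes described in the question.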

Mulberry answered 9/1, 2017 at 1:59 Comment(1)
What about keep-alive connections? For me it seems that multiple HTTP requests over one keep-alive connection just count as one request. Can you confirm this observation? – Martella

© 2022 - 2024 — McMap. All rights reserved.