CherryPy and concurrency
I'm using CherryPy to serve a Python application through WSGI.

I tried benchmarking it, but it seems CherryPy can handle only about 10 req/sec, no matter what I do.

I built a simple app with a 3-second pause in order to determine accurately what is going on, and I can confirm that the 10 req/sec limit has nothing to do with the resources used by the Python script.


Any ideas?

Diabolo answered 21/4, 2010 at 17:20 Comment(1)
Hey, just a friendly note: if diatoid's answer is correct, do you want to mark it as accepted? :) – Yardarm
By default, CherryPy's built-in HTTP server uses a thread pool with 10 threads. If you are still using the defaults, try increasing this in your config file:

[global]
server.thread_pool = 30
Triclinium answered 21/4, 2010 at 18:17 Comment(5)
@diatoid: thank you very much! By the way, I thought CherryPy was built to support the highest possible number of req/sec. – Diabolo
CherryPy tries to set sane defaults, which produce around 1200 req/sec max on my laptop. But those benchmark requests don't take 3 seconds each. The reality for your site should be somewhere in the middle; if your real requests take 3 seconds each, you're probably doing something wrong ;) – Doorpost
The requests take 3 seconds because they are waiting for information to be collected from elsewhere. While waiting, I am not using any resources whatsoever! So why should my machine sit idle when it could serve more requests at the same time? – Diabolo
You are using resources; for any webapp, you're keeping the socket open, which uses a file descriptor and an ephemeral port. In CherryPy, each child connection is bound to a thread for its lifetime, so you're also using one of the worker threads (and its 1MB stack size and any heap objects needed to handle each request). So you have a choice: either 1) increase the number of threads, 2) redesign your app to return 202 Accepted and poll for the answer (which would free up the socket, too), or 3) use an async webserver (and then fight latency issues instead of throughput ones). – Doorpost
CherryPy increases the number of worker threads as needed; thread_pool = 10 is only the initial number of threads. – Africah
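The "return 202 Accepted and poll" option from the comments above can be sketched with stdlib plumbing. The `JobStore` class and its method names are illustrative, not part of any CherryPy API; in a real app, `submit()` would back a handler that sets a 202 status and returns the job id, and `poll()` would back a status endpoint.

```python
# Illustrative sketch of the 202-Accepted-then-poll pattern: accept the
# slow job, return a job id immediately, and let the client poll for the
# result instead of holding a worker thread for the full wait.
import threading
import uuid


class JobStore:
    def __init__(self):
        self._results = {}
        self._lock = threading.Lock()

    def submit(self, func, *args):
        """Start func in a background thread; return a job id right away."""
        job_id = str(uuid.uuid4())

        def run():
            result = func(*args)
            with self._lock:
                self._results[job_id] = result

        threading.Thread(target=run, daemon=True).start()
        return job_id

    def poll(self, job_id):
        """Return the result if finished, else None (client retries later)."""
        with self._lock:
            return self._results.get(job_id)
```

A handler would call `store.submit(fetch_remote_data)` and hand the id back to the client, which then hits the status endpoint until `poll()` returns a result.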
This was extremely confusing for me too. The documentation says that CherryPy will automatically scale its thread pool based on observed load, but in my experience it does not. If you have tasks that can take a while yet use hardly any CPU in the meantime, you will need to estimate a thread_pool size based on your expected load and target response time.

For instance, if the average request takes 1.5 seconds to process and you want to handle 50 requests per second, then you will need 50 × 1.5 = 75 threads in your thread_pool.
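The sizing rule above is just Little's law: concurrent requests = arrival rate × average latency. A tiny helper (the function name and `headroom` parameter are my own, for illustration):

```python
# Minimum thread_pool size from target throughput and average latency,
# per Little's law; headroom > 1.0 adds a safety margin for bursts.
import math


def thread_pool_size(req_per_sec, avg_latency_sec, headroom=1.0):
    """Smallest whole number of worker threads that meets the target."""
    return math.ceil(req_per_sec * avg_latency_sec * headroom)


thread_pool_size(50, 1.5)  # -> 75, matching the example above
```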

In my case, I delegated the heavy lifting to other processes via the multiprocessing module. This leaves the main CherryPy process and its threads mostly idle. However, each CherryPy thread still blocks while awaiting output from its delegated worker process, so the server needs enough threads in the thread_pool to keep some available for new requests.
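A minimal sketch of that delegation, assuming the handler blocks synchronously on the process pool (function names here are placeholders, not the answerer's actual code):

```python
# The request-handling thread hands the heavy work to a process pool and
# then blocks on the result. The thread is idle-but-occupied for the whole
# wait, which is why thread_pool still has to be sized for concurrency.
from multiprocessing import Pool


def heavy_work(n):
    # CPU-bound work runs in a separate process, outside the GIL
    return sum(i * i for i in range(n))


def handle_request(pool, n):
    """What a CherryPy handler would do: delegate, then wait."""
    async_result = pool.apply_async(heavy_work, (n,))
    return async_result.get()  # the CherryPy worker thread blocks here
```

Usage would look like `with Pool(processes=4) as pool: handle_request(pool, 10_000)`; note that `get()` is exactly the blocking wait the answer describes.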

My initial assumption was that the thread_pool would not need to be larger than the multiprocessing pool's worker count, but this also turned out to be mistaken: somehow, CherryPy threads remain blocked even when there is spare capacity in the multiprocessing pool.

Another mistaken assumption is that the blocking and poor performance have something to do with the Python GIL. They do not: in my case I was already farming the work out via multiprocessing and still needed a thread_pool sized from the average request time and the desired requests-per-second target. Raising the thread_pool size addressed the issue, even though it looks like an incorrect fix.

Simple fix for me:

cherrypy.config.update({
    'server.thread_pool': 100
})
Oldworld answered 27/4, 2020 at 22:32 Comment(0)
Your client needs to actually read the server's response. Otherwise, the socket and its thread will stay open until the connection times out and is garbage collected.

Use a client that behaves correctly, and you'll see that your server behaves too.
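A minimal well-behaved client along these lines, using only the stdlib (the URL in the usage note is a placeholder for whatever the CherryPy app serves):

```python
# Open the connection, drain the entire body, and close the socket, so the
# server-side worker thread and file descriptor are released promptly.
from urllib.request import urlopen


def fetch(url, timeout=10):
    """Fetch url, reading the whole response body before closing."""
    with urlopen(url, timeout=timeout) as resp:  # context manager closes the socket
        return resp.read()                       # drain the full body
```

For example, `fetch("http://localhost:8080/")` against a local CherryPy instance reads and releases each connection instead of leaving it to time out.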

Hahnert answered 1/6, 2016 at 14:32 Comment(1)
Assuming that clients are well-behaved will leave you open to a "slow loris" attack. – Ichor

© 2022 - 2024 — McMap. All rights reserved.