gunicorn: how to resolve "WORKER TIMEOUT"?

I have set up Gunicorn with three workers, 30 worker connections, and the eventlet worker class. It is set up behind Nginx. After every few requests, I see this in the logs:

[ERROR] gunicorn.error: WORKER TIMEOUT (pid:23475)
None
[INFO] gunicorn.error: Booting worker with pid: 23514

Why is this happening? How can I figure out what's going wrong?

Baliol answered 1/6, 2012 at 18:3 Comment(2)
Were you able to solve the problem? Please share your thoughts, as I'm also stuck on it. Gunicorn==19.3.1 and gevent==1.0.1Valorievalorization
Found the solution: I increased the timeout to a very large value, and then I was able to see a stack traceValorievalorization

We had the same problem using Django + Nginx + Gunicorn. Following the Gunicorn documentation, we configured graceful-timeout, which made almost no difference.

After some testing, we found the solution: the parameter to configure is timeout (not graceful-timeout). It works like a clock.

So, do:

1) Open the Gunicorn configuration file.

2) Set TIMEOUT to whatever you need; the value is in seconds.

NUM_WORKERS=3
TIMEOUT=120

exec gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--timeout $TIMEOUT \
--log-level=debug \
--bind=127.0.0.1:9000 \
--pid=$PIDFILE
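
The same settings can also live in a Gunicorn config file instead of the launch script. A minimal sketch (the WSGI module path and pidfile location are assumptions):

# gunicorn_conf.py -- run with: gunicorn -c gunicorn_conf.py myproject.wsgi:application
workers = 3
timeout = 120  # seconds; a worker silent for longer than this is killed and restarted
loglevel = "debug"
bind = "127.0.0.1:9000"
pidfile = "/tmp/gunicorn.pid"  # hypothetical path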
Punish answered 19/6, 2014 at 11:52 Comment(4)
Thanks, this is the right answer. Then, in order to save resources with many concurrent connections: pip install gevent, then worker_class gevent in your config file or -k gevent on the command line.Trifacial
I'm running with supervisor, so I added it to conf.d/app.conf: command=/opt/env_vars/run_with_env.sh /path/to/environment_variables /path/to/gunicorn --timeout 200 --workers 3 --bind unix:/path/to/socket server.wsgi:applicationRadbourne
Addendum: the timeout unit is seconds. Command line: -t INT or --timeout INT (default: 30 seconds). Workers silent for more than this many seconds are killed and restarted. Details here: docs.gunicorn.org/en/stable/settings.html#settingsAlga
graceful_timeout is the number of seconds the workers have to gracefully shut down after receiving a restart signal, so it does not have any bearing on how long gunicorn will wait for a worker to serve a request.Twigg

On Google Cloud, just add --timeout 90 to the entrypoint in app.yaml:

entrypoint: gunicorn -b :$PORT main:app --timeout 90
Clearway answered 5/1, 2018 at 6:26 Comment(2)
Why a 90-second timeout?Feller
Just pick a large number, like 900. Not too large: if there's a real problem, you don't want to wait indefinitely.Cairns

Run Gunicorn with --log-level debug.

It should give you an app stack trace.

Guilford answered 18/8, 2012 at 16:21 Comment(3)
I'd love to get a stack trace, but none of these work here, using gunicorn 19.4.5. Debug output is displayed, so I guess the flag was recognized, but there is no stack trace on timeout.Person
Same here, no stack trace with the flag enabledDurazzo
You could override the worker_abort function in a config file to log a traceback.Torticollis
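
A minimal sketch of the worker_abort hook mentioned in the last comment (the log format is just illustrative). Gunicorn calls worker_abort when a worker receives SIGABRT, which is what the master sends on a timeout:

# gunicorn_conf.py
import sys
import traceback

def worker_abort(worker):
    # Called when the master aborts a timed-out worker (SIGABRT).
    worker.log.warning("worker timed out, dumping stack traces")
    for thread_id, frame in sys._current_frames().items():
        stack = "".join(traceback.format_stack(frame))
        worker.log.warning("thread %s:\n%s", thread_id, stack)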

The official Microsoft Azure documentation for running Flask apps on Azure App Service (Linux) uses a timeout of 600:

gunicorn --bind=0.0.0.0 --timeout 600 application:app

https://learn.microsoft.com/en-us/azure/app-service/configure-language-python#flask-app

Parrish answered 21/1, 2021 at 11:22 Comment(1)
Seems a little excessive, but I do appreciate that it is official documentation, so I will go with it.Fein

Is this endpoint taking too much time?

Maybe you are using Flask without asynchronous support, so every request blocks until it finishes. To add async support, use the gevent worker. With gevent, each call is handled in its own greenlet, so your app can receive more requests simultaneously.

pip install gevent
gunicorn .... --worker-class gevent
Hellhole answered 23/4, 2020 at 13:20 Comment(0)

WORKER TIMEOUT means your application did not respond to the request within the configured amount of time. You can set this with Gunicorn's timeout setting. Some applications need more time to respond than others.

Another thing that may affect this is the choice of worker type:

The default synchronous workers assume that your application is resource-bound in terms of CPU and network bandwidth. Generally this means that your application shouldn’t do anything that takes an undefined amount of time. An example of something that takes an undefined amount of time is a request to the internet. At some point the external network will fail in such a way that clients will pile up on your servers. So, in this sense, any web application which makes outgoing requests to APIs will benefit from an asynchronous worker.

When I hit the same problem (I was trying to deploy my application using Docker Swarm), I tried increasing the timeout and using another worker class. Both failed.

Then I suddenly realised that the resource limits I had set for the service in my compose file were too low. That is what slowed down the application in my case:

deploy:
  replicas: 5
  resources:
    limits:
      cpus: "0.1"
      memory: 50M
  restart_policy:
    condition: on-failure
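
Raising those limits resolved the timeouts; a sketch with purely illustrative values:

deploy:
  replicas: 5
  resources:
    limits:
      cpus: "1.0"   # illustrative; was "0.1"
      memory: 512M  # illustrative; was 50M
  restart_policy:
    condition: on-failure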

So I suggest you first check what is slowing down your application.

Laughlin answered 9/8, 2018 at 9:31 Comment(0)

Could it be this? http://docs.gunicorn.org/en/latest/settings.html#timeout

Other possibilities are that your response is taking too long or is stuck waiting.

Molina answered 6/8, 2013 at 3:34 Comment(0)

This worked for me:

gunicorn app:app -b :8080 --timeout 120 --workers=3 --threads=3 --worker-connections=1000

If you have eventlet, add:

--worker-class=eventlet

If you have gevent, add:

--worker-class=gevent
Aldin answered 8/6, 2020 at 1:1 Comment(1)
Fun fact: --worker-class and -k are equivalent, as are --timeout and -t.Burnett

I had the same problem in Docker.

In Docker I keep a trained LightGBM model plus Flask serving requests. As the HTTP server I used gunicorn 19.9.0. When I ran my code locally on my Mac laptop everything worked perfectly, but when I ran the app in Docker my POST JSON requests froze for some time, and then the gunicorn worker failed with a [CRITICAL] WORKER TIMEOUT error.

I tried tons of different approaches, but the only one that solved my issue was adding worker_class = "gthread".

Here is my complete config:

import multiprocessing

workers = multiprocessing.cpu_count() * 2 + 1
accesslog = "-" # STDOUT
access_log_format = '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(q)s" "%(D)s"'
bind = "0.0.0.0:5000"
keepalive = 120
timeout = 120
worker_class = "gthread"
threads = 3
Systematic answered 9/5, 2019 at 19:24 Comment(0)

You need to use a different worker class, an asynchronous one like gevent or tornado. See these excerpts for more explanation. First explanation:

You may also want to install Eventlet or Gevent if you expect that your application code may need to pause for extended periods of time during request processing

Second one:

The default synchronous workers assume that your application is resource bound in terms of CPU and network bandwidth. Generally this means that your application shouldn't do anything that takes an undefined amount of time. An example of something that takes an undefined amount of time is a request to the internet. At some point the external network will fail in such a way that clients will pile up on your servers.
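
For example, a sketch (the application module and connection count are assumptions):

pip install gevent
gunicorn myapp:app -k gevent --worker-connections 1000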

Parthenos answered 7/5, 2014 at 10:28 Comment(2)
How would I actually make use of such a different worker class?Pittel
@FrederickNord It can be set via the -k / --worker-class option, see docs.gunicorn.org/en/stable/settings.html#worker-classAsleyaslope

I had a very similar problem. I also tried using "runserver" to see if I could find anything, but all I got was the message Killed.

So I thought it could be a resource problem; I went ahead and gave the instance more RAM, and it worked.

Sperrylite answered 18/9, 2015 at 11:6 Comment(2)
I was seeing this problem even with gevent and the timeout set correctly; running out of memory was the problem.Stella
Yes. The timeout was because it took too long to talk to the worker with the server out of memory. I watched docker stats, fixed the code that was using up the memory, and was fine.Penner

If you are using GCP, you have to set the number of workers per instance type.

Link to GCP best practices https://cloud.google.com/appengine/docs/standard/python3/runtime
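
A sketch of what that looks like in app.yaml (the module path and worker count are illustrative; pick the worker count to match your instance class, as the linked guide describes):

entrypoint: gunicorn -b :$PORT -w 2 main:app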

Enjoyable answered 11/6, 2019 at 1:5 Comment(0)

In my case, I came across this issue when sending larger (10 MB) files to my server. My development server (app.run()) received them with no problem, but gunicorn could not handle them.

For people who run into the same problem I did: my solution was to send the file in chunks, like this:


    import requests  # needed for the POST requests below

    def upload_to_server():
        # location, url, server_file_name, name and key come from the
        # poster's surrounding code.
        upload_file_path = location

        def read_in_chunks(file_object, chunk_size=524288):
            """Lazy generator that reads a file piece by piece.
            Default chunk size: 512 KB."""
            while True:
                data = file_object.read(chunk_size)
                if not data:
                    break
                yield data

        with open(upload_file_path, 'rb') as f:
            for piece in read_in_chunks(f):
                r = requests.post(
                    url + '/api/set-doc/stream' + '/' + server_file_name,
                    files={name: piece},
                    headers={'key': key, 'allow_all': 'true'})

My Flask server:


    import os
    from flask import request, flash
    from markupsafe import escape
    from werkzeug.utils import secure_filename

    # `app`, `key` and `allowed_file` come from the rest of the poster's application.
    @app.route('/api/set-doc/stream/<name>', methods=['GET', 'POST'])
    def api_set_file_streamed(name):
        folder = escape(name)  # secure_filename(escape(name))
        if 'key' in request.headers:
            if request.headers['key'] != key:
                return ''
        else:
            return ''
        for fn in request.files:
            file = request.files[fn]
            if file.filename == '':
                print('no file name')
                flash('No selected file')
                return 'fail'
            if file and allowed_file(file.filename):
                file_dir_path = os.path.join(app.config['UPLOAD_FOLDER'], folder)
                if not os.path.exists(file_dir_path):
                    os.makedirs(file_dir_path)
                file_path = os.path.join(file_dir_path, secure_filename(file.filename))
                # append each incoming chunk to the file
                with open(file_path, 'ab') as f:
                    f.write(file.read())
                return 'success'
        return ''

Vara answered 7/2, 2023 at 21:1 Comment(0)

timeout is a key parameter for this problem.

However, it didn't suit my case.

I found there was no gunicorn timeout error when I set workers=1.

When I looked through my code, I found some blocking socket calls (socket.send & socket.recv) in the server init.

socket.recv blocked my code, and that's why it always timed out when workers>1.

Hope this gives some ideas to people who have the same problem as me.
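
A sketch of the kind of fix (the address and sizes are hypothetical): give the blocking socket its own timeout, so a stuck recv fails fast instead of holding the worker past Gunicorn's timeout:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5)  # seconds; fail fast instead of blocking forever
sock.connect(("127.0.0.1", 9999))  # hypothetical upstream service
try:
    data = sock.recv(4096)
except socket.timeout:
    data = b""  # handle the timeout instead of hanging the worker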

Trolly answered 28/11, 2019 at 8:53 Comment(0)

Check that your workers are not being killed by a health check. A long request may block the health check request, and the worker gets killed by your platform because the platform thinks the worker is unresponsive.

E.g. if you have a 25-second-long request, and a liveness check is configured to hit a different endpoint in the same service every 10 seconds, time out in 1 second, and retry 3 times, this gives roughly 10 + 1*3 ≈ 13 seconds before a kill, so you can see why it would trigger sometimes but not always.

The solution, if this is your case, is to reconfigure your liveness check (or whatever health check mechanism your platform uses) so it can wait until your typical request finishes, or to allow more threads; anything that ensures the health check is not blocked for long enough to trigger a worker kill.

You can see that adding more workers may help with (or hide) the problem.
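
For instance, a minimal sketch assuming Kubernetes-style liveness probes (all values illustrative):

livenessProbe:
  httpGet:
    path: /healthz   # hypothetical health endpoint
    port: 8000
  periodSeconds: 10
  timeoutSeconds: 5     # raised so a slow request cannot starve the probe
  failureThreshold: 6   # worst case before a kill is now well beyond a 25 s request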

Hardej answered 10/10, 2022 at 16:1 Comment(0)

For me, the solution was to add --timeout 90 to my entrypoint, but it wasn't working at first because I had TWO entrypoints defined: one in app.yaml and another in my Dockerfile. I deleted the unused entrypoint and added --timeout 90 to the other.

Lastditch answered 20/11, 2019 at 4:17 Comment(0)

For me, it was because I forgot to set up a firewall rule on the database server for my Django app.

Homocentric answered 14/9, 2020 at 7:4 Comment(0)

Frank's answer pointed me in the right direction. I have a DigitalOcean droplet accessing a managed DigitalOcean PostgreSQL database. All I needed to do was add my droplet to the database's "Trusted Sources".

(Click on the database in the DO console, then click on Settings. Edit Trusted Sources and select the droplet name; click in the editable area and it will be suggested to you.)

Tolman answered 15/10, 2020 at 12:19 Comment(0)

The easiest way that worked for me is to create a new config.py file in the same folder as your app.py and put the timeout and all your other desired configuration inside it:

timeout = 999

Then just run the server pointing at this configuration file:

gunicorn -c config.py --bind 0.0.0.0:5000 wsgi:app

Note that for this command to work, you also need a wsgi.py in the same directory containing the following:

from myproject import app

if __name__ == "__main__":
    app.run()

Cheers!

Brandie answered 15/10, 2022 at 23:25 Comment(0)

Apart from the gunicorn timeout settings already suggested, since you are running nginx in front, you can check whether these two parameters help: proxy_connect_timeout and proxy_read_timeout, which default to 60 seconds. You can set them in your nginx configuration file like this:

proxy_connect_timeout 120s;
proxy_read_timeout 120s;
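
In context, these go inside the server or location block that proxies to Gunicorn (a sketch; the upstream address is an assumption):

location / {
    proxy_pass http://127.0.0.1:8000;
    proxy_connect_timeout 120s;
    proxy_read_timeout 120s;
}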
Quantum answered 14/1, 2023 at 17:32 Comment(0)

In case you have changed the name of the Django project, you should also go to

cd /etc/systemd/system/

then

sudo nano gunicorn.service

then verify that, at the end of the bind line, the application name has been changed to the new application name.
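
The line to check looks something like this (paths and names are illustrative):

ExecStart=/home/user/venv/bin/gunicorn --workers 3 \
    --bind unix:/run/gunicorn.sock newproject.wsgi:application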

Deach answered 15/9, 2022 at 9:13 Comment(1)
This answer is extremely bad; it has no value. You are just saying "open an editor and verify that your config is fine". Also, you should rename "gunicorn.service" to "yourprojectname.service".Cushitic
