Error: upstream prematurely closed connection while reading response header from upstream [uWSGI/Django/NGINX]

I am currently ALWAYS getting a 502 on a query my users are doing, which usually returns 872 rows and takes 2.07 seconds to run in MySQL. It does, however, return a LOT of information (each row contains a lot of data). Any ideas?

Running the Django (tastypie Rest API), Nginx and uWSGI stack.

Server Config with NGINX

# the upstream component nginx needs to connect to
upstream django {
    server unix:///srv/www/poka/app/poka/nginx/poka.sock; # for a file socket
}

# configuration of the server
server {
    # the port your site will be served on
    listen  443;


    # the domain name it will serve for
    server_name xxxx; # substitute your machine's IP address or FQDN
    charset     utf-8;

    # max upload size
    client_max_body_size 750M;   # adjust to taste

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass  django;
        include     /srv/www/poka/app/poka/nginx/uwsgi_params; # the uwsgi_params file you installed
    }
}

UWSGI config

# process-related settings
# master
master          = true
# maximum number of worker processes
processes   = 2
# the socket (use the full path to be safe)
socket          = /srv/www/poka/app/poka/nginx/poka.sock
# ... with appropriate permissions - may be needed
chmod-socket    = 666
# clear environment on exit
vacuum          = true

pidfile = /tmp/project-master.pid # create a pidfile
harakiri = 120 # respawn processes taking more than 120 seconds
max-requests = 5000 # respawn processes after serving 5000 requests
daemonize = /var/log/uwsgi/poka.log # background the process & log
log-maxsize = 10000000
#http://uwsgi-docs.readthedocs.org/en/latest/Options.html#post-buffering
post-buffering=1
logto = /var/log/uwsgi/poka.log # background the process & log
Vacla answered 28/2, 2014 at 23:25 Comment(4)
The obvious answer would be to split the data or increase the timeout. Does that not work? - Debase
Where can I increase that timeout? Increasing the harakiri doesn't help... I will need to actually split the data in the near future, but I don't have the time right now... - Vacla
I assume 2.07 is seconds? Anything in the logs? Run the uWSGI HTTP server directly to see whether uWSGI or nginx is choking? - Debase
Yeah, in seconds... but the thing is that 872 rows is nothing for now; it might grow to 10,000 in the near future. And the user will eventually need to get those 10,000 rows one way or another onto his iPad. Should I start looking into sending the data in batches? - Vacla
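
For the batching the comments mention, a minimal sketch of capping page size with tastypie's built-in limit/offset pagination (assuming the endpoint is a plain tastypie ModelResource; the model name and limit values are illustrative):

# api/resources.py -- hypothetical resource, names are illustrative
from tastypie.resources import ModelResource
from myapp.models import Record  # assumed model


class RecordResource(ModelResource):
    class Meta:
        queryset = Record.objects.all()
        resource_name = 'record'
        limit = 100        # rows per page returned by default
        max_limit = 1000   # hard cap even if the client asks for more

Clients can then page through the data with the standard ?limit=...&offset=... query parameters that tastypie exposes, instead of pulling every row in one response.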

This is unlikely to be an nginx config issue.

It's almost certainly the case that the backend is actually crashing (or just terminating the connection) rather than returning a malformed response. In other words, the error message is telling you what the problem is, but you're looking in the wrong place to solve it.

You don't give enough information to allow us to figure out what the exact issue is, but if I had to guess:

which usually returns 872 rows and takes 2.07 seconds to run in MySQL. It is, however, returning a LOT of information.

It's either timing out somewhere or running out of memory.
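
If it is memory, one thing worth trying on the Django side is to keep the large result set out of worker memory and stream it instead. A minimal sketch (model and field names are illustrative, not taken from the question):

# views.py -- hypothetical streaming variant of the heavy endpoint
import json
from django.http import StreamingHttpResponse
from myapp.models import Record  # assumed model


def export_records(request):
    # .values() avoids building full model instances,
    # .iterator() streams rows from the DB instead of caching them all
    rows = Record.objects.values('id', 'name', 'payload').iterator()

    def generate():
        first = True
        yield '['
        for row in rows:
            yield ('' if first else ',') + json.dumps(row, default=str)
            first = False
        yield ']'

    return StreamingHttpResponse(generate(), content_type='application/json')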

Seismology answered 1/3, 2014 at 16:27 Comment(4)
872 rows is nothing for now; it might grow to 10,000 in the near future. But the user will eventually need to get those 10,000 rows one way or another onto his iPad. Should I start looking into sending the data in batches? - Vacla
No, you should find out what is causing the backend to terminate the request. - Seismology
Upvoting for the out-of-memory mention. It's hard to realize in the short term that memory could be behind such a flood of errors. - Tertian
This was the problem for me; I was running a complex request on a virtual server with 1 GB of RAM. Doubling the server's RAM fixed the issue for me. - Headed

I had the same issue; what fixed it for me was adding my domain to settings.py, e.g.:

ALLOWED_HOSTS = ['.mydomain.com', '127.0.0.1', 'localhost']

By the same issue, I mean I couldn't even load the page: nginx would return a 502 without serving any page at all, so I never even got to a point where I could make the application crash.

And the nginx log contained:

Error: upstream prematurely closed connection while reading response header from upstream
Cubbyhole answered 25/9, 2015 at 21:2 Comment(0)

In your @django location block you can try adding some proxy read and connect timeout directives, e.g.:

location @django {
   proxy_read_timeout 300;
   proxy_connect_timeout 300;
   proxy_redirect off;

   # proxy header definitions
   ...
   proxy_pass http://django;
}
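
Note that the question's config uses uwsgi_pass rather than proxy_pass; in that case the analogous knobs are the uwsgi_* timeouts. A sketch against the question's location block (the values are illustrative):

location / {
    include     /srv/www/poka/app/poka/nginx/uwsgi_params;
    uwsgi_pass  django;

    # give slow backend responses more time before nginx gives up
    uwsgi_connect_timeout 60s;
    uwsgi_send_timeout    300s;
    uwsgi_read_timeout    300s;
}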
Druse answered 27/8, 2014 at 6:3 Comment(1)
The top 3 lines helped me solve a 502 ws error. - Overhear

Sometimes it may be a permissions problem. Check the permissions on the project directory.
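
For example, assuming the paths from the question and that nginx/uWSGI run as www-data (both assumptions), checking and fixing ownership might look like this:

# inspect permissions along the socket path
ls -l /srv/www/poka/app/poka/nginx/poka.sock
namei -l /srv/www/poka/app/poka/nginx/poka.sock

# make the project tree readable by the assumed service user
sudo chown -R www-data:www-data /srv/www/poka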

Aspic answered 21/5, 2018 at 15:44 Comment(0)

It might be a uWSGI configuration issue instead of an Nginx one. I saw that you have uwsgi processes = 2 and harakiri = 120; have you tried changing those, as well as the other fields there, one by one?

I had the same issue, but it wasn't my NGINX configuration; it was my uWSGI processes causing timeout errors when I posted JSON from the client side to the server. I had processes set to 5; I changed it to 1 and that solved the issue. For my application, I only needed 1 process running at a time.

Here is the working uWSGI autoboot ini file that solved the timeout issue, and thus the 502 gateway issue (upstream closed prematurely).

autoboot.ini

[uwsgi]
socket          = /tmp/app.sock

master          = true

chmod-socket    = 660
module          = app.wsgi
chdir           = /home/app

close-on-exec = true # Allow linux shell via uWSGI

processes = 1
threads = 2
vacuum = true

die-on-term = true

Hope it helps.
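
If the file is not launched by an init system, it can be pointed at directly (assuming it sits in the current directory):

uwsgi --ini autoboot.ini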

Philemol answered 24/1, 2019 at 8:38 Comment(0)

Try this uwsgi config (uwsgi.ini):

[uwsgi]
 master          = true 
 socket          = /home/ubuntu/uwsgi.sock
 chmod-socket    = 666
 chdir           = /home/ubuntu/project
 wsgi-file       = /home/ubuntu/project/project/wsgi.py
 virtualenv      = /home/ubuntu/virtual
 vacuum          = true
 enable-threads  = true
 daemonize       = /home/ubuntu/logs/uwsgi.log

And run uwsgi --ini uwsgi.ini

and update the nginx config to connect to the socket that was created:

server {
    listen 80;
    server_name www.domain.com;

    location /static {
        alias /home/ubuntu/project/static;
    }

    location /media {
        alias /home/ubuntu/project/media;
    }

    location / {
        uwsgi_pass unix:///home/ubuntu/uwsgi.sock;
        include uwsgi_params;
    }
}

Eirena answered 15/2, 2022 at 10:57 Comment(0)

I was also having this issue, with intermittent and apparently random "502 upstream prematurely closed..." errors. I was able to trace it to the cheaper subsystem: "502" entries in the nginx log matched the cheaping of a recently spawned worker, shown in the uWSGI log as:

uwsgi.log

[19/Feb/2024:05:38:11 +0100] "GET ..."
[busyness] 10s average busyness is at 62%, will spawn 1 new worker(s)
Respawned uWSGI worker 3 (new pid: 230651)
[busyness] 10s average busyness is at 5%, cheap one of 3 running workers
worker 2 killed successfully (pid: 229735)
uWSGI worker 2 cheaped.
[19/Feb/2024:05:40:48 +0100] "GET ..."

nginx.log

2024/02/19 05:40:31 [error] 220739#220739: *528254 upstream prematurely closed connection while reading response header from upstream,

There was a spike at around 05:37 that caused uWSGI to spawn extra workers. But at some point around 05:40 the spike was gone, a worker was cheaped (i.e. killed), and it somehow took the request with it.

I'm currently trying to tweak the cheaper parameters: higher cheaper-overload values (~30) to avoid cheaping workers too soon, and enabling cheaper-busyness-verbose to get more information about the cheaper's decisions.

The point is that uWSGI is killing and respawning workers all the time, whether via the cheaper subsystem or via harakiri. Sometimes they take the request with them to the grave.
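
For reference, a sketch of the knobs involved, in uWSGI ini form (the values are illustrative; the option names come from the busyness cheaper algorithm mentioned above):

[uwsgi]
# ... existing settings ...
# use the busyness algorithm for scaling workers up and down
cheaper-algo = busyness
# hard maximum number of workers
processes = 8
# minimum number of workers to keep alive
cheaper = 2
# workers started on boot
cheaper-initial = 2
# length (seconds) of the busyness window; higher values cheap workers less eagerly
cheaper-overload = 30
# log the cheaper's decisions
cheaper-busyness-verbose = true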

Hannigan answered 19/2, 2024 at 8:31 Comment(0)
