NGINX: upstream timed out (110: Connection timed out) while reading response header from upstream

I have Puma running as the upstream app server and Riak as my background db cluster. When I send a request that map-reduces a chunk of data for about 25K users and returns it from Riak to the app, I get an error in the Nginx log:

upstream timed out (110: Connection timed out) while reading response header from upstream

If I query the upstream directly, without the nginx proxy, the same request returns the required data.

The timeout occurs only once the nginx proxy is put in front.

**nginx.conf**

http {
    keepalive_timeout 10m;
    proxy_connect_timeout  600s;
    proxy_send_timeout  600s;
    proxy_read_timeout  600s;
    fastcgi_send_timeout 600s;
    fastcgi_read_timeout 600s;
    include /etc/nginx/sites-enabled/*.conf;
}

**virtual host conf**

upstream ss_api {
  server 127.0.0.1:3000 max_fails=0  fail_timeout=600;
}

server {
  listen 81;
  server_name xxxxx.com; # change to match your URL

  location / {
    # match the name of upstream directive which is defined above
    proxy_pass http://ss_api; 
    proxy_set_header  Host $http_host;
    proxy_set_header  X-Real-IP  $remote_addr;
    proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_cache cloud;
    proxy_cache_valid  200 302  60m;
    proxy_cache_valid  404      1m;
    proxy_cache_bypass $http_authorization;
    proxy_cache_bypass http://ss_api/account/;
    add_header X-Cache-Status $upstream_cache_status;
  }
}

Nginx has a bunch of timeout directives, and I don't know if I'm missing something important. Any help would be highly appreciated.

Gemmulation answered 11/9, 2013 at 12:1 Comment(1)
It should only time out after 600s, shouldn't it? You can test the timing by setting up a TCP server on 127.0.0.1:3000 that just accepts connections and does nothing with them, and seeing how long nginx waits. It should be 600s...Gahnite

This happens because your upstream takes too long to answer the request, so NGINX concludes that the upstream has already failed to process it and responds with an error. Just include and increase proxy_read_timeout in the location config block. The same thing happened to me, and I used a one-hour timeout for an internal app at work:

proxy_read_timeout 3600;

With this, NGINX will wait up to an hour (3600s) for its upstream to return something.
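For context, a minimal sketch of where the directive goes; the port and server name below are placeholders, not from the original setup:

```nginx
# Hypothetical reverse-proxy server block; adjust names and ports to your setup.
server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        # Give the upstream up to an hour to start/continue sending the response.
        proxy_read_timeout 3600;
    }
}
```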

EDIT:

@IanSmith I totally agree with you. This is more of a temporary solution. The long-term solution is to work with the upstream app's stakeholders to improve response times, or to change how the whole feature works: enqueue a background job and then poll its status as you mentioned, or use WebSockets to push the status back as soon as the job finishes.

Sometimes the upstream is a legacy or external application with little to no support, so this is also a last resort for those cases.

Unruly answered 13/9, 2017 at 19:51 Comment(4)
Note that having proxy_read_timeout in the http section might not help. I have the proxy_pass directive in the location section and only there the proxy_read_timeout setting made a difference. (nginx 1.16.0)Norris
Seems to work in http/server/location for me...maybe things have changed :)Gahnite
You can check the directive docs here; just realize that you can define it inside an http, server or location block.Mannino
You should think really hard before putting your read timeout up to an hour. This is a big red flag for anyone in IT security. No app architecture should be designed like this. If an App developer thinks this is necessary tell them they're wrong and that they need to make their API asynchronous where you get a token to check on the status for a request.Leaf

You should generally refrain from increasing the timeouts; I doubt your backend server's response time is the issue here in any case.

I got around this issue by clearing the connection keep-alive flag and specifying http version as per the answer here: https://mcmap.net/q/117882/-nginx-reverse-proxy-causing-504-gateway-timeout

server {
    location / {
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   Host      $http_host;

        # these two lines here
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        proxy_pass http://localhost:5000;
    }
}

Unfortunately I can't explain why this works, and I didn't manage to decipher it from the docs mentioned in the linked answer either, so if anyone has an explanation I'd be very interested to hear it.
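One plausible reading: these two directives are exactly what the nginx documentation for the upstream "keepalive" directive requires in order to reuse connections to an upstream, since keepalive needs HTTP/1.1 and no Connection header forwarded from the client. A sketch combining them (the upstream name and port are placeholders):

```nginx
# Sketch following the ngx_http_upstream_module "keepalive" documentation;
# the upstream name and port are placeholders.
upstream app_backend {
    server 127.0.0.1:5000;
    keepalive 16;                        # pool of idle connections kept open to the upstream
}

server {
    location / {
        proxy_http_version 1.1;          # keepalive to the upstream requires HTTP/1.1
        proxy_set_header Connection "";  # don't forward the client's Connection header
        proxy_pass http://app_backend;
    }
}
```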

Vallievalliere answered 13/4, 2016 at 5:17 Comment(14)
Why would you not adjust the proxy_read_timeout if you know that the proxy (if even for a specific URL) required more processing time?Datura
Hi! I don't remember the exact issue any more but I think it wasn't related to the actual time for the url but rather that the timeout wasn't being processed correctly without these settings.Vallievalliere
@magicbacon this was years ago so I barely remember the case any more but, you changed the $http_host right? I'm guessing that wouldn't fly for https. Might be additional settings are required for proxying https requests as well.Vallievalliere
+1 ... this looks like an awkward hack but actually this is from the official docs :) nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive I have a slightly different problem "upstream prematurely closed connection while reading response header from upstream" when I use the upstream directive with keepalive and using these two lines seems to fix it.Lung
I love how people say "you should always do this".. "or never do that".. everyones case is different.. as is my case I have people uploading large (300MB) files from all over the world.. on different internet connections.. people with fast internet can send at high speed.. people in developing countries have slow internet and maybe it takes 2 hours to complete.. server must wait, server must not say "your internet is too slow so im terminating the connection".. So in this case i increase the timeout.Champerty
@TimDavis indeed, there are no absolutes. I've since this taken to using "it depends" to answer most things. I'd still say that generally, it's good to avoid increasing the timeouts though :) In this case, especially, it didn't have to do with the server response time, if you check the linked ticket. But maybe you have more insight into the actual problem here with keepalives, as I write, I'm not sure why the above worked and maybe there is something else I was missing?Vallievalliere
@Vallievalliere HTTP/1.1 allows chunked transfer encoding this may allow the connection to behave better as its expecting the request in chunks. You are right you should not mess with stuff unless you know its implications exactlyChamperty
@Vallievalliere Also I ran into issues with proxy_set_header Connection "" ... for me it should be proxy_set_header Upgrade $http_upgrade;Champerty
@TimDavis I see, maybe that's better. I guess it might depend on the traffic, like in this post saying it's required for WebSockets: serverlab.ca/tutorials/linux/web-servers-linux/…Vallievalliere
I'll make a case for this answer, ironically, having insisted our CTO on raising a timeout value just two days ago lol. Although you should, indeed, adjust timeout values in some cases, strongly agree that you should do so after trying not to. Especially the value worked for you in the past. You don't wanna give more time to a possible resource draining bug.Alkmaar
Like @Karussell, this also fixed a related upstream issue for me, in my case, my nginx server was throwing "upstream prematurely closed connection while reading upstream" errors randomly on some requests, with no rhyme or reason about why this was happening. Trying to find answers about this is hard because everyone says that "your webapp is causing the issue" (it isn't since directly connecting works, and because it happens randomly). However after scavenging the internet for answers, I couldn't find any talking about what's the correlation of doing this and why that fixed the issue.Arvell
I thought a bit more, trying to understand why this fixes the issue: In my case, I was using Ktor with the Netty engine, so I think this workaround is masking an underlying issue on the engine itself. mrpowergamerbr.com/us/blog/…Arvell
@Arvell I came across this many, many years ago and looking at the stats for this question, it looks like it is still an issue. I completely agree though, this shouldn't be more than a workaround, and it feels like an issue/bug in nginx. I am surprised it is still a thing.Vallievalliere
This solution works for me. I need to have nginx work as reverse proxy for supervisor http server, and the log page requires nginx to keep the connection alive. You can see the explanation of this solution in this answerWingo

First, figure out which upstream is slow by consulting the nginx error log file, then adjust the read timeout accordingly. In my case it was FastCGI:

2017/09/27 13:34:03 [error] 16559#16559: *14381 upstream timed out (110: Connection timed out) while reading response header from upstream, client:xxxxxxxxxxxxxxxxxxxxxxxxx", upstream: "fastcgi://unix:/var/run/php/php5.6-fpm.sock", host: "xxxxxxxxxxxxxxx", referrer: "xxxxxxxxxxxxxxxxxxxx"

So I had to adjust the fastcgi_read_timeout in my server configuration:

 location ~ \.php$ {
     fastcgi_read_timeout 240;
     ...
 }

See: original post

Luing answered 27/9, 2017 at 14:19 Comment(2)
Here's a way to add timing info the failure to see how much you "need" to increase it to: #18627969 FWIWGahnite
Legend.. I've spent hours getting frustrated at this.Sladen

In your case a little optimization of the proxy may help, or you can use the "# time out settings" below:

location / {
  # time out settings
  proxy_connect_timeout 159s;
  proxy_send_timeout   600;
  proxy_read_timeout   600;
  proxy_buffer_size    64k;
  proxy_buffers     16 32k;
  proxy_busy_buffers_size 64k;
  proxy_temp_file_write_size 64k;
  proxy_pass_header Set-Cookie;
  proxy_redirect     off;
  proxy_hide_header  Vary;
  proxy_set_header   Accept-Encoding '';
  proxy_ignore_headers Cache-Control Expires;
  proxy_set_header   Referer $http_referer;
  proxy_set_header   Host   $host;
  proxy_set_header   Cookie $http_cookie;
  proxy_set_header   X-Real-IP  $remote_addr;
  proxy_set_header X-Forwarded-Host $host;
  proxy_set_header X-Forwarded-Server $host;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
Godgiven answered 19/12, 2013 at 10:13 Comment(4)
For me it does make a difference having these settings in the location section. Having them in the http section did not help (possibly because I also had proxy_pass in the location section).Norris
What exactly are you optimizing with these declarations?Arbutus
A page that was taking a long time to load no longer gets timeouts. The config also passes along headers that might be needed and caps the buffers.Godgiven
It's nice to see a list of all the timeout settings in one place. For me, I needed to know which one to shorten to make my error response drop a misbehaving upstream promptly.Infrequent

I would recommend looking at the error logs, specifically at the upstream part, which shows the specific upstream that is timing out.

Then based on that you can adjust proxy_read_timeout, fastcgi_read_timeout or uwsgi_read_timeout.

Also make sure your config is loaded.
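For reference, each proxying module has its own read-timeout directive; here is a sketch showing where each would live (the paths, ports and socket names are illustrative, not from the original configs):

```nginx
# Illustrative only; adjust ports and socket paths to your environment.
location /app/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_read_timeout 300s;        # ngx_http_proxy_module
}

location ~ \.php$ {
    fastcgi_pass unix:/var/run/php/php-fpm.sock;
    fastcgi_read_timeout 300s;      # ngx_http_fastcgi_module
}

location /py/ {
    uwsgi_pass unix:/tmp/uwsgi.sock;
    uwsgi_read_timeout 300s;        # ngx_http_uwsgi_module
}
```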

More details here: Nginx upstream timed out (why and how to fix)

Piderit answered 22/4, 2017 at 17:36 Comment(1)
Beware that link has a very intrusive full screen advert that pops up as you scroll... and then doesn't actually give much actual information.Injustice

I think this error can happen for various reasons, but it can be specific to the module you're using. For example, I saw this with the uwsgi module, so I had to set uwsgi_read_timeout.

Commissary answered 10/10, 2013 at 10:50 Comment(1)
I think uwsgi_read_timeout 3600; proxy_send_timeout 3600; proxy_read_timeout 3600; works for me.Toga

As many others have pointed out here, increasing the timeout settings for NGINX can solve your issue.

However, increasing your timeout settings might not be as straightforward as many of these answers suggest. I myself faced this issue and tried to change my timeout settings in the /etc/nginx/nginx.conf file, as almost everyone in these threads suggests. This did not help me a single bit; there was no apparent change in NGINX's timeout behavior. Now, many hours later, I finally managed to fix this problem.

The solution lies in this forum thread, and what it says is that you should put your timeout settings in /etc/nginx/conf.d/timeout.conf (and if this file doesn't exist, you should create it). I used the same settings as suggested in the thread:

proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
send_timeout 600;
Cutright answered 9/2, 2019 at 9:54 Comment(0)

Please also check the keepalive_timeout of the upstream server.

I got a similar issue: random 502s, with Connection reset by peer errors in the nginx logs, happening when the server was under heavy load. Eventually I found it was caused by a mismatch between nginx's and the upstream's (gunicorn in my case) keepalive_timeout values. Nginx was at 75s and the upstream at only a few seconds. This caused the upstream to sometimes hit its timeout and drop the connection, while nginx didn't understand why.

Raising the upstream server's value to match nginx's solved the issue.
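A sketch of what aligning the two sides might look like; 75s is nginx's default keepalive_timeout, and the gunicorn flag shown is its standard --keep-alive option (the values here are illustrative):

```nginx
# nginx side: idle keepalive connections are held for 75s (the default).
http {
    keepalive_timeout 75s;
}

# The upstream side must hold connections at least as long, e.g. for gunicorn:
#   gunicorn --keep-alive 75 myapp.wsgi:application
```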

Porphyritic answered 9/7, 2021 at 15:29 Comment(0)

If you're using an AWS EC2 instance running Linux like I am, you may also need to restart Nginx for the changes to take effect after adding proxy_read_timeout 3600; to /etc/nginx/nginx.conf. I did: sudo systemctl restart nginx

Osborne answered 15/7, 2022 at 18:17 Comment(0)

I had the same problem, and it turned out to be caused by an "every day" error in the Rails controller. I don't know why, but in production, Puma raises the error again and again, causing the message:

upstream timed out (110: Connection timed out) while reading response header from upstream

Probably because Nginx tries to get the data from Puma again and again. The funny thing is that the error caused the timeout message even when I called a different action in the controller, so a single typo blocked the whole app.

Check your log/puma.stderr.log file to see if that is the situation.

Cryometer answered 26/12, 2016 at 19:28 Comment(0)

Hopefully this helps someone: I ran into this error, and the cause was wrong permissions on the log folder for php-fpm. After changing them so php-fpm could write to it, everything was fine.

Aldehyde answered 3/1, 2019 at 1:8 Comment(0)

I was facing the same issue.

I added the lines below to my domain's host file and reloaded the nginx service, and now it's working fine.

proxy_read_timeout 600;

fastcgi_read_timeout 600;

service nginx reload 
Heraclitean answered 26/12, 2023 at 7:1 Comment(0)

On our side, it was caused by using SPDY with the proxy cache. When the cache expires, we get this error until the cache has been updated.

Easley answered 18/6, 2014 at 21:26 Comment(0)

For the proxy upstream timeout, I tried the settings above, but they didn't work.

Setting resolver_timeout worked for me, knowing it was taking 30s to produce the upstream timeout message, e.g. "me.atwibble.com could not be resolved (110: Operation timed out)".

http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver_timeout

Flor answered 25/11, 2019 at 13:44 Comment(0)

We faced this issue while saving content (a custom content type), which gave a timeout error. We fixed it by adding all the above timeouts, setting the HTTP client config to 600s, and increasing the memory for the PHP process to 3 GB.

Castellated answered 10/12, 2021 at 5:44 Comment(1)
Timeouts and other updates were added in nginx.conf, php.ini and settings.php. Drupal is now very bloated and not robust at all. It is not worth the effort to use Drupal for whatever benefits it may have, compared to the severe hassles it burdens you with. WordPress and Sitecore are much better.Castellated

I tested proxy_read_timeout 100s and found a 100s timeout in the access log; with 210s, the timeout appeared at 210s. So you can set 600s or longer, depending on your web app.

Radiation answered 10/3, 2023 at 5:17 Comment(0)

If you are using WSL2 on Windows 10, check your version with this command:

wsl -l -v

You should see 2 under the version column. If you don't, you need to install wsl_update_x64.

Neau answered 22/1, 2022 at 6:25 Comment(0)

Add a line to your location block or to nginx.conf, for example: proxy_read_timeout 900s;

Teage answered 19/3, 2021 at 10:57 Comment(1)
not helping at allYamamoto

© 2022 - 2025 — McMap. All rights reserved.