PHP-FPM - upstream prematurely closed connection while reading response header
I've already seen this same question - upstream prematurely closed connection while reading response header from upstream, client - but as Jhilke Dai said, it was not solved at all, and I agree. I get the exact same error on an nginx + PHP-FPM installation. Current software versions: nginx 1.2.8, PHP 5.4.13 (cli) on FreeBSD 9.1. I've isolated the error a bit and am sure it happens when trying to import large files (larger than 3 MB) into MySQL via phpMyAdmin. I've also noticed that the backend closes the connection when a 30-second limit is reached. The nginx error log shows this:

 [error] 49927#0: *196 upstream prematurely closed connection while reading response header from upstream, client: 7X.XX.X.6X, server: domain.com, request: "POST /php3/import.php HTTP/1.1", upstream: "fastcgi://unix:/tmp/php5-fpm.sock2:", host: "domain.com", referrer: "http://domain.com/phpmyadmin/db_import.php?db=testdb&server=1&token=9ee45779dd53c45b7300545dd3113fed"

My php.ini limits are raised accordingly:

upload_max_filesize = 200M
default_socket_timeout = 60
max_execution_time = 600
max_input_time = 600

The related my.cnf limit:

max_allowed_packet = 512M

FastCGI limits:

location ~ \.php$ {
    # fastcgi_split_path_info ^(.+\.php)(.*)$;
    fastcgi_pass unix:/tmp/php5-fpm.sock2;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param SCRIPT_NAME $fastcgi_script_name;

    fastcgi_intercept_errors on;
    fastcgi_ignore_client_abort on;
    fastcgi_connect_timeout 60s;
    fastcgi_send_timeout 200s;
    fastcgi_read_timeout 200s;
    fastcgi_buffer_size 128k;
    fastcgi_buffers 8 256k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_temp_file_write_size 256k;
}

I tried changing the FastCGI timeouts as well as the buffer sizes; that didn't help. The PHP error log doesn't show the problem - I enabled all notices and warnings, and there was nothing useful. I also tried disabling APC - no effect.
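One more place worth checking, given that the connection is cut at exactly 30 seconds: PHP-FPM has its own pool-level timeout, request_terminate_timeout, which kills the worker process regardless of max_execution_time in php.ini. This is a hedged sketch of the relevant pool directives (the file path is an assumption for a typical FreeBSD ports install; the exact location may differ):

```ini
; In the PHP-FPM pool config (e.g. /usr/local/etc/php-fpm.conf, pool [www]):

; If this is set to ~30s, it would explain the cutoff seen above.
; 0 disables the FPM-side kill timer; otherwise keep it at or above
; max_execution_time from php.ini.
request_terminate_timeout = 600s

; Also check that the pool does not silently override php.ini values:
; php_admin_value[max_execution_time] = 600
```

After changing pool settings, restart the php-fpm service for them to take effect.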

Effendi answered 4/5, 2013 at 20:21 Comment(0)
I had this same issue - I got 502 Bad Gateway frequently and randomly on my development machine (OS X + nginx + php-fpm) - and solved it by changing some parameters in /usr/local/etc/php/5.6/php-fpm.conf.

I had these settings:

pm = dynamic
pm.max_children = 10
pm.start_servers = 3
pm.max_spare_servers = 5

... and changed them to:

pm = dynamic
pm.max_children = 10
pm.start_servers = 10
pm.max_spare_servers = 10

... and then restarted the php-fpm service.

These settings are based on what I found here: https://bugs.php.net/bug.php?id=63395
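For context: with pm = dynamic, PHP-FPM requires pm.start_servers to fall between pm.min_spare_servers and pm.max_spare_servers, which is why pushing all three values up to the pm.max_children ceiling works - the pool never has to spawn workers on demand. A hedged alternative (values illustrative, not from the original answer) that achieves the same effect more directly is the static process manager:

```ini
; Static pool: all workers are pre-forked at startup, so requests never
; wait for the manager to spawn a child under load.
pm = static
pm.max_children = 10
```

The trade-off is that a static pool holds all its workers' memory permanently, so size pm.max_children to fit available RAM.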

Camarillo answered 16/3, 2016 at 11:4 Comment(1)
Works! I looked all over for a solution; this is the only one I found that works. - Petronille
How long does your script take to run? Try setting huge timeouts, in both PHP and nginx, and monitor your system during the request. Then tune the values down to optimise performance.

Also, lower the log level in PHP-FPM, maybe there is some type of warning, info or debug trace that can give you some info.

Finally, be careful with the number of children and processes available in PHP-FPM. Maybe nginx is starving while waiting for a PHP-FPM child to become available.
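To see whether the pool is actually running out of children, PHP-FPM can expose a status page. A minimal sketch, assuming the socket path from the question and a status path you choose yourself (set pm.status_path = /fpm-status in the pool config first):

```nginx
# Expose the PHP-FPM status page; restrict it to localhost.
location = /fpm-status {
    access_log off;
    allow 127.0.0.1;
    deny all;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $fastcgi_script_name;
    fastcgi_pass unix:/tmp/php5-fpm.sock2;
}
```

Requesting /fpm-status from the server itself then reports active/idle processes, the listen queue, and the number of times pm.max_children was reached - a non-zero "max children reached" counter points at pool starvation.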

Platus answered 24/5, 2014 at 9:53 Comment(0)
