I ran into the same problem about a year back. I tried many things, and in the end, after reading the documentation and some trial and error, my problem was gone. First, the important directives to set:
FcgidBusyTimeout 300 [default]
FcgidBusyScanInterval 120 [default]
The purpose of this directive is to terminate hung applications. The default timeout may need to be increased for applications that take longer to process a request. Because the check is performed at the interval defined by FcgidBusyScanInterval, request handling may be allowed to proceed for a longer period of time.
FcgidProcessLifeTime 3600 [default]
Idle application processes which have existed for longer than this time will be terminated, if the number of processes for the class exceeds FcgidMinProcessesPerClass. This process lifetime check is performed at the frequency of the configured FcgidIdleScanInterval.
FcgidZombieScanInterval 3 [default, in seconds]
The module checks for exited FastCGI applications at this interval. During this period of time, the application may exist in the process table as a zombie (on Unix).
Note: decrease or increase all of the above options according to your application's processing time and needs, or apply them to a specific vhost (a sketch follows below).
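For example, a minimal per-vhost sketch (the names and values here are placeholders, not recommendations; as far as I can tell from the docs, the scan-interval directives are server-wide only, so only the timeout/lifetime ones go inside the vhost):

# server-wide (the scan intervals appear to be server config only)
FcgidBusyScanInterval 120
FcgidZombieScanInterval 3

<VirtualHost *:80>
    ServerName slow-app.example.com
    DocumentRoot /var/www/slow-app
    # this app legitimately needs more than the 300s default
    # before it should be treated as hung
    FcgidBusyTimeout 600
    FcgidProcessLifeTime 3600
</VirtualHost>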
But my problem was actually resolved by one more option.
The options above tuned up my server, but after some time the errors seemed to come back. What really fixed the error was this:
FcgidOutputBufferSize 65536 [default]
I changed it to:
FcgidOutputBufferSize 0
This is the maximum amount of response data the module will read from the FastCGI application before flushing it to the client. With a value of 0 the data is flushed instantly rather than waiting for 64 KB to accumulate, which freed up my processes much faster.
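If you want to try the same change, a sketch of where it can live (the conf.d path is an assumption based on a typical Red Hat-style layout; Debian-style setups keep module config under mods-available instead):

# /etc/httpd/conf.d/fcgid.conf (path varies by distribution)
# 0 disables output buffering: each chunk is written to the client as it
# arrives, at the cost of more frequent, smaller writes
FcgidOutputBufferSize 0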
Other issues I ran into:
If the 500 error is coming from Nginx timing out, the fix:
/etc/nginx/nginx.conf
keepalive_timeout 125;
proxy_read_timeout 125;
proxy_connect_timeout 125;
fastcgi_read_timeout 125;
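After editing, validate the config and reload rather than restart (the systemctl call assumes a systemd-based distro):

nginx -t && systemctl reload nginx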
Intermittently I would get the "MySQL server has gone away" error, which required one more tweak:
/etc/my.cnf
wait_timeout = 120
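To confirm it took effect, or to change it on the fly without a restart (note that SET GLOBAL only affects new connections, not sessions already open):

SHOW VARIABLES LIKE 'wait_timeout';
SET GLOBAL wait_timeout = 120;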
Then, just for funsies, I went ahead and upped my PHP memory limit, just in case:
/etc/php.ini
memory_limit = 256M
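One caveat: the CLI often reads a different php.ini than the Apache/FastCGI SAPI, so double-check the value the web side actually sees, e.g.:

php -r 'echo ini_get("memory_limit"), PHP_EOL;'

or drop a temporary phpinfo() page and view it through the browser.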
Using SuExec
mod_fastcgi doesn't work at all under SuExec on Apache 2.x. I had nothing but trouble from it (it also had numerous other issues in our testing). The real cause of your problem is SuExec.
In my case it happened right at startup: when I started Apache, mod_fcgid spawned exactly 5 processes for each vhost. Then, when using a simple upload script and submitting a file larger than 4-8 KB, all of those child processes were killed at once for the specific vhost the script ran on.
It might be possible to make a debug build or crank up logging in mod_fcgid, which might give a clue (see the sketch below).
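For example, with Apache 2.4 you can raise logging for just this module (per-module LogLevel is a 2.4 feature; on 2.2 you would have to raise the global LogLevel, which is much noisier):

# keep the rest of the server at warn, but log mod_fcgid at debug
LogLevel warn fcgid:debug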
I have been running mod_fastcgi in the meantime for a year, and along with many others I can say that SuExec is nothing but trouble and never runs smoothly.