Nginx + PHP: stop process at canceled request

I have Nginx 1.4.4 and PHP 5.5.6. I'm making long-polling requests. The problem is that if I cancel the HTTP request sent via Ajax, the request keeps processing on the server (it doesn't stop). I tested this with the PHP mail() function at the end of the script, and the mail still arrives (so the script didn't stop).

I'm worried, because I think this might crash the server under the load of all those unclosed requests. Yes, I tried ignore_user_abort(false); but it made no difference. Is it possible that I should change something in Nginx?

  location ~ \.php$ {    
    try_files $uri =404;
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;  
  }
Watermelon answered 27/11, 2013 at 19:19 Comment(6)
Once a decade, we get a good question on PHP. Welcome, O mighty chosen one!Gauger
Don't get me wrong: I love PHP. But unfortunately the PHP web SAPI was not designed for this kind of long-polling as each concurrent request requires its own process. You're really trying to put a square peg in a round hole with this approach. If you want an interactive application you need to look into different technologies. You can write websocket applications in PHP using ratchet. A more robust solution is to bite the bullet and learn a technology like node.js which was specifically designed to handle high concurrent client/request volume.Amass
As a workaround you could try to periodically flush some output, like $buffersize=256; echo str_repeat(" ", $buffersize); flush(); see issue#115 or echo and ob_implicit_flushKohler
Are you initiating the Ajax connection via JavaScript on the PHP page, i.e. making an Ajax call using jQuery or some such? How is your polling set up? Are you using setTimeout or setInterval?Campball
Could you share some of the code that you're using?Donor
I am not sure, but am wondering if having a control flag would be of any help: PHP script continues after closing / stopping pageGoldston

The bad news is that you are almost certainly not going to be able to solve your problem the way you want to solve it. The FastCGI record a web server sends when a client closes its connection before receiving the response is FCGI_ABORT_REQUEST:

A Web server aborts a FastCGI request when an HTTP client closes its transport connection while the FastCGI request is running on behalf of that client. The situation may seem unlikely; most FastCGI requests will have short response times, with the Web server providing output buffering if the client is slow. But the FastCGI application may be delayed communicating with another system, or performing a server push.

Unfortunately, it looks like neither the original FastCGI implementation nor PHP-FPM supports the FCGI_ABORT_REQUEST record, so running scripts can't be interrupted this way.

The good news is that there are better ways to solve this problem. Basically, you should never have requests that take a long time to process. Instead, if a request needs a long time to process, you should:

  • Push it to a queue of tasks that need to be processed.
  • Return the 'task ID' to the client.
  • Have the client poll periodically to see if that 'task' is completed, and display the results once it is.

In addition to those 3 basic things, if you're concerned about wasting system resources when a client is no longer interested in the results of a request, you should add one more (sketched in code below):

  • Break tasks into small pieces of work, and only move a task from one work 'state' to the next if the client is still asking for the result.
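
As a concrete illustration of the submit side of that flow, here's a minimal sketch. It assumes a Redis instance on localhost and the phpredis extension; the endpoint name, key layout, and queue handling are made up for illustration, and any queue store would do:

<?php
// submit.php - a hypothetical endpoint: record the task and return
// immediately, instead of doing the work inside the request.
// (In the plain three-step flow you would also push the ID onto a
// work queue here; with the client-driven refinement above, the
// first status poll does that instead - see the later sketches.)
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$taskId = uniqid('task_', true);
$redis->hSet('task:' . $taskId, 'state', 'TASK_STATE_QUEUED');

// Hand the task ID back to the client so it can poll for the result.
header('Content-Type: application/json');
echo json_encode(array('taskId' => $taskId));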

You don't say what your long running task is - let's pretend that it's to download a large image file from another server, manipulate that image, and then store it in S3. So the states for this task would be something like:

TASK_STATE_QUEUED
TASK_STATE_DOWNLOADING //Moves to next state when finished download
TASK_STATE_DOWNLOADED
TASK_STATE_PROCESSING  //Moves to next state when processing finished
TASK_STATE_PROCESSED
TASK_STATE_UPLOADING_TO_S3 //Moves to next state when uploaded
TASK_STATE_FINISHED
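
A background worker (a separate CLI process, not a web request) would then carry each *_ING state to the matching *_ED state. Here's a rough sketch under the same assumptions as the submit example; download_image(), process_image(), and upload_to_s3() are hypothetical stand-ins for the real work, and how a task reaches the *_ING states is explained next:

<?php
// worker.php - run from the CLI (cron, supervisord, etc.). It only
// receives tasks whose state the status endpoint has bumped to an
// *_ING state, so tasks nobody is polling for simply stall.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

while (true) {
    $item = $redis->blPop(array('task_queue'), 30); // block up to 30s
    if (!$item) {
        continue; // timed out with no work; wait again
    }
    $taskId = $item[1];
    $key = 'task:' . $taskId;

    switch ($redis->hGet($key, 'state')) {
        case 'TASK_STATE_DOWNLOADING':
            download_image($taskId); // stand-in for the real step
            $redis->hSet($key, 'state', 'TASK_STATE_DOWNLOADED');
            break;
        case 'TASK_STATE_PROCESSING':
            process_image($taskId);
            $redis->hSet($key, 'state', 'TASK_STATE_PROCESSED');
            break;
        case 'TASK_STATE_UPLOADING_TO_S3':
            upload_to_s3($taskId);
            $redis->hSet($key, 'state', 'TASK_STATE_FINISHED');
            break;
    }
}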

So when the client sends the initial request, it gets back a task ID. Then, when it queries the state of that task, either:

  • The server reports that the task is still being worked on

or

  • If it's in one of the following states, the client's request bumps it to the next state.

i.e.

TASK_STATE_QUEUED => TASK_STATE_DOWNLOADING
TASK_STATE_DOWNLOADED => TASK_STATE_PROCESSING
TASK_STATE_PROCESSED => TASK_STATE_UPLOADING_TO_S3

So only tasks that the client is still interested in continue to be processed.
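
A sketch of that status endpoint, under the same assumptions as the earlier sketches (the transition table mirrors the list just above; the UNKNOWN placeholder is illustrative):

<?php
// status.php?id=... - the client polls this endpoint; a task only
// moves forward while clients keep asking about it.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$taskId = isset($_GET['id']) ? $_GET['id'] : '';
$key = 'task:' . $taskId;

$state = $redis->hGet($key, 'state');
if ($state === false) {
    $state = 'UNKNOWN'; // no such task
}

// Client interest bumps a completed step into the next working state.
$transitions = array(
    'TASK_STATE_QUEUED'     => 'TASK_STATE_DOWNLOADING',
    'TASK_STATE_DOWNLOADED' => 'TASK_STATE_PROCESSING',
    'TASK_STATE_PROCESSED'  => 'TASK_STATE_UPLOADING_TO_S3',
);
if (isset($transitions[$state])) {
    $state = $transitions[$state];
    $redis->hSet($key, 'state', $state);
    $redis->rPush('task_queue', $taskId); // hand it to the worker
}

header('Content-Type: application/json');
echo json_encode(array('taskId' => $taskId, 'state' => $state));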

By the way, I'd strongly recommend using something designed to perform well as a queue (e.g. RabbitMQ, Redis, or Gearman) for holding the tasks, rather than MySQL or any other SQL database. SQL just isn't that great at acting as a queue, and you'd be better off using the appropriate technology from the start rather than starting with the wrong one and having to swap it out in an emergency, once your database is overloaded by the hundreds of inserts and updates per second needed just to manage the tasks.

As a side benefit, by breaking the long-running process up into tasks, it becomes really easy to:

  1. See where the processing time is being spent.
  2. See and detect fluctuations in processing time (e.g. if the CPUs reach 100% utilization, the image resize step will suddenly take much longer).
  3. Throw more resources at the steps that are slow.
  4. Give status update messages to the client, so they can see progress through the task, which makes for a better UX than the request just sitting there 'doing nothing'.
Melicent answered 30/11, 2013 at 1:3 Comment(2)
Great answer! But only works if you have control over the code. If not, here's a hack to kill off orphaned php-fpm processes: https://mcmap.net/q/741663/-how-do-i-get-the-php-fpm-process-to-terminate-when-a-user-aborts-request-nginxIrade
Apache bug #56188 covers mod_proxy_fcgi support of FCGI_ABORT_REQUEST.Whipping

What exactly are you doing in those long-running requests? If whatever you are doing causes the FastCGI process to wait on some system call, such as waiting for a database to return a result, the aborted HTTP client connection will not cause that call to be interrupted. If I recall correctly, the effect of ignore_user_abort(false) is merely that the PHP script is aborted as soon as it tries to output something to the (now lost) connection, and the script will not write any output while it is waiting on a system call.

If possible, you should split the task the long-running script is performing into smaller chunks and check the connection status in between processing them. Ensure that the script stops if the connection has been closed:

while (!$done_yet) {
    // Stop working as soon as the client connection has gone away.
    if (connection_status() != CONNECTION_NORMAL) {
        break;
    }
    // One small, bounded chunk of the overall task.
    do_more_work();
}
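
Note that PHP generally only notices a dropped connection when it actually attempts to write output. The variant below is an assumption that builds on the flush workaround mentioned in the comments under the question, not part of the original answer; ignore_user_abort(true) lets the script survive the failed write long enough to clean up:

ignore_user_abort(true);  // survive the failed write so the status
                          // check below still runs and can clean up
ob_implicit_flush(true);  // send echoed output immediately

while (!$done_yet) {
    echo ' ';             // attempt a write; PHP detects the abort
    flush();              // only when it actually pushes output out
    if (connection_status() != CONNECTION_NORMAL) {
        break;            // client is gone - stop working
    }
    do_more_work();
}

Be aware that Nginx's own FastCGI output buffering can delay how quickly the abort is noticed.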

In the PHP documentation you'll find more information on connection handling, if you like.

Post answered 28/11, 2013 at 9:38 Comment(1)
I tried this and it is not working for Ajax HTTP requests. Can you please suggest any other approach?Admix
