The bad news is that you are almost certainly not going to be able to solve your problem the way you want to solve it. The FastCGI record sent when a client closes its connection while a request is still running is FCGI_ABORT_REQUEST. The FastCGI spec describes it:
> A Web server aborts a FastCGI request when an HTTP client closes its transport connection while the FastCGI request is running on behalf of that client. The situation may seem unlikely; most FastCGI requests will have short response times, with the Web server providing output buffering if the client is slow. But the FastCGI application may be delayed communicating with another system, or performing a server push.
Unfortunately, it looks like neither the original FastCGI implementation nor PHP-FPM supports the FCGI_ABORT_REQUEST record, so a running request can't be interrupted.
The good news is that there are better ways to solve this problem. Fundamentally, you should never have requests that take a long time to process. Instead, when a request needs a long time to process, you should (a rough sketch of these steps follows the list):
- Push it to a queue of tasks that need to be processed.
- Return the 'task ID' to the client.
- Have the client poll periodically to see if that 'task' is completed, and display the results once it is.
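As an illustration, here is a minimal sketch of the submit endpoint, assuming the phpredis extension. The key names (`task:<id>`, `task_queue`) and payload fields are invented for this example, and the initial state constant matches the states fleshed out below:

```php
<?php
// Sketch of the "submit" endpoint: record the task, queue it, and
// return the task ID immediately. Assumes the phpredis extension;
// the key names and payload fields are invented for illustration.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$taskId = uniqid('task_', true);

// Record the task and its initial state.
$redis->hMSet("task:$taskId", [
    'state' => 'TASK_STATE_QUEUED',
    'url'   => $_POST['url'] ?? '',
]);

// Put the ID on the work queue for a background worker to pick up.
$redis->lPush('task_queue', $taskId);

// Respond right away; the client polls for status later.
header('Content-Type: application/json');
echo json_encode(['taskId' => $taskId]);
```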
In addition to those three basic steps, if you're concerned about wasting system resources when a client is no longer interested in the result of a request, you should add:
- Break each task into small pieces of work, and only move a task from one work 'state' to the next if the client is still asking for the result.
You don't say what your long-running task is, so let's pretend it's downloading a large image file from another server, manipulating that image, and then storing it in S3. The states for this task would be something like:
    TASK_STATE_QUEUED
    TASK_STATE_DOWNLOADING      // Moves to the next state when the download finishes
    TASK_STATE_DOWNLOADED
    TASK_STATE_PROCESSING       // Moves to the next state when processing finishes
    TASK_STATE_PROCESSED
    TASK_STATE_UPLOADING_TO_S3  // Moves to the next state when the upload finishes
    TASK_STATE_FINISHED
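To make those states concrete, here is a rough sketch of a single worker step for this pipeline. The helpers `downloadImage()`, `manipulateImage()`, and `uploadToS3()` are hypothetical placeholders for your real work, and the Redis keys match the earlier sketch:

```php
<?php
// Rough sketch: perform exactly one unit of work for a task, then
// record the resulting state. downloadImage(), manipulateImage() and
// uploadToS3() are hypothetical placeholders for the real work.
function processNextStep(Redis $redis, string $taskId): void
{
    $key   = "task:$taskId";
    $state = $redis->hGet($key, 'state');

    switch ($state) {
        case 'TASK_STATE_DOWNLOADING':
            downloadImage($redis->hGet($key, 'url'), "/tmp/$taskId");
            $redis->hSet($key, 'state', 'TASK_STATE_DOWNLOADED');
            break;
        case 'TASK_STATE_PROCESSING':
            manipulateImage("/tmp/$taskId");
            $redis->hSet($key, 'state', 'TASK_STATE_PROCESSED');
            break;
        case 'TASK_STATE_UPLOADING_TO_S3':
            uploadToS3("/tmp/$taskId");
            $redis->hSet($key, 'state', 'TASK_STATE_FINISHED');
            break;
        // QUEUED, DOWNLOADED and PROCESSED are 'waiting' states: the
        // worker leaves them alone until a client poll bumps them.
    }
}
```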
So when the client sends the initial request, it gets back a task ID, and then when it queries the state of that task, either:
- the server reports that the task is still being worked on, or
- if the task is in one of the following states, the client's request bumps it to the next state:
    TASK_STATE_QUEUED     => TASK_STATE_DOWNLOADING
    TASK_STATE_DOWNLOADED => TASK_STATE_PROCESSING
    TASK_STATE_PROCESSED  => TASK_STATE_UPLOADING_TO_S3
So only tasks that a client is still interested in continue to be processed. A sketch of that polling endpoint follows.
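Here is one way that endpoint might look, again assuming phpredis and the invented key names from the earlier sketches; the transition map mirrors the table above:

```php
<?php
// Sketch of the polling endpoint: report progress, and only bump a
// task into its next active state when the client actually asks.
$transitions = [
    'TASK_STATE_QUEUED'     => 'TASK_STATE_DOWNLOADING',
    'TASK_STATE_DOWNLOADED' => 'TASK_STATE_PROCESSING',
    'TASK_STATE_PROCESSED'  => 'TASK_STATE_UPLOADING_TO_S3',
];

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$taskId = $_GET['taskId'] ?? '';
$state  = $redis->hGet("task:$taskId", 'state');

if (isset($transitions[$state])) {
    // The client is still interested: advance the task and re-queue
    // it so a worker performs the next piece of work.
    $state = $transitions[$state];
    $redis->hSet("task:$taskId", 'state', $state);
    $redis->lPush('task_queue', $taskId);
}

header('Content-Type: application/json');
echo json_encode(['taskId' => $taskId, 'state' => $state]);
```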
By the way, I'd strongly recommend using something that is designed to perform well as a queue (e.g. RabbitMQ, Redis, or Gearman) for holding the queue of tasks, rather than MySQL or any other SQL database. SQL just isn't that great at acting as a queue, and you would be better off using the appropriate technology from the start, rather than starting with the wrong tech and then having to swap it out in an emergency when your database becomes overloaded trying to do hundreds of inserts and updates per second just to manage the tasks.
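For completeness, a background worker consuming that Redis queue could be as simple as the following sketch, using phpredis's blocking `brPop` so the worker sleeps until work arrives (`processNextStep()` is the hypothetical step function sketched earlier):

```php
<?php
// Sketch of a long-running worker consuming the Redis-backed queue.
// brPop blocks until a task ID arrives, so there is no busy polling.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

while (true) {
    // Blocks for up to 30 seconds; returns [queueName, taskId],
    // or an empty result if it timed out with no work.
    $item = $redis->brPop(['task_queue'], 30);
    if (empty($item)) {
        continue; // No work yet; loop and wait again.
    }
    $taskId = $item[1];
    processNextStep($redis, $taskId);
}
```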
As a side benefit, breaking a long-running process up into tasks makes it really easy to:
- See where the processing time is being spent.
- See and detect fluctuations in processing time (e.g. if the CPUs reach 100% utilization, the image resize will suddenly take much longer).
- Throw more resources at the steps that are slow.
- Give status-update messages to the client, so they can see the task's progress, which makes for a better UX than the request just sitting there 'doing nothing'.
> flush some output, like `$buffersize = 256; echo str_repeat(" ", $buffersize); flush();` (see issue #115), or `echo` and `ob_implicit_flush()` – Kohler