The Situation
I'm using Laravel Queues to process a large number of media files; an individual job is expected to take minutes (let's say up to an hour).
I am using Supervisor to run my queue, with 20 worker processes at a time. My Supervisor config file looks like this:
[program:duplitron-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/duplitron/artisan queue:listen database --timeout=0 --memory=500 --tries=1
autostart=true
autorestart=true
user=duplitron
numprocs=20
redirect_stderr=true
stdout_logfile=/var/www/duplitron/storage/logs/duplitron-worker.log
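For context, the queued jobs look roughly like the hypothetical class below; ProcessMediaFile and processMedia are stand-in names rather than the actual code, and are only meant to illustrate the kind of long-running work involved:

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

// Hypothetical stand-in for the real job class (not shown in this question).
class ProcessMediaFile implements ShouldQueue
{
    use InteractsWithQueue, Queueable, SerializesModels;

    protected $path;

    public function __construct($path)
    {
        $this->path = $path;
    }

    public function handle()
    {
        // Heavy media processing that can take anywhere from a few minutes
        // up to an hour; processMedia() is a placeholder for the real work.
        $this->processMedia($this->path);
    }

    protected function processMedia($path)
    {
        // Placeholder for the actual media processing.
    }
}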
There are a few oddities that I don't know how to explain or correct:
- My jobs fairly consistently fail after running for 60 to 65 seconds.
- After being marked as failed, the jobs keep running, and they eventually resolve successfully.
- When I run a failed task in isolation to track down the cause, it succeeds just fine (see the sketch after this list).
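By running a task in isolation I mean something along these lines from php artisan tinker (a sketch using the hypothetical class above, not the actual code):

// Construct the job and call handle() directly, bypassing the queue,
// the listener, and Supervisor entirely.
$job = new App\Jobs\ProcessMediaFile('/path/to/media/file.mp3');
$job->handle();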
I strongly believe this is a timeout issue; however, I was under the impression that --timeout=0 would result in an unlimited timeout.
The Question
How can I prevent this temporary "failure" job state? Are there other places where a queue timeout might be invoked that I'm not aware of?
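One candidate worth noting is the per-connection expire value in config/queue.php; below is a sketch of the stock database connection block as it ships with Laravel 5.x (the key is called retry_after in newer versions), though whether it is involved here is unverified:

// config/queue.php -- stock database connection in Laravel 5.x.
// 'expire' is the number of seconds a job may stay reserved before
// the queue considers it stalled and retries it.
'connections' => [
    'database' => [
        'driver' => 'database',
        'table'  => 'jobs',
        'queue'  => 'default',
        'expire' => 60,
    ],
],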
Comments
- max_execution_time in your php.ini: how much does it say? If it's 60 secs, there's your problem. Try increasing the timeout. – Bobette
- (max_execution_time was set to 30s, I'll explore and experiment along those lines.) – Wizard
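Note that the CLI and the web server usually load different php.ini files, so a generic way to check the value the queue worker's PHP actually uses (not specific to this setup) is:

# The CLI binary that runs artisan can load a different php.ini than php-fpm/Apache.
php --ini                                               # lists the ini files the CLI loads
php -r 'echo ini_get("max_execution_time"), PHP_EOL;'   # 0 means no limit on the CLI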