There is no issue with the timeout argument in enqueue_call; I just tested it with this example.
function.py

from time import sleep

def test(a, b, c):
    sleep(a)
    print str(b + c)
driver.py

from redis import Redis
from rq import Queue
from function import test

q = Queue('abc', connection=Redis())
q.enqueue_call(test, args=(300, 2, 3), timeout=200)
q.enqueue_call(test, args=(100, 2, 3), timeout=200)
Result:
13:08:11 abc: test.test(100, 2, 3) (4b4e96e5-af30-4175-ab94-ceaf9187e581)
5
13:08:13 abc: test.test(300, 2, 3) (04605c34-d039-42ad-954e-7f445f0f8bc9)
13:11:17 JobTimeoutException: Job exceeded maximum timeout value (200 seconds)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/rq/worker.py", line 568, in perform_job
rv = job.perform()
File "/usr/local/lib/python2.7/dist-packages/rq/job.py", line 495, in perform
self._result = self.func(*self.args, **self.kwargs)
File "./test.py", line 4, in test
sleep(a)
File "/usr/local/lib/python2.7/dist-packages/rq/timeouts.py", line 51, in handle_death_penalty
'value ({0} seconds)'.format(self._timeout))
JobTimeoutException: Job exceeded maximum timeout value (200 seconds)
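As the traceback shows (rq/timeouts.py, handle_death_penalty), RQ enforces the timeout with a SIGALRM-based "death penalty": an alarm is scheduled for the timeout value, and its handler raises JobTimeoutException inside the running job. Here is a minimal, Unix-only sketch of that mechanism; run_with_timeout is a hypothetical helper for illustration, not part of RQ's API:

```python
import signal
from time import sleep


class JobTimeoutException(Exception):
    """Raised when a job exceeds its timeout (mirrors rq's exception)."""


def run_with_timeout(func, timeout, *args):
    # Schedule SIGALRM for `timeout` seconds from now; the handler
    # interrupts the job by raising an exception, just like RQ's
    # death-penalty handler does.
    def handle_death_penalty(signum, frame):
        raise JobTimeoutException(
            'Job exceeded maximum timeout value ({0} seconds)'.format(timeout))

    signal.signal(signal.SIGALRM, handle_death_penalty)
    signal.alarm(timeout)
    try:
        return func(*args)
    finally:
        signal.alarm(0)  # cancel the pending alarm once the job finishes


try:
    run_with_timeout(sleep, 1, 3)  # job sleeps 3 s, but the timeout is 1 s
except JobTimeoutException as e:
    print(e)
```

Because the alarm fires in the worker's main thread, a job stuck in a blocking call like sleep() is still interrupted as soon as the timeout elapses.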
If you are using a tool like supervisor to manage your RQ workers, try restarting the service so the workers pick up the current code.