How to correctly catch and process RQ timeouts in Python?
I'm trying to find a good way to catch a timeout of an RQ job, so the job can be requeued after it times out.

Basically, the correct solution would provide a way (for example, an exception handler in the worker or something of the sort) to requeue the job that timed out. Also, if the job goes back to the failed queue, that's a good answer too.

Thanks very much! Any help will be appreciated!

Spielman answered 5/9, 2013 at 19:44
Sounds like you want to use exception handling. From the docs:

Jobs can fail due to exceptions occurring. When your RQ workers run in the background, how do you get notified of these exceptions?

Default: the failed queue
The default safety net for RQ is the failed queue. Every job that fails execution is stored in here, along with its exception information (type, value, traceback). While this makes sure no failing jobs "get lost", this is of no use to get notified pro-actively about job failure.

Custom exception handlers
Starting from version 0.3.1, RQ supports registering custom exception handlers. This makes it possible to replace the default behaviour (sending the job to the failed queue) altogether, or to take additional steps when an exception occurs.
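A minimal sketch of such a handler, assuming you want to retry timed-out jobs a few times before letting them fail. RQ exception handlers receive (job, exc_type, exc_value, traceback); returning False stops the handler chain (so the default move-to-failed-queue handler never runs), while returning True falls through to the next handler. The JobTimeoutException class below is a stand-in for rq.timeouts.JobTimeoutException so the sketch runs without a Redis server, and MAX_RETRIES and the 'retries' meta key are my own invention, not part of RQ:

```python
# Stand-in for rq.timeouts.JobTimeoutException, so this sketch is
# runnable without RQ/Redis; in a real worker, import the real class.
class JobTimeoutException(Exception):
    pass

MAX_RETRIES = 3  # hypothetical retry budget, not an RQ setting


def retry_on_timeout(job, exc_type, exc_value, traceback):
    """RQ-style exception handler: requeue timed-out jobs, up to a limit."""
    if not issubclass(exc_type, JobTimeoutException):
        return True  # not a timeout: fall through to the default handler

    retries = job.meta.get('retries', 0)
    if retries >= MAX_RETRIES:
        return True  # retry budget exhausted: let the job fail normally

    job.meta['retries'] = retries + 1
    # In a real worker you would persist and re-enqueue the job here,
    # e.g. job.save() followed by enqueueing it on its original queue.
    return False  # handled: stop the chain, keep the job out of the failed queue
```

You register the handler on the worker, e.g. with worker.push_exc_handler(retry_on_timeout).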

You could also store jobs in a Redis sorted set with the job_id as the member and time.time() + timeout as the score, and then have a worker run ZRANGEBYSCORE sorted_set 0 [current_time] and process whatever's returned as timed-out jobs.
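The sorted-set watchdog can be sketched like this, assuming a redis-py client. The key name and the function names are made up for illustration; zadd, zrangebyscore, and zrem are real redis-py methods (redis-py 3+ zadd takes a mapping):

```python
import time

WATCH_KEY = 'jobs:deadlines'  # hypothetical key name for the sorted set


def track_job(r, job_id, timeout):
    """Record the wall-clock deadline for a job when it is enqueued."""
    r.zadd(WATCH_KEY, {job_id: time.time() + timeout})


def pop_timed_out(r, now=None):
    """Return job ids whose deadline has passed, removing them from the set
    so that only one watcher requeues each job."""
    now = time.time() if now is None else now
    expired = r.zrangebyscore(WATCH_KEY, 0, now)
    if expired:
        r.zrem(WATCH_KEY, *expired)
    return expired
```

A separate watcher process would call pop_timed_out periodically and requeue whatever it returns; removing members before requeueing keeps two watchers from requeueing the same job.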

Sluice answered 6/9, 2013 at 9:42
Thanks! This is correct. For future reference, the job raises a JobTimeoutException, which can be handled by the worker exception handler, however you choose to define it. – Spielman
