Cancel an already executing task in Python RQ?
I am using http://python-rq.org/ to queue and execute tasks on Heroku worker dynos. These are long-running tasks and occasionally I need to cancel them in mid-execution. How do I do that from Python?

from redis import Redis
from rq import Queue
from my_module import count_words_at_url

q = Queue(connection=Redis())
result = q.enqueue(count_words_at_url, 'http://nvie.com')

and later in a separate process I want to do:

from redis import Redis
from rq import Queue
from my_module import count_words_at_url

q = Queue(connection=Redis())
result = q.revoke_all() # or something

Thanks!

Chemoprophylaxis answered 28/5, 2013 at 13:53 Comment(3)
Did you ever figure this out? If so, I'd love if you'd post the solution. Thx – Prismoid
I didn't, unfortunately. I had to work my way around it. – Chemoprophylaxis
Relevant (but closed) github issue here: github.com/nvie/rq/issues/339 – Collide
If you have the job instance at hand, simply:

job.cancel()

Or, if you can determine the job id (the hash):

from rq import cancel_job
cancel_job('2eafc1e6-48c2-464b-a0ff-88fd199d039c')

http://python-rq.org/contrib/

But that just removes the job from the queue; I don't know that it will kill it if it is already executing.

You could have the task record its start time, check itself periodically, and raise an exception (self-destruct) once a time budget is exceeded.
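The self-destruct idea above can be sketched as a plain function that watches its own wall time. This is an illustrative pattern, not an rq API: the helper name, its parameters, and the work loop are all hypothetical, and in a real task `work_items`/`handle` would be whatever the job actually iterates over and does.

```python
import time

class JobTimeout(Exception):
    """Raised by the task itself once it has run too long."""

def run_with_deadline(work_items, handle, timeout):
    """Process work_items with handle(), self-destructing past the deadline.

    work_items: any iterable of units of work
    handle:     callable applied to each unit
    timeout:    wall-time budget in seconds
    """
    start = time.monotonic()
    results = []
    for item in work_items:
        # Check elapsed wall time before each unit of work.
        if time.monotonic() - start > timeout:
            raise JobTimeout('exceeded %s second budget' % timeout)
        results.append(handle(item))
    return results
```

When the exception propagates, rq treats the job as failed, so the worker is freed for the next job even though nothing external killed the task.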

For manual, ad-hoc-style death: if you have redis-cli installed, you can do something drastic and flush all queues and jobs:

$ redis-cli
127.0.0.1:6379> flushall
OK
127.0.0.1:6379> exit

I'm still digging around the documentation trying to find how to make a precision kill.

Not sure if that helps anyone since the question is already 18 months old.

Hibachi answered 18/11, 2014 at 7:49 Comment(0)
I think the most common solution is to have the worker spawn another thread/process to do the actual work, and then periodically check the job metadata. To kill the task, set a flag in the metadata and then have the worker kill the running thread/process.
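A minimal sketch of that cooperative-cancel pattern: the core loop only depends on a `should_stop()` callable, so it runs anywhere; the rq wiring shown in the comments uses `get_current_job()`, `Job.refresh()`, `job.meta`, and `job.save_meta()`, which I believe are part of the rq API, but treat the exact wiring as an assumption and check it against your rq version. The flag name `'cancel'` is arbitrary.

```python
def cancellable_task(work_items, handle, should_stop):
    """Process items, bailing out as soon as should_stop() returns True."""
    done = []
    for item in work_items:
        if should_stop():          # e.g. re-reads job.meta on each pass
            break
        done.append(handle(item))
    return done

# Inside a real rq worker, the flag would come from job metadata, roughly:
#
#     from rq import get_current_job
#
#     def should_stop():
#         job = get_current_job()
#         job.refresh()                       # reload job data from Redis
#         return job.meta.get('cancel', False)
#
# and the canceller, in another process, would set:
#
#     job.meta['cancel'] = True
#     job.save_meta()
```

The `refresh()` call is the key point raised in the comment below: metadata changes are not pushed to the worker, so the task must re-read them from Redis itself.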

Pollination answered 8/3, 2018 at 17:35 Comment(1)
Can you please elaborate on how the worker may access the job metadata? Are you sure the job metadata is accessible to the worker dynamically, i.e. that changes in the job metadata are reflected to the worker in real time? – Amaty
From the docs:

You can use send_stop_job_command() to tell a worker to immediately stop a currently executing job. A job that’s stopped will be sent to FailedJobRegistry.

from redis import Redis
from rq.command import send_stop_job_command

redis = Redis()
send_stop_job_command(redis, job_id)
Quetzal answered 7/7, 2023 at 8:5 Comment(0)
