First of all, please don't consider this question a duplicate of this question.
I have set up an environment which uses celery and redis as the broker and result_backend. My question is: how can I make sure that when the celery workers crash, all the scheduled tasks are re-tried once the celery worker is back up?
I have seen advice on using CELERY_ACKS_LATE = True, so that the broker will re-deliver the tasks until it gets an ACK, but in my case it's not working. Whenever I schedule a task, it immediately goes to the worker, which holds it until the scheduled time of execution. Let me give an example:
I am scheduling a task like this: res = test_task.apply_async(countdown=600), but immediately in the celery worker logs I can see something like: Got task from broker: test_task[a137c44e-b08e-4569-8677-f84070873fc0] eta:[2013-01-...]. Now when I kill the celery worker, these scheduled tasks are lost. My settings:
BROKER_URL = "redis://localhost:6379/0"
CELERY_ALWAYS_EAGER = False
CELERY_RESULT_BACKEND = "redis://localhost:6379/0"
CELERY_ACKS_LATE = True
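For reference, here is a minimal sketch of the setup I am describing; the module layout and the task body are just placeholders, only the settings and the apply_async call are the real ones:

# sketch only; task/module names are placeholders
from celery import Celery

app = Celery("tasks")
app.conf.update(
    BROKER_URL="redis://localhost:6379/0",
    CELERY_RESULT_BACKEND="redis://localhost:6379/0",
    CELERY_ALWAYS_EAGER=False,
    CELERY_ACKS_LATE=True,
)

@app.task
def test_task():
    # placeholder body; the real task logic is not relevant here
    return "done"

# schedule the task 10 minutes from now; the worker picks it up
# immediately and holds the ETA in memory until execution time
res = test_task.apply_async(countdown=600)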
..When a task is scheduled it is delivered to the worker immediately.. is true; this implies that selection of the worker for execution is done much before actual execution begins. That hardly seems right for a scheduler (distributed systems), which is supposed to dynamically factor in the load before scheduling tasks across workers – Sedillo
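One way to confirm the behaviour the comment describes, assuming the same app instance as in the sketch above: ETA/countdown tasks that have already been handed to a worker show up in that worker's scheduled list rather than as pending messages in Redis.

# inspect what the workers are currently holding in memory
insp = app.control.inspect()
print(insp.scheduled())  # eta/countdown tasks held by each worker
print(insp.reserved())   # prefetched tasks waiting to execute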