Celery with Redis - unix socket timeout

I have an app that uses Celery for async tasks, with Redis as both the broker and result backend, and Redis is configured to use a unix socket. Here is my URL for Celery and the broker:

from celery import Celery

brok = 'redis+socket://:ABc@/tmp/redis.sock'  # Redis over a unix socket
app = Celery('NTWBT', backend=brok, broker=brok)
app.conf.update(
    BROKER_URL=brok,
    BROKER_TRANSPORT_OPTIONS={
        "visibility_timeout": 3600
    },
    CELERY_RESULT_BACKEND=brok,
    CELERY_ACCEPT_CONTENT=['pickle', 'json', 'msgpack', 'yaml'],
)

But every time I submit a task, Celery gives me this error:

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 283, in trace_task
    uuid, retval, SUCCESS, request=task_request,
  File "/usr/local/lib/python2.7/dist-packages/celery/backends/base.py", line 257, in store_result
    request=request, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/celery/backends/base.py", line 491, in _store_result
    self.set(self.get_key_for_task(task_id), self.encode(meta))
  File "/usr/local/lib/python2.7/dist-packages/celery/backends/redis.py", line 160, in set
    return self.ensure(self._set, (key, value), **retry_policy)
  File "/usr/local/lib/python2.7/dist-packages/celery/backends/redis.py", line 149, in ensure
    **retry_policy
  File "/usr/local/lib/python2.7/dist-packages/kombu/utils/__init__.py", line 246, in retry_over_time
    return fun(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/celery/backends/redis.py", line 169, in _set
    pipe.execute()
  File "/usr/local/lib/python2.7/dist-packages/redis/client.py", line 2620, in execute
    self.shard_hint)
  File "/usr/local/lib/python2.7/dist-packages/redis/connection.py", line 897, in get_connection
    connection = self.make_connection()
  File "/usr/local/lib/python2.7/dist-packages/redis/connection.py", line 906, in make_connection
    return self.connection_class(**self.connection_kwargs)
TypeError: __init__() got an unexpected keyword argument 'socket_connect_timeout'

Which option should I use so that Celery does not set a timeout for its Redis connection?

Argumentation answered 16/12, 2015 at 9:5 Comment(0)

The problem in my case was that my computer's IP was blocked from the port on the server. After allowing TCP connections over this port from my local machine, Celery could connect to the backend again.

Apart from this, some of the following Celery settings might help to deal with timeouts (you can read more about them in the Celery documentation).

# celery broker connection timeouts and retries
broker_connection_retry = True  # Retry connecting to the broker on connection loss
broker_connection_retry_on_startup = True  # Important as the worker is restarted after every task
broker_connection_max_retries = 10  # Maximum number of retries to establish a connection to the broker
broker_connection_timeout = 30  # Timeout in seconds before giving up establishing a connection to the broker (default: 4.0)
broker_pool_limit = None  # connection pool is disabled and connections will be established / closed for every use

BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 36000,  # Increase time before tasks time out
                            'max_retries': 10,  # The max number of retries passed to the kombu package
                            'interval_start': 0,  # Seconds to wait before the first retry
                            'interval_step': 60,  # Additional seconds to wait with every subsequent retry
                            'interval_max': 600,  # Maximum number of seconds to wait between retries
                            'retry_policy': {'timeout': 60.0},  # Increase timeout for connections to the backend
                            }

result_backend_transport_options = {'visibility_timeout': 36000,  # Increase time before tasks time out
                                    'max_retries': 10,  # The max number of retries passed to the kombu package
                                    'interval_start': 0,  # Seconds to wait before the first retry
                                    'interval_step': 60,  # Additional seconds to wait with every subsequent retry
                                    'interval_max': 600,  # Maximum number of seconds to wait between retries
                                    'retry_policy': {'timeout': 60.0},  # Increase timeout for connections to the backend
                                    }

# Redis connection settings
redis_socket_timeout = 300  # Timeout for read/write operations on the redis connection
redis_socket_connect_timeout = 300  # Timeout for redis socket connections
redis_socket_keepalive = True  # Keep the socket connection to redis alive
redis_retry_on_timeout = True  # Not recommended for unix sockets
task_reject_on_worker_lost = True  # Retry the task if the worker is killed

# Handling of timeouts
result_persistent = True  # Store results so they don't get lost when the broker is restarted
worker_deduplicate_successful_tasks = True
worker_cancel_long_running_tasks_on_connection_loss = False
worker_proc_alive_timeout = 300  # The timeout in seconds (int/float) when waiting for a new worker process to start up.
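
These are the lowercase setting names used by Celery 4 and later (broker_connection_retry_on_startup only exists from Celery 5.3 on). A minimal sketch of applying a few of them to the app object from the question might look like this:

app.conf.update(
    broker_connection_retry=True,
    broker_connection_retry_on_startup=True,  # Celery 5.3+ only
    broker_connection_max_retries=10,
    broker_connection_timeout=30,
    broker_transport_options={'visibility_timeout': 36000, 'max_retries': 10},
    result_backend_transport_options={'visibility_timeout': 36000, 'max_retries': 10},
    redis_socket_timeout=300,
    redis_socket_connect_timeout=300,
)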
Chesna answered 25/10, 2023 at 15:40 Comment(0)

It seems that this problem is related to the version of Redis installed on your system; the socket_connect_timeout argument was first introduced in Redis 2.10.0.

So you need to update your version of Redis.

If you are running on an Ubuntu server, you can install it from the following apt repository:

$ sudo apt-get install -y python-software-properties
$ sudo add-apt-repository -y ppa:rwky/redis
$ sudo apt-get update
$ sudo apt-get install -y redis-server

and update to the latest version of Celery.
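
Assuming the Python packages were installed with pip (as the paths in the traceback suggest), the upgrade might look like this:

$ sudo pip install --upgrade redis celery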

This is the GitHub issue in Celery, since you are not the only one running into this problem: https://github.com/celery/celery/issues/2903

And if nothing works for you, I suggest using RabbitMQ instead of Redis:

$ sudo apt-get install rabbitmq-server
$ sudo pip install librabbitmq

and in your app, configure Celery with this CELERY_BROKER_URL:

'amqp://guest:guest@localhost:5672//'
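
As a minimal sketch (reusing the app name NTWBT from the question and assuming RabbitMQ runs locally with the default guest credentials), this could be wired up like so:

from celery import Celery

app = Celery('NTWBT', broker='amqp://guest:guest@localhost:5672//')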

I hope this answer fits all your needs. Cheers

Presber answered 24/5, 2016 at 9:13 Comment(0)

There are bugs in several libraries that cause this exception in Celery.

If you use Redis over a UNIX socket as the broker, there's no easy fix yet, unless you monkey-patch the celery, kombu and/or redis-py libraries...

For now, I recommend that you use Redis over a TCP connection, or switch to another broker, e.g. RabbitMQ.
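
For example, a minimal sketch of the configuration from the question switched to TCP (host, port, database number, and password are placeholders to adapt):

from celery import Celery

brok = 'redis://:ABc@localhost:6379/0'  # TCP instead of redis+socket://
app = Celery('NTWBT', backend=brok, broker=brok)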

Humberto answered 24/8, 2016 at 17:34 Comment(0)
