Docker - Celery cannot connect to redis

Project structure:

client
nginx
web/
   celery_worker.py
   project
   config.py
   api/

I have the following services in my docker-compose:

version: '3.6'

services:

  web:
    build:
      context: ./services/web
      dockerfile: Dockerfile-dev
    volumes:
      - './services/web:/usr/src/app'
    ports:
      - 5001:5000
    environment:
      - FLASK_ENV=development
      - APP_SETTINGS=project.config.DevelopmentConfig
      - DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev 
      - DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
      - SECRET_KEY=my_precious  
    depends_on:  
      - web-db
      - redis

  celery:
    image: dev3_web
    restart: always
    volumes:
      - ./services/web:/usr/src/app
      - ./services/web/logs:/usr/src/app
    command: celery worker -A celery_worker.celery --loglevel=INFO -Q cache
    environment:
      - CELERY_BROKER=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    depends_on:
      - web
      - redis
    links:
      - redis:redis


  redis:
    image: redis:5.0.3-alpine
    restart: always
    expose:
      - '6379'
    ports:
      - '6379:6379'


  monitor:
    image: dev3_web
    ports:
      - 5555:5555
    command:  flower -A celery_worker.celery --port=5555 --broker=redis://redis:6379/0
    depends_on:
      - web
      - redis


  web-db:  
    build:
      context: ./services/web/project/db
      dockerfile: Dockerfile
    ports:
      - 5435:5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres


  nginx:
    build:
      context: ./services/nginx
      dockerfile: Dockerfile-dev
    restart: always
    ports:
      - 80:80
      - 8888:8888
    depends_on:
      - web
      - client
      - redis


  client:
    build:
      context: ./services/client
      dockerfile: Dockerfile-dev
    volumes:
      - './services/client:/usr/src/app'
      - '/usr/src/app/node_modules'
    ports:
      - 3007:3000
    environment:
      - NODE_ENV=development
      - REACT_APP_WEB_SERVICE_URL=${REACT_APP_WEB_SERVICE_URL}
    depends_on:
      - web
      - redis

CELERY LOG

However, Celery is not able to connect, as this log shows:

celery_1   | [2019-03-29 03:09:32,111: ERROR/MainProcess] consumer: Cannot connect to redis://localhost:6379/0: Error 99 connecting to localhost:6379. Address not available..
celery_1   | Trying again in 2.00 seconds...

WEB LOG

and neither is the web service (which runs the backend), as shown by the same error in its log:

web_1      | Waiting for postgres...
web_1      | PostgreSQL started
web_1      |  * Environment: development
web_1      |  * Debug mode: on
web_1      |  * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
web_1      |  * Restarting with stat
web_1      |  * Debugger is active!
web_1      |  * Debugger PIN: 316-641-271
web_1      | 172.21.0.9 - - [29/Mar/2019 03:03:17] "GET /users HTTP/1.0" 200 -
web_1      | 172.21.0.9 - - [29/Mar/2019 03:03:26] "POST /auth/register HTTP/1.0" 500 -
web_1      | Traceback (most recent call last):
web_1      |   File "/usr/lib/python3.6/site-packages/redis/connection.py", line 492, in connect
web_1      |     sock = self._connect()
web_1      |   File "/usr/lib/python3.6/site-packages/redis/connection.py", line 550, in _connect
web_1      |     raise err
web_1      |   File "/usr/lib/python3.6/site-packages/redis/connection.py", line 538, in _connect
web_1      |     sock.connect(socket_address)
web_1      | OSError: [Errno 99] Address not available
web_1      | 
web_1      | During handling of the above exception, another exception occurred:
web_1      | 
web_1      | Traceback (most recent call last):
web_1      |   File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 431, in _reraise_as_library_errors
web_1      |     yield
web_1      |   File "/usr/lib/python3.6/site-packages/celery/app/base.py", line 744, in send_task
web_1      |     self.backend.on_task_call(P, task_id)
web_1      |   File "/usr/lib/python3.6/site-packages/celery/backends/redis.py", line 265, in on_task_call
web_1      |     self.result_consumer.consume_from(task_id)
web_1      |   File "/usr/lib/python3.6/site-packages/celery/backends/redis.py", line 125, in consume_from
web_1      |     return self.start(task_id)
web_1      |   File "/usr/lib/python3.6/site-packages/celery/backends/redis.py", line 107, in start
web_1      |     self._consume_from(initial_task_id)
web_1      |   File "/usr/lib/python3.6/site-packages/celery/backends/redis.py", line 132, in _consume_from
web_1      |     self._pubsub.subscribe(key)
web_1      |   File "/usr/lib/python3.6/site-packages/redis/client.py", line 3096, in subscribe
web_1      |     ret_val = self.execute_command('SUBSCRIBE', *iterkeys(new_channels))
web_1      |   File "/usr/lib/python3.6/site-packages/redis/client.py", line 3003, in execute_command
web_1      |     self.shard_hint
web_1      |   File "/usr/lib/python3.6/site-packages/redis/connection.py", line 994, in get_connection
web_1      |     connection.connect()
web_1      |   File "/usr/lib/python3.6/site-packages/redis/connection.py", line 497, in connect
web_1      |     raise ConnectionError(self._error_message(e))
web_1      | redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Address not available.
web_1      | 
web_1      | During handling of the above exception, another exception occurred:
web_1      | 
web_1      | Traceback (most recent call last):
web_1      |   File "/usr/lib/python3.6/site-packages/flask/app.py", line 2309, in __call__
web_1      |     return self.wsgi_app(environ, start_response)
web_1      |   File "/usr/lib/python3.6/site-packages/flask/app.py", line 2295, in wsgi_app
web_1      |     response = self.handle_exception(e)
web_1      |   File "/usr/lib/python3.6/site-packages/flask_cors/extension.py", line 161, in wrapped_function
web_1      |     return cors_after_request(app.make_response(f(*args, **kwargs)))
web_1      |   File "/usr/lib/python3.6/site-packages/flask/app.py", line 1741, in handle_exception
web_1      |     reraise(exc_type, exc_value, tb)
web_1      |   File "/usr/lib/python3.6/site-packages/flask/_compat.py", line 35, in reraise
web_1      |     raise value
web_1      |   File "/usr/lib/python3.6/site-packages/flask/app.py", line 2292, in wsgi_app
web_1      |     response = self.full_dispatch_request()
web_1      |   File "/usr/lib/python3.6/site-packages/flask/app.py", line 1815, in full_dispatch_request
web_1      |     rv = self.handle_user_exception(e)
web_1      |   File "/usr/lib/python3.6/site-packages/flask_cors/extension.py", line 161, in wrapped_function
web_1      |     return cors_after_request(app.make_response(f(*args, **kwargs)))
web_1      |   File "/usr/lib/python3.6/site-packages/flask/app.py", line 1718, in handle_user_exception

REDIS LOG

Redis, however, seems to be working:

redis_1    | 1:C 29 Mar 2019 02:33:32.722 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1    | 1:C 29 Mar 2019 02:33:32.722 # Redis version=5.0.3, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1    | 1:C 29 Mar 2019 02:33:32.722 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1    | 1:M 29 Mar 2019 02:33:32.724 * Running mode=standalone, port=6379.
redis_1    | 1:M 29 Mar 2019 02:33:32.724 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1    | 1:M 29 Mar 2019 02:33:32.724 # Server initialized
redis_1    | 1:M 29 Mar 2019 02:33:32.724 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1    | 1:M 29 Mar 2019 02:33:32.725 * DB loaded from disk: 0.000 seconds
redis_1    | 1:M 29 Mar 2019 02:33:32.725 * Ready to accept connections

config.py

class DevelopmentConfig(BaseConfig):
    """Development configuration"""
    DEBUG_TB_ENABLED = True 
    DEBUG = True
    BCRYPT_LOG_ROUNDS = 4 
    #set key
    #sqlalchemy
    SQLALCHEMY_DATABASE_URI = os.environ.get('DATABASE_URL')
    #SQLALCHEMY_DATABASE_URI= "sqlite:///models/data/database.db"
    # mail
    MAIL_SERVER='smtp.gmail.com'
    MAIL_PORT = 587
    MAIL_USE_TLS = True
    MAIL_DEBUG = True
    MAIL_USERNAME = '[email protected]'
    MAIL_PASSWORD = 'MEfAc6w74WGx'

    SEVER_NAME = 'http://127.0.0.1:8080'
    # celery broker
    REDIS_HOST = "0.0.0.0"
    REDIS_PORT = 6379
    BROKER_URL = os.environ.get('REDIS_URL', "redis://{host}:{port}/0".format(
                                                                    host=REDIS_HOST, 
                                                                    port=str(REDIS_PORT)))
    INSTALLED_APPS = ['routes']
    # celery config
    CELERYD_CONCURRENCY = 10
    CELERY_BROKER_URL = BROKER_URL
    CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
    CELERY_IMPORTS = ('project.api.routes.background',)

What am I missing here?

Clinandrium answered 29/3, 2019 at 3:33 Comment(8)
Most likely your celery image is not using the CELERY_RESULT_BACKEND env you set. I would suggest you log into that container and check whether the configuration files are set to use redis instead of localhost as the hostname. – Educate
Same issue with your web container. You are trying to reach Redis on localhost, but you need to change your configuration to point to the host redis. – Educate
You mean CELERY_RESULT_BACKEND = 'redis://redis:6379/0' in config.py, and change celery = Celery(__name__, broker='redis://localhost:6379/0') to broker='redis://redis:6379/0'? – Clinandrium
Yeah, something like that. Wherever in web/celery you have defined the hostname to connect to, change it to redis instead of localhost. – Educate
I also see that you have not named your containers. You need to name your containers if you want to reference them from your other containers. – Educate
What do you mean by 'naming'? Aren't the indented names below services each container's name? – Clinandrium
Never mind the naming part. I thought you were using docker-compose v2. Sorry about that. – Educate
The log errors disappeared after your suggestions, thank you. Care to answer? I'd gladly accept it. – Clinandrium

TL;DR change redis://localhost:6379/0 to redis://redis:6379/0

When you run docker-compose, it creates a new network on which all your containers run. The Docker engine also provides name resolution on that network, so the containers can reach each other by their service names.
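
You can quickly check that name resolution from inside one of the containers. For example (a rough check; it assumes the container has Python available, which the web image does since it runs Flask):

docker-compose exec web python -c "import socket; print(socket.gethostbyname('redis'))"

This should print the internal IP address that the service name redis resolves to on the compose network.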

In your case, your web and celery containers were trying to reach Redis over localhost. But inside a container, localhost refers to that container itself, not to the host or to another container. You need to change the configuration so the hostname points to the name of the redis container (the service name redis).
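
For example, a minimal sketch of what the corrected settings might look like. The environment variable names come from the compose file above; the celery_worker.py line is based on what was quoted in the comments (that file is not shown in the question), and the fallback defaults are illustrative:

# config.py (sketch) -- point the broker/backend at the redis service name,
# preferably via the env vars already set in docker-compose
import os

REDIS_URL = os.environ.get('CELERY_BROKER', 'redis://redis:6379/0')

class DevelopmentConfig(object):  # stands in for BaseConfig in this sketch
    CELERY_BROKER_URL = REDIS_URL
    CELERY_RESULT_BACKEND = os.environ.get('CELERY_RESULT_BACKEND', REDIS_URL)


# celery_worker.py (sketch, based on the line quoted in the comments)
import os
from celery import Celery

celery = Celery(__name__,
                broker=os.environ.get('CELERY_BROKER', 'redis://redis:6379/0'),
                backend=os.environ.get('CELERY_RESULT_BACKEND', 'redis://redis:6379/0'))

Note that the flower command in your monitor service already uses --broker=redis://redis:6379/0, which is why it does not show this error.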

If you were not using Docker but had a separate machine for each of your containers, localhost would have meant that machine itself. To connect to the Redis server, you would have passed the IP address of the machine Redis was running on. In Docker, instead of an IP address, you can simply pass the name of the container, thanks to the name resolution discussed above.

Note that you can still assign static IP addresses to each of your containers and use those addresses instead of the container names. For more details, read the networking section of the Docker documentation.
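
As a rough illustration, here is a compose sketch with a user-defined network and a fixed address (the network name and subnet are made up for the example, not taken from your file):

version: '3.6'

services:
  redis:
    image: redis:5.0.3-alpine
    networks:
      backend:
        ipv4_address: 172.28.0.10   # fixed address instead of relying on the name

networks:
  backend:
    ipam:
      config:
        - subnet: 172.28.0.0/16

With that in place you could connect with redis://172.28.0.10:6379/0, though using the service name is usually simpler.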

Educate answered 29/3, 2019 at 4:36 Comment(1)
The solution is to change redis://localhost:6379/0 to redis://redis:6379/0. This answer is correct, but the actual solution is found in the comments on the OP's post. – Sisto

If anyone still has this problem, try changing localhost to its corresponding IP address, 127.0.0.1 (it worked for me), so the URL would look like this:

redis://127.0.0.1:6379/0

instead of

redis://localhost:6379/0

So in my case I changed it to app = Celery('tasks', broker='redis://127.0.0.1:6379/0').

Skewback answered 5/10 at 9:20 Comment(0)
