Docker & Celery - ERROR: Pidfile (celerybeat.pid) already exists
The application consists of:

- Django
- Redis
- Celery
- Docker
- Postgres

Before the project was moved into Docker everything worked smoothly, but once it was moved into containers something started to go wrong. At first it starts perfectly fine, but after a while I receive the following error:

celery-beat_1  | ERROR: Pidfile (celerybeat.pid) already exists.

I've been struggling with it for a while, but at this point I give up. I have no idea what is wrong with it.

Dockerfile:

FROM python:3.7

ENV PYTHONUNBUFFERED 1
RUN mkdir -p /opt/services/djangoapp/src


COPY /scripts/startup/entrypoint.sh entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

COPY Pipfile Pipfile.lock /opt/services/djangoapp/src/
WORKDIR /opt/services/djangoapp/src
RUN pip install pipenv && pipenv install --system

COPY . /opt/services/djangoapp/src

RUN find . -type f -name "celerybeat.pid" -exec rm -f {} \;

RUN sed -i "s|django.core.urlresolvers|django.urls |g" /usr/local/lib/python3.7/site-packages/vanilla/views.py
RUN cp /usr/local/lib/python3.7/site-packages/celery/backends/async.py /usr/local/lib/python3.7/site-packages/celery/backends/asynchronous.py
RUN rm /usr/local/lib/python3.7/site-packages/celery/backends/async.py
RUN sed -i "s|async|asynchronous|g" /usr/local/lib/python3.7/site-packages/celery/backends/redis.py
RUN sed -i "s|async|asynchronous|g" /usr/local/lib/python3.7/site-packages/celery/backends/rpc.py

RUN cd app && python manage.py collectstatic --no-input



EXPOSE 8000
CMD ["gunicorn", "-c", "config/gunicorn/conf.py", "--bind", ":8000", "--chdir", "app", "example.wsgi:application", "--reload"]

docker-compose.yml:

version: '3'

services:

  djangoapp:
    build: .
    volumes:
      - .:/opt/services/djangoapp/src
      - static_volume:/opt/services/djangoapp/static  # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media  # <-- bind the media volume
      - static_local_volume:/opt/services/djangoapp/src/app/static
      - media_local_volume:/opt/services/djangoapp/src/app/media
      - .:/code
    restart: always
    networks:
      - nginx_network
      - database1_network # comment when testing
      # - test_database1_network # uncomment when testing
      - redis_network
    depends_on:
      - database1 # comment when testing
      # - test_database1 # uncomment when testing
      - migration
      - redis

  # base redis server
  redis:
    image: "redis:alpine"
    restart: always
    ports: 
      - "6379:6379"
    networks:
      - redis_network
    volumes:
      - redis_data:/data

  # celery worker
  celery:
    build: .
    command: >
      bash -c "cd app && celery -A example worker --without-gossip --without-mingle --without-heartbeat -Ofair"
    volumes:
      - .:/opt/services/djangoapp/src
      - static_volume:/opt/services/djangoapp/static  # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media  # <-- bind the media volume    
      - static_local_volume:/opt/services/djangoapp/src/app/static
      - media_local_volume:/opt/services/djangoapp/src/app/media
    networks:
      - redis_network
      - database1_network # comment when testing
      # - test_database1_network # uncomment when testing
    restart: always
    depends_on:
      - database1 # comment when testing
      # - test_database1 # uncomment when testing
      - redis
    links:
      - redis

  celery-beat:
    build: .
    command: >
      bash -c "cd app && celery -A example beat"
    volumes:
      - .:/opt/services/djangoapp/src
      - static_volume:/opt/services/djangoapp/static  # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media  # <-- bind the media volume
      - static_local_volume:/opt/services/djangoapp/src/app/static
      - media_local_volume:/opt/services/djangoapp/src/app/media
    networks:
      - redis_network
      - database1_network # comment when testing
      # - test_database1_network # uncomment when testing
    restart: always
    depends_on:
      - database1 # comment when testing
      # - test_database1 # uncomment when testing
      - redis
    links:
      - redis

  # migrations needed for proper db functioning
  migration:
    build: .
    command: >
      bash -c "cd app && python3 manage.py makemigrations && python3 manage.py migrate"
    depends_on:
      - database1 # comment when testing
      # - test_database1 # uncomment when testing
    networks:
     - database1_network # comment when testing
     # - test_database1_network # uncomment when testing

  # reverse proxy container (nginx)
  nginx:
    image: nginx:1.13
    ports:
      - 80:80
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static_volume:/opt/services/djangoapp/static  # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media  # <-- bind the media volume
      - static_local_volume:/opt/services/djangoapp/src/app/static
      - media_local_volume:/opt/services/djangoapp/src/app/media 
    restart: always
    depends_on:
      - djangoapp
    networks:
      - nginx_network

  database1: # comment when testing
    image: postgres:10 # comment when testing
    env_file: # comment when testing
      - config/db/database1_env # comment when testing
    networks: # comment when testing
      - database1_network # comment when testing
    volumes: # comment when testing
      - database1_volume:/var/lib/postgresql/data # comment when testing

  # test_database1: # uncomment when testing
    # image: postgres:10 # uncomment when testing
    # env_file: # uncomment when testing
      # - config/db/test_database1_env # uncomment when testing
    # networks: # uncomment when testing
      # - test_database1_network # uncomment when testing
    # volumes: # uncomment when testing
      # - test_database1_volume:/var/lib/postgresql/data # uncomment when testing


networks:
  nginx_network:
    driver: bridge
  database1_network: # comment when testing
    driver: bridge # comment when testing
  # test_database1_network: # uncomment when testing
    # driver: bridge # uncomment when testing
  redis_network:
    driver: bridge
volumes:
  database1_volume: # comment when testing
  # test_database1_volume: # uncomment when testing
  static_volume:  # <-- declare the static volume
  media_volume:  # <-- declare the media volume
  static_local_volume:
  media_local_volume:
  redis_data:

Please ignore "test_database1_volume"; it exists only for testing purposes.

Bryannabryansk answered 28/11, 2018 at 14:41 Comment(0)
Another solution (taken from https://mcmap.net/q/446350/-disable-pidfile-for-celerybeat) is to use --pidfile= (with no path) to not create a pidfile at all. Same effect as Siyu's answer above.
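Applied to the compose file in the question, only the beat command needs to change (a sketch; service and project names are taken from the question):

```yaml
# docker-compose.yml (fragment): pass an empty --pidfile so beat
# never writes celerybeat.pid in the first place
celery-beat:
  build: .
  command: >
    bash -c "cd app && celery -A example beat --pidfile="
```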

Negligence answered 30/5, 2019 at 18:28 Comment(0)
I believe there is a stale pidfile in your project directory (./); when you run the container, the directory is bind-mounted in, so the pidfile comes along with it. That is why `RUN find . -type f -name "celerybeat.pid" -exec rm -f {} \;` in the Dockerfile has no effect: the mount hides the image's filesystem.

You can use celery --pidfile=/opt/celeryd.pid to specify a path outside the mounted directory, so the pidfile is not mirrored on the host.
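For context, the error comes from a standard daemon pidfile guard: if the pidfile exists and the pid recorded in it still looks alive, the process refuses to start. A simplified sketch of that kind of check (not Celery's actual implementation; `pidfile_is_stale` is a hypothetical helper):

```python
import os


def pidfile_is_stale(path):
    """Return True if the pidfile exists but its recorded pid is no longer running."""
    try:
        with open(path) as f:
            pid = int(f.read().strip())
    except (FileNotFoundError, ValueError):
        return False  # no pidfile (or unreadable contents): nothing stale
    try:
        os.kill(pid, 0)  # signal 0 performs an existence check only
    except ProcessLookupError:
        return True   # pid is not running: the pidfile is stale
    except PermissionError:
        return False  # pid is running, just owned by another user
    return False      # pid is running
```

A mounted-in pidfile from a previous (host or container) run makes this check fire even though no beat process is actually running inside the container.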

Salvatoresalvay answered 28/11, 2018 at 16:33 Comment(1)
This didn't work for me because the file persisted between container restarts. Instead I mounted a tmpfs directory (which is removed on container stop) and used --pidfile to point to a file in that location. – Clearing
Although not the most professional solution, I found that adding:

celerybeat.pid

to my .dockerignore file fixed the issue.

Step answered 24/1, 2020 at 15:59 Comment(0)
Had the same issue as part of an Airflow setup (apache-airflow==2.3.4, celery==5.2.7), on Docker Compose:

ERROR: Pidfile (/airflow/airflow-worker.pid) already exists.
Seems we're already running? (pid: 1)

I tried to pass --pidfile (actually --pid under the Airflow umbrella) like so:

airflow celery worker --pid=

However, this didn't work; a .pid file was still being created. Maybe this is due to the additional Airflow layer.

Eventually, I figured out that the original issue had to do with the Docker Compose restart policy (in my case, restart: always). Once the worker had failed once, subsequent restarts would find the already existing .pid file, because containers keep their filesystem state across restarts.

A more permanent solution was to use a tmpfs, and point the .pid file there:

# docker-compose.yml

worker:
    image: {...}
    tmpfs:
      - /airflow-worker
    entrypoint: airflow celery worker --pid=/airflow-worker/airflow-worker.pid
    ...
...
Homeostasis answered 10/10, 2022 at 15:35 Comment(0)
I had this error with Airflow when I ran it with docker-compose.

If you don't care about the current state of your Airflow deployment, you can simply delete the Airflow containers:

docker rm containerId

Then start Airflow again:

docker-compose up

Mikimikihisa answered 13/9, 2021 at 11:14 Comment(0)
Another way: create a Django management command, celery_kill.py:

import shlex
import subprocess

from django.core.management.base import BaseCommand


class Command(BaseCommand):
    """Force-kill any running Celery processes."""

    def handle(self, *args, **options):
        kill_worker_cmd = 'pkill -9 celery'
        subprocess.call(shlex.split(kill_worker_cmd))

docker-compose.yml:

  celery:
    build: ./src
    restart: always
    command: celery -A project worker -l info
    volumes:
      - ./src:/var/lib/celery/data/
    depends_on:
      - db
      - redis
      - app

  celery-beat:
    build: ./src
    restart: always
    command: celery -A project beat -l info --pidfile=/tmp/celeryd.pid
    volumes:
      - ./src:/var/lib/beat/data/
    depends_on:
      - db
      - redis
      - app

and Makefile:

run:
    docker-compose up -d --force-recreate
    docker-compose exec app python manage.py celery_kill
    docker-compose restart
    docker-compose exec app python manage.py migrate
Barboza answered 22/3, 2019 at 21:24 Comment(2)
I fixed the problem by adding --pidfile=/tmp/celeryd.pid to the end of celery -A proj beat -l info in my docker-compose, as in your example. Thank you! – Projector
@Projector Using the /tmp directory is not a good idea: by default /tmp is cleaned regularly, which can force your Celery process to restart or exit abnormally unless you change that policy. – Colter
The reason for this error is that the Docker container stopped without Celery's normal shutdown process, leaving a stale pidfile behind. The solution is simple: clean up before starting.

Solution 1. Write the Celery start command (e.g. in docker-entrypoint.sh) as follows:

# stop any previous instance, remove the stale pidfile, then start
celery multi stopwait w1 -A myproject \
  && rm -f /var/run/celery/w1.pid \
  && celery multi start w1 -A myproject -l info --pidfile=/var/run/celery/w1.pid
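A variant of the same idea for beat (a sketch; paths are illustrative, and the real celery line is left as a comment since it needs a broker): unconditionally remove any stale pidfile in the entrypoint before starting.

```shell
#!/bin/sh
# Simulate a stale pidfile left behind by an unclean container stop,
# then clear it before starting. Paths are illustrative.
PIDFILE=/tmp/demo-celerybeat.pid
echo 12345 > "$PIDFILE"   # pretend a previous run left this behind
rm -f "$PIDFILE"          # always clear before starting
# exec celery -A example beat --pidfile="$PIDFILE"
[ ! -f "$PIDFILE" ] && echo "pidfile cleared"
```

Because the cleanup runs on every container start, it also covers the crash case where no orderly shutdown ever happened.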

Solution 2 (not recommended):

Always run docker-compose down before docker-compose up.

Disavow answered 28/7, 2020 at 15:46 Comment(2)
what if your machine crashes and docker-compose down is no longer an option? – Venessavenetia
@PawełPolewicz In that case, fix the problem at docker-compose up time (i.e. Solution 1). – Disavow
