Airflow: DockerOperator fails with Permission Denied error

I'm trying to run a Docker container via Airflow but I'm getting Permission Denied errors. I have seen a few related posts, and some people seem to have solved it via sudo chmod 777 /var/run/docker.sock (a questionable solution at best), but it still didn't work for me, even after restarting Docker. If anyone has managed to solve this problem, please let me know!

Here is my DAG:

from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.docker_operator import DockerOperator

args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'start_date': datetime(2020, 6, 21, 11, 45, 0),
    'retries': 1,
    'retry_delay': timedelta(minutes=1),
}

dag = DAG(
    "docker",
    default_args=args,
    max_active_runs=1,
    schedule_interval='* * * * *',
    catchup=False
)

hello_operator = DockerOperator(
    task_id="run_docker",
    image="alpine:latest",
    command="echo HI",  # alpine has no /bin/bash, so run the command directly
    auto_remove=True,
    dag=dag
)

And here is the error that I'm getting:

[2020-06-21 14:01:36,620] {taskinstance.py:1145} ERROR - ('Connection aborted.', PermissionError(13, 'Permission denied'))
Traceback (most recent call last):
  File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 672, in urlopen
    chunked=chunked,
  File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 387, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/local/lib/python3.6/http/client.py", line 1262, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/usr/local/lib/python3.6/http/client.py", line 1308, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/usr/local/lib/python3.6/http/client.py", line 1257, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/local/lib/python3.6/http/client.py", line 1036, in _send_output
    self.send(msg)
  File "/usr/local/lib/python3.6/http/client.py", line 974, in send
    self.connect()
  File "/home/airflow/.local/lib/python3.6/site-packages/docker/transport/unixconn.py", line 43, in connect
    sock.connect(self.unix_socket)
PermissionError: [Errno 13] Permission denied

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/airflow/.local/lib/python3.6/site-packages/requests/adapters.py", line 449, in send
    timeout=timeout
  File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 720, in urlopen
    method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
  File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/util/retry.py", line 400, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/packages/six.py", line 734, in reraise
    raise value.with_traceback(tb)
  File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 672, in urlopen
    chunked=chunked,
  File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 387, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/local/lib/python3.6/http/client.py", line 1262, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/usr/local/lib/python3.6/http/client.py", line 1308, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/usr/local/lib/python3.6/http/client.py", line 1257, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/local/lib/python3.6/http/client.py", line 1036, in _send_output
    self.send(msg)
  File "/usr/local/lib/python3.6/http/client.py", line 974, in send
    self.connect()
  File "/home/airflow/.local/lib/python3.6/site-packages/docker/transport/unixconn.py", line 43, in connect
    sock.connect(self.unix_socket)
urllib3.exceptions.ProtocolError: ('Connection aborted.', PermissionError(13, 'Permission denied'))

Here is my setup:

Dockerfile:

FROM apache/airflow
RUN pip install --upgrade --user pip && \
    pip install --user psycopg2-binary && \
    pip install --user docker
COPY airflow/airflow.cfg /opt/airflow/

docker-compose.yaml:

version: "3"

services:

  postgres:
    image: "postgres:9.6"
    container_name: "postgres"
    environment:
      - POSTGRES_USER=airflow
      - POSTGRES_PASSWORD=airflow
      - POSTGRES_DB=airflow
    ports:
    - "5432:5432"
    volumes:
    - ./data/postgres:/var/lib/postgresql/data

  initdb:
    image: learning/airflow
    entrypoint: airflow initdb
    depends_on:
      - postgres

  webserver:
    image: learning/airflow
    restart: always
    entrypoint: airflow webserver
    environment:
      - EXECUTOR=Local
    healthcheck:
      test: ["CMD-SHELL", "[ -f /opt/airflow/airflow-webserver.pid ]"]
      interval: 1m
      timeout: 5m
      retries: 3
    ports:
      - "8080:8080"
    depends_on:
      - postgres
    volumes:
    - ./airflow/dags:/opt/airflow/dags
    - ./airflow/plugins:/opt/airflow/plugins
    - ./data/logs:/opt/airflow/logs
    - /var/run/docker.sock:/var/run/docker.sock

  scheduler:
    image: learning/airflow
    restart: always
    entrypoint: airflow scheduler
    healthcheck:
      test: ["CMD-SHELL", "[ -f /opt/airflow/airflow-scheduler.pid ]"]
      interval: 1m
      timeout: 5m
      retries: 3
    depends_on:
      - postgres
    volumes:
      - ./airflow/dags:/opt/airflow/dags
      - ./airflow/plugins:/opt/airflow/plugins
      - ./data/logs:/opt/airflow/logs
      - /var/run/docker.sock:/var/run/docker.sock
Jost answered 21/6, 2020 at 14:6 Comment(3)
Did you ever find a solution to this?Downcomer
Thanks for your question, I have added the answer.Boll
@Downcomer If you still have issue and remembered it, I have added a complete solution for this purpose.Boll

Even knowing that this question is old, my answer can still help other people that are having this problem.

I've found an elegant (and functional) solution at the following link:

https://onedevblog.com/how-to-fix-a-permission-denied-when-using-dockeroperator-in-airflow/

Quoting the article:

There is a more elegant approach which consists of “wrapping” the file around a service (accessible via TCP).

--

From the above link, the solution is to:

  • add an additional docker-proxy service that exposes the local Docker socket (/var/run/docker.sock) over TCP at tcp://docker-proxy:2375, using socat.
version: '3.7'
services:
  docker-proxy:
    image: bobrik/socat
    command: "TCP4-LISTEN:2375,fork,reuseaddr UNIX-CONNECT:/var/run/docker.sock"
    ports:
      - "2376:2375"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  • replace kwarg docker_url='unix://var/run/docker.sock' with docker_url='tcp://docker-proxy:2375' for all DockerOperators.
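As a quick sanity check (a sketch, not part of the original answer), you can verify from inside the Airflow container that the proxy's TCP endpoint is reachable before pointing your DockerOperators at it. The host name docker-proxy and port 2375 come from the compose snippet above; the function name proxy_reachable is just illustrative:

```python
import socket

def proxy_reachable(host="docker-proxy", port=2375, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers DNS failures, connection refused, and timeouts alike.
        return False
```

If this returns False, the DockerOperator tasks will fail with a connection error rather than a permission error, which helps distinguish networking problems from socket-permission problems.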
Medievalist answered 24/11, 2021 at 17:47 Comment(2)
Thank you. Running chmod 666 /var/run/docker.sock every time the system reboots got a bit tiring. I think Jorge's solution is the better one if you are running Airflow via docker-compose. (Adding an airflow user to the host machine and then adding it to the docker group did not seem to resolve the permission denied issue for my setup.)Adcock
A much better solution than setting 777 on /var/run/docker.sockKarren

If the volume is already mapped into the container, run chmod on the HOST:

chmod 777 /var/run/docker.sock

That solved it for me.

Nectareous answered 11/12, 2021 at 3:19 Comment(0)

I ran into this issue on Windows (dev environment), using the puckel image. Note that the file /var/run/docker.sock does not exist in this image; I created it and changed its owner to the airflow user that already exists in the puckel image:

RUN touch /var/run/docker.sock
RUN chown -R airflow /var/run/docker.sock
Immunology answered 21/1, 2021 at 14:47 Comment(0)

I remember having issues similar to this. What I did, on top of what you have already done, was to dynamically add a docker group inside the container with the GID of docker.sock, in a startup script like this:

#!/usr/bin/env bash
ARGS="$*"

# Check if the docker socket is mounted
if [[ -S /var/run/docker.sock ]]; then
    GROUP=$(stat -c %g /var/run/docker.sock)
    groupadd -g "$GROUP" docker
    usermod -aG docker airflow
else
    echo "Docker unix socket not found. DockerOperators will not run."
fi

su airflow -c "/usr/bin/dumb-init -- /entrypoint $ARGS"

That way you don't touch the socket's permissions and the airflow user is still able to interact with it.

Some other considerations:

  • I had to redeclare the default user in the Dockerfile so that the container starts as root
  • Airflow itself still runs as the airflow user (the script switches to it with su)
Nobility answered 8/9, 2021 at 13:16 Comment(0)

First things first, we need to mount /var/run/docker.sock as a volume, because it is the file through which the Docker client and Docker server communicate; in this case, it is what lets the DockerOperator() running inside the Airflow container launch a separate Docker container. The UNIX domain socket requires either root permission or Docker group membership. Since the Airflow user is not root, we need to add it to the Docker group so that it gets access to docker.sock. For that, do the following:

1.1. Add a Docker group and your user to it in the terminal on your host machine (following the official Docker documentation)

   sudo groupadd docker
   sudo usermod -aG docker <your_user>
   newgrp docker 

1.2. Log out and log back in on your host machine

2.1. Get the Docker group id in the terminal on your host machine

   cut -d: -f3 < <(getent group docker)

2.2. Add the Airflow user to this docker group (use the GID from the line above) in the Airflow's docker-compose.yaml

   group_add:
     - <docker_gid>

3.1. Get your user id in the terminal on your host machine

   id -u <your_user>

3.2. Set your AIRFLOW_UID to match your user id (use the UID from the line above) on the host machine and AIRFLOW_GID to 0 in the Airflow's docker-compose.yaml

   user: "${AIRFLOW_UID:-50000}:0"

4.1. If you're creating your own Dockerfile for the separate container, add your user there

   ARG UID=<your_uid>
   ENV USER=<your_user>
   RUN useradd -u $UID -ms /bin/bash $USER
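Putting steps 2.2 and 3.2 together, the relevant part of Airflow's docker-compose.yaml might look like the fragment below. This is a sketch: the service name webserver is hypothetical, and the GID 998 stands in for whatever value steps 2.1 returned on your host.

```yaml
services:
  webserver:                          # hypothetical service name
    image: apache/airflow
    user: "${AIRFLOW_UID:-50000}:0"   # your UID (step 3.2), root group
    group_add:
      - "998"                         # the docker group's GID from step 2.1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```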
Emmaemmalee answered 9/8, 2022 at 13:28 Comment(0)

I had a permission denied error because docker.sock in the container was accessed by the user "airflow" in group root, while the file was owned by user "root" and the host's docker group ID (rather than the group name).

So I added the container user to that docker group ID in the docker-compose file:

(screenshot not reproduced: a group_add entry in docker-compose.yaml listing the docker group ID)

You can get your docker group id with cat /etc/group.

You also need to mount /var/run/docker.sock in the docker-compose file for it to work:

- /var/run/docker.sock:/var/run/docker.sock
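The same group-ID lookup can be done from Python with the standard-library grp module, which reads the same database as /etc/group. A minimal sketch (the function name gid_of is illustrative):

```python
import grp

def gid_of(name):
    """Look up a group by name; return its numeric GID, or None if it doesn't exist."""
    try:
        return grp.getgrnam(name).gr_gid
    except KeyError:
        return None

print(gid_of("docker"))  # the docker group's GID, or None if the group is absent
```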
Gans answered 26/4, 2023 at 8:7 Comment(0)

You can try to run your docker file with:

docker run -v /var/run/docker.sock:/var/run/docker.sock your_image_name
Photometer answered 16/10, 2020 at 18:59 Comment(0)

You don't need to create a new Docker image with Docker installed in it. You only need to change the permission of the /var/run/docker.sock file and then mount the Docker binary and the socket into your container.

For this purpose, follow these steps in order:

  1. Change the permission of /var/run/docker.sock:

    sudo chmod 777 /var/run/docker.sock
    
  2. Mount the docker.sock and Docker binary to the container:

    • if you are running using Docker command line (docker run), try the following:
      docker run -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker <image_name:image_tag>
      
    • if you are using docker-compose file:
      volumes:
        - <other-volumes>:<other-volumes>
        - /var/run/docker.sock:/var/run/docker.sock
        - /usr/bin/docker:/usr/bin/docker
      
Boll answered 8/6, 2023 at 21:2 Comment(0)

I found the same solution as Jorge Nachtigall. Benjamin CabalonaJr posted it on Medium, with a slightly different implementation:

Add this to your docker-compose.yaml:

docker-socket-proxy:
  image: tecnativa/docker-socket-proxy:0.1.1
  environment:
    CONTAINERS: 1
    IMAGES: 1
    AUTH: 1
    POST: 1
  privileged: true
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
  restart: always

And don't forget to add this to _PIP_ADDITIONAL_REQUIREMENTS:

apache-airflow-providers-docker>=2.2.0

If you use apache-airflow-providers-docker<2.2.0, you won't be able to use the @task.docker decorator.

Here is an example DAG:

from datetime import timedelta
from airflow import DAG
from airflow.operators.docker_operator import DockerOperator
from airflow.utils.dates import days_ago

default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'email': ['[email protected]'],
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,
    'retry_delay': timedelta(minutes=5),
}

dag = DAG(
    'dim_promotion',
    default_args=default_args,
    schedule_interval=None,
    start_date=days_ago(2),
)

dop = DockerOperator(
    api_version='1.37',
    docker_url='tcp://docker-socket-proxy:2375',
    command='echo Hello World',
    image='ubuntu',
    network_mode='bridge',
    task_id='docker_op_tester',
    dag=dag,
)
Td answered 10/7 at 6:44 Comment(0)

Add another leading / to /var/run/docker.sock (in the source part, before the colon) in volumes, as below:

volumes:
    - //var/run/docker.sock:/var/run/docker.sock
Blueberry answered 22/6, 2020 at 9:18 Comment(3)
Thanks for the suggestion! Unfortunately, it didn't seem to help. I also tried adding an extra / after the colon, but that didn't help either.Jost
This "//" is for windows view more in medium.com/analytics-vidhya/…Nations
There's no description, why this would probably help. Consider adding an explanation if you're providing an answer.Expellant

In my case, prefixing the command with sudo helped. I ran

sudo docker-compose up -d --build dev

instead of

docker-compose up -d --build dev

and it worked. The issue was a lack of permissions.

Trenttrento answered 9/11, 2020 at 12:17 Comment(0)

Try

sudo groupadd docker

sudo usermod -aG docker $USER
Marika answered 22/9, 2021 at 7:54 Comment(0)
