GitLab CI runner can't connect to unix:///var/run/docker.sock in kubernetes
GitLab is running in a Kubernetes cluster. The runner can't build a Docker image with the build artifacts. I've already tried several approaches to fix this, but no luck. Here are some config snippets:

.gitlab-ci.yml

image: docker:latest
services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay

stages:
  - build
  - package
  - deploy

maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B --settings settings.xml"
  artifacts:
    paths:
      - target/*.jar

docker-build:
  stage: package
  script:
  - docker build -t gitlab.my.com/group/app .
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN gitlab.my.com/group/app
  - docker push gitlab.my.com/group/app

config.toml

concurrent = 1
check_interval = 0

[[runners]]
  name = "app"
  url = "https://gitlab.my.com/ci"
  token = "xxxxxxxx"
  executor = "kubernetes"
  [runners.kubernetes]
    privileged = true
    disable_cache = true

Package stage log:

running with gitlab-ci-multi-runner 1.11.1 (a67a225)
  on app runner (6265c5)
Using Kubernetes namespace: default
Using Kubernetes executor with image docker:latest ...
Waiting for pod default/runner-6265c5-project-4-concurrent-0h9lg9 to be running, status is Pending
Waiting for pod default/runner-6265c5-project-4-concurrent-0h9lg9 to be running, status is Pending
Running on runner-6265c5-project-4-concurrent-0h9lg9 via gitlab-runner-3748496643-k31tf...
Cloning repository...
Cloning into '/group/app'...
Checking out 10d5a680 as master...
Skipping Git submodules setup
Downloading artifacts for maven-build (61)...
Downloading artifacts from coordinator... ok        id=61 responseStatus=200 OK token=ciihgfd3W
$ docker build -t gitlab.my.com/group/app .
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
ERROR: Job failed: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1

What am I doing wrong?

Windlass answered 17/3, 2017 at 21:2 Comment(0)
You don't need to use this:

DOCKER_DRIVER: overlay

because it seems overlay isn't supported, so the svc-0 container is unable to start with it:

$ kubectl logs -f `kubectl get pod |awk '/^runner/{print $1}'` -c svc-0
time="2017-03-20T11:19:01.954769661Z" level=warning msg="[!] DON'T BIND ON ANY IP ADDRESS WITHOUT setting -tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING [!]"
time="2017-03-20T11:19:01.955720778Z" level=info msg="libcontainerd: new containerd process, pid: 20"
time="2017-03-20T11:19:02.958659668Z" level=error msg="'overlay' not found as a supported filesystem on this host. Please ensure kernel is new enough and has overlay support loaded."

Also, add export DOCKER_HOST="tcp://localhost:2375" to the docker-build job:

docker-build:
  stage: package
  script:
  - export DOCKER_HOST="tcp://localhost:2375"
  - docker build -t gitlab.my.com/group/app .
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN gitlab.my.com/group/app
  - docker push gitlab.my.com/group/app
Windlass answered 20/3, 2017 at 12:7 Comment(0)
When using Kubernetes, you have to adjust your build image so it can connect to the Docker engine.

Add to your build image:

DOCKER_HOST=tcp://localhost:2375

Quote from the docs:

Running the docker:dind also known as the docker-in-docker image is also possible but sadly needs the containers to be run in privileged mode. If you're willing to take that risk other problems will arise that might not seem as straight forward at first glance. Because the docker daemon is started as a service usually in your .gitlab-ci.yaml it will be run as a separate container in your pod. Basically containers in pods only share volumes assigned to them and an IP address by which they can reach each other using localhost. /var/run/docker.sock is not shared by the docker:dind container and the docker binary tries to use it by default. To overwrite this and make the client use tcp to contact the docker daemon in the other container be sure to include DOCKER_HOST=tcp://localhost:2375 in your environment variables of the build container.

Gitlab-CI on Kubernetes
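A minimal sketch of that change against the question's .gitlab-ci.yml, assuming the docker:dind service is kept and the daemon listens on the default unencrypted port 2375:

```yaml
# Top-level variables apply to every job, so each build container
# gets DOCKER_HOST pointing at the dind service container in the same pod.
variables:
  DOCKER_HOST: tcp://localhost:2375
```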

Luger answered 18/3, 2017 at 5:37 Comment(6)
thanks for a suggestion, mate, but I've already tried this :( – Windlass
Cannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running? – Windlass
Did you guys solve this one? I'm also getting this error (including the one with TCP)... My gitlab-runner is in a docker container. – Lindeman
I had the same issue. Fixed by configuring volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"] as here – Elastic
If the provided solution doesn't fix the issue, one can try export DOCKER_HOST=$DOCKER_PORT, which worked perfectly for me – Meader
For me DOCKER_HOST=tcp://localhost:2375 still didn't work, but DOCKER_HOST=tcp://127.0.0.1:2375 did. – Lymphocyte
Based on @Yarik's comment, what worked for me was

- export DOCKER_HOST=$DOCKER_PORT

No other answers worked.

Redintegrate answered 29/9, 2019 at 15:21 Comment(0)
I had the same problem, and I could not get the above workarounds to work for me (I did not try the volumes trick mentioned by @fkpwolf).

GitLab now offers an alternative solution using Kaniko, which did work for me.

The .gitlab-ci.yml could then be something like this:

stages:
  - build
  - package
  - deploy

maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B --settings settings.xml"
  artifacts:
    paths:
      - target/*.jar

docker-kaniko-build:
  stage: package
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - echo "{\"auths\":{\"gitlab.my.com\":{\"username\":\"gitlab-ci-token\",\"password\":\"$CI_BUILD_TOKEN\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination gitlab.my.com/group/app

The GitLab docs mention that:

kaniko solves two problems with using the docker-in-docker build method:

  • Docker-in-docker requires privileged mode in order to function, which is a significant security concern.
  • Docker-in-docker generally incurs a performance penalty and can be quite slow.

See: https://docs.gitlab.com/ee/ci/docker/using_kaniko.html
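As a side note, the backslash-escaped echo line is easy to get wrong; a printf with placeholders builds the same auth file more readably. This is a hedged sketch: it writes to a local .docker/ directory for illustration, whereas the real job would write to /kaniko/.docker/config.json, and the dummy token stands in for $CI_BUILD_TOKEN:

```shell
# Sketch: build the Kaniko registry auth file without hand-escaped JSON.
# CI_BUILD_TOKEN is injected by GitLab CI in a real job; dummy value here.
CI_BUILD_TOKEN="${CI_BUILD_TOKEN:-dummy-token}"
mkdir -p .docker
printf '{"auths":{"gitlab.my.com":{"username":"%s","password":"%s"}}}\n' \
  "gitlab-ci-token" "$CI_BUILD_TOKEN" > .docker/config.json
```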

Meisel answered 25/10, 2019 at 13:57 Comment(0)
DinD works on my K8s cluster.

  1. Stop the GitLab runner on your K8s cluster:
helm uninstall gitlab-runner --namespace gitlab
  2. Update the 'gitlab-runner-values.yaml' file by adding the following:
    [runners.kubernetes]
      privileged = true
      [[runners.kubernetes.volumes.host_path]]
        name = "docker"
        mount_path = "/var/run/docker.sock"
        read_only = true
        host_path = "/var/run/docker.sock"

gitlab-runner-values.yaml

runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        namespace = "gitlab"
        poll_timeout = 7200
        privileged = true
      [[runners.kubernetes.volumes.host_path]]
        name = "docker"
        mount_path = "/var/run/docker.sock"
        read_only = true
        host_path = "/var/run/docker.sock"
      [runners.kubernetes.node_selector]
        "kubernetes.io/arch" = "amd64"
        "kubernetes.io/os" = "linux"
  3. Run the GitLab runner again on your K8s cluster:
helm install --namespace gitlab gitlab-runner -f gitlab-runner-values.yaml gitlab/gitlab-runner
  4. Adapt your .gitlab-ci.yml file like the example below:

Example:

variables:
  IMAGE_NAME: <DockerUsername>/<name>
  IMAGE_TAG: latest

stages:
  - build
  - deploy

build_image: 
  stage: build
  image: docker:20.10.16
  services: 
    - name: docker:20.10.16-dind
  script:
    - docker version
    - docker build -t $IMAGE_NAME:$IMAGE_TAG .
    - docker images
  tags:
    - k8s-gitlab-linux-runner

deploy:
  stage: deploy
  image: docker:20.10.16
  before_script:
    - docker login -u $REGISTRY_USER -p $REGISTRY_PASS # dockerhub username, pass in Project Settings > CI > Variables
    - docker images
  script:
    - docker push $IMAGE_NAME:$IMAGE_TAG
  tags:
    - k8s-gitlab-linux-runner
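With the socket-mount approach above, a quick pre-flight check in the job can confirm the host's docker.sock is actually visible inside the build container before any docker command runs. This is a hypothetical helper, not part of the answer's setup; DOCKER_SOCK is an illustrative override and the result is written to a scratch file:

```shell
# Sketch: verify the host_path volume from config.toml is mounted in the job.
SOCK="${DOCKER_SOCK:-/var/run/docker.sock}"   # illustrative override variable
if [ -S "$SOCK" ]; then
  echo "docker socket present at $SOCK" > sock-check.txt
else
  echo "no docker socket at $SOCK - check the host_path volume in config.toml" > sock-check.txt
fi
cat sock-check.txt
```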

Lordship answered 19/7, 2023 at 11:34 Comment(1)
This poses a rather big security risk, given you run the containers in privileged mode. More than that, it does not work on new K8s nodes, as there is no Docker anymore, only containerd. – Shire

© 2022 - 2024 — McMap. All rights reserved.