Jenkins on Kubernetes - working directory not accessible using workspaceVolume dynamicPVC
I'm running Jenkins on an EKS cluster with the Kubernetes plugin, and I'd like to write a declarative pipeline in which I specify the pod template in each stage. A basic example would be the following, in which the first stage creates a file and the second one prints it:

pipeline {
  agent none
  stages {
    stage('First sample') {
      agent {
        kubernetes {
          label 'mvn-pod'
          yaml """
spec:
  containers:
  - name: maven
    image: maven:3.3.9-jdk-8-alpine
"""
        }
      }
      steps {
        container('maven') {
          sh "echo 'hello' > test.txt"
        }
      }
    }
    stage('Second sample') {
      agent {
        kubernetes {
          label 'bysbox-pod'
          yaml """
spec:
  containers:
  - name: busybox
    image: busybox
"""
        }
      }
      steps {
        container('busybox') {
          sh "cat test.txt"
        }
      }
    }
  }
}

This clearly doesn't work, since the two pods don't share any storage. Reading this doc I realized I can use `workspaceVolume dynamicPVC()` in the pod declaration, so that the plugin creates and manages a persistentVolumeClaim in which I can hopefully write the data I need to share between stages.
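For reference, the declarative form of that option looks roughly like this (the `requestsSize` and `accessModes` values here are illustrative, not taken from my real setup):

```groovy
agent {
  kubernetes {
    label 'mvn-pod'
    // workspaceVolume controls what gets mounted at /home/jenkins/agent;
    // dynamicPVC provisions a PVC per pod (size/mode below are example values)
    workspaceVolume dynamicPVC(requestsSize: "10Gi", accessModes: "ReadWriteOnce")
    yaml """
spec:
  containers:
  - name: maven
    image: maven:3.3.9-jdk-8-alpine
"""
  }
}
```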

Now, with `workspaceVolume dynamicPVC(...)` both the PV and the PVC are successfully created, but the pod goes into error and terminates. In particular, the provisioned pod is the following:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/psp: eks.privileged
    runUrl: job/test-libraries/job/sample-k8s/12/
  creationTimestamp: "2020-08-07T08:57:09Z"
  deletionGracePeriodSeconds: 30
  deletionTimestamp: "2020-08-07T08:58:09Z"
  labels:
    jenkins: slave
    jenkins/label: bibibu
  name: bibibu-ggb5h-bg68p
  namespace: jenkins-slaves
  resourceVersion: "29184450"
  selfLink: /api/v1/namespaces/jenkins-slaves/pods/bibibu-ggb5h-bg68p
  uid: 1c1e78a5-fcc7-4c86-84b1-8dee43cf3f98
spec:
  containers:
  - image: maven:3.3.9-jdk-8-alpine
    imagePullPolicy: IfNotPresent
    name: maven
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    tty: true
    volumeMounts:
    - mountPath: /home/jenkins/agent
      name: workspace-volume
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-5bt8c
      readOnly: true
  - env:
    - name: JENKINS_SECRET
      value: ...
    - name: JENKINS_AGENT_NAME
      value: bibibu-ggb5h-bg68p
    - name: JENKINS_NAME
      value: bibibu-ggb5h-bg68p
    - name: JENKINS_AGENT_WORKDIR
      value: /home/jenkins/agent
    - name: JENKINS_URL
      value: ...
    image: jenkins/inbound-agent:4.3-4
    imagePullPolicy: IfNotPresent
    name: jnlp
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /home/jenkins/agent
      name: workspace-volume
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-5bt8c
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: ...
  nodeSelector:
    kubernetes.io/os: linux
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: workspace-volume
    persistentVolumeClaim:
      claimName: pvc-bibibu-ggb5h-bg68p
  - name: default-token-5bt8c
    secret:
      defaultMode: 420
      secretName: default-token-5bt8c
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-08-07T08:57:16Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-08-07T08:57:16Z"
    message: 'containers with unready status: [jnlp]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-08-07T08:57:16Z"
    message: 'containers with unready status: [jnlp]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-08-07T08:57:16Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://9ed5052e9755ee4f974704fa4b74f2d89702283a4437e60a9945cf4ec7d6da68
    image: jenkins/inbound-agent:4.3-4
    imageID: docker-pullable://jenkins/inbound-agent@sha256:62f48a12d41e02e557ee9f7e4ffa82c77925b817ec791c8da5f431213abc2828
    lastState: {}
    name: jnlp
    ready: false
    restartCount: 0
    state:
      terminated:
        containerID: docker://9ed5052e9755ee4f974704fa4b74f2d89702283a4437e60a9945cf4ec7d6da68
        exitCode: 1
        finishedAt: "2020-08-07T08:57:35Z"
        reason: Error
        startedAt: "2020-08-07T08:57:35Z"
  - containerID: docker://96f747a132ee98f7bf2488bd3cde247380aea5dd6f84bdcd7e6551dbf7c08943
    image: maven:3.3.9-jdk-8-alpine
    imageID: docker-pullable://maven@sha256:3ab854089af4b40cf3f1a12c96a6c84afe07063677073451c2190cdcec30391b
    lastState: {}
    name: maven
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: "2020-08-07T08:57:35Z"
  hostIP: 10.108.171.224
  phase: Running
  podIP: 10.108.171.158
  qosClass: Burstable
  startTime: "2020-08-07T08:57:16Z"

Retrieving the logs from the jnlp container on the pod with `kubectl logs name-of-the-pod -c jnlp -n jenkins-slaves` led me to this error:

Exception in thread "main" java.io.IOException: The specified working directory should be fully accessible to the remoting executable (RWX): /home/jenkins/agent
        at org.jenkinsci.remoting.engine.WorkDirManager.verifyDirectory(WorkDirManager.java:249)
        at org.jenkinsci.remoting.engine.WorkDirManager.initializeWorkDir(WorkDirManager.java:201)
        at hudson.remoting.Engine.startEngine(Engine.java:288)
        at hudson.remoting.Engine.startEngine(Engine.java:264)
        at hudson.remoting.jnlp.Main.main(Main.java:284)
        at hudson.remoting.jnlp.Main._main(Main.java:279)
        at hudson.remoting.jnlp.Main.main(Main.java:231)

I also tried to specify the accessModes as a parameter of `dynamicPVC`, but the error is the same.
What am I doing wrong?

Thanks

Ecclesiastic answered 7/8/2020 at 9:06

The Docker image being used is configured to run as the non-root user `jenkins`. By default, PVCs are created allowing only root-user access.

This can be configured using the security context, e.g.

securityContext:
  runAsUser: 1000
  runAsGroup: 1000
  fsGroup: 1000

(The `jenkins` user in that image has UID 1000.)
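Merged into the pod yaml from the question, a sketch would look like this (assuming the workspace volume is mounted at `/home/jenkins/agent`, as in your pod spec):

```yaml
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000     # mounted volumes are made group-writable for this GID
  containers:
  - name: maven
    image: maven:3.3.9-jdk-8-alpine
```

With `fsGroup` set, Kubernetes adjusts group ownership of the mounted workspace volume, so the non-root `jenkins` user in the jnlp container passes the RWX check on `/home/jenkins/agent`.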

Ania answered 19/8/2020 at 17:17
