How to run GitLab CI jobs on the same instance

I have set up autoscaled gitlab-runner on AWS spot instances, and it works fine.

However, I have an issue when running the jobs. Below is my .gitlab-ci.yml; it has two stages.

stages:
 - build
 - dev1:build

build:
 stage: build
 script: 
  - docker build --rm -t broker-connect-dev1-${CI_COMMIT_SHORT_SHA} -f BrokerConnect/Dockerfile .
 only:
  - dev1/release
 tags:
  - itela-spot-runner     

build-dev1:
 stage: dev1:build
 script: 
  - docker tag broker-connect-dev1-${CI_COMMIT_SHORT_SHA}:latest 19950818/broker-connect:${DEV1_TAG} 
 only:
  - dev1/release
 tags:
  - itela-spot-runner  

And here comes the problem: since I am using spot instances to run the jobs, the build stage sometimes runs on one spot instance and the dev1:build stage on another. When this happens, dev1:build fails because it cannot find the image broker-connect-dev1-${CI_COMMIT_SHORT_SHA}, which was built on a different spot instance. In GitLab, or in gitlab-runner, is there a way to control this behavior and run the two jobs build and dev1:build on the same spot instance?

Benzol answered 26/1, 2021 at 13:54

I have exactly the same problem as you. There is no real solution to this in GitLab CI, because it was designed around long-lived ("perennial") runners rather than ephemeral instances like AWS Spot. With a long-lived runner the problem does not arise, because each following stage can reuse the state left on the same machine by the previous stages.
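
For contrast, here is what such a long-lived runner looks like on the gitlab-runner side. This is a minimal sketch of a config.toml, with the name, URL, and token as placeholders; because this one machine runs every job of the pipeline with the shell executor, images built by one stage are still present in the local Docker daemon for the next:

concurrent = 1  # run one job at a time on this machine

[[runners]]
  name = "perennial-runner"      # illustrative name
  url = "https://gitlab.com/"
  token = "RUNNER_TOKEN"         # placeholder
  executor = "shell"             # jobs share the host's filesystem and Docker daemon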

In my case I found two possible workarounds.

  1. Reproduce the steps (the approach implemented at my company). This method consists of repeating, in the later job, the actions already performed by the previous stages.
     Advantage: teams are not lost in the pipeline GUI and can see each stage as a separate job.
     Disadvantage: deployment takes longer, because the last job of the last stage redoes all the actions of the previous job on whichever runner it lands on.
     Here is a code example illustrating the solution (using the !reference mechanism):
# the '.tags' template extended below is not shown in the original answer;
# a minimal assumed definition, reusing the runner tag from the question:
.tags:
  tags:
    - itela-spot-runner

.scriptCheckHelm:
  script:
    - 'helm dependency build'
    - 'helm lint .'
    
stages: 
  - lint
  - build

Check_Conf: 
  stage: 'lint' 
  script:
    - !reference [.scriptCheckHelm, script]
  rules: 
    - if: '($CI_PIPELINE_SOURCE == "push")'
      when: 'always'
      allow_failure: false
  extends: .tags

Build_Package:
  stage: 'build'
  script:
    - !reference [.scriptCheckHelm, script]
    - 'helm package .'
  rules:
    - if: '($CI_PIPELINE_SOURCE == "push") && ($CI_COMMIT_TITLE == "DEPLOYMENT")'
      when: 'on_success'
      allow_failure: false
  extends: .tags

In this case, when we make a commit whose title is "DEPLOYMENT", we get a pipeline with multiple jobs.

  2. Run a single job. This method consists of grouping all the actions into a single job.
     Advantage: no time lost during a deployment; the runner executes all the actions one after the other.
     Disadvantage: users see only one job and have to look through the job log to identify an error.
# same assumed '.tags' hidden job as in the first example
.tags:
  tags:
    - itela-spot-runner

.scriptCheckHelm:
  script:
    - 'helm dependency build'
    - 'helm lint .'
    
stages: 
  - lint
  - build

Check_Conf: 
  stage: 'lint' 
  script:
    - !reference [.scriptCheckHelm, script]
  rules: 
    - if: '($CI_PIPELINE_SOURCE == "push") && ($CI_COMMIT_TITLE != "DEPLOYMENT")'
      when: 'always'
      allow_failure: false
  extends: .tags

Build_Package:
  stage: 'build'
  script:
    - !reference [.scriptCheckHelm, script]
    - 'helm package .'
  rules:
    - if: '($CI_PIPELINE_SOURCE == "push") && ($CI_COMMIT_TITLE == "DEPLOYMENT")'
      when: 'on_success'
      allow_failure: false
  extends: .tags

In this case, when we make a commit whose title is "DEPLOYMENT", we get a pipeline with a single job.

Homophone answered 11/3, 2022 at 15:2

The best way to control which jobs run on which runners is with tags. You could tag a runner with something like builds-images, then put that same tag on any job that builds images or needs an image built by a previous step.

For example:

stages:
 - build
 - dev1:build

build:
 stage: build
 script: 
  - docker build --rm -t broker-connect-dev1-${CI_COMMIT_SHORT_SHA} -f BrokerConnect/Dockerfile .
 only:
  - dev1/release
 tags:
  - itela-spot-runner
  - builds-images   

build-dev1:
 stage: dev1:build
 script: 
  - docker tag broker-connect-dev1-${CI_COMMIT_SHORT_SHA}:latest 19950818/broker-connect:${DEV1_TAG} 
 only:
  - dev1/release
 tags:
  - itela-spot-runner
  - builds-images

Now you just need a runner (or runners) tagged with builds-images. If you're using gitlab.com, or are self-hosted with at least GitLab 13.2, you can edit a runner's details on the Runners page for a project (details here: https://docs.gitlab.com/ee/ci/runners/#view-and-manage-group-runners). Otherwise, tags can be set while registering a runner, as sketched below. For your use case, without further changes to your .gitlab-ci.yml file, I'd tag only one runner.
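
A minimal registration sketch using gitlab-runner's --tag-list flag; the URL, token, and executor here are placeholders for your own setup:

gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "$REGISTRATION_TOKEN" \
  --executor "docker" \
  --docker-image "docker:stable" \
  --tag-list "itela-spot-runner,builds-images"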

The other option is to push the built image to Docker Hub (https://docs.docker.com/docker-hub/), GitLab's container registry (https://docs.gitlab.com/ee/user/packages/container_registry/), or another registry that supports Docker images (https://aws.amazon.com/ecr/). Then any job that needs the image pulls it down from the registry and uses it.

For your example:

stages:
 - build
 - dev1:build

build:
 stage: build
 before_script:
   - docker login [registry_url] #...
 script: 
  # the image must be tagged with the registry path before it can be pushed;
  # [registry_url] is the same placeholder used with docker login above
  - docker build --rm -t [registry_url]/broker-connect-dev1-${CI_COMMIT_SHORT_SHA} -f BrokerConnect/Dockerfile .
  - docker push [registry_url]/broker-connect-dev1-${CI_COMMIT_SHORT_SHA}
 only:
  - dev1/release
 tags:
  - itela-spot-runner     

build-dev1:
 stage: dev1:build
 before_script:
   - docker login [registry_url] #...
 script: 
  - docker pull [registry_url]/broker-connect-dev1-${CI_COMMIT_SHORT_SHA}
  - docker tag [registry_url]/broker-connect-dev1-${CI_COMMIT_SHORT_SHA}:latest 19950818/broker-connect:${DEV1_TAG} 
 only:
  - dev1/release
 tags:
  - itela-spot-runner
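
If the registry is GitLab's own container registry, the predefined CI variables replace the placeholders. A sketch of the build job under that assumption (CI_REGISTRY, CI_REGISTRY_USER, CI_REGISTRY_PASSWORD, and CI_REGISTRY_IMAGE are predefined by GitLab; the image path layout is illustrative):

build:
 stage: build
 before_script:
  # registry credentials are predefined CI variables when the project registry is enabled
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
 script:
  # CI_REGISTRY_IMAGE points at this project's registry namespace
  - docker build --rm -t ${CI_REGISTRY_IMAGE}/broker-connect-dev1:${CI_COMMIT_SHORT_SHA} -f BrokerConnect/Dockerfile .
  - docker push ${CI_REGISTRY_IMAGE}/broker-connect-dev1:${CI_COMMIT_SHORT_SHA}
 only:
  - dev1/release
 tags:
  - itela-spot-runner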
Groundsheet answered 26/1, 2021 at 15:54

Comments:
I'd prefer the first example you have given because I don't want to make any significant changes to the pipeline; I have more than 10 pipelines with similar configurations, which means I'd have to change them all. One question about your first solution: do we need another runner, making two runners in total, to run the jobs? How does adding the builds-images tag to another runner help fix this problem, and how is it different from the tag itela-spot-runner that's already defined? — Benzol

You can use your existing tag, but your runner also has to have the tag. With multiple runners, they shouldn't share the same tag unless you're also using the second example. One use case for tags that I use often is a runner with access to a remote API or a server to SSH into. I don't want to open the firewall for all my runners, but maybe 3 of them. For this I'd add a tag like access-to-prod to those runners, then put that tag on my deploy jobs only, since other jobs don't need that access. The tagged runners can still run any job, but my tagged jobs can only run where the firewall is open. — Groundsheet

No, the problem is that each job runs on a different spot instance. I guess this isn't something we can tackle from the .gitlab-ci.yml file; it has to be done on the runner side... — Benzol

If you don't want to push your built images to a registry of one kind or another, there isn't a way to get the image to all the other runners. You'd have to either restrict all jobs that build or use a built image to one single runner, or push the image to a registry and pull it from any job on any runner that needs it. — Groundsheet

I guess I have to go with the second example or merge the jobs together; either way I have to change the pipeline jobs in all the yaml files. Ughhhh — Benzol
