GitLab Runner - How to allow only one Pipeline run at a time
A

5

23

I am new to GitLab and facing a problem: if I trigger two pipelines at the same time on the same gitlab-runner, they both run in parallel and the builds fail. What I want is to limit the runner to one pipeline at a time, with the others queued.

I have set concurrent = 1 in config.toml and restarted the runner, but it didn't help. My ultimate goal is to prevent more than one pipeline from running on the runner at a time.
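
For reference, a minimal config.toml with that setting looks roughly like this (the runner name, URL, token and executor below are placeholders):

concurrent = 1

[[runners]]
  name = "my-runner"
  url = "https://gitlab.example.com/"
  token = "<runner_token>"
  executor = "shell"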

Thanks.

Amaras answered 1/4, 2020 at 7:18 Comment(1)
Does this answer your question? How to force GitLab to run a complete pipeline before starting a new one?Seizure
A
-7

Set the limit keyword in the runners section of your configuration to 1.

limit:

Limit how many jobs can be handled concurrently by this token. 0 (default) simply means don’t limit

and restart your runner.
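
For example, in config.toml (only the limit line is the relevant change; the rest of the [[runners]] entry is a placeholder for your existing registration):

[[runners]]
  name = "my-runner"
  url = "https://gitlab.example.com/"
  token = "<runner_token>"
  executor = "shell"
  limit = 1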

Anemic answered 1/4, 2020 at 14:0 Comment(2)
As stated by the doc, this limits concurrency of jobs, not pipelines. Runner can still start the first job of a second pipeline before doing all jobs of the first pipeline. Dig a little more and I think you'll always get to this issue, that's been postponed for ages: gitlab.com/gitlab-org/gitlab/-/issues/15536Seizure
The actual, most recent and relevant issue on gitlab.com: gitlab.com/gitlab-org/gitlab/-/issues/202186Seizure
S
15

Set resource_group in the job, and use the same group name on all other jobs that should be blocked from running concurrently.

Example from the documentation:

deploy-to-production:
  script: deploy
  resource_group: production
Shadchan answered 12/1, 2022 at 15:31 Comment(4)
Thanks. This solved it for me. There is also a way to control concurrency when your limited pipeline is a child pipeline: docs.gitlab.com/ee/ci/resource_groups/…Felic
From my reading, this will not work in some cases. If you have a pipeline with multiple jobs and you set all of them to the same resource_group, GitLab isn't making sure that one pipeline finishes before the other starts. Instead, it's making sure that only one job runs at a time. When one job finishes, it can pick any job from either pipeline to run next.Saturable
@ToddWalton The dependencies of jobs can be configured with needs. Sounds like you want that, and a multi-stage pipeline. This question and answer are applicable to pipelines with only a single step.Shadchan
The question doesn't say it's a one-job pipeline. Also, using needs would not prevent multiple pipelines from running at a time. needs only orders jobs within a single pipeline. GitLab could still kick off jobs in a second pipeline after a job in a first pipeline finishes, irrespective of any needs keywords. It's funky, thinking through how the pipelines work, because there are so many configurable parts.Saturable
P
5

The resource_group answer above (originally by @phihag) also works for pipelines with multiple jobs. The only thing missing is a specific configuration of the resource_group:
Set process_mode=oldest_first via the resource group API:

curl --request PUT --data "process_mode=oldest_first" \
     --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/1/resource_groups/production"
Pokeweed answered 2/11, 2022 at 8:48 Comment(1)
Can we specify this in the pipeline code?Swampy
C
1

I had a similar problem in one of my projects where I needed to manage multiple stages in a pipeline with parallel jobs, while also maintaining a sort of ‘Mutex’ across all stages.

Based on the solutions already provided here, I managed to get this working with parent-child pipelines in combination with resource_group.

To implement this, I used a resource_group for the parent job and configured it with strategy: depend to ensure the parent job waits until the child pipeline has finished. The benefit of this solution is that it is independent of the runner configuration and the child pipeline configuration.

Parent-Pipeline .gitlab-ci.yml:

stages:
  - trigger

parent:
  stage: trigger
  resource_group: mutex
  trigger:
    include: 
      - local: '.gitlab-ci-child.yml'
    strategy: depend

The actual pipeline configuration was moved to .gitlab-ci-child.yml.
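
A minimal sketch of what .gitlab-ci-child.yml can look like (the job names and scripts here are just placeholders):

stages:
  - build
  - test

build-job:
  stage: build
  script: ./build.sh

test-job:
  stage: test
  script: ./test.sh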

Carbolated answered 27/5 at 14:11 Comment(0)
B
0

As others have mentioned, resource_group is the key feature that you need.

Setting concurrent to 1 within config.toml is not enough, as you already noticed. Even though GitLab will execute one job after another, it will pick up jobs from different pipelines in an unordered manner, which can be a problem, especially when you use a shell executor and jobs from different pipelines work on the same build.

What you could do is give all jobs the same resource_group and then, importantly, make an API call to set the process_mode of this resource_group to oldest_first, because the default is unordered.

Now the second pipeline will wait for the first pipeline to finish, so only one pipeline runs at a time, which is exactly what you want.
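
A sketch of what this looks like in .gitlab-ci.yml (job names, scripts and the group name are placeholders; the group name just has to be identical on every job):

build-job:
  stage: build
  script: ./build.sh
  resource_group: whole-pipeline

test-job:
  stage: test
  script: ./test.sh
  resource_group: whole-pipeline

After that, switch the group's process_mode to oldest_first with the API call shown in the curl example above.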

Branchiopod answered 11/5, 2023 at 19:31 Comment(0)