The deployment environment 'Staging' in your bitbucket-pipelines.yml file occurs multiple times in the pipeline

I'm trying to get Bitbucket Pipelines to work with multiple steps that define the deployment environment. When I do, I get the error

Configuration error The deployment environment 'Staging' in your bitbucket-pipelines.yml file occurs multiple times in the pipeline. Please refer to our documentation for valid environments and their ordering.

From what I've read, the deployment declaration has to happen on a step-by-step basis.

How would I set up this example pipelines file to not hit that error?

image: ubuntu:18.04

definitions:
    steps:
        - step: &build
            name: npm-build
            condition:
                changesets:
                    includePaths:
                        # Only run npm if anything in the build directory was touched
                        - "build/**"
            image: node:14.17.5
            script:
              - echo 'build initiated'
              - cd build
              - npm install
              - npm run dev
              - echo 'build complete'
            artifacts:
              - themes/factor/css/**
              - themes/factor/js/**
        - step: &deploychanges
            name: Deploy_Changes
            deployment: Staging
            script:
              - echo 'Installing server dependencies'
              - apt-get update -q
              - apt-get install -qy software-properties-common
              - add-apt-repository -y ppa:git-ftp/ppa
              - apt-get update -q
              - apt-get install -qy git-ftp
              - echo 'All dependencies installed'
              - echo 'Transferring changes'
              - git ftp init --user $FTP_USER --passwd $FTP_PASSWORD $FTP_ADDRESS push --force --changed-only -vv
              - echo 'File transfer complete'
        
        - step: &deploycompiled
            name: Deploy_Compiled
            deployment: Staging
            condition:
                changesets:
                    includePaths:
                        # Only run npm if anything in the build directory was touched
                        - "build/**"
            script:
              - echo 'Installing server dependencies'
              - apt-get update -q
              - apt-get install -qy software-properties-common
              - add-apt-repository -y ppa:git-ftp/ppa
              - apt-get update -q
              - apt-get install -qy git-ftp
              - echo 'All dependencies installed'
              - echo 'Transferring compiled assets'
              - git ftp init --user $FTP_USER --passwd $FTP_PASSWORD $FTP_ADDRESS push --all --syncroot themes/factor/css/ -vv
              - git ftp init --user $FTP_USER --passwd $FTP_PASSWORD $FTP_ADDRESS push --all --syncroot themes/factor/js/ -vv
              - echo 'File transfer complete'

pipelines:
    branches:
        master:
            - step: *build
            - step:
                <<: *deploychanges
                deployment: Production
            - step:            
                <<: *deploycompiled
                deployment: Production

        dev:
            - step: *build
            - step: *deploychanges
            - step: *deploycompiled
Vanya answered 25/8, 2021 at 23:2 Comment(2)
In a nutshell: in your case I would combine Deploy_Changes and Deploy_Compiled into a single step with the deployment group. – Goulder
@VibhanshuBiswas We can't do this, as we have a manual check in the pipeline - thus the need for two steps (minimum) to the prod env. I expect this is not an uncommon scenario. – Thymus

The workaround I have found for reusing environment variables, without using the deployment clause on more than one step in a pipeline, is to dump the env vars to a file and save it as an artifact that is then sourced in the following steps.

The code snippet for it would look like:

  steps:

    - step: &set-environment-variables
        name: 'Set environment variables'
        script:
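          # Write every variable visible to this step into .envs as export
          # assignments (see the update below for a caveat with this approach)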
          - printenv | xargs echo export > .envs
        artifacts:
          - .envs


    - step: &next-step
        name: "Next step in the pipeline"
        script:
          - source .envs
          - next_actions


pipelines:

  pull-requests:
    '**':
      - step:
          <<: *set-environment-variables
          deployment: my-deployment
      - step:
          <<: *next-step
          name: "Name of the next step being executed"

  branches:
    staging:
      - step:
          <<: *set-environment-variables
          deployment: my-deployment
      - step:
          <<: *next-step
          name: "Name of the next step being executed"

So far this solution works for me.

Update: after hitting an issue where "%s" appeared in the .envs file, causing the later source .envs statement to fail, here is a slightly different approach to the initial step. It gets around that particular issue, and it also only exports the variables you know you need in your pipeline. Note that many Bitbucket environment variables available to the first script step will be available naturally to later scripts anyway, and it makes more sense (to me, anyway) not to dump all environment variables into the .envs artifact, but to write it in a much more controlled manner.

    - step: &set-environment-variables
        name: 'Set environment variables'
        script:
          - echo "export SSH_USER=$SSH_USER" > .envs
          - echo "export SERVER_IP=$SERVER_IP" >> .envs
          - echo "export ANOTHER_ENV_VAR=$ANOTHER_ENV_VAR" >> .envs
        artifacts:
          - .envs

In this example, .envs will now contain only those three environment variables, and not a whole heap of system + Bitbucket variables (and of course, no pesky %s characters either!).
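
For illustration, the generated .envs artifact would then contain just lines like these (the values here are hypothetical), ready to be pulled into a later step with source .envs:

    export SSH_USER=deploy
    export SERVER_IP=203.0.113.10
    export ANOTHER_ENV_VAR=some-value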

Freddiefreddy answered 1/3, 2022 at 10:4 Comment(5)
I don't think it's good practice to keep your .env file with secrets in your downloadable artifacts πŸ˜‰ – Excerpt
May not be ideal, but I don't know a better approach to overcome this Bitbucket limitation. The problem is you need these vars in further steps, but you cannot declare them multiple times in the pipeline in a straightforward way. – Freddiefreddy
Thanks for this, really annoying that a hack is the best way to resolve this! – Avernus
tried this, but the resulting ".envs" file contains instances of "%s" for some reason, and then fails when it's pulled back in via the source command - the error is "bash: export: `%s': not a valid identifier". Not sure why they appear, but it means this approach won't work (for me anyway!) - unless someone can explain/fix? – Beneficence
Further to my earlier comment, I fixed it by only printing out known environment variables in the "&set-environment-variables" block as provided in this answer. I've edited the answer to show what I did – Beneficence

Using the Stage feature

Bitbucket released a beta feature called 'stage' that supports using one deployment environment for several steps.

Stages allow you to group pipeline steps logically with shared properties, such as grouping steps for the same deployment environment, locking a deployment environment for multiple steps (preventing other Pipeline runs from interacting with it), and sharing deployment variables across multiple sets of sequential steps.

So your pipeline will be:

pipelines:
  branches:
    master:
      - stage:
          name: 'Build and Deploy to prod'
          deployment: "Production" # <- put here your deployment name
          steps: # next steps will use the same deployment variables
            - step: *build
            - step: *deploychanges
            - step: *deploycompiled
    dev:
      - stage:
          name: 'Build and Deploy to dev'
          steps:
            - step: *build
            - step: *deploychanges
            - step: *deploycompiled

ref: https://support.atlassian.com/bitbucket-cloud/docs/stage-options/

Rapt answered 11/4, 2023 at 8:43 Comment(0)

Just got the same issue today.

I don't think there's currently a solution for this, except rewriting the steps so that two steps don't run in one environment.

Waiting on https://jira.atlassian.com/browse/BCLOUD-18261, which is planned to be released in July.

Related: https://community.atlassian.com/t5/Bitbucket-questions/The-deployment-environment-test-in-your-bitbucket-pipelines-yml/qaq-p/971584

Concede answered 3/6, 2022 at 7:14 Comment(1)
I agree. It's just how Pipelines works. We have been slowly moving to GitHub ever since they changed their pricing to be more affordable. – Vanya

This is currently not available. They do have a ticket, and it says it's being worked on. The best workaround currently appears to be creating multiple deployment environments, duplicating the variables, for steps that use the same variables.

Ex:

      - step:
          <<: *InitialSetup
          deployment: Beta-Setup
      - step:
          <<: *Build
          deployment: Beta-Build

From the comments on the ticket:

Hey everyone, I know this is a long-winded workaround, and someone has probably already mentioned it, but I got around the issue by setting up "sub environments", one for each step. E.g. instead of having a "Staging" environment, I set up a "Staging Build" and "Staging Deploy" environment, and just had to duplicate the variables if necessary. I did the same for production.

Having to set up and maintain all these environments and variables can be a pain, but one can automate this to prevent human error, through setting up an OAuth client tool that interfaces with the API (you just need the "pipelines" scope), if one can be bothered to go to the effort (as I have: https://blog.programster.org/bitbucket-create-oauth-client-credentials).

I can't wait for this feature to be completed as that is the "real" solution, and a lot less effort!
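
As a rough sketch of the automation described above, creating one such sub-environment could be a single API call like the following. This is an assumption based on the Bitbucket Cloud 2.0 REST API's environments resource, with $WORKSPACE, $REPO_SLUG, and $OAUTH_ACCESS_TOKEN as placeholders - verify the endpoint and payload against the current API docs before relying on it:

    # Hypothetical: create a "Staging Build" sub-environment via the REST API.
    # The endpoint path and payload shape should be checked against current docs.
    curl -X POST \
      -H "Authorization: Bearer $OAUTH_ACCESS_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"name": "Staging Build"}' \
      "https://api.bitbucket.org/2.0/repositories/$WORKSPACE/$REPO_SLUG/environments/"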

Penicillin answered 17/10, 2022 at 18:45 Comment(0)

Normally what happens is that you deploy to an environment, so there is one step which deploys, and you should attach your "deployment" group to that step specifically. This is how Bitbucket tracks whether the deployment of the code happened or not. So you can have multiple steps where one runs unit tests, another runs integration tests, another builds the binaries, and the last one deploys the artifact to the environment marked with the deployment group. See the example below.

definitions: 
  steps:
    - step: &test-vizdom-services
        name: "Vizdom services unit Tests"
        image: mcr.microsoft.com/dotnet/core/sdk:3.1
        script:     
            - cd ./vizdom/vizdom.services.Tests
            - dotnet test vizdom.services.Tests.csproj    


pipelines:
  custom:
    DEV-AWS-api-deploy:  
      - step: *test-vizdom-services        
      - step:
          name: "Vizdom Webapi unit Tests"
          image: mcr.microsoft.com/dotnet/core/sdk:3.1
          script:
              - export ASPNETCORE_ENVIRONMENT=Dev       
              - cd ./vizdom/vizdom.webapi.tests
              - dotnet test vizdom.webapi.tests.csproj    
      - step:
          deployment: DEV-API
          name: "API: Build > Zip > Upload > Deploy"
          image: mcr.microsoft.com/dotnet/core/sdk:3.1
          script:
              - apt-get update
              - apt-get install zip -y
              - mkdir -p ~/deployment/release_dll
              - cd ./vizdom/vizdom.webapi
              - cp -r ../shared_settings ~/deployment              
              - dotnet publish vizdom.webapi.csproj -c Release -o ~/deployment/release_dll
              - cp Dockerfile ~/deployment/
              - cp -r deployment_scripts ~/deployment
              - cp deployment_scripts/appspec_dev.yml ~/deployment/appspec.yml
              - cd ~/deployment
              - zip -r $BITBUCKET_CLONE_DIR/dev-webapi-$BITBUCKET_BUILD_NUMBER.zip .
              - cd $BITBUCKET_CLONE_DIR
              - pipe: atlassian/aws-code-deploy:0.5.3
                variables:
                  AWS_DEFAULT_REGION: 'us-east-1'
                  AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                  AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                  COMMAND: 'upload'
                  APPLICATION_NAME: 'ss-webapi'
                  ZIP_FILE: 'dev-webapi-$BITBUCKET_BUILD_NUMBER.zip'
                  S3_BUCKET: 'ss-codedeploy-repo'
                  VERSION_LABEL: 'dev-webapi-$BITBUCKET_BUILD_NUMBER.zip'
              - pipe: atlassian/aws-code-deploy:0.5.3
                variables:
                  AWS_DEFAULT_REGION: 'us-east-1'
                  AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                  AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                  COMMAND: 'deploy'
                  APPLICATION_NAME: 'ss-webapi'
                  DEPLOYMENT_GROUP: 'dev-single-instance'
                  WAIT: 'false'
                  S3_BUCKET: 'ss-codedeploy-repo'
                  VERSION_LABEL: 'dev-webapi-$BITBUCKET_BUILD_NUMBER.zip'

So as you can see, I have multiple steps for running test cases, but I build the binaries and deploy the code in the final step. I could have broken it into separate steps, but I don't want to waste build minutes on another step, because cloning and copying the artifact takes some time. Right now there are three steps; it could have been broken into four, where the fourth would have been the deployment step. I hope this brings some clarity.

Also, you can modify the names of the deployment groups as per your needs, and you can have up to 50 deployment groups :)

Goulder answered 26/8, 2021 at 5:36 Comment(3)
My issue then is that I have conditional artifacts being created on the build step. I can't combine my deployment steps since the git ftp push on the "deploycompiled" step should only happen conditionally if those artifacts are created. – Vanya
Can't you write an internal script to check and do that for you? I mean, creating artifacts in a directory is under your control, and you can run checks on that. – Goulder
If I knew how, I would :( – Vanya
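
For anyone in the same spot: a minimal sketch of such a check (hypothetical, reusing the artifact paths and git-ftp commands from the question) would guard the compiled-asset transfer behind a directory test:

    # Hypothetical guard: only transfer compiled assets if the build step
    # actually produced them (paths from the question's artifacts section).
    if [ -d themes/factor/css ] && [ -d themes/factor/js ]; then
      echo 'Transferring compiled assets'
      git ftp init --user $FTP_USER --passwd $FTP_PASSWORD $FTP_ADDRESS push --all --syncroot themes/factor/css/ -vv
      git ftp init --user $FTP_USER --passwd $FTP_PASSWORD $FTP_ADDRESS push --all --syncroot themes/factor/js/ -vv
    else
      echo 'No compiled assets found; skipping transfer'
    fi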

Little did I know, it's intentional that deployment happens in one step, and you can only define the deployment environment on one step. The following setup is what worked for us (plus the appropriate separate git-ftp files):

image: ubuntu:18.04

definitions:
    steps:
        - step: &build
            name: Build
            condition:
                changesets:
                    includePaths:
                    # Only run npm if anything in the build directory was touched
                        - "build/**"
            image: node:15.0.1
            script:
              - echo 'build initiated'
              - cd build
              - npm install
              - npm run prod
              - echo 'build complete'
            artifacts:
              - themes/factor/css/**
              - themes/factor/js/**
        - step: &deploy
            name: Deploy
            deployment: Staging
            script:
              - echo 'Installing server dependencies'
              - apt-get update -q
              - apt-get install -qy software-properties-common
              - add-apt-repository -y ppa:git-ftp/ppa
              - apt-get update -q
              - apt-get install -qy git-ftp
              - echo 'All dependencies installed'
              - echo 'Transferring changes'
              - git ftp init --user $FTP_USER --passwd $FTP_PASSWORD $FTP_ADDRESS push --force --changed-only -vv
              - echo 'File transfer complete'

pipelines:
    branches:
        master:
            - step: *build
            - step:            
                <<: *deploy
                deployment: Production

        dev:
            - step: *build
            - step: *deploy
Vanya answered 26/8, 2021 at 15:54 Comment(0)

In your case, to resolve the error, you have the following options:

  • Combine all steps into one big step
  • Or create different deployment variable groups, Staging DeployChanges and Staging DeployCompiled, though this may lead to duplicated variables

ex:

        - step: &deploychanges
            name: Deploy_Changes
            deployment: Staging DeployChanges
            script:
              - ....
        
        - step: &deploycompiled
            name: Deploy_Compiled
            deployment: Staging DeployCompiled
            ....

Ramin answered 23/9, 2021 at 12:4 Comment(3)
No, you're wrong. We can use the same deployment for many steps in the same pipeline, but not when you combine artifacts + deployment. That is the root cause of his error. – Presuppose
@BuiAnhTuan are you sure? I cannot do it; even the Atlassian team said the same: community.atlassian.com/t5/Bitbucket-questions/… – Ramin
you can check my comment and screenshot https://mcmap.net/q/779062/-the-deployment-environment-39-staging-39-in-your-bitbucket-pipelines-yml-file-occurs-multiple-times-in-the-pipeline – Presuppose

I guess we cannot combine the deployment with either an artifact or a cache. If I use a standalone deployment, I can use the same deployment for multiple steps (as in my screenshot). If I add a cache/artifact, I get the same error as yours.

Presuppose answered 11/10, 2021 at 1:38 Comment(0)