yarn build - error Command failed with exit code 137 - Bitbucket Pipelines out of memory - Using max memory 8192mb

Our react app is configured to build and deploy using the CRA scripts and Bitbucket Pipelines.

Most of our builds are failing when running yarn build, with the following error:

error Command failed with exit code 137.

This is an out of memory error.

We tried setting GENERATE_SOURCEMAP=false as a deployment environment variable, but that did not fix the issue (https://create-react-app.dev/docs/advanced-configuration/).
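For reference, setting it inline on the build command instead would look roughly like this (a sketch only, not what we actually changed):

        script:
          - yarn
          - GENERATE_SOURCEMAP=false NODE_ENV=${BUILD_ENV} yarn build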

We also tried setting the max memory available for a step by running the following:

node --max-old-space-size=8192 scripts/build.js

Increasing to max memory did not resolve the issue.
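For completeness, the same flag can also be passed through the NODE_OPTIONS environment variable inside the build step, which child Node processes pick up as well (a sketch of the step, not our exact change):

    - step: &build
        name: Build
        size: 2x
        script:
          - yarn
          - NODE_OPTIONS="--max-old-space-size=8192" NODE_ENV=${BUILD_ENV} yarn build

Either way, --max-old-space-size only raises the heap limit per Node process; if the build spawns several worker processes, their combined usage can still exceed the 8GB available to a 2x step.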

This is blocking our development and we aren't sure what to do to resolve the issue.

We could move to a new CI/CD service but that is a lot more work than desired.

Are there other solutions that could solve this problem?

Below is the bitbucket-pipelines.yml file:

image: node:14

definitions:
  steps:
    - step: &test
        name: Test
        script:
          - yarn
          - yarn test --detectOpenHandles --forceExit --changedSince $BITBUCKET_BRANCH
    - step: &build
        name: Build
        size: 2x
        script:
          - yarn
          - NODE_ENV=${BUILD_ENV} yarn build
        artifacts:
            - build/**
    - step: &deploy_s3
        name: Deploy to S3
        script:
          - pipe: atlassian/aws-s3-deploy:0.3.8
            variables:
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
              AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
              S3_BUCKET: $S3_BUCKET
              LOCAL_PATH: "./build/"
              ACL: 'public-read'
    - step: &auto_merge_down
        name: Auto Merge Down
        script:
          - ./autoMerge.sh stage || true
          - ./autoMerge.sh dev || true
  caches:
    jest: /tmp/jest_*
    node-dev: ./node_modules
    node-stage: ./node_modules
    node-release: ./node_modules
    node-prod: ./node_modules


pipelines:
  branches:
    dev:
      - parallel:
          fail-fast: true
          steps:
            - step:
                caches:
                  - node-dev
                  - jest
                <<: *test
            - step:
                caches:
                  - node-dev
                <<: *build
                deployment: Dev Env
      - step:
          <<: *deploy_s3
          deployment: Dev
    stage:
      - parallel:
          fail-fast: true
          steps:
            - step:
                caches:
                  - node-stage
                  - jest
                <<: *test
            - step:
                caches:
                  - node-stage
                <<: *build
                deployment: Staging Env
      - step:
          <<: *deploy_s3
          deployment: Staging
    prod:
      - parallel:
          fail-fast: true
          steps:
            - step:
                caches:
                  - node-prod
                  - jest
                <<: *test
            - step:
                caches:
                  - node-prod
                <<: *build
                deployment: Production Env
      - parallel:
          steps:
            - step:
                <<: *deploy_s3
                deployment: Production
            - step:
                <<: *auto_merge_down
Zealot answered 10/3, 2023 at 15:31 Comment(6)
I don't know what the hell you are building, but something feels odd. Do you really need that amount of memory to build the app on your workstation? I'd look into the root cause of that memory consumption. Developers can become somewhat irresponsible (resource-wise) when they are given awkwardly powerful workstations. – Inadmissible
It's an existing React app that has been added to for years. I am newer to the company and the project, so I am not sure yet. – Zealot
Also @Inadmissible, what do you mean by workstation? This is running in a Bitbucket pipeline, not a local environment. – Zealot
I mean your personal computer, laptop or whatever. Does this memory consumption reproduce while building the project? I reckon it does, but every development workstation in the organization features 16GB+ so nobody notices the issue? If it didn't, the answer to your question might be totally different. – Inadmissible
It builds without error locally. We don't typically use a production build locally, but when running one it takes a few minutes. Since my local computer has far more than 8GB of memory, it can build without error. The Bitbucket pipeline only allows for 8GB. – Zealot
Related: https://mcmap.net/q/1176483/-bitbucket-out-of-memory-terser-webpack-plugin-running-jest-tests-in-pipeline/11715259 – Inadmissible

It turns out the terser-webpack-plugin package was spawning the maximum number of jest-worker processes during our yarn build step, causing the out-of-memory error (https://www.npmjs.com/package/terser-webpack-plugin).

After removing that plugin from our package.json, the build no longer fails and the jest workers are no longer spawned during the build.

Alternatively, you can set parallel to false in the TerserWebpackPlugin config so it does not spawn workers, as sketched below.
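A minimal sketch of what that could look like, assuming the webpack config is reachable (for example via an ejected CRA setup or an override tool such as craco; the file name and structure are illustrative):

// webpack.config.js (illustrative)
const TerserPlugin = require('terser-webpack-plugin');

module.exports = {
  optimization: {
    minimize: true,
    minimizer: [
      new TerserPlugin({
        // parallel is true by default and spawns jest-worker processes
        // (CPU count minus one); false keeps minification in the main process
        parallel: false,
      }),
    ],
  },
};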

This default behaviour seems problematic and is what caused our pipeline, and likely others, to run out of memory.

Zealot answered 20/3, 2023 at 12:15 Comment(0)

You can use even bigger builders with size: 4x and size: 8x, but only on self-hosted pipeline runners, which will obviously need at least 16GB of memory.

https://support.atlassian.com/bitbucket-cloud/docs/step-options/#Size

definitions:
  anchors:

    - &build-step
        name: Build
        size: 4x
        runs-on: 
          - 'self.hosted'
          - 'my.custom.label'
        script:
          - yarn
          - NODE_ENV=${BUILD_ENV} yarn build
        artifacts:
            - build/**
Inadmissible answered 11/3, 2023 at 11:30 Comment(3)
Yeah, I was afraid this was the only fix within the platform. – Zealot
Just the easiest answer a random stranger made up while browsing; possibly not the only fix. But yes, if self-hosted runners are not on the table I'd definitely be looking into building the app "outside the platform". – Inadmissible
Yeah, or reducing the build. I also introduced ideas like breaking the app up into multiple apps/micro frontends to decrease the memory per build. It's also using old versions of node and yarn, which could make it take up more memory as well. – Zealot

Try adding the following definition:

definitions:
  services:
    docker:
      memory: 4096

I found it when we had some similar issues, like: https://confluence.atlassian.com/bbkb/bitbucket-pipeline-execution-hangs-on-docker-build-step-1189503836.html

Edit: sorry, my bad, no you don't need docker. Note that the memory allocated is shared by both the script in the step and any services on the step, so maybe remove the parallel block and let jest run on its own before you start the build, as it can be a bit of a memory hog. If you must run in parallel, at least limit the impact of jest by running tests sequentially (jest --runInBand) or with a lower number of workers (jest --maxWorkers=4), as sketched below.
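For illustration, limiting Jest inside the existing test step could look like this (a sketch; the flag values are just examples):

    - step: &test
        name: Test
        script:
          - yarn
          # --runInBand keeps tests in a single process; --maxWorkers=2 would allow limited parallelism
          - yarn test --runInBand --detectOpenHandles --forceExit --changedSince $BITBUCKET_BRANCH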

Sideband answered 10/3, 2023 at 15:54 Comment(4)
We aren't using docker as a service in this pipeline, so would that be required? – Zealot
@Sideband Parallel steps are not services; they run on independent build agents with independent resources. – Inadmissible
@Inadmissible You're thinking of jobs; they run in different containers as per circleci.com/docs/parallelism-faster-jobs, but if you do splitting you need the CircleCI split command. Jest tests can easily eat up 32GB on my Linux notebook. – Sideband
