Concourse CI - Build Artifacts inside source, pass all to next task

I want to set up a build pipeline in Concourse for my web application. The application is built using Node.

The plan is to do something like this:

                                        ,-> build style guide -> dockerize
source code -> npm install -> npm test -|
                                        `-> build website -> dockerize

The problem is that each task runs in a fresh container, so the node_modules directory created by npm install is lost. I want to pass node_modules into the later tasks, but because it lives "inside" the source code directory, Concourse rejects the configuration with:

invalid task configuration:
  you may not have more than one input or output when one of them has a path of '.'

Here's my job setup:

jobs:
  - name: test
    serial: true
    disable_manual_trigger: false
    plan:
      - get: source-code
        trigger: true

      - task: npm-install
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: node, tag: "6" }
          inputs:
            - name: source-code
              path: .
          outputs:
            - name: node_modules
          run:
            path: npm
            args: [ install ]

      - task: npm-test
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: node, tag: "6" }
          inputs:
            - name: source-code
              path: .
            - name: node_modules
          run:
            path: npm
            args: [ test ]

Update 2016-06-14

Inputs and outputs are just directories. You put whatever you want to pass along into an output directory, and you can then feed that directory to another task in the same job. Inputs and outputs cannot overlap, so to do this with npm you have to copy either node_modules or the entire source folder from the input directory into an output directory, and then use that output in the next task (see the sketch below).
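
For illustration, here's a minimal sketch of that copy-to-output approach (built-source is a made-up name for the output directory):

- task: npm-install
  config:
    platform: linux
    image_resource:
      type: docker-image
      source: {repository: node, tag: "6" }
    inputs:
      - name: source-code
    outputs:
      - name: built-source
    run:
      path: sh
      args:
        - -exc
        # copy the whole tree into the output, then install inside it,
        # so node_modules travels with the source into the next task
        - |
          cp -a source-code/. built-source/
          cd built-source
          npm install

- task: npm-test
  config:
    platform: linux
    image_resource:
      type: docker-image
      source: {repository: node, tag: "6" }
    inputs:
      - name: built-source
    run:
      path: sh
      args: [ -exc, "cd built-source && npm test" ]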

This doesn't work between jobs though. The best suggestion I've seen so far is to push everything up to a temporary git repository or bucket. There has to be a better way of doing this, since part of what I'm trying to do is avoid huge amounts of network IO.

Dorweiler answered 10/6, 2016 at 14:13 Comment(3)
Is it okay if you post your updated pipeline.yml file so we can see what you've done? I'm running into a similar issue and I've been trying for days to fix it! It's driving me nuts.Cacie
I can't post the code but I can tell you the solution. I renamed it Jenkinsfile... binned Concourse and used Jenkins Blue Ocean instead. I am substantially happier. I even created a Vagrantfile which builds Jenkins into Docker on CoreOS and allows any of our developers to run the exact same pipeline on their machine as on any test, stage or live machine. It's not quite complete but I will open source it in the future and I'll try to remember to link to it here when I do.Dorweiler
Nice! I have a normal Jenkins setup and I do everything on it, but lately I've been testing Concourse just for kicks and so far it's been very frustrating! I didn't know about Jenkins Blue Ocean, but thanks to you I'm going to check that out too! :)Cacie

There is a resource specifically designed for this use case of carrying npm dependencies between jobs. I have been using it for a couple of weeks now:

https://github.com/ymedlop/npm-cache-resource

It basically lets you cache the first npm install and inject it as a folder into the next job of your pipeline (a rough sketch follows). You could also quite easily set up your own caching resource by reading the source of that one, if you want to cache more than node_modules.
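
For orientation, here is a rough sketch of how this kind of cache resource gets wired into a pipeline. The source parameters (the shared repo anchor, paths keyed off package.json) are my recollection of the project's README, so treat them as assumptions and check the repo before copying:

resource_types:
  - name: npm-cache
    type: docker-image
    source: {repository: ymedlop/npm-cache-resource, tag: latest }

resources:
  - name: source-code
    type: git
    source: &repo
      uri: https://github.com/your-org/your-app.git   # placeholder repo
      branch: master
  - name: npm-cache
    type: npm-cache
    source:
      <<: *repo
      paths: [ package.json ]   # assumed: cache re-keys when package.json changes

jobs:
  - name: test
    plan:
      - get: source-code
        trigger: true
      - get: npm-cache          # arrives as a folder containing node_modules
      - task: npm-test
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: node, tag: "6" }
          inputs:
            - name: source-code
            - name: npm-cache
          run:
            path: sh
            dir: source-code
            args: [ -exc, "ln -s ../npm-cache/node_modules . && npm test" ]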

I am actually using this npm-cache-resource in combination with a Nexus proxy to speed up the initial npm install further.

Be aware that some npm packages have native bindings that must be built against the standard libs matching the container's Linux version. So if you move between different types of containers a lot, you may run into issues with musl vs. glibc and the like. In that case I recommend either streamlining the pipeline to use the same container type throughout, or rebuilding the node_modules in question (see the sketch below).
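
If you do hit that mismatch, running npm rebuild inside a task recompiles the native addons against the current container's toolchain. A minimal sketch, with built-source again being a hypothetical directory holding source plus node_modules:

- task: npm-rebuild
  config:
    platform: linux
    image_resource:
      type: docker-image
      source: {repository: node, tag: "6" }
    inputs:
      - name: built-source   # source plus node_modules from an earlier task
    run:
      path: sh
      dir: built-source
      args: [ -exc, "npm rebuild" ]   # recompile native addons for this container's libc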

There is a similar resource for Gradle (on which the npm one is based): https://github.com/projectfalcon/gradle-cache-resource

Gossamer answered 27/2, 2017 at 9:14 Comment(0)

"This doesn't work between jobs though."

This is by design. Each step (get, task, put) in a job runs in an isolated container. Inputs and outputs are only valid inside a single job.

What connects jobs is resources. Pushing to git is one way. It would almost certainly be faster and easier to use a blob store (e.g. S3) or a file store (e.g. FTP); a sketch of the S3 route follows.
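
Here is a hedged sketch of the blob-store route using the built-in s3 resource (the bucket name and credential variables are placeholders): the test job tars up the tested tree and puts it, and the downstream job gets exactly those bytes:

resources:
  - name: build-output
    type: s3
    source:
      bucket: my-ci-artifacts               # placeholder bucket
      regexp: web-app-(.*)\.tgz
      access_key_id: {{aws-access-key-id}}
      secret_access_key: {{aws-secret-access-key}}

jobs:
  - name: test
    plan:
      - get: source-code
        trigger: true
      - task: install-test-pack
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: node, tag: "6" }
          inputs:
            - name: source-code
          outputs:
            - name: packed
          run:
            path: sh
            args:
              - -exc
              # install, test, then tar the tested tree (node_modules included)
              - |
                cd source-code
                npm install && npm test
                tar czf ../packed/web-app-$(date +%s).tgz .
      - put: build-output
        params: {file: packed/web-app-*.tgz}

  - name: build-website
    plan:
      - get: build-output
        trigger: true
        passed: [test]   # only versions produced by the test job
      # untar and carry on from exactly what was tested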

Fortunia answered 19/8, 2016 at 14:39 Comment(2)
These are things I've considered. However, needing a workaround for something that is "by design", just to avoid shipping a different package from the one that was tested, suggests this isn't the right tool for the job.Dorweiler
I'm not sure I understand what you mean by a workaround. Concourse is deliberately stateless for every step in every job; you have to explicitly tell it how to move things around. It takes a while to adjust, but it's not a "work around". Jenkins has a different model of operation, one that more closely resembles many historical stateful workflows.Fortunia