docker-compose volume on node_modules but is empty
I'm pretty new to Docker and I wanted to map the node_modules folder to my computer (for debugging purposes).

This is my docker-compose.yml

web:
  build: .
  ports:
    - "3000:3000"
  links:
    - db
  environment:
    PORT: 3000
  volumes:
    - .:/usr/src/app
    - /usr/src/app/node_modules
db:
  image: mongo:3.3
  ports:
    - "27017:27017"
  command: "--smallfiles --logpath=/dev/null"

I'm on Docker for Mac. When I run docker-compose up -d everything goes right, but it creates a node_modules folder on my computer that is empty. If I go into the container's bash and ls node_modules, all the packages are there.

How can I get the container's contents on my computer too?

Thank you

Kern answered 17/7, 2016 at 21:15 Comment(5)
Mike, did you find a solution? I have the same problem: I want the node_modules folder to be mirrored from the container to the host, so WebStorm can see the dependencies, but all I can do is to run npm install on both host and container.Imminence
I didn't. sorryKern
OK, let's hope that someone would like to collect the bounty! :)Imminence
Nice! Thank you!Kern
Mike, @Alessandro, can you please give some feedback? Thanks!Thirteenth

TL;DR Working example, clone and try: https://github.com/xbx/base-server


You need a node_modules directory on your computer (outside the image) for debugging purposes first, before running the container.

If you want to debug only node_modules:

volumes:
    - /path/to/node_modules:/usr/src/app/node_modules

If you want to debug both your code and node_modules:

volumes:
    - .:/usr/src/app/

Remember that you will need to run npm install at least once outside the container (or copy the node_modules directory that the docker build generates). Let me know if you need more details.


Edit. So, without needing npm on OSX, you can:

  1. docker build and then docker cp <container-id>:/path/to/node-modules ./local-node-modules/. Then in your docker-compose.yml mount those files and troubleshoot whatever you want.
  2. Or, docker build and, in the Dockerfile, run npm install in another directory. Then in your command (CMD or the docker-compose command) copy (cp) it to the right directory; that directory is mounted empty from your computer (a volume in the docker-compose.yml), so you can troubleshoot whatever you want.
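Option 1 can be sketched as a few commands; the image tag and container path here are assumptions based on the compose file above, so adjust them to your project:

```shell
# Build the image, create a stopped container from it, and copy the
# node_modules that "docker build" produced out to the host.
docker build -t myapp .
docker create --name myapp-tmp myapp
docker cp myapp-tmp:/usr/src/app/node_modules ./local-node-modules
docker rm myapp-tmp
```

Using docker create rather than docker run avoids starting the app just to extract files.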

Edit 2. (Option 2) Working example, clone and try: https://github.com/xbx/base-server. I did it all automatically in this repo, forked from yours.

Dockerfile

FROM node:6.3

# Install app dependencies
RUN mkdir /build-dir
WORKDIR /build-dir
COPY package.json /build-dir
RUN npm install -g babel babel-runtime babel-register mocha nodemon
RUN npm install

# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN ln -s /build-dir/node_modules node_modules

# Bundle app source
COPY . /usr/src/app

EXPOSE 1234
CMD [ "npm", "start" ]

docker-compose.yml

web:
  build: .
  ports:
    - "1234:1234"
  links:
    - db # link to the DB
  environment:
    PORT: 1234
  command: /command.sh
  volumes:
    - ./src/:/usr/src/app/src/
    - ./node_modules:/usr/src/app/node_modules
    - ./command.sh:/command.sh
db:
  image: mongo:3.3
  ports:
    - "27017:27017"
  command: "--smallfiles --logpath=/dev/null"

command.sh

#!/bin/bash

cp -r /build-dir/node_modules/ /usr/src/app/

exec npm start

Please clone my repo and run docker-compose up. It does what you want. PS: It can be improved to do the same in a better way (i.e. best practices, etc.).

I'm on OSX and it works for me.

Thirteenth answered 2/5, 2017 at 0:48 Comment(12)
Since npm install is platform dependent, running it on the host might lead to cross-platform issues (host=mac, container=debian).Cymogene
Seems like you're suggesting to manually copy the npm install results to the volume. Is there a reason you prefer doing it manually rather than automatically as part of the build and entrypoint like I posted in my answer?Net
Will that symbolic link work when you mount in an empty volume? This is starting to look very similar to the answer I posted earlier.Net
The symbolic link is only to run the container in the case that you don't use the volume. Have you docker compose up'd what I did?Thirteenth
I've sent a fix. github.com/xbx/base-server/commit/…Thirteenth
@BMitch, I'm sorry. I just realized that you posted first the same solution as the mine (essentially the same). Credits for you thenThirteenth
Oh, woops, I thought the link was the other way around, now I'm following. I like this variant, saves one copy inside the image.Net
Yes @BMitch. A copy of typically a lot of files (node_modules)Thirteenth
I like this solution, I didn't think about a bash script to copy the node_modules after the volume was mounted. Thanks a lot for your help!Imminence
Has anyone solved the issue with node_modules? I don't want to install them on my host because of possible cross-platform issues (@Cymogene wrote about this above too). Is it possible to install node_modules inside the Docker container and mirror them to the host, so that I can look at the sources when I need to, and so that my IDE can see all the devDependencies like eslint and others?Flyfish
I tried your solution; after everything runs, in the end it says /usr/local/bin/docker-entrypoint.sh: exec: line 8: /start.sh: not found. start.sh is just the command.sh. If I run the two commands ONE by ONE, I am able to run them, but they don't run together. If I run the script, it gives that error; if I do bash -c "cp-command && yarn start", it says /app/bash not found. Can you please help?Trill
Your example fails for me at ERROR [ 5/10] RUN npm install -g babel babel-runtime babel-register mocha nodemon``#9 7.356 npm ERR! argv "/usr/local/bin/node" "/usr/local/bin/npm" "install" "-g" "babel" "babel-runtime" "babel-register" "mocha" "nodemon" Docker for Mac version 20.10.7, build f0df350 docker-compose version 1.29.2, build 5becea4cAetna

First, there's an order of operations. When you build your image, volumes are not mounted, they only get mounted when you run the container. So when you are finished with the build, all the changes will only exist inside the image, not in any volume. If you mount a volume on a directory, it overlays whatever was from the image at that location, hiding those contents from view (with one initialization exception, see below).


Next is the volume syntax:

  volumes:
    - .:/usr/src/app
    - /usr/src/app/node_modules

tells docker-compose to create a host volume from the current directory to /usr/src/app inside the container, and then to map /usr/src/app/node_modules to an anonymous volume maintained by docker. The latter will appear as a volume in docker volume ls with a long uuid string that is relatively useless.

To map /usr/src/app/node_modules to a folder on your host, you'll need to include a folder name and colon in front of that like you have on the line above. E.g. /host/dir/node_modules:/usr/src/app/node_modules.

Named volumes are a bit different than host volumes in that docker maintains them with a name you can see in docker volume ls. You reference these volumes with just a name instead of a path. So node_modules:/usr/src/app/node_modules would create a volume called node_modules that you can mount in a container with just that name.
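For example, a named-volume version of the question's file might look like the sketch below; note that named volumes need the version 2+ compose format with a top-level volumes: key, unlike the v1 files shown elsewhere on this page (untested against this particular project):

```yaml
version: "2"
services:
  web:
    build: .
    volumes:
      - .:/usr/src/app
      - node_modules:/usr/src/app/node_modules
volumes:
  node_modules:
```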

I diverged to describe named volumes because they come with a feature that turns into a gotcha with host volumes. Docker helps you out with named volumes by initializing them with the contents of the image at that location. So in the above example, if the named volume node_modules is empty (or new), it will first copy the contents of the image at /usr/src/app/node_modules to this volume and then mount it inside your container.

With host volumes, you will never see any initialization, whatever is at that location, even an empty directory, is all you see in the container. There's no way to get contents from the image at that directory location to first copy out to the host volume at that location. This also means that directory permissions needed inside the container are not inherited automatically, you need to manually set the permissions on the host directory that will work inside the container.
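For instance, before the first docker-compose up you could pre-create the host directory with an owner the container process can write as. The uid/gid 1000 below is an assumption (it matches the node user in the official node images); check yours with docker exec <container> id:

```shell
# Pre-create the host directory so the bind mount is writable by the
# container process. uid/gid 1000 matches the "node" user in the
# official node images; adjust if your container runs as someone else.
mkdir -p ./node_modules
chown 1000:1000 ./node_modules 2>/dev/null || true  # may require sudo
```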


Finally, there's a small gotcha with docker for windows and mac, they run inside a VM, and your host volumes are mounted to the VM. To get the volume mounted to the host, you have to configure the application to share the folder in your host to the VM, and then mount the volume in the VM into the container. By default, on Mac, the /Users folder is included, but if you use other directories, e.g. a /Projects directory, or even a lower case /users (unix and bsd are case sensitive), you won't see the contents from your Mac inside the container.


With that base knowledge covered, one possible solution is to redesign your workflow to get the directory contents from the image copied out to the host. First you need to copy the files to a different location inside your image. Then you need to copy the files from that saved image location to the volume mount location on container startup. When you do the latter, you should note that you are defeating the purpose of having a volume (persistence) and may want to consider adding some logic to be more selective about when you run the copy. To start, add an entrypoint.sh to your build that looks like:

#!/bin/sh
# copy from the image backup location to the volume mount
cp -a /usr/src/app_backup/node_modules/* /usr/src/app/node_modules/
# this next line runs the docker command
exec "$@"
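If you want to be more selective, one sketch (same paths as the entrypoint above) is to seed the volume only when it is empty, so restarts don't clobber changes already made in the volume:

```shell
#!/bin/sh
# Only copy when the mounted node_modules is empty (first run);
# otherwise keep whatever the volume already contains.
if [ -z "$(ls -A /usr/src/app/node_modules 2>/dev/null)" ]; then
  cp -a /usr/src/app_backup/node_modules/* /usr/src/app/node_modules/
fi
exec "$@"
```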

Then update your Dockerfile to include the entrypoint and a backup command:

FROM node:6.3

# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install -g babel babel-runtime babel-register mocha nodemon
RUN npm install

# Bundle app source
COPY . /usr/src/app
RUN cp -a /usr/src/app/. /usr/src/app_backup

EXPOSE 1234
ENTRYPOINT [ "/usr/src/app/entrypoint.sh" ]
CMD [ "npm", "start" ]

And then drop the extra volume from your docker-compose.yml:

  volumes:
    - .:/usr/src/app
Net answered 1/5, 2017 at 20:3 Comment(8)
I think you need to fix how the volume is mounted, not only drop the extra volume. As I posted in my answer.Thirteenth
If they want node_modules to be saved in ./node_modules the above works. Otherwise, yes, they need to specify a different volume mount as you've shown.Net
If I'm not wrong, specifying a volume like that creates an anonymous volume. It lacks the local (host) directory.Thirteenth
The bottom volume inside a docker-compose.yml doesn't. The top volume section is me copying from the question and then explaining that it creates an anonymous volume. If there's a better way to phrase that, let me know.Net
In the part Next is the volume syntax:, it's actually placed in the current directory as BMitch mentioned, not in an anonymous volume, but there are no more details in the docker documentation; this still confuses me.Bucella
@Bucella .:/usr/src/app bind mounts the current directory as a volume. /usr/src/app/node_modules creates an anonymous volume. success.docker.com/article/different-types-of-volumesNet
@Net What do you think (pros vs cons) about stackoverflow.com/a/66994382 answer? Is it a better way of handling this issue compared to your method?Theressa
@Theressa The original solution from the OP's question is probably the best for most use cases. It reuses the modules from the docker build and avoids conflicting with platform specific stuff on the developers machine. I think a lot of people try to replace the common solution because they don't understand it, rather than it actually having issues.Net

The simplest solution

Configure the node_modules volume to use your local node_modules directory as its storage location using Docker Compose and the Local Volume Driver with a Bind Mount.

First, make sure you have a local node_modules directory, or create it, and then create a Docker volume for it in the named volumes section of your docker-compose file:

volumes:
  node_modules:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: ./local/relative/path/to/node_modules

Then, add your node_modules volume to your service:

ui:
  volumes:
    - node_modules:/container/path/to/node_modules

Just make sure you always make node_modules changes inside the Docker container (using docker-compose exec), and it will be synchronized perfectly and available on the host for IDEs, code completion, debugging, etc.
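For example, to add a dependency without touching the host toolchain (the service name ui is taken from the snippet above; the package name is just an illustration):

```shell
# Run npm inside the running "ui" service so native modules are built
# for the container's platform; the bind mount syncs them to the host.
docker-compose exec ui npm install --save lodash
```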

Version Control Tip: When your Node package.json/package-lock.json files change (e.g. when pulling or switching branches), in addition to rebuilding the image you have to remove the volume and delete its contents:

docker volume rm example_node_modules
rm -rf local/relative/path/to/node_modules
mkdir local/relative/path/to/node_modules
Gambia answered 7/4, 2021 at 21:31 Comment(11)
Thank you, this is exactly what I wanted. Could you provide a brief explanation of the volume driver_opts?Nettles
@Nettles Since the driver is set to local in this case, the driver_opts are the options for the local driver. "type" is none here because we're using the host filesystem, otherwise it could be set to "nfs", or "cifs", etc. "o", short for "opt", a.k.a. "options" is a coma separated list of driver options, in this case "bind" to create a bind mount. And "device" is the storage location for the volume.Gambia
@shet_tayyy Did you get any errors, or any other feedback that might help to figure out what's wrong? I still use this solution and it works great, so it's probably something silly like an incorrect path, missing character in the config, or maybe permissions.Gambia
@Gambia My bad. I should have provided more details. Posting the error below: Error response from daemon: failed to mount local volume: mount ./node_modules:/var/lib/docker/volumes/fastify-beej_node_modules/_data, flags: 0x1000: no such file or directoryKeithakeithley
@Keithakeithley The local node_modules directory does need to be created manually first. If it already exists, then make sure your path is correct in the device setting. It's based on the relative location of your docker-compose file.Gambia
@Gambia Yes, the node_modules folder exists in the root. The docker-compose.dev.yml file is present in the root too. I am sharing the link of the repo for better understanding github.com/rashtay/fastify-beej/tree/github_actions. If you clone it, you would have to create a node_modules folder. You can check package.json for relevant scripts. I run yarn docker:build-devKeithakeithley
@Gambia I modified the path for node_modules under volume to ./node_modules:/usr/app/node_modules and now I am facing a different issue: $ ts-node-dev --poll --clear --respawn --transpile-only --inspect=0.0.0.0:9229 ./src/index.ts ts-node-dev: not found Container is unable to find the node module ts-node-dev which is a dev dependencyKeithakeithley
@Keithakeithley I found the problem. In your service, under volumes, you have: - ./node_modules:/usr/app/node_modules, but it should be - node_modules:/usr/app/node_modules without the ./ so that it points to the named volume.Gambia
Let us continue this discussion in chat.Keithakeithley
This seems to work! However: (1) creating the volume takes quite a while. For my (pretty lightweight) project, I had to use COMPOSE_HTTP_TIMEOUT=300 docker-compose up because the default of 60 was not long enough. (2) When removing this volume (e.g to do a fresh setup, or troubleshoot the above issue) I had to restart Docker every time, because of a known bug, see https://mcmap.net/q/167349/-docker-tries-to-mkdir-the-folder-that-i-mountJara
I've also found that for some reason, this config will cause docker-compose up to fail when the same command is run by a Makefile. From Make, there's some issue with the directory that I don't understand.Jara

I built upon @Robert's answer, as there were a couple of things it didn't take into consideration; namely:

  • cp takes too long and the user can't view the progress.
  • I want node_modules to be overwritten if it were installed through the host machine.
  • I want to be able to git pull while the container is running and not running and update node_modules accordingly, should there be any changes.
  • I only want this behavior during the development environment.

To tackle the first issue, I installed rsync on my image, as well as pv (because I want to view the progress while deleting as well). Since I'm using alpine, I used apk add in the Dockerfile:

# Install rsync and pv to view progress of moving and deletion of node_modules onto host volume.
RUN apk add rsync pv

I then changed the entrypoint.sh to look like so (you may substitute yarn.lock with package-lock.json):

#!/bin/ash

# Declaring variables.
buildDir=/home/node/build-dir
workDir=/home/node/work-dir
package=package.json
lock=yarn.lock
nm=node_modules

#########################
# Begin Functions
#########################

copy_modules () { # Copy all files of build directory to that of the working directory.
  echo "Calculating build folder size..."
  buildFolderSize=$( du -a $buildDir/$nm | wc -l )
  echo "Copying files from build directory to working directory..."
  rsync -avI $buildDir/$nm/. $workDir/$nm/ | pv -lfpes "$buildFolderSize" > /dev/null
  echo "Creating flag to indicate $nm is in sync..."
  touch $workDir/$nm/.docked # Docked file is a flag that tells the files were copied already from the build directory.
}

delete_modules () { # Delete old module files.
    echo "Calculating incompatible $1 directory $nm folder size..."
    folderSize=$( du -a $2/$nm | wc -l )
    echo "Deleting incompatible $1 directory $nm folder..."
    rm -rfv $2/$nm/* | pv -lfpes "$folderSize" > /dev/null # Delete all files in node_modules.
    rm -rf $2/$nm/.* 2> /dev/null # Delete all hidden files in node_modules.
}

#########################
# End Functions
# Begin Script
#########################

if cmp -s $buildDir/$lock $workDir/$lock >/dev/null 2>&1 # Compare lock files.
  then
    # Delete old modules.
    delete_modules "build" "$buildDir"
    # Remove old build package.
    rm -rf $buildDir/$package 2> /dev/null
    rm -rf $buildDir/$lock 2> /dev/null
    # Copy package.json from working directory to build directory.
    rsync --info=progress2 $workDir/$package $buildDir/$package
    rsync --info=progress2 $workDir/$lock $buildDir/$lock
    cd $buildDir/ || return
    yarn
    delete_modules "working" "$workDir"
    copy_modules

# Check if the directory is empty, as it is when it is mounted for the first time.
elif [ -z "$(ls -A $workDir/$nm)" ]
  then
    copy_modules
elif [ ! -f "$workDir/$nm/.docked" ] # Check if modules were copied from build directory.
  then
    # Delete old modules.
    delete_modules "working" "$workDir"
    # Copy modules from build directory to working directory.
    copy_modules
else
    echo "The node_modules folder is good to go; skipping copying."
fi

#########################
# End Script
#########################

if [ "$1" != "git" ] # Check if script was not run by git-merge hook.
  then
    # Change to working directory.
    cd $workDir/ || return
    # Run yarn start command to start development.
    exec yarn start:debug
fi

I added pv to at least show the user the progress of what is happening. I also added a flag file to indicate that node_modules was installed through a container.

Whenever a package is installed, I utilized the postinstall and postuninstall hooks of the package.json file to copy the package.json and yarn.lock files from the working directory to the build directory to keep them up to date. I also installed the postinstall-postinstall package to make sure the postuninstall hook works.

"postinstall"  : "if test $DOCKER_FLAG = 1; then rsync -I --info=progress2 /home/node/work-dir/package.json /home/node/build-dir/package.json && rsync -I --info=progress2 /home/node/work-dir/yarn.lock /home/node/build-dir/yarn.lock && echo 'Build directory files updated.' && touch /home/node/work-dir/node_modules/.docked; else rm -rf ./node_modules/.docked && echo 'Warning: files installed outside container; deleting docker flag file.'; fi",
"postuninstall": "if test $DOCKER_FLAG = 1; then rsync -I --info=progress2 /home/node/work-dir/package.json /home/node/build-dir/package.json && rsync -I --info=progress2 /home/node/work-dir/yarn.lock /home/node/build-dir/yarn.lock && echo 'Build directory files updated.' && touch /home/node/work-dir/node_modules/.docked; else rm -rf ./node_modules/.docked && echo 'Warning: files installed outside container; deleting docker flag file.'; fi",

I used an environment variable called DOCKER_FLAG and set it to 1 in the docker-compose.yml file. That way, it won't run when someone installs outside a container. Also, I made sure to remove the .docked flag file so the script knows it has been installed using host commands.

As for the issue of synchronizing node_modules every time a pull occurs, I used a git hook; namely, the post-merge hook. Every time I pull, it will attempt to run the entrypoint.sh script if the container is running. It also passes the argument git to the script, which the script checks so that it does not run exec yarn start:debug, as the container is already running. Here is my script at .git/hooks/post-merge:

#!/bin/bash

if [ -x "$(command -v docker)" ] && [ "$(docker ps -a | grep <container_name>)" ]
then
  exec docker exec <container_name> sh -c "/home/node/build-dir/entrypoint.sh git"
  exit 1
fi

If the container is not running and I fetched the changes, then the entrypoint.sh script will first check whether there are any differences between the lock files, and if there are, it will reinstall in the build directory and do what it did when the image was built and the container first run. This tutorial may be used to be able to share hooks with teammates.


Note: Be sure to use docker-compose run..., as docker-compose up... won't allow for the progress indicators to appear.

Perretta answered 16/1, 2020 at 13:43 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.