docker container process running as non-root user cannot write to docker volume
TLDR

  • everyone recommends that processes within containers should never run as root
  • outside of Kubernetes there doesn't seem to be a good devops/configuration-as-code approach to set the right owner/permissions on docker volumes, so the (non-root) user ends up unable to write to the volume

What, then, is good practice when running a container process as a non-root user that needs to write to a (cloudstor, aws-ebs) docker volume?


Long story

It is considered bad practice, inside and outside docker containers, to run processes as root (see for example ref1, ref2, ...), as it can have security implications.

But the trouble begins when we start using docker volumes and that non-root user tries to write to them. I have failed to find a clean solution that works on cloud infrastructure without manual intervention. The working solutions I found all fall short on some point (security, maintainability, ...).

As a side note, we are deploying on docker-swarm, using cloudstor to provision aws-ebs volumes. We hope to move to kubernetes one day, but we don't have it yet, so we are looking for an alternative that fits our current setup.

Solutions / Workarounds considered

1. Pre-create the volume within the docker image

As proposed here, if docker-compose creates a new volume, the ownership and permissions of the corresponding directory inside the image are propagated to the volume.

downsides:

  • this won't work if the volume existed before, or if it is a plain folder on disk (bind mount)
  • if the volume is provisioned with cloudstor, this probably won't work either, because docker-compose is not the one provisioning the volume (not tested)
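For reference, the pre-creation trick looks roughly like this in a Dockerfile (a minimal sketch; the user/group "app" and the mount path /data are illustrative, not from the question):

```dockerfile
FROM alpine:3

# Create an unprivileged user and pre-create the future mount point
# with the right owner. If docker-compose creates a fresh named
# volume at /data, these ownership/permission bits are copied onto it.
RUN addgroup -S app && adduser -S -G app app \
    && mkdir -p /data && chown app:app /data

USER app
VOLUME /data
```

As the downsides above note, this only helps when the volume is created fresh by docker-compose; a pre-existing volume or bind mount keeps its own ownership.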

2. Use volumes-provisioner

hasnat created a volumes-provisioner image which could set the correct folder permissions just before the real container starts.

downsides:

  • we need to add an extra service to the docker stack; this service dies almost instantly (after setting the permissions)
  • the real container needs a depends_on on the volumes-provisioner service, and when redeploying the same stack (after configuration changes) the order of execution is not guaranteed
  • ebs volumes can only be mounted on a single docker container at a time, which caused many deployment issues
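A sketch of what this pattern tends to look like in a compose/stack file (the application image, UID/GID 1000:1000, and the "data" volume name are assumptions; the PROVISION_DIRECTORIES format is taken from the volumes-provisioner README as I recall it, so double-check against the image's documentation):

```yaml
version: "3.7"
services:
  provisioner:
    image: hasnat/volumes-provisioner
    environment:
      # user:group:permissions:path
      PROVISION_DIRECTORIES: "1000:1000:0755:/data"
    volumes:
      - data:/data
  app:
    image: my-app            # hypothetical application image
    depends_on:
      - provisioner          # see downside: ordering is not guaranteed
    volumes:
      - data:/data
volumes:
  data:
```

Note that docker stack deploy in swarm mode ignores depends_on entirely, which is exactly the ordering caveat listed above.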

3. Use docker run to correct file permissions

Once the real container is running with the volume mounted (but still with the wrong permissions), we run:

docker run --rm -u root -v ${MOUNT}:${TARGET} { real_image } chown -R user:group ${TARGET}

downsides:

  • the ebs volume can only be mounted in one container at a time, so this creates conflicts
  • this command can only be run after the docker stack has been deployed (otherwise the volume hasn't been provisioned yet), so there is a delay between the startup of the real container and the permission fix. At startup the real container therefore finds the volume with the wrong permissions, and this only works if the service keeps checking whether the permissions have been corrected.
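The "keeps checking" part could be a small poll-until-writable helper in the container's startup script (an illustrative sketch; the path and timeout are assumptions):

```shell
#!/bin/sh
# Block until a directory becomes writable, e.g. until an external
# `docker run ... chown` has fixed the volume's ownership.
# Usage: wait_writable /path/to/volume [timeout_seconds]
wait_writable() {
    dir=$1
    timeout=${2:-60}
    elapsed=0
    while [ ! -w "$dir" ]; do
        if [ "$elapsed" -ge "$timeout" ]; then
            echo "timed out waiting for $dir to become writable" >&2
            return 1
        fi
        sleep 1
        elapsed=$((elapsed + 1))
    done
    return 0
}
```

The service would call wait_writable on the mount point before touching the volume, instead of failing immediately on the first write.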

4. Change ownership when starting the container

This implies:

  • starting the process as the root user (otherwise we don't have the rights to change the directory's owner/permissions)
  • changing ownership/permissions
  • switching to non-root user

downsides:

  • there is still a (short?) period during which the container process runs as root (security implication?)
  • we need to hack the entrypoints, override the user, ... of official images to get this working
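The three steps above are commonly implemented in a small entrypoint script. A minimal sketch, assuming the image has su-exec installed (gosu works the same way) and an unprivileged user "app"; the /data mount path is illustrative:

```shell
#!/bin/sh
# entrypoint.sh - runs as root initially (no USER directive yet).
set -e

# Step 1+2: still root here, so we can fix the volume's ownership.
chown -R app:app /data

# Step 3: drop privileges and hand over to the real process.
# `exec` keeps it as PID 1 so it receives signals correctly.
exec su-exec app "$@"
```

This keeps the root window limited to the chown call, but it is exactly the kind of entrypoint hacking the downside above refers to when wrapping official images.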

5. Just run as root

This is the easiest solution, but then what about security? And what about everybody recommending against it?

6. Use kubernetes

As suggested here, with kubernetes we can assign a group id to a volume. This seems confirmed in the kubernetes documentation for pods.

downsides:

  • (sadly) we don't use kubernetes yet.
  • (not tested.)
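For completeness, the kubernetes mechanism referred to is the pod-level fsGroup security context, which makes the kubelet set group ownership on supported volumes at mount time. A sketch (names, IDs and the PVC are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000        # volumes are made group-writable for GID 1000
  containers:
    - name: app
      image: my-app      # hypothetical image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc
```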

7. Create folders on the file system with correct permissions

Make sure directories exist on the file system with the correct owner/permissions.

downsides:

  • this is not cloud storage... what if the container is rescheduled to another node, or the server crashes? (This is why we use cloudstor, which even allows us to switch availability zones.)
  • it doesn't seem very "configuration-as-code"
Halfbreed answered 9/9, 2020 at 15:48 Comment(3)
Running services as root in k8s is not recommended. For docker-compose environments we create everything without starting the services (docker-compose up --no-start) then set the permissions (docker run --rm -it --volumes-from ... --entrypoint chown alpine:3 -R 1000:1000 /data) then bring the services up. For k8s common practice afaik is using initContainers.Incongruous
aren't you able to add the docker user to a user group that has access to the volume ?Changteh
@Changteh the user running the process inside the container is different for each container and different from the docker userHalfbreed

I vote for solution 4; there is no security issue in changing the permissions as root and then starting your application as a non-root user. If there is a security hole in your application, the application is still not running as root, whatever happened before it started. You can do this in a script used as the entrypoint.

Nertie answered 23/9, 2020 at 6:37 Comment(0)
