How to execute a command from one Docker container on another

I'm creating an application that will allow users to upload video files that will then be put through some processing.

I have two containers.

  1. Nginx container that serves the website where users can upload their video files.
  2. Video processing container that has FFmpeg and some other processing stuff installed.

What I want to achieve: I need container 1 to be able to run a bash script on container 2.

One possibility, as far as I can see, is to make them communicate over HTTP via an API. But then I would need to install a web server in container 2 and write an API, which seems a bit overkill. I just want to execute a bash script.

Any suggestions?

Zettazeugma answered 25/11, 2019 at 15:51 Comment(2)
You could maybe just use a shared volume and watch for changes. You can also share the /var/run/docker.sock and run docker commands from the container.Surpass
This is not really a good design. The techniques you'd need to solve this are the same ones as if the two containers were running on physically separate systems.Tko

You have a few options, but the first 2 that come to mind are:

  1. In container 1, install the Docker CLI and bind mount /var/run/docker.sock (you need to specify the bind mount from the host when you start the container). Then, inside the container, you should be able to use docker commands against the bind-mounted socket as if you were executing them from the host (you might also need to chmod the socket inside the container to allow a non-root user to do this). A sketch follows this list.
  2. You could install SSHD on container 2, and then ssh in from container 1 and run your script. The advantage here is that you don't need to make any changes inside the containers to account for the fact that they are running in Docker and not on bare metal. The downside is that you will need to add the SSHD setup to your Dockerfile or startup scripts.
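
For illustration, a minimal sketch of option (1), assuming the image for container 1 already includes the Docker CLI (the image, container, and script names here are placeholders):

# start container 1 with the host's Docker socket bind mounted
docker run -d --name web \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-nginx-image

# inside "web", the CLI now talks to the host daemon through the socket,
# so you can exec into the processing container as if from the host:
docker exec video-processor /opt/scripts/process.sh /uploads/video.mp4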

Most of the other ideas I can think of are just variants of option (2), with SSHD replaced by some other tool.

Also be aware that Docker networking is a little strange (at least on Mac hosts), so you need to make sure that the containers are on the same Docker network and are able to communicate over it.
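
For example, with the plain docker CLI (network and image names are illustrative):

# create a user-defined bridge network and attach both containers to it
docker network create app-net
docker run -d --name web --network app-net my-nginx-image
docker run -d --name worker --network app-net my-ffmpeg-image

# "web" can now reach "worker" by container name, e.g. ssh user@worker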

Warning:

To be completely clear, do not use option 1 outside of a lab or a very controlled dev environment. It takes a secure socket that has full authority over the Docker runtime on the host and grants unchecked access to it from a container. Doing that makes it trivially easy to break out of the Docker sandbox and compromise the host system. About the only place I would consider it acceptable is as part of a full-stack integration test setup that will only be run ad hoc by a developer. It's a hack that can be a useful shortcut in some very specific situations, but the drawbacks cannot be overstated.

Ambagious answered 25/11, 2019 at 16:19 Comment(3)
I considered option 1 using docker sockets, but it seems a bit dangerous. I also don't like the idea of installing docker inside of docker. I will go with the SSHD option since it seems like the best choice. Thanks for your help, greatly appreciatedZettazeugma
A lot of folks would agree with you about the socket option. It really depends on what you're doing, but SSHD is definitely the safer option, especially for a prod system. Glad I could help :)Ambagious
@NicolasBuch I hope you don't mind me asking but why does it seem dangerous to you? I understand that, if the container is open to the public, then yes, that is very dangerous. But I have got a cron container that needs to execute commands in the container. The cron container itself is not connected to the outside world.Dine

I wrote a Python package especially for this use case.

Flask-Shell2HTTP is a Flask extension that converts a command-line tool into a RESTful API with a mere 5 lines of code.

Example Code:

from flask import Flask
from flask_executor import Executor
from flask_shell2http import Shell2HTTP

app = Flask(__name__)
executor = Executor(app)
shell2http = Shell2HTTP(app=app, executor=executor, base_url_prefix="/commands/")

# each registered command becomes a POST endpoint under /commands/
shell2http.register_command(endpoint="saythis", command_name="echo")
shell2http.register_command(endpoint="run", command_name="./myscript")

# run the app on the port used in the curl example below
app.run(port=4000)

The endpoint can then be called easily, like:

$ curl -X POST -H 'Content-Type: application/json' -d '{"args": ["Hello", "World!"]}' http://localhost:4000/commands/saythis

You can use this to create RESTful micro-services that can execute pre-defined shell commands/scripts with dynamic arguments asynchronously and fetch the result.

It supports file uploads, callback functions, reactive programming, and more. I recommend checking out the Examples.

Ligroin answered 1/9, 2020 at 15:8 Comment(3)
Sounds cool! Isn't there a Django version of it?Msg
This is creative, and probably a good solution in a lot of cases. Mainly, anything that uses a base image that includes Python but doesn't have sshd (which is a lot of base images). I would be cautious about this in a production environment, since it appears to expose these endpoints without any sort of authentication. But for a local test setup, this seems pretty easy.Ambagious
@Ambagious You can wrap the endpoints in a decorator to implement authentication, logging, etc. basically anything.Ligroin

Running a docker command from a container is not straightforward and not really a good idea (in my opinion), because:

  1. You'll need to install Docker in the container (and do Docker-in-Docker stuff).
  2. You'll need to share the Unix socket, which is not a good thing if you don't fully understand the implications.

So, this leaves us with two solutions:

  1. Install SSH in your container and execute the command through ssh.
  2. Share a volume and have a process that watches for something to trigger your batch (see the sketch below).
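
As a rough sketch of option 2 (paths and the script name are placeholders), the processing container could poll a volume that is mounted in both containers:

# runs inside container 2; /shared is the volume mounted in both containers
while true; do
  for f in /shared/incoming/*.mp4; do
    [ -e "$f" ] || continue           # skip when the glob matches nothing
    /opt/scripts/process.sh "$f" && mv "$f" /shared/done/
  done
  sleep 5
done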
Pee answered 25/11, 2019 at 16:19 Comment(3)
I was considering sharing the docker socket. But I don't like the idea of having to install docker in my docker images. It seems a bit dangerous, especially since I don't fully grasp which security vulnerabilities it might open. Watching a folder for files is not going to work either, since I need to run different scripts based on some parameters. I would have to have many different folders for each scenario. So this leaves us with executing commands via SSH. This might be the best solution. I'll give it a try, thanks!Zettazeugma
+1 for the shared volume idea. Polling on a touchfile isn't normally my favorite approach, but it does the job and might be the most straightforward solution, depending on the situation.Ambagious
Also a great project is github.com/msoap/shell2http. It even allows to send a file through HTTP-Post. So not even a shared volume is needed.Baptist

It was mentioned here before, but a reasonable, semi-hacky option is to install SSH in both containers and then use ssh to execute commands on the other container:

# install SSH, if you don't have it already
sudo apt install openssh-server

# start the ssh service
sudo service ssh start

# start the daemon
sudo /usr/sbin/sshd -D &

Assuming you don't want to always be root, you can add a default user (in this case, 'foobob'):

useradd -m --no-log-init --system  --uid 1000 foobob -s /bin/bash -g sudo -G root

#change password
echo 'foobob:foobob' | chpasswd

Do this on both the source and target containers. Now you can execute a command from container_1 to container_2.

# obtain container-id of target container using 'docker ps'
ssh foobob@<container-id> << "EOL"
echo 'hello bob from container 1' > message.txt
EOL

You can automate the password with ssh-agent, or you can use something a bit more hacky with sshpass (install it first using sudo apt install sshpass):

sshpass -p 'foobob' ssh foobob@<container-id>
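
If you'd rather avoid scripted passwords altogether, here is a hedged sketch using key-based auth instead (file names are illustrative):

# on container 1: generate a key pair once
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ''

# bake the public key into container 2's image, e.g. in its Dockerfile:
#   COPY id_ed25519.pub /home/foobob/.ssh/authorized_keys
# (mind the ownership and permissions of that file)

# then no password prompt is needed:
ssh -i ~/.ssh/id_ed25519 foobob@<container-id> 'echo hello from container 1'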
Charlet answered 17/7, 2020 at 20:52 Comment(4)
hi, thanks for sharing this solution. I tried it and was wondering where does the message.txt file go on the container 2? I put a verbose flag in the ssh cmd and at the end it showed: Transferred: sent 2836, received 2644 bytes, in 0.1 seconds Bytes per second: sent 30489.4, received 28425.2. Seems like the file was transferred?Towhee
This should simply create text file message.txt in your home directory on "container_2" with the single line of text 'hello bob ...'. There should be no file transferred. This report from ssh is only telling you about the text it sent and received back as communication overheadCharlet
thanks, I found out my working directory was set to another location, so I didn't see it. I checked the home dir and saw the file. I appreciate you sharing this solution!Towhee
sshpass should come with a warning (and it does, in the man page): ssh intentionally makes it hard to enter passwords from a script, because that pattern almost invariably leads to insecure password management practices. If possible, it is probably better to use public keys that you build into image that runs on the target container.Ambagious

You could write a very basic API using Ncat and GNU sed’s e command.

If needed, install nmap-ncat and GNU sed, then run something like this in the container you want to control:

ncat -lkp 9000 | sed \
  -e '/^cmd1$/e /opt/foo.sh' \
  -e '/^stop$/e kill -s INT 1'

The entrypoint script would look like this:

ncat -lkp 9000 | sed \
  -e '/^cmd1$/e /opt/foo.sh' \
  -e '/^stop$/e kill -s INT 1' &

exec /opt/some/daemon

exec is required to run the daemon with process ID 1, which is needed to stop it gracefully.

And to send commands to this container, use something like

echo stop | nc containername 9000

Note: you can use nc or ncat for sending commands, but on the receiving side, nc from BusyBox does not keep listening for new requests without using -e, which would need a different approach.

When also using a restart policy with Docker Compose, this could be used to restart containers (for example, to reload configuration or certificates) without having to give the controlling container access to the Docker socket (/var/run/docker.sock), which is insecure.
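
For example, the plain-CLI equivalent of such a restart policy (names illustrative):

# Docker restarts the container whenever PID 1 exits,
# so the "stop" command above effectively becomes a reload
docker run -d --restart unless-stopped --name worker my-ffmpeg-image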

Mannie answered 7/3, 2024 at 17:30 Comment(0)

I believe

docker exec -it <container_name> <command>

should work, even inside the container.

You could also try mounting docker.sock into the container you're trying to execute the command from:

docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Errantry answered 25/11, 2019 at 16:6 Comment(4)
Calling docker from inside a container won't work on its own. You need 2 things, 1: the /var/run/docker.sock shared, and 2: a version of docker installed in your container. If the container and host are both Linux, some users share their host's docker binary. But the better option is to have a container image with docker installed, so that the host can be Windows or Linux.Surpass
Fair enough, I did not think about the requirement to have docker installed in the running container.Errantry
No problem. Also worth noting you would only need the docker CLI inside the container. Missed that bit :)Surpass
Remember that this also allows the container to make arbitrary changes to arbitrary files on the host as root and generally take over the whole system. I would not do this casually.Tko
