How to run shell script on host from docker container?
Asked Answered
P

16

216

How can I control the host from inside a docker container?

For example, how can I execute a bash script that has been copied to the host?

Pressure answered 23/8, 2015 at 6:44 Comment(7)
wouldn't that be exactly the opposite of isolating host from docker?Adley
Yes. But it's sometimes necessary.Pressure
possible duplicate of Execute host commands from within a docker containerAdley
Not sure about "control host" but I was recently at a talk by data scientists who are using docker to run scripts to process huge workloads (using AWS mounted GPUs) and output the result to the host. A very interesting use case. Essentially scripts packaged with a reliable execution environment thanks to dockerCracker
@Cracker And why they prefer app-containerization via docker instead of using system-level containers (LXC)?Pressure
@AlexUshakov I presume when spinning up X nodes for N hours then destroying them the benefits are in the orchestration of the environment to ensure it is identical to dev (except the size of the input data). It solves dependency hell ... but I cannot comment on LXC. I understand they often dedicate the entire machine/VM (and GPU) to one container which performs comparably to running on the bare VM. I'm no data scientist but I found these examples github.com/saiprashanths/dl-docker or emergingstack.com/2016/01/10/…Cracker
Maybe I'm trying to do it in the wrong way, but here's what I'd like to achieve: 1. there's a "docker package", on some repo, that contains a folder with docker-compose.yml and a few other files 2. I git-clone this repo, cd into its directory and fire docker-compose up 3. as the result I get: - A web-server with nginx/php-fpm/mysql stuff - A working directory with the project code on my host system - … which is also mounted to some folder on the webserver. I believe that getting the project code implies running a few commands on the host from within the Dockerfile?Perfective
E
36

That REALLY depends on what you need that bash script to do!

For example, if the bash script just echoes some output, you could just do

docker run --rm -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh

Another possibility is that you want the bash script to install some software, say a script to install docker-compose. You could do something like

docker run --rm -v /usr/bin:/usr/bin --privileged -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh

But at this point you're really getting into having to know intimately what the script is doing to allow the specific permissions it needs on your host from inside the container.

Eger answered 23/8, 2015 at 13:29 Comment(14)
I had the idea to make a container that connects to the host and creates new containers.Pressure
Docker doesn't seem to like your relative mount. This should work docker run --rm -v $(pwd)/mybashscript.sh:/work/mybashscript.sh ubuntu /work/mybashscript.shCracker
Thanks for the catch, I fixed the post!Eger
The first line starts a new ubuntu container, and mounts the script where it can be read. It does not allow the container access to the host filesystem, for instance. The second line exposes the host's /usr/bin to the container. In neither case does the container have full access to the host system. Maybe I'm wrong, but it seems like a bad answer to a bad question.Florin
Fair enough- the question was pretty vague. The question didn't ask for "full access to the host system". As described, if the bash script is only intended to echo some output, it wouldn't NEED any access to the host filesystem. For my second example, which was installing docker-compose, the only permission you need is access to the bin directory where the binary gets stored. As I said in the beginning- to do this you would have to have very specific ideas about what the script is doing to allow the right permissions.Eger
The question as I interpreted it is that it is intended for the host to run the script, not the container. So docker run is not the answer. Something like allowing the container to ssh to the host and run the script is the answer. I didn't even notice that @MohammedNoureldin has the right answer and is almost voted over the accepted answer.. I will help him do that.Startle
But... A docker container is not a VM. Everything it does is on the host system running in the host kernel. Depending on the flags a container can run any process and modify any part of the host system.Eger
I know its an old question, but for example. If mybashscript.sh echoes let's say the MAC address (something hardware specific), even though it gets invoked in the container, would the output be the same as if I were to run the script directly in a terminal on the host machine? Or does this method just give me access to the script, and the output would be exactly as if I had run the script in the container?Lovett
MAC address depends on the network settings. By default, each container gets its own ip address and mac address. However, using --network=host would connect the container to the host's network with no network isolation, and that command would output the same in the container as on the host.Eger
is it possible to run a shell on the host from a container if the container has access to the host docker.sock?Dinnerware
@AlexUshakov I know this question is quite old but I have the same use case, I came across jpetazzo.github.io/2015/09/03/… which suggested sharing the docker socket with your container. like this docker run -v /var/run/docker.sock:/var/run/docker.sock ...Chemistry
A container can control the host docker daemon (and launch sibling containers) so long as it has access to the docker socket, has a docker client installed, and has the 'privileged' flag. Remember that this basically gives the container root access to the host. You can also run a new instance of containerd inside the container, and there has been some progress on running docker without root permissions.Eger
Tried this, the script is executed in container, not on hostIngle
Yes- but the container isn't a separate thing. It is a process running on the host in a chroot and with a permissions namespace. When you do 'docker run' it launches a process and sets up permissions on what files it can see and things it can do. It's not the default, but you can give the process full root permission on the host as well as mount the host filesystem inside the container filesystem. You shouldn't, but you can. So if you know exactly what the script needs to do, you can set up your container to have all the needed permissions.Eger
T
201

This answer is just a more detailed version of Bradford Medeiros's solution, which for me as well turned out to be the best answer, so credit goes to him.

In his answer, he explains WHAT to do (named pipes) but not exactly HOW to do it.

I have to admit I didn't know what named pipes were when I read his solution. So I struggled to implement it (while it's actually very simple), but I did succeed. So the point of my answer is just detailing the commands you need to run in order to get it working, but again, credit goes to him.

PART 1 - Testing the named pipe concept without docker

On the main host, choose the folder where you want to put your named pipe file, for instance /path/to/pipe/, and a pipe name, for instance mypipe, and then run:

mkfifo /path/to/pipe/mypipe

The pipe is created. Type

ls -l /path/to/pipe/mypipe 

And check that the access rights start with "p", such as

prw-r--r-- 1 root root 0 mypipe

Now run:

tail -f /path/to/pipe/mypipe

The terminal is now waiting for data to be sent into this pipe

Now open another terminal window.

And then run:

echo "hello world" > /path/to/pipe/mypipe

Check the first terminal (the one with tail -f), it should display "hello world"

PART 2 - Run commands through the pipe

On the host, instead of running tail -f (which just outputs whatever is sent as input), run this command, which will execute the input as commands:

eval "$(cat /path/to/pipe/mypipe)"

Then, from the other terminal, try running:

echo "ls -l" > /path/to/pipe/mypipe

Go back to the first terminal and you should see the result of the ls -l command.

PART 3 - Make it listen forever

You may have noticed that in the previous part, right after ls -l output is displayed, it stops listening for commands.

Instead of eval "$(cat /path/to/pipe/mypipe)", run:

while true; do eval "$(cat /path/to/pipe/mypipe)"; done

(you can nohup that)

Now you can send an unlimited number of commands one after the other; they will all be executed, not just the first one.

PART 4 - Make it work even when reboot happens

The only caveat is that if the host reboots, the "while" loop will stop working.

To handle reboots, here is what I've done:

Put the while true; do eval "$(cat /path/to/pipe/mypipe)"; done loop in a file called execpipe.sh with a #!/bin/bash header

Don't forget to chmod +x it

Add it to crontab by running

crontab -e

And then adding

@reboot /path/to/execpipe.sh

At this point, test it: reboot your server, and when it's back up, echo some commands into the pipe and check if they are executed. Of course, you aren't able to see the output of commands, so ls -l won't help, but touch somefile will help.

Another option is to modify the script to put the output in a file, such as:

while true; do eval "$(cat /path/to/pipe/mypipe)" &> /somepath/output.txt; done

Now you can run ls -l and the output (both stdout and stderr using &> in bash) should be in output.txt.
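
(As a commenter below notes, a systemd service is an alternative to the cron @reboot entry; a minimal sketch, with the unit name and script path assumed:)

# /etc/systemd/system/execpipe.service
[Unit]
Description=Execute commands written to the host named pipe

[Service]
ExecStart=/path/to/execpipe.sh
Restart=always

[Install]
WantedBy=multi-user.target

Enable it with sudo systemctl enable --now execpipe.service.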

PART 5 - Make it work with docker

If you are using both docker compose and dockerfile like I do, here is what I've done:

Let's assume you want to mount mypipe's parent folder as /hostpipe in your container

Add this:

VOLUME /hostpipe

in your dockerfile in order to create a mount point

Then add this:

volumes:
   - /path/to/pipe:/hostpipe

in your docker compose file in order to mount /path/to/pipe as /hostpipe

Restart your docker containers.

PART 6 - Testing

Exec into your docker container:

docker exec -it <container> bash

Go into the mount folder and check you can see the pipe:

cd /hostpipe && ls -l

Now try running a command from within the container:

echo "touch this_file_was_created_on_main_host_from_a_container.txt" > /hostpipe/mypipe

And it should work!

WARNING: If you have an OSX (Mac OS) host and a Linux container, it won't work (explanation here https://mcmap.net/q/128385/-named-pipes-in-docker-container-folder-mounted-to-mac-os-x-file-system-through-boot2docker and issue here https://github.com/docker/for-mac/issues/483 ) because the pipe implementation is not the same, so what you write into the pipe from Linux can be read only by a Linux and what you write into the pipe from Mac OS can be read only by a Mac OS (this sentence might not be very accurate, but just be aware that a cross-platform issue exists).

For instance, when I run my docker setup in DEV from my Mac OS computer, the named pipe as explained above does not work. But in staging and production, I have Linux host and Linux containers, and it works perfectly.

PART 7 - Example from Node.JS container

Here is how I send a command from my Node.JS container to the main host and retrieve the output:

const fs = require("fs") // needed for the fs calls below; missing from the original snippet

const pipePath = "/hostpipe/mypipe"
const outputPath = "/hostpipe/output.txt"
const commandToRun = "pwd && ls -l"

console.log("delete previous output")
if (fs.existsSync(outputPath)) fs.unlinkSync(outputPath)

console.log("writing to pipe...")
const wstream = fs.createWriteStream(pipePath)
wstream.write(commandToRun)
wstream.close()

console.log("waiting for output.txt...") //there are better ways to do that than setInterval
let timeout = 10000 //stop waiting after 10 seconds (something might be wrong)
const timeoutStart = Date.now()
const myLoop = setInterval(function () {
    if (Date.now() - timeoutStart > timeout) {
        clearInterval(myLoop);
        console.log("timed out")
    } else {
        //if output.txt exists, read it
        if (fs.existsSync(outputPath)) {
            clearInterval(myLoop);
            const data = fs.readFileSync(outputPath).toString()
            if (fs.existsSync(outputPath)) fs.unlinkSync(outputPath) //delete the output file
            console.log(data) //log the output of the command
        }
    }
}, 300);
Tarantella answered 3/9, 2020 at 8:10 Comment(16)
This works nicely. What about security? I want to use this to start/stop docker containers from within a running container? Do I just make a dockeruser without any privileges except for running docker commands?Enterpriser
@Tarantella could you know how to run command in php? I try shell_exec('echo "mkdir -p /mydir" > /path/mypipe') but this not working. Any idea?Nicosia
of course the command works in a container, but not from php codeNicosia
Excellent, this works beautifully. Especially useful if you have already mounted some folders on both systems. Thanks for that!Highbinder
Credit goes to him, but my vote goes to you.Salutatory
I needed to be able to run docker-compose from within a container. I setup a new user called docker and put them the in the docker group. I set the permissions of the pipe to docker:docker and used the docker users crontab to run the @rebootWholesome
Thank you! Now what if I need to read results back from the command? Can we make a 2 way pipe?Ringster
Hi, I am trying to reproduce this without success. The script of reading the named pipe works if run from terminal. I echo something into it, and it does something on the other end. However, if the same script run at reboot, nothing happens...even echo "ls" > named_pipe does not return prompt! I tried putting '&' at the end of the command at crontab @reboot line, but still does not work. I am running it on a raspberry pi if this would make any difference. ThanksBeriberi
@Nicosia The answer is for the future generation. You need to give write permissions to mypipe: chmod o+w mypipeRoseline
for part 4 I used a systemd service rather than a cron jobStalder
if I run a bash script inside Nodejs app, is it running on the host? e.g ovs-ofctl add-flow ...Frug
Well explained answerAube
Presumably this pipe is just one-way, though. This is severely limited if STDOUT & STDERR cannot be passed back to the container.Cathode
This works great! you can use a shared file between the host and container (on the volume you mapped) to dump the command outputs you want and have the container read it. Also, i want to run this on my QNAP and it doesnt have the mkfifo command. i cant install coreutils neither.... anybody found a way to install mkfifo on QNAP?Bushey
Regarding security, if you don't trust what will be inserted into the pipe, you can use the contents only to run predefined scripts, e.g. SCRIPT=$(cat /path/to/pipe")Chimene
Step 2 & Step 3 could be simplified with tail -f /path/to/pipe/mypipe | bashGabo
P
153

Use a named pipe. On the host OS, create a script that loops, reading commands from the pipe, and then calls eval on them.

Have the docker container read to that named pipe.

To be able to access the pipe, you need to mount it via a volume.

This is similar to the SSH mechanism (or a similar socket-based method), but restricts you properly to the host device, which is probably better. Plus you don't have to be passing around authentication information.

My only warning is to be cautious about why you are doing this. It's totally something to do if you want to create a method to self-upgrade with user input or whatever, but you probably don't want to call a command to get some config data, as the proper way would be to pass that in as args/volume into docker. Also, be cautious about the fact that you are evaling, so just give the permission model a thought.

Some of the other answers such as running a script under a volume won't work generically since they won't have access to the full system resources, but it might be more appropriate depending on your usage.
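
A minimal sketch of the approach (paths, pipe name, and image are illustrative; the more detailed named-pipe answer above walks through the same steps):

# on the host: create the pipe and run a loop that executes whatever is written to it
mkfifo /path/to/pipe/mypipe
while true; do eval "$(cat /path/to/pipe/mypipe)"; done

# start the container with the pipe's folder mounted as a volume
docker run -v /path/to/pipe:/hostpipe myimage

# inside the container: write a command to the pipe; the host-side loop runs it
echo "touch /tmp/created_from_container" > /hostpipe/mypipe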

Proprioceptor answered 17/4, 2018 at 8:42 Comment(10)
ATTENTION: This is the right/best answer, and it needs a little more praise. Every other answer is fiddling with asking "what you're trying to do" and making exceptions for stuff. I have a very specific use-case that requires me to be able to do this, and this is the only good answer imho. SSH above would require lowering security/firewall standards, and the docker run stuff is just flat out wrong. Thanks for this. I assume this doesn't get as many upvotes because it's not a simple copy/paste answer, but this is the answer. +100 points from me if I couldMorganite
For those looking for some more info, you can use the following script running on the host machine: unix.stackexchange.com/a/369465 Of course, you'll have to run it with 'nohup' and create some kind of supervisor wrapper in order to maintain it alive (maybe use a cron job :P)Palawan
I created a diagram to illustrate a use case: imgur.com/a/9Wkxqu9Palawan
This might be a good answer. However, it would be much better if you give more details and some more command line explanation. Is it possible to elaborate?Tegucigalpa
Upvoted, This works! Make a named pipe using 'mkfifo host_executor_queue' where the volume is mounted. Then to add a consumer which executes commands that are put into the queue as host's shell, use 'tail -f host_executor_queue | sh &'. The & at the end makes it run in the background. Finally to push commands into the queue use 'echo touch foo > host_executor_queue' - this test creates a temp file foo at home directory. If you want the consumer to start at system startup, put '@reboot tail -f host_executor_queue | sh &' in crontab. Just add relative path to host_executor_queue.Bobbysoxer
as a followup, my consumer on host machine kept dying for some reason. Just added nohup to the command. Its '@reboot nohup tail -f host_executor_queue | sh &' that keeps it running. see (unix.stackexchange.com/a/32580/350867)Bobbysoxer
This would be a good answer if it gave an example of how to do it. It's just a description with no links to any relevant content. Not a very good answer, but just a nudge in the right direction.Philosophize
Read the pipe and eval: github.com/BradfordMedeiros/automate_firmware/blob/… Write to the pipe: github.com/BradfordMedeiros/automate_firmware/blob/… If you dig around you can see building the docker image/running it with the args, etc.Proprioceptor
@LucasPottersky this should be basically be done by the person who poster the answer.Tegucigalpa
one post up there's a github link i added with this completely workingProprioceptor
T
96

The solution I use is to connect to the host over SSH and execute the command like this:

ssh -l ${USERNAME} ${HOSTNAME} "${SCRIPT}"

UPDATE

As this answer keeps getting upvotes, I would like to remind (and highly recommend) that the account being used to invoke the script should be an account with no permissions at all, except for executing that script as sudo (that can be done from the sudoers file).
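
A sketch of what that could look like (the deploy user, key path, and script path are assumptions for illustration, not part of the original setup):

# /etc/sudoers.d/docker-deploy on the host: allow exactly one script, nothing more
deploy ALL=(root) NOPASSWD: /usr/local/bin/deploy.sh

# from inside the container
ssh -i /path/to/key -l deploy ${HOSTNAME} "sudo /usr/local/bin/deploy.sh"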

UPDATE: Named Pipes

The solution I suggested above was just the one I used while I was relatively new to Docker. Now, in 2021, take a look at the answers that talk about Named Pipes. They seem to be a better solution.

However, nobody there mentioned anything about security. The script that evaluates the commands sent through the pipe (the script that calls eval) must not actually eval the whole pipe output, but handle specific cases and call the required commands according to the text sent; otherwise any command that can do anything could be sent through the pipe.
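
For example, the host-side loop could dispatch on a small whitelist instead of eval-ing arbitrary text (a sketch; the pipe path and command names are illustrative):

while true; do
  cmd="$(cat /path/to/pipe/mypipe)"
  case "$cmd" in
    backup)  /usr/local/bin/backup.sh ;;
    cleanup) /usr/local/bin/cleanup.sh ;;
    *)       echo "ignored: $cmd" >> /var/log/pipe-commands.log ;;
  esac
done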

Tegucigalpa answered 12/7, 2017 at 13:29 Comment(10)
As another workaround, container could output a set of commands and the host could run them after the container exits: eval $(docker run --rm -it container_name_to_output script)Startle
I need to run a command line on the Host from inside a Docker container, but when I go into the container, ssh is not found. Do you have any other suggestions?Carminecarmita
@RonRosenfeld, which Docker image are you using? in case of debian/ubuntu, run this: apt update && apt install openssh-client.Tegucigalpa
It would be whatever got installed on my Synology NAS. How can I tell?Carminecarmita
@RonRosenfeld, sorry I do not understand what you meanTegucigalpa
I think I know what you mean. The docker image from which I want to run the command is called linuxserver/nzbget:latest. The Synology NAS on which it is installed uses its own custom version of linux.Carminecarmita
@RonRosenfeld, ok, so you need to install OpenSSH somehow on it, depending on its repository managerTegucigalpa
what do you mean by it?Carminecarmita
@RonRosenfeld, you need to install OpenSSH somehow in the containerTegucigalpa
Thank you. I will try to figure that out.Carminecarmita
T
36

My laziness led me to find the easiest solution that wasn't published as an answer here.

It is based on the great article by luc juggery.

All you need to do in order to gain a full shell to your linux host from within your docker container is:

docker run --privileged --pid=host -it alpine:3.8 \
nsenter -t 1 -m -u -n -i sh

Explanation:

--privileged: grants additional permissions to the container; it allows the container to gain access to the devices of the host (/dev)

--pid=host: allows the container to use the process tree of the Docker host (the VM in which the Docker daemon is running)

nsenter utility: allows running a process in existing namespaces (the building blocks that provide isolation to containers)

nsenter (-t 1 -m -u -n -i sh) runs the process sh in the same isolation context as the process with PID 1. The whole command will then provide an interactive sh shell in the VM

This setup has major security implications and should be used with caution (if at all).
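
If you only need to run a single host command or script rather than an interactive shell, the same pattern works non-interactively (the script path is an assumption):

docker run --rm --privileged --pid=host alpine:3.8 \
    nsenter -t 1 -m -u -n -i sh -c '/path/to/script-on-host.sh'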

Tribe answered 28/7, 2020 at 18:14 Comment(3)
By far the best and easiest solution! thank you Shmulik for providing it(Yashar Koach!)Brazen
This is by far the easiest solutionLuciana
The trouble is this allows things outside of the docker to run inside the docker. However if you want to run a process located inside the container and allows that process to see and run commands outside of the container on the host, this method falls apart, because it can't find the command inside the docker container in the first place.Vientiane
H
12

Write a simple python server listening on a port (say 8080), bind the port to the container with -p 8080:8080, then make an HTTP request to localhost:8080 to ask the python server to run the shell script with popen, either with curl or by writing code to make the HTTP request, e.g. curl -d '{"foo":"bar"}' localhost:8080

#!/usr/bin/python
from BaseHTTPServer import BaseHTTPRequestHandler,HTTPServer
import subprocess
import json

PORT_NUMBER = 8080

# This class handles any incoming request from
# the browser
class myHandler(BaseHTTPRequestHandler):
        def do_POST(self):
                content_len = int(self.headers.getheader('content-length'))
                post_body = self.rfile.read(content_len)
                self.send_response(200)
                self.end_headers()
                data = json.loads(post_body)

                # Use the post data
                cmd = "your shell cmd"
                p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
                p_status = p.wait()
                (output, err) = p.communicate()
                print "Command output : ", output
                print "Command exit status/return code : ", p_status

                self.wfile.write(cmd + "\n")
                return
try:
        # Create a web server and define the handler to manage the
        # incoming request
        server = HTTPServer(('', PORT_NUMBER), myHandler)
        print 'Started httpserver on port ' , PORT_NUMBER

        # Wait forever for incoming http requests
        server.serve_forever()

except KeyboardInterrupt:
        print '^C received, shutting down the web server'
        server.socket.close()
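
If the server script runs on the host itself (as the comments below suggest), a container on the default bridge network can typically reach it via the bridge gateway, e.g. 172.17.0.1 on Linux, or via host.docker.internal on Docker Desktop; the JSON payload here is only illustrative, since the handler above ignores its fields:

curl -d '{"cmd":"ls -la"}' http://172.17.0.1:8080/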
Herbivore answered 21/6, 2018 at 22:57 Comment(5)
IMO this is the best answer. Running arbitrary commands on the host machine MUST be done through some kind of API (e.g. REST). This is the only way that security can be enforced and running processes can be properly controlled (e.g. killing, handling stdin, stdout, exit-code, and so on). If course it would be pretty if this API could run inside Docker, but personally I don't mind to run it on the host directly.Precinct
Please correct me if I'm wrong, but subprocess.Popen will run the script in the container, not on the host, right? (Regardless if the script's source is on the host or in the container.)Conceive
@Arjan, if you run the above script inside a container, Popen will execute the command in the container as well. However, if you run the above script from the host, Popen will execute the command on the host.Precinct
Thanks, @barney765. Running on the host to provide an API makes sense, like does your first comment. I guess (for me) the "bind the port -p 8080:8080 with the container" is the confusing part. I assumed the -p 8080:8080 was supposed to be part of the docker command, publishing that API's port from the container, making me think it was supposed to be running in the container (and that subprocess.Popen was supposed to do the magic to run things on the host, from the container). (Future readers, see How to access host port from docker container.)Conceive
this will not work with bind port 8080:8080 then spawning a server on 8080 since the port already in use at the hostNecrose
C
6

If you are not worried about security and you're simply looking to start a docker container on the host from within another docker container like the OP, you can share the docker server running on the host with the docker container by sharing its listen socket.

Please see https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface and see if your personal risk tolerance allows this for this particular application.

You can do this by adding the following volume args to your start command

docker run -v /var/run/docker.sock:/var/run/docker.sock ...

or by sharing /var/run/docker.sock within your docker compose file like this:

version: '3'

services:
   ci:
      command: ...
      image: ...
      volumes:
         - /var/run/docker.sock:/var/run/docker.sock

When you run the docker start command within your docker container, the docker server running on your host will see the request and provision the sibling container.
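
For example, from inside a container started with the socket mounted (and with a docker CLI installed, as a commenter notes below), the following launches a sibling container on the host; the alpine image and echo command are just placeholders:

docker run --rm alpine echo "hello from a sibling container"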

credit: http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/

Chemistry answered 5/3, 2019 at 19:58 Comment(4)
Consider that docker must be installed in the container, otherwise you will also need to mount a volume for the docker binary (e.g. /usr/bin/docker:/usr/bin/docker).Teodoor
Please be carefull when mounting the docker socket in your container, this could be a serious security issue: docs.docker.com/engine/security/security/…Winded
@Winded thanks, I've edited my answer to reflect the issues outlined by your resource.Chemistry
This does not answer the question, which is about running a script on the host, not in a containerArch
A
6

I found the answers using named pipes awesome. But I was wondering if there is a way to get the output of the executed command.

The solution is to create two named pipes:

mkfifo /path/to/pipe/exec_in
mkfifo /path/to/pipe/exec_out

Then, the solution using a loop, as suggested by @Vincent, would become:

# on the host
while true; do eval "$(cat /path/to/pipe/exec_in)" > /path/to/pipe/exec_out; done

And then on the docker container, we can execute the command and get the output using:

# on the container
echo "ls -l" > /path/to/pipe/exec_in
cat /path/to/pipe/exec_out

In case anyone is interested, my need was to use a failover IP on the host from the container, so I created this simple ruby method:

def fifo_exec(cmd)
  exec_in = '/path/to/pipe/exec_in'
  exec_out = '/path/to/pipe/exec_out'
  %x[ echo #{cmd} > #{exec_in} ]
  %x[ cat #{exec_out} ]
end
# example
fifo_exec "curl https://ip4.seeip.org"
Arelus answered 4/8, 2022 at 9:56 Comment(0)
F
1

As Marcus reminds us, docker is basically process isolation. Starting with docker 1.8, you can copy files both ways between the host and the container; see the docs for docker cp

https://docs.docker.com/reference/commandline/cp/

Once a file is copied, you can run it locally
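
For example (the container name and paths are illustrative):

# copy a script out of a running container, then execute it on the host
docker cp mycontainer:/app/myscript.sh /tmp/myscript.sh
bash /tmp/myscript.sh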

Firework answered 23/8, 2015 at 6:53 Comment(7)
I know it. How to run this script from, in other words, inside docker container?Pressure
duplicate of #31721435 ?Firework
@AlexUshakov: no way. Doing that would break a lot of the advantages of docker. Don't do it. Don't try it. Reconsider what you need to do.Adley
See also Vlad's trick forums.docker.com/t/…Firework
you can always, on the host, get the value of some variable in your container, something like myvalue=$(docker run -it ubuntu echo $PATH) and test it regularly in a shell script (of course, you will use something else than $PATH, this is just an example); when it is some specific value, you launch your scriptFirework
@MarcusMüller It is absolutely possible and are plenty of good reasons for doing it; build jobs and tests that leverage the normalised environment of a Docker container are a great example and a common use case (especially in complex builds for cross platform software and/or running integration tests on a system with a complex stack).Fauve
Posted link is brokenSashasashay
D
1
This starts a busybox container with the host's root filesystem mounted at /mnt/rootdir; chrooting into it then gives you a shell rooted in the host's filesystem:

docker run --detach-keys="ctrl-p" -it -v /:/mnt/rootdir --name testing busybox
# chroot /mnt/rootdir
# 
Dinnerware answered 14/12, 2018 at 15:12 Comment(2)
While this answer might resolve the OP's question, it is suggested that you explain how it works and why it resolves the issue. This helps new developers understand what is going on and how to fix this and similar issues themselves. Thanks for contributing!Soldo
is there a way we can run a shell script which available locally through docker containerPruett
T
1

I have a simple approach.

Step 1: Mount /var/run/docker.sock:/var/run/docker.sock (So you will be able to execute docker commands inside your container)

Step 2: Execute the command below inside your container. The key part here is --network host, as this will execute from the host's network context

docker run -i --rm --network host -v /opt/test.sh:/test.sh alpine:3.7 sh /test.sh

test.sh should contain some commands (ifconfig, netstat, etc.), whatever you need. Now you will be able to get host-context output.
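
A possible test.sh, purely as an illustration:

#!/bin/sh
# with --network host these report the host's network view
ifconfig
netstat -tuln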

Tackling answered 23/1, 2019 at 7:9 Comment(1)
According to docker official documentation on networking using host network, "However, in all other ways, such as storage, process namespace, and user namespace, the process is isolated from the host." Check out - docs.docker.com/network/network-tutorial-hostXenocryst
B
0

You can use the pipe concept, but use a file on the host and fswatch to accomplish the goal of executing a script on the host machine from a docker container. Like so (use at your own risk):

#! /bin/bash

touch .command_pipe
chmod +x .command_pipe

# Use fswatch to execute a command on the host machine and log result
fswatch -o --event Updated .command_pipe | \
            xargs -n1 -I "{}"  .command_pipe >> .command_pipe_log  &

 docker run -it --rm  \
   --name alpine  \
   -w /home/test \
   -v $PWD/.command_pipe:/dev/command_pipe \
   alpine:3.7 sh

rm -rf .command_pipe
kill %1

In this example, from inside the container, send commands to /dev/command_pipe, like so:

/home/test # echo 'docker network create test2.network.com' > /dev/command_pipe

On the host, you can check if the network was created:

$ docker network ls | grep test2
8e029ec83afe        test2.network.com                            bridge              local
Budde answered 10/6, 2020 at 15:14 Comment(0)
S
0

In my scenario I just ssh into the host (via the host IP) from within a container, and then I can do anything I want to the host machine

Shandashandee answered 27/6, 2021 at 7:40 Comment(0)
D
0

Depending on the situation, this could be a helpful resource.

This uses a job queue (Celery) that can be run on the host, commands/data could be passed to this through Redis (or rabbitmq). In the example below, this is occurring in a django application (which is commonly dockerized).

https://www.codingforentrepreneurs.com/blog/celery-redis-django/

Dykstra answered 16/2, 2023 at 21:27 Comment(0)
S
0

I'm not a fan of most of the answers given, for these reasons:

  1. Most of them pass arbitrary "code" to be executed through to the host side. For security, you would be better off defining commands with parameters and passing and executing those instead.

  2. Some of these approaches have boundary problems. Meaning, it is not possible to tell when the output from a command that has run on the host is complete.

  3. Most of these solutions are going to have multiprocessing problems. Meaning, what happens if two invocations of this mechanism end up interleaved?

  4. Most of these solutions have issues with arranging to have another process instantiated and running on the host side.

This solution tries to address those concerns using systemd:

  1. Create a new systemd service on the Docker host in /etc/systemd/system called run-on-host@.service that contains this:
[Service]
Type = oneshot
ExecStart = /root/run_on_host.py %I
  2. The script (also on the Docker host) could look something like this (no it doesn't have to be python - it could be a shell or node.js script or whatever you like):
#!/bin/env python3

import sys

args = sys.argv[1].split(",")

print(f"args = {args}")
  3. Invoke it (from Docker host or Docker container) like this:

sudo systemctl start run-on-host@$(systemd-escape hello,world).service

  4. Monitor the execution (from Docker host or Docker container):
$ sudo journalctl -u run-on-host@$(systemd-escape hello,world).service
Jan 26 13:34:45 localhost run_on_host.py[2499830]: args = ['hello', 'world']
Jan 26 13:34:45 localhost systemd[1]: run-on-host@hello\x2cworld.service: Deactivated successfully.
Jan 26 13:34:45 localhost systemd[1]: Finished run-on-host@hello\x2cworld.service.
$

Some notes:

  1. This uses systemd's "parameter" convention as documented here under "Service Templates":

https://www.freedesktop.org/software/systemd/man/latest/systemd.service.html

  2. I would recommend a pattern of arguments like this:

cmd,arg1,arg2,...,output

The first argument would be the operation to be performed and arguments to be passed to it. The output would be a file location to write the result (whatever it is) in to and should use a randomly generated filename to avoid multiprocessing clashes. The script on the host side should write to a temporary file, close it and then rename as indicated by output.

  3. While the above demonstrates how to do this from the command line (and assumes you've passed the D-Bus socket into the container and have compatible systemd tools within the container), this can easily be invoked from Python within the Docker container by using a D-Bus python library to send a start command to systemd. More info here: https://github.com/bernhardkaindl/python-sdbus-systemd

  4. The output file path should be on a Docker volume, and the run_on_host.py script should create that filename in the volume from the host side (using docker volume inspect to find its location on the host). The code in the container would poll for the output file to appear in the volume indicating the command has completed.

Sphalerite answered 26/1 at 19:43 Comment(2)
Can you be more clear which steps are done on the host and which are done in a containerServile
@Servile - made some edits. Does that help?Sphalerite
L
-3

To expand on user2915097's response:

The idea of isolation is to be able to restrict what an application/process/container (whatever your angle at this is) can do to the host system very clearly. Hence, being able to copy and execute a file would really break the whole concept.

Yes. But it's sometimes necessary.

No. That's not the case, or Docker is not the right thing to use. What you should do is declare a clear interface for what you want to do (e.g. updating a host config), and write a minimal client/server to do exactly that and nothing more. Generally, however, this doesn't seem to be very desirable. In many cases, you should simply rethink your approach and eradicate that need. Docker came into existence when basically everything was a service that was reachable using some protocol. I can't think of any proper use case of a Docker container getting the rights to execute arbitrary stuff on the host.

Lamellibranch answered 23/8, 2015 at 8:48 Comment(10)
I have a use case: I have dockerized service A (src on github). In the A repo I create proper hooks which, after a 'git pull' command, create a new docker image and run it (and remove the old container of course). Next: github has web-hooks which allow a POST request to be made to an arbitrary endpoint link after a push on master. So I want to create dockerized service B which will be that endpoint and which will only run 'git pull' in repo A on the HOST machine (important: the 'git pull' command must be executed in the HOST environment - not in B's environment, because B cannot run the new container A inside B...)Retroflexion
The problem: I want to have nothing on the HOST except linux, git and docker. And I want to have dockerized service A and service B (which is in fact a git-push handler which executes git pull on repo A after someone does a git push on master). So git auto-deploy is a problematic use-caseRetroflexion
@KamilKiełczewski I'm trying to do exactly the same, have you found a solution?Wolfgang
@Wolfgang - yes :) Look at this project - study it and you will find the solution. (At the time I created this project, the fswatch tool didn't exist on Ubuntu, so I used inotify-tools - however, I've heard that this tool exists now, so you can simplify this solution a little)Retroflexion
Saying, "No, that's not the case" is narrow minded and assumeds you know every use-case in the world. Our use case is running tests. They need to run in containers to correctly test the environment, but given the nature of tests, they also need to execute scripts on the host.Numeration
well, I don't know your use case, or a lot of use cases. I do know that containerization is a servicing or isolation approach, and that breaking that isolation is usually a bad idea, as it introduces exactly the kind of host dependency that you'd want to avoid when doing things in containers instead of outside.Adley
@MarcusMüller I have a situation where the programs I need are only available to me if I run them in Docker. When they run, they add to a database on the host, which then has to be re-indexed. If the program were running on the host, the re-index command could be issued directly when the DB update is done. It seems I can either run that command on the host at scheduled times; or continuously scan the DB for changes; but it seems it would be more efficient for me to run the command when needed. ssh does not seem to be available within my docker environment. What would you suggest?Carminecarmita
Just for those wondering why I leave a -7 answer up: a) it's OK to be fallible. I was wrong. It's OK that this is documented here. b) The comments actually contribute value; deleting the answer would delete them, too. c) It still contributes a point of view that might be wise to consider (don't break your isolation if you don't have to. Sometimes you have to, though).Adley
@MarcusMüller That's an honorable thing to do. If someone else reads this, this is part of the "unknown unknowns" principle. That essentially one never has a full map of reality.Doer
Currently the answer has -5, but I upvoted it anyway. Most of the high voted answers ignore security. There is one hint that with a named pipe you should not eval the named pipe but start a defined action depending on the command in the pipe (eg. "backup" starts a backup script) and there is a low voted post with a python client. Defining an interface is a better approach than running arbitrary commands from a named pipe as root.Piccolo
