How to keep Docker container running after starting services?

I've seen a bunch of tutorials that seem to do the same thing I'm trying to do, but for some reason my Docker containers exit. Basically, I'm setting up a web server and a few daemons inside a Docker container. I do the final parts of this through a bash script called run-all.sh, which I run through CMD in my Dockerfile. run-all.sh looks like this:

service supervisor start
service nginx start

And I start it inside of my Dockerfile as follows:

CMD ["sh", "/root/credentialize_and_run.sh"]

I can see that the services all start up correctly when I run things manually (i.e. getting on to the image with -i -t /bin/bash), and everything looks like it runs correctly when I run the image, but it exits once it finishes starting up my processes. I'd like the processes to run indefinitely, and as far as I understand, the container has to keep running for this to happen. Nevertheless, when I run docker ps -a, I see:

➜  docker_test  docker ps -a
CONTAINER ID        IMAGE                            COMMAND                CREATED             STATUS                      PORTS               NAMES
c7706edc4189        some_name/some_repo:blah   "sh /root/run-all.sh   8 minutes ago       Exited (0) 8 minutes ago                        grave_jones

What gives? Why is it exiting? I know I could just put a while loop at the end of my bash script to keep it up, but what's the right way to keep it from exiting?

Lynnelynnea answered 10/9, 2014 at 21:18 Comment(3)
are you exposing the services' ports to the outside (-p option to docker run)? (of course this won't prevent them from exiting)Claudie
I was using ENTRYPOINT in my Dockerfile, and after the script defined in ENTRYPOINT (my init script) ran, it showed up in the logs but my container seemed to exit. So instead of ENTRYPOINT, I used the RUN command to run the script, and the container is still running in the background.Stannwood
Does this answer your question? Docker container will automatically stop after "docker run -d"Bluegrass
65

This is not really how you should design your Docker containers.

When designing a Docker container, you're supposed to build it such that there is only one process running (i.e. you should have one container for Nginx, and one for supervisord or the app it's running); additionally, that process should run in the foreground.

The container will "exit" when the process itself exits (in your case, that process is your bash script).


However, if you really need (or want) to run multiple services in your Docker container, consider starting from phusion's baseimage-docker (the "Docker Base Image"), which uses runit as a pseudo-init process: runit stays in the foreground as PID 1 while Nginx, Supervisor, and your other processes do their thing.

They have substantial docs, so you should be able to achieve what you're trying to do reasonably easily.
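
For illustration, here is a minimal sketch of that approach (the nginx-run.sh script is hypothetical; the /etc/service layout and /sbin/my_init entry point follow phusion's baseimage-docker conventions):

FROM phusion/baseimage
# runit supervises any executable named /etc/service/<name>/run
RUN mkdir -p /etc/service/nginx
# nginx-run.sh is a hypothetical script whose last line is: exec nginx -g 'daemon off;'
COPY nginx-run.sh /etc/service/nginx/run
RUN chmod +x /etc/service/nginx/run
# my_init is the image's pseudo-init; it stays in the foreground as PID 1
CMD ["/sbin/my_init"]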

Valer answered 10/9, 2014 at 21:31 Comment(7)
Can you explain why I should only have one service running? I could add nginx to supervisor if necessary, but not sure why this should be necessary.Lynnelynnea
@Lynnelynnea The short answer is that this is how Docker works. Docker will only run one process (and its children) per container. It's recommended that this process be an actual application process (so that if it exits, Docker knows), but you can indeed use supervisor as that process. Note that you'll have to configure supervisor to run in the foreground (i.e. not daemonize), which is done through the --nodaemon option.Valer
I can't find documentation for being encouraged to run just one service anywhere, and it seems strange. What if you want to download credentials first, or run some start-up scripts that have to be done at run-time? Can you provide a link to your claim?Lynnelynnea
@Lynnelynnea This Docker blog post makes the case that running multiple processes (and, broadly speaking, viewing a container as a "small VPS") is suboptimal. In your case, the comment thread will probably be more relevant than the actual blog post.Valer
Docker base image is a terrible solution for a lot of enterprise problems because few serious companies use ubuntu, preferring instead the RHEL/Centos tree.Suzan
"Few serious companies" seems indefensible. The choice of OS would seem to be based entirely upon the use case. Any given company has lots of different environments including internal developer usage, internal employee usage, sales support, staging, POCs, and finally production (and even that is a vague term). I don't believe the OP mentioned their use case so, (sorry to be nitpicky) but this sort of comment seems to be the type that disseminates highly opinionated information with no argument as to why.Customary
Programs that want to background are a real problem in containerisation. The question whether to run multiple services in one container is orthogonal to that. So this is not the answer to the question.Wallas
405

If you are using a Dockerfile, try:

ENTRYPOINT ["tail", "-f", "/dev/null"]

(Obviously this is for dev purposes only; you shouldn't need to keep a container alive unless it's running a process, e.g. nginx...)
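
A minimal dev-only sketch (the base image is an arbitrary choice):

FROM ubuntu
# tail blocks forever as PID 1, so the container never exits on its own
ENTRYPOINT ["tail", "-f", "/dev/null"]

You can then get a shell inside the running container with docker exec -it <container> bash.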

Sanmiguel answered 18/3, 2017 at 11:33 Comment(12)
I was using CMD["sleep", "1d"] but your solution seems betterMunitions
@GeorgiosPligoropoulos this will get stuck on that line; maybe running it in the background will workRoderica
Can also use CMD["sleep", "infinity"].Dezhnev
or 'cat' but people might say it's animal abuse. xDDirty
You may finish your entrypoint script with exec tail -f /dev/null but using tail as an entrypoint is a wrong answer.Wallas
ENTRYPOINT ping localhostStoichiometric
This solution does not handle the SIGTERM.Acaleph
In a docker-compose file, the following works too: entrypoint: tail -f /dev/null (if you need a binary in your docker for dev purpose)Prosperity
@MohammedNoureldin are you sure it doesn't listen to SIGTERM? askubuntu.com/questions/562921/will-kill-sigterm-stop-tail-fSkippy
@JohnC. yes, tail does not handle signals.Acaleph
about comments with CMD[], you'll need a space after CMD and before [, otherwise it will not workRogelioroger
I almost can't believe that CMD["sleep"] is really the best way to accomplish this. I wrote that in as a placeholder in my Dockerfile then came here to look for an "official" solution, only to find the kludge that I'd already come up with.Hage
144

I just had the same problem and found out that if you run your container with the -t and -d flags, it keeps running.

docker run -td <image>

Here is what the flags do (according to docker run --help):

-d, --detach=false         Run container in background and print container ID
-t, --tty=false            Allocate a pseudo-TTY

The most important one is the -t flag. -d just lets you run the container in the background.
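
A quick way to try this and then get a shell in the running container (the name keepalive is just a placeholder):

docker run -td --name keepalive <image>
docker exec -it keepalive sh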

Headliner answered 26/4, 2016 at 17:50 Comment(9)
I can't reproduce this. Would you please provide an example? Is there anything specific (e.g.: CMD) about Dockerfile we need in order this to work?Binny
This did not work for me. I used the command docker logs <image> to check whether an error caused my docker container to exit. The exit status is 0 and the last output is confirmation that my lighttpd server is running: [ ok ] Starting web server: lighttpd. Hopple
I haven't been working with Docker for a while now. So it is possible that the command line interface changed and that this command doesn't work anymore.Headliner
I can confirm that this is indeed working with the latest docker version. If you want to later attach to this session, using -dit will also work.Kinesthesia
This works with docker version 17.05 on Ubuntu Trusty.Dulles
@123 ENTRYPOINT ["/start.sh"] I got this working now with an additional twist: adding a hanging command, tail -f /dev/null Yours
@Yours a script won't accept a tty, add exec bash or exec sh if bash isn't installed, to the end of start.sh. Then you can use the -t flagOrrery
This surely is not the way to keep containers running. For example, how do you apply that in Kubernetes?Wallas
This only works for containers whose default ENTRYPOINT is an executable that accepts user input, e.g. those that run bash. It is not a solution for all containers in general.Scranton
51

You can run plain cat without any arguments, as mentioned by @Sa'ad, to simply keep the container working (actually doing nothing but waiting for user input). Jenkins' Docker plugin does the same thing.
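
A sketch of what that looks like (note that cat exits on end-of-input, so the container has to be started with a TTY or an open stdin for this to block):

# in the Dockerfile
CMD ["cat"]

# at run time: -t allocates a pseudo-TTY, so cat waits forever
docker run -dt <image>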

Belligerency answered 10/7, 2016 at 14:5 Comment(3)
In addition to my answer: do understand that docker-compose (not daemonized) is used to show you the workflow of your container, so it might be handy to tail the log files of your started services. CheersBelligerency
or cat. Jenkins' docker plugin does so.Pacifistic
Works in restrictive busybox containers (eg. no supervisord or /dev/null)Meet
50

The reason it exits is that the shell script runs as PID 1, and when it completes, PID 1 is gone; Docker only keeps the container running while PID 1 exists.

You can use supervisor to do everything; if run with the -n flag, it's told not to daemonize, so it will stay as the first process:

CMD ["/usr/bin/supervisord", "-n"]

And your supervisord.conf:

[supervisord]
nodaemon=true

[program:startup]
priority=1
command=/root/credentialize_and_run.sh
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
autorestart=false
startsecs=0

[program:nginx]
priority=10
command=nginx -g "daemon off;"
stdout_logfile=/var/log/supervisor/nginx.log
stderr_logfile=/var/log/supervisor/nginx.log
autorestart=true

Then you can have as many other processes as you want and supervisor will handle the restarting of them if needed.

That way you could use supervisord in cases where you might need nginx and php5-fpm and it doesn't make much sense to have them apart.
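
A matching Dockerfile might look like this (a sketch assuming a Debian/Ubuntu base; package names and paths may differ on other distributions):

FROM ubuntu
RUN apt-get update && apt-get install -y supervisor nginx
# the log directory referenced by supervisord.conf must exist
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/supervisord.conf
CMD ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]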

Barnes answered 3/6, 2015 at 23:39 Comment(3)
Where in the docs does it say if PID 1 ends docker container stops running?Nature
@Nature That's essentially how process namespaces work; it's not Docker-specific so much as "the thing underlying all containers". From man7.org/linux/man-pages/man7/pid_namespaces.7.html: If the "init" process of a PID namespace terminates, the kernel terminates all of the processes in the namespace via a SIGKILL signal. This behavior reflects the fact that the "init" process is essential for the correct operation of a PID namespace.Outguess
@Barnes Thank you for this. So many years later this answer is still relevant and worked so much better than the script I cobbled together.Takamatsu
40

Motivation:

There is nothing wrong with running multiple processes inside a docker container. If one likes to use docker as a lightweight VM, so be it. Others like to split their applications into microservices. My take: a LAMP stack in one container? Just great.

The answer:

Stick with a good base image like the phusion base image. There may be others. Please comment.

And this is yet just another plea for supervisor, because the phusion base image provides supervisor besides some other things like cron and locale setup, stuff you'd like to have set up when running such a lightweight VM. For what it's worth, it also provides ssh connections into the container.

The phusion image itself will just start and keep running if you issue this basic docker run statement:

moin@stretchDEV:~$ docker run -d phusion/baseimage
521e8a12f6ff844fb142d0e2587ed33cdc82b70aa64cce07ed6c0226d857b367
moin@stretchDEV:~$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS
521e8a12f6ff        phusion/baseimage   "/sbin/my_init"     12 seconds ago      Up 11 seconds

Or dead simple:

If a base image is not for you... For a quick CMD to keep it running, I would suggest something like this for bash:

CMD exec /bin/bash -c "trap : TERM INT; sleep infinity & wait"

Or this for busybox:

CMD exec /bin/sh -c "trap : TERM INT; (while true; do sleep 1000; done) & wait"

This is nice, because it will exit immediately on a docker stop.

Just plain sleep or cat will take a few seconds before the container is forcefully killed by docker.

Updates

In response to Charles Desbiens concerning running multiple processes in one container:

This is an opinion, and the docs point in this direction. A quote: "It’s ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application." For sure, it is obviously much more powerful to divide your complex service into multiple containers. But there are situations where it can be beneficial to go the one-container route, especially for appliances. The GitLab Docker image is my favourite example of a multi-process container: it makes deployment of this complex system easy, there is no way to misconfigure it, and GitLab retains all control over their appliance. Win-win.

Draftee answered 17/4, 2019 at 19:8 Comment(3)
I customized the centos7 base image to load PostgreSQL 11. You start that with a call to /usr/pgsql-11/bin/pg_ctl but pg_ctl exits once the server is running. Your suggestion to use trap worked great; it's the last line of my script pgstartwait.shAlpenhorn
It's a bit weird to say there is nothing wrong with running multiple processes in a single container, and then use that sentence to link to docs that start off by saying that it's not the best idea...Gunpaper
@CharlesDesbiens Thanks for your input. Please see my updated response.Draftee
16

Since Docker Engine v1.25 there has been an option called --init.
Docker Compose has supported it (as init: true) since file format version 3.7.

So my current CMD for a container that should run indefinitely is:

CMD ["sleep", "infinity"]

and then run it using:

docker build -t app .
docker run --rm --init app

cf. the rm docs and the init docs
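
The Compose equivalent is the init key (a sketch; the service name app is just a placeholder):

# docker-compose.yml, file format 3.7+
version: "3.7"
services:
  app:
    build: .
    init: true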

Contraption answered 18/1, 2022 at 12:50 Comment(0)
14

Make sure that you add daemon off; to your nginx.conf, or run it with CMD ["nginx", "-g", "daemon off;"] as per the official nginx image.

Then use the following to run supervisor as a service and nginx as the foreground process, which will prevent the container from exiting:

service supervisor start && nginx

In some cases you will need to have more than one process in your container, so forcing the container to have exactly one process won't work and can create more problems in deployment.

So you need to understand the trade-offs and make your decision accordingly.
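
In a Dockerfile that could look like this (a sketch; exec replaces the shell, so nginx becomes PID 1 and receives signals from docker stop):

CMD ["/bin/sh", "-c", "service supervisor start && exec nginx -g 'daemon off;'"]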

Eolith answered 4/11, 2014 at 12:58 Comment(0)
6

Capture the PID of the nginx process in a variable (for example $NGINX_PID) and at the end of the entrypoint file do

wait $NGINX_PID

That way, your container keeps running as long as nginx is alive; when nginx stops, the container stops as well.
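
A sketch of such an entrypoint script (the other services started here are purely illustrative):

#!/bin/sh
# run nginx in foreground mode but backgrounded by the shell, so $! is its real PID
nginx -g 'daemon off;' &
NGINX_PID=$!
# ...start your other daemons here...
# block until nginx exits; the container lives exactly as long as nginx does
wait $NGINX_PID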

Ormandy answered 18/12, 2017 at 15:27 Comment(0)
2

Along with having something along the lines of ENTRYPOINT ["tail", "-f", "/dev/null"] in your Dockerfile, you should also run the docker container with the -td option. This is particularly useful when the container runs on a remote machine. Think of it as if you had ssh'ed into a remote machine that has the image and started the container there. In this case, when you exit the ssh session, the container will get killed unless it was started with the -td option. A sample command for running your image would be: docker run -td <any other additional options> <image name>

This holds for Docker version 20.10.2.

Dulles answered 17/1, 2021 at 15:15 Comment(0)
1

There are some cases during development when there is no service yet but you want to simulate it and keep the container alive.

It is very easy to write a bash placeholder that simulates a running service:

while true; do
  sleep 100
done

You can replace this with something more serious as development progresses.
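
Wired into a Dockerfile, the placeholder might look like this (a sketch):

CMD ["/bin/bash", "-c", "while true; do sleep 100; done"]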

Katerinekates answered 19/2, 2021 at 15:52 Comment(0)
0

I had a similar problem and was able to solve it by adding /usr/bin/env bash at the end of my bash script.

Ravelment answered 3/5, 2023 at 21:29 Comment(0)
0

While Docker doctrine proposes that a container serve only one main service, I don't see a problem with having more than one. You can simply use CMD ["/bin/bash"] in your Dockerfile.
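
Note that bash exits as soon as its stdin is closed, so this only keeps the container up when it is run interactively (a sketch; the image name is a placeholder):

docker run -dit my-image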

Felicefelicia answered 30/7, 2023 at 7:19 Comment(0)
0

When using docker compose, you can add the following line to your service. That will keep the container running.

stdin_open: true

In case it crashes and you want the container to restart by itself, also add

restart: always
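
Put together, the relevant fragment might look like this (a sketch; the service name app and the image name are placeholders):

services:
  app:
    image: my-image   # placeholder
    stdin_open: true  # keep STDIN open so the container's shell doesn't hit EOF
    restart: always   # restart the container if it crashes
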
Vegetative answered 13/11, 2023 at 18:27 Comment(0)
-2

How about using the supervise form of service if available?

service YOUR_SERVICE supervise

Once supervise is successfully running, it will not exit unless it is killed or specifically asked to exit.

Saves having to create a supervisord.conf

Dorchester answered 17/9, 2019 at 11:32 Comment(0)
