Use of Supervisor in Docker

I am not asking about the use of supervisor with Docker, but just want to have my understanding validated.

I understand that Docker runs a single process when a container is started, and that supervisor is used when we need to run multiple processes within the container.

I have seen several examples where a container is started from a base image, several services are installed, and the container is committed to form a new image, all without supervisor.

So, my basic question is: what is the difference between the two approaches?

My understanding is that when a Docker container is stopped, a kill signal is sent to the process with PID 1. PID 1 manages its child processes and stops them all, which is exactly what supervisor does. While we can install multiple processes without supervisor, only one process can be run when docker run is issued, and when the container is stopped only PID 1 will be sent signals, so the other running processes will not be stopped gracefully.

Please confirm how much of my understanding about using supervisord is correct.

Nne answered 14/10, 2015 at 5:13 Comment(1)
Update Sept. 2016: see my new answer below: the docker daemon could take care of those zombie processes for you in docker 1.12.Vulpine

while we can install multiple processes without supervisor, only one process can be run when docker run is issued, and when the container is stopped only PID 1 will be sent signals and other running processes will not be stopped gracefully.

Yes, although it depends on how your main process runs (foreground or background), and how it collects child processes.

That is what is detailed in "Trapping signals in Docker containers"

docker stop stops a running container by sending it a SIGTERM signal, letting the main process handle it, and after a grace period using SIGKILL to terminate the application.

The signal sent to the container is handled by the main process that is running (PID 1).

If the application is in the foreground, meaning the application is the main process in the container (PID 1), it can handle signals directly.

But:

The process to be signaled could be a background one, and you cannot send it any signals directly. In this case, one solution is to set up a shell script as the entrypoint and orchestrate all signal processing in that script.
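A minimal sketch of such an entrypoint, assuming a hypothetical service binary /usr/local/bin/myservice (the name is a placeholder, not something from the article):

#!/bin/sh
# Run the real service in the background so this shell (PID 1) can trap signals.
/usr/local/bin/myservice &          # hypothetical long-running service
pid=$!
status=0

# Forward SIGTERM/SIGINT (what `docker stop` sends to PID 1) to the service.
trap 'kill -TERM "$pid" 2>/dev/null' TERM INT

# wait reaps the child when it exits, but it also returns early when a trapped
# signal arrives, so keep waiting until the child is really gone.
while kill -0 "$pid" 2>/dev/null; do
    wait "$pid"
    status=$?
done
exit "$status"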

The issue is further detailed in "Docker and the PID 1 zombie reaping problem"

Unix is designed in such a way that parent processes must explicitly "wait" for child process termination, in order to collect its exit status. The zombie process exists until the parent process has performed this action, using the waitpid() family of system calls.

The action of calling waitpid() on a child process in order to eliminate its zombie, is called "reaping".

The init process -- PID 1 -- has a special task. Its task is to "adopt" orphaned child processes.

[Diagram from the article: orphaned child processes being adopted by the init process] https://static.mcmap.net/file/mcmap/ZG-AbGLDKwfiaFfnKnBoc7MpaRPQame/wp-content/uploads/2015/01/adoption.png

The operating system expects the init process to reap adopted children too.
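You can watch this adoption happen on any Linux machine with a plain shell (nothing Docker-specific here; the 60-second sleep is just an arbitrary stand-in for a child process):

# The inner shell backgrounds a sleep and exits immediately, orphaning it.
sh -c 'sleep 60 &'

# The orphan has been adopted: its parent PID is now 1 (or the nearest
# "subreaper" on systems that configure one).
ps -o pid,ppid,comm -C sleep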

Problem with Docker:

We see that a lot of people run only one process in their container, and they think that when they run this single process, they're done.
But most likely, this process is not written to behave like a proper init process.
That is, instead of properly reaping adopted processes, it's probably expecting another init process to do that job, and rightly so.
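A quick way to check whether a running container is accumulating such zombies is to look for processes in the Z state. This is a sketch: it assumes ps is installed in the image, and the container name is a placeholder:

# Every line printed is a "defunct" process that nobody has reaped.
docker exec some_container ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'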

Using an image like phusion/baseimage-docker helps manage one (or several) processes while keeping the main process init-compliant.

It uses runit instead of supervisord, for multi-process management:

Runit is not there to solve the reaping problem. Rather, it's to support multiple processes. Multiple processes are encouraged for security (through process and user isolation).
Runit uses less memory than Supervisord because Runit is written in C and Supervisord in Python.
And in some use cases, process restarts in the container are preferable over whole-container restarts.

That image includes a my_init script which takes care of the "reaping" issue.

In baseimage-docker, we encourage running multiple processes in a single container. Not necessarily multiple services though.
A logical service can consist of multiple OS processes, and we provide the facilities to easily do that.
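For illustration, adding a process under runit in that image boils down to an executable run script in a per-service directory; the service name and binary below are placeholders of mine, not something from the answer:

#!/bin/sh
# /etc/service/myapp/run -- hypothetical service
# runit starts this script and restarts it if it dies, while my_init (PID 1)
# keeps handling signal forwarding and zombie reaping.
exec /usr/local/bin/myapp       # must stay in the foreground (not daemonize)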

Vulpine answered 14/10, 2015 at 7:35 Comment(4)
Thanks for your elaborate answer. I am trying the phusion image and, as I understand it, whenever the container starts it runs whatever is in /etc/init.d. But I have a service in init.d which is not starting on container boot. Can you please help?Nne
Sure: can you ask a new question, with the details of your new setup? That way, I (and potentially others) can have a look.Vulpine
Oh, my mistake, it's /etc/my_init.dNne
The key takeaway here is phusion/baseimage-dockerViolaviolable

Update Sept 2016 for docker 1.12 (Q4 2016/Q1 2017)

Arnaud Porterie just tweeted:

[🐳] Just merged: with docker run --init, Rick Grimes will take care of all your zombies.

(commit eabae09)

See PR 26061: "Add init process for zombie fighting and signal handling" (and PR 26736)

This adds a small C binary for fighting zombies. It is mounted under /dev/init and is prepended to the args specified by the user. You enable it via a daemon flag, dockerd --init, as it is disabled by default for backwards compatibility.

You can also override the daemon option or specify this on a per container basis with docker run --init=true|false.

You can test this by running a process like this as PID 1 in a container and seeing the extra zombie that appears in the container as it is running.

#include <stdio.h>      /* printf */
#include <stdlib.h>     /* exit */
#include <unistd.h>     /* fork, sleep */
#include <sys/types.h>  /* pid_t */

int main(int argc, char **argv) {
    pid_t pid = fork();
    if (pid == 0) {
        /* child: fork a grandchild that exits immediately */
        pid = fork();
        if (pid == 0) {
            exit(0);
        }
        /* the child never wait()s for the grandchild and then exits itself,
           so the grandchild is orphaned and reparented to PID 1 */
        sleep(3);
        exit(0);
    }
    /* the parent (PID 1 in the container when run without --init) never
       wait()s either, so the orphaned grandchild stays a zombie unless an
       init process reaps it */
    printf("got pid %d and exited\n", pid);
    sleep(20);
    return 0;
}

The docker daemon now has the option

--init

Run an init inside containers to forward signals and reap processes
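Usage then looks like this (the image name zombie-test, assumed to contain a program such as the one above, is a placeholder):

# Enable it for every container via the daemon flag:
dockerd --init

# Or per container: docker prepends its small init binary (mounted at
# /dev/init) as PID 1, which forwards signals and reaps orphaned children.
docker run --rm --init zombie-test

# The per-container flag can also override the daemon default:
docker run --rm --init=false zombie-test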

Vulpine answered 20/9, 2016 at 11:55 Comment(0)

This article in the Docker docs shows an example of running more than one process and utilizing supervisord as well.

https://docs.docker.com/config/containers/multi-service_container/
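Roughly, that approach makes supervisord the container's main process and lists each service as a program in its configuration. A minimal sketch, with a file path and program names that are my own assumptions rather than anything copied from the docs page:

; /etc/supervisor/conf.d/services.conf -- hypothetical path and programs
[supervisord]
; stay in the foreground so supervisord remains the container's main process
nodaemon=true

[program:web]
command=/usr/local/bin/web
autorestart=true

[program:worker]
command=/usr/local/bin/worker
autorestart=true

The image then launches supervisord as its command, for example CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/services.conf"] in the Dockerfile (the paths, again, are assumptions).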

I have this working fine, but we are likely going to simply offload our worker processes to another container and only deal with one process in each. It feels like a simpler approach at this point.

Suggestive answered 4/3, 2020 at 17:49 Comment(0)
