Python: how to kill child process(es) when parent dies?
The child process is started with

subprocess.Popen(arg)

Is there a way to ensure it is killed when parent terminates abnormally? I need this to work both on Windows and Linux. I am aware of this solution for Linux.

Edit:

the requirement of starting a child process with subprocess.Popen(arg) can be relaxed, if a solution exists using a different method of starting a process.

Messere answered 2/5, 2014 at 18:40 Comment(7)
This is pretty vague, can you give some more details? Maybe describe what the parent and child processes are? – Synthesize
the first solution from the link you provided works on Windows too. – Conclusion
@J.F.Sebastian: sure, but the second one works if the process is terminated by sigkill. – Messere
there is no sigkill on Windows – Conclusion
@J.F.Sebastian: Let me rephrase. Child processes must exit if the parent terminates for any reason whatsoever. The first solution does not guarantee it. – Messere
the linux solution works because there is special support in the OS kernel. There might be special support (different but similar functionality) on Windows (non-python specific). If you know win32 api calls, you could make them using ctypes if needed. – Conclusion
for me I get a warning: ResourceWarning: subprocess 40092 is still running – Leucocyte

Heh, I was just researching this myself yesterday! Assuming you can't alter the child program:

On Linux, prctl(PR_SET_PDEATHSIG, ...) is probably the only reliable choice. (If it's absolutely necessary that the child process be killed, then you might want to set the death signal to SIGKILL instead of SIGTERM; the code you linked to uses SIGTERM, but the child does have the option of ignoring SIGTERM if it wants to.)
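For reference, a minimal sketch of the prctl approach (Linux-only; the `PR_SET_PDEATHSIG` constant value and the `libc.so.6` name are taken from `<sys/prctl.h>` and glibc, and the `sleep` command is just a stand-in child):

```python
import ctypes
import signal
import subprocess

PR_SET_PDEATHSIG = 1  # value from <sys/prctl.h>

def set_pdeathsig(sig):
    """preexec_fn helper: ask the Linux kernel to deliver `sig` to this
    (child) process when the parent thread that spawned it dies."""
    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    def _set():
        if libc.prctl(PR_SET_PDEATHSIG, sig) != 0:
            raise OSError(ctypes.get_errno(), "prctl(PR_SET_PDEATHSIG) failed")
    return _set

# SIGKILL cannot be ignored by the child, unlike SIGTERM.
child = subprocess.Popen(["sleep", "60"],
                         preexec_fn=set_pdeathsig(signal.SIGKILL))
```

The death signal set this way survives the exec of the child program, so it works even when you can't alter the child.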

On Windows, the most reliable option is to use a Job object. The idea is that you create a "Job" (a kind of container for processes), place the child process into the Job, and set the magic option that says "when no-one holds a 'handle' for this Job, kill the processes that are in it". By default, the only 'handle' to the Job is the one that your parent process holds; when the parent process dies, the OS closes all its handles, notices that there are no open handles left for the Job, and then kills the child, as requested. (If you have multiple child processes, you can assign them all to the same Job.)

This answer has sample code for doing this, using the win32api module. That code uses CreateProcess to launch the child instead of subprocess.Popen, because it needs a "process handle" for the spawned child, and CreateProcess returns one by default. If you'd rather use subprocess.Popen, here's an (untested) copy of the code from that answer that uses subprocess.Popen and OpenProcess instead of CreateProcess:

import subprocess
import win32api
import win32con
import win32job

hJob = win32job.CreateJobObject(None, "")
# Ask the OS to kill every process still in the job when the job's last
# handle is closed (which happens automatically when the parent dies).
extended_info = win32job.QueryInformationJobObject(hJob, win32job.JobObjectExtendedLimitInformation)
extended_info['BasicLimitInformation']['LimitFlags'] = win32job.JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE
win32job.SetInformationJobObject(hJob, win32job.JobObjectExtendedLimitInformation, extended_info)

child = subprocess.Popen(...)
# Convert process id to process handle:
perms = win32con.PROCESS_TERMINATE | win32con.PROCESS_SET_QUOTA
hProcess = win32api.OpenProcess(perms, False, child.pid)

win32job.AssignProcessToJobObject(hJob, hProcess)

Technically, there's a tiny race condition here in case the child dies between the Popen and OpenProcess calls; you can decide whether you want to worry about that.

One downside to using a job object is that when running on Vista or Win7, if your program is launched from the Windows shell (i.e., by clicking an icon), then there will probably already be a job object assigned, and trying to create a new job object will fail. Win8 fixes this (by allowing job objects to be nested). If your program is run from the command line, it should be fine.

If you can modify the child (e.g., like when using multiprocessing), then probably the best option is to somehow pass the parent's PID to the child (e.g. as a command line argument, or in the args= argument to multiprocessing.Process), and then:

On POSIX: Spawn a thread in the child that just calls os.getppid() occasionally, and if the return value ever stops matching the pid passed in from the parent, then call os._exit(1). (This approach is portable to all Unixes, including OS X, while the prctl trick is Linux-specific.)
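A small sketch of that POSIX polling thread (function and parameter names here are illustrative; in real use the parent's pid would be passed in on the command line or via multiprocessing args rather than read with os.getppid()):

```python
import os
import threading
import time

def exit_if_reparented(original_ppid, interval=1.0):
    """Watchdog thread for the child: if os.getppid() stops matching the
    pid recorded at startup, the parent has died and this process was
    re-parented, so bail out hard with os._exit()."""
    def watch():
        while True:
            if os.getppid() != original_ppid:
                os._exit(1)
            time.sleep(interval)
    t = threading.Thread(target=watch, daemon=True)
    t.start()
    return t

# In the child, as early as possible:
watchdog = exit_if_reparented(os.getppid(), interval=0.1)
```

os._exit() is used instead of sys.exit() because it terminates immediately without running cleanup handlers, which is what you want from a watchdog thread.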

On Windows: Spawn a thread in the child that uses OpenProcess and os.waitpid. Example using ctypes:

import os
from ctypes import WinDLL, WinError
from ctypes.wintypes import DWORD, BOOL, HANDLE
# Magic value from http://msdn.microsoft.com/en-us/library/ms684880.aspx
SYNCHRONIZE = 0x00100000
kernel32 = WinDLL("kernel32.dll")
kernel32.OpenProcess.argtypes = (DWORD, BOOL, DWORD)
kernel32.OpenProcess.restype = HANDLE
parent_handle = kernel32.OpenProcess(SYNCHRONIZE, False, parent_pid)
# Block until parent exits
os.waitpid(parent_handle, 0)
os._exit(0)

This avoids any of the possible issues with job objects that I mentioned.

If you want to be really, really sure, then you can combine all these solutions.

Hope that helps!

Syriac answered 10/5, 2014 at 22:49 Comment(9)
One way of avoiding the race condition you mention is to add yourself to the job object before launching the child process; the child will inherit membership. Another way is to launch the child process suspended, and only resume it after adding it to the job. – Bertine
For Windows 7, the shell's job object allows breaking away, so you can use the creation flag CREATE_BREAKAWAY_FROM_JOB to allow adding the process to a new job. – Kurr
I don't understand, why is murdering the child process so complicated? – Leucocyte
@CharlieParker the question is about how to handle the case where the parent terminates abnormally. If the parent segfaults or gets forcibly terminated (e.g. with kill -9), then it doesn't have a chance to murder the child. – Syriac
It might be obvious to many, but to actually terminate, one needs to add win32job.TerminateJobObject(hJob, hProcess) after the process is no longer needed. – Lombardo
Besides the race condition problem, it looks like there is also the possibility of a deadlock if any threads are used in the Python program, since prctl is not a signal-safe system function. – Croup
@HarryJohnston Yes, before calling subprocess.Popen(...), I called: win32job.AssignProcessToJobObject(hJob, win32api.OpenProcess(perms, False, os.getpid())) – Roister
calling prctl on linux – Benioff
Any way to get the same effect as PR_SET_PDEATHSIG on macOS? – Inelegance

The Popen object offers the terminate and kill methods.

https://docs.python.org/3/library/subprocess.html#subprocess.Popen.terminate

These send the SIGTERM and SIGKILL signals for you. You can do something like the following:

from subprocess import Popen

p = None
try:
    p = Popen(arg)
    # some code here
except Exception as ex:
    print('Parent program has exited with the below error:\n{0}'.format(ex))
    if p:
        p.terminate()

UPDATE:

You are correct: the above code will not protect against a hard crash or someone killing your process. In that case you can try wrapping the child process in a class and employing a polling model to watch the parent process. Be aware that psutil is a third-party package.

import os
import psutil

from multiprocessing import Process
from time import sleep


class MyProcessAbstraction(object):
    def __init__(self, parent_pid, command):
        """
        @type parent_pid: int
        @type command: str
        """
        self._child = None
        self._cmd = command
        self._parent = psutil.Process(pid=parent_pid)

    def run_child(self):
        """
        Start a child process by running self._cmd. 
        Wait until the parent process (self._parent) has died, then kill the 
        child.
        """
        print('---- Running command: "%s" ----' % self._cmd)
        self._child = psutil.Popen(self._cmd)
        try:
            # Note: in psutil >= 2.0, status is a method, not an attribute.
            while self._parent.status() == psutil.STATUS_RUNNING:
                sleep(1)
        except psutil.NoSuchProcess:
            pass
        finally:
            print('---- Terminating child PID %s ----' % self._child.pid)
            self._child.terminate()


if __name__ == "__main__":
    parent = os.getpid()
    child = MyProcessAbstraction(parent, 'ping -t localhost')
    child_proc = Process(target=child.run_child)
    child_proc.daemon = True
    child_proc.start()

    print('---- Try killing PID: %s ----' % parent)
    while True:
        sleep(1)

In this example I run 'ping -t localhost' because that will run forever. If you kill the parent process, the child process (the ping command) will also be killed.

Hummer answered 2/5, 2014 at 20:5 Comment(8)
This does not answer the question. If a parent process crashes, who is going to call p.terminate()? I am looking for a way, on Windows, to start a process so that it exits when its parent terminates for whatever reason. This is possible to do on Linux. – Messere
You are right, it didn't address your question. Hopefully the above edit does so. – Hummer
Neat. And looks portable. I like it. – Messere
This... doesn't work at all, as far as I can tell, even after the edits. It looks like you're suggesting that the parent should watch itself to see if it has died, and if so then kill the child. Obviously this makes no sense. Maybe you actually meant that the example code should be split into a new program, so the parent spawns two processes: the child, and a watchdog that monitors both the parent and the child. But this just recreates the original problem: what happens if the watchdog terminates abnormally? Who watches the watcher? – Syriac
There are actually 3 PIDs created in this example: P1 (the original Python interpreter), P2 (the multiprocessing.Process), and P3 (the Popen object created within P2). Because they are all separate PIDs, P2 is able to watch P1 to see if it goes away. – Hummer
@NathanielJ.Smith: I agree with your point that nobody's watching the watcher. But for my purposes this solution happens to be good enough. The idea is that the process wrapper (P2) can be very simple, and thereby very unlikely to crash (unlike the parent process (P1), which is of arbitrary complexity). I see that the problem is deeper than I realized when I asked the question, but this solution takes care of my needs. – Messere
Ah, I see that I missed the scroll bar on the example, which neatly clipped off the if __name__ == "__main__": block. Makes more sense with that in! Nonetheless, this approach seems needlessly complex and unreliable compared to solutions using OS-level tools (which are available on both Linux and Windows), and creates new opportunities for problems. For example, the code as currently written makes it impossible for the parent to monitor the child's life or get an exit code, and will leak watchdog processes if multiple children are run. – Syriac
@NathanielJ.Smith: I tried both approaches, and have to say that my initial reaction was wrong. I was too quick to adopt this method for its seeming implementation simplicity and portability. In addition, the multiprocessing docs state that "a daemonic process is not allowed to create child processes". I also like that multiple children can be linked to the same job object. I am accepting yours as the answer. – Messere

Since, from what I can tell, the PR_SET_PDEATHSIG solution can result in a deadlock when any threads are running in the parent process, I didn't want to use it and figured out another way: I created a separate auto-terminate process that detects when its parent process is done and kills the other subprocess that is its target.

To accomplish this, you need to pip install psutil, and then write code similar to the following:

import subprocess
import sys
from subprocess import Popen


def start_auto_cleanup_subprocess(target_pid):
    cleanup_script = f"""
import os
import psutil
import signal
from time import sleep

try:                                                            
    # Block until stdin is closed which means the parent process
    # has terminated.                                           
    input()                                                     
except Exception:                                               
    # Should be an EOFError, but if any other exception happens,
    # assume we should respond in the same way.                 
    pass                                                        

if not psutil.pid_exists({target_pid}):              
    # Target process has already exited, so nothing to do.      
    exit()                                                      
                                                                
os.kill({target_pid}, signal.SIGTERM)                           
for count in range(10):                                         
    if not psutil.pid_exists({target_pid}):  
        # Target process no longer running.        
        exit()
    sleep(1)
                                                                
os.kill({target_pid}, signal.SIGKILL)                           
# Don't bother waiting to see if this works since if it doesn't,
# there is nothing else we can do.                              
"""

    return Popen(
        [
            sys.executable,  # Python executable
            '-c', cleanup_script
        ],
        stdin=subprocess.PIPE
    )

This is similar to https://mcmap.net/q/25471/-python-how-to-kill-child-process-es-when-parent-dies, which I had failed to notice. I think the approach I came up with is easier to use, because the process that is the target of cleanup is created directly by the parent. Also note that it is not necessary to poll the status of the parent, though you do still need psutil to poll the status of the target subprocess during the termination sequence if you want, as in this example, to terminate, monitor, and then kill if terminate didn't work expeditiously.

Croup answered 9/9, 2022 at 8:22 Comment(1)
If you want to be able to easily identify the auto-terminator processes in ps output, you can install the setproctitle package. The auto-termination script can then use that to set its own process name. – Croup

Hook the exit of your process using SetConsoleCtrlHandler, and kill the subprocess there. I think I do a bit of overkill here, but it works :)

import psutil, os

def kill_proc_tree(pid, including_parent=True):
    parent = psutil.Process(pid)
    children = parent.children(recursive=True)
    for child in children:
        child.kill()
    gone, still_alive = psutil.wait_procs(children, timeout=5)
    if including_parent:
        parent.kill()
        parent.wait(5)

def func(x):
    print("killed")
    if anotherproc:
        kill_proc_tree(anotherproc.pid)
    kill_proc_tree(os.getpid())

import shlex
import subprocess
import win32api

win32api.SetConsoleCtrlHandler(func, True)

PROCESSTORUN = "your process"
anotherproc = None
cmdline = f"/c start /wait \"{PROCESSTORUN}\" "
# Note: posix must be the boolean False; the string "false" is truthy.
anotherproc = subprocess.Popen(executable='C:\\Windows\\system32\\cmd.EXE',
                               args=shlex.split(cmdline, posix=False))
...
run program
...

Took kill_proc_tree from:

subprocess: deleting child processes in Windows

Raisaraise answered 29/10, 2022 at 16:8 Comment(0)

If the child process is something you've written in Python, a simple method is to periodically check if the parent has exited:

import os, sys, asyncio, psutil

async def check_orphaned():
    parent = psutil.Process(os.getppid())
    while True:
        if not parent.is_running():
            sys.exit()
        await asyncio.sleep(2.5)

# check if orphaned in the background (requires a running event loop)
orphan_listener_task = asyncio.create_task(check_orphaned())
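Note that asyncio.create_task must be called while an event loop is running, so in a standalone script the watcher is typically started inside the coroutine passed to asyncio.run. Here is a runnable sketch of the same idea, with psutil swapped for a plain os.getppid() comparison (POSIX-only; the function names and intervals are illustrative):

```python
import asyncio
import os
import sys

async def check_orphaned(original_ppid, interval=2.5):
    # On POSIX, getppid() changes when the parent dies and this process
    # is re-parented, so no third-party dependency is needed.
    while True:
        if os.getppid() != original_ppid:
            sys.exit()
        await asyncio.sleep(interval)

async def main():
    # Start the watcher in the background, then do the real work.
    watcher = asyncio.create_task(check_orphaned(os.getppid(), interval=0.5))
    await asyncio.sleep(0.2)  # stand-in for the child's real work
    watcher.cancel()
    return "work finished"

result = asyncio.run(main())
```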

Much simpler than setting up OS-specific parent-child bindings, and should be good enough in most scenarios, I think.

Seidule answered 3/2, 2024 at 2:30 Comment(0)
