Non-blocking subprocess.call

I'm trying to make a non-blocking subprocess call to run a slave.py script from my main.py program. I need to pass args from main.py to slave.py once, when it (slave.py) is first started via subprocess.call; after that, slave.py runs for a period of time and then exits.

main.py
for index, arg in enumerate(args_list, start=1):
    subprocess.call(["python", "slave.py", str(arg)], shell=True)


{loop through program and do more stuff..}

And my slave script

slave.py
import sys
import time

print sys.argv
while True:
    {do stuff with args in loop till finished}
    time.sleep(30)

Currently, slave.py blocks main.py from running the rest of its tasks. I simply want slave.py to be independent of main.py once I've passed args to it; the two scripts no longer need to communicate.

I've found a few posts on the net about non-blocking subprocess.call, but most of them are centered on requiring communication with slave.py at some point, which I currently do not need. Would anyone know how to implement this in a simple fashion...?

Kuomintang answered 17/4, 2013 at 23:15 Comment(0)

You should use subprocess.Popen instead of subprocess.call.

Something like:

subprocess.Popen(["python", "slave.py"] + sys.argv[1:])

From the docs on subprocess.call:

Run the command described by args. Wait for command to complete, then return the returncode attribute.

Popen starts the child and returns immediately, without waiting. (Also, don't use a list to pass in the arguments if you're going to use shell=True.)
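
For instance (a sketch, with a made-up argument value), either keep the list form and drop shell=True, or pass a single string when using shell=True:

# List form, no shell: each element is passed to the child verbatim.
subprocess.Popen(["python", "slave.py", "some-arg"])

# String form with a shell: the shell parses the command line.
subprocess.Popen("python slave.py some-arg", shell=True)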


Here's an MCVE1 that demonstrates a non-blocking subprocess call:

import subprocess
import time

# Popen returns immediately; the child process runs concurrently with this script.
p = subprocess.Popen(['sleep', '5'])

# poll() returns None while the child is still running.
while p.poll() is None:
    print('Still sleeping')
    time.sleep(1)

print('Not sleeping any longer.  Exited with returncode %d' % p.returncode)

An alternative approach, which relies on more recent additions to the Python language that allow for coroutine-based concurrency, is:

# python3.5 required but could be modified to work with python3.4.
import asyncio

async def do_subprocess():
    print('Subprocess sleeping')
    proc = await asyncio.create_subprocess_exec('sleep', '5')
    # Awaiting wait() suspends only this coroutine, not the event loop,
    # so sleep_report keeps running in the meantime.
    returncode = await proc.wait()
    print('Subprocess done sleeping.  Return code = %d' % returncode)

async def sleep_report(number):
    for i in range(number + 1):
        print('Slept for %d seconds' % i)
        await asyncio.sleep(1)

loop = asyncio.get_event_loop()

tasks = [
    asyncio.ensure_future(do_subprocess()),
    asyncio.ensure_future(sleep_report(5)),
]

loop.run_until_complete(asyncio.gather(*tasks))
loop.close()

1 Tested on OS X using Python 2.7 & Python 3.6.

Mariel answered 17/4, 2013 at 23:17 Comment(6)
Thanks, this appears to work; however, when I include a while loop in slave.py it seems to get stuck and not perform anything in the loop (even with a time.sleep() call)...? – Kuomintang
@Mariel: Can you please share a general example of how to use it? I mean, how should the control flow look when using it in a non-blocking way? I appreciate it. – Funnel
@Funnel: Sure, I've added an example of using Popen in a non-blocking way. – Mariel
Is there a way to start multiple processes asynchronously, like p1 & p2 &, and then wait on all of them using asyncio? I'm hoping this does not require the multiprocessing module. – Tiannatiara
Sure, you'd just need a do_p1 function and a do_p2 function and you'd add both of them to the tasks list. – Mariel
What's the difference between subprocess.Popen and the alternative approach using asyncio? – Chicory

There are three levels of thoroughness here.

1. As mgilson says, if you just swap out subprocess.call for subprocess.Popen, keeping everything else the same, then main.py will not wait for slave.py to finish before it continues. That may be enough by itself.

2. If you care about zombie processes hanging around, you should save the object returned from subprocess.Popen and at some later point call its wait method, as sketched below. (The zombies will automatically go away when main.py exits, so this is only a serious problem if main.py runs for a very long time and/or might create many subprocesses.)

3. Finally, if you don't want a zombie but you also don't want to decide where to do the waiting (this might be appropriate if both processes run for a long and unpredictable time afterward), use the python-daemon library to have the slave disassociate itself from the master -- in that case you can continue using subprocess.call in the master.
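
As a minimal sketch of the second option (slave.py and the single argument are placeholders from the question):

import subprocess

# Launch the slave without waiting; main.py continues immediately.
p = subprocess.Popen(["python", "slave.py", "some-arg"])

# ... do more stuff in main.py ...

# Later, reap the child so it doesn't linger as a zombie.
p.wait()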

Cellulitis answered 17/4, 2013 at 23:21 Comment(0)

For Python 3.8.x

import shlex
import subprocess

cmd = "<full filepath plus arguments of child process>"
cmds = shlex.split(cmd)
p = subprocess.Popen(cmds, start_new_session=True)

This will allow the parent process to exit while the child process continues to run. Not sure about zombies.
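
For illustration, with a hypothetical command string (the path and arguments here are made up):

cmd = "/usr/bin/python3 /home/me/slave.py --interval 30"
shlex.split(cmd)  # ['/usr/bin/python3', '/home/me/slave.py', '--interval', '30']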

Tested on Python 3.8.1 on macOS 10.15.5

Semang answered 30/9, 2020 at 20:39 Comment(1)
shlex seems like overkill for this example. – Greenland

The easiest solution for your non-blocking situation would be to add & at the end of the Popen like this:

subprocess.Popen(["python", "slave.py", " &"])

This does not block the execution of the rest of the program.

Studley answered 22/9, 2022 at 16:20 Comment(2)
I'm pretty sure that just passes " &" as an argument to "python". – Bosun
&, in the context of trying to background things, is a shell directive. It has no effect when (as here) you aren't actually invoking a shell; so as Bosun says, it's just putting the string ' &' into the sys.argv of the child process. – Nakia

If you want to start a function several times with different arguments in a non-blocking way, you can use ThreadPoolExecutor.

You submit your function calls to the executor like this:

from concurrent.futures import ThreadPoolExecutor

def threadmap(fun, xs):
    # Run fun over xs using up to 8 worker threads; returns when all calls finish.
    with ThreadPoolExecutor(max_workers=8) as executor:
        return list(executor.map(fun, xs))
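
For example, reusing the question's setup (run_slave and the argument values here are hypothetical):

import subprocess

def run_slave(arg):
    # Each call occupies one worker thread until the subprocess exits.
    return subprocess.call(["python", "slave.py", str(arg)])

returncodes = threadmap(run_slave, [1, 2, 3])

Note that threadmap itself returns only once every call has finished; the calls run in parallel with each other rather than in the background of the caller.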
Rooney answered 17/2, 2023 at 20:10 Comment(0)
