I'm trying to port a shell script to a much more readable Python version. The original shell script starts several processes (utilities, monitors, etc.) in the background with "&". How can I achieve the same effect in Python? I'd like these processes not to die when the Python script completes. I am sure it's related to the concept of a daemon somehow, but I couldn't find how to do this easily.
Note: This answer is less current than it was when posted in 2009. Using the subprocess module shown in other answers is now what the docs recommend: "Note that the subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using these functions."
If you want your process to start in the background you can either use system()
and call it in the same way your shell script did, or you can spawn
it:
import os
os.spawnl(os.P_DETACH, 'some_long_running_command', 'some_long_running_command')  # argv[0] must be passed too, or spawnl raises "ValueError: spawnv() arg 2 cannot be empty"
(or, alternatively, the os.P_NOWAIT flag; os.P_DETACH is Windows-only, so os.P_NOWAIT is actually the more portable of the two).
See the documentation here.
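For comparison, a minimal subprocess-based sketch of the same idea (assuming some_long_running_command is on your PATH):
import subprocess
subprocess.Popen(['some_long_running_command'])  # returns immediately; the child keeps running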
subprocess.Popen and subprocess.call are the replacements for them: docs.python.org/2/library/… – Iodoform
Could you give us a hint how to detach a process with subprocess? – Empale
I get ValueError: spawnv() arg 2 cannot be empty from the code above. – Ewart
While jkp's solution works, the newer way of doing things (and the way the documentation recommends) is to use the subprocess module. For simple commands it's equivalent, but it offers more options if you want to do something complicated.
Example for your case:
import subprocess
subprocess.Popen(["rm","-r","some.file"])
This will run rm -r some.file
in the background. Note that calling .communicate()
on the object returned from Popen
will block until it completes, so don't do that if you want it to run in the background:
import subprocess
proc = subprocess.Popen(["sleep", "30"])
proc.communicate() # Will block for 30 seconds
See the documentation here.
Also, a point of clarification: "Background" as you use it here is purely a shell concept; technically, what you mean is that you want to spawn a process without blocking while you wait for it to complete. However, I've used "background" here to refer to shell-background-like behavior.
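As a minimal sketch of that distinction (long_task.py is a hypothetical script), you can check on the child without ever blocking:
import subprocess
proc = subprocess.Popen(['python', 'long_task.py'])
print(proc.poll())  # None while the child is still running; the parent is never blocked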
You can stop it with the kill command. You could also kill it from the Task Manager on Windows. – Technics
Popen() does not use a shell to run commands by default. stdout=PIPE won't change it. – Viera
Use Popen() to avoid blocking the main thread, and if you need a daemon then look at the python-daemon package to understand how a well-defined daemon should behave. Your answer is ok if you remove everything starting with "But be wary" except for the link to subprocess' docs. – Viera
You can read the pid property without blocking anything in the parent process... see the answer by "f p" below (https://mcmap.net/q/25474/-start-a-background-process-in-python) – Taro
proc = subprocess.Popen(["rm","-r","some.file"]), then to kill: proc.terminate() – Measles
Popen(["ls", "-a"], stdout=subprocess.PIPE) runs in the background. The problem is when you later use p.communicate(), which brings the process to the foreground. – Marianmariana
Don't use stdout=PIPE unless you read from the pipe while the process is running; otherwise the child process may hang forever if the corresponding OS pipe buffer fills up. – Viera
subprocess.Popen(["/some/path/to/hive","-f","/some/path/to_HQL/hql.dat"]) – Bowling
os.fork() is only available on unixy systems. – Technics
You can check for the process with ps. – Technics
Use communicate() in place of wait()... – Dividivi
Use start_new_session=True for POSIX. – Iconic
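Picking up that last comment, a minimal sketch of detaching on POSIX with start_new_session=True (some_long_running_command is a placeholder):
import subprocess
subprocess.Popen(['some_long_running_command'],
                 start_new_session=True,     # POSIX: run the child in its own session
                 stdout=subprocess.DEVNULL,  # don't tie the child to our stdout/stderr
                 stderr=subprocess.DEVNULL)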
Iconic Note: This answer is less current than it was when posted in 2009. Using the subprocess
module shown in other answers is now recommended in the docs
(Note that the subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using these functions.)
If you want your process to start in the background you can either use system()
and call it in the same way your shell script did, or you can spawn
it:
import os
os.spawnl(os.P_DETACH, 'some_long_running_command')
(or, alternatively, you may try the less portable os.P_NOWAIT
flag).
See the documentation here.
subprocess.Popen
and subprocess.call
replacements for them: docs.python.org/2/library/… –
Iodoform subprocess
give us a hint how to detach a process with subprocess
? –
Empale ValueError: spawnv() arg 2 cannot be empty
from the code above. –
Ewart You probably want the answer to "How to call an external command in Python".
The simplest approach is to use the os.system
function, e.g.:
import os
os.system("some_command &")
Basically, whatever you pass to the system
function will be executed the same as if you'd passed it to the shell in a script.
I found this here:
On Windows (Win XP), the parent process will not finish until longtask.py has finished its work. This is not what you want in a CGI script. The problem is not specific to Python; the PHP community has the same problems.
The solution is to pass DETACHED_PROCESS
Process Creation Flag to the underlying CreateProcess
function in win API. If you happen to have installed pywin32 you can import the flag from the win32process module, otherwise you should define it yourself:
import subprocess
import sys

DETACHED_PROCESS = 0x00000008  # value of the DETACHED_PROCESS flag for CreateProcess
pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=DETACHED_PROCESS).pid
Use subprocess.Popen()
with the close_fds=True
parameter, which will allow the spawned subprocess to be detached from the Python process itself and continue running even after Python exits.
https://gist.github.com/yinjimmy/d6ad0742d03d54518e9f
import os, time, sys, subprocess

if len(sys.argv) == 2:
    # child: invoked with an extra argument by the parent below
    time.sleep(5)
    print('track end')
    if sys.platform == 'darwin':
        subprocess.Popen(['say', 'hello'])
else:
    print('main begin')
    # re-run this same file as a detached child
    subprocess.Popen(['python', os.path.realpath(__file__), '0'], close_fds=True)
    print('main end')
The close_fds=True option works by detaching the process, but it didn't return control to my Python program. I'm hoping to find an option that truly executes a process, sends it to the background, and then returns to the Python program. – Naamann
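A sketch of what that comment asks for, assuming a hypothetical long_job.sh: Popen returns immediately, so the Python program gets control back while the child keeps running:
import subprocess
subprocess.Popen(['./long_job.sh'], close_fds=True, start_new_session=True)
print('Popen has returned; the parent continues immediately')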
Capture output and run in the background with threading
As mentioned in this answer, if you capture the output with stdout= and then try to read(), the process blocks.
However, there are cases where you need this. For example, I wanted to launch two processes that talk over a port between them, and save their stdout to a log file while also echoing it to stdout.
The threading
module allows us to do that.
First, have a look at how to do the output redirection part alone in this question: Python Popen: Write to stdout AND log file simultaneously
Then:
main.py
#!/usr/bin/env python3
import os
import subprocess
import sys
import threading
def output_reader(proc, file):
    while True:
        byte = proc.stdout.read(1)
        if byte:
            sys.stdout.buffer.write(byte)
            sys.stdout.flush()
            file.buffer.write(byte)
        else:
            break

with subprocess.Popen(['./sleep.py', '0'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc1, \
     subprocess.Popen(['./sleep.py', '10'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc2, \
     open('log1.log', 'w') as file1, \
     open('log2.log', 'w') as file2:
    t1 = threading.Thread(target=output_reader, args=(proc1, file1))
    t2 = threading.Thread(target=output_reader, args=(proc2, file2))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
sleep.py
#!/usr/bin/env python3
import sys
import time
for i in range(4):
    print(i + int(sys.argv[1]))
    sys.stdout.flush()
    time.sleep(0.5)
After running:
./main.py
stdout is updated every 0.5 seconds, two lines at a time, to contain:
0
10
1
11
2
12
3
13
and each log file contains the respective log for a given process.
Inspired by: https://eli.thegreenplace.net/2017/interacting-with-a-long-running-child-process-in-python/
Tested on Ubuntu 18.04, Python 3.6.7.
You probably want to start investigating the os module for forking child processes (by opening an interactive session and issuing help(os)). The relevant functions are fork and any of the exec ones. To give you an idea on how to start, put something like this in a function that performs the fork (the function needs to take a list or tuple 'args' as an argument that contains the program's name and its parameters; you may also want to define stdin, out and err for the new process):
import os
import sys

try:
    pid = os.fork()
except OSError as e:
    ## some debug output
    sys.exit(1)

if pid == 0:
    ## child: eventually use os.putenv(..) to set environment variables
    ## os.execv expects args to include the program name as args[0]
    os.execv(args[0], args)
os.fork() is really useful, but it does have a notable downside of only being available on *nix. – Zinfandel
You could use threading instead: https://mcmap.net/q/25474/-start-a-background-process-in-python I think that might work on Windows. – Kookaburra
Kookaburra Unlike some prior answers that use subprocess.Popen
, this answer uses subprocess.run
instead. The issue with using Popen
is that if the process is not manually waited for until completion, a stale <defunct>
entry remains in the Linux process table as seen by ps
. These entries can add up.
In contrast to Popen
, when using subprocess.run
, by design run
waits for the process to complete, and so no such defunct entry will remain in the process table. Because subprocess.run
is blocking, it can be run in a thread. The rest of the code can continue after starting this thread. In this way, the process effectively runs in the background.
import subprocess, threading

kwargs = {'stdout': subprocess.DEVNULL, 'stderr': subprocess.DEVNULL, 'check': True, **your_kwargs}
threading.Thread(target=subprocess.run, args=(your_command,), kwargs=kwargs).start()
Note that subprocess.call
also waits for the process to complete, and can be used similarly.
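For instance, a concrete (hypothetical) invocation of the pattern above, running sleep 30 in the background while a thread reaps it:
import subprocess, threading
threading.Thread(
    target=subprocess.run,
    args=(['sleep', '30'],),
    kwargs={'stdout': subprocess.DEVNULL, 'stderr': subprocess.DEVNULL, 'check': True},
).start()  # returns immediately; the thread waits on the child, so no <defunct> entry is left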
You can use
import os

pid = os.fork()
if pid == 0:
    ...  # child process: continue with the other code here
This forks a child copy of the Python process, which keeps running in the background.
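A slightly fuller sketch of the same pattern (POSIX only; the child's work here is a stand-in sleep):
import os, sys, time

pid = os.fork()
if pid == 0:
    time.sleep(10)  # child: stand-in for real background work
    sys.exit(0)
print(f'started background child {pid}')  # parent continues immediately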
I'm running Python 3.9.14 on Linux. I found the following worked for me in a similar situation:
import subprocess

cmd = "sleep 5 && ls /tmp >& ls.out &"
try:
    runResult = subprocess.run(["bash", "-c", cmd])
except Exception as ex:
    print(f"Failed to run '{cmd}'")
    if hasattr(ex, "message"):
        print(ex.message)
    elif hasattr(ex, "strerror"):
        print(ex.strerror)
    else:
        print(ex)
If you run the above and quickly do an ls in the current directory, you will find that the "ls.out" file doesn't yet exist. Wait a few more seconds and the file is there. So, the command continues to run after Python exits.
The 'runResult' has a 'returncode' field that indicates whether the program launched successfully or not. I do not know of a good way of later killing the process from within Python.
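For example, the launch can be checked like this (a sketch; returncode here reflects bash exiting after it backgrounds the command, not the command itself):
if runResult.returncode == 0:
    print("launched OK")
else:
    print(f"launch failed with code {runResult.returncode}")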
I was also able to do a more Python-ish (Python-ly?) approach: I have a shell script named "run5secs":
#!/bin/bash
sleep 5
ls
and I can run it in the background with:
import shlex
import subprocess

logName = "./run5secs.out"
cmd = "./run5secs my1 your2"
try:
    f = open(logName, 'w')
except Exception as ex:
    print(f"Failed to open '{logName}'")
    if hasattr(ex, "message"):
        print(ex.message)
    elif hasattr(ex, "strerror"):
        print(ex.strerror)
    else:
        print(ex)

args = shlex.split(cmd)
try:
    cmdRes = subprocess.Popen(args, stdout=f,
                              stderr=subprocess.STDOUT,
                              universal_newlines=True)
except Exception as ex:
    print(f"Failed to run '{cmd}'")
    if hasattr(ex, "message"):
        print(ex.message)
    elif hasattr(ex, "strerror"):
        print(ex.strerror)
    else:
        print(ex)

print(cmdRes)
I haven't tried this yet, but using .pyw files instead of .py files should help. .pyw files don't get a console, so in theory the script should not show a window and should work like a background process.
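A minimal sketch of launching such a script from another program (background_task.pyw is a hypothetical file, and pythonw.exe must be on the PATH on Windows):
import subprocess
subprocess.Popen(["pythonw", "background_task.pyw"])  # no console window; the parent returns immediately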
subprocess.Popen() is the new recommended way since 2010 (we are in 2015 now), and the duplicated question redirecting here also has an accepted answer about subprocess.Popen(). Cheers :-) – Discontinuance
subprocess.Popen("<command>"), with <command> a file led by a suitable shebang, works perfectly for me (Debian) with bash and python scripts; it implicitly shells out and survives its parent process. stdout goes to the same terminal as the parent's, so this works much like & in a shell, which was the OP's request. But hell, all the questions work out very complex while a little testing showed it in no time ;) – Mooch