Noah Friedman's answer is nice, but for some combinations of n and the number of commands it skips the final commands. I changed it to this, which I find much more readable:
```python
import subprocess

commands = ['cmd1', 'cmd2', 'cmd3', 'cmd4', 'cmd5']
cpus = 2

while commands:
    # Take the next batch of commands and drop them from the queue.
    batch = commands[:cpus]
    commands = commands[cpus:]
    procs = [subprocess.Popen(cmd, shell=True) for cmd in batch]
    # Wait for the whole batch to finish before starting the next one.
    for p in procs:
        p.wait()
```
Also note that this is not optimal: a single slow process holds back the entire next batch from starting, and the delay is actually noticeable. This is better:
```python
import subprocess
from concurrent.futures import ProcessPoolExecutor

def my_parallel_command(command):
    subprocess.run(command, shell=True)

commands = ['cmd1', 'cmd2', 'cmd3', 'cmd4', 'cmd5']
cpus = 2

# The pool hands each worker a new command as soon as it frees up,
# so one slow command no longer stalls the others.
with ProcessPoolExecutor(max_workers=cpus) as executor:
    executor.map(my_parallel_command, commands)
```