Why does the ProcessPoolExecutor ignore the max_workers argument?
I expected that an apscheduler.executors.pool.ProcessPoolExecutor with the max_workers argument set to 1 would not execute more than one job in parallel.

import subprocess

from apscheduler.executors.pool import ProcessPoolExecutor
from apscheduler.schedulers.blocking import BlockingScheduler


def run_job():
    subprocess.check_call('echo start; sleep 3; echo done', shell=True)

scheduler = BlockingScheduler(
        executors={'processpool': ProcessPoolExecutor(max_workers=1)})

for i in range(20):
    scheduler.add_job(run_job)
scheduler.start()

However, up to ten jobs actually run in parallel.

Do I misunderstand the concept or is this a bug?

Hithermost answered 11/12, 2015 at 0:57
Did you create more than one such executor by any chance? Seeing the code might help. – Dry
You need to give a more complete example -- one I could run myself. For one, are you sure you're specifying the correct executor to run with? – Motorbus

The reason this isn't working as expected is that you're not specifying which executor the job should run in, so it falls back to the default (thread pool) executor.

Try this instead:

for i in range(20):
    scheduler.add_job(run_job, executor='processpool')
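To see why a single-worker pool serializes work once jobs are actually routed to it, here is a minimal sketch using the standard-library concurrent.futures pool (not apscheduler, whose pool executor wraps it): with max_workers=1 every job runs in the same single worker process, so all reported PIDs are identical.

```python
import os
from concurrent.futures import ProcessPoolExecutor


def job(_):
    # Report which worker process handled this job.
    return os.getpid()


def run_jobs(n=5):
    # A pool with max_workers=1 has exactly one worker process,
    # so the n jobs are executed one after another in that process.
    with ProcessPoolExecutor(max_workers=1) as pool:
        return list(pool.map(job, range(n)))


if __name__ == "__main__":
    pids = run_jobs()
    print(set(pids))  # a single PID: every job ran in the same worker
```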
Jacinto answered 21/12, 2015 at 16:28
Thank you very much for your help! Is it possible to prevent the scheduler from adding the default executor? – Hithermost
No, nor should it be. But you could define your own executor as "default". – Motorbus
Would you care to elaborate on this a little bit? Obviously I'm misunderstanding something about the concept of apscheduler. – Hithermost
You named your executor "processpool", but if you name it "default" it will be used by default, without having to explicitly specify the name in add_job(). – Motorbus