multiprocessing in python - what gets inherited by forkserver process from parent process?
I am trying to use forkserver and I encountered NameError: name 'xxx' is not defined in worker processes.

I am using Python 3.6.4, but the documentation should be the same across versions. From https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods it says that:

The fork server process is single threaded so it is safe for it to use os.fork(). No unnecessary resources are inherited.

Also, it says:

Better to inherit than pickle/unpickle

When using the spawn or forkserver start methods many types from multiprocessing need to be picklable so that child processes can use them. However, one should generally avoid sending shared objects to other processes using pipes or queues. Instead you should arrange the program so that a process which needs access to a shared resource created elsewhere can inherit it from an ancestor process.

So apparently a key object that my worker processes need did not get inherited by the server process and then passed on to the workers. Why did that happen? I wonder what exactly gets inherited by the forkserver process from the parent process?

Here is what my code looks like:

import multiprocessing
# ... a bunch of other imports ...

def worker_func(nameList):
    global largeObject
    results = []
    for item in nameList:
        # get some info from largeObject using item as index
        # do some calculation
        results.append([item, info])
    return results

if __name__ == '__main__':
    result = []
    largeObject = ...  # This is my large object, it's read-only and no modification will be made to it.
    nameList = [...]   # A list of items; for each one I need to get info from largeObject
    ctx_in_main = multiprocessing.get_context('forkserver')
    print('Start parallel, using forking/spawning/?:', ctx_in_main.get_start_method())
    cores = ctx_in_main.cpu_count()
    with ctx_in_main.Pool(processes=4) as pool:
        for x in pool.imap_unordered(worker_func, nameList):
            result.append(x)

Thank you!

Best,

Ecumenicism answered 15/8, 2020 at 8:51 Comment(3)
I tried splitting nameList into, say, 4 chunks and using zip([largeObject]*4, nameList_splitted) in imap_unordered, unwrapping it later in worker_func(). This way largeObject did get into the workers, but it became super slow. I am guessing it's due to largeObject's size (a sketch of this workaround follows these comments). - Ecumenicism
Initialize the large object before the worker function definition. - Miyamoto
@AnmolSinghJaggi Can you clarify how to initialize it? largeObject here is a NetworkX object which came from a series of previous calculations in __main__ that involve reading a large pandas df and other memory-consuming operations. - Ecumenicism
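
For reference, a minimal sketch of the chunk-and-zip workaround mentioned in the first comment above (all names and the toy dict are hypothetical stand-ins for the real objects). The slowdown comes from largeObject being pickled and shipped to a worker once per chunk:

# zip_workaround.py - hypothetical sketch of the (slow) workaround
import multiprocessing

def worker_func(args):
    largeObject, name_chunk = args          # unwrap the (object, chunk) pair
    return [(name, largeObject[name]) for name in name_chunk]

if __name__ == '__main__':
    largeObject = {f"n{i}": i for i in range(1000)}   # stand-in for the real object
    nameList = list(largeObject)
    chunks = [nameList[i::4] for i in range(4)]       # split into 4 chunks
    ctx = multiprocessing.get_context('forkserver')
    result = []
    with ctx.Pool(processes=4) as pool:
        # largeObject is pickled and sent to a worker once per chunk,
        # which is what makes this slow when the object is big
        for chunk_result in pool.imap_unordered(worker_func, zip([largeObject] * 4, chunks)):
            result.extend(chunk_result)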

Theory

Below is an excerpt from Bojan Nikolic's blog:

Modern Python versions (on Linux) provide three ways of starting the separate processes:

  1. Fork()-ing the parent process and continuing with the same process image in both parent and child. This method is fast, but potentially unreliable when the parent state is complex.

  2. Spawning the child processes, i.e., fork()-ing and then execv to replace the process image with a new Python process. This method is reliable but slow, as the process image is reloaded afresh.

  3. The forkserver mechanism, which consists of a separate Python server that has a relatively simple state and which is fork()-ed when a new process is needed. This method combines the speed of fork()-ing with good reliability (because the parent being forked is in a simple state).

Forkserver

The third method, forkserver, is illustrated below. Note that children retain a copy of the forkserver state. This state is intended to be relatively simple, but it is possible to adjust it through the multiprocessing API via the set_forkserver_preload() method. (The original answer includes a diagram of the forkserver mechanism here.)

Practice

Thus, if you want something to be inherited by the child processes from the parent, it must be specified in the forkserver state by means of set_forkserver_preload(modules_names), which sets the list of module names to try to load in the forkserver process. I give an example below:

# inherited.py
large_obj = {"one": 1, "two": 2, "three": 3}
# main.py
import multiprocessing
import os
from time import sleep

from inherited import large_obj


def worker_func(key: str):
    print(f"PID={os.getpid()}, obj id={id(large_obj)}")
    sleep(1)
    return large_obj[key]


if __name__ == '__main__':
    result = []
    ctx_in_main = multiprocessing.get_context('forkserver')
    ctx_in_main.set_forkserver_preload(['inherited'])
    cores = ctx_in_main.cpu_count()
    with ctx_in_main.Pool(processes=cores) as pool:
        for x in pool.imap(worker_func, ["one", "two", "three"]):
            result.append(x)
    for res in result:
        print(res)

Output:

# The PIDs are different but the address is always the same
PID=18603, obj id=139913466185024
PID=18604, obj id=139913466185024
PID=18605, obj id=139913466185024

And if we don't use preloading

...
    ctx_in_main = multiprocessing.get_context('forkserver')
    # ctx_in_main.set_forkserver_preload(['inherited']) 
    cores = ctx_in_main.cpu_count()
...
# The PIDs are different, the addresses are different too
# (but sometimes they can coincide)
PID=19046, obj id=140011789067776
PID=19047, obj id=140011789030976
PID=19048, obj id=140011789030912
Titus answered 16/8, 2020 at 12:32 Comment(7)
Hi Alex, thanks so much for the answer. I actually read Bojan's blog before, when I was trying to tackle the problem myself (surprisingly there aren't that many articles that touch on forkserver's internals). I tried set_forkserver_preload(large_object) directly, and naturally it didn't work :) I guess writing a separate .py will work. - Ecumenicism
But the thing is, as I explained in the question's comments, large_object here is a NetworkX object which came from a series of previous calculations in __main__ that involve reading a large pandas df and other memory-consuming operations. How can I modify that to work like your inherited.py + main.py setup here? - Ecumenicism
I am concerned that if I wrap all those operations that get me large_object in the first place into a function, put it in inherited.py, import inherited and call the function, then when I set_forkserver_preload(['inherited']) the forkserver will also get the unnecessary file descriptors and other things from the parent process, which would defeat my purpose here. I just want the forkserver to inherit large_object alone. - Ecumenicism
Try to move these calculations to the inherited module so that they happen when the server imports it. - Titus
I see what you are suggesting... I guess if I put those calculations directly into inherited.py, they will be executed twice (once when I import the module and again when the server imports it). That might work if I just want a single-threaded safe process that workers can fork from, but here I am trying to get the workers to not inherit unnecessary resources and only get large_object. And I don't think putting those calculations under __main__ in inherited.py will work either, since then none of the processes will execute them, including main and server. - Ecumenicism
As far as I know, 'fork' essentially snapshots the whole memory space of the parent process, and it is impossible to do this for one specific object only. Maybe forkserver is not the right approach for your task. - Titus
I see, your points are valid. I guess forkserver is more about 'safe forks'. If the goal here is to get workers to inherit minimal resources, I am better off separating my code into two parts: do the calculation first, pickle it, exit the interpreter, and start a fresh one to load the pickled large_object, and then just go nuts with either fork or forkserver. - Ecumenicism

So after an inspiring discussion with Alex I think I have sufficient info to address my question: what exactly gets inherited by the forkserver process from the parent process?

Basically, when the server process starts, it imports your main module, so everything before if __name__ == '__main__' is executed. That's why my code doesn't work: large_object is nowhere to be found in the server process, nor in any of the worker processes that fork from the server process.
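
A small, hypothetical way to observe this (not from the original post): run the script below with the forkserver start method, and the module-level print should typically appear twice, once for the parent process and once for the forkserver process, because the server imports the main module. The workers forked from the server should not print it again.

# check_forkserver_import.py - hypothetical sketch
import multiprocessing
import os

# Module-level code: this runs in every process that imports this module.
print(f"module-level code executed in PID {os.getpid()} (__name__ = {__name__})")

def worker(x):
    return x * x

if __name__ == '__main__':
    ctx = multiprocessing.get_context('forkserver')
    with ctx.Pool(processes=2) as pool:
        print(pool.map(worker, range(4)))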

Alex's solution works because large_object now gets imported into both the main and the server process, so every worker forked from the server also gets large_object. Combined with set_forkserver_preload(modules_names), all workers might even get the same large_object, from what I saw. The reason for using forkserver is explained in the Python documentation and in Bojan's blog:

When the program starts and selects the forkserver start method, a server process is started. From then on, whenever a new process is needed, the parent process connects to the server and requests that it fork a new process. The fork server process is single threaded so it is safe for it to use os.fork(). No unnecessary resources are inherited.

The forkserver mechanism, which consists of a separate Python server that has a relatively simple state and which is fork()-ed when a new process is needed. This method combines the speed of fork()-ing with good reliability (because the parent being forked is in a simple state).

So it's more about safety here.

On a side note, if you use fork as the start method, you don't need to import anything, since every child process gets a copy of the parent process's memory (or a copy-on-write view, if the system uses COW; please correct me if I am wrong). In this case, using global large_object will give you access to large_object in worker_func directly.
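
For illustration, here is a minimal, hypothetical sketch of that fork-based approach (the toy dict stands in for the real NetworkX object); it assumes a Unix-like system where the fork start method is available:

# fork_example.py - hypothetical sketch, not the original code
import multiprocessing

large_object = None  # placeholder; assigned in the parent before the Pool is created

def worker_func(key):
    # no pickling needed: the forked worker sees the parent's (copy-on-write) memory,
    # so the module-level large_object assigned below is directly accessible here
    return key, large_object[key]

if __name__ == '__main__':
    large_object = {"one": 1, "two": 2, "three": 3}  # stand-in for the expensive-to-build object
    ctx = multiprocessing.get_context('fork')
    with ctx.Pool(processes=3) as pool:
        print(pool.map(worker_func, ["one", "two", "three"]))

With spawn or forkserver the same sketch would fail (the workers would see large_object as None), which is exactly the difference discussed above.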

The forkserver might not be a suitable approach for me because the issue I am facing is memory overhead. All the operations that get me large_object in the first place are memory-consuming, so I don't want any unnecessary resources in my worker processes.

If I put all those calculations directly into inherited.py as Alex suggested, they will be executed twice (once when main imports the module and once when the server imports it; maybe even more times when the worker processes are born?). This is suitable if I just want a single-threaded safe process that workers can fork from. But since I am trying to get workers to not inherit unnecessary resources and only get large_object, this won't work. And putting those calculations under __main__ in inherited.py won't work either, since then none of the processes will execute them, including main and server.

So, in conclusion, if the goal here is to get workers to inherit minimal resources, I am better off breaking my code into two parts: run calculation.py first, pickle the large_object, exit the interpreter, and start a fresh one that loads the pickled large_object. Then I can just go nuts with either fork or forkserver. A sketch of that two-script approach is below.
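
A hypothetical sketch of that two-step approach (the file names and the toy dict are stand-ins for the real calculation and NetworkX object):

# calculation.py - run this first; it does the memory-heavy work once and exits
import pickle

def build_large_object():
    # stand-in for the expensive pandas/NetworkX computations
    return {"one": 1, "two": 2, "three": 3}

if __name__ == '__main__':
    with open('large_object.pkl', 'wb') as f:
        pickle.dump(build_large_object(), f)

# run_workers.py - run this in a fresh interpreter; it only loads the pickle and forks
import multiprocessing
import pickle

with open('large_object.pkl', 'rb') as f:
    large_object = pickle.load(f)

def worker_func(key):
    return key, large_object[key]

if __name__ == '__main__':
    ctx = multiprocessing.get_context('fork')  # 'forkserver' should also work, since the load happens at module level
    with ctx.Pool(processes=3) as pool:
        print(pool.map(worker_func, ["one", "two", "three"]))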

Ecumenicism answered 16/8, 2020 at 20:0 Comment(1)
Not sure if I understand your problem 100%, but if the only problem is executing the memory-heavy calculations twice, you can get around that with, admittedly, some dirty hacking: the import of the module is only needed so you don't get a "large_object is not defined" error. You could set large_object in inherited.py to None for the initial import and only really load large_object when set_forkserver_preload() comes into play, e.g. by writing some value to a tmp config file in your main and checking that value in inherited.py to decide whether to actually load large_object or not (a sketch of this idea follows). - Coster
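
A hypothetical sketch of that idea (the file names, flag mechanism, and pickle are all made up for illustration):

# inherited.py - the expensive load only happens when a flag file written by main.py exists,
# so the initial import in the parent process stays cheap
import os
import pickle

large_object = None
if os.path.exists('load_flag.tmp'):
    with open('large_object.pkl', 'rb') as f:   # stand-in for the real memory-heavy load
        large_object = pickle.load(f)

# main.py
import multiprocessing
import pathlib

import inherited   # cheap: the flag file does not exist yet, so large_object stays None here

if __name__ == '__main__':
    pathlib.Path('load_flag.tmp').touch()       # the forkserver's import of 'inherited' will now do the real load
    try:
        ctx = multiprocessing.get_context('forkserver')
        ctx.set_forkserver_preload(['inherited'])
        # ... create the Pool here; workers forked from the server should see
        # the fully loaded inherited.large_object ...
    finally:
        pathlib.Path('load_flag.tmp').unlink()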
