fork in multi-threaded program
I've heard that mixing forking and threading in a program can be very problematic, often resulting in mysterious behavior, especially when shared resources such as locks, pipes, and file descriptors are involved. But I've never fully understood what exactly the dangers are and when they can happen. It would be great if someone with expertise in this area could explain in a bit more detail what the pitfalls are and what needs to be taken care of when programming in such an environment.

For example, if I want to write a server that collects data from various different sources, one solution I've thought of is to have the server spawn a set of threads, each calling popen to run another program that does the actual work, and reading the data back from the child over a pipe. Each of these threads is responsible for its own work, with no data exchanged between them; when the data is collected, the worker threads just put the results into a queue owned by the main thread. What could go wrong with this solution?

Please don't narrow your answer to just my example scenario. Any suggestions, alternative solutions, or experiences that aren't related to the example but help toward a clean design would be great! Thanks!

Impolitic answered 5/8, 2009 at 20:18 Comment(1)
here is a good read with more details about the topic -Chenab
The problem with forking while other threads are running is that fork() duplicates the whole address space but continues only the one thread that called it. In the child, it's as if all of the other threads just died, instantly, wherever they happened to be.

The result is that locks held by those threads are never released, and shared data structures (such as the malloc heap) may be left in a corrupted, half-updated state.

pthread does offer a pthread_atfork function: in theory, you could acquire every lock in the program before forking, release them all afterwards, and maybe make it out alive. But it's risky, because you could always miss one. And, of course, the stacks of the other threads won't be freed in the child.

Epilate answered 5/8, 2009 at 20:33 Comment(2)
Can you elaborate a little on what "locks aren't released" means? From the child's perspective, right? So the child can never acquire such a lock?Impolitic
Correct. The fork clones all locks while they're still in the locked state.Epilate
It is really quite simple. The problems with multiple threads and processes always arise from shared data. If there is no shared data, then no such issues can arise.

In your example, the shared data is the queue owned by the main thread; any potential contention or race conditions will arise there. The typical method for "solving" these issues is a locking scheme: a worker thread locks the queue before inserting data, and the main thread locks the queue before removing it.

Headphone answered 5/8, 2009 at 20:22 Comment(4)
I dunno - it could, but normally your standard libraries are written to be thread-safe (sometimes you have to select thread-safe versions of them). It depends on what your definition of shared data is and what the impact is.Headphone
Often we can't avoid having shared data, like pipes and file descriptors; those are always shared across fork. Under Linux one may set the O_CLOEXEC flag so an fd gets closed when forking (I guess what that means is closing the fd in the child's address space), though I don't know if that would still help once threads are added. E.g., what if I open pipes in one thread and fork? What if another thread also forks? Which child will be able to see the pipe?Impolitic
@Epilate - no, malloc() only uses process memory. @Impolitic - threads within a process share file descriptors, and while I didn't know about O_CLOEXEC, the effects are the same regardless of which thread calls fork().Desireedesiri
The O_CLOEXEC flag does not cause file descriptors to be closed on fork; as the name suggests, they are closed on exec.Immoral
