Difference between ForkJoinPool and a normal ExecutorService?

I read a great article about the fork/join framework in Java 7. The idea is that, with ForkJoinPool and ForkJoinTask, the threads in the pool can pick up subtasks spawned by other tasks, so the pool can handle more tasks with fewer threads.

Then I tried to use a normal ExecutorService to do the same work and found I couldn't tell the difference: when I submit a new task to the pool, it is run on another available thread.

The only difference I can see is that with a ForkJoinPool I don't need to pass the pool to the tasks, because I can call task.fork() to make it run on another thread. With a normal ExecutorService, I have to pass the pool to the task (or make it static) so that inside the task I can call pool.submit(newTask).
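
To make the comparison concrete, here is a minimal, self-contained sketch of the two styles I mean (the class names ForkStyle and SubmitStyle are just placeholders for this question, not the actual code from the repo linked below):

import java.util.concurrent.*;

public class SubmissionStyles {

    // Fork/join style: a subtask can fork() itself into the pool it is
    // already running in, so no pool reference needs to be passed around.
    static class ForkStyle extends RecursiveAction {
        private final int depth;
        ForkStyle(int depth) { this.depth = depth; }

        @Override
        protected void compute() {
            if (depth == 0) return;
            ForkStyle sub = new ForkStyle(depth - 1);
            sub.fork();  // scheduled on another worker thread of the same pool
            sub.join();
        }
    }

    // ExecutorService style: the task needs an explicit reference to the pool
    // (constructor parameter, static field, ...) to submit new subtasks.
    static class SubmitStyle implements Runnable {
        private final ExecutorService pool;
        private final int depth;
        SubmitStyle(ExecutorService pool, int depth) {
            this.pool = pool;
            this.depth = depth;
        }

        @Override
        public void run() {
            if (depth == 0) return;
            pool.submit(new SubmitStyle(pool, depth - 1));
        }
    }

    public static void main(String[] args) throws Exception {
        new ForkJoinPool().invoke(new ForkStyle(3));

        ExecutorService pool = Executors.newCachedThreadPool();
        pool.submit(new SubmitStyle(pool, 3));
        Thread.sleep(500);  // crude wait so the nested submits can finish
        pool.shutdown();
    }
}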

Am I missing something?

(You can view the code at https://github.com/freewind/fork-join-test/tree/master/src)

Uveitis answered 1/5, 2015 at 14:25 Comment(1)
There are a few articles out there that explain which of the two is better for different types of tasks. As far as I remember, a fork-join pool is better (and safer) for recursive task scheduling. In a scenario where a task spawns new tasks into the same ExecutorService it is being executed on, there is a high chance of deadlock if the size of the ExecutorService's pool is not big enough. This is not the case with a fork-join pool, though. - Robet

Although ForkJoinPool implements ExecutorService, it is conceptually different from 'normal' executors.

You can easily see the difference if your tasks spawn more tasks and wait for them to complete, e.g. by calling

executor.submit(new Task()).get(); // blocks this thread until the new task completes

In a normal executor service, waiting for other tasks to complete blocks the current thread. There are two possible outcomes: if your executor service has a fixed number of threads, it might deadlock when the last running thread waits for another task to complete; if your executor creates new threads on demand, the number of threads can explode and you end up with thousands of threads, which can cause starvation.
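
As a rough, hand-written illustration (not from the question's repo): with a fixed-size pool, a task that blocks on the result of a subtask submitted to the same pool deadlocks as soon as every worker thread is waiting. This program hangs forever:

import java.util.concurrent.*;

public class FixedPoolDeadlock {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1); // single worker

        Future<?> outer = pool.submit(() -> {
            // The outer task occupies the only worker thread and then blocks
            // waiting for the inner task, which can therefore never start.
            Future<?> inner = pool.submit(() -> System.out.println("inner"));
            try {
                inner.get();
            } catch (InterruptedException | ExecutionException e) {
                throw new RuntimeException(e);
            }
        });

        outer.get(); // never returns: deadlock
    }
}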

In contrast, the fork/join framework reuses the waiting thread in the meantime to execute other tasks, so it won't deadlock even though the number of threads is fixed:

new MyForkJoinTask().invoke();

So if you have a problem that you can solve recursively, consider using a ForkJoinPool, since you can easily implement each level of recursion as a ForkJoinTask.
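
As an assumed example (a simple recursive sum, not the poster's code), such a recursive ForkJoinTask completes even on a pool with a single worker thread, because join() lets the worker process other queued subtasks instead of simply blocking:

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ForkJoinSum extends RecursiveTask<Long> {
    private final long from, to;

    ForkJoinSum(long from, long to) {
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= 1_000) {            // small enough: compute directly
            long sum = 0;
            for (long i = from; i < to; i++) sum += i;
            return sum;
        }
        long mid = (from + to) / 2;
        ForkJoinSum left = new ForkJoinSum(from, mid);
        ForkJoinSum right = new ForkJoinSum(mid, to);
        left.fork();                          // schedule left half asynchronously
        long rightResult = right.compute();   // work on right half in this thread
        return left.join() + rightResult;     // join() may run other queued tasks
    }

    public static void main(String[] args) {
        // Even a pool with a single worker thread finishes, because join()
        // does not simply block: the worker keeps processing pending subtasks.
        ForkJoinPool pool = new ForkJoinPool(1);
        System.out.println(pool.invoke(new ForkJoinSum(0, 1_000_000)));
    }
}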

Just check the number of running threads in your examples.
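
One way to do that (my suggestion, nothing more) is to print the JVM's live thread count periodically while the tasks are running:

import java.lang.management.ManagementFactory;

public class ThreadCountProbe {
    public static void main(String[] args) throws InterruptedException {
        // Prints the number of live threads every 500 ms; call the same code
        // from within your test while each pool variant is busy.
        for (int i = 0; i < 10; i++) {
            System.out.println("live threads: "
                    + ManagementFactory.getThreadMXBean().getThreadCount());
            Thread.sleep(500);
        }
    }
}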

Freese answered 1/5, 2015 at 14:35 Comment(0)
