Execution Context and Dispatcher - Best practices, useful configurations and Documentation
Scala Execution Contexts and Dispatchers - Listing and comparison: Why?

There are a lot of questions about which ExecutionContext is best for executing Futures in Scala and how to configure the dispatcher. Still, I have never been able to find a longer list with pros, cons, and configuration examples.

The best I could find was in the Akka documentation, http://doc.akka.io/docs/akka/snapshot/scala/dispatchers.html, and the Play documentation, https://www.playframework.com/documentation/2.5.x/ThreadPools.

I would like to ask what configurations, besides scala.concurrent.ExecutionContext.Implicits.global and the Akka defaults, you use in your daily dev lives, when you use them, and what their pros and cons are.

Here are some of the ones I already have:

First unfinished overview

Standard: scala.concurrent.ExecutionContext.Implicits.global

Testing - ExecutionContext.fromExecutor(new ForkJoinPool(1))

  • use for testing
  • no parallelism
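A minimal sketch of this testing setup (the value names are mine, not from the original post): with a one-thread ForkJoinPool, even "parallel" futures execute one at a time, which makes ordering-sensitive tests reproducible.

```scala
import java.util.concurrent.ForkJoinPool
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

// Single-threaded pool: futures run sequentially on the one worker thread.
implicit val singleThreadEc: ExecutionContext =
  ExecutionContext.fromExecutor(new ForkJoinPool(1))

val sum = Await.result(
  Future.sequence(List(Future(1), Future(2), Future(3))).map(_.sum),
  2.seconds
)
println(sum)
```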

Play's default EC - play.api.libs.concurrent.Execution.Implicits._

Akka's default execution context

Bulkheading

ExecutionContext.fromExecutor(new ForkJoinPool(n)) based on a separate dispatcher. Thanks to Sergiy Prydatchenko
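A hedged sketch of the bulkheading idea (the helper name onBlockingPool and the pool size of 16 are illustrative, not from the original post): blocking calls get their own small pool so they cannot starve the main execution context.

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future, blocking}
import scala.concurrent.duration._

// Dedicated pool for blocking work, separate from the main/global EC.
val blockingPool = Executors.newFixedThreadPool(16)
val blockingEc: ExecutionContext = ExecutionContext.fromExecutor(blockingPool)

// Wrap any blocking API call; `blocking` marks the stall for the runtime.
def onBlockingPool[T](body: => T): Future[T] =
  Future(blocking(body))(blockingEc)

val answer = Await.result(onBlockingPool { Thread.sleep(10); 42 }, 2.seconds)
println(answer)
blockingPool.shutdown() // a real app would keep the pool alive for its lifetime
```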
Woodsum answered 6/12, 2015 at 12:10
ExecutionContext.fromExecutor(new ForkJoinPool(n)) (or a separate Akka dispatcher) may be used not only for testing but for bulkheading (separating part of your Futures from another in terms of executor). – Loom
Ideally, with only non-blocking code, you would just use the framework's execution context: Play Framework's or Akka's.

But sometimes you have to use blocking APIs. In one Play Framework and JDBC project, we followed their recommendation [1], set the execution context to 100 threads, and just used that default everywhere. That system was very fast for its usage and needs.
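The "highly synchronous" profile that [1] describes boils down to resizing Akka's default dispatcher in application.conf; a sketch of what that might look like (the 100-thread figure matches this project, not a universal default):

```hocon
# application.conf (sketch) -- size the default dispatcher for blocking JDBC
akka {
  actor {
    default-dispatcher {
      executor = "thread-pool-executor"
      throughput = 1
      thread-pool-executor {
        fixed-pool-size = 100
      }
    }
  }
}
```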

In a different Akka project, where we had a mix of blocking and non-blocking code, we had separate dispatchers configured for the different features, like "blocking-dispatcher", "important-feature-dispatcher" and "default-dispatcher". This performed fine, but it was more complex than having one dispatcher: we had to know/guess/monitor how much each one needed. We load tested it and found that 1 thread was too slow, 5 threads were better, and beyond 10 threads it didn't get any faster, so we left it at 10 threads. Eventually we refactored away our blocking code and moved everything to the default dispatcher.
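A per-feature split like the one described can be expressed in Akka configuration; a sketch (the dispatcher names are the ones mentioned above, and the sizes are illustrative, with 10 threads on the blocking pool matching the load-test result):

```hocon
# application.conf (sketch) -- one dispatcher per feature, looked up by name
blocking-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    fixed-pool-size = 10
  }
  throughput = 1
}

important-feature-dispatcher {
  type = Dispatcher
  executor = "fork-join-executor"
  fork-join-executor {
    parallelism-min = 2
    parallelism-max = 8
  }
}
```

Each dispatcher is then looked up by name, e.g. system.dispatchers.lookup("blocking-dispatcher"), and passed to the Futures belonging to that feature.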

But each use case is different; you need to profile and monitor your system to know what's right for you. If all your code is non-blocking it's easy: it should be 1 thread per CPU core.
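That sizing rule can be sketched directly (nothing Play- or Akka-specific, just a fixed pool sized to the core count):

```scala
import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext

// For purely non-blocking, CPU-bound work: one thread per CPU core.
val cores = Runtime.getRuntime.availableProcessors()
val pool = Executors.newFixedThreadPool(cores)
val cpuBoundEc: ExecutionContext = ExecutionContext.fromExecutor(pool)

println(s"sized pool to $cores threads")
pool.shutdown() // a real app would keep this alive for its lifetime
```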

[1] https://www.playframework.com/documentation/2.5.x/ThreadPools#Highly-synchronous

Watchband answered 18/5, 2017 at 10:7
