From this video, we can say that Loom is great, but it's not a replacement for Kotlin coroutines. Coroutines are still the recommended way to handle concurrent processes in Kotlin.
To compare them:
- Loom can improve the performance of applications: it can run many virtual threads, and it costs far less to block a virtual thread than to block a regular platform thread.
- Kotlin coroutines are intrusive because we cannot call suspend functions from normal functions, which is not the case with Loom (see the sketch after this list).
- Structured concurrency is much easier with Kotlin coroutines than with Loom.
- Interoperability with reactive programming is simpler for Kotlin coroutines, where we can just use flows, than it is for Loom.
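To illustrate the intrusiveness point, here is a minimal sketch (the function names fetchUser and fetchUserBlocking are made up for illustration): a suspend function can only be called from a coroutine or another suspend function, while blocking code, which Loom makes cheap, can be called from anywhere.

import kotlinx.coroutines.delay

// Suspending: callable only from a coroutine or another suspend function
suspend fun fetchUser(): String {
    delay(1000) // suspends without blocking a thread
    return "user"
}

// Blocking: callable from any function; with Loom it would simply block
// a cheap virtual thread instead of a costly platform thread
fun fetchUserBlocking(): String {
    Thread.sleep(1000) // blocks the current thread
    return "user"
}

fun normalFunction() {
    // fetchUser()      // does not compile: suspend call outside a coroutine
    fetchUserBlocking() // compiles: blocking is not visible in the signature
}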
To wrap up, the best thing Loom has to offer is virtual threads, which can improve performance, while Kotlin coroutines have more to offer overall. So why not use both together?
Well, there is an interesting approach introduced by Marcin Moskała in his book "Kotlin Coroutines: Deep Dive". We can use Loom directly in Kotlin coroutines code and get better performance while keeping structured concurrency and everything else coroutines give us. To do that, we use virtual threads as a replacement for Dispatchers.IO. Below is an example that defines a dispatcher backed by virtual threads:
import java.util.concurrent.Executors
import kotlin.system.measureTimeMillis
import kotlinx.coroutines.*

// Requires a JDK with virtual threads (Java 21+)
val LoomDispatcher = Executors
    .newVirtualThreadPerTaskExecutor()
    .asCoroutineDispatcher()

// Expose it in the same way as the built-in dispatchers
val Dispatchers.Loom: CoroutineDispatcher
    get() = LoomDispatcher

suspend fun main() = measureTimeMillis {
    coroutineScope {
        repeat(100_000) {
            launch(Dispatchers.Loom) {
                Thread.sleep(1000) // blocks a cheap virtual thread
            }
        }
    }
}.let(::println)
Because Dispatchers.IO has only 64 threads, running the same workload on it would take over 26 minutes. We can, however, use limitedParallelism to increase the number of threads and give the code a chance to execute faster. The code using Dispatchers.IO is as follows:
import kotlin.system.measureTimeMillis
import kotlinx.coroutines.*

suspend fun main() = measureTimeMillis {
    // Dispatchers.IO is elastic: limitedParallelism with a value above 64
    // creates a view backed by additional platform threads
    val dispatcher = Dispatchers.IO
        .limitedParallelism(100_000)
    coroutineScope {
        repeat(100_000) {
            launch(dispatcher) {
                Thread.sleep(1000) // each blocks a platform thread
            }
        }
    }
}.let(::println)
I didn't personally run the code with virtual threads, but the book says it took a bit more than two seconds to execute (which is amazing, knowing that each of the 100,000 threads is blocked for one second), while the second version, using Dispatchers.IO with limitedParallelism, took around 30 seconds to finish.
I would say that the best way to make the code performant while still using all the nice things coroutines offer is to use Loom as a substitute for Dispatchers.IO but keep using coroutines, even though there is no recommendation of any kind about this in the official documentation.
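For instance, in typical application code the switch is just a matter of which dispatcher you pass. Here is a minimal sketch that reuses the Dispatchers.Loom extension defined above; loadReport and blockingDatabaseCall are hypothetical names used only for illustration:

import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

// Hypothetical blocking call, e.g. JDBC or a legacy HTTP client
fun blockingDatabaseCall(): String {
    Thread.sleep(500) // stand-in for real blocking I/O
    return "report"
}

// Instead of withContext(Dispatchers.IO) { ... }, dispatch the blocking work
// to virtual threads while keeping structured concurrency
suspend fun loadReport(): String =
    withContext(Dispatchers.Loom) {
        blockingDatabaseCall() // blocks a cheap virtual thread
    }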