Contention in concurrent use of java.util.Random

Oracle Java documentation says:

Instances of java.util.Random are threadsafe. However, the concurrent use of the same java.util.Random instance across threads may encounter contention and consequent poor performance. Consider instead using ThreadLocalRandom in multithreaded designs.

What might be the reason behind poor performance?

Wendelin answered 10/3, 2014 at 23:32 Comment(1)
Presumably because there's some sort of synchronisation on the internal state? – Majestic

Internally, java.util.Random keeps an AtomicLong with the current seed, and whenever a new random number is requested, there is contention in updating the seed.

From the implementation of java.util.Random:

protected int next(int bits) {
    long oldseed, nextseed;
    AtomicLong seed = this.seed;
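    // Retry loop: compute the next seed from the old one and try to publish it with
    // compareAndSet; if another thread won the race, the CAS fails and we try again.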
    do {
        oldseed = seed.get();
        nextseed = (oldseed * multiplier + addend) & mask;
    } while (!seed.compareAndSet(oldseed, nextseed));
    return (int)(nextseed >>> (48 - bits));
}

On the other hand, ThreadLocalRandom ensures that the seed is updated without facing any contention, by having one seed per thread.
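
As a quick sketch (illustrative only; the class and method names below are made up for the example), this is the usual way a shared Random gets replaced by ThreadLocalRandom in multithreaded code:

import java.util.Random;
import java.util.concurrent.ThreadLocalRandom;

class SeedContentionSketch {
    // Contended: every thread does a compare-and-set on the same AtomicLong seed.
    static final Random shared = new Random();

    static int contended() {
        return shared.nextInt(100);
    }

    // Uncontended: current() returns the calling thread's own generator,
    // so each thread only ever touches its own seed.
    static int perThread() {
        return ThreadLocalRandom.current().nextInt(100);
    }
}

Note that ThreadLocalRandom.current() should be called by the thread that needs the value; stashing the returned instance in a field shared across threads would reintroduce the sharing you are trying to avoid.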

Fatness answered 10/3, 2014 at 23:36 Comment(1)
This also means that poor performance only happens if you call next very frequently, as compareAndSet will fail more often when more threads call next at the same time. – Implant

The Random class effectively serialises updates to its internal state: the seed lives in an AtomicLong, and only one thread's compareAndSet can succeed at a time, so concurrent callers have their updates rejected and must retry. If many threads draw numbers from the same instance, most update attempts fail and are repeated, which is where the poor performance comes from.

ThreadLocalRandom can be used instead to provide transparent per-thread instances, so that the internal state is read and updated on a per-thread basis, avoiding the contention entirely.

Note that, if implemented correctly, the AtomicLong update shouldn't perform horribly unless you're running a huge number of threads, as compareAndSet essentially compiles down within the JVM to something like lock cmpxchg on x86. The major computational cost, outside of the atomic update, is the 64-bit multiply and the shift that follows it.
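
A rough way to see the effect is to time the same total number of calls against a shared Random and against ThreadLocalRandom. The sketch below is illustrative only (the class name, thread count and call count are arbitrary choices, and it ignores JIT warm-up), so treat the numbers as an indication rather than a proper benchmark:

import java.util.Random;
import java.util.concurrent.ThreadLocalRandom;

public class ContentionDemo {
    static final int THREADS = 8;        // arbitrary; pick at least the number of cores
    static final int CALLS = 5_000_000;  // calls per thread

    static final Random shared = new Random();

    public static void main(String[] args) throws InterruptedException {
        System.out.println("shared Random:     " + time(() -> shared.nextInt()) + " ms");
        System.out.println("ThreadLocalRandom: " + time(() -> ThreadLocalRandom.current().nextInt()) + " ms");
    }

    // Starts THREADS threads, each making CALLS calls to the generator,
    // and returns the elapsed wall-clock time in milliseconds.
    static long time(Runnable generator) throws InterruptedException {
        Thread[] workers = new Thread[THREADS];
        long start = System.nanoTime();
        for (int i = 0; i < THREADS; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < CALLS; j++) {
                    generator.run();
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        return (System.nanoTime() - start) / 1_000_000;
    }
}

On a multi-core machine the shared instance is usually noticeably slower, and the gap widens as the thread count grows.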

Schell answered 10/3, 2014 at 23:40 Comment(2)
This isn't right. Multiple threads can access an AtomicLong at once; there is no lock on access. The concurrent atomics use volatiles. – Detinue
Incorrect; the reason why Random suffers contention is AtomicLong.compareAndSet, which in itself suffers contention when used from many threads. Refer to #3556783 – Barthol
