Even faster inexpensive thread-safe counter?

I've read this topic: C# Thread safe fast(est) counter and have implemented this feature in my parallel code. As far as I can see it all works fine; however, it has measurably increased the processing time, by roughly 10%.

It's been bugging me a bit, and I think the problem lies in the fact that I'm doing a huge number of relatively cheap (<1 quantum) tasks on small data fragments which are well partitioned and probably make good use of cache locality, thus running optimally. My best guess, based on what little I know about MESI, is that the x86 LOCK prefix in Interlocked.Increment pushes the cache line into Exclusive mode, forcing a cache miss on the other cores and a cache-line reload on every single parallel pass, just for the sake of incrementing this counter. With a ~100 ns delay per cache miss and my workload, it seems to add up. (Then again, I could be wrong.)

Now, I don't see a way around it, but maybe I am missing something obvious. I was even thinking about using n counters (corresponding to the degree of parallelism) and incrementing each one on a specific core, but that seems infeasible (detecting which core I am on will probably be more expensive, not to mention the elaborate if/then/else structure and the mess it makes of the execution pipeline). Any ideas on how to break this beast? :)

Chindwin answered 17/7, 2015 at 17:20 Comment(6)
map-reduce is generally easier to reason about than interlocked/cache-based approaches... If you can partition the data, then count each partition separately and then combine the results...Avunculate
You mean split the batch into n streams and process each stream on a specific core with multiple increments? But I don't have a way of forcing affinity, no? Which means my streams will now be longer than 1 quantum and would wander across the cores, again forcing cache misses on their own counter, no? Even if it's not interlocked, it will be dirty and force a write-read. I would have n times fewer misses per counter but n times more counters.Chindwin
The partitioned model is used by Java's Striped64. The fast path (uncontended) is cheap, with complexity to dynamically grow the counters when contention is detected. I adapted it for growing an array of ring buffers, and it works very well in practice.Underfur
Can you "batch update" the counter, ie process several items and use Interlocked.Add instead of Interlocked.Increment?Klara
@Chindwin as long as you guarantee that a batch is processed by only one thread at a time, you don't need any locking around a batch-specific counter. Note that the requirement is "only one at a time", not "one and only one" or "one particular core". Indeed you have more counters, but if your batches are big enough it should be fine.Avunculate
I don't see how Add would help; it still has to invalidate the cache line. As for the other point, it does matter, because the thread scheduler will preempt me at some point and then revive me on another core. That core will then have an already invalidated cache line.Chindwin
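A minimal sketch of the batching idea from Klara's comment, under my own assumptions: the class and method names (BatchedCounter, ProcessBatch) are illustrative, and the work is assumed to already arrive in batches. Each worker counts finished items in a plain local variable and publishes the tally with a single Interlocked.Add per batch, so the shared cache line is written far less often than once per item.

    using System.Threading;

    static class BatchedCounter
    {
        private static long _total; // shared counter, written once per batch instead of once per item

        public static void ProcessBatch(int[] batch)
        {
            int processed = 0;
            foreach (int item in batch)
            {
                // ... the real per-item work would go here ...
                processed++;                          // plain local increment, no contention
            }
            Interlocked.Add(ref _total, processed);   // single contended write per batch
        }

        public static long Total => Interlocked.Read(ref _total);
    }

Whether this helps depends on batch size: the contended write still invalidates the line, but only once per batch rather than once per item.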

Operations from multiple cores on the same cache line contend in hardware. This is true for locked and for regular memory access alike, and it is a real problem. Contending accesses do not scale at all as more cores are added; scaling typically goes sharply negative.

You need to use multiple cache lines with each core using its own most of the time.

You could use a ThreadLocal<Holder> with class Holder { public int I; } for that. ThreadLocal<T> can track all instances that have been created (pass trackAllValues: true), so you can enumerate and sum them at the end. You could also use a struct that is padded to the cache line size; that's safer.

Note that it is not important to use one counter per core. Per-thread is good enough, because time quanta are incredibly long compared to increment operations. A few bad accesses are not a performance problem.
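A minimal sketch of this per-thread approach, assuming .NET 4.5+ (for the ThreadLocal<T> constructor that tracks all values) and a 64-byte cache line; the PaddedCounter/Holder/PerThreadCounter names and the padding size are my illustration, not code from the answer.

    using System.Linq;
    using System.Runtime.InteropServices;
    using System.Threading;

    // Padded to an assumed 64-byte cache line so two threads' counters never share a line.
    [StructLayout(LayoutKind.Explicit, Size = 64)]
    public struct PaddedCounter
    {
        [FieldOffset(0)] public long Value;
    }

    public sealed class Holder
    {
        public PaddedCounter Counter;
    }

    public static class PerThreadCounter
    {
        // trackAllValues: true lets us enumerate every thread's Holder when summing.
        private static readonly ThreadLocal<Holder> _local =
            new ThreadLocal<Holder>(() => new Holder(), trackAllValues: true);

        // Only the owning thread writes its Holder, so a plain increment is enough.
        public static void Increment() => _local.Value.Counter.Value++;

        // Sum after the parallel work completes; Values is not a consistent snapshot while writers run.
        public static long Sum() => _local.Values.Sum(h => h.Counter.Value);
    }

Each worker simply calls PerThreadCounter.Increment(); its own line stays hot in that core's cache for the duration of the quantum.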

A faster option would be to use a Holder[]. Each thread draws a random array index once and then accesses that holder object. Array indexing is faster than thread-local access. If the number of holder instances is much bigger (10x) than the number of threads, there will be little contention, and most writes will go to the same, already-cached line.

Instead of a random index you can use a List<Holder> and add items as more threads join the processing.
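A sketch of the striped-array variant, again padded to an assumed 64-byte line. The StripedCounter/PaddedSlot names, the 32-slot default, and deriving the slot from a worker or task id are my illustration; the answer suggests a one-time random draw, and the asker ended up masking a task id, either of which fits the same shape.

    using System.Linq;
    using System.Runtime.InteropServices;
    using System.Threading;

    [StructLayout(LayoutKind.Explicit, Size = 64)] // assumed 64-byte cache line
    public struct PaddedSlot
    {
        [FieldOffset(0)] public long Value;
    }

    public sealed class StripedCounter
    {
        private readonly PaddedSlot[] _slots;
        private readonly int _mask;

        public StripedCounter(int stripes = 32)   // power of two, so the index can be masked
        {
            _slots = new PaddedSlot[stripes];
            _mask = stripes - 1;
        }

        // Each worker picks its slot once: a random draw, the managed thread id,
        // or a task id ANDed with the mask all work.
        public int SlotFor(int workerId) => workerId & _mask;

        // Interlocked keeps a slot correct on the rare occasion two workers map to it.
        public void Increment(int slot) => Interlocked.Increment(ref _slots[slot].Value);

        public long Sum() => _slots.Sum(s => s.Value);
    }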

Injury answered 19/7, 2015 at 21:13 Comment(2)
Huh, good idea with the indexed array. I actually even have a value I can binary-AND to get a 2^x-wide uniform distribution across tasks. I'll experiment with x to see which value gives better results. I up-voted your answer, and if this fixes it I'll accept it.Chindwin
Good stuff. I avoided random and used a unique task id ANDed with 0x1F to derive the index into a 32-wide array, and performance jumped back up, comparable to the speed seen without the counter. Accepting.Chindwin

I thought I would offer some clarification about cache coherence and what the LOCK prefix does on Intel architectures. Since it's too long for a comment and also answers some of the points you raised, I think it's appropriate to post it as an answer.

In the MESI cache coherence protocol, a write to a cache line requires the writing core to gain exclusive ownership of that line, invalidating any copies in other cores' caches, regardless of whether you use the LOCK prefix or not. So if two processors are repeatedly accessing the same cache line and at least one of them is writing, both will keep taking misses on the line they share. If they were both only reading from the line, they would get cache hits, because each could keep the line in its private L1 cache in the Shared state.

What the LOCK prefix does is restrict the amount of speculative work the processor can do while waiting for the locked instruction to finish executing. Section 8.1.2 of the Intel 64 and IA-32 Architectures Software Developer’s Manual says:

Locked operations are atomic with respect to all other memory operations and all externally visible events. Only instruction fetch and page table accesses can pass locked instructions. Locked instructions can be used to synchronize data written by one processor and read by another processor.

Under normal circumstances the processor is able to speculatively execute instructions while waiting for a cache miss to be resolved, but the LOCK prefix prevents this and essentially stalls the pipeline until the locked instruction has finished executing.
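A small illustration (mine, not part of this answer) of why the LOCK prefix is needed at all: a plain increment is a non-atomic read-modify-write that loses updates under contention, while Interlocked.Increment (typically a locked add on x86) stays correct, at the cost of the serialization and cache-line traffic described above.

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    static class LockPrefixDemo
    {
        static int _plain;   // incremented with ++, a racy read-modify-write
        static int _locked;  // incremented with Interlocked.Increment

        static void Main()
        {
            const int Workers = 8;
            const int PerWorker = 1000000;

            Parallel.For(0, Workers, _ =>
            {
                for (int i = 0; i < PerWorker; i++)
                {
                    _plain++;                            // updates get lost under contention
                    Interlocked.Increment(ref _locked);  // atomic, but contends on the cache line
                }
            });

            Console.WriteLine($"plain:  {_plain}  (expected {Workers * PerWorker})");
            Console.WriteLine($"locked: {_locked}");
        }
    }

On a multi-core machine the plain counter usually ends up short of the expected total, while the locked counter is exact.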

Sicard answered 19/7, 2015 at 20:22 Comment(0)