Does Interlocked provide visibility in all threads?

Suppose I have a variable "counter", and there are several threads accessing and setting the value of "counter" by using Interlocked, i.e.:

int value = Interlocked.Increment(ref counter);

and

int value = Interlocked.Decrement(ref counter);

Can I assume that the change made by Interlocked will be visible to all threads?

If not, what should I do to make all threads synchronize the variable?

EDIT: Someone suggested using volatile. But when I mark "counter" as volatile, I get the compiler warning "a reference to a volatile field will not be treated as volatile".

When I read online help, it said, "A volatile field should not normally be passed using a ref or out parameter".
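For reference, this is the pattern that produces that warning (compiler warning CS0420); the class and method names below are just for illustration:

using System.Threading;

class Example
{
    private volatile int counter;

    public void Bump()
    {
        // warning CS0420: 'Example.counter': a reference to a volatile field
        // will not be treated as volatile
        Interlocked.Increment(ref counter);
    }
}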

Moxley answered 5/2, 2009 at 17:10 Comment(1)
Yes, an interlocked increment/decrement (on x86 and IA-64) automatically gives visibility to all threads, since it has an implicit memory barrier. Volatile is not necessary (although it's not illegal).Unsustainable

InterlockedIncrement/Decrement on x86 CPUs (x86's lock add/dec) automatically create a memory barrier, which gives visibility to all threads (i.e., all threads see the update in order, as in sequential memory consistency). A memory barrier forces all pending memory loads/stores to complete. volatile is not directly related to this question, although C#, Java, and some C/C++ compilers make volatile emit a memory barrier. The interlocked operation already has a memory barrier at the CPU level.

Please also take a look at my other answer on Stack Overflow.

Note that I am assuming that C#'s Interlocked.Increment/Decrement map intrinsically to x86's lock add/dec.
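For illustration, here is a minimal sketch (the names are mine, not from the question) in which several threads increment a shared counter only through Interlocked.Increment; no increment is lost, and the joined main thread sees the final total:

using System;
using System.Threading;

class CounterDemo
{
    static int counter; // shared between threads

    static void Main()
    {
        var threads = new Thread[10];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int j = 0; j < 100000; j++)
                    Interlocked.Increment(ref counter); // atomic read-modify-write with a full fence
            });
            threads[i].Start();
        }
        foreach (var t in threads) t.Join();

        // Always prints 1000000.
        Console.WriteLine(counter);
    }
}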

Unsustainable answered 26/1, 2010 at 19:16 Comment(1)
Hardware visibility alone is not sufficient to imply 'program' visibility.Hypozeuxis

Can I assume that the change made by Interlocked will be visible to all threads?

This depends on how you read the value. If you "just" read it, then no, the change won't always be visible to other threads unless you mark the field as volatile. That causes an annoying warning, though.

As an alternative (and much preferred IMO), read it using another Interlocked instruction. This will always see the updated value on all threads:

int readvalue = Interlocked.CompareExchange(ref counter, 0, 0);

which returns the value read and, if it was 0, swaps it with 0 (so it never actually changes the stored value).
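As a sketch (the wrapper class is hypothetical, not from the question), a counter type that only ever touches the field through Interlocked, for both writes and reads:

using System.Threading;

public class SafeCounter
{
    private int counter;

    public int Increment() => Interlocked.Increment(ref counter);
    public int Decrement() => Interlocked.Decrement(ref counter);

    // CompareExchange compares counter with 0 and, only if equal, writes 0 back,
    // so the value never actually changes; the return value is the current value.
    public int Read() => Interlocked.CompareExchange(ref counter, 0, 0);
}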

Motivation: the warning hints that something isn't right; combining the two techniques (volatile & interlocked) wasn't the intended way to do this.

Update: it seems that another approach to reliable 32-bit reads without using "volatile" is to use Thread.VolatileRead, as suggested in this answer. There is also some evidence that I am completely wrong about using Interlocked for 32-bit reads, for example this Connect issue, though I wonder if the distinction is a bit pedantic in nature.

What I really mean is: don't use this answer as your only source; I'm having my doubts about this.

Halfprice answered 26/1, 2010 at 16:8 Comment(0)

Actually, no. If you want to safely modify counter, then you are doing the correct thing. But if you want to read counter directly, you need to declare it as volatile. Otherwise, the compiler has no reason to believe that counter will change, because the Interlocked operations are in code that it might not see.
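As a sketch of the Volatile.Read alternative mentioned in the comment below (the class is my example, not the answerer's):

using System.Threading;

class Counter
{
    private int counter;   // deliberately not declared volatile

    public void Increment()
    {
        Interlocked.Increment(ref counter);
    }

    public int Read()
    {
        // A plain 'return counter;' could legally be cached or hoisted by the JIT.
        // Volatile.Read gives the read acquire semantics without marking the field
        // volatile (which would trigger the "reference to volatile field" warning
        // when the field is passed by ref to Interlocked).
        return Volatile.Read(ref counter);
    }
}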

Tetraspore answered 5/2, 2009 at 17:20 Comment(1)
This is correct, although volatile is not the only means available. A Volatile.Read would probably be more fitting.Hypozeuxis

No; using Interlocked only at the write site does not ensure that reads of the variable elsewhere in the code are actually fresh; a program that does not also read the field correctly may not be thread-safe, even under a "strong memory model". This applies to any form of assignment to a field shared between threads.

Here is an example of code that will never terminate because of a JIT optimization. (It was adapted from Memory Barriers in .NET into a runnable LINQPad program updated for this question.)

// Run this as a LINQPad program in "Release Mode".
// ~ It will never terminate on .NET 4.5.2 / x64. ~
// The program will terminate in "Debug Mode" and may terminate
// in other CLR runtimes and architecture targets.
class X {
    // Adding {volatile} would 'fix the problem', as it prevents the JIT
    // optimization that results in the non-terminating code.
    public int terminate = 0;
    public int y;

    public void Run() {
        var r = new ManualResetEvent(false);
        var t = new Thread(() => {
            int x = 0;
            r.Set();
            // Using Volatile.Read or otherwise establishing
            // an Acquire Barrier would disable the 'bad' optimization.
            while(terminate == 0){x = x * 2;}
            y = x;
        });

        t.Start();
        r.WaitOne();
        Interlocked.Increment(ref terminate);
        t.Join();
        Console.WriteLine("Done: " + y);
    }
}

void Main()
{
    new X().Run();
}

The explanation from Memory Barriers in .NET:

This time it is JIT, not the hardware. It’s clear that JIT has cached the value of the variable terminate [in the EAX register and the] program is now stuck in the loop highlighted above ..

Either using a lock or adding a Thread.MemoryBarrier inside the while loop will fix the problem. Or you can even use Volatile.Read [or a volatile field]. The purpose of the memory barrier here is only to suppress JIT optimizations. Now that we have seen how software and hardware can reorder memory operations, it’s time to discuss memory barriers ..

That is, an additional barrier construct is required on the read side to prevent issues with compilation and JIT reordering/optimizations: this is a different issue from memory coherency!

Adding volatile here would prevent the JIT optimization and thus 'fix the problem', even though doing so results in a warning. This program can also be corrected through the use of Volatile.Read or one of the various other operations that establish a barrier: these barriers are as much a part of CLR/JIT program correctness as the underlying hardware memory fences.
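For concreteness, this is how the read in the worker loop above would look with Volatile.Read (my sketch; the quoted article's other fixes, such as a lock, Thread.MemoryBarrier, or a volatile field, work equally well):

// Inside the worker thread of class X above:
// Volatile.Read establishes an acquire barrier, which stops the JIT from caching
// 'terminate' in a register, so the loop observes the Interlocked.Increment and exits.
while (Volatile.Read(ref terminate) == 0) { x = x * 2; }
y = x;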

Hypozeuxis answered 4/6, 2018 at 19:0 Comment(0)

Interlocked ensures that only one thread at a time can update the value. To ensure that other threads can read the correct value (and not a cached value), mark it as volatile.

public volatile int Counter;

Allopath answered 5/2, 2009 at 17:10 Comment(6)
When I mark it as volatile, there is a compiler warning: "a reference to a volatile field will not be treated as volatile".Moxley
ignore that warning for this case: #425632Makkah
Apparently you don't need Volatile if you are using Interlocked, but if you are modifying without using Interlocked then you do.Allopath
Just to clarify. Mark items as volatile if you are going to read them without obtaining a lock. Use Interlocked.Increment to synchronise updating, or use a lock() on something. The warning you get about "ref not being treated as volatile" is generic and can be ignored in the case of Interlocked.Allopath
I'm afraid that this is not a correct answer. Any other thread can see an interlocked operation. It has visibility to all threads. Volatile is not necessary. If I'm wrong, please correct me.Unsustainable
@Unsustainable That is incorrect. CLR memory barriers (those that prevent re-ordering and invalid optimizations) are just as important as hardware memory fences. To ensure a barrier, the field must be volatile or the read must otherwise use an appropriate construct that establishes the appropriate barrier (such as Volatile.Read).Hypozeuxis
