It is known that, unlike Java's volatiles, .NET's volatiles allow a volatile write to be reordered with a subsequent volatile read from another location. When this is a problem, the usual advice is to place MemoryBarrier between them, or to use Interlocked.Exchange instead of the volatile write.
That works, but MemoryBarrier can be a performance killer when used in highly optimized lock-free code.
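For reference, here is a minimal sketch of those two conventional options. The field names (_flag1, _flag2, _flag1Plain) are placeholders of my own, not from any real code in the question:

```csharp
using System.Threading;

class ConventionalOptions
{
    private volatile int _flag1;   // "volatile1"
    private volatile int _flag2;   // "volatile2"
    private int _flag1Plain;       // plain field for the Interlocked variant

    // Option 1: an explicit full fence between the volatile write and the volatile read.
    int WriteThenRead_Barrier()
    {
        _flag1 = 1;               // volatile write
        Thread.MemoryBarrier();   // full fence: forbids the store/load reordering
        return _flag2;            // volatile read
    }

    // Option 2: Interlocked.Exchange performs the write with full-barrier semantics,
    // so no separate MemoryBarrier is needed before the following read.
    int WriteThenRead_Interlocked()
    {
        Interlocked.Exchange(ref _flag1Plain, 1);
        return _flag2;            // volatile read
    }
}
```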
I thought about it a bit and came up with an idea. I would like somebody to tell me whether I've taken the right approach.
So, the idea is the following:
We want to prevent reordering between these two accesses:
volatile1 write
volatile2 read
From the .NET memory model we know that:
1) writes to a variable cannot be reordered with a following read from
the same variable
2) no volatile accesses can be eliminated
3) no memory accesses can be reordered with a previous volatile read
To prevent unwanted reordering between the write and the read, we introduce a dummy volatile read of the variable we've just written to:
A) volatile1 write
B) volatile1 read [to a visible (accessible | potentially shared) location]
C) volatile2 read
In that case B cannot be reordered with A because they access the same variable (rule 1), B cannot be eliminated (rule 2), and C cannot be reordered with B because no memory access can be reordered with a previous volatile read (rule 3); transitively, C cannot be reordered with A.
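In code, the proposed pattern would look roughly like this (a sketch only; _volatile1, _volatile2 and _sink are placeholder names, with _sink standing in for the "visible location" the dummy read is stored to):

```csharp
class DummyReadBarrier
{
    private volatile int _volatile1;
    private volatile int _volatile2;
    private int _sink; // visible location, so the dummy read cannot be optimized away

    int WriteThenRead()
    {
        _volatile1 = 1;       // A) volatile write
        _sink = _volatile1;   // B) dummy volatile read of the just-written variable
        return _volatile2;    // C) volatile read from the other location
    }
}
```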
And the question:
Am I right? Can that dummy volatile read be used as a lightweight memory barrier in such a scenario?
I would question using volatile at all in "highly optimized lock-free code." Your volatile read is costing, what, a hundred or more cycles? So a volatile read costs about half as much as an uncontended lock, possibly even more than that. My suggestion would be to re-think your design so as to avoid volatile. bluebytesoftware.com/blog/2010/12/04/SayonaraVolatile.aspx – Squander