I have some code which must be thread-safe, whose behavior resembles this:
protected long m_RunningValue;
protected long m_RunningCounter;
protected object m_Lock = new object();

public long RunningValue { get { return Interlocked.Read(ref m_RunningValue); } }
public long RunningCounter { get { return Interlocked.Read(ref m_RunningCounter); } }

public void DoCalculation(int newValue, int newQuantity)
{
    lock (m_Lock)
    {
        Interlocked.Add(ref m_RunningValue, newValue);
        Interlocked.Add(ref m_RunningCounter, newQuantity);
        if (Interlocked.Read(ref m_RunningCounter) == 0)
        {
            // ...m_RunningValue gets further modified here
        }
    }
}
The calculation must lock both the value and the counter, or a race condition could affect the if(...) block. However, they do not need to be synchronized at all when being read out; i.e., if the counter and the value change between attempts to read both, that's 100% OK with me.
The Interlocked on the read is there for thread-safe reading of a 64-bit value.
Is mixing Interlocked operations and locks like this safe? I have read on other web pages that mixing them is unsafe, but I can't find clarification on whether that means mixing them is merely a great way to introduce subtle bugs, or whether at a system level it can actually corrupt the data structures involved.
Is the cost of all this interlocking (64-bit .NET 4.0 runtime) completely defeating the purpose of avoiding a ReaderWriterLockSlim around the property get() methods?
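For reference, the alternative I'm weighing would look roughly like this (a sketch only, reusing the field names from above; the m_RwLock field is new):

    protected ReaderWriterLockSlim m_RwLock = new ReaderWriterLockSlim();

    public long RunningValue
    {
        get
        {
            m_RwLock.EnterReadLock();
            try { return m_RunningValue; }   // plain read is tear-free under the lock
            finally { m_RwLock.ExitReadLock(); }
        }
    }

    public void DoCalculation(int newValue, int newQuantity)
    {
        m_RwLock.EnterWriteLock();
        try
        {
            m_RunningValue += newValue;      // no Interlocked needed inside the write lock
            m_RunningCounter += newQuantity;
            if (m_RunningCounter == 0)
            {
                // ...m_RunningValue gets further modified here
            }
        }
        finally { m_RwLock.ExitWriteLock(); }
    }

Here every reader pays the EnterReadLock/ExitReadLock cost, which is what I was hoping to avoid with the Interlocked reads.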
newQuantity. With regards to performance, I would guess that it depends on the relative mix of reads and updates — if it's important, you should probably benchmark it instead of guessing. – Illuse