Advantages of using condition variables over mutex

I was wondering what is the performance benefit of using condition variables over mutex locks in pthreads.

What I found is : "Without condition variables, the programmer would need to have threads continually polling (possibly in a critical section), to check if the condition is met. This can be very resource consuming since the thread would be continuously busy in this activity. A condition variable is a way to achieve the same goal without polling." (https://computing.llnl.gov/tutorials/pthreads)

But it also seems that mutex calls are blocking (unlike spin-locks). Hence, if a thread (T1) fails to get a lock because some other thread (T2) holds it, T1 is put to sleep by the OS and is woken up only when T2 releases the lock and the OS hands it to T1. T1 does not actually poll to get the lock. From this description, it seems there is no performance benefit to using condition variables: in either case there is no polling, and the OS already seems to provide the benefit that the condition-variable paradigm is supposed to provide.

Can you please explain what actually happens?

Bertilla asked 20/1, 2011 at 0:7 Comment(0)

A condition variable allows a thread to be signaled when something of interest to that thread occurs.

By itself, a mutex doesn't do this.

If you just need mutual exclusion, then condition variables don't do anything for you. However, if you need to know when something happens, then condition variables can help.

For example, if you have a queue of items to work on, you'll have a mutex to ensure the queue's internals are consistent when accessed by the various producer and consumer threads. However, when the queue is empty, how will a consumer thread know when something is in there for it to work on? Without something like a condition variable it would need to poll the queue, taking and releasing the mutex on each poll (otherwise a producer thread could never put something on the queue).

With a condition variable, a consumer that finds the queue empty can simply wait on the condition variable until something is put into the queue. No polling - that thread does nothing until a producer puts something in the queue and then signals the condition variable that a new item is available.
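
To make that concrete, here is a minimal pthreads sketch of the pattern (the list type, variable names, and function names are invented for this illustration, not taken from the answer):

    #include <pthread.h>

    /* A simple shared list of pending work items (LIFO for brevity),
     * protected by a mutex, with a condition variable that means
     * "the list is no longer empty". */
    typedef struct node { struct node *next; int value; } node_t;

    static node_t *head = NULL;
    static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

    void produce(node_t *item)
    {
        pthread_mutex_lock(&lock);
        item->next = head;                   /* add the new item */
        head = item;
        pthread_cond_signal(&not_empty);     /* wake one waiting consumer */
        pthread_mutex_unlock(&lock);
    }

    node_t *consume(void)
    {
        pthread_mutex_lock(&lock);
        while (head == NULL)                       /* re-check: wakeups can be spurious */
            pthread_cond_wait(&not_empty, &lock);  /* releases the mutex while sleeping */
        node_t *item = head;
        head = item->next;
        pthread_mutex_unlock(&lock);
        return item;
    }

The while loop around pthread_cond_wait is what replaces polling: the consumer sleeps with the mutex released, is woken only when a producer signals, and re-checks the predicate because wakeups can be spurious and another consumer may have taken the item first.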

Dan answered 20/1, 2011 at 0:16 Comment(6)
Right. A mutex only allows you to wait until the lock is available; a condition variable allows you to wait until some application-defined condition has changed. – Ethiopian
Thank you for the example! Cleared things up :) – Bertilla
I don't see why you can't drop the condition variable and just use the mutex in some simple cases. Make acquiring the mutex the "work is available" condition. Then have the producer thread acquire the mutex and the worker thread try to acquire it. When work is available, the producer unlocks the mutex. Then the OS will wake the worker with the mutex acquired, and the worker will run to completion and then sleep (deadlock itself) by trying to re-acquire its own mutex. Once the producer has more work, it can unlock the mutex. Care may need to be taken to check that the mutex is locked first. – Whisenhunt
Assuming a non-recursive mutex (the same thread can't lock it more than once), @Eloff, your idea will deadlock everything, not just the consumer thread ("itself"). The producer thread can't unlock a mutex it hasn't locked, so it'll be stuck forever waiting for the consumer to unlock the mutex, which it never will because it's attempting to re-lock the mutex it's already locked. – Hearne
@PfhorSlayer, I don't follow "which it never will because it's attempting to re-lock the mutex it's already locked". Can't we simply unlock the mutex in the consumer when the data is processed, and lock it again on the next iteration? Or are you suggesting that the consumer will hold the lock most of the time and the producer will starve for the lock? – Horologe
If the consumer unlocks the mutex and is able to re-lock it before the producer can, then the "work is available" condition is immediately violated. There's no guarantee about which thread will end up taking the lock, and the idea that acquiring the mutex is equivalent to "work is available" requires a specific and consistent ordering of which thread is able to execute at each step. Add more worker threads to the mix and the likelihood of one of the consumers acquiring the mutex before the producer can goes up dramatically. – Hearne

You're looking for too much overlap in two separate but related things: a mutex and a condition variable.

A common implementation approach for a mutex is to use a flag and a queue. The flag indicates whether the mutex is held by anyone (a single-count semaphore would work too), and the queue tracks which threads are in line waiting to acquire the mutex exclusively.

A condition variable is then implemented as another queue bolted onto that mutex. Threads that got in line to wait to acquire the mutex can—usually once they have acquired it—volunteer to get out of the front of the line and get into the condition queue instead. At this point, you have two separate sets of waiters:

  • Those waiting to acquire the mutex exclusively
  • Those waiting for the condition variable to be signaled

When a thread holding the mutex exclusively signals the condition variable, for which we'll assume for now that it's a singular signal (unleashing no more than one waiting thread) and not a broadcast (unleashing all the waiting threads), the first thread in the condition variable queue gets shunted back over into the front (usually) of the mutex queue. Once the thread currently holding the mutex—usually the thread that signaled the condition variable—relinquishes the mutex, the next thread in the mutex queue can acquire it. That next thread in line will have been the one that was at the head of the condition variable queue.

There are many complicated details that come into play, but this sketch should give you a feel for the structures and operations in play.
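
As a purely conceptual sketch of those structures (this is not how any particular pthreads implementation is written; the thread_queue_t type and the operations are assumed for illustration):

    /* Conceptual only - real implementations use atomics, futexes, etc. */
    typedef struct thread_queue thread_queue_t;   /* assumed FIFO of blocked threads */

    struct mutex {
        int            held;      /* the "flag": 0 = free, 1 = owned */
        thread_queue_t waiters;   /* threads blocked trying to acquire the mutex */
    };

    struct cond {
        thread_queue_t waiters;   /* threads that gave up the mutex to wait for a signal */
    };

    /* wait(cond, mutex): release the mutex, move the caller onto
     * cond.waiters, and sleep; on wakeup, rejoin mutex.waiters and
     * return once the mutex has been reacquired.
     *
     * signal(cond): move the first thread (if any) from cond.waiters
     * back onto mutex.waiters; it runs when the mutex is released. */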

Ely answered 20/1, 2011 at 1:59 Comment(3)
pthreads mutexes and condition variables are even simpler - there's no queue, just an unordered set of waiting processes. – Ethiopian
When it's time to pick the "next" one in the signal case, one of them will be "first", so that set must be projected to a sequence. I understand there's not a FIFO guarantee, though one can certainly build a mutex/condition pair that does make that guarantee. Witness the various user-space "fair" implementations for Win32. – Ely
Thank you! I understand the basic idea that condition variables are not really another way to implement mutex locks, but they provide a signalling structure to better manage the distribution of the lock among the competing threads. – Bertilla

If you are looking for performance, then start reading about "non-blocking / non-locking" thread synchronization algorithms. They are based on atomic operations, which gcc is kind enough to provide. Look up gcc atomic operations. Our tests showed we could increment a global value from multiple threads using atomic operations orders of magnitude faster than by locking with a mutex. Here is some sample code that shows how to add items to and remove items from a linked list from multiple threads at the same time without locking.
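
As a rough sketch of the shared-counter case using GCC's __sync builtins (the names here are invented; newer code would typically use the __atomic builtins or C11 stdatomic.h instead):

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                     /* shared by all threads */

    static void *worker(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            __sync_fetch_and_add(&counter, 1);   /* atomic increment, no mutex */
        return NULL;
    }

    int main(void)
    {
        pthread_t t[4];
        for (int i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        printf("%ld\n", counter);                /* always 4000000 */
        return 0;
    }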

For sleeping and waking threads, signals are much faster than condition variables. You use pthread_kill to send the signal, and sigwait to put the thread to sleep. We tested this too, with the same kind of performance benefits. Here is some example code.
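
A minimal sketch of that signal-based sleep/wake (the choice of SIGUSR1 and the function names are arbitrary here):

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>

    static pthread_t sleeper_tid;

    static void *sleeper(void *arg)
    {
        sigset_t set;
        int sig;
        sigemptyset(&set);
        sigaddset(&set, SIGUSR1);
        sigwait(&set, &sig);          /* blocks until SIGUSR1 is delivered to this thread */
        printf("woken by signal %d\n", sig);
        return NULL;
    }

    int main(void)
    {
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, SIGUSR1);
        pthread_sigmask(SIG_BLOCK, &set, NULL);  /* block SIGUSR1 so sigwait can claim it;
                                                    the new thread inherits this mask */
        pthread_create(&sleeper_tid, NULL, sleeper, NULL);
        pthread_kill(sleeper_tid, SIGUSR1);      /* wake the sleeping thread */
        pthread_join(sleeper_tid, NULL);
        return 0;
    }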

Chimere answered 20/1, 2011 at 2:57 Comment(2)
Thanks a lot for pointing out the area of non-locking synchronization! I am looking into it now! – Bertilla
Nice tip about using signals instead of condition variables. – Whisenhunt
