How does a Mutex work? Does a mutex protect variables globally? Does the scope in which it is defined matter?
Does a mutex lock access to variables globally, or just those in the same scope as the locked mutex?

Note that I had to change the title of this question, as a lot of answers seem to be confused as to what I was asking. This is not a question about the scope (global or otherwise) of a "mutex object", it is a question about what scope of variables are "locked" by a mutex.

I believe the answer to be that a mutex locks access to all variables, i.e. all global and locally scoped variables. (This is a result of a mutex blocking thread execution rather than access to specific regions of memory.)

I am attempting to understand Mutexes.

I was attempting to understand what sections of memory, or equivalently, which variables, a mutex would lock.

However, my understanding from reading around online is that mutexes do not lock memory; they lock (or block) simultaneously running threads which are all members of the same process. (Is that correct?)

https://mortoray.com/2011/12/16/how-does-a-mutex-work-what-does-it-cost/

So my question has become simply "are mutexes global?"

... or are they perhaps "generally speaking global, but the stackoverflow community can imagine some special cases in which they are not?"

When originally considering my question, I was interested in things such as those shown in the following example.

#include <mutex>

// both in global scope; will this mutex lock any global-scope variable?
int global_variable;
std::mutex global_variable_mutex;

int main()
{
    // one thread operates here and locks global_variable_mutex
    // before reading/writing global_variable

    {
        // local variables in a block
        // launch some threads here, and wait for them later
        int local_variable;
        std::mutex local_variable_mutex;
        // wait for the launched threads to return

        // does the mutex here prevent data races on the variable
        // global_variable ???
    }
}

One may read this as C++, or as pseudo-code for C or any other similarly relevant language.

2021 edit: Question title has been changed to better reflect the contents of the question and associated answers.

Elurd answered 13/6, 2016 at 14:15 Comment(2)
Global like a global variable in a program? Or global as in the whole computer system? Microsoft Windows has named Mutex objects which can be per user session or per system. Linux can do POSIX shared memory locks.Shows
@SupunWijerathne Well not really - my understanding is that mutexes are related to threads rather than regions of memory - i.e. a mutex blocks other threads from running rather than preventing access to certain regions of memory. Since I misunderstood that initially when I posted the question, none of the answers picked up on that or said anything about it.Elurd

So my question has become simply "are mutexes global?"

No. A mutex has a lock() and an unlock() method, and the only thing a mutex does is cause its lock() call (from any thread) not to return for as long as another thread has that mutex locked. When the thread that was holding the mutex locked calls unlock(), that is when the lock() call will return in the first thread. That way it is guaranteed that only a single thread will be holding the mutex-lock (i.e. executing in the region between its lock() call and its unlock() call) at any given time.

That's really all there is to it. So a mutex will affect only the threads that call lock() on that particular mutex, and nothing else.


Mutex stands for "Mutual Exclusion" - using one correctly ensures that only one thread at a time will ever be executing any "critical section" protected by the same mutex.

If there are some variables you only ever access inside critical sections protected by the same mutex, your code doesn't have a data race on them. It doesn't matter whether they're global, static, pointed to by different variables in different threads, or reachable in any other way by which two threads might have a reference to the same object.
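For instance, here is a minimal C++ sketch (my addition, not part of the original answer) in which two threads serialize their access to a shared counter by locking the same std::mutex before touching it:

#include <iostream>
#include <mutex>
#include <thread>

int counter = 0;          // shared data we want to protect
std::mutex counter_mutex; // by convention, guards `counter`

void increment_many()
{
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> guard(counter_mutex); // lock() here ...
        ++counter;                                        // ... critical section ...
    }                                                     // ... unlock() when guard is destroyed
}

int main()
{
    std::thread a(increment_many);
    std::thread b(increment_many);
    a.join();
    b.join();
    std::cout << counter << '\n'; // always 200000: the increments were serialized
}

Without the lock_guard, the two threads would race on counter and the final value would be unpredictable.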

Resurrectionism answered 13/6, 2016 at 14:21 Comment(7)
By global I was referring to "does a mutex lock access to variables globally?" From what you have explained here, it seems as if the answer is therefore "yes"?Elurd
It has nothing to do with variables (which are just another form of memory); it has to do with what a thread does when the mutex is/isn't locked. Or to put it another way, if your thread(s) only access global variables while they have the mutex locked, then their accesses to those global variables will be serialized by the mutex. OTOH, if your threads access those global variables without locking the mutex first, then access will not be serialized. But that's up to the threads, not the mutex.Resurrectionism
So would a thread be able to access stuff in its registers/CPU cache while a mutex from another thread was locked?Elurd
Yes. Note that both threads need be locking the same mutex, though. If thread A only locks mutex A and thread B only locks mutex B, then neither thread's accesses will be serialized with respect to the other (since neither mutex's lock() call will ever block)Resurrectionism
So to further clarify, there is nothing in the mutex implementation that says which data is protected by which mutex. It's all by convention. If the programmer decides that a certain variable is protected by a certain mutex, it's the responsibility of the programmer to ensure that said mutex is always locked before accessing the variable. And if the programmer forgets to do this, the compiler won't spit out any error message. The program will appear to work just fine most of the time.Starter
@JeremyFriesner: I added a paragraph at the end describing how a mutex does work when used correctly to protect accesses to shared data. This seemed like a good answer for it, since none of the others mentioned "critical section" or "mutual exclusion", the concepts behind using a mutex correctly.Abiogenesis
@JeremyFriesner: I realized I had more to say but it was getting too big for an edit to someone else's answer. I ended up writing my own answer to explain the ideas behind locking, starting with what I added here. So now it's duplicated; I can roll it back if you like, or you can yourself, or trim it down, or even cite my answer if you want to do that. Or we can just leave it, I think that's fine and possibly best.Abiogenesis

When I asked this question I was confused...

When I originally asked this question, I was confused because I had no conceptual understanding of how a "mutex" functions in hardware, whereas I did have a conceptual understanding of many other things that exist in hardware. (For example, how a compiler converts text into machine-readable instructions; how cache and memory work; how graphics or coprocessors work; how network hardware and interfaces work, etc.)

Misconception 1: A mutex does not lock memory locations

When I first heard about mutexes, long before writing this question, I misunderstood a mutex to be a feature which locks regions of memory. (That region might be global.)

This is not what happens. Other threads and processes can continue to access main memory and cache while another thread holds a mutex locked. You can see immediately why such a design would be inefficient, since it would block all other system processes for the sake of synchronizing one.

Misconception 2: The scope in which a mutex object is declared is irrelevant

The context here is C, and C-like languages where scoped blocks are defined by { and }; however, the same logic could apply to Python, where scope is defined by indentation.

I believe that this misunderstanding came from the existence of scoped_lock objects and similar concepts, where scope is used to manage the lifetime of a lock on a mutex object (locking on construction, unlocking on destruction).

One could also argue that since pointers and references to a Mutex can be passed around a program, the scope of a Mutex couldn't be used to define what variables are "locked" by a mutex.

For example, I misunderstood the following snippet:

{
    int x, y, z;
    std::mutex m;
    m.lock();
}

I believed that the above snippet would lock access to variables x, y and z from all other threads because x, y and z are declared in the same scope as the mutex m. This is also not how a mutex works.

Understanding 1: A mutex is typically implemented using atomic operations provided by the hardware

Atomic operations are completely separate from the concept of a mutex; however, they are a prerequisite to understanding how a mutex can exist and how it can work.

When a CPU executes something like c = a + b, this involves a sequence of individual (atomic) operations. The word atom is derived from atomos, meaning "indivisible" or "fundamental". (Atoms are divisible, but when theorists of Ancient Greece originally conceived of the objects from which matter was composed, they assumed that particles must be divisible down to some fundamental smallest possible component, which itself is indivisible. They were not too far wrong, since an atom is made from other fundamental particles which, so far as we understand, are indivisible.)

Returning to the point: c = a + b is something like the following:

  • load a from memory into register 1
  • load b from memory into register 2
  • do operation add: add contents of register 2 to register 1, result is in register 1
  • save register 1 to memory c

The add operation might take several clock cycles, and loading from or storing to main memory typically takes on the order of 100 clock cycles on modern x86 machines (much less if the data is in cache). However, each operation is atomic in the sense that a single CPU instruction is being completed, and this instruction cannot be divided into any smaller sequence of smaller instructions. The instructions are themselves fundamental computing operations.

With that understood, there exists a set of atomic instructions which can do things such as:

  • load a value from memory, increment it, and save it back to memory
  • load a value from memory, decrement it, and save it back to memory
  • load a value from memory, compare it to a value which is already loaded into a register, and branch depending on the comparison result

Note that such operations are typically significantly slower than their non-atomic sequence counterparts. This is because optimizations such as pipelining are forfeited when executing the above instructions. (I think?)

At this point my knowledge becomes a bit less accurate and more hand-wavey, but as far as I understand, these operations are typically implemented by having some digital logic inside the processor which blocks all other processes from running while these atomic operations (listed above) are executing.

Meaning: if there are 8 CPU cores running and one core encounters an instruction like the above, it signals the other cores to stop running until it has finished that atomic operation. (It is at least something approximately along these lines.)
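As a side note (my addition, not part of the original answer), in C++ these hardware read-modify-write operations are exposed through std::atomic; a minimal sketch:

#include <atomic>

std::atomic<int> event_counter{0};

void on_event()
{
    // typically compiles to a single atomic read-modify-write instruction
    // (e.g. lock add / lock xadd on x86), not a separate load, add and store
    event_counter.fetch_add(1, std::memory_order_relaxed);
}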

Understanding 2: Actual mutex operation

Given the above, it is possible to implement a mutex using these atomic machine instructions. Other answers posted here suggest possible ways of doing it, including something similar to reference counting (a semaphore).

How an actual mutex in C++ works is this:

  • Each mutex object has a variable in memory associated with it; the value of this variable indicates whether the mutex is locked or not
  • This mutex variable is updated using the special atomic operations that a CPU supports for the purpose of allowing a mutex to be programmed
  • Elsewhere in memory there are some other variables/data which you want to protect/synchronize access to
  • This synchronization is done using the mutex variable/data
  • Before a thread reads/writes some data/variable which needs to be accessed mutually exclusively by all threads which operate on it, that thread must first "lock" the special mutex data/variable
  • This is done using the atomic operations built into the CPU for the purpose of supporting mutex programming

So you see, the data which is "locked" and accessed mutually exclusively is entirely independent of the actual data used to store the state of the mutex.

  • If another thread wants to read/write the data which must be accessed mutually exclusively, it will try to lock the mutex. If the mutex is already locked, that means another thread currently has the right to access this data and no other thread is permitted to; therefore this thread will typically go to sleep, and will be re-woken by the operating system when the mutex is next unlocked.
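To make the idea concrete, here is a deliberately naive spin-lock sketch in C++ (my addition, not part of the original answer; it is not how std::mutex is actually implemented). It shows that the lock state is just ordinary data updated with an atomic operation, entirely separate from whatever data the lock protects:

#include <atomic>

class SpinLock {
    std::atomic<bool> locked{false}; // the "mutex variable": true means locked

public:
    void lock()
    {
        // atomically set `locked` to true; if it was already true,
        // another thread holds the lock, so keep retrying (spinning)
        while (locked.exchange(true, std::memory_order_acquire)) {
            // a real mutex would ask the OS to put this thread to sleep here
        }
    }

    void unlock()
    {
        locked.store(false, std::memory_order_release);
    }
};

A real mutex adds a sleep/wake mechanism (for example a futex on Linux) instead of burning CPU while waiting, as described next.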

It is important to note the operating system thread (kernel) is critically involved in the mutex process. Typically, before a thread sleeps, it will tell the operating system that it wishes to be woken up again when the mutex is free. The operating system is also notified when other threads lock or unlock a mutex. Hence, information about the state of a mutex is synchronized via messages through the operating system kernel.

This is why writing a multithreaded OS kernel is (probably) impossible (if not very difficult). I don't know if this has actually been done successfully. It sounds like a difficult problem which might be the subject of current CS research.

This is pretty much everything I know about the subject. Obviously my knowledge is not absolute...

Note: Feel free to correct my Greek history or x86 Machine Instruction knowledge in the comments section. No doubt not everything here is perfectly accurate.

Elurd answered 24/12, 2021 at 16:4 Comment(7)
atomic in the sense that a single CPU instruction is being completed - yikes. No. That gives atomicity wrt. interrupts, which is relevant for a uniprocessor machine or for signal handlers accessing the same data as the main thread they're running in. But not for code running on separate cores. e.g. x86 add [mem], eax is a non-atomic RMW, and even the load and store uops it decodes to won't be atomic if they span a cache-line boundary. See Atomicity on x86 and x86 load/store atomicity rulesAbiogenesis
We don't normally talk about atomicity of ALU operations, or instructions like add eax, ecx. Mostly because registers are thread-private, only observable after an interrupt (by kernel code, which stores reg values where other processes can read them with system calls like Linux ptrace which debuggers use, or of course by kernel code directly if it wanted). Or by eventual stores of those values to shared memory. Anyway, you need lock add [mem], eax to make it a single atomic RMW. See Can num++ be atomic for 'int num'?Abiogenesis
if one core encounters an instruction like the above, it signals the other cores to stop running until it has finished that atomic operation. - No, the effect is as-if the core had asserted a LOCK# signal that stopped other cores from accessing memory at all for the duration of the RMW, but actually it just delays responding to MESI cache-coherency requests to share that cache line, between the load and store sides of an atomic RMW. Separate cores can be executing atomic RMWs on separate cache lines without hurting each other's throughput at all, only about 20 clocks for L1d hit xchg.Abiogenesis
It is important to note the operating system thread (kernel) is critically involved in the mutex process. - Yes, only if there's contention that leads to a sleep. A good mutex doesn't make any system calls in the un-contended case, or when unlocking with no waiters. (Linux / glibc pthread_mutex is like that, otherwise yeah it uses futex to assist the sleep/wake process.) preshing.com/20111124/always-use-a-lightweight-mutex explains that some Windows mutexes suck, but not all.Abiogenesis
This is why writing a multithreaded OS kernel is (probably) impossible (if not very difficult) - "the kernel" already runs on all cores of your CPU. If a lock is already taken, instead of making a yield system call, it just directly calls schedule(). If there are places where a kernel must not block, like an interrupt handler, that code must not try to take a lock that might not be free.Abiogenesis
See also a simple spinlock without a sleep fallback for a concrete example of how simple a mutex can be. A real implementation would add at least a yield() system call after spinning a few times, although yes, for lower wakeup latency you'd want to design something that would let an unlocking thread detect that there were sleeping waiters and it should call futex(&mutex, FUTEX_WAKE_ONE) or something after unlocking.Abiogenesis
The first part of this answer is good, and the later parts are mostly not far off conceptually, at least in terms of the implications for actually using mutexes in user-space.Abiogenesis

As your question suggests, I assume you are asking it independently of any particular programming language.

First, it is important to understand what a mutex is and how it works. A mutex is a binary semaphore. Then what is a semaphore? A semaphore is an integer with the following attributes:

  • You can initialize it to any permitted value (for a mutex, 1 or 0).
  • A thread can access the semaphore and increment or decrement its integer value.
  • When a thread decrements it, if the result is positive or zero, that thread can continue its process. If the result is negative, that thread will wait, and the semaphore value will not be further decremented by any later thread.
  • When a thread increments it (so the semaphore value becomes either positive or 0), if the result is 0, one of the waiting threads can continue execution.

So when a thread is trying to access a shared resource, it will decrement the mutex value (from 1 to 0; any other thread that then tries to decrement it goes negative and waits). And when it finishes, it will increment the mutex value (so that a waiting thread can continue). That's how access control happens by means of a mutex (a binary semaphore).
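As an illustration (my addition, not part of the original answer), C++20 provides std::binary_semaphore directly, and using it as a mutex looks like this:

#include <semaphore>
#include <thread>

std::binary_semaphore semaphore{1}; // 1 = resource free (unlocked)
int shared_value = 0;

void worker()
{
    semaphore.acquire(); // "decrement": blocks if the value is already 0
    ++shared_value;      // critical section
    semaphore.release(); // "increment": lets one waiting thread continue
}

int main()
{
    std::thread t1(worker);
    std::thread t2(worker);
    t1.join();
    t2.join();
}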

I think you can now see that your question is not really applicable here. The simple answer to

So my question has become simply "are mutexes global?"

is simply NO.

Coarsen answered 13/6, 2016 at 14:52 Comment(0)

A mutex has whatever scope you give it. It can be global or local, again based on where and how you declare it. If, for example, you declare a mutex in global memory in a place where you can access it globally, then it is indeed global. If instead you declare it at function scope or private class scope, then only that function or class will have access to it.

That said, in order to be useful for synchronization, the mutex needs to be declared in a scope that can be accessed by the threads needing to synchronize on it. Whether that's at global scope or some local scope depends on your program structure. I'd advise declaring it in the narrowest scope that is still accessible to all the threads that need it, and no wider.
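For example (a sketch of mine, not from the original answer), a mutex declared as a private class member is only reachable, and therefore only lockable, through that class's member functions:

#include <mutex>

class Counter {
    std::mutex m;    // only this class's member functions can lock it
    int value = 0;   // the data the mutex is meant to protect

public:
    void increment()
    {
        std::lock_guard<std::mutex> guard(m);
        ++value;
    }

    int get()
    {
        std::lock_guard<std::mutex> guard(m);
        return value;
    }
};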

In your particular example, the mutex is indeed global because you've declared it in global memory.

Thrasher answered 13/6, 2016 at 17:7 Comment(0)

Locking doesn't operate on the variables it protects; it just works by giving threads a way to arrange that only one thread at a time will be doing something (like reading+writing a data structure), and that it will be finished, with memory effects visible, before the next thread's turn to read and maybe modify that data. (A readers+writers lock allows multiple readers but only one writer.)
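On the readers+writers case, here is a minimal C++17 sketch (my addition, not part of the original answer) using std::shared_mutex:

#include <shared_mutex>
#include <string>

std::shared_mutex rw_mutex;
std::string shared_config;

std::string read_config()
{
    std::shared_lock lock(rw_mutex); // many readers may hold this at once
    return shared_config;
}

void update_config(const std::string& value)
{
    std::unique_lock lock(rw_mutex); // a writer gets exclusive access
    shared_config = value;
}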

Any thread that can access the mutex object can lock / unlock it. The mutex object itself is a normal variable that you can put in any scope you want, even a local variable and then put a pointer to it somewhere that other threads can see. (Although normally you wouldn't do that.)

Mutex is named for "Mutual Exclusion" - using one correctly ensures that only one thread at a time will ever be executing any "critical section" (wikipedia) protected by the same mutex. Separate mutexes can allow different threads to hold different locks. Different functions or blocks that use the same mutex (normally because they access the same data) won't both run at once.

If there are some variables you only ever access inside critical sections protected by the same mutex, those accesses won't be a data race, and if you don't have other bugs, your code is thread-safe. No matter whether they're global, static, or pointed to by different variables in different threads, or reachable in any other way two threads might have a reference to the same object.


If you write code that accesses shared data without taking a lock on a mutex, it might see a partially-updated value, especially for a struct with multiple pointers / integers. (And in C++, simultaneous accesses to non-atomic variables are undefined behaviour if they're not all reads.)

Locking is a cooperative activity, normally nothing stops you from getting it wrong. If you're familiar with file locking, you may have heard of advisory vs. mandatory locks (the OS will deny open calls by other programs). Mutexes in multi-threaded programs are advisory; no memory protection or other hardware mechanism stops another thread from executing code that accesses the bytes of an object.

(At a low enough level, that's actually useful for lock-free atomics, especially with some control over ordering of those operations from memory barriers and/or release-store / acquire-load. And CPU cache hardware is up to the task of maintaining coherency from multiple accesses. But if you use locking, you don't have to worry about any of that. If you use locking incorrectly, understanding the possible symptoms might help identify that there is a locking problem.)

Some programs have phases where only a single thread is running, or only one that would need to touch certain variables, so enforced locking for every access to a variable isn't something that every language provides. (C++ std::atomic<T> is sort of like that; every access is as-if there was a lock/unlock of a lock protecting just that T object, except it's limited to operations that most CPUs can do without needing to lock/unlock a separate lock. Unless you use a large T; then there actually is a lock. Or if you use a memory order weaker than the default seq_cst, you can see orderings that wouldn't have been possible if all accesses were acquiring/releasing locks.)

Besides, consistency between multiple variables is often important, so it matters that you hold one lock across multiple operations on multiple variables, or multiple members of the same struct.
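A small illustration of that point (my example, not part of the original answer): keeping two related members consistent means holding one lock across both updates:

#include <mutex>

struct Account {
    std::mutex m;
    long balance = 0;
    long transaction_count = 0;
};

void deposit(Account& account, long amount)
{
    std::lock_guard<std::mutex> guard(account.m);
    // both updates happen under the same lock, so no other thread
    // (also locking account.m) can observe a balance that is out of
    // step with the transaction count
    account.balance += amount;
    ++account.transaction_count;
}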

Some tools can help detect code that doesn't respect a mutex while other threads are running, though, like clang -fsanitize=thread.

Abiogenesis answered 15/11, 2022 at 2:51 Comment(0)
