What advantage does the new feature, "synchronized" block, in C++ provide?

There's a new experimental feature (part of the Transactional Memory TS, which may make it into a future standard), the "synchronized block". The block provides a global lock on a section of code. The following is an example from cppreference.

#include <iostream>
#include <vector>
#include <thread>
int f()
{
    static int i = 0;
    synchronized {
        std::cout << i << " -> ";
        ++i;       
        std::cout << i << '\n';
        return i; 
    }
}
int main()
{
    std::vector<std::thread> v(10);
    for(auto& t: v)
        t = std::thread([]{ for(int n = 0; n < 10; ++n) f(); });
    for(auto& t: v)
        t.join();
}

I feel it's superfluous. Is there any difference between the synchronized block above and this one:

#include <mutex>

std::mutex m;
int f()
{
    static int i = 0;
    std::lock_guard<std::mutex> lg(m);
    std::cout << i << " -> ";
    ++i;
    std::cout << i << '\n';
    return i;
}

The only advantage I see here is that I'm saved the trouble of declaring a global mutex. Are there any other advantages to using a synchronized block? When should it be preferred?

Achromatic answered 3/8, 2017 at 14:28 Comment(11)
Not sure if this is actually the case, but cppreference makes it sound like the first version is guaranteed to print in order, while AFAIK the second version is not. – Chancellery
@Chancellery Why wouldn't the first version print in order? Could you explain? As far as I understand, the whole block of code is executed by one thread at a time, including the printing, which keeps everything in order. This is also the case for the mutex. – Achromatic
"Although synchronized blocks execute as-if under a global lock, the implementations are expected to examine the code within each block and use optimistic concurrency (backed up by hardware transactional memory where available) for transaction-safe code and minimal locking for non-transaction safe code." – Oddment
@Oddment Can't optimizations do that even when a user asks for a mutex lock? – Achromatic
@TheQuantumPhysicist std::lock_guard and std::mutex are not part of the C++ language: they're just classes defined in a library. In particular, the compiler has no way of knowing what a mutex means, how mutex operations interact with the operating system, or what effect they have on threads. A synchronized keyword would be very different in that respect. – Nievesniflheim
No, optimizations cannot remove an explicit lock; locking has visible side effects. – Merbromin
I see. Thanks for explaining. Please post these as answers. – Achromatic
I wonder whether it will also have any influence on visibility guarantees (i.e. acting like a fence with regard to the memory model). But this is subtle, and I guess it may eventually be implementation-dependent. (Besides that, as a Java guy, I had to smirk a bit when I saw this. Sure, they didn't get it all right on the first try. But at least they made that first try 25 years ago...) – Micromillimeter
@TheQuantumPhysicist Glad to know another physics enthusiast (if that's what your pseudonym conveys) deeply involved in C++ programming. – Accusative
@SeshadriR Actually, I'm a physicist, but I now do software development professionally. Check my profile for more info. Cheers! – Achromatic
@SolomonSlow The standard library is an integral part of the language, and the compiler is required to understand what standard library functions do and treat them as language primitives if that's what it takes to implement the standard correctly. In particular, threads are pretty much a core language concept. The part of the standard that deals with the core language talks about threads all the time. The compiler absolutely 100% must understand threads; otherwise it isn't worth much. – Fenderson

On the face of it, the synchronized keyword is functionally similar to std::mutex, but by introducing a new keyword and associated semantics (such as the block enclosing the synchronized region) it makes it much easier to optimize these regions for transactional memory.

In particular, std::mutex and friends are in principle more or less opaque to the compiler, while synchronized has explicit semantics. The compiler can't be sure what the standard library std::mutex does and would have a hard time transforming it to use TM. A C++ compiler would be expected to work correctly when the standard library implementation of std::mutex is changed, and so can't make many assumptions about the behavior.

In addition, without the explicit scope provided by the block that synchronized requires, it is hard for the compiler to reason about the extent of the critical region. It seems easy in simple cases, such as a single scoped lock_guard, but there are plenty of complex cases, such as when the lock escapes the function, at which point the compiler never really knows where it could be unlocked (see the sketch below).
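
That last point can be illustrated with a small sketch (not from the original answer; the function names are invented for the example): once a lock escapes the function that takes it, the compiler cannot tell, at the locking site, where the critical region will end.

#include <mutex>

std::mutex m;

// The returned lock keeps the mutex held; where it is unlocked depends
// entirely on what the caller (or the caller's caller) does with it.
std::unique_lock<std::mutex> start_critical_section() {
    return std::unique_lock<std::mutex>(m);   // still locked on return
}

void caller() {
    auto lk = start_critical_section();
    // ... arbitrary code running under the lock ...
}   // unlocked only here, in a different function than where it was taken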

Sidonia answered 3/8, 2017 at 18:29 Comment(0)

Locks do not compose well in general. Consider:

//
// includes and using, omitted to simplify the example
//
void move_money_from(Cash amount, BankAccount &a, BankAccount &b) {
   //
   // suppose a mutex m within BankAccount, exposed as public
   // for the sake of simplicity
   //
   lock_guard<mutex> lckA { a.m };
   lock_guard<mutex> lckB { b.m };
   // oversimplified transaction, obviously
   if (a.withdraw(amount))
      b.deposit(amount);
}

int main() {
   BankAccount acc0{/* ... */};
   BankAccount acc1{/* ... */};
   thread th0 { [&] {
      // ...
      move_money_from(Cash{ 10'000 }, acc0, acc1);
      // ...
   } };
   thread th1 { [&] {
      // ...
      move_money_from(Cash{ 5'000 }, acc1, acc0);
      // ...
   } };
   // ...
   th0.join();
   th1.join();
}

In this case, th0, moving money from acc0 to acc1, tries to take acc0.m first and acc1.m second, whereas th1, moving money from acc1 to acc0, tries to take acc1.m first and acc0.m second; the two threads can deadlock.

This example is oversimplified and could be fixed by using std::lock() or C++17's variadic std::scoped_lock (see the sketch below), but think of the general case where one is using third-party software, not knowing where locks are taken or released. In real-life situations, synchronization through locks gets tricky really fast.
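
For completeness, here is a minimal sketch of that fix under the same assumptions as the example above (a public mutex m inside BankAccount; includes omitted as before). C++17's std::scoped_lock acquires both mutexes with a deadlock-avoidance algorithm, so the order in which the caller names the accounts no longer matters:

void move_money_from(Cash amount, BankAccount &a, BankAccount &b) {
   // acquires a.m and b.m together, deadlock-free (C++17)
   std::scoped_lock lck{ a.m, b.m };
   // oversimplified transaction, obviously
   if (a.withdraw(amount))
      b.deposit(amount);
}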

The transactional memory features aim to offer synchronization that composes better than locks; it's an optimization feature of sorts, depending on context, but it's also a safety feature. Rewriting move_money_from() as follows:

void move_money_from(Cash amount, BankAccount &a, BankAccount &b) {
   synchronized {
      // oversimplified transaction, obviously
      if (a.withdraw(amount))
         b.deposit(amount);
   }
}

... one gets the benefits of the transaction being done as a whole or not at all, without burdening BankAccount with a mutex and without risking deadlocks due to conflicting requests from user code.

Leonerd answered 6/8, 2017 at 16:59 Comment(0)

I still think that mutexes and locks are better in many situations because of their flexibility.

For example, you can make a lock an rvalue (a temporary) so that it exists only for the duration of a single expression, greatly diminishing the possibility of deadlock (see the sketch below).
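
A minimal sketch of that idea, with invented names: the lock is returned by value and used as a temporary, so it is held only until the end of the full expression that uses it.

#include <mutex>

std::mutex m;
int counter = 0;

// Returns the lock by value; callers receive it as an rvalue.
std::unique_lock<std::mutex> locked() {
    return std::unique_lock<std::mutex>(m);
}

int bump() {
    // The temporary lock lives until the end of this full expression,
    // so ++counter runs under the lock and the mutex is released at
    // the semicolon.
    return (locked(), ++counter);
}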

You can also retrofit thread safety onto classes that lack a member mutex by using a "locking smart pointer" that holds the mutex and locks only while the referent is being accessed through it (see the sketch below).
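
A minimal sketch of such a "locking smart pointer", again with invented names: operator-> hands out a proxy that locks on construction and unlocks when the proxy, a temporary, dies at the end of the full expression.

#include <mutex>
#include <utility>

template <class T>
class Synchronized {
    T value_;
    std::mutex m_;

    // Proxy returned by operator->: locks on construction, unlocks on
    // destruction at the end of the full expression.
    class Locked {
        T* p_;
        std::unique_lock<std::mutex> lk_;
    public:
        Locked(T* p, std::mutex& m) : p_(p), lk_(m) {}
        T* operator->() const { return p_; }
    };

public:
    template <class... Args>
    explicit Synchronized(Args&&... args)
        : value_(std::forward<Args>(args)...) {}

    Locked operator->() { return Locked(&value_, m_); }
};

// Usage sketch: each -> call locks only around that one member call.
//   Synchronized<std::vector<int>> v;
//   v->push_back(42);   // mutex held for the duration of push_back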

Something like the synchronized keyword has existed for a long time on Windows in the form of CRITICAL_SECTION. It's been decades since I worked on Windows, so I don't know whether that is still a thing.

Unknown answered 3/8, 2023 at 19:32 Comment(0)
