Within one thread, steady_clock::now() is guaranteed to return monotonically increasing values. How does this interact with memory ordering and reads observed by multiple threads?
#include <atomic>
#include <cassert>
#include <chrono>
#include <thread>
using namespace std;
using namespace std::chrono;

atomic<int> arg{0};
steady_clock::time_point a, b, c, d;
int e;
int main() {
    thread t1([&](){
        a = steady_clock::now();               // timestamp just before the store
        arg.store(1, memory_order_release);
        b = steady_clock::now();               // timestamp just after the store
    });
    thread t2([&](){
        c = steady_clock::now();               // timestamp just before the load
        e = arg.load(memory_order_acquire);
        d = steady_clock::now();               // timestamp just after the load
    });
    t1.join();
    t2.join();
    assert(a <= b);   // steady_clock is monotonic within a thread
    assert(c <= d);
Here's the important bit:
    if (e) {
        assert(a <= d);
    } else {
        assert(c <= b);
    }
}
Can these asserts ever fail? Or have I misunderstood something about acquire-release memory ordering?
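For what it's worth, the experiment is easy to wrap in a repeated-run harness along the lines of the sketch below (the function name and messages are just mine); not observing a failure obviously proves nothing, but it shows the kind of check I mean.

#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
using namespace std;
using namespace std::chrono;

// Run the experiment once; report whether the cross-thread
// timestamp ordering held for the value of e that was observed.
bool one_run() {
    atomic<int> arg{0};
    steady_clock::time_point a, b, c, d;
    int e = 0;
    thread t1([&]{
        a = steady_clock::now();
        arg.store(1, memory_order_release);
        b = steady_clock::now();
    });
    thread t2([&]{
        c = steady_clock::now();
        e = arg.load(memory_order_acquire);
        d = steady_clock::now();
    });
    t1.join();
    t2.join();
    return e ? (a <= d) : (c <= b);
}

int main() {
    for (int i = 0; i < 100000; ++i)
        if (!one_run()) { puts("ordering violated"); return 1; }
    puts("no violation observed (which proves nothing)");
}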
What follows is mostly an explanation and elaboration of my code example.
Thread t1 writes to the atomic arg. It also records the current time before and after the write, in a and b respectively. steady_clock guarantees that a <= b.
Thread t2 reads from the atomic arg and saves the value read in e. It also records the current time before and after the read, in c and d respectively. steady_clock guarantees that c <= d.
Both threads are then joined. At this point e could be 0 or 1.
If e is 0, then t2 read the value before t1 wrote it. Does this also imply that c = now() in t2 happened before b = now() in t1?
If e is 1, then t1 wrote the value before t2 read it. Does this also imply that a = now() in t1 happened before d = now() in t2?
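The guarantee I'm relying on is the ordinary release/acquire message-passing pattern: if the acquire load sees the release store, everything sequenced before the store happens-before everything sequenced after the load. A minimal sketch of that known-good case, with a plain int standing in for the timestamps:

#include <atomic>
#include <cassert>
#include <thread>
using namespace std;

int payload = 0;        // plain, non-atomic data
atomic<int> flag{0};

int main() {
    thread producer([]{
        payload = 42;                           // A: sequenced before the release store
        flag.store(1, memory_order_release);    // B
    });
    thread consumer([]{
        if (flag.load(memory_order_acquire)) {  // C: reading 1 means B synchronizes-with C
            assert(payload == 42);              // D: guaranteed, since A happens-before D
        }
    });
    producer.join();
    consumer.join();
}

What I can't tell is whether that happens-before relation also licenses comparing steady_clock readings taken on two different threads, which is what the conditional asserts above do.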
Here are some existing questions that don't answer what I'm asking:
Is there any std::chrono thread safety guarantee even with multicore context?
I'm not asking whether now() is thread-safe. I know it is.
Is steady_clock monotonic across threads?
This one is much closer, but that example uses a mutex. Can I make the same assumptions about memory orderings weaker than seq_cst?
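For concreteness, the mutex-based variant I have in mind would look roughly like the sketch below. Since unlocking a mutex is a release operation and locking it is an acquire operation, my hope is that the reasoning transfers to plain acquire/release atomics, but that is exactly what I'm unsure about.

#include <cassert>
#include <chrono>
#include <mutex>
#include <thread>
using namespace std;
using namespace std::chrono;

int main() {
    mutex m;
    int arg = 0;                                // plain int, protected by m
    steady_clock::time_point a, b, c, d;
    int e = 0;
    thread t1([&]{
        a = steady_clock::now();
        { lock_guard<mutex> lk(m); arg = 1; }   // unlock acts as a release
        b = steady_clock::now();
    });
    thread t2([&]{
        c = steady_clock::now();
        { lock_guard<mutex> lk(m); e = arg; }   // lock acts as an acquire
        d = steady_clock::now();
    });
    t1.join();
    t2.join();
    // The same question as above: can these ever fire?
    if (e) assert(a <= d); else assert(c <= b);
}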