What's the rationale for this? Where would an uninitialized std::atomic instance be useful?
For the same reason that basic "building block" user-defined types should not do more than strictly needed, especially in unavoidable operations like construction.
But I can't think of another std:: class where the default constructor
will leave the object in an undefined state.
That's the case for all classes that don't need to maintain an internal invariant.
There is no expectation in generic code that
T x;
will create a zero-initialized object; but it is expected to create an object in a usable state. For a scalar type, any existing object is usable during its lifetime.
On the other hand, it's expected that
T x = T();
will create an object in a default state, at least for a normal value type in generic code. (This will normally be a "zero value" if the values being represented have such a thing.)
Atomics are very different: they exist in a different "world".
Atomics aren't really about a range of values. They are about providing special guarantees for reads, writes, and read-modify-write operations. Atomics are unlike other data types in many ways: no compound assignment operation on an atomic is ever defined in terms of a normal assignment to that object, so the usual equivalences don't hold for atomics. You can't reason about atomics the way you reason about normal objects.
You simply can't write generic code over atomics and normal objects; it would make no sense whatsoever.
(See footnote.)
Summary
- You can have generic code, but not generic algorithms spanning atomic and non-atomic objects, as their semantics don't belong to the same style of semantic definition (and it isn't even clear how C++ can have both atomic and non-atomic actions).
- "You don't pay for what you don't use."
- No generic code will assume that an uninitialized variable has a value; only that it's in a valid state for assignment and for other operations that don't depend on the previous value (so no compound assignment, obviously).
- Many STL types are not initialized to a "zero" or default value by their default constructor.
[Footnote:
The following is "a rant": a technically important text, but not one needed to understand why the constructor of an atomic object is as it is.
Atomics simply follow different semantic rules, in the deepest possible way: in a way the standard doesn't even describe, as the standard never explains the most basic fact of multithreading: that some parts of the language are evaluated as a sequence of operations making progress, while other parts (atomics, try_lock...) are not. In fact the authors of the standard clearly do not see that distinction and do not understand that duality. (Note that discussing these issues will often get your questions and answers downvoted and deleted.)
This distinction is essential, because without it (and again, it appears nowhere in the standard), exactly zero programs can have well-defined multithreaded behavior: only old-style, pre-thread behavior can be explained without this duality.
The symptom of the C++ committee not getting what C++ is about is the fact that they believe the "no out-of-thin-air values" guarantee is a bonus feature and not an essential part of the semantics (failing to get the "no thin air" guarantee for atomics makes the promise of sequential semantics for sequential programs even more indefensible).
--end note]
Comments:
- atomic<int> i{}; doesn't zero-initialize them either. – Weld
- T x = T(); works. But copying atomics doesn't really make sense, as atomics are not about values but about the behavior of basic operations (and a copy isn't an atomic operation). – Leucine
- "std:: class where the default constructor will leave the object in an undefined state" – std::size_t or std::uint32_t are examples of that. They're typedefs, not classes, but primitive types are a common use case for the T in std::atomic<T>. – Laager