According to cppreference, most uses of the volatile keyword are to be deprecated in C++20. What is the disadvantage of volatile? And what is the alternative solution when not using volatile?
Why is volatile deprecated in C++20?
There's a good talk by the C++ committee language evolution chair on why.
Brief summary: the places volatile is being removed from didn't have any well-defined meaning in the standard and just caused confusion.
Motivating (Ambiguous) Examples
- Volatile bit-fields should be specified by your hardware manual and/or compiler.
- Is += a single/atomic instruction? How about ++? (See the sketch after this list.)
- How many reads/writes are needed for compare_exchange? What if it fails?
- What does void foo(int volatile n) mean? Or int volatile foo()?
- Should *vp; do a load? (This has changed twice in the standard.)
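To make the second bullet concrete (a minimal sketch of mine, not from the talk; hw_reg and counter are made-up names): the standard never said how many machine operations a compound assignment on a volatile object maps to, whereas std::atomic lets you state the intent exactly.

```cpp
#include <atomic>

volatile int hw_reg = 0;        // stand-in for a memory-mapped register
std::atomic<int> counter{0};    // stand-in for a shared counter

void ambiguous() {
    hw_reg += 1;  // one RMW instruction, or a load plus a store? unspecified,
                  // which is why this form is deprecated in C++20
    hw_reg++;     // same question; also deprecated for volatile operands
}

void explicit_intent() {
    int tmp = hw_reg;     // exactly one volatile load...
    hw_reg = tmp + 1;     // ...and exactly one volatile store
    counter.fetch_add(1, std::memory_order_relaxed);  // guaranteed atomic RMW
}
```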
Threading
Historically, people have used volatile to achieve thread safety in C and C++. In C++11, non-UB ways to create synchronization and shared state between threads were added. I recommend Back to Basics: Concurrency as a good introduction.
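For instance (my own minimal sketch, not from the cited talk; the names stop and worker are made up): a flag shared between threads should be std::atomic, not volatile, because only the former defines the inter-thread ordering and avoids a data race.

```cpp
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> stop{false};   // shared state between threads: atomic, not volatile

void worker() {
    while (!stop.load(std::memory_order_acquire)) {
        // ... do some work ...
    }
}

int main() {
    std::thread t(worker);
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    stop.store(true, std::memory_order_release);  // visible to the worker thread
    t.join();
}
```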
I can't seem to figure out if they are deprecating volatile-qualified methods. Writing "volatile correct" code along the lines of drdobbs.com/cpp/volatile-the-multithreaded-programmers-b/… is quite a good model. I'm hoping their intent isn't to break this model. –
Juliettejulina
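For readers who don't want to chase the link, the Dr Dobbs model being referred to looks roughly like this (my paraphrase; Gadget, ThreadSafeOp, m_mutex, and m_state are hypothetical names): shared objects are declared volatile, thread-safe operations are volatile-qualified member functions, and the volatility is cast away once a lock is held.

```cpp
#include <mutex>

class Gadget {
public:
    void ThreadSafeOp() volatile {
        // Only volatile-qualified members are callable on a volatile Gadget.
        Gadget* self = const_cast<Gadget*>(this);      // drop volatile under the lock
        std::lock_guard<std::mutex> guard(self->m_mutex);
        ++self->m_state;
    }
private:
    std::mutex m_mutex;
    int m_state = 0;
};

volatile Gadget sharedGadget;  // "shared between threads" in this model
```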
In his discussion at the end he mentions that they won't be removing volatile-qualified member functions yet because there are still uses for them. The goal is for there to be a better way to describe what "volatile correctness" is trying to do. youtu.be/KJW_DLaVXIY?t=2714 Also, if there's a specific use of this needed for a hardware platform, I believe the compiler is allowed to implement the feature anyway as behavior that is undefined by the standard. The short of it, though, is that volatile isn't for threading. –
Rhetor
Well, volatile is deprecated for marking scalars for multithreaded access (you mean). As the Dr Dobbs article shows, it is a good tool for multithreaded programming paradigms at the object level. This leads to some compiler warnings that then have to be worked around, as I just noticed. –
Juliettejulina
And not exactly as I just said. It is deprecated to use predecrement and preincrement on a volatile int (for example), but sampling the volatile - i.e. returning its value as a non-volatile - doesn't result in a compiler deprecation warning. –
Juliettejulina
The intent is to force usage of std::atomic<int>, which will then support predecrement and preincrement correctly - i.e. by using atomic instructions. However, when writing generic code for both multithreaded and non-multithreaded usage it results in const_cast<> workarounds where the template implementor must cast volatile int (in the non-multithreaded compilation) to int to avoid the deprecation warning. So, annoying and resulting in extra work. –
Juliettejulina
I guess the final question would then be: does std::atomic<> detect when it is not being compiled in a multithreaded environment and, in that case, just use simple increments and decrements instead of atomic operations (which can be noticeably less performant)? –
Juliettejulina
And my guess is "no" is the answer to that question - and I probably wouldn't want the answer to be "yes". Though we did notice that performance was brought down by about 10% when we used atomic increments and decrements on our reference counted objects in single-threaded mode. 10% on a website with 1000s of server instances matters quite a bit - so we instead implemented apartment-threaded objects to avoid atomic operations as much as we could because it saved tons of $$$$$. –
Juliettejulina
Yes, pre- and post-increment and decrement are different; this is about the reasons, not the actual implemented standards, which are still being worked on. –
Rhetor
I would also assume that std::atomic doesn't check whether threading is enabled; however, that's not a reason to abuse volatile for threading. volatile is/was meant for describing when hardware could interact with your program; it just has the effect of disabling the optimizer, which is not its intended meaning. If optimizing atomics in single-threaded environments is a serious problem, I would ask your compiler implementer for that as a feature. But I'd also wonder why you are using synchronization features in single-threaded code. –
Rhetor
Reference counting in single-threaded code enables object reuse across different algorithms acting simultaneously. I worked on a dotcom's flight search engine - imagine the most complex software application you've ever imagined... I certainly had no idea what might be able to happen in 30 seconds on a flight search server running superfast C++. Everything was (is) specially engineered to run as fast as possible. –
Juliettejulina
Just compiling the application with the multithreaded static MSVC lib added about 10% to the execution time (more or less). Once again, when you have 1000s of server instances running simultaneously on the site, you care about 10%. –
Juliettejulina
Of course after working on it for 15 years straight I understood things a lot better. –
Juliettejulina
Yep, I hadn't thought of shared_ptr using std::atomic under the hood; but yeah, that's a reasonable use-case. Again, I would complain to the compiler or library implementer about that. Using volatile will work but is a hack. –
Rhetor
The other solution is to use C++20 requires() constraints to flavor the methods appropriately - which is a totally fine solution. Constraints and concepts are freaking unbelievable. Finally C++ has arrived. Yeah, I am just designing my own reference counted object and I don't work at that dotcom anymore. –
Juliettejulina
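One way the requires()-based flavoring could look (my sketch under stated assumptions: _TyRef is the counter type, std::atomic<int> in multithreaded builds and plain int otherwise; RefCountBase and m_nRef are made-up names):

```cpp
#include <atomic>
#include <type_traits>

template <class _TyRef>   // std::atomic<int> for MT builds, int for ST builds
class RefCountBase {
public:
    // Multithreaded flavor: real atomic increment.
    void AddRef() const volatile noexcept
        requires std::is_same_v<_TyRef, std::atomic<int>>
    {
        m_nRef.fetch_add(1, std::memory_order_relaxed);
    }

    // Single-threaded flavor: plain increment; the const_cast drops the
    // volatility that the volatile-qualified method imposes on the member.
    void AddRef() const volatile noexcept
        requires (!std::is_same_v<_TyRef, std::atomic<int>>)
    {
        ++const_cast<_TyRef&>(m_nRef);
    }

private:
    mutable _TyRef m_nRef{0};
};
```

A multithreaded build would instantiate RefCountBase<std::atomic<int>>, a single-threaded build RefCountBase<int>, and overload resolution picks the matching flavor.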
I want a reference counted object that doesn't cost 2 ****ing pointers for each shared_ptr<>... and I don't want to commit to boost (assuming boost has one with one pointer). –
Juliettejulina
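For context on the "one pointer per handle" wish (a generic sketch of mine, not the commenter's class; boost::intrusive_ptr has the same shape): an intrusive smart pointer keeps the count inside the pointee, so the handle itself is a single raw pointer.

```cpp
#include <utility>

// Minimal intrusive pointer: the handle stores only one pointer; the pointee
// is expected to provide AddRef()/Release() that manage its own count.
template <class T>
class IntrusivePtr {
public:
    explicit IntrusivePtr(T* p = nullptr) : m_p(p) { if (m_p) m_p->AddRef(); }
    IntrusivePtr(const IntrusivePtr& other) : m_p(other.m_p) { if (m_p) m_p->AddRef(); }
    IntrusivePtr& operator=(IntrusivePtr other) noexcept {  // copy-and-swap
        std::swap(m_p, other.m_p);
        return *this;
    }
    ~IntrusivePtr() { if (m_p) m_p->Release(); }

    T* operator->() const { return m_p; }
    T& operator*() const { return *m_p; }

private:
    T* m_p;  // sizeof(IntrusivePtr<T>) == sizeof(T*), unlike std::shared_ptr
};
```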
Also, I want my single-pointer shared pointer impl to be const and volatile correct - to support pointers to const objects, volatile objects, and const volatile objects, with the appropriate inter-assignability between base and derived objects, etc. - and to use std::atomic<> for multithreaded builds but simple non-atomic increment/decrement for non-multithreaded builds. –
Juliettejulina
This requires (designed correctly) that the AddRef() and Release() methods are both const- and volatile-qualified and that the reference counting integer (or atomic<int>) is declared mutable, since you will want to be able to AddRef() and Release() a cv-qualified object. It is perfectly fine to delete a cv-qualified object. Anyway, this is where I run into the compiler warning: in my cv-qualified AddRef() and Release() methods. Because they are qualified as "volatile", the contained "int" member is treated as "volatile int". And then I predecrement that "volatile int"... –
Juliettejulina
So, I have to const_cast<int&> the "volatile int" that the compiler warned me about. There are holes in the current solution is what I am saying... –
Juliettejulina
If the underlying hardware is "volatile" then you should mark it volatile. If you need to synchronize access to a value within your code you should use something like std::atomic. If a value is both changed by hardware and needs synchronizing you should use both; the ideas are orthogonal. –
Rhetor
Well, like I said: I am using the usage of volatile described in drdobbs.com/cpp/volatile-the-multithreaded-programmers-b/…. This requires that "threadsafe" methods are marked volatile and are thus callable on "threadsafe" volatile objects. I don't want the hardware meaning of it, but because I mark my AddRef and Release methods volatile - because they are correctly volatile according to the "volatile correct programming" described in the article - C++ then forces a volatile int there, even though I don't want it to be volatile, just because the method is volatile. –
Juliettejulina
In other words, I would like a keyword like "mutable" which says: this member object is not volatile even within a volatile-qualified method. –
Juliettejulina
Then I think the deprecation of decrement on volatile scalars (for instance) would be reasonable: you would have a way out - you could mark the scalar "nonvolatile" instead of const_cast<>ing it, which just feels dirty. –
Juliettejulina
I agree, some keyword that means "This is thread safe." would probably be good; the current mixture of volatile, const, and mutable really doesn't capture the idea of "This function is thread safe.", only "This object is thread safe.". Volatile acts close to what is wanted, which I think is why it has become used that way. –
Rhetor
@DavidBien: You can use gcc -Wa,-momit-lock-prefix=yes for x86 to compile single-threaded code that uses std::atomic. That just gets the assembler to ignore the lock prefix by passing it the -momit-lock-prefix=yes option; it won't get the compiler to optimize things into registers, but then add [mem], 1 is just a normal memory-destination increment, not an atomic RMW (and not a memory barrier). This means you don't have to abuse volatile as a single-threaded version of atomic<T>, which I think was why you brought up multi-threading at all in regards to volatile in modern C++. –
Sholes
Thanks @PeterCordes, but that is a compilation band-aid - not a C++ feature/keyword. My code compiles fine and a lock will not be used, but it requires a const_cast<> - which I would rather use than a rather hidden compiler option. But good to know about its existence. –
Juliettejulina
@DavidBien: That's fine; I expect real-world compilers will continue to support non-atomic RMW operations on volatile for some time, e.g. godbolt.org/z/9rzbe6d4d - GCC does with -std=c++20. But it doesn't compile it to a memory-destination add/sub; instead it does a separate load / dec / store even on x86. So that's not even atomic wrt. signals or interrupts on the same core (possible on CISC ISAs, but not RISC load-store machines). If you don't need that, and don't mind the inefficiency volatile introduces with GCC, that's fine. –
Sholes
Yes, exactly. In my case, due to the "threadsafe volatile method paradigm", my method is declared volatile, which then coerces each member to be volatile, which then produces the warning you show and which I use a const_cast<> to avoid:

void _AddRefStrongNoThrow() const volatile noexcept
{
#if IS_MULTITHREADED_BUILD
  ++m_nRefWeak;
  ++m_nRefObj;
#else
  ++const_cast< _TyRef & >( m_nRefWeak );
  ++const_cast< _TyRef & >( m_nRefObj );
#endif
}
–
Juliettejulina
@PeterCordes I.e. this: godbolt.org/z/6aY7jc19d - in which the inefficiencies (and the warning) are gone under x64. –
Juliettejulina
@PeterCordes The point of this discussion is that I can declare a member "mutable", and that allows me to modify it even though I am in a const method. However, there is no C++ keyword such as, say, "nonvolatile" which I could use to declare a member non-volatile even in a volatile method. This is the missing C++ feature that I want, and it would avoid my hack of using const_cast<> instead. –
Juliettejulina
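A minimal illustration of the asymmetry being described (hypothetical names; "nonvolatile" is not a real keyword):

```cpp
struct Counter {
    mutable int hits = 0;          // mutable: writable even from a const method

    void touch() const {
        ++hits;                    // fine: mutable opts the member out of const
    }

    void touchVolatile() const volatile {
        // ++hits;                 // deprecated in C++20: ++ on a volatile int
        ++const_cast<int&>(hits);  // workaround: cast the volatility away
        // A hypothetical "nonvolatile int hits;" would make the cast unnecessary.
    }
};
```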
Just because you're not using threads does not mean you are not using concurrency. Utilizing the ARM NVIC (Nested Vectored Interrupt Controller) will provide you with plenty of concurrency. –
Abecedary
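A sketch of the kind of threadless concurrency meant here (bare-metal style; the handler name and setup are assumptions, not taken from the comment): an interrupt handler and the main loop share a counter. volatile keeps the main loop actually re-reading it, and the write avoids the deprecated ++ on a volatile operand.

```cpp
#include <cstdint>

// Set from an interrupt handler, read in the main loop: no threads involved,
// but still concurrent. volatile forces a fresh load in every loop iteration.
volatile std::uint32_t g_tick_count = 0;

extern "C" void SysTick_Handler() {    // hypothetical Cortex-M style ISR name
    g_tick_count = g_tick_count + 1;   // single writer: the ISR
}

int main() {
    std::uint32_t last = 0;
    for (;;) {
        std::uint32_t now = g_tick_count;  // volatile load each iteration
        if (now != last) {
            last = now;
            // ... react to the tick ...
        }
    }
}
```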
There are uses of volatile that are NOT deprecated, because they are useful (e.g. in code that directly loads or stores from specified memory locations, such as in device drivers). Quite a few of the "deprecated uses" are related to features that too many programmers use - incorrectly - as a means of making a variable access atomic. The C++ library now (since C++11) provides a correct means of ensuring atomic access of variables, so it makes sense to discourage programmers from incorrectly using volatile when the intent is atomic access. – Myrtamyrtaceous