Why does volatile exist?

19

280

What does the volatile keyword do? In C++, what problem does it solve?

In my case, I have never knowingly needed it.

Behead answered 16/9, 2008 at 13:59 Comment(4)
Here is an interesting discussion about volatile with regards to the Singleton pattern: aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdfGraybeard
There is an intriguing technique that makes your compiler detect possible race conditions that relies heavily on the volatile keyword, you can read about it at http://www.ddj.com/cpp/184403766.Sundstrom
This is a nice resource with an example on when volatile can be used effectively, put together in pretty layman terms. Link : publications.gbdirect.co.uk/c_book/chapter8/…Nunes
Related: Why is volatile needed in C?Margherita
332

volatile is needed if you are reading from a spot in memory that, say, a completely separate process/device/whatever may write to.

I used to work with dual-port RAM in a multiprocessor system in straight C. We used a hardware-managed 16-bit value as a semaphore to know when the other side was done. Essentially we did this:

void waitForSemaphore()
{
   volatile uint16_t* semPtr = WELL_KNOWN_SEM_ADDR;/*well known address to my semaphore*/
   while ((*semPtr) != IS_OK_FOR_ME_TO_PROCEED);
}

Without volatile, the optimizer sees the loop as useless (The guy never sets the value! He's nuts, get rid of that code!) and my code would proceed without having acquired the semaphore, causing problems later on.
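
For what it's worth (and as the comments below discuss), if the writer is another thread in the same program rather than a hardware device, C++11 code would normally express this wait with std::atomic instead of volatile. A rough sketch, with IS_OK_FOR_ME_TO_PROCEED standing in for whatever the real protocol value is:

#include <atomic>
#include <cstdint>

constexpr std::uint16_t IS_OK_FOR_ME_TO_PROCEED = 1;   // placeholder for the real protocol value

std::atomic<std::uint16_t> sem{0};   // shared with the other thread

void waitForSemaphore()
{
    // The atomic load cannot be hoisted out of the loop, and acquire ordering
    // also makes data published by a matching release store visible here.
    while (sem.load(std::memory_order_acquire) != IS_OK_FOR_ME_TO_PROCEED)
        ;   // spin
}

For an actual memory-mapped hardware register, the volatile version above remains the right tool.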

Laurenalaurence answered 16/9, 2008 at 14:4 Comment(19)
In this case, what would happen if uint16_t* volatile semPtr was written instead? This should mark the pointer as volatile (instead of the value pointed to), so that checks to the pointer itself, e.g. semPtr == SOME_ADDR may not be optimized. This however implies a volatile pointed value again as well. No?Crompton
@Crompton No, it does not. In practice, what you suggest is likely what will happen. But theoretically, one could end up with a compiler that optimizes access to values because it decided that none of those values are ever changed. And if you meant volatile to apply to the value and not the pointer, you'd be screwed. Again, unlikely, but it's better to err on doing things right, than taking advantage of behavior that happens to work today.Litho
How would this not be a race on *semPtr?Covenantor
@Doug T. A better explanation is thisBarrator
@Litho "it decided that none of those values are ever changed" It decided wrongly. You just described a buggy compiler.Tallula
@BaummitAugen: This answer (and obviously the code) predate C++11. Without std::atomic, your only option was to hack things up yourself with inline asm or library function calls for any necessary barriers. You could wrap the volatile access in a read_once() function or macro like the Linux kernel does, but it would still boil down to this to get the asm you want on any sane implementation where an aligned volatile uint16_t can be read/written atomically. (i.e. on most specific platforms, the actual behaviour you get from this is well-defined.)Harumscarum
@PeterCordes Given C++11 has been around for a while, this answer should be updated to at least point out its legacy nature explicitly, shouldn't it? As it stands, I find this answer somewhat misleading (not intentionally, of course, but times have changed).Covenantor
@BaummitAugen: yes, probably. Although if it was a hardware device modifying the known memory location instead of another thread in the same program, this code would still be 100% appropriate. (Unless this semaphore is controlling access to some other memory, in which case you'd want atomic<uint16_t> or volatile atomic<uint16_t> to do acquire-loads. In this case the compiler couldn't hoist the load because that would make the loop infinite, so volatile isn't needed here. Can and does the compiler optimize out two atomic loads?)Harumscarum
@Tallula it did not decide wrongly. It made the correct deduction based on the information given. If you fail to mark something volatile, the compiler is free to assume that it is not volatile. That's what the compiler does when optimizing code. If there is more information, namely that said data is in fact volatile, it's the responsibility of the programmer to provide that information. What you're claiming to be a buggy compiler is really just bad programming.Litho
@Litho No, it's a compiler that decided to ignore the volatile keyword that is present in the code.Tallula
@Tallula no, just because the volatile keyword appears once doesn't mean everything suddenly becomes volatile. I gave a scenario where the compiler does the right thing and achieves a result that is contrary to what the programmer wrongly expects. Just like the "most vexing parse" is not a sign of a compiler error, neither is the case here.Litho
@Litho Can you show pseudo code that would honor volatile that would give that unwanted result?Tallula
@Tallula check out this question - is this sufficient?Litho
The first sentence of this answer is fundamentally wrong in a very subtle, yet important, way. It claims volatile is needed under some conditions. But this is not so. For example, consider a platform that provides a native type, say atomic_int, that is documented to be suitable for reading from memory that completely separate devices might write to. Certainly volatile would not be needed on that platform. Because this is very often the case, in practice, volatile is only very rarely needed, even when you need this behavior.Clinandrium
@DavidSchwartz - could you please elaborate on this example? What would atomic_int do? In my world, we mainly use "atomic" to describe a series of operations that need to be performed sequentially w/o being interrupted by other operations. [cont...]Northing
[cont...] For example, in a bit-field update of some memory-mapped device registers, one uses a read-modify-write scheme for that address. It is important to protect the access from interruption and the effect of a 2nd thread, or an IRQ handler. So, the access is wrapped with an "atomic" keyword to disable IRQs. I am not sure what your suggested type would do and how it is related to preventing optimization on sequential accesses to the same volatile resource.Northing
@Northing Suppose the platform offers a type called atomic_int that is documented to behave exactly the same as volatile int does. In that case, you would not need to use volatile on that platform since you could use atomic_int. But this answer says volatile is needed. That is wrong. Most platforms offer things like atomic_int that have guaranteed semantics without needing to use volatile.Clinandrium
@DavidSchwartz - so, if I get you correctly, you are suggesting kind of an alias for volatile int, as if there was a typedef volatile int atomic_int, and then say the use of volatile is not necessary? If so, then the same argument could be used to say that if the system provides a type called whole that behaves like int then using int is not necessary???! Also, I think that in my world, this won't be an appropriate use of the word atomic, as described above. Or did I completely miss your point?Northing
@Northing No, you got it. It is incorrect to say that volatile is necessary because other things can provide the guarantees and on every realistic platform, there are in fact other things that provide those guarantees and volatile is not actually used. It's very misleading to say something is "necessary" (in fact, outright wrong) when it is not even the most common solution.Clinandrium
100

volatile is needed when developing embedded systems or device drivers, where you need to read or write a memory-mapped hardware device. The contents of a particular device register could change at any time, so you need the volatile keyword to ensure that such accesses aren't optimised away by the compiler.
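
For illustration, a minimal sketch of such an access (the register address and bit mask are invented; real values come from the device's datasheet):

#include <cstdint>

// Hypothetical memory-mapped status register at a made-up address.
volatile std::uint32_t* const uart_status =
    reinterpret_cast<volatile std::uint32_t*>(0x40001000);

void wait_for_rx_data()
{
    // Each iteration performs a real read of the register; without volatile,
    // the compiler could read it once and spin on that stale value forever.
    while ((*uart_status & 0x01u) == 0) {
        // busy-wait until the hardware sets the "data ready" bit
    }
}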

Phoney answered 16/9, 2008 at 14:1 Comment(3)
This is not only valid for embedded systems but for all device drivers development.Ralston
The only time I ever needed it on an 8bit ISA bus where you read the same address twice - the compiler had a bug and ignored it (early Zortech c++)Heterogamy
Volatile is very rarely adequate for control of external devices. Its semantics is wrong for modern MMIO: you have to make too many objects volatile and it hurts optimization. But modern MMIO behaves like normal memory until a flag is set so volatile should not be needed. Many drivers don't ever use volatile.Tallula
71

Some processors have floating-point registers with more than 64 bits of precision (e.g. 32-bit x86 without SSE; see Peter's comment). That way, if you run several operations on double-precision numbers, you actually get a higher-precision answer than if you were to truncate each intermediate result to 64 bits.

This is usually great, but it means that depending on how the compiler assigned registers and did optimizations you'll have different results for the exact same operations on the exact same inputs. If you need consistency then you can force each operation to go back to memory by using the volatile keyword.

It's also useful for algorithms whose steps make no algebraic sense but reduce floating-point error, such as Kahan summation. Algebraically the compensation step is a no-op, so it will often be incorrectly optimized away unless some intermediate variables are volatile.
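
For example, here is a sketch of Kahan summation with the temporaries marked volatile so the compiler cannot simplify (t - sum) - y to zero; it is a blunt instrument, but it illustrates the point:

#include <cstddef>

double kahan_sum(const double* data, std::size_t n)
{
    double sum = 0.0;
    double c = 0.0;                     // running compensation for lost low-order bits
    for (std::size_t i = 0; i < n; ++i) {
        volatile double y = data[i] - c;
        volatile double t = sum + y;
        c = (t - sum) - y;              // algebraically zero, numerically the lost part
        sum = t;
    }
    return sum;
}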

Valerie answered 16/9, 2008 at 15:23 Comment(14)
When you compute numerical derivatives it is useful too, to make sure x + h - x == h you define hh = x + h - x as volatile so that a proper delta can be computed.Ornamental
+1, indeed in my experience there was a case when floating-point computations produced different results in Debug and Release, so unit tests written for one configuration were failing for another. We solved it by means of declaring one floating-point variable as volatile double instead of just double, so to ensure that it is truncated from FPU precision to 64-bit (RAM) precision before continuing further computations. The results were substantially different because of a further exaggeration of the floating-point error.Fenestra
Your definition of "modern" is a bit off. Only 32-bit x86 code that avoids SSE/SSE2 is affected by this, and it wasn't "modern" even 10 years ago. MIPS / ARM / POWER all have 64-bit hardware registers, and so does x86 with SSE2. C++ implementations x86-64 always use SSE2, and compilers have options like g++ -mfpmath=sse to use it for 32-bit x86 as well. You can use gcc -ffloat-store to force rounding everywhere even when using x87, or you can set the x87 precision to 53-bit mantissa: randomascii.wordpress.com/2012/03/21/….Harumscarum
But still good answer, for obsolete x87 code-gen, you can use volatile to force rounding in a few specific places without losing the benefits everywhere.Harumscarum
@PeterCordes Just because user registers have exactly the precision of a double does not mean internal registers used for intermediate results (that are never named in asm) don't have more precision.Tallula
@curiousguy: on sane ISAs it does mean that. Your idea would lead to a CPU that gives different math results depending on where an interrupt+context switch happened (and forced rounding to architectural register width by save/restore of FP registers). Everyone wants the same machine code with the same input data to give the same bit-exact output every time.Harumscarum
@PeterCordes FPU instr are interruptible?Tallula
@curiousguy: Internal buffers used during the evaluation of one instruction are purely an implementation detail and can't be affected by volatile. If it's a microcoded instruction like x87 fsin then you could in theory have extra precision kept between uops. But no it wouldn't be resumable, if it's interruptible it would abort.Harumscarum
I thought you meant something like MIPS f0 internally keeping more than 64-bit precision across a chain of FP instructions, so volatile to force store/reload would have an effect. Most FPUs (other than 387) only provide the IEEE Basic operations +-/ sqrt, which are required to produce correctly-rounded results (error <= 0.5ulp). So unless you keep extra internal precision between instructions, the results are fully specified. Fun fact: AMD *does keep extra internal data between FP instructions, but not extra precision; probably something like exponent/mantissa unpacking.Harumscarum
@PeterCordes With volatile you can decompose computations and give guaranteed effective type double to intermediate C++ values. Volatile is very reliable for that purpose: double x,y,z,a; volatile double r; r=y*z; a=x+r; (Ppl say that a cast has the same effect: x+(double)(y*z) but that relies on the compiler front end for the conversion to effective double precision of an expression of static type double, which was unreliable on at least one popular compiler.)Tallula
@curiousguy: WTF are you talking about? Earlier you were talking about architectural registers being 64-bit, not 80-bit. Your last comment only makes sense for e.g. 32-bit x86 using x87 where the compiler uses 80-bit x87 for temporaries like y*z. Just like FP_CONTRACT, gcc optimizes across statements by default, not just within expressions with rounding to actual double forced by assignments and casts, even though FLT_EVAL_METHOD = 2 says it should. That would be slow. But again, only a problem with >64-bit registers.Harumscarum
Let us continue this discussion in chat.Tallula
But floating point math is always slightly inaccurate anyway, so I don't understand why it makes a difference. Couldn't the exact same operations not necessarily give the same exact results anyway?Rectory
Or do I confuse inaccurate with inconsistent?Rectory
60

From a "Volatile as a promise" article by Dan Saks:

(...) a volatile object is one whose value might change spontaneously. That is, when you declare an object to be volatile, you're telling the compiler that the object might change state even though no statements in the program appear to change it.

Here are links to three of his articles regarding the volatile keyword:

Gagman answered 16/9, 2008 at 14:32 Comment(0)
23

You MUST use volatile when implementing lock-free data structures. Otherwise the compiler is free to optimize access to the variable, which will change the semantics.

To put it another way, volatile tells the compiler that accesses to this variable must correspond to a physical memory read/write operation.

For example, this is how InterlockedIncrement is declared in the Win32 API:

LONG __cdecl InterlockedIncrement(
  __inout  LONG volatile *Addend
);
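
A minimal Win32 usage sketch (not from the original answer): note that, as a comment below points out, the volatile in the parameter type does not force your own variable to be declared volatile, since a LONG* converts implicitly to LONG volatile*.

#include <windows.h>

// Sketch only: a counter shared between threads, incremented atomically.
LONG g_counter = 0;

void bump()
{
    InterlockedIncrement(&g_counter);   // LONG* converts to LONG volatile*; returns the new value
}
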
Crippen answered 17/9, 2008 at 11:55 Comment(2)
You absolutely do NOT need to declare a variable volatile in order to be able to use InterlockedIncrement.Tallula
This answer is obsolete now that C++11 provides std::atomic<LONG> so you can write lockless code more safely without problems of having pure loads / pure stores optimized away, or reordered, or whatever else.Harumscarum
10

A large application that I used to work on in the early 1990s contained C-based exception handling using setjmp and longjmp. The volatile keyword was necessary on variables whose values needed to be preserved in the block of code that served as the "catch" clause, lest those vars be stored in registers and wiped out by the longjmp.
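
A minimal sketch of that pattern (written as a standalone C++ program here; the same applies to C): without volatile, the value of state after the longjmp would be indeterminate because it may have lived in a register.

#include <csetjmp>
#include <cstdio>

static std::jmp_buf env;

void fail() { std::longjmp(env, 1); }       // plays the role of "throw"

int main()
{
    volatile int state = 0;                 // must survive the longjmp
    if (setjmp(env) == 0) {                 // the "try" block
        state = 1;
        fail();
    } else {                                // the "catch" block
        std::printf("state = %d\n", state); // reliably prints 1 thanks to volatile
    }
}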

Dementia answered 16/9, 2008 at 21:3 Comment(0)
10

In Standard C, one of the places to use volatile is with a signal handler. In fact, in Standard C, all you can safely do in a signal handler is modify a volatile sig_atomic_t variable, or exit quickly. Indeed, AFAIK, it is the only place in Standard C that the use of volatile is required to avoid undefined behaviour.

ISO/IEC 9899:2011 §7.14.1.1 The signal function

¶5 If the signal occurs other than as the result of calling the abort or raise function, the behavior is undefined if the signal handler refers to any object with static or thread storage duration that is not a lock-free atomic object other than by assigning a value to an object declared as volatile sig_atomic_t, or the signal handler calls any function in the standard library other than the abort function, the _Exit function, the quick_exit function, or the signal function with the first argument equal to the signal number corresponding to the signal that caused the invocation of the handler. Furthermore, if such a call to the signal function results in a SIG_ERR return, the value of errno is indeterminate.252)

252) If any signal is generated by an asynchronous signal handler, the behavior is undefined.

That means that in Standard C, you can write:

static volatile sig_atomic_t sig_num = 0;

static void sig_handler(int signum)
{
    signal(signum, sig_handler);
    sig_num = signum;
}

and not much else.

POSIX is a lot more lenient about what you can do in a signal handler, but there are still limitations (and one of the limitations is that the Standard I/O library — printf() et al — cannot be used safely).

Elgar answered 27/7, 2013 at 18:14 Comment(0)
8

Developing for an embedded system, I have a loop that checks a variable that can be changed in an interrupt handler. Without "volatile", the loop becomes a no-op: as far as the compiler can tell, the variable never changes, so it optimizes the check away.

The same thing would apply to a variable that may be changed by a different thread in a more traditional environment, but there we often make synchronization calls, so the compiler is not as free with its optimizations.
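
A minimal sketch of the interrupt case (the handler and flag names are made up):

// Flag written by an interrupt handler, read by the main loop. Without
// volatile the compiler may read it once and turn the loop into while(true).
volatile bool data_ready = false;

void timer_isr()            // imagine this is registered as an interrupt handler
{
    data_ready = true;
}

void wait_for_data()
{
    while (!data_ready) {
        // spin; each iteration re-reads data_ready from memory
    }
}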

Fechter answered 16/9, 2008 at 14:8 Comment(0)
7

I've used it in debug builds when the compiler insists on optimizing away a variable that I want to be able to see as I step through code.

Inflict answered 16/9, 2008 at 21:9 Comment(0)
7

Besides using it as intended, volatile is used in (template) metaprogramming. It can be used to prevent accidental overloading, as the volatile qualifier (like const) takes part in overload resolution.

#include <iostream>
#include <type_traits>

template <typename T>
class Foo {
  std::enable_if_t<sizeof(T) == 4, void> f(T& t)
  { std::cout << 1 << t; }
  void f(T volatile& t)
  { std::cout << 2 << const_cast<T&>(t); }

  void bar() { T t; f(t); }
};

This is legal; both overloads are potentially callable and do almost the same thing. The const_cast in the volatile overload is fine because we know bar never passes an object that is actually volatile. The volatile version is a strictly worse match, though, so it is never chosen in overload resolution if the non-volatile f is available.

Note that the code never actually depends on volatile memory access.

Polysemy answered 17/9, 2008 at 9:30 Comment(2)
Could you please elaborate on this with an example? It'd really help me understand better. Thanks!Mentalism
"The cast in the volatile overload" A cast is an explicit conversion. It's a SYNTAX construct. Many people makes that confusion (even standard authors).Tallula
6
  1. you must use it to implement spinlocks as well as some (all?) lock-free data structures
  2. use it with atomic operations/instructions
  3. it helped me once to work around a compiler bug (wrongly generated code during optimization)
Gerlachovka answered 16/9, 2008 at 14:4 Comment(3)
You are better off using a library, compiler intrinsics, or inline assembly code. Volatile is unreliable.Penicillate
1 and 2 both make use of atomic operations, but volatile does not provide atomic semantics and the platform-specific implementations of atomic will supersede the need for using volatile, so for 1 and 2, I disagree, you do NOT need volatile for these.Gave
Who says anything about volatile providing atomic semantics? I said you need to USE volatile WITH atomic operations and if you don't think it's true look at the declarations of interlocked operations of win32 API (this guy also explained this in his answer)Ralston
4

The volatile keyword is intended to prevent the compiler from applying any optimisations on objects that can change in ways that cannot be determined by the compiler.

Objects declared as volatile are omitted from optimisation because their values can be changed at any time by code outside the scope of the current code. The system always reads the current value of a volatile object from its memory location rather than keeping the value in a temporary register at the point it is requested, even if a previous instruction asked for the value of the same object.

Consider the following cases:

1) Global variables modified by an interrupt service routine outside the scope.

2) Global variables within a multi-threaded application.

If we do not use the volatile qualifier, the following problems may arise:

1) Code may not work as expected when optimisation is turned on.

2) Code may not work as expected when interrupts are enabled and used.

Volatile: A programmer’s best friend

https://en.wikipedia.org/wiki/Volatile_(computer_programming)

Nahshunn answered 5/10, 2016 at 13:20 Comment(1)
The link you posted is extremely outdated and doesn't reflect current best practices.Brynhild
4

Other answers already mention avoiding some optimization in order to:

  • use memory mapped registers (or "MMIO")
  • write device drivers
  • allow easier debugging of programs
  • make floating point computations more deterministic

Volatile is essential whenever you need a value to appear to come from the outside and be unpredictable, so that the compiler cannot optimize based on a known value. It also matters when a result isn't actually used but you still need it to be computed, or when it is used but you want to compute it several times for a benchmark and need the computations to start and end at precise points.

A volatile read is like an input operation (like scanf or a use of cin): the value seems to come from the outside of the program, so any computation that has a dependency on the value needs to start after it.

A volatile write is like an output operation (like printf or a use of cout): the value seems to be communicated outside of the program, so if the value depends on a computation, it needs to be finished before.

So a pair of volatile read/write can be used to tame benchmarks and make time measurement meaningful.

Without volatile, the compiler could start your computation earlier, as nothing would prevent it from reordering the computation relative to functions such as the time measurement.
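
A sketch of that benchmarking pattern (the workload and variable names are arbitrary): the volatile read makes the input opaque to the optimizer, and the volatile write forces the result, and therefore the whole computation, to actually be produced before the second time stamp.

#include <chrono>
#include <cstdio>

volatile int input = 1000000;   // read as if the value came from outside the program
volatile long long sink;        // written as if the result left the program

int main()
{
    const int n = input;                                   // volatile read
    const auto t0 = std::chrono::steady_clock::now();
    long long total = 0;
    for (int i = 0; i < n; ++i)
        total += i;                                        // the work being measured
    sink = total;                                          // volatile write: result must exist here
    const auto t1 = std::chrono::steady_clock::now();
    const auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
    std::printf("%lld ns\n", static_cast<long long>(ns));
}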

Tallula answered 22/11, 2019 at 23:6 Comment(0)
3

All the answers are excellent. But on top of that, I would like to share an example.

Below is a little C++ program:

#include <cstdio>

int x;

int main(){
    char buf[50];
    x = 8;

    if(x == 8)
        printf("x is 8\n");
    else
        sprintf(buf, "x is not 8\n");

    x=1000;
    while(x > 5)
        x--;
    return 0;
}

Now, let's generate the assembly for the above code (I will paste only the portions of the assembly which are relevant here):

The command to generate assembly:

g++ -S -O3 -c -fverbose-asm -Wa,-adhln assembly.cpp

And the assembly:

main:
.LFB1594:
    subq    $40, %rsp    #,
    .seh_stackalloc 40
    .seh_endprologue
 # assembly.cpp:5: int main(){
    call    __main   #
 # assembly.cpp:10:         printf("x is 8\n");
    leaq    .LC0(%rip), %rcx     #,
 # assembly.cpp:7:     x = 8;
    movl    $8, x(%rip)  #, x
 # assembly.cpp:10:         printf("x is 8\n");
    call    _ZL6printfPKcz.constprop.0   #
 # assembly.cpp:18: }
    xorl    %eax, %eax   #
    movl    $5, x(%rip)  #, x
    addq    $40, %rsp    #,
    ret 
    .seh_endproc
    .p2align 4,,15
    .def    _GLOBAL__sub_I_x;   .scl    3;  .type   32; .endef
    .seh_proc   _GLOBAL__sub_I_x

You can see in the assembly that no code was generated for sprintf, because the compiler assumed that x will not change from outside the program. The same is true of the while loop: it was removed altogether by the optimizer, which saw it as useless code and simply assigned 5 to x directly (see movl $5, x(%rip)).

The problem arises if an external process or piece of hardware changes the value of x somewhere between x = 8; and if(x == 8). We would expect the else block to execute, but unfortunately the compiler has trimmed that part out.

Now, in order to solve this, let us change int x; to volatile int x; in assembly.cpp and quickly look at the assembly code generated:

main:
.LFB1594:
    subq    $104, %rsp   #,
    .seh_stackalloc 104
    .seh_endprologue
 # assembly.cpp:5: int main(){
    call    __main   #
 # assembly.cpp:7:     x = 8;
    movl    $8, x(%rip)  #, x
 # assembly.cpp:9:     if(x == 8)
    movl    x(%rip), %eax    # x, x.1_1
 # assembly.cpp:9:     if(x == 8)
    cmpl    $8, %eax     #, x.1_1
    je  .L11     #,
 # assembly.cpp:12:         sprintf(buf, "x is not 8\n");
    leaq    32(%rsp), %rcx   #, tmp93
    leaq    .LC0(%rip), %rdx     #,
    call    _ZL7sprintfPcPKcz.constprop.0    #
.L7:
 # assembly.cpp:14:     x=1000;
    movl    $1000, x(%rip)   #, x
 # assembly.cpp:15:     while(x > 5)
    movl    x(%rip), %eax    # x, x.3_15
    cmpl    $5, %eax     #, x.3_15
    jle .L8  #,
    .p2align 4,,10
.L9:
 # assembly.cpp:16:         x--;
    movl    x(%rip), %eax    # x, x.4_3
    subl    $1, %eax     #, _4
    movl    %eax, x(%rip)    # _4, x
 # assembly.cpp:15:     while(x > 5)
    movl    x(%rip), %eax    # x, x.3_2
    cmpl    $5, %eax     #, x.3_2
    jg  .L9  #,
.L8:
 # assembly.cpp:18: }
    xorl    %eax, %eax   #
    addq    $104, %rsp   #,
    ret 
.L11:
 # assembly.cpp:10:         printf("x is 8\n");
    leaq    .LC1(%rip), %rcx     #,
    call    _ZL6printfPKcz.constprop.1   #
    jmp .L7  #
    .seh_endproc
    .p2align 4,,15
    .def    _GLOBAL__sub_I_x;   .scl    3;  .type   32; .endef
    .seh_proc   _GLOBAL__sub_I_x

Here you can see that assembly code for sprintf, printf and the while loop was generated. The advantage is that if the x variable is changed by some external program or hardware, the sprintf part of the code will be executed. Similarly, the while loop can now be used for busy waiting.

Quinonoid answered 29/4, 2020 at 6:54 Comment(0)
2

Besides the fact that the volatile keyword is used to tell the compiler not to optimize accesses to some variable (one that can be modified by a thread or an interrupt routine), it can also be used to work around some compiler bugs -- yes, it can.

For example, I worked on an embedded platform where the compiler was making some wrong assumptions about the value of a variable. If the code wasn't optimized, the program ran fine. With optimizations (which were really needed because it was a critical routine) the code wouldn't work correctly. The only solution (though not a very correct one) was to declare the 'faulty' variable as volatile.

Sturgis answered 16/9, 2008 at 21:2 Comment(3)
The idea that the compiler doesn't optimize access to volatiles is a faulty assumption. The standard knows nothing about optimizations. The compiler is required to respect what the standard dictates, but it is free to do any optimizations that don't interfere with the normal behavior.Teammate
From my experience 99.9% of all optimization "bugs" in gcc arm are errors on the part of the programmer. No idea if this applies to this answer. Just a rant on the general topicMissilery
@Teammate "It is a faulty assumption the idea that the compiler doesn't optimize access to volatiles" Source?Tallula
2

Your program seems to work even without the volatile keyword? Perhaps this is the reason:

As mentioned previously, the volatile keyword helps in cases like

volatile int* p = ...;  // point to some memory
while( *p!=0 ) {}  // loop until the memory becomes zero

But there seems to be almost no effect once an external or non-inlined function is called. E.g.:

while( *p!=0 ) { g(); }

Then with or without volatile almost the same result is generated.

As long as g() can be completely inlined, the compiler can see everything that's going on and can therefore optimize. But when the program makes a call to a place where the compiler can't see what's going on, it isn't safe for the compiler to make any assumptions any more. Hence the compiler will generate code that always reads from memory directly.

But beware of the day when your function g() becomes inlined (either due to explicit changes or due to compiler/linker cleverness): then your code might break if you forgot the volatile keyword!

Therefore I recommend adding the volatile keyword even if your program seems to work without it. It makes the intention clearer and the code more robust with respect to future changes.

Paralysis answered 23/10, 2017 at 11:53 Comment(2)
Note that a function can have its code inlined while still generating a reference (resolved at link time) to the outline function; this will be the case of a partially inlined recursive function. A function could also have its semantic "inlined" by the compiler, that is the compiler assumes the side effects and result are within the possible side effects and results possible according to its source code, while still not inlining it. This is based on the "effective One Definition Rule" which states that all definitions of an entity shall be effectively equivalent (if not exactly identical).Tallula
Avoiding portably the inlining of a call (or "inlining" of its semantic) by a function whose body is visible by the compiler (even at link time with global optimization) is possible by using a volatile qualified function pointer: void (* volatile fun_ptr)() = fun; fun_ptr();Tallula
2

In the early days of C, compilers would interpret all actions that read and write lvalues as memory operations, to be performed in the same sequence as the reads and writes appeared in the code. Efficiency could be greatly improved in many cases if compilers were given a certain amount of freedom to re-order and consolidate operations, but there was a problem with this. Even though operations were often specified in a certain order merely because it was necessary to specify them in some order, and thus the programmer picked one of many equally-good alternatives, that wasn't always the case. Sometimes it would be important that certain operations occur in a particular sequence.

Exactly which details of sequencing are important will vary depending upon the target platform and application field. Rather than provide particularly detailed control, the Standard opted for a simple model: if a sequence of accesses is performed through lvalues that are not volatile-qualified, a compiler may reorder and consolidate them as it sees fit. If an action is done with a volatile-qualified lvalue, a quality implementation should offer whatever additional ordering guarantees might be required by code targeting its intended platform and application field, without requiring that programmers use non-standard syntax.

Unfortunately, rather than identify what guarantees programmers would need, many compilers have opted instead to offer the bare minimum guarantees mandated by the Standard. This makes volatile much less useful than it should be. On gcc or clang, for example, a programmer needing to implement a basic "hand-off mutex" [one where a task that has acquired and released a mutex won't do so again until the other task has done so] must do one of four things:

  1. Put the acquisition and release of the mutex in a function that the compiler cannot inline, and to which it cannot apply Whole Program Optimization.

  2. Qualify all the objects guarded by the mutex as volatile--something which shouldn't be necessary if all accesses occur after acquiring the mutex and before releasing it.

  3. Use optimization level 0 to force the compiler to generate code as though all objects that aren't qualified register are volatile.

  4. Use gcc-specific directives.

By contrast, when using a higher-quality compiler which is more suitable for systems programming, such as icc, one would have another option:

  1. Make sure that a volatile-qualified write gets performed everyplace an acquire or release is needed.

Acquiring a basic "hand-off mutex" requires a volatile read (to see if it's ready), and shouldn't require a volatile write as well (the other side won't try to re-acquire it until it's handed back) but having to perform a meaningless volatile write is still better than any of the options available under gcc or clang.
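
For illustration only, a sketch of that "hand-off mutex" pattern, assuming an implementation that gives volatile the extra ordering guarantees argued for above; portable C++11 code would use std::atomic with acquire/release semantics instead.

// Two tasks alternate ownership of some shared data. owner == 0 means task A
// may proceed; owner == 1 means task B may proceed.
volatile int owner = 0;
int shared_data = 0;            // only touched by the task that currently "owns" it

void task_a_step()
{
    while (owner != 0) { /* wait for the hand-off */ }
    ++shared_data;              // guarded work
    owner = 1;                  // hand off to task B
}

void task_b_step()
{
    while (owner != 1) { /* wait for the hand-off */ }
    ++shared_data;              // guarded work
    owner = 0;                  // hand back to task A
}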

Karykaryl answered 30/7, 2018 at 17:1 Comment(0)
2

I would like to quote Herb Sutter's words from his GotW #95, which can help in understanding the meaning of volatile variables:

C++ volatile variables (which have no analog in languages like C# and Java) are always beyond the scope of this and any other article about the memory model and synchronization. That’s because C++ volatile variables aren’t about threads or communication at all and don’t interact with those things. Rather, a C++ volatile variable should be viewed as a portal into a different universe beyond the language — a memory location that by definition does not obey the language’s memory model because that memory location is accessed by hardware (e.g., written to by a daughter card), has more than one address, or is otherwise “strange” and beyond the language. So C++ volatile variables are universally an exception to every guideline about synchronization because they are always inherently “racy” and unsynchronizable using the normal tools (mutexes, atomics, etc.) and more generally exist outside all normal of the language and compiler including that they generally cannot be optimized by the compiler (because the compiler isn’t allowed to know their semantics; a volatile int vi; may not behave anything like a normal int, and you can’t even assume that code like vi = 5; int read_back = vi; is guaranteed to result in read_back == 5, or that code like int i = vi; int j = vi; that reads vi twice will result in i == j which will not be true if vi is a hardware counter for example).

Gesellschaft answered 9/1, 2021 at 10:27 Comment(2)
I somehow always find Herb Sutter's explanation a bit more confusing than others.Landin
I find it clear after having read https://mcmap.net/q/15694/-why-do-we-use-the-volatile-keyword-duplicateChita
1

One use I should remind you of: in a signal handler function, if you want to access or modify a global variable (for example, to mark it as exit = true), you have to declare that variable as 'volatile'.
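
A minimal sketch of that idea (the flag name and signal are arbitrary; as the Standard C answer above notes, the flag should be volatile sig_atomic_t):

#include <csignal>

volatile std::sig_atomic_t exit_requested = 0;

void handle_sigint(int) { exit_requested = 1; }

int main()
{
    std::signal(SIGINT, handle_sigint);
    while (!exit_requested) {
        // the program's normal work goes here
    }
    // fall through and exit cleanly once the signal arrives
}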

Florencia answered 5/5, 2017 at 10:33 Comment(0)
