Is the following undefined behaviour?
union {
    int foo;
    float bar;
} baz;
baz.foo = 3.14 * baz.bar;
I remember that writing and reading from the same underlying memory between two sequence points is UB, but I am not certain.
Reading and writing the same memory location within a single expression does not by itself invoke undefined behavior; the behavior is undefined only if that location is modified more than once between two sequence points, or if a side effect on it is unsequenced relative to a value computation that uses its value. The C standard says:
If a side effect on a scalar object is unsequenced relative to either a different side effect on the same scalar object or a value computation using the value of the same scalar object, the behavior is undefined. [...]
The expression
baz.foo = 3.14 * baz.bar;
has well-defined behaviour provided bar is initialized beforehand. The reason is that the side effect of updating baz.foo is sequenced after the value computations of the objects baz.foo and baz.bar:
[...] The side effect of updating the stored value of the left operand is sequenced after the value computations of the left and right operands. The evaluations of the operands are unsequenced.
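A minimal sketch of the well-defined case (my example, not part of the original answer), assuming baz.bar is given a value before the assignment:

#include <stdio.h>

int main(void) {
    union {
        int foo;
        float bar;
    } baz;

    baz.bar = 2.0f;            /* bar now holds a determinate value */
    baz.foo = 3.14 * baz.bar;  /* fine: the store to foo is sequenced after
                                  the read of baz.bar */
    printf("%d\n", baz.foo);   /* prints 6 */
    return 0;
}

The read of baz.bar (and the multiplication and conversion) completes before the store to baz.foo, so the two accesses to the shared storage are never unsequenced.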
printf("%d", i + i++);
has undefined behavior. –
Turrell Disclaimer: This answer addresses C++.
You're accessing an object whose lifetime hasn't begun yet - baz.bar - which induces UB by [basic.life]/(6.1).
Assuming bar has been brought to life (e.g. by initializing it), your code is fine; before the assignment, foo need not be alive as no operation is performed that depends on its value, and during it, the active member is changed by reusing the memory and effectively initializing it. The current rules aren't clear about the latter; see CWG #1116. However, the status quo is that such assignments are indeed setting the target member as active (=alive).
Note that the assignment is sequenced (i.e. guaranteed to happen) after the value computation of the operands - see [expr.ass]/1.
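To make that last point concrete, here is a minimal sketch (mine, not the answerer's) of the "assignment switches the active member" reading described above; the snippet is written so that it compiles identically as C or C++:

#include <stdio.h>

union U {
    int foo;
    float bar;
};

int main(void) {
    union U u;
    u.foo = 1;     /* foo's lifetime begins; foo is the active member */
    u.bar = 2.5f;  /* per the status quo described above, this plain assignment
                      reuses the storage and makes bar the active member;
                      no placement new is needed */
    printf("%f\n", u.bar);
    return 0;
}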
So u.a = u.b is undefined, but u.a = B(u.b) is fine? – Endpaper
float, but that temporary's storage certainly does not overlap baz.foo's. Then again, perhaps the committee was not precise enough in wording their note, and they actually did mean that u.a = f(u.b) is not defined. Either way, lifetime rules are a mess. – Skullcap
In C this is well defined provided baz.bar has been initialized and baz.foo has not subsequently been written to. Given the C / C++ reconciliation efforts in the 2011 versions of the standards, I would be very surprised to find that the same code has undefined behavior in C++. – Cerous
bar is not alive in the snippet of the asker. – Skullcap
Answering for C, not C++
I thought this was Defined Behavior, but then I read the following paragraph from ISO C2x (which I guess is also present in older C standards, but didn't check):
6.5.16.1/3 (Assignment operators::Simple Assignment::Semantics):
If the value being stored in an object is read from another object that overlaps in any way the storage of the first object, then the overlap shall be exact and the two objects shall have qualified or unqualified versions of a compatible type; otherwise, the behavior is undefined.
So, let's consider the following:
union {
    int a;
    const int b;
} db;
union {
    int a;
    float b;
} ub1;
union {
    uint32_t a;
    int32_t b;
} ub2;
Then, it is Defined Behavior to do:
db.a = db.b + 1;
But it is Undefined Behavior to do:
ub1.a = ub1.b + 1;
or
ub2.a = ub2.b + 1;
The definition of compatible types is in 6.2.7/1 (Compatible type and composite type). See also: __builtin_types_compatible_p().
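Since the answer points at __builtin_types_compatible_p(), here is a quick sketch of mine (GCC/Clang only, and it assumes the usual definitions of int32_t/uint32_t as int/unsigned int) that reports which of the pairs above are compatible:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* The builtin ignores top-level qualifiers, so int and const int
       count as compatible (matching the db case above). */
    printf("int / const int   : %d\n",
           __builtin_types_compatible_p(int, const int));
    /* int and float are not compatible (the ub1 case). */
    printf("int / float       : %d\n",
           __builtin_types_compatible_p(int, float));
    /* int32_t and uint32_t are distinct signed/unsigned types of the same
       size, hence not compatible (the ub2 case). */
    printf("int32_t / uint32_t: %d\n",
           __builtin_types_compatible_p(int32_t, uint32_t));
    return 0;
}

On a typical target this prints 1, 0 and 0, matching the defined/undefined split described above.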
Doesn't the + create a new object such that it makes it defined again? I.e., ub1.a = ub1.b; is undefined behavior for sure, but is it ub1.a = ub1.b + 1;? I got a warning from that code, but I'm not convinced. – Commuter
Is ub1.a = ub1.b + 0; defined?! – Commuter
Requiring that long1 = long2 + 1; perform an intermediate copy would substantially increase code size and execution time. – Ruthenium
The alternative for long1 = long2+long3; is to process the code as though it were long1 = long2; long1+=long3; or long1 = long3; long1+=long2;, but the first substitution is only valid if long1 and long3 are known to identify disjoint regions of storage, and the second is only valid if long1 and long2 are distinct. – Ruthenium
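A small sketch of the two substitutions the comment describes (my code, assuming the three longs are reached through pointers that might alias):

#include <stdio.h>

/* Lowering long1 = long2 + long3; without an intermediate temporary. */
void add_v1(long *long1, const long *long2, const long *long3) {
    *long1 = *long2;   /* only safe if long1 and long3 are disjoint:  */
    *long1 += *long3;  /* long3 is read after long1 has been modified */
}

void add_v2(long *long1, const long *long2, const long *long3) {
    *long1 = *long3;   /* only safe if long1 and long2 are disjoint:  */
    *long1 += *long2;  /* long2 is read after long1 has been modified */
}

int main(void) {
    long a = 0, b = 2, c = 3;
    add_v1(&a, &b, &c);  /* a, b and c are distinct, so either form is fine */
    printf("%ld\n", a);  /* 5 */
    add_v2(&a, &b, &c);
    printf("%ld\n", a);  /* 5 */
    return 0;
}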
If they are restrict or somehow else different objects, it will use your optimized version, otherwise it has to go the whole route with a temporary. – Blois
*ap = *bp from two non-restrict pointers, I guess (or also one pointer and the variable it points to, a = *ap)? I can't think of any other valid cases. But in those cases, aliasing rules already prevent that, unless using char, but char aliasing is safe because of sizeof(char) == 1 (and because the standard says so), so how could this paragraph in the standard be useful without referring to unions? – Commuter
*ap = *bp: if both pointers have the same type, then this paragraph doesn't apply, because obviously they have the same size and starting location, so aliasing rules are already enough. If they don't have the same type, then it's breaking aliasing rules. – Commuter
struct x{}; albeit it really was written via another struct y{}; which by this paragraph has to be identical enough for C to not cause representation problems. – Blois
Blois x+=(*p) << 4;
on some PIC platforms, it may be most efficient on some 8-bit platforms to load *p
, shift left by four, add with carry to the bottom half of x
, reload *p
, shift right by four, and add to the top half of x
(the main cost of loading *p
would be setting up the pointer, which would only need to be done once). –
mov a,@r0 / swap / and a,#$F0 / add a,x.lo / mov x.low,a / mov a,@r0 / swap / and a,$0F / adc x.hi / mov x.hi,a. All well and good unless p might alias the bottom half of x. – Ruthenium
The Standard uses the phrase "Undefined Behavior", among other things, as a catch-all for situations where many implementations would process a construct in at least somewhat predictable fashion (e.g. yielding a not-necessarily-predictable value without side effects), but where the authors of the Standard thought it impractical to try to anticipate everything that implementations might do. It wasn't intended as an invitation for implementations to behave gratuitously nonsensically, nor as an indication that code was erroneous (the phrase "non-portable or erroneous" was very much intended to include constructs that might fail on some machines, but would be correct in code that was not intended to be suitable for use with those machines).
On some platforms like the 8051, if a compiler were given a construct like someInt16 += *someUnsignedCharPtr << 4;, the most efficient way to process it, if it didn't have to accommodate the possibility that the pointer might point to the lower byte of someInt16, would be to fetch *someUnsignedCharPtr, shift it left four bits, add it to the LSB of someInt16 (capturing the carry), reload *someUnsignedCharPtr, shift it right four bits, and add it along with the earlier carry to the MSB of someInt16. Loading the value from *someUnsignedCharPtr twice would be faster than loading it, storing its value to a temporary location before doing the shift, and then having to load its value from that temporary location. If, however, someUnsignedCharPtr were to point to the lower byte of someInt16, then the modification of that lower byte before the second load through someUnsignedCharPtr would corrupt the upper bits of that byte, which would, after shifting, be added to the upper byte of someInt16.
The Standard would allow a compiler to generate such code, even though character pointers are exempt from aliasing rules, because it does not require that compilers handle all situations where unsequenced reads and writes affect regions of storage that partially overlap. If such accesses were performed using a union instead of a character pointer, a compiler might recognize that the character-type access would always overlap the least significant byte of the 16-bit value, but I don't think the authors of the Standard wanted to require that compilers invest the time and effort that might be necessary to handle such obscure cases meaningfully.
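As a rough illustration of the hazard being described (my sketch, not the answerer's: it assumes a little-endian byte order, that uint8_t is unsigned char, and it uses memcpy to model the byte-wise store and the second load):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Mimics the two-load lowering described above: update the low byte of *x in
   memory, then reload *p for the high part. If p points into *x, the second
   load sees the partially updated value. */
static void add_shifted_twoload(uint16_t *x, const uint8_t *p) {
    uint8_t b[2];
    memcpy(b, x, 2);                      /* b[0] = LSB, b[1] = MSB (little endian) */

    uint8_t add_lo = (uint8_t)(*p << 4);  /* first load of *p */
    unsigned carry = (unsigned)b[0] + add_lo > 0xFF;
    b[0] = (uint8_t)(b[0] + add_lo);
    memcpy(x, b, 2);                      /* low byte of *x is now updated */

    uint8_t add_hi = (uint8_t)(*p >> 4);  /* second load: wrong if p aliases
                                             the low byte of *x */
    b[1] = (uint8_t)(b[1] + add_hi + carry);
    memcpy(x, b, 2);
}

int main(void) {
    uint16_t a = 0x0102;
    uint8_t  s = 0x11;
    add_shifted_twoload(&a, &s);            /* disjoint: 0x0102 + 0x0110 = 0x0212 */
    printf("0x%04x\n", (unsigned)a);

    uint16_t c = 0x0102;
    add_shifted_twoload(&c, (uint8_t *)&c); /* p aliases the low byte of c: the
                                               reload picks up the modified byte,
                                               giving 0x0322 instead of 0x0122 */
    printf("0x%04x\n", (unsigned)c);
    return 0;
}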