Two months ago, I reported, as a clang++ bug, that the C++ program below sets z to 4294967295 when compiled with clang++ -O2 -fno-strict-enums.
enum e { e1, e2 } e;
long long x, y, z;
char *p;
void f(void) {
    e = (enum e) 4294967295;
    x = (long long) e;
    y = e > e1;
    z = &p[e] - p;
}
My bug report was closed as invalid because the program is undefined. My feeling was that using the option -fno-strict-enums made it defined.
As far as I know, Clang does not have documentation worthy of the name, because it aims at being compatible with GCC with respect to the options it accepts and their meaning. I read GCC's documentation of the option -fno-strict-enums as saying that the program should set the value of z to -1:
-fstrict-enums
Allow the compiler to optimize using the assumption that a value of enumerated type can only be one of the values of the enumeration (as defined in the C++ standard; basically, a value that can be represented in the minimum number of bits needed to represent all the enumerators). This assumption may not be valid if the program uses a cast to convert an arbitrary integer value to the enumerated type.
Note that only the option -fstrict-enums is documented, but it seems clear enough that -fno-strict-enums disables the compiler behavior that -fstrict-enums enables. I cannot file a bug against GCC's documentation, because generating a binary that sets z to -1, which is what I understand -fno-strict-enums to mandate, is exactly what g++ -O2 -fno-strict-enums does.
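As an illustration of the assumption the quoted text describes, here is a minimal sketch of my own (the names k, k0, k1 and classify are hypothetical, not from the bug report or either manual):

// Hypothetical sketch. For `enum k { k0, k1 }`, the value range in the sense
// of the quoted text is [0, 1]: one bit is enough for both enumerators.
// Under -fstrict-enums the compiler may assume a `k` value is 0 or 1 and
// treat the final `return -1` as unreachable; under -fno-strict-enums the
// generated code must also cope with values such as `(k) 4294967295`
// stored into a `k` object via a cast.
enum k { k0, k1 };

int classify(k v) {
    if (v == k0) return 0;
    if (v == k1) return 1;
    return -1;  // only reachable for out-of-range values of v
}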
Could anyone tell me what -fno-strict-enums does in Clang (and in GCC, if I have misunderstood what it does there), and whether the value of the option has any effect at all anywhere in Clang?
For reference, my bug report is here and the Compiler Explorer link showing what I mean is here. The versions used as reference are Clang 10.0.1 and GCC 10.2 targeting an I32LP64 architecture.
Regarding (enum e) 4294967295: I expect the implementation-defined behavior to be applied when converting 4294967295 to the underlying type of enum e. The implementation-defined behavior is wrap-around. – Woodwind
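The wrap-around the comment refers to can be seen in isolation with a small sketch (mine, not part of the original thread; it assumes a 32-bit int on the target in question):

#include <cstdio>

int main(void) {
    // Converting 4294967295 to a 32-bit signed type is implementation-defined
    // before C++20; on the usual two's-complement targets it wraps to -1.
    long long v = (int) 4294967295U;
    std::printf("%lld\n", v);  // prints -1 on such targets
    return 0;
}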
The compiler has to generate code for f that behaves correctly for all initial values of p that make the source code of f defined. If you insist on seeing a calling context for f that does not make the question meaningless, it can be p = malloc(5000000000U); if (!p) abort(); f(); – Woodwind

If there is an argument that f is undefined for all possible calling contexts, that would also answer my question. But I don't think that “p would have to point to an array of 4294967294 elements” is that argument, because p can do just that and the code generated by the compiler has to behave correctly when p does that. – Woodwind
My question is precisely “what does -fno-strict-enums do? Does it do anything?”. If you are convinced of this, perhaps you know the answer to that question? – Woodwind

-fno-strict-enums disables optimizations based on the strict definition of an enum’s value range. But violating ISO 14882 on the valid range of the enum is still undefined behavior. – Becalm

I think the question here is "what does -fno-strict-enums do in Clang?" and that the rant about the bug report is just background noise that should be ignored. – Recency

sizeof(e) is not required to be big enough to hold the value you're storing into it. That's why your bug was closed. – Myof

My expectation is that, with -fno-strict-enums, the behavior of f would be defined. I expect e to evaluate to -1 in the expression &p[e], and for this reason an example of a valid context in which to call f is char c; p = &c + 1; f();. – Woodwind
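Putting the comments together, a complete calling context in the spirit of the last comment could look like the sketch below (the main function is my own addition; whether the call to f is defined, and whether z must be -1, is exactly what is disputed above):

#include <cstdio>

enum e { e1, e2 } e;
long long x, y, z;
char *p;

void f(void) {
    e = (enum e) 4294967295;
    x = (long long) e;
    y = e > e1;
    z = &p[e] - p;
}

int main(void) {
    char c;
    p = &c + 1;               // the context proposed in the last comment
    f();                      // under the asker's reading, &p[e] == &c and z == -1
    std::printf("%lld\n", z); // the reported clang++ -O2 -fno-strict-enums result corresponds to 4294967295
    return 0;
}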