Does Visual C++ support "strict aliasing"?

I recently was surprised to learn that the C and C++ language standards have a "strict aliasing" rule. In essence, the rule prohibits accessing the same memory location through expressions of incompatible types.

As an example:

#include <cstdint> // for std::int32_t
char buffer[4] = { 0x55, 0x66, 0x77, 0x88 };
std::int32_t *p = (std::int32_t *)&buffer[0]; // the cast itself is allowed...
std::int32_t v = *p; // ...but this read is illegal: it accesses buffer's bytes through an incompatible type

Most of the professional C++ developers I interact with are not familiar with this rule. Based on my research, it seems to mostly affect GCC/Clang users. Does Visual C++ support enabling/disabling this rule? If so, how does one do so?
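To illustrate why the rule matters to optimizers, here is a minimal sketch (the function name is made up): under strict aliasing, a compiler may assume an int* and a float* never point at the same object.

int set_and_read(int *i, float *f) {
    *i = 1;
    *f = 2.0f;  // per the rule, this store cannot alias *i
    return *i;  // so GCC/Clang at -O2 fold this to "return 1"
}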

Thank you

Ury asked 12/5, 2016 at 2:52 Comment(3)
AFAIK MSVC always behaves as if you set -fno-strict-aliasing. – Prophet
It is a compiler with a 1-800 support phone number. So no. – Dewaynedewberry
lol @ having a 1-800 support number :) – Ury

"Strict aliasing" is a C++ rule restricting programs, not compilers. Since violating the rule is Undefined Behavior, no diagnostic required a compiler doesn't need to support it in any way.

That said, Microsoft is a bit less aggressive in applying such optimizations. Only last week did they announce that their new optimizer assumes signed arithmetic doesn't overflow, something GCC has assumed for years already. Optimizing on strict aliasing would break a few Windows headers, so those need fixing first. (A few types act as if they contain unions, but they're not formally defined as such.)
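To illustrate the kind of assumption involved, a minimal sketch (my example, not one from the announcement):

// Under the assumption that signed arithmetic never overflows, x + 1 > x
// can only be false via overflow, which is undefined behavior, so the
// compiler may fold the whole function to "return true".
bool always_greater(int x) {
    return x + 1 > x;
}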

Sillsby answered 12/5, 2016 at 8:25 Comment(12)
It would be helpful, IMHO, if there were a means via which compilers could promise to use integer semantics such that the result of an n-bit integer addition, subtraction, or multiplication would always behave like some integer congruent to the arithmetically correct result mod 2ⁿ, but where, in case of overflow, the choice of which such integer could be non-deterministic. Code which could rely upon such behavioral constraints could in many cases be more efficient than code which had to prevent overflows at all costs. – Autarch
@supercat: I'm not sure if you really need compiler help for that; it sounds like you could implement that in a library. Use unsigned internally; its operations correspond almost 1:1 with 2's complement signed arithmetic. – Sillsby
One could write a library to perform such maths in such fashion as to always yield a result within the range of int, but forcing wrapping behavior--whether via code or compiler options--can be a major impediment to efficiency. If a programmer has to force wrapping behavior to be able to avoid having a compiler jump the rails, that will often prevent a compiler from generating code as efficient as what it could generate if it had the freedom to ignore wrapping (e.g. given i+x>y, if x and y are loop-invariant, a compiler could use one loop for the case where y-x wouldn't overflow... – Autarch
...which would only need to calculate y-x once and could then use i>(y-x), and one or two loops for the case(s) where y-x does overflow, which could treat the comparison as always true or always false). In cases where code is supposed to identify "potentially interesting" objects, and no interesting object would cause overflow but some uninteresting objects might, having a few uninteresting objects classified as "potentially interesting" may be cheaper than forcing the compiler to handle the objects the code doesn't care about deterministically. – Autarch
@supercat: I agree, forcing wrapping behavior hurts performance. That's precisely why I think a library solution is appropriate. Let the default be fast and the wrapped class safe. You can add safety to fast primitives, but you can't add speed to safe primitives. – Sillsby
The problem is that hyper-modern compilers' behavior when calculations overflow isn't limited to yielding a non-deterministic result. Because overflow causes modern compilers to negate the laws of time and causality, it must be prevented at all costs even if one would otherwise be willing to accept any possible result from the overflowed calculation and others derived from it. What's needed are means via which a programmer can indicate willingness to accept behavior that is non-deterministic but constrained. – Autarch
PS--I disagree with your last sentence: adding directives to waive certain behavioral guarantees in such a way that compilers which don't support them could "handle" them entirely via the preprocessor would make it possible for programmers to enable optimizations which are much more aggressive and effective than anything compilers can do now, but would be much safer than having compiler writers ignore behavioral expectations which microcomputer compilers had honored for decades even though the Standard didn't require them. – Autarch
@supercat: "unspecified" sounds like a hyperspecialized case. Consider that a library solution (say std::wrap<int>) would have zero performance overhead on real-world hardware. And well-defined behavior is a special case of unspecified behavior. So what makes your hypothetical type better than std::wrap<int> ? Directives are a major wart on the language, and IMO usually point to a design failure.Sillsby
Code which receives data from untrustworthy sources is (or should be) required to refrain from disrupting or illegitimately examining anything else in the execution environment, even when given maliciously-formed input. That will often be the only behavioral requirement when not given valid data, but I would hardly call it "hyper-specialized". An audiovisual decoder which is fed invalid input should have the freedom to generate any pattern of pixels and sounds subject to the constraint about exposing other parts of the execution environment, but should not have... – Autarch
@supercat: Sure, but then you use std::wrap<int> at no performance loss. The problem with your idea is that it is strictly inferior to a well-defined outcome at the same speed. It's not even a good RNG. – Sillsby
There are many situations where forcing wrapping behavior can have a significant performance cost, especially when one considers function inlining and constant substitution. There are many cases where loose integer semantics would allow a compiler to omit comparisons altogether, but precise integer semantics would necessitate their inclusion. – Autarch
If programmers can say (int)(i+1) > j in cases where they need precise wrapping semantics (analogous to double d = (float)(float1*float2); when d needs to be a value that's precisely representable as float), then I don't see how that's "strictly inferior" to having precise wrapping always, since in cases where the programmer actually cares about whether wrapping is performed it could be readily requested, and the cast would help any humans reading the code recognize the importance of wrapping behavior to its meaning. – Autarch
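To make the "library solution" discussed above concrete, here is a minimal sketch of a wrapping integer built on unsigned internals (the name Wrap is made up; the std::wrap<int> mentioned in the comments is hypothetical, not a real standard type):

#include <cstdint>

struct Wrap {
    std::uint32_t u; // unsigned internally, so wraparound is well defined

    friend Wrap operator+(Wrap a, Wrap b) { return { a.u + b.u }; }
    friend Wrap operator-(Wrap a, Wrap b) { return { a.u - b.u }; }
    friend Wrap operator*(Wrap a, Wrap b) { return { a.u * b.u }; }

    // Recover the signed value: implementation-defined before C++20,
    // defined as modular (two's complement) since C++20.
    std::int32_t value() const { return static_cast<std::int32_t>(u); }
};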
