gcc, strict-aliasing, and casting through a union
Do you have any horror stories to tell? The GCC Manual recently added a warning regarding -fstrict-aliasing and casting a pointer through a union:

[...] Taking the address, casting the resulting pointer and dereferencing the result has undefined behavior [emphasis added], even if the cast uses a union type, e.g.:

    union a_union {
        int i;
        double d;
    };

    int f() {
        double d = 3.0;
        return ((union a_union *)&d)->i;
    }

Does anyone have an example to illustrate this undefined behavior?

Note this question is not about what the C99 standard says, or does not say. It is about the actual functioning of gcc, and other existing compilers, today.

I am only guessing, but one potential problem may lie in the setting of d to 3.0. Because d is a temporary variable which is never directly read, and which is never read via a 'somewhat-compatible' pointer, the compiler may not bother to set it. And then f() will return some garbage from the stack.

My simple, naive, attempt fails. For example:

#include <stdio.h>

union a_union {
    int i;
    double d;
};

int f1(void) {
    union a_union t;
    t.d = 3333333.0;
    return t.i; // gcc manual: 'type-punning is allowed, provided...' (C90 6.3.2.3)
}

int f2(void) {
    double d = 3333333.0;
    return ((union a_union *)&d)->i; // gcc manual: 'undefined behavior' 
}

int main(void) {
    printf("%d\n", f1());
    printf("%d\n", f2());
    return 0;
}

works fine, giving on CYGWIN:

-2147483648
-2147483648

Looking at the assembler, we see that gcc completely optimizes t away: f1() simply stores the pre-calculated answer:

movl    $-2147483648, %eax

while f2() pushes 3333333.0 onto the floating-point stack, and then extracts the return value:

flds   LC0                 # LC0: 1246458708 (= 3333333.0) (--> 80 bits)
fstpl  -8(%ebp)            # save in d (64 bits)
movl   -8(%ebp), %eax      # return value (32 bits)

And the functions are also inlined (which seems to be the cause of some subtle strict-aliasing bugs) but that is not relevant here. (And this assembler is not that relevant, but it adds corroborative detail.)

Also note that taking addresses is obviously wrong (or right, if you are trying to illustrate undefined behavior). For example, just as we know this is wrong:

extern void foo(int *, double *);
union a_union t;
t.d = 3.0;
foo(&t.i, &t.d); // undefined behavior

we likewise know this is wrong:

extern void foo(int *, double *);
double d = 3.0;
foo(&((union a_union *)&d)->i, &d); // undefined behavior

For background discussion about this, see for example:

http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1422.pdf
http://gcc.gnu.org/ml/gcc/2010-01/msg00013.html
http://davmac.wordpress.com/2010/02/26/c99-revisited/
http://cellperformance.beyond3d.com/articles/2006/06/understanding-strict-aliasing.html
( = search page on Google then view cached page )

What is the strict aliasing rule?
C99 strict aliasing rules in C++ (GCC)

In the first link, draft minutes of an ISO meeting seven months ago, one participant notes in section 4.16:

Is there anybody that thinks the rules are clear enough? No one is really able to interpret them.

Other notes: My test was with gcc 4.3.4, with -O2; options -O2 and -O3 imply -fstrict-aliasing. The example from the GCC Manual assumes sizeof(double) >= sizeof(int); it doesn't matter if they are unequal.

Also, as noted by Mike Acton in the cellperformance link, -Wstrict-aliasing=2, but not =3, produces warning: dereferencing type-punned pointer might break strict-aliasing rules for the example here.

Blocky answered 25/5, 2010 at 16:6 Comment(7)
What optimization level did you compile at? The higher the optimization level, the more likely the compiler may be to rely on the strict aliasing rule. (As an aside, that quote from the committee meeting minutes could apply to many parts of the ISO standard :-P)Solanaceous
Small point: you should probably use int64_t to ensure that the integer element in the union is the same size as the double.Spathic
You might take a look at this example: https://mcmap.net/q/16417/-a-question-about-union-in-c-store-as-one-type-and-read-as-another-is-it-implementation-defined/…Solanaceous
Note that the union might have a stronger alignment requirement than each of its individual members.Bufford
John Regehr gives two interesting, short, examples of inconsistencies in GCC and Clang.Blocky
I believe the standard defines this as UB. Basing a judgment on the behaviour of compilers today is highly dangerous, as you never know what a compiler may do in future.Saavedra
@PaulR note that on any widely-used compiler today, for x86-64, int is always 32 bits and double is always 64 bits. long and pointer values change size depending on which compiler you use: windows: long is 32 bits and pointer is 64 bits, pretty much anything else: long is 64 bits and pointer is 64 bitsHie

The fact that GCC is warning about unions doesn't necessarily mean that unions don't currently work. But here's a slightly less simple example than yours:

#include <stdio.h>

struct B {
    int i1;
    int i2;
};

union A {
    struct B b;
    double d;
};

int main() {
    double d = 3.0;
    #ifdef USE_UNION
        ((union A*)&d)->b.i2 += 0x80000000;
    #else
        ((int*)&d)[1] += 0x80000000;
    #endif
    printf("%g\n", d);
}

Output:

$ gcc --version
gcc (GCC) 4.3.4 20090804 (release) 1
Copyright (C) 2008 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

$ gcc -oalias alias.c -O1 -std=c99 && ./alias
-3

$ gcc -oalias alias.c -O3 -std=c99 && ./alias
3

$ gcc -oalias alias.c -O1 -std=c99 -DUSE_UNION && ./alias
-3

$ gcc -oalias alias.c -O3 -std=c99 -DUSE_UNION && ./alias
-3

So on GCC 4.3.4, the union "saves the day" (assuming I want the output "-3"). It disables the optimisation that relies on strict aliasing and that results in the output "3" in the second case (only). With -Wall, USE_UNION also disables the type-pun warning.

I don't have gcc 4.4 to test, but please give this code a go. Your code in effect tests whether the memory for d is initialised before reading back through a union: mine tests whether it is modified.

Btw, the safe way to read half of a double as an int is:

int thing(void) {
    double d = 3;
    int i;
    memcpy(&i, &d, sizeof i); /* requires <string.h> */
    return i;
}

With optimisation on GCC, this results in:

    int thing() {
401130:       55                      push   %ebp
401131:       89 e5                   mov    %esp,%ebp
401133:       83 ec 10                sub    $0x10,%esp
        double d = 3;
401136:       d9 05 a8 20 40 00       flds   0x4020a8
40113c:       dd 5d f0                fstpl  -0x10(%ebp)
        int i;
        memcpy(&i, &d, sizeof i);
40113f:       8b 45 f0                mov    -0x10(%ebp),%eax
        return i;
    }
401142:       c9                      leave
401143:       c3                      ret

So there's no actual call to memcpy. If you aren't doing this, you deserve what you get if union casts stop working in GCC ;-)

Outofdate answered 2/6, 2010 at 15:17 Comment(4)
The whole point of the question is that the GCC Manual states unions will not always "save the day". But every test seems to show, like yours, that they in fact do work fine. Excluding, of course, the example flagged INVALID in the cellperformance link.Blocky
The manual (at least, the bit you quote) states that the code has undefined behavior. This does not rule out unions disabling strict aliasing assumptions. So it is possible that unions do always "save the day", and GCC includes the extra caveats merely to educate users, and prepare them for some future change. Which may or may not ever happen. I guess an expert on GCC optimisation could construct a failure case from first principles if there is one, otherwise it's random search.Outofdate
Agreed! I am not an expert on gcc, which is why I am asking SO.Blocky
@SteveJessop so is the recommendation to use memcpy instead of a union or a implicit cast?Keldon

Well it's a bit of necro-posting, but here is a horror story. I'm porting a program that was written with the assumption that the native byte order is big endian. Now I need it to work on little endian too. Unfortunately, I can't just use native byte order everywhere, as data could be accessed in many ways. For example, a 64-bit integer could be treated as two 32-bit integers or as 4 16-bit integers, or even as 16 4-bit integers. To make things worse, there is no way to figure out what exactly is stored in memory, because the software is an interpreter for some sort of byte code, and the data is formed by that byte code. For example, the byte code may contain instructions to write an array of 16-bit integers, and then access a pair of them as a 32-bit float. And there is no way to predict it or alter the byte code.

Therefore, I had to create a set of wrapper classes to work with values stored in the big endian order regardless of the native endianness. Worked perfectly in Visual Studio and in GCC on Linux with no optimizations. But with gcc -O2, hell broke loose. After a lot of debugging I figured out that the reason was here:

double D;
float F; 
Ul *pF=(Ul*)&F; // Ul is unsigned long
*pF=pop0->lu.r(); // r() returns Ul
D=(double)F; 

This code was used to convert a 32-bit representation of a float stored in a 32-bit integer to double. It seems that the compiler decided to do the assignment to *pF after the assignment to D - the result was that the first time the code was executed, the value of D was garbage, and the consequent values were "late" by 1 iteration.

Miraculously, there were no other problems at that point. So I decided to move on and test my new code on the original platform, HP-UX on a RISC processor with native big endian order. Now it broke again, this time in my new class:

typedef unsigned long long Ur; // 64-bit uint
typedef unsigned char Uc;
class BEDoubleRef {
        double *p;
public:
        inline BEDoubleRef(double *p): p(p) {}
        inline operator double() {
                Uc *pu = reinterpret_cast<Uc*>(p);
                Ur n = (pu[7] & 0xFFULL) | ((pu[6] & 0xFFULL) << 8)
                        | ((pu[5] & 0xFFULL) << 16) | ((pu[4] & 0xFFULL) << 24)
                        | ((pu[3] & 0xFFULL) << 32) | ((pu[2] & 0xFFULL) << 40)
                        | ((pu[1] & 0xFFULL) << 48) | ((pu[0] & 0xFFULL) << 56);
                return *reinterpret_cast<double*>(&n);
        }
        inline BEDoubleRef &operator=(const double &d) {
                Uc *pc = reinterpret_cast<Uc*>(p);
                const Ur *pu = reinterpret_cast<const Ur*>(&d);
                pc[0] = (*pu >> 56) & 0xFFu;
                pc[1] = (*pu >> 48) & 0xFFu;
                pc[2] = (*pu >> 40) & 0xFFu;
                pc[3] = (*pu >> 32) & 0xFFu;
                pc[4] = (*pu >> 24) & 0xFFu;
                pc[5] = (*pu >> 16) & 0xFFu;
                pc[6] = (*pu >> 8) & 0xFFu;
                pc[7] = *pu & 0xFFu;
                return *this;
        }
        inline BEDoubleRef &operator=(const BEDoubleRef &d) {
                *p = *d.p;
                return *this;
        }
};

For some really weird reason, the first assignment operator only correctly assigned bytes 1 through 7. Byte 0 always had some nonsense in it, which broke everything, as it holds the sign bit and part of the exponent.

I have tried to use unions as a workaround:

union {
    double d;
    Uc c[8];
} un;
Uc *pc = un.c;
const Ur *pu = reinterpret_cast<const Ur*>(&d);
pc[0] = (*pu >> 56) & 0xFFu;
pc[1] = (*pu >> 48) & 0xFFu;
pc[2] = (*pu >> 40) & 0xFFu;
pc[3] = (*pu >> 32) & 0xFFu;
pc[4] = (*pu >> 24) & 0xFFu;
pc[5] = (*pu >> 16) & 0xFFu;
pc[6] = (*pu >> 8) & 0xFFu;
pc[7] = *pu & 0xFFu;
*p = un.d;

but it didn't work either. In fact, it was a bit better - it only failed for negative numbers.

At this point I'm thinking about adding a simple test for native endianness, then doing everything via char* pointers with if (LITTLE_ENDIAN) checks around. To make things worse, the program makes heavy use of unions all around, which seems to work ok for now, but after all this mess I won't be surprised if it suddenly breaks for no apparent reason.

Finitude answered 27/10, 2011 at 12:23 Comment(6)
Teensy bit after the fact, but you could try compiling with -fno-strict-aliasing which will allow those pointer shenanigans at the (potential) cost of some performance.Kala
@user2472093, and have it bite me later on some other compiler? No, thank you. In fact, I think I already had it. Something about breaking on MSVC in release configuration. The worst thing is, it broke only on one particular value out of hundreds, and that particular value had only one bit wrong. But that bit was in the exponent part, so the result was completely different.Finitude
@user2472093: Any decent compiler should have an option equivalent to "-fno-strict-aliasing"; documenting a requirement to use such an option is probably safer than relying upon compilers to abide by any particular aliasing rules. It's ironic that increased aggressiveness on the part of compilers makes it necessary for programmers to actively block even forms of aliasing-based optimization which wouldn't have caused trouble [whether via compiler options, or by using memcpy all over the place], but that seems to be the state of affairs.Wildfowl
@supercat, but it's even safer to just get rid of incorrect aliasing and let the compiler optimize all those ugly char*s instead. That's what I did in the end and I don't remember any kind of troubles with this particular software since then.Finitude
@SergeyTachenov: It isn't. Since code which uses char* to get around more aggressive aliasing restrictions can often not be optimized to be as efficient as straightforward code which conformed to older aliasing requirements, some compilers have ceased regarding "char*" as a true "pointer to anything" type in an effort to regain efficiency. Since there's no telling what future compilers will do in that regard, I'd say it's simpler to write the most efficient code one can under the clearly and unambiguously-defined "-fno-strict-aliasing" rule [there may not be a formal name for it, but...Wildfowl
...since it would simply be "the C Stanard, with section XX deleted," any honest compiler writer is going to know how it should behave.Wildfowl

Your assertion that the following code is "wrong":

extern void foo(int *, double *);
union a_union t;
t.d = 3.0;
foo(&t.i, &t.d); // undefined behavior

... is wrong. Just taking the address of the two union members and passing them to an external function doesn't result in undefined behaviour; you only get that from dereferencing one of those pointers in an invalid way. For instance if the function foo returns immediately without dereferencing the pointers you passed it, then the behaviour is not undefined. With a strict reading of the C99 standard, there are even some cases where the pointers can be dereferenced without invoking undefined behaviour; for instance, it could read the value referenced by the second pointer, and then store a value through the first pointer, as long as they both point to a dynamically allocated object (i.e. one without a "declared type").

Goldiegoldilocks answered 11/7, 2010 at 13:25 Comment(16)
@davmac--You are right. I need to actually define a sample function. Perhaps void f(int *i, double *d){*i = 1; *d = 2;}? The two statements can be executed in either order, by strict-aliasing. But (I am guessing) if one added __attribute__((may_alias)) to the parameters, the statements would be executed as written.Blocky
@Joseph, strict-aliasing doesn't allow the stores to be executed in either order, because a store is allowed to change an object's effective type (C99 6.5p7). It does allow reads to be re-ordered with respect to stores that aren't allowed to alias them. A better sample would be int f(int *i, double *d) {*i = 1; *d = 2; return *i}Goldiegoldilocks
Ah, sorry, what I wrote doesn't quite make sense in this case seeing as we are talking about a union object with a declared type. However, basically you're storing to one union member and then to another; I think this is still allowed by the standard (even though you're doing it through pointers, not through the union type). Although admittedly the standard starts to make very little sense if you examine it too deeply. Recent C99 amendments (TC 3 I think) seem to allow storing to one union member and then reading another, but the value is then unspecified.Goldiegoldilocks
Technical Corrigendum 3 for C99 N1265 adds footnote 82: 'If the member used to access the contents of a union object is not the same as the member last used to store a value in the object, the appropriate part of the object representation of the value is reinterpreted as an object representation in the new type as described in 6.2.6 (a process sometimes called "type punning"). This might be a trap representation.'Blocky
And I believe f is "wrong', to use the words from TC3 6.5.2.3 Example 3, "because the union type is not visible within function f".Blocky
A long time later but: 6.5.2.3 example 3 is discussing access of "common initial sequence" members and isn't really relevant here. My reading of the standard in conjunction with GCC documentation and observed behavior of GCC 4.8.4 and LLVM 3.5.1 is that reads or writes to a union member via a pointer are only legal if the member is "currently active", i.e. was the last member stored to via the member access operator. Of course it could be that the compilers are wrong :)Goldiegoldilocks
@davmac: From what I can tell, gcc and clang deviate from the Standard in a number of cases that are clear and unambiguous, except when invoked via -fno-strict-aliasing. A DR included a proposal to amend the Standard so as to make effective types permanent; that proposal was explicitly rejected, but gcc and clang sometimes behave in ways that could not be justified any other way.Wildfowl
@Wildfowl I don't really believe that the standard is clear and unambiguous about strict aliasing at all (I think we've discussed this a number of times before so I'll say no more now). But certainly you can interpret "only legal if" in my comment above as "only legal according to these compiler vendors interpretation if", and that is how it was intended.Goldiegoldilocks
@davmac: Both gcc and clang suffer from "ABA" bugs in cases where code writes an object to change the active union member or Effective Type, performs some operation on the new type, and then writes an object to restore the old type. Such behavior would be justifiable if effective types were permanent, but not otherwise. While Standard doesn't actually specify that &union.activeMember yields a pointer that can be used to access that member, there'd be no point in allowing the & operator to be used with union members if it couldn't.Wildfowl
@Wildfowl I disagree. I don't think any of the compiler vendors are claiming that you can't use a pointer obtained via &union.activeMember, however there are questions around the lifetime of the object pointed to by such a pointer. There is no clear resolution to many such questions in the standard.Goldiegoldilocks
@davmac: They may not be claiming that such code isn't supported, but nether uses an aliasing model that can actually handle it reliably. Both attempt to omit stores which can't affect bit patterns stored in memory, and do so even in cases where such stores should would affect the Active Member of a union, or the Effective Type of dynamic storage.Wildfowl
@Wildfowl without a code example I can't really respond but perhaps you mean they re-order stores via inactive union members or lvalues of an incompatible type, which is consistent with their interpretation of the standard.Goldiegoldilocks
@davmac: See stackoverflow.com/questions/46592132/… or stackoverflow.com/questions/46205744/…Wildfowl
@Wildfowl I think you have two genuine compiler bugs there. I'm pretty sure they would be recognised as bugs if filed with the maintainers, so this doesn't really affect my answer, I think.Goldiegoldilocks
@davmac: I don't think gcc and clang can operate in a fashion which is both conforming and efficient without losing the ability to apply some genuinely-useful optimizations. I would think the best way to handle that would be to have a wider range of optimization modes, recognizing that the most aggressive mode would only be suitable for use with certain kinds of programs. That having said, having a compiler ensure that &unionObject.member will yield a pointer that will be usable until the next time something changes the union without going through that member should...Wildfowl
...be practical and efficient, without impairing useful optimizations. It would also make it easy to include a mode that would work correctly and efficiently with a huge amount of code that would otherwise require -fno-strict-aliasing. I don't think efficiently upholding the Standard without supporting such behaviors would be much harder than supporting such behaviors, but "almost" supporting the standard may be easier yet.Wildfowl

Aliasing occurs when the compiler has two different pointers to the same piece of memory. By typecasting a pointer, you're generating a new temporary pointer. If the optimizer reorders the assembly instructions for example, accessing the two pointers might give two totally different results - it might reorder a read before a write to the same address. This is why it is undefined behavior.

You are unlikely to see the problem in very simple test code, but it will appear when there's a lot going on.

I think the warning is to make clear that unions are not a special case, even though you might expect them to be.

See this Wikipedia article for more information about aliasing: http://en.wikipedia.org/wiki/Aliasing_(computing)#Conflicts_with_optimization

Wylie answered 25/5, 2010 at 16:33 Comment(3)
I am willing to accept a complicated example of the problem, on any commonly-used compiler.Blocky
I think the term "aliasing" usually recognizes that aliasing between two references has occurred within a particular context if (1) both references are used within that context, and (2) neither reference is visibly freshly derived from the other at the time of use. Even something like someAggregate.intMember = 23; would be UB under the Standard as written, but should not be considered "aliasing" since, between the time lvalue someAggregate.intMember is derived and the last time it is used, all operations on the storage will be performed using the latter lvalue.Wildfowl
If one accepts the footnote about the purpose of 6.5p7 being to tell compilers when they must recognize aliasing as suggesting that quality compilers shouldn't needlessly constrain the language in cases that don't involve actual aliasing, that would define how quality compilers should process someAggregate.intmember = 23;, eliminate the need for the Effective Type rule, and would be much better for programmers and compiler writers alike.Wildfowl

Have you seen this? What is the strict aliasing rule?

The link contains a secondary link to this article with gcc examples. http://cellperformance.beyond3d.com/articles/2006/06/understanding-strict-aliasing.html

Trying a union like this would be closer to the problem.

union a_union {
    int i;
    double *d;
};

That way you have 2 types, an int and a double* pointing to the same memory. In this case using the double (*(double*)&i) could cause the problem.

Marrymars answered 1/6, 2010 at 16:4 Comment(2)
1) The question made two references to Mike Acton's extensive and quite informative article in the cellperformace link. Note, however, one of the other links disagreed with him.Blocky
2) Paul R already noted that in the real world sizeof(double) is often larger than sizeof(int). But this irrelevant here, and the example anyway came from the GCC Manual.Blocky

Here is mine: I think this is a bug in all GCC v5.x and later

#include <iostream>
#include <complex>
#include <pmmintrin.h>

template <class Scalar_type, class Vector_type>
class simd {
 public:
  typedef Vector_type vector_type;
  typedef Scalar_type scalar_type;
  typedef union conv_t_union {
    Vector_type v;
    Scalar_type s[sizeof(Vector_type) / sizeof(Scalar_type)];
    conv_t_union(){};
  } conv_t;

  static inline constexpr int Nsimd(void) {
    return sizeof(Vector_type) / sizeof(Scalar_type);
  }

  Vector_type v;

  template <class functor>
  friend inline simd SimdApply(const functor &func, const simd &v) {
    simd ret;
    simd::conv_t conv;

    conv.v = v.v;
    for (int i = 0; i < simd::Nsimd(); i++) {
      conv.s[i] = func(conv.s[i]);
    }
    ret.v = conv.v;
    return ret;
  }

};

template <class scalar>
struct RealFunctor {
  scalar operator()(const scalar &a) const {
    return std::real(a);
  }
};

template <class S, class V>
inline simd<S, V> real(const simd<S, V> &r) {
  return SimdApply(RealFunctor<S>(), r);
}



typedef simd<std::complex<double>, __m128d> vcomplexd;

int main(int argc, char **argv)
{
  vcomplexd a,b;
  a.v=_mm_set_pd(2.0,1.0);
  b = real(a);

  vcomplexd::conv_t conv;
  conv.v = b.v;
  for(int i=0;i<vcomplexd::Nsimd();i++){
    std::cout << conv.s[i]<<" ";
  }
  std::cout << std::endl;
}

Should give

c010200:~ peterboyle$ g++-mp-5 Gcc-test.cc -std=c++11 
c010200:~ peterboyle$ ./a.out 
(1,0) 

But under -O3: I THINK THIS IS WRONG AND A COMPILER ERROR

c010200:~ peterboyle$ g++-mp-5 Gcc-test.cc -std=c++11 -O3 
c010200:~ peterboyle$ ./a.out 
(0,0) 

Under g++4.9

c010200:~ peterboyle$ g++-4.9 Gcc-test.cc -std=c++11 -O3 
c010200:~ peterboyle$ ./a.out 
(1,0) 

Under llvm xcode

c010200:~ peterboyle$ g++ Gcc-test.cc -std=c++11 -O3 
c010200:~ peterboyle$ ./a.out 
(1,0) 
Andri answered 6/5, 2017 at 13:15 Comment(1)
I think your code avoids UB (in GNU C++, where union type-punning is defined, like in ISO C99/C11 but not ISO C++. Note that this is a C question). Anyway, looks like this bug was fixed in gcc6.3, but is still present gcc6.2: godbolt.org/g/M2mpSr. Note that gcc6.3 compiles it with .LC0: holding an FP constant, but gcc6.2 uses vxorpd to create 0.0 in a register. There is a warning: 51: ignoring attributes on template argument '__m128d {aka __vector(2) double}' (the definition of vcomplexd), but I don't know if that means the "vector" type is just double...Scary

I don't really understand your problem. The compiler did exactly what it was supposed to do in your example. The union conversion is what you did in f1. In f2 it is a normal pointer cast; that you cast through a union type is irrelevant, it is still a pointer cast.

Saadi answered 1/6, 2010 at 17:12 Comment(3)
2) The link to AndreyT's answer seems to imply gcc is right, and the rest of the world is wrong. But that is not the question. I am looking for horror stories. Or even a tiny example.Blocky
Ok. Did you follow up on the davmac.wordpress.com/2010/02/26/c99-revisited blog entry? Especially the davmac.wordpress.com/2009/10/25/… where he found an aliasing bug in MySQL. This might be what you're looking for.Coster
@tristopia: My question is very narrow, about union punning through a pointer. I've just asked a more general question in stackoverflow.com/questions/2958633/….Blocky
