After searching for a performance bug for a long time, I read about denormal floating-point values.
Apparently denormal floating-point values can be a major performance concern, as illustrated in this question: Why does changing 0.1f to 0 slow down performance by 10x?
I have an Intel Core 2 Duo and I am compiling with gcc, using `-O2`.
So what do I do? Can I somehow instruct g++ to avoid denormal values?
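From what I have read so far, the usual approach seems to be setting the FTZ (flush-to-zero) and DAZ (denormals-are-zero) bits in the SSE control register via intrinsics, but I am not sure whether that is the right way to go, or whether a compiler flag already takes care of it. Here is a minimal sketch of what I had in mind; it assumes the floating-point math actually goes through SSE (e.g. `-mfpmath=sse` on 32-bit x86), and it only affects the calling thread:

```cpp
#include <xmmintrin.h>  // _MM_SET_FLUSH_ZERO_MODE, _MM_FLUSH_ZERO_ON
#include <pmmintrin.h>  // _MM_SET_DENORMALS_ZERO_MODE, _MM_DENORMALS_ZERO_ON

int main()
{
    // FTZ: results that would be denormal are flushed to zero.
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    // DAZ: denormal inputs are treated as zero.
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);

    // ... rest of the program now runs with denormals disabled (this thread only) ...
    return 0;
}
```

I have also seen it mentioned that `-ffast-math` sets these bits at program startup, but I would rather not pull in all of its other effects if I can avoid it.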
If not, can I somehow test whether a `float` is denormal?
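For the detection part, the best I have come up with is a classification check — a minimal sketch, assuming C++11's `std::fpclassify` from `<cmath>` (or the C99 `fpclassify` macro from `<math.h>`); `is_denormal` is just a hypothetical helper name:

```cpp
#include <cmath>    // std::fpclassify, FP_SUBNORMAL
#include <cstdio>

// Hypothetical helper: true if x is a denormal (subnormal) value.
bool is_denormal(float x)
{
    return std::fpclassify(x) == FP_SUBNORMAL;
}

int main()
{
    std::printf("%d\n", is_denormal(1e-40f)); // 1: below FLT_MIN, so subnormal
    std::printf("%d\n", is_denormal(1.0f));   // 0: normal value
    return 0;
}
```

Is that the reliable way to test for denormals, or is there something more appropriate for tracking down where they are produced?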