This question demonstrates a very interesting phenomenon: denormalized floats can slow code down by more than an order of magnitude.
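If you want to reproduce the effect locally, here is a minimal sketch of my own (not the benchmark from the linked question), assuming IEEE-754 floats and a typical x86 CPU. It times the same loop twice, once over normal inputs and once over subnormal ones; compile without -ffast-math, since that flag usually enables flush-to-zero and hides the effect:

#include <stdio.h>
#include <time.h>

#define N 4096
#define ITERS 20000

static float buf[N];

/* Time ITERS passes of a multiply-accumulate over buf, filled with `init`. */
static double time_sum(float init) {
    for (int i = 0; i < N; ++i)
        buf[i] = init;
    volatile float sink = 0.0f;        /* defeat dead-code elimination */
    clock_t t0 = clock();
    for (int k = 0; k < ITERS; ++k) {
        float s = 0.0f;
        for (int i = 0; i < N; ++i)
            s += buf[i] * 0.5f;        /* slow path when operands are subnormal */
        sink += s;
    }
    (void)sink;
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void) {
    printf("normal inputs:    %.3f s\n", time_sum(1.0e-2f));   /* normal floats */
    printf("subnormal inputs: %.3f s\n", time_sum(1.0e-40f));  /* below FLT_MIN: subnormal */
    return 0;
}

On hardware that takes a microcode assist for subnormal operands, the second line should come out many times slower than the first.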
The behavior is well explained in the accepted answer. However, there is one comment, currently with 153 upvotes, that I cannot find a satisfactory answer to:
Why isn't the compiler just dropping the +/- 0 in this case?!? – Michael Dorgan
Side note: I have the impression that 0f is/must be exactly representable (furthermore, its binary representation must be all zeroes), but I can't find such a claim in the C11 standard. A quote proving this, or an argument disproving it, would be most welcome. Regardless, Michael's question is the main question here.
The closest I have found is this sentence (C11, 5.2.4.2.2):

An implementation may give zero and values that are not floating-point numbers (such as infinities and NaNs) a sign or may leave them unsigned.
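The representation can at least be checked empirically. On an IEEE-754 implementation, the following sketch (a demonstration, not a proof from the standard) prints an all-zero bit pattern for +0.0f and a lone sign bit for -0.0f:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Copy the object representation of a float into an integer for printing. */
static uint32_t bits_of(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);   /* well-defined way to inspect the bytes */
    return u;
}

int main(void) {
    printf("+0.0f -> 0x%08X\n", bits_of(+0.0f));  /* 0x00000000 */
    printf("-0.0f -> 0x%08X\n", bits_of(-0.0f));  /* 0x80000000: sign bit only */
    return 0;
}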
One comment under the question clarifies what exactly is denormalized:

It's not the +0.f or -0.f that are denormalized - it's the value in the array that zero is being added to that is denormalized (and causing the slowdown).
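To make that point concrete, here is a small sketch (the value 1.0e-40f is my own stand-in, not the array contents from the question) showing that the array element is the subnormal one, and that adding +0.f keeps it subnormal:

#include <stdio.h>
#include <math.h>

int main(void) {
    float y = 1.0e-40f;   /* below FLT_MIN, hence subnormal */
    float z = y + 0.0f;   /* the +0.f itself is an ordinary zero */
    printf("y is subnormal: %d\n", fpclassify(y) == FP_SUBNORMAL);  /* prints 1 */
    printf("z is subnormal: %d\n", fpclassify(z) == FP_SUBNORMAL);  /* prints 1 */
    return 0;
}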
Another comment mentions a compiler option:

The /fp:fast option might cause the compiler to optimize +0.f - I don't know.
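As an aside on that comment: independent of whether /fp:fast drops the +0.f, on x86 the slowdown itself can be sidestepped in hardware by setting the flush-to-zero and denormals-are-zero bits in the MXCSR register. A sketch follows; note that this deliberately changes numerical results for subnormal values, so it is a trade-off, not a pure optimization:

#include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE */
#include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE (requires SSE3) */

/* Treat subnormals as zero in both directions for subsequent SSE math. */
void enable_ftz_daz(void) {
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);         /* subnormal results -> 0 */
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON); /* subnormal inputs  -> 0 */
}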