Why don't C++ compilers do better constant folding?

I'm investigating ways to speed up a large section of C++ code which uses automatic derivatives for computing Jacobians. This involves doing some amount of work in the actual residuals, but the majority of the work (based on profiled execution time) is in calculating the Jacobians.

This surprised me, since most of the Jacobians are propagated forward from 0s and 1s, so the amount of work should be 2-4x that of the function, not 10-12x. To model what a large amount of the Jacobian work is like, I made a super minimal example with just a dot product (instead of the sin, cos, sqrt and more that would appear in a real situation) that the compiler should be able to optimize to a single return value:

#include <Eigen/Core>
#include <Eigen/Geometry>

using Array12d = Eigen::Matrix<double,12,1>;

double testReturnFirstDot(const Array12d& b)
{
    Array12d a;
    a.array() = 0.;
    a(0) = 1.;       // a = (1, 0, 0, ..., 0)
    return a.dot(b); // should fold to b(0)
}

Which should be the same as

double testReturnFirst(const Array12d& b)
{
    return b(0);
}

I was disappointed to find that, without fast-math enabled, neither GCC 8.2, Clang 6 nor MSVC 19 was able to make any optimizations at all over the naive dot product with a matrix full of 0s. Even with fast-math (https://godbolt.org/z/GvPXFy) the optimizations in GCC and Clang are very poor (they still involve multiplications and additions), and MSVC does no optimizations at all.

I don't have a background in compilers, but is there a reason for this? I'm fairly sure that in a large proportion of scientific computations being able to do better constant propagation/folding would make more optimizations apparent, even if the constant-fold itself didn't result in a speedup.

While I'm interested in explanations for why this isn't done on the compiler side, I'm also interested in what I can do on the practical side to make my own code faster when facing these kinds of patterns.

Fin answered 31/8, 2018 at 10:29 Comment(16)
Other than b you don't have any constants, just locals and the code to initialise them, which may have visible side-effects. Also, with floating point, the compiler would have to fully emulate the production environment to guarantee the same results.Sasha
Floating point numbers are not real numbers; they have rigorous correctness requirements which are violated by obvious optimisations. E.g. the compiler cannot rewrite (x / 3.0) * 3.0 as x, because rounding behaviour is fully specified, so you cannot simply cancel the 3.Caravaggio
The answer depends on the implementation of dot. Probably, it is not just a for loop with accumulation, but involves rescaling. No wonder that compilers can't optimize it.Elevated
So as @Evgeny says, it depends on the implementation. To optimise it, the compiler would have to prove that the implementation of dot isn't going to introduce rounding, because to comply with the standard, the rounding which you might mathematically think of as an "error" is in fact required, so that calculations are reproducible and expressions have an unambiguous meaning.Caravaggio
The point of -ffast-math is to say "it's not necessary to comply with the standard". The MSVC equivalent of fast-math is /fp:fast; you may find that it does some optimisation if you specify that.Caravaggio
With floating point, if you are sure of your expected results, you should create a specific condition returning a predefined result. The compiler won't optimize it for you.Hermetic
To give a concrete example, ((1.0 * 1343.0 + 100)/1343.0 - 100/1343.0) == 0.99999999999999989, not 1.0Caravaggio
The problem is with Array12d::dot(). You'll have to provide code for that to get help optimizing it.Enhanced
@Caravaggio Am I missing something here? The question has not been edited and already talks about how using -ffast-math still yields unsatisfactory results. Why are you elaborating so much on it?Tyranny
Might some constexpr functions help?Curnin
Once you added -ffast-math the remaining "problem" is explicit vectorization, see my answer.Dewittdewlap
You don't say what compiler switches you are using (except fastmath). Can we presume you are using -O3 and the MS equivalent?Georg
You can see the options in the godbolt. -O3 for gcc/clang, /Ox for MSVC.Fin
@Caravaggio You have to be careful with that assumption about rounding. As I understand it, the compilers are allowed to use any bit precision they like for constants. If they assume infinite precision, they can use rational arithmetic and will constant fold. Many people are bitten by math that works differently under different precisions. They are wrong of course: more precision is always better.Ung
@Dewittdewlap +1 vectorization is the real big gain here on modern cpus.Wegner
The question is about constant folding. Is analyzing array content a part of constant folding? That would surprise me. Here we have a field that could be anything; this is not a constant value known at compile time.Needham

This is because Eigen explicitly vectorizes your code, as 3 vmulpd, 2 vaddpd and 1 horizontal reduction within the remaining 4-component registers (this assumes AVX; with SSE only you'll get 6 mulpd and 5 addpd). With -ffast-math, GCC and clang are allowed to remove the last 2 vmulpd and vaddpd (and this is what they do), but they cannot really replace the remaining vmulpd and the horizontal reduction that have been explicitly generated by Eigen.

So what if you disable Eigen's explicit vectorization by defining EIGEN_DONT_VECTORIZE? Then you get what you expected (https://godbolt.org/z/UQsoeH), but other pieces of code might become much slower.
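
For reference, a minimal sketch of that global switch (EIGEN_DONT_VECTORIZE is a documented Eigen macro; it must be defined before the first Eigen header is included):

// Disable Eigen's explicit (packet) vectorization for this translation unit.
// Must come before any Eigen header.
#define EIGEN_DONT_VECTORIZE
#include <Eigen/Core>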

If you want to locally disable explicit vectorization and are not afraid of messing with Eigen's internals, you can introduce a DontVectorize option to Matrix and disable vectorization by specializing traits<> for this Matrix type:

// Custom bit passed through Matrix's Options template parameter.
static const int DontVectorize = 0x80000000;

namespace Eigen {
namespace internal {

template<typename _Scalar, int _Rows, int _Cols, int _MaxRows, int _MaxCols>
struct traits<Matrix<_Scalar, _Rows, _Cols, DontVectorize, _MaxRows, _MaxCols> >
: traits<Matrix<_Scalar, _Rows, _Cols> >
{
  typedef traits<Matrix<_Scalar, _Rows, _Cols> > Base;
  enum {
    EvaluatorFlags = Base::EvaluatorFlags & ~PacketAccessBit // strip packet access: force scalar evaluation
  };
};

}
}

using ArrayS12d = Eigen::Matrix<double,12,1,DontVectorize>;
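
With that alias in place, the question's function can be written against the unvectorized type; a sketch (testReturnFirstDotScalar is my name for it):

double testReturnFirstDotScalar(const ArrayS12d& b)
{
    ArrayS12d a;
    a.array() = 0.;
    a(0) = 1.;
    return a.dot(b); // evaluated as scalar code, which the compiler can fold
}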

Full example here: https://godbolt.org/z/bOEyzv

Dewittdewlap answered 31/8, 2018 at 11:37 Comment(6)
Why can't the compiler optimize the remaining vector instructions? Is it a QoI issue or is there a technical reason?Klaus
@Klaus Presumably because nobody sat down to write detailed enough rules/model by which the compiler would track constant propagation through vector instructions. Some rules (such as multiplying by or adding 0.0) have evidently been included already, but it's probably difficult to make them as encompassing as the scalar ones.Tyranny
That would be technically possible by "un-vectorizing" the code, but this would go against what the user explicitly asked for, so it is debatable whether it's reasonable or not.Dewittdewlap
You are asking an awful lot of the compiler... for it to do what you want would require it to really develop some machine insight into the particulars of the problem. It's not impossible, but it's not the kind of thing compiler writers focus on. To us humans it is obvious that a dot product in N dimensions, where all but the first element of one vector are zeros, is a trivial multiplication, but that is not the compiler's focus. Further, as noted above, to maintain consistency floating point must do what it does. Python, for one, uses many 30-year-old Fortran libraries for this reason.Coeternity
Can you give any insight into why MSVC is not able to optimize this code? Perhaps there's a workaround?Swetlana
I mean, since MSVC is able to constant fold doubles, etc.Swetlana

I was disappointed to find that, without fast-math enabled, neither GCC 8.2, Clang 6 nor MSVC 19 was able to make any optimizations at all over the naive dot product with a matrix full of 0s.

They have no other choice, unfortunately. Since IEEE floats have signed zeros, adding 0.0 is not an identity operation:

-0.0 + 0.0 = 0.0 // Not -0.0!

Similarly, multiplying by zero does not always yield zero:

0.0 * Infinity = NaN // Not 0.0!

So the compilers simply cannot perform these constant folds in the dot product while retaining IEEE float compliance - for all they know, your input might contain signed zeros and/or infinities.
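
A quick check you can run (a minimal sketch):

#include <cstdio>
#include <limits>

int main()
{
    std::printf("%g\n", -0.0 + 0.0);                                  // prints 0, not -0
    std::printf("%g\n", 0.0 * std::numeric_limits<double>::infinity()); // prints nan, not 0
}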

You will have to use -ffast-math to get these folds, but that may have undesired consequences. You can get more fine-grained control with specific flags (from http://gcc.gnu.org/wiki/FloatingPointMath). According to the above explanation, adding the following two flags should allow the constant folding:
-ffinite-math-only, -fno-signed-zeros

Indeed, you get the same assembly as with -ffast-math this way: https://godbolt.org/z/vGULLA. You only give up the signed zeros (probably irrelevant), NaNs and the infinities. Presumably, if you were to still produce them in your code, you would get undefined behavior, so weigh your options.
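
If you only need the relaxed semantics in a few hot functions, GCC also accepts a per-function optimize attribute. This is a hedged sketch (using the question's Array12d): GCC documents this attribute as intended for debugging rather than production tuning, and other compilers don't support it, so test whether it actually enables the folds you want.

// GCC-only sketch: request the two relaxations just for this function.
__attribute__((optimize("finite-math-only", "no-signed-zeros")))
double firstDot(const Array12d& a, const Array12d& b)
{
    return a.dot(b);
}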


As for why your example is not optimized better even with -ffast-math: that is on Eigen. Presumably it vectorizes its matrix operations explicitly, which makes them much harder for compilers to see through. A simple loop is properly optimized with these options: https://godbolt.org/z/OppEhY
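
For concreteness, the kind of simple loop meant here (my reconstruction; the godbolt link has the actual code). With -ffinite-math-only -fno-signed-zeros, clang folds this down to a single load of b[0] (per the comment below, gcc may keep the loop):

double testReturnFirstLoop(const double (&b)[12])
{
    double a[12] = {1.0}; // remaining elements are value-initialized to 0.0
    double sum = 0.0;
    for (int i = 0; i < 12; ++i)
        sum += a[i] * b[i];
    return sum;
}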

Tyranny answered 31/8, 2018 at 11:23 Comment(1)
Only clang optimizes a for loop; gcc doesn't.Elevated

One way to force a compiler to optimize multiplications by 0s and 1s is to manually unroll the loop. For simplicity, let's use

#include <array>
#include <cstddef>
constexpr std::size_t n = 12;
using Array = std::array<double, n>;

Then we can implement a simple dot function using fold expressions (or recursion if they are not available):

#include <utility> // std::index_sequence, std::make_index_sequence
template<std::size_t... is>
double dot(const Array& x, const Array& y, std::index_sequence<is...>)
{
    return ((x[is] * y[is]) + ...);
}

double dot(const Array& x, const Array& y)
{
    return dot(x, y, std::make_index_sequence<n>{});
}

Now let's take a look at your function

double test(const Array& b)
{
    const Array a{1};    // = {1, 0, ...}
    return dot(a, b);
}

With -ffast-math gcc 8.2 produces:

test(std::array<double, 12ul> const&):
  movsd xmm0, QWORD PTR [rdi]
  ret

clang 6.0.0 goes along the same lines:

test(std::array<double, 12ul> const&): # @test(std::array<double, 12ul> const&)
  movsd xmm0, qword ptr [rdi] # xmm0 = mem[0],zero
  ret

For example, for

double test(const Array& b)
{
    const Array a{1, 1};    // = {1, 1, 0...}
    return dot(a, b);
}

we get

test(std::array<double, 12ul> const&):
  movsd xmm0, QWORD PTR [rdi]
  addsd xmm0, QWORD PTR [rdi+8]
  ret

Addendum: Clang unrolls a for (std::size_t i = 0; i < n; ++i) ... loop without all these fold-expression tricks; gcc doesn't and needs some help.
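
For reference, that plain-loop variant as a sketch (same Array alias and n as above):

double dot_loop(const Array& x, const Array& y)
{
    double r = 0.0;
    for (std::size_t i = 0; i < n; ++i) // clang fully unrolls and folds this; gcc keeps the loop
        r += x[i] * y[i];
    return r;
}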

Elevated answered 31/8, 2018 at 11:0 Comment(0)
