Eigen Linear Solver for very small square matrix
I am using Eigen in a C++ program to solve linear equations for very small square matrices (4x4).

My test code looks like this:

template<template <typename MatrixType> typename EigenSolver>
Vector4d solve(){
   //Solve Ax = b, where A is a real symmetric and positive semidefinite matrix

   ... // Construct the 4x4 square matrix A and the 4x1 vector b

   EigenSolver<Matrix4d> solver(A);
   auto x = solver.solve(b);

   ... // Compute the relative error for validation
}

I tested several Eigen solvers, including:

  1. FullPivLU
  2. PartialPivLU
  3. HouseholderQR
  4. ColPivHouseholderQR
  5. FullPivHouseholderQR
  6. CompleteOrthogonalDecomposition
  7. LDLT
  8. Direct Inverse

Direct Inverse is:

template<typename MatrixType>
struct InverseSolve
{
private:
    MatrixType inv;
public:
    // Precompute the inverse once; for fixed sizes up to 4x4, Eigen computes
    // it directly from closed-form cofactor formulas.
    InverseSolve(const MatrixType &matrix) : inv(matrix.inverse()) {
    }
    template<typename VectorType>
    auto solve(const VectorType &b) {
        return inv * b;  // solving reduces to one matrix-vector product
    }
};
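
As an illustration (the actual call sites are omitted above, so this is only a sketch, and some solver templates may carry extra defaulted template parameters depending on the Eigen version), solve() is instantiated once per solver, e.g.:

solve<Eigen::FullPivLU>();      // pivoted LU decomposition
solve<Eigen::HouseholderQR>();  // Householder QR decomposition
solve<InverseSolve>();          // the direct-inverse wrapper above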

I found that the fastest method is DirectInverse. Even when I linked Eigen with MKL, the result did not change.

These are the test results:

  1. FullPivLU : 477 ms
  2. PartialPivLU : 468 ms
  3. HouseholderQR : 849 ms
  4. ColPivHouseholderQR : 766 ms
  5. FullPivHouseholderQR : 857 ms
  6. CompleteOrthogonalDecomposition : 832 ms
  7. LDLT : 477 ms
  8. Direct Inverse : 88 ms

All timings are for 1,000,000 matrices filled with random doubles drawn from the uniform distribution [0,100]. I first construct the upper triangle and then copy it to the lower triangle, as sketched below.
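
A minimal sketch of that construction, assuming a std::mt19937 engine (makeRandomSymmetric is just an illustrative name; the actual benchmark loop is omitted here):

#include <Eigen/Dense>
#include <random>

Eigen::Matrix4d makeRandomSymmetric(std::mt19937 &gen)
{
    std::uniform_real_distribution<double> dist(0.0, 100.0);
    Eigen::Matrix4d A;
    for (int i = 0; i < 4; ++i)
        for (int j = i; j < 4; ++j)
            A(i, j) = A(j, i) = dist(gen);  // fill the upper triangle, mirror to the lower
    return A;
}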

The only problem with DirectInverse is that its relative error is slightly larger than that of the other solvers, but it is acceptable.

Is there any faster or more elegant solution for my program? Is DirectInverse the fastest solution?

DirectInverse does not use the symmetry information, so why is it far faster than LDLT?

Pallaten answered 18/6, 2018 at 12:19 Comment(3)
Do you compute several solves per matrix or just one? Please include your benchmark results as text in the question, not as an image link. For fixed-size matrices up to 4x4, inverse() is computed directly using cofactors, which is very fast and requires only one division, but is expected to be slightly less accurate, in general. The solve as you implemented it will only require a very fast matrix-vector product. You can increase the accuracy by doing a refinement step: x=inv*b; x+=inv*(b-A*x);. MKL will generally not help for small fixed-size inputs. – Ezraezri
Thanks for your answer. I will edit my question to include the benchmark as text. In my program, every matrix is computed and solved only once. Your advice is very concrete and sounds like a good idea. My concern is that all my matrices are symmetric and inverse() does not use that. Should I reimplement inverse() for my program? – Pallaten
Small addendum (independent of the main reason): I suggest assigning Vector4d x = solver.solve(b); instead of auto x = .... Otherwise, x might be evaluated multiple times. – Ezraezri

Despite the frequent advice to never explicitly compute an inverse when you only want to solve a linear system, for very small matrices this can actually be beneficial, since there are closed-form solutions using cofactors.

All other alternatives you tested will be slower, since they do pivoting (which implies branching), even for small fixed-size matrices. Also, most of them result in more divisions and do not vectorize as well as the direct computation.

To increase the accuracy (this technique can actually be used independently of the solver, if required), you can refine an initial solution by solving the system again with the residual:

Eigen::Vector4d solveDirect(const Eigen::Matrix4d& A, const Eigen::Vector4d& b)
{
    Eigen::Matrix4d inv = A.inverse();
    Eigen::Vector4d x = inv * b;
    x += inv*(b-A*x);  // one step of iterative refinement using the residual
    return x;
}

I don't think Eigen directly provides a way to exploit the symmetry of A here (for the directly computed inverse). You can try hinting at it by explicitly copying a selfadjoint view of A into a temporary and hoping that the compiler is smart enough to find common subexpressions:

Eigen::Matrix4d tmp = A.selfadjointView<Eigen::Upper>();
Eigen::Matrix4d inv = tmp.inverse();

To reduce the number of divisions, you can also compile with -freciprocal-math (on gcc or clang); this will slightly reduce accuracy, of course.

If this is really performance critical, try implementing a hand-tuned inverse_4x4_symmetric method, as sketched below. Exploiting the symmetry of inv * b is unlikely to be beneficial for such small matrices.
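
For instance, here is a minimal sketch of one possible hand-tuned implementation based on 2x2 block (Schur-complement) inversion; this is just one way to exploit the symmetry, and it assumes the leading 2x2 block and the Schur complement are invertible (i.e. no pivoting):

#include <Eigen/Dense>

Eigen::Matrix4d inverse_4x4_symmetric(const Eigen::Matrix4d &A)
{
    // Partition A = [P Q; Q^T S] into 2x2 blocks, with P and S symmetric.
    Eigen::Matrix2d P = A.topLeftCorner<2, 2>();
    Eigen::Matrix2d Q = A.topRightCorner<2, 2>();
    Eigen::Matrix2d S = A.bottomRightCorner<2, 2>();

    Eigen::Matrix2d Pi  = P.inverse();                         // closed-form 2x2 inverse
    Eigen::Matrix2d PiQ = Pi * Q;
    Eigen::Matrix2d M   = (S - Q.transpose() * PiQ).inverse(); // inverse of the Schur complement

    // Assemble the (symmetric) inverse; the lower-left block is just the
    // transpose of the upper-right one, so it is mirrored instead of recomputed.
    Eigen::Matrix4d inv;
    inv.topLeftCorner<2, 2>()     = Pi + PiQ * M * PiQ.transpose();
    inv.topRightCorner<2, 2>()    = -PiQ * M;
    inv.bottomLeftCorner<2, 2>()  = inv.topRightCorner<2, 2>().transpose();
    inv.bottomRightCorner<2, 2>() = M;
    return inv;
}

Whether this actually beats the cofactor-based inverse() has to be measured, of course.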

Ezraezri answered 18/6, 2018 at 14:58 Comment(2)
In fact, the inverse of a symmetric matrix is also symmetric, and selfadjointView<Eigen::Upper> also works for inv. I will test the solution and compare it with a hand-tuned solution. Thanks for your help again. – Pallaten
inv.selfadjointView<Eigen::Upper>() * b certainly works, but it will very likely run slower. Using the symmetry for matrix-vector products mostly makes sense if it saves cache space. – Ezraezri
