Eigen - Re-orthogonalization of Rotation Matrix

After multiplying a lot of rotation matrices, the end result might no longer be a valid rotation matrix, due to rounding issues (it becomes de-orthogonalized).

One way to re-orthogonalize is to follow these steps:

  1. Convert the rotation matrix to an axis-angle representation (link)
  2. Convert back the axis-angle to a rotation matrix (link)

Is there something in the Eigen library that does the same thing while hiding all the details? Or is there a better recipe?

This procedure has to be handled with care due to special singularity cases, so if Eigen provides a better tool for this, it would be great.

Nogas answered 15/4, 2014 at 10:15 Comment(0)

I don't use Eigen and didn't bother to look up the API, but here is a simple, computationally cheap, and stable procedure to re-orthogonalize the rotation matrix. This orthogonalization procedure is taken from Direction Cosine Matrix IMU: Theory by William Premerlani and Paul Bizard; equations 19-21.

Let x, y and z be the row vectors of the (slightly messed-up) rotation matrix, and let error=dot(x,y), where dot() is the dot product. If the matrix were orthogonal, the dot product of x and y, that is, the error, would be zero.

The error is spread across x and y equally: x_ort=x-(error/2)*y and y_ort=y-(error/2)*x. The third row is z_ort=cross(x_ort, y_ort), which is by definition orthogonal to x_ort and y_ort.

Now, you still need to normalize x_ort, y_ort and z_ort as these vectors are supposed to be unit vectors.

x_new = 0.5*(3-dot(x_ort,x_ort))*x_ort
y_new = 0.5*(3-dot(y_ort,y_ort))*y_ort
z_new = 0.5*(3-dot(z_ort,z_ort))*z_ort

That's all; we are done.

It should be pretty easy to implement this with the API provided by Eigen. You can easily come up with other orthogonalization procedures, but I don't think it will make a noticeable difference in practice. I used the above procedure in my motion tracking application and it worked beautifully; it's both stable and fast.
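
For reference, a minimal sketch of the above procedure in Eigen (the function name reorthogonalize is my own choice):

#include <Eigen/Dense>

// Re-orthogonalize a slightly drifted rotation matrix in place,
// following Premerlani & Bizard, equations 19-21.
void reorthogonalize(Eigen::Matrix3d &R) {
  Eigen::Vector3d x = R.row(0).transpose();
  Eigen::Vector3d y = R.row(1).transpose();

  double error = x.dot(y);  // zero for a perfectly orthogonal matrix

  // Spread the error equally across x and y; rebuild z from their cross product.
  Eigen::Vector3d x_ort = x - (error / 2) * y;
  Eigen::Vector3d y_ort = y - (error / 2) * x;
  Eigen::Vector3d z_ort = x_ort.cross(y_ort);

  // First-order renormalization: v *= 0.5 * (3 - |v|^2).
  R.row(0) = (0.5 * (3 - x_ort.squaredNorm()) * x_ort).transpose();
  R.row(1) = (0.5 * (3 - y_ort.squaredNorm()) * y_ort).transpose();
  R.row(2) = (0.5 * (3 - z_ort.squaredNorm()) * z_ort).transpose();
}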

Borgerhout answered 15/4, 2014 at 11:17 Comment(4)
It should be noted that the normalization procedure here uses a Taylor expansion to approximate vector magnitude. If high precision is needed or the matrix is far from orthonormal then another method should be used.Purge
@Purge I already did: See my comment from 2 years ago.Borgerhout
Anonymous downvotes aren't helping anybody. What's wrong with the answer?Borgerhout
Your code snippet has some computational errors. It should be x_new = 0.5*(3-dot(x_ort,x_ort))*x_ort. What your code is doing is x_new = 1/(2*(3-dot(x_ort,x_ort))*x_ort). Note how the parens are rearranged due to operator priority. I am guessing people copy-pasted your code and it didn't work for them and they didn't debug.Papist

You can use a QR decomposition to systematically re-orthogonalize, replacing the original matrix with the Q factor. With library routines you have to check that the diagonal entries of R are positive (close to 1 if the original matrix was close to orthogonal) and, if necessary, correct by negating the corresponding column of Q.
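
In Eigen this could be sketched as follows (a sketch of the idea, not a drop-in library routine):

#include <Eigen/Dense>

// Re-orthogonalize by replacing M with its Q factor, negating columns
// of Q wherever the corresponding diagonal entry of R is negative.
Eigen::Matrix3d orthogonalizeQR(const Eigen::Matrix3d &M) {
  Eigen::HouseholderQR<Eigen::Matrix3d> qr(M);
  Eigen::Matrix3d Q = qr.householderQ();
  Eigen::Matrix3d R = qr.matrixQR().triangularView<Eigen::Upper>();

  // For a matrix close to a rotation, diag(R) should be close to +1.
  for (int i = 0; i < 3; ++i)
    if (R(i, i) < 0) Q.col(i) = -Q.col(i);
  return Q;
}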

The closest rotation matrix Q to a given matrix is obtained from the polar or QP decomposition, where P is a positive semi-definite symmetric matrix. The QP decomposition can be computed iteratively or using an SVD: if the latter has the factorization USV', then Q=UV'.

Villalobos answered 15/4, 2014 at 12:26 Comment(5)
It definitely gives better results than the procedure in my answer, but at the price of significantly higher computational cost.Borgerhout
Yes, one should do what is easiest. The question did not give a dimension, so it could be 3D as in your answer or something different. [[Easy in 3D is also the QR-decomposition using Givens rotations (which coincidentally use the Euler angle representation of the rotation matrix). Edit: I see that this was mentioned in the question, so 3D is it.]]Villalobos
In any case, at least I upvoted your answer. :) (Although the OP should have done that already in my opinion.)Borgerhout
Hi Lutzl, it's easy to do a QR decomposition and get the Q matrix in Eigen: rotation.householderQr().householderQ(). I'm not familiar with the Householder method, however. Is that good enough? Also, could you please elaborate in your answer on the "check and correct" and "negate the corresponding column" extra steps? Any references or code samples would be great as well!Tomasatomasina
Test it by applying the QR decomposition to a unit matrix I. It is possible that the result is [Q, R] = [-I, -I]. This happens because the Householder reflections are chosen to be the most stable, i.e., avoiding divisions by zero or near zero.Villalobos

Singular Value Decomposition should be very robust. To quote from the reference:

Let M=UΣV' be the singular value decomposition of M, then R=UV'.

For your matrix, the singular values in Σ should be very close to one. The matrix R is guaranteed to be orthogonal, which is the defining property of a rotation matrix. If there weren't any rounding errors in calculating your original rotation matrix, then R will be exactly the same as your M to within numerical precision.
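
In Eigen this could look roughly like the following (the determinant check is a standard guard against a reflection and only matters if M is very far from a rotation):

#include <Eigen/Dense>

// Nearest rotation matrix to M in the Frobenius norm, via SVD.
Eigen::Matrix3d nearestRotation(const Eigen::Matrix3d &M) {
  Eigen::JacobiSVD<Eigen::Matrix3d> svd(
      M, Eigen::ComputeFullU | Eigen::ComputeFullV);
  Eigen::Matrix3d R = svd.matrixU() * svd.matrixV().transpose();

  // If det(R) == -1 we got a reflection; flip the axis with the
  // smallest singular value (Eigen sorts them in decreasing order).
  if (R.determinant() < 0) {
    Eigen::Matrix3d U = svd.matrixU();
    U.col(2) = -U.col(2);
    R = U * svd.matrixV().transpose();
  }
  return R;
}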

Westernmost answered 23/3, 2018 at 2:19 Comment(1)
This essentially is one way to compute the polar decomposition M=RP, with P=VΣV' a symmetric positive semi-definite matrix, which gives the closest orthogonal matrix R to the given M. The cost is the non-trivial nature of the SVD algorithm.Villalobos

M is the matrix we want to orthonormalize, and R is the rotation matrix closest to M.

Analytic Solution

// Pseudocode: sqrt() denotes the matrix square root of transpose(M)*M.
Matrix R = M*inverse(sqrt(transpose(M)*M));

Iterative Solution

// To re-orthogonalize matrix M, repeat:
M = 0.5f*(inverse(transpose(M)) + M);
// until M converges

M converges to R, the nearest rotation matrix. The number of digits of accuracy will approximately double with each iteration.

Check whether the sum of the squares of the elements of (M - M^-T)/2 is less than the square of your error goal to know when (M + M^-T)/2 meets your accuracy threshold. M^-T is the inverse transpose of M.
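
A sketch of this iteration in Eigen (the tolerance and the iteration cap are my own choices):

#include <Eigen/Dense>

// Repeatedly average M with its inverse transpose; M converges
// quadratically to the nearest rotation matrix R.
Eigen::Matrix3d orthogonalizeIterative(Eigen::Matrix3d M, double tol = 1e-12) {
  for (int i = 0; i < 20; ++i) {
    Eigen::Matrix3d Minvt = M.inverse().transpose();
    // Stop when the update (M - M^-T)/2 is below the error goal.
    if (0.5 * (M - Minvt).norm() < tol) break;
    M = 0.5 * (M + Minvt);
  }
  return M;
}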

Why It Works

We want to find the rotation matrix R which is closest to M. We will define the error as the sum of squared differences of the elements. That is, minimize trace((M - R)^T (M - R)).

The analytic solution is R = M (M^T M)^-(1/2), outlined here.

The problem is that this requires finding the square root of M^T M. However, if we notice, there are many matrices whose closest rotation matrix is R. One of these is M (M^T M)^-1, which simplifies to M^-T, the inverse transpose. The nice thing is that M and M^-T are on opposite sides of R, (intuitively like a and 1/a are on opposite side of 1).

We recognize that the average, (M + M^-T)/2 will be closer to R than M, and because it is a linear combination, will also maintain R as the closest rotation matrix. Doing this recursively, we converge to R.

Worst Case Convergence (Speculative)

Because it is related to the Babylonian square root algorithm, it converges quadratically.

For a matrix M with error e, the exact worst-case error nextE after one iteration is:

nextE = e^2/(2 e + 2)
e = sqrt(trace((M - R)^T (M - R)))
R = M (M^T M)^-(1/2)
Regan answered 23/7, 2022 at 1:8 Comment(0)

In the meantime:

#include <Eigen/Geometry>
#include <iostream>

Eigen::Matrix3d mmm;
Eigen::Matrix3d rrr;
                rrr <<  0.882966, -0.321461,  0.342102,
                        0.431433,  0.842929, -0.321461,
                       -0.185031,  0.431433,  0.882966;
                     // replace this with any rotation matrix

mmm = rrr;

Eigen::AngleAxisd aa(rrr);    // RotationMatrix to AxisAngle
rrr = aa.toRotationMatrix();  // AxisAngle      to RotationMatrix

std::cout <<     mmm << std::endl << std::endl;
std::cout << rrr     << std::endl << std::endl;
std::cout << rrr-mmm << std::endl << std::endl;

This is nice news, because I can get rid of my custom method and have one less headache (how can one be sure to handle all the singularities?), but I really want your opinion on better/alternative ways :)

Nogas answered 15/4, 2014 at 14:20 Comment(3)
"Thanks for the answers so far!" Here at Stackoverflow we don't say thank you, we upvote and/or accept answers instead. "how can one be sure that he takes care of all singularities?" I guess it assumes that rrr is in fact a rotation matrix. If it isn't (and that's what you problem is, your rotation matrix is messed-up), then it most likely does something wrong. This approach seems to be more like an ad-hoc hack than a solution.Borgerhout
It is not really necessary to compute the angles, one can just multiply the axis rotation matrices back together without worrying about quadrants and singularities. This is then the QR decomposition using the Givens rotations method.Villalobos
And if you want to improve the A=QR decomposition, check if the off-diagonal part of R is small and use Q*(I+(R-R')/2) as a better approximation. If (R-I) is somewhat larger, use the better approximation I+(R-R')/2+(R-R')^2/8 or the exact Rodrigues formula to compute the matrix exponential of (R-R')/2. (R'=transpose of R, first identify k from the matrix, then apply formula.)Villalobos

An alternative is to use Eigen::Quaternion to represent your rotation. This is much easier to normalize, and rotation*rotation products are generally faster. If you have a lot of rotation*vector products (with the same matrix), you should locally convert the quaternion to a 3x3 matrix.
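
A minimal sketch of that workflow (the incremental step here is a made-up example):

#include <Eigen/Geometry>

int main() {
  // Accumulate many incremental rotations as quaternions.
  Eigen::Quaterniond q = Eigen::Quaterniond::Identity();
  Eigen::Quaterniond step(Eigen::AngleAxisd(0.01, Eigen::Vector3d::UnitZ()));

  for (int i = 0; i < 100000; ++i)
    q = (q * step).normalized();  // renormalizing is a cheap 4-vector scale

  // For many rotation*vector products with the same rotation,
  // convert once to a 3x3 matrix.
  Eigen::Matrix3d R = q.toRotationMatrix();
  return 0;
}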

Learnt answered 20/11, 2018 at 15:48 Comment(0)

Here is an implementation using Eigen. It minimally perturbs the rotation represented by the matrix: any two column vectors that are not orthogonal are adjusted equally towards/away from each other.

#include <Eigen/Dense>
#include <iostream>

using Eigen::Matrix3d;
using Eigen::Vector3d;

void orthonormalize(Matrix3d &rot) {
  Vector3d x = rot.col(0);
  Vector3d y = rot.col(1);
  Vector3d z = rot.col(2);

  // normalize
  x.normalize();
  y.normalize();
  z.normalize();

  // orthogonalize: spread each pairwise error equally across both columns
  double errorXY = 0.5 * x.dot(y);
  double errorYZ = 0.5 * y.dot(z);
  double errorZX = 0.5 * z.dot(x);
  rot.col(0) = x - errorXY * y - errorZX * z;
  rot.col(1) = y - errorXY * x - errorYZ * z;
  rot.col(2) = z - errorYZ * y - errorZX * x;
}

We can test that it actually works with

double orthonormalityError(const Matrix3d &rot) {
  Matrix3d prod = rot * rot.transpose();
  return (prod - Matrix3d::Identity()).norm();
}

Matrix3d randomRotationMatrix() {
  Matrix3d rand_mat = Matrix3d::Random();
  Eigen::HouseholderQR<Matrix3d> qr(rand_mat);
  Matrix3d rot = qr.householderQ();
  return rot;
}

void testOrthonormalize() {
  Matrix3d rot = randomRotationMatrix();
  rot += Matrix3d::Random() * 0.1;  // perturb away from orthonormality
  std::cout << orthonormalityError(rot) << std::endl;
  for (int j = 0; j < 4; j++) {
    orthonormalize(rot);
    std::cout << orthonormalityError(rot) << std::endl;
  }
}

int main(int argc, char** argv) {
  testOrthonormalize();
  return 0;
}

Which prints:

0.246795
0.00948343
1.45409e-05
3.59979e-11
3.4066e-16
Pinchcock answered 14/4, 2023 at 21:51 Comment(0)
