No, this is not troubling. That matrix likely just doesn't work well with that particular eigenvector algorithm implementation. Efficient and stable general eigenvector computation is nontrivial, after all.
The Matrix library is adapted from JAMA, a Java matrix package, whose documentation says it performs numerical computation rather than symbolic computation:
Not Covered. JAMA is by no means a complete linear algebra environment ... it focuses on the principal mathematical functionality required to do numerical linear algebra
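To make that distinction concrete, here is a small plain-Ruby sketch (my own illustration, not part of JAMA or the Matrix library) contrasting floating-point arithmetic, which rounds every intermediate result, with exact Rational arithmetic:
# Numerical (floating-point) arithmetic rounds each intermediate result.
puts 0.1 + 0.2          # => 0.30000000000000004
puts 0.1 + 0.2 == 0.3   # => false

# Exact arithmetic with Rational carries no rounding error.
puts Rational(1, 10) + Rational(2, 10)                      # => 3/10
puts Rational(1, 10) + Rational(2, 10) == Rational(3, 10)   # => true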
The QR Algorithm: Numerical Computation
Looking at the source code for Matrix::EigenvalueDecomposition, I found that it references its use of the QR algorithm. I don't fully understand the intricacies of the mathematics, but I think I can see why this computation fails. The computation proceeds as described:
At the k-th step (starting with k = 0), we compute the QR decomposition A_k = Q_k R_k ... Under certain conditions,[4] the matrices A_k converge to a triangular matrix, the Schur form of A. The eigenvalues of a triangular matrix are listed on the diagonal, and the eigenvalue problem is solved.
In "pseudo" Ruby, this conceptually means:
working_matrix = orig_matrix.dup
all_q_matrices = []
loop do
  # Factor the current iterate, then recombine the factors in reverse order.
  q, r = working_matrix.qr_decomposition
  all_q_matrices << q
  next_matrix = r * q
  # Stop once successive iterates agree to within the desired accuracy.
  converged = difference_between(working_matrix, next_matrix) < accuracy_threshold
  working_matrix = next_matrix
  break if converged
end
eigenvalues = working_matrix.diagonal_values
For eigenvectors, it continues:
upon convergence, AQ = QΛ, where Λ is the diagonal matrix of eigenvalues to which A converged, and where Q is a composite of all the orthogonal similarity transforms required to get there. Thus the columns of Q are the eigenvectors.
In "pseudo" Ruby, continued:
eigenvectors = all_q_matrices.inject(:*).columns
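If you want to see how far off a returned eigenpair is, one way (a sketch using the standard library's Matrix class, with an illustrative matrix rather than yours) is to check the residual A*v - λ*v, which would be the zero vector for an exact eigenpair:
require 'matrix'

a = Matrix[[2.0, 1.0],
           [1.0, 3.0]]

decomposition = a.eigen
decomposition.eigenvalues.zip(decomposition.eigenvectors).each do |value, vector|
  # For an exact eigenpair, a * vector equals value * vector exactly.
  residual = (a * vector - vector * value).magnitude
  puts "eigenvalue #{value}: residual #{residual}"
end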
Floating Point Error in Numerical Computations
We can see that a sequence of numerical computations is performed to compute the approximate eigenvalues and, as a side effect, a collection of approximate Q matrices is accumulated. These approximate Q matrices are then multiplied together to form the eigenvectors.
The compounding of approximations is what probably caused the wildly inaccurate results. An example of catastrophic cancellation on Math StackExchange shows a simple quadratic computation with 400% relative error. You might be able to imagine how an iterative matrix algorithm with repeated arithmetic operations could do much worse.
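As a tiny illustration of cancellation in plain Ruby floats (my own example, unrelated to the Matrix library), subtracting two nearly equal numbers can leave no correct digits at all:
x = 1.0e-8

# In double precision, Math.cos(x) rounds to exactly 1.0, so the subtraction returns 0.0.
naive = 1.0 - Math.cos(x)         # => 0.0, a 100% relative error

# The identity 1 - cos(x) = 2 * sin(x/2)**2 avoids the subtraction entirely.
accurate = 2 * Math.sin(x / 2)**2 # => ~5.0e-17, the correct answer

puts naive
puts accurate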
A Grain of Salt
Again, I don't have a deep understanding of either the algorithm's mathematics or its implementation, so I don't know precisely which parts of the computation caused your specific 85110032990182200% error, but I hope you can now see how it may have happened.