I'm trying to use Eigen::CholmodSupernodalLLT for Cholesky decomposition, but it seems that I cannot get matrixL() and matrixU(). How can I extract matrixL() and matrixU() from Eigen::CholmodSupernodalLLT for future use?
A partial answer to integrate what others have said.
Consider Y ~ MultivariateNormal(0, A). One may want to (1) evaluate the (log-)likelihood (a multivariate normal density), and (2) sample from that density.
For (1), it is necessary to solve Ax = b, where A is symmetric positive-definite, and to compute its log-determinant. (2) requires an L such that A = L * L.transpose(), since Y ~ MultivariateNormal(0, A) can be generated as Y = L u where u ~ MultivariateNormal(0, I).
A Cholesky LLT or LDLT decomposition is useful because chol(A) can be used for both purposes. Solving Ax = b is easy given the decomposition, and the determinant follows from the product of the components of D (for LDLT) or of the squared diagonal of L (for LLT); taking logs and summing gives the log-determinant. By definition, L can then be used for sampling.
So, in Eigen one can use Eigen::SimplicialLDLT solver(A) (or Eigen::SimplicialLLT), then solver.solve(b), and calculate the log-determinant from solver.vectorD() (which returns the diagonal of D as a vector). This is useful because if A is a covariance matrix, then solver can be used for likelihood evaluations, and matrixL() for sampling.
Eigen::CholmodDecomposition does not give access to matrixL() or vectorD(), but it exposes logDeterminant(), which achieves goal (1) but not (2). Eigen::PardisoLDLT gives access to neither matrixL() nor vectorD(), and does not expose a way to get the determinant.
In some applications, step (2) - sampling - can be done at a later stage, so Eigen::CholmodDecomposition is enough. At least in my configuration, Eigen::CholmodDecomposition works 2 to 5 times faster than Eigen::SimplicialLDLT (I guess because of the permutations done under the hood to facilitate parallelization).
Example: in Bayesian spatial Gaussian process regression, the spatial random effects can be integrated out and do not need to be sampled, so MCMC can proceed swiftly with Eigen::CholmodDecomposition to achieve convergence for the unknown parameters. The spatial random effects can then be recovered in parallel using Eigen::SimplicialLDLT. Typically this is only a small part of the computations, but having matrixL() directly from CholmodDecomposition would simplify it a bit.
You cannot do this using the given class. The class you are referencing is an equation solver (which indeed uses a Cholesky decomposition internally). To decompose your matrix you should instead use Eigen::LLT. Code example from their website:
MatrixXd A(3,3);
A << 4,-1,2, -1,6,0, 2,0,5;
LLT<MatrixXd> lltOfA(A);
MatrixXd L = lltOfA.matrixL();
MatrixXd U = lltOfA.matrixU();
As reported elsewhere, it cannot be done easily. I am copying a possible recommendation (answered by Gael Guennebaud himself), even if somewhat old:
If you really need access to the factor to do your own cooking, then better use the built-in SimplicialL{D}LT<> class. Extracting the factors from the supernodal internal representations of Cholmod/Pardiso is indeed not straightforward and very rarely needed. We have to check, but if Cholmod/Pardiso provide routines to manipulate the factors, like applying it to a vector, then we could let matrix{L,U}() return a pseudo expression wrapping these routines.
Developing code for extracting this is likely beyond SO, and probably a topic for a feature request.
Of course, the solution with LLT is at hand (but it is not the topic of the OP).
[…] SimplicialLDLT in my application. In the end I found workarounds (especially because CholmodDecomposition still offers logDeterminant()), but it would be much cleaner if I could just extract L.
ldlchol outputs the actual decomposition, so I was wondering why the matrixL() method was not working in Eigen. Did you find an answer eventually? – Sunbonnet
[…] ldlchol, considering that Matlab also uses Cholmod under the hood from what I understand. It's also kind of counterintuitive that the name suggests LLT but there's no way to extract L itself. – Sunbonnet