Sample from multivariate normal/Gaussian distribution in C++
I've been hunting for a convenient way to sample from a multivariate normal distribution. Does anyone know of a readily available code snippet to do that? For matrices/vectors, I'd prefer to use Boost or Eigen or another phenomenal library I'm not familiar with, but I could use GSL in a pinch. I'd also like the method to accept nonnegative-definite covariance matrices rather than requiring positive-definite ones (as, e.g., the Cholesky decomposition does). This exists in MATLAB, NumPy, and others, but I've had a hard time finding a ready-made C/C++ solution.

If I have to implement it myself, I'll grumble, but that's fine. If I do, Wikipedia makes it sound like I should (see the sketch after this list):

  1. generate n zero-mean, unit-variance, independent normal samples (Boost will do this)
  2. find the eigen-decomposition of the covariance matrix
  3. scale each of the n samples by the square root of the corresponding eigenvalue
  4. rotate the vector of samples by pre-multiplying the scaled vector by the matrix of orthonormal eigenvectors found by the decomposition
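
In Eigen plus Boost, I imagine those steps would look roughly like this (just a sketch; sample_mvn is a placeholder name I made up):

#include <Eigen/Dense>
#include <boost/random/mersenne_twister.hpp>
#include <boost/random/normal_distribution.hpp>
#include <boost/random/variate_generator.hpp>

// Draw one sample from N(mean, covar) via the eigen-decomposition route
Eigen::VectorXd sample_mvn(const Eigen::VectorXd& mean,
                           const Eigen::MatrixXd& covar)
{
  static boost::mt19937 rng;
  static boost::normal_distribution<double> norm;
  static boost::variate_generator<boost::mt19937&,
      boost::normal_distribution<double> > randn(rng, norm);

  // Step 1: n independent zero-mean, unit-variance normal samples
  Eigen::VectorXd z(mean.size());
  for (int i = 0; i < z.size(); ++i)
    z(i) = randn();

  // Step 2: eigen-decomposition of the symmetric covariance matrix
  Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> es(covar);

  // Steps 3-4: scale by the square roots of the eigenvalues (clamped at
  // zero in case round-off produces tiny negatives), then rotate
  return mean + es.eigenvectors()
              * es.eigenvalues().cwiseMax(0.0).cwiseSqrt().asDiagonal()
              * z;
}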

I would like this to work quickly. Does anyone have an intuition for when it would be worthwhile to check whether the covariance matrix is positive-definite and, if so, to use Cholesky instead?

Bissau answered 26/5, 2011 at 17:25 Comment(1)
Try asking/migrating this to scicomp.stackexchange.com – Fictive
Since this question has garnered a lot of views, I thought I'd post code for the final answer that I found, in part, by posting to the Eigen forums. The code uses Boost for the univariate normal and Eigen for matrix handling. It feels rather unorthodox, since it involves using the "internal" namespace, but it works. I'm open to improving it if someone suggests a way.

#include <iostream>

#include <Eigen/Dense>
#include <boost/random/mersenne_twister.hpp>
#include <boost/random/normal_distribution.hpp>

/*
  We need a functor that can pretend it's const,
  but to be a good random number generator 
  it needs mutable state.
*/
namespace Eigen {
namespace internal {
template<typename Scalar> 
struct scalar_normal_dist_op 
{
  static boost::mt19937 rng;    // The uniform pseudo-random algorithm
  mutable boost::normal_distribution<Scalar> norm;  // The Gaussian distribution

  EIGEN_EMPTY_STRUCT_CTOR(scalar_normal_dist_op)

  template<typename Index>
  inline const Scalar operator() (Index, Index = 0) const { return norm(rng); }
};

template<typename Scalar> boost::mt19937 scalar_normal_dist_op<Scalar>::rng;

template<typename Scalar>
struct functor_traits<scalar_normal_dist_op<Scalar> >
{ enum { Cost = 50 * NumTraits<Scalar>::MulCost, PacketAccess = false, IsRepeatable = false }; };
} // end namespace internal
} // end namespace Eigen

/*
  Draw nn samples from a size-dimensional normal distribution
  with a specified mean and covariance
*/
int main()
{
  int size = 2; // Dimensionality (rows)
  int nn=5;     // How many samples (columns) to draw
  Eigen::internal::scalar_normal_dist_op<double> randN; // Gaussian functor
  Eigen::internal::scalar_normal_dist_op<double>::rng.seed(1); // Seed the rng

  // Define mean and covariance of the distribution
  Eigen::VectorXd mean(size);       
  Eigen::MatrixXd covar(size,size);

  mean  <<  0,  0;
  covar <<  1, .5,
           .5,  1;

  Eigen::MatrixXd normTransform(size,size);

  Eigen::LLT<Eigen::MatrixXd> cholSolver(covar);

  // We can only use the cholesky decomposition if 
  // the covariance matrix is symmetric, pos-definite.
  // But a covariance matrix might be pos-semi-definite.
  // In that case, we'll go to an EigenSolver
  if (cholSolver.info()==Eigen::Success) {
    // Use cholesky solver
    normTransform = cholSolver.matrixL();
  } else {
    // Use eigen solver
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> eigenSolver(covar);
    normTransform = eigenSolver.eigenvectors() 
                   * eigenSolver.eigenvalues().cwiseSqrt().asDiagonal();
  }

  Eigen::MatrixXd samples = (normTransform 
                           * Eigen::MatrixXd::NullaryExpr(size,nn,randN)).colwise() 
                           + mean;

  std::cout << "Mean\n" << mean << std::endl;
  std::cout << "Covar\n" << covar << std::endl;
  std::cout << "Samples\n" << samples << std::endl;
}
Bissau answered 26/12, 2012 at 21:22 Comment(1)
I learned quite a few things from this example, and got a few ideas. Thank you. – Latish
Here is a class to generate multivariate normal random variables in Eigen which uses C++11 random number generation and avoids the Eigen::internal stuff by using Eigen::MatrixBase::unaryExpr():

struct normal_random_variable
{
    normal_random_variable(Eigen::MatrixXd const& covar)
        : normal_random_variable(Eigen::VectorXd::Zero(covar.rows()), covar)
    {}

    normal_random_variable(Eigen::VectorXd const& mean, Eigen::MatrixXd const& covar)
        : mean(mean)
    {
        // Build M = U * sqrt(D) from covar = U * D * U^T, so that M * M^T = covar
        Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> eigenSolver(covar);
        transform = eigenSolver.eigenvectors() * eigenSolver.eigenvalues().cwiseSqrt().asDiagonal();
    }

    Eigen::VectorXd mean;
    Eigen::MatrixXd transform;

    Eigen::VectorXd operator()() const
    {
        static std::mt19937 gen{ std::random_device{}() };
        static std::normal_distribution<> dist;

        return mean + transform * Eigen::VectorXd{ mean.size() }.unaryExpr([&](auto x) { return dist(gen); });
    }
};

It can be used as

int size = 2;
Eigen::MatrixXd covar(size,size);
covar << 1, .5,
        .5, 1;

normal_random_variable sample { covar };

std::cout << sample() << std::endl;
std::cout << sample() << std::endl;
Ramulose answered 25/10, 2016 at 16:51 Comment(4)
Note this is C++14 because of the auto x lambda parameter. Change it to double x to use with C++11. – Darky
The square root of a matrix given its eigenvalue decomposition is Q*sqrt(Λ)*inv(Q), so it should be: eigenSolver.eigenvectors() * eigenSolver.eigenvalues().cwiseSqrt().asDiagonal() * eigenSolver.eigenvectors().inverse() – Anse
@Anse: one needs a matrix M with M*M^t = Sigma, where Sigma is the covariance matrix. My code above evaluates that by using the eigendecomposition Sigma = U D U^t, and then uses M = U sqrt(D), which works (one could also have used a Cholesky decomposition, but that has problems with positive semi-definite covariance matrices, i.e. with zero eigenvalues). Now you suggest M = U sqrt(D) U^t instead: this also works, because U is an orthogonal matrix, i.e. U U^t = I, but the evaluation requires more work. – Ramulose
@Ramulose Surprisingly to me, the two solutions are actually equivalent while rotated with respect to each other, as can be shown by taking N*trans(N) = S and M*M = S; then trans(N*x)*inv(S)*(N*x) = trans(M*x)*inv(S)*(M*x) <-> trans(x)*trans(N)*inv(N*trans(N))*N*x = trans(x)*trans(M)*inv(M)*inv(M)*M*x <-> trans(x)*trans(N)*inv(trans(N))*inv(N)*N*x = trans(x)*M*inv(M)*inv(M)*M*x <-> trans(x)*x = trans(x)*x. Well, I learned something new today, thx. – Anse
For a ready-made solution, the armadillo C++ library supports sampling from a multivariate Gaussian distribution (even from positive semi-definite covariance matrices) with the function mvnrnd().
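
For example (a minimal sketch, assuming a reasonably recent Armadillo; mvnrnd returns one sample per column):

#include <armadillo>

int main()
{
  arma::vec mu = {0.0, 0.0};            // mean
  arma::mat Sigma = {{1.0, 0.5},
                     {0.5, 1.0}};       // covariance (may be semi-definite)

  arma::mat Y = arma::mvnrnd(mu, Sigma, 5);  // draw 5 samples
  Y.print("Samples:");
  return 0;
}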

Oxonian answered 29/1, 2022 at 0:24 Comment(0)
What about doing an SVD and then checking whether the matrix is PD? Note that this does not require you to compute the Cholesky factorization. I think SVD is slower than Cholesky, but both are cubic in the number of flops.
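
A rough sketch of that idea in Eigen (assuming a symmetric positive semi-definite covariance, for which the SVD coincides with the eigen-decomposition; the tolerance is an arbitrary choice):

#include <Eigen/Dense>
#include <iostream>

int main()
{
  Eigen::MatrixXd covar(2, 2);
  covar << 1, .5,
          .5,  1;

  // For a symmetric PSD covar, the SVD is covar = U * S * U^T
  Eigen::JacobiSVD<Eigen::MatrixXd> svd(covar, Eigen::ComputeFullU);

  // Positive definite iff all singular values are (numerically) nonzero
  double tol = 1e-12 * svd.singularValues()(0);
  bool isPD = (svd.singularValues().array() > tol).all();
  std::cout << (isPD ? "positive definite" : "semi-definite at best") << "\n";

  // Either way, M = U * sqrt(S) satisfies M * M^T = covar, so M maps
  // standard-normal samples to N(0, covar)
  Eigen::MatrixXd M = svd.matrixU()
                    * svd.singularValues().cwiseSqrt().asDiagonal();
  std::cout << "Transform:\n" << M << std::endl;
}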

Bejarano answered 1/6, 2011 at 16:48 Comment(0)
