Efficient weighted covariances in RcppEigen

I am trying to produce a function that can compute a series of weighted products

X^T W X

where W is a diagonal matrix. There are many W matrices but only a single X matrix.

To be efficient I can represent W as an array (w) containing the diagonal part. Then in R this would be crossprod(X, w*X)

or just crossprod(X * sqrt(w))

I could loop over the series of W's, but that seems inefficient. The entire product can be thought of as

(X^T W X)[i, j] = sum_n w[n] * X[n, i] * X[n, j]

Only the w changes, so the elementwise products X[, i] * X[, j] between columns i and j can be recycled. The function I'd like to produce looks like this

#include <RcppEigen.h>

// [[Rcpp::depends(RcppEigen)]]

// [[Rcpp::export]]
Rcpp::List Crossprod_sparse(Eigen::MappedSparseMatrix<double> X, Eigen::Map<Eigen::MatrixXd> W) {
  int K = W.cols();  // number of weight vectors
  int p = X.cols();

  Rcpp::List crossprods(K);

  for (int k = 0; k < K; k++) {
    Eigen::SparseMatrix<double> matprod(p, p);
    for (int i = 0; i < p; i++) {
      // w_k * X[, i], computed once and reused for all j >= i
      Eigen::SparseVector<double> prod = X.col(i).cwiseProduct(W.col(k));
      for (int j = i; j < p; j++) {
        double out = prod.dot(X.col(j));
        // fill both triangles of the symmetric result
        matprod.coeffRef(i, j) = out;
        matprod.coeffRef(j, i) = out;
      }
    }
    matprod.makeCompressed();
    crossprods[k] = matprod;
  }

  return crossprods;
}

which returns the correct products, and should be efficient because it computes the intermediate prod vector once per column i and reuses it. However, looping in R with crossprod still seems to be much faster, despite not taking advantage of recycling. How can I optimize this function further?

Misogamy asked 27/1, 2017 at 21:53 Comment(6)
R's crossprod uses BLAS/LAPACK for matrix operations (namely, dsyrk/dgemm). These are super low-level, super optimized. I understand your intention, but your fight with linear algebra optimization may turn out to be very difficult. – Choctaw
Those things are the bread and butter of such libraries, though, so a correspondingly performant Eigen function may exist as well. – Ortega
@Choctaw Apologies for the dumb question, but where does crossprod call BLAS/LAPACK? github.com/wch/r-source/blob/… – Musty
@Musty See e.g. this call. The method you point to is (at least to my understanding) only used if BLAS is not available (switch (R_Matprod)). – Choctaw
You are using sparse matrices; have you tried with dense matrices? – Catcher
How large and how sparse is X? Can you provide some (random) sample data? – Introvert

You may try calculating the Cholesky decomposition of your weight matrix, multiplying your matrix by this decomposition, and then calculating the crossproduct, as listed in the RcppEigen documentation. Some example code using RcppEigen could be

#include <RcppEigen.h>

using Eigen::MatrixXd;
using Eigen::VectorXd;

//[[Rcpp::depends(RcppEigen)]]

// [[Rcpp::export]]
MatrixXd weightedCovariance(MatrixXd & X, MatrixXd & W) {
  int p = X.cols(); // assuming each row is a unique observation
  // Cholesky factor L with W = L * L^T
  MatrixXd L = W.llt().matrixL();
  // rankUpdate(A) adds A * A^T to the lower triangle;
  // with A = X^T * L this gives X^T * L * L^T * X = X^T * W * X
  MatrixXd XtWX = MatrixXd(p, p).setZero().selfadjointView<Eigen::Lower>().rankUpdate(X.transpose() * L);
  return XtWX;
}

// [[Rcpp::export]]
MatrixXd diag_weightedCovariance(MatrixXd & X, VectorXd & W) {
  int p = X.cols(); // assuming each row is a unique observation
  VectorXd w = W.cwiseSqrt(); // elementwise square root of the weights
  // with A = X^T * diag(sqrt(w)), A * A^T = X^T * diag(w) * X
  MatrixXd XtWX = MatrixXd(p, p).setZero().selfadjointView<Eigen::Lower>().rankUpdate(X.transpose() * w.asDiagonal());
  return XtWX;
}
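
For contrast, a version that does not tell Eigen the result is symmetric (hypothetical, not part of the benchmarks below; naiveWeightedCovariance is an illustrative name) would compute all p x p entries with two general matrix products:

// Same file as above; for comparison only -- no symmetric rank update,
// so Eigen computes the full p-by-p product instead of one triangle.
MatrixXd naiveWeightedCovariance(MatrixXd & X, MatrixXd & W) {
  return X.transpose() * W * X;
}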

Eigen does a lot of optimization under the hood, so telling it that the result is symmetric should speed things up. Checking timings in R with microbenchmark:

set.seed(23847) #for reproducibility
require(microbenchmark)

#Create R version of Cpp function
Rcpp::sourceCpp('weighted_covar.cpp')

#generate data
p <- 100
n <- 1000
X <- matrix(rnorm(p*n), nrow=n, ncol=p)
W <- diag(1, n, n)
w <- diag(W)

R_res   <- crossprod(chol(W) %*% X ) #general weighted covariance
R_res_diag <- crossprod(sqrt(w) * X ) #utilizing your optimization, if we know it's diagonal
Cpp_res <- weightedCovariance(X, W)
Cpp_res_diag <- diag_weightedCovariance(X, w)

#make sure all equal
all.equal(R_res, Cpp_res)
#[1] TRUE
all.equal(R_res, R_res_diag)
#[1] TRUE
all.equal(Cpp_res_diag, R_res_diag)
#[1] TRUE

#check timings
microbenchmark(crossprod(chol(W) %*% X ))
# Unit: milliseconds
#                     expr      min      lq     mean  median       uq      max neval
# crossprod(chol(W) %*% X) 251.6066 262.739 275.1719 268.615 276.4994 479.9318   100

microbenchmark(crossprod(sqrt(w) * X ))
# Unit: milliseconds
#                   expr      min       lq     mean   median       uq     max neval
# crossprod(sqrt(w) * X) 5.264319 5.394289 5.499552 5.430885 5.496387 6.42099   100

microbenchmark(weightedCovariance(X, W))
# Unit: milliseconds
#                     expr      min       lq     mean   median       uq      max neval
# weightedCovariance(X, W) 26.64534 27.84632 31.99341 29.44447 34.59631 51.39726   100

microbenchmark(diag_weightedCovariance(X, w), unit = "ms")
# Unit: milliseconds
#                          expr     min       lq      mean   median        uq      max neval
# diag_weightedCovariance(X, w) 0.67571 0.702567 0.7469946 0.713579 0.7405515 1.321888   100

I also haven't used your sparse structure in this implementation, so you may get more speed after accounting for that.
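
For example, a sparse analogue of diag_weightedCovariance (a sketch only, not benchmarked; diag_weightedCovariance_sparse is a hypothetical name, and it assumes X arrives as a dgCMatrix) could scale the rows of X by sqrt(w) and form the crossproduct with Eigen's sparse kernels:

#include <RcppEigen.h>

// [[Rcpp::depends(RcppEigen)]]

// [[Rcpp::export]]
Eigen::MatrixXd diag_weightedCovariance_sparse(Eigen::MappedSparseMatrix<double> X,
                                               Eigen::VectorXd w) {
  // Scaling row n of X by sqrt(w_n) gives D * X with D = diag(sqrt(w)),
  // so (D * X)^T * (D * X) = X^T * diag(w) * X.
  Eigen::SparseMatrix<double> Xw = w.cwiseSqrt().asDiagonal() * X;
  Eigen::SparseMatrix<double> XtWX = Xw.transpose() * Xw;
  return Eigen::MatrixXd(XtWX); // densify for return; keep sparse if p is large
}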

Mountain answered 11/6, 2019 at 15:59 Comment(0)

Generally, if you have a diagonal matrix in a product, you should pass just the diagonal coefficients w and use them as w.asDiagonal():

// Computes X^T * diag(w) * X without ever materializing the n-by-n diagonal matrix
Eigen::MatrixXd foo(Eigen::SparseMatrix<double> const & X, Eigen::VectorXd const & w)
{
    return X.transpose() * w.asDiagonal() * X;
}
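
For the question's use case of many weight vectors, one could wrap this in a loop over the columns of W (a sketch; crossprod_list is an illustrative name, and RcppEigen's wrap converts each sparse result to a dgCMatrix):

#include <RcppEigen.h>

// [[Rcpp::depends(RcppEigen)]]

// [[Rcpp::export]]
Rcpp::List crossprod_list(Eigen::MappedSparseMatrix<double> X,
                          Eigen::Map<Eigen::MatrixXd> W) {
  Rcpp::List out(W.cols());
  for (int k = 0; k < W.cols(); ++k) {
    Eigen::VectorXd w = W.col(k);
    // X^T * diag(w_k) * X, using Eigen's optimized sparse products
    Eigen::SparseMatrix<double> XtWX = X.transpose() * w.asDiagonal() * X;
    out[k] = XtWX;
  }
  return out;
}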

If you want to pre-compute everything except the multiplication with w, you can try storing the outer products of each row of X and accumulating them on demand:

class ProductHelper
{
    std::vector<Eigen::SparseMatrix<double> > matrices;
public:
    ProductHelper(Eigen::SparseMatrix<double> const& X_)
    {
        // The loop below is much more efficient with row-major X
        // (binding to this reference converts X_ to a row-major temporary)
        Eigen::SparseMatrix<double, Eigen::RowMajor> const &X = X_;
        matrices.reserve(X.rows());
        for(int i=0; i<X.rows(); ++i)
        {
            // store the outer product x_i * x_i^T of row i with itself
            matrices.push_back(X.row(i).transpose()*X.row(i));
        }
    }

    Eigen::MatrixXd multiply(Eigen::VectorXd const& w) const
    {
        assert(static_cast<std::size_t>(w.size())==matrices.size());
        assert(w.size()>0);
        // X^T * diag(w) * X == sum_i w[i] * x_i * x_i^T
        Eigen::MatrixXd A = w[0]*matrices[0];
        for(Eigen::Index i=1; i<w.size(); ++i)
        {
            A += w[i]*matrices[i];
        }
        return A;
    }
};
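
A hypothetical usage pattern (the names X and weights are illustrative): construct the helper once, then call multiply() once per weight vector.

#include <Eigen/Dense>
#include <Eigen/Sparse>
#include <vector>
// ... ProductHelper definition from above ...

int main()
{
    Eigen::SparseMatrix<double> X(1000, 100); // illustrative sizes
    // ... fill X ...

    ProductHelper helper(X); // pays the outer-product cost exactly once

    std::vector<Eigen::VectorXd> weights; // one w per diagonal W_k
    // ... fill weights ...

    for (Eigen::VectorXd const& w : weights)
    {
        Eigen::MatrixXd XtWX = helper.multiply(w); // cheap accumulation per k
        // ... use XtWX ...
    }
}

Note the trade-off: the helper stores one p x p sparse outer product per row of X, so it only pays off when X is sparse enough and the same X is reused across many weight vectors.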
Ventral answered 1/12, 2018 at 13:12 Comment(0)
