I have some Eigen matrices whose dimensions I don't know in advance; I only have an upper bound. In a loop, I fill those matrices (initialized with the upper bound) column by column until a stopping criterion is fulfilled (let's say after j iterations).
My problem is now: after the loop, I need those matrices for matrix multiplications (obviously using only the first j columns). The straightforward solution would be to use Eigen's conservativeResize and then go right ahead with the multiplications. But the matrices tend to be quite large (100000+ rows), and conservativeResize reallocates the storage and deep-copies the retained coefficients, so this solution is quite expensive.
I was thinking about writing my own custom matrix multiplication function that operates on the old (big) matrices and takes arguments specifying the number of columns to use. I fear, though, that Eigen's matrix multiplications are so heavily optimized that in the end this solution would be slower than just doing the conservative resize and using standard Eigen multiplication...
Should I just bite the bullet and use conservativeResize, or does anyone have a better idea? BTW: after the loop/resize, the matrices in question are used in 3 multiplications and 1 transpose.
Thanks in advance!
Edit: this is the relevant part of the code (where X is a MatrixXd, y is a VectorXd, and numComponents is the number of latent variables PLS1 is supposed to use). The thing is, though: at the beginning, numComponents will always be the number of dimensions in X (X.cols()), but the stopping criterion is supposed to check the relative improvement in the explained variance of the output vector (that I have not implemented yet). If the relative improvement is too small, the algorithm is supposed to stop (since we are happy with the first j components) and then compute the regression coefficients. That is where I need the conservativeResize:
using namespace Eigen;
MatrixXd W,P,T,B;
VectorXd c,xMean;
double yMean;
W.resize(X.cols(),numComponents);
P.resize(X.cols(),numComponents);
T.resize(X.rows(),numComponents);
c.resize(numComponents);
xMean.resize(X.cols());
xMean.setZero();
yMean=0;
VectorXd yCopy=y;
//perform PLS1
for(size_t j=0; j<numComponents; ++j){
    VectorXd tmp=X.transpose()*y;
    W.col(j)=tmp/tmp.norm();
    T.col(j)=X*W.col(j);
    double divisorTmp=T.col(j).transpose()*T.col(j);
    c(j)=T.col(j).transpose()*y;
    c(j)/=divisorTmp;
    P.col(j)=X.transpose()*T.col(j)/divisorTmp;
    X=X-T.col(j)*P.col(j).transpose();
    y=y-T.col(j)*c(j);
    bool stop=false;//TODO: stopping criterion (relative improvement in explained variance)
    if(stop && j<numComponents-1){
        numComponents=j+1;
        W.conservativeResize(X.cols(),numComponents);
        P.conservativeResize(X.cols(),numComponents);
        T.conservativeResize(X.rows(),numComponents);
        c.conservativeResize(numComponents);
    }
}
//store regression matrix
MatrixXd tmp=P.transpose()*W;
B=W*tmp.inverse()*c;
yCopy=yCopy-T*c;
double mse=yCopy.transpose()*yCopy;
mse/=y.size();//Mean Square Error