I have an NxM matrix (usually 10k x 10k elements) describing a ground set. Each line represents an object and each column a specific feature. For example, in the matrix
    f1 f2 f3
x1   0  4 -1
x2   1  0  5
x3   4  0  0
x4   0  1  0
the object x1 has value 0 in feature 1, value 4 in feature 2, and value -1 in feature 3. The values are general real numbers (doubles).
I have to compute several custom distances/dissimilarities between all pairs of objects (all pairs of lines). For comparison, I want to compute the L1 (Manhattan) and L2 (Euclidean) distances.
I use the Eigen library to perform most of my computations. To compute the L2 (Euclidean) distance, I use the following observation: for two vectors a and b of size n, we have
||a - b||^2 = (a_1 - b_1)^2 + (a_2 - b_2)^2 + ... + (a_n - b_n)^2
            = a_1^2 + b_1^2 - 2 a_1 b_1 + ... + a_n^2 + b_n^2 - 2 a_n b_n
            = a.a + b.b - 2 a.b
In other words, we rewrite the squared norm using the dot products of the vectors with themselves, minus twice the dot product between them. From that, we just take the square root and we are done. Incidentally, I found this trick a long time ago and unfortunately lost the reference to its author.
Anyway, this enables us to write fancy code using Eigen (in C++):
Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic> matrix, XX, D;
// Load matrix here, for example
// matrix << 0, 4, -1,
//           1, 0,  5,
//           4, 0,  0,
//           0, 1,  0;
const auto N = matrix.rows();
XX.resize(N, 1);
D.resize(N, N);
// Row-wise squared norms (.matrix() converts the Array expression back to Matrix).
XX = matrix.array().square().rowwise().sum().matrix();
D.noalias() = XX * Eigen::MatrixXd::Ones(1, N) +
              Eigen::MatrixXd::Ones(N, 1) * XX.transpose();
D -= 2 * matrix * matrix.transpose();
D = D.cwiseSqrt();
For a 10k x 10k matrix, we are able to compute the L2 distance for all pairs of objects/lines in less than 1 minute (2 cores / 4 threads), which I personally consider good performance for my purposes. Eigen is able to combine the operations and to use several low/high-level optimizations to perform these computations. In this case, Eigen uses all available cores (and, of course, we can tune that).
However, I still need to compute the L1 distance, but I couldn't figure out a good algebraic form to use with Eigen and obtain nice performance. So far I have the following:
const auto N = matrix.rows();
// distance is an N x N matrix allocated beforehand
for (long i = 0; i < N - 1; ++i) {
    const auto &row = matrix.row(i);
#ifdef _OPENMP
#pragma omp parallel for shared(row)
#endif
    for (long j = i + 1; j < N; ++j) {
        distance(i, j) = (row - matrix.row(j)).lpNorm<1>();
    }
}
But this takes much longer: for the same 10k x 10k matrix, this code uses 3.5 minutes, which is much worse considering that L1 and L2 are very close in their original forms:
L1(a, b) = sum_i |a_i - b_i|
L2(a, b) = sqrt( sum_i (a_i - b_i)^2 )
Any idea how to change L1 to use fast computations with Eigen? Or is there a better form to do that which I just didn't figure out?
Thank you very much for your help!
Carlos
Comments:
D = XX.replicate(1, n) + XX.transpose().replicate(n, 1); – Aubine
Map (a - b) to +1 or -1 into a sign vector depending on the sign of a_i - b_i, and then do |a - b| = (a - b) . sign? – Sweven