How to whiten matrix in PCA

I'm working with Python and I've implemented PCA following this tutorial.

Everything works great: I got the covariance matrix, did a successful transform, and brought the data back to the original dimensions with no problem.

But how do I perform whitening? I tried dividing the eigenvectors by the eigenvalues:

S, V = numpy.linalg.eig(cov)
V = V / S[:, numpy.newaxis]

and used V to transform the data, but this led to weird values. Could someone please shed some light on this?

Cupreous answered 4/7, 2011 at 18:9 Comment(1)
You might want to try a more specific mathematical venue, perhaps a mailing list associated with numpy or scikits.Uropygium

Here's a numpy implementation of some MATLAB code for matrix whitening that I got from here.

import numpy as np

def whiten(X,fudge=1E-18):

   # the matrix X should be observations-by-components

   # covariance matrix, up to a constant factor
   # (assumes the columns of X have already been mean-centered)
   Xcov = np.dot(X.T,X)

   # eigenvalue decomposition of the covariance matrix
   d, V = np.linalg.eigh(Xcov)

   # a fudge factor can be used so that eigenvectors associated with
   # small eigenvalues do not get overamplified.
   D = np.diag(1. / np.sqrt(d+fudge))

   # whitening matrix
   W = np.dot(np.dot(V, D), V.T)

   # multiply by the whitening matrix
   X_white = np.dot(X, W)

   return X_white, W

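A quick way to sanity-check the result is to look at the scatter matrix of the whitened output (a usage sketch with made-up data, assuming the input has already been mean-centered, since the covariance computation above does not subtract the mean):

import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(100, 5)
X = X - X.mean(axis=0)   # center the data first

X_white, W = whiten(X)

# the scatter matrix of the whitened data should be close to the identity
print(np.allclose(np.dot(X_white.T, X_white), np.eye(5)))
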
You can also whiten a matrix using SVD:

def svd_whiten(X):

    U, s, Vt = np.linalg.svd(X, full_matrices=False)

    # U and Vt contain the left and right singular vectors, and s contains
    # the singular values. Since the columns of U and the rows of Vt are
    # orthonormal vectors, U * Vt will be white
    X_white = np.dot(U, Vt)

    return X_white

The second way is a bit slower, but probably more numerically stable.

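A similar quick check for the SVD version (a sketch with the same kind of made-up, mean-centered data; not part of the original answer):

import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(100, 5)
X = X - X.mean(axis=0)

X_white = svd_whiten(X)

# the columns of X_white are orthonormal, so its scatter matrix is the identity
print(np.allclose(np.dot(X_white.T, X_white), np.eye(5)))
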
Rickrickard answered 4/7, 2012 at 23:1 Comment(9)
Thanks! Shouldn't the svd be performed on the covariance matrix of X?Mirianmirielle
@Mirianmirielle I think you are confusing SVD with eigendecomposition. Using the SVD method you don't explicitly compute the covariance matrix beforehand - the columns of U will contain the eigenvectors of X * X.T, and the rows of Vt contain the eigenvectors of X.T * X. Since the columns of U and the rows of Vt are orthonormal vectors, the covariance matrix of U.dot(Vt) will be the identity.Rickrickard
All the other examples I saw perform the svd on the covariance matrix, e.g.gist.github.com/duschendestroyer/5170087 .Mirianmirielle
@Mirianmirielle The example you just linked to shows ZCA whitening, which is one of many different ways to whiten a matrix. For any orthogonal matrix R, R * X_white will also have identity covariance. In ZCA, R is chosen to be U (i.e. the eigenvectors of X * X.T). This particular transformation results in whitened data that is as close as possible to X (in the least-squares sense). If you just want whitened data you can compute X_white as above (take a look at the values in X_white.T * X_white if you don't believe me).Rickrickard
beware people that this is true just for symmetric matrices.. :)Viddah
@Viddah The SVD approach will also work for non-square inputs if you pass full_matrices=False to np.linalg.svd (see my update). Equivalently, you could replace the singular values with an observations-by-components identity matrix, e.g. U.dot(np.eye(*X.shape)).dot(Vt), but it's cheaper to simply avoid calculating the full matrices for U and Vt.Rickrickard
The results of the first and second solutions differ for a 3x2 matrix like [[1,2],[2,4],[3,6]]: the SVD version handles the collinear columns, but the first one doesn'tPrudy
Hi there, I think your covariance matrix computations assumes that the data has already been centered at zero, right?Seldan
Dear Ali_m, is this method suitable for a large number of data points, for example 400?Manville

If you use Python's scikit-learn library for this, you can just set the built-in whiten parameter:

from sklearn.decomposition import PCA
pca = PCA(whiten=True)
whitened = pca.fit_transform(X)

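For example, with some correlated synthetic data (a small sketch, not from the original answer; the shapes are arbitrary), the sample covariance of the whitened output comes out close to the identity:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = np.dot(rng.randn(200, 3), rng.randn(3, 3))   # correlated synthetic data

whitened = PCA(whiten=True).fit_transform(X)
print(np.cov(whitened, rowvar=False).round(3))   # approximately the identity matrix
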
Check the documentation.

Privilege answered 28/6, 2016 at 12:55 Comment(0)

I think you need to transpose V and scale by the inverse square root of S. So the formula is

matrix_to_multiply_with_data = transpose(V) * S^(-1/2)

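A minimal numpy sketch of my reading of this formula (it assumes the data matrix is observations-by-components, already mean-centered, and has a full-rank covariance so all eigenvalues are positive):

import numpy as np

def pca_whiten(data):
    # eigendecomposition of the covariance matrix
    S, V = np.linalg.eigh(np.cov(data, rowvar=False))
    # transpose(V) scaled by S^(-1/2); rows of data are observations
    W = np.diag(1.0 / np.sqrt(S)).dot(V.T)
    return data.dot(W.T)
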
Vinegarette answered 6/7, 2011 at 21:54 Comment(0)

Use a ZCA mapping instead:

function [Xw] = whiten(X)
  % Compute and apply the ZCA mapping
  mu_X = mean(X, 1);
  X = bsxfun(@minus, X, mu_X);
  Xw = X / sqrtm(cov(X));
end 
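
For anyone working in numpy rather than MATLAB, a rough equivalent of this ZCA mapping might look like the sketch below (an eigendecomposition stands in for sqrtm; it assumes the covariance matrix is full rank):

import numpy as np

def zca_whiten(X):
    # subtract the column means, as in the MATLAB version
    X = X - X.mean(axis=0)
    # inverse matrix square root of the covariance, via eigendecomposition
    d, V = np.linalg.eigh(np.cov(X, rowvar=False))
    W = V.dot(np.diag(1.0 / np.sqrt(d))).dot(V.T)
    return X.dot(W)
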
Easiness answered 5/12, 2021 at 17:44 Comment(2)
what language is this?Astto
@Astto This is not.... the greatest language in the world, no ....This is just MATLAB.Easiness
