Could we get different solutions for eigenvectors from a matrix?

My goal is to find the eigenvectors of a matrix. In Matlab, I get them with [V,D] = eig(M). I also used WolframAlpha to double-check my results.

We have a 10×10 matrix called M:

0.736538062307847   -0.638137874226607  -0.409041107160722  -0.221115060391256  -0.947102932298308  0.0307937582853794  1.23891356582639    1.23213871779652    0.763885436104244   -0.805948245321096
-1.00495215920171   -0.563583317483057  -0.250162608745252  0.0837145788064272  -0.201241986127792  -0.0351472158148094 -1.36303599752928   0.00983020375259212 -0.627205458137858  0.415060573134481
0.372470672825535   -0.356014310976260  -0.331871925811400  0.151334279460039   0.0983275066581362  -0.0189726910991071 0.0261595600177302  -0.752014960080128  -0.00643718050231003    0.802097123260581
1.26898635468390    -0.444779390923673  0.524988731629985   0.908008064819586   -1.66569084499144   -0.197045800083481  1.04250295411159    -0.826891197039745  2.22636770820512    0.226979917020922
-0.307384714237346  0.00930402052877782 0.213893752473805   -1.05326116146192   -0.487883985126739  0.0237598951768898  -0.224080566774865  0.153775526014521   -1.93899137944122   -0.300158630162419
7.04441299430365    -1.34338456640793   -0.461083493351887  5.30708311554706    -3.82919170270243   -2.18976040860706   6.38272280044908    2.33331906669527    9.21369926457948    -2.11599193328696
1   0   0   0   0   0   0   0   0   0
0   1   0   0   0   0   0   0   0   0
0   0   0   1   0   0   0   0   0   0
0   0   0   0   0   0   1   0   0   0

D:

2.84950796497613 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i
0.00000000000000 + 0.00000000000000i    1.08333535157800 + 0.971374792725758i   0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i
0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    1.08333535157800 - 0.971374792725758i   0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i
0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    -2.05253164206377 + 0.00000000000000i   0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i
0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    -0.931513274011512 + 0.883950434279189i 0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i
0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    -0.931513274011512 - 0.883950434279189i 0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i
0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    -1.41036956613286 + 0.354930202789307i  0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i
0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    -1.41036956613286 - 0.354930202789307i  0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i
0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    -0.374014257422547 + 0.00000000000000i  0.00000000000000 + 0.00000000000000i
0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.00000000000000 + 0.00000000000000i    0.165579401742139 + 0.00000000000000i

V:

-0.118788118233448 + 0.00000000000000i  0.458452024790792 + 0.00000000000000i   0.458452024790792 + -0.00000000000000i  -0.00893883603500744 + 0.00000000000000i    -0.343151745490688 - 0.0619235203325516i    -0.343151745490688 + 0.0619235203325516i    -0.415371644459693 + 0.00000000000000i  -0.415371644459693 + -0.00000000000000i -0.0432672840354827 + 0.00000000000000i 0.0205670999343567 + 0.00000000000000i
0.0644460666316380 + 0.00000000000000i  -0.257319460426423 + 0.297135138351391i -0.257319460426423 - 0.297135138351391i 0.000668740843331284 + 0.00000000000000i    -0.240349418297316 + 0.162117384568559i -0.240349418297316 - 0.162117384568559i -0.101240986260631 + 0.370051721507625i -0.101240986260631 - 0.370051721507625i 0.182133003667802 + 0.00000000000000i   0.0870047828436781 + 0.00000000000000i
-0.0349638967773464 + 0.00000000000000i -0.0481533171088709 - 0.333551383088345i    -0.0481533171088709 + 0.333551383088345i    -5.00304864960391e-05 + 0.00000000000000i   -0.0491721720673945 + 0.235973015480054i    -0.0491721720673945 - 0.235973015480054i    0.305000451960374 + 0.180389787086258i  0.305000451960374 - 0.180389787086258i  -0.766686233364027 + 0.00000000000000i  0.368055402163444 + 0.00000000000000i
-0.328483258287378 + 0.00000000000000i  -0.321235466934363 - 0.0865401147007471i    -0.321235466934363 + 0.0865401147007471i    -0.0942807049530764 + 0.00000000000000i -0.0354015249204485 + 0.395526630779543i    -0.0354015249204485 - 0.395526630779543i    -0.0584777280581259 - 0.342389123727367i    -0.0584777280581259 + 0.342389123727367i    0.0341847135233905 + 0.00000000000000i  -0.00637190625187862 + 0.00000000000000i
0.178211880664383 + 0.00000000000000i   0.236391683569043 - 0.159628238798322i  0.236391683569043 + 0.159628238798322i  0.00705341924756006 + 0.00000000000000i 0.208292766328178 + 0.256171148954103i  0.208292766328178 - 0.256171148954103i  -0.319285221542254 - 0.0313551221105837i    -0.319285221542254 + 0.0313551221105837i    -0.143900055026164 + 0.00000000000000i  -0.0269550068563120 + 0.00000000000000i
-0.908350536903352 + 0.00000000000000i  0.208752559894992 + 0.121276611951418i  0.208752559894992 - 0.121276611951418i  -0.994408141243082 + 0.00000000000000i  0.452243212306010 + 0.00000000000000i   0.452243212306010 + -0.00000000000000i  0.273997199582534 - 0.0964058973906923i 0.273997199582534 + 0.0964058973906923i -0.0270087356931836 + 0.00000000000000i 0.00197408431000798 + 0.00000000000000i
-0.0416872385315279 + 0.00000000000000i 0.234583850413183 - 0.210340074973091i  0.234583850413183 + 0.210340074973091i  0.00435502958971167 + 0.00000000000000i 0.160642433241717 + 0.218916331789935i  0.160642433241717 - 0.218916331789935i  0.276971588308683 + 0.0697020017773242i 0.276971588308683 - 0.0697020017773242i 0.115683515205146 + 0.00000000000000i   0.124212913671392 + 0.00000000000000i
0.0226165595687948 + 0.00000000000000i  0.00466011130798999 + 0.270099580217056i    0.00466011130798999 - 0.270099580217056i    -0.000325812684017280 + 0.00000000000000i   0.222664282388928 + 0.0372585184944646i 0.222664282388928 - 0.0372585184944646i 0.129604953142137 - 0.229763189016417i  0.129604953142137 + 0.229763189016417i  -0.486968076893485 + 0.00000000000000i  0.525456559984271 + 0.00000000000000i
-0.115277185508808 + 0.00000000000000i  -0.204076984892299 + 0.103102999488027i -0.204076984892299 - 0.103102999488027i 0.0459338618810664 + 0.00000000000000i  0.232009172507840 - 0.204443701767505i  0.232009172507840 + 0.204443701767505i  -0.0184618718969471 + 0.238119465887194i    -0.0184618718969471 - 0.238119465887194i    -0.0913994930540061 + 0.00000000000000i -0.0384824814248494 + 0.00000000000000i
-0.0146296269545178 + 0.00000000000000i 0.0235283849818557 - 0.215256480570249i 0.0235283849818557 + 0.215256480570249i -0.00212178438590738 + 0.00000000000000i    0.0266030060993678 - 0.209766836873709i 0.0266030060993678 + 0.209766836873709i -0.172989400304240 - 0.0929551855455724i    -0.172989400304240 + 0.0929551855455724i    -0.309302420721495 + 0.00000000000000i  0.750171291624984 + 0.00000000000000i

I was given the following results (screenshots):

  1. The original matrix M
  2. The results from WolframAlpha
  3. The results from Matlab's eig: D (the eigenvalues) and V (the eigenvectors)

Is it possible to get different solutions for the eigenvectors, or should the answer be unique? I would like to get this concept clarified.

It answered 24/10, 2012 at 0:10 Comment(0)

Eigenvectors are NOT unique, for a variety of reasons. Change the sign, and an eigenvector is still an eigenvector for the same eigenvalue. In fact, multiply by any nonzero constant, and it is still an eigenvector. Different tools can choose different normalizations.

If an eigenvalue has multiplicity greater than one, the eigenvectors are again not unique: any set of vectors that spans the same eigenspace is an equally valid choice.
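This non-uniqueness is easy to verify numerically. The following sketch in Python/NumPy (not part of the original answer) checks that scaling an eigenvector by a real constant, by -1, or even by a complex unit still yields an eigenvector for the same eigenvalue:

```python
import numpy as np

# Any nonzero scalar multiple of an eigenvector is still an
# eigenvector for the same eigenvalue.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)
v = eigvecs[:, 0]          # one eigenvector
lam = eigvals[0]           # its eigenvalue

for c in [2.0, -1.0, 1j]:  # a real scale, a sign flip, and a complex unit
    w = c * v
    # A @ w should still equal lam * w
    assert np.allclose(A @ w, lam * w)
print("all scaled vectors are still eigenvectors")
```

This is why two correct tools can print different numbers for the "same" eigenvector.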

Keeleykeelhaul answered 24/10, 2012 at 0:25 Comment(4)
Does this apply to the imaginary part as well? In other words: since different normalizations give different eigenvectors, could we say that if the imaginary part of an element is zero in one approach's solution, then the imaginary part of the corresponding element will be zero in every other approach's solution?It
I can multiply an eigenvector by any complex number, including i = sqrt(-1), and it is still an eigenvector. So no, you cannot rely on that.Keeleykeelhaul
I want to see whether I can use it as a rule in an implementation. Example: suppose the element V[i,j] of the eigenvector matrix equals a+bi when computed by approach A, and a'+b'i when computed by approach B. Can we say these two answers correspond to each other? And if so, must they satisfy a^2 + b^2 = a'^2 + b'^2?It
You cannot say ANYTHING from a single element.Keeleykeelhaul

As woodchips points out (+1), eigenvectors are unique only up to scaling (and, for repeated eigenvalues, up to a choice of basis for the eigenspace). This is readily apparent from the definition: an eigenvector/eigenvalue pair satisfies the eigenvalue equation A*v = k*v, where A is the matrix, v is the eigenvector, and k is the eigenvalue. If v satisfies this equation, so does any nonzero multiple of v.

Let's consider a much simpler example than your (horrendous looking) question:

M = [1, 2, 3; 4, 5, 6; 7, 8, 9];
[EigVec, EigVal] = eig(M);

Matlab yields:

EigVec =
-0.2320   -0.7858    0.4082
-0.5253   -0.0868   -0.8165
-0.8187    0.6123    0.4082

while Mathematica yields:

EigVec = 
0.2833    -1.2833    1
0.6417    -0.1417    -2
1         1          1

From the Matlab documentation:

"For eig(A), the eigenvectors are scaled so that the norm of each is 1.0.".

Mathematica, on the other hand, is clearly scaling the eigenvectors so that the final element is unity.

Even just eyeballing the outputs I've given, you can start to see the relationships emerge (in particular, compare the third eigenvector from both outputs).

By the way, I suggest you edit your question to use a simpler input matrix M, such as the one I've used here. That will make it much more readable for anyone who visits this page in the future. It is actually not a bad question, but the way it is currently formatted will likely cause it to be down-voted.

Semaphore answered 24/10, 2012 at 0:38 Comment(2)
Why do the eigenvectors have imaginary values in some approaches but not in others?It
@farzinparsa The problem of finding eigenvalues can be made equivalent to the problem of finding the roots of a polynomial; the proof should be in any good text on linear algebra. Now, it is common knowledge that the roots of polynomials can be complex (e.g., think of the quadratic formula from high school). Therefore eigenvalues, and thus eigenvectors, may be complex. Are there conditions guaranteeing real eigenvalues? Yes: if a matrix is symmetric, its eigenvalues will be real. This is covered in any standard text on linear algebra.Semaphore
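The symmetric-matrix claim in that comment is quick to check. The following NumPy sketch (not from the original thread) contrasts a symmetric matrix, whose spectrum is real, with a rotation matrix, whose eigenvalues are purely imaginary:

```python
import numpy as np

# A real symmetric matrix has real eigenvalues; a general real matrix may not.
S = np.array([[1.0, 2.0],
              [2.0, 5.0]])           # symmetric
G = np.array([[0.0, -1.0],
              [1.0,  0.0]])          # 90-degree rotation: eigenvalues are +/- i
sym_vals = np.linalg.eigvals(S)
gen_vals = np.linalg.eigvals(G)
print(np.isreal(sym_vals).all())     # symmetric matrix: all eigenvalues real
print(np.iscomplex(gen_vals).all())  # rotation matrix: all eigenvalues complex
```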

I completely agree with Colin T Bowers that Mathematica normalizes so that the last element of each eigenvector becomes one. If anybody wants to reproduce Mathematica's eigenvector output in Matlab, we can tell Matlab to normalize the last element of each eigenvector to 1 using the following steps.

M = [1, 2, 3; 4, 5, 6; 7, 8, 9];
[EigVec, EigVal] = eig(M);
sf = 1./EigVec(end,:);              % reciprocal of the last element of each eigenvector
sf = repmat(sf, size(EigVec,1), 1); % replicate the scale factor down each column
Normalize_EigVec = EigVec.*sf;

Normalize_EigVec =

    0.2833   -1.2833    1.0000
    0.6417   -0.1417   -2.0000
    1.0000    1.0000    1.0000
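The same rescaling can be reproduced outside Matlab. Here is a sketch in Python/NumPy (not part of the original answer): dividing each eigenvector column by its own last element converts unit-norm output, as Matlab's eig produces, into Mathematica's last-element-equals-one convention.

```python
import numpy as np

# Rescale each eigenvector so its last element is 1 (Mathematica's convention).
M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
eigvals, eigvecs = np.linalg.eig(M)    # unit-norm columns, like Matlab's eig
normalized = eigvecs / eigvecs[-1, :]  # divide each column by its last element
print(np.round(normalized, 4))         # last row is all ones
```

The columns now match the Mathematica output quoted above, up to the order in which the solver returns the eigenvalues.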
Beamon answered 24/10, 2012 at 5:49 Comment(4)
Does this apply to the imaginary part as well? In other words: since different normalizations give different eigenvectors, could we say that if the imaginary part of an element is zero in one approach's solution, then the imaginary part of the corresponding element will be zero in every other approach's solution?It
I just received a notification that you tried to edit my answer and replace it with yours. I'm guessing you thought you were editing your own answer, but accidentally clicked mine instead :-) Anyway, no problems, the moderators picked it up and prevented the edit. By the way, this is a neat little extension to my answer - worth a +1.Semaphore
@Colin T Bowers: I didn't; I asked a question and am looking for the answer. I don't have an answer to replace :) I want to see whether I can use it as a rule in an implementation. Example: suppose the element V[i,j] equals a+bi when computed by approach A, and a'+b'i when computed by approach B. Can we say these two answers correspond to each other? And if so, must they satisfy a^2 + b^2 = a'^2 + b'^2?It
@farzinparsa My comment was intended for veeresh, not you :-) In regards to your question, it would be best to say the two answers are equally valid. I don't think the condition you state is necessary or sufficient.Semaphore

As Rody points out, the normalization Mathematica uses is to make the last element unity. Other eig variants, such as the QZ algorithm (which you have to use in Matlab Coder, for instance, since Cholesky isn't supported), don't normalize the way Matlab does for [V, lam] = eig(C). Example: [V, lam] = eig(C, eye(size(C)), 'qz');

From the documentation http://www.mathworks.com/help/techdoc/ref/eig.html

Note: For eig(A), the eigenvectors are scaled so that the norm of each is 1.0. For eig(A,B), eig(A,'nobalance'), and eig(A,B,flag), the eigenvectors are not normalized. Also note that if A is symmetric, eig(A,'nobalance') ignores the nobalance option since A is already balanced.

For [V, lam] = eig(C), the eigenvectors are scaled so that the norm of each is 1.0, which is what we need here. Matlab does that for the Cholesky formulation. So how does one re-normalize the eigenvectors produced by QZ to the same scale? Like so:

W = V;                                  % keep a copy of the un-normalized vectors
for i = 1:size(V,2)                     % for each column
    V(:,i) = V(:,i) / norm(V(:,i), 2);  % normalize column i to unit length
end

This finds the length of each vector and divides its elements by that length. Mathematica does the same kind of rescaling, except it makes the last element 1 instead of normalizing to unit length. http://www.fundza.com/vectors/normalize/
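The column-normalization loop above has a one-line analogue in NumPy; this sketch (not part of the original answer) shows the same operation vectorized:

```python
import numpy as np

# Rescale each eigenvector (column) to unit 2-norm, mirroring the Matlab loop.
M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
_, V = np.linalg.eig(M)
V_unit = V / np.linalg.norm(V, axis=0)  # divide each column by its own length
print(np.allclose(np.linalg.norm(V_unit, axis=0), 1.0))  # all columns unit length
```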

Note that the vectors and values are not necessarily in the same order across implementations, so you may still need to sort them. You can sort Matlab's output by eigenvalue like so:

lam = diag(lam);                       % extract the eigenvalues into a vector
[sorted_lam, index] = sort(lam);
for cont = 1:length(sorted_lam)
   sorted_V(:,cont) = V(:,index(cont)); % reorder the eigenvectors to match
end
W = sorted_V;
lam = diag(sorted_lam);

Even after doing this, the signs may not point in the same direction (an eigenvector multiplied by -1 is still an eigenvector). Note that the same sorting has to be applied to lam (the eigenvalues), or those will be out of order.

The typical convention is to flip the signs of a column if the first element in the column is negative.

One thing you could do is flip the signs of a column if more than one of its elements is negative:

% flip the sign of a column if more than one of its elements is negative
W = sorted_V;
for i = 1:size(W,2)              % for each column in W
    s = sign(W(:,i));
    inegatif = sum(s(:) == -1);  % count the negative elements
    if (inegatif > 1)
        W(:,i) = -W(:,i);
    end
end
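The sign-fixing heuristic above can be sketched in Python/NumPy as well (a hypothetical helper, not part of the original answer, flipping a column when more than half of its entries are negative):

```python
import numpy as np

def fix_signs(V):
    """Flip a column's sign when most of its entries are negative,
    so outputs from different solvers are easier to compare."""
    V = V.copy()
    for i in range(V.shape[1]):
        if np.sum(V[:, i] < 0) > V.shape[0] // 2:  # majority negative
            V[:, i] = -V[:, i]
    return V

V = np.array([[-0.5,  0.3],
              [-0.6, -0.2],
              [-0.4,  0.9]])
print(fix_signs(V)[:, 0])  # first column flipped to all-positive
```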

But this only really helps if the elements aren't close to 0, because if they are, a different algorithm might find the value on the other side of 0 instead. Still, it's better than nothing.

One final thing: for B (the second input matrix of the generalized eigenvalue problem), I am using eye(size(C)). Is there a better way to choose B to make this algorithm give answers closer to those of Cholesky, or more accurate? You can use any real matrix of the same size as B, including A itself or A' (where A is the input matrix), but what is a good choice? Maybe A'? I noticed that for some inputs a 3x3 matrix of -1s seems to give nearly the same answers as 'chol'.

https://www.mathworks.com/help/matlab/ref/eig.html?searchHighlight=eig&s_tid=doc_srchtitle#inputarg_B

Gooch answered 4/11, 2016 at 14:36 Comment(0)
