Finding generalized eigenvectors numerically in Matlab
I have a matrix such as this example (my actual matrices can be much larger)

A = [-1 -2   -0.5;
      0  0.5  0;
      0  0   -1];

that has only two linearly independent eigenvectors (the eigenvalue -1 is repeated). I would like to obtain a complete basis with generalized eigenvectors. One way I know how to do this is with Matlab's jordan function in the Symbolic Math Toolbox, but I'd prefer something designed for numeric inputs (indeed, with two outputs, jordan fails for large matrices: "Error in MuPAD command: Similarity matrix too large."). I don't need the Jordan canonical form, which is notoriously unstable in numeric contexts, just a matrix of generalized eigenvectors. Is there a function or combination of functions that automates this in a numerically stable way, or must one use the generic manual method (and how stable is such a procedure)?
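For reference, the symbolic call I mean is the following (a minimal sketch; it works for this small example but hits the MuPAD error above for my larger matrices):

[V, J] = jordan(sym(A));  % columns of V include the generalized eigenvectors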

NOTE: By "generalized eigenvector," I mean a non-zero vector that can be used to augment the incomplete basis of a so-called defective matrix. I do not mean the eigenvectors that correspond to the eigenvalues obtained from solving the generalized eigenvalue problem using eig or qz (though this latter usage is quite common, I'd say that it's best avoided). Unless someone can correct me, I don't believe the two are the same.
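To make the distinction concrete, a sketch using the example above (the bare null call assumes exact rank detection, which is part of what I'm worried about):

% What I do NOT mean: the generalized eigenvalue problem A*x = lambda*B*x
[Vg, Dg] = eig(A, eye(3));     % or qz; not what this question is about

% What I DO mean: a non-zero v with (A - lambda*I)^k*v = 0 for some k > 1
lambda = -1;                   % the repeated, defective eigenvalue of A
null((A - lambda*eye(3))^2)    % 2-D: the eigenvector plus its generalized partner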


UPDATE 1 – Five months later:

See my answer here for how to obtain generalized eigenvectors symbolically for matrices larger than 82-by-82 (the limit for my test matrix in this question).

I'm still interested in numeric schemes (or how such schemes might be unstable if they're all related to calculating the Jordan form). I don't wish to blindly implement the linear algebra 101 method that has been marked as a duplicate of this question, as it's not a numerical algorithm, but rather a pencil-and-paper method used to evaluate students (I suppose that it could be implemented symbolically, however). If anyone can point me to either an implementation of that scheme or a numerical analysis of it, I'd be interested in that.
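For reference, my reading of that pencil-and-paper scheme amounts to the naive sketch below (illustrative only; the ad hoc tolerance and the explicit powers of (A - mu*I) are precisely the parts whose stability I'm asking about):

function V = naivegenvecs(A)
%NAIVEGENVECS Textbook generalized-eigenvector chains; illustrative only.
n = size(A, 1);
tol = 1e3*eps*norm(A);               % ad hoc eigenvalue-grouping tolerance
lam = eig(A);
V = [];
while ~isempty(lam)
    mu = lam(1);
    cluster = abs(lam - mu) < tol;
    m = sum(cluster);                % algebraic multiplicity (by tolerance)
    lam(cluster) = [];
    k = 1;
    N = null(A - mu*eye(n));         % rank detection here is the fragile step
    while size(N, 2) < m && k < m    % grow null((A - mu*I)^k) up to multiplicity
        k = k + 1;
        N = null((A - mu*eye(n))^k);
    end
    V = [V, N];                      %#ok<AGROW>
end
end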

UPDATE 2 – Feb. 2015: All of the above is still true as tested in R2014b.

Midiron asked 3/9, 2013 at 20:57. Comments (15):
@natan: then I suggest you mark this question as a duplicate. – Wolfhound
@natan: Yes, I saw that question/answer. It's not a duplicate. It's basically a manual procedure that replicates what is shown in many textbooks and on Wikipedia. I know how to obtain the generalized eigenvectors myself – I'm looking for something a bit more self-contained that uses built-in routines in order to be careful about numeric issues. Do you know of anything or any algorithms? Also, FYI, eigensys no longer exists as of R2013a; eig(sym(A)) must be used. – Midiron
@DavidHeffernan: The error crops up at exactly 83-by-83 for a particular test matrix. – Midiron
@DavidHeffernan: In terms of the ultimate size I'm interested in, these matrices represent the behavior of the nodes of a dynamical neural network, so the system could easily have 100+ dimensions. These matrices, Jacobians, are very sparse, so it'd be nice to have a method that could take advantage of that. – Midiron
100x100 is tiny for numerical algos, even eigenproblem solving. Anyway, I don't know anything about your specific problem. Augmenting your defective matrices is not something I've ever studied, so I cannot help. – Sliver
See my answer here for how to obtain generalized eigenvectors symbolically for matrices larger than 82-by-82 (the limit for my test matrix in this question). – Midiron
just read this post and voted to reopen the question... – Tamtam
Thank you @bla. Let me know if you have a specific interest in this. I might ask a variant of this over at SciComp.StackExchange or Math.StackExchange. I also updated my answer to a related question – the jordan function is still limited in R2014b. I can't comment on how it might improve in a future release though... :-) – Midiron
Just out of interest, what kind of matrices are you trying to diagonalize here? Because if they are derived from real-world data, it seems to me they would never be exactly defective... Or it should be due to some property of the system you are studying, and they would always be defective in the exact same way, so you should be able to deal with it differently... Because I don't think there are numerically stable Jordan form algos. – Amygdala
@reverse_engineer: My matrices are not derived from real-world data. They come from linearizing a system of ODEs (competitive Lotka-Volterra equations) whose parameters have been specially designed such that the Jacobian is defective (i.e., has repeated eigenvalues). This is done to reduce the number of free parameters and obtain a desired behavior. The 3-by-3 example given in this question is a simple case. See my answer here for how to construct larger versions of this same system. – Midiron
@reverse_engineer: Yes, as far as I know, there is no numerically stable algorithm for the Jordan form. But I'm not interested in the Jordan form itself, just a stable means of obtaining the generalized eigenvectors directly, such that I can form a complete basis – or are the two inextricably linked? – Midiron
@horchler: So using something along the lines of null((A-eye(3)*eigenvalue)^multiplicity), as per the definition of generalized eigenvectors, would not be satisfactory for you? – Lacour
@horchler: Oh, never mind! I just saw the linked question with this approach. – Lacour
@Midiron Yes, the two are linked, because once you have the generalized eigenvalues and eigenvectors, your Jordan form is defined, and when you have the Jordan form, you can just read off the generalized eigenvectors... There's this article that claims to have a numerical algo to compute Jordan forms generally, but it's not implemented anywhere that I know of... But if your matrices are always defective in the exact same way, you can work around it with pseudoinverses; I'll show you in an answer. – Amygdala
@reverse_engineer: I was thinking that. Seems like it might be more correct to say that the algorithm for finding generalized eigenvectors is numerically unstable? Though the J=V\A*V part must contribute some as well. I'd also seen that paper/presentation. I'd trust it a bit more if it were peer-reviewed to some extent, but it's still interesting. I'm not sure they're saying that they can solve arbitrary systems, though. – Midiron

As mentioned in my comments, if your matrix is defective but you know which eigenvector/eigenvalue pairs you want to consider as identical given your tolerance, you can proceed as in the example below:

% example matrix A:
A = [1 0 0 0 0; 
     3 1 0 0 0; 
     6 3 2 0 0; 
     10 6 3 2 0;
     15 10 6 3 2]
% Produce eigenvalues and eigenvectors (not generalized ones)
[vecs,vals] = eig(A)

This should output:

vecs =

     0         0         0         0    0.0000
     0         0         0    0.2236   -0.2236
     0         0    0.0000   -0.6708    0.6708
     0    0.0000   -0.0000    0.6708   -0.6708
1.0000   -1.0000    1.0000   -0.2236    0.2236


vals =

 2     0     0     0     0
 0     2     0     0     0
 0     0     2     0     0
 0     0     0     1     0
 0     0     0     0     1

Here we see that the first three eigenvectors are almost identical to working precision, as are the last two. You must know the structure of your problem to identify the identical eigenvectors of identical eigenvalues. Here the eigenvalues are exactly identical, so we know which ones to consider, and we will assume that vectors 1-2-3 belong together, as do vectors 4-5. (In practice you would likely check the norm of the differences of the eigenvectors against your tolerance, as sketched below.)
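A minimal sketch of such a check (the tolerance value is an assumption you'd tune for your problem):

tol = 1e-8;  % assumed tolerance for this sketch
i = 1; j = 2;
sameVal = abs(vals(i,i) - vals(j,j)) < tol;
% eig normalizes each column, but its sign is arbitrary, so test both:
sameVec = min(norm(vecs(:,i) - vecs(:,j)), norm(vecs(:,i) + vecs(:,j))) < tol;
if sameVal && sameVec
    % treat columns i and j as one eigendirection of a repeated eigenvalue
end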

Now we proceed to compute the generalized eigenvectors from the chain relation (A - lambda*I)*v_next = v_current. This is ill-conditioned to solve directly with MATLAB's \ because (A - lambda*I) is, by construction, rank-deficient. So we use pseudoinverses:

% Jordan chain for the eigenvalue 2: (A - 2I)*genvec21 = v1, (A - 2I)*genvec22 = genvec21
genvec21 = pinv(A - vals(1,1)*eye(size(A)))*vecs(:,1);
genvec22 = pinv(A - vals(1,1)*eye(size(A)))*genvec21;
% Chain for the eigenvalue 1: (A - I)*genvec1 = v4
genvec1 = pinv(A - vals(4,4)*eye(size(A)))*vecs(:,4);

Which should give:

genvec21 =

   -0.0000
    0.0000
   -0.0000
    0.3333
         0

genvec22 =

    0.0000
   -0.0000
    0.1111
   -0.2222
         0

genvec1 =

    0.0745
   -0.8832
    1.5317
    0.6298
   -3.5889

Which are our other generalized eigenvectors. If we now assemble these into a basis and check that it yields the Jordan normal form:

% Columns ordered as chains: eigenvalue 2 (v1 plus its two generalized
% eigenvectors), then eigenvalue 1 (v4 plus its generalized eigenvector)
jordanJ = [vecs(:,1) genvec21 genvec22 vecs(:,4) genvec1];
jordanJ^-1*A*jordanJ

We obtain:

ans =

2.0000    1.0000    0.0000   -0.0000   -0.0000
     0    2.0000    1.0000   -0.0000   -0.0000
     0    0.0000    2.0000    0.0000   -0.0000
     0    0.0000    0.0000    1.0000    1.0000
     0    0.0000    0.0000   -0.0000    1.0000

Which is our Jordan normal form (up to working-precision errors).
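As a quick sanity check (a sketch), the basis should be complete, and here the exact Jordan form has integer entries, so the transform should match it up to round-off:

assert(rank(jordanJ) == size(A,1))    % the five columns form a complete basis
J = jordanJ\(A*jordanJ);              % backslash is preferable to an explicit inverse
relerr = norm(J - round(J))/norm(A)   % ~eps: J matches the exact integer Jordan form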

Amygdala answered 16/2, 2015 at 9:47. Comments (3):
I'm not sure how this can be numerically stable? It looks to be a form of the basic method for finding generalized eigenvectors. For a matrix of any considerable size with many repeated eigenvalues, one could effectively end up multiplying pinv(A - vals(1,1)*eye(size(A))) by itself many times. – Midiron
I do like the idea of taking advantage of structure, at least in some cases, but that doesn't seem to be what you're doing here. It seems like one could devise something like that for my example here, which has only two unique eigenvalues with one repeated N-1 times. Might permuting/shifting eigenvector values work in some cases? – Midiron
@Midiron Ehh, yes, you might be right in the sense that if one eigenvalue is repeated a great number of times, there could be instability in this multiplication, but I suspect this is rare in practice, no?... I thought you just needed a non-symbolic way of computing generalized eigenvectors, and this is indeed the straightforward way. But the more I read about it now (probably not as much as you have read about it), the more it seems obvious that this problem is avoided and alternatives like the Schur decomposition are used instead. The internet has very little info on this... Good luck with your research anyway! – Amygdala
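For completeness, the Schur route mentioned in the last comment would look roughly like this (a sketch only; it yields an orthonormal basis for the invariant subspace of the repeated eigenvalue rather than a Jordan chain):

A = [-1 -2 -0.5; 0 0.5 0; 0 0 -1];   % the 3-by-3 example from the question
[U, T] = schur(A, 'complex');        % A = U*T*U' with U unitary, T triangular
sel = abs(diag(T) - (-1)) < 1e-8;    % select the defective eigenvalue -1
[US, TS] = ordschur(U, T, sel);      % reorder so its cluster leads; US(:,1:2)
                                     % spans the same subspace as the eigenvector
                                     % plus its generalized partner, orthonormally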
