Are eigenvectors returned by R function eigen() wrong?
#eigen values and vectors
a <- matrix(c(2, -1, -1, 2), 2)

eigen(a)

I am trying to find eigenvalues and eigenvectors in R. The eigen() function returns the correct eigenvalues, but the eigenvector values look wrong to me. Is there any way to fix that?


Impromptu answered 22/9, 2018 at 16:47 Comment(4)
What precisely do you consider to be an error here? Keep in mind that R returns eigenvectors normalized to unit length, and recall that they are also defined up to, in this case, the sign.Galligan
the eigenvectors should be something like (-1, 1) and (1, -1), but they come out as decimals, while the hand-worked answers are purely non-decimal @JuliusVainoraImpromptu
Then my comment and @李哲源's answer give the reason for that: c(-1, 1) / sqrt(1^2 + 1^2) is c(-0.707, 0.707).Galligan
Yeah, I got your point @JuliusVainora. I sincerely apologise for the delay in responding. @李哲源Impromptu

Some pencil-and-paper work tells you

  • the eigenvectors for eigenvalue 3 are (-s, s), for any non-zero real value s;
  • the eigenvectors for eigenvalue 1 are (t, t), for any non-zero real value t.

Scaling the eigenvectors to unit length gives

s = ± sqrt(0.5) = ±0.7071068
t = ± sqrt(0.5) = ±0.7071068
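
This rescaling is easy to check in R (a small sketch; v3 is the hand-derived eigenvector for eigenvalue 3 from above):

```r
v3 <- c(-1, 1)            # hand-derived eigenvector for eigenvalue 3
v3 / sqrt(sum(v3^2))      # rescale to unit length
# [1] -0.7071068  0.7071068
```

This is exactly why the "purely non-decimal" textbook answer shows up as decimals in the output of eigen.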

Scaling is convenient because if the matrix is real symmetric, the matrix of eigenvectors is orthogonal (its columns are orthonormal), so its inverse is its transpose. Taking your real symmetric matrix a as an example:

a <- matrix(c(2, -1, -1, 2), 2)
#     [,1] [,2]
#[1,]    2   -1
#[2,]   -1    2

E <- eigen(a)

d <- E$values
#[1] 3 1

u <- E$vectors
#           [,1]       [,2]
#[1,] -0.7071068 -0.7071068
#[2,]  0.7071068 -0.7071068

u %*% diag(d) %*% solve(u)  ## don't do this stupid computation in practice
#     [,1] [,2]
#[1,]    2   -1
#[2,]   -1    2

u %*% diag(d) %*% t(u)      ## don't do this stupid computation in practice
#     [,1] [,2]
#[1,]    2   -1
#[2,]   -1    2

crossprod(u)
#     [,1] [,2]
#[1,]    1    0
#[2,]    0    1

tcrossprod(u)
#     [,1] [,2]
#[1,]    1    0
#[2,]    0    1
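
One way (among others) to reassure yourself that nothing is wrong with eigen's output is to check the defining property A u = u diag(d) numerically:

```r
a <- matrix(c(2, -1, -1, 2), 2)
E <- eigen(a)

# each column u_i of E$vectors must satisfy a %*% u_i == d_i * u_i
all.equal(a %*% E$vectors, E$vectors %*% diag(E$values))
# [1] TRUE
```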

How to find eigenvectors using the textbook method

The textbook method is to solve the homogeneous system (A - λI)x = 0 for a null space basis. The NullSpace function from this answer of mine would be helpful.
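
That NullSpace function is not reproduced here; as a stand-in, here is one possible sketch of a null-space routine built on the SVD (the helper name null_space_svd is mine, and unlike NullSpace below it returns a unit-length basis, in the style of pracma::nullspace):

```r
# null space of M: right-singular vectors whose singular values are ~0
null_space_svd <- function(M, tol = 1e-10) {
  s <- svd(M, nv = ncol(M))
  small <- s$d < tol * max(s$d, 1)
  # columns of V beyond length(d) (wide matrices) always lie in the null space
  keep <- c(small, rep(TRUE, ncol(M) - length(s$d)))
  s$v[, keep, drop = FALSE]
}

a <- matrix(c(2, -1, -1, 2), 2)
null_space_svd(a - diag(3, nrow(a)))   # unit-length basis; sign may differ
```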

## your matrix
a <- matrix(c(2, -1, -1, 2), 2)

## knowing that eigenvalues are 3 and 1

## eigenvector for eigenvalue 3
NullSpace(a - diag(3, nrow(a)))
#     [,1]
#[1,]   -1
#[2,]    1

## eigenvector for eigenvalue 1
NullSpace(a - diag(1, nrow(a)))
#     [,1]
#[1,]    1
#[2,]    1

As you can see, they are not "normalized". By contrast, pracma::nullspace gives "normalized" eigenvectors, so you get something consistent with the output of eigen (up to a possible sign flip):

library(pracma)

nullspace(a - diag(3, nrow(a)))
#           [,1]
#[1,] -0.7071068
#[2,]  0.7071068

nullspace(a - diag(1, nrow(a)))
#          [,1]
#[1,] 0.7071068
#[2,] 0.7071068
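
Whichever scaling convention you adopt, you can always confirm that a candidate vector v really is an eigenvector for eigenvalue λ by comparing a %*% v with λ * v:

```r
a <- matrix(c(2, -1, -1, 2), 2)
v <- c(1, 1)                          # textbook eigenvector for eigenvalue 1
all.equal(as.vector(a %*% v), 1 * v)  # a %*% v equals 1 * v
# [1] TRUE
```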
Julianjuliana answered 22/9, 2018 at 16:55 Comment(1)
Yes, I mean the NullSpace function. It actually works fine for finding eigenvectors! But I think it can also be used to solve linear homogeneous systems in general. There isn't any similar function in base R that does what NullSpace does.Sanguinary
