Eigenvalues in a Nutshell


Although eigenvalues are one of the most important concepts in linear algebra, some of us eigen-struggle with them without understanding their usefulness and beauty. In this talk I'll briefly review the definition of eigenvalues, emphasizing the associated geometric idea, and I'll show how they can be used in some applications. From the Un-Distinguished Lecture Series (http://ws.cs.ubc.ca/~udls/). The talk was given Mar. 16, 2007.

Transcript of Eigenvalues in a Nutshell

Eigenvalues in a Nutshell

Mariquita Flores Garrido

UDLS, March 16th 2007

Just in case…

• Scalar multiple of a vector

• Addition of vectors

[Figure: a vector x and its scalar multiples λx for the cases 1 ≤ λ, 0 ≤ λ ≤ 1, −1 ≤ λ ≤ 0 and λ ≤ −1; two vectors v1, v2 and their sum v1 + v2.]

Linear Transformations

A ∈ R^(m×n) ⇒ f: R^n → R^m

• Rectangular matrices

E.g.

$\begin{pmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 5 \\ 7 \\ 9 \end{pmatrix}$

Ax = b: the transformation of x by A.

A (m × n) · x (n × 1) = Ax (m × 1)
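To make the picture concrete, here is a minimal sketch (using NumPy, which the talk does not mention) of a rectangular matrix acting as a linear map from R^2 to R^3, with the 3 × 2 matrix from the example above:

```python
import numpy as np

# A 3x2 matrix maps vectors from R^2 to R^3.
A = np.array([[1, 4],
              [2, 5],
              [3, 6]])
x = np.array([1, 1])

b = A @ x                          # transformation of x by A
print(b)                           # [5 7 9] -> a vector in R^3
print(A.shape, x.shape, b.shape)   # (3, 2) (2,) (3,)
```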

Linear Transformations

• Square matrices

A ∈ R^(n×n) ⇒ f: R^n → R^n (*endomorphism)

*Stretch/compression: $\begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}$   *Reflection: $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$   *Rotation: $\begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix}$

Bonus: Shear

*Shear in x-direction: $\begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}$   *Shear in y-direction: $\begin{pmatrix} 1 & 0 \\ k & 1 \end{pmatrix}$

E.g., shear in x-direction:

$\begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x + ky \\ y \end{pmatrix}$
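A short illustrative sketch (NumPy assumed; the shear factor and the point are arbitrary choices) of the shear in the x-direction acting on a point:

```python
import numpy as np

k = 0.5                       # shear factor (arbitrary choice)
S = np.array([[1.0, k],
              [0.0, 1.0]])    # shear in the x-direction

p = np.array([2.0, 3.0])      # a point (x, y)
print(S @ p)                  # [3.5 3.] -> (x + k*y, y): x shifts, y is unchanged
```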

Basis for a Subspace

A basis for R^n is a set of n linearly independent vectors.

[Figure: the canonical basis vectors e1, e2, e3 and the vector (1, 1, 2) decomposed along them.]

$\begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix} = 1 \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + 1 \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + 2 \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$

Basis for a Subspace

Any set of n linearly independent vectors can be a basis for R^n.

[Figure: the same vector expressed with respect to the canonical basis {e1, e2} and with respect to another basis {V1, V2}.]

Using the canonical basis: $\begin{pmatrix} a_1 \\ a_2 \end{pmatrix} = \;?$

Using V1, V2 … ?
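To make the question concrete, here is a hedged sketch (NumPy assumed; the vectors V1, V2 and the target vector are made-up values, not taken from the slides) of computing the coordinates of one vector in two different bases:

```python
import numpy as np

# Canonical-basis coordinates of a vector in R^2 (hypothetical example values).
a = np.array([2.0, -1.0])

# Another basis for R^2: the columns of B are V1 and V2 (also hypothetical).
V1 = np.array([1.0, 1.0])
V2 = np.array([-1.0, 1.0])
B = np.column_stack([V1, V2])

# Coordinates c such that c[0]*V1 + c[1]*V2 equals a.
c = np.linalg.solve(B, a)
print(c)          # [ 0.5 -1.5]
print(B @ c)      # [ 2. -1.] -> recovers the original vector
```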

EIGENVALUES

•"Eigen" - "own", "peculiar to", "characteristic" or "individual“; "propervalue“.

• An invariant subspace under an endomorphism.

• If A is an n × n matrix, a vector x ≠ 0 is called an eigenvector of A if

Ax = λx

and λ is called an eigenvalue of A.
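As a quick check of the definition, a minimal NumPy sketch (not part of the original talk; the matrix is an arbitrary choice):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                   # an arbitrary symmetric 2x2 matrix

eigenvalues, eigenvectors = np.linalg.eig(A)  # columns of `eigenvectors` are the x's

for lam, x in zip(eigenvalues, eigenvectors.T):
    # A x and lambda x should coincide (up to floating-point error).
    print(lam, np.allclose(A @ x, lam * x))   # prints each eigenvalue and True
```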

Quiz 1

• Square matrices (endomorphism)

*Stretch/compression: $\begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}$   *Reflection: $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$   *Rotation: $\begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix}$
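A small sketch (NumPy assumed; the angle is an arbitrary choice) computing the eigenvalues of the three matrices above; note that the rotation matrix has complex eigenvalues for a generic angle:

```python
import numpy as np

phi = np.pi / 4   # an arbitrary rotation angle

stretch    = np.array([[2.0, 0.0], [0.0, 2.0]])
reflection = np.array([[0.0, 1.0], [1.0, 0.0]])
rotation   = np.array([[np.cos(phi), -np.sin(phi)],
                       [np.sin(phi),  np.cos(phi)]])

print(np.linalg.eigvals(stretch))     # 2 and 2
print(np.linalg.eigvals(reflection))  # 1 and -1
print(np.linalg.eigvals(rotation))    # cos(phi) +/- i*sin(phi), i.e. e^{+/- i*phi}
```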

• Characteristic polynomial: a degree-n polynomial in λ:

det(λI - A) = 0. The scalars satisfying this equation are the eigenvalues of A.

E.g., for $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$:  $\det(\lambda I - A) = \begin{vmatrix} \lambda - 1 & -2 \\ -3 & \lambda - 4 \end{vmatrix} = (\lambda - 1)(\lambda - 4) - 6 = \lambda^2 - 5\lambda - 2 = 0$

• Spectrum (of A): { λ1, λ2, …, λn }

• Algebraic multiplicity (of λi): the number of roots of the characteristic polynomial equal to λi.

• Eigenspace (of λi): eigenvectors never come alone! If Ax = λx, then A(kx) = k(Ax) = k(λx) = λ(kx), so every nonzero multiple of an eigenvector is again an eigenvector for the same eigenvalue.

• Geometric multiplicity (of λi): the number of linearly independent eigenvectors associated with λi.
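A minimal sketch (NumPy assumed; the 2 × 2 matrix is the example reconstructed above) showing that the roots of the characteristic polynomial are exactly the eigenvalues:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

coeffs = np.poly(A)           # coefficients of det(lambda*I - A): [1, -5, -2]
print(coeffs)
print(np.roots(coeffs))       # roots of lambda^2 - 5*lambda - 2 (about 5.37 and -0.37)
print(np.linalg.eigvals(A))   # the same numbers, computed as eigenvalues
```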

Eigen – slang

• Eigen – something: Something that doesn’t change under some transformation.

$\frac{d}{dx}\left[e^{x}\right] = e^{x}$ (the exponential is an "eigenfunction" of the derivative operator, with eigenvalue 1).

FAQ (yeah, sure)

• How old are eigenvalues? They arose before matrix theory, in the context of differential equations.

Bernoulli, Euler, 18th Century.

Hilbert, 20th century.

• Do all matrices have eigenvalues? Yes: every n × n matrix has n eigenvalues (counted with multiplicity, over the complex numbers).

• Why are the eigenvalues important?

- Physical meaning (e.g. the vibrating string, molecular orbitals).

- There are other concepts relying on eigenvalues (e.g. singular values, condition number).

- They tell almost everything about a matrix.

Properties of a matrix reflected in its eigenvalues:

1. A singular ↔ λ = 0 is an eigenvalue.

2. A and A^T have the same λ's.

3. A symmetric ⇒ real λ's.

4. A skew-symmetric ⇒ imaginary λ's.

5. A symmetric positive definite ⇒ all λ's > 0.

6. A diagonalizable (e.g., n distinct λ's) ⇒ eigenvectors form a basis for R^n.

7. A symmetric ⇒ eigenvectors can be chosen orthonormal.

8. A real ⇒ complex eigenvalues and eigenvectors come in conjugate pairs.

9. A symmetric ⇒ the number of positive eigenvalues equals the number of positive pivots.

10. A and M^(-1)AM have the same λ's.

11. A orthogonal ⇒ all |λ| = 1.

12. A a projector ⇒ λ = 1, 0.

13. A Markov ⇒ λmax = 1.

14. A a reflection ⇒ λ = -1, 1, …, 1.

15. A rank one (A = uv^T) ⇒ λ = v^T u, 0, …, 0.

16. A^(-1) ⇒ eigenvalues 1/λ(A).

17. A + cI ⇒ eigenvalues λ(A) + c.

18. A diagonal ⇒ λi = aii.

19. Eigenvectors of AA^T ⇒ a basis for Col(A).

20. Eigenvectors of A^T A ⇒ a basis for Row(A).
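A small sketch (NumPy assumed; a random symmetric matrix is used for illustration) spot-checking a few of the properties above, namely items 2, 3 and 17:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B + B.T                            # a random symmetric matrix

lam = np.linalg.eigvals(A)
print(np.allclose(np.imag(lam), 0))    # 3: symmetric => real eigenvalues

print(np.allclose(np.sort(lam),
                  np.sort(np.linalg.eigvals(A.T))))   # 2: A and A^T share eigenvalues

c = 2.5
print(np.allclose(np.sort(np.linalg.eigvals(A + c * np.eye(4))),
                  np.sort(lam) + c))   # 17: A + cI shifts every eigenvalue by c
```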

What’s the worst thing about eigenvalues?

Finding them is painful; they are the roots of the characteristic polynomial.

* How long does it take to calculate the determinant of a 25 x 25 matrix?

* How do we find roots of polynomials?

WARNING:

The following examples have been simplified to be presented in a short talk about eigenvalues. Attendee discretion is advised.

Example 1: Face Identification

Eigenfaces: face identification technique.

(There are also eigeneyes, eigennoses, eigenmouths, eigenears, eigenvoices, …)

EIGENFACES

Given a set of images and a "target face", identify the "owner" of the face.

[Figure: a training set of 128 images, each 256 × 256 pixels, and a test image.]

1. Preprocessing stage: linear transformations, morphing, warping,…

2. Representing faces: vectors (Γj) in a very high dimensional space.

E.g., training set: a 65536 × 128 matrix (each 256 × 256 image flattened into a column of length 65536).

3. Centering data: take the “average” image and define every Φj

$\Psi = \frac{1}{n} \sum_{j=1}^{n} \Gamma_j, \qquad \Phi_j = \Gamma_j - \Psi, \qquad A = [\Phi_1, \Phi_2, \ldots, \Phi_n]$

4. Eigenvectors of AA^T are a basis for Col(A) (what's the size of this matrix?), so instead of working with A, I can express every image in another basis.

* 5. PCA: reducing the dimension of the space. To solve the problem, the work is done in a smaller subspace, SL, using projections of each image onto SL.

6. It's possible to get the eigenvectors of AA^T using the eigenvectors of A^T A (a 65536 × 65536 matrix vs. a 128 × 128 matrix).
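Here is a hedged sketch of the trick in step 6 (NumPy assumed; random data stands in for the face images): if v is an eigenvector of A^T A, then Av is an eigenvector of AA^T with the same eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 1000, 8                    # stand-ins for 65536 pixels and 128 images
A = rng.standard_normal((d, n))   # columns play the role of the centered faces Phi_j

# Solve the small n x n eigenproblem instead of the huge d x d one.
lam, V = np.linalg.eigh(A.T @ A)

U = A @ V                         # columns are eigenvectors of A A^T ("eigenfaces")
U /= np.linalg.norm(U, axis=0)    # normalize each column

# Check: (A A^T) u = lambda * u for every column u.
print(np.allclose((A @ A.T) @ U, U * lam))   # True
```

This is the point of step 6: the 128 × 128 problem is tractable, while the 65536 × 65536 one is not.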

Example 2: Sparse Matrix Computations

ITERATIVE METHODS

Âx = b

• Gauss-Jordan

• If Â is 10^5 × 10^5, Gauss-Jordan would take approx. 290 years.

• Iterative methods: find some "good" matrix A and apply it repeatedly to some initial vector until you get convergence.

• Choosing different A determines different methods (e.g. Jacobi, Gauss-Seidel, Krylov subspace methods, …).

Example 2: ITERATIVE METHODS

A: huge matrix (10^6 × 10^6); x0: initial guess.

• Iteration:

$x_1 = Ax_0, \quad x_2 = Ax_1 = A(Ax_0) = A^2 x_0, \quad \ldots, \quad x_n = A^n x_0$

• If A has a full set of eigenvectors (i.e., it is diagonalizable), they form a basis for R^m, so

$x_0 = \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_m v_m$

$A x_0 = \alpha_1 \lambda_1 v_1 + \alpha_2 \lambda_2 v_2 + \cdots + \alpha_m \lambda_m v_m$

$\vdots$

$A^n x_0 = \alpha_1 \lambda_1^n v_1 + \alpha_2 \lambda_2^n v_2 + \cdots + \alpha_m \lambda_m^n v_m$

$|\lambda_i| < 1 \Rightarrow$ convergence

Convergence, the number of iterations: it's all about eigenvalues…
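A toy sketch (NumPy assumed; a small 2 × 2 matrix stands in for the huge one) of how the eigenvalues of the iteration matrix control convergence:

```python
import numpy as np

# A small iteration matrix whose eigenvalues all satisfy |lambda| < 1.
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])
print(np.abs(np.linalg.eigvals(A)))   # all < 1, so the iteration converges

x = np.array([1.0, 1.0])              # initial guess x0
for n in range(50):
    x = A @ x                         # x_{n+1} = A x_n
print(x)                              # -> very close to the zero vector

# The largest |eigenvalue| governs the rate: the iterates shrink roughly like |lambda_max|^n.
```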

Example 3: Dynamical Systems

(Eigenvalues don't play the main role here, but who are you going to complain to?)

Arnold’s Cat

• Poincaré recurrence theorem:

"A system having a finite amount of energy and confined to a finite spatial volume will, after a sufficiently long time, return to an arbitrarily small neighborhood of its initial state."

• Vladimir I. Arnold, Russian mathematician.

$A = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}$

Each pixel can be assigned a unique pair of coordinates (a two-dimensional vector); the map sends (x, y) to A(x, y) (mod 1).

$A = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ (a composition of two shears)

[Figure: the cat image after iterations 1, 2, 3, 5, 20, 31, 37, 42, 46, 47, 59, 63, 77, 78, 79, 80 of the map, illustrating the recurrence.]

$A = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}: \qquad \lambda_1 = 2.61 \;\to\; v_1 = \begin{pmatrix} 0.52 \\ 0.85 \end{pmatrix}, \qquad \lambda_2 = 0.38 \;\to\; v_2 = \begin{pmatrix} 0.85 \\ -0.52 \end{pmatrix}, \qquad \det(A) = 1$

[Figure: the eigendirections v1 (expanding, λ1 > 1) and v2 (contracting, λ2 < 1) of the map.]
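A hedged sketch of the discrete version of the map (NumPy assumed; the image size and the random test image are stand-ins, only the matrix A = [[1, 1], [1, 2]] and the mod-N wrap-around come from the slides), checking that the original picture eventually comes back:

```python
import numpy as np

def cat_map(img):
    """new[x, y] = img[(x + y) % N, (x + 2y) % N]:
    a pixel permutation built from the matrix A = [[1, 1], [1, 2]] (mod N)."""
    N = img.shape[0]
    x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return img[(x + y) % N, (x + 2 * y) % N]

N = 50
original = np.random.default_rng(0).integers(0, 256, size=(N, N))

img = original.copy()
for n in range(1, 1000):
    img = cat_map(img)
    if np.array_equal(img, original):
        print("image recurs after", n, "iterations")
        break
```

With integer entries and det(A) = 1, the map is an invertible permutation of the N × N pixel grid, which is why the picture must eventually recur.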

More Applications

• Graph theory

• Differential Equations

• PageRank

• Physics

REFERENCES

• Chen Greif. CPSC 517 Notes, UBC/CS, Spring 2007.

• Howard Anton and Chris Rorres. Elementary Linear Algebra, Applications Version, 9th Ed. John Wiley & Sons, Inc., 2005.

• Humberto Madrid de la Vega. Eigenfaces: Reconocimiento digital de facciones mediante SVD [Eigenfaces: digital recognition of facial features using SVD]. Memorias del XXXVII Congreso SMM, 2005.

• Wikipedia: Eigenvalue, eigenvector and eigenspace. http://en.wikipedia.org/wiki/Eigenvalue