Posted: 07-Mar-2018

• Chapter 6

Eigenvalues and Eigenvectors

Po-Ning Chen, Professor

Department of Electrical and Computer Engineering

National Chiao Tung University

Hsin Chu, Taiwan 30010, R.O.C.

• 6.1 Introduction to eigenvalues 6-1

Motivations

The static system problem of Ax = b has now been solved, e.g., by the Gauss-Jordan method or Cramer's rule.

However, a dynamic system problem such as Ax = λx cannot be solved by the static-system method.

To solve the dynamic system problem, we need to find the static feature of A that is unchanged under the mapping A. In other words, Ax maps x to (a scaled copy of) itself, with possibly some stretching (λ > 1), shrinking (0 < λ < 1), or reversal (λ < 0).

These invariant characteristics of A are the eigenvalues and eigenvectors. Ax maps a vector x to the column space C(A). We are looking for a v ∈ C(A) such that Av aligns with v. The collection of all such vectors is the set of eigenvectors.

• 6.1 Eigenvalues and eigenvectors 6-2

Concept (Eigenvalues and eigenvectors): An eigenvalue-eigenvector pair (λi, vi) of a square matrix A satisfies

Avi = λi vi (or equivalently, (A − λi I)vi = 0)

where vi ≠ 0 (but λi can be zero); eigenvectors v1, v2, . . . corresponding to distinct eigenvalues are linearly independent.

How to derive eigenvalues and eigenvectors?

For a 3 × 3 matrix A, we can obtain the 3 eigenvalues λ1, λ2, λ3 by solving det(A − λI) = −λ^3 + c2 λ^2 + c1 λ + c0 = 0.

If λ1, λ2, λ3 are all unequal, then we continue to derive:

Av1 = λ1 v1
Av2 = λ2 v2
Av3 = λ3 v3
⇔
(A − λ1 I)v1 = 0
(A − λ2 I)v2 = 0
(A − λ3 I)v3 = 0
⇔
v1 = the only basis vector of the nullspace of (A − λ1 I)
v2 = the only basis vector of the nullspace of (A − λ2 I)
v3 = the only basis vector of the nullspace of (A − λ3 I)

and the resulting v1, v2 and v3 are linearly independent of each other.

If A is symmetric, then v1, v2 and v3 are orthogonal to each other.
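These facts can be checked numerically. The sketch below (using NumPy; the symmetric 3 × 3 matrix is an illustrative choice, not one from the notes) verifies Avi = λi vi for each pair and the orthogonality of the eigenvectors of a symmetric matrix:

```python
import numpy as np

# A hypothetical symmetric 3x3 matrix with three distinct eigenvalues.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh is specialized for symmetric matrices; it returns real eigenvalues
# (ascending) and orthonormal eigenvectors as the columns of V.
eigvals, V = np.linalg.eigh(A)

# Each pair satisfies A v_i = lambda_i v_i.
for lam, v in zip(eigvals, V.T):
    assert np.allclose(A @ v, lam * v)

# For a symmetric A the eigenvectors are orthogonal: V^T V = I.
assert np.allclose(V.T @ V, np.eye(3))
```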

• 6.1 Eigenvalues and eigenvectors 6-3

If, for example, λ1 = λ2 ≠ λ3, then we derive:

{ (A − λ1 I)v1 = 0
{ (A − λ3 I)v3 = 0
⇔
{ {v1, v2} = a basis of the nullspace of (A − λ1 I)
{ v3 = the only basis vector of the nullspace of (A − λ3 I)

and the resulting v1, v2 and v3 are linearly independent of each other when the nullspace of (A − λ1 I) is two-dimensional. If A is symmetric, then v1, v2 and v3 are orthogonal to each other.

Yet it is possible that the nullspace of (A − λ1 I) is one-dimensional, in which case we can only have v1 = v2.

In such a case, we say that the repeated eigenvalue λ1 has only one eigenvector; in other words, A has only two (linearly independent) eigenvectors.

If A is symmetric, then v1 and v3 are orthogonal to each other.

• 6.1 Eigenvalues and eigenvectors 6-4

If λ1 = λ2 = λ3, then we derive:

(A − λ1 I)v1 = 0 ⇔ {v1, v2, v3} = a basis of the nullspace of (A − λ1 I)

When the nullspace of (A − λ1 I) is three-dimensional, v1, v2 and v3 are linearly independent of each other.

When the nullspace of (A − λ1 I) is two-dimensional, we say A has only two eigenvectors.

When the nullspace of (A − λ1 I) is one-dimensional, we say A has only one eigenvector. In such a case, the matrix-form eigensystem becomes:

A [v1 v1 v1] = [v1 v1 v1] [λ1 0  0 ]
                          [0  λ1 0 ]
                          [0  0  λ1]

If A is symmetric, then distinct eigenvectors are orthogonal to each other.

• 6.1 Eigenvalues and eigenvectors 6-5

Invariance of eigenvectors and eigenvalues.

Property 1: The eigenvectors stay the same for every power of A; the eigenvalues become the same power of the respective eigenvalues. I.e., A^n vi = λi^n vi.

Avi = λi vi ⇒ A^2 vi = A(λi vi) = λi Avi = λi^2 vi

Property 2: If the nullspace of An×n contains non-zero vectors, then 0 is an eigenvalue of A.

Accordingly, there exists a non-zero vi satisfying Avi = 0 · vi = 0.
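Property 1 can be sketched numerically as follows (the 2 × 2 matrix is an assumed example, chosen to have eigenvalues 3 and −1):

```python
import numpy as np

# Sketch of Property 1: if A v = lam v, then A^n v = lam^n v for every n.
A = np.array([[1.0, 1.0],
              [4.0, 1.0]])          # illustrative matrix; eigenvalues 3 and -1

eigvals, V = np.linalg.eig(A)       # eigenvectors are the columns of V

for n in range(1, 5):
    An = np.linalg.matrix_power(A, n)
    for lam, v in zip(eigvals, V.T):
        # Same eigenvector, eigenvalue raised to the n-th power.
        assert np.allclose(An @ v, lam**n * v)
```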

• 6.1 Eigenvalues and eigenvectors 6-6

Property 3: Assume, with no loss of generality, |λ1| > |λ2| > · · · > |λk|. For any vector x = a1 v1 + a2 v2 + · · · + ak vk that is a linear combination of all eigenvectors, the normalized mapping P = (1/λ1)A (namely,

Pv1 = ((1/λ1)A)v1 = (1/λ1)(Av1) = (1/λ1)λ1 v1 = v1)

converges to the eigenvector with the largest absolute eigenvalue when applied repeatedly. I.e.,

lim_{n→∞} P^n x = lim_{n→∞} (1/λ1^n) A^n x
= lim_{n→∞} (1/λ1^n) (a1 A^n v1 + a2 A^n v2 + · · · + ak A^n vk)
= lim_{n→∞} (1/λ1^n) (a1 λ1^n v1 + a2 λ2^n v2 + · · · + ak λk^n vk)
= a1 v1,

since the transient terms a2 (λ2/λ1)^n v2, …, ak (λk/λ1)^n vk vanish as n → ∞ (because |λi/λ1| < 1 for i ≥ 2).
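Property 3 is the idea behind power iteration. A minimal sketch (the matrix, the known λ1 = 3, and the starting vector are all assumed for illustration):

```python
import numpy as np

# Power-iteration sketch of Property 3: repeatedly applying P = A/lambda_1
# drives any x with a nonzero v1-component toward a1*v1.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # eigenvalues 3 and 1; v1 = [1, 1]/sqrt(2)
x = np.array([1.0, 0.0])            # starting vector with a1 != 0

lam1 = 3.0                          # dominant eigenvalue, assumed known here
for _ in range(50):
    x = (A @ x) / lam1              # P x = (1/lambda_1) A x

# x -> a1*v1; here a1 = <x0, v1> = 1/sqrt(2), so the limit is [0.5, 0.5].
assert np.allclose(x, [0.5, 0.5])
```

In practice λ1 is unknown, so implementations normalize by the vector's length each step instead; the limit direction is the same.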

• 6.1 How to determine the eigenvalues? 6-7

We wish to find a non-zero vector v satisfying Av = λv; then

(A − λI)v = 0 with v ≠ 0 ⇔ det(A − λI) = 0.

So by solving det(A − λI) = 0, we can obtain all the eigenvalues of A.

Example. Find the eigenvalues of A = [0.5 0.5]
                                     [0.5 0.5].

Solution.

det(A − λI) = det [0.5 − λ  0.5    ]
                  [0.5      0.5 − λ]
= (0.5 − λ)^2 − 0.5^2 = λ^2 − λ = λ(λ − 1) = 0 ⇒ λ = 0 or 1.
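As a cross-check of this example, the characteristic polynomial can be formed and solved numerically (NumPy's `np.poly` applied to a 2-D array returns the coefficients of det(λI − A)):

```python
import numpy as np

# Numerical check of the example A = [[0.5, 0.5], [0.5, 0.5]].
A = np.array([[0.5, 0.5],
              [0.5, 0.5]])

coeffs = np.poly(A)                 # characteristic polynomial coefficients
lams = np.sort(np.roots(coeffs))    # its roots are the eigenvalues

assert np.allclose(lams, [0.0, 1.0])
```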

• 6.1 How to determine the eigenvalues? 6-8

Proposition: A projection matrix (defined in Section 4.2) has eigenvalues either 1 or 0.

Proof:

A projection matrix always satisfies P^2 = P. So P^2 v = Pv = λv. By the definition of eigenvalues and eigenvectors, P^2 v = λ^2 v. Hence λv = λ^2 v for a non-zero vector v, which immediately implies λ = λ^2. Accordingly, λ is either 1 or 0.

Proposition: A permutation matrix has eigenvalues satisfying λ^k = 1 for some integer k.

Proof:

A permutation matrix always satisfies P^(k+1) = P for some integer k.

Example. For

P = [0 0 1]
    [1 0 0]
    [0 1 0],

we have Pv = (v3, v1, v2)ᵀ and P^3 v = v; hence k = 3.

Accordingly, λ^(k+1) v = P^(k+1) v = Pv = λv, which gives λ^k = 1, since the eigenvalues of P cannot be zero (a permutation matrix is invertible).
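The cyclic example above can be verified directly: P^3 = I, so every eigenvalue is a cube root of unity.

```python
import numpy as np

# The cyclic permutation from the example: P^3 = I, so lam^3 = 1.
P = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)

assert np.allclose(np.linalg.matrix_power(P, 3), np.eye(3))

lams = np.linalg.eigvals(P)         # 1 and the two complex cube roots of unity
assert np.allclose(lams**3, np.ones(3))
```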

• 6.1 How to determine the eigenvalues? 6-9

Proposition: The matrix

am A^m + am−1 A^(m−1) + · · · + a1 A + a0 I

has the same eigenvectors as A, but its eigenvalues become

am λ^m + am−1 λ^(m−1) + · · · + a1 λ + a0,

where λ is an eigenvalue of A.

Proof:

Let vi and λi be an eigenvector and the corresponding eigenvalue of A. Then

(am A^m + am−1 A^(m−1) + · · · + a0 I) vi = (am λi^m + am−1 λi^(m−1) + · · · + a0) vi.

Hence, vi and (am λi^m + am−1 λi^(m−1) + · · · + a0) are an eigenvector and eigenvalue of the polynomial matrix.
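A quick numerical sketch of this proposition, with an assumed polynomial g(A) = A^2 + 2A + 3I and an assumed 2 × 2 matrix:

```python
import numpy as np

# g(A) keeps the eigenvectors of A and maps each eigenvalue lam to g(lam).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # illustrative matrix; eigenvalues 1 and 3

g_of_A = A @ A + 2 * A + 3 * np.eye(2)   # g(A) = A^2 + 2A + 3I

lams, V = np.linalg.eig(A)
for lam, v in zip(lams, V.T):
    g_lam = lam**2 + 2 * lam + 3
    # Same eigenvector v, eigenvalue g(lam).
    assert np.allclose(g_of_A @ v, g_lam * v)
```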

• 6.1 How to determine the eigenvalues? 6-10

Theorem (Cayley-Hamilton): For a square matrix A, define

f(λ) = det(λI − A) = λ^n + an−1 λ^(n−1) + · · · + a0.

(Suppose A has n linearly independent eigenvectors.) Then

f(A) = A^n + an−1 A^(n−1) + · · · + a0 I = the all-zero n × n matrix.

Proof: The eigenvalues of f(A) are all zeros and the eigenvectors of f(A) remain the same as those of A. By the definition of an eigensystem, we have

f(A) [v1 v2 · · · vn] = [v1 v2 · · · vn] [f(λ1) 0     · · · 0    ]
                                         [0     f(λ2) · · · 0    ]
                                         [...   ...   . . . ...  ]
                                         [0     0     · · · f(λn)]

where f(λ1) = · · · = f(λn) = 0.

Corollary (Cayley-Hamilton): (Suppose A has n linearly independent eigenvectors.)

(λ1 I − A)(λ2 I − A) · · · (λn I − A) = the all-zero matrix.

Proof: f(λ) can be re-written as f(λ) = (λ − λ1)(λ − λ2) · · · (λ − λn).
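The theorem can be checked numerically on an assumed 3 × 3 matrix: substituting A into its own characteristic polynomial yields the zero matrix.

```python
import numpy as np

# Cayley-Hamilton check: f(A) = 0 for f(lam) = det(lam*I - A).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])     # illustrative matrix

c = np.poly(A)                      # coefficients of det(lam*I - A), leading 1
n = A.shape[0]

# f(A) = A^n + c[1] A^(n-1) + ... + c[n] I
f_of_A = sum(c[k] * np.linalg.matrix_power(A, n - k) for k in range(n + 1))
assert np.allclose(f_of_A, np.zeros((n, n)))
```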

• 6.1 How to determine the eigenvalues? 6-11

(Problem 11, Section 6.1) Here is a strange fact about 2 by 2 matrices with eigenvalues λ1 ≠ λ2: The columns of A − λ1 I are multiples of the eigenvector x2. Any idea why this should be?

Hint: (λ1 I − A)(λ2 I − A) = [0 0]
                             [0 0]

implies

(λ1 I − A)w1 = 0 and (λ1 I − A)w2 = 0,

where (λ2 I − A) = [w1 w2]. Hence,

the columns of (λ2 I − A) give the eigenvectors of λ1 if they are non-zero vectors, and

the columns of (λ1 I − A) give the eigenvectors of λ2 if they are non-zero vectors.

So, the (non-zero) columns of A − λ1 I are (multiples of) the eigenvector x2.
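This "strange fact" can be confirmed on an assumed 2 × 2 matrix with distinct eigenvalues:

```python
import numpy as np

# Each nonzero column of A - lam1*I should be parallel to x2.
A = np.array([[1.0, 2.0],
              [3.0, 2.0]])          # illustrative matrix; eigenvalues 4 and -1

lams, X = np.linalg.eig(A)
lam1, lam2 = lams
x2 = X[:, 1]                        # eigenvector paired with lam2

B = A - lam1 * np.eye(2)
for col in B.T:
    # Two 2-vectors are parallel iff their 2x2 "cross product" vanishes.
    assert np.isclose(col[0] * x2[1] - col[1] * x2[0], 0.0)
```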

• 6.1 Why Gauss-Jordan cannot solve Ax = λx 6-12

Forward elimination may change the eigenvalues and eigenvectors.

Example. Check the eigenvalues and eigenvectors of A = [ 1 2]
                                                       [−2 5].

Solution.

The eigenvalues of A satisfy det(A − λI) = (λ − 3)^2 = 0.

A = LU = [ 1 0] [1 2]
         [−2 1] [0 9].

The eigenvalues of U apparently satisfy det(U − λI) = (1 − λ)(9 − λ) = 0.

Suppose u1 and u2 are the eigenvectors of U, respectively corresponding to eigenvalues 1 and 9. Then they cannot be the eigenvectors of A, since if they were,

3u1 = Au1 = LUu1 = Lu1 ⇒ (L − 3I)u1 = 0 ⇒ u1 = 0

3u2 = Au2 = LUu2 = 9Lu2 ⇒ (L − (1/3)I)u2 = 0 ⇒ u2 = 0,

contradicting that eigenvectors are non-zero (the unit triangular L has eigenvalues 1, 1, so neither 3 nor 1/3 can be its eigenvalue).

Hence, the eigenvalues have nothing to do with the pivots (except for a triangular A).
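As a numerical check of this example (taking the matrix as A = [[1, 2], [−2, 5]], which matches the LU factors and the characteristic polynomial (λ − 3)^2 above):

```python
import numpy as np

# Elimination preserves A = LU but not the eigenvalues.
A = np.array([[1.0, 2.0],
              [-2.0, 5.0]])
L = np.array([[1.0, 0.0],
              [-2.0, 1.0]])
U = np.array([[1.0, 2.0],
              [0.0, 9.0]])

assert np.allclose(L @ U, A)

# A has the repeated eigenvalue 3; U's eigenvalues are its pivots 1 and 9.
assert np.allclose(np.sort(np.linalg.eigvals(A)), [3.0, 3.0])
assert np.allclose(np.sort(np.linalg.eigvals(U)), [1.0, 9.0])
```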

• 6.1 How to determine the eigenvalues? (Revisited) 6-13

Solve det(A − λI) = 0; det(A − λI) is a polynomial of degree n in λ.

f(λ) = det(A − λI) = det [a1,1 − λ  a1,2      a1,3      · · ·  a1,n    ]
                         [a2,1      a2,2 − λ  a2,3      · · ·  a2,n    ]
                         [a3,1      a3,2      a3,3 − λ  · · ·  a3,n    ]
                         [...       ...       ...       . . .  ...     ]
                         [an,1      an,2      an,3      · · ·  an,n − λ]

= (a1,1 − λ)(a2,2 − λ) · · · (an,n − λ) + terms of degree at most (n − 2) in λ (by the Leibniz formula)

= (λ1 − λ)(λ2 − λ) · · · (λn − λ)

Observations

The coefficient of λ^n is (−1)^n, provided n ≥ 1.

The coefficient of λ^(n−1) is (−1)^(n−1) Σ_{i=1}^n λi = (−1)^(n−1) Σ_{i=1}^n ai,i; hence Σ_{i=1}^n λi = trace(A), provided n ≥ 2.

The coefficient of λ^0 is Π_{i=1}^n λi = f(0) = det(A).
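The two observations can be verified numerically on an assumed 3 × 3 matrix:

```python
import numpy as np

# The eigenvalues sum to trace(A) and multiply to det(A).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])     # illustrative symmetric matrix

lams = np.linalg.eigvals(A)
assert np.isclose(lams.sum(), np.trace(A))        # sum lam_i = trace(A)
assert np.isclose(lams.prod(), np.linalg.det(A))  # prod lam_i = det(A)
```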

• 6.1 How to determine the eigenvalues? (Revisited) 6-14

These observations make it easy to find the eigenvalues of a 2 × 2 matrix.

Example. Find the eigenvalues of A = [1 1]
                                     [4 1].

Solution.

{ λ1 + λ2 = 1 + 1 = 2
{ λ1 λ2 = 1 − 4 = −3

⇒ (λ1 − λ2)^2 = (λ1 + λ2)^2 − 4 λ1 λ2 = 16 ⇒ λ1 − λ2 = ±4

⇒ λ = 3, −1.

Example. Find the eigenvalues of A = [1 1]
                                     [2 2].

Solution.

{ λ1 + λ2 = 3
{ λ1 λ2 = 0

⇒ λ = 3, 0.

• 6.1 Imaginary eigenvalues 6-15

In some cases, we have to allow imaginary eigenvalues.

In order to solve polynomial equations f(λ) = 0, mathematicians were forced to imagine that there exists a number x satisfying x^2 = −1.

By this technique, a polynomial equation of degree n has exactly n (possibly complex, not necessarily real) solutions.

Example. Solve λ^2 + 1 = 0. ⇒ λ = ±i.
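A matrix where this situation actually arises (an assumed example, not from the text) is the 90-degree rotation matrix, whose characteristic polynomial is exactly λ^2 + 1:

```python
import numpy as np

# The 90-degree rotation matrix leaves no real direction fixed,
# so its eigenvalues must be complex: det(Q - lam*I) = lam^2 + 1.
Q = np.array([[0.0, -1.0],
              [1.0, 0.0]])

lams = np.sort_complex(np.linalg.eigvals(Q))
assert np.allclose(lams, [-1j, 1j])
```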

Based on this, to so