Transcript of Eigenvalues and Eigenvectors

Page 1: Eigenvalues and Eigenvectors

Chapter 6

Eigenvalues and Eigenvectors

Po-Ning Chen, Professor

Department of Electrical Engineering

National Chiao Tung University

Hsin Chu, Taiwan 30010, R.O.C.

Page 2: Eigenvalues and Eigenvectors

6.1 Introduction to eigenvalues 6-1

Motivations

• The static system problem of Ax = b has now been solved, e.g., by the Gauss-Jordan method or Cramer's rule.

• However, a dynamic system problem such as

Ax = λx

cannot be solved by the static system method.

• To solve the dynamic system problem, we need to find the static feature of A that is "unchanged" under the mapping A. In other words, A maps x to a multiple of itself, with possibly some stretching (λ > 1), shrinking (0 < λ < 1), or reversal (λ < 0).

• These invariant characteristics of A are the eigenvalues and eigenvectors.

Ax maps a vector x into the column space C(A). We are looking for a v ∈ C(A) such that Av aligns with v. The collection of all such vectors is the set of eigenvectors.

Page 3: Eigenvalues and Eigenvectors

6.1 Eigenvalues and eigenvectors 6-2

Concept (Eigenvalues and eigenvectors): An eigenvalue-eigenvector pair (λi, vi) of a square matrix A satisfies

Avi = λivi

where vi ≠ 0 (but λi can be zero).

Invariance of eigenvectors and eigenvalues.

• Property 1: The eigenvectors stay the same for every power of A. The eigenvalues become the same power of the respective eigenvalues. I.e., A^n vi = λi^n vi.

Avi = λivi =⇒ A²vi = A(λivi) = λiAvi = λi²vi

• Property 2: If the null space of An×n contains non-zero vectors, then 0 is an eigenvalue of A.

Accordingly, there exists a non-zero vi satisfying Avi = 0 · vi = 0.
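A quick numerical check of Properties 1 and 2 (a MATLAB sketch; the matrices are arbitrary illustrations, not from the text):

    % Property 1: A^2 has eigenvalues lambda^2 with the same eigenvectors.
    A = [2 1; 0 3];             % eigenvalues 2 and 3
    [V, D] = eig(A);            % columns of V: eigenvectors; diag(D): eigenvalues
    disp(norm(A^2*V - V*D^2))   % ~0, confirming A^2 v_i = lambda_i^2 v_i
    % Property 2: a singular matrix has 0 as an eigenvalue.
    B = [1 2; 2 4];             % det(B) = 0
    disp(eig(B).')              % 0 and 5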

Page 4: Eigenvalues and Eigenvectors

6.1 Eigenvalues and eigenvectors 6-3

Property 3: Assume with no loss of generality |λ1| > |λ2| > · · · > |λk|. For any vector x = a1v1 + a2v2 + · · · + akvk that is a linear combination of all the eigenvectors, the normalized mapping P = (1/λ1)A (namely, Pv1 = v1), when applied repeatedly, converges to the eigenvector with the largest absolute eigenvalue. I.e.,

lim_{n→∞} P^n x = lim_{n→∞} (1/λ1^n) A^n x = lim_{n→∞} (1/λ1^n) (a1 A^n v1 + a2 A^n v2 + · · · + ak A^n vk)
= lim_{n→∞} (1/λ1^n) ( a1 λ1^n v1 [steady state] + a2 λ2^n v2 + · · · + ak λk^n vk [transient states] )
= a1 v1.
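Property 3 is the idea behind the power method for computing the dominant eigenvector. A minimal MATLAB sketch (the matrix and starting vector are illustrative assumptions):

    % Repeatedly apply A and renormalize; the iterate converges to the
    % eigenvector of the largest |eigenvalue| (provided a1 ~= 0).
    A = [2 1; 1 2];                  % eigenvalues 3 and 1
    x = [1; 0];                      % any start with a nonzero v1-component
    for k = 1:50
        x = A*x;  x = x/norm(x);     % normalize to avoid overflow
    end
    disp(x.')                        % ~[0.7071 0.7071], a multiple of v1 = [1; 1]
    disp(x.'*A*x)                    % Rayleigh quotient ~3 = lambda_1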

Page 5: Eigenvalues and Eigenvectors

6.1 How to determine the eigenvalues? 6-4

We wish to find a non-zero vector v satisfying Av = λv; then

(A − λI)v = 0 has a non-zero solution ⇔ det(A − λI) = 0.

So by solving det(A − λI) = 0, we can obtain all the eigenvalues of A.

Example. Find the eigenvalues of A = [0.5 0.5; 0.5 0.5].

Solution.

det(A − λI) = det([0.5−λ 0.5; 0.5 0.5−λ]) = (0.5 − λ)² − 0.5² = λ² − λ = λ(λ − 1) = 0
⇒ λ = 0 or 1. □

Page 6: Eigenvalues and Eigenvectors

6.1 How to determine the eigenvalues? 6-5

Proposition: A projection matrix (defined in Section 4.2) has eigenvalues either 1 or 0.

Proof:

• A projection matrix always satisfies P² = P. So P²v = Pv = λv.
• By the definition of eigenvalues and eigenvectors, we have P²v = λ²v.
• Hence, λv = λ²v for a non-zero vector v, which immediately implies λ = λ².
• Accordingly, λ is either 1 or 0. □

Proposition: A permutation matrix has eigenvalues satisfying λ^k = 1 for some integer k.

Proof:

• A permutation matrix always satisfies P^{k+1} = P for some integer k.

Example. P = [0 0 1; 1 0 0; 0 1 0]. Then, Pv = [v3; v1; v2] and P³v = v. Hence, k = 3.

• Accordingly, λ^{k+1}v = λv, which gives λ^k = 1 since the eigenvalues of P cannot be zero. □

Page 7: Eigenvalues and Eigenvectors

6.1 How to determine the eigenvalues? 6-6

Proposition: The matrix

αmA^m + α_{m−1}A^{m−1} + · · · + α1A + α0I

has the same eigenvectors as A, but its eigenvalues become

αmλ^m + α_{m−1}λ^{m−1} + · · · + α1λ + α0,

where λ is the eigenvalue of A.

Proof:

• Let vi and λi be an eigenvector and eigenvalue of A. Then,

(αmA^m + α_{m−1}A^{m−1} + · · · + α0I)vi = (αmλi^m + α_{m−1}λi^{m−1} + · · · + α0)vi.

• Hence, vi and (αmλi^m + α_{m−1}λi^{m−1} + · · · + α0) are an eigenvector and eigenvalue of the polynomial matrix. □
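A numerical sanity check of this proposition (a sketch; the polynomial and matrix are arbitrary choices):

    % Eigenvalues of 2A^2 + 3A + 4I should be 2*lambda^2 + 3*lambda + 4.
    A = [4 1; 2 3];                    % eigenvalues 2 and 5
    M = 2*A^2 + 3*A + 4*eye(2);
    disp(sort(eig(M)).')               % 18 and 69
    lam = sort(eig(A));
    disp((2*lam.^2 + 3*lam + 4).')     % matches: 18 and 69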

Page 8: Eigenvalues and Eigenvectors

6.1 How to determine the eigenvalues? 6-7

Theorem (Cayley-Hamilton): (Suppose A has linearly independent eigenvectors.)

f(A) = A^n + α_{n−1}A^{n−1} + · · · + α0I = all-zero matrix

if

f(λ) = det(A − λI) = λ^n + α_{n−1}λ^{n−1} + · · · + α0.

Proof: The eigenvalues of f(A) are all zero and the eigenvectors of f(A) remain the same as those of A. By the definition of an eigen-system, we have

f(A)[v1 v2 · · · vn] = [v1 v2 · · · vn] diag(f(λ1), f(λ2), . . . , f(λn)) = all-zero matrix,

since f(λ1) = · · · = f(λn) = 0. □

Corollary (Cayley-Hamilton): (Suppose A has linearly independent eigenvectors.)

(λ1I − A)(λ2I − A) · · · (λnI − A) = all-zero matrix

Proof: f(λ) can be re-written as f(λ) = (λ1 − λ)(λ2 − λ) · · · (λn − λ). □
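MATLAB can verify Cayley-Hamilton numerically: poly(A) returns the coefficients of the characteristic polynomial, and polyvalm evaluates a polynomial with a matrix argument (a sketch with an arbitrary matrix):

    A = [1 2; 3 4];
    p = poly(A);            % [1 -5 -2], i.e., lambda^2 - 5*lambda - 2
    disp(polyvalm(p, A))    % A^2 - 5A - 2I = the all-zero matrix (up to round-off)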

Page 9: Eigenvalues and Eigenvectors

6.1 How to determine the eigenvalues? 6-8

(Problem 11, Section 6.1) Here is a strange fact about 2 by 2 matrices with eigenvalues λ1 ≠ λ2: The columns of A − λ1I are multiples of the eigenvector x2. Any idea why this should be?

Hint: (λ1I − A)(λ2I − A) = [0 0; 0 0] implies

(λ1I − A)w1 = 0 and (λ1I − A)w2 = 0,

where (λ2I − A) = [w1 w2]. Hence,

the columns of (λ2I − A) give the eigenvectors of λ1 if they are non-zero vectors, and
the columns of (λ1I − A) give the eigenvectors of λ2 if they are non-zero vectors.

So, the (non-zero) columns of A − λ1I are (multiples of) the eigenvector x2.

Page 10: Eigenvalues and Eigenvectors

6.1 Why Gauss-Jordan cannot solve Ax = λx? 6-9

• The forward elimination may change the eigenvalues and eigenvectors.

Example. Check the eigenvalues and eigenvectors of A = [1 −2; 2 5].

Solution.

– The eigenvalues of A satisfy det(A − λI) = (λ − 3)² = 0.

– A = LU = [1 0; 2 1][1 −2; 0 9]. The eigenvalues of U apparently satisfy det(U − λI) = (1 − λ)(9 − λ) = 0.

– Suppose u1 and u2 are the eigenvectors of U, respectively corresponding to eigenvalues 1 and 9. Then, they cannot be the eigenvectors of A since if they were,

3u1 = Au1 = LUu1 = Lu1 = [1 0; 2 1]u1 =⇒ u1 = 0
3u2 = Au2 = LUu2 = 9Lu2 = 9[1 0; 2 1]u2 =⇒ u2 = 0.

Hence, the eigenvalues have nothing to do with the pivots (except for a triangular A).

Page 11: Eigenvalues and Eigenvectors

6.1 How to determine the eigenvalues? (revisited) 6-10

Solve det(A − λI) = 0.

• det(A − λI) is a polynomial of degree n:

f(λ) = det(A − λI) = det [a_{1,1}−λ a_{1,2} a_{1,3} · · · a_{1,n}; a_{2,1} a_{2,2}−λ a_{2,3} · · · a_{2,n}; a_{3,1} a_{3,2} a_{3,3}−λ · · · a_{3,n}; . . . ; a_{n,1} a_{n,2} a_{n,3} · · · a_{n,n}−λ]
= (a_{1,1} − λ)(a_{2,2} − λ) · · · (a_{n,n} − λ) + · · · (by the Leibniz formula)
= (λ1 − λ)(λ2 − λ) · · · (λn − λ).

Observations

• The coefficient of λ^n is (−1)^n.
• Comparing the coefficients of λ^{n−1} gives Σ_{i=1}^n λi = Σ_{i=1}^n a_{i,i} = trace(A).
• . . .
• Setting λ = 0 gives Π_{i=1}^n λi = f(0) = det(A).

Page 12: Eigenvalues and Eigenvectors

6.1 How to determine the eigenvalues? (revisited) 6-11

These observations make it easy to find the eigenvalues of a 2 × 2 matrix.

Example. Find the eigenvalues of A = [1 1; 4 1].

Solution.

• λ1 + λ2 = 1 + 1 = 2 and λ1λ2 = 1 − 4 = −3
=⇒ (λ1 − λ2)² = (λ1 + λ2)² − 4λ1λ2 = 16
=⇒ λ1 − λ2 = 4 (taking λ1 ≥ λ2)
=⇒ λ = 3, −1. □

Example. Find the eigenvalues of A = [1 1; 2 2].

Solution.

• λ1 + λ2 = 3 and λ1λ2 = 0 =⇒ λ = 3, 0. □
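The trace/determinant shortcut is easy to script for 2 × 2 matrices (a sketch):

    % lambda^2 - trace(A)*lambda + det(A) = 0 for a 2x2 matrix A.
    A = [1 1; 4 1];
    disp(roots([1, -trace(A), det(A)]).')   % 3 and -1
    disp(eig(A).')                          % agrees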

Page 13: Eigenvalues and Eigenvectors

6.1 Imaginary eigenvalues 6-12

In some cases, we have to allow imaginary eigenvalues.

• In order to solve polynomial equations f(λ) = 0, mathematicians were forced to imagine that there exists a number x satisfying x² = −1.

• With this technique, a polynomial equation of degree n has exactly n (possibly complex, not real) solutions.

Example. Solve λ² + 1 = 0. =⇒ λ = ±i. □

• Based on this, to solve for the eigenvalues, we are forced to accept imaginary eigenvalues.

Example. Find the eigenvalues of A = [0 1; −1 0].

Solution. det(A − λI) = λ² + 1 = 0 =⇒ λ = ±i. □

Page 14: Eigenvalues and Eigenvectors

6.1 Imaginary eigenvalues 6-13

Proposition: The eigenvalues of a symmetric matrix A (with real entries) are real, and the eigenvalues of a skew-symmetric (or antisymmetric) matrix B are purely imaginary.

Proof:

• Suppose Av = λv. Then,

Av* = λ*v* (A real)
=⇒ (Av*)ᵀv = λ*(v*)ᵀv
=⇒ (v*)ᵀAᵀv = λ*(v*)ᵀv
=⇒ (v*)ᵀAv = λ*(v*)ᵀv (symmetry means Aᵀ = A)
=⇒ (v*)ᵀλv = λ*(v*)ᵀv (Av = λv)
=⇒ λ‖v‖² = λ*‖v‖² (an eigenvector must be non-zero, i.e., ‖v‖² ≠ 0)
=⇒ λ = λ*
=⇒ λ real

Page 15: Eigenvalues and Eigenvectors

6.1 Imaginary eigenvalues 6-14

• Suppose Bv = λv. Then,

Bv* = λ*v* (B real)
=⇒ (Bv*)ᵀv = (λ*v*)ᵀv
=⇒ (v*)ᵀBᵀv = λ*(v*)ᵀv
=⇒ (v*)ᵀ(−B)v = λ*(v*)ᵀv (skew-symmetry means Bᵀ = −B)
=⇒ (v*)ᵀ(−λ)v = λ*(v*)ᵀv (Bv = λv)
=⇒ (−λ)‖v‖² = λ*‖v‖² (an eigenvector must be non-zero, i.e., ‖v‖² ≠ 0)
=⇒ −λ = λ*
=⇒ λ purely imaginary

Page 16: Eigenvalues and Eigenvectors

6.1 Eigenvalues/eigenvectors of inverse 6-15

For invertible A, the relation between the eigenvalues and eigenvectors of A and A⁻¹ can be well-determined.

Proposition: The eigenvalue-eigenvector pairs of A⁻¹ are

(1/λ1, v1), (1/λ2, v2), . . . , (1/λn, vn),

where {(λi, vi)}_{i=1}^n are the eigenvalue-eigenvector pairs of A.

Proof:

• The eigenvalues of an invertible A must be non-zero because det(A) ≠ 0.
• Suppose Av = λv, where v ≠ 0 and λ ≠ 0. (I.e., λ and v are an eigenvalue and eigenvector of A.)
• So, Av = λv =⇒ A⁻¹(Av) = A⁻¹(λv) =⇒ v = λA⁻¹v =⇒ (1/λ)v = A⁻¹v. □

Note: The eigenvalues of A^k and A⁻¹ are λ^k and λ⁻¹ with the same eigenvectors as A, where λ is the eigenvalue of A.

Page 17: Eigenvalues and Eigenvectors

6.1 Eigenvalues/eigenvectors of inverse 6-16

Proposition: The eigenvalues of Aᵀ are the same as the eigenvalues of A. (But they may have different eigenvectors.)

Proof: det(A − λI) = det((A − λI)ᵀ) = det(Aᵀ − λIᵀ) = det(Aᵀ − λI). □

Corollary: The eigenvalues of an invertible matrix A satisfying A⁻¹ = Aᵀ lie on the unit circle of the complex plane.

Proof: Suppose Av = λv. Then,

Av* = λ*v* (A real)
=⇒ (Av*)ᵀv = λ*(v*)ᵀv
=⇒ (v*)ᵀAᵀv = λ*(v*)ᵀv
=⇒ (v*)ᵀA⁻¹v = λ*(v*)ᵀv (Aᵀ = A⁻¹)
=⇒ (v*)ᵀ(1/λ)v = λ*(v*)ᵀv (A⁻¹v = (1/λ)v)
=⇒ (1/λ)‖v‖² = λ*‖v‖² (an eigenvector must be non-zero, i.e., ‖v‖² ≠ 0)
=⇒ λλ* = |λ|² = 1. □

Example. Find the eigenvalues of A = [0 1; −1 0], which satisfies Aᵀ = A⁻¹.

Solution. det(A − λI) = λ² + 1 = 0 =⇒ λ = ±i. □

Page 18: Eigenvalues and Eigenvectors

6.1 Determination of eigenvectors 6-17

• After identifying the eigenvalues via det(A − λI) = 0, we can determine the respective eigenvectors using the null space technique.

• Recall (from slide 3-41) how we completely solve

B_{n×n}v_{n×1} = (A_{n×n} − λI_{n×n})v_{n×1} = 0_{n×1}.

Answer:

R = rref(B) = [I_{r×r} F_{r×(n−r)}; 0_{(n−r)×r} 0_{(n−r)×(n−r)}] (with no row exchange)
⇒ N_{n×(n−r)} = [−F_{r×(n−r)}; I_{(n−r)×(n−r)}]

Then, every solution v of Bv = 0 is of the form

v^{(null)}_{n×1} = N_{n×(n−r)}w_{(n−r)×1} for any w ∈ ℝ^{n−r}.

Here, we usually take the (n − r) orthonormal basis vectors of the null space as the representative eigenvectors; these are exactly the (n − r) columns of N_{n×(n−r)} (with proper normalization).
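In MATLAB, this null-space step is a one-liner: null(A − λI) returns an orthonormal basis of the eigenspace (a sketch, reusing the matrix from slide 6-4):

    A = [0.5 0.5; 0.5 0.5];       % eigenvalues 0 and 1
    disp(null(A - 1*eye(2)))      % spans the lambda = 1 eigenspace: [1; 1]/sqrt(2)
    disp(null(A - 0*eye(2)))      % spans the lambda = 0 eigenspace: [1; -1]/sqrt(2)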

Page 19: Eigenvalues and Eigenvectors

6.1 Determination of eigenvectors 6-18

(Problem 12, Section 6.1) Find three eigenvectors for this matrix P (projection matrices have λ = 1 and 0):

Projection matrix P = [0.2 0.4 0; 0.4 0.8 0; 0 0 1].

If two eigenvectors share the same λ, so do all their linear combinations. Find an eigenvector of P with no zero components.

Solution.

• det(P − λI) = 0 gives λ(1 − λ)² = 0; so λ = 0, 1, 1.

• λ = 1: rref(P − I) = [1 −1/2 0; 0 0 0; 0 0 0];

so F_{1×2} = [−1/2 0], and N_{3×2} = [1/2 0; 1 0; 0 1], which implies eigenvectors [1/2; 1; 0] and [0; 0; 1].

Page 20: Eigenvalues and Eigenvectors

6.1 Determination of eigenvectors 6-19

• λ = 0: rref(P) = [1 2 0; 0 0 0; 0 0 1] (with no row exchange);

so F = [2; 0], and N_{3×1} = [−2; 1; 0], which implies eigenvector [−2; 1; 0].

Page 21: Eigenvalues and Eigenvectors

6.1 MATLAB revisited 6-20

In MATLAB:

• We can find the eigenvalues and eigenvectors of a matrix A by:

[V, D] = eig(A); % find the eigenvalues/eigenvectors of A

• The columns of V are the eigenvectors.

• The diagonal entries of D are the eigenvalues.

Proposition:

• The eigenvectors corresponding to non-zero eigenvalues are in C(A).
• The eigenvectors corresponding to zero eigenvalues are in N(A).

Proof: The first follows from Av = λv with λ ≠ 0, which gives v = A(v/λ) ∈ C(A); the second follows from Av = 0 · v = 0. □
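For instance, applying eig to the projection matrix of Problem 12 above (a sketch):

    P = [0.2 0.4 0; 0.4 0.8 0; 0 0 1];
    [V, D] = eig(P);
    disp(diag(D).')          % eigenvalues 0, 1, 1
    disp(norm(P*V - V*D))    % ~0: P v_i = lambda_i v_i, column by column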

Page 22: Eigenvalues and Eigenvectors

6.1 Some discussions on problems 6-21

(Problem 25, Section 6.1) Suppose A and B have the same eigenvalues λ1, . . . , λn with the same independent eigenvectors x1, . . . , xn. Then A = B. Reason: Any vector x is a combination c1x1 + · · · + cnxn. What is Ax? What is Bx?

Thinking over Problem 25: Suppose A and B have the same eigenvalues and eigenvectors (not necessarily independent). Can we claim that A = B?

Answer to the thinking: Not necessarily.

As a counterexample, both A = [1 −1; 1 −1] and B = [2 −2; 2 −2] have eigenvalues 0, 0 and the single eigenvector [1; 1], but they are not equal.

If however the eigenvectors span the n-dimensional space (such as when there are n of them and they are linearly independent), then A = B.

Hint for Problem 25: A[x1 · · · xn] = [x1 · · · xn] diag(λ1, λ2, . . . , λn).

This important fact will be re-emphasized in Section 6.2.

Page 23: Eigenvalues and Eigenvectors

6.1 Some discussions on problems 6-22

(Problem 26, Section 6.1) The block B has eigenvalues 1, 2, C has eigenvalues 3, 4, and D has eigenvalues 5, 7. Find the eigenvalues of the 4 by 4 matrix A:

A = [B C; 0 D] = [0 1 3 0; −2 3 0 4; 0 0 6 1; 0 0 1 6].

Thinking over Problem 26: The eigenvalues of [B C; 0 D] are the eigenvalues of B and D because we can show

det([B C; 0 D]) = det(B) · det(D).

(See Problems 23 and 25 in Section 5.2.)

Page 24: Eigenvalues and Eigenvectors

6.1 Some discussions on problems 6-23

(Problem 23, Section 5.2) With 2 by 2 blocks in 4 by 4 matrices, you cannot always use block determinants:

|A B; 0 D| = |A||D|  but  |A B; C D| ≠ |A||D| − |C||B|.

(a) Why is the first statement true? Somehow B doesn't enter.

Hint: Leibniz formula.

(b) Show by example that equality fails (as shown) when C enters.

(c) Show by example that the answer det(AD − CB) is also wrong.

Page 25: Eigenvalues and Eigenvectors

6.1 Some discussions on problems 6-24

(Problem 25, Section 5.2) Block elimination subtracts CA⁻¹ times the first row [A B] from the second row [C D]. This leaves the Schur complement D − CA⁻¹B in the corner:

[I 0; −CA⁻¹ I][A B; C D] = [A B; 0 D − CA⁻¹B].

Take determinants of these block matrices to prove correct rules if A⁻¹ exists:

|A B; C D| = |A| |D − CA⁻¹B| = |AD − CB| provided AC = CA.

Hint: det(A) · det(D − CA⁻¹B) = det(A(D − CA⁻¹B)).

Page 26: Eigenvalues and Eigenvectors

6.1 Some discussions on problems 6-25

(Problem 37, Section 6.1)

(a) Find the eigenvalues and eigenvectors of A. They depend on c:

A = [.4 1−c; .6 c].

(b) Show that A has just one line of eigenvectors when c = 1.6.

(c) This is a Markov matrix when c = .8. Then A^n will approach what matrix A∞?

Definition (Markov matrix): A Markov matrix is a matrix with positive entries, for which every column adds to one.

• Note that some researchers define the Markov matrix by replacing "positive" with "non-negative"; they then use the term positive Markov matrix for the strictly positive case. The observations below hold for Markov matrices with non-negative entries.

Page 27: Eigenvalues and Eigenvectors

6.1 Some discussions on problems 6-26

• 1 must be one of the eigenvalues of a Markov matrix.

Proof: A and Aᵀ have the same eigenvalues, and Aᵀ[1; . . . ; 1] = [1; . . . ; 1]. □

• The eigenvalues of a Markov matrix satisfy |λ| ≤ 1.

Proof: Aᵀv = λv implies Σ_{i=1}^n a_{i,j}vi = λvj; hence, by letting vk satisfy |vk| = max_{1≤i≤n} |vi|, we have

|λvk| = |Σ_{i=1}^n a_{i,k}vi| ≤ Σ_{i=1}^n a_{i,k}|vi| ≤ Σ_{i=1}^n a_{i,k}|vk| = |vk|,

which implies the desired result. □
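A quick numerical look at both facts, using the Markov matrix from Problem 37 with c = .8 (a sketch):

    % Columns of A sum to one; eigenvalues are 1 and 0.2, so A^n converges.
    A = [0.4 0.2; 0.6 0.8];
    disp(eig(A).')    % 0.2 and 1
    disp(A^50)        % every column approaches the steady state [0.25; 0.75]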

Page 28: Eigenvalues and Eigenvectors

6.1 Some discussions on problems 6-27

(Problem 31, Section 6.1) If we exchange rows 1 and 2 and columns 1 and 2, the eigenvalues don't change. Find eigenvectors of A and B for λ1 = 11. Rank one gives λ2 = λ3 = 0.

A = [1 2 1; 3 6 3; 4 8 4] and B = PAPᵀ = [6 3 3; 2 1 1; 8 4 4].

Thinking over Problem 31: This is sometimes useful in determining the eigenvalues and eigenvectors.

• If Av = λv and B = PAP⁻¹ (for any invertible P), then λ and u = Pv are an eigenvalue and eigenvector of B.

Proof: Bu = PAP⁻¹u = PAv = λPv = λu. □

For example, P = [0 1 0; 1 0 0; 0 0 1]. In such a case, P⁻¹ = P = Pᵀ.

Page 29: Eigenvalues and Eigenvectors

6.1 Some discussions on problems 6-28

With the fact above, we can further claim:

Proposition. The eigenvalues of AB and BA are equal if one of A and B is invertible.

Proof: This can be proved by (AB) = A(BA)A⁻¹ or (AB) = B⁻¹(BA)B. Note that AB has eigenvector Av or B⁻¹v if v is an eigenvector of BA. □

Page 30: Eigenvalues and Eigenvectors

6.1 Some discussions on problems 6-29

The eigenvectors of a diagonal matrix Λ = diag(λ1, λ2, . . . , λn) are apparently

ei = [0, . . . , 0, 1, 0, . . . , 0]ᵀ (with the 1 in position i), i = 1, 2, . . . , n.

Hence, A = SΛS⁻¹ has the same eigenvalues as Λ and has eigenvectors vi = Sei. This implies that

S = [v1 v2 · · · vn],

where {vi} are the eigenvectors of A.

What if S is not invertible? Then, we have the next theorem.

Page 31: Eigenvalues and Eigenvectors

6.2 Diagonalizing a matrix 6-30

A convenience for eigen-system analysis is that we can diagonalize a matrix.

Theorem. A matrix A can be written as

A[v1 v2 · · · vn] = [v1 v2 · · · vn] diag(λ1, λ2, . . . , λn), i.e., AS = SΛ,

where S = [v1 v2 · · · vn], Λ = diag(λ1, λ2, . . . , λn), and {(λi, vi)}_{i=1}^n are the eigenvalue-eigenvector pairs of A.

Proof: The theorem holds by the definition of eigenvalues and eigenvectors. □

Corollary. If S is invertible, then

S⁻¹AS = Λ, or equivalently A = SΛS⁻¹.

Page 32: Eigenvalues and Eigenvectors

6.2 Diagonalizing a matrix 6-31

This makes it easy to compute a polynomial with argument A.

Proposition (recall slide 6-6): The matrix

αmA^m + α_{m−1}A^{m−1} + · · · + α0I

has the same eigenvectors as A, but its eigenvalues become

αmλ^m + α_{m−1}λ^{m−1} + · · · + α0,

where λ is the eigenvalue of A.

Proposition:

αmA^m + α_{m−1}A^{m−1} + · · · + α0I = S(αmΛ^m + α_{m−1}Λ^{m−1} + · · · + α0I)S⁻¹

Proposition:

A^m = SΛ^mS⁻¹

Page 33: Eigenvalues and Eigenvectors

6.2 Diagonalizing a matrix 6-32

Exception

• It is possible that an n × n matrix does not have n eigenvectors.

Example. A = [1 −1; 1 −1].

Solution.

– λ1 = 0 is apparently an eigenvalue for a non-invertible matrix.

– The second eigenvalue is λ2 = trace(A) − λ1 = [1 + (−1)] − λ1 = 0.

– This matrix however only has one eigenvector [1; 1] (or some may say "two repeated eigenvectors").

– In such a case, we still have

A S = S Λ with A = [1 −1; 1 −1], S = [1 1; 1 1], Λ = [0 0; 0 0],

but S has no inverse. □

In such a case, we cannot perform S⁻¹AS; so we say A cannot be diagonalized.

Page 34: Eigenvalues and Eigenvectors

6.2 Diagonalizing a matrix 6-33

When will S be guaranteed to be invertible?

One possible answer: when all eigenvalues are distinct.

Proof: Suppose for a matrix A, the first k eigenvectors v1, . . . , vk are linearly independent, but the (k+1)th eigenvector is dependent on the previous k eigenvectors. Then, for some unique a1, . . . , ak (not all zero),

v_{k+1} = a1v1 + · · · + akvk,

which implies

λ_{k+1}v_{k+1} = Av_{k+1} = a1Av1 + · · · + akAvk = a1λ1v1 + · · · + akλkvk
λ_{k+1}v_{k+1} = a1λ_{k+1}v1 + · · · + akλ_{k+1}vk

Subtracting the two lines gives Σ_{i=1}^k ai(λi − λ_{k+1})vi = 0, so by independence ai(λi − λ_{k+1}) = 0 for every i. Accordingly, λ_{k+1} = λi for every i with ai ≠ 0; i.e., v_{k+1} can be linearly dependent on v1, . . . , vk only when equal eigenvalues occur. (The proof is not yet complete here! See the discussion and Example below.)

The above proof says that as long as we have a (k+1)th eigenvector, it is linearly dependent on the previous k eigenvectors only when some eigenvalues are equal. But sometimes we are not guaranteed to have a (k+1)th eigenvector at all.

Page 35: Eigenvalues and Eigenvectors

6.2 Diagonalizing a matrix 6-34

Example. A = [5 4 2 1; 0 1 −1 −1; −1 −1 3 0; 1 1 −1 2].

Solution.

• We have four eigenvalues 1, 2, 4, 4.

• But we can only find three eigenvectors, for 1, 2 and 4.

• From the proof on the previous page, the eigenvectors for 1, 2 and 4 are linearly independent because 1, 2 and 4 are distinct. □

Proof (continued): One condition that guarantees n eigenvectors is having n distinct eigenvalues, which now completes the proof. □

Definition (GM and AM): The number of linearly independent eigenvectors for an eigenvalue λ is called its geometric multiplicity (GM); the number of its appearances in solving det(A − λI) = 0 is called its algebraic multiplicity (AM).

Example. In the above example, GM = 1 and AM = 2 for λ = 4.
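GM can be computed numerically as n − rank(A − λI); a sketch for the example above:

    A = [5 4 2 1; 0 1 -1 -1; -1 -1 3 0; 1 1 -1 2];
    disp(sort(real(eig(A))).')     % 1, 2, 4, 4 (tiny round-off discarded)
    disp(4 - rank(A - 4*eye(4)))   % GM of lambda = 4 is 1: only 3 eigenvectors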

Page 36: Eigenvalues and Eigenvectors

6.2 Fibonacci series 6-35

Eigen-decomposition is useful in solving the Fibonacci series.

Problem: Suppose F_{k+2} = F_{k+1} + F_k with initially F1 = 1 and F0 = 0. Find F100.

Answer:

• [F_{k+2}; F_{k+1}] = [1 1; 1 0][F_{k+1}; F_k] =⇒ [F_{k+2}; F_{k+1}] = [1 1; 1 0]^{k+1}[F1; F0]

• So, writing φ = (1+√5)/2 and ψ = (1−√5)/2 for the two eigenvalues of [1 1; 1 0], with eigenvector matrix S = [φ ψ; 1 1],

[F100; F99] = [1 1; 1 0]^{99}[1; 0] = SΛ^{99}S⁻¹[1; 0]
= [φ ψ; 1 1] [φ^{99} 0; 0 ψ^{99}] [φ ψ; 1 1]⁻¹ [1; 0]
= [φ ψ; 1 1] [φ^{99} 0; 0 ψ^{99}] · (1/(φ − ψ)) [1 −ψ; −1 φ] [1; 0]

Page 37: Eigenvalues and Eigenvectors

6.2 Fibonacci series 6-36

= (1/(φ − ψ)) [φ^{100} − ψ^{100}; φ^{99} − ψ^{99}].

Hence F100 = (φ^{100} − ψ^{100})/(φ − ψ).
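The same computation in MATLAB, both by direct matrix powers and by the closed form above (a sketch):

    A = [1 1; 1 0];
    u = A^99*[1; 0];                          % [F100; F99] by matrix powering
    phi = (1+sqrt(5))/2;  psi = (1-sqrt(5))/2;
    F100 = (phi^100 - psi^100)/(phi - psi);   % closed form derived above
    disp([u(1), F100])                        % both ~3.5422e+20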

Page 38: Eigenvalues and Eigenvectors

6.2 Generalization to uk = Aku0 6-37

• A generalization of the solution to the Fibonacci series is as follows.

Suppose u0 = a1v1 + a2v2 + · · · + anvn.

Then u_k = A^k u0 = a1A^k v1 + a2A^k v2 + · · · + anA^k vn = a1λ1^k v1 + a2λ2^k v2 + · · · + anλn^k vn.

Examination:

u_k = [F_{k+1}; F_k] = [1 1; 1 0]^k [F1; F0] = A^k u0

and, with φ = (1+√5)/2 and ψ = (1−√5)/2 as before,

u0 = [1; 0] = (1/(φ − ψ))[φ; 1] − (1/(φ − ψ))[ψ; 1].

So

u99 = [F100; F99] = (φ^{99}/(φ − ψ))[φ; 1] − (ψ^{99}/(φ − ψ))[ψ; 1].

Page 39: Eigenvalues and Eigenvectors

6.2 More applications of eigen-decomposition 6-38

(Problem 30, Section 6.2) Suppose the same S diagonalizes both A and B. They have the same eigenvectors in A = SΛ1S⁻¹ and B = SΛ2S⁻¹. Prove that AB = BA.

Proposition: Suppose both A and B can be diagonalized, and one of A and B has distinct eigenvalues. Then, A and B have the same eigenvectors if, and only if, AB = BA.

Proof:

• (See Problem 30 above) If A and B have the same eigenvectors, then

AB = (SΛ_A S⁻¹)(SΛ_B S⁻¹) = SΛ_AΛ_B S⁻¹ = SΛ_BΛ_A S⁻¹ = (SΛ_B S⁻¹)(SΛ_A S⁻¹) = BA.

• Suppose without loss of generality that the eigenvalues of A are all distinct. Then, if AB = BA, we have for given Avℓ = λℓvℓ and uℓ = Bvℓ,

Auℓ = A(Bvℓ) = (AB)vℓ = (BA)vℓ = B(Avℓ) = B(λℓvℓ) = λℓ(Bvℓ) = λℓuℓ.

Hence, uℓ = Bvℓ and vℓ are both eigenvectors of A corresponding to the same eigenvalue λℓ.

Page 40: Eigenvalues and Eigenvectors

6.2 More applications of eigen-decomposition 6-39

Let uℓ = Σ_{i=1}^n aivi, where {vi}_{i=1}^n are linearly independent eigenvectors of A. Then

Auℓ = Σ_{i=1}^n aiAvi = Σ_{i=1}^n aiλivi
Auℓ = λℓuℓ = Σ_{i=1}^n aiλℓvi

=⇒ ai(λi − λℓ) = 0 for 1 ≤ i ≤ n.

This implies

uℓ = Σ_{i: λi = λℓ} aivi = aℓvℓ, and since uℓ = Bvℓ, we get Bvℓ = aℓvℓ.

Thus, vℓ is an eigenvector of B. □

Page 41: Eigenvalues and Eigenvectors

6.2 Problem discussions 6-40

(Problem 32, Section 6.2) Substitute A = SΛS⁻¹ into the product (A − λ1I)(A − λ2I) · · · (A − λnI) and explain why this produces the zero matrix. We are substituting the matrix A for the number λ in the polynomial p(λ) = det(A − λI). The Cayley-Hamilton Theorem says that this product is always p(A) = zero matrix, even if A is not diagonalizable.

Thinking over Problem 32: The Cayley-Hamilton Theorem can be easily proved if A is diagonalizable.

Corollary (Cayley-Hamilton): Suppose A has linearly independent eigenvectors. Then

(λ1I − A)(λ2I − A) · · · (λnI − A) = all-zero matrix.

Proof:

(λ1I − A)(λ2I − A) · · · (λnI − A)
= S(λ1I − Λ)S⁻¹ S(λ2I − Λ)S⁻¹ · · · S(λnI − Λ)S⁻¹
= S(λ1I − Λ)(λ2I − Λ) · · · (λnI − Λ)S⁻¹,

where the (1,1)th entry of (λ1I − Λ), the (2,2)th entry of (λ2I − Λ), . . . , and the (n,n)th entry of (λnI − Λ) are all zero, so the product of these diagonal matrices is the all-zero matrix. Hence the whole expression equals

S[all-zero entries]S⁻¹ = [all-zero entries]. □

Page 42: Eigenvalues and Eigenvectors

6.3 Applications to differential equations 6-41

We can use eigen-decomposition to solve

du/dt = [u1′(t); u2′(t); . . . ; un′(t)] = Au = [a_{1,1}u1(t) + a_{1,2}u2(t) + · · · + a_{1,n}un(t); a_{2,1}u1(t) + a_{2,2}u2(t) + · · · + a_{2,n}un(t); . . . ; a_{n,1}u1(t) + a_{n,2}u2(t) + · · · + a_{n,n}un(t)],

where A is called the companion matrix.

By differential equation techniques, we know that the solution is of the form

u = e^{λt}v

for some constant λ and constant vector v.

Question: What are all λ and v that satisfy du/dt = Au?

Solution: Substitute u = e^{λt}v into the equation:

du/dt (= λe^{λt}v) = Au (= Ae^{λt}v) =⇒ λv = Av.

The answers are all eigenvalue-eigenvector pairs.

Page 43: Eigenvalues and Eigenvectors

6.3 Applications to differential equations 6-42

Since du/dt = Au is a linear system, a linear combination of solutions is still a solution.

Hence, if there are n eigenvectors, the complete solution can be represented as

c1e^{λ1t}v1 + c2e^{λ2t}v2 + · · · + cne^{λnt}vn,

where c1, c2, . . . , cn can be determined by the initial conditions.

Here, for convenience of discussion at this moment, we assume that A gives us exactly n eigenvectors.

Page 44: Eigenvalues and Eigenvectors

6.3 Applications to differential equations 6-43

It is sometimes convenient to re-express the solution of du/dt = Au for a given initial condition u(0) as e^{At}u(0), i.e.,

c1e^{λ1t}v1 + c2e^{λ2t}v2 + · · · + cne^{λnt}vn = [v1 v2 · · · vn] (= S) · diag(e^{λ1t}, e^{λ2t}, . . . , e^{λnt}) · [c1; c2; . . . ; cn] = e^{At}u(0),

where we define

e^{At} ≜ Σ_{k=0}^∞ (1/k!)(At)^k = Σ_{k=0}^∞ (t^k/k!)A^k = Σ_{k=0}^∞ (t^k/k!)(SΛ^kS⁻¹) = S(Σ_{k=0}^∞ (t^k/k!)Λ^k)S⁻¹
= S · diag(Σ_{k=0}^∞ (t^k/k!)λ1^k, . . . , Σ_{k=0}^∞ (t^k/k!)λn^k) · S⁻¹ = S · diag(e^{λ1t}, e^{λ2t}, . . . , e^{λnt}) · S⁻¹,

and hence u(0) = Sc.

Page 45: Eigenvalues and Eigenvectors

6.3 Applications to differential equations 6-44

Key to remember: Again, we define by convention that

f(A) = S · diag(f(λ1), f(λ2), . . . , f(λn)) · S⁻¹.

So, e^{At} = S · diag(e^{λ1t}, e^{λ2t}, . . . , e^{λnt}) · S⁻¹.

Page 46: Eigenvalues and Eigenvectors

6.3 Applications to differential equations 6-45

We can solve second-order equations in the same manner.

Example. Solve my″ + by′ + ky = 0.

Answer:

• Let z = y′. Then, the problem is reduced to

y′ = z
z′ = −(k/m)y − (b/m)z

=⇒ du/dt = Au with u = [y; z] and A = [0 1; −k/m −b/m].

• The complete solution for u is therefore

c1e^{λ1t}v1 + c2e^{λ2t}v2.

Page 47: Eigenvalues and Eigenvectors

6.3 Applications to differential equations 6-46

Example. Solve y″ + y = 0 with initial conditions y(0) = 1 and y′(0) = 0.

Solution.

• Let z = y′. Then, the problem is reduced to

y′ = z
z′ = −y

=⇒ du/dt = Au with u = [y; z] and A = [0 1; −1 0].

• The eigenvalue-eigenvector pairs of A are (i, [−i; 1]) and (−i, [i; 1]).

• The complete solution for u is therefore

[y; y′] = c1e^{it}[−i; 1] + c2e^{−it}[i; 1], with initially [1; 0] = c1[−i; 1] + c2[i; 1].

So, c1 = i/2 and c2 = −i/2. Thus,

y(t) = (i/2)e^{it}(−i) − (i/2)e^{−it}(i) = (1/2)e^{it} + (1/2)e^{−it} = cos(t). □

Page 48: Eigenvalues and Eigenvectors

6.3 Applications to differential equations 6-47

In practice, we will use a discrete approximation to approximate a continuous function. There are, however, three different discrete approximations.

For example, how do we approximate y″(t) = −y(t) by a discrete system?

(Y_{n+1} − 2Y_n + Y_{n−1})/(∆t)² = [ (Y_{n+1} − Y_n)/∆t − (Y_n − Y_{n−1})/∆t ] / ∆t, set equal to
−Y_{n−1} (forward approximation)
−Y_n (centered approximation)
−Y_{n+1} (backward approximation)

Let's take the forward approximation as an example. With Z_n = (Y_{n+1} − Y_n)/∆t,

(Z_n − Z_{n−1})/∆t = −Y_{n−1}.

Thus,

y′(t) = z(t), z′(t) = −y(t)  ≈  (Y_{n+1} − Y_n)/∆t = Z_n, (Z_{n+1} − Z_n)/∆t = −Y_n

≈  u_{n+1} = [Y_{n+1}; Z_{n+1}] = [1 ∆t; −∆t 1] (= A) [Y_n; Z_n] = A u_n.

Page 49: Eigenvalues and Eigenvectors

6.3 Applications to differential equations 6-48

Then, we obtain

u_n = A^n u_0 = [1 1; i −i] [(1 + i∆t)^n 0; 0 (1 − i∆t)^n] [1/2 −i/2; 1/2 i/2] [1; 0]

and

Y_n = [(1 + i∆t)^n + (1 − i∆t)^n]/2 = [1 + (∆t)²]^{n/2} cos(n·∆θ),

where ∆θ = tan⁻¹(∆t).

Problem: Y_n → ∞ as n grows large.

Page 50: Eigenvalues and Eigenvectors

6.3 Applications to differential equations 6-49

The backward approximation will lead to Y_n → 0 as n grows large.

y′(t) = z(t), z′(t) = −y(t)  ≈  (Y_{n+1} − Y_n)/∆t = Z_{n+1}, (Z_{n+1} − Z_n)/∆t = −Y_{n+1}

≈  [1 −∆t; ∆t 1] (= A⁻¹) [Y_{n+1}; Z_{n+1}] (= u_{n+1}) = [Y_n; Z_n] (= u_n)

A solution to the problem: interleave the forward approximation with the backward approximation.

y′(t) = z(t), z′(t) = −y(t)  ≈  (Y_{n+1} − Y_n)/∆t = Z_n, (Z_{n+1} − Z_n)/∆t = −Y_{n+1}

≈  [1 0; ∆t 1][Y_{n+1}; Z_{n+1}] (= u_{n+1}) = [1 ∆t; 0 1][Y_n; Z_n] (= u_n)

We then perform this so-called leapfrog method. (See Problems 28 and 29.)

[Y_{n+1}; Z_{n+1}] = [1 0; ∆t 1]⁻¹[1 ∆t; 0 1][Y_n; Z_n] = [1 ∆t; −∆t 1 − (∆t)²][Y_n; Z_n],

where the transition matrix has |eigenvalues| = 1 if ∆t ≤ 2.

Page 51: Eigenvalues and Eigenvectors

6.3 Applications to differential equations 6-50

(Problem 28, Section 6.3) Centering y″ = −y in Example 3 will produce Y_{n+1} − 2Y_n + Y_{n−1} = −(∆t)²Y_n. This can be written as a one-step difference equation for U = (Y, Z):

Y_{n+1} = Y_n + ∆t Z_n
Z_{n+1} = Z_n − ∆t Y_{n+1}

i.e., [1 0; ∆t 1][Y_{n+1}; Z_{n+1}] = [1 ∆t; 0 1][Y_n; Z_n].

Invert the matrix on the left side to write this as U_{n+1} = AU_n. Show that det A = 1. Choose the large time step ∆t = 1 and find the eigenvalues λ1 and λ2 = λ̄1 of A:

A = [1 1; −1 0] has |λ1| = |λ2| = 1. Show that A⁶ is exactly I.

After 6 steps to t = 6, U6 equals U0. The exact y = cos(t) returns to 1 at t = 2π.

Page 52: Eigenvalues and Eigenvectors

6.3 Applications to differential equations 6-51

(Problem 29, Section 6.3) That centered choice (leapfrog method) in Problem 28 is very successful for small time steps ∆t. But find the eigenvalues of A for ∆t = √2 and 2:

A = [1 √2; −√2 −1] and A = [1 2; −2 −3].

Both matrices have |λ| = 1. Compute A⁴ in both cases and find the eigenvectors of A. That value ∆t = 2 is at the border of instability. Time steps ∆t > 2 will lead to |λ| > 1, and the powers in U_n = A^nU_0 will explode.

Note: You might say that nobody would compute with ∆t > 2. But if an atom vibrates with y″ = −1000000y, then ∆t > .0002 will give instability. Leapfrog has a very strict stability limit. Y_{n+1} = Y_n + 3Z_n and Z_{n+1} = Z_n − 3Y_{n+1} will explode because ∆t = 3 is too large.

Page 53: Eigenvalues and Eigenvectors

6.3 Applications to differential equations 6-52

A better solution to the problem: mix the forward approximation with the backward approximation.

y′(t) = z(t), z′(t) = −y(t)  ≈  (Y_{n+1} − Y_n)/∆t = (Z_{n+1} + Z_n)/2, (Z_{n+1} − Z_n)/∆t = −(Y_{n+1} + Y_n)/2

≈  [1 −∆t/2; ∆t/2 1][Y_{n+1}; Z_{n+1}] (= u_{n+1}) = [1 ∆t/2; −∆t/2 1][Y_n; Z_n] (= u_n)

We then perform this so-called trapezoidal method. (See Problem 30.)

[Y_{n+1}; Z_{n+1}] = [1 −∆t/2; ∆t/2 1]⁻¹[1 ∆t/2; −∆t/2 1][Y_n; Z_n]
= (1/(1 + (∆t/2)²)) [1 − (∆t/2)² ∆t; −∆t 1 − (∆t/2)²] [Y_n; Z_n],

where the transition matrix has |eigenvalues| = 1 for all ∆t > 0.
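A short experiment contrasting the three one-step maps on y″ = −y (a sketch): the spectral radius of each transition matrix decides growth, neutrality, or decay.

    dt = 0.5;
    F  = [1 dt; -dt 1];                          % forward: |lambda| > 1 (blow-up)
    Lf = [1 dt; -dt 1-dt^2];                     % leapfrog: |lambda| = 1 if dt <= 2
    Tz = [1 -dt/2; dt/2 1] \ [1 dt/2; -dt/2 1];  % trapezoidal: |lambda| = 1 always
    disp([max(abs(eig(F))), max(abs(eig(Lf))), max(abs(eig(Tz)))])
    % about 1.1180, 1.0000, 1.0000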

Page 54: Eigenvalues and Eigenvectors

6.3 Applications to differential equations 6-53

(Problem 30, Section 6.3) Another good idea for y″ = −y is the trapezoidal method (half forward/half back): This may be the best way to keep (Y_n, Z_n) exactly on a circle.

Trapezoidal: [1 −∆t/2; ∆t/2 1][Y_{n+1}; Z_{n+1}] = [1 ∆t/2; −∆t/2 1][Y_n; Z_n].

(a) Invert the left matrix to write this equation as U_{n+1} = AU_n. Show that A is an orthogonal matrix: AᵀA = I. These points U_n never leave the circle.

A = (I − B)⁻¹(I + B) is always an orthogonal matrix if Bᵀ = −B (see the proof on the next page).

(b) (Optional MATLAB) Take 32 steps from U0 = (1, 0) to U32 with ∆t = 2π/32. Is U32 = U0? I think there is a small error.

Page 55: Eigenvalues and Eigenvectors

6.3 Applications to differential equations 6-54

Proof:

A = (I − B)⁻¹(I + B)
⇒ (I − B)A = (I + B)
⇒ Aᵀ(I − B)ᵀ = (I + B)ᵀ
⇒ Aᵀ(I − Bᵀ) = (I + Bᵀ)
⇒ Aᵀ(I + B) = (I − B)   (using Bᵀ = −B)
⇒ Aᵀ = (I − B)(I + B)⁻¹
⇒ AᵀA = (I − B)(I + B)⁻¹(I − B)⁻¹(I + B)
= (I − B)[(I − B)(I + B)]⁻¹(I + B)
= (I − B)[(I + B)(I − B)]⁻¹(I + B)   ((I − B) and (I + B) commute)
= (I − B)(I − B)⁻¹(I + B)⁻¹(I + B)
= I. □

Page 56: Eigenvalues and Eigenvectors

6.3 Applications to differential equations 6-55

Question: What if the number of eigenvectors is smaller than n?

Recall that it is convenient to say that the solution of du/dt = Au is e^{At}u(0) for some constant vector u(0).

Conveniently, we can presume that e^{At}u(0) is the solution of du/dt = Au even if A does not have n eigenvectors.

Example. Solve y″ − 2y′ + y = 0 with initially y(0) = 1 and y′(0) = 0.

Answer.

• Let z = y′. Then, the problem is reduced to

y′ = z
z′ = −y + 2z

=⇒ du/dt = [y′; z′] = [0 1; −1 2] (= A) [y; z] = Au

• The solution is still e^{At}u(0) but

e^{At} ≠ S e^{Λt} S⁻¹

because there is only one eigenvector for A (it doesn't matter whether we regard this case as "S does not exist" or as "S exists but has no inverse").

Page 57: Eigenvalues and Eigenvectors

6.3 Applications to differential equations 6-56

[0 1; −1 2] (= A) [1 1; 1 1] (= S) = [1 1; 1 1] (= S) [1 0; 0 1] (= Λ),

and, with the repeated eigenvalue λ = 1,

e^{At} = e^{λIt}e^{(A−λI)t} = e^{It}e^{(A−I)t}   (since (It)((A − I)t) = ((A − I)t)(It); see below)
= (Σ_{k=0}^∞ (1/k!)I^k t^k)(Σ_{k=0}^∞ (1/k!)(A − I)^k t^k)
= ((Σ_{k=0}^∞ (1/k!)t^k) I)(Σ_{k=0}^1 (1/k!)(A − I)^k t^k),

where we know I^k = I for every k ≥ 0 and (A − I)^k = all-zero matrix for k ≥ 2. This gives

e^{At} = (e^t I)(I + (A − I)t) = e^t (I + (A − I)t)

and

u(t) = e^{At}u(0) = e^{At}[1; 0] = e^t (I + (A − I)t)[1; 0] = e^t [1 − t; −t]. □

Note that e^A e^B, e^B e^A and e^{A+B} may not be equal unless AB = BA!

Page 58: Eigenvalues and Eigenvectors

6.3 Applications to differential equations 6-57

Properties of e^{At}

• The eigenvalues of e^{At} are e^{λt} for all eigenvalues λ of A.

For example, e^{At} = e^t (I + (A − I)t) has repeated eigenvalues e^t, e^t.

• The eigenvectors of e^{At} remain the same as those of A.

For example, e^{At} = e^t (I + (A − I)t) has only one eigenvector [1; 1].

• The inverse of e^{At} always exists (hint: e^{λt} ≠ 0), and is equal to e^{−At}.

For example, e^{−At} = e^{−It}e^{(I−A)t} = e^{−t}(I − (A − I)t).

• The transpose of e^{At} is

(e^{At})ᵀ = (Σ_{k=0}^∞ (1/k!)A^k t^k)ᵀ = Σ_{k=0}^∞ (1/k!)(Aᵀ)^k t^k = e^{Aᵀt}.

Hence, if Aᵀ = −A (skew-symmetric), then (e^{At})ᵀ e^{At} = e^{Aᵀt}e^{At} = I; so e^{At} is an orthogonal matrix (recall that a matrix Q is orthogonal if QᵀQ = I).

Page 59: Eigenvalues and Eigenvectors

6.3 Quick tip 6-58

• For a 2 × 2 matrix A = [a_{1,1} a_{1,2}; a_{2,1} a_{2,2}], the eigenvector corresponding to the eigenvalue λ1 is [a_{1,2}; λ1 − a_{1,1}]:

(A − λ1I)[a_{1,2}; λ1 − a_{1,1}] = [a_{1,1}−λ1 a_{1,2}; a_{2,1} a_{2,2}−λ1][a_{1,2}; λ1 − a_{1,1}] = [0; ?]

where ? = 0 because the two row vectors of (A − λ1I) are parallel.

This is especially useful when solving differential equations, because then a_{1,1} = 0 and a_{1,2} = 1; hence, the eigenvector v1 corresponding to the eigenvalue λ1 is

v1 = [1; λ1].

Solve y″ − 2y′ + y = 0. Let z = y′. Then, the problem is reduced to

y′ = z
z′ = −y + 2z

=⇒ du/dt = [y′; z′] = [0 1; −1 2] (= A) [y; z] = Au =⇒ v1 = [1; 1]

Page 60: Eigenvalues and Eigenvectors

6.3 Problem discussions 6-59

Problem 6.3B and Problem 31: a convenient way to solve d²u/dt² = Au.

(Problem 31, Section 6.3) The cosine of a matrix is defined like e^A, by copying the series for cos t:

cos t = 1 − (1/2!)t² + (1/4!)t⁴ − · · ·    cos A = I − (1/2!)A² + (1/4!)A⁴ − · · ·

(a) If Ax = λx, multiply each term times x to find the eigenvalue of cos A.

(b) Find the eigenvalues of A = [π π; π π] with eigenvectors (1, 1) and (1, −1). From the eigenvalues and eigenvectors of cos A, find that matrix C = cos A.

(c) The second derivative of cos(At) is −A² cos(At).

u(t) = cos(At)u(0) solves d²u/dt² = −A²u starting from u′(0) = 0.

Construct u(t) = cos(At)u(0) by the usual three steps for that specific A:

1. Expand u(0) = (4, 2) = c1x1 + c2x2 in the eigenvectors.
2. Multiply those eigenvectors by ______ and ______ (instead of e^{λt}).
3. Add up the solution u(t) = c1 ______ x1 + c2 ______ x2. (Hint: See slide 6-45.)

Page 61: Eigenvalues and Eigenvectors

6.3 Problem discussions 6-60

(Problem 6.3B, Section 6.3) Find the eigenvalues and eigenvectors of A and write u(0) = (0, 2√2, 0) as a combination of the eigenvectors. Solve both equations u′ = Au and u″ = Au:

du/dt = [−2 1 0; 1 −2 1; 0 1 −2]u and d²u/dt² = [−2 1 0; 1 −2 1; 0 1 −2]u, with du/dt(0) = 0.

The 1, −2, 1 diagonals make A into a second difference matrix (like a second derivative).

u′ = Au is like the heat equation ∂u/∂t = ∂²u/∂x².
Its solution u(t) will decay (negative eigenvalues).

u″ = Au is like the wave equation ∂²u/∂t² = ∂²u/∂x².
Its solution will oscillate (imaginary eigenvalues).

Page 62: Eigenvalues and Eigenvectors

6.3 Problem discussions 6-61

Thinking over Problem 6.3B and Problem 31: Solve d²u/dt² = Au.

Solution. The answer should be e^{A^{1/2}t}u(0). Note that A^{1/2} has the same eigenvectors as A but its eigenvalues are the square roots of those of A:

e^{A^{1/2}t} = S · diag(e^{√λ1 t}, e^{√λ2 t}, . . . , e^{√λn t}) · S⁻¹.

As an example from Problem 6.3B, A = [−2 1 0; 1 −2 1; 0 1 −2].

The eigenvalues of A are −2 and −2 ± √2.

So,

e^{A^{1/2}t} = S · diag(e^{i√2 t}, e^{i√(2−√2) t}, e^{i√(2+√2) t}) · S⁻¹.

A more direct solution is u(t) = cos(At)u(0). See Problem 31.

Page 63: Eigenvalues and Eigenvectors

6.3 Problem discussions 6-62

Final reminder on the approach delivered in the textbook

• In solving du(t)/dt = Au with initial u(0), it is convenient to find the solution as

u(0) = Sc = c1v1 + · · · + cnvn
u(t) = Se^{Λt}c = c1e^{λ1t}v1 + · · · + cne^{λnt}vn

This is what the textbook always does in the Problems.

Page 64: Eigenvalues and Eigenvectors

6.3 Problem discussions 6-63

You should practice these problems by yourself!

Problem 15: How to solve du/dt = Au − b (for invertible A)?

(Problem 15, Section 6.3) A particular solution to du/dt = Au − b is u_p = A⁻¹b, if A is invertible. The usual solutions to du/dt = Au give u_n. Find the complete solution u = u_p + u_n:

(a) du/dt = u − 4   (b) du/dt = [1 0; 1 1]u − [4; 6].

Problem 16: How to solve du/dt = Au − e^{ct}b (for c not an eigenvalue of A)?

(Problem 16, Section 6.3) If c is not an eigenvalue of A, substitute u = e^{ct}v and find a particular solution to du/dt = Au − e^{ct}b. How does it break down when c is an eigenvalue of A? The "nullspace" of du/dt = Au contains the usual solutions e^{λit}xi.

Hint: The particular solution v_p satisfies (A − cI)v_p = b.

Page 65: Eigenvalues and Eigenvectors

6.3 Problem discussions 6-64

Problem 23: e^Ae^B, e^Be^A and e^{A+B} are not necessarily equal. (They are equal when AB = BA.)

(Problem 23, Section 6.3) Generally e^Ae^B is different from e^Be^A. Both are different from e^{A+B}. Check this using Problems 21-22 and 19. (If AB = BA, all these are the same.)

A = [1 4; 0 0]   B = [0 −4; 0 0]   A + B = [1 0; 0 0]

Problem 27: An interesting brain-storming problem!

(Problem 27, Section 6.3) Find a solution x(t), y(t) that gets large as t → ∞. To avoid this instability a scientist exchanges the two equations:

dx/dt = 0x − 4y                   dy/dt = −2x + 2y
dy/dt = −2x + 2y    becomes       dx/dt = 0x − 4y

Now the matrix [−2 2; 0 −4] is stable. It has negative eigenvalues. How can this be?

Hint: The matrix is not the right one to describe the transformed linear equations.

Page 66: Eigenvalues and Eigenvectors

6.4 Symmetric matrices 6-65

Definition (Symmetric matrix): A matrix A is symmetric if A = Aᵀ.

• When A is symmetric,

– its eigenvalues are all real;
– it has n orthogonal eigenvectors (so it can be diagonalized). We can then normalize these orthogonal eigenvectors to obtain an orthonormal basis.

Definition (Symmetric diagonalization): A symmetric matrix A can be written as

A = QΛQ⁻¹ = QΛQᵀ with Q⁻¹ = Qᵀ,

where Λ is a diagonal matrix with the real eigenvalues on its diagonal.

Page 67: Eigenvalues and Eigenvectors

6.4 Symmetric matrices 6-66

Theorem (Spectral theorem or principal axis theorem): A symmetric matrix A with distinct eigenvalues can be written as

A = QΛQ⁻¹ = QΛQᵀ with Q⁻¹ = Qᵀ,

where Λ is a diagonal matrix with the real eigenvalues on its diagonal.

Proof:

• We have proved on slide 6-13 that a symmetric matrix has real eigenvalues and a skew-symmetric matrix has purely imaginary eigenvalues.

• Next, we prove that the eigenvectors are orthogonal, i.e., v1ᵀv2 = 0, for unequal eigenvalues λ1 ≠ λ2:

Av1 = λ1v1 and Av2 = λ2v2
=⇒ (λ1v1)ᵀv2 = (Av1)ᵀv2 = v1ᵀAᵀv2 = v1ᵀAv2 = v1ᵀλ2v2,

which implies

(λ1 − λ2)v1ᵀv2 = 0.

Therefore, v1ᵀv2 = 0 because λ1 ≠ λ2. □

Page 68: Eigenvalues and Eigenvectors

6.4 Symmetric matrices 6-67

• A = QΛQᵀ implies that

A = λ1q1q1ᵀ + · · · + λnqnqnᵀ = λ1P1 + · · · + λnPn.

Recall from slide 4-37 that the projection matrix Pi onto a unit vector qi is

Pi = qi(qiᵀqi)⁻¹qiᵀ = qiqiᵀ.

So, Ax is the sum of the projections of the vector x onto each eigenspace, scaled by the eigenvalues:

Ax = λ1P1x + · · · + λnPnx.

The spectral theorem can be extended to a symmetric matrix with repeated eigenvalues.

To prove this, we need to introduce the famous Schur's theorem.

Page 69: Eigenvalues and Eigenvectors

6.4 Symmetric matrices 6-68

Theorem (Schur's theorem): Every square matrix A can be factorized into

A = QTQ⁻¹ with T upper triangular and Q† = Q⁻¹,

where "†" denotes the Hermitian transpose operation.

Further, if A has real eigenvalues (and hence has real eigenvectors), then Q and T can be chosen real.

Proof: The existence of Q and T such that A = QTQ† and Q† = Q⁻¹ can be proved by induction.

• The theorem trivially holds when A is a 1 × 1 matrix.

• Suppose that Schur's theorem is valid for all (n−1) × (n−1) matrices. Then, we claim that Schur's theorem holds for all n × n matrices.

– This is because we can take t_{1,1} and q1 to be an eigenvalue and (unit) eigenvector of A_{n×n} (as there must exist at least one eigenvalue-eigenvector pair for A_{n×n}). Then, choose any p2, . . . , pn such that they together with q1 span the n-dimensional space and

P = [q1 p2 · · · pn]

is a matrix with orthonormal columns.

Page 70: Eigenvalues and Eigenvectors

6.4 Symmetric matrices 6-69

– We derive

P†AP = [q1†; p2†; . . . ; pn†][Aq1 Ap2 · · · Apn]
= [q1†; p2†; . . . ; pn†][t_{1,1}q1 Ap2 · · · Apn]   (since Aq1 = t_{1,1}q1, with q1 an eigenvector and t_{1,1} an eigenvalue)
= [t_{1,1} t_{1,2} · · · t_{1,n}; 0 A_{(n−1)×(n−1)}]   (zeros filling the rest of the first column),

where t_{1,j} = q1†Apj and

A_{(n−1)×(n−1)} = [p2†Ap2 · · · p2†Apn; . . . ; pn†Ap2 · · · pn†Apn].

Page 71: Eigenvalues and Eigenvectors

6.4 Symmetric matrices 6-70

– Since Schur's theorem is true for any (n−1) × (n−1) matrix, we can find Q_{(n−1)×(n−1)} and T_{(n−1)×(n−1)} such that

A_{(n−1)×(n−1)} = Q_{(n−1)×(n−1)}T_{(n−1)×(n−1)}Q†_{(n−1)×(n−1)} and Q†_{(n−1)×(n−1)} = Q⁻¹_{(n−1)×(n−1)}.

– Finally, define

Q_{n×n} = P[1 0ᵀ; 0 Q_{(n−1)×(n−1)}] and T_{n×n} = [t_{1,1} t_{1,2} · · · t_{1,n}; 0 T_{(n−1)×(n−1)}],

which satisfy

Q†_{n×n}Q_{n×n} = [1 0ᵀ; 0 Q†_{(n−1)×(n−1)}] P†P [1 0ᵀ; 0 Q_{(n−1)×(n−1)}] = I_{n×n}

Page 72: Eigenvalues and Eigenvectors

6.4 Symmetric matrices 6-71

and

Q_{n×n}T_{n×n} = P[1 0ᵀ; 0 Q_{(n−1)×(n−1)}][t_{1,1} t_{1,2} · · · t_{1,n}; 0 T_{(n−1)×(n−1)}]
= P[t_{1,1} t_{1,2} · · · t_{1,n}; 0 Q_{(n−1)×(n−1)}T_{(n−1)×(n−1)}]
= P[t_{1,1} t_{1,2} · · · t_{1,n}; 0 A_{(n−1)×(n−1)}Q_{(n−1)×(n−1)}]
= P[t_{1,1} t_{1,2} · · · t_{1,n}; 0 A_{(n−1)×(n−1)}][1 0ᵀ; 0 Q_{(n−1)×(n−1)}]
= P(P†A_{n×n}P)[1 0ᵀ; 0 Q_{(n−1)×(n−1)}] = A_{n×n}Q_{n×n}.

Page 73: Eigenvalues and Eigenvectors

6.4 Symmetric matrices 6-72

• It remains to prove that if A has real eigenvalues, then Q and T can be chosen real, which can similarly be proved by induction. Suppose the claim "If A has real eigenvalues, then Q and T can be chosen real" is true for all (n−1) × (n−1) matrices. Then the claim is true for all n × n matrices.

– For a given A_{n×n} with real eigenvalues (and real eigenvectors), we can certainly take t_{1,1} and q1 real, and so are p2, . . . , pn. This makes the resultant A_{(n−1)×(n−1)} and t_{1,2}, . . . , t_{1,n} real.

The eigenvector associated with a real eigenvalue can be chosen real: for complex v and real λ and A, the definition

Av = λv

is equivalent to

A · Re{v} = λ · Re{v} and A · Im{v} = λ · Im{v}.

Page 74: Eigenvalues and Eigenvectors

6.4 Symmetric matrices 6-73

– The proof is completed by noting that the eigenvalues of A_{(n−1)×(n−1)} satisfying

P†AP = [t_{1,1} t_{1,2} · · · t_{1,n}; 0 A_{(n−1)×(n−1)}]

are also eigenvalues of A_{n×n}; hence, they are all real. So, by the validity of the claimed statement for (n−1) × (n−1) matrices, the existence of real Q_{(n−1)×(n−1)} and real T_{(n−1)×(n−1)} satisfying

A_{(n−1)×(n−1)} = Q_{(n−1)×(n−1)}T_{(n−1)×(n−1)}Q†_{(n−1)×(n−1)} and Q†_{(n−1)×(n−1)} = Q⁻¹_{(n−1)×(n−1)}

is confirmed. □

Page 75: Eigenvalues and Eigenvectors

6.4 Symmetric matrices 6-74

Two important facts:

• P†AP and A have the same eigenvalues but possibly different eigenvectors. A simple proof: for v = Pv′,

(P†AP)v′ = P†Av = P†(λv) = λP†v = λv′.

• (Section 6.1, Problem 26; or see slide 6-22)

det(A) = det([B C; 0 D]) = det(B) · det(D).

Thus, for

P†AP = [t_{1,1} t_{1,2} · · · t_{1,n}; 0 A_{(n−1)×(n−1)}],

the eigenvalues of A are the eigenvalues of P†AP, namely t_{1,1} together with the eigenvalues of A_{(n−1)×(n−1)}.

Page 76: Eigenvalues and Eigenvectors

6.4 Symmetric matrices 6-75

Theorem (Spectral theorem or principal axis theorem): A symmetric matrix A (not necessarily with distinct eigenvalues) can be written as

A = QΛQ⁻¹ = QΛQᵀ with Q⁻¹ = Qᵀ,

where Λ is a diagonal matrix with the real eigenvalues on its diagonal.

Proof:

• A symmetric matrix certainly satisfies Schur's theorem:

A = QTQ† with T upper triangular and Q† = Q⁻¹.

• A has real eigenvalues. So, both T and Q are real.

• By Aᵀ = A, we have

Aᵀ = QTᵀQᵀ = QTQᵀ = A.

This immediately gives

Tᵀ = T,

which implies the off-diagonal entries are zero.

Page 77: Eigenvalues and Eigenvectors

6.4 Symmetric matrices 6-76

• By AQ = QT, equivalently

Aq1 = t_{1,1}q1, Aq2 = t_{2,2}q2, . . . , Aqn = t_{n,n}qn,

we know that Q is the matrix of eigenvectors (and there are n of them) and T is the matrix of eigenvalues. □

This result immediately indicates that a symmetric A can always be diagonalized.

Summary

• A symmetric matrix has real eigenvalues and n real orthogonal eigenvectors.
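A numerical illustration of the spectral theorem: for symmetric A, eig returns orthonormal eigenvectors, and A equals the sum of scaled rank-one projections (a sketch):

    A = [5 4; 4 5];
    [Q, L] = eig(A);
    disp(norm(Q.'*Q - eye(2)))    % ~0: Q is orthogonal
    B = L(1,1)*Q(:,1)*Q(:,1).' + L(2,2)*Q(:,2)*Q(:,2).';
    disp(norm(A - B))             % ~0: A = lambda1 q1 q1' + lambda2 q2 q2'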

Page 78: Eigenvalues and Eigenvectors

6.4 Problem discussions 6-77

(Problem 21, Section 6.4) (Recommended) This matrix M is skew-symmetric and also ______. Then all its eigenvalues are pure imaginary and they also have |λ| = 1. (‖Mx‖ = ‖x‖ for every x so ‖λx‖ = ‖x‖ for eigenvectors.) Find all four eigenvalues from the trace of M:

M = (1/√3)[0 1 1 1; −1 0 −1 1; −1 1 0 −1; −1 −1 1 0] can only have eigenvalues i or −i.

Thinking over Problem 14: The eigenvalues of an orthogonal matrix satisfy |λ| = 1.

Proof: ‖Qv‖² = (Qv)†Qv = v†Q†Qv = v†v = ‖v‖² implies ‖λv‖ = ‖v‖. □

|λ| = 1 together with λ pure imaginary implies λ = ±i.

Page 79: Eigenvalues and Eigenvectors

6.4 Problem discussions 6-78

(Problem 15, Section 6.4) Show that A (symmetric but complex) has only one line of eigenvectors:

A = [i 1; 1 −i] is not even diagonalizable: eigenvalues λ = 0, 0.

Aᵀ = A is not such a special property for complex matrices. The good property is Āᵀ = A (Section 10.2). Then all λ's are real and the eigenvectors are orthogonal.

Thinking over Problem 15: That a symmetric matrix A satisfying Aᵀ = A has real eigenvalues and n orthogonal eigenvectors is only true for real symmetric matrices.

For a complex matrix A, we need to rephrase it as: "A Hermitian matrix A satisfying A† = A has real eigenvalues and n orthogonal eigenvectors."

Inner product and norm for complex vectors:

v · w = v†w and ‖v‖² = v†v

Page 80: Eigenvalues and Eigenvectors

6.4 Problem discussions 6-79

Proof of the red-colored claim: Suppose Av = λv. Then,

Av = λv
=⇒ (Av)†v = (λv)†v
=⇒ v†A†v = λ*v†v
=⇒ v†Av = λ*v†v (A† = A)
=⇒ v†λv = λ*v†v
=⇒ λ‖v‖² = λ*‖v‖²
=⇒ λ = λ*

(Problem 28, Section 6.4) For complex matrices, the symmetry Aᵀ = A that produces real eigenvalues changes to Āᵀ = A. From det(A − λI) = 0, find the eigenvalues of the 2 by 2 "Hermitian" matrix A = [4 2+i; 2−i 0] = Āᵀ. To see why eigenvalues are real when Āᵀ = A, adjust equation (1) of the text to Āx̄ = λ̄x̄. (See the green box above.)

Page 81: Eigenvalues and Eigenvectors

6.4 Problem discussions 6-80

(Problem 27, Section 6.4) (MATLAB) Take two symmetric matrices with different eigenvectors, say A = [1 0; 0 2] and B = [8 1; 1 0]. Graph the eigenvalues λ1(A + tB) and λ2(A + tB) for −8 < t < 8. Peter Lax says on page 113 of Linear Algebra that λ1 and λ2 appear to be on a collision course at certain values of t. "Yet at the last minute they turn aside." How close do they come?

Correction for Problem 27: The problem should be ". . . Graph the eigenvalues λ1 and λ2 of A + tB for −8 < t < 8. . . ."

Hint: Draw the pictures of λ1(t) and λ2(t) with respect to t and check λ1(t) − λ2(t).

Page 82: Eigenvalues and Eigenvectors

6.4 Problem discussions 6-81

(Problem 29, Section 6.4) Normal matrices have ĀᵀA = AĀᵀ. For real matrices, AᵀA = AAᵀ includes symmetric, skew-symmetric, and orthogonal matrices. Those have real λ, imaginary λ, and |λ| = 1. Other normal matrices can have any complex eigenvalues λ.

Key point: Normal matrices have n orthonormal eigenvectors. Those vectors xi probably will have complex components. In that complex case orthogonality means x̄iᵀxj = 0, as Chapter 10 explains. Inner products (dot products) become x̄ᵀy.

The test for n orthonormal columns in Q becomes Q̄ᵀQ = I instead of QᵀQ = I.

A has n orthonormal eigenvectors (A = QΛQ̄ᵀ) if and only if A is normal.

(a) Start from A = QΛQ̄ᵀ with Q̄ᵀQ = I. Show that ĀᵀA = AĀᵀ.

(b) Now start from ĀᵀA = AĀᵀ. Schur found A = QTQ̄ᵀ for every matrix A, with a triangular T. For normal matrices we must show (in 3 steps) that this T will actually be diagonal. Then T = Λ.

Step 1. Put A = QTQ̄ᵀ into ĀᵀA = AĀᵀ to find T̄ᵀT = TT̄ᵀ.
Step 2. Suppose T = [a b; 0 d] has T̄ᵀT = TT̄ᵀ. Prove that b = 0.
Step 3. Extend Step 2 to size n. A normal triangular T must be diagonal.

Page 83: Eigenvalues and Eigenvectors

6.4 Problem discussions 6-82

Important conclusion from Problem 29: A matrix A has n orthogonal eigenvectors if, and only if, A is normal.

Definition (Normal matrix): A matrix A is normal if A†A = AA†.

Proof:

• Schur's theorem: A = QTQ† with T upper triangular and Q† = Q⁻¹.
• A†A = AA† =⇒ T†T = TT† =⇒ T diagonal =⇒ Q is the matrix of eigenvectors, by AQ = QT. □

Page 84: Eigenvalues and Eigenvectors

6.5 Positive definite matrices 6-83

Definition (Positive definite): A symmetric matrix A is positive definite if its eigenvalues are all positive.

• The above definition only applies to a symmetric matrix, because a non-symmetric matrix may have complex eigenvalues (which cannot be compared with zero)!

Properties of positive definite (symmetric) matrices

• Equivalent definition: A is positive definite if, and only if, xᵀAx > 0 for all non-zero x.

xᵀAx is usually referred to as the quadratic form!

The proofs can be found in Problems 18 and 19.

(Problem 18, Section 6.5) If Ax = λx then xᵀAx = ______. If xᵀAx > 0, prove that λ > 0.

Page 85: Eigenvalues and Eigenvectors

6.5 Positive definite matrices 6-84

(Problem 19, Section 6.5) Reverse Problem 18 to show that if all λ > 0 then xᵀAx > 0. We must do this for every nonzero x, not just the eigenvectors. So write x as a combination of the eigenvectors and explain why all "cross terms" are xiᵀxj = 0. Then xᵀAx is

(c1x1 + · · · + cnxn)ᵀ(c1λ1x1 + · · · + cnλnxn) = c1²λ1x1ᵀx1 + · · · + cn²λnxnᵀxn > 0.

Proof (Problems 18 and 19):

– (only if part: Problem 19) Avi = λivi implies viᵀAvi = λiviᵀvi > 0 for all eigenvalues {λi}_{i=1}^n and eigenvectors {vi}_{i=1}^n. The proof is completed by noting that with x = Σ_{i=1}^n civi and {vi}_{i=1}^n orthogonal,

xᵀ(Ax) = (Σ_{i=1}^n civiᵀ)(Σ_{j=1}^n cjλjvj) = Σ_{i=1}^n ci²λi‖vi‖² > 0.

– (if part: Problem 18) Taking x = vi, we obtain viᵀ(Avi) = viᵀ(λivi) = λi‖vi‖² > 0, which implies λi > 0. □

Page 86: Eigenvalues and Eigenvectors

6.5 Positive definite matrices 6-85

• Based on the above proposition, we can conclude, just as for two positive scalars ("if a and b are both positive, so is a + b"), that:

Proposition: If A and B are both positive definite, so is A + B.

• The next property provides an easy way to construct positive definite matrices.

Proposition: If A_{n×n} = RᵀR where R_{m×n} has linearly independent columns, then A is positive definite.

Proof:

– x non-zero =⇒ Rx non-zero.
– Then, xᵀ(RᵀR)x = (Rx)ᵀ(Rx) = ‖Rx‖² > 0. □

Page 87: Eigenvalues and Eigenvectors

6.5 Positive definite matrices 6-86

Since (symmetric) A = QΛQᵀ, we can choose R = QΛ^{1/2}Qᵀ, which makes R a square matrix. This observation gives another equivalent definition of positive definite matrices.

Equivalent definition: A_{n×n} is positive definite if, and only if, there exists R_{m×n} with independent columns such that A = RᵀR.

Page 88: Eigenvalues and Eigenvectors

6.5 Positive definite matrices 6-87

• Equivalent definition: A_{n×n} is positive definite if, and only if, all pivots are positive.

Proof:

– (By the LDU decomposition,) A = LDLᵀ, where D is a diagonal matrix with the pivots on its diagonal, and L is a lower triangular matrix with 1's on its diagonal. Then,

xᵀAx = xᵀ(LDLᵀ)x = (Lᵀx)ᵀD(Lᵀx) = yᵀDy where y = Lᵀx = [l1ᵀx; . . . ; lnᵀx]
= d1y1² + · · · + dnyn² = d1(l1ᵀx)² + · · · + dn(lnᵀx)².

– So, if all pivots are positive, xᵀAx > 0 for all non-zero x. Conversely, if xᵀAx > 0 for all non-zero x, which in turn implies yᵀDy > 0 for all non-zero y (as L is invertible), then pivot di must be positive by the choice yj = 0 for all j except j = i. □
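In practice, positive definiteness is usually tested with a Cholesky factorization rather than eigenvalues; chol succeeds exactly when the symmetric A is positive definite (a sketch):

    A = [2 -1; -1 2];            % pivots 2 and 3/2: positive definite
    [~, p] = chol(A);
    disp(p)                      % 0 means chol succeeded: A is positive definite
    [~, p] = chol([1 2; 2 1]);   % eigenvalues 3 and -1: indefinite
    disp(p)                      % nonzero: not positive definite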

Page 89: Eigenvalues and Eigenvectors

6.5 Positive definite matrices 6-88

An extension of the previous proposition:

Suppose A = BCBᵀ with B invertible and C diagonal. Then, A is positive definite if, and only if, the diagonal entries of C are all positive! See Problem 35.

(Problem 35, Section 6.5) Suppose C is positive definite (so yᵀCy > 0 whenever y ≠ 0) and A has independent columns (so Ax ≠ 0 whenever x ≠ 0). Apply the energy test to xᵀAᵀCAx to show that AᵀCA is positive definite: the crucial matrix in engineering.

Page 90: Eigenvalues and Eigenvectors

6.5 Positive definite matrices 6-89

Equivalent definition: A_{n×n} is positive definite if, and only if, the n upper left determinants (i.e., the n leading principal minors) are all positive.

Definition (Minors, principal minors and leading principal minors):

– A minor of a matrix A is the determinant of some smaller square matrix, obtained by removing one or more of its rows and columns.

– A first-order (respectively, second-order, etc.) minor is a minor obtained by removing (n−1) (respectively, (n−2), etc.) rows and columns.

– A principal minor of a matrix is a minor obtained by removing the same rows and columns.

– A leading principal minor of a matrix is a minor obtained by removing the last few rows and columns.

Page 91: Eigenvalues and Eigenvectors

6.5 Positive definite matrices 6-90

The example below shows the three upper left determinants (the three leading principal minors) det(A_{1×1}), det(A_{2×2}) and det(A_{3×3}) of

[a_{1,1} a_{1,2} a_{1,3}; a_{2,1} a_{2,2} a_{2,3}; a_{3,1} a_{3,2} a_{3,3}].

Proof of the equivalent definition: This can be proved based on slide 5-18: by the LU decomposition,

A = [1 0 0 · · · 0; l_{2,1} 1 0 · · · 0; l_{3,1} l_{3,2} 1 · · · 0; . . . ; l_{n,1} l_{n,2} l_{n,3} · · · 1][d1 u_{1,2} u_{1,3} · · · u_{1,n}; 0 d2 u_{2,3} · · · u_{2,n}; 0 0 d3 · · · u_{3,n}; . . . ; 0 0 0 · · · dn]

=⇒ dk = det(A_{k×k}) / det(A_{(k−1)×(k−1)}). □

Page 92: Eigenvalues and Eigenvectors

6.5 Positive definite matrices 6-91

Equivalent definition: A_{n×n} is positive definite if, and only if, all (2ⁿ − 2) principal minors are positive.

• Based on the above equivalent definitions, a positive definite matrix cannot have a zero (or even worse, a negative value) on its main diagonal. See Problem 16.

(Problem 16, Section 6.5) A positive definite matrix cannot have a zero (or even worse, a negative number) on its diagonal. Show that this matrix fails to have xᵀAx > 0:

[x1 x2 x3][4 1 1; 1 0 2; 1 2 5][x1; x2; x3] is not positive when (x1, x2, x3) = ( , , ).

With the above discussion of positive definite matrices, we can proceed to define similar notions: negative definite, positive semidefinite, and negative semidefinite matrices.

Page 93: Eigenvalues and Eigenvectors

6.5 Positive semidefinite (or nonnegative definite) 6-92

Definition (Positive semidefinite): A symmetric matrix A is positive

semidefinite if its eigenvalues are all nonnegative.

Equivalent Definition: A is positive semidefinite if, and only if, xTAx ≥ 0 for all non-zero x.

Equivalent Definition: An×n is positive semidefinite if, and only if, there exists

Rm×n (perhaps with dependent columns) such that A = RTR.

Equivalent Definition: An×n is positive semidefinite if, and only if, all pivots

are nonnegative.

Equivalent Definition: An×n is positive semidefinite if, and only if, all principal minors are non-negative.

Example. The non-positive-semidefinite matrix

A = [ 1 0  0 ]
    [ 0 0  0 ]
    [ 0 0 −1 ]

has (2^3 − 1) = 7 principal minors:

det[1 0; 0 0] = 0, det[1 0; 0 −1] = −1, det[0 0; 0 −1] = 0, det[1] = 1, det[0] = 0, det[−1] = −1, det(A) = 0.

Since some of them are negative, A is not positive semidefinite.

Note: It is not sufficient to define positive semidefiniteness based on the non-negativity of the leading principal minors alone. Check the above example: its leading principal minors are 1, 0 and 0, all non-negative, yet A is not positive semidefinite.
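A numerical check of this warning (Python with NumPy):

import numpy as np

A = np.diag([1.0, 0.0, -1.0])
print([np.linalg.det(A[:k, :k]) for k in (1, 2, 3)])   # [1.0, 0.0, 0.0]: all non-negative
print(np.linalg.eigvalsh(A))             # [-1, 0, 1]: yet A has a negative eigenvalue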

Page 94: Eigenvalues and Eigenvectors

6.5 Negative definite, negative semidefinite, indefinite 6-93

We can similarly define negative definite.

Definition (Negative definite): A symmetric matrix A is negative def-

inite if its eigenvalues are all negative.

Equivalent Definition: A is negative definite if, and only if, xTAx < 0 for all non-zero x.

Equivalent Definition: An×n is negative definite if, and only if, there exists

Rm×n with independent columns such that A = −RTR.

Equivalent Definition: An×n is negative definite if, and only if, all pivots are

negative.

Equivalent Definition: An×n is negative definite if, and only if, all odd-order leading principal minors are negative and all even-order leading principal minors are positive.

Equivalent Definition: An×n is negative definite if, and only if, all odd-order principal minors are negative and all even-order principal minors are positive.

Page 95: Eigenvalues and Eigenvectors

6.5 Negative definite, negative semidefinite, indefinite 6-94

We can also similarly define negative semidefinite.

Definition (Negative semidefinite): A symmetric matrix A is negative semidefinite if its eigenvalues are all non-positive.

Equivalent Definition: A is negative semidefinite if, and only if, xTAx ≤ 0 for all non-zero x.

Equivalent Definition: An×n is negative semidefinite if, and only if, there

exists Rm×n (possibly with dependent columns) such that A = −RTR.

Equivalent Definition: An×n is negative semidefinite if, and only if, all pivots

are non-positive.

Equivalent Definition: An×n is negative semidefinite if, and only if, all odd-order principal minors are non-positive and all even-order principal minors are non-negative.

Note: It is not sufficient to define negative semidefiniteness based on the non-positivity of the leading principal minors alone.

Finally, if some of the eigenvalues of a symmetric matrix are positive and some are negative, the matrix is referred to as indefinite.

Page 96: Eigenvalues and Eigenvectors

6.5 Quadratic form 6-95

• For a positive definite matrix A2×2,

xTAx = xT(QΛQT)x = (QTx)TΛ(QTx)
     = yTΛy where y = QTx = [q1Tx; q2Tx]
     = λ1(q1Tx)² + λ2(q2Tx)².

So, xTAx = c gives an ellipse if c > 0.

Example. A = [5 4; 4 5].

– λ1 = 9 and λ2 = 1, and q1 = (1/√2)[1; 1] and q2 = (1/√2)[1; −1].

– xTAx = λ1(q1Tx)² + λ2(q2Tx)² = 9((x1 + x2)/√2)² + ((x1 − x2)/√2)².

{ The coordinate y1 = q1Tx measures x along q1; its level set q1Tx = 0 is the line perpendicular to q1.
{ Likewise, y2 = q2Tx measures x along q2, with level set perpendicular to q2.

Tip: A = QΛQT is called the principal axis theorem (cf. slide 6-66) because

xTAx = yTΛy = c is an ellipse with axes along the eigenvectors.
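A numerical companion to the example (Python with NumPy; the test point is arbitrary):

import numpy as np

A = np.array([[5.0, 4.0],
              [4.0, 5.0]])
lam, Q = np.linalg.eigh(A)               # ascending order: lam = [1, 9]
x = np.array([0.7, -0.2])
y = Q.T @ x                              # coordinates of x along the eigenvectors
print(x @ A @ x, np.sum(lam * y**2))     # the two values agree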

Page 97: Eigenvalues and Eigenvectors

6.5 Problem discussions 6-96

(Problem 13, Section 6.5) Find a matrix with a > 0 and c > 0 and a + c > 2b

that has a negative eigenvalue.

Missing point in Problem 13: The matrix to be determined is of the shape [a b; b c].
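One concrete choice (a sketch in Python with NumPy; the numbers are just an example satisfying a > 0, c > 0, a + c > 2b while ac − b² < 0):

import numpy as np

A = np.array([[0.1,  2.0],
              [2.0, 10.0]])              # a = 0.1, c = 10, b = 2: a + c = 10.1 > 4 = 2b
print(np.linalg.det(A))                  # ac - b^2 = -3 < 0
print(np.linalg.eigvalsh(A))             # hence one eigenvalue is negative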

Page 98: Eigenvalues and Eigenvectors

6.6 Similar matrices 6-97

Definition (Similarity): Two matrices A and B are similar if B = M−1AM

for some invertible M .

Theorem: A and M−1AM have the same eigenvalues.

Proof: For eigenvalue λ and eigenvector v of A, define v′ = M−1v (hence, v =

Mv′). We then derive

(M−1AM)v′ = M−1A(Mv′) = M−1Av = λM−1v = λv′.

So, λ is also an eigenvalue of M−1AM (associated with eigenvector v′ = M−1v).

Notes:

• The LU decomposition of a symmetric matrix A gives A = LDLT, but A and D (the pivot matrix) apparently may have different eigenvalues. Why? Because (LT)−1 ≠ L in general: similarity is defined based on M−1, not MT.

• The converse to the above theorem is false! In other words, we cannot say that "two matrices with the same eigenvalues are similar."

Example. A = [0 1; 0 0] and B = [0 0; 0 0] have the same eigenvalues 0, 0, but they are not similar: M−1BM = 0 ≠ A for every invertible M.
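The theorem is easy to observe numerically (Python with NumPy; A and M are arbitrary random choices, M generically invertible):

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
M = rng.standard_normal((4, 4))
B = np.linalg.inv(M) @ A @ M             # B is similar to A
print(np.sort_complex(np.linalg.eigvals(A)))
print(np.sort_complex(np.linalg.eigvals(B)))   # the same spectrum, up to round-off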

Page 99: Eigenvalues and Eigenvectors

6.6 Jordan form 6-98

• Why introduce matrix similarity?

Answer: We can then extend the diagonalization of a matrix with fewer than n eigenvectors to the Jordan form.

• For an un-diagonalizable matrix A, we can find invertible M such that

A = MJM−1,

where

J = [ J1 0  ··· 0  ]
    [ 0  J2 ··· 0  ]
    [ ⋮  ⋮  ⋱   ⋮  ]
    [ 0  0  ··· Js ]

and s is the number of Jordan blocks (one block for each independent eigenvector); the sizes of the blocks belonging to eigenvalue λi sum to the multiplicity of λi, and each Ji is of the form

Ji = [ λi 1  0  ··· 0  ]
     [ 0  λi 1  ··· 0  ]
     [ 0  0  λi ··· 0  ]
     [ ⋮  ⋮  ⋮  ⋱   ⋮  ]
     [ 0  0  0  ··· λi ]

Page 100: Eigenvalues and Eigenvectors

6.6 Jordan form 6-99

• The idea behind the Jordan form is that A is similar to J .

• Based on this, we can now compute

A100 = MJ100M−1 = M [ J1^100 0      ··· 0      ]
                    [ 0      J2^100 ··· 0      ]
                    [ ⋮      ⋮      ⋱   ⋮      ]
                    [ 0      0      ··· Js^100 ] M−1.

• How to find Mi corresponding to Ji?

Answer: By AM = MJ with M = [M1 M2 · · · Ms], we know

AMi = MiJi for 1 ≤ i ≤ s.

Specifically, assume that the multiplicity of λi is two. Then,

A [vi(1) vi(2)] = [vi(1) vi(2)] [ λi 1  ]
                                [ 0  λi ]

which is equivalent to

{ Avi(1) = λi vi(1)                { (A − λiI) vi(1) = 0
{ Avi(2) = vi(1) + λi vi(2)   =⇒   { (A − λiI) vi(2) = vi(1)
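A minimal sketch of building such a chain (Python with NumPy); the matrix below is an arbitrary defective example whose only eigenvalue is 3, with multiplicity 2 but a single independent eigenvector:

import numpy as np

A = np.array([[ 5.0, 4.0],
              [-1.0, 1.0]])
N = A - 3.0 * np.eye(2)                  # (A - lambda I), rank 1

v1 = np.array([2.0, -1.0])               # eigenvector: N v1 = 0
v2, *_ = np.linalg.lstsq(N, v1, rcond=None)    # generalized eigenvector: N v2 = v1
M = np.column_stack([v1, v2])
print(np.linalg.inv(M) @ A @ M)          # recovers the Jordan block [[3, 1], [0, 3]]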

Page 101: Eigenvalues and Eigenvectors

6.6 Problem discussions 6-100

(Problem 14, Section 6.6) Prove that AT is always similar to A (we know the λ’s

are the same):

1. For one Jordan block Ji: Find Mi so that Mi−1JiMi = JiT.

2. For any J with blocks Ji: Build M0 from blocks so that M0−1JM0 = JT.

3. For any A = MJM−1: Show that AT is similar to JT and so to J and to A.

Page 102: Eigenvalues and Eigenvectors

6.6 Problem discussions 6-101

Thinking over Problem 14: AT and A are always similar.

Answer:

• It can be easily checked that

[ u1,1 u1,2 ··· u1,n ]       [ un,n ··· un,2 un,1 ]
[ u2,1 u2,2 ··· u2,n ] = V−1 [  ⋮   ⋱    ⋮    ⋮   ] V
[  ⋮    ⋮   ⋱    ⋮   ]       [  ⋮   ··· u2,2 u2,1 ]
[ un,1 un,2 ··· un,n ]       [ u1,n ··· u1,2 u1,1 ]

where V represents the matrix with zero entries except vi,n+1−i = 1 for 1 ≤ i ≤ n, and the middle matrix is the original one rotated by 180° (flipped upside-down and left-right). We note that V−1 = VT = V. For a Jordan block Ji, this flip about the anti-diagonal is exactly the transpose.

So, we can use the proper size of Mi = V to obtain

JiT = Mi−1JiMi.

Page 103: Eigenvalues and Eigenvectors

6.6 Problem discussions 6-102

Define

M0 = [ M1 0  ··· 0  ]
     [ 0  M2 ··· 0  ]
     [ ⋮  ⋮  ⋱   ⋮  ]
     [ 0  0  ··· Ms ]

(block diagonal, with each Mi the exchange matrix of the size of Ji). We then have

JT = M0JM0−1,

where J = diag(J1, J2, . . . , Js) is the Jordan form described on slide 6-98, with each Ji of the usual bidiagonal form with λi on the diagonal and 1's above it.

Page 104: Eigenvalues and Eigenvectors

6.6 Problem discussions 6-103

Finally, we know

– A is similar to J;

– AT is similar to JT;

– JT is similar to J.

So, AT is similar to A. □

(Problem 19, Section 6.6) If A is 6 by 4 and B is 4 by 6, AB and BA have different

sizes. But with blocks,

M−1FM = [ I −A ] [ AB 0 ] [ I A ]   [ 0 0  ]
        [ 0  I ] [ B  0 ] [ 0 I ] = [ B BA ] = G.

(a) What sizes are the four blocks (the same four sizes in each matrix)?

(b) This equation is M−1FM = G, so F and G have the same eigenvalues. F

has the 6 eigenvalues of AB plus 4 zeros; G has the 4 eigenvalues of BA plus

6 zeros. AB has the same eigenvalues as BA plus zeros.

Page 105: Eigenvalues and Eigenvectors

6.6 Problem discussions 6-104

Thinking over Problem 19: Am×nBn×m and Bn×mAm×n have the same eigenvalues, except that the larger product has (m − n) additional zeros (assuming m ≥ n).

Solution: The example shows the usefulness of similarity.

• [ Im×m −Am×n ] [ Am×nBn×m 0m×n ] [ Im×m Am×n ]   [ 0m×m 0m×n     ]
  [ 0n×m  In×n ] [ Bn×m     0n×n ] [ 0n×m In×n ] = [ Bn×m Bn×mAm×n ]

• Hence, [ Am×nBn×m 0m×n; Bn×m 0n×n ] and [ 0m×m 0m×n; Bn×m Bn×mAm×n ] are similar and have the same eigenvalues.

• From Problem 26 in Section 6.1 (see slide 6-22), the desired claim is proved. □
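A random instance of the claim (Python with NumPy; here m = 4 and n = 2):

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 2))
B = rng.standard_normal((2, 4))
print(np.sort_complex(np.linalg.eigvals(A @ B)))   # eigenvalues of BA plus two zeros
print(np.sort_complex(np.linalg.eigvals(B @ A)))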

Page 106: Eigenvalues and Eigenvectors

6.6 Problem discussions 6-105

(Problem 21, Section 6.6) If J is the 5 by 5 Jordan block with λ = 0, find J2 and

count its eigenvectors (are these the eigenvectors?) and find its Jordan form (there

will be two blocks).

Problem 21: Find the Jordan form of A = J2, where

J = [ 0 1 0 0 0 ]
    [ 0 0 1 0 0 ]
    [ 0 0 0 1 0 ]
    [ 0 0 0 0 1 ]
    [ 0 0 0 0 0 ].

Solution.

• A = J2 = [ 0 0 1 0 0 ]
           [ 0 0 0 1 0 ]
           [ 0 0 0 0 1 ]
           [ 0 0 0 0 0 ]
           [ 0 0 0 0 0 ],  and the eigenvalues of A are 0, 0, 0, 0, 0.

Page 107: Eigenvalues and Eigenvectors

6.6 Problem discussions 6-106

(Below, [a; b; c; d; e] denotes a column vector.)

• Av1 = A [v1,1; v2,1; v3,1; v4,1; v5,1] = [v3,1; v4,1; v5,1; 0; 0] = 0 =⇒ v3,1 = v4,1 = v5,1 = 0, i.e., v1 = [v1,1; v2,1; 0; 0; 0].

• Av2 = A [v1,2; v2,2; v3,2; v4,2; v5,2] = [v3,2; v4,2; v5,2; 0; 0] = v1 =⇒ v3,2 = v1,1, v4,2 = v2,1, v5,2 = 0, i.e., v2 = [v1,2; v2,2; v1,1; v2,1; 0].

• Av3 = A [v1,3; v2,3; v3,3; v4,3; v5,3] = [v3,3; v4,3; v5,3; 0; 0] = v2 =⇒ v3,3 = v1,2, v4,3 = v2,2, v5,3 = v1,1 and v2,1 = 0, i.e., v3 = [v1,3; v2,3; v1,2; v2,2; v1,1].

Page 108: Eigenvalues and Eigenvectors

6.6 Problem discussions 6-107

• To summarize,

v1 = [v1,1; v2,1; 0; 0; 0] =⇒ v2 = [v1,2; v2,2; v1,1; v2,1; 0] =⇒ if v2,1 = 0, then v3 = [v1,3; v2,3; v1,2; v2,2; v1,1].

Starting from the two independent eigenvectors,

v1(1) = [1; 0; 0; 0; 0] =⇒ v2(1) = [v1,2(1); v2,2(1); 1; 0; 0] =⇒ v3(1) = [v1,3(1); v2,3(1); v1,2(1); v2,2(1); 1]

v1(2) = [0; 1; 0; 0; 0] =⇒ v2(2) = [v1,2(2); v2,2(2); 0; 1; 0]  (here v2,1 = 1 ≠ 0, so the second chain stops at v2(2)).

Page 109: Eigenvalues and Eigenvectors

6.6 Problem discussions 6-108

Since we wish to choose each of v2(1) and v2(2) to be orthogonal to both v1(1) and v1(2),

v1,2(1) = v2,2(1) = v1,2(2) = v2,2(2) = 0,

i.e.,

v1(1) = [1; 0; 0; 0; 0] =⇒ v2(1) = [0; 0; 1; 0; 0] =⇒ v3(1) = [v1,3(1); v2,3(1); 0; 0; 1]

v1(2) = [0; 1; 0; 0; 0] =⇒ v2(2) = [0; 0; 0; 1; 0]

Since we wish to choose v3(1) to be orthogonal to all of v1(1), v1(2), v2(1) and v2(2),

v1,3(1) = v2,3(1) = 0, i.e., v3(1) = [0; 0; 0; 0; 1].

Page 110: Eigenvalues and Eigenvectors

6.6 Problem discussions 6-109

As a result,

A [v1(1) v2(1) v3(1) v1(2) v2(2)] = [v1(1) v2(1) v3(1) v1(2) v2(2)] [ 0 1 0 0 0 ]
                                                                    [ 0 0 1 0 0 ]
                                                                    [ 0 0 0 0 0 ]
                                                                    [ 0 0 0 0 1 ]
                                                                    [ 0 0 0 0 0 ]

• Are all of v1(1), v2(1), v3(1), v1(2) and v2(2) eigenvectors of A (satisfying Av = λv)?

Hint: A does not have 5 independent eigenvectors; more specifically, A has only 2 independent eigenvectors. Which two? Think about it.

• Hence, the problem statement may not be accurate, as I mark with "(are these the eigenvectors?)".
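A symbolic cross-check of this problem (Python with SymPy, assuming it is available; Matrix.jordan_form returns (P, J) with A = PJP−1):

from sympy import zeros

J = zeros(5, 5)
for i in range(4):
    J[i, i + 1] = 1                      # the 5x5 Jordan block with lambda = 0
A = J * J

print(A.rows - A.rank())                 # nullity = 2, so A has exactly 2 eigenvectors
P, Jform = A.jordan_form()
print(Jform)                             # two Jordan blocks, of sizes 3 and 2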

Page 111: Eigenvalues and Eigenvectors

6.7 Singular value decomposition (SVD) 6-110

• Now we can represent every square matrix in the Jordan form: A = MJM−1.

• What if the matrix Am×n is not a square one?

– Problem: Av = λv is not possible! Specifically, Av and λv would have two different dimensionalities:

Am×n vn×1 = λ vn×1 is infeasible if n ≠ m.

– So, we can only have

Am×n vn×1 = σ um×1.

Page 112: Eigenvalues and Eigenvectors

6.7 Singular value decomposition (SVD) 6-111

• If we can find enough orthogonal u's and v's such that

Am×n [v1 v2 · · · vn]n×n = [u1 u2 · · · um]m×m Σm×n, where Σm×n = [ σ1 ··· 0   0 ··· 0 ]
                                                                  [ ⋮  ⋱   ⋮   ⋮     ⋮ ]
                                                                  [ 0  ··· σr  0 ··· 0 ]
                                                                  [ 0  ··· 0   0 ··· 0 ]
                                                                  [ ⋮      ⋮   ⋮  ⋱  ⋮ ]
                                                                  [ 0  ··· 0   0 ··· 0 ]

and r is the rank of A, then we can perform the so-called singular value decomposition (SVD)

A = UΣV−1.

Note that, a priori, it is not obvious that enough orthogonal u's and v's exist (recall that a square matrix with repeated eigenvalues may lack enough independent eigenvectors); we will confirm below that for the SVD they can always be found.

Page 113: Eigenvalues and Eigenvectors

6.7 Singular value decomposition (SVD) 6-112

• If V is chosen to be an orthogonal matrix satisfying V−1 = VT, then we have the so-called reduced SVD

A = UΣVT = σ1u1v1T + · · · + σrurvrT

         = [u1 · · · ur]m×r [ σ1 ··· 0  ]     [ v1T ]
                            [ ⋮  ⋱   ⋮  ]     [  ⋮  ]
                            [ 0  ··· σr ]r×r  [ vrT ]r×n

• Usually, we prefer to choose an orthogonal V (as well as orthogonal U).

• In the sequel, we will assume the found U and V are orthogonal matrices in

the first place; later, we will confirm that orthogonal U and V can always be

found.

Page 114: Eigenvalues and Eigenvectors

6.7 How to determine U , V and {σi}? 6-113

• A = UΣVT =⇒ ATA = (UΣVT)T(UΣVT) = VΣTUTUΣVT = VΣ2VT.

So, V is the (orthogonal) matrix of n eigenvectors of the symmetric ATA.

• A = UΣVT =⇒ AAT = (UΣVT)(UΣVT)T = UΣVTVΣTUT = UΣ2UT.

So, U is the (orthogonal) matrix of m eigenvectors of the symmetric AAT.

• Remember that ATA and AAT have the same eigenvalues except for additional

(m− n) zeros.

Section 6.6, Problem 19 (see slide 6-104): Am×nBn×m and Bn×mAm×n have

the same eigenvalues except for additional (m− n) zeros.

In fact, there are only r non-zero eigenvalues for ATA and AAT, and they satisfy λi = σi², where 1 ≤ i ≤ r.
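A quick numerical confirmation (Python with NumPy; A is an arbitrary 4×3 random matrix):

import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3))
sigma = np.linalg.svd(A, compute_uv=False)
print(np.sort(sigma**2))                       # the r = 3 non-zero eigenvalues
print(np.sort(np.linalg.eigvalsh(A.T @ A)))    # the same three values
print(np.sort(np.linalg.eigvalsh(A @ A.T)))    # the same values plus one extra zero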

Page 115: Eigenvalues and Eigenvectors

6.7 How to determine U , V and {σi}? 6-114

Example (Problem 21 in Section 6.6): Find the SVD of A = J2, where

J = [ 0 1 0 0 0 ]
    [ 0 0 1 0 0 ]
    [ 0 0 0 1 0 ]
    [ 0 0 0 0 1 ]
    [ 0 0 0 0 0 ].

Solution.

• ATA = [ 0 0 0 0 0 ]             [ 1 0 0 0 0 ]
        [ 0 0 0 0 0 ]             [ 0 1 0 0 0 ]
        [ 0 0 1 0 0 ]  and AAT =  [ 0 0 1 0 0 ]
        [ 0 0 0 1 0 ]             [ 0 0 0 0 0 ]
        [ 0 0 0 0 1 ]             [ 0 0 0 0 0 ].

• So

V = [ 0 0 0 1 0 ]          [ 1 0 0 0 0 ]
    [ 0 0 0 0 1 ]          [ 0 1 0 0 0 ]
    [ 1 0 0 0 0 ]  and U = [ 0 0 1 0 0 ]
    [ 0 1 0 0 0 ]          [ 0 0 0 1 0 ]
    [ 0 0 1 0 0 ]          [ 0 0 0 0 1 ]

for Λ = Σ2 = diag(1, 1, 1, 0, 0).

Page 116: Eigenvalues and Eigenvectors

6.7 How to determine U , V and {σi}? 6-115

• Hence,

[ 0 0 1 0 0 ]   [ 1 0 0 0 0 ] [ 1 0 0 0 0 ] [ 0 0 1 0 0 ]
[ 0 0 0 1 0 ]   [ 0 1 0 0 0 ] [ 0 1 0 0 0 ] [ 0 0 0 1 0 ]
[ 0 0 0 0 1 ] = [ 0 0 1 0 0 ] [ 0 0 1 0 0 ] [ 0 0 0 0 1 ]
[ 0 0 0 0 0 ]   [ 0 0 0 1 0 ] [ 0 0 0 0 0 ] [ 1 0 0 0 0 ]
[ 0 0 0 0 0 ]   [ 0 0 0 0 1 ] [ 0 0 0 0 0 ] [ 0 1 0 0 0 ]
      A               U             Σ             VT

We may compare this with the Jordan form:

[ 0 0 1 0 0 ]   [ 1 0 0 0 0 ] [ 0 1 0 0 0 ] [ 1 0 0 0 0 ]
[ 0 0 0 1 0 ]   [ 0 0 0 1 0 ] [ 0 0 1 0 0 ] [ 0 0 1 0 0 ]
[ 0 0 0 0 1 ] = [ 0 1 0 0 0 ] [ 0 0 0 0 0 ] [ 0 0 0 0 1 ]
[ 0 0 0 0 0 ]   [ 0 0 0 0 1 ] [ 0 0 0 0 1 ] [ 0 1 0 0 0 ]
[ 0 0 0 0 0 ]   [ 0 0 1 0 0 ] [ 0 0 0 0 0 ] [ 0 0 0 1 0 ]
      A               M             J            M−1

Page 117: Eigenvalues and Eigenvectors

6.7 How to determine U , V and {σi}? 6-116

Remarks on the previous example

• In the above example, we only know Λ = Σ2 = diag(1, 1, 1, 0, 0).

• Hence, all we can conclude so far is Σ = diag(±1, ±1, ±1, 0, 0).

• However, by Avi = σiui = (−σi)(−ui), we can always choose positive σi by

adjusting the sign of ui.

Page 118: Eigenvalues and Eigenvectors

6.7 SVD 6-117

Important notes:

• In terminology, each σi, where 1 ≤ i ≤ r, is called a singular value. Hence the name singular value decomposition.

• The singular values are always non-zero (even though the eigenvalues of A can be zero).

Example. A = [ 0 0 1 0 0 ]
             [ 0 0 0 1 0 ]
             [ 0 0 0 0 1 ]  has all eigenvalues equal to zero, but Σ = diag(1, 1, 1, 0, 0).
             [ 0 0 0 0 0 ]
             [ 0 0 0 0 0 ]

• It is good to know that:

– The first r columns of V lie in the row space R(A) and form a basis of R(A).
– The last (n − r) columns of V lie in the null space N(A) and form a basis of N(A).
– The first r columns of U lie in the column space C(A) and form a basis of C(A).
– The last (m − r) columns of U lie in the left null space N(AT) and form a basis of N(AT).

How useful the above facts are can be seen from the sketch below and from the next example.
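A sketch of reading the four subspaces off the SVD (Python with NumPy; the 4×3 rank-2 matrix is built as a sum of two random outer products):

import numpy as np

rng = np.random.default_rng(4)
A = (rng.standard_normal((4, 1)) @ rng.standard_normal((1, 3))
     + rng.standard_normal((4, 1)) @ rng.standard_normal((1, 3)))   # rank 2
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))               # numerical rank

print(r)                                 # 2
print(np.linalg.norm(A @ Vt[r:].T))      # ~0: the last rows of V^T span N(A)
print(np.linalg.norm(A.T @ U[:, r:]))    # ~0: the last columns of U span N(A^T)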

Page 119: Eigenvalues and Eigenvectors

6.7 SVD 6-118

Example. Find the SVD of A4×3 = x4×1 yT1×3.

Solution.

• A is a rank-1 matrix. So, r = 1 and

Σ = [ σ1 0 0 ]
    [ 0  0 0 ]
    [ 0  0 0 ]
    [ 0  0 0 ],

where σ1 ≠ 0. (Actually, σ1 = ‖x‖‖y‖.)

• The basis of the row space is v1 = y/‖y‖; pick perpendicular v2 and v3 that span the null space.

• The basis of the column space is u1 = x/‖x‖; pick perpendicular u2, u3, u4 that span the left null space.

• The SVD is then

A4×3 = [ x/‖x‖  u2 u3 u4 ]4×4 Σ4×3 [ yT/‖y‖ ]
                                   [  v2T   ]
                                   [  v3T   ]3×3

where the first column x/‖x‖ comes from the column space, u2, u3, u4 from the left null space, the first row yT/‖y‖ from the row space, and v2T, v3T from the null space.
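A numerical version of this rank-1 example (Python with NumPy; x and y are arbitrary):

import numpy as np

x = np.array([1.0, 2.0, 2.0, 0.0])       # ||x|| = 3
y = np.array([3.0, 0.0, 4.0])            # ||y|| = 5
A = np.outer(x, y)                       # 4x3, rank 1

U, s, Vt = np.linalg.svd(A)
print(s)                                 # [15, 0, 0]: sigma_1 = ||x|| * ||y||
print(U[:, 0] * np.linalg.norm(x))       # +/- x: u1 = x/||x|| up to sign
print(Vt[0] * np.linalg.norm(y))         # +/- y: v1 = y/||y|| up to sign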