Chapter 07



Transcript of Chapter 07


SSM: Linear Algebra Section 7.1

Chapter 7

7.1

1. If ~v is an eigenvector of A, then A~v = λ~v. Hence A^3~v = A^2(A~v) = A^2(λ~v) = A(Aλ~v) = A(λA~v) = A(λ^2~v) = λ^2A~v = λ^3~v, so ~v is an eigenvector of A^3 with eigenvalue λ^3.

3. We know A~v = λ~v, so (A + 2In)~v = A~v + 2In~v = λ~v + 2~v = (λ + 2)~v, hence ~v is an eigenvector of A + 2In with eigenvalue λ + 2.

5. Assume A~v = λ~v and B~v = β~v for some eigenvalues λ, β. Then (A + B)~v = A~v + B~v = λ~v + β~v = (λ + β)~v, so ~v is an eigenvector of A + B with eigenvalue λ + β.

7. We know A~v = λ~v, so (A − λIn)~v = A~v − λIn~v = λ~v − λ~v = ~0. Thus the nonzero vector ~v is in the kernel of A − λIn, so ker(A − λIn) ≠ {~0} and A − λIn is not invertible.

9. We want [a b; c d][1; 0] = λ[1; 0] for any λ. Hence [a; c] = [λ; 0], i.e., the desired matrices must have the form [λ b; 0 d]: they must be upper triangular.

11. We want [a b; c d][2; 3] = [−2; −3]. So 2a + 3b = −2 and 2c + 3d = −3. Thus b = (−2 − 2a)/3 and d = (−3 − 2c)/3, so all matrices of the form [a (−2−2a)/3; c (−3−2c)/3] will fit.

13. Solving [−6 6; −15 13][v1; v2] = 4[v1; v2], we get [v1; v2] = [(3/5)t; t] (with t ≠ 0).

15. Any vector on L is unaffected by the reflection, so a nonzero vector on L is an eigenvector with eigenvalue 1. Any vector on L⊥ is flipped about L, so a nonzero vector on L⊥ is an eigenvector with eigenvalue −1. Picking a nonzero vector from L and one from L⊥, we obtain a basis consisting of eigenvectors.

17. No (real) eigenvalues

19. Any nonzero vector in L is an eigenvector with eigenvalue 1, and any nonzero vector in the plane L⊥ is an eigenvector with eigenvalue 0. Form a basis consisting of eigenvectors by picking any nonzero vector in L and any two nonparallel vectors in L⊥.

21. Any nonzero vector in R^3 is an eigenvector with eigenvalue 5. Any basis for R^3 consists of eigenvectors.


23. a. Since S = [~v1 · · · ~vn], we have ~vi = S~ei, so S^−1~vi = S^−1(S~ei) = ~ei.

b. ith column of S^−1AS
= S^−1AS~ei
= S^−1A~vi (by definition of S)
= S^−1λi~vi (since ~vi is an eigenvector)
= λiS^−1~vi
= λi~ei (by part a),
hence S^−1AS is the diagonal matrix with diagonal entries λ1, λ2, . . . , λn.

25. See Figure 7.1.

Figure 7.1: for Problem 7.1.25.

27. See Figure 7.2.

29. See Figure 7.3.

31. See Figure 7.4.

Figure 7.2: for Problem 7.1.27.

Figure 7.3: for Problem 7.1.29.

Figure 7.4: for Problem 7.1.31.

33. We are given that ~x(t) = 2^t[1; 1] + 6^t[−1; 1], hence we know that the eigenvalues are 2 and 6 with corresponding eigenvectors [1; 1] and [−1; 1], respectively (see Fact 7.1.3). So we want a matrix A such that A[1 −1; 1 1] = [2 −6; 2 6]. Multiplying on the right by [1 −1; 1 1]^−1, we get A = [4 −2; −2 4].
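The matrix recovered in Problem 33 can be recomputed exactly (a sketch, using stdlib Fractions rather than any particular linear-algebra library):

```python
# Verify Problem 33: A = [2 v1, 6 v2][v1 v2]^(-1) equals [4 -2; -2 4].
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(M):
    # exact inverse of a 2x2 matrix via the adjugate formula
    a, b = M[0]; c, d = M[1]
    det = F(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

A = matmul([[2, -6], [2, 6]], inv2([[1, -1], [1, 1]]))
assert A == [[4, -2], [-2, 4]]
assert matmul(A, [[1], [1]]) == [[2], [2]]     # A v1 = 2 v1
assert matmul(A, [[-1], [1]]) == [[-6], [6]]   # A v2 = 6 v2
```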


35. Let λ be an eigenvalue of S^−1AS. Then for some nonzero vector ~v, S^−1AS~v = λ~v, i.e., AS~v = Sλ~v = λS~v, so λ is an eigenvalue of A with eigenvector S~v.

Conversely, if α is an eigenvalue of A with eigenvector ~w, then A~w = α~w for some nonzero ~w. Therefore, S^−1AS(S^−1~w) = S^−1A~w = S^−1α~w = αS^−1~w, so S^−1~w is an eigenvector of S^−1AS with eigenvalue α.

37. a. A = 5[0.6 0.8; 0.8 −0.6] is a scalar multiple of an orthogonal matrix. By Fact 7.1.2, the possible eigenvalues of the orthogonal matrix are ±1, so that the possible eigenvalues of A are ±5. In part b we see that both are indeed eigenvalues.

b. Solve A~v = ±5~v to get ~v1 = [2; 1], ~v2 = [−1; 2].

39. We want [a b; c d][0; 1] = λ[0; 1] = [0; λ]. So b = 0 and d = λ (for any λ). Thus we need matrices of the form [a 0; c d] = a[1 0; 0 0] + c[0 0; 1 0] + d[0 0; 0 1]. So [1 0; 0 0], [0 0; 1 0], [0 0; 0 1] is a basis of V, and dim(V) = 3.

41. We want [a b; c d][1; 1] = λ1[1; 1] and [a b; c d][1; 2] = λ2[1; 2]. So a + b = λ1 = c + d, a + 2b = λ2 and c + 2d = 2λ2. So (a + 2b) − (a + b) = λ2 − λ1 = b, and a = λ1 − b = 2λ1 − λ2. Also, (c + 2d) − (c + d) = 2λ2 − λ1 = d, and c = λ1 − d = 2λ1 − 2λ2. So A must be of the form [2λ1 − λ2, λ2 − λ1; 2λ1 − 2λ2, 2λ2 − λ1] = λ1[2 −1; 2 −1] + λ2[−1 1; −2 2]. So a basis of V is [2 −1; 2 −1], [−1 1; −2 2], and dim(V) = 2.

43. A = AIn = A[~e1 . . . ~en] = [λ1~e1 . . . λn~en], where the eigenvalues λ1, . . . , λn are arbitrary. Thus A can be any diagonal matrix, and dim(V) = n.

45. Consider a vector ~w that is not parallel to ~v. We want A[~v ~w] = [λ~v a~v + b~w], where λ, a and b are arbitrary constants. Thus the matrices A in V are of the form A = [λ~v a~v + b~w][~v ~w]^−1. Using Summary 4.1.6, we see that [~v ~0][~v ~w]^−1, [~0 ~v][~v ~w]^−1, [~0 ~w][~v ~w]^−1 is a basis of V, so that dim(V) = 3.


47. Suppose V is a one-dimensional A-invariant subspace of R^n, and ~v is a nonzero vector in V. Then A~v is in V, so that A~v = λ~v for some λ, and ~v is an eigenvector of A. Conversely, if ~v is any eigenvector of A, then V = span(~v) is a one-dimensional A-invariant subspace. Thus the one-dimensional A-invariant subspaces V are of the form V = span(~v), where ~v is an eigenvector of A.

49. The eigenvalues of the system are λ1 = 1.1 and λ2 = 0.9, with corresponding eigenvectors ~v1 = [100; 300] and ~v2 = [200; 100], respectively. So if ~x0 = [100; 800], we can see that ~x0 = 3~v1 − ~v2. Therefore, by Fact 7.1.3, we have ~x(t) = 3(1.1)^t[100; 300] − (0.9)^t[200; 100], i.e. c(t) = 300(1.1)^t − 200(0.9)^t and r(t) = 900(1.1)^t − 100(0.9)^t.

51. Let ~v(t) = [c(t); r(t)] and A~v(t) = ~v(t + 1), where A = [0 0.75; −1.5 2.25]. Now we will proceed as in the example worked on Pages 292 through 295.

a. ~v(0) = [100; 200], and we see that A~v(0) = [0 0.75; −1.5 2.25][100; 200] = [150; 300] = 1.5[100; 200]. So ~v(t) = A^t~v(0) = A^t[100; 200] = (1.5)^t[100; 200], i.e. c(t) = 100(1.5)^t and r(t) = 200(1.5)^t.

b. ~v(0) = [100; 100], and we see that A~v(0) = [0 0.75; −1.5 2.25][100; 100] = [75; 75] = 0.75[100; 100]. So ~v(t) = A^t~v(0) = (0.75)^t[100; 100], i.e. c(t) = 100(0.75)^t and r(t) = 100(0.75)^t.

c. ~v(0) = [500; 700]. We can write this in terms of the previous eigenvectors as ~v(0) = 3[100; 100] + 2[100; 200]. So ~v(t) = A^t~v(0) = 3A^t[100; 100] + 2A^t[100; 200] = 3(0.75)^t[100; 100] + 2(1.5)^t[100; 200]. So c(t) = 300(0.75)^t + 200(1.5)^t and r(t) = 300(0.75)^t + 400(1.5)^t.
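The closed form of part c can be spot-checked by iterating the recursion directly (a sketch, not part of the original solution):

```python
# Iterate v(t+1) = A v(t) from v(0) = (500, 700) and compare with the
# closed form c(t), r(t) from part c of Problem 51.
A = [[0.0, 0.75], [-1.5, 2.25]]
v = [500.0, 700.0]
for t in range(1, 9):
    v = [A[0][0] * v[0] + A[0][1] * v[1],
         A[1][0] * v[0] + A[1][1] * v[1]]
    c = 300 * 0.75**t + 200 * 1.5**t
    r = 300 * 0.75**t + 400 * 1.5**t
    assert abs(v[0] - c) < 1e-9 and abs(v[1] - r) < 1e-9
```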

53. Let ~v(t) = [a(t); b(t); c(t)] be the amount of gold each has after t days, with A~v(t) = ~v(t + 1). Here a(t + 1) = (1/2)b(t) + (1/2)c(t), etc., so that A = (1/2)[0 1 1; 1 0 1; 1 1 0]. Now A[1; 1; 1] = [1; 1; 1], so [1; 1; 1] has eigenvalue λ1 = 1. A[1; −1; 0] = [−1/2; 1/2; 0], so [1; −1; 0] has eigenvalue λ2 = −1/2. Also, A[1; 0; −1] = [−1/2; 0; 1/2], so [1; 0; −1] has eigenvalue λ3 = −1/2.

a. ~v(0) = [6; 1; 2] = 3[1; 1; 1] + 2[1; −1; 0] + [1; 0; −1]. So ~v(t) = A^t~v(0) = 3A^t[1; 1; 1] + 2A^t[1; −1; 0] + A^t[1; 0; −1] = 3λ1^t[1; 1; 1] + 2λ2^t[1; −1; 0] + λ3^t[1; 0; −1] = 3[1; 1; 1] + 2(−1/2)^t[1; −1; 0] + (−1/2)^t[1; 0; −1]. So a(t) = 3 + 3(−1/2)^t, b(t) = 3 − 2(−1/2)^t and c(t) = 3 − (−1/2)^t.

b. a(365) = 3 + 3(−1/2)^365 = 3 − 3/2^365, b(365) = 3 − 2(−1/2)^365 = 3 + 1/2^364 and c(365) = 3 − (−1/2)^365 = 3 + 1/2^365. So Benjamin will have the most gold.

7.2

1. λ1 = 1, λ2 = 3 by Fact 7.2.2.

3. det(A − λI2) = det[5 − λ, −4; 2, −1 − λ] = λ^2 − 4λ + 3 = (λ − 1)(λ − 3) = 0, so λ1 = 1, λ2 = 3.

5. det(A − λI2) = det[11 − λ, −15; 6, −7 − λ] = λ^2 − 4λ + 13, so det(A − λI2) = 0 for no real λ.

7. λ = 1 with algebraic multiplicity 3, by Fact 7.2.2.

9. fA(λ) = −(λ − 2)^2(λ − 1), so λ1 = 2 (algebraic multiplicity 2), λ2 = 1.

11. fA(λ) = −λ^3 − λ^2 − λ − 1 = −(λ + 1)(λ^2 + 1) = 0, so λ = −1 (algebraic multiplicity 1).

13. fA(λ) = −λ^3 + 1 = −(λ − 1)(λ^2 + λ + 1), so λ = 1 (algebraic multiplicity 1).

15. fA(λ) = λ^2 − 2λ + (1 − k) = 0 if λ1,2 = (2 ± √(4 − 4(1 − k)))/2 = 1 ± √k. The matrix A has two distinct real eigenvalues when k > 0 and no real eigenvalues when k < 0.

17. fA(λ) = λ^2 − a^2 − b^2 = 0, so λ1,2 = ±√(a^2 + b^2). The matrix A represents a reflection about a line followed by a scaling by √(a^2 + b^2), hence the eigenvalues.

19. True, since fA(λ) = λ^2 − tr(A)λ + det(A), and the discriminant [tr(A)]^2 − 4 det(A) is positive if det(A) is negative.

21. If A has n eigenvalues, then fA(λ) = (λ1 − λ)(λ2 − λ) · · · (λn − λ) = (−λ)^n + (λ1 + λ2 + · · · + λn)(−λ)^(n−1) + · · · + λ1λ2 · · · λn. But, by Fact 7.2.5, the coefficient of (−λ)^(n−1) is tr(A). So tr(A) = λ1 + · · · + λn.

23. fB(λ) = det(B − λIn) = det(S^−1AS − λIn) = det(S^−1AS − λS^−1InS) = det(S^−1(A − λIn)S) = det(S^−1) det(A − λIn) det(S) = (det S)^−1 det(A − λIn) det(S) = det(A − λIn) = fA(λ). Hence, since fA(λ) = fB(λ), A and B have the same eigenvalues.
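This invariance is easy to spot-check. For a 2×2 matrix the characteristic polynomial is λ^2 − tr λ + det, so it suffices to compare trace and determinant of A and S^−1AS (the matrix S below is an arbitrary invertible choice made for the demo, not one from the text):

```python
# Similar matrices share trace and determinant, hence (in the 2x2 case)
# the whole characteristic polynomial.
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(M):
    a, b = M[0]; c, d = M[1]
    det = F(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[5, -4], [2, -1]]            # matrix from Problem 3 above
S = [[1, 2], [1, 3]]              # any invertible S (an assumption for the demo)
B = matmul(matmul(inv2(S), A), S)
assert B[0][0] + B[1][1] == A[0][0] + A[1][1] == 4          # same trace
assert B[0][0] * B[1][1] - B[0][1] * B[1][0] == 3           # same determinant
```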

25. A[b; c] = [ab + cb; cb + cd] = [(a + c)b; (b + d)c] = [b; c] since a + c = b + d = 1; therefore [b; c] is an eigenvector with eigenvalue λ1 = 1. Also, A[1; −1] = [a − b; c − d] = (a − b)[1; −1] since a − b = −(c − d); therefore [1; −1] is an eigenvector with eigenvalue λ2 = a − b. Note that |a − b| < 1; a possible phase portrait is shown in Figure 7.5.

Figure 7.5: for Problem 7.2.25.

27. a. We know ~v1 = [1; 2], λ1 = 1 and ~v2 = [1; −1], λ2 = 1/4. If ~x0 = [1; 0] then ~x0 = (1/3)~v1 + (2/3)~v2, so by Fact 7.1.3, x1(t) = 1/3 + (2/3)(1/4)^t and x2(t) = 2/3 − (2/3)(1/4)^t. If ~x0 = [0; 1] then ~x0 = (1/3)~v1 − (1/3)~v2, so by Fact 7.1.3, x1(t) = 1/3 − (1/3)(1/4)^t and x2(t) = 2/3 + (1/3)(1/4)^t. See Figure 7.6.

Figure 7.6: for Problem 7.2.27a.

b. A^t approaches (1/3)[1 1; 2 2] as t → ∞. See part c for a justification.

c. Let us think about the first column of A^t, which is A^t~e1. We can use Fact 7.1.3 to compute A^t~e1. Start by writing ~e1 = c1[b; c] + c2[1; −1]; a straightforward computation shows that c1 = 1/(b + c) and c2 = c/(b + c). Now A^t~e1 = (1/(b + c))[b; c] + (c/(b + c))λ2^t[1; −1], where λ2 = a − b. Since |λ2| < 1, the second summand goes to zero, so that lim_(t→∞)(A^t~e1) = (1/(b + c))[b; c]. Likewise, lim_(t→∞)(A^t~e2) = (1/(b + c))[b; c], so that lim_(t→∞) A^t = (1/(b + c))[b b; c c].

29. The ith entry of A~e is [ai1 ai2 · · · ain]~e = Σ_(j=1)^(n) aij = 1, so A~e = ~e and λ = 1 is an eigenvalue of A, corresponding to the eigenvector ~e.
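A minimal sketch of this fact (the rows below are made up for the demo; any rows summing to 1 would do):

```python
# A square matrix whose rows each sum to 1 fixes e = (1, ..., 1).
A = [[0.2, 0.5, 0.3], [0.7, 0.1, 0.2], [0.0, 0.4, 0.6]]  # rows sum to 1
e = [1.0, 1.0, 1.0]
Ae = [sum(A[i][j] * e[j] for j in range(3)) for i in range(3)]
assert all(abs(y - 1.0) < 1e-12 for y in Ae)   # A e = e, so λ = 1 occurs
```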

31. Since A and A^T have the same eigenvalues (by Exercise 22), Exercise 29 states that λ = 1 is an eigenvalue of A, and Exercise 30 says that |λ| ≤ 1 for all eigenvalues λ. The vector ~e need not be an eigenvector of A; consider A = [0.9 0.9; 0.1 0.1].

33. a. fA(λ) = det(A − λI3) = −λ^3 + cλ^2 + bλ + a.

b. By part a, we have c = 17, b = −5 and a = π, so M = [0 1 0; 0 0 1; π −5 17].

35. A = [0 −1 0 0; 1 0 0 0; 0 0 0 −1; 0 0 1 0], with fA(λ) = (λ^2 + 1)^2.

37. We can write fA(λ) = (λ − λ0)^2 g(λ) for some polynomial g. The product rule for derivatives tells us that f′A(λ) = 2(λ − λ0)g(λ) + (λ − λ0)^2 g′(λ), so that f′A(λ0) = 0, as claimed.


39. tr(AB) = tr([a b; c d][e f; g h]) = (ae + bg) + (cf + dh), and tr(BA) = tr([e f; g h][a b; c d]) = (ea + fc) + (gb + hd); the off-diagonal entries of the products do not matter. So the two traces are equal.

41. So there exists an invertible S such that B = S^−1AS, and tr(B) = tr(S^−1AS) = tr((S^−1A)S). By Exercise 40, this equals tr(S(S^−1A)) = tr(A).

43. tr(AB − BA) = tr(AB) − tr(BA) = tr(AB) − tr(AB) = 0, but tr(In) = n, so no such A, B exist. We have used Exercise 40.

45. fA(λ) = λ^2 − tr(A)λ + det(A) = λ^2 − 2λ + (−3 − 4k). We want fA(5) = 25 − 10 − 3 − 4k = 0, or 12 − 4k = 0, or k = 3.

47. Let M = [~v1 ~v2]. We want A[~v1 ~v2] = [~v1 ~v2][2 0; 0 3], or [A~v1 A~v2] = [2~v1 3~v2]. Since ~v1 or ~v2 must be nonzero, 2 or 3 must be an eigenvalue of A.

49. As in problem 47, such an M will exist if A has an eigenvalue 2, 3 or 4.

7.3

1. λ1 = 7, λ2 = 9, E7 = ker[0 8; 0 2] = span[1; 0], E9 = ker[−2 8; 0 0] = span[4; 1]. Eigenbasis: [1; 0], [4; 1].

3. λ1 = 4, λ2 = 9, E4 = span[3; −2], E9 = span[1; 1]. Eigenbasis: [3; −2], [1; 1].

5. No real eigenvalues, as fA(λ) = λ^2 − 2λ + 2.

7. λ1 = 1, λ2 = 2, λ3 = 3, eigenbasis: ~e1, ~e2, ~e3

9. λ1 = λ2 = 1, λ3 = 0, eigenbasis: [1; 0; 0], [0; 1; 0], [−1; 0; 1].

11. λ1 = λ2 = 0, λ3 = 3, eigenbasis: [1; −1; 0], [1; 0; −1], [1; 1; 1].

13. λ1 = 0, λ2 = 1, λ3 = −1, eigenbasis: [0; 1; 0], [1; −3; 1], [1; −1; 2].

15. λ1 = 0, λ2 = λ3 = 1, E0 = span[0; 1; 0]. We can use Kyle numbers (1, −1, 2 written over the columns) to see that E1 = ker[−2 0 1; −3 −1 1; −4 0 2] = span[1; −1; 2]. There is no eigenbasis, since the eigenvalue 1 has algebraic multiplicity 2 but its geometric multiplicity is only 1.

17. λ1 = λ2 = 0, λ3 = λ4 = 1, with eigenbasis [1; 0; 0; 0], [0; −1; 1; 0], [0; 1; 0; 0], [0; 0; 0; 1].

19. Since 1 is the only eigenvalue, with algebraic multiplicity 3, there exists an eigenbasis for A if (and only if) the geometric multiplicity of the eigenvalue 1 is 3 as well, that is, if E1 = R^3. Now E1 = ker[0 a b; 0 0 c; 0 0 0] is R^3 if (and only if) a = b = c = 0. If a = b = c = 0 then E1 is 3-dimensional with eigenbasis ~e1, ~e2, ~e3. If a ≠ 0 and c ≠ 0 then E1 is 1-dimensional; otherwise E1 is 2-dimensional. The geometric multiplicity of the eigenvalue 1 is dim(E1).

21. We want A such that A[1; 2] = [1; 2] and A[2; 3] = 2[2; 3] = [4; 6], i.e. A[1 2; 2 3] = [1 4; 2 6], so A = [1 4; 2 6][1 2; 2 3]^−1 = [5 −2; 6 −2]. The answer is unique.
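The inverse-and-multiply step of Problem 21 can be reproduced exactly (a sketch with stdlib Fractions):

```python
# Verify Problem 21: A = [1 4; 2 6][1 2; 2 3]^(-1) = [5 -2; 6 -2].
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(M):
    a, b = M[0]; c, d = M[1]
    det = F(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

A = matmul([[1, 4], [2, 6]], inv2([[1, 2], [2, 3]]))
assert A == [[5, -2], [6, -2]]
assert matmul(A, [[1], [2]]) == [[1], [2]]   # A fixes (1, 2): eigenvalue 1
assert matmul(A, [[2], [3]]) == [[4], [6]]   # A doubles (2, 3): eigenvalue 2
```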

23. λ1 = λ2 = 1 and E1 = span(~e1), hence there is no eigenbasis. The matrix represents a shear parallel to the x-axis.


25. If λ is an eigenvalue of A, then Eλ = ker(A − λI3) = ker[−λ 1 0; 0 −λ 1; a b c − λ]. The second and third columns of this matrix aren't parallel, hence Eλ is always 1-dimensional, i.e., the geometric multiplicity of λ is 1.

27. By Fact 7.2.4, we have fA(λ) = λ^2 − 5λ + 6 = (λ − 3)(λ − 2), so λ1 = 2, λ2 = 3.

29. Note that r is the number of nonzero diagonal entries of A, since the nonzero columns of A form a basis of im(A). Therefore, there are n − r zeros on the diagonal, so that the algebraic multiplicity of the eigenvalue 0 is n − r. It is true for any n × n matrix A that the geometric multiplicity of the eigenvalue 0 is dim(ker(A)) = n − rank(A) = n − r.

31. They must be the same, for if they were not, then by Fact 7.3.7 the geometric multiplicities would not add up to n.

33. If S^−1AS = B, then S^−1(A − λIn)S = S^−1(AS − λS) = S^−1AS − λS^−1S = B − λIn.

35. No, since the two matrices have different eigenvalues (see Fact 7.3.6c).

37. a. A~v · ~w = (A~v)^T ~w = (~v^T A^T)~w = (~v^T A)~w = ~v^T(A~w) = ~v · A~w, since A is symmetric.

b. Assume A~v = λ~v and A~w = α~w for λ ≠ α. Then (A~v) · ~w = (λ~v) · ~w = λ(~v · ~w), and ~v · A~w = ~v · α~w = α(~v · ~w). By part a, λ(~v · ~w) = α(~v · ~w), i.e., (λ − α)(~v · ~w) = 0. Since λ ≠ α, it must be that ~v · ~w = 0, i.e., ~v and ~w are perpendicular.

39. a. There are two eigenvalues, λ1 = 1 (with E1 = V) and λ2 = 0 (with E0 = V⊥). Now geometric multiplicity(1) = dim(E1) = dim(V) = m, and geometric multiplicity(0) = dim(E0) = dim(V⊥) = n − dim(V) = n − m. Since geometric multiplicity(λ) ≤ algebraic multiplicity(λ) by Fact 7.3.7, and the algebraic multiplicities cannot add up to more than n, the geometric and algebraic multiplicities of the eigenvalues are the same here.

b. Analogous to part a: E1 = V and E−1 = V⊥, so geometric multiplicity(1) = algebraic multiplicity(1) = dim(V) = m, and geometric multiplicity(−1) = algebraic multiplicity(−1) = dim(V⊥) = n − m.

41. The eigenvalues of A are 1.2, −0.8, −0.4 with eigenvectors [9; 6; 2], [2; −2; 1], [1; −2; 2]. Since ~x0 = 50[9; 6; 2] + 50[2; −2; 1] + 50[1; −2; 2], we have ~x(t) = 50(1.2)^t[9; 6; 2] + 50(−0.8)^t[2; −2; 1] + 50(−0.4)^t[1; −2; 2], so, as t goes to infinity, j(t) : n(t) : a(t) approaches the proportion 9 : 6 : 2.

43. a. A = (1/2)[0 1 1; 1 0 1; 1 1 0].

b. After 10 rounds, we have A^10[7; 11; 5] ≈ [7.6660156; 7.6699219; 7.6640625]. After 50 rounds, we have A^50[7; 11; 5] ≈ [7.66666666667; 7.66666666667; 7.66666666667].

c. The eigenvalues of A are 1 and −1/2, with E1 = span[1; 1; 1] and E−1/2 = span([0; 1; −1], [−1; −1; 2]), so ~x(t) = ((1 + c0)/3)[1; 1; 1] + (−1/2)^t[0; 1; −1] + (c0/3)(−1/2)^t[−1; −1; 2]. After 1001 rounds, Alberich will be ahead of Brunnhilde (by (1/2)^1001), so that Carl needs to beat Alberich to win the game. A straightforward computation shows that c(1001) − a(1001) = (1/2)^1001(1 − c0); Carl wins if this quantity is positive, which is the case if c0 is less than 1.


Alternatively, observe that the ranking of the players is reversed in each round: whoever is first will be last after the next round. Since the total number of rounds is odd (1001), Carl wants to be last initially to win the game; he wants to choose a smaller number than both Alberich and Brunnhilde.

45. a. A = [0.1 0.2; 0.4 0.3], ~b = [1; 2].

b. B = [A ~b; 0 1].

c. The eigenvalues of A are 0.5 and −0.1, with associated eigenvectors [1; 2] and [1; −1]. The eigenvalues of B are 0.5, −0.1, and 1. If A~v = λ~v then B[~v; 0] = λ[~v; 0], so [~v; 0] is an eigenvector of B. Furthermore, [2; 4; 1] is an eigenvector of B corresponding to the eigenvalue 1. Note that this vector is [−(A − I2)^−1~b; 1].

d. Write ~y(0) = [x1(0); x2(0); 1] = c1[1; 2; 0] + c2[1; −1; 0] + c3[2; 4; 1]. Note that c3 = 1. Now ~y(t) = c1(0.5)^t[1; 2; 0] + c2(−0.1)^t[1; −1; 0] + [2; 4; 1] → [2; 4; 1] as t → ∞, so that ~x(t) → [2; 4] as t → ∞.

47. a. If ~x(t) = [r(t); p(t); w(t)], then ~x(t + 1) = A~x(t) with A = [1/2 1/4 0; 1/2 1/2 1/2; 0 1/4 1/2]. The eigenvalues of A are 0, 1/2, 1 with eigenvectors [1; −2; 1], [1; 0; −1], [1; 2; 1]. Since ~x(0) = [1; 0; 0] = (1/4)[1; −2; 1] + (1/2)[1; 0; −1] + (1/4)[1; 2; 1], we get ~x(t) = (1/2)(1/2)^t[1; 0; −1] + (1/4)[1; 2; 1] for t > 0.


b. As t → ∞ the ratio is 1 : 2 : 1 (since the first term of ~x(t) drops out).

49. This "random" matrix A = [~0 ~v2 · · · ~vn] is unlikely to have any zeros above the diagonal. In this case, the columns ~v2, . . . , ~vn will be linearly independent (none of them is redundant), so that rank(A) = n − 1 and geometric multiplicity(0) = dim(ker(A)) = n − rank(A) = 1. Alternatively, you can argue in terms of rref(A).

7.4

1. Matrix A is diagonal already, so it’s certainly diagonalizable. Let S = I2.

3. Diagonalizable. The eigenvalues are 0, 3, with associated eigenvectors [−1; 1], [1; 2]. If we let S = [−1 1; 1 2], then S^−1AS = D = [0 0; 0 3].

5. Fails to be diagonalizable. There is only one eigenvalue, 1, with a one-dimensional eigenspace.

7. Diagonalizable. The eigenvalues are 2, −3, with associated eigenvectors [4; 1], [−1; 1]. If we let S = [4 −1; 1 1], then S^−1AS = D = [2 0; 0 −3].

9. Fails to be diagonalizable. There is only one eigenvalue, 1, with a one-dimensional eigenspace.

11. Fails to be diagonalizable. The eigenvalues are 1, 2, 1, and the eigenspace E1 = ker(A − I3) = span(~e1) is only one-dimensional.

13. Diagonalizable. The eigenvalues are 1, 2, 3, with associated eigenvectors [1; 0; 0], [1; 1; 0], [1; 2; 1]. If we let S = [1 1 1; 0 1 2; 0 0 1], then S^−1AS = D = [1 0 0; 0 2 0; 0 0 3].

15. Diagonalizable. The eigenvalues are 1, −1, 1, with associated eigenvectors [2; 1; 0], [1; 1; 0], [0; 0; 1]. If we let S = [2 1 0; 1 1 0; 0 0 1], then S^−1AS = D = [1 0 0; 0 −1 0; 0 0 1].

17. Diagonalizable. The eigenvalues are 0, 3, 0, with associated eigenvectors [1; −1; 0], [1; 1; 1], [1; 0; −1]. If we let S = [1 1 1; −1 1 0; 0 1 −1], then S^−1AS = D = [0 0 0; 0 3 0; 0 0 0].

19. Fails to be diagonalizable. The eigenvalues are 1, 0, 1, and the eigenspace E1 = ker(A − I3) = span(~e1) is only one-dimensional.

21. Diagonalizable for all values of a, since there are always two distinct eigenvalues, 1 and 2. See Fact 7.4.3.

23. Diagonalizable for positive a. The characteristic polynomial is (λ − 1)^2 − a, so that the eigenvalues are λ = 1 ± √a. If a is positive, then we have two distinct real eigenvalues, so that the matrix is diagonalizable. If a is negative, then there are no real eigenvalues. If a is 0, then 1 is the only eigenvalue, with a one-dimensional eigenspace.

25. Diagonalizable for all values of a, b, and c, since we have three distinct eigenvalues, 1, 2, and 3.

27. Diagonalizable only if a = b = c = 0. Since 1 is the only eigenvalue, it is required that E1 = R^3, that is, the matrix must be the identity matrix.

29. Not diagonalizable for any a. The characteristic polynomial is −λ^3 + a, so that there is only one real eigenvalue, a^(1/3), for all a. Since the corresponding eigenspace isn't all of R^3, the matrix fails to be diagonalizable.

31. In Example 2 of Section 7.3 we see that the eigenvalues of A = [1 2; 4 3] are −1 and 5, with associated eigenvectors [1; −1] and [1; 2]. If we let S = [1 1; −1 2], then S^−1AS = D = [−1 0; 0 5]. Thus A = SDS^−1 and A^t = SD^tS^−1 = (1/3)[1 1; −1 2][(−1)^t 0; 0 5^t][2 −1; 1 1] = (1/3)[2(−1)^t + 5^t, (−1)^(t+1) + 5^t; 2(5^t) − 2(−1)^t, 2(5^t) + (−1)^t].

33. The eigenvalues of A = [1 2; 3 6] are 0 and 7, with associated eigenvectors [−2; 1] and [1; 3]. If we let S = [−2 1; 1 3], then S^−1AS = D = [0 0; 0 7]. Thus A = SDS^−1 and A^t = SD^tS^−1 = (1/7)[−2 1; 1 3][0 0; 0 7^t][−3 1; 1 2] = (1/7)[7^t, 2(7^t); 3(7^t), 6(7^t)] = 7^(t−1)A. We can find the same result more directly by observing that A^2 = 7A.
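The shortcut A^2 = 7A, and the resulting formula A^t = 7^(t−1)A, are easy to confirm:

```python
# Check A^2 = 7A and A^t = 7^(t-1) A for the matrix of Problem 33.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 6]]
assert matmul(A, A) == [[7 * x for x in row] for row in A]   # A^2 = 7A
P = A
for t in range(1, 7):
    assert P == [[7**(t - 1) * x for x in row] for row in A]
    P = matmul(P, A)
```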

35. Matrix [−1 6; −2 6] has the eigenvalues 3 and 2. If ~v and ~w are associated eigenvectors, and if we let S = [~v ~w], then S^−1[−1 6; −2 6]S = [3 0; 0 2], so that [−1 6; −2 6] is indeed similar to [3 0; 0 2].

37. Yes. Matrices A and B have the same characteristic polynomial, λ^2 − 7λ + 7, so that they have the same two distinct real eigenvalues λ1,2 = (7 ± √21)/2. Thus both A and B are similar to the diagonal matrix [λ1 0; 0 λ2], by Algorithm 7.4.4. Therefore A is similar to B, by parts b and c of Fact 3.4.6.

39. The eigenfunctions with eigenvalue λ are the nonzero functions f(x) such that T(f(x)) = f′(x) − f(x) = λf(x), or f′(x) = (λ + 1)f(x). From calculus we recall that those are the exponential functions of the form f(x) = Ce^((λ+1)x), where C is a nonzero constant. Thus all real numbers are eigenvalues of T, and the eigenspace Eλ is one-dimensional, spanned by e^((λ+1)x).

41. The nonzero symmetric matrices are eigenmatrices with eigenvalue 2, since L(A) = A + A^T = 2A in this case. The nonzero skew-symmetric matrices have eigenvalue 0, since L(A) = A + A^T = A − A = 0. Yes, L is diagonalizable, since we have the eigenbasis [1 0; 0 0], [0 1; 1 0], [0 0; 0 1], [0 1; −1 0] (three symmetric matrices and one skew-symmetric one).

43. The nonzero real numbers are "eigenvectors" with eigenvalue 1, and the nonzero imaginary numbers (of the form iy) are "eigenvectors" with eigenvalue −1. Yes, T is diagonalizable, since we have the eigenbasis 1, i.

45. The nonzero sequence (x0, x1, x2, . . .) is an eigensequence with eigenvalue λ if T(x0, x1, x2, . . .) = (0, x0, x1, x2, . . .) = λ(x0, x1, x2, . . .) = (λx0, λx1, λx2, . . .). This means that 0 = λx0, x0 = λx1, x1 = λx2, . . . , xn = λxn+1, . . . . If λ is nonzero, then these equations imply that x0 = (1/λ)·0 = 0, x1 = (1/λ)x0 = 0, x2 = (1/λ)x1 = 0, . . . , so that there are no eigensequences in this case. If λ = 0, then we have x0 = λx1 = 0, x1 = λx2 = 0, x2 = λx3 = 0, . . . , so that there aren't any eigensequences either. In summary: there are no eigenvalues and eigensequences for T.

47. The nonzero even functions, of the form f(x) = a + cx^2, are eigenfunctions with eigenvalue 1, and the nonzero odd functions, of the form f(x) = bx, have eigenvalue −1. Yes, T is diagonalizable, since the standard basis 1, x, x^2 is an eigenbasis for T.


49. The matrix of T with respect to the standard basis 1, x, x^2 is B = [1 −1 1; 0 3 −6; 0 0 9]. The eigenvalues of B are 1, 3, 9, with corresponding eigenvectors [1; 0; 0], [−1; 2; 0], [1; −4; 4]. The eigenvalues of T are 1, 3, 9, with corresponding eigenfunctions 1, 2x − 1, and 4x^2 − 4x + 1 = (2x − 1)^2. Yes, T is diagonalizable, since the functions 1, 2x − 1, (2x − 1)^2 form an eigenbasis.

51. The nonzero constant functions f(x) = b are the eigenfunctions with eigenvalue 0. If f(x) is a polynomial of degree ≥ 1, then the degree of f(x) exceeds the degree of f′(x) by 1 (by the power rule of calculus), so that f′(x) cannot be a scalar multiple of f(x). Thus 0 is the only eigenvalue of T, and the eigenspace E0 consists of the constant functions.

53. Suppose basis D consists of f1, . . . , fn. We are told that the D-matrix D of T is diagonal; let λ1, λ2, . . . , λn be the diagonal entries of D. By Fact 4.3.3, we know that [T(fi)]D = (ith column of D) = λi~ei, for i = 1, 2, . . . , n, so that T(fi) = λifi, by definition of coordinates. Thus f1, . . . , fn is an eigenbasis for T, as claimed.

55. Let A = [0 1; 0 0] and B = [1 0; 0 0], for example.

57. Modifying the hint in Exercise 56 slightly, we can write [AB 0; B 0][Im A; 0 In] = [Im A; 0 In][0 0; B BA]. Thus matrix M = [AB 0; B 0] is similar to N = [0 0; B BA]. By Fact 7.3.6a, matrices M and N have the same characteristic polynomial. Now fM(λ) = det[AB − λIm, 0; B, −λIn] = (−λ)^n det(AB − λIm) = (−λ)^n fAB(λ). To understand the second equality, consider Fact 6.1.8. Likewise, fN(λ) = det[−λIm, 0; B, BA − λIn] = (−λ)^m fBA(λ). It follows that (−λ)^n fAB(λ) = (−λ)^m fBA(λ). Thus matrices AB and BA have the same nonzero eigenvalues, with the same algebraic multiplicities. If mult(AB) and mult(BA) are the algebraic multiplicities of 0 as an eigenvalue of AB and BA, respectively, then the equation (−λ)^n fAB(λ) = (−λ)^m fBA(λ) implies that n + mult(AB) = m + mult(BA).

59. If ~v is an eigenvector with eigenvalue λ, then fA(A)~v = ((−A)^n + a_(n−1)A^(n−1) + · · · + a1A + a0In)~v = (−λ)^n~v + a_(n−1)λ^(n−1)~v + · · · + a1λ~v + a0~v = ((−λ)^n + a_(n−1)λ^(n−1) + · · · + a1λ + a0)~v = fA(λ)~v = 0~v = ~0. Since A is diagonalizable, any vector ~x in R^n can be written as a linear combination of eigenvectors, so that fA(A)~x = ~0. Since this equation holds for all ~x in R^n, we have fA(A) = 0, as claimed.

61. a. B is diagonalizable since it has three distinct eigenvalues, so that S^−1BS is diagonal for some invertible S. But S^−1AS = S^−1I3S = I3 is diagonal as well. Thus A and B are indeed simultaneously diagonalizable.

b. There is an invertible S such that S^−1AS = D1 and S^−1BS = D2 are both diagonal. Then A = SD1S^−1 and B = SD2S^−1, so that AB = (SD1S^−1)(SD2S^−1) = SD1D2S^−1 and BA = (SD2S^−1)(SD1S^−1) = SD2D1S^−1. These two results agree, since D1D2 = D2D1 for the diagonal matrices D1 and D2.

c. Let A be In and B a nondiagonalizable n × n matrix, for example, A = [1 0; 0 1] and B = [1 1; 0 1].

d. Suppose BD = DB for a diagonal D with distinct diagonal entries. The ijth entry of the matrix BD = DB is bij djj = dii bij. For i ≠ j this implies that bij = 0. Thus B must be diagonal.

e. Since A has n distinct eigenvalues, A is diagonalizable, that is, there is an invertible S such that S^−1AS = D is a diagonal matrix with n distinct diagonal entries. We claim that S^−1BS is diagonal as well; by part d it suffices to show that S^−1BS commutes with D = S^−1AS. This is easy to verify: (S^−1BS)D = (S^−1BS)(S^−1AS) = S^−1BAS = S^−1ABS = (S^−1AS)(S^−1BS) = D(S^−1BS).

63. Recall from Exercise 62 that all the eigenspaces are two-dimensional.


a. We need to solve the differential equation f″(x) = f(x). As in Example 18 of Section 4.1, we will look for exponential solutions. The function f(x) = e^(kx) is a solution if k^2 = 1, or k = ±1. Thus the eigenspace E1 is the span of the functions e^x and e^(−x).

b. We need to solve the differential equation f″(x) = 0. Integration gives f′(x) = C, a constant. If we integrate again, we find f(x) = Cx + c, where c is another arbitrary constant. Thus E0 = span(1, x).

c. The solutions of the differential equation f″(x) = −f(x) are the functions f(x) = a cos(x) + b sin(x), so that E−1 = span(cos x, sin x). See the introductory example of Section 4.1 and Exercise 4.1.58.

d. Modifying part c, we see that the solutions of the differential equation f″(x) = −4f(x) are the functions f(x) = a cos(2x) + b sin(2x), so that E−4 = span(cos(2x), sin(2x)).

65. Let's write S in terms of its columns, as S = [~v ~w]. We want A[~v ~w] = [~v ~w][5 0; 0 −1], or [A~v A~w] = [5~v −~w]; that is, we want ~v to be in the eigenspace E5, and ~w in E−1. We find that E5 = span[1; 2] and E−1 = span[1; −1], so that S must be of the form [a[1; 2] b[1; −1]] = a[1 0; 2 0] + b[0 1; 0 −1]. Thus, a basis of the space V is [1 0; 2 0], [0 1; 0 −1], and dim(V) = 2.

67. Let Eλ1 = span(~v1, ~v2, ~v3) and Eλ2 = span(~w1, ~w2). As in Exercise 65, we can see that S must be of the form [~x1 ~x2 ~x3 ~x4 ~x5], where ~x1, ~x2 and ~x3 are in Eλ1 and ~x4 and ~x5 are in Eλ2. Thus, we can write ~x1 = c1~v1 + c2~v2 + c3~v3, for example, or ~x5 = d1~w1 + d2~w2. Using Summary 4.1.6, we find a basis: [~v1 ~0 ~0 ~0 ~0], [~v2 ~0 ~0 ~0 ~0], [~v3 ~0 ~0 ~0 ~0], [~0 ~v1 ~0 ~0 ~0], [~0 ~v2 ~0 ~0 ~0], [~0 ~v3 ~0 ~0 ~0], [~0 ~0 ~v1 ~0 ~0], [~0 ~0 ~v2 ~0 ~0], [~0 ~0 ~v3 ~0 ~0], [~0 ~0 ~0 ~w1 ~0], [~0 ~0 ~0 ~w2 ~0], [~0 ~0 ~0 ~0 ~w1], [~0 ~0 ~0 ~0 ~w2]. Thus, the dimension of the space of matrices S is 3 + 3 + 3 + 2 + 2 = 13.

7.5

1. z = 3 − 3i, so |z| = √(3² + (−3)²) = √18 and arg(z) = −π/4; hence z = √18(cos(−π/4) + i sin(−π/4)).
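A quick check of this polar form with Python's `cmath` module:

```python
import cmath
import math

# Verify the polar form of z = 3 - 3i.
z = 3 - 3j
r, theta = cmath.polar(z)          # r = |z|, theta = arg(z)
assert math.isclose(r, math.sqrt(18))
assert math.isclose(theta, -math.pi / 4)
# Reassembling the polar form recovers z.
assert cmath.isclose(r * (math.cos(theta) + 1j * math.sin(theta)), z)
print(r, theta)
```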

3. If z = r(cos θ + i sin θ), then z^n = r^n(cos(nθ) + i sin(nθ)). So z^n = 1 if r = 1, cos(nθ) = 1 and sin(nθ) = 0, that is, nθ = 2kπ for an integer k, so θ = 2kπ/n; i.e., z = cos(2kπ/n) + i sin(2kπ/n), k = 0, 1, 2, . . . , n − 1. See Figure 7.7.

Figure 7.7: for Problem 7.5.3.
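The n solutions just described can be generated and checked directly, here for n = 6:

```python
import cmath

# The n distinct solutions of z^n = 1: z_k = cos(2*pi*k/n) + i sin(2*pi*k/n)
n = 6
roots = [cmath.rect(1, 2 * cmath.pi * k / n) for k in range(n)]
for z in roots:
    assert cmath.isclose(z**n, 1)       # each is an nth root of unity
    assert cmath.isclose(abs(z), 1)     # all lie on the unit circle
# The n roots are pairwise distinct.
assert len({(round(z.real, 9), round(z.imag, 9)) for z in roots}) == n
print(roots)
```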

5. Let z = r(cos θ + i sin θ); then w = r^(1/n)(cos((θ + 2πk)/n) + i sin((θ + 2πk)/n)), k = 0, 1, 2, . . . , n − 1.

7. |T(z)| = √2 |z| and arg(T(z)) = arg(1 − i) + arg(z) = −π/4 + arg(z), so T is a clockwise rotation through π/4 followed by a scaling by √2.

9. |z| = √(0.8² + 0.7²) = √1.13 > 1, and arg(z) = arctan(−0.7/0.8) ≈ −0.72. See Figure 7.8.

Figure 7.8: for Problem 7.5.9.

The trajectory spirals outward, in the clockwise direction.


11. Notice that f(1) = 0, so λ = 1 is a root of f(λ). Hence f(λ) = (λ − 1)g(λ), where g(λ) = f(λ)/(λ − 1) = λ² − 2λ + 5. Setting g(λ) = 0 we get λ = 1 ± 2i, so that f(λ) = (λ − 1)(λ − 1 − 2i)(λ − 1 + 2i).
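Expanding f(λ) = (λ − 1)(λ² − 2λ + 5) = λ³ − 3λ² + 7λ − 5, the factorization can be checked numerically:

```python
import numpy as np

# f(lambda) = (lambda - 1)(lambda^2 - 2*lambda + 5) = lambda^3 - 3*lambda^2 + 7*lambda - 5
roots = np.roots([1, -3, 7, -5])
for target in (1, 1 + 2j, 1 - 2j):
    assert min(abs(roots - target)) < 1e-9   # each claimed root appears
print(sorted(roots, key=lambda z: z.imag))
```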

13. Yes, Q is a field. Check the axioms on Page 347.

15. Yes, check the axioms on Page 347. (additive identity 0 and multiplicative identity 1)

17. No, since multiplication is not commutative; Axiom 5 does not hold.

19. a. Since A has eigenvalues 1 and 0, associated with V and V⊥ respectively, and since V is the eigenspace of λ = 1, by Fact 7.5.5, tr(A) = m and det(A) = 0.

b. Since B has eigenvalues 1 and −1, associated with V and V⊥ respectively, and since V is the eigenspace associated with λ = 1, tr(B) = m − (n − m) = 2m − n and det(B) = (−1)^(n−m).

21. fA(λ) = (11 − λ)(−7 − λ) + 90 = λ2 − 4λ + 13 so λ1,2 = 2 ± 3i.
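The roots of this characteristic polynomial can be confirmed with `numpy.roots`:

```python
import numpy as np

# Characteristic polynomial from Problem 21: lambda^2 - 4*lambda + 13
roots = np.roots([1, -4, 13])
assert min(abs(roots - (2 + 3j))) < 1e-9
assert min(abs(roots - (2 - 3j))) < 1e-9
print(roots)
```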

23. fA(λ) = (−λ)³ + 1 = −(λ − 1)(λ² + λ + 1), so λ1 = 1 and λ2,3 = (−1 ± √3 i)/2.

25. fA(λ) = λ⁴ − 1 = (λ² − 1)(λ² + 1) = (λ − 1)(λ + 1)(λ − i)(λ + i), so λ1,2 = ±1 and λ3,4 = ±i.

27. By Fact 7.5.5, tr(A) = λ1 + λ2 + λ3 and det(A) = λ1λ2λ3, but λ1 = λ2 ≠ λ3 by assumption, so tr(A) = 1 = 2λ2 + λ3 and det(A) = 3 = λ2²λ3.

Solving for λ2 and λ3 we get λ2 = −1 and λ3 = 3, hence λ1 = λ2 = −1 and λ3 = 3. (Note that the eigenvalues must be real; why?)

29. tr(A) = 0 so λ1 + λ2 + λ3 = 0.

Also, we can compute det(A) = bcd > 0 since b, c, d > 0. Therefore, λ1λ2λ3 > 0.

Hence two of the eigenvalues must be negative, and the largest one (in absolute value) must be positive.

31. No matter how we choose A, (1/15)A is a regular transition matrix, so that lim_{t→∞} ((1/15)A)^t is a matrix with identical columns, by Exercise 30. Therefore, the columns of A^t “become more and more alike” as t approaches infinity, in the sense that lim_{t→∞} (ijth entry of A^t)/(ikth entry of A^t) = 1 for all i, j, k.


33. a. C is obtained from B by dividing each column of B by its first component. Thus, the first row of C will consist of 1’s.

b. We observe that the columns of C are almost identical, so that the columns of B are “almost parallel” (that is, almost scalar multiples of each other).

c. Let λ1, λ2, . . . , λ5 be the eigenvalues. Assume λ1 is real and positive and λ1 > |λj| for 2 ≤ j ≤ 5. Let ~v1, . . . , ~v5 be corresponding eigenvectors. For a fixed i, write ~ei = c1~v1 + · · · + c5~v5; then

(ith column of A^t) = A^t~ei = c1λ1^t~v1 + · · · + c5λ5^t~v5.

But in the last expression, for large t, the first term is dominant, so the ith column of A^t is almost parallel to ~v1, the eigenvector corresponding to the dominant eigenvalue.

d. By part c, the columns of B and C are almost eigenvectors of A associated with the largest eigenvalue, λ1. Since the first row of C consists of 1’s, the entries in the first row of AC will be close to λ1.
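The exercise's own A, B, and C came from technology and are not reprinted here, but the mechanism of parts a–d can be sketched with an assumed positive symmetric matrix (for which the dominant eigenvalue is real and positive):

```python
import numpy as np

# Assumed example matrix; the exercise's own A, B, C are not reprinted here.
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 1.0, 4.0]])
B = np.linalg.matrix_power(A, 40)    # plays the role of B = A^t for large t
C = B / B[0, :]                      # divide each column by its first component

lam1 = max(np.linalg.eigvals(A), key=abs).real   # dominant eigenvalue

# Part b: the columns of C are almost identical.
assert np.allclose(C, C[:, [0]], rtol=1e-6)
# Part d: the entries in the first row of AC are close to lam1.
assert np.allclose((A @ C)[0, :], lam1, rtol=1e-6)
print(lam1)
```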

35. We have fA(λ) = (λ1 − λ)(λ2 − λ) · · · (λn − λ) = (−λ)^n + (λ1 + λ2 + · · · + λn)(−λ)^(n−1) + · · · + λ1λ2 · · · λn. But, by Fact 7.2.5, the coefficient of (−λ)^(n−1) is tr(A). So tr(A) = λ1 + · · · + λn.

37. a. Use that the conjugate of a sum is the sum of the conjugates, and the conjugate of a product is the product of the conjugates (writing z̄ for the conjugate of z):

[ w1 −z1 ; z̄1 w̄1 ] + [ w2 −z2 ; z̄2 w̄2 ] = [ (w1 + w2) −(z1 + z2) ; (z̄1 + z̄2) (w̄1 + w̄2) ] is in H, since z̄1 + z̄2 is the conjugate of z1 + z2 and w̄1 + w̄2 is the conjugate of w1 + w2.

[ w1 −z1 ; z̄1 w̄1 ][ w2 −z2 ; z̄2 w̄2 ] = [ (w1w2 − z1z̄2) −(w1z2 + z1w̄2) ; (z̄1w2 + w̄1z̄2) (w̄1w̄2 − z̄1z2) ] is in H, since z̄1w2 + w̄1z̄2 is the conjugate of w1z2 + z1w̄2 and w̄1w̄2 − z̄1z2 is the conjugate of w1w2 − z1z̄2.

b. If A in H is nonzero, then det(A) = ww̄ + zz̄ = |w|² + |z|² > 0, so that A is invertible.

c. Yes; if A = [ w −z ; z̄ w̄ ], then A⁻¹ = (1/(|w|² + |z|²))[ w̄ z ; −z̄ w ] is in H.

d. For example, if A = [ i 0 ; 0 −i ] and B = [ 0 −1 ; 1 0 ], then AB = [ 0 −i ; −i 0 ] and BA = [ 0 i ; i 0 ], so AB ≠ BA.
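The closure, invertibility, and non-commutativity claims can all be checked numerically:

```python
import numpy as np

def q(w, z):
    """The matrix [w, -z; conj(z), conj(w)] representing an element of H."""
    return np.array([[w, -z], [np.conj(z), np.conj(w)]])

A = q(1j, 0)        # the matrix [ i 0 ; 0 -i ] from part d
B = q(0, 1)         # the matrix [ 0 -1 ; 1 0 ] from part d

# Part a: the product of two elements of H is again in H.
P = q(2 + 1j, 3 - 2j) @ q(1 - 1j, 2j)
w, z = P[0, 0], -P[0, 1]
assert np.allclose(P, q(w, z))

# Part b: det = |w|^2 + |z|^2 > 0 for a nonzero element.
assert np.linalg.det(q(2 + 1j, 3 - 2j)).real > 0

# Part d: H is not commutative.
assert not np.allclose(A @ B, B @ A)
print(A @ B, B @ A, sep="\n")
```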


Figure 7.9: for Problem 7.5.39.

39. Figure 7.9 illustrates how Cn acts on the standard basis vectors ~e1, ~e2, . . . , ~en of R^n.

a. Based on Figure 7.9, we see that Cn^k takes ~ei to ~e(i+k) “modulo n,” that is, if i + k exceeds n then Cn^k takes ~ei to ~e(i+k−n) (for k = 1, . . . , n − 1).

To put it differently: Cn^k is the matrix whose ith column is ~e(i+k) if i + k ≤ n, and ~e(i+k−n) if i + k > n (for k = 1, . . . , n − 1).

b. The characteristic polynomial is 1 − λ^n, so that the eigenvalues are the n distinct solutions of the equation λ^n = 1 (the so-called nth roots of unity): the equally spaced points λk = cos(2πk/n) + i sin(2πk/n) along the unit circle, for k = 0, 1, . . . , n − 1 (compare with Exercise 5 and Figure 7.7). For each eigenvalue λk, the vector

~vk = [ λk^(n−1) ; . . . ; λk² ; λk ; 1 ]

is an associated eigenvector.

c. The eigenbasis ~v0, ~v1, . . . , ~v(n−1) for Cn we found in part b is in fact an eigenbasis for all circulant n × n matrices.
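Part b can be verified directly: build the cyclic permutation matrix Cn (taking ~ei to ~e(i+1), indices mod n) and check each claimed eigenvector:

```python
import numpy as np

# C_n: the cyclic permutation matrix taking e_i to e_{i+1} (indices mod n)
n = 5
C = np.roll(np.eye(n), 1, axis=0)

for k in range(n):
    lam = np.exp(2j * np.pi * k / n)            # cos(2*pi*k/n) + i sin(2*pi*k/n)
    v = lam ** np.arange(n - 1, -1, -1)         # (lam^{n-1}, ..., lam^2, lam, 1)
    assert np.allclose(C @ v, lam * v)          # v is an eigenvector of C_n
print(np.sort_complex(np.linalg.eigvals(C)))
```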

41. Substitute ρ = 1/x into 14ρ² + 12ρ³ − 1 = 0:

14/x² + 12/x³ − 1 = 0

14x + 12 − x³ = 0

x³ − 14x = 12

Now use the formula derived in Exercise 40 to find x, with p = −14 and q = 12. There is only one positive solution, x ≈ 4.114, so that ρ = 1/x ≈ 0.243.

43. Note that f(z) is not the zero polynomial, since f(i) = det(S1 + iS2) = det(S) ≠ 0, as S is invertible. A nonzero polynomial has only finitely many zeros, so that there is a real number x such that f(x) = det(S1 + xS2) ≠ 0, that is, S1 + xS2 is invertible. Now SB = AS, or (S1 + iS2)B = A(S1 + iS2). Considering the real and the imaginary parts, we can conclude that S1B = AS1 and S2B = AS2, and therefore (S1 + xS2)B = A(S1 + xS2). Since S1 + xS2 is invertible, we have B = (S1 + xS2)⁻¹A(S1 + xS2), as claimed.

45. If a ≠ 0, then there are two distinct eigenvalues, 1 ± √a, so that the matrix is diagonalizable. If a = 0, then [ 1 1 ; a 1 ] = [ 1 1 ; 0 1 ] fails to be diagonalizable.

47. If a ≠ 0, then there are three distinct eigenvalues, 0 and ±√a, so that the matrix is diagonalizable. If a = 0, then [ 0 0 0 ; 1 0 a ; 0 1 0 ] = [ 0 0 0 ; 1 0 0 ; 0 1 0 ] fails to be diagonalizable.

49. The eigenvalues are 0, 1, and a − 1. If a is neither 1 nor 2, then there are three distinct eigenvalues, so that the matrix is diagonalizable. Conversely, if a = 1 or a = 2, then the matrix fails to be diagonalizable, since all the eigenspaces will be one-dimensional (verify this!).

7.6

1. λ1 = 0.9, λ2 = 0.8, so, by Fact 7.6.2, ~0 is a stable equilibrium.

3. λ1,2 = 0.8 ± 0.7i, so |λ1| = |λ2| = √(0.64 + 0.49) > 1 and ~0 is not a stable equilibrium.

5. λ1 = 0.8, λ2 = 1.1 so ~0 is not a stable equilibrium.

7. λ1,2 = 0.9 ± 0.5i, so |λ1| = |λ2| = √(0.81 + 0.25) > 1 and ~0 is not a stable equilibrium.

9. λ1,2 = 0.8± (0.6)i, λ3 = 0.7, so |λ1| = |λ2| = 1 and ~0 is not a stable equilibrium.

11. λ1 = k, λ2 = 0.9 so ~0 is a stable equilibrium if |k| < 1.

13. Since λ1 = 0.7 and λ2 = −0.9, ~0 is a stable equilibrium regardless of the value of k.
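The moduli computed in Problems 3–13 can be checked in one line each with Python's built-in complex arithmetic:

```python
# Stability requires every eigenvalue to have modulus less than 1.
assert abs(0.8 + 0.7j) > 1                    # Problem 3: sqrt(0.64 + 0.49) > 1, not stable
assert abs(0.9 + 0.5j) > 1                    # Problem 7: sqrt(0.81 + 0.25) > 1, not stable
assert abs(abs(0.8 + 0.6j) - 1) < 1e-12      # Problem 9: on the unit circle, not stable
assert abs(0.7) < 1 and abs(-0.9) < 1        # Problem 13: stable for every k
print(abs(0.8 + 0.7j), abs(0.9 + 0.5j))
```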

15. λ1,2 = 1 ± (1/10)√k.

If k ≥ 0, then λ1 = 1 + (1/10)√k ≥ 1. If k < 0, then |λ1| = |λ2| > 1. Thus, the zero state isn’t a stable equilibrium for any real k.


17. λ1,2 = 0.6 ± 0.8i = 1·(cos θ ± i sin θ), where θ = arctan(0.8/0.6) = arctan(4/3) ≈ 0.927.

Eλ1 = ker[ −0.8i −0.8 ; 0.8 −0.8i ] = span[ −1 ; i ], so ~w = [ 0 ; 1 ] and ~v = [ −1 ; 0 ].

~x0 = [ 0 ; 1 ] = 1~w + 0~v, so a = 1 and b = 0. Now we use Fact 7.6.3:

~x(t) = [ 0 −1 ; 1 0 ][ cos(θt) −sin(θt) ; sin(θt) cos(θt) ][ 1 ; 0 ] = [ 0 −1 ; 1 0 ][ cos(θt) ; sin(θt) ] = [ −sin(θt) ; cos(θt) ], where θ = arctan(4/3) ≈ 0.927.

The trajectory is the circle shown in Figure 7.10.

Figure 7.10: for Problem 7.6.17.

19. λ1,2 = 2 ± 3i, r = √13, and θ = arctan(3/2) ≈ 0.98, so λ1 ≈ √13(cos(0.98) + i sin(0.98)), [~w ~v] = [ 0 −1 ; 1 0 ], [ a ; b ] = [ 1 ; 0 ], and ~x(t) ≈ (√13)^t [ −sin(0.98t) ; cos(0.98t) ].

The trajectory spirals outwards; see Figure 7.11.

Figure 7.11: for Problem 7.6.19.

21. λ1,2 = 4 ± i, r = √17, and θ = arctan(1/4) ≈ 0.245, so λ1 ≈ √17(cos(0.245) + i sin(0.245)), [~w ~v] = [ 0 5 ; 1 3 ], [ a ; b ] = [ 1 ; 0 ], and ~x(t) ≈ (√17)^t [ 5 sin(0.245t) ; cos(0.245t) + 3 sin(0.245t) ].

The trajectory spirals outwards; see Figure 7.12.

Figure 7.12: for Problem 7.6.21.

23. λ1,2 = 0.4 ± 0.3i, r = 1/2, and θ = arctan(0.3/0.4) ≈ 0.643, so [~w ~v] = [ 0 5 ; 1 3 ], [ a ; b ] = [ 1 ; 0 ], and ~x(t) = (1/2)^t [ 5 sin(θt) ; cos(θt) + 3 sin(θt) ].

The trajectory spirals inwards as shown in Figure 7.13.

25. Not stable, since if λ is an eigenvalue of A, then 1/λ is an eigenvalue of A⁻¹, and |1/λ| = 1/|λ| > 1 because |λ| < 1.


Figure 7.13: for Problem 7.6.23.

27. Stable, since if λ is an eigenvalue of A, then −λ is an eigenvalue of −A, and |−λ| = |λ| < 1.

29. Cannot tell; for example, if A = [ 1/2 0 ; 0 1/2 ], then A + I2 = [ 3/2 0 ; 0 3/2 ] and the zero state is not stable, but if A = [ −1/2 0 ; 0 −1/2 ], then A + I2 = [ 1/2 0 ; 0 1/2 ] and the zero state is stable.

31. We need to determine for which values of det(A) and tr(A) the modulus of both eigenvalues is less than 1. We will first think about the borderline case and examine when one of the moduli is exactly 1: If one of the eigenvalues is 1 and the other is λ, then tr(A) = λ + 1 and det(A) = λ, so that det(A) = tr(A) − 1. If one of the eigenvalues is −1 and the other is λ, then tr(A) = λ − 1 and det(A) = −λ, so that det(A) = −tr(A) − 1. If the eigenvalues are complex conjugates with modulus 1, then det(A) = 1 and |tr(A)| < 2 (think about it!). It is convenient to represent these conditions in the tr-det plane, where each 2 × 2 matrix A is represented by the point (tr A, det A), as shown in Figure 7.14.

If tr(A) = det(A) = 0, then both eigenvalues of A are zero. We can conclude that throughout the shaded triangle in Figure 7.14 the modulus of both eigenvalues will be less than 1, since the modulus of the eigenvalues changes continuously with tr(A) and det(A) (consider the quadratic formula!). Conversely, we can choose sample points to show that in all the other four regions in Figure 7.14 the modulus of at least one of the eigenvalues exceeds one; consider the matrices [ 2 0 ; 0 0 ] in (I), [ −2 0 ; 0 0 ] in (II), [ 2 0 ; 0 −2 ] in (III), and [ 0 −2 ; 2 0 ] in (IV).


Figure 7.14: for Problem 7.6.31.

It follows that throughout these four regions, (I), (II), (III), and (IV), at least one of the eigenvalues will have a modulus exceeding one.

The point (tr A, det A) is in the shaded triangle if det(A) < 1, det(A) > tr(A) − 1, and det(A) > −tr(A) − 1. This means that |tr A| − 1 < det(A) < 1, as claimed.
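The four sample matrices, and a sample point inside the triangle, can be checked numerically:

```python
import numpy as np

def spectral_radius(M):
    """Largest eigenvalue modulus of M."""
    return max(abs(np.linalg.eigvals(M)))

# Sample matrices in regions (I)-(IV): at least one eigenvalue has modulus > 1.
for M in ([[2, 0], [0, 0]], [[-2, 0], [0, 0]], [[2, 0], [0, -2]], [[0, -2], [2, 0]]):
    assert spectral_radius(np.array(M, dtype=float)) > 1

# A sample point inside the triangle |tr A| - 1 < det A < 1: both moduli < 1.
M = np.array([[0.5, 0.4], [-0.3, 0.2]])
tr, det = np.trace(M), np.linalg.det(M)
assert abs(tr) - 1 < det < 1
assert spectral_radius(M) < 1
print(tr, det)
```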

33. Take conjugates of both sides of the equation ~x0 = c1(~v + i~w) + c2(~v − i~w). Since ~x0 is real, this gives

~x0 = c̄1(~v − i~w) + c̄2(~v + i~w) = c̄2(~v + i~w) + c̄1(~v − i~w).

The claim that c2 = c̄1 now follows from the fact that the representation of ~x0 as a linear combination of the linearly independent vectors ~v + i~w and ~v − i~w is unique.

35. a. Let ~v1, . . . , ~vn be an eigenbasis for A. Then ~x(t) = c1λ1^t~v1 + · · · + cnλn^t~vn and

‖~x(t)‖ = ‖c1λ1^t~v1 + · · · + cnλn^t~vn‖ ≤ ‖c1λ1^t~v1‖ + · · · + ‖cnλn^t~vn‖ = |λ1|^t‖c1~v1‖ + · · · + |λn|^t‖cn~vn‖ ≤ ‖c1~v1‖ + · · · + ‖cn~vn‖,

since |λi| ≤ 1 for all i. The last quantity, ‖c1~v1‖ + · · · + ‖cn~vn‖, gives the desired bound M.


b. A = [ 1 1 ; 0 1 ] represents a shear parallel to the x-axis, with A[ k ; 1 ] = [ k + 1 ; 1 ], so that ~x(t) = A^t[ 0 ; 1 ] = [ t ; 1 ] is not bounded. This does not contradict part a, since there is no eigenbasis for A.

37. a. Write Y(t + 1) = Y(t) = Y, C(t + 1) = C(t) = C, and I(t + 1) = I(t) = I. Then

Y = C + I + G0, C = γY, I = 0,

so Y = γY + G0, giving Y = G0/(1 − γ), C = γG0/(1 − γ), I = 0.

b. y(t) = Y(t) − G0/(1 − γ), c(t) = C(t) − γG0/(1 − γ), i(t) = I(t).

Substitute to verify the equations:

[ c(t + 1) ; i(t + 1) ] = [ γ γ ; αγ − α αγ ][ c(t) ; i(t) ]

c. A = [ 0.2 0.2 ; −4 1 ] has eigenvalues 0.6 ± 0.8i, of modulus 1; not stable.

d. A = [ γ γ ; γ − 1 γ ], tr A = 2γ, det A = γ; stable (use Exercise 31).

e. A = [ γ γ ; αγ − α αγ ], tr A = γ(1 + α) > 0, det A = αγ.

Use Exercise 31: stable if det(A) = αγ < 1 and tr A − 1 = αγ + γ − 1 < αγ. The second condition is satisfied since γ < 1. So the system is stable if γ < 1/α. (The eigenvalues are real if γ ≥ 4α/(1 + α)².)

39. Use Exercise 38: ~v = (I2 − A)⁻¹~b = [ 0.9 −0.2 ; −0.4 0.7 ]⁻¹[ 1 ; 2 ] = [ 2 ; 4 ].

[ 2 ; 4 ] is a stable equilibrium since the eigenvalues of A are 0.5 and −0.1.
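Both the equilibrium and the eigenvalues can be confirmed with `numpy`:

```python
import numpy as np

# Equilibrium v = (I2 - A)^{-1} b, with I2 - A = [ 0.9 -0.2 ; -0.4 0.7 ] and b = [1, 2]
I_minus_A = np.array([[0.9, -0.2],
                      [-0.4, 0.7]])
b = np.array([1.0, 2.0])
v = np.linalg.solve(I_minus_A, b)
assert np.allclose(v, [2, 4])

# Stability: the eigenvalues of A = I2 - (I2 - A) are 0.5 and -0.1.
A = np.eye(2) - I_minus_A
assert np.allclose(np.sort(np.linalg.eigvals(A).real), [-0.1, 0.5])
print(v)
```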


41. Find the 2 × 2 matrix A that transforms [ 8 ; 6 ] into [ −3 ; 4 ] and [ −3 ; 4 ] into [ −8 ; −6 ]:

A[ 8 −3 ; 6 4 ] = [ −3 −8 ; 4 −6 ], so A = [ −3 −8 ; 4 −6 ][ 8 −3 ; 6 4 ]⁻¹ = (1/50)[ 36 −73 ; 52 −36 ].

There are many other correct answers.
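This particular answer is easy to verify:

```python
import numpy as np

# A maps [8, 6] to [-3, 4] and [-3, 4] to [-8, -6].
A = np.array([[-3.0, -8.0],
              [4.0, -6.0]]) @ np.linalg.inv(np.array([[8.0, -3.0],
                                                      [6.0, 4.0]]))
assert np.allclose(A, np.array([[36, -73], [52, -36]]) / 50)
assert np.allclose(A @ [8, 6], [-3, 4])
assert np.allclose(A @ [-3, 4], [-8, -6])
print(A * 50)
```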

True or False

1. T, by Fact 7.2.2

3. F; for A = [ 1 1 ; 0 1 ], the eigenvalue 1 has geometric multiplicity 1 and algebraic multiplicity 2.

5. T; A = AIn = A[~e1 . . . ~en] = [λ1~e1 . . . λn~en] is diagonal.

7. T; Consider a diagonal 5 × 5 matrix with only two distinct diagonal entries.

9. T, by Summary 7.1.5

11. F; Consider A = [ 1 1 ; 0 1 ].

13. T; If A~v = 3~v, then A²~v = 9~v.

15. T, by Fact 7.5.5

17. T, by Example 6 of Section 7.5

19. T; If S⁻¹AS = D, then S^T A^T (S^T)⁻¹ = D.

21. F; Consider A = [ 0 1 ; 0 0 ], with A² = [ 0 0 ; 0 0 ].

23. F; Let A = [ 1 0 ; 1 1 ], for example.

25. T; If S⁻¹AS = D, then S⁻¹A⁻¹S = D⁻¹ is diagonal.

27. T; The sole eigenvalue, 7, must have geometric multiplicity 3.

29. F; Consider the zero matrix.

31. F; Consider the identity matrix.


33. F; Let A = [ 1 1 ; 0 1 ] and ~v = [ 1 ; 0 ], for example.

35. F; Let A = [ 2 0 ; 0 3 ], ~v = [ 1 ; 0 ], and ~w = [ 0 ; 1 ], for example.

37. T; The eigenvalues are 3 and −2.

39. T, by Fact 7.3.4

41. F; Consider a rotation through π/2.

43. F; Consider [ 1 0 ; 0 1 ] and [ 1 1 ; 0 1 ].

45. T; There is an eigenbasis ~v1, . . . , ~vn, and we can write ~v = c1~v1 + · · · + cn~vn. The vectors ci~vi are either eigenvectors or zero.

47. T, by Fact 7.3.6a

49. T; Recall that the rank is the dimension of the image. If ~v is in the image of A, then A~v is in the image of A as well, so that A~v is parallel to ~v.

51. T; If A~v = λ~v for a nonzero ~v, then A⁴~v = λ⁴~v = ~0, so that λ⁴ = 0 and λ = 0.

53. T; If the eigenvalue associated with ~v is λ = 0, then A~v = ~0, so that ~v is in the kernel of A; otherwise ~v = A((1/λ)~v), so that ~v is in the image of A.

55. T; Either A~u = 3~u or A~u = 4~u.

57. T; Suppose A~vi = αi~vi and B~vi = βi~vi, and let S = [~v1 . . . ~vn]. Then ABS = BAS = [α1β1~v1 . . . αnβn~vn], so that AB = BA.
