
HW4 Solution of Math170A

Shi(Fox) Cheng

Feb 27th, 2012

5.3.6

(i) Find eigenvalues.

Consider

det(A − λI) = λ² − 11λ + 17 = 0.

Applying the quadratic formula, one finds the two roots λ1 = (11 + √53)/2 ≈ 9.1401 and λ2 = (11 − √53)/2 ≈ 1.8599.
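As a quick numerical sanity check, here is a minimal sketch. It assumes, for illustration, the matrix A = [[9, 1], [1, 2]], which is consistent with the characteristic polynomial λ² − 11λ + 17 and with the eigenvector found in (ii); the actual matrix in Example 5.3.5 may be written differently.

```python
import numpy as np

# Hypothetical matrix consistent with trace 11 and determinant 17;
# the actual matrix of Example 5.3.5 may differ.
A = np.array([[9.0, 1.0],
              [1.0, 2.0]])

# Roots of the characteristic polynomial lambda^2 - 11*lambda + 17
print(np.sort(np.roots([1.0, -11.0, 17.0])))   # ~ [1.8599, 9.1401]

# Cross-check against a direct eigenvalue computation
print(np.sort(np.linalg.eigvals(A)))           # same two values
```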

(ii) Find dominant eigenvector.

The dominant eigenvector is the eigenvector corresponding to λ1. Solving (A − λ1I)v = 0, one finds v = c(1, 0.1401)^T; WLOG pick c = 1, which gives the same result as Example 5.3.5.

(iii) Calculate the convergence ratio ‖qj+1 − v‖/‖qj − v‖.

Skipped.
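Although the computation is skipped here, a minimal power-iteration sketch that tracks the ratio ‖qj+1 − v‖/‖qj − v‖ might look as follows (same assumed matrix as above; v is the dominant eigenvector from (ii)):

```python
import numpy as np

A = np.array([[9.0, 1.0], [1.0, 2.0]])  # assumed matrix, as above
v = np.array([1.0, 0.1401])             # dominant eigenvector from (ii)

q = np.array([1.0, 1.0])                # starting vector q0
err_prev = np.linalg.norm(q - v, np.inf)
for j in range(6):
    q = A @ q
    q = q / np.max(np.abs(q))           # rescale so the largest entry is 1
    err = np.linalg.norm(q - v, np.inf)
    print(j, err / err_prev)            # approaches |lambda2/lambda1| ~ 0.2035
    err_prev = err
# Note: v is only given to four digits, so the printed ratio stalls
# once the error reaches roughly 1e-4.
```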

(iv) Why ‖qj+1 − v‖/‖qj − v‖ = |λ2/λ1|.

A two-by-two matrix has only two eigenvalues, so the error lies entirely in the direction of the single subdominant eigenvector. Referring to the proof of the error convergence ratio estimate at the bottom of page 314, the ratio is then exactly |λ2/λ1|.

(v) Why ‖qj+1 − v‖/‖qj − v‖ ≠ |λ2/λ1| if the matrix is larger.

Referring to the same proof on page 314: for a larger matrix the error has components along several subdominant eigenvectors, so |λ2/λ1| is only an upper bound on the exact convergence ratio.

5.3.17

(a)

The shift-and-invert strategy for the power method computes qj+1 from qj as follows:


(i): q̂j+1 = (A − ρI)^{-1} qj (in practice, solve the linear system (A − ρI)q̂j+1 = qj rather than forming the inverse).

(ii): Pick sj+1 = ‖q̂j+1‖∞, i.e. the element of q̂j+1 of maximum absolute value.

(iii): qj+1 = q̂j+1/sj+1.

Repeat this process starting from q0 for a few steps.
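A minimal sketch of these three steps, under the assumption that A is the 2 × 2 matrix used in 5.3.6 above and that the shift is ρ = 8 (the value appearing in part (b)):

```python
import numpy as np

A = np.array([[9.0, 1.0], [1.0, 2.0]])  # assumed matrix from 5.3.6
rho = 8.0                                # shift, as in part (b)

q = np.array([1.0, 1.0])                 # starting vector q0
for j in range(6):
    q_hat = np.linalg.solve(A - rho * np.eye(2), q)  # (i): (A - rho*I)^{-1} q_j
    s = np.max(np.abs(q_hat))                        # (ii): scaling factor
    q = q_hat / s                                    # (iii): normalize
    print(j, q)
# q converges (up to sign) to the eigenvector of the eigenvalue closest to rho
```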

(b)

To calculate the eigenvalues and the dominant eigenvector of A, do exactly the same calculation as in Problem 5.3.6, and compare the observed convergence ratio ‖qj+1 − v‖/‖qj − v‖ with the theoretical convergence ratio |(λ1 − 8)/(λ2 − 8)|.
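With λ1 ≈ 9.1401, λ2 ≈ 1.8599, and shift ρ = 8, this theoretical ratio works out to |9.1401 − 8|/|1.8599 − 8| = 1.1401/6.1401 ≈ 0.1857, slightly faster than the plain power-method ratio |λ2/λ1| ≈ 0.2035.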

5.4.5

Let D = diag(d1, d2, · · · , dn) and vi = (0, · · · , 0, 1, 0, · · · , 0)^T, where the ith element is one. Since

Dvi = di vi, ∀ i = 1, 2, · · · , n,

each diagonal element di is an eigenvalue with eigenvector vi. Clearly the vi are linearly independent for i = 1 to n, so n linearly independent eigenvectors are found.
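This is immediate to confirm numerically; a minimal check with an arbitrary diagonal matrix:

```python
import numpy as np

# Arbitrary diagonal matrix: eigenvalues are the diagonal entries,
# eigenvectors are the standard basis vectors.
D = np.diag([3.0, -1.0, 5.0])
eigvals, eigvecs = np.linalg.eig(D)
print(eigvals)   # [ 3. -1.  5.]
print(eigvecs)   # identity matrix: columns are e_1, e_2, e_3
```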

5.4.25

Proof.

Since a similarity transformation preserves eigenvalues, for B = S^{-1}AS, if λi is an eigenvalue of A then it is an eigenvalue of B as well. Thus the problem reduces to showing that the dimension of the eigenspace of A corresponding to λi (denote this eigenspace EiA) equals the dimension of the eigenspace of B corresponding to λi (denote this eigenspace EiB). Assume dim(EiA) = d, i.e. any eigenvector v of A corresponding to λi lies in span{w1, · · · , wd}, where {w1, · · · , wd} is a basis of EiA, and can be written as the linear combination v = ∑_{j=1}^{d} cj wj.

Now we show the set {S^{-1}wj}_{j=1,··· ,d} is linearly independent. Consider

∑ cj S^{-1}wj = S^{-1} ∑ cj wj = 0.

Since S^{-1} is nonsingular, this equals zero if and only if ∑ cj wj = 0; by the linear independence of the wj, all coefficients cj are 0. Hence {S^{-1}wj}_{j=1,··· ,d} is a linearly independent set.

Claim: EiB = span{S^{-1}wj}_{j=1,··· ,d}.

(i) Show EiB ⊂ span{S^{-1}wj}_{j=1,··· ,d}.

Pick any element v* ∈ EiB, so Bv* = λi v*. Since SB = AS, we get SBv* = λi(Sv*) = A(Sv*), i.e. Sv* ∈ EiA. Thus it can be written as Sv* = ∑ cj wj, and therefore

v* = S^{-1} ∑ cj wj = ∑ cj (S^{-1}wj) ∈ span{S^{-1}wj}_{j=1,··· ,d}.

(ii) Show EiB ⊃ span{S^{-1}wj}_{j=1,··· ,d}.

For any v* = ∑ cj (S^{-1}wj), we have

Bv* = BS^{-1} ∑ cj wj = S^{-1}A ∑ cj wj = λi S^{-1} ∑ cj wj = λi v*,

thus v* ∈ EiB.

From these two points the claim follows, and it implies dim(EiB) = d.
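The invariance of the eigenspace dimension can also be observed numerically, using dim(Ei) = n − rank(A − λi I); a minimal sketch with an illustrative A and a random S:

```python
import numpy as np

# A has eigenvalue 2 with a two-dimensional eigenspace.
A = np.diag([2.0, 2.0, 5.0])
S = np.random.default_rng(0).standard_normal((3, 3))  # almost surely nonsingular
B = np.linalg.solve(S, A @ S)                         # B = S^{-1} A S

lam, n = 2.0, 3
print(n - np.linalg.matrix_rank(A - lam * np.eye(n)))  # 2
print(n - np.linalg.matrix_rank(B - lam * np.eye(n)))  # 2: same dimension
```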

5.4.26

If A is a semisimple matrix, then each of its eigenvalues has geometric multiplicity equal to its algebraic multiplicity.

Proof.

Assume the algebraic multiplicity of λi is r, i.e. λi occurs r times as a root of the characteristic polynomial. Denote the eigenspace corresponding to λi by Ei, so that the geometric multiplicity of λi is dim(Ei).

(i) Show dim(Ei) ≥ r.

By Theorem 5.4.6, one can decompose the matrix as

A = V D V^{-1}, V = [v1 · · · vn], D = diag(λ1, · · · , λi, · · · , λi, λi+1, · · · ),

where the multiple eigenvalue λi appears r times on the diagonal of D, and the columns of V from vi to vi+r−1 are linearly independent eigenvectors corresponding to λi. Thus there are at least r linearly independent vectors in Ei, i.e. dim(Ei) ≥ r.

(ii) Show dim(Ei) ≤ r.

Assume dim(Ei) = r + d > r, and let {wj}_{j=1,··· ,r+d} be a basis of Ei. Recall that the columns of V other than vi through vi+r−1 are eigenvectors corresponding to other eigenvalues λj ≠ λi; for simplicity, reindex them as {v1, · · · , vn−r}. Now we show the set {w1, · · · , wr+d, v1, · · · , vn−r} is linearly independent. Consider

∑ cj wj + ∑ lk vk = 0.

Arguing by contradiction, suppose the set is linearly dependent; then the coefficients have a nonzero solution, and in particular each sum contains a nonzero coefficient (because the wj among themselves and the vk among themselves are already linearly independent), i.e. ∃ some cj ≠ 0 and some lk ≠ 0. Then we have

∑ cj wj = − ∑ lk vk ≠ 0,


and the left-hand side is in Ei. Therefore consider

A ∑ cj wj = λi ∑ cj wj = −λi ∑ lk vk,

−A ∑ lk vk = − ∑ lk Avk = − ∑ lk λk vk, with λk ≠ λi.

Since the two left-hand sides are equal, subtracting gives

0 = ∑ lk (λk − λi) vk.

However, the right-hand side is nonzero (some lk ≠ 0, each λk − λi ≠ 0, and the vk are linearly independent), which is a contradiction; hence the set {w1, · · · , wr+d, v1, · · · , vn−r} is linearly independent. But Rⁿ contains at most n linearly independent vectors, while we have found (r + d) + (n − r) = n + d of them, which is impossible. Hence the original assumption is false, and dim(Ei) ≤ r.

From (i) and (ii), we have dim(Ei) = r, i.e. the geometric multiplicity equals the algebraic multiplicity.
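A quick numerical illustration (a sketch, using a symmetric and hence semisimple matrix whose eigenvalue 2 has algebraic multiplicity 2):

```python
import numpy as np

# Build a semisimple matrix by conjugating a diagonal matrix with an
# orthogonal Q; the eigenvalue 2 has algebraic multiplicity 2.
Q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((3, 3)))
A = Q @ np.diag([2.0, 2.0, 7.0]) @ Q.T

lam = 2.0
geometric = 3 - np.linalg.matrix_rank(A - lam * np.eye(3))
algebraic = int(np.sum(np.isclose(np.linalg.eigvals(A), lam)))
print(geometric, algebraic)   # 2 2
```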

Remark: If a and b are linearly independent, b and c are linearly independent, and a and c are linearly independent, this does not imply that {a, b, c} is a linearly independent set. Therefore the argument I presented in my section for showing the set is linearly independent was incorrect; one should look at the set as a whole.

5.4.46

Given A = UΣV^T, we have the spectral decompositions

A^T A = V (Σ^T Σ) V^T,

AA^T = U (ΣΣ^T) U^T,

since V and U are orthogonal and Σ^T Σ, ΣΣ^T are diagonal with the squared singular values σi² on the diagonal.
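A minimal numerical check of this relation (the eigenvalues of A^T A should be the squared singular values of A):

```python
import numpy as np

# The eigenvalues of A^T A are the squared singular values of A.
A = np.random.default_rng(2).standard_normal((4, 3))
U, sigma, Vt = np.linalg.svd(A)

print(np.sort(sigma**2))                      # squared singular values
print(np.sort(np.linalg.eigvalsh(A.T @ A)))   # matching eigenvalues
```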

5.4.56

Here I present two ways to prove this.

Proof.

Method 1

According to Schur’s decomposition theorem 5.4.11, any matrix A = UTU?, where U is unitary and T is

upper triangular. It is not hard to see Schur’s decomposition preserves eigenvalues, i.e matrix T has the

same eigenvalues of A. For upper triangular matrix, its eigenvalues are on diag and its determinant equals


the product of the diagonal elements. Thus

det(A) = det(UTU*) = det(U) det(T) det(U*) = det(UU*) det(T) = det(I) det(T) = ∏ λi.

Method 2

Consider the characteristic polynomial P(λ) = det(λI − A). By the fundamental theorem of algebra, we can write P(λ) = ∏ (λ − λi)^{ri}, where the λi are the distinct roots of P(λ) and ri is the algebraic multiplicity of each root λi. Setting λ = 0, we have

(−1)ⁿ det(A) = det(−A) = P(0) = ∏ (−λi)^{ri} = (−1)ⁿ ∏ λi^{ri}.

Hence det(A) = ∏ λi^{ri}, the product of the eigenvalues counted with multiplicity.
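Both arguments are easy to sanity-check numerically on a random matrix:

```python
import numpy as np

# det(A) equals the product of the eigenvalues, counted with multiplicity.
A = np.random.default_rng(3).standard_normal((4, 4))
eigvals = np.linalg.eigvals(A)

# Complex eigenvalues of a real matrix come in conjugate pairs, so the
# product is real (up to roundoff).
print(np.prod(eigvals).real)
print(np.linalg.det(A))        # same value
```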
