Review Practice - University of California,...
EIGENVECTORS, EIGENVALUES, AND DIAGONALIZATION (SOLUTIONS)
1. Review
-omitted-
2. Practice
(1) Diagonalize $\begin{bmatrix} 2 & 3 \\ 4 & 1 \end{bmatrix}$, if possible. If not possible, explain why not.
Answer:
$$A = \begin{bmatrix} 1 & -3 \\ 1 & 4 \end{bmatrix} \begin{bmatrix} 5 & 0 \\ 0 & -2 \end{bmatrix} \begin{bmatrix} 4/7 & 3/7 \\ -1/7 & 1/7 \end{bmatrix}$$
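As a quick sanity check of this factorization (not part of the original handout), we can multiply $PDP^{-1}$ back together with exact rational arithmetic; the `matmul` helper is ad hoc, written for this sketch:

```python
from fractions import Fraction as F

def matmul(X, Y):
    # Multiply two matrices given as lists of rows.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

P    = [[F(1), F(-3)], [F(1), F(4)]]
D    = [[F(5), F(0)],  [F(0), F(-2)]]
Pinv = [[F(4, 7), F(3, 7)], [F(-1, 7), F(1, 7)]]  # inverse of P (det P = 7)

A = matmul(matmul(P, D), Pinv)
print(A)  # recovers A = [[2, 3], [4, 1]]
```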
(2) Diagonalize $\begin{bmatrix} 4 & 0 & 2 \\ 2 & 4 & 2 \\ 0 & 0 & 4 \end{bmatrix}$, if possible. If not possible, explain why not.
Answer:
This matrix is not diagonalizable. It has the single eigenvalue $\lambda = 4$, which occurs with multiplicity three. But the 4-eigenspace is only one-dimensional, so we cannot find an independent set of more than one eigenvector.
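The dimension count can be verified by row-reducing $A - 4I$: its rank is 2, so its nullspace (the 4-eigenspace) has dimension $3 - 2 = 1$. A stdlib-only sketch, with an ad hoc `rank` helper:

```python
from fractions import Fraction as F

def rank(M):
    # Rank via Gaussian elimination over the rationals.
    M = [row[:] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[F(4), F(0), F(2)], [F(2), F(4), F(2)], [F(0), F(0), F(4)]]
AmI = [[A[i][j] - (F(4) if i == j else 0) for j in range(3)] for i in range(3)]
print(rank(AmI))  # 2, so the 4-eigenspace has dimension 3 - 2 = 1
```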
(3) Find examples of each of the following:
(a) A 2 × 2 matrix with no real eigenvalues.
$\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$
(b) A 2 × 2 matrix with exactly one real eigenvalue, whose eigenspace is two-dimensional.
$\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$
(c) A 2 × 2 matrix with exactly one real eigenvalue, whose eigenspace is one-dimensional.
$\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$
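One quick way to check the eigenvalue counts (a sketch, not from the handout): a 2 × 2 matrix has characteristic polynomial $\lambda^2 - (\mathrm{tr}\,A)\lambda + \det A$, so the sign of its discriminant tells us how many real eigenvalues there are.

```python
def disc(M):
    # Discriminant of the characteristic polynomial of a 2x2 matrix.
    (a, b), (c, d) = M
    return (a + d) ** 2 - 4 * (a * d - b * c)

print(disc([[0, -1], [1, 0]]))  # -4: negative, so no real eigenvalues
print(disc([[0, 0], [0, 0]]))   # 0: one repeated real eigenvalue (here 0)
print(disc([[1, 1], [0, 1]]))   # 0: one repeated real eigenvalue (here 1)
```

A zero discriminant only says the eigenvalue is repeated; distinguishing cases (b) and (c) still requires computing the eigenspace dimension.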
(4) Is there a basis for $P_2$ with respect to which the differential operator $\frac{d}{dx} : P_2 \to P_2$ is diagonal, i.e., is represented by a diagonal matrix? Explain.
Answer:
No. The standard matrix for $\frac{d}{dx}$ is
$$\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{bmatrix}$$
Its only eigenvalue is zero, and the 0-eigenspace is the same as the nullspace of the above matrix. [NB: the 0-eigenspace is always the same as the nullspace.] This nullspace is 1-dimensional, since there are two pivots in the matrix. Therefore we cannot find three independent eigenvectors, so the matrix is not diagonalizable.
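To see the matrix in action (a sketch; the `apply` helper is mine): in the basis $\{1, x, x^2\}$, a polynomial $a + bx + cx^2$ is the coefficient vector $(a, b, c)$, and the matrix sends it to the coefficients of its derivative.

```python
# d/dx on P2 in the basis {1, x, x^2}: (a, b, c) -> (b, 2c, 0)
D = [[0, 1, 0], [0, 0, 2], [0, 0, 0]]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# p(x) = 3 + 5x + 7x^2  ->  p'(x) = 5 + 14x
print(apply(D, [3, 5, 7]))   # [5, 14, 0]

# Dv = 0 forces b = 0 and c = 0, so the nullspace is spanned by (1, 0, 0),
# i.e. the constant polynomials -- one dimension, not three.
print(apply(D, [1, 0, 0]))   # [0, 0, 0]
```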
3. True or False
(1) An $n \times n$ matrix always has $n$ distinct eigenvectors. TRUE
A square matrix always has at least one nonzero eigenvector. [Note that we have to allow complex eigenvectors (and eigenvalues) for this to be true; but we do allow these.] Any nonzero scalar multiple of this vector is also an eigenvector. Therefore there are actually infinitely many distinct eigenvectors, hence there are certainly $n$ distinct eigenvectors.
(2) An $n \times n$ matrix always has $n$ independent eigenvectors. FALSE
Only diagonalizable $n \times n$ matrices have $n$ independent eigenvectors.
(3) If $v_1$ and $v_2$ form a basis for the nullspace of $A - 17I$, then for any $x$ such that $Ax = 17x$, we have $x = c_1v_1 + c_2v_2$ for some scalars $c_1$ and $c_2$. TRUE
The nullspace of $A - 17I$ is the same as the 17-eigenspace of $A$. If $Ax = 17x$, then $x$ is in the 17-eigenspace of $A$, hence can be written as a linear combination of $v_1$ and $v_2$.
(4) If $P$ is invertible and $AP = PD$, where $D$ is diagonal, then the columns of $P$ are all eigenvectors of $A$. TRUE
Let $P = [v_1 \ldots v_n]$, and let the diagonal entries of $D$ be $d_1, \ldots, d_n$. Then $AP = [Av_1 \ldots Av_n]$, while $PD = [d_1v_1 \ldots d_nv_n]$. So $AP$ and $PD$ are equal if and only if their columns are the same, which is to say $Av_1 = d_1v_1$, and so on. In other words, they are equal if and only if each $v_i$ is an eigenvector of $A$ (with eigenvalue $d_i$).
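The column-by-column identity can be checked on the data from Practice problem (1), where the columns of $P$ were $(1,1)$ and $(-3,4)$; the `apply` helper is mine, not from the handout:

```python
# A from Practice problem (1), and the columns of its diagonalizing P
A = [[2, 3], [4, 1]]
v1, v2 = (1, 1), (-3, 4)

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

print(apply(A, v1))  # [5, 5]  = 5 * v1, matching d1 = 5
print(apply(A, v2))  # [6, -8] = -2 * v2, matching d2 = -2
```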
(5) An $n \times n$ matrix is diagonalizable if and only if the sum of the dimensions of its eigenspaces is $n$. TRUE
The matrix is diagonalizable if and only if there exist $n$ linearly independent eigenvectors. A maximal independent set of eigenvectors can be found by taking a basis for each eigenspace and aggregating them into one set. This set will consist of $n$ vectors if and only if the dimensions of the eigenspaces add up to $n$.
(6) Two similar matrices have the same eigenvalues. [Remember that $M$ and $N$ are similar if $M = PNP^{-1}$ for some invertible matrix $P$.] TRUE
If $M$ and $N$ are similar, then $M = PNP^{-1}$ for some invertible matrix $P$. Now suppose $\lambda$ is an eigenvalue of $M$, say $Mv = \lambda v$. Then $PNP^{-1}v = \lambda v$ implies $N(P^{-1}v) = \lambda(P^{-1}v)$, showing that $P^{-1}v$ is an eigenvector of $N$ with eigenvalue $\lambda$. This shows that every eigenvalue of $M$ is also an eigenvalue of $N$. The other direction is similar.
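A small numerical illustration (a sketch, reusing the matrices from Practice problem (1); `matmul` is an ad hoc helper): for 2 × 2 matrices the trace and determinant determine the characteristic polynomial, so similar matrices must agree on both.

```python
from fractions import Fraction as F

def matmul(X, Y):
    # Multiply two matrices given as lists of rows.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# Conjugate a diagonal N by an invertible P to get a similar matrix M.
N    = [[F(5), F(0)],  [F(0), F(-2)]]
P    = [[F(1), F(-3)], [F(1), F(4)]]
Pinv = [[F(4, 7), F(3, 7)], [F(-1, 7), F(1, 7)]]  # inverse of P
M = matmul(matmul(P, N), Pinv)

trace = M[0][0] + M[1][1]
det   = M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(trace, det)  # 3 -10, the same trace and determinant as N
```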
(7) If a matrix has the three eigenvalues 1, 2 and 3, and if $v_1$ is a 1-eigenvector, $v_2$ a 2-eigenvector, and $v_3$ a 3-eigenvector for this matrix, then $\{v_1, v_2, v_3\}$ is linearly independent. TRUE
"Eigenvectors from different eigenspaces form independent sets" (proof omitted).
(8) The eigenvalues of a matrix cannot tell you whether the matrix is invertible or not. FALSE
A matrix is invertible if and only if it does not have 0 as an eigenvalue. Reason: the 0-eigenspace is the nullspace.
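A concrete illustration (my example, not from the handout): a matrix with determinant 0 has a nontrivial nullspace, which is exactly a 0-eigenspace.

```python
# A has determinant 0, so 0 is an eigenvalue and A is not invertible.
A = [[1, 2], [2, 4]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
print(det)  # 0

# (2, -1) spans the nullspace of A, i.e. it is a 0-eigenvector.
v = (2, -1)
print([sum(A[i][j] * v[j] for j in range(2)) for i in range(2)])  # [0, 0]
```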
(9) The matrix $\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$ has two distinct eigenvalues. TRUE
The eigenvalues are complex numbers: $\lambda = \pm i$.
(10) If $A = PDP^{-1}$, and the columns of an $n \times n$ matrix $P$ form the basis $B$ for $\mathbb{R}^n$, then $D$ is the matrix for the linear transformation $T(x) = Ax$ with respect to the basis $B$. TRUE
$P$ is invertible, so we can think of it as a change of basis matrix, from $B$ to the standard basis. Then the equation $A = PDP^{-1}$ expresses the following change of basis diagram:
    R^n (std basis) --A--> R^n (std basis)
         |  P^{-1}               ^  P
         v                       |
    R^n (basis B)  --D--> R^n (basis B)
(11) If $A = MDM^{-1}$, and also $A = ND'N^{-1}$, with $D$ and $D'$ both diagonal, then $D$ and $D'$ have the same entries, but possibly in different orders. TRUE
According to the diagonalization procedure, the diagonal entries of both $D$ and $D'$ must be eigenvalues of $A$. However, they might appear in different orders if the bases of eigenvectors used to construct $M$ and $N$ are ordered differently.
(12) If the 2 × 2 matrix $A$ represents a reflection across the line $L$ (through the origin) in $\mathbb{R}^2$, then the line perpendicular to $L$ is an eigenspace of $A$. TRUE
Points on the line perpendicular to $L$ stay on that line when reflected across $L$. In fact, if $v$ is on this perpendicular, then its image under the reflection is $-v$. So this line is a $(-1)$-eigenspace for $A$.
4. Towards the Jordan Normal Form (optional)
Not every matrix can be diagonalized, but we can always get "pretty close" to a diagonal matrix by choosing the right basis. Here are a few ideas related to this claim.
(1) The matrix $J_2 = \begin{bmatrix} 2 & 1 \\ 0 & 2 \end{bmatrix}$ is a 2 × 2 Jordan block. Find its eigenvalue, and determine the corresponding eigenspace.
(2) Hopefully you found that the eigenspace is only one-dimensional. Therefore $J_2$ is not diagonalizable, since there is no basis for $\mathbb{R}^2$ consisting of eigenvectors for $J_2$. However, we can make a basis of "generalized eigenvectors". The usual eigenvectors $v$ satisfy $(A - \lambda I)v = 0$. A generalized eigenvector is a vector $w$ such that $(A - \lambda I)^k w = 0$ for some positive integer $k$. In the case of $J_2$, try to find a generalized eigenvector $w$ such that $(A - 2I)^2 w = 0$.
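One candidate worth checking (this particular choice of $w$ is mine, and the `apply` helper is ad hoc): since $N = J_2 - 2I$ squares to the zero matrix, any vector is a generalized eigenvector, and $w = (0, 1)$ maps onto an ordinary eigenvector.

```python
# N = J2 - 2I, where J2 = [[2, 1], [0, 2]]; N is nilpotent with N^2 = 0
N = [[0, 1], [0, 0]]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

w = (0, 1)                    # a candidate generalized eigenvector
print(apply(N, w))            # [1, 0] -- an ordinary eigenvector of J2
print(apply(N, apply(N, w)))  # [0, 0], so (A - 2I)^2 w = 0
```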
(3) Now consider the 3 × 3 Jordan block $J_3 = \begin{bmatrix} 4 & 1 & 0 \\ 0 & 4 & 1 \\ 0 & 0 & 4 \end{bmatrix}$. It has only one eigenvalue, 4, and the 4-eigenspace is one-dimensional. Find a basis $\{v_1\}$ for this eigenspace, and extend it to a basis $\{v_1, v_2, v_3\}$ consisting of generalized eigenvectors for $J_3$. That is to say, $(A - 4I)^k v_k = 0$ for $k = 1, 2,$ and $3$. Equivalently, we have $(A - 4I)v_1 = 0$, $(A - 4I)v_2 = v_1$, and $(A - 4I)v_3 = v_2$.
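The chain conditions can be verified directly for one natural choice of basis (the standard basis vectors; this choice and the `apply` helper are mine, not prescribed by the handout):

```python
# N = J3 - 4I, where J3 = [[4, 1, 0], [0, 4, 1], [0, 0, 4]]
N = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

v1, v2, v3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(apply(N, v1))  # [0, 0, 0]: v1 is a genuine eigenvector
print(apply(N, v2))  # [1, 0, 0] = v1
print(apply(N, v3))  # [0, 1, 0] = v2
```

So $N$ shifts each basis vector one step down the chain $v_3 \mapsto v_2 \mapsto v_1 \mapsto 0$.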
(4) Why do the generalized eigenvectors produced in this way always form a basis? Prove that if $v_1, \ldots, v_n \in \mathbb{R}^n$ with $v_1 \neq 0$, and $Av_1 = 0$, $Av_2 = v_1$, $Av_3 = v_2$, etc., then $\{v_1, \ldots, v_n\}$ is an independent set.
(5) You have shown how to extend an eigenbasis for a Jordan block matrix as above to a basis for $\mathbb{R}^n$. Now prove that if $A$ is any matrix (say 3 × 3 for simplicity) with only one eigenvalue $\lambda$, with the $\lambda$-eigenspace one-dimensional, then there is a matrix $P$ such that $A = PJP^{-1}$, where here
$$J = \begin{bmatrix} \lambda & 1 & 0 \\ 0 & \lambda & 1 \\ 0 & 0 & \lambda \end{bmatrix}.$$
[Hint: you would know what the columns of $P$ were if $A$ had 3 independent eigenvectors; what should they be in this more general case?] This is not a diagonalization of $A$, but it's "pretty close".