Source: math.aalto.fi/opetus/numsym/04/matlab/moler/pdf/eigs.pdf (2004-01-21)
Chapter 10

Eigenvalues and Singular Values

This chapter is about eigenvalues and singular values of matrices. Computational algorithms and sensitivity to perturbations are both discussed.

10.1 Eigenvalue and Singular Value Decompositions

An eigenvalue and eigenvector of a square matrix A are a scalar λ and a nonzero vector x so that

Ax = λx

A singular value and pair of singular vectors of a square or rectangular matrix A are a nonnegative scalar σ and two nonzero vectors u and v so that

Av = σu

A^H u = σv

The superscript on A^H stands for Hermitian transpose and denotes the complex conjugate transpose of a complex matrix. If the matrix is real, then A^T denotes the same matrix. In Matlab these transposed matrices are denoted by A'.

The term eigenvalue is a partial translation of the German “Eigenwert.” A complete translation would be something like “own value” or “characteristic value,” but these are rarely used. The term singular value relates to the distance between a matrix and the set of singular matrices.

Eigenvalues play an important role in situations where the matrix is a transformation from one vector space onto itself. Systems of linear ordinary differential equations are the primary examples. The values of λ can correspond to frequencies of vibration, or critical values of stability parameters, or energy levels of atoms. Singular values play an important role where the matrix is a transformation from one vector space to a different vector space, possibly with a different dimension. Systems of over- or underdetermined algebraic equations are the primary examples.

The definitions of eigenvectors and singular vectors do not specify their normalization. An eigenvector x, or a pair of singular vectors u and v, can be scaled by any nonzero factor without changing any other important properties. Eigenvectors of symmetric matrices are usually normalized to have Euclidean length equal to one, ‖x‖_2 = 1. On the other hand, the eigenvectors of nonsymmetric matrices often have different normalizations in different contexts. Singular vectors are almost always normalized to have Euclidean length equal to one, ‖u‖_2 = ‖v‖_2 = 1. You can still multiply eigenvectors, or pairs of singular vectors, by −1 without changing their lengths.

The eigenvalue-eigenvector equation for a square matrix can be written

(A − λI)x = 0, x ≠ 0

This implies that A− λI is singular and hence that

det(A− λI) = 0

This definition of an eigenvalue, which does not directly involve the corresponding eigenvector, is the characteristic equation or characteristic polynomial of A. The degree of the polynomial is the order of the matrix. This implies that an n-by-n matrix has n eigenvalues, counting multiplicities. Like the determinant itself, the characteristic polynomial is useful in theoretical considerations and hand calculations, but does not provide a sound basis for robust numerical software.

Let λ_1, λ_2, . . . , λ_n be the eigenvalues of a matrix A, let x_1, x_2, . . . , x_n be a set of corresponding eigenvectors, let Λ denote the n-by-n diagonal matrix with the λ_j on the diagonal, and let X denote the n-by-n matrix whose jth column is x_j. Then

AX = XΛ

It is necessary to put Λ on the right in the second expression so that each column of X is multiplied by its corresponding eigenvalue. Now make a key assumption that is not true for all matrices: assume that the eigenvectors are linearly independent. Then X^-1 exists and

A = XΛX^-1

with nonsingular X. This is known as the eigenvalue decomposition of the matrix A. If it exists, it allows us to investigate the properties of A by analyzing the diagonal matrix Λ. For example, repeated matrix powers can be expressed in terms of powers of scalars.

A^p = XΛ^p X^-1

If the eigenvectors of A are not linearly independent, then such a diagonal decomposition does not exist and the powers of A have a more complicated behavior.
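As a concrete sketch of these formulas, here is the decomposition and the power formula in NumPy rather than this book's Matlab; the 2-by-2 matrix is an arbitrary illustrative example.

```python
import numpy as np

# An arbitrary example matrix; lam holds the eigenvalues and the columns
# of X are the corresponding eigenvectors.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, X = np.linalg.eig(A)
Lam = np.diag(lam)

# AX = X Lam, and with linearly independent eigenvectors, A = X Lam X^-1.
assert np.allclose(A @ X, X @ Lam)
assert np.allclose(A, X @ Lam @ np.linalg.inv(X))

# Repeated matrix powers reduce to powers of scalars: A^p = X Lam^p X^-1.
p = 5
Ap = X @ np.diag(lam**p) @ np.linalg.inv(X)
assert np.allclose(Ap, np.linalg.matrix_power(A, p))
```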

If T is any nonsingular matrix, then

B = T^-1 AT

is known as a similarity transformation, and A and B are said to be similar. If Ax = λx and x = Ty, then By = λy. In other words, a similarity transformation preserves eigenvalues. The eigenvalue decomposition is an attempt to find a similarity transformation to diagonal form.
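A quick numerical check of this invariance, sketched in NumPy with arbitrary choices of A and T:

```python
import numpy as np

# A similarity transformation B = T^-1 A T preserves eigenvalues.
# A and T below are arbitrary illustrations, with T nonsingular.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
T = np.array([[2.0, 1.0],
              [1.0, 2.0]])       # det(T) = 3, so T is nonsingular

B = np.linalg.inv(T) @ A @ T
eigA = np.sort(np.linalg.eigvals(A))
eigB = np.sort(np.linalg.eigvals(B))
assert np.allclose(eigA, eigB)   # same eigenvalues, different basis
```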


Written in matrix form, the defining equations for singular values and vectors are

AV = UΣ
A^H U = V Σ^H

Here Σ is a matrix the same size as A that is zero except possibly on its main diagonal. It turns out that singular vectors can always be chosen to be perpendicular to each other, so the matrices U and V, whose columns are the normalized singular vectors, satisfy U^H U = I and V^H V = I. In other words, U and V are orthogonal if they are real, or unitary if they are complex. Consequently

A = UΣV^H

with diagonal Σ and orthogonal or unitary U and V. This is known as the singular value decomposition, or SVD, of the matrix A.

In abstract linear algebra terms, eigenvalues are relevant if a square, n-by-n matrix A is thought of as mapping n-dimensional space onto itself. We try to find a basis for the space so that the matrix becomes diagonal. This basis might be complex, even if A is real. In fact, if the eigenvectors are not linearly independent, such a basis does not even exist. The singular value decomposition is relevant if a possibly rectangular, m-by-n matrix A is thought of as mapping n-space onto m-space. We try to find one change of basis in the domain and a usually different change of basis in the range so that the matrix becomes diagonal. Such bases always exist and are always real if A is real. In fact, the transforming matrices are orthogonal or unitary, so they preserve lengths and angles and do not magnify errors.

Figure 10.1. Full and economy SVD


If A is m-by-n with m larger than n, then in the full SVD, U is a large square m-by-m matrix. The last m − n columns of U are “extra”; they are not needed to reconstruct A. A second version of the SVD that saves computer memory if A is rectangular is known as the economy-sized SVD. In the economy version, only the first n columns of U and first n rows of Σ are computed. The matrix V is the same n-by-n matrix in both decompositions. Figure 10.1 shows the shapes of the various matrices in the two versions of the SVD. Both decompositions can be written A = UΣV^H, even though the U and Σ in the economy decomposition are submatrices of the ones in the full decomposition.
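In NumPy, used here as a stand-in for Matlab's svd(A) and svd(A,0), the two versions correspond to the full_matrices flag; the 5-by-3 matrix is an arbitrary example.

```python
import numpy as np

# An arbitrary rectangular matrix with m = 5 > n = 3.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))

# Full SVD: U is m-by-m, Vh is n-by-n.
U, s, Vh = np.linalg.svd(A, full_matrices=True)
assert U.shape == (5, 5) and Vh.shape == (3, 3)

# Economy SVD: only the first n columns of U are computed.
Ue, se, Vhe = np.linalg.svd(A, full_matrices=False)
assert Ue.shape == (5, 3)

# Both reconstruct A; the economy factors are submatrices of the full ones.
assert np.allclose(A, Ue @ np.diag(se) @ Vhe)
assert np.allclose(A, U[:, :3] @ np.diag(s) @ Vh)
```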

10.2 A Small Example

An example of the eigenvalue and singular value decompositions of a small square matrix is provided by one of the test matrices from the Matlab gallery.

A = gallery(3)

The matrix is

A =

  −149   −50  −154
   537   180   546
   −27    −9   −25

This matrix was constructed in such a way that the characteristic polynomial factors nicely.

det(A − λI) = λ^3 − 6λ^2 + 11λ − 6 = (λ − 1)(λ − 2)(λ − 3)

Consequently the three eigenvalues are λ1 = 1, λ2 = 2, and λ3 = 3, and

Λ =

1  0  0
0  2  0
0  0  3

The matrix of eigenvectors can be normalized so that its elements are all integers.

X =

 1  −4    7
−3   9  −49
 0   1    9

It turns out that the inverse of X also has integer entries.

X−1 =

130  43  133
 27   9   28
 −3  −1   −3

These matrices provide the eigenvalue decomposition of our example.

A = XΛX^-1
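The same decomposition can be verified numerically. The following NumPy sketch copies in the gallery(3) entries, since NumPy has no gallery function:

```python
import numpy as np

# The gallery(3) matrix, entered directly.
A = np.array([[-149.0, -50.0, -154.0],
              [ 537.0, 180.0,  546.0],
              [ -27.0,  -9.0,  -25.0]])

# The computed eigenvalues agree with 1, 2, 3.
lam = np.sort(np.linalg.eigvals(A).real)
assert np.allclose(lam, [1.0, 2.0, 3.0])

# The integer eigenvector matrix X from the text, with Lam = diag(1,2,3),
# reproduces A = X Lam X^-1.
X = np.array([[ 1.0, -4.0,   7.0],
              [-3.0,  9.0, -49.0],
              [ 0.0,  1.0,   9.0]])
Lam = np.diag([1.0, 2.0, 3.0])
assert np.allclose(A, X @ Lam @ np.linalg.inv(X))
```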


The singular value decomposition of this matrix cannot be expressed so neatly with small integers. The singular values are the positive roots of the equation

σ^6 − 668737σ^4 + 4096316σ^2 − 36 = 0

but this equation does not factor nicely. The Symbolic Toolbox statement

svd(sym(A))

returns exact formulas for the singular values, but the overall length of the result is 822 characters. So, we compute the SVD numerically.

[U,S,V] = svd(A)

produces

U =
   -0.2691   -0.6798    0.6822
    0.9620   -0.1557    0.2243
   -0.0463    0.7167    0.6959

S =
  817.7597         0         0
         0    2.4750         0
         0         0    0.0030

V =
    0.6823   -0.6671    0.2990
    0.2287   -0.1937   -0.9540
    0.6944    0.7193    0.0204

The expression U*S*V' generates the original matrix to within roundoff error. For gallery(3), notice the big difference between the eigenvalues, 1, 2, and 3, and the singular values, 817, 2.47, and .003. This is related, in a way that we will make more precise later, to the fact that this example is very far from being a symmetric matrix.
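The spread of these singular values can be checked numerically; a NumPy sketch of the same computation:

```python
import numpy as np

# The gallery(3) matrix again, entered directly.
A = np.array([[-149.0, -50.0, -154.0],
              [ 537.0, 180.0,  546.0],
              [ -27.0,  -9.0,  -25.0]])

s = np.linalg.svd(A, compute_uv=False)
# Largest singular value near 817.76, smallest near 0.0030.
assert abs(s[0] - 817.7597) < 1e-3
assert abs(s[2] - 0.0030) < 1e-4
# The product of the singular values equals |det(A)| = 6,
# the product of the eigenvalues 1, 2, 3.
assert np.allclose(np.prod(s), 6.0)
```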

10.3 eigshow

The function eigshow is available in the Matlab demos directory. The input to eigshow is a real, 2-by-2 matrix A, or you can choose an A from a pull-down list in the title. The default A is

A =
   1/4  3/4
   1    1/2

Initially, eigshow plots the unit vector x = [1, 0]', as well as the vector Ax, which starts out as the first column of A. You can then use your mouse to move x, shown in green, around the unit circle. As you move x, the resulting Ax, shown in blue, also moves. The first four subplots in figure 10.2 show intermediate steps as x traces out a green unit circle. What is the shape of the resulting orbit of Ax? An important, and nontrivial, theorem from linear algebra tells us that the blue curve is an ellipse. eigshow provides a “proof by GUI” of this theorem.

Figure 10.2. eigshow

The caption for eigshow says “Make Ax parallel to x.” For such a direction x, the operator A is simply a stretching or magnification by a factor λ. In other words, x is an eigenvector and the length of Ax is the corresponding eigenvalue.

The last two subplots in figure 10.2 show the eigenvalues and eigenvectors of our 2-by-2 example. The first eigenvalue is positive, so Ax lies on top of the eigenvector x. The length of Ax is the corresponding eigenvalue; it happens to be 5/4 in this example. The second eigenvalue is negative, so Ax is parallel to x, but points in the opposite direction. The length of Ax is 1/2, and the corresponding eigenvalue is actually −1/2.

Figure 10.3. eigshow(svd)

You might have noticed that the two eigenvectors are not the major and minor axes of the ellipse. They would be if the matrix were symmetric. The default eigshow matrix is close to, but not exactly equal to, a symmetric matrix. For other matrices, it may not be possible to find a real x so that Ax is parallel to x. These examples, which we pursue in the exercises, demonstrate that 2-by-2 matrices can have fewer than two real eigenvectors.

The axes of the ellipse do play a key role in the singular value decomposition. The results produced by the “svd” mode of eigshow are shown in figure 10.3. Again, the mouse moves x around the unit circle, but now a second unit vector, y, follows x, staying perpendicular to it. The resulting Ax and Ay traverse the ellipse, but are not usually perpendicular to each other. The goal is to make them perpendicular. If they are, they form the axes of the ellipse. The vectors x and y are the columns of V in the SVD, the vectors Ax and Ay are multiples of the columns of U, and the lengths of the axes are the singular values.
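The geometry of the svd mode follows from Av = σu. A NumPy sketch with the default eigshow matrix:

```python
import numpy as np

# The default eigshow matrix.
A = np.array([[0.25, 0.75],
              [1.00, 0.50]])

U, s, Vh = np.linalg.svd(A)
V = Vh.T

# Each right singular vector v_j is mapped to sigma_j u_j, so the images
# of the columns of V are the perpendicular axes of the ellipse.
for j in range(2):
    assert np.allclose(A @ V[:, j], s[j] * U[:, j])
assert abs((A @ V[:, 0]) @ (A @ V[:, 1])) < 1e-12
```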

10.4 Characteristic Polynomial

Let A be the 20-by-20 diagonal matrix with 1, 2, . . . , 20 on the diagonal. Clearly, the eigenvalues of A are its diagonal elements. However, the characteristic polynomial, det(A − λI), turns out to be

λ^20 − 210λ^19 + 20615λ^18 − 1256850λ^17 + 53327946λ^16
− 1672280820λ^15 + 40171771630λ^14 − 756111184500λ^13
+ 11310276995381λ^12 − 135585182899530λ^11
+ 1307535010540395λ^10 − 10142299865511450λ^9
+ 63030812099294896λ^8 − 311333643161390640λ^7
+ 1206647803780373360λ^6 − 3599979517947607200λ^5
+ 8037811822645051776λ^4 − 12870931245150988800λ^3
+ 13803759753640704000λ^2 − 8752948036761600000λ
+ 2432902008176640000

The coefficient of −λ^19 is 210, which is the sum of the eigenvalues. The coefficient of λ^0, the constant term, is 20!, which is the product of the eigenvalues. The other coefficients are various sums of products of the eigenvalues.

We have displayed all the coefficients to emphasize that doing any floating-point computation with them is likely to introduce large roundoff errors. Merely representing the coefficients as IEEE floating-point numbers changes five of them. For example, the last three digits of the coefficient of λ^4 change from 776 to 392. To sixteen significant digits, the exact roots of the polynomial obtained by representing the coefficients in floating-point are

 1.00000000000000
 2.00000000000096
 2.99999999986640
 4.00000000495944
 4.99999991473414
 6.00000084571661
 6.99999455544845
 8.00002443256894
 8.99992001186835
10.00019696490537
10.99962843024064
12.00054374363591
12.99938073455790
14.00054798867380
14.99962658217055
16.00019208303847
16.99992773461773
18.00001875170604
18.99999699774389
20.00000022354640

We see that just storing the coefficients in the characteristic polynomial as double-precision floating-point numbers changes the computed values of some of the eigenvalues in the fifth significant digit.
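The same phenomenon is easy to reproduce; a NumPy sketch using poly and roots, the counterparts of the Matlab functions of the same names:

```python
import numpy as np

# Build the power-form coefficients of the polynomial with roots
# 1, 2, ..., 20, then compute the roots back.
exact = np.arange(1.0, 21.0)
coeffs = np.poly(exact)              # coefficients of prod (x - k)
computed = np.sort(np.roots(coeffs).real)

# Just representing the coefficients in floating point (plus the roundoff
# in the root finder) perturbs the middle roots noticeably.
err = np.abs(computed - exact)
assert err.max() > 1e-8              # far worse than eps-level accuracy
assert err.max() < 1.0               # but the roots are still recognizable
```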

This particular polynomial was introduced by J. H. Wilkinson around 1960. His perturbation of the polynomial was different from ours, but his point was the same, namely that representing a polynomial in its power form is an unsatisfactory way to characterize either the roots of the polynomial or the eigenvalues of the corresponding matrix.


10.5 Symmetric and Hermitian Matrices

A real matrix is symmetric if it is equal to its transpose, A = A^T. A complex matrix is Hermitian if it is equal to its complex conjugate transpose, A = A^H. The eigenvalues and eigenvectors of a real symmetric matrix are real. Moreover, the matrix of eigenvectors can be chosen to be orthogonal. Consequently, if A is real and A = A^T, then its eigenvalue decomposition is

A = XΛX^T

with X^T X = I = XX^T. The eigenvalues of a complex Hermitian matrix turn out to be real, although the eigenvectors must be complex. Moreover, the matrix of eigenvectors can be chosen to be unitary. Consequently, if A is complex and A = A^H, then its eigenvalue decomposition is

A = XΛX^H

with Λ real and X^H X = I = XX^H.

For symmetric and Hermitian matrices, the eigenvalues and singular values are obviously closely related. A nonnegative eigenvalue, λ ≥ 0, is also a singular value, σ = λ. The corresponding vectors are equal to each other, u = v = x. A negative eigenvalue, λ < 0, must reverse its sign to become a singular value, σ = |λ|. One of the corresponding singular vectors is the negative of the other, u = −v = x.
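A quick numerical sketch of this relationship, using an arbitrary symmetric matrix in NumPy:

```python
import numpy as np

# An arbitrary symmetric example matrix.
A = np.array([[1.0,  2.0],
              [2.0, -3.0]])

lam = np.linalg.eigvalsh(A)                # real eigenvalues
s = np.linalg.svd(A, compute_uv=False)     # singular values, descending

# For symmetric A, the singular values are the absolute values
# of the eigenvalues.
assert np.allclose(np.sort(np.abs(lam))[::-1], s)
```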

10.6 Eigenvalue Sensitivity and Accuracy

The eigenvalues of some matrices are sensitive to perturbations. Small changes in the matrix elements can lead to large changes in the eigenvalues. Roundoff errors introduced during the computation of eigenvalues with floating-point arithmetic have the same effect as perturbations in the original matrix. Consequently, these roundoff errors are magnified in the computed values of sensitive eigenvalues.

To get a rough idea of this sensitivity, assume that A has a full set of linearly independent eigenvectors and use the eigenvalue decomposition

A = XΛX^-1

Rewrite this as

Λ = X^-1 AX

Now let δA denote some change in A, caused by roundoff error or any other kindof perturbation. Then

Λ + δΛ = X^-1 (A + δA)X

Hence

δΛ = X^-1 δA X

Taking matrix norms,

‖δΛ‖ ≤ ‖X^-1‖ ‖X‖ ‖δA‖ = κ(X) ‖δA‖


where κ(X) is the matrix condition number introduced in the Linear Equations chapter. Note that the key factor is the condition of X, the matrix of eigenvectors, not the condition of A itself.

This simple analysis tells us that, in terms of matrix norms, a perturbation ‖δA‖ can be magnified by a factor as large as κ(X) in ‖δΛ‖. However, since δΛ is usually not a diagonal matrix, this analysis does not immediately say how much the eigenvalues themselves may be affected. Nevertheless, it leads to the correct overall conclusion:

The sensitivity of the eigenvalues is estimated by the condition numberof the matrix of eigenvectors.

You can use the function condest to estimate the condition number of the eigenvector matrix. For example,

A = gallery(3)
[X,lambda] = eig(A);
condest(X)

yields

1.2002e+003

A perturbation in gallery(3) could result in perturbations in its eigenvalues that are 1.2·10^3 times as large. This says that the eigenvalues of gallery(3) are slightly badly conditioned.
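The same estimate can be sketched in NumPy; np.linalg.cond computes the 2-norm condition number rather than condest's 1-norm estimate, so the value differs somewhat from 1.2002e+003:

```python
import numpy as np

# The gallery(3) matrix, entered directly.
A = np.array([[-149.0, -50.0, -154.0],
              [ 537.0, 180.0,  546.0],
              [ -27.0,  -9.0,  -25.0]])

lam, X = np.linalg.eig(A)
kappa = np.linalg.cond(X)    # 2-norm condition number of the eigenvectors
# On the order of 10^3, in rough agreement with condest.
assert 1e2 < kappa < 1e6
```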

A more detailed analysis involves the left eigenvectors, which are row vectors y^H that satisfy

y^H A = λy^H

In order to investigate the sensitivity of an individual eigenvalue, assume that A varies with a perturbation parameter and let Ȧ denote the derivative with respect to that parameter. Differentiate both sides of the equation

Ax = λx

to get

Ȧx + Aẋ = λ̇x + λẋ

Multiply through by the left eigenvector.

y^H Ȧx + y^H Aẋ = y^H λ̇x + y^H λẋ

The second terms on each side of this equation are equal, so

λ̇ = (y^H Ȧx)/(y^H x)

Taking norms,

|λ̇| ≤ (‖y‖ ‖x‖ / |y^H x|) ‖Ȧ‖


Define the eigenvalue condition number to be

κ(λ,A) = ‖y‖ ‖x‖ / |y^H x|

Then

|λ̇| ≤ κ(λ,A) ‖Ȧ‖

In other words, κ(λ,A) is the magnification factor relating a perturbation in the matrix A to the resulting perturbation in an eigenvalue λ. Notice that κ(λ,A) is independent of the normalization of the left and right eigenvectors, y and x, and that

κ(λ,A) ≥ 1

If you have already computed the matrix X whose columns are the right eigenvectors, one way to compute the left eigenvectors is to let

Y^H = X^-1

Then, since

Y^H A = ΛY^H

the rows of Y^H are the left eigenvectors. In this case, the left eigenvectors are normalized so that

Y^H X = I

so the denominator in κ(λ,A) is y^H x = 1 and

κ(λ,A) = ‖y‖ ‖x‖

Since ‖x‖ ≤ ‖X‖ and ‖y‖ ≤ ‖X^-1‖, we have

κ(λ,A) ≤ κ(X)

The condition number of the eigenvector matrix is an upper bound for the individual eigenvalue condition numbers.

The Matlab function condeig computes eigenvalue condition numbers. Continuing with the gallery(3) example,

A = gallery(3)
lambda = eig(A)
kappa = condeig(A)

yields

lambda =
    1.0000
    2.0000
    3.0000


kappa =
  603.6390
  395.2366
  219.2920

This indicates that λ_1 = 1 is slightly more sensitive than λ_2 = 2 or λ_3 = 3. A perturbation in gallery(3) can result in perturbations in its eigenvalues that are 200 to 600 times as large. This is consistent with the cruder estimate of 1.2·10^3 obtained from condest(X).

To test this analysis, let's make a small random perturbation in A = gallery(3) and see what happens to its eigenvalues.

format long
delta = 1.e-6;
lambda = eig(A + delta*randn(3,3))

lambda =

   1.00011344999452
   1.99992040276116
   2.99996856435075

The perturbation in the eigenvalues is

lambda - (1:3)'

ans =
   1.0e-003 *
   0.11344999451923
  -0.07959723883699
  -0.03143564924635

This is smaller than, but roughly the same size as, the estimates provided by condeig and the perturbation analysis.

delta*condeig(A)

ans =
   1.0e-003 *
   0.60363896495665
   0.39523663799014
   0.21929204271846
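These formulas can be sketched directly in NumPy, reproducing the condeig values above:

```python
import numpy as np

# Sketch of what condeig computes, using Y^H = X^-1 so that y^H x = 1.
A = np.array([[-149.0, -50.0, -154.0],
              [ 537.0, 180.0,  546.0],
              [ -27.0,  -9.0,  -25.0]])

lam, X = np.linalg.eig(A)
Yh = np.linalg.inv(X)            # rows of Y^H are the left eigenvectors

# With this normalization, kappa(lambda, A) = ||y|| ||x||.
kappa = np.linalg.norm(Yh, axis=1) * np.linalg.norm(X, axis=0)
assert np.all(kappa >= 1.0)
# Same values as condeig, up to ordering: about 219.3, 395.2, 603.6.
assert np.allclose(np.sort(kappa), [219.2920, 395.2366, 603.6390],
                   rtol=1e-2)
```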

If A is real and symmetric, or complex and Hermitian, then its right and left eigenvectors are the same. In this case,

y^H x = ‖y‖ ‖x‖


and so for symmetric and Hermitian matrices,

κ(λ,A) = 1

The eigenvalues of symmetric and Hermitian matrices are perfectly well conditioned. Perturbations in the matrix lead to perturbations in the eigenvalues that are roughly the same size. This is true even for multiple eigenvalues.

At the other extreme, if λ_k is a multiple eigenvalue that does not have a corresponding full set of linearly independent eigenvectors, then the previous analysis does not apply. In this case, the characteristic polynomial for an n-by-n matrix can be written

p(λ) = det(A − λI) = (λ − λ_k)^m q(λ)

where m is the multiplicity of λ_k and q(λ) is a polynomial of degree n − m that does not vanish at λ_k. A perturbation in the matrix of size δ results in a change in the characteristic polynomial from p(λ) = 0 to something like

p(λ) = O(δ)

In other words,

(λ − λ_k)^m = O(δ)/q(λ)

The roots of this equation are

λ = λ_k + O(δ^(1/m))

This mth root behavior says that multiple eigenvalues without a full set of eigenvectors are extremely sensitive to perturbation.

As an artificial but illustrative example, consider the 16-by-16 matrix with 2's on the main diagonal, 1's on the superdiagonal, δ in the lower left-hand corner, and 0's elsewhere.

A =
   2  1
      2  1
         .  .
            2  1
   δ          2

The characteristic equation is

(λ − 2)^16 = δ

If δ = 0, this matrix has an eigenvalue of multiplicity 16 at λ = 2, but there is only one eigenvector to go along with this multiple eigenvalue. If δ is on the order of floating-point roundoff error, that is, δ ≈ 10^-16, then the eigenvalues are on a circle in the complex plane with center at 2 and radius

(10^-16)^(1/16) = 0.1


A perturbation of the size of roundoff error changes the eigenvalue from 2.0000 to 16 different values, including 1.9000, 2.1000, and 2.0924 + 0.0383i. A tiny change in the matrix elements causes a much larger change in the eigenvalues.

Essentially the same phenomenon, but in a less obvious form, explains the behavior of another Matlab gallery example,

A = gallery(5)

The matrix is

A =
     -9     11    -21     63   -252
     70    -69    141   -421   1684
   -575    575  -1149   3451 -13801
   3891  -3891   7782 -23345  93365
   1024  -1024   2048  -6144  24572

The computed eigenvalues, obtained from lambda = eig(A), are

lambda =
  -0.0408
  -0.0119 + 0.0386i
  -0.0119 - 0.0386i
   0.0323 + 0.0230i
   0.0323 - 0.0230i

How accurate are these computed eigenvalues? The gallery(5) matrix was constructed in such a way that its characteristic equation is

λ^5 = 0

You can confirm this by noting that A^5, which is computed without any roundoff error, is the zero matrix. The characteristic equation can be easily solved by hand. All five eigenvalues are actually equal to zero. The computed eigenvalues give little indication that the “correct” eigenvalues are all zero. We certainly have to admit that the computed eigenvalues are not very accurate.

The Matlab eig function is doing as well as can be expected on this problem. The inaccuracy of the computed eigenvalues is caused by their sensitivity, not by anything wrong with eig. The following experiment demonstrates this fact. Start with

A = gallery(5)
e = eig(A)
plot(real(e),imag(e),'r*',0,0,'ko')
axis(.1*[-1 1 -1 1])
axis square

Figure 10.4 shows that the computed eigenvalues are the vertices of a regular pentagon in the complex plane, centered at the origin. The radius is about 0.04.

Figure 10.4. plot(eig(gallery(5)))
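The pentagon experiment can be reproduced without Matlab; a NumPy sketch with the entries of gallery(5) copied in:

```python
import numpy as np

# The gallery(5) matrix, entered directly.
A = np.array([[   -9.0,    11.0,    -21.0,     63.0,   -252.0],
              [   70.0,   -69.0,    141.0,   -421.0,   1684.0],
              [ -575.0,   575.0,  -1149.0,   3451.0, -13801.0],
              [ 3891.0, -3891.0,   7782.0, -23345.0,  93365.0],
              [ 1024.0, -1024.0,   2048.0,  -6144.0,  24572.0]])

# A^5 involves only exactly representable integers, so it is computed
# without roundoff error and is exactly the zero matrix.
A5 = np.linalg.matrix_power(A, 5)
assert np.all(A5 == 0.0)

# Yet the computed eigenvalues form a small pentagon, radius about 0.04.
lam = np.linalg.eigvals(A)
assert np.all(np.abs(lam) < 0.1)
assert np.abs(lam).max() > 1e-4
```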

Now repeat the experiment with a matrix where each element is perturbed by a single roundoff error. The elements of gallery(5) vary over four orders of magnitude, so the correct scaling of the perturbation is obtained with

e = eig(A + eps*randn(5,5).*A)

Put this statement, along with the plot and axis commands, on a single line and use the up arrow to repeat the computation several times. You will see that the pentagon flips orientation and that its radius varies between 0.03 and 0.07, but that the computed eigenvalues of the perturbed problems behave pretty much like the computed eigenvalues of the original matrix.

The experiment provides evidence for the fact that the computed eigenvalues are the exact eigenvalues of a matrix A + E where the elements of E are on the order of roundoff error compared to the elements of A. This is the best we can expect to achieve with floating-point computation.

10.7 Singular Value Sensitivity and Accuracy

The sensitivity of singular values is much easier to characterize than the sensitivity of eigenvalues. The singular value problem is always perfectly well conditioned. A perturbation analysis would involve an equation like

Σ + δΣ = U^H (A + δA)V

But, since U and V are orthogonal or unitary, they preserve norms. Consequently, ‖δΣ‖ = ‖δA‖. Perturbations of any size in any matrix cause perturbations of roughly the same size in its singular values. There is no need to define condition numbers for singular values because they would always be equal to one. The Matlab function svd always computes singular values to full floating-point accuracy.
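A numerical sketch of this well-conditioning; the bound used is Weyl's inequality for singular values, and the matrices are arbitrary illustrations:

```python
import numpy as np

# By Weyl's inequality, |sigma_i(A + dA) - sigma_i(A)| <= ||dA||_2
# for every i: singular values are perfectly conditioned.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
dA = 1e-6 * rng.standard_normal((4, 4))

s = np.linalg.svd(A, compute_uv=False)
s_pert = np.linalg.svd(A + dA, compute_uv=False)
assert np.abs(s_pert - s).max() <= np.linalg.norm(dA, 2) * (1 + 1e-8)
```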

We have to be careful about what we mean by “same size” and “full accuracy.” Perturbations and accuracy are measured relative to the norm of the matrix or, equivalently, the largest singular value.

‖A‖_2 = σ_1

The accuracy of the smaller singular values is measured relative to the largest one. If, as is often the case, the singular values vary over several orders of magnitude, the smaller ones might not have full accuracy relative to themselves. In particular, if the matrix is singular, then some of the σ_i must be zero. The computed values of these σ_i will usually be on the order of ε‖A‖, where ε is eps, the floating-point accuracy parameter.

This can be illustrated with the singular values of gallery(5). The statements

A = gallery(5)
format long e
svd(A)

produce

1.010353607103610e+005
1.679457384066496e+000
1.462838728086172e+000
1.080169069985612e+000
4.988578262459575e-014

The largest element of A is 93365, and we see that the largest singular value is a little larger, about 10^5. There are three singular values near 10^0. Recall that all the eigenvalues of this matrix are zero, so the matrix is singular and the smallest singular value should theoretically be zero. The computed value is somewhere between ε and ε‖A‖.

Now let’s perturb the matrix. Let this infinite loop run for a while.

while 1
   clc
   svd(A+eps*randn(5,5).*A)
   pause(.25)
end

This produces varying output like this:

1.010353607103610e+005
1.67945738406****e+000
1.46283872808****e+000
1.08016906998****e+000
*.****************-0**

The asterisks show the digits that change as we make the random perturbations. The 15-digit format does not show any changes in σ_1. The changes in σ_2, σ_3, and σ_4 are smaller than ε‖A‖, which is roughly 10^-11. The computed value of σ_5 is all roundoff error, less than 10^-11.


The gallery(5) matrix was constructed to have very special properties for the eigenvalue problem. For the singular value problem, its behavior is typical of any singular matrix.

10.8 Jordan and Schur Forms

The eigenvalue decomposition attempts to find a diagonal matrix Λ and a nonsingular matrix X so that

A = XΛX^-1

There are two difficulties with the eigenvalue decomposition. A theoretical difficulty is that the decomposition does not always exist. A numerical difficulty is that, even if the decomposition exists, it might not provide a basis for robust computation.

The solution to the nonexistence difficulty is to get as close to diagonal as possible. This leads to the Jordan canonical form. The solution to the robustness difficulty is to replace “diagonal” by “triangular” and to use orthogonal and unitary transformations. This leads to the Schur form.

A defective matrix is a matrix with at least one multiple eigenvalue that does not have a full set of linearly independent eigenvectors. For example, gallery(5) is defective; zero is an eigenvalue of multiplicity five that has only one eigenvector.

The Jordan canonical form (JCF) is the decomposition

A = XJX^−1

If A is not defective, then the JCF is the same as the eigenvalue decomposition. The columns of X are the eigenvectors and J = Λ is diagonal. But if A is defective, then X consists of eigenvectors and generalized eigenvectors. The matrix J has the eigenvalues on the diagonal, and ones on the superdiagonal in positions corresponding to the columns of X that are not ordinary eigenvectors. The rest of the elements of J are zero.

The function jordan in the Matlab Symbolic Toolbox uses Maple and unlimited-precision rational arithmetic to try to compute the JCF of small matrices whose entries are small integers or ratios of small integers. If the characteristic polynomial does not have rational roots, Maple regards all the eigenvalues as distinct and produces a diagonal JCF.

The Jordan canonical form is a discontinuous function of the matrix. Almost any perturbation of a defective matrix can cause a multiple eigenvalue to separate into distinct values and eliminate the ones on the superdiagonal of the JCF. Matrices that are nearly defective have badly conditioned sets of eigenvectors, and the resulting similarity transformations cannot be used for reliable numerical computation.

A numerically satisfactory alternative to the JCF is provided by the Schur form. Any matrix can be transformed to upper triangular form by a unitary similarity transformation.

B = T^H A T


The eigenvalues of A are on the diagonal of its Schur form, B. Since unitary transformations are perfectly well conditioned, they do not magnify any errors.

For example,

A = gallery(3)
[T,B] = schur(A)

produces

A =
  -149   -50  -154
   537   180   546
   -27    -9   -25

T =
    0.3162   -0.6529    0.6882
   -0.9487   -0.2176    0.2294
    0.0000    0.7255    0.6882

B =
    1.0000   -7.1119 -815.8706
         0    2.0000  -55.0236
         0         0    3.0000

The diagonal elements of B are the eigenvalues of A. If A were symmetric, B would be diagonal. In this case, the large off-diagonal elements of B measure the lack of symmetry in A.

10.9 The QR Algorithm

The QR algorithm is one of the most important, widely used, and successful tools we have in technical computation. Several variants of it are in the mathematical core of Matlab. They compute the eigenvalues of real symmetric matrices, eigenvalues of real nonsymmetric matrices, eigenvalues of pairs of complex matrices, and singular values of general matrices. These functions are used, in turn, to find zeros of polynomials, to solve special linear systems, to assess stability, and for many other tasks in various toolboxes.

Dozens of people have contributed to the development of the various QR algorithms. The first complete implementation and an important convergence analysis are due to J. H. Wilkinson. Wilkinson’s book, The Algebraic Eigenvalue Problem [1], as well as two fundamental papers, were published in 1965.

The QR algorithm is based on repeated use of the QR factorization that we described in the chapter on least squares. The letter “Q” denotes orthogonal and unitary matrices and the letter “R” denotes right, or upper, triangular matrices. The qr function in Matlab factors any matrix, real or complex, square or rectangular, into the product of a matrix Q with orthonormal columns and a matrix R that is nonzero only in its upper, or right, triangle.

Using the qr function, a simple variant of the QR algorithm (known as the single-shift algorithm) can be expressed as a Matlab one-liner. Let A be any square matrix. Start with


n = size(A,1)
I = eye(n,n)

Then one step of the single-shift QR iteration is given by

s = A(n,n); [Q,R] = qr(A - s*I); A = R*Q + s*I

If you enter this on one line, you can use the up arrow key to iterate. The quantity s is the shift; it accelerates convergence. The QR factorization makes the matrix triangular.

A− sI = QR

Then the reverse order multiplication, RQ, restores the eigenvalues because

RQ + sI = Q^T(A − sI)Q + sI = Q^T A Q

so the new A is orthogonally similar to the original A. Each iteration effectively transfers some “mass” from the lower to the upper triangle while preserving the eigenvalues. As the iterations are repeated, the matrix often approaches an upper triangular matrix with the eigenvalues conveniently displayed on the diagonal.
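The algebra above can be cross-checked with a short pure-Python sketch of the single-shift iteration (the 2-by-2 symmetric test matrix is our own toy example, not one from the text):

```python
def matmul(A, B):
    # dense product of small matrices stored as lists of rows
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def qr_factor(A):
    # classical Gram-Schmidt QR of a small square matrix
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]
    q = []                       # orthonormal columns, stored as vectors
    for c in cols:
        v = c[:]
        for u in q:
            r = sum(u[i] * c[i] for i in range(n))
            v = [v[i] - r * u[i] for i in range(n)]
        nv = sum(x * x for x in v) ** 0.5
        q.append([x / nv for x in v])
    Q = [[q[j][i] for j in range(n)] for i in range(n)]  # column j is q[j]
    R = matmul(q, A)                                     # R = Q'*A
    return Q, R

A = [[2.0, 1.0], [1.0, 3.0]]     # eigenvalues are (5 +- sqrt(5))/2
for _ in range(50):
    if abs(A[1][0]) < 1e-12:     # subdiagonal gone: diagonal holds eigenvalues
        break
    s = A[1][1]                  # shift s = A(n,n)
    As = [[A[i][j] - (s if i == j else 0.0) for j in range(2)]
          for i in range(2)]
    Q, R = qr_factor(As)         # A - s*I = Q*R
    A = matmul(R, Q)             # reverse order product R*Q ...
    for i in range(2):
        A[i][i] += s             # ... plus s*I restores similarity
```

After a handful of iterations the subdiagonal entry is negligible and the diagonal entries approximate the eigenvalues (5 ± √5)/2.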

For example, start with A = gallery(3):

  -149   -50  -154
   537   180   546
   -27    -9   -25

The first iterate,

   28.8263 -259.8671  773.9292
    1.0353   -8.6686   33.1759
   -0.5973    5.5786  -14.1578

already has its largest elements in the upper triangle. After five more iterations we have

    2.7137  -10.5427 -814.0932
   -0.0767    1.4719  -76.5847
    0.0006   -0.0039    1.8144

As we know, this matrix was contrived to have its eigenvalues equal to 1, 2, and 3. We can begin to see these three values on the diagonal. Five more iterations gives

    3.0716   -7.6952  802.1201
    0.0193    0.9284  158.9556
   -0.0000    0.0000    2.0000

One of the eigenvalues has been computed to full accuracy and the below-diagonal element adjacent to it has become zero. It is time to deflate the problem and continue the iteration on the 2-by-2 upper left submatrix.


The QR algorithm is never practiced in this simple form. It is always preceded by a reduction to Hessenberg form, in which all the elements below the subdiagonal are zero. This reduced form is preserved by the iteration and the factorizations can be done much more quickly. Furthermore, the shift strategy is more sophisticated and is different for various forms of the algorithm.

The simplest variant involves real, symmetric matrices. The reduced form in this case is tridiagonal. Wilkinson provided a shift strategy that allowed him to prove a global convergence theorem. Even in the presence of roundoff error, we do not know of any examples that cause the implementation in Matlab to fail.

The SVD variant of the QR algorithm is preceded by a reduction to a bidiagonal form, which preserves the singular values. It has the same guaranteed convergence properties as the symmetric eigenvalue iteration.

The situation for real, nonsymmetric matrices is much more complicated. In this case, the given matrix has real elements, but its eigenvalues may well be complex. Real matrices are used throughout, with a double-shift strategy that can handle two real eigenvalues, or a complex conjugate pair. Even thirty years ago, counterexamples to the basic iteration were known, and Wilkinson introduced an “ad hoc” shift to handle them. But no one has been able to prove a complete convergence theorem. In principle, it is possible for the eig function in Matlab to fail with an error message about lack of convergence.

10.10 eigsvdgui

Figures 10.5 and 10.6 are snapshots of the output produced by eigsvdgui showing steps in the computation of the eigenvalues of a nonsymmetric matrix and of a symmetric matrix. Figure 10.7 is a snapshot of the output produced by eigsvdgui showing steps in the computation of the singular values of a nonsymmetric matrix.

Figure 10.5. eigsvdgui, nonsymmetric matrix

The first phase in the computation shown in figure 10.5 of the eigenvalues of a real, nonsymmetric, n-by-n matrix is a sequence of n − 2 orthogonal similarity transformations. The kth transformation uses Householder reflections to introduce zeros below the subdiagonal in the kth column. The result of this first phase is known as a Hessenberg matrix; all the elements below the first subdiagonal are zero.


for k = 1:n-2
   u = A(:,k);
   u(1:k) = 0;
   sigma = norm(u);
   if sigma ~= 0
      if u(k+1) < 0, sigma = -sigma; end
      u(k+1) = u(k+1) + sigma;
      rho = 1/(sigma*u(k+1));
      v = rho*A*u;
      w = rho*(u'*A)';
      gamma = rho/2*u'*v;
      v = v - gamma*u;
      w = w - gamma*u;
      A = A - v*u' - u*w';
      A(k+2:n,k) = 0;
   end
end

The second phase uses the QR algorithm to introduce zeros in the first subdiagonal. A real, nonsymmetric matrix will usually have some complex eigenvalues, so it is not possible to completely transform it to the upper triangular Schur form. Instead, a real Schur form with 1-by-1 and 2-by-2 submatrices on the diagonal is produced. Each 1-by-1 matrix is a real eigenvalue of the original matrix. The eigenvalues of each 2-by-2 block are a pair of complex conjugate eigenvalues of the original matrix.
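Reading a conjugate pair off a 2-by-2 diagonal block needs only its trace and determinant; here is a hedged Python sketch (the block entries are invented for illustration):

```python
import cmath

# a 2-by-2 real block whose eigenvalues are a complex conjugate pair
a, b, c, d = 1.0, 2.0, -3.0, 1.0
tr = a + d                             # trace = lambda1 + lambda2
det = a * d - b * c                    # determinant = lambda1 * lambda2
disc = cmath.sqrt(tr * tr - 4 * det)   # imaginary when tr^2 < 4*det
lam1 = (tr + disc) / 2
lam2 = (tr - disc) / 2
```

When the discriminant is negative the two roots are complex conjugates of each other, which is exactly the situation the real Schur form leaves in its 2-by-2 blocks.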

Figure 10.6. eigsvdgui, symmetric matrix

The computation of the eigenvalues of a symmetric matrix shown in figure 10.6 also has two phases. The result of the first phase is a matrix that is both symmetric and Hessenberg, so it is tridiagonal. Then, since all the eigenvalues of a real, symmetric matrix are real, the QR iterations in the second phase can completely zero the subdiagonal and produce a real, diagonal matrix containing the eigenvalues.

Figure 10.7 shows the output produced by eigsvdgui as it computes the singular values of a nonsymmetric matrix. Multiplication by any orthogonal matrix preserves singular values, so it is not necessary to use similarity transformations.


Figure 10.7. eigsvdgui, singular value decomposition

The first phase uses a Householder reflection to introduce zeros below the diagonal in each column, then a different Householder reflection to introduce zeros to the right of the first superdiagonal in the corresponding row. This produces an upper bidiagonal matrix with the same singular values as the original matrix. The QR iterations then zero the superdiagonal to produce a diagonal matrix containing the singular values.

10.11 Principal Component Analysis

Principal component analysis, or PCA, approximates a general matrix by a sum of a few “simple” matrices. By “simple” we mean rank one; all the rows are multiples of each other, and so are all the columns. Let A be any real m-by-n matrix. The economy-sized singular value decomposition

A = UΣV^T

can be rewritten

A = E1 + E2 + . . . + Ep

where p = min(m,n). The component matrices Ek are rank one outer products,

Ek = σk uk vk^T

Each column of Ek is a multiple of uk, the kth column of U, and each row is a multiple of vk^T, the transpose of the kth column of V. The component matrices are orthogonal to each other in the sense that

Ej Ek^T = 0,  j ≠ k

The norm of each component matrix is the corresponding singular value,

‖Ek‖ = σk

Consequently, the contribution each Ek makes to reproducing A is determined by the size of the singular value σk.

If the sum is truncated after r < p terms,

Ar = E1 + E2 + . . . + Er


the result is a rank r approximation to the original matrix A. In fact, Ar is the closest rank r approximation to A. It turns out that the error in this approximation is

‖A−Ar‖ = σr+1

Since the singular values are ordered in decreasing order, the accuracy of the approximation increases as the rank increases.

Principal component analysis is used in a wide range of fields, including statistics, earth sciences, and archaeology. The description and notation vary widely. Perhaps the most common description is in terms of eigenvalues and eigenvectors of the cross-product matrix A^T A. Since

A^T A V = V Σ^2

the columns of V are the eigenvectors of A^T A. The columns of U, scaled by the singular values, can then be obtained from

UΣ = AV

The data matrix A is frequently standardized by subtracting the means of the columns and dividing by their standard deviations. If this is done, the cross-product matrix becomes the correlation matrix.

Factor analysis is a closely related technique that makes additional statistical assumptions about the elements of A and modifies the diagonal elements of A^T A before computing the eigenvalues and eigenvectors.

For a simple example of principal component analysis on the unmodified matrix A, suppose we measure the height and weight of six subjects and obtain the following data.

A =
    47    15
    93    35
    53    15
    45    10
    67    27
    42    10

The blue bars in figure 10.8 plot this data. We expect height and weight to be strongly correlated. We believe there is one underlying component — let’s call it “size” — that predicts both height and weight. The statement

[U,S,V] = svd(A,0)
sigma = diag(S)

produces

U =
    0.3153    0.1056
    0.6349   -0.3656
    0.3516    0.3259
    0.2929    0.5722
    0.4611   -0.4562
    0.2748    0.4620

V =
    0.9468    0.3219
    0.3219   -0.9468

sigma =
  156.4358
    8.7658

Figure 10.8. Principal component analysis of data

Notice that σ1 is much larger than σ2. The rank one approximation to A is

E1 = sigma(1)*U(:,1)*V(:,1)’

E1 =
   46.7021   15.8762
   94.0315   31.9657
   52.0806   17.7046
   43.3857   14.7488
   68.2871   23.2139
   40.6964   13.8346

In other words, the single underlying principal component is


size = sigma(1)*U(:,1)

size =
   49.3269
   99.3163
   55.0076
   45.8240
   72.1250
   42.9837

The two measured quantities are then well approximated by

height ≈ size*V(1,1)
weight ≈ size*V(2,1)

The green bars in figure 10.8 plot these approximations.
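The same rank one component can be reproduced outside Matlab. Here is a hedged pure-Python sketch that recovers σ1 and E1 from the height/weight data by power iteration on A'*A (power iteration is our substitute for a full SVD, not the method the text uses):

```python
# height/weight data from the example above
A = [[47, 15], [93, 35], [53, 15], [45, 10], [67, 27], [42, 10]]

# 2-by-2 cross-product matrix C = A'*A
C = [[sum(row[i] * row[j] for row in A) for j in range(2)] for i in range(2)]

# power iteration: v converges to the dominant eigenvector v1 of C,
# and the norm of C*v converges to sigma1^2
v = [1.0, 1.0]
for _ in range(100):
    w = [C[0][0] * v[0] + C[0][1] * v[1],
         C[1][0] * v[0] + C[1][1] * v[1]]
    nrm = (w[0] ** 2 + w[1] ** 2) ** 0.5
    v = [w[0] / nrm, w[1] / nrm]

sigma1 = nrm ** 0.5
u = [(row[0] * v[0] + row[1] * v[1]) / sigma1 for row in A]  # u1 = A*v1/sigma1
E1 = [[sigma1 * ui * vj for vj in v] for ui in u]            # sigma1*u1*v1'
```

The computed σ1 and the entries of E1 agree with the Matlab output printed above to the displayed precision.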

Figure 10.9. Principal components of Dürer’s magic square

A larger example involves digital image processing. The statements

load detail
subplot(2,2,1)
image(X)


colormap(gray(64))
axis image, axis off
r = rank(X)
title(['rank = ' int2str(r)])

produce the first subplot in figure 10.9. The matrix X obtained with the load statement is 359-by-371 and is numerically of full rank. Its elements lie between 1 and 64 and serve as indices into a gray-scale color map. The resulting picture is a detail from Albrecht Dürer’s etching “Melencolia I” showing a 4-by-4 magic square. The statements

[U,S,V] = svd(X,0);
sigma = diag(S);
semilogy(sigma,'.')

produce the logarithmic plot of the singular values of X shown in figure 10.10. We see that the singular values decrease rapidly. There is one greater than 10^4 and only six greater than 10^3.


Figure 10.10. Singular values (log scale)

The other three subplots in figure 10.9 show the images obtained from principal component approximations to X with r = 1, r = 20, and r = 100. The rank one approximation shows the horizontal and vertical lines that result from a single outer product, E1 = σ1 u1 v1^T. This checkerboard-like structure is typical of low rank principal component approximations to images. The individual numerals are recognizable in the r = 20 approximation. There is hardly any visible difference between the r = 100 approximation and the full rank image.

Although low rank matrix approximations to images do require less computer storage and transmission time than the full rank image, there are more effective data compression techniques. The primary uses of principal component analysis in image processing involve feature recognition.


10.12 Circle Generator

The following algorithm was used to plot circles on some of the first computers with graphical displays. At the time, there was no Matlab and no floating-point arithmetic. Programs were written in machine language and arithmetic was done on scaled integers. The circle generating program looked something like this:

x = 32768
y = 0

L: load y
   shift right 5 bits
   add x
   store in x
   change sign
   shift right 5 bits
   add y
   store in y
   plot x y
   go to L

Why does this generate a circle? In fact, does it actually generate a circle? There are no trig functions, no square roots, no multiplications or divisions. It’s all done with shifts and additions.

The key to this algorithm is the fact that the new x is used in the computation of the new y. This was convenient on computers at the time because it meant you needed only two storage locations, one for x and one for y. But, as we shall see, it is also why the algorithm comes close to working at all.

Here is a Matlab version of the same algorithm.

h = 1/32;
x = 1;
y = 0;
while 1
   x = x + h*y;
   y = y - h*x;
   plot(x,y,'.')
   drawnow
end

The M-file circlegen lets you experiment with various values of the step size h. It provides an actual circle in the background. Figure 10.11 shows the output for the carefully chosen default value, h = .20906. It’s not quite a circle. However, circlegen(h) generates better circles with smaller values of h. Try circlegen(h) for various h yourself.

If we let (xn, yn) denote the nth point generated, then the iteration is

xn+1 = xn + hyn

yn+1 = yn − hxn+1
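One way to see how nearly the iteration preserves the radius is simply to run it and track the extreme radii. This pure-Python sketch is our own check, using h = 1/32 as in the loop above:

```python
h = 1.0 / 32
x, y = 1.0, 0.0
rmin = rmax = 1.0
for _ in range(10000):
    x = x + h * y            # the new x ...
    y = y - h * x            # ... is used immediately to update y
    r = (x * x + y * y) ** 0.5
    rmin, rmax = min(rmin, r), max(rmax, r)
```

The radius stays within roughly h/2 of 1: the points lie on a closed curve close to, but not exactly on, the unit circle.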


Figure 10.11. circlegen (h = 0.20906)

The key is the fact that xn+1 appears on the right in the second equation. Substituting the first equation in the second gives

xn+1 = xn + hyn

yn+1 = −hxn + (1 − h^2)yn

Let’s switch to matrix-vector notation. Let xn now denote the two-vector specifying the nth point and let A be the circle generator matrix

A =
(  1       h
  −h   1 − h^2 )

With this notation, the iteration is simply

xn+1 = Axn

This immediately leads to

xn = A^n x0

So, the question is, for various values of h, how do powers of the circle generator matrix behave?

For most matrices A, the behavior of A^n is determined by its eigenvalues. The Matlab statement

[X,Lambda] = eig(A)


produces a diagonal eigenvalue matrix Λ and a corresponding eigenvector matrix X so that

AX = XΛ

If X^−1 exists, then

A = XΛX^−1

and

A^n = XΛ^n X^−1

Consequently, the powers A^n remain bounded if the eigenvector matrix is nonsingular and the eigenvalues λk, which are the diagonal elements of Λ, satisfy

|λk| ≤ 1

Here is an easy experiment. Enter the line

h = 2*rand, A = [1 h; -h 1-h^2], lambda = eig(A), abs(lambda)

Repeatedly press the up arrow key, then the Enter key. You should eventually become convinced, at least experimentally, that

For any h in the interval 0 < h < 2, the eigenvalues of the circle generator matrix A are complex numbers with absolute value 1.

The Symbolic Toolbox provides some assistance in actually proving this fact.

syms h
A = [1 h; -h 1-h^2]
lambda = eig(A)

creates a symbolic version of the iteration matrix and finds its eigenvalues.

A =
[  1,     h]
[ -h, 1-h^2]

lambda =
[ 1-1/2*h^2+1/2*(-4*h^2+h^4)^(1/2)]
[ 1-1/2*h^2-1/2*(-4*h^2+h^4)^(1/2)]

The statement

abs(lambda)

does not do anything useful, in part because we have not yet made any assumptions about the symbolic variable h.

We note that the eigenvalues will be complex if the quantity involved in the square root is negative, that is, if |h| < 2. The determinant of a matrix should be the product of its eigenvalues. This is confirmed with


d = det(A)

or

d = simple(prod(lambda))

Both produce

d =
1

Consequently, if |h| < 2, the eigenvalues, λ, are complex and their product is 1, so they must satisfy |λ| = 1.
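A numeric cross-check of this conclusion, via the characteristic polynomial λ^2 − (2 − h^2)λ + 1 = 0 (a pure-Python sketch; the sample values of h are our own choice):

```python
import cmath

def generator_eigs(h):
    # roots of lambda^2 - (2 - h^2)*lambda + 1 = 0,
    # the characteristic polynomial of [1 h; -h 1-h^2]
    tr = 2 - h * h
    disc = cmath.sqrt(tr * tr - 4.0)   # imaginary for 0 < h < 2
    return (tr + disc) / 2, (tr - disc) / 2

mods = [abs(lam) for h in (0.1, 0.5, 1.0, 1.5, 1.9)
        for lam in generator_eigs(h)]
```

Every modulus comes out equal to 1 to within roundoff, as the determinant argument predicts.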

Because

λ = 1 − h^2/2 ± h√(h^2/4 − 1)

it is plausible that, if we define θ by

cos θ = 1 − h^2/2

or

sin θ = h√(1 − h^2/4)

then

λ = cos θ ± i sin θ

The Symbolic Toolbox confirms this with

theta = acos(1-h^2/2);
Lambda = [cos(theta)-i*sin(theta); cos(theta)+i*sin(theta)]
diff = simple(lambda-Lambda)

which produces

Lambda =
[ 1-1/2*h^2-1/2*i*(4*h^2-h^4)^(1/2)]
[ 1-1/2*h^2+1/2*i*(4*h^2-h^4)^(1/2)]

diff =
[ 0]
[ 0]

In summary, this proves that, if |h| < 2, the eigenvalues of the circle generator matrix are

λ = e^{±iθ}

The eigenvalues are distinct, hence X must be nonsingular and

A^n = X
( e^{inθ}       0
      0    e^{−inθ} )
X^−1


If the step size h happens to correspond to a value of θ that is 2π/p, where p is an integer, then the algorithm generates only p discrete points before it repeats itself.

How close does our circle generator come to actually generating circles? In fact, it generates ellipses. As the step size h gets smaller, the ellipses get closer to circles. The aspect ratio of an ellipse is the ratio of its major axis to its minor axis. It turns out that the aspect ratio of the ellipse produced by the generator is equal to the condition number of the matrix of eigenvectors, X. The condition number of a matrix is computed by the Matlab function cond(X) and is discussed in more detail in the chapter on linear equations.

The solution to the 2-by-2 system of ordinary differential equations

ẋ = Qx

where

Q =
(  0   1
  −1   0 )

is a circle

x(t) =
(  cos t   sin t
  −sin t   cos t ) x(0)

So, the iteration matrix

(  cos h   sin h
  −sin h   cos h )

generates perfect circles. The Taylor series for cos h and sin h show that the iteration matrix for our circle generator

A =
(  1       h
  −h   1 − h^2 )

approaches the perfect iterator as h gets small.
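Entrywise, the agreement is easy to quantify: since cos h = 1 − h^2/2 + · · · and sin h = h − h^3/6 + · · ·, each entry of A differs from the corresponding entry of the rotation matrix by O(h^2) or less. A small Python check (our own sketch):

```python
import math

h = 0.01
# entrywise difference between the circle generator
# [1 h; -h 1-h^2] and the rotation [cos h  sin h; -sin h  cos h]
diffs = [
    abs(1.0 - math.cos(h)),            # (1,1): about h^2/2
    abs(h - math.sin(h)),              # (1,2): about h^3/6
    abs(-h + math.sin(h)),             # (2,1): about h^3/6
    abs((1.0 - h * h) - math.cos(h)),  # (2,2): about h^2/2
]
```

For h = 0.01 the largest entrywise difference is about h^2/2, which is why the generated curves look so nearly circular for small h.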

10.13 Further Reading

The reference books on matrix computation [6, 7, 8, 9, 10, 11] discuss eigenvalues. In addition, the classic by Wilkinson [1] is still readable and relevant. ARPACK, which underlies the sparse eigs function, is described in [2].

Exercises

10.1. Match the following matrices to the following properties. For each matrix, choose the most descriptive property. Each property can be matched to one or more of the matrices.


magic(4)                  Symmetric
hess(magic(4))            Defective
schur(magic(5))           Orthogonal
pascal(6)                 Singular
hess(pascal(6))           Tridiagonal
schur(pascal(6))          Diagonal
orth(gallery(3))          Hessenberg form
gallery(5)                Schur form
gallery('frank',12)       Jordan form
[1 1 0; 0 2 1; 0 0 3]
[2 1 0; 0 2 1; 0 0 2]

10.2. (a) What is the largest eigenvalue of magic(n)? Why?
(b) What is the largest singular value of magic(n)? Why?

10.3. As a function of n, what are the eigenvalues of the n-by-n finite Fourier transform matrix, fft(eye(n))?

10.4. Try this:

n = 101;
d = ones(n-1,1);
A = diag(d,1) + diag(d,-1);
e = eig(A)
plot(-(n-1)/2:(n-1)/2,e,'.')

Do you recognize the resulting curve? Can you guess a formula for the eigenvalues of this matrix?

10.5. Plot the trajectories in the complex plane of the eigenvalues of the matrix A with elements

ai,j = 1/(i − j + t)

as t varies over the interval 0 < t < 1. Your plot should look something like figure 10.12.

10.6. (a) In theory, the elements of the vector obtained from

condeig(gallery(5))

should be infinite. Why?
(b) In practice, the computed values are only about 10^10. Why?

10.7. This exercise uses the Symbolic Toolbox to study a classic eigenvalue test matrix, the Rosser matrix.
(a) You can compute the eigenvalues of the Rosser matrix exactly and order them in increasing order with

R = sym(rosser)
e = eig(R)
[ignore,k] = sort(double(e))
e = e(k)



Figure 10.12. Eigenvalue trajectories

Why can’t you just use e = sort(eig(R))?
(b) You can compute and display the characteristic polynomial of R with

p = poly(R)
f = factor(p)
pretty(f)

Which terms in f correspond to which eigenvalues in e?
(c) What does each of these statements do?

e = eig(sym(rosser))
r = eig(rosser)
double(e) - r
double(e - r)

(d) Why are the results in (c) on the order of 10^−12 instead of eps?
(e) Change R(1,1) from 611 to 612 and compute the eigenvalues of the modified matrix. Why do the results appear in a different form?

10.8. Both of the matrices

P = gallery('pascal',12)
F = gallery('frank',12)

have the property that if λ is an eigenvalue, so is 1/λ. How well do the computed eigenvalues preserve this property? Use condeig to explain the different behavior for the two matrices.

10.9. Compare these three ways to compute the singular values of a matrix.

svd(A)


sqrt(eig(A'*A))
Z = zeros(size(A)); s = eig([Z A; A' Z]); s = s(s>0)

10.10. Experiment with eigsvdgui on random symmetric and nonsymmetric matrices, randn(n). Choose values of n appropriate for the speed of your computer and investigate the three variants eig, symm, and svd. The title in the eigsvdgui shows the number of iterations required. Roughly, how does the number of iterations for the three different variants depend upon the order of the matrix?

10.11. Pick a value of n and generate a matrix with

A = diag(ones(n-1,1),-1) + diag(1,n-1);

Explain any atypical behavior you observe with each of the following.

eigsvdgui(A,'eig')
eigsvdgui(A,'symm')
eigsvdgui(A,'svd')

10.12. The NCM file imagesvd.m helps you investigate the use of principal component analysis in digital image processing. If you have them available, use your own photographs. If you have access to the Matlab Image Processing Toolbox, you may want to use its advanced features. However, it is possible to do basic image processing without the toolbox.
For an m-by-n color image in JPEG format the statement

X = imread('myphoto.jpg');

produces a three-dimensional m-by-n-by-3 array X with m-by-n integer subarrays for the red, green, and blue intensities. It would be possible to compute three separate m-by-n singular value decompositions of the three colors. An alternative that requires less work involves altering the dimensions of X with

X = reshape(X,m,3*n)

and then computing one m-by-3n SVD.
(a) The primary computation in imagesvd is done by

[V,S,U] = svd(X',0)

How does this compare with

[U,S,V] = svd(X,0)

(b) How does the choice of approximating rank affect the visual qualities of the images? There are no precise answers here. Your results will depend upon the images you choose and the judgments you make.

10.13. This exercise investigates a model of the human gait developed by Nikolaus Troje at the Bio Motion Lab of Ruhr University in Bochum, Germany. Their Web page provides an interactive demo [3]. Two papers describing the work are also available on the Web [4, 5].



Figure 10.13. Walker at rest.

Troje’s data results from motion capture experiments involving subjects wearing reflective markers walking on a treadmill. His model is a five-term Fourier series with vector-valued coefficients obtained by principal component analysis of the experimental data. The components, which are also known as postures or eigenpostures, correspond to static position, forward motion, sideways sway, and two hopping/bouncing movements that differ in the phase relationship between the upper and lower portions of the body. The model is purely descriptive; it does not make any direct use of physical laws of motion.

The moving position v(t) of the human body is described by 45 functions of time, which correspond to the location of 15 points in three-dimensional space. Figure 10.13 is a static snapshot. The model is

v(t) = v1 + v2 sin ωt + v3 cos ωt + v4 sin 2ωt + v5 cos 2ωt

If the postures v1, ..., v5 are regarded as the columns of a single 45-by-5 matrix V, the calculation of v(t) for any t involves a matrix-vector multiplication. The resulting vector can then be reshaped into a 15-by-3 array that exposes the spatial coordinates. For example, at t = 0 the time-varying coefficients form the vector w = [1 0 1 0 1]'. Consequently reshape(V*w,15,3) produces the coordinates of the initial position.
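The t = 0 claim is easy to verify; here is a small Python sketch of the time-varying coefficient vector (the function name is our own):

```python
import math

def fourier_coeffs(t, omega=1.0):
    # coefficients multiplying the five postures v1, ..., v5 at time t
    return [1.0,
            math.sin(omega * t), math.cos(omega * t),
            math.sin(2 * omega * t), math.cos(2 * omega * t)]

w = fourier_coeffs(0.0)    # the vector w = [1 0 1 0 1]' from the text
```

Multiplying the 45-by-5 posture matrix by this vector and reshaping the product gives the pose at any instant.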

The five postures for an individual subject are obtained by a combination of principal component and Fourier analysis. The individual characteristic frequency ω is an independent speed parameter. If the postures are averaged over the subjects with a particular characteristic, the result is a model for the “typical” walker with that characteristic. The characteristics available in the demo on the Web page include male/female, heavy/light, nervous/relaxed, and happy/sad.


Our M-file walker.m is based on the postures for a typical female walker, f1, . . . , f5, and a typical male walker, m1, . . . , m5. Slider s1 varies the time increment and hence the apparent walking speed. Sliders s2, . . . , s5 vary the amount that each component contributes to the overall motion. Slider s6 varies a linear combination of the female and male walkers. A slider setting greater than 1.0 overemphasizes the characteristic. Here is the complete model, including the sliders.

f(t) = f1 + s2 f2 sin ωt + s3 f3 cos ωt + s4 f4 sin 2ωt + s5 f5 cos 2ωt
m(t) = m1 + s2 m2 sin ωt + s3 m3 cos ωt + s4 m4 sin 2ωt + s5 m5 cos 2ωt
v(t) = (f(t) + m(t))/2 + s6 (f(t) − m(t))/2

(a) Describe the visual differences between the gait of the typical female and male walkers.

(b) File walkers.mat contains four data sets. F and M are the postures of the typical female and typical male obtained by analyzing all the subjects. A and B are the postures of two individual subjects. Are A and B male or female?

(c) Modify walker.m to add a waving hand as an additional, artificial, posture.

(d) What does this program do?

load walkers
F = reshape(F,15,3,5);
M = reshape(M,15,3,5);
for k = 1:5
   for j = 1:3
      subplot(5,3,j+3*(k-1))
      plot([F(:,j,k) M(:,j,k)])
      ax = axis;
      axis([1 15 ax(3:4)])
   end
end

(e) Change walker.m to use a Fourier model parametrized by amplitude and phase. The female walker is

f(t) = f1 + s2a1 sin (ωt + s3φ1) + s4a2 sin (2ωt + s5φ2)

A similar formulation is used for the male walker. The linear combination of the two walkers using s6 is unchanged. The amplitude and phase are given by

a1 = √(f2² + f3²)
a2 = √(f4² + f5²)
φ1 = tan⁻¹(f3/f2)
φ2 = tan⁻¹(f5/f4)
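These formulas rest on the identity b sin θ + c cos θ = a sin(θ + φ) with a = √(b² + c²) and φ = tan⁻¹(c/b). A quick numerical check in Python (atan2 is the quadrant-aware form of tan⁻¹; the values of b and c are arbitrary stand-ins for f2 and f3):

```python
from math import sin, cos, atan2, hypot, pi

# Verify:  b*sin(th) + c*cos(th) == a*sin(th + phi)
# with a = sqrt(b^2 + c^2) and phi = atan2(c, b).
b, c = 0.8, -0.3              # arbitrary stand-ins for f2, f3
a = hypot(b, c)               # amplitude
phi = atan2(c, b)             # phase, correct in all quadrants

max_err = max(abs(b*sin(th) + c*cos(th) - a*sin(th + phi))
              for th in [k*pi/50 for k in range(100)])
# max_err is at rounding-error level
```

Expanding a sin(θ + φ) = (a cos φ) sin θ + (a sin φ) cos θ shows why: a cos φ = b and a sin φ = c by construction.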

10.14. In English, and in many other languages, vowels are usually followed by consonants and consonants are usually followed by vowels. This fact is revealed by a principal component analysis of the digraph frequency matrix for a sample of text. English text uses 26 letters, so the digraph frequency matrix is a 26-by-26 matrix, A, with counts of pairs of letters. Blanks and all other punctuation are removed from the text, and the entire sample is thought of as circular or periodic, so the first letter follows the last letter. The matrix entry ai,j is the number of times the ith letter is followed by the jth letter in the text. The row and column sums of A are the same: they count the number of times individual letters occur in the sample. So the fifth row and fifth column usually have the largest sums because the fifth letter, which is “E”, is usually the most frequent.
A principal component analysis of A produces a first component,

A ≈ σ1 u1 v1^T

that reflects the individual letter frequencies. The first left and right singular vectors, u1 and v1, have elements that are all of the same sign and that are roughly proportional to the corresponding frequencies. We are primarily interested in the second principal component,

A ≈ σ1 u1 v1^T + σ2 u2 v2^T

The second term has positive entries in vowel-consonant and consonant-vowel positions and negative entries in vowel-vowel and consonant-consonant positions. The NCM collection contains a function digraph.m that carries out this analysis. Figure 10.14 shows the output produced by analyzing Lincoln’s Gettysburg Address with

digraph('gettysburg.txt')

The ith letter of the alphabet is plotted at coordinates (ui,2, vi,2). The distance of each letter from the origin is roughly proportional to its frequency, and the sign patterns cause the vowels to be plotted in one quadrant and the consonants to be plotted in the opposite quadrant. There is even a little more detail. The letter “N” is usually preceded by a vowel and often followed by another consonant, like “D” or “G”, and so it shows up in a quadrant pretty much by itself. On the other hand, “H” is often preceded by another consonant, namely “T”, and followed by a vowel, “E”, so it also gets its own quadrant.
(a) Explain how digraph uses sparse to count letter pairs and create the matrix. help sparse should be useful.
(b) Try digraph on other text samples. Roughly how many characters are needed to see the vowel-consonant frequency behavior?
(c) Can you find any text with at least several hundred characters that does not show the typical behavior?
(d) Try digraph on .m files or other source code. Do computer programs typically have the same vowel-consonant behavior as prose?

[Scatter plot: the 26 letters plotted in the (u2, v2) plane; sample of 1149 characters.]

Figure 10.14. The second principal component of a digraph matrix

(e) Try digraph on samples from other languages. Hawaiian and Finnish are particularly interesting. You may need to modify digraph to accommodate more or fewer than 26 letters. Do other languages show the same vowel-consonant behavior as English?
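As a sketch of the counting step that digraph.m performs, here is a plain-Python version (the function name digraph_counts and the dictionary-based approach are our own; the book's code builds the same matrix with MATLAB's sparse):

```python
from collections import Counter
from string import ascii_lowercase

# Build the 26-by-26 digraph frequency matrix for a text sample.
# Non-letters are dropped and the text is treated as circular, so the
# first letter follows the last, making row sums equal column sums.

def digraph_counts(text):
    letters = [ch for ch in text.lower() if ch in ascii_lowercase]
    pairs = zip(letters, letters[1:] + letters[:1])   # circular pairing
    counts = Counter(pairs)
    idx = {ch: k for k, ch in enumerate(ascii_lowercase)}
    A = [[0]*26 for _ in range(26)]
    for (p, q), n in counts.items():
        A[idx[p]][idx[q]] = n
    return A

A = digraph_counts("Four score and seven years ago")
# Each letter occurrence contributes one outgoing and one incoming pair,
# so row sums and column sums both count individual letter frequencies.
row_sums = [sum(row) for row in A]
col_sums = [sum(A[i][j] for i in range(26)) for j in range(26)]
```

MATLAB's sparse(i,j,1,26,26) accumulates repeated (i,j) pairs the same way Counter does here, which is the point of part (a).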

10.15. Explain the behavior of circlegen for each of the following values of the step size h. What, if anything, is special about these particular values? Is the orbit a discrete set of points? Does the orbit stay bounded, grow linearly, or grow exponentially? If necessary, increase the axis limits in circlegen so that it shows the entire orbit. Recall that φ = (1 + √5)/2 is the golden ratio.

h = √(2 − 2 cos(2π/30)) (the default)
h = 1/φ
h = φ
h = 1.4140
h = √2
h = 1.4144
h < 2
h = 2
h > 2

10.16. (a) Modify circlegen so that both components of the new point are determined from the old point, that is,

xn+1 = xn + hyn

yn+1 = yn − hxn

(This is the explicit Euler’s method for solving the circle ordinary differential equation.) What happens to the “circles”? What is the iteration matrix? What are its eigenvalues?
(b) Modify circlegen so that the new point is determined by solving a 2-by-2 system of simultaneous equations.

xn+1 − hyn+1 = xn

yn+1 + hxn+1 = yn

(This is the implicit Euler’s method for solving the circle ordinary differential equation.) What happens to the “circles”? What is the iteration matrix? What are its eigenvalues?
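Both parts can be explored numerically before doing the eigenvalue analysis. The explicit step is multiplication by E = [[1, h], [−h, 1]], and solving the implicit 2-by-2 system gives the inverse map. The Python sketch below (function names are ours) tracks the orbit radius under each iteration; it hints at, but does not replace, the analysis the exercise asks for.

```python
from math import sqrt, hypot

# Explicit Euler:  (x, y) -> (x + h*y, y - h*x), matrix [[1, h], [-h, 1]],
#                  eigenvalues 1 +/- i*h, magnitude sqrt(1 + h^2).
# Implicit Euler:  solve x' - h*y' = x, y' + h*x' = y for (x', y');
#                  the eigenvalue magnitude is 1/sqrt(1 + h^2).

def eig_mag_explicit(h):
    return hypot(1.0, h)

def eig_mag_implicit(h):
    return 1.0 / hypot(1.0, h)

explicit = lambda x, y, h: (x + h*y, y - h*x)
implicit = lambda x, y, h: ((x + h*y)/(1 + h*h), (y - h*x)/(1 + h*h))

def orbit_radius(step, x, y, h, n):
    """Radius after n steps of the given iteration, starting from (x, y)."""
    for _ in range(n):
        x, y = step(x, y, h)
    return hypot(x, y)

h = 0.1
r_exp = orbit_radius(explicit, 1.0, 0.0, h, 100)  # grows: |lambda| > 1
r_imp = orbit_radius(implicit, 1.0, 0.0, h, 100)  # shrinks: |lambda| < 1
```

The radii suggest the answer: the explicit “circles” spiral outward and the implicit ones spiral inward, exactly as the eigenvalue magnitudes predict.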

10.17. Modify circlegen so that it keeps track of the maximum and minimum radius during the iteration and returns the ratio of these two radii as the value of the function. Compare this computed aspect ratio with the eigenvector condition number, cond(X), for various values of h.

