
    Chapter 10

    Eigenvalues and Singular Values

    This chapter is about eigenvalues and singular values of matrices. Computational algorithms and sensitivity to perturbations are both discussed.

    10.1 Eigenvalue and Singular Value Decompositions

    An eigenvalue and eigenvector of a square matrix A are a scalar λ and a nonzero vector x so that

    Ax = λx

    A singular value and pair of singular vectors of a square or rectangular matrix A are a nonnegative scalar σ and two nonzero vectors u and v so that

    Av = σu
    Aᴴu = σv

    The superscript on Aᴴ stands for Hermitian transpose and denotes the complex conjugate transpose of a complex matrix. If the matrix is real, then Aᵀ denotes the same matrix. In Matlab these transposed matrices are denoted by A'.
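The defining pair of equations can be checked numerically. The book's examples use Matlab; the following sketch does the same check in Python with NumPy, on a small complex matrix invented purely for illustration:

```python
import numpy as np

# An arbitrary complex matrix, chosen only for illustration.
A = np.array([[1.0, 2.0j],
              [3.0, 4.0]], dtype=complex)

# numpy.linalg.svd returns U, the singular values s, and V^H (not V).
U, s, Vh = np.linalg.svd(A)
V = Vh.conj().T

# Check A v = sigma u and A^H u = sigma v for each singular triple.
for i, sigma in enumerate(s):
    assert np.allclose(A @ V[:, i], sigma * U[:, i])
    assert np.allclose(A.conj().T @ U[:, i], sigma * V[:, i])
```

Note that NumPy returns Vᴴ rather than V, so the conjugate transpose must be undone before reading off the right singular vectors as columns.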

    The term eigenvalue is a partial translation of the German "eigenwert." A complete translation would be something like "own value" or "characteristic value," but these are rarely used. The term singular value relates to the distance between a matrix and the set of singular matrices.

    Eigenvalues play an important role in situations where the matrix is a transformation from one vector space onto itself. Systems of linear ordinary differential equations are the primary examples. The values of λ can correspond to frequencies of vibration, or critical values of stability parameters, or energy levels of atoms. Singular values play an important role where the matrix is a transformation from one vector space to a different vector space, possibly with a different dimension. Systems of over- or underdetermined algebraic equations are the primary examples.

    The definitions of eigenvectors and singular vectors do not specify their normalization. An eigenvector x, or a pair of singular vectors u and v, can be scaled by


    any nonzero factor without changing any other important properties. Eigenvectors of symmetric matrices are usually normalized to have Euclidean length equal to one, ‖x‖₂ = 1. On the other hand, the eigenvectors of nonsymmetric matrices often have different normalizations in different contexts. Singular vectors are almost always normalized to have Euclidean length equal to one, ‖u‖₂ = ‖v‖₂ = 1. You can still multiply eigenvectors, or pairs of singular vectors, by −1 without changing their lengths.

    The eigenvalue-eigenvector equation for a square matrix can be written

    (A − λI)x = 0,  x ≠ 0

    This implies that A − λI is singular and hence that

    det(A − λI) = 0

    This definition of an eigenvalue, which does not directly involve the corresponding eigenvector, is the characteristic equation or characteristic polynomial of A. The degree of the polynomial is the order of the matrix. This implies that an n-by-n matrix has n eigenvalues, counting multiplicities. Like the determinant itself, the characteristic polynomial is useful in theoretical considerations and hand calculations, but does not provide a sound basis for robust numerical software.
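For a tiny matrix the connection is still easy to see numerically. This NumPy sketch (the 3-by-3 matrix is made up for illustration) compares the roots of the characteristic polynomial with the output of an eigenvalue solver:

```python
import numpy as np

# A small symmetric matrix, chosen only for illustration.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

# np.poly returns the coefficients of det(lambda*I - A),
# highest degree first, so the degree equals the order of A.
coeffs = np.poly(A)

# Its n roots are the n eigenvalues, counting multiplicities.
lam_roots = np.sort(np.roots(coeffs).real)
lam_eig = np.sort(np.linalg.eigvalsh(A))
assert np.allclose(lam_roots, lam_eig)
```

For matrices of even modest size, root finding on the characteristic polynomial is badly conditioned, which is exactly why robust software does not work this way.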

    Let λ₁, λ₂, . . . , λₙ be the eigenvalues of a matrix A, let x₁, x₂, . . . , xₙ be a set of corresponding eigenvectors, let Λ denote the n-by-n diagonal matrix with the λⱼ on the diagonal, and let X denote the n-by-n matrix whose jth column is xⱼ. Then

    AX = XΛ

    It is necessary to put Λ on the right in the second expression so that each column of X is multiplied by its corresponding eigenvalue. Now make a key assumption that is not true for all matrices: assume that the eigenvectors are linearly independent. Then X⁻¹ exists and

    A = XΛX⁻¹

    with nonsingular X. This is known as the eigenvalue decomposition of the matrix A. If it exists, it allows us to investigate the properties of A by analyzing the diagonal matrix Λ. For example, repeated matrix powers can be expressed in terms of powers of scalars.

    Aᵖ = XΛᵖX⁻¹

    If the eigenvectors of A are not linearly independent, then such a diagonal decomposition does not exist and the powers of A have a more complicated behavior.
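When the decomposition does exist, the power formula is easy to verify. A NumPy sketch (the 2-by-2 matrix is invented, and its eigenvectors happen to be independent):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])        # illustrative matrix, eigenvalues 2 and 5

lam, X = np.linalg.eig(A)        # satisfies A X = X Lam
Xinv = np.linalg.inv(X)          # exists because the eigenvectors are independent

# A^p = X Lam^p X^{-1}: repeated matrix multiplication becomes
# scalar powers on the diagonal.
p = 5
Ap = X @ np.diag(lam**p) @ Xinv
assert np.allclose(Ap, np.linalg.matrix_power(A, p))
```

The same idea extends to other functions of A: applying a function to the diagonal of Λ applies it, in this sense, to the whole matrix.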

    If T is any nonsingular matrix, then

    B = T⁻¹AT

    is known as a similarity transformation, and A and B are said to be similar. If Ax = λx and y = T⁻¹x, then By = λy. In other words, a similarity transformation preserves eigenvalues. The eigenvalue decomposition is an attempt to find a similarity transformation to diagonal form.
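A numerical check of this invariance in NumPy (the random matrices are invented for illustration; a random T is nonsingular with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
T = rng.standard_normal((4, 4))
Tinv = np.linalg.inv(T)

B = Tinv @ A @ T                  # similarity transformation

# Take one eigenpair of A and map the eigenvector: y = T^{-1} x.
lam, X = np.linalg.eig(A)
x = X[:, 0]
y = Tinv @ x
assert np.allclose(B @ y, lam[0] * y)   # same eigenvalue for B

# The whole spectra agree (sorted for comparison).
assert np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                   np.sort_complex(np.linalg.eigvals(B)))
```

The eigenvectors are transformed but the eigenvalues are untouched, which is what makes similarity transformations the basic tool of eigenvalue algorithms.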


    Written in matrix form, the defining equations for singular values and vectors are

    AV = UΣ
    AᴴU = VΣᴴ

    Here Σ is a matrix the same size as A that is zero except possibly on its main diagonal. It turns out that singular vectors can always be chosen to be perpendicular to each other, so the matrices U and V, whose columns are the normalized singular vectors, satisfy UᴴU = I and VᴴV = I. In other words, U and V are orthogonal if they are real, or unitary if they are complex. Consequently

    A = UΣVᴴ

    with diagonal Σ and orthogonal or unitary U and V . This is known as the singular value decomposition, or SVD, of the matrix A.

    In abstract linear algebra terms, eigenvalues are relevant if a square, n-by-n matrix A is thought of as mapping n-dimensional space onto itself. We try to find a basis for the space so that the matrix becomes diagonal. This basis might be complex, even if A is real. In fact, if the eigenvectors are not linearly independent, such a basis does not even exist. The singular value decomposition is relevant if a possibly rectangular, m-by-n matrix A is thought of as mapping n-space onto m-space. We try to find one change of basis in the domain and a usually different change of basis in the range so that the matrix becomes diagonal. Such bases always exist and are always real if A is real. In fact, the transforming matrices are orthogonal or unitary, so they preserve lengths and angles and do not magnify errors.

    [Diagrams of the matrix shapes in A = UΣVᴴ for the full and economy-sized versions.]

    Figure 10.1. Full and economy SVD


    If A is m-by-n with m larger than n, then in the full SVD, U is a large square m-by-m matrix. The last m − n columns of U are "extra"; they are not needed to reconstruct A. A second version of the SVD that saves computer memory if A is rectangular is known as the economy-sized SVD. In the economy version, only the first n columns of U and first n rows of Σ are computed. The matrix V is the same n-by-n matrix in both decompositions. Figure 10.1 shows the shapes of the various matrices in the two versions of the SVD. Both decompositions can be written A = UΣVᴴ, even though the U and Σ in the economy decomposition are submatrices of the ones in the full decomposition.
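The shape difference between the two versions is easy to see numerically. In NumPy the choice is made with the full_matrices flag; the 5-by-3 matrix below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))     # m-by-n with m = 5 > n = 3

# Full SVD: U is the large square m-by-m matrix.
U_full, s_full, Vh = np.linalg.svd(A, full_matrices=True)
assert U_full.shape == (5, 5)

# Economy-sized SVD: only the first n columns of U are kept.
U_econ, s_econ, Vh_econ = np.linalg.svd(A, full_matrices=False)
assert U_econ.shape == (5, 3)

# The singular values and V agree, and both versions reconstruct A:
# the extra columns of U_full multiply zero rows of Sigma.
assert np.allclose(s_full, s_econ)
assert np.allclose(A, U_econ @ np.diag(s_econ) @ Vh_econ)
```

For tall thin matrices the economy version saves substantial storage, since the discarded m − n columns of U carry no information about A.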

    10.2 A Small Example

    An example of the eigenvalue and singular value decompositions of a small square matrix is provided by one of the test matrices from the Matlab gallery.

    A = gallery(3)

    The matrix is

    A =
        −149   −50  −154
         537   180   546
         −27    −9   −25

    This matrix was constructed in such a way that the characteristic polynomial factors nicely.

    det(A − λI) = λ³ − 6λ² + 11λ − 6 = (λ − 1)(λ − 2)(λ − 3)

    Consequently the three eigenvalues are λ₁ = 1, λ₂ = 2, and λ₃ = 3, and

    Λ =
         1   0   0
         0   2   0
         0   0   3

    The matrix of eigenvectors can be normalized so that its elements are all integers.

    X =
          1   −4    7
         −3    9  −49
          0    1    9

    It turns out that the inverse of X also has integer entries.

    X⁻¹ =
        130   43  133
         27    9   28
         −3   −1   −3

    These matrices provide the eigenvalue decomposition of our example.

    A = XΛX⁻¹
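Since all of these matrices have small integer entries, the decomposition can be verified directly. A NumPy sketch using the values quoted above (gallery(3) itself is a Matlab function; here its entries are typed in):

```python
import numpy as np

# The gallery(3) matrix and its integer eigenvector matrices, as given above.
A = np.array([[-149.0, -50.0, -154.0],
              [ 537.0, 180.0,  546.0],
              [ -27.0,  -9.0,  -25.0]])
X = np.array([[ 1.0, -4.0,   7.0],
              [-3.0,  9.0, -49.0],
              [ 0.0,  1.0,   9.0]])
Xinv = np.array([[130.0, 43.0, 133.0],
                 [ 27.0,  9.0,  28.0],
                 [ -3.0, -1.0,  -3.0]])
Lam = np.diag([1.0, 2.0, 3.0])

assert np.allclose(X @ Xinv, np.eye(3))   # Xinv really is the inverse of X
assert np.allclose(A, X @ Lam @ Xinv)     # A = X Lam X^{-1}
assert np.allclose(np.sort(np.linalg.eigvals(A).real), [1.0, 2.0, 3.0])
```

The exact integer arithmetic here is special to this carefully constructed example; in general neither X nor X⁻¹ has such a tidy form.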


    The singular value decomposition of this matrix cannot be expressed so neatly with small integers. The singular values are the positive roots of the equation

    σ⁶ − 668737σ⁴ + 4096316σ² − 36 = 0

    but this equation does not factor nicely. The Symbolic Toolbox statement

    svd(sym(A))

    returns exact formulas for the singular values, but the overall length of the result is 822 characters. So, we compute the SVD numerically.

    [U,S,V] = svd(A)

    produces

    U =
       -0.2691   -0.6798    0.6822
        0.9620   -0.1557    0.2243
       -0.0463    0.7167    0.6959

    S =
      817.7597         0         0
             0    2.4750         0
             0         0    0.0030

    V =
        0.6823   -0.6671    0.2990
        0.2287   -0.1937   -0.9540
        0.6944    0.7193    0.0204

    The expression U*S*V' generates the original matrix to within roundoff error.

    For gallery(3), notice the big difference between the eigenvalues, 1, 2, and 3, and the singular values, 817, 2.47, and .003. This is related, in a way that we will make more precise later, to the fact that this example is very far from being a symmetric matrix.
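The same SVD computation can be reproduced in NumPy, again typing in the gallery(3) entries quoted above:

```python
import numpy as np

A = np.array([[-149.0, -50.0, -154.0],
              [ 537.0, 180.0,  546.0],
              [ -27.0,  -9.0,  -25.0]])

U, s, Vh = np.linalg.svd(A)

# U*S*V' regenerates the original matrix to within roundoff error.
assert np.allclose(A, U @ np.diag(s) @ Vh)

# The singular values are nowhere near the eigenvalues 1, 2, 3:
# this matrix is very far from symmetric.
assert np.allclose(s, [817.7597, 2.4750, 0.0030], atol=1e-3)
```

One sanity check: the product of the singular values equals |det A| = 6, while the product of the eigenvalues equals det A = 6, so the two sets agree in product even though the individual values differ wildly.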

    10.3 eigshow

    The function eigshow is available in the Matlab demos directory. The input to eigshow is a real, 2-by-2 matrix A, or you can choose an A from a pull-down list in the title. The default A is

    A =
        1/4   3/4
          1   1/2

    Initially, eigshow plots the unit vector x = [1, 0]’, as well as the vector Ax, which starts out as the first column of A. You can then use your mouse to move x, shown in green, around the unit circle. As you move x, the resulting Ax, shown in blue, also moves. The first four subplots in figure 10.2 show intermediate steps as x


    [Six subplots, each showing the vector x and the vector A*x at successive positions around the unit circle.]

    Figure 10.2. eigshow