Transcript of AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

Page 1

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

Lecture 19: Computing the SVD; Sparse Storage Formats

Xiangmin Jiao

Stony Brook University

Xiangmin Jiao Numerical Analysis I 1 / 13

Page 2

Outline

1 Computing the SVD (NLA§31)

2 Sparse Storage Format


Page 3

SVD of A and Eigenvalues of A∗A

Intuitive idea for computing the SVD of A ∈ R^{m×n}:

I Form A∗A and compute its eigenvalue decomposition A∗A = VΛV∗

I Let Σ = √Λ, i.e., Σ = diag(√λ1, √λ2, …, √λn)

I Solve system UΣ = AV to obtain U
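In NumPy, the three steps above can be sketched as follows (a minimal illustration with an assumed helper name `svd_via_eig`; as discussed below, this is not the stable algorithm used in practice):

```python
import numpy as np

def svd_via_eig(A):
    """Sketch: SVD of a real matrix A via the eigendecomposition of A^T A.
    Assumes all singular values are nonzero; unstable when they are small."""
    lam, V = np.linalg.eigh(A.T @ A)       # A^T A = V diag(lam) V^T
    order = np.argsort(lam)[::-1]          # sort eigenvalues descending
    lam, V = lam[order], V[:, order]
    sigma = np.sqrt(np.maximum(lam, 0.0))  # Sigma = sqrt(Lambda)
    U = (A @ V) / sigma                    # solve U Sigma = A V column by column
    return U, sigma, V
```

On a well-conditioned matrix this agrees with np.linalg.svd; the loss of accuracy appears only when σk ≪ ‖A‖.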

This method is efficient if m ≫ n.

However, it may not be stable, especially for smaller singular values, because of the squaring of the condition number

I For SVD of A, |σ̃k − σk| = O(εmachine ‖A‖), where σ̃k and σk denote the computed and exact kth singular values

I If computed from the eigenvalue decomposition of A∗A, |σ̃k − σk| = O(εmachine ‖A‖²/σk), which is problematic if σk ≪ ‖A‖

If one is interested only in relatively large singular values, then computing eigenvalues of A∗A is not a problem. For general situations, a more stable algorithm is desired.


Page 4

A Different Reduction to Eigenvalue Problem

Typical algorithms for computing the SVD are similar to those for computing eigenvalues

Consider A ∈ C^{m×n}. Then the Hermitian matrix

    H = [ 0   A∗ ]
        [ A   0  ]

has eigenvalue decomposition

    H [ V   V ]  =  [ V   V ] [ Σ   0  ]
      [ U  −U ]     [ U  −U ] [ 0  −Σ ],

where A = UΣV∗ gives the SVD. This approach is stable.
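This relation between the eigenvalues of H and the singular values of A can be checked numerically (a small sketch with an arbitrary example matrix):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 2.0],
              [0.0, 1.0]])           # any real A works; here A* = A^T
m, n = A.shape

# H = [0 A*; A 0], a Hermitian matrix of size (m+n) x (m+n)
H = np.block([[np.zeros((n, n)), A.T],
              [A, np.zeros((m, m))]])

# For m >= n, the eigenvalues of H are +sigma_i, -sigma_i, and m-n zeros
eigs = np.sort(np.linalg.eigvalsh(H))[::-1]
svals = np.linalg.svd(A, compute_uv=False)
```

The largest n eigenvalues of H are exactly the singular values of A, the smallest n are their negatives, and the remaining m − n are zero.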

In practice, such a reduction is done implicitly without forming the large matrix

Typically done in two phases


Page 5

Two-Phase Method

In the first phase, reduce to bidiagonal form by applying different orthogonal transformations on the left and right, which involves O(mn²) operations

In the second phase, reduce to diagonal form using a variant of the QR algorithm or a divide-and-conquer algorithm, which involves O(n²) operations for fixed precision

We hereafter focus on the first phase

Page 6

Golub-Kahan Bidiagonalization

Apply Householder reflectors on both left and right sides

Work for Golub-Kahan bidiagonalization ∼ 4mn² − (4/3)n³ flops
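A minimal dense sketch of the process (illustrative code, not the blocked LAPACK routine xGEBRD used in practice):

```python
import numpy as np

def house(x):
    """Householder vector v with (I - 2 vv^T/(v^T v)) x = -sign(x[0]) ||x|| e1."""
    v = np.asarray(x, dtype=float).copy()
    nrm = np.linalg.norm(v)
    if nrm == 0.0:
        v[0] = 1.0                   # degenerate case: nothing to annihilate
        return v
    v[0] += (1.0 if v[0] >= 0 else -1.0) * nrm
    return v

def golub_kahan_bidiag(A):
    """Reduce A (m >= n) to upper bidiagonal form: left reflectors zero each
    column below the diagonal, right reflectors zero each row to the right
    of the superdiagonal."""
    B = np.asarray(A, dtype=float).copy()
    m, n = B.shape
    for k in range(n):
        v = house(B[k:, k])          # left reflector acts on rows k..m-1
        B[k:, k:] -= (2.0 / (v @ v)) * np.outer(v, v @ B[k:, k:])
        if k < n - 2:
            v = house(B[k, k+1:])    # right reflector acts on columns k+1..n-1
            B[k:, k+1:] -= (2.0 / (v @ v)) * np.outer(B[k:, k+1:] @ v, v)
    return B
```

Since only orthogonal transformations are applied, the resulting bidiagonal B has the same singular values as A.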


Page 7

Lawson-Hanson-Chan Bidiagonalization

Speed up by first performing QR factorization on A

Work for LHC bidiagonalization ∼ 2mn² + 2n³ flops, which is advantageous if m ≥ (5/3)n


Page 8

Three-Step Bidiagonalization

Hybrid approach: apply QR at a suitable time on the submatrix with aspect ratio 5/3

Work for three-step bidiagonalization is ∼ 4mn² − (4/3)n³ − (2/3)(m − n)³ flops


Page 9

Comparison of Performance

(Figure comparing the operation counts of one-step (Golub-Kahan), two-step (LHC), and three-step bidiagonalization.)
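The leading-order flop counts of the three approaches can be compared directly (a small sketch encoding the formulas from the preceding slides; the three-step formula applies for n ≤ m ≤ 2n):

```python
def flops_gk(m, n):     # one-step Golub-Kahan: 4mn^2 - (4/3)n^3
    return 4*m*n**2 - (4/3)*n**3

def flops_lhc(m, n):    # two-step LHC: 2mn^2 + 2n^3
    return 2*m*n**2 + 2*n**3

def flops_3step(m, n):  # three-step hybrid, valid for n <= m <= 2n
    return 4*m*n**2 - (4/3)*n**3 - (2/3)*(m - n)**3
```

Setting m = (5/3)n makes the first two counts agree, recovering the crossover on the LHC slide; at m = n the hybrid matches Golub-Kahan, and at m = 2n it matches LHC, interpolating between them in between.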


Page 10

Outline

1 Computing the SVD (NLA§31)

2 Sparse Storage Format


Page 11

Sparse Linear System

Boundary value problems and implicit methods for time-dependent PDEs yield systems of linear algebraic equations to solve

A matrix is sparse if it has relatively few nonzero entries

Sparsity can be exploited to use far less than the O(n²) storage and O(n³) work required by the standard approach to solving a system with a dense matrix, assuming the matrix is n × n
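As a quick illustrative count (the tridiagonal matrix here, typical of a 1-D boundary value problem, is an assumed example): an n × n tridiagonal matrix has only 3n − 2 nonzeros, so storing just the nonzeros is O(n) rather than O(n²).

```python
# Entry counts for an n x n tridiagonal matrix
n = 10_000
dense_entries = n * n        # dense storage: every entry, O(n^2)
nnz = 3 * n - 2              # main diagonal plus two adjacent diagonals
ratio = nnz / dense_entries  # fraction of entries that are nonzero
```

For n = 10,000 the nonzeros occupy well under 0.1% of the dense array.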


Page 12

Storage Format of Sparse Matrices

Sparse matrices are typically stored in special formats that store only the nonzero entries, along with indices identifying their locations in the matrix, such as

I compressed-row storage (CRS)
I compressed-column storage (CCS)
I block compressed-row storage (BCRS)

Banded matrices have their own special storage formats, such as compressed diagonal storage (CDS); see the survey at http://netlib.org/linalg/html_templates/node90.html

Explicitly storing indices incurs additional storage overhead and makes arithmetic operations on the nonzeros less efficient, due to the indirect addressing needed to access operands, so these formats are beneficial only for very sparse matrices

Storage format can have a big impact on the effectiveness of different versions of the same algorithm (with different orderings of loops)

Besides direct methods, these storage formats are also important in implementing iterative and multigrid solvers


Page 13

Example of Compressed-Row Storage (CRS)
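The slide's figure is not reproduced in this transcript; the sketch below builds the three CRS arrays (values, column indices, row pointers) for a small example matrix chosen for illustration, and uses them in a matrix-vector product via indirect addressing:

```python
import numpy as np

A = np.array([[10., 0., 0., -2.],
              [ 3., 9., 0.,  0.],
              [ 0., 7., 8.,  7.],
              [ 0., 0., 1.,  5.]])   # example matrix (not from the slide)

val     = []   # nonzero values, scanned row by row
col_ind = []   # column index of each stored value
row_ptr = [0]  # row i occupies val[row_ptr[i]:row_ptr[i+1]]
for row in A:
    for j, a in enumerate(row):
        if a != 0.0:
            val.append(a)
            col_ind.append(j)
    row_ptr.append(len(val))

def crs_matvec(val, col_ind, row_ptr, x):
    """y = A x using only stored nonzeros; note the indirect access x[col_ind[k]]."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += val[k] * x[col_ind[k]]
    return y
```

Only the 9 nonzeros are stored instead of 16 dense entries, at the cost of the two index arrays.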
