
Lecture 8: Time discretization and some matrix algebra

October 13, 2015

1 Goal

The goal of this lecture is to discuss more sophisticated time-marching schemes, and in particular, the major distinction between explicit and implicit methods.


2 Formal time marching

φ(t + h) = e^{Lh} φ(t)

How to evaluate T_h ≡ e^{Lh} in practice? A natural way is to expand the exponential in powers of Lh:

T_h ≈ 1 + hL + h^2 L^2/2 + …
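
As an aside (not part of the original notes), the following Python sketch compares this truncated expansion with the full matrix exponential for a small 1D diffusion operator; the grid size, diffusivity and time step are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative sketch: truncated series vs full matrix exponential.
# Grid size, diffusivity and time step are arbitrary assumptions.
N, D, dx, h = 50, 1.0, 1.0, 0.1

# 1D diffusion operator L = D d^2/dx^2 on a periodic grid
L = np.zeros((N, N))
for i in range(N):
    L[i, i] = -2.0
    L[i, (i - 1) % N] = 1.0
    L[i, (i + 1) % N] = 1.0
L *= D / dx**2

# Second-order truncated propagator T_h ~ 1 + hL + (hL)^2 / 2
T_series = np.eye(N) + h * L + 0.5 * (h * L) @ (h * L)

# Reference propagator e^{Lh}
T_exact = expm(h * L)

print("max deviation of truncated propagator:", np.abs(T_series - T_exact).max())
```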

In the discretized version L is usually a sparse matrix, and so are all its powers. The method is conceptually simple because all matrices on the rhs are explicitly known. However, the bandwidth grows with the exponent, so higher-order expansions are simply too expensive. Better ways can be found by discretizing the time integral as follows.

φ(t + h) − φ(t) = ∫_t^{t+h} L φ(t′) dt′

Assume L independent of time for simplicity. The simplest approximation is:

φ(t + h) − φ(t) = h L φ(t)

which corresponds to the Euler scheme discussed in the previous lecture. This is the simplest explicit method.

Consider next the trapezoidal rule

φ(t + h) − φ(t) = h L (φ(t) + φ(t + h))/2

This is second-order accurate and, much more importantly, involves the unknown at t + h on the rhs as well: thus the future on the lhs depends on the future on the rhs, a simultaneous dependence. This is the prototypical implicit method.

Formally, life is easy, as one simply inverts:

φ(t + h) = (1 − hL/2)^{-1} (1 + hL/2) φ(t)

which is the well-known Crank-Nicolson (CN) method. It can be readily shown that the norm of the CN propagator (1 + hL/2)(1 − hL/2)^{-1} never exceeds 1 whenever the eigenvalues of L have non-positive real part (as for diffusion), so the method is unconditionally stable!

This is in stark contrast with the Euler method, which is stable only under the condition h|L| < 1, where |L| denotes the norm of the Liouville operator, basically its largest eigenvalue.

However, stability comes at a major price, namely the inverse matrix (1 − hL/2)^{-1} is a full matrix even if L is sparse. Thus, CN allows large steps without losing stability (beware accuracy though), but requires a matrix-algebra solver at each step. Explicit methods, on the other hand, are subject to strong stability constraints, hence they march in small steps, each of which can however be completed by simple matrix-vector products, matrix-free.
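
As a rough illustration of this trade-off (a sketch, not the notes' code), the following Python fragment marches 1D diffusion with both schemes up to the same physical time; the grid size, diffusivity and step sizes are assumptions chosen for convenience.

```python
import numpy as np

# Explicit Euler vs Crank-Nicolson for 1D diffusion (illustrative parameters).
N, D = 100, 1.0
dx = 1.0 / (N - 1)

# Discrete Laplacian with homogeneous Dirichlet boundaries
L = (np.diag(-2.0 * np.ones(N)) +
     np.diag(np.ones(N - 1), 1) +
     np.diag(np.ones(N - 1), -1)) * D / dx**2

x = np.linspace(0.0, 1.0, N)
phi0 = np.exp(-100.0 * (x - 0.5) ** 2)      # initial Gaussian bump
I = np.eye(N)

# Explicit Euler: stability demands a small step, roughly h < dx^2 / (2 D)
h_e = 0.4 * dx**2 / D
phi_e = phi0.copy()
for _ in range(1000):
    phi_e = phi_e + h_e * (L @ phi_e)        # matrix-vector product only

# Crank-Nicolson: one linear solve per step, but stable at much larger h
h_cn = 50.0 * h_e
phi_cn = phi0.copy()
for _ in range(20):
    phi_cn = np.linalg.solve(I - 0.5 * h_cn * L,
                             (I + 0.5 * h_cn * L) @ phi_cn)

# Both reach the same physical time; CN took 50x fewer (but costlier) steps
print("time:", 1000 * h_e, "max difference:", np.abs(phi_e - phi_cn).max())
```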

Which one to choose? This depends on the problem (and the computers at hand).


If steady-state solutions are the only target, it might be worthwhile going through matrix algebra and computing the solution in just a few very large steps. If, on the other hand, the transient dynamics is of interest (more and more the case in modern CFD), then explicit methods may serve us better.

Parallel computers favor explicit methods.

3 Solving matrix problems

We consider the matrix problem

Ax = b

with x a state vector of dimension N.

Formally, x = inv(A)*b solves the problem, but computing the inverse of an N×N matrix is unviable (the naive cofactor expansion scales as N!, so don't be fooled by symbolic solvers). Practical methods split into two major categories:

- Direct
- Iterative

Direct methods provide "exact" solutions, i.e. exact to machine roundoff, while iterative methods content themselves with approximate solutions within a user-prescribed tolerance.

3.1 Direct methods: Gauss elimination

The matrix is transformed into so-called LU form, i.e. A = LU, where L is lower-triangular and U is upper-triangular. Once in this form, the pair of triangular systems

Ly = b

and

Ux = y

delivers the solution. The former can be solved sequentially top-to-bottom (forward pass), and the latter likewise, bottom-to-top (backward pass).
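
A minimal sketch of the two triangular sweeps (an illustration, not the notes' code), assuming the LU factors are already available, here obtained from scipy.linalg.lu:

```python
import numpy as np
from scipy.linalg import lu

def forward_pass(L, b):
    """Solve Ly = b sequentially top-to-bottom (L lower-triangular)."""
    y = np.zeros_like(b, dtype=float)
    for i in range(len(b)):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def backward_pass(U, y):
    """Solve Ux = y sequentially bottom-to-top (U upper-triangular)."""
    x = np.zeros_like(y, dtype=float)
    for i in reversed(range(len(y))):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Small random example (size and data are arbitrary)
A = np.random.rand(5, 5) + 5.0 * np.eye(5)
b = np.random.rand(5)
P, L, U = lu(A)                      # A = P L U (P is a permutation)
x = backward_pass(U, forward_pass(L, P.T @ b))
print(np.allclose(A @ x, b))
```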

However, the transformation A = LU has N B^2 complexity, where B is the bandwidth of the matrix, the maximum distance between two non-zero elements in a row, zeros included. The point is that for multidimensional problems B is much larger than the number of non-zero elements per row, call it z. Why? Because computer storage is linear, hence physically contiguous grid points may be separated by large strides in computer memory. This strongly depends on the way variables are numbered.

Example: a two-dimensional rectangle with 10 × 3 grid points:

21 22 23 24 25 26 27 28 29 30

11 12 13 14 15 16 17 18 19 20


1 2 3 4 5 6 7 8 9 10

With a five-point stencil (say a Laplacian operator), the unknown 15 interacts with its left/right and up/down neighbors, that is 14, 16, 25, 5, plus itself. Since the zeros in between need to be stored, the bandwidth is 25 − 5 = 20, even though there are only 5 nonzero elements per row in the matrix. Since we have numbered xy (x running fastest), the bandwidth is 2Nx = 20; had we numbered yx, the bandwidth would have been 2Ny = 6, much better. Thus, numbering is key with Gaussian elimination, and optimal numbering is a very complex problem for real-life geometries. This motivates the use of iterative methods.
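
The effect of the numbering can be checked directly; the sketch below (not part of the notes, and the helper name is hypothetical) assembles the 5-point Laplacian on the 10 × 3 grid with both numberings and measures the bandwidth as defined above.

```python
import numpy as np

def bandwidth_5point(nx, ny, x_fastest=True):
    """Assemble the 5-point Laplacian on an nx-by-ny grid and return the
    bandwidth (largest column spread of the nonzeros in a row, zeros included)."""
    N = nx * ny
    idx = (lambda i, j: j * nx + i) if x_fastest else (lambda i, j: i * ny + j)
    A = np.zeros((N, N))
    for j in range(ny):
        for i in range(nx):
            row = idx(i, j)
            A[row, row] = -4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < nx and 0 <= jj < ny:
                    A[row, idx(ii, jj)] = 1.0
    bw = 0
    for r in range(N):
        cols = np.nonzero(A[r])[0]
        bw = max(bw, int(cols.max() - cols.min()))
    return bw

print("xy numbering:", bandwidth_5point(10, 3, x_fastest=True))    # 2*Nx = 20
print("yx numbering:", bandwidth_5point(10, 3, x_fastest=False))   # 2*Ny = 6
```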

4 Iterative methods

Here we seek approximate solutions up to a given tolerance θ, i.e. the process terminates whenever the distance between two successive approximations falls below the desired tolerance:

E_k ≡ |x_{k+1} − x_k| / |x_k| ≤ θ

An alternative stopping criterion is that the residual falls below a threshold, i.e.

R_k ≡ |A x_k − b| ≤ ε

The main appeal of iterative methods is that they do not require any matrixalgebra, only explicit matrix-vector products.

Thus, the complexity of an iterative solver can be estimated as

C_it = k z N N_it

where z is the connectivity (number of non-zero elements per row), k a numerical factor of order 1-10, and N_it the number of iterations required to converge. This wins over Gauss elimination under the condition:

N_it < B^2 / (k z)

Given that B ≫ z and k ≪ B in d > 1 spatial dimensions, one expects iterative solvers to outdo Gaussian elimination most of the time under reasonable tolerance constraints (typically 10^{-6} or so). This is however strongly dependent on the type of iterator and the "quality" of the matrix A. The best matrices are positive-definite with a small spread between the eigenvalues (well-conditioned), as they arise from smooth operators such as the Laplacian. Ill-conditioned matrices, such as those arising from advection, on the other hand, may take much longer to converge.

There are many families of iterative methods; here we briefly survey the main ones: splitting and gradient methods.


4.1 Splitting methods

The idea is to split A = P + Q, so that the iteration loop looks like:

P x_{k+1} = b − Q x_k

The idea is to choose P in such a way that the corresponding system can be solved easily.

The simplest splitting method is the Jacobi iterator, in which P = D = Diag(A), so that the iteration loop is:

D x_{k+1} = b − (L + R) x_k

This is very simple but slow, unless A is strongly diagonally dominant, i.e.

|a_{ii}| > Σ_{j≠i} |a_{ij}|

Note that Laplacians give rise to diagonally dominant matrices, while gradients absolutely do not (their diagonal is zero)!

A straightforward improvement on Jacobi is the Gauss-Seidel splitting P = L + D, Q = R, where L = LT(A) is the lower (left) triangular part of A and R = RT(A) the upper (right) triangular part.

Gauss-Seidel:

(L + D) x_{k+1} = b − R x_k
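
A minimal Python sketch of the two splittings (an illustration with assumed test data, not the notes' code), using the stopping criterion E_k defined above:

```python
import numpy as np

def jacobi(A, b, tol=1e-6, max_iter=10000):
    """Jacobi splitting: D x_{k+1} = b - (L + R) x_k."""
    D = np.diag(A)
    LR = A - np.diag(D)                    # off-diagonal part L + R
    x = np.zeros_like(b, dtype=float)
    for k in range(max_iter):
        x_new = (b - LR @ x) / D
        if np.linalg.norm(x_new - x) <= tol * np.linalg.norm(x_new):
            return x_new, k
        x = x_new
    return x, max_iter

def gauss_seidel(A, b, tol=1e-6, max_iter=10000):
    """Gauss-Seidel splitting: (L + D) x_{k+1} = b - R x_k."""
    LD = np.tril(A)                        # L + D
    R = np.triu(A, 1)                      # strictly upper part
    x = np.zeros_like(b, dtype=float)
    for k in range(max_iter):
        x_new = np.linalg.solve(LD, b - R @ x)   # lower-triangular solve
        if np.linalg.norm(x_new - x) <= tol * np.linalg.norm(x_new):
            return x_new, k
        x = x_new
    return x, max_iter

# Test problem: 1D Laplacian, weakly diagonally dominant (size is arbitrary)
N = 20
A = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
b = np.ones(N)
for name, solver in (("Jacobi", jacobi), ("Gauss-Seidel", gauss_seidel)):
    x, its = solver(A, b)
    print(name, "iterations:", its, "residual:", np.linalg.norm(A @ x - b))
```

On this kind of matrix, Gauss-Seidel typically needs roughly half the iterations of Jacobi.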

4.2 Acceleration by Successive Relaxation

Splitting methods can be considerably accelerated by adding a relaxation step. Once the k + 1 iterate is obtained by splitting, one further relaxes according to:

x_{k+1,∗} = (1 − ω) x_k + ω x_{k+1}

In other words, the k + 1 solution is a blend of the previous value x_k and the newly computed one x_{k+1}. More precisely, in the so-called under-relaxation (SUR) range, 0 < ω < 1, the relaxation is conservative, i.e. it pulls back towards the previous value. This is useful if it is perceived that the iteration is overshooting the asymptotic value, typically signalled by oscillatory behavior. The other regime, 1 < ω < 2, denoted as over-relaxation (SOR), does precisely the opposite, i.e. it takes the solution beyond the new value. This is useful if the convergence is perceived to be too slow.
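
Building on the Gauss-Seidel sketch above (again only an illustration; ω = 1.5 is an arbitrary over-relaxation choice), the relaxation amounts to one extra blending line per iteration, exactly as in the formula above:

```python
import numpy as np

def sor(A, b, omega=1.5, tol=1e-6, max_iter=10000):
    """Gauss-Seidel update followed by the relaxation blend
    x_{k+1,*} = (1 - omega) x_k + omega x_{k+1}."""
    LD = np.tril(A)                              # L + D
    R = np.triu(A, 1)                            # strictly upper part
    x = np.zeros_like(b, dtype=float)
    for k in range(max_iter):
        x_gs = np.linalg.solve(LD, b - R @ x)    # plain Gauss-Seidel update
        x_new = (1.0 - omega) * x + omega * x_gs # relaxation step
        if np.linalg.norm(x_new - x) <= tol * np.linalg.norm(x_new):
            return x_new, k
        x = x_new
    return x, max_iter

# Same 1D Laplacian test problem as above
N = 20
A = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
b = np.ones(N)
x, its = sor(A, b, omega=1.5)
print("SOR iterations:", its, "residual:", np.linalg.norm(A @ x - b))
```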

An elegant approximation theory can be developed to predict the optimal value of the relaxation parameter at each iteration (Chebyshev relaxation). Essentially, the idea is to minimize the spectral radius of the iteration matrix, i.e. the magnitude of its largest eigenvalue.


4.3 Gradient methods

Splitting methods, even with SOR/SUR acceleration, are often too slow for practical applications. In principle one would like a method which is guaranteed to converge in at most N iterations, N being the size of the problem.

A completely different and powerful class of methods is based on the idea that solving the linear system Ax = b is equivalent to minimizing the following energy functional

E(x, x) = (1/2)(x, Ax) − (b, x)

where (x, y) stands for the scalar product in the N-dimensional space R^N, i.e. Σ_{j=1}^N x_j y_j. Note indeed that the gradient of this functional, g ≡ ∂_x E = Ax − b, vanishes exactly at the solution x^∗ of Ax^∗ = b, so x^∗ is the minimizer (for A symmetric positive-definite). It is then natural to think of reaching this minimum by moving along the steepest-descent direction defined by the gradient, which coincides with the residual r = Ax − b. This leads to a fictitious-time steepest-descent dynamics

ẋ = −g

In terms of discrete iterations

x_{k+1} = x_k − α_k g_k

where the parameter α_k decides how far one moves along the gradient direction. To pin down this value, one simply notes that the restriction of the energy functional to the line through x_k and x_{k+1} is a quadratic function of α, so that the optimal α is identified by requiring ∂E/∂α = 0. This yields:

α_k = (r_k, r_k) / (r_k, A r_k)

The two relations above, together with the definition r_k = A x_k − b, are fully operational and define the steepest-descent (gradient) method.
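
A compact sketch of the resulting loop (an illustration assuming A symmetric positive-definite; the test data are arbitrary):

```python
import numpy as np

def steepest_descent(A, b, tol=1e-8, max_iter=10000):
    """Steepest descent: x_{k+1} = x_k - alpha_k r_k with
    alpha_k = (r_k, r_k) / (r_k, A r_k), for symmetric positive-definite A."""
    x = np.zeros_like(b, dtype=float)
    for k in range(max_iter):
        r = A @ x - b                        # residual = gradient of E
        if np.linalg.norm(r) <= tol:
            return x, k
        alpha = (r @ r) / (r @ (A @ r))      # optimal step along -r
        x = x - alpha * r
    return x, max_iter

# SPD test problem: 1D Laplacian (size is arbitrary)
N = 20
A = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
b = np.ones(N)
x, its = steepest_descent(A, b)
print("iterations:", its, "residual:", np.linalg.norm(A @ x - b))
```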

The method is very simple, but has some shortcomings. First, it is slow near completion, simply because the gradient tends to vanish as the solution is approached, since by definition g = r = 0 as x → x^∗. Moreover, it is sensitive to the positiveness of the matrix: if the search were to meet a negative eigenvalue, the "trajectory" would move away from the solution. Even with positive-definite matrices, good conditioning is required, namely the ratio

κ = λ_max / λ_min

should be as close to 1 as possible. Otherwise, small roundoff errors would result in very inaccurate moves.


4.4 Conjugate gradient

Moving along the gradients does not guarantee that the solution is reached in N steps, even assuming exact arithmetic. Moving along the conjugate gradient (CG) directions provides this important property. The CG direction is defined by the A-orthogonality condition

(c, At) = 0

where t is the tangent to the isosurface E(x, x) = const at point x. Clearly this differs from the gradient, which is defined by (g, t) = 0.

In two dimensions, a move along g followed by a move along c lands the trajectory on the exact solution. The same remains true in N dimensions, provided the search directions are chosen in such a way that each new direction is conjugate (A-orthogonal) to the previous one, while the sequence of residuals remains orthogonal. The sequence of search directions p_0, p_1, … is generated as follows:

1. Initialize r_0 = b − A x_0, p_0 = r_0;

2. Update

α_0 = (r_0, r_0)/(p_0, A p_0);

x_1 = x_0 + α_0 p_0;

r_1 = r_0 − α_0 A p_0;

3. Convergence test

If |r_1| < ε: Stop

Else: Prepare a new cycle

β_0 = (r_1, r_1)/(r_0, r_0);

p_1 = r_1 + β_0 p_0;

Go to Update
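
A compact sketch of the full CG loop, generalizing the cycle above to step k (again an illustration, assuming A symmetric positive-definite):

```python
import numpy as np

def conjugate_gradient(A, b, eps=1e-8, max_iter=None):
    """Conjugate gradient for symmetric positive-definite A;
    converges in at most N steps in exact arithmetic."""
    N = len(b)
    max_iter = N if max_iter is None else max_iter
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    for k in range(max_iter):
        if np.sqrt(rr) < eps:                # convergence test on |r_k|
            return x, k
        Ap = A @ p
        alpha = rr / (p @ Ap)                # step length along p_k
        x = x + alpha * p
        r = r - alpha * Ap
        rr_new = r @ r
        beta = rr_new / rr                   # prepares the next search direction
        p = r + beta * p                     # new A-conjugate direction
        rr = rr_new
    return x, max_iter

# Same SPD test problem as before (1D Laplacian)
N = 20
A = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
b = np.ones(N)
x, its = conjugate_gradient(A, b)
print("iterations:", its, "residual:", np.linalg.norm(A @ x - b))
```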

5 Exercises

1. Write a computer program to solve the diffusion equation; test the accuracy and stability upon changing the time-marching scheme: Euler vs Crank-Nicolson.

2. Solve the diffusion equation using the Jacobi and Gauss-Seidel methods.

3. Same with the advection-diffusion equation.
