
  • Numerical Analysis - Part II

    Anders C. Hansen

    Lecture 15


  • Spectral Methods


  • Chebyshev polynomials

    The Chebyshev polynomial of degree n is defined as

    Tn(x) := cos(n arccos x), x ∈ [−1, 1],

    or, in a more instructive form,

    Tn(x) := cos nθ, x = cos θ, θ ∈ [0, π]. (1)


  • The three-term recurrence relation

    1) The sequence (Tn) obeys the three-term recurrence relation

    T0(x) ≡ 1,  T1(x) = x,  Tn+1(x) = 2x Tn(x) − Tn−1(x),  n ≥ 1,

    in particular, Tn is indeed an algebraic polynomial of degree n, with the
    leading coefficient 2^{n−1}. (The recurrence is due to the equality
    cos(n+1)θ + cos(n−1)θ = 2 cos θ cos nθ via the substitution x = cos θ; the
    expressions for T0 and T1 are straightforward.)

    The recurrence yields

    T0(x) = 1,  T1(x) = x,  T2(x) = 2x² − 1,  T3(x) = 4x³ − 3x,  . . . ,

    and Tn is called the nth Chebyshev polynomial (of the first kind).
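
    A minimal sketch (not part of the lecture notes; plain NumPy, with the degree
    and sample points chosen arbitrarily) that generates T0, . . . , TN by the
    three-term recurrence and checks the defining formula Tn(x) = cos(n arccos x):

    ```python
    import numpy as np

    def chebyshev_T(N, x):
        """Return [T_0(x), ..., T_N(x)] via T_{n+1} = 2x T_n - T_{n-1}."""
        T = [np.ones_like(x), x]
        for n in range(1, N):
            T.append(2 * x * T[n] - T[n - 1])
        return T[:N + 1]

    x = np.linspace(-1, 1, 7)          # arbitrary sample points in [-1, 1]
    for n, Tn in enumerate(chebyshev_T(5, x)):
        assert np.allclose(Tn, np.cos(n * np.arccos(x)))
    ```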


  • Chebyshev polynomials - How they look

    Figure: Chebyshev polynomials


  • Chebyshev polynomials are orthogonal

    2) Also, (Tn) form a sequence of orthogonal polynomials with respect to the
    inner product (f, g)_w := ∫_{−1}^{1} f(x) g(x) w(x) dx, with the weight
    function w(x) := (1 − x²)^{−1/2}. Namely, we have

    (Tn, Tm)_w = ∫_{−1}^{1} Tm(x) Tn(x) dx/√(1 − x²) = ∫_{0}^{π} cos mθ cos nθ dθ

               =  π,    m = n = 0,
                  π/2,  m = n ≥ 1,
                  0,    m ≠ n.                                            (2)
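
    As a quick numerical sanity check (a sketch, not from the slides; the number
    of quadrature points is an arbitrary choice), the three values in (2) can be
    reproduced by approximating ∫_0^π cos(mθ) cos(nθ) dθ with a Riemann sum:

    ```python
    import numpy as np

    M = 100000
    theta = (np.arange(M) + 0.5) * np.pi / M     # midpoints on [0, π]
    h = np.pi / M

    def inner(m, n):
        return np.sum(np.cos(m * theta) * np.cos(n * theta)) * h

    print(inner(0, 0), np.pi)        # m = n = 0  →  π
    print(inner(3, 3), np.pi / 2)    # m = n ≥ 1  →  π/2
    print(inner(2, 5))               # m ≠ n      →  ≈ 0
    ```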


  • Chebyshev expansion

    Since (Tn)_{n=0}^{∞} form an orthogonal sequence, a function f such that
    ∫_{−1}^{1} |f(x)|² w(x) dx < ∞ can be expanded into its Chebyshev series

    f(x) = Σ_{n=0}^{∞} f̆n Tn(x),  where  f̆0 = (1/π)(f, T0)_w,  f̆n = (2/π)(f, Tn)_w,  n ≥ 1.      (3)

  • Connection to the Fourier expansion

    Letting x = cos θ and g(θ) = f(cos θ), we obtain

    ∫_{−1}^{1} f(x) Tn(x) dx/√(1 − x²) = ∫_{0}^{π} f(cos θ) Tn(cos θ) dθ = (1/2) ∫_{−π}^{π} g(θ) cos nθ dθ.      (4)

    Given that cos nθ = ½ (e^{inθ} + e^{−inθ}), and using the Fourier expansion of
    the 2π-periodic function g,

    g(θ) = Σ_{n∈Z} ĝn e^{inθ},  where  ĝn = (1/2π) ∫_{−π}^{π} g(t) e^{−int} dt,  n ∈ Z,

    we continue (4) as

    ∫_{−1}^{1} f(x) Tn(x) dx/√(1 − x²) = (π/2)(ĝ−n + ĝn),

    and from (3) we deduce that

    f̆n = ĝ0 for n = 0,   and   f̆n = ĝ−n + ĝn for n ≥ 1.


  • Properties of the Chebyshev expansion

    As we have seen, for a general integrable function f, the computation of its
    Chebyshev expansion is equivalent to the Fourier expansion of the function
    g(θ) = f(cos θ). Since the latter is periodic with period 2π, we can use a
    discrete Fourier transform (DFT) to compute the Chebyshev coefficients f̆n.
    [Actually, based on this connection, one can perform a direct fast Chebyshev
    transform.]

    Also, if f can be analytically extended from [−1, 1] (to the so-called
    Bernstein ellipse), then f̆n decays spectrally fast for n ≫ 1 (with the rate
    depending on the size of the ellipse). Hence, the Chebyshev expansion inherits
    the rapid convergence of spectral methods without assuming that f is periodic.
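
    To make the DFT connection concrete, here is a minimal sketch (not the
    lecturer's code; plain NumPy, with the truncation N and the test function e^x
    chosen for illustration) that approximates f̆0, . . . , f̆N by a real FFT of
    the even periodic extension of g(θ) = f(cos θ) sampled at θ_j = πj/N, i.e. at
    the Chebyshev points x_j = cos(πj/N):

    ```python
    import numpy as np

    def chebyshev_coeffs(f, N):
        """Approximate the Chebyshev coefficients f̆_0..f̆_N of f on [-1, 1]."""
        j = np.arange(N + 1)
        v = f(np.cos(np.pi * j / N))                        # samples at Chebyshev points
        # even 2π-periodic extension of g(θ) = f(cos θ), then a real FFT (a DCT-I)
        V = np.fft.rfft(np.concatenate([v, v[-2:0:-1]]))    # input length 2N
        c = V.real / N
        c[0] /= 2
        c[N] /= 2
        return c

    # usage: the coefficients of e^x decay spectrally fast, as discussed above
    print(chebyshev_coeffs(np.exp, 32)[:6])
    ```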


  • The algebra of Chebyshev expansions

    Let B be the set of analytic functions in [−1, 1] that can be extended
    analytically into the complex plane. We identify each such function with its
    Chebyshev expansion. Like the set A, the set B is a linear space and is closed
    under multiplication. In particular, we have

    Tm(x) Tn(x) = cos(mθ) cos(nθ) = ½ [cos((m − n)θ) + cos((m + n)θ)]
                = ½ [T_{|m−n|}(x) + T_{m+n}(x)],

    and hence,

    f(x) g(x) = Σ_{m=0}^{∞} f̆m Tm(x) · Σ_{n=0}^{∞} ğn Tn(x)
              = ½ Σ_{m,n=0}^{∞} f̆m ğn [T_{|m−n|}(x) + T_{m+n}(x)]
              = ½ Σ_{m,n=0}^{∞} f̆m (ğ_{|m−n|} + ğ_{m+n}) Tn(x).
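
    A one-line numerical spot check of the product rule
    Tm Tn = ½ (T_{|m−n|} + T_{m+n}) (a sketch; the degrees m, n and the sample
    grid are arbitrary choices):

    ```python
    import numpy as np

    def T(n, x):
        return np.cos(n * np.arccos(x))

    x = np.linspace(-1, 1, 101)
    m, n = 4, 7
    assert np.allclose(T(m, x) * T(n, x), 0.5 * (T(abs(m - n), x) + T(m + n, x)))
    ```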


  • Derivatives of Chebyshev polynomials

    Lemma 1 (Derivatives of Chebyshev polynomials)

    We can express the derivatives T′n in terms of (Tk) as follows:

    T′_{2n}(x) = (2n) · 2 Σ_{k=1}^{n} T_{2k−1}(x),                         (5)

    T′_{2n+1}(x) = (2n + 1) [ T0(x) + 2 Σ_{k=1}^{n} T_{2k}(x) ].           (6)
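
    A minimal sketch (an assumption: NumPy's Chebyshev class is used as an
    independent reference for the derivative) that checks (5) and (6) for one
    value of n:

    ```python
    import numpy as np
    from numpy.polynomial import Chebyshev as C

    n = 3
    # (5): derivative of T_{2n} vs (2n) * 2 * sum_{k=1}^{n} T_{2k-1}
    lhs = C.basis(2 * n).deriv()
    rhs = (2 * n) * 2 * sum(C.basis(2 * k - 1) for k in range(1, n + 1))
    assert np.allclose(lhs.coef, rhs.coef)

    # (6): derivative of T_{2n+1} vs (2n+1) * [T_0 + 2 * sum_{k=1}^{n} T_{2k}]
    lhs = C.basis(2 * n + 1).deriv()
    rhs = (2 * n + 1) * (C.basis(0) + 2 * sum(C.basis(2 * k) for k in range(1, n + 1)))
    assert np.allclose(lhs.coef, rhs.coef)
    ```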


  • Derivatives of Chebyshev polynomials

    Proof. From (1), we deduce

    Tm(x) = cos mθ   ⇒   T′m(x) = m sin mθ / sin θ,   x = cos θ.

    So, for m = 2n, (5) follows from the identity

    sin 2nθ / sin θ = 2 Σ_{k=1}^{n} cos(2k−1)θ,

    which is verified as

    2 sin θ Σ_{k=1}^{n} cos(2k−1)θ = Σ_{k=1}^{n} 2 cos(2k−1)θ sin θ
                                   = Σ_{k=1}^{n} [ sin 2kθ − sin(2k−2)θ ] = sin 2nθ,

    where the last sum telescopes.


  • Derivatives of Chebyshev polynomials

    Proof (cont.) For m = 2n + 1, (6) turns into the identity

    sin(2n + 1)θ / sin θ = 1 + 2 Σ_{k=1}^{n} cos 2kθ,

    and that follows from

    sin θ ( 1 + 2 Σ_{k=1}^{n} cos 2kθ ) = sin θ + Σ_{k=1}^{n} [ sin(2k+1)θ − sin(2k−1)θ ] = sin(2n + 1)θ.


  • Application to PDEs

    Remark 2 (Application to PDEs)

    With Lemma 1, all derivatives of u can be expressed in an explicit form as a
    Chebyshev expansion (cf. Exercise 19 on the Example Sheets). For the
    computation of the Chebyshev coefficients, the function f has to be sampled at
    the so-called Chebyshev points cos(2πk/N), k = −N/2 + 1, . . . , N/2. This
    results in a grid which is denser towards the edges. For elliptic problems
    this is not problematic; however, for initial value PDEs such grids can cause
    numerical instabilities.


  • Chebyshev expansion for the derivatives

    For an analytic function u, the coefficients ŭ^{(k)}_n of the Chebyshev
    expansion of its derivatives are given by the following recursion,

    ŭ^{(k)}_n = c_n Σ_{m = n+1, n+m odd}^{∞} m ŭ^{(k−1)}_m ,   ∀ k ≥ 1,

    where c_0 = 1 and c_n = 2 for n ≥ 1. This can be derived from Lemma 1 (the
    case k = 1 is the topic of Exercise 19 on the Example Sheets).
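
    A minimal sketch (not from the lecture; the truncation length and the random
    test coefficients are assumptions) that applies the recursion once (k = 1) to
    a truncated expansion and compares it with NumPy's Chebyshev derivative
    routine:

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as cheb

    def cheb_deriv_coeffs(u):
        """Coefficients of (sum u_n T_n)' via u'_n = c_n * sum_{m>n, m+n odd} m u_m."""
        N = len(u) - 1
        d = np.zeros(N)                  # the derivative has degree N - 1
        for n in range(N):
            c_n = 1.0 if n == 0 else 2.0
            d[n] = c_n * sum(m * u[m] for m in range(n + 1, N + 1) if (m + n) % 2 == 1)
        return d

    u = np.random.default_rng(0).standard_normal(8)     # arbitrary test coefficients
    assert np.allclose(cheb_deriv_coeffs(u), cheb.chebder(u))
    ```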


  • The spectral method for evolutionary PDEs

    We consider the problem

    ∂u(x, t)/∂t = L u(x, t),   x ∈ [−1, 1],  t ≥ 0,
    u(x, 0) = g(x),            x ∈ [−1, 1],                            (7)

    with appropriate boundary conditions on {−1, 1} × R+ and where L is a linear
    operator, e.g. a differential operator (acting on the x variable). We want to
    solve this problem by the method of lines (semi-discretization), using a
    spectral method for the approximation of u and its derivatives in the spatial
    variable x. Then, in a general spectral method, we seek solutions uN(x, t) with

    uN(x, t) = Σ_n c_n(t) ϕ_n(x),                                       (8)

    where c_n(t) are expansion coefficients and ϕ_n are basis functions chosen
    according to the specific structure of (7).


  • The spectral method for evolutionary PDEs

    For example, we may take:

    1) the Fourier expansion, with c_n(t) = ûn(t), ϕ_n(x) = e^{iπnx}, for periodic
    boundary conditions;

    2) a polynomial expansion such as the Chebyshev expansion, with c_n(t) = ŭn(t),
    ϕ_n(x) = Tn(x), for other boundary conditions.

    The spectral approximation in space (8) results in an N×N system of ODEs for
    the expansion coefficients {c_n(t)}:

    c′ = Bc,                                                            (9)

    where B ∈ R^{N×N} and c = {c_n(t)} ∈ R^N. We can solve it with standard ODE
    solvers (Euler, Crank–Nicolson, etc.), which, as we have seen, are
    approximations to the matrix exponential in the exact solution c(t) = e^{tB} c(0).


  • The diffusion equation

    Consider the diffusion equation for a function u = u(x, t),

    u_t = u_xx,        (x, t) ∈ [−1, 1] × R+,
    u(x, 0) = g(x),    x ∈ [−1, 1],                                     (10)

    with the periodic boundary conditions u(−1, t) = u(1, t), u_x(−1, t) = u_x(1, t),
    and the standard normalisation ∫_{−1}^{1} u(x, t) dx = 0, both imposed for all
    values t ≥ 0.

    For each t, we approximate u(x, t) by its N-th order partial Fourier sum in x,

    u(x, t) ≈ uN(x, t) = Σ_{n∈ΓN} ûn(t) e^{iπnx},   ΓN := {−N/2 + 1, . . . , N/2}.


  • The diffusion equation

    Then, from (10), we see that each coefficient ûn fulfills the ODE

    û′n(t) = −π²n² ûn(t),   n ∈ ΓN.                                     (11)

    Its exact solution is ûn(t) = e^{−π²n²t} ĝn for n ≠ 0, and we set û0(t) = 0
    due to the normalisation condition, so that

    uN(x, t) = Σ_{n∈ΓN} ĝn e^{−π²n²t} e^{iπnx},

    which is the exact solution truncated to N terms.

    Here, we were able to find the exact solution without solving the ODEs
    numerically, due to the special structure of the Laplacian. However, for more
    general PDEs we will need a numerical method, and thus the issue of stability
    arises, so we consider this issue on this simplified example.
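
    A minimal sketch of this Fourier spectral solution (not from the notes; the
    grid size N, the smooth zero-mean initial data, and the use of NumPy's FFT
    with mode numbers taken from fftfreq are all assumptions made for
    illustration):

    ```python
    import numpy as np

    N = 64                                   # number of Fourier modes / grid points
    x = -1 + 2 * np.arange(N) / N            # uniform grid on [-1, 1), period 2
    g = np.sin(np.pi * x) + 0.5 * np.sin(3 * np.pi * x)   # zero-mean initial data

    g_hat = np.fft.fft(g) / N                # discrete Fourier coefficients of g
    n = np.fft.fftfreq(N, d=1.0 / N)         # mode numbers n

    def u_N(t):
        """Grid values of u_N(x, t) = Σ ĝ_n e^{-π²n²t} e^{iπnx}."""
        u_hat = g_hat * np.exp(-np.pi**2 * n**2 * t)
        return np.real(np.fft.ifft(u_hat) * N)

    print(np.max(np.abs(u_N(0) - g)))        # ≈ 0: at t = 0 we recover g
    # at t = 0.1 the n = ±3 modes have essentially died out, so u_N is close to
    # the slowly decaying n = ±1 component:
    print(np.max(np.abs(u_N(0.1) - np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x))))
    ```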


  • Stability analysis

    The system (11) has the form

    û′ = Bû,   B = diag{−π²n²},   n ∈ ΓN,

    and we note that (a) all the eigenvalues of B are negative, and that (b) they
    consist of the eigenvalues λ^{(2)}_n of the second order differentiation
    operator, with max |λ^{(2)}_n| = π²(N/2)².

    If we approximate this system with the Euler method:

    û^{k+1} = (I + τB) û^k,   τ := ∆t,

    then we see that, for the stability condition ‖I + τB‖ ≤ 1 to hold, we need to
    scale the time step as τ = ∆t ∼ N^{−2}.
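
    A quick numerical illustration (a sketch with assumed values of N): since B is
    diagonal, ‖I + τB‖ = max_n |1 − τπ²n²|, so the largest stable Euler step is
    τ = 2/(π²(N/2)²) ∼ N^{−2}:

    ```python
    import numpy as np

    for N in (16, 32, 64):
        n = np.arange(-N // 2 + 1, N // 2 + 1)       # Γ_N
        lam = -np.pi**2 * n**2                       # eigenvalues of B
        tau_max = 2.0 / (np.pi**2 * (N / 2) ** 2)    # largest stable time step
        print(N,
              np.max(np.abs(1 + tau_max * lam)) <= 1 + 1e-12,   # stable at the bound
              np.max(np.abs(1 + 1.1 * tau_max * lam)) > 1)      # unstable just above it
    ```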


  • Stability analysis

    Note that, for the Crank–Nicolson scheme, since the spectrum of B is negative,
    we get stability for any time step τ > 0. For a general linear operator L in
    (7) with constant coefficients, the matrix B is again diagonal (hence normal),
    and provided that its spectrum is negative, for stability we must scale the
    time step τ ∼ N^{−m}, where m is the maximal order of differentiation.

    The scaling τ ∼ N^{−2} may seem similar to the scaling k ∼ h² in difference
    methods, which we viewed as a disadvantage; however, in spectral methods we
    can take N, the order of the partial Fourier or Chebyshev sums needed to
    achieve a good approximation, rather small. (We may still need to choose τ
    small enough to get the desired accuracy.)


  • The diffusion equation with non-constant coefficient

    We want to solve the diffusion equation with a non-constant coefficient
    a(x) > 0 for a function u = u(x, t),

    u_t = (a(x) u_x)_x,   (x, t) ∈ [−1, 1] × R+,
    u(x, 0) = g(x),       x ∈ [−1, 1],                                  (12)

    with boundary and normalization conditions as before. Approximating u by its
    partial Fourier sum results in the following system of ODEs for the
    coefficients ûn,

    û′n(t) = −π² Σ_{m∈Z} m n â_{n−m} ûm(t),   n ∈ Z.

    For the discretization in time we may apply the Euler method; this gives

    û^{k+1}_n = û^k_n − τ π² Σ_{m∈ΓN} m n â_{n−m} û^k_m,   τ = ∆t,   n ∈ ΓN,

    or, in vector form,

    û^{k+1} = (I + τB) û^k,

    where B = (b_{n,m}) = (−π² m n â_{n−m}). For stability of the Euler method, we
    again need ‖I + τB‖ ≤ 1, but the analysis here is less straightforward.
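
    A minimal sketch of this scheme (not the lecturer's code; the coefficient
    a(x) = 2 + cos(πx), the initial data g(x) = cos(πx), the truncation N, and the
    time step τ are all assumptions chosen for illustration):

    ```python
    import numpy as np

    N = 16
    modes = np.arange(-N // 2 + 1, N // 2 + 1)        # Γ_N

    # Fourier coefficients of a(x) = 2 + cos(πx):  â_0 = 2, â_{±1} = 1/2, else 0
    def a_hat(k):
        if k == 0:
            return 2.0
        if abs(k) == 1:
            return 0.5
        return 0.0

    # B = (b_{n,m}) with b_{n,m} = -π² m n â_{n-m}
    B = np.array([[-np.pi**2 * m * n * a_hat(n - m) for m in modes] for n in modes])

    u_hat = np.where(np.abs(modes) == 1, 0.5, 0.0)    # g(x) = cos(πx), zero mean
    tau = 1e-4                                        # assumed small enough time step
    for _ in range(100):                              # 100 Euler steps
        u_hat = u_hat + tau * (B @ u_hat)
    print(np.linalg.norm(u_hat))                      # the solution decays, as expected
    ```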


  • Chebyshev methods for evolutionary problems

    Remark 3 (Chebyshev methods for evolutionary problems)

    In general, the boundary conditions for the considered PDEs have to be
    implemented in the Chebyshev expansion. If the boundary conditions are to be
    imposed exactly, either the basis functions have to be slightly modified,
    e.g., to Tn(x) − 1 instead of Tn(x) for the boundary condition u(1) = 0, or we
    get additional conditions on the expansion coefficients ŭn (cf. Exercise 20
    from the Example Sheets).

    While the exact imposition is in general not a problem for the numerical
    treatment of elliptic PDEs, as soon as the boundary conditions depend on time
    we may run into serious stability issues. One way around this is the use of
    penalty methods, in which the boundary conditions are added to the scheme as a
    penalty term.


  • Iterative methods for linear algebraic systems


  • Solving linear systems with iterative methods

    The general iterative method for solving Ax = b is a rule
    x^{k+1} = f_k(x^0, x^1, . . . , x^k). We will consider the simplest ones:
    linear, one-step, stationary iterative schemes:

    x^{k+1} = Hx^k + v,   x^0, v ∈ R^n.                                 (13)

    Here one chooses H and v so that x∗, a solution of Ax = b, satisfies
    x∗ = Hx∗ + v, i.e. it is the fixed point of the iteration (13) (if the scheme
    converges). Standard terminology:

    the iteration matrix H,  the error e^k := x∗ − x^k,  the residual r^k := Ae^k = b − Ax^k.


  • Solving linear systems – Iterative refinement

    For a given class of matrices A (e.g. positive definite matrices, or even a
    single particular matrix), we are interested in convergent methods, i.e. the
    methods such that x^k → x∗ = A^{−1}b for every starting value x^0. Subtracting
    x∗ = Hx∗ + v from (13) we obtain

    e^{k+1} = He^k = · · · = H^{k+1}e^0,                                (14)

    i.e., a method is convergent if e^k = H^k e^0 → 0 for any e^0 ∈ R^n.

    (Iterative refinement). This is the scheme

    x^{k+1} = x^k − S(Ax^k − b).

    If S = A^{−1}, then x^{k+1} = A^{−1}b = x∗, so it is suggestive to choose S as
    an approximation to A^{−1}. The iteration matrix for this scheme is H_S = I − SA.
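
    A minimal sketch of iterative refinement (not from the notes; the test matrix
    is an assumed diagonally dominant one, and S = diag(A)^{−1} is one cheap
    choice of approximate inverse, which makes this the Jacobi iteration):

    ```python
    import numpy as np

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 5.0, 2.0],
                  [0.0, 2.0, 6.0]])
    b = np.array([1.0, 2.0, 3.0])
    S = np.diag(1.0 / np.diag(A))           # cheap approximation to A^{-1}

    x = np.zeros(3)
    for k in range(50):
        x = x - S @ (A @ x - b)             # x^{k+1} = x^k - S(Ax^k - b)

    print(np.linalg.norm(A @ x - b))                             # residual ≈ 0
    print(np.max(np.abs(np.linalg.eigvals(np.eye(3) - S @ A))))  # ρ(H_S) < 1 here
    ```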


  • Solving linear systems – Splitting

    (Splitting). This is the scheme

    (A − B) x^{k+1} = −Bx^k + b,

    with the iteration matrix H = −(A − B)^{−1}B. Any splitting can be viewed as
    an iterative refinement (and vice versa) because

    (A − B) x^{k+1} = −Bx^k + b  ⇔  (A − B) x^{k+1} = (A − B) x^k − (Ax^k − b)
                                 ⇔  x^{k+1} = x^k − (A − B)^{−1}(Ax^k − b),

    so we should seek a splitting such that S = (A − B)^{−1} approximates A^{−1}.
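
    A small sketch of this equivalence (assumed test data; the splitting takes
    A − B to be the diagonal part D of A, so the corresponding iterative
    refinement uses S = D^{−1}):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4)) + 6 * np.eye(4)   # roughly diagonally dominant
    b = rng.standard_normal(4)

    D = np.diag(np.diag(A))       # A - B = D, i.e. B = A - D
    B = A - D

    x_split = np.zeros(4)
    x_refine = np.zeros(4)
    for k in range(20):
        x_split = np.linalg.solve(D, -B @ x_split + b)               # (A-B)x^{k+1} = -Bx^k + b
        x_refine = x_refine - np.linalg.solve(D, A @ x_refine - b)   # x^{k+1} = x^k - S(Ax^k - b)

    print(np.max(np.abs(x_split - x_refine)))   # ≈ 0: the two schemes coincide
    ```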


  • Solving linear systems – Convergence

    Theorem 4
    Let H ∈ R^{n×n}. Then lim_{k→∞} H^k z = 0 for any z ∈ R^n if and only if
    ρ(H) < 1.

    Proof. 1) Let λ be an eigenvalue of (the real) H, real or complex, such that
    |λ| = ρ(H) ≥ 1, and let w be a corresponding eigenvector, i.e., Hw = λw. Then
    H^k w = λ^k w, and

    ‖H^k w‖∞ = |λ|^k ‖w‖∞ ≥ ‖w‖∞ =: γ > 0.                              (15)

    If w is real, we choose z = w, hence ‖H^k z‖∞ ≥ γ, and this cannot tend to
    zero. If w is complex, then w = u + iv with some real vectors u, v. But then
    at least one of the sequences (H^k u), (H^k v) does not tend to zero. For if
    both do, then also H^k w = H^k u + i H^k v → 0, and this contradicts (15).


  • Solving linear systems – Convergence

    Proof (cont.) 2) Now, let ρ(H) < 1, and assume for simplicity that H possesses
    n linearly independent eigenvectors (w_j) such that Hw_j = λ_j w_j. Linear
    independence means that every z ∈ R^n can be expressed as a linear combination
    of the eigenvectors, i.e., there exist c_j ∈ C such that z = Σ_{j=1}^{n} c_j w_j.
    Thus,

    H^k z = Σ_{j=1}^{n} c_j λ_j^k w_j,

    and since |λ_j| ≤ ρ(H) < 1, we have lim_{k→∞} H^k z = 0, as required. □
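
    A small numerical illustration of Theorem 4 (a sketch; the two test matrices
    are assumptions, one with ρ(H) < 1 and one with ρ(H) > 1):

    ```python
    import numpy as np

    def spectral_radius(H):
        return np.max(np.abs(np.linalg.eigvals(H)))

    z = np.random.default_rng(2).standard_normal(3)

    for H in (np.array([[0.5, 0.3, 0.0],
                        [0.0, 0.4, 0.2],
                        [0.1, 0.0, 0.6]]),      # ρ(H) < 1
              np.array([[1.1, 0.3, 0.0],
                        [0.0, 0.4, 0.2],
                        [0.1, 0.0, 0.6]])):     # ρ(H) > 1
        v = z.copy()
        for k in range(200):
            v = H @ v
        print(spectral_radius(H), np.linalg.norm(v))   # ‖H^200 z‖ → 0 iff ρ(H) < 1
    ```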
