  • University of Houston, Department of Mathematics, Numerical Analysis II

    Chapter 2 One-step methods for non-stiff differential equations

    2.1 Examples of elementary one-step methods

    Consider the following initial value problem for a scalar differential equation of first order:

    y′(x) = f(x,y(x)) , x ∈ [a,b] ⊂ lR ,

    y(a) = α .

    Equidistant partition of [a,b] by

    xk := a + kh , 0 ≤ k ≤ N

    with step length h := (b− a)/N.

    Integration of the differential equation over [xk,xk+1] yields

    y(xk+1) − y(xk) = ∫_{xk}^{xk+1} f(x,y(x)) dx .

    Idea: Approximation of the right-hand side by a quadrature formula.


    2.1.1 Explicit Euler method

    [Figure: the integrand f(x,y(x)) on [xk, xk+1] is approximated by its value f(xk,y(xk)) at the left endpoint]

    ∫_{xk}^{xk+1} f(x,y(x)) dx ≈ h f(xk,y(xk))

    If yk , 0 ≤ k ≤ N, is an approximation of the solution y at xk, we obtain

    yk+1 = yk + h f(xk,yk) , 0 ≤ k ≤ N − 1 ,

    y0 = α .

    Interpretation: Approximation of the solution y = y(x) by its tangent

    [Figure: starting at (x0, α), the explicit Euler polygon follows tangents of the solution y(x), producing y1 at x1 and y2 at x2]
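As an illustration (not part of the original notes), the explicit Euler method can be sketched in a few lines; the test problem y′ = −2y, y(0) = 1 with exact solution y(x) = exp(−2x) is a hypothetical choice:

```python
import math

def explicit_euler(f, a, b, alpha, N):
    """Explicit Euler method: y_{k+1} = y_k + h f(x_k, y_k), y_0 = alpha."""
    h = (b - a) / N
    xs = [a + k * h for k in range(N + 1)]
    ys = [alpha]
    for k in range(N):
        ys.append(ys[k] + h * f(xs[k], ys[k]))
    return xs, ys

# Hypothetical test problem: y' = -2y, y(0) = 1, exact solution exp(-2x)
xs, ys = explicit_euler(lambda x, y: -2.0 * y, 0.0, 1.0, 1.0, 100)
```

For N = 100 the endpoint value ys[-1] agrees with exp(−2) up to a few times 10⁻³, consistent with first-order accuracy.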


    2.1.2 Implicit Euler method (backward Euler)

    [Figure: the integrand f(x,y(x)) on [xk, xk+1] is approximated by its value f(xk+1,y(xk+1)) at the right endpoint]

    ∫_{xk}^{xk+1} f(x,y(x)) dx ≈ h f(xk+1,y(xk+1))

    We obtain

    yk+1 = yk + h f(xk+1,yk+1) , 0 ≤ k ≤ N − 1 ,

    y0 = α .
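Since yk+1 appears on both sides, each implicit Euler step requires solving a (generally nonlinear) equation. A minimal sketch, assuming hL < 1 so that simple fixed-point iteration contracts; the test problem y′ = −2y, y(0) = 1 is hypothetical:

```python
import math

def implicit_euler(f, a, b, alpha, N, tol=1e-12, maxit=50):
    """Backward Euler: solve y_{k+1} = y_k + h f(x_{k+1}, y_{k+1})
    per step by fixed-point iteration (contractive when h*L < 1)."""
    h = (b - a) / N
    x, y = a, alpha
    for _ in range(N):
        z = y + h * f(x, y)               # explicit Euler predictor
        for _ in range(maxit):
            z_new = y + h * f(x + h, z)
            if abs(z_new - z) < tol:
                break
            z = z_new
        x, y = x + h, z_new
    return y

y_end = implicit_euler(lambda x, y: -2.0 * y, 0.0, 1.0, 1.0, 100)
```

For stiff problems one would replace the fixed-point iteration by Newton's method, since the contraction condition hL < 1 is then too restrictive.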


    2.1.3 Implicit trapezoidal rule

    [Figure: the integrand on [xk, xk+1] is approximated by the trapezoid through f(xk,y(xk)) and f(xk+1,y(xk+1))]

    ∫_{xk}^{xk+1} f(x,y(x)) dx ≈ (h/2) [f(xk,y(xk)) + f(xk+1,y(xk+1))]

    We obtain

    yk+1 = yk + (h/2) [f(xk,yk) + f(xk+1,yk+1)] , 0 ≤ k ≤ N − 1 ,

    y0 = α .
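A sketch of the implicit trapezoidal rule with the per-step equation again solved by fixed-point iteration (assumes hL/2 < 1; the test problem y′ = −2y, y(0) = 1 is hypothetical):

```python
import math

def trapezoidal_rule(f, a, b, alpha, N, tol=1e-12, maxit=50):
    """Implicit trapezoidal rule:
    y_{k+1} = y_k + (h/2) [f(x_k, y_k) + f(x_{k+1}, y_{k+1})]."""
    h = (b - a) / N
    x, y = a, alpha
    for _ in range(N):
        fx = f(x, y)
        z = y + h * fx                    # explicit Euler predictor
        for _ in range(maxit):
            z_new = y + 0.5 * h * (fx + f(x + h, z))
            if abs(z_new - z) < tol:
                break
            z = z_new
        x, y = x + h, z_new
    return y

y_end = trapezoidal_rule(lambda x, y: -2.0 * y, 0.0, 1.0, 1.0, 100)
```

The second-order accuracy shows in a much smaller endpoint error than either Euler variant at the same step size.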


    2.1.4 Implicit midpoint rule

    [Figure: the integrand on [xk, xk+1] is approximated by its value at the midpoint xk + h/2]

    ∫_{xk}^{xk+1} f(x,y(x)) dx ≈ h f(xk + h/2, y(xk + h/2))

    Observing y(xk + h/2) ≈ (1/2)(y(xk) + y(xk+1)), we obtain

    yk+1 = yk + h f(xk + h/2, (1/2)(yk + yk+1)) , 0 ≤ k ≤ N − 1 ,

    y0 = α .


    2.1.5 Explicit midpoint rule (two-step method)

    [Figure: the integrand on [xk, xk+2] is approximated by its value at the midpoint xk+1]

    ∫_{xk}^{xk+2} f(x,y(x)) dx ≈ 2h f(xk+1,y(xk+1))

    We obtain

    yk+2 = yk + 2h f(xk+1,yk+1) , 0 ≤ k ≤ N − 2 ,

    y0 = α ,

    y1 = α + h f(x0, α) (initialization) .
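A sketch of the two-step explicit midpoint rule with the explicit Euler initialization step (the test problem y′ = −2y, y(0) = 1 is hypothetical):

```python
import math

def explicit_midpoint(f, a, b, alpha, N):
    """Two-step explicit midpoint rule:
    y_{k+2} = y_k + 2h f(x_{k+1}, y_{k+1}), started with one Euler step."""
    h = (b - a) / N
    ys = [alpha, alpha + h * f(a, alpha)]     # y0 and the initialization y1
    for k in range(N - 1):
        ys.append(ys[k] + 2.0 * h * f(a + (k + 1) * h, ys[k + 1]))
    return ys

ys = explicit_midpoint(lambda x, y: -2.0 * y, 0.0, 1.0, 1.0, 100)
```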


    2.2 Consistency, stability, and convergence

    In this section, we consider the initial value problem

    y′(x) = f(x,y(x)) , x ∈ I := [a,b] ⊂ lR ,

    y(a) = α

    where f ∈ C(I×D) , D ⊂ lR^d, and

    ‖f(x,y1) − f(x,y2)‖ ≤ L ‖y1 − y2‖ , (x,yi) ∈ I×D , 1 ≤ i ≤ 2 , L ≥ 0 .

    Definition 2.1 Explicit and implicit one-step methods

    Let Ih := {xk = a + kh | 0 ≤ k ≤ N} , N ∈ lN, be a partition of I = [a,b] with step length h = (b − a)/N and Φ : Ih × Ih × lR^d × lR^d × lR+ → lR^d. Then,

    yk+1 = yk + h Φ(xk,xk+1,yk,yk+1,h) , 0 ≤ k ≤ N− 1 ,

    y0 = α

    is called a one-step method with increment function Φ. The one-step method is said to be

    explicit, if Φ does not depend on yk+1, and is called implicit, otherwise.


    Remark 2.2 Interpretation as a difference equation

    Setting I′h := Ih \ {xN} and introducing the grid function yh : Ih → lR^d, the one-step method is equivalent to the first order difference equation

    yh(x + h) = yh(x) + h Φ(x,x + h,yh(x),yh(x + h),h) , x ∈ I′h ,

    yh(a) = α

    Definition 2.3 Global discretization error, convergence, order of convergence

    The grid function

    eh(x) := yh(x) − y(x) , x ∈ Ih

    is called the global discretization error. The one-step method is said to be convergent, if

    max_{x∈Ih} ‖eh(x)‖ → 0 (h → 0) .

    It is said to be convergent of order p, if

    max_{x∈Ih} ‖eh(x)‖ = O(h^p) (h → 0) .



    Remark 2.4 Consistency

    If the solution y ∈ C¹(I) is inserted into the one-step method, we should expect

    [y(x + h) − y(x)]/h = Φ(x,x + h,y(x),y(x + h),h) ,

    where, as h → 0, the left-hand side tends to y′(x) and the right-hand side to f(x,y(x)).

    Definition 2.5 Local discretization error, consistency, order of consistency

    The grid function

    τh(x) := h^{−1} [y(x + h) − y(x)] − Φ(x,x + h,y(x),y(x + h),h) , x ∈ I′h

    is called the local discretization error. The one-step method is said to be consistent with the initial value problem, if

    h ∑_{x∈I′h} ‖τh(x)‖ → 0 (h → 0) .

    It is said to be consistent of order p, if

    h ∑_{x∈I′h} ‖τh(x)‖ = O(h^p) (h → 0) .


    Lemma 2.6 Sufficient condition for consistency

    The one-step method is consistent with the initial value problem, if

    (∗) max_{x∈I′h} ‖τh(x)‖ → 0 (h → 0) .

    The condition (∗) holds true if and only if

    (∗∗) max_{x∈I′h} ‖Φ(x,x + h,y(x),y(x + h),h) − f(x,y(x))‖ → 0 (h → 0) .

    Proof: We have

    h ∑_{x∈I′h} ‖τh(x)‖ ≤ max_{x∈I′h} ‖τh(x)‖ ∑_{x∈I′h} h = (b − a) max_{x∈I′h} ‖τh(x)‖ .

    The equivalence of (∗) and (∗∗) follows readily from the representation

    τh(x) = h^{−1} [y(x + h) − y(x)] − y′(x) + f(x,y(x)) − Φ(x,x + h,y(x),y(x + h),h) .


    Example 2.7 Order of consistency of the explicit Euler method

    Explicit Euler method:

    yh(x + h) = yh(x) + h f(x,yh(x)) , yh(a) = α .

    The local discretization error is given by:

    τh(x) = h^{−1} [y(x + h) − y(x)] − f(x,y(x)) .

    Taylor expansion yields

    y(x + h) = y(x) + h y′(x) + (1/2) h² y′′(x) + O(h³) =⇒

    h^{−1} [y(x + h) − y(x)] = y′(x) + (1/2) h y′′(x) + O(h²) = f(x,y(x)) + (1/2) h y′′(x) + O(h²) .

    It follows that

    τh(x) = (1/2) h y′′(x) + O(h²) .

    If y ∈ C²(I), the explicit Euler method is consistent of order 1.
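Consistency of order 1 carries over to convergence of order 1 (cf. Theorem 2.14 later in this chapter). As a hypothetical numerical check, halving the step size should roughly halve the global error; the model problem y′ = y, y(0) = 1 is an illustrative choice, not from the notes:

```python
import math

def euler_error(N):
    """Global error of the explicit Euler method at x = 1 for y' = y, y(0) = 1."""
    h = 1.0 / N
    y = 1.0
    for _ in range(N):
        y += h * y                # f(x, y) = y
    return abs(y - math.e)

# Order p = 1: the error ratio for step sizes h and h/2 is close to 2.
ratio = euler_error(200) / euler_error(400)
```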


    Example 2.8 Order of consistency of the implicit trapezoidal rule

    Implicit trapezoidal rule:

    yh(x + h) = yh(x) + (h/2) [f(x,yh(x)) + f(x + h,yh(x + h))] , yh(a) = α .

    Taylor expansion results in

    y(x + h) = y(x) + h y′(x) + (1/2) h² y′′(x) + (1/6) h³ y′′′(x) + O(h⁴) =⇒

    h^{−1} [y(x + h) − y(x)] = y′(x) + (1/2) h y′′(x) + (1/6) h² y′′′(x) + O(h³) =⇒

    h^{−1} [y(x + h) − y(x)] = f(x,y(x)) + (1/2) h [fx(x,y(x)) + fy(x,y(x)) f(x,y(x))] + (1/6) h² y′′′(x) + O(h³) .

    f(x + h,y(x + h)) = f(x,y(x)) + h fx(x,y(x)) + h fy(x,y(x)) y′(x) + O(h²) =⇒

    (1/2) [f(x,y(x)) + f(x + h,y(x + h))] = f(x,y(x)) + (1/2) h [fx(x,y(x)) + fy(x,y(x)) f(x,y(x))] + O(h²) .

    It follows that

    τh(x) = O(h²) .



    We consider the explicit one-step method

    (⋆) yh(x + h) = yh(x) + h Φ(x,yh(x),h) , yh(a) = α .

    Definition 2.9 Asymptotic stability

    Consider the perturbed one-step method

    zh(x + h) = zh(x) + h [Φ(x, zh(x),h) + σh(x)] , zh(a) = α + βh .

    The one-step method (⋆) is said to be asymptotically stable, if there exists hmax > 0 and if for each ε > 0 there exists δ > 0 such that for all perturbations σh , βh with

    ‖βh‖ + h ∑_{x∈I′h} ‖σh(x)‖ < δ

    the solution of the perturbed one-step method satisfies the condition

    max_{x∈Ih} ‖(yh − zh)(x)‖ + h ∑_{x∈I′h} ‖(Dhyh − Dhzh)(x)‖ < ε ,

    uniformly in h < hmax, where (Dhyh)(x) := h^{−1} [yh(x + h) − yh(x)] , x ∈ I′h .


    Remark 2.10 Motivation for the discrete version of Gronwall’s lemma

    Consider the explicit Euler method with and without perturbations

    yh(x + h) = yh(x) + h f(x,yh(x)) , yh(a) = α ,

    zh(x + h) = zh(x) + h [f(x, zh(x)) + σh(x)] , zh(a) = α + βh .

    (i) For x = x0 = a, we have

    ‖(yh − zh)(x0)‖ = ‖βh‖ .

    (ii) For x = x1 = x0 + h, we obtain

    ‖(yh − zh)(x1)‖ ≤ ‖(yh − zh)(x0)‖ + h ‖σh(x0)‖ + h ‖f(x0,yh(x0)) − f(x0, zh(x0))‖ ≤

    ≤ ‖βh‖ + h ‖σh(x0)‖ + h L ‖(yh − zh)(x0)‖ .

    (iii) Finally, for x2 = x1 + h, it follows that

    ‖(yh − zh)(x2)‖ ≤ ‖(yh − zh)(x1)‖ + h ‖σh(x1)‖ + h ‖f(x1,yh(x1)) − f(x1, zh(x1))‖ ≤

    ≤ ‖βh‖ + h ‖σh(x0)‖ + h ‖σh(x1)‖ + h L ‖(yh − zh)(x0)‖ + h L ‖(yh − zh)(x1)‖ .


    Lemma 2.11 Discrete version of Gronwall’s lemma

    Assume that there exist δ ≥ 0 and η ≥ 0 such that the grid function vh : Ih → lR^d satisfies

    ‖vj+1‖ ≤ δ h ∑_{k=0}^{j} ‖vk‖ + η , 0 ≤ j ≤ N − 1 ,

    ‖v0‖ ≤ η .

    Then, there holds

    ‖vj‖ ≤ η exp(δ (xj − a)) , 0 ≤ j ≤ N .

    Proof: Assume ‖vj‖ ≤ M , 0 ≤ j ≤ N. We prove by induction:

    ‖vj‖ ≤ M δ^m (xj − a)^m / m! + η ∑_{ℓ=0}^{m−1} δ^ℓ (xj − a)^ℓ / ℓ! .

    Step 1: The assertion holds for j = 0 and all m ∈ lN, and for m = 0 and all 0 ≤ j ≤ N.

    Step 2: Assume that the assertion is true for some m ∈ lN and all 0 ≤ j ≤ N.

    Step 3: We have

    ‖vj+1‖ ≤ δ h ∑_{k=0}^{j} [ M δ^m (xk − a)^m / m! + η ∑_{ℓ=0}^{m−1} δ^ℓ (xk − a)^ℓ / ℓ! ] + η .



    Using the elementary inequality

    h ∑_{k=0}^{j} (xk − a)^ℓ ≤ ∫_{a}^{xj+1} (x − a)^ℓ dx = (xj+1 − a)^{ℓ+1} / (ℓ + 1) ,

    for m + 1 and j + 1 the estimate

    ‖vj+1‖ ≤ δ h ∑_{k=0}^{j} [ M δ^m (xk − a)^m / m! + η ∑_{ℓ=0}^{m−1} δ^ℓ (xk − a)^ℓ / ℓ! ] + η

    yields the assertion

    ‖vj+1‖ ≤ M δ^{m+1} (xj+1 − a)^{m+1} / (m + 1)! + η ∑_{ℓ=0}^{m} δ^ℓ (xj+1 − a)^ℓ / ℓ! .


    Theorem 2.12 Sufficient condition for asymptotic stability

    Assume that the increment function Φ satisfies the following Lipschitz condition: There exists a neighborhood U ⊂ I×D of the graph (x,y(x)) , x ∈ I, of the solution y = y(x) of the initial value problem as well as hmax > 0 and LΦ ≥ 0 such that for all 0 < h < hmax:

    (⋄) ‖Φ(x,y1,h) − Φ(x,y2,h)‖ ≤ LΦ ‖y1 − y2‖ , (x,yi) ∈ U , 1 ≤ i ≤ 2 .

    Consider perturbations σh , βh such that

    ‖βh‖ + h ∑_{x∈I′h} ‖σh(x)‖ ≤ δ .

    Then, for the solution yh of the unperturbed and for the solution zh of the perturbed one-step method there holds

    (•) ‖(yh − zh)(x)‖ ≤ [ ‖βh‖ + h ∑_{x∈I′h} ‖σh(x)‖ ] exp(LΦ (x − a)) ,

    (••) ‖(Dhyh − Dhzh)(x)‖ ≤ LΦ ‖(yh − zh)(x)‖ + ‖σh(x)‖ .

    In particular, the one-step method is asymptotically stable.


    Proof: Without restriction of generality assume that U = I×D and define

    wh(x) := zh(x) − yh(x) , x ∈ Ih .

    Setting wj := wh(xj), we obtain

    wj+1 = wj + h [ Φ(xj, zj,h) + σj − Φ(xj,yj,h) ] , w0 = βh =⇒

    wj+1 = w0 + h ∑_{k=0}^{j} [ σk + Φ(xk, zk,h) − Φ(xk,yk,h) ] .

    Using the Lipschitz condition (⋄), it follows that

    ‖wj+1‖ ≤ ‖βh‖ + h ∑_{k=0}^{N−1} ‖σk‖ + LΦ h ∑_{k=0}^{j} ‖wk‖ .

    The discrete version of Gronwall’s lemma implies (•). (••) is an immediate consequence of

    (Dhwh)(x) = Φ(x, zh(x),h) + σh(x) − Φ(x,yh(x),h)

    and (⋄).


    Theorem 2.13 Consistency & Stability =⇒ Convergence

    Assume that the one-step method (∗) is consistent with the initial value problem and asymptotically stable. Then, there holds

    (+) max_{x∈Ih} ‖(yh − y)(x)‖ → 0 (h → 0) ,

    (++) h ∑_{x∈I′h} ‖(Dhyh − Dhy)(x)‖ → 0 (h → 0) .

    Proof: We define zh(x) := y(x) , x ∈ Ih. Then σh(x) = τ h(x) and βh = 0. Due to the consistency,

    for each ε > 0 there exist hmax(ε) > 0 and δ(ε) > 0 such that for 0 < h < hmax(ε):

    h ∑_{x∈I′h} ‖τh(x)‖ < δ(ε) .

    We conclude by taking the asymptotic stability of the one-step method into account.


    Theorem 2.14 Characterization of the convergence

    Assume that the one-step method (∗) satisfies the Lipschitz condition (⋄). Then, the following two statements are equivalent:

    (i) The one-step method (∗) is convergent. In particular, (+), (++) hold true.

    (ii) The one-step method (∗) is consistent with the initial value problem.

    For consistent one-step methods there exists hmax > 0 such that for all 0 < h < hmax the

    following a priori estimate holds true:

    ‖(yh − y)(x)‖ ≤ ( h ∑_{x∈I′h} ‖τh(x)‖ ) exp(LΦ (x − a)) .

    Moreover, if max_{x∈I′h} ‖τh(x)‖ → 0 (h → 0), then there holds

    max_{x∈I′h} ‖(Dhyh − Dhy)(x)‖ → 0 (h → 0) .


    Proof: (a) (ii) =⇒ (i):

    Theorem 2.12 implies the asymptotic stability of the one-step method and Theorem 2.13

    provides the convergence in the sense that (+), (++) hold true.

    The a priori estimate and the uniform convergence of the first differences can be deduced

    from Theorem 2.12 with zh(x) := y(x) so that σh(x) = τ h(x) and βh = 0.

    (b) (i) =⇒ (ii)

    Due to the convergence, for sufficiently small h the Lipschitz condition (⋄) can be applied to

    Φ(x,yh(x),h) − Φ(x,y(x),h)

    whence

    ‖τh(x)‖ = ‖Dhy(x) − Φ(x,y(x),h)‖ = ‖Dhy(x) − Φ(x,y(x),h) − Dhyh(x) + Φ(x,yh(x),h)‖ ≤

    ≤ ‖(Dhyh − Dhy)(x)‖ + LΦ ‖(yh − y)(x)‖ .

    Multiplication by h and summation over x ∈ I′h allows us to conclude.


    Remark 2.15 Generalization to implicit one-step methods

    The notion of asymptotic stability can be generalized to implicit one-step methods.

    Analogous stability and convergence results can be established.

    Remark 2.16 Lipschitz condition for the increment function

    If the increment function Φ is defined in terms of the right-hand side f of the initial value

    problem, the Lipschitz condition (⋄) for Φ can be deduced from the associated Lipschitz

    condition for f.


    2.3 Explicit Runge-Kutta methods

    Motivation and an example:

    [Figure: solution curve y(x) through (x0, α) with Euler approximations y1, y2 at x1, x2]

    The explicit Euler method uses the approximate slope

    f(xk,yk) ≈ f(xk,y(xk)) of the solution curve at xk.

    Idea: Use a suitable linear combination of approximations

    of the slopes f(xk,y(xk)) and f(xk + a21h,y(xk + a21h)).

    This leads to the explicit one-step method:

    (⋆1) yk+1 = yk + h [ b1 f(xk,yk) + b2 f(xk + a21 h, yk + a21 h f(xk,yk)) ] , k ≥ 0 ,

    (⋆2) y0 = α .

    Goal: Determine a21 , b1 , b2 such that the one-step method (⋆1), (⋆2) is consistent of order p = 2.


    Taylor expansion (assuming sufficient smoothness of f):

    (+1) [y(x + h) − y(x)]/h = y′(x) + (h/2) y′′(x) + O(h²) = f(x,y(x)) + (h/2) [fx(x,y(x)) + fy(x,y(x)) f(x,y(x))] + O(h²) .

    (+2) b1 f(x,y(x)) + b2 f(x + a21 h, y(x) + a21 h f(x,y(x))) = (b1 + b2) f(x,y(x)) + h [a21 b2 fx(x,y(x)) + a21 b2 fy(x,y(x)) f(x,y(x))] + O(h²) .

    Comparison of the coefficients in (+1), (+2) yields

    b1 + b2 = 1 , a21 b2 = 1/2 .

    Setting β := b2, we obtain

    b1 = 1 − β , a21 = 1/(2β) .


    Definition 2.17 The methods of Runge and Heun

    The one-step method

    (⋆1) yk+1 = yk + (1 − β) h f(xk,yk) + β h f(xk + h/(2β), yk + h/(2β) f(xk,yk)) , k ≥ 0 ,

    (⋆2) y0 = α

    with β = 1 is called the improved polygon method or method of Runge.

    Choosing β = 1/2, it is referred to as the method of Heun.
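As a sketch (not from the notes), the whole β-family can be implemented at once; β = 1 gives the method of Runge and β = 1/2 the method of Heun. The test problem y′ = y, y(0) = 1 is hypothetical; both methods are of order 2:

```python
import math

def beta_method(f, a, b, alpha, N, beta):
    """One-step method of Definition 2.17:
    y_{k+1} = y_k + (1-beta) h f(x_k, y_k)
                  + beta h f(x_k + h/(2 beta), y_k + h/(2 beta) f(x_k, y_k))."""
    h = (b - a) / N
    x, y = a, alpha
    for _ in range(N):
        s = f(x, y)
        y += (1.0 - beta) * h * s + beta * h * f(x + h / (2.0 * beta),
                                                 y + h / (2.0 * beta) * s)
        x += h
    return y

err_runge = abs(beta_method(lambda x, y: y, 0.0, 1.0, 1.0, 100, 1.0) - math.e)
err_heun = abs(beta_method(lambda x, y: y, 0.0, 1.0, 1.0, 100, 0.5) - math.e)
```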

    Problem: Does a suitable choice of β give the order of consistency p = 3?


    Determination of an additional term in the Taylor expansion:

    (+1) [y(x + h) − y(x)]/h = y′(x) + (h/2) y′′(x) + (h²/6) y′′′(x) + O(h³) =

    = f + (h/2) [fx + fy f] + (h²/6) [fxx + 2 f fxy + f² fyy + fx fy + f fy²] + O(h³) ,

    (+2) (1 − β) f(x,y(x)) + β f(x + h/(2β), y(x) + h/(2β) f(x,y(x))) =

    = f + (h/2) [fx + fy f] + (h²/(8β)) [fxx + 2 f fxy + f² fyy] + O(h³) .

    (+1), (+2) imply:

    τh(x) = (h²/6) [ (1 − 3/(4β)) (fxx + 2 f fxy + f² fyy) + fx fy + f fy² ] + O(h³) .

    Consequently:

    (i) The order of consistency p = 3 cannot be achieved by any choice of β.

    (ii) Choosing β = 3/4, the sum of the absolute values of the coefficients of the derivatives in the leading error term of τh is minimized.



    Definition 2.18 Explicit Runge-Kutta method of stage s

    An explicit one-step method of the form

    (⋄1) yk+1 = yk + ∑_{j=1}^{s} bj kj , k ≥ 0 ,

    (⋄2) y0 = α

    such that

    k1 := h f(xk,yk) ,
    k2 := h f(xk + a21 h, yk + a21 k1) ,
    k3 := h f(xk + (a31 + a32) h, yk + a31 k1 + a32 k2) ,
    · · ·
    ki := h f(xk + ci h, yk + ∑_{j=1}^{i−1} aij kj) , ci := ∑_{j=1}^{i−1} aij , 1 ≤ i ≤ s

    is called an explicit Runge-Kutta method of stage s.

    It is uniquely determined by the s (s + 1)/2 parameters aij , 2 ≤ i ≤ s , 1 ≤ j ≤ i− 1, and

    bi , 1 ≤ i ≤ s.


    Butcher scheme

    Explicit Runge-Kutta methods of stage s are described by the so-called Butcher scheme:

    c1 = 0
    c2     a21
    c3     a31   a32
     ·      ·     ·    ·
    cs     as1   as2   · ·   as,s−1
           b1    b2    · ·   bs−1    bs


    Lemma 2.19 Consistency of Runge-Kutta methods of stage s

    The Runge-Kutta method (⋄1), (⋄2) of stage s is consistent with the initial value problem, if

    (•) ∑_{i=1}^{s} bi = 1 .

    Proof: For the local discretization error we obtain

    τh(x) = [y(x + h) − y(x)]/h − ∑_{i=1}^{s} bi f(x + ci h, y(x) + ∑_{j=1}^{i−1} aij kj) ,

    where kj is given as in (⋄1) with yk replaced by y(x).

    For h→ 0 it follows that

    τh(x) → y′(x) − ∑_{i=1}^{s} bi f(x,y(x)) = y′(x) − f(x,y(x)) = 0 .


    Goal: Choose the parameters aij and bi such that a maximal order of consistency p can be achieved.

    Realization: For a large number of stages s one uses graph-theoretical concepts (Butcher trees).

    The following table contains the number of free parameters Ns and the number of determining equations Ms as well as the maximal order of consistency ps for stage s:

    s    1   2   3   4    5    6    7    8     9     10
    Ns   1   3   6   10   15   21   28   36    45    55
    Ms   1   2   4   8    17   37   85   200   486   1205
    ps   1   2   3   4    4    5    6    6     7     8

    If the number of determining equations is less than the number of free parameters, the free

    parameters are chosen such that the leading term in the global discretization error is minimized.


    Example 2.20 3-stage Runge-Kutta methods of consistency order p = 3

    The 4 determining equations for the 6 parameters are given by

    b1 + b2 + b3 = 1 ,

    b2 a21 + b3 (a31 + a32) = 1/2 ,

    b2 a21² + b3 (a31² + 2 a31 a32 + a32²) = 1/3 ,

    b3 a32 a21 = 1/6 .

    The classical 4-stage Runge-Kutta method of order p = 4 has the Butcher scheme

    0
    1/2    1/2
    1/2    0      1/2
    1      0      0      1
           1/6    1/3    1/3    1/6
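As an illustrative sketch (not part of the notes), an explicit Runge-Kutta step can be coded directly from Definition 2.18, driven by an arbitrary Butcher scheme; here it is instantiated with the classical 4-stage tableau on the hypothetical test problem y′ = y, y(0) = 1:

```python
import math

def rk_explicit(f, a, b, alpha, N, A, bvec):
    """Explicit Runge-Kutta method of Definition 2.18:
    A is the strictly lower triangular stage matrix, c_i = sum_j a_ij."""
    s = len(bvec)
    c = [sum(A[i][:i]) for i in range(s)]
    h = (b - a) / N
    x, y = a, alpha
    for _ in range(N):
        k = []
        for i in range(s):
            yi = y + sum(A[i][j] * k[j] for j in range(i))
            k.append(h * f(x + c[i] * h, yi))
        y += sum(bvec[i] * k[i] for i in range(s))
        x += h
    return y

# Classical 4-stage Runge-Kutta method (order p = 4)
A4 = [[0.0, 0.0, 0.0, 0.0],
      [0.5, 0.0, 0.0, 0.0],
      [0.0, 0.5, 0.0, 0.0],
      [0.0, 0.0, 1.0, 0.0]]
b4 = [1.0 / 6.0, 1.0 / 3.0, 1.0 / 3.0, 1.0 / 6.0]

err = abs(rk_explicit(lambda x, y: y, 0.0, 1.0, 1.0, 50, A4, b4) - math.e)
```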


    2.4 Order and stepsize control

    2.4.1 Asymptotic expansion of the global discretization error

    Assume that

    (ESV1) yh(x + h) = yh(x) + h Φ(x,yh(x),h) , x ∈ I′h ,

    (ESV2) yh(a) = α

    is an explicit one-step method of consistency order p.

    Assume further that the local discretization error has the asymptotic expansion

    (∗) y(x + h) − y(x) − h Φ(x,y(x),h) = dp+1(x) h^{p+1} + dp+2(x) h^{p+2} + ...

    Does the global discretization error eh(x) also have an asymptotic expansion

    (∗∗) eh(x) = ep(x) h^p + ep+1(x) h^{p+1} + ... ?

    Can this asymptotic expansion be used for the construction of higher order methods?


    Idea: Define a ’new’ one-step method with the increment function Φ∗:

    (ESV∗1) y∗h(x + h) = y∗h(x) + h Φ∗(x,y∗h(x),h) , x ∈ I′h ,

    (ESV∗2) y∗h(a) = α .

    Setting δyh(x) := y∗h(x) − yh(x), the one-step method (ESV1) is equivalent to

    y∗h(x + h) − δyh(x + h) = y∗h(x) − δyh(x) + h Φ(x, y∗h(x) − δyh(x), h) .

    A comparison with (ESV∗1) yields

    (⋄) h Φ∗(x,y∗h(x),h) = h Φ(x,y∗h(x)− δyh(x),h) + δyh(x + h) − δyh(x) .

    Observing (⋄), for the local discretization error of the ’new’ one-step method (ESV∗1), (ESV∗2)

    we obtain

    y(x + h)− y(x) − h Φ∗(x,y(x),h) = y(x + h)− y(x)− δyh(x + h) + δyh(x)− h Φ(x,y(x)− δyh(x),h) .


    Taking

    y(x + h)− y(x) − h Φ∗(x,y(x),h) = y(x + h)− y(x)− δyh(x + h) + δyh(x)− h Φ(x,y(x)− δyh(x),h)

    into account and using the asymptotic expansion (∗), we get

    (+) y(x + h) − y(x) − h Φ∗(x,y(x),h) = dp+1(x) h^{p+1} + δyh(x) − δyh(x + h) + h [ Φy(x,y(x),h) δyh(x) + O(‖δyh(x)‖²) ] + O(h^{p+2}) .

    Setting

    e(x) := − h^{−p} δyh(x) , e(a) := 0 ,

    it follows that

    (i) δyh(x) − δyh(x + h) = h^p [e(x + h) − e(x)] = e′(x) h^{p+1} + O(h^{p+2}) ,

    (ii) O(‖δyh(x)‖²) = O(h^{2p}) ,

    (iii) Φy(x,y(x),h) = Φy(x,y(x),0) + O(h) ,

    (iv) Φy(x,y(x),0) = fy(x,y(x)) (is a consequence of the consistency) .


    Inserting (i)-(iv) into (+) gives

    y(x + h) − y(x) − h Φ∗(x,y(x),h) = h^{p+1} [ dp+1(x) + e′(x) − fy(x,y(x)) e(x) ] + O(h^{p+2}) .

    Hence, the one-step method (ESV∗1), (ESV∗2) is consistent of order p + 1, if

    e′(x) = fy(x,y(x)) e(x) − dp+1(x) , e(a) = 0 .

    In summary, we have shown:

    Theorem 2.23 Theorem of Hairer and Lubich

    Assume that f ∈ C^{p+1}(I×D) and that Φ ∈ C^{p+1}(I×D×lR+) is the increment function of a one-step method with

    y(x + h) − y(x) − h Φ(x,y(x),h) = dp+1(x) h^{p+1} + O(h^{p+2}) .

    Moreover, let e = e(x) , x ∈ I, be the solution of the linear initial value problem

    e′(x) = fy(x,y(x)) e(x) − dp+1(x) , e(a) = 0 .



    For the one-step method

    (ESV∗1) y∗h(x + h) = y∗h(x) + h Φ∗(x,y∗h(x),h) , x ∈ I′h ,

    (ESV∗2) y∗h(a) = α

    with the increment function

    Φ∗(x,y∗h(x),h) = Φ(x, y∗h(x) + e(x) h^p, h) − [e(x + h) − e(x)] h^{p−1}

    we have

    y(x + h) − y(x) − h Φ∗(x,y(x),h) = O(hp+2) .

    Remark 2.24 Extension and historical comments

    (i) Theorem 2.23 can be analogously extended to implicit one-step methods.

    (ii) The proof of an asymptotic expansion of the global discretization error for the explicit

    Euler method is due to Bauer, Rutishauser, and Stiefel (1963) and Gragg (1964).


    Theorem 2.25 Theorem of Gragg (1964)

    Assume that f ∈ C^{p+k}(I×D) and that Φ ∈ C^{p+k}(I×D×lR+) , k ≥ 1, is the increment function of a one-step method with Φ(x,y,0) = f(x,y) , (x,y) ∈ I×D.

    Moreover, suppose that there are functions dp+i ∈ C^{k−i}(I) , 1 ≤ i ≤ k, such that for the local discretization error there holds

    y(x + h) − y(x) − h Φ(x,y(x),h) = ∑_{i=1}^{k} dp+i(x) h^{p+i} + O(h^{p+k+1}) , x ∈ I′h .

    Then, the global discretization error has the asymptotic expansion

    eh(x) = ∑_{i=0}^{k−1} ep+i(x) h^{p+i} + Ep+k(x;h) h^{p+k} , x ∈ Ih .

    Here, the coefficients ep+i , 0 ≤ i ≤ k− 1, are solutions of the linear initial value problems

    e′p+i(x) = fy(x,y(x)) ep+i(x) − dp+i+1(x) , x ∈ I ,

    ep+i(a) = 0 .

    Moreover, there exist hmax > 0 and a constant C > 0, independent of h, such that

    ‖Ep+k(x;h)‖ ≤ C , x ∈ Ih , 0 < h < hmax .



    Proof: For the solution of the one-step method with the increment function Φ∗ (consistency

    order p + 1), Theorem 2.21 implies

    ‖y∗h(x) − y(x)‖ ≤ C h^{p+1} ,

    and hence,

    ‖yh(x) − ep(x) h^p − y(x)‖ ≤ C h^{p+1} .

    Consequently, we have

    yh(x) − y(x) = ep(x) h^p + Ep+1(x;h) h^{p+1} , ‖Ep+1(x;h)‖ ≤ C .

    For k > 1, we prove the assertion recursively, i.e., for Φ∗ we construct an increment function

    Φ∗∗ etc.


    Theorem 2.26 Theorem of Stetter (1970)

    Assume that the one-step method

    yh(x + h) = yh(x) + h Φ(x,x + h,yh(x),yh(x + h),h) , x ∈ I′h ,

    yh(a) = α

    is symmetric, i.e.,

    Φ(x,x + h,yh(x),yh(x + h),h) = Φ(x + h,x,yh(x + h),yh(x),−h) .

    Then, under the assumptions of Theorem 2.25 there holds

    e2m+1(x) = 0 , x ∈ I ,

    i.e., there exists an asymptotic expansion in powers of h2.

    Proof: Homework


    2.4.2 Explicit extrapolation methods

    Assumption: Asymptotic expansion of the global discretization error in powers of hγ

    Example 2.27 p = 1 , γ = 1

    For the step sizes h and h/2 there holds

    (∗1) yh(x) − y(x) = e1(x) h + E2(x;h) h² ,

    (∗2) yh/2(x) − y(x) = (1/2) e1(x) h + (1/4) E2(x;h) h² .

    Multiplication of (∗2) by 2 and subtraction from (∗1) yields

    ŷh(x) − y(x) = O(h²) , where ŷh(x) := 2 yh/2(x) − yh(x) .

    Consider the polynomial p1 ∈ P1(lR) with p1(hi) = yhi(x) , 1 ≤ i ≤ 2, where h1 := h and h2 := h/2.

    We obtain

    p1(t) = (2/h) (yh(x) − yh/2(x)) (t − h/2) + yh/2(x) =⇒ p1(0) = 2 yh/2(x) − yh(x) = ŷh(x) .

    Extrapolation to the step size h = 0.
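A hypothetical numerical illustration of this extrapolation step, using the explicit Euler method (p = 1, γ = 1) for the model problem y′ = y, y(0) = 1:

```python
import math

def euler(N):
    """Explicit Euler at x = 1 for the hypothetical problem y' = y, y(0) = 1."""
    h = 1.0 / N
    y = 1.0
    for _ in range(N):
        y += h * y
    return y

# y(H; h) and y(H; h/2), combined as yhat = 2 y_{h/2} - y_h
yh, yh2 = euler(100), euler(200)
yhat = 2.0 * yh2 - yh
err_h, err_hat = abs(yh - math.e), abs(yhat - math.e)
```

The extrapolated value gains roughly two orders of magnitude in accuracy, as predicted by the O(h²) remainder.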


    Example 2.28 p = 2 , γ = 2

    For the step sizes h and h/2 we have

    (⋆1) yh(x) − y(x) = e2(x) h² + E4(x;h) h⁴ ,

    (⋆2) yh/2(x) − y(x) = (1/4) e2(x) h² + (1/16) E4(x;h) h⁴ .

    Multiplication of (⋆2) by 4 and subtraction from (⋆1) yields

    4 yh/2(x) − yh(x) − 3 y(x) = O(h⁴) =⇒ ŷh(x) − y(x) = O(h⁴) , where ŷh(x) := (4 yh/2(x) − yh(x))/3 .

    Consider the polynomial p1 ∈ P1(lR) in t² with p1(hi) = yhi(x) , 1 ≤ i ≤ 2, where h1 := h and h2 := h/2.

    We obtain

    p1(t²) = (4/(3h²)) (yh(x) − yh/2(x)) (t² − h²/4) + yh/2(x) =⇒ p1(0) = (4/3) yh/2(x) − (1/3) yh(x) = ŷh(x) .

    Extrapolation to the step size h = 0.


    Implementation of the extrapolation method:

    Given a basic step size H > 0, choose a sequence of step sizes

    (⋄) hi := H/ni , ni ∈ lN , i = 1,2, ...

    represented by the sequence

    F := {n1,n2, ...}

    and compute the approximations

    y(H;hi) := yhi , i = 1,2, ...

    The associated sequence Ti,1 = y(H;hi) , i = 1,2, ..., forms the first column of an extrapolation table. Essentially, two types of extrapolation techniques are used:

    (i) Polynomial extrapolation (Aitken-Neville)

    Determine a polynomial in h^γ,

    T̃ik(h) := a0 + a1 h^γ + ... + ak−1 h^{(k−1)γ} ,

    such that

    T̃ik(hj) = y(H;hj) , j = i, i−1, ..., i−k+1 ,

    and extrapolate to h = 0: Tik := T̃ik(0).


Recursion:

T_{ik} = T_{i,k−1} + (T_{i,k−1} − T_{i−1,k−1}) / ((n_i/n_{i−k+1})^γ − 1) , i ≥ k , k ≥ 2 .

y(H;h_1) : T_11
y(H;h_2) : T_21  T_22
y(H;h_3) : T_31  T_32  T_33
    ·    :   ·     ·     ·    ·
y(H;h_i) : T_i1  T_i2  T_i3  · ·  T_ii
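The recursion fills the table column by column. A small sketch with 0-based indices (the interface and the exp test function are my own choices; any smooth function with a power series in h serves as a stand-in for y(H;h)):

```python
import math

def extrapolation_table(first_col, n, gamma=1.0):
    """Aitken-Neville recursion T_ik = T_{i,k-1} +
    (T_{i,k-1} - T_{i-1,k-1}) / ((n_i/n_{i-k+1})^gamma - 1);
    stored 0-based, so T[i][k] corresponds to T_{i+1,k+1}."""
    T = [[v] for v in first_col]
    for i in range(1, len(first_col)):
        for k in range(1, i + 1):
            ratio = (n[i] / n[i - k]) ** gamma
            T[i].append(T[i][k - 1] + (T[i][k - 1] - T[i - 1][k - 1]) / (ratio - 1.0))
    return T

# Check on a function with a plain power series in h: g(h) = exp(h), g(0) = 1.
n = [1, 2, 3, 4, 5, 6]                       # harmonic sequence, h_i = H/n_i, H = 1
col1 = [math.exp(1.0 / ni) for ni in n]
T = extrapolation_table(col1, n, gamma=1.0)
print(T[-1][-1])                             # close to 1
```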


(ii) Rational extrapolation (Stoer-Bulirsch)

Notation: For m ∈ lN, denote by [m/2] := max {n ∈ lN | n ≤ m/2} the integer part of m/2.

Determine a rational function in h^γ with numerator polynomial of degree µ := [(k−2)/2] and denominator polynomial of degree ν := k − 2 − µ:

T̃_{ik}(h) := (a_0 + a_1 h^γ + ... + a_µ h^{µγ}) / (b_0 + b_1 h^γ + ... + b_ν h^{νγ}) ,

such that

T̃_{ik}(h_j) = y(H;h_j) , j = i, i−1, ..., i−k+1 ,

and extrapolate to h = 0: T_{ik} := T̃_{ik}(0).

Recursion:

T_{ik} = T_{i,k−1} + (T_{i,k−1} − T_{i−1,k−1}) / ( (n_i/n_{i−k+1})^γ [1 − (T_{i,k−1} − T_{i−1,k−1})/(T_{i,k−1} − T_{i−1,k−2})] − 1 ) , i ≥ k , k ≥ 2 .


y(H,h_1) =: T_10  T_11
y(H,h_2) =: T_20  T_21  T_22
y(H,h_3) =: T_30  T_31  T_32  T_33
    ·         ·     ·     ·     ·    ·
y(H,h_i) =: T_i0  T_i1  T_i2  T_i3  · ·  T_ii


Appropriate sequences of step sizes:

(i) Harmonic sequence

n_i = i , i ∈ lN ,

F_H = {1,2,3,4,5, ...} .

(ii) Romberg's sequence

n_i = 2^i , i ∈ lN ,

F_R = {2,4,8,16,32, ...} .

(iii) Bulirsch's sequence

n_i = 3 · 2^((i−2)/2) , i even ,
n_i = 2^((i+1)/2) , i odd ,

F_B = {2,3,4,6,8,12,16, ...} .
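The three sequences can be generated directly; a small sketch (the function names are mine):

```python
def harmonic(i):
    """F_H: n_i = i."""
    return i

def romberg(i):
    """F_R: n_i = 2^i."""
    return 2 ** i

def bulirsch(i):
    """F_B: n_i = 3*2^((i-2)/2) for even i, 2^((i+1)/2) for odd i."""
    return 3 * 2 ** ((i - 2) // 2) if i % 2 == 0 else 2 ** ((i + 1) // 2)

print([harmonic(i) for i in range(1, 6)])   # [1, 2, 3, 4, 5]
print([romberg(i) for i in range(1, 6)])    # [2, 4, 8, 16, 32]
print([bulirsch(i) for i in range(1, 8)])   # [2, 3, 4, 6, 8, 12, 16]
```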


2.4.3 Order and step size control

Essential tool: Estimator for the global discretization error

e_{ik} := T_{ik} − y(H) .

Assumptions on the error behavior:

(A1) ε_{ik} := ‖e_{ik}‖ ≐ γ_{ik} σ_k H^{γk+1} , γ_{ik} := (∏_{j=0}^{k−1} n_{i−j})^{−γ} ,

(A2) ε_{i,k+1} := ‖e_{i,k+1}‖ ≪ ‖e_{ik}‖ =: ε_{ik} .

Lemma 2.29 (Discretization error and sequence of step sizes)

Under the assumption (A1), there holds

ε_{i+1,k} ≐ (n_{i−k+1}/n_{i+1})^γ ε_{ik} ,

i.e., in each column of the extrapolation table the discretization errors only differ by a factor that solely depends on the sequence of step sizes.


Under the assumptions (A1), (A2) we have

(i) ε_{ik} > ε_{i+1,k} , i.e., the error decreases down each column (depicted by ε_{i+1,k} ↑ ε_{ik}),

(ii) ε_{ik} > ε_{i,k+1} , i.e., the error decreases along each row (depicted by ε_{ik} ← ε_{i,k+1}).

Graphical representation of the error behavior in the extrapolation table:

ε_11
 ↑
ε_21 ← ε_22
 ↑      ↑
ε_31 ← ε_32 ← ε_33
 ·  ←   ·  ←   ·  ←  ·
ε_k1 ← ε_k2 ← ε_k3 ← · ← ε_kk

The smallest value in the table is attained by ε_{kk}.


Consequence: Diagonal error estimator

Determine an estimator for ε_{kk} := ‖T_{kk} − y(H)‖. According to the error table, we need information from row k+1. We have

‖T_{k+1,k+1} − y(H)‖ ≤ q ‖T_{kk} − y(H)‖ , q < 1 .

It follows that

(i) ‖T_{kk} − y(H)‖ ≤ ‖T_{kk} − T_{k+1,k+1}‖ + ‖T_{k+1,k+1} − y(H)‖ ≤ ‖T_{kk} − T_{k+1,k+1}‖ + q ‖T_{kk} − y(H)‖

⟹ ε_{kk} = ‖T_{kk} − y(H)‖ ≤ (1/(1−q)) ‖T_{kk} − T_{k+1,k+1}‖ ,

(ii) ‖T_{kk} − y(H)‖ ≥ ‖T_{kk} − T_{k+1,k+1}‖ − ‖T_{k+1,k+1} − y(H)‖ ≥ ‖T_{kk} − T_{k+1,k+1}‖ − q ‖T_{kk} − y(H)‖

⟹ ε_{kk} = ‖T_{kk} − y(H)‖ ≥ (1/(1+q)) ‖T_{kk} − T_{k+1,k+1}‖ .


Consequently,

ε̂_{kk} := ‖T_{kk} − T_{k+1,k+1}‖

is a suitable estimator for ε_{kk}.

Disadvantage: For estimating the error of the approximation T_{kk} in row k, we need information from row k+1 (additional computational work). If we have to compute the approximations in row k+1 anyway, then:

Alternative: Subdiagonal estimator

Assuming

‖T_{k+1,k+1} − y(H)‖ ≤ q ‖T_{k+1,k} − y(H)‖ , q < 1 ,

we obtain

(1/(1+q)) ‖T_{k+1,k} − T_{k+1,k+1}‖ ≤ ε_{k+1,k} ≤ (1/(1−q)) ‖T_{k+1,k} − T_{k+1,k+1}‖ .

Therefore,

ε̄_{k+1,k} := ‖T_{k+1,k} − T_{k+1,k+1}‖

is an appropriate estimator for ε_{k+1,k}.


Lemma 2.30 (Comparison of both estimators)

Under the assumptions (A1), (A2) there holds

ε̄_{k+1,k}/ε̂_{kk} ≐ ε_{k+1,k}/ε_{kk} ≐ (n_1/n_{k+1})^γ ≪ 1 .

Consequence: For a prespecified tolerance eps, the requirement ε̂_{kk} ≤ eps is more restrictive than ε̄_{k+1,k} ≤ eps.

Therefore: Alternative diagonal estimator of Stoer:

ε̂_{kk} (n_1/n_{k+1})^γ ≤ eps .


Development of an order and step size control

(i) Step size control

Convergence monitor for the extrapolation table:

Assume that for suitably chosen bounds α_{ik} there holds

(⋆) ε_{ik} ≤ α_{ik} .

Observe

(⋄) ε_{ik} ≐ γ_{ik} σ_k H^{γk+1} =: C_{ik} H^{γk+1} .

Let H_{ik} be the step size for which equality holds in (⋆), i.e.,

ε_{ik} ≐ C_{ik} H_{ik}^{γk+1} = α_{ik}  ⟹  C_{ik} = α_{ik} H_{ik}^{−(γk+1)} .

Inserting this into (⋄) and solving for H_{ik} yields

H_{ik} ≐ H (α_{ik}/ε_{ik})^{1/(γk+1)} .


Growth of the bounds α_{ik}: We assume an ideal growth as in the error table:

(A3) α_{i+1,k} ≐ (n_{i−k+1}/n_{i+1})^γ α_{ik} .

Lemma 2.31 (Column-wise step size control)

Under the assumptions (A1), (A3) there holds

H_{ik} ≐ H_{i+1,k} , i = k, k+1, ... ,

hence we only need one estimate per column.

Diagonal error estimator: Estimator for H_{kk}:

Ĥ_{kk} := H (α_{kk}/ε̂_{kk})^{1/(γk+1)} , k ∈ lN .

Subdiagonal error estimator: Estimator for H_{k+1,k}:

H̄_{k+1,k} := H (α_{k+1,k}/ε̄_{k+1,k})^{1/(γk+1)} , k ∈ lN .


(ii) Order control by minimization of the computational work

Step size H_{ik} in the extrapolation table at position (i,k) which yields convergence (set α_{ik} = eps):

H_{ik} = H (eps/ε_{ik})^{1/(γk+1)} .

A_i: Computational work (number of evaluations of the increment function) for the computation of T_{ii}.

Recursion (exemplified for the explicit Euler method):

A_1 = 1 ,
A_{i+1} = A_i + n_{i+1} − 1 , i = 1,2,...

Normalized work unit (work per unit step):

W_{ik} := (H/H_{ik}) A_i .


For i = k+1, use the subdiagonal error estimator: With H̄_{k+1,k} there holds

W̄_{k+1,k} := A_{k+1} (ε̄_{k+1,k}/eps)^{1/(γk+1)} .

Determine the optimal column index q according to

W̄_{q+1,q} = min_{1≤k≤k_f} W̄_{k+1,k} ,

where k_f denotes the largest available column index.

After having determined q, choose

H̄_{q+1,q} = H (eps/ε̄_{q+1,q})^{1/(γq+1)}

as an estimate for the step size in the next step.
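Combining the work formula and the step size proposal, the order selection can be sketched as follows. The inputs are hypothetical: est[k] stands for ε̄_{k+1,k} and A[k] for A_{k+1}, k = 1, ..., k_f; γ = 2 corresponds to h^2-extrapolation.

```python
def choose_order_and_step(H, eps, est, A, gamma=2.0):
    """Minimize the normalized work W_{k+1,k} = A_{k+1}*(est_k/eps)^(1/(gamma*k+1))
    over the available columns, then propose the next basic step size
    H_{q+1,q} = H*(eps/est_q)^(1/(gamma*q+1))."""
    work = {k: A[k] * (est[k] / eps) ** (1.0 / (gamma * k + 1)) for k in est}
    q = min(work, key=work.get)
    H_new = H * (eps / est[q]) ** (1.0 / (gamma * q + 1))
    return q, H_new

# Hypothetical data: errors fall rapidly with k, work grows moderately.
q, H_new = choose_order_and_step(
    H=0.1, eps=1e-6,
    est={1: 1e-2, 2: 1e-5, 3: 1e-8},
    A={1: 3, 2: 7, 3: 15})
print(q, H_new)
```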


Methods as a basis for extrapolation techniques:

(i) Explicit Euler method: h-extrapolation

F_H = {1,2,3,4, ...} , n_i = i ,
A_1 = n_1 ,
A_{i+1} = A_i + n_{i+1} − 1 , i ∈ lN .

(ii) Symmetric one-step methods

(•) Implicit trapezoidal rule
(•) Implicit midpoint rule

Attention: Solution by fixed point iteration perturbs the h^2-expansion.

(iii) Explicit midpoint rule: h^2-extrapolation

y_{k+1} = y_{k−1} + 2 h f(x_k, y_k) , k ∈ lN ,
y_1 = y_0 + h f(x_0, y_0) (initial step) ,
y_0 = α .

Problem: Does the non-symmetric initial step perturb the h^2-expansion for the symmetric discretization?


Theorem 2.32 (h^2-expansion for the explicit midpoint rule)

Consider the 'stabilized' form of the explicit midpoint rule applied to an autonomous system of differential equations:

(EMP1) y_1 = y_0 + h f(y_0) ('initial step') ,

(EMP2) y_{k+1} = y_{k−1} + 2 h f(y_k) , 1 ≤ k ≤ ℓ ,

(EMP3) S_ℓ = (1/4) [y_{ℓ−1} + 2 y_ℓ + y_{ℓ+1}] ('final step') .

Under the assumption f ∈ C^{2N+2}(D), for fixed x = ℓh there exists the following asymptotic expansion of the global discretization error:

(†) S_ℓ − y(x) = ∑_{j=1}^{N} [ u_j(x) + (−1)^ℓ v_j(x) ] h^{2j} + E_{N+1}(x;h) h^{2N+2} ,

where u_j , v_j , 1 ≤ j ≤ N, are solutions of an initial value problem for a staggered system of differential equations of the form

(AWP1) u′_j = f_y u_j + inhomogeneous terms , v′_j = − f_y v_j + inhomogeneous terms ,

(AWP2) u_j(a) = (1/2) g_j(a) , v_j(a) = −(1/2) g_j(a) .


Moreover, there exist h_max > 0 and C > 0, independent of h, such that

(‡) ‖E_{N+1}(x;h)‖ ≤ C , 0 ≤ h ≤ h_max .

For ℓ = 2m, we obtain:

(⋆1) S_ℓ − y(x) = ∑_{j=1}^{N} w_j(x) h^{2j} + W_{N+1}(x;h) h^{2N+2} ,

(⋆2) w_1(x) = u_1(x) + (1/4) y″(x) , w_j(a) = W_{N+1}(a;h) = 0 , 1 ≤ j ≤ N .

Proof (Stetter (1970)): Formulate the two-step method as a symmetric one-step method of double dimension. Setting

h̄ := 2 h , ξ_k := y_{2k} , η_k := y_{2k+1} − (h̄/2) f(ξ_k) ,

it follows that

ξ_0 = y_0 , η_0 = y_1 − (h̄/2) f(y_0) = y_0 ,

ξ_{k+1} − ξ_k = y_{2k+2} − y_{2k} = 2 h f(y_{2k+1}) = h̄ f(η_k + (h̄/2) f(ξ_k)) ,

η_{k+1} − η_k = y_{2k+3} − (h̄/2) f(ξ_{k+1}) − y_{2k+1} + (h̄/2) f(ξ_k) = (h̄/2) [ f(ξ_k) + f(ξ_{k+1}) ] .


Observing

ξ_{k+1} − ξ_k = h̄ f(η_k + (h̄/2) f(ξ_k)) , η_{k+1} − η_k = (h̄/2) [ f(ξ_k) + f(ξ_{k+1}) ] ,

it follows that

(ξ_{k+1} − ξ_k)/h̄ = f(η_k + (h̄/2) f(ξ_k)) , ξ_0 = y_0 (note that η_k + (h̄/2) f(ξ_k) = y_{2k+1}),

(η_{k+1} − η_k)/h̄ = (1/2) [ f(ξ_k) + f(ξ_{k+1}) ] , η_0 = y_0 .

For h̄ → 0 we obtain

ξ′(x) = f(η(x)) , ξ(a) = y_0 , η′(x) = f(ξ(x)) , η(a) = y_0 .

The existence and uniqueness theorem of Picard-Lindelöf yields

ξ(x) = η(x) = y(x) , x ∈ I .


The difference equation

(ξ_{k+1} − ξ_k)/h̄ = f(η_k + (h̄/2) f(ξ_k)) , with η_k + (h̄/2) f(ξ_k) = y_{2k+1} ,

can be written in symmetric form using

y_{2k+1} = η_k + (h̄/2) f(ξ_k) + (1/2) [ η_{k+1} − η_k − (h̄/2) f(ξ_k) − (h̄/2) f(ξ_{k+1}) ]   (the bracket vanishes)

        = (1/2) (η_k + η_{k+1}) + (h̄/4) (f(ξ_k) − f(ξ_{k+1})) .

Stetter's theorem (cf. Theorem 2.26) yields the following h̄^2-expansion:

(ASE1) ξ_{h̄}(x) − y(x) = ∑_{j=1}^{N} p_j(x) h̄^{2j} + P_{N+1}(x;h̄) h̄^{2N+2} ,

(ASE2) η_{h̄}(x) − y(x) = ∑_{j=1}^{N} q_j(x) h̄^{2j} + Q_{N+1}(x;h̄) h̄^{2N+2} ,

where

p′_j(x) = f_y(y(x)) q_j(x) + ... , p_j(a) = 0 ,
q′_j(x) = f_y(y(x)) p_j(x) + ... , q_j(a) = 0 .


Observing (ASE1) and (ASE2), the back transformation h = h̄/2 results in

(⋄1) y_{2k} = ξ_k ⟹ y_{2k} − y(x_{2k}) = ξ_k − y(x_{2k}) ,

(⋄2) y_{2k+1} = (1/2) (η_k + η_{k+1}) + (h/2) (f(ξ_k) − f(ξ_{k+1})) .

Inserting (ASE1), (ASE2) into (⋄1), (⋄2) yields

y_{2k} − y(x_{2k}) = ∑_{j=1}^{N} e_j(x) h^{2j} + E_{N+1}(x;h) h^{2N+2} ,

y_{2k+1} − y(x_{2k+1}) = ∑_{j=1}^{N} g_j(x) h^{2j} + G_{N+1}(x;h) h^{2N+2} ,

where e′_j(x) = f_y(y(x)) g_j(x) + ... , e_j(a) = 0 , g′_j(x) = f_y(y(x)) e_j(x) + ... .


Using the decoupling

u_j := (1/2) (e_j + g_j) , v_j := (1/2) (e_j − g_j) ,

the assertion follows from

e′_j(x) = f_y(y(x)) g_j(x) + ... , e_j(a) = 0 ,
g′_j(x) = f_y(y(x)) e_j(x) + ... .

Realization: Choose ℓ = n_i = 2m_i and

F_{2H} = {2,4,6,8,10, ...} , n_i = 2i ,
A_1 = n_1 + 1 ,
A_{i+1} = A_i + n_{i+1} , i ∈ lN .

Codes: DIFSYS (Bulirsch/Stoer) , DIFEX1 (Deuflhard) , ODEX (Hairer/Wanner)
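One basic step (EMP1)-(EMP3) of the stabilized explicit midpoint rule, as used inside such extrapolation codes, can be sketched for the scalar case (the interface and the test problem y′ = y are my own choices):

```python
import math

def gbs_step(f, y0, H, n):
    """Stabilized explicit midpoint rule for an autonomous scalar ODE y' = f(y):
    n substeps of size h = H/n (n even), followed by the smoothing final step."""
    h = H / n
    ym1, y = y0, y0 + h * f(y0)          # EMP1: y_0 and Euler starting step y_1
    for _ in range(n - 1):               # EMP2: midpoint steps up to y_n
        ym1, y = y, ym1 + 2.0 * h * f(y)
    yp1 = ym1 + 2.0 * h * f(y)           # one more midpoint step: y_{n+1}
    return 0.25 * (ym1 + 2.0 * y + yp1)  # EMP3: S_n = (y_{n-1} + 2 y_n + y_{n+1})/4

e1 = abs(gbs_step(lambda y: y, 1.0, 1.0, 10) - math.e)
e2 = abs(gbs_step(lambda y: y, 1.0, 1.0, 40) - math.e)
print(e1, e2, e1 / e2)   # ratio close to 16, as expected for an h^2-expansion
```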


2.5 Step size control for explicit Runge-Kutta methods

Assume that y_{k+1} is an approximation of y(x_{k+1}) computed by means of an explicit Runge-Kutta method of order p:

y_{k+1} = y_k + h Φ(x_k, y_k, h) .

Problem: Estimator for the global discretization error

e_{k+1} := y_{k+1} − y(x_{k+1}) .

Lemma 2.33 (Error estimator)

Assume that ŷ_{k+1} is a more accurate approximation of y(x_{k+1}) according to

‖ŷ_{k+1} − y(x_{k+1})‖ ≤ q ‖y_{k+1} − y(x_{k+1})‖ , q < 1 .

Then,

ε̂_{k+1} := ‖ŷ_{k+1} − y_{k+1}‖

is an estimator for e_{k+1} in the sense that

(1/(1+q)) ε̂_{k+1} ≤ ‖e_{k+1}‖ ≤ (1/(1−q)) ε̂_{k+1} .


Assumption: ŷ_{k+1} is an approximation of order p+1:

‖ŷ_{k+1} − y(x_{k+1})‖ ≤ C h^{p+1} .

Given a prespecified relative accuracy ε, for a reasonable new step size ĥ := x_{k+2} − x_{k+1} we should have

(•) C ĥ^{p+1} ≐ ρ ε ,

where ρ < 1 represents an appropriate 'safety factor'.

If y_{k+1} satisfies the requirement

ε̂_{k+1} := ‖ŷ_{k+1} − y_{k+1}‖ ≐ ε ≐ C h^{p+1} ,

we have C ≐ ε̂_{k+1}/h^{p+1}. Inserting into (•) and solving for ĥ gives

ĥ = h (ρ ε / ε̂_{k+1})^{1/(p+1)} .

Two strategies for the computation of ŷ_{k+1}:

(i) Extrapolation
(ii) Embedded Runge-Kutta methods of higher order.
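The resulting step size proposal is essentially one line of code; a sketch (clamping the growth factor is a common practical safeguard that I add here, not part of the derivation above):

```python
def propose_step(h, err_est, eps, p, rho=0.9, fac_min=0.2, fac_max=5.0):
    """New step size h_hat = h * (rho*eps/err_est)^(1/(p+1)), with the change
    factor clamped to [fac_min, fac_max] to avoid erratic step size jumps."""
    fac = (rho * eps / err_est) ** (1.0 / (p + 1))
    return h * min(fac_max, max(fac_min, fac))

# If the estimate matches the tolerance and rho = 1, the step size is kept:
print(propose_step(0.1, 1e-6, 1e-6, p=4, rho=1.0))   # 0.1
```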


(i) Extrapolation

Let y_{k+1} and ȳ_{k+1} be the approximations associated with the step sizes h and h/2, respectively, i.e.,

(◦1) y_{k+1} − y(x_{k+1}) ≐ C h^p ,

(◦2) ȳ_{k+1} − y(x_{k+1}) ≐ C (h/2)^p = 2^{−p} C h^p .

Subtraction of (◦2) from (◦1) results in

y_{k+1} − ȳ_{k+1} ≐ C h^p (1 − 2^{−p}) = C h^p (2^p − 1)/2^p ,

and hence,

C ≐ (2^p/(2^p − 1)) h^{−p} (y_{k+1} − ȳ_{k+1}) .

Inserting C into (◦2) and solving for y(x_{k+1}) gives

ŷ_{k+1} = ȳ_{k+1} + (ȳ_{k+1} − y_{k+1})/(2^p − 1) .
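As code, one such extrapolation step wraps any one-step method. A sketch with a generic `step(f, x, y, h)` interface (my own convention) and the explicit Euler method, p = 1, as the base method:

```python
import math

def extrapolated_step(step, f, x, y, h, p):
    """One step of size h, two steps of size h/2, then
    yhat = ybar + (ybar - y1)/(2^p - 1), an approximation of order p+1."""
    y1 = step(f, x, y, h)                 # error ~ C h^p
    ym = step(f, x, y, h / 2)
    ybar = step(f, x + h / 2, ym, h / 2)  # error ~ C (h/2)^p
    return ybar + (ybar - y1) / (2 ** p - 1)

euler = lambda f, x, y, h: y + h * f(x, y)   # base method of order p = 1
f = lambda x, y: y
yhat = extrapolated_step(euler, f, 0.0, 1.0, 0.1, p=1)
print(abs(yhat - math.exp(0.1)))             # much smaller than the Euler error
```

The difference ȳ_{k+1} − y_{k+1} computed along the way doubles as the error estimate ε̂_{k+1}.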


(ii) Embedded Runge-Kutta methods of higher order

(ii)_1 Runge-Kutta-Fehlberg method

Assume that y_{k+1} is an approximation of y(x_{k+1}) obtained by an s-stage Runge-Kutta method of order p:

y_{k+1} = y_k + ∑_{i=1}^{s} b_i k_i ,

k_i = h f(x_k + c_i h , y_k + ∑_{j=1}^{i−1} a_{ij} k_j) ,

c_i = ∑_{j=1}^{i−1} a_{ij} .

Compute ŷ_{k+1} as the solution of an (s+t)-stage 'embedded' Runge-Kutta method of order p+1:

ŷ_{k+1} = y_k + ∑_{i=1}^{s+t} b̂_i k_i ,

k_i = h f(x_k + c_i h , y_k + ∑_{j=1}^{i−1} a_{ij} k_j) ,

c_i = ∑_{j=1}^{i−1} a_{ij} .


Butcher scheme of the embedded (s+t)-stage Runge-Kutta method:

c_2     | a_{21}
 ·      |   ·       ·
 ·      |   ·       ·     ·
c_s     | a_{s1}    ·     ·   a_{s,s−1}
 ·      |   ·       ·     ·     ·       ·
 ·      |   ·       ·     ·     ·       ·    ·
c_{s+t} | a_{s+t,1} ·     ·   a_{s+t,s−1} · ·  a_{s+t,s+t−1}
--------+---------------------------------------------------
y_{k+1} | b_1       ·     ·     ·   b_s
ŷ_{k+1} | b̂_1       ·     ·     ·   b̂_s   · ·  b̂_{s+t}

Disadvantage: Computation is continued with y_k instead of ŷ_k.

(ii)_2 Embedded Runge-Kutta method by Dormand-Prince

Idea: Use an embedded method, but continue with ŷ_k:

y_{k+1} = ŷ_k + ∑_{i=1}^{s} b_i k_i ,

k_i = h f(x_k + c_i h , ŷ_k + ∑_{j=1}^{i−1} a_{ij} k_j) ,

and take advantage of 'Fehlberg's trick' for the reduction of the remaining free parameters.

Fehlberg's trick for t = 1: At the following step x_{k+1} ↦ x_{k+2} use

k_{s+1} = h f(x_k + c_{s+1} h , ŷ_k + ∑_{j=1}^{s} a_{s+1,j} k_j)

as

k_1 = h f(x_{k+1} , ŷ_{k+1}) .

Observing ŷ_{k+1} = ŷ_k + ∑_{j=1}^{s+1} b̂_j k_j , this leads to

c_{s+1} = 1 , b̂_{s+1} = 0 , b̂_i = a_{s+1,i} , 1 ≤ i ≤ s .