
Time-stepping techniques

Unsteady flows are parabolic in time ⇒ use ‘time-stepping’ methods to

advance transient solutions step-by-step or to compute stationary solutions

[Figure: space-time diagram — the time axis separates the past (domain of dependence), the present, and the future (zone of influence)]

Initial-boundary value problem u = u(x, t)

∂u/∂t + Lu = f in Ω × (0, T)   time-dependent PDE

Bu = 0 on Γ × (0, T ) boundary conditions

u = u0 in Ω at t = 0 initial condition

Time discretization 0 = t0 < t1 < t2 < . . . < tM = T,   u^0 ≈ u0 in Ω

• Consider a short time interval (tn, tn+1), where tn+1 = tn + ∆t

• Given un ≈ u(tn), use it as an initial condition to compute un+1 ≈ u(tn+1)

Space-time discretization

Space discretization: finite differences / finite volumes / finite elements

Unknowns: ui(t) time-dependent nodal values / cell mean values

Time discretization: (i) before or (ii) after the discretization in space

The space and time variables are essentially decoupled and can be discretized

independently to obtain a sequence of (nonlinear) algebraic systems

A(un+1, un)un+1 = b(un) n = 0, 1, . . . , M − 1

Method of lines (MOL) L → Lh yields an ODE system for ui(t)

duh/dt + Lh uh = fh on (tn, tn+1)   semi-discretized equations

FEM approximation uh(x, t) = ∑_{j=1}^{N} uj(t) ϕj(x),   u_i^n ≈ u(xi, tn)

Galerkin method of lines

Weak formulation ∫Ω (∂u/∂t + Lu − f) v dx = 0, ∀v ∈ V, ∀t ∈ (tn, tn+1)

d/dt (u, v) + a(u, v) = l(v), ∀v ∈ V   →   d/dt (uh, vh) + a(uh, vh) = l(vh), ∀vh ∈ Vh

Differential-Algebraic Equations MC du/dt + Au = b, t ∈ (tn, tn+1)

where MC = {mij} is the mass matrix and u(t) = [u1(t), . . . , uN(t)]^T

Matrix coefficients mij = (ϕi, ϕj), aij = a(ϕi, ϕj), bi = l(ϕi)

Mass lumping MC → ML = diag{mi}, mi = ∑j mij = (ϕi, ∑j ϕj) = ∫Ω ϕi dx

due to the fact that ∑j ϕj ≡ 1. In the 1D case FDM = FVM = FEM + lumping
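
To make the lumping step concrete, here is a minimal sketch (Python/NumPy, not part of the lecture; the mesh size and element count are arbitrary choices) that assembles the consistent mass matrix of linear elements on a uniform 1D mesh and lumps it by row sums:

```python
import numpy as np

def mass_matrices_1d(N, h):
    """Consistent and lumped mass matrices for linear (P1) elements
    on a uniform 1D mesh with N elements (N+1 nodes) of length h."""
    # Element mass matrix of a linear element: (h/6) * [[2, 1], [1, 2]]
    Me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
    MC = np.zeros((N + 1, N + 1))
    for e in range(N):                      # loop over elements
        MC[e:e + 2, e:e + 2] += Me          # assemble into the global matrix
    ML = np.diag(MC.sum(axis=1))            # row-sum (lumped) mass matrix
    return MC, ML

MC, ML = mass_matrices_1d(N=4, h=0.25)
# Each diagonal entry of ML equals the integral of the corresponding basis
# function: h for interior nodes, h/2 for the two boundary nodes.
print(np.diag(ML))
```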

Two-level time-stepping schemes

Lumped-mass discretization ML du/dt + Au = b,   R(u, t) = ML^{-1} [Au − b]

First-order ODE system

du/dt + R(u, t) = 0 for t ∈ (tn, tn+1),   u(tn) = un, ∀n = 0, 1, . . . , M − 1

Standard θ-scheme (finite difference discretization of the time derivative)

(un+1 − un)/∆t + θ R(un+1, tn+1) + (1 − θ) R(un, tn) = 0,   0 ≤ θ ≤ 1

where ∆t = tn+1 − tn is the time step and θ is the implicitness parameter

θ = 0 forward Euler scheme explicit, O(∆t)

θ = 1/2 Crank-Nicolson scheme implicit, O(∆t)²

θ = 1 backward Euler scheme implicit, O(∆t)

Fully discretized problem

Consistent-mass discretization MC du/dt + Au = b,   R(u, t) = MC^{-1} [Au − b]

[MC + θ∆tA]un+1 = [MC − (1 − θ)∆tA]un + ∆t bn+θ

where bn+θ = θbn+1 + (1 − θ)bn, 0 ≤ θ ≤ 1, n = 0, . . . , M − 1
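
A minimal sketch of one step of this fully discretized scheme (Python/NumPy; the matrices MC and A below are illustrative stand-ins for an assembled FEM system, not data from the lecture):

```python
import numpy as np

def theta_step(MC, A, b_n, b_np1, u_n, dt, theta):
    """One step of the theta-scheme:
    [MC + theta*dt*A] u^{n+1} = [MC - (1-theta)*dt*A] u^n + dt * b^{n+theta}."""
    b_theta = theta * b_np1 + (1.0 - theta) * b_n
    lhs = MC + theta * dt * A
    rhs = (MC - (1.0 - theta) * dt * A) @ u_n + dt * b_theta
    return np.linalg.solve(lhs, rhs)

# Illustrative data: a small 1D Laplacian-type stiffness matrix and a
# consistent-type tridiagonal mass matrix (both purely for demonstration)
N = 5
A = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
MC = (4.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)) / 6.0
b = np.zeros(N)
u = np.ones(N)
for n in range(10):                       # march 10 Crank-Nicolson steps
    u = theta_step(MC, A, b, b, u, dt=0.1, theta=0.5)
```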

In general, time discretization is performed using numerical methods for ODEs

Initial value problem

du(t)/dt = f(t, u(t)),   u(tn) = un   on (tn, tn+1)

Exact integration ∫_{tn}^{tn+1} du/dt dt = un+1 − un = ∫_{tn}^{tn+1} f(t, u(t)) dt

un+1 = un + f(τ, u(τ))∆t, τ ∈ (tn, tn+1) by the mean value theorem

Idea: evaluate the integral numerically using a suitable quadrature rule

Example: standard time-stepping schemes

Numerical integration on the interval (tn, tn+1)

[Figure: quadrature rules for ∫_{tn}^{tn+1} f dt — FE: left endpoint, BE: right endpoint, CN: trapezoidal rule, LF: midpoint rule]

Forward Euler un+1 = un + f(tn, un)∆t + O(∆t)²

Backward Euler un+1 = un + f(tn+1, un+1)∆t + O(∆t)²

Crank-Nicolson un+1 = un + 1/2 [f(tn, un) + f(tn+1, un+1)]∆t + O(∆t)³

Leapfrog method un+1 = un + f(tn+1/2, un+1/2)∆t + O(∆t)³
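
The quoted orders can be checked on a scalar model problem. The sketch below (Python; the test equation du/dt = −u is an illustrative choice) advances the solution with forward Euler, backward Euler and Crank-Nicolson; halving ∆t should reduce the final error by factors of about 2, 2 and 4, respectively:

```python
import numpy as np

def integrate(scheme, dt, T=1.0, lam=1.0):
    """Solve du/dt = -lam*u, u(0) = 1 on [0, T]; exact solution exp(-lam*T)."""
    u, n_steps = 1.0, int(round(T / dt))
    for _ in range(n_steps):
        if scheme == "FE":                      # left endpoint
            u = u - dt * lam * u
        elif scheme == "BE":                    # right endpoint (linear, solved exactly)
            u = u / (1.0 + dt * lam)
        elif scheme == "CN":                    # trapezoidal rule
            u = u * (1.0 - 0.5 * dt * lam) / (1.0 + 0.5 * dt * lam)
    return u

exact = np.exp(-1.0)
for scheme in ("FE", "BE", "CN"):
    errors = [abs(integrate(scheme, dt) - exact) for dt in (0.1, 0.05, 0.025)]
    print(scheme, [f"{e:.2e}" for e in errors])
```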

Properties of time-stepping schemes

Time discretization tn = n∆t, ∆t = T/M ⇒ M = T/∆t

Accumulation of truncation errors n = 0, . . . , M − 1

ε_τ^loc = O(∆t)^p ⇒ ε_τ^glob = M ε_τ^loc = O(∆t)^{p−1}

Remark. The order of a time-stepping method (i.e., the asymptotic rate at

which the error is reduced as ∆t → 0) is not the sole indicator of accuracy

The optimal choice of the time-stepping scheme depends on its purpose:

• to obtain a time-accurate discretization of a highly dynamic flow problem

(evolution details are essential and must be captured) or

• to march the numerical solution to a steady state starting with some

reasonable initial guess (intermediate results are immaterial)

The computational cost of explicit and implicit schemes differs considerably

Explicit vs. implicit time discretization

Pros and cons of explicit schemes

⊕ easy to implement and parallelize, low cost per time step

⊕ a good starting point for the development of CFD software

⊖ small time steps are required for stability reasons, especially

if the velocity and/or mesh size are varying strongly

⊖ extremely inefficient for solution of stationary problems unless

local time-stepping i. e. ∆t = ∆t(x) is employed

Pros and cons of implicit schemes

⊕ stable over a wide range of time steps, sometimes unconditionally

⊕ constitute excellent iterative solvers for steady-state problems

⊖ difficult to implement and parallelize, high cost per time step

⊖ insufficiently accurate for truly transient problems at large ∆t

⊖ convergence of linear solvers deteriorates/fails as ∆t increases

Example: 1D convection-diffusion equation

∂u/∂t + v ∂u/∂x = d ∂²u/∂x² in (0, X) × (0, T)

u(0) = u(X) = 0, u|t=0 = u0,   f(t, u(t)) = −v ∂u/∂x + d ∂²u/∂x²

Lagrangian representation du(x(t), t)/dt = d ∂²u/∂x²   pure diffusion equation

where d/dt is the substantial derivative along the characteristic lines dx(t)/dt = v

Initial profile is convected at speed v and smeared by diffusion if d > 0

d = 0 ⇒ du(x(t), t)/dt = 0

For the pure convection equation u is constant along the characteristics

[Figure: initial profile at t = 0 and solutions at t = T for pure convection (d = 0) and convection-diffusion (d > 0)]

Example: 1D convection-diffusion equation

Uniform space-time mesh

xi = i∆x, ∆x = X/N, i = 0, . . . , N

tn = n∆t, ∆t = T/M, n = 0, . . . , M

un −→ un+1,   u_i^0 = u0(xi)

Fully discretized equation 0 ≤ θ ≤ 1

u_i^{n+1} = u_i^n + [θ f_h^{n+1} + (1 − θ) f_h^n] ∆t

Central difference / lumped-mass FEM

[Figure: space-time stencil on the uniform mesh — values at level tn are known, values at level tn+1 are unknown]

(u_i^{n+1} − u_i^n)/∆t = θ [−v (u_{i+1}^{n+1} − u_{i−1}^{n+1})/(2∆x) + d (u_{i−1}^{n+1} − 2u_i^{n+1} + u_{i+1}^{n+1})/(∆x)²]
                       + (1 − θ) [−v (u_{i+1}^n − u_{i−1}^n)/(2∆x) + d (u_{i−1}^n − 2u_i^n + u_{i+1}^n)/(∆x)²]

a sequence of tridiagonal linear systems, i = 1, . . . , N, n = 0, . . . , M − 1
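
A compact sketch of this scheme (Python/SciPy; velocity, diffusion coefficient, mesh and initial profile are illustrative choices, not data from the lecture) assembles the constant tridiagonal matrix once and solves a banded system at every time step:

```python
import numpy as np
from scipy.linalg import solve_banded

def theta_convection_diffusion(u0, v, d, dx, dt, theta, n_steps):
    """theta-scheme with central differences for u_t + v u_x = d u_xx,
    homogeneous Dirichlet boundary values at both ends."""
    N = len(u0) - 1                      # nodes 0..N, unknowns 1..N-1
    c, s = v * dt / (2 * dx), d * dt / dx**2
    lower = -theta * (c + s)             # coefficient of u_{i-1}^{n+1}
    diag  = 1 + 2 * theta * s            # coefficient of u_i^{n+1}
    upper =  theta * (c - s)             # coefficient of u_{i+1}^{n+1}
    ab = np.zeros((3, N - 1))            # banded storage for solve_banded
    ab[0, 1:], ab[1, :], ab[2, :-1] = upper, diag, lower
    u = u0.copy()
    for _ in range(n_steps):
        rhs = (u[1:-1]
               + (1 - theta) * (-c * (u[2:] - u[:-2])
                                + s * (u[:-2] - 2 * u[1:-1] + u[2:])))
        u[1:-1] = solve_banded((1, 1), ab, rhs)
    return u

x = np.linspace(0.0, 1.0, 101)
u = theta_convection_diffusion(np.sin(np.pi * x), v=1.0, d=0.01,
                               dx=x[1] - x[0], dt=0.005, theta=0.5, n_steps=100)
```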

Example: 1D convection-diffusion equation

Standard θ-scheme (two-level)

(u_i^{n+1} − u_i^n)/∆t + v (u_{i+1}^{n+θ} − u_{i−1}^{n+θ})/(2∆x) = d (u_{i−1}^{n+θ} − 2u_i^{n+θ} + u_{i+1}^{n+θ})/(∆x)²

u_i^{n+θ} = θ u_i^{n+1} + (1 − θ) u_i^n,   0 ≤ θ ≤ 1

Forward Euler (θ = 0)   u_i^{n+1} = h(u_{i−1}^n, u_i^n, u_{i+1}^n)

Backward Euler (θ = 1)   u_i^{n+1} = h(u_{i−1}^{n+1}, u_i^n, u_{i+1}^{n+1})

Crank-Nicolson (θ = 1/2)   u_i^{n+1} = h(u_{i−1}^{n+1}, u_{i−1}^n, u_i^n, u_{i+1}^n, u_{i+1}^{n+1})

Leapfrog time-stepping u_i^{n+1} = u_i^{n−1} + 2∆t f_h^n

(u_i^{n+1} − u_i^{n−1})/(2∆t) + v (u_{i+1}^n − u_{i−1}^n)/(2∆x) = d (u_{i−1}^n − 2u_i^n + u_{i+1}^n)/(∆x)²

(explicit, three-level)   u_i^{n+1} = h(u_{i−1}^n, u_i^{n−1}, u_i^n, u_{i+1}^n)

[Figure: stencils of FE, BE, CN (two time levels tn, tn+1) and LF (three time levels tn−1, tn, tn+1)]

Fractional-step θ-scheme

Given the parameters θ ∈ (0, 1), θ′ = 1 − 2θ, and α ∈ [0, 1] subdivide the time

interval (tn, tn+1) into three substeps and update the solution as follows

Step 1. un+θ = un + [αf(tn+θ, un+θ) + (1 − α)f(tn, un)]θ∆t

Step 2. un+1−θ = un+θ + [(1 − α)f(tn+1−θ, un+1−θ) + αf(tn+θ, un+θ)]θ′∆t

Step 3. un+1 = un+1−θ + [αf(tn+1, un+1) + (1 − α)f(tn+1−θ, un+1−θ)]θ∆t

Properties of this time-stepping method

• second-order accurate in the special case θ = 1 − √2/2

• coefficient matrices are the same for all substeps if α = (1 − 2θ)/(1 − θ)

• combines the advantages of Crank-Nicolson and backward Euler
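
A minimal sketch of the three substeps (Python; the scalar linear model f(u) = −λu is used purely for illustration, so each implicit substep can be solved in closed form):

```python
import numpy as np

def fstheta_step(u, dt, lam, theta=1.0 - np.sqrt(2.0) / 2.0):
    """One step of the fractional-step theta-scheme for du/dt = -lam*u.
    The implicit weight is alpha in substeps 1 and 3 and (1 - alpha) in
    substep 2; with alpha = (1-2*theta)/(1-theta) all substeps share the
    same 'coefficient matrix' (here just a scalar factor)."""
    alpha = (1.0 - 2.0 * theta) / (1.0 - theta)
    theta_p = 1.0 - 2.0 * theta
    substeps = [(theta * dt, alpha),            # step 1
                (theta_p * dt, 1.0 - alpha),    # step 2
                (theta * dt, alpha)]            # step 3
    for tau, w in substeps:
        # (1 + w*tau*lam) u_new = (1 - (1-w)*tau*lam) u_old  for f(u) = -lam*u
        u = (1.0 - (1.0 - w) * tau * lam) * u / (1.0 + w * tau * lam)
    return u

u = 1.0
for _ in range(100):
    u = fstheta_step(u, dt=0.01, lam=5.0)
print(u, np.exp(-5.0))   # compare with the exact solution at t = 1
```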

Predictor-corrector and multipoint methods

Objective: to combine the simplicity of explicit schemes and robustness of

implicit ones in the framework of a fractional-step algorithm, e.g.,

1. Predictor ũn+1 = un + f(tn, un)∆t forward Euler

2. Corrector un+1 = un + 1/2 [f(tn, un) + f(tn+1, ũn+1)]∆t Crank-Nicolson

or un+1 = un + f(tn+1, ũn+1)∆t backward Euler

Remark. Stability still leaves a lot to be desired; additional correction steps

usually do not pay off since iterations may diverge if ∆t is too large

Order barrier: two-level methods are at most second-order accurate, so

extra points are needed to construct higher-order integration schemes

Adams methods tn+1, . . . , tn−m, m = 0, 1, . . .

Runge-Kutta methods tn+α ∈ [tn, tn+1], α ∈ [0, 1]

Adams methods

Derivation: polynomial fitting

Truncation error: ε_τ^glob = O(∆t)^p for polynomials of degree p − 1 which interpolate function values at p points

[Figure: interpolation points tn−2, tn−1, tn (past, present) and tn+1 (future) on the time axis]

Adams-Bashforth methods (explicit)

p = 1 un+1 = un + ∆t f(tn, un)   forward Euler

p = 2 un+1 = un + (∆t/2) [3f(tn, un) − f(tn−1, un−1)]

p = 3 un+1 = un + (∆t/12) [23f(tn, un) − 16f(tn−1, un−1) + 5f(tn−2, un−2)]

Adams-Moulton methods (implicit)

p = 1 un+1 = un + ∆t f(tn+1, un+1)   backward Euler

p = 2 un+1 = un + (∆t/2) [f(tn+1, un+1) + f(tn, un)]   Crank-Nicolson

p = 3 un+1 = un + (∆t/12) [5f(tn+1, un+1) + 8f(tn, un) − f(tn−1, un−1)]

Adams methods

Predictor-corrector algorithm

1. Compute ũn+1 using an Adams-Bashforth method of order p − 1

2. Compute un+1 using an Adams-Moulton method of order p with the

predicted value f(tn+1, ũn+1) instead of f(tn+1, un+1)
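
A sketch of this predictor-corrector idea (Python; the starting procedure and test problem are illustrative choices) with an Adams-Bashforth predictor of order 2 and an Adams-Moulton corrector of order 3:

```python
import numpy as np

def adams_pc(f, u0, t0, dt, n_steps):
    """Adams-Bashforth(2) predictor / Adams-Moulton(3) corrector.
    One forward Euler step supplies the missing second starting value."""
    t, u = [t0], [u0]
    t.append(t0 + dt)
    u.append(u0 + dt * f(t0, u0))               # starting step (forward Euler)
    for n in range(1, n_steps):
        fn, fnm1 = f(t[n], u[n]), f(t[n - 1], u[n - 1])
        u_pred = u[n] + dt / 2.0 * (3.0 * fn - fnm1)          # AB2 predictor
        t_np1 = t[n] + dt
        u_corr = u[n] + dt / 12.0 * (5.0 * f(t_np1, u_pred)   # AM3 corrector
                                     + 8.0 * fn - fnm1)
        t.append(t_np1)
        u.append(u_corr)
    return np.array(t), np.array(u)

# Illustrative test problem du/dt = -u, u(0) = 1
t, u = adams_pc(lambda t, u: -u, 1.0, 0.0, dt=0.05, n_steps=20)
print(abs(u[-1] - np.exp(-t[-1])))
```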

Pros and cons of Adams methods

⊕ methods of any order are easy to derive and implement

⊕ only one function evaluation per time step is performed

⊕ error estimators for ODEs can be used to adapt the order

⊖ other methods are needed to start/restart the calculation

⊖ time step is difficult to change (coefficients are different)

⊖ tend to be unstable and produce nonphysical oscillations

Runge-Kutta methods

Multipredictor-multicorrector algorithms of order p

p = 2  ũn+1/2 = un + (∆t/2) f(tn, un)   forward Euler / predictor

un+1 = un + ∆t f(tn+1/2, ũn+1/2)   midpoint rule / corrector

p = 4  ũn+1/2 = un + (∆t/2) f(tn, un)   forward Euler / predictor

un+1/2 = un + (∆t/2) f(tn+1/2, ũn+1/2)   backward Euler / corrector

ũn+1 = un + ∆t f(tn+1/2, un+1/2)   midpoint rule / predictor

un+1 = un + (∆t/6) [f(tn, un) + 2f(tn+1/2, ũn+1/2) + 2f(tn+1/2, un+1/2) + f(tn+1, ũn+1)]   Simpson rule / corrector

Remark. There exist ‘embedded’ Runge-Kutta methods which perform extra

steps in order to estimate the error and adjust ∆t in an adaptive fashion
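
The p = 4 scheme above is the classical fourth-order Runge-Kutta method written in predictor-corrector form. A brief sketch (Python, with an illustrative test problem):

```python
import numpy as np

def rk4_step(f, t, u, dt):
    """Classical RK4 written as the multipredictor-multicorrector sequence."""
    u_half_p = u + dt / 2.0 * f(t, u)                      # FE predictor
    u_half_c = u + dt / 2.0 * f(t + dt / 2.0, u_half_p)    # BE-type corrector
    u_full_p = u + dt * f(t + dt / 2.0, u_half_c)          # midpoint predictor
    return u + dt / 6.0 * (f(t, u)                         # Simpson corrector
                           + 2.0 * f(t + dt / 2.0, u_half_p)
                           + 2.0 * f(t + dt / 2.0, u_half_c)
                           + f(t + dt, u_full_p))

# Illustrative test: du/dt = -u, u(0) = 1; the error should scale like O(dt^4)
u, t, dt = 1.0, 0.0, 0.1
for _ in range(10):
    u = rk4_step(lambda t, u: -u, t, u, dt)
    t += dt
print(abs(u - np.exp(-1.0)))
```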

General comments

Pros and cons of Runge-Kutta methods

⊕ self-starting, easy to operate with variable time steps

⊕ more stable and accurate than Adams methods of the same order

⊖ high order approximations are rather difficult to derive; p function

evaluations per time step are required for a p-th order method

⊖ more expensive than Adams methods of comparable order

Adaptive time-stepping strategy [Figure: timeline subdivided into steps of variable size ∆t]

makes it possible to achieve the desired accuracy at a relatively low cost

Explicit methods: use the largest time step satisfying the stability condition

Implicit methods: estimate the error and adjust the time step if necessary

Automatic time step control

Objective: make sure that ||u − u∆t|| ≈ TOL (prescribed tolerance)

Local truncation error

1. u∆t = u + ∆t² e(u) + O(∆t)⁴

2. um∆t = u + m²∆t² e(u) + O(∆t)⁴

Heuristic error analysis

e(u) ≈ (um∆t − u∆t) / [∆t² (m² − 1)]

Remark. It is tacitly assumed that the error at t = tn is equal to zero.

Estimate of the relative error

||u − u∆t*|| ≈ (∆t*/∆t)² ||u∆t − um∆t|| / (m² − 1) = TOL

Adaptive time stepping

∆t*² = TOL ∆t² (m² − 1) / ||u∆t − um∆t||

Richardson extrapolation: eliminate the leading term

u = (m² u∆t − um∆t)/(m² − 1) + O(∆t)⁴   fourth-order accurate

Practical implementation

Automatic time step control can be executed as follows

Given the old solution un do:

1. Make one large time step of size m∆t to compute um∆t

2. Make m small substeps of size ∆t to compute u∆t

3. Evaluate the relative solution changes ||u∆t − um∆t||

4. Calculate the ‘optimal’ value ∆t∗ for the next time step

5. If ∆t∗ ≪ ∆t, reset the solution and go back to step 1

6. Set un+1 = u∆t or perform Richardson extrapolation

⊖ Note that the cost per time step increases substantially (um∆t may be as

expensive to obtain as u∆t due to slow convergence at large time steps).

⊕ On the other hand, adaptive time-stepping enhances the robustness of the

code, the overall efficiency and the credibility of simulation results.
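
A sketch of steps 1-6 (Python; backward Euler on a scalar model problem serves as an illustrative integrator) combining the step-doubling error estimate, the 'optimal' step formula and Richardson extrapolation:

```python
import numpy as np

def be_step(u, dt, lam):
    """One backward Euler step for du/dt = -lam*u (illustrative integrator)."""
    return u / (1.0 + dt * lam)

def adaptive_march(u, dt, lam, T, TOL=1e-4, m=2):
    """Step-doubling time step control (steps 1-6 of the algorithm above)."""
    t = 0.0
    while t < T - 1e-14:
        dt = min(dt, (T - t) / m)                     # do not step past T
        u_big = be_step(u, m * dt, lam)               # 1. one large step m*dt
        u_small = u
        for _ in range(m):                            # 2. m small substeps
            u_small = be_step(u_small, dt, lam)
        change = abs(u_small - u_big)                 # 3. solution change
        dt_opt = dt * np.sqrt(TOL * (m**2 - 1) / max(change, 1e-30))  # 4. 'optimal' step
        if dt_opt < 0.5 * dt:                         # 5. reject and repeat the step
            dt = dt_opt
            continue
        u = (m**2 * u_small - u_big) / (m**2 - 1)     # 6. Richardson extrapolation
        t += m * dt
        dt = dt_opt
    return u

u = adaptive_march(1.0, dt=0.05, lam=5.0, T=1.0)
print(u, np.exp(-5.0))
```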

Evolutionary PID controller

Another simple mechanism for capturing the dynamics of the flow:

• Monitor the relative changes en = ||un+1 − un|| / ||un+1|| of an indicator variable u

• If en > ε reject the solution and repeat the time step using ∆t* = (ε/en) ∆tn

• Adjust the time step smoothly so as to approach the prescribed tolerance

∆tn+1 = (en−1/en)^kP (TOL/en)^kI (en−1² / (en en−2))^kD ∆tn

• Limit the growth and reduction of the time step so that

∆tmin ≤ ∆tn+1 ≤ ∆tmax,   l ≤ ∆tn+1/∆tn ≤ L

Empirical PID parameters kP = 0.075, kI = 0.175, kD = 0.01
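
A sketch of the PID update (Python; the bounds ∆tmin, ∆tmax, l, L are illustrative defaults) that computes the next time step from the stored relative changes:

```python
def pid_time_step(dt_n, e_n, e_nm1, e_nm2, TOL,
                  kP=0.075, kI=0.175, kD=0.01,
                  dt_min=1e-6, dt_max=1.0, l=0.5, L=2.0):
    """PID controller for the time step based on relative solution changes e_n."""
    dt_new = ((e_nm1 / e_n) ** kP
              * (TOL / e_n) ** kI
              * (e_nm1**2 / (e_n * e_nm2)) ** kD) * dt_n
    dt_new = min(max(dt_new, l * dt_n), L * dt_n)    # limit growth and reduction
    return min(max(dt_new, dt_min), dt_max)          # absolute bounds

# Illustrative call: the changes are decreasing, so the step is allowed to grow
print(pid_time_step(dt_n=0.01, e_n=5e-4, e_nm1=8e-4, e_nm2=1e-3, TOL=1e-3))
```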

Pseudo time-stepping

Solutions to boundary value problems of the form Lu = f represent the

steady-state limit of the associated time-dependent problem

∂u/∂t + Lu = f,   u(x, 0) = u0(x) in Ω

where u0 is an ‘arbitrary’ initial condition. Therefore, the numerical solution

can be ‘marched’ to the steady state using a pseudo time-stepping technique.

• can be interpreted as an iterative solver for the stationary problem

• the artificial time step represents an adjustable relaxation parameter

• evolution details are immaterial ⇒ ∆t should be as large as possible

• the unconditionally stable backward Euler method is to be recommended

• explicit schemes can be used in conjunction with local time-stepping

• it is worthwhile to perform an adaptive time step control (e.g., PID)
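
As a final illustration (Python/NumPy; the matrix problem below is a stand-in for a discretized Lu = f), pseudo time-stepping with backward Euler drives du/dt + Au = b to the steady state Au = b, with ∆t acting as a relaxation parameter:

```python
import numpy as np

def pseudo_time_march(A, b, u0, dt, tol=1e-10, max_steps=1000):
    """March du/dt + A u = b to steady state with backward Euler:
    (I + dt*A) u^{n+1} = u^n + dt*b.  A large dt means fast 'relaxation'."""
    I = np.eye(len(b))
    u = u0.copy()
    for n in range(max_steps):
        u_new = np.linalg.solve(I + dt * A, u + dt * b)
        if np.linalg.norm(u_new - u) < tol * np.linalg.norm(u_new):
            return u_new, n + 1
        u = u_new
    return u, max_steps

# Stand-in discrete problem (illustrative): 1D Laplacian stencil with unit load
N = 20
A = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
b = np.ones(N)
u, steps = pseudo_time_march(A, b, np.zeros(N), dt=100.0)
print(steps, np.linalg.norm(A @ u - b))
```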