Incompressible Fluid Flow


OVERVIEW

Incompressible Fluid Flow: Conservation Laws

Navier-Stokes Equations: parabolic in time, elliptic in space.
  Time marching / integration: explicit methods; implicit methods; predictor-corrector methods; fractional-step methods.

Continuity Equation: elliptic in space.
  Direct solution, or iterative solution: point methods; line methods; multi-grid.

Energy Equation: as per the NS equations.

Solution methods

Differential Formulation (Finite-Difference methods):
  ψ-ζ Formulation (2D flows): transient and steady-state solutions.
  Artificial Compressibility method (u, v, p): steady-state solutions.
  Marker & Cell method (u, v, p): transient solutions (mainly).

Integral Formulation (Finite-Volume and Finite-Element methods):
  SIMPLE / SIMPLER (u, v, p): steady-state and transient solutions.

REFERENCES

1. Smith, G. D., Numerical Solution of Partial Differential Equations: Finite-Difference Methods. 3rd Edn., 1985.
2. Ames, W. F., Numerical Methods for Partial Differential Equations. 2nd Edn., 1977.
3. Young, D. M., Iterative Solution of Large Linear Systems. 1971.
4. Hageman, L. A. & Young, D. M., Applied Iterative Methods. 1981.
5. Roache, P. J., Computational Fluid Dynamics. 2nd Edn., Hermosa, 1975.
6. Anderson, D. A., Tannehill, J. C. & Pletcher, R. H., Computational Fluid Mechanics and Heat Transfer. McGraw-Hill, 1984.
7. Patankar, S. V., Numerical Heat Transfer and Fluid Flow. Taylor & Francis, 1980.
8. Fletcher, C. A. J., Computational Techniques for Fluid Dynamics. Vol. 1: Fundamental and General Techniques; Vol. 2: Specific Techniques for Different Flow Categories. Springer-Verlag, 1988.
9. Hirsch, C., Numerical Computation of Internal and External Flows. Vol. 1: Fundamentals of Numerical Discretization; Vol. 2: Computational Methods for Inviscid and Viscous Flows. John Wiley, 1988.

Remarks:

References 1 and 2 deal generally with numerical methods for partial differential equations. References 3 and 4 are fairly specialized texts for elliptic-type equations. Reference 5 has a good coverage of the ψ-ζ formulation. Reference 6 is a good general reference, although its coverage of incompressible fluid flow is somewhat limited. A good reference for those implementing a finite-volume scheme is Reference 7. Reference 8 is the best overall reference; it may be expensive, however. Reference 9 is a good source of reference material, with a bias toward compressible fluid flows. These books are all available in the NUS Library.

INCOMPRESSIBLE FLUID FLOW PROBLEM

Physical Domain and Variables

Field variables: velocity $(u, v)$ and pressure $p$ are functions of $(x, y, t)$. Note: if thermodynamic effects are significant, a temperature field $T$ will also be involved.

[Figure: flow domain $D$ with boundary $\partial D$, showing an inflow boundary, an outflow boundary, and the no-slip B.C. at solid walls.]

☼ Governing Equations:

Conservation of Mass (continuity condition):

$$\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0$$  (1)

Conservation of Momentum:

$$\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} = -\frac{1}{\rho}\frac{\partial p}{\partial x} + \nu\nabla^2 u + f_x$$  (2)

$$\frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} = -\frac{1}{\rho}\frac{\partial p}{\partial y} + \nu\nabla^2 v + f_y$$  (3)

where

$$\nabla := (\partial/\partial x,\ \partial/\partial y), \qquad \nabla^2 := \nabla\cdot\nabla = \partial^2/\partial x^2 + \partial^2/\partial y^2,$$

and $(f_x, f_y)$ is the body force per unit fluid mass.

Conservation of Energy:

$$\rho c_p\left(\frac{\partial T}{\partial t} + u\frac{\partial T}{\partial x} + v\frac{\partial T}{\partial y}\right) \approx \nabla\cdot(k\nabla T) + Q_S$$  (4)

for low-speed incompressible flow. Fluid density $\rho$ is a function of $T$, but not of $p$, for an incompressible fluid. $\rho c_p$: heat capacity per unit volume; $k$: thermal conductivity; $Q_S$: heat addition per unit volume.

☼ A solution of the flow problem means finding the dependence of the flow field $(u, v)$ and pressure field $p$ on the space $(x, y)$ and time $t$ variables:

Unsteady solution: $u(x,y,t)$, $v(x,y,t)$ and $p(x,y,t)$.
Steady solution: $u(x,y)$, $v(x,y)$ and $p(x,y)$.

☼ Governing Equations: Numerical Treatment

Mass conservation: a constraint condition on the flow field at all times. Typically enforced via a linear elliptic (Poisson) equation. Numerical form: $Au = b$.

Momentum conservation and energy conservation: evolution equations, to be integrated in time. Numerical form: $du/dt = -[A(u)\,u - b]$.

Streamfunction-Vorticity $(\psi, \zeta)$ Formulation

For two-dimensional incompressible flow, the mass conservation and momentum conservation equations may be reformulated in terms of streamfunction $\psi(x,y,t)$ and vorticity $\zeta(x,y,t)$ fields. The velocity field $(u, v)$ may be expressed in terms of the streamfunction (up to a constant) as follows:

$$u = \frac{\partial\psi}{\partial y} \quad\text{and}\quad v = -\frac{\partial\psi}{\partial x}.$$  (5)

The vorticity field is the curl of the velocity field $(u, v)$:

$$\zeta := \nabla\times(u, v) = \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y},$$  (6)

and from the above definitions:

$$\nabla^2\psi = -\zeta,$$  (7)

a Poisson equation referred to as the Streamfunction-Vorticity equation.

An equation governing the evolution of the vorticity field $\zeta(x,y,t)$ may be derived by taking the curl of the momentum equations (2)-(3):

$$\frac{\partial}{\partial x}[\text{y-momentum equation (3)}] - \frac{\partial}{\partial y}[\text{x-momentum equation (2)}]$$

to give

$$\frac{\partial\zeta}{\partial t} + u\frac{\partial\zeta}{\partial x} + v\frac{\partial\zeta}{\partial y} = \nu\left(\frac{\partial^2\zeta}{\partial x^2} + \frac{\partial^2\zeta}{\partial y^2}\right) + \frac{\partial f_y}{\partial x} - \frac{\partial f_x}{\partial y}$$  (8a)

$$\frac{\partial\zeta}{\partial t} + \frac{\partial(u\zeta)}{\partial x} + \frac{\partial(v\zeta)}{\partial y} = \nu\nabla^2\zeta + \nabla\times(f_x, f_y)$$  (8b)

$$\frac{\partial\zeta}{\partial t} + \frac{\partial(u\zeta)}{\partial x} + \frac{\partial(v\zeta)}{\partial y} = \frac{1}{\mathrm{Re}}\nabla^2\zeta + \nabla\times(f_x, f_y),$$  (8c)

where $\mathrm{Re} = U_{\mathrm{ref}}L_{\mathrm{ref}}/\nu$ is the Reynolds number. Equation (8) is referred to as the Vorticity Transport Equation. Note that the continuity condition has been used in (8b).

☼ The 2D flow of an incompressible fluid is governed by the following Streamfunction-Vorticity and Vorticity Transport equations in the absence of body forces:

$$\nabla^2\psi = -\zeta,$$  (7)

$$\frac{\partial\zeta}{\partial t} + \frac{\partial(u\zeta)}{\partial x} + \frac{\partial(v\zeta)}{\partial y} = \frac{1}{\mathrm{Re}}\nabla^2\zeta.$$  (8)

There are only 2 dependent field variables $(\psi, \zeta)$, compared with the 3 primitive field variables $(u, v, p)$ of the original mass and momentum formulation (1)-(3). The pressure field variable $p$ is eliminated in the $(\psi, \zeta)$ formulation.

☼ The pressure field, if needed, can be obtained from the solution $(\psi, \zeta)$ of the flow field by separately solving a Poisson equation for pressure. The Poisson equation for $p(x,y,t)$ may be derived by taking the divergence of the momentum equations:

$$\nabla\cdot[\text{momentum equations}] = \frac{\partial}{\partial x}[\text{x-mom equ (2)}] + \frac{\partial}{\partial y}[\text{y-mom equ (3)}]$$  (9)

Exercise: Show that the Poisson equation for $p(x,y,t)$ is given by

$$\nabla^2 p = 2\left[\frac{\partial^2\psi}{\partial x^2}\frac{\partial^2\psi}{\partial y^2} - \left(\frac{\partial^2\psi}{\partial x\,\partial y}\right)^2\right].$$  (10)

LINEAR ELLIPTIC PROBLEMS

Second-order Partial Differential Equations in 2D:

$$A\frac{\partial^2 u}{\partial x^2} + B\frac{\partial^2 u}{\partial x\,\partial y} + C\frac{\partial^2 u}{\partial y^2} + D\frac{\partial u}{\partial x} + E\frac{\partial u}{\partial y} + Fu = G$$  (1)

where $A, B, C, D, E, F$ and $G$ may be functions of $(x, y)$. The above equation is

Elliptic: $B^2 - 4AC < 0$.  (2a)
Hyperbolic: $B^2 - 4AC > 0$.  (2b)
Parabolic: $B^2 - 4AC = 0$.  (2c)

(Fletcher, Vol. 1 gives a fairly detailed account of PDE classification. Understanding the characteristics of the PDE is crucial for the correct implementation of boundary conditions.)

☼ The ellipticity of the equation is preserved by coordinate transformation.

Well-known examples of elliptic PDEs in fluid mechanics are:

Laplace equation:

$$\nabla^2 u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0$$  (3)

Poisson equation:

$$\nabla^2 u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = g$$  (4)

Solve

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = g$$

in a domain $D$ with $u = h$ given on the boundary $\partial D$.

[Figure: rectangular grid over domain $D$ (where $\nabla^2 u = g$) with boundary $\partial D$; uniform spacing $\Delta x = \Delta y$; indices $i = 1, \dots, M$ and $j = 1, \dots, N$; five-point stencil at interior point $(i, j)$ with neighbours $(i \pm 1, j)$ and $(i, j \pm 1)$.]

Applying the second-order FD approximation:

$$\frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{(\Delta x)^2} + \frac{u_{i,j+1} - 2u_{i,j} + u_{i,j-1}}{(\Delta y)^2} = g_{i,j}$$

$$u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4u_{i,j} = (\Delta x)^2 g_{i,j}$$

At a boundary-adjacent point such as $(9, 2)$, the known boundary value moves to the right-hand side:

$$u_{10,2} + u_{8,2} + u_{9,3} - 4u_{9,2} = (\Delta x)^2 g_{9,2} - h_{9,1}$$

At the corner interior point $(2, 2)$:

$$u_{3,2} + u_{2,3} - 4u_{2,2} = (\Delta x)^2 g_{2,2} - h_{1,2} - h_{2,1}$$

☼ Each interior point yields a linear algebraic equation. There are $(M-2) \times (N-2)$ unknown $u$'s and as many equations:

$$Au = b$$

where $A$ is an $[(M-2)(N-2)] \times [(M-2)(N-2)]$ square matrix.
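To make the bookkeeping concrete, here is a minimal Python/NumPy sketch (dense storage, suitable only for small grids; the function name and the array conventions are my own choices, not part of the original notes) that assembles $Au = b$ with the boundary values moved to the right-hand side:

    import numpy as np

    def assemble_poisson(M, N, g, h, dx):
        # Unknowns: the (M-2)*(N-2) interior values, numbered row-wise.
        # g and h are arrays over the full M x N grid of points.
        m, n = M - 2, N - 2
        idx = lambda i, j: (i - 1) * n + (j - 1)    # interior (i,j) -> row
        A = np.zeros((m * n, m * n))
        b = np.empty(m * n)
        for i in range(1, M - 1):
            for j in range(1, N - 1):
                k = idx(i, j)
                A[k, k] = -4.0
                b[k] = dx * dx * g[i, j]
                for p, q in ((i+1, j), (i-1, j), (i, j+1), (i, j-1)):
                    if 1 <= p <= M - 2 and 1 <= q <= N - 2:
                        A[k, idx(p, q)] = 1.0
                    else:
                        b[k] -= h[p, q]             # boundary value to RHS
        return A, b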

SOLUTION OF LINEAR ALGEBRAIC EQUATION SYSTEMS

The linear algebraic system

$$Au = b$$  (1)

has a unique solution $u = A^{-1}b$ if matrix $A$ is non-singular. In practice, it is not always easy to obtain the numerical solution of (1) when $A$ is a large matrix.

Direct methods determine the exact solution $u$ in a finite number of computational steps (in the absence of round-off errors):

- Cramer's rule (general matrices $A$): $u_k = \mathrm{Det}(A_k)/\mathrm{Det}(A)$, where $A_k$ is $A$ with its $k$th column replaced by $b$. Operation count: $O([N+1]!)$.
- Gaussian elimination: essentially an LU decomposition of matrix $A$ to give $LUu = b$, where $L$ and $U$ are lower and upper triangular matrices respectively. Operation count: $O(N^3)$.
- Thomas algorithm for tridiagonal matrices: operation count $O(N)$.
- Fast Fourier Transform for Poisson equations on a $2^k$ grid: operation count $O(N^2 \log N)$.

Iterative methods determine the solution $u$ in a repetitive manner as the limit of a converging sequence of approximations, $u = \lim_{k\to\infty} u^k$:

- Point-iterative methods for general matrices $A$: Jacobi (J), Gauss-Seidel (GS), Successive Over-relaxation (SOR).
- Block- or line-iterative methods / Alternating-Direction Implicit (ADI) methods.
- Strongly Implicit Procedure (SIP).
- Conjugate gradient methods.
- Multi-grid procedures.

☼ Problems with Direct Methods:

- Accumulation of round-off errors limits the size of $A$.
- Large storage requirement compared to iterative methods.
- Applicable to restricted classes of problems.
- Algorithms may be very complex.

☼ Thomas Algorithm: ☼
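The slide leaves the Thomas algorithm itself to the reader; a minimal Python/NumPy sketch of the forward-elimination / back-substitution procedure (the function name and argument convention are my own) is:

    import numpy as np

    def thomas(a, b, c, d):
        # Solve a tridiagonal system: a = sub-diagonal, b = diagonal,
        # c = super-diagonal, d = right-hand side (all length n; a[0]
        # and c[-1] are unused). O(n) operations.
        n = len(d)
        cp = np.empty(n); dp = np.empty(n)
        cp[0] = c[0] / b[0]
        dp[0] = d[0] / b[0]
        for i in range(1, n):            # forward elimination
            m = b[i] - a[i] * cp[i-1]
            cp[i] = c[i] / m
            dp[i] = (d[i] - a[i] * dp[i-1]) / m
        u = np.empty(n)
        u[-1] = dp[-1]
        for i in range(n - 2, -1, -1):   # back substitution
            u[i] = dp[i] - cp[i] * u[i+1]
        return u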

ITERATIVE METHODS FOR LINEAR SYSTEMS

For large matrices, iterative methods may be the most practical, being relatively easy to program and applicable to large classes of problems.

Point-iterative methods: Jacobi (J) method, Gauss-Seidel (GS) method, Successive Over-relaxation (SOR) method and SSOR.

These methods generate a sequence of approximations $\{u^k\}_{k=0}^\infty$ to the solution of $Au = b$ by an equation/scheme of the form

$$u^{k+1} = Gu^k + c$$  (1)

where $u^0$ is the initial guess solution. ☼

Consider the system of linear equations:

$$\begin{aligned} a_{11}u_1 + a_{12}u_2 + a_{13}u_3 + a_{14}u_4 &= b_1 \\ a_{21}u_1 + a_{22}u_2 + a_{23}u_3 + a_{24}u_4 &= b_2 \\ a_{31}u_1 + a_{32}u_2 + a_{33}u_3 + a_{34}u_4 &= b_3 \\ a_{41}u_1 + a_{42}u_2 + a_{43}u_3 + a_{44}u_4 &= b_4 \end{aligned}$$  (2)

Jacobi (J) Method

Rewrite (2) as

$$\begin{aligned} u_1 &= \{b_1 - a_{12}u_2 - a_{13}u_3 - a_{14}u_4\}/a_{11} \\ u_2 &= \{b_2 - a_{21}u_1 - a_{23}u_3 - a_{24}u_4\}/a_{22} \\ u_3 &= \{b_3 - a_{31}u_1 - a_{32}u_2 - a_{34}u_4\}/a_{33} \\ u_4 &= \{b_4 - a_{41}u_1 - a_{42}u_2 - a_{43}u_3\}/a_{44} \end{aligned}$$

and calculate successive approximations of the solution based on the following:

$$\begin{aligned} u_1^{k+1} &= \{b_1 - a_{12}u_2^k - a_{13}u_3^k - a_{14}u_4^k\}/a_{11} \\ u_2^{k+1} &= \{b_2 - a_{21}u_1^k - a_{23}u_3^k - a_{24}u_4^k\}/a_{22} \\ u_3^{k+1} &= \{b_3 - a_{31}u_1^k - a_{32}u_2^k - a_{34}u_4^k\}/a_{33} \\ u_4^{k+1} &= \{b_4 - a_{41}u_1^k - a_{42}u_2^k - a_{43}u_3^k\}/a_{44} \end{aligned}$$

This is the Jacobi method. The general form for $N$ unknowns is

$$u_i^{k+1} = \frac{b_i}{a_{ii}} - \frac{1}{a_{ii}} \sum_{\substack{j=1 \\ j \neq i}}^{N} a_{ij}u_j^k \qquad (i = 1, \dots, N)$$  (3)

Both the scheme above and the general form (3) may clearly be written in the form

$$u^{k+1} = Gu^k + c.$$

☼ The Gauss-Seidel (GS) Method

The Gauss-Seidel method is a variation of the Jacobi method in which the values of $u_i^{k+1}$ are used as soon as they become available. Thus, we write the iteration for system (2) as

$$\begin{aligned} u_1^{k+1} &= \{b_1 - a_{12}u_2^k - a_{13}u_3^k - a_{14}u_4^k\}/a_{11} \\ u_2^{k+1} &= \{b_2 - a_{21}u_1^{k+1} - a_{23}u_3^k - a_{24}u_4^k\}/a_{22} \\ u_3^{k+1} &= \{b_3 - a_{31}u_1^{k+1} - a_{32}u_2^{k+1} - a_{34}u_4^k\}/a_{33} \\ u_4^{k+1} &= \{b_4 - a_{41}u_1^{k+1} - a_{42}u_2^{k+1} - a_{43}u_3^{k+1}\}/a_{44} \end{aligned}$$  (4)

and the general form for $N$ unknowns is

$$u_i^{k+1} = \frac{1}{a_{ii}} \left\{ b_i - \sum_{j=1}^{i-1} a_{ij}u_j^{k+1} - \sum_{j=i+1}^{N} a_{ij}u_j^k \right\} \qquad (i = 1, \dots, N).$$  (5)

It will be seen that the Gauss-Seidel method may also be cast in the form of (1). ☼
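As an illustration, here is a minimal Python/NumPy sketch of one sweep of each method for a small dense system (the function names are my own and no convergence checking is included):

    import numpy as np

    def jacobi_step(A, b, u):
        # One Jacobi sweep, eq. (3): every u_i is updated from old values only.
        D = np.diag(A)
        return (b - A @ u + D * u) / D

    def gauss_seidel_step(A, b, u):
        # One Gauss-Seidel sweep, eq. (5): new values are used as soon as
        # they become available.
        u = u.copy()
        for i in range(len(b)):
            u[i] = (b[i] - A[i, :i] @ u[:i] - A[i, i+1:] @ u[i+1:]) / A[i, i]
        return u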

The Successive Over-relaxation Method (SOR)

Loop $i = 1, \dots, N$:

GS Step: Calculate an intermediate $\bar u_i^{k+1}$ based on the GS step.

Relaxation Step: Calculate

$$u_i^{k+1} = u_i^k + \omega(\bar u_i^{k+1} - u_i^k) = \omega \bar u_i^{k+1} + (1 - \omega)u_i^k,$$  (6)

where $0 < \omega < 2$.

End i-Loop

Using equation (5) from the GS method in (6):

$$u_i^{k+1} = \frac{\omega}{a_{ii}} \left\{ b_i - \sum_{j=1}^{i-1} a_{ij}u_j^{k+1} - \sum_{j=i+1}^{N} a_{ij}u_j^k \right\} + (1 - \omega)u_i^k \qquad (i = 1, \dots, N)$$  (7)

which is the general form of the SOR method. Note:

$\omega < 1$: case of under-relaxation. $\omega = 1$: the GS method is recovered.

The Jacobi Over-relaxation Method (JOR) is obtained if the GS Step is replaced by a Jacobi Step instead.

The SOR method may also be cast in the form of (1). ☼

The Symmetric Successive Over-relaxation Method (SSOR)

One SSOR iteration comprises 2 SOR sweeps:

Step 1: An SOR sweep over the unknowns in the order $u_1, u_2, u_3, \dots, u_N$.
Step 2: An SOR sweep over the unknowns in the reverse order $u_N, u_{N-1}, \dots, u_1$.

The SSOR has a better distribution of error: a persistent sweep in one direction may lead to error accumulation if the sweep direction is not consistent with the physics of the problem. ☼
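Extending the sketch above, one SOR sweep per equation (7) might look like this (again an illustrative sketch, not a tuned implementation):

    def sor_step(A, b, u, omega):
        # One SOR sweep, eq. (7): the GS value is blended with the old value.
        u = u.copy()
        for i in range(len(b)):
            gs = (b[i] - A[i, :i] @ u[:i] - A[i, i+1:] @ u[i+1:]) / A[i, i]
            u[i] = omega * gs + (1.0 - omega) * u[i]
        return u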

Residual Error and Increment (∆)

The scheme $u^{k+1} = Gu^k + c$ generates a sequence of approximations $\{u^k\}_{k=0}^\infty$ from the initial guess $u^0$. We define the following:

Residual vector: $r^k := b - Au^k$  (8)

or, component-wise, $r_i^k := b_i - \sum_{j=1}^N a_{ij}u_j^k$  (9)

Increment vector: $\Delta u^k := u^{k+1} - u^k$  (10)

or, component-wise, $\Delta u_i^k = u_i^{k+1} - u_i^k$  (11)

☼ Jacobi method in ∆-form:

From (3),

$$u_i^{k+1} = \frac{b_i}{a_{ii}} - \frac{1}{a_{ii}} \sum_{\substack{j=1 \\ j \neq i}}^{N} a_{ij}u_j^k \qquad (i = 1, \dots, N)$$

$$u_i^{k+1} - u_i^k = \frac{b_i}{a_{ii}} - \frac{1}{a_{ii}} \sum_{j=1}^{N} a_{ij}u_j^k \qquad (i = 1, \dots, N)$$

or

$$\Delta u_i^k = r_i^k / a_{ii}$$  (12)

Gauss-Seidel method in ∆-form:

Rewrite the Gauss-Seidel equation (5) as

$$u_i^{k+1} - u_i^k = \frac{1}{a_{ii}} \left\{ b_i - \sum_{j=1}^{i-1} a_{ij}u_j^{k+1} - \sum_{j=i+1}^{N} a_{ij}u_j^k - a_{ii}u_i^k - \sum_{j=1}^{i-1} a_{ij}u_j^k + \sum_{j=1}^{i-1} a_{ij}u_j^k \right\} \qquad (i = 1, \dots, N)$$

$$u_i^{k+1} - u_i^k = \frac{1}{a_{ii}} \left\{ b_i - \sum_{j=1}^{N} a_{ij}u_j^k - \sum_{j=1}^{i-1} a_{ij}(u_j^{k+1} - u_j^k) \right\} \qquad (i = 1, \dots, N)$$

or

$$\Delta u_i^k = \frac{1}{a_{ii}} \left\{ r_i^k - \sum_{j=1}^{i-1} a_{ij}\,\Delta u_j^k \right\} \qquad (i = 1, \dots, N)$$  (13)

☼ Successive Over-Relaxation Method in ∆-form:

Exercise: Show that for the SOR:

$$\Delta u_i^k = \frac{\omega}{a_{ii}} \left\{ r_i^k - \sum_{j=1}^{i-1} a_{ij}\,\Delta u_j^k \right\} \qquad (i = 1, \dots, N)$$  (14)

From (6): $\Delta u_i^k = \omega(\bar u_i^{k+1} - u_i^k)$. Identifying $(\bar u_i^{k+1} - u_i^k)$ in (6) with the GS increment given by (13) yields the requisite formula.

Answer behind

Norms and Numerical Convergence

☼ The Maximum ($l_\infty$) and Euclidean ($l_2$) Norms of a Vector:

A norm is a measure of the magnitude/size of a vector. We will not go into the properties of norms and normed vector spaces in this introductory course; it suffices to know their definitions. (Reference: Numerical Analysis by Burden, R. L. & Faires, J. D., 7th Edn., 2001, Brooks/Cole.)

For any vector $x = (x_i)_{i=1}^N$:

The Maximum or $l_\infty$-norm: $\|x\|_\infty := \max_{i=1,\dots,N} \{|x_i|\}$  (15)

The Euclidean or $l_2$-norm: $\|x\|_2 := \sqrt{\sum_{i=1}^N (x_i)^2}$  (16)

Note: $\|x\|_\infty \le \|x\|_2 \le \sqrt{N}\,\|x\|_\infty$, i.e. the $\|\cdot\|_\infty$ and $\|\cdot\|_2$ norms are equivalent.

For a given vector norm $\|\cdot\|$, the induced or natural matrix norm is defined by

$$\|B\| := \max_{\|x\|=1} \|Bx\| = \max_{x \neq 0} \frac{\|Bx\|}{\|x\|}$$  (17)

for any matrix $B$. ☼

Convergence:

The sequence $\{u^k\}_{k=0}^\infty$ generated by an iterative procedure such as $u^{k+1} = Gu^k + c$ converges to the solution if the sequence of residuals $\{r^k\}_{k=0}^\infty$ tends to zero, i.e.

$$\lim_{k\to\infty} \|r^k\| = 0,$$  (18)

where $\|\cdot\|$ is a suitable norm.

For iterative procedures, computation is stopped (numerical convergence) when

1. $\|r^k\| < TOL$, or  (19)
2. $\|\Delta u^k\| = \|u^{k+1} - u^k\| < TOL$  (20)

where $TOL$ is a small real number specified by the user.

Examples:

$$\|r^k\|_\infty = \max_{i=1,\dots,N} \{|r_i^k|\} < 10^{-8}.$$

$$\|\Delta u^k\|_2 = \|u^{k+1} - u^k\|_2 = \sqrt{\sum_{i=1}^N (u_i^{k+1} - u_i^k)^2} < 10^{-6}.$$
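The stopping criteria (19)-(20) can be wrapped around any of the sweep functions sketched earlier; the following driver (the name solve and the default tolerances are my own choices) illustrates this:

    def solve(A, b, step, tol=1e-8, max_iter=10000):
        # Iterate u <- step(A, b, u) until the residual norm (19) or the
        # increment norm (20) falls below tol.
        u = np.zeros_like(b)
        for k in range(max_iter):
            u_new = step(A, b, u)
            if (np.linalg.norm(b - A @ u_new, np.inf) < tol or
                    np.linalg.norm(u_new - u) < tol):
                return u_new, k + 1
            u = u_new
        return u, max_iter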

Matrix Notation for J, GS and SOR Methods

The Jacobi Method: Decompose the matrix $A$ as follows:

$$Au = (D - L - U)u = b$$  (21)

where
$D$ comprises the diagonal elements of $A$;
$-L$ comprises the elements of the lower triangle of $A$ (no diagonal elements);
$-U$ comprises the elements of the upper triangle of $A$ (no diagonal elements).

Rearrange (21) as follows:

$$Du = (L + U)u + b$$

$$Du^{k+1} = (L + U)u^k + b$$

$$u^{k+1} = [D^{-1}(L + U)]u^k + D^{-1}b$$  (22)

which is equations (2) & (3) in matrix language. Note that $D$ is non-singular and therefore invertible. Equation (22) clearly is of the form of equation (1): $u^{k+1} = Gu^k + c$. ☼

The Gauss-Seidel Method: Rearrange (5) as follows:

$$a_{ii}u_i^{k+1} + \sum_{j=1}^{i-1} a_{ij}u_j^{k+1} = -\sum_{j=i+1}^{N} a_{ij}u_j^k + b_i \qquad (i = 1, \dots, N)$$

This is the same as decomposing and rewriting $Au = b$ as

$$(D - L)u^{k+1} = Uu^k + b$$  (23)

where $(D - L)$ is the lower triangle of $A$ and is invertible. Then

$$u^{k+1} = (D - L)^{-1}Uu^k + (D - L)^{-1}b$$  (24)

is the matrix form of the GS method. ☼

The Successive Over-Relaxation Method

Exercise: Show that for SOR, the matrix equation in the form of $u^{k+1} = Gu^k + c$ is given by:

$$u^{k+1} = (D - \omega L)^{-1}[(1 - \omega)D + \omega U]u^k + \omega(D - \omega L)^{-1}b$$  (25)

Rearrange equation (7) as follows:

$$a_{ii}u_i^{k+1} + \omega \sum_{j=1}^{i-1} a_{ij}u_j^{k+1} = (1 - \omega)a_{ii}u_i^k - \omega \sum_{j=i+1}^{N} a_{ij}u_j^k + \omega b_i \qquad (i = 1, \dots, N).$$

We can immediately see that

$$(D - \omega L)u^{k+1} = [(1 - \omega)D + \omega U]u^k + \omega b$$

and hence (25), since $(D - \omega L)$ is non-singular (a lower triangular matrix with non-zero diagonal). ☼

Answer behind
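For small systems, the generating matrices of (22), (24) and (25) can be formed explicitly, which is convenient for the convergence analysis that follows; a sketch (dense storage, names of my own choosing):

    def iteration_matrices(A, omega):
        # Splitting A = D - L - U, per (21).
        D = np.diag(np.diag(A))
        L = -np.tril(A, -1)
        U = -np.triu(A, 1)
        G_J   = np.linalg.solve(D, L + U)                      # eq. (22)
        G_GS  = np.linalg.solve(D - L, U)                      # eq. (24)
        G_SOR = np.linalg.solve(D - omega * L,
                                (1 - omega) * D + omega * U)   # eq. (25)
        return G_J, G_GS, G_SOR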

Applications to the Poisson Equation in a Rectangular Domain

Solve

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = g \quad\text{in domain } D, \text{ with } u = h \text{ on } \partial D.$$  (26)

[Figure: rectangular grid over $D$ with boundary $\partial D$; uniform spacing $\Delta x = \Delta y$; indices $i = 1, \dots, M$ and $j = 1, \dots, N$; five-point stencil at interior point $(i, j)$.]

The second-order (five-point) central difference at interior point $(i, j)$ yields:

$$u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4u_{i,j} = (\Delta x)^2 g_{i,j}$$  (27)

Applying the Jacobi method: Rewrite the difference equation (27) at an interior point as

$$u_{i,j} = \frac{1}{4}\left(u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - (\Delta x)^2 g_{i,j}\right).$$

Jacobi iteration is based on

$$u_{i,j}^{k+1} = \frac{1}{4}\left(u_{i+1,j}^k + u_{i-1,j}^k + u_{i,j+1}^k + u_{i,j-1}^k - (\Delta x)^2 g_{i,j}\right).$$  (28)

At the corner interior point $(2, 2)$, the computation follows

$$u_{2,2}^{k+1} = \frac{1}{4}\left(u_{3,2}^k + h_{1,2} + u_{2,3}^k + h_{2,1} - (\Delta x)^2 g_{2,2}\right).$$

At the bottom interior point $(9, 2)$, the computation follows

$$u_{9,2}^{k+1} = \frac{1}{4}\left(u_{10,2}^k + u_{8,2}^k + u_{9,3}^k + h_{9,1} - (\Delta x)^2 g_{9,2}\right).$$

☼ Jacobi Method - Programming Aspects: ☼

Applying the GS Method: For the GS method, we write the computing formula (28) as

$$u_{i,j}^{k+1} = \frac{1}{4}\left(u_{i+1,j}^k + u_{i-1,j}^{k+1} + u_{i,j+1}^k + u_{i,j-1}^{k+1} - (\Delta x)^2 g_{i,j}\right).$$  (29)

Applying the SOR Method:

$$u_{i,j}^{k+1} = \frac{\omega}{4}\left(u_{i+1,j}^k + u_{i-1,j}^{k+1} + u_{i,j+1}^k + u_{i,j-1}^{k+1} - (\Delta x)^2 g_{i,j}\right) + (1 - \omega)u_{i,j}^k.$$  (30)

☼ GS and SOR Methods - Programming Aspects: ☼

Block/Line Iterative Schemes

- The Jacobi Line Over-Relaxation Method (JLOR)
- The Successive Line Over-Relaxation Method (SLOR)
- Zebra Line Relaxation Method
- Red-Black Point Relaxation Method (a point method)

We shall consider the application of these methods to the Poisson equation $\nabla^2 u = g$, although they are applicable to other problems as well. ☼

The Jacobi Line Over-Relaxation (JLOR) Method

[Figure: grid over $D$ highlighting the $i$-th vertical line; $\bar u^{k+1}$: unknown at iteration $(k+1)$; $u^k$: known at iteration $(k+1)$.]

The Jacobi method has

$$u_{i,j}^{k+1} = \frac{1}{4}\left(u_{i+1,j}^k + u_{i-1,j}^k + u_{i,j+1}^k + u_{i,j-1}^k - (\Delta x)^2 g_{i,j}\right)$$

Treat the $u_{i,j}$ on the $i$-th vertical line as unknowns for the $(k+1)$ iteration. At the interior point $(i, j)$:

$$\bar u_{i,j}^{k+1} = \frac{1}{4}\left(u_{i+1,j}^k + u_{i-1,j}^k + \bar u_{i,j+1}^{k+1} + \bar u_{i,j-1}^{k+1} - (\Delta x)^2 g_{i,j}\right) \qquad (j = 2, \dots, N-1)$$  (31)

Rearranging the unknowns $\bar u_{i,j}^{k+1}$ $(j = 2, \dots, N-1)$ along the $i$-th vertical line gives

$$-\bar u_{i,j-1}^{k+1} + 4\bar u_{i,j}^{k+1} - \bar u_{i,j+1}^{k+1} = u_{i+1,j}^k + u_{i-1,j}^k - (\Delta x)^2 g_{i,j} \qquad (j = 2, \dots, N-1)$$  (32)

which is a tridiagonal system of linear equations for the unknowns. The tridiagonal system can be solved very efficiently by the Thomas algorithm. ☼

The JLOR method comprises 2 steps, as follows (see the code sketch below):

Loop $i = 2, \dots, M-1$

Step 1: Solve the tridiagonal system of equations (32) to obtain $\{\bar u_{i,j}^{k+1}\}_{j=2}^{N-1}$.

Step 2: Carry out relaxation by

$$u_{i,j}^{k+1} = \omega \bar u_{i,j}^{k+1} + (1 - \omega)u_{i,j}^k \quad\text{for } j = 2, \dots, N-1.$$  (33)

End i-Loop

The JLOR procedure may be applied along horizontal lines, or in alternate vertical and horizontal sweeps. $\omega = 1$ yields the Jacobi-line method. ☼
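A minimal sketch of one JLOR sweep, assuming the thomas() routine given earlier and NumPy arrays whose edge rows/columns hold the boundary values:

    def jlor_sweep(u, g, dx, omega):
        # One JLOR sweep: solve (32) on each interior vertical line, then
        # relax with (33); the RHS uses only old (level-k) values.
        M, N = u.shape
        u_new = u.copy()
        for i in range(1, M - 1):
            n = N - 2
            a = -np.ones(n); b = 4.0 * np.ones(n); c = -np.ones(n)
            d = u[i+1, 1:-1] + u[i-1, 1:-1] - dx * dx * g[i, 1:-1]
            d[0] += u[i, 0]        # boundary value at the line's lower end
            d[-1] += u[i, -1]      # boundary value at the line's upper end
            ubar = thomas(a, b, c, d)
            u_new[i, 1:-1] = omega * ubar + (1 - omega) * u[i, 1:-1]
        return u_new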

Exercise: Show that the tridiagonal linear system (32) for the JLOR method is equivalent to the following tridiagonal system in ∆-form:

$$-\Delta u_{i,j-1}^k + 4\Delta u_{i,j}^k - \Delta u_{i,j+1}^k = -\omega r_{i,j}^k.$$  (34)

Rewrite (33) in ∆-form:

$$\bar u_{i,j}^{k+1} = u_{i,j}^k + \frac{1}{\omega}\Delta u_{i,j}^k$$  (35)

Use (35) to replace all the overbarred terms in (32) and rearrange to give (34), which is a tridiagonal system of linear equations in the unknown increments $\Delta u_{i,j}^k$ $(j = 2, \dots, N-1)$. The Thomas algorithm may be used to solve for the unknowns. Having solved for $\Delta u_{i,j}^k$ $(j = 2, \dots, N-1)$, compute

$$u_{i,j}^{k+1} = u_{i,j}^k + \Delta u_{i,j}^k.$$  (36)

Answer behind

The Successive Line Over-Relaxation (SLOR) Method

[Figure: grid highlighting the $i$-th vertical line; $\bar u^{k+1}$: unknown at $(k+1)$; $u^k$: known at $(k+1)$; $u^{k+1}$ on the $(i-1)$-th line: known at $(k+1)$.]

Treat the $u_{i,j}$ on the $i$-th vertical line as unknowns for the $(k+1)$ iteration. Treat the $u_{i-1,j}$ on the $(i-1)$-th vertical line as knowns for the $(k+1)$ iteration. At the interior point $(i, j)$:

$$\bar u_{i,j}^{k+1} = \frac{1}{4}\left(u_{i+1,j}^k + u_{i-1,j}^{k+1} + \bar u_{i,j+1}^{k+1} + \bar u_{i,j-1}^{k+1} - (\Delta x)^2 g_{i,j}\right) \qquad (j = 2, \dots, N-1)$$  (37)

Rearranging,

$$-\bar u_{i,j-1}^{k+1} + 4\bar u_{i,j}^{k+1} - \bar u_{i,j+1}^{k+1} = u_{i+1,j}^k + u_{i-1,j}^{k+1} - (\Delta x)^2 g_{i,j} \qquad (j = 2, \dots, N-1)$$  (38)

yields a tridiagonal system of linear equations for the unknowns $\{\bar u_{i,j}^{k+1}\}_{j=2}^{N-1}$.

The SLOR method thus comprises 2 steps:

Loop $i = 2, \dots, M-1$

Step 1: Solve the tridiagonal system of equations (38) to obtain $\{\bar u_{i,j}^{k+1}\}_{j=2}^{N-1}$.

Step 2: Carry out relaxation by

$$u_{i,j}^{k+1} = \omega \bar u_{i,j}^{k+1} + (1 - \omega)u_{i,j}^k \quad\text{for } j = 2, \dots, N-1.$$  (39)

End i-Loop ☼

Exercise: Derive the ∆-form of the SLOR method.

Rewrite (39) in ∆-form:

$$\bar u_{i,j}^{k+1} = u_{i,j}^k + \frac{1}{\omega}\Delta u_{i,j}^k$$  (40)

Use (40) to replace all the overbarred terms in (38), leading to

$$-\Delta u_{i,j-1}^k + 4\Delta u_{i,j}^k - \Delta u_{i,j+1}^k = \omega(\Delta u_{i-1,j}^k - r_{i,j}^k),$$  (41)

which is a tridiagonal system of linear equations in the unknown increments $\Delta u_{i,j}^k$ $(j = 2, \dots, N-1)$. Note that $\Delta u_{i-1,j}^k$ is known from the preceding computation on the $(i-1)$-th vertical line. The Thomas algorithm may be used to solve for the unknowns. Having solved for $\Delta u_{i,j}^k$ $(j = 2, \dots, N-1)$, compute $u_{i,j}^{k+1} = u_{i,j}^k + \Delta u_{i,j}^k$.

Answer behind
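The SLOR variant differs from the JLOR sketch given earlier only in that the freshly relaxed values on line $i-1$ enter the right-hand side of (38):

    def slor_sweep(u, g, dx, omega):
        # One SLOR sweep: as jlor_sweep(), but u_new[i-1, :] (already
        # relaxed this sweep) replaces u[i-1, :] in the RHS of (38).
        M, N = u.shape
        u_new = u.copy()
        for i in range(1, M - 1):
            n = N - 2
            a = -np.ones(n); b = 4.0 * np.ones(n); c = -np.ones(n)
            d = u[i+1, 1:-1] + u_new[i-1, 1:-1] - dx * dx * g[i, 1:-1]
            d[0] += u[i, 0]
            d[-1] += u[i, -1]
            ubar = thomas(a, b, c, d)
            u_new[i, 1:-1] = omega * ubar + (1 - omega) * u[i, 1:-1]
        return u_new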

Zebra Line Relaxation (ZLR) Method

The method involves the application of a line relaxation method in TWO sweeps over TWO alternating sets of lines.

[Figure: grid over $D$ with even-numbered and odd-numbered vertical lines distinguished.]

One iteration of the ZLR method involves 2 steps: $u^k \to u^{k+1/2}$ and $u^{k+1/2} \to u^{k+1}$.

Step 1: $u^k \to u^{k+1/2}$

Even values of $i$: Apply the JLOR method over the even-numbered vertical lines:

$$-\bar u_{i,j-1}^{k+1/2} + 4\bar u_{i,j}^{k+1/2} - \bar u_{i,j+1}^{k+1/2} = u_{i+1,j}^k + u_{i-1,j}^k - (\Delta x)^2 g_{i,j} \qquad (j = 2, \dots, N-1)$$  (42)

Solve the tridiagonal system for $\{\bar u_{i,j}^{k+1/2}\}_{j=2}^{N-1}$.

Relaxation: $u_{i,j}^{k+1/2} = \omega \bar u_{i,j}^{k+1/2} + (1 - \omega)u_{i,j}^k$ $(j = 2, \dots, N-1)$  (43)

Odd values of $i$: $u_{i,j}^{k+1/2} := u_{i,j}^k$  (44)

Step 2: $u^{k+1/2} \to u^{k+1}$

Even values of $i$: $u_{i,j}^{k+1} := u_{i,j}^{k+1/2}$  (45)

Odd values of $i$: Apply the JLOR method over the odd-numbered vertical lines:

$$-\bar u_{i,j-1}^{k+1} + 4\bar u_{i,j}^{k+1} - \bar u_{i,j+1}^{k+1} = u_{i+1,j}^{k+1/2} + u_{i-1,j}^{k+1/2} - (\Delta x)^2 g_{i,j} \qquad (j = 2, \dots, N-1)$$  (46)

Solve the tridiagonal system for $\{\bar u_{i,j}^{k+1}\}_{j=2}^{N-1}$.

Relaxation: $u_{i,j}^{k+1} = \omega \bar u_{i,j}^{k+1} + (1 - \omega)u_{i,j}^{k+1/2}$ $(j = 2, \dots, N-1)$  (47)

Note:

- Alternating sets of horizontal lines may be used.
- Different relaxation factors may be used in the 2 steps.
- Used as a smoother in multi-grid schemes.

Red-Black Point (RBP) Relaxation Method

In this approach, a point relaxation method is applied to two distinct sets of grid points: red points ($i + j$ even) and black points ($i + j$ odd).

[Figure: checkerboard pattern of red points ($i + j$ even) and black points ($i + j$ odd) on the grid over $D$.]

One iteration of the RBP method involves 2 steps: $u^k \to u^{k+1/2}$ and $u^{k+1/2} \to u^{k+1}$.

Step 1: $u^k \to u^{k+1/2}$

Red points ($i + j$ even): Apply JOR:

$$\bar u_{i,j}^{k+1/2} = \frac{1}{4}\left(u_{i+1,j}^k + u_{i-1,j}^k + u_{i,j+1}^k + u_{i,j-1}^k - (\Delta x)^2 g_{i,j}\right) \qquad (i + j \text{ even})$$

Relaxation: $u_{i,j}^{k+1/2} = \omega \bar u_{i,j}^{k+1/2} + (1 - \omega)u_{i,j}^k$ ($i + j$ even)

Black points ($i + j$ odd): $u_{i,j}^{k+1/2} := u_{i,j}^k$

Step 2: $u^{k+1/2} \to u^{k+1}$

Red points ($i + j$ even): $u_{i,j}^{k+1} := u_{i,j}^{k+1/2}$

Black points ($i + j$ odd):

$$\bar u_{i,j}^{k+1} = \frac{1}{4}\left(u_{i+1,j}^{k+1/2} + u_{i-1,j}^{k+1/2} + u_{i,j+1}^{k+1/2} + u_{i,j-1}^{k+1/2} - (\Delta x)^2 g_{i,j}\right) \qquad (i + j \text{ odd})$$

Relaxation: $u_{i,j}^{k+1} = \omega \bar u_{i,j}^{k+1} + (1 - \omega)u_{i,j}^{k+1/2}$ ($i + j$ odd)

Note:

- Different relaxation factors may be used in the 2 steps.
- Used in multi-grid schemes.
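Because each colour depends only on the other, both half-steps vectorize; a sketch (NumPy, in-place update, with the edge rows/columns again assumed to hold the boundary data):

    def red_black_sweep(u, g, dx, omega):
        # One RBP iteration: relax red points (i+j even) first, then black
        # points (i+j odd) using the half-step values.
        M, N = u.shape
        i, j = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
        interior = (i > 0) & (i < M - 1) & (j > 0) & (j < N - 1)
        for parity in (0, 1):                  # 0: red, 1: black
            mask = interior & ((i + j) % 2 == parity)
            ubar = 0.25 * (np.roll(u, -1, 0) + np.roll(u, 1, 0) +
                           np.roll(u, -1, 1) + np.roll(u, 1, 1) -
                           dx * dx * g)
            u[mask] = omega * ubar[mask] + (1 - omega) * u[mask]
        return u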

JACOBI METHOD - PROGRAMMING ASPECTS: Pseudocode

Start of Program
Declare ARRAYS: UK(i,j), UK1(i,j), G(i,j)

Initialization: Set N, M, DX, TOL
Set initial guess solution and compute fixed RHS:
    Loop j=2,N-1
        Loop i=2,M-1
            UK(i,j) = initial guess solution
            Compute G(i,j)
        End i-Loop
    End j-Loop

Set B.C.:
    Loop j=1,N
        UK(1,j) and UK(M,j) set to boundary value h
    End j-Loop
    Loop i=1,M
        UK(i,1) and UK(i,N) set to boundary value h
    End i-Loop

Jacobi iteration:
Label 100
    Loop i=2,M-1
        Loop j=2,N-1
            UK1(i,j) = 0.25*( UK(i+1,j) + UK(i-1,j) + UK(i,j+1) + UK(i,j-1) - G(i,j)*DX*DX )
        End j-Loop
    End i-Loop

Check for convergence, based on the increment between consecutive iterations:
    Loop i=2,M-1
        Loop j=2,N-1
            IF( ABS( UK1(i,j) - UK(i,j) ) > TOL ) go to Label 200
        End j-Loop
    End i-Loop
    Go to End of Program (quit program)

Label 200: Reset UK to UK1
    Loop i=2,M-1
        Loop j=2,N-1
            UK(i,j) = UK1(i,j)
        End j-Loop
    End i-Loop
    Go to Label 100

End of Program

GAUSS-SEIDEL (GS) & SUCCESSIVE OVER-RELAXATION (SOR) METHODS - PROGRAMMING ASPECTS: Pseudocode

Start of Program
Declare ARRAYS: UK(i,j), G(i,j)

Initialization: Set N, M, DX, TOL, OMEGA (for the SOR method), etc.
Set initial guess solution and compute fixed RHS:
    Loop j=2,N-1
        Loop i=2,M-1
            UK(i,j) = initial guess solution
            Compute G(i,j)
        End i-Loop
    End j-Loop

Set B.C.:
    Loop j=1,N
        UK(1,j) and UK(M,j) set to boundary value h
    End j-Loop
    Loop i=1,M
        UK(i,1) and UK(i,N) set to boundary value h
    End i-Loop

GS / SOR iteration (set OMEGA=1.0 for GS iteration):
Label 100
    Loop i=2,M-1
        Loop j=2,N-1
            R = 0.25*( UK(i+1,j) + UK(i-1,j) + UK(i,j+1) + UK(i,j-1) - G(i,j)*DX*DX )
            UK(i,j) = OMEGA*R + (1-OMEGA)*UK(i,j)
        End j-Loop
    End i-Loop

Check for convergence, based on the residual error in this case:
    Loop i=2,M-1
        Loop j=2,N-1
            RES = G(i,j)*DX*DX + 4*UK(i,j) - UK(i+1,j) - UK(i-1,j) - UK(i,j+1) - UK(i,j-1)
            IF( ABS(RES) > TOL ) go to Label 100
        End j-Loop
    End i-Loop

End of Program

CONVERGENCE OF THE ITERATIVE SCHEMES

The iteration scheme

$$u^{k+1} = Gu^k + c$$  (1)

generates a sequence of approximations $\{u^k\}_{k=0}^\infty$ to the solution $u$ of $Au = b$. The solution $u$ satisfies $u = Gu + c$.

Define the error: $e^k := u^k - u$.  (2)

Then

$$e^{k+1} = u^{k+1} - u = [Gu^k + c] - [Gu + c] = G(u^k - u) = Ge^k$$

or

$$e^k = G^k e^0.$$  (3)

☼ Assume the generating matrix of the iterative scheme is diagonalizable (possesses only simple eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_N$). Then

$$G = T\,\mathrm{Diag}(\lambda_1, \lambda_2, \dots, \lambda_N)\,T^{-1},$$  (4)

where $T$ is a non-singular matrix. Then

$$e^k = [T\,\mathrm{Diag}(\lambda_1, \dots, \lambda_N)\,T^{-1}]^k e^0 = T\,[\mathrm{Diag}(\lambda_1, \dots, \lambda_N)]^k\,T^{-1} e^0$$

or

$$\varepsilon^k = \mathrm{Diag}\big((\lambda_1)^k, (\lambda_2)^k, \dots, (\lambda_N)^k\big)\,\varepsilon^0, \quad\text{where } \varepsilon^k := T^{-1}e^k,$$

and

$$\varepsilon_i^k = (\lambda_i)^k \varepsilon_i^0 \qquad (i = 1, 2, \dots, N).$$  (5)

Thus, up to a change of basis, the individual components of the error evolve according to (5) in the course of the iteration. Hence the error $\varepsilon^k$ (and hence $e^k$) decays to zero if $|\lambda_i| < 1$ $(i = 1, 2, \dots, N)$. The decay is slowest for the component of the error corresponding to $\max_{i=1,\dots,N}\{|\lambda_i|\}$.

The error $\{e^k\}$ of the approximating sequence $\{u^k\}_{k=0}^\infty$ thus decays to zero with iterations (i.e. the sequence $\{u^k\}_{k=0}^\infty$ converges to the solution $u$) if

$$\rho(G) := \max_{i=1,\dots,N}\{|\lambda_i|\} < 1.$$  (6)

$\rho(G)$ is known as the spectral radius of matrix $G$.

Note: In the general case, the sequence of approximations $\{u^k\}_{k=0}^\infty$ converges to the solution $u$ of $Au = b$ if and only if $\rho(G) < 1$. ☼

The rate of convergence to the solution is also determined by the value of $\rho(G) < 1$. The quantity

$$\sigma := -\ln(\rho(G))$$  (7)

is known as the asymptotic rate of convergence ('asymptotic' refers to 'at large values of $k$'). From (5),

$$\|\varepsilon^k\|_\infty \le [\rho(G)]^k\,\|\varepsilon^0\|_\infty \qquad (\text{using the norm property } \|s\mathbf{v}\| = |s|\,\|\mathbf{v}\|)$$

$$\ln\big(\|\varepsilon^k\| / \|\varepsilon^0\|\big) \le k\,\ln(\rho(G))$$

$$k \le \frac{-\ln\big(\|\varepsilon^k\| / \|\varepsilon^0\|\big)}{\sigma}$$  (8)

Let $\|\varepsilon^k\| / \|\varepsilon^0\| = 1/e$ $(e = 2.718\dots)$.

Then (8) yields $k \le 1/\sigma$. Hence $1/\sigma$ gives an estimate of the number of iterations needed to reduce the error by a factor of $1/e$. ☼

Detailed conditions for the convergence of the classical iterative methods may be found in Young (1971). Some of the simpler criteria for convergence are:

- The GS method converges if $A$ is strictly diagonally dominant, i.e.

$$|a_{ii}| > \sum_{j=1,\,j\neq i}^{N} |a_{ij}| \qquad (i = 1, \dots, N)$$

- The GS method converges if $A$ is symmetric ($A^T = A$) and positive definite ($\mathbf{w}^T A \mathbf{w} > 0$ for all non-zero $N$-vectors $\mathbf{w}$).

- The spectral radius of the SOR iteration matrix $G_\omega$ satisfies $\rho(G_\omega) \ge |\omega - 1|$. Hence the SOR method converges only if $0 < \omega < 2$.

- If $A$ is symmetric and positive definite, then the SOR method converges if and only if $0 < \omega < 2$.
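These quantities are easy to examine numerically for a small test matrix; combining with the iteration_matrices() sketch given earlier:

    def asymptotic_rate(G):
        # Spectral radius (6) and asymptotic convergence rate (7);
        # 1/sigma estimates the iterations needed per 1/e error reduction.
        rho = max(abs(np.linalg.eigvals(G)))
        sigma = -np.log(rho)
        return rho, sigma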

To have an appreciation of the relative rates of convergence of the various classical iterative schemes, consider the case of $\nabla^2 u = g$ on a square domain with a uniform grid of $M$ intervals on each side. For a start, the spectral radii of the iteration matrices are given by:

$$\rho(G_{\mathrm{J}}) = \cos(\pi/M) \approx 1 - 0.5(\pi/M)^2$$

[Note: $\cos x = 1 - x^2/2! + x^4/4! - \cdots$]

$$\rho(G_{\mathrm{GS}}) = [\cos(\pi/M)]^2 \approx 1 - (\pi/M)^2$$

$$\rho(G^{\mathrm{SOR}}_{\omega_{\mathrm{opt}}}) = \frac{1 - \sqrt{1 - \rho_{\mathrm{J}}^2}}{1 + \sqrt{1 - \rho_{\mathrm{J}}^2}} \approx 1 - \frac{2\pi}{M}, \quad\text{and}\quad \omega^{\mathrm{SOR}}_{\mathrm{opt}} = \frac{2}{1 + \sqrt{1 - \rho_{\mathrm{J}}^2}}$$

$$\rho(G_{\mathrm{JL}}) := \rho(G^{\mathrm{JLOR}}_{\omega=1}) = \rho(G_{\mathrm{GS}})$$

$$\rho(G^{\mathrm{SLOR}}_{\omega_{\mathrm{opt}}}) = \frac{1 - \sqrt{1 - \rho_{\mathrm{JL}}^2}}{1 + \sqrt{1 - \rho_{\mathrm{JL}}^2}} \approx 1 - \frac{2\sqrt{2}\,\pi}{M}, \quad\text{and}\quad \omega^{\mathrm{SLOR}}_{\mathrm{opt}} = 1 + \rho(G^{\mathrm{SLOR}}_{\omega_{\mathrm{opt}}})$$

☼ From the above spectral radii, the asymptotic rates of convergence $\sigma(G) = -\ln(\rho(G))$ are given by:

$$\sigma_{\mathrm{J}} \approx -\ln(1 - 0.5(\pi/M)^2) \approx 0.5(\pi/M)^2 = O(M^{-2})$$

[Note: $\ln(1 + x) = x - x^2/2 + x^3/3 - \cdots$]

$$\sigma_{\mathrm{GS}} = \sigma_{\mathrm{JL}} \approx (\pi/M)^2 = O(M^{-2}) = 2\,\sigma_{\mathrm{J}}$$

$$\sigma^{\mathrm{SOR}}_{\omega_{\mathrm{opt}}} \approx 2\pi/M = O(M^{-1}) = (2M/\pi)\,\sigma_{\mathrm{GS}}$$

$$\sigma^{\mathrm{SLOR}}_{\omega_{\mathrm{opt}}} \approx 2\sqrt{2}\,\pi/M = O(M^{-1}) = \sqrt{2}\,\sigma^{\mathrm{SOR}}_{\omega_{\mathrm{opt}}}$$

The SSOR converges twice as fast as the SOR but requires twice the amount of work per iteration. ☼
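As a concrete illustration of these estimates, take $M = 100$: $\sigma_{\mathrm{J}} \approx 0.5(\pi/100)^2 \approx 4.9\times10^{-4}$, so about $1/\sigma_{\mathrm{J}} \approx 2000$ Jacobi iterations are needed for each factor-$1/e$ reduction of the error; $\sigma_{\mathrm{GS}} \approx 9.9\times10^{-4}$ (about 1000 iterations); $\sigma^{\mathrm{SOR}}_{\omega_{\mathrm{opt}}} \approx 2\pi/100 \approx 0.063$ (about 16 iterations); and $\sigma^{\mathrm{SLOR}}_{\omega_{\mathrm{opt}}} \approx 0.089$ (about 11 iterations).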

ALTERNATING DIRECTION IMPLICIT (ADI) METHOD - ELLIPTIC PROBLEMS

The ADI method is a very powerful method for solving multi-dimensional problems for which the operator may be decomposed along the spatial directions. For example:

$$\nabla^2 u \equiv \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \equiv S_x u + S_y u$$  (1a,b)

Let $Au = A_x u + A_y u$ represent the corresponding spatial difference operator. The discretized elliptic equation $Au = (A_x + A_y)u = b$ may be given the following temporal interpretation:

$$\frac{du}{dt} = Au - b,$$  (2)

for which a solution is sought in the limit $du/dt \to 0$. Applying the Backward Euler time scheme:

$$\frac{u^{k+1} - u^k}{\Delta t} = Au^{k+1} - b$$

or $(1 - \Delta t\,A)u^{k+1} = u^k - \Delta t\,b$, or in ∆-form

$$(1 - \Delta t\,A)\,\Delta u^k = -\Delta t\,r^k$$  (3)

☼ We now carry out an approximate factorization of the 2D difference operator into the product of 1D difference operators as follows:

$$(1 - \Delta t\,A) = [1 - \Delta t(A_x + A_y)] = (1 - \Delta t\,A_x)(1 - \Delta t\,A_y) + O(\Delta t^2)$$  (4)

where the higher-order term $O(\Delta t^2)$ is ignored¹. Hence (3) becomes

$$(1 - \Delta t\,A_x)(1 - \Delta t\,A_y)\,\Delta u^k = -\Delta t\,r^k$$  (5)

or

$$(1 - \tau A_x)(1 - \tau A_y)\,\Delta u^k = -\omega\tau\,r^k$$  (6)

where $\Delta t$ is replaced by the parameter $\tau$ and a new parameter $\omega$ is introduced. We are able to do this because we need not adhere to a strict interpretation of time in this problem.

Defining an intermediate quantity $\Delta\bar u^k := (1 - \tau A_y)\,\Delta u^k$ (see (6)), we have:

$$(1 - \tau A_x)\,\Delta\bar u^k = -\omega\tau\,r^k$$  (7a)

$$(1 - \tau A_y)\,\Delta u^k = \Delta\bar u^k.$$  (7b)

We note that the 1D operators $(1 - \tau A_x)$ and $(1 - \tau A_y)$ can be made diagonally dominant when $\tau$ is small. ☼

For the differential operator given in (1),

$$A_x = (E_x - 2I + E_x^{-1})/(\Delta x)^2 \quad\text{and}\quad A_y = (E_y - 2I + E_y^{-1})/(\Delta y)^2,$$  (8a,b)

where $E_x$ and $E_y$ are the shift operators along $x$ and $y$. Hence applying (7a) and (7b) at an interior point $(i, j)$ yields:

$$\Delta\bar u_{i,j}^k - \tau\,\frac{\Delta\bar u_{i+1,j}^k - 2\Delta\bar u_{i,j}^k + \Delta\bar u_{i-1,j}^k}{(\Delta x)^2} = -\omega\tau\,r_{i,j}^k$$  (9a)

$$\Delta u_{i,j}^k - \tau\,\frac{\Delta u_{i,j+1}^k - 2\Delta u_{i,j}^k + \Delta u_{i,j-1}^k}{(\Delta y)^2} = \Delta\bar u_{i,j}^k$$  (9b)

¹ The omission is consistent with the application of the first-order Backward Euler scheme, but this is not a crucial factor in the present case. Why?

Along a horizontal grid line $(i, j)$, $i = 2, \dots, M-1$, equation (9a) yields a tridiagonal system of equations in $\{\Delta\bar u_{i,j}^k\}_{i=2}^{M-1}$ (see figure).

[Figure: $M \times N$ grid sketch showing a horizontal line sweep and a vertical line sweep.]

Similarly, along a vertical grid line $(i, j)$, $j = 2, \dots, N-1$, equation (9b) yields a tridiagonal system of equations in $\{\Delta u_{i,j}^k\}_{j=2}^{N-1}$ (see figure). ☼

Implementation of the ADI method:

Step 1: Make an initial guess $u^0$. Calculate $r^0 = b - Au^0$.

Step 2: Given $u^k$ and $r^k$:
Loop j = 2, …, N-1
Solve the tridiagonal systems (9a) along the horizontal grid lines for $\Delta\bar u^k$.
End j-Loop

Step 3:
Loop i = 2, …, M-1
Solve the tridiagonal systems (9b) along the vertical grid lines for $\Delta u^k$.
End i-Loop

Step 4: $u^{k+1} = u^k + \Delta u^k$ and $r^{k+1} = b - Au^{k+1}$.

Step 5: Check for convergence, say $\|r^{k+1}\| < TOL$, and repeat Steps 2 to 5 as necessary. (Steps 2-5 constitute one iteration cycle.) ☼
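A minimal sketch of one ADI cycle (Steps 2-4), reusing the thomas() routine from earlier and assuming a square grid with $\Delta x = \Delta y$ and the residual convention $r = b - Au$; the parameter choices are left to the caller:

    def adi_cycle(u, g, dx, tau, omega):
        # Residual r = b - Au for the five-point Laplacian, interior only.
        M, N = u.shape
        r = np.zeros_like(u)
        r[1:-1, 1:-1] = g[1:-1, 1:-1] - (u[2:, 1:-1] + u[:-2, 1:-1] +
                                         u[1:-1, 2:] + u[1:-1, :-2] -
                                         4 * u[1:-1, 1:-1]) / dx**2
        s = tau / dx**2
        d_bar = np.zeros_like(u)               # boundary increments stay zero
        for j in range(1, N - 1):              # Step 2: x-sweeps, eq. (9a)
            n = M - 2
            a = -s * np.ones(n); bb = (1 + 2*s) * np.ones(n); c = -s * np.ones(n)
            d_bar[1:-1, j] = thomas(a, bb, c, -omega * tau * r[1:-1, j])
        du = np.zeros_like(u)
        for i in range(1, M - 1):              # Step 3: y-sweeps, eq. (9b)
            n = N - 2
            a = -s * np.ones(n); bb = (1 + 2*s) * np.ones(n); c = -s * np.ones(n)
            du[i, 1:-1] = thomas(a, bb, c, d_bar[i, 1:-1])
        return u + du                          # Step 4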

Notes:

- The parameters $\tau$ and $\omega$ may be optimized to make the ADI method efficient. For optimal parameters, the ADI method converges significantly faster than the SOR method.

- The optimal value of $\omega \approx 2$. The optimal $\tau = 1/\sqrt{\lambda^A_{\min}\lambda^A_{\max}}$, where $\lambda^A_{\min}$ and $\lambda^A_{\max}$ are the minimum and maximum eigenvalues of matrix $A$.

- For $A$ derived from the Laplacian operator, the ADI method above is unconditionally stable for $\tau > 0$.

- The ADI scheme is fairly robust, so that even with approximate optimization it frequently converges faster than the point and line methods.

- More details about the optimization of ADI methods may be found in Ames (1977) and references therein, or in:
1. Wachspress, E. L., 1966, Iterative Solutions of Elliptic Systems, Prentice-Hall.
2. Mitchell, A. R. & Griffiths, D. F., 1980, The Finite Difference Method in Partial Differential Equations, Wiley. ☼