SOR Method


Transcript of SOR Method

Page 1: SOR Method

Iterative Techniques in Matrix Algebra

Relaxation Techniques for Solving Linear Systems

Numerical Analysis (9th Edition), R L Burden & J D Faires

Beamer Presentation Slides prepared by John Carroll

Dublin City University

© 2011 Brooks/Cole, Cengage Learning

Page 2: SOR Method

Residual Vectors SOR Method Optimal ω SOR Algorithm

Outline

1 Residual Vectors & the Gauss-Seidel Method

2 Relaxation Methods (including SOR)

3 Choosing the Optimal Value of ω

4 The SOR Algorithm

Numerical Analysis (Chapter 7) Relaxation Techniques R L Burden & J D Faires 2 / 36


Page 7: SOR Method

Residual Vectors & the Gauss-Seidel Method

Motivation
We have seen that the rate of convergence of an iterative technique depends on the spectral radius of the matrix associated with the method.

One way to select a procedure to accelerate convergence is to choose a method whose associated matrix has minimal spectral radius.

We start by introducing a new means of measuring the amount by which an approximation to the solution of a linear system differs from the true solution of the system. The method makes use of the vector described in the following definition.

Page 11: SOR Method

Residual Vectors & the Gauss-Seidel Method

Definition
Suppose x̃ ∈ ℝⁿ is an approximation to the solution of the linear system defined by

Ax = b

The residual vector for x̃ with respect to this system is

r = b − Ax̃

Comments
A residual vector is associated with each calculation of an approximate component of the solution vector. The true objectiveive is to generate a sequence of approximations that will cause the residual vectors to converge rapidly to zero.
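As a concrete check of this definition, here is a small pure-Python sketch that computes r = b − Ax̃ componentwise. The matrix and right-hand side are the 3×3 example system used later in these slides; the approximation x̃ = (1, 1, 1)ᵗ is the starting vector used there.

```python
# Residual vector r = b - A x~ for an approximation x~ to the solution of Ax = b.
# A and b are the 3x3 example system from these slides.
A = [[4.0, 3.0, 0.0],
     [3.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [24.0, 30.0, -24.0]
x_approx = [1.0, 1.0, 1.0]

def residual(A, b, x):
    """Return r = b - A x, computed componentwise."""
    n = len(b)
    return [b[m] - sum(A[m][j] * x[j] for j in range(n)) for m in range(n)]

r = residual(A, b, x_approx)
# At the true solution (3, 4, -5)^t the residual is the zero vector.
```
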

Page 14: SOR Method

Residual Vectors & the Gauss-Seidel Method

Looking at the Gauss-Seidel Method
Suppose we let

r_i^{(k)} = (r_{1i}^{(k)}, r_{2i}^{(k)}, \ldots, r_{ni}^{(k)})^t

denote the residual vector for the Gauss-Seidel method corresponding to the approximate solution vector x_i^{(k)} defined by

x_i^{(k)} = (x_1^{(k)}, x_2^{(k)}, \ldots, x_{i-1}^{(k)}, x_i^{(k-1)}, \ldots, x_n^{(k-1)})^t

The m-th component of r_i^{(k)} is

r_{mi}^{(k)} = b_m - \sum_{j=1}^{i-1} a_{mj} x_j^{(k)} - \sum_{j=i}^{n} a_{mj} x_j^{(k-1)}

Page 17: SOR Method

Residual Vectors & the Gauss-Seidel Method

Looking at the Gauss-Seidel Method (Cont'd)
Equivalently, we can write r_{mi}^{(k)} in the form

r_{mi}^{(k)} = b_m - \sum_{j=1}^{i-1} a_{mj} x_j^{(k)} - \sum_{j=i+1}^{n} a_{mj} x_j^{(k-1)} - a_{mi} x_i^{(k-1)}

for each m = 1, 2, \ldots, n.

Page 18: SOR Method

Residual Vectors & the Gauss-Seidel Method

Looking at the Gauss-Seidel Method (Cont'd)
In particular, the i-th component of r_i^{(k)} is

r_{ii}^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} - a_{ii} x_i^{(k-1)}

so

a_{ii} x_i^{(k-1)} + r_{ii}^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)}

Page 20: SOR Method

Residual Vectors & the Gauss-Seidel Method

Looking at the Gauss-Seidel Method (Cont'd)
Label the identity from the previous slide

(E)  a_{ii} x_i^{(k-1)} + r_{ii}^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)}

Recall, however, that in the Gauss-Seidel method, x_i^{(k)} is chosen to be

x_i^{(k)} = \frac{1}{a_{ii}} \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \right]

so (E) can be rewritten as

a_{ii} x_i^{(k-1)} + r_{ii}^{(k)} = a_{ii} x_i^{(k)}

Page 22: SOR Method

Residual Vectors & the Gauss-Seidel Method

Looking at the Gauss-Seidel Method (Cont'd)
Consequently, the Gauss-Seidel method can be characterized as choosing x_i^{(k)} to satisfy

x_i^{(k)} = x_i^{(k-1)} + \frac{r_{ii}^{(k)}}{a_{ii}}
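This residual-update characterization translates directly into code. The sketch below performs Gauss-Seidel sweeps in exactly this form, x_i ← x_i + r_{ii}/a_{ii}, on the example system solved later in these slides; the tolerance and iteration cap are illustrative choices, not from the text.

```python
def gauss_seidel(A, b, x0, tol=1e-10, max_iter=100):
    """Gauss-Seidel iteration written as x_i <- x_i + r_ii / a_ii,
    where r_ii is the i-th residual component using the newest values."""
    n = len(b)
    x = list(x0)
    for k in range(1, max_iter + 1):
        biggest_update = 0.0
        for i in range(n):
            # r_ii uses already-updated x_j for j < i and old x_j for j >= i.
            r_ii = b[i] - sum(A[i][j] * x[j] for j in range(n))
            x[i] += r_ii / A[i][i]
            biggest_update = max(biggest_update, abs(r_ii / A[i][i]))
        if biggest_update < tol:
            return x, k
    return x, max_iter

A = [[4.0, 3.0, 0.0], [3.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [24.0, 30.0, -24.0]
x, iters = gauss_seidel(A, b, [1.0, 1.0, 1.0])
```
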

Page 23: SOR Method

Residual Vectors & the Gauss-Seidel Method

A 2nd Connection with Residual Vectors
We can derive another connection between the residual vectors and the Gauss-Seidel technique.

Consider the residual vector r_{i+1}^{(k)}, associated with the vector

x_{i+1}^{(k)} = (x_1^{(k)}, \ldots, x_i^{(k)}, x_{i+1}^{(k-1)}, \ldots, x_n^{(k-1)})^t

We have seen that the m-th component of r_i^{(k)} is

r_{mi}^{(k)} = b_m - \sum_{j=1}^{i-1} a_{mj} x_j^{(k)} - \sum_{j=i}^{n} a_{mj} x_j^{(k-1)}

Page 26: SOR Method

Residual Vectors & the Gauss-Seidel Method

A 2nd Connection with Residual Vectors (Cont'd)
Therefore, the i-th component of r_{i+1}^{(k)} is

r_{i,i+1}^{(k)} = b_i - \sum_{j=1}^{i} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)}

              = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} - a_{ii} x_i^{(k)}

Page 28: SOR Method

Residual Vectors & the Gauss-Seidel Method

A 2nd Connection with Residual Vectors (Cont'd)
By the manner in which x_i^{(k)} is defined in

x_i^{(k)} = \frac{1}{a_{ii}} \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \right]

we see that r_{i,i+1}^{(k)} = 0. In a sense, then, the Gauss-Seidel technique is characterized by choosing each x_{i+1}^{(k)} in such a way that the i-th component of r_{i+1}^{(k)} is zero.


Page 31: SOR Method

From Gauss-Seidel to Relaxation Methods

Reducing the Norm of the Residual Vector
Choosing x_{i+1}^{(k)} so that one coordinate of the residual vector is zero, however, is not necessarily the most efficient way to reduce the norm of the vector r_{i+1}^{(k)}.

If we modify the Gauss-Seidel procedure, as given by

x_i^{(k)} = x_i^{(k-1)} + \frac{r_{ii}^{(k)}}{a_{ii}}

to

x_i^{(k)} = x_i^{(k-1)} + \omega \frac{r_{ii}^{(k)}}{a_{ii}}

then for certain choices of positive ω we can reduce the norm of the residual vector and obtain significantly faster convergence.

Page 35: SOR Method

From Gauss-Seidel to Relaxation Methods

Introducing the SOR Method
Methods involving

x_i^{(k)} = x_i^{(k-1)} + \omega \frac{r_{ii}^{(k)}}{a_{ii}}

are called relaxation methods.

For choices of ω with 0 < ω < 1, the procedures are called under-relaxation methods.

We will be interested in choices of ω with ω > 1, which are called over-relaxation methods. They are used to accelerate the convergence of systems that are convergent by the Gauss-Seidel technique. These methods are abbreviated SOR, for Successive Over-Relaxation, and are particularly useful for solving the linear systems that occur in the numerical solution of certain partial differential equations.

Page 40: SOR Method

The SOR Method

A More Computationally Efficient Formulation
Note that by using the i-th component of r_i^{(k)} in the form

r_{ii}^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} - a_{ii} x_i^{(k-1)}

we can reformulate the SOR equation

x_i^{(k)} = x_i^{(k-1)} + \omega \frac{r_{ii}^{(k)}}{a_{ii}}

for calculation purposes as

x_i^{(k)} = (1 - \omega) x_i^{(k-1)} + \frac{\omega}{a_{ii}} \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \right]
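This componentwise form is how SOR is usually coded. A pure-Python sketch follows, applied to the example system solved later in these slides; the stopping rule (largest componentwise change) and iteration cap are illustrative choices.

```python
def sor(A, b, x0, omega, tol=1e-10, max_iter=200):
    """One SOR sweep per k: x_i <- (1 - omega) * x_i + (omega / a_ii) *
    (b_i - sum_{j<i} a_ij x_j_new - sum_{j>i} a_ij x_j_old)."""
    n = len(b)
    x = list(x0)
    for k in range(1, max_iter + 1):
        change = 0.0
        for i in range(n):
            # x[j] already holds new values for j < i, old values for j > i.
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new = (1 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
            change = max(change, abs(x_new - x[i]))
            x[i] = x_new
        if change < tol:
            return x, k
    return x, max_iter

A = [[4.0, 3.0, 0.0], [3.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [24.0, 30.0, -24.0]
x, iters = sor(A, b, [1.0, 1.0, 1.0], omega=1.25)
```
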

Page 43: SOR Method

The SOR Method

A More Computationally Efficient Formulation (Cont'd)
To determine the matrix form of the SOR method, we rewrite

x_i^{(k)} = (1 - \omega) x_i^{(k-1)} + \frac{\omega}{a_{ii}} \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \right]

as

a_{ii} x_i^{(k)} + \omega \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} = (1 - \omega) a_{ii} x_i^{(k-1)} - \omega \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} + \omega b_i

Page 45: SOR Method

The SOR Method

A More Computationally Efficient Formulation (Cont'd)
In vector form, with the splitting A = D − L − U (D the diagonal of A, −L its strictly lower triangular part, −U its strictly upper triangular part), we therefore have

(D - \omega L) x^{(k)} = [(1 - \omega) D + \omega U] x^{(k-1)} + \omega b

from which we obtain:

The SOR Method

x^{(k)} = (D - \omega L)^{-1} [(1 - \omega) D + \omega U] x^{(k-1)} + \omega (D - \omega L)^{-1} b

Page 47: SOR Method

The SOR Method

x^{(k)} = (D - \omega L)^{-1} [(1 - \omega) D + \omega U] x^{(k-1)} + \omega (D - \omega L)^{-1} b

Letting

T_\omega = (D - \omega L)^{-1} [(1 - \omega) D + \omega U]   and   c_\omega = \omega (D - \omega L)^{-1} b

gives the SOR technique the fixed-point form

x^{(k)} = T_\omega x^{(k-1)} + c_\omega
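As a numerical sanity check, the sketch below builds T_ω and c_ω with NumPy for the example system used on the following slides, taking the splitting A = D − L − U, and applies one fixed-point step; the matrix form should reproduce the componentwise SOR sweep exactly. NumPy is assumed to be available.

```python
import numpy as np

A = np.array([[4.0, 3.0, 0.0],
              [3.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([24.0, 30.0, -24.0])
omega = 1.25

# Splitting A = D - L - U: D is the diagonal of A, -L its strictly lower
# triangular part, -U its strictly upper triangular part.
D = np.diag(np.diag(A))
L = -np.tril(A, -1)
U = -np.triu(A, 1)

M = D - omega * L                 # lower triangular, invertible when a_ii != 0
T_omega = np.linalg.solve(M, (1 - omega) * D + omega * U)
c_omega = omega * np.linalg.solve(M, b)

x0 = np.array([1.0, 1.0, 1.0])
x1 = T_omega @ x0 + c_omega       # one SOR step in matrix form
```
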

Page 51: SOR Method

The SOR Method

Example
The linear system Ax = b given by

4x_1 + 3x_2         = 24
3x_1 + 4x_2 -  x_3  = 30
     -  x_2 + 4x_3  = -24

has the solution (3, 4, −5)^t. Compare the iterations from the Gauss-Seidel method and the SOR method with ω = 1.25, using x^{(0)} = (1, 1, 1)^t for both methods.

Page 53: SOR Method

The SOR Method

Solution (1/3)
For each k = 1, 2, ..., the equations for the Gauss-Seidel method are

x_1^{(k)} = -0.75 x_2^{(k-1)} + 6
x_2^{(k)} = -0.75 x_1^{(k)} + 0.25 x_3^{(k-1)} + 7.5
x_3^{(k)} = 0.25 x_2^{(k)} - 6

and the equations for the SOR method with ω = 1.25 are

x_1^{(k)} = -0.25 x_1^{(k-1)} - 0.9375 x_2^{(k-1)} + 7.5
x_2^{(k)} = -0.9375 x_1^{(k)} - 0.25 x_2^{(k-1)} + 0.3125 x_3^{(k-1)} + 9.375
x_3^{(k)} = 0.3125 x_2^{(k)} - 0.25 x_3^{(k-1)} - 7.5

Page 56: SOR Method

The SOR Method: Solution (2/3)

Gauss-Seidel Iterations

k          0    1           2           3           ···   7
x_1^{(k)}  1    5.250000    3.1406250   3.0878906         3.0134110
x_2^{(k)}  1    3.812500    3.8828125   3.9267578         3.9888241
x_3^{(k)}  1   -5.046875   -5.0292969  -5.0183105        -5.0027940

SOR Iterations (ω = 1.25)

k          0    1           2           3           ···   7
x_1^{(k)}  1    6.312500    2.6223145   3.1333027         3.0000498
x_2^{(k)}  1    3.5195313   3.9585266   4.0102646         4.0002586
x_3^{(k)}  1   -6.6501465  -4.6004238  -5.0966863        -5.0003486
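The comparison in these tables, and the iteration counts quoted on the next slide, can be reproduced with a short script. This sketch iterates both sets of update equations until the largest componentwise change drops below a tolerance; the tolerance and cap are illustrative choices, so the exact counts may differ slightly from the text's 7-decimal-place criterion.

```python
def sweep_until_converged(step, x0, tol=5e-8, max_iter=200):
    """Repeat a one-sweep update x -> step(x) until the largest
    componentwise change is below tol; return (x, iterations)."""
    x = list(x0)
    for k in range(1, max_iter + 1):
        x_new = step(x)
        if max(abs(a - c) for a, c in zip(x_new, x)) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

def gauss_seidel_step(x):
    x1 = -0.75 * x[1] + 6
    x2 = -0.75 * x1 + 0.25 * x[2] + 7.5
    x3 = 0.25 * x2 - 6
    return [x1, x2, x3]

def sor_step(x):  # omega = 1.25
    x1 = -0.25 * x[0] - 0.9375 * x[1] + 7.5
    x2 = -0.9375 * x1 - 0.25 * x[1] + 0.3125 * x[2] + 9.375
    x3 = 0.3125 * x2 - 0.25 * x[2] - 7.5
    return [x1, x2, x3]

gs_x, gs_iters = sweep_until_converged(gauss_seidel_step, [1.0, 1.0, 1.0])
sor_x, sor_iters = sweep_until_converged(sor_step, [1.0, 1.0, 1.0])
# SOR with omega = 1.25 reaches the same accuracy in noticeably fewer sweeps.
```
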

Page 58: SOR Method

The SOR Method

Solution (3/3)
For the iterates to be accurate to 7 decimal places, the Gauss-Seidel method requires 34 iterations, as opposed to 14 iterations for the SOR method with ω = 1.25.

Page 61: SOR Method

Outline

1 Residual Vectors & the Gauss-Seidel Method

2 Relaxation Methods (including SOR)

3 Choosing the Optimal Value of ω

4 The SOR Algorithm

Numerical Analysis (Chapter 7) Relaxation Techniques R L Burden & J D Faires 25 / 36

Page 62: SOR Method

Choosing the Optimal Value of ω

An obvious question to ask is how the appropriate value of ω is chosen when the SOR method is used.

Although no complete answer to this question is known for the general n × n linear system, the following results can be used in certain important situations.

Numerical Analysis (Chapter 7) Relaxation Techniques R L Burden & J D Faires 26 / 36

Page 64: SOR Method

Choosing the Optimal Value of ω

Theorem (Kahan)
If aii ≠ 0, for each i = 1, 2, . . . , n, then ρ(Tω) ≥ |ω − 1|. This implies that the SOR method can converge only if 0 < ω < 2.

The proof of this theorem is considered in Exercise 9, Chapter 7 of Burden R. L. & Faires J. D., Numerical Analysis, 9th Ed., Cengage, 2011.

Theorem (Ostrowski-Reich)
If A is a positive definite matrix and 0 < ω < 2, then the SOR method converges for any choice of initial approximate vector x(0).

The proof of this theorem can be found in Ortega, J. M., Numerical Analysis; A Second Course, Academic Press, New York, 1972, 201 pp.

Numerical Analysis (Chapter 7) Relaxation Techniques R L Burden & J D Faires 27 / 36
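Both bounds can be checked numerically for a concrete matrix. The sketch below uses the 3 × 3 matrix from the example that follows, the standard splitting A = D − L − U, and the SOR iteration matrix Tω = (D − ωL)⁻¹[(1 − ω)D + ωU]; it is an illustration of the theorems, not a proof.

```python
import numpy as np

# Check Kahan's bound rho(T_omega) >= |omega - 1| on the 3x3 example matrix.
A = np.array([[4.0, 3.0, 0.0],
              [3.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
D = np.diag(np.diag(A))
L = -np.tril(A, -1)   # strictly lower part (sign convention A = D - L - U)
U = -np.triu(A, 1)    # strictly upper part

def spectral_radius_T(omega):
    """Spectral radius of T_omega = (D - omega L)^(-1) [(1 - omega) D + omega U]."""
    T = np.linalg.solve(D - omega * L, (1 - omega) * D + omega * U)
    return max(abs(np.linalg.eigvals(T)))

for omega in (0.5, 1.0, 1.25, 1.9, 2.5):
    rho = spectral_radius_T(omega)
    # Kahan: rho >= |omega - 1|; outside 0 < omega < 2 this forces rho >= 1.
    print(f"omega = {omega:4}: rho(T_omega) = {rho:.4f}, |omega - 1| = {abs(omega - 1):.4f}")
```

Since A here is positive definite, Ostrowski-Reich predicts ρ(Tω) < 1 for every ω in (0, 2), which the printed values confirm.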

Page 68: SOR Method

Choosing the Optimal Value of ω

Theorem
If A is positive definite and tridiagonal, then ρ(Tg) = [ρ(Tj)]² < 1, and the optimal choice of ω for the SOR method is

ω = 2 / (1 + √(1 − [ρ(Tj)]²))

With this choice of ω, we have ρ(Tω) = ω − 1.

The proof of this theorem can be found in Ortega, J. M., Numerical Analysis; A Second Course, Academic Press, New York, 1972, 201 pp.

Numerical Analysis (Chapter 7) Relaxation Techniques R L Burden & J D Faires 28 / 36

Page 71: SOR Method

The SOR Method

Example
Find the optimal choice of ω for the SOR method for the matrix

A = [  4   3   0 ]
    [  3   4  −1 ]
    [  0  −1   4 ]

Numerical Analysis (Chapter 7) Relaxation Techniques R L Burden & J D Faires 29 / 36

Page 72: SOR Method

The SOR Method

Solution (1/3)
This matrix is clearly tridiagonal, so we can apply the result in the SOR theorem if we can also show that it is positive definite.

Because the matrix is symmetric, the theory tells us that it is positive definite if and only if all of its leading principal submatrices have positive determinants.

This is easily seen to be the case because

det(A) = 24,   det([ 4  3 ; 3  4 ]) = 7   and   det([4]) = 4

Numerical Analysis (Chapter 7) Relaxation Techniques R L Burden & J D Faires 30 / 36
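The leading-principal-minor check (Sylvester's criterion for symmetric matrices) is easy to script; a minimal sketch for this matrix:

```python
import numpy as np

# Sylvester's criterion: a symmetric matrix is positive definite iff every
# leading principal minor det(A[:k, :k]) is positive.
A = np.array([[4.0, 3.0, 0.0],
              [3.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])

minors = [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]
print(minors)   # 4, 7 and 24, up to floating-point rounding
assert all(m > 0 for m in minors)   # hence A is positive definite
```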

Page 75: SOR Method

The SOR Method

Solution (2/3)
We compute

Tj = D⁻¹(L + U)

   = [ 1/4   0     0  ] [  0  −3   0 ]
     [  0   1/4    0  ] [ −3   0   1 ]
     [  0    0    1/4 ] [  0   1   0 ]

   = [  0     −0.75   0    ]
     [ −0.75   0      0.25 ]
     [  0      0.25   0    ]

so that

Tj − λI = [ −λ     −0.75    0    ]
          [ −0.75  −λ       0.25 ]
          [  0      0.25   −λ    ]

Numerical Analysis (Chapter 7) Relaxation Techniques R L Burden & J D Faires 31 / 36

Page 79: SOR Method

The SOR Method

Solution (3/3)
Therefore

det(Tj − λI) = | −λ     −0.75    0    |
               | −0.75  −λ       0.25 |  =  −λ(λ² − 0.625)
               |  0      0.25   −λ    |

Thus

ρ(Tj) = √0.625

and

ω = 2 / (1 + √(1 − [ρ(Tj)]²)) = 2 / (1 + √(1 − 0.625)) ≈ 1.24.

This explains the rapid convergence obtained in the last example when using ω = 1.25.

Numerical Analysis (Chapter 7) Relaxation Techniques R L Burden & J D Faires 32 / 36
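The same numbers drop out of a direct numerical computation. This sketch builds Tj = D⁻¹(L + U) for the example matrix, takes its spectral radius, and evaluates the optimal-ω formula:

```python
import numpy as np

# Jacobi iteration matrix T_j = D^(-1)(L + U) for the example matrix,
# its spectral radius, and the resulting optimal SOR parameter.
A = np.array([[4.0, 3.0, 0.0],
              [3.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
D_inv = np.diag(1.0 / np.diag(A))
L_plus_U = np.diag(np.diag(A)) - A      # off-diagonal part of A, sign flipped
T_j = D_inv @ L_plus_U

rho = max(abs(np.linalg.eigvals(T_j)))            # sqrt(0.625), about 0.7906
omega_opt = 2.0 / (1.0 + np.sqrt(1.0 - rho**2))   # about 1.24
print(rho, omega_opt)
```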

Page 84: SOR Method

Outline

1 Residual Vectors & the Gauss-Seidel Method

2 Relaxation Methods (including SOR)

3 Choosing the Optimal Value of ω

4 The SOR Algorithm

Numerical Analysis (Chapter 7) Relaxation Techniques R L Burden & J D Faires 33 / 36

Page 85: SOR Method

The SOR Algorithm (1/2)

To solveAx = b

given the parameter ω and an initial approximation x(0):

INPUT   the number of equations and unknowns n;
        the entries aij, 1 ≤ i, j ≤ n, of the matrix A;
        the entries bi, 1 ≤ i ≤ n, of b;
        the entries XOi, 1 ≤ i ≤ n, of XO = x(0);
        the parameter ω; tolerance TOL;
        maximum number of iterations N.

OUTPUT  the approximate solution x1, . . . , xn or a message
        that the number of iterations was exceeded.

Numerical Analysis (Chapter 7) Relaxation Techniques R L Burden & J D Faires 34 / 36

Page 87: SOR Method

The SOR Algorithm (2/2)

Step 1  Set k = 1

Step 2  While (k ≤ N) do Steps 3–6:

        Step 3  For i = 1, . . . , n
                set xi = (1 − ω)XOi + (1/aii)[ ω( −∑_{j=1}^{i−1} aij xj − ∑_{j=i+1}^{n} aij XOj + bi ) ]

        Step 4  If ||x − XO|| < TOL then OUTPUT (x1, . . . , xn);
                STOP (The procedure was successful)

        Step 5  Set k = k + 1

        Step 6  For i = 1, . . . , n set XOi = xi

Step 7  OUTPUT (‘Maximum number of iterations exceeded’);
        STOP (The procedure was unsuccessful)

Numerical Analysis (Chapter 7) Relaxation Techniques R L Burden & J D Faires 35 / 36
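The pseudocode above translates almost line for line into Python. A sketch, again assuming the example system from these slides (b = (24, 30, −24) is inferred from the tabulated iterates); names mirror the pseudocode:

```python
import numpy as np

def sor(A, b, XO, omega, TOL=1e-7, N=100):
    """SOR following Steps 1-7 above; XO holds the previous iterate."""
    n = len(b)
    XO = np.asarray(XO, dtype=float).copy()
    k = 1                                            # Step 1
    while k <= N:                                    # Step 2
        x = np.empty(n)
        for i in range(n):                           # Step 3
            s = -A[i, :i] @ x[:i] - A[i, i + 1:] @ XO[i + 1:] + b[i]
            x[i] = (1 - omega) * XO[i] + omega * s / A[i, i]
        if np.linalg.norm(x - XO, np.inf) < TOL:     # Step 4: success
            return x, k
        k += 1                                       # Step 5
        XO = x                                       # Step 6
    raise RuntimeError("Maximum number of iterations exceeded")   # Step 7

A = np.array([[4.0, 3.0, 0.0],
              [3.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([24.0, 30.0, -24.0])
x, k = sor(A, b, XO=np.ones(3), omega=1.25)
print(x, k)   # iterates approach (3, 4, -5)
```

Returning the sweep count k alongside x makes it easy to compare choices of ω, as in the earlier example.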

Page 93: SOR Method

Questions?