
Linear Algebra for Wireless Communications

Lecture 1: Introduction

Ove Edfors
Department of Electroscience
Lund University

Lectures

• Preliminary plan

– About one lecture every 2nd week
– Two lectures before x-mas (this and next week)

• Detailed schedule
– Web pages: http://www.eit.lth.se/course/phd006
– A detailed schedule is available on these pages, but it is subject to changes

Course material

• There is no single textbook in this course!

• The course slides constitute the contents of the course, while the material originates from many sources, e.g.:
– G. Strang. Linear Algebra and Its Applications. (Latest) Fourth edition. Brooks/Cole. (If you are buying one, I recommend this one!)
– G.H. Golub & C.F. van Loan. Matrix Computations. Johns Hopkins.
– T. Kailath. Linear Systems (Appendix). Prentice Hall.
– G. Strang. Introduction to Applied Mathematics. Wellesley-Cambridge Press.
– K. Ogata. Discrete-time Control Systems. Prentice-Hall.
– L.L. Scharf. Statistical Signal Processing. Addison Wesley.
– S.M. Kay. Fundamentals of Statistical Signal Processing (Est. Theory / Det. Theory). Prentice Hall.
– T.M. Cover & J.A. Thomas. Elements of Information Theory. Wiley.

Home assignments

• One set of problems with each lecture.

• To pass the course, solutions must be
– handed in before the next lecture, and
– about 80% correct (in total over the whole course).


Why should you bother?

Because you find things like this in journal papers on wireless communications:

These examples are from (conf. paper): Luc Deneire and Dirk T.M. Slock. A Deterministic Schur Method for Multichannel Blind Identification. 2nd IEEE Workshop on Signal Processing Advances in Wireless Communications, May 9-12, 1999, Annapolis, MD, USA, pp. 275-278.

Why should you bother?

... or stuff like this:

These examples are from: Jeong Geun Kim & Marwan M. Krunz. Bandwidth allocation in wireless networks with guaranteed packet-loss performance. IEEE/ACM Transactions on Networking, vol. 8, no. 3, June 2000.

Why should you bother?

... or stuff like this:

These examples are from: O. Edfors et al. OFDM channel estimation by singular value decomposition. IEEE Transactions on Communications, vol. 46, no. 7, pp. 931-939, July 1998.

Simple examples: Packet transmission over FIR channel

N transmitted data points x_k pass through an FIR channel h_k of length L and are disturbed by additive noise n_k, giving the received signal

$$ y_k = x_k * h_k + n_k = \sum_{n=0}^{L-1} x_{k-n} h_n + n_k $$

Over the "interesting" interval $0 \le k \le N + L - 2$, where there are contributions from the data to the received signal, we have an equivalent matrix model:

$$
\underbrace{\begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ y_{N+L-2} \end{bmatrix}}_{\mathbf{y}}
=
\underbrace{\begin{bmatrix}
h_0     &         &        & 0       \\
h_1     & h_0     &        &         \\
\vdots  & h_1     & \ddots &         \\
h_{L-1} & \vdots  & \ddots & h_0     \\
        & h_{L-1} &        & h_1     \\
        &         & \ddots & \vdots  \\
0       &         &        & h_{L-1}
\end{bmatrix}}_{\mathbf{H}}
\underbrace{\begin{bmatrix} x_0 \\ x_1 \\ \vdots \\ x_{N-1} \end{bmatrix}}_{\mathbf{x}}
+
\underbrace{\begin{bmatrix} n_0 \\ n_1 \\ \vdots \\ n_{N+L-2} \end{bmatrix}}_{\mathbf{n}}
\qquad
\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}
$$

where H is an (N+L-1) x N Toeplitz matrix built from the channel taps.
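
A minimal numpy sketch of this model (my illustration, not from the slides; the sizes and seed are arbitrary). scipy.linalg.toeplitz builds H from its first column and first row, and the noiseless part can be checked against np.convolve:

```python
import numpy as np
from scipy.linalg import toeplitz

N, L = 8, 3                                  # data length and channel length (example values)
rng = np.random.default_rng(0)
x = rng.standard_normal(N)                   # transmitted data points
h = rng.standard_normal(L)                   # FIR channel taps

# (N+L-1) x N convolution matrix: first column (h, 0, ...), first row (h[0], 0, ...)
H = toeplitz(np.r_[h, np.zeros(N - 1)], np.r_[h[0], np.zeros(N - 1)])

n = 0.01 * rng.standard_normal(N + L - 1)    # additive noise
y = H @ x + n                                # matrix model y = Hx + n

# The noiseless part agrees with the linear convolution x * h
assert np.allclose(H @ x, np.convolve(x, h))
```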


Simple examples: Packet transmission over FIR channel (cont.)

With the matrix model y = Hx + n from the previous slide, some fundamental problems are:

– If the channel H and the noise properties are known, we may want to detect the data points x from the received signal y.
– If the data points x and the noise properties are known, we may want to estimate the channel H (under the restrictions given by its structure). Known data points are called pilots.
– If the data points x are known, we may want to estimate both the channel H (under the restrictions given by its structure) and the noise properties.
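
A minimal least-squares sketch of the second problem, pilot-based channel estimation (my illustration under these assumptions, not the slides' method): since convolution commutes, y = x*h + n = Xh + n, where X is the (N+L-1) x L convolution matrix built from the known pilots x, so h can be estimated with np.linalg.lstsq:

```python
import numpy as np
from scipy.linalg import toeplitz

N, L = 8, 3
rng = np.random.default_rng(1)
x = rng.standard_normal(N)                   # known pilot symbols
h = rng.standard_normal(L)                   # unknown channel to be estimated

# Convolution commutes: y = Xh + n, with X the (N+L-1) x L matrix of pilots
X = toeplitz(np.r_[x, np.zeros(L - 1)], np.r_[x[0], np.zeros(L - 1)])
y = X @ h + 0.01 * rng.standard_normal(N + L - 1)

h_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares channel estimate
print(np.round(h - h_hat, 3))                   # small estimation error
```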

Simple examples: Multiple transmit and receive antennas

TX diversity (MISO):

$$ y_1 = \begin{bmatrix} h_{1,1} & h_{1,2} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + n_1 $$

RX diversity (SIMO):

$$ \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} h_{1,1} \\ h_{2,1} \end{bmatrix} x_1 + \begin{bmatrix} n_1 \\ n_2 \end{bmatrix} $$

TX & RX diversity (MIMO):

$$ \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} h_{1,1} & h_{1,2} \\ h_{2,1} & h_{2,2} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} n_1 \\ n_2 \end{bmatrix} $$

Simple examples: Multiple transmit and receive antennas (cont.)

The "general" case with M_T TX antennas and M_R RX antennas:

$$
\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_{M_R} \end{bmatrix}
=
\begin{bmatrix}
h_{1,1} & h_{1,2} & \cdots & h_{1,M_T} \\
h_{2,1} & h_{2,2} & \cdots & h_{2,M_T} \\
\vdots & \vdots & \ddots & \vdots \\
h_{M_R,1} & h_{M_R,2} & \cdots & h_{M_R,M_T}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_{M_T} \end{bmatrix}
+
\begin{bmatrix} n_1 \\ n_2 \\ \vdots \\ n_{M_R} \end{bmatrix}
\qquad
\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}
$$

Some fundamental problems:
– How do we model the channel matrix H?
– How do we model the noise (interference) n?
– How much data can we transmit through such a channel?
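
The slide leaves these as open questions. As a rough numerical preview of the last one (my addition, not stated on this slide): for a known channel, Gaussian noise and equal power allocation, the classical log-det formula C = log2 det(I + (SNR/M_T) H H^H) can be evaluated directly:

```python
import numpy as np

rng = np.random.default_rng(2)
M_R, M_T = 4, 4
# i.i.d. complex Gaussian channel matrix (a common, assumed model)
H = (rng.standard_normal((M_R, M_T)) + 1j * rng.standard_normal((M_R, M_T))) / np.sqrt(2)
snr = 10.0                                   # linear SNR

# MIMO capacity with equal power allocation (bits per channel use)
M = np.eye(M_R) + (snr / M_T) * H @ H.conj().T
C = np.log2(np.linalg.det(M).real)
print(f"capacity ~ {C:.2f} bits/channel use")
```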

A few words about vectors, matrices and some operations


Basic norms

Euclidean norm of an N-vector x:

$$ \|\mathbf{x}\|^2 = \sum_{k=1}^{N} |x_k|^2 $$

Frobenius norm of an M x N matrix A:

$$ \|\mathbf{A}\|_F^2 = \sum_{i=1}^{M} \sum_{j=1}^{N} |a_{i,j}|^2 $$

The Euclidean norm of a vector is a special case of the Frobenius norm of matrices.
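
A quick numpy check of these definitions, and of the "special case" remark for an N x 1 matrix (my illustration, arbitrary sizes and seed):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(5)
A = rng.standard_normal((4, 3))

assert np.isclose(np.linalg.norm(x), np.sqrt(np.sum(np.abs(x) ** 2)))
assert np.isclose(np.linalg.norm(A, "fro"), np.sqrt(np.sum(np.abs(A) ** 2)))
assert np.isclose(np.linalg.norm(x.reshape(-1, 1), "fro"), np.linalg.norm(x))
```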

Basic matrix operations

Matrix multiplication is ASSOCIATIVE:

$$ \mathbf{A}(\mathbf{B}\mathbf{C}) = (\mathbf{A}\mathbf{B})\mathbf{C} $$

Matrix operations are DISTRIBUTIVE:

$$ \mathbf{A}(\mathbf{B} + \mathbf{C}) = \mathbf{A}\mathbf{B} + \mathbf{A}\mathbf{C} $$
$$ (\mathbf{B} + \mathbf{C})\mathbf{D} = \mathbf{B}\mathbf{D} + \mathbf{C}\mathbf{D} $$

Matrix multiplication is NOT COMMUTATIVE:

$$ \mathbf{A}\mathbf{B} \neq \mathbf{B}\mathbf{A} \quad \text{(usually)} $$
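
A two-line numpy demonstration of non-commutativity (my example matrices):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])              # column/row exchange matrix

print(A @ B)                                # [[2, 1], [4, 3]]  (columns of A swapped)
print(B @ A)                                # [[3, 4], [1, 2]]  (rows of A swapped)
print(np.array_equal(A @ B, B @ A))         # False: AB != BA in general
```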

Matrix inverse

A matrix A is said to be invertible if there exists a matrix B such that

$$ \mathbf{A}\mathbf{B} = \mathbf{B}\mathbf{A} = \mathbf{I} \quad \text{(both must be fulfilled!)} $$

There exists at most one such matrix B; it is called the inverse of A and is denoted A^{-1}:

$$ \mathbf{A}^{-1}\mathbf{A} = \mathbf{A}\mathbf{A}^{-1} = \mathbf{I} $$

The product of two invertible matrices is invertible, and its inverse is given by

$$ (\mathbf{A}\mathbf{B})^{-1} = \mathbf{B}^{-1}\mathbf{A}^{-1} \quad \text{(note the change of order!)} $$
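
A numerical check of the reversed order (my illustration; random matrices are invertible with probability one, which this sketch assumes):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# (AB)^{-1} = B^{-1} A^{-1}, note the reversed order
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))
```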

Matrix transpose

$$
\mathbf{A} = \begin{bmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,N} \\
a_{2,1} & a_{2,2} & \cdots & a_{2,N} \\
\vdots & \vdots & \ddots & \vdots \\
a_{M,1} & a_{M,2} & \cdots & a_{M,N}
\end{bmatrix}
\qquad
\mathbf{A}^T = \begin{bmatrix}
a_{1,1} & a_{2,1} & \cdots & a_{M,1} \\
a_{1,2} & a_{2,2} & \cdots & a_{M,2} \\
\vdots & \vdots & \ddots & \vdots \\
a_{1,N} & a_{2,N} & \cdots & a_{M,N}
\end{bmatrix}
$$

Properties:

$$ (\mathbf{A}\mathbf{B})^T = \mathbf{B}^T\mathbf{A}^T $$
$$ (\mathbf{A}^T)^{-1} = (\mathbf{A}^{-1})^T $$


Matrix Hermitian transpose

$$
\mathbf{A}^H = (\mathbf{A}^*)^T = (\mathbf{A}^T)^* = \begin{bmatrix}
a_{1,1}^* & a_{2,1}^* & \cdots & a_{M,1}^* \\
a_{1,2}^* & a_{2,2}^* & \cdots & a_{M,2}^* \\
\vdots & \vdots & \ddots & \vdots \\
a_{1,N}^* & a_{2,N}^* & \cdots & a_{M,N}^*
\end{bmatrix}
$$

where x* denotes the complex conjugate of x and A is the M x N matrix from the previous slide.

Properties:

$$ (\mathbf{A}\mathbf{B})^H = \mathbf{B}^H\mathbf{A}^H $$
$$ (\mathbf{A}^H)^{-1} = (\mathbf{A}^{-1})^H $$
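
In numpy the Hermitian transpose is conjugate-then-transpose; a quick check of the product rule (my illustration with random complex matrices):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
B = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))

AH = A.conj().T                                                # Hermitian transpose A^H
assert np.allclose((A @ B).conj().T, B.conj().T @ A.conj().T)  # (AB)^H = B^H A^H
```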

Inner product of vectors

The inner product of two N-vectors x and y is

Real case:

$$ \mathbf{x}^T\mathbf{y} = \begin{bmatrix} x_1 & x_2 & \cdots & x_N \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{bmatrix} $$

Complex (and real) case:

$$ \mathbf{x}^H\mathbf{y} = \begin{bmatrix} x_1^* & x_2^* & \cdots & x_N^* \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{bmatrix} $$

The Euclidean norm can be expressed using this inner product:

$$ \|\mathbf{x}\|^2 = \mathbf{x}^H\mathbf{x} = \sum_{i=1}^{N} |x_i|^2 $$
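
A numpy sketch of the complex inner product (my illustration); np.vdot conjugates its first argument, which matches the x^H y convention above:

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

inner = np.vdot(x, y)                        # x^H y
assert np.isclose(inner, x.conj() @ y)
assert np.isclose(np.vdot(x, x).real, np.linalg.norm(x) ** 2)  # ||x||^2 = x^H x
```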

Hadamard (aka Schur) product

For two matrices A and B, both of size M x N, the Hadamard product is the elementwise product:

$$
\mathbf{A} \odot \mathbf{B} = \begin{bmatrix}
a_{1,1}b_{1,1} & a_{1,2}b_{1,2} & \cdots & a_{1,N}b_{1,N} \\
a_{2,1}b_{2,1} & a_{2,2}b_{2,2} & \cdots & a_{2,N}b_{2,N} \\
\vdots & \vdots & \ddots & \vdots \\
a_{M,1}b_{M,1} & a_{M,2}b_{M,2} & \cdots & a_{M,N}b_{M,N}
\end{bmatrix}
\quad (M \times N)
$$

Hadamard (aka Schur) product (cont.)

Properties:

$$ \mathbf{A} \odot \mathbf{B} = \mathbf{B} \odot \mathbf{A} $$
$$ (\mathbf{A} \odot \mathbf{B})^T = \mathbf{B}^T \odot \mathbf{A}^T $$
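
In numpy the * operator on equally-shaped arrays is exactly the Hadamard product; a quick check of the two properties (my illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))

assert np.allclose(A * B, B * A)            # commutative
assert np.allclose((A * B).T, B.T * A.T)    # transpose acts elementwise
```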


Kronecker product

For an M x N matrix A and a P x Q matrix B:

$$
\mathbf{A} \otimes \mathbf{B} = \begin{bmatrix}
a_{1,1}\mathbf{B} & a_{1,2}\mathbf{B} & \cdots & a_{1,N}\mathbf{B} \\
a_{2,1}\mathbf{B} & a_{2,2}\mathbf{B} & \cdots & a_{2,N}\mathbf{B} \\
\vdots & \vdots & \ddots & \vdots \\
a_{M,1}\mathbf{B} & a_{M,2}\mathbf{B} & \cdots & a_{M,N}\mathbf{B}
\end{bmatrix}
\quad \big( (MP) \times (NQ) \big)
$$

Kronecker product (cont.)

Properties:

$$ (\mathbf{A} + \mathbf{B}) \otimes \mathbf{C} = \mathbf{A} \otimes \mathbf{C} + \mathbf{B} \otimes \mathbf{C} $$
$$ (\alpha\mathbf{A}) \otimes \mathbf{B} = \mathbf{A} \otimes (\alpha\mathbf{B}) = \alpha(\mathbf{A} \otimes \mathbf{B}) $$
$$ (\mathbf{A} \otimes \mathbf{B}) \otimes \mathbf{C} = \mathbf{A} \otimes (\mathbf{B} \otimes \mathbf{C}) $$
$$ (\mathbf{A} \otimes \mathbf{B})(\mathbf{C} \otimes \mathbf{D}) = \mathbf{A}\mathbf{C} \otimes \mathbf{B}\mathbf{D} $$
$$ (\mathbf{A} \otimes \mathbf{B})^{-1} = \mathbf{A}^{-1} \otimes \mathbf{B}^{-1} $$
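
A numpy check of the mixed-product and inverse properties using np.kron (my illustration, random invertible matrices assumed):

```python
import numpy as np

rng = np.random.default_rng(8)
A, C = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
B, D = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

# Mixed-product property: (A (x) B)(C (x) D) = AC (x) BD
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))
# Inverse: (A (x) B)^{-1} = A^{-1} (x) B^{-1}
assert np.allclose(np.linalg.inv(np.kron(A, B)),
                   np.kron(np.linalg.inv(A), np.linalg.inv(B)))
```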

Matrix trace

The trace of an N x N matrix A is defined as

$$ \operatorname{tr}(\mathbf{A}) = \sum_{i=1}^{N} a_{i,i} \quad \text{(sum of diagonal elements)} $$

Properties:

$$ \operatorname{tr}(\mathbf{A}) = \operatorname{tr}(\mathbf{A}^T) $$
$$ \operatorname{tr}(\mathbf{A} + \mathbf{B}) = \operatorname{tr}(\mathbf{A}) + \operatorname{tr}(\mathbf{B}) $$
$$ \operatorname{tr}(\alpha\mathbf{A}) = \alpha \operatorname{tr}(\mathbf{A}) $$

Determinant

The determinant of an N x N matrix A can be interpreted as the volume of a parallelepiped in R^N where the edges come from the columns (or rows) of A. For N = 3,

$$ \mathbf{A} = \begin{bmatrix} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \end{bmatrix}, \qquad \det \mathbf{A} = \text{this volume}. $$

[Figure: a parallelepiped in (x, y, z) with edge vectors (a_{1,1}, a_{2,1}, a_{3,1})^T, (a_{1,2}, a_{2,2}, a_{3,2})^T and (a_{1,3}, a_{2,3}, a_{3,3})^T.]

The determinant changes sign depending on the order of the columns!


Determinant (cont.)

The determinant can also be interpreted as the change of volume when a linear transformation y = Ax is applied to a body of a certain volume: a body of volume V1 is mapped to a body of volume V2, with

$$ \frac{V_2}{V_1} = \left| \det \mathbf{A} \right| $$

Cofactor expansion of determinant

The determinant of A can be computed by expanding it in the cofactors of the i:th row:

$$ \det \mathbf{A} = a_{i,1}A_{i,1} + a_{i,2}A_{i,2} + \cdots + a_{i,N}A_{i,N} = \sum_{j=1}^{N} a_{i,j}A_{i,j} $$

where the cofactor A_{i,j} is the determinant of the minor M_{i,j} with the correct sign:

$$ A_{i,j} = (-1)^{i+j} \det \mathbf{M}_{i,j} $$

The minor M_{i,j} of A is the matrix formed by removing row i and column j of A.

This can be used to recursively calculate the determinant of any N x N matrix; after N-1 steps we arrive at the scalar case. (The expansion also works along a column j.)
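
A direct Python implementation of this recursion (my illustration; O(N!) cost, so it is only useful for understanding, never in practice):

```python
import numpy as np

def det_cofactor(A: np.ndarray) -> float:
    """Determinant by recursive cofactor expansion along the first row."""
    n = A.shape[0]
    if n == 1:                       # scalar case reached after N-1 expansion steps
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # remove row 0, column j
        total += (-1) ** j * A[0, j] * det_cofactor(minor)     # sign (-1)^{i+j} with i = 1
    return total

A = np.array([[2.0, 1.0, 1.0], [4.0, 1.0, 0.0], [-2.0, 2.0, 1.0]])
assert np.isclose(det_cofactor(A), np.linalg.det(A))           # both give 8
```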

Cramer's rule

The j:th component x_j of

$$ \mathbf{x} = \mathbf{A}^{-1}\mathbf{b} $$

is

$$ x_j = \frac{\det \mathbf{B}_j}{\det \mathbf{A}} $$

where

$$ \mathbf{B}_j = \begin{bmatrix}
a_{1,1} & \cdots & a_{1,j-1} & b_1 & a_{1,j+1} & \cdots & a_{1,N} \\
\vdots & & \vdots & \vdots & \vdots & & \vdots \\
a_{N,1} & \cdots & a_{N,j-1} & b_N & a_{N,j+1} & \cdots & a_{N,N}
\end{bmatrix} $$

The vector b replaces the j:th column of A.
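
A small Python sketch of Cramer's rule (my illustration, for insight only; solving via elimination is far cheaper), reusing the system from the Gaussian elimination example later in this lecture:

```python
import numpy as np

def cramer_solve(A: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Solve Ax = b by Cramer's rule."""
    detA = np.linalg.det(A)
    x = np.empty(len(b))
    for j in range(len(b)):
        Bj = A.copy()
        Bj[:, j] = b                 # b replaces the j:th column of A
        x[j] = np.linalg.det(Bj) / detA
    return x

A = np.array([[2.0, 1.0, 1.0], [4.0, 1.0, 0.0], [-2.0, 2.0, 1.0]])
b = np.array([1.0, -2.0, 7.0])
assert np.allclose(cramer_solve(A, b), np.linalg.solve(A, b))   # x = (-1, 2, 1)
```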

Eigenvalues and eigenvectors

The fundamental equation for the determination of eigenvalues and eigenvectors corresponding to the matrix A is

$$ \mathbf{A}\mathbf{x} = \lambda\mathbf{x} $$

This equation is nonlinear, since it contains the product of two unknowns x and λ. Rewriting it as

$$ (\mathbf{A} - \lambda\mathbf{I})\mathbf{x} = \mathbf{0} $$

we see that the number λ will be an eigenvalue of A, with a corresponding non-zero eigenvector x, if and only if

$$ \det(\mathbf{A} - \lambda\mathbf{I}) = 0 $$

which is the characteristic equation for the matrix A.


Eigenvalues and eigenvectors (cont.)

Each of the following conditions is necessary and sufficient for the number λ to be an eigenvalue of A:

1. There is a nonzero vector x such that Ax = λx.
2. The matrix A - λI is singular.
3. det(A - λI) = 0.

The sum of the N eigenvalues of A equals the sum of the diagonal elements of A:

$$ \sum_{n=1}^{N} \lambda_n = \sum_{n=1}^{N} a_{n,n} = \operatorname{tr}(\mathbf{A}) $$

The product of the N eigenvalues of A equals the determinant of A:

$$ \prod_{n=1}^{N} \lambda_n = \det(\mathbf{A}) $$
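
A quick numpy verification of these two identities (my illustration; a random real matrix generally has complex eigenvalue pairs, whose sum and product are still real up to rounding):

```python
import numpy as np

rng = np.random.default_rng(9)
A = rng.standard_normal((5, 5))
lam = np.linalg.eigvals(A)

assert np.isclose(lam.sum(), np.trace(A))        # sum of eigenvalues = trace
assert np.isclose(lam.prod(), np.linalg.det(A))  # product of eigenvalues = determinant
```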

Matrix types and some of their properties

Diagonal matrices

Identity:

$$ \mathbf{I} = \begin{bmatrix} 1 & & \\ & \ddots & \\ & & 1 \end{bmatrix} $$

– Its own inverse
– All eigenvalues = 1
– Determinant = 1

Diagonal:

$$ \mathbf{D} = \begin{bmatrix} d_1 & & \\ & \ddots & \\ & & d_N \end{bmatrix} $$

– Inverse is a diagonal matrix
– Eigenvalues are on the diagonal
– Determinant = product of diagonal elements

Triangular matrices

Upper triangular:

$$ \mathbf{U} = \begin{bmatrix} * & * & * & * \\ 0 & * & * & * \\ 0 & 0 & * & * \\ 0 & 0 & 0 & * \end{bmatrix} $$

– Inverse is upper triangular
– The product U1 U2 is upper triangular
– Eigenvalues are on the diagonal
– Determinant = product of diagonal elements

Lower triangular:

$$ \mathbf{L} = \begin{bmatrix} * & 0 & 0 & 0 \\ * & * & 0 & 0 \\ * & * & * & 0 \\ * & * & * & * \end{bmatrix} $$

– Inverse is lower triangular
– The product L1 L2 is lower triangular
– Eigenvalues are on the diagonal
– Determinant = product of diagonal elements


Symmetric/Hermitian matrices

Symmetric (A^T = A):
– Inverse is symmetric
– Eigenvalues are real

Skew-symmetric (A^T = -A):
– Inverse is skew-symmetric
– Diagonal elements are zero
– Eigenvalues are imaginary (or zero)

Hermitian (A^H = A):
– Inverse is Hermitian
– Eigenvalues are real

Skew-Hermitian (A^H = -A):
– Inverse is skew-Hermitian
– Diagonal elements are zero
– Eigenvalues are imaginary (or zero)

Toeplitz and circulant matrices

A Toeplitz matrix is a square matrix where each descending diagonal from left to right is constant (elements only depend on the row/col index difference i - j):

$$ \mathbf{A} = \begin{bmatrix}
a_0 & a_{-1} & \cdots & a_{-(N-1)} \\
a_1 & a_0 & \ddots & \vdots \\
\vdots & \ddots & \ddots & a_{-1} \\
a_{N-1} & \cdots & a_1 & a_0
\end{bmatrix} $$

– Has only 2N-1 degrees of freedom
– Efficient numerical algorithms
– Circulant matrices (defined below) commute; general Toeplitz matrices do not

If, in addition, a_{N-k} = a_{-k}, the matrix is called a circulant matrix.
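
A scipy sketch of the two matrix types and the commuting property (my illustration; scipy.linalg.toeplitz and scipy.linalg.circulant build these structures from their first column/row):

```python
import numpy as np
from scipy.linalg import circulant, toeplitz

C1 = circulant(np.array([1.0, 2.0, 3.0, 4.0]))       # circulant from its first column
C2 = circulant(np.array([0.5, -1.0, 2.0, 0.0]))
assert np.allclose(C1 @ C2, C2 @ C1)                 # circulant matrices commute

T1 = toeplitz([1.0, 2.0, 3.0], [1.0, 5.0, 6.0])      # general Toeplitz matrices
T2 = toeplitz([0.0, 1.0, 0.0], [0.0, 2.0, 0.0])
print(np.allclose(T1 @ T2, T2 @ T1))                 # False in general
```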

Hankel matrices

A Hankel matrix is a square matrix where each skew-diagonal is constant (elements only depend on the row/col index sum i + j):

$$ \mathbf{A} = \begin{bmatrix}
a_0 & a_1 & \cdots & a_{N-1} \\
a_1 & a_2 & \cdots & a_N \\
\vdots & \vdots & & \vdots \\
a_{N-1} & a_N & \cdots & a_{2N-2}
\end{bmatrix} $$

– Has only 2N-1 degrees of freedom
– "Upside-down Toeplitz"
– Symmetric
– Efficient numerical algorithms

If, in addition, a_{N+k} = a_k, the matrix is circulant.

Gaussian elimination - a fundamental algorithm


Gaussian elimination

Standard Gaussian elimination on the system

$$ \begin{bmatrix} 2 & 1 & 1 \\ 4 & 1 & 0 \\ -2 & 2 & 1 \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix} = \begin{bmatrix} 1 \\ -2 \\ 7 \end{bmatrix} $$

Row 2 = Row 2 - 2 x Row 1:

$$ \begin{bmatrix} 2 & 1 & 1 \\ 0 & -1 & -2 \\ -2 & 2 & 1 \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix} = \begin{bmatrix} 1 \\ -4 \\ 7 \end{bmatrix} $$

Row 3 = Row 3 + Row 1:

$$ \begin{bmatrix} 2 & 1 & 1 \\ 0 & -1 & -2 \\ 0 & 3 & 2 \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix} = \begin{bmatrix} 1 \\ -4 \\ 8 \end{bmatrix} $$

Row 3 = Row 3 + 3 x Row 2:

$$ \begin{bmatrix} 2 & 1 & 1 \\ 0 & -1 & -2 \\ 0 & 0 & -4 \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix} = \begin{bmatrix} 1 \\ -4 \\ -4 \end{bmatrix} $$

The system is now upper triangular and can be solved by back substitution.
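
A numpy sketch of exactly these row operations, followed by back substitution (my illustration of the slide's example):

```python
import numpy as np

# Augmented system from the slide: 2u+v+w = 1, 4u+v = -2, -2u+2v+w = 7
M = np.array([[2.0, 1.0, 1.0, 1.0],
              [4.0, 1.0, 0.0, -2.0],
              [-2.0, 2.0, 1.0, 7.0]])

M[1] -= 2 * M[0]        # Row 2 = Row 2 - 2 x Row 1
M[2] += M[0]            # Row 3 = Row 3 + Row 1
M[2] += 3 * M[1]        # Row 3 = Row 3 + 3 x Row 2
print(M)                # [[2,1,1,1], [0,-1,-2,-4], [0,0,-4,-4]]

# Back substitution on the triangular system
A, b = M[:, :3], M[:, 3]
x = np.zeros(3)
for i in range(2, -1, -1):
    x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
print(x)                # [-1.  2.  1.]  i.e. u = -1, v = 2, w = 1
```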

Gaussian elimination (cont.)

The three elementary row operations can be expressed as a series of lower triangular matrix multiplications:

Row 2 = Row 2 - 2 x Row 1:

$$ \mathbf{E}_{2,1} = \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} $$

Row 3 = Row 3 + Row 1:

$$ \mathbf{E}_{3,1} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix} $$

Row 3 = Row 3 + 3 x Row 2:

$$ \mathbf{E}_{3,2} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 3 & 1 \end{bmatrix} $$

Gaussian elimination (cont.)

Applying the elementary Gauss transformation matrices to our coefficient matrix

$$ \mathbf{A} = \begin{bmatrix} 2 & 1 & 1 \\ 4 & 1 & 0 \\ -2 & 2 & 1 \end{bmatrix} $$

gives

$$ \mathbf{E}_{3,2}\mathbf{E}_{3,1}\mathbf{E}_{2,1}\mathbf{A} =
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 3 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 2 & 1 & 1 \\ 4 & 1 & 0 \\ -2 & 2 & 1 \end{bmatrix}
= \begin{bmatrix} 2 & 1 & 1 \\ 0 & -1 & -2 \\ 0 & 0 & -4 \end{bmatrix} $$

The application order of these matrices is important!

Gaussian elimination (cont.)

What's so special with elementary Gauss transformation matrices? They have the following structure:

$$ \mathbf{E}_{i,j} = \begin{bmatrix}
1 & & & & \\
& \ddots & & & \\
& & \ddots & & \\
& e_{i,j} & & \ddots & \\
& & & & 1
\end{bmatrix}
\qquad
\mathbf{E}_{i,j}^{-1} = \begin{bmatrix}
1 & & & & \\
& \ddots & & & \\
& & \ddots & & \\
& -e_{i,j} & & \ddots & \\
& & & & 1
\end{bmatrix} $$

i.e., an identity matrix with a single non-zero element e_{i,j} below the diagonal, at position (i,j). They are always invertible, and the inverse is very simple to calculate: just negate e_{i,j}.


Gaussian elimination (cont.)

Let's take a closer look at our Gaussian elimination example:

$$ \mathbf{E}_{3,2}\mathbf{E}_{3,1}\mathbf{E}_{2,1}\mathbf{A} = \begin{bmatrix} 2 & 1 & 1 \\ 0 & -1 & -2 \\ 0 & 0 & -4 \end{bmatrix} $$

Collecting the three elementary matrices into one:

$$ \mathbf{E}_{3,2}\mathbf{E}_{3,1}\mathbf{E}_{2,1} = \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -5 & 3 & 1 \end{bmatrix} $$

The product of lower triangular matrices is lower triangular.

Gaussian elimination (cont.)

Multiply by the inverse from the left! The inverse of a lower triangular matrix is lower triangular:

$$ \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -5 & 3 & 1 \end{bmatrix}^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & -3 & 1 \end{bmatrix} $$

so that

$$ \begin{bmatrix} 2 & 1 & 1 \\ 4 & 1 & 0 \\ -2 & 2 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & -3 & 1 \end{bmatrix} \begin{bmatrix} 2 & 1 & 1 \\ 0 & -1 & -2 \\ 0 & 0 & -4 \end{bmatrix} $$

We have demonstrated that this A can be decomposed into a lower and an upper triangular matrix:

$$ \mathbf{A} = \mathbf{L}\mathbf{U} $$

This is an LU decomposition of A.
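
A compact Python implementation of this procedure (my sketch; no row exchanges, so it assumes all pivots are non-zero, as discussed on the next slides):

```python
import numpy as np

def lu_no_pivot(A: np.ndarray):
    """LU decomposition by Gaussian elimination without row exchanges."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]   # multiplier that eliminates U[i, j]
            U[i] -= L[i, j] * U[j]
    return L, U

A = np.array([[2.0, 1.0, 1.0], [4.0, 1.0, 0.0], [-2.0, 2.0, 1.0]])
L, U = lu_no_pivot(A)
print(L)                     # [[1,0,0], [2,1,0], [-1,-3,1]]
print(U)                     # [[2,1,1], [0,-1,-2], [0,0,-4]]
assert np.allclose(L @ U, A)
```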

Gaussian elimination (cont.)

Definition: The pivot elements in Gaussian elimination are the diagonal elements we use to eliminate the sub-diagonal elements. If a matrix can be LU decomposed, the pivot elements are those on the diagonal of U.

Property: A matrix A can be decomposed as the product LU of a lower triangular matrix L and an upper triangular matrix U, as long as no pivot elements are zero.

The LDU decomposition

The LU decomposition of a matrix A is slightly "unsymmetric" in the sense that L has ones on its diagonal, whereas U does not (in general):

$$
\mathbf{A} = \mathbf{L}\mathbf{U} = \begin{bmatrix}
1 & 0 & \cdots & 0 \\
l_{2,1} & 1 & \ddots & \vdots \\
\vdots & \ddots & \ddots & 0 \\
l_{N,1} & l_{N,2} & \cdots & 1
\end{bmatrix}
\begin{bmatrix}
u_{1,1} & u_{1,2} & \cdots & u_{1,N} \\
0 & u_{2,2} & \ddots & \vdots \\
\vdots & \ddots & \ddots & u_{N-1,N} \\
0 & \cdots & 0 & u_{N,N}
\end{bmatrix}
$$

A more symmetric version is the LDU decomposition

$$
\mathbf{A} = \mathbf{L}\mathbf{D}\mathbf{U} = \begin{bmatrix}
1 & 0 & \cdots & 0 \\
l_{2,1} & 1 & \ddots & \vdots \\
\vdots & \ddots & \ddots & 0 \\
l_{N,1} & l_{N,2} & \cdots & 1
\end{bmatrix}
\begin{bmatrix}
d_1 & & \\
& \ddots & \\
& & d_N
\end{bmatrix}
\begin{bmatrix}
1 & \tilde{u}_{1,2} & \cdots & \tilde{u}_{1,N} \\
0 & 1 & \ddots & \vdots \\
\vdots & \ddots & \ddots & \tilde{u}_{N-1,N} \\
0 & \cdots & 0 & 1
\end{bmatrix}
$$

where we have (from the LU decomposition) $d_i = u_{i,i}$ and $\tilde{u}_{i,j} = u_{i,j}/u_{i,i} = u_{i,j}/d_i$.
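
A numpy sketch of this rescaling, applied to the L and U from the example above (my illustration):

```python
import numpy as np

L = np.array([[1.0, 0.0, 0.0], [2.0, 1.0, 0.0], [-1.0, -3.0, 1.0]])
U = np.array([[2.0, 1.0, 1.0], [0.0, -1.0, -2.0], [0.0, 0.0, -4.0]])

d = np.diag(U)                       # pivot elements d_i = u_ii
D = np.diag(d)
U_tilde = U / d[:, None]             # divide row i by d_i -> ones on the diagonal

assert np.allclose(L @ D @ U_tilde, L @ U)   # A = LU = LDU
```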


The LDU decomposition (cont.)

Property: If a matrix A can be LDU decomposed, then the three matrices L, D and U are unique!

Property: If a matrix A is symmetric and can be LDU decomposed, then the upper triangular matrix U is the transpose of the lower triangular matrix L, and

$$ \mathbf{A} = \mathbf{L}\mathbf{D}\mathbf{L}^T $$

Gaussian elimination (cont.)

Will Gaussian elimination always work? If a pivot element is zero, we are in more or less trouble!

Less trouble: pivot element zero, but there is a non-zero element further down the column. Can be fixed by a row exchange!

$$ \begin{bmatrix} 2 & 1 & 1 & 2 \\ 0 & 0 & 3 & 1 \\ 0 & 0 & 1 & 2 \\ 0 & 3 & 4 & 2 \end{bmatrix} $$

More trouble: pivot element zero and all elements below it are zero. This matrix is singular and can't be fixed by a row exchange!

$$ \begin{bmatrix} 2 & 1 & 1 & 2 \\ 0 & 0 & 3 & 1 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 4 & 2 \end{bmatrix} $$

Gaussian elimination (cont.)

Row exchanges are performed by permutation matrices. To exchange rows j and k, multiply by P_{j,k}: an identity matrix in which the ones at positions (j,j) and (k,k) have been moved to positions (j,k) and (k,j):

$$ \mathbf{P}_{j,k} = \begin{bmatrix}
1 & & & & & & \\
& \ddots & & & & & \\
& & 0 & \cdots & 1 & & \\
& & \vdots & \ddots & \vdots & & \\
& & 1 & \cdots & 0 & & \\
& & & & & \ddots & \\
& & & & & & 1
\end{bmatrix} $$

P_{j,k} is its own inverse.

Gaussian elimination (cont.)

Can row permutations save our LU/LDU decomposition?

Assume that we need the following elementary Gauss transformations and row exchanges to complete the Gaussian elimination:

$$ \mathbf{E}_{3,2}\mathbf{P}_{2,3}\mathbf{E}_{3,1}\mathbf{E}_{2,1}\mathbf{A} = \mathbf{U} \quad \text{(upper triangular)} $$

The product E_{3,2}P_{2,3}E_{3,1}E_{2,1} is no longer lower triangular, due to the permutation matrix P_{2,3}. This will not lead to an LU factorization, since the inverse of E_{3,2}P_{2,3}E_{3,1}E_{2,1} cannot be lower triangular.

A partial solution: make the necessary row exchanges before the Gaussian elimination.


Gaussian elimination (cont.)

In the non-singular case, there is a permutation matrix P that reorders the rows of A, so that PA admits a decomposition with non-zero pivots:

$$ \mathbf{P}\mathbf{A} = \mathbf{L}\mathbf{U} $$

This permutation matrix P is the product of the permutation matrices P_{j,k} required to complete the Gaussian elimination of A.
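
A quick scipy check of PA = LU (my illustration; note scipy.linalg.lu uses the convention A = P L U, so the slide's P is the transpose of the returned permutation matrix):

```python
import numpy as np
from scipy.linalg import lu

# A matrix that needs a row exchange (zero in the (1,1) pivot position)
A = np.array([[0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0],
              [2.0, 0.0, 1.0]])

Pmat, L, U = lu(A)           # scipy convention: A = Pmat @ L @ U
P = Pmat.T                   # so that P A = L U, as on the slide
assert np.allclose(P @ A, L @ U)
print(np.diag(U))            # non-zero pivots after the row reordering
```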

Gaussian elimination (cont.)

• In theory, an exchange of rows is only necessary when we encounter a zero pivot element.
• In practice, we obtain better numerical results if we also make row exchanges when pivot elements are close to zero.