EE603 Class Notes Version 1 John Stensby


Chapter 13 Series Representation of Random Processes

Let X(t) be a deterministic, generally complex-valued, signal defined on [0, T] with

    \int_0^T |X(t)|^2 \, dt < \infty .    (13-1)

Let φk(t), k ≥ 0, be a complete orthonormal basis for the vector space of complex-valued, square

integrable functions on [0, T]. The functions φk satisfy

    \int_0^T \phi_k(t)\,\phi_j^*(t)\,dt = \begin{cases} 1, & k = j \\ 0, & k \neq j . \end{cases}    (13-2)

Then, we can expand X(t) in the generalized Fourier series

    X(t) = \sum_{m=1}^{\infty} x_m\,\phi_m(t), \qquad x_m = \int_0^T X(t)\,\phi_m^*(t)\,dt ,    (13-3)

for t in the interval [0,T]. In (13-3), convergence is not pointwise. Instead, Equation (13-3)

converges in the mean square sense. That is, we have

    \lim_{N \to \infty} \int_0^T \Big| X(t) - \sum_{k=1}^{N} x_k\,\phi_k(t) \Big|^2 dt = 0 .    (13-4)

It is natural to ask if similar results can be obtained for finite power, m.s. Riemann

integrable random processes. The answer is yes. Obviously, for random process X(t), the

expansion coefficients x_k will be random variables. In general, the coefficients x_k will be pair-wise correlated. However, by selecting the basis functions as the eigenfunctions of a certain integral operator, it is possible to ensure that the coefficients are pair-wise uncorrelated, a highly desirable

condition that simplifies many applications. When the basis functions are chosen to make the

coefficients uncorrelated, the series representation of X(t) is known as a Karhunen-Loève

expansion. These types of expansions have many applications in the areas of communication and

control.
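The mean-square convergence asserted in (13-4) is easy to watch numerically. The sketch below is an illustrative check, not part of the notes: it expands a deterministic signal in the sine basis φ_n(t) = √(2/T) sin(nπt/T), one complete orthonormal basis of L²[0,T], and evaluates the error integral in (13-4) for growing N. The test signal X(t) = t(T − t) and the grid size are assumed choices.

```python
import numpy as np

# Check (13-3)/(13-4): the mean-square error of the truncated generalized
# Fourier series falls toward zero. Basis and test signal are assumed choices.
T = 1.0
t = np.linspace(0.0, T, 2001)
dt = t[1] - t[0]
X = t * (T - t)                        # illustrative square-integrable signal

def phi(n):
    return np.sqrt(2.0 / T) * np.sin(n * np.pi * t / T)

def ms_error(N):
    approx = np.zeros_like(t)
    for n in range(1, N + 1):
        xn = np.sum(X * phi(n)) * dt   # x_n = \int_0^T X(t) phi_n(t) dt
        approx += xn * phi(n)
    return float(np.sum((X - approx) ** 2) * dt)

errors = [ms_error(N) for N in (1, 5, 20)]
print(errors)  # decreasing toward zero
```

Because the coefficients are inner products against an orthonormal family, each added term can only remove energy from the residual, which is what the decreasing error sequence shows.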

Some Important Properties of the Autocorrelation Function

Random process X(t) has an autocorrelation function Γ(t1,t2) which we assume is

continuous on [0, T]×[0, T]. Note that Γ is Hermitian; that is, the function satisfies Γ(t1,t2) =

Γ*(t2,t1). Also, it is nonnegative definite, a result that is shown easily. Let f(t) be any function

defined on the interval [0, T]. Then, we can define the random variable

    x_f = \int_0^T X(t)\,f(t)\,dt .    (13-5)

The mean of xf is

    E[x_f] = \int_0^T m(t)\,f(t)\,dt ,

a result that is zero under the working assumption that m(t) = E[X(t)] = 0. The variance of xf is

    \mathrm{Var}[x_f] = E\Big[ \int_0^T f(t)X(t)\,dt \int_0^T X^*(\tau)f^*(\tau)\,d\tau \Big]
                     = \int_0^T \!\! \int_0^T f(t)\,E[X(t)X^*(\tau)]\,f^*(\tau)\,dt\,d\tau
                     = \int_0^T \!\! \int_0^T f(t)\,\Gamma(t,\tau)\,f^*(\tau)\,dt\,d\tau .    (13-6)

Now, the variance of a random variable cannot be negative, so we conclude


    \int_0^T \!\! \int_0^T f(t)\,\Gamma(t,\tau)\,f^*(\tau)\,dt\,d\tau \ge 0    (13-7)

for arbitrary function f(t), 0 ≤ t ≤ T. Condition (13-7) implies that autocorrelation function

Γ(t1,t2) is nonnegative definite. In most applications, the autocorrelation function is positive

definite in that

    \int_0^T \!\! \int_0^T f(t)\,\Gamma(t,\tau)\,f^*(\tau)\,dt\,d\tau > 0    (13-8)

for arbitrary functions f(t) that are not identically zero.
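The nonnegative-definiteness condition (13-7) can be probed numerically by sampling the kernel on a grid and evaluating the discretized quadratic form for many test functions. The sketch below is illustrative only; Γ(t₁,t₂) = 2D min(t₁,t₂), the Wiener-process autocorrelation used later in Example 13-1, is an assumed concrete kernel, and the test functions f are random.

```python
import numpy as np

# Discretized version of (13-7): f^T G f (dt)^2 >= 0 for every test vector f,
# since G samples a nonnegative definite kernel. Kernel choice is assumed.
rng = np.random.default_rng(0)
T, D, n = 1.0, 0.5, 200
t = np.linspace(0.0, T, n)
dt = t[1] - t[0]
G = 2.0 * D * np.minimum.outer(t, t)     # samples of Gamma(t_i, t_j)

forms = [float(f @ G @ f) * dt**2 for f in rng.standard_normal((50, n))]
print(min(forms))  # >= 0 up to roundoff
```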

We can define the linear operator A : L2[0,T] → L2[0,T] by the formula

    A[x](t) \equiv \int_0^T \Gamma(t,\tau)\,x(\tau)\,d\tau    (13-9)

(recall that L2[0,T] is the vector space of square integrable functions on [0,T]). Continuous,

Hermitian, nonnegative definite autocorrelation function Γ forms the kernel of linear operator A.

In the world of mathematics, A[·] is a commonly-used Hilbert-Schmidt operator, and it is an

example of a compact, self-adjoint linear operator (for definitions of these terms see the appendix

of R. Ash, Information Theory, the book An Introduction to Hilbert Space, by N. Young, or

almost any book on Hilbert spaces and/or functional analysis).

Eigenfunctions and Eigenvalues of Linear Operator A

The eigenfunctions φk and eigenvalues λk of linear operator A satisfy A[φk(t)] = λkφk(t)

which is the same as

    \lambda_k\,\phi_k(t) = \int_0^T \Gamma(t,\tau)\,\phi_k(\tau)\,d\tau .    (13-10)

In what follows, we assume that kernel Γ(t,τ) is


a) Hermitian (i.e. Γ(t,τ) = Γ∗(τ,t)),

b) at least nonnegative definite (i.e. (13-7) holds),

c) continuous on [0, T]×[0, T],

d) satisfies \int_0^T \!\! \int_0^T |\Gamma(t,\tau)|^2\,dt\,d\tau < \infty (this is a consequence of the continuity condition c).

Much is known about the eigenfunctions and eigenvalues of linear operator A. We state a number

of properties of the eigenvectors/eigenvalues. Proofs that are not given here can be found in the

references cited above.

1. For a Hermitian, nonnegative definite, continuous kernel Γ(t,τ), there exists at least one square-integrable eigenfunction and one nonzero eigenvalue.

2. It is obvious that eigenfunctions are defined up to a multiplicative constant. So, we normalize

them according to (13-2).

3. If φ1(t) and φ2(t) are eigenfunctions corresponding to the same eigenvalue λ, then αφ1(t) +

βφ2(t) is an eigenfunction corresponding to λ.

4. Distinct eigenvalues correspond to eigenfunctions that are orthogonal.

5. The eigenvalues are countable (i.e., a 1-1 correspondence can be established between the

eigenvalues and the integers). Furthermore, the eigenvalues are bounded. In fact, each

eigenvalue λk must satisfy the inequality

    \inf_{\|f\|=1} \int_0^T \!\! \int_0^T f(t)\Gamma(t,\tau)f^*(\tau)\,dt\,d\tau \;\le\; \lambda_k \;\le\; \sup_{\|f\|=1} \int_0^T \!\! \int_0^T f(t)\Gamma(t,\tau)f^*(\tau)\,dt\,d\tau \;<\; \infty .    (13-11)

6. Every nonzero eigenvalue has a finite-dimensional eigenspace. That is, there are a finite

number of linearly independent eigenfunctions that correspond to a given eigenvalue (φk, 1 ≤ k ≤

n, are linearly independent if α1φ1 + α2φ2 + … + αnφn = 0 implies that α1 = α2 = … = αn = 0).

7. The eigenfunctions form a complete orthonormal basis of the vector space L2[0,T], the set of

all square integrable functions on [0, T]. If Γ is not positive definite, there is a zero eigenvalue,


and you must include its orthonormalized eigenfunction(s) to get a complete orthonormal basis of

L2[0,T] (use the Gram-Schmidt procedure here).

8. The eigenvalues are nonnegative. For a positive definite kernel Γ(t,τ), the eigenvalues are

positive. To establish this claim, use (13-8) and (13-2) and write

    \lambda_i = \lambda_i \int_0^T \phi_i(t)\,\phi_i^*(t)\,dt
              = \int_0^T \Big[ \int_0^T \Gamma(t,\tau)\,\phi_i(\tau)\,d\tau \Big]\,\phi_i^*(t)\,dt
              = \int_0^T \!\! \int_0^T \phi_i^*(t)\,\Gamma(t,\tau)\,\phi_i(\tau)\,d\tau\,dt \;\ge\; 0 .    (13-12)

This result is strictly positive if kernel Γ(t,τ) is positive definite.

9. The sum of the eigenvalues is the expected value of the process energy in the interval [0, T].

That is

    E\Big[ \int_0^T |X(t)|^2\,dt \Big] = \int_0^T \Gamma(t,t)\,dt = \sum_{k=1}^{\infty} \lambda_k .    (13-13)
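The trace identity (13-13) can be checked on a discretized kernel. The sketch below is an illustrative check, not part of the notes: Γ(t₁,t₂) = 2D min(t₁,t₂) (the Wiener kernel of Example 13-1) is an assumed concrete choice, and Γ(tᵢ,tⱼ)·dt is the usual Nyström matrix approximation of the operator A in (13-9).

```python
import numpy as np

# Check (13-13): the eigenvalues of the discretized operator sum to the
# discretized \int_0^T Gamma(t,t) dt. Kernel and grid size are assumed.
T, D, n = 1.0, 0.5, 400
dt = T / n
t = (np.arange(n) + 0.5) * dt            # midpoint grid on [0, T]
G = 2.0 * D * np.minimum.outer(t, t)

lam = np.linalg.eigvalsh(G * dt)         # approximate operator eigenvalues
trace_int = float(np.sum(2.0 * D * t) * dt)   # \int_0^T Gamma(t,t) dt
print(float(np.sum(lam)), trace_int)     # both near D*T^2 = 0.5
```

The largest computed eigenvalue also agrees with the analytic λ₁ = 2DT²/((½)²π²) derived later in Example 13-1, which is a useful sanity check on the discretization.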

With items 10 through 15, we want to establish Mercer’s theorem. This theorem states

that you can represent the autocorrelation function Γ(t,τ) by the expansion

    \Gamma(t,\tau) = \sum_{k=1}^{\infty} \lambda_k\,\phi_k(t)\,\phi_k^*(\tau) .    (13-14)

We will not give a rigorous proof of this result, but we will “come close”.

10. Let φ1(t) and λ1 be an eigenfunction and eigenvalue pair for kernel Γ(t,τ), the nonnegative

definite autocorrelation of random process X. Then


    \Gamma_1(t,\tau) \equiv \Gamma(t,\tau) - \lambda_1\,\phi_1(t)\,\phi_1^*(\tau)    (13-15)

is the nonnegative-definite autocorrelation of the random process

    X_1(t) \equiv X(t) - \phi_1(t) \int_0^T X(\tau)\,\phi_1^*(\tau)\,d\tau .    (13-16)

To show this, first compute the intermediate result

    E[X_1(t)X_1^*(\tau)]
      = E\Big[ \Big( X(t) - \phi_1(t)\!\int_0^T\! X(s_1)\phi_1^*(s_1)\,ds_1 \Big)\Big( X^*(\tau) - \phi_1^*(\tau)\!\int_0^T\! X^*(s_2)\phi_1(s_2)\,ds_2 \Big) \Big]
      = \Gamma(t,\tau) - \phi_1^*(\tau)\!\int_0^T\! \Gamma(t,s_2)\,\phi_1(s_2)\,ds_2 - \phi_1(t)\!\int_0^T\! \Gamma(s_1,\tau)\,\phi_1^*(s_1)\,ds_1
        + \phi_1(t)\,\phi_1^*(\tau)\!\int_0^T \!\! \int_0^T\! \Gamma(s_1,s_2)\,\phi_1^*(s_1)\,\phi_1(s_2)\,ds_1\,ds_2 .    (13-17)

Use Γ(t,τ) = Γ∗(τ,t), and take the complex conjugate of the eigenfunction relationship to obtain

    \lambda_1\,\phi_1^*(\tau) = \int_0^T \Gamma(s,\tau)\,\phi_1^*(s)\,ds .    (13-18)

With (13-18), the two cross terms on the right-hand-side of (13-17) become

    -\phi_1^*(\tau)\int_0^T \Gamma(t,s)\,\phi_1(s)\,ds = -\phi_1(t)\int_0^T \Gamma(s,\tau)\,\phi_1^*(s)\,ds = -\lambda_1\,\phi_1(t)\,\phi_1^*(\tau) .    (13-19)


On the right-hand-side of (13-17), the double integral can be evaluated as

    \int_0^T \!\! \int_0^T \Gamma(s_1,s_2)\,\phi_1^*(s_1)\,\phi_1(s_2)\,ds_1\,ds_2
      = \int_0^T \phi_1^*(s_1)\Big[ \int_0^T \Gamma(s_1,s_2)\,\phi_1(s_2)\,ds_2 \Big]\,ds_1
      = \int_0^T \phi_1^*(s_1)\,\lambda_1\,\phi_1(s_1)\,ds_1 = \lambda_1 .    (13-20)

Finally, use (13-19) and (13-20) in (13-17) to obtain

    E[X_1(t)X_1^*(\tau)] = \Gamma(t,\tau) - \lambda_1\,\phi_1(t)\,\phi_1^*(\tau) ,    (13-21)

and this establishes the validity of (13-15).

11. As defined by (13-15), Γ1(t,τ) may be zero for all t, τ. If not, Γ1(t,τ) can be used as the

kernel of integral equation (13-10). This reformulated operator equation has a “new”

eigenfunction φ2(t) and “new” nonzero eigenvalue λ2 (this follows from Property #1 above). They

can be used to define the “new” nonnegative definite autocorrelation function

    \Gamma_2(t,\tau) \equiv \Gamma_1(t,\tau) - \lambda_2\,\phi_2(t)\,\phi_2^*(\tau) .    (13-22)

Furthermore, the “new” eigenfunction φ2(t) is orthogonal to the “old” eigenfunction φ1(t). That

Γ2 is nonnegative definite follows immediately from application of Property 10 with Γ replaced by

Γ1. That φ1(t)⊥φ2(t) follows from an argument that starts by noting

    \lambda_2\,\phi_2(t) = \int_0^T \Gamma_1(t,\tau)\,\phi_2(\tau)\,d\tau .    (13-23)

Plug (13-15) into (13-23) and obtain


    \lambda_2\,\phi_2(t) = \int_0^T \Gamma(t,\tau)\,\phi_2(\tau)\,d\tau - \lambda_1\,\phi_1(t)\int_0^T \phi_1^*(\tau)\,\phi_2(\tau)\,d\tau .    (13-24)

Multiply both sides of this equation by φ1*(t) and integrate to obtain

    \lambda_2 \int_0^T \phi_1^*(t)\,\phi_2(t)\,dt
      = \int_0^T \!\! \int_0^T \phi_1^*(t)\,\Gamma(t,\tau)\,\phi_2(\tau)\,d\tau\,dt - \lambda_1 \int_0^T |\phi_1(t)|^2\,dt \cdot \int_0^T \phi_1^*(\tau)\,\phi_2(\tau)\,d\tau
      = \int_0^T \phi_2(\tau)\Big[ \int_0^T \Gamma(t,\tau)\,\phi_1^*(t)\,dt \Big]\,d\tau - \lambda_1 \int_0^T \phi_1^*(\tau)\,\phi_2(\tau)\,d\tau .    (13-25)

Use (13-18) (which results from the Hermitian symmetry of Γ) to evaluate the term in the bracket

on the right-hand-side of Equation (13-25). This evaluation results in

    \lambda_2 \int_0^T \phi_1^*(t)\,\phi_2(t)\,dt
      = \int_0^T \phi_2(\tau)\Big[ \int_0^T \Gamma(t,\tau)\,\phi_1^*(t)\,dt \Big]\,d\tau - \lambda_1 \int_0^T \phi_1^*(\tau)\,\phi_2(\tau)\,d\tau
      = \int_0^T \phi_2(\tau)\,\lambda_1^*\,\phi_1^*(\tau)\,d\tau - \lambda_1 \int_0^T \phi_1^*(\tau)\,\phi_2(\tau)\,d\tau
      = (\lambda_1^* - \lambda_1) \int_0^T \phi_1^*(\tau)\,\phi_2(\tau)\,d\tau .    (13-26)

Since λ1* - λ1 = 0 (by Property 8, the eigenvalues are real valued), we have

    \lambda_2 \int_0^T \phi_1^*(t)\,\phi_2(t)\,dt = 0 .    (13-27)

Since λ2 ≠ 0, we conclude that φ1(t)⊥φ2(t), as claimed.♥ In addition to being an eigenfunction-

eigenvalue pair for kernel Γ1, φ2(t) and λ2 are an eigenfunction and eigenvalue, respectively, for

kernel Γ (as can be seen from (13-24) and the fact that φ1⊥φ2).

12. Clearly, as long as the resulting autocorrelation function is nonzero, the process outlined in

Property 11 can be repeated. After N such repetitions, we have orthonormal eigenfunctions φ1,


… φN and nonzero eigenvalues λ1, … , λN. Furthermore, the Nth-stage autocorrelation function is

    \Gamma_N(t,\tau) \equiv \Gamma(t,\tau) - \sum_{k=1}^{N} \lambda_k\,\phi_k(t)\,\phi_k^*(\tau) .    (13-28)
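The deflation procedure of Properties 10 through 12 can be sketched numerically: find the largest eigenpair of the (discretized) kernel, subtract λ₁φ₁φ₁*, and repeat on Γ₁, Γ₂, and so on. This is an illustrative implementation, not part of the notes; Γ(t₁,t₂) = 2D min(t₁,t₂) (Example 13-1) is an assumed kernel with known eigenvalues λ_n = 2DT²/((n−½)²π²), and power iteration is an assumed way of extracting the top eigenpair.

```python
import numpy as np

# Deflation sketch for Properties 10-12; kernel and method are assumed choices.
T, D, n = 1.0, 0.5, 300
dt = T / n
t = (np.arange(n) + 0.5) * dt
A = 2.0 * D * np.minimum.outer(t, t) * dt     # Nystrom matrix for operator A

def top_eigenpair(M):
    v = np.ones(M.shape[0])
    for _ in range(500):                      # power iteration
        v = M @ v
        v /= np.sqrt(np.sum(v**2) * dt)       # normalize in L2[0,T], cf. (13-2)
    return float((v @ (M @ v)) * dt), v       # Rayleigh quotient and eigenvector

lams = []
for _ in range(3):
    lam, v = top_eigenpair(A)
    lams.append(lam)
    A = A - lam * np.outer(v, v) * dt         # deflation, cf. (13-28)
print(lams)
```

Each deflation step exactly removes the component just found, so the next power iteration converges to the next-largest eigenvalue, mirroring the construction of Γ₁, Γ₂, … in the text.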

13. ΓN(t,τ) may vanish, and the algorithm for computing eigenvalues may terminate, for some

finite N. In this case, there exists a finite number of nonzero eigenvalues, and autocorrelation

Γ(t,τ) has a finite dimensional expansion of the form

    \Gamma(t,\tau) = \sum_{k=1}^{N} \lambda_k\,\phi_k(t)\,\phi_k^*(\tau)    (13-29)

for some N. In this case, the kernel Γ(t,τ) is said to be degenerate; also, it is easy to show that

Γ(t,τ) is not positive definite.

14. If the case outlined by 13) does not hold, there exists a countably infinite number of nonzero

eigenvalues. However, ΓN(t,τ) converges as N → ∞. First, we show convergence for the special

case t = τ; next, we use this special case to establish convergence for the general case, t ≠ τ. To

reduce notational complexity in what follows, define the partial sum

    S_{n,m}(t,\tau) \equiv \sum_{k=n}^{m} \lambda_k\,\phi_k(t)\,\phi_k^*(\tau)
      = \Big[ \sqrt{\lambda_n}\,\phi_n(t) \;\;\cdots\;\; \sqrt{\lambda_m}\,\phi_m(t) \Big]
        \begin{bmatrix} \sqrt{\lambda_n}\,\phi_n^*(\tau) \\ \vdots \\ \sqrt{\lambda_m}\,\phi_m^*(\tau) \end{bmatrix} .    (13-30)

Consider the special case t = τ. Since ΓN(t,t) ≥ 0, Equation (13-28) implies

    0 \le S_{1,N}(t,t) \le \Gamma(t,t) < \infty .    (13-31)


As a function of index N, the sequence S1,N(t,t) is increasing but always bounded above by Γ(t,t),

as shown by (13-31). Hence, as N → ∞, both S1,N(t,t) and ΓN(t,t) must converge to some limit.

For the general case t ≠ τ, convergence of S1,N(t,τ) can be shown by establishing the fact

that partial sum Sn,m (t,τ) → 0 as n, m → ∞ (in any order). To establish this fact, consider partial

sum S_{n,m}(t,τ) to be the inner product of two vectors as shown by (13-30); one vector contains the elements \sqrt{\lambda_k}\,\phi_k(t), n ≤ k ≤ m, and the second vector contains the elements \sqrt{\lambda_k}\,\phi_k(\tau), n ≤ k ≤ m. Now, apply the Cauchy-Schwarz inequality (see Theorem 11-4) to inner product S_{n,m}(t,τ)

and obtain

    |S_{n,m}(t,\tau)| = \Big| \sum_{k=n}^{m} \lambda_k\,\phi_k(t)\,\phi_k^*(\tau) \Big| \le \sqrt{\,S_{n,m}(t,t)\,S_{n,m}(\tau,\tau)\,} .    (13-32)

As N → ∞, the convergence of S1,N(t,t) implies that partial sum Sn,m(t,t) → 0 as n, m → ∞ (in any

order). Hence, the right-hand-side of (13-32) approaches zero as n, m → ∞ (in any order), and

this establishes the convergence of S1,N(t,τ) and (13-28) for the general case t ≠ τ.

15. As it turns out, ΓN(t,τ) converges to zero as N → ∞, a claim that is supported by the

following argument. For each m ≤ N and fixed t, multiply ΓN(t,τ) by φm(τ) and integrate to obtain

    \int_0^T \Gamma_N(t,\tau)\,\phi_m(\tau)\,d\tau
      = \int_0^T \Gamma(t,\tau)\,\phi_m(\tau)\,d\tau - \sum_{k=1}^{N} \lambda_k\,\phi_k(t) \int_0^T \phi_k^*(\tau)\,\phi_m(\tau)\,d\tau
      = \lambda_m\,\phi_m(t) - \lambda_m\,\phi_m(t) = 0 .    (13-33)

For each fixed t and all m ≤ N, ΓN(t,τ) has zero component in the φm(τ) direction. Equation

(13-33) leads to the conclusion


    \lim_{N \to \infty} \int_0^T \Gamma_N(t,\tau)\,\phi_m(\tau)\,d\tau = 0    (13-34)

for each m ≥ 1. By the continuity of the inner product, we can interchange limit and integration in

(13-34) to see that Γ∞(t,τ) has no component in the φm(τ) direction, m ≥ 1. Since the

eigenfunctions φm(τ) span the vector space L2[0,T] of square-integrable functions, we see that

Γ∞(t,τ) = 0. The argument we have presented supports the claim

    \Gamma(t,\tau) = \sum_{k=1}^{\infty} \lambda_k\,\phi_k(t)\,\phi_k^*(\tau) ,    (13-35)

a result known as Mercer’s theorem. In fact, the sum in (13-35) can be shown to converge

uniformly on the rectangle 0 ≤ t,τ ≤ T (see R. Ash, Information Theory, Interscience Publishers,

1965).
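Mercer's expansion can be watched directly on a discretized kernel: diagonalize the Nyström matrix and rebuild the kernel from λ_k φ_k(t) φ_k(τ). The sketch below is illustrative; Γ(t₁,t₂) = 2D min(t₁,t₂) (Example 13-1) is an assumed concrete choice.

```python
import numpy as np

# Numerical form of Mercer's theorem (13-35): the kernel is recovered from
# its eigenvalues and (L2-normalized) eigenfunctions. Kernel is assumed.
T, D, n = 1.0, 0.5, 200
dt = T / n
t = (np.arange(n) + 0.5) * dt
G = 2.0 * D * np.minimum.outer(t, t)

w, V = np.linalg.eigh(G * dt)            # Nystrom eigenpairs
Phi = V / np.sqrt(dt)                    # columns satisfy sum(phi^2)*dt = 1
G_mercer = (Phi * w) @ Phi.T             # sum_k lambda_k phi_k(t) phi_k(tau)
print(float(np.max(np.abs(G - G_mercer))))  # essentially zero
```

In finite dimensions the reconstruction is exact up to roundoff; the content of Mercer's theorem is that the analogous infinite sum converges (uniformly) for continuous, nonnegative definite kernels.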

Karhunen-Loève Expansion

In an expansion of the form (13-3), we show that the coefficients xk will be pair-wise

uncorrelated if, and only if, the basis functions φk are eigenfunctions of (13-10). Then, we show

that the series converges in a mean-square sense.

Theorem 13-1: Suppose that finite-power random process X(t) has an expansion of the form

    X(t) = \sum_{m=1}^{\infty} x_m\,\phi_m(t), \qquad x_m = \int_0^T X(\tau)\,\phi_m^*(\tau)\,d\tau    (13-36)

for some complete orthonormal set φk(t), k ≥ 1, of basis functions. If the coefficients xn satisfy


    E[x_n x_m^*] = \begin{cases} \lambda_n, & n = m \\ 0, & n \neq m \end{cases}    (13-37)

(i.e., the coefficients are pair-wise uncorrelated and xn has a variance equal to eigenvalue λn), then

the basis functions φn(t) must be eigenfunctions of (13-9); that is, they must satisfy

    \int_0^T \Gamma(t,\tau)\,\phi_n(\tau)\,d\tau = \lambda_n\,\phi_n(t), \qquad 0 \le t \le T .    (13-38)

Proof: Multiply the expansion in (13-36) (the first equation in (13-36)) by xn∗ , take the

expectation, and use (13-37) to obtain

    E[X(t)\,x_n^*] = \sum_{m=1}^{\infty} E[x_m x_n^*]\,\phi_m(t) = E[|x_n|^2]\,\phi_n(t) = \lambda_n\,\phi_n(t) .    (13-39)

Now, multiply the complex conjugate of the second equation in (13-36) by X(t), and take the

expectation, to obtain

    E[X(t)\,x_n^*] = \int_0^T E[X(t)X^*(\tau)]\,\phi_n(\tau)\,d\tau = \int_0^T \Gamma(t,\tau)\,\phi_n(\tau)\,d\tau .    (13-40)

Finally, equate (13-39) and (13-40) to obtain

    \int_0^T \Gamma(t,\tau)\,\phi_n(\tau)\,d\tau = \lambda_n\,\phi_n(t), \qquad 0 \le t \le T ,

where λn is given by (13-37).♥ In addition to this result, the K-L coefficients will be orthogonal

if the orthonormal basis functions satisfy (13-38).

Theorem 13-2: If the orthogonal basis functions φn(t) are eigenfunctions of (13-38), the


coefficients xk will be orthogonal.

Proof: Suppose the orthogonal basis functions φn(t) satisfy integral equation (13-38). Compute

the expected value

    E[x_m x_n^*] = E\Big[ x_m \Big\{ \int_0^T X(t)\,\phi_n^*(t)\,dt \Big\}^* \Big] = \int_0^T E[x_m X^*(t)]\,\phi_n(t)\,dt .    (13-41)

Now, use (13-39) to replace the expectation in (13-41) and obtain

    E[x_m x_n^*] = \lambda_m \int_0^T \phi_m^*(t)\,\phi_n(t)\,dt = \lambda_m\,\delta_{mn} ,

which shows that the coefficients are pair-wise uncorrelated.♥ Theorems 13-1 and 13-2 establish

the claim that the xk will be uncorrelated if, and only if, the basis functions satisfy integral equation

(13-38). Next, we show mean square convergence of the K-L series.
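The uncorrelatedness established by Theorems 13-1 and 13-2 can be checked by Monte Carlo. The sketch below is illustrative, not part of the notes: it samples paths of a Wiener process (Γ = 2D min(t₁,t₂), Example 13-1), projects them onto the eigenfunctions φ_n(t) = √(2/T) sin((n−½)πt/T), and verifies that the sample covariance of the coefficients is approximately diag(λ₁, λ₂, …). Path count, grid, and D are assumed values.

```python
import numpy as np

# Monte Carlo check that K-L coefficients are pair-wise uncorrelated with
# variance lambda_n (Wiener process example; parameters are assumed values).
rng = np.random.default_rng(1)
T, D, n, paths = 1.0, 0.5, 500, 4000
dt = T / n
t = (np.arange(n) + 0.5) * dt

# increments of variance 2*D*dt give E[X(t1)X(t2)] = 2*D*min(t1,t2)
X = np.cumsum(rng.standard_normal((paths, n)) * np.sqrt(2.0 * D * dt), axis=1)

N = 4
k = np.arange(1, N + 1)
Phi = np.sqrt(2.0 / T) * np.sin(np.outer((k - 0.5) * np.pi / T, t))
coef = X @ Phi.T * dt                  # x_n = \int_0^T X(t) phi_n(t) dt
C = coef.T @ coef / paths              # sample E[x_n x_m]
lam = 2.0 * D * T**2 / (((k - 0.5) * np.pi) ** 2)
print(np.round(np.diag(C), 3), np.round(lam, 3))
```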

Theorem 13-3: Let X(t) be a finite-power random process on [0, T]. The Karhunen-Loève

expansion

    X(t) = \sum_{m=1}^{\infty} x_m\,\phi_m(t), \qquad x_m = \int_0^T X(\tau)\,\phi_m^*(\tau)\,d\tau ,    (13-42)

where the coefficients are pair-wise uncorrelated and the basis functions satisfy the integral

equation (13-38), converges in the mean square sense.

Proof: Evaluate the mean-square error between the series and the process to obtain

    E\Big[ \Big| X(t) - \sum_{m=1}^{\infty} x_m\,\phi_m(t) \Big|^2 \Big]
      = E\Big[ \Big( X(t) - \sum_{m=1}^{\infty} x_m\,\phi_m(t) \Big)\,X^*(t) \Big]
        - E\Big[ \Big( \sum_{n=1}^{\infty} x_n^*\,\phi_n^*(t) \Big)\Big( X(t) - \sum_{m=1}^{\infty} x_m\,\phi_m(t) \Big) \Big] .    (13-43)

On the right-hand side of (13-43), the first term is

    E\Big[ \Big( X(t) - \sum_{m=1}^{\infty} x_m\,\phi_m(t) \Big)\,X^*(t) \Big] = \Gamma(t,t) - \sum_{m=1}^{\infty} \lambda_m\,\phi_m(t)\,\phi_m^*(t) = 0    (13-44)

(E[X(t) x_m^*] = λ_m φ_m(t), first established by (13-39), was used here). The fact that the right-hand side of (13-44) is zero follows from Mercer's Theorem (discussed in Property 15 above). On the

right-hand side of (13-43), the second term can be expressed as

    E\Big[ \Big( \sum_{n=1}^{\infty} x_n^*\,\phi_n^*(t) \Big)\Big( X(t) - \sum_{m=1}^{\infty} x_m\,\phi_m(t) \Big) \Big]
      = \sum_{n=1}^{\infty} \phi_n^*(t)\,E[x_n^* X(t)] - \sum_{n=1}^{\infty}\sum_{m=1}^{\infty} \phi_n^*(t)\,\phi_m(t)\,E[x_n^* x_m]
      = \sum_{n=1}^{\infty} \lambda_n\,\phi_n(t)\,\phi_n^*(t) - \sum_{n=1}^{\infty} \lambda_n\,\phi_n(t)\,\phi_n^*(t) = 0 .    (13-45)

On the right-hand-side of (13-45), E[x_n^* X(t)] was evaluated with the aid of (13-39); also, the fact that the coefficients are uncorrelated was used in (13-45). Equations (13-43) through (13-45)

imply

    E\Big[ \Big| X(t) - \sum_{m=1}^{\infty} x_m\,\phi_m(t) \Big|^2 \Big] = 0 ,    (13-46)


so the K-L expansion converges in the mean square sense.♥

As it turns out, the K-L expansion need contain only eigenfunctions that correspond to

nonzero eigenvalues. Suppose φ(t) is an eigenfunction that corresponds to eigenvalue λ = 0.

Then the corresponding coefficient x has a second moment given by

    E[x\,x^*] = E\Big[ \int_0^T X(t)\,\phi^*(t)\,dt \Big\{ \int_0^T X(\tau)\,\phi^*(\tau)\,d\tau \Big\}^* \Big]
      = \int_0^T \!\! \int_0^T \Gamma(t,\tau)\,\phi^*(t)\,\phi(\tau)\,dt\,d\tau
      = \int_0^T \phi^*(t)\Big[ \int_0^T \Gamma(t,\tau)\,\phi(\tau)\,d\tau \Big]\,dt = 0 .    (13-47)

That is, in the K-L expansion, the coefficient x of φ(t) has zero variance, and it need not be

included in the expansion.

Example 13-1 (K-L Expansion of the Wiener Process): From Chapter 6, recall that the

Wiener process X(t), t ≥ 0, has the autocorrelation function

    \Gamma(t_1,t_2) = 2D\,\min\{t_1, t_2\} ,    (13-48)

where D is the diffusion constant. Substitute (13-48) into (13-38) and obtain

    2D \int_0^T \min\{t,\tau\}\,\phi_n(\tau)\,d\tau = \lambda_n\,\phi_n(t)    (13-49)

    2D \int_0^t \tau\,\phi_n(\tau)\,d\tau + 2Dt \int_t^T \phi_n(\tau)\,d\tau = \lambda_n\,\phi_n(t) ,    (13-50)

for 0 ≤ t ≤ T. With respect to t, we must differentiate (13-50) twice; the first derivative produces


    2D \int_t^T \phi_n(\tau)\,d\tau = \lambda_n\,\phi_n'(t) ,    (13-51)

where ′φn denotes the time derivative of φn. Differentiate (13-51) to obtain

    \phi_n''(t) + \frac{2D}{\lambda_n}\,\phi_n(t) = 0 ,    (13-52)

a second-order differential equation in the eigenfunction φn. A general solution of (13-52) is

    \phi_n(t) = \alpha_n \sin\omega_n t + \beta_n \cos\omega_n t, \qquad \omega_n = \sqrt{2D/\lambda_n} ,    (13-53)

where αn, βn and ωn are constants that must be chosen so that φn satisfies appropriate boundary

conditions. Evaluate (13-50) at t = 0 to see

    \phi_n(0) = 0    (13-54)

for all n. Because of (13-54), Equation (13-53) implies that all βn ≡ 0. In a similar manner,

Equation (13-51) implies that \phi_n'(T) = 0, a result that leads to the conclusion

    \omega_n = \sqrt{\frac{2D}{\lambda_n}} = \frac{(2n-1)\pi}{2T} = \frac{(n - \frac{1}{2})\pi}{T}, \qquad n = 1, 2, 3, \ldots    (13-55)

Equation (13-55) implies that the eigenvalues are given by

    \lambda_n = \frac{2DT^2}{(n - \frac{1}{2})^2\pi^2}, \qquad n = 1, 2, 3, \ldots    (13-56)

And, the normalization condition (13-2) can be invoked to obtain


    \int_0^T (\alpha_n \sin\omega_n t)^2\,dt = \frac{\alpha_n^2\,T}{2} = 1 ,    (13-57)

so that

    \alpha_n = \sqrt{2/T} .    (13-58)

After using βn ≡ 0, (13-58) and (13-55) in Equation (13-53), the eigenfunctions can be expressed

as

    \phi_n(t) = \sqrt{\frac{2}{T}}\,\sin\!\Big( (n - \tfrac{1}{2})\,\frac{\pi t}{T} \Big), \qquad 0 \le t \le T .    (13-59)

Finally, the K-L expansion of the Wiener process is

    X(t) = \sqrt{\frac{2}{T}}\,\sum_{n=1}^{\infty} x_n \sin\!\Big( (n - \tfrac{1}{2})\,\frac{\pi t}{T} \Big), \qquad 0 \le t \le T ,    (13-60)

where the uncorrelated coefficients are given by

    x_n = \sqrt{\frac{2}{T}} \int_0^T X(t)\,\sin\!\Big( (n - \tfrac{1}{2})\,\frac{\pi t}{T} \Big)\,dt .    (13-61)

Furthermore, the coefficient xn has variance λn given by (13-56).
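The expansion (13-60) can also be run in reverse to synthesize approximate Wiener-process paths: draw the x_n as independent zero-mean Gaussians with variance λ_n from (13-56) and sum the truncated series. The sketch below is illustrative; the truncation level N and the values of D, T, and the path count are assumed choices.

```python
import numpy as np

# Synthesize approximate Wiener paths from the truncated K-L series (13-60)
# and compare the sample covariance with Gamma(t1,t2) = 2*D*min(t1,t2).
rng = np.random.default_rng(2)
T, D, N, paths = 1.0, 0.5, 200, 5000
t = np.linspace(0.0, T, 101)

k = np.arange(1, N + 1)
lam = 2.0 * D * T**2 / (((k - 0.5) * np.pi) ** 2)
Phi = np.sqrt(2.0 / T) * np.sin(np.outer((k - 0.5) * np.pi / T, t))  # N x 101
x = rng.standard_normal((paths, N)) * np.sqrt(lam)                   # K-L coefficients
X = x @ Phi                                                          # sample paths

cov_half = float(np.mean(X[:, 50] * X[:, 100]))
print(cov_half)  # near Gamma(0.5, 1.0) = 2*D*min(0.5, 1.0) = 0.5
```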

Example 13-2: Consider the random process

    X(t) = A\cos(\omega_0 t + \theta) ,    (13-62)


where A and ω0 are constants, and θ is a random variable that is uniformly distributed on (-π, π].

As shown in Chapter 7, the autocorrelation of X(t) is

    \Gamma(\tau) = \frac{A^2}{2}\,\cos\omega_0\tau ,    (13-63)

a function with period T0 = 2π/ω0. Substitute (13-63) into (13-10) to obtain

    \frac{A^2}{2} \int_0^{T_0} \cos(\omega_0(t - \tau))\,\phi_n(\tau)\,d\tau = \lambda_n\,\phi_n(t), \qquad 0 \le t \le T_0 .    (13-64)

The eigenvalues and eigenfunctions are found easily. First, use Mercer’s theorem to write

    \Gamma(t - \tau) = \sum_{k=1}^{\infty} \lambda_k\,\phi_k(t)\,\phi_k^*(\tau) = \frac{A^2}{2}\cos\omega_0(t - \tau)
      = \frac{A^2}{2}\cos\omega_0 t\,\cos\omega_0\tau + \frac{A^2}{2}\sin\omega_0 t\,\sin\omega_0\tau .    (13-65)

Note that this kernel is degenerate. After normalization, the eigenfunctions that correspond to

nonzero eigenvalues, can be written as

    \phi_1(t) = \sqrt{2/T}\,\cos\omega_0 t, \qquad \phi_2(t) = \sqrt{2/T}\,\sin\omega_0 t .    (13-66)

Both of these eigenfunctions correspond to the eigenvalue λ = TA²/4; note that λ = TA²/4 has an

eigenspace of dimension two. Also, note that there are a countably infinite number of

eigenfunctions in the null space of the operator. That is, for k ≠ 1, the eigenfunctions


    \phi_{1k}(t) = \sqrt{2/T}\,\cos k\omega_0 t, \qquad \phi_{2k}(t) = \sqrt{2/T}\,\sin k\omega_0 t    (13-67)

correspond to the eigenvalue λ = 0. The K-L expansion of random process X(t) is

    X(t) = x_1\sqrt{2/T}\,\cos\omega_0 t + x_2\sqrt{2/T}\,\sin\omega_0 t ,    (13-68)

where

    x_1 = A\sqrt{T/2}\,\cos\theta, \qquad x_2 = -A\sqrt{T/2}\,\sin\theta .    (13-69)

As expected, we have

    E[x_1 x_2] = -E\Big[ \frac{A^2T}{2}\,\sin\theta\cos\theta \Big] = -\frac{A^2T}{4}\,E[\sin 2\theta] = -\frac{A^2T}{4}\cdot\frac{1}{2\pi}\int_{-\pi}^{\pi}\sin 2\theta\,d\theta = 0
    E[x_1^2] = E\Big[ \frac{A^2T}{2}\,\cos^2\theta \Big] = \frac{A^2T}{4}
    E[x_2^2] = E\Big[ \frac{A^2T}{2}\,\sin^2\theta \Big] = \frac{A^2T}{4} .    (13-70)
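The moments in (13-70) are easy to confirm by Monte Carlo. The sketch below is illustrative, not part of the notes: it draws θ uniform on (-π, π], forms the two coefficients from (13-69), and checks that they are uncorrelated with variance A²T/4. The values of A and T are assumed samples.

```python
import numpy as np

# Monte Carlo check of (13-69)-(13-70); A and T are assumed sample values.
rng = np.random.default_rng(3)
A, T, M = 2.0, 1.0, 200_000
theta = rng.uniform(-np.pi, np.pi, M)

x1 = A * np.sqrt(T / 2.0) * np.cos(theta)
x2 = -A * np.sqrt(T / 2.0) * np.sin(theta)

print(float(np.mean(x1 * x2)),
      float(np.mean(x1**2)), float(np.mean(x2**2)))  # ~0, ~A^2*T/4, ~A^2*T/4
```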

K-L Expansion for Processes with Rational Spectra

Suppose X(t) is a wide sense stationary process with a rational power spectrum. That is,

the power spectrum of X can be represented as


    S(\omega) = \frac{N(\omega^2)}{D(\omega^2)} ,    (13-71)

where N and D are polynomials. Such a process occurs if white noise is passed through a linear,

time-invariant filter. Hence, many applications are served well by modeling their processes as

having a rational power spectrum.

As it turns out, a process with a rational power spectrum can be expanded in a K-L

expansion where the eigenfunctions are non-harmonically related sine and cosine functions. For

such a case, the eigenvalues and eigenfunctions can be found. The example that follows illustrates

a general method for solving for the eigenfunctions.

Example 13-4: Let X(t) be a process with power spectrum

    S(\omega) = \frac{2\alpha P}{\omega^2 + \alpha^2}, \qquad -\infty < \omega < \infty, \;\; P > 0, \;\; \alpha > 0 .    (13-72)

Process X(t) has the autocorrelation function

    \Gamma(\tau) = \mathcal{F}^{-1}[S(\omega)] = P\exp(-\alpha|\tau|) .    (13-73)

For the related eigenfunction/eigenvalue problem, the integral equation is

    P \int_{-T}^{T} e^{-\alpha|t-u|}\,\phi(u)\,du = \lambda\,\phi(t), \qquad -T \le t \le T .    (13-74)

An analysis leading to the eigenvalues and eigenfunctions is less complicated if a symmetric interval

[-T,T] is used (of course, our expansion will be valid on [0, T]). We can write (13-74) as

    \lambda\,\phi(t) = P \int_{-T}^{t} e^{-\alpha(t-u)}\,\phi(u)\,du + P \int_{t}^{T} e^{-\alpha(u-t)}\,\phi(u)\,du, \qquad -T \le t \le T .    (13-75)


With respect to t, differentiate (13-75) to obtain

    \lambda\,\phi'(t) = -\alpha P\,e^{-\alpha t} \int_{-T}^{t} e^{\alpha u}\,\phi(u)\,du + \alpha P\,e^{\alpha t} \int_{t}^{T} e^{-\alpha u}\,\phi(u)\,du .    (13-76)

Once again, differentiate (13-76) to obtain

    \lambda\,\phi''(t) = \alpha^2 P\,e^{-\alpha t} \int_{-T}^{t} e^{\alpha u}\,\phi(u)\,du - \alpha P\,\phi(t)
                       + \alpha^2 P\,e^{\alpha t} \int_{t}^{T} e^{-\alpha u}\,\phi(u)\,du - \alpha P\,\phi(t) ,    (13-77)

which can be written as

    \lambda\,\phi''(t) = \alpha^2 P \int_{-T}^{T} e^{-\alpha|t-u|}\,\phi(u)\,du - 2\alpha P\,\phi(t) .    (13-78)

Now, multiply (13-74) by α² and use the product to eliminate the integral in (13-78); this

procedure results in

    \lambda\,\phi''(t) = \alpha^2(\lambda - 2P/\alpha)\,\phi(t) .    (13-79)

There are no zero eigenvalues since Γ is positive definite. Inspection of (13-79) reveals that the

three cases

i) 0 < λ < 2P/α,
ii) λ = 2P/α,
iii) λ > 2P/α

must be considered.

Case i) 0 < λ < 2P/α


We start by defining

    b^2 \equiv \frac{-\alpha^2(\lambda - 2P/\alpha)}{\lambda}, \qquad 0 < b^2 < \infty ,    (13-80)

which can be solved for

    \lambda = \frac{2P\alpha}{(\alpha - jb)(\alpha + jb)} = \frac{2P\alpha}{\alpha^2 + b^2} .    (13-81)

In terms of b, the general, complex-valued solution of (13-79) is

    \phi(t) = c_1 e^{jbt} + c_2 e^{-jbt} ,    (13-82)

where c1 and c2 are complex constants. Plug (13-82) into integral equation (13-75) to obtain

    \lambda\big( c_1 e^{jbt} + c_2 e^{-jbt} \big)
      = P e^{-\alpha t} \int_{-T}^{t} e^{\alpha u}\big( c_1 e^{jbu} + c_2 e^{-jbu} \big)\,du
        + P e^{\alpha t} \int_{t}^{T} e^{-\alpha u}\big( c_1 e^{jbu} + c_2 e^{-jbu} \big)\,du
      = P\Big[ \frac{c_1 e^{jbt}}{\alpha + jb} + \frac{c_2 e^{-jbt}}{\alpha - jb} + \frac{c_1 e^{jbt}}{\alpha - jb} + \frac{c_2 e^{-jbt}}{\alpha + jb} \Big]
        - P e^{-\alpha t}\Big[ \frac{c_1 e^{-(\alpha + jb)T}}{\alpha + jb} + \frac{c_2 e^{-(\alpha - jb)T}}{\alpha - jb} \Big]
        - P e^{\alpha t}\Big[ \frac{c_1 e^{-(\alpha - jb)T}}{\alpha - jb} + \frac{c_2 e^{-(\alpha + jb)T}}{\alpha + jb} \Big] .    (13-83)

Now, substitute (13-81) for λ on the left-hand-side of (13-83); then, cancel out like terms to


obtain the requirement

    0 = e^{-\alpha t}\Big[ \frac{c_1 e^{-(\alpha + jb)T}}{\alpha + jb} + \frac{c_2 e^{-(\alpha - jb)T}}{\alpha - jb} \Big]
      + e^{\alpha t}\Big[ \frac{c_1 e^{-(\alpha - jb)T}}{\alpha - jb} + \frac{c_2 e^{-(\alpha + jb)T}}{\alpha + jb} \Big] .    (13-84)

We must find the values of b (i.e., the frequencies of the eigenfunctions) for which equality is

achieved in (13-84). Note that both bracket terms must vanish identically to achieve equality for

all time t. However, for c1 ≠ ±c2, neither bracket will vanish for any real b. Hence, we require c1

= ±c2 in order to obtain equality in (13-84). First, consider c1 = -c2; to zero the first bracket term

we must have

    \frac{e^{-(\alpha + jb)T}}{\alpha + jb} - \frac{e^{-(\alpha - jb)T}}{\alpha - jb}
      = e^{-\alpha T}\,\frac{(\alpha - jb)\,e^{-jbT} - (\alpha + jb)\,e^{jbT}}{(\alpha + jb)(\alpha - jb)}
      = e^{-\alpha T}\,\frac{-2j\big[ \alpha\sin(bT) + b\cos(bT) \big]}{\alpha^2 + b^2} = 0 .    (13-85)

To obtain zero in this last expression, we must have

    \alpha\sin(bT) + b\cos(bT) = 0 .    (13-86)

Finally, this leads to the requirement

    \tan(bT) = -b/\alpha .    (13-87)

With c1 = -c2, the second bracket in (13-84) is zero if (13-87) holds. Hence, the values of b that

solve (13-87) are roots of (13-84), and they are frequencies of the eigenfunctions.


Next, we must analyze the case c1 = c2 (which is similar to the case c1 = -c2 just finished).

For c1 = c2, we get

    \tan(bT) = \alpha/b .    (13-88)

Hence, the permissible frequencies of the eigenfunctions are given by the union

    \{\, b \ge 0 : \tan(bT) = -b/\alpha \,\} \cup \{\, b \ge 0 : \tan(bT) = \alpha/b \,\} .    (13-89)

These frequencies can be found numerically. Figure 13-1 depicts graphical solutions of

(13-89) for the first nine frequencies. A value of αT = 2 was used to construct the figure. Note

[Figure 13-1: Graphical display of the b_k, the frequencies of the eigenfunctions, obtained by intersecting the branches of tan(bT) with the curves Y = -bT/(αT) and Y = αT/(bT); the first nine intersections b_1, ..., b_9 are shown for αT = 2.]


that bk, k odd, form a decreasing sequence of positive numbers, while bk, k even, form a

decreasing sequence of negative numbers.
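The graphical solution of Figure 13-1 translates directly into a numerical one: each branch of tan(bT) carries exactly one intersection with each curve, so a bracketed root search per branch finds the b_k. The sketch below is illustrative, not part of the notes (the bisection helper and grid sizes are assumed implementation details); it uses αT = 2 as in the figure and cross-checks the resulting eigenvalues against the discretized kernel P e^{-α|t-u|} on [-T, T].

```python
import numpy as np

# Solve (13-87)/(13-88) per tan branch, then compare the eigenvalues
# lambda_k = 2*P*alpha/(alpha^2 + b_k^2) with a Nystrom eigen-decomposition.
alpha, T, P = 2.0, 1.0, 1.0

def bisect(f, a, b, iters=200):
    for _ in range(iters):
        m = 0.5 * (a + b)
        if (f(m) > 0.0) == (f(a) > 0.0):
            a = m
        else:
            b = m
    return 0.5 * (a + b)

eps, roots = 1e-9, []
for m in range(3):
    # cosine family (k odd): tan(bT) = alpha/b, one root per branch
    roots.append(bisect(lambda x: np.tan(x) - alpha * T / x,
                        m * np.pi + eps, m * np.pi + np.pi / 2 - eps) / T)
    # sine family (k even): tan(bT) = -b/alpha
    roots.append(bisect(lambda x: np.tan(x) + x / (alpha * T),
                        m * np.pi + np.pi / 2 + eps, (m + 1) * np.pi - eps) / T)

lam_formula = 2.0 * P * alpha / (alpha**2 + np.array(roots) ** 2)

n = 1000                                   # Nystrom check on [-T, T]
dt = 2.0 * T / n
t = -T + (np.arange(n) + 0.5) * dt
G = P * np.exp(-alpha * np.abs(np.subtract.outer(t, t)))
lam_num = np.sort(np.linalg.eigvalsh(G * dt))[::-1][:6]
print(np.round(lam_formula, 4))
print(np.round(lam_num, 4))
```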

Once the frequencies bk are found, they can be used to determine the eigenvalues

    \lambda_k = \frac{2P\alpha}{\alpha^2 + b_k^2}, \qquad k = 1, 2, 3, \ldots    (13-90)

The frequencies bk, k odd, were obtained by setting c1 = c2. For this case, (13-82) yields

    \phi_k(t) = l_k\,\cos(b_k t), \qquad k \text{ odd} ,    (13-91)

where constant lk is chosen to normalize the eigenfunction. That is, lk must satisfy

lk² ∫_{−T}^{T} cos²(bk t) dt = 1 , (13-92)

which leads to

lk = 1 / √( T[1 + Sa(2bkT)] ) , −T ≤ t ≤ T, k odd , (13-93)

where Sa(x) ≡ sin(x)/x. Hence, for k odd, we have the eigenfunctions

φk(t) = cos(bk t) / √( T[1 + Sa(2bkT)] ) , −T ≤ t ≤ T, k odd . (13-94)

The frequencies bk, k even, were obtained by setting c1 = -c2. An analysis similar to the

case just presented yields the eigenfunctions


φk(t) = sin(bk t) / √( T[1 − Sa(2bkT)] ) , −T ≤ t ≤ T, k even . (13-95)
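The normalization in (13-93)-(13-94) rests on the identity ∫_{−T}^{T} cos²(bt) dt = T[1 + Sa(2bT)], which holds for every b. A quick numerical check (pure Python; the values T = 2 and b ≈ 1.0769/T, an approximate first root for αT = 2, are assumed for illustration):

```python
import math

def sa(x):
    """Sa(x) = sin(x)/x."""
    return 1.0 if x == 0 else math.sin(x) / x

def norm_sq_cos(b, T, n=20000):
    """Midpoint-rule approximation of the integral of cos^2(b t) over [-T, T]."""
    h = 2 * T / n
    return sum(math.cos(b * (-T + (i + 0.5) * h)) ** 2 for i in range(n)) * h

# Assumed values: T = 2 and b = b1, the approximate first root for alpha*T = 2.
b, T = 1.0769 / 2.0, 2.0
closed_form = T * (1 + sa(2 * b * T))  # bracketed quantity in (13-93)-(13-94)
```

With this value dividing cos(bk t), the eigenfunction has unit norm on [−T, T], as (13-92) requires.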

Observations:

1. Eigenfunctions are cosines and sines at frequencies that are not harmonically related.

2. For each n, the value of bnT is independent of T. Hence, as T increases, the value of bn

decreases, so the frequencies are inversely related to T.

3. As bT increases, the upper intersections occur at approximately (k−1)π/2 for k odd, and the

lower intersections occur at approximately (k−1)π/2 for k even. Hence, the higher index

eigenfunctions are approximately a set of harmonically related sines and cosines. For large k

we have

φk(t) ≈ cos( (k−1)πt / 2T ) / √( T[1 + Sa(2bkT)] ) , −T ≤ t ≤ T, k odd

φk(t) ≈ sin( (k−1)πt / 2T ) / √( T[1 − Sa(2bkT)] ) , −T ≤ t ≤ T, k even

(13-96)

This concludes the case 0 < λ < 2P/α.

Case ii) λ = 2P/α

For this case, Equation (13-79) becomes

(2P/α) d²φ(t)/dt² = 0 . (13-97)

Two independent solutions to this equation are φ(t) = t and φ(t) = 1. By direct substitution, it is

seen that neither of these satisfies integral equation (13-74). Hence, this case yields no

eigenfunctions and eigenvalues.

Case iii) λ > 2P/α


For this case, Equation (13-79) becomes

d²φ(t)/dt² = [ α²(λ − 2P/α) / λ ] φ(t) (13-98)

This equation has two independent solutions, given by

φ1(t) = e^{νt} , φ2(t) = e^{−νt} , (13-99)

where

ν ≡ √( α²(λ − 2P/α) / λ ) > 0 . (13-100)

By direct substitution, it is seen that neither of these satisfies integral equation (13-74). Hence, this

case yields no eigenfunctions and eigenvalues.

Example 13-5: In radar detection theory, we must detect the presence of a signal given a T-

second record of receiver output data. There are two possibilities (termed hypotheses). First, the

record may consist only of receiver noise; no target is present for this case. The second possibility

is that the data record contains a target reflection embedded in the receiver noise; in this case, a

target is present. You must “filter” the record of data and make a decision regarding the

presence/absence of a target.

Let ν(t), 0 ≤ t ≤ T, denote the record of receiver output data. After receiving the

complete time record, we must decide between the hypotheses


H0: ν(t) = η(t) , only noise - target not present
H1: ν(t) = s(t) + η(t) , signal + noise - target is present . (13-101)

Here, η(t) is zero-mean Gaussian noise that is described by positive definite correlation function

Γ(t,τ). Note that we allow non-white and non-stationary noise in this example. s(t) is the

reflected signal, which we assume to be known (usually, s(t) is a scaled and time-shifted version

of the transmitted signal). At time T, we must decide between H0 and H1.

We expand the received signal ν(t) in a K-L expansion of the form

ν(t) = Σ_{k=1}^{∞} νk φk(t)

νk = ∫_0^T ν(t) φk(t) dt , (13-102)

where φk(t) are the eigenfunctions of (13-10), an integral equation that utilizes kernel Γ(t,τ)

describing the receiver noise. The νk are uncorrelated Gaussian random variables with variance

equal to the positive eigenvalues of the integral equation; that is, VAR[νk] = λk.

The received signal ν(t) may be only noise, or it may be signal + noise. Hence, the

conditional mean of νk is

E[νk | H0] = E[ ∫_0^T η(t) φk(t) dt ] = 0

E[νk | H1] = E[ ∫_0^T ( s(t) + η(t) ) φk(t) dt ] = sk , (13-103)

where

sk = ∫_0^T s(t) φk(t) dt


are the coefficients in the expansion

s(t) = Σ_{k=1}^{∞} sk φk(t) (13-104)

of signal s(t). Under both hypotheses, νk has a variance given by

Var[νk | H0] = Var[νk | H1] = λk (13-105)

To start with, our statistical test will use only the first n K-L coefficients νk, 1 ≤ k ≤ n.

We form the vector

V→ ≡ [ ν1 ν2 ⋯ νn ]^T (13-106)

and the two densities

P0(V→) ≡ P(V→ | H0) = [ ∏_{k=1}^{n} (2πλk)^{−½} ] exp( −½ Σ_{k=1}^{n} νk² / λk )

P1(V→) ≡ P(V→ | H1) = [ ∏_{k=1}^{n} (2πλk)^{−½} ] exp( −½ Σ_{k=1}^{n} (νk − sk)² / λk ) . (13-107)

P0 (alternatively, P1) is the density for the n coefficients when H0 (alternatively, H1) is true.

We will use a classical likelihood ratio test (see C.W. Helstrom, Statistical Theory of

Signal Detection, 2nd edition) to make a decision between H0 and H1. First, given V→, we compute

the likelihood ratio


Λ(V→) ≡ P1(V→) / P0(V→) = exp[ Σ_{k=1}^{n} (2νk sk − sk²) / (2λk) ] (13-108)

in terms of the known sk and λk. Then, we compare the computed Λ to a user-defined threshold

Λ0 to make our decision (there are several well-known methods for setting the threshold Λ0). We

decide hypothesis H1 if Λ exceeds the threshold, and H0 if Λ is less than the threshold. Stated

tersely, our test can be expressed as

Λ(V→) ≷_{H0}^{H1} Λ0 . (13-109)

The inequality (13-109) will be unchanged, and the decision process will not be affected, if we

apply any monotone increasing function to both sides. Due to the exponential function in (13-108), we

take the logarithm of this equation and obtain

Gn ≡ Σ_{k=1}^{n} (sk/λk) νk ≷_{H0}^{H1} ln Λ0 + ½ Σ_{k=1}^{n} sk²/λk ≡ Gn⁰ . (13-110)
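For concreteness, the finite-coefficient test (13-110) reduces to a few lines of arithmetic. In the sketch below every numerical value (the sk, λk, observed νk, and threshold Λ0) is a hypothetical placeholder, chosen only to illustrate the mechanics:

```python
import math

# Illustrative (assumed) values: first n = 4 K-L signal coefficients,
# eigenvalues, one observed coefficient vector, and a threshold.
s   = [1.0, 0.5, 0.25, 0.125]   # s_k
lam = [2.0, 1.0, 0.5, 0.25]     # lambda_k
nu  = [1.2, 0.3, 0.4, 0.05]     # observed nu_k
Lambda0 = 1.0                   # threshold (ln Lambda0 = 0)

q = [sk / lk for sk, lk in zip(s, lam)]       # q_k = s_k / lambda_k
G_n  = sum(qk * nk for qk, nk in zip(q, nu))  # test statistic, Eq. (13-110)
G_n0 = math.log(Lambda0) + 0.5 * sum(qk * sk for qk, sk in zip(q, s))
decision = "H1" if G_n > G_n0 else "H0"
```

Here Gn exceeds the threshold Gn⁰, so this particular record would be declared "target present."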

To simplify (13-110), we define qk ≡ sk/λk. The qk are coefficients in the generalized

Fourier series expansion of a function q(t); that is, the coefficients qk determine the function

q(t) ≡ Σ_{k=1}^{∞} qk φk(t) . (13-111)

As will be discussed shortly, function q(t) is the solution of an integral equation based on kernel


Γ. In terms of the coefficients qk, (13-110) can be written as

Gn ≡ Σ_{k=1}^{n} qk νk ≷_{H0}^{H1} ln Λ0 + ½ Σ_{k=1}^{n} qk sk ≡ Gn⁰ (13-112)

The two sums in (13-112) converge as n → ∞. By a general form of Parseval’s theorem,

we have

limit_{n→∞} Σ_{k=1}^{n} qk νk = ∫_0^T q(t) ν(t) dt

limit_{n→∞} Σ_{k=1}^{n} qk sk = ∫_0^T q(t) s(t) dt . (13-113)

Use (13-113), and take the limit of (13-112), to obtain the decision criterion

G ≡ ∫_0^T q(t) ν(t) dt ≷_{H0}^{H1} ln Λ0 + ½ ∫_0^T q(t) s(t) dt . (13-114)

As shown on the left-hand-side of Equation (13-114), statistic G can be computed once data

record ν(t), 0 ≤ t ≤ T, is known. Then, to make a decision between hypotheses H0 and H1, G is

compared to the threshold obtained by computing the right-hand-side of (13-114).

Statistic G can be obtained by a filtering operation, as illustrated by Figure 13-3. Simply

pass received signal ν(t) through a filter with impulse response

h(t) ≡ q(T − t) , 0 ≤ t ≤ T , (13-115)


and sample the filter output at t = T (the end of the integration period) to obtain the statistic G.

This is the well-known matched filter for signal s(t) embedded in Gaussian, nonstationary,

correlated noise described by correlation function Γ(t,τ).
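The claim that sampling the filter output at t = T reproduces G can be checked with a discrete-time sketch (pure Python; both waveforms below are assumed stand-ins, not taken from the notes). With h[n] = q[N−1−n], the convolution output at the final sample equals the inner product Σ q[n]ν[n]Δt exactly:

```python
import math

# Assumed discrete-time waveforms on [0, T] (illustration only)
N, T = 500, 1.0
dt = T / N
q  = [math.sin(2 * math.pi * n / N) for n in range(N)]        # stand-in for q(t)
nu = [math.cos(6 * math.pi * n / N) + 0.1 for n in range(N)]  # stand-in for nu(t)

h = q[::-1]  # h[n] = q[N-1-n], the discrete analog of Eq. (13-115)
# Convolution output y[k] = sum_m h[m] nu[k-m] dt, sampled at the end k = N-1:
y_end = sum(h[m] * nu[N - 1 - m] for m in range(N)) * dt
# Direct correlation statistic G = integral of q(t) nu(t) dt:
G = sum(q[n] * nu[n] for n in range(N)) * dt
```

The two sums contain identical terms (in reverse order), which is the discrete form of the filter-then-sample argument.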

As described above, function q(t) has expansion (13-111) with coefficients qk ≡ sk/λk.

However, we show that q(t) is the solution of a well-known integral equation. First, write

(13-111) with τ as the time variable. Then, multiply the result by Γ(t,τ), and integrate from τ = 0

to τ = T to obtain

∫_0^T Γ(t,τ) q(τ) dτ = Σ_{k=1}^{∞} qk ∫_0^T Γ(t,τ) φk(τ) dτ = Σ_{k=1}^{∞} (sk/λk) [ λk φk(t) ] , (13-116)

where sk/λk has been substituted for qk. On the right-hand-side, cancel out the eigenvalue λk, and

use (13-104) to obtain the integral equation

∫_0^T Γ(t,τ) q(τ) dτ = s(t) , 0 ≤ t ≤ T , (13-117)

for the matched filter impulse response q(t). Equation (13-117) is the well-known Fredholm integral

equation of the first kind.
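One way to attack (13-117) numerically is to discretize the integral on a grid, which turns the Fredholm equation into a linear system. The sketch below is pure Python with an assumed exponential kernel Γ(t,τ) = e^{−α|t−τ|} and an assumed signal s(t); neither is taken from the example above:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][c] * x[c] for c in range(i + 1, n))) / A[i][i]
    return x

# Discretize: integral of Gamma(t,tau) q(tau) dtau = s(t) on [0, T]
alpha, T, n = 1.0, 1.0, 80
dt = T / n
t = [(i + 0.5) * dt for i in range(n)]  # midpoint grid
Gamma = [[math.exp(-alpha * abs(ti - tj)) for tj in t] for ti in t]
s = [math.sin(math.pi * ti / T) for ti in t]  # assumed known signal
q = solve([[Gamma[i][j] * dt for j in range(n)] for i in range(n)], s)
```

First-kind equations are ill-conditioned, so in practice the grid spacing (and possibly regularization) needs care; this sketch only illustrates the mechanics of the discretization.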

[Figure 13-3: a) Statistic G generated by a filtering operation that is matched to the signal and noise environment: ν(t) drives a filter with h(t) = q(T−t), 0 ≤ t ≤ T, whose output is sampled at t = T to give G = ∫_0^T q(t) ν(t) dt. b) Statistical test description: G ≷_{H0}^{H1} ln Λ0 + ½ ∫_0^T q(t) s(t) dt.]


Special Case: Matched Filter for Signal in White Gaussian Noise

We consider the special case where the noise is white with correlation

Γ(τ) = σ² δ(τ) . (13-118)

The Fredholm integral equation is solved easily for this case; simply substitute (13-118) into

(13-117) and obtain

q(t) = s(t) / σ² , 0 ≤ t ≤ T . (13-119)

So, according to (13-115), the matched filter for the white Gaussian noise case is

[Figure 13-4: a) Signal s(t), b) matched filter impulse response h(t), and c) filter output for the case σ² = 1; the output peaks at value T/3 at the sampling time t = T, producing the decision statistic G.]


h(t) = s(T − t) / σ² , (13-120)

a folded, shifted, and scaled version of the original signal. Figure 13-4 illustrates a) the signal s(t), b) the

matched filter h(t), and c) the filter output, including the sample point t = T, for the case σ² = 1.
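As a numerical footnote, assume (as Figure 13-4 appears to sketch) the ramp s(t) = t/T and σ² = 1; these are assumed values, not stated in the text. Then the noise-free matched filter output sampled at t = T is the signal energy ∫_0^T s(t)² dt = T/3, the peak value shown in the figure:

```python
# Assumed signal: the unit ramp s(t) = t/T, with sigma^2 = 1 as in Figure 13-4.
T, n = 2.0, 10000
dt = T / n
t = [(i + 0.5) * dt for i in range(n)]  # midpoint grid on [0, T]
s = [ti / T for ti in t]
q = [si / 1.0 for si in s]              # q(t) = s(t)/sigma^2, Eq. (13-119)
# Noise-free check: with nu(t) = s(t), the sampled output G is the signal energy.
G_signal_only = sum(qi * si for qi, si in zip(q, s)) * dt
```

With noise present, G fluctuates about this value under H1 and about 0 under H0, which is what the threshold comparison in (13-114) exploits.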