Estimation Theory, Fredrik Rusek

Chapter 12 – Linear Bayesian Estimation

Summary of chapters 10 and 11

• Bayesian estimators inject prior information into the estimation

• Concepts from classical estimation break down:
– MVU
– efficient estimator
– unbiasedness

• The performance measure changes: variance → Bayesian MSE (Bmse)

• The optimal estimator for the Bmse is E(θ|x): the MMSE estimator

• The MMSE is difficult to use since:
– the posterior p(θ|x) is hard to find
– even if we can find p(θ|x), E(θ|x) is still difficult due to the integral

• Conjugate priors simplify finding p(θ|x): the posterior has the same distribution as the prior (with other parameters). Useful when the posterior acts as the prior in a sequential estimation process

• Risk functions other than the Bmse exist:
– MAP estimation is the solution to the hit-or-miss risk
– the conditional median is the solution to a linear (absolute-error) risk function

• Invariance does not hold for MAP

• Bayesian estimators can be used for deterministic parameters, but work well only for parameter values that are close to the prior mean

When an optimal Bayesian estimator is hard to find, we can resort to a linear estimator.

This is the same method as the BLUE, but we now make the following changes:
– remove the unbiasedness constraint
– change the cost function from variance to the Bmse

An optimal estimator within this class is termed the linear minimum mean square error (LMMSE) estimator.
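The slide equations did not survive extraction; as a standard reconstruction of the setup, the linear estimator and its cost are:

```latex
\hat{\theta} = \sum_{n=0}^{N-1} a_n x[n] + a_N,
\qquad
\mathrm{Bmse}(\hat{\theta}) = E\!\left[(\theta - \hat{\theta})^2\right],
```

where the expectation is over both x and θ, and the constant a_N absorbs the means since no unbiasedness constraint is imposed.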

Finding the LMMSE estimator

Write down the cost function, take differentials with respect to a_N, and plug the resulting a_N back into the cost function.
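Setting the derivative with respect to a_N to zero gives (a standard reconstruction, since the slide equations are missing):

```latex
\frac{\partial}{\partial a_N}\, E\!\left[\Big(\theta - \sum_{n=0}^{N-1} a_n x[n] - a_N\Big)^{2}\right] = 0
\;\;\Longrightarrow\;\;
a_N = E(\theta) - \sum_{n=0}^{N-1} a_n E(x[n]).
```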

Assemble the remaining terms into vector notation. Expanding the cost function generates four terms.

Observe: a is not random, so it can be moved outside the expectation operator.

Collect the results using vector notation.
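Collected in vector notation, the result is the familiar LMMSE form (reconstructed, with C_{θx} = E[(θ − E(θ))(x − E(x))^T] and C_{xx} the covariance matrix of x):

```latex
\hat{\theta} = E(\theta) + C_{\theta x}\, C_{xx}^{-1}\left(\mathbf{x} - E(\mathbf{x})\right).
```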

Computing the Bmse cost

The minimum Bmse can be written as a Schur complement.
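In the same notation, the minimum Bmse is the Schur complement of C_{xx} in the joint covariance matrix of (θ, x) (standard reconstruction):

```latex
\mathrm{Bmse}(\hat{\theta}) = C_{\theta\theta} - C_{\theta x}\, C_{xx}^{-1}\, C_{x\theta}.
```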

Connections

We have seen the expression for the Bmse before.

x and θ jointly Gaussian: the LMMSE estimator coincides with the MMSE estimator (LMMSE = MMSE), and its Bmse is the minimum Bmse.

x and θ not jointly Gaussian: the MMSE estimator achieves a Bmse at least as good as that of the LMMSE estimator.

The LMMSE yields the same Bmse as if the variables were jointly Gaussian.

Example 12.1

Bayesian options:
1. MMSE: not possible in closed form
2. MAP: possible, the truncated sample mean
3. LMMSE: doable

All means are zero.
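Assuming this is Kay's Example 12.1 (a DC level A in white noise with A uniform on [−A₀, A₀], so E(A) = 0 and σ_A² = A₀²/3; the slide's own equations are missing), the LMMSE works out to:

```latex
\hat{A} = \frac{\sigma_A^2}{\sigma_A^2 + \sigma^2/N}\,\bar{x},
\qquad
\bar{x} = \frac{1}{N}\sum_{n=0}^{N-1} x[n].
```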

Observations:
1. With no prior, the sample mean is the MVU estimator
2. The LMMSE is a tradeoff between the prior mean and the sample mean (the MVU estimator)
3. We did not use the fact that A is uniform; only its mean and variance enter
4. We do not need A and w to be independent, only uncorrelated
5. No integration is needed

Extension to vector parameter

We can work with each parameter individually. From before, we have the scalar solution; collect it in vector notation. The Bmse follows in the same way.
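For the vector parameter, the stacked result and its error covariance are (standard reconstruction):

```latex
\hat{\boldsymbol{\theta}} = E(\boldsymbol{\theta}) + C_{\theta x}\, C_{xx}^{-1}\left(\mathbf{x} - E(\mathbf{x})\right),
\qquad
M_{\hat{\theta}} = C_{\theta\theta} - C_{\theta x}\, C_{xx}^{-1}\, C_{x\theta},
```

with Bmse(θ̂_i) given by the i-th diagonal entry of M.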

Three properties:
1. Invariance holds for affine transformations
2. The LMMSE of a sum is the sum of the LMMSEs
3. The number of observations can be less than the number of parameters to estimate, a significant difference from linear classical estimation

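The slides specialize the result here; assuming the standard Bayesian linear model step (the equations did not survive extraction), with x = Hθ + w, where w is zero mean with covariance C_w and uncorrelated with θ, we have:

```latex
\hat{\boldsymbol{\theta}} = \boldsymbol{\mu}_\theta + C_\theta H^T\!\left(H C_\theta H^T + C_w\right)^{-1}\!\left(\mathbf{x} - H\boldsymbol{\mu}_\theta\right).
```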

Wiener filtering

Data: x[0], x[1], x[2], … are WSS with zero mean.
The covariance matrix equals the autocorrelation matrix (Toeplitz structure).
Parameter to be estimated: s[n], zero mean, with x[n] = s[n] + w[n].
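For zero-mean WSS data the covariance entries depend only on the lag; for example, with three samples:

```latex
R_{xx} =
\begin{bmatrix}
r_{xx}[0] & r_{xx}[1] & r_{xx}[2] \\
r_{xx}[1] & r_{xx}[0] & r_{xx}[1] \\
r_{xx}[2] & r_{xx}[1] & r_{xx}[0]
\end{bmatrix},
```

a symmetric Toeplitz (autocorrelation) matrix.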

Four cases:
– Filtering: find s[n] given x[0], …, x[n]
– Smoothing: find s[n] given x[0], …, x[N]
– Prediction: find s[n-1+l] given x[0], …, x[n-1]
– Interpolation: find x[n] given x[0], …, x[n-1], x[n+1], …

Filtering problem

Signal and noise are uncorrelated. Since the means are zero, the required covariances reduce to autocorrelation sequences, which gives the LMMSE weights in closed form.
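With uncorrelated zero-mean signal and noise, C_xx = R_ss + R_ww, and the cross-covariance vector contains the signal autocorrelations. The filtering estimate is then (reconstructed from the standard development):

```latex
\hat{s}[n] = \mathbf{a}^T \mathbf{x},
\qquad
\left(R_{ss} + R_{ww}\right)\mathbf{a} = \mathbf{r}'_{ss},
\qquad
\mathbf{r}'_{ss} = \left[r_{ss}[n], \ldots, r_{ss}[0]\right]^T.
```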

Why is it a filtering?

(Figure: the observations x[n], x[n-1], x[n-2], x[n-3], x[n-4], used to find s[n].)

We estimate s[n] as a weighted sum, ŝ[n] = a₀x[n] + a₁x[n-1] + a₂x[n-2] + …, so the weights {a_k} can be seen as an FIR filter. However, at the next time instant n+1 the weights {a_k} are not the same (edge effect). To estimate s[n], we thus filter the recent observations with a filter that depends on n: the filter is time-variant. (The weight vector a is computed for a given n, which is not shown explicitly in the notation.)

Recalling the filtering solution, we can write the estimate as a convolution. Observe: the filter h is the weight vector a, but flipped upside-down (time-reversed).

Wiener-Hopf equations

These equations can be solved recursively by the Levinson algorithm. Observe: the matrix is Toeplitz, but it cannot be approximated as circulant as n grows, so Szegö theory does not apply. As n grows, the filter converges to a stationary solution.
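In filter notation h[k] = a_{n-k}, the normal equations become the Wiener-Hopf filtering equations (a standard reconstruction, since the slide formulas are missing):

```latex
\sum_{k=0}^{n} h[k]\, r_{xx}[l-k] = r_{ss}[l],
\qquad l = 0, 1, \ldots, n .
```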

To find h[n], we can apply spectral factorization (the same method used to find a minimum-phase version of a filter).
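As a minimal numerical sketch (my own illustration, not from the slides), the finite-length Wiener filter can also be computed by solving the Toeplitz system (R_ss + R_ww)a = r'_ss directly; the signal model r_ss[k] = 0.9^|k| with white noise is an assumption made purely for demonstration:

```python
import numpy as np

# Assumed example model (illustration only, not from the slides):
# signal autocorrelation r_ss[k] = 0.9**|k|, white noise of variance sigma2.
n = 4                              # observations x[0], ..., x[n]
sigma2 = 0.5
k = np.arange(n + 1)
r_ss = 0.9 ** k                    # r_ss[0], ..., r_ss[n]

# Toeplitz autocorrelation matrices (WSS => entries depend on the lag only)
R_ss = 0.9 ** np.abs(k[:, None] - k[None, :])
R_xx = R_ss + sigma2 * np.eye(n + 1)   # R_ss + R_ww for white noise

# Wiener weights for estimating s[n]: (R_ss + R_ww) a = r'_ss,
# where r'_ss = [r_ss[n], ..., r_ss[0]]^T (note the flip)
r_flip = r_ss[::-1]
a = np.linalg.solve(R_xx, r_flip)

# Minimum Bmse in Schur-complement form: r_ss[0] - r'_ss^T a
bmse = r_ss[0] - r_flip @ a
print(a, bmse)
```

The most recent sample x[n] receives the largest weight, and the Bmse lies strictly between 0 and r_ss[0].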

Wiener smoothing

Now consider asymptotic Wiener smoothing. (Figure: observations …, x[n-1], x[n], x[n+1], x[n+2], ….) We can still express the estimation of s[n] as a filtering of {x[k]}, but the filter is not causal.

(Figure: comparison of the filtering and smoothing solutions as functions of the lag k; the smoothing filter is two-sided.) Note: there is a typo in (12.61) in the book, concerning the index l.

There is no edge effect in the smoothing setup! It can be solved by approximating R_xx as a circulant matrix (Szegö theory).
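In the asymptotic (infinite-length) case, the circulant approximation diagonalizes R_xx via the DFT and yields the classical frequency-domain Wiener smoother (standard result, reconstructed):

```latex
H(f) = \frac{P_{ss}(f)}{P_{ss}(f) + P_{ww}(f)},
```

where P_ss(f) and P_ww(f) are the power spectral densities of the signal and the noise.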

Wiener prediction

The filtering equations "predict" s[n] with x[n] among the observations. In prediction, x[n] is not available, so there is no h[0] coefficient; we are also predicting l steps into the future. This yields the Wiener-Hopf prediction equations. For l = 1 we obtain the Yule-Walker equations, solved by the Levinson recursion or by spectral factorization (not Szegö theory).
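For reference, the one-step (l = 1) case reduces to the Yule-Walker equations for the predictor coefficients (standard form, reconstructed):

```latex
\sum_{k=1}^{n} h[k]\, r_{xx}[m-k] = r_{xx}[m],
\qquad m = 1, \ldots, n .
```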