prior probability n - ws.binghamton.edu personal page/EE522_files/EEC…

Page 1

A common model for the number of times something occurs is the Poisson distribution.

A radioactive source emits n radioactive particles, where n is random. Our job is to estimate how many particles were emitted.

    Source --P[n]--> n emitted particles

For Bayesian estimation we need a prior probability for n. Suppose we've determined it is Poisson with known parameter λ:

    P[n] = (λ^n / n!) e^(−λ),  n ≥ 0

Standard results for the Poisson:

    E{n} = λ        var{n} = λ

Page 2

But suppose we have an imperfect Geiger counter: it misses some particles. Let p be the probability of detecting a particle. So we only count k ≤ n particles, with conditional probability P[k|n].

    Source --P[n]--> n emitted particles --P[k|n]--> Imperfect Counter --> k counted particles

Here n plays the role of the parameter and k the data. This P[k|n] is the classic binomial distribution, the standard result for "k successes out of n tries with probability of success p":

    P[k|n] = C(n, k) p^k (1 − p)^(n−k),  0 ≤ k ≤ n

Standard results for the binomial:

    E{k|n} = np        var{k|n} = np(1 − p)
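The binomial results can be checked the same way. A minimal sketch, assuming example values n = 10 and p = 0.7:

```python
import math

# Sketch: check E{k|n} = n*p and var{k|n} = n*p*(1-p) by summing over the
# binomial pmf. n = 10 and p = 0.7 are assumed example values.

def binom_pmf(k, n, p):
    """P[k|n] = C(n,k) p^k (1-p)^(n-k), 0 <= k <= n."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.7
mean = sum(k * binom_pmf(k, n, p) for k in range(n + 1))
var = sum((k - mean)**2 * binom_pmf(k, n, p) for k in range(n + 1))
print(round(mean, 6), round(var, 6))  # n*p = 7.0, n*p*(1-p) = 2.1
```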

Page 3

We could just accept the count k... or devise a Bayesian estimator that maps the observed k into an estimate n̂.

    Source --P[n]--> n emitted particles --P[k|n]--> Counter --> k counted particles --> Bayes Est. θ̂(k) --> n̂ estimated particles

Regardless of which Bayesian estimation form we use, we need the posterior probability for n. By Bayes' rule (parameter n, data k):

    P[n|k] = P[k, n] / P[k] = P[k|n] P[n] / P[k]

So we need to determine all this stuff!

Page 4

Need to analyze:

    P[n|k] = P[k, n] / P[k] = P[k|n] P[n] / P[k]

First, the numerator: we have both of the parts, so plug in:

    P[k, n] = P[k|n] P[n] = C(n, k) p^k (1 − p)^(n−k) (λ^n / n!) e^(−λ),  0 ≤ k ≤ n

Second, the denominator: we have the joint probability and need to sum it to get the marginal on k:

    P[k] = Σ_{n=k}^∞ P[k, n] = Σ_{n=k}^∞ C(n, k) p^k (1 − p)^(n−k) (λ^n / n!) e^(−λ)

How? Thinking: get it into a power series! Expand C(n, k) = n! / (k! (n − k)!) and write λ^n = λ^k λ^(n−k):

    P[k] = Σ_{n=k}^∞ [n! / (k! (n − k)!)] p^k (1 − p)^(n−k) (λ^n / n!) e^(−λ)
         = [e^(−λ) (λp)^k / k!] Σ_{n=k}^∞ [λ(1 − p)]^(n−k) / (n − k)!

Use e^x = 1/0! + x/1! + x²/2! + x³/3! + ⋯, so the remaining sum is e^(λ(1−p)):

    P[k] = [e^(−λ) (λp)^k / k!] e^(λ(1−p)) = [(λp)^k / k!] e^(−λp)

Page 5

So now we have what we need to form P[n|k]:

    P[n|k] = P[k, n] / P[k]
           = [C(n, k) p^k (1 − p)^(n−k) (λ^n / n!) e^(−λ)] / [((λp)^k / k!) e^(−λp)]

    P[n|k] = [λ(1 − p)]^(n−k) / (n − k)! · e^(−λ(1−p)),  n ≥ k ≥ 0

... which gives us the posterior distribution we need!

What is it? Compare to the regular Poisson:

    P[n] = (λ^n / n!) e^(−λ),  n ≥ 0

It looks like a k-shifted Poisson with parameter λ(1 − p)!

So we can use the standard Poisson results to get the conditional mean and variance:

    E{n|k} = k + λ(1 − p)        var{n|k} = E{(n − E{n|k})² | k} = λ(1 − p)

The k-shift of the probability function shifts the mean by k; a shift of the probability function has no effect on the variance.

Page 6

So now if we use quadratic Bayes risk, the MMSE estimate is the conditional mean:

    n̂ = E{n|k} = k + λ(1 − p)

    Source --P[n]--> n emitted particles --P[k|n]--> Counter --> k counted particles --> Bayes Est.: add λ(1 − p) --> n̂ estimated particles

Recall P[n] = (λ^n / n!) e^(−λ), n ≥ 0, with E{n} = λ the expected number of particles; (1 − p) is the probability of missing a particle, so λ(1 − p) is the expected number of missed particles.

Makes sense: add the expected number of missed particles to the count!

Page 7

What is the performance of this estimator? We have general results that say the MMSE estimator:

• Is unbiased: E_{n,k}{n̂} = E_n{n} = λ
• Has variance equal to the Bmse

"Decomposing joint expected values":

    Bmse = E_{n,k}{(n̂ − n)²} = E_k{ E{(n − E{n|k})² | k} }
         = E_k{ var{n|k} }            (the variance of P[n|k] is λ(1 − p))
         = E_k{ λ(1 − p) } = λ(1 − p)

Thus, the performance of this estimator is characterized by:

    E{n̂ − n} = 0        var{n̂ − n} = Bmse = λ(1 − p)