
ECE 645 Spring 2021 Problem Set 2

Due Monday February 15, 2021

1. [Kay Detection Book P3.14] Consider the hypothesis testing problem

$$
X \sim \begin{cases}
\mathcal{N}\!\left( \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix} \right) & \text{under } H_0 \\[2ex]
\mathcal{N}\!\left( \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix} \right) & \text{under } H_1
\end{cases}
$$

where X = [x[0], x[1]]^T is observed. Find the NP test statistic (do not evaluate the threshold) and explain what happens if ρ = 0.
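For intuition (not part of the assignment), the following minimal Monte Carlo sketch exercises the standard equal-covariance result that the NP statistic is linear in x, T(x) = (μ1 − μ0)^T Σ⁻¹ x. The value ρ = 0.5 and the sample sizes are arbitrary choices for illustration; rerunning with rho = 0.0 and inspecting the weight vector w shows how the statistic simplifies.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.5                                  # illustrative value; any |rho| < 1 works
Sigma = np.array([[1.0, rho], [rho, 1.0]])
mu0, mu1 = np.array([0.0, 1.0]), np.array([1.0, 1.0])

# Equal-covariance Gaussian LRT: the NP statistic is linear,
# T(x) = (mu1 - mu0)^T Sigma^{-1} x.
w = np.linalg.solve(Sigma, mu1 - mu0)

x0 = rng.multivariate_normal(mu0, Sigma, size=100_000)
x1 = rng.multivariate_normal(mu1, Sigma, size=100_000)
T0, T1 = x0 @ w, x1 @ w

tau = np.quantile(T0, 0.99)                # empirical threshold for P_FA ~ 0.01
print("P_D at P_FA ~ 0.01:", np.mean(T1 > tau))
```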

2. [Kay Detection Book P3.15] Consider the detection of a signal s[n] embedded in white Gaussian noise with variance σ² based on the observed samples x[n] for n = 0, 1, . . . , 2N − 1. The signal is given by

$$
s[n] = \begin{cases} A & n = 0, 1, \ldots, N-1 \\ 0 & n = N, N+1, \ldots, 2N-1 \end{cases}
$$

under H0 and by

$$
s[n] = \begin{cases} A & n = 0, 1, \ldots, N-1 \\ 2A & n = N, N+1, \ldots, 2N-1 \end{cases}
$$

under H1. Assume that A > 0 and find the NP detector as well as its detection performance. Explain the operation of the detector.
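A quick simulation sketch of the detector this problem leads to: since the two candidate signals differ only on the second half of the record, correlating with the difference signal s1 − s0 reduces to summing x[n] over n = N, . . . , 2N − 1. The parameter values below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
A, N, sigma, trials = 1.0, 16, 1.0, 100_000
n = np.arange(2 * N)
s0 = np.where(n < N, A, 0.0)     # signal under H0
s1 = np.where(n < N, A, 2 * A)   # signal under H1

# s1 - s0 is zero on the first half and 2A on the second half, so the
# correlator statistic reduces to T(x) = sum of x[n] over n = N..2N-1.
x0 = s0 + sigma * rng.standard_normal((trials, 2 * N))
x1 = s1 + sigma * rng.standard_normal((trials, 2 * N))
T0, T1 = x0[:, N:].sum(axis=1), x1[:, N:].sum(axis=1)

tau = np.quantile(T0, 0.99)      # empirical threshold for P_FA ~ 0.01
print("empirical P_D:", np.mean(T1 > tau))
```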

3. [Kay Detection Book P3.17] Assume that we wish to distinguish between the hypotheses H0 : x ∼ N(0, σ²I) and H1 : x ∼ N(µ, σ²I) based on x = [x[0], x[1]]^T. If π0 = π1 = 0.5, find the decision regions that minimize probability of error. Show that the decision region boundary is a line that is the perpendicular bisector of the line segment joining 0 and µ.
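As a numerical sanity check on the geometry (a sketch, with µ and the test points chosen arbitrarily): with equal priors and equal spherical covariances, the minimum-probability-of-error rule is the nearest-mean rule, and its boundary is exactly the perpendicular bisector claimed.

```python
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([2.0, 1.0])                  # assumed mean, for illustration

# Equal priors, equal spherical covariances: decide H1 iff
# ||x - mu|| < ||x||, equivalently mu^T x > ||mu||^2 / 2.  The boundary
# mu^T x = ||mu||^2 / 2 is the perpendicular bisector of the segment 0 -> mu.
x = rng.standard_normal((1000, 2)) * 3.0   # arbitrary test points
via_halfspace = x @ mu > mu @ mu / 2
via_nearest = np.linalg.norm(x - mu, axis=1) < np.linalg.norm(x, axis=1)
assert np.array_equal(via_halfspace, via_nearest)
print("half-space and nearest-mean rules agree on all test points")
```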

4. Say X ∼ C(a, b) if it has the Cauchy density

$$
f(x) = \frac{b}{\pi} \, \frac{1}{b^2 + (x - a)^2}.
$$

(a) Let Xi (i = 1, 2) be independently distributed according to the Cauchy densities C(ai, bi). Prove that X1 + X2 is distributed as C(a1 + a2, b1 + b2).

(b) Prove that if X1, . . . , Xn are iid and distributed as C(a, b), then the distribution of the sample mean is again C(a, b). What does this say about the sample mean as an estimator in the Cauchy case?
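A short simulation hints at the punchline of part (b): if the sample mean of iid C(a, b) draws is again C(a, b), its interquartile range should stay near 2b no matter how large n gets, instead of shrinking like 1/√n. The values of a, b, and the repetition count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
a, b, reps = 1.0, 2.0, 1_000   # arbitrary illustration values

# If part (b) holds, the sample mean of n iid C(a, b) draws is C(a, b)
# for every n, so its interquartile range stays near 2b (here 4.0).
for n in (1, 100, 10_000):
    means = a + b * rng.standard_cauchy((reps, n)).mean(axis=1)
    q25, q75 = np.percentile(means, [25, 75])
    print(f"n = {n:6d}   IQR of sample mean ~ {q75 - q25:.2f}")
```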


5. Suppose that Y is a random variable which, under hypothesis H0, has pdf

$$
f_0(y) = \begin{cases} (2/3)(y + 1) & 0 \le y \le 1 \\ 0 & \text{otherwise} \end{cases}
$$

and, under hypothesis H1, has pdf

$$
f_1(y) = \begin{cases} 1 & 0 \le y \le 1 \\ 0 & \text{otherwise.} \end{cases}
$$

(a) Find the Bayes rule and minimum Bayes risk for testing H0 versus H1 with uniform costs and equal priors.

(b) Find the minimax rule and minimax risk for uniform costs.
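A numeric cross-check for part (a), a sketch rather than the derivation itself: with uniform costs and equal priors, the Bayes rule compares f1 and f0 pointwise, and the Bayes risk is the average of the two conditional error probabilities.

```python
import numpy as np

# With uniform costs and equal priors, the Bayes rule decides H1 where
# f1(y) > f0(y); here that means 1 > (2/3)(y + 1), i.e. y < 1/2.
y = np.linspace(0.0, 1.0, 1_000_001)
dy = y[1] - y[0]
f0 = (2.0 / 3.0) * (y + 1.0)
f1 = np.ones_like(y)

decide_h1 = f1 > f0
p_fa = np.sum(f0[decide_h1]) * dy      # P(decide H1 | H0)
p_miss = np.sum(f1[~decide_h1]) * dy   # P(decide H0 | H1)
print("Bayes risk ~", 0.5 * (p_fa + p_miss))  # compare with your closed form
```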

6. Consider a pattern classification problem where, in addition to deciding among a certain number of classes, we may decide to reject the pattern if it is not recognizable.

Let observations be denoted by the random vector X. In the case of two possible pattern classes there are two hypotheses:

$$
H_0 : X \sim f_0 \quad \text{versus} \quad H_1 : X \sim f_1
$$

with prior probabilities π0 + π1 = 1. Instead of making one of two decisions we now allow three decisions

d0 = accept H0, d1 = accept H1, or d2 = reject.

If we wish to solve this problem in a Bayesian framework we will need costs associated with the three possible decisions and the two hypotheses. Denote these by cij with the interpretation that this is the cost of making decision di when Hj is true, and assume that

$$
c_{ij} = \begin{cases} 0 & i = j,\; i, j = 0, 1 \\ \lambda_m & i \neq j,\; i, j = 0, 1 \\ \lambda_r & i = 2. \end{cases}
$$

(a) Since there are three possible decisions, a decision rule δ will amount to a partition of the observation space into three regions, each corresponding to a different decision.¹ Find the two conditional risks

$$
R_j(\delta) \stackrel{\text{def}}{=} E\{\, \text{Cost} \mid H_j \text{ is true} \,\}
$$

for j = 0, 1.

¹ Note that we do not need to consider randomized decision rules for this problem.


(b) Derive the Bayes decision rule for this problem by specifying the regions of the observation space which should correspond to each of the three decisions.

(c) Simplify the decision rule from part (b) by writing it in terms of the posterior probabilities of the hypotheses given the observations, i.e., in terms of

$$
\pi_0(x) \stackrel{\text{def}}{=} P(H_0 \text{ is true} \mid X = x), \qquad \pi_1(x) \stackrel{\text{def}}{=} P(H_1 \text{ is true} \mid X = x),
$$

the misclassification cost λm, and the rejection cost λr.

(d) Describe the behavior of the test in the two cases: 1) where the rejection cost λr is large, and 2) where it is small, relative to the misclassification cost λm.
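To make part (c) concrete, here is an illustrative sketch of a reject-option rule of the form that analysis yields: accept the larger posterior unless λm(1 − max posterior) exceeds λr, in which case reject. The Gaussian class densities, priors, and cost values are assumptions chosen purely for the demo, not part of the problem.

```python
import numpy as np
from scipy.stats import norm

lam_m, lam_r = 1.0, 0.2          # assumed costs, for illustration
pi0 = pi1 = 0.5
f0 = norm(loc=-1.0, scale=1.0).pdf   # assumed class densities
f1 = norm(loc=+1.0, scale=1.0).pdf

def decide(x):
    post0 = pi0 * f0(x) / (pi0 * f0(x) + pi1 * f1(x))
    post1 = 1.0 - post0
    # Reject when even the better of the two accept decisions costs more
    # (in expectation) than the flat rejection cost lambda_r.
    if max(post0, post1) < 1.0 - lam_r / lam_m:
        return "reject"
    return "accept H0" if post0 > post1 else "accept H1"

for x in (-2.0, 0.0, 2.0):
    print(x, decide(x))          # ambiguous x near 0 gets rejected
```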

7. Let {Yi : 1 ≤ i ≤ M} be i.i.d. with distribution function FY and density fY, and let {Zi : 1 ≤ i ≤ M} be the order statistics of the sample. That is,

Z1 = smallest of Y1, Y2, . . . , YM
Z2 = second smallest of Y1, Y2, . . . , YM
⋮
Zj = j-th smallest of Y1, Y2, . . . , YM
⋮
ZM = largest of Y1, Y2, . . . , YM

Prove the following formulas:

$$
F_{Z_r}(z_r) = \sum_{j=r}^{M} \binom{M}{j} \left[ F_Y(z_r) \right]^j \left[ 1 - F_Y(z_r) \right]^{M-j}
$$

$$
f_{Z_r}(z_r) = \frac{M!}{(r-1)!\,(M-r)!} \left[ F_Y(z_r) \right]^{r-1} \left[ 1 - F_Y(z_r) \right]^{M-r} f_Y(z_r)
$$

$$
f_{Z_r Z_s}(z_r, z_s) = \frac{M!}{(r-1)!\,(s-r-1)!\,(M-s)!} \left[ F_Y(z_r) \right]^{r-1} \left[ F_Y(z_s) - F_Y(z_r) \right]^{s-r-1} \left[ 1 - F_Y(z_s) \right]^{M-s} f_Y(z_r)\, f_Y(z_s), \qquad 1 \le r < s \le M.
$$
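The first formula can be spot-checked by Monte Carlo before proving it. The sketch below assumes Y ∼ Uniform(0, 1), so FY(z) = z; the values of M, r, and z are arbitrary.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(4)
M, r, z = 7, 3, 0.4   # arbitrary sample size, rank, and evaluation point

# With Y ~ Uniform(0, 1), F_Y(z) = z, so the first formula becomes
# F_{Z_r}(z) = sum_{j=r}^{M} C(M, j) z^j (1 - z)^{M - j}.
formula = sum(comb(M, j) * z**j * (1 - z) ** (M - j) for j in range(r, M + 1))

Y = rng.random((200_000, M))
Z = np.sort(Y, axis=1)                 # order statistics, row by row
empirical = np.mean(Z[:, r - 1] <= z)  # Z_r is column r-1 after sorting
print(f"formula: {formula:.4f}   Monte Carlo: {empirical:.4f}")
```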

8. Consider the hypothesis pair

$$
H_0 : Y = N \qquad H_1 : Y = N + S
$$

where N and S are independent random variables each having pdf

$$
f(x) = \begin{cases} e^{-x} & x \ge 0 \\ 0 & x < 0. \end{cases}
$$


(a) Find the likelihood ratio between H0 and H1.

(b) Find the threshold and detection probability for α-level Neyman–Pearson testing of H0 vs. H1.
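For parts (a) and (b), a simulation sanity check (the level α and trial count are arbitrary): under H1, Y is the sum of two iid Exp(1) variables, the likelihood ratio turns out to be increasing in y, so the NP test thresholds y directly, and the empirical detection probability can be compared against its closed form.

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, trials = 0.1, 1_000_000   # arbitrary level and trial count

# Under H0, Y ~ Exp(1), so tau = -ln(alpha) gives P(Y > tau | H0) = alpha.
# Under H1, Y = N + S is a sum of two iid Exp(1) variables (Gamma(2, 1)),
# for which P(Y > tau) = (1 + tau) e^{-tau}.
tau = -np.log(alpha)
y1 = rng.exponential(size=trials) + rng.exponential(size=trials)
print("empirical P_D :", np.mean(y1 > tau))
print("closed form   :", (1 + tau) * np.exp(-tau))
```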

Now consider the hypothesis pair

$$
\begin{aligned}
H_0 &: Y_k = N_k, & k = 1, 2, \ldots, n \\
H_1 &: Y_k = N_k + S, & k = 1, 2, \ldots, n
\end{aligned}
$$

where n > 1 and N1, . . . , Nn and S are independent random variables each having the pdf f(x) above.

(c) Find the likelihood ratio between H0 and H1.

(d) Find the threshold and detection probability for α–level Neyman–Pearsontesting of H0 vs. H1.
