HW3 Solutions — math.unm.edu/~james/STAT553/hw3_solb.pdf · 2018-03-08

HW3 Solutions

I have hw3 on a different computer, so I'm just going to type solutions without restating the original problems.

1. Here X is exponential with mean 1 and Y = σX + µ. Then

F_Y(y) = P(Y ≤ y)
       = P(σX + µ ≤ y)
       = P(σX ≤ y − µ)
       = P(X ≤ (y − µ)/σ)
       = F_X((y − µ)/σ)

⇒ f_Y(y) = f_X((y − µ)/σ) · (d/dy)[(y − µ)/σ]
⇒ f_Y(y) = (1/σ) f_X((y − µ)/σ)
⇒ f_Y(y) = (1/σ) e^(−(y−µ)/σ) I(y > µ)

To see why the indicator is necessary, note that if X = 0 and Y = σX + µ, then the minimum value for Y is σ · 0 + µ = µ. (I don't care whether you use y > µ or y ≥ µ.)
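A quick simulation check of the transformed density, using the arbitrary values σ = 2 and µ = 3 (any positive σ would do): the support should start at µ, and the cdf should be 1 − e^(−(y−µ)/σ).

```r
# Sanity check of f_Y with arbitrary sigma = 2, mu = 3
set.seed(1)
sigma <- 2; mu <- 3
y <- sigma * rexp(100000) + mu   # Y = sigma*X + mu, X ~ exponential(mean 1)
min(y)                           # just above mu = 3
mean(y <= 4)                     # empirical F_Y(4)
1 - exp(-(4 - mu)/sigma)         # theoretical F_Y(4) = 1 - exp(-1/2), about 0.393
```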

2. Here X1, X2, . . . are i.i.d. uniform(0,1) and we want to show that the minimum converges in probability to 0. First consider the distribution of the minimum. To get this, we have

F_X(1)(x) = P(X_(1) ≤ x)
          = 1 − P(X_(1) > x)
          = 1 − P(X_1 > x, . . . , X_n > x)
          = 1 − [P(X_1 > x)]^n
          = 1 − [1 − F_X(x)]^n
          = 1 − (1 − x)^n,   0 < x < 1

We actually need the cdf for this problem rather than the density, so I won't bother to differentiate. To show convergence in probability, we'll let Yn = X_(1) when the sample size is n in order to indicate the dependence on n. It must be shown that for any ε > 0,

lim_{n→∞} P(|Yn − 0| ≤ ε) = 1

Take any ε > 0 (actually, just consider ε ∈ (0, 1); if ε ≥ 1, then the probability is 1 for all n). We have

P(|Yn − 0| ≤ ε) = P(Yn ≤ ε)
               = F_Yn(ε)
               = 1 − (1 − ε)^n
               → 1 as n → ∞

The limit is true because (1 − ε) ∈ (0, 1), and raising this to increasing powers in n makes it approach 0.
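The convergence can be checked empirically; with the arbitrary choice ε = 0.05, the empirical probability P(X_(1) ≤ ε) should track 1 − (1 − ε)^n and approach 1 as n grows.

```r
# Empirical check that P(X_(1) <= eps) -> 1 as n grows, for eps = 0.05
set.seed(1)
eps <- 0.05
for (n in c(10, 50, 200)) {
  mins <- replicate(2000, min(runif(n)))   # 2000 minima at sample size n
  cat("n =", n,
      " empirical:", mean(mins <= eps),
      " exact 1-(1-eps)^n:", 1 - (1 - eps)^n, "\n")
}
```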

3. It will be useful to have the first and second derivatives of g(x) = 1/(1 + x).

g′(x) = (d/dx)(1 + x)^(−1) = −(1 + x)^(−2)

g″(x) = (d/dx)[−(1 + x)^(−2)] = 2(1 + x)^(−3)

The first-order approximation generally is

g(µ)

The second-order approximation is

g(µ) + (1/2) g″(µ) Var(X)

(a)

first order:  1/(1 + 1/2) = 1/(3/2) = 2/3

second order: 2/3 + (1/2) · 2(1 + µ)^(−3)/12 = 2/3 + 1/[12(3/2)^3] = 2/3 + 2/81 = 56/81

(b)

first order:  1/(1 + θ)

second order: 1/(1 + θ) + (1/2) · 2(1 + θ)^(−3) θ²
            = (1 + θ)²/(1 + θ)³ + θ²/(1 + θ)³
            = (1 + 2θ + 2θ²)/(1 + θ)³
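Part (b) can be checked by simulation at a particular value; taking θ = 1 (an arbitrary choice), the first-order value is 1/2 and the second-order value is 5/8, with the simulated mean falling between them.

```r
# Part (b) at theta = 1: X ~ exponential with mean theta
set.seed(1)
theta <- 1
x <- rexp(100000, rate = 1/theta)
mean(1/(1 + x))                            # simulated, roughly 0.596
1/(1 + theta)                              # first-order:  0.5
(1 + 2*theta + 2*theta^2)/(1 + theta)^3    # second-order: 0.625
```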

2

Page 3: HW3 Solutionsmath.unm.edu/~james/STAT553/hw3_solb.pdf ·  · 2018-03-08HW3 Solutions I have hw3 on a di erent computer, so I’m just going to type solutions without restating the

(c) This is sort of a trick question, I guess. The expected value doesn't exist. The reason is that the denominator, 1 + X, is a normal random variable, and the reciprocal of a normal random variable has heavy tails, much like a Cauchy distribution, and no mean. Any time you have a ratio with a random number in the denominator, you are likely to have problems if the density of the denominator is strictly positive at 0. We can still see what the Taylor series "approximation" would give, but it is misleading in this case. In practice, if µ is far away from 0 (relative to σ), then when simulating 1/(1 + X) you might not notice any problems, because you might not simulate anything close to a division by 0. The longer you run the simulation, however, the more likely you are to notice a problem.

first order:  1/(1 + µ)

second order: 1/(1 + µ) + (1 + µ)^(−3) σ²
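A short simulation illustrates the problem in (c). With the arbitrary choice X ~ N(0, 1), occasional draws near X = −1 make 1/(1 + X) blow up, so sample means never settle down:

```r
# Draws near X = -1 produce enormous values of 1/(1 + X)
set.seed(1)
x <- rnorm(100000)
y <- 1/(1 + x)
max(abs(y))                                  # occasionally enormous
c(mean(y[1:50000]), mean(y[50001:100000]))   # halves of the run can disagree badly
```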

(d) Here µ = α/(α + β) and Var(X) = αβ/[(α + β)²(α + β + 1)], the mean and variance of a beta(α, β) random variable.

first order:  [1 + α/(α + β)]^(−1) = [(α + β + α)/(α + β)]^(−1) = (α + β)/(2α + β)

second order: (α + β)/(2α + β) + [(α + β)/(2α + β)]³ · αβ/[(α + β)²(α + β + 1)]
            = (α + β)/(2α + β) + (α + β)αβ/[(2α + β)³(α + β + 1)]

4. For the variance, we use

Var(g(X)) ≈ Var(X)[g′(µ)]²

(a)

Var[1/(1 + X)] ≈ (1/12)[−(1 + µ)^(−2)]²
             = (1/12)[−(3/2)^(−2)]²
             = (1/12)(4/9)²
             = (1/12)(16/81)
             = 4/243

(b)

Var[1/(1 + X)] ≈ θ²[−(1 + θ)^(−2)]²
             = θ²/(1 + θ)⁴

As examples, to see how good the variance approximation is:

> x <- runif(10000)
> y <- 1/(1+x)
> var(y)
[1] 0.01940631
> 4/243
[1] 0.01646091
> x <- rexp(10000)
> y <- 1/(1+x)
> var(y)
[1] 0.04776825
> 1/16
[1] 0.0625

Hmm, not great. For the exponential case, the simulated variance is about 76% of the delta method estimate. For the uniform case, the simulated variance is about 18% higher than the delta method estimate. Underestimating could lead to confidence intervals that are too narrow, while overestimating could lead to confidence intervals that are too wide.

5. It's convenient to make the first- and second-order approximations functions of b:

> options(digits=3) # to not show so many digits
> first <- function(b) {
+   value <- 1/(1+b/2)
+   return(value)
+ }
> first(1:5)
[1] 0.667 0.500 0.400 0.333 0.286
> second <- function(b) {
+   value <- first(b) + (1/2)*2*(1+b/2)^(-3)*b^2/12
+   return(value)
+ }
> second(1:5)
[1] 0.691 0.542 0.448 0.383 0.334

b              1     2     3     4     5
simulation     0.693 0.549 0.463 0.402 0.359
first-order    0.667 0.500 0.400 0.333 0.286
second-order   0.691 0.542 0.448 0.383 0.334

It looks like the approximations are getting worse with increasing b, especially proportionally. For example, the ratio of simulated to first-order approximation is 1.04 for b = 1 but 1.26 for b = 5. The same comparison for the second-order approximation gives 1.002 for b = 1 and 1.074 for b = 5. The second-order approximation is always closer to the simulated estimate than the first-order approximation.
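The "simulation" row can be reproduced (approximately) as follows; here X ~ uniform(0, b), for which the exact value of E[1/(1 + X)] is log(1 + b)/b, so the simulated means should be close to that.

```r
# Simulated E[1/(1+X)] for X ~ uniform(0, b), b = 1..5, vs the exact value
set.seed(1)
sim <- sapply(1:5, function(b) mean(1/(1 + runif(100000, 0, b))))
round(sim, 3)
round(log(1 + 1:5)/(1:5), 3)  # exact: 0.693 0.549 0.462 0.402 0.358
```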

6. For the third-order Taylor series, we use

g(x) ≈ g(µ) + g′(µ)(x − µ) + g″(µ)(x − µ)²/2 + g‴(µ)(x − µ)³/6

Taking expected values of both sides (the first-order term vanishes since E[X − µ] = 0), we get

E[g(X)] ≈ g(µ) + (1/2) g″(µ) Var(X) + (1/6) g‴(µ) E[(X − µ)³]

For the uniform case,

E[(X − µ)³] = ∫_0^1 (x − 1/2)³ dx = ∫_{−1/2}^{1/2} u³ du

where u = x − 1/2. The integral is 0 because it is an odd function over a symmetric interval with no convergence issues. Thus the third-order approximation is the same as the second-order approximation.
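A quick simulation confirms that the third central moment of a uniform(0,1) variable is 0 up to sampling noise:

```r
# Third central moment of uniform(0,1); should be near 0
set.seed(1)
x <- runif(100000)
mean((x - 1/2)^3)
```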
