
Hw 1

5.16 [Sol]

a. Since $X_i \sim N(i, i^2)$, $\frac{X_i - i}{i} \sim N(0, 1)$ and $\left(\frac{X_i - i}{i}\right)^2 \sim \chi^2_1$. Thus, $\sum_{i=1}^{3} \left(\frac{X_i - i}{i}\right)^2 \sim \chi^2_3$.

b. By the definition of the $t$-distribution, $\frac{X_1 - 1}{\sqrt{\frac{1}{2} \sum_{i=2}^{3} \left(\frac{X_i - i}{i}\right)^2}} \sim t_2$, because the $X_i$ are independent.

c. $\frac{(X_1 - 1)^2}{\frac{1}{2} \sum_{i=2}^{3} \left(\frac{X_i - i}{i}\right)^2} \sim F_{1,2}$ by the definition of the $F$-distribution.
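As a quick Monte Carlo check of parts (a) and (c) (a sketch, not part of the graded solution), assuming Python with numpy and scipy available:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
m = 200_000
i = np.array([1.0, 2.0, 3.0])
X = rng.normal(loc=i, scale=i, size=(m, 3))   # X_i ~ N(i, i^2)
Z2 = ((X - i) / i) ** 2                       # squared standardized terms

# (a) the sum of the three chi^2_1 terms should be chi^2_3
print(stats.kstest(Z2.sum(axis=1), "chi2", args=(3,)).pvalue)  # no systematic rejection

# (c) a chi^2_1 over an independent chi^2_2 divided by 2 should be F_{1,2}
F = Z2[:, 0] / (Z2[:, 1:].sum(axis=1) / 2)
print(stats.kstest(F, "f", args=(1, 2)).pvalue)                # no systematic rejection
```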

5.29 [Sol] Let the random variable $X_i$ be the weight of the $i$-th booklet, with mean 1 and standard deviation 0.05. By the Central Limit Theorem, the distribution of the sum of the 100 sampled weights, $\sum_{i=1}^{100} X_i$, can be approximated by a normal distribution with mean 100 and variance $100 \cdot 0.05^2$:
$$P\left(\sum_{i=1}^{100} X_i > 100.4\right) = P\left(\frac{\sum_{i=1}^{100} X_i - 100}{\sqrt{100 \cdot 0.05^2}} > \frac{100.4 - 100}{\sqrt{100 \cdot 0.05^2}}\right) = P(Z > 0.8) = 0.211.$$
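The final tail probability can be checked numerically; a one-line sketch assuming Python with scipy:

```python
from scipy import stats

z = (100.4 - 100) / (100 * 0.05**2) ** 0.5   # = 0.4 / 0.5 = 0.8
print(stats.norm.sf(z))                      # P(Z > 0.8) ≈ 0.2119
```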

5.30 [Sol] The mean and variance of $\bar{X}_1$ and $\bar{X}_2$ are $\mu$ and $\sigma^2/n$, respectively. By Chebyshev's inequality,
$$P\left(|\bar{X}_1 - \bar{X}_2| < k\sqrt{\operatorname{Var}(\bar{X}_1 - \bar{X}_2)}\right) \geq 1 - \frac{1}{k^2},$$
so
$$P\left(|\bar{X}_1 - \bar{X}_2| < k\sqrt{2\sigma^2/n}\right) \geq 1 - \frac{1}{k^2}.$$
In this case $k\sqrt{2/n} = 1/5$, so $k = \frac{\sqrt{n}}{5\sqrt{2}}$. Plugging $k$ into the right-hand side of the above inequality, the required sample size is obtained by solving
$$1 - \frac{1}{\left(\frac{\sqrt{n}}{5\sqrt{2}}\right)^2} \approx 0.99,$$
which gives $n \approx 5000$.
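The arithmetic behind $n \approx 5000$ can be verified directly (with $k^2 = n/50$, the bound is $1 - 50/n$); a small Python sketch using only the standard library:

```python
import math

def chebyshev_lower_bound(n: int) -> float:
    # Lower bound on P(|X1bar - X2bar| < sigma/5) given by Chebyshev
    k = math.sqrt(n) / (5 * math.sqrt(2))    # from k * sqrt(2/n) = 1/5
    return 1 - 1 / k**2

print(chebyshev_lower_bound(5000))           # exactly 0.99
```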

5.32 [Sol]

a. $X_i > 0$ for all $i$, and $X_n \to_p a$. $g(x) = \sqrt{x}$ is continuous for all $x > 0$. Thus $g(X_n) = \sqrt{X_n} \to_p g(a) = \sqrt{a}$.

b. $g(x) = a/x$ is continuous for all $x > 0$. Thus $g(X_n) = a/X_n \to_p g(a) = a/a = 1$.

c. Since $\sqrt{S_n^2} = S_n \to_p \sqrt{\sigma^2} = \sigma$, it follows that $\sigma/S_n \to_p \sigma/\sigma = 1$.
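A short simulation illustrates part (c), $\sigma/S_n \to_p 1$; a sketch assuming numpy, with $N(0, \sigma^2)$ data and $\sigma = 2$ chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0                                  # illustrative value
for n in (10, 100, 10_000):
    x = rng.normal(0.0, sigma, size=n)
    S = x.std(ddof=1)                        # sample standard deviation
    print(n, sigma / S)                      # approaches 1 as n grows
```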

5.36 [Sol]

a. $E(Y) = E(E(Y \mid N)) = E(2N) = 2E(N) = 2\theta$, and $\operatorname{Var}(Y) = E(\operatorname{Var}(Y \mid N)) + \operatorname{Var}(E(Y \mid N)) = E(4N) + \operatorname{Var}(2N) = 4\theta + 4\theta = 8\theta$.

b. Derive the MGF of $Y$:
$$M_Y(t) = E(e^{tY}) = E(E(e^{tY} \mid N)) = E(M_{Y \mid N}(t)) = \sum_{n=0}^{\infty} (1 - 2t)^{-n} \frac{e^{-\theta} \theta^n}{n!} = e^{-\theta} e^{\frac{\theta}{1 - 2t}}.$$
The MGF of $Z = \frac{Y - E(Y)}{\sqrt{\operatorname{Var}(Y)}}$ is
$$M_Z(t) = E\left(e^{\frac{Y - 2\theta}{\sqrt{8\theta}} t}\right) = e^{-\frac{2\theta}{\sqrt{8\theta}} t} E\left(e^{\frac{Y t}{\sqrt{8\theta}}}\right) = e^{-\frac{\theta}{\sqrt{2\theta}} t} M_Y\left(\frac{t}{2\sqrt{2\theta}}\right) = e^{-\frac{\theta}{\sqrt{2\theta}} t} e^{-\theta} e^{\frac{\sqrt{2\theta}\,\theta}{\sqrt{2\theta} - t}}$$
$$= \exp\left[\theta\, \frac{(\sqrt{2\theta} - t)(-t) - (\sqrt{2\theta} - t)\sqrt{2\theta} + \sqrt{2\theta}\sqrt{2\theta}}{(\sqrt{2\theta} - t)\sqrt{2\theta}}\right] = \exp\left[\frac{\theta t^2}{2\theta - \sqrt{2\theta}\, t}\right].$$
As $\theta \to \infty$, the MGF of $Z$ becomes
$$\lim_{\theta \to \infty} M_Z(t) = \lim_{\theta \to \infty} \exp\left[\frac{\theta t^2}{2\theta - \sqrt{2\theta}\, t}\right] = e^{t^2/2}.$$
Therefore, the limiting distribution of $Z = \frac{Y - E(Y)}{\sqrt{\operatorname{Var}(Y)}}$ as $\theta \to \infty$ is the standard normal distribution.
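The limit in (b) can also be seen empirically: since $M_{Y \mid N}(t) = (1 - 2t)^{-N}$, we have $Y \mid N \sim \chi^2_{2N}$ with $N \sim \text{Poisson}(\theta)$, and the skewness of the standardized $Y$ shrinks as $\theta$ grows. A simulation sketch assuming numpy and scipy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
m = 200_000
for theta in (5.0, 50.0, 500.0):
    N = rng.poisson(theta, size=m)
    # chi^2 with 0 df is a point mass at 0; guard df = 0 before sampling
    Y = np.where(N > 0, rng.chisquare(np.maximum(2 * N, 1)), 0.0)
    Z = (Y - 2 * theta) / np.sqrt(8 * theta)
    print(theta, Z.mean(), Z.var(), stats.skew(Z))  # mean ≈ 0, var ≈ 1, skew -> 0
```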

5.43 [Sol]

a. $\lim_{n \to \infty} P(|Y_n - \mu| > \varepsilon) = \lim_{n \to \infty} P(\sqrt{n}\,|Y_n - \mu| > \sqrt{n}\,\varepsilon) = \lim_{n \to \infty} P(|Z| > \sqrt{n}\,\varepsilon) = 0$, where $Z \sim N(0, \sigma^2)$. Thus $Y_n$ converges to $\mu$ in probability.

b. In Theorem 5.5.24, by using a Taylor expansion, we find that $\sqrt{n}(g(Y_n) - g(\theta)) = \sqrt{n}\, g'(\theta)(Y_n - \theta) + R_n$. In addition, we are given that $\sqrt{n}(Y_n - \theta) \to_d N(0, \sigma^2)$ and $R_n \to_p 0$. Thus, by Slutsky's Theorem, $\sqrt{n}(g(Y_n) - g(\theta)) = g'(\theta)\sqrt{n}(Y_n - \theta) + R_n \to_d g'(\theta) N(0, \sigma^2) + 0 = N(0, g'(\theta)^2 \sigma^2)$.
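A numerical illustration of the first-order Delta method, with illustrative choices $g(y) = y^2$, $\theta = 2$, $\sigma = 1$ (not taken from the problem), assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(3)
theta, sigma, n, m = 2.0, 1.0, 400, 20_000
Ybar = rng.normal(theta, sigma, size=(m, n)).mean(axis=1)
W = np.sqrt(n) * (Ybar**2 - theta**2)         # sqrt(n)(g(Y_n) - g(theta))
print(W.var(), (2 * theta) ** 2 * sigma**2)   # both ≈ g'(theta)^2 sigma^2 = 16
```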

5.44 [Sol]

a. The expectation and variance of $Y_n$ are $p$ and $\frac{p(1-p)}{n}$. By the CLT, the sampling distribution of $Y_n$ can be approximated by $N\left(p, \frac{p(1-p)}{n}\right)$. Thus $\sqrt{n}(Y_n - p) \to_d N(0, p(1-p))$.

b. Suppose $p \neq 1/2$. Let $g(y) = y(1-y)$. Then $g$ is continuous and differentiable. Using the result in (a), $\sqrt{n}(Y_n - p) \to_d N(0, p(1-p))$. By the Delta method, the asymptotic distribution of $Y_n(1-Y_n)$ is given by $\sqrt{n}(g(Y_n) - g(p)) = \sqrt{n}(Y_n(1-Y_n) - p(1-p)) \to_d N(0, (1-2p)^2 p(1-p))$.

c. Suppose $p = 1/2$. Then $g'(p) = 0$. By the second-order Delta method, the asymptotic distribution of $n(g(Y_n) - g(p))$ is given by $n(g(Y_n) - g(p)) = n(Y_n(1-Y_n) - p(1-p)) = n\left(Y_n(1-Y_n) - \frac{1}{4}\right) \to_d \frac{g''(p)}{2}\, p(1-p)\, \chi^2_1 = -\frac{1}{4}\chi^2_1$, since $g''(p) = -2$.
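Part (c) can be checked by simulation; a sketch assuming numpy, with $Y_n$ the sample proportion of Bernoulli$(1/2)$ trials:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 5_000, 100_000
Yn = rng.binomial(n, 0.5, size=m) / n         # sample proportion, p = 1/2
W = n * (Yn * (1 - Yn) - 0.25)
print(W.mean())                               # ≈ -1/4 = E[-(1/4) chi^2_1]
print((-4 * W).mean(), (-4 * W).var())        # ≈ 1 and ≈ 2, as for chi^2_1
```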

Example 2

a. $-\log(Y_n) = \sum_{i=1}^{n} (-\log X_i)$. Since $X_i \sim \text{uniform}(0, 1)$, $-\log X_i \sim \text{exponential}(1)$ (by transformation techniques we learned in Stat 607). Since $X_1, \ldots, X_n$ are independent, $-\log(Y_n) = \sum_{i=1}^{n} (-\log X_i) \sim \text{gamma}(n, 1)$, the distribution of a sum of $n$ independent exponential(1) random variables.

b. $\log T_n = \frac{1}{n}(-\log Y_n) = \frac{1}{n}\sum_{i=1}^{n} (-\log X_i)$. By the WLLN and $-\log X_i \sim \text{exponential}(1)$, $\log T_n = \frac{1}{n}\sum_{i=1}^{n} (-\log X_i) \to_p E(-\log X_1) = 1$. Consider $g(x) = e^x$. Then, by the "convergence-in-probability" transformation, $g(\log T_n) = e^{\log T_n} = T_n \to_p e^1 = e$.

c. Since $-\log X_i \sim \text{exponential}(1)$, $\mu = E(-\log X_i) = 1$ and $\sigma^2 = \operatorname{Var}(-\log X_i) = 1$. Since $\log T_n = \frac{1}{n}\sum_{i=1}^{n} (-\log X_i)$, the CLT implies that $\frac{\sqrt{n}(\log T_n - \mu)}{\sigma} = \sqrt{n}(\log T_n - 1) \to_d N(0, 1)$. Let $g(x) = e^x$ (so that $g'(x) = e^x$). Then, by the Delta method, $\sqrt{n}(g(\log T_n) - g(1)) = \sqrt{n}(T_n - e) \to_d N(0, (g'(1))^2) = N(0, e^2)$.
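A simulation sketch of (b) and (c), assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 1_000, 10_000
X = rng.uniform(size=(m, n))                  # X_i ~ uniform(0, 1)
Tn = np.exp((-np.log(X)).mean(axis=1))        # T_n = exp(log T_n)
W = np.sqrt(n) * (Tn - np.e)
print(Tn.mean(), np.e)                        # T_n concentrates near e
print(W.var(), np.e**2)                       # asymptotic variance e^2 ≈ 7.389
```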

Example 3

a. $H(X, Y) \sim \text{Bernoulli}(\theta)$. So $E(H(X, Y)) = 1 \cdot P(X < Y) + 0 \cdot P(X \geq Y) = \theta$.

b. $T = \sum_{i=1}^{n} H(X_i, Y_i)$ is the number of pairs $(X_i, Y_i)$ for which $H(X_i, Y_i) = 1$. Thus $T \sim \text{Binomial}(n, \theta)$.


c. $E(T) = n\theta$ and $\operatorname{Var}(T) = n\theta(1-\theta)$, so $E(T/n) = \theta$ and $\operatorname{Var}(T/n) = \theta(1-\theta)/n$. By Chebyshev's inequality,
$$\lim_{n \to \infty} P(|T/n - \theta| > \varepsilon) \leq \lim_{n \to \infty} \frac{E(T/n - \theta)^2}{\varepsilon^2} = \lim_{n \to \infty} \frac{\operatorname{Var}(T/n)}{\varepsilon^2} = \lim_{n \to \infty} \frac{\theta(1-\theta)}{n\varepsilon^2} = 0$$
(or use the WLLN), and so $T/n \to_p \theta$.

d. $\frac{1}{n}T = \frac{1}{n}\sum_{i=1}^{n} H(X_i, Y_i)$, where $E(H(X_i, Y_i)) = \theta$ and $\operatorname{Var}(H(X_i, Y_i)) = \theta(1-\theta)$. By the CLT,
$$\frac{\sqrt{n}(T/n - \theta)}{\sqrt{\theta(1-\theta)}} = \frac{1}{\sqrt{n}} \frac{T - n\theta}{\sqrt{\theta(1-\theta)}} \to_d N(0, 1).$$
Thus $\frac{1}{\sqrt{n}}(T - n\theta) \to_d N(0, \theta(1-\theta))$.
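Since (b) gives $T \sim \text{Binomial}(n, \theta)$, parts (c) and (d) can be checked by simulating $T$ directly; $\theta = 1/2$ is an illustrative value (the distributions of $X$ and $Y$ are not needed here). A sketch assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, theta = 2_000, 50_000, 0.5              # theta = 1/2 for illustration
T = rng.binomial(n, theta, size=m)            # T ~ Binomial(n, theta), part (b)
print((T / n).mean(), theta)                  # T/n concentrates near theta
W = (T - n * theta) / np.sqrt(n)
print(W.var(), theta * (1 - theta))           # both ≈ 0.25
```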

Three additional problems [Sol]

1. Let $X = \sum_{i=1}^{n} Y_i$, where $Y_i \sim \text{exponential}(\beta)$. Then $X \sim \text{Gamma}(n, \beta)$. Thus, by the CLT, $\frac{1}{\sqrt{n}} \frac{X - n\beta}{\sqrt{\beta^2}} = \frac{X - n\beta}{\sqrt{n\beta^2}} \to_d N(0, 1)$.
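A quick check of the normal approximation to the standardized Gamma$(n, \beta)$; a sketch assuming numpy, with $\beta = 2$ chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
n, beta, m = 500, 2.0, 50_000
X = rng.gamma(shape=n, scale=beta, size=m)    # X ~ Gamma(n, beta)
Z = (X - n * beta) / np.sqrt(n * beta**2)
print(Z.mean(), Z.var())                      # ≈ 0 and ≈ 1
print((Z > 1.645).mean())                     # ≈ 0.05, the N(0, 1) upper tail
```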

2. (a) Note that $X \sim \text{Gamma}(3, 1)$, and let $Y = 1/X$. Then $E(Y) = 1/2$ and $\operatorname{Var}(Y) = 1/4$. By the WLLN, $\bar{Y}_n \to_p 1/2$, and thus $T = 1/\bar{Y}_n \to_p 2$.

(b) By the CLT, $\sqrt{n}(\bar{Y}_n - 1/2) \to_d N(0, 1/4)$. Thus $2\sqrt{n}(\bar{Y}_n - 1/2) \to_d N(0, 1)$.
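The moments in 2(a) can be verified by simulation, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(8)
Y = 1.0 / rng.gamma(shape=3.0, scale=1.0, size=1_000_000)
print(Y.mean(), Y.var())                      # ≈ 1/2 and ≈ 1/4
```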

3. (a) For $\varepsilon > 0$,
$$P(|\bar{X} - \lambda| \geq \varepsilon) = P((\bar{X} - \lambda)^2 \geq \varepsilon^2) \leq \frac{E(\bar{X} - \lambda)^2}{\varepsilon^2} = \frac{\operatorname{Var}(\bar{X})}{\varepsilon^2} = \frac{\lambda}{n\varepsilon^2}.$$
Thus $0 \leq \lim_{n \to \infty} P(|\bar{X} - \lambda| \geq \varepsilon) \leq \lim_{n \to \infty} \frac{\lambda}{n\varepsilon^2} = 0$, and $\bar{X} \to_p \lambda$.

(b) By the CLT, $\frac{\bar{X} - E(\bar{X})}{\sqrt{\operatorname{Var}(\bar{X})}} = \frac{\sqrt{n}(\bar{X} - \lambda)}{\sqrt{\lambda}} \to_d N(0, 1)$. Thus $\sqrt{n}(\bar{X} - \lambda) \to_d N(0, \lambda)$.

(c) By (a) and (b), $\bar{X} \to_p \lambda$ and $\sqrt{n}(\bar{X} - \lambda) \to_d N(0, \lambda)$. By the convergence-in-probability transformation, $\sqrt{\bar{X}/\lambda} \to_p 1$. Thus, by Slutsky's Theorem,
$$\frac{\sqrt{n}(\bar{X} - \lambda)/\sqrt{\lambda}}{\sqrt{\bar{X}/\lambda}} = \frac{\sqrt{n}(\bar{X} - \lambda)}{\sqrt{\bar{X}}} \to_d \frac{N(0, 1)}{1} = N(0, 1).$$
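A simulation sketch of 3(c), assuming numpy, with $\lambda = 3$ chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(9)
lam, n, m = 3.0, 500, 20_000
Xbar = rng.poisson(lam, size=(m, n)).mean(axis=1)
Z = np.sqrt(n) * (Xbar - lam) / np.sqrt(Xbar)
print(Z.mean(), Z.var())                      # ≈ 0 and ≈ 1
print((np.abs(Z) > 1.96).mean())              # ≈ 0.05, as for N(0, 1)
```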
