Solutions to Exercises 1–4 — warwick.ac.uk


Question 3
The p.d.f. integrates to 1:
\[
\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}\,dx = 1
\quad\Rightarrow\quad
\int_{-\infty}^{\infty} e^{-x^2/2}\,dx = \sqrt{2\pi}.
\]
\[
E[X] = \int_{-\infty}^{\infty} x \cdot \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}\,dx
= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} x\, e^{-x^2/2}\,dx = 0
\]
(by integration by substitution, or by symmetry).
\[
E[X^2] = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} x^2\, e^{-x^2/2}\,dx = 1
\]
(by parts, and using L'Hôpital's rule).
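As an illustrative numerical check (not part of the original solution), the three integrals above can be approximated with a simple Riemann sum:

```python
import math

# Riemann-sum check of the three standard-normal integrals above.
def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

steps = 20_000
h = 20 / steps                                # grid over [-10, 10]
xs = [-10 + h * i for i in range(steps + 1)]

total = sum(phi(x) * h for x in xs)           # integral of the p.d.f.: ~1
mean = sum(x * phi(x) * h for x in xs)        # E[X]: ~0 (odd integrand)
second = sum(x * x * phi(x) * h for x in xs)  # E[X^2]: ~1
```

The tails beyond ±10 contribute less than 1e-22, so truncating the domain is harmless here.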


Question 4
The m.g.f. of X is
\[
\phi_X(t) = E[e^{tX}] = \sum_{k=0}^{\infty} e^{tk} \cdot \frac{\lambda^k}{k!}\, e^{-\lambda}
= \Big[\sum_{k=0}^{\infty} \frac{(e^t\lambda)^k}{k!}\Big] e^{-\lambda}
= e^{(e^t-1)\lambda}.
\]
The m.g.f. of X + Y is
\[
\phi_{X+Y}(t) = E[e^{t(X+Y)}]
= E[e^{tX}] \cdot E[e^{tY}] \quad\text{(by independence)}
= \phi_X(t)\,\phi_Y(t)
= e^{(e^t-1)(\lambda+\mu)}.
\]
The m.g.f. of X + Y is the same as that of a Poisson distribution with parameter \lambda + \mu, and an m.g.f. uniquely determines the distribution, so X + Y has the Poisson distribution with parameter \lambda + \mu.
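As an illustrative simulation (not part of the original solution; the parameter values lam = 2, mu = 3 are chosen arbitrarily), the sample mean and variance of X + Y should both be close to lam + mu = 5, as for a Pois(lam + mu) variable:

```python
import math
import random

random.seed(0)

def poisson(rate):
    # Knuth's method: multiply uniforms until the product drops below
    # e^{-rate}; the number of extra factors needed is Pois(rate).
    limit, k, prod = math.exp(-rate), 0, random.random()
    while prod > limit:
        k += 1
        prod *= random.random()
    return k

lam, mu, trials = 2.0, 3.0, 200_000
samples = [poisson(lam) + poisson(mu) for _ in range(trials)]
m = sum(samples) / trials                        # ~ lam + mu
v = sum((s - m) ** 2 for s in samples) / trials  # ~ lam + mu
```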


Question 5
The event {Y ≤ y} can be expressed as
\[
\{Y \le y\} = \{g(X) \le y\} = \{X \le g^{-1}(y)\},
\]
so the c.d.f. of Y is
\[
F_Y(y) = P(Y \le y) = P(X \le g^{-1}(y)) = F_X(g^{-1}(y)).
\]
As X has a continuous distribution, its c.d.f. is continuous. Thus the c.d.f. of Y is continuous, which implies that Y has a continuous distribution. Since g(\cdot) is an increasing bijection, g^{-1}(\cdot) is also an increasing bijection. Since g(\cdot) is differentiable and g'(x) \ne 0 for all x \in \mathbb{R}, g^{-1}(\cdot) is differentiable. We have F_Y(y) = F_X(g^{-1}(y)), so
\[
f_Y(y) = \frac{d}{dy} F_X(g^{-1}(y))
= f_X(g^{-1}(y)) \cdot \frac{d}{dy}\big[g^{-1}(y)\big]
= \frac{f_X(g^{-1}(y))}{g'(g^{-1}(y))}.
\]
In particular, X \sim N(0, 1) \Rightarrow Y = aX + b \sim N(b, a^2) (for a > 0), with
\[
f_Y(y) = \frac{1}{\sqrt{2\pi}\,a}\, e^{-(y-b)^2/(2a^2)}.
\]
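As an illustrative check (not part of the original solution), the change-of-variables formula can be verified for the linear example g(x) = a x + b with a > 0, where the values a = 2, b = 1 are chosen arbitrarily; the formula should reproduce the N(b, a^2) density:

```python
import math

def phi(x):
    # Standard normal p.d.f.
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

a, b = 2.0, 1.0

def f_Y(y):
    x = (y - b) / a       # g^{-1}(y)
    return phi(x) / a     # g'(x) = a everywhere

def normal_pdf(y, mean, var):
    return math.exp(-(y - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

gaps = [abs(f_Y(y) - normal_pdf(y, b, a * a)) for y in (-3.0, 0.0, 1.0, 4.0)]
```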


Question 6
T_i \sim Exp(1), i = 1, 2, \dots, n. The m.g.f. of T is
\[
\phi_T(\theta) = E(e^{\theta T}) = \int_0^{\infty} e^{(\theta-1)t}\,dt.
\]
This integral converges when \theta < 1, so the m.g.f. exists only on (-\infty, 1), and \phi_T(\theta) = \frac{1}{1-\theta} for \theta < 1. Thus the m.g.f. of S_n is
\[
\phi_{S_n}(\theta) = E(e^{\theta S_n})
= E\big(e^{\sum_{i=1}^n \theta T_i}\big)
= \prod_{i=1}^n E(e^{\theta T_i}) \quad\text{(by independence)}
= \frac{1}{(1-\theta)^n} \quad (\theta < 1),
\]
which is the same as the m.g.f. of the given distribution, so S_n follows the given distribution.
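As an illustrative Monte Carlo check (not part of the original solution; n = 5 and theta = 0.3 are chosen arbitrarily with theta < 1), the empirical E[exp(theta S_n)] should be close to (1 - theta)^(-n):

```python
import math
import random

random.seed(0)
n, theta, trials = 5, 0.3, 200_000
# Average exp(theta * S_n) over many independent sums of n Exp(1) draws.
est = sum(
    math.exp(theta * sum(random.expovariate(1.0) for _ in range(n)))
    for _ in range(trials)
) / trials
target = (1 - theta) ** (-n)
```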


Question 6
The probability can be calculated as follows.
\[
P(S_n > t) = \int_t^{\infty} \frac{x^{n-1} e^{-x}}{(n-1)!}\,dx
= \frac{1}{(n-1)!}\Big[t^{n-1} e^{-t} + (n-1)\int_t^{\infty} x^{n-2} e^{-x}\,dx\Big] \quad\text{(by parts)}
\]
\[
= \cdots
= \frac{1}{(n-1)!}\big[t^{n-1} e^{-t} + (n-1)\,t^{n-2} e^{-t} + \dots + (n-1)!\,t\,e^{-t} + (n-1)!\,e^{-t}\big]
\]
\[
= \frac{t^{n-1}}{(n-1)!}\,e^{-t} + \frac{t^{n-2}}{(n-2)!}\,e^{-t} + \dots + t\,e^{-t} + e^{-t}
= \sum_{i=0}^{n-1} \frac{t^i}{i!}\,e^{-t}.
\]
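As an illustrative simulation (not part of the original solution; n = 4 and t = 3 are chosen arbitrarily), the empirical P(S_n > t) should match the Poisson tail sum just derived:

```python
import math
import random

random.seed(0)
n, t, trials = 4, 3.0, 200_000
# Fraction of trials where a sum of n Exp(1) draws exceeds t.
hits = sum(
    sum(random.expovariate(1.0) for _ in range(n)) > t
    for _ in range(trials)
)
empirical = hits / trials
poisson_tail = sum(t ** i / math.factorial(i) * math.exp(-t) for i in range(n))
```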


Question 6

The probability can be calculated as follows.
\[
P(N(t) = k) = P(N(t) < k+1) - P(N(t) < k)
= P(S_{k+1} > t) - P(S_k > t)
= \sum_{i=0}^{k} \frac{t^i}{i!}\,e^{-t} - \sum_{i=0}^{k-1} \frac{t^i}{i!}\,e^{-t}
= \frac{t^k}{k!}\,e^{-t},
\]
which is the same as shown in the question.
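As an illustrative simulation (not part of the original solution; t = 2 and k = 3 are chosen arbitrarily), N(t) can be built directly from Exp(1) inter-arrival gaps, and its empirical law should match the Poisson p.m.f. above:

```python
import math
import random

random.seed(0)
t, k, trials = 2.0, 3, 200_000

def count_arrivals(horizon):
    # Count how many partial sums of Exp(1) gaps fall below the horizon.
    total, count = 0.0, 0
    while True:
        total += random.expovariate(1.0)
        if total > horizon:
            return count
        count += 1

empirical = sum(count_arrivals(t) == k for _ in range(trials)) / trials
pmf = t ** k / math.factorial(k) * math.exp(-t)
```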


Question 3
Y = X^2, X \sim Exp(1). For y > 0,
\[
P(0 \le Y \le y) = P(0 \le X^2 \le y) = P(0 \le X \le \sqrt{y}) = 1 - e^{-\sqrt{y}}.
\]
So the density of Y is
\[
f_Y(y) =
\begin{cases}
0 & \text{if } y \le 0,\\[2pt]
\dfrac{1}{2\sqrt{y}}\, e^{-\sqrt{y}} & \text{if } y > 0.
\end{cases}
\]
The first way to compute E(Y):
\[
E(Y) = E(X^2) = \int_0^{\infty} x^2 e^{-x}\,dx
= 2\int_0^{\infty} x\,e^{-x}\,dx \quad\text{(by parts)}
= 2.
\]


Question 3

The second way to compute E(Y):
\[
E(Y) = \int_0^{\infty} \frac{y}{2\sqrt{y}}\, e^{-\sqrt{y}}\,dy
= \int_0^{\infty} x^2 e^{-x}\,dx = 2
\]
(substituting y = x^2). The two ways agree because the substitution turns the second integral into exactly the one computed in the first way.
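As an illustrative Monte Carlo check (not part of the original solution), the sample mean of X^2 for X ~ Exp(1) should be close to the value 2 obtained by both calculations:

```python
import random

random.seed(0)
trials = 400_000
# Sample mean of X^2 over Exp(1) draws.
est = sum(random.expovariate(1.0) ** 2 for _ in range(trials)) / trials
```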


Question 4
The joint p.d.f. f_{XY}(x, y) is
\[
f_{XY}(x, y) =
\begin{cases}
\dfrac{1}{\pi} & \text{if } (x, y) \in D,\\[2pt]
0 & \text{otherwise.}
\end{cases}
\]
For -1 \le a \le b \le 1,
\[
P(a \le X \le b) = \int_a^b \int_{-\infty}^{\infty} f_{XY}(x, y)\,dy\,dx
= \int_a^b \frac{2}{\pi}\sqrt{1 - x^2}\,dx.
\]
So X has a distribution with density
\[
f_X(x) =
\begin{cases}
\dfrac{2}{\pi}\sqrt{1 - x^2} & \text{if } -1 \le x \le 1,\\[2pt]
0 & \text{otherwise.}
\end{cases}
\]


Question 4
Similarly, Y has a distribution with density
\[
f_Y(y) =
\begin{cases}
\dfrac{2}{\pi}\sqrt{1 - y^2} & \text{if } -1 \le y \le 1,\\[2pt]
0 & \text{otherwise.}
\end{cases}
\]
X and Y are not independent, since f_{XY}(x, y) \ne f_X(x)\,f_Y(y).
\[
\operatorname{cov}(X, Y) = \frac{1}{\pi}\iint_D xy\,dx\,dy
= \frac{1}{\pi}\iint_{D_1} xy\,dx\,dy + \frac{1}{\pi}\iint_{D_2} xy\,dx\,dy
+ \frac{1}{\pi}\iint_{D_3} xy\,dx\,dy + \frac{1}{\pi}\iint_{D_4} xy\,dx\,dy,
\]
where
\[
D_1 = D \cap \{x \ge 0\} \cap \{y \ge 0\}, \qquad
D_2 = D \cap \{x < 0\} \cap \{y \ge 0\},
\]
\[
D_3 = D \cap \{x < 0\} \cap \{y < 0\}, \qquad
D_4 = D \cap \{x \ge 0\} \cap \{y < 0\}.
\]
The four integrals on the right-hand side have the same absolute value, two being positive and two being negative, so
\[
\operatorname{cov}(X, Y) = 0.
\]
So X and Y are uncorrelated but not independent: independence implies uncorrelatedness, but the converse is not necessarily true, because independence is a stronger condition than uncorrelatedness.
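As an illustrative simulation (not part of the original solution), sampling (X, Y) uniformly on the unit disk by rejection from the square [-1, 1]^2 shows the sample covariance is near 0, while E[X^2 Y^2] visibly differs from E[X^2] E[Y^2] (here the true values are 1/24 and 1/16), confirming dependence without correlation:

```python
import random

random.seed(0)
pts = []
# Rejection sampling: keep uniform points from the square inside the disk.
while len(pts) < 200_000:
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if x * x + y * y <= 1:
        pts.append((x, y))

n = len(pts)
cov = sum(x * y for x, y in pts) / n            # near 0
ex2 = sum(x * x for x, _ in pts) / n            # each second moment is 1/4
ey2 = sum(y * y for _, y in pts) / n
ex2y2 = sum((x * y) ** 2 for x, y in pts) / n   # true value 1/24, not 1/16
```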


Question 5
The event {M_n ≤ c} can be expressed as
\[
\{M_n \le c\} = \{U_k \le c \text{ for all } k = 1, 2, \dots, n\},
\]
and then for 0 \le c \le 1,
\[
P(M_n \le c) = P(U_k \le c \text{ for all } k = 1, 2, \dots, n)
= \prod_{k=1}^{n} P(U_k \le c) = c^n.
\]
So the c.d.f. of M_n is
\[
F_{M_n}(x) =
\begin{cases}
0 & \text{if } x \le 0,\\
x^n & \text{if } 0 \le x \le 1,\\
1 & \text{if } x \ge 1.
\end{cases}
\]


Question 5

Now for x \ge 0,
\[
P(n(1 - M_n) \le x) = P\Big(M_n \ge 1 - \frac{x}{n}\Big)
= 1 - \Big(1 - \frac{x}{n}\Big)^n.
\]
As n \to \infty,
\[
1 - \Big(1 - \frac{x}{n}\Big)^n \to 1 - e^{-x},
\]
so
\[
n(1 - M_n) \xrightarrow{D} Z,
\]
where Z has the standard exponential distribution.
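As an illustrative simulation (not part of the original solution; n = 500 and the threshold x = 1 are chosen arbitrarily), P(n(1 - M_n) <= 1) should be close to the limiting value 1 - e^{-1}:

```python
import math
import random

random.seed(0)
n, trials = 500, 20_000
# Fraction of trials where n * (1 - max of n uniforms) is at most 1.
hits = sum(
    n * (1 - max(random.random() for _ in range(n))) <= 1.0
    for _ in range(trials)
)
empirical = hits / trials
limit = 1 - math.exp(-1.0)
```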


Question 6

X, Y \sim N(0, 1) \Rightarrow f_X(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2} and f_Y(y) = \frac{1}{\sqrt{2\pi}}\, e^{-y^2/2}, and X \perp\!\!\!\perp Y, thus
\[
f_{X+Y}(u) = \int_{-\infty}^{\infty} f_X(u - v)\, f_Y(v)\,dv
= \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-(u-v)^2/2} \times \frac{1}{\sqrt{2\pi}}\, e^{-v^2/2}\,dv
\]
\[
= \int_{-\infty}^{\infty} \frac{1}{2\pi}\, e^{-v^2 + uv - u^2/2}\,dv
= \int_{-\infty}^{\infty} \frac{1}{2\pi}\, e^{-(v - u/2)^2 - u^2/4}\,dv
= \frac{e^{-u^2/4}}{2\sqrt{\pi}} \int_{-\infty}^{\infty} \frac{1}{\sqrt{\pi}}\, e^{-(v - u/2)^2}\,dv.
\]


Question 6

Since \frac{1}{\sqrt{\pi}}\, e^{-(v - u/2)^2} is the p.d.f. of N(u/2, 1/2),
\[
\int_{-\infty}^{\infty} \frac{1}{\sqrt{\pi}}\, e^{-(v - u/2)^2}\,dv = 1.
\]
So
\[
f_{X+Y}(u) = \frac{e^{-u^2/4}}{2\sqrt{\pi}} \int_{-\infty}^{\infty} \frac{1}{\sqrt{\pi}}\, e^{-(v - u/2)^2}\,dv
= \frac{e^{-u^2/4}}{2\sqrt{\pi}}.
\]
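As a consistency check (not part of the original solution), e^{-u^2/4} / (2 sqrt(pi)) is exactly the N(0, 2) density, which is what one expects for the sum of two independent N(0, 1) variables:

```python
import math

def f_sum(u):
    # Density obtained from the convolution above.
    return math.exp(-u * u / 4) / (2 * math.sqrt(math.pi))

def normal_pdf(u, var):
    return math.exp(-u * u / (2 * var)) / math.sqrt(2 * math.pi * var)

gaps = [abs(f_sum(u) - normal_pdf(u, 2.0)) for u in (-2.0, 0.0, 0.5, 3.0)]
```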


Question 7

We assume the needle has unit length. Fix the coordinate system so that the origin is at one end of the needle. The other end can then be considered as a random point (X, Y, Z) uniformly distributed on the unit sphere. Projection onto the X-ray image is equivalent to projecting (X, Y, Z) onto a plane (e.g. the x-y plane). The length R of the projection then satisfies
\[
R \le r \iff X^2 + Y^2 \le r^2.
\]
We need to calculate the surface area of the part of the sphere
\[
S_p = \{(x, y, z) \in \mathbb{R}^3 : x^2 + y^2 + z^2 = 1,\ x^2 + y^2 \le r^2\}.
\]


Question 7

The surface area of the sphere is 4\pi and the area of S_p (two polar caps) is
\[
4\pi\big(1 - \sqrt{1 - r^2}\big),
\]
then
\[
P(R \le r) = P(X^2 + Y^2 \le r^2)
= \frac{4\pi\big(1 - \sqrt{1 - r^2}\big)}{4\pi}
= 1 - \sqrt{1 - r^2}.
\]
So the density of R is
\[
f_R(r) =
\begin{cases}
\dfrac{r}{\sqrt{1 - r^2}} & \text{if } 0 \le r \le 1,\\[2pt]
0 & \text{otherwise.}
\end{cases}
\]
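As an illustrative simulation (not part of the original solution; r = 0.5 is chosen arbitrarily), a uniform direction on the unit sphere can be sampled by normalising three independent Gaussians, and the empirical P(R <= r) should match 1 - sqrt(1 - r^2):

```python
import math
import random

random.seed(0)
r, trials = 0.5, 200_000
hits = 0
for _ in range(trials):
    # Normalised Gaussian vector: uniform on the unit sphere.
    x, y, z = (random.gauss(0, 1) for _ in range(3))
    norm = math.sqrt(x * x + y * y + z * z)
    if (x * x + y * y) / (norm * norm) <= r * r:
        hits += 1
empirical = hits / trials
target = 1 - math.sqrt(1 - r * r)
```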


Part B
i) The m.g.f. of Y = [Z_1^2 + Z_2^2,\ Z_1^2 - Z_2^2] is
\[
\phi_Y(\theta) = E[\exp(\theta_1 Y_1 + \theta_2 Y_2)]
= E[\exp(\theta_1(Z_1^2 + Z_2^2) + \theta_2(Z_1^2 - Z_2^2))]
= E[\exp((\theta_1 + \theta_2) Z_1^2 + (\theta_1 - \theta_2) Z_2^2)]
\]
\[
= E[\exp((\theta_1 + \theta_2) Z_1^2)] \cdot E[\exp((\theta_1 - \theta_2) Z_2^2)] \quad\text{(by independence)}
= \phi_{Z_1^2}(\theta_1 + \theta_2) \cdot \phi_{Z_2^2}(\theta_1 - \theta_2).
\]
Z_1, Z_2 \sim N(0, 1) \Rightarrow Z_1^2, Z_2^2 \sim \chi^2_1 \Rightarrow
\[
\phi_{Z_1^2}(\theta) = \phi_{Z_2^2}(\theta) = \frac{1}{\sqrt{1 - 2\theta}}.
\]
So we have
\[
\phi_Y(\theta) = \frac{1}{\sqrt{1 - 2(\theta_1 + \theta_2)}} \cdot \frac{1}{\sqrt{1 - 2(\theta_1 - \theta_2)}}
= \frac{1}{\sqrt{(1 - 2\theta_1)^2 - 4\theta_2^2}}.
\]


Part B

The marginal m.g.f. of Y_1 is
\[
\phi_{Y_1}(\theta_1) = \phi_Y(\theta_1, 0) = \frac{1}{1 - 2\theta_1} \quad \Big(\theta_1 < \tfrac{1}{2}\Big),
\]
which is the m.g.f. of \chi^2_2. Similarly,
\[
\phi_{Y_2}(\theta_2) = \phi_Y(0, \theta_2) = \frac{1}{\sqrt{1 - 4\theta_2^2}} \quad \Big(\theta_2 \in \big(-\tfrac{1}{2}, \tfrac{1}{2}\big)\Big).
\]
ii)
\[
\operatorname{cov}(Y_1, Y_2) = E(Y_1 Y_2) - E(Y_1)\,E(Y_2)
= \frac{\partial^2}{\partial\theta_1\,\partial\theta_2}\phi_Y(0, 0)
- \frac{\partial}{\partial\theta_1}\phi_Y(0, 0) \cdot \frac{\partial}{\partial\theta_2}\phi_Y(0, 0).
\]


Part B

\[
\frac{\partial}{\partial\theta_1}\phi_Y(\theta)
= 2(1 - 2\theta_1)\big[(1 - 2\theta_1)^2 - 4\theta_2^2\big]^{-3/2}
\ \Rightarrow\ \frac{\partial}{\partial\theta_1}\phi_Y(0, 0) = 2.
\]
\[
\frac{\partial}{\partial\theta_2}\phi_Y(\theta)
= 4\theta_2\big[(1 - 2\theta_1)^2 - 4\theta_2^2\big]^{-3/2}
\ \Rightarrow\ \frac{\partial}{\partial\theta_2}\phi_Y(0, 0) = 0.
\]
\[
\frac{\partial^2}{\partial\theta_1\,\partial\theta_2}\phi_Y(\theta)
= 24(1 - 2\theta_1)\theta_2\big[(1 - 2\theta_1)^2 - 4\theta_2^2\big]^{-5/2}
\ \Rightarrow\ \frac{\partial^2}{\partial\theta_1\,\partial\theta_2}\phi_Y(0, 0) = 0.
\]
Thus \operatorname{cov}(Y_1, Y_2) = 0 - 2 \cdot 0 = 0, so Y_1 and Y_2 are uncorrelated. However,
\[
\phi_Y(\theta) = \frac{1}{\sqrt{(1 - 2\theta_1)^2 - 4\theta_2^2}}, \qquad
\phi_{Y_1}(\theta_1)\,\phi_{Y_2}(\theta_2) = \frac{1}{(1 - 2\theta_1)\sqrt{1 - 4\theta_2^2}}.
\]
Since \phi_Y(\theta) \ne \phi_{Y_1}(\theta_1)\,\phi_{Y_2}(\theta_2), Y_1 and Y_2 are not independent.
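As an illustrative simulation (not part of the original solution), Y_1 = Z_1^2 + Z_2^2 and Y_2 = Z_1^2 - Z_2^2 have sample covariance near 0, yet satisfy |Y_2| <= Y_1 identically, a deterministic constraint that rules out independence:

```python
import random

random.seed(0)
trials = 200_000
pairs = []
for _ in range(trials):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    pairs.append((z1 * z1 + z2 * z2, z1 * z1 - z2 * z2))

m1 = sum(y1 for y1, _ in pairs) / trials   # E[Y1] = 2 for chi^2_2
m2 = sum(y2 for _, y2 in pairs) / trials   # E[Y2] = 0
cov = sum((y1 - m1) * (y2 - m2) for y1, y2 in pairs) / trials
always_bounded = all(abs(y2) <= y1 for y1, y2 in pairs)
```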


Part C

Objective: to compute the m.g.f. of the r.v. M = X_1^2 + X_2^2.
\[
M = X_1^2 + X_2^2 = X X'
= (Z\,\Sigma^{1/2})(Z\,\Sigma^{1/2})'
= Z\,\Sigma\,Z' \quad\text{(by symmetry of } \Sigma^{1/2}\text{)}
\]
\[
= (Z_1, Z_2)
\begin{pmatrix} \sigma_1 & \rho \\ \rho & \sigma_2 \end{pmatrix}
\begin{pmatrix} Z_1 \\ Z_2 \end{pmatrix}
= \sigma_1 Z_1^2 + 2\rho Z_1 Z_2 + \sigma_2 Z_2^2
= \sigma_1\Big(Z_1 + \frac{\rho}{\sigma_1} Z_2\Big)^2 + \Big(\sigma_2 - \frac{\rho^2}{\sigma_1}\Big) Z_2^2.
\]


Part C
The m.g.f. of M is
\[
\phi_M(\theta) = E[\exp(\theta M)]
= E\Big[\exp\Big(\theta\sigma_1\Big(Z_1 + \tfrac{\rho}{\sigma_1} Z_2\Big)^2 + \theta\Big(\sigma_2 - \tfrac{\rho^2}{\sigma_1}\Big) Z_2^2\Big)\Big]
\]
\[
= E\Big[E\Big[\exp\Big(\theta\sigma_1\Big(Z_1 + \tfrac{\rho}{\sigma_1} Z_2\Big)^2 + \theta\Big(\sigma_2 - \tfrac{\rho^2}{\sigma_1}\Big) Z_2^2\Big)\,\Big|\,Z_2\Big]\Big]
\]
\[
= E\Big[\exp\Big(\theta\Big(\sigma_2 - \tfrac{\rho^2}{\sigma_1}\Big) Z_2^2\Big) \cdot E\Big[\exp\Big(\theta\sigma_1\Big(Z_1 + \tfrac{\rho}{\sigma_1} Z_2\Big)^2\Big)\,\Big|\,Z_2\Big]\Big]
\]
\[
= E\Big[\exp\Big(\theta\Big(\sigma_2 - \tfrac{\rho^2}{\sigma_1}\Big) Z_2^2\Big) \cdot \phi_{(Z_1 + \frac{\rho}{\sigma_1} Z_2)^2}(\theta\sigma_1)\Big]
= E\Bigg[\exp\Big(\theta\Big(\sigma_2 - \tfrac{\rho^2}{\sigma_1}\Big) Z_2^2\Big) \cdot
\frac{\exp\Big(\frac{\rho^2 Z_2^2}{\sigma_1^2} \cdot \frac{\theta\sigma_1}{1 - 2\theta\sigma_1}\Big)}{(1 - 2\theta\sigma_1)^{1/2}}\Bigg],
\]
using the m.g.f. of a noncentral \chi^2_1 variable (conditionally on Z_2, the variable Z_1 + \frac{\rho}{\sigma_1} Z_2 is normal with mean \frac{\rho}{\sigma_1} Z_2 and variance 1).


Part C

Simplifying the expression of the expectation gives
\[
\phi_M(\theta) = E\Big[(1 - 2\theta\sigma_1)^{-1/2}
\exp\Big(\theta\Big(\frac{\sigma_1\sigma_2 - \rho^2}{\sigma_1} + \frac{\rho^2}{\sigma_1(1 - 2\theta\sigma_1)}\Big) Z_2^2\Big)\Big]
\]
\[
= (1 - 2\theta\sigma_1)^{-1/2} \cdot
\phi_{Z_2^2}\Big(\theta\Big(\frac{\sigma_1\sigma_2 - \rho^2}{\sigma_1} + \frac{\rho^2}{\sigma_1(1 - 2\theta\sigma_1)}\Big)\Big)
\]
\[
= (1 - 2\theta\sigma_1)^{-1/2} \cdot
\Big(1 - 2\theta\Big(\frac{\sigma_1\sigma_2 - \rho^2}{\sigma_1} + \frac{\rho^2}{\sigma_1(1 - 2\theta\sigma_1)}\Big)\Big)^{-1/2}
\]
\[
= \Big[(1 - 2\theta\sigma_1)(1 - 2\theta\sigma_2)\Big(1 - \frac{4\theta^2\rho^2}{(1 - 2\theta\sigma_1)(1 - 2\theta\sigma_2)}\Big)\Big]^{-1/2}
= \big[(1 - 2\theta\sigma_1)(1 - 2\theta\sigma_2) - 4\theta^2\rho^2\big]^{-1/2}.
\]
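As an illustrative Monte Carlo check of the final formula (not part of the original solution; the parameter values sigma1 = sigma2 = 1, rho = 0.5, theta = 0.1 are chosen arbitrarily), (X_1, X_2) with covariance matrix [[sigma1, rho], [rho, sigma2]] can be built from a manual Cholesky factorisation:

```python
import math
import random

random.seed(0)
s1, s2, rho, theta, trials = 1.0, 1.0, 0.5, 0.1, 200_000
# Cholesky factors of [[s1, rho], [rho, s2]].
l11 = math.sqrt(s1)
l21 = rho / l11
l22 = math.sqrt(s2 - rho * rho / s1)

est = 0.0
for _ in range(trials):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x1, x2 = l11 * z1, l21 * z1 + l22 * z2
    est += math.exp(theta * (x1 * x1 + x2 * x2))
est /= trials
target = ((1 - 2 * theta * s1) * (1 - 2 * theta * s2)
          - 4 * theta ** 2 * rho ** 2) ** -0.5
```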


Question 3

i.i.d. r.v.s X_1, \dots, X_n \sim U(0, 1).
\[
E[\log X_1] = \int_0^1 \log x\,dx
= x\log x\,\Big|_0^1 - \int_0^1 x \cdot \frac{1}{x}\,dx = -1,
\]
\[
E[(\log X_1)^2] = \int_0^1 (\log x)^2\,dx
= x(\log x)^2\,\Big|_0^1 - \int_0^1 x \cdot 2\log x \cdot \frac{1}{x}\,dx = 2,
\]
\[
\operatorname{var}[\log X_1] = E[(\log X_1)^2] - [E(\log X_1)]^2 = 2 - 1 = 1.
\]
Denote Y = (X_1 \cdots X_n)^{1/\sqrt{n}}\, e^{\sqrt{n}}. Then
\[
\log Y = \frac{1}{\sqrt{n}} \sum_{i=1}^n \log X_i + \sqrt{n}
= \sqrt{n}\,\Big(\frac{1}{n}\sum_{i=1}^n \log X_i + 1\Big).
\]
Since E(\log X_i) = -1 and \operatorname{var}(\log X_i) = 1, the central limit theorem (CLT) gives
\[
\sqrt{n}\,\Big(\frac{1}{n}\sum_{i=1}^n \log X_i + 1\Big) \xrightarrow{D} Z,
\quad\text{where } Z \sim N(0, 1).
\]
Thus
\[
\log Y \xrightarrow{D} Z,
\]
and then
\[
Y \xrightarrow{D} \ln N(0, 1),
\]
where \ln N(0, 1) is a log-normal distribution.

Central Limit Theorem: for a sequence of i.i.d. r.v.'s X_i with E(X_i) = \mu and \operatorname{var}(X_i) = \sigma^2 < \infty,
\[
\sqrt{n}\,\Big(\frac{1}{n}\sum_{i=1}^n X_i - \mu\Big) \xrightarrow{D} N(0, \sigma^2).
\]
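As an illustrative simulation (not part of the original solution; n = 400 is chosen arbitrarily), log Y = n^{-1/2} sum(log X_i) + sqrt(n) should be approximately standard normal, so its sample mean and variance should be near 0 and 1:

```python
import math
import random

random.seed(0)
n, trials = 400, 20_000
vals = []
for _ in range(trials):
    # 1 - random.random() lies in (0, 1], avoiding log(0).
    s = sum(math.log(1.0 - random.random()) for _ in range(n))
    vals.append(s / math.sqrt(n) + math.sqrt(n))
m = sum(vals) / trials
v = sum((x - m) ** 2 for x in vals) / trials
```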


Question 4
X_1, \dots, X_n \sim Pois(\lambda), and the X_i's are mutually independent, so \sum_{i=1}^n X_i has the Poisson distribution with parameter n\lambda, which can be obtained by using the m.g.f.

The support of X_i is \{0, 1, \dots\} \Rightarrow the support of \sum_{i=1}^n X_i is \{0, 1, \dots\} \Rightarrow the support of \bar{X} is \{0, \frac{1}{n}, \frac{2}{n}, \dots\}, and the p.m.f. of \bar{X} is
\[
f_{\bar{X}}\Big(\frac{k}{n}\Big) = P\Big(\sum_{i=1}^n X_i = k\Big)
= e^{-n\lambda} \cdot \frac{(n\lambda)^k}{k!}.
\]
\[
E[\bar{X} - \lambda] = E[\bar{X}] - \lambda
= \frac{1}{n}\sum_{i=1}^n E[X_i] - \lambda = 0.
\]


Question 4

\[
E[(\bar{X} - \lambda)^2] = \operatorname{var}(\bar{X} - \lambda) = \operatorname{var}(\bar{X})
= \frac{1}{n^2}\sum_{i=1}^n \operatorname{var}(X_i) \quad\text{(by independence)}
= \frac{\lambda}{n}.
\]
Denote Y = \sqrt{n}(\bar{X} - \lambda) = \sqrt{n}\,\big(\frac{1}{n}\sum_{i=1}^n X_i - \lambda\big). X_i \sim Pois(\lambda) \Rightarrow E(X_i) = \lambda and \operatorname{var}(X_i) = \lambda. By the CLT,
\[
Y \xrightarrow{D} N(0, \lambda).
\]
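As an illustrative simulation (not part of the original solution; lambda = 2 and n = 20 are chosen arbitrarily), the sample value of E[(X_bar - lambda)^2] should be close to lambda / n = 0.1:

```python
import math
import random

random.seed(0)

def poisson(rate):
    # Knuth's method for sampling a Poisson variate.
    limit, k, prod = math.exp(-rate), 0, random.random()
    while prod > limit:
        k += 1
        prod *= random.random()
    return k

lam, n, trials = 2.0, 20, 50_000
sq = 0.0
for _ in range(trials):
    xbar = sum(poisson(lam) for _ in range(n)) / n
    sq += (xbar - lam) ** 2
sq /= trials
target = lam / n
```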


Question 5
X, Y \sim N(0, 1), independent. For r > 0,
\[
P(X^2 + Y^2 \le r^2)
= \iint_{x^2 + y^2 \le r^2} \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2} \cdot \frac{1}{\sqrt{2\pi}}\, e^{-y^2/2}\,dx\,dy
= \iint_{x^2 + y^2 \le r^2} \frac{1}{2\pi}\, e^{-(x^2 + y^2)/2}\,dx\,dy
\]
\[
= \int_0^r \int_0^{2\pi} \frac{1}{2\pi}\, e^{-u^2/2}\, u\,d\theta\,du
= \int_0^r u\, e^{-u^2/2}\,du
= 1 - e^{-r^2/2}.
\]
So X^2 + Y^2 has the exponential distribution with mean 2, or \chi^2_2, or the Gamma distribution with shape parameter 1 and scale parameter 2.


Question 6
Y_1, \dots, Y_n: Y_i = \alpha + \beta x_i + \varepsilon_i, with \sum_{i=1}^n x_i = 0 and \varepsilon_i \sim N(0, \sigma^2).
\[
\hat{\alpha} = \frac{1}{n}\sum_{i=1}^n Y_i
= \alpha + \beta \cdot \frac{1}{n}\sum_{i=1}^n x_i + \frac{1}{n}\sum_{i=1}^n \varepsilon_i
= \alpha + \frac{1}{n}\sum_{i=1}^n \varepsilon_i.
\]
So \hat{\alpha} \sim N\big(\alpha, \frac{\sigma^2}{n}\big).
\[
\hat{\beta} = \frac{\sum_{i=1}^n x_i Y_i}{\sum_{i=1}^n x_i^2}
= \frac{\sum_{i=1}^n x_i(\alpha + \beta x_i + \varepsilon_i)}{\sum_{i=1}^n x_i^2}
= \beta + \frac{\sum_{i=1}^n x_i \varepsilon_i}{\sum_{i=1}^n x_i^2}.
\]
So \hat{\beta} \sim N\big(\beta, \frac{\sigma^2}{\sum_{i=1}^n x_i^2}\big).

Here [\hat{\alpha}, \hat{\beta}] and \hat{\sigma}^2 are independent, and
\[
\frac{n\hat{\sigma}^2}{\sigma^2} = \frac{\sum_{i=1}^n (Y_i - \hat{\alpha} - \hat{\beta} x_i)^2}{\sigma^2} \sim \chi^2_{n-2}.
\]