Solution to Homework 6 - Statistics at UC Berkeley (partha/stat205BS08/sol6.pdf)
Solution to Homework 6
Statistics 205B: Spring 2008
1. (Problem 1.2 from section 6.1 in Durrett)
(a) Let A be any set and let B = ∪_{n=0}^∞ ϕ^{-n}(A). Show that ϕ^{-1}(B) ⊂ B.
(b) Let B be any set with ϕ^{-1}(B) ⊂ B and let C = ∩_{n=0}^∞ ϕ^{-n}(B). Show that ϕ^{-1}(C) = C.
(c) Show that A is almost invariant if and only if there is a C invariant in the strict sense with P(A△C) = 0.
Solution: We will need the following standard facts: for any function ϕ and any sets A, B and C we have
ϕ^{-1}(A ∪ B) = ϕ^{-1}(A) ∪ ϕ^{-1}(B),
ϕ^{-1}(A ∩ B) = ϕ^{-1}(A) ∩ ϕ^{-1}(B),
P(A△B) ≤ P(A△C) + P(C△B).
(a) ϕ^{-1}(B) = ϕ^{-1}(∪_{n=0}^∞ ϕ^{-n}(A)) = ∪_{n=0}^∞ ϕ^{-n-1}(A) = ∪_{n=1}^∞ ϕ^{-n}(A) ⊂ ∪_{n=0}^∞ ϕ^{-n}(A) = B.
(b) ϕ^{-1}(C) = ϕ^{-1}(∩_{n=0}^∞ ϕ^{-n}(B)) = ∩_{n=0}^∞ ϕ^{-n-1}(B) = ∩_{n=1}^∞ ϕ^{-n}(B) = ∩_{n=0}^∞ ϕ^{-n}(B) = C, where the second-to-last equality holds because ϕ^{-1}(B) ⊂ B gives ϕ^{-0}(B) ∩ ϕ^{-1}(B) = ϕ^{-1}(B).
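The set algebra in (a) and (b) can be checked mechanically on a toy example. Below is a small Python sketch; the finite space {0, ..., 5}, the map ϕ(x) = max(x − 1, 0), and the starting set A = {3} are our own illustrative choices, not part of the problem.

```python
# A brute-force sanity check of (a) and (b) on a toy example.  The space
# {0,...,5} and the map phi(x) = max(x-1, 0) are illustrative choices only.
space = set(range(6))

def phi(x):
    return max(x - 1, 0)

def preimage(S, k):
    """Compute phi^{-k}(S) by brute force."""
    T = set(S)
    for _ in range(k):
        T = {x for x in space if phi(x) in T}
    return T

A = {3}
# (a) B = union over n of phi^{-n}(A); on a finite space finitely many n suffice.
B = set().union(*(preimage(A, n) for n in range(10)))
print(preimage(B, 1) <= B)    # True: phi^{-1}(B) is contained in B
# (b) C = intersection over n of phi^{-n}(B) is strictly invariant (here C is empty).
C = set.intersection(*(preimage(B, n) for n in range(10)))
print(preimage(C, 1) == C)    # True: phi^{-1}(C) = C
```

Note that C can well be empty, as in this example; strict invariance holds trivially then.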
(c) If part: If C is strictly invariant, so that ϕ^{-1}(C) = C, and P(A△C) = 0, then, since ϕ preserves P,
P(ϕ^{-1}(A)△C) = P(ϕ^{-1}(A)△ϕ^{-1}(C)) = P(ϕ^{-1}(A△C)) = P(A△C) = 0.
Hence P(ϕ^{-1}(A)△A) ≤ P(ϕ^{-1}(A)△C) + P(C△A) = 0, i.e. A is almost invariant.
Only if part: First we claim that P(A△B) = 0, where B = ∪_{n=0}^∞ ϕ^{-n}(A). This follows from the calculation
P(A△B) = P(A△(∪_{n=0}^∞ ϕ^{-n}(A))) ≤ ∑_{n=0}^∞ P(A△ϕ^{-n}(A)) ≤ ∑_{n=1}^∞ ∑_{i=0}^{n-1} P(ϕ^{-i}(A)△ϕ^{-i-1}(A)) = ∑_{n=1}^∞ ∑_{i=0}^{n-1} P(ϕ^{-i}(A△ϕ^{-1}(A))) = ∑_{n=1}^∞ ∑_{i=0}^{n-1} P(A△ϕ^{-1}(A)) = 0.
Now we claim that P(B△C) = 0; then we are done, because C is strictly invariant and P(A△C) ≤ P(A△B) + P(B△C) = 0. To prove the claim, note first that since C ⊂ B it is enough to prove that P(B \ C) = 0. Now ϕ^{-1}(B) ⊂ B implies that
B ⊃ ϕ^{-1}(B) ⊃ ϕ^{-2}(B) ⊃ · · · ⊃ ϕ^{-n}(B) ⊃ · · · .
Hence
P(B \ C) = P(B \ ∩_{n=0}^∞ ϕ^{-n}(B)) ≤ ∑_{n=0}^∞ P(ϕ^{-n}(B) \ ϕ^{-n-1}(B)) = ∑_{n=0}^∞ P(B \ ϕ^{-1}(B)) = 0,
since P(B \ ϕ^{-1}(B)) ≤ P(A△ϕ^{-1}(A)) = 0.
2. (Problem 1.6 from section 6.1 in Durrett)
Continued fractions. Let ϕ(x) = 1/x − [1/x] for x ∈ (0, 1) and A(x) = [1/x], where [1/x] is the largest integer ≤ 1/x. Then a_n = A(ϕ^n x), n = 0, 1, 2, . . . gives the continued fraction representation of x, i.e.
x = 1/(a_0 + 1/(a_1 + 1/(a_2 + · · · ))).
Show that ϕ preserves
µ(A) = (1/log 2) ∫_A dx/(1 + x) for A ⊂ (0, 1).
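Before the proof, a quick numerical sanity check of the digit map; the test value x = √2 − 1, whose expansion is [0; 2, 2, 2, . . .], is our own choice.

```python
import math

# Iterate phi(x) = 1/x - [1/x] and read off a_n = [1 / phi^n(x)].
# Example: x = sqrt(2) - 1 has continued fraction digits 2, 2, 2, ...
def phi(x):
    return 1.0 / x - math.floor(1.0 / x)

def cf_digits(x, k):
    """First k continued fraction digits a_0, ..., a_{k-1} of x in (0, 1)."""
    digits = []
    for _ in range(k):
        digits.append(int(math.floor(1.0 / x)))  # a_n = [1/phi^n x]
        x = phi(x)
    return digits

print(cf_digits(math.sqrt(2) - 1, 5))   # [2, 2, 2, 2, 2]
```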
Solution: Note that it is enough to prove the statement for A of the form A = [a, b] with 0 < a < b < 1; we can then use the π–λ theorem to extend the result to any measurable A ⊂ (0, 1). We have
ϕ^{-1}([a, b]) = ∪_{n=1}^∞ [1/(n + b), 1/(n + a)]
and
µ([1/(n + b), 1/(n + a)]) = (1/log 2) [log((n + 1 + a)/(n + a)) − log((n + 1 + b)/(n + b))].
Hence
µ(ϕ^{-1}([a, b])) = ∑_{n=1}^∞ µ([1/(n + b), 1/(n + a)])
= (1/log 2) ∑_{n=1}^∞ [log((n + 1 + a)/(n + a)) − log((n + 1 + b)/(n + b))]
= (1/log 2) lim_{N→∞} ∑_{n=1}^N [log((n + 1 + a)/(n + a)) − log((n + 1 + b)/(n + b))]
= (1/log 2) lim_{N→∞} log( (N + 1 + a)(1 + b) / ((N + 1 + b)(1 + a)) )
= (1/log 2) log((1 + b)/(1 + a)) = µ([a, b]),
where the fourth equality uses that both sums telescope.
3. (Problem 1.7 from section 6.1 in Durrett)
Independent blocks. Let X_1, X_2, . . . be a stationary sequence. Let n < ∞ and let Y_1, Y_2, . . . be a sequence so that the blocks (Y_{nk+1}, . . . , Y_{n(k+1)}), k ≥ 0, are i.i.d. and (Y_1, Y_2, . . . , Y_n) has the same distribution as (X_1, X_2, . . . , X_n). Finally let ν be uniformly distributed on {1, 2, . . . , n}, independent of Y, and let Z_m = Y_{ν+m} for m ≥ 1. Show that Z is stationary and ergodic.
Solution: To check stationarity it is enough to show that for any integer k ≥ 0 and any measurable set A in the sequence space we have
P((Z_k, Z_{k+1}, . . .) ∈ A)
= ∑_{i=1}^n P((Y_{ν+k}, Y_{ν+k+1}, . . .) ∈ A | ν = i) P(ν = i)
= (1/n) ∑_{i=1}^n P((Y_{i+k}, Y_{i+k+1}, . . .) ∈ A), since ν is uniform over {1, 2, . . . , n} and independent of Y,
= (1/n) ∑_{i=2}^n P((Y_{i+k}, Y_{i+k+1}, . . .) ∈ A) + (1/n) P((Y_{k+1}, Y_{k+2}, . . .) ∈ A)
= (1/n) ∑_{i=1}^{n-1} P((Y_{i+k+1}, Y_{i+k+2}, . . .) ∈ A) + (1/n) P((Y_{k+n+1}, Y_{k+n+2}, . . .) ∈ A)
= (1/n) ∑_{i=1}^n P((Y_{i+k+1}, Y_{i+k+2}, . . .) ∈ A) = P((Z_{k+1}, Z_{k+2}, . . .) ∈ A),
where the fourth equality re-indexes the sum and uses that the law of Y is invariant under shifts by n (the blocks are i.i.d.),
and we are done. Intuitively, for any two integers p, q, the stretch Z_p, Z_{p+1}, . . . , Z_{p+q-1} consists of a partial block whose length is uniformly distributed on {1, . . . , n}, then a number of full blocks of length n, and then a final partial block.
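For a concrete check of the stationarity computation, here is a small exact calculation with n = 2 (the block pmf below is our own example, chosen with equal coordinate marginals): the law of (Z_m, Z_{m+1}) does not depend on m.

```python
from fractions import Fraction as F
from itertools import product

# Hypothetical block distribution of (Y_1, Y_2) on {0,1}^2; blocks are i.i.d.
p = {(0, 0): F(2, 5), (0, 1): F(1, 10), (1, 0): F(1, 10), (1, 1): F(2, 5)}
p1 = {a: sum(p[(a, b)] for b in (0, 1)) for a in (0, 1)}  # law of a block's 1st coord
p2 = {b: sum(p[(a, b)] for a in (0, 1)) for b in (0, 1)}  # law of a block's 2nd coord

def pair_law(j):
    """Exact joint law of (Y_j, Y_{j+1}) for the n = 2 block process."""
    if j % 2 == 1:   # Y_j and Y_{j+1} sit in the same block
        return dict(p)
    # Y_j ends one block, Y_{j+1} starts the next: independent coordinates.
    return {(a, b): p2[a] * p1[b] for a, b in product((0, 1), repeat=2)}

def z_pair_law(m):
    """Law of (Z_m, Z_{m+1}), averaging over nu uniform on {1, 2}."""
    return {(a, b): (pair_law(m + 1)[(a, b)] + pair_law(m + 2)[(a, b)]) / 2
            for a, b in product((0, 1), repeat=2)}

print(z_pair_law(1) == z_pair_law(2) == z_pair_law(3))   # True: pairs are stationary
```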
To check ergodicity we note that invariant events for Z lie in the tail σ-field of the Z_m, which is contained in the tail σ-field of the block process; the latter is trivial by Kolmogorov's 0–1 law since the blocks are i.i.d.
4. (Problem 2.1 from section 6.2 in Durrett)
Show that if X ∈ Lp with p > 1 then the convergence in Birkhoff’s ergodic theorem occurs in Lp.
Solution: Let X′_M and X′′_M be as in the proof of Birkhoff's ergodic theorem, so that X = X′_M + X′′_M with |X′_M| ≤ M. Then the claim follows from the following two results. First, (1/n) ∑_{m=0}^{n-1} X′_M(ϕ^m ω) − E(X′_M | I) → 0 a.s. together with |X′_M| ≤ M implies, by bounded convergence, that
‖ (1/n) ∑_{m=0}^{n-1} X′_M(ϕ^m ω) − E(X′_M | I) ‖_p → 0,
and second,
‖ (1/n) ∑_{m=0}^{n-1} X′′_M(ϕ^m ω) − E(X′′_M | I) ‖_p ≤ ‖ (1/n) ∑_{m=0}^{n-1} X′′_M(ϕ^m ω) ‖_p + ‖ E(X′′_M | I) ‖_p
≤ (1/n) ∑_{m=0}^{n-1} ‖ X′′_M(ϕ^m ω) ‖_p + ‖ E(X′′_M | I) ‖_p ≤ 2 ‖ X′′_M ‖_p,
which can be made arbitrarily small by taking M large, since ‖X′′_M‖_p → 0 as M → ∞.
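As an illustration (not part of the proof), one can watch the L^p convergence numerically for a concrete ergodic system; the rotation ϕ(ω) = ω + α mod 1 with α = √2 − 1 and X(ω) = cos(2πω), so E(X | I) = 0, are our own choices.

```python
import math

alpha = math.sqrt(2) - 1   # irrational rotation number; the system is ergodic

def avg_Lp_error(n, p=2, grid=512):
    """L^p norm over a uniform grid of w of the n-step ergodic average of X."""
    total = 0.0
    for k in range(grid):
        w = k / grid
        a = sum(math.cos(2 * math.pi * ((w + m * alpha) % 1.0)) for m in range(n)) / n
        total += abs(a) ** p / grid
    return total ** (1 / p)

print(avg_Lp_error(10) > avg_Lp_error(2000))   # True: the L^2 error shrinks with n
```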
5. (Problem 3.3 from section 6.3 in Durrett)
Suppose that X_1, X_2, . . . are i.i.d. with P(X_i < −1) = 0 and EX_i > 0. If P(X_i = −1) > 0 there is a unique θ < 0 such that E exp(θX_i) = 1. Let S_n = X_1 + X_2 + · · · + X_n and N = inf{n : S_n < 0}. Show that exp(θS_n) is a martingale and use the optional stopping theorem to conclude that P(N < ∞) = e^θ.
Solution: Consider the function ϕ(θ) = E(e^{θX_1}). ϕ is well-defined for θ ∈ (−∞, 0] because e^{θX_1} ≤ e^{−θ} a.s. for θ ≤ 0. Now ϕ is a convex function, ϕ(0) = 1, ϕ(θ) ≥ e^{−θ} P(X_1 = −1) so that lim_{θ→−∞} ϕ(θ) = ∞, and ϕ′(0−) = E(X_1) > 0. This implies that there is an ε < 0 such that ϕ(θ) < ϕ(0) = 1 for all θ ∈ (ε, 0), and hence by the Intermediate Value Theorem and the convexity of ϕ there is a unique θ_0 ≤ ε < 0 such that ϕ(θ_0) = 1.
That exp(θ_0 S_n) is a martingale follows easily from independence and the fact that E(exp(θ_0 X_i)) = 1 for all i.
Recall that N = inf{n : S_n < 0}. Clearly N is a stopping time and, by the optional sampling theorem, (exp(θ_0 S_{n∧N}))_{n≥0} is a martingale. Hence E exp(θ_0 S_{n∧N}) = E exp(θ_0 S_0) = 1. Since exp(θ_0 S_{n∧N}) ≤ exp(−θ_0) (as S_{n∧N} ≥ −1 and θ_0 < 0) and S_n → ∞ as n → ∞ by the SLLN, using DCT we have
1 = lim_{n→∞} E exp(θ_0 S_{n∧N}) = E(exp(θ_0 S_N) 1{N < ∞}) ≤ exp(−θ_0) P(N < ∞).
Hence we have P(N < ∞) ≥ e^{θ_0}. If instead we define N_1 = inf{n : S_n = −1}, then S_{N_1} = −1 on {N_1 < ∞}, the inequality above becomes an equality, and P(N_1 < ∞) = e^{θ_0}.
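To see the bound concretely, take the simple random walk case (our own worked example, not from the problem): X_i ∈ {−1, +1} with P(X_i = −1) = q < 1/2. Then ϕ(θ) = q e^{−θ} + (1 − q) e^{θ}, and solving ϕ(θ_0) = 1 gives e^{θ_0} = q/(1 − q); since here S_N = −1 exactly, N = N_1 and P(N < ∞) = e^{θ_0}, the classical ruin probability. A bisection check:

```python
import math

q = 0.3   # P(X = -1); EX = 1 - 2q = 0.4 > 0

def phi(theta):
    """phi(theta) = E exp(theta * X) for X in {-1, +1}."""
    return q * math.exp(-theta) + (1 - q) * math.exp(theta)

# phi(0) = 1 and phi'(0-) = EX > 0, so the nontrivial root of phi = 1 is negative.
lo, hi = -20.0, -1e-9          # phi(lo) > 1 > phi(hi)
for _ in range(100):
    mid = (lo + hi) / 2
    if phi(mid) > 1:
        lo = mid
    else:
        hi = mid
theta0 = (lo + hi) / 2
print(round(math.exp(theta0), 6), round(q / (1 - q), 6))   # both 0.428571
```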