Chapter 2: Markov Chains

L. Breuer, University of Kent, UK

October 23, 2013



Stationary Distributions

Let X denote a Markov chain with state space E. Let π denote a probability measure on E. If P(X0 = i) = πi implies P(Xn = i) = πi for all n ∈ N and i ∈ E, then π is called a stationary distribution for X. If π is a stationary distribution, then c · π for any c ≥ 0 is called a stationary measure.


Theorem 2.18

Let X denote a Markov chain with state space E and transition matrix P. Further, let π denote a probability distribution on E with πP = π, i.e.

πj = ∑_{i∈E} πi pij   and   ∑_{j∈E} πj = 1

for all j ∈ E. Then π is a stationary distribution for X. If π is a stationary distribution for X, then πP = π holds.
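The fixed-point condition of theorem 2.18 is easy to check numerically. The following sketch (an assumed two-state illustration, not part of the slides) starts a chain in a distribution π with πP = π and verifies that the distribution of Xn stays equal to π:

```python
# Assumed two-state transition matrix; pi satisfies pi P = pi.
P = [[0.75, 0.25],
     [0.25, 0.75]]
pi = [0.5, 0.5]

def step(dist, P):
    # one transition: (dist P)_j = sum_i dist_i * p_ij
    return [sum(dist[i] * P[i][j] for i in range(len(P)))
            for j in range(len(P))]

dist = pi
for n in range(5):
    dist = step(dist, P)  # distribution of X_{n+1}
print(dist)  # stays [0.5, 0.5] at every step
```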


Proof of theorem 2.18

Let P(X0 = i) = πi for all i ∈ E. Then P(Xn = i) = P(X0 = i) for all n ∈ N and i ∈ E follows by induction on n. The case n = 1 holds by assumption, and the induction step follows from the induction hypothesis and the Markov property. The last statement is obvious.


Example 2.19

Let the transition matrix of a Markov chain X be given by

P =
    0.8  0.2  0    0
    0.2  0.8  0    0
    0    0    0.4  0.6
    0    0    0.6  0.4

Then π = (0.5, 0.5, 0, 0), π′ = (0, 0, 0.5, 0.5), as well as any convex combination of them, are stationary distributions for X. This shows that a stationary distribution does not need to be unique.
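A quick numerical check (a sketch, not part of the slides) confirms that π, π′ and a convex combination of them all satisfy the criterion πP = π of theorem 2.18:

```python
# Verify stationarity of pi, pi' and a convex mixture for the matrix above.
P = [[0.8, 0.2, 0.0, 0.0],
     [0.2, 0.8, 0.0, 0.0],
     [0.0, 0.0, 0.4, 0.6],
     [0.0, 0.0, 0.6, 0.4]]

def times(dist, P):
    # left multiplication: (dist P)_j = sum_i dist_i * p_ij
    return [sum(dist[i] * P[i][j] for i in range(4)) for j in range(4)]

pi  = [0.5, 0.5, 0.0, 0.0]
pi2 = [0.0, 0.0, 0.5, 0.5]
mix = [0.5 * (a + b) for a, b in zip(pi, pi2)]  # convex combination

for d in (pi, pi2, mix):
    assert all(abs(x - y) < 1e-12 for x, y in zip(times(d, P), d))
print("pi, pi' and their mixture are all stationary")
```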


Example 2.20: Bernoulli process

The transition matrix of a Bernoulli process has the structure

P =
    1−p   p    0    0   ...
    0    1−p   p    0   ...
    0     0   1−p   p   ...
    ...   ...  ...  ...  ...

Hence πP = π implies first

π0 · (1 − p) = π0  ⇒  π0 = 0

since 0 < p < 1. Assume that πn = 0 for some n ∈ N0. This and the condition πP = π further imply for πn+1

πn · p + πn+1 · (1 − p) = πn+1  ⇒  πn+1 = 0

which completes an induction argument proving πn = 0 for all n ∈ N0. Hence the Bernoulli process does not have a stationary distribution.
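The induction above can be replayed mechanically. The sketch below (an assumed illustration with p = 0.3, not part of the slides) solves the balance equations state by state; every component comes out as 0, so no probability distribution can satisfy πP = π:

```python
# Solve the balance equations of the Bernoulli process step by step.
p = 0.3  # any value with 0 < p < 1

# State 0: pi_0 * (1 - p) = pi_0  =>  pi_0 * p = 0  =>  pi_0 = 0.
pi = [0.0]

# State n+1: pi_n * p + pi_{n+1} * (1 - p) = pi_{n+1}
# rearranges to pi_{n+1} * p = pi_n * p, i.e. pi_{n+1} = pi_n.
for n in range(10):
    pi.append(pi[n])

print(pi, sum(pi))  # all zeros; the total mass is 0, never 1
```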


Example 2.21

The solution of πP = π and ∑_{j∈E} πj = 1 is unique for

P =
    1−p   p
    p    1−p

with 0 < p < 1. Thus there are transition matrices which have exactly one stationary distribution.
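For this 2×2 matrix the system πP = π, π1 + π2 = 1 can be solved by hand; the sketch below (assuming p = 0.25 for concreteness, not part of the slides) records the derivation and checks the result:

```python
p = 0.25  # any value with 0 < p < 1 gives the same answer

# pi_1 (1-p) + pi_2 p = pi_1  =>  pi_2 p = pi_1 p  =>  pi_1 = pi_2,
# and pi_1 + pi_2 = 1 then forces pi = (0.5, 0.5).
pi = (0.5, 0.5)

# check pi P = pi
row = (pi[0] * (1 - p) + pi[1] * p,
       pi[0] * p + pi[1] * (1 - p))
print(row)  # (0.5, 0.5)
```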


Theorem 2.22

A transient Markov chain (i.e. a Markov chain with transient states only) has no stationary distribution.

Proof: Assume that πP = π holds for some distribution π. Further let E = N without loss of generality. Choose any state m ∈ N with πm > 0. Since the series ∑_{n=1}^{∞} πn = 1 converges, there is an index M > m such that ∑_{n=M}^{∞} πn < πm. Set ε := πm − ∑_{n=M}^{∞} πn. By theorem 2.17, there is an index N ∈ N such that P^N(i, m) < ε for all i ≤ M. Then the stationarity of π implies

πm = ∑_{i=1}^{∞} πi P^N(i, m)
   = ∑_{i=1}^{M−1} πi P^N(i, m) + ∑_{i=M}^{∞} πi P^N(i, m)
   < ε + ∑_{i=M}^{∞} πi = πm

which is a contradiction.


Positive / Null Recurrence

Define

Ni(n) := ∑_{k=0}^{n} I{Xk = i}

as the number of visits to state i until time n. Further define for a recurrent state i ∈ E the mean time of return

mi := E(τi | X0 = i)

By definition mi > 0 for all i ∈ E. A recurrent state i ∈ E with mi < ∞ will be called positive recurrent; otherwise i is called null recurrent.


Elementary renewal theorem

The elementary renewal theorem (which will be proven in chapter 4) states that

lim_{n→∞} E(Ni(n) | X0 = j) / n = 1 / mi

for all recurrent i ∈ E and independently of j ∈ E provided j ↔ i, with the convention 1/∞ := 0. Thus the asymptotic rate of visits to a recurrent state is determined by the mean recurrence time of this state.
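The renewal limit can be observed in simulation. The sketch below (an assumed illustration using the two-state chain of Example 2.21 with p = 0.25, not part of the slides) counts visits to state 0; by symmetry m0 = 2, so N0(n)/n should approach 1/m0 = 0.5:

```python
import random

random.seed(0)
p, n = 0.25, 200_000
state, visits = 0, 0
for _ in range(n):
    if random.random() < p:   # switch state with probability p
        state = 1 - state
    visits += (state == 0)    # N_0(n): number of visits to state 0

rate = visits / n
print(round(rate, 3))  # close to 1/m_0 = 0.5
```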


Theorem 2.23

Positive recurrence and null recurrence are class properties with respect to the relation of communication between states.

Proof: Assume that i ↔ j for two states i, j ∈ E and i is null recurrent. Thus there are numbers m, n ∈ N with P^n(i, j) > 0 and P^m(j, i) > 0. Because of the representation E(Ni(k) | X0 = i) = ∑_{l=0}^{k} P^l(i, i), we obtain


Proof of theorem 2.23 (contd.)

0 = lim_{k→∞} (1/k) ∑_{l=0}^{k} P^l(i, i)
  ≥ lim_{k→∞} (1/k) ∑_{l=0}^{k−m−n} P^l(j, j) · P^n(i, j) P^m(j, i)
  = lim_{k→∞} ((k−m−n)/k) · (1/(k−m−n)) ∑_{l=0}^{k−m−n} P^l(j, j) · P^n(i, j) P^m(j, i)
  = lim_{k→∞} (1/k) ∑_{l=0}^{k} P^l(j, j) · P^n(i, j) P^m(j, i)
  = P^n(i, j) P^m(j, i) / mj

and thus mj = ∞, which signifies the null recurrence of j.


Theorem 2.24

Let i ∈ E be positive recurrent and define the mean first visit time m_i := E(τ_i | X_0 = i). Then a stationary distribution π is given by

π_j := m_i^{-1} · ∑_{n=0}^{∞} P(X_n = j, τ_i > n | X_0 = i)

for all j ∈ E. In particular, π_i = m_i^{-1} and π_k = 0 for all states k outside of the communication class belonging to i.


Proof of theorem 2.24

First of all, π is a probability measure, since

∑_{j∈E} ∑_{n=0}^{∞} P(X_n = j, τ_i > n | X_0 = i) = ∑_{n=0}^{∞} ∑_{j∈E} P(X_n = j, τ_i > n | X_0 = i)

  = ∑_{n=0}^{∞} P(τ_i > n | X_0 = i) = m_i

The particular statements in the theorem are obvious from the definition of π and the fact that a recurrent communication class is closed.


Proof of theorem 2.24 (contd.)

The stationarity of π is shown as follows. First we obtain

π_j = m_i^{-1} · ∑_{n=0}^{∞} P(X_n = j, τ_i > n | X_0 = i)

    = m_i^{-1} · ∑_{n=1}^{∞} P(X_n = j, τ_i ≥ n | X_0 = i)

    = m_i^{-1} · ∑_{n=1}^{∞} P(X_n = j, τ_i > n − 1 | X_0 = i)

since X_0 = X_{τ_i} = i on the conditioning set {X_0 = i}. Further,


Proof of theorem 2.24 (contd.)

P(X_n = j, τ_i > n − 1 | X_0 = i) = P(X_n = j, τ_i > n − 1, X_0 = i) / P(X_0 = i)

  = ∑_{k∈E} P(X_n = j, X_{n−1} = k, τ_i > n − 1, X_0 = i) / P(X_0 = i)

  = ∑_{k≠i} [ P(X_n = j, X_{n−1} = k, τ_i > n − 1, X_0 = i) / P(X_{n−1} = k, τ_i > n − 1, X_0 = i) ]
        × [ P(X_{n−1} = k, τ_i > n − 1, X_0 = i) / P(X_0 = i) ]

  = ∑_{k∈E} p_{kj} P(X_{n−1} = k, τ_i > n − 1 | X_0 = i)


Proof of theorem 2.24 (contd.)

Hence we obtain

π_j = m_i^{-1} · ∑_{n=1}^{∞} ∑_{k∈E} p_{kj} P(X_{n−1} = k, τ_i > n − 1 | X_0 = i)

    = ∑_{k∈E} p_{kj} · m_i^{-1} ∑_{n=0}^{∞} P(X_n = k, τ_i > n | X_0 = i)

    = ∑_{k∈E} π_k p_{kj}

which completes the proof.
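As a numerical sanity check (not part of the lecture; the chain below is a made-up example), the construction of theorem 2.24 can be carried out directly: the vectors a_n with a_n(j) = P(X_n = j, τ_i > n | X_0 = i) satisfy a_0 = Dirac measure on i and a_n = a_{n−1} P̃, where P̃ denotes P with its i-th column set to zero.

```python
import numpy as np

P = np.array([[0.5,  0.5,  0.0],
              [0.25, 0.5,  0.25],
              [0.0,  0.5,  0.5]])
i = 0                                # recurrence point

Pt = P.copy()
Pt[:, i] = 0.0                       # P with the i-th column set to zero
a = np.zeros(3)
a[i] = 1.0                           # a_0: the chain starts in i and tau_i > 0
total = np.zeros(3)                  # entrywise sum over n of a_n
for _ in range(5000):                # the series converges geometrically
    total += a
    a = a @ Pt

m_i = total.sum()                    # = sum_n P(tau_i > n | X_0 = i) = E(tau_i | X_0 = i)
pi = total / m_i
print(pi, m_i)                       # pi is stationary and pi_i = 1/m_i
```

For this chain the result is π = (0.25, 0.5, 0.25) with m_0 = 4 = 1/π_0, and πP = π can be verified directly.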


Theorem 2.25

Let X denote an irreducible, positive recurrent Markov chain. Then X has a unique stationary distribution.

Proof:
Existence has been shown in theorem 2.24. Uniqueness of the stationary distribution can be seen as follows. Let π denote the stationary distribution as constructed in theorem 2.24 and i the positive recurrent state that served as recurrence point for π. Further, let ν denote any stationary distribution for X. Then there is a state j ∈ E with ν_j > 0 and a number m ∈ N with P^m(j,i) > 0, since X is irreducible.


Proof of theorem 2.25 (contd.)

Consequently we obtain

ν_i = ∑_{k∈E} ν_k P^m(k,i) ≥ ν_j P^m(j,i) > 0

Hence we can multiply ν by a factor c > 0 such that c·ν_i = π_i = 1/m_i. Denote ν̃ := c·ν, i.e. ν̃_k := c·ν_k for all k ∈ E. Let P̃ denote the transition matrix P without the i-th column, i.e. P̃ = (p̃_{hk})_{h,k∈E} with

p̃_{hk} = p_{hk} if k ≠ i,   and   p̃_{hk} = 0 if k = i.

Denote further the Dirac measure on i by δ^i, i.e.

δ^i_k = 1 if k = i,   and   δ^i_k = 0 if k ≠ i.


Proof of theorem 2.25 (contd.)

Then the stationary distribution π can be represented by

π = m_i^{-1} · δ^i ∑_{n=0}^{∞} P̃^n

We first claim that

m_i ν̃ = δ^i + m_i ν̃ P̃

This is clear for the entry ν̃_i (there it reads m_i ν̃_i = m_i π_i = 1 = δ^i_i, since (ν̃P̃)_i = 0) and easily seen for ν̃_k with k ≠ i, because in this case

(ν̃ P̃)_k = c · (ν P)_k = c · ν_k = ν̃_k


Proof of theorem 2.25 (contd.)

Now we can proceed with the same argument to see that

m_i ν̃ = δ^i + (δ^i + m_i ν̃ P̃) P̃ = δ^i + δ^i P̃ + m_i ν̃ P̃^2 = …

     = δ^i ∑_{n=0}^{∞} P̃^n = m_i π

Hence ν̃ already is a probability measure and thus c = 1. This yields ν = ν̃ = π and thus the statement.
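On a finite state space the representation π = m_i^{-1} · δ^i ∑_{n=0}^{∞} P̃^n from this proof can be evaluated in closed form: P̃ is substochastic with spectral radius below 1, so the series is the Neumann series of (I − P̃)^{-1}. A short sketch with the same made-up example chain as above (not part of the lecture):

```python
import numpy as np

P = np.array([[0.5,  0.5,  0.0],
              [0.25, 0.5,  0.25],
              [0.0,  0.5,  0.5]])
i = 0
Pt = P.copy()
Pt[:, i] = 0.0                       # P~ : P without the i-th column
delta = np.zeros(3)
delta[i] = 1.0                       # Dirac measure on i

# delta^i * sum_{n>=0} P~^n  =  delta^i * (I - P~)^{-1}
row = delta @ np.linalg.inv(np.eye(3) - Pt)
m_i = row.sum()                      # = E(tau_i | X_0 = i)
pi = row / m_i
print(pi, m_i)
```

This reproduces π = (0.25, 0.5, 0.25) and m_0 = 4, matching the series computation for theorem 2.24.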


Theorem 2.27

Let X denote an irreducible, positive recurrent Markov chain. Then the stationary distribution π of X is given by

π_j = m_j^{-1} = 1 / E(τ_j | X_0 = j)

for all j ∈ E.

Proof:
Since all states in E are positive recurrent, the construction in theorem 2.24 can be pursued for any initial state j. This yields π_j = m_j^{-1} for all j ∈ E. The statement now follows from the uniqueness of the stationary distribution.
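Theorem 2.27 lends itself to a simulation check (illustrative example chain, not from the lecture): the empirical mean first return time to a state j should be close to 1/π_j.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.5,  0.5,  0.0],
              [0.25, 0.5,  0.25],
              [0.0,  0.5,  0.5]])
j = 1                                 # for this chain pi_1 = 0.5, so m_1 = 2

def first_return_time(start, rng):
    """Simulate one excursion: number of steps until the chain returns to start."""
    state, steps = start, 0
    while True:
        state = rng.choice(3, p=P[state])
        steps += 1
        if state == start:
            return steps

m_hat = np.mean([first_return_time(j, rng) for _ in range(20000)])
print(m_hat)                          # close to 2 = 1/pi_1
```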


Theorem 2.28

For an irreducible, positive recurrent Markov chain, the stationary probability π_j of a state j coincides with its asymptotic rate of recurrence, i.e.

lim_{n→∞} E(N_j(n) | X_0 = i) / n = π_j

for all j ∈ E and independently of i ∈ E. Further, if an asymptotic distribution p_j = lim_{n→∞} P(X_n = j) for all j ∈ E does exist, then it coincides with the stationary distribution. In particular, it is independent of the initial distribution of X.
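The occupation-rate statement can be illustrated numerically: average the matrix powers P^l and compare each row of the average with π. The chain below is an arbitrary illustrative example, not one from the text.

```python
import numpy as np

# Arbitrary irreducible, aperiodic 3-state chain (illustration only).
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2],
              [0.5, 0.2, 0.3]])
k = P.shape[0]

# Stationary distribution from pi P = pi, sum(pi) = 1.
A = np.vstack([P.T - np.eye(k), np.ones(k)])
b = np.zeros(k + 1)
b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# E(N_j(n) | X0 = i) = sum_{l=0}^{n} P^l(i, j); divide by n for the rate.
n = 5000
S = np.zeros_like(P)
Pl = np.eye(k)
for _ in range(n + 1):
    S += Pl
    Pl = Pl @ P
rates = S / n

# Every row is close to pi, regardless of the starting state i.
print(np.round(rates - pi, 3))
```

Each row of `rates` approaches π as n grows, independently of the initial state, matching the theorem.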

Proof of theorem 2.28

The first statement immediately follows from the elementary renewal theorem. For the second statement, it suffices to employ E(N_j(n) | X_0 = i) = ∑_{l=0}^{n} P^l(i, j). If an asymptotic distribution does exist, then for any initial distribution ν we obtain

p_j = lim_{n→∞} (νP^n)_j = ∑_{i∈E} ν_i lim_{n→∞} P^n(i, j)
    = ∑_{i∈E} ν_i lim_{n→∞} (1/n) ∑_{l=0}^{n} P^l(i, j)
    = ∑_{i∈E} ν_i π_j = π_j

Example

Let X denote a Markov chain with transition matrix

P = ( 0  1 )
    ( 1  0 )

Then X has no asymptotic distribution, but a stationary distribution, namely π = (1/2, 1/2).
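This can be checked in a few lines: π = (1/2, 1/2) satisfies πP = π, while the powers P^n alternate between the identity and P, so no limit exists.

```python
import numpy as np

# Two-state chain that deterministically swaps states each step.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
pi = np.array([0.5, 0.5])

print(pi @ P)                          # [0.5 0.5]: pi is stationary
print(np.linalg.matrix_power(P, 10))   # even power: identity matrix
print(np.linalg.matrix_power(P, 11))   # odd power: P itself, no convergence
```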

Theorem 2.31

An irreducible Markov chain with finite state space F is positive recurrent.

Proof: For all n ∈ N and i ∈ F we have

∑_{j∈F} P^n(i, j) = 1

Hence it is not possible that lim_{n→∞} P^n(i, j) = 0 for all j ∈ F. Thus there is one state h ∈ F such that

∑_{n=0}^{∞} P^n(i, h) = r_{ih} = f_{ih} r_{hh} = ∞

which means by corollary 2.15 that h is recurrent and by irreducibility that the chain is recurrent.

Proof of theorem 2.31

If the chain were null recurrent, then according to the elementary renewal theorem

lim_{n→∞} (1/n) ∑_{k=1}^{n} P^k(i, j) = 0

would hold for all j ∈ F, independently of i because of irreducibility. But this would imply that

lim_{n→∞} P^n(i, j) = 0

for all j ∈ F, which contradicts our first observation in this proof. Hence the chain must be positive recurrent.

The Geo/Geo/1 queue in discrete time

Choose any parameters 0 < p, q < 1. Let the arrival process be distributed as a Bernoulli process with parameter p and the service times (S_n : n ∈ N_0) be iid according to the geometric distribution with parameter q.

Theorem 2.34 (memoryless property)

Let S be distributed geometrically with parameter q, i.e. let P(S = k) = (1−q)^{k−1} q for all k ∈ N. Then P(S = k | S > k−1) = q, independently of k.

Proof:

P(S = k | S > k−1) = P(S = k, S > k−1) / P(S > k−1)
                   = P(S = k) / P(S > k−1)
                   = (1−q)^{k−1} q / (1−q)^{k−1} = q
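A quick numerical check of the memoryless property, using the pmf and the tail P(S > k) = (1−q)^k of the geometric distribution (the value q = 0.3 is an arbitrary choice):

```python
q = 0.3  # arbitrary parameter, 0 < q < 1

def pmf(k: int) -> float:
    """P(S = k) = (1 - q)^(k - 1) * q for k = 1, 2, ..."""
    return (1 - q) ** (k - 1) * q

def tail(k: int) -> float:
    """P(S > k) = (1 - q)^k."""
    return (1 - q) ** k

for k in (1, 2, 5, 20):
    cond = pmf(k) / tail(k - 1)  # P(S = k | S > k - 1)
    print(k, cond)               # always equals q, whatever k is
```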

The Geo/Geo/1 queue as a Markov chain

Let Q_n denote the number of users in the system at time n ∈ N_0. Then the state space is E = N_0. The transition probabilities are p_{01} := p, p_{00} := 1−p, and

p_{ij} :=  p(1−q),              j = i+1
           pq + (1−p)(1−q),     j = i
           q(1−p),              j = i−1

for i ≥ 1.

Transition matrix

Thus the transition matrix is tridiagonal, i.e.

P = ( 1−p      p                0                ...
      q(1−p)   pq+(1−p)(1−q)    p(1−q)           ...
      0        q(1−p)           pq+(1−p)(1−q)    ...
      ...      ...              ...              ... )

Abbreviate p′ := p(1−q) and q′ := q(1−p).
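A finite truncation of this matrix can be built for experiments. In the sketch below, the truncation level N and the fold-back of the up-jump at the boundary are devices of this illustration, not part of the text; the check confirms every row is stochastic:

```python
import numpy as np

def geo_geo_1_matrix(p: float, q: float, N: int) -> np.ndarray:
    """Tridiagonal Geo/Geo/1 transition matrix truncated at queue length N.
    The up-jump probability from state N is folded into the diagonal so
    that every row still sums to one (a truncation device for illustration)."""
    pp = p * (1 - q)   # p': an arrival and no departure
    qq = q * (1 - p)   # q': a departure and no arrival
    P = np.zeros((N + 1, N + 1))
    P[0, 0], P[0, 1] = 1 - p, p
    for i in range(1, N + 1):
        P[i, i - 1] = qq
        P[i, i] = p * q + (1 - p) * (1 - q)
        if i < N:
            P[i, i + 1] = pp
        else:
            P[i, i] += pp
    return P

P = geo_geo_1_matrix(p=0.2, q=0.5, N=50)
print(P[:3, :4])       # upper-left corner of the matrix
print(P.sum(axis=1))   # every row sums to 1 (up to float rounding)
```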

Stationarity condition for the Geo/Geo/1 queue

Then the condition πP = π means

π_0 = π_0(1−p) + π_1 q′
π_1 = π_0 p + π_1(1−(p′+q′)) + π_2 q′

and

π_n = π_{n−1} p′ + π_n(1−(p′+q′)) + π_{n+1} q′

for all n ≥ 2.

Stationary distribution - 1

We try the geometric form

π_{n+1} = π_n · r

for all n ≥ 1, with 0 < r < 1. Then stationarity yields

0 = π_n p′ − π_n r(p′+q′) + π_n r² q′ = π_n (p′ − r(p′+q′) + r² q′)

and hence r = p′/q′ < 1 ⟺ p < q. Further,

π_1 = π_0 p/q′ = π_0 ρ/(1−p)

with ρ := p/q,
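These relations can be confirmed numerically on a large but finite truncation of the chain (when p < q, the mass beyond the truncation level is negligible; the parameters and the boundary fold-back below are choices of this illustration): the computed stationary vector exhibits the ratio p/q′ at level 0 and the geometric ratio r = p′/q′ from level 1 onwards.

```python
import numpy as np

p, q = 0.2, 0.5                    # arbitrary stable example: p < q
pp, qq = p * (1 - q), q * (1 - p)  # p' and q'
r = pp / qq                        # predicted geometric ratio p'/q'

# Truncated Geo/Geo/1 matrix; the up-jump at level N is folded back
# into the diagonal so rows stay stochastic (truncation device).
N = 100
P = np.zeros((N + 1, N + 1))
P[0, 0], P[0, 1] = 1 - p, p
for i in range(1, N + 1):
    P[i, i - 1] = qq
    P[i, i] = p * q + (1 - p) * (1 - q)
    if i < N:
        P[i, i + 1] = pp
    else:
        P[i, i] += pp

# Stationary distribution of the truncated chain.
A = np.vstack([P.T - np.eye(N + 1), np.ones(N + 1)])
b = np.zeros(N + 2)
b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

print(pi[1] / pi[0], p / qq)  # matches pi_1 = pi_0 * p / q'
print(pi[2] / pi[1], r)       # geometric ratio r from level 1 on
```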

Stationary distribution - 1

We try the geometric form

πn+1 = πn · r

for all n ≥ 1, with 0 < r < 1. Then stationarity yields

0 = πnp′ − πnr(p′ + q′) + πnr

2q′

= πn(p′ − r(p′ + q′) + r2q′

)and hence r = p′/q′ < 1 ⇐⇒ p < q. Further,

π1 = π0p

q′= π0

ρ

1− p

with ρ := p/q,

L. Breuer Chapter 2: Markov Chains

Page 169: Chapter 2: Markov Chains · Further, let ˇdenote a probability distribution on E with ˇP = ˇ, i.e. ˇ j = X i2E ˇ ip ij and X j2E ˇ j = 1 for all j 2E. Then ˇis a stationary

Stationary distribution - 2

and

π_2 = (1/q′) (π_1 (p′ + q′) − π_0 p)
    = (1/q′) ((p/q′)(p′ + q′) − p) π_0
    = π_0 (p/q′) ((p′ + q′)/q′ − 1)
    = π_1 · p′/q′
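The collapse of the recursion for π_2 into the geometric step π_1 · p′/q′ is easy to confirm with numbers. Again, p′ = p(1 − q) and q′ = q(1 − p) are assumed forms, not defined on this slide:

```python
# Check that pi_2 = (pi_1*(p'+q') - pi_0*p)/q' equals pi_1 * p'/q'.
p, q = 0.2, 0.5
pp, qp = p * (1 - q), q * (1 - p)   # assumed p', q'

pi0 = 1.0                 # unnormalised; any positive starting value works
pi1 = pi0 * p / qp        # from the previous slide

pi2_recursion = (pi1 * (pp + qp) - pi0 * p) / qp   # balance equation at state 1
pi2_geometric = pi1 * pp / qp                      # geometric ansatz

print(abs(pi2_recursion - pi2_geometric) < 1e-12)
```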

Stationary distribution - 3

Normalisation of π yields

1 = Σ_{n=0}^∞ π_n = π_0 (1 + (p/q′) Σ_{n=1}^∞ (p′/q′)^{n−1})

and hence

π_0 = (1 + (p/q′) Σ_{n=1}^∞ (p′/q′)^{n−1})^{−1} = 1 − ρ

Verify this as an exercise!
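The exercise can at least be checked numerically. A minimal sketch, assuming as before p′ = p(1 − q) and q′ = q(1 − p); with these forms q′ − p′ = q − p, which is what makes the closed form collapse to 1 − ρ:

```python
# Verify pi_0 = 1 - rho and that the resulting distribution sums to 1.
p, q = 0.2, 0.5
pp, qp = p * (1 - q), q * (1 - p)   # assumed p', q'
rho = p / q
r = pp / qp

# Geometric series: sum_{n>=1} r**(n-1) = 1/(1-r), so
# pi_0 = (1 + (p/q') * 1/(1-r))**(-1).
pi0 = 1.0 / (1.0 + (p / qp) / (1.0 - r))
print(abs(pi0 - (1 - rho)) < 1e-12)

# Full distribution pi_n = pi_0 * (p/q') * r**(n-1) for n >= 1; its total
# mass (truncated far into the geometric tail) should be 1.
total = pi0 + sum(pi0 * (p / qp) * r ** (n - 1) for n in range(1, 200))
print(abs(total - 1.0) < 1e-9)
```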
