Markov Chains (Part 4) - University of Washington
courses.washington.edu/inde411/MarkovChains(part4).pdf

Transcript
Page 1


Markov Chains (Part 4)

Steady State Probabilities and First Passage Times

Page 2


Steady-State Probabilities

•  Remember, for the inventory example we had

$$P^{(8)} = \begin{bmatrix}
0.286 & 0.285 & 0.263 & 0.166 \\
0.286 & 0.285 & 0.263 & 0.166 \\
0.286 & 0.285 & 0.263 & 0.166 \\
0.286 & 0.285 & 0.263 & 0.166
\end{bmatrix}$$

•  For an irreducible ergodic Markov chain,

$$\pi_j = \lim_{n \to \infty} p_{ij}^{(n)}$$

where $\pi_j$ = steady-state probability of being in state j. Note that the limit does not depend on the starting state i, which is why all rows of $P^{(8)}$ above are (nearly) identical.

Page 3


Some Observations About the Limit

•  The behavior of this important limit depends on properties of states i and j and the Markov chain as a whole.
   –  If i and j are recurrent and belong to different classes, then $p_{ij}^{(n)} = 0$ for all n.
   –  If j is transient, then $\lim_{n \to \infty} p_{ij}^{(n)} = 0$ for all i. Intuitively, the probability that the Markov chain is in a transient state after a large number of transitions tends to zero.
   –  In some cases, the limit does not exist! Consider a two-state Markov chain that alternates deterministically between states 0 and 1 (i.e., $p_{01} = p_{10} = 1$): if the chain starts out in state 0, it will be back in 0 at times 2, 4, 6, … and in state 1 at times 1, 3, 5, …. Thus $p_{00}^{(n)} = 1$ if n is even and $p_{00}^{(n)} = 0$ if n is odd, so $\lim_{n \to \infty} p_{00}^{(n)}$ does not exist.

Page 4


Steady-State Probabilities

•  How can we find these probabilities without calculating $P^{(n)}$ for very large n?
•  The following are the steady-state equations:

$$\sum_{j=0}^{M} \pi_j = 1$$

$$\pi_j = \sum_{i=0}^{M} \pi_i\, p_{ij} \quad \text{for all } j = 0, \ldots, M$$

$$\pi_j \ge 0 \quad \text{for all } j = 0, \ldots, M$$

•  In matrix notation we have $\pi^T P = \pi^T$
•  Solve a system of linear equations. Note: there are M+2 equations and only M+1 variables ($\pi_0, \pi_1, \ldots, \pi_M$), so one of the equations is redundant and can be dropped - just don't drop the equation $\sum_{j=0}^{M} \pi_j = 1$.
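The slides don't give an implementation, but the recipe above is a small linear solve. Below is a minimal Python/NumPy sketch (the function name `steady_state` is an assumption for illustration): rearrange $\pi^T P = \pi^T$ into $(P^T - I)\pi = 0$, overwrite one redundant balance equation with the normalization $\sum_j \pi_j = 1$, and solve.

```python
import numpy as np

def steady_state(P):
    """Steady-state probabilities of an irreducible ergodic Markov chain.

    Solves pi^T P = pi^T, i.e. (P^T - I) pi = 0, replacing one redundant
    balance equation with the normalization sum(pi) = 1.
    """
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    A = P.T - np.eye(n)      # balance equations
    A[-1, :] = 1.0           # overwrite the last one with sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)
```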

Page 5

Solving for the Steady-State Probabilities

•  Idea is to go from steady state to steady state: if the chain is distributed according to $\pi$ at time t, one more transition must leave that distribution unchanged:


$$\pi^T P = \pi^T \quad \text{and} \quad \sum_{i=0}^{M} \pi_i = 1$$

$$\begin{bmatrix} \pi_0 & \pi_1 & \cdots & \pi_M \end{bmatrix}
\begin{bmatrix}
p_{00} & p_{01} & \cdots & p_{0M} \\
p_{10} & p_{11} & \cdots & p_{1M} \\
\vdots & \vdots & \ddots & \vdots \\
p_{M0} & p_{M1} & \cdots & p_{MM}
\end{bmatrix}
= \begin{bmatrix} \pi_0 & \pi_1 & \cdots & \pi_M \end{bmatrix}$$

$$\begin{aligned}
\pi_0 p_{00} + \pi_1 p_{10} + \cdots + \pi_M p_{M0} &= \pi_0 \\
\pi_0 p_{01} + \pi_1 p_{11} + \cdots + \pi_M p_{M1} &= \pi_1 \\
&\;\;\vdots \\
\pi_0 p_{0M} + \pi_1 p_{1M} + \cdots + \pi_M p_{MM} &= \pi_M \\
\pi_0 + \pi_1 + \cdots + \pi_M &= 1
\end{aligned}$$

[Figure: one transition of the chain from time t to t+1; at time t the chain is in state i = 0, …, M with probability $\pi_i$, and the probability of being in state j afterward must again be $\pi_j$.]

Page 6


Steady-State Probabilities Examples

•  Find the steady-state probabilities for

$$P = \begin{bmatrix} 0.3 & 0.7 \\ 0.6 & 0.4 \end{bmatrix}$$

•  and for

$$P = \begin{bmatrix} 1/3 & 2/3 & 0 \\ 1/2 & 0 & 1/2 \\ 0 & 1/4 & 3/4 \end{bmatrix}$$

•  Inventory example:

$$P = \begin{bmatrix}
0.080 & 0.184 & 0.368 & 0.368 \\
0.632 & 0.368 & 0 & 0 \\
0.264 & 0.368 & 0.368 & 0 \\
0.080 & 0.184 & 0.368 & 0.368
\end{bmatrix}$$
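These three chains can be checked numerically with the `steady_state` sketch from the previous page (a usage illustration, not part of the original slides):

```python
P1 = [[0.3, 0.7],
      [0.6, 0.4]]
P2 = [[1/3, 2/3, 0],
      [1/2, 0, 1/2],
      [0, 1/4, 3/4]]
P_inv = [[0.080, 0.184, 0.368, 0.368],   # inventory example
         [0.632, 0.368, 0.000, 0.000],
         [0.264, 0.368, 0.368, 0.000],
         [0.080, 0.184, 0.368, 0.368]]

print(steady_state(P1))     # -> [0.4615 0.5385] = [6/13, 7/13]
print(steady_state(P2))     # -> [0.2 0.2667 0.5333] = [3/15, 4/15, 8/15]
print(steady_state(P_inv))  # -> approximately [0.286 0.285 0.263 0.166]
```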

Page 7

Other Applications of Steady-State Probabilities

•  Expected recurrence time: we are often interested in the expected number of steps between consecutive visits to a particular (recurrent) state.
   –  What is the expected number of sunny days between rainy days?
   –  What is the expected number of weeks between ordering cameras?
•  Long-run expected average cost per unit time: in many applications, we incur a cost or gain a reward every time a Markov chain visits a specific state.
   –  If we incur costs for carrying inventory, and costs for not meeting demand, what is the long-run expected average cost per unit time?


Page 8


Expected Recurrence Times

•  The expected recurrence time, denoted $\mu_{jj}$, is the expected number of transitions between two consecutive visits to state j.

•  The steady-state probabilities, $\pi_j$, are related to the expected recurrence times, $\mu_{jj}$, as

$$\pi_j = \frac{1}{\mu_{jj}} \quad \text{for all } j = 0, 1, \ldots, M$$

Page 9


Weather Example

•  What is the expected number of sunny days in between rainy days?
•  First, calculate $\pi_j$. With state 0 = Sun and state 1 = Rain,

$$P = \begin{bmatrix} 0.8 & 0.2 \\ 0.6 & 0.4 \end{bmatrix}$$

$$\begin{aligned}
\pi_0 (0.8) + \pi_1 (0.6) &= \pi_0 \\
\pi_0 (0.2) + \pi_1 (0.4) &= \pi_1 \\
\pi_0 + \pi_1 &= 1
\end{aligned}$$

Substituting $\pi_0 = 1 - \pi_1$ into the second equation:

$$(1 - \pi_1)(0.2) + \pi_1 (0.4) = \pi_1 \;\Rightarrow\; 0.2 = 0.8\,\pi_1 \;\Rightarrow\; \pi_1 = 1/4 \text{ and } \pi_0 = 3/4$$

•  Now, $\mu_{11} = 1/\pi_1 = 4$
•  For this example, rainy days recur every 4 days on average, so we expect 3 sunny days in between consecutive rainy days.

Page 10


Inventory Example

•  What is the expected number of weeks in between orders?
•  First, the steady-state probabilities are:

$$\pi_0 = 0.286, \;\; \pi_1 = 0.285, \;\; \pi_2 = 0.263, \;\; \pi_3 = 0.166$$

•  Now, $\mu_{00} = 1/\pi_0 = 3.5$
•  For this example, on the average, we order cameras every three and a half weeks.

Page 11


Expected Recurrence Times Examples

$$P = \begin{bmatrix} 0.3 & 0.7 \\ 0.6 & 0.4 \end{bmatrix}
\qquad \pi_0 = \tfrac{6}{13}, \;\; \pi_1 = \tfrac{7}{13}
\qquad \mu_{00} = \tfrac{13}{6} = 2.1667, \;\; \mu_{11} = \tfrac{13}{7} = 1.857$$

$$P = \begin{bmatrix} 1/3 & 2/3 & 0 \\ 1/2 & 0 & 1/2 \\ 0 & 1/4 & 3/4 \end{bmatrix}
\qquad \pi_0 = \tfrac{3}{15}, \;\; \pi_1 = \tfrac{4}{15}, \;\; \pi_2 = \tfrac{8}{15}
\qquad \mu_{00} = \tfrac{15}{3} = 5, \;\; \mu_{11} = \tfrac{15}{4} = 3.75, \;\; \mu_{22} = \tfrac{15}{8} = 1.875$$

[State transition diagrams for the two chains, with each arc labeled by the corresponding entry of P.]
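Numerically, these recurrence times follow directly from the steady-state probabilities via $\mu_{jj} = 1/\pi_j$; a quick check using the `steady_state` sketch and the matrices `P1`, `P2` defined earlier:

```python
for P in (P1, P2):
    pi = steady_state(P)
    print("pi =", pi, "  mu_jj =", 1.0 / pi)  # mu_jj = 1 / pi_j
```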

Page 12


Steady-State Cost Analysis

•  Once we know the steady-state probabilities, we can do some long-run analyses

•  Assume we have a finite-state, irreducible Markov chain
•  Let $C(X_t)$ be a cost at time t; that is, C(j) = expected cost of being in state j, for j = 0, 1, …, M
•  The expected average cost over the first n time steps is

$$E\!\left[\frac{1}{n}\sum_{t=1}^{n} C(X_t)\right]$$

•  The long-run expected average cost per unit time is a function of the steady-state probabilities:

$$\lim_{n \to \infty} E\!\left[\frac{1}{n}\sum_{t=1}^{n} C(X_t)\right] = \sum_{j=0}^{M} \pi_j\, C(j)$$

Page 13


Steady-State Cost Analysis Inventory Example

•  Suppose there is a storage cost for having cameras on hand:

$$C(i) = \begin{cases} 0 & \text{if } i = 0 \\ 2 & \text{if } i = 1 \\ 8 & \text{if } i = 2 \\ 18 & \text{if } i = 3 \end{cases}$$

•  The long-run expected average cost per unit time is

$$\begin{aligned}
\pi_0 C(0) + \pi_1 C(1) + \pi_2 C(2) + \pi_3 C(3)
&= 0.286(0) + 0.285(2) + 0.263(8) + 0.166(18) \\
&= 5.662
\end{aligned}$$
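As a numerical check (again relying on the earlier `steady_state` sketch and the `P_inv` matrix), the long-run average cost is just a dot product of the steady-state distribution with the cost vector:

```python
import numpy as np

pi = steady_state(P_inv)          # approximately [0.286, 0.285, 0.263, 0.166]
cost = np.array([0, 2, 8, 18])    # storage cost C(i) for i = 0, 1, 2, 3
print(pi @ cost)                  # -> approximately 5.66 per week
```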

Page 14


First Passage Times - Motivation

•  In many applications, we are interested in the time at which the Markov chain visits a particular state for the first time.
   –  If I start out with a dollar, what is the probability that I will go broke (for the first time) after n gambles?
   –  If I start out with three cameras in my inventory, what is the expected number of days after which I will have none for the first time?
•  The answers to these questions are related to an important concept called "first passage times"

Page 15


First Passage Times

•  The first passage time from state i to state j is the number of transitions made by the process in going from state i to state j for the first time
•  When i = j, this first passage time is called the recurrence time for state i
•  Let $f_{ij}^{(n)}$ = probability that the first passage time from state i to state j is equal to n
•  What is the difference between $f_{ij}^{(n)}$ and $p_{ij}^{(n)}$? $p_{ij}^{(n)}$ includes paths that visit j before time n; $f_{ij}^{(n)}$ does not include paths that visit j before time n

[Figure: a sample path of $X_t$ that starts in state i at time t and reaches state j for the first time at time t+n.]

Page 16


Some Observations about First Passage Times

•  First passage times are random variables and have probability distributions associated with them

•  fij(n) = probability that the first passage time from state i to state j is equal to n

•  These probability distributions can be computed using a simple idea: condition on where the Markov chain goes after the first transition

•  For the first passage time from i to j to be n>1, the Markov chain has to transition from i to k (different from j) in one step, and then the first passage time from k to j must be n-1.

•  This concept can be used to derive recursive equations for fij(n)

Page 17


First Passage Times

The first passage time probabilities satisfy a recursive relationship: condition on the state reached after the first transition.

$$f_{ij}^{(1)} = p_{ij}$$

$$f_{ij}^{(2)} = \sum_{k \ne j} p_{ik}\, f_{kj}^{(1)}$$

$$\vdots$$

$$f_{ij}^{(n)} = \sum_{k \ne j} p_{ik}\, f_{kj}^{(n-1)}$$

[Figure: from state i at time t, one transition (with probabilities $p_{i0}, \ldots, p_{ii}, \ldots, p_{iM}$) leads to some state k at time t+1; from k the chain must then reach j for the first time at time t+n, with probability $f_{kj}^{(n-1)}$.]
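A minimal Python sketch of this recursion (the helper name `first_passage_probs` is assumed, not from the slides): keep the whole vector $f_{kj}^{(n)}$ for every start state k, since each step of the recursion needs it.

```python
import numpy as np

def first_passage_probs(P, i, j, n_max):
    """Return [f_ij^(1), ..., f_ij^(n_max)] for first passage from i to j."""
    P = np.asarray(P, dtype=float)
    M = P.shape[0]
    f = P[:, j].copy()            # f_kj^(1) = p_kj for every start state k
    out = [f[i]]
    mask = np.arange(M) != j      # the recursion sums over k != j
    for _ in range(n_max - 1):
        f = P[:, mask] @ f[mask]  # f_kj^(n) = sum_{k' != j} p_kk' f_k'j^(n-1)
        out.append(f[i])
    return out
```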

Page 18


First Passage Times Inventory Example

•  Suppose we were interested in the number of weeks until the first order (start in state 3, $X_0 = 3$)
•  Then we would need to know the probability that the first order is submitted in
   –  Week 1?
   –  Week 2?
   –  Week 3?

$$f_{30}^{(1)} = p_{30} = 0.080$$

$$\begin{aligned}
f_{30}^{(2)} &= \sum_{k \ne 0} p_{3k}\, f_{k0}^{(1)} = p_{31} f_{10}^{(1)} + p_{32} f_{20}^{(1)} + p_{33} f_{30}^{(1)} \\
&= p_{31} p_{10} + p_{32} p_{20} + p_{33} p_{30} \\
&= 0.184(0.632) + 0.368(0.264) + 0.368(0.080) = 0.243
\end{aligned}$$

$$f_{30}^{(3)} = \sum_{k \ne 0} p_{3k}\, f_{k0}^{(2)} = p_{31} f_{10}^{(2)} + p_{32} f_{20}^{(2)} + p_{33} f_{30}^{(2)}$$
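Running the `first_passage_probs` sketch on the inventory chain (matrix `P_inv` from earlier) reproduces these numbers:

```python
print(first_passage_probs(P_inv, i=3, j=0, n_max=3))
# -> approximately [0.080, 0.243, 0.254] for weeks 1, 2, 3
```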

Page 19


Probability of Ever Reaching j from i

•  If the chain starts out in state i, what is the probability that it visits state j at some future time?
•  This probability is denoted $f_{ij}$, and

$$f_{ij} = \sum_{n=1}^{\infty} f_{ij}^{(n)}$$

•  If $f_{ij} = 1$, then the chain starting at i definitely reaches j at some future time, in which case $f_{ij}^{(n)}$ is a genuine probability distribution for the first passage time.
•  On the other hand, if $f_{ij} < 1$, the chain starting at i may never reach j. In fact, the probability that this happens is $1 - f_{ij}$.

Page 20


Expected First Passage Times

•  The expected first passage time from state i to state j is

$$\mu_{ij} = \begin{cases} \infty & \text{if } f_{ij} < 1 \\ \displaystyle\sum_{n=1}^{\infty} n\, f_{ij}^{(n)} & \text{if } f_{ij} = 1 \end{cases}$$

•  If $f_{ij} = 1$, we can also calculate $\mu_{ij}$ using the idea of conditioning on where the chain goes after one transition:

$$\mu_{ij} = 1 \cdot p_{ij} + \sum_{k \ne j} p_{ik}(1 + \mu_{kj}) = \sum_{k} p_{ik} + \sum_{k \ne j} p_{ik}\,\mu_{kj} = 1 + \sum_{k \ne j} p_{ik}\,\mu_{kj}$$

so

$$\mu_{ij} = 1 + \sum_{\substack{k=0 \\ k \ne j}}^{M} p_{ik}\,\mu_{kj}$$

Page 21


Expected First Passage Times Inventory Example

•  Find the expected time until the first order is submitted

$$\begin{cases}
\mu_{30} = 1 + p_{31}\mu_{10} + p_{32}\mu_{20} + p_{33}\mu_{30} \\
\mu_{20} = 1 + p_{21}\mu_{10} + p_{22}\mu_{20} + p_{23}\mu_{30} \\
\mu_{10} = 1 + p_{11}\mu_{10} + p_{12}\mu_{20} + p_{13}\mu_{30}
\end{cases}$$

Solve simultaneously:

$$\mu_{10} = 1.58 \text{ weeks}, \quad \mu_{20} = 2.51 \text{ weeks}, \quad \mu_{30} = 3.50 \text{ weeks}$$

•  Find the expected time between orders

$$\mu_{00} = \frac{1}{\pi_0} = 3.50 \text{ weeks}$$
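The three-equation system above is small enough to solve by hand, but here is a sketch of the general linear solve (assumed helper name `expected_first_passage`, using `P_inv` from earlier): restrict P to the states other than j and solve $(I - Q)\mu = \mathbf{1}$.

```python
import numpy as np

def expected_first_passage(P, j):
    """Expected first passage times mu_ij into state j for all i != j.

    Rearranging mu_ij = 1 + sum_{k != j} p_ik mu_kj over all i != j gives
    (I - Q) mu = 1, where Q drops row/column j from P. Assumes f_ij = 1.
    """
    P = np.asarray(P, dtype=float)
    keep = [k for k in range(P.shape[0]) if k != j]
    Q = P[np.ix_(keep, keep)]   # transitions among the states that avoid j
    mu = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
    return dict(zip(keep, mu))

print(expected_first_passage(P_inv, j=0))
# -> approximately {1: 1.58, 2: 2.50, 3: 3.50} (weeks)
```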