Markov Chains (Part 4)
Steady State Probabilities and First Passage Times
Steady-State Probabilities
Remember, for the inventory example we had

$$P^{(8)} = \begin{pmatrix} 0.286 & 0.285 & 0.263 & 0.166 \\ 0.286 & 0.285 & 0.263 & 0.166 \\ 0.286 & 0.285 & 0.263 & 0.166 \\ 0.286 & 0.285 & 0.263 & 0.166 \end{pmatrix}$$

For an irreducible ergodic Markov chain,

$$\pi_j = \lim_{n \to \infty} p_{ij}^{(n)}$$

where π_j = steady-state probability of being in state j.
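As a numerical check (not part of the original slides), this limit can be reproduced by raising P to a power. A minimal sketch, assuming numpy and using the inventory chain's one-step transition matrix shown later in these slides:

```python
import numpy as np

# One-step transition matrix for the inventory example.
P = np.array([[0.080, 0.184, 0.368, 0.368],
              [0.632, 0.368, 0.000, 0.000],
              [0.264, 0.368, 0.368, 0.000],
              [0.080, 0.184, 0.368, 0.368]])

# Eight-step transition matrix: all rows are (nearly) identical,
# so the chain has "forgotten" its starting state.
P8 = np.linalg.matrix_power(P, 8)
print(P8.round(3))  # each row is approx [0.286 0.285 0.263 0.166]
```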
Some Observations About the Limit
The behavior of this important limit depends on properties of states i and j and the Markov chain as a whole. If i and j are recurrent and belong to different classes, then p_ij^(n) = 0 for all n. If j is transient, then

$$\lim_{n \to \infty} p_{ij}^{(n)} = 0 \quad \text{for all } i.$$

Intuitively, the probability that the Markov chain is in a transient state after a large number of transitions tends to zero.

In some cases, the limit does not exist! Consider the following Markov chain: if the chain starts out in state 0, it will be back in state 0 at times 2, 4, 6, … and in state 1 at times 1, 3, 5, …. Thus p_00^(n) = 1 if n is even and p_00^(n) = 0 if n is odd. Hence

$$\lim_{n \to \infty} p_{00}^{(n)} \quad \text{does not exist.}$$
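A small sketch (assuming numpy) that illustrates this non-convergence; the matrix below is one concrete rendering of the two-state periodic chain described above:

```python
import numpy as np

# Two-state periodic chain: from state 0 always go to 1, from 1 always to 0.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# p00^(n) alternates between 1 (n even) and 0 (n odd),
# so lim_{n -> inf} p00^(n) does not exist.
for n in range(1, 7):
    print(n, np.linalg.matrix_power(P, n)[0, 0])
```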
Steady-State Probabilities
How can we find these probabilities without calculating P^(n) for very large n? The following are the steady-state equations:

$$\sum_{j=0}^{M} \pi_j = 1$$

$$\pi_j = \sum_{i=0}^{M} \pi_i p_{ij} \quad \text{for all } j = 0, \ldots, M$$

$$\pi_j \geq 0 \quad \text{for all } j = 0, \ldots, M$$

In matrix notation we have π^T P = π^T. Solve a system of linear equations (a numerical sketch follows the expanded system below). Note: there are M + 2 equations and only M + 1 variables (π_0, π_1, …, π_M), so one of the equations is redundant and can be dropped; just don't drop the normalization equation Σ_{j=0}^{M} π_j = 1.
Solving for the Steady-State Probabilities
The idea is to go from steady state to steady state: if the distribution over states is π at time t, one more transition must leave it at π at time t + 1.
$$\pi^T P = \pi^T \quad \text{and} \quad \sum_{i=0}^{M} \pi_i = 1$$

$$\begin{pmatrix} \pi_0 & \pi_1 & \cdots & \pi_M \end{pmatrix} \begin{pmatrix} p_{00} & p_{01} & \cdots & p_{0M} \\ p_{10} & p_{11} & \cdots & p_{1M} \\ \vdots & & & \vdots \\ p_{M0} & p_{M1} & \cdots & p_{MM} \end{pmatrix} = \begin{pmatrix} \pi_0 & \pi_1 & \cdots & \pi_M \end{pmatrix}$$

$$\begin{aligned} \pi_0 p_{00} + \pi_1 p_{10} + \cdots + \pi_M p_{M0} &= \pi_0 \\ \pi_0 p_{01} + \pi_1 p_{11} + \cdots + \pi_M p_{M1} &= \pi_1 \\ &\ \vdots \\ \pi_0 p_{0M} + \pi_1 p_{1M} + \cdots + \pi_M p_{MM} &= \pi_M \\ \pi_0 + \pi_1 + \cdots + \pi_M &= 1 \end{aligned}$$
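A minimal sketch of how one might solve this system numerically, assuming numpy; `steady_state` is a hypothetical helper name, and the redundant balance equation is replaced by the normalization constraint:

```python
import numpy as np

def steady_state(P):
    """Solve pi^T P = pi^T with sum(pi) = 1 (irreducible ergodic chain assumed)."""
    n = P.shape[0]                 # n = M + 1 states
    A = P.T - np.eye(n)            # balance equations: (P^T - I) pi = 0
    A[-1, :] = 1.0                 # replace one redundant row by sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Example: the 2x2 chain used in the weather example later in these slides.
print(steady_state(np.array([[0.8, 0.2], [0.6, 0.4]])))  # [0.75 0.25]
```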
[Figure: sample path of X_t; at steady state, each state j = 0, …, M is occupied with probability π_j, regardless of the initial state i]
Steady-State Probabilities Examples
Find the steady-state probabilities for:

$$P = \begin{pmatrix} 1/3 & 2/3 & 0 \\ 1/2 & 0 & 1/2 \\ 0 & 1/4 & 3/4 \end{pmatrix} \qquad P = \begin{pmatrix} 0.3 & 0.7 \\ 0.6 & 0.4 \end{pmatrix}$$

Inventory example:

$$P = \begin{pmatrix} 0.080 & 0.184 & 0.368 & 0.368 \\ 0.632 & 0.368 & 0 & 0 \\ 0.264 & 0.368 & 0.368 & 0 \\ 0.080 & 0.184 & 0.368 & 0.368 \end{pmatrix}$$
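A hedged sketch applying the solver idea to the three matrices above (assumes numpy; `steady_state` is the hypothetical helper from the earlier sketch, repeated here so the snippet runs on its own):

```python
import numpy as np

def steady_state(P):                      # same helper as the earlier sketch
    A = P.T - np.eye(P.shape[0])
    A[-1, :] = 1.0
    b = np.zeros(P.shape[0])
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P1 = np.array([[1/3, 2/3, 0.0],
               [1/2, 0.0, 1/2],
               [0.0, 1/4, 3/4]])
P2 = np.array([[0.3, 0.7],
               [0.6, 0.4]])
P3 = np.array([[0.080, 0.184, 0.368, 0.368],   # inventory example
               [0.632, 0.368, 0.000, 0.000],
               [0.264, 0.368, 0.368, 0.000],
               [0.080, 0.184, 0.368, 0.368]])

for P in (P1, P2, P3):
    print(steady_state(P).round(3))
# -> approx [0.2 0.267 0.533], [0.462 0.538], [0.286 0.285 0.263 0.166]
```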
Other Applications of Steady-State Probabilities
Expected recurrence time: we are often interested in the expected number of steps between consecutive visits to a particular (recurrent) state. What is the expected number of sunny days between rainy days? What is the expected number of weeks between ordering cameras?
Long-run expected average cost per unit time: in many applications, we incur a cost or gain a reward every time a Markov chain visits a specific state. If we incur costs for carrying inventory, and costs for not meeting demand, what is the long-run expected average cost per unit time?
Expected Recurrence Times
The expected recurrence time, denoted μ_jj, is the expected number of transitions between two consecutive visits to state j.
The steady-state probabilities π_j are related to the expected recurrence times μ_jj by

$$\pi_j = \frac{1}{\mu_{jj}} \quad \text{for all } j = 0, 1, \ldots, M$$
Weather Example
What is the expected number of days between rainy days? First, calculate π_j. With states Sun = 0 and Rain = 1,

$$P = \begin{pmatrix} 0.8 & 0.2 \\ 0.6 & 0.4 \end{pmatrix}$$

$$\begin{aligned} 0.8\,\pi_0 + 0.6\,\pi_1 &= \pi_0 \\ 0.2\,\pi_0 + 0.4\,\pi_1 &= \pi_1 \\ \pi_0 + \pi_1 &= 1 \end{aligned}$$

Substituting π_0 = 1 - π_1 into the second equation gives (1 - π_1)(0.2) + 0.4 π_1 = π_1, so 0.2 = 0.8 π_1, and hence π_1 = 1/4 and π_0 = 3/4.
Now μ_11 = 1/π_1 = 4. For this example, we expect 4 days between consecutive rainy days.
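As a sanity check, μ_11 can also be estimated by simulating the weather chain. A minimal sketch using only the Python standard library (the seed and run length are arbitrary choices, not from the slides):

```python
import random

random.seed(0)
P = [[0.8, 0.2],   # Sun  -> {Sun, Rain}
     [0.6, 0.4]]   # Rain -> {Sun, Rain}

# Count visits to Rain (state 1) over a long run; the long-run fraction
# of rainy days approaches pi_1, so steps/visits approaches mu_11.
n_steps, state, rainy = 1_000_000, 1, 0
for _ in range(n_steps):
    state = 0 if random.random() < P[state][0] else 1
    rainy += (state == 1)
print(n_steps / rainy)   # approx 4 = mu_11 = 1 / pi_1
```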
Inventory Example
What is the expected number of weeks between orders? First, the steady-state probabilities are

$$\pi_0 = 0.286, \quad \pi_1 = 0.285, \quad \pi_2 = 0.263, \quad \pi_3 = 0.166$$

Now μ_00 = 1/π_0 = 1/0.286 ≈ 3.5. For this example, on average, we order cameras every three and a half weeks.
Expected Recurrence Times Examples

$$P = \begin{pmatrix} 0.3 & 0.7 \\ 0.6 & 0.4 \end{pmatrix} \qquad \pi_0 = \tfrac{6}{13},\ \pi_1 = \tfrac{7}{13} \qquad \mu_{00} = \tfrac{13}{6} \approx 2.1667,\ \mu_{11} = \tfrac{13}{7} \approx 1.857$$

$$P = \begin{pmatrix} 1/3 & 2/3 & 0 \\ 1/2 & 0 & 1/2 \\ 0 & 1/4 & 3/4 \end{pmatrix} \qquad \pi_0 = \tfrac{3}{15},\ \pi_1 = \tfrac{4}{15},\ \pi_2 = \tfrac{8}{15} \qquad \mu_{00} = \tfrac{15}{3} = 5,\ \mu_{11} = \tfrac{15}{4} = 3.75,\ \mu_{22} = \tfrac{15}{8} = 1.875$$

[Figure: state transition diagrams for the two chains above]
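A short sketch (assuming numpy) that reproduces these recurrence times via μ_jj = 1/π_j:

```python
import numpy as np

def steady_state(P):                      # helper as in the earlier sketches
    A = P.T - np.eye(P.shape[0])
    A[-1, :] = 1.0
    b = np.zeros(P.shape[0])
    b[-1] = 1.0
    return np.linalg.solve(A, b)

for P in (np.array([[0.3, 0.7], [0.6, 0.4]]),
          np.array([[1/3, 2/3, 0.0], [1/2, 0.0, 1/2], [0.0, 1/4, 3/4]])):
    pi = steady_state(P)
    print("mu_jj =", (1.0 / pi).round(4))
# -> mu_jj = [2.1667 1.8571]
# -> mu_jj = [5.     3.75   1.875 ]
```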
Steady-State Cost Analysis
Once we know the steady-state probabilities, we can do some long-run analyses.
Assume we have a finite-state, irreducible Markov chain. Let C(X_t) be a cost at time t; that is, C(j) = expected cost of being in state j, for j = 0, 1, …, M.
The expected average cost over the first n time steps is

$$E\left[ \frac{1}{n} \sum_{t=1}^{n} C(X_t) \right]$$

The long-run expected average cost per unit time is a function of the steady-state probabilities:

$$\lim_{n \to \infty} E\left[ \frac{1}{n} \sum_{t=1}^{n} C(X_t) \right] = \sum_{j=0}^{M} \pi_j \, C(j)$$
Steady-State Cost Analysis Inventory Example
Suppose there is a storage cost for having cameras on hand:

$$C(i) = \begin{cases} 0 & \text{if } i = 0 \\ 2 & \text{if } i = 1 \\ 8 & \text{if } i = 2 \\ 18 & \text{if } i = 3 \end{cases}$$

The long-run expected average cost per unit time is

$$\pi_0 C(0) + \pi_1 C(1) + \pi_2 C(2) + \pi_3 C(3) = 0.286(0) + 0.285(2) + 0.263(8) + 0.166(18) = 5.662$$
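The arithmetic can be checked in a few lines; a minimal sketch using the steady-state probabilities and costs from the slides:

```python
# Steady-state probabilities and storage costs from the slides above.
pi = [0.286, 0.285, 0.263, 0.166]
C = [0, 2, 8, 18]

# Long-run expected average cost per unit time: sum_j pi_j * C(j).
print(sum(p * c for p, c in zip(pi, C)))  # 5.662
```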
First Passage Times - Motivation
In many applications, we are interested in the time at which the Markov chain visits a particular state for the first time.
If I start out with a dollar, what is the probability that I will go broke (for the first time) after n gambles?
If I start out with three cameras in my inventory, what is the expected number of days until I have none for the first time?
The answers to these questions are related to an important concept called first passage times.
First Passage Times
The first passage time from state i to state j is the number of transitions made by the process in going from state i to state j for the first time.
When i = j, this first passage time is called the recurrence time for state i.
Let f_ij^(n) = probability that the first passage time from state i to state j is equal to n.
What is the difference between f_ij^(n) and p_ij^(n)? p_ij^(n) includes paths that visit j before time n; f_ij^(n) does not include such paths.
[Figure: sample path of X_t starting in state i at time t and reaching state j for the first time at time t + n]
Some Observations about First Passage Times
First passage times are random variables and have probability distributions associated with them:
f_ij^(n) = probability that the first passage time from state i to state j is equal to n.
These probability distributions can be computed using a simple idea: condition on where the Markov chain goes after the first transition.
For the first passage time from i to j to be n > 1, the Markov chain has to transition from i to some state k ≠ j in one step, and then the first passage time from k to j must be n - 1.
This concept can be used to derive recursive equations for f_ij^(n).
First Passage Times
The first passage time probabilities satisfy a recursive relationship:

$$f_{ij}^{(1)} = p_{ij}^{(1)} = p_{ij}$$

$$f_{ij}^{(2)} = \sum_{k \neq j} p_{ik} \, f_{kj}^{(1)}$$

$$\vdots$$

$$f_{ij}^{(n)} = \sum_{k \neq j} p_{ik} \, f_{kj}^{(n-1)}$$

[Figure: conditioning on the first transition from i: between times t and t + 1 the chain moves from i to some state k = 0, …, M with probability p_ik, after which the first passage to j must take the remaining n - 1 steps, with probability f_kj^(n-1)]
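A minimal sketch of this recursion in code, assuming numpy; `first_passage_probs` is a hypothetical helper name:

```python
import numpy as np

def first_passage_probs(P, i, j, n_max):
    """Return [f_ij^(1), ..., f_ij^(n_max)] using the recursion above."""
    Q = P.copy()
    Q[:, j] = 0.0             # forbid visiting j before the final step
    f = P[:, j].copy()        # f_kj^(1) = p_kj for every start state k
    out = [f[i]]
    for _ in range(2, n_max + 1):
        f = Q @ f             # f_kj^(n) = sum_{k' != j} p_{kk'} f_{k'j}^(n-1)
        out.append(f[i])
    return out
```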
First Passage Times Inventory Example
Suppose we were interested in the number of weeks until the first order (start in state 3, X_0 = 3). Then we would need to know: what is the probability that the first order is submitted in week 1? Week 2? Week 3?

$$f_{30}^{(1)} = p_{30} = 0.080$$

$$\begin{aligned} f_{30}^{(2)} &= \sum_{k \neq 0} p_{3k} \, f_{k0}^{(1)} = p_{31} f_{10}^{(1)} + p_{32} f_{20}^{(1)} + p_{33} f_{30}^{(1)} \\ &= p_{31} p_{10} + p_{32} p_{20} + p_{33} p_{30} \\ &= 0.184(0.632) + 0.368(0.264) + 0.368(0.080) = 0.243 \end{aligned}$$

$$f_{30}^{(3)} = \sum_{k \neq 0} p_{3k} \, f_{k0}^{(2)} = p_{31} f_{10}^{(2)} + p_{32} f_{20}^{(2)} + p_{33} f_{30}^{(2)}$$
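Reusing the `first_passage_probs` sketch from the previous slide on the inventory chain reproduces these numbers:

```python
import numpy as np

P = np.array([[0.080, 0.184, 0.368, 0.368],
              [0.632, 0.368, 0.000, 0.000],
              [0.264, 0.368, 0.368, 0.000],
              [0.080, 0.184, 0.368, 0.368]])

print(first_passage_probs(P, i=3, j=0, n_max=2))  # approx [0.080, 0.243]
```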
Probability of Ever Reaching j from i
If the chain starts out in state i, what is the probability that it visits state j at some future time?
This probability is denoted f_ij = Σ_{n=1}^∞ f_ij^(n). If f_ij = 1, then the chain starting at i definitely reaches j at some future time, in which case f_ij^(n), n = 1, 2, …, is a genuine probability distribution.
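Truncating the infinite sum at a large N gives a numerical approximation of f_ij; a small sketch continuing the previous snippet (it reuses `first_passage_probs` and the inventory matrix P defined there):

```python
# Probability of ever reaching state 0 from state 3,
# approximated by truncating the sum at N = 200.
f_30 = sum(first_passage_probs(P, i=3, j=0, n_max=200))
print(f_30)  # approx 1.0: the inventory chain eventually reorders
```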