Queueing theory


Transcript of Queueing theory

Page 1: Queueing theory

ELL 785–Computer Communication Networks

Lecture 3: Introduction to Queueing theory

3-1

Contents

Review on Poisson process

Discrete-time Markov processes

Continuous-time Markov processes

Queueing systems

3-2

Review on Poisson process I

Properties of a Poisson process, Λ(t):

P1) Independent increments, for some finite rate λ (arrivals/sec): the numbers of arrivals in disjoint intervals, e.g., [t1, t2] and [t3, t4], are independent random variables. The probability mass function is

Pr[Λ(t) = k] = ((λt)^k / k!) e^(−λt) for k = 0, 1, ...

P2) Stationary increments: the distribution of the number of events (or arrivals) in (t, t+h] is independent of t. Using the probability generating function of Λ(t), i.e.,

E[z^Λ(t)] = Σ_{k=0}^∞ z^k Pr[Λ(t) = k] = e^(λt(z−1)),

E[z^Λ(t+h)] = E[z^Λ(t) · z^(Λ(t+h)−Λ(t))] = E[z^Λ(t)] · E[z^(Λ(t+h)−Λ(t))], due to P1.

⇒ E[z^(Λ(t+h)−Λ(t))] = e^(λ(t+h)(z−1)) / e^(λt(z−1)) = e^(λh(z−1))

3-3

Review on Poisson process II

P3) Interarrival (or inter-occurrence) times between Poisson arrivals are exponentially distributed: suppose τ1, τ2, τ3, ... are the epochs of the first, second and third arrivals; then the interarrival times t1, t2 and t3 are given by t1 = τ1, t2 = τ2 − τ1 and t3 = τ3 − τ2, and in general tn = τn − τn−1 with τ0 = 0.


1. For t1, we have Pr[t1 > t] = Pr[Λ(t) = 0] = e^(−λt) for t ≥ 0, which means that t1 is exponentially distributed with mean 1/λ.

2. For t2, we get Pr[t2 > t | t1 = x] = Pr[Λ(t+x) − Λ(x) = 0] = Pr[Λ(t) = 0] = e^(−λt), which also means that t2 is independent of t1 and has the same distribution as t1. Similarly, t3, t4, ... are iid.

3-4
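The equivalence in P3 between exponential interarrival times and Poisson counts can be checked numerically. The following sketch (plain Python rather than the Matlab used later in these slides; the rate λ = 2 arrivals/sec and horizon t = 10 are assumed values for illustration) builds arrival epochs from iid exponential gaps and verifies the Poisson signature E[Λ(t)] = Var[Λ(t)] = λt.

```python
import random

random.seed(1)
lam, t_end, runs = 2.0, 10.0, 4000

counts = []
for _ in range(runs):
    # Build arrival epochs from iid Exp(lam) interarrival times (property P3)
    t, n = 0.0, 0
    while True:
        t += random.expovariate(lam)
        if t > t_end:
            break
        n += 1
    counts.append(n)

mean_count = sum(counts) / runs
var_count = sum((c - mean_count) ** 2 for c in counts) / runs

# For a Poisson process, E[Lambda(t)] = Var[Lambda(t)] = lam * t
print(mean_count, var_count, lam * t_end)
```

Both the empirical mean and variance of Λ(10) should be close to λt = 20.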

Page 2: Queueing theory

Review on Poisson process III

P4) The converse of P3 is true: if the sequence of interarrival times {ti} is a sequence of iid rv's with exponential density λe^(−λt), t ≥ 0, then the number of arrivals in the interval [0, t], Λ(t), is a Poisson process.
Let Y denote the sum of j independent rv's with this exponential density; then Y is Erlang-j distributed, f_Y(y) = (λ(λy)^(j−1) / (j−1)!) e^(−λy):

Pr[Λ(t) = j] = ∫_0^t Pr[0 arrivals in (y, t] | Y = y] f_Y(y) dy
= ∫_0^t e^(−λ(t−y)) f_Y(y) dy = ((λt)^j / j!) e^(−λt)

3-5

Review on Poisson process IV

P5) For a short interval, the probability that an arrival occurs in the interval is proportional to the interval size, i.e.,

lim_{h→0} Pr[Λ(h) = 1]/h = lim_{h→0} e^(−λh)(λh)/h = λ.

Or, we have Pr[Λ(h) = 1] = λh + o(h), where lim_{h→0} o(h)/h = 0

P6) The probability of two or more arrivals in an interval of length h gets small as h → 0. For every t ≥ 0,

lim_{h→0} Pr[Λ(h) ≥ 2]/h = lim_{h→0} (1 − e^(−λh) − λh e^(−λh))/h = 0

3-6

Review on Poisson process V

P7) Merging: If the Λi(t)'s are mutually independent Poisson processes with rates λi, the superposition process Λ(t) (= Σ_{i=1}^k Λi(t)) is a Poisson process with rate λ (= Σ_{i=1}^k λi).
Note: If the interarrival times of the ith stream are a sequence of iid rv's but not necessarily exponentially distributed, then Λ(t) tends to a Poisson process as k → ∞. [D. Cox, Renewal Theory]

(figures: merging of k streams into one superposed stream; splitting of one stream into branches)

P8) Splitting: If an arrival randomly chooses the ith branch with probability πi, the arrival process at the ith branch, Λi(t), is Poisson with rate λi (= πi λ). Moreover, Λi(t) is independent of Λj(t) for any pair of i and j (i ≠ j).

3-7
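Properties P7 and P8 lend themselves to a quick simulation check. The sketch below (Python; the rates λ1 = 1, λ2 = 3, the routing probability π1 = 0.25 and the horizon are assumed values) superposes two independent streams and then randomly splits the merged stream, comparing the observed rates with λ1 + λ2 and π1(λ1 + λ2).

```python
import random

random.seed(2)
lam1, lam2, t_end = 1.0, 3.0, 2000.0

def poisson_epochs(rate, horizon):
    """Arrival epochs of a Poisson process built from exponential gaps."""
    t, epochs = 0.0, []
    while True:
        t += random.expovariate(rate)
        if t > horizon:
            return epochs
        epochs.append(t)

# P7) Merging: superpose two independent streams
merged = sorted(poisson_epochs(lam1, t_end) + poisson_epochs(lam2, t_end))
merged_rate = len(merged) / t_end          # should be close to lam1 + lam2

# P8) Splitting: route each merged arrival to branch 1 with probability pi1
pi1 = 0.25
branch1 = [t for t in merged if random.random() < pi1]
branch1_rate = len(branch1) / t_end        # should be close to pi1*(lam1+lam2)

print(merged_rate, branch1_rate)
```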

Poisson approximation to Binomial distribution

• The Poisson distribution approximates the binomial probabilities
– If n is large and p is small, by keeping G = np fixed,

p_k = C(n, k) p^k (1−p)^(n−k) ≈ (G^k / k!) e^(−G) for k = 0, 1, ...

– The probability that no events occur in n trials:

p_0 = (1−p)^n = (1 − G/n)^n → e^(−G) as n → ∞

– The rest of the probabilities can be found from the ratio

p_{k+1}/p_k = [C(n, k+1) p^(k+1) q^(n−k−1)] / [C(n, k) p^k q^(n−k)]
= (n−k)p / ((k+1)q) = (1 − k/n)G / ((k+1)(1 − G/n))
→ G/(k+1) as n → ∞ ⟹ p_k = (G^k / k!) e^(−G)

3-8
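The approximation can be verified numerically. The sketch below (Python; n = 1000 and p = 0.005 are assumed values, so G = np = 5) compares the binomial pmf with the Poisson pmf term by term and checks the ratio recursion p_{k+1}/p_k → G/(k+1).

```python
from math import comb, exp, factorial

n, p = 1000, 0.005            # large n, small p (assumed values)
G = n * p                     # G = np held fixed at 5

binom = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(20)]
poisson = [G**k / factorial(k) * exp(-G) for k in range(20)]
max_gap = max(abs(b - q) for b, q in zip(binom, poisson))

# ratio recursion: p_{k+1}/p_k -> G/(k+1) as n -> infinity; here k = 0
ratio = binom[1] / binom[0]
print(max_gap, ratio, G / 1)
```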

Page 3: Queueing theory

Random (or Stochastic) Processes

General notion
• Suppose a random experiment specified by the outcomes ζ from some sample space S, with ζ ∈ S
• A random (or stochastic) process is a mapping from ζ to a function of time t: X(t, ζ)
– For fixed t, e.g., t1, t2, ...: X(ti, ζ) is a random variable
– For fixed ζ: X(t, ζi) is a sample path or realization

(figure: a sample path over time steps 0, 1, 2, ..., n, n+1, n+2)

– e.g., # of frames in a transmitter's queue, # of rickshaws at the IIT main gate

3-9

Discrete-time Markov process I

A sequence of integer-valued random variables, Xn, n = 0, 1, ..., is called a discrete-time Markov process

if the following Markov property holds:

Pr[Xn+1 = j | Xn = i, Xn−1 = i_{n−1}, ..., X0 = i0] = Pr[Xn+1 = j | Xn = i]

• State: the value of Xn at time n in the set S
• State space: the set S = {n | n = 0, 1, ...}
– An integer-valued Markov process is called a Markov chain (MC)

Time-homogeneous, if for any n,

p_ij = Pr[Xn+1 = j | Xn = i] (independent of time n),

which is called the one-step (state) transition probability

3-10

Discrete-time Markov process II

State transition probability matrix:

P = [p_ij] =

[ p00 p01 p02 ... ]
[ p10 p11 p12 ... ]
[  :   :   :     ]
[ pi0 pi1 pi2 ... ]
[  :   :   :  .. ]

which is called a stochastic matrix, with p_ij ≥ 0 and Σ_{j=0}^∞ p_ij = 1

n-step transition probability matrix:

P^n = [p_ij^(n)]

with n-step transition probabilities

p_ij^(n) = Pr[X_{l+n} = j | X_l = i] for n ≥ 0, i, j ≥ 0.

3-11

Discrete-time Markov process III

The Chapman-Kolmogorov equations:

p_ij^(n+m) = Σ_{k=0}^∞ p_ik^(n) p_kj^(m) for n, m ≥ 0, i, j ∈ S

Proof:

Pr[X_{n+m} = j | X0 = i] = Σ_{k∈S} Pr[X_{n+m} = j | X0 = i, Xn = k] Pr[Xn = k | X0 = i]
(Markov property) = Σ_{k∈S} Pr[X_{n+m} = j | Xn = k] Pr[Xn = k | X0 = i]
(Time homogeneity) = Σ_{k∈S} Pr[X_m = j | X0 = k] Pr[Xn = k | X0 = i]

In matrix form, P^(n+m) = P^n P^m ⇒ P^(n+1) = P^n P

3-12
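The Chapman-Kolmogorov equations are easy to check numerically. The sketch below (Python with numpy; the 3-state transition matrix is an assumed example, not from the slides) evaluates the sum Σ_k p_ik^(1) p_kj^(2) elementwise and compares it with P^3.

```python
import numpy as np

# A small assumed 3-state transition matrix for illustration
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

P2 = P @ P                          # 2-step transition probabilities
P3 = np.linalg.matrix_power(P, 3)   # 3-step

# Chapman-Kolmogorov, elementwise: p_ij^(3) = sum_k p_ik^(1) p_kj^(2)
ck = np.array([[sum(P[i, k] * P2[k, j] for k in range(3))
                for j in range(3)] for i in range(3)])

ck_ok = bool(np.allclose(P3, ck))
rows_ok = bool(np.allclose(P3.sum(axis=1), 1.0))  # P^n stays stochastic
print(ck_ok, rows_ok)
```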

Page 4: Queueing theory

Discrete-time Markov process IV

A mouse in a maze

(figure: a 3×3 maze with cells numbered 1, 2, 3 in one row, 4, 5, 6 in the next and 7, 8, 9 in the last; cells 7 and 9 hold the cat and the cheese)

• A mouse chooses the next cell to visit with probability 1/k, where k is the number of adjacent cells.

• The mouse does not move any more once it is caught by the cat or it has the cheese.

P = (rows and columns indexed by cells 1, ..., 9)

      1    2    3    4    5    6    7    8    9
1  [  0   1/2   0   1/2   0    0    0    0    0  ]
2  [ 1/3   0   1/3   0   1/3   0    0    0    0  ]
3  [  0   1/2   0    0    0   1/2   0    0    0  ]
4  [ 1/3   0    0    0   1/3   0   1/3   0    0  ]
5  [  0   1/4   0   1/4   0   1/4   0   1/4   0  ]
6  [  0    0   1/3   0   1/3   0    0    0   1/3 ]
7  [  0    0    0    0    0    0    1    0    0  ]
8  [  0    0    0    0   1/3   0   1/3   0   1/3 ]
9  [  0    0    0    0    0    0    0    0    1  ]

3-13
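The maze chain can be explored numerically. In the sketch below (Python with numpy), the transition matrix is written out from the maze's adjacency, with cells 1-9 laid out row by row in a 3×3 grid and cells 7 and 9 absorbing; raising P to a high power gives the probabilities that, starting from cell 1, the mouse eventually ends in cell 7 or cell 9. (The entries are read off the slide's garbled matrix and should be checked against the original figure.)

```python
import numpy as np

P = np.array([
    [0,   1/2, 0,   1/2, 0,   0,   0,   0,   0  ],  # cell 1 -> 2 or 4
    [1/3, 0,   1/3, 0,   1/3, 0,   0,   0,   0  ],  # cell 2 -> 1, 3 or 5
    [0,   1/2, 0,   0,   0,   1/2, 0,   0,   0  ],  # cell 3 -> 2 or 6
    [1/3, 0,   0,   0,   1/3, 0,   1/3, 0,   0  ],  # cell 4 -> 1, 5 or 7
    [0,   1/4, 0,   1/4, 0,   1/4, 0,   1/4, 0  ],  # cell 5 -> 2, 4, 6 or 8
    [0,   0,   1/3, 0,   1/3, 0,   0,   0,   1/3],  # cell 6 -> 3, 5 or 9
    [0,   0,   0,   0,   0,   0,   1,   0,   0  ],  # cell 7: absorbing
    [0,   0,   0,   0,   1/3, 0,   1/3, 0,   1/3],  # cell 8 -> 5, 7 or 9
    [0,   0,   0,   0,   0,   0,   0,   0,   1  ],  # cell 9: absorbing
])

# sanity check: P is a stochastic matrix
rows_ok = bool(np.allclose(P.sum(axis=1), 1.0))

# long-run absorption probabilities starting from cell 1
Pn = np.linalg.matrix_power(P, 1000)
absorb_7, absorb_9 = Pn[0, 6], Pn[0, 8]
print(rows_ok, absorb_7, absorb_9)
```

Since cells 7 and 9 are the only absorbing states, the two probabilities sum to one.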

Discrete-time Markov process V

In a place, the weather each day is classified as sunny, cloudy or rainy. The next day's weather depends only on the weather of the present day and not on the weather of the previous days. If the present day is sunny, the next day will be sunny, cloudy or rainy with respective probabilities 0.70, 0.10 and 0.20. The transition probabilities are 0.50, 0.25 and 0.25 when the present day is cloudy; 0.40, 0.30 and 0.30 when the present day is rainy.

(state transition diagram: Sunny, Cloudy and Rainy with the transition probabilities listed above)

P =
       S     C     R
S  [ 0.7   0.1   0.2  ]
C  [ 0.5   0.25  0.25 ]
R  [ 0.4   0.3   0.3  ]

– Using the n-step transition probability matrix,

P^3 =
[ 0.601 0.168 0.230 ]
[ 0.591 0.175 0.233 ]
[ 0.585 0.179 0.234 ]

and P^12 =
[ 0.596 0.172 0.231 ]
[ 0.596 0.172 0.231 ]
[ 0.596 0.172 0.231 ]
= P^13

3-14

Discrete-time Markov process VI

State probabilities at time n
– π_i^(n) = Pr[Xn = i] and π^(n) = [π_0^(n), ..., π_i^(n), ...] (row vector)
– π_i^(0): the initial state probability

Pr[Xn = j] = Σ_{i∈S} Pr[Xn = j | X0 = i] Pr[X0 = i]  →  π_j^(n) = Σ_{i∈S} p_ij^(n) π_i^(0)

– In matrix notation: π^(n) = π^(0) P^n

Limiting distribution: given an initial prob. distribution π^(0),

π = lim_{n→∞} π^(n), with π_j^(∞) = lim_{n→∞} p_ij^(n)

– As n → ∞: π^(n) = π^(n−1)P → π = πP and π · 1 = 1
– The system reaches "equilibrium" or "steady state"

3-15

Discrete-time Markov process VII

Stationary distribution:
– z_j and z = [z_j] denote the prob. of being in state j and its vector:

z = z · P and z · 1 = 1

• If z_j is chosen as the initial distribution, i.e., π_j^(0) = z_j for all j, we have π_j^(n) = z_j for all n
• A limiting distribution, when it exists, is always a stationary distribution, but the converse is not true:

P = [0 1; 1 0], P^2 = [1 0; 0 1], P^3 = [0 1; 1 0], ...

Global balance equation:

π = πP ⇒ (each row) π_j Σ_i p_ji = Σ_i π_i p_ij

3-16

Page 5: Queueing theory

Discrete-time Markov process VIII

Back to the weather example on page 3-14

• Using πP = π, we have

π0 = 0.7π0 + 0.5π1 + 0.4π2
π1 = 0.1π0 + 0.25π1 + 0.3π2
π2 = 0.2π0 + 0.25π1 + 0.3π2

– Note that one equation is always redundant
• Using 1 = π0 + π1 + π2 in place of one balance equation, we have

[  0.3  −0.5  −0.4 ] [π0]   [0]
[ −0.1  0.75  −0.3 ] [π1] = [0]
[  1     1     1   ] [π2]   [1]

π0 = 0.596, π1 = 0.1722, π2 = 0.2318

3-17
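The same computation can be done mechanically: replace one redundant balance equation by the normalization condition and solve the linear system. A minimal numpy sketch, which also confirms that the rows of P^12 have converged to the stationary distribution:

```python
import numpy as np

P = np.array([[0.70, 0.10, 0.20],   # Sunny  -> S, C, R
              [0.50, 0.25, 0.25],   # Cloudy -> S, C, R
              [0.40, 0.30, 0.30]])  # Rainy  -> S, C, R

# Keep two columns of (P^T - I) as balance equations; the third row of A
# enforces the normalization pi0 + pi1 + pi2 = 1.
A = np.vstack([(P.T - np.eye(3))[:2], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)

# The rows of P^n converge to the same limiting distribution
P12 = np.linalg.matrix_power(P, 12)
print(pi, P12[0])
```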

Discrete-time Markov process IX

Classes of states:
• State j is accessible from state i if p_ij^(n) > 0 for some n
• States i and j communicate if they are accessible from each other
• Two states belong to the same class if they communicate with each other
• An MC having a single class is said to be irreducible

(figure: a transition diagram on states 0, 1, 2, 3)

Recurrence property:
• State j is recurrent if Σ_{n=1}^∞ p_jj^(n) = ∞
– Positive recurrent if π_j > 0
– Null recurrent if π_j = 0
• State j is transient if Σ_{n=1}^∞ p_jj^(n) < ∞

3-18

Discrete-time Markov process X

Periodicity and aperiodicity:
• State i has period d if p_ii^(n) = 0 when n is not a multiple of d, where d is the largest integer with this property.
• State i is aperiodic if it has period d = 1.
• All states in a class have the same period
– An irreducible Markov chain is said to be aperiodic if the states in its single class have period one

(figure: classification of states — a state is recurrent or transient; a recurrent state is positive or null recurrent; a positive recurrent, aperiodic state is called ergodic)

3-19

Discrete-time Markov process XI

In a place, a mosquito is produced every hour with prob. p, and one dies with prob. 1 − p

(state transition diagram: a birth-death chain on states 0, 1, 2, 3, ...)

• Using the global balance equations,

pπ_i = (1−p)π_{i+1} → π_{i+1} = (p/(1−p)) π_i, hence π_i = (p/(1−p))^i π_0

All states are positive recurrent if p < 1/2, null recurrent if p = 1/2 (see Σ_{i=0}^∞ π_i = 1), and transient if p > 1/2

3-20

Page 6: Queueing theory

Drift and Stability I

Suppose an irreducible, aperiodic, discrete-time MC
• The chain is 'stable' if π_j > 0 for all j
• Drift is defined as

D_i = E[X_{n+1} − X_n | X_n = i] = Σ_{k=−i}^∞ k p_{i(i+k)}

– If D_i > 0, the process tends to move up to higher states from state i
– If D_i < 0, the process tends to visit lower states from state i
– In the previous slide, D_i = 1 · p − 1 · (1−p) = 2p − 1

Pakes' lemma: if
1) D_i < ∞ for all i, and
2) for some scalar δ > 0 and integer ī ≥ 0, D_i ≤ −δ for all i > ī,

then the MC has a stationary distribution

3-21

Drift and Stability II

Proof: let β = max_{i≤ī} D_i (on page 264 in the textbook)

E[Xn | X0 = i] − i = E[Xn − X_{n−1} + X_{n−1} − X_{n−2} + ... + X1 − X0 | X0 = i]

= Σ_{k=1}^n E[X_k − X_{k−1} | X0 = i]

= Σ_{k=1}^n Σ_{j=0}^∞ E[X_k − X_{k−1} | X_{k−1} = j] Pr[X_{k−1} = j | X0 = i]

≤ Σ_{k=1}^n [ β Σ_{j=0}^{ī} Pr[X_{k−1} = j | X0 = i] − δ (1 − Σ_{j=0}^{ī} Pr[X_{k−1} = j | X0 = i]) ]

= (β + δ) Σ_{k=1}^n Σ_{j=0}^{ī} Pr[X_{k−1} = j | X0 = i] − nδ

3-22

Drift and Stability III

from which we can get

0 ≤ E[Xn | X0 = i] ≤ n [ (β + δ) Σ_{j=0}^{ī} ( (1/n) Σ_{k=1}^n Pr[X_{k−1} = j | X0 = i] ) − δ ] + i,

where Pr[X_{k−1} = j | X0 = i] = p_ij^(k−1).

Dividing by n and letting n → ∞ yields

0 ≤ (β + δ) Σ_{j=0}^{ī} π_j − δ

– π_j = lim_{n→∞} (1/n) Σ_{k=1}^n p_ij^(k) (Cesàro limit) = lim_{n→∞} p_ij^(n)

Hence, for some j ∈ {0, ..., ī}, we have π_j > 0

3-23

Computational methods I

Infinite-state MC
• Probability generating function (PGF) of a probability distribution:

G(z) = E[z^X] = Σ_{i=0}^∞ z^i Pr[X = i], with π_i = Pr[X = i]

– G(1) = 1
– dG(z)/dz|_{z=1} = E[X], and d²G(z)/dz²|_{z=1} = E[X²] − E[X]

• In the example on slide 3-20,

G(z) = π0 Σ_{i=0}^∞ (pz/(1−p))^i = ((1−p) / ((1−p) − zp)) π0

Finite-state MC
• Direct method
– π = πP → π(P − I) = 0 and π · 1 = 1 → π E = 1 (row vector of ones)
– E is a matrix of all ones

π(P + E − I) = 1 ⇒ π = 1 (P + E − I)^(−1)

3-24
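The direct method is a one-liner once P is available. A numpy sketch, using the weather chain of slide 3-14 as the test case:

```python
import numpy as np

P = np.array([[0.70, 0.10, 0.20],
              [0.50, 0.25, 0.25],
              [0.40, 0.30, 0.30]])   # weather chain from slide 3-14

N = P.shape[0]
E = np.ones((N, N))                  # matrix of all ones
ones = np.ones(N)                    # row vector of ones

# pi (P + E - I) = 1  =>  pi = 1 (P + E - I)^{-1}
pi = ones @ np.linalg.inv(P + E - np.eye(N))
print(pi)
```

Since pi·P = pi and pi·E = 1 for any stationary distribution, pi(P + E − I) = 1 holds, and invertibility of (P + E − I) pins down pi uniquely for this chain.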

Page 7: Queueing theory

Computational methods II

• Successive overrelaxation

From π_i = Σ_{k=0}^N p_ki π_k, write π_i = Σ_{k=0, k≠i}^N a_ki π_k with a_ki = p_ki / (1 − p_ii)

1. Choose π_i^(1) such that Σ_{i=0}^N π_i^(1) = 1, and 0 ≤ ω ≤ 2
2. For each iteration k, compute

π_i^(k) = (1 − ω) π_i^(k−1) + ω ( Σ_{j=0}^{i−1} a_ji π_j^(k) + Σ_{j=i+1}^N a_ji π_j^(k−1) )

For ω = 1, this iteration becomes the Gauss-Seidel method
3. Check whether

Σ_{i=0}^N |π_i^(k) − π_i^(k−1)| ≤ ε Σ_{i=0}^N |π_i^(k)|

is satisfied; if so, go to Step 4, otherwise go to Step 2.
4. Compute the state probabilities as

π_i* = π_i^(k) / Σ_{j=0}^N π_j^(k)

3-25
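The iteration can be sketched as follows (Python with numpy; the weather chain is reused as a small test case, and ω = 1, i.e., Gauss-Seidel, is chosen for illustration). The unnormalized iterates converge only up to a scale factor, which Step 4's normalization removes.

```python
import numpy as np

P = np.array([[0.70, 0.10, 0.20],
              [0.50, 0.25, 0.25],
              [0.40, 0.30, 0.30]])   # weather chain as a small test case

N = P.shape[0]
# a[k, i] = p[k, i] / (1 - p[i, i]); broadcasting divides column i by 1 - p_ii
a = P / (1.0 - np.diag(P))

omega = 1.0                          # omega = 1 gives Gauss-Seidel
pi = np.full(N, 1.0 / N)             # initial guess summing to one
for _ in range(200):
    prev = pi.copy()
    for i in range(N):
        # entries with k < i are already updated this sweep (Gauss-Seidel)
        s = sum(a[k, i] * pi[k] for k in range(N) if k != i)
        pi[i] = (1 - omega) * prev[i] + omega * s
    if np.abs(pi - prev).sum() <= 1e-12 * np.abs(pi).sum():
        break

pi = pi / pi.sum()                   # Step 4: final normalization
print(pi)
```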

Speech model I

A Markov model for packet speech assumes that if the nth packet contains silence, then the probability of silence in the next packet is 1 − α and the probability of speech activity is α. Similarly, if the nth packet contains speech activity, then the probability of speech activity in the next packet is 1 − β and the probability of silence is β.

(figure: a frame sequence over time, alternating between talkspurts and silence periods)

• Find the state transition probability matrix, P

P =
             silence  talkspurt
silence   [   1−α        α     ]
talkspurt [    β        1−β    ]

3-26

Speech model II

What if, at a transmitter, a frame is generated during talkspurts with probability ε based on the previous Markov speech model, while each frame is successfully transmitted with probability p in each time slot? Assume that the ACK/NAK arrives before the beginning of the next slot.
• Two-dimensional Markov chain
– {(k, i): k is the # of frames and i is the source state}
– The queue length evolves as

Q_{n+1} = max(Q_n − T_n, 0) + A_n

3-27

Simplified Google page rank model

A Web surfer browses pages in a five-page Web universe shown below. The surfer selects the next page to view by choosing with equal probability among the pages pointed to by the current page. If a page has no outgoing link (e.g., page 2), then the surfer selects any of the pages in the universe with equal probability. Find the probability that the surfer views page i.

(figure: a directed graph on pages 1-5)

3-28

Page 8: Queueing theory

Summary of discrete-time MC (DTMC)

Stochastic modeling with a DTMC

Inspect whether Xn is a Markov process or not

Build a state transition probability matrix, P (or diagram)

Examine under what condition Xn is stable
– e.g., use the drift: for n ≥ n′, Dn (= expected input rate − expected output rate) < 0

Solve π · P = π and π · 1 = 1, or π_j = Σ_i π_i p_ij

Compute other metrics with ~π, when necessary

3-29

Continuous-time Markov process I

Memoryless property of the exponential dist., F(x) = 1 − e^(−µx) for x ≥ 0:

Pr[X ≤ x0 + x | X > x0] = Pr[x0 < X ≤ x0 + x] / Pr[X > x0]
= (Pr[X ≤ x0 + x] − Pr[X ≤ x0]) / (1 − Pr[X ≤ x0]) = 1 − e^(−µx) = Pr[X ≤ x],

which is not a function of x0.
Example: Suppose X is the length of a telephone conversation, exponentially distributed with mean 5 minutes (µ = 1/5). Given that the conversation has been going on for 20 minutes (x0 = 20), the probability that it continues for at most 10 more minutes (x = 10) is given by

Pr[X ≤ x0 + x | X > x0] = Pr[X ≤ x] = 1 − e^(−10/5).

In fact, because of this property, we shall see that a Markovian queueing system can be completely specified by the number of customers in the system. A similar result holds for the geometric distribution.

3-30

Continuous-time Markov process II

A stochastic process is called a continuous-time MC if it satisfies

Pr[X(t_{k+1}) = x_{k+1} | X(t_k) = x_k, X(t_{k−1}) = x_{k−1}, ..., X(t_1) = x_1] = Pr[X(t_{k+1}) = x_{k+1} | X(t_k) = x_k]

X(t) is a time-homogeneous continuous-time MC if

Pr[X(t + s) = j | X(s) = i] = p_ij(t) (independent of s),

which is analogous to p_ij in a discrete-time MC.

(figure: a sample path of a continuous-time MC through states 1, 2, 3, 4, marking the sojourn time in each state and the times of state change)

3-31

Continuous-time Markov process III

State occupancy time
– Let T_i be the sojourn (or occupancy) time of X(t) in state i before making a transition to any other state. For all s ≥ 0 and t ≥ 0, due to the Markovian property of this process,

Pr[T_i > s + t | T_i > s] = Pr[T_i > t].

Only the exponential dist. satisfies this property, i.e., Pr[T_i > t] = e^(−v_i t).

State transition rate

q_ii(δ) = Pr[the process remains in state i during δ sec]
= Pr[T_i > δ] = e^(−v_i δ) = 1 − v_i δ + (v_i δ)²/2! − ... = 1 − v_i δ + o(δ)

Or, equivalently, the rate at which the process moves out of state i is

lim_{δ→0} (1 − q_ii(δ))/δ = lim_{δ→0} (v_i δ + o(δ))/δ = v_i

3-32

Page 9: Queueing theory

Continuous-time Markov Process IV

Comparison between discrete- and continuous time MC

(figure: sample paths comparing a discrete-time Markov process, which changes state at fixed unit time steps, with a continuous-time Markov process, whose sojourn times in each state are continuous random variables; each path marks the sojourn time in a state and the times of state change)

3-33

Continuous-time Markov Process V

A discrete-time MC is embedded in a continuous-time MC.

(figure: a sample path of a continuous-time MC through states 1, 2, 3, 4)

Each time a state, say i, is entered, an exponentially distributed state occupancy time is selected. When the time is up, the next state j is selected according to transition probabilities p̃_ij.

When the process enters state j from state i,

q_ij(δ) = (1 − q_ii(δ)) p̃_ij = v_i p̃_ij δ + o(δ) = γ_ij δ + o(δ),

where γ_ij = lim_{δ→0} q_ij(δ)/δ = v_i p̃_ij, i.e., the rate from state i to j.

3-34

Continuous-time Markov process V

State probabilities: π_j(t) = Pr[X(t) = j]. For δ > 0,

π_j(t + δ) = Pr[X(t + δ) = j]
= Σ_i Pr[X(t + δ) = j | X(t) = i] Pr[X(t) = i], with Pr[X(t + δ) = j | X(t) = i] = q_ij(δ),
= Σ_i q_ij(δ) π_i(t) ⟺ π_i = Σ_j p_ji π_j (DTMC)

(figure: transitions into state j over time)

3-35

Continuous-time Markov process VI

Subtracting π_j(t) from both sides,

π_j(t + δ) − π_j(t) = Σ_i q_ij(δ) π_i(t) − π_j(t)
= Σ_{i≠j} q_ij(δ) π_i(t) + (q_jj(δ) − 1) π_j(t)

Dividing both sides by δ,

lim_{δ→0} (π_j(t + δ) − π_j(t))/δ = dπ_j(t)/dt
= lim_{δ→0} [ Σ_{i≠j} (q_ij(δ)/δ) π_i(t) + ((q_jj(δ) − 1)/δ) π_j(t) ], where (q_jj(δ) − 1)/δ → γ_jj = −v_j,
= Σ_i γ_ij π_i(t),

which is a form of the Chapman-Kolmogorov equations:

dπ_j(t)/dt = Σ_i γ_ij π_i(t)

3-36

Page 10: Queueing theory

Continuous-time Markov process VI

A queueing system alternates between two states. In state 0, the system is idle and waiting for a customer to arrive; this idle time is an exponential random variable with mean 1/α. In state 1, the system is busy servicing a customer; the time in the busy state is an exponential random variable with mean 1/β. Find the state probabilities in terms of the initial state probabilities π0(0) and π1(0).
• γ00 = −α, γ01 = α, γ10 = β, γ11 = −β
• From dπ_j(t)/dt = Σ_i γ_ij π_i(t),

π0′(t) = −απ0(t) + βπ1(t)
π1′(t) = απ0(t) − βπ1(t)

• Using π0(t) + π1(t) = 1, we have

π0′(t) = −απ0(t) + β(1 − π0(t)), with π0(0) = p0

• The general solution of the above is

π0(t) = β/(α+β) + C e^(−(α+β)t) with C = p0 − β/(α+β)

3-37

Continuous-time Markov process VII

As t → ∞, the system reaches 'equilibrium' or 'steady state':

dπ_j(t)/dt → 0 and π_j(∞) = π_j

0 = Σ_i γ_ij π_i, or v_j π_j = Σ_{i≠j} γ_ij π_i  (using γ_jj = −v_j = −Σ_{i≠j} γ_ji),

which is called the global balance equation, together with Σ_j π_j = 1.

(figure: balance of probability flows into and out of a state)

3-38

Continuous-time Markov process VIII

As a matrix form,

dπ(t)/dt = π(t) Q and π(t) · 1 = 1,

whose solution is given by

π(t) = π(0) e^(Qt)

As t → ∞, π(∞) ≜ π = [π_i],

πQ = 0 with

Q = [ −v0  γ01  γ02  γ03 ... ]
    [ γ10  −v1  γ12  γ13 ... ]
    [ γ20  γ21  −v2  γ23 ... ]
    [  :    :    :    :  ... ]

and π · 1 = 1, where Q is called the infinitesimal generator or rate matrix.

3-39

Example: Barber shop I

Customers arrive at a barber shop according to a Poisson process with rate λ. One barber serves those customers on a first-come first-served basis. The service time S_i is exponentially distributed with mean 1/µ (sec).

The number of customers in the system, N(t) for t ≥ 0, forms a Markov chain:

N(t + τ) = max(N(t) − B(τ), 0) + A(τ)

State transition probabilities (see the properties of the Poisson process):

Pr[0 arrivals (or departures) in (t, t + δ)] = 1 − λδ + o(δ) (or 1 − µδ + o(δ))
Pr[1 arrival (or departure) in (t, t + δ)] = λδ + o(δ) (or µδ + o(δ))
Pr[more than 1 arrival (or departure) in (t, t + δ)] = o(δ)

3-40

Page 11: Queueing theory

Example: Barber shop II

Find P_n(t) ≜ Pr[N(t) = n]. For n ≥ 1,

P_n(t + δ) = P_n(t) Pr[0 arrivals & 0 departures in (t, t + δ)]
+ P_{n−1}(t) Pr[1 arrival & 0 departures in (t, t + δ)]
+ P_{n+1}(t) Pr[0 arrivals & 1 departure in (t, t + δ)] + o(δ)

= P_n(t)(1 − λδ)(1 − µδ) + P_{n−1}(t)(λδ)(1 − µδ) + P_{n+1}(t)(1 − λδ)(µδ) + o(δ).

Rearranging and dividing it by δ,

(P_n(t + δ) − P_n(t))/δ = −(λ + µ)P_n(t) + λP_{n−1}(t) + µP_{n+1}(t) + o(δ)/δ

As δ → 0, for n > 0 we have

dP_n(t)/dt = −(λ + µ) P_n(t) [rate out of state n] + λ P_{n−1}(t) [rate from state n−1 to n] + µ P_{n+1}(t) [rate from state n+1 to n].

3-41

Example: Barber shop III

For n = 0, we have

dP0(t)/dt = −λP0(t) + µP1(t).

As t → ∞, i.e., in steady state, we have P_n(∞) = π_n with dP_n(t)/dt = 0:

λπ0 = µπ1
(λ + µ)π_n = λπ_{n−1} + µπ_{n+1} for n ≥ 1.

(state transition rate diagram: a birth-death chain with rate λ upward and µ downward)

The solution of the above equations is (ρ = λ/µ)

π_n = ρ^n π0 and 1 = π0 (1 + Σ_{i=1}^∞ ρ^i) ⇒ π0 = 1 − ρ

3-42
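The geometric form of the solution is easy to check numerically (Python; λ = 0.8 and µ = 1 are assumed values, and the distribution is truncated at a large index for the finite sums):

```python
lam, mu = 0.8, 1.0                 # assumed rates (rho < 1 so the queue is stable)
rho = lam / mu

# stationary distribution pi_n = rho^n (1 - rho), truncated for numerics
pi = [(1 - rho) * rho**n for n in range(2000)]
total = sum(pi)

# mean number in system: should match rho/(1 - rho),
# which splits as rho (in server) + rho^2/(1 - rho) (in queue)
mean_N = sum(n * p for n, p in enumerate(pi))
print(total, mean_N, rho / (1 - rho))
```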

Example: Barber shop IV

ρ: the server's utilization (< 1, i.e., λ < µ)
Mean number of customers in the system:

E[N] = Σ_{n=0}^∞ n π_n = ρ/(1 − ρ) = ρ (in server) + ρ²/(1 − ρ) (in queue)

(figures: for an M/M/1 system with 1/µ = 1, simulation and analysis curves of the number of customers in the system and of the mean system response time (sec) versus ρ from 0.1 to 0.9; both grow sharply as ρ approaches 1)

3-43

Example: Barbershop V

Recall the state transition rate matrix, Q, on page 3-39:

πQ = 0 with

Q = [ −v0  γ01  γ02  γ03 ... ]
    [ γ10  −v1  γ12  γ13 ... ]
    [ γ20  γ21  −v2  γ23 ... ]
    [  :    :    :    :  ... ]

and π · 1 = 1.

– What are γ_ij and v_i in the M/M/1 queue?

γ_ij = λ if j = i + 1; µ if j = i − 1; −(λ + µ) if j = i (for i ≥ 1); 0 otherwise

If a and b denote the interarrival and service times, respectively, then the sojourn time in state i ≥ 1 is min(a, b), which is exponentially distributed with rate v_i = λ + µ (and v_0 = λ).

What are p̃_{i,i+1} and p̃_{i+1,i}?

p̃_{i,i+1} = Pr[a < b] = λ/(λ + µ) and p̃_{i+1,i} = Pr[b < a] = µ/(λ + µ).

3-44

Page 12: Queueing theory

Example: Barbershop VI

Distribution of the sojourn time, T:

T = S1 + S2 + ... + S_N (customers ahead) + S_{N+1}

An arriving customer finds N customers in the system (including the customer in the server)
– By the memoryless property of the exponential distribution, the remaining service time of the customer in service is exponentially distributed:

f_T(t) = Σ_{i=0}^∞ µ ((µt)^i / i!) e^(−µt) π_i
= Σ_{i=0}^∞ µ ((µt)^i / i!) e^(−µt) ρ^i (1 − ρ) = µ(1 − ρ) e^(−µ(1−ρ)t),

which can be obtained via the Laplace transform of the distribution of S_i.

3-45

Barbershop simulation I

Discrete event simulation

(flowchart: start with sim_time = 0 and generate an arrival; schedule the next event by comparing the next interarrival time with the service time; on an arrival, Queue = Queue + 1; on a departure, Queue = Queue − 1; if the queue is empty after a departure, schedule the next arrival; in each case advance sim_time by the corresponding interarrival or service time.)

3-46

Barbershop simulation II

clear
% Define variables
global arrival departure mservice_time
arrival = 1; departure = -1; mservice_time = 1;
% Set simulation parameters
sim_length = 30000; max_queue = 1000;
% To get delay statistics
system_queue = zeros(1,max_queue);
k = 0;
for arrival_rate = 0.1:0.025:0.97
    k = k + 1;
    % x(k) denotes utilization
    x(k) = arrival_rate*mservice_time;
    % initialize
    sim_time = 0; num_arrivals = 0; num_system = 0;
    upon_arrival = 0; total_delay = 0; num_served = 0;
    % Assuming that the queue is empty
    event = arrival; event_time = exprnd(1/arrival_rate);
    sim_time = sim_time + event_time;
    while (sim_time < sim_length)
        % If an arrival occurs,
        if event == arrival
            num_arrivals = num_arrivals + 1;
            num_system = num_system + 1;
            % Record the arrival time of the customer
            system_queue(num_system) = sim_time;
            upon_arrival = upon_arrival + num_system;
            % To see whether a new arrival comes or a departure occurs first
            [event, event_time] = schedule_next_event(arrival_rate);

3-47

Barbershop simulation III

        % If a departure occurs,
        elseif event == departure
            delay_per_arrival = sim_time - system_queue(1);
            system_queue(1:max_queue-1) = system_queue(2:max_queue);
            total_delay = total_delay + delay_per_arrival;
            num_system = num_system - 1;
            num_served = num_served + 1;
            if num_system == 0
                % nothing to serve, schedule an arrival
                event = arrival;
                event_time = exprnd(1/arrival_rate);
            elseif num_system > 0
                % the system still has customers to serve
                [event, event_time] = schedule_next_event(arrival_rate);
            end
        end
        sim_time = sim_time + event_time;
    end
    ana_queue_length(k) = (x(k)/(1-x(k)));
    ana_response_time(k) = 1/(1/mservice_time-arrival_rate);
    % Queue length seen by arrivals
    sim_queue_length(k) = upon_arrival/sim_length;
    sim_response_time(k) = total_delay/num_served;
end

3-48

Page 13: Queueing theory

Barbershop simulation IV

function [event, event_time] = schedule_next_event(arrival_rate)

global arrival departure mservice_time

minter_arrival = 1/arrival_rate;
inter_arrival = exprnd(minter_arrival);
service_time = exprnd(mservice_time);
if inter_arrival < service_time
    event = arrival;
    event_time = inter_arrival;
else
    event = departure;
    event_time = service_time;
end

3-49

Relation between DTMC and CTMC I

Recall the embedded MC: each time a state, say i, is entered, an exponentially distributed state occupancy time is selected. When the time is up, the next state j is selected according to transition probabilities p̃_ij.

(figure: a sample path of a continuous-time MC through states 1, 2, 3, 4)

• N_i(n): the number of times state i occurs in the first n transitions
• T_i(j): the occupancy time of the jth visit to state i

The proportion of time spent by X(t) in state i after the first n transitions:

time spent in state i / time spent in all states = Σ_{j=1}^{N_i(n)} T_i(j) / Σ_i Σ_{j=1}^{N_i(n)} T_i(j)

3-50

Relation between DTMC and CTMC II

As n → ∞, using π_i = lim N_i(n)/n, we have

[ (N_i(n)/n) · (1/N_i(n)) Σ_{j=1}^{N_i(n)} T_i(j) ] / [ Σ_i (N_i(n)/n) · (1/N_i(n)) Σ_{j=1}^{N_i(n)} T_i(j) ]
→ π_i E[T_i] / Σ_i π_i E[T_i] |_{E[T_i] = 1/v_i} = φ_i,

where π_i is the unique pmf solution to

π_j = Σ_i π_i p̃_ij and Σ_j π_j = 1  (∗)

The long-term proportion of time spent in state i approaches

φ_i = (π_i/v_i) / (Σ_i π_i/v_i) = c (π_i / v_i) → π_i = v_i φ_i / c

Substituting π_i = v_i φ_i / c into (∗) yields

v_j φ_j / c = (1/c) Σ_i v_i φ_i p̃_ij → v_j φ_j = Σ_i φ_i v_i p̃_ij = Σ_i φ_i γ_ij

3-51

Relation between DTMC and CTMC III

Recall the M/M/1 queue

(figures: a) the CTMC, a birth-death chain on states 0, 1, 2, 3, 4, ... with rates λ upward and µ downward; b) the embedded MC, which moves up with probability p = λ/(λ+µ) and down with probability q = µ/(λ+µ), except that state 0 moves to state 1 with probability 1)

In the embedded MC, we have the following global balance equations:

π0 = qπ1
π1 = π0 + qπ2
...
π_i = pπ_{i−1} + qπ_{i+1},

from which π_{i+1} = (p/q) π_i for i ≥ 1.

3-52

Page 14: Queueing theory

Relation between DTMC and CTMC IV

Using the normalization condition, Σ_{i=0}^∞ π_i = 1,

π_i = (p/q)^(i−1) (1/q) π0 for i ≥ 1, and π0 = (1 − 2p) / (2(1 − p))

Converting the embedded MC into the CTMC,

φ0 = c π0/v0 = (c/λ) π0 and φ_i = c π_i/v_i = (c/(λ + µ)) π_i for i ≥ 1

To determine c,

Σ_{i=0}^∞ φ_i = 1 → c ( π0/λ + (1/(λ + µ)) Σ_{i=1}^∞ π_i ) = 1 → c = 2λ

Finally, we get φ_i = ρ^i (1 − ρ) for i = 0, 1, 2, ...

3-53

Example of using embedded MC I

A stray dog, in front of the tea shop at the central library of IIT Delhi, spends most of the daytime sleeping around the tea shop. When a person comes to the tea shop, the dog greets him or her and wags her tail for an average time of one minute. At the end of this period, the dog is fed with probability 1/4, patted briefly with probability 5/8, or taken for a walk with probability 1/8. If fed, she spends an average of two minutes eating. The walks take 15 minutes on average. After eating, being patted, or walking, she returns to sleep. Assume that people come to the tea shop on average every hour.

1. Find a Markov chain model with four states, {sleep, greet, eat, walk}: specify the transition probabilities and rates

2. Find the steady-state probability of finding the dog in each state

3-54

Example of using embedded MC II

• State transition diagram

(figure: sleep → greet with prob. 1; greet → sleep with prob. 5/8, greet → eat with prob. 1/4, greet → walk with prob. 1/8; eat → sleep and walk → sleep with prob. 1)

P =
       S     G     E     W
S  [   0     1     0     0  ]
G  [  5/8    0    1/4   1/8 ]
E  [   1     0     0     0  ]
W  [   1     0     0     0  ]

• From πP = π and π · 1 = 1, we have

π0 = π1 = 8/19, π2 = 2/19, π3 = 1/19

• From φ_i = (π_i/v_i) / (Σ_k π_k/v_k), with mean occupancy times 1/v = (60, 1, 2, 15) minutes, we have, e.g.,

φ0 = 60π0 / (60π0 + π1 + 2π2 + 15π3) and φ2 = 2π2 / (60π0 + π1 + 2π2 + 15π3)

3-55
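The arithmetic can be reproduced mechanically (Python with numpy; the mean occupancy times of 60, 1, 2 and 15 minutes follow from the story on the previous slide):

```python
import numpy as np

# embedded chain: states are (sleep, greet, eat, walk)
P = np.array([[0,   1,   0,   0  ],    # sleep -> greet
              [5/8, 0,   1/4, 1/8],    # greet -> sleep / eat / walk
              [1,   0,   0,   0  ],    # eat   -> sleep
              [1,   0,   0,   0  ]])   # walk  -> sleep

# stationary distribution of the embedded chain: pi = pi P, sum(pi) = 1
A = np.vstack([(P.T - np.eye(4))[:3], np.ones(4)])
pi = np.linalg.solve(A, np.array([0.0, 0.0, 0.0, 1.0]))

# mean occupancy times (minutes): 60 asleep, 1 greeting, 2 eating, 15 walking
ET = np.array([60.0, 1.0, 2.0, 15.0])
phi = pi * ET / (pi * ET).sum()        # long-run fraction of time in each state
print(pi, phi)
```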

Queueing systems I

The arrival times, the size of demand for service, the service capacity and the size of the waiting room may be (random) variables.
Queueing discipline: specifies which customer to pick next for service.
• First come first served (FCFS, or FIFO)
• Last come first served (LCFS, LIFO)
• Random order, processor sharing (PS), round robin (RR)
• Priority (preemptive: resume, non-resume; non-preemptive)
• Shortest job first (SJF) and longest job first (LJF)

3-56

Page 15: Queueing theory

Queueing systems II

Customer behavior: jockeying, reneging, balking, etc.
Kendall's notation A/B/c/K/N:
– A: arrival (interarrival) time distribution
– B: service time distribution
– c: number of servers
– K: queue size (default ∞)
– N: population size (default ∞)

For A and B:
• M: Markovian, exponential dist.
• D: deterministic
• GI: general independent
• E_k: Erlang-k
• H_k: mixture of k exponentials
• PH: phase-type distribution
E.g., M/D/2, M/M/c, G/G/1, etc.; the barbershop is an M/M/1 queue.

3-57

Queueing system III

Performance measures:

• N(t) = Nq(t) + NS(t): number in system
• Nq(t): number in queue
• NS(t): number in service
• W: waiting time in queue
• T: total time (or response time) in the system
• τ: service time

• Throughput: γ ≜ mean # of customers served per unit time
1. γ for a non-blocking system = min(λ, mµ)
2. γ for a blocking system = (1 − P_B)λ, where P_B is the blocking probability

• Utilization: ρ ≜ fraction of time the server is busy

ρ = load/capacity = lim_{T→∞} λT/(µT) = λ/µ for a single-server queue
= lim_{T→∞} λT/(mµT) = λ/(mµ) for an m-server queue

3-58

Little’s theorem I

Any queueing system in steady state: N = λT

(figure: the cumulative number of arrivals α(t) and departures β(t) versus time; the area between the two curves is the total time spent in the system by all customers, e.g., T1 for customer 1, T2 for customer 2, ...)

• N: average number of customers in the system
• λ: steady-state arrival rate, which need not be Poisson
• T: average delay per customer

Proof: For a system with N(0) = 0 and N(t) = 0 as t → ∞,

N_t = (1/t) ∫_0^t N(τ) dτ = (1/t) Σ_{i=1}^{α(t)} T_i = (α(t)/t) · (Σ_{i=1}^{α(t)} T_i / α(t)) = λ_t · T_t.

If N(t) ≠ 0, we have (β(t)/t) · (Σ_{i=1}^{β(t)} T_i / β(t)) ≤ N_t ≤ λ_t T_t.

3-59

Little’s theorem II

As an alternative, for the cumulative processes, N(t) = α(t) − β(t), and let γ(t) = ∫_0^t N(τ) dτ denote the area between the arrival and departure curves; dividing by t,

γ(t)/t = N_t

– See the variable 'num_system' in the previous Matlab code
– λ_t = α(t)/t, with α(t) given by 'num_arrivals' in the code (t corresponds to 'sim_length')
– Response time per customer, from 'total_delay':

T_t = γ(t)/α(t) = (γ(t)/t) · (t/α(t)) = N_t/λ_t

As t → ∞, we have

λT = λ(W + x̄) = N_q + ρ,

valid for any queue (even with any service order) as long as the limits of λ_t and T_t exist as t → ∞

3-60
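Little's theorem can also be checked against a simulation. The sketch below (an event-driven M/M/1 sample path in Python, with assumed λ = 0.5 and µ = 1) accumulates the area under N(τ) and compares the time-average number in system and the per-customer delay with the M/M/1 formulas ρ/(1−ρ) and 1/(µ−λ).

```python
import random

random.seed(7)
lam, mu, horizon = 0.5, 1.0, 200000.0

# event-driven M/M/1 sample path; only N(t) is needed for Little's theorem
t, n = 0.0, 0
area = 0.0                     # integral of N(tau) d tau (area between curves)
arrivals = 0
next_arr = random.expovariate(lam)
next_dep = float("inf")
while t < horizon:
    t_next = min(next_arr, next_dep, horizon)
    area += n * (t_next - t)
    t = t_next
    if t >= horizon:
        break
    if next_arr <= next_dep:   # arrival event
        arrivals += 1
        n += 1
        next_arr = t + random.expovariate(lam)
        if n == 1:             # server was idle: start a service
            next_dep = t + random.expovariate(mu)
    else:                      # departure event
        n -= 1
        next_dep = t + random.expovariate(mu) if n > 0 else float("inf")

N_t = area / horizon           # time-average number in system
lam_t = arrivals / horizon     # observed arrival rate
T_t = area / arrivals          # average time in system per arrival
print(N_t, lam_t, T_t)
```

With ρ = 0.5, the M/M/1 predictions are N = ρ/(1−ρ) = 1 and T = 1/(µ−λ) = 2, and N_t = λ_t·T_t holds by construction.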

Page 16: Queueing theory

Little’s theorem III

(figures: Little's theorem applied to a finite queue, and to a network of queues)

3-61

Increasing the arrival and transmission rates by the same factor

In a packet transmission system,
• The arrival rate (packets/sec) is increased from λ to Kλ for K > 1
• The packet length distribution remains the same (exponential), with mean 1/µ bits
• The transmission capacity (C bps) is increased by a factor of K

Performance:
• The average number of packets in the system remains the same:

N = ρ/(1 − ρ) with ρ = λ/(µC)

• Average delay per packet: by Little's theorem with arrival rate Kλ,

T = N/(Kλ), i.e., K times smaller

Aggregation is better: increasing a transmission line by K times can allow K times as many packets/sec with K times smaller average delay per packet

3-62

Statistical multiplexing vs TDMA or FDMA

Multiplexing: m Poisson packet streams, each with rate λ/m (packets/sec), are transmitted over a communication link with exponentially distributed packet transmission times of mean 1/µ.

(figures: a) statistical multiplexing — all m streams share the full link; b) TDMA or FDMA — the link is divided into m channels, each serving one stream)

T = 1/(µ − λ) < T = m/(µ − λ)

When do we need TDMA or FDMA?
– In a multiplexer, packet generation times overlap, so that it must buffer and delay some of the packets

3-63
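The gap between the two delay formulas above can be seen with a quick numerical illustration (a Python sketch; the values µ = 1, λ = 0.5 and m = 10 are arbitrary choices):

```python
def stat_mux_delay(lam, mu):
    """Statistical multiplexing: all streams share the full link, T = 1/(mu - lam)."""
    return 1.0 / (mu - lam)

def tdma_delay(lam, mu, m):
    """TDMA/FDMA: each of m channels gets rate mu/m and arrivals lam/m,
    so T = 1/(mu/m - lam/m) = m/(mu - lam)."""
    return 1.0 / (mu / m - lam / m)

lam, mu, m = 0.5, 1.0, 10
T_mux = stat_mux_delay(lam, mu)   # 1/(1 - 0.5) = 2 time units
T_tdma = tdma_delay(lam, mu, m)   # m times larger: 20 time units
```

The TDMA/FDMA delay is exactly m times the statistically multiplexed delay, since splitting the link leaves each subchannel with the same utilization but m times slower service.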

Little’s theorem: example I

Estimating throughput in a time-sharing system

[Figure 3.4: N terminals connected to a time-sharing computer system, with average reflection time R at the terminals and average job processing time P at the computer. To estimate the maximum attainable throughput, a departing user is assumed to reenter the system immediately or, equivalently, to be replaced immediately by a new user.]

Suppose a time-sharing computer system with N terminals. A user logs into the system through a terminal and, after an initial reflection period of average length R, submits a job that requires an average processing time P at the computer. Jobs queue up inside the computer and are served by a single CPU according to some unspecified priority or time-sharing rule. What is the maximum throughput sustainable by the system?
– Assume that there is always a user ready to take the place of a departing user, so the number of users in the system is always N

3-64


Little’s theorem: example II

The average time a user spends in the system:

    T = R + D → R + P ≤ T ≤ R + NP

– D: the average delay between the time a job is submitted to the computer and the time its execution is completed, D ∈ [P, NP]
Combining this with λ = N/T,

    N/(R + NP) ≤ λ ≤ min{ 1/P, N/(R + P) }

– throughput is bounded by 1/P, the maximum job execution rate

[Figure 3.5: Bounds on throughput and average user delay in a time-sharing system. (a) Bounds on attainable throughput: limited by the number of terminals below N = 1 + R/P and by the CPU processing capacity 1/P above it. (b) Bounds on average user time in a fully loaded system: the delay rises essentially in proportion to the number of terminals N. While the exact maximum attainable throughput depends on system parameters, the bounds obtained are independent of them, thanks to the generality of Little's theorem.]

3-65

Little’s theorem: example III

Using T = N/λ, we can rewrite

max{NP,R + P} ≤ T ≤ R + NP

[Figure 3.5 (repeated): the lower delay bound max{NP, R + P} comes from the limited CPU processing capacity and from assuming no waiting in queue; the upper bound R + NP grows linearly in N]

3-66

Poisson Arrivals See Time Average (PASTA) theorem I

Suppose a random process spends its time in different states E_j

In equilibrium, we can associate with each state E_j two different probabilities
• The probability of the state as seen by an outside random observer
  – π_j: prob. that the system is in state E_j at a random instant
• The probability of the state as seen by an arriving customer
  – π*_j: prob. that the system is in state E_j just before a (randomly chosen) arrival

In general, we have π_j ≠ π*_j

When the arrival process is Poisson, we have

    π_j = π*_j

3-67

PASTA theorem II

For a stochastic process N ≡ {N(t), t ≥ 0} and an arbitrary set of states B:

    U(t) = { 1, if N(t) ∈ B;  0, otherwise }   ⇒   V(t) = (1/t) ∫_0^t U(τ) dτ

For a Poisson arrival process A(t),

    Y(t) = ∫_0^t U(τ) dA(τ)   ⇒   Z(t) = Y(t)/A(t)

Lack of Anticipation Assumption (LAA): For each t ≥ 0, {A(t + u) − A(t), u ≥ 0} and {U(s), 0 ≤ s ≤ t} are independent: future inter-arrival times and the past of the process (e.g., service times of previously arrived customers) are independent.

Under LAA, as t → ∞, PASTA ensures

    Z(t) → V(∞) w.p. 1   if   V(t) → V(∞) w.p. 1

3-68


PASTA theorem

Proof:
• For sufficiently large n, Y(t) is approximated as

    Y_n(t) = Σ_{k=0}^{n−1} U(k t/n) [ A((k + 1)t/n) − A(k t/n) ]

• LAA decouples the expectation, since E[A((k + 1)t/n) − A(k t/n)] = λt/n:

    E[Y_n(t)] = λt · E[ Σ_{k=0}^{n−1} U(k t/n)/n ]

• As n → ∞, if |Y_n(t)| is bounded,

    lim_{n→∞} E[Y_n(t)] = E[Y(t)] = λt E[V(t)] = λ E[ ∫_0^t U(τ) dτ ].  □

That is, the expected number of arrivals who find the system in state B equals the arrival rate times the expected length of time the system spends there.

3-69

Systems where PASTA does not hold

Ex1) D/D/1 queue
• Deterministic arrivals every 10 msec
• Deterministic service times of 9 msec

[Figure: a sample path of the D/D/1 queue, with arrivals at t = 0, 10, 20, . . . msec and departures at t = 9, 19, . . . msec]

• Arrivals always find the system empty.
• The system is occupied a fraction 0.9 of the time on average.

Ex2) LAA violated: the service time of a current customer depends on an inter-arrival time of a future customer
• Your own PC (one customer, one server)
• Your own PC is always free when you need it, π*_0 = 1
• π_0 = proportion of time the PC is free (< 1)

3-70

M/M/1/K I

M/M/1/K: the system can accommodate K customers

[Figure: state transition diagram of M/M/1/K, holding the customer in service plus the waiting customers]

• State balance equations

    λπ_0 = µπ_1
    (λ + µ)π_i = λπ_{i−1} + µπ_{i+1} for 1 ≤ i ≤ K − 1
    µπ_K = λπ_{K−1}

After rearranging, we have

    λπ_{i−1} = µπ_i for 1 ≤ i ≤ K

• For i ∈ {0, 1, . . . , K}, steady-state probabilities are (ρ = λ/µ)

    π_n = ρ^n π_0 and Σ_{n=0}^K π_n = 1 ⇒ π_0 = (1 − ρ)/(1 − ρ^{K+1}) 3-71

M/M/1/K II

• π_K: the probability that an arriving customer finds the system full. Due to PASTA, this is the blocking probability:

    π_K = (1 − ρ)ρ^K / (1 − ρ^{K+1})

• Blocking probability in simulation:

    P_B = (total # of arrivals blocked at their arrival instants) / (total # of arrivals at the system)

[Figure: P_B versus ρ ∈ [0.1, 0.9]; analysis and simulation agree for K = 5 and K = 10]

3-72
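The analysis curve in the figure is just π_K evaluated over ρ. A small Python helper (illustrative only; the course's simulation uses MATLAB) shows how blocking falls as the buffer grows:

```python
def mm1k_blocking(rho, K):
    """M/M/1/K blocking probability (by PASTA): pi_K = (1-rho) rho^K / (1 - rho^(K+1))."""
    if rho == 1.0:
        return 1.0 / (K + 1)   # limiting case: pi_n is uniform over the K+1 states
    return (1.0 - rho) * rho**K / (1.0 - rho**(K + 1))

pb = mm1k_blocking(0.5, 10)    # small: roughly 0.5^11
```

For a fixed ρ < 1, increasing K decreases the blocking probability roughly geometrically.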


M/M/1/K Simulation I

clear
% Define variables
global arrival departure mservice_time
arrival = 1; departure = -1; mservice_time = 1;
% Define simulation parameters
sim_length = 30000; K = 10; system_queue = zeros(1,K);
k = 0; max_iter = 5;
for arrival_rate = 0.1:0.025:0.97
    k = k + 1;
    x(k) = arrival_rate*mservice_time;
    % initialize
    sim_time = 0; num_arrivals = 0; num_system = 0;
    upon_arrival = 0; total_delay = 0; num_served = 0; dropped = 0;
    % Assuming that the queue is empty
    event = arrival; event_time = exprnd(1/arrival_rate);
    sim_time = sim_time + event_time;
    for iter = 1:max_iter
        while (sim_time < sim_length)
            % If an arrival occurs,
            if event == arrival
                num_arrivals = num_arrivals + 1;
                if num_system == K
                    dropped = dropped + 1;
                else
                    num_system = num_system + 1;
                    system_queue(num_system) = sim_time;
                    upon_arrival = upon_arrival + num_system;
                end
                % See whether a new arrival or a departure occurs next
                [event, event_time] = schedule_next_event(arrival_rate); 3-73

M/M/1/K Simulation II

            % If a departure occurs,
            elseif event == departure
                delay_per_arrival = sim_time - system_queue(1);
                system_queue(1:K-1) = system_queue(2:K);
                total_delay = total_delay + delay_per_arrival;
                num_system = num_system - 1;
                num_served = num_served + 1;
                if num_system == 0
                    % nothing to serve, schedule an arrival
                    event = arrival;
                    event_time = exprnd(1/arrival_rate);
                elseif num_system > 0
                    % the system still has customers to serve
                    [event, event_time] = schedule_next_event(arrival_rate);
                end
            end
            sim_time = sim_time + event_time;
        end
        Pd_iter(iter) = dropped/num_arrivals;
    end
    piK(k) = x(k)^K*(1-x(k))./(1-x(k)^(K+1));
    Pd(k) = mean(Pd_iter);
end

%%%%%%%%%%%%% use the previous schedule_next_event function

3-74

M/M/m queue I

M/M/m: there are m parallel servers, whose service times are exponentially distributed with mean 1/µ.

[Figure: state transition rate diagram of M/M/m, with departure rate nµ in state n < m and mµ for n ≥ m]

When m servers are busy, the time until the next departure, X, is

    X = min(τ_1, τ_2, . . . , τ_m) ⇒ Pr[X > t] = Pr[min(τ_1, τ_2, . . . , τ_m) > t]
                                              = Π_{i=1}^m Pr[τ_i > t] = e^{−mµt} (i.i.d.)

Global balance equations:

    λπ_0 = µπ_1
    (λ + min(n, m)µ)π_n = λπ_{n−1} + min(n + 1, m)µπ_{n+1} for n ≥ 1 3-75

M/M/m queue II

The previous global balance equations can be rewritten as

    λπ_{n−1} = min(n, m)µπ_n for n ≥ 1

Using a = λ/µ and ρ = λ/(mµ),

    π_n = (a^n/n!)π_0 for n < m,    π_n = (a^m/m!)ρ^{n−m}π_0 for n ≥ m

From the normalization condition, π_0 is obtained:

    1 = Σ_{i=0}^∞ π_i = π_0 { Σ_{i=0}^{m−1} a^i/i! + (a^m/m!) Σ_{i=m}^∞ ρ^{i−m} }

Erlang C formula, C(m, a):

    C(m, a) = Pr[W > 0] = Pr[N ≥ m] = Σ_{i=m}^∞ π_i = ((mρ)^m/m!) · π_0/(1 − ρ)

3-76
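The Erlang C formula above can be sketched in a few lines of Python (illustrative only; a = λ/µ is the offered load and stability requires ρ = a/m < 1):

```python
from math import factorial

def erlang_c(m, a):
    """Erlang C: probability an arrival must wait in M/M/m (a = lambda/mu, a/m < 1)."""
    rho = a / m
    tail = (a**m / factorial(m)) / (1 - rho)          # sum over states n >= m
    pi0_inv = sum(a**i / factorial(i) for i in range(m)) + tail
    return tail / pi0_inv

# sanity check: for m = 1 the formula reduces to Pr[N >= 1] = rho in M/M/1
```

For m = 1 and a = ρ the expression collapses to ρ, consistent with the M/M/1 queue.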


M/M/c/c I

c servers, and only c customers can be accommodated

[Figure: state transition rate diagram of M/M/c/c, with departure rate nµ in state n]

Balance equations are (a = λ/µ, called Erlang)

    λπ_{n−1} = nµπ_n ⇒ π_n = (a/n)π_{n−1} = (a^n/n!)π_0

Using Σ_{n=0}^c π_n = 1, we have

    π_n = (a^n/n!) / { Σ_{i=0}^c a^i/i! }

Erlang B formula: B(c, a) = π_c
– valid for the M/G/c/c system; note that it depends only on the mean of the service time distribution

3-77

M/M/c/c II

Erlang capacity: telephone systems with c channels

[Figures: Erlang B blocking probability B(c, a) versus offered traffic intensity a, for c = 1, 2, . . . , 10 and for c = 10, 20, . . . , 100; and P_B versus a, where analysis and simulation agree for c = 3 and c = 5]

3-78

M/M/c/c Simulation I

clear
global arrival departure mservice_time
arrival = 1; departure = -1; mservice_time = 1;
sim_length = 50000; n_iter = 5;
K = 5; % number of servers
k = 0;
for arrival_rate = 0.05:0.025:0.95
    k = k + 1;
    for iter = 1:n_iter
        sim_time = 0;
        num_busy_servers = 0;
        block = 0;
        num_arrival = 0;
        event = arrival;
        event_time = exprnd(1/arrival_rate);
        while (sim_time < sim_length)
            if event == arrival
                num_arrival = num_arrival + 1;
                %% All servers are working,
                if num_busy_servers == K
                    block = block + 1;
                end
                %% increase the number of busy servers by 1
                num_busy_servers = min(K, num_busy_servers + 1);
                [event, event_time] = schedule_next_event_multi(arrival_rate, num_busy_servers);

3-79

M/M/c/c Simulation II

            elseif event == departure
                num_busy_servers = num_busy_servers - 1;
                if num_busy_servers == 0
                    event = arrival; event_time = exprnd(1/arrival_rate);
                else
                    [event, event_time] = schedule_next_event_multi(arrival_rate, num_busy_servers);
                end
            end
            sim_time = sim_time + event_time;
        end
        simb(iter) = block/num_arrival;
    end
    rho = arrival_rate/mservice_time; x(k) = rho;
    anab(k) = (rho^K/factorial(K))/(sum((rho.^[0:K])./factorial([0:K])));
    Pb(k) = mean(simb);
end

%%%%%%%%%%%% A new function starts here %%%%%%%%%%%%
function [event, event_time] = schedule_next_event_multi(arrival_rate, num_busy_servers)
global arrival departure mservice_time
inter_arrival = exprnd(1/arrival_rate);
multi_service = exprnd(mservice_time, [1 num_busy_servers]);
service_time = min(multi_service);
if inter_arrival < service_time
    event = arrival; event_time = inter_arrival;
else
    event = departure;
    event_time = service_time;
end 3-80


Example: a system with blocking I

In the Select-city shopping mall, customers arrive at the underground parking lot according to a Poisson process with a rate of 60 cars per hour. Parking time follows a Weibull distribution with mean 2.5 hours, and the parking lot can accommodate 150 cars. When the parking lot is full, an arriving customer has to park somewhere else. Find the fraction of customers finding all places occupied upon arrival.

[Figure: two different distributions with the same mean,
    Weibull (α = 2.7228, k = 5): f(x) = (k/α)(x/α)^{k−1} e^{−(x/α)^k}
    exponential: f(x) = (1/α)e^{−x/α} ]

– Mean of the Weibull distribution: αΓ(1 + 1/k), where Γ(x) = ∫_0^∞ t^{x−1}e^{−t} dt is the gamma function 3-81

Example: a system with blocking II

• c = 150 and a = λ/µ = 60 × 2.5 = 150

    B(c, a)|_{c=150, a=150} = (a^c/c!) / Σ_{i=0}^c a^i/i!

• Divide the numerator and denominator by Σ_{n=0}^{c−1} a^n/n!:

    B(c, a) = (a^c/c!) / ( Σ_{i=0}^{c−1} a^i/i! + a^c/c! )
            = [ (a^c/c!)/Σ_{n=0}^{c−1} a^n/n! ] / [ 1 + (a^c/c!)/Σ_{n=0}^{c−1} a^n/n! ]
            = (a/c)B(c − 1, a) / (1 + (a/c)B(c − 1, a))
            = aB(c − 1, a) / (c + aB(c − 1, a))

with B(0, a) = 1

3-82
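Evaluating B(150, 150) directly from the factorial form overflows floating point easily; the recursion just derived avoids that. A Python sketch (function name is ours):

```python
def erlang_b(c, a):
    """Erlang B via the numerically stable recursion
    B(n, a) = a*B(n-1, a) / (n + a*B(n-1, a)), starting from B(0, a) = 1."""
    b = 1.0
    for n in range(1, c + 1):
        b = a * b / (n + a * b)
    return b

blocking = erlang_b(150, 150.0)   # the parking-lot example: c = a = 150
```

With c = a = 150 the blocking probability comes out around 6 to 7 percent, so roughly one customer in fifteen finds the lot full.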

Finite source population: M/M/C/C/K system I

Consider the loss system (no waiting places) in the case where the arrivals originate from a finite population of sources: the total number of customers is K

[Figure: K sources sharing C servers]

• The time to the next call attempt by a customer, the so-called thinking time (idle time) of the customer, obeys an exponential distribution with mean 1/λ (sec)
• Blocked calls are lost
  – a blocked call does not lead to reattempts; the customer starts a new thinking time, again exponentially distributed with mean 1/λ
  – the call holding time is exponentially distributed with mean 1/µ 3-83

M/M/C/C/K system II

If C ≥ K, each customer has its own server, i.e., there is no blocking.

• Each user alternates between two states: active with mean 1/µ and idle with mean 1/λ
• The probability for a user to be idle or active is

    π_0 = (1/λ)/(1/λ + 1/µ) and π_1 = (1/µ)/(1/λ + 1/µ)

• Call arrival rate: π_0 λ; offered load: π_1 = a/(1 + a), with a = λ/µ

If C < K, this system can be described by the balance equations

    ((K − i)λ + iµ)π_i = (K − i + 1)λπ_{i−1} + (i + 1)µπ_{i+1}

3-84


M/M/C/C/K system III

• For j = 1, 2, . . . , C, we have

    (K − j + 1)λπ_{j−1} = jµπ_j ⇒ π_j = (K choose j) a^j π_0.

• Applying Σ_{j=0}^C π_j = 1,

    π_j = (K choose j) a^j / Σ_{k=0}^C (K choose k) a^k

Time blocking (or congestion): the proportion of time the system spends in state C; the equilibrium probability of state C is

    P_B = π_C

– The probability of all resources being busy in a given observation period
– Insensitivity: like the Erlang B formula, this result is insensitive to the form of the holding time distribution (though the derivation above was explicitly based on the assumption of an exponential holding time distribution)

3-85

M/M/C/C/K system IV

Call blocking: the probability that an arriving call is blocked, i.e., P_L

• The arrival rate is state-dependent, i.e., (K − N(t))λ: not Poisson.
• PASTA does not hold: time blocking P_B cannot represent P_L
• λ_T: average call arrival rate

    λ_T ∝ Σ_{i=0}^C (K − i)λπ_i

– P_L: the probability that a call finds the system blocked
– If λ_T = 10000 and P_L = 0.01, then λ_T P_L = 100 calls are lost
• λ_C: call arrival rate when the system is blocked

    λ_C ∝ (K − C)λ

– P_B λ_C: blocked calls at the arrival instants

    P_L λ_T = P_B λ_C

– Among all arrivals, the fraction that find the system blocked must equal the call arrivals generated while the system is busy

3-86

M/M/C/C/K system V

• Call blocking P_L can be obtained by

    P_L λ_T = P_B λ_C → P_L = (λ_C/λ_T) P_B ≤ P_B

• Engset formula:

    P_L(K) = (K − C)λπ_C / Σ_{i=0}^C (K − i)λπ_i
           = (K − C) [K!/(C!(K − C)!)] a^C / Σ_{i=0}^C (K − i) [K!/(i!(K − i)!)] a^i
           = [(K − 1)!/(C!(K − 1 − C)!)] a^C / Σ_{i=0}^C [(K − 1)!/(i!(K − 1 − i)!)] a^i
           = (K − 1 choose C) a^C / Σ_{i=0}^C (K − 1 choose i) a^i

– The state distribution seen by an arriving customer is the same as the equilibrium distribution in a system with one less customer. It is as if the arriving customer were an "outside observer"
– P_L(K) = P_B(K − 1); as K → ∞, P_L → P_B

3-87
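The identity P_L(K) = P_B(K − 1) is easy to confirm numerically. A Python sketch of the two blocking measures (function names are ours; C = 5, K = 10, a = 0.3 are arbitrary test values):

```python
from math import comb

def engset_time_blocking(C, K, a):
    """Time blocking P_B = pi_C for the finite-source loss system."""
    denom = sum(comb(K, k) * a**k for k in range(C + 1))
    return comb(K, C) * a**C / denom

def engset_call_blocking(C, K, a):
    """Call blocking P_L (Engset formula): the distribution seen by an
    arriving customer is that of a system with one less customer."""
    denom = sum(comb(K - 1, i) * a**i for i in range(C + 1))
    return comb(K - 1, C) * a**C / denom
```

The call blocking expression is literally the time-blocking formula with K replaced by K − 1, which is the "one less customer" statement on the slide.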

Probability generating function

For a discrete random variable X with g_k = Pr[X = k], the PGF is defined as

    G(z) = E[z^X] = Σ_k z^k g_k

where z is a complex variable, and g_k = (1/k!) d^k G(z)/dz^k at z = 0.

For |z| ≤ 1, G(z) is convergent:

    |G(z)| ≤ Σ_k |z|^k |g_k| ≤ Σ_k g_k = 1

G(z) is analytic for |z| < 1, since
• it is differentiable infinitely often in that domain, and
• G(z) is expressed as a convergent power series, i.e., Σ_k z^k g_k

3-88
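As a concrete illustration of these definitions, the Poisson PGF E[z^Λ] = e^{λ(z−1)} used earlier in the Poisson review can be checked against its power series in plain Python (λ = 2 and z = 0.7 are arbitrary choices; 60 terms make the truncation error negligible):

```python
from math import exp, factorial

lam, z = 2.0, 0.7
# Poisson pmf g_k and its PGF as a truncated power series
gk = [exp(-lam) * lam**k / factorial(k) for k in range(60)]
G_series = sum(z**k * g for k, g in enumerate(gk))
G_closed = exp(lam * (z - 1))                        # closed form e^{lam(z-1)}
mean_from_pgf = sum(k * g for k, g in enumerate(gk)) # G'(1) = lam
```

The series and closed form agree, G(1) = Σ g_k = 1, and the first derivative at z = 1 recovers the mean λ.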


Bulk queues: Bulk arrival I

An arrival brings i customers at once, with g_i = Pr[bulk size = i]

[Figure: state transition diagram, where bulk arrivals jump from state k to k + i at rate λg_i and departures move from k to k − 1 at rate µ]

Global balance equations

    λπ_0 = µπ_1
    (λ + µ)π_k = µπ_{k+1} + Σ_{i=0}^{k−1} π_i λ g_{k−i} for k ≥ 1

Using the definition of the PGF, i.e., Π(z) = Σ_{k=0}^∞ z^k π_k,

    (λ + µ) Σ_{k=1}^∞ π_k z^k = (µ/z) Σ_{k=1}^∞ π_{k+1} z^{k+1} + Σ_{k=1}^∞ Σ_{i=0}^{k−1} π_i λ g_{k−i} z^k

3-89

Bulk queues: Bulk arrival II

The term Σ_{k=1}^∞ Σ_{i=0}^{k−1} π_i g_{k−i} z^k can be written out as

    k = 1:  π_0 g_1 z
    k = 2:  (π_0 g_2 + π_1 g_1) z^2
    k = 3:  (π_0 g_3 + π_1 g_2 + π_2 g_1) z^3
    ...
    k = i:  (π_0 g_i + π_1 g_{i−1} + · · · + π_{i−1} g_1) z^i

which yields π_0 G(z) + π_1 z G(z) + π_2 z^2 G(z) + · · · = Π(z)G(z), with G(z) = Σ_{k=1}^∞ g_k z^k

Substituting this into the previous equation,

    (λ + µ)(Π(z) − π_0) = (µ/z)(Π(z) − π_0 − zπ_1) + λΠ(z)G(z),  where π_1 = (λ/µ)π_0

3-90

Bulk queues: Bulk arrival III

After some manipulations, we have

    Π(z) = µπ_0 (1 − z) / ( µ(1 − z) − λz[1 − G(z)] ) = N(z)/D(z)

To determine π_0, we use Π(1) = 1:

    Π(1) = N(1)/D(1) = 0/0 → (L'Hôpital's rule) → N′(1)/D′(1) = 1,

which yields

    π_0 = 1 − λG′(1)/µ

The mean number of customers in the system is

    N = Π′(1) = [N′(z)D(z) − N(z)D′(z)] / (D(z))^2 evaluated at z = 1,

where L'Hôpital's rule must be applied again

3-91
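For a concrete check of π_0 = 1 − λG′(1)/µ, take a fixed bulk size of 2, i.e. G(z) = z² and G′(1) = 2. The balance equations can then be iterated numerically (a Python sketch; λ = 0.2, µ = 1 are arbitrary stable choices, and the function name is ours):

```python
def bulk2_pi0(lam, mu, terms=400):
    """Iterate the balance equations of the bulk-arrival queue with fixed
    batch size 2 (g_2 = 1, all other g_i = 0) and return the normalized pi_0.
    lam*pi_0 = mu*pi_1; (lam+mu)*pi_1 = mu*pi_2 (no g_1 term);
    (lam+mu)*pi_k = mu*pi_{k+1} + lam*pi_{k-2} for k >= 2."""
    pi = [1.0, lam / mu, (lam + mu) * lam / mu**2]   # unnormalized, pi_0 := 1
    for k in range(2, terms):
        pi.append(((lam + mu) * pi[k] - lam * pi[k - 2]) / mu)
    return pi[0] / sum(pi)

pi0 = bulk2_pi0(lam=0.2, mu=1.0)   # theory: 1 - lam*G'(1)/mu = 1 - 0.4 = 0.6
```

The tail decays geometrically (the effective load is λG′(1)/µ < 1), so a few hundred terms suffice for the normalization sum.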

Bulk queues: Bulk service I

Serve a group of size r; some variations are possible

[Figure: state transition diagram, where arrivals move k → k + 1 at rate λ and a departure serves up to r customers at once, k + r → k at rate µ]

    λπ_0 = µ(π_1 + π_2 + · · · + π_r)
    (λ + µ)π_k = µπ_{k+r} + λπ_{k−1} for k ≥ 1

Using the definition of the PGF,

    (λ + µ)[Π(z) − π_0] = (µ/z^r)[ Π(z) − Σ_{k=0}^r π_k z^k ] + λzΠ(z)

Solving for Π(z), we have

    Π(z) = ( µ Σ_{k=0}^r π_k z^k − (λ + µ)π_0 z^r ) / ( λz^{r+1} − (λ + µ)z^r + µ )

3-92


Bulk queues: Bulk service II

The relation λπ_0 = µ(π_1 + π_2 + · · · + π_r) can be rewritten (adding µπ_0 to both sides and multiplying by −z^r) as

    −z^r (λπ_0 + µπ_0) = −z^r µ(π_0 + π_1 + π_2 + · · · + π_r)

so we can rewrite Π(z) as

    Π(z) = Σ_{k=0}^{r−1} π_k (z^k − z^r) / ( rρz^{r+1} − (1 + rρ)z^r + 1 ) with ρ = λ/(µr)

– Can we determine π_k for k ∈ {0, 1, . . . , r − 1} using Π(1) = 1? One equation is not enough; the remaining conditions come from the analyticity of Π(z), via Rouché's theorem.

Rouché's theorem: If f(z) and g(z) are analytic functions of z inside and on a closed contour C, and |g(z)| < |f(z)| on C, then f(z) and f(z) + g(z) have the same number of zeros inside C
• Let f(z) = −(1 + rρ)z^r and g(z) = rρz^{r+1} + 1
• On the closed contour C centered at the origin with radius 1 + δ, i.e., |z| = 1 + δ,

    |f(z)| = |−(1 + rρ)z^r| = (1 + rρ)(1 + δ)^r
    |g(z)| = |rρz^{r+1} + 1| ≤ rρ(1 + δ)^{r+1} + 1

3-93

Bulk queues: Bulk service III

• On the contour C,

    |f(z)| − |g(z)| ≥ (1 + rρ)(1 + δ)^r − rρ(1 + δ)^{r+1} − 1
                   = (1 + δ)^r (1 − rρδ) − 1
                   ≥ (1 + rδ)(1 − rρδ) − 1   (using (1 + δ)^r ≥ 1 + rδ)
                   = rδ(1 − ρ − rρδ) > 0 for 0 < δ < (1 − ρ)/(rρ)

• Letting δ → 0, the denominator has r roots in |z| ≤ 1
• The denominator, of degree r + 1, has one additional root z_0 with |z_0| > 1

Since Π(z) is analytic for |z| ≤ 1, the r roots of the denominator must be canceled by roots of the numerator (otherwise Π(z) would not be analytic)
– We can rewrite the numerator as

    Σ_{k=0}^{r−1} π_k (z^k − z^r) = K(z − 1) Π_{k=1}^{r−1} (z − z*_k)

where K is a proportionality constant and the z*_k are the roots inside the unit disk

3-94

Bulk queues: Bulk service IV

By canceling the roots inside and on the unit disk in the numerator and denominator, we have

    Π(z) = K(1 − z) Π_{k=1}^{r−1} (z − z*_k) / ( rρz^{r+1} − (1 + rρ)z^r + 1 ) = K / (1 − z/z_0)

– Using Π(1) = 1, we have K = 1 − 1/z_0
– π_k = (1 − 1/z_0)(1/z_0)^k for k = 0, 1, 2, . . .

Our system with bulk service reduces to the M/M/1 queue if r = 1
• Comparison with the M/M/1 queue:

    π_i = (1 − ρ)ρ^i for i = 0, 1, 2, . . . → Π(z) = (1 − ρ)/(1 − ρz),

where we find z_0 = 1/ρ > 1

3-95

Bulk queues: Bulk service V

Real roots outside the unit disk

[Figure: D(z) = rρz^{r+1} − (1 + rρ)z^r + 1, with ρ̂ = λ/µ and ρ = ρ̂/r, plotted for z ∈ [1, 2.2] for (ρ̂ = 0.75, r = 1), (ρ̂ = 0.75, r = 2), (ρ̂ = 0.75, r = 3) and (ρ̂ = 0.95, r = 3); for r = 1 the root beyond the unit disk is 1/0.75 = 1.333]

• For r = 1, we have the M/M/1 queue, z_0 = 1/ρ̂
• Mean number of customers in the system:

    L = Π′(1) = 1/(z_0 − 1)

– As r increases for a fixed ρ̂, z_0 increases → L decreases
– As ρ̂ increases for a fixed r, z_0 gets closer to 1 → L increases

3-96
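Locating z_0 numerically is straightforward: D(1) = 0 with D′(1) = r(ρ − 1) < 0, so D dips below zero just past z = 1 and crosses back up exactly once at z_0. A pure-Python bisection sketch (function name is ours; assumes ρ = ρ̂/r < 1):

```python
def bulk_service_L(r, rho_hat):
    """Find the root z0 > 1 of D(z) = r*rho*z^(r+1) - (1 + r*rho)*z^r + 1,
    with rho = rho_hat/r < 1, and return (z0, L = 1/(z0 - 1))."""
    rho = rho_hat / r
    D = lambda z: r * rho * z**(r + 1) - (1 + r * rho) * z**r + 1
    lo, hi = 1.0 + 1e-9, 2.0
    while D(hi) <= 0:          # expand until D changes sign; D(z) -> +inf
        hi *= 2
    for _ in range(200):       # bisection on the unique crossing in (1, hi)
        mid = 0.5 * (lo + hi)
        if D(mid) < 0:
            lo = mid
        else:
            hi = mid
    z0 = 0.5 * (lo + hi)
    return z0, 1.0 / (z0 - 1.0)

z0, L = bulk_service_L(r=2, rho_hat=0.75)   # D(z) = 0.75 z^3 - 1.75 z^2 + 1
```

For ρ̂ = 0.75 and r = 2 the cubic factors as (z − 1)(0.75z² − z − 1), giving z_0 = 2 and L = 1, smaller than the M/M/1 value L = 3 at the same ρ̂ with r = 1, matching the trend noted on the slide.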


Where are we?

Elementary queueing models
– M/M/1, M/M/C, M/M/C/C/K, . . . and bulk queues
– either product-form solutions or use of the PGF

Intermediate queueing models (product-form solutions)
– Time-reversibility of Markov processes
– Detailed balance equations of time-reversible MCs
– Multidimensional birth-death processes
– Networks of queues: open and closed networks

Advanced queueing models
– M/G/1-type queues: embedded MC and mean-value analysis
– M/G/1 with vacations and priority queues
– G/M/m queue

More advanced queueing models (omitted)
– Algorithmic approaches to obtain steady-state solutions

3-97

Time Reversibility of discrete-time MC I

For an irreducible, aperiodic, discrete-time MC (X_n, X_{n+1}, . . .) having transition probabilities p_ij and stationary distribution π_i for all i: the time-reversed MC is defined as X*_n = X_{τ−n} for an arbitrary τ > 0

[Figure: sample paths of the forward process and the time-reversed process]

1) Transition probabilities of X*_n:

    p*_ij = π_j p_ji / π_i

2) X_n and X*_n have the same stationary distribution π_i, given 1) and

    Σ_{j=0}^∞ p_ij = Σ_{j=0}^∞ p*_ij = 1

3-98

Time Reversibility of discrete-time MC II

• Proof of 1), p*_ij = π_j p_ji / π_i:

    p*_ij = Pr[X_m = j | X_{m+1} = i, X_{m+2} = i_2, . . . , X_{m+k} = i_k]
          = Pr[X_m = j, X_{m+1} = i, X_{m+2} = i_2, . . . , X_{m+k} = i_k] / Pr[X_{m+1} = i, X_{m+2} = i_2, . . . , X_{m+k} = i_k]
          = Pr[X_m = j, X_{m+1} = i] Pr[X_{m+2} = i_2, . . . , X_{m+k} = i_k | X_m = j, X_{m+1} = i]
            / ( Pr[X_{m+1} = i] Pr[X_{m+2} = i_2, . . . , X_{m+k} = i_k | X_{m+1} = i] )
          = Pr[X_m = j, X_{m+1} = i] / Pr[X_{m+1} = i]
          = Pr[X_{m+1} = i | X_m = j] Pr[X_m = j] / Pr[X_{m+1} = i]
          = p_ji π_j / π_i

• Proof of 2): using the above result,

    Σ_{i∈S} π_i p*_ij = Σ_{i∈S} π_i (π_j p_ji / π_i) = π_j Σ_{i∈S} p_ji = π_j

3-99

Time Reversibility of discrete-time MC III

A Markov process X_n is said to be reversible if the transition probabilities of the forward and reversed chains are the same:

    p*_ij = Pr[X_m = j | X_{m+1} = i] = p_ij = Pr[X_{m+1} = j | X_m = i]

• Time reversibility ⇔ detailed balance equations (DBEs) hold:

    π_i p*_ij = π_j p_ji → π_i p_ij = π_j p_ji (detailed balance equation)

What types of Markov processes satisfy this detailed balance equation? The discrete-time birth-death (BD) process
• Transitions occur only between neighboring states: p_ij = 0 for |i − j| > 1

[Figure: state transition diagram of a birth-death chain over states 0, 1, 2, . . .]

3-100


Time Reversibility of discrete-time MC IV

A transmitter’s queue with stop-and-wait ARQ (θ = qr) in Mid-term I
• Is this process reversible?

[Figure: state transition diagram over states 0, 1, 2, . . .]

• Global balance equations (GBEs)

    π_0 = (1 − p)π_0 + (1 − p)θπ_1
    π_1 = pπ_0 + (pθ + (1 − p)(1 − θ))π_1 + (1 − p)θπ_2

For i = 2, 3, . . ., we have

    π_i = p(1 − θ)π_{i−1} + (pθ + (1 − p)(1 − θ))π_i + (1 − p)θπ_{i+1}

• Instead, we can use DBEs, or simplify the GBEs using the cut between states n and n + 1:

    p(1 − θ)π_i = (1 − p)θπ_{i+1} ↔ Σ_{j=0}^n Σ_{i=n+1}^∞ π_j p_ji = Σ_{j=0}^n Σ_{i=n+1}^∞ π_i p_ij

3-101

Time Reversibility of discrete-time MC V

Kolmogorov criterion
• A discrete-time Markov chain is reversible if and only if

    p_{i_1 i_2} p_{i_2 i_3} · · · p_{i_{n−1} i_n} p_{i_n i_1} = p_{i_1 i_n} p_{i_n i_{n−1}} · · · p_{i_3 i_2} p_{i_2 i_1}

for any finite sequence of states i_1, i_2, . . . , i_n and any n

Proof:
• For a reversible chain, the detailed balance equations hold along any cycle

[Figure: a cycle through states 0, 1, 2, 3]

• Fixing two states, i_1 = i and i_n = j, and multiplying over all states,

    p_{i i_2} p_{i_2 i_3} · · · p_{i_{n−1} j} p_{ji} = p_{ij} p_{j i_{n−1}} · · · p_{i_3 i_2} p_{i_2 i} 3-102

Time Reversibility of discrete-time MC VI

• From the Kolmogorov criterion, summing over the intermediate states i_2, . . . , i_{n−1}, we get

    p_{i i_2} p_{i_2 i_3} · · · p_{i_{n−1} j} p_{ji} = p_{ij} p_{j i_{n−1}} · · · p_{i_3 i_2} p_{i_2 i}
    p^{(n−1)}_{ij} p_{ji} = p_{ij} p^{(n−1)}_{ji}

As n → ∞, we have

    lim_{n→∞} p^{(n−1)}_{ij} p_{ji} = lim_{n→∞} p_{ij} p^{(n−1)}_{ji} → π_j p_ji = π_i p_ij

Inspect whether the following two-state MC is reversible:

    P = [ 0    1
          0.5  0.5 ]

– It is a small BD process
– Using the state probabilities π_0 = 1/3 and π_1 = 2/3,

    π_0 p_01 = (1/3) · 1 = π_1 p_10 = (2/3) · (1/2) = 1/3

3-103

Time Reversibility of discrete-time MC VII

Inspect whether the following three-state MC is reversible:

    P = [ 0    0.6  0.4
          0.1  0.8  0.1
          0.5  0    0.5 ]

• Using the Kolmogorov criterion,

    p_12 p_23 p_31 = 0.6 × 0.1 × 0.5 ≠ p_13 p_32 p_21 = 0.4 × 0 × 0.1 = 0

• Inspecting the state transition diagram, it is not a BD process

If the state transition diagram of a Markov process is a tree, then the process is time reversible
– A generalization of BD processes: at each cut boundary, the DBE is satisfied 3-104
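The Kolmogorov check for the three-state chain above amounts to comparing two products of transition probabilities, which a few lines of Python make explicit (0-indexed states; variable names are ours):

```python
# Transition matrix of the three-state chain from the slide
P = [[0.0, 0.6, 0.4],
     [0.1, 0.8, 0.1],
     [0.5, 0.0, 0.5]]

# Kolmogorov criterion on the cycle 1 -> 2 -> 3 -> 1 (0-indexed: 0 -> 1 -> 2 -> 0)
forward = P[0][1] * P[1][2] * P[2][0]    # p12 p23 p31 = 0.6 * 0.1 * 0.5
backward = P[0][2] * P[2][1] * P[1][0]   # p13 p32 p21 = 0.4 * 0.0 * 0.1
is_reversible_on_cycle = abs(forward - backward) < 1e-12
```

Since the backward product is zero (p_32 = 0) while the forward product is not, the chain fails the criterion on this single cycle and is not reversible.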


Continuous-time reversible MC I

For a continuous-time MC X(t) with stationary state probabilities θ_i, there is a discrete-time embedded Markov chain whose stationary pmf and state transition probabilities are π_i and p̃_ij.

[Figure: sample path of the forward process, its embedded Markov chain, and the reverse process]

There is a reversed embedded MC with π_i p̃_ij = π_j p̃*_ji for all i ≠ j.

[Figure: a CTMC over states 0, 1, 2, 3, 4, . . . and its embedded MC (a BD process)]

3-105

Continuous-time reversible MC II

Recall the state occupancy time of the forward process:

    Pr[T_i > t + s | T_i > t] = Pr[T_i > s] = e^{−v_i s}

If X(t) = i, the probability that the reversed process remains in state i for an additional s seconds is

    Pr[X(t′) = i, t − s ≤ t′ ≤ t | X(t) = i] = e^{−v_i s}

[Figure: forward process, embedded Markov chain, and reverse process on a common time axis]

3-106

Continuous-time reversible MC III

A continuous-time MC with stationary probability θ_i of state i and transition rate γ_ji from j to i has a reversed MC with transition rates γ*_ij, if we find γ*_ij satisfying

    γ*_ij = v_i p̃*_ij = v_i π_j p̃_ji / π_i |_{p̃_ji = γ_ji/v_j} = v_i π_j γ_ji / (π_i v_j) = θ_j γ_ji / θ_i

(the middle expressions come from the embedded MC)
– p̃*_ij (= p̃_ij when the embedded MC is reversible): state transition probability of the reversed embedded MC
– A continuous-time MC whose state occupancy times are exponentially distributed is reversible if its embedded MC is reversible

Additionally, we have v_j = v*_j:

    Σ_{i≠j} θ_i γ*_ij |_{γ*_ij = θ_j γ_ji/θ_i} = θ_j Σ_{i≠j} γ_ji = θ_j v_j = θ_j v*_j ⇒ Σ_{j≠i} γ_ij = Σ_{j≠i} γ*_ij

3-107

Continuous-time reversible MC IV

The detailed balance equations hold for continuous-time reversible MCs:

    θ_j γ_ji (input rate to i) = θ_i γ_ij (output rate from i) for j = i + 1

– Birth-death systems have γ_ij = 0 for |i − j| > 1
– Since the embedded MC is reversible,

    π_i p̃_ij = π_j p̃_ji → (v_i θ_i/c) p̃_ij = (v_j θ_j/c) p̃_ji → θ_i γ_ij = θ_j γ_ji

If there exists a set of positive numbers θ_i that sum up to 1 and satisfy

    θ_i γ_ij = θ_j γ_ji for i ≠ j,

then the MC is reversible and θ_i is the unique stationary distribution
– Birth and death processes, e.g., M/M/1, M/M/c, M/M/∞

Kolmogorov criterion for continuous-time MCs
– A continuous-time Markov chain is reversible if and only if

    γ_{i_1 i_2} γ_{i_2 i_3} · · · γ_{i_n i_1} = γ_{i_1 i_n} γ_{i_n i_{n−1}} · · · γ_{i_3 i_2} γ_{i_2 i_1}

– The proof is the same as in the discrete-time reversible MC 3-108


M/M/2 queue with heterogeneous servers I

Servers A and B have service rates µ_A and µ_B. When the system is empty, arrivals go to A with probability p and to B with probability 1 − p. Otherwise, the head of the queue takes the first free server

[Figure: state transition diagram with states 0, 1A, 1B, 2, 3, . . .]

Under what condition is this system time-reversible?
• For n = 2, 3, . . .,

    π_n = π_2 (λ/(µ_A + µ_B))^{n−2}

• Global balance equations along the cuts:

    λπ_0 = µ_A π_{1,A} + µ_B π_{1,B}
    (µ_A + µ_B)π_2 = λ(π_{1,A} + π_{1,B})
    (µ_A + λ)π_{1,A} = pλπ_0 + µ_B π_2 3-109

M/M/2 queue with heterogeneous servers II

After some manipulations,

    π_{1,A} = π_0 (λ/µ_A) · (λ + p(µ_A + µ_B)) / (2λ + µ_A + µ_B)
    π_{1,B} = π_0 (λ/µ_B) · (λ + (1 − p)(µ_A + µ_B)) / (2λ + µ_A + µ_B)
    π_2 = π_0 (λ²/(µ_A µ_B)) · (λ + (1 − p)µ_A + pµ_B) / (2λ + µ_A + µ_B)

π_0 can be determined from π_0 + π_{1,A} + π_{1,B} + Σ_{n=2}^∞ π_n = 1
• If the system is reversible (p = 1/2), use the detailed balance equations:

    (1/2)λπ_0 = µ_A π_{1,A} → π_{1,A} = 0.5(λ/µ_A)π_0
    (1/2)λπ_0 = µ_B π_{1,B} → π_{1,B} = 0.5(λ/µ_B)π_0
    π_2 = 0.5 (λ²/(µ_A µ_B)) π_0

3-110

Multidimensional Markov chains I

Suppose that X_1(t) and X_2(t) are independent reversible MCs
• Then X(t) = (X_1(t), X_2(t)) is a reversible MC
• Two independent M/M/1 queues, where the arrival and service rates at queue i are λ_i and µ_i
  – (N_1(t), N_2(t)) forms an MC

[Figure: two-dimensional state transition diagram over states (n_1, n_2), with rates λ_1, µ_1 in the n_1 direction and λ_2, µ_2 in the n_2 direction]

Stationary distribution:

    π(n_1, n_2) = (1 − λ_1/µ_1)(λ_1/µ_1)^{n_1} (1 − λ_2/µ_2)(λ_2/µ_2)^{n_2}

Detailed balance equations:

    µ_1 π(n_1 + 1, n_2) = λ_1 π(n_1, n_2)
    µ_2 π(n_1, n_2 + 1) = λ_2 π(n_1, n_2)

Verify that the Markov chain is reversible via the Kolmogorov criterion
– Is this a reversible MC? 3-111

Multidimensional Markov chains II

– Owing to time-reversibility, the detailed balance equations hold:

    µ_1 π(n_1 + 1, n_2) = λ_1 π(n_1, n_2)
    µ_2 π(n_1, n_2 + 1) = λ_2 π(n_1, n_2)

– Stationary state distribution:

    π(n_1, n_2) = (1 − λ_1/µ_1)(λ_1/µ_1)^{n_1} (1 − λ_2/µ_2)(λ_2/µ_2)^{n_2}

• This can be generalized to any number of independent queues, e.g., M/M/1, M/M/c or M/M/∞:

    π(n_1, n_2, . . . , n_K) = π_1(n_1) π_2(n_2) · · · π_K(n_K)

– a ‘product-form’ distribution

3-112


Truncation of a Reversible Markov chain I

X(t) is a reversible Markov process with state space S and stationary distribution π_j for j ∈ S
– Truncated to a set E ⊂ S such that the resulting chain Y(t) is irreducible. Then Y(t) is reversible and has the stationary distribution

    π̂_j = π_j / Σ_{k∈E} π_k for j ∈ E

– This is the conditional probability that, in steady state, the original process is in state j, given that it is somewhere in E

Proof:

    π̂_j q_ji = π̂_i q_ij ⇒ ( π_j / Σ_{k∈E} π_k ) q_ji = ( π_i / Σ_{k∈E} π_k ) q_ij ⇒ π_j q_ji = π_i q_ij

    Σ_{k∈E} π̂_k = Σ_{j∈E} π_j / Σ_{k∈E} π_k = 1

3-113

Truncation of a Reversible Markov chain II

The Markov processes for M/M/1 and M/M/C are reversible
• State probabilities of the M/M/1/K queue:

    π_i = (1 − ρ)ρ^i / Σ_{i=0}^K (1 − ρ)ρ^i = (1 − ρ)ρ^i / (1 − ρ^{K+1}) for ρ = λ/µ

– the truncated version of the M/M/1/∞ queue
• State probabilities of the M/M/c/c queue:
– the M/M/c/∞ queue with ρ = λ/(cµ) and a = λ/µ has

    π_n = ρ^{max(0, n−c)} ( a^{min(n,c)}/min(n, c)! ) π_0

– the truncated version of the M/M/c/∞ queue is

    π̂_n = π_n / Σ_{n=0}^c π_n = (a^n/n!) / Σ_{i=0}^c a^i/i!

3-114

Truncation of a Reversible Markov chain III

Two independent M/M/1 queues of the previous example share a common buffer of size B (= 2)
• An arriving customer who finds B customers waiting is blocked

[State-transition diagram for B = 2: the two-dimensional chain of the previous example restricted to E]

• State space: E = {(n1, n2) : (n1 − 1)^+ + (n2 − 1)^+ ≤ B}
• Stationary state distribution of the truncated MC:

  π(n1, n2) = π(0, 0) ρ1^{n1} ρ2^{n2}  for (n1, n2) ∈ E

• π(0, 0) is obtained from the normalization π(0, 0) = 1 / Σ_{(n1,n2)∈E} ρ1^{n1} ρ2^{n2}
– The theorem specifies the joint distribution only up to the normalization constant, whose calculation is often tedious

3-115
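Because each coordinate of a state in E is at most B + 1, the normalization constant can be obtained by direct enumeration; a small illustrative sketch (`joint_buffer_pi` is not from the lecture):

```python
def joint_buffer_pi(rho1, rho2, B):
    # Truncated state space E = {(n1, n2): (n1-1)^+ + (n2-1)^+ <= B};
    # each coordinate is at most B + 1, so the enumeration is finite
    E = [(n1, n2) for n1 in range(B + 2) for n2 in range(B + 2)
         if max(n1 - 1, 0) + max(n2 - 1, 0) <= B]
    w = {s: rho1**s[0] * rho2**s[1] for s in E}
    G = sum(w.values())                      # G = 1 / pi(0, 0)
    return {s: x / G for s, x in w.items()}
```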

Truncation of a Reversible Markov chain IV

Two session classes in a circuit-switching system with preferential treatment for one class, for a total of C channels
• Type 1: Poisson arrivals with rate λ1 require exponentially distributed service with rate µ1 – admissible only up to K sessions
• Type 2: Poisson arrivals with rate λ2 require exponentially distributed service with rate µ2 – can be accepted until all C channels are used up

  S = {(n1, n2) | 0 ≤ n1 ≤ K, n1 + n2 ≤ C}

[Fig. 2 of IEEE Trans. Vehicular Technology, vol. 51, no. 2, March 2002: transition diagram for the new call bounding scheme]


3-116


Truncation of a Reversible Markov chain V

• The state probabilities can be obtained as

  P(n1, n2) = (ρ1^{n1}/n1!)(ρ2^{n2}/n2!) P(0, 0)  for 0 ≤ n1 ≤ K, n1 + n2 ≤ C, n2 ≥ 0

– P(0, 0) is determined by Σ_{n1,n2} P(n1, n2) = 1
• Blocking probability of type 1 (blocked when n1 = K or n1 + n2 = C):

  Pb1 = [ Σ_{n2=0}^{C−K} (ρ1^K/K!)(ρ2^{n2}/n2!) + Σ_{n1=0}^{K−1} (ρ1^{n1}/n1!)(ρ2^{C−n1}/(C−n1)!) ] / [ Σ_{n1=0}^{K} (ρ1^{n1}/n1!) Σ_{n2=0}^{C−n1} (ρ2^{n2}/n2!) ]

• Blocking probability of type 2 (blocked when n1 + n2 = C):

  Pb2 = [ Σ_{n1=0}^{K} (ρ1^{n1}/n1!)(ρ2^{C−n1}/(C−n1)!) ] / [ Σ_{n1=0}^{K} (ρ1^{n1}/n1!) Σ_{n2=0}^{C−n1} (ρ2^{n2}/n2!) ]

For this kind of system, the blocking probabilities remain valid for a broad class of holding-time distributions

3-117
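The two blocking probabilities are just sums of the product-form weights over the blocking states, which is easy to verify numerically; a sketch (function name illustrative):

```python
from math import factorial

def blocking_probs(rho1, rho2, K, C):
    # Enumerate S = {(n1, n2): 0 <= n1 <= K, n1 + n2 <= C}
    S = [(n1, n2) for n1 in range(K + 1) for n2 in range(C - n1 + 1)]
    w = {(n1, n2): rho1**n1 / factorial(n1) * rho2**n2 / factorial(n2)
         for n1, n2 in S}
    G = sum(w.values())
    # type 1 is blocked when its own limit K or the channel pool C is exhausted
    pb1 = sum(p for (n1, n2), p in w.items() if n1 == K or n1 + n2 == C) / G
    # type 2 is blocked only when all C channels are busy
    pb2 = sum(p for (n1, n2), p in w.items() if n1 + n2 == C) / G
    return pb1, pb2
```

Since the type-2 blocking states are a subset of the type-1 blocking states, Pb2 ≤ Pb1 always holds.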

Networks of queues

Two queues in tandem (BG, p.210)

• Assume that the service time is proportional to the packet length

[Figure: two queues in tandem; queue 2 is empty while a long packet is still in service at queue 1, so arrivals at queue 2 get bursty]

– Interarrival times at the second queue are strongly correlated with the packet lengths at the first queue, i.e., with its service times!
• The first queue is an M/M/1, but the second queue cannot be considered as an M/M/1

3-118

Kleinrock’s Independence Approximation I

In real networks, many queues interact with each other
– a traffic stream departing from one or more queues enters one or more other queues, possibly after merging with other streams departing from yet other queues
• Packet interarrival times are correlated with packet lengths
• Service times at the various queues are not independent, e.g., under state-dependent flow control
Kleinrock's independence approximation:
• An M/M/1 queueing model works for each link: merging several packet streams on a transmission line makes interarrival times and packet lengths approximately independent
• Good approximation when:
  * Poisson arrivals at the entry points of the network
  * Packet transmission times 'nearly' exponential
  * Several packet streams merged on each link
  * Densely connected network and moderate-to-heavy traffic load

3-119

Kleinrock’s Independence Approximation II

Suppose there are several packet streams, each following a unique path through the network: appropriate for virtual-circuit networks, e.g., ATM

• xs: arrival rate of packet stream s
• fij(s): fraction of the packets of stream s crossing link (i, j)
• Total arrival rate at link (i, j):

  λij = Σ_{all packet streams s crossing link (i,j)} fij(s) xs

3-120


Kleinrock’s Independence Approximation III

Based on M/M/1 (with Kleinrock's independence approximation), the average number of packets in queue or in service at link (i, j) is

  N̄ij = λij / (µij − λij)

– 1/µij is the average packet transmission time on link (i, j)
• The average number of packets over all queues and the average delay per packet are

  N = Σ_{(i,j)} N̄ij  and  T = (1/γ) Σ_{(i,j)} N̄ij

– γ = Σs xs: total arrival rate into the system
• As a generalization, the average delay of a traffic stream traversing a path p, with processing and propagation delay dij on link (i, j), is

  Tp = Σ_{all (i,j) on path p} [ λij / (µij(µij − λij)) (queueing delay) + 1/µij + dij ]

3-121

Kleinrock’s Independence Approximation IV

In datagram networks that involve multiple-path routing for some origin–destination pairs, the M/M/1 approximation often fails
• Node A sends traffic to node B along two links, each with service rate µ; packets arrive at A according to a Poisson process with rate λ packets/sec, and transmission times are exponentially distributed and independent of interarrival times
[Fig. 3.29 (BG): a Poisson process with rate λ divided among two links; if the division is done by randomization, each link behaves like an M/M/1 queue; if done by metering, the whole system behaves like an M/M/2 queue]
• The arriving traffic is to be divided equally among the two links. How should this division be implemented?

– Random splitting: the queue at each link behaves like an M/M/1

  TR = 1 / (µ − λ/2)

– Metering: arriving packets are assigned to the queue with the smallest backlog → approximated as an M/M/2 with a common queue (ρ = λ/(2µ))

  TM = 2 / ((2µ − λ)(1 + ρ)) < TR

  * Metering destroys the M/M/1 approximation for the individual links
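The two delay formulas above can be compared directly; a small sketch:

```python
def t_random(lam, mu):
    # random splitting: each link is an independent M/M/1 with arrival rate lam/2
    return 1.0 / (mu - lam / 2)

def t_metering(lam, mu):
    # metering (join the shortest backlog), approximated as M/M/2 with one queue
    rho = lam / (2 * mu)
    return 2.0 / ((2 * mu - lam) * (1 + rho))
```

With λ = 1.5 and µ = 1, random splitting gives TR = 4 while metering gives TM ≈ 2.29, so metering is strictly better for any stable load.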

3-122

Burke’s theorem I

For M/M/1, M/M/c, M/M/∞ with arrival rate λ (without bulkarrivals and service):

B1. Departure process is Poisson with rate λ.

[Figure: sample path of the forward process and its time-reversed version; arrivals of the forward process are the departures of the reverse process]

• The arrival process of the forward process corresponds to the departure process of the reverse process
• Since the arrivals in forward time form a Poisson process, the departures in backward time form a Poisson process
• Since the backward process is statistically identical to the forward process, the (forward) departure process is Poisson

3-123

Burke’s theorem II

B2. The state (number of packets in the system) left behind by a departure in the forward process is independent of the past departures.

[Figure: a departure prior to time t in the forward process is an arrival after t in the reverse process; future arrivals do not depend on the current number in the system]

– In the reverse process, the state is independent of future arrivals.

3-124


Two M/M/1 Queues in Tandem

The service times of a customer at the first and the second queues are mutually independent, as well as independent of the arrival process.

[Figure: Queue 1 → Queue 2]

• By Burke's theorem B1, queue 2 in isolation is an M/M/1:
  Pr[m at queue 2] = ρ2^m (1 − ρ2)
• By B2, the number of customers presently in queue 1 is independent of the sequence of departure times prior to t (the earlier arrivals at queue 2)
– hence independent of the number of customers presently in queue 2:

  Pr[n at queue 1 and m at queue 2] = Pr[n at queue 1] Pr[m at queue 2] = ρ1^n (1 − ρ1) ρ2^m (1 − ρ2)

3-125

Open queueing networks

Consider a network of K first-come first-served single-server queues, each with unlimited queue size and exponentially distributed service times with rate µk.

[Figure: external arrivals αi entering a network of queues connected by routing paths]

• Traffic equations, with routing probabilities pij (matrix P = [pij]):

  λi = αi + Σ_{j=1}^{K} λj pji,  with  Σ_{i=0}^{K} pji = 1

– pj0: probability that a customer leaving queue j exits the network
– the λi are uniquely determined by solving

  λ = α + λP ⇒ λ = α(I − P)^{−1}

3-126
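The traffic equations are a linear system and can be solved in one line; a minimal sketch with a hypothetical two-queue network (the routing numbers are illustrative, not from the lecture):

```python
import numpy as np

def traffic_rates(alpha, P):
    # Solve lambda = alpha + lambda P, i.e., lambda = alpha (I - P)^{-1}
    alpha = np.asarray(alpha, dtype=float)
    P = np.asarray(P, dtype=float)
    return alpha @ np.linalg.inv(np.eye(len(alpha)) - P)

alpha = [1.0, 0.0]
P = [[0.0, 0.5],    # from queue 1: half to queue 2, half leave (p10 = 0.5)
     [1.0, 0.0]]    # from queue 2: everything back to queue 1
lam = traffic_rates(alpha, P)   # lambda1 = 2, lambda2 = 1
```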

Open queueing networks II

Let n = (n1, . . . , nK) denote a state (row) vector of the network. The limiting queue-length distribution is

  π(n) = lim_{t→∞} Pr[X1(t) = n1, . . . , XK(t) = nK]

Global balance equation (GBE): total rate out of n = total rate into n

  (α + Σ_{i=1}^{K} µi) π(n) = Σ_{i=1}^{K} αi π(n − ei)  [external arrivals]
                            + Σ_{i=1}^{K} pi0 µi π(n + ei)  [departures to the outside from i]
                            + Σ_{i=1}^{K} Σ_{j=1}^{K} pji µj π(n + ej − ei)  [from j to i]

– α = Σ_{i=1}^{K} αi
– ei = (0, . . . , 1, . . . , 0), i.e., the 1 is in the ith position
– π(n − ei) denotes π(n1, n2, . . . , ni − 1, . . . , nK)

3-127

Jackson’s theorem I

Using time-reversibility, guess detailed balance equations (DBEs):

  λi π(n − ei) = µi π(n),  λi π(n) = µi π(n + ei),  and  λj π(n − ei) = µj π(n + ej − ei)

Substituting the DBEs into the GBE gives

  RHS = π(n) [ Σ_{i=1}^{K} αi µi/λi + Σ_{i=1}^{K} pi0 λi + Σ_{i=1}^{K} (Σ_{j=1}^{K} pji λj) µi/λi ]
      = π(n) [ Σ_{i=1}^{K} pi0 λi (= α) + Σ_{i=1}^{K} (αi + Σ_{j=1}^{K} pji λj) µi/λi ]

– in the numerator: λi = αi + Σ_{j=1}^{K} pji λj, so the RHS reduces to (α + Σ_{i=1}^{K} µi) π(n) = LHS

3-128


Jackson’s theorem II

From the DBEs, we have

  π(n1, . . . , ni, . . . , nK) = (λi/µi) π(n1, . . . , ni − 1, . . . , nK)

and

  π(n1, . . . , ni − 1, . . . , nK) = (λi/µi) π(n1, . . . , ni − 2, . . . , nK),

which is finally rearranged as

  π(n1, . . . , ni, . . . , nK) = (λi/µi)^{ni} π(n1, . . . , 0, . . . , nK)

Repeating for i = 1, 2, . . . , K,

  π(n) = π(0) Π_{i=1}^{K} (λi/µi)^{ni}

– π(0) = [ Π_{i=1}^{K} Σ_{ni=0}^{∞} ρi^{ni} ]^{−1} = Π_{i=1}^{K} (1 − ρi), with ρi = λi/µi

3-129

Jackson’s theorem: proof of DBEs I

Proving the DBEs based on time-reversibility
• Construct the routing matrix P∗ = [p∗ij] of the reversed process
• The rate from node i to j must be the same in the forward and reverse directions:

  (forward process) λi pij = λj p∗ji (reverse process)

– λj p∗ji: the output rate of server j is λj, and p∗ji is the probability of moving from j to i; also α∗i = λi pi0 and p∗i0 = αi/λi

We need to show (recall θi γij = θj γ∗ji)

  π(n) vn,m = π(m) v∗m,n  and  Σm vn,m = Σm v∗n,m

– vn,m and v∗n,m denote the state transition rates of the forward and reversed processes

3-130

Jackson’s theorem: proof of DBEs II

We need to consider the following three cases
• An arrival to server i from outside the network in the forward process corresponds to a departure out of the network from server i in the reversed process:

  π(n) vn,n+ei = π(n + ei) v∗n+ei,n

• A departure to the outside in the forward process corresponds to an arrival from the outside in the reversed process:

  π(n) vn,n−ei = π(n − ei) v∗n−ei,n

• Leaving queue i and joining queue j in the forward process (vn,n−ei+ej = µi pij) corresponds to leaving queue j and joining queue i in the reversed process (v∗n−ei+ej,n = µj p∗ji = λi pij µj/λj):

  π(n) vn,n−ei+ej = π(n − ei + ej) v∗n−ei+ej,n

3-131

Jackson’s theorem: proof of DBEs III

1) π(n) vn,n+ei = π(n + ei) v∗n+ei,n:
An arrival to server i from outside the network in the forward process corresponds to a departure out of the network from server i in the reversed process, i.e.,

  v∗n+ei,n = µi (1 − Σ_{j=1}^{K} p∗ij)  [the parenthesis is p∗i0]
           = µi (1 − Σ_{j=1}^{K} λj pji/λi)  [using p∗ij = λj pji/λi]
           = (µi/λi)(λi − Σ_{j=1}^{K} λj pji)
           = µi αi/λi = αi/ρi  (= v∗n,n−ei)  [using λi = αi + Σ_{j=1}^{K} λj pji]

Substituting this into 1) (with vn,n+ei = αi: arrival to server i from outside),

  Π_{i=1}^{K} πi(ni) · αi = πi(ni + 1) Π_{j=1, j≠i}^{K} πj(nj) · αi/ρi

3-132

3-132


Jackson’s theorem: proof of DBEs IV

Rearranging the previous equation yields

  πi(ni) αi Π_{j=1, j≠i}^{K} πj(nj) = πi(ni + 1)(αi/ρi) Π_{j=1, j≠i}^{K} πj(nj)

After canceling, we have

  πi(ni + 1) = ρi πi(ni) ⇒ πi(n) = ρi^n (1 − ρi)

2) π(n) vn,n−ei = π(n − ei) v∗n−ei,n: a departure to the outside in the forward process corresponds to an arrival from the outside in the reversed process,

  v∗n−ei,n = α∗i = λi − Σ_{j=1}^{K} λj p∗ji  [traffic equation for the reversed process]
           = λi − Σ_{j=1}^{K} λj (λi pij/λj)
           = λi (1 − Σ_{j=1}^{K} pij) = λi pi0  (= v∗n,n+ei)

3-133

Jackson’s theorem: proof of DBEs V

Substituting this together with vn,n−ei = µi pi0 (departure to the outside),

  (1 − ρi) ρi^{ni} Π_{k=1, k≠i}^{K} πk(nk) · µi pi0 = (1 − ρi) ρi^{ni−1} Π_{k=1, k≠i}^{K} πk(nk) · λi pi0

3) π(n) vn,n−ei+ej = π(n − ei + ej) v∗n−ei+ej,n: leaving queue i and joining queue j in the forward process (vn,n−ei+ej = µi pij) corresponds to leaving queue j and joining queue i in the reversed process, i.e., v∗n−ei+ej,n = µj p∗ji = λi pij µj/λj:

  (1 − ρi) ρi^{ni} (1 − ρj) ρj^{nj} Π_{k=1, k≠i,j}^{K} πk(nk) · µi pij
    = (1 − ρi) ρi^{ni−1} (1 − ρj) ρj^{nj+1} Π_{k=1, k≠i,j}^{K} πk(nk) · µj p∗ji  [use p∗ji = λi pij/λj]

3-134

Jackson’s theorem: proof of DBEs VI

Summary of transition rates of the forward and reverse processes:

  Transition       | Forward vn,m         | Reverse v∗n,m      | Comment
  n → n + ei       | αi                   | λi(1 − Σ_{j} pij)  | all i
  n → n − ei       | µi(1 − Σ_{j} pij)    | αi µi/λi           | all i: ni > 0
  n → n − ei + ej  | µi pij               | λj pji µi/λi       | all i: ni > 0, all j

4) Finally, we verify the total-rate equation, Σm vn,m = Σm v∗n,m:

  Σm v∗n,m = Σ_{i} λi (1 − Σ_{j} pij) + Σ_{i: ni>0} (αi µi/λi + Σ_{j} λj pji µi/λi)
           = Σ_{i} λi − Σ_{j} (λj − αj)  [using λj = αj + Σ_{i=1}^{K} λi pij]
             + Σ_{i: ni>0} (αi µi/λi + (µi/λi)(λi − αi))
           = Σ_{i} αi + Σ_{i: ni>0} µi = Σm vn,m. □

3-135

Open queueing networks: Extension I

The product-form solution of Jackson's theorem remains valid for the following network of queues
• State-dependent service rates
– 1/µi(ni): the mean of queue i's exponentially distributed service time, when ni is the number of customers in the ith queue just before the customer's departure

  ρi(ni) = λi / µi(ni),  i = 1, . . . , K,  ni = 1, 2, . . .

– λi: total arrival rate at queue i, determined by the traffic equations
– Define P̂j(nj) as

  P̂j(nj) = { 1, if nj = 0;  ρj(1) ρj(2) · · · ρj(nj), if nj > 0 }

3-136


Open queueing networks: Extension II

– For every state n = (n1, . . . , nK),

  P(n) = P̂1(n1) P̂2(n2) · · · P̂K(nK) / G,

  where G = Σ_{n1=0}^{∞} · · · Σ_{nK=0}^{∞} P̂1(n1) · · · P̂K(nK)

• Multiple classes of customers
– Provided that the service-time distribution at each queue is the same for all customer classes, the product-form solution remains valid for the system with different classes of customers, i.e.,

  λj(c) = αj(c) + Σ_{i=1}^{K} λi(c) pij(c)

– αj(c): external arrival rate of class c at queue j; pij(c): routing probabilities of class c – see pp. 230–231 in the textbook for more details

3-137

Open queueing networks: Performance measure

Performance measures
• The state probability distribution has been derived above
• Mean number of hops traversed:

  h̄ = λ/α = Σ_{i=1}^{K} λi / Σ_{i=1}^{K} αi

• Throughput of queue i: λi
• Total throughput of the queueing network: α
• Mean number of customers at queue i (ρi = λi/µi):

  N̄i = ρi / (1 − ρi)

• System response time T:

  T = N/α = (1/α) Σ_{i=1}^{K} N̄i = (1/α) Σ_{i=1}^{K} λi Ti = (1/α) Σ_{i=1}^{K} λi / (µi − λi)

3-138

Open queueing networks: example A-I

New programs arrive at a CPU according to a Poisson process of rate α. A program spends an exponentially distributed execution time of mean 1/µ1 in the CPU. At the end of this service time, the program execution is complete with probability p, or it requires retrieving additional information from secondary storage with probability 1 − p. Suppose that the retrieval of information from secondary storage requires an exponentially distributed amount of time with mean 1/µ2. Find the mean time that each program spends in the system.

3-139

Open queueing networks: example A-II

Find the mean arrival rates first:

• Arrival rate into each queue: λ1 = α + λ2 and λ2 = (1 − p)λ1, so

  λ1 = α/p and λ2 = (1 − p)α/p

• Each queue behaves like an M/M/1 system, so

  E[N1] = ρ1/(1 − ρ1) and E[N2] = ρ2/(1 − ρ2),

  where ρ1 = λ1/µ1 and ρ2 = λ2/µ2

Using Little's result, the total time spent in the system is

  E[T] = E[N1 + N2]/α = (1/α)[ρ1/(1 − ρ1) + ρ2/(1 − ρ2)]

3-140
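The steps of this example translate directly into a few lines of code; a minimal sketch (function name illustrative):

```python
def mean_time_in_system(alpha, mu1, mu2, p):
    # traffic equations: lambda1 = alpha/p (CPU), lambda2 = (1-p) alpha/p (storage)
    lam1, lam2 = alpha / p, (1 - p) * alpha / p
    rho1, rho2 = lam1 / mu1, lam2 / mu2
    assert rho1 < 1 and rho2 < 1, "network must be stable"
    # Little's result applied to the whole system
    return (rho1 / (1 - rho1) + rho2 / (1 - rho2)) / alpha
```

For instance, with α = 1, µ1 = µ2 = 4 and p = 0.5, we get λ1 = 2, λ2 = 1 and E[T] = 4/3.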


Open queueing networks: example B-I

Consider the following network with three nodes

[Figure: nodes A, B, C with external arrivals γA, γB, γC and links L1, L2, L3, L4]

• External packet arrivals: Poisson processes with γA = 350 packets/sec, γB = 150, γC = 150
• Packet length: exponentially distributed with mean 50 kbits/packet

Assumptions:
(a) Packets moving along a path from source to destination have their lengths selected independently at each outgoing link → Kleinrock's independence assumption
(b) Channel capacity of link i: Ci = 17.5 Mbps for i = 1, 2, 3, 4 → service times at link i exponentially distributed with rate µi = Ci/50000 = 350 packets/sec

3-141

Open queueing networks: example B-II

• Traffic matrix (packets per second):

  from \ to |  A  |  B  |  C
  A         |  –  | 150 | 200 (50% through B, 50% directly to C)
  B         |  50 |  –  | 100
  C         | 100 |  50 |  –

• Find the mean delay from A to C
• First, we need to know the traffic on each link:

  traffic type |   L1    |   L2    |   L3    |   L4
  A → B        |   150   |         |         |
  A → C        |   100   |   100   |   100   |
  B → A        |         |         |    50   |    50
  B → C        |         |         |   100   |
  C → A        |         |         |         |   100
  C → B        |    50   |         |         |    50
  total        | λ1 = 300| λ2 = 100| λ3 = 250| λ4 = 200

3-142

Open queueing networks: example B-III

• Since α = 650 and λ = 850, the mean number of hops is

  h̄ = 850/650 = 1.3077

• Link utilizations, mean numbers of packets and response times:

            |       L1        |       L2        |       L3        |       L4
  ρi        | 300/350 = 0.857 | 100/350 = 0.286 | 250/350 = 0.714 | 200/350 = 0.571
  N̄i        | 300/50 = 6      | 100/250 = 0.4   | 250/100 = 2.5   | 200/150 = 1.33
  Ti (sec)  | 1/50 = 0.02     | 1/250 = 0.004   | 1/100 = 0.01    | 1/150 = 0.0067

– N̄i = ρi/(1 − ρi) = λi/(µi − λi) and Ti = N̄i/λi = 1/(µi − λi)

• Mean delay from A to C:

  TAC = (T1 + T2) × 0.5 + T3 × 0.5 = 0.017 sec
  [T1: A to B; T2: B to C; T3: direct A to C]

– propagation delay is ignored

3-143
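With µi = 350 packets/sec the per-link delays come out in seconds, and the A-to-C path delay works out to 0.017 s (the same computation at a 100-times slower rate scale, γA = 3.5 packets/sec, gives 1.7 s). A quick check of the example's numbers:

```python
mu = 350.0                                         # packets/sec on every link
lam = {'L1': 300.0, 'L2': 100.0, 'L3': 250.0, 'L4': 200.0}

T = {l: 1.0 / (mu - x) for l, x in lam.items()}    # per-link M/M/1 delay, Ti = 1/(mu - lam)
# A -> C: 50% via B (links L1 then L2), 50% direct (link L3)
T_AC = 0.5 * (T['L1'] + T['L2']) + 0.5 * T['L3']

h = sum(lam.values()) / 650.0                      # mean number of hops, alpha = 650
```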

Closed queueing networks I

Consider a network of K first-come first-served single-server queues, each with unlimited queue size and exponentially distributed service times with rate µk. A fixed number of customers, say M, circulate endlessly in this closed network of queues.

• Traffic equations: no external arrivals!

  λi = Σ_{j=1}^{K} λj pji,  with  Σ_{i=1}^{K} pji = 1

3-144


Closed queueing networks II

• Using ~π = ~π · P and ~π · ~1 = 1, we have

  λi = λ(M) πi

– λ(M): a constant of proportionality, the sum of the arrival rates over all the queues in the network
– note that Σ_{i=1}^{K} λi need not equal 1

With ρi = λi/µi for i = 1, . . . , K, we have for all ni ≥ 0

  π(n) = (1/G(M)) Π_{i=1}^{K} ρi^{ni},  where  G(M) = Σ_{n1+···+nK=M} Π_{i=1}^{K} ρi^{ni}

• ρi is no longer the actual utilization, because of the arbitrary factor λ(M)
• Setting λ(M) to any particular value does not change the results
• Since there are M customers, the maximum queue size of each queue is M

3-145

Closed queueing networks III

Proof: as in Jackson's theorem for open queueing networks
• Use time-reversibility: construct the routing matrix of the reversed process
• For a state transition between n and n′ = n − ei + ej,

  π(n′) v∗n′,n = π(n) vn,n′ (∗)

• As in open queueing networks, we have

  v∗n−ei+ej,n = µj p∗ji = µj (λi pij / λj),
  vn,n−ei+ej = µi pij for ni > 0

• Substituting these into (∗), we get

  ρi π(n1, . . . , ni − 1, . . . , nj + 1, . . . , nK) = ρj π(n1, . . . , nK)

• The proof of Σm vn,m = Σm v∗n,m is given on page 235

3-146

Closed queueing networks IV

Computing G(M, K), with M customers and K queues, iteratively:

  G(m, k) = G(m, k − 1) + ρk G(m − 1, k)

with boundary conditions G(m, 1) = ρ1^m for m = 0, 1, . . . , M, and G(0, k) = 1 for k = 1, 2, . . . , K
• For m > 0 and k > 1, split the sum into two disjoint sums:

  G(m, k) = Σ_{n1+···+nk=m} ρ1^{n1} ρ2^{n2} · · · ρk^{nk}
          = Σ_{n1+···+nk=m, nk=0} ρ1^{n1} · · · ρ_{k−1}^{n_{k−1}}  [= G(m, k − 1)]
            + Σ_{n1+···+nk=m, nk>0} ρ1^{n1} ρ2^{n2} · · · ρk^{nk}

Closed queueing networks V

• Since nk > 0, substitute nk = n′k + 1 with n′k ≥ 0:

  Σ_{n1+···+nk=m, nk>0} ρ1^{n1} · · · ρk^{nk} = Σ_{n1+···+n′k+1=m, n′k≥0} ρ1^{n1} · · · ρk^{n′k+1}
    = ρk Σ_{n1+···+n′k=m−1, n′k≥0} ρ1^{n1} · · · ρk^{n′k}
    = ρk G(m − 1, k)

In a closed Jackson network with M customers, the steady-state probability that the number of customers in station j is greater than or equal to m is

  Pr[xj ≥ m] = ρj^m G(M − m)/G(M)  for 0 ≤ m ≤ M

3-148
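The recursion G(m, k) = G(m, k − 1) + ρk G(m − 1, k) is exactly Buzen's convolution algorithm; a minimal sketch returning the whole column G(0..M):

```python
def buzen_G(rho, M):
    # Normalization constants G(0..M) for a closed Jackson network.
    # rho[k] = lambda_k / mu_k for the K single-server queues.
    g = [rho[0]**m for m in range(M + 1)]      # boundary: G(m, 1) = rho_1^m
    for r in rho[1:]:                          # fold in queues 2..K
        for m in range(1, M + 1):
            g[m] += r * g[m - 1]               # G(m,k) = G(m,k-1) + rho_k G(m-1,k)
    return g                                   # g[m] = G(m, K)
```

For ρ = (0.5, 0.25) and M = 2, the direct sum over n1 + n2 = 2 gives 0.25 + 0.125 + 0.0625 = 0.4375, matching the recursion.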


Closed queueing networks VI

• Proof: substitute nj = n′j + m with n′j ≥ 0:

  Pr[xj ≥ m] = Σ_{n1+···+nj+···+nK=M, nj≥m} ρ1^{n1} · · · ρj^{nj} · · · ρK^{nK} / G(M)
             = Σ_{n1+···+n′j+m+···+nK=M, n′j≥0} ρ1^{n1} · · · ρj^{n′j+m} · · · ρK^{nK} / G(M)
             = (ρj^m / G(M)) Σ_{n1+···+n′j+···+nK=M−m, n′j≥0} ρ1^{n1} · · · ρj^{n′j} · · · ρK^{nK}
             = (ρj^m / G(M)) G(M − m)

• Pr[xj = m] = Pr[xj ≥ m] − Pr[xj ≥ m + 1] = ρj^m (G(M − m) − ρj G(M − m − 1))/G(M)

3-149

Closed queueing networks VII

In a closed Jackson network with M customers, the average number of customers at queue j is

  N̄j(M) = Σ_{m=1}^{M} Pr[xj ≥ m] = Σ_{m=1}^{M} ρj^m G(M − m)/G(M)

In a closed Jackson network with M customers, the average throughput of queue j is

  γj(M) = µj Pr[xj ≥ 1] = µj ρj G(M − 1)/G(M) = λj G(M − 1)/G(M)

– The average throughput is the average rate at which customers are serviced at the queue. For a single-server queue, the service rate is µj when there are one or more customers in the queue, and 0 when the queue is empty

3-150
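Both formulas follow directly from the G column; a sketch combining them, illustrated on a hypothetical two-queue cyclic network (visit ratios chosen so λ1 = λ2):

```python
def closed_network_metrics(rho, mu, M):
    # g[m] = G(m), by the convolution recursion of the previous slides
    g = [rho[0]**m for m in range(M + 1)]
    for r in rho[1:]:
        for m in range(1, M + 1):
            g[m] += r * g[m - 1]
    # N_j(M) = sum_m rho_j^m G(M-m)/G(M); gamma_j(M) = mu_j rho_j G(M-1)/G(M)
    N = [sum(r**m * g[M - m] / g[M] for m in range(1, M + 1)) for r in rho]
    gamma = [u * r * g[M - 1] / g[M] for r, u in zip(rho, mu)]
    return N, gamma

# two queues in a cycle: equal lambdas, mu = (1, 2), so rho is proportional to (0.5, 0.25)
N, gamma = closed_network_metrics([0.5, 0.25], [1.0, 2.0], 2)
```

A useful sanity check: the N̄j must sum to M, and in a cyclic network both queues have the same throughput.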

Closed queueing networks: example I

Suppose that the computer system of the open queueing network example is now operated so that there are always I programs in the system. The feedback loop around the CPU signifies the completion of one job and its instantaneous replacement by another one. Find the steady-state pmf of the system, and the rate at which programs are completed.

• Using λi = λ(I) πi with ~π = ~π P:

  π1 = p π1 + π2,  π2 = (1 − p) π1,  and  π1 + π2 = 1,

we have

  λ1 = λ(I) π1 = λ(I)/(2 − p)  and  λ2 = λ(I) π2 = λ(I)(1 − p)/(2 − p)

3-151

Closed queueing networks: example II

• For 0 ≤ i ≤ I, with ρ1 = λ1/µ1 and ρ2 = λ2/µ2,

  Pr[N1 = i, N2 = I − i] = (1 − ρ1) ρ1^i (1 − ρ2) ρ2^{I−i} / S(I)

• The normalization constant S(I) is obtained from

  S(I) = (1 − ρ1)(1 − ρ2) Σ_{i=0}^{I} ρ1^i ρ2^{I−i} = (1 − ρ1)(1 − ρ2) ρ2^I (1 − (ρ1/ρ2)^{I+1}) / (1 − ρ1/ρ2)

• We then have, for 0 ≤ i ≤ I,

  Pr[N1 = i, N2 = I − i] = ((1 − β)/(1 − β^{I+1})) β^i,

  where β = ρ1/ρ2 = µ2/((1 − p)µ1)

• The program completion rate is p µ1 Pr[N1 > 0] = p µ1 (1 − Pr[N1 = 0])

3-152


Arrival theorem for closed networks I

Theorem: In a closed Jackson network with M customers, the occupancy distribution seen by a customer upon arrival at queue j is the same as the occupancy distribution in a closed network with the arriving customer removed.

• In a closed network with M customers, the expected number of customers found upon arrival at queue j equals the average number of customers at queue j when the total number of customers in the closed network is M − 1
• An arriving customer sees the system in a state that does not include itself

Proof:

• X(t) = [X1(t), X2(t), . . . , XK(t)]: state of the network at time t
• Tij(t): the event that a customer moves from queue i to j at time t+

3-153

Arrival theorem for closed networks II

• For any state n with ni > 0, the conditional probability that a customer moving from node i to j finds the network at state n is

  αij(n) = Pr[X(t) = n | Tij(t)] = Pr[X(t) = n, Tij(t)] / Pr[Tij(t)]
         = Pr[Tij(t) | X(t) = n] Pr[X(t) = n] / Σ_{m: mi>0} Pr[Tij(t) | X(t) = m] Pr[X(t) = m]
         = π(n) µi pij / Σ_{m: mi>0} π(m) µi pij
         = ρ1^{n1} · · · ρi^{ni} · · · ρK^{nK} / Σ_{m: mi>0} ρ1^{m1} · · · ρi^{mi} · · · ρK^{mK}

– Changing mi = m′i + 1 with m′i ≥ 0,

  αij(n) = ρ1^{n1} · · · ρi^{ni} · · · ρK^{nK} / Σ_{m1+···+m′i+1+···+mK=M, m′i≥0} ρ1^{m1} · · · ρi^{m′i+1} · · · ρK^{mK}
         = ρ1^{n1} · · · ρi^{ni−1} · · · ρK^{nK} / Σ_{m1+···+m′i+···+mK=M−1, m′i≥0} ρ1^{m1} · · · ρi^{m′i} · · · ρK^{mK}
         = ρ1^{n1} · · · ρi^{ni−1} · · · ρK^{nK} / G(M − 1)

3-154

Mean Value Analysis I

Performance measures for closed networks with M customers:
• N̄j(M): average number of customers at queue j
• Tj(M): average time a customer spends (per visit) at queue j
• γj(M): average throughput of queue j

Mean-value analysis calculates N̄j(M) and Tj(M) directly, without first computing G(M) or deriving the stationary distribution of the network:
a) The queue length observed by an arriving customer is the same as the queue length in a closed network with one less customer (arrival theorem)
b) Little's result is applicable throughout the network

1. Based on a),

  Tj(s) = (1/µj)(1 + N̄j(s − 1))  for j = 1, . . . , K,  s = 1, . . . , M

– with Tj(0) = N̄j(0) = 0 for j = 1, . . . , K

3-155

Mean Value Analysis II

2. Based on b), when there are s customers in the network we first have

  N̄j(s) = λj(s) Tj(s) = λ(s) πj Tj(s)  [step 2-b]

and

  s = Σ_{j=1}^{K} N̄j(s) = λ(s) Σ_{j=1}^{K} πj Tj(s) → λ(s) = s / Σ_{j=1}^{K} πj Tj(s)  [step 2-a]

This iteration is carried out for s = 1, 2, . . . , M

3-156
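The two steps above can be sketched as a short loop; a minimal implementation (signature illustrative):

```python
def mva(pi, mu, M):
    # pi: relative visit ratios (solution of pi = pi P); mu: service rates; M: population
    K = len(mu)
    N = [0.0] * K
    for s in range(1, M + 1):
        T = [(1.0 + N[j]) / mu[j] for j in range(K)]        # arrival theorem (step 1)
        lam = s / sum(pi[j] * T[j] for j in range(K))       # Little over the whole network (2-a)
        N = [lam * pi[j] * T[j] for j in range(K)]          # Little per queue (2-b)
    return N, T, lam

# two queues in a cycle (equal visit ratios), mu = (1, 2), M = 2 customers
N, T, lam = mva([0.5, 0.5], [1.0, 2.0], 2)
```

For this small network the result N̄ = (10/7, 4/7) agrees with the normalization-constant approach of the previous slides.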


Where are we?

Elementary queueing models– M/M/1, M/M/C, M/M/C/C/K, ... and bulk queues– either product-form solutions or use PGF

Intermediate queueing models (product-form solution)– Time-reversibility of Markov process– Detailed balance equations of time-reversible MCs– Multidimensional Birth-death processes– Network of queues: open- and closed networks

Advanced queueing models– M/G/1 type queue: Embedded MC and Mean-value analysis– M/G/1 with vacations and Priority queues– G/M/m queue

More advanced queueing models (omitted)– Algorithmic approaches to get steady-state solutions

3-157

Residual life time∗ I

Hitchhiker's paradox:

Cars pass a point on a road according to a Poisson process with rate λ = 1/10 per minute, i.e., one car every 10 min on average.

A hitchhiker arrives at the roadside point at a random instant of time.

[Figure: timeline showing the previous car, the hitchhiker's arrival, and the next car]

What is his mean waiting time for the next car?

1. Since he arrives randomly within an interarrival interval, it should be 5 min.
2. By the memoryless property of the exponential distribution, it should be another 10 min.

∗ L. Kleinrock, Queueing Systems, vol. 1: Theory

3-158

Residual life time II

The distribution of the interval that the hitchhiker captures depends on both its length x and fX(x):

  fX′(x) = C x fX(x),  C: a proportionality constant

Since ∫_0^∞ fX′(x) dx = 1, we have C = 1/E[X] = 1/X̄:

  fX′(x) = x fX(x)/X̄

Since Pr[R′ < y | X′ = x] = y/x for 0 ≤ y ≤ x, the joint pdf of X′ and R′ is

  Pr[y < R′ < y + dy, x < X′ < x + dx] = (dy/x) · (x fX(x) dx/X̄) = fX(x) dy dx/X̄

Unconditioning over X′,

  fR′(y) dy = (dy/X̄) ∫_y^∞ fX(x) dx = ((1 − FX(y))/X̄) dy ⇒ fR′(y) = (1 − FX(y))/X̄

3-159

Residual life time III

Taking the Laplace transform of the pdf of R′ conditioned on X′ = x, for 0 ≤ R′ ≤ x,

  E[e^{−sR′} | X′ = x] = ∫_0^x (e^{−sy}/x) dy = (1 − e^{−sx})/(sx)

Unconditioning over X′, we get R′∗(s) and its moments:

  R′∗(s) = (1 − F∗X(s))/(sX̄) ⇒ E[R′^n] = X^{(n+1)} / ((n + 1)X̄)

where F∗X(s) = ∫_0^∞ e^{−sx} fX(x) dx and X^{(n+1)} = E[X^{n+1}].

Surprisingly, the distribution of the elapsed waiting time, X′ − R′, is identical to that of the remaining waiting time.

3-160
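For Poisson car arrivals the theory predicts a mean residual wait of E[X²]/(2X̄) = 1/λ = 10 min, not 5; a Monte Carlo sketch of the paradox (parameters illustrative):

```python
import random, bisect

random.seed(1)
lam = 0.1                                # cars per minute -> mean interarrival 10 min

# Poisson arrival epochs over a long horizon
t, epochs = 0.0, []
while t < 1_000_000.0:
    t += random.expovariate(lam)
    epochs.append(t)

# the hitchhiker shows up at uniformly random instants; wait for the next car
waits = []
for _ in range(20000):
    u = random.uniform(0.0, epochs[-2])
    i = bisect.bisect_right(epochs, u)   # index of the first epoch after u
    waits.append(epochs[i] - u)

mean_wait = sum(waits) / len(waits)      # close to 10 minutes, not 5
```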


M/G/1 queue: Embedded MC I

Recall that a continuous-time description of the M/G/1 system requires the pair (n, r):

• n: number of customers in the system
• r: attained or remaining service time of the customer in service

Because of r, (n, r) is not a countable state space. How can we get rid of r?

What if we observe the system at the end of each service?

  Xn+1 = max(Xn − 1, 0) + Yn+1

– Xn: number of customers in the system left behind by the nth departure
– Yn+1: number of arrivals that occur during the service time of the departing customer

Question: Is Xn equal in distribution to the queue length seen by an arriving customer (the queue length just before an arrival)? Recall PASTA.

3-161

Distribution Upon Arrival or Departure

α(t), β(t): number of arrivals and departures, respectively, in (0, t)
Un(t): number of times the system goes from n to n + 1 in (0, t), i.e., the number of times an arriving customer finds n customers in the system
Vn(t): number of times the system goes from n + 1 to n, i.e., the number of times a departing customer leaves n behind

– The transition n → n + 1 cannot reoccur until the number in the system drops to n once more (i.e., until the transition n + 1 → n reoccurs), so Un(t) and Vn(t) differ by at most one: |Un(t) − Vn(t)| ≤ 1. Hence

  lim_{t→∞} Un(t)/t = lim_{t→∞} Vn(t)/t ⇒ lim_{t→∞} (Un(t)/α(t)) (α(t)/t) = lim_{t→∞} (Vn(t)/β(t)) (β(t)/t)

– i.e., the fraction of arrivals that find n in the system equals the fraction of departures that leave n behind

3-162

M/G/1 queue: Embedded MC II

Defining the probability generating function of Xn+1,

  Qn+1(z) := E[z^{Xn+1}] = E[z^{max(Xn−1,0)+Yn+1}] = E[z^{max(Xn−1,0)}] E[z^{Yn+1}]

Let Un+1(z) = E[z^{Yn+1}]; as n → ∞, Un+1(z) = U(z) (independent of n). Then we have

  Qn+1(z) = U(z) Σ_{k=0}^{∞} z^k Pr[max(Xn − 1, 0) = k]
          = U(z) [ z^0 Pr[Xn = 0] + Σ_{k=1}^{∞} z^{k−1} Pr[Xn = k] ]
          = U(z) [ Pr[Xn = 0] + z^{−1}(Qn(z) − Pr[Xn = 0]) ]

As n → ∞, we have Qn+1(z) = Qn(z) = Q(z) and Pr[Xn = 0] → q0, so

  Q(z) = (U(z)(z − 1) / (z − U(z))) q0

3-163

M/G/1 queue: Embedded MC III

We need to find U(z) and q0. Using U(z | Xi = x) = e^{λx(z−1)},

U(z) = ∫_0^∞ U(z | Xi = x) b(x) dx = B∗(λ(1 − z)).

Since Q(1) = 1, we have q0 = 1−U ′(1) = 1− λ ·X = 1− ρ.

Transform version of P-K formula is

Q(z) = (1 − ρ) · B∗(λ(1 − z))(z − 1)/(z − B∗(λ(1 − z))).

Letting q = Q′(1) (the mean number in the system), one gets W = q/λ − X by Little's law.

Sojourn time distribution of an M/G/1 system with FIFO service: if a customer spends T seconds in the system, the number of customers it leaves behind is the number of customers that arrive during these T seconds, due to FIFO.

3-164


M/G/1 Queue: Embedded MC IV

Let fT(t) be the probability density function of T, the total delay in the system.

Q(z) = Σ_{k=0}^∞ z^k ∫_0^∞ ((λt)^k / k!) e^{−λt} fT(t) dt = T∗(λ(1 − z))

where T∗(s) is the Laplace transform of fT(t). We have

T∗(λ(1 − z)) = (1 − ρ) · B∗(λ(1 − z))(z − 1)/(z − B∗(λ(1 − z)))

Let s = λ(1 − z); one gets

T∗(s) = (1 − ρ) s B∗(s)/(s − λ + λB∗(s)) = W∗(s) B∗(s)  ⇒  W∗(s) = (1 − ρ) s/(s − λ + λB∗(s))

In an M/M/1 system, we have B∗(s) = µ/(s + µ):

W∗(s) = (1 − ρ) (1 + λ/(s + µ − λ))

3-165
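The equivalence of the two forms of W∗(s) in the M/M/1 case can be checked numerically at a few sample points; parameter values are illustrative:

```python
lam, mu = 0.5, 1.0
rho = lam / mu

def B(s):          # B*(s) for exponential service
    return mu / (s + mu)

def W_general(s):  # (1 - rho) s / (s - lam + lam B*(s))
    return (1 - rho) * s / (s - lam + lam * B(s))

def W_closed(s):   # (1 - rho) (1 + lam / (s + mu - lam))
    return (1 - rho) * (1 + lam / (s + mu - lam))

for s in (0.1, 0.7, 2.0, 10.0):
    assert abs(W_general(s) - W_closed(s)) < 1e-12
print("W*(s) forms agree")
```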

Delay analysis of an ARQ system

Consider a Go-Back-N ARQ system in which a packet is successfully transmitted with probability 1 − p.
• Packet arrivals to the transmitter's queue follow a Poisson process with mean rate λ (packets/slot).


2. A packet transmitted in frame i might be accepted at the receiver, but the corresponding acknowledgment (in the form of the receive number) might not have arrived at the transmitter by the time the transmission of packet i + n − 1 is completed. This can happen due to errors in the return channel, large propagation delays, long return frames relative to the size of the go-back number n, or a combination thereof.

We will assume (somewhat unrealistically) that retransmissions occur only due to reason 1, and that a packet is rejected at the receiver with probability p independently of other packets.

Consider the case where packets arrive at the transmitter according to a Poisson process with rate λ. It follows that the time interval between the start of the first transmission of a given packet after the last transmission of the previous packet and the end of the last transmission of the given packet is 1 + kn time units with probability (1 − p)p^k. (This corresponds to k retransmissions following the last transmission of the previous packet; see Fig. 3.17.) Thus, the transmitter's queue behaves like an M/G/1 queue with service time distribution given by

P{X = 1 + kn} = (1 − p)p^k,  k = 0, 1, . . .

The first two moments of the service time are

X = (1 − p) Σ_{k=0}^∞ (1 + kn) p^k,   X2 = (1 − p) Σ_{k=0}^∞ (1 + kn)^2 p^k.

We now note that

Σ_{k=0}^∞ p^k = 1/(1 − p),   Σ_{k=0}^∞ k p^k = p/(1 − p)^2.

Figure 3.17 Illustration of the effective service times of packets in the ARQ system of Example 3.15. For example, packet 2 has an effective service time of n + 1 because there was an error in the first attempt to transmit it following the last transmission of packet 1, but no error in the second attempt.

• We need the first two moments of the service time to use the P-K formula:

X = Σ_{k=0}^∞ (1 + kn)(1 − p)p^k = 1 + np/(1 − p)

X2 = Σ_{k=0}^∞ (1 + kn)^2 (1 − p)p^k = 1 + 2np/(1 − p) + n^2(p + p^2)/(1 − p)^2

3-166
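The closed-form moments can be checked against direct summation of the series P{X = 1 + kn} = (1 − p)p^k, and then fed into the P-K formula for the transmitter's mean queueing delay; the parameter values below are illustrative:

```python
lam, n, p = 0.05, 8, 0.1   # arrival rate (packets/slot), go-back number, error prob.
# closed-form moments from the slide
Xbar  = 1 + n * p / (1 - p)
X2bar = 1 + 2 * n * p / (1 - p) + n**2 * (p + p * p) / (1 - p) ** 2
# direct summation of P{X = 1 + kn} = (1 - p) p^k
s1 = sum((1 + k * n) * (1 - p) * p**k for k in range(400))
s2 = sum((1 + k * n) ** 2 * (1 - p) * p**k for k in range(400))
assert abs(Xbar - s1) < 1e-9 and abs(X2bar - s2) < 1e-9
rho = lam * Xbar
W = lam * X2bar / (2 * (1 - rho))   # P-K mean wait in the transmitter's queue
print(Xbar, X2bar, W)
```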

M/G/1 queue: Mean value analysis I

For queueing systems with a general and independent service time distribution, G, a continuous-time MC is described by (n, r):
• n: number of customers in the system.
• r: attained or remaining service time of the customer in service.

Wi = Ri + Σ_{j=i−Ni}^{i−1} Xj

where Wi, Ri and Xj are the waiting time in queue of customer i, the residual service time seen by customer i, and the service time of customer j, and Ni is the number of customers found waiting in queue by customer i.

Taking expectations and using the independence among the Xj,

E[Wi] ≜ W = E[Ri] + E[ Σ_{j=i−Ni}^{i−1} E[Xj | Ni] ] = R + (1/µ) Nq

Since Nq = λW (Little's law) and Ri = R for all i, we have W = R/(1 − ρ).

3-167

M/G/1 queue: Mean value analysis II

Time-averaged residual time of r(τ) in the interval [0, t] is

R(t) = (1/t) ∫_0^t r(τ) dτ = (1/t) Σ_{i=1}^{M(t)} ½Xi² = ½ · (M(t)/t) · (Σ_{i=1}^{M(t)} Xi²)/M(t)

– M(t) is the number of service completions within [0, t].

[Figure: sawtooth plot of the residual service time r(τ) versus time. Upon a new service of duration Xi, r(τ) jumps to Xi and decays linearly to zero over Xi time units.]

As t → ∞, R(∞) = R = λX2/2. Alternatively, from the hitchhiker's paradox, E[R′] = E[X²]/(2E[X]):

R = 0 · Pr[N(t) = 0] + E[R′] Pr[N(t) > 0] = (E[X²]/(2E[X])) · λE[X] = λX2/2.

3-168
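The hitchhiker's (inspection) paradox can be illustrated by sampling the residual time of a renewal process at random instants; with X ~ Uniform(0, 2) the predicted mean residual is E[X²]/(2E[X]) = 2/3, not E[X]/2 = 1/2. A rough Monte Carlo sketch with illustrative parameters:

```python
import bisect
import random

rng = random.Random(7)
# Renewal process with X ~ Uniform(0, 2): E[X] = 1, E[X^2] = 4/3
T = 1_000_000.0
t, epochs = 0.0, [0.0]
while t < T:
    t += rng.uniform(0.0, 2.0)
    epochs.append(t)

# Sample the residual time at uniformly random observation instants
total, M = 0.0, 100_000
for _ in range(M):
    u = rng.uniform(0.0, T - 2.0)
    i = bisect.bisect_right(epochs, u)   # next renewal epoch after u
    total += epochs[i] - u
mean_res = total / M
print(mean_res)   # predicted: E[X^2]/(2 E[X]) = 2/3
```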


M/G/1 queue: Mean value analysis III

Pollaczek-Khinchin (P-K) formula for mean waiting time in queue

W = λX2/(2(1 − ρ)), and using X2 = σ_X² + (X)²,

W = λ(σ_X² + (X)²)/(2(1 − ρ)) = ((1 + C_x²)/2) · (ρ/(1 − ρ)) X = ((1 + C_x²)/2) · W_{M/M/1}

– C_x² = σ_X²/E[X]² is the squared coefficient of variation of the service time.

E.g.: since C_x = 1 in an M/M/1 and C_x = 0 in an M/D/1,

W_{M/M/1} = (ρ/(1 − ρ)) X  >  W_{M/D/1} = (ρ/(2(1 − ρ))) X

3-169
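The P-K formula can be checked against a direct simulation of the waiting-time (Lindley) recursion W_{n+1} = max(W_n + X_n − A_{n+1}, 0); for an M/D/1 queue with ρ = 0.5 the formula predicts W = 0.5. A sketch with illustrative parameters:

```python
import random

rng = random.Random(123)
lam, X = 0.5, 1.0                     # M/D/1: deterministic service X, rho = 0.5
rho = lam * X
W_pk = lam * X * X / (2 * (1 - rho))  # P-K prediction: 0.5
w, total, N = 0.0, 0.0, 300_000
for _ in range(N):
    total += w
    # Lindley recursion: next wait = max(current wait + service - interarrival, 0)
    w = max(w + X - rng.expovariate(lam), 0.0)
W_sim = total / N
print(W_sim, W_pk)
```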

M/G/1 Queue with vacations I

Server takes a vacation at the end of each busy period
• Takes an additional vacation if no customers are found at the end of each vacation: V1, V2, . . . are the durations of the successive vacations
• A customer that finds the system idle (on vacation) waits for the end of the vacation period

Figure 3.12 An M/G/1 system with vacations. At the end of a busy period, the server goes on vacation for time V with first and second moments V and V2, respectively. If the system is empty at the end of a vacation, the server takes a new vacation. An arriving customer to an empty system must wait until the end of the current vacation to get service.

Figure 3.13 Residual service times for an M/G/1 system with vacations. Busy periods alternate with vacation periods.

• Residual service time including vacation periods

(1/t) ∫_0^t r(τ) dτ = (1/t) Σ_{i=1}^{M(t)} ½Xi² + (1/t) Σ_{i=1}^{L(t)} ½Vi²

– M(t): # of services completed by time t
– L(t): # of vacations completed by time t

3-170

M/G/1 Queue with vacations II

• The residual time including vacation periods can be rewritten as

(1/t) ∫_0^t r(τ) dτ = (M(t)/t) · (Σ_{i=1}^{M(t)} ½Xi²)/M(t) + (L(t)/t) · (Σ_{i=1}^{L(t)} ½Vi²)/L(t)

where, as t → ∞, the left side tends to R, M(t)/t → λ, and L(t)/t → (1 − ρ)/V. Hence

R = λX2/2 + (1 − ρ)V2/(2V)

• Using W = R/(1 − ρ), we have

W = λX2/(2(1 − ρ)) + V2/(2V)

– The sum of the waiting time in a plain M/G/1 queue and the mean residual vacation time

3-171
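The vacation formula can be checked by simulating the multiple-vacation discipline directly: whenever the server finds the system empty it takes another vacation, and waiting customers are picked up only at service or vacation ends. With exponential service and deterministic vacations the formula predicts W = λX2/(2(1 − ρ)) + V/2. A sketch; parameters illustrative:

```python
import random

rng = random.Random(99)
lam, mu, V, N = 0.5, 1.0, 1.0, 200_000   # Poisson arrivals, exp service, fixed vacations
t, arrivals = 0.0, []
for _ in range(N):
    t += rng.expovariate(lam)
    arrivals.append(t)

clock, i, total_wait = 0.0, 0, 0.0       # clock = next time the server is free
while i < N:
    if arrivals[i] <= clock:             # someone is waiting: serve the head of the queue
        total_wait += clock - arrivals[i]
        clock += rng.expovariate(mu)
        i += 1
    else:                                # empty at a service/vacation end: take a vacation
        clock += V

W_sim = total_wait / N
rho = lam / mu
W_pred = lam * (2 / mu**2) / (2 * (1 - rho)) + V / 2   # = 1.5 for these numbers
print(W_sim, W_pred)
```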

M/G/1 Queue with vacations: Embedded Markov chain approach

Observe the queue either at the end of a vacation or at the end of a service period
• αm: probability of m customers found at the end of a vacation period

αk = ∫_0^∞ ((λx)^k / k!) e^{−λx} v(x) dx

– v(x) is the pdf of the length of a vacation period
• ak: probability of k customers found at the end of a service time

ak = ∫_0^∞ ((λx)^k / k!) e^{−λx} b(x) dx

• Combining the above, we have the GBE as

πk = π0 Σ_{m=1}^{k+1} αm a_{k−m+1} + Σ_{j=1}^{k+1} πj a_{k−j+1}

3-172
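For a concrete vacation distribution, αk can be evaluated by numerical integration and compared with the closed form it should match; below, vacations are assumed exponential with rate ν, for which αk = (ν/(λ+ν))(λ/(λ+ν))^k. All values illustrative:

```python
import math

lam, nu = 0.5, 1.0   # arrival rate; vacations assumed exponential with rate nu

def alpha(k, dx=1e-3, xmax=40.0):
    """Midpoint-rule integral of (lam x)^k/k! e^{-lam x} * nu e^{-nu x}."""
    s, x = 0.0, dx / 2
    while x < xmax:
        s += (lam * x) ** k / math.factorial(k) * math.exp(-lam * x) \
             * nu * math.exp(-nu * x) * dx
        x += dx
    return s

for k in range(5):
    closed = (nu / (lam + nu)) * (lam / (lam + nu)) ** k
    assert abs(alpha(k) - closed) < 1e-4
print("alpha_k matches the geometric closed form")
```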


M/G/1 Queue with vacations IV

• Using Π(z) = Σ_{k=0}^∞ z^k πk,

Π(z) = π0 Σ_{m=1}^∞ αm z^{m−1} Σ_{k=m−1}^∞ a_{k−m+1} z^{k−m+1} + Σ_{j=1}^∞ πj z^{j−1} Σ_{k=j−1}^∞ a_{k−j+1} z^{k−j+1}

     = (π0/z) α(z) B∗(λ − λz) + ((Π(z) − π0)/z) B∗(λ − λz)

– α(z) = Σ_{k=0}^∞ z^k αk

Solving for Π(z),

Π(z) = π0 (1 − α(z)) B∗(λ − λz) / (B∗(λ − λz) − z)

• Using Π(1) = 1, we have π0 = (1 − ρ)/(λV)
• W = Π′(1)/λ − X

L = Π′(1) = λX + λ²X2/(2(1 − λX)) + λV2/(2V)

3-173
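The expression for L is consistent with the earlier mean-value result: W = Π′(1)/λ − X reproduces W = λX2/(2(1 − ρ)) + V2/(2V). A quick numeric check with illustrative moments:

```python
lam, Xb, X2b, Vb, V2b = 0.4, 1.0, 2.0, 1.5, 3.0   # illustrative moments
rho = lam * Xb
L = lam * Xb + lam**2 * X2b / (2 * (1 - rho)) + lam * V2b / (2 * Vb)
W_from_L = L / lam - Xb                   # W = Pi'(1)/lam - X
W_direct = lam * X2b / (2 * (1 - rho)) + V2b / (2 * Vb)
assert abs(W_from_L - W_direct) < 1e-12
print(W_direct)
```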

FDM and TDM on a Slot Basis I

Suppose m traffic streams of equal-length packets, each arriving according to a Poisson process with rate λ/m.

• If the traffic streams are frequency-division multiplexed on m subchannels, the transmission time of each packet is m time units
– Using the P-K formula, λX2/(2(1 − ρ)), with ρ = λ and µ = 1/m,

W_FDM = λm/(2(1 − λ))

• Consider the same FDM, but packet transmissions can start only at times m, 2m, 3m, . . .: slotted FDM
– This system gives stations a vacation of m slots:

W_SFDM = W_FDM + m/2  (the m/2 term is V2/(2V) for V = m)

3-174

FDM and TDM on a Slot Basis II

• m traffic streams are time-division multiplexed, where one slot is dedicated to each traffic stream as shown below

[Figure 3.20: TDM with m = 4 traffic streams; frames k and k + 1 shown, one time unit per slot.]

Thus, the customer's average total delay is more favorable in TDM than in FDM (assuming that m > 2). The longer average waiting time in queue for TDM is more than compensated by the faster service time. Contrast this with Example 3.9, which treats TDM with slots that are a very small portion of the packet size. Problem 3.33 outlines an alternative approach for deriving the TDM average delay.

3.5.2 Reservations and Polling

Organizing transmissions from several packet streams into a statistical multiplexing system requires some form of scheduling. In some cases, this scheduling is naturally and easily accomplished; in other cases, however, some form of reservation or polling system is required.

Situations of this type arise often in multiaccess channels, which will be treated extensively in Chapter 4. For a typical example, consider a communication channel that can be accessed by several spatially separated users; however, only one user can transmit successfully on the channel at any one time. The communication resource of the channel can be divided over time into a portion used for packet transmissions and another portion used for reservation or polling messages that coordinate the packet transmissions. In other words, the time axis is divided into data intervals, where actual data are transmitted, and reservation intervals, used for scheduling future data. For uniform presentation, we use the term "reservation" even though "polling" may be more appropriate to the practical situation.

We will consider m traffic streams (also called users) and assume that each data interval contains packets of a single user. Reservations for these packets are made in the immediately preceding reservation interval. All users are taken up in cyclic order (see Fig. 3.21). There are several versions of this system differing in the rule for deciding which packets are transmitted during the data interval of each user. In the gated system, the rule is that only those packets that arrived prior to the user's preceding reservation interval are transmitted. By contrast, in the exhaustive system, the rule is that all available packets of a user are transmitted during the corresponding data interval, including those that arrived in this data interval or the preceding reservation interval. An intermediate version, which we call the partially gated system, results when the packets transmitted in a user's data interval are those that arrived up to the time this data interval began (and the corresponding reservation interval ended). A typical example of such reservation systems

TDM with m = 4 traffic streams

– Service time of each queue, X: m slots → X2 = m²
– Frame synchronization delay: m/2
– Using the P-K formula, we have

W_TDM = m/(2(1 − λ)) = W_SFDM

– System response time: T = 1 + W_TDM

3-175
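These formulas can be evaluated side by side; note the identity W_SFDM = W_TDM, while the total delay still favors TDM because its service takes one slot instead of m. Illustrative values:

```python
lam, m = 0.3, 4                     # total arrival rate (packets/slot) and number of streams
W_FDM  = lam * m / (2 * (1 - lam))  # P-K with rho = lam, service time m
W_SFDM = W_FDM + m / 2              # extra m/2 from the slotted "vacation"
W_TDM  = m / (2 * (1 - lam))
assert abs(W_SFDM - W_TDM) < 1e-12  # queueing delays coincide
T_FDM = m + W_FDM                   # FDM transmits a packet in m time units
T_TDM = 1 + W_TDM                   # TDM transmits a packet in 1 slot
assert T_TDM < T_FDM                # total delay favors TDM
print(W_TDM, T_TDM, T_FDM)
```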

M/G/1 Queue with Non-Preemptive Priorities I

Customers are divided into K priority classes, k = 1, . . . , K.

Non-preemptive priority:
• Service of a customer completes uninterrupted, even if customers of higher priority arrive in the meantime
• A separate (logical) queue is maintained for each class; each time the server becomes free, the first customer in the highest-priority nonempty queue enters service

• Due to the non-preemptive policy, the mean residual service time R′ seen by an arriving customer is the same for all priority classes

Notations
• N(k)_q: mean number of waiting customers belonging to class k in the queue
• Wk: mean waiting time of class-k customers
• ρk: utilization, or load, of class k: ρk = λk Xk
• R′: mean residual service time in the server upon arrival

3-176

Page 45: Queueing theory

M/G/1 Queue with Non-Preemptive Priorities II

Stability condition: ρ1 + ρ2 + · · ·+ ρK < 1.

Priority 1: similar to the P-K formula,

W1 = R + (1/µ1) N(1)_q  and  N(1)_q = λ1 W1  ⇒  W1 = R/(1 − ρ1)

Priority 2:

W2 = R + (1/µ1) N(1)_q + (1/µ2) N(2)_q + (1/µ1) λ1 W2

where (1/µ1)N(1)_q + (1/µ2)N(2)_q is the time needed to serve the class-1 and class-2 customers ahead in the queue, and (1/µ1)λ1W2 is the time needed to serve the higher-class (class-1) customers that arrive during the waiting time of the class-2 customer.

From N(2)_q = λ2 W2,

W2 = R + ρ1 W1 + ρ2 W2 + ρ1 W2  ⇒  W2 = (R + ρ1 W1)/(1 − ρ1 − ρ2)

3-177

M/G/1 Queue with Non-Preemptive Priorities III

From W2 = R/((1 − ρ1)(1 − ρ1 − ρ2)), we can generalize:

Wk = R/((1 − ρ1 − · · · − ρk−1)(1 − ρ1 − · · · − ρk))

As before, the mean residual service time R is

R = ½ λ X2,  where λ = Σ_{i=1}^K λi  and  X2 = (1/λ) Σ_{i=1}^K λi X2_i

Mean waiting time for class-k customers:

Wk = Σ_{i=1}^K λi X2_i / ( 2(1 − ρ1 − · · · − ρk−1)(1 − ρ1 − · · · − ρk) )

Note that the average queueing time of a customer depends even on the arrival rates and service-time moments of lower-priority classes, through R.

3-178
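The class waiting times can be checked against Kleinrock's conservation law, Σk ρk Wk = ρR/(1 − ρ), which holds for any non-preemptive work-conserving M/G/1 discipline. A sketch with two illustrative classes:

```python
# Non-preemptive priority waiting times and Kleinrock's conservation law.
lams = [0.2, 0.3]        # class arrival rates (index 0 = class 1 = highest priority)
Xs   = [1.0, 1.0]        # mean service times
X2s  = [2.0, 2.0]        # second moments (exponential service here)
rhos = [l * x for l, x in zip(lams, Xs)]
R = sum(l * x2 for l, x2 in zip(lams, X2s)) / 2      # mean residual service time
W = []
for k in range(len(lams)):
    a = 1 - sum(rhos[:k])        # 1 - rho_1 - ... - rho_{k-1}
    b = 1 - sum(rhos[:k + 1])    # 1 - rho_1 - ... - rho_k
    W.append(R / (a * b))
rho = sum(rhos)
# conservation law: sum_k rho_k W_k = rho * R / (1 - rho)
assert abs(sum(r * w for r, w in zip(rhos, W)) - rho * R / (1 - rho)) < 1e-12
print(W)   # → [0.625, 1.25]
```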

M/G/1 Queue with Preemptive Priorities I

Preemptive priority:
• Service of a customer is interrupted when a higher-priority customer arrives.
• It resumes from the point of interruption when all higher-priority customers have been served.
• In this case the lower-priority customers are completely "invisible" and do not affect in any way the queues of the higher classes.

Waiting time of a class-k customer consists of:
• The customer's own mean service time Xk.
• The mean time to serve the customers in classes 1, . . . , k ahead in the queue,

Rk/(1 − ρ1 − · · · − ρk),  where Rk = ½ Σ_{i=1}^k λi X2_i.

This is equal to the average waiting time in an M/G/1 system where customers of priority lower than k are neglected.

3-179

M/G/1 Queue with Preemptive Priorities II

• Average time required to serve customers of priority higher than k that arrive while the customer is in the system:

Σ_{i=1}^{k−1} λi Xi Tk = Σ_{i=1}^{k−1} ρi Tk  for k > 1, and 0 for k = 1

• Combining these,

Tk = Xk + Rk/(1 − ρ1 − · · · − ρk) + Tk Σ_{i=1}^{k−1} ρi   (the last sum is zero for k = 1)

⇒ Tk = [ (1 − ρ1 − · · · − ρk)Xk + ½ Σ_{i=1}^k λi X2_i ] / [ (1 − ρ1 − · · · − ρk−1)(1 − ρ1 − · · · − ρk) ]

(the factor 1 − ρ1 − · · · − ρk−1 is 1 when k = 1)

3-180
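As a sanity check, for the highest class (k = 1) the formula must reduce to the plain M/G/1 sojourn time T = X1 + λ1X2_1/(2(1 − ρ1)). Illustrative parameters:

```python
lams = [0.2, 0.3]   # class arrival rates (class 1 = highest priority)
Xs   = [1.0, 1.0]   # mean service times
X2s  = [2.0, 2.0]   # second moments
rhos = [l * x for l, x in zip(lams, Xs)]

def T(k):  # mean sojourn time of class k (1-based), preemptive-resume priority
    s_prev = sum(rhos[:k - 1])
    s_k = sum(rhos[:k])
    Rk = sum(lams[i] * X2s[i] for i in range(k)) / 2
    return ((1 - s_k) * Xs[k - 1] + Rk) / ((1 - s_prev) * (1 - s_k))

# class 1 never sees lower classes: plain M/G/1 sojourn time
T_mg1 = Xs[0] + lams[0] * X2s[0] / (2 * (1 - rhos[0]))
assert abs(T(1) - T_mg1) < 1e-12
print(T(1), T(2))   # → 1.25 2.5
```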