
A Class of Functionals on Markov Chain Transitions

Journal of Mathematical Sciences, Vol. 198, No. 5, May, 2014

A CLASS OF FUNCTIONALS ON MARKOV CHAIN TRANSITIONS

V. S. Lugavov

Kurgan State University, 25, Gogolya str., Kurgan 640669, Russia

[email protected] UDC 519.217.2 + 519.214.6

We find distributions of functionals on transitions of a Markov chain. Bibliography: 15 titles.

Introduction

Let κ = {κ(n); n = 0, 1, . . .} be a homogeneous Markov chain with the set of states D = {1, 2, . . . , N}. For a subset B ⊂ D² we consider the random set

T_B = {n : (κ(n−1), κ(n)) ∈ B} ∪ {0}

and the functionals

m(B,n) = Card{T_B ∩ [1,n]},  q(B,n) = max{T_B ∩ [0,n]},  r(B,n) = inf{T_B ∩ [n+1, ∞)}  (inf ∅ = ∞),

where Card(A) is the number of elements of a set A. Denote s(B) = r(B, 0), so that s(B) = inf{T_B ∩ [1, ∞)}.

Varying the set B, we can obtain a large class of functionals. Thus, if C is a nonempty subset of D, then s(D × C) is the first positive time moment when the chain κ attains the set C, s((D\C) × C) is the time moment of the first entrance of the chain into the set C, m(D × C, n) is the sojourn time of the chain κ in the set C during the time interval [1, n], and m((D\C) × C, n) is the number of entrances of κ into C during [1, n]. The functional q(D × C, n), defined on the set {s(D × C) ≤ n}, is the time moment of the last (in the time interval [1, n]) sojourn of the chain κ in C. Similarly, the functional q((D\C) × C, n) on the set {s((D\C) × C) ≤ n} is the last (in the time interval [1, n]) time moment of entrance of the chain κ into the set C.
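These definitions can be checked directly on a realized trajectory. The following sketch is not part of the paper: the chain values and the set B are arbitrary made-up examples, and r is only meaningful here when an element of T_B beyond n is actually observed in the finite path.

```python
def functionals(path, B, n):
    """Compute m(B,n), q(B,n), r(B,n) and s(B) for a realized path kappa(0..)."""
    # T_B contains 0 by definition, plus every time of a transition in B
    T = {0} | {t for t in range(1, len(path)) if (path[t - 1], path[t]) in B}
    m = sum(1 for t in T if 1 <= t <= n)          # Card(T_B ∩ [1, n])
    q = max(t for t in T if t <= n)               # max(T_B ∩ [0, n]); 0 ∈ T_B
    later = [t for t in T if t >= n + 1]
    r = min(later) if later else float("inf")     # inf(T_B ∩ [n+1, ∞)), inf ∅ = ∞
    pos = [t for t in T if t >= 1]
    s = min(pos) if pos else float("inf")         # s(B) = r(B, 0)
    return m, q, r, s

# Example: D = {1, 2}, B = D x {2} (transitions into state 2), a fixed path
path = [1, 1, 2, 1, 2, 2, 1]
m, q, r, s = functionals(path, {(1, 2), (2, 2)}, n=4)
```

Here T_B = {0, 2, 4, 5}, so m(B,4) = 2, q(B,4) = 4, r(B,4) = 5, s(B) = 2.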

Many of the above-mentioned functionals have been studied rather well. Thus, for classical Markov chains the local limit theorem [1] and the integral limit theorem [1, 2] were proved. Generalizations to processes on a Markov chain can be found in [3]–[8]. An exact distribution law for the matrix of numbers of transitions in n steps (i.e., m({i} × {j}, n)) was established by combinatorial methods in [9], where it was also proved that the distribution is asymptotically

Translated from Vestnik Novosibirskogo Gosudarstvennogo Universiteta: Seriya Matematika, Mekhanika, Informatika 12, No. 2, 2012, pp. 61–82.

1072-3374/14/1985-0580 © 2014 Springer Science+Business Media New York


normal. Distribution of the number of series of the ith state of a two-valued chain was found

in [10] (this functional is simply expressed in terms of m((D\{i}) × {i}, n)). Distributions of

the number of occurrences of some state and the number of series of this state were described

in [11] for two-valued chains. Comprehensive information about the functionals m(D × {i}, n), q(D × {i}, n), and r(D × {i}, n) can be found in [7, 8, 12].

In Theorems 1 and 2, conditions for s(B) to be finite and factorial moments of s(B) are found

(provided that s(B) is finite). Theorem 3 yields a transformation of the distribution of the random vector (m(B,n), q(B,n), r(B,n), κ(q(B,n)), κ(r(B,n))). Using the inverse transformation, it is

possible to find the limit distributions of the functionals r(B,n) and q(B,n) for regular chains

(i.e., such that all their essential states are nonperiodic).

Theorem 7 yields the limit distribution of the functional m(B,n) in the case where the initial

state κ(0) of the chain is inessential. To prove this assertion, we study (Theorems 5 and 6) the

limit behavior of the variance of this functional.

The results of this paper are partially announced in [13].

1 Auxiliaries

We consider a homogeneous Markov sequence of random vectors V = {V(n), σ(n); n = 0, 1, . . .} controlled by a homogeneous Markov chain σ = {σ(n), n = 0, 1, . . .} with finitely many states D₀ = {1, 2, . . . , N+1}. The coordinate V(n) takes values in Z₊ ∪ {∞} (Z₊ is the set of nonnegative integers), and V(0) = 0. For any Borel set B ⊂ [0, ∞] it is assumed that

P(V(n+1) ∈ B, σ(n+1) = j / V(n) = x, σ(n) = i) = P(V(1) + x ∈ B, σ(1) = j / σ(0) = i),  x ∈ Z₊, i ∈ D, j ∈ D₀.

We assume that the state (∞, N + 1) of the sequence V is absorbing and the absorption is

simultaneous with respect to both coordinates. We set

ϕ(ρ) = ‖E(ρ^{V(1)}; σ(1) = j / σ(0) = i)‖_{i,j=1,N},

ϕ̃(ρ) = ‖E(ρ^{V(1)}; σ(1) = j / σ(0) = i)‖ (i = 1, N, j = 1, N+1),

where ρ^∞ = 0 for 0 < ρ < 1 and 1^∞ = 1. We consider the functional

t(k) = inf{n > 0 : V(n) ≥ k},  k ≥ 1.

Introduce the notation: δ_ij, i, j ∈ {1, . . . , N+1}, is the Kronecker symbol, and I = ‖δ_ij‖_{i,j=1,N} is the identity matrix of order N. For an arbitrary set M we define δ_M(x) = 1 if x ∈ M and δ_M(x) = 0 if x ∉ M. For S ⊂ (D₀)² we denote by S^c the complement of S in (D₀)². For an arbitrary rectangular matrix A(·) = ‖a_ij(·)‖ (i = 1, k, j = 1, l), k, l ∈ {1, 2, . . . , N+1}, we set

A(S, ·) = ‖a_ij(·) δ_S(i, j)‖ (i = 1, k, j = 1, l),

[A(·)]_ij = a_ij(·),  i = 1, k,  j = 1, l,

|A(·)| = sup_{1≤i≤k} Σ_{j=1}^{l} |a_ij(·)|.


For an arbitrary matrix C(·) = ‖c_ij(·)‖_{i,j=1,k}, k ∈ 1, N, we set

C*(·) = ‖δ_ij Σ_{l=1}^{k} c_il(·)‖_{i,j=1,k}.

Lemma 1 (cf. [14, Lemma 1]). Let |ϕ(ρ)| < 1 for 0 < ρ < 1, which is equivalent to P(V(1) = 0 / σ(0) = j) < 1, j = 1, N. Then for 0 < γ < 1, 0 < z < 1, 0 < μ₂ ≤ 1, 0 < γμ₁μ₂ ≤ 1; i, j ∈ 1, N, r ∈ 1, N+1

(1 − γ)γ⁻¹ Σ_{k=1}^{∞} γ^k E(z^{t(k)−1} μ₁^{V(t(k)−1)} μ₂^{V(t(k))}; t(k) < ∞, σ(t(k)−1) = j, σ(t(k)) = r / σ(0) = i) = [(I − zϕ(μ₁γμ₂))⁻¹]_{ij} [ϕ̃(μ₂) − ϕ̃(γμ₂)]_{jr}.

2 Functionals on Transitions of a Markov Chain

As in the introduction, we suppose that κ = {κ(n), n = 0, 1, . . .} is a homogeneous Markov chain with the set of states D = {1, 2, . . . , N} and transition matrix P; T_B = {n : (κ(n−1), κ(n)) ∈ B} ∪ {0}, B ⊂ D². Throughout the section, we use the notation P_i(·) = P(· / κ(0) = i) and E_i(·) = E(· / κ(0) = i).

To study the functionals defined in the introduction, we consider the sequence of moments

T_B(0) = 0,  T_B(k) = inf{T_B ∩ (T_B(k−1), ∞)},  k ≥ 1,

and the process

V_B = {T_B(n), κ(T_B(n)); n = 0, 1, . . .},  where κ(∞) = N+1.

The process V_B is a two-dimensional Markov process, homogeneous with respect to time and additive with respect to the first component, with absorbing state (∞, N+1). The evolution matrix of the process V_B is expressed as

ϕ_B(ρ) = ‖E(ρ^{T_B(1)}; κ(T_B(1)) = j / κ(T_B(0)) = i)‖_{i,j=1,N}. (1)

We set

ϕ̃_B(ρ) = ‖E(ρ^{T_B(1)}; κ(T_B(1)) = j / κ(T_B(0)) = i)‖ (i = 1, N, j = 1, N+1).

For 0 < ρ < 1 the matrix ϕ_B(ρ) can be represented as

ϕ_B(ρ) = ρ(I − ρP(B^c))⁻¹ P(B) (2)

because ϕ_B(ρ) = ρP(B) + ρP(B^c)ϕ_B(ρ).

1. We consider the functional s(B) = inf{n ≥ 1 : (κ(n−1), κ(n)) ∈ B} defined in the introduction. It is easy to find the distribution of s(B). Indeed, from (2) and s(B) = T_B(1) it follows that

P_i(s(B) = n, κ(s(B)) = j) = [(P(B^c))^{n−1} P(B)]_{ij}

for n ≥ 1, i, j = 1, N. In particular,

P_i(s(B) = ∞) = lim_{n→∞} [{(P(B^c))^n}*]_{ii}. (3)
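As a numerical illustration (not part of the paper), the displayed law of s(B) can be evaluated directly; the two-state chain and the set B below are made-up examples, with states indexed 0..N−1 rather than 1..N.

```python
import numpy as np

def restrict(P, B):
    """P(B): keep only the entries of P whose index pair lies in B."""
    N = P.shape[0]
    return np.array([[P[i, j] if (i, j) in B else 0.0 for j in range(N)]
                     for i in range(N)])

# Example chain and transition set B = D x {second state}
P = np.array([[0.5, 0.5],
              [0.3, 0.7]])
B = {(0, 1), (1, 1)}
Bc = {(i, j) for i in range(2) for j in range(2)} - B
PB, PBc = restrict(P, B), restrict(P, Bc)

def s_dist(n):
    """Matrix of P_i(s(B) = n, kappa(s(B)) = j) = [(P(B^c))^{n-1} P(B)]_{ij}."""
    return np.linalg.matrix_power(PBc, n - 1) @ PB

# Row sums over n should approach 1 when s(B) is P_i almost surely finite
total = sum(s_dist(n).sum(axis=1) for n in range(1, 200))
```

Here the spectral radius of P(B^c) is 0.5 < 1, so the truncated sums are numerically 1 for both starting states.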


We try to deduce conditions on the pair (i, B) for s(B) to be P_i almost surely finite and find the factorial moments of s(B) under these conditions. For this purpose we use (3) and the following equality obtained from (2):

E_i(ρ^{s(B)}; s(B) < ∞) = ρ[{(I − ρP(B^c))⁻¹ P(B)}*]_{ii} (4)

for 0 < ρ < 1, i = 1, N. We also need the following property of nonnegative (with nonnegative entries) matrices.

Lemma 2. If at least one entry of a nonnegative irreducible matrix is decreased so that the resulting matrix remains nonnegative, then the spectral radius of the matrix strictly decreases.

Lemma 2 can be obtained from the following assertion (cf. [15, Chapter 13, Section 3]): if at least one entry of a nonnegative matrix is increased, the spectral radius does not decrease; moreover, it strictly increases if, in addition, the matrix is irreducible.

Indeed, let C be a nonnegative matrix obtained from an irreducible matrix D by decreasing some entries. Since C + (1/2)(D − C) and D = C + (D − C) have the same location of their zero entries, the matrix C + (1/2)(D − C) is also irreducible. Applying the first part of the above assertion to the matrices C and C + (1/2)(D − C) and the second part to the matrices C + (1/2)(D − C) and D, we obtain the required property.
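Lemma 2 is easy to check numerically; the irreducible matrix below is a made-up example, not from the paper.

```python
import numpy as np

# D is nonnegative, irreducible (all entries positive) and stochastic,
# so its spectral radius equals 1. Decreasing a single entry while
# staying nonnegative must strictly decrease the spectral radius.
D = np.array([[0.2, 0.8],
              [0.6, 0.4]])
C = D.copy()
C[0, 1] = 0.5                       # decrease one entry of D

def spectral_radius(A):
    return max(abs(np.linalg.eigvals(A)))

r_D, r_C = spectral_radius(D), spectral_radius(C)
```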

Let i be an essential state of a chain κ, and let S(i) be the class of essential communicating

states of the chain containing the state i. It is obvious that the submatrix of P corresponding

to transitions inside the class S(i) is an irreducible stochastic matrix.

Theorem 1. Let i be an essential state of a chain κ. Then s(B) is P_i almost surely finite if and only if P(B ∩ S²(i)) ≠ O (O is the zero matrix of order N). If s(B) is P_i almost surely finite, then for k ≥ 0 (the zeroth power of a matrix being I)

E_i(∏_{n=0}^{k} (s(B) − n)) = (k+1)! [{(I − P(B^c ∩ S²(i)))^{−(k+1)} (P(B^c ∩ S²(i)))^k}*]_{ii}.

Proof. We assume that P(B ∩ S²(i)) ≠ O. By Lemma 2, the spectral radius of the matrix P(B^c ∩ S²(i)) is less than 1. By (3),

P_i(s(B) = ∞) = lim_{n→∞} [{(P(B^c ∩ S²(i)))^n}*]_{ii} = 0.

It is obvious that the condition P(B ∩ S²(i)) ≠ O is necessary for s(B) to be P_i almost surely finite since a transition from an essential state is possible only to a state of the same class. For the same reason, {s(B) < ∞} P_i almost surely coincides with {s(B ∪ (S²(i))^c) < ∞}, and s(B), s(B ∪ (S²(i))^c) are P_i almost surely equal on these sets. By (4),

E_i(ρ^{s(B)}; s(B) < ∞) = ρ[{(I − ρP(B^c ∩ S²(i)))⁻¹ P(B ∪ (S²(i))^c)}*]_{ii}

for 0 < ρ < 1. Differentiating both sides of the last equality (k+1) times and passing to the limit as ρ ↑ 1, we easily obtain the second assertion.

We note that the first two moments of s(B) for B = D × {i₀}, i₀ ∈ D, were obtained in [7, 8].


Remark 1. Suppose that, under the assumptions of Theorem 1, the matrix P(B^c ∩ S²(i)) is nilpotent and n₀ is its nilpotency index (n₀ = min{k > 0 : (P(B^c ∩ S²(i)))^k = O}). Then for k > n₀ the moments E_i(s(B))^k are represented as linear combinations of E_i(s(B))^n, n ≤ n₀; moreover, the coefficients of these linear combinations depend only on n₀ and k. This is true because for an essential state i we have

P_i(s(B) = n, κ(s(B)) = j) = [(P(B^c ∩ S²(i)))^{n−1} P(B)]_{ij} = 0,  j = 1, N,  n > n₀,

and, consequently, s(B) ≤ n₀ P_i almost surely.

Consider the case of an inessential state i. For an arbitrary nonempty subset A ⊂ D we define the probability p(i, A) = P_i{κ(n) ∈ A for some n > 0} of reaching the set A from the state i and say that the set A is accessible from the state i if p(i, A) > 0. It is rather simple to verify whether a set A is accessible from a state i with the help of the directed graph corresponding to the transition matrix P. Let S₁, S₂, . . . , S_d, d ≥ 1, be all the classes of essential communicating states of the chain κ. To find p(i, S_m), we denote by H the set of inessential states of the chain κ and note that (for i ∈ H)

p(i, S_m) = P_i{∃ n > 0 : (κ(n−1), κ(n)) ∈ H × S_m} = P_i{s(H × S_m) < ∞}.

Since the chain never returns to H after leaving it, from (4) we find

E_i(ρ^{s(H×S_m)}; s(H × S_m) < ∞) = ρ[{(I − ρP(H²))⁻¹ P(H × S_m)}*]_{ii}

and, consequently,

p(i, S_m) = [{(I − P(H²))⁻¹ P(H × S_m)}*]_{ii}.
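A sketch evaluating this formula for p(i, S_m) on a hypothetical chain with one inessential state and two absorbing essential classes (0-based indices; the chain itself is a made-up example, not from the paper):

```python
import numpy as np

# Hypothetical 3-state chain: state 0 is inessential (H = {0});
# S_1 = {1} and S_2 = {2} are absorbing essential classes.
P = np.array([[0.2, 0.5, 0.3],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
H = {0}

def restrict(P, B):
    """P(B): keep only the entries of P whose index pair lies in B."""
    N = P.shape[0]
    return np.array([[P[i, j] if (i, j) in B else 0.0 for j in range(N)]
                     for i in range(N)])

PH2 = restrict(P, {(a, b) for a in H for b in H})        # P(H^2)

def p_access(i, Sm):
    """p(i, S_m) = [{(I - P(H^2))^{-1} P(H x S_m)}*]_{ii} (row sum at i)."""
    M = np.linalg.inv(np.eye(P.shape[0]) - PH2) @ restrict(
        P, {(a, b) for a in H for b in Sm})
    return M[i].sum()

p1, p2 = p_access(0, {1}), p_access(0, {2})
```

For this chain p(0, S₁) = 0.5/0.8 = 0.625 and p(0, S₂) = 0.3/0.8 = 0.375, and the two probabilities sum to 1 since absorption is certain.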

Denote G_i = {S_m, 1 ≤ m ≤ d : p(i, S_m) > 0}. The set G_i is not empty. A sequence of states (j₀, j₁, . . . , j_n), n ≥ 1, is called a chain if all transition probabilities p_{j_{m−1}, j_m}, m = 1, n, are positive, and a simple chain if, in addition, all states in this chain are different. We say that a chain (j₀, j₁, . . . , j_n), n ≥ 1, joins an inessential state i with the class S_m if j₀ = i, j_{n−1} ∈ H, j_n ∈ S_m. We denote by n(i, S_m) the number of simple chains joining the state i with the class S_m. With each simple chain (j₀, j₁, . . . , j_n) we associate the unique set {(j₀, j₁), . . . , (j_{n−1}, j_n)}, called the transition set corresponding to the given simple chain. We arbitrarily enumerate the simple chains joining i with S_m ∈ G_i by the numbers 1, 2, . . . , n(i, S_m), and let D₁(i, S_m), . . . , D_{n(i,S_m)}(i, S_m) denote the corresponding transition sets.

Theorem 2. Let i be an inessential state of a chain κ. Then s(B) is P_i almost surely finite if and only if for any class S_m ∈ G_i either P(B ∩ (S_m)²) ≠ O or B ∩ D_r(i, S_m) ≠ ∅, r = 1, n(i, S_m). If s(B) is P_i almost surely finite, then

E_i(∏_{n=0}^{k} (s(B) − n)) = (k+1)! [{(I − P((B′)^c))^{−(k+1)} (P((B′)^c))^k}*]_{ii}

for k ≥ 0, where

B′ = B ∪ (⋃_{m: S_m ∉ G_i} (S_m)²) ∪ (⋃_{m: S_m ∈ G_i, P(B ∩ (S_m)²) = O} (S_m)²).


Proof. Necessity. We assume that there is a class S_m ∈ G_i such that P(B ∩ (S_m)²) = O and B ∩ D_r(i, S_m) = ∅ for some r ∈ 1, n(i, S_m). Let (i, j₁, . . . , j_n) be the chain corresponding to the transition set D_r(i, S_m). Then

P_i{s(B) = ∞} ≥ P_i{κ(1) = j₁, . . . , κ(n) = j_n} = p_{i,j₁} ∏_{l=1}^{n−1} p_{j_l j_{l+1}} > 0.

Sufficiency. We consider the completion B′ of the set B by the elements of those sets (S_m)² (1 ≤ m ≤ d) for which either S_m ∉ G_i, or both S_m ∈ G_i and P(B ∩ (S_m)²) = O. It is easy to see that the sets {s(B) < ∞} and {s(B′) < ∞} coincide P_i almost surely, whereas s(B) and s(B′) are P_i almost surely equal on these sets. By (4),

E_i(ρ^{s(B)}; s(B) < ∞) = ρ[{(I − ρP((B′)^c))⁻¹ P(B′)}*]_{ii}.

By the structure of B′, the spectral radius of the matrix P((B′)^c) is less than 1. Passing to the limit on both sides of the last equality as ρ ↑ 1, we conclude that s(B) is P_i almost surely finite. Using the last equality, we easily obtain the second assertion of Theorem 2.

Using Theorems 1 and 2, we can find necessary and sufficient conditions for s(B) to be P_i almost surely finite for all states i in any subset of D. In particular, we obtain the following assertion (which also follows from (3) and (4)).

Corollary 1. The quantity s(B) is P_i almost surely finite for all i = 1, N if and only if the spectral radius of the matrix P(B^c) is less than 1 or, equivalently, P(B ∩ (S_m)²) ≠ O for all classes of essential communicating states S_m. If s(B) is P_i almost surely finite for all i = 1, N, then for k ≥ 0, i ∈ D

E_i(∏_{n=0}^{k} (s(B) − n)) = (k+1)! [{(I − P(B^c))^{−(k+1)} (P(B^c))^k}*]_{ii}.
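The factorial-moment formula of Corollary 1 is easy to evaluate numerically; below it is cross-checked against the series Σ n P_i(s(B) = n) on a made-up two-state chain (0-based indices; (·)* is implemented as a row sum since only the (i, i) entry is needed).

```python
import numpy as np
from math import factorial

P = np.array([[0.5, 0.5],
              [0.3, 0.7]])
B = {(0, 1), (1, 1)}                          # B = D x {second state}
PB = np.array([[P[i, j] if (i, j) in B else 0.0 for j in range(2)]
               for i in range(2)])            # P(B)
PBc = P - PB                                  # P(B^c)

def factorial_moment(i, k):
    """E_i[s(B)(s(B)-1)...(s(B)-k)] via the formula of Corollary 1."""
    M = np.linalg.matrix_power(np.linalg.inv(np.eye(2) - PBc), k + 1) \
        @ np.linalg.matrix_power(PBc, k)
    return factorial(k + 1) * M[i].sum()      # row sum = [{...}*]_{ii}

# cross-check E_0 s(B) against the series  sum_n n * P_0(s(B) = n)
direct = sum(n * (np.linalg.matrix_power(PBc, n - 1) @ PB)[0].sum()
             for n in range(1, 400))
```

For this example s(B) is geometric with parameter 1/2 under P₀, so E₀ s(B) = 2 and E₀ s(B)(s(B)−1) = 4, which both the closed form and the series reproduce.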

2. We find the limit distributions of q(B,n) and r(B,n) (cf. the introduction). For this purpose we consider the process V_B = {T_B(n), κ(T_B(n)); n = 0, 1, . . .} with evolution matrix ϕ_B(ρ) (cf. (1)). We set t_B(k) = inf{n > 0 : T_B(n) ≥ k}, k ≥ 1. Then t_B(k) − 1 = m(B, k−1), T_B(t_B(k) − 1) = q(B, k−1), T_B(t_B(k)) = r(B, k−1). Lemma 1 implies the following assertion.

Theorem 3. For 0 < ρ < 1, 0 < γ < 1, 0 < λ ≤ 1, 0 < μλ ≤ 1, i, j ∈ 1, N, r ∈ 1, N+1

(1 − γ) Σ_{k=0}^{∞} γ^k E_i(ρ^{m(B,k)} μ^{q(B,k)} λ^{r(B,k)}; κ(q(B,k)) = j, κ(r(B,k)) = r)
= −[(I − ρϕ_B(μλγ))⁻¹]_{ij} [ϕ̃_B(λγ) − ϕ̃_B(λ)]_{jr}. (5)

A homogeneous Markov chain is said to be regular if all its essential states are nonperiodic. For regular chains there exist limiting transition probabilities [15, Chapter 13, Section 7]. Let Π = lim_{n→∞} P^n be the matrix of limiting transition probabilities of a regular chain κ. Inverting the equality (5) and making simple transformations, we obtain the following assertion (cf. (2)).


Corollary 2. Suppose that κ is a regular Markov chain and Π is the matrix of limiting transition probabilities. Then for i, j ∈ 1, N, t ≥ 0

lim_{k→∞} P_i(k − q(B,k) = t; κ(q(B,k)) = j) = [ΠP(B)((P(B^c))^t)*]_{ij},

lim_{k→∞} P_i(r(B,k) − k = t+1; κ(r(B,k)) = j) = [Π(P(B^c))^t P(B)]_{ij},

lim_{k→∞} P_i(r(B,k) − q(B,k) = t+1) = (t+1)[(Π(P(B^c))^t (I − P(B^c))²)*]_{ii}.

For B = D × i0, i0 ∈ D, representations of the left-hand sides of these equalities were

obtained in [7].
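As a numerical sanity check of the first equality of Corollary 2 (on a made-up regular chain, 0-based indices, not from the paper), the limiting probabilities of the event k − q(B,k) = t, summed over t, should total 1 when s(B) is almost surely finite.

```python
import numpy as np

P = np.array([[0.5, 0.5],
              [0.3, 0.7]])
B = {(0, 1), (1, 1)}
PB = np.array([[P[i, j] if (i, j) in B else 0.0 for j in range(2)]
               for i in range(2)])
PBc = P - PB
Pi_mat = np.linalg.matrix_power(P, 200)      # Π = lim P^n (regular chain)

def star(A):
    """(.)* from Section 1: the diagonal matrix of row sums."""
    return np.diag(A.sum(axis=1))

def limit_tail_q(i, t):
    """lim_k P_i(k - q(B,k) = t) = [(Π P(B) ((P(B^c))^t)*)*]_{ii}."""
    M = Pi_mat @ PB @ star(np.linalg.matrix_power(PBc, t))
    return M[i].sum()

total = sum(limit_tail_q(0, t) for t in range(200))
```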

3. To find the distribution of m(B,k), k ≥ 1, we set μ = λ = 1 in (5) and take the sum with respect to j from 1 to N and with respect to r from 1 to N+1. Then

(1 − γ) Σ_{k=0}^{∞} γ^k E_i(ρ^{m(B,k)}) = [((I − ρϕ_B(γ))⁻¹ (I − ϕ_B(γ)))*]_{ii}.

Using (2), we find

E_i(ρ^{m(B,k)}) = [((P(B^c) + ρP(B))^k)*]_{ii}, (6)

which implies

E_i(∏_{n=0}^{r} (m(B,k) − n)) = (r+1)! Σ_{n₁,...,n_{r+1} ≥ 0, n₁+⋯+n_{r+1} ≤ k−r−1} [(∏_{i=1}^{r+1} (P^{n_i} P(B)))*]_{ii} (7)

for k − 1 ≥ r ≥ 0.
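Formula (6) says that the law of m(B,k) is given by the coefficients of a polynomial in ρ; those coefficients can be accumulated by a dynamic program over (current state, number of B-transitions so far). A sketch on a made-up chain (0-based indices, not from the paper):

```python
import numpy as np

P = np.array([[0.5, 0.5],
              [0.3, 0.7]])
B = {(0, 1), (1, 1)}                         # B = D x {second state}
PB = np.array([[P[i, j] if (i, j) in B else 0.0 for j in range(2)]
               for i in range(2)])
k = 3

def m_distribution(i0):
    """Law of m(B,k) under P_{i0}: coefficients of the polynomial (6) in rho."""
    f = np.zeros((2, k + 1)); f[i0, 0] = 1.0  # f[j, l] = P(kappa(t)=j, count=l)
    for _ in range(k):
        g = np.zeros_like(f)
        for i in range(2):
            for j in range(2):
                s = 1 if (i, j) in B else 0   # does (i, j) count as a B-transition?
                g[j, s:k + 1] += P[i, j] * f[i, 0:k + 1 - s]
        f = g
    return f.sum(axis=0)

dist = m_distribution(0)
# cross-check the mean against E_i m(B,k) = sum_{n<k} [(P^n P(B))*]_{ii}
mean_formula = sum((np.linalg.matrix_power(P, n) @ PB)[0].sum() for n in range(k))
```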

We also need the following assertion.

Lemma 3. Suppose that A_n, n = 0, ∞, and C_n, n = 0, ∞, are sequences of matrices of the same order that converge (componentwise) to matrices A and C respectively. Then the sequence

n⁻¹ Σ_{k=0}^{n−1} A_k C_{n−1−k},  n = 1, ∞,

converges to the matrix AC.

In the scalar case (i.e., when A_n, n = 0, ∞, and C_n, n = 0, ∞, are number sequences), this assertion follows from the Toeplitz theorem. The general case follows from the scalar case.

Theorem 4. Suppose that κ is a regular Markov chain and Π is the matrix of limiting transition probabilities. Then for i ∈ 1, N

lim_{k→∞} {E_i m(B,k) − k[(ΠP(B))*]_{ii}} = [(((I − P + Π)⁻¹ − Π)P(B))*]_{ii}.

Proof. Since PΠ = Π and Π² = Π, for n ≥ 1 we have

(P − Π)^n = Σ_{k=0}^{n} C_n^k P^k (−Π)^{n−k} = P^n + Σ_{k=0}^{n−1} C_n^k (−1)^{n−k} Π = P^n − Π.


By (7), we have

E_i m(B,k) = Σ_{n=0}^{k−1} [(P^n P(B))*]_{ii} = Σ_{n=0}^{k−1} [((P^n − Π)P(B))*]_{ii} + k[(ΠP(B))*]_{ii}
= Σ_{n=1}^{k−1} [((P − Π)^n P(B))*]_{ii} + [((I − Π)P(B))*]_{ii} + k[(ΠP(B))*]_{ii}

and, consequently,

lim_{k→∞} {E_i m(B,k) − k[(ΠP(B))*]_{ii}}
= [((I − P + Π)⁻¹ P(B))*]_{ii} − [(P(B))*]_{ii} + [((I − Π)P(B))*]_{ii}
= [(((I − P + Π)⁻¹ − Π)P(B))*]_{ii}.

The theorem is proved.

Corollary 3. Under the assumptions of Theorem 4,

lim_{k→∞} k⁻¹ E_i(m(B,k)) = [(ΠP(B))*]_{ii},  i ∈ 1, N.
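Theorem 4 and Corollary 3 can be checked numerically: for large k, E_i m(B,k) − k[(ΠP(B))*]_{ii} should match the closed form involving the fundamental matrix (I − P + Π)⁻¹. The chain below is a made-up example (0-based indices), not from the paper.

```python
import numpy as np

P = np.array([[0.5, 0.5],
              [0.3, 0.7]])
PB = np.array([[0.0, 0.5],
               [0.0, 0.7]])                   # P(B) for B = D x {second state}
Pi_mat = np.linalg.matrix_power(P, 200)       # Π
Z = np.linalg.inv(np.eye(2) - P + Pi_mat)     # (I - P + Π)^{-1}

k = 2000
E_m = sum((np.linalg.matrix_power(P, n) @ PB)[0].sum() for n in range(k))
lhs = E_m - k * (Pi_mat @ PB)[0].sum()        # E_0 m(B,k) - k [(Π P(B))*]_{00}
rhs = ((Z - Pi_mat) @ PB)[0].sum()            # [(((I-P+Π)^{-1} - Π) P(B))*]_{00}
```

Since P^n − Π decays geometrically here, lhs and rhs agree to machine precision, and E_m/k approaches [(ΠP(B))*]_{00} as in Corollary 3.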

We find the limit distribution of the functional m(B,n). Following [15, Chapter 13, Section 7], we say that a regular Markov chain is fully regular if the set of its essential states consists of a single class of communicating states. A regular chain containing more than one class of essential communicating states is referred to as a non-fully regular chain. We need some properties of the matrix Π of limiting transition probabilities for regular chains. It is known (cf. [12, Chapter 12, Section 5]) that the entries of the matrix Π of a fully regular chain satisfy the equalities [Π]_ij = [Π]_jj if i and j are inessential and essential states respectively, whereas [Π]_ij = 0 if both i and j are inessential states. As above, let H be the set of inessential states of the chain κ, and let S(j) (for an essential state j) be the class of essential states containing the state j. The following assertion supplements the above result.

Lemma 4. Let κ be a regular Markov chain, and let Π be the matrix of limiting transition probabilities. Then for i ∈ H the equality [Π]_ij = p(i, S(j))[Π]_jj holds for any essential state j.

Proof. Let c be an arbitrary nonnegative integer, and let n ≥ c. Then

[P^n]_ij = P_i(κ(n) = j, κ(c) ∈ H) + Σ_{l∈S(j)} P_i(κ(c) = l) P_l(κ(n−c) = j).

We denote by I₁(c,n) and I₂(c,n) the first and second terms. For I₂(c,n) we have

lim_{n→∞} I₂(c,n) = Σ_{l∈S(j)} P_i(κ(c) = l)[Π]_lj = [Π]_jj Σ_{l∈S(j)} P_i(κ(c) = l) = [Π]_jj P_i(s(H × S(j)) ≤ c)

and, consequently,

lim_{c→∞} lim_{n→∞} I₂(c,n) = [Π]_jj P_i(s(H × S(j)) < ∞) = [Π]_jj p(i, S(j)).

For I₁(c,n) we have

lim sup_{n→∞} I₁(c,n) ≤ lim_{n→∞} P_i(s(H × S(j)) ∈ (c, n]) = P_i(s(H × S(j)) ∈ (c, ∞))

and, consequently,

lim_{c→∞} lim sup_{n→∞} I₁(c,n) = 0.

Therefore,

lim sup_{n→∞} [P^n]_ij ≤ lim_{c→∞} lim sup_{n→∞} I₁(c,n) + lim_{c→∞} lim_{n→∞} I₂(c,n) = p(i, S(j))[Π]_jj

and

lim inf_{n→∞} [P^n]_ij ≥ lim_{c→∞} lim_{n→∞} I₂(c,n) = p(i, S(j))[Π]_jj.

The lemma is proved.

We set D_i(·) = D(· / κ(0) = i), where D denotes the variance, and study the limit behavior of D_i m(B,k), i ∈ 1, N, as k → ∞.

Theorem 5. Let κ be a (not necessarily fully) regular Markov chain. If any of the following conditions is satisfied:

1) i is an arbitrary state of a fully regular chain,

2) i is an arbitrary essential state of a non-fully regular chain,

3) i is an inessential state of a non-fully regular chain such that

[((ΠP(B) − (ΠP(B))*)ΠP(B))*]_{ii} = 0, (8)

then

lim_{k→∞} k⁻¹ D_i m(B,k) = [(ΠP(B))*]_{ii} − 3{[(ΠP(B))*]_{ii}}² + 2[(ΠP(B)(I − P + Π)⁻¹ P(B))*]_{ii}. (9)

If the following condition is satisfied:

4) i is an inessential state of a non-fully regular chain such that

[((ΠP(B) − (ΠP(B))*)ΠP(B))*]_{ii} ≠ 0,

then

lim_{k→∞} k⁻² D_i m(B,k) = [((ΠP(B) − (ΠP(B))*)ΠP(B))*]_{ii}. (10)

Proof. From the equality (7) with r = 0 and r = 1 we have

D_i m(B,k) − E_i m(B,k) = 2 Σ_{(n₁,n₂)∈K₁} [(P^{n₁} P(B) P^{n₂} P(B))*]_{ii} − Σ_{(n₁,n₂)∈K₂} [(P^{n₁} P(B))* (P^{n₂} P(B))*]_{ii}, (11)

where K₁ = {(n₁, n₂) : n₁, n₂ ≥ 0, n₁ + n₂ ≤ k−2} and K₂ = {(n₁, n₂) : n₁, n₂ = 0, k−1}.


We reduce the right-hand side of the last equality to a form convenient for the limit passage. For this purpose we express the terms in (11) with the help of the following equalities:

(P^{n₁}P(B) P^{n₂}P(B))* = (P^{n₁}P(B)(P^{n₂}P(B) − ΠP(B)))* + (P^{n₁}P(B))*(ΠP(B))* + ((P^{n₁}P(B) − (P^{n₁}P(B))*)ΠP(B))*

and

(P^{n₁}P(B))*(P^{n₂}P(B))* = (P^{n₁}P(B))*(P^{n₂}P(B) − ΠP(B))* + (P^{n₁}P(B))*(ΠP(B))*

(in the first equality, we used the equality (A*C)* = A*C* for arbitrary matrices A and C of the same order). Then (11) takes the form

D_i m(B,k) − E_i m(B,k) = 2 Σ_{(n₁,n₂)∈K₁} [(P^{n₁}P(B)(P^{n₂}P(B) − ΠP(B)))*]_{ii}
+ 2 Σ_{(n₁,n₂)∈K₁} [(P^{n₁}P(B))*(ΠP(B))*]_{ii}
+ 2 Σ_{(n₁,n₂)∈K₁} [((P^{n₁}P(B) − (P^{n₁}P(B))*)ΠP(B))*]_{ii}
− Σ_{(n₁,n₂)∈K₂} [(P^{n₁}P(B))*(P^{n₂}P(B) − ΠP(B))*]_{ii}
− Σ_{(n₁,n₂)∈K₂} [(P^{n₁}P(B))*(ΠP(B))*]_{ii}.

We denote by I_n(i,k), n = 1, 5, the nth sum on the right-hand side and consider

lim_{k→∞} k⁻¹(2(I₁(i,k) + I₂(i,k) + I₃(i,k)) − I₄(i,k) − I₅(i,k)).

For k ≥ 2 we have

I₁(i,k) = [(Σ_{n₁=0}^{k−2} P^{n₁}P(B) Σ_{n₂=0}^{k−2−n₁} (P^{n₂}P(B) − ΠP(B)))*]_{ii} = [(Σ_{n=0}^{k−2} A(n)C(k−2−n))*]_{ii},

where A(n) = P^n P(B) and C(n) = Σ_{m=0}^{n} (P^m P(B) − ΠP(B)). Since lim_{n→∞} A(n) = ΠP(B) and lim_{n→∞} C(n) = ((I − P + Π)⁻¹ − Π)P(B), we have

lim_{k→∞} k⁻¹ I₁(i,k) = [(ΠP(B)((I − P + Π)⁻¹ − Π)P(B))*]_{ii}

in view of Lemma 3. Similarly,

lim_{k→∞} k⁻¹ I₄(i,k) = [(ΠP(B))*(((I − P + Π)⁻¹ − Π)P(B))*]_{ii}

and

lim_{k→∞} k⁻¹(2I₂(i,k) − I₅(i,k)) = [(ΠP(B))*(((I − P + Π)⁻¹ − Π)P(B))*]_{ii} − [(ΠP(B))*(ΠP(B))*]_{ii}.


The above computation is valid for any regular Markov chain κ and any of its states i. Therefore, the study of the limit

lim_{k→∞} k⁻¹(2(I₁(i,k) + I₂(i,k) + I₃(i,k)) − I₄(i,k) − I₅(i,k))

is reduced to the study of the limit lim_{k→∞} k⁻¹ I₃(i,k).

We consider the sum

I₃(i,k) = Σ_{(n₁,n₂)∈K₁} [((P^{n₁}P(B) − (P^{n₁}P(B))*)ΠP(B))*]_{ii}.

Using the matrix equality (AC)* = (AC*)*, we write its summand a_n(i) (which depends on n = n₁ only) in the form

a_n(i) = [((P^n P(B) − (P^n P(B))*)(ΠP(B))*)*]_{ii}. (12)

We prove that a_n(i) = 0 under conditions 1) and 2) of Theorem 5, and then I₃(i,k) = 0.

Suppose that κ is a fully regular chain. We show that the matrices P^n P(B) − (P^n P(B))* and (ΠP(B))* commute. The equalities

((A − A*)C)* = (C(A − A*))* = (C(A − A*)*)* = (C(A* − A*))* = O* = O

(O is the zero matrix) for commuting matrices (A − A*) and C imply a_n(i) = 0 for any i. If the chain κ is fully regular, then all the rows of the limit matrix Π are the same [12, Chapter 12, Section 5]. Hence ΠP(B) also has identical rows and, consequently, (ΠP(B))* coincides up to a scalar factor with the identity matrix and commutes with any matrix of the corresponding order.

Suppose that κ is a regular chain, i is its essential state, and S(i) is the class of essential states of the chain containing i. Then (cf. (12))

[(P^n P(B)(ΠP(B))*)*]_{ii} = Σ_{k=1}^{N} [P^n P(B)]_{ik} [(ΠP(B))*]_{kk} = Σ_{k∈S(i)} [P^n P(B)]_{ik} [(ΠP(B))*]_{kk}
= [(ΠP(B))*]_{ii} Σ_{k∈S(i)} [P^n P(B)]_{ik} = [(ΠP(B))*]_{ii} [(P^n P(B))*]_{ii} = [((P^n P(B))*(ΠP(B))*)*]_{ii}

since [P^n P(B)]_{ik} = 0 for k ∉ S(i) (indeed, [P^n P(B)]_{ik} ≤ [P^{n+1}]_{ik} = 0 for k ∉ S(i)). Thus, a_n(i) = 0 and I₃(i,k) = 0. Letting n → ∞ in a_n(i) = 0, we obtain the equality (8).

We consider conditions 3) and 4) of Theorem 5. Let κ be a non-fully regular chain, and let i be its inessential state. We transform I₃(i,k) as follows:

I₃(i,k) = Σ_{(n₁,n₂)∈K₁} [((P^{n₁}P(B) − ΠP(B))ΠP(B))*]_{ii}
+ Σ_{(n₁,n₂)∈K₁} [((ΠP(B))* − (P^{n₁}P(B))*)(ΠP(B))*]_{ii}
+ 2⁻¹k(k−1)[((ΠP(B) − (ΠP(B))*)ΠP(B))*]_{ii}. (13)


We denote by G₁(i,k) and G₂(i,k) the first and second terms on the right-hand side of this equality. Since

Σ_{(n₁,n₂)∈K₁} (P^{n₁} − Π) = (k−1)(I − Π) + Σ_{l=1}^{k−2} Σ_{m=1}^{l} (P − Π)^m,

from Lemma 3 we have

lim_{k→∞} k⁻¹(G₁(i,k) + G₂(i,k)) = [(((I − P + Π)⁻¹ − Π)P(B)ΠP(B))*]_{ii} − [(((I − P + Π)⁻¹ − Π)P(B))*(ΠP(B))*]_{ii}. (14)

If [((ΠP(B) − (ΠP(B))*)ΠP(B))*]_{ii} ≠ 0 for the state i, then (13) and (14) imply

lim_{k→∞} k⁻² I₃(i,k) = 2⁻¹[((ΠP(B) − (ΠP(B))*)ΠP(B))*]_{ii}.

Consequently,

lim_{k→∞} k⁻² D_i m(B,k) = [((ΠP(B) − (ΠP(B))*)ΠP(B))*]_{ii} > 0. (15)

Thus, the equality (10) is established under condition 4).

We assume that an inessential state i satisfies (8) and, consequently (cf. (13) and (14)),

lim_{k→∞} k⁻¹ I₃(i,k) = [((I − P + Π)⁻¹P(B)ΠP(B))*]_{ii} − [((I − P + Π)⁻¹P(B))*(ΠP(B))*]_{ii}. (16)

Note that the condition (8) for an inessential state i implies a similar condition for any inessential state j accessible from the state i:

[((ΠP(B) − (ΠP(B))*)ΠP(B))*]_{jj} = 0. (17)

Indeed, in the opposite case, applying the above computations to the state j, we would obtain (cf. (15)) lim_{k→∞} k⁻² D_j m(B,k) > 0. Since the state j is accessible from the state i, the last inequality implies lim_{k→∞} k⁻² D_i m(B,k) > 0, which contradicts (16).

We show that, under the assumption (8), the right-hand side of (16) vanishes. We denote by S the set of essential states of the chain κ. As above, H is the set of inessential states of the chain κ, and S₁, S₂, . . . , S_d, d ≥ 2, is the partition of the set S into disjoint classes of communicating states; S(j) = S_m for j ∈ S_m. We also denote Π_kl = [Π]_kl. Considering the expression [(ΠP(B)ΠP(B))*]_{ii} on the left-hand side of (8), we find

[(ΠP(B)(ΠP(B))*)*]_{ii} = Σ_{k=1}^{N} [ΠP(B)]_{ik} [(ΠP(B))*]_{kk} = Σ_{k∈S} [ΠP(B)]_{ik} [(ΠP(B))*]_{kk} = Σ_{r=1}^{d} Σ_{k∈S_r} [ΠP(B)]_{ik} [(ΠP(B))*]_{kk} (18)

since [ΠP(B)]_{ik} ≤ [ΠP]_{ik} = Π_{ik} = 0 for i, k ∈ H.


On the right-hand side of (18), the expression [(ΠP(B))*]_{kk} is the same for all k ∈ S_r since

[(ΠP(B))*]_{kk} = Σ_{j=1}^{N} [ΠP(B)]_{kj} = Σ_{j=1}^{N} Σ_{l∈S_r} Π_{kl} [P(B)]_{lj}

and Π_{kl} = Π_{vl} for v ∈ S_r. We set

α_r = [(ΠP(B))*]_{kk},  k ∈ S_r. (19)

Then (18) implies

[(ΠP(B)(ΠP(B))*)*]_{ii} = Σ_{r=1}^{d} α_r Σ_{k∈S_r} [ΠP(B)]_{ik}. (20)

We consider the right-hand side of (20). Let v ∈ S_r be a fixed state. By Lemma 4,

Σ_{k∈S_r} [ΠP(B)]_{ik} = Σ_{k∈S_r} Σ_{j=1}^{N} Π_{ij}[P(B)]_{jk} = Σ_{k∈S_r} Σ_{j∈S_r} Π_{ij}[P(B)]_{jk} = Σ_{k∈S_r} Σ_{j∈S_r} p(i, S_r)Π_{jj}[P(B)]_{jk}
= p(i, S_r) Σ_{k∈S_r} Σ_{j∈S_r} Π_{vj}[P(B)]_{jk} = p(i, S_r) Σ_{k∈S_r} [ΠP(B)]_{vk} = p(i, S_r)[(ΠP(B))*]_{vv} = α_r p(i, S_r)

and, consequently,

Σ_{k∈S_r} [ΠP(B)]_{ik} = α_r p(i, S_r). (21)

Hence from (20) it follows that

[(ΠP(B)(ΠP(B))*)*]_{ii} = Σ_{r=1}^{d} (α_r)² p(i, S_r).

We again turn to the assumption (8) and consider the expression ([(ΠP(B))*]_{ii})² on the left-hand side of (8) (i is an inessential state). By (21), we have

[(ΠP(B))*]_{ii} = Σ_{k∈S} [ΠP(B)]_{ik} = Σ_{r=1}^{d} α_r p(i, S_r). (22)

Thus, the equality (8) can be written as

Σ_{r=1}^{d} (α_r)² p(i, S_r) = (Σ_{r=1}^{d} α_r p(i, S_r))².

We transform the left-hand side of this equality. By the equality Σ_{k=1}^{d} p(i, S_k) = 1, we have

Σ_{r=1}^{d} (α_r)² p(i, S_r) = Σ_{r=1}^{d} (α_r)² Σ_{k=1}^{d} p(i, S_r)p(i, S_k) = Σ_{r=1}^{d} (α_r)² (p²(i, S_r) + Σ_{k=1,d, k≠r} p(i, S_r)p(i, S_k))
= Σ_{r=1}^{d} (α_r)² p²(i, S_r) + Σ_{r=1}^{d} (α_r)² Σ_{k=r+1}^{d} p(i, S_r)p(i, S_k) + Σ_{k=1}^{d} Σ_{r=k+1}^{d} (α_r)² p(i, S_r)p(i, S_k)
= Σ_{r=1}^{d} (α_r)² p²(i, S_r) + Σ_{r=1}^{d−1} Σ_{k=r+1}^{d} p(i, S_r)p(i, S_k)((α_r)² + (α_k)²).

Therefore, the equality (8) can be written as

Σ_{r=1}^{d−1} Σ_{k=r+1}^{d} p(i, S_r)p(i, S_k)(α_r − α_k)² = 0. (23)

We consider a number n(i) ∈ 1, d such that p(i, S_{n(i)}) > 0 (such a number exists because Σ_{r=1}^{d} p(i, S_r) = 1). Then from (23) it follows that for any 1 ≤ r ≤ d, r ≠ n(i), at least one of the following equalities holds:

p(i, S_r) = 0,  α_r = α_{n(i)}. (24)

By (22), we have

[(ΠP(B))*]_{ii} = α_{n(i)}. (25)

Similarly, for any inessential state j accessible from the state i (cf. (17)) we have

[(ΠP(B))*]_{jj} = α_{n(j)}. (26)

If an inessential state j is accessible from a state i, then p(j, S_{n(j)}) > 0 implies p(i, S_{n(j)}) > 0 and, consequently (cf. (24)),

α_{n(j)} = α_{n(i)}. (27)

We consider the right-hand side of (16). We have

[((I − P + Π)⁻¹P(B)ΠP(B))*]_{ii} = [(P(B)(ΠP(B))* + Σ_{n=1}^{∞} (P^n − Π)P(B)(ΠP(B))*)*]_{ii}
= Σ_{k=1}^{N} [P(B)]_{ik} [(ΠP(B))*]_{kk} + Σ_{n=1}^{∞} Σ_{k=1}^{N} [(P^n − Π)P(B)]_{ik} [(ΠP(B))*]_{kk}.

We consider the terms on the right-hand side of the last equality. Let k ∈ H. If [P(B)]_{ik} > 0, then the state k is accessible from the state i. By (26) and (27), we have [(ΠP(B))*]_{kk} = α_{n(i)}. Similarly, [(P^n − Π)P(B)]_{ik} ≠ 0 implies that the state k is accessible from the state i (since [ΠP(B)]_{ik} = 0 for i, k ∈ H), and, consequently, [(ΠP(B))*]_{kk} = α_{n(i)}.

Let k ∈ S_r for some r ∈ 1, d. If [P(B)]_{ik} > 0, then p(i, S_r) > 0. Consequently, from (19) and (24) we find [(ΠP(B))*]_{kk} = α_r = α_{n(i)}. If [(P^n − Π)P(B)]_{ik} ≠ 0, then at least one of the following inequalities holds:

[P^n P(B)]_{ik} > 0,  [ΠP(B)]_{ik} > 0.

In the first case (since [P^{n+1}]_{ik} ≥ [P^n P(B)]_{ik}) and in the second case (since [ΠP(B)]_{ik} = lim_{n→∞} [P^n P(B)]_{ik}), we find p(i, S_r) > 0 and, consequently, [(ΠP(B))*]_{kk} = α_{n(i)}.


Thus, we have (cf. also (25))

[((I − P + Π)⁻¹P(B)ΠP(B))*]_{ii} = α_{n(i)}[((I − P + Π)⁻¹P(B))*]_{ii} = [(ΠP(B))*]_{ii} [((I − P + Π)⁻¹P(B))*]_{ii}.

Consequently, if an inessential state i of a non-fully regular chain κ satisfies (8), then the right-hand side of (16) vanishes.

Thus, in any of the following cases:

1) i is an arbitrary state of a fully regular chain,

2) i is an arbitrary essential state of a non-fully regular chain,

3) i is an inessential state of a non-fully regular chain such that (8) holds

we have

\[
\lim_{k\to\infty} k^{-1}\bigl(D_i m(B,k) - E_i m(B,k)\bigr)
= 2\bigl[\bigl(\Pi P(B)\bigl((I - P + \Pi)^{-1} - \Pi\bigr)P(B)\bigr)^{*}\bigr]_{ii}
- \bigl[(\Pi P(B))^{*}\bigl(\bigl((I - P + \Pi)^{-1} - \Pi\bigr)P(B)\bigr)^{*}\bigr]_{ii}
\]
\[
+ \bigl[(\Pi P(B))^{*}\bigl(\bigl((I - P + \Pi)^{-1} - \Pi\bigr)P(B)\bigr)^{*}\bigr]_{ii}
- \bigl[(\Pi P(B))^{*}(\Pi P(B))^{*}\bigr]_{ii}
\]
\[
= 2\bigl[\bigl(\Pi P(B)(I - P + \Pi)^{-1}P(B)\bigr)^{*}\bigr]_{ii}
- 3\bigl[(\Pi P(B))^{*}(\Pi P(B))^{*}\bigr]_{ii}.
\]

By Corollary 3, we have the equality (9). Theorem 5 is proved.

We note that for irreducible aperiodic chains (i.e., fully regular chains containing no inessential states) the equality (9) can be obtained from formula (0.4) in [6].
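The proofs above lean on standard properties of the fundamental matrix (I − P + Π)⁻¹, notably Π·T = O and P·T = T + (Π − I)P(B) for T = ((I − P + Π)⁻¹ − Π)P(B). These identities are easy to verify numerically; the following sketch is an illustration only (the 3-state chain and the set B are arbitrary example data, not taken from the paper):

```python
import numpy as np

# A small irreducible aperiodic chain (arbitrary example data).
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
N = P.shape[0]

# Pi = lim P^n: every row equals the stationary distribution.
Pi = np.linalg.matrix_power(P, 200)

# P(B) keeps only the transitions (k, l) that belong to B.
B = {(0, 1), (1, 2), (2, 2)}
PB = np.array([[P[k, l] if (k, l) in B else 0.0 for l in range(N)]
               for k in range(N)])

Z = np.linalg.inv(np.eye(N) - P + Pi)   # fundamental matrix (I - P + Pi)^{-1}
T = (Z - Pi) @ PB                       # the matrix T used in the proofs

assert np.allclose(Pi @ Z, Pi) and np.allclose(Z @ Pi, Pi)
assert np.allclose(Pi @ T, 0)                         # Pi . T = O
assert np.allclose(P @ T, T + (Pi - np.eye(N)) @ PB)  # P . T = T + (Pi - I)P(B)
print("fundamental-matrix identities verified")
```

The first pair of identities follows from ΠP = PΠ = Π² = Π; the other two are exactly the relations used in the computations of Theorems 5 and 6.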

Remark 2. From the proof of Theorem 5 it is easy to see that the condition (8) applied to an inessential state i is equivalent to the condition that for all essential states j accessible from the state i the numbers [(ΠP(B))∗]_{jj} have the same value. In turn, this implies that this common value coincides with the number [(ΠP(B))∗]_{ii}.

Under conditions 1) and 2) of Theorem 5, the relation (8) holds.

From the Chebyshev inequality and Theorems 4 and 5 we obtain the following assertion.

Corollary 4. Let a regular chain κ and its state i satisfy any of the conditions 1)–3) of Theorem 5. Then the sequence
\[
c_n\Bigl(\frac{m(B,n)}{n} - \bigl[(\Pi P(B))^{*}\bigr]_{ii}\Bigr),
\]
where c_n = o(√n), converges in probability P_i to zero.
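In particular (taking c_n ≡ 1), m(B,n)/n settles at [(ΠP(B))∗]_{ii} = Σ_{(k,l)∈B} Π_k p_{kl}. The following simulation is an illustration only — the chain P, the set B, the seed, and the trajectory length are arbitrary choices, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary irreducible aperiodic chain and transition set B (example data).
P = np.array([[0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4],
              [0.6, 0.1, 0.3]])
B = {(0, 1), (1, 2), (2, 0)}
N = P.shape[0]

# Limit value [(Pi P(B))*]_ii = sum over (k, l) in B of pi_k p_{kl}.
pi = np.linalg.matrix_power(P, 200)[0]        # stationary distribution
alpha = sum(pi[k] * P[k, l] for (k, l) in B)

# One long trajectory; m counts the B-transitions, i.e. m(B, n).
n = 200_000
state, m = 0, 0
for _ in range(n):
    nxt = rng.choice(N, p=P[state])
    m += (state, nxt) in B
    state = nxt

print(abs(m / n - alpha))   # small: m(B,n)/n is near its limit
```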

We denote by σ²_i(B) the right-hand side of (9) and find conditions under which σ²_i(B) > 0.

Theorem 6. Let κ be a regular Markov chain, and let i be its state satisfying (8). Then for the inequality σ²_i(B) > 0 to hold it is necessary (sufficient) that P(B ∩ K²) ≠ P(K²) and P(B ∩ K²) ≠ O for any (some) class K of essential states accessible from the state i, where O is the zero matrix.


Proof. We divide the proof into two steps. We begin with a fully regular chain κ all of whose states are essential and then consider the general case.

Step 1. Let κ be a fully regular chain such that all its states are essential. In this case, the necessity follows because for P(B) = P or P(B) = O we have P_i(m(B,n) = n, n ≥ 1) = 1 or P_i(m(B,n) = 0, n ≥ 1) = 1, respectively, for i = 1, N. Consequently, D_i m(B,n) = 0, n ≥ 1, i = 1, N. To prove the sufficiency, we write σ²_i(B) (with (8) taken into account) in the form

\[
\sigma_i^{2}(B) = \bigl[(\Pi P(B))^{*} - \bigl((\Pi P(B))^{*}\bigr)^{2}
+ 2\bigl(\Pi P(B)\bigl((I - P + \Pi)^{-1} - \Pi\bigr)P(B)\bigr)^{*}\bigr]_{ii}
\]

and transform the first two terms as follows:

\[
(\Pi P(B))^{*} - \bigl((\Pi P(B))^{*}\bigr)^{2}
= (\Pi P(B))^{*}\bigl(I - (\Pi P(B))^{*}\bigr)
= (\Pi P(B))^{*}\bigl(\Pi P - \Pi P(B)\bigr)^{*}
= (\Pi P(B))^{*}\bigl(\Pi P(B^{c})\bigr)^{*}
\]
\[
= (\Pi P(B))^{*}\bigl(\Pi P(B^{c})\bigr)^{*}\Bigl(\bigl(\Pi P(B^{c})\bigr)^{*} + \bigl(\Pi P(B)\bigr)^{*}\Bigr)
= (\Pi P(B))^{*}\Bigl(\bigl(\Pi P(B^{c})\bigr)^{*}\Bigr)^{2} + (\Pi P(B^{c}))^{*}\Bigl(\bigl(\Pi P(B)\bigr)^{*}\Bigr)^{2}.
\]

For j = 1, N we denote by Π_j the common value [Π]_{ij} > 0 (the same for all i) and set p_{ij} = [P]_{ij}, p_k = [(P(B))∗]_{kk}. Then

\[
\Bigl[(\Pi P(B))^{*}\bigl((\Pi P(B^{c}))^{*}\bigr)^{2} + (\Pi P(B^{c}))^{*}\bigl((\Pi P(B))^{*}\bigr)^{2}\Bigr]_{ii}
= \sum_{(k,l)\in B}\Pi_k p_{kl}\Bigl(1 - \sum_{d=1}^{N}\Pi_d p_d\Bigr)^{2}
+ \sum_{(k,l)\in B^{c}}\Pi_k p_{kl}\Bigl(\sum_{d=1}^{N}\Pi_d p_d\Bigr)^{2}
= \sum_{k,l=1}^{N}\Pi_k p_{kl}\,u_{kl}^{2},
\]
where
\[
u_{kl} =
\begin{cases}
\displaystyle 1 - \sum_{d=1}^{N}\Pi_d p_d, & (k,l) \in B,\\[6pt]
\displaystyle -\sum_{d=1}^{N}\Pi_d p_d, & (k,l) \in B^{c}.
\end{cases}
\]

We consider the matrix ΠP(B)((I − P + Π)⁻¹ − Π)P(B) in the representation of σ²_i(B). We denote by T = ‖t_{ij}‖_{i,j=1,N} the matrix ((I − P + Π)⁻¹ − Π)P(B) and set t_i = Σ_{j=1}^{N} t_{ij}. We note that Π·T = O and that (ΠP(B))∗ is a diagonal matrix with equal diagonal entries (consequently, (ΠP(B))∗ commutes with any matrix of the corresponding size). Then

\[
\bigl(\Pi P(B)T\bigr)^{*}
= \Bigl(\Pi P(B)\bigl((\Pi P(B))^{*} + (\Pi P(B^{c}))^{*}\bigr)T\Bigr)^{*}
= \bigl(\Pi P(B)T(\Pi P(B))^{*}\bigr)^{*} + \bigl(\Pi P(B)(\Pi P(B^{c}))^{*}T\bigr)^{*}
\]
\[
= \bigl(\Pi(P - P(B^{c}))T(\Pi P(B))^{*}\bigr)^{*} + \bigl(\Pi P(B)(\Pi P(B^{c}))^{*}T\bigr)^{*}
= -\bigl(\Pi P(B^{c})(\Pi P(B))^{*}T\bigr)^{*} + \bigl(\Pi P(B)(\Pi P(B^{c}))^{*}T\bigr)^{*}.
\]

For the first term on the right-hand side of the last equality we have
\[
\bigl[\bigl(\Pi P(B^{c})(\Pi P(B))^{*}T\bigr)^{*}\bigr]_{ii}
= \bigl[\bigl(\Pi P(B^{c})(\Pi P(B))^{*}T^{*}\bigr)^{*}\bigr]_{ii}
= \sum_{l=1}^{N}\bigl[\Pi P(B^{c})\bigr]_{il}\bigl[(\Pi P(B))^{*}\bigr]_{ll}\bigl[T^{*}\bigr]_{ll}
= \sum_{l=1}^{N}\bigl[\Pi P(B^{c})\bigr]_{il}\sum_{r=1}^{N}\Pi_r p_r t_l
\]
\[
= \sum_{r=1}^{N}\Pi_r p_r\sum_{l=1}^{N}\sum_{k:(k,l)\in B^{c}}\Pi_k p_{kl} t_l
= \sum_{r=1}^{N}\Pi_r p_r\sum_{(k,l)\in B^{c}}\Pi_k p_{kl} t_l
= -\sum_{(k,l)\in B^{c}}\Pi_k p_{kl} t_l u_{kl}.
\]

Similarly, for the second term we find
\[
\bigl[\bigl(\Pi P(B)(\Pi P(B^{c}))^{*}T\bigr)^{*}\bigr]_{ii}
= \sum_{(k,l)\in B}\Pi_k p_{kl} t_l u_{kl}.
\]

Thus,
\[
\sigma_i^{2}(B) = \sum_{k,l=1}^{N}\Pi_k p_{kl} u_{kl}^{2} + 2\sum_{k,l=1}^{N}\Pi_k p_{kl} u_{kl} t_l.
\]

As in [2], we transform the right-hand side of the last equality as follows:
\[
\sigma_i^{2}(B) = \sum_{k,l=1}^{N}\Pi_k p_{kl}(u_{kl} + t_l)^{2} - \sum_{k,l=1}^{N}\Pi_k p_{kl} t_l^{2}
= \sum_{k,l=1}^{N}\Pi_k p_{kl}(u_{kl} + t_l - t_k + t_k)^{2} - \sum_{k,l=1}^{N}\Pi_k p_{kl} t_l^{2}
\]
\[
= \sum_{k,l=1}^{N}\Pi_k p_{kl}(u_{kl} + t_l - t_k)^{2}
+ 2\sum_{k,l=1}^{N}\Pi_k p_{kl}(u_{kl} + t_l - t_k)t_k
+ \sum_{k,l=1}^{N}\Pi_k p_{kl} t_k^{2} - \sum_{k,l=1}^{N}\Pi_k p_{kl} t_l^{2}
\]
\[
= \sum_{k,l=1}^{N}\Pi_k p_{kl}(u_{kl} + t_l - t_k)^{2}
+ 2\sum_{k=1}^{N}\Bigl(\Pi_k t_k\sum_{l=1}^{N}p_{kl}(u_{kl} + t_l - t_k)\Bigr)
\]

since
\[
\sum_{k,l=1}^{N}\Pi_k p_{kl} t_k^{2} = \sum_{k=1}^{N}\Pi_k t_k^{2},
\qquad
\sum_{k,l=1}^{N}\Pi_k p_{kl} t_l^{2} = \sum_{l=1}^{N}t_l^{2}\sum_{k=1}^{N}\Pi_k p_{kl} = \sum_{l=1}^{N}t_l^{2}\,\Pi_l.
\]

From the equality P·T = T + (Π − I)P(B) we have
\[
\sum_{l=1}^{N}p_{kl}(u_{kl} + t_l - t_k)
= \sum_{l=1}^{N}p_{kl}u_{kl} + \sum_{l=1}^{N}p_{kl}t_l - t_k
\]
\[
= \sum_{l:(k,l)\in B}p_{kl}\Bigl(1 - \sum_{d=1}^{N}\Pi_d p_d\Bigr)
+ \sum_{l:(k,l)\in B^{c}}p_{kl}\Bigl(-\sum_{d=1}^{N}\Pi_d p_d\Bigr)
+ \bigl[(P\cdot T)^{*}\bigr]_{kk} - t_k
\]
\[
= p_k\Bigl(1 - \sum_{d=1}^{N}\Pi_d p_d\Bigr) - (1 - p_k)\sum_{d=1}^{N}\Pi_d p_d
+ t_k + \sum_{d=1}^{N}\Pi_d p_d - p_k - t_k = 0.
\]

Thus,
\[
\sigma_i^{2}(B) = \sum_{k,l=1}^{N}\Pi_k p_{kl}(u_{kl} + t_l - t_k)^{2}.
\]


To prove the sufficiency, we assume that σ²_i(B) = 0. Then for any pair of states (k, l) such that p_{kl} > 0 we have u_{kl} + t_l − t_k = 0. Since the chain κ considered at this step is fully regular and all its states communicate, there is a positive integer r such that all powers Pⁿ, n ≥ r, consist of positive numbers. In particular, [P^r]_{jj} and [P^{r+1}]_{jj} are positive for an arbitrary state j of the chain. Hence there are two cyclic sequences of states j, i_1, …, i_{r−1}, j and j, j_1, …, j_r, j such that the transition probabilities p_{ji_1}, …, p_{i_{r−1}j} and p_{jj_1}, …, p_{j_r j} are positive. Hence u_{ji_1} + t_{i_1} − t_j = 0, …, u_{i_{r−1}j} + t_j − t_{i_{r−1}} = 0 because σ²_i(B) = 0. Adding these r equalities (the terms t telescope along the cycle), we see that
\[
\sum_{d=1}^{N}\Pi_d p_d = r^{-1}\,n_1
\]
for some integer n_1 such that 0 ≤ n_1 ≤ r. Similarly, considering the second sequence of states, we obtain the equality
\[
\sum_{d=1}^{N}\Pi_d p_d = (r+1)^{-1}\,n_2
\]
for some integer n_2 such that 0 ≤ n_2 ≤ r + 1. From the equality r⁻¹n_1 = (r+1)⁻¹n_2 it follows that either n_1 = n_2 = 0 or n_1 = n_2 − 1 = r, i.e., P(B) = O or P(B) = P. Thus, the sufficiency is proved.

Step 2. Let κ be a regular Markov chain, and let i be its state satisfying (8). If i is an essential state, then the only class of essential states accessible from the state i is the class containing i, so we can argue in the same way as at Step 1. Let i be an inessential state, and let S^i_1, …, S^i_{r(i)} be all the classes of essential states accessible from the state i, so that
\[
\sum_{k=1}^{r(i)} p(i, S_k^{i}) = 1
\]
(recall that p(i, S^i_k) is the probability that the chain starting from the state i reaches the class S^i_k). Let i_k ∈ S^i_k, k = 1, r(i), be arbitrary states of the accessible classes. We transform the right-hand side of (9).

By the remarks to Theorem 5, we have
\[
\bigl[(\Pi P(B))^{*}\bigr]_{ii} = \sum_{k=1}^{r(i)} p(i, S_k^{i})\bigl[(\Pi P(B))^{*}\bigr]_{i_k i_k},
\qquad
\Bigl(\bigl[(\Pi P(B))^{*}\bigr]_{ii}\Bigr)^{2} = \sum_{k=1}^{r(i)} p(i, S_k^{i})\Bigl(\bigl[(\Pi P(B))^{*}\bigr]_{i_k i_k}\Bigr)^{2}.
\]

By Lemma 4,
\[
\bigl[\bigl(\Pi P(B)(I - P + \Pi)^{-1}P(B)\bigr)^{*}\bigr]_{ii}
= \bigl[\bigl(\Pi\bigl(P(B)(I - P + \Pi)^{-1}P(B)\bigr)^{*}\bigr)^{*}\bigr]_{ii}
= \sum_{j=1}^{N}[\Pi]_{ij}\bigl[\bigl(P(B)(I - P + \Pi)^{-1}P(B)\bigr)^{*}\bigr]_{jj}
\]
\[
= \sum_{k=1}^{r(i)}\sum_{j\in S_k^{i}} p(i, S_k^{i})\,[\Pi]_{i_k j}\bigl[\bigl(P(B)(I - P + \Pi)^{-1}P(B)\bigr)^{*}\bigr]_{jj}
= \sum_{k=1}^{r(i)}\sum_{j=1}^{N} p(i, S_k^{i})\,[\Pi]_{i_k j}\bigl[\bigl(P(B)(I - P + \Pi)^{-1}P(B)\bigr)^{*}\bigr]_{jj}
\]
\[
= \sum_{k=1}^{r(i)} p(i, S_k^{i})\bigl[\bigl(\Pi P(B)(I - P + \Pi)^{-1}P(B)\bigr)^{*}\bigr]_{i_k i_k}
\]
since [Π]_{ij} = 0 for every j not belonging to the classes S^i_1, …, S^i_{r(i)}, [Π]_{ij} = p(i, S^i_k)[Π]_{i_k j} for j ∈ S^i_k, and [Π]_{i_k j} = 0 for j ∉ S^i_k. By (9), we have

\[
\sigma_i^{2}(B) = \sum_{k=1}^{r(i)} p(i, S_k^{i})\,\sigma_{i_k}^{2}(B)
= \sum_{k=1}^{r(i)} p(i, S_k^{i})\,\sigma_{i_k}^{2}\bigl(B \cap (S_k^{i})^{2}\bigr). \tag{28}
\]

To prove the necessity, we assume that σ²_i(B) > 0, which implies σ²_{i_v}(B ∩ (S^i_v)²) > 0 for some v (1 ≤ v ≤ r(i)). Hence (cf. Step 1) P(B ∩ (S^i_v)²) ≠ P((S^i_v)²) and P(B ∩ (S^i_v)²) ≠ O or, equivalently, [(ΠP(B ∩ (S^i_v)²))∗]_{i_v i_v} ∈ (0, 1). Since
\[
\bigl[\bigl(\Pi P\bigl(B \cap (S_v^{i})^{2}\bigr)\bigr)^{*}\bigr]_{i_v i_v}
= \bigl[\bigl(\Pi P\bigl(B \cap (S_m^{i})^{2}\bigr)\bigr)^{*}\bigr]_{i_m i_m},
\qquad m = 1, r(i),
\]
in view of the remark to Theorem 5, we conclude that P(B ∩ (S^i_m)²) ≠ P((S^i_m)²) and P(B ∩ (S^i_m)²) ≠ O for m = 1, r(i). Thus, the necessity is proved. We note that we have simultaneously proved that the inequality σ²_i(B) > 0 implies σ²_{i_k}(B) > 0 for k = 1, r(i).

To prove the sufficiency, we assume that for some class S^i_m, m ∈ 1, r(i), we have P(B ∩ (S^i_m)²) ≠ P((S^i_m)²) and P(B ∩ (S^i_m)²) ≠ O. Then (cf. Step 1) σ²_{i_m}(B) > 0, which implies σ²_i(B) > 0 in view of (28). Theorem 6 is proved.
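Step 1 yields two expressions for σ²_i(B): the matrix form (9) and the quadratic form Σ_{k,l} Π_k p_{kl}(u_{kl} + t_l − t_k)². The sketch below checks them against each other and illustrates the dichotomy of Theorem 6 on an arbitrary 3-state example (an illustration only; A∗ is read here as the diagonal matrix of row sums of A, the convention consistent with the computations above):

```python
import numpy as np

def star(A):
    # A* : the diagonal matrix built from the row sums of A.
    return np.diag(A.sum(axis=1))

def sigma2(P, B):
    """sigma_i^2(B): matrix form (9) and the quadratic form of Step 1."""
    N = P.shape[0]
    Pi = np.linalg.matrix_power(P, 200)            # limit matrix
    PB = np.where([[(k, l) in B for l in range(N)] for k in range(N)], P, 0.0)
    Z = np.linalg.inv(np.eye(N) - P + Pi)          # fundamental matrix
    M = star(Pi @ PB)                              # (Pi P(B))*
    v9 = (M - M @ M + 2 * star(Pi @ PB @ (Z - Pi) @ PB))[0, 0]

    pi = Pi[0]                                     # stationary distribution
    alpha = M[0, 0]                                # sum over (k,l) in B of pi_k p_{kl}
    t = ((Z - Pi) @ PB).sum(axis=1)                # row sums t_k of T
    qf = sum(pi[k] * P[k, l] * (((k, l) in B) - alpha + t[l] - t[k]) ** 2
             for k in range(N) for l in range(N))
    return v9, qf

P = np.array([[0.2, 0.5, 0.3],                     # arbitrary example chain
              [0.3, 0.3, 0.4],
              [0.6, 0.1, 0.3]])
allB = {(k, l) for k in range(3) for l in range(3)}

v9, qf = sigma2(P, {(0, 1), (1, 2), (2, 0)})
assert abs(v9 - qf) < 1e-8 and v9 > 0              # nontrivial B: positive variance
assert abs(sigma2(P, set())[0]) < 1e-12            # P(B) = O  gives sigma^2 = 0
assert abs(sigma2(P, allB)[0]) < 1e-8              # P(B) = P  gives sigma^2 = 0
print("formula (9) agrees with the quadratic form")
```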

Theorem 7. Suppose that i is an inessential state of a regular chain κ satisfying (8) and σ²_i(B) > 0, and let S^i_1, …, S^i_{r(i)} be all the classes of essential states accessible from the state i (i.e., p(i, S^i_m) > 0, m = 1, r(i)). Then
\[
P_i\Bigl(\frac{m(B,n) - n\bigl[(\Pi P(B))^{*}\bigr]_{ii}}{\sigma_i(B)\sqrt{n}} < x\Bigr)
\to \sum_{m=1}^{r(i)} p(i, S_m^{i})\,\Phi\Bigl(\frac{\sigma_i(B)}{\sigma_{m,i}(B)}\,x\Bigr)
\]
as n → ∞, where σ²_{m,i}(B) is the common value of σ²_j(B), j ∈ S^i_m, and Φ(x) is the standard normal distribution function.

Proof. We introduce the notation
\[
m_i(B,n) = \frac{m(B,n) - n\bigl[(\Pi P(B))^{*}\bigr]_{ii}}{\sigma_i(B)\sqrt{n}},
\]
let c be an arbitrary positive integer constant, and let S^+(i) = ⋃_{m=1}^{r(i)} S^i_m be the set of essential states accessible from the state i. Then

\[
P_i\bigl(m_i(B,n) < x\bigr) = P_i\bigl(m_i(B,n) < x,\ s(B \setminus (H \times D)) > c\bigr)
+ \sum_{k=1}^{c}\sum_{j\in S^{+}(i)} P_i\bigl(m_i(B,n) < x,\ s(B \setminus (H \times D)) = k,\ \kappa(k) = j\bigr). \tag{29}
\]


We consider the second term on the right-hand side of (29). For n > c ≥ k and j ∈ S^+(i) we have
\[
P_i\bigl(m_i(B,n) < x,\ s(B \setminus (H \times D)) = k,\ \kappa(k) = j\bigr)
= \int_{-\sqrt{c}/\sigma_i(B)}^{\sqrt{c}/\sigma_i(B)}
P_i\bigl(m_i(B,k)\sqrt{kn^{-1}} \in dy,\ s(B \setminus (H \times D)) = k,\ \kappa(k) = j\bigr)\,
P_j\bigl(m_i(B,n-k)\sqrt{(n-k)n^{-1}} < x - y\bigr) \tag{30}
\]
since
\[
\bigl|m_i(B,k)\sqrt{kn^{-1}}\bigr| \le \frac{k}{\sigma_i(B)\sqrt{n}} \le \frac{\sqrt{cn}}{\sigma_i(B)\sqrt{n}} = \frac{\sqrt{c}}{\sigma_i(B)}.
\]

From the results of [4] it follows that for an essential state j we have P_j(m_j(B,n) < x) → Φ(x) as n → ∞. By the remark to Theorem 5, [(ΠP(B))∗]_{ii} = [(ΠP(B))∗]_{jj} for j ∈ S^+(i). Therefore,
\[
P_j\bigl(m_i(B,n-k)\sqrt{(n-k)n^{-1}} < x\bigr)
= P_j\bigl(m_j(B,n-k)\sqrt{(n-k)n^{-1}}\,\sigma_j(B)\sigma_i^{-1}(B) < x\bigr)
\to \Phi\bigl(\sigma_i(B)\sigma_j^{-1}(B)\,x\bigr)
\]
as n → ∞. Since Φ(x) is continuous, the convergence is uniform on any compact set. Therefore,

\[
P_j\bigl(m_i(B,n-k)\sqrt{(n-k)n^{-1}} < x - y\bigr) \to \Phi\bigl((x-y)\,\sigma_i(B)\sigma_j^{-1}(B)\bigr)
\]
as n → ∞ uniformly in y, |y| ≤ √c·(σ_i(B))⁻¹. Then for the right-hand side of (30) we have

\[
\int_{-\sqrt{c}/\sigma_i(B)}^{\sqrt{c}/\sigma_i(B)}
P_i\bigl(m_i(B,k)\sqrt{kn^{-1}} \in dy,\ s(B \setminus (H \times D)) = k,\ \kappa(k) = j\bigr)
\Bigl\{P_j\bigl(m_i(B,n-k)\sqrt{(n-k)n^{-1}} < x - y\bigr) - \Phi\bigl((x-y)\sigma_i(B)\sigma_j^{-1}(B)\bigr)\Bigr\}
\]
\[
+ \int_{-\sqrt{c}/\sigma_i(B)}^{\sqrt{c}/\sigma_i(B)}
P_i\bigl(m_i(B,k)\sqrt{kn^{-1}} \in dy,\ s(B \setminus (H \times D)) = k,\ \kappa(k) = j\bigr)\,
\Phi\bigl((x-y)\sigma_i(B)\sigma_j^{-1}(B)\bigr)
\to P_i\bigl(s(B \setminus (H \times D)) = k,\ \kappa(k) = j\bigr)\,\Phi\bigl(x\,\sigma_i(B)\sigma_j^{-1}(B)\bigr)
\]
as n → ∞. Passing to the limit in (30) as n → ∞, we find
\[
\lim_{n\to\infty} P_i\bigl(m_i(B,n) < x,\ s(B \setminus (H \times D)) = k,\ \kappa(k) = j\bigr)
= P_i\bigl(s(B \setminus (H \times D)) = k,\ \kappa(k) = j\bigr)\,\Phi\bigl(x\,\sigma_i(B)\sigma_j^{-1}(B)\bigr) \tag{31}
\]

for 1 ≤ k ≤ c, j ∈ S^+(i). By (29),
\[
\varlimsup_{n\to\infty} P_i\bigl(m_i(B,n) < x\bigr) \le P_i\bigl(s(B \setminus (H \times D)) > c\bigr)
+ \sum_{k=1}^{c}\sum_{j\in S^{+}(i)} P_i\bigl(s(B \setminus (H \times D)) = k,\ \kappa(k) = j\bigr)\,\Phi\bigl(x\,\sigma_i(B)\sigma_j^{-1}(B)\bigr).
\]


The quantity s(B \ (H × D)) is P_i-almost surely finite. Therefore, passing to the limit in the last inequality as c → ∞, we find
\[
\varlimsup_{n\to\infty} P_i\bigl(m_i(B,n) < x\bigr)
\le \sum_{j\in S^{+}(i)} P_i\bigl(\kappa(s(B \setminus (H \times D))) = j\bigr)\,\Phi\bigl(x\,\sigma_i(B)\sigma_j^{-1}(B)\bigr)
= \sum_{m=1}^{r(i)} p(i, S_m^{i})\,\Phi\bigl(x\,\sigma_i(B)\sigma_{m,i}^{-1}(B)\bigr).
\]

On the other hand, from (29) and (31) it follows that
\[
\varliminf_{n\to\infty} P_i\bigl(m_i(B,n) < x\bigr) \ge \sum_{m=1}^{r(i)} p(i, S_m^{i})\,\Phi\bigl(x\,\sigma_i(B)\sigma_{m,i}^{-1}(B)\bigr).
\]
Theorem 7 is proved.

Remark 3. If the values σ²_{m,i}(B), m = 1, r(i), are the same, then their common value is equal to σ²_i(B) (cf. (28)) and, consequently,
\[
P_i\Bigl(\frac{m(B,n) - n\bigl[(\Pi P(B))^{*}\bigr]_{ii}}{\sigma_i(B)\sqrt{n}} < x\Bigr) \to \Phi(x), \qquad n \to \infty.
\]
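The normalization in Remark 3 can be illustrated by simulation: for an irreducible aperiodic chain, the variance of (m(B,n) − nα)/√n should approach σ²_i(B). The following sketch is illustrative only (the chain, B, seed, and sample sizes are arbitrary choices, and A∗ is read as the diagonal matrix of row sums of A); it compares a Monte Carlo estimate of that variance with the value given by formula (9):

```python
import numpy as np

rng = np.random.default_rng(1)

P = np.array([[0.2, 0.5, 0.3],        # arbitrary irreducible aperiodic chain
              [0.3, 0.3, 0.4],
              [0.6, 0.1, 0.3]])
B = {(0, 1), (1, 2), (2, 0)}
N = P.shape[0]

# sigma_i^2(B) from formula (9).
Pi = np.linalg.matrix_power(P, 200)
PB = np.where([[(k, l) in B for l in range(N)] for k in range(N)], P, 0.0)
Z = np.linalg.inv(np.eye(N) - P + Pi)
M = np.diag((Pi @ PB).sum(axis=1))                 # (Pi P(B))*
alpha = M[0, 0]
sigma2 = (M - M @ M + 2 * np.diag((Pi @ PB @ (Z - Pi) @ PB).sum(axis=1)))[0, 0]

# R independent trajectories of length n, advanced in parallel.
R, n = 3000, 3000
cum = P.cumsum(axis=1)
cum[:, -1] = 1.0                                   # guard against rounding
Bmask = np.zeros((N, N), dtype=bool)
for (k, l) in B:
    Bmask[k, l] = True
states = np.zeros(R, dtype=int)                    # all trajectories start at 0
m = np.zeros(R)
for _ in range(n):
    nxt = (rng.random(R)[:, None] > cum[states]).sum(axis=1)
    m += Bmask[states, nxt]                        # count B-transitions
    states = nxt
sample_var = ((m - n * alpha) / np.sqrt(n)).var()
print(sample_var, sigma2)                          # the two values are close
```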

Acknowledgments

The author thanks the reviewer for his comments.

References

1. A. N. Kolmogorov, "A local limit theorem for classical Markov chains" [in Russian], Izv. Akad. Nauk SSSR, Ser. Mat. 13, No. 4, 281–300 (1949).

2. V. I. Romanovskii, Discrete Markov Chains [in Russian], Gostekhizdat, Moscow etc. (1949).

3. S. V. Nagaev, "On several limit theorems for homogeneous Markov chains" [in Russian], Teor. Veroyatn. Primen. 2, No. 4, 389–416 (1957).

4. I. S. Volkov, "On the distribution of sums of random variables defined on a homogeneous Markov chain with a finite number of states" [in Russian], Teor. Veroyatn. Primen. 3, No. 4, 413–429 (1958).

5. J. Keilson and D. M. G. Wishart, "A central limit theorem for processes defined on a finite Markov chain," Proc. Camb. Philos. Soc. 60, 547–567 (1964).

6. J. Keilson and D. M. G. Wishart, "Addenda to processes defined on a finite Markov chain," Proc. Camb. Philos. Soc. 63, 187–193 (1967).

7. K. L. Chung, Homogeneous Markov Chains [in Russian], Mir, Moscow (1964).

8. J. G. Kemeny and J. L. Snell, Finite Markov Chains, Springer, New York etc. (1983).

9. N. V. Smirnov, O. V. Sarmanov, and V. K. Zakharov, "A local limit theorem for the number of transitions in a Markov chain and its applications" [in Russian], Dokl. Akad. Nauk SSSR 167, No. 6, 1238–1241 (1966).

10. V. K. Zakharov and O. V. Sarmanov, "Distribution law for the number of series in a homogeneous Markov chain" [in Russian], Dokl. Akad. Nauk SSSR 179, No. 3, 526–528 (1968).

11. L. Ya. Savel'ev and S. V. Balakin, "The joint distribution of the number of ones and the number of 1-runs in binary Markov sequences" [in Russian], Diskretn. Mat. 16, No. 3, 43–62 (2004); English transl.: Discrete Math. Appl. 14, No. 4, 353–372 (2004).

12. A. A. Borovkov, Probability Theory [in Russian], Nauka, Moscow (1986); English transl.: Gordon and Breach, Abingdon, Oxon (1988).

13. V. S. Lugavov, "On functionals on the Markov chain transitions" [in Russian], In: IV International Conference, Novosibirsk, 2006, pp. 22–23.

14. V. S. Lugavov and B. A. Rogozin, Renewal Theory and Factorization Identities for Walks on a Markov Chain [in Russian], Dep. VINITI 06.07.88, No. 5425-B88.

15. F. R. Gantmacher, The Theory of Matrices [in Russian], Nauka, Moscow (1966); English transl.: AMS Chelsea Publ., Providence, RI (1998).

Submitted on November 13, 2011
