
APPLIED STOCHASTIC PROCESSES

LECTURES 3/4

INTRODUCTION TO THE THEORY OF MARKOV PROCESSES

Grigorios A. Pavliotis
Department of Mathematics

Imperial College London, UK

22/10/2007 and 23/10/2007


MARKOV STOCHASTIC PROCESSES


Definition of a Markov Process

• Let (Ω,F) be a measurable space and T an ordered set. Let X = Xt(ω) be a stochastic process from the sample space (Ω,F) to the state space (E,G). It is a function of two variables, t ∈ T and ω ∈ Ω.

• For a fixed ω ∈ Ω the function Xt(ω), t ∈ T is the sample path of the process X associated with ω.

• Let K be a collection of subsets of Ω. The smallest σ–algebra on Ω which contains K is denoted by σ(K) and is called the σ–algebra generated by K.

• Let Xt : Ω → E, t ∈ T. The smallest σ–algebra σ(Xt, t ∈ T), such that the family of mappings Xt, t ∈ T is a stochastic process with sample space (Ω, σ(Xt, t ∈ T)) and state space (E,G), is called the σ–algebra generated by Xt, t ∈ T.


Definition of a Markov Process

• A filtration on (Ω,F) is a nondecreasing family Ft, t ∈ T of sub–σ–algebras of F: Fs ⊆ Ft ⊆ F for s ≤ t.

• We set F∞ = σ(∪t∈T Ft). The filtration generated by Xt, where Xt is a stochastic process, is

FXt := σ(Xs; s ≤ t).

• We say that a stochastic process Xt, t ∈ T is adapted to the filtration Ft, t ∈ T if for all t ∈ T, Xt is an Ft–measurable random variable.

Definition 1 Let Xt be a stochastic process defined on a probability space (Ω,F, µ) with values in E and let FXt be the filtration generated by Xt. Then Xt is a Markov process if

P(Xt ∈ Γ|FXs) = P(Xt ∈ Γ|Xs) (1)

for all t, s ∈ T with t ≥ s, and Γ ∈ B(E).


Definition of a Markov Process

• The filtration FXs is generated by events of the form

{ω | Xs1 ∈ B1, Xs2 ∈ B2, . . . , Xsn ∈ Bn}, with 0 ≤ s1 < s2 < · · · < sn ≤ s and Bi ∈ B(E). The definition of a Markov process is thus equivalent to the hierarchy of equations

P(Xt ∈ Γ|Xt1 , Xt2 , . . . , Xtn) = P(Xt ∈ Γ|Xtn) a.s.

for n ≥ 1, 0 ≤ t1 < t2 < · · · < tn ≤ t and Γ ∈ B(E).


Definition of a Markov Process

• Roughly speaking, the statistics of Xt for t ≥ s are completely determined once Xs is known; information about Xt for t < s is superfluous. In other words: a Markov process has no memory. More precisely: when a Markov process is conditioned on the present state, there is no memory of the past. The past and future of a Markov process are statistically independent when the present is known.

• A typical example of a Markov process is the random walk: in order to find the position x(t + 1) of the random walker at time t + 1 it is enough to know its position x(t) at time t: how it got to x(t) is irrelevant (see the sketch below).

• A non-Markovian process Xt can be described through a Markovian one Yt by enlarging the state space: the additional variables that we introduce account for the memory in Xt. This “Markovianization” trick is very useful since there are many more tools for analyzing Markovian processes.
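The following sketch (my own illustration, not from the slides; it assumes numpy) makes both bullets concrete: the random walk is advanced using only its current position, and a walk whose increment depends on the two most recent positions becomes Markovian once the state is enlarged to the pair Yt = (Xt, Xt−1).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simple random walk: x(t+1) is built from x(t) alone -- the Markov property.
def random_walk(n_steps, x0=0):
    x, path = x0, [x0]
    for _ in range(n_steps):
        x = x + rng.choice([-1, 1])          # only the current position is needed
        path.append(x)
    return np.array(path)

# A walk with one step of memory: the next value uses X(t) and X(t-1),
# so X on its own is not Markovian.
def walk_with_memory(n_steps, a=0.5, b=0.3):
    x_prev, x, path = 0.0, 0.0, [0.0, 0.0]
    for _ in range(n_steps):
        x_prev, x = x, a * x + b * x_prev + rng.normal()
        path.append(x)
    return np.array(path)

# "Markovianization": enlarge the state to Y(t) = (X(t), X(t-1));
# Y(t+1) is computed from Y(t) alone, so Y is a Markov process.
def walk_with_memory_markovian(n_steps, a=0.5, b=0.3):
    y = np.array([0.0, 0.0])                 # y = (current value, previous value)
    path = [y.copy()]
    for _ in range(n_steps):
        y = np.array([a * y[0] + b * y[1] + rng.normal(), y[0]])
        path.append(y.copy())
    return np.array(path)

print(random_walk(10))
print(walk_with_memory(5))
print(walk_with_memory_markovian(5))
```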


The Chapman-Kolmogorov Equation

• With a Markov process Xt we can associate a function P : T × T × E × B(E) → R+ defined through the relation

P[Xt ∈ Γ|FXs] = P(s, t, Xs, Γ),

for all t, s ∈ T with t ≥ s and all Γ ∈ B(E).

• Assume that Xs = x. Since P[Xt ∈ Γ|FXs] = P[Xt ∈ Γ|Xs] we can write

P(Γ, t|x, s) = P[Xt ∈ Γ|Xs = x].

• The transition function P(Γ, t|x, s) is (for fixed t, x, s) a probability measure on E with P(E, t|x, s) = 1; it is B(E)–measurable in x (for fixed t, s, Γ) and satisfies the Chapman–Kolmogorov equation

P(Γ, t|x, s) = ∫E P(Γ, t|y, u) P(dy, u|x, s) (2)

for all x ∈ E, Γ ∈ B(E) and s, u, t ∈ T with s ≤ u ≤ t.


The Chapman-Kolmogorov Equation

• The derivation of the Chapman-Kolmogorov equation is based on the assumption of Markovianity and on properties of the conditional probability:

i) Let (Ω,F, µ) be a probability space, X a random variable from (Ω,F, µ) to (E,G) and let F1 ⊂ F2 ⊂ F. Then

E(E(X|F2)|F1) = E(E(X|F1)|F2) = E(X|F1). (3)

ii) Given G ⊂ F we define the function PX(B|G) = P(X ∈ B|G) for B ∈ F. Assume that f is such that E(f(X)) < ∞. Then

E(f(X)|G) = ∫R f(x) PX(dx|G). (4)


The Chapman-Kolmogorov Equation

Now we use the Markov property, together with equations (3) and (4) and the fact that s < u ⇒ FXs ⊂ FXu, to calculate:

P(Γ, t|x, s) := P(Xt ∈ Γ|Xs = x) = P(Xt ∈ Γ|FXs)
= E(IΓ(Xt)|FXs) = E(E(IΓ(Xt)|FXs)|FXu)
= E(E(IΓ(Xt)|FXu)|FXs) = E(P(Xt ∈ Γ|Xu)|FXs)
= E(P(Xt ∈ Γ|Xu = y)|Xs = x)
= ∫R P(Γ, t|Xu = y) P(dy, u|Xs = x)
=: ∫R P(Γ, t|y, u) P(dy, u|x, s).

IΓ(·) denotes the indicator function of the set Γ. We have also set E = R.


The Chapman-Kolmogorov Equation

• The CK equation is an integral equation and is the fundamental equation in the theory of Markov processes. Under additional assumptions we will derive from it the Fokker-Planck PDE, which is the fundamental equation in the theory of diffusion processes, and will be the main object of study in this course.

• A Markov process is homogeneous if

P(Γ, t|Xs = x) := P(s, t, x, Γ) = P(0, t − s, x, Γ).

• We set P(0, t, ·, ·) = P(t, ·, ·). The Chapman–Kolmogorov (CK) equation becomes

P(t + s, x, Γ) = ∫E P(s, x, dz) P(t, z, Γ). (5)


The Chapman-Kolmogorov Equation

• Let Xt be a homogeneous Markov process and assume that the initial distribution of Xt is given by the probability measure ν(Γ) = P(X0 ∈ Γ) (for deterministic initial conditions X0 = x we have that ν(Γ) = IΓ(x)).

• The transition function P(t, x, Γ) and the initial distribution ν determine the finite dimensional distributions of X by

P(X0 ∈ Γ0, Xt1 ∈ Γ1, . . . , Xtn ∈ Γn)
= ∫Γ0 ∫Γ1 · · · ∫Γn−1 P(tn − tn−1, yn−1, Γn) P(tn−1 − tn−2, yn−2, dyn−1) · · · P(t1, y0, dy1) ν(dy0). (6)

Theorem 1 (Ethier and Kurtz 1986, Sec. 4.1) Let P(t, x, Γ) satisfy (5) and assume that (E, ρ) is a complete separable metric space. Then there exists a Markov process X in E whose finite-dimensional distributions are uniquely determined by (6).


The Chapman-Kolmogorov Equation

• Let Xt be a homogeneous Markov process with initial distribution ν(Γ) = P(X0 ∈ Γ) and transition function P(t, x, Γ). We can calculate the probability of finding Xt in a set Γ at time t:

P(Xt ∈ Γ) = ∫E P(t, x, Γ) ν(dx).

• Thus, the initial distribution and the transition function are sufficient to characterize a homogeneous Markov process.

• Notice that they do not provide us with any information about the actual paths of the Markov process.


The Chapman-Kolmogorov Equation

• The transition probability P(Γ, t|x, s) is a probability measure. Assume that it has a density for all t > s:

P(Γ, t|x, s) = ∫Γ p(y, t|x, s) dy.

• Clearly, for t = s we have P(Γ, s|x, s) = IΓ(x).

• The Chapman-Kolmogorov equation becomes:

∫Γ p(y, t|x, s) dy = ∫R ∫Γ p(y, t|z, u) p(z, u|x, s) dy dz,

• and, since Γ ∈ B(R) is arbitrary, we obtain the equation

p(y, t|x, s) = ∫R p(y, t|z, u) p(z, u|x, s) dz. (7)

• The transition probability density is a function of 4 arguments: the initial position and time x, s and the final position and time y, t.
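Equation (7) can be checked numerically in a concrete case (a sketch of my own, assuming numpy; it uses the Brownian transition density that appears in Example 5 below, a Gaussian of mean x and variance t − s):

```python
import numpy as np

def p(y, t, x, s):
    """Brownian transition density p(y, t | x, s): Gaussian, mean x, variance t - s."""
    var = t - s
    return np.exp(-(y - x) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

x, s, u, t = 0.3, 0.0, 0.7, 1.5            # intermediate time s < u < t
y = 1.1                                     # final position

z = np.linspace(-15.0, 15.0, 20001)         # grid over the intermediate state
dz = z[1] - z[0]

lhs = p(y, t, x, s)
rhs = np.sum(p(y, t, z, u) * p(z, u, x, s)) * dz   # right-hand side of (7)
print(lhs, rhs)                             # the two values agree to quadrature accuracy
```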


The Chapman-Kolmogorov Equation

• In words, the CK equation tells us that, for a Markov process, the transition from x, s to y, t can be done in two steps: first the system moves from x to z at some intermediate time u. Then it moves from z to y at time t. In order to calculate the probability for the transition from (x, s) to (y, t) we need to sum (integrate) the transitions from all possible intermediary states z.

• The above description suggests that a Markov process can be described through a semigroup of operators, i.e. a one-parameter family of linear operators with the properties

P0 = I, Pt+s = Pt Ps ∀ t, s ≥ 0.

• A semigroup of operators is characterized through its generator.


The Chapman-Kolmogorov Equation

• Indeed, let P(t, x, dy) be the transition function of a homogeneous Markov process. It satisfies the CK equation (5):

P(t + s, x, Γ) = ∫E P(s, x, dz) P(t, z, Γ).

• Let X := Cb(E) and define the operator

(Ptf)(x) := E(f(Xt)|X0 = x) = ∫E f(y) P(t, x, dy).

• This is a linear operator with

(P0f)(x) = E(f(X0)|X0 = x) = f(x) ⇒ P0 = I.


The Chapman-Kolmogorov Equation

• Furthermore:

(Pt+sf)(x) = ∫ f(y) P(t + s, x, dy)
= ∫ ∫ f(y) P(s, z, dy) P(t, x, dz)
= ∫ (∫ f(y) P(s, z, dy)) P(t, x, dz)
= ∫ (Psf)(z) P(t, x, dz)
= (Pt Psf)(x).

• Consequently: Pt+s = Pt Ps.
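The same computation can be carried out numerically for the heat semigroup of Brownian motion (a quadrature sketch with parameters of my own choosing; Example 5 below identifies this semigroup): applying Pt after Ps to a test function agrees with applying Pt+s directly.

```python
import numpy as np

grid = np.linspace(-20.0, 20.0, 2001)
dy = grid[1] - grid[0]

def heat_kernel(t, x, y):
    """Brownian transition density P(t, x, dy)/dy."""
    return np.exp(-(y - x) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)

def apply_Pt(fvals, t):
    """(Pt f)(x): integrate f(y) against P(t, x, dy), for f given by its values on `grid`."""
    K = heat_kernel(t, grid[:, None], grid[None, :])   # K[i, j]: from grid[i] to grid[j]
    return (K @ fvals) * dy

f = np.cos(grid) * np.exp(-0.05 * grid ** 2)           # a test observable

t, s = 0.3, 0.8
direct   = apply_Pt(f, t + s)                          # P_{t+s} f
two_step = apply_Pt(apply_Pt(f, s), t)                 # P_t (P_s f)

i = np.argmin(np.abs(grid - 0.4))                      # compare at x = 0.4
print(direct[i], two_step[i])                          # equal up to quadrature error
```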


The Generator of a Markov Process

• Let (E, ρ) be a metric space and let Xt be an E-valued homogeneous Markov process. Define the one-parameter family of operators Pt through

Ptf(x) = ∫ f(y) P(t, x, dy) = E[f(Xt)|X0 = x]

for all f(x) ∈ Cb(E) (continuous bounded functions on E).

• Assume for simplicity that Pt : Cb(E) → Cb(E). Then the one-parameter family of operators Pt forms a semigroup of operators on Cb(E).

• We define by D(L) the set of all f ∈ Cb(E) such that the strong limit

Lf = lim_{t→0} (Ptf − f)/t

exists.
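For Brownian motion the limit can be observed numerically (a sketch with an assumed test function; the generator (1/2) d²/dx² is identified in Example 5 below): the difference quotient (Ptf − f)/t at a point approaches (1/2)f''(x) as t → 0.

```python
import numpy as np

def Pt_f(f, t, x, half_width=10.0, n=200001):
    """(Pt f)(x) = E[f(X_t) | X_0 = x] for Brownian motion, computed by quadrature."""
    y = np.linspace(x - half_width, x + half_width, n)
    kernel = np.exp(-(y - x) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)
    return np.sum(f(y) * kernel) * (y[1] - y[0])

f   = np.cos                      # a test function in Cb(R)
fxx = lambda z: -np.cos(z)        # its second derivative
x   = 0.7

for t in [0.1, 0.01, 0.001]:
    print(t, (Pt_f(f, t, x) - f(x)) / t, 0.5 * fxx(x))
# The difference quotient tends to (1/2) f''(x), i.e. Lf with L = (1/2) d^2/dx^2.
```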


The Generator of a Markov Process

Definition 2 The operator L : D(L) → Cb(E) is called the infinitesimal generator of the operator semigroup Pt.

Definition 3 The operator L : Cb(E) → Cb(E) defined above is called the generator of the Markov process Xt.

• The study of operator semigroups started in the late 40’s independently by Hille and Yosida. Semigroup theory was developed in the 50’s and 60’s by Feller, Dynkin and others, mostly in connection with the theory of Markov processes.

• Necessary and sufficient conditions for an operator L to be the generator of a (contraction) semigroup are given by the Hille-Yosida theorem (e.g. Evans, Partial Differential Equations, AMS 1998, Ch. 7).


The Generator of a Markov Process

• The semigroup property and the definition of the generator of a semigroup imply that, formally at least, we can write:

Pt = exp(Lt).

• Consider the function u(x, t) := (Ptf)(x). We calculate its time derivative:

∂u/∂t = d/dt (Ptf) = d/dt (eLtf) = L(eLtf) = LPtf = Lu.

• Furthermore, u(x, 0) = P0f(x) = f(x). Consequently, u(x, t) satisfies the initial value problem

∂u/∂t = Lu, u(x, 0) = f(x). (8)
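A quick check of (8) for Brownian motion (a sketch of my own; the generator L = (1/2) d²/dx² appears in Example 5 below): with f(x) = cos(x), the function u(x, t) = cos(x) e^{−t/2} solves ∂u/∂t = (1/2) ∂²u/∂x² with u(x, 0) = f(x), and it matches a Monte Carlo estimate of E[f(Xt)|X0 = x].

```python
import numpy as np

rng = np.random.default_rng(2)

# With f(x) = cos(x), u(x, t) = cos(x) exp(-t/2) satisfies (8) for L = (1/2) d^2/dx^2:
# du/dt = -(1/2) u and d^2u/dx^2 = -u, so du/dt = (1/2) d^2u/dx^2, and u(x, 0) = cos(x).
def u_exact(x, t):
    return np.cos(x) * np.exp(-t / 2)

def u_monte_carlo(x, t, n_samples=1_000_000):
    """E[cos(X_t) | X_0 = x], estimated by sampling Brownian motion at time t."""
    return np.cos(x + np.sqrt(t) * rng.standard_normal(n_samples)).mean()

for x, t in [(0.0, 0.5), (1.0, 1.0), (-0.7, 2.0)]:
    print(x, t, u_monte_carlo(x, t), u_exact(x, t))   # agree up to Monte Carlo error
```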


The Generator of a Markov Process

• When the semigroup Pt is the transition semigroup of a Markov process Xt, then equation (8) is called the backward Kolmogorov equation. It governs the evolution of an observable

u(x, t) = E(f(Xt)|X0 = x).

• Thus, given the generator L of a Markov process, we can calculate all the statistics of our process by solving the backward Kolmogorov equation.

• When the Markov process is the solution of a stochastic differential equation, the generator is a second-order elliptic operator and the backward Kolmogorov equation becomes an initial value problem for a parabolic PDE.


The Generator of a Markov Process

• The space Cb(E) is natural in a probabilistic context, but other Banach spaces often arise in applications; in particular, when there is a measure µ on E, the spaces Lp(E; µ) sometimes arise. We will quite often use the space L2(E; µ), where µ will be the invariant measure of our Markov process.

• The generator is frequently taken as the starting point for the definition of a homogeneous Markov process.

• Conversely, let Pt be a contraction semigroup (let X be a Banach space and T : X → X a bounded operator; then T is a contraction provided that ‖Tf‖X ≤ ‖f‖X ∀ f ∈ X), with D(Pt) ⊂ Cb(E), closed. Then, under mild technical hypotheses, there is an E–valued homogeneous Markov process Xt associated with Pt defined through

E[f(X(t))|FXs] = Pt−sf(X(s))

for all t, s ∈ T with t ≥ s and f ∈ D(Pt).


The Generator of a Markov Process

Example 4 The Poisson process is a homogeneous Markov process.

Example 5 The one dimensional Brownian motion is a homogeneous Markov process. The transition function is the Gaussian defined in the example in Lecture 2:

P(t, x, dy) = γt,x(y) dy, γt,x(y) = (1/√(2πt)) exp(−|x − y|²/(2t)).

The semigroup associated with the standard Brownian motion is the heat semigroup Pt = exp((t/2) d²/dx²). The generator of this Markov process is (1/2) d²/dx².

• Notice that the transition probability density γt,x of the one dimensional Brownian motion is the fundamental solution (Green’s function) of the heat (diffusion) PDE

∂u/∂t = (1/2) ∂²u/∂x².
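This can be verified directly with finite differences (a small sketch, step sizes chosen by me): evaluating γt,x and its discrete derivatives shows that ∂γ/∂t − (1/2) ∂²γ/∂y² is numerically zero.

```python
import numpy as np

def gamma(t, x, y):
    """Transition density of Brownian motion started at x, evaluated at y after time t."""
    return np.exp(-(x - y) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)

x, t = 0.0, 0.8
y = np.linspace(-4.0, 4.0, 9)
h, k = 1e-3, 1e-5                 # space and time increments for the finite differences

dt_gamma  = (gamma(t + k, x, y) - gamma(t - k, x, y)) / (2 * k)
dyy_gamma = (gamma(t, x, y + h) - 2 * gamma(t, x, y) + gamma(t, x, y - h)) / h ** 2

print(np.max(np.abs(dt_gamma - 0.5 * dyy_gamma)))   # ~0: gamma solves the heat equation
```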


The Adjoint Semigroup

• The semigroup Pt acts on bounded measurable functions.

• We can also define the adjoint semigroup P∗t which acts on probability measures:

P∗t µ(Γ) = ∫R P(Xt ∈ Γ|X0 = x) dµ(x) = ∫R p(t, x, Γ) dµ(x).

• The image of a probability measure µ under P∗t is again a probability measure. The operators Pt and P∗t are adjoint in the L2-sense:

∫R Ptf(x) dµ(x) = ∫R f(x) d(P∗t µ)(x). (9)


The Adjoint Semigroup

• We can, formally at least, write

P∗t = exp(L∗t),

• where L∗ is the L2-adjoint of the generator of the process:

∫ Lf h dx = ∫ f L∗h dx.

• Let µt := P∗t µ. This is the law of the Markov process and µ is the initial distribution. An argument similar to the one used in the derivation of the backward Kolmogorov equation (8) enables us to obtain an equation for the evolution of µt:

∂µt/∂t = L∗µt, µ0 = µ.


The Adjoint Semigroup

• Assuming that µt = ρ(y, t) dy and µ = ρ0(y) dy, this equation becomes:

∂ρ/∂t = L∗ρ, ρ(y, 0) = ρ0(y). (10)

• This is the forward Kolmogorov or Fokker-Planck equation. When the initial conditions are deterministic, X0 = x, the initial condition becomes ρ0 = δ(y − x).

• Given the initial distribution and the generator of the Markov process Xt, we can calculate the transition probability density by solving the forward Kolmogorov equation. We can then calculate all statistical quantities of this process through the formula

E(f(Xt)|X0 = x) = ∫ f(y) ρ(t, y; x) dy.

• We will derive rigorously the backward and forward Kolmogorov equations for Markov processes that are defined as solutions of stochastic differential equations later on.
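A minimal finite-difference sketch of the forward equation (10) for Brownian motion, where L∗ = L = (1/2) ∂²/∂y² (the discretisation and its parameters are my own choices, not from the notes): starting from a narrow Gaussian standing in for δ(y − x), the numerical solution spreads exactly like the Gaussian transition density.

```python
import numpy as np

# Explicit finite differences for d(rho)/dt = (1/2) d^2(rho)/dy^2, the
# Fokker-Planck equation of Brownian motion, starting from a narrow Gaussian
# that approximates the delta initial condition rho_0 = delta(y - x).
x, sigma0, T = 0.0, 0.1, 0.5
y = np.linspace(-5.0, 5.0, 501)
dy = y[1] - y[0]
dt = 0.2 * dy ** 2                       # small enough for stability of the explicit scheme

rho = np.exp(-(y - x) ** 2 / (2 * sigma0 ** 2)) / np.sqrt(2 * np.pi * sigma0 ** 2)

t = 0.0
while t < T:
    lap = np.zeros_like(rho)
    lap[1:-1] = (rho[2:] - 2 * rho[1:-1] + rho[:-2]) / dy ** 2
    rho = rho + 0.5 * dt * lap           # forward Euler step
    t += dt

# Exact solution: a Gaussian whose variance grows from sigma0^2 to sigma0^2 + t;
# as sigma0 -> 0 this is the transition density gamma_{t,x} of Example 5.
var = sigma0 ** 2 + t
exact = np.exp(-(y - x) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
print(np.max(np.abs(rho - exact)))       # small: the scheme reproduces the spreading Gaussian
```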


The Adjoint Semigroup

• We can study the evolution of a Markov process in two different ways:

• Either through the evolution of observables (Heisenberg/Koopman)

∂(Ptf)/∂t = L(Ptf),

• or through the evolution of states (Schrödinger/Frobenius–Perron)

∂(P∗t µ)/∂t = L∗(P∗t µ).

• We can also study Markov processes at the level of trajectories. We will do this after we define the concept of a stochastic differential equation.


Ergodic Markov processes

• A very important concept in the study of limit theorems for stochastic processes is that of ergodicity.

• This concept, in the context of Markov processes, provides us with information on the long–time behavior of a Markov semigroup.

Definition 6 A Markov process is called ergodic if the equation

Ptg = g, g ∈ Cb(E), ∀t ≥ 0

has only constant solutions.

• Roughly speaking, ergodicity corresponds to the case where the semigroup Pt is such that Pt − I has only constants in its null space, or, equivalently, to the case where the generator L has only constants in its null space. This follows from the definition of the generator of a Markov process.


Ergodic Markov processes

• Under some additional compactness assumptions, an ergodic Markov process has an invariant measure µ with the property that, in the case T = R+,

lim_{t→+∞} (1/t) ∫_0^t g(Xs) ds = Eg(x),

• where E denotes the expectation with respect to µ.

• This is a physicist’s definition of an ergodic process: time averages equal phase space averages (a toy numerical illustration follows below).

• Using the adjoint semigroup we can define an invariant measure as the solution of the equation

P∗t µ = µ.

• If this measure is unique, then the Markov process is ergodic.
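A toy illustration of the time-average statement (my own example, a two-state Markov chain rather than a diffusion; numpy assumed): the time average of an observable along one long trajectory approaches its average under the invariant distribution.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-state Markov chain with transition matrix P; its invariant distribution,
# solving pi P = pi, is pi = (0.6, 0.4).
P  = np.array([[0.8, 0.2],
               [0.3, 0.7]])
pi = np.array([0.6, 0.4])
g  = np.array([1.0, 5.0])               # an observable on the state space {0, 1}

n_steps = 200_000
state, total = 0, 0.0
for _ in range(n_steps):
    total += g[state]
    state = rng.choice(2, p=P[state])   # one step of the chain

print(total / n_steps)                  # time average along the trajectory
print(float(pi @ g))                    # phase-space average E_pi[g]; the two agree
```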


Ergodic Markov processes

• Using this, we can obtain an equation for the invariant measure in terms of the adjoint of the generator L∗, which is the generator of the semigroup P∗t. Indeed, from the definition of the generator of a semigroup and the definition of an invariant measure, we conclude that a measure µ is invariant if and only if

L∗µ = 0

in some appropriate generalized sense ((L∗µ, f) = 0 for every bounded measurable function f).

• Assume that µ(dx) = ρ(x) dx. Then the invariant density satisfies the stationary Fokker-Planck equation

L∗ρ = 0.

• The invariant measure (distribution) governs the long-time dynamics of the Markov process.


Stationary Markov Processes

• If X0 is distributed according to µ, then so is Xt for all t ≥ 0. The resulting stochastic process, with X0 distributed in this way, is stationary.

• In this case the transition probability density (the solution of the Fokker-Planck equation) is independent of time: ρ(x, t) = ρ(x).

• Consequently, the statistics of the Markov process are independent of time.


Stationary Markov Processes

Example 7 The one dimensional Brownian motion is not an ergodic process: the null space of the generator L = (1/2) d²/dx² on R is not one dimensional!

Example 8 Consider a one-dimensional Brownian motion on [0, 1], with periodic boundary conditions. The generator of this Markov process L is the differential operator L = (1/2) d²/dx², equipped with periodic boundary conditions on [0, 1]. This operator is self-adjoint. The null space of both L and L∗ comprises constant functions on [0, 1]. Both the backward Kolmogorov and the Fokker-Planck equation reduce to the heat equation

∂ρ/∂t = (1/2) ∂²ρ/∂x²

with periodic boundary conditions on [0, 1]. Fourier analysis shows that the solution converges to a constant at an exponential rate.
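The rate can be read off from a Fourier (FFT) computation (a sketch with an initial density chosen by me): for the heat equation (1/2)∂²/∂x² with periodic boundary conditions on [0, 1], the k-th Fourier mode decays like exp(−2π²k²t), so the solution approaches the constant density exponentially fast.

```python
import numpy as np

n = 256
x = np.linspace(0.0, 1.0, n, endpoint=False)
rho0 = 1.0 + 0.5 * np.cos(2 * np.pi * x) + 0.3 * np.sin(6 * np.pi * x)   # a probability density

k = np.fft.fftfreq(n, d=1.0 / n)             # integer wavenumbers
rho0_hat = np.fft.fft(rho0)

# Each mode e^{2 pi i k x} of the heat equation with diffusivity 1/2 decays
# like exp(-(1/2) (2 pi k)^2 t) = exp(-2 pi^2 k^2 t).
for t in [0.0, 0.05, 0.1, 0.2, 0.4]:
    rho_t = np.real(np.fft.ifft(rho0_hat * np.exp(-2 * np.pi ** 2 * k ** 2 * t)))
    print(t, np.max(np.abs(rho_t - 1.0)))    # distance to the constant density decays exponentially
```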


Stationary Markov Processes

Example 9
• The one dimensional Ornstein-Uhlenbeck (OU) process is a Markov process with generator

L = −αx d/dx + D d²/dx².

• The null space of L comprises constants in x. Hence, it is an ergodic Markov process. In order to calculate the invariant measure we need to solve the stationary Fokker–Planck equation:

L∗ρ = 0, ρ ≥ 0, ‖ρ‖L1(R) = 1. (11)

• Let us calculate the L2-adjoint of L. Assuming that f, h decay sufficiently fast at infinity, we have:

∫R Lf h dx = ∫R [(−αx ∂xf) h + (D ∂²xf) h] dx
= ∫R [f ∂x(αxh) + f (D ∂²xh)] dx =: ∫R f L∗h dx,


Stationary Markov Processes

• where

L∗h := d/dx (αxh) + D d²h/dx².

• We can calculate the invariant distribution by solving equation (11).

• The invariant measure of this process is the Gaussian measure

µ(dx) = √(α/(2πD)) exp(−αx²/(2D)) dx.

• If the initial condition of the OU process is distributed according to the invariant measure, then the OU process is a stationary Gaussian process.
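A simulation sketch of this (Euler–Maruyama with step size and parameters of my own choosing): the SDE dXt = −αXt dt + √(2D) dWt has the generator above, and the empirical moments of a long trajectory settle onto those of the invariant measure N(0, D/α).

```python
import numpy as np

rng = np.random.default_rng(4)

alpha, D = 1.0, 0.5
dt, n_steps = 1e-3, 2_000_000

# Euler-Maruyama for dX = -alpha X dt + sqrt(2 D) dW, whose generator is
# L = -alpha x d/dx + D d^2/dx^2.
x = 0.0
samples = np.empty(n_steps)
noise = np.sqrt(2 * D * dt) * rng.standard_normal(n_steps)
for i in range(n_steps):
    x += -alpha * x * dt + noise[i]
    samples[i] = x

print("empirical mean and variance:", samples.mean(), samples.var())
print("invariant mean and variance:", 0.0, D / alpha)   # N(0, D/alpha)
```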


Stationary Markov Processes

• Let Xt be the 1d OU process and let X0 ∼ N(0, D/α). Then Xt is a mean zero, Gaussian second order stationary process on [0,∞) with correlation function

R(t) = (D/α) e^{−α|t|}

and spectral density

f(x) = (D/π) · 1/(x² + α²).

Furthermore, the OU process is the only real-valued mean zero Gaussian second-order stationary Markov process defined on R.
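As a closing sketch (my own simulation with assumed parameters): sampling the stationary OU process exactly on a time grid and estimating the autocovariance reproduces R(t) = (D/α) e^{−α|t|}.

```python
import numpy as np

rng = np.random.default_rng(5)

alpha, D = 1.0, 0.5
dt, n = 0.1, 500_000

# Exact one-step update of the OU process over time dt, started from the
# invariant measure N(0, D/alpha), so that the sampled path is stationary.
a = np.exp(-alpha * dt)
s = np.sqrt(D / alpha * (1.0 - a ** 2))
x = np.empty(n)
x[0] = np.sqrt(D / alpha) * rng.standard_normal()
xi = rng.standard_normal(n)
for i in range(1, n):
    x[i] = a * x[i - 1] + s * xi[i]

for lag in [0, 5, 10, 20]:                  # lags correspond to times t = lag * dt
    t = lag * dt
    empirical = np.mean(x[: n - lag] * x[lag:])
    print(t, empirical, (D / alpha) * np.exp(-alpha * t))   # matches R(t)
```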