Stochastic Processes
SOLO HERMELIN
Updated: 10.05.11 15.06.14
http://www.solohermelin.com
SOLO Stochastic Processes
Table of Content
Random Variables
Stochastic Differential Equation (SDE)
Brownian Motion
Smoluchowski Equation
Langevin Equation
Lévy Process
Martingale
Chapman – Kolmogorov Equation
Itô Lemma and Itô Processes
Stratonovich Stochastic Calculus
Fokker – Planck Equation
Kolmogorov forward equation (KFE) and its adjoint the Kolmogorov backward equation (KBE)
Propagation Equation
SOLO Stochastic Processes
Table of Content (continue)
Bartlett – Moyal Theorem
Feller – Kolmogorov Equation
Langevin and Fokker – Planck Equations
Generalized Fokker – Planck Equation
Karhunen-Loève Theorem
References
SOLO Random Processes

Random Variable: a variable x determined by the outcome Ω of a random experiment:

x = x(\Omega)

Random Process or Stochastic Process: a function of time x determined by the outcome Ω of a random experiment:

x(t) = x(t, \Omega)

[Figure: sample functions of the process, x(t, \Omega_1), x(t, \Omega_2), x(t, \Omega_3), x(t, \Omega_4), plotted against t]

This is a family, or an ensemble, of functions of time, in general different for each outcome Ω.

Mean or Ensemble Average of the Random Process:

\bar{x}(t) := E[x(t, \Omega)] = \int_{-\infty}^{+\infty} \xi \, p_{x(t)}(\xi)\, d\xi

Autocorrelation of the Random Process:

R(t_1, t_2) := E[x(t_1, \Omega)\, x(t_2, \Omega)] = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} \xi\, \eta \; p_{x(t_1), x(t_2)}(\xi, \eta)\, d\xi\, d\eta

Autocovariance of the Random Process:

C(t_1, t_2) := E\{ [x(t_1, \Omega) - \bar{x}(t_1)]\, [x(t_2, \Omega) - \bar{x}(t_2)] \} = E[x(t_1, \Omega)\, x(t_2, \Omega)] - \bar{x}(t_1)\, \bar{x}(t_2) = R(t_1, t_2) - \bar{x}(t_1)\, \bar{x}(t_2)
Table of Content
SOLO Random Processes

Stationarity of a Random Process

1. Wide-Sense Stationarity of a Random Process:

• The Mean of the Random Process is time invariant:

\bar{x}(t) := E[x(t, \Omega)] = \int_{-\infty}^{+\infty} \xi\, p_{x(t)}(\xi)\, d\xi = \bar{x} = const.

• The Autocorrelation of the Random Process depends only on the time difference:

R(t_1, t_2) := E[x(t_1, \Omega)\, x(t_2, \Omega)] = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} \xi\, \eta\; p_{x(t_1), x(t_2)}(\xi, \eta)\, d\xi\, d\eta = R(t_1 - t_2) =: R(\tau), \qquad \tau := t_1 - t_2

and we have R(\tau) = R(-\tau).

Power Spectrum or Power Spectral Density of a Stationary Random Process:

S(\omega) := \int_{-\infty}^{+\infty} R(\tau)\, \exp(-j \omega \tau)\, d\tau

2. Strict-Sense Stationarity of a Random Process: all probability density functions are time invariant:

p_{x(t)}(\xi, t) = p_x(\xi), \quad \text{independent of } t

Ergodicity: a Stationary Random Process for which Time Average = Ensemble Average:

\langle x(t, \Omega) \rangle := \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{+T} x(t, \Omega)\, dt \stackrel{Ergodicity}{=} E[x(t, \Omega)] = \bar{x}
Random Processes

SOLO Random Processes

Ergodicity (continue):

Time Autocorrelation: for an Ergodic Random Process define

\langle R(\tau) \rangle := \langle x(t, \Omega)\, x(t + \tau, \Omega) \rangle = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{+T} x(t, \Omega)\, x(t + \tau, \Omega)\, dt

Finite Signal Energy Assumption:

\langle R(0) \rangle = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{+T} x^2(t, \Omega)\, dt < \infty

Define the truncated process (zero outside the window):

x_T(t, \Omega) := x(t, \Omega) \text{ for } -T \le t \le T, \quad 0 \text{ otherwise}

R_T(\tau) := \frac{1}{2T} \int_{-\infty}^{+\infty} x_T(t, \Omega)\, x_T(t + \tau, \Omega)\, dt

Splitting the integration interval,

R_T(\tau) = \frac{1}{2T} \int_{-T}^{+T} x(t, \Omega)\, x(t + \tau, \Omega)\, dt - \frac{1}{2T} \int_{T - \tau}^{T} x(t, \Omega)\, x(t + \tau, \Omega)\, dt

Let us compute the limit:

\lim_{T \to \infty} R_T(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{+T} x(t, \Omega)\, x(t + \tau, \Omega)\, dt - \lim_{T \to \infty} \frac{1}{2T} \int_{T - \tau}^{T} x(t, \Omega)\, x(t + \tau, \Omega)\, dt

The first term equals \langle R(\tau) \rangle. The second term vanishes, since

\left| \frac{1}{2T} \int_{T - \tau}^{T} x(t, \Omega)\, x(t + \tau, \Omega)\, dt \right| \le \frac{\tau}{2T} \sup_{T - \tau \le t \le T} \left| x(t, \Omega)\, x(t + \tau, \Omega) \right| \to 0

therefore:

\lim_{T \to \infty} R_T(\tau) = \langle R(\tau) \rangle

[Figure: the truncated signal x_T(t), nonzero only on the interval -T \le t \le +T]

Random Processes
SOLO Random Processes

Ergodicity (continue):

Let us compute the Fourier Transform of R_T(\tau):

\int_{-\infty}^{+\infty} R_T(\tau)\, \exp(-j \omega \tau)\, d\tau = \frac{1}{2T} \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} x_T(t, \Omega)\, x_T(t + \tau, \Omega)\, \exp(-j \omega \tau)\, dt\, d\tau

= \frac{1}{2T} \int_{-\infty}^{+\infty} x_T(t, \Omega)\, \exp(+j \omega t)\, dt \int_{-\infty}^{+\infty} x_T(t + \tau, \Omega)\, \exp[-j \omega (t + \tau)]\, d(t + \tau)

= \frac{1}{2T} \int_{-\infty}^{+\infty} x_T(t, \Omega)\, \exp(j \omega t)\, dt \int_{-\infty}^{+\infty} x_T(v, \Omega)\, \exp(-j \omega v)\, dv = \frac{1}{2T}\, X_T^*\, X_T

where

X_T(\omega) := \int_{-\infty}^{+\infty} x_T(v, \Omega)\, \exp(-j \omega v)\, dv

and * denotes the complex conjugate.

Define:

S(\omega) := \lim_{T \to \infty} E\left[ \frac{X_T^*\, X_T}{2T} \right] = \lim_{T \to \infty} E\left[ \int_{-\infty}^{+\infty} R_T(\tau)\, \exp(-j \omega \tau)\, d\tau \right] = \lim_{T \to \infty} \int_{-\infty}^{+\infty} \exp(-j \omega \tau)\, \frac{1}{2T} \int_{-T}^{+T} E[x_T(t, \Omega)\, x_T(t + \tau, \Omega)]\, dt\, d\tau

Since the Random Process is Ergodic we can use the Wide-Sense Stationarity assumption:

E[x_T(t, \Omega)\, x_T(t + \tau, \Omega)] = R(\tau)

so that

S(\omega) = \lim_{T \to \infty} \int_{-\infty}^{+\infty} \exp(-j \omega \tau)\, R(\tau)\, \frac{1}{2T} \int_{-T}^{+T} dt\, d\tau = \int_{-\infty}^{+\infty} R(\tau)\, \exp(-j \omega \tau)\, d\tau

Random Processes
SOLO Random Processes

Ergodicity (continue):

We obtained the Wiener-Khinchine Theorem (Wiener 1930):

S(\omega) := \lim_{T \to \infty} E\left[ \frac{X_T^*\, X_T}{2T} \right] = \int_{-\infty}^{+\infty} R(\tau)\, \exp(-j \omega \tau)\, d\tau

Norbert Wiener (1894 - 1964)
Alexander Yakovlevich Khinchine (1894 - 1959)

The Power Spectrum or Power Spectral Density of a Stationary Random Process, S(ω), is the Fourier Transform of the Autocorrelation Function R(τ).
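The Wiener-Khinchine relation can be checked numerically. The sketch below is an illustration of mine, not part of the original slides: it estimates S(ω) as the ensemble-averaged periodogram E[|X_T|²]/2T for a discrete AR(1) process, whose power spectral density S(ω) = σ²/|1 − a e^{−jω}|² is known in closed form; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
a, sigma = 0.5, 1.0          # AR(1): x[n] = a x[n-1] + w[n], w white with variance sigma^2
N, trials = 1024, 400

# Generate many realizations and average the periodogram |X|^2 / N,
# the discrete analogue of E[X_T X_T*] / 2T.
w = sigma * rng.standard_normal((trials, N))
x = np.zeros((trials, N))
for n in range(1, N):
    x[:, n] = a * x[:, n - 1] + w[:, n]
X = np.fft.fft(x, axis=1)
psd = np.mean(np.abs(X) ** 2 / N, axis=0)

# Wiener-Khinchine: the PSD is the Fourier transform of R(tau);
# for AR(1) that transform is sigma^2 / |1 - a e^{-j omega}|^2.
omega = 2 * np.pi * np.fft.fftfreq(N)
theory = sigma**2 / np.abs(1 - a * np.exp(-1j * omega)) ** 2
print(np.mean(np.abs(psd - theory) / theory))  # small average relative error
```

With 400 realizations the averaged periodogram tracks the theoretical spectrum to within a few percent at every frequency.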
Random Processes

SOLO Random Processes

White Noise

Wide-Sense Whiteness: a (not necessarily stationary) Random Process whose Autocorrelation is zero for any two different times is called white noise in the wide sense:

R(t_1, t_2) = E[x(t_1, \Omega)\, x(t_2, \Omega)] = \sigma^2(t_1)\, \delta(t_1 - t_2)

\sigma^2(t_1) - instantaneous variance

Strict-Sense Whiteness: a (not necessarily stationary) Random Process in which the outcomes at any two different times are independent is called white noise in the strict sense:

p_{x(t_1), x(t_2)}(\xi_1, \xi_2) = p_{x(t_1)}(\xi_1)\, p_{x(t_2)}(\xi_2), \qquad t_1 \ne t_2

Note: in general whiteness requires Strict-Sense Whiteness. In practice we have only moments (typically up to second order) and thus only Wide-Sense Whiteness.

A Stationary White Noise Random Process has the Autocorrelation:

R(\tau) = E[x(t, \Omega)\, x(t + \tau, \Omega)] = \sigma^2\, \delta(\tau)

The Power Spectral Density is given by performing the Fourier Transform of the Autocorrelation:

S(\omega) = \int_{-\infty}^{+\infty} R(\tau)\, \exp(-j \omega \tau)\, d\tau = \sigma^2 \int_{-\infty}^{+\infty} \delta(\tau)\, \exp(-j \omega \tau)\, d\tau = \sigma^2

[Figure: S(\omega) = \sigma^2, a flat line over all frequencies \omega]

We can see that the Power Spectral Density contains all frequencies at the same amplitude. This is the reason it is called White Noise.

The Power of the Noise is defined as:

P := \int_{-\infty}^{+\infty} R(\tau)\, d\tau = S(\omega = 0) = \sigma^2
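A discrete-time analogue of white noise is easy to generate and check. The sketch below, with illustrative parameters of my choosing, verifies that the sample autocorrelation of an i.i.d. sequence is σ² at lag 0 and approximately zero at all other lags, the discrete counterpart of R(τ) = σ² δ(τ).

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, N = 2.0, 200_000
x = sigma * rng.standard_normal(N)       # discrete-time Gaussian white noise

# Sample autocorrelation R_hat(k) = average of x[n] x[n+k]
lags = np.arange(0, 6)
R = np.array([np.mean(x[:N - k] * x[k:]) for k in lags])
print(R)  # ~ [sigma^2, 0, 0, 0, 0, 0]: the discrete analogue of sigma^2 * delta(tau)
```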
Random Processes

SOLO Random Processes

Markov Processes

A Markov Process is defined by:

p[x(t), t \mid x(\tau), \tau \le t_1] = p[x(t), t \mid x(t_1), t_1], \qquad \forall t > t_1

i.e., for the Random Process, the future beyond any time t_1 is fully defined by the process at t_1; the past before t_1 adds no further information.

Andrei Andreevich Markov (1856 - 1922)

Examples of Markov Processes:

1. Continuous Dynamic System

\dot{x}(t) = f(x, u, v, t)
z(t) = h(x, u, w, t)

2. Discrete Dynamic System

x_{k+1} = f_k(x_k, u_k, v_k)
z_k = h_k(x_k, u_k, w_k)

x - state-space vector (n x 1)
u - input vector (m x 1)
v - white input noise vector (n x 1)
z - measurement vector (p x 1)
w - white measurement noise vector (p x 1)
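A discrete dynamic system of the second form can be sketched as below. The scalar linear model, its coefficients, and noise levels are assumptions of mine for illustration: the next state is computed from the current state and inputs only, which is exactly what makes the process Markovian.

```python
import numpy as np

rng = np.random.default_rng(2)
# Discrete linear dynamic system (a Markov process):
#   x[k+1] = a*x[k] + b*u[k] + v[k],   v ~ N(0, q)  (white input noise)
#   z[k]   = h*x[k] + w[k],            w ~ N(0, r)  (white measurement noise)
a, b, h, q, r = 0.9, 1.0, 1.0, 0.04, 0.25
K = 50
x = np.zeros(K + 1)
z = np.zeros(K)
u = np.ones(K)                          # constant input
for k in range(K):
    z[k] = h * x[k] + np.sqrt(r) * rng.standard_normal()
    # next state depends only on the current state and inputs -> Markov
    x[k + 1] = a * x[k] + b * u[k] + np.sqrt(q) * rng.standard_normal()
print(x[-1])   # settles near the deterministic steady state b/(1-a) = 10
```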
Random Processes
Table of Content
SOLO Stochastic Processes
Stochastic Differential Equation (SDE)

Background

The earliest work on SDEs was done to describe Brownian motion in Einstein's famous paper, and at the same time by Smoluchowski. However, one of the earlier works related to Brownian motion is credited to Bachelier (1900) in his thesis "Theory of Speculation". This work was followed upon by Langevin. Later Itô and Stratonovich put SDEs on a more solid mathematical footing.

In the physical sciences, SDEs are usually written as Langevin equations. These are sometimes confusingly called "the Langevin equation" even though there are many possible forms. They consist of an ordinary differential equation containing a deterministic part and an additional random white-noise term. A second form is the Smoluchowski equation and, more generally, the Fokker–Planck equation. These are partial differential equations that describe the time evolution of probability distribution functions. The third form is the stochastic differential equation used most frequently in mathematics and quantitative finance. This is similar to the Langevin form, but it is usually written in differential form. SDEs come in two varieties, corresponding to the two versions of stochastic calculus.

Terminology

A stochastic differential equation (SDE) is a differential equation in which one or more of the terms is a stochastic process, resulting in a solution which is itself a stochastic process. SDEs are used to model diverse phenomena such as fluctuating stock prices or physical systems subject to thermal fluctuations. Typically, SDEs incorporate white noise, which can be thought of as the derivative of Brownian motion (or the Wiener process); however, other types of random fluctuations are possible, such as jump processes.
SOLO Stochastic Processes

Stochastic Calculus

Brownian motion, or the Wiener process, was discovered to be exceptionally complex mathematically. The Wiener process is nowhere differentiable; thus, it requires its own rules of calculus. There are two dominating versions of stochastic calculus, the Itô Stochastic Calculus and the Stratonovich Stochastic Calculus. Each of the two has advantages and disadvantages, and newcomers are often confused about whether one is more appropriate than the other in a given situation. Guidelines exist and, conveniently, one can readily convert an Itô SDE to an equivalent Stratonovich SDE and back again. Still, one must be careful which calculus to use when the SDE is initially written down.
Table of Content
SOLO Stochastic Processes

Brownian Motion

In 1827 Brown, a botanist, discovered the motion of pollen particles in water. At the beginning of the twentieth century, Brownian motion was studied by Einstein, Perrin and other physicists. In 1923, against this scientific background, Wiener defined probability measures in path spaces, and used the concept of Lebesgue integrals to lay the mathematical foundations of stochastic analysis. In 1942, Itô began to reconstruct from scratch the concept of stochastic integrals, and its associated theory of analysis. He created the theory of stochastic differential equations, which describe motion due to random events.

It was Albert Einstein's (in his 1905 paper) and Marian Smoluchowski's (1906) independent research of the problem that brought the solution to the attention of physicists, and presented it as a way to indirectly confirm the existence of atoms and molecules.

Robert Brown (1773 - 1858)
Albert Einstein (1879 - 1955)
Marian Ritter von Smolan Smoluchowski (1872 - 1917)
Norbert Wiener (1894 - 1964)
Henri Léon Lebesgue (1875 - 1941)
Kiyosi Itô (1915 - 2008)
SOLO Stochastic Processes

Random Walk

Assume the process of walking on a straight line at discrete intervals T. At each time step we walk a distance s, randomly, to the left or to the right, with the same probability p = 1/2. In this way we have created a Stochastic Process called a Random Walk. (This experiment is equivalent to tossing a coin to get, randomly, Head or Tail.)

Assume that at t = nT we have taken k steps to the right and n - k steps to the left; then the distance traveled, x(nT), is a Random Walk, taking the values r s, where r equals n, n-2, ..., -(n-2), -n:

x(nT) = k s - (n - k) s = (2k - n) s

x(nT) = r s \;\Rightarrow\; r = 2k - n \;\Rightarrow\; k = \frac{n + r}{2}

Therefore

P\{x(nT) = r s\} = P\left\{k = \frac{n + r}{2}\right\} = \binom{n}{\frac{n+r}{2}} \left(\frac{1}{2}\right)^n
SOLO Stochastic Processes

Random Walk (continue – 1)

The Random Walk value is

x(nT) = x_1 + x_2 + \cdots + x_n

where at step i the event x_i satisfies P\{x_i = +s\} = p = 1/2 and P\{x_i = -s\} = 1 - p = 1/2. Hence

E[x_i] = (+s)\, P\{x_i = +s\} + (-s)\, P\{x_i = -s\} = 0

E[x_i^2] = s^2\, P\{x_i = +s\} + s^2\, P\{x_i = -s\} = s^2

Since x_i and x_j are independent for i \ne j:

E[x_i\, x_j] = E[x_i]\, E[x_j] = 0, \quad i \ne j; \qquad E[x_i\, x_j] = E[x_i^2] = s^2, \quad i = j

Therefore

E[x(nT)] = E[x_1] + E[x_2] + \cdots + E[x_n] = 0

E[x^2(nT)] = \sum_{i=1}^{n} \sum_{j=1}^{n} E[x_i\, x_j] = \sum_{i=1}^{n} E[x_i^2] = n\, s^2

For large n, the De Moivre-Laplace approximation of the binomial distribution gives

P\{x(nT) = r s\} = \binom{n}{\frac{n+r}{2}} p^{\frac{n+r}{2}} (1-p)^{\frac{n-r}{2}} \approx \frac{1}{\sqrt{2 \pi n p (1-p)}}\, e^{-\frac{(k - n p)^2}{2 n p (1-p)}} = \sqrt{\frac{2}{\pi n}}\; e^{-\frac{r^2}{2 n}}

Also, for large n,

P\{x(nT) \le r s\} \approx \frac{1}{2} + \frac{1}{\sqrt{2 \pi}} \int_0^{r/\sqrt{n}} e^{-y^2/2}\, dy = \frac{1}{2}\left[ 1 + erf\left( \frac{r}{\sqrt{2 n}} \right) \right]
Stochastic ProcessesSOLO
Random Walk (continue – 2)
For n1 > n2 > n3 > n4, the number of steps to the right in the interval from n2T to n1T is independent of the number of steps to the right in the interval from n4T to n3T. Hence x(n1T) - x(n2T) is independent of x(n4T) - x(n3T).
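The random-walk moments can be sketched numerically. The simulation below is an illustration of mine (parameter values are arbitrary): over many realizations the sample mean of x(nT) is near 0 and the sample mean of x²(nT) is near n s², as derived above.

```python
import numpy as np

rng = np.random.default_rng(3)
n, s, trials = 100, 1.0, 20_000

# Each step is +s or -s with probability 1/2, independently.
steps = rng.choice([-s, s], size=(trials, n))
x = steps.sum(axis=1)                 # x(nT) over many realizations

print(x.mean())        # E[x(nT)] = 0
print((x**2).mean())   # E[x^2(nT)] = n s^2 = 100
```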
Table of Content
SOLO Stochastic Processes
Smoluchowski Equation

In physics, the Diffusion Equation with a drift term is often called the Smoluchowski equation (after Marian von Smoluchowski).

Let w(r, t) be a density, D a diffusion constant, ζ a friction coefficient, and U(r, t) a potential. Then the Smoluchowski equation states that the density evolves according to

\frac{\partial w(r, t)}{\partial t} = \nabla \cdot \left[ D\, \nabla w(r, t) + \frac{1}{\zeta}\, w(r, t)\, \nabla U(r, t) \right]

The diffusivity term acts to smooth out the density, while the drift term shifts the density towards regions of low potential U. The equation is consistent with each particle moving according to a stochastic differential equation, with a bias term and a diffusivity D. Physically, the drift term originates from a force being balanced by the viscous drag given by ζ.

The Smoluchowski equation is formally identical to the Fokker–Planck equation, the only difference being the physical meaning of w: a distribution of particles in space for the Smoluchowski equation, a distribution of particle velocities for the Fokker–Planck equation.
SOLO Stochastic Processes
Einstein-Smoluchowski Equation

In physics (namely, in kinetic theory) the Einstein relation (also known as the Einstein–Smoluchowski relation) is a previously unexpected connection revealed independently by Albert Einstein in 1905 and by Marian Smoluchowski (1906) in their papers on Brownian motion. Two important special cases of the relation are:

D = \mu_q\, k_B\, T / q \qquad (diffusion of charged particles)

D = \frac{k_B\, T}{6 \pi \eta r} \qquad ("Einstein–Stokes equation", for diffusion of spherical particles through a liquid with low Reynolds number)

where

• ρ(x,t) is the density of the Brownian particles
• D is the diffusion constant
• q is the electrical charge of a particle
• μq is the electrical mobility of the charged particle, i.e. the ratio of the particle's terminal drift velocity to an applied electric field
• kB is Boltzmann's constant
• T is the absolute temperature
• η is the viscosity
• r is the radius of the spherical particle

The more general form of the equation is D = \mu\, k_B\, T, where the "mobility" μ is the ratio of the particle's terminal drift velocity to an applied force, μ = v_d / F.

Einstein's Equation for Brownian Motion:

\frac{\partial \rho}{\partial t} = D\, \frac{\partial^2 \rho}{\partial x^2}

with the solution (for particles starting at x = 0)

\rho(x, t) = \frac{1}{(4 \pi D t)^{1/2}} \exp\left( -\frac{x^2}{4 D t} \right)
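Einstein's equation implies a mean-square displacement that grows linearly in time, E[x²] = 2 D t for the Gaussian solution above. The simulation below is a sketch of mine with illustrative parameters: independent Brownian particles built from increments dx = √(2 D dt) ξ reproduce that law.

```python
import numpy as np

rng = np.random.default_rng(4)
D, dt, steps, particles = 0.5, 2e-3, 1000, 20_000
t = steps * dt                                  # total time = 2.0

# Brownian particles: each increment is dx = sqrt(2 D dt) * xi, xi ~ N(0,1)
x = np.zeros(particles)
for _ in range(steps):
    x += np.sqrt(2 * D * dt) * rng.standard_normal(particles)

print(np.mean(x**2))   # mean-square displacement ~ 2 D t = 2.0
```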
Table of Content
SOLO Stochastic Processes

Langevin Equation

Paul Langevin (1872 - 1946)

The Langevin equation (Paul Langevin, 1908) is a stochastic differential equation describing the time evolution of a subset of the degrees of freedom. These degrees of freedom typically are collective (macroscopic) variables changing only slowly in comparison to the other (microscopic) variables of the system. The fast (microscopic) variables are responsible for the stochastic nature of the Langevin equation.

The original Langevin equation describes Brownian motion, the apparently random movement of a particle in a fluid due to collisions with the molecules of the fluid:

m\, \frac{d v}{d t} = -\lambda\, v + \eta(t), \qquad v = \frac{d x}{d t}

We are interested in the position x of a particle of mass m. The force on the particle is the sum of a viscous force proportional to the particle's velocity, -\lambda v (Stokes' law), plus a noise term η(t) that has a Gaussian probability distribution with correlation function

\langle \eta_i(t)\, \eta_j(t') \rangle = 2\, \lambda\, k_B\, T\, \delta_{i,j}\, \delta(t - t')

where kB is Boltzmann's constant and T is the temperature.

Langevin, P. (1908). "On the Theory of Brownian Motion". C. R. Acad. Sci. (Paris) 146: 530–533.
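The Langevin equation can be integrated numerically. The sketch below is an assumption-laden illustration of mine, not part of the slides: it uses the Euler-Maruyama discretization (my choice) with unit parameters, and checks that the equilibrium velocity variance approaches the equipartition value ⟨v²⟩ = kB T / m implied by the noise correlation above.

```python
import numpy as np

rng = np.random.default_rng(5)
m, lam, kT = 1.0, 1.0, 1.0            # illustrative units
dt, steps, walkers = 1e-2, 5000, 2000

# Euler-Maruyama for  m dv = -lambda v dt + sqrt(2 lambda kT) dB
v = np.zeros(walkers)
for _ in range(steps):
    dB = np.sqrt(dt) * rng.standard_normal(walkers)
    v += (-lam * v * dt + np.sqrt(2 * lam * kT) * dB) / m

print(np.mean(v**2))   # equilibrium: <v^2> ~ kT / m = 1
```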
Table of Content
Propagation Equation

SOLO Stochastic Processes

Definition 1: Hölder Continuity Condition

Given an m x 1 vector function k(x, t) of an n x 1 vector x on a domain K, we say that k is Hölder Continuous in K if, for some constants C, α > 0 and some norm || ||:

Hölder Continuity: \| k(x_1, t) - k(x_2, t) \| < C\, \| x_1 - x_2 \|^{\alpha}

Hölder Continuity is a generalization of Lipschitz Continuity (α = 1):

Lipschitz Continuity: \| k(x_1, t) - k(x_2, t) \| < C\, \| x_1 - x_2 \|

Rudolf Lipschitz (1832 - 1903)
Otto Ludwig Hölder (1859 - 1937)
Propagation Equation

SOLO Stochastic Processes

Definition 2: Standard Stochastic State Realization (SSSR)

Consider the Stochastic Differential Equation

d x(t) = f(x, t)\, dt + G(x, t)\, d n(t), \qquad t \in [t_0, t_f]

with x an n x 1 state vector, f an n x 1 vector function, and G an n x n matrix function, where the noise increment splits into Gaussian and Poisson parts

d n(t) = d n_g(t) + d n_p(t), \qquad E[d n(t)] = E[d n_g(t)] = E[d n_p(t)] = 0

d n_g(t) - Wiener (Gauss) Process: E[d n_g(t)\, d n_g^T(t)] = Q(t)\, dt

so we can write w(t) = d n_g(t) / dt with E[w(t)\, w^T(s)] = Q(t)\, \delta(t - s)

d n_p(t) - Poisson Process: E[d n_p(t)\, d n_p^T(t)] = diag(\sigma_{a_1}^2 \lambda_1, \sigma_{a_2}^2 \lambda_2, \ldots, \sigma_{a_n}^2 \lambda_n)

The Stochastic Differential Equation is called a Standard Stochastic State Realization (SSSR) if:

(1) x(t_0) = x_0, where x_0 is independent of d n(t).

(2) G(x, t) is Hölder Continuous in t and Lipschitz Continuous in x; G(x, t)\, G^T(x, t) is strictly Positive Definite; and the derivatives

\frac{\partial G_{ij}(x, t)}{\partial x_i}, \qquad \frac{\partial^2 G_{ij}(x, t)}{\partial x_i\, \partial x_j}

are Globally Lipschitz Continuous in x, continuous in t, and globally bounded.

(3) The vector f(x, t) is continuous in t and Globally Lipschitz Continuous in x, and the derivatives ∂fi/∂xi are Globally Lipschitz Continuous in x and continuous in t.
Table of Content
SOLO Stochastic Processes

Lévy Process

In probability theory, a Lévy process, named after the French mathematician Paul Lévy, is a continuous-time stochastic process satisfying the conditions below.

Paul Pierre Lévy (1886 - 1971)

A Stochastic Process X = {Xt : t ≥ 0} is said to be a Lévy Process if:
1. X0 = 0 almost surely (with probability one).
2. Independent increments: for any 0 ≤ t1 < t2 < ... < tn < ∞, the increments Xt2 - Xt1, Xt3 - Xt2, ..., Xtn - Xtn-1 are independent.
3. Stationary increments: for any s < t, Xt - Xs is equal in distribution to Xt-s.
4. t → Xt is almost surely right continuous with left limits.

Independent increments

A continuous-time stochastic process assigns a random variable Xt to each point t ≥ 0 in time. In effect it is a random function of t. The increments of such a process are the differences Xs − Xt between its values at different times t < s. To call the increments of a process independent means that increments Xs − Xt and Xu − Xv are independent random variables whenever the two time intervals do not overlap and, more generally, any finite number of increments assigned to pairwise non-overlapping time intervals are mutually (not just pairwise) independent.
SOLO Stochastic Processes

Lévy Process (continue – 1)

Stationary increments
To call the increments stationary means that the probability distribution of any increment Xs − Xt depends only on the length s − t of the time interval; increments with equally long time intervals are identically distributed. In the Wiener process, the probability distribution of Xs − Xt is normal with expected value 0 and variance s − t. In the (homogeneous) Poisson process, the probability distribution of Xs − Xt is a Poisson distribution with expected value λ(s − t), where λ > 0 is the "intensity" or "rate" of the process.
SOLO Stochastic Processes

Lévy Process (continue – 2)

Divisibility

Lévy processes correspond to infinitely divisible probability distributions: the probability distributions of the increments of any Lévy process are infinitely divisible, since the increment of length t is the sum of n increments of length t/n, which are i.i.d. by assumption (independent increments and stationarity). Conversely, there is a Lévy process for each infinitely divisible probability distribution: given such a distribution D, multiples and divisions define a stochastic process for positive rational time; defining it as a Dirac delta distribution for time 0 defines it for time 0; and taking limits defines it for real time. Independent increments and stationarity follow by the assumption of divisibility, though one must check continuity and that taking limits gives a well-defined function for irrational time.
Table of Content
SOLO Stochastic Processes

Martingale

History of Martingale

Originally, martingale referred to a class of betting strategies that was popular in 18th-century France. The simplest of these strategies was designed for a game in which the gambler wins his stake if a coin comes up heads and loses it if the coin comes up tails. The strategy had the gambler double his bet after every loss so that the first win would recover all previous losses plus win a profit equal to the original stake. As the gambler's wealth and available time jointly approach infinity, his probability of eventually flipping heads approaches 1, which makes the martingale betting strategy seem like a sure thing. However, the exponential growth of the bets eventually bankrupts its users.

The concept of martingale in probability theory was introduced by Paul Pierre Lévy, and much of the original development of the theory was done by Joseph Leo Doob. Part of the motivation for that work was to show the impossibility of successful betting strategies.

Paul Pierre Lévy (1886 - 1971)
Joseph Leo Doob (1910 - 2004)
SOLO Stochastic Processes

Martingale

In probability theory, a martingale is a stochastic process (i.e., a sequence of random variables) such that the conditional expected value of an observation at some time t, given all the observations up to some earlier time s, is equal to the observation at that earlier time s.

A discrete-time martingale is a discrete-time stochastic process (i.e., a sequence of random variables) X1, X2, X3, ... that satisfies, for all n,

E[X_{n+1} \mid X_1, \ldots, X_n] = X_n

i.e., the conditional expected value of the next observation, given all the past observations, is equal to the last observation.

Somewhat more generally, a sequence Y1, Y2, Y3, ... is said to be a martingale with respect to another sequence X1, X2, X3, ... if, for all n,

E[Y_{n+1} \mid X_1, \ldots, X_n] = Y_n

Similarly, a continuous-time martingale with respect to the stochastic process Xt is a stochastic process Yt such that, for all t,

E[Y_t \mid \{X_\tau, \tau \le s\}] = Y_s, \qquad \forall s \le t

This expresses the property that the conditional expectation of an observation at time t, given all the observations up to time s, is equal to the observation at time s (of course, provided that s ≤ t).
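The defining property E[X_{n+1} | X_1, ..., X_n] = X_n can be probed empirically. The sketch below is my illustration with arbitrary parameters: a symmetric random walk is a martingale, so the increment following any observed history has zero conditional mean; here we condition simply on the sign of the current value.

```python
import numpy as np

rng = np.random.default_rng(6)
trials, n = 100_000, 20
steps = rng.choice([-1.0, 1.0], size=(trials, n + 1))
X = np.cumsum(steps, axis=1)          # symmetric random walk: a martingale

# E[X_{n+1} | X_n] = X_n  <=>  the next increment has zero conditional mean.
# Check it empirically, conditioning on the sign of the current value X_{n-1}:
inc = X[:, n] - X[:, n - 1]
pos = X[:, n - 1] > 0
print(inc[pos].mean(), inc[~pos].mean())   # both ~ 0
```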
SOLO Stochastic Processes

Martingale

In full generality, a stochastic process Y : T × Ω → S is a martingale with respect to a filtration Σ∗ and probability measure P if

* Σ∗ is a filtration of the underlying probability space (Ω, Σ, P);

* Y is adapted to the filtration Σ∗, i.e., for each t in the index set T, the random variable Yt is a Σt-measurable function;

* for each t, Yt lies in the Lp space L1(Ω, Σt, P; S), i.e.

E[\,|Y_t|\,] < \infty

* for all s and t with s < t and all F ∈ Σs,

E[(Y_t - Y_s)\, \chi_F] = 0

where χF denotes the indicator function of the event F. In Grimmett and Stirzaker's Probability and Random Processes, this last condition is denoted as

E[Y_t \mid \Sigma_s] = Y_s, \qquad \forall s < t

which is a general form of conditional expectation.

It is important to note that the property of being a martingale involves both the filtration and the probability measure (with respect to which the expectations are taken). It is possible that Y could be a martingale with respect to one measure but not another one; the Girsanov theorem offers a way to find a measure with respect to which an Itô process is a martingale.
Table of Content
SOLO Stochastic Processes

Chapman – Kolmogorov Equation

Sydney Chapman (1888 - 1970)
Andrey Nikolaevich Kolmogorov (1903 - 1987)

Suppose that {fi} is an indexed collection of random variables, that is, a stochastic process. Let

p_{i_1, \ldots, i_n}(f_1, \ldots, f_n)

be the joint probability density function of the values of the random variables f1 to fn. Then the Chapman-Kolmogorov equation is

p_{i_1, \ldots, i_{n-1}}(f_1, \ldots, f_{n-1}) = \int_{-\infty}^{+\infty} p_{i_1, \ldots, i_n}(f_1, \ldots, f_n)\, d f_n

Note that we have not yet assumed anything about the temporal (or any other) ordering of the random variables; the above equation applies equally to the marginalization of any of them.

Particularization to Markov Chains

When the stochastic process under consideration is Markovian, the Chapman-Kolmogorov equation is equivalent to an identity on transition densities. In the Markov chain setting, one assumes that i1 < ... < in. Then, because of the Markov property,

p_{i_1, \ldots, i_n}(f_1, \ldots, f_n) = p_{i_1}(f_1)\, p_{i_2 | i_1}(f_2 | f_1) \cdots p_{i_n | i_{n-1}}(f_n | f_{n-1})

where the conditional probability p_{i | j}(f_i | f_j) is the transition probability between the times i > j. So, the Chapman-Kolmogorov equation takes the form

p_{i_3 | i_1}(f_3 | f_1) = \int_{-\infty}^{+\infty} p_{i_3 | i_2}(f_3 | f_2)\, p_{i_2 | i_1}(f_2 | f_1)\, d f_2

When the probability distribution on the state space of a Markov chain is discrete and the Markov chain is homogeneous, the Chapman-Kolmogorov equations can be expressed in terms of (possibly infinite-dimensional) matrix multiplication, thus:

P(t + s) = P(t)\, P(s)

where P(t) is the transition matrix, i.e., if Xt is the state of the process at time t, then for any two points i and j in the state space we have the transition probability from i to j in t steps.
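For a discrete, homogeneous Markov chain the matrix form above can be sketched directly. The 3-state transition matrix below is an arbitrary example of mine; the Chapman-Kolmogorov identity P(m+n) = P(m) P(n) then reduces to matrix multiplication.

```python
import numpy as np

# A homogeneous Markov chain on 3 states (each row sums to 1).
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.1, 0.3, 0.6]])

# Chapman-Kolmogorov: p_{i->j}(2 steps) = sum_k p_{i->k}(1) p_{k->j}(1),
# i.e. P(2) = P @ P; more generally P(m+n) = P(m) @ P(n).
P2 = P @ P
P5 = np.linalg.matrix_power(P, 5)
assert np.allclose(P2 @ P2 @ P, P5)   # P(2+2+1) = P(5)
print(P2[0])   # two-step transition probabilities from state 0: [0.56, 0.27, 0.17]
```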
SOLO Stochastic Processes

Chapman – Kolmogorov Equation (continue – 1)

Particularization to Markov Processes

Let p_{x(t) | x(t_0)}(x, t \mid x_0, t_0) be the probability density function of the Markov process x(t), given that x(t_0) = x_0, with t_0 < t_2 < t. Then

p_{x | x_0}(x, t \mid x_0, t_0) = \int_{-\infty}^{+\infty} p_{x | x_2}(x, t \mid x_2, t_2)\, p_{x_2 | x_0}(x_2, t_2 \mid x_0, t_0)\, d x_2

Geometric Interpretation of the Chapman – Kolmogorov Equation
Table of Content
SOLO Stochastic Processes

Itô Lemma and Itô Processes

Kiyosi Itô (1915 - 2008)

In 1942, Itô began to reconstruct from scratch the concept of stochastic integrals, and its associated theory of analysis. He created the theory of stochastic differential equations, which describe motion due to random events.

In 1945 Itô was awarded his doctorate. He continued to develop his ideas on stochastic analysis with many important papers on the topic. Among them were "On a stochastic integral equation" (1946), "On the stochastic integral" (1948), "Stochastic differential equations in a differentiable manifold" (1950), "Brownian motions in a Lie group" (1950), and "On stochastic differential equations" (1951).

In its simplest form, Itô's lemma states that for an Itô process

d X_t = \mu_t\, dt + \sigma_t\, d B_t

and any twice continuously differentiable function f on the real numbers, f(X) is also an Itô process, satisfying

d f(X_t) = f'(X_t)\, d X_t + \frac{1}{2} f''(X_t)\, \sigma_t^2\, dt = \left[ \mu_t\, f'(X_t) + \frac{1}{2} \sigma_t^2\, f''(X_t) \right] dt + \sigma_t\, f'(X_t)\, d B_t

Or, in more extended form: let X(t) be an Itô process given by

d X_t = \mu_t\, dt + \sigma_t\, d B_t

and let f(t, x) be a function with continuous first- and second-order partial derivatives. Then by Itô's lemma:

d f(t, X_t) = \left[ \frac{\partial f}{\partial t} + \mu_t\, \frac{\partial f}{\partial x} + \frac{1}{2} \sigma_t^2\, \frac{\partial^2 f}{\partial x^2} \right] dt + \sigma_t\, \frac{\partial f}{\partial x}\, d B_t
SOLO Stochastic Processes

Itô Lemma and Itô Processes (continue – 1)

Informal derivation

A formal proof of the lemma requires us to take the limit of a sequence of random variables, which is not done here. Instead, we can derive Itô's lemma by expanding a Taylor series and applying the rules of stochastic calculus.

Assume the Itô process is in the form of

d x = a\, dt + b\, d B

Expanding f(x, t) in a Taylor series in x and t we have

d f = \frac{\partial f}{\partial t}\, dt + \frac{\partial f}{\partial x}\, d x + \frac{1}{2} \frac{\partial^2 f}{\partial x^2}\, d x^2 + \cdots

and substituting a dt + b dB for dx gives

d f = \frac{\partial f}{\partial t}\, dt + \frac{\partial f}{\partial x}\, (a\, dt + b\, d B) + \frac{1}{2} \frac{\partial^2 f}{\partial x^2}\, \left( a^2\, dt^2 + 2 a b\, dt\, d B + b^2\, d B^2 \right) + \cdots

In the limit as dt tends to 0, the dt^2 and dt dB terms disappear but the dB^2 term tends to dt, since E[d B^2] = dt while its variance is of order dt^2.

Deleting the dt^2 and dt dB terms, substituting dt for dB^2, and collecting the dt and dB terms, we obtain

d f = \left( \frac{\partial f}{\partial t} + a\, \frac{\partial f}{\partial x} + \frac{1}{2} b^2\, \frac{\partial^2 f}{\partial x^2} \right) dt + b\, \frac{\partial f}{\partial x}\, d B

as required.
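The substitution dB² → dt can be observed numerically. The sketch below is my illustration (step sizes and path counts are arbitrary): taking f(x) = x² and X = B (so a = 0, b = 1), Itô's lemma gives d(B²) = 2B dB + dt, so pathwise B_t² should match the accumulated sum of 2B dB + dt, and E[B_t²] = t.

```python
import numpy as np

rng = np.random.default_rng(7)
dt, steps, paths = 1e-3, 1000, 20_000
t = dt * steps                                 # t = 1

B = np.zeros(paths)
lhs = np.zeros(paths)                          # accumulates 2 B dB + dt
for _ in range(steps):
    dB = np.sqrt(dt) * rng.standard_normal(paths)
    lhs += 2 * B * dB + dt                     # Ito: d(B^2) = 2 B dB + dt
    B += dB

# E[B_t^2] = t (the chain rule alone, without the 'dt' term, would give 0),
# and B^2 ~ lhs path by path.
print(np.mean(B**2), np.mean(np.abs(B**2 - lhs)))
```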
SOLO Stochastic Processes
Table of Content
SOLO Stochastic Processes

Stratonovich Stochastic Calculus

Ruslan L. Stratonovich (1930 - 1997)

Stratonovich invented a stochastic calculus which serves as an alternative to the Itô calculus; the Stratonovich calculus is most natural when physical laws are being considered. The Stratonovich integral appears in his stochastic calculus. He also solved the problem of optimal non-linear filtering based on his theory of conditional Markov processes, which was published in his papers in 1959 and 1960. The Kalman-Bucy (linear) filter (1961) is a special case of Stratonovich's filter. He also developed the value of information theory (1965). His latest book was on non-linear non-equilibrium thermodynamics.
Table of Content
SOLO Stochastic Processes

Fokker – Planck Equation

Adriaan Fokker (1887 - 1972)
Max Planck (1858 - 1947)

[Figure: a solution to the one-dimensional Fokker–Planck equation, with both the drift and the diffusion term. The initial condition is a Dirac delta function at x = 1, and the distribution drifts towards x = 0.]

The Fokker–Planck equation describes the time evolution of the probability density function of the position of a particle, and can be generalized to other observables as well. It is named after Adriaan Fokker and Max Planck and is also known as the Kolmogorov forward equation. The first use of the Fokker–Planck equation was the statistical description of Brownian motion of a particle in a fluid. In one spatial dimension x, the Fokker–Planck equation for a process with drift D1(x,t) and diffusion D2(x,t) is

\frac{\partial}{\partial t} f(x, t) = -\frac{\partial}{\partial x} \left[ D_1(x, t)\, f(x, t) \right] + \frac{\partial^2}{\partial x^2} \left[ D_2(x, t)\, f(x, t) \right]

More generally, the time-dependent probability distribution may depend on a set of N macrovariables xi. The general form of the Fokker–Planck equation is then

\frac{\partial f}{\partial t} = -\sum_{i=1}^{N} \frac{\partial}{\partial x_i} \left[ D_1^i(x_1, \ldots, x_N)\, f \right] + \sum_{i=1}^{N} \sum_{j=1}^{N} \frac{\partial^2}{\partial x_i\, \partial x_j} \left[ D_2^{ij}(x_1, \ldots, x_N)\, f \right]

where D1 is the drift vector and D2 the diffusion tensor; the latter results from the presence of the stochastic force.

Adriaan Fokker, "Die mittlere Energie rotierender elektrischer Dipole im Strahlungsfeld", Annalen der Physik 43 (1914), 810-820.
Max Planck, "Ueber einen Satz der statistischen Dynamik und eine Erweiterung in der Quantentheorie", Sitzungsberichte der Preussischen Akademie der Wissenschaften (1917), p. 324-341.
SOLO Stochastic Processes

Fokker – Planck Equation (continue – 1)

The Fokker–Planck equation can be used for computing the probability densities of stochastic differential equations. Consider the Itô stochastic differential equation:

d X_t = f(X_t, t)\, dt + G(X_t, t)\, d W_t

where X_t is the state and W_t is a standard M-dimensional Wiener process. If the initial probability distribution is p_0(x), then the probability distribution of the state is given by the Fokker–Planck equation with the drift and diffusion terms

D_1(x, t) = f(x, t), \qquad D_2(x, t) = \frac{1}{2}\, G(x, t)\, G^T(x, t)

so that in one dimension

\frac{\partial}{\partial t} p(x, t) = -\frac{\partial}{\partial x} \left[ f(x, t)\, p(x, t) \right] + \frac{1}{2} \frac{\partial^2}{\partial x^2} \left[ G^2(x, t)\, p(x, t) \right]

Similarly, a Fokker–Planck equation can be derived for Stratonovich stochastic differential equations. In this case, noise-induced drift terms appear if the noise strength is state-dependent.
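The correspondence between an Itô SDE and its Fokker–Planck equation can be sketched on a concrete case of my choosing: the Ornstein-Uhlenbeck SDE dX = −θX dt + σ dW has drift D1 = −θx and diffusion D2 = σ²/2, and its Fokker–Planck equation has the stationary Gaussian solution with variance σ²/(2θ). A simulated ensemble (Euler-Maruyama, illustrative parameters) should reach that variance.

```python
import numpy as np

rng = np.random.default_rng(8)
theta, sigma = 1.0, 1.0
dt, steps, particles = 1e-2, 3000, 20_000

# Ito SDE: dX = -theta X dt + sigma dW
#   -> Fokker-Planck drift D1(x) = -theta x, diffusion D2 = sigma^2 / 2,
# whose stationary density is Gaussian with variance sigma^2 / (2 theta).
X = np.zeros(particles)
for _ in range(steps):
    X += -theta * X * dt + sigma * np.sqrt(dt) * rng.standard_normal(particles)

print(np.var(X))   # ~ sigma^2 / (2 theta) = 0.5
```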
SOLO Stochastic Processes

Fokker – Planck Equation (continue – 2)

Derivation of the Fokker–Planck Equation

Start with

p_{x_k, x_{k-1}}(x_k, x_{k-1}) = p_{x_k | x_{k-1}}(x_k | x_{k-1})\, p_{x_{k-1}}(x_{k-1})

and

p_{x_k}(x_k) = \int_{-\infty}^{+\infty} p_{x_k, x_{k-1}}(x_k, x_{k-1})\, d x_{k-1} = \int_{-\infty}^{+\infty} p_{x_k | x_{k-1}}(x_k | x_{k-1})\, p_{x_{k-1}}(x_{k-1})\, d x_{k-1}

Define t_k = t, t_{k-1} = t - \Delta t, x_k = x(t), x_{k-1} = x(t - \Delta t), so that

p_{x(t)}[x(t)] = \int_{-\infty}^{+\infty} p_{x(t) | x(t - \Delta t)}[x(t) \mid x(t - \Delta t)]\, p_{x(t - \Delta t)}[x(t - \Delta t)]\, d x(t - \Delta t)

Let us use the Characteristic Function of p_{x(t) | x(t - \Delta t)}:

\Phi_{x(t) | x(t - \Delta t)}(s) := \int_{-\infty}^{+\infty} \exp\{-s\, [x(t) - x(t - \Delta t)]\}\, p_{x(t) | x(t - \Delta t)}[x(t) \mid x(t - \Delta t)]\, d[x(t) - x(t - \Delta t)]

The inverse transform is

p_{x(t) | x(t - \Delta t)}[x(t) \mid x(t - \Delta t)] = \frac{1}{2 \pi j} \int_{-j\infty}^{+j\infty} \exp\{s\, [x(t) - x(t - \Delta t)]\}\, \Phi_{x(t) | x(t - \Delta t)}(s)\, d s

Using the Chapman-Kolmogorov equation we obtain:

p_{x(t)}[x(t)] = \frac{1}{2 \pi j} \int_{-\infty}^{+\infty} \int_{-j\infty}^{+j\infty} \exp\{s\, [x(t) - x(t - \Delta t)]\}\, \Phi_{x(t) | x(t - \Delta t)}(s)\, p_{x(t - \Delta t)}[x(t - \Delta t)]\, d s\, d x(t - \Delta t)

Stochastic Processes
SOLO Stochastic Processes

Fokker – Planck Equation (continue – 3)

Derivation of the Fokker–Planck Equation (continue – 1)

The Characteristic Function can be expressed in terms of the moments about x(t - Δt) as:

\Phi_{x(t) | x(t - \Delta t)}(s) = 1 + \sum_{i=1}^{\infty} \frac{(-s)^i}{i!}\, E\{ [x(t) - x(t - \Delta t)]^i \mid x(t - \Delta t) \}

Therefore

p_{x(t)}[x(t)] = \frac{1}{2 \pi j} \int_{-\infty}^{+\infty} \int_{-j\infty}^{+j\infty} \exp\{s [x(t) - x(t - \Delta t)]\} \left\{ 1 + \sum_{i=1}^{\infty} \frac{(-s)^i}{i!}\, E\{ [x(t) - x(t - \Delta t)]^i \mid x(t - \Delta t) \} \right\} p_{x(t - \Delta t)}[x(t - \Delta t)]\, d s\, d x(t - \Delta t)

Use the fact that

\frac{1}{2 \pi j} \int_{-j\infty}^{+j\infty} (-s)^i \exp\{s [x(t) - x(t - \Delta t)]\}\, d s = (-1)^i\, \frac{\partial^i\, \delta[x(t) - x(t - \Delta t)]}{\partial x^i}, \qquad i = 0, 1, 2, \ldots

to get

p_{x(t)}[x(t)] = \int_{-\infty}^{+\infty} \delta[x(t) - x(t - \Delta t)]\, p_{x(t - \Delta t)}[x(t - \Delta t)]\, d x(t - \Delta t) + \sum_{i=1}^{\infty} \frac{(-1)^i}{i!} \int_{-\infty}^{+\infty} \frac{\partial^i \delta[x(t) - x(t - \Delta t)]}{\partial x^i}\, E\{ [x(t) - x(t - \Delta t)]^i \mid x(t - \Delta t) \}\, p_{x(t - \Delta t)}[x(t - \Delta t)]\, d x(t - \Delta t)

where δ[u] is the Dirac delta function:

\delta[u] = \frac{1}{2 \pi j} \int_{-j\infty}^{+j\infty} \exp(s u)\, d s, \qquad \int_{-\infty}^{+\infty} F(u)\, \delta(u)\, d u = F(0) \quad \forall F \text{ s.t. } F(0^-) = F(0^+) = F(0)

Stochastic Processes
Fokker – Planck Equation (continue – 4)
Derivation of the Fokker–Planck Equation (continue – 2)
SOLO
Useful results related to integrals involving the Delta (Dirac) function:

$$ \delta(u-a) = \frac{1}{2\pi j}\int_{-j\infty}^{+j\infty}\exp[s(u-a)]\; ds, \qquad \int_{-\infty}^{+\infty}f(u)\,\delta(u-a)\; du = f(a) \quad \forall f\ \text{s.t.}\ f(a^-)=f(a^+)=f(a) $$

$$ \frac{d\,\delta(u-a)}{du} = \frac{1}{2\pi j}\int_{-j\infty}^{+j\infty}s\,\exp[s(u-a)]\; ds \quad\Longrightarrow\quad \int_{-\infty}^{+\infty}f(u)\,\frac{d\,\delta(u-a)}{du}\; du = -\left.\frac{d f(u)}{du}\right|_{u=a} $$

which follows by integration by parts. More generally,

$$ \frac{d^i\,\delta(u-a)}{du^i} = \frac{1}{2\pi j}\int_{-j\infty}^{+j\infty}s^i\,\exp[s(u-a)]\; ds \quad\Longrightarrow\quad \int_{-\infty}^{+\infty}f(u)\,\frac{d^i\,\delta(u-a)}{du^i}\; du = (-1)^i\left.\frac{d^i f(u)}{du^i}\right|_{u=a}, \qquad i=1,2,\dots $$
Stochastic Processes
Fokker – Planck Equation (continue – 5)
Derivation of the Fokker–Planck Equation (continue – 3)
SOLO
The first term is

$$ \int_{-\infty}^{+\infty}\frac{1}{2\pi j}\int_{-j\infty}^{+j\infty}\exp[s(x(t)-x(t-\Delta t))]\; ds\;\; p_{x(t-\Delta t)}[x(t-\Delta t)]\; d\,x(t-\Delta t) = \int_{-\infty}^{+\infty}\delta[x(t)-x(t-\Delta t)]\; p_{x(t-\Delta t)}[x(t-\Delta t)]\; d\,x(t-\Delta t) = p_{x(t-\Delta t)}[x(t)] $$

Using the delta-function results above, $\int f(u)\,\delta^{(i)}(u-a)\,du = (-1)^i f^{(i)}(a)$, the sum terms become

$$ \sum_{i=1}^{\infty}\frac{(-1)^i}{i!}\int_{-\infty}^{+\infty}\frac{\partial^i\,\delta[x(t)-x(t-\Delta t)]}{\partial x(t)^i}\;E\{[x(t)-x(t-\Delta t)]^i\mid x(t-\Delta t)\}\; p_{x(t-\Delta t)}[x(t-\Delta t)]\; d\,x(t-\Delta t) = \sum_{i=1}^{\infty}\frac{(-1)^i}{i!}\,\frac{\partial^i}{\partial x(t)^i}\Big[E\{[x(t)-x(t-\Delta t)]^i\mid x(t-\Delta t)\}\; p_{x(t-\Delta t)}[x(t-\Delta t)]\Big]_{x(t-\Delta t)\to x(t)} $$

Therefore

$$ p_{x(t)}[x(t)] = p_{x(t-\Delta t)}[x(t)] + \sum_{i=1}^{\infty}\frac{(-1)^i}{i!}\,\frac{\partial^i}{\partial x(t)^i}\Big[E\{[x(t)-x(t-\Delta t)]^i\mid x(t-\Delta t)\}\; p_{x(t-\Delta t)}[x(t)]\Big] $$

Rearranging, dividing by $\Delta t$, and taking the limit $\Delta t \to 0$, we obtain:

$$ \lim_{\Delta t\to 0}\frac{p_{x(t)}[x(t)] - p_{x(t-\Delta t)}[x(t)]}{\Delta t} = \sum_{i=1}^{\infty}\frac{(-1)^i}{i!}\,\frac{\partial^i}{\partial x(t)^i}\Big[\lim_{\Delta t\to 0}\frac{E\{[x(t)-x(t-\Delta t)]^i\mid x(t-\Delta t)\}}{\Delta t}\; p_{x(t-\Delta t)}[x(t)]\Big] $$
Stochastic Processes
Fokker – Planck Equation (continue – 6)
Derivation of the Fokker–Planck Equation (continue – 4)
SOLO
We found

$$ \lim_{\Delta t\to 0}\frac{p_{x(t)}[x(t)] - p_{x(t-\Delta t)}[x(t)]}{\Delta t} = \sum_{i=1}^{\infty}\frac{(-1)^i}{i!}\,\frac{\partial^i}{\partial x(t)^i}\Big[\lim_{\Delta t\to 0}\frac{E\{[x(t)-x(t-\Delta t)]^i\mid x(t-\Delta t)\}}{\Delta t}\; p_{x(t-\Delta t)}[x(t)]\Big] $$

Define:

$$ m_i[x(t^-),t] := \lim_{\Delta t\to 0}\frac{E\{[x(t)-x(t-\Delta t)]^i\mid x(t-\Delta t)\}}{\Delta t}, \qquad x(t^-) := \lim_{\Delta t\to 0}x(t-\Delta t) $$

Therefore

$$ \frac{\partial\,p_{x(t)}[x(t)]}{\partial t} = \sum_{i=1}^{\infty}\frac{(-1)^i}{i!}\,\frac{\partial^i}{\partial x(t)^i}\Big[m_i[x(t^-),t]\; p_{x(t)}[x(t)]\Big] $$

This equation is called the Stochastic Equation or Kinetic Equation. It is a partial differential equation that we must solve with the initial condition:

$$ p_{x(t_0)}[x(t_0)] = p_0[x(t_0)] $$
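The derivate moments $m_i$ defined above can be estimated numerically from sample-path increments. A minimal sketch (an assumed illustrative example, not taken from the slides): for the linear SDE $dx = -a\,x\,dt + \sqrt{Q}\,dW$, conditional increments from a fixed state $x_0$ should give $m_1 \approx -a\,x_0$, $m_2 \approx Q$, and the higher moments ($m_4$, etc.) should vanish as $\Delta t \to 0$:

```python
# Estimate the derivate moments m_i(x0) = lim E{[x(t)-x(t-dt)]^i | x(t-dt)=x0}/dt
# from simulated increments of dx = -a*x dt + sqrt(Q) dW (assumed example).
import math
import random

random.seed(3)
a, Q, x0, dt = 1.0, 0.4, 1.5, 1e-3   # illustrative parameter choices
n = 100_000
inc = [-a * x0 * dt + math.sqrt(Q * dt) * random.gauss(0.0, 1.0) for _ in range(n)]

m1 = sum(inc) / n / dt                 # should approach f(x0) = -a*x0
m2 = sum(d * d for d in inc) / n / dt  # should approach Q (plus O(dt))
m4 = sum(d ** 4 for d in inc) / n / dt # should approach 3*Q^2*dt -> 0 as dt -> 0
print(m1, m2, m4)
```

Only the first two moments survive the limit here, which is why the infinite Kramers–Moyal sum truncates to the Fokker–Planck equation for this kind of process.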
Stochastic Processes
Fokker – Planck Equation (continue – 7)
Derivation of the Fokker–Planck Equation (continue – 5)
SOLO
We want to find $p_{x(t)}[x(t)]$ where $x(t)$ is the solution of

$$ \frac{d\,x(t)}{dt} = f(x,t) + n_g(t), \qquad t\in[t_0,t_f] $$

where $n_g(t)$ is a Wiener (Gauss) process with

$$ \hat n_g := E\{n_g(t)\} = 0, \qquad E\{[n_g(t)-\hat n_g(t)]\,[n_g(\tau)-\hat n_g(\tau)]\} = Q(t)\,\delta(t-\tau) $$

The derivate moments are:

$$ m_1[x(t^-),t] := \lim_{\Delta t\to 0}\frac{E\{[x(t)-x(t-\Delta t)]\mid x(t-\Delta t)\}}{\Delta t} = E\Big\{\frac{d\,x(t)}{dt}\,\Big|\,x(t)\Big\} = f(x(t),t) + \underbrace{E\{n_g\}}_{=\,0} = f(x(t),t) $$

$$ m_2[x(t^-),t] := \lim_{\Delta t\to 0}\frac{E\{[x(t)-x(t-\Delta t)]^2\mid x(t-\Delta t)\}}{\Delta t} = Q(t), \qquad m_i[x(t^-),t] = 0, \quad i>2 $$

Therefore we obtain:

$$ \frac{\partial\,p_{x(t)}[x(t)]}{\partial t} = -\frac{\partial\big[f(x(t),t)\,p_{x(t)}[x(t)]\big]}{\partial x(t)} + \frac{1}{2}\,Q(t)\,\frac{\partial^2\,p_{x(t)}[x(t)]}{\partial x(t)^2} $$
Stochastic Processes
Fokker–Planck Equation
Kolmogorov forward equation (KFE) and its adjoint the Kolmogorov backward equation (KBE)
Kolmogorov forward equation (KFE) and its adjoint the Kolmogorov backward equation (KBE) are partial differential equations (PDE) that arise in the theory of continuous-time continuous-state Markov processes. Both were published by Andrey Kolmogorov in 1931. Later it was realized that the KFE was already known to physicists under the name Fokker–Planck equation; the KBE on the other hand was new.
Kolmogorov forward equation addresses the following problem. We have information about the state x of the system at time t (namely a probability distribution pt(x)); we want to know the probability distribution of the state at a later time s > t. The adjective 'forward' refers to the fact that pt(x) serves as the initial condition and the PDE is integrated forward in time. (In the common case where the initial state is known exactly pt(x) is a Dirac delta function centered on the known initial state).
Kolmogorov backward equation on the other hand is useful when we are interested at time t in whether at a future time s the system will be in a given subset of states, sometimes called the target set. The target is described by a given function us(x) which is equal to 1 if state x is in the target set and zero otherwise. We want to know for every state x at time t (t < s) what is the probability of ending up in the target set at time s (sometimes called the hit probability). In this case us(x) serves as the final condition of the PDE, which is integrated backward in time, from s to t.
The Kolmogorov backward equation reads

$$ -\frac{\partial}{\partial t}p(x,t) = D_1(x,t)\,\frac{\partial}{\partial x}p(x,t) + \frac{1}{2}\,D_2(x,t)\,\frac{\partial^2}{\partial x^2}p(x,t) $$

for t ≤ s, subject to the final condition p(x,s) = us(x), while the Kolmogorov forward (Fokker–Planck) equation reads

$$ \frac{\partial}{\partial t}p(x,t) = -\frac{\partial}{\partial x}\big[D_1(x,t)\,p(x,t)\big] + \frac{1}{2}\,\frac{\partial^2}{\partial x^2}\big[D_2(x,t)\,p(x,t)\big] $$

where $D_1$ is the drift and $D_2$ the diffusion coefficient.
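As a numerical illustration of the backward integration (a sketch under assumed parameters, not part of the original slides): for pure Brownian motion (drift $D_1 = 0$, constant diffusion $D_2 = \sigma^2$) the hit probability of a target set $[a,b]$ is obtained by marching the backward equation from the final condition $u_s(x) = \mathbf{1}_{[a,b]}(x)$ backward to time $t$, and can be compared with the exact Gaussian answer:

```python
# Kolmogorov backward equation sketch: u(x,t) = P(X_s in [a,b] | X_t = x)
# for Brownian motion with D1 = 0, D2 = sigma^2 (assumed example).
import math

sigma2 = 1.0            # diffusion coefficient D2
a, b = -0.5, 0.5        # target set [a, b]
horizon = 0.25          # s - t
L, nx = 5.0, 201        # spatial grid on [-L, L]
dx = 2 * L / (nx - 1)
nsteps = 250
dt = horizon / nsteps   # satisfies the explicit stability bound dt <= dx^2 / D2
xs = [-L + i * dx for i in range(nx)]

# final condition u(x, s): indicator of the target set (half-weight at the edges)
u = []
for x in xs:
    if abs(x - a) < 1e-9 or abs(x - b) < 1e-9:
        u.append(0.5)
    elif a < x < b:
        u.append(1.0)
    else:
        u.append(0.0)

# march backward in time (forward in tau = s - t): du/dtau = (D2/2) u_xx
for _ in range(nsteps):
    un = u[:]
    for i in range(1, nx - 1):
        un[i] = u[i] + 0.5 * sigma2 * dt * (u[i + 1] - 2 * u[i] + u[i - 1]) / dx ** 2
    u = un  # boundary values stay 0: the target is far from the walls

def Phi(z):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

sd = math.sqrt(sigma2 * horizon)
exact = Phi(b / sd) - Phi(a / sd)   # exact hit probability starting from x = 0
print(u[nx // 2], exact)
```

Note the direction of integration: the target indicator is the *final* condition, and the PDE is integrated from s back to t, exactly as the text describes.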
Andrey Nikolaevich Kolmogorov (1903–1987)
SOLO Stochastic Processes
Kolmogorov forward equation (KFE) and its adjoint the Kolmogorov backward equation (KBE) (continue – 1)
Formulating the Kolmogorov backward equation

Assume that the system state x(t) evolves according to the stochastic differential equation

$$ d\,x(t) = D_1(x(t),t)\,dt + \sqrt{D_2(x(t),t)}\;dW(t) $$

Then the Kolmogorov backward equation, obtained by applying Itô's lemma to p(x,t), is

$$ -\frac{\partial}{\partial t}p(x,t) = D_1(x,t)\,\frac{\partial}{\partial x}p(x,t) + \frac{1}{2}\,D_2(x,t)\,\frac{\partial^2}{\partial x^2}p(x,t) $$

for t ≤ s, subject to the final condition p(x,s) = us(x).
SOLO Stochastic Processes
Table of Content
Bartlett-Moyal Theorem
SOLO Stochastic Processes
Theorem 1

Let $\Phi_{x(t)|x(t_1)}(s,t)$ be the Characteristic Function of the Markov process $x(t)$, $t\in T$ (some interval):

$$ \Phi_{x(t)|x(t_1)}(s,t) = \int_{-\infty}^{+\infty}\exp\big(-s^T x(t)\big)\; p_{x(t)|x(t_1)}[x(t)\mid x(t_1)]\; d\,x(t), \qquad t > t_1 $$

Assume the following:

(1) $\Phi_{x(t)|x(t_1)}(s,t)$ is continuously differentiable in $t$, $t\in T$.

(2) $\big|E\{\exp[-s^T(x(t+\Delta t)-x(t))] - 1 \mid x(t)\}\big| \le \Delta t\; g(s;t,x(t))$, $t\in T$, where $E|g|$ is bounded on $T$.

(3) $\displaystyle \varphi(s;t,x(t)) := \lim_{\Delta t\to 0}\frac{E\{\exp[-s^T(x(t+\Delta t)-x(t))] - 1\mid x(t)\}}{\Delta t}$ exists.

Then

$$ \frac{\partial\,\Phi_{x(t)|x(t_1)}(s,t)}{\partial t} = E\big\{\exp(-s^T x(t))\;\varphi(s;t,x(t))\mid x(t_1)\big\} $$

Maurice Stevenson Bartlett (1910–2002); José Enrique Moyal (1910–1998)
Bartlett-Moyal Theorem
SOLO Stochastic Processes
Proof

By definition

$$ \Phi_{x(t)|x(t_1)}(s,t) = \int_{-\infty}^{+\infty}\exp(-s^T x(t))\; p_{x(t)|x(t_1)}[x(t)\mid x(t_1)]\; d\,x(t) = E\{\exp(-s^T x(t))\mid x(t_1)\} $$

$$ \frac{\partial\,\Phi_{x(t)|x(t_1)}(s,t)}{\partial t} = \lim_{\Delta t\to 0}\frac{\Phi_{x(t)|x(t_1)}(s,t+\Delta t) - \Phi_{x(t)|x(t_1)}(s,t)}{\Delta t} $$

$$ \Phi_{x(t)|x(t_1)}(s,t+\Delta t) = \int_{-\infty}^{+\infty}\exp(-s^T x(t+\Delta t))\; p_{x(t+\Delta t)|x(t_1)}[x(t+\Delta t)\mid x(t_1)]\; d\,x(t+\Delta t) $$

But since $x(t)$ is a Markov process, we can use the Chapman–Kolmogorov equation

$$ p_{x(t+\Delta t)|x(t_1)}[x(t+\Delta t)\mid x(t_1)] = \int p_{x(t+\Delta t)|x(t)}[x(t+\Delta t)\mid x(t)]\; p_{x(t)|x(t_1)}[x(t)\mid x(t_1)]\; d\,x(t) $$

so that

$$ \Phi_{x(t)|x(t_1)}(s,t+\Delta t) = \int\!\!\int \exp(-s^T x(t+\Delta t))\; p_{x(t+\Delta t)|x(t)}\; p_{x(t)|x(t_1)}\; d\,x(t+\Delta t)\; d\,x(t) = E\big\{\exp(-s^T x(t))\; E\{\exp[-s^T(x(t+\Delta t)-x(t))]\mid x(t)\}\mid x(t_1)\big\} $$
Bartlett-Moyal Theorem
SOLO Stochastic Processes
Proof (continue – 1)

We found

$$ \frac{\partial\,\Phi_{x(t)|x(t_1)}(s,t)}{\partial t} = \lim_{\Delta t\to 0}\frac{\Phi_{x(t)|x(t_1)}(s,t+\Delta t) - \Phi_{x(t)|x(t_1)}(s,t)}{\Delta t} $$

with $\Phi(s,t) = E\{e^{-s^T x(t)}\mid x(t_1)\}$ and $\Phi(s,t+\Delta t) = E\{e^{-s^T x(t)}\,E[e^{-s^T(x(t+\Delta t)-x(t))}\mid x(t)]\mid x(t_1)\}$. Therefore

$$ \frac{\partial\,\Phi_{x(t)|x(t_1)}(s,t)}{\partial t} = E\Big\{e^{-s^T x(t)}\;\lim_{\Delta t\to 0}\frac{E\{e^{-s^T(x(t+\Delta t)-x(t))} - 1\mid x(t)\}}{\Delta t}\;\Big|\; x(t_1)\Big\} = E\big\{e^{-s^T x(t)}\;\varphi(s;t,x(t))\mid x(t_1)\big\} $$

q.e.d.
Bartlett-Moyal Theorem
SOLO Stochastic Processes
Discussion of the Bartlett–Moyal Theorem

(1) The assumption that x(t) is a Markov process is essential to the derivation.

(2) The function $\varphi(s;t,x(t)) := E\{\exp(-s^T\,dx) - 1 \mid x(t)\}/dt$ is called the Itô Differential of the Markov Process, or the Infinitesimal Generator of the Markov Process.

(3) The function $\varphi(s;t,x(t))$ is all we need to define the Stochastic Process (this will be proven in the next Lemma).
Bartlett-Moyal Theorem
SOLO Stochastic Processes
Lemma

Let x(t) be an (n×1) vector Markov process generated by

$$ dx = f(x,t)\,dt + dn, \qquad dn = dn_g + dn_p $$

where $dn_p$ is an (n×1) Poisson process with zero mean, rate vector $\lambda$, and jump probability densities $p_{a_i}(\alpha)$, and $dn_g$ is an (n×1) Wiener (Gauss) process with zero mean and covariance $E\{dn_g(t)\,dn_g^T(t)\} = Q(t)\,dt$.

Then

$$ \varphi(s;t,x(t)) = -s^T f(x,t) + \frac{1}{2}\,s^T Q\,s - \sum_{i=1}^{n}\lambda_i\,\big[1 - M_{a_i}(s)\big] $$

Proof

We have

$$ \varphi(s;t,x(t)) := \frac{E\{\exp(-s^T dx) - 1\mid x(t)\}}{dt} = \frac{E\{\exp[-s^T(f(x,t)\,dt + dn_g + dn_p)]\mid x(t)\} - 1}{dt} $$

Because $dn_g$ and $dn_p$ are mutually independent,

$$ E\{\exp[-s^T(f\,dt + dn_g + dn_p)]\mid x(t)\} = \exp[-s^T f(x,t)\,dt]\;\; E\{\exp(-s^T dn_g)\}\;\; E\{\exp(-s^T dn_p)\} $$
Bartlett-Moyal Theorem

Proof (continue – 1)

Because $dn_g$ is Gaussian,

$$ E\{\exp(-s^T dn_g)\} = \exp\Big(\frac{1}{2}\,s^T Q\,s\,dt\Big) $$

The characteristic function of the generalized Poisson process can be evaluated as follows. Note that the probability of two or more jumps occurring in dt is O(dt) → 0, so

$$ E\{\exp(-s^T dn_p)\} = P[\text{no jumps}]\cdot 1 + \sum_{i=1}^{n}E\{\exp(-s_i\,a_i)\}\;P[\text{only one jump in } dn_i] $$

But

$$ P[\text{no jumps}] = \prod_{i=1}^{n}(1-\lambda_i\,dt) = 1 - \sum_{i=1}^{n}\lambda_i\,dt + O(dt) $$

$$ P[\text{only one jump in } dn_i] = \lambda_i\,dt\prod_{j\ne i}(1-\lambda_j\,dt) = \lambda_i\,dt + O(dt) $$

so that

$$ E\{\exp(-s^T dn_p)\} = 1 - \sum_{i=1}^{n}\lambda_i\,dt\,\big[1 - \underbrace{E\{\exp(-s_i\,a_i)\}}_{M_{a_i}(s)}\big] + O(dt) $$
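The first-order expansion of $E\{\exp(-s\,dn_p)\}$ can be checked by direct simulation. A sketch with an assumed scalar example: jumps arrive at rate $\lambda$, jump amplitudes are standard normal, so $M_a(s) = E\{e^{-s a}\} = e^{s^2/2}$:

```python
# Monte Carlo check of E[exp(-s*dn_p)] ≈ 1 - lam*dt*(1 - M_a(s)) for a scalar
# compound-Poisson increment with N(0,1) jump amplitudes (assumed example).
import math
import random

random.seed(1)
lam, dt, s = 2.0, 1e-3, 0.5
M_a = math.exp(s * s / 2.0)      # MGF of a standard-normal jump evaluated at -s

trials = 100_000
acc = 0.0
for _ in range(trials):
    # sample k ~ Poisson(lam*dt) by inverse-CDF; lam*dt is tiny, so k is 0 or 1 a.s.
    u = random.random()
    k, prob, cum = 0, math.exp(-lam * dt), math.exp(-lam * dt)
    while u > cum:
        k += 1
        prob *= lam * dt / k
        cum += prob
    dn = sum(random.gauss(0.0, 1.0) for _ in range(k))  # compound-Poisson increment
    acc += math.exp(-s * dn)

mc = acc / trials                       # Monte Carlo E[exp(-s*dn_p)]
approx = 1.0 - lam * dt * (1.0 - M_a)   # first-order expansion from the Lemma
print(mc, approx)
```

Both numbers sit just above 1, since the O(dt) jump correction is the only deviation from the no-jump value.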
Bartlett-Moyal Theorem

Proof (continue – 3)

We found

$$ E\{\exp(-s^T dn_g)\} = \exp\Big(\frac{1}{2}\,s^T Q\,s\,dt\Big), \qquad E\{\exp(-s^T dn_p)\} = 1 - \sum_{i=1}^{n}\lambda_i\,dt\,[1 - M_{a_i}(s)] + O(dt) $$

Therefore

$$ \varphi(s;t,x(t)) = \frac{E\{\exp(-s^T dx) - 1\mid x(t)\}}{dt} = \frac{\exp(-s^T f\,dt)\,\exp\big(\tfrac{1}{2}s^T Q\,s\,dt\big)\big[1-\sum_i\lambda_i\,dt\,(1-M_{a_i}(s))\big] - 1}{dt} $$

Expanding the exponentials to first order in dt and letting dt → 0:

$$ \varphi(s;t,x(t)) = -s^T f(x,t) + \frac{1}{2}\,s^T Q\,s - \sum_{i=1}^{n}\lambda_i\,[1 - M_{a_i}(s)] $$

q.e.d.
Bartlett-Moyal Theorem
SOLO Stochastic Processes
Theorem 2

Let x(t) be an (n×1) vector Markov process generated by $dx = f(x,t)\,dt + dn_g + dn_p$, and let $p = p_{x(t)|x(t_1)}[x(t)\mid x(t_1)]$ be the transition probability density function of x(t). Then p satisfies the partial differential equation

$$ \frac{\partial p}{\partial t} = -\sum_{i=1}^{n}\frac{\partial (f_i\,p)}{\partial x_i} + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\partial^2 (Q_{ij}\,p)}{\partial x_i\,\partial x_j} + \sum_{i=1}^{n}\lambda_i\,\big[p_{a_i}\!*p - p\big] $$

where the convolution (*) is defined as

$$ (p_{a_i}\!*p)(x) := \int p_{a_i}(v_i)\; p(x - v_i,\,t \mid x(t_1),\,t_1)\; d v_i $$
Proof

From Theorem 1 and the previous Lemma, we have:

$$ \frac{\partial\,\Phi_{x(t)|x(t_1)}(s,t)}{\partial t} = E\{\exp(-s^T x(t))\;\varphi(s;t,x(t))\mid x(t_1)\} = E\Big\{\exp(-s^T x(t))\Big[-s^T f + \frac{1}{2}s^T Q\,s - \sum_{i=1}^{n}\lambda_i(1-M_{a_i}(s))\Big]\Big|\;x(t_1)\Big\} $$

We also have the transform pair

$$ \Phi_{x(t)|x(t_1)}(s,t) = \int_{-\infty}^{+\infty}\exp(-s^T x(t))\; p_{x(t)|x(t_1)}[x(t)\mid x(t_1)]\; d^n x(t) \;\Longleftrightarrow\; p_{x(t)|x(t_1)}[x(t)\mid x(t_1)] = \frac{1}{(2\pi j)^n}\int_{-j\infty}^{+j\infty}\exp(s^T x(t))\;\Phi_{x(t)|x(t_1)}(s,t)\; d^n s $$

so that

$$ \frac{\partial}{\partial t}\,p_{x(t)|x(t_1)}[x(t)\mid x(t_1)] = \frac{1}{(2\pi j)^n}\int_{-j\infty}^{+j\infty}\exp(s^T x(t))\;\frac{\partial\,\Phi_{x(t)|x(t_1)}(s,t)}{\partial t}\; d^n s $$
Bartlett-Moyal Theorem
SOLO Stochastic Processes
Proof (continue – 1)

Substituting the drift term of φ and inverting term by term (each factor $s_j$ under the inverse transform corresponds to the operator $\partial/\partial x_j$ acting on the transformed product, by the delta-function results used earlier):

$$ \frac{1}{(2\pi j)^n}\int_{-j\infty}^{+j\infty}\exp(s^T x)\; E\{\exp(-s^T v)\,(-s^T f(v,t))\mid x(t_1)\}\; d^n s = -\sum_{j=1}^{n}\frac{\partial}{\partial x_j}\big[f_j(x,t)\; p_{x(t)|x(t_1)}(x,t\mid x(t_1),t_1)\big] = -\nabla_x^T\big[f\,p\big] $$
Bartlett-Moyal Theorem
SOLO Stochastic Processes
Proof (continue – 2)

For the diffusion term of φ, each factor $s_i s_j$ corresponds to $\partial^2/\partial x_i\partial x_j$:

$$ \frac{1}{(2\pi j)^n}\int_{-j\infty}^{+j\infty}\exp(s^T x)\; E\Big\{\exp(-s^T v)\,\frac{1}{2}\,s^T Q(t)\,s\;\Big|\;x(t_1)\Big\}\; d^n s = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\partial^2}{\partial x_i\,\partial x_j}\big[Q_{ij}(t)\; p_{x(t)|x(t_1)}(x,t\mid x(t_1),t_1)\big] $$
Bartlett-Moyal Theorem
SOLO Stochastic Processes
Proof (continue – 3)

For the jump term of φ, note that $M_{a_i}(s) = \int \exp(-s_i\alpha)\,p_{a_i}(\alpha)\,d\alpha$, so multiplication by $M_{a_i}(s)$ in the transform domain corresponds to convolution with $p_{a_i}$ in the state domain:

$$ \frac{1}{(2\pi j)^n}\int_{-j\infty}^{+j\infty}\exp(s^T x)\; E\Big\{\exp(-s^T v)\Big(-\sum_{i=1}^{n}\lambda_i\,[1-M_{a_i}(s)]\Big)\Big|\;x(t_1)\Big\}\; d^n s = \sum_{i=1}^{n}\lambda_i\,\big[(p_{a_i}\!*p)(x) - p(x,t\mid x(t_1),t_1)\big] $$

where

$$ (p_{a_i}\!*p)(x) := \int p_{a_i}(v_i)\; p(x - v_i,\,t\mid x(t_1),\,t_1)\; d v_i $$

Collecting the drift, diffusion, and jump contributions yields the stated equation. q.e.d.

Table of Content
Fokker- Planck Equation
SOLO Stochastic Processes
Feller–Kolmogorov Equation

Let x(t) be an (n×1) vector Markov process generated by $dx = f(x,t)\,dt + dn_p$ (pure jump noise), and let p be the transition probability density function of x(t). Then p satisfies the partial differential equation

$$ \frac{\partial p}{\partial t} = -\sum_{i=1}^{n}\frac{\partial (f_i\,p)}{\partial x_i} + \sum_{i=1}^{n}\lambda_i\,\big[p_{a_i}\!*p - p\big] $$

where the convolution (*) is defined as $(p_{a_i}\!*p)(x) := \int p_{a_i}(v_i)\,p(x-v_i,\,t\mid x(t_1),t_1)\,dv_i$.

Proof: derived from Theorem 2 by taking $dn_g = 0$.

Andrey Nikolaevich Kolmogorov (1903–1987)
Fokker- Planck Equation
SOLO Stochastic Processes
Fokker-Planck Equation

Let x(t) be an (n×1) vector Markov process generated by $dx = f(x,t)\,dt + dn_g$, and let p be the transition probability density function of x(t). Then p satisfies the partial differential equation

$$ \frac{\partial p}{\partial t} = -\sum_{i=1}^{n}\frac{\partial (f_i\,p)}{\partial x_i} + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\partial^2 (Q_{ij}\,p)}{\partial x_i\,\partial x_j} $$

Proof: derived from Theorem 2 by taking $dn_p = 0$.

Discussion of the Fokker-Planck Equation

The Fokker-Planck equation can be written as a Conservation Law:

$$ \frac{\partial p}{\partial t} + \sum_{i=1}^{n}\frac{\partial J_i}{\partial x_i} = \frac{\partial p}{\partial t} + \nabla^T J = 0, \qquad J := f\,p - \frac{1}{2}\,\nabla^T\big(Q\,p\big) $$

This Conservation Law is a consequence of the Global Conservation of Probability

$$ \int p_{x(t)|x(t_1)}[x(t)\mid x(t_1)]\; d\,x(t) = 1 $$
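The conservation-law form suggests a flux-form discretization, which conserves total probability exactly up to round-off. A minimal 1-D sketch (assumed drift $f(x) = -x$ and constant $Q$, chosen only for illustration):

```python
# Flux-form finite-difference integration of the 1-D Fokker-Planck equation
# dp/dt = -d/dx J,  J = f*p - (Q/2) dp/dx, checking global probability
# conservation (assumed example).
import math

f_drift = lambda x: -x   # assumed drift f(x)
Q = 0.5                  # constant diffusion intensity
L, nx = 5.0, 201
dx = 2 * L / (nx - 1)
dt = 0.2 * dx * dx / Q   # within the explicit stability bound dt <= dx^2 / Q
xs = [-L + i * dx for i in range(nx)]

# initial density: a narrow Gaussian, total mass ~ 1
sig0 = 0.3
p = [math.exp(-0.5 * (x / sig0) ** 2) / (sig0 * math.sqrt(2 * math.pi)) for x in xs]

for _ in range(400):
    # flux at cell faces: J_{i+1/2} = f*p (centered) - (Q/2) dp/dx
    J = [0.0] * (nx + 1)  # zero flux at the boundaries keeps all mass inside
    for i in range(nx - 1):
        xm = 0.5 * (xs[i] + xs[i + 1])
        pm = 0.5 * (p[i] + p[i + 1])
        J[i + 1] = f_drift(xm) * pm - 0.5 * Q * (p[i + 1] - p[i]) / dx
    p = [p[i] - dt * (J[i + 1] - J[i]) / dx for i in range(nx)]

mass = sum(p) * dx
print(mass)
```

Because the update is the discrete divergence of a flux with zero boundary values, the sum over cells telescopes and the total mass cannot drift, mirroring the global conservation of probability stated above.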
Table of Content
Langevin and Fokker- Planck Equations
SOLO Stochastic Processes
The original Langevin equation describes Brownian motion, the apparently random movement of a particle in a fluid due to collisions with the molecules of the fluid:

$$ m\,\frac{dv}{dt} = -\lambda\,v + \eta(t), \qquad v = \frac{dx}{dt} \quad\Longrightarrow\quad \frac{dv}{dt} = -\frac{\lambda}{m}\,v + \frac{1}{m}\,\eta(t) $$

We are interested in the position x of a particle of mass m. The force on the particle is the sum of a viscous force proportional to the particle's velocity, −λv (Stokes' law), plus a noise term η(t) that has a Gaussian probability distribution with correlation function

$$ E\{\eta_i(t)\,\eta_j(t')\} = 2\,\lambda\,k_B\,T\,\delta_{ij}\,\delta(t-t'), \qquad Q = 2\,\lambda\,k_B\,T/m^2 $$

where $k_B$ is Boltzmann's constant and T is the temperature.

Let $p = p_{v(t)|v(t_0)}[v(t)\mid v(t_0)]$ be the transition probability density function that corresponds to the Langevin equation state. Then p satisfies the partial differential equation given by the Fokker-Planck equation:

$$ \frac{\partial p}{\partial t} = -\frac{\partial\,[-(\lambda/m)\,v\,p]}{\partial v} + \frac{Q}{2}\,\frac{\partial^2 p}{\partial v^2} $$

We assume that the initial state $v(t_0) = v_0$ at $t_0$ is deterministic:

$$ p_{v(t_0)|v(t_0)}[v(t_0)\mid v(t_0)] = \delta\big(v(t_0) - v_0\big) $$
Langevin and Fokker- Planck Equations
SOLO Stochastic Processes
The Fokker-Planck equation

$$ \frac{\partial p}{\partial t} = -\frac{\partial\,[-(\lambda/m)\,v\,p]}{\partial v} + \frac{Q}{2}\,\frac{\partial^2 p}{\partial v^2}, \qquad p_{v(t_0)|v(t_0)} = \delta\big(v(t_0) - v_0\big) $$

with deterministic initial state $v(t_0) = v_0$ has the solution

$$ p_{v(t)|v(t_0)}[v(t)\mid v(t_0)] = \frac{1}{(2\pi\,\sigma^2)^{1/2}}\,\exp\Big[-\frac{1}{2}\,\frac{(v-\hat v)^2}{\sigma^2}\Big] $$

where

$$ \hat v = v_0\,\exp\Big[-\frac{\lambda}{m}\,(t-t_0)\Big], \qquad \sigma^2 = \frac{Q\,m}{2\lambda}\Big[1 - \exp\Big(-\frac{2\lambda}{m}\,(t-t_0)\Big)\Big] $$

(A solution to the one-dimensional Fokker–Planck equation with both the drift and the diffusion term: the initial condition is a Dirac delta function, and the distribution drifts towards v = 0.)
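The analytic mean $\hat v$ and variance $\sigma^2$ above can be checked against an Euler–Maruyama simulation of the Langevin equation (all parameter values are illustrative assumptions):

```python
# Euler-Maruyama simulation of dv = -(lam/m) v dt + sqrt(Q) dW, compared
# against the analytic mean and variance of the Fokker-Planck solution
# (assumed example parameters).
import math
import random

random.seed(0)
lam_over_m = 1.5         # lambda/m
Q = 0.8                  # noise intensity (physically Q = 2*lambda*kB*T/m^2)
v0, T, nsteps = 2.0, 1.0, 400
dt = T / nsteps

npaths = 5_000
vs = []
for _ in range(npaths):
    v = v0
    for _ in range(nsteps):
        v += -lam_over_m * v * dt + math.sqrt(Q * dt) * random.gauss(0.0, 1.0)
    vs.append(v)

mean_mc = sum(vs) / npaths
var_mc = sum((v - mean_mc) ** 2 for v in vs) / npaths

mean_exact = v0 * math.exp(-lam_over_m * T)
var_exact = Q / (2 * lam_over_m) * (1 - math.exp(-2 * lam_over_m * T))
print(mean_mc, mean_exact)
print(var_mc, var_exact)
```

The sample mean relaxes toward zero at rate λ/m and the sample variance saturates at Qm/2λ, exactly the behavior encoded in the Gaussian transition density.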
Table of Content
Generalized Fokker - Planck Equation
SOLO Stochastic Processes
The Fokker-Planck equation was derived under the assumption that x(t) is a Markov process. Let us now assume that x(t) is not a Markov process but an arbitrary random process (an n×1 vector), so that an arbitrary set of past values $x_1,t_1;\,x_2,t_2;\,\dots;\,x_n,t_n$ must be considered.

Define the set of past data $(X,T) := \big((x_1,x_2,\dots,x_n),\,(t_1,t_2,\dots,t_n)\big)$, with $x^T = (x_1,\dots,x_n)$ and $s^T = (s_1,\dots,s_n)$, where we assume $(x(t),t)\notin(X,T)$. We need to find $p_x(x,t\mid X,T)$.

Start the analysis by defining the conditional characteristic function of the increment of the process:

$$ \Phi_{\Delta x|x(t-\Delta t),X,T}(s,t) := E\{\exp[-s^T(x(t)-x(t-\Delta t))]\mid x(t-\Delta t),X,T\} = \int_{-\infty}^{+\infty}\exp[-s^T(x(t)-x(t-\Delta t))]\; p_{x(t)|x(t-\Delta t),X,T}\; d\,x(t) $$

The inverse transform is

$$ p_{x(t)|x(t-\Delta t),X,T}[x(t)\mid x(t-\Delta t),X,T] = \frac{1}{(2\pi j)^n}\int_{-j\infty}^{+j\infty}\exp[s^T(x(t)-x(t-\Delta t))]\;\Phi_{\Delta x|x(t-\Delta t),X,T}(s,t)\; d^n s $$
Generalized Fokker - Planck Equation
SOLO Stochastic Processes
Using the Chapman–Kolmogorov equation we obtain:

$$ p_x(x,t\mid X,T) = \int_{-\infty}^{+\infty} p_{x(t)|x(t-\Delta t),X,T}[x(t)\mid x(t-\Delta t),X,T]\;\; p_{x(t-\Delta t)|X,T}[x(t-\Delta t)\mid X,T]\;\; d\,x(t-\Delta t) $$

$$ = \int_{-\infty}^{+\infty}\frac{1}{(2\pi j)^n}\int_{-j\infty}^{+j\infty}\exp[s^T(x(t)-x(t-\Delta t))]\;\Phi_{\Delta x|x(t-\Delta t),X,T}(s,t)\; d^n s\;\; p_{x(t-\Delta t)|X,T}[x(t-\Delta t)\mid X,T]\;\; d\,x(t-\Delta t) $$
Let us expand the conditional characteristic function in a Taylor series about the vector s = 0:

$$ \Phi_{\Delta x|x(t-\Delta t),X,T}(s,t) = \sum_{m_1=0}^{\infty}\cdots\sum_{m_n=0}^{\infty}\frac{s_1^{m_1}\cdots s_n^{m_n}}{m_1!\cdots m_n!}\;\frac{\partial^{m}\,\Phi_{\Delta x|x(t-\Delta t),X,T}}{\partial s_1^{m_1}\cdots\partial s_n^{m_n}}\Big|_{s=0}, \qquad m = \sum_{i=1}^{n}m_i $$

where

$$ \frac{\partial^{m}\,\Phi_{\Delta x|x(t-\Delta t),X,T}}{\partial s_1^{m_1}\cdots\partial s_n^{m_n}}\Big|_{s=0} = (-1)^{m}\; E\Big\{\prod_{i=1}^{n}\big[x_i(t)-x_i(t-\Delta t)\big]^{m_i}\;\Big|\; x(t-\Delta t),X,T\Big\} $$
Generalized Fokker - Planck Equation
SOLO Stochastic Processes
Substituting the Taylor expansion into the Chapman–Kolmogorov representation, and using the delta-function results of the Fokker–Planck derivation (each product $s_1^{m_1}\cdots s_n^{m_n}$ under the inverse transform corresponds to $(-1)^m\,\partial^m\delta[x(t)-x(t-\Delta t)]/\partial x_1^{m_1}\cdots\partial x_n^{m_n}$), the s-integrations collapse onto delta functions and their derivatives, and the $x(t-\Delta t)$-integration can then be carried out:

$$ p_x(x,t\mid X,T) = \sum_{m_1=0}^{\infty}\cdots\sum_{m_n=0}^{\infty}\frac{(-1)^{m}}{m_1!\cdots m_n!}\;\frac{\partial^{m}}{\partial x_1^{m_1}\cdots\partial x_n^{m_n}}\Big[E\Big\{\prod_{i=1}^{n}\big[x_i(t)-x_i(t-\Delta t)\big]^{m_i}\;\Big|\;x(t-\Delta t),X,T\Big\}\; p_{x(t-\Delta t)|X,T}\Big]_{x(t-\Delta t)\to x(t)}, \qquad m=\sum_{i=1}^{n}m_i $$

For $m_1=\dots=m_n=0$ (i.e. m = 0) the term reduces to $p_{x(t-\Delta t)|X,T}[x(t)\mid X,T]$.
Generalized Fokker - Planck Equation
SOLO Stochastic Processes
We obtained:

$$ p_x(x,t\mid X,T) - p_{x(t-\Delta t)|X,T}[x(t)\mid X,T] = \sum_{\substack{m_1,\dots,m_n\ge 0 \\ m=\sum_i m_i\ne 0}}\frac{(-1)^{m}}{m_1!\cdots m_n!}\;\frac{\partial^{m}}{\partial x_1^{m_1}\cdots\partial x_n^{m_n}}\Big[E\Big\{\prod_{i=1}^{n}[\Delta x_i]^{m_i}\,\Big|\,x(t),X,T\Big\}\; p_x(x,t\mid X,T)\Big] $$

Dividing both sides by Δt and taking Δt → 0 we obtain:

$$ \frac{\partial\,p_x(x,t\mid X,T)}{\partial t} = \sum_{m\ne 0}\frac{(-1)^{m}}{m_1!\cdots m_n!}\;\frac{\partial^{m}}{\partial x_1^{m_1}\cdots\partial x_n^{m_n}}\Big[\lim_{\Delta t\to 0}\frac{E\{\prod_{i=1}^{n}[\Delta x_i]^{m_i}\mid x(t),X,T\}}{\Delta t}\;\; p_x(x,t\mid X,T)\Big] $$
This is the Generalized Fokker - Planck Equation for Non-Markovian Random Processes
Generalized Fokker - Planck Equation
SOLO Stochastic Processes
Discussion of Generalized Fokker – Planck Equation
$$ \frac{\partial\,p_x(x,t\mid X,T)}{\partial t} = \sum_{m\ne 0}\frac{(-1)^{m}}{m_1!\cdots m_n!}\;\frac{\partial^{m}}{\partial x_1^{m_1}\cdots\partial x_n^{m_n}}\big[A_{m_1,\dots,m_n}\; p_x(x,t\mid X,T)\big] $$

$$ A_{m_1,\dots,m_n} := \lim_{\Delta t\to 0}\frac{E\{[x_1(t)-x_1(t-\Delta t)]^{m_1}\cdots[x_n(t)-x_n(t-\Delta t)]^{m_n}\mid x(t),X,T\}}{\Delta t} $$

• The Generalized Fokker-Planck Equation is much more complex than the Fokker-Planck Equation because of the presence of an infinite number of derivatives of the density function.

• It requires certain types of density functions, infinitely differentiable, and knowledge of all the coefficients $A_{m_1,\dots,m_n}$.

• To avoid those difficulties we seek conditions on the process for which ∂p/∂t is defined by a finite set of derivatives.
Generalized Fokker - Planck Equation
SOLO Stochastic Processes
Discussion of Generalized Fokker – Planck Equation
• To avoid those difficulties we seek conditions on the process for which ∂p/∂t is defined by a finite set of derivatives. Those were defined by Pawula, R. F. (1967).

Lemma 1

Let

$$ A_{m_1,0,\dots,0} := \lim_{\Delta t\to 0}\frac{E\{[x_1(t)-x_1(t-\Delta t)]^{m_1}\mid x(t),X,T\}}{\Delta t}, \qquad m_1\ne 0 $$

If $A_{m_1,0,\dots,0}$ is zero for some even $m_1$, then $A_{m_1,0,\dots,0} = 0$ for all $m_1\ge 3$.

Proof

For $m_1$ odd and $m_1\ge 3$, we have

$$ A_{m_1,0,\dots,0} = \lim_{\Delta t\to 0}\frac{E\big\{[\Delta x_1]^{\frac{m_1-1}{2}}\,[\Delta x_1]^{\frac{m_1+1}{2}}\mid x(t),X,T\big\}}{\Delta t} $$
Generalized Fokker - Planck Equation
SOLO Stochastic Processes
Lemma 1 (restated): if $A_{m_1,0,\dots,0}$ is zero for some even $m_1$, then $A_{m_1,0,\dots,0}=0$ for all $m_1\ge 3$.

Proof (continue)

Using the Schwarz inequality, for $m_1$ odd and $m_1\ge 3$,

$$ A_{m_1,0,\dots,0}^2 = \Big[\lim_{\Delta t\to 0}\frac{E\{[\Delta x_1]^{\frac{m_1-1}{2}}[\Delta x_1]^{\frac{m_1+1}{2}}\mid x(t),X,T\}}{\Delta t}\Big]^2 \le A_{m_1-1,0,\dots,0}\;A_{m_1+1,0,\dots,0} $$

In the same way, for $m_1$ even and $m_1\ge 4$, splitting $[\Delta x_1]^{m_1} = [\Delta x_1]^{\frac{m_1-2}{2}}[\Delta x_1]^{\frac{m_1+2}{2}}$ and using the Schwarz inequality again:

$$ A_{m_1,0,\dots,0}^2 \le A_{m_1-2,0,\dots,0}\;A_{m_1+2,0,\dots,0} $$
Generalized Fokker - Planck Equation
SOLO Stochastic Processes
Lemma 1
Lemma 1 (restated): if $A_{m_1,0,\dots,0}$ is zero for some even $m_1$, then $A_{m_1,0,\dots,0}=0$ for all $m_1\ge 3$.

Proof (continue)

We have (writing $A_{m_1}$ for $A_{m_1,0,\dots,0}$):

$$ A_{m_1}^2 \le A_{m_1-1}\,A_{m_1+1} \quad (m_1\ \text{odd},\ m_1\ge 3), \qquad A_{m_1}^2 \le A_{m_1-2}\,A_{m_1+2} \quad (m_1\ \text{even},\ m_1\ge 4) $$

For some even $m_1 = r$ we have $A_r = 0$, and therefore

$$ A_{r-1}^2 \le A_{r-2}\,A_r = 0 \ (r-1\ge 3), \qquad A_{r+1}^2 \le A_r\,A_{r+2} = 0, \qquad A_{r-2}^2 \le A_{r-4}\,A_r = 0 \ (r-2\ge 4), \qquad A_{r+2}^2 \le A_r\,A_{r+4} = 0 $$

Therefore $A_{r-2} = A_{r-1} = A_{r+1} = A_{r+2} = 0$ if $A_r = 0$ and all the A's are bounded. This procedure continues downward and upward, leaving $A_{1,0,\dots,0}$ not necessarily zero and achieving

$$ A_{m_1,0,\dots,0} = 0 \qquad \forall\, m_1 \ge 3 \qquad\text{q.e.d.} $$
Generalized Fokker - Planck Equation
SOLO Stochastic Processes
Lemma 2

Let

$$ A_{m_1,\dots,m_n} := \lim_{\Delta t\to 0}\frac{E\{[x_1(t)-x_1(t-\Delta t)]^{m_1}\cdots[x_n(t)-x_n(t-\Delta t)]^{m_n}\mid x(t),X,T\}}{\Delta t}, \qquad \sum_{i=1}^{n}m_i > 0 $$

If each of the moments $A_{m_1,0,\dots,0},\,A_{0,m_2,0,\dots,0},\,\dots,\,A_{0,\dots,0,m_n}$ is finite and vanishes for some even $m_i$, then

$$ A_{m_1,\dots,m_n} = 0\ \ \forall\,m_i\ \text{s.t.}\ \sum_i m_i \ge 3; \qquad A_{m_1,\dots,m_n}\ \text{not necessarily zero for}\ 0<\sum_i m_i\le 2 $$

Proof

We shall prove this Lemma by induction. Let us start with n = 3:

$$ A_{m_1,m_2,m_3} = \lim_{\Delta t\to 0}\frac{E\{[\Delta x_1]^{m_1}[\Delta x_2]^{m_2}[\Delta x_3]^{m_3}\mid x(t),X,T\}}{\Delta t} $$

We proved in Lemma 1 that $A_{m_1,0,0} = A_{0,m_2,0} = A_{0,0,m_3} = 0$ for all $m_i\ge 3$, while $A_{1,0,0}$, $A_{0,1,0}$, $A_{0,0,1}$ are not necessarily zero. By the Schwarz inequality,

$$ A_{0,m_2,m_3}^2 = \Big[\lim_{\Delta t\to 0}\frac{E\{[\Delta x_2]^{m_2}[\Delta x_3]^{m_3}\mid x(t),X,T\}}{\Delta t}\Big]^2 \le A_{0,2m_2,0}\;A_{0,0,2m_3} $$
Generalized Fokker - Planck Equation
SOLO Stochastic Processes
Lemma 2
Proof (continue – 1)

$A_{1,0,0}$, $A_{0,1,0}$, $A_{0,0,1}$ are not necessarily zero. From

$$ A_{0,m_2,m_3}^2 \le A_{0,2m_2,0}\;A_{0,0,2m_3} \quad\Rightarrow\quad A_{0,m_2,m_3} = 0\ \text{for}\ m_2,m_3>0,\ m_2+m_3\ge 3; \qquad A_{0,1,1}\ \text{not necessarily zero} $$

since at least one of $2m_2$, $2m_3$ is an even index $\ge 4$, which vanishes by Lemma 1. In the same way,

$$ A_{m_1,0,m_3}^2 \le A_{2m_1,0,0}\;A_{0,0,2m_3} \quad\Rightarrow\quad A_{m_1,0,m_3} = 0\ \text{for}\ m_1,m_3>0,\ m_1+m_3\ge 3; \qquad A_{1,0,1}\ \text{not necessarily zero} $$

$$ A_{m_1,m_2,0}^2 \le A_{2m_1,0,0}\;A_{0,2m_2,0} \quad\Rightarrow\quad A_{m_1,m_2,0} = 0\ \text{for}\ m_1,m_2>0,\ m_1+m_2\ge 3; \qquad A_{1,1,0}\ \text{not necessarily zero} $$
Generalized Fokker - Planck Equation
SOLO Stochastic Processes
Lemma 2
Proof (continue – 2)

Applying the Schwarz inequality twice to the full mixed moment,

$$ A_{m_1,m_2,m_3}^4 = \Big[\lim_{\Delta t\to 0}\frac{E\{[\Delta x_1]^{m_1}[\Delta x_2]^{m_2}[\Delta x_3]^{m_3}\mid x(t),X,T\}}{\Delta t}\Big]^4 \le A_{2m_1,0,0}^2\;A_{0,4m_2,0}\;A_{0,0,4m_3} $$

Since $A_{0,4m_2,0} = A_{0,0,4m_3} = 0$ for all $m_2,m_3 > 0$, it follows that $A_{m_1,m_2,m_3} = 0$ for all $m_1,m_2,m_3 > 0$.
Generalized Fokker - Planck Equation
SOLO Stochastic Processes
Lemma 2
Proof (continue – 3)

We proved that only $A_{1,0,0}$, $A_{0,1,0}$, $A_{0,0,1}$, $A_{1,1,0}$, $A_{1,0,1}$, $A_{0,1,1}$ (and the pure second-order moments) are not necessarily zero, and

$$ A_{m_1,m_2,m_3} = 0 \qquad \forall\,m_i\ \text{s.t.}\ m_1+m_2+m_3\ge 3 $$

In the same way, assuming that the result is true for (n − 1), it is straightforward to show that it is true for n, with

$$ A_{m_1,\dots,m_n} = 0\ \ \forall\,m_i\ \text{s.t.}\ \sum_{i=1}^{n}m_i\ge 3; \qquad A_{m_1,\dots,m_n}\ \text{not necessarily zero for}\ 0<\sum_{i=1}^{n}m_i\le 2 \qquad\text{q.e.d.} $$
Generalized Fokker - Planck Equation
SOLO Stochastic Processes

Theorem 2

Let \(p = p_x(x,t\,|\,X,T)\) for some set \((X,T)\), and let each of the moments \(A_{m_1,0,\dots,0},\,A_{0,m_2,\dots,0},\dots,\,A_{0,\dots,0,m_n}\) vanish for some even \(m_i\). Then the transition density satisfies the Generalized Fokker-Planck Equation
\[
\frac{\partial p}{\partial t} = -\sum_{i=1}^{n}\frac{\partial}{\partial x_i}\left(B_i\,p\right)
+ \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\partial^2}{\partial x_i\,\partial x_j}\left(C_{ij}\,p\right)
\]
where
\[
B_i(x,t) = \lim_{\Delta t\to 0}\frac{1}{\Delta t}\,E\left\{x_i(t+\Delta t)-x_i(t)\,\middle|\;x(t),X,T\right\} = A_{0,\dots,1_i,\dots,0}
\]
\[
C_{ij}(x,t) = \lim_{\Delta t\to 0}\frac{1}{\Delta t}\,E\left\{\left[x_i(t+\Delta t)-x_i(t)\right]\left[x_j(t+\Delta t)-x_j(t)\right]\,\middle|\;x(t),X,T\right\} = A_{0,\dots,1_i,\dots,1_j,\dots,0}
\]

Proof

The transition density obeys the moment expansion
\[
\frac{\partial\, p(x,t\,|\,X,T)}{\partial t} = \sum_{m_1=0}^{\infty}\cdots\sum_{m_n=0}^{\infty}
\frac{(-1)^{m_1+\cdots+m_n}}{m_1!\cdots m_n!}\,
\frac{\partial^{\,m_1+\cdots+m_n}}{\partial x_1^{m_1}\cdots\partial x_n^{m_n}}
\left[A_{m_1,\dots,m_n}\,p(x,t\,|\,X,T)\right],\qquad \sum_i m_i \ne 0
\]
Since each moment vanishes for some even \(m_i\), from Lemma 2 the only not-necessarily-zero moments are \(A_{0,\dots,1_i,\dots,0}\) and \(A_{0,\dots,1_i,\dots,1_j,\dots,0}\), and the expansion reduces to the Generalized Fokker-Planck Equation above.

q.e.d.
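The conditional moments \(A_m\) of Theorem 2 can be probed numerically. The sketch below is my own illustration (the drift and diffusion values `a`, `b` and the step sizes are arbitrary assumptions, not taken from the slides): for one Euler step of the scalar diffusion dx = a·x·dt + b·dW, the estimates of A₁ and A₂ stay finite while A₄ shrinks with Δt, as the theorem requires.

```python
import numpy as np

# Illustrative check: for dx = a*x dt + b dW, the conditional moments
# A_m = lim E[(dx)^m | x(t)] / dt should give A_1 = a*x, A_2 = b^2,
# while A_4 = 3 b^4 dt -> 0 as dt -> 0.
rng = np.random.default_rng(0)
a, b, x0 = -0.5, 0.7, 1.0          # assumed model parameters

def moment(m, dt, n=200_000):
    """Monte Carlo estimate of E[(x(t+dt)-x(t))^m | x(t)=x0] / dt."""
    dx = a * x0 * dt + b * np.sqrt(dt) * rng.standard_normal(n)
    return np.mean(dx**m) / dt

for dt in (1e-2, 1e-3, 1e-4):
    print(dt, moment(1, dt), moment(2, dt), moment(4, dt))
```

Only the first two moments survive the Δt → 0 limit, which is exactly the structure that collapses the full moment expansion into the two-term Fokker-Planck form.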
Generalized Fokker - Planck Equation
SOLO Stochastic Processes

History

The Fokker-Planck Equation was derived by Uhlenbeck and Ornstein for Wiener noise in the paper: "On the Theory of Brownian Motion", Phys. Rev., 36, pp. 823 – 841 (September 1, 1930), (available on Internet)

George Eugène Uhlenbeck (1900 – 1988)
Leonard Salomon Ornstein (1880 – 1941)
Ming Chen Wang (王明贞) (1906 – 2010)

An updated version was published by M. C. Wang and Uhlenbeck: "On the Theory of Brownian Motion II", Rev. Modern Physics, 17, Nos. 2 and 3, pp. 323 – 342 (April – July 1945), (available on Internet). They assumed that all Moments above the second must vanish.

The sufficiency of a finite set of Moments to obtain a Fokker-Planck Equation was shown by R. F. Pawula, "Generalization and Extensions of Fokker-Planck-Kolmogorov Equations", IEEE, IT-13, No. 1, pp. 33 – 41 (January 1967)
Table of Content
Karhunen-Loève Theorem
SOLO Stochastic Processes

Michel Loève (1907, Jaffa – 1979, Berkeley)
In the theory of stochastic processes, the Karhunen-Loève theorem (named after Kari Karhunen and Michel Loève) is a representation
of a stochastic process as an infinite linear combination of orthogonal functions, analogous to a Fourier series representation of a function on a bounded interval. In contrast to a Fourier series where the coefficients are real numbers and the expansion basis consists of sinusoidal functions (that is, sine and cosine functions), the coefficients in the Karhunen-Loève theorem are random variables and the expansion basis depends on the process. In fact, the orthogonal basis functions used in this representation are determined by the covariance function of the process. If we regard a stochastic process as a random function F, that is, one in which the random value is a function on an interval [a, b], then this theorem can be considered as a random orthonormal expansion of F.
Given a Stochastic Process x(t) defined on an interval [a,b], the Karhunen-Loève Theorem states that
\[
x(t) \approx \hat{x}(t) = \sum_{n=1}^{\infty} b_n\,\varphi_n(t),\qquad a \le t \le b
\]
where \(\varphi_n(t)\) are orthonormal functions
\[
\int_a^b \varphi_n(t)\,\varphi_m^*(t)\,dt = \begin{cases} 1 & n = m\\ 0 & n \ne m \end{cases}
\]
defined by
\[
\int_a^b \underbrace{E\{x(t_1)\,x^*(t_2)\}}_{R(t_1,t_2)}\,\varphi_m(t_2)\,dt_2 = \lambda_m\,\varphi_m(t_1),\qquad m = 1,2,\dots
\]
and
\[
b_n = \int_a^b x(t)\,\varphi_n^*(t)\,dt,\qquad n = 1,2,\dots
\]
are random variables. If \(E\{x(t)\} = 0\) then \(E\{b_n\} = 0\) and
\[
E\{b_n\,b_m^*\} = \begin{cases} \lambda_n & n = m\\ 0 & n \ne m \end{cases}
\]
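The eigen-equation defining φ_m and λ_m can be made concrete numerically. Below is my own sketch (the grid size is an arbitrary assumption, and it is not part of the original derivation) for Brownian motion on [0,1], whose covariance is R(t₁,t₂) = min(t₁,t₂); the standard closed-form eigenvalues λ_n = 1/((n − ½)π)² give something to compare against.

```python
import numpy as np

# Discrete Karhunen-Loeve sketch for Brownian motion on [0,1]:
# eigenvectors of the discretized covariance play the role of phi_n.
N = 500
t = (np.arange(1, N + 1) - 0.5) / N          # midpoint grid on (0,1)
R = np.minimum.outer(t, t)                   # R(t_i, t_j) = min(t_i, t_j)
lam, phi = np.linalg.eigh(R / N)             # eigh returns ascending eigenvalues
lam, phi = lam[::-1], phi[:, ::-1] * np.sqrt(N)   # descending; sum(phi^2)/N = 1

# Known closed form for comparison: lambda_n = 1 / ((n - 1/2) pi)^2
n = np.arange(1, 6)
print(lam[:5])
print(1.0 / ((n - 0.5) * np.pi) ** 2)
```

The scaling by √N makes the discrete columns satisfy the orthonormality condition in its integral form, ∫φ_n φ_m dt ≈ Σ φ_n φ_m / N = δ_nm.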
Karhunen-Loève Theorem (continue – 1)
SOLO Stochastic Processes

Proof:

If \(\varphi_n(t)\) are the orthonormal functions defined by
\[
\int_a^b \varphi_n(t)\,\varphi_m^*(t)\,dt = \begin{cases} 1 & n = m\\ 0 & n \ne m \end{cases},\qquad
\int_a^b \underbrace{E\{x(t_1)\,x^*(t_2)\}}_{R(t_1,t_2)}\,\varphi_m(t_2)\,dt_2 = \lambda_m\,\varphi_m(t_1),\quad m = 1,2,\dots
\]
and \(\hat{x}(t) = \sum_{n=1}^{\infty} b_n\,\varphi_n(t)\), \(a \le t \le b\), with \(b_n = \int_a^b x(t)\,\varphi_n^*(t)\,dt\), then:

1.
\[
E\{x(t_1)\,b_m^*\} = E\left\{x(t_1)\int_a^b x^*(t_2)\,\varphi_m(t_2)\,dt_2\right\}
= \int_a^b E\{x(t_1)\,x^*(t_2)\}\,\varphi_m(t_2)\,dt_2 = \lambda_m\,\varphi_m(t_1)\qquad \forall\, t_1 \in [a,b]
\]

2.
\[
E\{b_n\,b_m^*\} = E\left\{\int_a^b x(t_1)\,\varphi_n^*(t_1)\,dt_1\; b_m^*\right\}
= \int_a^b E\{x(t_1)\,b_m^*\}\,\varphi_n^*(t_1)\,dt_1
= \lambda_m \int_a^b \varphi_m(t_1)\,\varphi_n^*(t_1)\,dt_1
= \begin{cases} \lambda_n & n = m\\ 0 & n \ne m \end{cases}
\]

If \(E\{x(t)\} = 0\), then
\[
E\{b_n\} = E\left\{\int_a^b x(t)\,\varphi_n^*(t)\,dt\right\} = \int_a^b E\{x(t)\}\,\varphi_n^*(t)\,dt = 0,\qquad n = 1,2,\dots
\]
Karhunen-Loève Theorem (continue – 2)
SOLO Stochastic Processes

Proof (continue):

3.
\[
E\{x(t_1)\,\hat{x}^*(t_1)\} = E\left\{x(t_1)\sum_{n=1}^{\infty} b_n^*\,\varphi_n^*(t_1)\right\}
= \sum_{n=1}^{\infty} E\{x(t_1)\,b_n^*\}\,\varphi_n^*(t_1)
= \sum_{n=1}^{\infty} \lambda_n\,\varphi_n(t_1)\,\varphi_n^*(t_1)\qquad \forall\, t_1 \in [a,b]
\]
but
\[
E\{x(t_1)\,b_m^*\} = E\left\{x(t_1)\int_a^b x^*(t_2)\,\varphi_m(t_2)\,dt_2\right\}
= \int_a^b R(t_1,t_2)\,\varphi_m(t_2)\,dt_2
\]
therefore, by the defining eigen-equation,
\[
E\{x(t_1)\,b_m^*\} = \lambda_m\,\varphi_m(t_1)\qquad \forall\, t_1 \in [a,b]
\]
with \(\lambda_m = E\{b_m\,b_m^*\}\) real and positive.
Karhunen-Loève Theorem (continue – 3)
SOLO Stochastic Processes

4. Convergence of the Karhunen-Loève Theorem

If \(\hat{x}(t) = \sum_{n=1}^{\infty} b_n\,\varphi_n(t)\), \(a \le t \le b\), then
\[
E\{|x(t)-\hat{x}(t)|^2\} = R(t,t) - \sum_{n=1}^{\infty} \lambda_n\,|\varphi_n(t)|^2,\qquad a \le t \le b
\]
therefore
\[
E\{|x(t)-\hat{x}(t)|^2\} = 0 \;\Longleftrightarrow\; R(t,t) = \sum_{n=1}^{\infty} \lambda_n\,|\varphi_n(t)|^2,\qquad a \le t \le b
\]

Proof:
\[
E\{x(t)\,\hat{x}^*(t)\} = \sum_{n=1}^{\infty} E\{x(t)\,b_n^*\}\,\varphi_n^*(t)
= \sum_{n=1}^{\infty} \lambda_n\,\varphi_n(t)\,\varphi_n^*(t)
= \sum_{n=1}^{\infty} \lambda_n\,|\varphi_n(t)|^2
\]
\[
E\{\hat{x}(t)\,x^*(t)\} = \sum_{n=1}^{\infty} E\{b_n\,x^*(t)\}\,\varphi_n(t)
= \sum_{n=1}^{\infty} \lambda_n^*\,\varphi_n^*(t)\,\varphi_n(t)
= \sum_{n=1}^{\infty} \lambda_n\,|\varphi_n(t)|^2
\]
using \(E\{x(t)\,b_n^*\} = \lambda_n\,\varphi_n(t)\ \forall\, t \in [a,b]\) and \(\lambda_n\) real.
\[
E\{\hat{x}(t)\,\hat{x}^*(t)\} = \sum_{n=1}^{\infty}\sum_{m=1}^{\infty} E\{b_n\,b_m^*\}\,\varphi_n(t)\,\varphi_m^*(t)
= \sum_{n=1}^{\infty} \lambda_n\,|\varphi_n(t)|^2
\]
Therefore
\[
E\{|x(t)-\hat{x}(t)|^2\}
= E\{x\,x^*\} - E\{x\,\hat{x}^*\} - E\{\hat{x}\,x^*\} + E\{\hat{x}\,\hat{x}^*\}
= R(t,t) - \sum_{n=1}^{\infty} \lambda_n\,|\varphi_n(t)|^2,\qquad a \le t \le b
\]
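The convergence criterion R(t,t) = Σ λ_n |φ_n(t)|² can be checked numerically. The sketch below uses the standard closed-form eigenpairs for Brownian motion on [0,1] (assumed here, not derived in the slides): R(t,t) = t, φ_n(t) = √2 sin((n − ½)πt), λ_n = 1/((n − ½)π)²; the grid and truncation order are arbitrary choices.

```python
import numpy as np

# Truncated Mercer sum for Brownian motion: sum_n lambda_n phi_n(t)^2
# should approach R(t,t) = t as the truncation order N grows.
t = np.linspace(0.0, 1.0, 11)

def mercer_sum(t, N):
    n = np.arange(1, N + 1)[:, None]
    lam = 1.0 / ((n - 0.5) * np.pi) ** 2
    phi = np.sqrt(2.0) * np.sin((n - 0.5) * np.pi * t[None, :])
    return (lam * phi**2).sum(axis=0)

err = np.max(np.abs(mercer_sum(t, 2000) - t))
print(err)
```

The tail of the sum decays like 1/N, so the maximum error over the grid shrinks steadily with the truncation order, consistent with mean-square convergence of the expansion.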
Table of Content
References:
SOLO
http://en.wikipedia.org/wiki/Category:Stochastic_processes
http://en.wikipedia.org/wiki/Category:Stochastic_differential_equations
Papoulis, A., “Probability, Random Variables, and Stochastic Processes”, McGraw Hill, 1965, Ch. 14 and 15
Sage, A.P. and Melsa, J.L., “Estimation Theory with Applications to Communications and Control”, McGraw Hill, 1971
McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974
Maybeck, P.S., “Stochastic Systems Estimation and Control”, Academic Press, Mathematics in Science and Engineering, Volume 141-2, 1982, Ch. 11 and 12
Stochastic Processes
Table of Content
Jazwinski, A.H., “Stochastic Processes and Filtering Theory”, Academic Press, 1970
January 12, 2015
SOLO

Technion – Israel Institute of Technology
1964 – 1968 BSc EE
1968 – 1971 MSc EE

Israeli Air Force
1970 – 1974

RAFAEL – Israeli Armament Development Authority
1974 – 2013

Stanford University
1983 – 1986 PhD AA
Functional Analysis
SOLO

Riemann Integral
http://en.wikipedia.org/wiki/Riemann_integral

In the Riemann Integral we divide the interval [a,b] into n non-overlapping subintervals, which decrease as n increases. The value f(t_i) is computed inside each subinterval:
\[
\int_a^b f(t)\,dt = \lim_{n\to\infty}\sum_{i=0}^{n-1} f(t_i)\,(x_{i+1}-x_i),\qquad
a = x_0 < t_0 < x_1 < t_1 < x_2 < \cdots < x_{n-1} < t_{n-1} < x_n = b,\qquad
\delta = |x_{i+1}-x_i| < \varepsilon
\]

[Figure: a partition of [a,b] with sample points \(t_i \in [x_i, x_{i+1}]\)]

The Riemann Integral is not always defined; for example:
\[
f(x) = \begin{cases} 2 & x\ \text{rational}\\ 3 & x\ \text{irrational} \end{cases}
\]
The Riemann Integral of this function is not defined.

Georg Friedrich Bernhard Riemann (1826 – 1866)
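The Riemann sum above can be sketched directly in code. This is a minimal illustration (the integrand t² and the midpoint choice of t_i are my own assumptions): the sums approach ∫₀¹ t² dt = 1/3 as n grows.

```python
# Riemann sum: partition [a,b] into n subintervals, evaluate f at a
# point t_i inside each, and sum f(t_i) * (x_{i+1} - x_i).
def riemann_sum(f, a, b, n):
    h = (b - a) / n                      # uniform partition width
    return sum(f(a + (i + 0.5) * h) * h for i in range(n))  # midpoints t_i

for n in (10, 100, 1000):
    print(n, riemann_sum(lambda t: t * t, 0.0, 1.0, n))
```

Any choice of t_i inside [x_i, x_{i+1}] works in the limit; the midpoint is used here only because it converges quickly for smooth integrands.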
Integration
SOLO Stochastic Processes

Riemann–Stieltjes integral

Thomas Joannes Stieltjes (1856 – 1894)
Bernhard Riemann (1826 – 1866)

The Stieltjes integral is a generalization of the Riemann integral. Let f(x) and α(x) be real-valued functions defined on the closed interval [a,b]. Take a partition of the interval
\[
a = x_0 < x_1 < \cdots < x_n = b
\]
and consider the Riemann sum
\[
\sum_{i=1}^{n} f(\xi_i)\left[\alpha(x_i) - \alpha(x_{i-1})\right],\qquad \xi_i \in [x_{i-1}, x_i]
\]
If the sum tends to a fixed number I as max(x_i − x_{i−1}) → 0, then I is called a Stieltjes integral or a Riemann–Stieltjes integral. The Stieltjes integral of f with respect to α is denoted
\[
\int f(x)\,d\alpha(x)\qquad\text{or sometimes simply}\qquad \int f\,d\alpha
\]
If f and α have a common point of discontinuity, then the integral does not exist. However, if f is continuous and α′ is Riemann integrable over the specified interval, then
\[
\int f(x)\,d\alpha(x) = \int f(x)\,\alpha'(x)\,dx,\qquad \alpha'(x) := \frac{d\,\alpha(x)}{d\,x}
\]
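The Riemann–Stieltjes sum can be sketched the same way. This illustration assumes f(t) = t and α(t) = t² on [0,1] (my own example choices); by the reduction ∫ f dα = ∫ f α′ dt, the value should approach ∫₀¹ t · 2t dt = 2/3.

```python
# Riemann-Stieltjes sum: sum f(xi_i) * (alpha(x_i) - alpha(x_{i-1}))
# over a uniform partition, with xi_i taken at the midpoints.
def stieltjes_sum(f, alpha, a, b, n):
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(f((xs[i - 1] + xs[i]) / 2) * (alpha(xs[i]) - alpha(xs[i - 1]))
               for i in range(1, n + 1))

print(stieltjes_sum(lambda t: t, lambda t: t * t, 0.0, 1.0, 1000))
```

With α(t) = t the sum reduces to the ordinary Riemann sum, which is one way to see Stieltjes integration as a strict generalization.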
Functional Analysis
SOLO

Lebesgue Integral

Measure

The main idea of the Lebesgue integral is the notion of Measure.

Definition 1: E(M) ⊂ [a,b] is the region in x ∈ [a,b] for which f(x) > M.

Definition 2: μ[E(M)], the measure of E(M), is
\[
\mu[E(M)] = \int_{E(M)} dx \ge 0
\]
We can see that μ[E(M)] is the sum of lengths on the x axis for which f(x) > M.

[Figure: y = f(x) with level sets E(y_k), E(y_m); for the jumps M₁ and M₂ we can see that μ[E(M₁)] = μ[E(M₂)] = 0]

Example: Let us find the measure of the rational numbers (ratios of integers), which are countable:
\[
r_1 = \tfrac{1}{2},\quad r_2 = \tfrac{1}{3},\quad r_3 = \tfrac{2}{3},\quad r_4 = \tfrac{1}{4},\quad r_5 = \tfrac{3}{4},\;\dots,\; r_k = \tfrac{m}{n},\;\dots
\]
Since the rational numbers are countable, we can choose ε > 0 as small as we want and construct an open interval of length ε/2 centered around r₁, an interval of ε/2² centered around r₂, …, an interval of ε/2ᵏ centered around r_k, …
\[
\mu[E(\text{rationals})] \le \frac{\varepsilon}{2} + \frac{\varepsilon}{2^2} + \cdots + \frac{\varepsilon}{2^k} + \cdots = \varepsilon
\]
\[
\varepsilon \to 0 \;\Rightarrow\; \mu[E(\text{rationals})] = 0
\]
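The covering bound above is just a geometric series, which a few lines of code make concrete (the value of ε is an arbitrary assumption):

```python
# Intervals of length eps/2, eps/4, ..., eps/2^k cover r_1, r_2, ...;
# their total length is eps * (1/2 + 1/4 + ...) <= eps, for any eps > 0.
eps = 1e-6
total = sum(eps / 2**k for k in range(1, 60))
print(total)
```

Since the bound holds for every ε > 0, the measure of the rationals is forced to zero.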
Functional Analysis
SOLO

Lebesgue Integral

Henri Léon Lebesgue (1875 – 1941)

A function y = f(x) is said to be measurable if the set of points x at which f(x) < c is measurable for any and all choices of the constant c.

The Lebesgue Integral for a measurable function f(x) is defined as:
\[
\int_a^b f(t)\,dt = \lim_{n\to\infty}\sum_{i=1}^{n} y_i\left(\mu[E(y_{i-1})] - \mu[E(y_i)]\right),\qquad
\inf_{a\le x\le b} f(x) = y_0 < y_1 < \cdots < y_n = \sup_{a\le x\le b} f(x)
\]

[Figure: y = f(x) on [a,b] partitioned by levels y₀ < y₁ < ⋯ < y_n, with measures μ[E(y_{k−1})], μ[E(y_k)], μ[E(y_{k+1})]]

Example
\[
f(x) = \begin{cases} 2 & x\ \text{rational}\\ 3 & x\ \text{irrational} \end{cases},\qquad 0 \le x \le 1
\]
\[
\int_0^1 f(x)\,dx = \int_{E(\text{rationals})} f(x)\,dx + \int_{E(\text{irrationals})} f(x)\,dx
= 2\cdot 0 + 3\cdot(1-0) = 3
\]
For a continuous function the Riemann and Lebesgue integrals give the same results.
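The level-set definition above can be rendered numerically. This is a rough sketch only (my own construction): the measure μ[E(y)] is estimated as the fraction of a fine grid where f(x) > y, an assumption that works for well-behaved f, and the integral is assembled from the level sums y_i (μ[E(y_{i−1})] − μ[E(y_i)]).

```python
import numpy as np

# Numerical rendering of the Lebesgue level-set sum. mu[E(y)] is
# estimated on a grid; the integral is sum y_i * (mu[y_{i-1}] - mu[y_i]).
def lebesgue_integral(f, a, b, n_levels=1000, n_grid=100_000):
    x = np.linspace(a, b, n_grid)
    fx = f(x)
    ys = np.linspace(fx.min(), fx.max(), n_levels + 1)
    mu = np.array([(fx > y).mean() * (b - a) for y in ys])  # estimated measure
    return float(np.sum(ys[1:] * (mu[:-1] - mu[1:])))

val = lebesgue_integral(np.sin, 0.0, np.pi)
print(val)   # close to the Riemann value 2, as expected for continuous f
```

For a continuous integrand like sin the result agrees with the Riemann integral, matching the closing remark of the slide.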
Integration
SOLO Stochastic Processes

Lebesgue–Stieltjes integration

Thomas Joannes Stieltjes (1856 – 1894)
Henri Léon Lebesgue (1875 – 1941)
Johann Karl August Radon (1887 – 1956)

In measure-theoretic analysis and related branches of mathematics, Lebesgue–Stieltjes integration generalizes Riemann–Stieltjes and Lebesgue integration, preserving the many advantages of the latter in a more general measure-theoretic framework.

Let α(x) be a monotonically increasing function of x, and define an interval I = (x₁, x₂). Define the nonnegative function
\[
U(I) = \alpha(x_2) - \alpha(x_1)
\]
The Lebesgue integral with respect to a measure constructed using U(I) is called the Lebesgue–Stieltjes integral, or sometimes the Lebesgue–Radon integral.
Integration
SOLO Stochastic Processes

Darboux Integral

Jean-Gaston Darboux (1842 – 1917)

[Figure: lower (green) and upper (green plus lavender) Darboux sums for four subintervals]

In real analysis, a branch of mathematics, the Darboux integral or Darboux sum is one possible definition of the integral of a function. Darboux integrals are equivalent to Riemann integrals, meaning that a function is Darboux-integrable if and only if it is Riemann-integrable, and the values of the two integrals, if they exist, are equal. Darboux integrals have the advantage of being simpler to define than Riemann integrals. Darboux integrals are named after their discoverer, Gaston Darboux.

Definition

A partition of an interval [a,b] is a finite sequence of values x_i such that
\[
a = x_0 < x_1 < \cdots < x_n = b
\]
Each interval [x_{i−1}, x_i] is called a subinterval of the partition. Let ƒ:[a,b]→R be a bounded function, and let \(P = (x_0, x_1,\dots,x_n)\) be a partition of [a,b]. Let
\[
M_i := \sup_{x\in[x_{i-1},x_i]} f(x);\qquad m_i := \inf_{x\in[x_{i-1},x_i]} f(x)
\]
The upper Darboux sum of ƒ with respect to P is
\[
U_{f,P} := \sum_{i=1}^{n}(x_i - x_{i-1})\,M_i
\]
The lower Darboux sum of ƒ with respect to P is
\[
L_{f,P} := \sum_{i=1}^{n}(x_i - x_{i-1})\,m_i
\]
Integration
SOLO Stochastic Processes

Darboux Integral (continue – 1)

Jean-Gaston Darboux (1842 – 1917)

[Figure: lower (green) and upper (green plus lavender) Darboux sums for four subintervals]

The upper Darboux sum of ƒ with respect to P is
\[
U_{f,P} := \sum_{i=1}^{n}(x_i - x_{i-1})\,M_i
\]
The lower Darboux sum of ƒ with respect to P is
\[
L_{f,P} := \sum_{i=1}^{n}(x_i - x_{i-1})\,m_i
\]
The upper Darboux integral of ƒ is
\[
U_f = \inf\{\,U_{f,P} : P\ \text{is a partition of}\ [a,b]\,\}
\]
The lower Darboux integral of ƒ is
\[
L_f = \sup\{\,L_{f,P} : P\ \text{is a partition of}\ [a,b]\,\}
\]
If U_f = L_f, then we say that ƒ is Darboux-integrable and set
\[
\int_a^b f(t)\,dt = U_f = L_f
\]
the common value of the upper and lower Darboux integrals.
Integration
SOLO Stochastic Processes

Lebesgue Integration

Henri Léon Lebesgue (1875 – 1941)
Jean-Gaston Darboux (1842 – 1917)
Bernhard Riemann (1826 – 1866)

[Figure: illustration of a Riemann integral (blue) and a Lebesgue integral (red)]
[Figure: Riemann Integral – a sequence of Riemann sums; the numbers in the upper right are the areas of the grey rectangles, which converge to the integral of the function]
[Figure: Darboux Integral – lower (green) and upper (green plus lavender) Darboux sums for four subintervals]
SOLO Stochastic Processes

Richard Snowden Bucy
Andrew James Viterbi (1935 – )
Harold J. Kushner (1932 – )
Moshe Zakai (1926 – )
José Enrique Moyal (1910 – 1998)
Rudolf E. Kálmán (1930 – )
Maurice Stevenson Bartlett (1910 – 2002)
George Eugène Uhlenbeck (1900 – 1988)
Leonard Salomon Ornstein (1880 – 1941)
Bernard Osgood Koopman (1900 – 1981)
Edwin James George Pitman (1897 – 1993)
Georges Darmois (1888 – 1960)