
# Introduction to Stochastic Processes

Eduardo Rossi, University of Pavia

October 2013

Rossi Introduction to Stochastic Processes Financial Econometrics - 2013 1 / 28

Stochastic Process

A stochastic process is an ordered sequence of random variables defined on a probability space $(\Omega, \mathcal{F}, P)$:

$$\{Y_t(\omega),\ \omega \in \Omega,\ t \in T\}$$

such that for each $t \in T$, $Y_t(\cdot)$ is a random variable on the sample space $\Omega$, and for each $\omega \in \Omega$, $Y_\cdot(\omega)$ is a realization of the stochastic process on the index set $T$ (an ordered set of values, each corresponding to a point of the index, typically time).

Time Series

A time series is a set of observations $\{Y_t,\ t \in T_0\}$, each one recorded at a specified time $t$. The time series is part of a realization of a stochastic process $\{Y_t,\ t \in T\}$, where $T_0 \subset T$. An infinite series is written $\{Y_t\}_{t=-\infty}^{\infty}$.


Stochastic Process

A statistical time series is a mathematical sequence, not a series.

$\{Y_t\}$ is assumed to be real valued, but we shall also consider sequences of random vectors $\{Y_t\}$, or variables with values in the complex numbers. We are interested in the joint distribution of the variables $\{Y_t\}$. The easiest way to ensure that these variables possess joint laws is to assume that they are defined as measurable maps on a single underlying probability space.

One of the purposes of time series analysis is to study and characterize probability distributions of sets of variables $\{Y_t\}$ that will typically be dependent.


Stochastic Process

The statistical problem is to determine the probability distribution of the time series given observations $Y_1, Y_2, \ldots, Y_T$.

The statistical model can be used for

1. understanding the stochastic system;
2. predicting the future, i.e. $\{Y_{T+1}, Y_{T+2}, \ldots\}$.

In order to have any chance of success at these tasks it is necessary to assume some a priori structure of the time series.

If the $\{Y_t\}$ could be completely arbitrary random variables, then $\{Y_1, \ldots, Y_T\}$ would constitute a single observation from a completely unknown distribution on $\mathbb{R}^T$.


Mean, Variance and Autocovariance

The unconditional mean:

$$\mu_t = E[Y_t] = \int Y_t\, f(Y_t)\, dY_t$$

Autocovariance function: the joint distribution of $(Y_t, Y_{t-1}, \ldots, Y_{t-h})$ is usually characterized by the autocovariance function:

$$\begin{aligned}
\gamma_t(h) &= \mathrm{Cov}(Y_t, Y_{t-h}) \\
&= E[(Y_t - \mu_t)(Y_{t-h} - \mu_{t-h})] \\
&= \int \cdots \int (Y_t - \mu_t)(Y_{t-h} - \mu_{t-h})\, f(Y_t, \ldots, Y_{t-h})\, dY_t \cdots dY_{t-h}
\end{aligned}$$

The autocorrelation function:

$$\rho_t(h) = \frac{\gamma_t(h)}{\sqrt{\gamma_t(0)\,\gamma_{t-h}(0)}}$$
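In practice these population quantities are replaced by sample analogues. A minimal Python sketch (an illustration added here, not part of the slides; the estimators divide by the full sample size, a common convention):

```python
import numpy as np

def sample_autocov(y, h):
    """Sample autocovariance: mean of (y_t - ybar)(y_{t-h} - ybar)."""
    y = np.asarray(y, dtype=float)
    ybar = y.mean()
    return np.mean((y[h:] - ybar) * (y[:len(y) - h] - ybar))

def sample_autocorr(y, h):
    """Sample autocorrelation: gamma_hat(h) / gamma_hat(0)."""
    return sample_autocov(y, h) / sample_autocov(y, 0)

rng = np.random.default_rng(0)
y = rng.standard_normal(10_000)         # white noise: rho(h) ~ 0 for h > 0
print(round(sample_autocorr(y, 1), 2))  # close to 0
```

For an uncorrelated series the estimated lag-1 autocorrelation is close to zero, as the later white-noise definition requires.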


Mean, Variance and Autocovariance

Both functions are symmetric about $h = 0$.

The autocovariance and autocorrelation functions are measures of (linear) dependence between the variables at different time instants.


Stationarity

A basic type of structure is stationarity. This comes in two forms.

Strict Stationarity

The process is said to be strictly stationary if, for any values of $h_1, h_2, \ldots, h_n$, the joint distribution of $(Y_t, Y_{t+h_1}, \ldots, Y_{t+h_n})$ depends only on the intervals $h_1, h_2, \ldots, h_n$ but not on the date $t$ itself:

$$f(Y_t, Y_{t+h_1}, \ldots, Y_{t+h_n}) = f(Y_\tau, Y_{\tau+h_1}, \ldots, Y_{\tau+h_n}) \quad \forall\, t, \tau, h_1, \ldots, h_n$$

Strict stationarity implies that all existing moments are time invariant.


Stationarity

Weak (Covariance) Stationarity

The process $Y_t$ is said to be weakly stationary or covariance stationary if the first and second moments of the process are time invariant:

$$E[Y_t] = \mu \quad \forall\, t$$

$$E[(Y_t - \mu)(Y_{t-h} - \mu)] = \gamma(h) \quad \forall\, t,\ \forall\, h$$

with, in particular, $\gamma(0) < \infty$.

Ergodicity

The statistical ergodicity theorem concerns what information can be derived from an average over time about the common average at each point of time. Note that the WLLN does not apply, as the observed time series represents just one realization of the stochastic process.

Ergodic for the mean

Let $\{Y_t(\omega),\ \omega \in \Omega,\ t \in T\}$ be a weakly stationary process, such that $E[Y_t(\omega)] = \mu$. The process is ergodic for the mean if the time average converges to the ensemble mean:

$$\bar{y}_T = \frac{1}{T}\sum_{t=1}^{T} Y_t \stackrel{p}{\longrightarrow} \mu$$

Ergodicity

To be ergodic, the memory of a stochastic process should fade, in the sense that the covariance between increasingly distant observations converges to zero sufficiently rapidly.

For stationary processes it can be shown that absolutely summable autocovariances, i.e.

$$\sum_{h=0}^{\infty} |\gamma(h)| < \infty,$$

are a sufficient condition for ergodicity for the mean.
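For instance (an illustration added here, not in the slides), a stationary AR(1) process with $|\phi| < 1$ has $\gamma(h) = \sigma^2\phi^{h}/(1-\phi^2)$, which is absolutely summable, so the time average of a single realization converges to the common mean:

```python
import numpy as np

rng = np.random.default_rng(1)
phi, sigma, T = 0.7, 1.0, 200_000                    # illustrative values
u = rng.normal(0.0, sigma, size=T)
y = np.empty(T)
y[0] = rng.normal(0.0, sigma / np.sqrt(1 - phi**2))  # draw from the stationary law
for t in range(1, T):
    y[t] = phi * y[t - 1] + u[t]                     # AR(1): y_t = phi*y_{t-1} + u_t
# gamma(h) = sigma^2 * phi^h / (1 - phi^2) is absolutely summable,
# so the time average of this one realization converges to mu = 0.
print(round(y.mean(), 2))  # close to 0
```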

Example

Deterministic trigonometric series:

$$Y_t = A\cos(\lambda t) + B\sin(\lambda t)$$

with $E[A] = E[B] = 0$, $E[A^2] = E[B^2] = \sigma^2$, $E[AB] = 0$, defines a stationary time series:

$$E[Y_t] = E[A\cos(\lambda t) + B\sin(\lambda t)] = 0$$

The autocovariance function:

$$\begin{aligned}
\gamma(h) &= E[Y_t Y_{t+h}] \\
&= \cos(\lambda(t+h))\cos(\lambda t)\,\mathrm{Var}[A] + \sin(\lambda(t+h))\sin(\lambda t)\,\mathrm{Var}[B] \\
&= \sigma^2 \cos(\lambda h)
\end{aligned}$$

Once $A$ and $B$ have been drawn, the process behaves as a deterministic trigonometric function. This type of time series is an important building block to model cyclic events in a system.
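The result $\gamma(h) = \sigma^2\cos(\lambda h)$ can be checked by simulation (an illustration added here, with arbitrary values of $\lambda$ and $h$; the expectation is approximated by averaging over many independent draws of $(A, B)$):

```python
import numpy as np

rng = np.random.default_rng(2)
lam, sigma, h = 0.5, 1.0, 3
t = np.arange(500)
n_draws = 4000                                  # independent draws of (A, B)
A = rng.normal(0.0, sigma, size=(n_draws, 1))
B = rng.normal(0.0, sigma, size=(n_draws, 1))
Y = A * np.cos(lam * t) + B * np.sin(lam * t)   # one row per realization
gamma_h = np.mean(Y[:, :-h] * Y[:, h:])         # estimate of gamma(h)
print(round(gamma_h, 2), round(sigma**2 * np.cos(lam * h), 2))
```

The two printed numbers agree: the estimated autocovariance matches $\sigma^2\cos(\lambda h)$.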


Example

Consider the stochastic process $\{Y_t\}$ defined by

$$Y_t = \begin{cases} u_0 & t = 0, \text{ with } u_0 \sim N(0, \sigma^2) \\ Y_{t-1} & t > 0 \end{cases}$$

Then $\{Y_t\}$ is strictly stationary but not ergodic.

Proof. Obviously we have that $Y_t = u_0$ for all $t \geq 0$. Stationarity follows from:

$$E[Y_t] = E[u_0] = 0$$

$$E[Y_t^2] = E[u_0^2] = \sigma^2$$

$$E[Y_t Y_{t-1}] = E[u_0^2] = \sigma^2$$


Example

Thus $\mu = 0$, $\gamma(h) = \sigma^2$ and $\rho(h) = 1$ are time invariant. Ergodicity for the mean would require the time average to converge to $\mu = 0$, but

$$\bar{y}_T = T^{-1}\sum_{t=0}^{T-1} Y_t = T^{-1}(T u_0) = u_0,$$

which is a random variable, not the constant $\mu$: a single realization never reveals the ensemble mean.
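The failure of ergodicity is easy to see numerically (a small illustration added here):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, T = 1.0, 10_000
u0 = rng.normal(0.0, sigma)
y = np.full(T, u0)           # Y_t = u_0 for every t >= 0
ybar = y.mean()              # the time average is u_0 itself, not mu = 0
print(np.isclose(ybar, u0))  # True: averaging over time reveals nothing new
```

No matter how long the sample, the time average stays at the single draw $u_0$.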


Random walk

$$Y_t = Y_{t-1} + u_t$$

where $u_t \sim WN(0, \sigma^2)$. By recursive substitution,

$$\begin{aligned}
Y_t &= Y_{t-1} + u_t \\
    &= Y_{t-2} + u_{t-1} + u_t \\
    &\;\;\vdots \\
    &= Y_0 + u_t + u_{t-1} + u_{t-2} + \ldots + u_1
\end{aligned}$$

The mean is time-invariant:

$$\mu = E[Y_t] = E\left[Y_0 + \sum_{s=1}^{t} u_s\right] = Y_0 + \sum_{s=1}^{t} E[u_s] = Y_0,$$

which equals $0$ when the process starts at $Y_0 = 0$.


Random walk

But the second moments are diverging. Taking $Y_0 = 0$, the variance is given by:

$$\begin{aligned}
\gamma_t(0) = E[Y_t^2] &= E\left[\left(\sum_{s=1}^{t} u_s\right)^2\right]
             = E\left[\sum_{s=1}^{t}\sum_{k=1}^{t} u_s u_k\right] \\
            &= E\left[\sum_{s=1}^{t} u_s^2 + \sum_{s=1}^{t}\sum_{\substack{k=1 \\ k \neq s}}^{t} u_s u_k\right] \\
            &= \sum_{s=1}^{t} E[u_s^2] + \sum_{s=1}^{t}\sum_{\substack{k=1 \\ k \neq s}}^{t} E[u_s u_k]
             = \sum_{s=1}^{t} \sigma^2 = t\sigma^2.
\end{aligned}$$
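A quick simulation check of $\gamma_t(0) = t\sigma^2$ (an illustration added here; parameter values arbitrary, using the cross-sectional variance over many independent paths):

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, T, n_paths = 1.0, 500, 5_000
u = rng.normal(0.0, sigma, size=(n_paths, T))
Y = u.cumsum(axis=1)        # Y_t = u_1 + ... + u_t, taking Y_0 = 0
var_T = Y[:, -1].var()      # cross-sectional variance at t = T
print(round(var_T / T, 1))  # close to sigma^2 = 1
```

The variance at time $T$ grows linearly in $T$, so dividing by $T$ recovers $\sigma^2$.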


Random walk

The autocovariances are (again with $Y_0 = 0$):

$$\begin{aligned}
\gamma_t(h) = E[Y_t Y_{t-h}] &= E\left[\left(\sum_{s=1}^{t} u_s\right)\left(\sum_{k=1}^{t-h} u_k\right)\right] \\
            &= \sum_{k=1}^{t-h} E[u_k^2] = \sum_{k=1}^{t-h} \sigma^2 = (t-h)\sigma^2, \quad h > 0.
\end{aligned}$$

Finally, the autocorrelation function $\rho_t(h)$ for $h > 0$ is given by:

$$\rho_t^2(h) = \frac{\gamma_t^2(h)}{\gamma_t(0)\,\gamma_{t-h}(0)} = \frac{[(t-h)\sigma^2]^2}{[t\sigma^2][(t-h)\sigma^2]} = 1 - \frac{h}{t}, \quad h > 0.$$


White Noise Process

A white-noise process is a weakly stationary process which has zero mean and is uncorrelated over time:

$$u_t \sim WN(0, \sigma^2)$$

Thus $u_t$ is a WN process if, $\forall\, t \in T$:

$$E[u_t] = 0$$

$$E[u_t^2] = \sigma^2$$

$$E[u_t u_s] = 0 \quad \forall\, s \neq t$$

The assumption of normality implies strict stationarity and serial independence (unpredictability). A generalization of the NID process is the IID process, with constant but unspecified higher moments. A process $u_t$ with independent, identically distributed variates is denoted IID:

$$u_t \sim IID(0, \sigma^2)$$


Martingale

The stochastic process $x_t$ is said to be a martingale with respect to an information set ($\sigma$-field) $I_{t-1}$, of data realized by time $t-1$, if

$$E[|x_t|] < \infty, \qquad E[x_t \mid I_{t-1}] = x_{t-1} \quad \text{a.s.}$$

Martingale Difference Sequence

The process $u_t = x_t - x_{t-1}$, with $E[|u_t|] < \infty$, is a martingale difference (m.d.) sequence if

$$E[u_t \mid I_{t-1}] = 0 \quad \text{a.s.}$$
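A driftless random walk is the canonical martingale: conditional on the history, the expected next value equals the current value. A small simulation sketch (added for illustration; the realized value $x_{t-1} = 3.7$ is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
x_prev = 3.7                       # hypothetical realized value of x_{t-1}
# x_t = x_{t-1} + eps_t with E[eps_t] = 0, so E[x_t | I_{t-1}] = x_{t-1}:
# approximate the conditional expectation by many draws of x_t given the history.
next_vals = x_prev + rng.standard_normal(100_000)
print(round(next_vals.mean(), 1))  # close to x_prev = 3.7
```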

Martingale difference

Let $\{x_t\}$ be an m.d. sequence and let $g_{t-1} = g(x_{t-1}, x_{t-2}, \ldots)$ be any measurable, integrable function of the lagged values of the sequence. Then $x_t g_{t-1}$ is an m.d., and

$$\mathrm{Cov}[g_{t-1}, x_t] = E[g_{t-1} x_t] - E[g_{t-1}]\,E[x_t] = 0$$

This implies that, putting $g_{t-1} = x_{t-j}$, $j > 0$:

$$\mathrm{Cov}[x_t, x_{t-j}] = 0$$

The m.d. property therefore implies uncorrelatedness of the sequence. White noise processes, however, need not be m.d., because the conditional expectation of an uncorrelated process can be a nonlinear function of the past. An example is the bilinear model (Granger and Newbold, 1986):

$$x_t = \beta x_{t-2}\,\epsilon_{t-1} + \epsilon_t, \qquad \epsilon_t \sim \text{i.i.d.}(0, 1)$$

The model is white noise when $0 < \beta < \frac{1}{2}$.
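This can be checked by simulation (an illustration added here, with the arbitrary choice $\beta = 0.4$): the sample autocorrelation is near zero, yet the one-step-ahead conditional mean $\beta x_{t-2}\epsilon_{t-1}$ is clearly correlated with $x_t$, so the process is predictable despite being white noise.

```python
import numpy as np

rng = np.random.default_rng(6)
beta, T = 0.4, 100_000                 # 0 < beta < 1/2, so x_t is white noise
eps = rng.standard_normal(T)
x = np.zeros(T)
for t in range(2, T):
    x[t] = beta * x[t - 2] * eps[t - 1] + eps[t]
xc = x - x.mean()
rho1 = np.mean(xc[1:] * xc[:-1]) / np.mean(xc**2)  # lag-1 sample autocorrelation
pred = beta * x[:-2] * eps[1:-1]                   # E[x_t | I_{t-1}] for t = 2..T-1
corr = np.corrcoef(x[2:], pred)[0, 1]
print(round(rho1, 2), round(corr, 1))  # serially uncorrelated, yet predictable
```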


The conditional expectation is the following nonlinear function of past observations:

$$E[x_t \mid I_{t-1}] = \sum_{i=1}^{\infty} (-1)^{i+1} \beta^i\, x_{t-i} \prod_{j=2}^{i+1} x_{t-j}$$

Uncorrelated, zero-mean processes:

- White Noise processes
  - IID processes
    - Gaussian White Noise processes
- Martingale Difference processes


Innovation

An innovation $\{u_t\}$ against an information set $I_{t-1}$ is a process whose density $f(u_t \mid I_{t-1})$ does not depend on $I_{t-1}$.

Mean Innovation

$\{u_t\}$ is a mean innovation with respect to an information set $I_{t-1}$ if $E[u_t \mid I_{t-1}] = 0$.
