
Introduction to ARMA and GARCH processes

Fulvio Corsi

SNS Pisa

3 March 2010


Stationarity

Strict stationarity:

$(X_1, X_2, \ldots, X_n) \stackrel{d}{=} (X_{1+k}, X_{2+k}, \ldots, X_{n+k})$ for any integer $n \geq 1$ and any integer $k$

Weak/second-order/covariance stationarity:

$E[X_t] = \mu$ (constant and independent of $t$)

$E[(X_t - \mu)^2] = \sigma^2 < +\infty$ (constant and independent of $t$)

$E[(X_t - \mu)(X_{t+k} - \mu)] = \gamma(|k|)$ (independent of $t$ for each $k$)

Interpretation:

mean and variance are constant
mean reversion
shocks are transient
covariance between $X_t$ and $X_{t-k}$ tends to 0 as $k \to \infty$


White noise

weak (uncorrelated)

$E(\varepsilon_t) = 0 \quad \forall t$

$V(\varepsilon_t) = \sigma^2 \quad \forall t$

$\rho(\varepsilon_t, \varepsilon_s) = 0 \quad \forall s \neq t$, where $\rho \equiv \gamma(|t-s|)/\gamma(0)$

strong (independence)

$\varepsilon_t \sim \text{I.I.D.}(0, \sigma^2)$

Gaussian (weak = strong)

$\varepsilon_t \sim \text{N.I.D.}(0, \sigma^2)$


Lag operator

The lag operator is defined as $L X_t \equiv X_{t-1}$.

It is a linear operator:

$L(\beta X_t) = \beta \cdot L X_t = \beta X_{t-1}$

$L(X_t + Y_t) = L X_t + L Y_t = X_{t-1} + Y_{t-1}$

and it admits powers, for instance:

$L^2 X_t = L(L X_t) = L X_{t-1} = X_{t-2}$

$L^k X_t = X_{t-k}$

$L^{-1} X_t = X_{t+1}$

Some examples:

$\Delta X_t = X_t - X_{t-1} = X_t - L X_t = (1 - L) X_t$

$y_t = (\theta_1 + \theta_2 L) L X_t = (\theta_1 L + \theta_2 L^2) X_t = \theta_1 X_{t-1} + \theta_2 X_{t-2}$

Expressions like $(\theta_0 + \theta_1 L + \theta_2 L^2 + \ldots + \theta_n L^n)$, with possibly $n = \infty$, are called lag polynomials and are denoted $\theta(L)$.

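As a concrete illustration, here is a minimal numpy sketch (not from the slides) that applies the lag operator and a simple lag polynomial to a series; the helper name `lag` and the coefficient values are arbitrary choices for the example.

```python
import numpy as np

def lag(x, k=1):
    """Apply the lag operator L^k to a series: (L^k x)_t = x_{t-k}.
    The first k entries are undefined and returned as NaN."""
    x = np.asarray(x, dtype=float)
    if k == 0:
        return x.copy()
    out = np.full_like(x, np.nan)
    out[k:] = x[:-k]
    return out

x = np.arange(1.0, 11.0)                 # x_t = t for t = 1..10

diff_x = x - lag(x, 1)                   # (1 - L) x_t = x_t - x_{t-1}
y = 0.5 * lag(x, 1) + 0.2 * lag(x, 2)    # (theta_1 L + theta_2 L^2) x_t

print(diff_x)   # first differences (NaN at t = 1)
print(y)        # theta_1*x_{t-1} + theta_2*x_{t-2}
```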

Moving Average (MA) process

The simplest way to construct a stationary process is to use a lag polynomial $\theta(L)$ with $\sum_{j=0}^{\infty} \theta_j^2 < \infty$ to build a sort of "weighted moving average" of white noises $\varepsilon_t$, i.e.

MA(q): $Y_t = \theta(L)\varepsilon_t = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2} + \ldots + \theta_q \varepsilon_{t-q}$

Example, MA(1): $Y_t = \varepsilon_t + \theta \varepsilon_{t-1} = (1 + \theta L)\varepsilon_t$

Given that $E[Y_t] = 0$:

$\gamma(0) = E[Y_t Y_t] = E[(\varepsilon_t + \theta \varepsilon_{t-1})(\varepsilon_t + \theta \varepsilon_{t-1})] = \sigma^2(1 + \theta^2);$

$\gamma(1) = E[Y_t Y_{t-1}] = E[(\varepsilon_t + \theta \varepsilon_{t-1})(\varepsilon_{t-1} + \theta \varepsilon_{t-2})] = \sigma^2 \theta;$

$\gamma(k) = E[Y_t Y_{t-k}] = E[(\varepsilon_t + \theta \varepsilon_{t-1})(\varepsilon_{t-k} + \theta \varepsilon_{t-k-1})] = 0 \quad \forall k > 1$

and,

$\rho(1) = \dfrac{\gamma(1)}{\gamma(0)} = \dfrac{\theta}{1 + \theta^2}$

$\rho(k) = \dfrac{\gamma(k)}{\gamma(0)} = 0 \quad \forall k > 1$

Hence, while a white noise is "0-correlated", an MA(1) is 1-correlated (i.e. only the first autocorrelation $\rho(1)$ is different from zero).
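A small simulation sketch (my own illustration, assuming numpy and arbitrary parameter values) that checks the MA(1) autocorrelations above against their sample counterparts:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_acf(y, k):
    """Sample autocorrelation at lag k."""
    y = y - y.mean()
    return np.dot(y[k:], y[:-k]) / np.dot(y, y)

theta, sigma, T = 0.6, 1.0, 100_000
eps = rng.normal(0.0, sigma, T + 1)
y = eps[1:] + theta * eps[:-1]                   # MA(1): Y_t = eps_t + theta*eps_{t-1}

print(sample_acf(y, 1), theta / (1 + theta**2))  # both close to ~0.441
print(sample_acf(y, 2))                          # close to 0
```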


Properties MA(q)

In general, for an MA(q) process we have (with $\theta_0 \equiv 1$):

$\gamma(0) = \sigma^2(1 + \theta_1^2 + \theta_2^2 + \ldots + \theta_q^2)$

$\gamma(k) = \sigma^2 \sum_{j=0}^{q-k} \theta_j \theta_{j+k} \quad \forall k \leq q, \qquad \gamma(k) = 0 \quad \forall k > q$

and

$\rho(k) = \dfrac{\sum_{j=0}^{q-k} \theta_j \theta_{j+k}}{1 + \sum_{j=1}^{q} \theta_j^2} \quad \forall k \leq q, \qquad \rho(k) = 0 \quad \forall k > q$

Hence, an MA(q) is q-correlated, and it can also be shown that any stationary q-correlated process can be represented as an MA(q).

Wold Theorem: any mean-zero covariance stationary process can be represented in the form MA(∞) + deterministic component (the two being uncorrelated).

But, given a q-correlated process, is the MA(q) representation unique? In general no: it can be shown that for a q-correlated process there are $2^q$ possible MA(q) with the same autocovariance structure. However, there is only one MA(q) which is invertible.

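The $\rho(k)$ formula above can be implemented directly; the following is a minimal sketch (function name and test values are illustrative, not from the slides):

```python
import numpy as np

def ma_acf(thetas, max_lag):
    """Theoretical autocorrelations rho(1..max_lag) of an MA(q) process
    Y_t = eps_t + theta_1*eps_{t-1} + ... + theta_q*eps_{t-q}."""
    th = np.concatenate(([1.0], np.asarray(thetas, dtype=float)))  # theta_0 = 1
    q = len(th) - 1
    denom = np.sum(th**2)                        # = 1 + sum_j theta_j^2
    rho = []
    for k in range(1, max_lag + 1):
        num = np.sum(th[:q - k + 1] * th[k:]) if k <= q else 0.0
        rho.append(num / denom)
    return np.array(rho)

print(ma_acf([0.6], 3))        # [0.441..., 0, 0]: matches theta/(1 + theta^2)
print(ma_acf([0.5, 0.3], 4))   # nonzero up to lag 2, then exactly 0
```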

Invertibility conditions for MA

First consider the MA(1) case:

$Y_t = (1 + \theta L)\varepsilon_t$

Given the result

$(1 + \theta L)^{-1} = (1 - \theta L + \theta^2 L^2 - \theta^3 L^3 + \theta^4 L^4 + \ldots) = \sum_{i=0}^{\infty} (-\theta L)^i$

inverting the $\theta(L)$ lag polynomial we can write

$(1 - \theta L + \theta^2 L^2 - \theta^3 L^3 + \theta^4 L^4 + \ldots) Y_t = \varepsilon_t$

which can be considered an AR(∞) process.

If an MA process can be written as an AR(∞) of this type, such an MA representation is said to be invertible. For an MA(1) process the invertibility condition is $|\theta| < 1$.

For a general MA(q) process

$Y_t = (1 + \theta_1 L + \theta_2 L^2 + \ldots + \theta_q L^q)\varepsilon_t$

the invertibility condition is that the roots of the lag polynomial

$1 + \theta_1 z + \theta_2 z^2 + \ldots + \theta_q z^q = 0$

lie outside the unit circle. Then the MA(q) can be written as an AR(∞) by inverting $\theta(L)$.

Invertibility also has important practical consequences in applications. In fact, given that the $\varepsilon_t$ are not observable, they have to be reconstructed from the observed $Y$'s through the AR(∞) representation.

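A short sketch of the practical point above: for an invertible MA(1) with known $\theta$, the unobservable innovations can be reconstructed recursively from the observed $Y$'s (numpy assumed, simulated data, arbitrary $\theta$):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, T = 0.6, 2000                      # invertible since |theta| < 1

eps = rng.standard_normal(T)
y = eps + theta * np.concatenate(([0.0], eps[:-1]))   # MA(1) with pre-sample eps = 0

# Reconstruct the innovations from the observed Y's:
# eps_t = Y_t - theta * eps_{t-1}, i.e. the AR(infinity) representation
# eps_t = sum_i (-theta)^i Y_{t-i}, initialised with eps_hat_0 = 0.
eps_hat = np.zeros(T)
for t in range(T):
    eps_hat[t] = y[t] - theta * (eps_hat[t - 1] if t > 0 else 0.0)

# Essentially zero here; any initialisation error would decay as theta^t
print(np.max(np.abs(eps_hat[50:] - eps[50:])))
```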

Auto-Regressive Process (AR)

A general AR process is defined as

$\phi(L) Y_t = \varepsilon_t$

It is always invertible but not always stationary.

Example: AR(1): $(1 - \phi L) Y_t = \varepsilon_t$, or $Y_t = \phi Y_{t-1} + \varepsilon_t$

By inverting the lag polynomial $(1 - \phi L)$, the AR(1) can be written as

$Y_t = (1 - \phi L)^{-1} \varepsilon_t = \sum_{i=0}^{\infty} (\phi L)^i \varepsilon_t = \sum_{i=0}^{\infty} \phi^i \varepsilon_{t-i} = \text{MA}(\infty)$

hence the stationarity condition is that |φ| < 1.

From this representation we can apply the general MA formulas to compute $\gamma(\cdot)$ and $\rho(\cdot)$. In particular,

$\rho(k) = \phi^{|k|} \quad \forall k$

i.e. monotonic exponential decay for $\phi > 0$ and exponentially damped oscillatory decay for $\phi < 0$.

In general, an AR(p) process

$Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \ldots + \phi_p Y_{t-p} + \varepsilon_t$

is stationary if all the roots of the characteristic equation of the lag polynomial

$1 - \phi_1 z - \phi_2 z^2 - \ldots - \phi_p z^p = 0$

are outside the unit circle.

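A quick simulation check of $\rho(k) = \phi^{|k|}$ for an AR(1) (illustrative sketch, numpy assumed, parameter values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
phi, T = 0.8, 200_000                    # stationary since |phi| < 1

y = np.zeros(T)
eps = rng.standard_normal(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + eps[t]       # AR(1): Y_t = phi*Y_{t-1} + eps_t

def sample_acf(x, k):
    x = x - x.mean()
    return np.dot(x[k:], x[:-k]) / np.dot(x, x)

for k in (1, 2, 5):
    print(k, sample_acf(y, k), phi**k)   # sample ACF vs theoretical phi^k
```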

State Space Representation of AR(p)

To gain more intuition on the AR stationarity conditions, write an AR(p) in its state space form:

$\begin{pmatrix} Y_t \\ Y_{t-1} \\ \vdots \\ Y_{t-p+1} \end{pmatrix} = \begin{pmatrix} \phi_1 & \phi_2 & \phi_3 & \cdots & \phi_{p-1} & \phi_p \\ 1 & 0 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & 0 \end{pmatrix} \begin{pmatrix} Y_{t-1} \\ Y_{t-2} \\ \vdots \\ Y_{t-p} \end{pmatrix} + \begin{pmatrix} \varepsilon_t \\ 0 \\ \vdots \\ 0 \end{pmatrix}$

i.e. $X_t = F X_{t-1} + v_t$

Hence, the expected value of $X_t$ satisfies

$E[X_t] = F X_{t-1}$ and $E[X_{t+j}] = F^{j+1} X_{t-1}$

i.e. a linear map in $\mathbb{R}^p$ whose dynamic properties are given by the eigenvalues of the matrix $F$.

The eigenvalues of F are given by solving the characteristic equation

$\lambda^p - \phi_1 \lambda^{p-1} - \phi_2 \lambda^{p-2} - \ldots - \phi_{p-1}\lambda - \phi_p = 0.$

Comparing this with the characteristic equation of the lag polynomial $\phi(L)$

$1 - \phi_1 z - \phi_2 z^2 - \ldots - \phi_{p-1} z^{p-1} - \phi_p z^p = 0$

we can see that the roots of the two equations are such that

$z_1 = \lambda_1^{-1}, \quad z_2 = \lambda_2^{-1}, \quad \ldots, \quad z_p = \lambda_p^{-1}$

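The correspondence between the eigenvalues of $F$ and the reciprocals of the roots of $\phi(z)$ can be verified numerically; a minimal sketch for an AR(2) (coefficients chosen arbitrarily):

```python
import numpy as np

phis = np.array([0.5, 0.3])              # AR(2) coefficients phi_1, phi_2
p = len(phis)

# Companion matrix F of the state space form X_t = F X_{t-1} + v_t
F = np.zeros((p, p))
F[0, :] = phis
F[1:, :-1] = np.eye(p - 1)

eig = np.linalg.eigvals(F)

# Roots of 1 - phi_1 z - ... - phi_p z^p = 0 (np.roots wants highest power first)
z = np.roots(np.concatenate((-phis[::-1], [1.0])))

print(np.sort(np.abs(eig)))              # moduli of the eigenvalues of F
print(np.sort(np.abs(1.0 / z)))          # reciprocals of the roots: same moduli
print(np.all(np.abs(eig) < 1))           # True => stationary AR(2)
```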

ARMA(p,q)

An ARMA(p,q) process is defined as

$\phi(L) Y_t = \theta(L)\varepsilon_t$

where $\phi(\cdot)$ and $\theta(\cdot)$ are $p$th- and $q$th-order lag polynomials.

The process is stationary if all the roots of

$\phi(z) \equiv 1 - \phi_1 z - \phi_2 z^2 - \ldots - \phi_{p-1} z^{p-1} - \phi_p z^p = 0$

lie outside the unit circle and, hence, it admits the MA(∞) representation:

$Y_t = \phi(L)^{-1}\theta(L)\varepsilon_t$

The process is invertible if all the roots of

$\theta(z) \equiv 1 + \theta_1 z + \theta_2 z^2 + \ldots + \theta_q z^q = 0$

lie outside the unit circle and, hence, it admits the AR(∞) representation:

$\varepsilon_t = \theta(L)^{-1}\phi(L) Y_t$


Estimation

For AR(p)

$Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \ldots + \phi_p Y_{t-p} + \varepsilon_t$

OLS is consistent and, under Gaussianity, asymptotically equivalent to MLE → asymptotically efficient.

For example, the full likelihood of an AR(1) can be written as

$L(\theta) \equiv f(y_T, y_{T-1}, \ldots, y_1; \theta) = \underbrace{f_{Y_1}(y_1; \theta)}_{\text{marginal of first obs}} \cdot \underbrace{\prod_{t=2}^{T} f_{Y_t \mid Y_{t-1}}(y_t \mid y_{t-1}; \theta)}_{\text{conditional likelihood}}$

and under normality OLS = MLE.

For a general ARMA(p,q)

$Y_t = \phi_1 Y_{t-1} + \ldots + \phi_p Y_{t-p} + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \ldots + \theta_q \varepsilon_{t-q}$

$Y_{t-1}$ is correlated with $\varepsilon_{t-1}, \ldots, \varepsilon_{t-q}$ ⇒ OLS not consistent.

→ MLE with numerical optimization procedures.

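A minimal sketch of OLS estimation of an AR(p) on simulated data (numpy only; the true coefficients and sample size are arbitrary choices for the illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate an AR(2): Y_t = 0.5*Y_{t-1} + 0.3*Y_{t-2} + eps_t
phi_true = (0.5, 0.3)
T = 5000
eps = rng.standard_normal(T)
y = np.zeros(T)
for t in range(2, T):
    y[t] = phi_true[0] * y[t - 1] + phi_true[1] * y[t - 2] + eps[t]

# OLS: regress Y_t on its first p lags
p = 2
Y = y[p:]
X = np.column_stack([y[p - j:T - j] for j in range(1, p + 1)])
phi_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

print(phi_hat)   # close to (0.5, 0.3): OLS is consistent for AR(p)
```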

Prediction

Write the model in its AR(∞) representation:

$\eta(L)(Y_t - \mu) = \varepsilon_t$

then the optimal prediction of $Y_{t+s}$ is given by

$E[Y_{t+s} \mid Y_t, Y_{t-1}, \ldots] = \mu + \left[\frac{\eta(L)^{-1}}{L^s}\right]_+ \eta(L)(Y_t - \mu) \quad \text{with} \quad \left[L^k\right]_+ = 0 \ \text{for } k < 0$

which is known as the Wiener–Kolmogorov prediction formula.

In the case of an AR(p) process the prediction formula can also be written as

$E[Y_{t+s} \mid Y_t, Y_{t-1}, \ldots] = \mu + f^{(s)}_{11}(Y_t - \mu) + f^{(s)}_{12}(Y_{t-1} - \mu) + \ldots + f^{(s)}_{1p}(Y_{t-p+1} - \mu)$

where $f^{(j)}_{11}$ is the element $(1,1)$ of the matrix $F^j$.

The easiest way to compute predictions from an AR(p) model is, however, through recursive methods.

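A sketch of the recursive forecasting method mentioned above for an AR(p) (the function name `ar_forecast` and the example values are my own, not from the slides):

```python
import numpy as np

def ar_forecast(y, phis, mu, horizon):
    """Recursive h-step-ahead forecasts for an AR(p):
    Y_{t+h} - mu = sum_i phi_i (Y_{t+h-i} - mu), with future eps set to 0."""
    p = len(phis)
    hist = list(np.asarray(y[-p:], dtype=float) - mu)   # last p observations, demeaned
    out = []
    for _ in range(horizon):
        y_next = float(np.dot(phis, hist[::-1]))        # phi_1*latest + ... + phi_p*oldest
        out.append(mu + y_next)
        hist = hist[1:] + [y_next]                      # slide the window forward
    return np.array(out)

# Example: AR(2) with phi = (0.5, 0.3), mu = 0, last observations (1.0, 2.0)
print(ar_forecast([1.0, 2.0], [0.5, 0.3], mu=0.0, horizon=5))   # forecasts revert toward mu
```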

Box-Jenkins Approach

Check for stationarity: if not stationary, try a transformation (e.g. differencing → ARIMA models).

Identification:

check the autocorrelation function (ACF): a q-correlated process is an MA(q) model

check the partial autocorrelation function (PACF):

for an AR(p) process, while the k-lag ACF can be interpreted as the simple regression $Y_t = \rho(k) Y_{t-k} + \text{error}$, the k-lag PACF can be seen as the multiple regression

$Y_t = b_1 Y_{t-1} + b_2 Y_{t-2} + \ldots + b_k Y_{t-k} + \text{error}$

it can be computed by solving the Yule-Walker system:

$\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_k \end{pmatrix} = \begin{pmatrix} \gamma(0) & \gamma(1) & \cdots & \gamma(k-1) \\ \gamma(1) & \gamma(0) & \cdots & \gamma(k-2) \\ \vdots & \vdots & \ddots & \vdots \\ \gamma(k-1) & \gamma(k-2) & \cdots & \gamma(0) \end{pmatrix}^{-1} \begin{pmatrix} \gamma(1) \\ \gamma(2) \\ \vdots \\ \gamma(k) \end{pmatrix}$

Importantly, AR(p) processes are “p–partially correlated” ⇒ identification of AR order

Validation: check the appropriateness of the model by some measure of fit.

$\text{AIC (Akaike)} = T \log \sigma_e^2 + 2m$

$\text{BIC (Schwarz)} = T \log \sigma_e^2 + m \log T$

with $\sigma_e^2$ the estimated error variance, $m = p + q + 1$ the number of parameters, and $T$ the number of observations.

Diagnostic checking of the residuals.

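A sketch of the PACF computed by solving the Yule-Walker system at each order k (sample autocovariances from simulated data; helper names are illustrative):

```python
import numpy as np

def sample_autocov(y, k):
    """Sample autocovariance gamma(k)."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    return np.dot(y[k:], y[:len(y) - k]) / len(y) if k > 0 else np.dot(y, y) / len(y)

def pacf_yule_walker(y, max_lag):
    """PACF at lags 1..max_lag: the last coefficient b_k of the
    Yule-Walker system solved at each order k."""
    gamma = np.array([sample_autocov(y, k) for k in range(max_lag + 1)])
    pacf = []
    for k in range(1, max_lag + 1):
        # Toeplitz matrix of autocovariances gamma(|i-j|), i, j = 0..k-1
        G = np.array([[gamma[abs(i - j)] for j in range(k)] for i in range(k)])
        b = np.linalg.solve(G, gamma[1:k + 1])
        pacf.append(b[-1])
    return np.array(pacf)

# AR(1) example: PACF should be ~phi at lag 1 and ~0 afterwards
rng = np.random.default_rng(4)
phi, T = 0.7, 20_000
y = np.zeros(T)
e = rng.standard_normal(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + e[t]
print(pacf_yule_walker(y, 4))
```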

ARIMA

Integrated ARMA model:

ARIMA(p,1,q) denotes a nonstationary process $Y_t$ for which the first difference $Y_t - Y_{t-1} = (1 - L) Y_t$ is a stationary ARMA(p,q) process.

⇓

$Y_t$ is said to be integrated of order 1, or I(1).

If two differencings of $Y_t$ are necessary to get a stationary process, i.e. $(1 - L)^2 Y_t$,

⇓

then the process $Y_t$ is said to be integrated of order 2, or I(2).

I(0) indicates a stationary process.


ARFIMA

The difference operator $(1 - L)^n$ with integer $n$ can be generalized to a fractional difference operator $(1 - L)^d$ with $0 < d < 1$, defined by the binomial expansion

$(1 - L)^d = 1 - dL + \frac{d(d-1)}{2!}L^2 - \frac{d(d-1)(d-2)}{3!}L^3 + \ldots$

obtaining a fractionally integrated process of order $d$, i.e. I(d).

If $d < 0.5$ the process is covariance stationary and admits an AR(∞) representation.

The usefulness of a fractional filter $(1 - L)^d$ is that it produces hyperbolically decaying autocorrelations, i.e. the so-called long memory. In fact, for ARFIMA(p,d,q) processes

$\phi(L)(1 - L)^d Y_t = \theta(L)\varepsilon_t$

the autocorrelation function is proportional to

$\rho(k) \approx c\, k^{2d-1}$

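The binomial expansion of $(1 - L)^d$ can be generated with a simple recursion; a minimal sketch (the recursion $w_k = w_{k-1}(k - 1 - d)/k$ follows from the binomial coefficients; $d = 0.4$ is an arbitrary example value):

```python
import numpy as np

def frac_diff_weights(d, n_weights):
    """Coefficients of the binomial expansion of (1 - L)^d:
    w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n_weights)
    w[0] = 1.0
    for k in range(1, n_weights):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

w = frac_diff_weights(d=0.4, n_weights=6)
print(w)   # 1, -0.4, -0.12, -0.064, ... : slowly (hyperbolically) decaying weights

# Applying the filter: (1 - L)^d Y_t ~ sum_k w_k * Y_{t-k} (truncated in practice)
```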

ARCH and GARCH models


Basic Structure and Properties of ARMA model

Standard time series models have:

$Y_t = E[Y_t \mid \Omega_{t-1}] + \varepsilon_t$

$E[Y_t \mid \Omega_{t-1}] = f(\Omega_{t-1}; \theta)$

$\text{Var}[Y_t \mid \Omega_{t-1}] = E[\varepsilon_t^2 \mid \Omega_{t-1}] = \sigma^2$

hence,

Conditional mean: varies with $\Omega_{t-1}$
Conditional variance: constant (unfortunately)

k-step-ahead mean forecasts: generally depend on $\Omega_{t-1}$
k-step-ahead variance forecasts: depend only on $k$, not on $\Omega_{t-1}$ (again unfortunately)

Unconditional mean: constant
Unconditional variance: constant


AutoRegressive Conditional Heteroskedasticity (ARCH) model

Engle (1982, Econometrica) introduced the ARCH models:

$Y_t = E[Y_t \mid \Omega_{t-1}] + \varepsilon_t$

$E[Y_t \mid \Omega_{t-1}] = f(\Omega_{t-1}; \theta)$

$\text{Var}[Y_t \mid \Omega_{t-1}] = E[\varepsilon_t^2 \mid \Omega_{t-1}] = \sigma^2(\Omega_{t-1}; \theta) \equiv \sigma_t^2$

hence,

Conditional mean: varies with $\Omega_{t-1}$
Conditional variance: varies with $\Omega_{t-1}$

k-step-ahead mean forecasts: generally depend on $\Omega_{t-1}$
k-step-ahead variance forecasts: generally depend on $\Omega_{t-1}$

Unconditional mean: constant
Unconditional variance: constant


ARCH(q)

How to parameterize $E[\varepsilon_t^2 \mid \Omega_{t-1}] = \sigma^2(\Omega_{t-1}; \theta) \equiv \sigma_t^2$?

ARCH(q) postulates that the conditional variance is a linear function of the past q squared innovations:

$\sigma_t^2 = \omega + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^2 = \omega + \alpha(L)\varepsilon_{t-1}^2$

Defining $v_t = \varepsilon_t^2 - \sigma_t^2$, the ARCH(q) model can be written as

$\varepsilon_t^2 = \omega + \alpha(L)\varepsilon_{t-1}^2 + v_t$

Since $E_{t-1}(v_t) = 0$, the model corresponds directly to an AR(q) model for the squared innovations $\varepsilon_t^2$.

The process is covariance stationary if and only if the sum of the (positive) AR parameters is less than 1. Then, the unconditional variance of $\varepsilon_t$ is

$\text{Var}(\varepsilon_t) = \sigma^2 = \omega/(1 - \alpha_1 - \alpha_2 - \ldots - \alpha_q).$

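A minimal simulation sketch of an ARCH(1), checking the unconditional variance $\omega/(1 - \alpha)$ (numpy assumed, parameter values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
omega, alpha, T = 0.2, 0.5, 100_000      # covariance stationary since alpha < 1

eps = np.zeros(T)
sig2 = np.zeros(T)
sig2[0] = omega / (1.0 - alpha)          # start at the unconditional variance
for t in range(1, T):
    sig2[t] = omega + alpha * eps[t - 1]**2              # ARCH(1) recursion
    eps[t] = np.sqrt(sig2[t]) * rng.standard_normal()    # Gaussian conditional innovations

print(eps.var(), omega / (1.0 - alpha))  # sample vs unconditional variance
```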

AR(1)-ARCH(1)

Example: the AR(1)-ARCH(1) model

$Y_t = \phi Y_{t-1} + \varepsilon_t$

$\sigma_t^2 = \omega + \alpha \varepsilon_{t-1}^2$

$\varepsilon_t \sim N(0, \sigma_t^2)$

- Conditional mean: $E(Y_t \mid \Omega_{t-1}) = \phi Y_{t-1}$
- Conditional variance: $E\left([Y_t - E(Y_t \mid \Omega_{t-1})]^2 \mid \Omega_{t-1}\right) = \omega + \alpha \varepsilon_{t-1}^2$
- Unconditional mean: $E(Y_t) = 0$
- Unconditional variance: $E[(Y_t - E(Y_t))^2] = \dfrac{1}{1-\phi^2}\,\dfrac{\omega}{1-\alpha}$

Note that the unconditional distribution of $\varepsilon_t$ has fat tails.

In fact, the unconditional kurtosis $E(\varepsilon_t^4)/E(\varepsilon_t^2)^2$ satisfies

$E[\varepsilon_t^4] = E[E(\varepsilon_t^4 \mid \Omega_{t-1})] = 3E[\sigma_t^4] = 3[\text{Var}(\sigma_t^2) + E(\sigma_t^2)^2] = 3[\underbrace{\text{Var}(\sigma_t^2)}_{>0} + E(\varepsilon_t^2)^2] > 3E(\varepsilon_t^2)^2.$

Hence,

$\text{Kurtosis}(\varepsilon_t) = E(\varepsilon_t^4)/E(\varepsilon_t^2)^2 > 3$

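A simulation sketch of the AR(1)-ARCH(1) above, checking that the sample kurtosis of $\varepsilon_t$ exceeds 3 and that the unconditional variance of $Y_t$ matches the formula (parameter values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
phi, omega, alpha, T = 0.5, 0.2, 0.5, 500_000

y = np.zeros(T)
eps = np.zeros(T)
sig2 = np.full(T, omega / (1.0 - alpha))
for t in range(1, T):
    sig2[t] = omega + alpha * eps[t - 1]**2
    eps[t] = np.sqrt(sig2[t]) * rng.standard_normal()
    y[t] = phi * y[t - 1] + eps[t]

kurt = np.mean(eps**4) / np.mean(eps**2)**2
print(kurt)                                             # well above 3: fat tails
print(y.var(), omega / ((1 - phi**2) * (1 - alpha)))    # unconditional variance of Y_t
```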

GARCH(p,q)

Problem: empirical volatility is very persistent ⇒ large q, i.e. too many α's.

Bollerslev (1986, J. of Econometrics) proposed the Generalized ARCH model.

The GARCH(p,q) is defined as

$\sigma_t^2 = \omega + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^2 + \sum_{j=1}^{p} \beta_j \sigma_{t-j}^2 = \omega + \alpha(L)\varepsilon_{t-1}^2 + \beta(L)\sigma_{t-1}^2$

As before, the GARCH(p,q) can also be rewritten as

$\varepsilon_t^2 = \omega + [\alpha(L) + \beta(L)]\varepsilon_{t-1}^2 - \beta(L)v_{t-1} + v_t$

which defines an ARMA[max(p,q), p] model for $\varepsilon_t^2$.


GARCH(1,1)

By far the most commonly used is the GARCH(1,1):

$\sigma_t^2 = \omega + \alpha\, \varepsilon_{t-1}^2 + \beta\, \sigma_{t-1}^2$

with $\omega > 0$, $\alpha > 0$, $\beta > 0$.

By recursive substitution, the GARCH(1,1) may be written as the following ARCH(∞):

$\sigma_t^2 = \omega(1 - \beta)^{-1} + \alpha \sum_{i=1}^{\infty} \beta^{i-1} \varepsilon_{t-i}^2$

which reduces to an exponentially weighted moving average filter for $\omega = 0$ and $\alpha + \beta = 1$ (sometimes referred to as Integrated GARCH, or IGARCH(1,1)).

Moreover, the GARCH(1,1) implies an ARMA(1,1) representation for $\varepsilon_t^2$:

$\varepsilon_t^2 = \omega + [\alpha + \beta]\varepsilon_{t-1}^2 - \beta v_{t-1} + v_t$

Forecasting. Denoting the unconditional variance $\sigma^2 \equiv \omega(1 - \alpha - \beta)^{-1}$, we have:

$\sigma_{t+h|t}^2 = \sigma^2 + (\alpha + \beta)^{h-1}(\sigma_{t+1}^2 - \sigma^2)$

showing that the forecasts of the conditional variance revert to the long-run unconditional variance at an exponential rate dictated by $\alpha + \beta$.

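A sketch simulating a GARCH(1,1) and applying the h-step variance forecast formula above (numpy assumed, parameter values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
omega, alpha, beta = 0.1, 0.08, 0.90     # alpha + beta < 1 => covariance stationary
T = 10_000

eps = np.zeros(T)
sig2 = np.full(T, omega / (1.0 - alpha - beta))
for t in range(1, T):
    sig2[t] = omega + alpha * eps[t - 1]**2 + beta * sig2[t - 1]   # GARCH(1,1)
    eps[t] = np.sqrt(sig2[t]) * rng.standard_normal()

# h-step-ahead variance forecasts revert to the unconditional variance
sigma2_bar = omega / (1.0 - alpha - beta)
sig2_next = omega + alpha * eps[-1]**2 + beta * sig2[-1]           # sigma^2_{t+1}
for h in (1, 5, 20, 100):
    fcst = sigma2_bar + (alpha + beta)**(h - 1) * (sig2_next - sigma2_bar)
    print(h, fcst)   # approaches sigma2_bar as h grows
```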

Asymmetric GARCH

In the standard GARCH model:

$\sigma_t^2 = \omega + \alpha r_{t-1}^2 + \beta \sigma_{t-1}^2$

$\sigma_t^2$ responds symmetrically to past returns.

The so-called "news impact curve" is a parabola:

[Figure: news impact curve — conditional variance as a function of standardized lagged shocks, for a symmetric GARCH and an asymmetric GARCH.]

Empirically, negative $r_{t-1}$ impact more than positive ones → asymmetric news impact curve.

GJR or T-GARCH

$\sigma_t^2 = \omega + \alpha r_{t-1}^2 + \gamma r_{t-1}^2 D_{t-1} + \beta \sigma_{t-1}^2 \quad \text{with} \quad D_t = \begin{cases} 1 & \text{if } r_t < 0 \\ 0 & \text{otherwise} \end{cases}$

- Positive returns (good news): α

- Negative returns (bad news): α + γ

- Empirically γ > 0 → “Leverage effect”

Exponential GARCH (EGARCH)

$\ln(\sigma_t^2) = \omega + \alpha \left| \dfrac{r_{t-1}}{\sigma_{t-1}} \right| + \gamma \dfrac{r_{t-1}}{\sigma_{t-1}} + \beta \ln(\sigma_{t-1}^2)$

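A small sketch of the GJR/T-GARCH news impact curve, i.e. the next-period variance as a function of the lagged return with $\sigma_{t-1}^2$ held fixed (the default parameter values are arbitrary illustrations):

```python
import numpy as np

def news_impact(shock, omega=0.05, alpha=0.05, gamma=0.10, beta=0.90, sigma2_prev=1.0):
    """GJR/T-GARCH variance sigma^2_t as a function of the lagged return (shock),
    holding sigma^2_{t-1} fixed: the 'news impact curve'."""
    D = (shock < 0).astype(float)                       # indicator of bad news
    return omega + (alpha + gamma * D) * shock**2 + beta * sigma2_prev

shocks = np.linspace(-5, 5, 11)
print(news_impact(shocks))   # steeper response on the negative side when gamma > 0
```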

Estimation

A GARCH process with Gaussian innovations:

$r_t \mid \Omega_{t-1} \sim N(\mu_t(\theta), \sigma_t^2(\theta))$

has conditional densities:

$f(r_t \mid \Omega_{t-1}; \theta) = \frac{1}{\sqrt{2\pi}}\, \sigma_t^{-1}(\theta) \exp\left(-\frac{1}{2}\,\frac{(r_t - \mu_t(\theta))^2}{\sigma_t^2(\theta)}\right)$

using the “prediction–error” decomposition of the likelihood

$L(r_T, r_{T-1}, \ldots, r_1; \theta) = f(r_T \mid \Omega_{T-1}; \theta) \times f(r_{T-1} \mid \Omega_{T-2}; \theta) \times \ldots \times f(r_1 \mid \Omega_0; \theta)$

the log-likelihood becomes:

$\log L(r_T, r_{T-1}, \ldots, r_1; \theta) = -\frac{T}{2}\log(2\pi) - \sum_{t=1}^{T} \log \sigma_t(\theta) - \frac{1}{2}\sum_{t=1}^{T} \frac{(r_t - \mu_t(\theta))^2}{\sigma_t^2(\theta)}$

This is a non-linear function of $\theta$ ⇒ numerical optimization techniques.

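A minimal sketch of Gaussian (quasi-)maximum likelihood for a zero-mean GARCH(1,1), maximizing the log-likelihood above numerically (assumes numpy and scipy; the initialisation of $\sigma_1^2$ at the sample variance and the starting values are arbitrary choices):

```python
import numpy as np
from scipy.optimize import minimize

def garch11_neg_loglik(params, r):
    """Negative Gaussian log-likelihood of a zero-mean GARCH(1,1),
    dropping the constant -(T/2)*log(2*pi)."""
    omega, alpha, beta = params
    T = len(r)
    sig2 = np.empty(T)
    sig2[0] = np.var(r)                        # simple initialisation of sigma^2_1
    for t in range(1, T):
        sig2[t] = omega + alpha * r[t - 1]**2 + beta * sig2[t - 1]
    return 0.5 * np.sum(np.log(sig2) + r**2 / sig2)

# Simulated returns (in practice r would be observed, demeaned returns)
rng = np.random.default_rng(8)
omega0, alpha0, beta0, T = 0.1, 0.08, 0.90, 5000
r = np.zeros(T)
s2 = omega0 / (1 - alpha0 - beta0)
for t in range(T):
    r[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega0 + alpha0 * r[t]**2 + beta0 * s2

res = minimize(garch11_neg_loglik, x0=[0.05, 0.1, 0.8], args=(r,),
               method="L-BFGS-B",
               bounds=[(1e-6, None), (1e-6, 1.0), (1e-6, 1.0)])
print(res.x)    # estimates of (omega, alpha, beta), close to the true values
```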