# EC 831: Empirical Methods in Macroeconomics

Posted: 06-Apr-2022

### Transcript of EC 831: Empirical Methods in Macroeconomics

EC 831: Empirical Methods in Macroeconomics
Aeimit Lakdawala

State Space Form

Observation Equation: yt = A xt + H ξt + wt

where yt, wt are n × 1, ξt is r × 1 and xt is k × 1

State Equation: ξt+1 = F ξt + vt+1

[vt; wt] ∼ N(0, [Q 0; 0 R])

Observed variables: yt, xt. Unobserved variables: ξt, vt and wt.

Consider an AR(1): yt = ρyt−1 + εt

Define ξt = yt, vt = εt, wt = 0, xt = 0 and F = ρ, H = 1, A = 0

State Equation: ξt+1 = ρξt + εt+1, i.e. yt+1 = ρyt + εt+1

Observation Equation: yt = ξt

Consider an MA(1):

yt = µ + ut + θut−1

Define ξt = [ut, ut−1]′, xt = 1, wt = 0, vt = [ut, 0]′, with A = µ, H = [1 θ] and F = [0 0; 1 0]

Observation Equation: yt = µ + [1 θ] [ut; ut−1] = µ + ut + θut−1

State Equation: [ut+1; ut] = [0 0; 1 0] [ut; ut−1] + [ut+1; 0]

Generally any ARMA(p,q) model can be put in state space form.

Solutions to linearized DSGE models can also be put in state space form.
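As a quick sanity check of the MA(1) cast above, the state-space recursion with F = [0 0; 1 0], H = [1 θ], A = µ should reproduce the direct formula yt = µ + ut + θut−1 shock-for-shock. A minimal numpy sketch (not from the slides; the values of µ and θ are made up for illustration):

```python
import numpy as np

# Simulate an MA(1) two ways: directly, and through the state-space
# cast xi_t = [u_t, u_{t-1}]', y_t = mu + [1, theta] xi_t.
rng = np.random.default_rng(0)
mu, theta, T = 1.0, 0.5, 200
u = rng.standard_normal(T + 2)               # u_0, ..., u_{T+1}

# Direct simulation: y_t = mu + u_t + theta * u_{t-1}, t = 1..T
y_direct = mu + u[1:T + 1] + theta * u[:T]

# State-space simulation with the same shocks
F = np.array([[0.0, 0.0], [1.0, 0.0]])
H = np.array([1.0, theta])
xi = np.array([u[1], u[0]])                  # xi_1 = [u_1, u_0]'
y_ss = np.empty(T)
for t in range(T):
    y_ss[t] = mu + H @ xi                    # observation equation
    xi = F @ xi + np.array([u[t + 2], 0.0])  # state equation
```

The two simulated paths coincide exactly, confirming the mapping of the MA(1) into the general state-space form.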

Kalman Filter

The goal is to find the distribution of ξt given all information available up to time t, ℑt = {yt, yt−1, ..., y1, xt, xt−1, ..., x1}:

p(ξt | ℑt)

Assume ξ0 ∼ N(ξ0|0, P0|0)

• ξ0|0: best guess of ξ0
• P0|0: uncertainty about this guess

With normal errors and the linear structure we get

ξt | ℑt ∼ N(ξt|t, Pt|t)

so the goal is to find ξt|t and Pt|t.


Kalman Filter

Two main steps:

1. Predict: using all information up to time t − 1, obtain an optimal forecast of ξt; call it ξt|t−1.

2. Update: once yt is realized, update the forecast; call it ξt|t.

Intuition for Kalman Filter

[z1; z2] ∼ N([µ1; µ2], [Σ11 Σ12; Σ21 Σ22])

Suppose now you observe z1. What is the conditional distribution of z2?

z2 | z1 ∼ N(mz2, Vz2)

mz2 = µ2 + Σ21 Σ11⁻¹ (z1 − µ1)

Vz2 = Σ22 − Σ21 Σ11⁻¹ Σ12

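The conditional-normal formulas can be checked numerically against a known identity: the conditional variance of z2 given z1 equals the inverse of the (2,2) element of the precision matrix Σ⁻¹. A small sketch (the numbers are made up for illustration):

```python
import numpy as np

# Bivariate normal: compute m_{z2} and V_{z2} from the slide formulas,
# then cross-check V_{z2} via the precision-matrix identity.
mu = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.5]])
z1 = 0.5

S11, S12 = Sigma[0, 0], Sigma[0, 1]
S21, S22 = Sigma[1, 0], Sigma[1, 1]
m_z2 = mu[1] + S21 / S11 * (z1 - mu[0])   # conditional mean
V_z2 = S22 - S21 / S11 * S12              # conditional variance

Lambda = np.linalg.inv(Sigma)             # precision matrix
V_check = 1.0 / Lambda[1, 1]              # (2,2) block of precision, inverted
```

Here m_z2 = 2 + 0.4 × (0.5 − 1) = 1.8 and V_z2 = 1.5 − 0.32 = 1.18, matching the precision-matrix cross-check.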

1st Predict step:

ξ1|0 = F ξ0|0, P1|0 = F P0|0 F′ + Q

y1|0 = A x1 + H ξ1|0

1st Update step: conditional on x1 and ℑ0, y1 and ξ1 are jointly normal:

[y1; ξ1] | x1, ℑ0 ∼ N([A x1 + H ξ1|0; ξ1|0], [H P1|0 H′ + R, H P1|0; P1|0 H′, P1|0])

Applying the conditional-normal formulas:

ξ1 | y1, x1, ℑ0 = ξ1 | ℑ1 ∼ N(ξ1|1, P1|1)

ξ1|1 = ξ1|0 + K η1|0, P1|1 = P1|0 − K H P1|0

where η1|0 = y1 − y1|0 and K = P1|0 H′ (H P1|0 H′ + R)⁻¹

Kalman Filter Recursions

Predict:

ξt|t−1 = F ξt−1|t−1, Pt|t−1 = F Pt−1|t−1 F′ + Q

Update:

ξt|t = ξt|t−1 + K ηt|t−1, Pt|t = Pt|t−1 − K H Pt|t−1

where

ηt|t−1 = yt − A xt − H ξt|t−1

ft|t−1 = H Pt|t−1 H′ + R

K = Pt|t−1 H′ ft|t−1⁻¹

How do we specify ξ0|0 and P0|0?

ξt+1 = F ξt + vt+1

If this is stationary, i.e. the eigenvalues of F are inside the unit circle, then

ξ0|0 = E(ξ0) = 0

P0|0 = E(ξ0 ξ0′), which solves P0|0 = F P0|0 F′ + Q, so that

vec(P0|0) = (I − F ⊗ F)⁻¹ vec(Q)

If not, set ξ0|0 = 0 and the diagonal elements of P0|0 to large numbers (a diffuse prior).
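The vec formula can be checked by iterating the Lyapunov recursion P = F P F′ + Q to its fixed point. A sketch with a made-up stationary 2 × 2 system:

```python
import numpy as np

# Stationary F (eigenvalues 0.5 and 0.7) and a positive definite Q
F = np.array([[0.5, 0.1],
              [0.0, 0.7]])
Q = np.array([[1.0, 0.2],
              [0.2, 0.5]])

# Closed form: vec(P) = (I - F kron F)^{-1} vec(Q), column-stacking vec
r = F.shape[0]
vecP = np.linalg.solve(np.eye(r * r) - np.kron(F, F), Q.flatten(order="F"))
P0 = vecP.reshape((r, r), order="F")

# Cross-check: iterate P = F P F' + Q from zero until it converges
P_iter = np.zeros((r, r))
for _ in range(500):
    P_iter = F @ P_iter @ F.T + Q
```

The closed form relies on the identity vec(F P F′) = (F ⊗ F) vec(P), with vec stacking columns (hence `order="F"`).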

Bayesian Perspective

ξ0 ∼ N(ξ0|0, P0|0) is the prior.

ξt|t is the posterior mean of ξt given ℑt, for given values of F, Q, A, H, R.

So how do we estimate the parameters?

Parameter Estimation from a classical perspective

What is the distribution of yt | ℑt−1, xt, θ, where θ contains all the parameters?

yt | ℑt−1, xt, θ ∼ N(yt|t−1, ft|t−1)

where yt|t−1 = A xt + H ξt|t−1

Likelihood (based on the "Prediction Error Decomposition"):

lnL = −(nT/2) ln(2π) − (1/2) Σt ln|ft|t−1| − (1/2) Σt ηt|t−1′ ft|t−1⁻¹ ηt|t−1

Parameter Estimation from a classical perspective

Note ft|t−1 and ηt|t−1 are functions of θ.

Maximize the log-likelihood with respect to θ.
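A sketch of maximum likelihood via the prediction-error decomposition, for an AR(1) state observed with noise (yt = ξt + wt, ξt+1 = ρξt + vt+1); the function name and parameter values are mine, and the optimizer is scipy's L-BFGS-B, one reasonable choice among many:

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, y):
    """Negative prediction-error log-likelihood; theta = (rho, q, r_var)."""
    rho, q, r_var = theta
    xi, P = 0.0, q / (1.0 - rho**2)       # stationary initialization
    ll = 0.0
    for yt in y:
        xi_pred, P_pred = rho * xi, rho**2 * P + q     # predict
        f = P_pred + r_var                 # f_{t|t-1} = H P H' + R
        eta = yt - xi_pred                 # eta_{t|t-1}
        ll += -0.5 * (np.log(2 * np.pi) + np.log(f) + eta**2 / f)
        K = P_pred / f                     # update
        xi, P = xi_pred + K * eta, P_pred - K * P_pred
    return -ll

# Simulate data with known parameters, then estimate them
rng = np.random.default_rng(2)
T, rho_true = 2000, 0.8
xi = np.zeros(T)
for t in range(1, T):
    xi[t] = rho_true * xi[t - 1] + rng.standard_normal()
y = xi + 0.5 * rng.standard_normal(T)      # observation noise, sd 0.5

res = minimize(neg_loglik, x0=[0.5, 1.0, 0.5], args=(y,),
               bounds=[(-0.99, 0.99), (1e-4, None), (1e-4, None)],
               method="L-BFGS-B")
rho_hat = res.x[0]
```

With T = 2000 the estimate of ρ lands close to the true value of 0.8; in practice one would also report standard errors from the inverse Hessian.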

Note ξt|t is our best guess given all data up to time t.

But as econometricians we observe T ≥ t.

• Can we improve our best guess using all available data?

Yes!

ξt | ℑT ∼ N(ξt|T, Pt|T)

The derivation uses the same ideas as the Kalman filter.

Smoothing

ξt|T = ξt|t + Jt (ξt+1|T − ξt+1|t)

Pt|T = Pt|t + Jt (Pt+1|T − Pt+1|t) Jt′

where Jt = Pt|t F′ Pt+1|t⁻¹

For smoothed inference:

1. Filter forward to get ξt|t and Pt|t
2. Smooth backward using the formulas above to get ξt|T and Pt|T
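A scalar sketch of the forward-filter / backward-smooth pair for an AR(1) state observed with noise (function and variable names are mine). The smoother cannot increase uncertainty, so Pt|T ≤ Pt|t everywhere, with equality at t = T, which provides a built-in check:

```python
import numpy as np

def filter_and_smooth(y, rho, q, r_var):
    """Forward Kalman filter, then backward (fixed-interval) smoother."""
    T = len(y)
    xi_f = np.empty(T); P_f = np.empty(T)    # filtered: xi_{t|t}, P_{t|t}
    xi_p = np.empty(T); P_p = np.empty(T)    # predicted: xi_{t|t-1}, P_{t|t-1}
    xi, P = 0.0, q / (1.0 - rho**2)
    for t in range(T):
        xi_p[t], P_p[t] = rho * xi, rho**2 * P + q   # predict
        f = P_p[t] + r_var
        K = P_p[t] / f
        xi = xi_p[t] + K * (y[t] - xi_p[t])          # update
        P = P_p[t] - K * P_p[t]
        xi_f[t], P_f[t] = xi, P
    # Backward pass: J_t = P_{t|t} F' / P_{t+1|t}
    xi_s, P_s = xi_f.copy(), P_f.copy()              # t = T matches the filter
    for t in range(T - 2, -1, -1):
        J = P_f[t] * rho / P_p[t + 1]
        xi_s[t] = xi_f[t] + J * (xi_s[t + 1] - xi_p[t + 1])
        P_s[t] = P_f[t] + J**2 * (P_s[t + 1] - P_p[t + 1])
    return xi_f, P_f, xi_s, P_s

# Simulate and run
rng = np.random.default_rng(3)
T, rho = 200, 0.9
xi = np.zeros(T)
for t in range(1, T):
    xi[t] = rho * xi[t - 1] + rng.standard_normal()
y = xi + rng.standard_normal(T)

xi_f, P_f, xi_s, P_s = filter_and_smooth(y, rho, q=1.0, r_var=1.0)
```

Storing the predicted moments ξt|t−1 and Pt|t−1 during the forward pass is what makes the backward recursion possible without re-running the filter.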

What if Pt+1|t is singular?

See Durbin & Koopman (2002), Time Series Analysis by State Space Methods:

• An alternative formulation of the Kalman filter recursion in Chapter 4
• Does not require inversion of Pt+1|t for smoothing
• Their method is also computationally more efficient
