
Transcript
Page 1: Optimal policy computation with Dynare

Optimal policy computation with Dynare

MONFISPOL workshop, Stresa

Michel Juillard (Bank of France and CEPREMAP)

March 12, 2010

Page 2: Optimal policy computation with Dynare

Introduction

Dynare currently implements two ways to compute optimal policy in DSGE models:

◮ optimal rule under commitment (Ramsey policy)
◮ optimal simple rules

Page 3: Optimal policy computation with Dynare

A New Keynesian model

Thanks to Eleni Iliopulos!

Utility function:

\[
U_t = \ln C_t - \phi \frac{N_t^{1+\gamma}}{1+\gamma}
\]
\[
U_{C,t} = \frac{1}{C_t}, \qquad U_{N,t} = -\phi N_t^{\gamma}
\]

Page 4: Optimal policy computation with Dynare

Recursive equilibrium

\[
\frac{1}{C_t} = \beta E_t\left\{\frac{1}{C_{t+1}}\frac{R_t}{\pi_{t+1}}\right\}
\]
\[
\frac{\pi_t(\pi_t-1)}{C_t} = \beta E_t\left\{\frac{\pi_{t+1}(\pi_{t+1}-1)}{C_{t+1}}\right\}
+ \frac{\varepsilon A_t}{\omega}\frac{N_t}{C_t}\left(\frac{\phi C_t N_t^{\gamma}}{A_t} - \frac{\varepsilon-1}{\varepsilon}\right)
\]
\[
A_t N_t = C_t + \frac{\omega}{2}(\pi_t-1)^2
\]
\[
\ln A_t = \rho \ln A_{t-1} + \epsilon_t
\]

Page 5: Optimal policy computation with Dynare

Ramsey problem

\[
\max E_0 \sum_{t=0}^{\infty} \beta^t \Bigg\{ \ln C_t - \phi\frac{N_t^{1+\gamma}}{1+\gamma}
- \mu_{1,t}\left(\frac{1}{C_t} - \beta E_t\left\{\frac{1}{C_{t+1}}\frac{R_t}{\pi_{t+1}}\right\}\right)
\]
\[
- \mu_{2,t}\left(\frac{\pi_t(\pi_t-1)}{C_t} - \beta E_t\left\{\frac{\pi_{t+1}(\pi_{t+1}-1)}{C_{t+1}}\right\}
- \frac{\varepsilon A_t}{\omega}\frac{N_t}{C_t}\left(\frac{\phi C_t N_t^{\gamma}}{A_t} - \frac{\varepsilon-1}{\varepsilon}\right)\right)
\]
\[
- \mu_{3,t}\left(A_t N_t - C_t - \frac{\omega}{2}(\pi_t-1)^2\right)
- \mu_{4,t}\big(\ln A_t - \rho \ln A_{t-1} - \epsilon_t\big) \Bigg\}
\]

where \(\mu_{i,t} = \lambda_{i,t}\beta^{-t}\) and \(\lambda_{i,t}\) is the Lagrange multiplier on the corresponding constraint.

Page 6: Optimal policy computation with Dynare

F.O.C.

Differentiating the Lagrangian with respect to \(\pi_t\), \(C_t\), \(N_t\), \(A_t\) and \(R_t\):

\[
\mu_{1,t-1}\frac{R_{t-1}}{C_t \pi_t^2} + \frac{2\pi_t - 1}{C_t}\big(\mu_{2,t} - \mu_{2,t-1}\big) = \mu_{3,t}\,\omega(\pi_t - 1)
\]
\[
\frac{1}{C_t} + \frac{\mu_{1,t}}{C_t^2} - \mu_{1,t-1}\frac{R_{t-1}}{C_t^2 \pi_t}
+ \frac{\mu_{2,t}}{C_t^2}\left[\pi_t(\pi_t-1) + \frac{A_t N_t(\varepsilon-1)}{\omega}\right]
- \frac{\mu_{2,t-1}}{C_t^2}\,\pi_t(\pi_t-1) + \mu_{3,t} = 0
\]
\[
-\phi N_t^{\gamma} + \frac{\varepsilon\phi}{\omega}\,N_t^{\gamma}(\gamma+1)\,\mu_{2,t}
- \mu_{2,t}\frac{A_t(\varepsilon-1)}{\omega C_t} = \mu_{3,t} A_t
\]
\[
\mu_{2,t}\frac{N_t(\varepsilon-1)}{\omega C_t} + \mu_{3,t} N_t + \frac{\mu_{4,t}}{A_t} - \rho\beta\,\frac{E_t\,\mu_{4,t+1}}{A_t} = 0
\]
\[
\mu_{1,t}\,\beta E_t\left\{\frac{1}{C_{t+1}\pi_{t+1}}\right\} = 0
\]

Page 7: Optimal policy computation with Dynare

Dynare code

var pai, c, n, r, a;
varexo u;
parameters beta, rho, epsilon, omega, phi, gamma;

beta=0.99;
gamma=3;
omega=17;
epsilon=8;
phi=1;
rho=0.95;

Page 8: Optimal policy computation with Dynare

Dynare code (continued)

model;
a = rho*a(-1)+u;
1/c = beta*r/(c(+1)*pai(+1));
pai*(pai-1)/c = beta*pai(+1)*(pai(+1)-1)/c(+1)
                +epsilon*phi*n^(gamma+1)/omega-exp(a)*n*(epsilon-1)/(omega*c);
exp(a)*n = c+(omega/2)*(pai-1)^2;
end;

Page 9: Optimal policy computation with Dynare

Dynare code (continued)

initval;
pai=1;
r=1/beta;
c=0.96717;
n=0.96717;
a=0;
end;

shocks;
var u; stderr 0.008;
end;

planner_objective(ln(c)-phi*((n^(1+gamma))/(1+gamma)));

ramsey_policy(planner_discount=0.99,order=1);
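The initval guesses are not arbitrary: at a zero-inflation steady state (\(\pi = 1\), \(A = 1\)) the pricing equation reduces to \(\phi C N^{\gamma} = (\varepsilon-1)/\varepsilon\) and the resource constraint gives \(N = C\). A minimal Python check of this arithmetic (my sketch, not part of the original slides):

```python
# Steady state of the NK model with pi = 1 and A = 1 (zero-inflation guess).
# Pricing equation: phi*C*N^gamma = (epsilon-1)/epsilon; resource constraint: N = C.
# Hence N = ((epsilon-1)/(epsilon*phi))**(1/(1+gamma)).
beta, gamma, omega, epsilon, phi = 0.99, 3.0, 17.0, 8.0, 1.0

n = ((epsilon - 1.0) / (epsilon * phi)) ** (1.0 / (1.0 + gamma))
c = n               # resource constraint with pi = 1, A = 1
r = 1.0 / beta      # Euler equation with pi = 1

print(round(n, 5))  # 0.96717, the value used in initval
```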

Page 10: Optimal policy computation with Dynare

Ramsey policy: General nonlinear case

\[
\max_{\{y_\tau\}_{\tau=t}^{\infty}} \; E_t \sum_{\tau=t}^{\infty} \beta^{\tau-t} U(y_\tau)
\qquad \text{s.t.} \qquad
E_\tau f(y_{\tau+1}, y_\tau, y_{\tau-1}, \varepsilon_\tau) = 0
\]

\(y_t \in \mathbb{R}^n\): endogenous variables
\(\varepsilon_t \in \mathbb{R}^p\): stochastic shocks

and \(f : \mathbb{R}^{3n+p} \to \mathbb{R}^m\).

There are n − m free policy instruments.

Page 11: Optimal policy computation with Dynare

Lagrangian

The Lagrangian is written
\[
\mathcal{L}(y_{t-1}, \varepsilon_t) = E_t \sum_{\tau=t}^{\infty} \Big( \beta^{\tau-t} U(y_\tau) - [\lambda_\tau]_\eta [f(y_{\tau+1}, y_\tau, y_{\tau-1}, \varepsilon_\tau)]^\eta \Big)
\]
where \(\lambda_\tau\) is a vector of m Lagrange multipliers. We adopt tensor notations because later on we will have to deal with the second order derivatives of \([\lambda(s_t)]_\eta [F(s_t)]^\eta_\gamma\).

It turns out that it is the discounted value of the Lagrange multipliers that is stationary, not the multipliers themselves. It is therefore handy to rewrite the Lagrangian as
\[
\mathcal{L}(y_{t-1}, \varepsilon_t) = E_t \sum_{\tau=t}^{\infty} \beta^{\tau-t} \Big( U(y_\tau) - [\mu_\tau]_\eta [f(y_{\tau+1}, y_\tau, y_{\tau-1}, \varepsilon_\tau)]^\eta \Big)
\]
with \(\mu_\tau = \lambda_\tau / \beta^{\tau-t}\).

Page 12: Optimal policy computation with Dynare

Optimization problem reformulated

The optimization problem becomes
\[
\max_{\{y_\tau\}_{\tau=t}^{\infty}} \; \min_{\{\mu_\tau\}_{\tau=t}^{\infty}} \; E_t \sum_{\tau=t}^{\infty} \beta^{\tau-t} \Big( U(y_\tau) - [\mu_\tau]_\eta [f(y_{\tau+1}, y_\tau, y_{\tau-1}, \varepsilon_\tau)]^\eta \Big)
\]
for \(y_{t-1}\) given.

Page 13: Optimal policy computation with Dynare

First order necessary conditions

Derivatives of the Lagrangian with respect to the endogenous variables:
\[
\left[\frac{\partial \mathcal{L}}{\partial y_t}\right]_\alpha
= E_t\Big\{ [U_1(y_t)]_\alpha - [\mu_t]_\eta [f_2(y_{t+1}, y_t, y_{t-1}, \varepsilon_t)]^\eta_\alpha
- \beta [\mu_{t+1}]_\eta [f_3(y_{t+2}, y_{t+1}, y_t, \varepsilon_{t+1})]^\eta_\alpha \Big\}
\qquad \tau = t
\]
\[
\left[\frac{\partial \mathcal{L}}{\partial y_\tau}\right]_\alpha
= E_t\Big\{ [U_1(y_\tau)]_\alpha - [\mu_\tau]_\eta [f_2(y_{\tau+1}, y_\tau, y_{\tau-1}, \varepsilon_\tau)]^\eta_\alpha
- \beta [\mu_{\tau+1}]_\eta [f_3(y_{\tau+2}, y_{\tau+1}, y_\tau, \varepsilon_{\tau+1})]^\eta_\alpha
- \beta^{-1} [\mu_{\tau-1}]_\eta [f_1(y_\tau, y_{\tau-1}, y_{\tau-2}, \varepsilon_{\tau-1})]^\eta_\alpha \Big\}
\qquad \tau > t
\]

Page 14: Optimal policy computation with Dynare

First order conditions

The first order conditions of this optimization problem are

The first order conditions of this optimization problem are
\[
E_t\Big\{ [U_1(y_\tau)]_\alpha - [\mu_\tau]_\eta [f_2(y_{\tau+1}, y_\tau, y_{\tau-1}, \varepsilon_\tau)]^\eta_\alpha
- \beta [\mu_{\tau+1}]_\eta [f_3(y_{\tau+2}, y_{\tau+1}, y_\tau, \varepsilon_{\tau+1})]^\eta_\alpha
- \beta^{-1} [\mu_{\tau-1}]_\eta [f_1(y_\tau, y_{\tau-1}, y_{\tau-2}, \varepsilon_{\tau-1})]^\eta_\alpha \Big\} = 0
\]
\[
[f(y_{\tau+1}, y_\tau, y_{\tau-1}, \varepsilon_\tau)]^\eta = 0
\]
with \(\mu_{t-1} = 0\), where \(U_1()\) is the Jacobian of the function \(U()\) with respect to \(y_\tau\) and \(f_i()\) is the first order partial derivative of \(f()\) with respect to its i-th argument.

Page 15: Optimal policy computation with Dynare

Cautionary remark

The first order conditions for optimality are only necessary conditions for a maximum. Levine, Pearlman and Pierse (2008) propose algorithms to check a sufficient condition. I still need to adapt them to the present framework, where the dynamic conditions f() are given as implicit functions and variables can be both predetermined and forward looking.

Page 16: Optimal policy computation with Dynare

Nature of the solution

The above system of equations is nothing but a larger system of nonlinear rational expectations equations. As such, it can be approximated to either first or second order. The solution takes the form
\[
\begin{bmatrix} y_t \\ \mu_t \end{bmatrix} = g\big(y_{t-2}, y_{t-1}, \mu_{t-1}, \varepsilon_{t-1}, \varepsilon_t\big)
\]
The optimal policy is then obtained directly as part of the set of g() functions.

Page 17: Optimal policy computation with Dynare

The steady state problem

The steady state is the solution of
\[
U_1(y) - \mu'\big[ f_2(y,y,y,0) + \beta f_3(y,y,y,0) + \beta^{-1} f_1(y,y,y,0) \big] = 0
\]
\[
f(y,y,y,0) = 0
\]

Page 18: Optimal policy computation with Dynare

Computing the steady state

For a given value y, it is possible to use the first matrix equation above to obtain the value of μ that minimizes the sum of squared residuals, e:
\[
M = f_2(y,y,y,0) + \beta f_3(y,y,y,0) + \beta^{-1} f_1(y,y,y,0)
\]
\[
\mu' = U_1(y)\, M' \big( M M' \big)^{-1}
\]
\[
e' = U_1(y) - \mu' M
\]
Furthermore, y must satisfy the m equations
\[
f(y,y,y,0) = 0
\]
It is possible to build a system of equations with only n unknowns y, but we must provide n − m independent measures of the residuals e. Independent here means that the derivatives of these measures with respect to y must be linearly independent.
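The μ′ and e′ formulas above are ordinary least squares: μ′M is the orthogonal projection of U1(y) on the row space of M, so the residual e is orthogonal to every row of M. A small numerical sketch (random matrices stand in for the model's derivatives):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3                      # n variables, m constraints
M = rng.standard_normal((m, n))  # stands in for f2 + beta*f3 + (1/beta)*f1
U1 = rng.standard_normal(n)      # stands in for the gradient U_1(y)

# mu' = U1 M'(MM')^{-1}: least-squares multipliers
mu = np.linalg.solve(M @ M.T, M @ U1)
# e' = U1 - mu'M: projection residual
e = U1 - mu @ M

# The residual is orthogonal to the rows of M by construction
print(np.max(np.abs(M @ e)))  # numerically zero
```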

Page 19: Optimal policy computation with Dynare

A QR trick

At the steady state, the following must hold exactly:
\[
U_1(y) = \mu' M
\]
This can only be the case if
\[
M^\star = \begin{bmatrix} M \\ U_1(y) \end{bmatrix}
\]
is of rank m. The reordered QR decomposition of \(M^\star\) is such that
\[
\underset{(m+1)\times n}{M^\star}\;\underset{n\times n}{E} \;=\; \underset{(m+1)\times(m+1)}{Q}\;\underset{(m+1)\times n}{R}
\]
where E is a permutation matrix, Q an orthogonal matrix and R a triangular matrix with diagonal elements ordered in decreasing size.

Page 20: Optimal policy computation with Dynare

A QR trick (continued)

◮ When \(U_1(y) = \mu' M\) doesn't hold exactly, \(M^\star\) is of full rank (m + 1) and the n − m last elements of R may be different from zero.

◮ When \(U_1(y) = \mu' M\) holds exactly, \(M^\star\) has rank m and the n − m last elements of R are zero.

◮ The last n − m elements of the last row of R provide the n − m independent measures of the residuals e.

◮ In practice, we build a nonlinear function that takes y as input and returns the n − m last elements of the last row of R together with f(y, y, y, 0). At the solution, this function must return zeros.
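The bullets above can be sketched numerically. The following assumes SciPy's column-pivoted QR as the "reordered" decomposition, with a random M standing in for the model's derivatives:

```python
import numpy as np
from scipy.linalg import qr  # pivoted QR plays the role of the "reordered" QR

rng = np.random.default_rng(1)
n, m = 5, 3
M = rng.standard_normal((m, n))          # stands in for f2 + beta*f3 + (1/beta)*f1

def residual_measures(U1):
    """Last n-m elements of the last row of R in the pivoted QR of M* = [M; U1]."""
    Mstar = np.vstack([M, U1])           # (m+1) x n
    R, piv = qr(Mstar, mode='r', pivoting=True)
    return R[-1, -(n - m):]

# Case 1: U1 lies in the row space of M, so M* has rank m -> measures vanish
r_in = residual_measures(rng.standard_normal(m) @ M)

# Case 2: generic U1, so M* has full rank m+1 -> measures are nonzero
r_out = residual_measures(rng.standard_normal(n))

print(np.max(np.abs(r_in)), np.max(np.abs(r_out)))  # ~0 versus order one
```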

Page 21: Optimal policy computation with Dynare

When an analytical solution is available

Let's write \(y_x\) for the n − m policy instruments and \(y_y\) for the m remaining endogenous variables, such that
\[
y^\star = \begin{bmatrix} y^\star_x \\ y^\star_y \end{bmatrix}.
\]
The analytical solution is given by
\[
y^\star_y = f(y^\star_x).
\]
We then have
\[
f(y^\star, y^\star, y^\star, 0) = 0
\]
Combining this analytical solution with the n − m residuals computed above, we can reduce the steady state problem to a system of n − m nonlinear equations that is much easier to solve.

Page 22: Optimal policy computation with Dynare

First order approximation of the FOCs

\[
E_t\Big\{ [U_{11}]_{\alpha\gamma}[\hat y_t]^\gamma
- [\hat\mu_t]_\eta [f_2]^\eta_\alpha - \beta [\hat\mu_{t+1}]_\eta [f_3]^\eta_\alpha - \beta^{-1} [\hat\mu_{t-1}]_\eta [f_1]^\eta_\alpha
\]
\[
- [\mu]_\eta \Big( \beta [f_{13}]^\eta_{\gamma\alpha}[\hat y_{t+2}]^\gamma + \beta^{-1} [f_{13}]^\eta_{\alpha\gamma}[\hat y_{t-2}]^\gamma
+ \big([f_{12}]^\eta_{\gamma\alpha} + \beta [f_{23}]^\eta_{\gamma\alpha}\big)[\hat y_{t+1}]^\gamma
+ \big([f_{23}]^\eta_{\alpha\gamma} + \beta^{-1} [f_{12}]^\eta_{\alpha\gamma}\big)[\hat y_{t-1}]^\gamma
\]
\[
+ \big([f_{33}]^\eta_{\alpha\gamma} + \beta [f_{22}]^\eta_{\alpha\gamma} + \beta^{-1} [f_{11}]^\eta_{\alpha\gamma}\big)[\hat y_t]^\gamma
+ [f_{24}]^\eta_{\alpha\delta}[\varepsilon_t]^\delta + \beta [f_{34}]^\eta_{\alpha\delta}[\varepsilon_{t+1}]^\delta + \beta^{-1} [f_{14}]^\eta_{\alpha\delta}[\varepsilon_{t-1}]^\delta \Big) \Big\} = 0
\]
\[
E_t\Big\{ [f_1]^\eta_\gamma [\hat y_{t+1}]^\gamma + [f_2]^\eta_\gamma [\hat y_t]^\gamma + [f_3]^\eta_\gamma [\hat y_{t-1}]^\gamma + [f_4]^\eta_\delta [\varepsilon_t]^\delta \Big\} = 0
\]

where \(\hat y_t = y_t - y\), \(\hat\mu_t = \mu_t - \mu\) and \(f_{ij}\) are the second order derivatives corresponding to the i-th and j-th arguments of the f() function.

Page 23: Optimal policy computation with Dynare

The first pitfall

A naive approach to linear-quadratic approximation, one that considers a linear approximation of the dynamics of the system and a second order approximation of the objective function, ignores the second order derivatives \(f_{ij}\) that enter the first order approximation of the dynamics of the model under optimal policy.

Page 24: Optimal policy computation with Dynare

The approximated solution function

\[
y_t = y + g_1 \hat y_{t-2} + g_2 \hat y_{t-1} + g_3 \hat\mu_{t-1} + g_4 \varepsilon_{t-1} + g_5 \varepsilon_t
\]
\[
\mu_t = \mu + h_1 \hat y_{t-2} + h_2 \hat y_{t-1} + h_3 \hat\mu_{t-1} + h_4 \varepsilon_{t-1} + h_5 \varepsilon_t
\]

Page 25: Optimal policy computation with Dynare

Dynare implementation

When the ramsey_policy instruction is given, Dynare forms the matrices of coefficients of the first order expansion of the first order necessary conditions, without explicitly writing the equations for those conditions. In the future, we will write a *.mod file containing all the equations for the Ramsey problem.

Page 26: Optimal policy computation with Dynare

Evaluating welfare under optimal policy

◮ Optimal policy under commitment is mostly interesting as a benchmark to evaluate other, more practical policies.

◮ The welfare evaluation must be consistent with the approximated solution. Ideally, if the approximated solution is the exact solution of an approximated problem, approximated welfare should be the objective function of the approximated problem.

◮ A first order approximation of the dynamics under optimal policy is the exact solution of some quadratic problem.

◮ Plugging the policy rule into the original model and computing an approximation of welfare should provide the same measure and avoid paradoxical rankings.

◮ We should aim at a second order approximation of welfare.

Page 27: Optimal policy computation with Dynare

Conjectures

1. Under optimal policy, the second order approximation of welfare is equal to a second order approximation of the Lagrangian.

2. Second order derivatives of the policy function drop from the second order approximation of welfare.

Page 28: Optimal policy computation with Dynare

Intuition from the static case

Consider
\[
\max_y U(y) \qquad \text{s.t.} \qquad f(y, s) = 0
\]
The Lagrangian is
\[
\mathcal{L}(s) = U(y) - [\lambda]_\eta [f(y, s)]^\eta
\]
The solution takes the form
\[
y = g(s), \qquad \lambda = h(s)
\]

Page 29: Optimal policy computation with Dynare

First order condition for optimality

\[
[U_1]_\alpha - [\lambda]_\eta [f_1(y, s)]^\eta_\alpha = 0
\]
\[
f(y, s) = 0
\]

Page 30: Optimal policy computation with Dynare

Second order approximation of the Lagrangian

Taking the Taylor expansion around s, with \(\hat s\) the deviation from the expansion point:
\[
U - [\lambda]_\eta [f]^\eta - [f]^\eta [h_s]_{\eta\gamma} [\hat s]^\gamma
+ \Big( \big([U_1]_\alpha - [\lambda]_\eta [f_1]^\eta_\alpha\big)[g_s]^\alpha_\gamma - [\lambda]_\eta [f_2]^\eta_\gamma \Big)[\hat s]^\gamma
\]
\[
+ \frac{1}{2}\Big( \big([U_1]_\alpha - [\lambda]_\eta [f_1]^\eta_\alpha\big)[g_{ss}]^\alpha_{\gamma\theta}
+ \big([U_{11}]_{\alpha\delta} - [\lambda]_\eta [f_{11}]^\eta_{\alpha\delta}\big)[g_s]^\alpha_\gamma [g_s]^\delta_\theta
- [\lambda]_\eta [f_{22}]^\eta_{\gamma\theta} - [f]^\eta [h_{ss}]_{\eta\gamma\theta}
\]
\[
- 2\big( [\lambda]_\eta [f_{21}]^\eta_{\alpha\theta} [g_s]^\alpha_\gamma
+ [h_s]_{\eta\gamma} [f_1]^\eta_\alpha [g_s]^\alpha_\theta \big) \Big)[\hat s]^\gamma [\hat s]^\theta
\]

Page 31: Optimal policy computation with Dynare

Simplifying

Taking the Taylor expansion of the Lagrangian around s, and using \(f = 0\) and the first order conditions:
\[
U + [U_1]_\alpha [g_s]^\alpha_\gamma [\hat s]^\gamma
+ \frac{1}{2}\Big( \big([U_{11}]_{\alpha\delta} - [\lambda]_\eta [f_{11}]^\eta_{\alpha\delta}\big)[g_s]^\alpha_\gamma [g_s]^\delta_\theta
- [\lambda]_\eta [f_{22}]^\eta_{\gamma\theta}
\]
\[
- 2\big( [\lambda]_\eta [f_{21}]^\eta_{\alpha\theta} [g_s]^\alpha_\gamma
+ [h_s]_{\eta\gamma} [f_1]^\eta_\alpha [g_s]^\alpha_\theta \big) \Big)[\hat s]^\gamma [\hat s]^\theta
\]

Page 32: Optimal policy computation with Dynare

In addition . . .

From the steady state:
\[
[U_1]_\alpha - [\lambda]_\eta [f_1]^\eta_\alpha = 0
\]
When we compute a second order approximation of the FOCs, we also have
\[
[f_1]^\eta_\alpha [g_{ss}]^\alpha_{\gamma\theta}
+ [f_{11}]^\eta_{\alpha\delta} [g_s]^\alpha_\gamma [g_s]^\delta_\theta
+ [f_{22}]^\eta_{\gamma\theta}
+ 2 [f_{21}]^\eta_{\alpha\theta} [g_s]^\alpha_\gamma = 0
\]
So,
\[
[U_1]_\alpha [g_{ss}]^\alpha_{\gamma\theta}
= - [\lambda]_\eta \Big( [f_{11}]^\eta_{\alpha\delta} [g_s]^\alpha_\gamma [g_s]^\delta_\theta
+ [f_{22}]^\eta_{\gamma\theta}
+ 2 [f_{21}]^\eta_{\alpha\theta} [g_s]^\alpha_\gamma \Big)
\]
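This last identity can be checked on a scalar toy problem of my own (not from the slides): take U(y) = ln y with constraint f(y, s) = y² − s, so that the constraint alone pins down y = g(s) = √s, and λ = U1/f1 at the optimum:

```python
import math

# Hypothetical scalar example: U(y) = ln(y), f(y, s) = y^2 - s.
# The constraint gives y = g(s) = sqrt(s); the FOC U1 - lam*f1 = 0 gives lam.
s = 2.0
y = math.sqrt(s)             # g(s)
g_s = 0.5 / math.sqrt(s)     # g'(s)
g_ss = -0.25 * s ** -1.5     # g''(s)

U1 = 1.0 / y
f1, f11, f21, f22 = 2.0 * y, 2.0, 0.0, 0.0
lam = U1 / f1                # Lagrange multiplier from the first order condition

lhs = U1 * g_ss
rhs = -lam * (f11 * g_s ** 2 + f22 + 2.0 * f21 * g_s)
print(abs(lhs - rhs))  # ~0: the U1*g_ss term is pinned down by lam and f's derivatives
```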

Page 33: Optimal policy computation with Dynare

Extending the intuition to DSGE models

Steady state condition:

[U1]α − [µ]η([f2]

ηα + β [f3]

ηα + β−1 [f1]

ηα

)= 0

◮ One needs to properly select timing of variables in welfaresummation over time.

◮ One needs to use the fact that initial value of Lagrangemultipliers is zero.

Page 34: Optimal policy computation with Dynare

Timeless perspective

◮ The time inconsistency problem stems from the fact that the initial value of the lagged Lagrange multipliers is set to zero.

◮ If, later on, the authorities re-optimize, they reset the Lagrange multipliers to zero.

◮ This mechanism reflects the fact that the authorities make their decision after private agents have formed their expectations (on the basis of the previous policy).

◮ When private agents expect the authorities to reoptimize, the economy switches to a Nash equilibrium (discretionary solution).

◮ For optimal policy in a timeless perspective, Svensson and Woodford suggest that the authorities relinquish their first period advantage and act as if the Lagrange multipliers had been initialized to zero in the far away past.

◮ What should be the initial value of the Lagrange multipliers for optimal policy in a timeless perspective?

Page 35: Optimal policy computation with Dynare

Eliminating the Lagrange multipliers

A linear approximation of the solution to a Ramsey problem takes the form
\[
\begin{bmatrix} y_t \\ \mu_t \end{bmatrix}
= \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}
\begin{bmatrix} y_{t-1} \\ \mu_{t-1} \end{bmatrix}
+ \begin{bmatrix} B_1 \\ B_2 \end{bmatrix} \varepsilon_t
= A_1 y_{t-1} + A_2 \mu_{t-1} + B \varepsilon_t
\]
where \(A_1\) and \(A_2\) are the conforming sub-matrices. The problem is to eliminate \(\mu_{t-1}\) from the expression for \(y_t\) by substituting values of \(y_{t-1}\), \(y_{t-2}\) and \(\varepsilon_{t-1}\).

Page 36: Optimal policy computation with Dynare

A QR algorithm (I)

\[
A_2 E = Q R = \begin{bmatrix} Q_{11} & Q_{12} \\ Q_{21} & Q_{22} \end{bmatrix}
\begin{bmatrix} R_1 & R_2 \\ 0 & 0 \end{bmatrix}
\]
where E is a permutation matrix, \(R_1\) is upper triangular, and \(R_2\) is empty if \(A_2\) is of full (column) rank. Replacing \(A_2\) by its decomposition, one gets
\[
Q' \begin{bmatrix} y_t \\ \mu_t \end{bmatrix} = Q' A_1 y_{t-1} + R E' \mu_{t-1} + Q' B \varepsilon_t
\]
The bottom part of the above system can be written
\[
Q_{12}' y_t + Q_{22}' \mu_t = \begin{bmatrix} Q_{12}' & Q_{22}' \end{bmatrix} \big( A_1 y_{t-1} + B \varepsilon_t \big)
\]
This system in turn necessarily contains more equations than elements in \(\mu_t\).
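The key point, that μ_{t−1} drops out of the bottom block because the trailing rows of R are zero, can be illustrated numerically. Random matrices stand in for the model's A1, A2 and B, and SciPy's pivoted QR is assumed:

```python
import numpy as np
from scipy.linalg import qr  # pivoted QR, as in the slides' A2 E = Q R

rng = np.random.default_rng(2)
n, m, p = 4, 2, 1                        # y in R^n, mu in R^m, eps in R^p
A1 = rng.standard_normal((n + m, n))     # placeholder for the model's A1
A2 = rng.standard_normal((n + m, m))     # placeholder, full column rank m
B = rng.standard_normal((n + m, p))      # placeholder for B

Q, R, piv = qr(A2, pivoting=True)        # bottom n rows of R are zero
Qb = Q[:, m:]                            # basis of the orthogonal complement of range(A2)

y_lag = rng.standard_normal(n)
eps = rng.standard_normal(p)

# Same y_{t-1} and eps_t, but two different values of mu_{t-1}:
z1 = A1 @ y_lag + A2 @ rng.standard_normal(m) + B @ eps
z2 = A1 @ y_lag + A2 @ rng.standard_normal(m) + B @ eps

# The bottom block Qb'[y_t; mu_t] is identical in both cases: mu_{t-1} is eliminated
gap = np.max(np.abs(Qb.T @ z1 - Qb.T @ z2))
print(gap)  # numerically zero
```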

Page 37: Optimal policy computation with Dynare

A QR algorithm (II)

A new application of the QR decomposition to \(Q_{22}'\) gives
\[
Q_{22}' E = Q R = \begin{bmatrix} Q_{11} & Q_{12} \\ Q_{21} & Q_{22} \end{bmatrix}
\begin{bmatrix} R_1 & R_2 \\ 0 & 0 \end{bmatrix}
\]
where E is a permutation matrix. When \(R_2\) isn't empty, the system is undetermined and it is possible to choose some of the multipliers \(\mu_t\), the ones corresponding to the columns of \(R_2\). We set them to zero, following a minimal state space type of argument. Note, however, that the QR decomposition isn't unique. Once the economy is managed according to the optimal policy, all choices of decomposition are equivalent, but the choice may matter for the initial period. This is an issue that deserves further study.

Page 38: Optimal policy computation with Dynare

A QR algorithm (III)

Then, we have
\[
\mu_t = E \begin{bmatrix}
R_1^{-1} \begin{bmatrix} Q_{11}' & Q_{21}' \end{bmatrix}
\Big( \begin{bmatrix} Q_{12}' & Q_{22}' \end{bmatrix} \big( A_1 y_{t-1} + B \varepsilon_t \big) - Q_{12}' y_t \Big) \\
0
\end{bmatrix}
\]
and
\[
\mu_{t-1} = E \begin{bmatrix}
R_1^{-1} \begin{bmatrix} Q_{11}' & Q_{21}' \end{bmatrix}
\Big( \begin{bmatrix} Q_{12}' & Q_{22}' \end{bmatrix} \big( A_1 y_{t-2} + B \varepsilon_{t-1} \big) - Q_{12}' y_{t-1} \Big) \\
0
\end{bmatrix}
\]

Page 39: Optimal policy computation with Dynare

A QR algorithm (IV)

Replacing \(\mu_{t-1}\) in the original equation for \(y_t\), one finally obtains
\[
y_t = M_1 y_{t-1} + M_2 y_{t-2} + M_3 \varepsilon_t + M_4 \varepsilon_{t-1}
\]
where
\[
M_1 = A_{11} - A_{12} E \begin{bmatrix}
R_1^{-1} \begin{bmatrix} Q_{11}' & Q_{21}' \end{bmatrix} Q_{12}' \\ 0
\end{bmatrix}
\]
\[
M_2 = A_{12} E \begin{bmatrix}
R_1^{-1} \begin{bmatrix} Q_{11}' & Q_{21}' \end{bmatrix}
\begin{bmatrix} Q_{12}' & Q_{22}' \end{bmatrix} A_1 \\ 0
\end{bmatrix}
\]
\[
M_3 = B_1
\]
\[
M_4 = A_{12} E \begin{bmatrix}
R_1^{-1} \begin{bmatrix} Q_{11}' & Q_{21}' \end{bmatrix}
\begin{bmatrix} Q_{12}' & Q_{22}' \end{bmatrix} B \\ 0
\end{bmatrix}
\]

Page 40: Optimal policy computation with Dynare

A linear–quadratic example (I)

var y inf r dr;
varexo e_y e_inf;

parameters delta sigma alpha kappa gamma1 gamma2;

delta = 0.44;
kappa = 0.18;
alpha = 0.48;
sigma = -0.06;

Page 41: Optimal policy computation with Dynare

A linear–quadratic example(II)

model(linear);
y = delta*y(-1)+(1-delta)*y(+1)+sigma*(r-inf(+1))+e_y;
inf = alpha*inf(-1)+(1-alpha)*inf(+1)+kappa*y+e_inf;
dr = r - r(-1);
end;

shocks;
var e_y; stderr 0.63;
var e_inf; stderr 0.4;
end;

Page 42: Optimal policy computation with Dynare

A linear–quadratic example(III)

planner_objective y^2 + inf^2 + 0.2*dr^2;

ramsey_policy(planner_discount=1);

Warning: the solution isn’t necessarily minimum state!

Page 43: Optimal policy computation with Dynare

Optimal simple rule

Example (Clarida, Gali, Gertler)
\[
y_t = \delta y_{t-1} + (1-\delta) E_t\, y_{t+1} + \sigma\big(r_t - E_t\, \mathit{inf}_{t+1}\big) + e_{y,t}
\]
\[
\mathit{inf}_t = \alpha\, \mathit{inf}_{t-1} + (1-\alpha) E_t\, \mathit{inf}_{t+1} + \kappa y_t + e_{\mathit{inf},t}
\]
\[
r_t = \gamma_1\, \mathit{inf}_t + \gamma_2 y_t
\]

Objective:
\[
\arg\min_{\gamma_1,\gamma_2} \; \mathrm{var}(y) + \mathrm{var}(\mathit{inf})
= \arg\min_{\gamma_1,\gamma_2} \; \lim_{\beta\to 1} E_0 \sum_{t=1}^{\infty} (1-\beta)\beta^t \big( y_t^2 + \mathit{inf}_t^2 \big)
\]

Page 44: Optimal policy computation with Dynare

DYNARE example

var y inf r;
varexo e_y e_inf;

parameters delta sigma alpha kappa gamma1 gamma2;

delta = 0.44;
kappa = 0.18;
alpha = 0.48;
sigma = -0.06;

Page 45: Optimal policy computation with Dynare

(continued)

model(linear);
y = delta*y(-1)+(1-delta)*y(+1)+sigma*(r-inf(+1))+e_y;
inf = alpha*inf(-1)+(1-alpha)*inf(+1)+kappa*y+e_inf;
r = gamma1*inf+gamma2*y;
end;

shocks;
var e_y; stderr 0.63;
var e_inf; stderr 0.4;
end;

Page 46: Optimal policy computation with Dynare

(continued)

optim_weights;
inf 1;
y 1;
end;

gamma1 = 1.1;
gamma2 = 0;

osr_params gamma1 gamma2;

osr;

Page 47: Optimal policy computation with Dynare

Another example

\[
y_t = \delta y_{t-1} + (1-\delta) E_t\, y_{t+1} + \sigma\big(r_t - E_t\, \mathit{inf}_{t+1}\big) + e_{y,t}
\]
\[
\mathit{inf}_t = \alpha\, \mathit{inf}_{t-1} + (1-\alpha) E_t\, \mathit{inf}_{t+1} + \kappa y_t + e_{\mathit{inf},t}
\]
\[
r_t = \gamma_1\, \mathit{inf}_t + \gamma_2 y_t
\]
\[
dr_t = r_t - r_{t-1}
\]

Objective:
\[
\min_{\gamma_1,\gamma_2} \; \mathrm{var}(y) + \mathrm{var}(\mathit{inf}) + 0.2\,\mathrm{var}(dr)
\]