Solving DSGE models

Macro II - Fluctuations - ENSAE, 2024-2025

Author

Pablo Winant

Published

March 12, 2025

Introduction

What is the main specificity of economic modeling?

In (macro)economics, we model the behaviour of economic agents by specifying:

  • their objective: max_{(c_s)} E_t Σ_{s≥t} β^s U(c_s), or max π_t
  • their constraints (budget constraint, econ. environment…)

. . .

This has important implications:

  • macro models are forward looking
    • rely on expectations
  • macro models need to be solved

In many cases, there is no closed form for the solution -> we need numerical techniques

Dynare

  • 1996: Michel Juillard created an open-source software package to solve DSGE models
    • DSGE: Dynamic Stochastic General Equilibrium
    • usually solved around a steady-state
  • Now about 10 contributors.
    • + power users who have contributed to the code
  • It has been widely adopted:
    • early version in Gauss
    • then Matlab/Octave/Scilab
    • latest version in Julia
    • … and Python (check out dyno 🦖)

Michel Juillard

DSGE Models in institutions

Nowadays most DSGE models built in institutions have a Dynare version (IMF/GIMF, EC/Quest, ECB/, NYFed/FRBNY)

  • they are usually based on the midsize model from Smets & Wouters (10 equations)
  • but have grown up a lot (>>100 equations)

. . .

Institutions, led by researchers, are diversifying their models:

  • Semi-Structural Models
  • Computational General Equilibrium Models
  • Network Models
  • Agent-based Models
  • Heterogeneous Agents Models

The Plan

Provide a short introduction to DSGE modeling:

  • How models are solved (today)
  • Small Open Economy (aka IRBC model)
  • Heterogeneity
  • Financial Intermediation

In passing, we’ll discuss some of the trends

Solving a model

Model

A very concise representation of a model

E_t [ f(y_{t+1}, y_t, y_{t−1}, ϵ_t) ] = 0

The problem:

  • y_t ∈ R^n: the vector of endogenous variables
  • ϵ_t ∈ R^{n_e}: the vector of exogenous variables
    • we assume that ϵ_t is a zero-mean gaussian process
  • f: R^n × R^n × R^n × R^{n_e} → R^n: the model equations

The solution:

  • g such that ∀ t, y_t = g(y_{t−1}, ϵ_t)

The situation is different when one is making a perfect foresight simulation.

The timing of the equations

Tip

In a Dynare mod-file the model equations are coded in the model; ... ; end; block.

Variable v_t (resp. v_{t−1}, v_{t+1}) is denoted by v or v(0) (resp. v(-1), v(+1)).

General Timing Convention

New information arrives with the innovations ϵt.

At date t, the information set is spanned by F_t = F(…, ϵ_{t−3}, ϵ_{t−2}, ϵ_{t−1}, ϵ_t)

By convention an endogenous variable has a subscript t if it is known first at date t.

. . .

Several variable types, depending on how they appear in the model:

  • jump variables: appear at t or t+1
  • predetermined variables: appear at t−1 and t (possibly t+1)
  • static variables: appear at t only
    • can be expressed as a function of the other variables

The timing of equations

Example

Using Dynare’s timing conventions:

  • Write the production function in the RBC model

  • Write the law of motion for capital k, with a depreciation rate δ and investment i

    • when is capital known?
    • when is investment known?
  • Add a multiplicative investment efficiency shock χ_t. Assume it is an AR1 driven by innovation η_t with autocorrelation ρ_χ

    • how do you write the law of motion for capital?
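A possible answer can be sketched as a Dynare model block. This is only one way to write it: the parameter names (alpha, delta, rho_chi) and the choice of shock process are assumptions for the illustration, not the unique solution to the exercise.

```
model;
// production: capital decided at t-1 is used at t, so it appears as k(-1)
y = exp(z) * k(-1)^alpha * n^(1-alpha);
// law of motion for capital: the stock k is known at the end of date t,
// investment i is decided at t
k = (1-delta)*k(-1) + exp(chi)*i;
// investment efficiency shock: AR1 (in logs) with innovation eta
chi = rho_chi*chi(-1) + eta;
end;
```

The timing convention does the selection automatically: because k appears at t and t−1, Dynare treats it as predetermined.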

Steady-state

The deterministic steady-state satisfies:

f(ȳ, ȳ, ȳ, 0) = 0

Often, there is a closed-form solution.

Otherwise, one must resort to a numerical solver to find the root of

y ↦ f(y, y, y, 0)

In Dynare the steady-state values are provided in the steadystate_model; ... ; end; block. One can check they are correct using the check; statement.

To find numerically the steady-state: steady;.
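As an illustration of what such a numerical solver does, the capital steady state of a simple RBC model can be found by root-finding. The calibration below (alpha, beta, delta) is assumed for the example; the closed form, which happens to exist here, is only used as a sanity check.

```python
from scipy.optimize import brentq

# Illustrative RBC calibration (assumed values, not from the lecture)
alpha, beta, delta = 0.33, 0.99, 0.025

def steady_state_residual(k):
    # Steady-state Euler equation: 1 = beta * (alpha * k^(alpha-1) + 1 - delta)
    return beta * (alpha * k ** (alpha - 1.0) + 1.0 - delta) - 1.0

# Bracket the root and solve numerically
k_ss = brentq(steady_state_residual, 1.0, 100.0)

# Closed-form solution, used here only to verify the numerical answer
k_closed = (alpha / (1.0 / beta - (1.0 - delta))) ** (1.0 / (1.0 - alpha))
```

In larger models no closed form is available and only the numerical step remains.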

The implicit system

Replacing the solution y_t = g(y_{t−1}, ϵ_t) in the system E_t [ f(y_{t+1}, y_t, y_{t−1}, ϵ_t) ] = 0

we obtain:

E_t [ f( g(g(y_{t−1}, ϵ_t), ϵ_{t+1}), g(y_{t−1}, ϵ_t), y_{t−1}, ϵ_t ) ] = 0

It is an equation implicitly defining the function g(·)

The state-space

E_t [ f( g(g(y_{t−1}, ϵ_t), ϵ_{t+1}), g(y_{t−1}, ϵ_t), y_{t−1}, ϵ_t ) ] = 0

In this expression, (y_{t−1}, ϵ_t) is the state-space:

  • it contains all information available at t to predict the future evolution of (y_s)_{s≥t}

. . .

Dropping the time subscripts (y, ϵ for today, ϵ′ for the future shock), the equation must be satisfied for any realization of (y, ϵ):

Φ(g)(y, ϵ) = E_{ϵ′} [ f( g(g(y, ϵ), ϵ′), g(y, ϵ), y, ϵ ) ] = 0

It is a functional equation Φ(g)=0

Expected shocks

First order approximation:

  • Assume |ϵ| ≪ 1 and |ϵ′| ≪ 1

Perform a Taylor expansion with respect to the future shock ϵ′:

E_{ϵ′} [ f( g(g(y, ϵ), ϵ′), g(y, ϵ), y, ϵ ) ]
= E_{ϵ′} [ f( g(g(y, ϵ), 0), g(y, ϵ), y, ϵ ) ] + E_{ϵ′} [ f′_{y_{t+1}}( g(g(y, ϵ), 0), g(y, ϵ), y, ϵ ) g_e ϵ′ ] + o(ϵ′)
= f( g(g(y, ϵ), 0), g(y, ϵ), y, ϵ ) + o(ϵ′)

. . .

This uses the fact that E[ϵ′] = 0.

At first order, expected shocks play no role.

To capture precautionary behaviour (like risk premia), we would need to increase the approximation order.

First order perturbation

We are left with the system:

F(y, ϵ) = f( g(g(y, ϵ), 0), g(y, ϵ), y, ϵ ) = 0

A variant of the implicit function theorem then yields the existence of a first-order approximation of g:

g(y, ϵ) = ȳ + g_y (y − ȳ) + g_e ϵ

. . .

Unknown quantities g_y and g_e are obtained using the method of undetermined coefficients. Plug the first-order approximation into the system and write the conditions F′_y(ȳ, 0) = 0 and F′_ϵ(ȳ, 0) = 0.

Computing gy

Recall the system: F(y, ϵ) = f( g(g(y, ϵ), 0), g(y, ϵ), y, ϵ ) = 0

We have F′_y(ȳ, 0) = f′_{y_{t+1}} g_y g_y + f′_{y_t} g_y + f′_{y_{t−1}} = 0

. . .

g_y is the solution of a specific Riccati equation A X² + B X + C = 0 where A, B, C and X = g_y are square matrices in R^{n×n}

First Order Deterministic Model

Let’s pause a minute to observe the first order deterministic model: A X² + B X + C = 0

From our intuition in dimension 1, we know there must be multiple solutions

  • how do we find them?
  • how do we select the right ones?

In the absence of shocks the dynamics of the model are given by y_t = X y_{t−1}

What is the condition for the model to be stationary?

. . .

-> the modulus of the biggest eigenvalue of X should be smaller than 1

Develop intuition in dimension 1.
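The stationarity condition above is easy to verify numerically: compute the spectral radius of X. The matrix below is an assumed example, not derived from any model.

```python
import numpy as np

# Assumed candidate solution for y_t = X y_{t-1} (illustrative values)
X = np.array([[0.9, 0.1],
              [0.0, 0.5]])

# Spectral radius: largest eigenvalue modulus of X
rho = max(abs(np.linalg.eigvals(X)))
stationary = rho < 1  # the dynamics are stationary iff rho < 1
```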

Multiplicity of solution

It is possible to show that the system is associated with 2n generalized eigenvalues:

|λ_1| ≤ … ≤ |λ_{2n}|

For each choice C of n eigenvalues (|C|=n), a specific recursive solution XC can be constructed. It has eigenvalues C.

. . .

This yields at least (2n choose n) different combinations.

. . .

A model is well defined when there is exactly one non-divergent solution.

This is equivalent to:

|λ_1| ≤ … ≤ |λ_n| ≤ 1 < |λ_{n+1}| ≤ … ≤ |λ_{2n}|

Example 1

Forward looking inflation:

π_t = α π_{t+1} with α < 1.

Is it well defined?

. . .

We can rewrite the system as:

α π_{t+1} − π_t + 0 · π_{t−1} = 0

or

π_{t+1} − (1/α + 0) π_t + (1/α · 0) π_{t−1} = 0

. . .

The generalized eigenvalues are 0 and 1/α, with 0 ≤ 1 < 1/α.

. . .

The unique stable solution is π_t = 0 · π_{t−1}
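These two eigenvalues can be recovered numerically as roots of the characteristic polynomial α λ² − λ + 0 = 0. The value α = 0.9 is assumed for the illustration.

```python
import numpy as np

alpha = 0.9  # assumed value, with alpha < 1

# Characteristic polynomial of alpha*pi(t+1) - pi(t) + 0*pi(t-1) = 0
lams = np.sort(np.abs(np.roots([alpha, -1.0, 0.0])))

# lams[0] = 0 is inside the unit circle, lams[1] = 1/alpha > 1 is outside:
# exactly one stable eigenvalue -> unique stable solution
```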

Example 2

Debt accumulation equation by a rational agent:

b_{t+1} − (1 + 1/β) b_t + (1/β) b_{t−1} = 0

Is it well-defined?

. . .

Two generalized eigenvalues λ_1 = 1 < λ_2 = 1/β

. . .

The unique non-diverging solution is b_t = b_{t−1}.

  • it is a unit-root: any initial deviation in b_{t−1} has persistent effects

Example 3

Productivity process: z_t = ρ z_{t−1} with |ρ| < 1: well defined

. . .

In that case there is a hidden infinite eigenvalue associated with z_{t+1}.

. . .

To see why, consider the system associated with eigenvalues m and ρ: z_{t+1} − (m + ρ) z_t + m ρ z_{t−1} = 0

or, dividing by m:

(1/m) z_{t+1} − (1 + ρ/m) z_t + ρ z_{t−1} = 0

Which corresponds to the initial model when m = ∞

. . .

The generalized eigenvalues are λ_1 = ρ < 1 < λ_2 = ∞

More generally, any variable that does not appear in t+1 creates one infinite generalized eigenvalue.

A criterion for well-definedness

Looking again at the list of eigenvalues, we set aside the infinite ones.

The model is well specified iff we can sort the eigenvalues as:

|λ_1| ≤ … ≤ |λ_n| ≤ 1 < |λ_{n+1}| ≤ … ≤ |λ_{n+k}| ≤ |λ_{n+k+1}| = … = |λ_{2n}| = ∞

where λ_{n+k+1}, …, λ_{2n} are the infinite eigenvalues.

Blanchard-Kahn criterion

The model satisfies the Blanchard-Kahn criterion if the number of eigenvalues greater than one is exactly equal to the number of variables appearing at t+1.

In that case the model is well-defined.

Computing the solution

There are several classical methods to compute the solution to the algebraic Riccati equation: A X² + B X + C = 0

  • QZ decomposition
    • traditionally used in the DSGE literature since Chris Sims
    • a little bit unintuitive
  • cyclic reduction
    • new default in Dynare, more adequate for big models
  • linear time iteration (cf. the appendix)
    • conceptually very simple

Computing ge

Now we have gy, how do we get ge?

Recall: F(y, ϵ) = f( g(g(y, ϵ), 0), g(y, ϵ), y, ϵ ) = 0

We have F′_ϵ(ȳ, 0) = f′_{y_{t+1}} g_y g_e + f′_{y_t} g_e + f′_{ϵ_t} = 0

Now this is easy:

g_e = −( f′_{y_{t+1}} g_y + f′_{y_t} )^{−1} f′_{ϵ_t}
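Once g_y is known, this step is a single linear solve. A minimal numerical sketch, where all the derivative matrices (and g_y itself) are assumed scalar values chosen for illustration:

```python
import numpy as np

# Assumed derivatives of f at the steady state (illustrative 1x1 matrices)
f_yp = np.array([[0.5]])    # f'_{y_{t+1}}
f_y0 = np.array([[-1.2]])   # f'_{y_t}
f_e  = np.array([[1.0]])    # f'_{eps_t}
g_y  = np.array([[0.4]])    # assumed, previously computed

# g_e = -(f'_{y_{t+1}} g_y + f'_{y_t})^{-1} f'_{eps_t}
g_e = -np.linalg.solve(f_yp @ g_y + f_y0, f_e)
```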

The model solution

The result of the model solution: y_t = g_y y_{t−1} + g_e ϵ_t (in deviations from the steady-state)

It is an AR1, driven by the exogenous shock ϵ_t.

. . .

Because it is a well known structure, one can investigate the model with

  • impulse response functions
  • stochastic simulations
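With the AR1 structure, an impulse response function is just repeated multiplication by g_y. The matrices below are assumed for illustration, not taken from an actual model.

```python
import numpy as np

# Assumed solution y_t = g_y y_{t-1} + g_e eps_t (deviations from steady state)
g_y = np.array([[0.9, 0.1],
                [0.0, 0.5]])
g_e = np.array([[1.0],
                [0.5]])

def irf(g_y, g_e, shock=0, horizon=20):
    """Response of y to a one-time unit innovation in component `shock` at date 0."""
    y = g_e[:, shock].copy()   # impact response
    path = [y.copy()]
    for _ in range(horizon - 1):
        y = g_y @ y            # propagate through the AR1 dynamics
        path.append(y.copy())
    return np.array(path)      # shape (horizon, n)

path = irf(g_y, g_e)
```

A stochastic simulation is the same recursion with a fresh draw of ϵ_t added at each date.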

. . .

Then to compare the model to the data we compute

  • implied moments:
    • covariances, autocorrelation
  • likelihood

Optimizing the fit to the data is called model estimation

Conclusion

What can you do with the solution

The solution of a model found by Dynare has an especially simple form: an AR1

  • y_t = X y_{t−1} + Y ϵ_t
  • where the covariance matrix Σ of ϵ_t can be chosen by the modeler

. . .

With this solution we can (cf next TD)

  • compute (conditional and unconditional) moments
  • perform stochastic simulations, impulse response functions

. . .

Going Further

Taking the model to the data with Dynare

  • “estimate” the model: compute the likelihood of a solution and maximize it by choosing the right parameters
  • “identify” shocks in the data

Other functions

  • higher order approximation
  • (nonlinear) perfect foresight simulations
  • Ramsey plans
  • discretionary policy

Coming Next

Many models

Appendix: Linear Time Iteration

Linear Time Iteration

Recall the system to solve: F(y, ϵ) = f( g(g(y, ϵ), 0), g(y, ϵ), y, ϵ ) = 0

but now assume the decision rules today and tomorrow are different:

  • today: y_t = g(y_{t−1}, ϵ_t) = ȳ + X (y_{t−1} − ȳ) + g_e ϵ_t
  • tomorrow: y_{t+1} = g̃(y_t, ϵ_{t+1}) = ȳ + X̃ (y_t − ȳ) + g̃_e ϵ_{t+1}

Then the Riccati equation is written:

A X̃ X + B X + C = 0

Linear Time Iteration (2)

The linear time iteration algorithm consists in solving for today's decision rule X as a function of tomorrow's decision rule X̃.

This corresponds to the simple formula:

X = −(A X̃ + B)^{−1} C

And the full algorithm can be described as:

  • choose X_0
  • for any X_n, compute X_{n+1} = T(X_n) = −(A X_n + B)^{−1} C
    • repeat until convergence
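The algorithm fits in a few lines. A minimal sketch, tried on an assumed scalar model whose characteristic polynomial (λ − 0.5)(λ − 2) = λ² − 2.5 λ + 1 has one stable and one unstable root:

```python
import numpy as np

def linear_time_iteration(A, B, C, maxit=1000, tol=1e-10):
    """Iterate X_{n+1} = -(A X_n + B)^{-1} C until convergence."""
    X = np.zeros_like(B)
    for _ in range(maxit):
        X_new = -np.linalg.solve(A @ X + B, C)
        if np.max(np.abs(X_new - X)) < tol:
            return X_new
        X = X_new
    raise RuntimeError("linear time iteration did not converge")

# Assumed example: A lam^2 + B lam + C with roots 0.5 and 2
A = np.array([[1.0]]); B = np.array([[-2.5]]); C = np.array([[1.0]])
X = linear_time_iteration(A, B, C)  # converges to the stable root 0.5
```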

Linear Time Iteration (3)

It can be shown that, starting from a random initial guess, the linear time-iteration algorithm converges to the solution X whose eigenvalues are the n smallest in modulus:

|λ_1| ≤ … ≤ |λ_n| (selected eigenvalues) ≤ |λ_{n+1}| ≤ … ≤ |λ_{2n}|

In other words, it finds the right solution when the model is well specified.

How do you check it is well specified?

  • |λ_n| is the modulus of the biggest eigenvalue of the solution X
  • what about λ_{n+1}?
    • 1/|λ_{n+1}| is the spectral radius of (A X + B)^{−1} A
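Both checks are cheap once X is known. Continuing the assumed scalar example with generalized eigenvalues 0.5 and 2 (so X = 0.5 is the selected solution):

```python
import numpy as np

# Assumed scalar model with generalized eigenvalues 0.5 and 2
A = np.array([[1.0]]); B = np.array([[-2.5]]); C = np.array([[1.0]])
X = np.array([[0.5]])  # solution with the smallest eigenvalue modulus

rho_X = max(abs(np.linalg.eigvals(X)))    # |lambda_n|: largest selected modulus
M = np.linalg.solve(A @ X + B, A)          # (A X + B)^{-1} A
rho_M = max(abs(np.linalg.eigvals(M)))    # equals 1/|lambda_{n+1}|

# Blanchard-Kahn style check: selected roots stable, rejected roots unstable
well_specified = (rho_X <= 1.0) and (rho_M < 1.0)
```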

Linear Time Iteration (4)

Define M(λ) = A λ² + B λ + C

For any solution X, M(λ) can be factorized as:

M(λ) = (λ A + A X + B)(λ I − X)

and

det(M(λ)) = det(λ A + A X + B) · det(λ I − X) = Q(λ) · det(λ I − X)

By construction Q(λ) = det(λ A + A X + B) is a polynomial whose roots are the generalized eigenvalues that are not selected by the solution, i.e. Λ ∖ Sp(X).

Linear Time Iteration (5)

For λ ≠ 0 we have:

λ ∈ Sp( −(A X + B)^{−1} A )
⟺ det( −(A X + B)^{−1} A − λ I ) = 0
⟺ det( (1/λ) A + A X + B ) = 0
⟺ Q(1/λ) = 0
⟺ 1/λ ∈ Λ ∖ Sp(X)

In words, (A X + B)^{−1} A has as eigenvalues (up to sign) the inverses of all the generalized eigenvalues that have been rejected by the selection of X.

In particular, ρ( (A X + B)^{−1} A ) = 1 / min { |λ| : λ ∈ Λ ∖ Sp(X) } = 1/|λ_{n+1}|

Footnotes

  1. Special case of Bézout's theorem; easy to check in this case.