Business Cycles and Fluctuations - AE2E6
Developed at MIT on top of open-source technologies
Syntax inspired by Matlab, but:
Everything is JIT-compiled
Open-source/free + vibrant community
# numbers (the usual operations apply)
(1.0+(2.0+3.0*(4.0+5.0)))/30
1.0
# exponentials are denoted by ^
2^8
256
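One caveat worth knowing (assuming the default 64-bit `Int`): integer powers wrap around silently on overflow instead of raising an error; `big` switches to arbitrary precision.

```julia
# Int64 arithmetic wraps silently on overflow
2^63        # wraps to typemin(Int64) == -9223372036854775808
2^64        # wraps to 0
# use big() for arbitrary-precision integers
big(2)^64   # 18446744073709551616
```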
# variable assignment
x = 10
10
# Variable names can have Unicode characters
# To get ϵ in the REPL, type \epsilon<TAB>
a = 20
σ = 34
ϵ = 1e-4
0.0001
# comparison
2 == 3
false
3<=3
true
# Strings can also contain Unicode characters
fancy_str = "α is a string"
"α is a string"
# single quotes define a character, not a string
'c'
'c': ASCII/Unicode U+0063 (category Ll: Letter, lowercase)
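Since a `Char` is just a Unicode code point, it converts to and from integers and supports simple arithmetic; a quick sketch:

```julia
Int('c')    # 99, the code point of 'c'
Char(99)    # 'c'
'c' + 1     # 'd': arithmetic shifts the code point
'a':'e'     # ranges of characters also work
```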
# string interpolation (1/2)
he_who_must_not_be_named = "Voldemort"
"Welcome $(he_who_must_not_be_named)!"
"Welcome Voldemort!"
# string interpolation (2/2)
n = 1999999
println("Iteration ", n, " is still running...")
Iteration 1999999 is still running...
# vectors
v = [1,2,3]
3-element Vector{Int64}:
1
2
3
# matrices
M = [1 2 3 ; 4 5 6 ; 7 8 9]
3×3 Matrix{Int64}:
1 2 3
4 5 6
7 8 9
# matrix multiplication
M*v
3-element Vector{Int64}:
14
32
50
# mutating vectors
x = ["One"]
push!(x, "Two")
push!(x, "Three")
push!(x, "Four")
push!(x, "Five")
# Note how the type of the vector is different from above
# Vectors in Julia hold homogeneous types
# Also note the exclamation mark in `push!`: it is a reminder
# that `push!` mutates its first argument
5-element Vector{String}:
"One"
"Two"
"Three"
"Four"
"Five"
# access elements in a vector
v[1]
1
# access elements in a matrix
M[1,2]
2
Indexing in Julia is 1-based: the first element of a collection has index 1.
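Besides literal indices, Julia lets you index from the back with the `end` keyword:

```julia
v = [1, 2, 3]
v[end]       # 3, the last element
v[end-1]     # 2
v[2:end]     # [2, 3]
```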
# slice matrices
println(M)
# keep first line
println("First line")
println(M[1,:])
# keep second column
println("Second column")
println(M[:,2])
# extract a submatrix
println(M[1:2,1:2])
[1 2 3; 4 5 6; 7 8 9]
First line
[1, 2, 3]
Second column
[2, 5, 8]
[1 2; 4 5]
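Note that slicing with `[...]` copies the data. When you want a window into the original matrix instead, the `@view` macro (from Base) returns a lazy slice that shares memory with its parent:

```julia
M = [1 2 3; 4 5 6; 7 8 9]
S = @view M[1:2, 1:2]   # no copy: S aliases a block of M
S[1, 1] = 100           # mutating the view...
M[1, 1]                 # ...mutates M as well (now 100)
```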
# concatenate vectors (vertically)
vcat( [1,2], [3,4])
4-element Vector{Int64}:
1
2
3
4
# concatenate vectors (horizontally)
hcat( [1,2], [3,4])
2×2 Matrix{Int64}:
1 3
2 4
# transpose matrix
hcat( [1,2], [3,4])'
2×2 adjoint(::Matrix{Int64}) with eltype Int64:
1 2
3 4
# like in python
# tuples can hold heterogeneous data
t = ("This", "is", 1, "tuple")
("This", "is", 1, "tuple")
# access elements in a tuple (still 1-based)
t[3]
1
# tuples are `immutable`
# The following should raise an exception
push!(t, "not a vector")
MethodError: no method matching push!(::Tuple{String, String, Int64, String}, ::String)
The function `push!` exists, but no method is defined for this combination of argument types.
Closest candidates are:
push!(::Any, ::Any, !Matched::Any)
@ Base abstractarray.jl:3529
push!(::Any, ::Any, !Matched::Any, !Matched::Any...)
@ Base abstractarray.jl:3530
push!(!Matched::BitVector, ::Any)
@ Base bitarray.jl:752
...
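Although tuples are immutable, you can build a new one by splatting the old one; NamedTuples additionally attach names to the slots. A small sketch (the values here are illustrative):

```julia
t = ("This", "is", 1, "tuple")
t2 = (t..., "extended")   # a new tuple with one more element
length(t2)                # 5
# NamedTuples: immutable, but fields are accessed by name
nt = (language = "Julia", version = 1)
nt.language               # "Julia"
```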
# loop over any iterable (like in python)
for i in 1:5
println("Iteration ", i)
end
# note how 1 and 5 are both included.
Iteration 1
Iteration 2
Iteration 3
Iteration 4
Iteration 5
for i ∈ ["Paris", "New-York", "Bogota"]
println("City: ", i)
end
City: Paris
City: New-York
City: Bogota
# Vector comprehensions (like in python)
# enumerate the squares of all odd numbers (% computes the remainder)
[i^2 for i=1:10 if i%2==1]
5-element Vector{Int64}:
1
9
25
49
81
# Matlab-like syntax
# with positional and keyword arguments separated by `;`
function fun(a,b; c=3)
    z = a+b*c
    return z
end
fun (generic function with 1 method)
fun(1,2)
7
fun(1,2; c=4)
9
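`fun` is a generic function, so further methods can be attached that are selected by argument types (multiple dispatch). As an illustrative sketch, a second method for strings, using the fact that `*` concatenates strings in Julia:

```julia
function fun(a, b; c=3)
    z = a + b*c
    return z
end

# a second method, selected when both arguments are Strings
# (keyword arguments do not participate in dispatch)
fun(a::String, b::String; c=3) = a * b

fun(1, 2)                     # 7: the numeric method
fun("multiple ", "dispatch")  # "multiple dispatch": the String method
```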
We consider here a simple autoregressive model:
\[y_t = A y_{t-1} + B e_t\]
where \(y_t=(y^1_t, y^2_t)\) is a vector of variables and \(e_t=(e^1_t, e^2_t)\) a normal i.i.d. multivariate process with covariance matrix \(\Sigma \in \mathbb{R}^{p \times p}\).
We start by choosing:
\[A = \begin{bmatrix}\rho & 2 \\ 0 & \lambda\end{bmatrix}\]
\[B = \begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix}\]
\[\Sigma = \begin{bmatrix}0.05 & 0.005 \\ 0.005 & 0.01\end{bmatrix}\]
Define Julia variables for the matrices \(A, B, \Sigma\).
ρ = 0.3
λ = 0.5
A = [ρ 2; 0 λ]
2×2 Matrix{Float64}:
0.3 2.0
0.0 0.5
# B is the identity matrix
B = [1.0 0.0; 0.0 1.0] # pay attention to use floats rather than integers
2×2 Matrix{Float64}:
1.0 0.0
0.0 1.0
# a more julian way of using the identity matrix
using LinearAlgebra: I
B = I
# the resulting object adapts to any size (no need to instantiate it)
UniformScaling{Bool}
true*I
Σ = [0.05 0.005; 0.005 0.01]
2×2 Matrix{Float64}:
0.05 0.005
0.005 0.01
Compute (programmatically) the eigenvalues of A.
using LinearAlgebra
eigvals(A)
# no surprise: these are equal to ρ, λ
2-element Vector{Float64}:
0.3
0.5
A*[0,1]
2-element Vector{Float64}:
2.0
0.5
Simulate the response to a one standard deviation shock to \(e^1_t\) and \(e^2_t\) over 10 periods.
# an easy to read / inefficient solution
# pay attention that diagonal terms in Σ contain the variance...
e0_1 = [sqrt(Σ[1,1]), 0.0]
e0_2 = [0.0, sqrt(Σ[2,2])]

sim_y_1 = hcat([A^n*e0_1 for n in 0:10]...)
sim_y_2 = hcat([ A^n*e0_2 for n in 0:10 ]...)
2×11 Matrix{Float64}:
0.0 0.2 0.16 0.098 0.0544 0.02882 … 0.00193344 0.000970658
0.1 0.05 0.025 0.0125 0.00625 0.003125 0.000195313 9.76563e-5
Plot the result using Plots.jl
using Plots
plot(sim_y_1[1,:])
# we can put two plots side by side
plot( plot(sim_y_1[1,:], title="\$y_1\$", legend=false), plot(sim_y_1[2,:], title="\$y_2\$", legend=false))
# or make two plots on the same graph:
pl = plot( sim_y_1[1,:], label="\$e_1\$", title="\$y_1\$" )
plot!(pl, sim_y_1[2,:], label="\$e_2\$")
# Let's combine everything
pl1 = plot( sim_y_1[1,:], label="\$e_1\$", title="\$y_1\$" )
plot!(pl1, sim_y_1[2,:], label="\$e_2\$")
pl2 = plot( sim_y_2[1,:], label="\$e_1\$", title="\$y_2\$" )
plot!(pl2, sim_y_2[2,:], label="\$e_2\$", legend=false)
plot(pl1, pl2)
Remarks: we see that the response to the second shock is not monotonic (in the first periods, \(y_2\) increases before decaying). This is because \(\|A\| > \rho(A)\): the operator norm exceeds the spectral radius, so transient amplification is possible before the eigenvalues take over.
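The claim about norms can be checked numerically: `opnorm` from LinearAlgebra gives the operator 2-norm, which here far exceeds the spectral radius computed from the eigenvalues:

```julia
using LinearAlgebra

A = [0.3 2.0; 0.0 0.5]
ρ_A = maximum(abs.(eigvals(A)))   # spectral radius: 0.5
n_A = opnorm(A)                   # operator 2-norm: ≈ 2.08
n_A > ρ_A                         # true: short-run amplification is possible
```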
What do you get if one eigenvalue is equal to 1? Greater than 1?
# unit root case
ρ = 0.3
λ = 1.0

A = [ρ 2; 0 λ]
e0_1 = [sqrt(Σ[1,1]), 0.0]
e0_2 = [0.0, sqrt(Σ[2,2])]

sim_y_1 = hcat([A^n*e0_1 for n in 0:10]...)
sim_y_2 = hcat([ A^n*e0_2 for n in 0:10 ]...)
pl1 = plot( sim_y_1[1,:], label="\$e_1\$", title="\$y_1\$" )
plot!(pl1, sim_y_1[2,:], label="\$e_2\$")
pl2 = plot( sim_y_2[1,:], label="\$e_1\$", title="\$y_2\$" )
plot!(pl2, sim_y_2[2,:], label="\$e_2\$", legend=false)
plot(pl1, pl2)
We see that the shock \(e_2\) has a persistent effect on both \(y_1\) and \(y_2\): it loads on the unit root, so the new steady state is not the original one.
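The long-run effect can be computed explicitly: for this triangular \(A\) with \(\lambda = 1\), the top-right entry of \(A^n\) converges to \(2/(1-\rho)\), so a shock of size 0.1 to \(e_2\) moves \(y_1\) permanently by about 0.286. A quick numerical check:

```julia
A = [0.3 2.0; 0.0 1.0]
e0_2 = [0.0, 0.1]
A^100 * e0_2        # ≈ [0.2857, 0.1]: the shock never dies out
2/(1 - 0.3) * 0.1   # the theoretical limit for the first component
```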
# explosive case
ρ = 0.3
λ = 1.02

A = [ρ 2; 0 λ]
e0_1 = [sqrt(Σ[1,1]), 0.0]
e0_2 = [0.0, sqrt(Σ[2,2])]

sim_y_1 = hcat([A^n*e0_1 for n in 0:10]...)
sim_y_2 = hcat([ A^n*e0_2 for n in 0:10 ]...)
pl1 = plot( sim_y_1[1,:], label="\$e_1\$", title="\$y_1\$" )
plot!(pl1, sim_y_1[2,:], label="\$e_2\$")
pl2 = plot( sim_y_2[1,:], label="\$e_1\$", title="\$y_2\$" )
plot!(pl2, sim_y_2[2,:], label="\$e_2\$", legend=false)
plot(pl1, pl2)
Now the response to \(e_2\) diverges. The response to \(e_1\) does not, because that shock (collinear to \([1,0]\)) does not load on the explosive eigenvalue.
Import the Distributions package. Use MvNormal
to compute draws from a multivariate normal distribution with covariance matrix \(\Sigma\).
# we need the Distributions package for MvNormal
# import Pkg; Pkg.add("Distributions")
using Distributions
# MvNormal
# check the doc
dis = MvNormal(Σ)
rand(dis)
2-element Vector{Float64}:
-0.055367726741675345
-0.13620922040042954
# we can plot the distribution of $\epsilon$
random_points = [rand(dis) for i=1:1000]
pl = scatter([e[1] for e in random_points], [e[2] for e in random_points])
# even better, with the StatsPlots Library
# import Pkg; Pkg.add("StatsPlots")
using StatsPlots
random_points = [rand(dis) for i=1:1000]
pl = scatter([e[1] for e in random_points], [e[2] for e in random_points], label="e", xlabel="\$e_1\$", ylabel="\$e_2\$")
covellipse!([0,0], Σ, n_std=2, aspect_ratio=1, label="cov")
Perform a stochastic simulation over 20 periods, starting with a one standard deviation shock to \(e_2\).
# return to old value of A
ρ = 0.3
λ = 0.5

A = [ρ 2; 0 λ]
2×2 Matrix{Float64}:
0.3 2.0
0.0 0.5
function simulate(A,B,Σ,e0=zeros(2); T=20)
    # initialize distribution object
    dis = MvNormal(Σ)

    # create vector to hold the simulation
    sim = [B*e0]

    for t=1:T
        e1 = rand(dis)

        y0 = sim[end] # last value
        y1 = A*y0 + B*e1

        push!(sim, y1)
    end

    return hcat(sim...)
end
simulate (generic function with 2 methods)
sim = simulate(A,B,Σ,e0_2)
2×21 Matrix{Float64}:
0.0 0.392371 0.374782 -0.101664 … 0.42125 0.472957 0.168838
0.1 0.0844052 -0.015747 -0.0961442 0.172038 0.133797 -0.184264
plot(
plot(sim[1,:], title="\$y_1\$" ),
plot(sim[2,:], title="\$y_2\$" )
)
Much harder to interpret results from a single simulation…
Perform K=500 stochastic simulations over 20 periods, starting with a one standard deviation shock to \(e_2\). Plot the result.
K = 500
sims = [simulate(A,B,Σ,e0_2) for n=1:K]
500-element Vector{Matrix{Float64}}:
[0.0 0.3519136419682234 … -0.13632256187189307 0.28734699004199543; 0.1 0.03458950910034711 … 0.05590879127288487 0.1821378027563172]
[0.0 0.18735118271396525 … -0.5792223496408389 -0.12954015040579497; 0.1 -0.017113568280008512 … 0.06610668517404303 0.034768318939232185]
[0.0 -0.26355803168343 … 0.13331606138155416 0.10605379996605355; 0.1 -0.08203172271058749 … 0.08789623820459265 0.04302552416068846]
[0.0 0.447924703915066 … -0.6840511756042715 -0.5087494379321347; 0.1 0.13240167512832168 … -0.13923613251848604 -0.30739802372828207]
[0.0 0.2864585646636234 … -0.8972892768061322 -0.32525236081256703; 0.1 0.052085969055819484 … -0.009248985478192695 0.19809006901765033]
[0.0 0.3176852256968753 … -0.07299075149807696 -0.1994033938565344; 0.1 0.04893276414828989 … -0.029399256319346128 -0.17523773567963405]
[0.0 -0.20515053466386868 … 0.11551566975384928 0.24349840611342544; 0.1 0.10551318621350193 … -0.06835216203436546 0.1977318992358082]
[0.0 -0.3716060471999449 … -0.23325637167917668 -0.5004115801583995; 0.1 0.004907455463646432 … -0.12496260745051108 -0.20431976135074786]
[0.0 0.2581801567444286 … -0.16912779592422972 -0.09365511537178871; 0.1 0.04981141790434147 … -0.019968585900737496 -0.02405222464014385]
[0.0 0.3162379646878034 … 0.3666433235352949 0.22238420401920062; 0.1 0.05061935649766457 … 0.10905853583928168 0.15540654408684915]
⋮
[0.0 -0.006313162400434141 … -0.3051166224680174 0.35324198956855907; 0.1 0.10919275219790044 … 0.09844593323738689 -0.052414251103087166]
[0.0 0.19066328313568945 … 0.17932712510191198 -0.3980090459422957; 0.1 0.05455624372469911 … -0.14126524049595823 -0.02266753697793905]
[0.0 0.1545923770296685 … -0.2682096572740642 -0.07992915762258111; 0.1 0.1139096215443511 … 0.023620023799938078 0.16166607816954648]
[0.0 0.13900994917320322 … -0.3260186582619859 -0.0019249994388082642; 0.1 -0.01526140568652111 … 0.10445387892545724 0.10462559642266155]
[0.0 0.023969168441970334 … -0.09978993290131524 -0.21584697092718536; 0.1 0.09702993611377461 … -0.01232470841242696 -0.19127247705898093]
[0.0 0.4065297992606911 … 0.4373798767781126 1.072163516270816; 0.1 0.23665301653087828 … 0.11865630180413081 0.24147810520943158]
[0.0 -0.005632113120375315 … 0.24083335858802074 0.2834922315575247; 0.1 0.035675410584903894 … 0.20521683118886225 0.2046794210651945]
[0.0 -0.1505758147014929 … -0.18447465647041095 -0.2854383438166833; 0.1 -0.001195702042095917 … -0.114333611336637 -0.061626044462775316]
[0.0 -0.15298328779328213 … -0.32902745858855864 -0.3585049280601489; 0.1 0.012308255207408567 … -0.09461842502322049 -0.02995515122129525]
pl1 = plot()
for sim in sims
    plot!(pl1, sim[1,:], title="\$y_1\$", legend=false, color="red", alpha=0.01)
end

# add average
average_1 = [ mean( [sim[1,t] for sim in sims] ) for t=1:size(sims[1],2)]
std_1 = [ std( [sim[1,t] for sim in sims] ) for t=1:size(sims[1],2)]

plot!(pl1, average_1, color="blue")
plot!(pl1, average_1-std_1, color="blue", linestyle=:dash)
plot!(pl1, average_1+std_1, color="blue", linestyle=:dash)
pl2 = plot()
for sim in sims
    plot!(pl2, sim[2,:], title="\$y_2\$", legend=false , color="red", alpha=0.01)
end

# add average
average_2 = [ mean( [sim[2,t] for sim in sims] ) for t=1:size(sims[1],2)]
std_2 = [ std( [sim[2,t] for sim in sims] ) for t=1:size(sims[1],2)]

plot!(pl2, average_2, color="blue")
plot!(pl2, average_2-std_2, color="blue", linestyle=:dash)
plot!(pl2, average_2+std_2, color="blue", linestyle=:dash)
plot(pl1, pl2)
We observe that the cross-sectional mean reverts towards zero and the dispersion stabilizes: the simulations settle into a stationary distribution.
Ergodic Distribution.
It can be shown that the ergodic distribution of a VAR(1) is a multivariate normal law, with covariance matrix \(\Omega\).
This matrix is a solution to the equation \(\Omega = A \Omega A' + B \Sigma B'\).
A simple algorithm to find it consists in applying the recurrence \(\Omega_n = A \Omega_{n-1} A' + B \Sigma B'\) until convergence, starting from \(\Omega_0 = 0\).
Implement this algorithm.
function ergodic_steady_state(A,B,Σ; N=1000, tol_η=1e-8)
    Ω0 = Σ*0
    for n = 1:N
        Ω = A*Ω0*A' + B*Σ*B'
        η = maximum( abs.( Ω - Ω0) )
        if η<tol_η
            return Ω
        end
        Ω0 = Ω
    end
    error("No convergence")
end
ergodic_steady_state (generic function with 1 method)
Compare the result with the empirical ergodic distribution obtained from the simulations
random_final_points = [sim[:,end] for sim in sims]
500-element Vector{Vector{Float64}}:
[0.28734699004199543, 0.1821378027563172]
[-0.12954015040579497, 0.034768318939232185]
[0.10605379996605355, 0.04302552416068846]
[-0.5087494379321347, -0.30739802372828207]
[-0.32525236081256703, 0.19809006901765033]
[-0.1994033938565344, -0.17523773567963405]
[0.24349840611342544, 0.1977318992358082]
[-0.5004115801583995, -0.20431976135074786]
[-0.09365511537178871, -0.02405222464014385]
[0.22238420401920062, 0.15540654408684915]
⋮
[0.35324198956855907, -0.052414251103087166]
[-0.3980090459422957, -0.02266753697793905]
[-0.07992915762258111, 0.16166607816954648]
[-0.0019249994388082642, 0.10462559642266155]
[-0.21584697092718536, -0.19127247705898093]
[1.072163516270816, 0.24147810520943158]
[0.2834922315575247, 0.2046794210651945]
[-0.2854383438166833, -0.061626044462775316]
[-0.3585049280601489, -0.02995515122129525]
Ω = ergodic_steady_state(A,B,Σ)
2×2 Matrix{Float64}:
0.141995 0.0215686
0.0215686 0.0133333
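As a cross-check, the fixed-point equation is linear in \(\Omega\): using the identity \(\mathrm{vec}(A \Omega A') = (A \otimes A)\,\mathrm{vec}(\Omega)\), it can be solved directly as \(\mathrm{vec}(\Omega) = (I - A \otimes A)^{-1} \mathrm{vec}(B \Sigma B')\). A sketch (here \(B = I\), so \(B \Sigma B' = \Sigma\)):

```julia
using LinearAlgebra

A = [0.3 2.0; 0.0 0.5]
Σ = [0.05 0.005; 0.005 0.01]

# vec(AΩA') = (A ⊗ A) vec(Ω), so Ω solves a 4×4 linear system
vΩ = (I - kron(A, A)) \ vec(Σ)
Ω_closed = reshape(vΩ, 2, 2)   # matches the iterative solution above
```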
# empirical covariance
cov( random_final_points)
# very close to theoretical one
2×2 Matrix{Float64}:
0.138121 0.0217592
0.0217592 0.0133982
# graphical representation
using StatsPlots
pl = scatter([e[1] for e in random_final_points], [e[2] for e in random_final_points], label="e", xlabel="\$e_1\$", ylabel="\$e_2\$", alpha=0.2)
covellipse!([0,0], Ω, n_std=2, aspect_ratio=1, label="cov")
# let's add the expected path
plot!(pl, average_1, average_2, color="black")