Final Exam 2023

Name:

Surname:

After completing the following questions, send the edited notebook to pwinant@escp.eu. You are allowed to use any available resource (but not to copy/paste any code). Also, don’t forget to comment your code and take any initiative you find relevant.

Power iteration method

In this exercise, we implement a power iteration method power_iteration(M::Matrix) to compute the dominant eigenvalue (the largest in absolute value) of a given matrix M.

Define a random 3x3 matrix M. Compute its largest eigenvalue (in absolute value) using LinearAlgebra: eigvals.
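For instance, a minimal sketch of this step (the matrix entries are arbitrary):

using LinearAlgebra: eigvals

M = rand(3, 3)                      # random 3x3 matrix, entries drawn uniformly in [0, 1)
maximum(abs.(eigvals(M)))           # eigenvalue of largest absolute value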

Define a norm2 function, which computes the Euclidean norm of any vector. Test it on some simple cases.
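A possible one-liner, together with two quick checks (abs2 squares each entry):

norm2(u) = sqrt(sum(abs2, u))       # Euclidean norm: sqrt(u₁² + ... + uₙ²)

@assert norm2([3.0, 4.0]) ≈ 5.0
@assert norm2([1.0, 0.0, 0.0]) ≈ 1.0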

Write a function iteration_step(M::Matrix, u::Vector)::Tuple{Float64, Vector} which takes a vector \(u\) with norm 1 and returns the norm \(|M u|\) together with the normalized vector \(\frac{M u}{|M u|}\).
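A sketch consistent with the declared return type, reusing the norm2 function defined above:

function iteration_step(M::Matrix, u::Vector)::Tuple{Float64, Vector}
    v = M * u              # apply the matrix
    λ = norm2(v)           # current estimate of the dominant eigenvalue (u has norm 1)
    return λ, v / λ        # normalized vector, ready for the next iteration
end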

Write a function power_iteration(M::Matrix)::Float64, which computes the spectral radius of M using the power iteration method (the spectral radius is the largest absolute value among the eigenvalues).
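A possible implementation, iterating until the eigenvalue estimate stabilizes (the maxit and tol keyword arguments are additions of this sketch, not part of the requested signature):

function power_iteration(M::Matrix; maxit=1000, tol=1e-10)
    u = rand(size(M, 1))               # random starting vector...
    u = u / norm2(u)                   # ...normalized to have norm 1
    λ = 0.0
    for _ in 1:maxit
        λ_new, u = iteration_step(M, u)
        abs(λ_new - λ) < tol && return λ_new
        λ = λ_new
    end
    return λ
end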

Check the result is correct on matrix \(M\).
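A quick consistency check (≈ compares up to a small relative tolerance):

power_iteration(M) ≈ maximum(abs.(eigvals(M)))    # should typically return true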

Doubling algorithm

Let \(M\) be a square matrix with spectral radius smaller than 1. Our goal here is to approximate the infinite sum \(\sum_{t\geq 0} M^t\) (whose result is a matrix).

Define a (non-trivial) 3x3 matrix M with spectral radius smaller than 1.
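One way to build such a matrix is to rescale a random matrix by its spectral radius:

M0 = rand(3, 3)
M = M0 / (2 * maximum(abs.(eigvals(M0))))   # spectral radius of M is 0.5 by construction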

Propose an algorithm and a function infinite_sum(M::Matrix) to approximate this sum by computing the limit of \(S_T = \sum_{t=0}^{T} M^t\) when \(T\) goes to \(\infty\).
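A possible sketch, truncating the sum once the current term becomes negligible (the tolerance and iteration cap are choices of this sketch):

using LinearAlgebra: I

function infinite_sum(M::Matrix; tol=1e-12, maxit=100_000)
    S = Matrix{Float64}(I, size(M)...)   # S_0 = M^0 = identity
    P = copy(S)                          # current term M^t
    for t in 1:maxit
        P = P * M                        # M^t
        S = S + P
        maximum(abs.(P)) < tol && break  # remaining terms are negligible
    end
    return S
end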

We now consider the doubling algorithm. Consider the sum \(D_N = \sum_{t=0}^{2^N} M^t\). Find a recursive relation between \(D_N\) and \(D_{N+1}\). Why is \(D_N\) less expensive to compute than \(S_{2^N}\)?
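One possible way to obtain such a relation is to split the sum at \(t = 2^N\):

\[D_{N+1} = \sum_{t=0}^{2^{N+1}} M^t = \sum_{t=0}^{2^{N}} M^t + M^{2^N} \sum_{s=1}^{2^{N}} M^{s} = D_N + M^{2^N}\left(D_N - I\right)\]

If the power \(M^{2^N}\) is kept up to date by repeated squaring, each doubling step costs a fixed number of matrix products, so reaching \(D_N\) takes on the order of \(N\) products, against \(2^N\) for a term-by-term computation of \(S_{2^N}\).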

Implement a function infinite_sum_doubling(M::Matrix) which computes the infinite sum using the doubling algorithm. Check the result and time it against the other implementation.
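A sketch of the doubling version, following the recursion above, together with a quick comparison (@time is the simplest built-in timer; the BenchmarkTools package would give more reliable figures):

using LinearAlgebra: I

function infinite_sum_doubling(M::Matrix; tol=1e-12, maxit=60)
    Id = Matrix{Float64}(I, size(M)...)
    S = Id + M                           # D_0 = M^0 + M^1
    P = copy(M)                          # P = M^(2^N), starting at N = 0
    for _ in 1:maxit
        S = S + P * (S - Id)             # D_{N+1} = D_N + M^(2^N) (D_N - I)
        P = P * P                        # M^(2^(N+1)) = (M^(2^N))^2
        maximum(abs.(P)) < tol && break  # further corrections are negligible
    end
    return S
end

S1 = @time infinite_sum(M)
S2 = @time infinite_sum_doubling(M)
maximum(abs.(S1 - S2))                   # both should also be close to inv(I - M)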

Reduced-form New Keynesian model

We consider the following reduced-form New Keynesian model:

\[\pi_t = \alpha E_t \pi_{t+1} + y_t\]

\[y_t = \beta E_t y_{t+1} + \gamma (i_t-\pi_t) + \delta z_t\]

\[i_t = \alpha_{y} y_t + \alpha_{\pi} \pi_t\]

where \(\pi_t\) is inflation, \(y_t\) the output gap (i.e., the gap to full employment), and \(z_t\) an AR(1) process given by:

\[z_t = \rho z_{t-1} + \epsilon_t\]

where \(\epsilon_t\) is a Gaussian white noise process with standard deviation \(\sigma_{\epsilon}\).

We’ll take \(\alpha=0.9\), \(\beta=0.9\), \(\gamma=0.1\), \(\alpha_{\pi}=0.5\), \(\alpha_y=0.5\), \(\rho=0.9\) and \(\sigma_{\epsilon}=0.01\).

Model Solution

Define a specialized structure to hold all model parameters.
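A possible sketch, using Base.@kwdef so that the baseline calibration becomes the default values (the struct name and field names are choices of this sketch; \(\delta\) is not pinned down by the calibration above, so it is left without a default):

Base.@kwdef struct Params
    α::Float64 = 0.9
    β::Float64 = 0.9
    γ::Float64 = 0.1
    δ::Float64              # no value is given in the statement
    α_π::Float64 = 0.5
    α_y::Float64 = 0.5
    ρ::Float64 = 0.9
    σ_ϵ::Float64 = 0.01
end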

Define matrices \(A\) (2x2) and \(B\) (2x1) such that:

\[\begin{bmatrix}\pi_t\\y_t\end{bmatrix} = A E_t \begin{bmatrix}\pi_{t+1}\\y_{t+1}\end{bmatrix} + B z_t\]

function create_matrices(parameters)
#     A = ...
#     B = ...
    return A, B
end
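For reference, one possible completion, under the assumption that parameters is the struct sketched above (the intermediate matrices C, D, E come from substituting the interest-rate rule into the output-gap equation and stacking the two remaining equations):

function create_matrices(parameters)
    p = parameters                       # shorthand
    # write the system as C [π_t; y_t] = D E_t [π_{t+1}; y_{t+1}] + E z_t
    C = [1.0 -1.0;
         p.γ*(1-p.α_π) 1.0-p.γ*p.α_y]
    D = [p.α 0.0;
         0.0 p.β]
    E = [0.0, p.δ]
    A = C \ D        # premultiply by C⁻¹ to obtain the requested form
    B = C \ E
    return A, B
end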

Assuming the sequence \(z_0, z_1, ...\) is known, show that we can write \(\begin{bmatrix}\pi_t\\y_t\end{bmatrix}=\sum_{s\geq0} \rho^s A^s B z_t\). Under which condition on \(A\) is this sum absolutely convergent?
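One possible route for the derivation: iterate the system forward and use \(E_t z_{t+s} = \rho^s z_t\), assuming the terminal term \(A^{T} E_t \begin{bmatrix}\pi_{t+T}\\y_{t+T}\end{bmatrix}\) vanishes as \(T\) grows:

\[\begin{bmatrix}\pi_t\\y_t\end{bmatrix} = A E_t \begin{bmatrix}\pi_{t+1}\\y_{t+1}\end{bmatrix} + B z_t = A^2 E_t \begin{bmatrix}\pi_{t+2}\\y_{t+2}\end{bmatrix} + A B E_t z_{t+1} + B z_t = \dots = \sum_{s\geq 0} A^s B \, E_t z_{t+s} = \sum_{s\geq 0} \rho^s A^s B \, z_t\]

The last expression is a geometric-type series in \(\rho A\), which suggests comparing the spectral radius of \(\rho A\) with 1.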

Check that this condition is met for the baseline calibration.
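A quick numerical check, reusing create_matrices (the value chosen for \(\delta\) is a placeholder; it does not affect \(A\)):

using LinearAlgebra: eigvals

p = Params(δ = 0.1)                      # δ is not given in the statement
A, B = create_matrices(p)
p.ρ * maximum(abs.(eigvals(A))) < 1      # expected to be true for the baseline calibration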

Bonus: find conditions on \(\alpha_y\) and \(\alpha_{\pi}\) for the sum to be convergent. When it is, we say in this context that inflation is anchored.

Compute the solution to the system, that is, find two scalars \(x_y\) and \(x_{\pi}\) such that \(y_t(z) = x_{y} z_t\) and \(\pi_t(z)= x_{\pi} z_t\) solve the system.
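Substituting the guesses into the matrix form and using \(E_t z_{t+1} = \rho z_t\) gives \(X = \rho A X + B\) with \(X = \begin{bmatrix}x_{\pi}\\x_y\end{bmatrix}\); a possible sketch (solve_model is a name chosen here):

using LinearAlgebra: I

function solve_model(p)
    A, B = create_matrices(p)
    X = (I - p.ρ * A) \ B        # solves X = ρ A X + B
    return X[1], X[2]            # (x_π, x_y), following the ordering [π_t; y_t]
end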

Compute matrices \(P\) and \(Q\) such that \(\begin{bmatrix}\pi_t\\y_t\\z_t\end{bmatrix} = P \begin{bmatrix}\pi_{t-1}\\y_{t-1}\\z_{t-1}\end{bmatrix} + Q \epsilon_t\).
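Since \(\pi_t\) and \(y_t\) are proportional to \(z_t\), only the last column of \(P\) is nonzero; a possible sketch (transition_matrices is a name chosen here):

function transition_matrices(p)
    x_π, x_y = solve_model(p)
    P = [0.0 0.0 x_π*p.ρ;
         0.0 0.0 x_y*p.ρ;
         0.0 0.0 p.ρ]
    Q = [x_π, x_y, 1.0]          # impact of ϵ_t on (π_t, y_t, z_t)
    return P, Q
end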

Simulate variables \(\pi_t, y_t, z_t\) when \(\pi_0 = y_0 = 0, z_0=0.1\) and \(\forall t, \epsilon_t = 0\).
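A possible deterministic simulation, storing the state \((\pi_t, y_t, z_t)\) column by column (the horizon of 200 periods is arbitrary):

function simulate_deterministic(p; T=200)
    P, Q = transition_matrices(p)
    s = zeros(3, T + 1)
    s[:, 1] = [0.0, 0.0, 0.1]           # (π₀, y₀, z₀)
    for t in 1:T
        s[:, t + 1] = P * s[:, t]       # ϵ_t = 0 for all t
    end
    return s                            # rows: π, y, z
end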

Run \(N=1000\) stochastic simulations of \(T=200\) periods each. Find a way to compute the standard deviation of the ergodic distribution.
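One possible sketch: simulate the N paths, then use the cross-section of the last date, once the effect of the initial condition has died out, to approximate the standard deviation of the ergodic distribution (as before, the value of δ is a placeholder):

using Statistics: std

function simulate_stochastic(p; T=200, N=1000)
    P, Q = transition_matrices(p)
    sims = zeros(3, T + 1, N)                       # all paths start from the origin
    for n in 1:N, t in 1:T
        ϵ = p.σ_ϵ * randn()
        sims[:, t + 1, n] = P * sims[:, t, n] + Q * ϵ
    end
    return sims
end

sims = simulate_stochastic(Params(δ = 0.1))
std_ergodic = [std(sims[i, end, :]) for i in 1:3]   # (σ_π, σ_y, σ_z) at the final date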