Computational Economics @ Bundesbank
Dynamic programming comes in two flavours:
For ADP, the objects of interest (shocks, decision rules) live in infinite-dimensional spaces.
They need to be quantized with a finite set of parameters.
This motivates the study of:
We will also need optimization, but we defer it to December.
Define two continuous sets \(X \subset \mathbb{R}^p\), \(Y \subset \mathbb{R}^q\).
Take a dataset: \((x_i, y_i)_{i\in[1,N]} \in X \times Y\)
Take \(\tilde{x} \in X \setminus \{x_i\}_{i\in[1,N]}\). What should the matching \(\tilde{y}\) be?
Approximation method:
Concretely we choose \(f\) from a family \(\mathcal{F}\) of functions parameterized by a parameter \(\theta\), the approximation family.
1d Graph. Join the dots. Linear/Spline
2d Graph: Regression
Conclusion: interpolate only if \(f\) is known precisely on \(X\)
\(n\)-th order spline : piecewise polynomial function that is \(n\) times differentiable except on a finite set of break points (aka knots), where it is \((n-1)\) times differentiable.
in practice the data points are the breakpoints
example: order 2
Define \[B_{i,1}(x) = 1_{x \in [x_i, x_{i+1}]}\] \[B_{i,k+1}(x) = \frac{x-x_i}{x_{i+k}-x_i}B_{i,k}(x) + \frac{x_{i+k+1}-x}{x_{i+k+1}-x_{i+1}}B_{i+1,k}(x)\]
Properties:
Theorem (de Boor): Any spline of order \(k\) on the knots \((x_i)\) can be expressed as a linear combination of the basis splines \((B_{i,k})\).
Basis Splines
Unfortunately basis splines are not “interpolating” in the sense that in general \[f(x_i) \neq \sum_{n} f(x_n) B_{n,k} (x_i)\]
One must choose other coefficients \((c_n)\) which satisfy:
\[y_i = \sum_n c_n B_{n,k} (x_i)\]
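In practice one rarely solves this linear system by hand. As a minimal sketch, SciPy's `make_interp_spline` computes the interpolating coefficients \((c_n)\) directly (the log example and grid are illustrative choices):

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# sample f = log on a grid
xs = np.linspace(1, 5, 10)
ys = np.log(xs)

# cubic interpolating B-spline: solves y_i = sum_n c_n B_{n,k}(x_i) for (c_n)
spl = make_interp_spline(xs, ys, k=3)

spl(xs)   # reproduces the data exactly at the knots
spl(1.3)  # approximates log(1.3) between knots
```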
import numpy as np
from scipy.interpolate import RegularGridInterpolator
f = lambda x: np.log(x)
xs = np.linspace(1, 5, 10)
A = f(xs)
# linear interpolation
interp_linear = RegularGridInterpolator((xs,), A)
interp_linear([1.3]) # interpolate
# cubic spline interpolation
interp_cubic = RegularGridInterpolator((xs,), A, method="cubic")
interp_cubic([1.3]) # interpolate
Let’s approximate: \(f(x;\theta) = \sum_{k=0}^K \theta_k x^k\).
We need \((K+1)\) points to fit a polynomial of order \(K\). Let’s take grid points \((x_0, \dots, x_K)\) and denote \(y_k=f(x_k)\)
We need to solve in \((\theta_k)_{k=[0,K]}\):
\[\forall n \in[0,K], \quad \underbrace{\sum_k \theta_k (x_n)^{k}}_{(M \theta)_n} = y_n\]
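The matrix \(M\) with entries \(M_{nk} = (x_n)^k\) is a Vandermonde matrix, so the system can be solved directly with NumPy. A minimal sketch (the sine example and grid are illustrative):

```python
import numpy as np

# hypothetical example: fit a cubic (K = 3) to sin on 4 grid points
K = 3
xs = np.linspace(0, 1, K + 1)
ys = np.sin(xs)

# M[n, k] = (x_n)^k
M = np.vander(xs, K + 1, increasing=True)
theta = np.linalg.solve(M, ys)  # solve M theta = y

# the fitted polynomial passes through every data point
np.allclose(M @ theta, ys)  # True
```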
Define a scalar product over functions on the domain \([a,b]\) by choosing a positive weight function \(w(x)\). \[\langle P,Q\rangle = \int_a^b w(x) P(x)Q(x)\, dx\]
Construct an orthonormal basis \((T_n)_{n\in[0,K]}\).
Approximate \[f(x)\approx f(x; \theta) = \sum_{n=0}^K \theta_n T_n(x)=\sum_{n=0}^K \langle f, T_n\rangle\, T_n(x)\]
Coefficients can still be identified by inverting: \[\forall n \in[0,K], \quad \underbrace{\sum_k \theta_k T_k(x_n)}_{(M \theta)_n} = y_n\]
\[ M = \begin{bmatrix} T_0(x_0) & T_1(x_0) & \cdots & T_K(x_0) \\\\ T_0(x_1) & T_1(x_1) & \cdots & T_K(x_1) \\\\ T_0(x_2) & T_1(x_2) & \cdots & T_K(x_2) \\\\ \vdots & \vdots & \ddots & \vdots \\ T_0(x_K) & T_1(x_K) & \cdots & T_K(x_K) \end{bmatrix} \]
scipy.interpolate
many common methods: interpolation.py
The unconditional distribution of an AR(1) is a normal law with mean \(0\) and standard deviation \(\frac{\sigma}{\sqrt{1-\rho^2}}\)
Choose \(m>0\), typically \(m=3\)
Bound the process: \(\underline{x} = -m \frac{\sigma}{\sqrt{1-\rho^2}}\) and \(\overline{x} = m \frac{\sigma}{\sqrt{1-\rho^2}}\)
Define the \(N\) discretized points (\(i\in[1,N]\)): \(y_i = \underline{x} + \frac{i-1}{N-1}(\overline{x}-\underline{x})\)
Define the transitions:
\[\pi_{ij} = prob \left( y_{t+1}=y_j \mid y_t=y_i\right)\] \[\pi_{ij} = prob \left( |x_{t+1}-y_j| = \inf_k |x_{t+1}-y_k| \left| x_t=y_i \right. \right)\]
Formulas, where \(\delta=\frac{\overline{x}-\underline{x}}{N-1}\) is the distance between two grid points:
if \(1<k<N\)
\[\pi_{jk} = F\left(\frac{y_k + \delta/2-\rho y_j}{\sigma_{\epsilon}}\right) - F\left(\frac{y_k - \delta/2-\rho y_j}{\sigma_{\epsilon}}\right)\]
if \(k=1\)
\[\pi_{j1} = F\left(\frac{y_1 + \delta/2-\rho y_j}{\sigma_{\epsilon}}\right) \]
if \(k=N\)
\[\pi_{jN} = 1- F\left(\frac{y_N - \delta/2-\rho y_j}{\sigma_{\epsilon}}\right) \]
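A minimal sketch of the whole procedure (the function name `tauchen` is ours; it follows the formulas above, with \(F\) the standard normal cdf):

```python
import numpy as np
from scipy.stats import norm

def tauchen(rho, sigma, N=9, m=3):
    """Discretize x' = rho*x + eps, eps ~ N(0, sigma^2) on an N-point grid."""
    x_max = m * sigma / np.sqrt(1 - rho**2)   # bound at m unconditional std devs
    y = np.linspace(-x_max, x_max, N)
    d = y[1] - y[0]                           # delta: distance between grid points
    P = np.empty((N, N))
    for j in range(N):
        z = (y - rho * y[j]) / sigma          # standardized grid, conditional on y_j
        P[j, 0] = norm.cdf(z[0] + d / (2 * sigma))
        P[j, -1] = 1 - norm.cdf(z[-1] - d / (2 * sigma))
        P[j, 1:-1] = norm.cdf(z[1:-1] + d / (2 * sigma)) \
                   - norm.cdf(z[1:-1] - d / (2 * sigma))
    return y, P

y, P = tauchen(rho=0.9, sigma=0.1)
P.sum(axis=1)  # each row of the transition matrix sums to 1
```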
compare generated stationary moments between discretized process and true AR1:
by looking at the exact ergodic distribution or by doing some simulations
not very precise when the process is very persistent \(\rho\approx 1\)
choose \(y_1=-\psi\), \(y_2=\psi\)
define transition matrix: \[ \Theta_2 = \begin{bmatrix} p & 1-p\\ 1-q & q \end{bmatrix} \]
choose \(p\), \(q\) and \(\psi\) to match some moments: \(E()\), \(Var()\), \(ACor()\)
Procedure converges to a Bernoulli distribution.
Moments can be computed in closed form:
Rouwenhorst method performs better for highly correlated processes
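A minimal sketch of the Rouwenhorst recursion, under the common symmetric parameterization \(p=q=\frac{1+\rho}{2}\) and \(\psi=\sqrt{N-1}\,\frac{\sigma}{\sqrt{1-\rho^2}}\) (the function name is ours):

```python
import numpy as np

def rouwenhorst(rho, sigma, N):
    """N-state Rouwenhorst discretization of x' = rho*x + eps, eps ~ N(0, sigma^2)."""
    p = (1 + rho) / 2                               # symmetric case: p = q
    psi = np.sqrt(N - 1) * sigma / np.sqrt(1 - rho**2)
    y = np.linspace(-psi, psi, N)
    Theta = np.array([[p, 1 - p], [1 - p, p]])      # start from the 2-state chain
    for n in range(3, N + 1):                       # grow the chain one state at a time
        Z = np.zeros((n, n))
        Z[:-1, :-1] += p * Theta
        Z[:-1, 1:] += (1 - p) * Theta
        Z[1:, :-1] += (1 - p) * Theta
        Z[1:, 1:] += p * Theta
        Z[1:-1, :] /= 2                             # middle rows are counted twice
        Theta = Z
    return y, Theta

y, P = rouwenhorst(rho=0.95, sigma=0.1, N=5)
P.sum(axis=1)  # each row sums to 1
```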
Given \(f\), and an iid process \(\epsilon \sim N(0,\sigma^2)\), how to approximate \(E_{\epsilon} f(\epsilon)\) ?
Ideas:
\[\frac{1}{N} \sum_n w_n f(\epsilon_n)\]
Let’s take an example:
Let’s compute \(E_{\epsilon}(C(\epsilon))\) precisely.
Discuss value of \(\gamma\): is it crazy? (risk return)
Compute expectation
σ = 0.05; γ = 40
from math import exp
import numpy as np
from numpy.random import normal
from matplotlib import pyplot as plt
U = lambda x: (x**(-γ))/(-γ)
C = lambda e: U(exp(e))
def E_ϵ(f, N=100):
    gen = (f(normal()*σ) for _ in range(N))
    return sum(gen)/N
NVec = [1000, 5000, 10000, 15000, 20000]
vals = [E_ϵ(C, N=i) for i in NVec]
Idea:
The same logic can be applied to compute integrals with a weight function \(w(x)\): \[\int_a^b f(x) w(x)\,dx\]
Gauss-Hermite:
Gauss-Legendre:
Gauss-Chebychev:
Beware that the quadrature weight is not the density of the normal law; a change of variables is needed:
\[\frac{1}{\sqrt{2 \pi \sigma^2}}\int f(x) e^{-\frac{x^2}{2\sigma^2}}dx = {\frac{1}{\sqrt{2 \pi}}}\int f(u {\sigma}) e^{-\frac{u^2}{2}}du \approx {\frac{1}{\sqrt{2\pi}}}\sum_n w_n f(\epsilon_n {\sigma })\]
where \((\epsilon_n, w_n)\) are nodes and weights for the weight function \(e^{-u^2/2}\). Libraries using the physicists’ convention \(e^{-u^2}\) require an extra rescaling \(\epsilon_n \to \sqrt{2}\,\epsilon_n\) and \(\frac{1}{\sqrt{2\pi}} \to \frac{1}{\sqrt{\pi}}\).
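A sketch with NumPy's `hermgauss`, which uses the physicists' weight \(e^{-u^2}\) and hence needs the \(\sqrt{2}\) rescaling; we check against the lognormal closed form \(E[e^{\epsilon}] = e^{\sigma^2/2}\):

```python
import numpy as np

σ = 0.05
nodes, weights = np.polynomial.hermite.hermgauss(10)   # weight e^{-u^2}

def E_ϵ(f):
    # E[f(eps)], eps ~ N(0, σ^2): substitute eps = sqrt(2)*σ*u
    return weights @ f(np.sqrt(2) * σ * nodes) / np.sqrt(np.pi)

E_ϵ(np.exp)   # ≈ exp(σ**2 / 2), accurate to near machine precision
```

With 10 nodes the quadrature is exact for polynomials up to degree 19, so for this smooth integrand the error is negligible, versus the slow \(1/\sqrt{N}\) convergence of the Monte Carlo estimate above.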