A worker's employment dynamics are described by the stochastic matrix
\[P = \begin{bmatrix} 1-\alpha & \alpha \\ \beta & 1-\beta \end{bmatrix}\]
with \(\alpha\in(0,1)\) and \(\beta\in(0,1)\). The first row corresponds to employment, the second row to unemployment.
What is the stationary equilibrium? (Choose any value for \(\alpha\) and \(\beta\).)
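As a hedged sketch (the values below are arbitrary illustrative choices, not taken from the text): the stationary distribution \(\mu\) solves \(\mu' P = \mu'\), which for this two-state chain has the closed form \(\mu = \bigl(\tfrac{\beta}{\alpha+\beta}, \tfrac{\alpha}{\alpha+\beta}\bigr)\).

# Sketch with assumed example values a and b (named so they do not clobber the α and β used below)
a, b = 0.1, 0.2
P2 = [1-a a; b 1-b]                      # employment/unemployment transition matrix

μ_star = [b, a] / (a + b)                # closed form: (employed share, unemployed share)

maximum(abs, (μ_star' * P2)' - μ_star)   # ≈ 0: μ_star is indeed stationary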
α = 0.3
β = 0.5
γ = 0.2
P = [
    (1-α)  α/2    α/2;
    β/2    (1-β)  β/2;
    γ/2    γ/2    (1-γ)
]

3×3 Matrix{Float64}:
 0.7   0.15  0.15
 0.25  0.5   0.25
 0.1   0.1   0.8
μ0 = [1.0, 1.0, 1.0]/3
μ0' * (P^10)

1×3 adjoint(::Vector{Float64}) with eltype Float64:
 0.322581  0.193548  0.483871
# iterate μ' ↦ μ' P until the distribution stops changing
function solve_steady_state(P; T=100)
    n = size(P,1)
    μ0 = (ones(n)/n)'          # start from the uniform distribution (as a row vector)
    for t in 1:T
        μ1 = μ0*P              # one step of the chain
        η = maximum(abs, μ1 - μ0)
        if η < 1e-10            # successive iterates are numerically identical
            return μ1'
        end
        μ0 = μ1
    end
    error("No convergence")
end

solve_steady_state (generic function with 1 method)
solve_steady_state(P)
3-element Vector{Float64}:
0.3225806452587981
0.19354838711776282
0.48387096762343945
Alternatively, the stationary distribution solves the linear system \((P' - I)\mu = 0\) together with the normalization \(\sum_i \mu_i = 1\); since the rows of \(P' - I\) sum to the zero vector, they are linearly dependent and one equation can be replaced by the normalization.

# using linear algebra
using LinearAlgebra: I
# I is the identity operator
M = P' - I

# replace the last row of M by the normalization (a row of ones)
M[end,:] .= 1.0
M

# define the right-hand side
r = zeros(size(M,1))
r[end] = 1.0

M \ r

3-element Vector{Float64}:
 0.32258064516129037
 0.19354838709677416
 0.48387096774193555
M = P' - I

# alternative: instead of overwriting a row of M, append the normalization as an extra row
M1 = [
    # concatenate along the first dimension
    M ; ones(size(M,1))'   # ' turns the vector into a 1x3 matrix
]
M1

# define the right-hand side
r = [zeros(size(M,1)) ; 1]

# M1 is 4×3, so \ returns the least-squares solution of the overdetermined system
M1 \ r

3-element Vector{Float64}:
 0.32258064516129054
 0.19354838709677397
 0.4838709677419355
In the long run, what will be the fraction \(p\) of time spent unemployed? (Denote by \(X_m\) the fraction of dates where one is unemployed.)
Illustrate this convergence by generating a simulated series of length 10000 starting at \(X_0=1\). Plot \(X_m-p\) against \(m\). (Take \(\alpha=\beta=0.1\).)
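A minimal simulation sketch, under the assumptions that `true` denotes unemployment, \(X_0=1\) means the worker starts unemployed, and the ergodic unemployment share is \(p = \alpha/(\alpha+\beta) = 0.5\) when \(\alpha=\beta=0.1\); the Plots package used for the figure is an assumption, not something prescribed by the text.

using Plots   # assumed plotting package

function simulate_unemployment(α, β; T=10_000)
    X = zeros(T)
    unemployed = true                  # X_0 = 1: start unemployed
    n_unemp = 0
    for m in 1:T
        n_unemp += unemployed
        X[m] = n_unemp / m             # fraction of dates spent unemployed so far
        if unemployed
            unemployed = rand() >= β   # leave unemployment with probability β
        else
            unemployed = rand() < α    # lose the job with probability α
        end
    end
    return X
end

α = β = 0.1
p = α / (α + β)                        # long-run fraction of time unemployed (here 0.5)
X = simulate_unemployment(α, β)
plot(1:length(X), X .- p; xlabel="m", ylabel="X_m - p", legend=false)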
We want to solve the following model, adapted from McCall.
What are the states, the controls, and the reward of this problem? Write down the Bellman equation.
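The model's full specification is not restated in this excerpt, so the following is only a sketch of one common McCall-style formulation, with every symbol (\(w_i\), \(q_i\), \(\lambda\), \(c\), \(\delta\)) an assumption introduced for illustration: wage offers \(w_1,\dots,w_n\) arrive with probabilities \(q_1,\dots,q_n\), an employed worker loses her job with probability \(\lambda\), an unemployed worker receives income \(c\), and payoffs are discounted by \(\delta\in(0,1)\). The states are then employment status together with the wage (offer) in hand, the control is the unemployed worker's accept/reject decision, and the reward is current income. Under these assumptions the Bellman equations could read
\[V_E(w_i) = w_i + \delta\left[(1-\lambda)\,V_E(w_i) + \lambda \sum_j q_j\, V_U(w_j)\right]\]
\[V_U(w_i) = \max\left\{V_E(w_i),\; c + \delta \sum_j q_j\, V_U(w_j)\right\}\]
and the policy vector \(x\) used below records, for each offer \(w_i\), whether it is accepted.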
Define a parameter structure for the model.
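One possible parameter structure, sketched under the assumed specification above (all field names and example values are illustrative, not taken from the original):

# Sketch: illustrative parameter structure (field names and values are assumptions)
struct Parameters
    δ::Float64              # discount factor
    λ::Float64              # job-loss probability
    c::Float64              # income while unemployed
    w::Vector{Float64}      # grid of wage offers
    q::Vector{Float64}      # offer probabilities (must sum to 1)
end

# example instantiation
par = Parameters(0.95, 0.02, 0.5, collect(range(0.5, 2.0; length=10)), fill(0.1, 10))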
Define a function value_update(V_U::Vector{Float64}, V_E::Vector{Float64}, x::Vector{Bool}, p::Parameters)::Tuple{Vector, Vector}
which takes in tomorrow's value functions and a policy vector and returns updated values for today.
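A sketch of what such a function could look like under the assumed Bellman equations and Parameters fields above (the Bellman form, field names, and return order are my choices, not the text's):

# One backup of the assumed Bellman equations at a fixed policy x
# (x[i] = true means "accept offer w[i]"); the return order (V_U, V_E) mirrors the inputs.
function value_update(V_U::Vector{Float64}, V_E::Vector{Float64}, x::Vector{Bool},
                      p::Parameters)::Tuple{Vector, Vector}
    EV_U = sum(p.q .* V_U)                                    # expected value of searching tomorrow
    V_E_new = p.w .+ p.δ .* ((1 - p.λ) .* V_E .+ p.λ * EV_U)  # value of employment at each wage
    V_U_new = [x[i] ? V_E[i] : p.c + p.δ * EV_U for i in eachindex(p.w)]
    return (V_U_new, V_E_new)
end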
Define a function policy_eval(x::Vector{Bool}, p::Parameters)::Tuple{Vector, Vector}
which takes in a policy vector and returns the value(s) of following this policy forever. You can add relevant arguments to the function.
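One possible sketch, which evaluates a fixed policy by iterating the value_update sketch above to convergence; the tolerance and iteration cap are arbitrary choices.

function policy_eval(x::Vector{Bool}, p::Parameters; tol=1e-10, maxit=10_000)::Tuple{Vector, Vector}
    n = length(p.w)
    V_U, V_E = zeros(n), zeros(n)
    for it in 1:maxit
        V_U_new, V_E_new = value_update(V_U, V_E, x, p)
        η = max(maximum(abs, V_U_new - V_U), maximum(abs, V_E_new - V_E))
        V_U, V_E = V_U_new, V_E_new
        η < tol && return (V_U, V_E)       # values of following x forever
    end
    error("policy_eval: no convergence")
end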
Define a function bellman_step(V_E::Vector, V_U::Vector, p::Parameters)::Tuple{Vector, Vector, Vector}
which returns updated values, together with improved policy rules.
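A sketch under the same assumptions: one Bellman step that updates both value functions and improves the policy greedily.

function bellman_step(V_E::Vector, V_U::Vector, p::Parameters)::Tuple{Vector, Vector, Vector}
    EV_U = sum(p.q .* V_U)
    V_E_new = p.w .+ p.δ .* ((1 - p.λ) .* V_E .+ p.λ * EV_U)
    reject = p.c + p.δ * EV_U                               # value of turning the current offer down
    x_new = [V_E_new[i] > reject for i in eachindex(p.w)]   # improved (greedy) policy
    V_U_new = max.(V_E_new, reject)
    return (V_E_new, V_U_new, x_new)
end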
Implement Value Function Iteration.
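A minimal value function iteration sketch built on bellman_step above; the function name and stopping rule are illustrative choices.

function value_function_iteration(p::Parameters; tol=1e-10, maxit=10_000)
    n = length(p.w)
    V_E, V_U = zeros(n), zeros(n)
    x = fill(false, n)
    for it in 1:maxit
        V_E_new, V_U_new, x = bellman_step(V_E, V_U, p)
        η = max(maximum(abs, V_E_new - V_E), maximum(abs, V_U_new - V_U))
        V_E, V_U = V_E_new, V_U_new
        η < tol && return (V_E, V_U, x, it)   # also return the iteration count
    end
    error("value_function_iteration: no convergence")
end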
Implement Policy Iteration and compare rates of convergence.
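A Howard-style policy iteration sketch under the same assumptions: evaluate the current policy exactly, improve it, and stop when it no longer changes. Comparing the iteration counts returned here and by value_function_iteration illustrates the difference in convergence rates (policy iteration typically needs only a handful of steps).

function policy_iteration(p::Parameters; maxit=1_000)
    n = length(p.w)
    x = fill(false, n)                               # start by rejecting every offer
    for it in 1:maxit
        V_U, V_E = policy_eval(x, p)                 # policy evaluation
        _, _, x_new = bellman_step(V_E, V_U, p)      # policy improvement
        x_new == x && return (V_E, V_U, x, it)
        x = x_new
    end
    error("policy_iteration: no convergence")
end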
Discuss the Effects of the Parameters