Optimization Pushups
The spirit of this tutorial is to learn how to write simple solution algorithms. For each algorithm, check that it works using simple test functions whose solution is known.
Write a function fixed_point(f::Function, x0::Float64) which computes a fixed point of f starting from the initial point x0.
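One possible sketch of this exercise: iterate x_{k+1} = f(x_k) until two successive iterates are close. The tolerance and iteration cap are assumptions, not part of the exercise statement.

```julia
# Fixed-point iteration: repeat x ← f(x) until the update is small.
# tol and maxit are assumed defaults; the exercise does not fix them.
function fixed_point(f::Function, x0::Float64; tol=1e-10, maxit=1000)
    x = x0
    for it in 1:maxit
        xnew = f(x)
        abs(xnew - x) < tol && return xnew
        x = xnew
    end
    error("fixed_point: no convergence after $maxit iterations")
end
```

A natural test function is cos, whose fixed point is known (the Dottie number, about 0.739085).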
Write a function bisection(f::Function, a::Float64, b::Float64) which computes a zero of function f within (a, b) using a bisection method.
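A minimal sketch, assuming f changes sign on (a, b): repeatedly halve the interval, keeping the half that still brackets the sign change. Tolerances are assumptions.

```julia
# Bisection: keep the subinterval on which f changes sign.
function bisection(f::Function, a::Float64, b::Float64; tol=1e-10, maxit=200)
    fa, fb = f(a), f(b)
    fa * fb <= 0 || error("f(a) and f(b) must have opposite signs")
    for it in 1:maxit
        c = (a + b) / 2
        fc = f(c)
        # stop when the residual or the bracket is small enough
        (abs(fc) < tol || (b - a) / 2 < tol) && return c
        if fa * fc < 0
            b, fb = c, fc     # zero lies in (a, c)
        else
            a, fa = c, fc     # zero lies in (c, b)
        end
    end
    return (a + b) / 2
end
```

Testing on f(x) = x^2 - 2 over (0, 2) should recover sqrt(2).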
Write a function golden(f::Function, a::Float64, b::Float64) which computes a zero of function f within (a, b) using a golden ratio method.
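Golden-ratio search is more commonly presented for minimization; one reading of this exercise is a bracketing root-finder that works like bisection but splits the interval at the golden point rather than the midpoint. A sketch under that reading (tolerances are assumptions):

```julia
# Bracketing root-finder splitting at the golden point instead of the midpoint.
function golden(f::Function, a::Float64, b::Float64; tol=1e-10, maxit=200)
    φ = (sqrt(5) - 1) / 2            # inverse golden ratio ≈ 0.618
    fa, fb = f(a), f(b)
    fa * fb <= 0 || error("f must change sign on (a, b)")
    for it in 1:maxit
        c = b - φ * (b - a)          # golden split point
        fc = f(c)
        (abs(fc) < tol || (b - a) < tol) && return c
        if fa * fc < 0
            b, fb = c, fc
        else
            a, fa = c, fc
        end
    end
    return (a + b) / 2
end
```

The interval shrinks by a factor of at most φ ≈ 0.618 per iteration, so convergence is still geometric, just slower than bisection in the worst case.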
Write a function zero_newton(f::Function, x0::Float64) which computes a zero of function f starting from the initial point x0.
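One possible sketch: since the exercise passes only f, the derivative can be approximated by central finite differences (the step size h is an assumption; automatic differentiation would also work).

```julia
# Newton's method for a scalar zero, with a finite-difference derivative.
function zero_newton(f::Function, x0::Float64; tol=1e-10, maxit=100)
    x = x0
    h = 1e-7                                # FD step (assumption)
    for it in 1:maxit
        fx = f(x)
        abs(fx) < tol && return x
        fp = (f(x + h) - f(x - h)) / (2h)   # central difference derivative
        x -= fx / fp                        # Newton update
    end
    error("zero_newton: no convergence after $maxit iterations")
end
```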
Add an option zero_newton(f::Function, x0::Float64, backtracking=true) which computes a zero of function f starting from the initial point x0, using backtracking in each iteration.
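A sketch with a simple backtracking rule: halve the Newton step until the residual actually shrinks. The halving factor and the floor on the step length are assumptions; fancier Armijo conditions are also possible.

```julia
# Newton's method with an optional backtracking line search on |f|.
function zero_newton(f::Function, x0::Float64, backtracking::Bool=true;
                     tol=1e-10, maxit=100)
    x = x0
    h = 1e-7
    for it in 1:maxit
        fx = f(x)
        abs(fx) < tol && return x
        fp = (f(x + h) - f(x - h)) / (2h)
        step = -fx / fp
        λ = 1.0
        if backtracking
            # halve the step until the residual decreases (assumed rule)
            while abs(f(x + λ * step)) >= abs(fx) && λ > 1e-10
                λ /= 2
            end
        end
        x += λ * step
    end
    error("zero_newton: no convergence after $maxit iterations")
end
```

With backtracking the method is much more robust to bad starting points, at the cost of extra function evaluations.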
Write a function min_gd(f::Function, x0::Float64) which computes the minimum of function f using gradient descent. Assume f returns a scalar and a gradient.
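A minimal sketch, assuming f returns a (value, gradient) tuple. The fixed learning rate η is an assumption; a line search would be more robust.

```julia
# Gradient descent: step against the gradient with a fixed learning rate.
function min_gd(f::Function, x0::Float64; η=0.1, tol=1e-8, maxit=10_000)
    x = x0
    for it in 1:maxit
        v, g = f(x)          # f returns the value and the gradient
        abs(g) < tol && return x
        x -= η * g           # descent step (fixed η: an assumption)
    end
    error("min_gd: no convergence after $maxit iterations")
end
```

A convenient test function is f(x) = (x - 3)^2, whose minimizer is 3 and whose gradient is 2(x - 3).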
Write a function min_nr(f::Function, x0::Float64) which computes the minimum of function f using the Newton-Raphson method. Assume f returns a scalar, a gradient, and a Hessian.
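A sketch, assuming f returns a (value, gradient, Hessian) tuple: minimization by Newton-Raphson amounts to applying Newton's method to the first-order condition g(x) = 0.

```julia
# Newton-Raphson for a scalar minimum: solve g(x) = 0 with Newton steps.
function min_nr(f::Function, x0::Float64; tol=1e-10, maxit=100)
    x = x0
    for it in 1:maxit
        v, g, h = f(x)      # value, gradient, Hessian
        abs(g) < tol && return x
        x -= g / h          # Newton step on the first-order condition
    end
    error("min_nr: no convergence after $maxit iterations")
end
```

On a quadratic such as f(x) = (x - 3)^2 the method converges in a single step, which makes a good first test.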
Write a method zero_newton(f::Function, x0::Vector{Float64}) which computes a zero of a vector-valued function f starting from the initial point x0.
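One possible sketch of the multivariate case: the Jacobian is approximated column by column with forward differences (the step size is an assumption), and the Newton step solves a linear system.

```julia
using LinearAlgebra

# Multivariate Newton: x ← x - J⁻¹ f(x) with a finite-difference Jacobian.
function zero_newton(f::Function, x0::Vector{Float64}; tol=1e-10, maxit=100)
    x = copy(x0)
    n = length(x)
    h = 1e-7                               # FD step (assumption)
    for it in 1:maxit
        fx = f(x)
        norm(fx) < tol && return x
        J = zeros(n, n)
        for j in 1:n                       # forward-difference Jacobian
            xh = copy(x); xh[j] += h
            J[:, j] = (f(xh) - fx) / h
        end
        x -= J \ fx                        # Newton step
    end
    error("zero_newton: no convergence after $maxit iterations")
end
```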
Add a method zero_newton(f::Function, x0::Vector{Float64}, backtracking=true) which computes a zero of function f starting from the initial point x0, using backtracking in each iteration.
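A sketch combining the finite-difference Newton solver with the same backtracking rule as in the scalar case: halve the step until the residual norm decreases (halving factor and floor are assumptions).

```julia
using LinearAlgebra

# Multivariate Newton with an optional backtracking line search on ‖f‖.
function zero_newton(f::Function, x0::Vector{Float64}, backtracking::Bool=true;
                     tol=1e-10, maxit=100)
    x = copy(x0)
    n = length(x)
    h = 1e-7
    for it in 1:maxit
        fx = f(x)
        norm(fx) < tol && return x
        J = zeros(n, n)
        for j in 1:n
            xh = copy(x); xh[j] += h
            J[:, j] = (f(xh) - fx) / h
        end
        d = -(J \ fx)                     # Newton direction
        λ = 1.0
        if backtracking
            # halve the step until ‖f‖ decreases (assumed rule)
            while norm(f(x + λ * d)) >= norm(fx) && λ > 1e-10
                λ /= 2
            end
        end
        x += λ * d
    end
    error("zero_newton: no convergence after $maxit iterations")
end
```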
Add a method zero_newton(f::Function, x0::Vector{Float64}, backtracking=true, lb::Vector{Float64}) which computes the zero of function f starting from the initial point x0, taking the complementarity constraint x >= lb into account, using the Fischer-Burmeister method.
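One possible sketch: the complementarity problem x >= lb, f(x) >= 0, (x - lb) .* f(x) = 0 can be reformulated with the Fischer-Burmeister function φ(a, b) = a + b - sqrt(a^2 + b^2), which satisfies φ(a, b) = 0 exactly when a >= 0, b >= 0, and ab = 0. Applying φ componentwise to (x - lb, f(x)) turns the constrained problem into a plain root-finding problem, solved here with the same finite-difference Newton plus backtracking as above (all solver details are assumptions).

```julia
using LinearAlgebra

# Fischer-Burmeister function: zero iff a ≥ 0, b ≥ 0, and ab = 0.
fb(a, b) = a + b - sqrt(a^2 + b^2)

# Newton on the componentwise FB residual Φ(x) = fb.(x - lb, f(x)).
function zero_newton(f::Function, x0::Vector{Float64}, backtracking::Bool,
                     lb::Vector{Float64}; tol=1e-10, maxit=100)
    Φ(x) = fb.(x .- lb, f(x))             # complementarity residual
    x = copy(x0)
    n = length(x)
    h = 1e-7
    for it in 1:maxit
        Fx = Φ(x)
        norm(Fx) < tol && return x
        J = zeros(n, n)
        for j in 1:n                      # finite-difference Jacobian of Φ
            xh = copy(x); xh[j] += h
            J[:, j] = (Φ(xh) - Fx) / h
        end
        d = -(J \ Fx)
        λ = 1.0
        if backtracking
            while norm(Φ(x + λ * d)) >= norm(Fx) && λ > 1e-10
                λ /= 2
            end
        end
        x += λ * d
    end
    error("zero_newton: no convergence after $maxit iterations")
end
```

As a check, with f(x) = [x[1] - 2] and lb = [0.0] the constraint is slack and the solver should return the unconstrained zero x = 2; with f(x) = [x[1] + 1] the constraint binds and it should return x = 0.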