I'm trying to solve a life-cycle problem in economics using Julia, but I'm having trouble with NLsolve. The model boils down to solving a two-equation system to find optimal leisure hours and capital stock in each working period. After retirement the agent sets leisure = 1, so I only need to solve a single nonlinear equation for capital, and that part works fine. It's the two-equation system that seems to break down.
I'm fairly new to Julia and to programming in general, so any advice would be very helpful; recommendations on any aspect of the code are also greatly appreciated. The model is solved backwards from the final time period.
My attempt
using Parameters
using Roots
using Plots
using NLsolve
using ForwardDiff
Model = @with_kw (α = 0.66,
                  δ = 0.02,
                  τ = 0.015,
                  β = 1/1.01,
                  T = 70,
                  Ret = 40,
                  );
# Marginal utility of consumption (the 1e-6 guards against c = 0)
function du_c(c, l, η=2, γ=2)
    if c > 0 && l > 0
        return (c + 1e-6)^(-η) * l^((1-η)*γ)
    else
        return Inf
    end
end

# Marginal utility of leisure
function du_l(c, l, η=2, γ=2)
    if l > 0 && c > 0
        return γ * (c + 1e-6)^(1-η) * l^(γ*(1-η)-1)
    else
        return Inf
    end
end
function create_euler_work(x, y, m, k, l, r, w, t)
    # x = today's capital, y = today's leisure
    @unpack α, β, τ, δ, T, Ret = m
    c_1 = x*(1+r) + (1-τ)*w*(1-y) - k[t+1]
    c_2 = k[t+1]*(1+r) + (1-τ)*w*(1-l[t+1]) - k[t+2]
    return du_c(c_1, y) - β*(1+r)*du_c(c_2, l[t+1])
end

function create_euler_retire(x, m, k, r, b, t)
    # Holds from period Ret onwards; leisure is fixed at 1
    @unpack α, β, τ, δ, T, Ret = m
    c_1 = x*(1+r) + b - k[t+1]
    c_2 = k[t+1]*(1+r) + b - k[t+2]
    return du_c(c_1, 1) - β*(1+r)*du_c(c_2, 1)
end

function create_euler_lyw(x, y, m, k, r, w, b, t)
    # Last year of work: x = today's capital, y = today's leisure
    @unpack α, β, τ, δ, T, Ret = m
    c_1 = x*(1+r) + (1-τ)*w*(1-y) - k[t+1]
    c_2 = k[t+1]*(1+r) + b - k[t+2]
    return du_c(c_1, y) - β*(1+r)*du_c(c_2, 1)
end

function create_foc(x, y, m, k, r, w, t)
    # x = today's capital, y = leisure
    @unpack α, β, τ, δ, T = m
    c = x*(1+r) + (1-τ)*w*(1-y) - k[t+1]
    return du_l(c, y) - (1-τ)*w*du_c(c, y)
end
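# Reader's note: modulo the 1e-6 guard, du_c and du_l are the partial
# derivatives of u(c, l) = (c * l^γ)^(1-η) / (1-η), so the two conditions
# solved in each working period are
#   u_c(c_t, l_t) = β (1+r) u_c(c_{t+1}, l_{t+1})   (intertemporal Euler equation)
#   u_l(c_t, l_t) = (1-τ) w u_c(c_t, l_t)           (intratemporal FOC)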
function life_cycle(m, guess, r, w, b, initial)
    @unpack α, β, τ, δ, T, Ret = m
    k = zeros(T+1);
    l = zeros(T);
    k[T] = guess
    println("Period t = $(T+1) Retirement, k = $(k[T+1]), l = NA")
    println("Period t = $T Retirement, k = $(k[T]), l = 1.0")
    ########################## Retirement ################################
    for t in T-1:-1:Ret+1
        euler(x) = create_euler_retire(x, m, k, r, b, t)
        k[t] = find_zero(euler, (0, 100))
        l[t] = 1
        println("Period t = $t Retirement, k = $(k[t]), l = $(l[t])")
    end
    ###################### Retirement Year #############################
    for t in Ret:Ret
        euler(x, y) = create_euler_lyw(x, y, m, k, r, w, b, t)
        foc(x, y) = create_foc(x, y, m, k, r, w, t)
        function f!(F, x)
            F[1] = euler(x[1], x[2])
            F[2] = foc(x[1], x[2])
        end
        res = nlsolve(f!, [5; 0.7], autodiff = :forward)
        k[t] = res.zero[1]
        l[t] = res.zero[2]
        println("Period t = $t Working, k = $(k[t]), l = $(l[t])")
    end
    ############################ Working ###############################
    for t in Ret-1:-1:1
        euler(x, y) = create_euler_work(x, y, m, k, l, r, w, t)
        foc(x, y) = create_foc(x, y, m, k, r, w, t)
        function f!(F, x)
            F[1] = euler(x[1], x[2])
            F[2] = foc(x[1], x[2])
        end
        res = nlsolve(f!, [5; 0.7], autodiff = :forward)
        k[t] = res.zero[1]
        l[t] = res.zero[2]
        println("Period t = $t Working, k = $(k[t]), l = $(l[t])")
    end
    #####################################################################
    return k[1] - initial, k, l
end
m = Model();
residual, k, l = life_cycle(m, 0.3, 0.03, 1.0, 0.0, 0.0)
The code seems to break in period 35 with the error "During the resolution of the nonlinear system, the evaluation of following equations resulted in a non-finite number: [1,2]". However, the solutions already seem to go wrong at period 37.
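One way to localize this kind of failure (a diagnostic sketch of my own, not a fix taken from the question) is to warm-start each period's solve from the previous period's solution and stop as soon as NLsolve fails to converge; this keeps the solver closer to the region where both consumptions stay positive. The snippet assumes it replaces the working-period loop inside life_cycle, so m, k, l, r, and w are in scope:
guess = [5.0; 0.7]
for t in Ret-1:-1:1
    euler(x, y) = create_euler_work(x, y, m, k, l, r, w, t)
    foc(x, y) = create_foc(x, y, m, k, r, w, t)
    f!(F, v) = (F[1] = euler(v[1], v[2]); F[2] = foc(v[1], v[2]))
    res = nlsolve(f!, guess, autodiff = :forward)
    converged(res) || error("NLsolve failed to converge at period t = $t")
    k[t], l[t] = res.zero
    guess = copy(res.zero)   # warm start for the next (earlier) period
end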
I need to minimize a function based on the Hsieh model, using the R language. The main objective is to minimize a distance function that depends on a set of other functions.
obj = function(x1){
  s = sf()
  h_til = h_tilf()
  w_til = w_tilf(x1)
  w_r = w_rf()
  p_ir = p_irf()
  #H_tr = H_trf(x1)
  W = Wf(x1)
  f1 = matrix(0, i, r)
  f2 = matrix(0, i, r)
  for (c in 1:i){
    for (j in 1:r){
      f1[c, j] = ( (W[c, j] - W_t[c, j]) / W_t[c, j] )^2
      f2[c, j] = ( (p_ir[c, j] - p_t[c, j]) / p_t[c, j] )^2
    }
  }
  d1 = sum(f1)
  d2 = sum(f2)
  D = d1 + d2
  return(D)
}
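(Incidentally, the double loop is not needed in R; the same distance can be computed with vectorized operations as D = sum(((W - W_t) / W_t)^2) + sum(((p_ir - p_t) / p_t)^2).)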
Therefore, my algorithm must find three parameters (w, tau_w, tau_h) that minimize this distance function. These three parameters are arrays with i rows and r columns, given by:
w = runif(i*r, 0, 1)
tau_w = runif(i*r, -1, 1)
tau_h = runif(i*r, -1, 1)
x1 = array( c(tau_w, tau_h, w), dim = c(i, r, 3))
I am trying to solve this using the optimx and Rsolnp libraries.
res = optim(x1,  # starting values
            obj) # function to optimise
But I get this error:
Error in x1[c, j, 1] : incorrect number of dimensions
This minimization is usually done using the Nelder-Mead algorithm.
I'm a beginner in optimization and appreciate any help. My complete code is here.
The dimensions of the array x1 are lost when you do optim(x1, obj), so the error you get is raised by w_tilf(x1), because it indexes x1[c, j, 1].
Reconstruct the array at the beginning of the obj function:
obj = function(x1){
  x1 = array(x1, dim = c(i, r, 3))
  s = sf()
  ......
}
Then opt <- optim(x1, obj) should work. It returns the solution in the opt$par field as a plain vector; you will have to do array(opt$par, dim = c(i, r, 3)) to get the array back.
I am struggling to plot an evaluated function against its Chebyshev approximation.
I am using Julia 1.2.0.
EDIT: Sorry, added the complete code.
using Plots
using Printf
pyplot()

mutable struct Cheb_struct
    c::Vector{Float64}
    min::Float64
    max::Float64
end

function cheb_coeff(min::Float64, max::Float64, n::Int, fn::Function)::Cheb_struct
    struc = Cheb_struct(Vector{Float64}(undef, n), min, max)
    f = Vector{Float64}(undef, n)
    p = Vector{Float64}(undef, n)
    max_plus_min = (max + min) / 2
    max_minus_min = (max - min) / 2
    # evaluate fn at the Chebyshev nodes mapped to [min, max]
    for k in 0:n-1
        p[k+1] = pi * ((k+1) - 0.5) / n
        f[k+1] = fn(max_plus_min + cos(p[k+1])*max_minus_min)
    end
    n2 = 2 / n
    for j in 0:n-1
        s = 0
        for i in 0:n-1
            s += f[i+1]*cos(j*p[i+1])
        end
        struc.c[j+1] = s * n2
    end
    return struc
end

# Clenshaw recurrence for evaluating the Chebyshev series at x
function approximate(struc::Cheb_struct, x::Float64)::Float64
    x1 = (2*x - struc.max - struc.min) / (struc.max - struc.min)
    x2 = 2*x1
    t = s = 0
    for j in length(struc.c):-1:2
        pom = s
        s = x2 * s - t + struc.c[j]
        t = pom
    end
    return (x1 * s - t + struc.c[1] / 2)
end

fn = sin
struc = cheb_coeff(0.0, 1.0, 10, fn)

println("coeff:")
for x in struc.c
    @printf("% .15f\n", x)
end

println("\n     x         eval        approx     eval-approx")
for x in struc.min:0.1:struc.max
    eval = fn(x)
    approx = approximate(struc, x)
    @printf("%11.8f %12.8f %12.8f % .3e\n", x, eval, approx, eval - approx)
    display(plot(x=eval, y=approx))
end
I am getting an empty plot window. I would be very grateful if someone could show me how to plot these two functions.
You should provide working code as an example. However, the code below shows how to plot:
using Plots
pyplot()
fn = sin
approxf(x) = sin(x)+rand()/10
x = 0:0.1:1
evalv = fn.(x)
approxv = approxf.(x)
p = plot(evalv,approxv)
using PyPlot
PyPlot.display_figs() #needed when running in IDE such as Atom
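Applied to the Chebyshev code from the question, a minimal sketch (assuming struc, fn, and approximate from the question are already defined) collects the values first and makes a single plot call, instead of plotting scalar pairs inside the print loop:
xs = collect(struc.min:0.01:struc.max)
evalv = fn.(xs)                                # true function values
approxv = [approximate(struc, x) for x in xs]  # Chebyshev approximation
display(plot(xs, [evalv approxv], label = ["eval" "approx"]))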
Julia's ForwardDiff documentation suggests that the function value, gradient, and Hessian can be computed in one fell swoop using the DiffResults API, but there are no examples. The DiffResults package itself has no examples either, and no documentation to speak of. The use case is self-evident: suppose I have a function f of a vector argument x, and I want to minimize it using Newton's method. Below is the blunt approach, where everything gets recomputed three times. How would I write it with DiffResults?
using ForwardDiff

function NewtMin(f, x0, eps)
    fgrad = x -> ForwardDiff.gradient(f, x)
    fhess = x -> ForwardDiff.hessian(f, x)
    oldval = f(x0)
    newx = x0 - fhess(x0)\fgrad(x0)
    newval = f(newx)
    while abs(newval - oldval) > eps
        oldval = newval
        newx = newx - fhess(newx)\fgrad(newx)
        newval = f(newx)
    end
    return newx
end
There are examples in the DiffResults.jl documentation at http://www.juliadiff.org/DiffResults.jl/stable/.
And here is a simple rewrite of NewtMin using DiffResults; it works in Julia v0.6.4, though I guess it could be refactored and optimized to be more elegant and performant.
using ForwardDiff
using DiffResults

function NewtMin(f, x0, eps)
    # A single hessian! call fills in the value, gradient and Hessian at once
    result = DiffResults.HessianResult(x0)
    ForwardDiff.hessian!(result, f, x0)
    fhess_x0 = DiffResults.hessian(result)
    fgrad_x0 = DiffResults.gradient(result)
    oldval = DiffResults.value(result)
    newx = x0 - fhess_x0\fgrad_x0
    newval = f(newx)
    while abs(newval - oldval) > eps
        oldval = newval
        ForwardDiff.hessian!(result, f, newx)
        fhess_newx = DiffResults.hessian(result)
        fgrad_newx = DiffResults.gradient(result)
        newx = newx - fhess_newx\fgrad_newx
        newval = f(newx)
    end
    return newx
end

foo(x) = sum(x.^2)
NewtMin(foo, [1., 1., 1.], 0.01)
## which should give the correct result at [0., 0., 0.]
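As a follow-up, here is a minimal, self-contained illustration of the DiffResults calls in isolation (my own sketch; g is just a toy function):
using ForwardDiff, DiffResults

g(x) = sum(abs2, x)                  # g([1, 2]) = 5
res = DiffResults.HessianResult([1.0, 2.0])
ForwardDiff.hessian!(res, g, [1.0, 2.0])
DiffResults.value(res)               # 5.0
DiffResults.gradient(res)            # [2.0, 4.0]
DiffResults.hessian(res)             # [2.0 0.0; 0.0 2.0]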
I'm trying to vectorize an inequality constraint comparing two Convex types. On one side, I have Convex.MaxAtoms, and on the other side, I have Variables. I want to do something like the following:
using Convex
N = 10
t = Variable(1)
v = Variable(N)
x = Variable(1)
z = rand(100)
problem = minimize(x)
problem.constraints += [t >= 0]
ccc = Vector{Convex.MaxAtom}(N)
for i = 1:N
    c = -(1. + minimum(x.*z))
    cc = t + c
    ccc[i] = max(cc, 0.)
end
problem.constraints += [ccc <= v]
but I'm getting the following error on the final constraint:
ERROR: LoadError: MethodError: no method matching isless(::Complex{Int64}, ::Int64)
I'm not sure where the Int64 types are coming from. Is there a better way to add this constraint besides looping through and adding individual comparisons like
for i = 1:N
    problem.constraints += [ccc[i] <= v[i]]
end
I'm trying to avoid this because eventually my N will be much larger than 10.
In this case (thanks to Dr. Udell), it works to vectorize as
c = -(1. + xisim + minimum(x.*z))
cc = t + c
ccc = max(cc,0.)
problem.constraints += [ccc <= v]
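For completeness, a self-contained sketch of the vectorized version (my reconstruction: xisim from the full model is omitted, and the variable setup just mirrors the question). The key point is that max(cc, 0.) is a single scalar atom, and Convex.jl promotes the scalar side of <= so it is compared elementwise with v:
using Convex

N = 10
t = Variable(1)
v = Variable(N)
x = Variable(1)
z = rand(100)

cc = t - (1. + minimum(x .* z))    # xisim from the full model omitted here
problem = minimize(x, [t >= 0, max(cc, 0.) <= v])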