How to get values from every iteration in JuMP - julia

I'm solving a particular optimization problem in Julia using JuMP and Ipopt, and I can't find a way to retrieve the history of values, i.e. the value of x from every iteration. I couldn't find anything useful in the documentation.
Minimal example:
using JuMP
import Ipopt

model = Model(Ipopt.Optimizer)
@variable(model, -2.0 <= x <= 2.0, start = -2.0)
@NLobjective(model, Min, (x - 1.0)^2)
optimize!(model)
value(x)
and I'd like to see the value of x from every iteration, not only the last, to create a plot of x vs. iteration.
Looking for any help :)

Each solver has a parameter controlling how verbose its output is. In the case of Ipopt, you can do the following before calling optimize!(model):
set_optimizer_attribute(model, "print_level", 7)
In the logs, look for curr_x (here is a part of the logs):
**************************************************
*** Summary of Iteration: 6:
**************************************************
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
6 3.8455657e-13 0.00e+00 8.39e-17 -5.7 5.74e-05 - 1.00e+00 1.00e+00f 1
**************************************************
*** Beginning Iteration 6 from the following point:
**************************************************
Current barrier parameter mu = 1.8449144625279479e-06
Current fraction-to-the-boundary parameter tau = 9.9999815508553747e-01
||curr_x||_inf = 9.9999937987374388e-01
||curr_s||_inf = 0.0000000000000000e+00
||curr_y_c||_inf = 0.0000000000000000e+00
||curr_y_d||_inf = 0.0000000000000000e+00
||curr_z_L||_inf = 6.1403864613595829e-07
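Since this example has a single scalar variable, the per-iteration value of x can be recovered from ||curr_x||_inf (the max-norm of the iterate). Below is a sketch that mirrors the verbose log to a file via Ipopt's output_file and file_print_level options and parses it afterwards; note it only recovers the norm, so for vector-valued problems it gives the largest component magnitude, not the full iterate:

using JuMP
import Ipopt

model = Model(Ipopt.Optimizer)
set_optimizer_attribute(model, "print_level", 7)
# Also write the verbose log to a file so it can be parsed afterwards.
set_optimizer_attribute(model, "output_file", "ipopt.log")
set_optimizer_attribute(model, "file_print_level", 7)

@variable(model, -2.0 <= x <= 2.0, start = -2.0)
@NLobjective(model, Min, (x - 1.0)^2)
optimize!(model)

# Pull ||curr_x||_inf out of each per-iteration block of the log.
xs = Float64[]
for line in eachline("ipopt.log")
    m = match(r"\|\|curr_x\|\|_inf\s*=\s*([0-9.eE+-]+)", line)
    m === nothing || push!(xs, parse(Float64, m[1]))
end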

This is currently not possible. But there's an open issue: https://github.com/jump-dev/Ipopt.jl/issues/281
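Until that lands, one crude workaround (a sketch, not an official API) is to re-solve with an increasing Ipopt iteration cap and record x after each capped solve. This redoes work, and the recorded points may differ slightly from the internal trajectory of a single run:

using JuMP
import Ipopt

model = Model(Ipopt.Optimizer)
set_optimizer_attribute(model, "print_level", 0)
@variable(model, -2.0 <= x <= 2.0, start = -2.0)
@NLobjective(model, Min, (x - 1.0)^2)

xs = Float64[]  # value of x after each capped solve
for n in 1:20
    set_optimizer_attribute(model, "max_iter", n)  # stop Ipopt after n iterations
    optimize!(model)
    push!(xs, value(x))
end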

Related

Creating a stochastic SIR model in Julia

I am new to Julia and want to create a stochastic SIR model by following: http://epirecip.es/epicookbook/chapters/sir-stochastic-discretestate-continuoustime/julia
I have written my own interpretation, which is nearly the same:
# Following the Gillespie algorithm:
# 1. Initialization of states & parameters
# 2. Monte-Carlo step. Random process/step selection.
# 3. Update all states. e.g., I = I + 1 (increase of infected by 1 person). Note: only +/- by 1.
# 4. Repeat until stopTime.
# p            - Parameter array: β, ɣ for infection rate and recovery rate, resp.
# initialState - initial states of S, I, R information.
# stopTime     - Total run time.
using Plots, Distributions

function stochasticSIR(p, initialState, stopTime)
    # Hold the states of S, I, R separately w/ a NamedTuple. See '? NamedTuple' in the REPL for details.
    # Populate the data storage arrays with the initial data and initialize the run time.
    sirData = (dataₛ = [initialState[1]], dataᵢ = [initialState[2]], dataᵣ = [initialState[3]], time = [0])
    while sirData.time[end] < stopTime
        if sirData.dataᵢ[end] == 0  # If somehow # of infected = 0, break the loop.
            break
        end
        # Rates of each process (infection, recovery). p[1] = β and p[2] = ɣ
        probᵢ = p[1] * sirData.dataₛ[end] * sirData.dataᵢ[end]
        probᵣ = p[2] * sirData.dataᵣ[end]
        probₜ = probᵢ + probᵣ  # Total reaction rate
        # When the next process happens
        k = rand(Exponential(1 / probₜ))
        push!(sirData.time, sirData.time[end] + k)  # time step by k
        # Probability that the reaction is probᵢ resp. probᵣ: probᵢ/probₜ resp. probᵣ/probₜ
        randNum = rand()
        # Update the states by randomly picking a process (Gillespie algo.)
        if randNum < (probᵢ / probₜ)
            push!(sirData.dataₛ, sirData.dataₛ[end] - 1)
            push!(sirData.dataᵢ, sirData.dataᵢ[end] + 1)
        else
            push!(sirData.dataᵢ, sirData.dataᵢ[end] - 1)
            push!(sirData.dataᵣ, sirData.dataᵣ[end] + 1)
        end
    end
end

sirOutput = stochasticSIR([0.0001, 0.05], [999, 1, 0], 200)
#plot(hcat(sirData.dataₛ, sirData.dataᵢ, sirData.dataᵣ), sirData.time)
Error:
InexactError: Int64(2.508057234147307)
Stacktrace:
 [1] Int64 at .\float.jl:709 [inlined]
 [2] convert at .\number.jl:7 [inlined]
 [3] push! at .\array.jl:868 [inlined]
 [4] stochasticSIR(::Array{Float64,1}, ::Array{Int64,1}, ::Int64) at .\In[9]:33
 [5] top-level scope at In[9]:51
Could someone please explain why I receive this error? It does not say which line it comes from (I am using a Jupyter notebook) and I do not understand it.
First error
You have to qualify your references to time as sirData.time.
The error message is a bit confusing because time is also a function in Base, so it is automatically in scope.
Second error
You need your data to be represented as Float64, so you have to explicitly type your input array:
sirOutput = stochasticSIR([0.0001, 0.05], Float64[999,1,0], 200)
Alternatively, you can create the array with float literals: [999.0,1,0]. If you create an array with only literal integers, Julia will create an integer array.
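The mechanics behind the InexactError can be reproduced in isolation. Note the same issue applies to time = [0] inside the NamedTuple, which creates a Vector{Int64} and fails on the first push of a non-integer time stamp, so it should be time = [0.0] as well:

v = [0]          # the integer literal makes this a Vector{Int64}
push!(v, 2.5)    # ERROR: InexactError: Int64(2.5)

v = [0.0]        # a float literal makes it a Vector{Float64}
push!(v, 2.5)    # fine: [0.0, 2.5]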
I'm not sure StackOverflow is the best venue for this, as you seem to be editing the original post with new errors as you go along.
Your current error at the time of writing (InexactError: Int(2.50805)) tells you that you are trying to create an integer from a Float64 floating point number, which you can't do without rounding explicitly.
I would highly recommend reading the Julia docs to get the hang of basic usage, and maybe use the Julia Discourse forum for more interactive back-and-forth debugging with the community.

SCILAB ode: How to solve 2nd order ODE

I am coming from ode45 in MATLAB and trying to learn ode in Scilab. I ran into an exception I am not sure how to address.
function der = f(t,x)
    wn3 = 2800 * %pi/30;      //rad/s
    m = 868.1/32.174;         //slugs
    k = m*wn3^2;              //lbf/ft
    w = 4100 * %pi/30;        //rad/s
    re_me = 4.09/32.174/12;   //slug-ft
    F0 = w^2*re_me;           //lbf
    der(1) = x(2);
    der(2) = -k*x(1) + F0*sin(w*t);
endfunction

x0 = [0; 0];
t = 0:0.1:5;
t0 = t(1);
x = ode(x0, t0, t, f);
plot(t, x(1,:));
I get this error message that I don't understand:
lsoda-- at t (=r1), mxstep (=i1) steps
needed before reaching tout
where i1 is : 500
where r1 is : 0.1027287737654D+01
Excessive work done on this call (perhaps wrong jacobian type).
at line 35 of executed file C:\Users\ndomenico\Documents\Scilab\high_frequency_vibrator_amplitude_3d.sce
ode: lsoda exit with state -1.
Thank you!
Your ODE is particularly stiff (k = 2319733). To me, it makes no sense to use such a large final time. The time step you took (0.1) is also very large w.r.t. the driving frequency. If you replace the line
t = 0:0.1:5;
by
t = linspace(0, 0.1, 1001);
i.e. request approximations of your solution for t in [0, 0.1] with 1000 time steps, the integration succeeds and you can plot the resulting solution.

Julia MethodError: no method matching parseNLExpr_runtime(

I'm attempting to code the method described here to estimate production functions of metal manufacturers. I've done this in Python and Matlab, but am trying to learn Julia.
spain_clean.csv is a dataset of log capital (lnk), log labor (lnl), log output (lnva), and log materials (lnm) that I am loading. Lagged variables are denoted with an "l" before them.
Code is at the bottom. I am getting an error:
ERROR: LoadError: MethodError: no method matching parseNLExpr_runtime(::JuMP.Model, ::JuMP.GenericQuadExpr{Float64,JuMP.Variable}, ::Array{ReverseDiffSparse.NodeData,1}, ::Int32, ::Array{Float64,1})
I think it has to do with the use of vector sums and arrays going into the nonlinear objective, but I don't understand Julia well enough to debug this.
using JuMP         # Need to say it whenever we use JuMP
using Clp, Ipopt   # Loading solver packages
using CSV          # csv reader

# read data
df = CSV.read("spain_clean.csv")

# MODEL CONSTRUCTION
# ------------------
acf = Model(solver=IpoptSolver())
@variable(acf, -10 <= b0 <= 10)
@variable(acf, -5 <= bk <= 5)
@variable(acf, -5 <= bl <= 5)
@variable(acf, -10 <= g1 <= 10)

const g = sum(df[:phihat]-b0-bk* df[:lnk]-bl* df[:lnl]-g1* (df[:lphihat]-b0-bk* df[:llnk]-bl* df[:llnl]))
const gllnk = sum((df[:phihat]-b0-bk* df[:lnk]-bl* df[:lnl]-g1* (df[:lphihat]-b0-bk* df[:llnk]-bl* df[:llnl])).*df[:llnk])
const gllnl = sum((df[:phihat]-b0-bk* df[:lnk]-bl* df[:lnl]-g1* (df[:lphihat]-b0-bk* df[:llnk]-bl* df[:llnl])).*df[:llnl])
const glphihat = sum((df[:phihat]-b0-bk* df[:lnk]-bl* df[:lnl]-g1* (df[:lphihat]-b0-bk* df[:llnk]-bl* df[:llnl])).*df[:lphihat])

# OBJECTIVE
@NLobjective(acf, Min, g* g + gllnk* gllnk + gllnl* gllnk + glphihat* glphihat)

# SOLVE IT
status = solve(acf)   # solves the model

println("Objective value: ", getobjectivevalue(acf))   # optimum objective value
println("b0 = ", getvalue(b0))
println("bk = ", getvalue(bk))
println("bl = ", getvalue(bl))
println("g1 = ", getvalue(g1))
Not an expert in Julia, but I think a couple of things are wrong with your code.
First, constants are not supposed to change during the solve, and you are making them functions of the decision variables. Second, what you want there are nonlinear expressions instead of constants. So instead of the constants, what you want to write is:
N = size(df, 1)
@NLexpression(acf, g, sum(df[i, :phihat]-b0-bk* df[i, :lnk]-bl* df[i, :lnl]-g1* (df[i, :lphihat]-b0-bk* df[i, :llnk]-bl* df[i, :llnl]) for i=1:N))
@NLexpression(acf, gllnk, sum((df[i,:phihat]-b0-bk* df[i,:lnk]-bl* df[i,:lnl]-g1* (df[i,:lphihat]-b0-bk* df[i,:llnk]-bl* df[i,:llnl]))*df[i,:llnk] for i=1:N))
@NLexpression(acf, gllnl, sum((df[i,:phihat]-b0-bk* df[i,:lnk]-bl* df[i,:lnl]-g1* (df[i,:lphihat]-b0-bk* df[i,:llnk]-bl* df[i,:llnl]))*df[i,:llnl] for i=1:N))
@NLexpression(acf, glphihat, sum((df[i,:phihat]-b0-bk* df[i,:lnk]-bl* df[i,:lnl]-g1* (df[i,:lphihat]-b0-bk* df[i,:llnk]-bl* df[i,:llnl]))*df[i,:lphihat] for i=1:N))
I tested this and it seems to work.
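For completeness, a sketch of the objective written in terms of those named expressions (the commented objective in the question multiplies gllnl by gllnk, which looks like a typo for gllnl squared):

@NLobjective(acf, Min, g^2 + gllnk^2 + gllnl^2 + glphihat^2)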

Julia JuMP Multivariate ML Estimation

I am trying to perform an ML estimation of a normally distributed variable in a linear regression setting in Julia using JuMP and the NLopt solver.
There is a good working example here; however, if I try to estimate the regression parameters (slopes), the code becomes quite tedious to write, in particular as the parameter space grows.
Maybe someone has an idea of how to write it more concisely. Here is my code:
# type definition to store data
type data
    n::Int
    A::Matrix
    β::Vector
    y::Vector
    ls::Vector
    err::Vector
end

# generate regression data
function Data( n = 1000 )
    A = [ones(n) rand(n, 2)]
    β = [2.1, 12.9, 3.7]
    y = A*β + rand(Normal(), n)
    ls = inv(A'A)A'y
    err = y - A * ls
    data(n, A, β, y, ls, err)
end
# initialize data
d = Data()
println( var(d.y) )

function ml( )
    m = Model( solver = NLoptSolver( algorithm = :LD_LBFGS ) )
    @defVar( m, b[1:3] )
    @defVar( m, σ >= 0, start = 1.0 )

    # This is the working example.
    # As you can see it's quite tedious to write
    # and becomes rather infeasible if there are more than,
    # let's say 10, slope parameters to estimate:
    @setNLObjective( m, Max, -(d.n/2)*log(2π*σ^2)
        - sum{ (d.y[i] - d.A[i,1]*b[1]
                       - d.A[i,2]*b[2]
                       - d.A[i,3]*b[3])^2, i=1:d.n } / (2σ^2) )
    # julia returns:
    #   slope: [2.14,12.85,3.65], variance: 1.04
    # which is what is to be expected.

    # However, this is what I would like the code to look like:
    # @setNLObjective( m, Max, -(d.n/2)*log(2π*σ^2)
    #     - sum{ (d.y[i] - (d.A[i,j]*b[j]))^2, i=1:d.n, j=1:3 } / (2σ^2) )

    # I also tried:
    # @setNLObjective( m, Max, -(d.n/2)*log(2π*σ^2)
    #     - sum{ sum{ (d.y[i] - (d.A[i,j]*b[j]))^2, i=1:d.n }, j=1:3 } / (2σ^2) )
    # but unfortunately it returns:
    #   slope: [10.21,18.89,15.88], variance: 54.78

    solve(m)
    println( getValue(b), " ", getValue(σ^2) )
end

ml()
Any ideas?
EDIT
As noted by Reza, a working example is:
@setNLObjective( m, Max, -(d.n/2)*log(2π*σ^2)
    - sum{ (d.y[i] - sum{ d.A[i,j]*b[j], j=1:3 })^2,
           i=1:d.n } / (2σ^2) )
The sum{} syntax is special syntax that only works inside JuMP macros, and it is the preferred way to write sums.
So your example would be written as:
function ml( )
    m = Model( solver = NLoptSolver( algorithm = :LD_LBFGS ) )
    @variable( m, b[1:3] )
    @variable( m, σ >= 0, start = 1.0 )
    @NLobjective( m, Max,
        -(d.n/2)*log(2π*σ^2)
        - sum{
            (d.y[i] - sum{ d.A[i,j]*b[j], j=1:3 })^2,
            i=1:d.n } / (2σ^2) )
where I've expanded it across multiple lines to be as clear as possible.
Reza's answer isn't technically wrong, but it isn't idiomatic JuMP and won't be as efficient for larger models: the comprehension materializes temporary arrays, whereas the macro syntax builds JuMP's expression graph directly.
I didn't trace through your code, but I hope the following works for you:
sum([ (d.y[i] - sum([ d.A[i,j]*b[j] for j=1:3 ]))^2 for i=1:d.n ])
As @IainDunning mentioned, the JuMP package has special syntax for summation inside its macros, so the more efficient and abstract way to do this is:
sum{ (d.y[i] - sum{ d.A[i,j]*b[j], j=1:3 })^2, i=1:d.n }
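For readers on current JuMP (1.x), the sum{} form shown above no longer parses. Here is a sketch of the same model in modern syntax, assuming d has been generated as in the question (with type replaced by struct on Julia 1.x) and with Ipopt swapped in as an illustrative solver:

using JuMP, Ipopt

m = Model(Ipopt.Optimizer)
@variable(m, b[1:3])
@variable(m, σ >= 0, start = 1.0)
# Generator syntax replaces the old sum{...} macro form.
@NLobjective(m, Max,
    -(d.n / 2) * log(2π * σ^2) -
    sum((d.y[i] - sum(d.A[i, j] * b[j] for j in 1:3))^2 for i in 1:d.n) / (2σ^2))
optimize!(m)
println("slope: ", value.(b), ", variance: ", value(σ)^2)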

How to call numerical results to integrate an ODE using Runge-Kutta-4 in Python 3?

I'm trying to solve (for m_0) numerically the following ordinary differential equation:
dm0/dx=(((1-x)*(x*(2-x))**(1.5))/(k+x)**2)*(((x*(2-x))/3.0)*(dw/dx)**2 + ((8*(k+1))/(3*(k+x)))*w**2)
The values of w and dw/dx have already been found numerically using the 4th-order Runge-Kutta method, and k is a fixed factor. I wrote code that reads the values of w and dw/dx from external files, organizes them in arrays, uses the arrays in the function, and then runs the integration. My outcome is not what is expected :( and I don't know what is wrong. If anyone could give me a hand, it would be highly appreciated. Thank you!
from math import sqrt
from numpy import array, zeros, loadtxt
from printSoln import *
from run_kut4 import *

m = 1.15                                 # Just a constant.
k = 3.0*sqrt(1.0-(1.0/m))-1.0            # k in terms of m.
omegas = loadtxt("omega.txt", float)     # Import values of w
domegas = loadtxt("domega.txt", float)   # Import values of dw/dx

w = []   # List to hold the values w**2
for s in omegas:
    w.append(s**2)            # Calculates the values w**2
omeg = array(w, float)        # Array to store the values of w**2

dw = []  # List to hold the values (dw/dx)**2
for t in domegas:
    dw.append(t**2)           # Calculates the values (dw/dx)**2
domeg = array(dw, float)      # Array to store the values of (dw/dx)**2

x = 1.0e-12             # Starting point of integration
xStop = (2.0 - k)/3.0   # Final point of integration

def F(x, y):  # Define function to be integrated
    F = zeros(1)
    for i in domeg:   # Loop to call w^2, (dw/dx)^2
        for j in omeg:
            F[0] = (((1.0-x)*(x*(2.0-x))**(1.5))/(k+x)**2)*((1.0/3.0)*x*(2.0-x)*domeg[i] + (8.0*(k+1.0)*omeg[j])/(3.0*(k+x)))
    return F

y = array([((32.0*sqrt(2.0)*(k+1.0)*(x**2.5))/(15.0*(k**3)))])  # Initial condition for m_0
h = 1.0e-5   # Integration step
freq = 0     # Prints only initial and final values
X, Y = integrate(F, x, y, xStop, h)   # Calls Runge-Kutta 4
printSoln(X, Y, freq)                 # Prints solution
Interpreting your verbal description, there is an ODE for omega, w' = F(x,w), and a coupled ODE for m0, m' = G(x,m,w,w'). The almost always optimal way to solve this is to treat it as a system of ODEs:
import numpy as np

def ODEfunc(x, y):
    w, m = y
    dw = F(x, w)
    dm = G(x, m, w, dw)
    return np.array([dw, dm])
which you can then pass to the ODE solver of your choice, e.g., the fictitious
ODEintegrate(ODEfunc, xsamples, y0)
