MINLP in Julia using Juniper

I'm trying to solve a MINLP with binary variables in Julia.
I have a user-defined objective function which is nonlinear, and I have both nonlinear and linear constraints. I tried to solve the problem using Juniper, but I always get the error KeyError: key :myfunc not found, which I don't understand.
Here is an example where the error occurs.
using Ipopt, JuMP, Juniper
N = 5
optimizer = Juniper.Optimizer
params = Dict{Symbol,Any}()
params[:nl_solver] = with_optimizer(Ipopt.Optimizer, print_level=0)
m = Model(with_optimizer(optimizer, params))
@variable(m, z[1:N], binary=true)
@NLconstraint(m, sum(z[i]*z[i+1] for i=1:N-1) <= 20)
@constraint(m, sum(z[i] for i=1:N) <= 10)
myfunc(z...) = sum(sin(i)*z[i]^i for i in 1:length(z))
register(m, :myfunc, N, myfunc, autodiff=true)
@NLobjective(m, Max, myfunc(z...))
m
optimize!(m)
println(JuMP.value.(z))
println(JuMP.objective_value(m))
println(JuMP.termination_status(m))
Do you have any idea why the error KeyError: key :myfunc not found occurs and how I can fix it?
Thanks!

It seems that the objective needs to be registered both with JuMP and Juniper. Here is a working example of an MINLP where i) the objective is non-linear, ii) the constraints are both linear and non-linear, and iii) the variables are binary.
using Juniper, Ipopt, JuMP, Cbc # <- last package is optional
N = 4
function myfunction(x...)
    return sum(x[i].^4 for i = 1:length(x))
end
m = Model(
    with_optimizer(
        Juniper.Optimizer;
        nl_solver = with_optimizer(Ipopt.Optimizer, print_level = 0),
        mip_solver = with_optimizer(Cbc.Optimizer, logLevel = 0), # <- optional
        registered_functions = [
            Juniper.register(:myfunction, N, myfunction; autodiff = true)
        ]
    )
)
register(m, :myfunction, N, myfunction; autodiff = true)
@variable(m, x[1:N], Bin)
@NLconstraint(m, sum(sin(x[i]^2) for i=1:N) <= 4)
@constraint(m, x[1]+x[2]+x[3] <= 2)
@NLobjective(m, Max, myfunction(x...))
optimize!(m)
println(JuMP.value.(x))
println(JuMP.objective_value(m))
println(JuMP.termination_status(m))
Here's the resulting output:
nl_solver : OptimizerFactory(Ipopt.Optimizer, (), Base.Iterators.Pairs(:print_level => 0))
mip_solver : OptimizerFactory(Cbc.Optimizer, (), Base.Iterators.Pairs(:logLevel => 0))
log_levels : Symbol[:Options, :Table, :Info]
registered_functions : Juniper.RegisteredFunction[Juniper.RegisteredFunction(:myfunction, 4, myfunction, nothing, nothing, true)]
#Variables: 4
#IntBinVar: 4
#Constraints: 2
#Linear Constraints: 1
#Quadratic Constraints: 0
#NonLinear Constraints: 1
Obj Sense: Max
Incumbent using start values: 0.0
Status of relaxation: LOCALLY_SOLVED
Time for relaxation: 0.19966506958007812
Relaxation Obj: 1.5925926562762966
MIPobj NLPobj Time
=============================================
1.3333 0.0 0.0
FP: 0.00896906852722168 s
FP: 1 round
FP: Obj: 3.0
ONodes CLevel Incumbent BestBound Gap Time Restarts GainGap
============================================================================================================
0 2 3.0 1.59 46.91% 0.1 0 -
#branches: 1
Obj: 3.000000119960002
[1.0, 0.0, 1.0, 1.0]
3.000000119960002
LOCALLY_SOLVED

Related

Julia: How to introduce binary integers for mixed-integer optimization problems with JuMP?

Problem description
I am trying to do a mixed-integer optimization for a "Unit Commitment" problem in Julia with JuMP. But JuMP expects the unit activation variable, x[1:N], to be a number and not a variable. However, the unit activation is a binary integer decision variable in the optimization problem, so I have trouble including the variable in the problem.
What am I doing wrong?
My approaches have been:
Approach 1: Include x[1:N] as part of the @variable macro for P_G.
m = Model(Cbc.Optimizer) # Model
@variable(m, x[1:N], Bin) # Unit activation
@variable(m, P_C[i,1]*x[i] <= P_G[i=1:N,1:T] <= P_C[i,2]*x[i]) # Unit generation limit
for i in 1:T # Load balance
    @constraint(m, sum(P_G[:,i]) == P_D[i])
end
@objective(m, Min, sum(P_G[:,1:T].*F[1:N]*x[1:N])) # Objective function
optimize!(m) # Solve
This leads to the following error:
LoadError: InexactError: convert(Float64, 50 x[1]).
Approach 2: Define the feasible region for P_G as a constraint including x[1:N]:
m = Model(Cbc.Optimizer) # Model
@variable(m, x[1:N]) # Unit activation
@variable(m, P_G[i=1:N,1:T]) # Unit generation limit
for i in 1:T # Load balance
    @constraint(m, sum(P_G[:,i]) == P_D[i])
end
for i in 1:N # Unit generation limit
    for j in 1:T
        @constraint(m, P_C[i,1]*x[i] <= P_G[i,j] <= P_C[i,2]*x[i])
    end
end
@objective(m, Min, sum(P_G[:,1:T].*F[1:N]*x[1:N])) # Objective function
optimize!(m) # Solve
This leads to: LoadError: [..] '@constraint(m, $(Expr(:escape, :(P_C[i, 1]))) * $(Expr(:escape, :(x[i]))) <= P_G[i, j] <= $(Expr(:escape, :(P_C[i, 2]))) * $(Expr(:escape, :(x[i]))))': Expected 50 x[1] to be a number.
NB: there might be more idiomatic ways to iterate, but this should be idiot-proof for Julia and JuMP newbies like me.
Working code without mixed-integer optimization
using JuMP, Cbc # Optimization and modelling
using Plots, LaTeXStrings # Plotting
# DATA
P_C = [50 200; # Power capacity [:, (min, max)]
25 200;
100 200;
120 500;
10 500;
20 500;
200 800;
200 800;
100 800;
200 1000;]
P_D = LinRange(sum(P_C[:,1]), sum(P_C[:,2]), 100) # Power demand
F = rand(100:500,10) # Random prod. prices
T = length(P_D) # Number of time steps
N = length(P_C[:,1]) # Number of generators
# MODEL
m = Model(Cbc.Optimizer) # Model
@variable(m, x[1:N], Bin) # Unit activation
@variable(m, P_C[i,1] <= P_G[i=1:N,1:T] <= P_C[i,2]) # Unit generation limit
for i in 1:T # Load balance
    @constraint(m, sum(P_G[:,i]) == P_D[i])
end
@objective(m, Min, sum(P_G[:,1:T].*F[1:N])) # Objective function
optimize!(m) # Solve
# PLOT
plt = plot(P_D[:],value.(P_G[:,1:T])', xlab = L"P_{load} [MW]", ylab = L"P_{unit} [MW]")
@show plt
Which should produce something similar:
The expected outcome of introducing the unit activation variable would be that each unit is not required to generate power in the lower region of P_load.
Preliminary
I have introduced the basics of the problem with success:
Objective function: Minimize the cost of power generation
Variable, P_G: for power generation (feasible region defined by min and max capacity, P_C)
Production cost, F (as constant only!)
Power demand, P_D, is set to be a linear space from the min power cap. to the max cap.
Mathematically expressed:
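(The original post showed the formulation as an image; the following is a reconstruction from the bullets above, so treat it as an approximation of what was shown.)

\min_{P_G} \; \sum_{t=1}^{T} \sum_{i=1}^{N} F_i \, P_{G,i,t}
\text{s.t.} \quad \sum_{i=1}^{N} P_{G,i,t} = P_{D,t} \qquad \forall t
\qquad\;\;\; P_{C,i}^{\min} \le P_{G,i,t} \le P_{C,i}^{\max} \qquad \forall i,\, t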
Variable bounds cannot include other variables. Do this instead:
m = Model(Cbc.Optimizer)
@variable(m, x[1:N], Bin)
@variable(m, P_G[i=1:N,1:T])
@constraint(m, [i=1:N, t=1:T], P_C[i, 1] * x[i] <= P_G[i, t])
@constraint(m, [i=1:N, t=1:T], P_G[i, t] <= P_C[i, 2] * x[i])
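From there, the rest of the model in the question can be completed as in the working code; because P_G[i,t] is forced to zero whenever x[i] = 0, the cost term no longer needs the bilinear product with x. A sketch, reusing P_D, F, N and T from the question:
for t in 1:T # Load balance
    @constraint(m, sum(P_G[:, t]) == P_D[t])
end
@objective(m, Min, sum(F[i] * P_G[i, t] for i in 1:N, t in 1:T)) # Linear cost; x enters only through the bounds
optimize!(m)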

Can't get performant Julia Turing model

I've tried to reproduce the model from a PyMC3 and Stan comparison. But it seems to run slowly, and when I look at @code_warntype there are some things -- K and N, I think -- which the compiler seemingly calls Any.
I've tried adding types -- though I can't add types to turing_model's arguments, and things are complicated within turing_model because it uses autodiff variables and not the usual types. I put all the code into the function do_it to avoid globals, because they say that globals can slow things down. (It actually seems slower, though.)
Any suggestions as to what's causing the problem? The turing_model code is what's iterating, so that should make the most difference.
using Turing, StatsPlots, Random
sigmoid(x) = 1.0 / (1.0 + exp(-x))
function scale(w0::Float64, w1::Array{Float64,1})
    scale = √(w0^2 + sum(w1 .^ 2))
    return w0 / scale, w1 ./ scale
end
function do_it(iterations::Int64)::Chains
    K = 10 # predictor dimension
    N = 1000 # number of data samples
    X = rand(N, K) # predictors (1000, 10)
    w1 = rand(K) # weights (10,)
    w0 = -median(X * w1) # 50% of elements for each class (number)
    w0, w1 = scale(w0, w1) # unit length (euclidean)
    w_true = [w0, w1...]
    y = (w0 .+ (X * w1)) .> 0.0 # labels
    y = [Float64(x) for x in y]
    σ = 5.0
    σm = [x == y ? σ : 0.0 for x in 1:K, y in 1:K]
    @model turing_model(X, y, σ, σm) = begin
        w0_pred ~ Normal(0.0, σ)
        w1_pred ~ MvNormal(σm)
        p = sigmoid.(w0_pred .+ (X * w1_pred))
        @inbounds for n in 1:length(y)
            y[n] ~ Bernoulli(p[n])
        end
    end
    @time chain = sample(turing_model(X, y, σ, σm), NUTS(iterations, 200, 0.65));
    # ϵ = 0.5
    # τ = 10
    # @time chain = sample(turing_model(X, y, σ), HMC(iterations, ϵ, τ));
    return (w_true=w_true, chains=chain::Chains)
end
chain = do_it(1000)

How to fix TypeError: in setindex! in DifferentialEquations.jl

Recently, I got started with Julia's (v1.0.3) DifferentialEquations.jl package. I tried solving a simple ODE system, with the same structure as my real model, but much smaller.
Depending on the solver which I use, the example either solves or throws an error. Consider this MWE, a Chemical Engineering model of a consecutive / parallel reaction in a CSTR:
using DifferentialEquations
using Plots
# Modeling a consecutive / parallel reaction in a CSTR
# A --> 2B --> C, C --> 2B, B --> D
# PETERSEN-Matrix
# No. |  A  |  B  |  C  |  D  | Rate
#  1  | -1  |  2  |     |     | k1*A
#  2  |     | -2  |  1  |     | k2*B*B
#  3  |     |  2  | -1  |     | k3*C
#  4  |     | -1  |     |  1  | k4*B
function fpr(dx, x, params, t)
    k_1, k_2, k_3, k_4, q_in, V_liq, A_in, B_in, C_in, D_in = params
    # Rate equations
    rate = Array{Float64}(undef, 4)
    rate[1] = k_1*x[1]
    rate[2] = k_2*x[2]*x[2]
    rate[3] = k_3*x[3]
    rate[4] = k_4*x[2]
    dx[1] = -rate[1] + q_in/V_liq*(A_in - x[1])
    dx[2] = 2*rate[1] - 2*rate[2] + 2*rate[3] - rate[4] + q_in/V_liq*(B_in - x[2])
    dx[3] = rate[2] - rate[3] + q_in/V_liq*(C_in - x[3])
    dx[4] = rate[4] + q_in/V_liq*(D_in - x[4])
end
u0 = [1.5, 0.1, 0, 0]
params = [1.0, 1.5, 0.75, 0.15, 3, 15, 0.5, 0, 0, 0]
tspan = (0.0, 15.0)
prob = ODEProblem(fpr, u0, tspan, params)
sol = solve(prob)
plot(sol)
This works perfectly.
However, if I choose a different solver, say Rosenbrock23() or Rodas4(), the ODE is not solved and I get the following error:
ERROR: LoadError: TypeError: in setindex!, in typeassert, expected Float64,
got ForwardDiff.Dual{Nothing,Float64,4}
I won't paste the whole stacktrace here, since it is very long, but you can easily reproduce this by changing sol = solve(prob) into sol = solve(prob, Rosenbrock23()). It seems to me that the error occurs when the solver tries to compute Jacobians, but I have no clue why. And why does the default solver work, but others don't?
Please, could anyone tell me why this error occurs and how it can be fixed?
Automatic differentiation works by passing Dual types through your function, instead of the floats you would normally use it with. So the problem arises because you fix the internal value rate to be of type Vector{Float64} (see the third point here, and this advice). Fortunately, that's easy to fix (and even better looking, IMHO):
julia> function fpr(dx, x, params, t)
           k_1, k_2, k_3, k_4, q_in, V_liq, A_in, B_in, C_in, D_in = params
           # Rate equations
           # should actually be rate = [k_1*x[1], k_2*x[2]*x[2], k_3*x[3], k_4*x[2]], as per @LutzL's comment
           rate = [k_1*x[1], k_2*x[2], k_3*x[3], k_4*x[2]]
           dx[1] = -rate[1] + q_in/V_liq*(A_in - x[1])
           dx[2] = 2*rate[1] - 2*rate[2] + 2*rate[3] - rate[4] + q_in/V_liq*(B_in - x[2])
           dx[3] = rate[2] - rate[3] + q_in/V_liq*(C_in - x[3])
           dx[4] = rate[4] + q_in/V_liq*(D_in - x[4])
       end
That works with both Rosenbrock23 and Rodas4.
Alternatively, you can turn off AD with Rosenbrock23(autodiff=false) (which, I think, will use finite differences instead), or supply a Jacobian.
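For example, a sketch of those two alternatives (the eltype version assumes you want to keep the preallocated rate buffer from the original function):
# Inside fpr: allocate the buffer with the element type of x, so it can hold
# Float64 or ForwardDiff.Dual values alike
rate = Array{eltype(x)}(undef, 4)
# Or: disable automatic differentiation of the Jacobian (finite differencing is used instead)
sol = solve(prob, Rosenbrock23(autodiff = false))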

Automatic Differentiation in Julia: Hessian from ReverseDiffSparse

How can I evaluate the Hessian of a function in Julia using automatic differentiation (preferably using ReverseDiffSparse)? In the following example, I can compute and evaluate the gradient at a point, values, through JuMP:
m = Model()
@variable(m, x)
@variable(m, y)
@NLobjective(m, Min, sin(x) + sin(y))
values = zeros(2)
values[linearindex(x)] = 2.0
values[linearindex(y)] = 3.0
d = JuMP.NLPEvaluator(m)
MathProgBase.initialize(d, [:Grad])
objval = MathProgBase.eval_f(d, values) # == sin(2.0) + sin(3.0)
∇f = zeros(2)
MathProgBase.eval_grad_f(d, ∇f, values)
# ∇f[linearindex(x)] == cos(2.0)
# ∇f[linearindex(y)] == cos(3.0)
Now I want the Hessian at values. I've tried changing the below:
MathProgBase.initialize(d, [:Grad,:Hess])
H = zeros(2); # based on MathProgBase.hesslag_structure(d) = ([1,2],[1,2])
MathProgBase.eval_hesslag(d, H, values, 0, [0])
but am getting an error.
For reference, cross-post is on the helpful Julia Discourse forum here.
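For what it is worth, a hedged sketch of how I would expect the call to look (assuming H needs one entry per element of hesslag_structure, σ is a Float64 scaling the objective Hessian, and μ is the vector of constraint multipliers, empty here because the model has no constraints):
MathProgBase.initialize(d, [:Grad, :Hess])
rows, cols = MathProgBase.hesslag_structure(d) # ([1, 2], [1, 2]) for this model
H = zeros(length(rows))
MathProgBase.eval_hesslag(d, H, values, 1.0, Float64[])
# Expected: H[1] ≈ -sin(2.0), H[2] ≈ -sin(3.0), the diagonal entries in the structure's order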

Minimize the maximum variable

I have a Mixed Integer Programming problem. The objective is to minimize the maximum variable value in a vector. Each variable has an upper bound of 5. The problem looks like this:
m = Model(solver = GLPKSolverMIP())
@objective(m, Min, max(x[i] for i=1:12))
@variable(m, 0 <= x[i] <= 5, Int)
@constraint(m, sum(x[i] for i=1:12) == 12)
status = solve(m)
The max of a set of variables is not part of the Julia JuMP syntax. So I modified the problem to:
t = 1
while t <= 5 && (status == :NotSolved || status == :Infeasible)
    m = Model(solver = GLPKSolverMIP())
    i = 1:12
    @objective(m, Min, max(x[i] for i=1:12))
    @variable(m, 0 <= x[i] <= t, Int)
    @constraint(m, sum(x[i] for i=1:12) == 12)
    status = solve(m)
    t += 1
end
This solution does the job by solving the problem iteratively, starting with an upper bound of 1 on the variables and increasing it by one until the solution is feasible. Is this really the best way to do this?
The question asks to minimize a maximum; this maximum can be held in an auxiliary variable, which we then minimize. To do so, add constraints that force the new variable to be an upper bound on x. In code:
using GLPKMathProgInterface
using JuMP
m = Model(solver = GLPKSolverMIP())
@variable(m, 0 <= x[i=1:3] <= 5, Int) # define variables
@variable(m, 0 <= t <= 12) # define auxiliary variable
@constraint(m, t .>= x) # constrain t to be the max
@constraint(m, sum(x[i] for i=1:3) == 12) # the meat of the constraints
@objective(m, Min, t) # we wish to minimize the max
status = solve(m)
Now we can inspect the solution:
julia> getValue(t)
4.0
julia> getValue(x)
3-element Array{Float64,1}:
4.0
4.0
4.0
The actual problem the poster wanted to solve is probably more complex than this, but it can be solved by a variation on this framework.
