Quadratically constrained MIQP with Julia and Gurobi

This is an attempt to answer the following question: https://matheducators.stackexchange.com/questions/11757/small-data-sets-with-integral-sample-standard-deviations
The intent of the following code is to find examples of small datasets with an integer standard deviation. That can be formulated as a quadratically constrained mixed-integer quadratic program, so I try to use Gurobi from Julia. Following is my code:
using JuMP
using Gurobi
m = Model(solver = GurobiSolver() )
@variable(m, 0 <= x[1:20] <= 100, Int)
@variable(m, Gj, Int)
@constraint(m, Gj == sum(x[1:20])/20 )
@variable(m, Var, Int)
@constraint(m, Var == sum( (x[1:20]-Gj).^2/19) )
@variable(m, sd, Int)
@constraint(m, sd * sd == Var)
### We need some restriction to avoid trivial all-equal (zero standard deviation) solutions:
@constraint(m, sd >= 5)
@objective(m, Min, sd)
print(m)
status = solve(m)
println("Objective value: ", getobjectivevalue(m) )
x = getvalue(x)
Running this results in:
ERROR: Gurobi.GurobiError(10021, "Quadratic equality constraints")
Stacktrace:
[1] optimize(::Gurobi.Model) at /home/kjetil/.julia/v0.6/Gurobi/src/grb_solve.jl:7
[2] optimize!(::Gurobi.GurobiMathProgModel) at /home/kjetil/.julia/v0.6/Gurobi/src/GurobiSolverInterface.jl:294
[3] #solve#101(::Bool, ::Bool, ::Bool, ::Array{Any,1}, ::Function, ::JuMP.Model) at /home/kjetil/.julia/v0.6/JuMP/src/solvers.jl:173
[4] solve(::JuMP.Model) at /home/kjetil/.julia/v0.6/JuMP/src/solvers.jl:148
Any ideas?

A math programming solver like Gurobi Optimizer cannot handle arbitrary quadratic equality constraints; the Gurobi documentation lists the constraint types it accepts. To solve your model with Gurobi Optimizer, you must transform your constraints into one of those forms, for example quadratic inequality constraints.

The major problem is that, in general, a quadratic equality is not convex, and most solvers only work for convex problems (plus integer constraints). A product of two binary variables is easy to linearise (it's the equivalent of a logical AND), that of one binary variable and one continuous variable is easy too; the rest is not so easy.
Since Gurobi 9, you can solve nonconvex bilinear problems, in particular those having quadratic equality constraints. You just have to add the right parameter. With Gurobi.jl, if m is your JuMP model, you can do this:
set_optimizer_attribute(m, "NonConvex", 2)
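To make this concrete, here is a minimal sketch (untested) of how the original model might look in current JuMP and Gurobi.jl syntax with that parameter set. Note that the mean and variance definitions are multiplied through by 20 and 19 so every constraint stays integral; that reformulation, and the modern macro syntax, are my assumptions rather than a literal translation of the question's code.
using JuMP, Gurobi

m = Model(Gurobi.Optimizer)
set_optimizer_attribute(m, "NonConvex", 2)  # allow nonconvex quadratic equalities (Gurobi 9+)

@variable(m, 0 <= x[1:20] <= 100, Int)
@variable(m, Gj, Int)                       # integer mean
@constraint(m, 20 * Gj == sum(x))           # Gj == mean(x), kept integral
@variable(m, Var, Int)
@constraint(m, 19 * Var == sum((x[i] - Gj)^2 for i in 1:20))  # sample variance, kept integral
@variable(m, sd >= 5, Int)                  # sd >= 5 rules out trivial solutions
@constraint(m, sd^2 == Var)                 # quadratic equality, needs NonConvex = 2
@objective(m, Min, sd)

optimize!(m)
value.(x)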

Related

Define a correct constraint, like outside a 2-D rectangle in Julia with JuMP

I would like to define a constraint in an optimization problem as follows:
(x,y) not in {(x,y)|1.0 < x < 2.0, 3.0 < y < 4.0}.
What I tried is @constraint(model, (1.0 < x < 2.0 + 3.0 < y < 4.0) != 2), but it failed.
It seems that Boolean operations are not allowed, so I am not sure how to express this. Any advice is appreciated!
You should avoid introducing quadratic constraints (as in the other answer) and instead introduce binary variables. This increases the number of available solvers, and linear models generally solve faster.
Note that !(1.0 < x < 2.0) is equivalent to x <= 1 || x >= 2, which can be written in linear form as:
@variable(model, bx, Bin)
const M = 1000 # a number "big enough"
@constraint(model, x <= 1 + M*bx)
@constraint(model, x >= 2 - M*(1 - bx))
bx here is a "switch" variable that makes either the first or the second constraint binding.
The same pattern can be used to formulate the constraint on y with the bounds 3.0 < y < 4.0.
Just note you cannot have a constraint such as y != 3: solvers have limited numerical accuracy, so you would instead represent it as, for example, !(3 - 0.01 < y < 3 + 0.01), still using the same pattern as above.
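For completeness, here is a minimal sketch (my addition, not part of the original answers) of the full exclusion constraint, where the point must violate at least one side of the box: x <= 1 or x >= 2 or y <= 3 or y >= 4. Each binary activates one disjunct, which is slightly different bookkeeping from the single bx switch above; the solver choice (HiGHS) and the value of M are assumptions.
using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
M = 1000   # must be "big enough" for the actual ranges of x and y

@variable(model, x)
@variable(model, y)
@variable(model, b[1:4], Bin)

@constraint(model, x <= 1 + M * (1 - b[1]))   # active when b[1] = 1: x <= 1
@constraint(model, x >= 2 - M * (1 - b[2]))   # active when b[2] = 1: x >= 2
@constraint(model, y <= 3 + M * (1 - b[3]))   # active when b[3] = 1: y <= 3
@constraint(model, y >= 4 - M * (1 - b[4]))   # active when b[4] = 1: y >= 4
@constraint(model, sum(b) >= 1)               # at least one disjunct must hold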
UPDATE: The previous solution in this answer turned out to be wrong (it excluded parts of the admissible region), so I felt obligated to provide another, correct solution. This solution partitions the admissible region into parts, solves a separate optimization problem for each part, and keeps the best solution. It is not an elegant approach, but if one does not have a good (commercial) solver, it is one way to do it. Commercial solvers usually go through a similar but more efficient process known as branch-and-bound.
using JuMP, Ipopt

function solveopt()
    bestobj = Inf
    bestx, besty = 0.0, 0.0
    # one subproblem per half-plane outside the box
    for (ltside, xvar, val) in (
            (true, true, 2.0), (false, true, 3.0),
            (true, false, 3.0), (false, false, 4.0))
        m = Model(Ipopt.Optimizer)
        @variable(m, x)
        @variable(m, y)
        add_constraint(m, ScalarConstraint(xvar ? x : y,
            ltside ? MOI.LessThan(val) : MOI.GreaterThan(val)))
        # the following objective has its unconstrained optimum inside the box
        @NLobjective(m, Min, (x - 2.5)^2 + (y - 3.5)^2)
        optimize!(m)
        if objective_value(m) < bestobj
            bestobj = objective_value(m)
            bestx, besty = value(x), value(y)
        end
    end
    return bestx, besty
end
The solution for this example problem is:
julia> solveopt()
:
: lots of solver output...
:
(2.5, 3.9999999625176965)
Lastly, I benchmarked this crude method against a non-commercial solver (Pajarito) using the method from the other answer, and this one is about 2X faster (because of its simplicity, I suppose). Commercial solvers would beat both times.

General Equilibrium Problem using SymPy in Julia

I am trying to solve an economic problem using the SymPy package in Julia. In this economic problem I have exogenous variables and endogenous variables, and I am indexing them all. I have two questions:
How to access the indexed variables in order to pass: calibrated values (to exogenous variables, calibrated in another environment) or formulas (to endogenous variables, determined by the first-order conditions of the agents' maximization problem, worked out with pencil and paper). This will also allow me to study the behavior of the equilibrium when I perturb exogenous variables. First, consider my attempt to pass calibrated values to exogenous variables.
using SymPy
# To index
n, N = sympy.symbols("n N", integer=true)
N = 3 # It can change
# Household
#exogenous variables
α = sympy.IndexedBase("α")
@syms γ
α2 = sympy.Sum(α[n], (n, 1, N))
equation_1 = Eq(α2 + γ, 1)
equation_1 says that the α's plus γ sum to one. I would like to pass values to the α vector from another vector, alpha3, which holds calibrated parameters.
# Suppose
alpha3 = [1,2,3]
for n in 1:N
    α[n] = alpha3[n]
end
MethodError: no method matching setindex!(::Sym, ::Int64, ::Int64)
I will certainly do this step once the system is solved. Now, I want to pass formulas or expressions as functions of prices. Prices are endogenous, unknown variables. (As said before, the expressions were calculated with pencil and paper.)
# Price vector, Endogenous, unknown in the system equations
P = sympy.IndexedBase("P")
# Other exogenous variables to be calibrated.
z = sympy.IndexedBase("z")
s = sympy.IndexedBase("s")
Y = sympy.IndexedBase("Y")
# S[n] and D[n], supply and demand, are endogenous, but determined by the first-order conditions of the agents' maximization problem
# Supply and Demand
S = sympy.IndexedBase("S")
D = sympy.IndexedBase("D")
# (Hypothetical functions that I have to pass)
# S[n] = s[n]*P[n]
# D[n] = z[n]/P[n]
Once I can write the formulas for S[n] and D[n], consider the second question:
How do I specify the indexed endogenous variables (all prices in their indexed format P[n]) as the unknowns of the system of non-linear equations? I will ignore the possibility of the system having no solution. Suppose my system has a single solution or infinitely many (a manifold). So let's assume that I have more equations than variables:
# For all n, I want determine N indexed equations (looping?)
Eq_n = Eq(S[n] - D[n],0)
# Some other equations relating the P[n]'s
Eq0 = Eq(sympy.Sum(P[n]*Y[n] , (n, 1, N)), 0 )
# Equations system
eq_system = [Eq_n,Eq0]
# Solving
solveset(eq_system,P[n])
Many thanks
There isn't any direct support for the IndexedBase feature of SymPy. As such, the syntax alpha[n] is not available. You can call the method __getitem__ directly, as with
alpha.__getitem__(n)
I don't see a corresponding __setitem__ documented, so I'm not sure whether
α[n]= alpha3[n]
is valid in sympy itself. But if there is some other assignment method, you would likely just call that instead of using [ ] for assignment.
As for the last question about equations, I'm not sure but you would presumably find the size of the IndexedBase object and use that to loop.
If possible, using native Julia constructs is preferred. For this example, you might just consider an array of variables. The recently changed @syms macro makes this easy to generate.
For example, I think the following mostly replicates what you are trying to do:
@syms n::integer, N::integer
# exogenous variables
N = 3
@syms α[1:3] # hard-code 3 here or use `α = [Sym("αᵢ$i") for i ∈ 1:N]`
@syms γ
α2 = sum(α[i] for i ∈ 1:N)
equation_1 = Eq(α2 + γ, 1)
alpha3 = [1, 2, 3]
for n in 1:N
    α[n] = alpha3[n]
end
@syms P[1:3], z[1:3], s[1:3], Y[1:3], γ[1:3], S[1:3], D[1:3]
Eq_n = [Eq(S[n], D[n]) for n ∈ 1:N]
Eq0 = Eq(sum(P .* Y), 0)
eq_system = [Eq_n, Eq0]
solveset(eq_system, P[n])
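As a hedged side note (my addition, not from the answer): solveset expects a single equation, so with plain Julia vectors of symbols a system like the market-clearing conditions from the question would usually be handed to solve, which wraps sympy.solve and accepts a vector of equations and a vector of unknowns. A minimal sketch, assuming the hypothetical supply and demand formulas S[n] = s[n]*P[n] and D[n] = z[n]/P[n] from the question:
using SymPy

N = 3
@syms s[1:3], z[1:3], P[1:3]          # exogenous s, z and the unknown prices P
S = [s[n] * P[n] for n in 1:N]        # hypothetical supply
D = [z[n] / P[n] for n in 1:N]        # hypothetical demand
eqs = [Eq(S[n], D[n]) for n in 1:N]   # market-clearing conditions
sol = solve(eqs, P)                   # solve the system for the price vector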

Bound 2-norm in Julia JuMP

Using Julia's JuMP library, I have a matrix-valued variable A on which I would like to impose a 2-norm constraint (equivalently, the spectral / operator norm). However, I am not sure how to do this. Below is a minimal running example of something I would like to write:
using LinearAlgebra
using JuMP
using MathOptInterface
using MosekTools
using Mosek
model = Model(optimizer_with_attributes(
    Mosek.Optimizer,
    "QUIET" => false,
    "INTPNT_CO_TOL_DFEAS" => 1e-9
))
maxnorm = 3.0
# We want opnorm(A) <= maxnorm
@variable(model, A[1:4, 1:5])
# @SDconstraint(model, A' * A <= maxnorm^2) # Mathematically valid, but not accepted!
# Make a dummy variable and constraint so there is something to solve
@variable(model, x)
@constraint(model, x >= 10)
@objective(model, Min, x)
optimize!(model)
A very overkill way to do this is via
@constraint(model, [maxnorm; vec(A)] in SecondOrderCone())
as this bounds the Frobenius norm instead, which is not preferable. I would greatly appreciate any insights into how this can be done.
MathOptInterface has a cone for the spectral norm:
https://jump.dev/MathOptInterface.jl/v0.9/apireference/#MathOptInterface.NormSpectralCone
@constraint(model, [maxnorm; vec(A)] in MOI.NormSpectralCone(4, 5))
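For context, a minimal end-to-end sketch (my assembly, not tested against Mosek) showing that constraint inside a model; the placeholder objective is just an assumption to make the model well-posed, and MOI here is the MathOptInterface module re-exported by JuMP.
using JuMP, MosekTools

model = Model(Mosek.Optimizer)
maxnorm = 3.0

@variable(model, A[1:4, 1:5])
# [t; vec(A)] in NormSpectralCone(m, n) enforces opnorm(A) <= t for an m-by-n matrix A
@constraint(model, [maxnorm; vec(A)] in MOI.NormSpectralCone(4, 5))

@objective(model, Max, A[1, 1])   # placeholder objective, replace with your own
optimize!(model)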

Fitting two curves with linear/non-linear regression

I need to fit two curves (which should both be cubic functions) to a set of points with JuMP.
I've managed to fit one curve, but I'm struggling to fit two curves to the same dataset.
I thought that if I could distribute the points between the curves, so that each point can only be used once, I could do it like below, but it didn't work. (I know that I could use much more complicated approaches; I want to keep it simple.)
This is a part of my current code:
# cubicFunc is a two-dimensional array indexed as cubicFunc[x, degree]
@variable(m, mult1[1:4]) # 4 coefficients (degrees 0:3) because it's cubic
@variable(m, mult2[1:4]) # 4 coefficients (degrees 0:3) because it's cubic
@variable(m, 0 <= includeIn1[1:numOfPoints] <= 1, Int)
@variable(m, 0 <= includeIn2[1:numOfPoints] <= 1, Int)
# some kind of hack to force one of them to 0 and the other one to 1
@constraint(m, loop[i in 1:numOfPoints], includeIn1[i] + includeIn2[i] == 1)
@objective(m, Min, sum( (yPoints - cubicFunc*mult1).*includeIn1 .^2 ) + sum( (yPoints - cubicFunc*mult2).*includeIn2 .^2 ))
But it gives various errors depending on what I try: neither *includeIn1 nor .*includeIn1 works, and when I tried to do it via @NLobjective I got a whopping ~50 lines of errors, etc.
Is my idea realistic? Can I make it into the code?
Any help will be highly appreciated. Thank you very much.
You can write down the problem e.g. like this:
using JuMP, Ipopt
m = Model(with_optimizer(Ipopt.Optimizer))
@variable(m, mult1[1:4])
@variable(m, mult2[1:4])
@variable(m, 0 <= includeIn1[1:numOfPoints] <= 1)
@variable(m, 0 <= includeIn2[1:numOfPoints] <= 1)
@NLconstraint(m, loop[i in 1:numOfPoints], includeIn1[i] + includeIn2[i] == 1)
@NLobjective(m, Min, sum(includeIn1[i] * (yPoints[i] - sum(cubicFunc[i,j]*mult1[j] for j in 1:4))^2 for i in 1:numOfPoints) +
                     sum(includeIn2[i] * (yPoints[i] - sum(cubicFunc[i,j]*mult2[j] for j in 1:4))^2 for i in 1:numOfPoints))
optimize!(m)
Given the constraints, includeIn1 and includeIn2 will be 1 or 0 at the optimum (if they are not, it means it does not matter to which group you assign the point), so we do not have to constrain them to be binary. Also, I use a non-linear solver, as the problem does not seem to be reformulable as a linear or quadratic optimization task.
However, I give the above code only as an example of how you can write it down. The task you have formulated does not have a unique local minimum (which would then be the global one), but several local minima. Therefore, the standard non-linear convex solvers that JuMP supports will only find one local optimum (not necessarily a global one). In order to look for global optima you need to switch to global solvers such as https://github.com/robertfeldt/BlackBoxOptim.jl.
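As a hedged sketch of a cheap mitigation (my addition, not part of the answer, and assuming a recent JuMP): a crude multistart loop that rebuilds the model, randomizes the starting values, and keeps the best local solution found. Here build_model is a hypothetical helper that constructs the model exactly as in the code above; the number of tries and the range of the random starts are arbitrary choices.
using JuMP, Ipopt

function multistart_fit(build_model; tries = 20)
    best_obj, best_model = Inf, nothing
    for _ in 1:tries
        m = build_model()                        # fresh copy of the model from above
        for v in all_variables(m)
            set_start_value(v, 2 * rand() - 1)   # random start in [-1, 1]
        end
        optimize!(m)
        if termination_status(m) == MOI.LOCALLY_SOLVED && objective_value(m) < best_obj
            best_obj, best_model = objective_value(m), m
        end
    end
    return best_model, best_obj
end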

An Example of Nonlinear Optimization with JuMP

As a test of my understanding of nonlinear optimization using Julia's JuMP modeling language, I am trying to minimize the Rosenbrock function in 10 dimensions with constraints 0 <= x[i] <= 0.5. First, the Rosenbrock function with variable arguments:
function rosen(x...)
    n = length(x); s = 0.0
    for i in 1:n-1
        s += 100*(x[i+1] - x[i]^2)^2 + (x[i] - 1)^2
    end
    return s
end
## rosen (generic function with 1 method)
Define the optimization model with Ipopt as solver,
using JuMP; using Ipopt
m = Model(solver = IpoptSolver())
## Feasibility problem with:
## * 0 linear constraints
## * 0 variables
## Solver is Ipopt
and the variables with bound constraints and starting values x[i] = 0.1:
@variable(m, 0.0 <= x[1:10] <= 0.5)
for i in 1:10 setvalue(x[i], 0.1); end
Now I understand that I have to register the objective function.
JuMP.register(m, :rosen, 10, rosen, autodiff=true)
I am uncertain here whether I can do it like this, or if I need to define and register a mysquare function, as is done in the "User-defined Functions" section of the JuMP manual.
@NLobjective(m, Min, rosen(x[1],x[2],x[3],x[4],x[5],x[6],x[7],x[8],x[9],x[10]))
How can I write this more compactly? An expression like
@NLobjective(m, Min, rosen(x[1:10]))
##ERROR: Incorrect number of arguments for "rosen" in nonlinear expression.
gives an error. What if I would like to solve this problem with 100 variables?
Now we solve this model problem and, happily, it returns a solution, and indeed the correct one, as I know from solving it with the NEOS IPOPT solver.
sol = solve(m);
## ...
## EXIT: Optimal Solution Found.
As I am only interested in the exact value of x[10], extracting it thus:
getvalue(x[10])
## 0.00010008222367154784
Can this be simplified somehow? Think of how easy it is to solve this problem with fminsearch in MATLAB or optim in R.
R> optim(rep(0.1,10), fnRosenbrock, method="L-BFGS-B",
lower=rep(0.0,10), upper=rep(0.5,10),
control=list(factr=1e-12, maxit=5000))
## $par
## [1] 0.50000000 0.26306537 0.08003061 0.01657414 0.01038065
## [6] 0.01021197 0.01020838 0.01020414 0.01000208 0.00000000
Except, of course, it says $par[10] is 0.0, which is not true.
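For what it's worth, here is a hedged sketch of a more compact formulation (my addition, not an answer from the original thread). It assumes a recent JuMP version in which a registered user-defined function can be splatted inside @NL expressions, and it reuses the rosen function defined above; the same pattern scales to 100 variables by changing n.
using JuMP, Ipopt

n = 10                                      # or 100; nothing else changes
model = Model(Ipopt.Optimizer)
@variable(model, 0.0 <= x[1:n] <= 0.5, start = 0.1)

JuMP.register(model, :rosen, n, rosen; autodiff = true)
@NLobjective(model, Min, rosen(x...))       # splat instead of listing x[1], x[2], ...

optimize!(model)
value(x[10])                                # extract a single component of the solution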
