Coding arrays in @constraint in JuMP - Julia

Please consider the following image: [image of the model omitted]
I know that when I use a:b in @constraint it means a range from a to b. I need to code index pairs like {a_j, b_j} in the @constraint of the mentioned code. Can you please help me to code that?

I understand that you are asking about iterating over a custom index set in a single constraint. This can be done in JuMP as:
using JuMP, Cbc
m = Model(Cbc.Optimizer)
@variable(m, x[1:3, 1:3] >= 0)
@constraint(m, con[(i, j) in [(1, 2), (2, 3), (3, 3)]], x[i, j] >= 5)
Let us have a look at what we got:
julia> println(m)
Feasibility
Subject to
con[(1, 2)] : x[1,2] >= 5.0
con[(2, 3)] : x[2,3] >= 5.0
con[(3, 3)] : x[3,3] >= 5.0
x[1,1] >= 0.0
x[2,1] >= 0.0
x[3,1] >= 0.0
x[1,2] >= 0.0
x[2,2] >= 0.0
x[3,2] >= 0.0
x[1,3] >= 0.0
x[2,3] >= 0.0
x[3,3] >= 0.0
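Individual constraints in the container can then be looked up by their tuple keys, for example (a usage note; the output shown follows the model printout above):
julia> con[(1, 2)]
con[(1, 2)] : x[1,2] >= 5.0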

A key point to understand is that, unlike AMPL and GAMS, JuMP has no specialized syntax for constructing and managing sets. Instead, you can use any available Julia syntax and data structures.
Examples:
using JuMP
N = 10
model = Model();
@variable(model, x[1:N]);
@constraint(model, [s=1:3], sum(x[j] for j in 1:N if mod(j, s) == 0) == 1)
using JuMP
N = 10
model = Model();
@variable(model, x[1:N]);
d = [Set([1, 2, 3]), Set([2, 4, 6])]
a = [Set([3, 4]), Set([5])]
J(s) = [j for j in 1:2 if s in union(d[j], a[j])]
@constraint(model, [s=1:2], sum(x[j] for j in J(s)) == 1)
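To connect this back to the question: if the pairs {a_j, b_j} are stored in two vectors a and b, you can zip them into tuples and use that as the index set (a minimal sketch; the data in a and b is made up for illustration):
using JuMP
a = [1, 2, 3]   # hypothetical a_j values
b = [2, 3, 3]   # hypothetical b_j values
model = Model()
@variable(model, x[1:3, 1:3] >= 0)
# zip(a, b) yields the pairs (a[j], b[j]) to index the constraint over
@constraint(model, con[(i, j) in collect(zip(a, b))], x[i, j] >= 5)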

Related

Julia JuMP feasibility slack of constraints

In Julia, using JuMP, I am setting up a simple optimization problem (MWE; the real problem is much bigger).
model = Model()
set_optimizer(model, MosekTools.Optimizer)
@variable(model, 0 <= x[1:2])
@constraint(model, sum(x) <= 2)
@constraint(model, 1 <= sum(x))
@objective(model, Min, sum(x))
print(model)
Which gives this model:
Min x[1] + x[2]
Subject to
x[1] + x[2] ≤ 2.0
-x[1] - x[2] ≤ -1.0
x[1] ≥ 0.0
x[2] ≥ 0.0
I optimize this model via optimize!(model).
Now, obviously, the constraint x[1] + x[2] <= 2 is redundant and it has a feasibility slack of "3". My goal is to determine all the constraints that have slacks larger than 0 and display the slacks. Then I will delete those from the model.
To this end, I iterate over the constraints which are not variable bounds and print their values.
for (F, S) in list_of_constraint_types(model)
    # Iterate over constraint types
    if F != JuMP.VariableRef  # skip variable bounds
        for ci in all_constraints(model, F, S)
            println(value(ci))
        end
    end
end
However, because I print the value of the constraints, I get the left-hand sides:
1.0
-1.0
I want to instead see the slacks as
0
3
How may I do this? Note that I am not necessarily interested in linear programs, so things like shadow_value are not useful for me.
Based on the accepted answer, I am adding a MWE that solves this problem.
model = Model()
set_optimizer(model, MosekTools.Optimizer)
@variable(model, 0 <= x[1:2])
@constraint(model, sum(x) <= 2)
@constraint(model, 1 <= sum(x))
@constraint(model, 0.9 <= sum(x))
@objective(model, Min, sum(x))
print(model)
optimize!(model)
constraints_to_delete = vec([])
for (F, S) in list_of_constraint_types(model)
    if F != JuMP.VariableRef
        for ci in all_constraints(model, F, S)
            slack = normalized_rhs(ci) - value(ci)
            if slack > 1e-5
                push!(constraints_to_delete, ci)
                println(slack)
                # don't delete(model, ci) here: deleting while iterating breaks the iteration
            end
        end
    end
end
for c in constraints_to_delete
    delete(model, c)
end
print(model)
Read this (hot off the press) tutorial: https://jump.dev/JuMP.jl/dev/tutorials/linear/lp_sensitivity/.
Although focused on LPs, it shows how to compute slacks etc. using normalized_rhs(ci) - value(ci).
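The pattern above generalizes into a small helper that returns the slack of every constraint that is not a variable bound (a minimal sketch, assuming the model has already been solved and that the constraints are scalar constraints for which normalized_rhs is defined):
using JuMP

# Map each non-variable-bound constraint to its slack, normalized_rhs - value.
# Call only after optimize!(model) has produced a solution.
function constraint_slacks(model::Model)
    slacks = Dict{ConstraintRef,Float64}()
    for (F, S) in list_of_constraint_types(model)
        F == VariableRef && continue  # skip variable bounds
        for ci in all_constraints(model, F, S)
            slacks[ci] = normalized_rhs(ci) - value(ci)
        end
    end
    return slacks
end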

cvxpy constrained normalization equations (abs)

I am working on an optimization problem (A*v = b) where I would like to rank a set of alternatives X = {x1,x2,x3,x4}. However, I have the following normalization constraint: |v[i] - v[j]| <= 1, which can be written as -1 <= v[i] - v[j] <= 1.
My code is as follows:
import cvxpy as cp
n = len(X)  # number of alternatives
v = cp.Variable(n)
objective = cp.Minimize(cp.sum_squares(A*v - b))
constraints = [0 <= v]
# Normalization condition -1 <= v[i] - v[j] <= 1
for i in range(n):
    for j in range(n):
        constraints = [-1 <= v[i]-v[j], 1 >= v[i]-v[j]]
prob = cp.Problem(objective, constraints)
# The optimal objective value is returned by `prob.solve()`.
result = prob.solve()
# The optimal value for v is stored in `v.value`.
va2 = v.value
Which outputs:
[-0.15 0.45 -0.35 0.05]
This result is not close to what it should be and even has negative values. I think my code for the normalization constraint is most probably wrong.
You are not appending your constraints; you are overwriting them on each iteration (which also discards the 0 <= v constraint, hence the negative values). Instead of this line
constraints = [-1 <= v[i]-v[j], 1 >= v[i]-v[j]]
You should have
constraints += [-1 <= v[i]-v[j], 1 >= v[i]-v[j]]
For cleanliness you may want to change this
for i in range(n):
    for j in range(n):
To only consider each pair once:
for i in range(n):
    for j in range(i+1, n):

What is the Julia way to dot product over desired dimensions

I have a xy-grid with two vector fields u and v. I represent vector fields as Array{Float64, 3} with dimensions nx × ny × 2.
I would like to have dot product u.v as a scalar field (Array{Float64,2} with dimensions nx×ny). What is the best way to achieve that?
It would be perfect to have something like dot(u,v,3), where 3 is the dimension over which the dot product is taken.
nx, ny = 3, 4
u = Array{Float64,3}(rand(0:1, nx, ny, 2))
#[0.0 1.0 0.0 1.0; 1.0 0.0 1.0 0.0; 1.0 0.0 1.0 0.0]
#[1.0 0.0 1.0 0.0; 1.0 0.0 0.0 1.0; 1.0 0.0 1.0 1.0]
v = Array{Float64,3}(rand(0:1, nx, ny, 2))
#[1.0 1.0 1.0 1.0; 0.0 1.0 0.0 0.0; 0.0 1.0 1.0 0.0]
#[1.0 0.0 0.0 0.0; 0.0 1.0 0.0 0.0; 1.0 1.0 1.0 0.0]
[dot(u[i,j,:], v[i,j,:]) for i in 1:nx, j in 1:ny]
3×4 Array{Float64,2}:
1.0 1.0 0.0 1.0
0.0 0.0 0.0 0.0
1.0 0.0 2.0 0.0
sum(u.*v,3) is both (reasonably) fast and short.
To explicitly get a matrix you can squeeze the third dimension, like so: squeeze(sum(u.*v,3), 3)
Update: Of course, this allocates and is not the best answer if speed is everything. In that case, see @DNF's direct loop implementation, which is basically as fast as you can get it.
squeeze(sum(u .* v, 3), 3) is clean and simple, but if you need more speed, this is approximately 20 times faster on my pc:
a = fill(zero(eltype(u)), nx, ny)
@inbounds for k in indices(u, 3)
    for j in indices(u, 2)
        for i in indices(u, 1)
            a[i, j] += u[i, j, k] * v[i, j, k]
        end
    end
end
Edit: For easier benchmark comparison, try this code:
function dotsum3(u, v)
    a = fill(zero(eltype(u)), size(u, 1), size(u, 2))
    @inbounds for k in indices(u, 3)
        for j in indices(u, 2)
            for i in indices(u, 1)
                a[i, j] += u[i, j, k] * v[i, j, k]
            end
        end
    end
    return a
end
Also note that, when using @inbounds, one should probably explicitly check that the sizes of u and v are compatible.
Another version is
a = copy(view(u,:,:,1))
a .*= view(v,:,:,1)
a .+= @views u[:,:,2] .* v[:,:,2]
This is easily extensible to an arbitrary length of the third dimension. The benchmarks from my machine:
julia> using BenchmarkTools
julia> function f1(u,v)
           a = fill(zero(eltype(u)), nx, ny)
           @inbounds for k in indices(u, 3)
               for j in indices(u, 2)
                   for i in indices(u, 1)
                       a[i, j] += u[i, j, k] * v[i, j, k]
                   end
               end
           end
           return a
       end
f1 (generic function with 1 method)
julia> f2(u,v) = squeeze(sum(u .* v, 3), 3)
f2 (generic function with 1 method)
julia> function f3(u,v)
           a = copy(view(u,:,:,1))
           a .*= view(v,:,:,1)
           a .+= @views u[:,:,2].*v[:,:,2]
           return a
       end
f3 (generic function with 1 method)
julia> nx, ny = 3, 4
(3, 4)
julia> u = Array{Float64,3}(rand(0:1, nx, ny, 2));
julia> v = Array{Float64,3}(rand(0:1, nx, ny, 2));
Timing results:
julia> @btime f1($u,$v); # DNF suggestion
1.016 μs (1 allocation: 176 bytes)
julia> @btime f2($u,$v); # original answer
1.263 μs (14 allocations: 816 bytes)
julia> @btime f3($u,$v); # this answer
168.591 ns (5 allocations: 432 bytes)
The suggested version (f3) is the fastest, which is logical considering the memory layout of Julia arrays.
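Note that the snippets above use Julia 0.6-era names. On Julia 1.x the same two approaches translate as follows (a sketch: indices was renamed axes, squeeze became dropdims, and sum now takes a dims keyword):
# Broadcast-and-reduce version (allocates temporaries)
dotsum(u, v) = dropdims(sum(u .* v, dims=3), dims=3)

# Explicit-loop version (no temporaries)
function dotsum_loop(u, v)
    size(u) == size(v) || throw(DimensionMismatch("u and v must have equal size"))
    a = zeros(eltype(u), size(u, 1), size(u, 2))
    @inbounds for k in axes(u, 3), j in axes(u, 2), i in axes(u, 1)
        a[i, j] += u[i, j, k] * v[i, j, k]
    end
    return a
end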

Minimize the maximum variable

I have a mixed-integer programming problem. The objective is to minimize the maximum variable value in a vector. Each variable has an upper bound of 5. The problem is like this:
m = Model(solver = GLPKSolverMIP())
@variable(m, 0 <= x[i=1:12] <= 5, Int)
@objective(m, Min, max(x[i] for i=1:12))
@constraint(m, sum(x[i] for i=1:12) == 12)
status = solve(m)
However, max is not part of the JuMP objective syntax. So I modified the problem to
t = 1
status = :NotSolved
while t <= 5 && (status == :NotSolved || status == :Infeasible)
    m = Model(solver = GLPKSolverMIP())
    i = 1:12
    @variable(m, 0 <= x[i] <= t, Int)
    @constraint(m, sum(x[i] for i=1:12) == 12)
    # no objective: we only check feasibility for this upper bound t
    status = solve(m)
    t += 1
end
This does the job by solving the problem iteratively, starting with an upper bound of 1 on the variables and increasing it by one until the solution is feasible. Is this really the best way to do this?
The question asks to minimize a maximum; this maximum can be held in an auxiliary variable, which we then minimize. To do so, add constraints that force the new variable to be an upper bound on x. In code it is:
using GLPKMathProgInterface
using JuMP
m = Model(solver = GLPKSolverMIP())
@variable(m, 0 <= x[i=1:3] <= 5, Int)      # define variables
@variable(m, 0 <= t <= 12)                 # define auxiliary variable
@constraint(m, t .>= x)                    # constrain t to be the max
@constraint(m, sum(x[i] for i=1:3) == 12)  # the meat of the constraints
@objective(m, Min, t)                      # we wish to minimize the max
status = solve(m)
Now we can inspect the solution:
julia> getvalue(t)
4.0
julia> getvalue(x)
3-element Array{Float64,1}:
 4.0
 4.0
 4.0
The actual problem the poster wanted to solve is probably more complex than this, but it can be solved by a variation on this framework.
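For reference, the same min-max model in current JuMP 1.x syntax would look roughly like this (a sketch; GLPK.Optimizer from the GLPK package is an assumed modern equivalent of the solver above):
using JuMP, GLPK

model = Model(GLPK.Optimizer)
@variable(model, 0 <= x[1:3] <= 5, Int)
@variable(model, 0 <= t <= 12)   # auxiliary variable holding the max
@constraint(model, t .>= x)      # t bounds every x[i] from above
@constraint(model, sum(x) == 12)
@objective(model, Min, t)        # minimizing t minimizes max(x)
optimize!(model)
value(t), value.(x)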

JuMP: How to get multiple solutions from getvalue(x)

I'm solving this multi-objective problem:
f1(x,y) = x
f2(x,y) = (2.0-exp(-((y-0.2)/0.004)^2)-0.8*exp(-((y-0.6)/0.4)^2) )/x
isdefined(:f1) || JuMP.register(:f1, 2, f1, autodiff=true)
isdefined(:f2) || JuMP.register(:f2, 2, f2, autodiff=true)
m = Model(solver=IpoptSolver(print_level=0))
@variable(m, 0.1 <= x <= 1.0)
@variable(m, 0.0 <= y <= 1.0)
@variable(m, alpha1)
@variable(m, alpha2)
@NLobjective(m, Min, alpha1 + alpha2)
@constraint(m, f1(x,y) - z1_id >= -alpha1)
@constraint(m, f1(x,y) - z1_id <= alpha1)
@NLconstraint(m, f2(x,y) - z2_id >= -alpha2)
@NLconstraint(m, f2(x,y) - z2_id <= alpha2)
solve(m)
x_opt = getvalue(x)
y_opt = getvalue(y)
println("GOAL Programming (p=1): F1 = $(f1(x_opt, y_opt)), F2 = $(f2(x_opt, y_opt))")
It should have two solutions. I get only the first one with getvalue(x); how can I get all the others?
I ran into something similar, but in integer programming. Nevertheless, as my search for an answer took me here, this seems like a good place to share my ideas.
To me it seems the way to go about this is to first solve the system and then add a constraint that makes the previously found solution infeasible. My idea would be to append the following:
solve(m)
x_opt = getvalue(x)
y_opt = getvalue(y)
println("GOAL Programming (p=1): F1 = $(f1(x_opt, y_opt)), F2 = $(f2(x_opt, y_opt))")
eps = 1e-15
@constraint(m, x - x_opt <= eps)
@constraint(m, x - x_opt >= eps)
@constraint(m, y - y_opt <= eps)
@constraint(m, y - y_opt >= eps)
solve(m)
x_opt = getvalue(x)
y_opt = getvalue(y)
println("GOAL Programming (p=1): F1 = $(f1(x_opt, y_opt)), F2 = $(f2(x_opt, y_opt))")
I've done the same within a loop for my integer programming, i.e. wrapping the last lines in while solve(m) == :Optimal, to find more options.
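For the 0-1 integer case, the "exclude the previous solution" idea is well defined via no-good cuts. A minimal sketch in current JuMP syntax (HiGHS is an assumed solver choice, and the tiny feasibility model is made up for illustration):
using JuMP, HiGHS

# Enumerate feasible 0-1 solutions: after each solve, add a "no-good" cut
# that forbids the exact 0-1 pattern just found.
function enumerate_solutions(n::Int = 4, limit::Int = 5)
    model = Model(HiGHS.Optimizer)
    set_silent(model)
    @variable(model, x[1:n], Bin)
    @constraint(model, sum(x) == 2)   # made-up feasibility constraint
    solutions = Vector{Vector{Float64}}()
    optimize!(model)
    while termination_status(model) == MOI.OPTIMAL && length(solutions) < limit
        xv = round.(value.(x))
        push!(solutions, xv)
        S1 = [i for i in 1:n if xv[i] > 0.5]   # variables at 1
        S0 = [i for i in 1:n if xv[i] < 0.5]   # variables at 0
        # At least one variable must flip relative to the stored solution:
        @constraint(model, sum(x[i] for i in S1) - sum(x[i] for i in S0) <= length(S1) - 1)
        optimize!(model)
    end
    return solutions
end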
