Element-wise multiplication in JuMP environment - julia

I'm trying to implement the following constraint in a JuMP environment:
@constraint(m, ((c*x) + (p*o)) + (r.*z) - d .== g')
Unfortunately, I get the following error:
ERROR: MethodError: no method matching append
However, trying the element-wise multiplication alone does not return any error and is added to the model correctly.
Here is the minimal example I'm working with:
m = Model(solver = GLPKSolverLP());
np = 3; #number of products
c = [3 7 5;
     6 5 7;
     3 6 5;
     -28 -40 -32];
g = [200 200 200 -1500];
n = length(g);
o = [1 1 1]';
@variable(m, x[1:np] >= 0);
@variable(m, d[1:n] >= 0);
@variable(m, z[1:n] >= 0);
@variable(m, r[1:n] >= 0);
@variable(m, p[1:n,1:np] >= 0);
@objective(m, Min, sum(d));
@constraint(m, ((c*x) + (p*o)) + (r.*z) - d .== g')

It seems that there is a problem when you add a quadratic term to a linear term and the quadratic term is on the right-hand side of the addition inside the @constraint macro.
There are two solutions:
A. Write the quadratic term first, like this:
@constraint(m, (r.*z) + ((c*x) + (p*o)) - d .== g')
B. Define the LHS of the equation outside the macro (then the order of terms does not matter):
constr = ((c*x) + (p*o)) + (r.*z) - d
@constraint(m, constr .== g')
As a side note: your problem is quadratic, so GLPKSolverLP will not solve it, as it does not allow such constraints.

Related

Julia JuMP feasibility slack of constraints

In Julia, using JuMP, I am setting up a simple optimization problem (an MWE; the real problem is much bigger).
model = Model()
set_optimizer(model, MosekTools.Optimizer)
@variable(model, 0 <= x[1:2])
@constraint(model, sum(x) <= 2)
@constraint(model, 1 <= sum(x))
@objective(model, Min, sum(x))
print(model)
Which gives this model:
Min x[1] + x[2]
Subject to
x[1] + x[2] ≤ 2.0
-x[1] - x[2] ≤ -1.0
x[1] ≥ 0.0
x[2] ≥ 0.0
I optimize this model via optimize!(model).
Now, obviously, the constraint x[1] + x[2] <= 2 is redundant and it has a feasibility slack of "3". My goal is to determine all the constraints that have slacks larger than 0 and display the slacks. Then I will delete those from the model.
To this end, I iterate over the constraints which are not variable bounds and print their values.
for (F, S) in list_of_constraint_types(model)
    # Iterate over constraint types
    if F != JuMP.VariableRef  # for constraints that are not variable bounds
        for ci in all_constraints(model, F, S)
            println(value(ci))
        end
    end
end
However, because I print the value of the constraints, I get the left-hand sides:
1.0
-1.0
I want to instead see the slacks as
0
3
How may I do this? Note that I am not necessarily interested in linear programs, so things like shadow_value are not useful for me.
Based on the accepted answer, I am adding a MWE that solves this problem.
model = Model()
set_optimizer(model, MosekTools.Optimizer)
@variable(model, 0 <= x[1:2])
@constraint(model, sum(x) <= 2)
@constraint(model, 1 <= sum(x))
@constraint(model, 0.9 <= sum(x))
@objective(model, Min, sum(x))
print(model)
optimize!(model)
constraints_to_delete = vec([])
for (F, S) in list_of_constraint_types(model)
    if F != JuMP.VariableRef
        for ci in all_constraints(model, F, S)
            slack = normalized_rhs(ci) - value(ci)
            if slack > 10^-5
                push!(constraints_to_delete, ci)
                println(slack)
                # delete(model, ci)
            end
        end
    end
end
for c in constraints_to_delete
    delete(model, c)
end
print(model)
Read this (hot off the press) tutorial: https://jump.dev/JuMP.jl/dev/tutorials/linear/lp_sensitivity/.
Although focused on LPs, it shows how to compute slacks etc using normalized_rhs(ci) - value(ci).
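For convenience, the same computation can be wrapped in a small helper that returns the slack of every constraint that is not a variable bound. This is only a sketch, not part of the original answer; it assumes optimize!(model) has already been called and that the model contains only scalar constraints for which normalized_rhs is defined.
using JuMP

# Sketch: collect the slack of every scalar constraint that is not a variable bound.
function constraint_slacks(model::Model)
    slacks = Dict{Any,Float64}()
    for (F, S) in list_of_constraint_types(model)
        F == VariableRef && continue  # skip variable bounds
        for ci in all_constraints(model, F, S)
            slacks[ci] = normalized_rhs(ci) - value(ci)
        end
    end
    return slacks
end
Constraints whose slack exceeds a tolerance could then be collected and passed to delete, as in the MWE above.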

User-defined (nonlinear) objective with vectorized variables in JuMP

Is it possible to use vectorized variables with user-defined objective functions in JuMP for Julia? Like so,
model = Model(GLPK.Optimizer)
A = [
    1 1 9 5
    3 5 0 8
    2 0 6 13
]
b = [7; 3; 5]
c = [1; 3; 5; 2]
@variable(model, x[1:4] >= 0)
@constraint(model, A * x .== b)
# dummy functions, could be nonlinear hypothetically
identity(x) = x
C(x, c) = c' * x
register(model, :identity, 1, identity; autodiff = true)
register(model, :C, 2, C; autodiff = true)
@NLobjective(model, Min, C(identity(x), c))
This throws the error,
ERROR: Unexpected array VariableRef[x[1], x[2], x[3], x[4]] in nonlinear expression. Nonlinear expressions may contain only scalar expression.
Which sounds like no. Is there a workaround to this? I believe scipy.optimize.minimize is capable of optimizing user-defined objectives with vectorized variables?
No, you cannot pass vector arguments to user-defined functions.
Documentation: https://jump.dev/JuMP.jl/stable/manual/nlp/#User-defined-functions-with-vector-inputs
Issue you opened: https://github.com/jump-dev/JuMP.jl/issues/2854
The following is preferable to Przemyslaw's answer. His suggestion to wrap things in an @expression won't work if the functions are more complicated.
using JuMP, Ipopt
model = Model(Ipopt.Optimizer)
A = [
    1 1 9 5
    3 5 0 8
    2 0 6 13
]
b = [7; 3; 5]
c = [1; 3; 5; 2]
@variable(model, x[1:4] >= 0)
@constraint(model, A * x .== b)
# dummy functions, could be nonlinear hypothetically
identity(x) = x
C(x, c) = c' * x
my_objective(x...) = C(identity(collect(x)), c)
register(model, :my_objective, length(x), my_objective; autodiff = true)
@NLobjective(model, Min, my_objective(x...))
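To run this snippet end to end (not shown in the original answer), one would then solve the model and query the solution, e.g.:
optimize!(model)
value.(x)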
Firstly, use an optimizer that supports nonlinear models. GLPK does not. Try Ipopt:
using Ipopt
model = Model(Ipopt.Optimizer)
Secondly, the JuMP documentation reads (see https://jump.dev/JuMP.jl/stable/manual/nlp/#Syntax-notes):
The syntax accepted in nonlinear macros is more restricted than the syntax for linear and quadratic macros. (...) all expressions must be simple scalar operations. You cannot use dot, matrix-vector products, vector slices, etc.
You need to wrap the objective function in an expression:
@expression(model, expr, C(identity(x), c))
Now you can do:
@NLobjective(model, Min, expr)
To show that it works I solve the model:
julia> optimize!(model)
This is Ipopt version 3.14.4, running with linear solver MUMPS 5.4.1.
...
Total seconds in IPOPT = 0.165
EXIT: Optimal Solution Found.
julia> value.(x)
4-element Vector{Float64}:
0.42307697548737005
0.3461538282496562
0.6923076931757742
-8.46379887234798e-9

Minimize the maximum variable

I have a Mixed Integer Programming problem. The objective is to minimize the maximum variable value in a vector. Each variable has an upper bound of 5. The problem is like this:
m = Model(solver = GLPKSolverMIP())
@objective(m, Min, max(x[i] for i=1:12))
@variable(m, 0 <= x[i] <= 5, Int)
@constraint(m, sum(x[i] for i=1:12) == 12)
status = solve(m)
The max function is not part of the Julia JuMP syntax, so I modified the problem to:
t = 1
while t <= 5 && (status == :NotSolved || status == :Infeasible)
    m = Model(solver = GLPKSolverMIP())
    i = 1:12
    @objective(m, Min, max(x[i] for i=1:12))
    @variable(m, 0 <= x[i] <= t, Int)
    @constraint(m, sum(x[i] for i=1:12) == 12)
    status = solve(m)
    t += 1
end
This solution does the job by solving the problem iteratively, starting with an upper bound of 1 for the variables and increasing it by one until the solution is feasible. Is this really the best way to do this?
The question wants to minimize a maximum; this maximum can be held in an auxiliary variable, which we then minimize. To do so, add constraints that force the new variable to actually be an upper bound on x. In code:
using GLPKMathProgInterface
using JuMP
m = Model(solver = GLPKSolverMIP())
@variable(m, 0 <= x[i=1:3] <= 5, Int)      # define variables
@variable(m, 0 <= t <= 12)                 # define auxiliary variable
@constraint(m, t .>= x)                    # constrain t to be the max
@constraint(m, sum(x[i] for i=1:3) == 12)  # the meat of the constraints
@objective(m, Min, t)                      # we wish to minimize the max
status = solve(m)
Now we can inspect the solution:
julia> getValue(t)
4.0
julia> getValue(x)
3-element Array{Float64,1}:
4.0
4.0
4.0
The actual problem the poster wanted to solve is probably more complex than this, but it can be solved by a variation on this framework.
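For readers on current JuMP releases, the same epigraph reformulation can be written in the post-0.19 syntax. The following is only a sketch; HiGHS is used as an example solver and is not part of the original answer.
using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
@variable(model, 0 <= x[1:3] <= 5, Int)  # original integer variables
@variable(model, 0 <= t <= 12)           # auxiliary variable holding the maximum
@constraint(model, t .>= x)              # t must be at least every x[i]
@constraint(model, sum(x) == 12)
@objective(model, Min, t)                # minimizing t minimizes max(x)
optimize!(model)
value(t), value.(x)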

Converting matlab code to R code

I was wondering how I can convert this code from Matlab to R. It seems this is the code for the midpoint method. Any help would be highly appreciated.
% Usage: [y t] = midpoint(f,a,b,ya,n) or y = midpoint(f,a,b,ya,n)
% Midpoint method for initial value problems
%
% Input:
% f - Matlab inline function f(t,y)
% a,b - interval
% ya - initial condition
% n - number of subintervals (panels)
%
% Output:
% y - computed solution
% t - time steps
%
% Examples:
% [y t]=midpoint(@myfunc,0,1,1,10); here 'myfunc' is a user-defined function in M-file
% y=midpoint(inline('sin(y*t)','t','y'),0,1,1,10);
% f=inline('sin(y(1))-cos(y(2))','t','y');
% y=midpoint(f,0,1,1,10);
function [y t] = midpoint(f,a,b,ya,n)
h = (b - a) / n;
halfh = h / 2;
y(1,:) = ya;
t(1) = a;
for i = 1 : n
    t(i+1) = t(i) + h;
    z = y(i,:) + halfh * f(t(i),y(i,:));
    y(i+1,:) = y(i,:) + h * f(t(i)+halfh,z);
end;
I have the R code for the Euler method, which is:
euler <- function(f, h = 1e-7, x0, y0, xfinal) {
  N = (xfinal - x0) / h
  x = y = numeric(N + 1)
  x[1] = x0; y[1] = y0
  i = 1
  while (i <= N) {
    x[i + 1] = x[i] + h
    y[i + 1] = y[i] + h * f(x[i], y[i])
    i = i + 1
  }
  return (data.frame(X = x, Y = y))
}
So, based on the Matlab code, do I need to change h in the Euler method (R code) to (b - a) / n in order to turn the Euler code into the midpoint method?
Note
Broadly speaking, I agree with the expressed comments; however, I decided to vote up this question (now deleted). This is due to the existence of matconv, which facilitates this process.
Answer
Given your code, we could use matconv in the following manner:
pacman::p_load(matconv)
out <- mat2r(inMat = "input.m")
The created out object holds an attempted translation of the Matlab code into R; however, the job is far from finished. If you inspect the out object, you will see that it requires further work. Simple statements are usually translated correctly, with Matlab comments % replaced with # and so forth, but more complex statements may require more detailed investigation. You could then inspect the respective lines and attempt to evaluate them to see where further work may be required, for example:
eval(parse(text=out$rCode[1]))
NULL
(first line is a comment so the output is NULL)

Formulating Linear Programming Problem

This may be quite a basic question for someone who knows linear programming.
Most of the LP problems that I have seen have a format similar to the following:
max 3x+4y
subject to 4x-5y = -34
3x-5y = 10 (and similar other constraints)
So, in other words, we have the same number of unknowns in the objective and the constraint functions.
My problem is that I have one unknown variable in the objective function and 3 unknowns in the constraint functions.
The problem is like this:
Objective function: min w1
subject to:
w1 + 0.1676x + 0.1692y >= 0.1666
w1 - 0.1676x - 0.1692y >= -0.1666
w1 + 0.3039x + 0.3058y >= 0.3
w1 - 0.3039x - 0.3058y >= -0.3
x + y = 1
x >= 0
y >= 0
As can be seen, the objective function has only one unknown, i.e. w1, and the constraint functions have 3 (or let's say 2) unknowns, i.e. w1, x and y.
Can somebody please guide me on how to solve this problem, especially using R or the MATLAB linear programming toolbox?
Your objective only involves w1, but you can still view it as a function of w1, x, y, where the coefficient of w1 is 1 and the coefficients of x, y are zero:
min w1*1 + x*0 + y*0
Once you see this you can formulate it in the usual way as a "standard" LP.
Prasad is correct. The number of unknowns in the objective function does not matter. You can view unknowns that are not present as having a zero coefficient.
This LP is easily solved using Matlab's linprog function. For more details on linprog, see the Matlab documentation.
% We lay out the variables as X = [w1; x; y]
c = [1; 0; 0]; % The objective is w1 = c'*X
% Construct the constraint matrix
% Inequality constraints will be written as Ain*X <= bin
%        w1    x        y
Ain = [ -1  -0.1676  -0.1692;
        -1   0.1676   0.1692;
        -1  -0.3039  -0.3058;
        -1   0.3039   0.3058;
      ];
bin = [ -0.1666; 0.1666; -0.3; 0.3];
% Construct equality constraints Aeq*X == beq
Aeq = [ 0 1 1];
beq = 1;
%Construct lower and upper bounds l <= X <= u
l = [ -inf; 0; 0];
u = inf(3,1);
% Solve the LP using linprog
[X, optval] = linprog(c,Ain,bin,Aeq,beq,l,u);
% Extract the solution
w1 = X(1);
x = X(2);
y = X(3);
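For completeness, since the other questions on this page use JuMP, the same LP can also be written in Julia. This is only a sketch and not part of the original answers; it assumes current JuMP syntax and uses HiGHS as an example solver.
using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
@variable(model, w1)        # free variable
@variable(model, x >= 0)
@variable(model, y >= 0)
@constraint(model, w1 + 0.1676x + 0.1692y >= 0.1666)
@constraint(model, w1 - 0.1676x - 0.1692y >= -0.1666)
@constraint(model, w1 + 0.3039x + 0.3058y >= 0.3)
@constraint(model, w1 - 0.3039x - 0.3058y >= -0.3)
@constraint(model, x + y == 1)
@objective(model, Min, w1)
optimize!(model)
value(w1), value(x), value(y)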
