Linearization for Optimization in CPLEX

I am trying to solve an optimisation problem that has a quadratic constraint. I need a linearized form of the constraint. I am looking for a way to do that for the following equality constraint:
z == x*(x-y);
where x and y are continuous decision variables and:
x1 <= x <= x2;
y1 <= y <= y2;

The idea is to first use the identity
4xy = (x+y)^2 - (x-y)^2
and then use a piecewise linear approximation to linearize the squares.
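Concretely, expanding the original constraint and substituting the identity gives the following worked form (the auxiliary variables u and v are introduced here only for illustration):

z = x^2 - xy = x^2 - (1/4)*[(x+y)^2 - (x-y)^2]

So with u = x + y and v = x - y, only the squares x^2, u^2 and v^2 remain, and each can be approximated piecewise-linearly over its known bounds (u in [x1+y1, x2+y2], v in [x1-y2, x2-y1]).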
I posted an example at
https://www.ibm.com/developerworks/community/forums/html/topic?id=f48c280e-144b-46aa-abb9-906a4eb4219f&ps=25

Related

Using indicator constraints in Julia

JuMP provides a special syntax for creating indicator constraints. So, which one is better: linearizing the indicator constraints ourselves and then writing the code, or using this feature?
For example, to force the constraint x + y <= 2 to hold when the binary variable z is one:
@variable(model, x)
@variable(model, y)
@variable(model, z, Bin)
@constraint(model, z => {x + y <= 2})
Actually my question is which one is faster and more efficient: to linearize it ourselves or to use this feature?
The answer is problem- and solver-dependent. You should try both approaches and time them to find out which is more efficient for your problem.
Some solvers (e.g., Gurobi) have special support for indicators, in which case it's probably faster to use the indicators directly. If you're using a solver that doesn't have special support for indicators, we convert the indicator constraint to an SOS1 constraint (https://jump.dev/MathOptInterface.jl/stable/submodules/Bridges/reference/#MathOptInterface.Bridges.Constraint.IndicatorSOS1Bridge).
The quality of a big-M type linearization will depend on using domain knowledge to select a good big-M. JuMP doesn't do big-M reformulations automatically.
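For reference, a hand-rolled big-M version of the indicator constraint above would look like this (a sketch; the value of M is an assumption you must supply, not something JuMP derives):

x + y <= 2 + M*(1 - z)

Here M must be at least as large as the biggest value x + y - 2 can take over the feasible region; the larger M is, the weaker the LP relaxation becomes.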

How to implement != (not equal) in lpSolve in R

Since lpSolve does not allow != as a constraint direction, what is an alternative way to get the same result?
I would like to maximize x1 + x2 with constraints: x1 <= 5 and
x2 != 5
and keep using lpSolve R package.
I've tried using a combination of > and < to replicate the behaviour of !=; however, I do not obtain the result I expected.
f.obj<-c(1,1)
f.con<-matrix(c(1,0,0,1),nrow=2,ncol=2,byrow=TRUE)
f.dir<-c("<=","!=")
f.rhs<-c(5,5)
lp("max",f.obj,f.con,f.dir,f.rhs)$solution
Since lpSolve does not support !=, I get the error message:
Error in lp("max",f.obj,f.con,f.dir,f.rhs): Unknown constraint direction found
EDIT
I would like to maximize x1 + x2 with constraints: x1 <= 5 and
x2 < 10 and x2 != 9.
So the solution would be 5 and 8.
You can't do that, even in theory, since the resulting constraint set is not closed. It is like trying to minimize x^2 over the set x > 0. For any proposed solution x0 in that set the solution x0/2 is better so there is no optimum.
I would just use x <= 5 as your constraint and if the constraint is not active (i.e. it turns out that x < 5) then you have found the solution; otherwise, there is no solution. If there is no solution you can try x <= 5 - eps for an arbitrarily chosen eps.
ADDED:
If what you intended was that the variables x1 and x2 are integer then
x < 10 and x != 9
is equivalent to
x <= 8
Note that lp has the all.int argument which defaults to FALSE.
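A minimal sketch of the corrected call for the edited problem (x1 <= 5, and integer x2 < 10 with x2 != 9, i.e. x2 <= 8):

library(lpSolve)

f.obj <- c(1, 1)
f.con <- matrix(c(1, 0,
                  0, 1), nrow = 2, byrow = TRUE)
f.dir <- c("<=", "<=")
f.rhs <- c(5, 8)      # x2 < 10 and x2 != 9 over the integers is x2 <= 8

lp("max", f.obj, f.con, f.dir, f.rhs, all.int = TRUE)$solution
# [1] 5 8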
ADDED 2:
If you just want to find additional feasible solutions, then, letting opt be the objective value from the first solve, rerun after adding the constraint (assuming a maximization problem):
objective <= opt - eps
where eps is an arbitrarily small constant.
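As a sketch, continuing the lpSolve example above, the cut can be added by appending the objective coefficients as an extra constraint row:

first <- lp("max", f.obj, f.con, f.dir, f.rhs, all.int = TRUE)
opt   <- first$objval
eps   <- 1e-6                         # any small positive constant

f.con2 <- rbind(f.con, f.obj)         # objective as a new constraint row
f.dir2 <- c(f.dir, "<=")
f.rhs2 <- c(f.rhs, opt - eps)

lp("max", f.obj, f.con2, f.dir2, f.rhs2, all.int = TRUE)$solution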
Also note that if the vectors x1 and x2 are two optimal solutions to an LP, then any convex combination of them is also feasible, since the constraint set is necessarily convex. Because the objective is linear, all of those convex combinations must also be optimal. So if there is more than one optimum, there are infinitely many optimal solutions, and you can't simply enumerate them.
ADDED 3:
The feasible set of a linear program is a polyhedron (a polytope if it is bounded), and if an optimal value exists, at least one vertex attains it. If more than one vertex attains the same optimal value, then every point on the line segment connecting them is optimal as well. Although there are infinitely many optimal points in that case, there are only a finite number of vertices, so you could enumerate them using the vertexenum package and evaluate the objective at each one. If there is one vertex whose objective value is greater than all the others, that is the optimum. If there are several, then those vertices plus all their convex combinations are optimal. This might work if your problem is not too large.

MIP/LP - Modelling "if b=1 then x=y" constraint

I have a Mixed Integer Programming (MIP) problem, currently modelled in Python's PuLP library. My issue is however very generic, syntax doesn't play a role here.
I want to add a constraint to my model that works like this:
if b=1 then x=y
The variable b is a binary variable taking values 0 or 1. x and y are variables that represent the current stock level: x is a continuous variable, y an integer variable.
I know constraints can only be modelled in the following format:
a*x+c <= y # a, c are constants, x, y variables
I hope there is some workaround for modelling the "if b then x equals y" constraint described above.
Here are my approaches so far:
b*y <= x
y >= x*b # works in theory, but multiplication of 2 variables is not allowed
For 2 binary variables x and y the following is true:
M*y >= x # represents: if x then y (M is a sufficiently large constant)
I guess the solution involves a large M constant, maybe even further helper variables.
A little background: I want to model an inventory problem with continuous stock levels. However, order decisions should only be possible in integer numbers. I therefore need the stock level to be modelled as a float; at the point of an order (b==1), however, it must be integer.
I hope someone can help here, even if this is rather theoretic than directly coding related. Hints to further resources that might help are also highly appreciated.
b=1 => x=y
can be modeled as:
y-M(1-b) <= x <= y+M(1-b)
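To see why this works, check the two cases (M is assumed to be an upper bound on |x - y| over the feasible region):

b = 1:  y <= x <= y, which forces x = y
b = 0:  y - M <= x <= y + M, which is non-binding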

Nonlinear optimization with constraint

Consider the following data frame:
A=data.frame(v1=c(4,2,-3,3,-1,3,6,-2), v2=c(3,3,-1,5,-3,-2,-2,-3), v3=c(5,-2,2,2,5,5,4,-4),
v4=c(-2,-1,3,1,-1,3,2,-5), v5=c(2,-5,4,-4,3,1,1,1))
with the following optimization problem:
minimize over p:  sum_i (a_i'p < 0) * (a_i'p)^2,  subject to  p[1] = 1,
where a_i is the i-th row of matrix A.
I tried to solve this with the package nloptr. First the objective function:
fct <- function(p) {
return(sum((as.matrix(A)%*%p<0)*(as.matrix(A)%*%p)^2))
}
Then the constraint:
constraint <- function(p){
return(p[1]-1)
}
But all solvers that I tried demand a gradient, e.g.:
sol <- nloptr(x0=c(1,1,-0.13,-0.5,1.3), eval_f=fct, eval_g_eq=constraint,
opts=list("algorithm"="NLOPT_LD_SLSQP"))
-> A gradient for the objective function is needed by algorithm NLOPT_LD_SLSQP but was not supplied
Is it possible to calculate the gradient of this function, or are there other ways to solve this problem?
Thank you.
I suspect you can solve this as a standard QP (Quadratic Programming) problem:
min sum(i, y(i)^2 )
y(i) <= sum(j, a(i,j)*p(j))
y(i) <= 0
No need for gradients. QP Solvers like quadprog, Cplex and Gurobi can solve this: just plug the problem in.
The last constraint is just a bound which simplifies things even more.
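For illustration, here is a minimal quadprog sketch of this reformulation, using the data frame A from the question and including the asker's p[1] = 1 equality. The tiny ridge on the p block is an assumption forced by solve.QP, which requires a strictly positive definite Dmat:

library(quadprog)

Amat0 <- as.matrix(A)                 # the data frame A from the question
m <- nrow(Amat0); n <- ncol(Amat0)    # m = 8 rows a_i, n = 5 p-variables
nv <- n + m                           # decision vector is (p, y)

# Objective: minimize sum(y^2).  solve.QP minimizes (1/2) b'Db - d'b,
# so put 2 on the y block of D; the 1e-8 ridge keeps D positive definite.
Dmat <- diag(c(rep(1e-8, n), rep(2, m)))
dvec <- rep(0, nv)

# Constraints in solve.QP's t(Amat) %*% b >= bvec form
# (the first meq columns are equalities):
#   p[1] = 1
#   a_i'p - y_i >= 0    (i.e. y_i <= a_i'p)
#   -y_i        >= 0    (i.e. y_i <= 0)
Cmat <- cbind(c(1, rep(0, nv - 1)),
              rbind(t(Amat0), -diag(m)),
              rbind(matrix(0, n, m), -diag(m)))
bvec <- c(1, rep(0, 2 * m))

sol <- solve.QP(Dmat, dvec, Cmat, bvec, meq = 1)
p <- sol$solution[1:n]                # the optimal p vector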

univariate nonlinear optimization with quadratic constraint in R

I have a quadratic function f, where f = function(x) {2+.1*x+.23*(x*x)}. Let's say I have another quadratic function g, where g = function(x) {3+.4*x-.60*(x*x)}.
Now, I want to maximize f given the constraints 1. g>0 and 2. 600<x<650
I have tried the packages optim, constrOptim and optimize. optimize does one-dimensional optimization, but without constraints, and I couldn't understand constrOptim. I need to do this using R. Please help.
P.S. In this example, the values may be erratic as I have given two random quadratic functions, but basically I want maximization of a quadratic fn given a quadratic constraint.
If you solve g(x)=0 for x by the usual quadratic formula, then that just gives you another set of bounds on x. If your x^2 coefficient is negative then g(x) > 0 between the solutions; otherwise g(x) > 0 outside the solutions, i.e. within (-Inf, x1) and (x2, Inf).
In this case, g(x) > 0 for -1.927 < x < 2.59, so your two constraints cannot be satisfied simultaneously (g(x) is LESS THAN 0 for 600 < x < 650).
But supposing your second condition was 1 < x < 5, then you'd just combine the solution from g(x)>0 with that interval to get 1 < x < 2.59, and then maximise f in that interval using standard univariate optimisation.
And you don't even need to run an optimisation algorithm. Your target f is quadratic. If the coefficient of x^2 is positive, the maximum is going to be at one of your limits on x, so you only have a small number of values to try. If the coefficient of x^2 is negative, then the maximum is either at a limit or at the point where f(x) peaks (solve f'(x) = 0), if that is within your limits.
So you can do this precisely, there's just a few conditions to test and then some intervals to compute and then some values of f at those interval limits to calculate.
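A short R sketch of this recipe, using the questioner's f and g and the hypothetical 1 < x < 5 box from above (since the original 600 < x < 650 box is infeasible):

f <- function(x) 2 + .1*x + .23*(x*x)

# Roots of g(x) = 3 + 0.4x - 0.6x^2 = 0 via the quadratic formula:
qa <- -0.60; qb <- 0.4; qc <- 3
roots <- sort((-qb + c(-1, 1) * sqrt(qb^2 - 4*qa*qc)) / (2*qa))
# qa < 0, so g(x) > 0 between the roots: approx (-1.927, 2.594)

# Intersect with the box 1 < x < 5:
lo <- max(roots[1], 1); hi <- min(roots[2], 5)

# f has a positive x^2 coefficient, so its maximum over [lo, hi]
# is attained at an endpoint:
cands <- c(lo, hi)
cands[which.max(f(cands))]   # approx 2.594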
