Logical constraints in Gurobi

I have a few Gurobi variables a[i], b[i], c[i] (for i in 0 to some number), and I would like to add the constraint
for i in range(0, number):
    m.addConstr(a[i]==b[i] if c[i]==1)
According to the Gurobi website this is possible, but when I try to implement it (in Python) I keep getting an invalid syntax error because of the if. Does anyone know what I'm doing wrong?
Thanks

If c[i] is a binary variable, the if-then logic can be achieved with indicator constraints:
for i in range(0, number):
    m.addGenConstrIndicator(c[i], 1, a[i] == b[i])
Or you can use the overloaded form:
for i in range(0, number):
    m.addConstr((c[i] == 1) >> (a[i] == b[i]))
See the Gurobi documentation for further details and examples.

Since your for loop creates multiple constraints, try the code below, which uses the addConstrs() method instead:
m.addConstrs((a[i] == b[i] for i in range(number) if c[i] == 1), name="c")

Related

Coding a mathematical expression

I am supposed to write a function that, for the values of Pi and P*, returns α.
I am having trouble with the usage of sum in my function.
So far I have something like this:
sqrt((sum(x[i], i == 1, i == length(pstar)])*(p-pstar)^2)/n)/pstar)*100
A sum over a vector x in R is just sum(x), not sum(x[i], i == 1, i == length(x)) * x. (In fact, the latter doesn’t make much sense even if the syntax was correct, since there’s no multiplication involved in a sum.)
So, in your case:
sum((p - pstar) ^ 2)

Tensor mode n product in Julia

I need the tensor mode-n product.
The definition of the tensor mode-n product can be seen here:
https://www.alexejgossmann.com/tensor_decomposition_tucker/
I found Python code.
I would like to convert this code into Julia.
import numpy as np

def mode_n_product(x, m, mode):
    x = np.asarray(x)
    m = np.asarray(m)
    if mode <= 0 or mode % 1 != 0:
        raise ValueError('`mode` must be a positive integer')
    if x.ndim < mode:
        raise ValueError('Invalid shape of X for mode = {}: {}'.format(mode, x.shape))
    if m.ndim != 2:
        raise ValueError('Invalid shape of M: {}'.format(m.shape))
    return np.swapaxes(np.swapaxes(x, mode - 1, -1).dot(m.T), mode - 1, -1)
I have found another answer using TensorToolbox.jl:
using TensorToolbox
X=rand(5,4,3);
A=rand(2,5);
ttm(X,A,n) #X times A[1] by mode n
One way is:
using TensorOperations
@tensor y[i1, i2, i3, out, i5] := x[i1, i2, i3, s, i5] * a[out, s]
This is literally the formula given at your link to define this, except that I changed the name of the summed index to s; you can use any index names you like, they are just markers. The sum is implicit, because s does not appear on the left.
There is nothing very special about putting the index out back in the same place. Like your Python code, @tensor permutes the dimensions of x in order to use ordinary matrix multiplication, and then permutes again to give y the requested order. The fewer permutations needed, the faster this will be.
Alternatively, you can try using LoopVectorization, Tullio; @tullio y[i1, i2, ... with the same notation. Instead of permuting in order to call a library matrix multiplication function, this writes a pure-Julia version which works with the array as it arrives.
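If you would rather have a direct, drop-in translation of the NumPy function instead of an einsum-style macro, here is a minimal sketch using only Base Julia (permutedims and reshape); the function name mirrors the Python one and the argument checks are optional. Up to floating-point rounding it should give the same result as ttm(X, A, n) from TensorToolbox.
function mode_n_product(x::AbstractArray, m::AbstractMatrix, mode::Integer)
    mode >= 1 || throw(ArgumentError("`mode` must be a positive integer"))
    ndims(x) >= mode || throw(ArgumentError("invalid shape of X for mode = $mode: $(size(x))"))
    perm = [mode; setdiff(1:ndims(x), mode)]        # bring the `mode` dimension to the front
    xperm = permutedims(x, perm)
    xmat = reshape(xperm, size(x, mode), :)         # mode-n unfolding: each column is a mode-n fiber
    ymat = m * xmat                                 # multiply every fiber by m
    y = reshape(ymat, size(m, 1), size(xperm)[2:end]...)
    return permutedims(y, invperm(perm))            # fold back; the new dimension sits in position `mode`
end
For example:
X = rand(5, 4, 3); A = rand(2, 5);
mode_n_product(X, A, 1)   # size (2, 4, 3), matching ttm(X, A, 1)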

Store values generated by a for-loop. JuMP/Julia

It's amazing that the internet is totally devoid of this simple question (or anything similar). Or I'm just very bad at searching. Anyway, I simply want to store values generated by a for loop in an array and print the array. Simple as that.
In every other language (Matlab, R, Python, Java, etc.) this is very simple, but in Julia I seem to be missing something.
using JuMP
# t = int64[] has also been tested
t = 0
for i in 1:5
    vector[i]
    println[vector]
end
I get the error
ERROR: LoadError: BoundsError
What am I missing?
You didn't initialize vector, and println is a function called with parentheses, not square brackets. In Julia 1.0:
vector = Array{Int,1}(undef, 5)
for i in 1:5
    vector[i] = i
    println(vector[i])
end
Or, more concisely, with a list comprehension:
vector = [i for i in 1:5]
for i in 1:5
    println(vector[i])
end
Another possibility, using the push! method:
vector = []
for i in 1:5
    push!(vector, i)
    println(vector[i])
end
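One caveat: vector = [] creates a Vector{Any}. If you want a concretely typed vector, as the commented-out t = int64[] in the question suggests, a minimal variation is:
vector = Int[]            # empty vector with element type Int
for i in 1:5
    push!(vector, i)
end
println(vector)           # prints [1, 2, 3, 4, 5]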

If-Else/Function Debugging

R beginner here. I'm trying to create a function that converts values for a list in R using if-else. I'm pretty sure I'm violating some cardinal rule(s) with syntax/logic in R and I've read several manuals/online help tools for functions and if/else statements, but I cannot identify what I'm doing wrong. Here's what I am working with:
convTemp <- function(vector, to="Celsius"){
  if (to = "Celsius" ) {
    return (vector - 32) * 5/9
  }
  else
    print (vector)
}
Any help/suggestions appreciated. Thanks.
You don't even need to define a function for this, just use base R's ifelse():
temp_in_celsius <- ifelse(to == "Celsius", (vector - 32) * 5/9, vector)
As for what you are doing wrong, to = "Celsius" is an assignment, not an equality expression. You probably intended to do if (to == "Celsius") {...}.
In R, checking equality requires two equal signs "==".
Change the if statement to the following:
if (to == "Celsius" )

JuMP constraint macro changes type of previously declared variable

I have a simple mathematical program that I am trying to solve:
m = Model(solver=MosekSolver())
@variable(m, x[1:8] >= 0)
@objective(m, Min, sum(x))
@constraint(m, A*x .== given)
@constraint(m, x, sum(x) == 1)
status = solve(m)
println("x = ", getvalue(x))
A is some matrix with type Array{Float64,2}.
The line
@constraint(m, x, sum(x) == 1)
changes the type of x from Array{JuMP.Variable,1} to JuMP.ConstraintRef{JuMP.Model,JuMP.GenericRangeConstraint{JuMP.GenericAffExpr{Float64,JuMP.Variable}}}.
Since x has been previously declared as a variable, shouldn't the type remain the same? (Furthermore, if the above line is executed everything still works, but getvalue does not, due to the change in type.)
Is there a way to add the summation constraint without changing the type of x?
Refer to the JuMP documentation:
Constraint References
In order to manipulate constraints after creation, it is necessary to maintain a reference. The simplest way to do this is to use the special three-argument named constraint syntax for @constraint, which additionally allows you to create groups of constraints indexed by sets analogously to @variable.
So JuMP is working as documented: the second argument of the three-argument form names the constraint reference, and here it rebinds x. Why not @constraint(m, anothersymbol, sum(x) == 1)?
Make it like so:
@constraint(m, constr, A*x .== given)
@constraint(m, constr2, sum(x) == 1)
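Putting it together, a minimal sketch of the full model with named constraint references, keeping the question's JuMP 0.18-era syntax; A and given are placeholders for your data and are assumed to have compatible dimensions:
using JuMP, Mosek   # assuming Mosek.jl provides MosekSolver() for this JuMP version

m = Model(solver=MosekSolver())
@variable(m, x[1:8] >= 0)
@objective(m, Min, sum(x))
@constraint(m, constr, A*x .== given)   # named reference to the equality constraints
@constraint(m, constr2, sum(x) == 1)    # named reference; x keeps its Array{JuMP.Variable,1} type
status = solve(m)
println("x = ", getvalue(x))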
