Building product family constraint in CPLEX - constraints

When I added constraints to ensure that the sum of the flows of the products in each product family equals the flow of that family from one stage to the next, I was not sure whether it was right or not.
This is my code for that constraint. I denote by Q2 the rate of flow of product i from warehouse m to distribution center k, and by Qf2 the rate of flow of product family f from warehouse m to distribution center k. Family 1 includes products 1-6 and 10; family 2: products 7-9; family 3: products 11-14.
forall (i in pr, m in Wh, k in DC)
sum(i in pr:i<=6 || i==10) Q2[i][m][k] == sum(r in ra,f in Fa:f==1)Qf2[f][m][k][r];
forall (i in pr, m in Wh, k in DC)
sum(i in pr:i<=9 || i>=7) Q2[i][m][k] == sum(r in ra,f in Fa:f==2)Qf2[f][m][k][r];
forall (i in pr, m in Wh, k in DC)
sum(i in pr:i>=11) Q2[i][m][k] == sum(r in ra,f in Fa:f==3)Qf2[f][m][k][r];
* r indexes the range ra of the number of products that Qf2 belongs to

In the second constraint you wrote
sum(i in pr:i<=9 || i>=7)
You should write
sum(i in pr:i<=9 && i>=7)
which you can also write
sum(i in pr: 7 <= i <= 9)
instead
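To see concretely what the two filters select, here is a quick illustration (in Python rather than OPL, using the product ranges from the question):

```python
# Python illustration (not OPL) of the filter bug: `i <= 9 || i >= 7` is true
# for every product index, while `7 <= i <= 9` selects exactly family 2.
pr = range(1, 15)  # products 1..14, as in the question
or_filter = [i for i in pr if i <= 9 or i >= 7]   # mirrors i<=9 || i>=7
and_filter = [i for i in pr if 7 <= i <= 9]       # mirrors 7<=i<=9
print(or_filter == list(pr))  # True: the OR condition matches all products
print(and_filter)             # [7, 8, 9]: family 2 only
```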

Could not find the optimal solution after adding constraints

My code is as follows:
gekko = GEKKO(remote=True)
# create variables: each s[i] is a vector, each element of which is a binary
s = []
for i in range(N):
    s.append(gekko.Array(gekko.Var, s_len[i], value=0, lb=0, ub=1, integer=True))
# some constants used in the objective/constraint function
c, d, r, m, L = create_c_d_r_m_L()  # they are all numpy ndarrays
# define the objective function
def objective():
    obj = 0
    for i in range(N):
        obj += np.dot(s[i], c[i]) + np.dot(s[i], d[i])
    for idx, (i, j) in enumerate(E):
        obj += np.dot(np.dot(s[i], r[idx].reshape(s_len[i], s_len[j])),
                      s[j])  # s[i] * r[i, j] * s[j]
    return obj
# add constraints
# (a) each vector must contain exactly one 1
for i in range(N):
    gekko.Equation(gekko.sum(s[i]) == 1)
# (b)
for t in range(N):
    peak_mem = gekko.sum([np.dot(s[i], m[i]) for i in L[t]])
    gekko.Equation(peak_mem < DEVICE_MEM)
    # DEVICE_MEM is a predefined big int
# solve
gekko.Obj(objective())
gekko.solve(disp=True)
I found that when removing constraint (b), the solver outputs the optimal solution for s. However, if we add (b) and set DEVICE_MEM to a very large number (which should not affect the solution), s is no longer optimal. I'm wondering if I am doing something wrong here, because I tried both APOPT (solvertype=1) and IPOPT (solvertype=3) and they give the same non-optimal results.
To give more context to the problem: this is an optimization over the graph. N represents the number of nodes in the graph. E is the set that contains all edges in the graph. c, d, m are three types of cost of a node. r is the cost of edges. Each node has multiple strategies (represented by the vector s[i]), and we need to select the best strategy for each node so that the overall cost is minimal.
Detailed constants:
# s_len: record the length of each vector
# (the number of strategies for each node; here we assume they are all 10)
s_len = (np.ones(N) * 10).astype(int)
# c, d, m are the costs of each node
# let's assume the c/d/m cost for node i is just i
c, d, m = [], [], []
for i in range(N):
    c.append(int(s_len[i]) * [i])
    d.append(int(s_len[i]) * [i])
    m.append(int(s_len[i]) * [i])
# r is the edge cost; let's assume the cost for each edge is just i * j
r = []
for (i, j) in E:  # E records all edges
    cur_r = int(s_len[i]) * int(s_len[j]) * [i * j]
    r.append(cur_r)
# L contains node ids; we just randomly generate 10 integers here
from random import randrange
L = []
for i in range(N):
    cur_L = [randrange(N) for _ in range(10)]
    L.append(cur_L)
I've been stuck on this for a while and any comments/answers are highly appreciated! Thanks!
Try reframing the inequality constraint:
for t in range(N):
    peak_mem = gekko.sum([np.dot(s[i], m[i]) for i in L[t]])
    gekko.Equation(peak_mem < DEVICE_MEM)
as a variable with an upper bound:
peak_mem = gekko.Array(gekko.Var, N, ub=DEVICE_MEM)
for t in range(N):
    gekko.Equation(peak_mem[t] ==
                   gekko.sum([np.dot(s[i], m[i]) for i in L[t]]))
The N inequality constraints peak_mem < DEVICE_MEM are converted internally to equality constraints with slack variables, slack = DEVICE_MEM - peak_mem, together with the simple inequality slack >= 0. If the inequality constraint is far from the bound, the slack variable can be very large. Formulating the expression as a variable with an upper bound may help.
I tried using the information in the question to pose a minimal problem that could reproduce the error and the potential solution. If you need more specific suggestions, please modify the code to be a complete and minimal example that reproduces the error. This helps with verifying the solution.
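The slack mechanics can be illustrated without a solver. This is a plain-Python sketch with made-up numbers, showing that the inequality form and the equality-plus-slack form describe the same condition:

```python
# Plain-Python sketch (hypothetical numbers, no solver): the inequality
# peak_mem <= DEVICE_MEM is equivalent to the equality
# peak_mem + slack == DEVICE_MEM together with the bound slack >= 0.
DEVICE_MEM = 1_000_000
usages = [120, 450, 3000]       # stand-ins for the np.dot(s[i], m[i]) terms
peak_mem = sum(usages)
slack = DEVICE_MEM - peak_mem   # the solver's slack variable
print(slack >= 0)                      # True: the constraint is feasible
print(peak_mem + slack == DEVICE_MEM)  # True: the equality form holds
```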

Jump simple graph path problem, why is it looping?

I want to have x vehicles that travel on a graph, each starting from vertex 1 and ending at vertex 1, such that every vertex is visited exactly once by one and only one vehicle (for those who know, I'm interested in a PDPTW problem, but I'm stuck at this point).
using JuMP
using Cbc
nbVertex = 5
nbTransp = 2
limiteExec = 60  # solver time limit in seconds
model = Model(optimizer_with_attributes(Cbc.Optimizer, "seconds" => limiteExec))
set_optimizer_attribute(model, "PrimalTolerance", 1e-9)
print(model)
# decision variables
@variable(model, route[1:nbVertex, 1:nbVertex, 1:nbTransp], Bin) # whether the road between two vertices is taken by vehicle v
@variable(model, object >= 0)
@constraint(model, [v in 1:nbTransp], sum(route[1,i,v] for i in 1:nbVertex) == 1) # starting at vertex 1
@constraint(model, [v in 1:nbTransp], sum(route[i,1,v] for i in 1:nbVertex) == 1) # ending at vertex 1
@constraint(model, [j = 2:nbVertex], sum(route[i,j,v] for i in 1:nbVertex, v in 1:nbTransp if i != j) == 1)
# all vertices have to be seen by one and only one vehicle
@constraint(model, [j = 1:nbVertex, v = 1:nbTransp], sum(route[i,j,v] for i in 1:nbVertex if i != j) - sum(route[j,k,v] for k in 1:nbVertex if k != j) == 0)
# here is the flow-conservation constraint
@objective(model, Min, object)
@show model
optimize!(model)
for k in 1:nbTransp
    dataTmp = Matrix(undef, 2, 0)
    for i in 1:nbVertex
        for j in 1:nbVertex
            if value.(route[i,j,k]) == 1
                dataTmp = hcat(dataTmp, [i, j])
                println("vehicule ", k, " from ", i, " to ", j, ": $(route[i,j,k]) ")
            end
        end
    end
end
vehicule 1 from 1 to 2: route[1,2,1]
vehicule 1 from 2 to 1: route[2,1,1]
vehicule 2 from 1 to 3: route[1,3,2]
vehicule 2 from 3 to 1: route[3,1,2]
vehicule 2 from 4 to 5: route[4,5,2]
vehicule 2 from 5 to 4: route[5,4,2]
why is vehicle 2 looping in 4->5->4->5->4 ...?
You need to add constraints forbidding cycles (subtours) in the graph.
If you represent your route as x and the set of vertices as N (that is, N = 1:nbVertex), the classic subtour-elimination constraint can be written as:
sum(x[i,j] for i in S, j in S if i != j) <= |S| - 1, for every subset S of N with |S| >= 2.
This makes sure that for any given subset of vertices you have fewer travels within it than the number of its vertices.
In practice this constraint will look something like this:
using Combinatorics
N = 1:nbVertex
for E in powerset(collect(N), 2, nbVertex)
    @constraint(model, [k in 1:nbTransp], sum(route[i,j,k] for i in E, j in E if i != j) <= length(E) - 1)
end
The problem is that the number of such subsets grows very quickly as the size of N increases. You could try to mitigate this with a lazy-constraints callback approach (there is lots of literature on that); unfortunately, that is mostly available with commercial solvers. GLPK supports lazy constraints, but the last time I tested it, it was buggy, and CBC has no support for lazy constraints.
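To see the constraint in action, here is a small check (Python rather than Julia; the edge set is taken from the reported solution, and the subsets are restricted to exclude the depot vertex 1, as is usual in vehicle-routing formulations so that legitimate depot loops are not cut off):

```python
from itertools import combinations

# Edges used by vehicle 2 in the reported (cycling) solution:
used = {(1, 3), (3, 1), (4, 5), (5, 4)}

def edges_inside(S):
    """Number of used edges with both endpoints in the subset S."""
    return sum(1 for i in S for j in S if i != j and (i, j) in used)

# Enumerate subsets of {2,...,5} (excluding the depot, vertex 1) and flag
# those violating: edges inside S <= |S| - 1.
violated = [set(S) for r in range(2, 5)
            for S in combinations(range(2, 6), r)
            if edges_inside(set(S)) > len(set(S)) - 1]
print(violated)  # [{4, 5}] -- exactly the subtour 4 -> 5 -> 4
```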

Julia implementation of Louvain algorithm

I am trying to implement the Louvain Algorithm in Julia.
The paper describes the modularity gain as:
ΔQ = [ (Sum_in + 2·K_i_in) / (2m) − ((Sum_tot + K_i) / (2m))² ] − [ Sum_in / (2m) − (Sum_tot / (2m))² − (K_i / (2m))² ]
Where Sum_in is the sum of the weights of the links inside C, Sum_tot is the sum of the weights of the links incident to nodes in C, K_i is the sum of the weights of the links incident to node i, K_i_in is the sum of the weights of the links from i to nodes in C, and m is the sum of the weights of all the links in the network.
My implementation is:
function linksIn(graph, communities, c)::Float32
    reduce(+,
        map(
            e -> (communities[e.src] == c && communities[e.dst] == c) ? e.weight : 0,
            edges(graph)
        )
    )
end

function linksTot(graph, communities, c)::Float32
    reduce(+,
        map(
            e -> (communities[e.src] == c || communities[e.dst] == c) ? e.weight : 0,
            edges(graph)
        )
    )
end

function weightsIncident(graph, node)::Float32
    reduce(+,
        map(
            n -> get_weight(graph, node, n),
            neighbors(graph, node)
        )
    )
end

function weightsIncidentComunity(graph, communities, node, c)::Float32
    reduce(+,
        map(
            n -> (c == communities[n]) ? get_weight(graph, node, n) : 0,
            neighbors(graph, node)
        )
    )
end

function modulGain(graph, communities, node, c)::Float32
    # Calculate the variables of the modularity gain equation
    wIn = linksIn(graph, communities, c)
    wTot = linksTot(graph, communities, c)
    k = weightsIncident(graph, node)
    k_com = weightsIncidentComunity(graph, communities, node, c)
    m = reduce(+, map(e -> e.weight, edges(graph)))
    # return the result of the modularity gain equation
    return ((wIn + k_com) / (2m) - ((wTot + k) / (2m))^2) -
           ((wIn / (2m)) - (wTot / (2m))^2 - (k / (2m))^2)
end
If I compare the results of the function modulGain to the modularity difference, I get the following examples for the first pass (where each node is in its own community) on my graph:
modulGain(graph, communities, 1, 1) -> 0.00010885417
modulDifference(graph, communities, 1, 1) -> 0.0
and
modulGain(graph, communities, 1, 3) -> 4.806646e-5
modulDifference(graph, communities, 1, 3) -> 5.51432459e-5
When running the algorithm using the modularity gain equation, it tends to get stuck in an infinite loop.
I want to avoid using the modularity difference, since there is a clear performance improvement when using the modularity gain equation.
Can someone explain to me what is wrong with my implementation?
Thank you.
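For what it's worth, one way to sanity-check a gain formula is to compare it against the direct before/after modularity difference on a tiny graph. The sketch below is plain Python (not Julia) and assumes the paper's definitions, including the factor of 2 on K_i_in:

```python
# Hypothetical minimal check (plain Python, no graph library): verify that the
# modularity-gain formula matches a direct before/after modularity difference
# on a tiny weighted graph.
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0), (2, 3, 1.0)]  # (u, v, weight)
nodes = [0, 1, 2, 3]

m = sum(w for _, _, w in edges)         # total edge weight
deg = {n: 0.0 for n in nodes}           # weighted degrees k_i
for u, v, w in edges:
    deg[u] += w
    deg[v] += w

def modularity(comm):
    """Q = sum_c [ S_in_c/(2m) - (S_tot_c/(2m))^2 ],
    with S_in_c counting each internal edge in both directions."""
    q = 0.0
    for c in set(comm.values()):
        s_in = sum(2 * w for u, v, w in edges if comm[u] == c and comm[v] == c)
        s_tot = sum(deg[n] for n in nodes if comm[n] == c)
        q += s_in / (2 * m) - (s_tot / (2 * m)) ** 2
    return q

def gain(comm, i, c):
    """Paper's Delta-Q for moving an isolated node i into community c."""
    s_in = sum(2 * w for u, v, w in edges if comm[u] == c and comm[v] == c)
    s_tot = sum(deg[n] for n in nodes if comm[n] == c)
    k_i = deg[i]
    k_i_in = sum(w for u, v, w in edges
                 if (u == i and comm[v] == c) or (v == i and comm[u] == c))
    return ((s_in + 2 * k_i_in) / (2 * m) - ((s_tot + k_i) / (2 * m)) ** 2) \
         - (s_in / (2 * m) - (s_tot / (2 * m)) ** 2 - (k_i / (2 * m)) ** 2)

# Start with every node in its own community; move node 0 into node 1's community.
comm = {n: n for n in nodes}
before = modularity(comm)
moved = dict(comm)
moved[0] = comm[1]
after = modularity(moved)
assert abs(gain(comm, 0, comm[1]) - (after - before)) < 1e-12
```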

For Loop in R replacing Object Values at each iteration

I am struggling to figure out how to create a for loop in which some initial objects (u, l, h, and y) are updated and reported at the end of each iteration, and in which each iteration builds on the values from the prior one (for example, after the objects above are updated, the runif function should take the updated values of u and l when drawing q). I keep getting the same result repeated with no variation, and I am unsure what might be the best way to resolve this.
Apologies in advance as I am fairly new to R and coding in general.
reset = {
  l = 0.1  # lower bound of belief in theta
  u = 0.9  # upper bound of belief in theta
  h = 0.2  # lower legal threshold, below which an action is not liable
  y = 0.8  # upper legal threshold, above which an action is liable
}
### need 1-u <= h <= y <= 1-l for each t along every path of play
period = c(1:100)  ## number of periods in the iteration of the loop
for (t in 1:length(period)) {
  q = runif(1, min = l, max = u)  ### 1 draw of q from a uniform distribution
  probg = function(q, l, u) {(u - (1 - q)) / (u - l)}  ### probability of being found guilty given q in the ambiguous region
  probi = function(q, l, u) {1 - probg(q, l, u)}  ### probability of being found innocent given q in the ambiguous region
  ruling = if (q >= y | probg(q, l, u) > 1) {
    print("Guilty")  ### strict liability
  } else if (q <= h | probi(q, l, u) > 1) {
    print("Innocent")  ### permissible
  } else if (q > h & q < y) {  ### ambiguous region
    discovery = sample(c('guilty', 'not guilty'), size = 1, replace = TRUE,
                       prob = c(probg(q, l, u), probi(q, l, u)))  ### court discovering whether a particular ambiguous q is permissible or not
  }
  if (ruling == "not guilty") {u = 1 - q} else if (ruling == "guilty") {l = 1 - q} else (print("beliefs unchanged"))
  if (ruling == "not guilty") {h = 1 - u} else if (ruling == "guilty") {y = 1 - l} else (print("legal threshold unchanged"))
  #### legal adjustment and updating of beliefs in the ambiguous region after discovery of liability
  modelparam = c(l, u, h, y)
  show(modelparam)
}
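For reference, the carry-forward pattern the loop is aiming for can be sketched in a few lines (Python purely for illustration; the ruling rule here is a made-up stand-in, not the model above): each iteration draws from the current bounds and then narrows them, so the next draw sees the updated state.

```python
import random

random.seed(0)
l, u = 0.1, 0.9                    # current lower/upper bounds of belief
history = []
for t in range(5):
    q = random.uniform(l, u)       # the draw uses the bounds as updated so far
    if q > (l + u) / 2:            # stand-in rule for a "guilty" ruling
        l = max(l, 1 - q)          # beliefs update, only ever narrowing
    else:
        u = min(u, 1 - q)
    history.append((round(l, 3), round(u, 3)))
print(history)  # the bounds change from iteration to iteration
```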

Is it safe to replace "a/(b*c)" with "a/b/c" when using integer-division?

Is it safe to replace a/(b*c) with a/b/c when using integer-division on positive integers a,b,c, or am I at risk losing information?
I did some random tests and couldn't find an example of a/(b*c) != a/b/c, so I'm pretty sure it's safe, but I'm not quite sure how to prove it.
Thank you.
Mathematics
As mathematical expressions, ⌊a/(bc)⌋ and ⌊⌊a/b⌋/c⌋ are equivalent whenever b is nonzero and c is a positive integer (and in particular for positive integers a, b, c). The standard reference for these sorts of things is the delightful book Concrete Mathematics: A Foundation for Computer Science by Graham, Knuth and Patashnik. In it, Chapter 3 is mostly on floors and ceilings, and this is proved on page 71 as a part of a far more general result (3.10): ⌊f(x)⌋ = ⌊f(⌊x⌋)⌋ for any continuous, monotonically increasing function f with the property that f(x) being an integer implies x is an integer.
In (3.10) above, you can define x = a/b (mathematical, i.e. real division), and f(x) = x/c (exact division again), and plug those into the result ⌊f(x)⌋ = ⌊f(⌊x⌋)⌋ (after verifying that the conditions on f hold here) to get ⌊a/(bc)⌋ on the LHS equal to ⌊⌊a/b⌋/c⌋ on the RHS.
If we don't want to rely on a reference in a book, we can prove ⌊a/(bc)⌋ = ⌊⌊a/b⌋/c⌋ directly using their methods. Note that with x = a/b (the real number), what we're trying to prove is that ⌊x/c⌋ = ⌊⌊x⌋/c⌋. So:
if x is an integer, then there is nothing to prove, as x = ⌊x⌋.
Otherwise, ⌊x⌋ < x, so ⌊x⌋/c < x/c which means that ⌊⌊x⌋/c⌋ ≤ ⌊x/c⌋. (We want to show it's equal.) Suppose, for the sake of contradiction, that ⌊⌊x⌋/c⌋ < ⌊x/c⌋ then there must be a number y such that ⌊x⌋ < y ≤ x and y/c = ⌊x/c⌋. (As we increase a number from ⌊x⌋ to x and consider division by c, somewhere we must hit the exact value ⌊x/c⌋.) But this means that y = c*⌊x/c⌋ is an integer between ⌊x⌋ and x, which is a contradiction!
This proves the result.
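The identity is also easy to spot-check empirically in Python, whose integers are arbitrary-precision, so overflow cannot interfere:

```python
# Empirical spot-check of a // (b * c) == a // b // c for positive integers.
# Python ints never overflow, so this tests only the mathematical identity.
import random

random.seed(1)
for _ in range(10_000):
    a = random.randrange(1, 10**9)
    b = random.randrange(1, 10**4)
    c = random.randrange(1, 10**4)
    assert a // (b * c) == a // b // c
```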
Programming
#include <stdio.h>

int main() {
    unsigned int a = 142857;
    unsigned int b = 65537;
    unsigned int c = 65537;
    printf("a/(b*c) = %u\n", a / (b * c));
    printf("a/b/c = %u\n", a / b / c);
    return 0;
}
prints (with 32-bit integers),
a/(b*c) = 1
a/b/c = 0
(I used unsigned integers as overflow behaviour for them is well-defined, so the above output is guaranteed. With signed integers, overflow is undefined behaviour, so the program can in fact print (or do) anything, which only reinforces the point that the results can be different.)
But if you don't have overflow, then the values you get in your program are equal to their mathematical values (that is, a/(b*c) in your code is equal to the mathematical value ⌊a/(bc)⌋, and a/b/c in code is equal to the mathematical value ⌊⌊a/b⌋/c⌋), which we've proved are equal. So it is safe to replace a/(b*c) in code by a/b/c when b*c is small enough not to overflow.
While b*c could overflow (in C) for the original computation, a/b/c can't overflow, so we don't need to worry about overflow for the forward replacement a/(b*c) -> a/b/c. We would need to worry about it the other way around, though.
Let x = a/b/c. Then a/b == x*c + y for some 0 <= y < c, and a == (x*c + y)*b + z for some 0 <= z < b.
Thus, a == x*b*c + y*b + z. Since y*b + z is at most b*c - 1, we get x*b*c <= a < (x+1)*b*c, and so a/(b*c) == x.
Thus, a/b/c == a/(b*c), and replacing a/(b*c) by a/b/c is safe.
Nested floor division can be reordered as long as you keep track of your divisors and dividends.
#python3.x
x // m // n = x // (m * n)
#python2.x
x / m / n = x / (m * n)
Proof (sucks without LaTeX :( ) in python3.x:
Let k = x // m and q = x // (m * n)
Since x // m <= x / m, taking // n of both sides gives
(x // m) // n <= (x / m) // n = x // (m * n)
so k // n <= q
Conversely, q * (m * n) <= x, so q * n <= x / m
and since q * n is an integer, q * n <= x // m = k
so q <= k / n and, q being an integer, q <= k // n
Combining the two inequalities,
k // n = q, that is, (x // m) // n = x // (m * n)
https://en.wikipedia.org/wiki/Floor_and_ceiling_functions#Nested_divisions
