Pyomo constraint > 100000 OR 0 - constraints

All,
I am working on some code where there is a requirement to buy/sell a minimum of 100,000 packets. If not possible then this should be zeroed.
I have tried a number of things for this, including:
def objective_rule(model):
    return sum(model.Prices[ProductCount]*model.Amount[ProductCount]*(model.Amount[ProductCount]>100000) for ProductCount in model.Products)
But this is slower than expected.
I would like to put an explicit constraint in place. Something akin to:
def minTradesize_Constraint(model):
    return ((model.Amount[ProductCount] >= 100000) | \
            (model.Amount[ProductCount] == 0.00) for ProductCount in model.Products)
I have looked at indicator functions but the Pyomo continuous approximations don't help.
Any help/guidance appreciated.

Basically, what you are trying to achieve is to make the model.Amount[ProductCount] terms take discontinuous values (either zero, or greater than or equal to 100,000). To achieve that, you will first need to define a binary variable: model.y = pyomo.Var(model.Products, within=pyomo.Binary).
Then you will need to add the following constraints:
def minTradesize_Constraint1(model):
    return (model.Amount[ProductCount] >= 100000 * model.y[ProductCount] for ProductCount in model.Products)
def minTradesize_Constraint2(model):
    return (model.Amount[ProductCount] <= M * model.y[ProductCount] for ProductCount in model.Products)
where M is a sufficiently large number (can be a realistic upper bound for your model.Amount[ProductCount] variable).
As a result of this formulation, if y[ProductCount] is zero, then model.Amount[ProductCount] will also be zero. If the model now wants model.Amount[ProductCount] to take a positive value, it will have to set the binary y[ProductCount] to 1, hence forcing model.Amount[ProductCount] to become greater than or equal to 100,000.
Note: I have formulated the constraints in the same style that you used in your question. However, if I understand your model correctly, I would say that the first constraint, for instance, should be:
def minTradesize_Constraint1(model, ProductCount):
    return model.Amount[ProductCount] >= 100000 * model.y[ProductCount]
and the for ProductCount in model.Products part should be supplied when you create the Pyomo constraint, as sketched below.
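For completeness, here is a minimal runnable sketch of that formulation. The product set, the prices, the objective and the value of M are placeholders I made up for illustration; only the two linking constraints come from the answer above.
import pyomo.environ as pyomo

model = pyomo.ConcreteModel()
model.Products = pyomo.Set(initialize=['A', 'B', 'C'])                 # placeholder products
model.Prices = pyomo.Param(model.Products,
                           initialize={'A': 1.0, 'B': 2.0, 'C': 3.0})  # placeholder prices
model.Amount = pyomo.Var(model.Products, within=pyomo.NonNegativeReals)
model.y = pyomo.Var(model.Products, within=pyomo.Binary)

M = 10000000   # assumed upper bound on any single Amount; use a realistic bound for your model

# Amount must be at least 100,000 whenever the product is traded (y = 1)
def minTradesize_Constraint1(model, ProductCount):
    return model.Amount[ProductCount] >= 100000 * model.y[ProductCount]
model.minTrade1 = pyomo.Constraint(model.Products, rule=minTradesize_Constraint1)

# Amount must be zero whenever the product is not traded (y = 0)
def minTradesize_Constraint2(model, ProductCount):
    return model.Amount[ProductCount] <= M * model.y[ProductCount]
model.minTrade2 = pyomo.Constraint(model.Products, rule=minTradesize_Constraint2)

# The objective no longer needs the (Amount > 100000) trick from the question
def objective_rule(model):
    return sum(model.Prices[p] * model.Amount[p] for p in model.Products)
model.obj = pyomo.Objective(rule=objective_rule, sense=pyomo.maximize)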

To simplify the solution even further, I added a binary variable called route_selected:
# model.R is my set of routes
model.route_selected = pe.Var(model.R, domain=pe.Binary, initialize=0)
This is how my dependent variable looks:
# Variable to solve for: this is the variable the solver will change to find a solution,
# i.e. for each route, port, cust, year combination, what amount should be supplied
model.x_rpcy = pe.Var(model.R, model.P, model.C, model.Y, domain=pe.NonNegativeIntegers, initialize=0)
and then added this constraint:
model.const_route = pe.ConstraintList()
vessel_size = 1000
for y in model.Y:
    for r in model.R:
        lhs = sum(model.x_rpcy[r, p, c, y] for p in model.P for c in model.C)
        model.const_route.add(lhs == vessel_size * model.route_selected[r])

Related

binary variable depending on a continuous variable (mixed integer programming in cplex)

I'm trying to formulate a constraint for a MIP problem that involves a binary variable v and a continuous variable i, such that:
if i = 0, v = 0, and
if i > 0, v = 1
I haven't been able to think of a solution to this and I'm not sure if there is a solution. Any suggestion is greatly appreciated. Thank you!
You can rely on logical constraints.
In OPL you can write
dvar boolean v;
dvar float+ i;
subject to
{
  v == !(i == 0);
}
And you can do the same with all CPLEX APIs.
You can also model this using the 'traditional' Big-M formulation which is documented in many places on the internet and in many textbooks.
Usually this is done in a pair of constraints like this:
i <= M * v
This forces i to be zero if v is zero, and if i is non-zero then v must be 1, which covers most of your requirement but still allows i = 0 with v = 1. In many cases the objective is trying to minimise some expression that includes v, and that may be sufficient to encourage v = 0 when i = 0. But don't fall into the common error of using a really big value for M, as that will adversely affect your linear relaxations and possibly overall performance.
Then you might also need to add a further constraint to force v to zero if i is zero such as:
v <= i
which would have the effect of directly forcing v to zero if i is zero.
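If you are not using OPL, the same Big-M pair is easy to write in any modelling layer. Here is a small Pyomo sketch of the idea (Pyomo and the value of M are my choices for illustration, not part of the original answer):
import pyomo.environ as pyo

m = pyo.ConcreteModel()
M = 1000                                           # assumed upper bound on i; use a realistic bound
m.i = pyo.Var(within=pyo.NonNegativeReals, bounds=(0, M))
m.v = pyo.Var(within=pyo.Binary)

# i <= M * v: forces i = 0 when v = 0, and v = 1 as soon as i > 0
m.link = pyo.Constraint(expr=m.i <= M * m.v)

# Minimising something that includes v discourages v = 1 while i = 0
m.obj = pyo.Objective(expr=m.v, sense=pyo.minimize)

# The extra constraint v <= i from the answer also forces v = 0 when i = 0,
# but note it only behaves as intended if i is either 0 or at least 1 when positive.
# m.link_down = pyo.Constraint(expr=m.v <= m.i)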

Assigning specific values to a boolean array

Say I am tossing a fair coin where 'tails' is assigned the value x = -1/2 and 'heads' is assigned x = 1/2.
I do this N times and I want to obtain the sum. This is what I have tried:
p = 0.5
N = 1e4
X(N,p)=(rand(N).<p)
I know this is incomplete but when I check (rand(N).<p) I see an array consisting of true, false. I interpret this as 'Tails' or 'Heads'. However, I don't know how to assign the values 1/2 and -1/2 to each of these elements in order for me to find the sum. If I simply use sum((rand(N).<p)) I do get an integer value, but I don't think this is the right way to do it because I haven't specified the values 1/2 and -1/2 anywhere.
Any help is greatly appreciated.
As indicated by the comments already, you want to do
sum(rand([-0.5, 0.5], N))
where N must be an integer (you wrote N=1e4, therefore typeof(N) == Float64 and rand won't work).
The documentation of rand (obtained by ?rand) describes what rand(S, N) does:
Pick a random element or array of random elements from the set of
values specified by S
Here, S can be an optional indexable collection, an array of values in your case (or a type like Int). So, above S = [-0.5, 0.5] and rand draws N random elements from this collection, which we can afterwards sum up.
Assigning specific values to a boolean array
Since this is the title of your question, and the answer above doesn't actually address this, let me comment on this as well.
You could do sum((rand(N).<p)-0.5), i.e. you shift all the ones to 0.5 and all the zeros to -0.5 to get the wanted result. Note that this is a general strategy: Let's say you want true to be a and false to be b, where a and b are numbers. You achieve this by (rand(N).<p)*(a-b) + b.
However, beyond being more "complicated", sum((rand(N).<p)-0.5) will allocate temporary arrays, first one of booleans, then one of numbers, the latter of which will eventually go into sum. Because of these unnecessary allocations this approach will be slower than the solution above.

Does runif() really have a range: 0<= runif(n) <= 1, as stated in the documentation?

I'm new to R, but the documentation surprised me by stating that runif(n) returns a number in the range 0 to 1 inclusive.
I would expect 0 <= runif(n) < 1 -- including 0 and not including 1.
I tested it with n = 100,000,000, and found that it never produced 0 or 1. I realize that the probability of actually hitting specific values in floating point is really small, but still... (There are something like 2^53 values between 0 and 1 in double precision).
So I looked into the R source code and found this in r-source-trunk\src\nmath\runif.c:
do {
    u = unif_rand();
} while (u <= 0 || u >= 1);
return a + (b - a) * u;
So by design, despite the documentation, it will never ever return a 0 or 1.
Isn't this a bug?
Or at least a problem with the documentation?
The underlying uniform random number function is defined here and the final outputs use this function:
static double fixup(double x)
{
    /* ensure 0 and 1 are never returned */
    if(x <= 0.0) return 0.5*i2_32m1;
    if((1.0 - x) <= 0.0) return 1.0 - 0.5*i2_32m1;
    return x;
}
Despite this, there are comments of the form /* in [0,1) */ for each of the generator's return functions, which I assume is a mistake given the above.
And of course, the code you noticed in runif.c is preceded by:
/* This is true of all builtin generators, but protect against
user-supplied ones */
So the min or max will never be returned except in the cases mentioned by @JesseTweedle, which is not the case when just calling runif().
For reference, the magic value i2_32m1 is 1/(2^32-1) so the minimum value you can get from the default generators is 1/(2^33-2) which is approximately 1.16e-10. The maximum value is this amount short of 1.
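A quick numeric check of that claim (plain Python, just to reproduce the numbers):
i2_32m1 = 1.0 / (2**32 - 1)      # the constant from R's RNG code
lo = 0.5 * i2_32m1               # smallest value fixup() can return
hi = 1.0 - 0.5 * i2_32m1         # largest value fixup() can return
print(lo, 1.0 / (2**33 - 2))     # both print ~1.1641532185403987e-10
print(hi)                        # ~0.9999999998835847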
The documentation says:
runif will not generate either of the extreme values unless max = min
or max-min is small compared to min, and in particular not for the
default arguments.
With default arguments, the documentation is consistent with the behaviour you see.

Counting the number of restricted Integer partitions

Original problem:
Let N be a positive integer (actually, N <= 2000) and let P be the set of all partitions of N, i.e. all tuples (x_1, ..., x_k) with x_1 >= x_2 >= ... >= x_k >= 1 and x_1 + ... + x_k = N. Let A be the number of partitions in P whose largest part is smaller than twice their smallest part, i.e. x_1 < 2*x_k. Find A.
Input: N. Output: A - the number of such partitions.
What I have tried:
I think that this problem can be solved by a dynamic-programming algorithm. Let p(n,a,b) be the function which returns the number of partitions of n using only numbers a..b. Then we can compute A with code like this:
int Ans = 2;                        // the 1+1+...+1 = N and N = N partitions
for(int a = 2; a <= N/2; a += 1){   // a - from 2 to N/2
    int b = a*2 - 1;
    Ans += p[N][a][b];              // add all partitions using numbers a..b to the answer
    if(a <= (a-1)*2 - 1){           // if the ranges a-1..prev_b and a..b overlap (i.e. a >= 3),
        Ans -= p[N][a][(a-1)*2-1];  // subtract the partitions using numbers a..prev_b,
    }                               // which were counted twice
}
Next I tried to find a dynamic-programming algorithm computing p(n,a,b) for any integers a <= b <= n. This paper (.pdf) provides the following recurrence:
p(n,a,b) = I(n <= b) + sum over k from a to min(b, floor(n/2)) of p(n-k, k, b),
where I(n <= b) = 1 if n <= b and 0 otherwise (the indicator accounts for the single-part partition n = n; the sum runs over the smallest part k when there are at least two parts).
Question(s):
How should I implement the algorithm from the paper? I'm new to DP problems and, as I can see, this problem has 3 dimensions (n, a and b), which is quite tricky for me.
How does that algorithm actually work? I know how the algorithms for computing p(n,0,b) or p(n,a,n) work, but a little explanation for p(n,a,b) would be very helpful.
Does the original problem have a simpler solution? I'm quite sure that there's another clean solution, but I haven't found it.
I calculated all of A(1)-A(600) in 23 seconds with a memoization approach (top-down dynamic programming). The 3D table requires 1.7 GB of memory.
For reference: A(50) = 278, A(200) = 465202, A(600) = 38860513616.
N=2000 requires too large a table for a 32-bit environment, and a map-based approach was too slow.
I can make a 2D table of reasonable size, but that approach requires zeroing the table at every iteration of the external loop - slow again.
A(1000) = 107292471486730 in 131 sec. I think that long arithmetic might be needed for larger values to avoid Int64 overflow.
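The answerer did not post code, so here is a minimal Python sketch of the top-down (memoized) approach, built directly on the recurrence and the inclusion-exclusion loop from the question; the function names are mine and the memory use is as unoptimised as the 1.7 GB table mentioned above.
import sys
from functools import lru_cache

def count_restricted_partitions(N):
    # A(N): partitions of N whose largest part is less than twice the smallest part
    sys.setrecursionlimit(10000)

    @lru_cache(maxsize=None)
    def p(n, a, b):
        # partitions of n into parts from a..b (callers guarantee n >= a)
        total = 1 if n <= b else 0                   # the single-part partition n = n
        for k in range(a, min(b, n // 2) + 1):       # k = smallest part when there are >= 2 parts
            total += p(n - k, k, b)
        return total

    ans = 2                                          # the 1+1+...+1 = N and N = N partitions
    for a in range(2, N // 2 + 1):
        b = 2 * a - 1
        ans += p(N, a, b)                            # all parts in a..b
        prev_b = 2 * (a - 1) - 1
        if a <= prev_b:                              # ranges a-1..prev_b and a..b overlap
            ans -= p(N, a, prev_b)                   # remove partitions counted twice
    return ans

print(count_restricted_partitions(50))               # should match the reference value A(50) = 278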

Decompose integer into two bytes

I'm working on an embedded project where I have to write a time-out value into two byte registers of some micro-chip.
The time-out is defined as:
timeout = REG_a * (REG_b +1)
I want to program these registers using an integer in the range of 256 to, let's say, 60000. I am looking for an algorithm which, given a timeout value, calculates REG_a and REG_b.
If an exact solution is impossible, I'd like to get the next possible larger time-out value.
What have I done so far:
My current solution calculates:
temp = integer_square_root (timeout) +1;
REG_a = temp;
REG_b = temp-1;
This results in values that work well in practice. However, I'd like to see if you could come up with a better solution.
Oh, and I am memory constrained, so large tables are out of question. Also the running time is important, so I can't simply brute-force the solution.
You could adapt the code from Algorithm to find the factors of a given Number.. Shortest Method? to find a factor of timeout:
n = timeout
REG_A = n                        // fallback: no factor below the square root, i.e. timeout is prime
for (i = 2; i * i <= n; ++i)     // for each number i up until the square root of timeout
{
    if (n % i == 0)              // i divides timeout, so it is its smallest factor
    {
        REG_A = i;
        break;
    }
}
REG_B = timeout / REG_A - 1;     // so that REG_A * (REG_B + 1) == timeout exactly
The smallest factor found becomes REG_A, and REG_B then follows from timeout = REG_A * (REG_B + 1). If timeout is prime, there is no exact factorisation other than timeout * 1, so you would have to move on to the next larger timeout value that does factor.
Interesting problem, Nils!
Suppose you start by fixing one of the values, say Reg_a, then compute Reg_b by division with roundup: Reg_b = ((timeout + Reg_a-1) / Reg_a) -1.
Then you know you're close, but how close? Well the upper bound on the error would be Reg_a, right? Because the error is the remainder of the division.
If you make one of the factors as small as possible and then compute the other factor, you make that upper bound on the error as small as possible.
On the other hand, by making the two factors close to the square root, you make the divisor as large as possible, and therefore make that upper bound on the error as large as possible!
So:
First, what is the minimum value for Reg_a? It is (timeout + 255) / 256 (integer division, i.e. timeout / 256 rounded up, since Reg_b + 1 can be at most 256).
Then compute Reg_b as above.
This won't be the absolute minimum combination in all cases, but it should be better than using the square root, and faster, too.
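A small Python sketch of this "minimise Reg_a first" idea (my code, assuming 8-bit registers, i.e. both values at most 255):
def split_timeout(timeout):
    # choose REG_a and REG_b so that REG_a * (REG_b + 1) >= timeout
    reg_a = (timeout + 255) // 256                 # smallest Reg_a that can still reach timeout
    reg_b = (timeout + reg_a - 1) // reg_a - 1     # divide with round-up, then subtract 1
    return reg_a, reg_b

print(split_timeout(1000))    # (4, 249)   -> 4 * 250 = 1000, exact
print(split_timeout(60000))   # (235, 255) -> 235 * 256 = 60160, overshoot of 160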

Resources