GLPK variable seems to change after solve

I have been experiencing strange behaviour from glpsol, more precisely from one of its variables. I run the model with glpsol -m sol.mod
Input, in file sol.mod:
set Points := (1..3);
var a{i in Points}, >= 0;
var x1{i in Points};
var x2{i in Points};
maximize obj: sum{i in Points} a[i];
px1: x1[1] = 0;
py1: x2[1] = 0;
px2: x1[2] = 2;
py2: x2[2] = 1;
px3: x1[3] = 3;
py3: x2[3] = 3;
p1x2: x1[1] + a[1] <= x1[2] - a[2];
p1x3: x1[1] + a[1] <= x1[3] - a[3];
p2x3: x2[2] + a[2] <= x2[3] - a[3];
solve;
printf "#OUTPUT:\n";
#printf{i in Points} "a_%d = %d\n", i, a[i];
printf "a[1]: %d\n", a[1];
printf "-a[1]: %d\n", -a[1];
printf "a[3]: %d\n", a[3];
printf "#OUTPUT END:\n";
end;
Output:
GLPSOL: GLPK LP/MIP Solver, v4.52
Parameter(s) specified in the command line:
-m sol.mod
Reading model section from sol.mod...
22 lines were read
Generating obj...
Generating px1...
Generating py1...
Generating px2...
Generating py2...
Generating px3...
Generating py3...
Generating p1x2...
Generating p1x3...
Generating p2x3...
Model has been successfully generated
GLPK Simplex Optimizer, v4.52
10 rows, 9 columns, 21 non-zeros
Preprocessing...
3 rows, 3 columns, 6 non-zeros
Scaling...
A: min|aij| = 1.000e+00 max|aij| = 1.000e+00 ratio = 1.000e+00
Problem data seem to be well scaled
Constructing initial basis...
Size of triangular part is 3
* 0: obj = 0.000000000e+00 infeas = 0.000e+00 (0)
* 3: obj = 3.500000000e+00 infeas = 0.000e+00 (0)
OPTIMAL LP SOLUTION FOUND
Time used: 0.0 secs
Memory used: 0.1 Mb (126476 bytes)
#OUTPUT:
a[1]: 2
-a[1]: -1
a[3]: 2
#OUTPUT END:
Model has been successfully processed
The issue seems to be that a[1] evaluates to 2 while -a[1] evaluates to -1. a[3] also equals 2, so the constraint p1x3 appears not to be fulfilled.
Currently I have no idea how to fix this, or even what caused it.

Change the %d format specifier to %g and see what happens. Note that the a[i] are continuous variables that may take fractional values. In this model the optimum is a[1] = 1.5, a[2] = 0.5, a[3] = 1.5 (which matches the reported objective of 3.5), so all three constraints are in fact satisfied; the printed 2 and -1 are just the %d renderings of 1.5 and -1.5.

Related

Error in for loop - attempt to select less than one element in integerOneIndex

I'm trying to translate a C routine from an old sound synthesis program into R, but have indexing issues which I'm struggling to understand (I'm a beginner when it comes to using loops).
The routine creates an exponential lookup table - the vector exptab:
# Define parameters
sinetabsize <- 8192
prop <- 0.8
BP <- 10
BD <- -5
BA <- -1
# Create output vector
exptab <- vector("double", sinetabsize)
# Loop
while (abs(BD) > 0.00001) {
  BY = (exp(BP) - 1) / (exp(BP * prop) - 1)
  if (BY > 2) {
    BS = -1
  } else {
    BS = 1
  }
  if (BA != BS) {
    BD = BD * -0.5
    BA = BS
    BP = BP + BD
  }
  if (BP <= 0) {
    BP = 0.001
  }
  BQ = 1 / (exp(BP) - 1)
  incr = 1 / sinetabsize
  x = 0
  stabsize = sinetabsize + 1
  for (i in (1:(stabsize - 1))) {
    x = x + incr
    exptab[[sinetabsize - i]] = 1 - (BQ * (exp(BP * x) - 1))
  }
}
Running the code gives the error:
Error in exptab[[sinetabsize - i]] <- 1 - (BQ * (exp(BP * x) - 1)) :
attempt to select less than one element in integerOneIndex
Which, I understand from looking at other posts, indicates an indexing problem. But, I'm finding it difficult to work out the exact issue.
I suspect the error may lie in my translation. The original C code for the last few lines is:
for (i = 1; i < stabsize; i++){
    x += incr;
    exptab[sinetabsize-i] = 1.0 - (float) (BQ*(exp(BP*x) - 1.0));
}
I had thought the R code for (i in (1:(stabsize-1))) was equivalent to the C code for (i=1; i< stabsize;i++) (i.e. the initial value of i is i = 1, the test is whether i < stabsize, and the increment is +1). But now I'm not so sure.
Any suggestions as to where I'm going wrong would be greatly appreciated!
As you say, array indexing in R starts at 1; in C it starts at zero. I reckon that's your problem. Can sinetabsize-i ever get to zero? It can: on the last pass through the loop, i equals sinetabsize, so the code asks for exptab[[0]], which R rejects. Shifting the index by one, exptab[[sinetabsize - i + 1]], keeps it within the 1-based range.

CSIM will not run in Scilab v6.0.0 or 6.0.1

In Scilab v5.5.2 this code executes without issue.
In Scilab v6.0.0 or higher I get the following error:
lsode-- at t (=r1), mxstep (=i1) steps
necessary before reaching tout
where i1 is : 500
where r1 is : 0.1202764106130D-05
Excessive work done on this call (perhaps wrong jacobian type).
at line 159 of function csim ( C:\Program Files\scilab-6.0.1\modules\cacsd\macros\csim.sci line 170 )
at line 39 of executed file C:\Users\wensrl\Documents\SciLab\Control\optTest2.sce
ode: lsode exit with state -1.
Here is the code,
clear
clc
t = linspace(1, 520, 5200)
for i = 1:5200
    if (i > 15) then
        if (i < (5200 / 2)) then
            u(i) = 1;
        else
            u(i) = 0;
        end
    else
        u(i) = 0;
    end
end
P = syslin('c', 0.72, 1 + 11 * %s);
n = 4 // order of the delay function
delay = 1 / (( 1 + ((%s * 3) / n)) ^n); // make into a function
Pd = P * delay;
x0=[7.1373457 6.6467066 1.0393701 0.125];
kc = x0(1);
ki = x0(2);
kd = x0(3);
alpha = x0(4);
// stdDeltaV PID formula
pidFormula = kc * (1 + (1/(ki * %s)) + ...
((kd * %s)/(alpha * kd * %s + 1)));
C = syslin('c', pidFormula);
oL = Pd * C;
cL = oL /. 1;
[y] = csim(u', t, cL)
For me it works similarly with scilab-5.5.2 and scilab-6.0.1.
But note that ODE solvers expect continuous systems. Here your input is discontinuous, so the solver has difficulty integrating it and the result may be wrong.
In fact you should perform three successive integrations, one for each continuous part:
[y1,x1]=csim(u(1:15)',t(1:15),cL);
[y2,x2]=csim(u(15:2599)',t(15:2599)-t(15),cL,x1(:,$));
[y3,x3]=csim(u(2599:$)',t(2599:$)-t(2599),cL,x2(:,$));
clf(),plot([t(1:15) t(15:2599) t(2599:$)],[y1 y2 y3])

rand() in range returning numbers outside of the range

In my program, I have to find two random values with certain conditions:
i needs to be in the range [2...n]
k needs to be in range [i+2...n]
so I did this:
i = rand() % n + 2;
k = rand() % n + (i+2);
But it keeps giving me wrong values like
for n = 7
I get i = 4 and k = 11
or i = 3 and k = 8
How can I fix this?
The exact formula that I use in my other program is:
i = min + (rand() % (int)(max - min + 1))
Look here for another explanation.
As the comments say, your range math is off: rand() % n + 2 produces values in [2, n+1], and rand() % n + (i+2) produces values in [i+2, i+n+1].
You might find it useful to use a function to work the math out consistently each time, e.g.:
int RandInRange(int x0, int x1)
{
    if (x1 <= x0) return x0;
    return rand() % (x1 - x0 + 1) + x0;
}
then call it with what you want:
i = RandInRange(2,n);
k = RandInRange(i+2,n);
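To sanity-check the ranges, here's a minimal, self-contained sketch (the main/seeding scaffolding is mine, not part of the original answer). One caveat: when i + 2 > n there is no valid k at all, and the guard in RandInRange falls back to returning x0, which is then out of range:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int RandInRange(int x0, int x1)
{
    if (x1 <= x0) return x0;
    return rand() % (x1 - x0 + 1) + x0;
}

int main(void)
{
    srand((unsigned)time(NULL)); /* seed once per run */
    int n = 7;
    for (int trial = 0; trial < 5; trial++) {
        int i = RandInRange(2, n);      /* i in [2, n] */
        int k = RandInRange(i + 2, n);  /* k in [i+2, n], provided i + 2 <= n */
        printf("i = %d, k = %d\n", i, k);
    }
    return 0;
}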

How to solve this hard combinatoric?

This is a contest problem (ACM ICPC South America 2015), it was the hardest in the problem set.
Summary: Given integers N and K, count the number of sequences a of length N consisting of integers 1 ≤ a_i ≤ K, subject to the condition that for any x in that sequence there has to be a pair i, j satisfying i < j and a_i = x − 1 and a_j = x, i.e. the last x is preceded by x − 1 at some point.
Example: for N = 1000 and K = 100 the solution should be congruent to 265428620 modulo (10^9 + 7). Other examples and details can be found in the problem description.
I tried everything I know, but I need pointers on how to approach it. I even printed some lists with brute force to look for a pattern, but I didn't succeed.
I'm looking for an algorithm or formula that gets me to the right solution for this problem. It can be in any language.
EDIT:
I solved the problem using a formula I found on the internet (from someone who explained this problem). However, just because I programmed it doesn't mean I understand it, so the question remains open. My code is below (the online judge returns Accepted):
#include <bits/stdc++.h>
using namespace std;
typedef long long int ll;

ll mod = 1e9+7;
ll memo[5001][5001];

ll dp(int n, int k){
    // K can't be greater than N
    k = min(n, k);
    // if N or K is 1, it means there's only one possible list
    if(n <= 1 || k <= 1) return 1;
    if(memo[n][k] != -1) return memo[n][k];
    ll ans1 = (n-k) * dp(n-1, k-1);
    ll ans2 = k * dp(n-1, k);
    memo[n][k] = ((ans1 % mod) + (ans2 % mod)) % mod;
    return memo[n][k];
}

int main(){
    int n, q;
    for(int i=0; i<5001; i++)
        fill(memo[i], memo[i]+5001, -1);
    while(scanf("%d %d", &n, &q) == 2){
        for(int i=0; i<q; i++){
            int k;
            scanf("%d", &k);
            printf("%s%lld", i==0? "" : " ", dp(n, k));
        }
        printf("\n");
    }
    return 0;
}
The most important lines are in the recursive call, particularly these:
ll ans1 = (n-k) * dp(n-1, k-1);
ll ans2 = k * dp(n-1, k);
memo[n][k] = ((ans1 % mod) + (ans2 % mod)) % mod;
That is, the recurrence is dp(n, k) = (n-k)·dp(n-1, k-1) + k·dp(n-1, k), with dp(n, k) = 1 whenever n ≤ 1 or k ≤ 1, and k capped at n. As a quick check, dp(3, 10) reduces to dp(3, 3) = 3·dp(2, 2) = 3·2 = 6, which matches the brute force below.
Here I show the brute-force algorithm for the problem in Python. It works for small numbers, but for very big numbers it takes too much time. For N=1000 and K=5 it is already infeasible (it would need more than 100 years to calculate; in C it would also be infeasible, as C is only about 100 times faster than Python). So the problem actually forces you to find a shortcut.
import itertools

def checkArr(a, K):
    for i in range(2, min(K+1, max(a)+1)):
        if i-1 not in a:
            return False
        if i not in a:
            return False
        if a.index(i-1) > len(a)-1-a[::-1].index(i):
            return False
    return True

def num_sorted(N, K):
    result = 0
    for a in itertools.product(range(1, K+1), repeat=N):
        if checkArr(a, K):
            result += 1
    return result

num_sorted(3, 10)
It returns 6 as expected.

Divide by 10 using bit shifts?

Is it possible to divide an unsigned integer by 10 using pure bit shifts, addition, subtraction and maybe multiply? I'm using a processor with very limited resources and a slow divide.
Editor's note: this is not actually what compilers do, and gives the wrong answer for large positive integers ending with 9, starting with div10(1073741829) = 107374183 not 107374182. It is exact for smaller inputs, though, which may be sufficient for some uses.
Compilers (including MSVC) do use fixed-point multiplicative inverses for constant divisors, but they use a different magic constant and shift on the high-half result to get an exact result for all possible inputs, matching what the C abstract machine requires. See Granlund & Montgomery's paper on the algorithm.
See Why does GCC use multiplication by a strange number in implementing integer division? for examples of the actual x86 asm gcc, clang, MSVC, ICC, and other modern compilers make.
This is a fast approximation that's inexact for large inputs
It's even faster than the exact division via multiply + right-shift that compilers use.
You can use the high half of a multiply result for divisions by small integral constants. Assume a 32-bit machine (code can be adjusted accordingly):
int32_t div10(int32_t dividend)
{
    int64_t invDivisor = 0x1999999A;
    return (int32_t) ((invDivisor * dividend) >> 32);
}
What's going on here is that we're multiplying by a close approximation of 1/10 * 2^32 and then removing the 2^32 by taking the upper half of the product. This approach can be adapted to different divisors and different bit widths.
This works great for the ia32 architecture, since its IMUL instruction will put the 64-bit product into edx:eax, and the edx value will be the wanted value. Viz (assuming dividend is passed in eax and quotient returned in eax)
div10 proc
mov edx,1999999Ah ; load 1/10 * 2^32
imul eax ; edx:eax = dividend / 10 * 2 ^32
mov eax,edx ; eax = dividend / 10
ret
endp
Even on a machine with a slow multiply instruction, this will be faster than a software or even hardware divide.
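For completeness, here's a sketch of the exact variant the editor's note above refers to (my code, following the compiler approach; the function name is an assumption): using the rounded-up constant ceil(2^35 / 10) = 0xCCCCCCCD and a 35-bit shift makes the result correct for every 32-bit unsigned input:
#include <stdint.h>

/* Exact unsigned divide by 10 for all 32-bit inputs: multiply by
   ceil(2^35 / 10) = 0xCCCCCCCD and keep the high bits. This is the
   magic-constant form modern compilers typically emit for n / 10. */
uint32_t divu10_exact(uint32_t n)
{
    return (uint32_t)(((uint64_t)n * 0xCCCCCCCDu) >> 35);
}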
Though the answers given so far match the actual question, they do not match the title. So here's a solution heavily inspired by Hacker's Delight that really uses only bit shifts.
unsigned divu10(unsigned n) {
    unsigned q, r;
    q = (n >> 1) + (n >> 2);       // q ~= n * 0.75
    q = q + (q >> 4);              // q ~= n * 0.796875
    q = q + (q >> 8);              // refine the approximation of n * 0.8
    q = q + (q >> 16);
    q = q >> 3;                    // q ~= n * 0.1, possibly one too small
    r = n - (((q << 2) + q) << 1); // r = n - 10*q
    return q + (r > 9);            // fix up q when the remainder overflows
}
I think that this is the best solution for architectures that lack a multiply instruction.
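If you want to convince yourself, a small test harness (my sketch, not part of the original answer) can compare divu10 against the plain division operator; widening the loop bound toward 2^32 makes the check essentially exhaustive:
#include <stdio.h>

unsigned divu10(unsigned n); /* the routine above */

int main(void)
{
    for (unsigned n = 0; n < 1000000u; n++) {
        if (divu10(n) != n / 10) {
            printf("mismatch at %u\n", n);
            return 1;
        }
    }
    printf("ok\n");
    return 0;
}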
Of course you can, if you can live with some loss in precision. If you know the value range of your input values, you can come up with a bit shift and a multiplication which is exact.
Here are some examples of how you can divide by 10, 60, ..., as described in this blog about formatting time the fastest way possible:
temp = (ms * 205) >> 11; // 205/2048 is nearly the same as /10
To expand on Alois's answer a bit, we can extend the suggested y = (x * 205) >> 11 to a few more multiples/shifts:
y = (ms * 1) >> 3 // first error 8
y = (ms * 2) >> 4 // 8
y = (ms * 4) >> 5 // 8
y = (ms * 7) >> 6 // 19
y = (ms * 13) >> 7 // 69
y = (ms * 26) >> 8 // 69
y = (ms * 52) >> 9 // 69
y = (ms * 103) >> 10 // 179
y = (ms * 205) >> 11 // 1029
y = (ms * 410) >> 12 // 1029
y = (ms * 820) >> 13 // 1029
y = (ms * 1639) >> 14 // 2739
y = (ms * 3277) >> 15 // 16389
y = (ms * 6554) >> 16 // 16389
y = (ms * 13108) >> 17 // 16389
y = (ms * 26215) >> 18 // 43699
y = (ms * 52429) >> 19 // 262149
y = (ms * 104858) >> 20 // 262149
y = (ms * 209716) >> 21 // 262149
y = (ms * 419431) >> 22 // 699059
y = (ms * 838861) >> 23 // 4194309
y = (ms * 1677722) >> 24 // 4194309
y = (ms * 3355444) >> 25 // 4194309
y = (ms * 6710887) >> 26 // 11184819
y = (ms * 13421773) >> 27 // 67108869
Each line is a single, independent calculation, and you'll see your first "error"/incorrect result at the value shown in the comment. You're generally better off taking the smallest shift for a given error value, as this minimises the extra bits needed to store the intermediate value in the calculation; e.g. (x * 13) >> 7 is "better" than (x * 52) >> 9 as it needs two fewer bits of overhead, while both start to give wrong answers above 68.
If you want to calculate more of these, the following (Python) code can be used:
def mul_from_shift(shift):
    mid = 2**shift + 5.
    return int(round(mid / 10.))
And I did the obvious thing to calculate where this approximation starts to go wrong:
def first_err(mul, shift):
    i = 1
    while True:
        y = (i * mul) >> shift
        if y != i // 10:
            return i
        i += 1
(Note that // is used for "integer" division, i.e. it truncates/rounds towards zero.)
The reason for the "3/1" pattern in the errors (i.e. 8 repeats 3 times, followed by 19) seems to be the change of base: log2(10) is ~3.32. Plotting the errors shows the relative error, given by mul_from_shift(shift) / (1 << shift) - 0.1, shrinking as the shift grows. [Plot of relative error against shift omitted.]
Considering Kuba Ober’s response, there is another one in the same vein.
It uses iterative approximation of the result, but I wouldn't expect any surprising performance.
Let's say we have to find x where x = v / 10.
We'll use the inverse operation v = x * 10, because it has the nice property that when x = a + b, then x * 10 = a * 10 + b * 10.
Let's use x as the variable holding the best approximation of the result so far. When the search ends, x will hold the result. We'll set each bit b of x from the most significant to the least significant, one by one, and compare (x + b) * 10 with v. If it's smaller than or equal to v, then the bit b is set in x. To test the next bit, we simply shift b one position to the right (divide by two).
We can avoid the multiplication by 10 by holding x * 10 and b * 10 in other variables.
This yields the following algorithm to divide v by 10.
uint16_t x = 0, x10 = 0, b = 0x1000, b10 = 0xA000;
while (b != 0) {
    uint16_t t = x10 + b10;
    if (t <= v) {
        x10 = t;
        x |= b;
    }
    b10 >>= 1;
    b >>= 1;
}
// x = v / 10
Edit: to get the algorithm of Kuba Ober, which avoids the need for the variable x10, we can subtract b10 from v instead. In this case x10 isn't needed anymore. The algorithm becomes
uint16_t x = 0, b = 0x1000, b10 = 0xA000;
while (b != 0) {
    if (b10 <= v) {
        v -= b10;
        x |= b;
    }
    b10 >>= 1;
    b >>= 1;
}
// x = v / 10
The loop may be unrolled, and the different values of b and b10 may be precomputed as constants.
On architectures that can only shift one place at a time, a series of explicit comparisons against decreasing powers of two multiplied by 10 might work better than the solution from Hacker's Delight. Assuming a 16-bit dividend:
uint16_t div10(uint16_t dividend) {
    uint16_t quotient = 0;
#define div10_step(n) \
    do { if (dividend >= (n*10)) { quotient += n; dividend -= n*10; } } while (0)
    div10_step(0x1000);
    div10_step(0x0800);
    div10_step(0x0400);
    div10_step(0x0200);
    div10_step(0x0100);
    div10_step(0x0080);
    div10_step(0x0040);
    div10_step(0x0020);
    div10_step(0x0010);
    div10_step(0x0008);
    div10_step(0x0004);
    div10_step(0x0002);
    div10_step(0x0001);
#undef div10_step
    if (dividend >= 5) ++quotient; // round the result (optional)
    return quotient;
}
Well, division is repeated subtraction, so yes: shift right by 1 (divide by 2), then subtract 5 from the result, counting the number of times you do the subtraction until the value is less than 5. The result is the number of subtractions you did. Oh, and plain division is probably going to be faster.
A hybrid strategy of shifting right and then dividing by 5 using the normal division might get you a performance improvement if the logic in the divider doesn't already do this for you.
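Here's a minimal sketch of that hybrid (my illustration, not the answerer's code; the function name is an assumption). For unsigned n, floor(n/10) equals floor((n >> 1) / 5), so a single shift halves the operand before the divider sees it:
#include <stdint.h>

/* Hybrid divide by 10: halve with a shift, then divide by 5.
   floor(n / 10) == floor((n >> 1) / 5) holds for all unsigned n. */
uint32_t div10_hybrid(uint32_t n)
{
    return (n >> 1) / 5;
}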
I've designed a new method in AVR assembly, with lsr/ror and sub/sbc only. It divides by 8, then subtracts the number divided by 64 and 128, then subtracts the 1,024th and the 2,048th parts, and so on. It works very reliably (including exact rounding) and quickly (370 microseconds at 1 MHz).
The source code is here for 16-bit-numbers:
http://www.avr-asm-tutorial.net/avr_en/beginner/DIV10/div10_16rd.asm
The page that comments this source code is here:
http://www.avr-asm-tutorial.net/avr_en/beginner/DIV10/DIV10.html
I hope that it helps, even though the question is ten years old.
brgs, gsc
The code from elemakil's comments can be found here: https://doc.lagout.org/security/Hackers%20Delight.pdf, page 233, "Unsigned divide by 10 [and 11]."
