Running SCIP for various n - plot

I have a linear program where I can put in a number n and it gives me the output of the LP for this specific n. I now want to do this for various n = 10 ... 1000. Is there a technique where I don't have to do it manually for each n, and that instead runs this automatically and writes the solution of the LP for each n to a file? I'd like to plot the graph later on.
This is my linear program:
# Specify the number n for the linear program.
param n := 5000;
# The index set of the probabilities.
set N := { 1 .. n };
# We specify the variables for the probabilities p_1, ..., p_n.
var p[<i> in N] real >= 0;
# These are the values of the vector c. It specifies a constant for each p_i.
param c[<i> in N] := i/n;
# We define the entries a_{ij} of the matrix A.
defnumb a(i,j) :=
    if i < j then 0
    else if i == j then i
    else 1 end end;
# The objective function.
maximize prob: sum <i> in N : c[i] * p[i];
# The condition which needs to be fulfilled.
subto condition:
    forall <i> in N do
        sum <j> in N : a(i,j) * p[j] <= 1;

You can pass arguments on the command line with:
-D n=[number you want] -o [output file]
Then you can iterate over several n with a shell script, e.g.,
for i in {10..1000}
do
    zimpl -D n=$i -o "output_$i" yourfile.zpl
done
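To get the data for the plot, you can then solve each generated file with SCIP, write each solution to a file, and collect the objective values with a small script. Here is a sketch in Python; the file names and the "objective value:" line are assumptions based on SCIP's solution-file format, e.g. after running scip -c "read output_10.lp optimize write solution output_10.sol quit" for each n:
import matplotlib.pyplot as plt

ns, objs = [], []
for n in range(10, 1001):
    try:
        with open(f"output_{n}.sol") as f:
            for line in f:
                # SCIP solution files contain a line "objective value: <value>"
                if line.startswith("objective value"):
                    ns.append(n)
                    objs.append(float(line.split(":")[1]))
    except FileNotFoundError:
        continue  # skip values of n that were not solved

plt.plot(ns, objs)
plt.xlabel("n")
plt.ylabel("LP objective value")
plt.show()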

Related

Could not find the optimal solution after adding constraints

My code is as follows:
from gekko import GEKKO
import numpy as np

gekko = GEKKO(remote=True)
# create variables; each variable is a vector, and each element
# of the vector is binary
s = []
for i in range(N):
    s.append(gekko.Array(gekko.Var, s_len[i], value=0, lb=0, ub=1, integer=True))
# some constants used in the objective/constraint functions
c, d, r, m, L = create_c_d_r_m_L()  # they are all numpy ndarrays
# define the objective function
def objective():
    obj = 0
    for i in range(N):
        obj += np.dot(s[i], c[i]) + np.dot(s[i], d[i])
    for idx, (i, j) in enumerate(E):
        obj += np.dot(np.dot(s[i], r[idx].reshape(s_len[i], s_len[j])),
                      s[j])  # s[i] * r[i, j] * s[j]
    return obj
# add constraints
# (a) each vector can only have, and must have, one 1
for i in range(N):
    gekko.Equation(gekko.sum(s[i]) == 1)
# (b)
for t in range(N):
    peak_mem = gekko.sum([np.dot(s[i], m[i]) for i in L[t]])
    gekko.Equation(peak_mem < DEVICE_MEM)
    # DEVICE_MEM is a predefined big int
# solve
gekko.Obj(objective())
gekko.solve(disp=True)
I found that when removing constraint (b), the solver outputs the optimal solution for s. However, if we add (b) and set DEVICE_MEM to a very large number (which should not affect the solution), s is no longer optimal. I'm wondering if I am doing something wrong here, because I tried both APOPT (solvertype=1) and IPOPT (solvertype=3) and they give the same non-optimal results.
To give more context: this is an optimization over a graph. N is the number of nodes in the graph, and E is the set containing all edges. c, d, m are three types of node cost; r is the cost of edges. Each node has multiple strategies (represented by the vector s[i]), and we need to select the best strategy for each node so that the overall cost is minimal.
Detailed constants:
# s_len records the length of each vector
# (the number of strategies for each node;
# here we assume the lengths are all 10)
s_len = np.ones(N, dtype=int) * 10
# c, d, m are the costs of each node;
# let's assume the c/d/m cost for node i is just i
c, d, m = [], [], []
for i in range(N):
    c.append([i] * int(s_len[i]))
    d.append([i] * int(s_len[i]))
    m.append([i] * int(s_len[i]))
# r is the edge cost; let's assume the cost for
# each edge is just i * j
r = []
for (i, j) in E:  # E records all edges
    # stored as an ndarray so it can be reshaped in the objective
    cur_r = np.array([i * j] * int(s_len[i] * s_len[j]))
    r.append(cur_r)
# L contains node ids; we just randomly generate 10 integers here
from random import randrange
L = []
for i in range(N):
    cur_L = [randrange(N) for _ in range(10)]
    L.append(cur_L)
I've been stuck on this for a while and any comments/answers are highly appreciated! Thanks!
Try reframing the inequality constraint:
for t in range(N):
    peak_mem = gekko.sum([np.dot(s[i], m[i]) for i in L[t]])
    gekko.Equation(peak_mem < DEVICE_MEM)
as a variable with an upper bound:
peak_mem = gekko.Array(gekko.Var, N, ub=DEVICE_MEM)
for t in range(N):
    gekko.Equation(peak_mem[t] ==
                   gekko.sum([np.dot(s[i], m[i]) for i in L[t]]))
The N inequality constraints peak_mem < DEVICE_MEM are converted to equality constraints with slack variables, slk = DEVICE_MEM - peak_mem, together with the simple inequality constraint slk >= 0 on the slack. If the inequality constraint is far from the bound, the slack variable can become very large. Formulating the expression as a variable with an upper bound may help.
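In other words, for each t the solver internally works with $$\mathrm{peak\_mem}_t + slk_t = \mathrm{DEVICE\_MEM}, \qquad slk_t \ge 0,$$ leaving only a simple bound on the slack variable as an inequality.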
I used the information in the question to pose a minimal problem that reproduces the error, along with a potential solution. If you need more specific suggestions, please modify the code to be a complete, minimal example that reproduces the error; this helps with verifying the solution.

Mathematical flop count of a column-based back substitution function (Julia)

I am new to linear algebra and learning about triangular systems implemented in the Julia language. I have a col_bs() function, shown below, that I need to do a mathematical flop count of. It doesn't have to be super technical; this is for learning purposes. I tried to break the function down into its inner i loop and outer j loop. In between is a count of each flop, which I assume is useless since the constants are usually dropped anyway.
I also know the answer should be $N^2$, since this is a reversed version of the forward substitution algorithm, which is $N^2$ flops. I tried my best to derive this $N^2$ count, but I ended up with a weird $Nj$ count instead. I will try to provide all the work I have done. Thank you to anyone who helps!
function col_bs(U, b)
    n = length(b)
    x = copy(b)
    for j = n:-1:2
        if U[j, j] == 0
            error("Error: Matrix U is singular.")
        end
        x[j] = x[j] / U[j, j]
        for i = 1:j-1
            x[i] = x[i] - x[j] * U[i, j]
        end
    end
    x[1] = x[1] / U[1, 1]
    return x
end
1: To start, 2 flops for the subtraction and multiplication in x[i] - x[j] * U[i, j].
The $i$ loop does: $$ \sum_{i=1}^{j-1} 2$$
2: 1 flop for the division $$ x[j] = x[j] / U[j,j] $$
3: So in total, the body of the $j$ loop does: $$ 1 + \sum_{i=1}^{j-1} 2$$
4: The $j$ loop itself does: $$\sum_{j=2}^n \left( 1 + \sum_{i=1}^{j-1} 2\right) $$
5: Then one final flop for $$ x[1] = x[1]/U[1,1].$$
6: Finally we have
$$ 1 + \sum_{j=2}^n \left( 1 + \sum_{i=1}^{j-1} 2\right). $$
Which we can now break down.
If we distribute and simplify
$$ 1 + \left(\sum_{j=2}^n 1 + \sum_{j=2}^n \sum_{i=1}^{j-1} 2\right). $$
We can look at only the significant variables and ignore constants,
$$
1 + (n + n(j-1)) \\
n + nj - n \\
nj
$$
Which then means that, if we ignore constants, the flop count I get for this formula is $nj$ (which may be a hint to what's wrong with my derivation, since it should be $n^2$, just like the rest of our triangular systems, I believe).
Reduce your code to this form:
for j = n:-1:2
    ...
    for i = 1:j-1
        ... do k FLOPs
    end
end
The inner loop takes k*(j-1) flops, so the cost of the outer loop is
$$\sum_{j=2}^{n} k(j-1).$$
Since you know that j <= n, you know that this sum is less than k(n-1)^2, which is enough for big O.
In fact, however, you should also be able to figure out that
$$\sum_{j=2}^{n} k(j-1) = k\,\frac{n(n-1)}{2}.$$
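As a sanity check, you can count the flops programmatically, mirroring exactly how col_bs executes (a small sketch in Python; here k = 2 for the inner loop, plus one division per outer iteration and the final one):
def col_bs_flops(n):
    # Count flops the same way the Julia code executes them.
    flops = 0
    for j in range(n, 1, -1):    # outer loop j = n:-1:2
        flops += 1               # the division x[j] = x[j] / U[j,j]
        flops += 2 * (j - 1)     # inner loop: one multiply + one subtract per i
    flops += 1                   # the final division x[1] = x[1] / U[1,1]
    return flops

for n in (3, 10, 100):
    assert col_bs_flops(n) == n * n   # the total is exactly n^2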

sum of an infinite series in octave

I've just started getting acquainted with Octave and am trying to calculate the sum of an infinite series. My math is bad; can anyone help? In the code I took x = 1.
The formula for the series:
sum((-1)^([1:inf]+2)/factorial([1:inf])*1^[1:inf]);
Mathematically, the formula is
exp(-1/x)-1
which can be derived from the Taylor expansion of the exponential function.
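Explicitly, substituting t = -1/x into the exponential series and subtracting the n = 0 term gives $$e^{t} = \sum_{n=0}^{\infty} \frac{t^{n}}{n!} \quad\Longrightarrow\quad \sum_{n=1}^{\infty} \frac{(-1/x)^{n}}{n!} = e^{-1/x} - 1.$$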
Symbolic verification
pkg load symbolic
syms x n
simplify(symsum((-1/x).^n./(factorial(n)),n,1,inf))
which gives
>> simplify(symsum((-1/x).^n./(factorial(n)),n,1,inf))
ans = (sym)

        -1
        ---
         x
  -1 + e
Numeric verification
n = 20;
x = 2;
such that
>> sum((-1/x).^(1:n)./(factorial(1:n)))
ans = -0.39347
>> exp(-1/x)-1
ans = -0.39347

solving matrices using Cramer's rule

So I searched the internet looking for programs implementing Cramer's rule, and there were a few, but apparently those examples were for fixed-size matrices only, like 2x2 or 4x4.
However, I am looking for a way to solve an NxN matrix. So I started by asking the user for the size of the matrix and having them input the values of the matrix, but I don't know how to move on from there.
I guess my next step is to apply Cramer's rule and get the answers, but I just don't know how. This is the step I'm missing. Can anybody help me, please?
First, you need to calculate the determinant of your equation system's matrix - that is, the matrix consisting of the coefficients from the left-hand side of the equations - let it be D.
Then, to calculate the value of a certain variable, take the matrix of your system (from the previous step), replace the coefficients of the corresponding column with the constant terms (from the right-hand side), calculate the determinant of the resulting matrix - let it be C - and divide C by D.
A bit more about the replacement in the previous step: say your matrix is 3x3, so you have a system of equations where every a coefficient is multiplied by x, every b by y, and every c by z, and the d's are the constant terms. So, to calculate y, you replace the coefficients that are multiplied by y - the b's in this case - with the d's.
You perform the second step for every variable, and your system gets solved.
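Here is a minimal sketch of that procedure in Python, using numpy.linalg.det for the determinants (the function name cramer_solve is just illustrative):
import numpy as np

def cramer_solve(A, d):
    # Solve A x = d by Cramer's rule; A must be square and non-singular.
    A = np.asarray(A, dtype=float)
    d = np.asarray(d, dtype=float)
    D = np.linalg.det(A)  # determinant of the coefficient matrix
    if np.isclose(D, 0.0):
        raise ValueError("singular system: Cramer's rule does not apply")
    x = np.empty(len(d))
    for k in range(len(d)):
        Ak = A.copy()
        Ak[:, k] = d  # replace column k with the constant terms
        x[k] = np.linalg.det(Ak) / D
    return x

print(cramer_solve([[2, 1], [1, 3]], [3, 5]))  # -> [0.8 1.4]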
You can find an example at https://rosettacode.org/wiki/Cramer%27s_rule#C
Although that specific example deals with a 4x4 matrix, the code is written to accommodate any size of square matrix.
What you need is to calculate determinants: Cramer's rule just comes down to determinants of NxN matrices.
If N is not big, you can use Cramer's rule (see the code below), which is quite straightforward. However, this method is not efficient; if your N is big, you need to resort to other methods, such as LU decomposition.
This assumes your data are doubles and the result can be held in a double.
#include <stdlib.h>
#include <stdio.h>

double det(double *matrix, int n) {
    if (n <= 1) return matrix[0];
    double *subMatrix = (double *)malloc((n - 1) * (n - 1) * sizeof(double));
    double result = 0.0;
    for (int i = 0; i < n; ++i) {
        /* build the minor of element (0, i): drop row 0 and column i */
        for (int j = 0; j < n - 1; ++j) {
            for (int k = 0; k < i; ++k)
                subMatrix[j * (n - 1) + k] = matrix[(j + 1) * n + k];
            for (int k = i + 1; k < n; ++k)
                subMatrix[j * (n - 1) + (k - 1)] = matrix[(j + 1) * n + k];
        }
        /* cofactor expansion along the first row, with alternating signs */
        if (i % 2 == 0)
            result += matrix[0 * n + i] * det(subMatrix, n - 1);
        else
            result -= matrix[0 * n + i] * det(subMatrix, n - 1);
    }
    free(subMatrix);
    return result;
}

int main() {
    double matrix[] = { 1, 2, 3, 4, 5, 6, 7, 8, 2, 6, 4, 8, 3, 1, 1, 2 };
    printf("%lf\n", det(matrix, 4));
    return 0;
}
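For comparison, the LU-decomposition route mentioned above takes only a couple of lines in Python with numpy; np.linalg.solve factorizes the matrix (LU with partial pivoting via LAPACK) and solves the system directly. A sketch:
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # coefficient matrix
d = np.array([3.0, 5.0])     # constant terms
print(np.linalg.solve(A, d)) # -> [0.8 1.4]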

Number of solutions for equation with n variables with constraints

I wanted to calculate the number of solutions of the equation, but I am unable to get any lead. The equation is
$$x_1 + x_2 + \cdots + x_m = n, \qquad 1 \le x_i \le k \text{ for all } i.$$
All I could get was a partial attempt, but I don't know how to proceed from there.
I'd try solving this by using dynamic programming.
Here's some pseudocode to get you started:
Procedure num_solutions(n, k, m):
    # Initialize memoization cache:
    if this function has been called for the first time:
        initialize memo_cache with (n+1)*(k+1)*(m+1) elements, all set to -1

    # Return the cached solution if available
    if memo_cache[n][k][m] is not -1:
        return memo_cache[n][k][m]

    # Base case: only one element left
    if m is equal to 1:
        # A solution only exists if 1 <= n <= k
        if n >= 1 and n <= k, set memo_cache[n][k][m] to 1 and return 1
        otherwise set memo_cache[n][k][m] to 0 and return 0

    # Degenerate case: no solution possible if n < m or n > k*m
    if n < m or n > k * m:
        set memo_cache[n][k][m] to 0 and return 0

    # Recursive case: fix the value i of one element and count the
    # solutions with m-1 elements summing to n-i
    set sum to 0
    for all i in range 1..k:
        sum = sum + num_solutions(n - i, k, m - 1)
    set memo_cache[n][k][m] to sum and return sum
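A direct translation of that pseudocode into Python might look like this (a sketch; functools.lru_cache plays the role of memo_cache):
from functools import lru_cache

@lru_cache(maxsize=None)
def num_solutions(n, k, m):
    # Count integer solutions of x_1 + ... + x_m = n with 1 <= x_i <= k.
    if m == 1:
        return 1 if 1 <= n <= k else 0   # base case: a single element
    if n < m or n > k * m:
        return 0                         # degenerate case: total unreachable
    # Fix the value i of one element and count solutions for the rest.
    return sum(num_solutions(n - i, k, m - 1) for i in range(1, k + 1))

print(num_solutions(6, 3, 3))  # -> 7 (the permutations of 1+2+3, plus 2+2+2)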
