Initialization of arcs depending on sets/subsets in directed graphs in CPLEX

I am dealing with a directed weighted graph and have a question about how to initialize a set that is defined as follows:
Assume that the graph has the following nodes, which are subdivided into three different subsets.
//Subsets of Nodes
{int} Subset1= {44,99};
{int} Subset2={123456,123457,123458};
{int} Subset3={1,2,3,4,5,6,7,8,9,10,11,12,13,14,15};
{int} Nodes=Subset1 union Subset2 union Subset3;
Now there is a set H_j of arcs for each j in Nodes, where H_j contains all arcs outgoing from node j.
The arcs are stored in an Excel file with the following structure (screenshot of the arc table).
For node 44 in Nodes (Subset 1), there are the arcs <44,123456>, <44,123457>, <44,123458>. For node 66 in Nodes (Subset 2), there is no arc. Can somebody help me implement this?
It is important that the code reads the input from the Excel file, because in my real case there will be too much data to enter manually... :(
Maybe there is a really easy solution for that. I would be very thankful!
Thank you so much in advance!
This addition refers to the answer from Alex Fleischer:
Your code also seems to work in the overall context.
I am trying to implement the following constraints within a maximization problem (the terms (j,99) and (j,i) in the lower summation bounds represent arcs):
(screenshot of the mathematical constraint formulation)
I tried to implement it like this:
{int} TEST= {99};
subject to {
sum(m in M, j in a[m])x[<44,j>]==3;
sum(j in destPerOrig[99], t in TEST)x[<j,t>]==3;
forall(i in Nodes_wo_Subset1)
sum(j in destPerOrig[i],i in destPerOrig[i])x[<j,i>]==1;
}
M is a set of trains and a[m] gives a specific cost value for each individual train. CPLEX shows 33 error messages.
The most frequent ones say that it cannot extract x[<j,i>], sum(i in destPerOrig[i]), and sum(j in destPerOrig[i]), and that x and destPerOrig are outside of their valid scope.
Most probably the problem is that I am implementing the constraints in the wrong way. Again, it is a directed graph.
Referring to the mathematical formulation in the screenshot: could the format of destPerOrig[i] be a problem?
At the moment destPerOrig[44] gives {2 3 4}. But shouldn't it give
{<44 2> <44 3> <44 4>} to work with the mathematical formulation?
I hope that this is enough information for you to help me :(
I would be very thankful!

all arcs outgoing from Node j.
How to do this depends on how you store the adjacencies of the graph.
Perhaps you store a vector of arcs:
LOOP over arcs
    IF arc source == node J
        ADD arc to output
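
As an illustration of that loop (a minimal, language-agnostic sketch written here in Java rather than OPL, with a hypothetical Arc record; it only shows the grouping idea, not the actual model):

// Hypothetical sketch: group the arcs by their source node so that the
// outgoing arcs of node j can be looked up as outgoing.get(j).
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

record Arc(int source, int dest) {}

class OutgoingArcs {
    static Map<Integer, List<Arc>> groupBySource(List<Arc> arcs) {
        Map<Integer, List<Arc>> outgoing = new HashMap<>();
        for (Arc a : arcs) {
            // ADD the arc to the output list of its source node
            outgoing.computeIfAbsent(a.source(), k -> new ArrayList<>()).add(a);
        }
        return outgoing;
    }

    public static void main(String[] args) {
        List<Arc> arcs = List.of(new Arc(44, 123456), new Arc(44, 123457), new Arc(44, 123458));
        System.out.println(groupBySource(arcs).get(44)); // the three arcs out of node 44
    }
}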

.mod
tuple arcE
{
  string o;
  string d;
}
{arcE} arcsInExcel=...;
{int} orig={ intValue(a.o) | a in arcsInExcel};
{int} destPerOrig[o in orig]={intValue(a.d) | a in arcsInExcel : intValue(a.o)==o && a.d!="" };
execute
{
  writeln(orig);
  writeln("==>");
  writeln(destPerOrig);
}
/*
which gives
{44 66}
==>
[{2 3 4} {}]
*/
https://github.com/AlexFleischerParis/oplexcel/blob/main/readarcs.mod
.dat
SheetConnection s("readarcs.xlsx");
arcsInExcel from SheetRead(s,"A2:B5");
https://github.com/AlexFleischerParis/oplexcel/blob/main/readarcs.dat

Related

Tinkerpop/Gremlin: How to join multiple traversals and use their value in a main traversal?

I want to use the results of two traversals a and b to calculate the value of a property. I don't know how to join them into a combinedTraversal with both traversals starting from the origin element/vertex.
Graph graph = TinkerFactory.createModern();
GraphTraversal a = __.values("age").count().as("count")
    .constant(10).as("allCount")
    .math("count / allCount");
    // .as("a");
GraphTraversal b = __.values("age").count().as("count")
    .constant(10).as("allCount")
    .math("count / allCount");
    // .as("b");
GraphTraversal combinedTraversal = __
.local(a).as("a")
.local(b).as("b")
.math("a * b");
graph.traversal().V()
.has("age")
.property(Cardinality.single, "value", combinedTraversal)
.iterate();
In the example, traversals a and b are assumed to start at a given vertex and calculate their value. When I use local(), the position of the traversal changes and I don't know how to step back to the origin.
Using sideEffect(), on the other hand, does not refer to the current element; instead it calculates the values for all elements into a map.
Is there any way to build such a combinedTraversal? (The "complex" calculation traversals are chosen on purpose; of course there are simpler ways to solve this example.)
Thank you.
I found a solution: the project() step, see https://tinkerpop.apache.org/docs/current/reference/#project-step
GraphTraversal combinedTraversal = __.project("a", "b").by(a).by(b).math("a * b");
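For context, a minimal sketch of how this might fit into the question's snippet (the imports and surrounding setup are assumptions; the "modern" toy graph and the "value" property key are taken from the question):

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversal;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.__;
import org.apache.tinkerpop.gremlin.structure.Graph;
import org.apache.tinkerpop.gremlin.structure.VertexProperty.Cardinality;
import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerFactory;

public class CombinedTraversalExample {
    public static void main(String[] args) {
        Graph graph = TinkerFactory.createModern();
        GraphTraversalSource g = graph.traversal();

        GraphTraversal a = __.values("age").count().as("count")
                .constant(10).as("allCount")
                .math("count / allCount");
        GraphTraversal b = __.values("age").count().as("count")
                .constant(10).as("allCount")
                .math("count / allCount");

        // project() evaluates both sub-traversals from the current vertex and
        // exposes their results under the keys "a" and "b" for math().
        GraphTraversal combinedTraversal = __.project("a", "b").by(a).by(b).math("a * b");

        g.V().has("age")
                .property(Cardinality.single, "value", combinedTraversal)
                .iterate();
    }
}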

Doubts with transitive closure in Alloy

I am doing a model in Alloy to represent a subset of Java language. Below we have some elements of this model:
sig Method {
id : one MethodId,
param: lone Type,
return: one Type,
acc: lone Accessibility,
b: one Block
}
abstract sig Expression {}
abstract sig StatementExpression extends Expression {}
sig MethodInvocation extends StatementExpression{
pExp: lone PrimaryExpression,
id_methodInvoked: one Method,
param: lone Type
}
sig Block {
statements: set StatementExpression
}
pred noRecursiveMethodInvocationCall [] {
all bl:Block | all mi, mi2: MethodInvocation | all m:Method |
bl in m.b && mi in bl.statements
&& mi2 = mi.*(id_methodInvoked.b.statements) =>
m != mi2.id_methodInvoked
}
The problem is that the predicate noRecursiveMethodInvocationCall apparently is not working, since the generated instances contain methods being invoked in a recursive way (even indirectly, e.g. m1 invokes m2, which invokes m3, which in turn invokes m1), and I want to avoid recursion.
The instances are generated through another model, see below:
open javametamodel_withfield_final
one sig BRight, CRight, BLeft, CLeft, Test extends Class{
}
one sig F extends Field{}
fact{
BRight in CRight.extend
BLeft in CLeft.extend
F in BRight.fields
F in CLeft.fields
all c:{Class-BRight-CLeft} | F !in c.fields
}
pred law6RightToLeft[]{
proviso[]
}
pred proviso [] {
some BRight.extend
some BLeft.extend
#(extend.BRight) > 2
#(extend.BLeft) > 2
no cfi:FieldAccess | ( cfi.pExp.id_cf in extend.BRight || cfi.pExp.id_cf in BRight || cfi.pExp.id_cf in extend.BLeft || cfi.pExp.id_cf in BLeft) && cfi.id_fieldInvoked=F
some Method
}
run law6RightToLeft for 9 but 15 Id, 15 Type, 15 Class
Please, does anyone have any clue what the problem is?
Thanks in advance for your attention.
Follow-on query
Still regarding this question, the predicate suggested solves the recursion problem:
pred noRecursiveMethodInvocationCall [] {
no m:Method
| m in m.^(b.statements.id_methodInvoked)
}
However, it causes an inconsistency with another predicate (see below), and no instances are generated when both predicates are present.
pred atLeastOneMethodInvocNonVoidMethods [] {
all m:Method
| some mi:MethodInvocation
| mi in (m.b).statements
}
Any idea why instances can not be generated with both predicates?
You might look closely at the condition
mi2 = mi.*(id_methodInvoked.b.statements)
which seems to check whether the set of all statements reachable recursively from mi is equal to the single statement mi2. Now, unless I've confused myself about multiplicities again, mi2 is a scalar, so in any case where the method in question has a block with more than one method-invocation statement, this condition won't fire and the predicate will be vacuously true.
Changing = to in may be the simplest fix, but in that case I expect you won't get any non-empty instances, because you're using * and getting reflexive transitive closure, and not ^ (positive transitive closure).
It looks at first glance as if the condition might be simplified to something like
pred noRecursion {
no m : Method
| m in m.^(b.statements.id_methodInvoked)
}
but perhaps I'm missing something.
Postscript: a later addition to the question asks why no instances are generated when the prohibition on recursion is combined with a requirement that every method contain at least one method invocation:
pred atLeastOneMethodInvocNonVoidMethods [] {
all m:Method
| some mi:MethodInvocation
| mi in (m.b).statements
}
Perhaps the simplest way to see what's wrong is to imagine constructing a call graph. The nodes of the graph are methods, and the arcs of the graph are method invocations. There is an arc from node M1 to node M2 if the body of method M1 contains an invocation of method M2.
If we interpret the two predicates in terms of the graph, the predicate noRecursiveMethodInvocationCall means that the graph is acyclic. The predicate atLeastOneMethodInvocNonVoidMethods means that every node in the graph has at least one outgoing arc.
Try it with a single method M. This method must contain a method invocation, and this method invocation must invoke M (since there is no other method in the universe). So we have an arc from M to M, and the graph has a cycle. But the graph is not allowed to have a cycle. So we cannot create a one-method universe that satisfies both predicates.
Try again with two methods, M1 and M2. Let M1 call M2. Now, what does M2 call? It can't call M1 without making a cycle. It can't call M2 without making a cycle. Again we fail.
I think you'll find the underlying fact is a basic result of graph theory: in a finite directed graph in which every node has at least one outgoing arc, there must be a cycle (follow outgoing arcs from any node; after at most |V| steps you must revisit some node).
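
To see that pigeonhole argument concretely, here is a minimal sketch (a hypothetical call graph in which each method records one method it invokes; the names are illustrative only):

// Minimal illustration of the pigeonhole argument: if every method invokes
// at least one method, following invocations from any start must eventually
// revisit a method, i.e. the call graph contains a cycle.
import java.util.HashSet;
import java.util.Set;

public class CallGraphCycle {
    public static void main(String[] args) {
        // Hypothetical call graph on methods 0..2: next[i] is one method that i invokes.
        int[] next = {1, 2, 0};        // every node has an outgoing arc

        Set<Integer> seen = new HashSet<>();
        int current = 0;
        while (seen.add(current)) {    // add() returns false once we revisit a method
            current = next[current];
        }
        // With |V| = 3 methods, some method is revisited after at most 3 steps.
        System.out.println("Revisited method " + current + " -> the call graph has a cycle.");
    }
}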

Sum of ranks in a binary tree - is there a better way

Maybe this question does not belong here, as it is not a programming question per se, and I do apologize if that is the case.
I just had an exam in abstract data structures, and there was this question:
The rank of a tree node is defined like this: if you are the root of the tree, your rank is 0. Otherwise, your rank is the rank of your parent + 1.
Design an algorithm that calculates the sum of the ranks of all nodes in a binary tree. What is the runtime of your algorithm?
I believe my answer solves this question; my pseudo-code is as follows:
int sum_of_tree_ranks(tree node x)
{
if x is a leaf return rank(x)
else, return sum_of_tree_ranks(x->left_child)+sum_of_tree_ranks(x->right_child)+rank(x)
}
where the function rank is
int rank(tree node x)
{
if x->parent=null return 0
else return 1+rank(x->parent)
}
It's very simple: the sum of ranks of a tree is the sum of ranks of the left subtree + the sum of ranks of the right subtree + the rank of the root.
The runtime of this algorithm, I believe, is O(n^2). I believe this is the case because we were not told that the binary tree is balanced: it could be that there are n nodes in the tree but also n different "levels", i.e. the tree looks like a linked list rather than a tree. So to calculate the rank of a leaf we potentially go n steps up, the parent of that leaf is n-1 steps up, and so on, so that's n+(n-1)+(n-2)+...+1+0 = O(n^2).
My question is: is this correct? Does my algorithm solve the problem? Is my analysis of the runtime correct? And most importantly, is there a better solution that does not run in n^2?
Your algorithm works and your analysis is correct. The problem can be solved in O(n) time (you still need to take care of null children / leaves yourself):
int rank(tree node x, int r)
{
    if x is a leaf return r
    else
        return rank(x->left_child, r + 1) + rank(x->right_child, r + 1) + r
}
rank(tree->root, 0)
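A runnable version of that idea, as a minimal Java sketch with a hypothetical Node class (the rank is passed down rather than recomputed with an upward walk for every node):

// Sketch of the O(n) approach: pass the current rank down the recursion.
class Node {
    Node left, right;
}

class RankSum {
    static int sumOfRanks(Node x, int rank) {
        if (x == null) return 0;                  // a null child contributes nothing
        return rank
             + sumOfRanks(x.left, rank + 1)
             + sumOfRanks(x.right, rank + 1);
    }

    public static void main(String[] args) {
        // root with two children: ranks 0, 1, 1 -> sum 2
        Node root = new Node();
        root.left = new Node();
        root.right = new Node();
        System.out.println(sumOfRanks(root, 0));  // prints 2
    }
}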
You're right, but there is an O(n) solution provided you can use a more "complex" data structure.
Let each node hold its rank and update the ranks whenever you add/remove a node; that way you can use the O(1) statement:
return 1 + node->left.rank + node->right.rank;
and do this for each node of the tree to achieve O(n).
A rule of thumb for reducing time complexity: if you can enrich the data structure and add features that adapt it to your problem, you can most of the time reduce the time complexity to O(n).
It can be solved in O(n) time, where n is the number of nodes in the binary tree.
It is nothing but the sum of the ranks (depths) of all nodes, where the rank of the root node is zero.
Algorithm:
Input: binary tree with left and right children
Call: sum = PrintSumOfRank(root, 0); output sum
PrintSumOfRank(root, rank):
    if (root == NULL) return 0;
    return PrintSumOfRank(root->lchild, rank + 1)
         + PrintSumOfRank(root->rchild, rank + 1)
         + rank;
Edit: This can also be solved using a queue, i.e. a level-order traversal of the tree.
Algorithm using a queue (level-order traversal):
int sum = 0;
int currentHeight = 0;
Node *T = root;
Node *t1;
if (T != NULL)
    enqueue(T);
while (Q is not empty) begin
    currentHeight = currentHeight + 1;
    for each node that was in Q at the start of this level do
        t1 = dequeue();
        if (t1->lchild != NULL) begin
            enqueue(t1->lchild); sum = sum + currentHeight;
        end if
        if (t1->rchild != NULL) begin
            enqueue(t1->rchild); sum = sum + currentHeight;
        end if
    end for
end while
print sum;
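A runnable Java sketch of that queue-based level-order variant (hypothetical TreeNode class; the level size is snapshotted so each level is processed exactly once):

// Level-order traversal: children of nodes at depth d-1 contribute rank d each.
import java.util.ArrayDeque;
import java.util.Queue;

class TreeNode {
    TreeNode left, right;
}

class LevelOrderRankSum {
    static int sumOfRanks(TreeNode root) {
        if (root == null) return 0;
        Queue<TreeNode> q = new ArrayDeque<>();
        q.add(root);
        int sum = 0;
        int depth = 0;                        // rank of the nodes about to be enqueued
        while (!q.isEmpty()) {
            depth++;
            int levelSize = q.size();         // process exactly one level
            for (int i = 0; i < levelSize; i++) {
                TreeNode t = q.remove();
                if (t.left != null)  { q.add(t.left);  sum += depth; }
                if (t.right != null) { q.add(t.right); sum += depth; }
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        TreeNode root = new TreeNode();
        root.left = new TreeNode();
        root.right = new TreeNode();
        root.left.left = new TreeNode();
        System.out.println(sumOfRanks(root)); // ranks 0,1,1,2 -> prints 4
    }
}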

Parallel edge detection

I am working on a problem (from Algorithms by Sedgewick, section 4.1, problem 32) to help my understanding, and I have no idea how to proceed.
"Parallel edge detection. Devise a linear-time algorithm to count the parallel edges in a (multi-)graph.
Hint: maintain a boolean array of the neighbors of a vertex, and reuse this array by only reinitializing the entries as needed."
Two edges are considered to be parallel if they connect the same pair of vertices.
Any ideas what to do?
I think we can use BFS for this.
The main idea is to be able to tell whether two or more edges exist between the same pair of nodes; for this, we can use a set and check whether a node in the current node's adjacency list has already been seen.
This uses extra space proportional to the graph size and runs in time linear in the number of vertices and edges.
boolean bfs(int start){
    Queue<Integer> q = new LinkedList<Integer>();  // get a queue (Queue is an interface)
    boolean[] mark = new boolean[num_of_vertices];
    mark[start] = true;                            // put the first node into the queue
    q.add(start);
    while(!q.isEmpty()){
        int current = q.remove();
        HashSet<Integer> set = new HashSet<Integer>(); // stores the nodes of the current adjacency list
        ArrayList<Integer> adjacentlist = graph.get(current); // get the adjacency list
        for(int x : adjacentlist){
            if(set.contains(x)){   // we already saw an edge current-->x
                return true;       // so we have found a parallel edge
            }
            else set.add(x);       // otherwise this is a new edge
            if(!mark[x]){          // normal BFS routine
                mark[x] = true;
                q.add(x);
            }
        }
    }
    return false;                  // no parallel edge found
}
// assumes the graph has an ArrayList<ArrayList<Integer>> representation
// and is undirected
Assuming that the vertices in your graph are integers 0 .. |V|-1.
If your graph is directed, edges in the graph are denoted (i, j).
This allows you to produce a unique mapping of any edge to an integer (a hash function) which can be found in O(1).
h(i, j) = i * |V| + j
You can insert/lookup the tuple (i, j) in a hash table in amortised O(1) time. For |E| edges in the adjacency list, this means the total running time will be O(|E|) or linear in the number of edges in the adjacency list.
A python implementation of this might look something like this:
def identify_parallel_edges(adj_list):
# O(n) list of edges to counts
# The Python implementation of tuple hashing implements a more sophisticated
# version of the approach described above, but is still O(1)
edges = {}
for edge in adj_list:
if edge not in edges:
edges[edge] = 0
edges[edge] += 1
# O(n) filter non-parallel edges
res = []
for edge, count in edges.items():
if count > 1:
res.append(edge)
return res
edges = [(1,0),(2,1),(1,0),(3,4)]
print(identify_parallel_edges(edges))

recursively counting nodes in k-ary tree

This isn't exactly homework but I need to understand it for a class. Language doesn't really matter, psuedocode would be fine.
Write a recursive member function of the “static K-ary” tree class that counts the number of nodes in the tree.
I'm thinking the signature would look like this:
int countNodes(Node<AnyType> t, ctr, k){}
I don't know how to look through k children. In a binary tree, I would check for left and right. Could anyone give me an example of this?
You can think of the recursion like this:
The total number of nodes in the subtree rooted at a node is 1 plus the sum of the node counts of its children's subtrees.
Then the total number of nodes can be found as follows:
def count(node):
numOfNodes = 1
for child in node.children:
numOfNodes += count(child)
return numOfNodes
Pseudocode:
count(r)
    result = 1
    for each child node k of r
        result = result + count(k)
    return result
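
A sketch closer to the signature mentioned in the question, assuming a hypothetical Node class that stores its children in a list (the ctr and k parameters are not needed, since the recursion visits each child directly):

// Recursive node count for a k-ary tree.
import java.util.ArrayList;
import java.util.List;

class Node<AnyType> {
    AnyType element;
    List<Node<AnyType>> children = new ArrayList<>();
}

class KaryTreeCount {
    static <AnyType> int countNodes(Node<AnyType> t) {
        if (t == null) return 0;         // an empty subtree contributes nothing
        int count = 1;                   // count this node
        for (Node<AnyType> child : t.children) {
            count += countNodes(child);  // add the size of each child's subtree
        }
        return count;
    }
}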
