How to detect cycles in a directed graph with iterative DFS?

My current project features a set of nodes with inputs and outputs. Each node can take its input values and generate some output values. Those outputs can be used as inputs for other nodes. To minimize the amount of computation needed, node dependencies are checked on application start. When updating the nodes, they are updated in the reverse order they depend on each other.
In other words, the nodes form a directed graph. I am using iterative DFS (no recursion, to avoid stack overflows in huge graphs) to work out the dependencies and create an order for updating the nodes.
I also want to detect cycles in the graph, because cyclic dependencies will break the updater algorithm and cause an infinite loop.
There are recursive approaches to finding cycles with DFS by tracking nodes on the recursion stack, but is there a way to do it iteratively? I could then embed the cycle search in the main dependency resolver to speed things up.

There are plenty of cycle-detection algorithms available online. The simplest ones are augmented versions of Dijkstra's algorithm. You maintain a list of visited nodes and the costs to get there. In your design, replace the "cost" with the path to get there.
In each iteration of the algorithm, you grab the next node on the "active" list and look at each node that follows it in the graph (i.e. each of its dependencies). If that node is on the "visited" list, then you have a cycle. The path you maintained in getting here shows the loop path.
Is that enough to get you moving?

Try a timestamp. Add a meta timestamp and set it to zero on your nodes.
Previous answer (not applicable):
When you start a search, increment or grab a time() stamp. Then, when
you visit a node, compare it to the current search timestamp. If it
is the same, then you have found a cycle. If not then set the stamp
to current.
Next search, increment again.
Ok, this is how I'm assuming you are performing your DFS search:
Add the root node to a stack (for searching) and a vector (for updating).
Pop the stack and add the children of the current node to the stack and to the vector.
Loop until the stack is empty.
Reverse-iterate the vector and update values (by referencing child nodes).
The problem: Cycles will cause the same set of nodes to be added to the stack over and over.
Solution 1: Use a boolean/timestamp to see if the node has been visited before adding it to the DFS search stack. This will eliminate cycles, but will not resolve them. You can emit an error and quit.
Solution 2: Use a timestamp, but increment it each time you pop the stack. If a child node has a timestamp set, and it is less than the current stamp, you have found a cycle. Here's the kicker: when iterating over the values backwards, you can check the timestamps of the child nodes to see if they are greater than the current node's. If less, then you've found a cycle, but you can use a default value.
In fact, I think Solution 1 can be resolved the same way by never following more than one child when updating the value and setting all nodes to a default value on start. Solution 2 will give you a warning while evaluating the graph, whereas Solution 1 only gives you a warning when creating the vector.
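For completeness, here is a minimal sketch of one common iterative scheme (an explicit stack plus three per-node states instead of the timestamps above); the Map-based adjacency representation is just an assumption for illustration:

import java.util.*;

class CycleCheck {
    enum State { UNVISITED, IN_PROGRESS, DONE }

    // Returns true if a cycle is reachable from `start`; `children` maps each node id
    // to the ids it depends on (its outgoing edges). Hypothetical representation.
    static boolean hasCycle(int start, Map<Integer, List<Integer>> children) {
        Map<Integer, State> state = new HashMap<>();
        Deque<int[]> stack = new ArrayDeque<>();        // frames of {node, next child index}
        stack.push(new int[]{start, 0});
        state.put(start, State.IN_PROGRESS);
        while (!stack.isEmpty()) {
            int[] frame = stack.peek();
            List<Integer> kids = children.getOrDefault(frame[0], Collections.emptyList());
            if (frame[1] == kids.size()) {              // all children explored
                state.put(frame[0], State.DONE);
                stack.pop();
                continue;
            }
            int child = kids.get(frame[1]++);
            State s = state.getOrDefault(child, State.UNVISITED);
            if (s == State.IN_PROGRESS) return true;    // back edge: node still on the DFS path
            if (s == State.UNVISITED) {
                state.put(child, State.IN_PROGRESS);
                stack.push(new int[]{child, 0});
            }
        }
        return false;
    }
}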

Related

What is a Topological Sort

I have looked up numerous examples online and watched a YouTube video, but I am still a little lost on what a topological sort is. As far as I understand, you start with a visited and a non-visited queue, and you get the topological sort order once you are done visiting all of the children of a node?
Topological sort means you are given a list of jobs and a list of prerequisites, and you have to figure out the ordering of the jobs.
jobs = [1,2,3]
prerequisites = [[1,2], [1, 3], [3,2]]
result = [1,3,2] should be the order in which jobs should execute.
Here [1,2] signifies that job 2 cannot be started until job 1 is completed (job 1 is a prerequisite).
For this you can use a straightforward depth-first search (a graph traversal algorithm), wherein you can have a custom class named JobVertex:
class JobVertex {
    int job;
    List<JobVertex> preRequisites;
    boolean inProgress;
    boolean visited;
}
Initially both inProgress and visited flags can be set to false.
The inProgress flag is used to detect cycles (for a topological sort to work, the graph has to be a DAG).
List<Integer> result = new ArrayList<>();
result is the final list where our ordered jobs will be added.
The dfs(node, result) method can look like the sketch below: you can start from any node, traverse through its prerequisites recursively, and update the flags along the way.
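A possible sketch of that method, using the JobVertex class above (throwing an exception on a back edge is just one way to report a cycle):

static void dfs(JobVertex node, List<Integer> result) {
    if (node.visited) return;                            // already placed in the ordering
    if (node.inProgress) {
        throw new IllegalStateException("cycle detected: graph is not a DAG");
    }
    node.inProgress = true;
    for (JobVertex prerequisite : node.preRequisites) {
        dfs(prerequisite, result);                       // prerequisites go in first
    }
    node.inProgress = false;
    node.visited = true;
    result.add(node.job);                                // added only after its prerequisites
}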
The time complexity is the same as that of DFS, O(V + E), where V and E are the numbers of vertices and edges respectively.
A topological sort is a linear ordering of nodes in which for every directed edge 'a -> b', 'a' comes before 'b' in the ordering. Since the edges must be directed, topological sorts must be done on directed graphs, and the graphs must also be acyclic (they can't contain cycles). This is a special subset of graphs known as a DAG (Directed Acyclic Graph).
In an example drawing of a topologically sorted graph, notice that all edges (arrows) go from left to right.
The best way to find a topological sort is to use DFS with a temporary Stack as opposed to a Queue. Your understanding is close, but not fully there. As DFS is running on the graph, you don't push a node onto the temporary Stack until all of the children of that node have been explored as well. This is where the recursive element of this algorithm comes into play. Since you don't want to push the node onto the stack until all of its children have been explored, you have to wait until the algorithm is finished running on the children. By using a stack, the first node that is popped off the top will have no edges pointing towards it, and the last node popped off will have no edges coming from it. If we used a queue it would be the other way around.
You can use a second stack and remove nodes from it once they are added to the temporary stack for housekeeping purposes. Once all the nodes are on the temporary stack, print them by popping them off one by one, and you will have the topological sort of the given graph.
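As a rough sketch of that idea (java.util imports assumed; the adjacency map is a hypothetical representation where graph.get(a) holds every b with an edge a -> b, and every node is assumed to appear as a key):

static Deque<Integer> topologicalSort(Map<Integer, List<Integer>> graph) {
    Deque<Integer> order = new ArrayDeque<>();           // the temporary Stack
    Set<Integer> explored = new HashSet<>();
    for (Integer node : graph.keySet()) {
        visit(node, graph, explored, order);
    }
    return order;                                        // pop nodes one by one for the order
}

static void visit(Integer node, Map<Integer, List<Integer>> graph,
                  Set<Integer> explored, Deque<Integer> order) {
    if (!explored.add(node)) return;                     // already handled
    for (Integer next : graph.getOrDefault(node, Collections.emptyList())) {
        visit(next, graph, explored, order);
    }
    order.push(node);                                    // pushed only after all children are done
}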

Finding optimal order of all nodes to be visited in a graph

The following problem comes from geography, but I don't know of any GIS method to solve it. I think its solution can be found with graph analysis, but I need some guidance to think in the right direction.
There is a geographical area, say a state. It is subdivided into several quadrants, which are subdivided further, and once again. So it's a tree structure with the state as root and 3 levels of child nodes, each parent having 4 children. But from the perspective of the underlying process it's more like a complete graph, since in theory a node is directly reachable from every other node.
The subdivisions reflect map sheet boundaries at different map scales. Each map sheet has to be reviewed by a topographer in a time span dependent on the complexity of the map contents.
While a map is being reviewed, the underlying digital data is locked in the database. And as the objects have topological relationships with objects of neighboring map sheets (e.g. roads crossing the map boundaries), all 8 surrounding map sheets are locked as well.
The question is: what is the optimal order in which the leaves (on the lowest level) should be visited to satisfy the following requirements?
Each node has to be visited.
We do not deal with travel times but with the time span a worker spends at each node (map).
The time spent at a node differs from node to node.
While a worker is at a node, none of the adjacent nodes can be visited; this is true for other workers too; they cannot work on a map adjacent to a map already being processed.
If a node has been visited, other nodes having the same parent should be preferred as the next node; this is true for all levels of parents.
Finally, for a given number of nodes/maps and workers, we need an ordered series of nodes for each worker to visit, minimizing the overall time and also the time for each parent.
After designing the solution the real work begins. We will recognize that the actual work may need more or less time than expected. Therefore it is necessary to replay the solution up to the current state, and design a new solution with slightly different conditions, leading to another order of nodes.
Does somebody have an idea which data structure and which algorithm could be used to find a solution for this kind of problem?
I don't have a ready-made algorithm, but maybe the following helps in devising one:
Your exact topology is not clear. I assume from the other remarks that you are targeting a regular structure, in your case a 4x4 square.
The restriction that working on a node blocks any adjacent node can be used to identify a starting condition for the algorithm:
Put a worker at one corner of the total area and then put the others at distance 2 from it (first in the x direction and, as soon as that side is "filled", in the y direction). This will occupy all (x,y) nodes with x,y in 0,2,...,2n, where 2n <= size of the grid.
With a 4x4 area this allows a maximum of 4 workers, and positions one worker per child node of each level-2 grid node.
From there, let each worker process (x,y), (x+1,y), (x,y+1), (x+1,y+1). These are the 4 nodes of a small square.
If a worker is done but cannot proceed to the next planned node, you may advance it to the next free node in the schedule.
The more workers you have, the higher the risk of contention. If you have any estimates of the expected workload per node, you may prefer to start with the most expensive ones and arrange the processing sequence to continue with the ones that have the highest total expected cost.
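A small sketch of that starting layout (java.util imports assumed; the grid size n and the 2x2 block order come from the description above, everything else is illustrative):

// Returns, for each worker, its 2x2 block of leaf map sheets. Workers start at even
// (x, y) coordinates, so no two starting sheets are adjacent (8-neighborhood).
static List<int[][]> initialAssignments(int n, int workers) {
    List<int[][]> assignments = new ArrayList<>();
    for (int y = 0; y + 1 < n && assignments.size() < workers; y += 2) {
        for (int x = 0; x + 1 < n && assignments.size() < workers; x += 2) {
            // process the block in the order (x,y), (x+1,y), (x,y+1), (x+1,y+1)
            assignments.add(new int[][] { {x, y}, {x + 1, y}, {x, y + 1}, {x + 1, y + 1} });
        }
    }
    return assignments;
}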

Setting a new label on all nodes takes too long in a huge graph

I'm working on a graph containing about 50 million nodes and 40 million relationships.
I need to update every node.
I'm trying to set a new label to these nodes, but it's taking too long.
The label applies to all 50 million nodes, so the operation never ends.
After some research, I found out that Neo4j treats this operation as a single transaction (I don't know whether optimistic or not), keeping the changes uncommitted until the end (which will never happen in this fashion).
I'm currently using Neo4j 2.1.4, which has a feature called "USING PERIODIC COMMIT" (already present in earlier versions). Unfortunately, this feature is coupled to the "LOAD CSV" feature and is not available to every Cypher command.
The cypher is quite simple:
match n set n:Person;
I decided to use a workaround, and make some sort of block update, as follows:
match n
where not n:Person
with n
limit 500000
set n:Person;
It's ugly, but i couldn't come up with a better solution yet.
Here are some of my confs:
== neo4j.properties =========
neostore.nodestore.db.mapped_memory=250M
neostore.relationshipstore.db.mapped_memory=500M
neostore.propertystore.db.mapped_memory=900M
neostore.propertystore.db.strings.mapped_memory=1300M
neostore.propertystore.db.arrays.mapped_memory=1300M
keep_logical_logs=false
node_auto_indexing=true
node_keys_indexable=name_autocomplete,document
relationship_auto_indexing=true
relationship_keys_indexable=role
execution_guard_enabled=true
cache_type=weak
=============================
== neo4j-server.properties ==
org.neo4j.server.webserver.limit.executiontime=20000
org.neo4j.server.webserver.maxthreads=200
=============================
The hardware spec is:
RAM: 24GB
PROC: Intel(R) Xeon(R) X5650 @ 2.67GHz, 32 cores
HDD1: 1.2TB
In this environment, each block update of 500000 nodes took from 200 to 400 seconds. I think this is because every node satisfies the query at the start, but as the updates take place, more nodes need to be scanned to find the unlabeled ones (but again, it's a hunch).
So what's the best course of action whenever an operation needs to touch every node in the graph?
Any help towards a better solution to this will be appreciated!
Thanks in advance.
The most performant way to achieve this is using the batch inserter API. You might use the following recipe:
take a look at http://localhost:7474/webadmin and note the "node count". In fact it's not the number of nodes; it's rather the highest node id in use - we'll need that later on.
make sure to cleanly shut down your graph database.
take a backup copy of your graph.db directory.
write a short program in Java/Groovy/(whatever JVM language you prefer...) that performs the following tasks (a sketch follows after these steps)
open your graph.db folder using the batch inserter api
in a loop from 0..<node count> (from the step above), check if the node with the given id exists; if so, grab its current labels, amend the list with the new label, and use setNodeLabels to write it back
make sure you call shutdown() on the batch inserter
start up your Neo4j instance again
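A rough sketch of such a program against the Neo4j 2.x batch inserter API (the store path and the highest node id are placeholders; double-check package and method names against your exact Neo4j version):

import java.util.ArrayList;
import java.util.List;
import org.neo4j.graphdb.DynamicLabel;
import org.neo4j.graphdb.Label;
import org.neo4j.unsafe.batchinsert.BatchInserter;
import org.neo4j.unsafe.batchinsert.BatchInserters;

public class AddPersonLabel {
    public static void main(String[] args) {
        long highestNodeId = 50_000_000L;               // the "node count" from webadmin
        Label person = DynamicLabel.label("Person");
        BatchInserter inserter = BatchInserters.inserter("/path/to/graph.db");
        try {
            for (long id = 0; id <= highestNodeId; id++) {
                if (!inserter.nodeExists(id)) continue; // id ranges can contain gaps
                List<Label> labels = new ArrayList<>();
                for (Label l : inserter.getNodeLabels(id)) {
                    labels.add(l);                      // keep the existing labels
                }
                labels.add(person);                     // amend with the new one
                inserter.setNodeLabels(id, labels.toArray(new Label[0]));
            }
        } finally {
            inserter.shutdown();                        // flushes and closes the store
        }
    }
}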

Load balancing using the heat equation

I didn't think this was mathsy enough for MathOverflow, so I thought I would try here. It's for a programming problem though, so it's sort of on topic.
I have a graph (in the mathematical sense) with a number of vertices A, B, C, ..., each with a load "value". These are connected by some arbitrary topology (there is at least a spanning tree). The objective of the problem is to transfer load between the vertices, preferably using the minimal flow possible.
I wish to know the transfer values along each edge.
The solution to the problem I was thinking of was to treat it as a heat transfer problem, and iteratively transfer load, or solve the heat equation in some way, counting the amount of load dissipated along each edge. The amount of heat transfer until the network reaches steady state should thus yield the result.
Whilst I think this will work, it seems like the stupid solution. I am wondering if there is a reference or sample problem that someone can point me to -- I am unsure what keywords to search for.
I could not see how to cast the problem as either a simplex problem or a network flow problem -- each edge has unlimited capacity, and so does each node. There are two simultaneous minimisation problems to solve, so simplex seems not to apply?
If you have a spanning tree, then there is an O(n) solution.
Calculate the average value. (In the end, every node will have this value.) Then iterate over the leaves: calculate the change in value necessary for that leaf (average - value), add that change to the leaf, assign it (as flow) to the edge that connects that leaf to the tree, and subtract it from the other node. Remove that leaf and edge from the tree (not from the graph, of course); the other node may become a new leaf if it has only one remaining edge in the tree.
When you reach the last node, if you've done the arithmetic right it will end up with the average value, just like all the other nodes.
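A sketch of that leaf-pruning pass (java.util imports assumed; value[i] holds node i's load and tree is the undirected adjacency list of the spanning tree, both illustrative representations):

// Returns the flow along each pruned edge, keyed as "parent->leaf"; a negative amount
// means the load actually moves from the leaf towards the parent.
static Map<String, Double> balance(double[] value, List<List<Integer>> tree) {
    int n = value.length;
    double avg = Arrays.stream(value).sum() / n;
    List<Set<Integer>> adj = new ArrayList<>();          // mutable copy of the tree
    for (List<Integer> neighbors : tree) adj.add(new HashSet<>(neighbors));
    Deque<Integer> leaves = new ArrayDeque<>();
    for (int v = 0; v < n; v++) if (adj.get(v).size() == 1) leaves.add(v);
    Map<String, Double> flow = new LinkedHashMap<>();
    while (!leaves.isEmpty()) {
        int leaf = leaves.poll();
        if (adj.get(leaf).isEmpty()) continue;           // the last remaining node
        int parent = adj.get(leaf).iterator().next();
        double transfer = avg - value[leaf];             // what the leaf still needs
        value[leaf] += transfer;                         // leaf now holds exactly the average
        value[parent] -= transfer;                       // parent supplies (or absorbs) it
        flow.put(parent + "->" + leaf, transfer);
        adj.get(parent).remove(leaf);                    // prune the leaf and its edge
        adj.get(leaf).clear();
        if (adj.get(parent).size() == 1) leaves.add(parent);
    }
    return flow;                                         // by conservation, the last node ends at avg
}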

Fixed length path between two graph nodes

Is there an algorithm that will, if given two nodes on a graph, find a route between them that takes the specified number of hops? Any node can be connected to any other.
The points at the moment are located in 2D space, so I'm not sure if a graph is the best approach.
Have you tried iterative deepening DFS?
If you have nodes and are seeking routes in terms of hops, then a graph is probably the right approach. I'm not sure I understand what you are trying to do and what the constraints are, though, especially with respect to "any node can be connected to any other", which seems a bit open ended.
Disregarding that, however, with some graph representation:
Start at the first node and do a depth-first search from there, terminating a branch if (a) the hops taken exceed your specified number or (b) you have arrived at the second node. This will find the first (not the only) path connecting the two nodes in (at most) that many hops.
If it has to be exactly the specified number of hops, terminate any branch of the search once the hops have gone over, and terminate with success only when you have arrived at the second node with exactly that many hops.
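A small sketch of that exact-hop variant (java.util imports assumed; the adjacency map is a hypothetical representation, and revisiting nodes is allowed since the question doesn't forbid loops):

// Returns a path of exactly `hops` edges from start to target, or null if none exists.
static List<Integer> findPath(Map<Integer, List<Integer>> graph,
                              int start, int target, int hops) {
    List<Integer> path = new ArrayList<>();
    path.add(start);
    return extend(graph, start, target, hops, path) ? path : null;
}

static boolean extend(Map<Integer, List<Integer>> graph,
                      int node, int target, int hopsLeft, List<Integer> path) {
    if (hopsLeft == 0) return node == target;            // budget spent: did we arrive?
    for (int next : graph.getOrDefault(node, Collections.emptyList())) {
        path.add(next);
        if (extend(graph, next, target, hopsLeft - 1, path)) return true;
        path.remove(path.size() - 1);                     // backtrack
    }
    return false;
}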
Dumb approach (the data structure is an array of stacks): this is basically doing breadth-first search (BFS) to depth N, except that if loops are allowed (you did not clarify, but I assume they are), you don't exclude visited nodes from further searching (see the sketch after these steps).
Push starting node on a stack stored in the array at index 0 (index=depth)
For each level/index "l" 0-N:
For each node on a stack stored at level "l", find all its neighbors, and push them onto a stack stored in level "l+1".
Important: if your task allows finding paths that contain loops, do NOT check whether you already visited a node before adding it. If it does not allow loops, use a hash of visited nodes so that no node is added twice.
Stop when you finish level "N-1".
Loop over all the nodes you just added to the stack at index "N" and look for your destination node. If found: success; if not, no such path.
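A rough sketch of that procedure (java.util imports assumed; the adjacency map is a hypothetical representation, and duplicates are deliberately not filtered, which is why this is the dumb approach):

static boolean reachableInExactlyN(Map<Integer, List<Integer>> graph,
                                   int start, int target, int n) {
    List<Deque<Integer>> levels = new ArrayList<>();     // the array of stacks, index = depth
    for (int i = 0; i <= n; i++) levels.add(new ArrayDeque<>());
    levels.get(0).push(start);
    for (int level = 0; level < n; level++) {
        for (int node : levels.get(level)) {
            for (int neighbor : graph.getOrDefault(node, Collections.emptyList())) {
                levels.get(level + 1).push(neighbor);    // may push the same node many times
            }
        }
    }
    return levels.get(n).contains(target);               // is the destination at depth N?
}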
Please note that if by "every node can be connected" you are implying a FULLY CONNECTED graph, then there exists a mathematical answer not involving actually visiting nodes
(however, the formula is too long to write in the text-entry field of StackOverflow)

Resources