I am trying to understand the time complexity of some efficient methods of detecting cycles in a graph.
Two approaches for doing this are explained here. I would assume that time complexity is provided in terms of worst-case.
The first is union-find, which is said to have a time complexity of O(V log E).
The second uses a DFS-based approach and is said to have a time complexity of O(V+E). If I am correct, this is asymptotically more efficient than O(V log E). It is also convenient that the DFS-based approach works for both directed and undirected graphs.
My issue is that I fail to see how the second approach can be considered to run in O(V+E) time, because DFS runs in O(V+E) time and the algorithm checks the nodes adjacent to each discovered node for the starting node. Surely this would mean that the algorithm runs in O(V^2) time, because up to V-1 adjacent nodes might have to be traversed for each discovered node? It is obviously impossible for more than one node to require the traversal of V-1 adjacent nodes, but from my understanding this would still be the upper bound on the runtime.
Hopefully someone understands why I think this and can help me to understand why the complexity is O(V+E).
The algorithm, based on DFS, typically maintains a "visited" boolean flag for each vertex, which stores one bit of information: whether this vertex has already been visited or not. So no vertex can be visited more than once.
If the graph is connected, then starting the DFS from any vertex will give you an answer right away. If the graph is a tree, then all the vertices will be visited in a single call to the DFS. If the graph is not a tree, then a single call to the DFS will find a cycle - and in this case not all the vertices might be visited. In both cases the subgraph induced by the already visited vertices will be a tree at each step of the DFS - so the total number of traversed edges is O(V). Because of that, the O(V+E) time complexity estimate of the cycle detection algorithm can be tightened to O(V).
Starting the DFS from every vertex of the graph is necessary when the graph consists of several connected components - the "visited" flag guarantees that the DFS won't traverse the same component again and again.
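A minimal sketch of this idea, assuming the graph is given as an adjacency list (a dict mapping each vertex to its neighbours); the function and variable names are only illustrative:

```python
# Cycle detection in an undirected graph via DFS with a "visited" set.
# The search stops as soon as the first cycle is found.
def has_cycle(graph):
    visited = set()

    def dfs(vertex, parent):
        visited.add(vertex)
        for neighbour in graph[vertex]:
            if neighbour not in visited:
                if dfs(neighbour, vertex):
                    return True
            elif neighbour != parent:
                # An edge to an already visited vertex that is not the parent
                # closes a cycle, so we can stop immediately.
                return True
        return False

    # Start a DFS from every vertex so disconnected components are covered;
    # the visited set guarantees each vertex is expanded at most once.
    return any(dfs(v, None) for v in graph if v not in visited)
```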
I am trying to understand an algorithm to count the number of isolated cycles in a graph. By isolated, I mean the cycles that do not share any vertex. Here is an example:
Here, 1-2-4-1 and 3-6-5-3 are the two cycles that do not share any vertex. Hence, the algorithm must return 2. Of course, we can find all possible cycles and then figure out the isolated ones, but I was wondering if there could be an alternate method for this special case.
I have an undirected graph in which all edges have the same weight and all vertices are connected. I want to find a path between any given pair of vertices.
A less efficient solution is:
Perform a BFS starting from one of the vertices, keeping track of the visited vertices until the destination vertex is reached. This runs in O(V + E), but it has to be done for every given pair of vertices. Hence, if there are M path queries, the complexity would be O(M * (V + E)).
Can we do better? Is it possible to leverage the output of one BFS to answer the rest? A rough sketch of the per-query BFS is shown below.
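For reference, this is roughly the per-query BFS I have in mind (the graph is assumed to be an adjacency list; identifiers are just illustrative):

```python
from collections import deque

# Find some path between source and target in an unweighted graph
# given as an adjacency list, using a single BFS per query.
def bfs_path(graph, source, target):
    parent = {source: None}
    queue = deque([source])
    while queue:
        vertex = queue.popleft()
        if vertex == target:
            # Reconstruct the path by walking the parent pointers back.
            path = []
            while vertex is not None:
                path.append(vertex)
                vertex = parent[vertex]
            return path[::-1]
        for neighbour in graph[vertex]:
            if neighbour not in parent:
                parent[neighbour] = vertex
                queue.append(neighbour)
    return None  # no path (cannot happen if the graph is connected)
```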
Since you state that the graph is connected, it is not necessary to run a search algorithm more than once. A single call to BFS (or DFS; it should make no major difference) is sufficient to generate a spanning tree for the input. Since your problem statement apparently does not require a shortest path between the vertices, any pair (a, b) of vertices is connected via the path from a to the root r of the spanning tree followed by the path from r to b.
The runtime would be O(V+E), namely the runtime of the search algorithm; additional cost might be necessary to generate the actual paths themselves.
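A sketch of this single-BFS idea, under the assumption that the graph is an adjacency list; all names are illustrative:

```python
from collections import deque

# Build a spanning tree rooted at an arbitrary vertex once (O(V + E)),
# then answer each (a, b) query by concatenating the path a -> root
# with the path root -> b.
def build_spanning_tree(graph, root):
    parent = {root: None}
    queue = deque([root])
    while queue:
        vertex = queue.popleft()
        for neighbour in graph[vertex]:
            if neighbour not in parent:
                parent[neighbour] = vertex
                queue.append(neighbour)
    return parent  # BFS-tree parent pointers

def query_path(parent, a, b):
    # Path from a up to the root ...
    path_a = []
    v = a
    while v is not None:
        path_a.append(v)
        v = parent[v]
    # ... and from b up to the root, reversed and appended.
    path_b = []
    v = b
    while v is not None:
        path_b.append(v)
        v = parent[v]
    return path_a + path_b[::-1][1:]  # drop the duplicated root
```

Note that the concatenated route may revisit the shared part of the two root paths; trimming both paths at the lowest common ancestor of a and b would turn it into a simple path.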
I'm reading a book about algorithms ("Data Structures and Algorithms in C++") and have come across the following exercise:
Ex. 20. Modify cycleDetectionDFS() so that it could determine whether a particular edge is part of a cycle in an undirected graph.
In the chapter about graphs, the book reads:
Let us recall from a preceding section that depth-first search guaranteed generating a spanning tree in which no elements of edges used by depthFirstSearch() led to a cycle with other element of edges. This was due to the fact that if vertices v and u belonged to edges, then the edge(vu) was disregarded by depthFirstSearch(). A problem arises when depthFirstSearch() is modified so that it can detect whether a specific edge(vu) is part of a cycle (see Exercise 20). Should such a modified depth-first search be applied to each edge separately, then the total run would be O(E(E+V)), which could turn into O(V^4) for dense graphs. Hence, a better method needs to be found.
The task is to determine if two vertices are in the same set. Two operations are needed to implement this task: finding the set to which a vertex v belongs and uniting two sets into one if vertex v belongs to one of them and w to another. This is known as the union-find problem.
Later on, the author describes how to merge two sets into one when an edge passed to the function union(edge e) connects vertices in distinct sets.
However, I still don't know how to quickly check whether an edge is part of a cycle. Could someone give me a rough explanation of such an algorithm, related to the aforementioned union-find problem?
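For reference, this is roughly how I understand the find/union operations the book describes; the class and method names below are my own, not the book's:

```python
# A minimal disjoint-set (union-find) structure over the graph's vertices.
class DisjointSets:
    def __init__(self, vertices):
        self.parent = {v: v for v in vertices}

    def find(self, v):
        # Follow parent pointers to the representative of v's set,
        # compressing the path along the way.
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]
            v = self.parent[v]
        return v

    def union(self, v, w):
        # Unite the sets containing v and w; return False if they were
        # already in the same set.
        root_v, root_w = self.find(v), self.find(w)
        if root_v == root_w:
            return False
        self.parent[root_v] = root_w
        return True
```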
A rough explanation could be: check whether an edge is a back edge. Whenever you have a back edge you have a cycle, and whenever you have a cycle you have a back edge (this holds for both directed and undirected graphs).
A back edge is an edge that points from a descendant to an ancestor. You should know that when traversing a graph with a DFS algorithm you build a forest, and an ancestor is a node that is marked finished later in the traversal.
I've given you some pointers on where to look; let me know if that helps clarify your problem.
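To make that concrete, here is a rough sketch for an undirected graph stored as an adjacency list; the names are illustrative only. Each back edge (u, w) to an already visited ancestor w closes the cycle formed by the tree path from w down to u plus the edge itself, so marking those edges gives exactly the set of edges that lie on some cycle:

```python
# Collect all edges of an undirected graph that lie on at least one cycle,
# by finding back edges during a DFS and marking their fundamental cycles.
def edges_on_cycles(graph):
    visited = set()
    parent = {}
    depth = {}
    cycle_edges = set()

    def dfs(u):
        visited.add(u)
        for w in graph[u]:
            if w not in visited:
                parent[w] = u
                depth[w] = depth[u] + 1
                dfs(w)
            elif w != parent.get(u) and depth[w] < depth[u]:
                # Back edge u -> w: mark it, plus every tree edge on the
                # path from u back up to the ancestor w.
                cycle_edges.add(frozenset((u, w)))
                x = u
                while x != w:
                    cycle_edges.add(frozenset((x, parent[x])))
                    x = parent[x]

    for v in graph:
        if v not in visited:
            parent[v] = None
            depth[v] = 0
            dfs(v)
    return cycle_edges
```

Any edge not in the returned set is a bridge and is therefore not part of any cycle.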
I have a directed acyclic weighted graph which I want to traverse.
The constraints for a valid solution route are:
The sum of the weights of all edges traversed in the route must be the highest possible in the graph, bearing in mind the second constraint.
Exactly N vertices must have been visited in the chosen route (including the start and end vertex).
Typically the graph will have a large number of vertices and edges, so trying all possibilities is not an option, and an efficient algorithm is required.
I'm looking for some pointers or a suitable algorithm for this problem. I know the first condition is easily fulfilled using Dijkstra's algorithm, but I am not sure how to incorporate the second condition, or even where to begin looking.
Please let me know if any additional information is needed.
I'm not sure whether you are interested in any path of length N in the graph or just a path between two specific vertices; I suspect the latter, but you did not mention that constraint in your question.
If the former, the solution should be a fairly simple Dijkstra-like algorithm where you sort all edges by their potential path value, which starts at the edge weight and gets adjusted by already built adjacent paths. In each iteration, take the node with the best potential path value and add it to an adjacent path. Stop when you get a path of length N (or a longer one that you cut off at the ends). There are some other technical details, especially with respect to creating long paths, but I won't go into them as I suspect this is not what you are interested in. :-)
If you have a fixed source and sink, I think there is no deep magic involved - just run a basic Dijkstra-style search where a path is associated with each vertex added to the queue, but do not insert vertices whose path length is >= N into the queue, and do not insert the sink unless its path length is exactly N.
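Just to sketch one alternative for the DAG case (this is not the Dijkstra variant above, but a dynamic programming pass over a topological order; the representation is an assumption: graph[u] is a list of (v, weight) pairs):

```python
from collections import defaultdict

# best[v][k] = maximum total edge weight of a path ending at v that
# visits exactly k vertices. `order` must be a topological order.
def max_weight_path_exactly_n(graph, order, n):
    best = defaultdict(dict)
    for v in order:
        best[v][1] = 0  # a path consisting of v alone
    for u in order:
        for v, weight in graph[u]:
            for k, value in list(best[u].items()):
                if k + 1 <= n:
                    candidate = value + weight
                    if best[v].get(k + 1, float("-inf")) < candidate:
                        best[v][k + 1] = candidate
    # Best over all endpoints of a path with exactly n vertices.
    answers = [best[v][n] for v in order if n in best[v]]
    return max(answers) if answers else None
```

For a fixed source and sink you would initialize best only at the source and read best[sink][N] at the end; the overall cost is roughly O(N * E).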
I'm a little out of my depth here and need to phone a friend. I've got a directed acyclic graph I need to traverse, and I'm stumbling into graph theory for the first time. I've been reading a lot about it lately, but unfortunately I don't have time to figure this out academically. Can someone give me a kick in the right direction as to how to process this graph?
Here are the rules:
there are n root nodes (I call them "sources")
there are n end nodes
source nodes carry a numeric value
downstream nodes (I call them "worker" nodes) perform various operations on the incoming values, like Add, Mult, etc.
As you can see from the graph below, nodes a, b, and c need to be processed before d, e, or f.
What's the proper order to walk this graph?
I would look into linearization of DAGs, which is achievable through a topological sort.
Linearization, from what I remember, basically sorts the nodes in an order that maintains the invariant that for every node NodeX with an edge to another node NodeA, NodeX appears before NodeA.
This would mean that, from your example, nodes a, b, and d would be processed first, node c second, and nodes e and f last.
http://en.wikipedia.org/wiki/Topological_sorting
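For completeness, a sketch of Kahn's algorithm, one standard way to compute such an order; the graph is assumed to map every node (including the end nodes, with empty lists) to the list of nodes it feeds into, and the names are illustrative:

```python
from collections import deque

# Kahn's algorithm: repeatedly emit nodes with no remaining incoming edges.
def topological_order(graph):
    indegree = {v: 0 for v in graph}
    for u in graph:
        for v in graph[u]:
            indegree[v] += 1

    queue = deque(v for v in graph if indegree[v] == 0)  # the "source" nodes
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in graph[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    return order
```

Processing the worker nodes in the returned order guarantees that all of a node's inputs have been computed before the node itself.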
You need to process the nodes via a topological sort. The sort is not necessarily unique, so there might be more than one valid order (not that this should matter anyway).
The linked wikipedia page should have concrete algorithms to help you.