Recursive node traversal going through the whole graph? - recursion

I'm trying to solve this work problem, but I'm having some difficulty, since I suck at recursion.
My question is: is there a way where I can pick a node in a graph and traverse the graph all the way through back to the same node where I started, but the nodes can only be visited once? And of course to save the resulting edges traversed.
The graph is unweighted, but its nodes are coordinates in a 2D x/y coordinate system, so each node has an x and y value, meaning the edges can be weighted by calculating the distance between the coordinates. If that helps...

I'm not completely sure I understand, but here is a suggestion: pick a node n0, then an edge e=(n0,n1). Then remove that edge from the graph, and use a breadth-first search to find the shortest path from n1 back to n0 if it exists.
Another suggestion, which might help you control the length of the resulting path better: pick a starting node n0 and find a spanning tree T emanating from n0. Remove n0, and T will (hopefully) break into components. Find an edge e=(n1,n2) from one component to another. Then that edge, plus the edges in T connecting n1 to n0, plus the edges in T connecting n2 to n0, form a cycle with the properties you desire.
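If it helps, here is a minimal sketch of the first suggestion in Python; the dict-of-neighbour-sets representation and the function name cycle_through are my own assumptions, not part of the question. It tries each edge (n0, n1), pretends that edge has been removed, and runs a BFS from n1 back to n0.

    from collections import deque

    def cycle_through(graph, n0):
        # graph: dict mapping each node to a set/list of its neighbours.
        # Returns a cycle n0, n1, ..., n0 visiting no other node twice, or None.
        for n1 in graph[n0]:
            parent = {n1: None}
            queue = deque([n1])
            while queue:
                u = queue.popleft()
                if u == n0:
                    # Reconstruct the path n1 -> ... -> n0 and close the cycle.
                    path = []
                    while u is not None:
                        path.append(u)
                        u = parent[u]
                    path.reverse()          # n1, ..., n0
                    return [n0] + path      # n0, n1, ..., n0
                for v in graph[u]:
                    # Skip the removed edge (n1, n0) so the cycle is not trivial.
                    if v not in parent and not (u == n1 and v == n0):
                        parent[v] = u
                        queue.append(v)
        return None

The traversed edges are then the consecutive pairs in the returned node list.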

Related

Single source shortest path using BFS for an undirected weighted graph

I was trying to come up with a solution for finding single-source shortest paths in an undirected weighted graph using BFS.
I came up with a solution: convert every edge of weight x into x edges between the vertices, each new edge with weight 1, and then run BFS. I would get a new BFS tree, and since it is a tree there exists only one path from the root node to every other vertex.
The problem I am having is coming up with the analysis of this algorithm. Every edge needs to be visited once and then split into the corresponding number of edges according to its weight. Then we need to run BFS on the new graph.
The cost of visiting every edge is O(m), where m is the number of edges, as every edge is visited once to split it. Suppose the new graph has km edges (say m').
The time complexity of BFS is O(n + m') = O(n + km) = O(n + m), i.e. the time complexity remains unchanged.
Is the given proof correct?
I'm aware that I could use Dijkstra's algorithm here, but I'm specifically interested in analyzing this BFS-based algorithm.
The analysis you have included here is close but not correct. If you assume that every edge's cost is at most k, then your new graph will have O(n + km) nodes (up to k-1 extra nodes are added per edge) and O(km) edges, so the runtime would be O(n + km). However, you can't assume that k is a constant here. After all, if I increase the weight on the edges, I will indeed increase the amount of time that your code takes to run. So overall, you could give a runtime of O(n + km).
Note that k here is a separate parameter to the runtime, the same way that m and n are. And that makes sense - larger weights give you larger runtimes.
(As a note, this is not considered a polynomial-time algorithm. Rather, it's a pseudopolynomial-time algorithm because the number of bits required to write out the weight k is O(log k).)
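For concreteness, here is a rough sketch of the subdivision construction in Python (the representation and names are my own, not from the question): each edge of weight w becomes a chain of w unit edges through w-1 dummy nodes, after which a plain BFS gives the distances.

    from collections import deque

    def bfs_shortest_paths(n, weighted_edges, source):
        # weighted_edges: list of (u, v, w) with integer w >= 1; vertices are 0..n-1.
        adj = [[] for _ in range(n)]
        next_id = n                          # ids for the dummy nodes
        for u, v, w in weighted_edges:
            prev = u
            for _ in range(w - 1):           # insert w-1 dummy nodes per edge
                adj.append([])
                adj[prev].append(next_id)
                adj[next_id].append(prev)
                prev = next_id
                next_id += 1
            adj[prev].append(v)
            adj[v].append(prev)
        # Plain BFS on the subdivided (unit-weight) graph.
        dist = [None] * len(adj)
        dist[source] = 0
        queue = deque([source])
        while queue:
            x = queue.popleft()
            for y in adj[x]:
                if dist[y] is None:
                    dist[y] = dist[x] + 1
                    queue.append(y)
        return dist[:n]                      # distances to the original vertices

The subdivided graph has O(n + km) vertices and O(km) edges when every weight is at most k, which is where the pseudopolynomial O(n + km) bound comes from.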

Easy way to find a path between graph vertices

There are 100 graph vertices, and each vertex has 4 edges toward other vertices. The edges are stored in an array X: X(100, 4) is the array's size, while X(38, 2) means the contents of the array at the two-dimensional index (38, 2).
Is there any simple way to find a way from a given starting vertex to another given vertex?
It does not have to be the shortest way, as long as the destination can be reached.
Thanks!
Yes. This is the same as finding a path between two vertices in an undirected graph, and is a thoroughly studied concept in mathematics and computer science. The usual method is a "Depth First Search" (DFS). A suitable algorithm is described here.
Essentially it follows this pattern:
1. Start with x equal to the start node.
2. If x is the end node, then we're done.
3. If we've already visited x, then abandon this path.
4. For each of the nodes y connected to x:
5. Add x to the current path and set x = y.
6. Run the algorithm from step 2.
7. Loop to step 4.
That will explore every possible path from x, going as deep down each branch as possible to find the goal or a dead end. Hence "depth first".
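For illustration, a small recursive version of that pattern in Python (using a list-of-neighbour-lists representation rather than the X(100, 4) array, purely for readability; the names are mine):

    def find_path(adj, start, goal):
        # adj[v] is the list of vertices adjacent to v.
        # Returns some path from start to goal as a list of vertices, or None.
        visited = set()

        def dfs(x, path):
            if x == goal:                 # step 2: reached the end node
                return path + [x]
            if x in visited:              # step 3: abandon already-visited nodes
                return None
            visited.add(x)
            for y in adj[x]:              # step 4: try each neighbour in turn
                result = dfs(y, path + [x])
                if result is not None:
                    return result
            return None                   # dead end, backtrack

        return dfs(start, [])

For example, find_path([[1, 2], [0, 3], [0, 3], [1, 2]], 0, 3) returns [0, 1, 3].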

Pathfinding through graph with special vertices

Heyo!
So I've got this directed and/or undirected graph with a bunch of vertices and edges. In this graph there is a start vertex and an end vertex. There's also a subset of vertices which are coloured red (this subset can include the start and end vertices). Also, no pair of vertices can have more than one edge between them.
What I have to do is to find:
A) The shortest path that passes no red vertices
B) If there is a path that passes at least one red vertex
C) The path with the greatest amount of red vertices
D) The path with the fewest amount of red vertices
For A I use a breadth first search ignoring red branches. For B I simply brute force it with a depth first search of the graph. And for C and D I use dynamic programming, memoizing the number of red vertices I find in all paths, using the same DFS as in B.
I am moderately happy with all the solutions and I would very much appreciate any suggestions! Thanks!!
For A I use a breadth first search ignoring red branches
A) is a typical pathfinding problem in the sub-graph that contains no red vertices. So your solution is good (it could be improved with heuristics if you can come up with one; then use A*)
For B I simply brute force it with a depth first search of the graph
Well, here's the thing. Every optimal path A->C can be split at an arbitrary intermediate point B. A nice property of optimal paths is that every sub-path is optimal. So A->B and B->C are optimal.
This means if you know you must travel from some start to some end through an intermediary red vertex, you can do the following:
Perform a BFS from the start vertex
Perform a BFS from the end vertex backwards (if your edges are directed - as I think they are - you'll have to take them in reverse here)
Alternate expanding both BFSs so that both their frontiers (or 'open' lists, as they are called) have the same distance to their respective start.
Stop when:
One BFS hits a red vertex encountered by (or in the 'closed' list of) the other one. In this case, each BFS can construct the optimal path to that common vertex. Stitch both semi-paths together, and you have your optimal path with at least one red vertex.
One BFS is stuck (its 'open' list is empty). In this case, there is no solution.
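If all you need for B) is a yes/no answer, a simpler check than the alternating bidirectional BFS described above is to compute what is reachable from the start and what can reach the end, and see whether any red vertex lies in both sets. A sketch in Python with my own names (note it treats the two halves as joinable, i.e. the combined route may revisit vertices):

    from collections import deque

    def reachable(adj, source):
        # Plain BFS: set of vertices reachable from source.
        # adj maps each vertex to its (outgoing) neighbours.
        seen = {source}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return seen

    def path_through_red_exists(adj, radj, start, end, red):
        # radj is the reversed adjacency (equal to adj for undirected graphs).
        from_start = reachable(adj, start)
        to_end = reachable(radj, end)
        return any(r in from_start and r in to_end for r in red)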
C) The path with the greatest amount of red vertices
This is a combinatorial problem. The first thing I would do is make a reachability matrix of [start node + red nodes + end node], where:
reachability[i, j] = 1 iff there is a path from node i to node j
To compute this matrix, simply perform one BFS starting at the start node and one at every red node. If the BFS reaches a red node (or the end node), put a 1 in the corresponding row/column.
This will abstract away the underlying complexity of the graph and give an order-of-magnitude speedup on the combinatorial search.
The problem is now a longest-path problem through that reachability matrix. Dynamic programming would indeed be the way to go.
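A rough sketch of building that reachability matrix in Python (graph representation and function names are assumptions of mine):

    from collections import deque

    def build_reachability(adj, start, end, red):
        # nodes = start node + red nodes + end node, without duplicates.
        nodes = list(dict.fromkeys([start] + list(red) + [end]))
        index = {v: i for i, v in enumerate(nodes)}
        reach = [[0] * len(nodes) for _ in nodes]
        for i, src in enumerate(nodes):
            seen = {src}                     # one BFS per node of interest
            queue = deque([src])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in seen:
                        seen.add(v)
                        queue.append(v)
            for v in seen:
                if v in index:               # mark reachable start/red/end nodes
                    reach[i][index[v]] = 1
        return nodes, reach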
D) The path with the fewest amount of red vertices
Simply perform a Dijkstra search, but use the following metric when sorting the nodes in the 'open' list:
dist(start, a) < dist(start, b) if:
    numRedNodesInPath(start -> a) < numRedNodesInPath(start -> b)
    OR (
        numRedNodesInPath(start -> a) == numRedNodesInPath(start -> b)
        AND
        numNodesInPath(start -> a) < numNodesInPath(start -> b)
    )
For this, when discovering new vertices, you'll have to store the path leading up to them (well, just the number of nodes in the path, as well as the number of red nodes, separately) in a dedicated map to be fetched. I mention this because usually the length of the path is stored implicitly as the position of the vertex in the array; you'll have to enforce it explicitly in your case.
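One way to realise that ordering is to run Dijkstra with a lexicographic cost of (red vertices on the path so far, vertices on the path so far), e.g. with Python's heapq. This is a sketch under my own assumptions about the representation (a dict of neighbour lists and a set of red vertices):

    import heapq

    def fewest_red_path(adj, start, end, red):
        # Dijkstra with lexicographic cost (number of red vertices, path length).
        start_cost = (1 if start in red else 0, 1)
        best = {start: start_cost}
        heap = [(start_cost, start, [start])]
        while heap:
            cost, u, path = heapq.heappop(heap)
            if u == end:
                return cost[0], cost[1], path   # (red count, length, path)
            if cost > best[u]:
                continue                        # stale heap entry
            for v in adj[u]:
                new_cost = (cost[0] + (1 if v in red else 0), cost[1] + 1)
                if v not in best or new_cost < best[v]:
                    best[v] = new_cost
                    heapq.heappush(heap, (new_cost, v, path + [v]))
        return None                             # end not reachable

Lexicographic comparison of the cost tuples is exactly the ordering described above, and since every step adds a non-negative amount to both components, Dijkstra's usual correctness argument still applies.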
Note on length optimality:
Even though you stated you did not care about length optimality outside of problem A), the algorithm I provided will produce shortest-length solutions. In many cases (like in D) it helps Dijkstra converge better, I believe.

Time complexity of a modified DFS algorithm

I want to write an algorithm that finds an optimal vertex cover of a tree in linear time O(n), where n is the number of the vertices of the tree.
A vertex cover of a graph G=(V,E) is a subset W of V such that for every edge (a,b) in E, a is in W or b is in W.
In a vertex cover we need to have at least one vertex for each edge.
If we pick a non-leaf, it can cover more than one edge.
That's why I thought we can do it as follows:
We visit the root of the tree, then one of its children, then a child of that child, and so on.
When we reach a leaf, we check whether we have already taken its father into the vertex cover, and if not we pick it. Then, if the vertex that we picked for the vertex cover also has other children, we pick the first of them and visit its leftmost children recursively; if we reach a leaf whose father hasn't been chosen for the desired vertex cover, we choose the father, and so on.
I have written the following algorithm:
DFS(node x){
    discovered[x] = 1;
    for each (v in Adj(x)){
        if (discovered[v] == 0){
            DFS(v);
            if (v->taken == 0){
                x->taken = 1;   // take x into the cover, since edge (x, v) is not yet covered
            }
        }
    }
}
I thought that its time complexity is the sum Σ_i O(|V_i| + |E_i|), where |V_i| and |E_i| are the number of vertices and edges, respectively, of the subtrees at whose root we call DFS.
Is the time complexity I found right? Or have I calculated it wrong?
EDIT: Is the complexity of the algorithm described by the recurrence relation:
T(|V|)=E*T(|V|-1)+O(1)
? Or am I wrong?

How can I find all 'long' simple acyclic paths in a graph?

Let's say we have a fully connected directed graph G. The vertices are [a,b,c,d]. There are edges in both directions between each pair of vertices.
Given a starting vertex a, I would like to traverse the graph in all directions and save the path only when I hit a vertex which is already in the path.
So, the function full_paths(a,G) should return:
- [{a,b}, {b,c}, {c,d}]
- [{a,b}, {b,d}, {d,c}]
- [{a,c}, {c,b}, {b,d}]
- [{a,c}, {c,d}, {d,b}]
- [{a,d}, {d,c}, {c,b}]
- [{a,d}, {d,b}, {b,c}]
I do not need 'incomplete' results like [{a,b}] or [{a,b}, {b,c}], because it is contained in the first result already.
Is there any other way to do it except generating a power set of G and filtering out results of a certain size?
How can I calculate this?
Edit: As Ethan pointed out, this could be solved with a depth-first search, but unfortunately I do not understand how to modify it so that it stores a path before it backtracks (I use Ruby Gratr to implement my algorithm)
Have you looked into depth first search or some variation? A depth first search traverses as far as possible and then backtracks. You can record the path each time you need to backtrack.
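For instance, a recursive DFS that keeps the current path and saves it whenever it can no longer be extended without revisiting a vertex could look like this (in Python rather than Ruby/Gratr, and the dict-of-successors representation is my own assumption):

    def full_paths(start, graph):
        # graph: dict mapping each vertex to the vertices it has edges to.
        # Returns every maximal simple path from start as a list of edges.
        results = []

        def dfs(v, visited, path):
            extended = False
            for w in graph[v]:
                if w not in visited:
                    extended = True
                    dfs(w, visited | {w}, path + [(v, w)])
            if not extended and path:   # cannot go further without a revisit
                results.append(path)

        dfs(start, {start}, [])
        return results

With graph = {'a': ['b', 'c', 'd'], 'b': ['a', 'c', 'd'], 'c': ['a', 'b', 'd'], 'd': ['a', 'b', 'c']}, full_paths('a', graph) produces the six edge lists from the question.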
If you know your graph G is fully connected, there are N! such paths, where N is the number of vertices in graph G, and you can easily count them this way: you have N possible choices of starting point, then for each starting point you can choose N-1 vertices as the second vertex on the path, and so on, until only the last unvisited vertex can be chosen, so you have N*(N-1)*...*2*1 = N! possible paths. When you can't choose the starting point, i.e. it is given, it is the same as finding paths in a graph G' with N-1 vertices. All possible paths are permutations of the set of all vertices, in your case all vertices except the starting point. When you have a permutation, you can generate the path by:
perm_to_path([A|[B|_]=T]) -> [{A,B}|perm_to_path(T)];
perm_to_path(_) -> [].
The simplest way to generate permutations is:
permutations([]) -> [[]];
permutations(L) ->
    [[H|T] || H <- L, T <- permutations(L--[H])].
So in your case:
paths(A, GV) -> [perm_to_path([A|P]) || P <- permutations(GV--[A])].
where GV is the list of vertices of graph G.
If you would like a more efficient version, it would need a little bit more trickery.
