Wikipedia about Depth First Search:
Depth-first search (DFS) is an
algorithm for traversing or searching
a tree, tree structure, or graph. One
starts at the root (selecting some
node as the root in the graph case)
and explores as far as possible along
each branch before backtracking.
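For illustration (my own sketch, not part of the Wikipedia article), here is a minimal recursive DFS over an adjacency-list graph in Python; the returns from the recursive calls are exactly the "backtracking" steps:

# Minimal recursive DFS sketch (assumes graph is a dict: node -> list of neighbours).
def dfs(graph, node, visited=None):
    if visited is None:
        visited = set()
    visited.add(node)
    print("visit", node)
    for neighbour in graph[node]:
        if neighbour not in visited:
            dfs(graph, neighbour, visited)
    # returning from this call is the "backtracking" step
    return visited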
So what is Breadth First Search?
"an algorithm that choose a starting
node, checks all nodes backtracks,
chooses the shortest path, chose neighbour nodes backtracks,
chose the shortest path, finally
finds the optimal path because of
traversing each path due to continuous
backtracking.
Regexes and find's pruning -- backtracking too?
The term "backtracking" is confusing because of its variety of uses. An SO user explained UNIX find's pruning in terms of backtracking. RegexBuddy uses the term "catastrophic backtracking" if you do not limit the scope of your regexes. It seems to be too widely used an umbrella term. So:
How do you define "backtracking" specifically for Graph Theory?
What is "backtracking" in Breadth First Search and Depth First Search?
[Added]
Good definitions of backtracking, with examples
The Brute-force method
Stallman's(?) invented term "dependency-directed backtracking"
Backtracking and regex example
Depth First Search definition.
The confusion comes in because backtracking is something that happens during search, but it also refers to a specific problem-solving technique where a lot of backtracking is done. Such programs are called backtrackers.
Picture driving into a neighborhood, always taking the first turn you see (let's assume there are no loops) until you hit a dead end, at which point you drive back to the intersection of the next unvisited street. This is the "first" kind of backtracking, and it's roughly equivalent to the colloquial usage of the word.
The more specific usage refers to a problem-solving strategy that is similar to depth-first search but backtracks when it realizes that it's not worth continuing down some subtree.
Put another way -- a naive DFS blindly visits each node until it reaches the goal. Yes, it "backtracks" on leaf nodes. But a backtracker also backtracks on useless branches. One example is searching a Boggle board for words. Each tile is surrounded by 8 others, so the tree is huge, and naive DFS can take too long. But when we see a combination like "ZZQ," we can safely stop searching from this point, since adding more letters won't make that a word.
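To make the distinction concrete, here is a hedged Python sketch of such a backtracker (the board/dictionary representation is my assumption, not from the original answer): it abandons a branch as soon as the current prefix, like "ZZQ", cannot start any word.

# Pruned DFS ("backtracker") over a letter grid; board is a list of strings,
# dictionary is a set of words. Prefixes let us cut off hopeless branches early.
def find_words(board, dictionary):
    prefixes = {w[:i] for w in dictionary for i in range(1, len(w) + 1)}
    rows, cols = len(board), len(board[0])
    found = set()

    def dfs(r, c, prefix, visited):
        prefix += board[r][c]
        if prefix not in prefixes:            # the "ZZQ" case: prune this whole subtree
            return
        if prefix in dictionary:
            found.add(prefix)
        visited.add((r, c))
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in visited:
                    dfs(nr, nc, prefix, visited)
        visited.remove((r, c))                # backtrack: free the tile for other paths

    for r in range(rows):
        for c in range(cols):
            dfs(r, c, "", set())
    return found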
I love these lectures by Prof. Julie Zelenski. She solves 8 queens, a sudoku puzzle, and a number substitution puzzle using backtracking, and everything is nicely animated.
Programming Abstractions, Lecture 10
Programming Abstractions, Lecture 11
A tree is a graph where any two vertices only have one path between them. This eliminates the possibility of cycles. When you're searching a graph, you will usually have some logic to eliminate cycles anyway, so the behavior is the same. Also, with a directed graph, you can't follow edges in the "wrong" direction.
From what I can tell, in the Stallman paper they developed a logic system that doesn't just say "yes" or "no" on a query but actually suggests fixes for incorrect queries by making the smallest number of changes. You can kind of see where the first definition of backtracking might come into play.
Related
I have an undirected graph with about 100 nodes and about 200 edges. One node is labelled 'start', one is 'end', and there are about a dozen labelled 'mustpass'.
I need to find the shortest path through this graph that starts at 'start', ends at 'end', and passes through all of the 'mustpass' nodes (in any order).
( http://3e.org/local/maize-graph.png / http://3e.org/local/maize-graph.dot.txt is the graph in question - it represents a corn maze in Lancaster, PA)
Everyone else comparing this to the Travelling Salesman Problem probably hasn't read your question carefully. In TSP, the objective is to find the shortest cycle that visits all the vertices (a Hamiltonian cycle) -- it corresponds to having every node labelled 'mustpass'.
In your case, given that you have only about a dozen labelled 'mustpass', and given that 12! is rather small (479001600), you can simply try all permutations of only the 'mustpass' nodes, and look at the shortest path from 'start' to 'end' that visits the 'mustpass' nodes in that order -- it will simply be the concatenation of the shortest paths between every two consecutive nodes in that list.
In other words, first find the shortest distance between each pair of vertices (you can use Dijkstra's algorithm or others, but with those small numbers (100 nodes), even the simplest-to-code Floyd-Warshall algorithm will run in time). Then, once you have this in a table, try all permutations of your 'mustpass' nodes, and the rest.
Something like this:
// Precomputation: find all-pairs shortest paths, e.g. using Floyd-Warshall
n = number of nodes
for i=1 to n: for j=1 to n: d[i][j] = INF
for i=1 to n: d[i][i] = 0
for each edge (i,j) with weight w: d[i][j] = d[j][i] = w
for k=1 to n:
  for i=1 to n:
    for j=1 to n:
      d[i][j] = min(d[i][j], d[i][k] + d[k][j])
// That *really* gives the shortest distance between every pair of nodes! :-)

// Now try all permutations
shortest = INF
for each permutation a[1],a[2],...,a[k] of the 'mustpass' nodes:
  shortest = min(shortest, d['start'][a[1]] + d[a[1]][a[2]] + ... + d[a[k]]['end'])
print shortest
(Of course that's not real code, and if you want the actual path you'll have to keep track of which permutation gives the shortest distance, and also what the all-pairs shortest paths are, but you get the idea.)
It will run in at most a few seconds on any reasonable language :)
[If you have n nodes and k 'mustpass' nodes, its running time is O(n^3) for the Floyd-Warshall part and O(k! n) for the permutations part, and 100^3 + (12!)(100) is practically peanuts unless you have some really restrictive constraints.]
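For reference, here is a runnable Python sketch of the same idea (the graph representation and the function name are my assumptions, not from the answer):

from itertools import permutations

INF = float("inf")

def shortest_mustpass_path(nodes, edges, start, end, mustpass):
    # edges: dict mapping (u, v) -> weight for an undirected graph
    d = {u: {v: INF for v in nodes} for u in nodes}
    for u in nodes:
        d[u][u] = 0
    for (u, v), w in edges.items():
        d[u][v] = d[v][u] = min(d[u][v], w)
    # Floyd-Warshall: all-pairs shortest distances
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # Try every ordering of the must-pass nodes
    best = INF
    for perm in permutations(mustpass):
        route = [start, *perm, end]
        best = min(best, sum(d[a][b] for a, b in zip(route, route[1:])))
    return best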
Run Dijkstra's algorithm to find the shortest paths between all of the critical nodes (start, end, and must-pass); then a depth-first traversal should tell you the shortest path through the resulting subgraph that touches all of the nodes start ... mustpasses ... end.
This is two problems... Steven Lowe pointed this out, but didn't give enough respect to the second half of the problem.
You should first discover the shortest paths between all of your critical nodes (start, end, mustpass). Once these paths are discovered, you can construct a simplified graph, where each edge in the new graph is a path from one critical node to another in the original graph. There are many pathfinding algorithms that you can use to find the shortest path here.
Once you have this new graph, though, you have exactly the Traveling Salesperson problem (well, almost... No need to return to your starting point). Any of the posts concerning this, mentioned above, will apply.
Actually, the problem you posted is similar to the traveling salesman, but I think closer to a simple pathfinding problem. Rather than needing to visit each and every node, you simply need to visit a particular set of nodes in the shortest time (distance) possible.
The reason for this is that, unlike the traveling salesman problem, a corn maze will not allow you to travel directly from any one point to any other point on the map without needing to pass through other nodes to get there.
I would actually recommend A* pathfinding as a technique to consider. You set this up by deciding which nodes have access to which other nodes directly, and what the "cost" of each hop from a particular node is. In this case, it looks like each "hop" could be of equal cost, since your nodes seem relatively closely spaced. A* can use this information to find the lowest cost path between any two points. Since you need to get from point A to point B and visit about 12 in between, even a brute force approach using pathfinding wouldn't hurt at all.
Just an alternative to consider. It does look remarkably like the traveling salesman problem, and those are good papers to read up on, but look closer and you'll see that it's only overcomplicating things. ^_^ This is coming from the mind of a video game programmer who's dealt with these kinds of things before.
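For readers unfamiliar with it, here is a generic A* sketch in Python (my own illustration; the neighbors/cost/heuristic callables are assumptions about how you might model the maze, not part of the answer):

import heapq
from itertools import count

def a_star(neighbors, cost, heuristic, start, goal):
    # neighbors(n) -> iterable of adjacent nodes; cost(u, v) -> edge weight;
    # heuristic(n) -> admissible estimate of the remaining distance to goal.
    tie = count()                          # tie-breaker so the heap never compares nodes
    open_heap = [(heuristic(start), next(tie), 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        _, _, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return g, path
        for nxt in neighbors(node):
            ng = g + cost(node, nxt)
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + heuristic(nxt), next(tie), ng, nxt, path + [nxt]))
    return float("inf"), None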
This is not a TSP problem and not NP-hard, because the original question does not require that must-pass nodes be visited only once. This makes the answer much, much simpler: just brute-force it after compiling a list of shortest paths between all must-pass nodes via Dijkstra's algorithm. There may be a better way, but a simple one would be to simply work a binary tree backwards. Imagine a list of nodes [start, a, b, c, end]. Sum the simple distances [start->a->b->c->end]; this is your new target distance to beat. Now try [start->a->c->b->end], and if that's better, set that as the target (and remember that it came from that pattern of nodes). Work backwards over the permutations:
[start->a->b->c->end]
[start->a->c->b->end]
[start->b->a->c->end]
[start->b->c->a->end]
[start->c->a->b->end]
[start->c->b->a->end]
One of those will be shortest.
(where are the 'visited multiple times' nodes, if any? They're just hidden in the shortest-path initialization step. The shortest path between a and b may contain c or even the end point. You don't need to care)
Andrew Top has the right idea:
1) Djikstra's Algorithm
2) Some TSP heuristic.
I recommend the Lin-Kernighan heuristic: it's one of the best known for any NP-complete problem. The only other thing to remember is that after you expand the graph again after step 2, you may have loops in your expanded path, so you should go around short-circuiting those (look at the degree of vertices along your path).
I'm actually not sure how good this solution will be relative to the optimum. There are probably some pathological cases to do with short circuiting. After all, this problem looks a LOT like Steiner Tree: http://en.wikipedia.org/wiki/Steiner_tree and you definitely can't approximate Steiner Tree by just contracting your graph and running Kruskal's for example.
Considering the number of nodes and edges is relatively limited, you can probably calculate every possible path and take the shortest one.
Generally this is known as the travelling salesman problem, and it has a non-deterministic polynomial runtime, no matter what algorithm you use.
http://en.wikipedia.org/wiki/Traveling_salesman_problem
The question talks about visiting the must-pass nodes in ANY order. I had been searching for a solution for a defined order of must-pass nodes. I found my answer, but since no question on Stack Overflow was similar, I'm posting it here so that the maximum number of people can benefit from it.
If the order of must-pass nodes is defined, then you could run Dijkstra's algorithm multiple times. For instance, let's assume you have to start from s, pass through k1, k2 and k3 (in that order), and stop at e. Then what you could do is run Dijkstra's algorithm between each consecutive pair of nodes. The cost and path would be given by:
dijkstras(s, k1) + dijkstras(k1, k2) + dijkstras(k2, k3) + dijkstras(k3, e)
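A small sketch of that composition using networkx (the graph G and the node names s, k1, k2, k3, e are assumptions carried over from the example above):

import networkx as nx

def fixed_order_route(G, order):
    # order: must-pass nodes in their required order, e.g. ["s", "k1", "k2", "k3", "e"]
    total, full_path = 0, []
    for u, v in zip(order, order[1:]):
        total += nx.dijkstra_path_length(G, u, v)       # cost of this leg
        leg = nx.dijkstra_path(G, u, v)                  # vertices of this leg
        full_path += leg if not full_path else leg[1:]   # avoid duplicating the join node
    return total, full_path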
How about using brute force on the dozen 'must visit' nodes? You can cover all the possible combinations of 12 nodes easily enough, and this leaves you with an optimal circuit you can follow to cover them.
Now your problem is simplified to one of finding optimal routes from the start node to the circuit, which you then follow around until you've covered them, and then find the route from that to the end.
Final path is composed of :
start -> path to circuit* -> circuit of must visit nodes -> path to end* -> end
You find the paths I marked with * like this:
Do an A* search from the start node to every point on the circuit.
For each of these, do an A* search from the next and previous node on the circuit to the end (because you can follow the circuit round in either direction).
What you end up with is a lot of search paths, and you can choose the one with the lowest cost.
There's lots of room for optimization by caching the searches, but I think this will generate good solutions.
It doesn't go anywhere near looking for an optimal solution though, because that could involve leaving the must visit circuit within the search.
One thing that is not mentioned anywhere is whether it is OK for the same vertex to be visited more than once in the path. Most of the answers here assume that it's OK to visit the same edge multiple times, but my take, given the question (a path should not visit the same vertex more than once!), is that it is not OK to visit the same vertex twice.
So a brute force approach would still apply, but you'd have to remove vertices already used when you attempt to calculate each subset of the path.
I'm trying to find out how many different BFS and DFS trees I can construct from a given graph. If this is impossible to determine exactly, then I want to know whether DFS trees have more variety than BFS trees. Of course, all of them relate to the given graph!
Thank you
Perhaps the complexities answer your question
Depth-first search
Worst case performance: O(|E|) for explicit graphs traversed without repetition, O(b^d) for implicit graphs with branching factor b searched to depth d.
Worst case space complexity: O(|V|) if the entire graph is traversed without repetition, O(longest path length searched) for implicit graphs without elimination of duplicate nodes.
&
Breadth-first search
Worst case performance: O(|E|) = O(b^d)
Worst case space complexity: O(|V|) = O(b^d)
I'm reading a book about algorithms ("Data Structures and Algorithms in C++") and have come across the following exercise:
Ex. 20. Modify cycleDetectionDFS() so that it could determine whether a particular edge is part of a cycle in an undirected graph.
In the chapter about graphs, the book reads:
Let us recall from a preceding section that depth-first search
guaranteed generating a spanning tree in which no elements of edges
used by depthFirstSearch() led to a cycle with other element of edges.
This was due to the fact that if vertices v and u belonged to edges,
then the edge(vu) was disregarded by depthFirstSearch(). A problem
arises when depthFirstSearch() is modified so that it can detect
whether a specific edge(vu) is part of a cycle (see Exercise 20).
Should such a modified depth-first search be applied to each edge
separately, then the total run would be O(E(E+V)), which could turn
into O(V^4) for dense graphs. Hence, a better method needs to be
found.
The task is to determine if two vertices are in the same set. Two
operations are needed to implement this task: finding the set to which
a vertex v belongs and uniting two sets into one if vertex v belongs
to one of them and w to another. This is known as the union-find
problem.
Later on, the author describes how to merge two sets into one when an edge passed to the function union(edge e) connects vertices in distinct sets.
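For context, here is a standard union-find sketch (my own illustration, not the book's code) of those two operations; an edge whose endpoints already share a representative would close a cycle:

class UnionFind:
    def __init__(self, vertices):
        self.parent = {v: v for v in vertices}

    def find(self, v):
        # follow parent pointers to the set representative (with path halving)
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]
            v = self.parent[v]
        return v

    def union(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return False      # u and v are already in the same set: the edge closes a cycle
        self.parent[ru] = rv
        return True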
However, I still don't know how to quickly check whether an edge is part of a cycle. Could someone give me a rough explanation of such an algorithm, related to the aforementioned union-find problem?
A rough explanation could be: check whether a link is a back link. Whenever you have a back link you have a cycle, and whenever you have a cycle you have a back link (that is true for directed and undirected graphs).
A back link is an edge that points from a descendant to an ancestor. You should know that when traversing a graph with a DFS algorithm you build a forest, and an ancestor is a node that is marked finished later in the traversal.
I gave you some pointers to where to look, let me know if that helps you clarify your problems.
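To make that concrete, here is a minimal sketch of spotting back links during an undirected DFS (the adjacency-list input is my assumption): an edge to an already-visited vertex that is not the vertex we just came from is a back link, and therefore lies on a cycle.

def find_back_links(adj):
    # adj: dict mapping vertex -> list of neighbours (undirected graph)
    visited, back_links = set(), set()

    def dfs(u, parent):
        visited.add(u)
        for v in adj[u]:
            if v not in visited:
                dfs(v, u)                          # tree edge
            elif v != parent:
                back_links.add(frozenset((u, v)))  # non-tree edge: closes a cycle
    for u in adj:
        if u not in visited:
            dfs(u, None)
    return back_links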
I've faved a question here, and the most promising answer to date implies "graph carvings". Problem is, I have no clue what that is (neither does the OP, apparently), and it sounds very promising and interesting for several uses. My Google-fu failed me on this topic, as I found no useful/free resource talking about them.
Can someone please tell me what is a 'graph carving', how I can make one for a graph, and how I can determine what makes a certain carving better suited for a task than another?
Please don't go too mathematical on me (or be ready to answer more questions): I understand what a graph, a node, and a vertex are, I can manage big-O notation, but I have no real maths background.
I think the answer given in the linked question is a little loose with terminology. I think it is describing a tree carving of a graph G. This is still not particularly google-friendly, I admit, but perhaps it will get you going on your way. The main application of this structure appears to be in one particular DFS algorithm, described in these two papers.
A possibly more clear description of the same algorithm may appear in this book.
I'm not sure stepping through this algorithm would be particularly helpful. It is a reasonably complex algorithm and the explanation would probably just parrot those given in the papers I linked. I can't claim to understand it very well myself. Perhaps the most fruitful approach would be to look at the common elements of those three links, and post specific questions about parts you don't understand.
Q1:
what is a 'graph carving'
There are two types of graph carving: Tree-Carving and Carving.
A tree-carving of a graph is a partition of the vertex set V into subsets V1,V2,...,Vk with the following properties. Each subset constitutes a node of a tree T. For every vertex v in Vj, all the neighbors of v in G belong either to Vj itself, or to Vi where Vi is adjacent to Vj in the tree T.
A carving of a graph is a partitioning of the vertex set V into a collection of subsets V1,V2,...Vk with the following properties. Each subset constitutes a node of a rooted tree T. Each non-leaf node Vj of T has a special vertex denoted by g(Vj) that belongs to p(Vj). For every vertex v in Vi, all the neighbors of v that are in ancestor sets of Vi belong to either
Vi or
Vj, where Vj is the parent of Vi in the tree T, or
Vl, where Vl is the grandparent in the tree T. In this case, however, the neighbor of v can only be g(p(Vi)).
Those definitions are from chapter 6 of the book "Approximation Algorithms for NP-Hard Problems" and from paper1. (paper1 is taken from Gain's answer; thanks, Gain.)
According to my understanding, a tree-carving or carving is a kind of representation (or simplification) of an original graph G, such that the resulting new graph still preserves the 'connection properties' of G, but with a much smaller size (fewer vertices, fewer nodes). Both methods try to discard 'local', 'similar' information while keeping the 'structural', 'vital' information, by merging some 'close' vertices into one vertex and deleting some edges.
And it seems that tree-carving is a little bit simpler and easier to understand, since in a carving, edges are also allowed to go to a single vertex in the grandparent node, which preserves more information.
Q2:
how I can make one for a graph
I only know how to get a tree-carving.
You can refer the algorithm from paper1.
It's a Depth-First-Search based algorithm.
Do a DFS; before returning from an iteration, check whether the edge is a 'bridge' edge or not. If it is, you need to remove the 'bridge' and add some 'back edges'.
You would get a DFS-partition which yields a tree-carving of G.
Q3:
how I can determine what makes a certain carving better suited for a task than another?
Sorry, I don't know. I am also new to graph theory.
If you have more questions:
What's the g function in g(Vj)?
A special node called the gray node; see paper1.
What's the p function in p(Vj)?
I am not sure; maybe p represents 'parent'. See paper1.
What's the back edge of node t?
Some edge (u, v) such that u is a descendant of t and v is an ancestor of t. See paper1.
What's a bridge?
Bridge (Wikipedia)
I'm looking for a general idea (and maybe some code example or at least pseudocode)
Now, this is from a problem that someone gave me, or rather showed me. I don't have to solve it, but I did most of the questions anyway. The problem that I'm having is this:
Let's say you have a directed weighted graph with the following nodes:
AB5, BC4, CD8, DC8, DE6, AD5, CE2, EB3, AE7
and the question is:
How many different routes are there from C to C with a distance of less than x (say, 10, 20, 30, 40)?
The different trips in the answer are: CDC, CEBC, CEBCDC, CDCEBC, CDEBC, CEBCEBC, CEBCEBCEBC.
The main problem I'm having is that when I do DFS or BFS, my implementation first chooses a node and marks it as visited, so I'm only able to find two paths, CDC and CEBC, and then my algorithm quits. If I don't mark nodes as visited, then on the next iteration (or recursive call) it will choose the same node rather than the next available route, so I have to always mark them as visited. But by doing that, how can I get, for example, CEBCEBCEBC, which is pretty much bouncing between nodes?
I've looked at all the different algorithms books that I have at home, and while every one describes how to do DFS, BFS and find shortest paths (all the good stuff), none shows how to iterate indefinitely and stop only when a path reaches a certain weight or hits a certain vertex a number of times.
So why not just keep branching and branching? At each node you evaluate two things: has this particular path exceeded the weight limit (if so, terminate the branch), and is this node where I started (in which case log the path history to an 'acceptable solutions' list)? Then make new branches, each taking a step in one possible direction.
You should not mark nodes as visited; as MikeB points out, CDCDC is a valid solution and yet it revisits D.
I'd do it like this:
Start with two lists of paths:
  Solutions (empty) and
  ActivePaths (containing one path, "C").
While ActivePaths is not empty:
  Take a path out of ActivePaths (suppose it's "CD" [8]).
  If its distance is not over the limit,
    see where you are by looking at the last node in the path ("D").
    If you're at "C", add a copy of this path to Solutions.
    Now for each possible next destination ("C", "E"):
      make a copy of this path ("CD" [8]),
      append the destination ("CDC" [8]),
      add the weight ("CDC" [16]),
      and put it in ActivePaths.
  Discard the path.
Whether this turns out to be a DFS, a BFS or something else depends on where in ActivePaths you insert and remove paths.
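Here is a runnable Python sketch of that loop (the dict-of-dicts graph format and the distance limit of 30 are my assumptions for the example data above):

def count_routes(graph, start, goal, limit):
    solutions = []
    active = [([start], 0)]                 # (path, total weight)
    while active:
        path, dist = active.pop()           # pop() from the end ~ DFS; pop(0) ~ BFS
        if dist >= limit:
            continue                        # over the limit: discard this branch
        node = path[-1]
        if node == goal and dist > 0:
            solutions.append("".join(path))
        for nxt, w in graph[node].items():
            active.append((path + [nxt], dist + w))
    return solutions

graph = {"A": {"B": 5, "D": 5, "E": 7},
         "B": {"C": 4},
         "C": {"D": 8, "E": 2},
         "D": {"C": 8, "E": 6},
         "E": {"B": 3}}
print(count_routes(graph, "C", "C", 30))    # the seven routes listed in the question

Notice that nodes are never marked as visited; only the accumulated weight terminates a branch, which is what allows routes like CEBCEBCEBC.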
No offense, but this is pretty simple and you're talking about consulting a lot of books for the answer. I'd suggest playing around with the simple examples until they become more obvious.
In fact you have two different problems:
Find all distinct cycles from C to C; we will call them C_1, C_2, ..., C_n (done with a DFS).
Each C_i has a weight w_i; then you want every combination of cycles with a total weight less than N. This is a combinatorial problem (and seems to be easily solvable with dynamic programming).
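Assuming integer weights, here is a small dynamic-programming sketch of that second step (it counts ordered sequences of the basic cycles, which matches the route count when every route decomposes uniquely into C-to-C cycles, as it does in the example above):

from functools import lru_cache

def count_cycle_routes(cycle_weights, n):
    # Number of non-empty ordered sequences of cycles whose total weight is < n.
    @lru_cache(maxsize=None)
    def ways(budget):
        if budget <= 0:
            return 0
        # The 1 counts stopping here (empty tail); each term prepends one more cycle.
        return 1 + sum(ways(budget - w) for w in cycle_weights)
    return ways(n) - 1          # subtract the empty route

# With the example graph, the basic C-to-C cycles are CDC (16), CEBC (9) and CDEBC (21):
print(count_cycle_routes((16, 9, 21), 30))   # -> 7, matching the seven routes listed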