Are there any advantages to writing a BFS tree-traversal algorithm recursively vs iteratively? It seems to me iterative is the way to go since it can be implemented in a simple loop:
1. Enqueue the root node
2. Dequeue a node and examine it
3. Enqueue its children
4. Go to step 2
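Assuming the graph is given as an adjacency-list dict (names here are illustrative), the loop above is a minimal sketch in Python:

```python
from collections import deque

def bfs(graph, root):
    """Iterative BFS: visit nodes level by level using a FIFO queue."""
    visited = [root]
    queue = deque([root])                     # step 1: enqueue the root
    while queue:
        node = queue.popleft()                # step 2: dequeue and examine
        for child in graph.get(node, []):     # step 3: enqueue its children
            if child not in visited:
                visited.append(child)
                queue.append(child)
    return visited                            # step 4: loop until queue empty

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
print(bfs(graph, "A"))  # → ['A', 'B', 'C', 'D']
```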
Is there any advantage to recursion? It seems more complex with no advantages.
Thanks in advance...
When considering algorithms, we mainly consider time complexity and space complexity.
The time complexity of iterative BFS is O(|V|+|E|), where |V| is the number of vertices and |E| is the number of edges in the graph. Recursive BFS has the same time complexity.
Likewise, the space complexity of iterative BFS is O(|V|), and the same holds for recursive BFS.
From the perspective of time and space complexity, there is no difference between the two algorithms. Since iterative BFS is easier to understand and implement, it is not surprising that people prefer it.
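For comparison, here is one way BFS can be written recursively, with each call processing one whole frontier (level). This is just a sketch to show that the asymptotics match the iterative version: O(|V|+|E|) time and O(|V|) space.

```python
def bfs_recursive(graph, frontier, visited=None):
    """Recursive BFS: each call expands one level of the traversal."""
    if visited is None:
        visited = list(frontier)
    if not frontier:
        return visited
    next_frontier = []
    for node in frontier:
        for child in graph.get(node, []):
            if child not in visited:
                visited.append(child)
                next_frontier.append(child)
    # The recursion depth equals the number of levels in the graph.
    return bfs_recursive(graph, next_frontier, visited)

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
print(bfs_recursive(graph, ["A"]))  # → ['A', 'B', 'C', 'D']
```

The extra function-call overhead buys nothing here, which supports the point above: same complexity, more moving parts.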
I have implemented the Dijkstra algorithm from the pseudocode found in the reference "Introduction to Algorithms", 3rd edition by Cormen, for the single-source shortest path problem.
My implementation was written in Python using linked lists to represent graphs in an adjacency-list representation. This means the list of nodes is a linked list, and each node has a linked list representing its edges. Furthermore, I didn't implement or use a binary heap or Fibonacci heap for the min-priority queue the algorithm needs, so whenever the procedure has to extract the node with the smallest distance from the source, I search for it in O(V) time inside the linked list of nodes.
On the other hand, the reference also provides an algorithm for DAG's (which I have implemented) using a topological sort before applying the relaxation procedure to all the edges.
With all this context, I have a Dijkstra algorithm with a complexity of O(V^2) and a DAG shortest-path algorithm with a complexity of O(V+E).
By using the timeit.default_timer() function to measure the running times of the algorithms, I have found that the Dijkstra algorithm is faster than the DAG algorithm when applied to DAGs with positive edge weights and various graph densities, for both 100 and 1000 nodes.
Shouldn't the DAG-shortest path algorithm be faster than Dijkstra for DAGs?
Your running-time analysis for both algorithms is correct, and in theory the DAG shortest-path algorithm is indeed faster than Dijkstra's algorithm on DAGs.
However, there are three possible explanations for your measurements:
The graph you used for testing is very dense. When the graph is very dense, E ≈ V^2, so the running times of both algorithms approach O(V^2).
The number of vertices is still not large enough. To check this, use a much larger graph for further testing.
The initialization of the DAG algorithm (building the structure and running the topological sort) may dominate the measured running time at these sizes.
Either way, the DAG shortest-path algorithm should be asymptotically faster than Dijkstra's algorithm.
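For reference, this is a minimal sketch of the DAG shortest-path algorithm being discussed (topological sort, then one relaxation pass over all edges); the function name and edge-list input format are my own choices, not from the book's pseudocode:

```python
from collections import defaultdict, deque

def dag_shortest_paths(edges, source):
    """Single-source shortest paths in a DAG in O(V + E) total."""
    graph = defaultdict(list)
    indegree = defaultdict(int)
    nodes = set()
    for u, v, w in edges:
        graph[u].append((v, w))
        indegree[v] += 1
        nodes.update((u, v))

    # Kahn's algorithm for a topological order: O(V + E).
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v, _ in graph[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)

    # Relax every edge exactly once, in topological order: O(V + E).
    dist = {n: float("inf") for n in nodes}
    dist[source] = 0
    for u in order:
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

edges = [("s", "a", 2), ("s", "b", 6), ("a", "b", 3), ("b", "t", 1), ("a", "t", 7)]
print(dag_shortest_paths(edges, "s"))  # → {'s': 0, 'a': 2, 'b': 5, 't': 6} (key order may vary)
```

Timing this against a linear-search Dijkstra on graphs large and sparse enough that V + E is much smaller than V^2 should show the expected gap.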
It is very tough to convert sequential code that uses recursion into equivalent parallel code written in OpenMP, CUDA, or MPI.
Why is it so ?
If a piece of code has been written as a recursive algorithm, there is a good chance that the calculations performed at each level of recursion depend on the results of the next. This implies that it is hard to perform the calculations from different recursive steps in parallel.
Another way of thinking about this is to imagine flattening the recursion into iteration (see for example Can every recursion be converted into iteration?). A recursive algorithm is likely to produce a flattened version in which each iteration depends on the results of earlier iterations, making it hard to run the iterations in parallel.
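A toy illustration of that dependency (names are mine, purely for illustration): n-fold application of a function. The result at recursion depth n needs the result at depth n-1 first, and the flattened loop inherits exactly the same chain.

```python
def iterate(f, x, n):
    """Apply f n times recursively: depth n needs depth n-1's result first."""
    if n == 0:
        return x
    return f(iterate(f, x, n - 1))

def iterate_loop(f, x, n):
    # Flattened version: iteration i cannot start before i-1 finishes,
    # so there is no independent work to hand to other cores.
    for _ in range(n):
        x = f(x)
    return x

step = lambda v: 2 * v + 1   # arbitrary example function
print(iterate(step, 0, 5), iterate_loop(step, 0, 5))  # → 31 31
```

Contrast this with divide-and-conquer recursions, where the subproblems are disjoint and genuinely parallelizable.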
Or say, do multicore CPUs process recursion faster than iteration?
Or it simply depends on how one language runs on the machine?
For example, C executes function calls with a larger cost than simple iterations.
I had this question because one day I told one of my friends that recursion isn't some amazing magic that can speed up programs, and he told me that with multicore CPUs recursion can be faster than iteration.
EDIT:
If we consider the situations where recursion is most natural (recursive data structures, function calls),
is it even possible for recursion to be faster?
EDIT on Oct 12th:
So how are multicore CPUs performing now?
Is software nowadays generally programmed for multicore CPUs?
There are really two ways to look at this problem:
1. Looking purely at the compiled code: yes, iteration is faster than recursion. This is because recursion adds a function call (= overhead), and iteration does not. However, a common form of recursion is tail recursion, where the recursive call is the last operation in the function. Many compilers optimize tail calls into iteration (though some implementations, such as CPython, do not), and in that case it makes no difference. Ergo: in some cases recursion is slower, but it is never faster.
2. From a functional programming viewpoint, most of the time recursive functions are written to be without side effects. (Having side effects in a recursive function would make it really difficult to get it to produce correct results.) If a function doesn't have side effects, then it is trivial to parallelize (thus easier to run on a multicore system). This isn't a property of recursive functions per se, but that could be the reason why your friend argues that recursion can be faster than iteration.
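A sketch of the second point: a side-effect-free divide-and-conquer function whose two recursive halves share no mutable state, so they can be evaluated concurrently. This uses Python's concurrent.futures purely to show the structure; note that CPython's GIL means threads won't actually speed up pure-Python arithmetic.

```python
from concurrent.futures import ThreadPoolExecutor

def rec_sum(xs):
    """Side-effect-free divide-and-conquer sum: the halves are independent."""
    if len(xs) <= 1:
        return xs[0] if xs else 0
    mid = len(xs) // 2
    # The two recursive calls touch disjoint data and have no side effects,
    # so they may run concurrently without any locking.
    with ThreadPoolExecutor(max_workers=2) as pool:
        left = pool.submit(rec_sum, xs[:mid])
        right = pool.submit(rec_sum, xs[mid:])
        return left.result() + right.result()

print(rec_sum(list(range(10))))  # → 45
```

The same structure expressed as a single accumulating loop would serialize the work; the recursion makes the independence explicit.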
While recursion is elegant and mathematically beautiful, it consumes a lot of resources, especially stack memory. If you have an efficient iterative solution, you should go for it.
I've read that the BFS algorithm is better suited to parallel implementation than DFS. I'm having trouble grasping the intuition of why this should be true. Can anyone explain?
Thanks
BFS is a procedure of propagating frontiers. 'Propagate' means pushing all unvisited adjacent vertices into a queue, where they can be processed independently.
In DFS, the vertices are visited one by one along a path such as A->B->C: visiting B must occur before visiting C. It is a sequential procedure that cannot be parallelized easily.
But the truth is that both BFS and DFS are hard to parallelize, because all the processing nodes have to share global information. Accessing global state always requires synchronization and communication between nodes.
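The frontier structure is what makes level-synchronous BFS amenable to parallelism: within one level, each vertex's neighbor expansion is independent, and only the merge into the visited set needs synchronization. A sequential Python sketch that makes the per-level fan-out explicit (a parallel implementation would distribute the expansion step across workers):

```python
def bfs_levels(graph, root):
    """Level-synchronous BFS: expand a whole frontier at a time."""
    visited = {root}
    frontier = [root]
    levels = [frontier]
    while frontier:
        # Each vertex's expansion is independent of the others in the
        # frontier: this is the step a parallel BFS would fan out.
        expanded = [graph.get(u, []) for u in frontier]
        # Merging into `visited` is the synchronization point.
        next_frontier = []
        for neighbors in expanded:
            for v in neighbors:
                if v not in visited:
                    visited.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
        if frontier:
            levels.append(frontier)
    return levels

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
print(bfs_levels(graph, "A"))  # → [['A'], ['B', 'C'], ['D']]
```

DFS has no analogous per-level batch of independent work, which is the intuition behind the answers above.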
There is some good information about DFS and BFS in general:
Breadth first search and depth first search
Difference Between BFS and DFS
Breadth First Search/Depth First Search Animations
What’s the difference between DFS and BFS?
The animation especially shows how BFS uses more parallel concepts. I think BFS can be implemented in parallel, but there is no straightforward parallel solution for DFS.
DFS is a sequential algorithm that visits every child once.
I'm implementing the Euclidean algorithm for finding the GCD (Greatest Common Divisor) of two integers.
Two sample implementations are given: Recursive and Iterative.
http://en.wikipedia.org/wiki/Euclidean_algorithm#Implementations
My Question:
In school I remember my professors talking about recursive functions like they were all the rage, but I have one doubt. Compared to an iterative version, don't recursive algorithms take up more stack space and therefore much more memory? Also, since calling a function incurs some overhead, aren't recursive algorithms slower than their iterative counterparts?
It depends entirely on the language. If your language implementation performs tail-call optimization (many do nowadays), the two will run at equal speed. If it does not, the recursive version will be slower and consume more (precious) stack space.
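Euclid's algorithm is a good illustration, because its recursive form is tail-recursive and so converts mechanically to a loop. In Python (which does not do tail-call optimization), the iterative form really does save stack frames:

```python
def gcd_recursive(a, b):
    """Tail-recursive Euclid: the recursive call is the last operation."""
    if b == 0:
        return a
    return gcd_recursive(b, a % b)

def gcd_iterative(a, b):
    # The tail call becomes a simple reassignment inside a loop.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd_recursive(252, 105), gcd_iterative(252, 105))  # → 21 21
```

Both run in the same number of modulo steps; only the bookkeeping differs.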
It all depends on the language and compiler. Current computers aren't really geared towards efficient recursion, but some compilers can optimize some cases of recursion to run just as efficiently as a loop (essentially, it becomes a loop in the machine code). Then again, some compilers can't.
Recursion is perhaps more beautiful in a mathematical sense, but if you feel more comfortable with iteration, just use it.