difference between structural recursion and accumulative recursion

I have been learning Scheme and I would like to know the difference between the two. Thanks.

Accumulative recursion uses an extra parameter in which we collect new information as we go deeper into the recursion. The computed value is then passed unchanged back up through the layers of recursion.
Structural recursion, by contrast, performs much of the work on the way back up through the layers of recursion. Accumulative recursion is often more efficient than structural recursion.
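To make the contrast concrete, here is a small Scheme sketch I'm adding (not part of the original answer) that sums a list of numbers both ways. The structural version does its addition on the way back up; the accumulative version carries a running total down and returns it unchanged (which also makes the recursive call a tail call):

;; Structural recursion: the addition happens as each call returns.
(define (sum-list lst)
  (if (null? lst)
      0
      (+ (car lst) (sum-list (cdr lst)))))

;; Accumulative recursion: a running total is threaded down the calls
;; and the base case returns it unchanged.
(define (sum-list-acc lst acc)
  (if (null? lst)
      acc
      (sum-list-acc (cdr lst) (+ acc (car lst)))))

;; (sum-list '(1 2 3 4))       => 10
;; (sum-list-acc '(1 2 3 4) 0) => 10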

Related

General recursion to tail-recursion

Is it theoretically possible to transform every kind of general-recursion into tail-recursion? Are they equivalent for example from a lambda-calculus point of view? That's a debate between me and an acquaintance.
My view is that it's not possible every time. For example, if you have a function which calls itself recursively twice or three times, then you can't make all the recursive calls into tail calls, right? Or is there always a way to reduce the number of recursive calls to one single recursive call?
No. If you cannot rewrite your algorithm to make only tail calls, as in tree traversal, then at least one call will not be in tail position.
Some may say a loop plus an explicit stack is iterative, but IMO it's still recursion, and a tree traversal will grow that explicit stack just as general recursion grows the call stack.
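To illustrate that last point, here is a sketch of my own (not the original poster's) of a tree sum written with a loop and an explicit stack in Scheme, assuming hypothetical leaf?, node-value, node-left and node-right accessors like the ones used in the sum-tree example further down. The pending subtrees that a recursive version would keep on the call stack are simply kept in a list instead, so the memory growth for a deep tree is the same:

(define (sum-tree-iterative tree)
  ;; `stack` is an ordinary list used as an explicit stack of
  ;; subtrees still to be visited; `total` accumulates the sum.
  (let loop ((stack (list tree)) (total 0))
    (if (null? stack)
        total
        (let ((node (car stack))
              (rest (cdr stack)))
          (if (leaf? node)
              (loop rest (+ total (node-value node)))
              (loop (cons (node-left node)
                          (cons (node-right node) rest))
                    total))))))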

Recursive functions difficult to comprehend

I am learning data structures and algorithms. I found it especially difficult to understand recursion.
So I have the following questions. But they are not related to any specific code.
When I implement methods, when/where should I consider recursion?
In general coding convention, should I prefer recursion over simple iteration if they are both feasible?
How to actually comprehend most possible forms of recursion so I can think of them when I need? What is the best way to learn it? (Any related book or website?) Is there any pattern?
I know the question may sound unconstructive if you find recursion simple and natural.
But for me it doesn't align well with my intuition. I do appreciate any help.
1
Very often, recursive solutions to problems are smaller when the data is self-similar. E.g., if you have a binary tree and you want to get the sum of all the leaf nodes, you define sum-tree as: if it's a leaf node, its sum is its value; if it's not a leaf node, its sum is the sum of both sub-trees.
Here's a Scheme implementation of that description:
(define (sum-tree tree)
  (if (leaf? tree)
      (node-value tree)
      (+ (sum-tree (node-left tree))
         (sum-tree (node-right tree)))))
Or the same in Java, defined as a method in the Node class.
public int sum()
{
    if ( isLeaf() )
        return value;
    else
        return left.sum() + right.sum();
}
An iterative solution to this would be longer and harder to read. In this case you should prefer recursion.
2
It depends. If you are programming in Python or Java you should not, since they don't optimize tail calls. With Scheme, however, it's the only way to go. If your language supports tail recursion, you should pick recursion when it makes for clearer code.
3
Learn by doing. You need to write some algorithms that use recursion as a tool. Use pen and paper to trace the stack if you are unsure of the flow. Learning some Scheme or a similar functional language might help you a lot.
Recursion can be used when you are repeating the same thing over and over. For example, when you are traversing a tree, you can use a recursive method to visit the left or right child.
I would go for the one that is easier to read. Generally, simple iteration will be faster, as it does not have any overhead (recursion has some overhead, and can cause a stack overflow if the levels are too deep, while simple iteration won't). But in some cases, writing a recursive function is a lot easier than writing the equivalent with simple iteration.
I would rather see the problem first and then decide whether I need recursion to solve it, not vice versa. Any algorithms book should be good enough. Perhaps you can start by reading http://en.wikipedia.org/wiki/Recursion. There is a simple example of recursion there, which I think you will also be able to implement using simple iteration.
At first, wrapping my head around recursion was hard for me as well. I learned recursion in school with Java, and I found I would use recursion more often than iterators, since iterators were annoying to write in Java. However, when I learned Ruby I found myself writing recursive methods less and less. Then I learned Elixir and Erlang and found myself writing a lot of recursive functions. My point? Some tools lend themselves to a certain style of writing.
Now to answer your questions: since you're just starting to learn recursion, I would suggest diving deep into it and getting comfortable by writing recursive functions as much as you can.
Certain tasks are much better off with recursion (e.g. the Fibonacci sequence, traversing trees, etc.). For some others you're better off writing a simple loop. However, note that you can write any recursive method with a loop. It might get tricky on certain occasions, though.
All in all, recursion is actually a pretty cool concept once you get the hang of it.
Take a look at this question that relates to recursion: Erlang exercise, creating lists
I'd go for a study of some well-known recursive algorithms. For instance, you could try to implement a factorial computation, or to get all the path lengths in a tree.
By doing that you'll (hopefully) see how the recursive approach helps to simplify the code, and why it is a good approach in these particular cases. This could give you some ideas for future applications :)

Dynamic Programming - top-down vs bottom-up

What I have learnt is that dynamic programming (DP) is of two kinds: top-down and bottom-up.
In top-down, you use recursion along with memoization. In bottom-up, you just fill an array (a table).
Also, both these methods have the same time complexity. Personally, I find the top-down approach easier and more natural to follow. Is it true that a given DP problem can be solved using either of the approaches? Or will I ever face a problem which can only be solved by one of the two methods?
Well, I believe theoretically you should be able to solve a DP problem with either approach. However, there are instances when the bottom-up approach can become too expensive. Consider a knapsack problem with knapsack_size = 200,000 and num_items = 2000. Filling in a two-dimensional DP table of that size with just ints is not going to be possible; you'll exhaust the main memory of an ordinary computer. Moreover, you do not need to fill in all the entries of the table to reach the desired final result. A recursive top-down approach is far superior in a case like this. Hope it helps.
Bottom-up DP requires you to see precisely how the recursion builds the complete solution, i.e. what kinds of subproblems are created and how they are filled in from the base cases, so it is somewhat harder to write. In top-down you only have to write a backtracking solution (which is still hard to think of) and then read off its states; the states are the arguments of the recursive function that vary across recursive calls.
Now coming to the time complexities:
Asymptotically they are the same, but bottom-up DP is usually faster in practice since it doesn't involve any function calls; it works entirely off the table entries, while top-down DP makes recursive function calls and thereby builds an implicit stack.
PS: to see the difference in running time between top-down and bottom-up, try the ASSIGNMENTS problem on SPOJ.
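To make the two styles concrete, here is an added sketch (not from any of the answers) of Fibonacci in Scheme written both ways. The top-down version memoizes results in a vector as the recursion discovers them; the bottom-up version fills the same table in index order with a plain loop:

;; Top-down: recursion + memoization.
(define (fib-top-down n)
  (let ((memo (make-vector (+ n 1) #f)))        ; #f marks "not yet computed"
    (define (fib k)
      (cond ((< k 2) k)
            ((vector-ref memo k))               ; cached entry: return it
            (else
             (let ((result (+ (fib (- k 1)) (fib (- k 2)))))
               (vector-set! memo k result)
               result))))
    (fib n)))

;; Bottom-up: fill the table from the base cases upward, no recursion.
(define (fib-bottom-up n)
  (if (< n 2)
      n
      (let ((table (make-vector (+ n 1) 0)))
        (vector-set! table 1 1)
        (let loop ((k 2))
          (if (> k n)
              (vector-ref table n)
              (begin
                (vector-set! table k (+ (vector-ref table (- k 1))
                                        (vector-ref table (- k 2))))
                (loop (+ k 1))))))))

Both run in O(n) time; the bottom-up version avoids the function-call overhead and the deep call stack, which is the practical difference described above.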
When solving a dynamic programming problem you may consider two things...
In bottom-up you have to fill an entire level before going to the next upper level. But with top-down, an entire subtree of subproblems can be skipped if it is not needed. In this way top-down can save a lot of extra calculation. So for optimal time:
if not all sub-problems need to be solved
    top-down
else
    bottom-up
There is an advantage to bottom-up in terms of space requirements, because it avoids the stack space used by the recursive top-down approach. So for optimal space:
bottom-up
However, if you feel the recursion is not too deep but is very wide, and tabulation would lead to a lot of unnecessary calculation, you can go for the top-down approach with memoization.

iterative or recursive to implement a binary search tree?

I'm learning Data Structures & Algorithms now.
My lecture notes have an implementation of a binary search tree which is implemented using a recursive method. That is an elegant way, but my question is: in real-life code, should I implement a binary search tree recursively? Won't it generate a deep call stack if the tree has a large height/depth?
I understand that recursion is a key concept for understanding lots of data structures, but would you choose to use recursion in real-life code?
A tree is recursive by nature. Each node of a tree represents a subtree, and each child of each node represents a subtree of that subtree, so recursion is the best bet, especially in practice where other people might have to edit and maintain your code.
Now, IF depth becomes a problem for your call stack, then I'm afraid there are deeper problems with your data structure (either it's monstrously huge, or it's very unbalanced).
"I understand that recursive is a key concept to understand lots of
data structure, but will you choose to use recursive in real life
code?"
After first learning about recursion I felt the same way. However, having been working in the Software industry for over a year now, I can say that I have used the concept of recursion to solve several problems. There are often times that recursion is cleaner, easier to understand/read, and just downright better. And to emphasize a point in the previous answer, a tree is a recursive data structure. IMO, there is no other way to traverse a BST :)
Many times, the compiler can optimize your code, to avoid creating a new stack frame for each recursive call (look up tail recursion, for example). Of course, it all depends on the algorithm and on your data structure. If the tree is reasonably balanced, I don't think a recursive algorithm should cause any problems.
It's true that recursion is intuitive and elegant and that it produces code that is clear and concise. It's also true that some methods, such as quicksort, DFS, etc., are really hard to implement iteratively.
But in practice, recursive implementations are almost always slower than their iterative counterparts because of all the function calls (to really understand the performance hit, I suggest you look at how much bookkeeping the generated assembly has to do for a single function call).
The optimizations we talk about are not applicable to every recursive method in general, and many compilers and interpreters don't even support them.
So in summary, if you are writing something performance-critical such as a data structure, stay away from recursion (or use it if you are sure that your compiler/interpreter has you covered).
PS: CLRS (Introduction to Algorithms, page 290, last line) suggests that the iterative search procedure for a BST is faster than the recursive one.
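For reference, here is a small added sketch (mine, not from the answers) of a BST search written both ways in Scheme, assuming an empty subtree is represented as #f and hypothetical node-key, node-left and node-right accessors. Note that the recursive version is already a tail call, so in Scheme it will not grow the stack at all; the named-let version is just the same loop written explicitly:

;; Recursive: the recursive call is in tail position.
(define (bst-search node key)
  (cond ((not node) #f)                          ; empty subtree: not found
        ((= key (node-key node)) node)
        ((< key (node-key node)) (bst-search (node-left node) key))
        (else (bst-search (node-right node) key))))

;; Iterative: the same logic as a named-let loop.
(define (bst-search-iter node key)
  (let loop ((current node))
    (cond ((not current) #f)
          ((= key (node-key current)) current)
          ((< key (node-key current)) (loop (node-left current)))
          (else (loop (node-right current))))))

In a language without tail-call elimination (Java, Python), the iterative version is the one that avoids a call stack proportional to the tree's height.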

What are the advantages and disadvantages of recursion?

With respect to using recursion over non-recursive methods in sorting algorithms or, for that matter, any algorithm, what are its pros and cons?
For the most part recursion is slower, and takes up more of the stack as well. The main advantage of recursion is that for problems like tree traversal it makes the algorithm a little easier or more "elegant".
Recursion means a function calls itself repeatedly.
It uses the system stack to accomplish its task. The stack is LIFO: when a function is called, control moves to where the function is defined, and the return address is pushed onto the stack so execution can come back.
Secondly, it can reduce the size and complexity of the code (though not, by itself, the time complexity of the program).
Though a bit off-topic, this is related and a must-read: Recursion vs Iteration
All algorithms can be defined recursively. That makes it much, much easier to visualize and prove.
Some algorithms (e.g., the Ackermann Function) cannot (easily) be specified iteratively.
A recursive implementation will use more memory than a loop if tail call optimization can't be performed. While iteration may use less memory than a recursive function that can't be optimized, it has some limitations in its expressive power.
Any algorithm implemented using recursion can also be implemented using iteration.
Why not to use recursion
It is usually slower due to the overhead of maintaining the stack.
It usually uses more memory for the stack.
Why to use recursion
Recursion adds clarity and (sometimes) reduces the time needed to write and debug code (but doesn't necessarily reduce space requirements or speed of execution).
Can reduce the complexity of the code (though not the asymptotic time complexity).
Works well for problems based on tree structures.
For example, the Tower of Hanoi problem is more easily solved using recursion than iteration (see the sketch below).
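As an added illustration (a sketch of my own, not part of the original answer), the complete move sequence for the Tower of Hanoi is only a few lines when written recursively:

;; Move n discs from peg `from` to peg `to`, using `via` as the spare peg.
(define (hanoi n from to via)
  (if (> n 0)
      (begin
        (hanoi (- n 1) from via to)              ; park n-1 discs on the spare peg
        (display (list 'move 'disc n 'from from 'to to))
        (newline)
        (hanoi (- n 1) via to from))))           ; move them back on top

;; (hanoi 3 'A 'C 'B) prints the 7 moves for three discs.

An iterative solution exists, but it is considerably harder to write and to read.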
I personally prefer iteration over recursive functions, especially if your function has complex/heavy logic and the number of iterations is large. This is because every recursive call grows the call stack. It could potentially overflow the stack if the workload is too large, and it can also slow the process down.
To start:
Pros:
It is essentially the only way of implementing a variable number of nested loops (and the only elegant way of implementing a large fixed number of nested loops).
Cons:
Recursive methods will often throw a StackOverflowException when processing big sets. Iterative loops don't have this problem.
To start :
Recursion:
A function that calls itself is called a recursive function, and this technique is called recursion.
Pros:
1. It can reduce unnecessary calling of functions.
2. Through recursion one can solve some problems in an easy way whose iterative solutions are big and complex.
3. Extremely useful when the same solution applies to each sub-problem.
Cons:
1. A recursive solution, while often logical, can be very difficult to trace and debug.
2. In a recursive function we must have a base case somewhere (an if statement that returns without making the recursive call), otherwise the function will never return.
3. Recursion generally uses more processor time.
Expressiveness
Many problems are naturally expressed with recursion, such as Fibonacci, merge sort, and quicksort. In this respect, the code is written for humans, not machines.
Immutability
Iterative solutions often rely on mutating temporary variables, which can make the code harder to read. This can be avoided with recursion.
Performance
Recursion is not stack-friendly. The stack can overflow when the recursion is too deep or tail-call optimization is not supported.
Situations will arise where you have to abandon recursion even in a problem where recursion appears to be to your advantage. If the recursion would have to go thousands of calls deep, it can cause a stack overflow error even though your code is not stuck in infinite recursion. Most programming languages limit the depth of the call stack, so if your recursion goes beyond this limit, you might consider not using recursion.
We should use recursion in following scenarios:
when we don't know the number of iterations in advance, for example when our function's exit condition is based on dynamic programming (memoization)
when we need to perform operations on the elements in reverse order, meaning we want to process the last element first and then n-1, n-2, and so on down to the first element (see the sketch below)
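A tiny sketch of that second case (my addition, in Scheme): printing the elements of a list in reverse order by recursing to the end first and doing the work on the way back:

(define (print-reversed lst)
  ;; Recurse to the end of the list first, then print on the way back,
  ;; so the last element comes out first.
  (if (not (null? lst))
      (begin
        (print-reversed (cdr lst))
        (display (car lst))
        (newline))))

;; (print-reversed '(1 2 3)) prints 3, then 2, then 1.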
Recursion can also save multiple traversals. And it can be useful if we can divide the work so that the full call stack is never needed all at once, like:
int N = 10;
int output = process(N) + process(N / 2);

public int process(int n) {
    // Base case: n <= 1 (rather than n == 1) also catches the case where
    // the n-2 call skips past 1, which would otherwise recurse forever.
    if (n == N / 2 + 1 || n <= 1) {
        return 1;
    }
    return process(n - 1) + process(n - 2);
}
In this case only about half of the stack needs to be allocated at any given time.
Recursion gets a bad rap; I'm always surprised by the number of developers that won't even touch recursion because someone told them it was evil incarnate.
I've learned through trial and error that, when done properly, recursion can be one of the fastest ways to iterate over something. It is not a hard and fast rule, and each language/compiler/engine has its own quirks, so mileage will vary.
In JavaScript I can often speed up an iterative process by introducing recursion, with the added benefit of reducing side effects and making the code clearer, more concise, and more reusable. Also, pro tip: it's possible to get around the stack overflow issue (and no, you don't just disable the warning).
My personal Pros & Cons:
Pros:
- Reduces side effects.
- Makes code more concise and easier to reason about.
- Can reduce system resource usage and sometimes perform better than a traditional for loop.
Cons:
- Can lead to stack overflow.
- More complicated to set up than a traditional for loop.
Mileage will vary depending on language/compiler/engine.

Resources