What is the relationship between recursive functions and the memory stack? - recursion

Is there a direct relationship between recursive functions and the memory stack? For more explanation, consider this code:
public static int triangle(int n) {
    System.out.println("Entering: n = " + n);
    if (n == 1) {
        System.out.println("Returning 1");
        return 1;
    } else {
        int temp = n + triangle(n - 1);
        System.out.println("Returning " + temp);
        return temp;
    }
}
In this example, where will the values 2, 3, 4, 5 be stored until the function returns? Note that they are returned in LIFO (last in, first out) order. Is this a special case of recursion that deals with the memory stack, or do the two always go together?

Yes, there is a direct relationship between recursive functions and the memory stack: a recursive function with a high enough call depth will crash your program simply because the stack size limit is reached (that is what we call a stack overflow).
R = recursive, I = iterative

first call:
 R     I
|_|   |_|

second call:
 R     I
|_|   |_|
|_|

third call:
 R     I
|_|   |_|
|_|
|_|

.
.
.

nth call:
 R     I
|_|   |_|
|_|
|_|
 .
 .
 .
|_|        (n frames for R, still 1 frame for I)
I hope this makes sense. For the iterative version, a single frame is pushed onto the stack, and once the work is done it is popped off before the next call pushes a similar frame. The recursive function, on the other hand, pushes a new frame onto the stack with each call it makes to itself; the frames only start coming off (LIFO: last one called, first one out) when the stopping condition is reached.
Now, to be specific to your question: each value of n is held in its own stack frame. When the stopping condition is met, the innermost call displays its n and returns, handing control back to the caller, which displays its own value of n, and so on up to the very first call. The iterative version, by contrast, displays the value of a single counter variable n whose value we keep changing.
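For comparison, here is a minimal iterative sketch of the same computation (a hypothetical triangleIterative, my own illustration, not from the question), where a single counter is reused instead of one stack frame per value:

public static int triangleIterative(int n) {
    int total = 0;
    // The per-frame variable n of the recursive version becomes a single
    // counter; no extra stack frames are created.
    for (int i = 1; i <= n; i++) {
        System.out.println("Adding: i = " + i);
        total += i;
    }
    return total;
}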
Below is a good article about stack overflow:
Very deep or infinite recursion: The most common cause of stack overflow is excessively deep or infinite recursion. Languages like Scheme, which implement tail-call optimization, allow infinite recursion of a specific sort (tail recursion) to occur without stack overflow. This works because tail-recursive calls do not take up additional stack space.
http://en.wikipedia.org/wiki/Stack_overflow
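To see the crash concretely, here is a minimal Java sketch (my own illustration, not from the article): a recursion with no stopping condition fills the stack until the JVM throws StackOverflowError.

public class Crash {
    static int depth = 0;

    static void recurse() {
        depth++;
        recurse(); // no stopping condition: one new frame per call
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            // The JVM reports that the stack size limit was reached.
            System.out.println("Stack overflow after " + depth + " calls");
        }
    }
}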

Assuming this is a C++ class method, each call to triangle(n) will push data onto the stack that looks like:
function code
int *returnAddress
int n
Where the return value doesn't get assigned until the function returns. R.S. Shaw provided a nice example image on the Call Stack Wikipedia page.
This data gets pushed onto the top of the call stack once per recursion, so the code for the last call to triangle(n) will be on top. The value stored in *returnAddress is the place in memory where the result needs to go in order to unwind the recursion.
In other words, the results themselves (for example: 1 for triangle(1), 3 for triangle(2)) end up somewhere in the function code part of the stack, not in a particular named place in memory. If you run a debugger, you should be able to track down the location of your returnAddress by placing a breakpoint inside the triangle function code.
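Since the question's code is Java, you can also watch the frames pile up without a debugger. A minimal sketch (my own, using Thread.currentThread().getStackTrace(); its absolute length includes some JVM frames, so only the growth of one frame per call is meaningful):

public class DepthDemo {
    static int triangle(int n) {
        // One extra frame appears on the stack for each recursive call.
        System.out.println("n = " + n + ", frames on stack = "
                + Thread.currentThread().getStackTrace().length);
        if (n == 1) {
            return 1;
        }
        return n + triangle(n - 1);
    }

    public static void main(String[] args) {
        triangle(5);
    }
}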
By the way, this is not a special case of recursion. This is the classic textbook case.

The temporary variable n will be on the stack, as will the parameter n-1 passed to the recursive call, as will the return address. These are all on the same stack.

As QuentinQK explained, the local variable n consumes stack space, as do the function's return address and, for a short moment, the return value and all parameters handed over to the recursive function. And this happens on each recursion level. So how much of the stack is eventually required, and whether it bursts or not, depends on how deep your recursion goes (how often the function calls itself).
For that very reason it is essential to have some final condition. In your example it is the if (n == 1) condition, which stops the recursion when that value is reached. It only works together with the n - 1 in the parameter list of the recursive call of triangle.
But what do you really want to know?


Can infinite recursion be implemented?

I understand that, for C at least, the stack frame and return address are written to the stack every time the recursive function is called, but is there an obscure way of making it not run out of memory? Obviously this is a purely hypothetical question, as I can't imagine a use case for it.
You can emulate recursion using a stack
The part of memory related to function calls and local variables (declared with int x; inside a function in C) is separate from the part of memory used for dynamic allocation (using malloc() in C). Only the former, called "the stack", is limited, and exhausting it produces a "stack overflow" error. The latter is called "the heap"; of course your computer is not magic, and it too will run out of memory at some point if you really push its limits.
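As a sketch of that idea in the question's language, here is the triangle computation from the top of the page done with an explicit java.util.ArrayDeque on the heap instead of the call stack (my own illustration):

import java.util.ArrayDeque;
import java.util.Deque;

public class EmulatedRecursion {
    static int triangle(int n) {
        Deque<Integer> stack = new ArrayDeque<>();
        // "Call" phase: push n, n-1, ..., 1, just as the recursive
        // version would stack the values up.
        while (n >= 1) {
            stack.push(n--);
        }
        // "Return" phase: pop in LIFO order and accumulate the sum.
        int result = 0;
        while (!stack.isEmpty()) {
            result += stack.pop();
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(triangle(5)); // prints 15
    }
}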
You can use tail-recursion to avoid adding layers to the call stack
Stack overflow is due to the size of the call stack. Imagine a function like this:
int f(int n)
{
    int x;
    if (n < 2)
    {
        return 1;
    }
    else
    {
        x = f(n - 1);  // must wait here for the recursive call...
        return n * x;  // ...because one multiplication remains afterwards
    }
}
When making the recursive call to f, the computer needs to keep some note of the fact that we'll need to do one more multiplication once the recursive call is completed. It takes this note by adding a layer to a "call stack", with some information on the values of variables and on where in the code we are. This requires memory, and will lead to stack overflow if the stack becomes too big.
Now compare with the following code:
int f(int n, int acc)
{
    if (n < 2)
    {
        return acc;                // the answer was accumulated along the way
    }
    else
    {
        return f(n - 1, n * acc);  // nothing left to do after this call
    }
}
This time the recursive call is directly encapsulated in the return, meaning there is no more work to do after the recursive call. Imagine you asked me to do a job and report the result to you; by making the recursive call I'm delegating some work to my friend. Then, instead of staying around waiting for my friend to report back to me so that I can report back to you, I leave immediately and tell my friend to report directly to you. This saves memory by cutting out the middle man.
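For illustration, here is a minimal Java sketch of the loop that a TCO-capable compiler could effectively produce from the accumulator version (Java itself does not perform this optimization; the rewrite here is by hand):

static int f(int n, int acc) {
    // Each tail call f(n - 1, n * acc) becomes one trip around the loop.
    while (n >= 2) {
        acc = n * acc; // the accumulator argument of the "next call"
        n = n - 1;     // the n argument of the "next call"
    }
    return acc;        // the base case reports the accumulated result
}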
Read more:
Wikipedia: Tail call
Wikipedia: Tail-recursive functions
In languages that feature lazy evaluation, you can write a seemingly infinitely-recursive function, then only evaluate it as far as required:
Haskell infinite recursion
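Java has no lazy evaluation of that kind, but java.util.stream offers a rough analogue. A minimal sketch (my own illustration): an unbounded sequence is described up front and only evaluated as far as required.

import java.util.stream.Stream;

public class LazyDemo {
    public static void main(String[] args) {
        Stream.iterate(1, n -> n + 1)        // describes an endless sequence; nothing runs yet
              .limit(5)                      // evaluate only as far as required
              .forEach(System.out::println); // prints 1 2 3 4 5
    }
}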

Tail-call recursive behavior of the BEAM bytecode instruction call_last

We were recently reading the BEAM Book as part of a reading group.
In appendix B.3.3 it states that the call_last instruction has the following behavior:

Deallocate Deallocate words of stack, then do a tail recursive call to the function of arity Arity in the same module at label Label

(where Deallocate, Arity and Label are the instruction's operands).
Based on our current understanding, tail-recursive would imply that the memory allocated on the stack can be reused from the current call.
As such, we were wondering what is being deallocated from the stack.
Additionally, we were also wondering why there is a need to deallocate from the stack before doing the tail-recursive call, instead of directly doing the tail recursive call.
In asm for CPUs, an optimized tailcall is just a jump to the function entry point. I.e. running the whole function as a loop body in the case of tail-recursion. (Without pushing a return address, so when you reach the base-case it's just one return back to the ultimate parent.)
I'm going to take a wild guess that Erlang / BEAM bytecode is remotely similar, even though I know nothing about it specifically.
When execution reaches the top of a function, it doesn't know whether it got there by recursion or a call from another function and would thus have to allocate more space if it needed any.
If you want to reuse already-allocated stack space, you'd have to further optimize the tail-recursion into an actual loop inside the function body, not recursion at all anymore.
Or to put it another way, to tailcall anything, you need the callstack in the same state it was in on function entry. Jumping instead of calling loses the opportunity to do any cleanup after the called function returns, because it returns to your caller, not to you.
But can't we just put the stack-cleanup in the recursion base-case that does actually return instead of tailcalling? Yes, but that only works if the "tailcall" is to a point in this function after allocating new space is already done, not the entry point that external callers will call. Those 2 changes are exactly the same as turning tail-recursion into a loop.
(Disclaimer: This is a guess)
A tail-recursive call does not mean that the function cannot perform other calls before it, or use the stack in the meantime. In that case, the stack allocated for those calls must be deallocated before performing the tail-recursive call. call_last deallocates the surplus stack and then behaves like call_only.
You can see an example if you erlc -S the following code:
-module(test).
-compile(export_all).

fun1([]) ->
    ok;
fun1([1|R]) ->
    fun1(R).

funN() ->
    A = list(),
    B = list(),
    fun1([A, B]).

list() ->
    [1,2,3,4].
I've annotated the relevant parts:
{function, fun1, 1, 2}.
{label,1}.
{line,[{location,"test.erl",4}]}.
{func_info,{atom,test},{atom,fun1},1}.
{label,2}.
{test,is_nonempty_list,{f,3},[{x,0}]}.
{get_list,{x,0},{x,1},{x,2}}.
{test,is_eq_exact,{f,1},[{x,1},{integer,1}]}.
{move,{x,2},{x,0}}.
{call_only,1,{f,2}}. % No stack allocated, no need to deallocate it
{label,3}.
{test,is_nil,{f,1},[{x,0}]}.
{move,{atom,ok},{x,0}}.
return.
{function, funN, 0, 5}.
{label,4}.
{line,[{location,"test.erl",10}]}.
{func_info,{atom,test},{atom,funN},0}.
{label,5}.
{allocate_zero,1,0}. % Allocate 1 slot in the stack
{call,0,{f,7}}. % Leaves the result in {x,0} (the 0 register)
{move,{x,0},{y,0}}.% Moves the previous result from {x,0} to the stack because next function needs {x,0} free
{call,0,{f,7}}. % Leaves the result in {x,0} (the 0 register)
{test_heap,4,1}.
{put_list,{x,0},nil,{x,0}}. % Create a list with only the last value, [B]
{put_list,{y,0},{x,0},{x,0}}. % Prepend A (from the stack) to the previous list, creating [A, B] ([A | [B]]) in {x,0}
{call_last,1,{f,2},1}. % Tail recursion call deallocating the stack
{function, list, 0, 7}.
{label,6}.
{line,[{location,"test.erl",15}]}.
{func_info,{atom,test},{atom,list},0}.
{label,7}.
{move,{literal,[1,2,3,4]},{x,0}}.
return.
EDIT:
To actually answer your questions:
The thread's memory is used for both the stack and the heap, which occupy the same memory block and grow towards each other from opposite ends (the thread's GC triggers when they meet).
"Allocating" in this case means increasing the space used for the stack; if that space is not going to be used anymore, it must be deallocated (returned to the memory block) so that it can be used again later (either as heap or as stack).

Recursion in Java help

I am new to the site and am not familiar with how and where to post so please excuse me. I am currently studying recursion and am having trouble understanding the output of this program. Below is the method body.
public static void Asterisk(int n)
{
    if (n < 1)
        return;
    Asterisk(n - 1);
    for (int i = 0; i < n; i++)
    {
        System.out.print("*");
    }
    System.out.println();
}
This is the output
*
**
***
****
*****
It is due to the fact that the Asterisk(n-1) call lies before the for loop.
I would think that the output should be
****
***
**
*
This is the way head recursion works. The call to the function is made before execution of other statements. So, Asterisk(5) calls Asterisk(4) before doing anything else. This further cascades into serial function calls from Asterisk(3) → Asterisk(2) → Asterisk(1) → Asterisk(0).
Now, Asterisk(0) simply returns as it passes the condition n<1. The control goes back to Asterisk(1) which now executes the rest of its code by printing n=1 stars. Then it relinquishes control to Asterisk(2) which again prints n=2 stars, and so on. Finally, Asterisk(5) prints its n=5 stars and the function calls end. This is why you see the pattern of ascending number of stars.
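To make the ordering concrete, here is a minimal sketch (my own variant, not from the question) that moves the print loop before the recursive call; it produces the descending pattern the asker expected:

public static void asteriskDescending(int n) {
    if (n < 1) {
        return;
    }
    // Print this row first, *then* recurse, so output starts with n stars.
    for (int i = 0; i < n; i++) {
        System.out.print("*");
    }
    System.out.println();
    asteriskDescending(n - 1);
}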
There are two ways to create programming loops. One is using imperative loops native to the language (for, while, etc.), and the other is using functions (functional loops). Your example presents both kinds of loops.
One loop is the unrolling of the function
Asterisk(int n)
This unrolling uses recursion, where the function calls itself. Every functional loop must know when to stop, otherwise it goes on forever and blows up the stack. This is called the "stopping condition". In your case it is :
if (n<1)
return;
There is a bidirectional equivalence between functional loops and imperative loops (for, while, etc.): you can turn any functional loop into a regular loop and vice versa.
IMO this particular exercise was meant to show you the two different ways to build loops. The outer loop is functional (you could substitute a for loop for it, as sketched below) and the inner loop is imperative.
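A minimal sketch of that substitution (a hypothetical asteriskIterative, my own illustration, not part of the exercise): the functional outer loop becomes an ordinary for loop.

public static void asteriskIterative(int n) {
    // The outer loop replaces the chain of recursive calls Asterisk(1)..Asterisk(n).
    for (int row = 1; row <= n; row++) {
        // The inner loop is unchanged from the original method.
        for (int i = 0; i < row; i++) {
            System.out.print("*");
        }
        System.out.println();
    }
}

Calling asteriskIterative(5) prints the same ascending pattern as Asterisk(5).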
Think of recursive calls in terms of a stack. A stack is a data structure which adds to the top of a pile. A real-world analogy is a pile of dishes, where the newest dish goes on the top. Recursive calls likewise add another layer to the top of the stack; once some criterion is met which prevents further recursive calls, the stack starts to unwind and we work our way back down to the original item (the first plate in the pile of dishes).
The input of a recursive method tends towards a base case, which is the termination factor that prevents the method from calling itself indefinitely (an infinite loop). Once this base condition is met, the method returns rather than calling itself again. This is how the stack is unwound.
In your method, the base case is when n < 1, and the recursive calls use the input n-1. This means the method will call itself, each time decreasing n by 1, until n < 1, i.e. n = 0. Once the base condition is met the method returns, and we start to execute the for loop. This is why the first line contains a single asterisk.
So if you run the method with an input of 5, the recursive calls build a stack of values of n like so (top of the stack first):
0
1
2
3
4
5
Then this stack is unwound starting with the top, 0, all the way down to 5.

Compiler activity during recursion; answer needed to aid comprehension.

What happens with the stack calls and so on and so forth when executing a recursive function? Does recursion even use a stack in the first place? I would appreciate an answer that helps to visualize better what happens during recursion.
Generally, a recursive call is the same as any other function call. It creates a new stack frame, saves old variables and ultimately returns to the caller, just like any old function call. This means that a recursive function can cause a stack overflow. (In fact, that's probably the easiest way to overflow your stack!)
In some languages, however, there is an exception for tail recursion. Tail recursion involves a recursive call that is the very last thing a function does (i.e. a call in tail position). This means the function can't do anything with the result of the recursive call except return it directly. Compare these two silly examples:
// Not tail-recursive: we add 1 to the result of foo()
function foo(x) {
    if (x > 0) {
        return 1 + foo(x - 1);
    } else {
        return 0;
    }
}

// Tail-recursive: we return foo() directly
// (`x - 1` happens *before* foo is called)
function foo(x) {
    if (x > 0) {
        return foo(x - 1);
    } else {
        return 0;
    }
}
If a function is tail recursive, there is no point in allocating a new stack frame at each iteration, since no information needs to be preserved. Instead, the existing stack frame can be reused, or the whole thing can be rewritten into a loop.
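For instance, here is a hand-written Java sketch of the loop the tail-recursive foo above could be rewritten into (an illustration of the transformation, not something javac performs for you):

static int foo(int x) {
    // Each tail call foo(x - 1) becomes another trip around the loop.
    while (x > 0) {
        x = x - 1;
    }
    return 0; // the base case is the only value ever returned
}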
Some languages like Scala do this, which means you can write iterative procedures in a recursive style without hitting stack overflows.
However, there is really nothing special about recursion here. If a function call is in tail position, we don't need the stack even if it's a call to a different function; we can just implement tail calls as jumps. This is mandated by certain languages (like Scheme) but cannot be implemented in Scala for JVM compatibility reasons.
Proper tail calls like this are important for enabling mutual recursion and continuation passing style without worrying about stack overflows.
So really, there is nothing fundamentally special about recursive calls as opposed to normal calls except that certain languages can only optimize direct recursion in tail position, not tail calls in general.

Quicksort and tail recursive optimization

In Introduction to Algorithms (p. 169) it talks about using tail recursion for Quicksort.
The original Quicksort algorithm earlier in the chapter is (in pseudo-code)
Quicksort(A, p, r)
{
    if (p < r)
    {
        q <- Partition(A, p, r)
        Quicksort(A, p, q)
        Quicksort(A, q+1, r)
    }
}
The optimized version using tail recursion is as follows
Quicksort(A, p, r)
{
    while (p < r)
    {
        q <- Partition(A, p, r)
        Quicksort(A, p, q)
        p <- q+1
    }
}
Where Partition partitions the array around a pivot.
The difference is that the second algorithm only calls Quicksort once to sort the LHS.
Can someone explain to me why the first algorithm could cause a stack overflow, whereas the second wouldn't? Or am I misunderstanding the book?
First let's start with a brief definition, probably not fully accurate but still workable, of what a stack overflow is.
As you probably know, there are two different kinds of memory, implemented in two different data structures: the heap and the stack.
In terms of size, the heap is bigger than the stack, and to keep it simple let's say that every time a function call is made, a new environment (local variables, parameters, etc.) is created on the stack. So given that, and the fact that the stack's size is limited, if you make too many function calls you will run out of space, and hence you will have a stack overflow.
The problem with recursion is that, since you are creating at least one environment on the stack per iteration, you occupy a lot of space in the limited stack very quickly, so stack overflows are commonly associated with recursive calls.
There is a thing called tail recursive call optimization (TCO) that reuses the same environment every time a recursive call is made, so the space occupied in the stack is constant, preventing the stack overflow issue.
Now, there are some rules for performing a tail call optimization. First, each call must be complete: the function should be able to give a result at any moment if you interrupt the execution. In SICP this is called an iterative process, even when the function is recursive.
If you analyze your first example, you will see that each iteration is defined by two recursive calls, which means that if you stop the execution at any time you won't be able to give a partial result, because the result depends on those calls finishing. In this scenario you can't reuse the stack environment, because the total information is split between all those recursive calls.
However, the second example doesn't have that problem: A is constant, and the state of p and r can be determined locally, so all the information needed to keep going is there, and TCO can be applied.
The essence of the tail recursion optimization is that there is no recursion when the program is actually executed. When the compiler or interpreter is able to kick TRO in, it essentially figures out how to rewrite your recursively-defined algorithm into a simple iterative process, with the stack not used to store nested function invocations.
The first code snippet can't be TR-optimized because there are two recursive calls in it.
Tail recursion by itself is not enough. The algorithm with the while loop can still use O(N) stack space; reducing it to O(log N) is left as an exercise in that section of CLRS.
Assume we are working in a language with array slices and tail call optimization. Consider the difference between these two algorithms:
Bad:
Quicksort(arraySlice) {
    if (arraySlice.length > 1) {
        slices = Partition(arraySlice)
        (smallerSlice, largerSlice) = sortBySize(slices)
        Quicksort(largerSlice)  // Not a tail call, requires a stack frame until it returns.
        Quicksort(smallerSlice) // Tail call, can replace the old stack frame.
    }
}
Good:
Quicksort(arraySlice) {
    if (arraySlice.length > 1) {
        slices = Partition(arraySlice)
        (smallerSlice, largerSlice) = sortBySize(slices)
        Quicksort(smallerSlice) // Not a tail call, requires a stack frame until it returns.
        Quicksort(largerSlice)  // Tail call, can replace the old stack frame.
    }
}
The second one is guaranteed to never need more than log2(length) stack frames, because smallerSlice is less than half as long as arraySlice. But for the first one, the inequality is reversed: it will always need at least log2(length) stack frames, and can require O(N) stack frames in the worst case, where smallerSlice always has length 1.
If you don't keep track of which slice is smaller or larger, you will have worst cases similar to the first, overflowing version, even though it requires only O(log(n)) stack frames on average. If you always sort the smaller slice first, you will never need more than log2(length) stack frames.
If you are using a language that doesn't have tail call optimization, you can write the second (not stack-blowing) version as:
Quicksort(arraySlice) {
    while (arraySlice.length > 1) {
        slices = Partition(arraySlice)
        (smallerSlice, arraySlice) = sortBySize(slices)
        Quicksort(smallerSlice) // Still not a tail call, requires a stack frame until it returns.
    }
}
}
Another thing worth noting: if you are implementing something like Introsort, which switches to Heapsort when the recursion depth exceeds some number proportional to log(N), you will never hit the O(N) worst-case stack memory usage of quicksort, so you technically don't need to do this. Doing this optimization (recursing on the smaller slice first) still improves the constant factor of the O(log(N)), though, so it is strongly recommended.
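For concreteness, here is a minimal Java sketch of the loop form (my own illustration; it uses a Lomuto partition rather than the CLRS one, so the recursive calls use q-1 and q+1). It iterates on the larger slice and recurses only on the smaller one, keeping the stack depth O(log n):

import java.util.Arrays;

public class Quicksort {
    // Sorts a[p..r] inclusive. The while loop replaces one recursive call;
    // the remaining call always gets the smaller half.
    static void quicksort(int[] a, int p, int r) {
        while (p < r) {
            int q = partition(a, p, r);
            if (q - p < r - q) {         // left half is smaller
                quicksort(a, p, q - 1);  // recurse on the smaller slice
                p = q + 1;               // loop on the larger slice
            } else {                     // right half is smaller (or equal)
                quicksort(a, q + 1, r);
                r = q - 1;
            }
        }
    }

    // Lomuto partition: a[r] is the pivot; returns its final index.
    static int partition(int[] a, int p, int r) {
        int pivot = a[r];
        int i = p - 1;
        for (int j = p; j < r; j++) {
            if (a[j] <= pivot) {
                i++;
                int t = a[i]; a[i] = a[j]; a[j] = t;
            }
        }
        int t = a[i + 1]; a[i + 1] = a[r]; a[r] = t;
        return i + 1;
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 9, 1, 7};
        quicksort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a)); // [1, 2, 5, 7, 9]
    }
}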
Well, the most obvious observation would be:
Most common stack overflow problem - definition
The most common cause of stack overflow is excessively deep or infinite recursion.
The second uses less deep recursion than the first (n branches per call instead of n^2), hence it is less likely to cause a stack overflow.
(so lower complexity means less chance to cause a stack overflow)
But somebody would have to add why the second can never cause a stack overflow while the first can.
Well, if you consider the complexity of the two methods, the first method obviously has more complexity than the second, since it recurses on both the LHS and the RHS; as a result there are more chances of getting a stack overflow.
Note: that doesn't mean there is absolutely no chance of getting an SO in the second method.
In the function 2 that you shared, tail call elimination is implemented. Before proceeding further, let us understand what a tail-recursive function is. If the last statement in the code is the recursive call and nothing happens after it, then it is called a tail-recursive function. So the first function is a tail-recursive function. For such a function, with some changes in the code, one can remove the last recursive call, as you showed in function 2, which performs the same work as function 1. This process is called tail recursion optimization or tail call elimination, and it results in:
Optimizing in terms of auxiliary space
Optimizing in terms of recursion call overhead
The last recursive call is eliminated by using the while loop. The good thing is that for function 2, no auxiliary space is used for the right call, as its recursion is eliminated using p <- q+1, and the overall function avoids the recursion call overhead. So whichever way the partition happens, the maximum space needed is theta(log n).
