Understanding recursion

I am struggling to understand the recursion used in this dynamic programming example. Can anyone explain how it works? The objective is to find the least number of coins for a given value.
// f(n) = 1 + min over all denominations d of f(n - d)
Pseudocode:
int memo[128]; // initialized to -1
int min_coin(int n)
{
    if (n < 0) return INF;
    if (n == 0) return 0;
    if (memo[n] != -1) // incomplete: the braces/early return discussed in the answers are missing here
    int ans = INF;
    for (int i = 0; i < num_denomination; ++i)
    {
        ans = min(ans, min_coin(n - denominations[i]));
    }
    return memo[n] = ans + 1; // when does this get called?
}

This particular example is explained very well in this article at Topcoder.
Basically this recursion is using the solutions to smaller problems (least number of coins for a smaller n) to find the solution for the overall problem. The dynamic programming aspect of this is the memoization of the solutions to the sub-problems so they don't have to be recalculated every time.
And yes - there are {} missing as ring0 mentioned in his comment - the recursion should only be executed if the sub-problem has not been solved before.
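For reference, a minimal Java sketch of the corrected, memoized function (the denominations and the INF constant here are placeholders of mine, not from the original) could look like this:

import java.util.Arrays;

class CoinChange {
    static final int INF = Integer.MAX_VALUE / 2;   // large, but still safe to add 1 to
    static final int[] denominations = {1, 3, 5};   // example denominations
    static final int[] memo = new int[128];         // memo[n] == -1 means "not computed yet"

    static int minCoin(int n) {
        if (n < 0) return INF;
        if (n == 0) return 0;
        if (memo[n] != -1) return memo[n];           // the early return the original snippet is missing
        int ans = INF;
        for (int d : denominations) {
            ans = Math.min(ans, minCoin(n - d));
        }
        return memo[n] = ans + 1;                    // this runs only the first time n is seen
    }

    public static void main(String[] args) {
        Arrays.fill(memo, -1);
        System.out.println(minCoin(11));             // 3, i.e. 5 + 5 + 1
    }
}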

To answer the owner's question, when does this get called?: in a solution based on a recursive program, the function is called by itself, but it eventually returns. When does it return? From the point where the function stops calling itself.
f(a) {
    if (a > 0) f(a-1);
    display "x"
}
f(5);
f(5) calls f(4), which in turn calls f(3), which calls f(2), which calls f(1), which calls f(0).
In f(0), a is 0, so it does not call f() again; it displays "x" and then returns. It returns to the previous call, f(1), which, having finished its call to f(0), also displays "x". Then f(1) ends, f(2) displays "x", and so on, up to f(5). You get six "x"s.
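Here is a runnable Java equivalent of that sketch (the class name is mine), which makes the order of the calls and the prints visible:

class Unwind {
    // Recurse first, print afterwards: the prints happen while the stack unwinds.
    static void f(int a) {
        if (a > 0) f(a - 1);
        System.out.println("x (returning from f(" + a + "))");
    }

    public static void main(String[] args) {
        f(5); // prints six lines, starting with f(0) and ending with f(5)
    }
}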

In other terms, as ring0 has already mentioned: the code after the recursive call runs when the program reaches the base case and starts to unwind by going back up the stack (call frames). For a similar case using a factorial example, see this.
#!/usr/bin/env perl
use strict;
use IO::Handle;
use Carp qw(cluck);
STDOUT->autoflush(1);
STDERR->autoflush(1);
sub factorial {
    my $v = shift;
    dummy_func();
    return 1 if $v == 1;
    print "Variable v value: $v and its address: ", \$v,
          "\ncurrent sub factorial addr: ", \&factorial, "\n", "-" x 40;
    return $v * factorial($v - 1);
}
sub dummy_func {
    cluck;
}
factorial(5);

Related

Time and space complexity of a "nested" recursive function

In one of the previous intro-to-CS exams there was a question: calculate the space and time complexity of the function f1 as a function of n, assuming that the time complexity of malloc(n) is O(1) and its space complexity is O(n).
int f1(int n) {
    if (n < 3)
        return 1;
    int* arr = (int*) malloc(sizeof(int) * n);
    f1(f1(n - 3));
    free(arr);
    return n;
}
The official solution is: time complexity: O(2^(n/3)), space complexity: O(n^2)
I tried to solve it but I didn't know how, until I saw a note in my notebook that said: since the function returns n, we can treat f(f(n-3)) as f(n-3)+f(n-3), or as 2f(n-3). In this case the question becomes very similar to this one: Space complexity of recursive function
I tried solving it this way and I got the correct answer.
For the time complexity:
T(n)=2T(n-3)+1 , T(0)=1
T(n-3)=2T(n-3*2)+1
T(n)=2*2T(n-3*2)+2+1
T(n-3*2)=2T(n-3*3)+1
T(n)=2*2*2T(n-3*3)+2*2+2+1
...
T(n)=(2^k)T(n-3*k)+2^(k-1)+...+2^2+2+1
n-3*k=0
k=n/3
===> 2^(n/3)+...+2^2+2+1=2^(n/3)[1+(1/2)+(1/2^2)+...]=2^(n/3)*constant
Thus I got O(2^(n/3))
For the space complexity: the recursion depth is n/3, and each active call holds an array allocated with malloc of size O(n), so the peak usage is on the order of (n/3)·n, thus O(n^2).
My question:
Why can we treat f1(f1(n - 3)) as f1(n-3)+f1(n-3), or as 2f1(n-3)?
If the function didn't return n but changed it, for example return n/3 instead of return n, then how do we solve it? Do we treat it as 2f1((n-3)/3)?
If we can't always treat f1(f1(n - 3)) as f1(n-3)+f1(n-3) or as 2f1(n-3), then how do we draw the recursion tree, and how do we write and solve it using the induction method for T(n)?
Why can we treat f1(f1(n - 3)) as f1(n-3)+f1(n-3) or as 2f1(n-3)?
Because i) the nested f1 is evaluated first, and its return value is used to call the outer f1; so these nested calls are equivalent to:
int result = f1(n - 3);
f1(result);
... and ii) the return value of f1 is just its argument (except for the base case, but it doesn't matter asymptotically), so the above is further equivalent to:
f1(n - 3);
f1(n - 3); // result = n - 3
If the function didn't return n but changed it, for example return n/3 instead of return n, then how do we solve it? Do we treat it as 2f1((n-3)/3)?
Only the outer call is affected. Again, using the equivalent expression from before:
f1(n - 3); // = (n - 3) / 3
f1((n - 3) / 3);
i.e. just f1(n - 3) + f1((n - 3) / 3) for your example.
If we can't always treat f1(f1(n - 3)) as f1(n-3)+f1(n-3) or as 2f1(n-3), then how do we draw the recursion tree and how do we write and solve it using the induction method for T(n)?
You can always separate them into two separate calls as above, and again remember that only the second call is affected by the return result. If this is different to n - 3 then you would need a recursion tree instead of simple expansion. Depends on the specific problem, needless to say.
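To see the doubling concretely, here is a small Java sketch (class and counter names are mine) that mirrors f1's call structure with a global call counter; the malloc/free is irrelevant for counting calls, so it is omitted:

class F1CallCount {
    static long calls = 0;

    // Mirrors f1: the inner call's return value feeds the outer call.
    static int f1(int n) {
        calls++;
        if (n < 3) return 1;
        f1(f1(n - 3));
        return n;
    }

    public static void main(String[] args) {
        for (int n = 3; n <= 30; n += 3) {
            calls = 0;
            f1(n);
            System.out.println("n = " + n + "  calls = " + calls);
            // the count roughly doubles every time n grows by 3, i.e. about 2^(n/3)
        }
    }
}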

What happens after non-tail recursion bottoms out

I am new to recursion, and have been looking at non-tail recursion. I believe this is where the recursive call is not at the end of the function. I was looking at some recursive code examples, and I am confused about when and how the code after the recursive call is executed. To be more specific, when the current line of the program reaches the recursive call, is that last bit of code executed before actually executing the entire function again, or does the current line of execution immediately jump back to the beginning of the function, only to execute that remaining bit of code after the entire recursion bottoms out? Can anybody please clarify how this works for me?
Imagine this code here: (I'm using JS here, but it applies to all eager languages)
function add(a, b) {
    return a + b;
}
function sub(a, b) {
    return a - b;
}
function fib(n) {
    if (n < 2) {
        return n;
    } else {
        return add(
            fib(sub(n, 2)),
            fib(sub(n, 1)),
        );
    }
}
console.log(fib(10));
Now when n is 2 or more we add the results of calling fib on the two predecessors of n. The order of some of the calls is the JS engine's choice, but sub needs to be called and return before fib can get a value, and add can only be called once both of its arguments have results. This can be rewritten as tail calls using continuation-passing style:
function cadd(a, b, k) {
    k(a + b);
}
function csub(a, b, k) {
    k(a - b);
}
function cfib(n, k) {
    if (n < 2) {
        k(n);
    } else {
        csub(n, 2, n2 =>
            cfib(n2, rn2 =>
                csub(n, 1, n1 =>
                    cfib(n1, rn1 =>
                        cadd(rn2, rn1, k)))));
    }
}
cfib(10, console.log);
Here you can see that the order, which was previously the engine's choice, is made explicit, left to right. Each call does one thing, then the continuation is called with that result. Notice also that this never returns anything: "returning" is done only by calling the continuation, yet it does exactly the same thing as the previous code.
In many languages both versions would compile to the same thing.

tail rec kotlin list

I'm trying to do some operations in Kotlin that would cause a StackOverflowError.
Knowing that, I remembered that Kotlin has support for tailrec functions, so I tried to do:
private tailrec fun Turn.debuffPhase(): List<Turn> {
    val turns = listOf(this)
    if (facts.debuff == 0 || knight.damage == 0) {
        return turns
    }
    // Recursively find all possible thresholds of debuffing
    return turns + debuff(debuffsForNextThreshold()).debuffPhase()
}
To my surprise, IDEA didn't recognize it as tailrec, so I tried turning it from an extension function into a normal function:
private tailrec fun debuffPhase(turn: Turn): List<Turn> {
    val turns = listOf(turn)
    if (turn.facts.debuff == 0 || turn.knight.damage == 0) {
        return turns
    }
    // Recursively find all possible thresholds of debuffing
    val newTurn = turn.debuff(turn.debuffsForNextThreshold())
    return turns + debuffPhase(newTurn)
}
Even so it isn't accepted. Isn't the important thing that the last function call is to the same function? I know that + is a call to List's plus function, but should that make a difference? All the examples I see on the internet of tail calls in other languages allow this kind of thing.
I tried the same with an Int, which seems more commonly used than appending to lists, but got the same result:
private tailrec fun discoverBuffsNeeded(dragon: RPGChar): Int {
    val buffedDragon = dragon.buff(buff)
    if (dragon.turnsToKill(initKnight) < 1 + buffedDragon.turnsToKill(initKnight)) {
        return 0
    }
    return 1 + discoverBuffsNeeded(buffedDragon)
}
Shouldn't all of those implementations allow tail calls? I thought of other ways to solve this (like passing the list in as a MutableList parameter), but when possible I try to avoid passing collections to be mutated inside a function, and this seems like a case where that should be possible.
PS: About the question program, I'm implementing a solution to this problem.
None of your examples are tail-recursive.
A tail call is the last call in a subroutine. A recursive call is a call of a subroutine to itself. A tail-recursive call is a tail call of a subroutine to itself.
In all of your examples, the tail call is to +, not to the subroutine. So, all of those are recursive (because they call themselves), and all of those have tail calls (because every subroutine always has a "last call"), but none of them is tail-recursive (because the recursive call isn't the last call).
Infix notation can sometimes obscure what the tail call is; it is easier to see when you write every operation in prefix form or as a method call:
return plus(turns, debuff(debuffsForNextThreshold()).debuffPhase())
// or
return turns.plus(debuff(debuffsForNextThreshold()).debuffPhase())
Now it becomes much easier to see that the call to debuffPhase is not in tail position, but rather it is the call to plus (i.e. +) which is in tail position. If Kotlin had general tail calls, then that call to plus would indeed be eliminated, but AFAIK, Kotlin only has tail-recursion (like Scala), so it won't.
Without giving away an answer to your puzzle, here's a non-tail-recursive function.
fun fac(n: Int): Int =
    if (n <= 1) 1 else n * fac(n - 1)
It is not tail recursive because the recursive call is not in a tail position, as noted by Jörg's answer.
It can be transformed into a tail-recursive function using CPS,
tailrec fun fac2(n: Int, k: Int = 1): Int =
    if (n <= 1) k else fac2(n - 1, n * k)
although a better interface would likely hide the continuation in a private helper function.
fun fac3(n: Int): Int {
    tailrec fun fac_continue(n: Int, k: Int): Int =
        if (n <= 1) k else fac_continue(n - 1, n * k)
    return fac_continue(n, 1)
}

Why is the second solution faster than the first?

First:
boolean isPrime(int n) {
    if (n < 2)
        return false;
    return isPrime(n, 2);
}
private boolean isPrime(int n, int i) {
    if (i == n)
        return true;
    return (n % i == 0) ? false : isPrime(n, i + 1);
}
Second:
boolean isPrime(int n) {
    if (n < 0) n = -n;
    if (n < 2) return false;
    if (n == 2) return true;
    if (n % 2 == 0) return false;
    return rec_isPrime(n, 3);
}
boolean rec_isPrime(int n, int div) {
    if (div * div > n) return true;
    if (n % div == 0) return false;
    return rec_isPrime(n, div + 2);
}
Please explain why the second solution is better than the first. I offered the first solution in an exam and my points were reduced under the claim that the solution is not efficient. I want to know what the big difference is.
So this is a test question, and I always keep in mind that some professors have a colorful prerogative, but I can see a few reasons one might claim the first is slower:
When testing for primality you really only need to test odd numbers (ideally only primes) as potential factors. The second version seeds with an odd number, 3, and then adds 2 on every recursive call, which skips even factors and roughly halves the number of calls needed.
As @uselpa pointed out, the second snippet stops when the testing factor's square is greater than n, which effectively means all relevant odd divisors up to sqrt(n) have been accounted for. This lets it deduce that n is prime far sooner than the first version, which counts all the way up to n before declaring it prime.
One may argue that since the first version tests even divisors inside the recursive function, instead of once in the outer method like the second, each of those tests is an unnecessary frame on the call stack.
I have also seen some claims that ternary operations are slower than if-else checks, so your professor may hold this belief. [Personally, I am not convinced there is a performance difference.]
Hope this helps. Was fun to think about some primes!
The first solution has a complexity of O(n), as it takes linear time; the second solution takes O(sqrt(n)) due to this line of code: if (div * div > n) return true;, because looking for a divisor past the square root is not required. For more details you can check: Why do we check up to the square root of a prime number to determine if it is prime?
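To make the gap concrete, here is a small Java sketch (class and counter names are mine; the recursive bodies mirror the two snippets above) that counts how many recursive calls each version makes for a prime input:

class PrimeCallCount {
    static long callsLinear = 0;
    static long callsSqrt = 0;

    // First version: tries every divisor from 2 up to n - 1.
    static boolean isPrimeLinear(int n, int i) {
        callsLinear++;
        if (i == n) return true;
        return (n % i == 0) ? false : isPrimeLinear(n, i + 1);
    }

    // Second version: odd divisors only, stopping once div * div > n.
    static boolean isPrimeSqrt(int n, int div) {
        callsSqrt++;
        if (div * div > n) return true;
        if (n % div == 0) return false;
        return isPrimeSqrt(n, div + 2);
    }

    public static void main(String[] args) {
        int n = 997; // a prime, so both versions run their full length
        isPrimeLinear(n, 2);
        isPrimeSqrt(n, 3);
        System.out.println("linear: " + callsLinear + " calls, sqrt: " + callsSqrt + " calls");
        // prints: linear: 996 calls, sqrt: 16 calls
    }
}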

What does this recursive function do?

I got this question in an interview. This seems to me a broken Fibonacci-style sum generator, and it gives a stack overflow because if (n == 0) should be if (n < 3) (the exit condition is wrong). What should be the precise answer to this question? What was expected as an answer?
foo (int n) {
    if (n == 0) return 1;
    return foo(n-1) + foo(n-2);
}
UPDATE
What does this recursive function do?
What do you infer when you see these 3 lines of code? No debugging.
Yes, it's just a recursive Fibonacci method with an incorrect base case (although I don't think I'd use n < 3 either... it depends on how you define it).
As for "what was expected as an answer" - that would depend on the question. Were you asked exactly the question in the title of your post? If so, the answer is "it recurses forever until it blows up the stack when you pass it any value other than 0" - with an explanation, of course. (It will never end because either n-1 isn't 0, or n-2 isn't 0, or both.)
EDIT: The above answers the first question. To answer "What do you infer when you see these 3 lines of code" - I would infer that the developer in question has never run the code with a value other than 0, or doesn't care about whether or not the code works.
The problem with the code posted is that if we evaluate foo(1), we need to find foo(0) and foo(-1); foo(-1) then needs to find foo(-2) and foo(-3), and so on. This keeps adding calls to foo() until there is no more stack space, resulting in a stack overflow. How many calls to foo are made will depend on the size of the call stack, which is implementation specific.
When I see these lines of code I immediately get the impression that whoever wrote it hasn't thought about all the possible inputs that could be fed to the function.
To make a recursive Fibonacci function that doesn't fail for foo(1) or a negative input we get:
foo (int n) {
    if (n < 0) return 0;
    if (n == 0) return 1;
    return foo(n-1) + foo(n-2);
}
Edit: perhaps the return value for a negative number should be something else, as the Fibonacci sequence isn't really defined for negative indexes.
However, if we use the extension that fib(-n) = (-1)^(n+1) * fib(n), we could get the following code:
int fib(int n) {
    if (n == -1) {
        return 1;
    } else if (n < 0) {
        // fib(-m) = (-1)^(m+1) * fib(m); ^ is XOR in C/Java, so compute the sign explicitly
        int m = -n;
        int sign = (m % 2 == 0) ? -1 : 1;
        return sign * fib(m);
    } else if (n == 0) {
        return 1;
    } else {
        return fib(n-1) + fib(n-2);
    }
}
Suppose you call foo(1); it will make two sub-calls, foo(0) and foo(-1). As you can see, once you get to foo(-1), n will decrease indefinitely and will never reach a terminating condition.
The precise answer would be:
foo (int n) {
    if (n <= 1) return 1; // assuming the 0th and 1st fib are 1
    return foo(n-1) + foo(n-2);
}
It's a recursive Fibonacci function with the 0th element being 1.
Ignoring the incorrect termination condition, the code is the "naive" recursive implementation of the Fibonacci function. It has exponential time complexity, so it would work fine for small n, but if n is greater than, say, 50 (I'm just guessing that number), it would take "forever" to run.
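For contrast, here is a memoized sketch of mine (using the conventional base cases fib(0) = 0 and fib(1) = 1, unlike the interview snippet) that computes each value only once and handles n = 50 instantly:

import java.util.HashMap;
import java.util.Map;

class Fib {
    private static final Map<Integer, Long> memo = new HashMap<>();

    static long fib(int n) {
        if (n <= 1) return n;                 // fib(0) = 0, fib(1) = 1
        Long cached = memo.get(n);
        if (cached != null) return cached;    // reuse a previously computed value
        long result = fib(n - 1) + fib(n - 2);
        memo.put(n, result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(50));          // 12586269025, returned immediately
    }
}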
