Time and space complexity of a "nested" recursive function

In one of the previous intro-to-CS exams there was this question: calculate the space and time complexity of the function f1 as a function of n; assume that the time complexity of malloc(n) is O(1) and its space complexity is O(n).
int f1(int n) {
    if (n < 3)
        return 1;
    int* arr = (int*) malloc(sizeof(int) * n);
    f1(f1(n - 3));
    free(arr);
    return n;
}
The official solution is: time complexity O(2^(n/3)), space complexity O(n^2).
I tried to solve it but I didn't know how, until I saw a note in my notebook that said: since the function returns n, we can treat f1(f1(n-3)) as f1(n-3)+f1(n-3), or as 2f1(n-3). In this case the question becomes very similar to this one: Space complexity of recursive function
I tried solving it this way and got the correct answer.
For the time complexity:
T(n)=2T(n-3)+1 , T(0)=1
T(n-3)=2T(n-3*2)+1
T(n)=2*2T(n-3*2)+2+1
T(n-3*2)=2T(n-3*3)+1
T(n)=2*2*2T(n-3*3)+2*2+2+1
...
T(n)=(2^k)T(n-3*k)+2^(k-1)+...+2^2+2+1
n-3*k=0
k=n/3
===> 2^(n/3)+...+2^2+2+1=2^(n/3)[1+(1/2)+(1/2^2)+...]=2^(n/3)*constant
Thus I got O(2^(n/3))
For the space complexity: the recursion depth is n/3, and each call on the deepest path holds a malloc'd array of O(n) ints, so the total live memory is at most (n/3)*n, thus O(n^2).
My questions:
Why can we treat f1(f1(n - 3)) as f1(n-3)+f1(n-3), or as 2f1(n-3)?
If the function didn't return n but changed it, for example return n/3 instead of return n, then how do we solve it? Do we treat it as 2f1((n-3)/3)?
If we can't always treat f1(f1(n - 3)) as f1(n-3)+f1(n-3) or as 2f1(n-3), then how do we draw the recursion tree, and how do we write and solve the recurrence T(n)?

Why can we treat f1(f1(n - 3)) as f1(n-3)+f1(n-3) or as 2f1(n-3)?
Because i) the nested f1 is evaluated first, and its return value is used to call the outer f1; so these nested calls are equivalent to:
int result = f1(n - 3);
f1(result);
... and ii) the return value of f1 is just its argument (except for the base case, but it doesn't matter asymptotically), so the above is further equivalent to:
f1(n - 3);
f1(n - 3); // result = n - 3
If the function didn't return n but changed it, for example return n/3 instead of return n, then how do we solve it? Do we treat it as 2f1((n-3)/3)?
Only the outer call is affected. Again, using the equivalent expression from before:
f1(n - 3); // = (n - 3) / 3
f1((n - 3) / 3);
i.e. just f1(n - 3) + f1((n - 3) / 3) for your example.
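In cost terms, the recurrence for this variant would then be T(n) = T(n-3) + T((n-3)/3) + O(1).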
If we can't always treat f1(f1(n - 3)) as f1(n-3)+f1(n-3) or as 2f1(n-3), then how do we draw the recursion tree, and how do we write and solve the recurrence T(n)?
You can always separate them into two calls as above, and again remember that only the second call is affected by the return value. If that value is different from n - 3, then you may need a recursion tree instead of a simple expansion. It depends on the specific problem, needless to say.
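If you want to sanity-check the O(2^(n/3)) bound empirically, you can instrument the function with a call counter. A minimal sketch (the counter and the loop in main are my additions for measurement only; the function body is otherwise the one from the question):

#include <stdio.h>
#include <stdlib.h>

static long calls = 0;

int f1(int n) {
    calls++;                 /* count every invocation */
    if (n < 3)
        return 1;
    int* arr = (int*) malloc(sizeof(int) * n);
    f1(f1(n - 3));
    free(arr);
    return n;
}

int main(void) {
    for (int n = 3; n <= 30; n += 3) {
        calls = 0;
        f1(n);
        printf("n = %2d  calls = %ld\n", n, calls);
    }
    return 0;
}

The call count should roughly double each time n grows by 3, matching T(n) = 2T(n-3) + 1.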


How to make Pre and Post conditions for recursive functions in SPARK?

I'm translating an exercise I made in Dafny into SPARK, where one verifies a tail recursive function against a recursive one. The Dafny source (censored, because it might still be used for classes):
function Sum(n:nat):nat
  decreases n
{
  if n==0 then n else n+Sum(n-1)
}

method ComputeSum(n:nat) returns (s:nat)
  ensures s == Sum(n)
{
  s := 0;
  // ...censored...
}
What I got in SPARK so far:
function Sum (n : in Natural) return Natural is
begin
   if n = 0 then
      return n;
   else
      return n + Sum(n - 1);
   end if;
end Sum;

function ComputeSum (n : in Natural) return Natural with
   Post => ComputeSum'Result = Sum(n)
is
   s : Natural := 0;
begin
   -- ...censored...
   return s;
end ComputeSum;
I cannot seem to figure out how to express the decreases n condition (which now that I think about it might be a little odd... but I got graded for it a few years back so who am I to judge, and the question remains how to get it done). As a result I get warnings of possible overflow and/or infinite recursion.
I'm guessing there is a pre- or post-condition to be added. I tried Pre => n <= 1, which obviously does not overflow, but I still get the warning. Adding Post => Sum'Result <= n**n on top of that makes the warning go away, but that condition gets a "postcondition might fail" warning, which isn't right, but I guess the prover can't tell. It's also not really the expression I should check against, but I cannot seem to figure out what other Post I'm looking for. Possibly something very close to the recursive expression, but none of my attempts work. I must be missing out on some language construct...
So, how could I express the recursive constraints?
Edit 1:
Following links to this SO answer and this SPARK doc section, I tried this:
function Sum (n : in Natural) return Natural is
   (if n = 0 then 0 else n + Sum(n - 1))
with
   Pre => (n in 0 .. 2),
   Contract_Cases => (n = 0  => Sum'Result = 0,
                      n >= 1 => Sum'Result = n + Sum(n - 1)),
   Subprogram_Variant => (Decreases => n);
However getting these warnings from SPARK:
spark.adb:32:30: medium: overflow check might fail [reason for check: result of addition must fit in a 32-bits machine integer][#0]
spark.adb:36:56: warning: call to "Sum" within its postcondition will lead to infinite recursion
If you want to prove that the result of some tail-recursive summation function equals the result of a given recursive summation function for some value N, then it should, in principle, suffice to only define the recursive function (as an expression function) without any post-condition. You then only need to mention the recursive (expression) function in the post-condition of the tail-recursive function (note that there was no post-condition (ensures) on the recursive function in Dafny either).
However, as one of SPARK's primary goals is to prove the absence of runtime errors, you do have to prove that overflow cannot occur, and for this reason you need a post-condition on the recursive function. A reasonable choice for such a post-condition is, as @Jeffrey Carter already suggested in the comments, the explicit summation formula for an arithmetic progression:
Sum (N) = N * (1 + N) / 2
The choice is actually very attractive as with this formula we can now also functionally validate the recursive function itself against a well-known mathematically explicit expression for computing the sum of a series of natural numbers.
Unfortunately, using this formula as-is will only bring you halfway. In SPARK (and Ada as well), pre- and post-conditions are optionally executable (see also RM 11.4.2 and section 5.11.1 in the SPARK Reference Guide) and must therefore themselves be free of runtime errors. As a consequence, the formula as-is only allows you to prove that no overflow occurs for any positive number up until
max N s.t. N * (1 + N) <= Integer'Last <-> N = 46340
as in the post-condition, the multiplication is not allowed to overflow either (note that Natural'Last = Integer'Last = 2**31 - 1).
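(As a check: 46340 * 46341 = 2_147_441_940, which still fits, while 46341 * 46342 = 2_147_534_622 already exceeds Integer'Last = 2_147_483_647.)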
To work around this, you'll need to make use of the big integers package that has been introduced in the Ada 202x standard library (see also RM A.5.6; this package is already included in GNAT CE 2021 and GNAT FSF 11.2). Big integers are unbounded and computations with these integers never overflow. Using these integers, one can prove that overflow will not occur for any positive number up until
max N s.t. N * (1 + N) / 2 <= Natural'Last <-> N = 65535 = 2**16 - 1
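(Again as a check: 65535 * 65536 / 2 = 2_147_450_880 <= Natural'Last = 2_147_483_647, while 65536 * 65537 / 2 = 2_147_516_416 already exceeds it.)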
The usage of these integers in a post-condition is illustrated in the example below.
Some final notes:
The Subprogram_Variant aspect is only needed to prove that a recursive subprogram will eventually terminate. Such a proof of termination must be requested explicitly by adding an annotation to the function (also shown in the example below and as discussed in the SPARK documentation pointed out by @egilhh in the comments). The Subprogram_Variant aspect is, however, not needed for your initial purpose: proving that the result of some tail-recursive summation function equals the result of a given recursive summation function for some value N.
To compile a program that uses functions from the new Ada 202x standard library, use compiler option -gnat2020.
While I use a subtype to constrain the range of permissible values for N, you could also use a precondition. This should not make any difference. However, in SPARK (and Ada as well), it is in general considered best practice to express constraints using (sub)types as much as possible.
Consider counterexamples as possible clues rather than facts: they are generated optionally by some solvers and may or may not make sense. See also section 7.2.6 in the SPARK user's guide.
main.adb
with Ada.Numerics.Big_Numbers.Big_Integers;

procedure Main with SPARK_Mode is

   package BI renames Ada.Numerics.Big_Numbers.Big_Integers;
   use type BI.Valid_Big_Integer;

   -- Conversion functions.
   function To_Big (Arg : Integer) return BI.Valid_Big_Integer renames BI.To_Big_Integer;
   function To_Int (Arg : BI.Valid_Big_Integer) return Integer renames BI.To_Integer;

   subtype Domain is Natural range 0 .. 2**16 - 1;

   function Sum (N : Domain) return Natural is
     (if N = 0 then 0 else N + Sum (N - 1))
   with
     Post => Sum'Result = To_Int (To_Big (N) * (1 + To_Big (N)) / 2),
     Subprogram_Variant => (Decreases => N);

   -- Request a proof that Sum will terminate for all possible values of N.
   pragma Annotate (GNATprove, Terminating, Sum);

begin
   null;
end Main;
output (gnatprove)
$ gnatprove -Pdefault.gpr --output=oneline --report=all --level=1 --prover=z3
Phase 1 of 2: generation of Global contracts ...
Phase 2 of 2: flow analysis and proof ...
main.adb:13:13: info: subprogram "Sum" will terminate, terminating annotation has been proved
main.adb:14:30: info: overflow check proved
main.adb:14:32: info: subprogram variant proved
main.adb:14:39: info: range check proved
main.adb:16:18: info: postcondition proved
main.adb:16:31: info: range check proved
main.adb:16:53: info: predicate check proved
main.adb:16:69: info: division check proved
main.adb:16:71: info: predicate check proved
Summary logged in [...]/gnatprove.out
ADDENDUM (in response to comment)
So you can state the post-condition recursively, but that does not help in proving the absence of overflow; you will still have to provide some upper bound on the function result in order to convince the prover that the expression N + Sum (N - 1) will not cause an overflow.
To check the absence of overflow during the addition, the prover will consider all possible values that Sum might return according to its specification and see if at least one of those values might cause the addition to overflow. In the absence of an explicit bound in the post-condition, Sum might, according to its return type, return any value in Natural'Range. That range includes Natural'Last, and that value will definitely cause an overflow. Therefore, the prover will report that the addition might overflow. The fact that Sum never returns that value given its allowable input values is irrelevant here (that's why it reports might). Hence, a more precise upper bound on the return value is required.
If an exact upper bound is not available, then you'll typically fall back on a more conservative bound like, in this case, N * N (or use saturation math as shown in the Fibonacci example from the SPARK user manual, section 5.2.7, but that approach does change your function, which might not be desirable).
Here's an alternative example:
example.ads
package Example with SPARK_Mode is

   subtype Domain is Natural range 0 .. 2**15;

   function Sum (N : Domain) return Natural with
     Post =>
       Sum'Result = (if N = 0 then 0 else N + Sum (N - 1)) and
       Sum'Result <= N * N;  -- conservative upper bound if the closed-form
                             -- solution to the recursive function would
                             -- not exist.

end Example;
example.adb
package body Example with SPARK_Mode is

   function Sum (N : Domain) return Natural is
   begin
      if N = 0 then
         return N;
      else
         return N + Sum (N - 1);
      end if;
   end Sum;

end Example;
output (gnatprove)
$ gnatprove -Pdefault.gpr --output=oneline --report=all
Phase 1 of 2: generation of Global contracts ...
Phase 2 of 2: flow analysis and proof ...
example.adb:8:19: info: overflow check proved
example.adb:8:28: info: range check proved
example.ads:7:08: info: postcondition proved
example.ads:7:45: info: overflow check proved
example.ads:7:54: info: range check proved
Summary logged in [...]/gnatprove.out
I landed on something that sometimes works, which I think is enough to close the title question:
function Sum (n : in Natural) return Natural is
   (if n = 0 then 0 else n + Sum(n - 1))
with
   Pre => (n in 0 .. 10),   -- works with --prover=z3, not the default (CVC4)
   -- Pre => (n in 0 .. 100),  -- not working: "overflow check might fail, e.g. when n = 2"
   Subprogram_Variant => (Decreases => n),
   Post => ((n = 0 and then Sum'Result = 0)
            or (n > 0 and then Sum'Result = n + Sum(n - 1)));

   -- Contract_Cases => (n = 0 => Sum'Result = 0,
   --                    n > 0 => Sum'Result = n + Sum(n - 1));  -- warning: call to "Sum" within its postcondition will lead to infinite recursion

   -- Contract_Cases => (n = 0 => Sum'Result = 0,
   --                    n > 0 => n + Sum(n - 1) = Sum'Result);  -- works

   -- Contract_Cases => (n = 0 => Sum'Result = 0,
   --                    n > 0 => Sum'Result = n * (n + 1) / 2);  -- works and gives good overflow counterexamples for high n, but isn't really recursive
Command-line invocation in GNAT Studio (Ctrl+Alt+F), with --counterexamples=on and --prover=z3 being my additions:
gnatprove -P%PP -j0 %X --output=oneline --ide-progress-bar --level=0 -u %fp --counterexamples=on --prover=z3
Takeaways:
Subprogram_Variant => (Decreases => n) is required to tell the prover n decreases for each recursive invocation, just like the Dafny version.
Proving works inconsistently for similar contracts; see the commented-out Contract_Cases.
The default prover (CVC4) fails; Z3 succeeds.
The counterexample given on failure makes no sense:
n = 2 is presented as a counterexample for range 0 .. 100, but not for 0 .. 10.
Possibly related to this mention in the SPARK user guide: However, note that since the counterexample is always generated only using CVC4 prover, it can just explain why this prover cannot prove the property.
Cleaning is required between changing options, e.g. --prover.

Derive recursive formula for Josephus problem when index start with 1

I am trying to understand the Josephus problem's recursive algorithm. It has been explained in detail here, but I am not clear about certain details. Here they are:
The 1-indexed position of the survivor in the ring of four people is given by josephus(n - 1, 2). We can therefore walk forward josephus(n - 1, 2) - 1 positions, wrapping around the ring if necessary, to get to our final position. In other words, the survivor is given by that position.
Doubts: how do we arrive at josephus(n - 1, 2) - 1, the number of positions we need to walk forward?
This has also been explained in the comment section, but I am not able to understand how this information helps us reach the final recursive method.
Here is the final recursive method:
public int josephus(int n, int k)
{
    if (n == 1) return 1;
    // Solve for n - 1 people, then map that survivor's position back into
    // the n-person ring: the smaller ring starts k positions later, % n
    // wraps around, and the -1/+1 convert between 0- and 1-indexed positions.
    return (josephus(n - 1, k) + k - 1) % n + 1;
}
I understand that this question may be marked as a duplicate, but still, I'd appreciate it if someone could explain the intuition to me. I tried Wikipedia but no luck so far.
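For intuition, here is a small instance of the recurrence with k = 2 that you can check by hand (my own example, not from the linked article). josephus(1, 2) = 1. Then josephus(2, 2) = (1 + 1) % 2 + 1 = 1: of people {1, 2}, person 2 is eliminated, so 1 survives. josephus(3, 2) = (1 + 1) % 3 + 1 = 3: person 2 is eliminated, then 1, so 3 survives. josephus(4, 2) = (3 + 1) % 4 + 1 = 1: persons 2, 4, 3 are eliminated in that order, leaving 1. The recursion works because after the first elimination the remaining n - 1 people form the same problem, only relabelled: person k + 1 of the old ring becomes person 1 of the new ring, and the + k - 1 ... % n + 1 arithmetic undoes exactly that relabelling.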

tail rec kotlin list

I'm trying to do some operations that would currently cause a StackOverflow in Kotlin.
Knowing that, I remembered that Kotlin has support for tailrec functions, so I tried to do:
private tailrec fun Turn.debuffPhase(): List<Turn> {
    val turns = listOf(this)
    if (facts.debuff == 0 || knight.damage == 0) {
        return turns
    }
    // Recursively find all possible thresholds of debuffing
    return turns + debuff(debuffsForNextThreshold()).debuffPhase()
}
To my surprise, IDEA didn't recognize it as tailrec, so I tried making it a normal function instead of an extension function:
private tailrec fun debuffPhase(turn: Turn): List<Turn> {
    val turns = listOf(turn)
    if (turn.facts.debuff == 0 || turn.knight.damage == 0) {
        return turns
    }
    // Recursively find all possible thresholds of debuffing
    val newTurn = turn.debuff(turn.debuffsForNextThreshold())
    return turns + debuffPhase(newTurn)
}
Even so, it isn't accepted. Isn't the important thing that the last function call is to the same function? I know that + is a call to List's plus function, but should that make a difference? All the examples of tail calls I see on the internet for other languages allow this kind of thing.
I tried the same with an Int, which seemed to be more commonly used than appending to lists, but got the same result:
private tailrec fun discoverBuffsNeeded(dragon: RPGChar): Int {
    val buffedDragon = dragon.buff(buff)
    if (dragon.turnsToKill(initKnight) < 1 + buffedDragon.turnsToKill(initKnight)) {
        return 0
    }
    return 1 + discoverBuffsNeeded(buffedDragon)
}
Shouldn't all of those implementations allow for tail calls? I thought of some other ways to solve it (like passing the list as a MutableList parameter), but when possible I try to avoid passing collections to be mutated inside a function, and this seems like a case where tail recursion should be possible.
PS: About the program in question: I'm implementing a solution to this problem.
None of your examples are tail-recursive.
A tail call is the last call in a subroutine. A recursive call is a call of a subroutine to itself. A tail-recursive call is a tail call of a subroutine to itself.
In all of your examples, the tail call is to +, not to the subroutine. So, all of those are recursive (because they call themselves), and all of those have tail calls (because every subroutine always has a "last call"), but none of them is tail-recursive (because the recursive call isn't the last call).
Infix notation can sometimes obscure what the tail call is; it is easier to see when you write every operation in prefix form or as a method call:
return plus(turns, debuff(debuffsForNextThreshold()).debuffPhase())
// or
return turns.plus(debuff(debuffsForNextThreshold()).debuffPhase())
Now it becomes much easier to see that the call to debuffPhase is not in tail position, but rather it is the call to plus (i.e. +) which is in tail position. If Kotlin had general tail calls, then that call to plus would indeed be eliminated, but AFAIK, Kotlin only has tail-recursion (like Scala), so it won't.
Without giving away an answer to your puzzle, here's a non-tail-recursive function.
fun fac(n: Int): Int =
    if (n <= 1) 1 else n * fac(n - 1)
It is not tail recursive because the recursive call is not in a tail position, as noted by Jörg's answer.
It can be transformed into a tail-recursive function using CPS,
tailrec fun fac2(n: Int, k: Int = 1): Int =
    if (n <= 1) k else fac2(n - 1, n * k)
although a better interface would likely hide the continuation in a private helper function.
fun fac3(n: Int): Int {
    tailrec fun fac_continue(n: Int, k: Int): Int =
        if (n <= 1) k else fac_continue(n - 1, n * k)
    return fac_continue(n, 1)
}
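The same accumulator transformation applies to the question's debuffPhase (sketching the idea only, using the asker's Turn type): thread the list built so far through a parameter, e.g. private tailrec fun debuffPhase(turn: Turn, acc: List<Turn>): List<Turn>, compute acc + turn before recursing, and return that accumulated list in the base case. The concatenation then happens before the recursive call, which becomes the genuinely last operation, so the compiler accepts tailrec.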

Why is the second solution faster than the first?

First:
boolean isPrime(int n) {
    if (n < 2)
        return false;
    return isPrime(n, 2);
}

private boolean isPrime(int n, int i) {
    if (i == n)
        return true;
    return (n % i == 0) ? false : isPrime(n, i + 1);
}
Second:
boolean isPrime(int n) {
    if (n < 0) n = -n;
    if (n < 2) return false;
    if (n == 2) return true;
    if (n % 2 == 0) return false;
    return rec_isPrime(n, 3);
}

boolean rec_isPrime(int n, int div) {
    if (div * div > n) return true;
    if (n % div == 0) return false;
    return rec_isPrime(n, div + 2);
}
Please explain why the second solution is better than the first. I offered the first solution in an exam and my points were reduced under the claim that the solution is not efficient. I want to know what the big difference is.
So this is a test question, and I always keep in mind that some professors have a colorful prerogative, but I can see a few reasons one might claim the first is slower:
When testing primality you really only need to test prime factors. The second solution seeds with an odd number, 3, then adds 2 on every recursive call, which skips checking even factors and cuts the number of calls needed in half.
And as @uselpa pointed out, the second snippet stops when the testing factor's square is greater than n; any factor larger than sqrt(n) would pair with one smaller, so at that point all odd factors have effectively been accounted for. This lets it declare n prime far sooner than the first, which counts all the way up to n.
One may argue that since the first tests for even factors inside the recursive function, instead of once in the outer method like the second, each such test is an unnecessary frame on the call stack.
I have also seen some claims that ternary operations are slower than if-else checks, so your professor may fall into this belief. [Personally, I am not convinced there is a performance difference.]
Hope this helps. Was fun to think about some primes!
The first solution has a complexity of O(n), as it takes linear time; the second solution takes O(sqrt(n)) due to this line of code: if (div * div > n) return true; because there is no need to look for a divisor past the square root. For more details, see: Why do we check up to the square root of a prime number to determine if it is prime?
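To make the gap concrete (rough numbers of my own, not from the answer): for a prime n near 1,000,000, the first version makes on the order of a million recursive calls, deep enough to risk a StackOverflowError in Java, which does not eliminate tail calls, while the second makes only about sqrt(1,000,000) / 2 = 500 calls.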

Understanding recursion

I am struggling to understand the recursion used in this dynamic programming example. Can anyone explain how it works? The objective is to find the least number of coins for a given value.
// f(n) = 1 + min f(n-d) for all denominations d
Pseudocode:
int memo[128]; // initialized to -1

int min_coin(int n)
{
    if (n < 0) return INF;
    if (n == 0) return 0;
    if (memo[n] != -1)
    int ans = INF;
    for (int i = 0; i < num_denomination; ++i)
    {
        ans = min(ans, min_coin(n - denominations[i]));
    }
    return memo[n] = ans + 1; // when does this get called?
}
This particular example is explained very well in this article at Topcoder.
Basically this recursion is using the solutions to smaller problems (least number of coins for a smaller n) to find the solution for the overall problem. The dynamic programming aspect of this is the memoization of the solutions to the sub-problems so they don't have to be recalculated every time.
And yes - there are {} missing as ring0 mentioned in his comment - the recursion should only be executed if the sub-problem has not been solved before.
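With those braces and the missing early return restored, the memoized function might read as follows (a sketch; INF, num_denomination, and denominations are the question's own pseudocode names):

int memo[128]; // initialized to -1

int min_coin(int n)
{
    if (n < 0) return INF;
    if (n == 0) return 0;
    if (memo[n] != -1) return memo[n]; // sub-problem already solved: reuse it
    int ans = INF;
    for (int i = 0; i < num_denomination; ++i)
    {
        ans = min(ans, min_coin(n - denominations[i]));
    }
    // This line runs exactly once per value of n, the first time n is solved,
    // caching the answer before returning it.
    return memo[n] = ans + 1;
}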
To answer the asker's question when does this get called?: in a solution based on a recursive program, the same function is called by itself... but it eventually returns. When does it return? Once the function stops calling itself:
f(a) {
    if (a > 0) f(a-1);
    display "x"
}

f(5);
f(5) calls f(4), which in turn calls f(3), which calls f(2), which calls f(1), which calls f(0).
In f(0), a is 0, so it does not call f() again; it displays "x" and returns. It returns to f(1), which, its call to f(0) done, also displays "x". Then f(1) ends, f(2) displays "x", ..., up to f(5). You get 6 "x"s.
In other terms, as ring0 already mentioned: it gets called when the program reaches the base case and starts to unwind by going back up the stack (call frames). For a similar case using a factorial example, see this.
#!/usr/bin/env perl
use strict;
use IO::Handle;
use Carp qw(cluck);

STDOUT->autoflush(1);
STDERR->autoflush(1);

sub factorial {
    my $v = shift;
    dummy_func();
    return 1 if $v == 1;
    print "Variable v value: $v and its address:", \$v,
          "\ncurrent sub factorial addr:", \&factorial,
          "\n", "-" x 40;
    return $v * factorial($v - 1);
}

sub dummy_func {
    cluck;
}

factorial(5);
