Big(O) of recursive n choose k code - recursion

Is the big(O) of the following recursive code simply O(n choose k)?
int nchoosek(int n, int k) {
    if (n == k) return 1;
    if (k == 0) return 1;
    return nchoosek(n-1, k) + nchoosek(n-1, k-1);
}

I'm not sure if the notation is correct but I guess you can write the recurrence for this function like
T(n, k) = T(n-1, k) + T(n-1, k-1) + O(1)
with base cases T(n, n) = T(n, 0) = O(1).
Note that both recursive calls are made at every step, so you cannot analyse the two branches as alternative paths and take the worst one; the work multiplies as the tree branches. The recurrence has exactly the shape of Pascal's rule, C(n, k) = C(n-1, k) + C(n-1, k-1), so the recursion tree has C(n, k) leaves, each returning 1, and C(n, k) - 1 internal nodes. The total number of calls is therefore 2*C(n, k) - 1, each doing O(1) work, so
T(n, k) = O(n choose k)
and the answer to the question is yes.
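To sanity-check that count, here is a minimal instrumented version (the calls counter and the main driver are my additions, not part of the question):

#include <stdio.h>

static long long calls = 0;  /* counts every invocation */

long long nchoosek(int n, int k) {
    calls++;
    if (n == k) return 1;
    if (k == 0) return 1;
    return nchoosek(n-1, k) + nchoosek(n-1, k-1);
}

int main(void) {
    long long v = nchoosek(20, 10);
    /* expect calls == 2*C(20,10) - 1 = 2*184756 - 1 = 369511 */
    printf("C(20,10) = %lld, calls = %lld\n", v, calls);
    return 0;
}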
A much faster alternative is to compute the same value using the definition of the choose function in terms of the Factorial function:
nCk = n! / (k! * (n-k)!)
The running time of the Factorial is O(n) even in the recursive case.
nCk requires calculating the Factorial three times:
n! => O(n)
k! => O(k) = O(n)
(n-k)! => O(n-k) = O(n)
Then the multiplication and division are both constant time O(1), hence the running time is:
T(n) = 3*O(n) + 2*O(1) = O(n)
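As a sketch of that approach in C (my own illustration, not from the answer; note the factorials overflow a 64-bit integer for n > 20, so this is only usable for small n):

#include <stdio.h>

/* O(n) iterative factorial; overflows unsigned long long for n > 20 */
unsigned long long factorial(int n) {
    unsigned long long f = 1;
    for (int i = 2; i <= n; i++)
        f *= i;
    return f;
}

/* three O(n) factorials plus O(1) arithmetic => O(n) overall */
unsigned long long nchoosek_fact(int n, int k) {
    return factorial(n) / (factorial(k) * factorial(n - k));
}

int main(void) {
    printf("C(10,4) = %llu\n", nchoosek_fact(10, 4)); /* prints 210 */
    return 0;
}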


Recurrence Relation

I have a method:
int Tree(int n) {
    if (n <= 0) return 0;
    if (n == 1) return 1;
    return (n * n) + Tree(n - 3);
}
I'm trying to find the recurrence relation that captures the running time T(n) for the method 'Tree'. So far I've got T(n) = T(n-3) + O(1). Next I need to express the running time as a series of terms, where each term denotes the number of operations at a distinct level of the recursion tree:
I have T(n) = T(n-3) + O(1), then T(n-1) = T(n-4) + O(1), then T(n-2) = T(n-5) + O(1)
...
But I'm unsure if this is right.
After T(n) = T(n-3) + O(1) you don't need to expand T(n-1); the next term on the same path is T(n-3), which is T(n-6) + O(1). Writing n as 3p + r, where r is the remainder of n modulo 3, you get T(3p+r) = T(3(p-1)+r) + O(1), which unrolls to T(3p+r) = T(r) + O(p). Since T(r) = O(1) and O(p) = O(n), T(n) = O(n).
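To see the roughly n/3 levels concretely, here is a small counting check (my own addition, not part of the original answer):

#include <stdio.h>

static int calls = 0;  /* one increment per level of the recursion */

int Tree(int n) {
    calls++;
    if (n <= 0) return 0;
    if (n == 1) return 1;
    return (n * n) + Tree(n - 3);
}

int main(void) {
    Tree(30);
    printf("Tree(30) made %d calls\n", calls); /* prints 11 = 30/3 + 1, i.e. O(n) */
    return 0;
}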

Big-O complexity: recursion vs. iteration

Question 5 on Determining complexity for recursive functions (Big O notation) is:
int recursiveFun(int n)
{
    for (int i = 0; i < n; i += 2) {
        // Do something.
    }
    if (n <= 0)
        return 1;
    else
        return 1 + recursiveFun(n-5);
}
To highlight my question, I'll change the recursive parameter from n-5 to n-2:
int recursiveFun(int n)
{
    for (int i = 0; i < n; i += 2) {
        // Do something.
    }
    if (n <= 0)
        return 1;
    else
        return 1 + recursiveFun(n-2);
}
I understand the loop runs in n/2 since a standard loop runs in n and we're iterating half the number of times.
But isn't the same also happening for the recursive call? For each recursive call, n is decremented by 2. If n is 10, call stack is:
recursiveFun(8)
recursiveFun(6)
recursiveFun(4)
recursiveFun(2)
recursiveFun(0)
...which is 5 calls (i.e. 10/2 or n/2). Yet the answer provided by Michael_19 states it runs in n-5 or, in my example, n-2. Clearly n/2 is not the same as n-2. Where have I gone wrong and why is recursion different from iteration when analyzing for Big-O?
A common way to analyze the big-O of a recursive algorithm is to find a recursive formula that "counts" the number of operations done by the algorithm. It is usually denoted T(n).
In your example, the time complexity of this code can be described by the formula
T(n) = C*n/2 + T(n-2)
where C*n/2 is the cost of the loop (assuming "do something" is constant time) and T(n-2) is the cost of the recursive call.
Since it's pretty obvious it will be in O(n^2), let's show Omega(n^2) using induction:
Induction hypothesis:
T(k) >= C/8 * k^2 for 0 <= k < n
And indeed:
T(n) = C*n/2 + T(n-2) >= (i.h.) C*n/2 + C/8 * (n-2)^2
     = C*n/2 + C/8 * (n^2 - 4n + 4)
     = C/8 * (4n + n^2 - 4n + 4)
     = C/8 * (n^2 + 4)
So:
T(n) >= C/8 * (n^2 + 4) > C/8 * n^2
Thus, T(n) is in big-Omega(n^2).
Showing big-O is done similarly:
Hypothesis: T(k) <= C*k^2 for all 2 <= k < n
T(n) = C*n/2 + T(n-2) <= (i.h.) C*n/2 + C * (n^2 - 4n + 4)
     = C * (n/2 + n^2 - 4n + 4) = C * (n^2 - 7n/2 + 4)
For all n >= 2, -7n/2 + 4 <= 0, so for any n >= 2:
T(n) <= C * (n^2 - 7n/2 + 4) <= C*n^2
And the hypothesis is correct - by the definition of big-O, T(n) is in O(n^2).
Since we have shown T(n) is both in O(n^2) and in Omega(n^2), it is also in Theta(n^2).
Analyzing recursion is different from analyzing iteration because:
n (and the other local variables) change on each call, and it can be hard to catch this behavior.
Things get much more complex when there are multiple recursive calls.
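A quick empirical check of the Theta(n^2) bound (my own sketch; the work counter and main driver are my additions, and I count loop iterations as the unit of work, assuming "do something" is O(1)):

#include <stdio.h>

static long long work = 0;  /* total loop iterations across all calls */

int recursiveFun(int n) {
    for (int i = 0; i < n; i += 2)
        work++;  /* stand-in for "do something" */
    if (n <= 0)
        return 1;
    return 1 + recursiveFun(n - 2);
}

int main(void) {
    for (int n = 100; n <= 800; n *= 2) {
        work = 0;
        recursiveFun(n);
        /* doubling n should roughly quadruple the work: work ~ n^2/8 */
        printf("n = %3d  work = %lld\n", n, work);
    }
    return 0;
}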

How to calculate the complexity of time and memory

I have the following code
public int X(int n)
{
    if (n == 0)
        return 0;
    if (n == 1)
        return 1;
    else
        return (X(n-1) + X(n-2));
}
I want to calculate the time and memory complexity of this code.
The code starts with constant-time checks: if (n == 0) return 0; takes some constant time, call it c, and so does the n == 1 check. What I can't work out is the cost of the two recursive calls.
Can anyone help me with this?
To calculate the value of X(n), you are calculating X(n-1) and X(n-2), so
T(n) = T(n-1) + T(n-2)
T(0) = 1
T(1) = 1
Since T(n) <= 2*T(n-1), this is exponential: O(2^n).
If you want a detailed proof of how it will be O(2^n), check here.
Space complexity is linear: the two recursive calls run one after the other, so the deepest chain of live stack frames is X(n), X(n-1), ..., X(0).
(Just to be precise: if you count the stack space taken by the recursion, it's O(n).)
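To see the exponential blow-up directly, here is a small counting harness (my own sketch, ported to C; the calls counter and driver are my additions):

#include <stdio.h>

static long long calls = 0;  /* one increment per invocation of X */

long long X(int n) {
    calls++;
    if (n == 0) return 0;
    if (n == 1) return 1;
    return X(n - 1) + X(n - 2);
}

int main(void) {
    for (int n = 10; n <= 30; n += 10) {
        calls = 0;
        X(n);
        /* calls grow roughly like phi^n (phi ~ 1.618), so O(2^n) is an upper bound */
        printf("n = %2d  calls = %lld\n", n, calls);
    }
    return 0;
}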

Complexity of a recursive algorithm that finds the largest element in an array

How do I calculate the complexity of this recursive algorithm?
int findMax(int a[], int l, int r)
{
    if (r - l == 1)
        return a[l];
    int m = (l + r) / 2;
    int u = findMax(a, l, m);
    int v = findMax(a, m, r);
    if (u > v)
        return u;
    else
        return v;
}
From the Master Theorem:
T(n) = a * T(n/b) + f(n)
where:
a is the number of sub-problems
f(n) is the cost of the work outside the recursion, with f(n) = O(n^c)
n/b is the size of each sub-problem
The idea behind this function is that you repeat the operation on the first half of the items (T(n/2)) and on the second half of the items (T(n/2)), then compare the two results (O(1)), so you have:
T(n) = 2 * T(n/2) + O(1)
So f(n) = O(1), which in terms of n is O(n^0), giving c = 0. We have a = 2 and b = 2, and we end up in the case c < log_b(a), since log_2(2) = 1 and 0 < 1. In this case the complexity of the whole recursive call is Theta(n^(log_b a)) = Theta(n), i.e. O(n).
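As a concrete check (my own sketch; the calls counter, test data, and driver are my additions): the recursion makes exactly 2n - 1 calls for n elements, which matches the linear bound.

#include <stdio.h>

static int calls = 0;  /* counts invocations of findMax */

int findMax(int a[], int l, int r) {
    calls++;
    if (r - l == 1)
        return a[l];
    int m = (l + r) / 2;
    int u = findMax(a, l, m);
    int v = findMax(a, m, r);
    return (u > v) ? u : v;
}

int main(void) {
    int a[16];
    for (int i = 0; i < 16; i++)
        a[i] = i * 7 % 13;  /* arbitrary test data */
    int mx = findMax(a, 0, 16);
    printf("max = %d, calls = %d\n", mx, calls); /* expect calls = 2*16 - 1 = 31 */
    return 0;
}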

Computational complexity of a longest path algorithm with a recursive method

I wrote a code segment to determine the longest path in a graph; the code follows. But I don't know how to work out its computational complexity because of the recursive call in the middle. Since finding the longest path is an NP-complete problem, I assume it's something like O(n!) or O(2^n), but how can I actually determine it?
// visited[] and length[][] are globals; V is the number of vertices
public static int longestPath(int A) {
    int k;
    int dist2 = 0;
    int max = 0;
    visited[A] = true;
    for (k = 1; k <= V; ++k) {
        if (!visited[k]) {
            dist2 = length[A][k] + longestPath(k);
            if (dist2 > max) {
                max = dist2;
            }
        }
    }
    visited[A] = false;
    return max;
}
Your recurrence relation is T(n, m) = m*T(n, m-1) + O(n), where n denotes the number of nodes and m denotes the number of unvisited nodes (you call longestPath m times, and there is a loop which executes the visited test n times). The base case is T(n, 0) = O(n) (just the visited test).
Solve this and I believe you get T(n, n) = O(n * n!).
EDIT
Working:
T(n, n) = n*T(n, n-1) + O(n)
        = n*((n-1)*T(n, n-2) + O(n)) + O(n)
        = ...
        = n(n-1)...1 * T(n, 0) + O(n) * (1 + n + n(n-1) + ... + n(n-1)...2)
        = O(n) * (1 + n + n(n-1) + ... + n!)
        = O(n) * O(n!)   (see http://oeis.org/A000522)
        = O(n * n!)
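To see the growth concretely, here is a C port with a call counter, run on small complete graphs (my own sketch; the harness, unit edge lengths, and MAXV bound are my additions, not part of the question):

#include <stdio.h>
#include <stdbool.h>

#define MAXV 10
static int V;                      /* number of vertices, labelled 1..V */
static bool visited[MAXV + 1];
static int length[MAXV + 1][MAXV + 1];
static long long calls = 0;        /* counts invocations of longestPath */

int longestPath(int A) {
    calls++;
    int max = 0;
    visited[A] = true;
    for (int k = 1; k <= V; ++k) {
        if (!visited[k]) {
            int dist2 = length[A][k] + longestPath(k);
            if (dist2 > max)
                max = dist2;
        }
    }
    visited[A] = false;
    return max;
}

int main(void) {
    for (V = 4; V <= 8; V++) {
        for (int i = 1; i <= V; i++)
            for (int j = 1; j <= V; j++)
                length[i][j] = (i != j);  /* complete graph, unit lengths */
        calls = 0;
        longestPath(1);
        /* call counts grow factorially, consistent with O(n * n!) */
        printf("V = %d  calls = %lld\n", V, calls);
    }
    return 0;
}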
