Big-O complexity: recursion vs. iteration

Question 5 on "Determining complexity for recursive functions (Big O notation)" is:
int recursiveFun(int n)
{
    for (int i = 0; i < n; i += 2) {
        // Do something.
    }
    if (n <= 0)
        return 1;
    else
        return 1 + recursiveFun(n - 5);
}
To highlight my question, I'll change the recursive parameter from n-5 to n-2:
int recursiveFun(int n)
{
    for (int i = 0; i < n; i += 2) {
        // Do something.
    }
    if (n <= 0)
        return 1;
    else
        return 1 + recursiveFun(n - 2);
}
I understand the loop runs in n/2, since a standard loop runs in n and we're iterating half as many times.
But isn't the same also happening for the recursive call? For each recursive call, n is decremented by 2. If n is 10, the call stack is:
recursiveFun(8)
recursiveFun(6)
recursiveFun(4)
recursiveFun(2)
recursiveFun(0)
...which is 5 calls (i.e. 10/2, or n/2). Yet the answer provided by Michael_19 states it runs in n-5 or, in my example, n-2. Clearly n/2 is not the same as n-2. Where have I gone wrong, and why is recursion different from iteration when analyzing for Big-O?

A common way to analyze the big-O of a recursive algorithm is to find a recurrence that "counts" the number of operations done by the algorithm. It is usually denoted T(n).
In your example, the time complexity of this code can be described by the formula:
T(n) = C*n/2 + T(n-2)
where the C*n/2 term is the loop (assuming "do something" is constant time) and T(n-2) is the recursive call.
Since it's pretty obvious it will be in O(n^2), let's show that it is in Omega(n^2) using induction:
Induction hypothesis:
T(k) >= C/8 * k^2 for all 0 <= k < n
And indeed:
T(n) = C*n/2 + T(n-2) >= (i.h.) C*n/2 + C*(n-2)^2 / 8
     = C/8 * (4n + n^2 - 4n + 4)
     = C/8 * (n^2 + 4)
     > C/8 * n^2
Thus, T(n) is in big-Omega(n^2).
Showing big-O is done similarly:
Hypothesis: T(k) <= C*k^2 for all 2 <= k < n
T(n) = C*n/2 + T(n-2) <= (i.h.) C*n/2 + C*(n^2 - 4n + 4)
     <= C*(2n + n^2 - 4n + 4)      (bounding n/2 by 2n)
     = C*(n^2 - 2n + 4)
For all n >= 2, -2n + 4 <= 0, so for any n >= 2:
T(n) <= C*(n^2 - 2n + 4) <= C*n^2
And the hypothesis holds - so by the definition of big-O, T(n) is in O(n^2).
Since we have shown T(n) is both in O(n^2) and Omega(n^2), it is also in Theta(n^2).
Analyzing recursion is different from analyzing iteration because:
n (and other local variables) change on each call, and it can be hard to catch this behavior.
Things get much more complex when there are multiple recursive calls.
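To make the Theta(n^2) bound concrete, here is a small instrumented version of the function (my addition, not part of the original answer) that counts loop iterations; the totals grow like n^2/8, matching the analysis:

#include <stdio.h>

/* Hypothetical instrumentation: count the total number of loop
   iterations executed by recursiveFun(n) across all calls. */
static long ops = 0;

int recursiveFun(int n)
{
    for (int i = 0; i < n; i += 2)
        ops++;                      /* stands in for "Do something." */
    if (n <= 0)
        return 1;
    else
        return 1 + recursiveFun(n - 2);
}

int main(void)
{
    for (int n = 10; n <= 1000; n *= 10) {
        ops = 0;
        recursiveFun(n);
        /* total iterations: n/2 + (n-2)/2 + ... + 1, roughly n^2/8 */
        printf("n = %4d  ops = %8ld  n^2/8 = %8d\n", n, ops, n * n / 8);
    }
    return 0;
}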

Related

Recurrence Relation

I have a method:
int Tree(int n) {
    if (n <= 0) return 0;
    if (n == 1) return 1;
    return (n * n) + Tree(n - 3);
}
I'm trying to find the recurrence relation that captures the running time T(n) for the method Tree. So far I've got T(n) = T(n-3) + O(1); next I need to express the running time as a series of terms, where each term denotes the number of operations at a distinct level of the recursion tree:
I have T(n) = T(n-3) + O(1), then T(n-1) = T(n-4) + O(1), then T(n-2) = T(n-5) + O(1)
...
But I'm unsure if this is right.
After T(n) = T(n-3) + O(1) you don't need to expand T(n-1) but T(n-3), which is T(n-6) + O(1). Writing n as 3p + r, you get T(3p + r) = T(3(p-1) + r) + O(1), which gives T(3p + r) = T(r) + O(p). Since T(r) = O(1) and O(p) = O(n), T(n) = O(n).
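As a quick sanity check of the O(n) claim (my addition, not from the original answer), counting the recursive calls shows they grow like n/3:

#include <stdio.h>

/* Count the recursive calls made by Tree(n); by the analysis above,
   there should be about n/3 of them. */
static long calls = 0;

int Tree(int n) {
    calls++;
    if (n <= 0) return 0;
    if (n == 1) return 1;
    return (n * n) + Tree(n - 3);
}

int main(void) {
    /* n is kept small so the returned sum of squares fits in an int */
    for (int n = 30; n <= 240; n *= 2) {
        calls = 0;
        Tree(n);
        printf("n = %3d  calls = %3ld  n/3 = %3d\n", n, calls, n / 3);
    }
    return 0;
}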

Is this recurrence relation O(infinity)?

T(n) = 49*T(n/7) + n
There are no base conditions given.
I tried solving it using the Master Theorem and the answer is Theta(n^2). But when solving with a recursion tree, the solution comes out to be an infinite series, n*(7 + 7^2 + 7^3 + ...).
Can someone please help?
Let n = 7^m. The recurrence becomes
T(7^m) = 49 * T(7^(m-1)) + 7^m,
or
S(m) = 49 * S(m-1) + 7^m.
The homogeneous part gives
S(m) = C * 49^m
and the general solution is
S(m) = C * 49^m - 7^m / 6
i.e.
T(n) = C*n^2 - n/6 = (T(1) + 1/6) * n^2 - n/6.
If you try the recursion-tree method:
T(n) = 7^2 * T(n/7) + n = 7^2 * [7^2 * T(n/7^2) + n/7] + n = 7^4 * T(n/7^2) + 7n + n
     = ... = 7^(2i) * T(n/7^i) + n * [7^0 + 7^1 + 7^2 + ... + 7^(i-1)]
As i grows, n/7^i gets closer to 1, and as mentioned in the other answer, T(1) is a constant. So if we assume T(1) = 1, then:
T(n/7^i) = 1
n/7^i = 1 => i = log_7(n)
So
T(n) = 7^(2*log_7(n)) * T(1) + n * [7^0 + 7^1 + ... + 7^(log_7(n)-1)]
     = n^2 + n * (7^(log_7(n)) - 1)/6 = n^2 + n(n-1)/6 = Theta(n^2)
Usually, when no base case is provided for a recurrence relation, the assumption is that the base case is T(1) = 1 or something along those lines. That way, the recursion eventually terminates.
Something to think about - you can only get an infinite series from your recursion tree if the recursion tree is infinitely deep. Although no base case was specified in the problem, you can operate under the assumption that there is one and that the recursion stops when the input gets sufficiently small for some definition of "sufficiently small." Based on that, at what point does the recursion stop? From there, you should be able to convert your infinite series into a series of finite length, which then will give you your answer.
Hope this helps!
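As a numeric sanity check (my addition, assuming the base case T(1) = 1 discussed above), evaluating the recurrence directly for powers of 7 matches the closed form (T(1) + 1/6)*n^2 - n/6 from the first answer:

#include <stdio.h>

/* Evaluate T(n) = 49*T(n/7) + n directly, with the assumed base case
   T(1) = 1, and compare it to the closed form derived above. */
double T(long n) {
    if (n <= 1) return 1.0;          /* assumed base case */
    return 49.0 * T(n / 7) + (double)n;
}

int main(void) {
    for (long n = 7; n <= 2401; n *= 7) {
        double closed = (1.0 + 1.0 / 6.0) * (double)n * (double)n - n / 6.0;
        printf("n = %5ld  T(n) = %10.0f  closed = %10.0f\n", n, T(n), closed);
    }
    return 0;
}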

Prove recursion: Show that M(n) >= 1/2 (n + 1) lg(n + 1)

I want to show that the best-case running time of quicksort is n log n.
I have this recurrence:
M(0) = 1
M(1) = 1
M(n) = min over 0 <= k <= n-1 of {M(k) + M(n - k - 1)} + n
Show that M(n) >= 1/2 (n + 1) lg(n + 1).
What I have so far:
By the induction hypothesis,
M(n) >= min {M(k) + M(n - k - 1)} + n
Focusing on the inner expression, I got:
1/2 (k + 1) lg(k + 1) + 1/2 (n - k) lg(n - k)
= 1/2 lg((k + 1)^(k + 1)) + 1/2 lg((n - k)^(n - k))
= 1/2 (lg((k + 1)^(k + 1)) + lg((n - k)^(n - k)))
= 1/2 lg((k + 1)^(k + 1) * (n - k)^(n - k))
But I think I'm doing something wrong: the k should be gone, yet I can't see how this equation would cancel out all the k's.
You indeed want to get rid of k. To do this, you want to find a lower bound on the minimum of M(k) + M(n - k - 1). In general this can be arbitrarily tricky, but in this case the standard approach works: take the derivative with respect to k.
((k+1) ln(k+1) + (n-k) ln(n-k))' =
ln(k+1) + (k+1)/(k+1) - ln(n-k) - (n-k)/(n-k) =
ln((k+1) / (n-k))
We want the derivative to be 0, so
ln((k+1) / (n-k)) = 0 <=>
(k+1) / (n-k) = 1 <=>
k + 1 = n - k <=>
k = (n-1) / 2
You can check that it's indeed a minimum: the second derivative, 1/(k+1) + 1/(n-k), is positive, so the expression is convex.
Therefore, the best lower bound on M(k) + M(n - k - 1) (which we can get from the inductive hypothesis) is reached for k=(n-1)/2. Now you can just substitute this value instead of k, and n will be your only remaining variable.
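For completeness (this step is not spelled out in the original answer), substituting k = (n-1)/2 makes both k+1 and n-k equal to (n+1)/2, and the induction hypothesis then gives:

M(n) >= 2 * 1/2 * (n+1)/2 * lg((n+1)/2) + n
     = (n+1)/2 * (lg(n+1) - 1) + n
     = 1/2 (n+1) lg(n+1) + (n - (n+1)/2)
     >= 1/2 (n+1) lg(n+1)          since n >= (n+1)/2 for n >= 1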

Solving the recurrence equation T(n) = 3 + m * T(n - m)

I have a Computer Science midterm tomorrow and I need help determining the complexity of a particular recursive function, which is more complicated than the ones I've already worked on: it has two variables.
T(n) = 3 + mT(n-m)
In simpler cases where m is a constant, the formula can easily be obtained by unpacking the relation; however, in this case, unpacking doesn't make life easier (let's say T(0) = c):
T(n) = 3 + mT(n-m)
T(n-1) = 3 + mT(n-m-1)
T(n-2) = 3 + mT(n-m-2)
...
Obviously, there's no straightforward elimination among these equations. So I'm wondering whether or not I should use another technique for such cases.
Don't worry about m - it's just a constant parameter. However, you're unrolling your recursion incorrectly. Each step of unrolling involves three operations:
Taking the value of T at an argument that is m less
Multiplying it by m
Adding the constant 3
So, it will look like this:
T(n) = m * T(n - m) + 3 = (Step 1)
= m * (m * T(n - 2*m) + 3) + 3 = (Step 2)
= m * (m * (m * T(n - 3*m) + 3) + 3) + 3 = ... (Step 3)
and so on. Unrolling T(n) up to step k gives the following formula:
T(n) = m^k * T(n - k*m) + 3 * (1 + m + m^2 + m^3 + ... + m^(k-1))
Now you set n - k*m = 0 to use the initial condition T(0) and get:
k = n / m
Now you need the formula for the sum of a geometric progression - and you'll finally get a closed formula for T(n) (I'm leaving that final step to you).
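Working out that final step under the assumption that n is a multiple of m (my sketch, not the answerer's) gives T(n) = m^(n/m) * c + 3 * (m^(n/m) - 1) / (m - 1), which a quick program can check against the recurrence:

#include <stdio.h>
#include <math.h>

/* Direct evaluation of T(n) = 3 + m*T(n-m) with T(0) = c. */
double T(int n, int m, double c) {
    if (n <= 0) return c;
    return 3.0 + m * T(n - m, m, c);
}

int main(void) {
    int m = 3;
    double c = 1.0;
    for (int n = m; n <= 10 * m; n += 3 * m) {
        int k = n / m;               /* number of unrolling steps */
        double closed = pow(m, k) * c + 3.0 * (pow(m, k) - 1.0) / (m - 1.0);
        printf("n = %2d  recursive = %8.0f  closed = %8.0f\n",
               n, T(n, m, c), closed);
    }
    return 0;
}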

Big(O) of recursive n choose k code

Is the big(O) of the following recursive code simply O(n choose k)?
int nchoosek(int n, int k) {
    if (n == k) return 1;           /* assumes 0 <= k <= n */
    if (k == 0) return 1;
    return nchoosek(n - 1, k) + nchoosek(n - 1, k - 1);
}
I'm not sure if the notation is standard, but you can write the recurrence for this function like
T(n, k) = T(n-1, k) + T(n-1, k-1) + O(1)
Note that both recursive calls are executed, so their costs add up; it isn't a matter of analyzing the worst of two alternative paths. The call tree is a binary tree whose leaves are exactly the base cases returning 1, and since the function's result is the sum of its leaves, there are exactly C(n, k) leaves. A binary tree with C(n, k) leaves has 2*C(n, k) - 1 nodes, and each call does O(1) work besides the recursion, so
T(n, k) = Theta(n choose k)
which confirms your guess.
You can also compute the same value much faster by reducing the problem to other known problems, for example by using the definition of the choose function in terms of the Factorial function:
nCk = n! / (k! * (n-k)!)
The running time of the Factorial is O(n) even in the recursive case.
nCk requires calculating the Factorial three times:
n! => O(n)
k! => O(k) = O(n)
(n-k)! => O(n-k) = O(n)
Then the multiplication and division are both constant time O(1) (treating arithmetic as unit cost), hence the running time is:
T(n) = 3*O(n) + 2*O(1) = O(n)
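To illustrate the gap, here is a hypothetical comparison (my addition, not from the answers; it uses an O(k) multiplicative formula rather than three factorials, to avoid overflowing intermediate products):

#include <stdio.h>

/* Exponential recursion from the question: Theta(n choose k) calls. */
long long nchoosek_rec(int n, int k) {
    if (n == k) return 1;
    if (k == 0) return 1;
    return nchoosek_rec(n - 1, k) + nchoosek_rec(n - 1, k - 1);
}

/* O(k) multiplicative version: after step i the running value equals
   C(n-k+i, i), so each division is exact. */
long long nchoosek_fast(int n, int k) {
    long long result = 1;
    for (int i = 1; i <= k; i++)
        result = result * (n - k + i) / i;
    return result;
}

int main(void) {
    /* nchoosek_rec(26, 13) makes about 2 * C(26, 13) ~ 21 million calls,
       while nchoosek_fast does 13 loop iterations. */
    printf("recursive: %lld\n", nchoosek_rec(26, 13));
    printf("fast:      %lld\n", nchoosek_fast(26, 13));
    return 0;
}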
