Calculate big O complexity of partitioning problem - recursion

My pseudo-code looks like:
solve(n)
    for i := 1 to n do
        process(i);
        solve(n-i);
where process(i) is a function with some complexity f(i). In my case f(n) = O(n^2), but I am also interested in the general case (for example, f(n) = O(n)).
So I have T(n) = f(n) + ... + f(1) + T(n-1) + ... + T(1). I cannot apply the Master theorem because the sub-problems are not all the same size.
How do I calculate the complexity of this recursion?
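To make the setup concrete, here is a minimal runnable C sketch of the pseudo-code (my own illustration, not part of the original question); process() is a placeholder whose dummy double loop stands in for any f(i) = O(i^2) work:

#include <stdio.h>

/* Minimal sketch of the pseudo-code above.  process() is a placeholder:
   its dummy double loop stands in for any O(i^2) work, i.e. f(i) = O(i^2). */
static long long work = 0;   /* counts the basic operations performed */

static void process(int i) {
    for (int a = 0; a < i; a++)
        for (int b = 0; b < i; b++)
            work++;
}

static void solve(int n) {
    for (int i = 1; i <= n; i++) {
        process(i);
        solve(n - i);
    }
}

int main(void) {
    for (int n = 1; n <= 15; n++) {
        work = 0;
        solve(n);
        printf("n = %2d  total work = %lld\n", n, work);
    }
    return 0;
}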

Small trick – consider solve(n-1):
solve(n) : T(n) = f(n) + f(n-1) + f(n-2) + ... + f(1) + T(n-1) + T(n-2) + ... + T(0)
solve(n-1): T(n-1) = f(n-1) + f(n-2) + ... + f(1) + T(n-2) + ... + T(0)
Subtract the latter from the former:
T(n) - T(n-1) = f(n) + T(n-1)
=> T(n) = 2T(n-1) + f(n)
Expand repeatedly:
T(n) = 2T(n-1) + f(n)
     = 4T(n-2) + 2f(n-1) + f(n)
     = 8T(n-3) + 4f(n-2) + 2f(n-1) + f(n)
     = ...
     = 2^n * T(0) + sum_{i=0}^{n-1} 2^i * f(n-i)
Evaluate the last summation for the given f(n) to obtain the complexity.
e.g. for f(n) = O(n):
sum_{i=0}^{n-1} 2^i * (n-i) = 2^n * sum_{j=1}^{n} j/2^j = O(2^n),
so T(n) = O(2^n).
Alternative method – variable substitution: let m = 2^n and define S(m) = T(log2 m). The recurrence T(n) = 2T(n-1) + f(n) then becomes
S(m) = 2 S(m/2) + f(log2 m)
S(m) is in the correct form for the Master Theorem.
e.g. for f(n) = O(n) = O(log m): log m is polynomially smaller than m^(log_2 2) = m, so the Master Theorem (the case where the driving term is polynomially smaller) gives S(m) = Θ(m), i.e. T(n) = Θ(2^n).
Same result, q.e.d.
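As a quick numerical sanity check (an addition, not part of the original answer), the derived recurrence T(n) = 2T(n-1) + f(n) can be evaluated directly; for f(n) = n and an assumed base case T(0) = 0, the ratio T(n)/2^n settles to a constant, consistent with the O(2^n) bound:

#include <stdio.h>

int main(void) {
    /* Evaluate T(n) = 2*T(n-1) + f(n) numerically for f(n) = n,
       assuming T(0) = 0; T(n)/2^n should approach a constant. */
    double T = 0.0;                       /* assumed base case T(0) = 0 */
    for (int n = 1; n <= 40; n++) {
        T = 2.0 * T + n;                  /* T(n) = 2*T(n-1) + n */
        if (n % 10 == 0)
            printf("n = %2d  T(n)/2^n = %.6f\n", n, T / (double)(1ULL << n));
    }
    return 0;
}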


Is this recurrence relation O(infinity)?
T(n) = 49*T(n/7) + n
There are no base conditions given.
I tried solving it using the Master theorem and the answer is Theta(n^2). But when solving with a recurrence tree, the solution comes out to be an infinite series, n*(7 + 7^2 + 7^3 + ...).
Can someone please help?
Let n = 7^m. The recurrence becomes
T(7^m) = 49 T(7^(m-1)) + 7^m,
or
S(m) = 49 S(m-1) + 7^m.
The homogeneous part gives
S(m) = C 49^m
and trying a particular solution of the form A 7^m gives A = -1/6, so the general solution is
S(m) = C 49^m - 7^m / 6
i.e.
T(n) = C n² - n / 6 = (T(1) + 1 / 6) n² - n / 6, which is Θ(n²).
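As a quick numerical check (my addition, not part of the original answer), the recurrence can be iterated for powers of 7, assuming the base case T(1) = 1, and compared with the closed form (T(1) + 1/6) n² - n/6:

#include <stdio.h>

/* Check T(n) = 49*T(n/7) + n against the closed form
   (T(1) + 1/6)*n^2 - n/6, assuming the base case T(1) = 1. */
int main(void) {
    double T = 1.0;                 /* T(1) = 1 (assumed base case) */
    double n = 1.0;
    for (int m = 1; m <= 8; m++) {  /* n = 7^m */
        n *= 7.0;
        T = 49.0 * T + n;
        double closed = (1.0 + 1.0 / 6.0) * n * n - n / 6.0;
        printf("n = 7^%d  T(n) = %.0f  closed form = %.0f\n", m, T, closed);
    }
    return 0;
}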
If you try the recursion method:
T(n) = 7^2 T(n/7) + n = 7^2 [7^2 T(n/7^2) + n/7] + n = 7^4 T(n/7^2) + 7n + n
= ... = 7^(2i) * T(n/7^i) + n * [7^0 + 7^1 + 7^2 + ... + 7^(i-1)]
As i grows, n/7^i gets closer to 1, and as mentioned in the other answer, T(1) is a constant. So if we assume T(1) = 1, then:
T(n/7^i) = 1
n/7^i = 1 => i = log_7 (n)
So
T(n) = 7^(2*log_7(n)) * T(1) + n * [7^0 + 7^1 + 7^2 + ... + 7^(log_7(n)-1)]
=> T(n) = n^2 + n * (7^(log_7(n)) - 1)/6 = n^2 + n(n-1)/6 = Theta(n^2)
Usually, when no base case is provided for a recurrence relation, the assumption is that the base case is something like T(1) = 1. That way, the recursion eventually terminates.
Something to think about - you can only get an infinite series from your recursion tree if the recursion tree is infinitely deep. Although no base case was specified in the problem, you can operate under the assumption that there is one and that the recursion stops when the input gets sufficiently small for some definition of "sufficiently small." Based on that, at what point does the recursion stop? From there, you should be able to convert your infinite series into a series of finite length, which then will give you your answer.
Hope this helps!

Solve recurrence: T(n) = T(n − 1) + T(n − 2) + 3

T(1) = T(2) = 1, and for n > 2, T(n) = T(n − 1) + T(n − 2) + 3.
What I've done so far:
T(n-1) = T(n-2) + T(n-3) + 3 + 3
T(n-2) = T(n-3) + T(n-4) + 3 + 3 + 3
T(n) = T(n-2) + 2T(n-3) + T(n-4) + 3 + 3 + 3 + 3 + 3
T(n) = T(1) + 2T(2) + T(n-4) + 3(n + 2)
I'm not sure if this is right, and if it is, how do I get rid of T(n-4)?
These types of recurrences are tricky, and the repeated expansion method will unfortunately get you nowhere. Observing the recursion tree will only give you an upper bound, which is often not tight.
Two methods I can suggest:
1. Substitution + Standard Theorem
Make the following variable substitution: m = 2^n, S(m) = T(log2 m). The recurrence T(n) = T(n-1) + T(n-2) + 3 then becomes
S(m) = S(m/2) + S(m/4) + 3
This is in the correct form for the Akra-Bazzi method, with parameters a1 = a2 = 1, b1 = 1/2, b2 = 1/4, g(m) = 3. The exponent p satisfies (1/2)^p + (1/4)^p = 1; putting x = (1/2)^p gives x^2 + x - 1 = 0, so x = (sqrt(5) - 1)/2 and p = log2((1 + sqrt(5))/2) ≈ 0.694. Since g is constant, the Akra-Bazzi integral is bounded, and
S(m) = Θ(m^p), i.e. T(n) = Θ(2^(p n)) = Θ(((1 + sqrt(5))/2)^n).   ... (1)
2. Fibonacci formula
The Fibonacci series has an explicit formula which can be derived by guessing a solution of the form Fn = a^n. Using this as an analogy, substitute a similar expression for T(n):
T(n) = c a^n + b
Equating the constant and exponential terms:
b = 2b + 3  =>  b = -3
a^n = a^(n-1) + a^(n-2)  =>  a^2 = a + 1  =>  a = (1 ± sqrt(5)) / 2
Take the positive root, because the negative root has absolute value less than 1 and will therefore decay to zero with increasing n:
a = (1 + sqrt(5)) / 2,  so T(n) = Θ(((1 + sqrt(5)) / 2)^n).
Which is consistent with (1).
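To see the growth rate numerically, here is a small C check (my addition, not part of the original answer): with the given base cases T(1) = T(2) = 1, the ratio T(n)/T(n-1) converges to the golden ratio (1 + sqrt(5))/2 predicted above:

#include <stdio.h>
#include <math.h>

/* Numerical check of the growth rate: for T(1) = T(2) = 1 and
   T(n) = T(n-1) + T(n-2) + 3, the ratio T(n)/T(n-1) should approach
   the golden ratio (1 + sqrt(5))/2. */
int main(void) {
    double prev = 1.0, cur = 1.0;           /* T(1), T(2) */
    for (int n = 3; n <= 40; n++) {
        double next = cur + prev + 3.0;
        prev = cur;
        cur = next;
        if (n % 10 == 0)
            printf("n = %2d  T(n)/T(n-1) = %.6f\n", n, cur / prev);
    }
    printf("golden ratio      = %.6f\n", (1.0 + sqrt(5.0)) / 2.0);
    return 0;
}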

Algorithm Recurrence Relation

How is the following recurrence relation solved:
T(n) = 2T(n-2)+O(1)
What I have tried so far is:
O(1) is bounded above by some constant c.
So
T(n) <= 2T(n-2) + c
T(n) <= 4T(n-4) + 2c
T(n) <= 8T(n-6) + 3c
.
.
.
So a pattern is emerging... the general term is:
T(n) <= 2^k*T(n-2k) + kc
But I don't know how to continue from there. Any advice is appreciated.
Assuming your generalization for k is true[1]
T(n) <= 2^k*T(n-2k) + kc
For k = n/2 you get:
T(n) <= 2^(n/2) * T(0) + (n/2)*c = 2^(n/2) + (n/2)*c   (taking T(0) = 1)
which is in O(sqrt(2^n)).
A formal proof can be done with induction. The natural guess T(n) <= 2^(n/2) + c*n does not survive the inductive step (the c*n term also gets doubled), so strengthen the hypothesis slightly:
T(n) <= (1 + c) * 2^(n/2) - c
And the step:
T(n) <= 2T(n-2) + c <= 2 * [(1 + c) * 2^((n-2)/2) - c] + c   (induction hypothesis)
     = (1 + c) * 2^(n/2) - 2c + c
     = (1 + c) * 2^(n/2) - c
which has the required form again, so T(n) = O(2^(n/2)) = O(sqrt(2^n)).
[1] It is not: it ignores the fact that the constant also gets multiplied at each expansion.
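To see the bound numerically, here is a small C sketch (my addition, not part of the original answer) that assumes T(0) = T(1) = 1 and c = 1 and tracks T(n)/2^(n/2):

#include <stdio.h>
#include <math.h>

/* Numerical check: with assumed base cases T(0) = T(1) = 1 and c = 1,
   compute T(n) = 2*T(n-2) + c exactly and show that T(n)/2^(n/2)
   stays bounded, consistent with T(n) = O(sqrt(2^n)). */
int main(void) {
    double T[61];
    T[0] = 1.0;
    T[1] = 1.0;
    for (int n = 2; n <= 60; n++)
        T[n] = 2.0 * T[n - 2] + 1.0;
    for (int n = 10; n <= 60; n += 10)
        printf("n = %2d  T(n)/2^(n/2) = %.6f\n", n, T[n] / pow(2.0, n / 2.0));
    return 0;
}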

Calculating the complexity of the function

I have written a function for calculating the length of the longest increasing subsequence. Here arr[] is an array of length n, and lisarr[] is an array of length n whose entry i stores the length of the LIS ending at element i.
I am having difficulty working out the recurrence expression that would be the input to the Master theorem.
int findLIS(int n){
    if(n==0)
        return 1;
    int res;
    for(int i=0;i<n;i++){
        res=findLIS(i);
        if(arr[n]>arr[i] && res+1>lisarr[n])
            lisarr[n]=res+1;
    }
    return lisarr[n];
}
Please show how to calculate the recurrence relation.
Should it be
T(n)=O(n)+T(1)?
It is O(2^n). Let's calculate the exact number of calls and denote it f(n). The recurrence relation is f(n) = 1 + f(n-1) + f(n-2) + ... + f(1) + f(0), with f(1) = 2 and f(0) = 1. Subtracting f(n-1) from f(n) gives f(n) = 2*f(n-1), and finally f(n) = 2^n.
Proof by induction:
Base case n = 0 -> only one call of the function, so f(0) = 1 = 2^0.
Let us assume that f(n) = 2^n. Then for input n+1 we have f(n+1) = 1 + f(n) + f(n-1) + ... + f(1) + f(0) = 1 + (2^n + 2^(n-1) + ... + 2 + 1) = 1 + (2^(n+1) - 1) = 2^(n+1). The 1 at the beginning comes from the part outside of the for loop, and the sum is the consequence of the for loop - you always have one recursive call for each i in {0, 1, 2, ..., n}.
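If you want to verify the 2^n count empirically, here is a self-contained C sketch (my addition); the array size and contents are arbitrary placeholders, since the number of calls does not depend on the data:

#include <stdio.h>

#define N 16
int arr[N + 1];      /* placeholder input; contents don't affect the call count */
int lisarr[N + 1];
long long calls = 0; /* counts invocations of findLIS */

int findLIS(int n) {
    calls++;
    if (n == 0)
        return 1;
    int res;
    for (int i = 0; i < n; i++) {
        res = findLIS(i);
        if (arr[n] > arr[i] && res + 1 > lisarr[n])
            lisarr[n] = res + 1;
    }
    return lisarr[n];
}

int main(void) {
    for (int i = 0; i <= N; i++)
        arr[i] = i;              /* arbitrary increasing data */
    for (int n = 1; n <= N; n++) {
        calls = 0;
        findLIS(n);
        printf("n = %2d  calls = %lld  2^n = %lld\n", n, calls, 1LL << n);
    }
    return 0;
}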

Calculating complexity of recurrence

I am having trouble understanding the concept of recurrences. Given T(n) = 2T(n/2) + 1, how do you calculate the complexity of this relationship? I know that in mergesort the relationship is T(n) = 2T(n/2) + cn, and you can see that you have a tree of depth log2(n) with cn work at each level. But I am unsure how to proceed given a generic function. Are there any tutorials available that explain this clearly?
The solution to your recurrence is T(n) ∈ Θ(n).
Let's expand the formula:
T(n) = 2*T(n/2) + 1. (Given)
T(n/2) = 2*T(n/4) + 1. (Replace n with n/2)
T(n/4) = 2*T(n/8) + 1. (Replace n with n/4)
T(n) = 2*(2*T(n/4) + 1) + 1 = 4*T(n/4) + 2 + 1. (Substitute)
T(n) = 2*(2*(2*T(n/8) + 1) + 1) + 1 = 8*T(n/8) + 4 + 2 + 1. (Substitute)
And do some observations and analysis:
We can see a pattern emerge: T(n) = 2^k * T(n/2^k) + (2^k − 1).
Now, let k = log2(n). Then n = 2^k.
Substituting, we get: T(n) = n * T(n/n) + (n − 1) = n * T(1) + n − 1.
For at least one n, we need to give T(n) a concrete value. So we suppose T(1) = 1.
Therefore, T(n) = n * 1 + n − 1 = 2*n − 1, which is in Θ(n).
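As a quick numerical confirmation (my addition, not part of the original answer), iterating the recurrence for powers of two with T(1) = 1 reproduces the closed form 2n − 1:

#include <stdio.h>

/* Check T(n) = 2*T(n/2) + 1 with T(1) = 1 against the closed form
   2*n - 1, for n a power of two. */
int main(void) {
    long long T = 1;                 /* T(1) = 1 */
    long long n = 1;
    for (int k = 1; k <= 20; k++) {  /* n = 2^k */
        n *= 2;
        T = 2 * T + 1;
        printf("n = %7lld  T(n) = %7lld  2n - 1 = %7lld\n", n, T, 2 * n - 1);
    }
    return 0;
}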
Resources:
https://www.cs.duke.edu/courses/spring05/cps100/notes/slides07-4up.pdf
http://www.cs.duke.edu/~ola/ap/recurrence.html
However, for routine work, the normal way to solve these recurrences is to use the Master theorem.
