I have written a function for calculating the length of the longest increasing subsequence. Here arr[] is an array of length n, and lisarr, also of length n, stores the length of the longest increasing subsequence ending at element i.
I am having difficulty deriving the recurrence relation that would serve as input to the master theorem.
int findLIS(int n){              // arr[] and lisarr[] are globals
    if(n==0)
        return 1;
    int res;
    for(int i=0;i<n;i++){
        res=findLIS(i);          // recompute the LIS length ending at i
        if(arr[n]>arr[i] && res+1>lisarr[n])
            lisarr[n]=res+1;     // element n can extend the sequence ending at i
    }
    return lisarr[n];
}
Please show how to derive the recurrence relation.
Should it be
T(n) = O(n) + T(1)?
It is O(2^n). Let's calculate the exact number of calls and denote it f(n). The recurrence relation is f(n) = 1 + f(n-1) + f(n-2) + ... + f(1) + f(0), with f(1) = 2 and f(0) = 1, which gives f(n) = 2*f(n-1), and finally f(n) = 2^n.
Proof by induction:
Base case n = 0: only one invocation of the function, so f(0) = 1 = 2^0.
Assume f(k) = 2^k holds for all k up to n. Then for input n+1 we have f(n+1) = 1 + f(n) + f(n-1) + ... + f(1) + f(0) = 1 + (2^n + 2^(n-1) + ... + 2 + 1) = 1 + (2^(n+1) - 1) = 2^(n+1). The leading 1 comes from the work outside the for loop, and the sum is a consequence of the loop: there is one recursive call for each i in {0, 1, 2, ..., n}.
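As a quick sanity check on f(n) = 2^n, here is a minimal instrumented harness (my own sketch, not the OP's code; the calls counter and the stub arrays arr/lisarr are assumptions) that counts invocations for small n:

#include <cstdio>

int arr[32], lisarr[32];   // hypothetical stand-ins for the OP's globals
long long calls = 0;       // number of invocations, i.e. f(n)

int findLIS(int n) {
    ++calls;
    if (n == 0) return 1;
    int res;
    for (int i = 0; i < n; i++) {
        res = findLIS(i);                        // one recursive call per i in {0,...,n-1}
        if (arr[n] > arr[i] && res + 1 > lisarr[n])
            lisarr[n] = res + 1;
    }
    return lisarr[n];
}

int main() {
    for (int n = 0; n <= 10; n++) {
        calls = 0;
        findLIS(n);
        printf("n=%2d calls=%lld\n", n, calls);  // prints 1, 2, 4, 8, ..., i.e. 2^n
    }
}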
I am supposed to find the recurrence relation for this snippet of code, but I am really confused about whether what I am doing is right. I will put --- annotations in the code to show my way of thinking.
Assume the tree is balanced.
// compute tree height (longest root-to-leaf path)
int height(TreeNode* root) {
    if (root == NULL) return 0;    **-------- C**
    else {
        // Find height of left subtree, height of right subtree
        // Use results to determine height of tree
        return 1 + max(height(root->left), height(root->right));    **---- n/2**
    }
}
I believe the recurrence relation of this code would be T(n) = c + n/2; however, I feel like I am missing something.
The recurrence relation will be:
T(n) = 2T(n/2) + 1
At each node there is a constant amount of work (the NULL check and the 1 + max), plus two recursive calls, one on each subtree; in a balanced tree each subtree holds about n/2 nodes.
We can derive the time complexity from here:
T(n) = 2T(n/2) + 1
     = 2[2T(n/4) + 1] + 1
     = 4T(n/4) + 2 + 1
     = 4[2T(n/8) + 1] + 3
     = 8T(n/8) + 4 + 3
     = ...
     = 2^k T(n/2^k) + (2^k - 1)
This holds for n = 1, 2, ... Let n = 2^k, so k = log2(n):
     = n T(1) + n - 1
so the time complexity will be O(n). (Note the +1 terms double at each level, so they sum to 2^k - 1 = n - 1 in total; they do not contribute a log factor.)
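To see the linear bound concretely, here is a small hypothetical harness (the calls counter and the build helper are my additions, not part of the question) that counts invocations of height on a perfect binary tree. Each node is visited once and each NULL child once, giving exactly 2n + 1 calls, i.e. Θ(n):

#include <algorithm>
#include <cstdio>

struct TreeNode { TreeNode *left = nullptr, *right = nullptr; };

long long calls = 0;   // number of invocations of height()

int height(TreeNode* root) {
    ++calls;
    if (root == nullptr) return 0;
    return 1 + std::max(height(root->left), height(root->right));
}

// Builds a perfect binary tree of depth d (2^d - 1 nodes).
// Leaks memory on purpose; it is only a demo.
TreeNode* build(int d) {
    if (d == 0) return nullptr;
    TreeNode* t = new TreeNode;
    t->left = build(d - 1);
    t->right = build(d - 1);
    return t;
}

int main() {
    for (int d = 1; d <= 10; d++) {
        long long n = (1LL << d) - 1;  // number of nodes
        calls = 0;
        height(build(d));
        printf("n=%4lld calls=%lld\n", n, calls);  // calls = 2n + 1, linear in n
    }
}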
My pseudo-code looks like:
solve(n):
    for i := 1 to n do
        process(i)
        solve(n-i)
where process(n) is a function with some complexity f(n). In my case f(n) = O(n^2), but I am also interested in the general case (for example f(n) = O(n)).
So I have T(n) = f(n) + ... + f(1) + T(n-1) + ... + T(1). I cannot apply the Master theorem because the subproblems are not of equal size.
How do I calculate the complexity of this recursion?
Small trick – consider solve(n-1):
solve(n):   T(n)   = f(n) + f(n-1) + ... + f(1) + T(n-1) + T(n-2) + ... + T(0)
solve(n-1): T(n-1) =        f(n-1) + ... + f(1) + T(n-2) + ... + T(0)
Subtract the latter from the former:
T(n) - T(n-1) = f(n) + T(n-1)
T(n) = 2T(n-1) + f(n)
Expand repeatedly:
T(n) = 2T(n-1) + f(n)
     = 4T(n-2) + 2f(n-1) + f(n)
     = 8T(n-3) + 4f(n-2) + 2f(n-1) + f(n)
     = ...
     = 2^n T(0) + 2^(n-1) f(1) + ... + 2 f(n-1) + f(n)
Solve the last summation for your f(n) to obtain the complexity, e.g. for f(n) = O(n):
T(n) = O(2^n * (1/2 + 2/4 + 3/8 + ... + n/2^n)) = O(2^n), since the series in parentheses is bounded by a constant.
Alternative method – variable substitution: set m = 2^n (so n = log2(m)) and S(m) = T(log2(m)). Then
S(m) = 2S(m/2) + f(log2(m))
S(m) is in the correct form for the Master Theorem, e.g. for f(n) = O(n) = O(log m), log m grows polynomially slower than m^(log_2 2) = m, so that case of the theorem gives
S(m) = Theta(m), hence T(n) = S(2^n) = Theta(2^n).
Same result, q.e.d.
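For an empirical cross-check of T(n) = Θ(2^n) when f(n) = O(n), here is a minimal sketch (my own harness; process is stubbed to cost exactly i units and ops is an assumed counter). The operation count satisfies ops(n) = 2*ops(n-1) + n and roughly doubles with each increment of n:

#include <cstdio>

long long ops = 0;                   // total work, counting process(i) as i units

void process(int i) { ops += i; }    // stub with cost f(i) = i

void solve(int n) {
    for (int i = 1; i <= n; i++) {
        process(i);
        solve(n - i);
    }
}

int main() {
    long long prev = 0;
    for (int n = 1; n <= 20; n++) {
        ops = 0;
        solve(n);
        printf("n=%2d ops=%10lld ratio=%.3f\n", n, ops,
               prev ? (double)ops / prev : 0.0);   // ratio tends to 2
        prev = ops;
    }
}

(Closed form under these assumptions: ops(n) = 2^(n+1) - n - 2, matching Θ(2^n).)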
Question 5 on Determining complexity for recursive functions (Big O notation) is:
int recursiveFun(int n)
{
    for (int i = 0; i < n; i += 2) {
        // Do something.
    }
    if (n <= 0)
        return 1;
    else
        return 1 + recursiveFun(n - 5);
}
To highlight my question, I'll change the recursive parameter from n-5 to n-2:
int recursiveFun(int n)
{
    for (int i = 0; i < n; i += 2) {
        // Do something.
    }
    if (n <= 0)
        return 1;
    else
        return 1 + recursiveFun(n - 2);
}
I understand the loop runs in n/2 time, since a standard loop runs n times and here we iterate half as often.
But isn't the same also happening for the recursive call? For each recursive call, n is decremented by 2. If n is 10, the call stack is:
recursiveFun(8)
recursiveFun(6)
recursiveFun(4)
recursiveFun(2)
recursiveFun(0)
...which is 5 calls (i.e. 10/2, or n/2). Yet the answer provided by Michael_19 states it runs in n-5 or, in my example, n-2. Clearly n/2 is not the same as n-2. Where have I gone wrong, and why is recursion different from iteration when analyzing for Big-O?
A common way to analyze the big-O of a recursive algorithm is to find a recurrence that "counts" the number of operations done by the algorithm. It is usually denoted T(n).
In your example, the time complexity of this code can be described with the formula:
T(n) = C*n/2 + T(n-2)
       ^^^^^   ^^^^^^^
       loop    recursive call
(assuming "do something" is constant time, so the loop contributes C*n/2).
Since it's pretty obvious it will be in O(n^2), let's show T(n) is in Omega(n^2) using induction:
Induction hypothesis:
T(k) >= C/8 * k^2 for 0 <= k < n
And indeed:
T(n) = C*n/2 + T(n-2) >= (i.h.) C*n/2 + C*(n-2)^2 / 8
     = C*n/2 + C/8 * (n^2 - 4n + 4)
     = C/8 * (4n + n^2 - 4n + 4)
     = C/8 * (n^2 + 4)
     > C/8 * n^2
Thus, T(n) is in big-Omega(n^2).
Showing big-O is done similarly:
Hypothesis: T(k) <= C*k^2 for all 2 <= k < n
T(n) = C*n/2 + T(n-2) <= (i.h.) C*n/2 + C*(n^2 - 4n + 4)
     <= C*(2n + n^2 - 4n + 4)      (using n/2 <= 2n)
     = C*(n^2 - 2n + 4)
For all n >= 2, -2n + 4 <= 0, so for any n >= 2:
T(n) <= C*(n^2 - 2n + 4) <= C*n^2
So the hypothesis holds, and by the definition of big-O, T(n) is in O(n^2).
Since we have shown T(n) is both in O(n^2) and in Omega(n^2), it is also in Theta(n^2).
Analyzing recursion is different from analyzing iteration because:
n (and other local variables) change on each call, and it can be hard to catch this behavior.
Things get far more complex when there are multiple recursive calls.
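To make the iteration count tangible, here is a hypothetical C++ harness (the iters counter stands in for "do something") for the n-2 variant. The loop body runs n/2 + (n-2)/2 + ... + 1 ≈ n^2/8 times, so doubling n roughly quadruples the count:

#include <cstdio>

long long iters = 0;   // counts executions of the loop body ("do something")

int recursiveFun(int n) {
    for (int i = 0; i < n; i += 2)
        ++iters;                    // stand-in for "do something"
    if (n <= 0) return 1;
    return 1 + recursiveFun(n - 2);
}

int main() {
    for (int n = 100; n <= 1600; n *= 2) {
        iters = 0;
        recursiveFun(n);
        printf("n=%5d iters=%8lld\n", n, iters);  // ~n^2/8, quadruples as n doubles
    }
}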
The following is a recursive function for generating the power set:
void powerset(int[] items, int s, Stack<Integer> res) {
    System.out.println(res);              // print the current subset
    for (int i = s; i < items.length; i++) {
        res.push(items[i]);               // include items[i]
        powerset(items, i + 1, res);      // recurse on the remaining suffix
        res.pop();                        // exclude items[i] again
    }
}
I don't really understand why this would take O(2^N). Where is that 2 coming from?
And why does T(N) = T(N-1) + T(N-2) + T(N-3) + ... + T(1) + T(0) solve to O(2^N)? Can someone explain why?
We double the number of sets every time we add another element to the original array.
For example, say we only have the empty set {}. What happens to the power set if we add the element a? We then have 2 sets: {}, {a}. What if we then add b? We then have 4 sets: {}, {a}, {b}, {a,b}.
Notice that 2^n has the same doubling nature:
2^1 = 2, 2^2 = 4, 2^3 = 8, ...
Below is a more generic explanation.
Note that generating the power set is basically generating all combinations.
(nCr is the number of combinations that can be made by taking r items from n items in total.)
Formula: nCr = n! / ((n-r)! * r!)
Example: the power set of {1,2,3} is {{}, {1}, {2}, {3}, {1,2}, {2,3}, {1,3}, {1,2,3}}, which has 8 = 2^3 elements.
1) 3C0 = #combinations taking 0 items from 3 = 3! / ((3-0)! * 0!) = 1
2) 3C1 = #combinations taking 1 item from 3 = 3! / ((3-1)! * 1!) = 3
3) 3C2 = #combinations taking 2 items from 3 = 3! / ((3-2)! * 2!) = 3
4) 3C3 = #combinations taking 3 items from 3 = 3! / ((3-3)! * 3!) = 1
Adding the above four counts gives 1 + 3 + 3 + 1 = 8 = 2^3. In general, nC0 + nC1 + ... + nCn = 2^n (the binomial theorem with x = y = 1), so a power set of n items contains 2^n sets.
So if an algorithm generates a power set with all these combinations, it takes time proportional to 2^n, and the time complexity is O(2^n).
Something like this:
T(1) = T(0)
T(2) = T(1) + T(0) = 2T(1)
T(3) = T(2) + T(1) + T(0) = T(2) + T(2) = 2T(2)
Thus we have
T(N) = 2T(N-1) = 4T(N-2) = ... = 2^(N-1) T(1), which is O(2^N).
Shriganesh Shintre's answer is pretty good, but you can simplify it even more:
Assume we have a set S:
{a1, a2, ..., aN}
Now describe a subset s by giving each item in S the value 1 (included) or 0 (excluded). The number of possible subsets s is then the product
2 * 2 * ... * 2 = 2^N.
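The same binary-choice argument can be made executable. Here is a minimal sketch (mine, not the original Java code) that encodes each subset as an N-bit mask, where bit i is 1 iff items[i] is included; iterating over all masks visits exactly 2^N subsets:

#include <cstdio>

int main() {
    const int items[] = {1, 2, 3};
    const int N = 3;
    // Each of the 2^N bitmasks encodes one subset: bit i set <=> items[i] included.
    for (unsigned mask = 0; mask < (1u << N); mask++) {
        printf("{");
        for (int i = 0; i < N; i++)
            if (mask & (1u << i)) printf(" %d", items[i]);
        printf(" }\n");
    }
    // Exactly 2^3 = 8 subsets are printed: the whole power set.
}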
I can explain this in several mathematical ways. The first one:
Consider a single element, say a. Every subset has two options regarding a: either it contains a or it does not. So there must be 2^n subsets, and since the function is called once for each subset it creates, it is called 2^n times.
Another solution:
This one works directly with the recursion and produces your equation. Define T(0) = 1: for an empty remainder you just call the function and it ends there. Now suppose that for every set with k < n elements we have this formula:
T(k) = T(k-1) + T(k-2) + ... + T(1) + T(0)    (formula *)
I want to prove that the equation also holds for k = n.
Consider all the subsets that contain the first element (this is what happens at the beginning of your algorithm, when you push the first element). There are n-1 elements left, so it takes T(n-1) to find all the subsets that contain the first element. So far we have:
T(n) = T(n-1) + T(subsets that don't have the first element)    (formula **)
At the end of the first iteration of your for loop you pop the first element; the remaining iterations generate exactly the subsets that don't contain it. Applying formula (*) to those n-1 remaining elements:
T(subsets that don't have the first element) = T(n-2) + ... + T(1) + T(0) = T(n-1)    (formula ***)
From (**) and (***) we get exactly what we wanted:
T(n) = T(n-1) + T(n-2) + ... + T(1) + T(0)
and also T(n) = T(n-1) + T(n-1) = 2 * T(n-1), so T(n) = O(2^n).
I wrote a code segment to determine the longest path in a graph. The code follows, but I don't know how to work out its computational complexity because of the recursive call in the middle. Since finding the longest path is an NP-complete problem, I assume it's something like O(n!) or O(2^n), but how can I actually determine it?
public static int longestPath(int A) {    // V, visited[] and length[][] are globals
    int k;
    int dist2 = 0;
    int max = 0;
    visited[A] = true;
    for (k = 1; k <= V; ++k) {
        if (!visited[k]) {
            dist2 = length[A][k] + longestPath(k);   // extend the path through k
            if (dist2 > max) {
                max = dist2;
            }
        }
    }
    visited[A] = false;    // unmark A so other paths may pass through it
    return (max);
}
Your recurrence relation is T(n, m) = mT(n, m-1) + O(n), where n denotes the number of nodes and m the number of unvisited nodes (you call longestPath once for each of the m unvisited nodes, and the loop executes the visited test n times). The base case is T(n, 0) = O(n) (just the visited tests).
Solve this and I believe you get T(n, n) is O(n * n!).
EDIT
Working:
T(n, n) = nT(n, n-1) + O(n)
        = n[(n-1)T(n, n-2) + O(n)] + O(n) = ...
        = n(n-1)...1 * T(n, 0) + O(n) * (1 + n + n(n-1) + ... + n(n-1)...2)
        = O(n) * (1 + n + n(n-1) + ... + n!)
        = O(n) * O(n!)   (see http://oeis.org/A000522)
        = O(n * n!)
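As a rough empirical check (a hypothetical harness of mine on a complete graph with unit edge lengths; V, visited and length mirror the question's globals), counting invocations gives calls(m) = 1 + m*calls(m-1) for m unvisited nodes, which is about e*m!; each call also does O(V) loop work, consistent with the O(n * n!) bound:

#include <cstdio>

const int V = 8;            // small complete graph
bool visited[V + 1];
int length[V + 1][V + 1];
long long calls = 0;        // number of invocations of longestPath

int longestPath(int A) {
    ++calls;
    int max = 0;
    visited[A] = true;
    for (int k = 1; k <= V; ++k) {
        if (!visited[k]) {
            int dist2 = length[A][k] + longestPath(k);
            if (dist2 > max) max = dist2;
        }
    }
    visited[A] = false;
    return max;
}

int main() {
    for (int i = 1; i <= V; i++)
        for (int j = 1; j <= V; j++)
            length[i][j] = 1;
    longestPath(1);
    printf("V=%d calls=%lld\n", V, calls);  // grows like e*(V-1)!, i.e. factorially
}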