The following is a recursive function for generating the power set:
void powerset(int[] items, int s, Stack<Integer> res) {
    System.out.println(res);              // print the current subset
    for (int i = s; i < items.length; i++) {
        res.push(items[i]);               // include items[i]
        powerset(items, i + 1, res);      // recurse on the rest of the array
        res.pop();                        // exclude items[i] again
    }
}
I don't really understand why this takes O(2^N). Where does that 2 come from?
Why does T(N) = T(N-1) + T(N-2) + T(N-3) + ... + T(1) + T(0) solve to O(2^N)? Can someone explain why?
We are doubling the number of operations we do every time we decide to add another element to the original array.
For example, suppose we start with only the empty set {}. What happens to the power set if we add a? We would then have 2 sets: {}, {a}. What if we then add b? We would then have 4 sets: {}, {a}, {b}, {a,b}.
Notice that 2^n has the same doubling nature:
2^1 = 2, 2^2 = 4, 2^3 = 8, ...
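Here is a minimal Java sketch of that doubling (my own illustration, not code from the question; the class name DoublingDemo is arbitrary). Each new element copies every subset built so far and appends the element to the copy, so the count doubles at every step:

import java.util.ArrayList;
import java.util.List;

public class DoublingDemo {
    public static void main(String[] args) {
        List<List<Character>> sets = new ArrayList<>();
        sets.add(new ArrayList<>());                     // start with just the empty set {}
        for (char c : new char[] {'a', 'b', 'c'}) {
            List<List<Character>> copies = new ArrayList<>();
            for (List<Character> s : sets) {
                List<Character> withC = new ArrayList<>(s);
                withC.add(c);                            // a copy of an existing subset, plus c
                copies.add(withC);
            }
            sets.addAll(copies);                         // the number of subsets doubles here
            System.out.println("after adding " + c + ": " + sets.size() + " subsets");
        }
    }
}

Running it prints 2, 4 and 8 subsets after adding a, b and c, matching the doubling above.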
Below is a more generic explanation.
Note that generating a power set is basically generating all combinations.
(nCr is the number of combinations that can be made by taking r items from n total items.)
Formula: nCr = n! / ((n-r)! * r!)
Example: the power set of {1,2,3} is {{}, {1}, {2}, {3}, {1,2}, {2,3}, {1,3}, {1,2,3}}, which has 8 = 2^3 elements.
1) 3C0 = #combinations possible taking 0 items from 3 = 3! / ((3-0)! * 0!) = 1
2) 3C1 = #combinations possible taking 1 item from 3 = 3! / ((3-1)! * 1!) = 3
3) 3C2 = #combinations possible taking 2 items from 3 = 3! / ((3-2)! * 2!) = 3
4) 3C3 = #combinations possible taking 3 items from 3 = 3! / ((3-3)! * 3!) = 1
Adding the above four gives 1 + 3 + 3 + 1 = 8 = 2^3. So it turns out there are 2^n possible sets in the power set of n items.
So if an algorithm generates a power set by producing all of these combinations, it takes time proportional to 2^n, and its time complexity is therefore O(2^n).
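The same count also follows directly from the binomial theorem (a standard identity, noted here for completeness):

nC0 + nC1 + nC2 + ... + nCn = (1 + 1)^n = 2^n

so summing the combinations over every size r = 0..n always gives exactly 2^n subsets.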
Something like this:
T(1)=T(0);
T(2)=T(1)+T(0)=2T(0);
T(3)=T(2)+T(1)+T(0)=2T(2);
Thus, since T(N-2) + T(N-3) + ... + T(0) = T(N-1), we have
T(N) = 2T(N-1) = 4T(N-2) = ... = 2^(N-1) T(1), which is O(2^N).
Shriganesh Shintre's answer is pretty good, but you can simplify it even more:
Assume we have a set S:
{a1, a2, ..., aN}
Now we can describe a subset s by giving each item of S the value 1 (included) or 0 (excluded). The number of possible subsets s is therefore the product
2 * 2 * ... * 2 (N factors), i.e. 2^N.
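That include/exclude encoding maps directly onto the bits of a counter, which gives a simple non-recursive way to enumerate the power set. A rough Java sketch (my own; BitmaskPowerSet and printPowerSet are made-up names for illustration) where bit j of mask decides whether items[j] is in the current subset, and mask runs over all 2^N values:

import java.util.ArrayList;
import java.util.List;

public class BitmaskPowerSet {
    // Enumerate all 2^n subsets of items via an n-bit counter.
    static void printPowerSet(int[] items) {
        int n = items.length;
        for (int mask = 0; mask < (1 << n); mask++) {    // 2^n masks in total
            List<Integer> subset = new ArrayList<>();
            for (int j = 0; j < n; j++) {
                if ((mask & (1 << j)) != 0) {            // bit j set -> include items[j]
                    subset.add(items[j]);
                }
            }
            System.out.println(subset);
        }
    }

    public static void main(String[] args) {
        printPowerSet(new int[] {1, 2, 3});              // prints 8 = 2^3 subsets
    }
}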
This can be explained in several mathematical ways. The first one:
Consider one element, say a. Every subset has two options regarding a: either it contains a or it does not. The same is true for each of the n elements, so there must be $2^n$ subsets, and since the function is called once to create each subset, it is called $2^n$ times.
Another solution:
This solution works with the recursion itself and produces your equation. Let me define T(0) = 2; for a set with one element we likewise have T(1) = 2: you just call the function and it ends there. Now suppose that for every set with k < n elements the following formula holds:
T(k) = T(k-1) + T(k-2) + ... + T(1) + T(0)    (call this formula (*))
I want to prove that this equation also holds for k = n.
Consider every subset that contains the first element (this is what happens at the beginning of your algorithm, when you push the first element). We are then left with n-1 elements, so it takes T(n-1) to find every subset that contains the first element. So far we have:
T(n) = T(n-1) + T(subsets that don't contain the first element)    (call this formula (**))
At the end of your for loop you remove the first element; now we need every subset that doesn't contain the first element, as in (**), and again there are n-1 remaining elements, so we have:
T(subsets that don't contain the first element) = T(n-1)    (call this formula (***))
From formulas (*) and (***) we have:
T(subsets that don't contain the first element) = T(n-1) = T(n-2) + ... + T(1) + T(0)    (call this formula (****))
And now we have what you wanted from the start; from formulas (****) and (**) we have:
T(n) = T(n-1) + T(n-2) + ... + T(1) + T(0)
We also have, from (**) and (***), T(n) = T(n-1) + T(n-1) = 2 * T(n-1), so T(n) = O(2^n).
I am supposed to find the recurrence relation for this snippet of code, but I am not sure whether what I am doing is right. I will add --- annotations to show my way of thinking.
Assume the tree is balanced.
// compute tree height (longest root-to-leaf path)
int height(TreeNode* root) {
    if (root == NULL) return 0;                                    // -------- C
    else {
        // Find height of left subtree, height of right subtree
        // Use results to determine height of tree
        return 1 + max(height(root->left), height(root->right));  // ---- n/2
    }
}
I believe the recurrence for this code would be T(n) = c + n/2, but I feel like I am missing something.
The recurrence relation will be:
T(n) = 2T(n/2) + 1
At each node there is a constant amount of work, plus two recursive calls on the left and right subtrees (each holding about n/2 nodes in a balanced tree).
We can derive the time complexity from here:
T(n) = 2T(n/2) + 1
= 2[2T(n/4) + 1] + 1
= 4T(n/4) + 3
= 4[2T(n/8) + 1] + 3
= 8T(n/8) + 7
...
= 2^k T(n/2^k) + 2^k - 1
Let n = 2^k, so k = log2(n). Then
= n T(1) + n - 1
so the time complexity is O(n).
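As a cross-check (not part of the original answer), the Master theorem gives the same bound: with a = 2, b = 2 and f(n) = Theta(1), we have f(n) = O(n^(log_2(2) - e)) for, say, e = 0.5, so case 1 applies and T(n) = Theta(n^(log_2(2))) = Theta(n).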
Is this recurrence relation O(infinity)?
T(n) = 49*T(n/7) + n
There are no base conditions given.
I tried solving it using the Master theorem and the answer is Theta(n^2). But when solving with a recurrence tree, the solution comes out to be an infinite series, n * (7 + 7^2 + 7^3 + ...).
Can someone please help?
Let n = 7^m. The recurrence becomes
T(7^m) = 49 T(7^(m-1)) + 7^m,
or
S(m) = 49 S(m-1) + 7^m.
The homogeneous part gives
S(m) = C 49^m
and the general solution is
S(m) = C 49^m - 7^m / 6
i.e.
T(n) = C n² - n / 6 = (T(1) + 1 / 6) n² - n / 6.
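A quick numerical sanity check of that closed form (my own sketch, assuming T(1) = 1, which gives C = 7/6 and T(n) = (7n² - n)/6), comparing it with the recurrence at powers of 7:

public class CheckRecurrence {
    // T(n) = 49*T(n/7) + n with T(1) = 1, evaluated at powers of 7.
    static long recursive(long n) {
        if (n == 1) return 1;
        return 49 * recursive(n / 7) + n;
    }

    // Closed form T(n) = (T(1) + 1/6) n^2 - n/6 with T(1) = 1, i.e. (7n^2 - n)/6.
    static long closedForm(long n) {
        return (7 * n * n - n) / 6;
    }

    public static void main(String[] args) {
        for (long n = 1; n <= 7_000_000L; n *= 7) {
            System.out.printf("n=%d  recursive=%d  closed=%d%n",
                    n, recursive(n), closedForm(n));
        }
    }
}

Both columns agree, and both grow like n², consistent with Theta(n²).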
If you try the recursion method:
T(n) = 7^2 T(n/7) + n = 7^2 [7^2 T(n/7^2) + n/7] + n = 7^4 T(n/7^2) + 7n + n
= ... = 7^(2i) * T(n/7^i) + n * [7^0 + 7^1 + 7^2 + ... + 7^(i-1)]
As i grows, n/7^i gets closer to 1, and as mentioned in the other answer, T(1) is a constant. So if we assume T(1) = 1, then:
T(n/7^i) = 1
n/7^i = 1 => i = log_7 (n)
So
T(n) = 7^(2*log_7 (n)) * T(1) + n * [7^0 + 7^1 + 7^2 + ... + 7^(log_7(n)-1)]
=> T(n) = n^2 + n * [1 + 7 + 7^2 + ... + 7^(log_7(n)-1)] = n^2 + n * (n-1)/6 = Theta(n^2)
Usually, when no base case is provided for a recurrence relation, the assumption is that the base case is something like T(1) = 1, so that the recursion eventually terminates.
Something to think about - you can only get an infinite series from your recursion tree if the recursion tree is infinitely deep. Although no base case was specified in the problem, you can operate under the assumption that there is one and that the recursion stops when the input gets sufficiently small for some definition of "sufficiently small." Based on that, at what point does the recursion stop? From there, you should be able to convert your infinite series into a series of finite length, which then will give you your answer.
Hope this helps!
I have a Computer Science midterm tomorrow and I need help determining the complexity of the recursive function below, which is more complicated than the ones I've already worked on: it has two variables.
T(n) = 3 + mT(n-m)
In simpler cases where m is a constant, the formula can easily be obtained by unpacking the relation; however, in this case, unpacking doesn't make life any easier, as shown below (let's say T(0) = c):
T(n) = 3 + mT(n-m)
T(n-1) = 3 + mT(n-m-1)
T(n-2) = 3 + mT(n-m-2)
...
Obviously, there's no straightforward elimination between these equations. So I'm wondering whether I should use another technique for such cases.
Don't worry about m; it is just a constant parameter. However, you're unrolling your recursion incorrectly. Each step of unrolling involves three operations:
Taking the value of T at an argument that is m less
Multiplying it by m
Adding constant 3
So, it will look like this:
T(n) = m * T(n - m) + 3 = (Step 1)
= m * (m * T(n - 2*m) + 3) + 3 = (Step 2)
= m * (m * (m * T(n - 3*m) + 3) + 3) + 3 = ... (Step 3)
and so on. Unrolling T(n) up to step k gives the following formula:
T(n) = m^k * T(n - k*m) + 3 * (1 + m + m^2 + m^3 + ... + m^(k-1))
Now you set n - k*m = 0 to use the initial condition T(0) and get:
k = n / m
Now you need the formula for the sum of a geometric progression, and finally you'll get a closed formula for T(n) (I'm leaving that final step to you).
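For reference, carrying out that final step (my addition; this assumes m >= 2 and that m divides n evenly, so that n - k*m reaches exactly 0):

T(n) = m^(n/m) * T(0) + 3 * (m^(n/m) - 1) / (m - 1)

which grows like Theta(m^(n/m)).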
I have written a function for calculating the length of the longest increasing subsequence. Here arr[] is an array of length n, and lisarr[] has length n and stores the length of the longest increasing subsequence ending at element i.
I am having difficulty calculating the recurrence relation that is the input for the Master theorem.
int findLIS(int n) {
    if (n == 0)
        return 1;
    int res;
    for (int i = 0; i < n; i++) {
        res = findLIS(i);
        if (arr[n] > arr[i] && res + 1 > lisarr[n])
            lisarr[n] = res + 1;
    }
    return lisarr[n];
}
Please give the way to calculate the recurrence relation.
Should it be
T(n)=O(n)+T(1)?
It is O(2^n). Let's calculate the exact number of iterations and denote it by f(n). The recurrence relation is f(n) = 1 + f(n-1) + f(n-2) + ... + f(1) + f(0), with f(1) = 2 and f(0) = 1, which gives f(n) = 2*f(n-1), and finally f(n) = 2^n.
Proof by induction:
Base n=0 -> Only one iteration of the function.
Let us assume that f(n) = 2^n. Then for input n+1 we have f(n+1) = 1 + f(n) + f(n-1) + ... + f(1) + f(0) = 1 + (2^n + 2^(n-1) + ... + 2 + 1) = 1 + (2^(n+1) - 1) = 2^(n+1). The 1 at the beginning comes from the part outside the for loop, and the sum is the consequence of the for loop: you always have one recursive call for each i in {0, 1, 2, ..., n}.
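If you want to confirm this empirically, here is a small Java sketch (my own; skeleton keeps only the call structure of findLIS, and calls is a hypothetical global counter) that counts the recursive invocations and matches 2^n:

public class CallCounter {
    static long calls = 0;

    // Same call structure as findLIS, stripped of the array logic:
    // the body loops over i = 0..n-1 and recurses on each i.
    static void skeleton(int n) {
        calls++;
        if (n == 0) return;
        for (int i = 0; i < n; i++) {
            skeleton(i);
        }
    }

    public static void main(String[] args) {
        for (int n = 0; n <= 20; n++) {
            calls = 0;
            skeleton(n);
            System.out.printf("n=%2d  calls=%10d  2^n=%10d%n", n, calls, 1L << n);
        }
    }
}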
I have this recursive function:
f(n) = 2 * f(n-1) + 3 * f(n-2) + 4
f(1) = 2
f(2) = 8
I know from experience that the explicit form of it is:
f(n) = 3 ^ n - 1 // pow(3, n) - 1
I want to know if there's any way to prove that. I googled a bit but didn't find anything simple to understand. I already know that generating functions can probably solve it, but they're too complex and I'd rather not get into them. I'm looking for a simpler way.
P.S.
If it helps I remember something like this solved it:
f(n) = 2 * f(n-1) + 3 * f(n-2) + 4
// consider f(n) = x ^ n
x ^ n = 2 * x ^ (n-1) + 3 * x ^ (n-2) + 4
And then you somehow computed x, which led to the explicit form of the recursive formula, but I can't quite remember how.
f(n) = 2 * f(n-1) + 3 * f(n-2) + 4
f(n+1) = 2 * f(n) + 3 * f(n-1) + 4
f(n+1)-f(n) = 2 * f(n) - 2 * f(n-1) + 3 * f(n-1) - 3 * f(n-2)
f(n+1) = 3 * f(n) + f(n-1) - 3 * f(n-2)
Now the 4 is gone.
As you said the next step is letting f(n) = x ^ n
x^(n+1) = 3 * x^n + x^(n-1) - 3 * x^(n-2)
divide by x^(n-2)
x^3 = 3 * x^2 + x - 3
x^3 - 3 * x^2 - x + 3 = 0
factorise to find x
(x-3)(x-1)(x+1) = 0
x = -1 or 1 or 3
f(n) = A * (-1)^n + B * 1^n + C * 3^n
f(n) = A * (-1)^n + B + C * 3^n
Now find A,B and C using the values you have
f(1) = 2; f(2) = 8; f(3) = 26
f(1) = 2 = -A + B + 3C
f(2) = 8 = A + B + 9C
f(3) = 26 = -A + B + 27C
solving for A,B and C:
f(3)-f(1) = 24 = 24C => C = 1
f(2)-f(1) = 6 = 2A + 6 => A = 0
2 = B + 3 => B = -1
Finally
f(n) = 3^n - 1
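A quick numerical check of this result (my own sketch): iterate the recurrence from the given f(1) = 2 and f(2) = 8 and compare against 3^n - 1:

public class ClosedFormCheck {
    public static void main(String[] args) {
        long fPrev = 2, fCur = 8;                        // f(1), f(2)
        for (int n = 3; n <= 20; n++) {
            long fNext = 2 * fCur + 3 * fPrev + 4;       // f(n) = 2f(n-1) + 3f(n-2) + 4
            long closed = (long) Math.pow(3, n) - 1;     // 3^n - 1
            System.out.printf("n=%2d  recurrence=%12d  closed=%12d%n", n, fNext, closed);
            fPrev = fCur;
            fCur = fNext;
        }
    }
}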
Ok, I know you didn't want generating functions (GF from now on) and all the complicated stuff, but my problem turned out to be nonlinear and simple linear methods didn't seem to work. So after a full day of searching, I found the answer and hopefully these findings will be of help to others.
My problem: a[n+1]= a[n]/(1+a[n]) (i.e. not linear (nor polynomial), but also not completely nonlinear - it is a rational difference equation)
1. If your recurrence is linear (or polynomial), wikiHow has step-by-step instructions (with and without GF).
2. If you want to read something about GF, go to this wiki, but I didn't get it until I started doing examples (see next).
3. GF usage example on Fibonacci.
4. If the previous example didn't make sense, download the GF book and read the simplest GF example (section 1.1, i.e. a[n+1] = 2 a[n] + 1, then 1.2, then 1.3 - Fibonacci).
5. (While I'm on the book topic) templatetypedef mentioned Concrete Mathematics, download here; I don't know much about it except that it has chapters on recurrences, sums, and GF (among others) and a table of simple GFs on page 335.
6. As I dove deeper into the nonlinear stuff, I saw this page; using it, I failed with the z-transform approach and didn't try linear algebra, but the link to rational difference equations was the best lead (see next step).
7. So, as per this page, rational functions are nice because you can transform them into polynomials and use the linear methods of steps 1, 3 and 4 above, which I wrote out by hand and probably made some mistake in, because (see 8).
8. Mathematica (or even the free WolframAlpha) has a recurrence solver, which with RSolve[{a[n + 1] == a[n]/(1 + a[n]), a[1] == A}, a[n], n] got me a simple {{a[n] -> A/(1 - A + A n)}}. So I guess I'll go back and look for the mistake in my hand calculations (they are good for understanding how the whole conversion process works); a quick numerical check of the result is sketched below.
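The quick numerical check mentioned in item 8 (my own Java sketch, with an arbitrary starting value A = 0.5): iterate a[n+1] = a[n]/(1 + a[n]) and compare with the RSolve closed form A/(1 + (n-1)A):

public class RationalRecurrenceCheck {
    public static void main(String[] args) {
        double A = 0.5;                                  // a[1] = A (arbitrary choice)
        double a = A;
        for (int n = 1; n <= 10; n++) {
            double closed = A / (1.0 + (n - 1) * A);     // same as A/(1 - A + A*n)
            System.out.printf("n=%2d  iterated=%.10f  closed=%.10f%n", n, a, closed);
            a = a / (1.0 + a);                           // a[n+1] = a[n] / (1 + a[n])
        }
    }
}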
Anyways, hope this helps.
In general, there is no algorithm for converting a recursive definition into a closed-form one. This problem is undecidable. As an example, consider this recursive definition, which counts the number of steps in the Collatz sequence:
f(1) = 0
f(2n) = 1 + f(n)
f(2n + 1) = 1 + f(6n + 4)
It's not known whether this is even a well-defined function. If an algorithm existed that could convert it into a closed form, we could decide whether or not it is well-defined.
However, for many common cases, it is possible to convert a recursive definition into a closed form. The excellent textbook Concrete Mathematics spends many of its pages showing how to do this. One common technique that works quite well when you already have a guess at the answer is induction. As an example for your case, suppose you believe that your recursive definition does indeed give 3^n - 1. To prove this, try proving that it holds for the base cases, then show that this knowledge lets you generalize the solution upward. You didn't put a base case in your post, but I'm assuming that
f(0) = 0
f(1) = 2
Given this, let's see whether your hunch is correct. For the specific inputs 0 and 1, you can verify by inspection that the function does compute 3^n - 1. For the inductive step, let's assume that for all n' < n, f(n') = 3^{n'} - 1. Then we have that
f(n) = 2f(n - 1) + 3f(n - 2) + 4
= 2 * (3^{n-1} - 1) + 3 * (3^{n-2} - 1) + 4
= 2 * 3^{n-1} - 2 + 3^{n-1} - 3 + 4
= 3 * 3^{n-1} - 5 + 4
= 3^n - 1
So we have just proven that this recursive function does indeed produce 3^n - 1.