Is this recurrence relation O(infinity)?
T(n) = 49*T(n/7) + n
There are no base conditions given.
I tried solving it using the Master theorem and the answer is Theta(n^2). But when solving with a recursion tree, the solution comes out to be an infinite series, n*(7 + 7^2 + 7^3 + ...)
Can someone please help?
Let n = 7^m. The recurrence becomes
T(7^m) = 49 T(7^(m-1)) + 7^m,
or
S(m) = 49 S(m-1) + 7^m.
The homogeneous part gives
S(m) = C 49^m
and the general solution is
S(m) = C 49^m - 7^m / 6
i.e.
T(n) = C n² - n / 6 = (T(1) + 1 / 6) n² - n / 6.
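As a quick sanity check, this closed form can be compared against the recurrence numerically; here is a small sketch (it assumes T(1) = 1 and evaluates only at powers of 7, where the recurrence is well defined):

from fractions import Fraction

T1 = Fraction(1)

def T(n):
    # The recurrence, defined here only for n a power of 7.
    if n == 1:
        return T1
    return 49 * T(n // 7) + n

def closed_form(n):
    return (T1 + Fraction(1, 6)) * n**2 - Fraction(n, 6)

for m in range(5):
    n = 7**m
    assert T(n) == closed_form(n)   # exact match with rational arithmetic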
If you try the recursion method:
T(n) = 7^2 T(n/7) + n = 7^2 [7^2 T(n/7^2) + n/7] + n = 7^4 T(n/7^2) + 7n + n
= ... = 7^(2i) * T(n/7^i) + n * [7^0 + 7^1 + 7^2 + ... + 7^(i-1)]
As i grows, n/7^i gets closer to 1, and as mentioned in the other answer, T(1) is a constant. So if we assume T(1) = 1, then:
T(n/7^i) = 1
n/7^i = 1 => i = log_7 (n)
So
T(n) = 7^(2*log_7 (n)) * T(1) + n * [7^0 + 7^1 + 7^2 + ... + 7^(log_7(n)-1)]
=> T(n) = n^2 + n * [1 + 7 + 7^2 + ... + n/7] = n^2 + n*(n-1)/6 = Theta(n^2)
Usually, when no base case is provided for a recurrence relation, the assumption is that the base case is something like T(1) = 1. That way, the recursion eventually terminates.
Something to think about - you can only get an infinite series from your recursion tree if the recursion tree is infinitely deep. Although no base case was specified in the problem, you can operate under the assumption that there is one and that the recursion stops when the input gets sufficiently small for some definition of "sufficiently small." Based on that, at what point does the recursion stop? From there, you should be able to convert your infinite series into a series of finite length, which then will give you your answer.
Hope this helps!
I have a Computer Science Midterm tomorrow and I need help determining the complexity of a particular recursive function as below, which is more complicated than the ones I've already worked on: it has two variables
T(n) = 3 + mT(n-m)
In simpler cases where m is a constant, the formula can easily be obtained by unpacking the relation; in this case, however, unpacking doesn't make life any easier, as shown below (let's say T(0) = c):
T(n) = 3 + mT(n-m)
T(n-1) = 3 + mT(n-m-1)
T(n-2) = 3 + mT(n-m-2)
...
Obviously, there's no straightforward elimination from these equations. So I'm wondering whether I should use another technique for such cases.
Don't worry about m - it's just a constant parameter. However, you're unrolling your recursion incorrectly. Each step of unrolling involves three operations:
Taking the value of T at an argument that is m less
Multiplying it by m
Adding constant 3
So, it will look like this:
T(n) = m * T(n - m) + 3 = (Step 1)
= m * (m * T(n - 2*m) + 3) + 3 = (Step 2)
= m * (m * (m * T(n - 3*m) + 3) + 3) + 3 = ... (Step 3)
and so on. Unrolling T(n) up to step k gives the following formula:
T(n) = m^k * T(n - k*m) + 3 * (1 + m + m^2 + m^3 + ... + m^(k-1))
Now you set n - k*m = 0 to use the initial condition T(0) and get:
k = n / m
Now use the formula for the sum of a geometric progression, and you'll get a closed formula for T(n) (I'm leaving that final step to you).
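To see that the unrolled formula at step k = n/m really matches the recurrence, here is a quick numeric check (a sketch; it assumes m > 1, m divides n, and an arbitrary base value T(0) = c = 4):

def T(n, m, c):
    # The recurrence T(n) = m*T(n - m) + 3, with T(0) = c.
    return c if n == 0 else m * T(n - m, m, c) + 3

def unrolled(n, m, c):
    # m^k * T(0) + 3 * (1 + m + ... + m^(k-1)), with k = n/m.
    k = n // m
    return c * m**k + 3 * sum(m**j for j in range(k))

for m in (2, 3, 5):
    for k in range(6):
        assert T(k * m, m, c=4) == unrolled(k * m, m, c=4)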
The following is a recursive function for generating powerset
void powerset(int[] items, int s, Stack<Integer> res) {
    System.out.println(res);
    for (int i = s; i < items.length; i++) {
        res.push(items[i]);
        powerset(items, i + 1, res);
        res.pop();
    }
}
I don't really understand why this would take O(2^N). Where's that 2 coming from?
Why does T(N) = T(N-1) + T(N-2) + T(N-3) + ... + T(1) + T(0) solve to O(2^N)? Can someone explain why?
We are doubling the number of operations we do every time we decide to add another element to the original array.
For example, let us say we only have the empty set {}. What happens to the power set if we add the element a? We would then have 2 sets: {}, {a}. What if we then add b? We would have 4 sets: {}, {a}, {b}, {a,b}.
Notice 2^n also implies a doubling nature.
2^1 = 2, 2^2 = 4, 2^3 = 8, ...
Below is a more generic explanation.
Note that generating a power set is basically generating all combinations.
(nCr is the number of combinations that can be made by taking r items from n total items)
formula: nCr = n!/((n-r)! * r!)
Example: the power set of {1,2,3} is {{}, {1}, {2}, {3}, {1,2}, {2,3}, {1,3}, {1,2,3}}, which has 8 = 2^3 elements.
1) 3C0 = #combinations possible taking 0 items from 3 = 3! / ((3-0)! * 0!) = 1
2) 3C1 = #combinations possible taking 1 items from 3 = 3! / ((3-1)! * 1!) = 3
3) 3C2 = #combinations possible taking 2 items from 3 = 3! / ((3-2)! * 2!) = 3
4) 3C3 = #combinations possible taking 3 items from 3 = 3! / ((3-3)! * 3!) = 1
If you add the above four counts, you get 1 + 3 + 3 + 1 = 8 = 2^3. So basically there are 2^n possible sets in the power set of n items.
So if an algorithm generates a power set by producing all of these combinations, it's going to take time proportional to 2^n, and the time complexity is O(2^n).
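A one-line check of that count, using Python's math.comb (a sketch):

from math import comb

# nC0 + nC1 + ... + nCn == 2^n for every n.
for n in range(10):
    assert sum(comb(n, r) for r in range(n + 1)) == 2**n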
Something like this
T(1)=T(0);
T(2)=T(1)+T(0)=2T(0);
T(3)=T(2)+T(1)+T(0)=2T(2);
Thus we have
T(N)=2T(N-1)=4T(N-2)=... = 2^(N-1)T(1), which is O(2^N)
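Here is a small sketch that evaluates that recurrence directly (taking T(0) = 1) and confirms the doubling:

# T(N) = T(N-1) + T(N-2) + ... + T(0), taking T(0) = 1.
T = [1]                      # T(0)
for N in range(1, 10):
    T.append(sum(T))         # T(N) is the sum of all previous values
    if N >= 2:
        assert T[N] == 2 * T[N - 1]   # doubling, so T(N) = 2^(N-1) * T(1)
print(T)                     # [1, 1, 2, 4, 8, 16, 32, ...]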
Shriganesh Shintre's answer is pretty good, but you can simplify it even more:
Assume we have a set S:
{a1, a2, ..., aN}
Now we can describe a subset s by giving each item in the set either the value 1 (included) or 0 (excluded). From this we can see that the number of possible subsets s is the product:
2 * 2 * ... * 2 or 2^N
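That include/exclude choice maps directly onto bitmasks, which gives a compact way to enumerate all 2^N subsets (a Python sketch):

def powerset(items):
    # Each N-bit mask picks one subset, so there are exactly 2^N subsets.
    n = len(items)
    return [[items[i] for i in range(n) if mask & (1 << i)]
            for mask in range(2**n)]

print(powerset(['a', 'b']))            # [[], ['a'], ['b'], ['a', 'b']]
print(len(powerset(['a', 'b', 'c'])))  # 8 == 2**3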
I can explain this to you in several mathematical ways. The first one:
Consider a single element, say a. Each subset has 2 options regarding a: either it contains a or it doesn't. The same is true for each of the n elements, so there must be 2^n subsets, and since the function is called once for each subset it creates, it is called 2^n times.
Another solution:
This one works with the recursion itself and produces your equation. Let me define T(0) = 2; for a set with one element we have T(1) = 2, since you just call the function once and it ends there. Now suppose that for every set with k < n elements the following formula holds:
T(k) = T(k-1) + T(k-2) + ... + T(1) + T(0) (call it formula (*))
I want to prove that for k = n this equation is true.
Consider all subsets that contain the first element (this is what you do at the beginning of your algorithm when you push the first element). We now have n-1 remaining elements, so it takes T(n-1) to find every subset that contains the first element. So far we have:
T(n) = T(n-1) + T(subsets that don't have the first element) (call it formula (**))
At the end of your for loop you remove the first element; now we need all subsets that don't have the first element, as described in (**), and again we have n-1 elements, so:
T(subsets that don't have the first element) = T(n-1) (call it formula (***))
From formulas (*) and (***) we have:
T(subsets that don't have the first element) = T(n-1) = T(n-2) + ... + T(1) + T(0) (call it formula (****))
And now we have what we wanted from the start; from formulas (****) and (**) we have:
T(n) = T(n-1) + T(n-2) + ... + T(1) + T(0)
We also have T(n) = T(n-1) + T(n-1) = 2 * T(n-1), so T(n) = 2^n.
In our Data Structures class we are learning how to solve recurrence relations in 1 variable. Unfortunately some things seem to come "out of the blue".
For example, some exercises already tell you how to substitute the variable n:
Compute T(n) for n = 2^k
T(n) = a for n ≤ 2
T(n) = 8T(n/2) + bn^2 (a and b are > 0)
But some exercises just give you the T(n) without providing a replacement for the variable n:
T(n) = 1 for n ≤ 1
T(n) = 2T(n/4) + sqrt(n)
I used the iterative method and arrived at the right answer: sqrt(n) + (1/2) * sqrt(n) * log(n).
But when the professor explained she started by saying: "Let n = 4^k", which is what I mean by "out of the blue". Using that fact the answer is simpler to obtain.
But how is the student supposed to come up with that?
This is another example:
T(n) = 1 for n ≤ 1
T(n) = 2T( (n-1)/2 ) + n
Here I started again with the iterative method but I can't reach a definitive answer, it looks more complex that way.
After 3 iterative steps I arrived at this:
T(n) = 4T( (n-3)/4 ) + 2n - 1
T(n) = 8T( (n-7)/8 ) + 3n - 4
T(n) = 16T( (n-15)/16 ) + 4n - 11
I am inclined to say T(n) = 2^i * T( (n - (2^i - 1))/2^i ) + i*n - ? This last part I can't figure out; maybe I made a mistake.
However, in the answer she provides, she again starts with another substitution: let n = (2^k) - 1. I don't see where this comes from - why would I do this? What is the logic behind that?
In all of these cases, these substitutions are reasonable because they rewrite the recurrence as one of the form S(k) = aS(k - 1) + f(k). These recurrences are often easier to solve than other recurrences because they define S(k) purely in terms of S(k - 1).
Let's do some examples to see how this works. Consider this recurrence:
T(n) = 1 (if n ≤ 1)
T(n) = 2T(n/4) + sqrt(n) (otherwise)
Here, the size of the problem shrinks by a factor of four on each iteration. Therefore, if the input is a perfect power of four, then the input will shrink from size 4^k to 4^(k-1), from 4^(k-1) to 4^(k-2), etc. until the recursion bottoms out. If we make this substitution and let S(k) = T(4^k), then we get that
S(0) = 1
S(k) = 2S(k - 1) + 2^k
This is now a recurrence relation where S(k) is defined in terms of S(k - 1), which can make the recurrence easier to solve.
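A quick sanity check of that substitution (a sketch; the recurrence is evaluated only at exact powers of four, where sqrt(n) and n/4 are integers):

from math import isqrt

def T(n):
    # T(n) = 2*T(n/4) + sqrt(n), with T(n) = 1 for n <= 1.
    if n <= 1:
        return 1
    return 2 * T(n // 4) + isqrt(n)

# S(k) = T(4^k) should satisfy S(k) = 2*S(k-1) + 2^k.
for k in range(1, 10):
    assert T(4**k) == 2 * T(4**(k - 1)) + 2**k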
Let's look at your original recurrence:
T(n) = a (for n ≤ 2)
T(n) = 8T(n/2) + bn^2
Notice that the recursive step divides n by two. If n is a perfect power of two, then the recursive step considers the power of two that comes right before n. Letting S(k) = T(2^k) gives
S(k) = a (for k ≤ 1)
S(k) = 8S(k - 1) + b·2^(2k)
Notice how S(k) is defined in terms of S(k - 1), which is a much easier recurrence to solve. The choice of powers of two was "natural" here because it made the recursive step talk purely about the previous value of S and not some arbitrarily smaller value of S.
Now, look at the last recurrence:
T(n) = 1 (n ≤ 1)
T(n) = 2T( (n-1)/2 ) + n
We'd like to make some substitution n = f(k) such that (f(k) - 1) / 2 = f(k - 1), so that the recursive term becomes T(f(k - 1)). The question is how to do that.
With some trial and error, we get that setting f(k) = 2^k - 1 fits the bill, since
(f(k) - 1) / 2 = ((2^k - 1) - 1) / 2 = (2^k - 2) / 2 = 2^(k-1) - 1 = f(k - 1)
Therefore, letting n = 2^k - 1 and setting S(k) = T(2^k - 1), we get
S(k) = 1 (if k ≤ 1)
S(k) = 2S(k - 1) + 2^k - 1
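The same kind of sanity check works here (a sketch; T is evaluated only at values of the form 2^k - 1):

def T(n):
    # T(n) = 2*T((n - 1)/2) + n, with T(n) = 1 for n <= 1.
    if n <= 1:
        return 1
    return 2 * T((n - 1) // 2) + n

# S(k) = T(2^k - 1) should satisfy S(k) = 2*S(k - 1) + 2^k - 1.
for k in range(2, 12):
    assert T(2**k - 1) == 2 * T(2**(k - 1) - 1) + 2**k - 1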
Hope this helps!
Exercise 1.11:
A function f is defined by the rule that f(n) = n if n < 3 and f(n) = f(n - 1) + 2f(n - 2) + 3f(n - 3) if n > 3. Write a procedure that computes f by means of a recursive process. Write a procedure that computes f by means of an iterative process.
Implementing it recursively is simple enough. But I couldn't figure out how to do it iteratively. I tried comparing with the Fibonacci example given, but I didn't know how to use it as an analogy. So I gave up (shame on me) and Googled for an explanation, and I found this:
(define (f n)
  (if (< n 3)
      n
      (f-iter 2 1 0 n)))

(define (f-iter a b c count)
  (if (< count 3)
      a
      (f-iter (+ a (* 2 b) (* 3 c))
              a
              b
              (- count 1))))
After reading it, I understand the code and how it works. But what I don't understand is the process needed to get from the recursive definition of the function to this. I don't get how the code could have formed in someone's head.
Could you explain the thought process needed to arrive at the solution?
You need to capture the state in some accumulators and update the state at each iteration.
If you have experience in an imperative language, imagine writing a while loop and tracking information in variables during each iteration of the loop. What variables would you need? How would you update them? That's exactly what you have to do to make an iterative (tail-recursive) set of calls in Scheme.
In other words, it might help to start thinking of this as a while loop instead of a recursive definition. Eventually you'll be fluent enough with recursive -> iterative transformations that you won't need the extra help to get started.
For this particular example, you have to look closely at the three function calls, because it's not immediately clear how to represent them. However, here's the likely thought process: (in Python pseudo-code to emphasise the imperativeness)
Each recursive step keeps track of three things:
f(n) = f(n - 1) + 2f(n - 2) + 3f(n - 3)
So I need three pieces of state to track the current, the last and the penultimate values of f. (that is, f(n-1), f(n-2) and f(n-3).) Call them a, b, c. I have to update these pieces inside each loop:
for _ in range(3, n + 1):
    a, b, c = NEWVALUE, a, b   # update all three pieces together
return a
So what's NEWVALUE? Well, now that we have representations of f(n-1), f(n-2) and f(n-3), it's just the recursive equation:
for _ in range(3, n + 1):
    a, b, c = a + 2 * b + 3 * c, a, b
return a
Now all that's left is to figure out the initial values of a, b and c. But that's easy, since we know that f(n) = n if n < 3.
if n < 3: return n
a = 2  # f(n-1) where n = 3, i.e. f(2)
b = 1  # f(n-2), i.e. f(1)
c = 0  # f(n-3), i.e. f(0)
# now start counting at 3: one update per value f(3), f(4), ..., f(n)
for _ in range(3, n + 1):
    a, b, c = a + 2 * b + 3 * c, a, b
return a
That's still a little different from the Scheme iterative version, but I hope you can see the thought process now.
I think you are asking how one might discover the algorithm naturally, outside of a 'design pattern'.
It was helpful for me to look at the expansion of the f(n) at each n value:
f(0) = 0 |
f(1) = 1 | all known values
f(2) = 2 |
f(3) = f(2) + 2f(1) + 3f(0)
f(4) = f(3) + 2f(2) + 3f(1)
f(5) = f(4) + 2f(3) + 3f(2)
f(6) = f(5) + 2f(4) + 3f(3)
Looking closer at f(3), we see that we can calculate it immediately from the known values.
What do we need to calculate f(4)?
We need to at least calculate f(3) + [the rest]. But as we calculate f(3), we calculate f(2) and f(1) as well, which we happen to need for calculating [the rest] of f(4).
f(3) = f(2) + 2f(1) + 3f(0)
↘ ↘
f(4) = f(3) + 2f(2) + 3f(1)
So, for any number n, I can start by calculating f(3), and reuse the values I use to calculate f(3) to calculate f(4)...and the pattern continues...
f(3) = f(2) + 2f(1) + 3f(0)
↘ ↘
f(4) = f(3) + 2f(2) + 3f(1)
↘ ↘
f(5) = f(4) + 2f(3) + 3f(2)
Since we will reuse them, let's give them names a, b, c, subscripted with the step we are on, and walk through a calculation of f(5):
Step 1: f(3) = f(2) + 2f(1) + 3f(0) or f(3) = a1 + 2b1 +3c1
where
a1 = f(2) = 2,
b1 = f(1) = 1,
c1 = 0
since f(n) = n for n < 3.
Thus:
f(3) = a1 + 2b1 + 3c1 = 4
Step 2: f(4) = f(3) + 2a1 + 3b1
So:
a2 = f(3) = 4 (calculated above in step 1),
b2 = a1 = f(2) = 2,
c2 = b1 = f(1) = 1
Thus:
f(4) = 4 + 2*2 + 3*1 = 11
Step 3: f(5) = f(4) + 2a2 + 3b2
So:
a3 = f(4) = 11 (calculated above in step 2),
b3 = a2 = f(3) = 4,
c3 = b2 = f(2) = 2
Thus:
f(5) = 11 + 2*4 + 3*2 = 25
Throughout the above calculation we capture state from the previous calculation and pass it to the next step, in particular:
a_step = the result computed in step (step - 1)
b_step = a_(step - 1)
c_step = b_(step - 1)
Once I saw this, then coming up with the iterative version was straightforward.
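For reference, the same walk can be replayed in a few lines of Python (a sketch mirroring the a, b, c state above):

# Replay the walkthrough: a, b, c start as f(2), f(1), f(0),
# and each step shifts them forward by one.
a, b, c = 2, 1, 0
for step in range(1, 4):          # steps 1, 2, 3 compute f(3), f(4), f(5)
    a, b, c = a + 2 * b + 3 * c, a, b
    print(f"step {step}: f({step + 2}) = {a}")
# step 1: f(3) = 4
# step 2: f(4) = 11
# step 3: f(5) = 25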
Since the post you linked to describes a lot about the solution, I'll try to only give complementary information.
You're trying to define a tail-recursive function in Scheme here, given a (non-tail) recursive definition.
The base case of the recursion (f(n) = n if n < 3) is handled by both functions. I'm not really sure why the author does this; for n ≥ 2 the first function could simply be:
(define (f n)
  (f-iter 2 1 0 n))
The general form would be:
(define (f-iter ... n)
  (if (base-case? n)
      base-result
      (f-iter ...)))
Note I didn't fill in parameters for f-iter yet, because you first need to understand what state needs to be passed from one iteration to another.
Now, let's look at the dependencies of the recursive form of f(n). It references f(n - 1), f(n - 2), and f(n - 3), so we need to keep around these values. And of course we need the value of n itself, so we can stop iterating over it.
So that's how you come up with the tail-recursive call: we compute f(n) to use as f(n - 1), rotate f(n - 1) to f(n - 2) and f(n - 2) to f(n - 3), and decrement count.
If this still doesn't help, please try to ask a more specific question — it's really hard to answer when you write "I don't understand" given a relatively thorough explanation already.
I'm going to come at this in a slightly different approach to the other answers here, focused on how coding style can make the thought process behind an algorithm like this easier to comprehend.
The trouble with Bill's approach, quoted in your question, is that it's not immediately clear what meaning is conveyed by the state variables, a, b, and c. Their names convey no information, and Bill's post does not describe any invariant or other rule that they obey. I find it easier both to formulate and to understand iterative algorithms if the state variables obey some documented rules describing their relationships to each other.
With this in mind, consider this alternative formulation of the exact same algorithm, which differs from Bill's only in having more meaningful variable names for a, b and c and an incrementing counter variable instead of a decrementing one:
(define (f n)
  (if (< n 3)
      n
      (f-iter n 2 0 1 2)))

(define (f-iter n
                i
                f-of-i-minus-2
                f-of-i-minus-1
                f-of-i)
  (if (= i n)
      f-of-i
      (f-iter n
              (+ i 1)
              f-of-i-minus-1
              f-of-i
              (+ f-of-i
                 (* 2 f-of-i-minus-1)
                 (* 3 f-of-i-minus-2)))))
Suddenly the correctness of the algorithm - and the thought process behind its creation - is simple to see and describe. To calculate f(n):
We have a counter variable i that starts at 2 and climbs to n, incrementing by 1 on each call to f-iter.
At each step along the way, we keep track of f(i), f(i-1) and f(i-2), which is sufficient to allow us to calculate f(i+1).
Once i=n, we are done.
What helped me was running the process manually with pencil and paper, using the hint the author gave for the Fibonacci example:
a <- a + b
b <- a
Translating this to the new problem, this is how you push the state forward in the process:
a <- a + (b * 2) + (c * 3)
b <- a
c <- b
So you need a function with an interface that accepts 3 variables: a, b, c, and it needs to call itself using the process above.
(define (f-iter a b c)
  (f-iter (+ a (* b 2) (* c 3)) a b))
If you run and print each variable for each iteration starting with (f-iter 1 0 0), you'll get something like this (it will run forever of course):
a b c
=========
1 0 0
1 1 0
3 1 1
8 3 1
17 8 3
42 17 8
100 42 17
235 100 42
...
Can you see the answer? You get it by summing columns b and c for each iteration. I must admit I found that by some trial and error. The only thing left is a counter to know when to stop; here is the whole thing:
(define (f n)
  (f-iter 1 0 0 n))

(define (f-iter a b c count)
  (if (= count 0)
      (+ b c)
      (f-iter (+ a (* b 2) (* c 3)) a b (- count 1))))
A function f is defined by the rule that f(n) = n, if n<3 and f(n) = f(n - 1) + 2f(n - 2) + 3f(n - 3), if n > 3. Write a procedure that computes f by means of a recursive process.
It is already written:
f(n) = n, (* if *) n < 3
= f(n - 1) + 2f(n - 2) + 3f(n - 3), (* if *) n > 3
Believe it or not, there was once such a language. To write this down in another language is just a matter of syntax. And by the way, the definition as you (mis)quote it has a bug, which is now very apparent and clear.
Write a procedure that computes f by means of an iterative process.
Iteration means going forward (there's your explanation!) as opposed to the recursion's going backwards at first, to the very lowest level, and then going forward calculating the result on the way back up:
f(0) = 0
f(1) = 1
f(2) = 2
f(n) = f(n - 1) + 2f(n - 2) + 3f(n - 3)
= a + 2b + 3c
f(n+1) = f(n ) + 2f(n - 1) + 3f(n - 2)
= a' + 2b' + 3c' where
a' = f(n) = a+2b+3c,
b' = f(n-1) = a,
c' = f(n-2) = b
......
This thus describes the problem's state transitions as
(n, a, b, c) -> (n+1, a+2*b+3*c, a, b)
We could code it as
g (n, a, b, c) = g (n+1, a+2*b+3*c, a, b)
but of course it wouldn't ever stop. So we must instead have
f n = g (2, 2, 1, 0)
where
g (k, a, b, c) = g (k+1, a+2*b+3*c, a, b), (* if *) k < n
g (k, a, b, c) = a, otherwise
and this is already exactly like the code you asked about, up to syntax.
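For instance, transcribed into Python (a sketch; the small-n corner cases mentioned below are ignored):

def f(n):
    # Invariant: a = f(k), b = f(k - 1), c = f(k - 2); assumes n >= 2.
    def g(k, a, b, c):
        if k < n:
            return g(k + 1, a + 2 * b + 3 * c, a, b)
        return a
    return g(2, 2, 1, 0)

print([f(n) for n in range(2, 7)])   # [2, 4, 11, 25, 59]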
Counting up to n is more natural here, following our paradigm of "going forward", but counting down to 0 as the code you quote does is of course entirely equivalent.
The corner cases and possible off-by-one errors are left out as non-interesting technicalities, as an exercise.