Induction Proof $T(n) = 9T(n/3) + n^2$

How can I prove that the recurrence
T(n) = 9T(n/3) + n^2
leads to T(n) = O(n^2 log(n)) using the substitution method and a proof by induction? I'm not allowed to use the Master Theorem.
Using induction and assuming T(n) ≤ cn^2 log n, I got to the following point:
T(n) = 9 * T(n/3) + n^2
≤ 9c(n^2 / 9 + log(n/3)) + n^2
= cn^2 + 9c log(n/3) + n^2
Thank you.

I think you've made a math error in your substitution. If we assume that T(n) ≤ cn^2 log n, then we'd get
T(n) = 9T(n / 3) + n^2
≤ 9(c(n / 3)^2 log(n / 3)) + n^2
= 9((1 / 9)cn^2 log(n / 3)) + n^2
= cn^2 log(n / 3) + n^2
You're very close to having things complete at this point. As a hint, suppose that the logarithm is a base-3 logarithm. What happens if you then use properties of logarithms to simplify cn2 log(n / 3)?
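Separately, if you want a quick numeric sanity check of the bound you are aiming for (it is not a substitute for the induction), you can tabulate the recurrence for powers of 3 in Python. The base case T(1) = 1 and the constant c = 2 below are illustrative assumptions, not values fixed by the problem:

from functools import lru_cache
from math import log

@lru_cache(maxsize=None)
def T(n: int) -> int:
    if n <= 1:
        return 1                      # assumed base case
    return 9 * T(n // 3) + n * n      # the recurrence itself

c = 2                                 # assumed constant for the bound
for i in range(1, 11):
    n = 3 ** i
    bound = c * n * n * log(n, 3)
    print(n, T(n), round(bound), T(n) <= bound)

A passing check for small n proves nothing, but a failure would tell you that the hypothesis or the constant needs adjusting.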

Related

Solving the recurrence equation T(n) = 3 + m * T(n - m)

I have a Computer Science midterm tomorrow and I need help determining the complexity of a particular recursive function, shown below, which is much more complicated than the ones I've already worked on: it has two variables.
T(n) = 3 + mT(n-m)
In simpler cases where m is a constant, the formula can easily be obtained by unpacking the relation; however, in this case, unpacking doesn't make life easier, as the following shows (let's say T(0) = c):
T(n) = 3 + mT(n-m)
T(n-1) = 3 + mT(n-m-1)
T(n-2) = 3 + mT(n-m-2)
...
Obviously, there's no straightforward elimination from these equations, so I'm wondering whether I should use another technique for such cases.
Don't worry about m - it's just a constant parameter. However, you're unrolling the recursion incorrectly. Each step of the unrolling involves three operations:
Taking the value of T at an argument that is m less
Multiplying it by m
Adding the constant 3
So, it will look like this:
T(n) = m * T(n - m) + 3 = (Step 1)
= m * (m * T(n - 2*m) + 3) + 3 = (Step 2)
= m * (m * (m * T(n - 3*m) + 3) + 3) + 3 = ... (Step 3)
and so on. Unrolling T(n) up to step k gives the following formula:
T(n) = m^k * T(n - k*m) + 3 * (1 + m + m^2 + m^3 + ... + m^(k-1))
Now you set n - k*m = 0 to use the initial condition T(0) and get:
k = n / m
Now you need the formula for the sum of a geometric progression, and you'll end up with a closed formula for T(n) (I'm leaving that final step to you).
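Before doing the geometric-sum step, you can convince yourself that the unrolled formula is correct by checking it against the recurrence directly. This is a minimal Python sketch; the values m = 4 and T(0) = c = 5 are arbitrary choices made only for the illustration:

def T(n: int, m: int, c: int) -> int:
    # direct evaluation of T(n) = 3 + m*T(n - m), T(0) = c (n a multiple of m)
    return c if n == 0 else 3 + m * T(n - m, m, c)

def unrolled(n: int, m: int, c: int) -> int:
    # T(n) = m^k * T(0) + 3 * (1 + m + ... + m^(k-1)), with k = n / m
    k = n // m
    return m ** k * c + 3 * sum(m ** j for j in range(k))

m, c = 4, 5
for k in range(6):
    n = k * m
    assert T(n, m, c) == unrolled(n, m, c)
    print(n, T(n, m, c))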

Algorithm Recurrence Relation

How is the following recurrence relation solved:
T(n) = 2T(n-2)+O(1)
What I have tried so far is:
O(1) is bounded above by some constant c.
So
T(n) <= 2T(n-2) + c
T(n) <= 4T(n-4) + 2c
T(n) <= 8T(n-6) + 3c
.
.
.
So a pattern is emerging... the general term is:
T(n) <= 2^k*T(n-2k) + kc
But I don't know how to continue from there. Any advice is appreciated.
Assuming your generalization for k is true[1]
T(n) <= 2^k*T(n-2k) + kc
For k = n/2 you get:
T(n) <= 2^(n/2) * T(0) + (n/2)*c = 2^(n/2) + (n/2)*c (taking T(0) = 1)
Which is in O(sqrt(2^n))
Formal proof can be done with induction. Note that the hypothesis has to absorb the additive constant, because the c term is also doubled at every unrolling step, so a hypothesis of the form T(n) <= 2^(n/2) + c*n does not close. With the induction hypothesis of:
T(n) <= (1 + c)*2^(n/2) - c
(the base case T(0) = 1 <= (1 + c) - c holds), the step is:
T(n) = 2T(n-2) + c <= (induction hypothesis)
T(n) <= 2*((1 + c)*2^((n-2)/2) - c) + c
T(n) <= (1 + c)*2^(n/2) - 2c + c
And indeed:
T(n) <= (1 + c)*2^(n/2) - c
Which is still in O(sqrt(2^n)).
(1) It is not: it ignores the fact that the constant also gets multiplied at each unrolling step, so the additive term should be (2^k - 1)*c rather than k*c; this does not change the O(sqrt(2^n)) conclusion, though.
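As a numeric sketch (not a proof), you can tabulate the recurrence and watch T(n) / 2^(n/2) settle toward a constant, which is consistent with the O(sqrt(2^n)) bound being tight. The base cases T(0) = T(1) = 1 and the constant c = 3 are assumptions made only for the illustration:

def T(n: int, c: int = 3) -> int:
    # T(n) = 2*T(n-2) + c, with assumed base cases T(0) = T(1) = 1
    return 1 if n < 2 else 2 * T(n - 2, c) + c

for n in range(0, 31, 2):
    print(n, T(n), T(n) / 2 ** (n / 2))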

Asymptotic time complexity of recursive function (Theta)

I have been asked to analyze the asymptotic time complexity of the following recursion function:
for-all k ≥ 1:
T(n) = n + T(n/2) + T(n/4) + T(n/8) + .... + T(n/2^k)
I was able to prove that:
T(n) = O(n⋅log n) and T(n) = Ω(n),
but I am looking for a tighter bound (Big Theta).
First of all:
I understand "for-all k >= 1" this way: for k = 1 to k = m, where 2^(m-1) ≤ n ≤ 2^m.
So basically m = log₂(n) holds.
Have a look at my calculation:
T(n) = n + Σ_{k=1,...,m} T(n/2^k)
= n + T(n/2) + Σ_{k=2,...,m} T(n/2^k)
= n + n/2 + 2⋅Σ_{k=2,...,m} T(n/2^k)
= n + n/2 + 2⋅(n/4) + 4⋅Σ_{k=3,...,m} T(n/2^k)
= ...
= n + Σ_{k=1,...,m} 2^(k-1)⋅n/2^k
= n + Σ_{k=1,...,m} n/2
= n + m⋅n/2
= n + (n/2)⋅log₂(n)
So T(n) is in Θ(n⋅log n), i.e. the O(n⋅log n) upper bound you already proved is tight.
Notice:
Each substitution doubles the coefficient in front of the remaining sum (1, 2, 4, 8, ...), so every level of the expansion contributes exactly n/2, and there are m = log₂(n) levels in total.
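A quick tabulation is consistent with this. Assuming T(1) = 1 and taking n to be a power of two, the ratio T(n) / (n⋅log₂ n) settles near 1/2 while T(n)/n keeps growing; this is only a numeric sketch, not part of the argument:

from functools import lru_cache
from math import log2

@lru_cache(maxsize=None)
def T(n: int) -> int:
    if n <= 1:
        return 1                     # assumed base case
    total = n
    k = n // 2
    while k >= 1:                    # T(n/2) + T(n/4) + ... + T(1)
        total += T(k)
        k //= 2
    return total

for i in range(1, 21):
    n = 2 ** i
    print(n, T(n), round(T(n) / (n * log2(n)), 3))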

Applying the Master Theorem when there are three terms?

How would I go about solving this kind of recurrence using the Master Theorem?
T(n) = 4T(n/2) + n^2 + log n
I have no idea how to go about doing this, but I'm pretty sure it is possible to solve it using Master Theorem. Do I have to ignore one of the terms? Any help is appreciated, thanks.
The Master Theorem works for functions that can be written as
T(n) = aT(n / b) + f(n)
Here, you have that a = 4, b = 2, and f(n) = n^2 + log n. Notice that we're grouping "n^2 + log n" together as the f(n) term, rather than treating it as two separate terms.
Now that we've done that, we can apply the Master Theorem directly. Notice that log_b a = log_2 4 = 2 and that f(n) = Θ(n^2), so by the Master Theorem this solves to Θ(n^2 log n). The reason this works is that n^2 + log n = Θ(n^2), and the Master Theorem only cares about the asymptotic complexity of f(n). In fact, any of these recurrences can be solved the same way:
T(n) = 4T(n / 2) + n^2 + 137n + 42
T(n) = 4T(n / 2) + 5n^2 + 42n log n + 42n + 5 log n + 106
T(n) = 4T(n / 2) + 0.5n^2 + n log^137 n + n log n + n^2 / log n + 5
Hope this helps!
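As a numeric sketch of why lumping the lower-order terms into f(n) is safe, you can tabulate the recurrence with an assumed base case T(1) = 1 and watch T(n) / (n^2 ⋅ log₂ n) approach a constant:

from functools import lru_cache
from math import log, log2

@lru_cache(maxsize=None)
def T(n: int) -> float:
    if n <= 1:
        return 1.0                   # assumed base case
    return 4 * T(n // 2) + n * n + log(n)

for i in range(1, 21):
    n = 2 ** i
    print(n, round(T(n) / (n * n * log2(n)), 4))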

Calculating complexity of recurrence

I am having trouble understanding the concept of recurrences. Given T(n) = 2T(n/2) + 1, how do you calculate the complexity of this relation? I know that in mergesort the relation is T(n) = 2T(n/2) + cn, and you can see that you have a tree with depth log₂(n) and cn work at each level. But I am unsure how to proceed given a generic function. Are there any tutorials available that clearly explain this?
The solution to your recurrence is T(n) ∈ Θ(n).
Let's expand the formula:
T(n) = 2*T(n/2) + 1. (Given)
T(n/2) = 2*T(n/4) + 1. (Replace n with n/2)
T(n/4) = 2*T(n/8) + 1. (Replace n with n/4)
T(n) = 2*(2*T(n/4) + 1) + 1 = 4*T(n/4) + 2 + 1. (Substitute)
T(n) = 2*(2*(2*T(n/8) + 1) + 1) + 1 = 8*T(n/8) + 4 + 2 + 1. (Substitute)
And do some observations and analysis:
We can see a pattern emerge: T(n) = 2^k * T(n/2^k) + (2^k − 1).
Now, let k = log₂ n. Then n = 2^k.
Substituting, we get: T(n) = n * T(n/n) + (n − 1) = n * T(1) + n − 1.
For at least one n, we need to give T(n) a concrete value. So we suppose T(1) = 1.
Therefore, T(n) = n * 1 + n − 1 = 2*n − 1, which is in Θ(n).
Resources:
https://www.cs.duke.edu/courses/spring05/cps100/notes/slides07-4up.pdf
http://www.cs.duke.edu/~ola/ap/recurrence.html
However, for routine work, the normal way to solve these recurrences is to use the Master theorem.
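If you want to check the closed form numerically, a few lines of Python confirm that, with T(1) = 1 as assumed above, T(n) = 2n − 1 for every power of two:

def T(n: int) -> int:
    # T(n) = 2*T(n/2) + 1, with T(1) = 1
    return 1 if n == 1 else 2 * T(n // 2) + 1

for i in range(11):
    n = 2 ** i
    assert T(n) == 2 * n - 1
    print(n, T(n))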
