Algorithm Recurrence Relation - recursion

How is the following recurrence relation solved:
T(n) = 2T(n-2)+O(1)
What I have tried so far is:
The O(1) term is bounded above by some constant c.
So
T(n) <= 2T(n-2) + c
T(n) <= 4T(n-4) + 2c
T(n) <= 8T(n-6) + 3c
.
.
.
So a pattern is emerging... the general term is:
T(n) <= 2^k*T(n-2k) + kc
But I don't know how to continue from there. Any advice is appreciated.

Assuming your generalization for k is true[1]:
T(n) <= 2^k*T(n-2k) + kc
For k = n/2 (taking n even) you get:
T(n) <= 2^(n/2) * T(0) + (n/2)*c = 2^(n/2) + (n/2)*c    (with T(0) = 1)
which is in O(2^(n/2)) = O(sqrt(2^n)).
A formal proof can be done with induction, using a slightly strengthened induction hypothesis (the naive hypothesis T(n) <= 2^(n/2) + c*n fails, because the c*(n-2) term also doubles at each step):
T(n) <= d*2^(n/2) - c
for a large enough constant d. The step:
T(n) <= 2T(n-2) + c
     <= 2*(d*2^((n-2)/2) - c) + c    (induction hypothesis)
     = d*2^((n-2)/2 + 1) - 2c + c
And indeed:
T(n) <= d*2^(n/2) - c
so the hypothesis is preserved and T(n) is in O(2^(n/2)).
[1] It is not: the constant is also doubled at each expansion, so the additive term is (2^k - 1)*c rather than k*c. The conclusion is unaffected, though, since (2^(n/2) - 1)*c is still O(2^(n/2)).
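As a quick numeric sanity check (a minimal sketch in Python; the constant c = 1 and the base cases T(0) = T(1) = 1 are assumptions, since the question leaves them unspecified), the ratio T(n)/2^(n/2) stays bounded:

    # Check T(n) = 2*T(n-2) + c against the claimed Theta(2^(n/2)) bound.
    # Assumptions: c = 1, base cases T(0) = T(1) = 1.
    from functools import lru_cache

    C = 1

    @lru_cache(maxsize=None)
    def T(n):
        if n <= 1:
            return 1
        return 2 * T(n - 2) + C

    for n in range(10, 61, 10):
        print(n, T(n) / 2 ** (n / 2))
    # The ratio oscillates between two constants (one per parity of n),
    # i.e. T(n) = Theta(2^(n/2)).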

Related

Induction Proof $T(n) = 9T(n/3) + n^2$

How can I prove that the recurrence
T(n) = 9T(n/3) + n²
leads to T(n) = O(n² log(n)) using the substitution method and a proof by induction? I'm not allowed to use the Master Theorem.
Using induction and assuming T(n) ≤ cn² log n, I got to the following point:
T(n) = 9 * T(n/3) + n²
≤ 9c (n²/9 + log(n/3)) + n²
= cn² + 9c log(n/3) + n²
Thank you.
I think you've made a math error in your substitution. If we assume that T(n) ≤ cn² log n, then we'd get
T(n) = 9T(n/3) + n²
≤ 9(c(n/3)² log(n/3)) + n²
= 9((1/9)cn² log(n/3)) + n²
= cn² log(n/3) + n²
You're very close to having things complete at this point. As a hint, suppose that the logarithm is a base-3 logarithm. What happens if you then use properties of logarithms to simplify cn² log(n/3)?
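For reference, here is the simplification the hint points at, spelled out (taking the logarithm to be base 3):

    cn² log₃(n/3) + n² = cn²(log₃ n − 1) + n²
                       = cn² log₃ n − (c − 1)n²
                       ≤ cn² log₃ n    for any c ≥ 1

which completes the induction and gives T(n) = O(n² log n).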

Solve recurrence: T(n) = T(n − 1) + T(n − 2) + 3

T(1) = T(2) = 1, and for n > 2, T(n) = T(n − 1) + T(n − 2) + 3.
What I've done so far:
T(n-1) = T(n-2) + T(n-3) + 3 + 3
T(n-2) = T(n-3) + T(n-4) + 3 + 3 + 3
T(n) = T(n-2) + 2T(n-3) + T(n-4) + 3 + 3 + 3 + 3 + 3
T(n) = T(1) + 2T(2) + T(n-4) + 3(n + 2)
I'm not sure if this is right, and if it is, how do I get rid of T(n-4)?
These types of recurrences are tricky, and the repeated expansion method will unfortunately get you nowhere. Observing the recursion tree will only give you an upper bound, which is often not tight.
Two methods I can suggest:
1. Substitution + Standard Theorem
Make the following variable substitution:
m = 2^n,   S(m) = T(log₂ m)
so that n-1 corresponds to m/2 and n-2 to m/4, giving:
S(m) = S(m/2) + S(m/4) + 3
This is in the correct form for the Akra-Bazzi method, with parameters:
a₁ = a₂ = 1,   b₁ = 1/2,   b₂ = 1/4,   g(m) = 3
The exponent p solves (1/2)^p + (1/4)^p = 1; substituting x = 2^(-p) gives x² + x - 1 = 0, so x = (√5 - 1)/2 and p = log₂((1 + √5)/2) ≈ 0.694. Since g(m) is constant, Akra-Bazzi yields:
S(m) = Θ(m^p),   i.e.   T(n) = S(2^n) = Θ(2^(p*n)) = Θ(((1 + √5)/2)^n)    (1)
2. Fibonacci formula
The Fibonacci series has an explicit formula which can be derived by guessing a solution of the form Fn = a^n. Using this as an analogy, substitute a similar expression for T(n):
T(n) = a*r^n + b
Equating the constant and exponential terms:
constants:     b = 2b + 3                 =>  b = -3
exponentials:  r^n = r^(n-1) + r^(n-2)    =>  r² = r + 1  =>  r = (1 ± √5)/2
Take the positive root because the negative root has absolute value less than 1, and will therefore decay to zero with increasing n:
r = (1 + √5)/2,   so   T(n) = a*((1 + √5)/2)^n - 3 = Θ(((1 + √5)/2)^n)
Which is consistent with (1).
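A quick numeric check of this conclusion (a sketch in Python, using the base cases from the question): the ratio of consecutive values approaches (1 + √5)/2 ≈ 1.618:

    # T(1) = T(2) = 1, T(n) = T(n-1) + T(n-2) + 3 for n > 2.
    def T_values(n_max):
        t = [0, 1, 1]              # t[0] is unused padding
        for n in range(3, n_max + 1):
            t.append(t[n - 1] + t[n - 2] + 3)
        return t

    t = T_values(40)
    print(t[40] / t[39])           # ~1.6180339..., the golden ratio
    print((1 + 5 ** 0.5) / 2)      # for comparison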

Calculate big O complexity of partitioning problem

My pseudo-code looks like:
solve(n):
    for i := 1 to n do
        process(i);
        solve(n - i);
where process(n) is a function with some complexity f(n). In my case f(n) = O(n^2), but I am also interested in the general case (for example, f(n) = O(n)).
So I have T(n) = f(n) + ... + f(1) + T(n-1) + ... + T(0). I cannot apply the Master theorem, as the sub-problems are not of equal size.
How to calculate complexity of this recursion?
Small trick – consider solve(n-1):
solve(n) : T(n) = f(n) + f(n-1) + f(n-2) + ... + f(1) + T(n-1) + T(n-2) + ... + T(0)
solve(n-1): T(n-1) = f(n-1) + f(n-2) + ... + f(1) + T(n-2) + ... + T(0)
Subtract the latter from the former:
T(n) - T(n-1) = f(n) + T(n-1)
T(n) = 2*T(n-1) + f(n)
Expand repeatedly:
T(n) = 2*T(n-1) + f(n)
     = 4*T(n-2) + 2*f(n-1) + f(n)
     = ...
     = 2^n*T(0) + Σ_{k=0..n-1} 2^k * f(n-k)
Solve the last summation for the given f(n) to obtain the complexity.
e.g. for f(n) = O(n):
Σ_{k=0..n-1} 2^k * (n-k) = 2^(n+1) - n - 2,   so   T(n) = O(2^n)
Alternative method – variable substitution: let m = 2^n and S(m) = T(log₂ m), so that:
S(m) = 2*S(m/2) + f(log₂ m)
S(m) is in the correct form for the Master Theorem.
e.g. for f(n) = O(n) = O(log m): since log m is polynomially smaller than m^(log₂ 2) = m, the Master Theorem gives:
S(m) = Θ(m),   hence   T(n) = S(2^n) = Θ(2^n)
Same result, q.e.d.
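Both derivations are easy to check numerically (a sketch in Python; f(n) = n and the base case T(0) = 1 are assumptions, since the question leaves them open): T(n)/2^n stays bounded, as predicted:

    # T(n) = f(n) + ... + f(1) + T(n-1) + ... + T(0), with f(n) = n.
    # Assumption: T(0) = 1 (the base case is not specified in the question).
    def T_values(n_max):
        t = [1]
        for n in range(1, n_max + 1):
            t.append(n * (n + 1) // 2 + sum(t))  # sum f(1..n) + sum T(0..n-1)
        return t

    t = T_values(20)
    for n in (5, 10, 15, 20):
        print(n, t[n] / 2 ** n)    # approaches 2.5 here, i.e. T(n) = Theta(2^n)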

Asymptotic time complexity of recursive function (Theta)

I have been asked to analyze the asymptotic time complexity of the following recursive function:
for all k ≥ 1:
T(n) = n + T(n/2) + T(n/4) + T(n/8) + ... + T(n/2^k)
I was able to prove that:
T(n) = O(n⋅log n) and T(n) = Ω(n),
but I am looking for a tighter bound (Big Theta).
First of all:
I understand "for all k ≥ 1" this way: k runs from 1 to m, where 2^(m-1) ≤ n ≤ 2^m.
So basically m = log₂(n) holds.
Have a look at my calculation:
T(n) = n + Σ_{k=1..m} T(n/2^k)
     = n + T(n/2) + Σ_{k=2..m} T(n/2^k)
Now notice that T(n/2) = n/2 + Σ_{k=2..m} T(n/2^k), so the trailing sum equals T(n/2) - n/2 and the recurrence collapses to:
T(n) = n + T(n/2) + (T(n/2) - n/2)
     = n/2 + 2*T(n/2)
Expanding this standard form (with n = 2^m and T(1) = 1):
T(n) = n/2 + 2*T(n/2)
     = n/2 + n/2 + 4*T(n/4)
     = ...
     = m*n/2 + n*T(1)
     = (n/2)*log₂(n) + n
So T(n) is in Θ(n⋅log n): the upper bound O(n⋅log n) was already tight, and the Ω(n) lower bound was not.
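This closed form is easy to confirm numerically (a Python sketch; T(1) = 1 assumed, n a power of two):

    # T(2^m) = 2^m + sum_{k=1..m} T(2^(m-k)), with T(2^0) = 1.
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T(m):                      # parameterized by the exponent: n = 2^m
        if m == 0:
            return 1
        return 2 ** m + sum(T(m - k) for k in range(1, m + 1))

    for m in (3, 4, 5, 6):
        n = 2 ** m
        print(n, T(m), n + (n // 2) * m)   # the last two columns agree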

Calculating complexity of recurrence

I am having trouble understanding the concept of recurrences. Given T(n) = 2T(n/2) + 1, how do you calculate the complexity of this relationship? I know that in mergesort the relationship is T(n) = 2T(n/2) + cn, and you can see that you have a tree of depth log₂(n) with cn work at each level. But I am unsure how to proceed given a generic function. Are there any tutorials that clearly explain this?
The solution to your recurrence is T(n) ∈ Θ(n).
Let's expand the formula:
T(n) = 2*T(n/2) + 1. (Given)
T(n/2) = 2*T(n/4) + 1. (Replace n with n/2)
T(n/4) = 2*T(n/8) + 1. (Replace n with n/4)
T(n) = 2*(2*T(n/4) + 1) + 1 = 4*T(n/4) + 2 + 1. (Substitute)
T(n) = 2*(2*(2*T(n/8) + 1) + 1) + 1 = 8*T(n/8) + 4 + 2 + 1. (Substitute)
And do some observations and analysis:
We can see a pattern emerge: T(n) = 2^k * T(n/2^k) + (2^k − 1).
Now, let k = log₂(n). Then n = 2^k.
Substituting, we get: T(n) = n * T(n/n) + (n − 1) = n * T(1) + n − 1.
For at least one n, we need to give T(n) a concrete value. So we suppose T(1) = 1.
Therefore, T(n) = n * 1 + n − 1 = 2*n − 1, which is in Θ(n).
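The closed form is easy to verify numerically (a minimal sketch, for n a power of two):

    # T(n) = 2*T(n/2) + 1 with T(1) = 1; closed form: T(n) = 2n - 1.
    def T(n):
        if n == 1:
            return 1
        return 2 * T(n // 2) + 1

    for n in (1, 2, 4, 8, 16, 1024):
        print(n, T(n), 2 * n - 1)  # the two computed columns agree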
However, for routine work, the normal way to solve these recurrences is the Master theorem: here a = 2, b = 2 and f(n) = Θ(1) = O(n^(1-ε)), so Case 1 gives T(n) = Θ(n^(log₂ 2)) = Θ(n) directly.
Resources:
https://www.cs.duke.edu/courses/spring05/cps100/notes/slides07-4up.pdf
http://www.cs.duke.edu/~ola/ap/recurrence.html
