Hi, I wanted to know how to work out the time complexity of this algorithm.
I managed to solve the case with f(n/4), but not with f(n/i).
void f(int n) {
    if (n < 4) return;
    for (int i = 0; i * i < n; i++)
        printf("-");
    for (int i = 2; i < 4; i++)
        f(n / i); // solved the case f(n/4) but stuck on f(n/i)
}
Note that the loop condition is i < 4, so i never reaches 4; the only recursive calls are therefore f(n/2) and f(n/3).
Recurrence relation:
T(n) = T(n/2) + T(n/3) + Θ(sqrt(n))
There are two ways to approach this problem:
Find upper and lower bounds by replacing one of the recursive terms with the other:
R(n) = 2R(n/3) + Θ(sqrt(n))
S(n) = 2S(n/2) + Θ(sqrt(n))
R(n) ≤ T(n) ≤ S(n)
You can easily solve for both bounds by substitution or applying the Master Theorem:
R(n) = Θ(n^(log3(2))) = Θ(n^0.63...)
S(n) = Θ(n)
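In case it helps, here is the Master Theorem step spelled out for both bounds (both fall into the leaf-heavy case, because sqrt(n) is polynomially smaller than n^(log_b(a))):
For R: a = 2, b = 3, log3(2) ≈ 0.63 > 1/2, hence R(n) = Θ(n^(log3(2))).
For S: a = 2, b = 2, log2(2) = 1 > 1/2, hence S(n) = Θ(n).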
If you need an exact answer, use the Akra-Bazzi method:
a1 = a2 = 1
h1(x) = h2(x) = 0
g(x) = sqrt(x)
b1 = 1/2
b2 = 1/3
You need to solve for a power p such that (1/2)^p + (1/3)^p = 1. Do this numerically, e.g. with Newton-Raphson, to obtain p = 0.78788.... Then perform the Akra-Bazzi integral:
T(n) = Θ( n^p * (1 + ∫[1,n] g(u)/u^(p+1) du) ) = Θ( n^p * (1 + ∫[1,n] u^(-p-1/2) du) )
Since p > 1/2, the integral converges to a constant, so T(n) = Θ(n^0.78...), which is consistent with the bounds found before.
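If you want to sanity-check that exponent empirically, here is a small instrumented sketch (my own addition, with the printf replaced by a counter); the ratio ops / n^0.78788 should roughly level off as n grows:

#include <stdio.h>
#include <math.h>

static long long ops = 0;               /* counts iterations of the printing loop */

void f(int n) {
    if (n < 4) return;
    for (int i = 0; i * i < n; i++)
        ops++;                          /* stand-in for printf("-") */
    for (int i = 2; i < 4; i++)
        f(n / i);
}

int main(void) {
    for (int n = 1000; n <= 1000000; n *= 10) {
        ops = 0;
        f(n);
        printf("n = %7d  ops = %8lld  ops / n^0.78788 = %.3f\n",
               n, ops, ops / pow(n, 0.78788));
    }
    return 0;
}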
I think this is about O(sqrt(9/2) * sqrt(n)) time, but I'd go with O(sqrt(n)) to be safe. It's admittedly been a while since I worked with time complexity.
If n < 4, the function returns immediately, in constant time, O(1).
If n >= 4, the function's first for loop, for (int i=0; i*i<n; i++), performs the constant-time call printf("-") about sqrt(n) times. So far we're at O(sqrt(n)) time.
The next for loop performs two recursive calls: one for f(n/2) and one for f(n/3)
The first runs in O(sqrt(n/2)) time, the second in O(sqrt(n/4)) time, and so on - this series converges to O(sqrt(2n))
Likewise, the function f(n/3) converges to O(sqrt(3/2 n))
This doesn't factor in the fact that each recursive call also invokes a little extra time by calling both of these functions when it runs, but I believe this converges to about O(sqrt(n)) + O(sqrt(2n)) + O(sqrt(3/2 n)), which itself converges to O(sqrt(9/2) * sqrt(n))
This is likely a bit low for an exact constant value, but I believe you can safely say this runs in O(sqrt(n)) time, with some small-ish constant out front.
Related
I have the following algorithm:
def func(n):
    if n <= 1:
        return 1
    x = 0
    for i in range(n ** 2):
        if i % 4 == 0:
            x += i
    return x + func(n//3) + func(n//3) + func(n//3)
The complexity analysis is:
$ T(n) = n^2 + 3*T(\frac {n}{3}) + 1 $
I know that the complexity is $ O(n^2) $, but my question is: how is it possible that the complexity is the same with and without the recursive calls? Is there an intuitive explanation for this?
An algorithm's complexity is dominated by its most expensive operation. If the other operations are cheaper by comparison, they do not affect the overall complexity.
E.g. if an algorithm runs in T(n) = n^2 + log(n), then T(n) = O(n^2), because log(n) grows much more slowly than n^2 as n increases.
Even if T(n) = n^2 + 3n^2 = 4n^2, we still have T(n) = O(n^2), because the constant factor 4 does not change the order of growth; the dependence on n (the important and expensive part) stays the same.
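To make this concrete for the recurrence above (a standard recursion-tree sum, not part of the original answer): level $i$ of the recursion contains $3^i$ calls, each doing roughly $(\frac{n}{3^i})^2$ work, so the total is
$ n^2 + 3 (\frac{n}{3})^2 + 9 (\frac{n}{9})^2 + \dots = n^2 \sum_{i \ge 0} (\frac{1}{3})^i \le \frac{3}{2} n^2 = O(n^2) $
i.e. the recursive calls only add a constant factor on top of the top-level $n^2$ loop.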
I am wondering what the runtime for the following recursive function would be:
int f(int n) {
    if (n <= 1) {
        return 1;
    }
    return f(n - 1) + f(n - 1);
}
If you think of it as a call tree, each node would have 2 branches. The number of nodes in that call tree would be 2^0 + 2^1 + 2^2 + 2^3 + ... + 2^n, which is equivalent to 2^(n+1) - 1. So the time complexity of this function should be O(2^(n+1) - 1), assuming that each call takes constant time O(1) (am I correct?). According to the book where I have this example from, the time complexity is O(2^n). I am confused: what am I missing?
Big-O Notation ignores constant factors and lower order terms. So O(2^(n+1)-1) is equivalent to O(2^n).
O(2^(n+1)-1) = O(2^n * 2^1 - 1)
We drop the constant factor of 2^1, and then we drop the lower order term of -1 as 2^n grows asymptotically faster.
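If it helps to see it concretely, here is a small instrumented sketch (my own addition) that counts the calls; the count comes out to exactly 2^n - 1, which is Θ(2^n):

#include <stdio.h>

static long long calls = 0;             /* number of invocations of f */

int f(int n) {
    calls++;
    if (n <= 1) {
        return 1;
    }
    return f(n - 1) + f(n - 1);
}

int main(void) {
    for (int n = 1; n <= 20; n++) {
        calls = 0;
        f(n);
        printf("n = %2d  calls = %8lld  2^n - 1 = %8lld\n",
               n, calls, (1LL << n) - 1);  /* the two columns should match */
    }
    return 0;
}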
I want to find out how to apply the Master Theorem to this code:
unsigned long fac(unsigned long n) {
    if (n == 1)
        return 1;
    else
        return fac(n - 1) * n;
}
Based on the fact that the function calls itself only once, a = 1. Besides that recursive call there is only constant work, so f(n) = O(1) as well. Now I am struggling with b. Normally the general formula is:
T(n) = a*T(n/b) + f(n)
In this case, though, I don't divide the problem; the subproblem is simply of size n-1. What is b now? My recurrence would be:
T(n) = 1*T(n-1) + O(1)
How can I use the Master Theorem now, since I don't know my exact b?
You can "cheat" by using a change of variable.
Let T(n) = S(2^n). Then the recurrence says
S(2^n) = S(2^(n-1)) + O(1) = S(2^n / 2) + O(1)
which we rewrite
S(m) = S(m/2) + O(1).
By the Master theorem with a=1, b=2, the solution is logarithmic
S(m) = O(log m),
which means
T(n) = S(2^n) = O(log 2^n) = O(n).
Anyway, the recurrence is easier to solve directly, using
T(n) = T(n-1) + O(1) = T(n-2) + O(1) + O(1) = ... = T(1) + O(1) + O(1) + ... + O(1) = O(n).
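As a quick concrete check (my own sketch, not part of the original answer), counting the recursive calls directly confirms the linear behaviour:

#include <stdio.h>

static unsigned long calls = 0;         /* number of recursive invocations */

unsigned long fac(unsigned long n) {
    calls++;
    if (n == 1)
        return 1;
    return fac(n - 1) * n;
}

int main(void) {
    /* the factorial value itself may overflow for larger n,
       but here we only care about the call count, which equals n */
    for (unsigned long n = 1; n <= 20; n++) {
        calls = 0;
        fac(n);
        printf("n = %2lu  calls = %2lu\n", n, calls);
    }
    return 0;
}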
The Master Theorem doesn't apply to this particular recurrence relation, but that's okay - it's not supposed to apply everywhere. You most commonly see the Master Theorem show up in divide-and-conquer style recurrences where you split the input apart into blocks that are a constant fraction of the original size of the input, and in this particular case that's not what's happening.
To solve this recurrence, you'll need to use another method like the iteration method or looking at the shape of the recursion tree in a different way.
So, I have some pseudocode that I have to analyze for a class. I'm trying to figure out the best case and the worst case in terms of theta. I figured out the best case, but I'm having trouble with the worst case. I think the worst case is actually the same as the best case, but I'm second-guessing myself and would like some feedback on how to properly develop the recurrence for the worst case, if in fact they are not the same.
Code:
function max-element(A)
    if n = 1
        return A[1]
    val = max-element(A[2...n])
    if A[1] > val
        return A[1]
    else
        return val
Best Case Recurrence:
T(1) = 1
T(n) = T(n-1) + 1
T(n-1) = T(n-2) + 1
T(n) = (T(n-2) + 1) + 1 = T(n-2) + 2
In general: T(n) = T(n-k) + k
Let k = n-1
T(n) = T(n-(n-1)) + n - 1
T(n) = T(1) + n -1
T(n) = 1 + n - 1
T(n) = n
The running time only depends on the number of elements of the array; in particular, it is independent of the contents of the array. So the best- and worst-case running times coincide.
A more correct way to model the time complexity is via the recurrence T(n) = T(n-1) + O(1) and T(1)=O(1) because the O(1) says that you spend some additional constant time in each recursive call. It clearly solves to T(n)=O(n) as you already noted. In fact, this is tight, i.e., we have T(n)=Theta(n).
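For what it's worth, here is a runnable C version of the pseudocode (my own sketch; I'm assuming the array and its length are passed explicitly, since n is implicit in the pseudocode). It always makes exactly n-1 recursive calls regardless of the array's contents, which matches T(n) = T(n-1) + O(1):

#include <stdio.h>

/* Returns the maximum of a[0..n-1]; mirrors the recursive pseudocode. */
int max_element(const int a[], int n) {
    if (n == 1)
        return a[0];
    int val = max_element(a + 1, n - 1);  /* max of the remaining n-1 elements */
    return (a[0] > val) ? a[0] : val;
}

int main(void) {
    int a[] = {3, 7, 2, 9, 4};
    printf("%d\n", max_element(a, 5));    /* prints 9 */
    return 0;
}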
I'm really confused on simplifying this recurrence relation: c(n) = c(n/2) + n^2.
So I first got:
c(n/2) = c(n/4) + n^2
so
c(n) = c(n/4) + n^2 + n^2
c(n) = c(n/4) + 2n^2
c(n/4) = c(n/8) + n^2
so
c(n) = c(n/8) + 3n^2
I do sort of notice a pattern though:
2 raised to the power of whatever coefficient is in front of "n^2" gives the denominator of what n is over.
I'm not sure if that would help.
I just don't understand how I would simplify this recurrence relation and then find the theta notation of it.
EDIT: Actually I just worked it out again and I got c(n) = c(n/n) + n^2*lgn.
I think that is correct, but I'm not sure. Also, how would I find the theta notation of that? Is it just theta(n^2lgn)?
Firstly, make sure to substitute n/2 everywhere n appears in the original recurrence relation when placing c(n/2) on the lhs.
i.e.
c(n/2) = c(n/4) + (n/2)^2
Your intuition is correct, in that it is a very important part of the problem. How many times can you divide n by 2 before you reach 1?
Let's take 8 for an example
8/2 = 4
4/2 = 2
2/2 = 1
You see it's 3, which, as it turns out, is log2(8)
In order to prove the theta notation, it might be helpful to check out the master theorem. This is a very useful tool for proving complexity of a recurrence relation.
Using the master theorem case 3, we can see
a = 1
b = 2
logb(a) = 0
c = 2 > logb(a)
n^2 = Omega(n^c) = Omega(n^2)
k = 9/10
a*f(n/b) = (n/2)^2 < k*n^2, so the regularity condition holds
c(n) = Theta(n^2)
The intuition as to why the answer is Theta(n^2) is that you have n^2 + (n^2)/4 + (n^2)/16 + ... + (n^2)/4^(log2(n)), which won't give us log(n) full copies of n^2, but instead increasingly smaller fractions of n^2.
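Summing that series explicitly (a standard geometric-series bound, not spelled out in the original answer): n^2 * (1 + 1/4 + 1/16 + ...) ≤ n^2 * 1/(1 - 1/4) = (4/3)*n^2, which is Theta(n^2).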
Let's answer a more generic question for recurrences of the form:
r(n) = r(d(n)) + f(n). There are some restrictions on the functions that need further discussion, e.g. if x is a fixed point of d, then f(x) should be 0, otherwise there isn't any solution. In your specific case this condition is satisfied.
Rearranging the equation we get r(n) - r(d(n)) = f(n), which gives the intuition that r(n) and r(d(n)) are both sums of some terms, but r(n) has one more term than r(d(n)); that is why f(n) appears as the difference. On the other hand, r(n) and r(d(n)) have to have the same 'form', so the number of terms in the previously mentioned sum has to be infinite.
Thus we are looking for a telescopic sum, in which the terms for r(d(n)) cancel out all but one terms for r(n):
r(n) = f(n) + a_0(n) + a_1(n) + ...
- r(d(n)) = - a_0(n) - a_1(n) - ...
This latter means that
r(d(n)) = a_0(n) + a_1(n) + ...
And just by substituting d(n) into the place of n into the equation for r(n), we get:
r(d(n)) = f(d(n)) + a_0(d(n)) + a_1(d(n)) + ...
So by choosing a_0(n) = f(d(n)), a_1(n) = a_0(d(n)) = f(d(d(n))), and so on: a_k(n) = f(d(d(...d(n)...))) (with k+1 nested applications of d), we get a correct solution.
Thus in general, the solution is of the form r(n) = sum{i=0..infinity}(f(d[i](n))), where d[i](n) denotes the function d(d(...d(n)...)) with i number of iterations of the d function.
For your case, d(n)=n/2 and f(n)=n^2, hence you can get the solution in closed form by using identities for geometric series. The final result is r(n)=4/3*n^2.
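If you want to check that constant numerically, here is a small sketch (my own addition, assuming a base case of c(1) = 1 and restricting to powers of two so the halving is exact); the ratio c(n)/n^2 should approach 4/3 ≈ 1.333:

#include <stdio.h>

/* c(n) = c(n/2) + n^2, with an assumed base case c(1) = 1 */
static double c(long n) {
    if (n <= 1)
        return 1.0;
    return c(n / 2) + (double)n * n;
}

int main(void) {
    for (long n = 2; n <= (1L << 20); n <<= 1) {
        printf("n = %8ld  c(n)/n^2 = %.4f\n", n, c(n) / ((double)n * n));
    }
    return 0;
}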
You can also go for the advanced (extended) Master Theorem, which handles recurrences of the form
T(n) = a*T(n/b) + Θ(n^k * log^p(n))
where a ≥ 1, b > 1, k ≥ 0 and p is any real number.
Case 1: a > b^k
    T(n) = Θ(n^(logb(a))), where b is the base of the log.
Case 2: a = b^k
    1. p > -1: T(n) = Θ(n^(logb(a)) * log^(p+1)(n))
    2. p = -1: T(n) = Θ(n^(logb(a)) * log(log(n)))
    3. p < -1: T(n) = Θ(n^(logb(a)))
Case 3: a < b^k
    1. p ≥ 0: T(n) = Θ(n^k * log^p(n))
    2. p < 0: T(n) = Θ(n^k)
For your recurrence c(n) = c(n/2) + n^2, we have a = 1, b = 2, k = 2, p = 0, so a < b^k with p ≥ 0 (Case 3.1), and therefore c(n) = Θ(n^2).
Constant factors are ignored because they don't change the time complexity; they just vary from machine to machine (i.e. n/2 = n * 1/2 behaves the same as n asymptotically).