Time complexity analysis of this flip method - recursion

static void flip(int[] list, int start, int end) {
    if (end - start > 0) {
        flip(list, start + 1, end - 1);
        swap(list, end, start);
    }
}
Here, the time complexity of the swap method is O(1). Is the time complexity of the entire flip function O(n) or O(log n)? If not both, how do you analyze it?

Since, as you mentioned, the time complexity of swap is O(1): if we let T(n) denote the running time of flip(list, x, x+n), then T(n) = T(n-2) + O(1). Each call shrinks the range by 2, so there are about n/2 calls, each doing constant work. This unrolls to T(n) = O(n), so the time complexity of flip is O(n).
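To make the linear swap count concrete, here is a hypothetical Python port of the same flip routine with a counter added purely for illustration:

```python
# Python sketch of the recursive flip, with a swap counter to show
# that the work grows linearly: reversing n elements takes n//2 swaps.
def flip(lst, start, end, counter):
    if end - start > 0:
        flip(lst, start + 1, end - 1, counter)
        lst[start], lst[end] = lst[end], lst[start]  # O(1) swap
        counter[0] += 1

data = list(range(10))
swaps = [0]
flip(data, 0, len(data) - 1, swaps)
# data is now reversed, using 5 swaps for 10 elements, i.e. n/2 = O(n) work
```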

Related

An explanation for the time complexity of the following algorithm

I have the following algorithm:
def func(n):
    if n <= 1:
        return 1
    x = 0
    for i in range(n ** 2):
        if i % 4 == 0:
            x += i
    return x + func(n//3) + func(n//3) + func(n//3)
The complexity analysis is:
$ T(n) = n^2 + 3*T(\frac {n}{3}) + 1 $
I know that the complexity is $ O(n^2) $, but my question is how is it possible that without the recursive calls and with them the complexity is the same? Is there any intuitive explanation for this?
An algorithm's complexity is determined by its most expensive operations; operations that are cheaper in comparison do not affect the asymptotic bound.
E.g. if an algorithm runs in T(n) = n^2 + log(n), then T(n) = O(n^2), since log(n) grows far more slowly than n^2 as n increases and so never affects the bound.
Even if T(n) = n^2 + 3n^2 = 4n^2, we still have T(n) = O(n^2), because the constant factor 4 does not change to another quantitative level how the running time depends on the variable n (the most important and expensive part).
The same happens in your recurrence: level k of the recursion contributes 3^k * (n/3^k)^2 = n^2 / 3^k work, a geometrically decreasing series, so the top-level n^2 term dominates the total and the recursive calls only add a constant factor.
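A quick sketch that expands the recurrence numerically (using n//3 to stay in integers) illustrates the point: T(n) stays within a small constant factor of n^2, so the recursive calls never change the O(n^2) bound.

```python
# Expand T(n) = n^2 + 3*T(n/3) + 1 directly (T(n) = 1 for n <= 1).
# The ratio T(n) / n^2 approaches 3/2 — a constant, not a growing term —
# which is why the recursive calls do not change the O(n^2) complexity.
def T(n):
    if n <= 1:
        return 1
    return n * n + 3 * T(n // 3) + 1

ratios = [T(n) / (n * n) for n in (27, 81, 243, 729)]
# every ratio is below 1.51, regardless of n
```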

Complexity Recursion in For

Hi, I wanted to know how to work out the time complexity of this algorithm.
I solved the case f(n/4), but not f(n/i).
void f(int n) {
    if (n < 4) return;
    for (int i = 0; i * i < n; i++)
        printf("-");
    for (int i = 2; i < 4; i++)
        f(n / i); // solved the case f(n/4) but stuck on f(n/i)
}
Note that the loop condition is i<4, so i never reaches 4. i.e. the only recursive terms are f(n/2) and f(n/3).
Recurrence relation:
T(n) = T(n/2) + T(n/3) + Θ(sqrt(n))
There are two ways to approach this problem:
Find upper and lower bounds by replacing one of the recursive terms with the other:
R(n) = 2R(n/3) + Θ(sqrt(n))
S(n) = 2S(n/2) + Θ(sqrt(n))
R(n) ≤ T(n) ≤ S(n)
You can easily solve for both bounds by substitution or applying the Master Theorem:
R(n) = O(n^[log3(2)]) = O(n^0.63...)
S(n) = O(n)
If you need an exact answer, use the Akra-Bazzi method:
a1 = a2 = 1
h1(x) = h2(x) = 0
g(x) = sqrt(x)
b1 = 1/2
b2 = 1/3
You need to solve for a power p such that (1/2)^p + (1/3)^p = 1. Do this numerically with e.g. Newton-Raphson, to obtain p = 0.78788.... Perform the integral:
T(n) = Θ(n^p * (1 + ∫[1, n] g(u) / u^(p+1) du))
     = Θ(n^p * (1 + ∫[1, n] u^(-1/2 - p) du))
Since p > 1/2, the integral converges to a constant, so we obtain T(n) = Θ(n^p) = O(n^0.78...), which is consistent with the bounds found before.
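The exponent p can be found with any one-dimensional root finder; a minimal sketch using bisection (Newton-Raphson, as mentioned above, would converge faster):

```python
# Solve (1/2)^p + (1/3)^p = 1 for p by bisection.
# g is strictly decreasing in p, g(0) = 1 > 0 and g(2) < 0,
# so the root lies in (0, 2).
def g(p):
    return 0.5 ** p + (1 / 3) ** p - 1

lo, hi = 0.0, 2.0
for _ in range(100):
    mid = (lo + hi) / 2
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
p = (lo + hi) / 2
# p comes out near 0.78788, matching the exponent in T(n) = O(n^0.78...)
```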
I think this is about O(sqrt(9/2) * sqrt(n)) time, but I'd go with O(sqrt(n)) to be safe. It's admittedly been a while since I worked with time complexity.
If n < 4, the function returns immediately, at constant time O(1)
If n >= 4, the function's for loop, for (int i=0; i*i<n; i++), performs the constant-time call printf("-") a total of about sqrt(n) times. So far we're at O(sqrt(n)) time.
The next for loop performs two recursive calls: one for f(n/2) and one for f(n/3)
The first runs in O(sqrt(n/2)) time, the second in O(sqrt(n/4)) time, and so on - this series converges to O(sqrt(2n))
Likewise, the function f(n/3) converges to O(sqrt(3/2 n))
This doesn't factor in the fact that each recursive call also invokes a little extra time by calling both of these functions when it runs, but I believe this converges to about O(sqrt(n)) + O(sqrt(2n)) + O(sqrt(3/2 n)), which itself converges to O(sqrt(9/2) * sqrt(n))
This is likely a little bit low for an exact constant value, but I believe you can safely say this runs in O(sqrt(n)) time, with some small-ish constant out front.

What is the runtime of this recursive code?

I am wondering what the runtime for the following recursive function would be:
int f(int n) {
    if (n <= 1) {
        return 1;
    }
    return f(n-1) + f(n-1);
}
If you think of it as a call tree, each node has 2 branches. The number of nodes in that call tree is 2⁰ + 2¹ + 2² + 2³ + ... + 2^n, which equals 2^(n+1) - 1. So the time complexity of this function should be O(2^(n+1)-1), assuming that each call does a constant O(1) amount of work. Am I correct? According to the book where I have this example from, the time complexity is O(2^n). I am confused - what am I missing?
Big-O Notation ignores constant factors and lower order terms. So O(2^(n+1)-1) is equivalent to O(2^n).
O(2^(n+1)-1) = O(2^n * 2^1 - 1)
We drop the constant factor of 2^1, and then we drop the lower order term of -1 as 2^n grows asymptotically faster.
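To sanity-check the call-tree argument, here is a sketch (in Python for convenience) that instruments the function. For f(10) the exact call count is 2^10 - 1 = 1023 - within a constant factor of the 2^(n+1) - 1 node count above, and Θ(2^n) either way.

```python
# Count the actual number of calls made by f(n) = f(n-1) + f(n-1).
# The call tree is a full binary tree, so the count grows as Θ(2^n).
calls = 0

def f(n):
    global calls
    calls += 1
    if n <= 1:
        return 1
    return f(n - 1) + f(n - 1)

f(10)
# calls is now 2**10 - 1 == 1023
```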

Master Theorem & Recurrences

I want to find out how to solve the Master Theorem for this code:
unsigned long fac(unsigned long n) {
    if (n == 1)
        return 1;
    else
        return fac(n - 1) * n;
}
Since the function calls itself only once, a = 1. Besides that function call there is only constant work, so f(n) = O(1) as well. Now I am struggling with my b. Normally the general formula is:
T(n) = a*T(n/b) + f(n)
In this case I don't divide the main problem though. The new problem has to solve just n-1. What is b now? Because my recurrence would be:
T(n) = 1*T(n-1) + O(1)
How can I use the Master Theorem now, since I don't know my exact b?
You can "cheat" by using a change of variable.
Let T(n) = S(2^n). Then the recurrence says
S(2^n) = S(2^(n-1)) + O(1) = S((2^n)/2) + O(1)
which we rewrite, substituting m = 2^n, as
S(m) = S(m/2) + O(1).
By the Master theorem with a=1, b=2, the solution is logarithmic
S(m) = O(log m),
which means
T(n) = S(2^n) = O(log 2^n) = O(n).
Anyway, the recurrence is easier to solve directly, using
T(n) = T(n-1) + O(1) = T(n-2) + O(1) + O(1) = ... = T(0) + O(1) + O(1) + ... O(1) = O(n).
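The linear call count the unrolled recurrence predicts can be confirmed with a quick instrumented sketch (Python here for brevity):

```python
# Instrument a Python version of fac: the recursion makes exactly n calls,
# matching T(n) = T(n-1) + O(1) = O(n).
calls = 0

def fac(n):
    global calls
    calls += 1
    if n == 1:
        return 1
    return fac(n - 1) * n

result = fac(20)
# calls is now 20, and result is 20!
```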
The Master Theorem doesn't apply to this particular recurrence relation, but that's okay - it's not supposed to apply everywhere. You most commonly see the Master Theorem show up in divide-and-conquer style recurrences where you split the input apart into blocks that are a constant fraction of the original size of the input, and in this particular case that's not what's happening.
To solve this recurrence, you'll need to use another method like the iteration method or looking at the shape of the recursion tree in a different way.

Best Case / Worst Case Recurrence Relations

So, I have some pseudocode that I have to analyze for a class. I'm trying to figure out the best case and the worst case in terms of theta. I figured out the best case, but I'm having trouble with the worst case. I think the worst case is actually the same as the best case, but I am second-guessing myself and would like some feedback on how to properly develop the recurrence for the worst case, if in fact they are not the same.
Code:
function max-element(A)
    if n = 1
        return A[1]
    val = max-element(A[2...n])
    if A[1] > val
        return A[1]
    else
        return val
Best Case Recurrence:
T(1) = 1
T(n) = T(n-1) + 1
Expanding:
T(n) = T(n-2) + 2
T(n) = T(n-3) + 3
...
T(n) = T(n-k) + k
Let k = n-1:
T(n) = T(n-(n-1)) + n - 1
T(n) = T(1) + n - 1
T(n) = 1 + n - 1
T(n) = n
The running time only depends on the number of elements of the array; in particular, it is independent of the contents of the array. So the best- and worst-case running times coincide.
A more correct way to model the time complexity is via the recurrence T(n) = T(n-1) + O(1) and T(1)=O(1) because the O(1) says that you spend some additional constant time in each recursive call. It clearly solves to T(n)=O(n) as you already noted. In fact, this is tight, i.e., we have T(n)=Theta(n).
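The "contents don't matter" point can be demonstrated with a small sketch: a Python version of max-element makes exactly n - 1 comparisons no matter how the array is ordered.

```python
# Python version of the recursive max-element, with a comparison counter.
# Sorted, reverse-sorted, and mixed inputs all produce the same count,
# so the best and worst cases coincide at Theta(n).
def max_element(a, counter):
    if len(a) == 1:
        return a[0]
    val = max_element(a[1:], counter)
    counter[0] += 1  # the single comparison A[1] > val
    return a[0] if a[0] > val else val

counts = []
for arr in ([1, 2, 3, 4, 5], [5, 4, 3, 2, 1], [3, 1, 4, 1, 5]):
    c = [0]
    max_element(arr, c)
    counts.append(c[0])
# counts is [4, 4, 4]: always n - 1 comparisons, regardless of contents
```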
