Big Theta runtime analysis - math

I don't really understand the two true/false questions below about T(n). I understand what Theta means, but I'm not sure about the answers to these questions. Can someone explain?
I thought the first one, T(n) = T(2n/3) + 1 = Theta(log n), was false because
the added constant 1 doesn't make a difference,
and a log comes from repeatedly halving, but taking 2n/3 is not halving.
I thought the second one, T(n) = T(n/2) + n = Theta(n log n), was true because
the linear "n *" factor in the Theta expression represents the "+ n" in T(n/2) + n,
and the "n/2" represents the log n in the Theta expression...

The first, T(n) = T(2n/3) + 1, is Θ(log n).
Intuitively, when you multiply n by a constant factor, T(n) increases only by a constant amount.
Example: T(n) = log(n)/log(3/2), which is exactly log base 3/2 of n.
The second, T(n) = T(n/2) + n, is Θ(n), not Θ(n log n).
Intuitively, when you multiply n by a constant factor, T(n) increases by an amount proportional to n.
Example: T(n) = 2n, which satisfies the recurrence since n + 2(n/2) = 2n.
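
A quick way to convince yourself is to evaluate both recurrences numerically. Here's a sketch of mine (not part of the original answer); the base case T(1) = 1 and the integer division are simplifying assumptions made just for illustration:

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def t1(n):
    # T(n) = T(2n/3) + 1 with T(1) = 1; integer division approximates 2n/3
    if n <= 1:
        return 1
    return t1(2 * n // 3) + 1

@lru_cache(maxsize=None)
def t2(n):
    # T(n) = T(n/2) + n with T(1) = 1
    if n <= 1:
        return 1
    return t2(n // 2) + n

for n in [10, 100, 1_000, 10_000, 100_000]:
    # t1 tracks log(n)/log(3/2) and t2 tracks 2n, matching the examples above
    print(n, t1(n), round(math.log(n) / math.log(3 / 2), 1), t2(n), 2 * n)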

Related

Big-O complexity of n/1 + n/2 + n/3 + ... + n/n

Is the big-O complexity of n/1 + n/2 + n/3 + ... + n/n equal to O(n log n) or O(n)? I want to know this for computing all divisors of all numbers from 1 to n. My approach would be to go over all the numbers and mark their multiples, which would take the above-mentioned time.
You have n multiplied by the harmonic series sum 1/1 + 1/2 + ... + 1/n, which has logarithmic growth.
So it's O(n log n).
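
Here's a minimal sketch of the marking approach described in the question (the function name count_divisors_up_to is mine, for illustration):

def count_divisors_up_to(n):
    # divisors[k] ends up holding the number of divisors of k
    divisors = [0] * (n + 1)
    for d in range(1, n + 1):
        # the inner loop visits the n/d multiples of d, so the total work
        # across all d is n/1 + n/2 + ... + n/n = n * H_n = O(n log n)
        for multiple in range(d, n + 1, d):
            divisors[multiple] += 1
    return divisors[1:]

print(count_divisors_up_to(10))  # [1, 2, 2, 3, 2, 4, 2, 4, 3, 4]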

Tree method with 8T(n/2) + n^2

I'm trying to solve this problem, but I don't think I've understood how to do it correctly. The first thing I do in this kind of exercise is take the biggest value in the row (in this case n^2) and divide it repeatedly, so I can find the relation between the values. After finding the relation, I try to work out its value mathematically, and then, as the final step, I multiply the result by the root. In this case the result should be n^3. How is that possible?
Unfortunately, #vahidreza's solution seems wrong to me because it contradicts the Master theorem. In terms of the Master theorem, a = 8, b = 2, c = 2. So log_b(a) = 3, meaning log_b(a) > c, and thus this is the case of a recursion dominated by the subproblems, so the answer should be T(n) = Θ(n^3) rather than the O(m^(2+1/3)) that #vahidreza has.
The main issue is probably in this statement:
Also you know that the tree has log_8 m levels. Because at each level, you divide the number by 8.
Let's try to solve it properly:
On the zeroth level you have n^2 (I prefer to start counting from 0 as it simplifies notation a bit)
on the first level you have 8 nodes of (n/2)^2 or a total of 8*(n/2)^2
on the second level you have 8 * 8 nodes of (n/(2^2))^2 or a total of 8^2*(n/(2^2))^2
on the i-th level you have 8^i nodes of (n/(2^i))^2 or a total of 8^i*(n/(2^i))^2 = n^2 * 8^i/2^(2*i) = n^2 * 2^i
At each level your value n is divided by two, so at level i the value is n/2^i, and you'll have log_2(n) levels. So what you need to calculate is the sum for i from 0 to log_2(n) of n^2 * 2^i. That's a geometric progression with a ratio of 2, so its sum is
Σ (n^2 * 2^i) = n^2 * Σ(2^i) = n^2 * (2^(log_2(n)+1) - 1)
Since we are talking about Θ/O, we can ignore the constants, so what we need to estimate is
n^2 * 2^(log_2(n))
Obviously 2^(log_2(n)) is just n, so the answer is
T(n) = Θ(n^3)
exactly as predicted by the Master theorem.
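
If you want to sanity-check this numerically, here's a sketch of mine (assuming the base case T(1) = 1 and restricting n to powers of two so the halving stays exact):

from functools import lru_cache

@lru_cache(maxsize=None)
def t(n):
    # T(n) = 8 T(n/2) + n^2 with T(1) = 1
    if n <= 1:
        return 1
    return 8 * t(n // 2) + n * n

for k in range(1, 11):
    n = 2 ** k
    # the ratio T(n) / n^3 settles near a constant, confirming Θ(n^3)
    print(n, t(n), t(n) / n ** 3)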

Calculating average case complexity of Quicksort

I'm trying to calculate the big-O of the worst/best/average case of QuickSort using recurrence relations. My understanding is that the efficiency of the implementation depends on how good the partition function is.
Worst Case: pivot always leaves one side empty
T(N) = N + T(N-1) + T(1)
T(N) = N + T(N-1)
T(N) ~ N^2/2 => O(n^2)
Best Case: pivot divides elements equally
T(N) = N + T(N/2) + T(N/2)
T(N) = N + 2T(N/2) [Master Theorem]
T(N) ~ N log(N) => O(n log n)
Average Case: this is where I'm confused about how to represent the recurrence relation, or how to approach it in general.
I know the average-case big-O for Quicksort is O(n log n); I'm just unsure how to derive it.
When you pick the pivot, the worst you can do is a 0 | n split and the best you can do is an n/2 | n/2 split. The average case will get you a split more like n/4 | 3n/4, assuming uniform randomness. Plug that in and you get T(N) = N + T(N/4) + T(3N/4): the recursion tree has O(log N) levels, since even the slower-shrinking branch multiplies N by 3/4 at each level, and each level does at most N work, so you get O(n log n) once constants are eliminated.
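
A sketch of that pessimistic recurrence (my own illustration; the fixed 1:3 split at every level and the base case T(1) = 1 are simplifying assumptions):

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def t(n):
    # T(N) = N + T(N/4) + T(3N/4): partition cost plus a fixed 1:3 split
    if n <= 1:
        return 1
    return n + t(n // 4) + t(n - n // 4)

for n in [10, 100, 1_000, 10_000, 100_000]:
    # the ratio T(N) / (N log N) stays bounded, so T(N) = O(N log N)
    print(n, t(n), round(t(n) / (n * math.log2(n)), 3))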

Troublesome recurrence equation: T(n) = 2*T(ceil(sqrt(n))) + 1

I have recently encountered a recurrence problem:
T(n) = 2*T(ceil(sqrt(n))) + 1
T(1) = 1
I am unable to see this function terminate at all when I draw my recurrence tree. The general node in the tree has the form n^(1/2^i), which becomes 1 only when 1/2^i becomes 0. This means i should tend to infinity.
You're right that if sqrt is the ceiling of the square root, then you'll never reach 1 by repeatedly applying square roots. I'm going to assume that you meant to use the floor, which means that you will indeed eventually hit 1 as the recurrence unwinds.
In this case, your recurrence is more properly
T(1) = 1
T(n) = 2T(⌊√n⌋) + 1
A standard technique for solving recurrences that involve square roots is to make a substitution. Let's define a new value k such that n = 2^k. Notice that √n = (2^k)^(1/2) = 2^(k/2). In other words, taking the square root of n is equivalent to halving the value of k. Because of this, we can convert the above recurrence, which involves square roots, into a new recurrence that will more closely match the form used by the Master Theorem and other recurrence-solving techniques. Specifically, let's define S(k) = T(2^k). Then we get the recurrence
S(0) = 1
S(k) = 2S(⌊k / 2⌋) + 1
It's a lot easier to see how to solve this recurrence. Either by recognizing it from elsewhere or by using the Master Theorem, we get that S(k) = Θ(k). Now, since we wanted to solve for T(n), we can use the fact that S(k) = T(2^k) = T(n). Since S(k) = Θ(k), we see that T(n) = Θ(k). Because we chose k such that 2^k = n, we have k = lg n. Therefore, T(n) = Θ(log n).
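
As a quick numeric check, here's a sketch of mine (math.isqrt is Python's floor square root, matching the ⌊√n⌋ above, with T(1) = 1 as given):

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def t(n):
    # T(n) = 2 T(floor(sqrt(n))) + 1 with T(1) = 1
    if n <= 1:
        return 1
    return 2 * t(math.isqrt(n)) + 1

for n in [10, 100, 10_000, 10 ** 8, 10 ** 16]:
    # the ratio T(n) / log2(n) stays bounded, consistent with Θ(log n)
    print(n, t(n), round(t(n) / math.log2(n), 2))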
Hope this helps!

Recurrence relations and asymptotic complexity

I am trying to understand the asymptotic relationship between f(n) = n^(cos n) and g(n) = n. I am told that these two functions are not related by big O, little o, big Omega, little omega, or Theta. Something about the oscillations of cos n? Can someone help me understand this behavior?
When I use L'Hôpital's rule on my calculator, I get undefined.
The function n^(cos n) is O(n). Since -1 ≤ cos n ≤ 1, the function n^(cos n) is always bounded between n^(-1) and n^1, so in particular it's always upper-bounded by O(n). However, it's not Ω(n), because for any number n0 and any constant c you can find an n > n0 where n^(cos n) < cn. One way to do this is to look at choices of n where cos n < -1/2; every period of cosine contains such an integer, since the interval where cos x < -1/2 has length 2π/3 > 1. For those n, the value n^(cos n) is at most n^(-1/2), which is eventually smaller than cn for any c > 0.
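
You can watch the oscillation numerically with a small sketch of mine (the sample points are integers that happen to sit near multiples of π, e.g. 22 ≈ 7π and 355 ≈ 113π):

import math

# n^(cos n) swings between roughly n (when cos n is near 1) and 1/n (when
# cos n is near -1), so it is upper-bounded by n but keeps dipping far
# below any fixed c*n
for n in [22, 44, 355, 710]:
    print(n, round(math.cos(n), 5), n ** math.cos(n))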
Hope this helps!
