Calculating Omega for algorithm efficiency - math

I am having trouble understanding calculating the efficiency of an algorithm. Here is one of my problems. Can someone help explain what method is used for figuring this out? The book is very confusing and does not spend a lot of time on this.
Find the appropriate Ω relationship between the functions n^3 and 3n^3 - 2n^2 + 2, and find the constants c and n0.
I know that there isn't much difference between n^3 and 3n^3, but I am not sure how to find the constants c and n0.

The functions n^3 and 3n^3 - 2n^2 + 2 are actually Θ of one another, meaning that they grow at the same rate. It should be possible to lower-bound each of the functions with the other.
In the first direction, notice that
3n^3 - 2n^2 + 2
≥ 3n^3 - 2n^2
≥ 3n^3 - 2n^3
= n^3
(The second step uses 2n^2 ≤ 2n^3, which holds for every integer n ≥ 1; at n = 0 the final inequality holds trivially, since 2 ≥ 0.) Therefore, if you pick c = 1 and n0 = 0, then for any n ≥ n0 we have 3n^3 - 2n^2 + 2 ≥ c·n^3, so 3n^3 - 2n^2 + 2 = Ω(n^3).
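If you want to sanity-check those constants numerically, a quick Python spot check over small integer n (an illustration only, not a proof) might look like this:

# Spot check: 3n^3 - 2n^2 + 2 >= c*n^3 with c = 1, n0 = 0, for every checked n
c, n0 = 1, 0
assert all(3*n**3 - 2*n**2 + 2 >= c * n**3 for n in range(n0, 10_000))
print("3n^3 - 2n^2 + 2 >= n^3 for every checked n >= 0")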
I'll leave the reverse direction as the proverbial Exercise to the Reader. :-)
Hope this helps!

Related

Finding time complexity of recursive formula

I'm trying to find time complexity (big O) of a recursive formula.
I tried to find a solution, you may see the formula and my solution below:
Like Brenner said, your last assumption is false. Here is why. Let's take the definition of big-O from the Wikipedia page (using n instead of x):
f(n) = O(g(n)) if and only if there exist constants c, n0 such that |f(n)| <= c |g(n)| for all n >= n0.
We want to check whether O(2^(n^2)) = O(2^n). Clearly, 2^(n^2) is in O(2^(n^2)), so let's pick f(n) = 2^(n^2) and check whether it is in O(2^n). Put this into the above formula:
there exist c, n0: 2^(n^2) <= c * 2^n for all n >= n0
Let's see if we can find suitable constant values n0 and c for which the above is true, or if we can derive a contradiction to prove that it is not:
Take the log on both sides:
log(2^(n^2)) <= log(c * 2^n)
Simplify:
n^2 * log(2) <= log(c) + n * log(2)
Divide by log(2):
n^2 <= log(c)/log(2) + n
It's easy to see now that there are no c and n0 for which the above is true for all n >= n0, thus O(2^(n^2)) = O(2^n) is not a valid assumption.
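To make the failure concrete, try a few values in Python: for any fixed constant c you choose, 2^(n^2) overtakes c * 2^n almost immediately (a numeric illustration, not a proof):

# For any fixed constant c, 2^(n^2) eventually exceeds c * 2^n,
# because the ratio 2^(n^2 - n) grows without bound.
c = 10**6
for n in (1, 2, 4, 8, 16):
    print(n, 2**(n*n) > c * 2**n)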
The last assumption you've specified with the question mark is false! Do not make such assumptions.
The rest of the manipulations you've supplied seem to be correct, but they actually get you nowhere.
You should have finished this exercise in the middle of your draft:
T(n) = O(T(1)^(3^log2(n)))
And that's it. That's the solution!
You could actually claim that
3^log2(n) == n^log2(3) ≈ n^1.585
and then you get:
T(n) = O(T(1)^(n^1.585))
which is somewhat similar to the manipulations you've made in the second part of the draft.
So you can also leave it like this. But you cannot mess with the exponent. Changing the value of the exponent changes the big-O classification.
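If the identity 3^log2(n) = n^log2(3) looks surprising, a quick numeric check in Python (floating point, so expect tiny rounding differences) makes it concrete:

import math
# Both sides agree for a few sample n, and log2(3) gives the ~1.585 exponent.
for n in (2, 10, 100, 1000):
    print(n, 3 ** math.log2(n), n ** math.log2(3))
print(math.log2(3))   # ~1.5849625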

Finding the upper bound of a mathematical function (function analysis)

I am trying to understand Big-O notation through a book I have, and it covers Big-O by using functions, although I am a bit confused. The book says that f(n) = O(g(n)) where g(n) is the upper bound of f(n). So I understand that to mean that g(n) gives the maximum rate of growth for f(n) at larger values of n,
and that there exists an n_0 where cg(n) (for some constant c) and f(n) have the same rate of growth.
But what I am confused about is these examples of finding Big-O for mathematical functions.
The book says to find the upper bound for f(n) = n^4 + 100n^2 + 50.
They then state that n^4 + 100n^2 + 50 <= 2n^4 (I'm unsure why the 2n^4),
and then they somehow find n_0 = 11 and c = 2. I understand why the big O is O(n^4), but I am just confused about the rest.
This is all discouraging, as I don't understand it, but I feel like this is an important topic that I must understand.
If anyone is curious, the book is Data Structures and Algorithms Made Easy by Narasimha Karumanchi.
Not sure if this post belongs here or in the math board.
Preparations
First, let's state, loosely, the definition of f being in O(g(n)) (note: O(g(n)) is a set of functions, so to be picky, we say that f is in O(...), rather than that f(n) is in O(...)).
If a function f(n) is in O(g(n)), then c · g(n) is an upper bound on
f(n), for some constant c such that f(n) is always ≤ c · g(n),
for large enough n (i.e. , n ≥ n0 for some constant n0).
Hence, to show that f(n) is in O(g(n)), we need to find a pair of constants (c, n0) that fulfils
f(n) ≤ c · g(n), for all n ≥ n0, (+)
but this pair is not unique; i.e., the problem of finding constants (c, n0) such that (+) holds is degenerate. In fact, if any such pair of constants exists, there will exist an infinite number of different such pairs.
Showing that f ∈ O(n^4)
Now, let's proceed and look at the example that confused you:
Find an upper asymptotic bound for the function
f(n) = n^4 + 100n^2 + 50 (*)
One straightforward approach is to bound the lower-order terms in (*) by the highest-order term, specifically via a bound of the form (... ≤ ...).
Hence, we see if we can find a lower bound on n such that the following holds
100n^2 + 50 ≤ n^4, for all n ≥ ???, (i)
We can easily find when equality holds in (i) by solving the equation
m = n^2, m > 0
m^2 - 100m - 50 = 0
(m - 50)^2 - 50^2 - 50 = 0
(m - 50)^2 = 2550
m = 50 ± sqrt(2550) = { m > 0, single root } ≈ 100.5
=> n ≈ { n > 0 } ≈ 10.025
Hence, (i) holds for n ≳ 10.025, but we'd much rather present this bound on n with a neat integer value, hence rounding up to 11:
100n^2 + 50 ≤ n^4, for all n ≥ 11, (ii)
From (ii) it's apparent that the following holds
f(n) = n^4 + 100n^2 + 50 ≤ n^4 + n^4 = 2 · n^4, for all n ≥ 11, (iii)
And this relation is exactly (+) with c = 2, n0 = 11 and g(n) = n^4, and hence we've shown that f ∈ O(n^4). Note again, however, that the choice of constants c and n0 is one of convenience and is not unique. Since we've shown that (+) holds for one set of constants (c, n0), it follows that it holds for an infinite number of different such choices (e.g., it naturally also holds for c = 10 and n0 = 20, and so on).
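If it helps, here is a small Python spot check of (ii) and (iii) over a range of n (a numeric illustration, not a substitute for the derivation above):

# (ii): 100n^2 + 50 <= n^4 fails at n = 10 but holds for every checked n >= 11,
# and (iii): n^4 + 100n^2 + 50 <= 2n^4 holds for every checked n >= 11.
f = lambda n: n**4 + 100*n**2 + 50
assert not (100*10**2 + 50 <= 10**4)
assert all(100*n**2 + 50 <= n**4 for n in range(11, 10_000))
assert all(f(n) <= 2*n**4 for n in range(11, 10_000))
print("c = 2, n0 = 11 works for every checked n")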

Solving the recurrence T(n) = T(n/2) + T(n/4) + T(n/8)?

I'm trying to solve a recurrence T(n) = T(n/8) + T(n/2) + T(n/4).
I thought it would be a good idea to first try a recurrence tree method, and then use that as my guess for substitution method.
For the tree, since no work is being done at the non-leaf levels, I thought we could just ignore that, so I tried to come up with an upper bound on the number of leaves, since that's the only thing that's relevant here.
I considered the height of the tree taking the longest path through T(n/2), which yields a height of log2(n). I then assume the tree is complete, with all levels filled (i.e. we have 3T(n/2)), and so we would have 3^i nodes at level i, and so n^(log2(3)) leaves. T(n) would then be O(n^(log2(3))).
Unfortunately, I think this is an unreasonable upper bound; I think I've made it a bit too high... Any advice on how to tackle this?
One trick you can use here is rewriting the recurrence in terms of another variable. Let's suppose that you write n = 2^k. Then the recurrence simplifies to
T(2^k) = T(2^(k-3)) + T(2^(k-2)) + T(2^(k-1)).
Let's let S(k) = T(2^k). This means that you can rewrite this recurrence as
S(k) = S(k-3) + S(k-2) + S(k-1).
Let's assume the base cases are S(0) = S(1) = S(2) = 1, just for simplicity. Given this, you can then use a variety of approaches to solve this recurrence. For example, the annihilator method (section 5 of the link) would be great here for solving this recurrence, since it's a linear recurrence. If you use the annihilator approach here, you get that
S(k) - S(k - 1) - S(k - 2) - S(k - 3) = 0
S(k+3) - S(k+2) - S(k+1) - S(k) = 0
(E^3 - E^2 - E - 1)S(k) = 0
If you find the roots of the characteristic polynomial E^3 - E^2 - E - 1, then you can write the solution to the recurrence as a linear combination of those roots raised to the power of k. In this case, it turns out that the recurrence is similar to that for the Tribonacci numbers, and if you solve everything you'll find that the recurrence solves to something of the form O(1.83929^k).
Now, since you know that 2^k = n, we know that k = lg n. Therefore, the recurrence solves to O(1.83929^(lg n)). Let's let a = 1.83929. Then the solution has the form O(a^(lg n)) = O(a^((log_a n) / (log_a 2))) = O(n^(1 / log_a 2)). This works out to approximately O(n^0.87914...). Your initial upper bound of O(n^(lg 3)) = O(n^1.584962501...) is significantly weaker than this one.
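If you want to verify those numbers yourself, here is a small dependency-free Python sketch that finds the dominant root of the characteristic polynomial by bisection and converts it into the exponent (the particular root-finding method doesn't matter; any solver will do):

import math
# Dominant root of x^3 - x^2 - x - 1 = 0 (the Tribonacci constant, ~1.83929),
# found by bisection on [1, 2]: the polynomial is -2 at x = 1 and +1 at x = 2.
def h(x):
    return x**3 - x**2 - x - 1
lo, hi = 1.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if h(mid) < 0 else (lo, mid)
a = (lo + hi) / 2
print(a, math.log2(a))   # ~1.839287, ~0.879146, i.e. T(n) = O(n^0.8791...)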
Hope this helps!
There is a way simpler method than the one proposed by #template. Apart from the Master theorem, there is also the Akra-Bazzi method, which allows you to solve recurrences of this kind:
T(x) = g(x) + a_1·T(b_1·x) + a_2·T(b_2·x) + ... + a_k·T(b_k·x)
which is exactly what you have. So your g(x) = 0, a1 = a2 = a3 = 1 and b1 = 1/2, b2 = 1/4 and b3 = 1/8. So now you have to solve the equation 1/2^p + 1/4^p + 1/8^p = 1.
Solving it, p is approximately 0.879. You do not even need to evaluate the integral, because g(x) = 0 makes it vanish. So your overall complexity is O(n^0.879).
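You can check that value of p directly in Python (it is exactly log2 of the Tribonacci constant from the previous answer, since substituting y = 2^p turns the equation into y^3 = y^2 + y + 1):

# Plugging p ~ 0.879146 back into the Akra-Bazzi equation gives ~1.0.
p = 0.879146
print(0.5**p + 0.25**p + 0.125**p)   # ~1.000000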

Why is an + b = O(n^2)?

I need to prove that an + b = O(n2) using the formal definition of big-O notation. I have searched several textbooks I own on discrete mathematics as well as several online sources for any examples or theorems that are related to this proof, with no good results. I am not looking for a direct solution, but perhaps the right methods or paradigms to solve the proof.
Can anyone point me in the right direction?
Here's a hint: n^r ≤ n^s for all n ≥ 1 if r ≤ s. Therefore:
an + b ≤ an^2 + bn^2 = (a + b)n^2 if n ≥ 1
From this, can you see what choice of n0 and c you would pick to show that an + b ≤ cn^2 whenever n ≥ n0? Can you see how you could also use this to prove that an + b = O(n)?
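If you want to convince yourself of the hint's inequality numerically, a quick Python spot check for a few sample nonnegative a and b (an illustration only; reading off c and n0 is still left to you) might look like:

# an + b <= (a + b) n^2 for every checked n >= 1, for several sample (a, b).
for a, b in [(1, 1), (3, 7), (100, 5)]:
    assert all(a*n + b <= (a + b) * n**2 for n in range(1, 10_000))
print("hint inequality holds for every checked n >= 1")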
Hope this helps!

Show an log n + bn = O(n log n) regardless of a and b

I need help understanding a Big-O problem. I get the concept and have done a few practice problems already, but this one has me stumped.
Using the definition of big O, show that f(n) = an log n + bn is O(n log n). (a, b > 0)
I don't know how to find C or N, because if constants A or B change, then C and N have to change as well? Or am I looking at this the wrong way?
I have a test coming up, and I'd really like to understand this beforehand.
Thanks!
When you're given a statement like this one:
Prove that an log n + bn = O(n log n)
You can think of it as the following:
For any choice of a and b, prove that an log n + bn = O(n log n)
Which in turn means
For any choice of a and b, there is some choice of c and n0 such that an log n + bn ≤ cn log n for any n ≥ n0.
In other words, you first pick a and b, then show that an log n + bn = O(n log n). You're not trying to show that there are a fixed c and n0 that work in the definition of big-O notation regardless of a and b, but rather should show that no matter how someone picks a and b, you'll always be able to find a c and n0 - which probably depend on a and b - such that an log n + bn = O(n log n) using those choices of c and n0.
To see how you'd do this in this example, one observation that might be useful is that (assuming our logs are base two), 1 ≤ log n as long as n ≥ 2. Therefore, as long as we restrict n such that n ≥ 2, we get that
an log n + bn ≤ an log n + bn log n = (a + b) n log n
Given this, do you see how you might pick c and n0? We're restricting n such that n ≥ 2, so it makes sense to pick n0 = 2. Similarly, since we've just proven that an log n + bn ≤ (a + b) n log n, we can pick c = a + b.
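As a sanity check (not a proof), you can verify the choice c = a + b, n0 = 2 numerically in Python for a few sample positive values of a and b:

import math
# a*n*log2(n) + b*n <= (a + b)*n*log2(n) for every checked n >= 2.
for a, b in [(1, 1), (2, 5), (10, 0.5)]:
    c, n0 = a + b, 2
    assert all(a*n*math.log2(n) + b*n <= c*n*math.log2(n) for n in range(n0, 10_000))
print("c = a + b, n0 = 2 works for every checked (a, b, n)")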
You can think of this argument as a dialog between two people:
Someone else: I'm going to pick an a and b, but I won't tell you what they are.
You: Um, okay.
Someone else: So prove to me that there's an n0 and c such that an log n + bn ≤ cn log n whenever n ≥ n0.
You: Sure! Try picking c = a + b and n0 = 2. Does that work?
Someone else: Hey, you're right! That does work!
Notice that the dialog starts with the other party choosing a and b. That way, you can tailor your choice of c and n0 to make sure the claim holds. If you tried picking c and n0 first, they could always find an a and b that would break it.
Hope this helps!
Since A and B are constants, it's OK to express C and N in terms of A and B. For example, you might show that C = A + B and N = 2 are sufficient to prove that f(n) = O(n lg n).
Maybe I'm missing something, but this doesn't seem like a typical use of Big-O. Your function is a straightforward mathematical function, not an algorithm. If
f(n) = an log(n) + bn
then the complexity is the higher of that of (an log(n)) and (bn), assuming that the plus operation has negligible complexity. Since b is constant, (bn) is O(n), and likewise, as a is constant, (an log(n)) is O(n log(n)).
