Is O(n) greater than O(2^log n) - math

I read in a data structures book's complexity hierarchy diagram that n grows faster than 2^(log n), but I cannot understand how or why. When I try simple examples with n a power of 2, I get values equal to n.
The base of the logarithm is not mentioned in the book, but I am assuming it is base 2 (as the context is data-structure complexity).
a) Is O(n) > O(pow(2,logn))?
b) Is O(pow(2,log n)) better than O(n)?

Notice that 2^(log_b n) = 2^(log_2 n / log_2 b) = n^(1 / log_2 b). If log_2 b ≥ 1 (that is, b ≥ 2), then this entire expression is at most n and is therefore O(n). If log_2 b < 1 (that is, b < 2), then the expression is of the form n^(1 + ε) for some ε > 0 and is therefore not O(n). So it boils down to what the log base is: if b ≥ 2, the expression is O(n); if b < 2, the expression is ω(n).
Hope this helps!
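To make the base dependence concrete, here is a small Python check (my own illustration; the value of n and the bases are arbitrary) that evaluates 2^(log_b n) = n^(1 / log_2 b) for a few bases b and compares it against n:

import math

n = 10**6
for b in (1.5, 2, math.e, 10):
    value = 2 ** math.log(n, b)   # 2 ** log_b(n), which equals n ** (1 / log2(b))
    print(b, value, n)
# b = 2 reproduces n (up to floating-point rounding), b > 2 gives something
# asymptotically smaller than n, and b = 1.5 gives something that grows faster than n.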

There is a constant factor in there somewhere, but it's not in the right place to make O(n) equal to O(pow(2,log n)), assuming log means the natural logarithm.
n = 2 ** log2(n) // by definition of log2, the base-2 logarithm
= 2 ** (log(n)/log(2)) // standard conversion of logs from one base to another
n ** log(2) = 2 ** log(n) // raise both sides of that to the log(2) power
Since log(2) < 1, O(n ** log(2)) < O(n ** 1). Sure, there is only a constant ratio between the exponents, but the fact remains that they are different exponents. O(n ** 3) is greater than O(n ** 2) for the same reason: even though 3 is bigger than 2 by only a constant factor, it is bigger and the Orders are different.
We therefore have
O(n) = O(n ** 1) > O(n ** log(2)) = O(2 ** log(n))
Just like in the book.
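As a quick numerical sanity check of this derivation (a sketch, assuming log is the natural logarithm as above, with arbitrary sample values of n):

import math

for n in (10, 10**3, 10**6, 10**9):
    lhs = 2 ** math.log(n)    # 2 ** ln(n)
    rhs = n ** math.log(2)    # n ** ln(2), i.e. roughly n ** 0.693
    print(n, lhs, rhs)
# lhs and rhs agree (up to rounding), and both fall further and further behind n.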

Related

Comparing O((log n)^const) with O(n)

I did some computation and found that if const = 2, then the derivative of n is 1 while the derivative of (log n)^2 is 2 log n / n, which tends to 0 as n goes to infinity; thus it seems n / (log n)^2 should diverge as n goes to infinity. But what if const > 2?
Rather than looking at the derivative, consider rewriting each expression in terms of the same base. Notice, for example, that for any logarithm base b that
n = b^(log_b n),
so in particular
n = (log n)^(log_{log n} n)
which can be rewritten using properties of logarithms as
n = (log n)^(log n / log log n)
Your question asks how (log n)^k compares against n. This means that we're comparing (log n)^k against (log n)^(log n / log log n). This should make clearer that no constant k will ever cause (log n)^k to exceed n, since the term log n / log log n will eventually exceed k for any fixed constant k.
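To see numerically that the effective exponent log n / log log n outgrows any fixed k, a quick sketch (the sample values of n are arbitrary):

import math

for n in (10**3, 10**6, 10**12, 10**100):
    exponent = math.log(n) / math.log(math.log(n))
    print(n, exponent)
# The exponent grows without bound, so for any fixed k,
# (log n)^k < (log n)^(log n / log log n) = n once the exponent passes k.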

Finding the upper bound of a mathematical function (function analysis)

I am trying to understand Big-O notation through a book I have, and it covers Big-O using functions, although I am a bit confused. The book says that in O(g(n)), g(n) is the upper bound of f(n). So I understand that means that g(n) gives the max rate of growth for f(n) at larger values of n,
and that there exists an n_0 where c·g(n) (where c is some constant) and f(n) have the same rate of growth.
But what I am confused about is these examples of finding Big-O for mathematical functions.
The book says: find the upper bound for f(n) = n^4 + 100n^2 + 50.
It then states that n^4 + 100n^2 + 50 <= 2n^4 (I'm unsure where the 2n^4 comes from),
and then somehow finds n_0 = 11 and c = 2. I understand why the big O is O(n^4), but I am confused about the rest.
This is all discouraging, as I don't understand it, but I feel like this is an important topic that I must understand.
If any one is curious the book is Data Structures and Algorithms Made Easy by Narasimha Karumanchi
Not sure if this post belongs here or in the math board.
Preparations
First, let's state, loosely, the definition of f being in O(g(n)) (note: O(g(n)) is a set of functions, so to be picky, we say that f is in O(...), rather than f(n) being in O(...)).
If a function f(n) is in O(g(n)), then c · g(n) is an upper bound on
f(n), for some constant c such that f(n) is always ≤ c · g(n),
for large enough n (i.e., n ≥ n0 for some constant n0).
Hence, to show that f(n) is in O(g(n)), we need to find a set of constants (c, n0) that fulfils
f(n) ≤ c · g(n), for all n ≥ n0, (+)
but this set is not unique. I.e., the problem of finding the constants (c, n0) such that (+) holds is degenerate. In fact, if any such pair of constants exists, there will exist an infinite amount of different such pairs.
Showing that f ∈ O(n^4)
Now, let's proceed and look at the example that confused you:
Find an upper asymptotic bound for the function
f(n) = n^4 + 100n^2 + 50 (*)
One straightforward approach is to bound the lower-order terms in (*) by the highest-order term, i.e., to look for an inequality of the form (... ≤ ...).
Hence, we see if we can find a lower bound on n such that the following holds
100n^2 + 50 ≤ n^4, for all n ≥ ???, (i)
We can easily find when equality holds in (i) by solving the equation
m = n^2, m > 0
m^2 - 100m - 50 = 0
(m - 50)^2 - 50^2 - 50 = 0
(m - 50)^2 = 2550
m = 50 ± sqrt(2550) = { m > 0, single root } ≈ 100.5
=> n ≈ { n > 0 } ≈ 10.025
Hence, (i) holds for n ≳ 10.025, but we'd much rather present this bound on n with a neat integer value, hence rounding up to 11:
100n^2 + 50 ≤ n^4, for all n ≥ 11, (ii)
From (ii) it's apparent that the following holds
f(n) = n^4 + 100n^2 + 50 ≤ n^4 + n^4 = 2 · n^4, for all n ≥ 11, (iii)
And this relation is exactly (+) with c = 2, n0 = 11 and g(n) = n^4, and hence we've shown that f ∈ O(n^4). Note again, however, that the choice of constants c and n0 is one of convenience and is not unique. Since we've shown that (+) holds for one set of constants (c, n0), it in fact holds for infinitely many different such choices (e.g., it naturally also holds for c = 10 and n0 = 20, and so on).
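As a quick brute-force confirmation of (iii) and of the threshold computed above (just a numerical sketch, not part of the proof):

def f(n):
    return n**4 + 100*n**2 + 50

# The bound f(n) <= 2*n^4 holds from n = 11 onward ...
assert all(f(n) <= 2 * n**4 for n in range(11, 10001))
# ... but not yet at n = 10, matching the threshold n ≈ 10.025 found above.
assert f(10) > 2 * 10**4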

Show an log n + bn = O(n log n) regardless of a and b

I need help understanding a Big-O problem. I get the concept and have done a few practice problems already, but this one has me stumped.
Using the definition of big O, show that f(n) = an log n + bn is O(n log n). (a, b > 0)
I don't know how to find C or N, because if constants A or B change, then C and N have to change as well? Or am I looking at this the wrong way?
I have a test coming up, and I'd really like to understand this beforehand.
Thanks!
When you're given a statement like this one:
Prove that an log n + bn = O(n log n)
You can think of it as the following:
For any choice of a and b, prove that an log n + bn = O(n log n)
Which in turn means
For any choice of a and b, there is some choice of c and n0 such that an log n + bn ≤ cn log n for any n ≥ n0.
In other words, you first pick a and b, then show that an log n + bn = O(n log n). You're not trying to show that there are a fixed c and n0 that work in the definition of big-O notation regardless of a and b, but rather should show that no matter how someone picks a and b, you'll always be able to find a c and n0 - which probably depend on a and b - such that an log n + bn = O(n log n) using those choices of c and n0.
To see how you'd do this in this example, one observation that might be useful is that (assuming our logs are base two), 1 ≤ log n as long as n ≥ 2. Therefore, as long as we restrict n such that n ≥ 2, we get that
an log n + bn ≤ an log n + bn log n = (a + b) n log n
Given this, do you see how you might pick c and n0? We're restricting n such that n ≥ 2, so it makes sense to pick n0 = 2. Similarly, since we've just proven that an log n + bn ≤ (a + b) n log n, we can pick c = a + b.
You can think of this argument as a dialog between two people:
Someone else: I'm going to pick an a and b, but I won't tell you what they are.
You: Um, okay.
Someone else: So prove to me that there's an n0 and c such that an log n + bn ≤ cn log n whenever n ≥ n0.
You: Sure! Try picking c = a + b and n0 = 2. Does that work?
Someone else: Hey, you're right! That does work!
Notice that the dialog starts with the other party choosing a and b. That way, you can tailor your choice of c and n0 to make sure the claim holds. If you tried picking c and n0 first, they could always find an a and b that would break it.
Hope this helps!
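A small numeric check of the choice c = a + b and n0 = 2 suggested above (base-2 logarithms as in the answer; the particular values of a and b are arbitrary):

import math

def bound_holds(a, b, n):
    # a*n*log2(n) + b*n <= (a + b)*n*log2(n) ?
    return a * n * math.log2(n) + b * n <= (a + b) * n * math.log2(n)

for a, b in ((1, 1), (0.5, 100), (3, 7)):
    assert all(bound_holds(a, b, n) for n in range(2, 1000))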
Since A and B are constants, it's OK to express C and N in terms of A and B. For example, you might show that C=A+B and N > 2A are sufficient to prove that f(n) = O(n lg n).
Maybe I'm missing something but this doesn't seem like typical use of Big-O. Your function is a straightforward mathematical function, not an algorithm. If:
f(n) = an log(n) + bn
Then the complexity is the higher of that of (an log(n)) and (bn), assuming that the plus operation has negligible complexity. Since b is constant, (bn) is O(n), and likewise, as a is constant, (an log(n)) is O(n log(n)).

Big Oh with log (n) and exponents

So I have a few given functions and need to find Big-O for them (which I did).
1) n log(n) = O(n log(n))
2) n^2 = O(n^2)
3) n log(n^2) = O(n log(n))
4) n log(n)^2 = O(n^3)
5) n = O(n)
log is the natural logarithm.
I am pretty sure that 1,2,5 are correct.
For 3) I found a solution somewhere here: n log(n^2) = 2n log(n) => O(n log n).
But I am completely unsure about 4). n^3 is definitely bigger than n log(n)^2, but is it the Oh of it? My other guess would be O(n^2).
A few other things:
n^2 * log(n)
n^2 * log(n)^2
What would that be?
Would be great if someone could explain it if it is wrong. Thank you!
Remember that big-O provides an asymptotic upper bound on a function, so any function that is O(n) is also O(n log n), O(n^2), O(n!), etc. Since log n = O(n), we have n log^2 n = O(n^3). It's also the case that n log^2 n = O(n log^2 n) and n log^2 n = O(n^2). In fact, n log^2 n = O(n^(1 + ε)) for any ε > 0, since log^k n = O(n^ε) for any ε > 0.
The functions n^2 log n and n^2 log^2 n can't be simplified in the way that some of the other ones can. Runtimes of the form O(n^k log^r n) aren't all that uncommon. In fact, there are many algorithms that have runtime O(n^2 log n) and O(n^2 log^2 n), and these runtimes are often left as such. For example, each iteration of the Karger-Stein algorithm takes time O(n^2 log n) because this runtime comes from the Master Theorem as applied to the recurrence
T(n) = 2T(n / √2) + O(n^2)
Hope this helps!
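To get a feel for where n log^2 n sits between n and n^2, a small numeric comparison (base-2 logarithms and arbitrary sample points, just for illustration):

import math

for n in (10**2, 10**4, 10**6, 10**8):
    f = n * math.log2(n) ** 2
    print(n, f, n**1.5, n**2)
# n * log^2(n) eventually drops below n^1.5 (and, further out, below any n^(1 + eps)),
# while staying far below n^2 throughout.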

Quadratic probing: (f(k) + a*j + b*j^2) % M, How to choose a and b?

If M is prime, how to choose a and b to minimize collisions?
Also, in books it is written that for quadratic probing with (f(k) + j^2) % M to be guaranteed to find an empty slot, the hash table has to be at least half empty. Can someone provide me a proof of that?
There are some suggested values for choosing a and b on Wikipedia:
For prime M > 2, most choices of a and b will make f(k,j) distinct for j in [0,(M − 1) / 2]. Such choices include a = b = 1/2, a = b = 1, and a = 0,b = 1. Because there are only about M/2 distinct probes for a given element, it is difficult to guarantee that insertions will succeed when the load factor is > 1/2.
A proof for the guarantee of finding the empty slots is here or here.
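The half-empty guarantee for (f(k) + j^2) % M comes down to the fact that, for prime M, the offsets j^2 mod M are pairwise distinct for j = 0, 1, ..., (M − 1)/2: if j^2 ≡ i^2 (mod M), then M divides (j − i)(j + i), which is impossible when 0 ≤ i < j ≤ (M − 1)/2. So those (M + 1)/2 probes cover more than half the table, and if the table is at least half empty, one of them must land on an empty slot. A small brute-force check of the distinctness claim (the primes chosen are arbitrary):

def probe_offsets_distinct(M):
    # quadratic-probe offsets j*j % M for j = 0 .. (M - 1) // 2
    offsets = [(j * j) % M for j in range((M - 1) // 2 + 1)]
    return len(offsets) == len(set(offsets))

for M in (3, 7, 11, 101, 1009):   # small primes, arbitrary choices
    assert probe_offsets_distinct(M)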
