Comparing O((log n)^const) with O(n) - math

I did some computation and found that if const = 2, then the derivative of n is the constant 1, while the derivative of (log n)^2 is 2 log n / n, which tends to 0 as n goes to infinity. So it seems n / (log n)^2 should diverge as n goes to infinity. But what if const > 2?

Rather than looking at the derivative, consider rewriting each expression in terms of the same base. Notice, for example, that for any logarithm base b,
n = b^(log_b n),
so in particular
n = (log n)^(log_{log n} n),
which can be rewritten using properties of logarithms as
n = (log n)^(log n / log log n).
Your question asks how (log n)^k compares against n. This means that we're comparing (log n)^k against (log n)^(log n / log log n). This should make it clearer that no constant k will ever cause (log n)^k to exceed n, since the exponent log n / log log n will eventually exceed any fixed constant k.
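To see this concretely, here is a quick numeric sketch (my own addition, not part of the original answer): the exponent log n / log log n grows without bound, so (log n)^k / n shrinks toward 0 for any fixed k.

```python
import math

def exponent(n):
    """The exponent e with (log n)**e == n, namely log n / log log n."""
    return math.log(n) / math.log(math.log(n))

# Sanity check: (log n) ** (log n / log log n) recovers n.
for n in (10**3, 10**6, 10**12, 10**24):
    assert abs(math.log(n) ** exponent(n) - n) / n < 1e-9

# For a fixed k (here k = 10), (log n)**k / n eventually shrinks toward 0.
ratios = [math.log(n) ** 10 / n for n in (10**6, 10**12, 10**24, 10**48)]
assert all(a > b for a, b in zip(ratios, ratios[1:]))
```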

Related

Big-O proof involving a sum of logs

Prove that the sum from i = 1 to n of ⌈log(n/i)⌉ is O(n).
I put the series into the summation, but I have no idea how to tackle this problem. Any help is appreciated.
There are two useful mathematical facts that can help out here. First, note that ⌈x⌉ ≤ x + 1 for any x. Therefore,
sum from i = 1 to n (⌈log (n/i)⌉) ≤ (sum from i = 1 to n log (n / i)) + n
Therefore, if we can show that the second summation is O(n), we're done.
Using properties of logs, we can rewrite
log(n/1) + log(n/2) + ... + log(n/n)
= log(n^n / n!)
Let's see if we can simplify this. Using properties of logarithms, we get that
log(n^n / n!) = log(n^n) - log(n!)
= n log n - log(n!)
Now, we can use Stirling's approximation, which says that
log (n!) = n log n - n + O(log n)
Therefore:
n log n - log(n!)
= n log n - (n log n - n + O(log n))
= n - O(log n)
= O(n)
So the summation is O(n), as required.
Hope this helps!
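As a numeric sanity check (a sketch I added, not part of the answer), the ratio of the summation to n does stay below a small constant:

```python
import math

def ceil_log_sum(n):
    """sum from i = 1 to n of ceil(log(n / i)), natural log."""
    return sum(math.ceil(math.log(n / i)) for i in range(1, n + 1))

# By the argument above, the sum is at most (n log n - log n!) + n = O(n),
# so the ratio to n should stay bounded by a constant.
for n in (100, 1000, 10000):
    assert ceil_log_sum(n) / n < 2
```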

Is O(n) greater than O(2^log n)

I read in a data structures book's complexity hierarchy diagram that n grows faster than 2^(log n), but I cannot understand how or why. Using simple examples with n a power of 2, I get values equal to n.
The base is not mentioned in the book, but I am assuming base 2 (as the context is data structure complexity).
a) Is O(n) > O(pow(2,logn))?
b) Is O(pow(2,log n)) better than O(n)?
Notice that 2^(log_b n) = 2^(log_2 n / log_2 b) = n^(1 / log_2 b). If log_2 b ≥ 1 (that is, b ≥ 2), then this entire expression is at most n and is therefore O(n). If log_2 b < 1 (that is, b < 2), then this expression is of the form n^(1 + ε) and therefore not O(n). Therefore, it boils down to what the log base is. If b ≥ 2, then the expression is O(n). If b < 2, then the expression is ω(n).
Hope this helps!
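A quick numeric illustration of the base dependence (my own sketch; `expr` is a helper name I made up):

```python
import math

def expr(n, b):
    """2 ** log_b(n), computed via the identity as n ** (1 / log2(b))."""
    return n ** (1.0 / math.log2(b))

n = 10**6
assert abs(expr(n, 2) - n) < 1e-3 * n   # base 2: exactly n
assert expr(n, 4) < n                   # base 4: n**0.5, which is o(n)
assert expr(n, 1.5) > n                 # base 1.5: n**1.71..., which is omega(n)
```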
There is a constant factor in there somewhere, but it's not in the right place to make O(n) equal to O(pow(2,log n)), assuming log means the natural logarithm.
n = 2 ** log2(n) // by definition of log2, the base-2 logarithm
= 2 ** (log(n)/log(2)) // standard conversion of logs from one base to another
n ** log(2) = 2 ** log(n) // raise both sides of that to the log(2) power
Since log(2) < 1, O(n ** log(2)) < O(n ** 1). Sure, there is only a constant ratio between the exponents, but the fact remains that they are different exponents. O(n ** 3) is greater than O(n ** 2) for the same reason: even though 3 is bigger than 2 by only a constant factor, it is bigger and the Orders are different.
We therefore have
O(n) = O(n ** 1) > O(n ** log(2)) = O(2 ** log(n))
Just like in the book.
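The derivation above can be spot-checked numerically (my own sketch, using the natural log as the answer assumes):

```python
import math

# Check the identity n ** log(2) == 2 ** log(n) (natural log), and that
# the resulting exponent log(2) ~ 0.693 makes the expression strictly sublinear.
for n in (10.0, 1e3, 1e9):
    lhs = n ** math.log(2)
    rhs = 2 ** math.log(n)
    assert abs(lhs - rhs) / rhs < 1e-9
    assert lhs < n  # n ** 0.693... grows slower than n ** 1
```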

Complexity asymptotic relation (theta, Big O, little o, Big Omega, little omega) between functions

Let's define:
Tower(1) of n is: n.
Tower(2) of n is: n^n (= power(n,n)).
Tower(10) of n is: n^n^n^n^n^n^n^n^n^n.
And also given two functions:
f(n) = [Tower(log n) of n] = n^n^n^...^n (a tower of height log n).
g(n) = [Tower(n) of log n] = log(n)^log(n)^...^log(n) (a tower of height n).
Three questions:
How are the functions f(n) and g(n) related to each other asymptotically (as n --> infinity),
in terms of: theta, Big O, little o, Big Omega, little omega?
Please describe the exact method of solution, not only the final result.
Does the base of the log (e.g. 0.5, 2, 10, log n, or n) affect the result?
If not - why?
If so - how?
I'd like to know whether any real (even hypothetical) application has complexity that looks like f(n) or g(n) above. Please describe such a case, if one exists.
P.S.
I tried to substitute log n = a, so that n = 2^a or 10^a,
and got confused counting the heights of the resulting towers.
I won't provide a solution, because you should work on your homework yourself, but maybe other people are interested in some hints.
1) Mathematics:
log(a^x) = x*log(a)
this identity will simplify your problem
2) Mathematics:
log_x(y) = log_2(y) / log_2(x) = log_10(y) / log_10(x)
of course, if x is constant, then log_2(x) and log_10(x) are constants
3) recursive + stop condition
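Both hints can be spot-checked numerically (a sketch I added, not part of the hints):

```python
import math

a, x, y = 3.0, 5.0, 7.0

# Hint 1: log(a ** x) == x * log(a)
assert abs(math.log(a ** x) - x * math.log(a)) < 1e-9

# Hint 2: change of base -- log_x(y) == log2(y) / log2(x) == log10(y) / log10(x)
assert abs(math.log(y, x) - math.log2(y) / math.log2(x)) < 1e-9
assert abs(math.log(y, x) - math.log10(y) / math.log10(x)) < 1e-9
```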

Big Oh with log (n) and exponents

So I have a few given functions and need to find Big Oh for them (which I did).
n log(n) = O(n log(n))
n^2 = O(n^2)
n log(n^2) = O(n log(n))
n log(n)^2 = O(n^3)
n = O(n)
log is the natural logarithm.
I am pretty sure that 1,2,5 are correct.
For 3 I found a solution somewhere here: n log(n^2) = 2 n log (n) => O (n log n)
But I am completely unsure about 4). n^3 is definitely bigger than n log(n)^2, but is it the O of it? My other guess would be O(n^2).
A few other things:
n^2 * log(n)
n^2 * log(n)^2
What would that be?
Would be great if someone could explain it if it is wrong. Thank you!
Remember that big-O provides an asymptotic upper bound on a function, so any function that is O(n) is also O(n log n), O(n^2), O(n!), etc. Since log n = O(n), we have n log^2 n = O(n^3). It's also the case that n log^2 n = O(n log^2 n) and n log^2 n = O(n^2). In fact, n log^2 n = O(n^(1 + ε)) for any ε > 0, since log^k n = O(n^ε) for any ε > 0.
The functions n^2 log n and n^2 log^2 n can't be simplified in the way that some of the other ones can. Runtimes of the form O(n^k log^r n) aren't all that uncommon. In fact, there are many algorithms that have runtime O(n^2 log n) or O(n^2 log^2 n), and these runtimes are often left as such. For example, each iteration of the Karger-Stein algorithm takes time O(n^2 log n) because this runtime comes from the Master Theorem as applied to the recurrence
T(n) = 2T(n / √2) + O(n^2)
Hope this helps!
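As an added numeric sketch (not from the answer), the claim n log^2 n = O(n^(1 + ε)) can be illustrated with ε = 0.1: the ratio rises to a maximum near n = e^20 and then falls toward 0.

```python
import math

# Ratio (n * log(n)**2) / n**1.1 for growing n. Past its peak near n = e**20,
# the ratio shrinks toward 0, so n log^2 n = O(n**1.1).
ratios = [(n * math.log(n) ** 2) / n ** 1.1
          for n in (10**12, 10**24, 10**48, 10**96)]
assert all(a > b for a, b in zip(ratios, ratios[1:]))
assert ratios[-1] < 1e-4
```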

Growth of inverse factorial

Consider the inverse factorial function, f(n) = k where k! is the greatest factorial <= n. I've been told that the inverse factorial function is O(log n / log log n). Is it true? Or is it just a really really good approximation to the asymptotic growth? The methods I tried all give things very close to log(n)/log log(n) (either a small factor or a small term in the denominator) but not quite.
Remember that, when we're using O(...), constant factors don't matter, and any term that grows more slowly than another term can be dropped. ~ means "is proportional to."
If k is large, then n = k! ~ k^k (ignoring lower-order factors), so log n ~ k log k, i.e. k ~ log n / log k. Taking logs of that relation gives log k ~ log log n - log log k, so k ~ log n / (log log n - log log k). Because log log k grows far more slowly than log log n, we can drop it from the denominator, and we get k ~ log n / log log n, so k = O(log n / log log n).
Start from Stirling's Approximation for ln(k!) and work backwards from there. Apologies for not working the whole thing out; my brain doesn't seem to be working tonight.
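A brute-force check of the claimed growth rate (my own sketch; `inverse_factorial` is a helper I defined):

```python
import math

def inverse_factorial(n):
    """Return the largest k with k! <= n (assumes n >= 1)."""
    k = 1
    while math.factorial(k + 1) <= n:
        k += 1
    return k

# The ratio k / (log n / log log n) should stay within a constant band,
# consistent with k = Theta(log n / log log n).
for k_true in (20, 50, 100):
    n = math.factorial(k_true)
    assert inverse_factorial(n) == k_true
    approx = math.log(n) / math.log(math.log(n))
    assert 0.5 < k_true / approx < 2
```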
