Big-O proof involving a sum of logs

Prove that

sum from i = 1 to n (⌈log (n/i)⌉) = O(n)

I put the series into summation form, but I have no idea how to tackle this problem. Any help is appreciated.

There are two useful mathematical facts that can help out here. First, note that ⌈x⌉ ≤ x + 1 for any x. Therefore,
sum from i = 1 to n (⌈log (n/i)⌉) ≤ (sum from i = 1 to n log (n / i)) + n
Therefore, if we can show that the second summation is O(n), we're done.
Using properties of logs, we can rewrite
log(n/1) + log(n/2) + ... + log(n/n)
= log(n^n / n!)
Let's see if we can simplify this. Using properties of logarithms, we get that
log(n^n / n!) = log(n^n) - log(n!)
= n log n - log(n!)
Now, we can use Stirling's approximation, which says that
log (n!) = n log n - n + O(log n)
Therefore:
n log n - log (n!)
= n log n - n log n + n - O(log n)
= O(n)
So the summation is O(n), as required.
Hope this helps!
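As a quick empirical sanity check (not a proof), here is a short Python sketch, assuming base-2 logarithms, that watches the ratio of the sum to n:

    import math

    # Sanity check (base-2 logs assumed): the sum of ceil(log2(n/i))
    # should grow linearly in n, so total/n should stay bounded by a
    # small constant (roughly log2(e) + 1 by the argument above).
    for n in [10, 100, 1000, 10000, 100000]:
        total = sum(math.ceil(math.log2(n / i)) for i in range(1, n + 1))
        print(n, total, total / n)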


Related

Comparing O((log n)^const) with O(n)

I did some computation and found that if const = 2, the derivative of n is 1, while the derivative of (log n)^2 is 2 log n / n, which tends to 0 as n goes to infinity; thus it seems the ratio n / (log n)^2 should diverge. But what if const > 2?
Rather than looking at the derivative, consider rewriting each expression in terms of the same base. Notice, for example, that for any logarithm base b,
n = b^(log_b n),
so in particular
n = (log n)^(log_{log n} n),
which can be rewritten using properties of logarithms as
n = (log n)^(log n / log log n).
Your question asks how (log n)^k compares against n. This means that we're comparing (log n)^k against (log n)^(log n / log log n). This should make clearer that no constant k will ever cause (log n)^k to exceed n, since the term log n / log log n will eventually exceed k for any fixed constant k.
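A small numeric illustration (a sketch; the choice k = 5, the sample values of n, and the use of natural logs are all arbitrary, since the base only changes constant factors):

    import math

    # The exponent log n / log log n eventually exceeds any fixed k,
    # at which point (log n)^k falls below n.
    k = 5
    for n in [10**2, 10**4, 10**8, 10**16, 10**32]:
        exponent = math.log(n) / math.log(math.log(n))
        print(n, exponent, math.log(n) ** k < n)

For small n the comparison can go the other way, which is why experiments at small n are misleading here.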

Complexity asymptotic relation (theta, Big O, little o, Big Omega, little omega) between functions

Let's define:
Tower(1) of n is: n.
Tower(2) of n is: n^n (= power(n,n)).
Tower(10) of n is: n^n^n^n^n^n^n^n^n^n.
And we are also given two functions:
f(n) = [Tower(log n) of n] = n^n^n^...^n (a tower of height log n).
g(n) = [Tower(n) of log n] = log(n)^log(n)^...^log(n) (a tower of height n).
Three questions:
How are the functions f(n) and g(n) related to each other asymptotically (as n --> infinity),
in terms of: Theta, Big O, little o, Big Omega, little omega?
Please describe the exact method of solution, not only the eventual result.
Does the base of the log (e.g.: 0.5, 2, 10, log n, or n) affect the result?
If no - why?
If yes - how?
I'd like to know whether in any real (even hypothetical) application the complexity looks similar to f(n) or g(n) above. Please give a case description, if one exists.
P.S.
I tried to substitute log n = a, therefore n = 2^a or 10^a,
and got confused counting the height of the resulting "towers".
I won't provide you a solution, because you have to work on your homework, but maybe other people are interested in some hints.
1) Mathematics:
log(a^x) = x*log(a)
This will simplify your problem.
2) Mathematics:
log_x(y) = log_2(y) / log_2(x) = log_10(y) / log_10(x)
Of course, if x is constant, then log_2(x) and log_10(x) are constants.
3) Recursion + a stop condition.
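A quick numeric spot-check of both identities (the sample values a, x, y are arbitrary):

    import math

    a, x, y = 3.0, 5.0, 42.0

    # Hint 1: log(a^x) = x * log(a)
    print(math.log(a ** x), x * math.log(a))

    # Hint 2: log_x(y) = log_2(y) / log_2(x) = log_10(y) / log_10(x)
    print(math.log(y, x), math.log2(y) / math.log2(x), math.log10(y) / math.log10(x))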

Calculating the Recurrence Relation T(n)=T(n-1)+logn

We are to solve the recurrence relation through repeated substitution:
T(n)=T(n-1)+logn
I started the substitution and got the following.
T(n)=T(n-2)+log(n)+log(n-1)
By logarithm product rule, log(mn)=logm+logn,
T(n)=T(n-2)+log[n*(n-1)]
Continuing this, I get
T(n)=T(n-k)+log[n*(n-1)*...*(n-k+1)]
We know that the base case is T(1), so n-k=1 -> k=n-1, and substituting this in we get
T(n)=T(1)+log[n*(n-1)*...*1]
Clearly n*(n-1)*...*1 = n! so,
T(n)=T(1)+log(n!)
I do not know how to solve beyond this point. Is the answer simply O(log(n!))? I have read other explanations saying that it is Θ(n log n), and thus it follows that O(n log n) and Ω(n log n) are the upper and lower bounds, respectively.
This expands out to log (n!). You can see this because
T(n) = T(n - 1) + log n
= T(n - 2) + log (n - 1) + log n
= T(n - 3) + log (n - 2) + log (n - 1) + log n
= ...
= T(0) + log 1 + log 2 + ... + log (n - 1) + log n
= T(0) + log n!
The exact answer depends on what T(0) is, but this is Θ(log n!) for any fixed constant value of T(0).
A note - using Stirling's approximation, Θ(log n!) = Θ(n log n). That might help you relate this back to existing complexity classes.
Hope this helps!
Stirling's formula is not needed to get the big-Theta bound. It's O(n log n) because it's a sum of at most n terms each at most log n. It's Omega(n log n) because it's a sum of at least n/2 terms each at least log (n/2) = log n - 1.
Yes, this is a linear recurrence of the first order. It can be solved exactly. If your initial value is $T(1) = 0$, you do get $T(n) = \log n!$. You can approximate $\log n!$ (see Stirling's formula):
$$
\ln n! = n \ln n - n + \frac{1}{2} \ln(2\pi n) + O(1/n)
$$
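As a numeric sanity check (a sketch assuming T(1) = 0 and natural logarithms), unrolling the recurrence reproduces ln n! exactly, and Stirling's leading terms capture its growth:

    import math

    # Unroll T(n) = T(n-1) + ln n starting from T(1) = 0.
    N = 10000
    T = 0.0
    for k in range(2, N + 1):
        T += math.log(k)

    print(T)                      # T(N) from the recurrence
    print(math.lgamma(N + 1))     # ln(N!) computed directly
    print(N * math.log(N) - N)    # Stirling's leading terms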

Big Oh with log (n) and exponents

So I have a few given functions and need to find the Big Oh for them (which I did).
1. n log(n) = O(n log(n))
2. n^2 = O(n^2)
3. n log(n^2) = O(n log(n))
4. n log(n)^2 = O(n^3)
5. n = O(n)
log is the natural logarithm.
I am pretty sure that 1,2,5 are correct.
For 3 I found a solution somewhere here: n log(n^2) = 2 n log (n) => O (n log n)
But I am completely unsure about 4). n^3 is definitely bigger than n log(n)^2, but is it the right Oh for it? My other guess would be O(n^2).
A few other things:
n^2 * log(n)
n^2 * log(n)^2
What would that be?
Would be great if someone could explain it if it is wrong. Thank you!
Remember that big-O provides an asymptotic upper bound on a function, so any function that is O(n) is also O(n log n), O(n^2), O(n!), etc. Since log n = O(n), we have n log^2 n = O(n^3). It's also the case that n log^2 n = O(n log^2 n) and n log^2 n = O(n^2). In fact, n log^2 n = O(n^(1 + ε)) for any ε > 0, since log^k n = O(n^ε) for any ε > 0.
The functions n^2 log n and n^2 log^2 n can't be simplified in the way that some of the other ones can. Runtimes of the form O(n^k log^r n) aren't all that uncommon. In fact, there are many algorithms that have runtime O(n^2 log n) and O(n^2 log^2 n), and these runtimes are often left as such. For example, each iteration of the Karger-Stein algorithm takes time O(n^2 log n) because this runtime comes from the Master Theorem as applied to the recurrence
T(n) = 2T(n / √2) + O(n^2)
Hope this helps!
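For a concrete (non-proof) look at the comparison against n^(1 + ε), here is a sketch with an arbitrary ε = 0.1:

    import math

    # The ratio n * log(n)^2 / n^(1 + eps) tends to 0, but only for
    # very large n; it first grows while log(n)^2 outpaces n^eps.
    eps = 0.1
    for n in [10**4, 10**8, 10**16, 10**32, 10**64, 10**128]:
        ratio = n * math.log(n) ** 2 / n ** (1 + eps)
        print(n, ratio)

The ratio rises before it decays, a good reminder that logarithmic factors can dominate for every practically reachable n even when they vanish asymptotically.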

Growth of inverse factorial

Consider the inverse factorial function, f(n) = k where k! is the greatest factorial <= n. I've been told that the inverse factorial function is O(log n / log log n). Is it true? Or is it just a really, really good approximation to the asymptotic growth? The methods I tried all give results very close to log(n) / log log(n) (off by either a small factor or a small term in the denominator), but not quite.
Remember that, when we're using O(...), constant factors don't matter, and any term that grows more slowly than another term can be dropped. ~ means "is proportional to."
If k is large, then n = k! ~ k^k. So log n ~ k log k, i.e., k ~ log n / log k, so k ~ log n / log(log n / log k) = log n / (log log n - log log k). Because log log k grows much more slowly than log log n, we can drop it from the denominator, and we get k ~ log n / log log n, so k = O(log n / log log n).
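A small empirical sketch (the helper inverse_factorial is written here just for the check; natural logs are used): the exact k and log n / log log n track each other up to a slowly varying factor, consistent with the Θ claim and with the "close but not quite" observation in the question:

    import math

    def inverse_factorial(n):
        # Largest k such that k! <= n, found by direct search.
        k, fact = 1, 1
        while fact * (k + 1) <= n:
            k += 1
            fact *= k
        return k

    for n in [10**3, 10**6, 10**12, 10**24, 10**48]:
        approx = math.log(n) / math.log(math.log(n))
        print(n, inverse_factorial(n), approx)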
Start from Stirling's Approximation for ln(k!) and work backwards from there. Apologies for not working the whole thing out; my brain doesn't seem to be working tonight.
