Asymptotic complexity relation (Theta, Big O, little o, Big Omega, little omega) between functions

Let's define:
Tower(1) of n is: n.
Tower(2) of n is: n^n (= power(n,n)).
Tower(10) of n is: n^n^n^n^n^n^n^n^n^n.
And also given two functions:
f(n) = [Tower(log n) of n] = n^n^n^...^n (a tower of n's of height log n).
g(n) = [Tower(n) of log n] = log(n)^log(n)^...^log(n) (a tower of log(n)'s of height n).
Three questions:
1) How are f(n) and g(n) related to each other asymptotically (as n --> infinity),
in terms of Theta, Big O, little o, Big Omega, and little omega?
Please describe the exact method of solution, not only the eventual result.
2) Does the base of the log (e.g. 0.5, 2, 10, log n, or n) affect the result?
If not - why not?
If yes - how?
3) I'd like to know whether there is any real (even if hypothetical) application whose complexity looks similar to f(n) or g(n) above. Please describe such a case, if one exists.
P.S.
I tried to substitute log n = a, so that n = 2^a or 10^a,
and got confused counting the height of the resulting towers.

I won't provide a solution, because you have to do your homework yourself, but maybe other people are interested in some hints.
1) Mathematics:
log(a^x) = x*log(a)
this will make the problem manageable (see the sketch after these hints)
2) Mathematics:
log_x(y) = log_2(y) / log_2(x) = log_10(y) / log_10(x)
of course, if x is constant, then log_2(x) and log_10(x) are constants
3) recursion + a stop condition
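To make hints 1) and 2) concrete, here is a small sketch of my own (the notation T_h(x) for a tower of x's of height h is mine, not from the question). Taking a logarithm strips exactly one level off a tower, and changing the base of that logarithm only rescales the result by a constant factor:

\log T_h(x) = \log\!\left( x^{\,T_{h-1}(x)} \right) = T_{h-1}(x)\cdot \log x

\log f(n) = T_{\log n - 1}(n)\cdot \log n,
\qquad
\log g(n) = T_{n-1}(\log n)\cdot \log\log n

Repeating this step and keeping track of how many tower levels remain on each side is one way to organize the comparison the hints point at.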

Related

Reversing mod for a function?

I'm trying to solve this equation:
(b*(a*x + b) - c) % n = e
Where everything is given except x
I tried the approach of:
(A + x) % B = C
(B + C - A) % B = x
where A is (-c), and then manually solved for x given my other substitutions, but I am not getting the correct output. Would I possibly need to use the extended Euclidean algorithm (EEA)? Any help would be appreciated! I understand this question has been asked before; I tried those solutions, but they don't work for me.
(b*(a*x+b) - c) % n = e
can be rewritten as:
(b*a*x) % n = (e - b*b + c) % n
x = ((e - b*b + c) * modular_inverse(b*a, n)) % n
where the modular inverse of u, modular_inverse(u, n), is a number v such that u*v % n == 1. See this question for code to calculate the modular inverse.
Some caveats:
When simplifying modular equations, you can never simply divide, you need to multiply with the modular inverse.
There is no straightforward formula to calculate the modular inverse, but there is a simple, quick algorithm to calculate it, similar to calculating the gcd (see the sketch after these caveats).
The modular inverse doesn't always exist.
Depending on the programming language, when one or both arguments are negative, the result of modulo can also be negative.
As every solution works for every x modulo n, for small n only the numbers from 0 till n-1 need to be tested, so in many cases a simple loop is sufficient.
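For reference, here is a minimal sketch, in Java, of one standard way to compute the modular inverse with the extended Euclidean algorithm; the name modularInverse and the use of long are my own choices, not taken from the linked question:

// Extended Euclidean algorithm: returns v such that (u * v) % n == 1.
// Throws if gcd(u, n) != 1, i.e. the modular inverse does not exist.
static long modularInverse(long u, long n) {
    long r0 = n, r1 = ((u % n) + n) % n;   // remainders, kept non-negative
    long t0 = 0, t1 = 1;                   // Bezout coefficients for u
    while (r1 != 0) {
        long q = r0 / r1;
        long r = r0 - q * r1; r0 = r1; r1 = r;
        long t = t0 - q * t1; t0 = t1; t1 = t;
    }
    if (r0 != 1) throw new ArithmeticException("gcd(u, n) != 1, no inverse");
    return ((t0 % n) + n) % n;             // normalize into [0, n)
}

With that in place, x could be computed as (((e - b*b + c) % n + n) % n * modularInverse(a*b, n)) % n, which also sidesteps the negative-modulo caveat above.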
What language are you doing this in, and are the variables constant?
Here's a quick way to determine the possible values of x in Java:
// Brute force: try candidate x values in a fixed range and print those satisfying the equation.
for (int x = -1000; x < 1000; x++) {
    if ((b * ((a * x) + b) - c) % n == e) {
        System.out.println(x);
    }
}

Finding time complexity of recursive formula

I'm trying to find time complexity (big O) of a recursive formula.
I tried to find a solution; you can see the formula and my attempt below:
Like Brenner said, your last assumption is false. Here is why. Let's take the definition of big-O from the Wikipedia page (using n instead of x):
f(n) = O(g(n)) if and only if there exist constants c, n0 s.t. |f(n)| <= c |g(n)| for all n >= n0.
We want to check if O(2^(n^2)) = O(2^n). Clearly, 2^(n^2) is in O(2^(n^2)), so let's pick f(n) = 2^(n^2) and check whether it is in O(2^n). Put this into the above formula:
there exist c, n0: 2^(n^2) <= c * 2^n for all n >= n0
Let's see if we can find suitable constant values n0 and c for which the above is true, or if we can derive a contradiction to proof that it is not true:
Take the log on both sides:
log(2^(n^2)) <= log(c * 2^n)
Simplify:
n^2 * log(2) <= log(c) + n * log(2)
Divide by log(2):
n^2 <= log(c)/log(2) + n
It's easy to see now that there are no c, n0 for which the above holds for all n >= n0, since n^2 eventually exceeds n plus any constant. Thus O(2^(n^2)) = O(2^n) is not a valid assumption.
The last assumption you've specified with the question mark is false! Do not make such assumptions.
The rest of the manipulations you've supplied seem to be correct. But they actually bring you nowhere.
You should have finished this exercise in the middle of your draft:
T(n) = O(T(1)^(3^log2(n)))
And that's it. That's the solution!
You could actually claim that
3^log2(n) == n^log2(3) ==~ n^1.585
and then you get:
T(n) = O(T(1)^(n^1.585))
which is somewhat similar to the manipulations you've made in the second part of the draft.
So you can also leave it like this. But you cannot mess with the exponent. Changing the value of the exponent changes the big-O classification.
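For reference, the exponent identity used above can be checked by taking log2 of both sides (a quick sketch):

\log_2\!\left( 3^{\log_2 n} \right) = \log_2 n \cdot \log_2 3 = \log_2\!\left( n^{\log_2 3} \right)
\;\Longrightarrow\; 3^{\log_2 n} = n^{\log_2 3} \approx n^{1.585}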

Big-O proof involving a sum of logs

Prove that the sum from i = 1 to n of ⌈log(n/i)⌉ is O(n).
I put the series into the summation, but I have no idea how to tackle this problem. Any help is appreciated.
There are two useful mathematical facts that can help out here. First, note that ⌈x⌉ ≤ x + 1 for any x. Therefore,
sum from i = 1 to n (⌈log (n/i)⌉) ≤ (sum from i = 1 to n log (n / i)) + n
Therefore, if we can show that the second summation is O(n), we're done.
Using properties of logs, we can rewrite
log(n/1) + log(n/2) + ... + log(n/n)
= log(n^n / n!)
Let's see if we can simplify this. Using properties of logarithms, we get that
log(n^n / n!) = log(n^n) - log(n!)
= n log n - log (n!)
Now, we can use Stirling's approximation, which says that
log (n!) = n log n - n + O(log n)
Therefore:
n log n - log (n!)
= n log n - n log n + n - O(log n)
= O(n)
So the summation is O(n), as required.
Hope this helps!
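Not part of the proof, but if you want a quick empirical check that the sum really is linear in n, something along these lines works (the choice of range is arbitrary; Math.log is the natural logarithm):

// Rough check: the sum of ceil(log(n/i)) for i = 1..n, divided by n, should stay below a constant.
for (int n = 1_000; n <= 1_000_000; n *= 10) {
    double sum = 0;
    for (int i = 1; i <= n; i++) {
        sum += Math.ceil(Math.log((double) n / i));
    }
    System.out.printf("n = %d, sum = %.0f, sum / n = %.3f%n", n, sum, sum / n);
}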

Big Oh with log (n) and exponents

So I have a few given functions and need to find Big Oh for them (which I did):
1) n log(n) = O(n log(n))
2) n^2 = O(n^2)
3) n log(n^2) = O(n log(n))
4) n log(n)^2 = O(n^3)
5) n = O(n)
log is the natural logarithm.
I am pretty sure that 1,2,5 are correct.
For 3 I found a solution somewhere here: n log(n^2) = 2 n log (n) => O (n log n)
But I am completely unsure about 4). n^3 is definitely bigger than n*log(n)^2, but is it the right Big Oh for it? My other guess would be O(n^2).
A few other things:
n^2 * log(n)
n^2 * log(n)^2
What would that be?
Would be great if someone could explain it if it is wrong. Thank you!
Remember that big-O provides an asymptotic upper bound on a function, so any function that is O(n) is also O(n log n), O(n^2), O(n!), etc. Since log n = O(n), we have n log^2(n) = O(n^3). It's also the case that n log^2(n) = O(n log^2(n)) and n log^2(n) = O(n^2). In fact, n log^2(n) = O(n^(1 + ε)) for any ε > 0, since log^k(n) = O(n^ε) for any fixed k and any ε > 0.
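A quick way to see that last fact, log^k(n) = O(n^ε), is to compare the logarithms of both sides (a sketch):

\log\!\left( \log^k n \right) = k \log\log n,
\qquad
\log\!\left( n^{\varepsilon} \right) = \varepsilon \log n,
\qquad
k \log\log n - \varepsilon \log n \;\to\; -\infty
\;\Longrightarrow\;
\frac{\log^k n}{n^{\varepsilon}} \to 0 .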
The functions n^2 log(n) and n^2 log^2(n) can't be simplified in the way that some of the other ones can. Runtimes of the form O(n^k log^r(n)) aren't all that uncommon. In fact, there are many algorithms that have runtime O(n^2 log n) or O(n^2 log^2 n), and these runtimes are often left as such. For example, each iteration of the Karger-Stein algorithm takes time O(n^2 log n), because this runtime comes from the Master Theorem as applied to the recurrence
T(n) = 2T(n / √2) + O(n^2)
Hope this helps!
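For reference, here is roughly how the Master Theorem yields that bound for the recurrence above (a sketch):

a = 2,\quad b = \sqrt{2},\quad n^{\log_b a} = n^{\log_{\sqrt{2}} 2} = n^2,
\qquad f(n) = \Theta(n^2) = \Theta\!\left( n^{\log_b a} \right)
\;\Longrightarrow\; T(n) = \Theta(n^2 \log n).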

Growth of inverse factorial

Consider the inverse factorial function, f(n) = k where k! is the greatest factorial <= n. I've been told that the inverse factorial function is O(log n / log log n). Is it true? Or is it just a really really good approximation to the asymptotic growth? The methods I tried all give things very close to log(n)/log log(n) (either a small factor or a small term in the denominator) but not quite.
Remember that, when we're using O(...), constant factors don't matter, and any term that grows more slowly than another term can be dropped. ~ means "is proportional to."
If k is large, then n = k! ~ k^k, so log n ~ k log k, which gives k ~ log n / log k. Taking logs again, log k ~ log log n - log log k, and since log log k is much smaller than log log n, we can drop that term in the denominator. That leaves k ~ log n / log log n, so k = O(log n / log log n).
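Purely as an illustration (not a proof), a small check like the following compares the inverse factorial with log n / log log n; the ratio converges very slowly, so don't expect the two columns to match closely for moderate n:

// Illustrative only: find the largest k with k! <= n and print it next to log n / log log n.
for (double n : new double[]{1e3, 1e6, 1e9, 1e12, 1e15}) {
    int k = 1;
    double fact = 1;                       // invariant: fact == k!
    while (fact * (k + 1) <= n) {
        k++;
        fact *= k;
    }
    double estimate = Math.log(n) / Math.log(Math.log(n));
    System.out.printf("n = %.0e, inverse factorial = %d, log n / log log n = %.2f%n",
                      n, k, estimate);
}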
Start from Stirling's Approximation for ln(k!) and work backwards from there. Apologies for not working the whole thing out; my brain doesn't seem to be working tonight.
