I'm trying to order some functions by their growth rate. All logarithms have 2 as their base. These are the functions:
2n+(n log n)
3 log n
(sum from i = 1 to n of i)^2
4^n/n^4
n^(7/8)
2n
10 log n
n (log n)!
√(log 5n)
n^(log n)
I tried plotting them, but I'm still very confused about what the correct order is. Any idea how I should order them? I also tried computing their big-O limits, but some don't come out to 0 or infinity.
2*n + (n log n) ==> O(n log n)
3*log n ==> O(log n)
1 + 2 + 3 + ... = [n(n+1)]/2 ==> O(n^2)
(4^n)/(n^4) ==> O((4^n)/(n^4))
n^(7/8) ==> O(n^(7/8))
2*n ==> O(n)
10*log n ==> O(log n)
n*(log n)! ==> O(n*(log n)!)
sqrt(log 5*n) ==> O(sqrt(n))
n^(log n) ==> O(n^(log n))
Hence:
2=7<9<5<6<3<8<10<4
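One way to carry out the limit test mentioned above without overflowing is to compare the logarithms of two functions instead of the functions themselves. Below is a minimal C sketch of that idea (my own illustration, not from the original post); the pair it compares, n^(log n) against 4^n/n^4, was chosen arbitrarily.

/* Compare two fast-growing functions by their base-2 logarithms,
 * since evaluating them directly overflows doubles almost immediately.
 * log2(n^(log n)) = (log2 n)^2 and log2(4^n / n^4) = 2n - 4*log2 n. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double n;
    for (n = 16; n <= 1e6; n *= 16) {
        double lg = log2(n);
        double log_f = lg * lg;            /* log2 of n^(log n) */
        double log_g = 2.0 * n - 4.0 * lg; /* log2 of 4^n / n^4 */
        printf("n = %8.0f   log2 f = %10.1f   log2 g = %12.1f\n",
               n, log_f, log_g);
    }
    return 0; /* log_g pulls ahead, so 4^n/n^4 eventually dominates */
}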
I'm trying to solve the n rooks problem with tail recursion, since it is faster than standard recursion, but I am having trouble figuring out how to make it all work. I've looked up the theory behind this problem and found that the solution is given by something called "telephone numbers," which are given by the equation:
T(n) = T(n-1) + (n-1) * T(n-2), where T(1) = 1 and T(2) = 2.
I have created a recursive function that evaluates this equation but it only works quickly up to T(40), and I need it to calculate where n > 1000, which currently by my estimates will take days of computation.
Tail recursion seems to be my best option, but I was hoping someone here might know how to program this relation using tail recursion, as I don't really understand it.
I'm working in LISP, but would be open to using any language that supports tail recursion.
Tail recursion is just a loop expressed as recursion.
When you have a recursive definition that "forks", the naive implementation is most likely exponential in time, going from T(n) to T(n-1) and T(n-2), then T(n-2), T(n-3), T(n-3), T(n-4), doubling the computation at each step.
The trick is reversing the computation so that you build up from T(1) and T(2). This just needs constant time at each step so the overall computation is linear.
Start with
(let ((n 2)
      (t-n 2)
      (t-n-1 1))
  …)
Let's use a do loop for updating:
(do ((n 2 (1+ n))
     (t-n 2 (+ t-n (* n t-n-1)))
     (t-n-1 1 t-n))
    …)
Now you just need to stop when you reach your desired n:
(defun telephone-number (x)
  (do ((n 2 (1+ n))
       (t-n 2 (+ t-n (* n t-n-1)))
       (t-n-1 1 t-n))
      ((= n x) t-n)))
To be complete, check your inputs:
(defun telephone-number (x)
  (check-type x (integer 1))
  (if (< x 3)
      x
      (do ((n 2 (1+ n))
           (t-n 2 (+ t-n (* n t-n-1)))
           (t-n-1 1 t-n))
          ((= n x) t-n))))
Also, do write tests and add documentation describing what this is for and how to use it. This code is still untested.
When writing this tail-recursively, you recurse with the new values:
(defun telephone (x)
  (labels ((tel-aux (n t-n t-n-1)
             (if (= n x)
                 t-n
                 (tel-aux (1+ n)
                          (+ t-n (* n t-n-1))
                          t-n))))
    (tel-aux 2 2 1)))
When tail recursion is optimized, this scales like the loop (but the constant factor might differ). Note that Common Lisp does not mandate tail call optimization.
I have two unsigned longs a and q and I would like to find a number n between 0 and q-1 such that n + a is divisible by q (without overflow).
In other words, I'm trying to find a (portable) way of computing (-a)%q which lies between 0 and q-1. (The sign of that expression is implementation-defined in C89.) What's a good way to do this?
What you're looking for is mathematically equivalent to (q - a) mod q, which in turn is equivalent to (q - (a mod q)) mod q. I think you should therefore be able to compute this as follows:
unsigned long result = (q - (a % q)) % q;
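As a quick sanity check of that identity, a tiny test program can confirm that the result always lands in [0, q-1] and that a + n comes out divisible by q. The test values below are arbitrary ones picked only for illustration:

/* Sanity check of (q - (a % q)) % q with a few arbitrary test values. */
#include <stdio.h>

int main(void)
{
    unsigned long as[] = {0, 1, 7, 96, 123456789};
    unsigned long qs[] = {1, 3, 7, 97, 100003};
    int i, j;

    for (i = 0; i < 5; i++) {
        for (j = 0; j < 5; j++) {
            unsigned long a = as[i], q = qs[j];
            unsigned long n = (q - (a % q)) % q;  /* always in [0, q-1] */
            printf("a=%lu q=%lu n=%lu (a+n)%%q=%lu\n",
                   a, q, n, (a + n) % q);         /* last field should be 0 */
        }
    }
    return 0;
}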
Prove that the sum from i = 1 to n of ⌈log(n/i)⌉ is O(n).
I put the series into the summation, but I have no idea how to tackle this problem. Any help is appreciated.
There are two useful mathematical facts that can help out here. First, note that ⌈x⌉ ≤ x + 1 for any x. Therefore,
sum from i = 1 to n (⌈log (n/i)⌉) ≤ (sum from i = 1 to n log (n / i)) + n
Therefore, if we can show that the second summation is O(n), we're done.
Using properties of logs, we can rewrite
log(n/1) + log(n/2) + ... + log(n/n)
= log(n^n / n!)
Let's see if we can simplify this. Using properties of logarithms, we get that
log(n^n / n!) = log(n^n) - log(n!)
= n log n - log (n!)
Now, we can use Stirling's approximation, which says that
log (n!) = n log n - n + O(log n)
Therefore:
n log n - log (n!)
= n log n - n log n + n - O(log n)
= O(n)
So the summation is O(n), as required.
Hope this helps!
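As a rough numerical check of the bound proved above (my own addition, using base-2 logarithms and floating point, so it is only indicative), the ratio of the sum to n should stay below a small constant as n grows:

/* Compute sum_{i=1..n} ceil(log2(n/i)) and report sum/n; the ratio
 * should stay bounded by a small constant, consistent with O(n). */
#include <stdio.h>
#include <math.h>

int main(void)
{
    long n;
    for (n = 1000; n <= 10000000L; n *= 10) {
        double sum = 0.0;
        long i;
        for (i = 1; i <= n; i++)
            sum += ceil(log2((double)n / (double)i));
        printf("n = %8ld   sum = %12.0f   sum/n = %.3f\n", n, sum, sum / n);
    }
    return 0;
}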
So I have a few given functions and need to find the Big-O for them (which I did).
n log(n) = O(n log(n))
n^2 = O(n^2)
n log(n^2) = O(n log(n))
n log(n)^2 = O(n^3)
n = O(n)
log is the natural logarithm.
I am pretty sure that 1,2,5 are correct.
For 3 I found a solution somewhere here: n log(n^2) = 2 n log (n) => O (n log n)
But I am completely unsure about 4). n^3 is definitely bigger than n*log(n)^2, but is it the Big-O of it? My other guess would be O(n^2).
A few other things:
n^2 * log(n)
n^2 * log(n)^2
What would those be?
Would be great if someone could explain it if it is wrong. Thank you!
Remember that big-O provides an asymptotic upper bound on a function, so any function that is O(n) is also O(n log n), O(n^2), O(n!), etc. Since log n = O(n), we have n log^2 n = O(n^3). It's also the case that n log^2 n = O(n log^2 n) and n log^2 n = O(n^2). In fact, n log^2 n = O(n^(1 + ε)) for any ε > 0, since log^k n = O(n^ε) for any ε > 0.
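To make that last claim concrete, here is a small check for one particular choice of ε (ε = 1/2, picked only for illustration, not part of the original answer): the ratio (log n)^2 / n^(1/2) heads toward 0 as n grows, as log^k n = O(n^ε) predicts.

/* For eps = 0.5: (log2 n)^2 / sqrt(n) shrinks toward 0 as n grows. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double n;
    for (n = 1e3; n <= 1e15; n *= 1e3) {
        double lg = log2(n);
        printf("n = %.0e   (log2 n)^2 / sqrt(n) = %.6f\n",
               n, lg * lg / sqrt(n));
    }
    return 0;
}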
The functions n^2 log n and n^2 log^2 n can't be simplified in the way that some of the other ones can. Runtimes of the form O(n^k log^r n) aren't all that uncommon. In fact, there are many algorithms that have runtime O(n^2 log n) and O(n^2 log^2 n), and these runtimes are often left as such. For example, each iteration of the Karger-Stein algorithm takes time O(n^2 log n) because this runtime comes from the Master Theorem as applied to the recurrence
T(n) = 2T(n / √2) + O(n^2)
Here a = 2 and b = √2, so log_b a = 2; since the O(n^2) work term matches n^(log_b a), the Master Theorem gives Θ(n^2 log n) per iteration.
Hope this helps!
Consider the inverse factorial function, f(n) = k where k! is the greatest factorial <= n. I've been told that the inverse factorial function is O(log n / log log n). Is it true? Or is it just a really really good approximation to the asymptotic growth? The methods I tried all give things very close to log(n)/log log(n) (either a small factor or a small term in the denominator) but not quite.
Remember that, when we're using O(...), constant factors don't matter, and any term that grows more slowly than another term can be dropped. ~ means "is proportional to."
If k is large, then n = k! is roughly k^k, so log n ~ k log k. Then k ~ log n / log k ~ log n / log(log n / log k) = log n / (log log n - log log k). Because log log k is of lower order than log log n, we can drop it from the denominator, and we get k ~ log n / log log n, so k = O(log n / log log n).
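A rough numerical sketch of this (my own check, using lgamma to get ln(k!) without computing k! itself): for n = k!, the ratio of k to log n / log log n creeps toward 1, though very slowly, which also fits the question's observation that the estimates come out close "but not quite."

/* For n = k!, compare k with ln(n)/ln(ln(n)). lgamma(k+1) = ln(k!)
 * avoids overflow. Convergence of the ratio toward 1 is very slow. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    long k;
    for (k = 10; k <= 1000000L; k *= 10) {
        double ln_n   = lgamma((double)k + 1.0); /* ln(k!)            */
        double approx = ln_n / log(ln_n);        /* log n / log log n */
        printf("k = %7ld   log n / log log n = %12.1f   k / approx = %.3f\n",
               k, approx, (double)k / approx);
    }
    return 0;
}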
Start from Stirling's Approximation for ln(k!) and work backwards from there. Apologies for not working the whole thing out; my brain doesn't seem to be working tonight.