Big-Oh Comparison (mathematically)

I have this exercise to solve:
EX) You have two algorithms. The first one is O(n) in the best case and O(n^3) in the worst case; the other one is O(n^2) in both cases. Suppose that the best case happens 90% of the time. Which one would you choose?
I know that, from some n on, 90%*O(n) + 10%*O(n^3) > 100%*O(n^2). But how can I prove that mathematically?
Thanks in advance!

To get a basic impression, try solving the inequality 0.9n + 0.1n^3 > n^2.
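For completeness, here is how that inequality can be settled by hand (a sketch; just algebra on the expression above):

```latex
\begin{aligned}
0.9n + 0.1n^3 > n^2
&\iff n^3 - 10n^2 + 9n > 0 && \text{(multiply by 10, rearrange)} \\
&\iff n(n-1)(n-9) > 0 && \text{(factor the cubic)}
\end{aligned}
```

All three factors are positive for every n > 9, so for all n >= 10 the expected cost of the first algorithm exceeds that of the second, and you would pick the O(n^2) one.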

Thanks for answering.
To compare the algorithms we should analyze the average case. So we can write expected-cost formulas for both, that is:
a) 0.9n + 0.1n^3
b) 0.9n^2 + 0.1n^2 = n^2 (the weights do not matter here, since both cases cost n^2).
Now we can compare the two. If 0.9n + 0.1n^3 > n^2 for all sufficiently large n, algorithm b is asymptotically better.
Thanks!


Trying to simplify expression with factorials

First off, apologies if there is a better way to format math equations; I could not find one, but alas, the expressions are pretty short.
As part of an assigned problem I have to produce some code in C that will evaluate x^n/n! for an arbitrary x, and n = {1-10, 50, 100}.
I can always brute force it with a large-number library, but I am wondering if someone with better math skills than mine can suggest a better algorithm than something with O(n!)...
I understand that I can split the numerator into x^(n/2) * x^(n/2) for even values of n, and x * x^((n-1)/2) * x^((n-1)/2) for odd values of n. And that I can further change that into a logarithm, base x, of n/2.
But I am stuck for multiple reasons:
1 - I do not think that, computationally, any of these changes actually makes much of a difference, since they are not really helping me reduce the large-number multiplications I have to perform, or their overall number.
2 - Even when I think of n! as 1*2*3*...*(n-1)*n, I still cannot see a good way to simplify the overall expression.
3 - I have looked at Karatsuba's algorithm for multiplication, and although it is a possibility, it seems a bit complex for an intro-to-programming problem.
So I am wondering if you guys can think of any middle ground. I prefer explanations to straight answers if you have the time :)
Cheers,
My advice is to compute all the terms of the summation (put them in an array), and then sum them up in reverse order (i.e., smallest to largest) -- that reduces rounding error a little bit.
Note that you can compute the k-th term from the preceding one by multiplying by x/k -- you do not need to ever compute x^n or n! directly (this is important).
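A minimal C sketch of that idea (assuming the summation in question is the terms x^k/k! summed for k = 0..n, as in the exponential series; the function name and the use of double are my choices, not part of the original problem):

```c
#include <stdio.h>

/* Sum the terms x^k / k! for k = 0..n without ever forming x^n or n!:
   each term is obtained from the previous one by multiplying by x/k. */
double sum_terms(double x, int n) {
    double terms[n + 1];                 /* C99 variable-length array */
    terms[0] = 1.0;                      /* x^0 / 0! = 1 */
    for (int k = 1; k <= n; k++)
        terms[k] = terms[k - 1] * x / k; /* t_k = t_(k-1) * x / k */

    double sum = 0.0;
    for (int k = n; k >= 0; k--)         /* reverse order: smallest terms first */
        sum += terms[k];
    return sum;
}

int main(void) {
    /* For large n this approaches e^x, e.g. e^2 ~ 7.389056... */
    printf("%.10f\n", sum_terms(2.0, 100));
    return 0;
}
```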

What is the answer for: n! = Θ( )?

How do I find the answer for: n! = Θ( )?
Even a Big-O bound is enough. All the clues I found are complex math ideas.
What would be the correct approach to tackle this problem? A recursion tree seems like too much work.
The goal is to compare n! and n^(log n).
Θ(n!) is a perfectly fine, valid complexity, so n! = Θ(n!).
As Niklas pointed out, this is actually true for every function. For something like
6x² + 15x + 2 you could write Θ(6x² + 15x + 2), but it would generally be preferred to simply write Θ(x²) instead.
If you want to compare the two functions, simply plotting them on WolframAlpha might be considered sufficient to see that n! grows faster.
To determine the result mathematically, we can take the log of both, giving us log(n!) and log(n^(log n)) = log n * log n = (log n)^2.
Then, since log(n!) = Θ(n log n), and n log n > (log n)^2 for all large n, we can conclude that n! grows faster.
The derivation is perhaps non-trivial, and I'm slightly unsure whether it's actually possible, but we're a bit beyond the scope of Stack Overflow already (try the Mathematics site if you want more details).
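For the step log(n!) = Θ(n log n), here is a short self-contained argument (a sketch; Stirling's approximation gives the same bounds):

```latex
\log(n!) = \sum_{k=1}^{n} \log k \;\le\; n \log n,
\qquad
\log(n!) \;\ge\; \sum_{k=n/2}^{n} \log k \;\ge\; \frac{n}{2} \log \frac{n}{2},
```

so log(n!) is squeezed between (n/2) log(n/2) and n log n, both of which are Θ(n log n).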
If you want some sort of "closed form" expressions, you can get n! = Ω((sqrt(n/2))^n) and n! = O(n^n). Not sure those are more useful.
To derive them, note that (n/2)^(n/2) < n! < n^n.
To compare against n^(log n), use limit rules; you may also want to use n = e^(log n).
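Using n = e^(log n), a sketch of that comparison:

```latex
n^{\log n} = e^{(\log n)^2},
\qquad
n! \;\ge\; \left(\frac{n}{2}\right)^{n/2} = e^{\frac{n}{2} \log \frac{n}{2}},
```

and since (n/2) log(n/2) eventually dwarfs (log n)^2, the ratio n^(log n) / n! tends to 0, i.e. n! dominates.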

Recursive Runtime of T(n-k)

I am trying to find the runtime for the recurrence
T(n) = T(n-2) + n³.
When I solve it I arrive at T(n) = T(n-2k) + Σ_{i=0}^{k-1} (n-2i)³, which for k = n/2 leaves the summation Σ_{k=0}^{n/2} (n-2k)³.
Solving that sum I get (1/8)n²(n+2)². From this I would get the runtime to be Θ(n⁴).
However, I think I did something wrong, does anyone have any ideas?
Why do you think that it is wrong? This recurrence is clearly Θ(n⁴).
A more detailed solution can be obtained from WolframAlpha (did you know it solves recurrence equations?):
https://www.wolframalpha.com/input/?i=T%28n%29%3DT%28n-2%29%2Bn%5E3
You can also add some base cases, like T(1) = T(2) = 1:
https://www.wolframalpha.com/input/?i=T%28n%29%3DT%28n-2%29%2Bn%5E3%2C+T%281%29%3D1%2C+T%282%29%3D1
and finally an asymptotic plot, showing that it truly behaves like an n⁴ function.
Here is an attempt to show your recurrence with steps, with the WolframAlpha engine solving the summation.
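For reference, a hand derivation of those steps (a sketch, assuming n is even and T(0) = Θ(1)):

```latex
\begin{aligned}
T(n) &= T(n-2) + n^3 = T(n-4) + (n-2)^3 + n^3 = \cdots \\
     &= T(0) + \sum_{j=1}^{n/2} (2j)^3
      = T(0) + 8 \sum_{j=1}^{n/2} j^3 \\
     &= T(0) + 8 \cdot \frac{(n/2)^2 (n/2+1)^2}{4}
      = T(0) + \frac{n^2 (n+2)^2}{8} \;=\; \Theta(n^4).
\end{aligned}
```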

What is the difference between approaches to "coin change" and "number of ways of climbing a staircase"

I have come across two dynamic programming problems. One of the problems is:
What is the number of possible ways to climb a staircase with n steps, given that I can hop either 1, 2, or 3 steps at a time?
The Dynamic programming approach for solving this problem goes as follows.
If C(n) is number of ways of climbing the staircase, then
C(n) = C(n-1) + C(n-2) + C(n-3) .
This is because, if we reach n-1 stairs, we can hop to n with a 1-step hop; or
if we reach n-2 stairs, we can hop to n with a 2-step hop; or
if we reach n-3 stairs, we can hop to n with a 3-step hop.
Just as I thought I understood the above approach, I came across the coin change problem, which is:
What is the number of ways of representing n cents, given an infinite number of 25-cent coins (quarters), 10-cent coins (dimes), 5-cent coins (nickels), and 1-cent coins (pennies)?
It turns out the solution to this problem is not similar to the one above, and is a bit more complex. That is,
C(n) = C(n-1) + C(n-5) + C(n-10) + C(n-25) is not true. I am still trying to understand the approach for solving this problem. But my question is: how is the coin change problem different from the much simpler climbing-steps problem?
In the steps problem, the order matters: (1,2) is not the same as (2,1). With the coin problem, only the number of each type of coin used matters.
Scott's solution is absolutely correct, and he mentions the crux of the difference between the two problems. Here's a little more color on the two problems to help intuitively understand the difference.
For Dynamic Programming problems that involve recursion, the trick is to get the subproblem right. Once the subproblem is correct, it is just a matter of building on top of that.
The Staircase problem deals with sequences so the subproblem is easier to see intuitively. For the Coin-change problem, we are dealing with counts so the subproblem is around whether or not to use a particular denomination. We compute one part of the solution using a denomination, and another without using it. That is a slightly more difficult insight to see, but once you see that, you can recursively compute the rest.
So here's one way to think about the two problems:
Staircase Sequence
Introduce one new step: the Nth step has been added. How do we compute S[N]?
S[N] = S[N-1] + S[N-2] + S[N-3]
Coin Change
Introduce a new small coin denomination. Let's say a coin of denomination 'm' has newly been introduced.
How do we now compute C[N], knowing C[N, with all coins except m]?
All the ways to reach N without coin m still hold. But the new denomination 'm' also opens up new ways to get to N: to count the solutions that use m, take one coin m and count the ways to make up the remaining N-m (still allowing m), which recursively covers using two coins m, three coins m, and so on.
C[N, with m] = C[N, without m] + C[N-m, with m]
Hope that helps.
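To make the contrast concrete, here is a minimal C sketch of both recurrences side by side (the function names and the small test values are mine, not from the original posts):

```c
#include <stdio.h>

/* Staircase: order matters, so index only by the number of steps left.
   S[i] = S[i-1] + S[i-2] + S[i-3]. */
long staircase_ways(int n) {
    long S[n + 1];
    for (int i = 0; i <= n; i++) {
        S[i] = (i == 0);                 /* one way to climb zero steps */
        for (int hop = 1; hop <= 3 && hop <= i; hop++)
            S[i] += S[i - hop];
    }
    return S[n];
}

/* Coin change: order does not matter, so introduce one denomination at
   a time -- C[N, with m] = C[N, without m] + C[N-m, with m]. */
long coin_ways(int n) {
    int coins[] = {1, 5, 10, 25};
    long C[n + 1];
    C[0] = 1;                            /* one way to make 0 cents */
    for (int i = 1; i <= n; i++) C[i] = 0;
    for (int c = 0; c < 4; c++)          /* the outer loop is the key difference */
        for (int i = coins[c]; i <= n; i++)
            C[i] += C[i - coins[c]];     /* add the ways that use coin c */
    return C[n];
}

int main(void) {
    printf("staircase(4) = %ld\n", staircase_ways(4)); /* 7: order counts */
    printf("coins(12)    = %ld\n", coin_ways(12));     /* 4: order ignored */
    return 0;
}
```

Swapping the two loops in coin_ways (amounts outside, denominations inside) would count ordered sequences again and give the wrong answer; that is exactly the staircase recurrence in disguise.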

How to calculate the complexity of a special Mergesort

I am trying to calculate the complexity of a Mergesort variant.
Standard Mergesort has the recurrence T(n) = T(n/2) + T(n/2) + n,
so it is easy to solve with the Master theorem.
But my question is: how do I solve a Mergesort with T(n) = T(2n/3) + T(n/3) + n,
and with T(n) = T(n-100) + T(100)?
Can you help me, guys?
Thanks =)
These two examples are textbook examples of solving recurrence equations.
To solve them you need the recursion-tree method.
I know that the answer to the first one is Θ(n log n), and the answer to the second one is Θ(n²) (that assumes the merge cost n is included, i.e. T(n) = T(n-100) + T(100) + n; without the +n term it would only be Θ(n)). To find the solutions, you can get a pretty good perspective on the recursion tree from Introduction to Algorithms (CLRS).
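A sketch of the recursion-tree arithmetic for both (assuming, for the second recurrence, that the merge cost n is included):

```latex
\begin{aligned}
&T(n) = T(2n/3) + T(n/3) + n: \\
&\quad \text{every full level of the tree sums to } n,
 \text{ and the depth lies between } \log_3 n \text{ and } \log_{3/2} n, \\
&\quad \text{so } T(n) = \Theta(n \log n). \\[1ex]
&T(n) = T(n-100) + T(100) + n: \\
&\quad \text{the tree degenerates into a chain of length } n/100, \\
&\quad \text{so } T(n) \approx \sum_{k=0}^{n/100} (n - 100k) \approx \frac{n^2}{200} = \Theta(n^2).
\end{aligned}
```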
