How to calculate the complexity of a special Mergesort recurrence

I am trying to calculate the complexity of Mergesort.
Standard Mergesort has the recurrence T(n) = T(n/2) + T(n/2) + n,
so it is easy to solve with the Master theorem.
But my question is: how do I solve a Mergesort variant with T(n) = T(2n/3) + T(n/3) + n,
and one with T(n) = T(n-100) + T(100) + n?
Can you help me guys?
Thanks =)

These two examples are textbook examples of solving recurrence equations.
To solve them you need the recursion-tree method.
I know that the answer to the first one is Θ(n log n) and the answer to the second one is Θ(n²). To get a good picture of the recursion-tree method, see Introduction to Algorithms (CLRS).
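If you want to sanity-check those two answers numerically, here is a small Python sketch (the base cases, T(n) = 1 for small n, are assumptions, and integer division stands in for the exact thirds):

from functools import lru_cache
import math

@lru_cache(maxsize=None)
def t1(n):
    # T(n) = T(2n/3) + T(n/3) + n, claimed Theta(n log n)
    if n <= 1:
        return 1
    return t1(2 * n // 3) + t1(n // 3) + n

def t2(n):
    # T(n) = T(n-100) + T(100) + n, claimed Theta(n^2),
    # unrolled iteratively with T(m) = 1 assumed for m <= 100
    val = 1
    for m in reversed(range(n, 100, -100)):
        val += 1 + m  # T(100) contributes 1, plus the linear work m
    return val

for n in (10**3, 10**4, 10**5):
    print(n, t1(n) / (n * math.log(n)), t2(n) / n**2)

Both ratios level off as n grows, consistent with Θ(n log n) and Θ(n²).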

Related

Big-Oh Comparison (mathematically)

I have this exercise to solve:
EX) You have two algorithms. The first one is O(n) in the best case and O(n³) in the worst case; the other one is O(n²) in both cases. Suppose that the best case happens 90% of the time. Which one would you choose?
I know that beyond some n, 0.9·n + 0.1·n³ > n². But how can I prove that mathematically?
Thanks in advance!
To get a basic impression, try solving the inequality 0.9n + 0.1n³ > n².
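A minimal numeric sketch in Python (the 0.9/0.1 weights come straight from the exercise) that locates the crossover point:

def mixed_cost(n):
    # expected cost of the first algorithm: 90% best case, 10% worst case
    return 0.9 * n + 0.1 * n**3

n = 1
while mixed_cost(n) <= n * n:
    n += 1
print(n)  # prints 10: from n = 10 on, the "mostly fast" algorithm loses

Algebraically, 0.9n + 0.1n³ > n² simplifies to n(n-1)(n-9) > 0, which holds for every n > 9.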
Thanks for answering.
To compare the algorithms we should analyze the average (expected) case. That gives the formulas:
a) 0.9n + 0.1n³
b) 0.5n² + 0.5n² = n².
Now we can compare the two. If 0.9n + 0.1n³ > n² for all sufficiently large n, algorithm b is the better choice.
Thanks!

What is the answer for: n! = Θ( )?

How do I find the answer for n! = Θ( )?
Even a Big-O bound would be enough. All the clues I found involve complex math ideas.
What would be the correct approach to tackle this problem? A recursion tree seems like too much work.
The goal is to compare n! with n^(log n).
Θ(n!) is a perfectly fine, valid complexity, so n! = Θ(n!).
As Niklas pointed out, this is actually true for every function; although for something like
6x² + 15x + 2 you could write Θ(6x² + 15x + 2), it would generally be preferred to simply write Θ(x²) instead.
If you want to compare the two functions, simply plotting them on WolframAlpha might be considered sufficient to see that n! grows faster.
To determine the result mathematically, we can take the log of both, giving us log(n!) and log(n^(log n)) = log n · log n = (log n)².
Then, since log(n!) = Θ(n log n), and n log n > (log n)² for all large n, we can conclude that n! grows faster.
The derivation is perhaps non-trivial, and I'm slightly unsure whether it's actually possible, but we're a bit beyond the scope of Stack Overflow already (try the Mathematics site if you want more details).
If you want some sort of "closed form" expression, you can get n! = Ω((sqrt(n/2))^n) and n! = O(n^n). Not sure those are more useful.
To derive them, observe that (n/2)^(n/2) < n! < n^n.
To compare against n^(log n), use limit rules; you may also want to use n = e^(log n).
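To see the comparison numerically, a short Python sketch using the log-gamma function for log(n!):

import math

for n in (10, 100, 1000, 10**6):
    log_fact = math.lgamma(n + 1)   # ln(n!)
    log_pow = math.log(n) ** 2      # ln(n^(ln n)) = (ln n)^2
    print(n, log_fact / log_pow)

The ratio log(n!) / (log n)² grows roughly like n / log n, i.e. without bound, so n! eventually dominates n^(log n).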

Recursive Runtime of T(n-k)

I am trying to find the runtime of the recurrence:
T(n) = T(n-2) + n³.
When I solve it I arrive at T(n) = T(n-2k) + Σ (n-2i)³ for i = 0,...,k-1, which at the base case (k = n/2) becomes Σ (n-2i)³ for i = 0,...,n/2.
Solving that sum I get (1/8)n²(n+2)², which gives a runtime of Θ(n⁴).
However, I think I did something wrong. Does anyone have any ideas?
Why do you think it is wrong? This recurrence is clearly Theta(n^4).
A more detailed solution can be obtained from WolframAlpha (did you know it solves recurrence equations?):
https://www.wolframalpha.com/input/?i=T%28n%29%3DT%28n-2%29%2Bn%5E3
You can also add some base cases, like T(0) = T(1) = 1:
https://www.wolframalpha.com/input/?i=T%28n%29%3DT%28n-2%29%2Bn%5E3%2C+T%281%29%3D1%2C+T%282%29%3D1
and finally an asymptotic plot, showing that it truly behaves like an n^4 function.
Here is an attempt to show your recurrence expanded step by step, with the WolframAlpha engine solving the summation.
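For completeness, here is a short Python sketch that unrolls the recurrence and compares it with the closed form n²(n+2)²/8 (assuming the base case T(0) = 0 and even n):

def t(n):
    # unroll T(m) = T(m-2) + m^3 upward from T(0) = 0
    val = 0
    for m in range(2, n + 1, 2):
        val += m**3
    return val

for n in (10, 100, 1000):
    print(n, t(n), n**2 * (n + 2)**2 // 8, t(n) / n**4)

The first two columns match exactly, and t(n)/n⁴ tends to 1/8, i.e. Θ(n⁴).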

Big-O running time for functions

Find the big-O running time for each of these functions:
T(n) = T(n - 2) + n²
Our Answers: n², n³
T(n) = 3T(n/2) + n
Our Answers: O(n log n), O(n^(log₂ 3))
T(n) = 2T(n/3) + n
Our Answers: O(n log₃ n), O(n)
T(n) = 2T(n/2) + n^3
Our Answers: O(n³ log₂ n), O(n³)
So we're having trouble deciding on the right answers for each of the questions.
We all got different results and would like an outside opinion on what the running time would be.
Thanks in advance.
A bit of clarification:
The functions in the questions appear to be running-time functions, as hinted by their T() name and their n parameter. A more subtle hint is that they are all recursive, and recursive functions are a common occurrence when describing the running time of an algorithm (even when the algorithm itself doesn't formally use recursion). Indeed, recursive formulas are a rather inconvenient form, which is why we use Big O notation to better summarize the behavior of an algorithm.
A running-time function is a parametrized mathematical expression which allows computing a [sometimes approximate] relative value for the running time of an algorithm, given specific value(s) for its parameter(s). As is the case here, running-time functions typically have a single parameter, often named n, corresponding to the total number of items the algorithm is expected to work on (e.g. for a search algorithm it could be the total number of records in a database, for a sort algorithm the number of entries in the unsorted list, and for a path-finding algorithm the number of nodes in the graph). In some cases a running-time function may have multiple arguments; for example, the performance of an algorithm performing some transformation on a graph may depend on both the total number of nodes and the total number of edges, or on the average number of connections between two nodes.
The task at hand (for what appears to be homework, hence my partial answer) is therefore to find a Big O expression that qualifies the upper bound of each of these running-time functions, whatever the underlying algorithms they may correspond to. The task is not to find and qualify an algorithm that produces the results of the functions (that second possibility is also a very common type of exercise in the algorithms classes of a CS curriculum, but it is apparently not what is required here).
The problem is therefore more one of mathematics than of Computer Science per se. Basically, one needs to find the growth rate (or an approximation thereof) of each of these functions as n approaches infinity.
This note from Prof. Jeff Erickson at the University of Illinois Urbana-Champaign provides a good intro to solving recurrences.
Although there are a few shortcuts to solving recurrences, particularly if one has a good command of calculus, a generic approach is to guess the answer and then prove it by induction. Tools like Excel, a few snippets in a programming language such as Python, MATLAB, or Sage can be useful for producing tables of the first few hundred values (or beyond), along with reference values such as n^2, n^3, and n!, as well as ratios between terms; these tables often provide enough insight into the function to find its closed form.
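As an illustration of that table-building approach, here is a Python sketch for the second recurrence in the question, T(n) = 3T(n/2) + n, compared against the two candidate answers (the base case T(1) = 1 is an assumption):

from functools import lru_cache
import math

@lru_cache(maxsize=None)
def t(n):
    # T(n) = 3T(n/2) + n, with an assumed base case T(1) = 1
    if n <= 1:
        return 1
    return 3 * t(n // 2) + n

for k in range(4, 21, 4):
    n = 2**k
    print(n, t(n) / (n * math.log2(n)), t(n) / n**math.log2(3))

The ratio against n log n keeps growing, while the ratio against n^(log₂ 3) settles toward a constant (3, in fact), pointing at T(n) = Θ(n^(log₂ 3)), in agreement with case 1 of the Master theorem.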
A few hints regarding the answers listed in the question:
Function a)
O(n^2) is for sure wrong:
a quick inspection of the first few values in the sequence shows that n^2 is increasingly much smaller than T(n).
O(n^3), on the other hand, appears to be systematically bigger than T(n) as n grows. A closer look shows that O(n^3) is indeed the order of the Big O bound for this function, and that n^3 / 6 is a more precise estimate: the ratio T(n) / (n^3 / 6) tends to 1 as n tends towards infinity, whereas the coarser n^3 estimate overshoots by a factor of 6.
One can confirm the n^3 / 6 estimate by substituting the guess into the recurrence:
T(n) = T(n-2) + n^2          // (1) by definition
T(n) = n^3 / 6               // (2) our "guess"
T(n) = ((n - 2)^3 / 6) + n^2 // by substitution of T(n-2) by the (2) expression
     = (n^3 - 6n^2 + 12n - 8) / 6 + 6n^2 / 6
     = (n^3 + 12n - 8) / 6
     = n^3/6 + 2n - 4/3
    ~= n^3/6                 // as n grows towards infinity, the 2n and 4/3 terms
                             // become relatively insignificant, leaving us with
                             // the (n^3 / 6) limit expression, QED
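The same estimate can be checked empirically with a short Python sketch (base cases T(0) = T(1) = 0 assumed):

def t(n):
    # unroll T(m) = T(m-2) + m^2 upward from the base case
    val = 0
    for m in range(n % 2 + 2, n + 1, 2):
        val += m * m
    return val

for n in (10, 100, 10000):
    print(n, t(n) / (n**3 / 6))

The ratio tends to 1, confirming T(n) ~ n^3/6 and hence Θ(n^3).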

Having issues dealing with T(n) runtime problems

I was going to meet with my TA today but just didn't have the time. I am in an algorithm-analysis class and we started doing recurrence relations, and I'm not 100% sure I'm solving this problem correctly. I get to a point where I'm just stuck and don't know what to do. Maybe I'm doing this wrong, who knows. The question doesn't care about upper or lower bounds; it just wants a Θ.
The problem is this:
T(n) = T(n-1) + cn^(2)
This is what I have so far....
=T(n-2) + (n-1)^(2) + cn^(2)
=T(n-3) + (n-2)^(2) + 2cn^(2)
=T(n-4) + (n-3)^(2) + 3cn^(2)
So, at this point I was going to generalize and substitute k into the equation:
T(n-k) + (n-k+1)^(2) + c(k-1)^(2)
Now I start to bring the base case of 1 into the picture. On a couple of previous, simpler problems, I was able to set my generalized k equation equal to 1, solve for k, and then put k back into the equation to get my final answer.
But I am totally stuck on the (n-k+1)^(2) part. I mean, should I actually FOIL all this out? I did it and got k^(2) - 2kn - 2k + n^(2) + 2n + 1 = 1. At this point I'm thinking I must have done something wrong, since I've never seen this in previous problems.
Could anyone offer me some help with how to solve this one? I would greatly appreciate it.
What you have isn't fully correct, even in the first line of "what I have so far".
Go ahead and do the full substitutions, to see that:
T(n-1) = T(n-2) + c(n-1)^2
so
T(n) = T(n-2) + c(n-1)^2 + c(n)^2
T(n) = T(n-3) + c(n-2)^2 + c(n-1)^2 + c(n)^2
Overall, the running time amounts to summing c(n-i)^2 for each value of i from 0 up to your base case. Hopefully that puts you on the right track.
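To double-check the end result, a small Python sketch (c = 1 and T(0) = 0 are assumptions):

def t(n, c=1):
    # sum the c*i^2 terms from the base case up to n
    val = 0
    for i in range(1, n + 1):
        val += c * i * i
    return val

for n in (10, 1000, 100000):
    print(n, t(n) / n**3)

The ratio approaches c/3, so T(n) = Θ(n^3).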

Resources