Big-O running time of recursive function - recursion

What is the Big-O running time of the following pseudocode:
rec(int N) {
    if (N < 0) return;
    for i = 1:N {
        for j = 1:N {
            print("waste time");
        }
        rec(N-1);
    }
}
If my understanding is correct, the precise running time of this code would be
N^2 * 1 + (N-1)^2 * N + (N-2)^2 * N * (N-1) + ... + N!
Or equivalently, the sum of
(N-k)^2 * NPk from k = 0 to k = N-1
Would the Big O runtime still be O(N!)? What if we nested the "waste time" loop even more? What if we replaced the "waste time" loop with something that takes 2^(N-k) time instead of (N-k)^2 time?
My guess is that the answer to all of these questions is still O(N!) because the last few terms of the series dominate. Please correct me if I'm wrong.

You're right: in all of the scenarios you described, it would still be O(n!), because that's what dominates the series - the factorial grows so much quicker than the other factors that it quickly becomes the main bottleneck in the algorithm's running time.
Unless you replace the "waste time" loop with something worse than O(n!) (for example, something O(n^n)), it'll always be O(n!).
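As a quick sanity check (my own sketch in Python, not part of the original answer), you can count the "waste time" iterations directly from the recurrence T(N) = N^2 + N*T(N-1) implied by the pseudocode and compare the total with N!:
from math import factorial
def work(N):
    # T(N) = N^2 + N * T(N-1): the double loop prints N^2 times,
    # and rec(N-1) is invoked once per outer-loop iteration, i.e. N times.
    if N <= 0:
        return 0
    return N * N + N * work(N - 1)
for N in range(1, 11):
    print(N, work(N), work(N) / factorial(N))
The ratio work(N) / N! settles around a constant (roughly 5.44), which is consistent with the total work being O(N!) and no better.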

Related

How to do Harmonic Sum in O(lg n)?

I read recently in Steven Skiena's "The Algorithm Design Manual" that the harmonic sum is O(lg n). I understand how this is possible mathematically, but when I saw the recursive solution to it, which is the following:
public static double harmonic(int n) {
    if (n == 1) {
        return 1.0;
    } else {
        return (1.0 / n) + harmonic(n - 1);
    }
}
I cannot see how this is O(lg n); I see it as O(n). Is the code above really O(lg n)? If so, how? If not, can someone refer me to an example where this is the case?
I believe the confusion here is that big-O notation can be used not only for running times, but for the growth of functions in general.
As kaya3 comments, the return value of the function above is O(lg n), which means it grows asymptotically as fast as lg n as you increase n. This is quite likely what the book is claiming. It is a mathematical property of the harmonic series.
The running time of the function, however, is O(n). This is what you are claiming, and it is perfectly true (assuming constant-time arithmetic).
If you are still confused about how big-O notation can be used for purposes other than the running time of algorithms, I recommend you check its definition.
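To make the distinction concrete, here is a small Python sketch of my own (the call counter is an addition, not part of the question's code) showing that the returned value tracks ln n while the number of recursive calls, i.e. the running time, tracks n:
import math
def harmonic(n, counter):
    counter[0] += 1                      # one call = one unit of running time
    if n == 1:
        return 1.0
    return (1.0 / n) + harmonic(n - 1, counter)
for n in (10, 100, 500):
    calls = [0]
    value = harmonic(n, calls)
    print(n, round(value, 3), round(math.log(n), 3), calls[0])
The value stays within a small constant of ln n (O(lg n) growth), while the call count is exactly n (O(n) time).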

Dictionary and factorial of large numbers

For each of n queries, I am given a number x and have to print its factorial modulo 1000000007.
def fact_eff(n, d):
    if n in d:
        return d[n]
    else:
        ans = n * fact_eff(n - 1, d)
        d[n] = ans
        return ans

d = {0: 1}
n = int(input())
while n != 0:
    x = int(input())
    print(fact_eff(x, d) % 1000000007)
    n = n - 1
The problem is that x can be as large as 100000, and I receive a runtime error for values greater than 3000 because the maximum recursion depth is exceeded. Am I missing something with the modulus operator?
Why would you use recursion in the first place to compute a simple factorial? You can check the dictionary in a loop. Or better, start at the highest valid memoized position and go higher from there, creating new entries as you go.
To save space, maybe only record n! every 32 iterations or something, so future calls need at most 31 multiplies. Still O(1) but trading some computation for huge space savings.
Also, does it work to apply the modulus before you get the final huge product? Like every few multiply steps to keep the numbers small? Or every single step if that keeps the numbers small enough for CPython's single-limb fast path. I think (x * y) % n = ((x%n) * y) % n. (But I didn't double-check that.)
If so, you could combine early modulo with sparse memoization to memoize the final modulo-reduced result.
(For numbers above 2^30, Python BigInteger multiply cost should scale with number of 2^30 chunks required to represent the number. Fortunately one of the multiplicands is always small, being the counter. Keeping the product small buys speed, but division is expensive so it's a tradeoff. And doing any more operations costs Python interpreter overhead which may simply dominate anyway until numbers get really huge.)
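A minimal sketch (mine, not the original poster's code) of the loop-based approach described above, reusing the question's function name but leaving out the sparse-memoization refinement, and applying the modulo at every step since (x * y) % n == ((x % n) * y) % n:
MOD = 1000000007
fact_mod = [1]          # fact_mod[k] == k! % MOD for every k filled in so far
def fact_eff(x):
    # Extend the table from the highest value memoized so far;
    # reducing mod MOD at each step keeps every intermediate product small.
    while len(fact_mod) <= x:
        fact_mod.append(fact_mod[-1] * len(fact_mod) % MOD)
    return fact_mod[x]
n = int(input())
for _ in range(n):
    print(fact_eff(int(input())))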

Recursion Time Complexity Definition Confusion

The time complexity of a recursive algorithm is said to be:
Given a recursion algorithm, its time complexity O(T) is typically the product of the number of recursion invocations (denoted as R) and the time complexity of calculation (denoted as O(s)) that incurs along with each recursion:
O(T) = R * O(s)
Looking at a recursive function:
void algo(n) {
    if (n == 0) return;        // base case, just to avoid a stack overflow
    for (i = 0; i < n; i++);   // do O(n) work
    algo(n/2);
}
According to the definition above, I may say that R is log n (the number of calls) and O(s) is n, so the result should be O(n log n), whereas mathematical induction proves that the result is O(n).
Please do not prove it by induction. I am asking why the given definition does not work with my approach.
Great question! This hits at two different ways of accounting for the amount of work that's done in a recursive call chain.
The original strategy that you described for computing the amount of work done in a recursive call - multiply the work done per call by the number of calls - has an implicit assumption buried within it. Namely, this assumes that every recursive call does the same amount of work. If that is indeed the case, then you can determine the total work done as the product of the number of calls and the work per call.
However, this strategy doesn't usually work if the amount of work done per call varies as a function of the arguments to the call. After all, we can't talk about multiplying "the" amount of work done by a call by the number of calls if there isn't a single value representing how much work is done!
A more general strategy for determining how much work is done by a recursive call chain is to add up the amount of work done by each individual recursive call. In the case of the function that you've outlined above, the work done by the first call is n. The second call does n/2 work, because the amount of work it does is linear in its argument. The third call does n/4 work, the fourth n/8 work, etc. This means that the total work done is bounded by
n + n/2 + n/4 + n/8 + n/16 + ...
= n(1 + 1/2 + 1/4 + 1/8 + 1/16 + ...)
≤ 2n,
which is where the tighter O(n) bound comes from.
As a note, the idea of "add up all the work done by all the calls" is completely equivalent to "multiply the amount of work done per call by the number of calls" in the specific case where the amount of work done by each call is the same. Do you see why?
Alternatively, if you're okay getting a conservative upper bound on the amount of work done by a recursive call chain, you can multiply the number of calls by the maximum work done by any one call. That will never underestimate the total, but it won't always give you the right bound. That's what's happening here in the example you've listed - each call does at most n work, and there are O(log n) calls, so the total work is indeed O(n log n). That just doesn't happen to be a tight bound.
A quick note - I don't think it would be appropriate to call the strategy of multiplying the work done per call by the number of calls the "definition" of the amount of work done by a recursive call chain. As mentioned above, that's more of a "strategy for determining the work done" than a formal definition. If anything, I'd argue that the correct formal definition would be "the sum of the amounts of work done by each individual recursive call," since that more accurately accounts for how much total time will be spent.
Hope this helps!
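To see the difference numerically, here is a small sketch of my own (not part of the answer above) that adds up the work done by each call in the chain algo(n), algo(n/2), algo(n/4), ...:
import math
def total_work(n):
    work = 0
    while n > 0:
        work += n          # the for loop in algo(n) runs n times
        n //= 2            # the next call is algo(n/2)
    return work
for n in (16, 1024, 10**6):
    print(n, total_work(n), 2 * n, round(n * math.log2(n)))
total_work(n) never exceeds 2n, which is well below the looser n log n estimate.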
I think you are looking for the master theorem, which is what is typically used to prove the time complexity of recursive algorithms.
https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)
Also, you usually can't determine an algorithm's runtime just by looking at it, especially for recursive ones. That's why your quick analysis differs from the proof by induction.

Cannot understand recursive doubling in this context

I am struggling with a recursive doubling problem I was assigned. I understand that recursive doubling breaks up a bigger problem into smaller sub-problems so that the computation may be parallelized, but I don't think it is doable for this question.
Exercise 1.4. The operation
for (i) {
    x[i+1] = a[i]*x[i] + b[i];
}
cannot be handled by a pipeline because there is a dependency between
input of one iteration of the operation and the output of the
previous. However, you can transform the loop into one that is
mathematically equivalent, and potentially more efficient to compute.
Derive an expression that computes x[i+2] from x[i] without involving
x[i+1]. This is known as recursive doubling. Assume you have plenty of
temporary storage. You can now perform the calculation by
• Doing some preliminary calculations;
• Computing x[i],x[i+2],x[i+4],..., and from these,
• Compute the missing terms x[i+1],x[i+3],....
Analyze the efficiency of this scheme by giving formulas for T0(n) and
Ts(n). Can you think of an argument why the preliminary calculations
may be of lesser importance in some circumstances?
So I understand the expression for x[2] would be: x[2] = a[1]*(a[0]*x[0] + b[0]) + b[1],
but what I do not understand is A. how this relates to recursive doubling ... and B. how this would achieve any speedup if the result of the previous calculation is still needed.
The central concept is that once you can compute x[i+2] in terms of x[i] and the a and b coefficients, you can then split the work into two threads:
Start with x[0] and compute the even-numbered terms.
Compute x[1] from x[0], then compute the odd-numbered terms.
In fact, if you have good insight into your parallelization overhead, you can generate a Fibonacci tree of processes, a new one starting each time a previous thread gets going nicely.
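Here is a rough sketch of that scheme (my own Python, sequential rather than actually threaded, and assuming plenty of temporary storage). The folded coefficients A[i] = a[i+1]*a[i] and B[i] = a[i+1]*b[i] + b[i+1] are the "preliminary calculations"; the two stride-2 loops are the independent chains that could run in parallel:
def recursive_doubling(a, b, x0):
    n = len(a)                       # produces x[0..n]
    x = [0.0] * (n + 1)
    x[0] = x0
    # Preliminary calculations:
    # x[i+2] = a[i+1]*(a[i]*x[i] + b[i]) + b[i+1] = A[i]*x[i] + B[i]
    A = [a[i + 1] * a[i] for i in range(n - 1)]
    B = [a[i + 1] * b[i] + b[i + 1] for i in range(n - 1)]
    # Even-indexed chain (independent of the odd one; could be its own thread).
    for i in range(0, n - 1, 2):
        x[i + 2] = A[i] * x[i] + B[i]
    # Odd-indexed chain: seed x[1] from x[0], then march with stride 2.
    x[1] = a[0] * x[0] + b[0]
    for i in range(1, n - 1, 2):
        x[i + 2] = A[i] * x[i] + B[i]
    return x
# Quick check against the original sequential loop:
a = [2.0, 3.0, 0.5, 1.5, 4.0]
b = [1.0, -1.0, 2.0, 0.0, 3.0]
reference = [1.0]
for i in range(len(a)):
    reference.append(a[i] * reference[i] + b[i])
print(recursive_doubling(a, b, 1.0) == reference)    # True for these values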

Big-O running time for functions

Find the big-O running time for each of these functions:
T(n) = T(n - 2) + n²
Our Answers: O(n²), O(n³)
T(n) = 3T(n/2) + n
Our Answers: O(n log n), O(n^(log₂ 3))
T(n) = 2T(n/3) + n
Our Answers: O(n log₃ n), O(n)
T(n) = 2T(n/2) + n³
Our Answers: O(n³ log₂ n), O(n³)
So we're having trouble deciding on the right answers for each of the questions.
We all got different results and would like an outside opinion on what the running time would be.
Thanks in advance.
A bit of clarification:
The functions in the questions appear to be running time functions, as hinted by their T() name and their n parameter. A more subtle hint is the fact that they are all recursive, and recursive functions are, alas, a common occurrence when one produces a function to describe the running time of an algorithm (even when the algorithm itself doesn't formally use recursion). Indeed, recursive formulas are a rather inconvenient form, which is why we use Big O notation to better summarize the behavior of an algorithm.
A running time function is a parametrized mathematical expression which allows computing a [sometimes approximate] relative value for the running time of an algorithm, given specific value(s) for the parameter(s). As is the case here, running time functions typically have a single parameter, often named n, corresponding to the total number of items the algorithm is expected to work on (e.g. with a search algorithm it could be the total number of records in a database, with a sort algorithm the number of entries in the unsorted list, and with a path-finding algorithm the number of nodes in the graph). In some cases a running time function may have multiple arguments; for example, the performance of an algorithm performing some transformation on a graph may depend on both the total number of nodes and the total number of edges, or on the average number of connections between two nodes, etc.
The task at hand (for what appears to be homework, hence my partial answer) is therefore to find a Big O expression that qualifies the upper bound of each of these running time functions, whatever the underlying algorithm they may correspond to. The task is not that of finding and qualifying an algorithm to produce the results of the functions (that second possibility is also a very common type of exercise in the algorithms classes of a CS curriculum, but is apparently not what is required here).
The problem is therefore more one of mathematics than of computer science per se. Basically, one needs to find the asymptotic growth rate of each of these functions as n approaches infinity.
This note from Prof. Jeff Erickson at the University of Illinois Urbana-Champaign provides a good intro to solving recurrences.
Although there are a few shortcuts to solving recurrences, particularly if one has a good command of calculus, a generic approach is to guess the answer and then to prove it by induction. Tools like Excel, or a few snippets in a programming language such as Python, MATLAB, or Sage, can be useful to produce tables of the first few hundred values (or beyond), along with values such as n^2, n^3, and n!, as well as ratios of the terms of the function; these tables often provide enough insight into the function to find its closed form.
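For instance, here is what such a table might look like (a sketch of my own, not from the original post), applied to the second recurrence and assuming n is a power of two with a base case of T(1) = 1:
import math
def T(n):
    # T(n) = 3T(n/2) + n; the single recursive call's result is simply multiplied by 3
    return 1 if n <= 1 else 3 * T(n // 2) + n
for k in (4, 8, 12, 16, 20):
    n = 2 ** k
    print(n, T(n), round(T(n) / n ** math.log2(3), 3))
The ratio T(n) / n^(log₂ 3) levels off at a constant (around 3), which points to O(n^(log₂ 3)) being the tight answer rather than O(n log n).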
A few hints regarding the answers listed in the question:
Function a)
O(n^2) is for sure wrong:
a quick inspection of the first few values in the sequence shows that n^2 is increasingly much smaller than T(n).
O(n^3), on the other hand, appears to be systematically bigger than T(n) as n grows towards big numbers. A closer look shows that O(n^3) is effectively the order of the Big O notation for this function, but that n^3 / 6 is a more precise estimate which systematically exceeds the value of T(n) [for bigger values of n, and/or as n tends towards infinity], though only by a minute fraction compared with the coarser n^3 estimate.
One can confirm the n^3 / 6 estimate by induction:
T(n) = T(n-2) + n^2              // (1) by definition
T(n) ~ n^3 / 6                   // (2) our "guess"
T(n) = ((n - 2)^3 / 6) + n^2     // by substituting the (2) expression for T(n-2)
     = (n^3 - 6n^2 + 12n - 8) / 6 + 6n^2 / 6
     = (n^3 + 12n - 8) / 6
     = n^3/6 + 2n - 4/3
    ~= n^3/6                     // as n grows towards infinity, the 2n and 4/3 terms
                                 // become relatively insignificant, leaving us with the
                                 // (n^3 / 6) limit expression, QED
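A quick table (my own check, in the spirit of the tabulation advice above) confirms the guess numerically:
def T(n):
    # T(n) = T(n-2) + n^2, computed iteratively to avoid deep recursion
    total = 0
    while n > 0:
        total += n * n
        n -= 2
    return total
for n in (10, 100, 1000, 10000):
    print(n, T(n), round(T(n) / (n ** 3 / 6), 4))
The ratio T(n) / (n^3 / 6) approaches 1 as n grows, supporting the n^3 / 6 estimate.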
