A binary search is O(log2 N). Does this mean that the depth of the activation records stack would be log2 N? In other words, how many recursive function calls are made?
Yes, the recursion depth is O(log N). You keep making calls until you reach the base case of a single element. The exact number of calls depends on the implementation, though: some versions stop once the range is down to one element, while others make one more call on an empty range of length 0. The depth grows with the list length, but the precise count depends on the implementation.
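To make that concrete, here's a small recursive binary search sketch (my own example code, with a depth parameter added purely for illustration). Each call halves the remaining range, so searching for a missing target in an array of 1024 elements bottoms out after about log2(1024) = 10 recursive calls:

public class BinarySearchDepth {
    // Returns the index of target in the sorted array a, or -1 if absent.
    // depth counts how many calls deep we currently are.
    static int search(int[] a, int target, int lo, int hi, int depth) {
        if (lo > hi) {
            System.out.println("reached empty range at depth " + depth);
            return -1;                                          // base case: nothing left to search
        }
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == target) return mid;
        if (a[mid] < target)
            return search(a, target, mid + 1, hi, depth + 1);   // search right half
        return search(a, target, lo, mid - 1, depth + 1);       // search left half
    }

    public static void main(String[] args) {
        int[] a = new int[1024];
        for (int i = 0; i < a.length; i++) a[i] = i;
        search(a, -5, 0, a.length - 1, 0);  // a missing target forces maximum depth, about log2(1024) = 10
    }
}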
Related
The time complexity of a recursive algorithm is said to be:
Given a recursive algorithm, its time complexity O(T) is typically the product of the number of recursive invocations (denoted as R) and the time complexity of the calculation performed in each invocation (denoted as O(s)):
O(T) = R * O(s)
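For example, if I apply this definition to a standard recursive binary search, each invocation does O(s) = O(1) work besides the recursive call and there are R = O(log n) invocations, so O(T) = O(log n) * O(1) = O(log n), which matches the usual bound.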
Looking at a recursive function:
void algo(int n) {
    if (n == 0) return;             // base case, just so we don't overflow the stack
    for (int i = 0; i < n; i++);    // do O(n) work
    algo(n / 2);                    // recurse on half the input
}
According to the definition above I would say that R is log n and O(s) is n, so the result should be O(n log n), whereas mathematical induction proves that the result is O(n).
Please do not redo the induction proof. I am asking why the given definition does not work with my approach.
Great question! This hits at two different ways of accounting for the amount of work that's done in a recursive call chain.
The original strategy that you described for computing the amount of work done in a recursive call - multiply the work done per call by the number of calls - has an implicit assumption buried within it. Namely, this assumes that every recursive call does the same amount of work. If that is indeed the case, then you can determine the total work done as the product of the number of calls and the work per call.
However, this strategy doesn't usually work if the amount of work done per call varies as a function of the arguments to the call. After all, we can't talk about multiplying "the" amount of work done by a call by the number of calls if there isn't a single value representing how much work is done!
A more general strategy for determining how much work is done by a recursive call chain is to add up the amount of work done by each individual recursive call. In the case of the function that you've outlined above, the work done by the first call is n. The second call does n/2 work, because the amount of work it does is linear in its argument. The third call does n/4 work, the fourth n/8 work, etc. This means that the total work done is bounded by
n + n/2 + n/4 + n/8 + n/16 + ...
= n(1 + 1/2 + 1/4 + 1/8 + 1/16 + ...)
≤ 2n,
which is where the tighter O(n) bound comes from.
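If you want to check that sum empirically, here is a small sketch (my own instrumented copy of the function above, with a global counter added just for illustration) that tallies the total number of loop iterations across the whole call chain:

public class WorkCounter {
    static long work = 0;                        // total loop iterations across all calls

    static void algo(int n) {
        if (n == 0) return;                      // base case
        for (int i = 0; i < n; i++) work++;      // the O(n) work done by this call
        algo(n / 2);                             // recurse on half the input
    }

    public static void main(String[] args) {
        algo(1_000_000);
        System.out.println(work);                // prints a value just under 2 * 1,000,000
    }
}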
As a note, the idea of "add up all the work done by all the calls" is completely equivalent to "multiply the amount of work done per call by the number of calls" in the specific case where the amount of work done by each call is the same. Do you see why?
Alternatively, if you're okay getting a conservative upper bound on the amount of work done by a recursive call chain, you can multiply the number of calls by the maximum work done by any one call. That will never underestimate the total, but it won't always give you a tight bound. That's what's happening in the example you've listed - each call does at most n work, and there are O(log n) calls, so the total work is indeed O(n log n). That just doesn't happen to be a tight bound.
A quick note - I don't think it would be appropriate to call the strategy of multiplying the work done per call by the number of calls the "definition" of the amount of work done by a recursive call chain. As mentioned above, that's more of a "strategy for determining the work done" than a formal definition. If anything, I'd argue that the correct formal definition would be "the sum of the amounts of work done by each individual recursive call," since that more accurately accounts for how much total time will be spent.
Hope this helps!
I think you are looking for the master theorem, which is the standard tool for determining the time complexity of divide-and-conquer recursive algorithms.
https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)
Also, you usually can't determine an algorithm's runtime just by looking at it, especially for recursive ones. That's why your quick analysis differs from the proof by induction.
For example, the Fibonacci sequence can be solved with memoization when using recursion. But can solving Fibonacci iteratively (stack + while loop) also take advantage of memoization?
Of course ... start at the base cases F(0) and F(1), and compute values upward. Keep them all in an array, indexed by the functional subscript. When you get an input argument greater than your current array extent, compute more values. When you get one within the current bounds, simply return that value from the array.
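A minimal sketch of that idea (my own code, names chosen for illustration): the table is grown bottom-up from the base cases, and any argument already within bounds is answered by a plain lookup.

import java.util.ArrayList;
import java.util.List;

public class FibTable {
    private static final List<Long> table = new ArrayList<>(List.of(0L, 1L)); // F(0), F(1)

    static long fib(int n) {
        // Extend the table only when the argument exceeds its current extent.
        for (int i = table.size(); i <= n; i++) {
            table.add(table.get(i - 1) + table.get(i - 2));
        }
        return table.get(n);         // within bounds: just return the stored value
    }

    public static void main(String[] args) {
        System.out.println(fib(50)); // 12586269025; any later call with n <= 50 is an O(1) lookup
    }
}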
I'm doing some math, and today I learned that the inverse of a^n is log_a(n). I'm wondering if this applies to complexity. Is the inverse of superpolynomial time logarithmic time and vice versa?
I would think that the inverse of logarithmic time would be O(n^2) time.
Can you characterize the inverses of the common time complexities?
Cheers!
First, you have to define what you mean by inverse here. If you mean the inverse under function composition, where the identity is the linear function f(x) = x, then the inverse of f(x) = log x would be f(x) = 10^x. However, one could also define a multiplicative inverse, where the identity is the constant function f(x) = 1; then the inverse of f(x) = x would be f(x) = 1/x. While this is a bit involved, it isn't that different from asking, "What is the inverse of 2?" without stating an operation: the additive inverse is -2 while the multiplicative inverse is 1/2, so there are different answers depending on which operator you want to use.
When composing functions, the key question is what end result you want: O(n) or O(1)? The latter is much harder to reach by composition, because composing an O(log n) function with an O(1) function does not cancel out the log n factor. For example, take a binary search with O(log n) time complexity and a basic print statement with O(1) time complexity: put together, you still get O(log n), since the composed function still makes log n steps through the search, printing a number each time.
If you take two different complexity functions and nest one inside the other, the overall complexity is generally the product of the two. In a double for loop where each loop is O(n), the overall complexity is O(n) x O(n) = O(n^2). By the same logic, cancelling out the log n would require something running in O(1/(log n)) time, which doesn't really exist, since any operation takes at least constant time.
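As a tiny illustration of that product rule (my own example code): the outer loop runs n times, the inner loop runs n times per outer iteration, so the counter ends up at n * n.

public class NestedLoops {
    static long countOps(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                ops++;                          // one unit of O(1) work
            }
        }
        return ops;                             // equals n * n
    }

    public static void main(String[] args) {
        System.out.println(countOps(1000));     // prints 1000000
    }
}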
"Given an array of n integers, return an array of their factorials."
Instead of the straightforward way of iterating through the array and computing the factorial of each element, I was thinking of a memoized approach, where I store previously calculated factorials and use them in subsequent ones.
For example: 7! can be calculated much faster if the result of 6! is stored somewhere. However, I noticed the run time of both algorithms is still O(n). (I might be wrong.) Does that imply that we're not speeding up the process here? If so, does that mean that memoization is not useful in problems with non-tree recursion? (In Fibonacci, we effectively prune the recursion tree by memoizing previously found values; in the case of factorial, we don't really have a tree, more like a recursion ladder.)
Any comments appreciated.
However, I noticed the run time of both algorithms is still O(n).
O-notation is hiding the most important characteristic here. It only considers the length of the array and not the size of the numbers, which matters much more when dealing with factorials.
You should implement the memoization with a hashtable. If you do, then each lookup is O(1) asymptotically.
In the case without memoization, the time complexity should be O(n^2), since you need i - 1 multiplications to calculate factorial(i) from scratch for each element.
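Here is a rough sketch of the memoized approach (my own code, not the poster's): factorials are cached in a growable list (a hashtable would work just as well) so factorial(i) reuses factorial(i-1), and BigInteger is used because the values quickly outgrow fixed-width integers, which is exactly the hidden cost mentioned above.

import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

public class Factorials {
    private static final List<BigInteger> cache = new ArrayList<>(List.of(BigInteger.ONE)); // 0! = 1

    static BigInteger factorial(int k) {
        // Extend the cache one entry at a time: i! = (i-1)! * i
        for (int i = cache.size(); i <= k; i++) {
            cache.add(cache.get(i - 1).multiply(BigInteger.valueOf(i)));
        }
        return cache.get(k);
    }

    public static BigInteger[] factorials(int[] input) {
        BigInteger[] result = new BigInteger[input.length];
        for (int i = 0; i < input.length; i++) {
            result[i] = factorial(input[i]);    // each element reuses earlier cache entries
        }
        return result;
    }
}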
I'm having difficulty determining the big O of simple recursive methods. I can't wrap my head around what happens when a method is called multiple times. I would be more specific about my areas of confusion, but at the moment I'm trying to answer some homework questions, and since I don't want to cheat, I ask that anyone responding to this post come up with a simple recursive method and provide a simple explanation of its big O. (Preferably in Java... a language I'm learning.)
Thank you.
You can define the order recursively as well. For instance, let's say you have a function f, and calculating f(n) takes k steps. Now you want to calculate f(n+1). Let's say f(n+1) calls f(n) once; then f(n+1) takes k plus some constant number of steps. Each invocation adds a constant number of extra steps, so this method is O(n).
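For example (my own toy method, since you asked for one in Java): sum(n) makes exactly one recursive call with a smaller argument and does a constant amount of work per call, so computing sum(n) takes n + 1 calls in total and runs in O(n).

public class RecursionDemo {
    static int sum(int n) {
        if (n == 0) return 0;       // base case: constant work
        return n + sum(n - 1);      // constant work plus the cost of sum(n - 1)
    }

    public static void main(String[] args) {
        System.out.println(sum(10)); // 55, computed with 11 calls
    }
}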
Now look at another example. Let's say you implement Fibonacci naively by adding the two previous results:
fib(n) = { if n < 2: return n; return fib(n-1) + fib(n-2) }
Now let's say you can calculate fib(n-2) and fib(n-1) both in about k steps. To calculate fib(n) you need k + k = 2*k steps, roughly twice as many steps as for fib(n-1). The work roughly doubles every time n increases by one, so this seems to be O(2^N).
Admittedly, this is not very formal, but hopefully this way you can get a bit of a feel.
You might want to refer to the master theorem for finding the big O of recursive methods. Here is the wikipedia article: http://en.wikipedia.org/wiki/Master_theorem
You want to think of a recursive problem like a tree. Then, consider each level of the tree and the amount of work required. Problems will generally fall into 3 categories: root heavy (first iteration >> rest of tree), balanced (each level has equal amounts of work), and leaf heavy (last iteration >> rest of tree).
Taking merge sort as an example:
define mergeSort(list toSort):
    if (length of toSort <= 1):
        return toSort
    list left = toSort from [0, length of toSort/2)
    list right = toSort from [length of toSort/2, length of toSort)
    return merge(mergeSort(left), mergeSort(right))
You can see that each call of mergeSort in turn calls 2 more mergeSorts of 1/2 the original length. We know that the merge procedure will take time proportional to the number of values being merged.
The recurrence relation is then T(n) = 2*T(n/2) + O(n). The 2 comes from the two recursive calls, and the n/2 comes from each call receiving only half the elements. However, at each level of the tree there are still n elements in total to be merged, so the work per level is O(n).
We know the work is evenly distributed (O(n) at each depth) and the tree is log_2(n) deep, so the big O of the recursive function is O(n*log(n)).
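For completeness, here is a runnable Java version of that pseudocode (my own sketch; the names are not from the original post). mergeSort splits the array in half and recursively sorts both halves, and merge combines them in time proportional to the number of elements merged, matching T(n) = 2*T(n/2) + O(n):

import java.util.Arrays;

public class MergeSortDemo {
    static int[] mergeSort(int[] toSort) {
        if (toSort.length <= 1) return toSort;                                    // base case
        int mid = toSort.length / 2;
        int[] left = mergeSort(Arrays.copyOfRange(toSort, 0, mid));               // sort first half
        int[] right = mergeSort(Arrays.copyOfRange(toSort, mid, toSort.length));  // sort second half
        return merge(left, right);
    }

    static int[] merge(int[] left, int[] right) {
        int[] result = new int[left.length + right.length];
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length) {                 // take the smaller head each step
            result[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        }
        while (i < left.length) result[k++] = left[i++];              // drain whichever side remains
        while (j < right.length) result[k++] = right[j++];
        return result;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(mergeSort(new int[]{5, 2, 4, 7, 1, 3})));
    }
}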