I'm doing some math, and today I learned that the inverse of a^n is log(n). I'm wondering if this applies to complexity. Is the inverse of superpolynomial time logarithmic time and vice versa?
I would think that the inverse of logarithmic time would be O(n^2) time.
Can you characterize the inverses of the common time complexities?
Cheers!
First, you have to define what you mean by "inverse" here. If you mean the inverse under function composition, where the identity is f(x)=x, then the inverse of f(x)=log x is f(x)=10^x (for a base-10 log). One could instead define a multiplicative inverse, where the identity is the constant function f(x)=1; under that operation the inverse of f(x)=x is f(x)=1/x. This may sound complicated, but it isn't that different from asking, "What is the inverse of 2?" Without stating an operation, that question is hard to answer: the additive inverse is -2 while the multiplicative inverse is 1/2, so there are different answers depending on which operator you want to use.
When composing functions, the key question is what end result you want: O(n) or O(1)? Getting down to O(1) is the harder case, because composing an O(log n) function with an O(1) function does not cancel out the log n work. For example, take a binary search with O(log n) time complexity and a basic print statement with O(1) time complexity: if you put these together, you still get O(log n), since the composed function still makes log n steps through the search, printing a number each time.
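As a concrete sketch of that example (hypothetical code, not taken from the question): a binary search that prints at every probe still performs only about log2(n) iterations, so the O(1) print does nothing to cancel the O(log n) cost.

#include <stdio.h>

/* Binary search over a sorted array with an O(1) print in every iteration.
 * The loop still runs roughly log2(n) times, so the total cost is O(log n). */
int binary_search(const int *a, int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        printf("probing index %d\n", mid);   /* the O(1) step */
        if (a[mid] == key) return mid;
        if (a[mid] < key)  lo = mid + 1;
        else               hi = mid - 1;
    }
    return -1;   /* not found */
}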
If instead you take two different complexity functions and nest one inside the other, the overall complexity is typically the product of the two. Consider a double for loop where each loop is O(n): the overall complexity is O(n) x O(n) = O(n^2). That means cancelling out a log n factor would require composing with something that costs O(1/(log n)), which I'm not sure exists in reality, since no real operation takes less than constant time.
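A minimal sketch of that nesting (illustrative code, not from the question): each pass of the outer O(n) loop runs the whole inner O(n) loop, so the counts multiply to O(n^2).

/* Nesting one O(n) loop inside another: the inner loop runs n times for each
 * of the n iterations of the outer loop, so the innermost statement executes
 * n * n = n^2 times in total. */
long nested_work(int n) {
    long count = 0;
    for (int i = 0; i < n; i++)         /* O(n) outer loop */
        for (int j = 0; j < n; j++)     /* O(n) inner loop, run n times */
            count++;                    /* total: n^2 increments */
    return count;
}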
OK, so I'm pretty new to CS and was recently learning about Big-O, Theta, and Omega, as well as the master theorem, and in lecture I saw that O(n) and O(n-1) are treated as the same thing while T(n) and T(n-1) are not, and I was wondering why that is?
Although both O(n) and T(n) use capital letters on the outside and lower-case n in the middle, they represent fundamentally different concepts.
If you’re analyzing an algorithm using a recurrence relation, it’s common to let T(n) denote the amount of time it takes for the algorithm to complete on an input of size n. As a result, we wouldn’t expect T(n) to be the same as T(n-1), since, in most cases, algorithms take longer to run when you give them larger inputs.
More generally, for any function f, if you wanted to claim that f(n) = f(n-1), you'd need to justify that assumption, because it generally isn't the case.
The tricky bit here is that when we write O(n), it looks like we're writing a function named O and passing in the argument n, but the notation means something totally different. The notation O(n) is a placeholder for "some function that, when the input gets really big, is bounded from above by a multiple of n." Similarly, O(n-1) means "some function that, when the input gets really big, is bounded from above by a multiple of n-1." And it happens to be the case that any function that's bounded from above by a multiple of n is also bounded from above by a multiple of n-1 (for n ≥ 2 we have n ≤ 2(n-1), so a bound of c·n implies a bound of 2c·(n-1)), which is why O(n) and O(n-1) denote the same thing.
Hope this helps!
I am trying to figure out the runtime and space complexity of the algorithm below.
Some say that the runtime complexity of this is O(n!), and I am guessing it is because there are n! recursive calls for a recursive algorithm that operates on an n*n matrix. But I am not sure if I am right.
Also, is the space complexity also n!?
It might help to write out an explicit recurrence relation that governs the runtime of a straightforward implementation of the recursive algorithm. Notice that, in working on an n × n matrix, evaluating the sum requires making n recursive calls on matrices of size (n - 1) × (n - 1). Each recursive call requires about (n - 1)^2 additional time to set up, since we need to extract a submatrix of that size from the original matrix, so the total setup overhead within one call of the algorithm is Θ(n^3), because we're doing quadratic work linearly many times. That means that our work done is roughly
T(n) = nT(n - 1) + n^3.
Completely ignoring the cubic term here, notice that expanding out the recursion will have the following effect:
T(n) = nT(n - 1) + ...
= n(n-1)T(n-2) + ...
= n(n-1)(n-2)T(n-3) + ...
and eventually we’ll get an n! term showing up, plus a bunch of extra terms from the cubic. So the work done here is at least Ω(n!), and probably a lot more once we factor in the cubic term.
As for the space complexity: remember that once one branch of the recursion terminates, we can reuse the space that branch was using. This means that we only really need to look at any one branch to see how much space is needed.
With a naive implementation of this summation where we explicitly compute the submatrices for the recursive calls, we’ll need space to store one matrix of size n × n, one of size (n-1) × (n-1), one of size (n-2) × (n-2), etc. That space usage sums up to Θ(n^3).
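For reference, here is a minimal sketch of the kind of naive cofactor-expansion routine being analyzed; the poster's actual code isn't shown, so the names and layout here are only illustrative. Each level of the recursion keeps one scratch submatrix alive, which is where the Θ(n^3) space along a single branch comes from.

#include <stdlib.h>

/* Naive determinant by cofactor expansion along the first row.
 * Matrices are stored row-major in a flat array of length n*n. */
double det(const double *m, int n) {
    if (n == 1) return m[0];

    /* One (n-1) x (n-1) scratch matrix per recursion level. */
    double *sub = malloc((size_t)(n - 1) * (n - 1) * sizeof *sub);
    double result = 0.0, sign = 1.0;

    for (int col = 0; col < n; col++) {   /* n recursive calls ... */
        /* ... each preceded by about (n-1)^2 work to build the submatrix
         * that drops row 0 and column col. */
        for (int i = 1; i < n; i++)
            for (int j = 0, sj = 0; j < n; j++)
                if (j != col)
                    sub[(i - 1) * (n - 1) + sj++] = m[i * n + j];
        result += sign * m[col] * det(sub, n - 1);
        sign = -sign;
    }
    free(sub);
    return result;
}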
There are a bunch of other algorithms you can use to compute determinants in much less time and space. Some are based on Gaussian elimination and run in time O(n^3), for example.
The time complexity of a recursive algorithm is said to be
Given a recursive algorithm, its time complexity O(T) is typically
the product of the number of recursion invocations (denoted as R)
and the time complexity of calculation (denoted as O(s))
that incurs along with each recursion
O(T) = R * O(s)
Looking at a recursive function:
void algo(int n) {
    if (n == 0) return;           // base case, just to not have a stack overflow
    for (int i = 0; i < n; i++);  // placeholder loop doing O(n) work
    algo(n / 2);
}
According to the definition above I would say that R is log n (the number of recursive invocations) and O(s) is O(n), so the result should be O(n log n), whereas by mathematical induction it is proved that the result is O(n).
Please do not give the induction proof. I am asking why the given definition does not work with my approach.
Great question! This hits at two different ways of accounting for the amount of work that's done in a recursive call chain.
The original strategy that you described for computing the amount of work done in a recursive call - multiply the work done per call by the number of calls - has an implicit assumption buried within it. Namely, this assumes that every recursive call does the same amount of work. If that is indeed the case, then you can determine the total work done as the product of the number of calls and the work per call.
However, this strategy doesn't usually work if the amount of work done per call varies as a function of the arguments to the call. After all, we can't talk about multiplying "the" amount of work done by a call by the number of calls if there isn't a single value representing how much work is done!
A more general strategy for determining how much work is done by a recursive call chain is to add up the amount of work done by each individual recursive call. In the case of the function that you've outlined above, the work done by the first call is n. The second call does n/2 work, because the amount of work it does is linear in its argument. The third call does n/4 work, the fourth n/8 work, etc. This means that the total work done is bounded by
n + n/2 + n/4 + n/8 + n/16 + ...
= n(1 + 1/2 + 1/4 + 1/8 + 1/16 + ...)
≤ 2n,
which is where the tighter O(n) bound comes from.
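If it helps to see this concretely, here is a small instrumented version of the function above (the counter and main are additions for illustration): the total number of loop iterations across the whole call chain stays just under 2n, nowhere near n log n.

#include <stdio.h>

static long work = 0;   /* counts loop iterations over the whole call chain */

void algo(int n) {
    if (n == 0) return;            // base case
    for (int i = 0; i < n; i++)    // the O(n) work done by this call
        work++;
    algo(n / 2);
}

int main(void) {
    int n = 1 << 20;               /* n = 1,048,576 */
    algo(n);
    printf("n = %d, total work = %ld, 2n = %d\n", n, work, 2 * n);
    return 0;
}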
As a note, the idea of "add up all the work done by all the calls" is completely equivalent to "multiply the amount of work done per call by the number of calls" in the specific case where the amount of work done by each call is the same. Do you see why?
Alternatively, if you're okay getting a conservative upper bound on the amount of work done by a recursive call chain, you can multiply the number of calls by the maximum work done by any one call. That will never underestimate the total, but it won't always give you the right bound. That's what's happening here in the example you've listed - each call does at most n work, and there are O(log n) calls, so the total work is indeed O(n log n). That just doesn't happen to be a tight bound.
A quick note - I don't think it would be appropriate to call the strategy of multiplying the work done per call by the number of calls the "definition" of the amount of work done by a recursive call chain. As mentioned above, that's more of a "strategy for determining the work done" than a formal definition. If anything, I'd argue that the correct formal definition would be "the sum of the amounts of work done by each individual recursive call," since that more accurately accounts for how much total time will be spent.
Hope this helps!
I think you are trying to find information about the master theorem, which is what is used to prove the time complexity of recursive algorithms.
https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)
Also, you usually can't determine an algorithm's runtime just from looking at it, especially for recursive ones. That's why your quick analysis is different from the proof by induction.
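As a rough check of how the master theorem applies here (my reading of the code above, not something stated in the question), the recurrence is
T(n) = T(n/2) + O(n),
so a = 1, b = 2, and f(n) = n. Since n^(log_2 1) = n^0 = 1 grows strictly slower than f(n) = n, case 3 of the master theorem applies (the regularity condition holds since f(n/2) = n/2 ≤ (1/2)·f(n)), which gives T(n) = Θ(n) and matches the bound from the induction proof.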
I am struggling with a recursive doubling problem I was assigned. I understand that recursive doubling breaks up a bigger problem into smaller sub-problems so that the computation may be parallelized, but I don't think it is doable with this question.
Exercise 1.4. The operation
for (i) {
x[i+1] = a[i]*x[i] + b[i];
}
cannot be handled by a pipeline because there is a dependency between
input of one iteration of the operation and the output of the
previous. However, you can transform the loop into one that is
mathematically equivalent, and potentially more efficient to compute.
Derive an expression that computes x[i+2] from x[i] without involving
x[i+1]. This is known as recursive doubling. Assume you have plenty of
temporary storage. You can now perform the calculation by
• Doing some preliminary calculations;
• Computing x[i],x[i+2],x[i+4],..., and from these,
• Compute the missing terms x[i+1],x[i+3],....
Analyze the efficiency of this scheme by giving formulas for T_0(n) and
T_s(n). Can you think of an argument why the preliminary calculations
may be of lesser importance in some circumstances?
So I understand that the expression for x[2] would be: x[2] = a[1]*(a[0]*x[0] + b[0]) + b[1],
but what I do not understand is (A) how this relates to recursive doubling, and (B) how this would achieve any speedup if the result of the previous calculation is still needed.
The central concept is that once you can compute x[i+2] in terms of x[i], a[i], and b[i], you can then split into two threads:
Start with x[0] and compute the even-numbered terms.
Compute x[1] from x[0], then compute the odd-numbered terms.
In fact, if you have good insight into your parallelization overhead, you can generate a Fibonacci tree of processes, a new one starting each time a previous thread gets going nicely.
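Here is a minimal sketch of that split, under illustrative assumptions (the array names, length, and seed values are made up, not from the exercise). The preliminary calculations fold two applications of the recurrence into one; after that, the even-indexed and odd-indexed chains no longer depend on each other and could run in separate threads.

#include <stdio.h>

#define N 16   /* made-up problem size */

int main(void) {
    double a[N], b[N], x[N], A2[N], B2[N];

    /* Made-up inputs just so the sketch runs. */
    for (int i = 0; i < N; i++) { a[i] = 1.0 + 0.01 * i; b[i] = 0.5; }
    x[0] = 1.0;
    x[1] = a[0] * x[0] + b[0];              /* seed the odd-indexed chain */

    /* Preliminary calculations: from x[i+1] = a[i]*x[i] + b[i], substituting
     * once gives x[i+2] = (a[i+1]*a[i])*x[i] + (a[i+1]*b[i] + b[i+1]). */
    for (int i = 0; i + 2 < N; i++) {
        A2[i] = a[i + 1] * a[i];
        B2[i] = a[i + 1] * b[i] + b[i + 1];
    }

    /* These two loops are independent of each other and could be handed to
     * two separate threads. */
    for (int i = 0; i + 2 < N; i += 2)      /* even-indexed terms */
        x[i + 2] = A2[i] * x[i] + B2[i];
    for (int i = 1; i + 2 < N; i += 2)      /* odd-indexed terms */
        x[i + 2] = A2[i] * x[i] + B2[i];

    for (int i = 0; i < N; i++)
        printf("x[%d] = %f\n", i, x[i]);
    return 0;
}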
(I'm not sure whether I should post this problem on this site or on the math site. Please feel free to migrate this post if necessary.)
My problem at hand is that, given a value of k, I'd like to numerically compute a rational function of nonlinear polynomials in k that looks like the following (written out in plain text since I don't know how to typeset equations here):
f(k) = [ sum_{n=0}^{N} a_n * e^(i*u_n*k) * k^n ] / [ sum_{n=0}^{N} b_n * e^(i*v_n*k) * k^n ]
where {a_0, ..., a_N; b_0, ..., b_N} are complex constants, {u_0, ..., u_N, v_0, ..., v_N} are real constants, and i is the imaginary unit. I learned from Numerical Recipes that there are a whole bunch of ways to compute polynomials quickly while keeping the rounding error small enough, provided all the coefficients are constant. But I do not think those ideas are useful in my case, since the exponential prefactors also depend on k.
Currently I calculate it in a brute force way in C with complex.h (this is just a pseudo code):
double complex function(double k)
{
    return (a_0 + a_1*cexp(I*u_1*k)*k + a_2*cexp(I*u_2*k)*k*k + ...)
         / (b_0 + b_1*cexp(I*v_1*k)*k + b_2*cexp(I*v_2*k)*k*k + ...);
}
However, as the number of calls to function increases (because this is just one part of my real calculation), it becomes very slow and inaccurate (only 6 valid digits). I appreciate any comments and/or suggestions.
I trust that this isn't a homework assignment!
Normally the trick is to use a loop: add the next coefficient to the running sum, then multiply by k (this is Horner's method). However, in your case, I think the cost of the e^(...) factor in each coefficient is going to overwhelm any savings from factoring out k. You can still do it, but the savings will probably be small.
Is u_i a constant? Depending on how many times you need to run this formula, maybe you could precompute u_i * k (unless k changes on each run). It's been so many decades since I took a Numerical Analysis course that I have only vague recollections of the tricks of the trade. Let's see... is e^(i*u_i*k) the same as (e^(i*u_i))^k? I don't remember the rules for complex numbers, or whether you'd save anything, since you've got a real^real power anyway (assuming k is real), which is internally done using e^power.
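For what it's worth, here is a sketch of that loop idea, assuming every term carries its own exponential factor as in the formula in the question (the function and array names are made up). Folding the exponential into each coefficient and applying Horner's rule needs only one multiplication by k per term, and it keeps everything in double precision:

#include <complex.h>

/* Evaluate f(k) = (sum_n a[n]*e^(i*u[n]*k)*k^n) / (sum_n b[n]*e^(i*v[n]*k)*k^n)
 * with a Horner-style loop, treating c[n] = a[n]*cexp(I*u[n]*k) as the
 * coefficient of k^n. Arrays run from index 0 to N inclusive. */
double complex rational_eval(double k,
                             const double complex a[], const double u[],
                             const double complex b[], const double v[],
                             int N)
{
    double complex num = 0.0, den = 0.0;
    for (int n = N; n >= 0; n--) {          /* highest-order term first */
        num = num * k + a[n] * cexp(I * u[n] * k);
        den = den * k + b[n] * cexp(I * v[n] * k);
    }
    return num / den;
}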
If you're getting only 6 digits, that suggests that your math, and maybe your library, is working in single precision (32 bit) reals. Check your library and check your declarations that you are using at least double precision (64 bit) reals everywhere.