How is the running time of a d-ary heap simplified from O(log_d n) to O( (log n) / (log d) )?
The simplification I would have expected is a product:
log_d(n) = log(d) * log(n)
How is the division simplification derived instead?
This uses the standard identity for converting between logarithmic bases:
log_x(z) = log_m(z) / log_m(x)
With x = d, z = n, and m any fixed base, this gives log_d(n) = log(n) / log(d). Multiplying both sides of the identity by log_m(x) gives the equivalent product form:
log_m(z) = log_x(z) * log_m(x)
which is equivalent to the answer in the question you cite.
Suppose x = log_d(n). Equivalently, we have n = d^x. Then
log_2(n) = log_2(d^x) = x * log_2(d)
Dividing through by log_2(d) yields:
log_2(n) / log_2(d) = x
And so log_2(n) / log_2(d) = x = log_d(n)
Of course, assuming d is fixed, log_2(d) is just a constant, and so
O( log_d(n) ) = O( (1 / log_2(d)) * log_2(n) ) = O( log_2(n) )
That is, as far as Big-O notation is concerned, you can swap any logarithm base (larger than 1) for any other such base. So it's customary to just drop the base and write O( log(n) ).
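As a quick numeric sanity check (a sketch of my own, not part of the original answer), the number of times you can divide n by d, which is essentially the height of a d-ary heap on n elements, tracks log(n) / log(d):

#include <cmath>
#include <cstdio>

int main() {
    const double n = 1000000.0;
    for (int d = 2; d <= 8; ++d) {
        int levels = 0;
        for (double m = n; m > 1.0; m /= d) ++levels;        // height by repeated division by d
        double via_identity = std::log(n) / std::log(d);     // change of base: log_d(n)
        std::printf("d = %d  levels = %d  log(n)/log(d) = %.3f\n", d, levels, via_identity);
    }
    return 0;
}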
I came across a problem where I have to select the correct Big-O class for the function f(n) = n^5 + 2^log(n).
I tried plugging in large values and found that n^5 grows much faster than 2^log(n). But then someone told me that exponential functions grow faster than other functions, and I got confused again. To be honest, I don't think 2^log(n) is an exponential function, but because of my weak grasp of logarithms I am unable to prove it.
I just want someone to confirm that n^5 is larger than 2^log(n), so that I can argue that 2^log(n) is not an exponential function.
Thanks in advance. :)
2^log(n) = (2/e)^log(n) * e^log(n) = a^log(n) * n where a = 2/e < 1 (assuming log is the natural logarithm).
It follows that f(n) = n^5 + 2^log(n) < n^5 + n and therefore f(n) = O(n^5).
[ EDIT ] In the general case of logarithms of an arbitrary base b, using that 2 = b^log_b(2) it follows that:
2^log_b(n) = (b^log_b(2))^(log_b(n))
= b^(log_b(2)*log_b(n))
= (b^log_b(n))^log_b(2)
= n^log_b(2)
= n^(1/log_2(b))
Therefore f(n) = n^5 + 2^log_b(n) = O( n^5 + n^(1/log_2(b)) ) = O( n^max(5, 1/log_2(b)) ).
In particular, f(n) = O(n^5) for log_2(b) > 1/5 ⇔ b > 2^(1/5), which covers the common log bases of 2, e, 10.
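For a quick numeric confirmation of this identity (the snippet is my own illustration, not part of the answer), here is a check for a few common bases b:

#include <cmath>
#include <cstdio>

int main() {
    const double n = 1.0e6;
    const double bases[] = { 2.0, std::exp(1.0), 10.0 };        // b = 2, e, 10
    for (double b : bases) {
        double lhs = std::pow(2.0, std::log(n) / std::log(b));  // 2^log_b(n)
        double rhs = std::pow(n, 1.0 / std::log2(b));           // n^(1/log_2(b))
        std::printf("b = %.4f  2^log_b(n) = %.6g  n^(1/log_2(b)) = %.6g\n", b, lhs, rhs);
    }
    return 0;
}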
O(2^(log n)) = O(n); this follows straight from the definition of the logarithm, taking log to be base 2.
More formally:
f(n) = 2^(log_2 n)
log_2 f(n) = log_2( 2^(log_2 n) ) = (log_2 n) * (log_2 2) = log_2 n
==> f(n) = n
==> O(2^(log n)) = O(n)
==> O(n^5 + 2^(log n)) = O(n^5 + n) = O(n^5)
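If it helps to see it numerically, here is a tiny check (my own sketch) that 2^(log_2 n) really is just n, so the n^5 term dominates f(n):

#include <cmath>
#include <cstdio>

int main() {
    for (double n = 10.0; n <= 100000.0; n *= 10.0) {
        double two_pow_log = std::pow(2.0, std::log2(n));    // equals n, up to rounding
        std::printf("n = %.0f  2^(log_2 n) = %.6g  n^5 = %.6g\n",
                    n, two_pow_log, std::pow(n, 5.0));
    }
    return 0;
}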
I came across this time complexity function, and as far as I can tell it is actually constant. Please correct me if I am wrong.
n^(1/log n) => (2^m)^(1/log(2^m)) => (2^m)^(1/m) => 2
Since any n can be written as a power of 2, I can do the above simplification and prove that it is constant, right?
If log is the natural log, then this is equal to e rather than 2, but either way it's a constant.
First, let:
k = n^(1 / log n)
Then take the log of both sides:
log k = (1 / log n) * log n
So:
log k = 1
Now exponentiate both sides (raise e to the power of each side) to get:
e^(log k) = e^(1)
And thus:
k = e.
Here's an alternative proof:
1 / (log n) = (log e) / (log n) = log_n(e) by the change-of-base identity.
Then, n^(log_n(e)) = e by the definition of the logarithm as the inverse of exponentiation.
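A quick numeric check (my own addition) that n^(1/log n) really is the constant e when log is the natural log:

#include <cmath>
#include <cstdio>

int main() {
    for (double n = 10.0; n <= 1.0e12; n *= 1000.0) {
        double value = std::pow(n, 1.0 / std::log(n));   // n^(1/ln n)
        std::printf("n = %.0f  n^(1/ln n) = %.12f\n", n, value);
    }
    std::printf("for comparison, e = %.12f\n", std::exp(1.0));
    return 0;
}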
Let's say I have a program that calculates the value of the sine wave at time t. The sine wave is of the form sin(f*t + phi). Amplitude is 1.
If I only have one sin term all is fine. I can easily calculate the value at any time t.
But, at runtime, the wave form becomes modified when it combines with other waves. sin(f1 * t + phi1) + sin(f2 * t + phi2) + sin(f3 * t + phi3) + ...
The simplest solution is to have a table with columns for phi and f, iterate over all rows, and sum the results. But I suspect that once I reach thousands of rows, the computation will become slow.
Is there a different way of doing this? Like combining all the sines into one statement/formula?
If you have a Fourier series (i.e. f_i = i f for some f) you can use the Clenshaw recurrence relation which is significantly faster than computing all the sines (but it might be slightly less accurate).
In your case you can consider the sequence:
f_k = exp( i ( k f t + phi_k) ) , where i is the imaginary unit.
Notice that Im(f_k) = sin( k f t + phi_k ), that is your sequence.
Also
f_k = exp( i ( k f t + phi_k) ) = exp( i k f t ) * exp( i phi_k )
So, writing f_k = a_k * exp( i k f t ), you have a_k = exp( i phi_k ). You can precompute these values and store them in an array. For simplicity, from now on assume a_0 = 0.
Now, exp( i (k + 1) f t) = exp(i k f t) * exp(i f t), so alpha_k = exp(i f t) and beta_k = 0.
You can now apply the recurrence formula; in C++ you can do something like this:
#include <complex>
#include <vector>
using namespace std;

// Evaluates a[0] + alpha*a[1] + alpha^2*a[2] + ... with alpha = exp(i f t),
// i.e. Horner's scheme (the Clenshaw recurrence with beta_k = 0).
complex<double> clenshaw_fourier(double f, double t, const vector< complex<double> > & a)
{
    const complex<double> i(0.0, 1.0);          // imaginary unit
    const complex<double> alpha = exp(f * t * i);
    complex<double> b = 0;
    for (int k = (int)a.size() - 1; k > 0; --k)
        b = a[k] + alpha * b;                   // Horner step
    return a[0] + alpha * b;
}
Assuming that a[k] == exp( i phi_k ).
The real part of the answer is the sum of cos(k f t + phi_k), while the imaginary part is the sum of sin(k f t + phi_k).
As you can see, this only uses additions and multiplications, except for exp(f * t * i), which is computed only once.
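Here is a hypothetical usage sketch (my addition; it assumes the clenshaw_fourier function above, with its includes, is in the same file, and the frequency, time, and phases are made-up values) comparing the result against a direct sum of sines:

#include <cmath>
#include <cstdio>

int main() {
    const complex<double> i(0.0, 1.0);
    const double pi = acos(-1.0);
    double f = 2.0 * pi * 5.0;                           // illustrative frequency
    double t = 0.0123;                                   // illustrative time
    vector<double> phi = { 0.0, 0.3, 1.1, -0.7, 2.4 };   // illustrative phases

    vector< complex<double> > a(phi.size());
    a[0] = 0.0;                                          // a_0 = 0, as assumed in the text
    for (int k = 1; k < (int)phi.size(); ++k)
        a[k] = exp(i * phi[k]);                          // a_k = exp(i phi_k)

    double direct = 0.0;
    for (int k = 1; k < (int)phi.size(); ++k)
        direct += sin(k * f * t + phi[k]);               // naive sum of sines

    double horner = clenshaw_fourier(f, t, a).imag();    // imaginary part = same sum
    printf("direct sum = %.12f  recurrence = %.12f\n", direct, horner);
    return 0;
}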
There are different bases (plural of basis) that can be advantageous (i.e. compact) for representing different waveforms. The most common and well-known one is that which you mention, called the Fourier basis usually. Daubechies wavelets for example are a relatively recent addition that cope with more discontinuous waveforms much better than a Fourier basis does. But this is really a math topic and probably if you post on Math Overflow you will get better answers.
So I've encountered a case where I have 2 recursive calls - rather than one. I do know how to solve for one recursive call, but in this case I'm not sure whether I'm right or wrong.
I have the following problem:
T(n) = T(2n/5) + T(3n/5) + n
And I need to find the worst-case complexity for this.
(FYI - It's some kind of augmented merge sort)
My feeling was to use the first equation from the Theorem, but I feel something is wrong with my idea. Any explanation on how to solve problems like this will be appreciated :)
The recursion tree for the given recursion will look like this:
Size                                    Cost
                 n                        n
               /   \
           2n/5     3n/5                  n
           /  \     /  \
       4n/25 6n/25 6n/25 9n/25            n
and so on, until the size of the input becomes 1.
The longest simple path from the root to a leaf is n -> (3/5)n -> (3/5)^2 n -> ... down to 1.
So let the height of the tree be k. Then
((3/5)^k) * n = 1, meaning k = log_{5/3}(n)
In the worst case we expect every level to contribute a cost of n, and hence
Total cost = n * log_{5/3}(n)
However, we must keep in mind that the tree is not complete, so
some levels near the bottom are only partially filled.
But in asymptotic analysis we ignore such details.
Hence the worst-case cost = n * log_{5/3}(n)
which is O( n * log n )
Now, let us verify this using the substitution method:
T(n) = O( n * log n ) if T(n) <= d * n * log(n) for some constant d > 0
Assuming this holds for the recursive calls (the inductive hypothesis):
T(n) = T(2n/5) + T(3n/5) + n
    <= d(2n/5)log(2n/5) + d(3n/5)log(3n/5) + n
     = d(2n/5)(log n - log(5/2)) + d(3n/5)(log n - log(5/3)) + n
     = d n log n - d(2n/5)log(5/2) - d(3n/5)log(5/3) + n
     = d n log n - d n ( (2/5)log(5/2) + (3/5)log(5/3) ) + n
    <= d n log n
as long as d >= 1 / ( (2/5)log(5/2) + (3/5)log(5/3) )
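As an empirical cross-check (a sketch of my own, assuming the base case T(n) = 1 for n <= 1, which the question leaves implicit), evaluating the recurrence directly shows T(n) / (n * log n) staying bounded by a constant, consistent with O(n * log n):

#include <cmath>
#include <cstdio>

// Direct evaluation of T(n) = T(2n/5) + T(3n/5) + n, with T(n) = 1 for n <= 1.
double T(double n) {
    if (n <= 1.0) return 1.0;
    return T(2.0 * n / 5.0) + T(3.0 * n / 5.0) + n;
}

int main() {
    for (double n = 1000.0; n <= 1000000.0; n *= 10.0)
        std::printf("n = %.0f  T(n) / (n log n) = %.4f\n", n, T(n) / (n * std::log(n)));
    return 0;
}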
In our Data Structures class we are learning how to solve recurrence relations in 1 variable. Unfortunately some things seem to come "out of the blue".
For example, some exercises already tell you how to substitute the variable n:
Compute T(n) for n = 2^k
T(n) = a for n <= 2
T(n) = 8T(n/2) + bn^2 (a and b are > 0)
But some exercises just give you the T(n) without providing a replacement for the variable n:
T(n) = 1 for n <= 1
T(n) = 2T(n/4) + sqrt(n)
I used the iterative method and arrived at the right answer: sqrt(n) + (1/2) * sqrt(n) * log(n).
But when the professor explained it, she started by saying: "Let n = 4^k", which is what I mean by "out of the blue". Using that substitution, the answer is much simpler to obtain.
But how is the student supposed to come up with that?
This is another example:
T(n) = 1 for n <= 1
T(n) = 2T( (n-1)/2 ) + n
Here I started again with the iterative method, but I can't reach a definitive answer; it looks more complex that way.
After 3 iterative steps I arrived at this:
T(n) = 4T( (n-2)/4 ) + 2n - 1
T(n) = 8T( (n-3)/8 ) + 3n - 3
T(n) = 16T( (n-4)/16 ) + 4n - 6
I am inclined to say T(i) = 2^i * T( (n-i)/2^i ) + i*n - ? This last part I can't figure out, maybe I made a mistake.
However, in the answer she provides, she again starts with another substitution: let n = (2^k) - 1. I don't see where this comes from. Why would I do this? What is the logic behind it?
In all of these cases, these substitutions are reasonable because they rewrite the recurrence as one of the form S(k) = aS(k - 1) + f(k). These recurrences are often easier to solve than other recurrences because they define S(k) purely in terms of S(k - 1).
Let's do some examples to see how this works. Consider this recurrence:
T(n) = 1 (if n ≤ 1)
T(n) = 2T(n/4) + sqrt(n) (otherwise)
Here, the size of the problem shrinks by a factor of four on each iteration. Therefore, if the input is a perfect power of four, then the input will shrink from size 4^k to 4^(k-1), from 4^(k-1) to 4^(k-2), etc. until the recursion bottoms out. If we make this substitution and let S(k) = T(4^k), then we get that
S(0) = 1
S(k) = 2S(k - 1) + 2^k
This is now a recurrence relation where S(k) is defined in terms of S(k - 1), which can make the recurrence easier to solve.
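For instance (a sketch of my own), unrolling S(k) = 2S(k - 1) + 2^k with S(0) = 1 in a simple loop reproduces the closed form mentioned in the question, sqrt(n) + (1/2) * sqrt(n) * log(n) with log taken base 2, at n = 4^k:

#include <cmath>
#include <cstdio>

int main() {
    double S = 1.0;                                   // S(0) = T(1) = 1
    for (int k = 1; k <= 10; ++k) {
        S = 2.0 * S + std::pow(2.0, k);               // S(k) = 2 S(k-1) + 2^k
        double n = std::pow(4.0, k);                  // n = 4^k
        double closed = std::sqrt(n) + 0.5 * std::sqrt(n) * std::log2(n);
        std::printf("k = %2d  S(k) = %8.0f  closed form = %8.0f\n", k, S, closed);
    }
    return 0;
}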
Let's look at your original recurrence:
T(n) = a (for n ≤ 2)
T(n) = 8T(n/2) + bn^2
Notice that the recursive step divides n by two. If n is a perfect power of two, then the recursive step considers the power of two that comes right before n. Letting S(k) = T(2^k) gives
S(k) = a (for k ≤ 1)
S(k) = 8S(k - 1) + b * 2^(2k)
Notice that S(k) is defined in terms of S(k - 1), which is a much easier recurrence to solve. The choice of powers of two was "natural" here because it made the recursive step talk purely about the previous value of S and not some arbitrarily smaller value of S.
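To see concretely why this form is easy to work with (my own sketch; a = b = 1 is an arbitrary choice for illustration), iterating S(k) = 8S(k - 1) + b * 2^(2k) in a loop and dividing by 8^k, which is n^3 for n = 2^k, shows the ratio settling to a constant, so T(n) grows like n^3:

#include <cmath>
#include <cstdio>

int main() {
    const double a = 1.0, b = 1.0;                  // arbitrary positive constants
    double S = a;                                   // S(1) = a, since T(n) = a for n <= 2
    for (int k = 2; k <= 20; ++k) {
        S = 8.0 * S + b * std::pow(2.0, 2 * k);     // S(k) = 8 S(k-1) + b 2^(2k)
        double n_cubed = std::pow(8.0, k);          // n^3 when n = 2^k
        std::printf("k = %2d  S(k) / n^3 = %.6f\n", k, S / n_cubed);
    }
    return 0;
}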
Now, look at the last recurrence:
T(n) = 1 (n ≤ 1)
T(n) = 2T( (n-1)/2 ) + n
We'd like to make a substitution n = f(k) such that (f(k) - 1) / 2 = f(k - 1), so that setting S(k) = T(f(k)) turns the recurrence into S(k) = 2S(k - 1) + f(k). The question is how to find such an f.
With some trial and error, we get that setting f(k) = 2^k - 1 fits the bill, since
(f(k) - 1) / 2 = ((2^k - 1) - 1) / 2 = (2^k - 2) / 2 = 2^(k-1) - 1 = f(k - 1)
Therefore, letting n = 2^k - 1 and setting S(k) = T(2^k - 1), we get
S(k) = 1 (if k ≤ 1)
S(k) = 2S(k - 1) + 2^k - 1
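And to double-check the substitution (a small sketch of my own), evaluating the original recurrence at n = 2^k - 1 and the transformed recurrence side by side gives identical values:

#include <cstdio>

// Original recurrence: T(n) = 2T((n - 1)/2) + n, with T(n) = 1 for n <= 1.
long long T(long long n) {
    if (n <= 1) return 1;
    return 2 * T((n - 1) / 2) + n;
}

int main() {
    long long S = 1;                                // S(1) = T(2^1 - 1) = T(1) = 1
    for (int k = 2; k <= 20; ++k) {
        long long n = (1LL << k) - 1;               // n = 2^k - 1
        S = 2 * S + (1LL << k) - 1;                 // S(k) = 2 S(k-1) + 2^k - 1
        std::printf("k = %2d  T(2^k - 1) = %10lld  S(k) = %10lld\n", k, T(n), S);
    }
    return 0;
}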
Hope this helps!