I recently read in Steven Skiena's "The Algorithm Design Manual" that the harmonic sum is O(lg n). I understand how this is possible mathematically, but when I saw the recursive solution to it, which is the following:
public static double harmonic(int n) {
    if (n == 1) {
        return 1.0;
    } else {
        return (1.0 / n) + harmonic(n - 1);
    }
}
I cannot see how this is O(lg n); I see it as O(n). Is the code above really O(lg n)? If so, how? If not, can someone refer me to an example where this is the case?
I believe the confusion here is that big-O notation can be used not only for running times, but for the growth of functions in general.
As kaya3 comments, the return value of the function above is O(lg n), which means it grows asymptotically as fast as lg n as you increase n. This is quite likely what the book is claiming. It is a mathematical property of the harmonic series.
The running time of the function, however, is O(n). This is what you are claiming, and it is perfectly true (assuming constant-time arithmetic).
If you are still confused about how big-O notation can be used for purposes other than the running time of algorithms, I recommend you check its definition.
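To make the distinction concrete, here is a small sketch (the iterative helper and its name are mine, not from the book): the *value* H(n) tracks ln(n) plus the Euler-Mascheroni constant, i.e. it grows like O(lg n), while *computing* it takes n additions, i.e. O(n) time.

```java
public class HarmonicDemo {
    // Value of the harmonic sum H(n) = 1 + 1/2 + ... + 1/n,
    // computed in O(n) time with n additions.
    static double harmonicIter(int n) {
        double sum = 0.0;
        for (int k = 1; k <= n; k++) {
            sum += 1.0 / k;
        }
        return sum;
    }

    public static void main(String[] args) {
        // H(n) - ln(n) converges to the Euler-Mascheroni constant (~0.5772):
        // the returned value grows logarithmically even though the work is linear.
        System.out.println(harmonicIter(1_000_000) - Math.log(1_000_000));
    }
}
```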
Related
Are there algorithms that are only feasible in a recursive manner? If yes, how can one identify these algorithms?
Consider the following as a case study.
In order to approximate an exponential, one can use
e^f ≈ 1 + f(1 + (f/2)(1 + (f/3)(1 + f/4)))
Here is a function that computes this approximation at a level of n steps using recursion.
exp_approximation <- function(f, n, i = 1)  # i = 1 so the result matches 1 + f(1 + (f/2)(...))
{
  if (n == 1)
  {
    return( 1 + f/i )
  } else
  {
    1 + f/i * exp_approximation(f, n = n - 1, i = i + 1)
  }
}
Is it possible to write the same algorithm using iteration instead of recursion?
I wrote my code in R, but I welcome solutions in pseudocode or some other commonly used language such as C, C++, Java, or Python.
Yes, technically it's possible. See, for example, this question for a generic conversion method. That said, many algorithms are much more intuitive in the recursive form (and might be more efficient in functional languages but not in imperative ones). That's especially true for math-related problems involving recurrences, where recursion is in fact a representation of a recurrence relation.
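For this particular recurrence the conversion is in fact mechanical: the recursion builds the Horner form from the inside out, so an iterative version can simply run the counter downwards. A sketch in Java (one of the languages the question allows; the method name is mine):

```java
public class ExpApprox {
    // Iterative evaluation of 1 + f(1 + (f/2)(1 + (f/3)(...))) with n steps,
    // i.e. the degree-n Taylor polynomial of e^f in Horner form.
    static double expApprox(double f, int n) {
        double acc = 1.0;
        // Start from the innermost factor (i = n) and wrap outwards to i = 1.
        for (int i = n; i >= 1; i--) {
            acc = 1.0 + (f / i) * acc;
        }
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(expApprox(1.0, 10)); // close to Math.E
    }
}
```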
First of all, yes, I know there are a ton of similar questions about this, but I still don't get it.
The Big-O complexity of this code is O(2^n):
public static int Fibonacci(int n)
{
    if (n <= 1)
        return n;
    else
        return Fibonacci(n - 1) + Fibonacci(n - 2);
}
And yet if I run it with, say, 6, the function only gets called 25 times, as you can see in this picture:
[Fibonacci call tree]
Shouldn't it be 64, because 2^6 = 64?
A few issues with the logic here:
Big O notation only gives an upper asymptotic bound, not a tight bound; that's what big Theta is for.
Fibonacci is actually Theta(phi^n), if you are looking for a tighter bound (where phi is the "golden ratio", phi ≈ 1.618).
When talking about asymptotic notation, there isn't much point in looking at small inputs, since constant factors and lower-order terms are omitted from the asymptotic complexity (though that is not the main issue here).
Here, the issue is that Fibonacci is indeed O(2^n), but that bound is not tight, so the actual number of calls is lower than the estimate (for sufficiently large n onwards).
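One can check the count empirically by instrumenting the function with a call counter (a sketch; the counter field and class name are mine). The exact number of invocations is 2*F(n+1) - 1, which for n = 6 gives 2*13 - 1 = 25, well below the loose 2^6 = 64 estimate:

```java
public class FibCalls {
    static int calls = 0;

    static int fibonacci(int n) {
        calls++;                       // count every invocation
        if (n <= 1)
            return n;
        return fibonacci(n - 1) + fibonacci(n - 2);
    }

    public static void main(String[] args) {
        fibonacci(6);
        System.out.println(calls);     // 25, not 2^6 = 64
    }
}
```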
The Fibonacci series is 0 1 1 2 3 5 8... and so on. It can be obtained iteratively by swapping two variables and displaying them, or by using an array. In an interview I was asked to find it using recursion, and the main logic for it was:
int fib(int n) {
    if (n < 2)
        return n;
    else
        return fib(n - 1) + fib(n - 2);
}
It causes problems for big numbers because the number of calls grows exponentially. So what is the optimal approach here?
Ironically, the method used above, i.e. binary recursion, computes the Fibonacci number by making two recursive calls in each non-base case. Unfortunately, such a direct implementation of the Fibonacci formula requires an exponential number of calls to the method.
We are tempted to use the bad recursive formulation because of the way the nth Fibonacci number, F(n), depends on the two previous values, F(n-2) and F(n-1). But notice that after computing F(n-2), the call to compute F(n-1) requires its own recursive call to compute F(n-2), as it does not have the knowledge of value of F(n-2) that was computed at the earlier level of recursion. That is duplicate work. Worse yet, both of these calls will need to (re)compute the value of F(n-3), as will the computation of F(n-1). This snowballing effect is what leads to the exponential running time of fib().
We can compute F(n) much more efficiently using linear recursion in which each invocation makes only one recursive call. To do so, we need to redefine the expectations of the method. Rather than having the method that returns a single value, which is the nth Fibonacci number, we define a recursive method that returns an array with two consecutive Fibonacci numbers {F(n), F(n-1)} using the convention F(-1)=0. Although it seems to be a greater burden to report two consecutive Fibonacci numbers instead of one, passing this extra information from one level of the recursion to the next makes it much easier to continue the process. (It allows us to avoid having to recompute the second value that was already known within the recursion.)
An implementation based on this strategy is clearly shown here.
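A sketch of that linear-recursion strategy in Java (the method and class names are mine, following the convention F(-1) = 0 described above):

```java
public class FibLinear {
    // Returns the pair {F(n), F(n-1)}, with the convention F(0) = 0, F(-1) = 0.
    // Each invocation makes only one recursive call, so the running time is O(n).
    static long[] fibPair(int n) {
        if (n <= 1)
            return new long[] {n, 0};          // {F(1), F(0)} or {F(0), F(-1)}
        long[] prev = fibPair(n - 1);          // {F(n-1), F(n-2)}
        // Next pair: F(n) = F(n-1) + F(n-2), carrying F(n-1) forward.
        return new long[] {prev[0] + prev[1], prev[0]};
    }

    public static void main(String[] args) {
        System.out.println(fibPair(10)[0]);    // F(10) = 55
    }
}
```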
Memoization: create logic that calculates each Fibonacci number only once.
import java.math.BigInteger;

static BigInteger[] fibNumbs = new BigInteger[10000];

public static void main(String[] args) {
    System.out.println(fibOf(10000));
}

public static BigInteger fibOf(int n) {
    if (n <= 2) {          // F(1) = F(2) = 1 under this indexing
        return BigInteger.ONE;
    }
    if (fibNumbs[n - 1] == null) {
        fibNumbs[n - 1] = fibOf(n - 1);
    }
    if (fibNumbs[n - 2] == null) {
        fibNumbs[n - 2] = fibOf(n - 2);
    }
    return fibNumbs[n - 1].add(fibNumbs[n - 2]);
}
If I told you two consecutive Fibonacci numbers, e.g. a = 3 and b = 5, could you guess the next? It's the two summed, so it's 8. Now with a = 5 and the newly computed b = 8 you can calculate the next, and so on. You start the iteration with the first two numbers, 0 and 1, and the index of the number you'd like; on each iteration you count down, and when you hit zero, a is your answer. This is an O(n) algorithm.
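A sketch of that countdown in Java (the names are mine):

```java
public class FibIter {
    // Iterative Fibonacci: O(n) time, O(1) space, no recursion depth to worry about.
    static long fib(int n) {
        long a = 0, b = 1;             // a = F(0), b = F(1)
        while (n > 0) {
            long next = a + b;         // the two summed
            a = b;
            b = next;
            n--;                       // count down to the index we want
        }
        return a;                      // when we hit zero, a is the answer
    }

    public static void main(String[] args) {
        System.out.println(fib(10));   // 55
    }
}
```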
I have no idea if this is even remotely possible (I looked up "computing algebra" etc. with discouraging results). How can one do algebra and find derivatives with Unity?
For example, simplifying the distance formula with one variable (x unknown, some function f(x) known):
d = sqrt( (int-x)^2 + (int-f(x))^2 );
and then finding the derivative of this simplified expression?:
d=>d'
Thank you for your time and any light you can shed on this question. And once again, I have no idea if algebraic operations are even commonplace among most programs, let alone Unity-script specifically.
I have also noticed a few systems claiming algebra manipulation (e.g. http://coffeequate.readthedocs.org/en/latest/), but even so, how would one go about applying these systems to Unity?
If you are writing in C#, you can pull off derivatives with delegates and the definition of a derivative, like this:
delegate double MathFunc(double d);
MathFunc derive(MathFunc f, float h) {
return (x) => (f(x+h) - f(x)) / h;
}
where f is the function you are taking the derivative of, and h determines how accurate your derivative is (smaller h is generally more accurate, up to floating-point limits).
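The same idea can be sketched in Java with lambdas, here using the central difference (f(x+h) - f(x-h)) / 2h, which is usually more accurate than the one-sided quotient for the same h (the class and method names are mine):

```java
import java.util.function.DoubleUnaryOperator;

public class Derivative {
    // Numerical derivative via the central difference quotient.
    static DoubleUnaryOperator derive(DoubleUnaryOperator f, double h) {
        return x -> (f.applyAsDouble(x + h) - f.applyAsDouble(x - h)) / (2 * h);
    }

    public static void main(String[] args) {
        DoubleUnaryOperator square = x -> x * x;
        DoubleUnaryOperator dSquare = derive(square, 1e-5);
        System.out.println(dSquare.applyAsDouble(3.0)); // ~6.0, since d(x^2)/dx = 2x
    }
}
```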
What is the Big-O running time of the following pseudocode:
rec(int N) {
    if (N < 0) return;
    for i = 1:N {
        for j = 1:N {
            print("waste time");
        }
        rec(N - 1);
    }
}
If my understanding is correct, the precise running time of this code would be
N^2 * 1 + (N-1)^2 * N + (N-2)^2 * N * (N-1) + ... + N!
or equivalently
the sum of (N-k)^2 * P(N,k) from k=0 to k=N-1, where P(N,k) = N!/(N-k)! is the number of k-permutations of N.
Would the Big O runtime still be O(N!)? What if we nested the "waste time" loop even more? What if we replaced the "waste time" loop with something that takes 2^(N-k) time instead of (N-k)^2 time?
My guess is that the answer to all of these questions is still O(N!) because the last few terms of the series dominate. Please correct me if I'm wrong.
You're right: in all of the scenarios you described, it would still be O(n!), because that's what dominates the series; the factorial grows much faster than the other factors and quickly becomes the dominant part of the algorithm's running time.
Unless you replace the waste time loop with something worse than O(n!) (for example, something O(n^n)), it'll always be O(n!).
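One way to make this rigorous (my derivation, not part of the original answer): substitute j = N - k in the sum from the question. The total stays O(N!) whenever the per-level work grows more slowly than j!, because the resulting series converges:

```latex
\sum_{k=0}^{N-1} (N-k)^2 \frac{N!}{(N-k)!}
  = N! \sum_{j=1}^{N} \frac{j^2}{j!}
  \le N! \sum_{j=1}^{\infty} \frac{j^2}{j!}
  = 2e \cdot N! = O(N!)
```

Replacing the (N-k)^2 work with 2^(N-k) only changes the convergent series, not the conclusion:

```latex
N! \sum_{j=1}^{N} \frac{2^j}{j!} \le (e^2 - 1) \cdot N! = O(N!)
```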