Extension of Fast Doubling Method - math

Find the Nth term of a sequence in which
F(N) = F(N-1) + F(N-2) + F(N-1)×F(N-2)
mod some large number, say 10^9+7. F(0)=a and F(1)=b are also given.
I am trying the Fast Doubling Method but I am not able to derive the matrix. How can I compute this efficiently, other than with the obvious O(n) algorithm?

Note that 1+F[n] = (1+F[n-1])(1+F[n-2]), so consider G[n]=log(1+F[n]) to find
G[n] = G[n-1] + G[n-2]
This is the Fibonacci recursion that has the general solution
G[n] = Fib[n-1]*G[0] + Fib[n]*G[1]
which translates to
1+F[n] = (1+F[0])^Fib[n-1] * (1+F[1])^Fib[n]
where Fib is the Fibonacci sequence that has values 1,0,1 for indices n=-1,0,1.
Now apply the usual techniques for the Fibonacci sequence.
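A minimal sketch of this in Python (my own, not from the answer), assuming the modulus p = 10^9+7 is prime and that 1+F(0) and 1+F(1) are not multiples of p, so the Fibonacci exponents can be reduced mod p-1 by Fermat's little theorem; the helper names fib_pair, nth_term, and naive are mine:

def fib_pair(n, mod):
    # Return (Fib(n), Fib(n+1)) modulo `mod` via fast doubling.
    if n == 0:
        return (0, 1)
    f, g = fib_pair(n // 2, mod)            # f = Fib(k), g = Fib(k+1), k = n // 2
    c = f * ((2 * g - f) % mod) % mod       # Fib(2k)   = Fib(k) * (2*Fib(k+1) - Fib(k))
    d = (f * f + g * g) % mod               # Fib(2k+1) = Fib(k)^2 + Fib(k+1)^2
    return (d, (c + d) % mod) if n % 2 else (c, d)

def nth_term(a, b, n, p=10**9 + 7):
    # F(n) for F(n) = F(n-1) + F(n-2) + F(n-1)*F(n-2), F(0) = a, F(1) = b, mod p.
    if n == 0:
        return a % p
    e1, e2 = fib_pair(n - 1, p - 1)         # (Fib(n-1), Fib(n)), reduced mod p-1
    return (pow(1 + a, e1, p) * pow(1 + b, e2, p) - 1) % p

def naive(a, b, n, p=10**9 + 7):
    # The obvious O(n) recurrence, used only as a cross-check.
    f, g = a % p, b % p
    for _ in range(n):
        f, g = g, (f + g + f * g) % p
    return f

print(nth_term(2, 3, 20), naive(2, 3, 20))  # the two values should agree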

Related

Time Complexity of recursive Power Set function

I am having trouble simplifying the time complexity of this recursive algorithm for finding the Power-Set of a given Input Set. I'm not entirely sure whether what I have got so far is correct, either.
It's described at the bottom of the page in this link: http://www.ecst.csuchico.edu/~akeuneke/foo/csci356/notes/ch1/solutions/recursionSol.html
By considering each step taken by the function for an arbitrarily chosen Input Set of size 4 and then translating that to an Input Set of size n, I came to the result that the time complexity in terms of Big-O notation for this algorithm is: 2^n * n^n
Is this correct? And is there a specific way to approach finding the time-complexity of recursive functions?
The run-time is actually O(n*2^n). The simple explanation is that this is an asymptotically optimal algorithm insofar as the total work it does is dominated by creating the subsets which feature directly in the final output of the algorithm, with the total length of the output generated being O(n*2^n). We can also analyze an annotated implementation of the pseudo-code (in JavaScript) to show this complexity more rigorously:
function powerSet(S) {
  if (S.length == 0) return [[]]                         // O(1)
  let e = S.pop()                                        // O(1)
  let pSetWithoutE = powerSet(S);                        // T(n-1)
  let pSet = pSetWithoutE                                // O(1)
  pSet.push(...pSetWithoutE.map(set => set.concat(e)))   // O(2*|T(n-1)| + ||T(n-1)||)
  return pSet;                                           // O(1)
}
// print example:
console.log('{');
for (let subset of powerSet([1,2,3])) console.log(`\t{`, subset.join(', '), `}`);
console.log('}')
Where T(n-1) represents the run-time of the recursive call on n-1 elements, |T(n-1)| represents the number of subsets in the power-set returned by the recursive call, and ||T(n-1)|| represents the total number of elements across all subsets returned by the recursive call.
The line whose complexity is expressed in these terms corresponds to the second bullet point of step 2 of the pseudocode: returning the union of the power-set without element e, and that same power-set with every subset s unioned with e:
(1) ∪ (2), where (2) = { s ∪ {e} : s ∈ (1) }
This union is implemented in terms of push and concat operations. The push does the union of (1) with (2) in |T(n-1)| time as |T(n-1)| new subsets are being unioned into the power-set. The map of concat operations is responsible for generating (2) by appending e to every element of pSetWithoutE in |T(n-1)| + ||T(n-1)|| time. This second complexity corresponds to there being ||T(n-1)|| elements across the |T(n-1)| subsets of pSetWithoutE (by definition), and each of those subsets being increased in size by 1.
We can then represent the run-time on input size n in these terms as:
T(n) = T(n-1) + 2|T(n-1)| + ||T(n-1)|| + 1; T(0) = 1
It can be proven via induction that:
|T(n)| = 2^n
||T(n)|| = n*2^(n-1)
which yields:
T(n) = T(n-1) + 2*2^(n-1) + (n-1)*2^(n-2) + 1; T(0) = 1
When you solve this recurrence relation analytically, you get:
T(n) = n + 2^n + (n/2)*2^n = O(n*2^n)
which matches the expected complexity for an optimal power-set generation algorithm. The solution of the recurrence relation can also be understood intuitively:
Each of n iterations does O(1) work outside of generating new subsets of the power-set, hence the n term in the final expression.
In terms of the work done in generating every subset of the power-set, each subset is pushed once after it is generated through concat. There are 2^n subsets pushed, producing the 2^n term. Each of these subsets has an average length of n/2, giving a combined length of (n/2)*2^n, which corresponds to the complexity of all the concat operations. Hence, the total time is given by n + 2^n + (n/2)*2^n.
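As a quick sanity check (my own sketch, not part of the original answer), the two identities used in the induction can be verified empirically with a small Python version of the same construction:

def power_set(s):
    # Same construction as the JavaScript version: peel off one element,
    # recurse, then append that element to a copy of every returned subset.
    if not s:
        return [[]]
    e = s.pop()
    without_e = power_set(s)
    return without_e + [subset + [e] for subset in without_e]

for n in range(1, 8):
    subsets = power_set(list(range(n)))
    assert len(subsets) == 2**n                                 # |T(n)|  = 2^n
    assert sum(len(sub) for sub in subsets) == n * 2**(n - 1)   # ||T(n)|| = n*2^(n-1)
print("identities hold for n = 1..7")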

Time complexity of a function with two recursive calls that divide N in half

int function8(int N) {
    int sum = 0;
    for (int i = 0; i < N; i++) sum += 1;
    if (N > 1)
        return sum + function8(N / 2) + function8(N / 2);
    else
        return 0;
}
What is the time complexity of the above algorithm?
By analogy with the Fibonacci recursion, I figured this would be N*2^N, because the for loop is O(N) and the recursion part is 2^N.
the recursion part is 2^N
No, the recursion depth is O(log(N)), because N is halved on each recursive call. The algorithm essentially reduces to merge sort, O(N log N): each level of the recursion tree does O(N) work in total, and there are O(log N) levels.
Fibonacci is unrelated here, since that is a recurrence that uses the two previous solutions, N-1 and N-2, to build N, which gives a naive exponential algorithm, as you say. That is a different complexity class from the divide-and-conquer code we're looking at here. If the recursive calls were function8(N - 2) + function8(N - 1);, then you'd be correct.
The time complexity of your implementation is O(N log N) since the recurrence relation is T(N) = O(N) + 2 * T(N/2) (O(N) for computing sum, and the rest for the two recursive calls).
But instead of function8(N / 2) + function8(N / 2) you could have written 2 * function8(N / 2). This would give you the recurrence relation T(N) = O(N) + T(N/2) (only one recursive call),
thereby reducing the complexity to O(N).
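To make the two recurrences concrete, here is a small illustrative Python sketch (not from the answers above) that counts the unit-cost operations each variant performs:

def two_calls(N):
    # T(N) = 2*T(N/2) + N  ->  O(N log N) operations, like the original code.
    ops = N                                          # the O(N) loop
    if N > 1:
        ops += two_calls(N // 2) + two_calls(N // 2)
    return ops

def one_call(N):
    # T(N) = T(N/2) + N  ->  O(N) operations, the 2 * function8(N / 2) variant.
    ops = N
    if N > 1:
        ops += one_call(N // 2)
    return ops

for N in (2**8, 2**12, 2**16):
    print(N, two_calls(N), one_call(N))   # two_calls grows like N*log2(N), one_call like 2*N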

Why does n^O(1) mean “polynomial time?”

An algorithm runs in polynomial time if its runtime is O(n^k) for some k. However, I've also seen polynomial time defined as time n^O(1).
I have some questions about this:
Why is n^O(1) polynomial time? What happened to k?
If n^O(1) is polynomial time, then 3n^2 should be n^O(1). But where did the 3 go? How does that work?
Thanks!
When you have an expression like "the runtime is O(n)" or "the runtime is O(n^2)," the O(n) and O(n^2) terms aren't actual functions. Instead, they're placeholders for some other function with some property. For example, take this statement:
The runtime of the algorithm is O(n)
This statement really means
There is some function f(n) where the runtime of the algorithm is f(n) and f(n) = O(n)
For example, if a function's actual runtime is 137n + 42, the statement "the runtime of the algorithm is O(n)" is true because there is some function (namely, f(n) = 137n + 42) where the runtime of the algorithm is f(n) and f(n) = O(n).
Given this, let's think about what the statement "the runtime of the algorithm is n^O(1)" means. This statement is equivalent to
There is some function f(n) where the runtime of the algorithm is n^f(n) and f(n) = O(1)
Now that we've gotten the terminology clearer, what exactly does this mean? Intuitively, a function is O(1) if it's eventually bounded from above by some constant. Therefore, any function f(n) that's O(1) must satisfy f(n) ≤ k once n gets sufficiently large. Therefore, at least intuitively, n^O(1) means "n raised to some power that's at most k," which sounds like the definition of a polynomial function.
Of course, there's that pesky issue of constant factors. The function 137n^3 is definitely O(n^3), but it has a huge constant term in front. On the other hand, if we have a function of the form n^O(1), there isn't a constant term in front of the n^3. How do we handle this?
This is where we can get cute with the math. In the case of 137n^3, note that when n > 1, we have
137n^3 = n^(log_n 137) * n^3 = n^(3 + log_n 137)
Notice that this is n raised to the power 3 + log_n 137. Although it might look like the function log_n 137 grows as n grows larger, it actually has the opposite behavior: it decreases as n grows. The reason for this is that we can use the change-of-base formula to rewrite log_n 137 as
log_n 137 = log 137 / log n
which clearly decreases in the long run, since log n increases as n increases. Therefore, the expression 3 + log_n 137 ends up being bounded from above by some constant, so it's O(1).
Using this technique, it's possible to convert O(n^k) to n^O(1) by choosing the exponent of n to be k plus the log base n of the constant factor in front of the n^k term that comes up in the big-O notation. Similarly, we can convert back from n^O(1) to O(n^k) by choosing k to be any constant that upper-bounds the function hidden by the O(1) term in the exponent of n.
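A tiny numeric check in Python (purely illustrative) shows that the exponent 3 + log_n 137 stays bounded and shrinks toward 3 as n grows, while n raised to that exponent still reproduces 137n^3:

import math

for n in (2, 10, 100, 10**6):
    exponent = 3 + math.log(137, n)     # log_n(137) = log(137) / log(n)
    # The exponent approaches 3 from above, and n**exponent equals 137*n^3
    # up to floating-point error.
    print(n, round(exponent, 4), n**exponent, 137 * n**3)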
Hope this helps!

Proving the worst-case running time of a recursive power function

I am trying to perform asymptotic analysis on the following recursive function for efficiently raising a number to a power. I am having trouble determining the recurrence equation because there are different equations for when the power is even and when it is odd, and I am unsure how to handle this situation. I understand that the running time is Θ(log n), so any advice on how to arrive at that result would be appreciated.
Recursive-Power(x, n):
    if n == 1
        return x
    if n is even
        y = Recursive-Power(x, n/2)
        return y*y
    else
        y = Recursive-Power(x, (n-1)/2)
        return y*y*x
In either case (n even or odd), the following recurrence holds:
T(n) = T(floor(n/2)) + Θ(1)
where floor(x) is the largest integer not greater than x.
Since the floor does not affect the asymptotic result, the recurrence is usually written informally as:
T(n) = T(n/2) + Θ(1)
You have guessed the asymptotic bound correctly. The result can be proved using the substitution method or the Master theorem; that part is left as an exercise for you.
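For intuition, here is a small illustrative Python version (my own sketch, with a call counter added) that makes the Θ(log n) behaviour visible:

def recursive_power(x, n):
    # Returns (x**n, number of recursive calls made), for n >= 1.
    if n == 1:
        return x, 1
    if n % 2 == 0:
        y, calls = recursive_power(x, n // 2)
        return y * y, calls + 1
    else:
        y, calls = recursive_power(x, (n - 1) // 2)
        return y * y * x, calls + 1

for n in (1, 16, 1000, 10**5):
    value, calls = recursive_power(3, n)
    assert value == 3**n
    print(n, calls)   # grows like log2(n), matching T(n) = T(floor(n/2)) + Θ(1)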

Recursion and Big O

I've been working through a recent Computer Science homework involving recursion and big-O notation. I believe I understand this pretty well (certainly not perfectly, though!), but there is one question in particular that is giving me the most trouble. The odd thing is that, looking at it, it appears to be the simplest one on the homework.
Provide the best rate of growth, using big-Oh notation, for the solution to the following recurrence:
T(1) = 2
T(n) = 2T(n - 1) + 1 for n>1
And the choices are:
O(n log n)
O(n^2)
O(2^n)
O(n^n)
I understand that big O works as an upper bound, describing the maximum number of calculations, or the longest running time, that a program or process will take. I feel like this particular recursion should be O(n), since, at most, the recursion only occurs once for each value of n. Since O(n) isn't among the choices, the answer must be either better than that, O(n log n), or worse, i.e. one of the other three options.
So, my question is: Why isn't this O(n)?
There are a few different ways to solve recurrences: substitution, recursion tree, and the Master theorem. The Master theorem won't work in this case, because the recurrence doesn't fit the Master theorem's form.
You could use the other two methods, but the easiest way for this problem is to solve it iteratively.
T(n) = 2T(n-1) + 1
T(n) = 4T(n-2) + 2 + 1
T(n) = 8T(n-3) + 4 + 2 + 1
T(n) = ...
See the pattern?
T(n) = 2^(n-1)*T(1) + 2^(n-2) + 2^(n-3) + ... + 1
T(n) = 2^(n-1)*2 + 2^(n-2) + 2^(n-3) + ... + 1
T(n) = 2^n + 2^(n-2) + 2^(n-3) + ... + 1
Therefore, the tightest bound is Θ(2^n).
I think you have misunderstood the question a bit. It does not ask you how long it would take to solve the recurrence. It is asking what the big-O (the asymptotic bound) of the solution itself is.
What you have to do is to come up with a closed form solution, i. e. the non-recursive formula for T(n), and then determine what the big-O of that expression is.
The question is asking for the big-Oh notation for the solution to the recurrence, not the cost of calculating the recurrence.
Put another way: the recurrence produces:
1 -> 2
2 -> 5
3 -> 11
4 -> 23
5 -> 47
What big-Oh notation best describes the sequence 2, 5, 11, 23, 47, ...
The correct way to solve that is to solve the recurrence equations.
I think this will be exponential. Each increment to n roughly doubles the value:
T(2) = 2 * T(1) + 1 = 5
T(3) = 2 * T(2) + 1 = 11
...
T(x) would be the running time of the following program (for example):
def fn(x):
    if x == 1:
        return  # a constant amount of time
    # do the calculation for n - 1 twice
    fn(x - 1)
    fn(x - 1)
I think this will be exponential. Each increment to n brings twice as much calculation.
No, it doesn't. Quite on the contrary:
Consider that for n iterations, we get running time R. Then for n + 1 iterations we'll get exactly R + 1.
Thus, the growth rate is constant and the overall runtime is indeed O(n).
However, I think Dima's assumption about the question is right although his solution is overly complicated:
What you have to do is to come up with a closed form solution, i. e. the non-recursive formula for T(n), and then determine what the big-O of that expression is.
It's sufficient to examine the relative sizes of T(n) and T(n + 1) and determine the growth rate from one to the next. The value obviously doubles at each step, which directly gives the exponential asymptotic growth.
First off, all four answers are worse than O(n)... O(n*log n) is more complex than plain old O(n). What's bigger: 8 or 8 * 3, 16 or 16 * 4, etc...
On to the actual question. The closed form can obviously be evaluated in constant time without any recursion
(T(n) = 2^(n-1) + 2^n - 1), so that's not what they're asking.
And as you can see, if we write the recursive code:
int T(int N)
{
    if (N == 1) return 2;
    return 2*T(N-1) + 1;
}
It's obviously O(n).
So, it appears to be a badly worded question, and they are probably asking you the growth of the function itself, not the complexity of the code. That's 2^n. Now go do the rest of your homework... and study up on O(n * log n)
Computing a closed form solution to the recursion is easy.
By inspection, you guess that the solution is
T(n) = 3*2^(n-1) - 1
Then you prove by induction that this is indeed a solution. Base case:
T(1) = 3*2^0 - 1 = 3 - 1 = 2. OK.
Induction:
Suppose T(n) = 3*2^(n-1) - 1. Then
T(n+1) = 2*T(n) + 1 = 3*2^n - 2 + 1 = 3*2^((n+1)-1) - 1. OK.
where the first equality stems from the recurrence definition,
and the second from the inductive hypothesis. QED.
3*2^(n-1) - 1 is clearly Theta(2^n), hence the right answer is the third.
To the folks who answered O(n): I couldn't agree more with Dima. The problem does not ask for the tightest upper bound on the computational complexity of an algorithm to compute T(n) (which would now be O(1), since its closed form has been provided). The problem asks for the tightest upper bound on T(n) itself, and that bound is the exponential one.
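As a final sanity check (an illustrative sketch, not part of the answers above), the closed form can be compared against the recurrence directly in Python:

def T(n):
    # The recurrence as given: T(1) = 2, T(n) = 2*T(n-1) + 1.
    return 2 if n == 1 else 2 * T(n - 1) + 1

for n in range(1, 11):
    closed_form = 3 * 2**(n - 1) - 1
    assert T(n) == closed_form
    print(n, closed_form)   # 2, 5, 11, 23, 47, ... exactly the sequence listed above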

Resources