I came across this sequence in a programming contest
F(n) = F(n-1) - F(n-2)
Given F0 and F1, find the nth term.
(http://codeforces.com/contest/450/problem/B) (the contest is over)
Now the solution of this problem goes like this:
The sequence takes the values f0, f1, f1-f0, -f0, -f1, f0-f1, then f0 again, and the whole sequence repeats.
I see that the values repeat, but I could not find the reason for this cyclic order. I searched for cyclic orders and sequences but could not find material that gave me a real feel for why the cycle occurs.
Apply your original formula with n-1 in place of n:
F(n -1) = F(n-2) - F(n -3)
Then, replacing F(n-1) in the original F(n) expression:
F(n) = F(n-2) - F(n -3) - F(n-2) = -F(n - 3)
F(n) = - F(n-3)
Since the latter is also valid if we replace n with n-3:
F(n - 3) = - F(n -6)
Combining the last two
F(n) = -(-F(n-6)) = F(n-6)
Thus the sequence is cyclic with period six.
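The cycle is easy to confirm empirically. Below is a small sketch (not from the contest; the helper name and starting values are my own) that iterates the recurrence and checks that the values repeat with period six:

```python
# Iterate F(n) = F(n-1) - F(n-2) and confirm the period-6 cycle.
def sequence(f0, f1, count):
    vals = [f0, f1]
    for _ in range(count - 2):
        vals.append(vals[-1] - vals[-2])
    return vals

vals = sequence(3, 5, 14)
# One full cycle: f0, f1, f1-f0, -f0, -f1, f0-f1, then it repeats.
assert vals[:6] == [3, 5, 2, -3, -5, -2]
assert vals[6:12] == vals[:6]
```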
Another way to approach this problem. Note that F(n) = F(n - 1) - F(n - 2) is the same as F(n) - F(n - 1) + F(n - 2) = 0 which makes it a linear difference equation. Such equations have fundamental solutions a^n where a is a root of a polynomial: suppose F(n) = a^n, then a^n - a^(n - 1) + a^(n - 2) = (a^2 - a + 1)*a^(n - 2) = 0, so a^2 - a + 1 = 0 which has two complex roots (you can find them) which have modulus 1 and argument pi/3. So their powers 1, a, a^2, a^3, ... travel around the unit circle and come back to 1 after 2 pi/(pi/3) = 6 steps.
This analysis has the same defect as the corresponding one for differential equations -- how do you know to look for solutions of a certain kind? I don't have an answer for that, maybe someone else does. In the meantime, whenever you see a linear difference equation, think about solutions of the form a^n.
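For what it's worth, the claim about the roots can be checked numerically. This sketch assumes nothing beyond the quadratic formula applied to a^2 - a + 1 = 0:

```python
# Verify that the roots of a^2 - a + 1 = 0 lie on the unit circle with
# argument pi/3, so a^6 = 1 and the sequence has period 6.
import cmath

roots = [(1 + cmath.sqrt(-3)) / 2, (1 - cmath.sqrt(-3)) / 2]
for a in roots:
    assert abs(abs(a) - 1) < 1e-12                        # modulus 1
assert abs(cmath.phase(roots[0]) - cmath.pi / 3) < 1e-12  # argument pi/3
assert abs(roots[0] ** 6 - 1) < 1e-12                     # back to 1 after 6 steps
```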
Related
I'm really confused about simplifying this recurrence relation: c(n) = c(n/2) + n^2.
So I first got:
c(n/2) = c(n/4) + n^2
so
c(n) = c(n/4) + n^2 + n^2
c(n) = c(n/4) + 2n^2
c(n/4) = c(n/8) + n^2
so
c(n) = c(n/8) + 3n^2
I do sort of notice a pattern though:
2 raised to the power of whatever coefficient is in front of "n^2" gives the denominator of what n is over.
I'm not sure if that would help.
I just don't understand how I would simplify this recurrence relation and then find the theta notation of it.
EDIT: Actually I just worked it out again and I got c(n) = c(n/n) + n^2*lgn.
I think that is correct, but I'm not sure. Also, how would I find the theta notation of that? Is it just theta(n^2lgn)?
First, make sure to substitute n/2 everywhere n appears in the original recurrence relation when placing c(n/2) on the left-hand side,
i.e.
c(n/2) = c(n/4) + (n/2)^2
Your intuition is correct, in that it is a very important part of the problem. How many times can you divide n by 2 before we reach 1?
Let's take 8 for an example
8/2 = 4
4/2 = 2
2/2 = 1
You see it's 3, which as it turns out is log2(8).
In order to prove the theta notation, it might be helpful to check out the master theorem. This is a very useful tool for proving complexity of a recurrence relation.
Using the master theorem case 3, we can see:
a = 1
b = 2
log_b(a) = 0
f(n) = n^2 = Omega(n^c) for c = 2 > log_b(a)
Regularity condition: a*f(n/b) = (n/2)^2 <= k*n^2 holds for k = 9/10 < 1
c(n) = Theta(n^2)
The intuition as to why the answer is Theta(n^2) is that the total cost is n^2 + (n^2)/4 + (n^2)/16 + ... down to a constant-size term: not log n copies of n^2, but geometrically shrinking ones, so the whole sum is bounded by a constant times n^2.
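To make the geometric decay concrete, here is a small sketch (a hypothetical helper of my own, assuming n is a power of two) listing the per-level costs:

```python
# List the cost n^2, (n/2)^2, (n/4)^2, ... contributed by each level.
def level_costs(n):
    costs = []
    while n > 1:
        costs.append(n * n)
        n //= 2
    return costs

costs = level_costs(1 << 16)
# Each level costs a quarter of the previous one...
assert all(4 * later == earlier for earlier, later in zip(costs, costs[1:]))
# ...so the total is below (4/3) * n^2: Theta(n^2), not Theta(n^2 log n).
assert sum(costs) < (4 / 3) * costs[0]
```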
Let's answer a more generic question for recurrences of the form:
r(n) = r(d(n)) + f(n). There are some restrictions on the functions that need further discussion, e.g. if x is a fixed point of d, then f(x) should be 0, otherwise there isn't any solution. In your specific case this condition is satisfied.
Rearranging the equation, we get r(n) - r(d(n)) = f(n), which suggests that r(n) and r(d(n)) are both sums of some terms, with r(n) having one more term than r(d(n)); that extra term is f(n). On the other hand, r(n) and r(d(n)) have to have the same 'form', so the number of terms in the sum has to be infinite.
Thus we are looking for a telescopic sum, in which the terms for r(d(n)) cancel out all but one terms for r(n):
r(n) = f(n) + a_0(n) + a_1(n) + ...
- r(d(n)) = - a_0(n) - a_1(n) - ...
This latter means that
r(d(n)) = a_0(n) + a_1(n) + ...
And just by substituting d(n) into the place of n into the equation for r(n), we get:
r(d(n)) = f(d(n)) + a_0(d(n)) + a_1(d(n)) + ...
So by choosing a_0(n) = f(d(n)), a_1(n) = a_0(d(n)) = f(d(d(n))), and so on: a_k(n) = f(d(d(...d(n)...))) (with k+1 pieces of d in each other), we get a correct solution.
Thus in general, the solution is of the form r(n) = sum{i=0..infinity}(f(d[i](n))), where d[i](n) denotes the function d(d(...d(n)...)) with i number of iterations of the d function.
For your case, d(n)=n/2 and f(n)=n^2, hence you can get the solution in closed form by using identities for geometric series. The final result is r(n)=4/3*n^2.
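As a sanity check, this generic sketch (the function names are mine) sums f over the iterates of d, stopping once the iterates reach the base region, and compares against the closed form for this case:

```python
# Compute r(n) = sum_i f(d^[i](n)) by iterating d until a stop condition.
def solve(f, d, n, stop=lambda x: x <= 1):
    total = 0.0
    while not stop(n):
        total += f(n)
        n = d(n)
    return total

r = solve(f=lambda x: x * x, d=lambda x: x / 2, n=1024.0)
# Matches the closed form (4/3) * n^2 up to the truncated tail of the series.
assert abs(r - (4 / 3) * 1024 ** 2) / 1024 ** 2 < 1e-5
```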
Go for the advanced (extended) Master Theorem. It handles recurrences of the form
T(n) = a*T(n/b) + n^k * log^p(n)
where a > 0, b > 1, k >= 0 and p is any real number.
case 1: a > b^k
T(n) = Theta(n^(log_b a))
case 2: a = b^k
1. if p > -1, then T(n) = Theta(n^(log_b a) * log^(p+1) n)
2. if p = -1, then T(n) = Theta(n^(log_b a) * log log n)
3. if p < -1, then T(n) = Theta(n^(log_b a))
case 3: a < b^k
1. if p >= 0, then T(n) = Theta(n^k * log^p n)
2. if p < 0, then T(n) = O(n^k)
Constant factors are ignored because they do not change the time complexity and vary from processor to processor (i.e. n/2 = n * 1/2, which is still Theta(n)).
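For instance (a worked check of my own), the earlier recurrence c(n) = c(n/2) + n^2 has a = 1, b = 2, k = 2, p = 0, so a < b^k and p >= 0: case 3.1 gives Theta(n^2 * log^0 n) = Theta(n^2). A quick numeric sketch, assuming a base case c(1) = 0:

```python
# c(n) = c(n/2) + n^2 with assumed base case c(1) = 0.
def c(n):
    return 0 if n <= 1 else c(n // 2) + n * n

ratios = [c(n) / (n * n) for n in (2 ** 10, 2 ** 14, 2 ** 18)]
# The ratio c(n)/n^2 stays bounded (it tends toward the constant 4/3),
# consistent with the predicted Theta(n^2).
assert all(1 <= r < 4 / 3 + 0.01 for r in ratios)
```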
We are to solve the recurrence relation through repeating substitution:
T(n)=T(n-1)+logn
I started the substitution and got the following.
T(n)=T(n-2)+log(n)+log(n-1)
By logarithm product rule, log(mn)=logm+logn,
T(n)=T(n-2)+log[n*(n-1)]
Continuing this, I get
T(n)=T(n-k)+log[n*(n-1)*...*(n-k+1)]
We know that the base case is T(1), so n-k=1 -> k=n-1, and substituting this in we get
T(n)=T(1)+log[n*(n-1)*...*2]
Clearly n*(n-1)*...*2*1 = n! so,
T(n)=T(1)+log(n!)
I do not know how to solve beyond this point. Is the answer simply O(log(n!))? I have read other explanations saying that it is Θ(nlogn) and thus it follows that O(nlogn) and Ω(nlogn) are the upper and lower bounds respectively.
This expands out to log (n!). You can see this because
T(n) = T(n - 1) + log n
= T(n - 2) + log (n - 1) + log n
= T(n - 3) + log (n - 2) + log (n - 1) + log n
= ...
= T(0) + log 1 + log 2 + ... + log (n - 1) + log n
= T(0) + log n!
The exact answer depends on what T(0) is, but this is Θ(log n!) for any fixed constant value of T(0).
A note - using Stirling's approximation, Θ(log n!) = Θ(n log n). That might help you relate this back to existing complexity classes.
Hope this helps!
Stirling's formula is not needed to get the big-Theta bound. It's O(n log n) because it's a sum of at most n terms each at most log n. It's Omega(n log n) because it's a sum of at least n/2 terms each at least log (n/2) = log n - 1.
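Here is a numeric sketch of that sandwich argument (my own illustration): log(n!) is at most n terms each at most log n, and at least n/2 terms each at least log(n/2):

```python
# Check log2(n!) against the upper and lower bounds from the argument above.
import math

n = 1000
log_fact = sum(math.log2(k) for k in range(1, n + 1))  # log2(n!)
assert log_fact <= n * math.log2(n)                    # n terms, each <= log n
assert log_fact >= (n / 2) * math.log2(n / 2)          # top n/2 terms, each >= log(n/2)
```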
Yes, this is a linear recurrence of the first order. It can be solved exactly. If your initial value is $T(1) = 0$, you do get $T(n) = \log n!$. You can approximate $\log n!$ (see Stirling's formula):
$$
\ln n! = n \ln n - n + \frac{1}{2} \ln (2 \pi n) + O(1/n)
$$
I am confused on how to use mathematical induction to prove Big O for a recursive function, given using its recursion relation.
Example:
The recurrence relation for the recursive implementation of Towers of Hanoi is T(n) = 2T(n-1) + 1
with T(1) = 1. We claimed that T(n) = 2^n - 1, i.e. the method is O(2^n). Prove this claim using mathematical induction.
In the case of recursion, do I always assume that n = k-1, rather than n=k? This is the assumption that the lecture notes give.
Assume f(n-1) = 2^(n-1) - 1 is true.
I understand that with non-recursive mathematical induction we assume n = k, because it is only a change of variables. Why, then, is it safe to assume n = k - 1?
One possible way: postulate a non-recursive formula for T and prove it. After that, show that the formula you found lies in the big-O class you wanted.
For the proof, you may use induction, which is quick and easy in this case. To do that, you first show that your formula holds for the first value (usually 0 or 1; in your example that's 1, and it's trivial).
Then you show that if it holds for any number n - 1, it also holds for its successor n. For that you use the definition of T(n) (in your example, T(n) = 2T(n - 1) + 1): since you know that your formula holds for n - 1, you can replace occurrences of T(n - 1) with your formula. In your example you then get (with the formula T(n) = 2^n - 1)
T(n) = 2T(n - 1) + 1
= 2(2^(n - 1) - 1) + 1
= 2^n - 2 + 1
= 2^n - 1
As you can see, it holds for n if we assume it holds for n - 1.
Now comes the trick of induction: we showed that our formula holds for n = 1, and we showed that if it holds for any n = k - 1, it holds for k as well. That is, as we proved it for 1, it is also proven for 2. And as it is proven for 2, it is also proven for 3. And so on.
Thus, we do not assume that the claim is true for n - 1 in our proof; we only made a statement under the assumption that it is true, then proved our formula for one initial case, and used induction.
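As an empirical complement to the induction (a sketch of my own, not part of the proof), you can evaluate the recurrence directly and compare it with the closed form:

```python
# Evaluate T(n) = 2*T(n-1) + 1 with T(1) = 1 and compare to 2^n - 1.
def T(n):
    return 1 if n == 1 else 2 * T(n - 1) + 1

assert all(T(n) == 2 ** n - 1 for n in range(1, 20))
```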
I've been doing some past papers for my CompSci course and I've run into a bit of trouble understanding this question:
"Define a recursive relationship that expresses the number of calls involved in using the below function to find the nth Fibonacci number."
def f(n):
    if n == 1 or n == 2:
        return 1
    else:
        return f(n - 1) + f(n - 2)
I understand how the function works: f(1) and f(2) each require 1 call, f(3) requires 3, f(4) requires 5, etc. However, I'm at a loss as to how to approach this question.
Thanks for reading :)
The question asks you to explain how many calls will be made to f based on n. The part that says, "Define a recursive relationship" is actually a hint about your answer.
So your answer will look something like:
Let T(x) be the function which defines the number of calls to compute f(x)
Then:
T(n) = { something using T and values less than n }
If you are trying to figure this out yourself, stop here; spoilers follow (so that your question is answered completely).
---------------------------------- Spoiler -------------------------------
n=1: T(1) = 1
n=2: T(2) = 1
n>2: T(n) = 1 + T(n - 1) + T(n - 2)
--------------------------------- End Spoiler ------------------------------
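You can verify the spoiler's relation by instrumenting the function to count its own calls (a sketch of my own; the counter-passing style is just one way to do it):

```python
# Count actual calls to the Fibonacci function and check
# T(n) = 1 + T(n-1) + T(n-2) for n > 2, with T(1) = T(2) = 1.
def f_counted(n, counter):
    counter[0] += 1
    if n == 1 or n == 2:
        return 1
    return f_counted(n - 1, counter) + f_counted(n - 2, counter)

def calls(n):
    counter = [0]
    f_counted(n, counter)
    return counter[0]

assert calls(1) == 1 and calls(2) == 1
assert all(calls(n) == 1 + calls(n - 1) + calls(n - 2) for n in range(3, 15))
```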
I'm trying to solve a recurrence T(n) = T(n/8) + T(n/2) + T(n/4).
I thought it would be a good idea to first try a recurrence tree method, and then use that as my guess for substitution method.
For the tree, since no work is being done at the non-leaves levels, I thought we could just ignore that, so I tried to come up with an upper bound on the # of leaves since that's the only thing that's relevant here.
I considered the height of the tree along its longest path, through T(n/2), which yields a height of log2(n). I then assumed the tree is complete, with all levels filled (i.e. as if the recurrence were T(n) = 3T(n/2)), so there would be 3^i nodes at level i, and n^(log2(3)) leaves. T(n) would then be O(n^(log2(3))).
Unfortunately I think this is an unreasonable upper bound, I think I've made it a bit too high... Any advice on how to tackle this?
One trick you can use here is rewriting the recurrence in terms of another variable. Let's suppose that you write n = 2^k. Then the recurrence simplifies to
T(2^k) = T(2^(k-3)) + T(2^(k-2)) + T(2^(k-1)).
Let's let S(k) = T(2^k). This means that you can rewrite this recurrence as
S(k) = S(k-3) + S(k-2) + S(k-1).
Let's assume the base cases are S(0) = S(1) = S(2) = 1, just for simplicity. Given this, you can then use a variety of approaches to solve this recurrence. For example, the annihilator method (section 5 of the link) would be great here for solving this recurrence, since it's a linear recurrence. If you use the annihilator approach here, you get that
S(k) - S(k - 1) - S(k - 2) - S(k - 3) = 0
S(k + 3) - S(k + 2) - S(k + 1) - S(k) = 0
(E^3 - E^2 - E - 1) S(k) = 0
If you find the roots of E^3 - E^2 - E - 1 = 0, then you can write the solution to the recurrence as a linear combination of those roots raised to the power k. In this case, it turns out that the recurrence is similar to that for the Tribonacci numbers, and if you solve everything you'll find that the recurrence solves to something of the form O(1.83929^k).
Now, since you know that 2^k = n, we know that k = lg n. Therefore, the recurrence solves to O(1.83929^(lg n)). Let a = 1.83929. Then the solution has the form O(a^(lg n)) = O(n^(lg a)), which works out to approximately O(n^0.87914...). Your initial upper bound of O(n^(lg 3)) = O(n^1.584962501...) is significantly weaker than this one.
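You can recover the dominant root numerically without the full annihilator machinery; this sketch (my own, using plain bisection) checks both the root and the resulting exponent:

```python
# Find the dominant root of E^3 - E^2 - E - 1 = 0 by bisection on [1, 2],
# then check the exponent lg(a) in the bound O(n^(lg a)).
import math

def poly(x):
    return x ** 3 - x ** 2 - x - 1

lo, hi = 1.0, 2.0  # poly(1) = -2 < 0 and poly(2) = 1 > 0, so a root lies between
for _ in range(60):
    mid = (lo + hi) / 2
    if poly(mid) < 0:
        lo = mid
    else:
        hi = mid

a = (lo + hi) / 2
assert abs(a - 1.83929) < 1e-4             # the Tribonacci constant
assert abs(math.log2(a) - 0.87914) < 1e-4  # exponent in n^(lg a)
```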
Hope this helps!
There is a far simpler method than the one proposed by #template. Apart from the Master theorem, there is also the Akra-Bazzi method, which lets you solve recurrences of the form
T(x) = g(x) + sum over i of a_i * T(b_i * x),
which is exactly what you have. So your g(x) = 0, a1 = a2 = a3 = 1, and b1 = 1/2, b2 = 1/4, b3 = 1/8. Now you have to solve the equation 1/2^p + 1/4^p + 1/8^p = 1.
Solving it gives p approximately 0.879. You do not even need to evaluate the Akra-Bazzi integral, because g(x) = 0 makes it vanish. So your overall complexity is O(n^0.879).
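The equation for p has no closed form, but it is easy to solve numerically; a small sketch (my own) using bisection:

```python
# Solve 1/2^p + 1/4^p + 1/8^p = 1 for the Akra-Bazzi exponent p.
def g(p):
    return 0.5 ** p + 0.25 ** p + 0.125 ** p - 1

lo, hi = 0.0, 1.0  # g(0) = 2 > 0 and g(1) = -0.125 < 0: a root lies between
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid

p = (lo + hi) / 2
assert abs(p - 0.879) < 1e-3  # so T(n) = O(n^0.879), matching the other answer
```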