I'm trying to find the time complexity (big O) of a recursive formula.
I tried to find a solution; you can see the formula and my attempt below:
Like Brenner said, your last assumption is false. Here is why. Let's take the definition of big-O from the Wikipedia page (using n instead of x):
f(n) = O(g(n)) if and only if there exist constants c, n0 s.t. |f(n)| <= c |g(n)| for all n >= n0.
We want to check whether O(2^(n^2)) = O(2^n). Clearly, 2^(n^2) is in O(2^(n^2)), so let's pick f(n) = 2^(n^2) and check whether this is in O(2^n). Put this into the above formula:
exists c, n0: 2^(n^2) <= c * 2^n for all n >= n0
Let's see if we can find suitable constant values n0 and c for which the above is true, or derive a contradiction to prove that it is not:
Take the log on both sides:
log(2^(n^2)) <= log(c * 2^n)
Simplify:
n^2 * log(2) <= log(c) + n * log(2)
Divide by log(2):
n^2 <= log(c)/log(2) + n
It's easy to see now that there are no constants c, n0 for which the above is true for all n >= n0, thus O(2^(n^2)) = O(2^n) is not a valid assumption.
The last assumption you've specified with the question mark is false! Do not make such assumptions.
The rest of the manipulations you've supplied seem to be correct, but they actually get you nowhere.
You should have finished this exercise in the middle of your draft:
T(n) = O(T(1)^(3^log2(n)))
And that's it. That's the solution!
You could actually claim that
3^log2(n) == n^log2(3) ==~ n^1.585
and then you get:
T(n) = O(T(1)^(n^1.585))
which is somewhat similar to the manipulations you've made in the second part of the draft.
So you can also leave it like this. But you cannot mess with the exponent. Changing the value of the exponent changes the big-O classification.
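A quick numeric sanity check makes the argument above concrete (a minimal Python sketch; the range of n is arbitrary): the ratio 2^(n^2) / 2^n = 2^(n^2 - n) grows without bound, so no constant c can ever satisfy 2^(n^2) <= c * 2^n for all large n.

```python
# The ratio 2^(n^2) / 2^n = 2^(n^2 - n) keeps growing, so no constant
# c can bound 2^(n^2) by c * 2^n for all n beyond some n0.
ratios = [2 ** (n * n) // 2 ** n for n in range(1, 6)]
print(ratios)  # [1, 4, 64, 4096, 1048576]
```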
I'm trying to solve this equation:
(b(ax+b ) - c) % n = e
Where everything is given except x
I tried the approach of :
(A + x) % B = C
(B + C - A) % B = x
where A is (-c), and then manually solve for x given my other substitutions, but I am not getting the correct output. Would I need to use the extended Euclidean algorithm (EEA)? Any help would be appreciated! I understand this question has been asked before; I tried the existing solutions, but they don't work for me.
(b*(a*x+b) - c) % n = e
can be rewritten as:
(b*a*x) % n = (e - b*b + c) % n
x = ((e - b*b + c) * modular_inverse(b*a, n)) % n
where the modular inverse of u, modular_inverse(u, n), is a number v such that u*v % n == 1. See this question for code to calculate the modular inverse.
Some caveats:
When simplifying modular equations, you can never simply divide, you need to multiply with the modular inverse.
There is no straightforward formula to calculate the modular inverse, but there is a simple, quick algorithm to calculate it, similar to calculating the gcd.
The modular inverse doesn't always exist.
Depending on the programming language, when one or both arguments are negative, the result of modulo can also be negative.
Since the solutions repeat modulo n, only the numbers from 0 to n-1 need to be tested, so for small n a simple loop is often sufficient.
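The steps above can be sketched in Python (the variable names a, b, c, n, e follow the question; the helper names extended_gcd, modular_inverse, and solve are my own, and the sketch assumes gcd(b*a, n) == 1 so the inverse exists):

```python
def extended_gcd(u, v):
    # Returns (g, x, y) with u*x + v*y == g == gcd(u, v).
    if v == 0:
        return u, 1, 0
    g, x, y = extended_gcd(v, u % v)
    return g, y, x - (u // v) * y

def modular_inverse(u, n):
    # Number v with (u * v) % n == 1; exists only if gcd(u, n) == 1.
    g, v, _ = extended_gcd(u % n, n)
    if g != 1:
        raise ValueError("inverse does not exist")
    return v % n

def solve(a, b, c, n, e):
    # (b*(a*x + b) - c) % n == e  <=>  b*a*x = (e - b*b + c) (mod n)
    return (e - b * b + c) * modular_inverse(b * a, n) % n

a, b, c, n, e = 3, 5, 7, 11, 4
x = solve(a, b, c, n, e)
print(x, (b * (a * x + b) - c) % n)  # → 2 4  (second value equals e)
```

Note that Python's % already returns a non-negative result for a positive modulus, which sidesteps the negative-modulo caveat above.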
What language are you doing this in, and are the variables constant?
Here's a quick way to determine the possible values of x in Java:
for (int x = -1000; x < 1000; x++) {
    if ((b*((a*x)+b) - c) % n == e) {
        System.out.println(x);
    }
}
Let's define:
Tower(1) of n is: n.
Tower(2) of n is: n^n (= power(n,n)).
Tower(10) of n is: n^n^n^n^n^n^n^n^n^n.
And also given two functions:
f(n) = [Tower(log n) of n] = n^n^n^n^n^n^....^n (a tower of height log n).
g(n) = [Tower(n) of log n] = log(n)^log(n)^....^log(n) (a tower of height n).
Three questions:
How are the functions f(n) and g(n) related to each other asymptotically (n --> infinity),
in terms of: Theta, Big O, little o, Big Omega, little omega?
Please describe the exact way of solution, not only the eventual result.
Does the base of the log (e.g.: 0.5, 2, 10, log n, or n) affect the result?
If no - why ?
If yes - how ?
I'd like to know whether in any real (even if hypothetical) application the complexity looks similar to f(n) or g(n) above. Please give a case description, if such exists.
P.S.
I tried to substitute: log n = a, therefore: n = 2^a or 10^a,
and got confused counting the height of the resulting "towers".
I won't provide a solution, because you have to work on your homework yourself, but maybe other people are interested in some hints.
1) Mathematics:
log(a^x) = x*log(a)
this will make your problem manageable
2) Mathematics:
logx(y) = log2(y) / log2(x) = log10(y) / log10(x)
of course: if x is constant => log2(x) and log10(x) are constants
3) recursive + stop condition
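Both identities in the hints can be checked numerically (a quick Python sketch; the specific values of x and y are arbitrary):

```python
import math

# Hint 1: log(a^x) = x * log(a) turns towers of exponents into products.
assert math.isclose(math.log(3 ** 5), 5 * math.log(3))

# Hint 2: change of base, log_x(y) = log2(y)/log2(x) = log10(y)/log10(x).
y, x = 100.0, 7.0
assert math.isclose(math.log(y, x), math.log2(y) / math.log2(x))
assert math.isclose(math.log(y, x), math.log10(y) / math.log10(x))
print("both identities hold")
```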
I am confused on how to use mathematical induction to prove Big O for a recursive function, given using its recursion relation.
Example:
The recurrence relation for the recursive implementation of Towers of Hanoi is T(n) = 2T(n-1) + 1
with T(1) = 1. We claimed that T(n) = 2^n - 1, i.e. this recursive method is O(2^n). Prove this claim using mathematical induction.
In the case of recursion, do I always assume that n = k-1, rather than n=k? This is the assumption that the lecture notes give.
Assume f(n-1) = 2^(n-1) - 1 is true.
I understand with non-recursive mathematical induction we assume that n = k, because it is only a change of variables. Why then, is it safe to assume that n = k - 1?
One possible way: postulate a non-recursive formula for T and prove it. After that, show that the formula you found is in the Big O you wanted.
For the proof, you may use induction, which is quick and easy in that case. To do that, you first show that your formula holds for the first value (usually 0 or 1, in your example that's 1 and trivial).
Then you show that if it holds for any number n - 1, it also holds for its successor n. For that you use the definition of T(n) (in your example that's T(n) = 2T(n - 1) + 1): as you know that your formula holds for n - 1, you can replace occurrences of T(n - 1) with your formula. In your example you then get (with the formula T(n) = 2^n - 1)
T(n) = 2T(n - 1) + 1
= 2(2^(n - 1) - 1) + 1
= 2^n - 2 + 1
= 2^n - 1
As you can see, it holds for n if we assume it holds for n - 1.
Now comes the trick of induction: we showed that our formula holds for n = 1, and we showed that if it holds for any n = k - 1, it holds for k as well. That is, as we proved it for 1, it is also proven for 2. And as it is proven for 2, it is also proven for 3. And as it is...
Thus, we do not assume that the term is true for n - 1 in our proof; we only made a statement under the assumption that it is true, then proved our formula for one initial case, and used induction.
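The closed form proved above can also be checked mechanically for small n (a minimal Python sketch mirroring the recurrence T(1) = 1, T(n) = 2T(n-1) + 1):

```python
def T(n):
    # Towers of Hanoi recurrence: T(1) = 1, T(n) = 2*T(n-1) + 1.
    return 1 if n == 1 else 2 * T(n - 1) + 1

# The induction shows T(n) == 2^n - 1; verify for small n.
for n in range(1, 20):
    assert T(n) == 2 ** n - 1
print("closed form matches the recurrence")
```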
The full Context of the Problem can be seen here
Details.
Also, you can try my source code to plot the recursion for small numbers:
Pastebin
I'm looking at this problem the math way; it's a nested recursion that looks as follows:
Function Find(integer n, function func)
    If n = 1
        For i = 1 to a do func()
    Elseif n = 2
        For i = 1 to b do func()
    Else
        Find(n-1, Find(n-2, func))

Function Main
    Find(n, funny)
My implementation in Mathematica, without the modulo operation, is:
$IterationLimit = Infinity
Clear[A]
A[a_, b_, f_, 1] := A[a, b, f, 1] = (f a);
A[a_, b_, f_, 2] := A[a, b, f, 2] = (f b);
A[a_, b_, f_, n_] :=
    A[a, b, f, n] = A[a, b, A[a, b, f, n - 2], n - 1];
This reveals some nice Output for general a and b
A[a, b, funny, 1]
a funny
A[a, b, funny, 2]
b funny
A[a, b, funny, 3]
a b funny
A[a, b, funny, 4]
a b^2 funny
A[a, b, funny, 5]
a^2 b^3 funny
A[a, b, funny, 6]
a^3 b^5 funny
So when we look at how often func is called, it seems to be a^(F(n-2)) * b^(F(n-1))
with F(n) as the n-th Fibonacci number. So my problem is: how do I get very large Fibonacci numbers modulo p? I did a lot of research on this, read through cycle lengths of Fibonacci, and tried a recursion with:
F(a+b) = F(a+1) * F(b) + F(a) * F(b-1)
but it seems like the recursion depth (log_2(1,000,000,000) ≈ 30) when splitting n into two numbers is way too much, even with a depth-first recursion.
a= floor(n/2)
b= ceiling(n/2)
When I have the Fibonacci numbers, the multiplication and exponentiation
should not be a problem, in my view.
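The conjectured pattern can be checked directly by counting calls: Find(n-1, Find(n-2, func)) composes the two inner loops, so the call counts multiply, giving c(1) = a, c(2) = b, c(n) = c(n-1) * c(n-2), which works out to a^Fibonacci(n-2) * b^Fibonacci(n-1). A Python sketch (function names and the values of a, b are mine, chosen for illustration):

```python
def count_calls(n, a, b):
    # c(1) = a, c(2) = b; Find(n-1, Find(n-2, func)) composes the two
    # inner loops, so the counts multiply: c(n) = c(n-1) * c(n-2).
    if n == 1:
        return a
    if n == 2:
        return b
    return count_calls(n - 1, a, b) * count_calls(n - 2, a, b)

def fib(n):
    # Iterative Fibonacci with F(1) = F(2) = 1.
    x, y = 0, 1
    for _ in range(n):
        x, y = y, x + y
    return x

a, b = 3, 5
for n in range(3, 12):
    assert count_calls(n, a, b) == a ** fib(n - 2) * b ** fib(n - 1)
print("call counts match a^F(n-2) * b^F(n-1)")
```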
Unfortunately not :/
I'm still stuck with the problem. Computing the Fibonacci numbers in the exponent first did not solve the problem correctly; it was a wrong math formula I applied there :/
So I thought of other ways of computing the formula:
(a^(Fibonacci(n-2)) * b^(Fibonacci(n-1))) mod p
But as the Fibonacci numbers get really large, I assume there must be an easier way than computing the whole Fibonacci number and then applying the discrete exponential function with BigInteger/BigFloat. Does someone have a hint for me? I see no further progress. Thanks
So this is where i am so far, might be just a little thing I'm missing, so looking forward to your replies
Thanks
If it's about calculating Fibonacci numbers, there is a non-recursive, non-iterative formula for them. It's featured prominently on the Dutch Wikipedia page about Fibonacci numbers, but not so much on the English page.
F(n) = ( ( 1 + sqrt(5) ) ^ n - ( 1- sqrt(5) ) ^ n ) / (2 ^ n * sqrt(5))
Source: http://nl.wikipedia.org/wiki/Rij_van_Fibonacci
Maybe there's something you can do with this formula.
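A minimal Python sketch of this formula, using phi = (1 + sqrt(5))/2 and rounding to the nearest integer. Be aware that double precision limits it to roughly n <= 70, and it does not directly help with the modulo-p part of the question:

```python
import math

def fib_binet(n):
    # Binet's formula: F(n) = (phi^n - psi^n) / sqrt(5),
    # with phi = (1 + sqrt(5))/2 and psi = (1 - sqrt(5))/2.
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2
    psi = (1 - sqrt5) / 2
    return round((phi ** n - psi ** n) / sqrt5)

print([fib_binet(n) for n in range(1, 11)])
# → [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```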
You might find helpful my ruminations on various ways to compute the Fibonacci and Lucas numbers. In there I show how to do the computation using a recursive scheme that is basically O(log2(n)). It works very nicely for large fibonacci numbers. And if you do it all modulo some small number, you need not even use a big integer tool for the computations. This would be blindingly fast for even huge Fibonacci numbers. This one below is only moderately large.
fibonacci(10000)
ans =
33644764876431783266621612005107543310302148460680063906564769974680
081442166662368155595513633734025582065332680836159373734790483865268263
040892463056431887354544369559827491606602099884183933864652731300088830
269235673613135117579297437854413752130520504347701602264758318906527890
855154366159582987279682987510631200575428783453215515103870818298969791
613127856265033195487140214287532698187962046936097879900350962302291026
368131493195275630227837628441540360584402572114334961180023091208287046
088923962328835461505776583271252546093591128203925285393434620904245248
929403901706233888991085841065183173360437470737908552631764325733993712
871937587746897479926305837065742830161637408969178426378624212835258112
820516370298089332099905707920064367426202389783111470054074998459250360
633560933883831923386783056136435351892133279732908133732642652633989763
922723407882928177953580570993691049175470808931841056146322338217465637
321248226383092103297701648054726243842374862411453093812206564914032751
086643394517512161526545361333111314042436854805106765843493523836959653
428071768775328348234345557366719731392746273629108210679280784718035329
131176778924659089938635459327894523777674406192240337638674004021330343
297496902028328145933418826817683893072003634795623117103101291953169794
607632737589253530772552375943788434504067715555779056450443016640119462
580972216729758615026968443146952034614932291105970676243268515992834709
891284706740862008587135016260312071903172086094081298321581077282076353
186624611278245537208532365305775956430072517744315051539600905168603220
349163222640885248852433158051534849622434848299380905070483482449327453
732624567755879089187190803662058009594743150052402532709746995318770724
376825907419939632265984147498193609285223945039707165443156421328157688
908058783183404917434556270520223564846495196112460268313970975069382648
706613264507665074611512677522748621598642530711298441182622661057163515
069260029861704945425047491378115154139941550671256271197133252763631939
606902895650288268608362241082050562430701794976171121233066073310059947
366875
The trick is simple. Simply relate the 2n'th Fibonacci and Lucas numbers to the n'th such numbers. It allows us to work backwards. So to compute F(n) and L(n), we need to know F(n/2) and L(n/2). Clearly this works as long as n is even. For odd n, there are similar schemes that will allow us to move recursively downwards.
For kicks, I just modified the above tool, to accept a modulus. So to compute the last 6 digits of the Fibonacci number with index 1e15, it took about 1/6 of a second.
tic,[Fn,Ln] = fibonacci(1e15,1000000),toc
Elapsed time is 0.161468 seconds.
Fn =
546875
Ln =
328127
Note: In my discussion of recursion to compute the Fibonacci numbers, I do make a few comments on the number of recursive calls required. That number is itself related quite nicely to the Fibonacci sequence, which is easily derived.
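The doubling trick described above can be sketched with the Fibonacci fast-doubling identities F(2k) = F(k)*(2*F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2, reducing modulo p at every step so the intermediate values stay small. This is my own Python sketch, not the original MATLAB tool, assuming the usual indexing F(0) = 0, F(1) = 1:

```python
def fib_pair_mod(n, p):
    # Returns (F(n) % p, F(n+1) % p) via fast doubling: O(log n) steps.
    if n == 0:
        return 0, 1
    fk, fk1 = fib_pair_mod(n // 2, p)     # F(k), F(k+1) with k = n // 2
    f2k = fk * (2 * fk1 - fk) % p         # F(2k)
    f2k1 = (fk * fk + fk1 * fk1) % p      # F(2k+1)
    if n % 2 == 0:
        return f2k, f2k1
    return f2k1, (f2k + f2k1) % p

# Last 6 digits of F(10^15), as in the timing example above.
print(fib_pair_mod(10 ** 15, 1000000)[0])
```

If the tool above uses the same indexing, this call should reproduce the Fn value shown in the timing example.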
Is 2^(n+1) = O(2^n)?
I believe that this one is correct because n+1 ~= n.
Is 2^(2n) = O(2^n)?
This one seems like it would use the same logic, but I'm not sure.
The first case is obviously true: you just multiply the constant C by 2.
The current answers to the second part of the question look like hand-waving to me, so I will try to give a proper math explanation. Let's assume that the second part is true; then, from the definition of big-O, there exist c and n0 such that 2^(2n) <= c * 2^n for all n >= n0, i.e. 2^n <= c,
which is clearly wrong, because there is no constant that satisfies such an inequality.
Claim: 2^(2n) != O(2^n)
Proof by contradiction:
Assume: 2^(2n) = O(2^n)
Which means, there exists c>0 and n_0 s.t. 2^(2n) <= c * 2^n for all n >= n_0
Dividing both sides by 2^n, we get: 2^n <= c * 1
Contradiction! 2^n is not bounded by a constant c.
Therefore 2^(2n) != O(2^n)
Note that 2^(n+1) = 2*(2^n) and 2^(2n) = (2^n)^2
From there, either use the rules of Big-O notation that you know, or use the definition.
I'm assuming you just left off the O() notation on the left side.
O(2^(n+1)) is the same as O(2 * 2^n), and you can always pull out constant factors, so it is the same as O(2^n).
However, constant factors are the only thing you can pull out. 2^(2n) can be expressed as (2^n)(2^n), and 2^n isn't a constant. So, the answer to your questions are yes and no.
2^(n+1) = O(2^n) because 2^(n+1) = 2^1 * 2^n = O(2^n).
Suppose 2^(2n) = O(2^n). Then there exists a constant c such that for n beyond some n0, 2^(2n) <= c * 2^n. Dividing both sides by 2^n, we get 2^n <= c. There are no values for c and n0 that can make this true, so the hypothesis is false and 2^(2n) != O(2^n).
To answer these questions, you must pay attention to the definition of big-O notation. So you must ask:
is there any constant C such that 2^(n+1) <= C * 2^n (provided that n is big enough)?
And the same goes for the other example: is there any constant C such that 2^(2n) <= C * 2^n for all n that is big enough?
Work on those inequalities and you'll be on your way to the solution.
We will use: a^(m*n) = (a^m)^n = (a^n)^m
now,
2^(2*n) = (2^n)^2 = (2^2)^n
so,
(2^2)^n = (4)^n
hence,
O(4^n)
Obviously,
the rate of growth of 2^n is strictly smaller than that of 4^n, so 2^(2n) = 4^n is not O(2^n).
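The contrast between the two cases can also be seen numerically (a small Python sketch; the range of n is arbitrary): 2^(n+1) / 2^n is the constant 2, while 2^(2n) / 2^n = 2^n keeps growing, which is exactly why the first is O(2^n) and the second is not.

```python
# 2^(n+1) / 2^n is always 2; 2^(2n) / 2^n = 2^n doubles at every step.
constant_ratios = [2 ** (n + 1) // 2 ** n for n in range(1, 10)]
growing_ratios = [2 ** (2 * n) // 2 ** n for n in range(1, 6)]
print(constant_ratios)  # [2, 2, 2, 2, 2, 2, 2, 2, 2]
print(growing_ratios)   # [2, 4, 8, 16, 32]
```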
Let me give you a solution that makes sense instantly.
Assume that 2^n = X
(since n is a variable, X is a variable too); then 2^(2n) = X^2. So we basically have X^2 = O(X). You probably already know this is untrue from easier asymptotic-notation exercises you did.
For instance:
X^2 = O(X^2) is true.
X^2+X+1 = O(X^2) is also true.
X^2+X+1 = O(X) is untrue.
X^2 = O(X) is also untrue.
Think about polynomials.