OK, so this is an application of existing mathematical practice, but I can't really apply it to my case.
So, I have x of a currency to increase the level of a game object y at cost z.
z is calculated as cost(y.lvl) = c_1 * c_2^y.lvl / c_3, where the c's are constants.
I am seeking an efficient way to calculate how often I can increase the level of y, given x. Currently I'm using a loop that does something like this:
double tempX = x;
int counter = 0;
while (tempX >= cost(y.lvl + counter)) {
    tempX -= cost(y.lvl + counter);
    counter++;
}
The problem is that in some cases this loop has to iterate too many times to stay performant.
What I am looking for is essentially a function
int howManyCanBeBought(x, y.lvl), which calculates its result in a single step instead of looping many times.
I've read something about turning recursions into generating functions and then into closed formulas, but I didn't get the math behind it. Is there an easy way to do it?
If I understand correctly, you're looking for the largest n such that:
Σ_{i=0..n} (c_1 / c_3) * c_2^(lvl + i) ≤ x
Dividing by the constant factor:
Σ_{i=0..n} c_2^i ≤ c_3 * x / (c_1 * c_2^lvl)
Using the formula for the sum of a geometric series:
(c_2^(n+1) - 1) / (c_2 - 1) ≤ c_3 * x / (c_1 * c_2^lvl)
And solving for the maximum integer:
n = floor( log_{c_2}( c_3 * (c_2 - 1) * x / (c_1 * c_2^lvl) + 1 ) - 1 )
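In case a code version helps, here is a rough sketch of that closed form (the names howManyCanBeBought, c1, c2, c3 and lvl mirror the question; c2 > 1 is assumed, and floating-point rounding in log/pow means the result near a boundary may need a +-1 check against the actual costs):
#include <math.h>

/* Number of affordable upgrades, following the geometric-series derivation above.
   cost(lvl) = c1 * c2^lvl / c3, with c2 > 1.
   n is the largest index with sum_{i=0..n} cost(lvl+i) <= x, so n+1 purchases fit. */
int howManyCanBeBought(double x, int lvl, double c1, double c2, double c3) {
    double rhs = c3 * (c2 - 1.0) * x / (c1 * pow(c2, lvl)) + 1.0;
    if (rhs < c2)
        return 0;                        /* not even the first upgrade is affordable */
    int n = (int)floor(log(rhs) / log(c2) - 1.0);
    return n + 1;                        /* n is a 0-based index, hence n + 1 purchases */
}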
I'm trying to find the time complexity (big O) of a recursive formula.
I tried to find a solution; you can see the formula and my solution below:
Like Brenner said, your last assumption is false. Here is why. Let's take the definition of big O from the Wikipedia page (using n instead of x):
f(n) = O(g(n)) if and only if there exist constants c, n0 such that |f(n)| <= c |g(n)| for all n >= n0.
We want to check whether O(2^(n^2)) = O(2^n). Clearly, 2^(n^2) is in O(2^(n^2)), so let's pick f(n) = 2^(n^2) and check whether it is in O(2^n). Put this into the above formula:
there exist c, n0 such that 2^(n^2) <= c * 2^n for all n >= n0
Let's see if we can find suitable constants c and n0 for which the above is true, or derive a contradiction to prove that it is not:
Take the log on both sides:
log(2^(n^2)) <= log(c * 2^n)
Simplify:
n^2 * log(2) <= log(c) + n * log(2)
Divide by log(2):
n^2 <= log(c)/log(2) + n
Now it's easy to see that there are no c, n0 for which this holds for all n >= n0: for any fixed c, the left side n^2 outgrows n plus the constant log(c)/log(2), so the inequality eventually fails. Thus O(2^(n^2)) = O(2^n) is not a valid assumption.
The last assumption you've specified with the question mark is false! Do not make such assumptions.
The rest of the manipulations you've supplied seem to be correct, but they actually get you nowhere.
You should have finished this exercise in the middle of your draft:
T(n) = O(T(1)^(3^log2(n)))
And that's it. That's the solution!
You could actually claim that
3^log2(n) == n^log2(3) ==~ n^1.585
and then you get:
T(n) = O(T(1)^(n^1.585))
which is somewhat similar to the manipulations you've made in the second part of the draft.
So you can also leave it like this. But you cannot mess with the exponent. Changing the value of the exponent changes the big-O classification.
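For reference, the exponent identity used in that step is just a rewrite of 3 as 2^(log2 3):
3^{\log_2 n} = \left(2^{\log_2 3}\right)^{\log_2 n} = 2^{(\log_2 3)(\log_2 n)} = \left(2^{\log_2 n}\right)^{\log_2 3} = n^{\log_2 3} \approx n^{1.585}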
I have two unsigned longs a and q and I would like to find a number n between 0 and q-1 such that n + a is divisible by q (without overflow).
In other words, I'm trying to find a (portable) way of computing (-a)%q which lies between 0 and q-1. (The sign of that expression is implementation-defined in C89.) What's a good way to do this?
What you're looking for is mathematically equivalent to (q - a) mod q, which in turn is equivalent to (q - (a mod q)) mod q. I think you should therefore be able to compute this as follows:
unsigned long result = (q - (a % q)) % q;
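A small, self-contained sketch of how that expression behaves (the neg_mod wrapper name and the test values are just for illustration):
#include <stdio.h>

/* Smallest n in [0, q-1] such that n + a is divisible by q.
   a % q < q, so q - (a % q) never underflows; the outer % q maps the
   "already divisible" case (a % q == 0) back down to 0. */
unsigned long neg_mod(unsigned long a, unsigned long q) {
    return (q - (a % q)) % q;
}

int main(void) {
    printf("%lu\n", neg_mod(7, 5));   /* 3, since 7 + 3 = 10 is divisible by 5 */
    printf("%lu\n", neg_mod(10, 5));  /* 0, since 10 is already divisible by 5 */
    return 0;
}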
Let's define:
Tower(1) of n is: n.
Tower(2) of n is: n^n (= power(n,n)).
Tower(10) of n is: n^n^n^n^n^n^n^n^n^n.
We are also given two functions:
f(n) = [Tower(log n) of n] = n^n^n^n^n^n^....^n (a tower of height log n).
g(n) = [Tower(n) of log n] = log(n)^log(n)^....^log(n) (a tower of height n).
Three questions:
How are the functions f(n) and g(n) related to each other asymptotically (as n --> infinity),
in terms of: theta, Big O, little o, Big Omega, little omega?
Please describe the exact way to the solution, not only the eventual result.
Does the base of the log (e.g.: 0.5, 2, 10, log n, or n) affect the result?
If no - why?
If yes - how?
I'd like to know whether in any real (even hypothetical) application the complexity behaves like f(n) or g(n) above. Please give a description of such a case, if one exists.
P.S.
I tried to substitute log n = a, so that n = 2^a or 10^a,
and got confused counting the heights of the resulting towers.
I won't provide a full solution, because you have to work on your homework, but maybe other people are interested in some hints.
1) Mathematics:
log(a^x) = x * log(a)
this will humanize your problem
2) Mathematics:
log_x(y) = log_2(y) / log_2(x) = log_10(y) / log_10(x)
of course: if x is constant => log_2(x) and log_10(x) are constants
3) recursion + a stop condition
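To make hint 1 concrete without giving the exercise away: each logarithm strips one level off a tower, e.g.
\log\left(n^{n^{n}}\right) = n^{n} \cdot \log n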
The full context of the problem can be seen here:
Details.
You can also try my source code to plot the recursion for small numbers:
Pastebin
I'm looking at this problem the math way; it's a nested recursion and looks as follows:
Function Find(integer n, function func)
If n=1
For i = 1 to a do func()
Elseif n=2
For i = 1 to b do func()
Else Find(n-1,Find(n-2,func))
Function Main
Find(n,funny)
My implementation in Mathematica, without the modulo operation, is:
$IterationLimit = Infinity
Clear[A]
(* memoized definitions: f is the function to repeat, a and b are the loop counts *)
A[a_, b_, f_, 1] := A[a, b, f, 1] = f a;
A[a_, b_, f_, 2] := A[a, b, f, 2] = f b;
A[a_, b_, f_, n_] := A[a, b, f, n] = A[a, b, A[a, b, f, n - 2], n - 1];
This produces some nice output for general a and b:
A[a, b, funny, 1]
a funny
A[a, b, funny, 2]
b funny
A[a, b, funny, 3]
a b funny
A[a, b, funny, 4]
a b^2 funny
A[a, b, funny, 5]
a^2 b^3 funny
A[a, b, funny, 6]
a^3 b^5 funny
So when we look at how often func is called, it seems to be a^(F(n-2)) * b^(F(n-1)),
with F(n) as the n-th Fibonacci number. So my problem is: how do I get very large Fibonacci numbers modulo p? I did a lot of research on this, read through cycle lengths of the Fibonacci sequence, and tried some recursion with:
F(a+b) = F(a+1) * F(b) + F(a) * F(b-1)
but it seems like the recursion depth (log_2(1,000,000,000) ~= 30) when splitting n into two numbers is way too much, even with a depth-first recursion:
a= floor(n/2)
b= ceiling(n/2)
When I have the Fibonacci numbers, the multiplication and exponentiation
should not be a problem, from my point of view.
Unfortunately not :/
I'm still stuck with the problem. Computing the Fibonacci numbers in the exponent first did not solve the problem correctly; it was a wrong math formula I applied there :/
So I thought of other ways of computing the formula:
(a^(Fibonacci(n-2)) * b^(Fibonacci(n-1))) mod p
But as the Fibonacci numbers get really large, I assume there must be an easier way than computing the whole Fibonacci number and then applying the discrete exponential function with BigInteger/BigFloat. Does someone have a hint for me? I see no further progress.
So this is where I am so far; it might be just a little thing I'm missing, so I'm looking forward to your replies.
Thanks
If it's about calculating Fibonacci numbers, there is a non-recursive, non-iterative formula for them. It's featured prominently on the Dutch Wikipedia page about Fibonacci numbers, but not so much on the English page.
F(n) = ((1 + sqrt(5))^n - (1 - sqrt(5))^n) / (2^n * sqrt(5))
Source: http://nl.wikipedia.org/wiki/Rij_van_Fibonacci
Maybe there's something you can do with this formula.
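As a rough illustration only (fib_binet is a made-up name, and double precision keeps this exact only up to roughly F(70), so it does not directly help with huge indices modulo p):
#include <math.h>
#include <stdio.h>

/* Binet's formula, as quoted above; exact only while the doubles stay exact. */
unsigned long long fib_binet(int n) {
    double s = sqrt(5.0);
    double f = (pow(1.0 + s, n) - pow(1.0 - s, n)) / (pow(2.0, n) * s);
    return (unsigned long long)llround(f);
}

int main(void) {
    printf("%llu\n", fib_binet(10));   /* 55 */
    printf("%llu\n", fib_binet(20));   /* 6765 */
    return 0;
}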
You might find helpful my ruminations on various ways to compute the Fibonacci and Lucas numbers. In there I show how to do the computation using a recursive scheme that is basically O(log2(n)). It works very nicely for large fibonacci numbers. And if you do it all modulo some small number, you need not even use a big integer tool for the computations. This would be blindingly fast for even huge Fibonacci numbers. This one below is only moderately large.
fibonacci(10000)
ans =
33644764876431783266621612005107543310302148460680063906564769974680
081442166662368155595513633734025582065332680836159373734790483865268263
040892463056431887354544369559827491606602099884183933864652731300088830
269235673613135117579297437854413752130520504347701602264758318906527890
855154366159582987279682987510631200575428783453215515103870818298969791
613127856265033195487140214287532698187962046936097879900350962302291026
368131493195275630227837628441540360584402572114334961180023091208287046
088923962328835461505776583271252546093591128203925285393434620904245248
929403901706233888991085841065183173360437470737908552631764325733993712
871937587746897479926305837065742830161637408969178426378624212835258112
820516370298089332099905707920064367426202389783111470054074998459250360
633560933883831923386783056136435351892133279732908133732642652633989763
922723407882928177953580570993691049175470808931841056146322338217465637
321248226383092103297701648054726243842374862411453093812206564914032751
086643394517512161526545361333111314042436854805106765843493523836959653
428071768775328348234345557366719731392746273629108210679280784718035329
131176778924659089938635459327894523777674406192240337638674004021330343
297496902028328145933418826817683893072003634795623117103101291953169794
607632737589253530772552375943788434504067715555779056450443016640119462
580972216729758615026968443146952034614932291105970676243268515992834709
891284706740862008587135016260312071903172086094081298321581077282076353
186624611278245537208532365305775956430072517744315051539600905168603220
349163222640885248852433158051534849622434848299380905070483482449327453
732624567755879089187190803662058009594743150052402532709746995318770724
376825907419939632265984147498193609285223945039707165443156421328157688
908058783183404917434556270520223564846495196112460268313970975069382648
706613264507665074611512677522748621598642530711298441182622661057163515
069260029861704945425047491378115154139941550671256271197133252763631939
606902895650288268608362241082050562430701794976171121233066073310059947
366875
The trick is simple. Simply relate the 2n'th Fibonacci and Lucas numbers to the n'th such numbers. It allows us to work backwards. So to compute F(n) and L(n), we need to know F(n/2) and L(n/2). Clearly this works as long as n is even. For odd n, there are similar schemes that will allow us to move recursively downwards.
For kicks, I just modified the above tool to accept a modulus. Computing the last 6 digits of the Fibonacci number with index 1e15 this way took about 1/6 of a second.
tic,[Fn,Ln] = fibonacci(1e15,1000000),toc
Elapsed time is 0.161468 seconds.
Fn =
546875
Ln =
328127
Note: In my discussion of recursion to compute the Fibonacci numbers, I make a few comments on the number of recursive calls required. You will see that that number is itself related quite nicely to the Fibonacci sequence; this is easily derived.
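If it helps, here is a minimal sketch of one common O(log n) variant of that doubling idea, working on F alone rather than the F/L pair described above, and reducing modulo m at every step (the name fib_mod and the assumption m < 2^32 are mine, not part of the original tool):
#include <stdint.h>
#include <stdio.h>

/* Fast-doubling Fibonacci modulo m, using
   F(2k) = F(k) * (2*F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2.
   Returns F(n) mod m and, via *next, F(n+1) mod m. Assumes m < 2^32. */
static uint64_t fib_mod(uint64_t n, uint64_t m, uint64_t *next) {
    if (n == 0) {
        if (next) *next = 1 % m;
        return 0;
    }
    uint64_t b;
    uint64_t a = fib_mod(n / 2, m, &b);            /* a = F(k), b = F(k+1), k = n/2 */
    uint64_t c = a * ((2 * b + m - a) % m) % m;    /* F(2k)   */
    uint64_t d = ((a * a) % m + (b * b) % m) % m;  /* F(2k+1) */
    if (n % 2 == 0) {
        if (next) *next = d;
        return c;
    }
    if (next) *next = (c + d) % m;
    return d;
}

int main(void) {
    /* Last six digits; compare with the tail of the fibonacci(10000) value printed above,
       and with the Fn result for index 1e15 shown above. */
    printf("%llu\n", (unsigned long long)fib_mod(10000, 1000000, NULL));
    printf("%llu\n", (unsigned long long)fib_mod(1000000000000000ULL, 1000000, NULL));
    return 0;
}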