Related
This is a continuation of the two questions posted here:
Declaring a functional recursive sequence in Matlab
Nesting a specific recursion in Pari-GP
To make a long story short, I've constructed a family of functions which solve the tetration functional equation. I've proven these things are holomorphic. And now it's time to make the graphs, or at least, somewhat passable code to evaluate these things. I've managed to get to about 13 significant digits in my precision, but if I try to get more, I encounter a specific error. That error is really nothing more than an overflow error. But it's a peculiar overflow error; Pari-GP doesn't seem to like nesting the logarithm.
My particular mathematical function is approximated by taking something large (think of the order e^e^e^e^e^e^e) to produce something small (of the order e^(-n)). The math inherently requires samples of large values to produce these small values. And strangely, as we get closer to the numerical approximation (at about 13 significant digits or so), we also get closer to overflowing, because we need such large values to get those 13 significant digits. I am a god-awful programmer, and I'm wondering if there could be some workaround I'm not seeing.
/*
This function constructs the approximate Abel function
The variable z is the main variable we care about; values of z where real(z)>3 almost surely produce overflow errors
The variable l is the multiplier of the approximate Abel function
The variable n is the depth of iteration required
n can be set to 100, but about 15 already produces enough accuracy
The functional equation this satisfies is exp(beta_function(z,l,n))/(1+exp(-l*z)) = beta_function(z+1,l,n); and this program approaches the solution as n goes to infinity
*/
beta_function(z,l,n) =
{
my(out = 0);
for(i=0,n-1,
out = exp(out)/(exp(l*(n-i-z)) +1));
out;
}
/*
This function is the error term between the approximate Abel function and the actual Abel function
The variable z is the main variable we care about
The variable l is the multiplier
The variable n is the depth of iteration inherited from beta_function
The variable k is the new depth of iteration for this function
n can still be set to about 100, but 15 or 20 is more optimal.
Setting the variable k above 10 will usually produce overflow errors unless the complex arguments of l and z are large.
Precision of about 10 digits is acquired at k = 5 or 6 for real z; for complex z, less precision is acquired. k should be set to large values for complex z and l with large imaginary arguments.
*/
tau_K(z,l,n,k)={
if(k == 1,
-log(1+exp(-l*z)),
log(1 + tau_K(z+1,l,n,k-1)/beta_function(z+1,l,n)) - log(1+exp(-l*z))
)
}
/*
This is the actual Abel function
The variable z is the main variable we care about
The variable l is the multiplier
The variable n is the depth of iteration inherited from beta_function
The variable k is the depth of iteration inherited from tau_K
The functional equation this satisfies is exp(Abl_L(z,l,n,k)) = Abl_L(z+1,l,n,k); and this function approaches that solution as n,k go to infinity
*/
Abl_L(z,l,n,k) ={
beta_function(z,l,n) + tau_K(z,l,n,k);
}
This is the code for approximating the functions I've proven are holomorphic; but sadly, my code is just horrible. Attached is some expected output, where you can see the functional equation being satisfied to about 10-13 significant digits.
Abl_L(1,log(2),100,5)
%52 = 0.1520155156321416705967746811
exp(Abl_L(0,log(2),100,5))
%53 = 0.1520155156321485241351294757
Abl_L(1+I,0.3 + 0.3*I,100,14)
%59 = 0.3353395055605129001249035662 + 1.113155080425616717814647305*I
exp(Abl_L(0+I,0.3 + 0.3*I,100,14))
%61 = 0.3353395055605136611147422467 + 1.113155080425614418399986325*I
Abl_L(0.5+5*I, 0.2+3*I,100,60)
%68 = -0.2622549204469267170737985296 + 1.453935357725113433325798650*I
exp(Abl_L(-0.5+5*I, 0.2+3*I,100,60))
%69 = -0.2622549205108654273925182635 + 1.453935357685525635276573253*I
Now, you'll notice I have to change the k value for different inputs. When the arguments z,l are further away from the real axis, we can make k very large (and we have to, to get good accuracy), but it'll still overflow eventually; typically, once we've achieved about 13-15 significant digits is when the functions start to blow up. You'll note that setting k = 60 means we're taking 60 logarithms. This already sounds like a bad idea, lol. Mathematically though, the value Abl_L(z,l,infinity,infinity) is precisely the function I want. I know that must be odd; nested infinite for-loops sounds like nonsense, lol.
I'm wondering if anyone can think of a way to avoid these overflow errors and obtain a higher degree of accuracy. In a perfect world, this object most definitely converges, and this code is flawless (albeit, it may be a little slow); but we'd probably need to increase the stacksize indefinitely. In theory this is perfectly fine, but in reality it's more than impractical. Is there any way, as a programmer, to work around this?
The only other option I have at this point is to try and create a brute-force algorithm to discover the Taylor series of this function; but I'm having less than no luck at doing this. The process is very unique, and trying to solve this problem using Taylor series kind of takes us back to square one. Unless someone here can think of a fancy way of recovering Taylor series from this expression.
I'm open to all suggestions, any comments, honestly. I'm at my wits' end, and I'm wondering if this is just one of those things where the only solution is to increase the stacksize indefinitely (which will absolutely work). It's not just that I'm dealing with large numbers; it's that I need larger and larger values to compute a small value. For that reason, I wonder if there's some kind of quick workaround I'm not seeing. The error Pari-GP spits out is always with tau_K, so I'm wondering if it has been coded suboptimally, and whether I should add something to it to reduce the stacksize as it iterates; or if that's even possible. Again, I'm a horrible programmer. I need someone to explain this to me like I'm in kindergarten.
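One concrete idea, and treat this as a sketch because I haven't verified it cures the overflow: unroll tau_K's recursion into a plain loop, so the call stack stays flat and each step reuses a single accumulator. Here's what I mean in Python with mpmath standing in for Pari-GP's arbitrary precision (tau_k_iter is just my name for it):
from mpmath import mp, exp, log

mp.dps = 30  # working precision, in decimal digits

def beta_function(z, l, n):
    out = mp.mpf(0)
    for i in range(n):
        out = exp(out) / (exp(l * (n - i - z)) + 1)
    return out

def tau_k_iter(z, l, n, k):
    # the k == 1 base case, taken at the deepest argument z + k - 1
    t = -log(1 + exp(-l * (z + k - 1)))
    # unwind the recursion inside-out, one logarithm per step
    for j in range(k - 2, -1, -1):
        t = log(1 + t / beta_function(z + j + 1, l, n)) - log(1 + exp(-l * (z + j)))
    return t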
Any help, comments, questions for clarification, are more than welcome. I'm like a dog chasing his tail at this point; wondering why he can't take 1000 logarithms, lol.
Regards.
EDIT:
I thought I'd add that I can produce arbitrary precision, but we have to keep the argument of z way off in the left half plane. If we set the variables n,k = -real(z), then we can produce arbitrary accuracy by making n as large as we want. Here's some output to explain this, where I've used \p 200 and we pretty much have equality at this level (minus some digits).
Abl_L(-1000,1+I,1000,1000)
%16 = -0.29532276871494189936534470547577975723321944770194434340228137221059739121428422475938130544369331383702421911689967920679087535009910425871326862226131457477211238400580694414163545689138863426335946 + 1.5986481048938885384507658431034702033660039263036525275298731995537068062017849201570422126715147679264813047746465919488794895784667843154275008585688490133825421586142532469402244721785671947462053*I
exp(Abl_L(-1001,1+I,1000,1000))
%17 = -0.29532276871494189936534470547577975723321944770194434340228137221059739121428422475938130544369331383702421911689967920679087535009910425871326862226131457477211238400580694414163545689138863426335945 + 1.5986481048938885384507658431034702033660039263036525275298731995537068062017849201570422126715147679264813047746465919488794895784667843154275008585688490133825421586142532469402244721785671947462053*I
Abl_L(-900 + 2*I, log(2) + 3*I,900,900)
%18 = 0.20353875452777667678084511743583613390002687634123569448354843781494362200997943624836883436552749978073278597542986537166527005507457802227019178454911106220050245899257485038491446550396897420145640 - 5.0331931122239257925629364016676903584393129868620886431850253696250415005420068629776255235599535892051199267683839967636562292529054669236477082528566454129529102224074017515566663538666679347982267*I
exp(Abl_L(-901+2*I,log(2) + 3*I,900,900))
%19 = 0.20353875452777667678084511743583613390002687634123569448354843781494362200997943624836883436552749978073278597542986537166527005507457802227019178454911106220050245980468697844651953381258310669530583 - 5.0331931122239257925629364016676903584393129868620886431850253696250415005420068629776255235599535892051199267683839967636562292529054669236477082528566454129529102221938340371793896394856865112060084*I
Abl_L(-967 -200*I,12 + 5*I,600,600)
%20 = -0.27654907399026253909314469851908124578844308887705076177457491260312326399816915518145788812138543930757803667195961206089367474489771076618495231437711085298551748942104123736438439579713006923910623 - 1.6112686617153127854042520499848670075221756090591592745779176831161238110695974282839335636124974589920150876805977093815716044137123254329208112200116893459086654166069454464903158662028146092983832*I
exp(Abl_L(-968 -200*I,12 + 5*I,600,600))
%21 = -0.27654907399026253909314469851908124578844308887705076177457491260312326399816915518145788812138543930757803667195961206089367474489771076618495231437711085298551748942104123731995533634133194224880928 - 1.6112686617153127854042520499848670075221756090591592745779176831161238110695974282839335636124974589920150876805977093815716044137123254329208112200116893459086654166069454464833417170799085356582884*I
The trouble is, we can't just apply exp over and over to go forward and expect to keep the same precision. exp displays so much chaotic behaviour as you iterate it in the complex plane that this approach is doomed to fail.
Well, I answered my own question. @user207421 posted a comment, and I'm not sure if it meant what I thought it meant, but I think it got me to where I want. I sort of assumed that exp wouldn't inherit the precision of its argument, but apparently it does. So all I needed was to define,
Abl_L(z,l,n,k) ={
if(real(z) <= -max(n,k),
beta_function(z,l,n) + tau_K(z,l,n,k),
exp(Abl_L(z-1,l,n,k)));
}
Everything works perfectly fine from here; of course, for what I need it for. So, I answered my own question, and it was pretty simple. I just needed an if statement.
Thanks anyway, to anyone who read this.
This is a continuation of my questions:
Declaring a functional recursive sequence in Matlab
Is there a more efficient way of nesting logarithms?
Nesting a specific recursion in Pari-GP
But I'll keep this question self-contained. I have made a coding project for myself: to program a working simple calculator for a tetration function I've constructed. This tetration function is holomorphic, and stated not to be Kneser's solution (ignore the jargon); long story short, I need to run the numbers to win over the nay-sayers.
For this, I have to use Pari-GP, as it is a fantastic language for handling large numbers and algebraic expressions. Since we are dealing with tetration (think numbers of the order e^e^e^e^e^e), this language is, of the few that exist, the best for such affairs; it is the favourite when doing iterated exponential computations.
Now, the trouble I am facing is odd. It is not so much that my code doesn't work; it's that it's overflowing because it should overflow (think: we're getting inputs like e^e^e^e^e^e, and no computer can handle that properly). I'll post the first batch of code before I dive deeper.
The following code works perfectly and does everything I want; it produces all the numbers I want. The trouble is with the second batch of code.
\\This is the asymptotic solution to tetration. z is the variable, l is the multiplier, and n is the depth of recursion
\\Warning: z with large real part looks like tetration; and therefore overflows very fast. Additionally there are singularities which occur where l*(z-j) = (2k+1)*Pi*I.
\\j,k are integers
beta_function(z,l,n) =
{
my(out = 0);
for(i=0,n-1,
out = exp(out)/(exp(l*(n-i-z)) +1));
out;
}
\\This is the error between the asymptotic tetration and the tetration. This is pretty much good for 200 digit accuracy if you need.
\\modify the 0.000000001 to a bigger number to make this go faster and receive less precision. When graphing 0.0001 is enough
\\Warning: This will blow up at some points. This is part of the math; these functions have singularities/branch cuts.
tau(z,l,n)={
if(1/real(beta_function(z,l,n)) <= 0.000000001, \\this is where we'll have problems; if I try to grab a Taylor series with this condition we error out
-log(1+exp(-l*z)),
log(1 + tau(z+1,l,n)/beta_function(z+1,l,n)) - log(1+exp(-l*z))
)
}
\\This is the sum function. I occasionally modify it to make better graphs, but the basis is this.
Abl(z,l,n) = {
beta_function(z,l,n) + tau(z,l,n)
}
Plugging this in, you get the following expressions:
Abl(1,log(2),100)
realprecision = 28 significant digits (20 digits displayed)
%109 = 0.15201551563214167060
exp(Abl(0,log(2),100))
%110 = 0.15201551563214167060
Abl(1+I,2+0.5*I,100)
%111 = 0.28416643148885326261 + 0.80115283113944703984*I
exp(Abl(0+I,2+0.5*I,100))
%112 = 0.28416643148885326261 + 0.80115283113944703984*I
And so on and so forth; where Abl(z,l,n) = exp(Abl(z-1,l,n)). There's no problem with this code. Absolutely none at all; we can set this to 200 precision and it'll still produce correct results. The graphs behave exactly as the math says they should behave. The problem is, in my construction of tetration (the one we actually want); we have to sort of paste together the solutions of Abl(z,l,n) across the value l. Now, you don't have to worry about any of that at all; but, mathematically, this is what we're doing.
This is the second batch of code; which is designed to "paste together" all these Abl(z,l,n) into one function.
\\This is the modified asymptotic solution to the Tetration equation.
beta(z,n) = {
beta_function(z,1/sqrt(1+z),n);
}
\\This is the Tetration function.
Tet(z,n) ={
if(1/abs(beta_function(z,1/sqrt(1+z),n)) <= 0.00000001, \\Again, we see here this if statement; and we can't have this.
beta_function(z,1/sqrt(1+z),n),
log(Tet(z+1,n))
)
}
This code works perfectly for real values and for complex values. Some sample values:
Tet(1+I,100)
%113 = 0.12572857262453957030 - 0.96147559586703141524*I
exp(Tet(0+I,100))
%114 = 0.12572857262453957030 - 0.96147559586703141524*I
Tet(0.5,100)
%115 = -0.64593666417664607364
exp(Tet(0.5,100))
%116 = 0.52417133958039107545
Tet(1.5,100)
%117 = 0.52417133958039107545
We can also effectively graph this object on the real line, which just looks like the following:
ploth(X=0,4,Tet(X,100))
Now, you may be asking; What's the problem then?
If you try and plot this function in the complex plane, it's doomed to fail. The nested logarithms produce too many singularities near the real line. For imaginary arguments away from the real line, there's no problem, and I've produced some nice graphs; but the closer you get to the real line, the more it misbehaves and just short circuits.
You may be thinking: well then, the math is wrong! But no; the reason this is happening is that Kneser's tetration is the only tetration that is stable about the principal branch of the logarithm. Since this tetration IS NOT Kneser's tetration, it's inherently unstable about the principal branch of the logarithm. Of course, Pari just chooses the principal branch. So when I do log(log(log(log(log(beta(z+5,100)))))), the math already says this will diverge. But on the real line it's perfectly adequate, and for values of z with an imaginary argument away from zero, we're fine too.
So, how I want to solve this is to grab the Taylor series at Tet(1+z,100), which Pari-GP is perfect for. The trouble?
Tet(1+z,100)
*** at top-level: Tet(1+z,100)
*** ^------------
*** in function Tet: ...unction(z,1/sqrt(1+z),n))<=0.00000001,beta_fun
*** ^---------------------
*** _<=_: forbidden comparison t_SER , t_REAL.
The numerical comparison I've done doesn't translate to a comparison between t_SER and t_REAL.
So, my question, at long last: what is an effective strategy for getting the Taylor series of Tet(1+z,100) using only real inputs? The complex inputs near z=0 are erroneous; the real values are not. And if my math is right, we can take the derivatives along the real line and get the right result. Then we can construct a Tet_taylor(z,n) which is just the Taylor series expansion, which will most definitely have no errors when trying to graph.
Any help, questions, comments, suggestions--anything, is greatly appreciated! I really need some outside eyes on this.
Thanks so much if you got to the bottom of this post. This one is bugging me.
Regards, James
EDIT:
I should add that Tet(z+c,100), for some number c, is the actual tetration function we want. There is a shifting constant I haven't talked about yet. Nonetheless, this is spurious to the question, and is more a mathematical point.
This is definitely not an answer - I have absolutely no clue what you are trying to do. However, I see no harm in offering suggestions. PARI has a built in type for power series (essentially Taylor series) - and is very good at working with them (many operations are supported). I was originally going to offer some suggestions on how to get a Taylor series out of a recursive definition using your functions as an example - but in this case, I'm thinking that you are trying to expand around a singularity which might be doomed to failure. (On your plot it seems as x->0, the result goes to -infinity???)
In particular if I compute:
log(beta(z+1, 100))
log(log(beta(z+2, 100)))
log(log(log(beta(z+3, 100))))
log(log(log(log(beta(z+4, 100)))))
...
The different series are not converging to anything. Even the constant term of the series is getting smaller with each iteration, so I am not entirely sure there is even a Taylor series expansion about x = 0.
Questions/suggestions:
Should you be expanding about a different point? (say, where the curve crosses the x-axis).
Does the Taylor series satisfy some recursive relation? For example: A(z) = log(A(z+1)). [This doesn't work, but perhaps there is another way to write it].
I suspect my answer is unlikely to be satisfactory - but then again your question is more mathematical than a practical programming problem.
So I've successfully answered my question. I haven't programmed in so long; I'm kind of shoddy. But I figured it out after enough coffee. I created 3 new functions, which allow me to grab the Taylor series.
\\This function attempts to find the number of iterations we need.
Tet_GRAB_k(A,n) ={
my(k=0);
while( 1/real(beta(A+k,n)) >= 0.0001, k++);
return(k);
}
\\This function will run and produce the same results as Tet; but it's slower, and it lets us estimate Taylor coefficients.
\\You have to guess which k to use for whatever accuracy before overflowing; which is what the previous function is good for.
Tet_taylor(z,n,k) = {
my(val = beta(z+k,n));
for(i=1,k,val = log(val));
return(val);
}
\\This function produces an array of all the coefficients about a value A.
TAYLOR_SERIES(A,n) = {
my(ser = vector(40,i,0));
for(i=1,40, ser[i] = polcoeff(Tet_taylor(A+z,n,Tet_GRAB_k(A,n)),i-1,z));
return(ser);
}
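To actually use the coefficient vector, I evaluate the truncated polynomial by Horner's rule; a quick sketch of that step in Python (the names here are mine, with coeffs being the array TAYLOR_SERIES returns):
def tet_from_taylor(z, A, coeffs):
    # evaluate sum(coeffs[i] * (z - A)**i) by Horner's rule
    w = z - A
    val = 0.0
    for c in reversed(coeffs):
        val = val * w + c
    return val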
After running the numbers, I'm confident this works. The Taylor series is converging; albeit rather slowly and slightly less accurately than desired; but this will have to do.
Thanks to anyone who read this. I'm just answering this question for completeness.
I have a sound function I am trying to break up into sines/cosines, so I resorted to the Fast Fourier Transform. By using the fft(y,inverse=FALSE) function I was able to convert the time domain of the sound into the frequency domain. The output is complex. I read that in order to convert this complex output into the necessary information, namely the amplitude and the phase of A(v)*cos(2*pi*v*t+P), one must take the abs() of the output to get the amplitude; however, I am having difficulty finding the R function that gets us the phase. In MATLAB, the angle() function returns the phases of the FFT. What is the respective function in R to find the phase?
Update
Thank you for the suggestions guys; still experiencing an issue. I am running FFT on a simple function to test to see if it works. My function is y=cos(2*pi*(seq(0,10,by=.01)*(1/5)+7.5)).
So the frequency is 1/5 with a phase shift of 7.5.
y=cos(2*pi*(seq(0,10,by=.01)*(1/5)+7.5))
fty=fft(y,inverse=F)
plot(abs(fty),xlim=c(0,10),type="l")
angle=atan2(Im(fty), Re(fty))
> angle[3]
[1] 1.222766
When I plot the series, the amplitude is peaking at a frequency value of 3, and the angle function (which should give me the phase at the frequency where the amplitude peaks) is giving me a phase of 1.2. What am I doing wrong?
You might find that the atan2 function does what you want. You would give it the imaginary and real parts of the value, since the prototype is atan2(y, x). So you could do:
angle = atan2(Im(value), Re(value));
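If it helps to cross-check in another environment, here is the same extraction in Python/numpy, where angle() is exactly atan2 of the imaginary and real parts; the bin-picking line is my own illustrative choice, not something from the question:
import numpy as np

t = np.arange(0, 10, 0.01)
y = np.cos(2 * np.pi * (t * (1 / 5) + 7.5))
fty = np.fft.fft(y)

k = np.argmax(np.abs(fty[1:len(fty) // 2])) + 1  # dominant positive-frequency bin
phase = np.angle(fty[k])                         # identical to atan2(Im, Re)
assert np.isclose(phase, np.arctan2(fty[k].imag, fty[k].real))
# note: a shift of 7.5 cycles is 0.5 cycles mod 1, so this phase comes out near pi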
I have a function that takes a floating point number and returns a floating point number. It can be assumed that if you were to graph the output of this function it would be 'n' shaped, i.e. there would be a single maximum point, and no other points on the function with a zero slope. We also know that the input value that yields this maximum output will lie between two known points, perhaps 0.0 and 1.0.
I need to efficiently find the input value that yields the maximum output value to some degree of approximation, without doing an exhaustive search.
I'm looking for something similar to Newton's Method which finds the roots of a function, but since my function is opaque I can't get its derivative.
I would like to down-thumb all the other answers so far, for various reasons, but I won't.
An excellent and efficient method for minimizing (or maximizing) smooth functions when derivatives are not available is parabolic interpolation. It is common to write the algorithm so it temporarily switches to the golden-section search (Brent's minimizer) when parabolic interpolation does not progress as fast as golden-section would.
I wrote such an algorithm in C++. Any offers?
UPDATE: There is a C version of the Brent minimizer in GSL. The archives are here: ftp://ftp.club.cc.cmu.edu/gnu/gsl/ Note that it will be covered by some flavor of GNU "copyleft."
As I write this, the latest-and-greatest appears to be gsl-1.14.tar.gz. The minimizer is located in the file gsl-1.14/min/brent.c. It appears to have termination criteria similar to what I implemented. I have not studied how it decides to switch to golden section, but for the OP, that is probably moot.
UPDATE 2: I googled up a public domain java version, translated from FORTRAN. I cannot vouch for its quality. http://www1.fpl.fs.fed.us/Fmin.java I notice that the hard-coded machine efficiency ("machine precision" in the comments) is 1/2 the value for a typical PC today. Change the value of eps to 2.22045e-16.
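If calling out to a library is acceptable, SciPy wraps a Brent-style bounded minimizer as well; a minimal sketch in Python, maximizing by minimizing the negation (the quadratic f is just a stand-in for the opaque function):
from scipy.optimize import minimize_scalar

def f(x):
    return -(x - 0.3) ** 2  # stand-in for the opaque 'n'-shaped function

# minimize -f on [0, 1] to maximize f; method='bounded' needs the bounds pair
res = minimize_scalar(lambda x: -f(x), bounds=(0.0, 1.0), method="bounded")
print(res.x)  # approximately 0.3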
Edit 2: The method described in Jive Dadson's answer is a better way to go about this. I'm leaving my answer up since it's easier to implement, if speed isn't too much of an issue.
Use a form of binary search, combined with numeric derivative approximations.
Given the interval [a, b], let x = (a + b) /2
Let epsilon be something very small.
Is (f(x + epsilon) - f(x)) positive? If yes, the function is still growing at x, so you recursively search the interval [x, b]
Otherwise, search the interval [a, x].
There might be a problem if the max lies between x and x + epsilon, but you might give this a try.
Edit: The advantage to this approach is that it exploits the known properties of the function in question. That is, I assumed by "n"-shaped, you meant, increasing-max-decreasing. Here's some Python code I wrote to test the algorithm:
def f(x):
    return -x * (x - 1.0)

def findMax(function, a, b, maxSlope):
    x = (a + b) / 2.0
    e = 0.0001
    slope = (function(x + e) - function(x)) / e
    if abs(slope) < maxSlope:
        return x
    if slope > 0:
        return findMax(function, x, b, maxSlope)
    else:
        return findMax(function, a, x, maxSlope)
Typing findMax(f, 0, 3, 0.01) should return 0.504, as desired.
For optimizing a concave function, which is the type of function you are talking about, without evaluating the derivative I would use the secant method.
Given the two initial values x[0]=0.0 and x[1]=1.0 I would proceed to compute the next approximations as:
def next_x(x, xprev):
    return x - f(x) * (x - xprev) / (f(x) - f(xprev))
and thus compute x[2], x[3], ... until the change in x becomes small enough.
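A minimal driver for that loop might look like the following sketch, where f is the function whose root is sought and the tolerance is arbitrary:
def secant_root(f, x0, x1, tol=1e-10, max_iter=100):
    # repeat x[n+1] = x[n] - f(x[n]) * (x[n] - x[n-1]) / (f(x[n]) - f(x[n-1]))
    for _ in range(max_iter):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1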
Edit: As Jive explains, this solution is for root finding which is not the question posed. For optimization the proper solution is the Brent minimizer as explained in his answer.
The Levenberg-Marquardt algorithm is a Newton's-method-like optimizer. It has a C/C++ implementation, levmar, that doesn't require you to define the derivative function. Instead it will evaluate the objective function in the current neighborhood to move to the maximum.
BTW: this website appears to have been updated since I last visited it; I hope it's even the same one I remembered. Apparently it now also supports other languages.
Given that it's only a function of a single variable and has one extremum in the interval, you don't really need Newton's method. Some sort of line search algorithm should suffice. This wikipedia article is actually not a bad starting point, if short on details. Note in particular that you could just use the method described under "direct search", starting with the end points of your interval as your two points.
I'm not sure if you'd consider that an "exhaustive search", but it should actually be pretty fast I think for this sort of function (that is, a continuous, smooth function with only one local extremum in the given interval).
You could reduce it to a simple linear fit on the deltas, finding the place where it crosses the x axis. A linear fit can be done very quickly.
Or just take 3 points (left/top/right) and fix the parabola.
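For the three-point parabola, the vertex comes out in closed form; a sketch of that one step (the function name is mine):
def parabola_vertex_x(x1, y1, x2, y2, x3, y3):
    # x-coordinate of the vertex of the parabola through the three points
    num = (x2 - x1) ** 2 * (y2 - y3) - (x2 - x3) ** 2 * (y2 - y1)
    den = (x2 - x1) * (y2 - y3) - (x2 - x3) * (y2 - y1)
    return x2 - 0.5 * num / den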
It depends mostly on the nature of the underlying relation between x and y, I think.
Edit: this is in case you have an array of values, as the question's title states. When you have a function, take Newton-Raphson.
I'm working on a 2D game where I'm trying to accelerate an object to a top speed using some basic physics code.
Here's the pseudocode for it:
const float acceleration = 0.02f;
const float friction = 0.8f; // value is always 0.0..1.0
float velocity = 0;
float position = 0;
move()
{
velocity += acceleration;
velocity *= friction;
position += velocity;
}
This is a very simplified approach that doesn't rely on mass or actual friction (the in-code friction is just a generic force acting against movement). It works well as the "velocity *= friction;" part keeps the velocity from going past a certain point. However, it's this top speed and its relationship to the acceleration and friction where I'm a bit lost.
What I'd like to do is set a top speed, and the amount of time it takes to reach it, then use them to derive the acceleration and friction values.
i.e.,
const float max_velocity = 2.0;
const int ticks = 120; // If my game runs at 60 FPS, I'd like a
                       // moving object to reach max_velocity in
                       // exactly 2 seconds.
const float acceleration = ?
const float friction = ?
I found this question very interesting since I had recently done some work on modeling projectile motion with drag.
Point 1: You are essentially updating the position and velocity using an explicit/forward Euler iteration where each new value for the states should be a function of the old values. In such a case, you should be updating the position first, then updating the velocity.
Point 2: There are more realistic physics models for the effect of drag friction. One model (suggested by Adam Liss) involves a drag force that is proportional to the velocity (known as Stokes' drag, which generally applies to low velocity situations). The one I previously suggested involves a drag force that is proportional to the square of the velocity (known as quadratic drag, which generally applies to high velocity situations). I'll address each one with regard to how you would deduce formulas for the maximum velocity and the time required to effectively reach the maximum velocity. I'll forego the complete derivations since they are rather involved.
Stokes' drag:
The equation for updating the velocity would be:
velocity += acceleration - friction*velocity
which represents the following differential equation:
dv/dt = a - f*v
Using the first entry in this integral table, we can find the solution (assuming v = 0 at t = 0):
v = (a/f) - (a/f)*exp(-f*t)
The maximum (i.e. terminal) velocity occurs when t >> 0, so that the second term in the equation is very close to zero and:
v_max = a/f
Regarding the time needed to reach the maximum velocity, note that the equation never truly reaches it, but instead asymptotes towards it. However, when the argument of the exponential equals -5, the velocity is around 98% of the maximum velocity, probably close enough to consider it equal. You can then approximate the time to maximum velocity as:
t_max = 5/f
You can then use these two equations to solve for f and a given a desired vmax and tmax.
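In code, that back-solve is two lines; a sketch in Python, measuring t in ticks to match the question's loop (the function name is mine):
def stokes_params(v_max, t_max):
    f = 5.0 / t_max  # from t_max ~ 5/f
    a = v_max * f    # from v_max = a/f
    return a, f

a, f = stokes_params(2.0, 120.0)  # the question's top speed and tick count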
Quadratic drag:
The equation for updating the velocity would be:
velocity += acceleration - friction*velocity*velocity
which represents the following differential equation:
dv/dt = a - f*v^2
Using the first entry in this integral table, we can find the solution (assuming v = 0 at t = 0):
v = sqrt(a/f)*(exp(2*sqrt(a*f)*t) - 1)/(exp(2*sqrt(a*f)*t) + 1)
The maximum (i.e. terminal) velocity occurs when t >> 0, so that the exponential terms are much greater than 1 and the equation approaches:
v_max = sqrt(a/f)
Regarding the time needed to reach the maximum velocity, note that the equation never truly reaches it, but instead asymptotes towards it. However, when the argument of the exponential equals 5, the velocity is around 99% of the maximum velocity, probably close enough to consider it equal. You can then approximate the time to maximum velocity as:
t_max = 2.5/sqrt(a*f)
which is also equivalent to:
t_max = 2.5/(f*v_max)
For a desired vmax and tmax, the second equation for tmax will tell you what f should be, and then you can plug that in to the equation for vmax to get the value for a.
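The same back-solve for the quadratic model, again as a sketch:
def quadratic_params(v_max, t_max):
    f = 2.5 / (t_max * v_max)  # from t_max = 2.5/(f*v_max)
    a = f * v_max ** 2         # from v_max = sqrt(a/f)
    return a, f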
This seems like a bit of overkill, but these are actually some of the simplest ways to model drag! Anyone who really wants to see the integration steps can shoot me an email and I'll send them to you. They are a bit too involved to type here.
Another Point: I didn't immediately realize this, but the updating of the velocity is not necessary anymore if you instead use the formulas I derived for v(t). If you are simply modeling acceleration from rest, and you are keeping track of the time since the acceleration began, the code would look something like:
position += velocity_function(timeSinceStart)
where "velocity_function" is one of the two formulas for v(t) and you would no longer need a velocity variable. In general, there is a trade-off here: calculating v(t) may be more computationally expensive than simply updating velocity with an iterative scheme (due to the exponential terms), but it is guaranteed to remain stable and bounded. Under certain conditions (like trying to get a very short tmax), the iteration can become unstable and blow-up, a common problem with the forward Euler method. However, maintaining limits on the variables (like 0 < f < 1), should prevent these instabilities.
In addition, if you're feeling somewhat masochistic, you may be able to integrate the formula for v(t) to get a closed form solution for p(t), thus foregoing the need for a Newton iteration altogether. I'll leave this for others to attempt. =)
Warning: Partial Solution
If we follow the physics as stated, there is no maximum velocity. From a purely physical viewpoint, you've fixed the acceleration at a constant value, which means the velocity is always increasing.
As an alternative, consider the two forces acting on your object:
The constant external force, F, that tends to accelerate it, and
The force of drag, d, which is proportional to the velocity and tends to slow it down.
So the velocity at iteration n becomes: v_n = v_0 + n*F - d*v_(n-1)
You've asked to choose the maximum velocity, v_(n_max), that occurs at iteration n_max.
Note that the problem is under-constrained; that is, F and d are related, so you can arbitrarily choose a value for one of them, then calculate the other.
Now that the ball's rolling, is anyone willing to pick up the math?
Warning: it's ugly and involves power series!
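One piece that can be read off without the power series (my own observation, not part of the original): with the per-iteration update v += F - d*v, the steady state must leave v unchanged, so d*v = F and the terminal velocity is v = F/d.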
velocity *= friction;
This doesn't prevent the velocity from going above a certain point...
Friction increases exponentially (don't quote me on that) as the velocity increases, and will be 0 at rest. Eventually, you will reach a point where friction = acceleration.
So you want something like this:
velocity += (acceleration - friction);
position += velocity;
friction = a*exp(b*velocity);
Where you pick values for a and b. b will control how long it takes to reach top speed, and a will control how abruptly the friction increases. (Again, don't quote me on this; I'm going from what I remember from grade 12 physics.)
This isn't answering your question, but one thing you shouldn't do in simulations like this is depend on a fixed frame rate. Calculate the time since the last update, and use the delta-T in your equations. Something like:
static double lastUpdate=0;
if (lastUpdate!=0) {
deltaT = time() - lastUpdate;
velocity += acceleration * deltaT;
position += velocity * deltaT;
}
lastUpdate = time();
It's also good to check if you lose focus and stop updating, and when you gain focus set lastUpdate to 0. That way you don't get a huge deltaT to process when you get back.
If you want to see what can be done with very simple physics models using very simple maths, take a look at some of the Scratch projects at http://scratch.mit.edu/ - you may get some useful ideas & you'll certainly have fun.
This is probably not what you are looking for, but depending on what engine you are working on, it might be better to use an engine built by someone else, like Farseer (for C#).
Note Codeplex is down for maintenance.