Runge-Kutta (RK4) integration for game physics - math

Gaffer on Games has a great article about using RK4 integration for better game physics. The implementation is straightforward, but the math behind it confuses me. I understand derivatives and integrals on a conceptual level, but haven't manipulated equations in a long while.
Here's the core of Gaffer's implementation:
void integrate(State &state, float t, float dt)
{
    Derivative a = evaluate(state, t, 0.0f, Derivative());
    Derivative b = evaluate(state, t + dt*0.5f, dt*0.5f, a);
    Derivative c = evaluate(state, t + dt*0.5f, dt*0.5f, b);
    Derivative d = evaluate(state, t + dt, dt, c);
    const float dxdt = 1.0f/6.0f * (a.dx + 2.0f*(b.dx + c.dx) + d.dx);
    const float dvdt = 1.0f/6.0f * (a.dv + 2.0f*(b.dv + c.dv) + d.dv);
    state.x = state.x + dxdt * dt;
    state.v = state.v + dvdt * dt;
}
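For completeness, here is a sketch of the supporting pieces the snippet relies on, in the same spirit as the article; the spring-damper acceleration() is only an illustrative placeholder, not necessarily what your simulation uses:

struct State      { float x = 0, v = 0; };   // position and velocity
struct Derivative { float dx = 0, dv = 0; }; // dx = velocity, dv = acceleration

// Illustrative placeholder force model (a damped spring); substitute your own.
float acceleration(const State &state, float /*t*/)
{
    const float k = 10.0f, b = 1.0f;
    return -k * state.x - b * state.v;
}

// Advance a copy of the state by dt along the previous derivative, then sample
// the derivative there. This is what produces the a, b, c, d samples above.
Derivative evaluate(const State &initial, float t, float dt, const Derivative &d)
{
    State state;
    state.x = initial.x + d.dx * dt;
    state.v = initial.v + d.dv * dt;

    Derivative output;
    output.dx = state.v;
    output.dv = acceleration(state, t + dt);
    return output;
}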
Can anybody explain in simple terms how RK4 works? Specifically, why are we averaging the derivatives at 0.0f, 0.5f, 0.5f, and 1.0f? How is averaging derivatives up to the 4th order different from doing a simple Euler integration with a smaller timestep?
After reading the accepted answer below, and several other articles, I have a grasp on how RK4 works. To answer my own questions:
Can anybody explain in simple terms how RK4 works?
RK4 takes advantage of the fact that we can get a much better approximation of a function if we use its higher-order derivatives rather than just the first or second derivative. That's why the Taylor series converges much faster than Euler approximations. (Take a look at the animation on the right side of that page.)
Specifically, why are we averaging the derivatives at 0.0f, 0.5f, 0.5f, and 1.0f?
The Runge-Kutta method is an approximation of a function that samples derivatives at several points within a timestep, unlike the Taylor series, which only samples derivatives of a single point. After sampling these derivatives we need to know how to weight each sample to get the closest approximation possible. An easy way to do this is to pick constants that coincide with the Taylor series, which is how the constants of a Runge-Kutta equation are determined.
This article made it clearer for me. Notice how (15) is the Taylor series expansion while (17) is the Runge-Kutta derivation.
How is averaging derivatives up to the 4th order different from doing a simple Euler integration with a smaller timestep?
Mathematically, it converges much faster than doing many Euler approximations. Of course, with enough Euler approximations we can gain equal accuracy to RK4, but the computational power needed doesn't justify using Euler.

This may be a bit oversimplified as far as the actual math goes, but it is meant as an intuitive guide to Runge-Kutta integration.
Given some quantity at some time t1, we want to know the quantity at another time t2. With a first-order differential equation, we can know the rate of change of that quantity at t1. There is nothing else we can know for sure; the rest is guessing.
Euler integration is the simplest way to guess: linearly extrapolate from t1 to t2, using the precisely known rate of change at t1. This usually gives a bad answer. If t2 is far from t1, this linear extrapolation will fail to match any curvature in the ideal answer. If we take many small steps from t1 to t2, we'll have the problem of subtraction of similar values. Roundoff errors will ruin the result.
So we refine our guess. One way is to go ahead and do this linear extrapolation anyway, then, hoping it's not too far off from the truth, use the differential equation to compute an estimate of the rate of change at t2. This, averaged with the (accurate) rate of change at t1, better represents the typical slope of the true answer between t1 and t2. We use this to make a fresh linear extrapolation from t1 to t2. It's not obvious whether we should take the simple average or give more weight to the rate at t1 without doing the math to estimate errors, but there is a choice here. In any case, it's a better answer than Euler gives.
Perhaps better, make our initial linear extrapolation to a point in time midway between t1 and t2, and use the differential equation to compute the rate of change there. This gives roughly as good an answer as the average just described. Then use this for a linear extrapolation from t1 to t2, since our purpose is to find the quantity at t2. This is the midpoint algorithm.
You can imagine using the mid-point estimate of the rate of change to make another linear extrapolation of the quantity from t1 to the midpoint. With the differential equation we get a better estimate of the slope there. Using this, we end by extrapolating from t1 all the way to t2, where we want an answer. This is the Runge-Kutta algorithm.
Could we do a third extrapolation to the midpoint? Sure, it's not illegal, but detailed analysis shows diminishing improvement, such that other sources of error dominate the final result.
Runge-Kutta applies the differential equation at the initial point t1, twice at the midpoint, and once at the final point t2. The in-between points are a matter of choice. It is possible to use other points between t1 and t2 for making those improved estimates of the slope. For example, we could use t1, a point one third of the way toward t2, another point two thirds of the way toward t2, and t2. The weights for the average of the four derivatives would be different. In practice this doesn't really help, but it might have a place in testing, since it ought to give the same answer while producing a different set of round-off errors.
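As a small illustration of the midpoint step described a couple of paragraphs above, here is a sketch for a generic first-order ODE y' = f(t, y); the rate function f is an assumption for the example, not something from this answer:

// Midpoint method for one step of y' = f(t, y): take a half Euler step to the
// midpoint, sample the slope there, and use that slope for the full step.
double midpoint_step(double (*f)(double t, double y), double t, double y, double dt)
{
    const double k1 = f(t, y);                             // slope at t1
    const double k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1);  // slope at the midpoint
    return y + dt * k2;                                    // extrapolate from t1 to t2
}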

As to your question why: I recall once writing a cloth simulator where the cloth was a series of springs interconnected at nodes. In the simulator, the force exerted by the spring is proportional to how far the spring is stretched. The force causes acceleration at the node, which causes velocity which moves the node which stretches the spring. There are two integrals (integrating acceleration to get velocity, and integrating velocity to get position) and if they are inaccurate, the errors snowball: Too much acceleration causes too much velocity which causes too much stretch which causes even more acceleration, making the whole system unstable.
It is difficult to explain without graphics, but I'll try: Say you have f(t), where f(0) = 10, f(1) = 20, and f(2) = 30.
A proper integration of f(t) over the interval 0 < t < 1 would give you the area under the graph of f(t) over that interval.
The rectangle rule approximates that area with a rectangle whose width is the time delta and whose height is the new value of f(t); so in the interval 0 < t < 1 it yields 20 * 1 = 20, and in the next interval 1 < t < 2 it yields 30 * 1 = 30.
Now if you were to plot these points and draw a line through them you'll see that it is actually triangular, with a surface of 30 (units), and therefore the Euler integration is inadequate.
To get a more accurate estimation of the surface (integral) you can take smaller intervals of t, evaluating at for example f(0), f(0.5), f(1), f(1.5) and f(2).
If you're still following me, the RK4 method is then simply a way of estimating values of f(t) for t0 < t < t0+dt invented by people smarter than myself for getting accurate estimates of the integral.
(but as others have said, read the Wikipedia article for a more detailed explanation. RK4 is in the category of numerical integration)

RK4, in the simplest sense, builds an approximation for each time step from four slope estimates: your initial condition at starting point A; a first approximated slope B, based on data point A and the slope at A, evaluated at half your time step; a third approximation C, which corrects the slope found at B to reflect how the shape of your function is changing; and finally a final slope D based on the corrected slope at point C.
So basically this method lets you calculate using a starting point, an averaged midpoint which has corrections built into both parts to adjust for the shape, and a doubly corrected endpoint. This makes the effective contribution from each data point 1/6, 1/3, 1/3, and 1/6, so most of your answer is based on your corrections for the shape of your function.
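For reference, those weights are the ones in the standard RK4 update (step size h); in Gaffer's code a, b, c, and d play the role of k1 through k4:

\begin{aligned}
k_1 &= f(t_n,\; y_n) \\
k_2 &= f\!\left(t_n + \tfrac{h}{2},\; y_n + \tfrac{h}{2} k_1\right) \\
k_3 &= f\!\left(t_n + \tfrac{h}{2},\; y_n + \tfrac{h}{2} k_2\right) \\
k_4 &= f\!\left(t_n + h,\; y_n + h\, k_3\right) \\
y_{n+1} &= y_n + \tfrac{h}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right)
\end{aligned}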
It turns out that the order of an RK approximation (Euler is considered RK1) describes how its accuracy scales as the time step shrinks.
For RK1 the relationship is linear: a step 10 times smaller gives roughly 10 times less error.
For RK4, a step 10 times smaller gives roughly 10^4 times less error. So while your calculation time increases only linearly as you shrink the step, your accuracy improves polynomially.
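To make that concrete, here is a small self-contained C++ check (a sketch, not Gaffer's code) that integrates y' = y from 0 to 1, whose exact answer is e, and prints how the Euler and RK4 errors shrink with the step size:

#include <cstdio>
#include <cmath>

// Test equation y' = y with y(0) = 1, so the exact value at t = 1 is e.
static double f(double /*t*/, double y) { return y; }

int main()
{
    const double exact = std::exp(1.0);
    for (int n = 10; n <= 10000; n *= 10) {
        const double h = 1.0 / n;
        double yEuler = 1.0, yRK4 = 1.0, t = 0.0;
        for (int i = 0; i < n; ++i, t += h) {
            // One Euler step.
            yEuler += h * f(t, yEuler);

            // One RK4 step (same weights as in the integrate() code above).
            const double k1 = f(t, yRK4);
            const double k2 = f(t + 0.5 * h, yRK4 + 0.5 * h * k1);
            const double k3 = f(t + 0.5 * h, yRK4 + 0.5 * h * k2);
            const double k4 = f(t + h, yRK4 + h * k3);
            yRK4 += h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4);
        }
        std::printf("steps=%6d  Euler error=%.3e  RK4 error=%.3e\n",
                    n, std::fabs(yEuler - exact), std::fabs(yRK4 - exact));
    }
}

Each tenfold increase in the number of steps cuts the Euler error by roughly a factor of 10 and the RK4 error by roughly a factor of 10^4, until round-off error takes over.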

Related

lpsolve - infeasible solution, but I have an example of one

I'm trying to solve this in LPSolve IDE:
/* Objective function */
min: x + y;
/* Variable bounds */
r_1: 2x = 2y;
r_2: x + y = 1.11 x y;
r_3: x >= 1;
r_4: y >= 1;
but the response I get is:
Model name: 'LPSolver' - run #1
Objective: Minimize(R0)
SUBMITTED
Model size: 4 constraints, 2 variables, 5 non-zeros.
Sets: 0 GUB, 0 SOS.
Using DUAL simplex for phase 1 and PRIMAL simplex for phase 2.
The primal and dual simplex pricing strategy set to 'Devex'.
The model is INFEASIBLE
lp_solve unsuccessful after 2 iter and a last best value of 1e+030
How come this can happen when x=1.801801802 and y=1.801801802 are possible solutions here?
How To Find The Solution
Let's do some math.
Your problem is:
min x+y
s.t. 2x = 2y
x + y = 1.11 x y
x >= 1
y >= 1
The first constraint 2x = 2y can be simplified to x=y. We now substitute throughout the problem:
min 2*x
s.t. 2*x = 1.11 x^2
x >= 1
And rearrange:
min 2*x
s.t. 1.11 x^2-2*x=0
x >= 1
From geometry we know that 1.11 x^2 - 2x is an upward-opening parabola with a minimum below zero. Therefore, the constraint has exactly two roots, given by the quadratic formula: 200/111 and 0.
Only one of these satisfies the second constraint, x >= 1: namely 200/111.
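Spelled out, the constraint factors directly, so no quadratic-formula heroics are even needed:

1.11x^2 - 2x = x\,(1.11x - 2) = 0 \;\Rightarrow\; x = 0 \;\text{ or }\; x = \frac{2}{1.11} = \frac{200}{111} \approx 1.801801\ldots

which matches the x = y = 1.801801802 pointed out in the question.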
Why Can't I Find This Solution With My Solver?
The easy way out is to say it's because the x^2 term (x*y before the substitution) is nonlinear. But it goes a little deeper than that. Nonlinear problems can be easy to solve as long as they are convex. A convex problem is one whose constraints form a single, contiguous space such that any line drawn between two points in the space stays within the boundaries of the space.
Your problem is not convex. The constraint 1.11 x^2-2*x=0 defines an infinite number of points. No two of these points can be connected by a straight line which stays in the space defined by the constraint because that space is curved. If the constraint were instead 1.11 x^2-2*x<=0 then the space would be convex because all points could be connected with straight lines that stay in its interior.
Nonconvex problems are part of a broader class of problems called NP-hard. This means that there is no (and perhaps cannot be any) easy way of solving the problem. We have to be smart.
Solvers that can handle mixed-integer programming (MIP/MILP) can solve many non-convex problems efficiently, as can other techniques such as genetic algorithms. But, beneath the hood, these techniques all rely on glorified guess-and-check.
So your solver fails because the problem is nonconvex and your solver is neither smart enough to use MIP to guess-and-check its way to a solution nor smart enough to use the quadratic equation.
How Then Can I Solve The Problem?
In this particular instance, we are able to use mathematics to quickly find a solution because, although the problem is nonconvex, it is part of a class of special cases. Deep thinking by mathematicians has given us a simple way of handling this class.
But consider a few generalizations of the problem:
(a) a x^3+b x^2+c x+d=0
(b) a x^4+b x^3+c x^2+d x+e =0
(c) a x^5+b x^4+c x^3+d x^2+e x+f=0
(a) has three potential solutions which must be checked (exact solutions are tricky), (b) has four (trickier), and (c) has five. The formulas for (a) and (b) are much more complex than the quadratic formula and mathematicians have shown that there is no formula for (c) that can be expressed using "elementary operations". Instead, we have to resort to glorified guess-and-check.
So the techniques we used to solve your problem don't generalize very well. This is what it means to live in the realm of the nonconvex and NP-hard, and it's a good reason to fund research in mathematics, computer science, and related fields.

Apply Fourier shift theorem to complex signal

I'm trying to apply the Fourier phase shift theorem to a complex signal in R. However, only the magnitude of my signal shifts as I expect. I think it should be possible to apply this theorem to complex signals, so I'm probably making an error somewhere. My guess is that there is an error in the frequency axis I calculate.
How do I correctly apply the fourier shift theorem to a complex signal (using R)?
i = complex(0,0,1)
t.in = (1+i)*matrix(c(1,0,0,0,0,0,0,0,0,0))
n.shift = 5
#the output of fft() has the mean / 0 frequency at the first element
#it then increases to the highest frequency, flips to negative frequencies
#and then increases again to the negative frequency closest to 0
N = length(t.in)
if (N%%2){#odd
kmin = -(N-1)/2
kmax = (N-1)/2
} else {#even
kmin = -N/2
kmax = N/2-1
#center frequency negative, is that correct?
}
#create frequency axis for fft() output, no sampling frequency or sample duration needed
k = (kmin:kmax)
kflip = floor(N/2)
k = k[c((kflip+1):N,1:kflip)]
f = 2*pi*k/N
shiftterm = exp( -i*n.shift*f )
T.in = fft(t.in)
T.out = T.in*shiftterm
t.out = fft(T.out, inverse=T)/N
par(mfrow=c(2,2))
plot(Mod(t.in),col="green");
plot(Mod(t.out), col="red");
plot(Arg(t.in),col="green");
plot(Arg(t.out),col="red");
As you can see, the magnitude of the signal is nicely shifted, but the phase is scrambled. I think the negative frequencies are where my error is, but I can't see it.
What am I doing wrong?
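For reference, the discrete form of the shift theorem I am trying to apply is

x[n - n_0] \;\longleftrightarrow\; X[k]\, e^{-\,i\, 2\pi k n_0 / N}

which is what shiftterm = exp(-i*n.shift*f) with f = 2*pi*k/N is meant to implement.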
The questions about the Fourier phase shift theorem I could find:
real 2d signal in python
real 2d signal in matlab
real 1d signal in python
math question about what fourier shift does
But these were not about complex signals.
Answer
As Steve suggested in the comments, I checked the phase on the 6th element.
> Arg(t.out)[6]
[1] 0.7853982
> Arg(t.in)[1]
[1] 0.7853982
So the only element that has a magnitude (at least one order of magnitude higher than the EPS) does have the phase that I expected.
TL;DR: The result from the original approach in the question was already correct; we are seeing the Gibbs phenomenon sliding by.
Just discard low magnitude elements?
If the phase of elements that should be zero ever becomes a problem, I can run t.out[Mod(t.out)<epsfactor*.Machine$double.eps] = 0, where in this case epsfactor has to be 10 to get rid of the '0'-magnitude elements.
Adding that line before plotting gives the following result, which is what I expected to get beforehand. However, the 'scrambled' phase might actually be accurate in most cases as I'll explain below.
The original result really was correct
Just setting low-magnitude elements to 0 does not make the phase of the shifted signal more intuitive, however. This is a plot where I apply a 4.5-sample shift; the phase is still 'scrambled'.
Applying the Fourier shift is equivalent to downsampling a shifted Fourier interpolation
It occurred to me that applying a phase shift of a non-integer number of elements is equivalent to Fourier-interpolating the signal and then downsampling the interpolated signal at points between the original elements. Since the vector I used as input is an impulse function, the Fourier-interpolated signal is simply not well behaved. The signal after applying the Fourier phase shift theorem can then be expected to have exactly the phase that the Fourier-interpolated signal has, as seen below.
Gibbs Ringing
It's just at the discontinuities that the phase is not well behaved and where small rounding errors can cause large errors in the reconstructed phase. So it is not really related to low magnitude, but to the poorly behaved Fourier transform of the input vector. This is called Gibbs ringing; I could use low-pass filtering with a Gaussian filter to decrease it.
Questions related to Fourier interpolation and phase shift
symbolic approach in R to estimate fourier transform error
non integer signal shift by use of linear interpolation
downsampling complex signal
fourier interpolation application
estimating sub-sample shift between two signals using fourier transforms
estimating sub-sample shift between two signals without interpolation

Is it feasible to denoise time-irrelevant sensor readings with a Kalman filter, and how would I code it?

After doing some research, I can understand how to implement it with time-relevant functions. However, I'm not sure whether I can apply it to time-irrelevant scenarios.
Say we have a simple function y = a*x^2, where both y and x are measured at a constant interval (say 1 min/sample) and a is a constant. However, both the y and x measurements contain white noise.
More specifically, x and y are two independently measured variables. For example, x is the air flow rate in a duct and y is the pressure drop across the duct. Because the air flow varies with the fan speed, the pressure drop across the duct also varies. The relation between the pressure drop y and the flow rate x is y = a*x^2; however, both measurements contain white noise. Is it possible to use a Kalman filter to estimate a more accurate y? Both x and y are recorded at a constant time interval.
Here are my questions:
Is it feasible to implement a Kalman filter to reduce the noise in the y readings? In other words, can I get a better estimate of y?
If this is feasible, how would I code it in R or C?
P.S.
I tried to apply a Kalman filter to a single variable and it works well. The result is as below. I'll try Ben's suggestion next and see whether I can make it work.
I think you can apply some Kalman-filter-like ideas here.
Make your state a, with variance P_a. Your update is just F=[1], and your measurement is just H=[1] with observation y/x^2. In other words, you measure x and y and estimate a by solving for a in your original equation. Update your scalar KF as usual. Approximating R will be important. If x and y both have zero-mean Gaussian noise, then y/x^2 certainly doesn't, but you can come up with an approximation.
Now that you have a running estimate of a (which is a random constant, so Q=0 ideally, but maybe Q=[tiny] to avoid numerical issues) you can use it to get a better y.
You have y_meas and y_est=a*x_meas^2. Combine those using your variances as (R_y * a * x^2 + (P_a + R_x2) * y_meas) / (R_y + P_a + R_x2). Over time as P_a goes to zero (you become certain of your estimate of a) you can see you end up combining information from your x and y measurements proportional to your trust in them individually. Early on, when P_a is high you are mostly trusting the direct measurement of y_meas because you don't know the relationship.
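A hedged C++ sketch of that recipe follows; the variable names, noise levels, the simulated signals, and the first-order error propagation used to approximate R are my assumptions, not part of the answer:

#include <cstdio>
#include <cmath>
#include <random>

int main() {
    const double a_true = 2.0;   // unknown constant we want to estimate
    double a_est = 1.0;          // initial guess for a
    double P_a   = 1.0;          // variance of the estimate of a
    const double Q   = 1e-9;     // tiny process noise to avoid numerical issues
    const double R_y = 0.04;     // variance of the y measurement noise
    const double R_x = 0.01;     // variance of the x measurement noise

    std::mt19937 rng(42);
    std::normal_distribution<double> ny(0.0, std::sqrt(R_y)), nx(0.0, std::sqrt(R_x));

    for (int k = 0; k < 200; ++k) {
        const double x_true = 1.0 + 0.5 * std::sin(0.1 * k);   // varying flow rate
        const double x_meas = x_true + nx(rng);
        const double y_meas = a_true * x_true * x_true + ny(rng);

        // Observation of a: z = y / x^2. Its noise variance is approximated by
        // first-order error propagation (an approximation, as the answer suggests).
        const double x2  = x_meas * x_meas;
        const double z   = y_meas / x2;
        const double R_z = R_y / (x2 * x2)
                         + std::pow(2.0 * y_meas / (x2 * x_meas), 2) * R_x;

        // Scalar Kalman update with F = [1], H = [1].
        P_a += Q;
        const double K = P_a / (P_a + R_z);
        a_est += K * (z - a_est);
        P_a   *= (1.0 - K);

        // Fuse the direct y measurement with the model prediction a_est * x^2,
        // using the combination formula from the answer above.
        const double R_x2    = std::pow(2.0 * a_est * x_meas, 2) * R_x; // variance of a*x^2 due to x noise
        const double y_model = a_est * x2;
        const double y_fused = (R_y * y_model + (P_a + R_x2) * y_meas) / (R_y + P_a + R_x2);

        if (k % 50 == 0)
            std::printf("k=%3d  a_est=%.4f  y_meas=%.3f  y_fused=%.3f\n",
                        k, a_est, y_meas, y_fused);
    }
}

Early on, while P_a is large, y_fused mostly follows y_meas; as the estimate of a converges, the model prediction a_est*x^2 gets more weight.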

How to check whether a polynomial curve is monotonic on an interval [a,b]?

I have a polynomial curve, and I want to find all monotonic curve segments and the corresponding intervals programmatically.
What's the best way to do this...
I want to avoid solving an equation like f'(x) = 0.
Using a nice numerical method, like bisection, is preferred.
The expression for f'(x) is available.
Thanks.
Some additional details: I get a curve in 2D space, and its polynomial is
x: f(t)
y: g(t)
t is [0,1]
So, if I want to get its monotonic curve segments, I must know the positions of t where its tangent vector is (1,0).
One direct way to resolve this is to set up the equation f'(x) = 0.
But I want to use the most efficient way to do this.
For example, I tried a recursive approach:
Divide the range [0,1] into four parts, and check whether the four tangents' projections onto the vector (1,0) point in the same direction, and whether the two endpoints are close enough. If not, continue to divide each range into 4 parts, until the tangents point in the same direction along (1,0) and (0,1) and the endpoints are close enough.
I think you will have to find the roots of f'(x) using a numerical method (feel free to implement any root-seeking algorithm you want, Wikipedia has a list). The roots will be those points where the gradient reaches zero; say x1, x2, x3.
You then have a set of intervals (-inf, x1), (x1, x2), etc.; the continuity of a polynomial ensures that the gradient will be always positive or always negative between a particular pair of points.
So evaluating the gradient sign at a point within each interval will tell you whether that interval is monotonically increasing or not. If you don't care about a "strictly" increasing section, you could patch together adjacent intervals which have positive gradient (as a point of inflection will show up as one of the f'(x)=0 roots).
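A minimal C++ sketch of that recipe: bracket sign changes of f' on a grid, refine each bracket with bisection, then report the sign of f' on every interval between consecutive roots. The example f' is a placeholder, not the poster's polynomial.

#include <cstdio>
#include <vector>

double fprime(double x) { return 3*x*x - 4*x + 0.5; }   // example f'(x), an assumption

// Refine a bracket [lo, hi] known to contain a sign change of f'.
double bisect(double lo, double hi) {
    for (int i = 0; i < 60; ++i) {                       // ~60 halvings: near machine precision
        double mid = 0.5 * (lo + hi);
        if (fprime(lo) * fprime(mid) <= 0) hi = mid; else lo = mid;
    }
    return 0.5 * (lo + hi);
}

int main() {
    const double a = 0.0, b = 2.0;
    const int samples = 1000;
    std::vector<double> roots;

    // Scan a fine grid for sign changes of f'.  (A root landing exactly on a
    // grid point would need a small extra check; omitted for brevity.)
    double prev = fprime(a);
    for (int i = 1; i <= samples; ++i) {
        double x = a + (b - a) * i / samples;
        double cur = fprime(x);
        if (prev * cur < 0)
            roots.push_back(bisect(a + (b - a) * (i - 1) / samples, x));
        prev = cur;
    }

    // Between consecutive roots f' keeps one sign, so each interval is monotonic.
    std::vector<double> pts = roots;
    pts.insert(pts.begin(), a);
    pts.push_back(b);
    for (size_t i = 0; i + 1 < pts.size(); ++i) {
        double mid = 0.5 * (pts[i] + pts[i + 1]);
        std::printf("[%.6f, %.6f]  %s\n", pts[i], pts[i + 1],
                    fprime(mid) > 0 ? "increasing" : "decreasing");
    }
}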
As an alternative to computing the roots of f', you can also use Sturm Sequences.
They allow counting the number of roots (here, the roots of f') in an interval.
The monotonic curve segments are delimited by the roots of f'(x). You can find the roots by using an iterative algorithm like Newton's method.

approximation methods

I attached image:
(source: piccy.info)
So in this image there is a diagram of the function, which is defined on the given points, for example at points x = 1..N.
Another diagram, drawn as a semitransparent curve, is what I want to get from the original diagram,
i.e. I want to approximate the original function so that it becomes smooth.
Are there any methods for doing that?
I heard about least squares method, which can be used to approximate a function by straight line or by parabolic function. But I do not need to approximate by parabolic function.
I probably need to approximate it by trigonometric function.
So are there any methods for doing that?
And one idea: is it possible to use the least squares method for this problem, if we can derive it for trigonometric functions?
One more question! If I use the discrete Fourier transform and think about the function as a sum of waves, maybe the noise has special features by which we can identify it; then we could set the corresponding frequencies to zero and perform the inverse Fourier transform.
If you think that is possible, what can you suggest in order to identify the frequencies of the noise?
Unfortunately, many of the solutions presented here don't solve the problem and/or are plain wrong.
There are many approaches, and each is built for specific conditions and requirements you must be aware of!
a) Approximation theory: If you have a sharply defined function without errors (given by either a definition or data) and you want to trace it as exactly as possible, you use polynomial or rational approximation with Chebyshev or Legendre polynomials, meaning that you approach the function by a polynomial or, if it is periodic, by a Fourier series.
b) Interpolation: If you have a function where some points (but not the whole curve!) are given and you need a function that passes through these points, you can use several methods: Newton-Gregory, Newton with divided differences, Lagrange, Hermite, splines.
c) Curve fitting: You have a function with given points and you want to draw a curve with a given (!) function which approximates the curve as closely as possible. There are linear and nonlinear algorithms for this case.
Your drawing implies:
It is not remotely like a mathematical function.
It is not sharply defined by data or function
You need to fit the curve, not some points.
What you want and need is
d) Smoothing: Given a curve or data points with noise or rapidly changing elements, you only want to see the slow changes over time.
You can do that with LOESS, as Jacob suggested (but I find that overkill, especially because choosing a reasonable span takes some experience). For your problem, I simply recommend the running average, as suggested by Jim C.
http://en.wikipedia.org/wiki/Running_average
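For illustration, a minimal centered running-average sketch in C++; the window size and the test signal are arbitrary choices:

#include <cstdio>
#include <cmath>
#include <vector>

// Centered running (moving) average; widen halfWidth for more smoothing.
std::vector<double> running_average(const std::vector<double>& y, int halfWidth) {
    std::vector<double> out(y.size());
    for (size_t i = 0; i < y.size(); ++i) {
        double sum = 0.0;
        int count = 0;
        for (int j = -halfWidth; j <= halfWidth; ++j) {
            long k = static_cast<long>(i) + j;
            if (k >= 0 && k < static_cast<long>(y.size())) { sum += y[k]; ++count; }
        }
        out[i] = sum / count;   // average over the samples that fall inside the signal
    }
    return out;
}

int main() {
    std::vector<double> noisy;
    for (int i = 0; i < 100; ++i)                          // slow wave plus fast "noise"
        noisy.push_back(std::sin(0.1 * i) + 0.3 * std::sin(7.0 * i));
    std::vector<double> smooth = running_average(noisy, 5);
    for (int i = 0; i < 100; i += 10)
        std::printf("%2d  raw=%+.3f  smooth=%+.3f\n", i, noisy[i], smooth[i]);
}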
Sorry, cdonner and Orendorff, your proposals are well-intentioned, but completely wrong, because you are using the right tools for the wrong solution.
These guys used a sixth-order polynomial to fit climate data and embarrassed themselves completely.
http://scienceblogs.com/deltoid/2009/01/the_australians_war_on_science_32.php
http://network.nationalpost.com/np/blogs/fullcomment/archive/2008/10/20/lorne-gunter-thirty-years-of-warmer-temperatures-go-poof.aspx
Use loess in R (free).
E.g. here the loess function approximates a noisy sine curve.
(source: stowers-institute.org)
As you can see, you can tweak the smoothness of your curve with span.
Here's some sample R code from here:
Step-by-Step Procedure
Let's take a sine curve, add some "noise" to it, and then see how the loess "span" parameter affects the look of the smoothed curve.
Create a sine curve and add some noise:
period <- 120
x <- 1:120
y <- sin(2*pi*x/period) + runif(length(x),-1,1)
Plot the points on this noisy sine curve:
plot(x, y, main="Sine Curve + 'Uniform' Noise")
mtext("showing loess smoothing (local regression smoothing)")
Apply loess smoothing using the default span value of 0.75:
y.loess <- loess(y ~ x, span=0.75, data.frame(x=x, y=y))
Compute loess smoothed values for all points along the curve:
y.predict <- predict(y.loess, data.frame(x=x))
Plot the loess smoothed curve along with the points that were already plotted:
lines(x, y.predict)
You could use a digital filter like an FIR filter. The simplest FIR filter is just a running average. For more sophisticated treatment, look at something like an FFT.
This is called curve fitting. The best way to do this is to find a numeric library that can do it for you. Here is a page showing how to do this using scipy. The picture on that page shows what the code does:
(source: scipy.org)
Now it's only 4 lines of code, but the author doesn't explain it at all. I'll try to explain briefly here.
First you have to decide what form you want the answer to be. In this example the author wants a curve of the form
f(x) = p0 cos (2π/p1 x + p2) + p3 x
You might instead want the sum of several curves. That's OK; the formula is an input to the solver.
The goal of the example, then, is to find the constants p0 through p3 to complete the formula. scipy can find this array of four constants. All you need is an error function that scipy can use to see how close its guesses are to the actual sampled data points.
fitfunc = lambda p, x: p[0]*cos(2*pi/p[1]*x+p[2]) + p[3]*x # Target function
errfunc = lambda p: fitfunc(p, Tx) - tX # Distance to the target function
errfunc takes just one parameter: an array of length 4. It plugs those constants into the formula and calculates an array of values on the candidate curve, then subtracts the array of sampled data points tX. The result is an array of error values; presumably scipy will take the sum of the squares of these values.
Then just put some initial guesses in and scipy.optimize.leastsq crunches the numbers, trying to find a set of parameters p where the error is minimized.
p0 = [-15., 0.8, 0., -1.] # Initial guess for the parameters
p1, success = optimize.leastsq(errfunc, p0[:])
The result p1 is an array containing the four constants. success is 1, 2, 3, or 4 if the solver actually found a solution. (If errfunc is sufficiently crazy, the solver can fail.)
This looks like a polynomial approximation. You can play with polynomials in Excel ("Add Trendline" on a chart, select Polynomial, then increase the order to the level of approximation that you need). It shouldn't be too hard to find an algorithm/code for that.
Excel can show the equation that it came up with for the approximation, too.
