Closing towards a point, independent of the frame rate?

I'm making a Camera class in 3D that closes in towards a point, slowing down and easing into a stop. Doing this once per frame is pretty simple:
// positions are vectors, dampening is a scalar, usually set to ~0.9
currentPosition += (targetPosition - currentPosition) * dampening;
However, this is locked to the framerate, assuming it's executed exactly once per frame.
How would one best implement this behaviour so that it depends on elapsed time, instead of the frame rate or the number of times it's executed?

A short experiment (and a bit of induction, if you need it) shows that after n frames you are at
targetPosition*(1 - (1 - dampening)^n) + currentPosition*(1 - dampening)^n
so to make this time-dependent, write
targetPosition*(1 - pow(1 - dampening, t)) + currentPosition*pow(1 - dampening, t)
where dampening is now per unit of time, and time might even be fractional.
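As a minimal sketch of that formula (Python; the function and parameter names are my own, and positions are treated as scalars, so apply it per component for vectors):
def ease_towards(current, target, damping_per_unit_time, dt):
    # damping_per_unit_time plays the role of `dampening`, but per unit of
    # real time rather than per frame; dt may be fractional.
    remaining = (1.0 - damping_per_unit_time) ** dt  # fraction of the gap left after dt
    return target * (1.0 - remaining) + current * remaining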
Your question reminds me of Calculate speed by distance and friction.

Related

Equation to distribute items unevenly

I'm writing a javascript program that sends a list of MIDI signals over a specified period of time.
If the signals are sent evenly, it's easy to determine how long to wait in between each signal: it's just the total duration divided by the number of signals.
However, I want to be able to offer a setting where the signals aren't sent equally: either the signals are sent with increasing or decreasing speed. In either case, the number of signals and the total amount of time remain the same.
Here's a picture to visualize what I'm talking about. [Picture omitted: two pulse trains, one with intervals shrinking over time and one with intervals growing.]
Is there a simple logarithmic/exponential function where I can compute what these values are? I'm especially hoping it might be possible to use the same equation for both, simply changing a variable.
Thank you so much!
Since you do not give any method to get a pulse value, from the previous value or any other way, I assume we are free to come up with our own.
In both of your cases, it looks like you start with an initial time interval: let's call it a. Then the next interval is that value multiplied by a constant ratio: let's call it r. In the first decreasing case, your value of r is between zero and one (it looks like around 0.6), while in the second case your value of r is greater than one (around 1.6). So your time intervals, in Python notation, are
a, a*r, a*r**2, a*r**3, ...
Then the time of each signal is the sum of a geometric series,
a * (1 - r**n) / (1 - r)
where n is the number of the pulse (1 for the first, 2 for the second, etc.). That formula is valid if r is not one; if r is one, the signals are evenly spaced and the nth signal is given at time
a * n
This is not a "fixed result" since you have two degrees of freedom: you can choose values of a and of r.
If you want to spread the signals more evenly, just bring r closer to one. A value of one is perfectly even, a value farther from one is more clumped at one end. One disadvantage of this method is that if the signal intervals are decreasing then the signals will completely stop at some point, namely at
a / (1 - r)
If you have signals already sent or received and you want to find the value of r, take three consecutive signals: r is the time interval between the 2nd and 3rd signals divided by the time interval between the 1st and 2nd. If you want to see whether this model is a good one for a given set of signals, check the value of r at multiple signals; if the value of r is nearly constant, then this is a good model.
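To tie this back to the original question (a fixed total duration and signal count), here is a minimal sketch in Python; `signal_times` and its parameter names are my own. It solves for the first interval a from the total duration, then uses the geometric-series sum above. The same function covers both cases: r < 1 speeds the signals up, r > 1 slows them down.
def signal_times(total_duration, n, r):
    # Choose the first interval `a` so the nth signal lands exactly at total_duration.
    if r == 1.0:
        a = total_duration / n       # evenly spaced
        return [a * k for k in range(1, n + 1)]
    a = total_duration * (1 - r) / (1 - r ** n)
    # The kth signal fires at the geometric-series sum a*(1 - r**k)/(1 - r).
    return [a * (1 - r ** k) / (1 - r) for k in range(1, n + 1)]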

Data propagation time (single link)

This is not homework!
I'm preparing for a test in networking.
I had this question on the midterm test and only got half the points, and I can't figure out why.
The question describes a sender-receiver connection:
link data rate is R (b/s)
packet size is S (b)
window size is W (pkts)
link distance is D (m)
medium propagation speed is p (m/s)
I need to write the utilisation formula using those letters.
This is what I wrote:
Tp, the propagation time, is D/p (this got me a big X on the test page).
I wrote that the frame transmission time (Tt) is the window size in bits (W*S)
divided by the link data rate, i.e. (W*S)/R,
so the formula is U = Tt/(Tt + 2*Tp) = ((W*S)/R) / (((W*S)/R) + 2*(D/p))
(again an X).
I guess something is wrong with the propagation time calculation.
All the slides covering sliding windows make no mention of utilisation in relation to distance and propagation delay.
I would love some help with this.
Thank you.
It depends on how propagation time is supposed to be measured [1], but the general formula is:
Propagation time = (Frame Serialization Time) + (Link Media Delay)
Link Media Delay = D/p
Frame Serialization Time = S/R
I don't see the relevance of TCP's sliding window in this question yet; sometimes professors include extra data to discern how well you understand the principles.
END-NOTES:
[1] Does the professor measure propagation time at the bit level or at the frame level? My answer assumes a frame-level calculation (measured from the first bit transmitted until the last bit of the frame is received), so I include frame serialization time.
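As a quick sketch of that formula (Python; the function and argument names are my own):
def propagation_time(S, R, D, p):
    # Frame-level propagation time, in seconds:
    # serialization delay (S/R) plus link media delay (D/p).
    return S / R + D / p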

Detecting and fixing overflows

We have a particle detector hard-wired to use 16-bit and 8-bit buffers. Every now and then, there are certain [predicted] peaks of particle flux passing through it; that's okay. What is not okay is that these fluxes usually reach magnitudes above the capacity of the buffers to store them; thus, overflows occur. On a chart, they look like the flux suddenly drops and begins growing again. Can you propose a [mostly] accurate method of detecting points of data suffering from an overflow?
P.S. The detector is physically inaccessible, so fixing it the 'right way' by replacing the buffers doesn't seem to be an option.
Update: Some clarifications as requested. We use Python at the data processing facility; the technology used in the detector itself is pretty obscure (treat it as if it were developed by a completely unrelated third party), but it is definitely unsophisticated, i.e. not running a 'real' OS, just some low-level code to record the detector readings and respond to remote commands like power cycle. Memory corruption and other problems are not an issue right now. The overflows occur simply because the designer of the detector used 16-bit buffers for counting the particle flux, and sometimes the flux exceeds 65535 particles per second.
Update 2: As several readers have pointed out, the intended solution would have something to do with analyzing the flux profile to detect sharp declines (e.g. by an order of magnitude) in an attempt to separate them from normal fluctuations. Another problem arises: can restorations (points where the original flux drops back below the overflow level) be detected by simply running the correction program against the flux profile reversed along the x axis?
static int[] unwrap(short[] x)
{
    int[] y = new int[x.length];
    y[0] = x[0];
    for (int i = 1; i < x.length; i++)
    {
        y[i] = y[i - 1] + signExtend((short) (x[i] - x[i - 1]));
        // The (short) cast forces the difference back into the int16 range.
        // Works fine as long as the "real" values of x[i] and x[i-1]
        // differ by less than 1/2 of the span of allowable values
        // of x's storage type (= 32768 in the case of int16).
        // Otherwise there is ambiguity.
    }
    return y;
}
static int signExtend(short x)
{
    return x; // widening short -> int sign-extends in Java and in most C compilers
}
// Exercise for the reader: write similar code to unwrap 8-bit arrays
// to a 16-bit or 32-bit array.
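Since the question's updates mention Python at the processing facility, here is a rough Python equivalent of the same unwrapping idea (a sketch; `unwrap16` is a made-up name):
def unwrap16(x):
    # Rebuild true counts from wrapped 16-bit readings, assuming consecutive
    # true values differ by less than 32768 (half the 16-bit span).
    y = [x[0]]
    for prev, cur in zip(x, x[1:]):
        delta = (cur - prev + 32768) % 65536 - 32768  # signed mod-65536 difference
        y.append(y[-1] + delta)
    return y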
Of course, ideally you'd fix the detector software to max out at 65535 to prevent wraparound of the sort that is causing your grief. I understand that this isn't always possible, or at least isn't always possible to do quickly.
When the particle flux exceeds 65535, does it do so quickly, or does the flux gradually increase and then gradually decrease? This makes a difference in what algorithm you might use to detect this. For example, if the flux goes up slowly enough:
true flux measurement
5000 5000
10000 10000
30000 30000
50000 50000
70000 4464
90000 24464
60000 60000
30000 30000
10000 10000
then you'll tend to have a large negative drop at times when you have overflowed. A much larger negative drop than you'll have at any other time. This can serve as a signal that you've overflowed. To find the end of the overflow time period, you could look for a large jump to a value not too far from 65535.
All of this depends on the maximum true flux that is possible and on how rapidly the flux rises and falls. For example, is it possible to get more than 128k counts in one measurement period? Is it possible for one measurement to be 5000 and the next to be 50000? If the data is not well-behaved enough, you may only be able to make a statistical judgment about when you have overflowed.
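A minimal sketch of that heuristic in Python (the names are my own; the default threshold of half the 16-bit span assumes the true flux changes by less than 32768 between consecutive measurements):
def find_overflow_events(measured, threshold=32768):
    # Indices where the reading drops sharply (likely wraparound) and
    # where it jumps sharply back up (likely restoration).
    overflows, restorations = [], []
    for i in range(1, len(measured)):
        delta = measured[i] - measured[i - 1]
        if delta < -threshold:
            overflows.append(i)
        elif delta > threshold:
            restorations.append(i)
    return overflows, restorations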
Your question needs to provide more information about your implementation - what language/framework are you using?
Data overflows in software (which is what I think you're talking about) are bad practice and should be avoided. What you are seeing (strange data output) is only one possible side effect of data overflows, and it is merely the tip of the iceberg of the sorts of issues you can see.
You could quite easily experience more serious issues like memory corruption, which can cause programs to crash loudly, or worse, obscurely.
Is there any validation you can do to prevent the overflows from occurring in the first place?
I really don't think you can fix it without fixing the underlying buffers. How are you supposed to tell the difference between the sequences of values (0, 1, 2, 1, 0) and (0, 1, 65538, 1, 0)? You can't.
How about using an HMM where the hidden state is whether you are in an overflow and the emissions are observed particle flux?
The tricky part would be coming up with the probability models for the transitions (which will basically encode the time-scale of peaks) and for the emissions (which you can build if you know how the flux behaves and how overflow affects measurement). These are domain-specific questions, so there probably aren't ready-made solutions out there.
But once you have the model, everything else (fitting your data, quantifying uncertainty, simulation, etc.) is routine.
You can only do this if the actual jumps between successive values are much smaller than 65536. Otherwise, an overflow-induced valley artifact is indistinguishable from a real valley; you can only guess. You can try to match overflows to corresponding restorations by analysing the signal from both the left and the right simultaneously (assuming there is a recognizable baseline).
Other than that, all you can do is adjust your experiment by repeating it with different original particle flows, so that real valleys do not move but artifact ones move to the point of overflow.

Using an epsilon value to determine if a ball in a game is not moving?

I have balls bouncing around and each time they collide their speed vector is reduced by the Coefficient of Restitution.
Right now the CoR for my balls is 0.80, so after many bounces my balls have "stopped" rolling because their speed has become some ridiculously small number.
At what stage is it appropriate to check whether a speed value is small enough to simply call it zero (so I don't get the crazy jittering of the balls reacting to their micro-velocities)? I've read on some forums that people will sometimes use an epsilon constant, some small number, and check against that.
Should I define an epsilon constant and do something like:
if Math.abs(velocity.x) < epsilon then velocity.x = 0
Each time I update the ball's velocity and position? Is this what is generally done? Would it be reasonable to place that in my Vector class's setters for x and y? Or should I do it outside of my vector class when I'm calculating the velocities?
Also, what would be a reasonable epsilon value if I was using floats for my speed vector?
A reasonable value for epsilon is going to depend on the constraints of your system. If you are representing the ball graphically, then your epsilon might correspond to, say, a velocity of .1 pixels a second (ensuring that your notion of stopping matches the user's experience of the screen objects stopping). If you're doing a physics simulation, you'll want to tune it to the accuracy to which you're trying to measure your system.
As for how often you check - that depends as well. If you're simulating something in real time, the extra check might be costly, and you'll want to check every 10 updates or once per second or something. Or performance might not be an issue, and you can check with every update.
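As a minimal sketch of the check itself (Python; the names are my own, and it tests the speed's magnitude rather than each component, so diagonal motion is treated uniformly):
import math

EPSILON = 0.01  # speed units; tune to your coordinate scale and frame rate

def settle(vx, vy):
    # Snap the velocity to exactly zero once it is too small to matter.
    if math.hypot(vx, vy) < EPSILON:
        return 0.0, 0.0
    return vx, vy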
Instead of an epsilon for an IsStillMoving function, maybe you could use an UpdatePosition function, scheduled on an object-by-object basis based on its velocity.
I'd do something like this (in my own make-it-up-as-you-go pseudocode):
void UpdatePosition(Ball b) {
    TimeStamp now = Clock.GetTime();
    float secondsSinceLastUpdate = now.TimeSince(b.LastUpdate).InSeconds;
    Point3D oldPosition = b.Position;
    Point3D newPosition = CalculatePosition(b.Position, b.Velocity, secondsSinceLastUpdate);
    b.MoveTo(newPosition);
    float epsilonOfAccuracy = 0.5; // accurate to one half-pixel
    float pixelDistance = Camera.PixelDistance(oldPosition, newPosition);
    // Seconds needed to move one on-screen pixel at the current apparent speed:
    float secondsToMoveOnePixel = secondsSinceLastUpdate / pixelDistance;
    // Schedule the next update for when the ball could have moved a visible
    // amount (clamp this for a stopped ball, where pixelDistance is zero):
    float nextUpdateInterval = secondsToMoveOnePixel * epsilonOfAccuracy;
    b.SetNextUpdateAt(now + nextUpdateInterval);
}
Balls moving very quickly would get updated on every frame. Balls moving more slowly might update every five or ten frames. And balls that have stopped (or nearly stopped) would update only very very rarely.
IMO your epsilon approach is fine. I would just experiment to see what looks or feels natural to the animation in the game.
Epsilon by nature is the smallest possible increment. Unfortunately, computers have different "minimal" increments of their own depending on the floating-point representation. I would be very careful (and might even go higher than what I calculate, just for safety) playing around with that, especially if I want the code to be portable.
You may want to write a function that figures out the minimal increment on your floats rather than use a magic value.
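A minimal sketch of such a function in Python (math.ulp exists in Python 3.9+; the halving loop is a portable fallback that finds the same value by hand):
import math

def min_increment(value):
    # Smallest increment representable at this value's magnitude (one ULP).
    try:
        return math.ulp(value)      # Python 3.9+
    except AttributeError:
        eps = abs(value) or 1.0
        while value + eps / 2 != value:
            eps /= 2
        return eps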

Simple physics-based movement

I'm working on a 2D game where I'm trying to accelerate an object to a top speed using some basic physics code.
Here's the pseudocode for it:
const float acceleration = 0.02f;
const float friction = 0.8f; // value is always 0.0..1.0
float velocity = 0;
float position = 0;

move()
{
    velocity += acceleration;
    velocity *= friction;
    position += velocity;
}
This is a very simplified approach that doesn't rely on mass or actual friction (the in-code friction is just a generic force acting against movement). It works well as the "velocity *= friction;" part keeps the velocity from going past a certain point. However, it's this top speed and its relationship to the acceleration and friction where I'm a bit lost.
What I'd like to do is set a top speed, and the amount of time it takes to reach it, then use them to derive the acceleration and friction values.
i.e.,
const float max_velocity = 2.0;
const int ticks = 120; // If my game runs at 60 FPS, I'd like a
                       // moving object to reach max_velocity in
                       // exactly 2 seconds.
const float acceleration = ?
const float friction = ?
I found this question very interesting since I had recently done some work on modeling projectile motion with drag.
Point 1: You are essentially updating the position and velocity using an explicit/forward Euler iteration where each new value for the states should be a function of the old values. In such a case, you should be updating the position first, then updating the velocity.
Point 2: There are more realistic physics models for the effect of drag friction. One model (suggested by Adam Liss) involves a drag force that is proportional to the velocity (known as Stokes' drag, which generally applies to low velocity situations). The one I previously suggested involves a drag force that is proportional to the square of the velocity (known as quadratic drag, which generally applies to high velocity situations). I'll address each one with regard to how you would deduce formulas for the maximum velocity and the time required to effectively reach the maximum velocity. I'll forego the complete derivations since they are rather involved.
Stokes' drag:
The equation for updating the velocity would be:
velocity += acceleration - friction*velocity
which represents the following differential equation:
dv/dt = a - f*v
Using the first entry in this integral table, we can find the solution (assuming v = 0 at t = 0):
v = (a/f) - (a/f)*exp(-f*t)
The maximum (i.e. terminal) velocity occurs when t >> 0, so that the second term in the equation is very close to zero and:
v_max = a/f
Regarding the time needed to reach the maximum velocity, note that the equation never truly reaches it, but instead asymptotes towards it. However, when the argument of the exponential equals -5, the velocity is around 99% of the maximum velocity, probably close enough to consider it equal. You can then approximate the time to maximum velocity as:
t_max = 5/f
You can then use these two equations to solve for f and a given a desired v_max and t_max.
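A minimal sketch of that back-solve in Python (the function name is my own):
def stokes_drag_params(v_max, t_max):
    # Invert t_max = 5/f and v_max = a/f for the Stokes' drag model.
    f = 5.0 / t_max
    a = v_max * f
    return a, f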
Quadratic drag:
The equation for updating the velocity would be:
velocity += acceleration - friction*velocity*velocity
which represents the following differential equation:
dv/dt = a - f*v^2
Using the first entry in this integral table, we can find the solution (assuming v = 0 at t = 0):
v = sqrt(a/f)*(exp(2*sqrt(a*f)*t) - 1)/(exp(2*sqrt(a*f)*t) + 1)
The maximum (i.e. terminal) velocity occurs when t >> 0, so that the exponential terms are much greater than 1 and the equation approaches:
v_max = sqrt(a/f)
Regarding the time needed to reach the maximum velocity, note that the equation never truly reaches it, but instead asymptotes towards it. However, when the argument of the exponential equals 5, the velocity is around 99% of the maximum velocity, probably close enough to consider it equal. You can then approximate the time to maximum velocity as:
t_max = 2.5/sqrt(a*f)
which is also equivalent to:
t_max = 2.5/(f*v_max)
For a desired v_max and t_max, the second equation for t_max will tell you what f should be, and then you can plug that into the equation for v_max to get the value for a.
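And the corresponding sketch for the quadratic model (again, the name is my own):
def quadratic_drag_params(v_max, t_max):
    # Invert t_max = 2.5/(f*v_max); then v_max = sqrt(a/f) gives a = v_max**2 * f.
    f = 2.5 / (t_max * v_max)
    a = v_max ** 2 * f
    return a, f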
This seems like a bit of overkill, but these are actually some of the simplest ways to model drag! Anyone who really wants to see the integration steps can shoot me an email and I'll send them to you. They are a bit too involved to type here.
Another Point: I didn't immediately realize this, but the updating of the velocity is not necessary anymore if you instead use the formulas I derived for v(t). If you are simply modeling acceleration from rest, and you are keeping track of the time since the acceleration began, the code would look something like:
position += velocity_function(timeSinceStart)
where "velocity_function" is one of the two formulas for v(t) and you would no longer need a velocity variable. In general, there is a trade-off here: calculating v(t) may be more computationally expensive than simply updating velocity with an iterative scheme (due to the exponential terms), but it is guaranteed to remain stable and bounded. Under certain conditions (like trying to get a very short tmax), the iteration can become unstable and blow-up, a common problem with the forward Euler method. However, maintaining limits on the variables (like 0 < f < 1), should prevent these instabilities.
In addition, if you're feeling somewhat masochistic, you may be able to integrate the formula for v(t) to get a closed-form solution for p(t), thus foregoing the need for the iterative update altogether. I'll leave this for others to attempt. =)
Warning: Partial Solution
If we follow the physics as stated, there is no maximum velocity. From a purely physical viewpoint, you've fixed the acceleration at a constant value, which means the velocity is always increasing.
As an alternative, consider the two forces acting on your object:
The constant external force, F, that tends to accelerate it, and
The force of drag, d, which is proportional to the velocity and tends to slow it down.
So the velocity at iteration n becomes: v_n = v_0 + n*F - d*v_{n-1}
You've asked to choose the maximum velocity v_max, which occurs at iteration n_max.
Note that the problem is under-constrained; that is, F and d are related, so you can arbitrarily choose a value for one of them, then calculate the other.
Now that the ball's rolling, is anyone willing to pick up the math?
Warning: it's ugly and involves power series!
velocity *= friction;
This doesn't prevent the velocity from going above a certain point...
Friction increases exponentially (don't quote me on that) as the velocity increases, and will be 0 at rest. Eventually, you will reach a point where friction = acceleration.
So you want something like this:
velocity += (acceleration - friction);
position += velocity;
friction = a*exp(b*velocity);
Where you pick values for a and b: b will control how long it takes to reach top speed, and a will control how abruptly the friction increases. (Again, do your own research on this; I'm going from what I remember from grade 12 physics.)
This isn't answering your question, but one thing you shouldn't do in simulations like this is depend on a fixed frame rate. Calculate the time since the last update, and use the delta-T in your equations. Something like:
static double lastUpdate = 0;

if (lastUpdate != 0) {
    deltaT = time() - lastUpdate;
    velocity += acceleration * deltaT;
    position += velocity * deltaT;
}
lastUpdate = time();
It's also good to check if you lose focus and stop updating, and when you gain focus set lastUpdate to 0. That way you don't get a huge deltaT to process when you get back.
If you want to see what can be done with very simple physics models using very simple maths, take a look at some of the Scratch projects at http://scratch.mit.edu/ - you may get some useful ideas & you'll certainly have fun.
This is probably not what you are looking for, but depending on what you are working on, it might be better to use an engine built by someone else, like Farseer (for C#).
