Find start point (time) of each cycle in a sine wave - math

I am trying to achieve a sine wave that gradually changes from 8 Hz to 2 Hz over 5 seconds:
This waveform was produced in Cool Edit. I gave it a start frequency of 8 Hz, an end frequency of 2 Hz and a duration of 5 seconds. The sine wave gradually changes from one frequency to the other over the given time.
My question is, how can I accurately find the start time of each cycle (highlighted with a red dot), using a FOR loop?
Pseudo code:
time = 5   // Duration
freq1 = 8  // Start frequency
freq2 = 2  // End frequency
cycles = ((freq1 + freq2) / 2) * time  // Total number of cycles
for (i = 0; i < cycles; i++) {
    /* Formula to find start time of each cycle */
}

That is backward thinking for this problem, and it leads to madness in the program. Moreover, the individual waves will not be pure sine cycles because the frequency is changing (they will be slightly distorted), which you will not achieve with your generator, and there is only a very slight chance the signal will end exactly on zero after 5 s. Instead, produce a continuous sine wave with variable frequency:
First compute the actual frequency. Linear interpolation will suffice (unless you need a different kind of change):
f = f0 + (f1-f0)*t/T
where:
f0 = 8 [Hz] is the start frequency
f1 = 2 [Hz] is the stop frequency
T = 5 [s] is the change time
t ∈ [0,T] is the actual time in [s]
Then compute the sine wave data:
const double two_pi = 6.283185307179586476925286766559;
for (t = 0.0, angle = 0.0; t <= T; t += dt)
{
    f = f0 + ((f1 - f0) * t / T);    // actual frequency
    signal = Amplitude * sin(angle); // your signal; put it in an array or output it somewhere ...
    angle += two_pi * dt * f;        // update phase
    while (angle > two_pi)           // wrap the phase just to avoid floating-point rounding problems
        angle -= two_pi;
}
Here dt [s] is the time step you want to sample your signal with. If you are generating this in real time and outputting to real hardware, you can use a timer or measure the time directly (with performance counters on Windows, with RDTSC, or with whatever you have at your disposal).
If you have a predefined number of samples n for this, then
dt=T/double(n-1);
Here is sample output (n = image width):
If you also need the number of periods, increment a counter inside the phase-wrapping while loop. That wrap is also your zero point, i.e. the start of each cycle (but if the sample rate is too small, or you need high precision, you need to interpolate the real zero position between samples).
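Tying this back to the original question: here is a minimal sketch of the same loop in Python (the function name and the step size dt are my own choices) that records a cycle start whenever the phase wraps, linearly interpolating the exact zero position between samples as suggested above:

import math

def cycle_starts(f0=8.0, f1=2.0, T=5.0, dt=1e-4):
    """Approximate start times of each cycle of a linear sweep f0 -> f1 over T seconds."""
    two_pi = 2.0 * math.pi
    starts = [0.0]                          # the first cycle starts at t = 0
    t, angle = 0.0, 0.0
    while t <= T:
        f = f0 + (f1 - f0) * t / T          # actual (instantaneous) frequency
        prev = angle
        angle += two_pi * dt * f            # update phase
        if angle > two_pi:                  # wrap: a new cycle began inside this step
            angle -= two_pi
            frac = (two_pi - prev) / (two_pi - prev + angle)  # where in the step it wrapped
            starts.append(t + frac * dt)
        t += dt
    return starts

print(cycle_starts())   # ~25 cycle-start times for the 8 Hz -> 2 Hz sweep over 5 s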

Related

How to convert a spectrogram matrix into wav file

Is there a way to convert a matrix representing a grayscale spectrogram (values non-complex and between 0 and 1), like the one shown in the image below, back into a sound file, e.g. a wav file? This post explains how to do it with a seewave spectrogram using the istft function. However, in my case I see two problems which need to be solved:
1. The original spectrogram (obtained by signal::specgram) is lost, and the matrix dimensions differ from the original spectrogram (i.e. both frequency and time are up- or downsampled), although the exact frequency and time values for each row and each column are known.
2. The matrix values range between 0 and 1 and are not complex, as required by istft.
Furthermore, the dimensions of the original spectrogram, the sample frequency of the original wave object, and the window length and overlap used to obtain the original spectrogram are known.
Thank you!
Audio is just a curve which wobbles over time, where this wobble mirrors your eardrum or a microphone pickup membrane. This signal is in the time domain, where the axes are time on X and curve height on Y. Typical CD-quality audio has 44,100 samples per second, meaning you capture that number of points on this audio curve per second. What gets captured is the audio curve height, whereas time is implied, knowing each sample is captured at a known sample rate. So sample rate is one of the two critical attributes of digital audio; bit depth is the other. If you devote two bytes (16 bits) to recording CD-quality curve height, you get 2 raised to the 16th power (2^16 == 65536) distinct possible values to store the curve height.
It is critical to emphasize that a raw audio signal is in the time domain (X is time, Y is curve height). When you send a set of these samples into an FFT call, the data gets transformed into the frequency domain (X is frequency, Y is magnitude [energy]), so the direct dimension of time is gone, yet it is baked into the notion of that entire body of frequency-domain data. There are trade-offs when deciding the number of samples you feed into the FFT call (the sample window size): to increase the frequency resolution of the frequency-domain signal (to lower incr_freq), you need more audio samples fed into the FFT call; however, to gain temporal specificity in the frequency domain, you need as few samples as possible, which you pay for with a lower frequency resolution and a lower peak frequency (a lower Nyquist limit).
To generate a spectrogram, you feed a memory buffer of, say, 4096 samples of this curve-height array (time domain) into a Fourier transform (FFT), which will return an array (frequency domain) of the same number of elements, yet this time each element stores a complex number from which you can calculate the magnitude (energy level) and phase. Array element zero is the DC bias, which can be ignored. Each array element represents a distinct frequency, where the frequency increment can be calculated:
incr_freq := sample_rate / number_of_samples
nyquist_limit_index := int(number_of_samples / 2)
With a sample_rate of 44,100 samples per second and one second's worth of samples (44,100), this gives you a frequency increment resolution of 1 Hertz, i.e. each frequency bin is 1 Hertz apart.
Here is how you can iterate across the array complex_fft (in Go, not R):
for index_fft, curr_complex := range complex_fft { // we really only use half this range + 1
    curr_freq := float64(index_fft) * incr_freq // frequency of this bin
    if index_fft <= nyquist_limit_index && curr_freq >= min_freq && curr_freq < max_freq {
        curr_real := real(curr_complex) // pluck out real portion of complex number
        curr_imag := imag(curr_complex) // ditto for imaginary portion
        curr_mag := 2.0 * math.Sqrt(curr_real*curr_real+curr_imag*curr_imag) / number_of_samples
        curr_theta := math.Atan2(curr_imag, curr_real)
        curr_dftt := discrete_fft{
            real:      2.0 * curr_real,
            imaginary: 2.0 * curr_imag,
            magnitude: curr_mag,
            theta:     curr_theta,
        }
        _ = curr_dftt // collect this bin (complex_fft, incr_freq, the limits and the discrete_fft struct are assumed defined elsewhere)
    }
}
As time marches along, you repeat the above process, feeding the next set of 4096 samples into the FFT call, so you collect a set of pairs of time-domain arrays and their corresponding frequency-domain representations.
The process which created your plot has done this repeatedly, which is why time is shown along the X axis. On your plot, each vertical bar of data represents the output of a single FFT call, where the resultant magnitude is shown as the dark portions of that vertical bar and the lighter dots show the lower-energy frequencies. Only as the process that generated the plot progressed over time did the data for the next vertical bar become available, and the plot filled in from left to right, hence the time axis along the bottom.
Another critical insight is that you can start with audio (time domain), populate a window of samples (4096, for example) and send this array into an FFT call to obtain a new array (frequency domain) of frequencies, each with its magnitude and phase. Here is the pure magic: you can then perform an inverse Fourier transform (IFFT) on this frequency-domain array to get an array in the time domain which will match (to a first approximation) your original input audio signal.
So in your case, walk across your data from left to right on the plot, and for each vertical set of magnitude values (indicated by grayscale), which is a single frequency-domain array, perform this inverse Fourier transform. It will give you the raw audio signal (time domain) for only a very quick segment of time (as defined by the 4096 audio samples or similar). This raw audio is the payload portion of a wav file. Repeat this process for the next vertical column of data until you have walked across the entire plot from left to right, then stitch this sequence of payload buffers together into a wav file.
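As a rough sketch of that stitching process: since a grayscale spectrogram carries magnitudes only, the phase is lost, so the simplest thing is to assume zero phase for every bin (the result will sound buzzy; phase-recovery schemes such as Griffin-Lim do better). A minimal Python starting point, where the function name and normalization are my own choices:

import numpy as np
from scipy.io import wavfile

def spectrogram_to_wav(S, sample_rate=44100, out_path="reconstructed.wav"):
    # S: rows = frequency bins (0 .. Nyquist), columns = time frames,
    # values = magnitudes in [0, 1]; zero phase is assumed for every bin
    n_bins, n_frames = S.shape
    frame_len = 2 * (n_bins - 1)            # window length implied by the bin count
    chunks = []
    for j in range(n_frames):
        spectrum = S[:, j].astype(complex)  # zero-phase spectrum of this column
        chunks.append(np.fft.irfft(spectrum, n=frame_len))  # back to the time domain
    audio = np.concatenate(chunks)          # stitch the payload buffers together
    audio /= max(np.abs(audio).max(), 1e-12)             # normalize to [-1, 1]
    wavfile.write(out_path, sample_rate, (audio * 32767).astype(np.int16))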

Plotting large time series

Summary of Question:
Are there any easy-to-implement algorithms for reducing the number of points needed to represent a time series without altering how it appears in a plot?
Motivating Problem:
I'm trying to interactively visualize 10 to 15 channels of data logged from an embedded system at ~20 kHz. Logs can cover upwards of an hour of time which means that I'm dealing with between 1e8 and 1e9 points. Further, I care about potentially small anomalies that last for very short periods of time (i.e. less than 1 ms) such that simple decimation isn't an option.
Not surprisingly, most plotting libraries get a little sad if you do the naive thing and try to hand them arrays of data larger than the dedicated GPU memory. It's actually a bit worse than this on my system; using a vector of random floats as a test case, I'm only getting about 5e7 points out of the stock Matlab plotting function and Python + matplotlib before my refresh rate drops below 1 FPS.
Existing Questions and Solutions:
This problem is somewhat similar to a number of existing questions such as:
How to plot large data vectors accurately at all zoom levels in real time?
How to plot large time series (thousands of administration times/doses of a medication)?
[Several Cross Validated questions]
but deals with larger data sets and/or is more stringent about fidelity at the cost of interactivity (it would be great to get 60 FPS silky smooth panning and zooming, but realistically, I would be happy with 1 FPS).
Clearly, some form of data reduction is needed. There are two paradigms that I have found while searching for existing tools that solve my problem:
Decimate but track outliers: A good example of this is Matlab + dsplot (i.e. the tool suggested in the accepted answer of the first question I linked above). dsplot decimates down to a fixed number of evenly spaced points, but then adds back in outliers identified using the standard deviation of a high pass FIR filter. While this is probably a viable solution for several classes of data, it potentially has difficulties if there is substantial frequency content past the filter cutoff frequency and may require tuning.
Plot min and max: With this approach, you divide the time series into intervals corresponding to each horizontal pixel and plot just the minimum and maximum values in each interval. Matlab + Plot (Big) is a good example of this, but it uses an O(n) calculation of min and max, making it a bit slow by the time you get to 1e8 or 1e9 points. A binary search tree in a mex function or Python would solve this problem, but is complicated to implement (a basic sketch of the min/max reduction follows below).
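For reference, the basic min/max reduction itself is only a few lines; a minimal NumPy sketch (function name and sizes are illustrative, and the O(n) pass is still there, just vectorized):

import numpy as np

def minmax_decimate(y, n_columns):
    # keep the min and max of each of n_columns equal-width intervals,
    # so brief anomalies survive the decimation
    usable = (len(y) // n_columns) * n_columns      # drop the ragged tail
    blocks = y[:usable].reshape(n_columns, -1)
    out = np.empty(2 * n_columns)
    out[0::2] = blocks.min(axis=1)                  # one min per pixel column
    out[1::2] = blocks.max(axis=1)                  # one max per pixel column
    return out

y = np.random.randn(10_000_000).astype(np.float32)  # stand-in for logged data
plot_me = minmax_decimate(y, 1920)                  # 2 points per horizontal pixel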
Are there any simpler solutions that do what I want?
Edit (2018-02-18): Question refactored to focus on algorithms instead of tools implementing algorithms.
I had the very same problem displaying pressure time series from hundreds of sensors, with samples every minute for several years. In some cases (like when cleaning the data) I wanted to see all the outliers; in others I was more interested in the trend. So I wrote a function that can reduce the number of data points using two methods: Visvalingam and Douglas-Peucker. The first tends to remove outliers, the second keeps them. I've optimized the function to work over large datasets.
I did that after realizing that most plotting methods weren't capable of handling that many points, and the ones that could were decimating the dataset in a way that I couldn't control. The function is the following:
function [X, Y, indices, relevance] = lineSimplificationI(X,Y,N,method,option)
%lineSimplification Reduce the number of points of the line described by X
%and Y to N. Preserving the most relevant ones.
% Using an adapted method of visvalingam and Douglas-Peucker algorithms.
% The number of points of the line is reduced iteratively until reaching
% N non-NaN points. Repeated NaN points in original data are deleted but
% non-repeated NaNs are preserved to keep line breaks.
% The two available methods are
%
% Visvalingam: The relevance of a point is proportional to the area of
% the triangle defined by the point and its two neighbors.
%
% Douglas-Peucker: The relevance of a point is proportional to the
% distance between it and the straight line defined by its two neighbors.
% Note that the implementation here is iterative but NOT recursive as in
% the original algorithm. This allows better handling of large data sets.
%
% DIFFERENCES: Visvalingam tends to remove outliers while Douglas-Peucker
% keeps them.
%
% INPUTS:
% X: X coordinates of the line points
% Y: Y coordinates of the line points
% method: Either 'Visvalingam' or 'DouglasPeucker' (default)
% option: Either 'silent' (default) or 'verbose' if additional outputs
% of the calculations are desired.
%
% OUTPUTS:
% X: X coordinates of the simplified line points
% Y: Y coordinates of the simplified line points
% indices: Indices to the positions of the points preserved in the
% original X and Y. Therefore Output X is equal to the input
% X(indices).
% relevance: Relevance of the returned points. It can be used to further
% simplify the line dynamically by keeping only points with
% higher relevance. But this will produce bigger distortions of
% the line shape than calling lineSimplification again with a
% smaller value for N, as removing a point changes the relevance
% of its neighbors.
%
% Implementation by Camilo Rada - camilo#rada.cl
%
if nargin < 3
error('Line points positions X, Y and target point count N MUST be specified');
end
if nargin < 4
method='DouglasPeucker';
end
if nargin < 5
option='silent';
end
doDisplay=strcmp(option,'verbose');
X=double(X(:));
Y=double(Y(:));
indices=1:length(Y);
if length(X)~=length(Y)
error('Vectors X and Y MUST have the same number of elements');
end
if N>=length(Y)
relevance=ones(length(Y),1);
if doDisplay
disp('N is greater than or equal to the number of points in the line. Original X,Y were returned. Relevances were not computed.')
end
return
end
% Removing repeated NaN from Y
% We find all the NaNs with another NaN to the left
repeatedNaNs= isnan(Y(2:end)) & isnan(Y(1:end-1));
%We also consider a repeated NaN the first element if NaN
repeatedNaNs=[isnan(Y(1)); repeatedNaNs(:)];
Y=Y(~repeatedNaNs);
X=X(~repeatedNaNs);
indices=indices(~repeatedNaNs);
%Removing trailing NaN if any
if isnan(Y(end))
Y=Y(1:end-1);
X=X(1:end-1);
indices=indices(1:end-1);
end
pCount=length(X);
if doDisplay
disp(['Initial point count = ' num2str(pCount)])
disp(['Non repeated NaN count in data = ' num2str(sum(isnan(Y)))])
end
iterCount=0;
while pCount>N
iterCount=iterCount+1;
% If the vertices of a triangle are at the points (x1,y1), (x2,y2) and
% (x3,y3), the area of such a triangle is
% area = abs((x1*(y2-y3)+x2*(y3-y1)+x3*(y1-y2))/2)
% now the areas of the triangles defined by each point of X,Y and its two
% neighbors are
twiceTriangleArea =abs((X(1:end-2).*(Y(2:end-1)-Y(3:end))+X(2:end-1).*(Y(3:end)-Y(1:end-2))+X(3:end).*(Y(1:end-2)-Y(2:end-1))));
switch method
case 'Visvalingam'
% In this case the relevance is given by the area of the
% triangle formed by each point and its two neighbors
relevance=twiceTriangleArea/2;
case 'DouglasPeucker'
% In this case the relevance is given by the minimum distance
% from the point to the line formed by its two neighbors
neighborDistances=ppDistance([X(1:end-2) Y(1:end-2)],[X(3:end) Y(3:end)]);
relevance=twiceTriangleArea./neighborDistances;
otherwise
error(['Unknown method: ' method]);
end
relevance=[Inf; relevance; Inf];
%We remove the pCount-N least relevant points as long as they are not contiguous
[srelevance, sortorder]= sort(relevance,'descend');
firstFinite=find(isfinite(srelevance),1,'first');
startPos=uint32(firstFinite+N+1);
toRemove=sort(sortorder(startPos:end));
if isempty(toRemove)
break;
end
%Now we have to deal with contiguous elements, as removing one will
%change the relevance of its neighbors. Therefore we have to
%identify pairs of contiguous points and only remove the one with
%lesser relevance
%contiguousToKeep will be true for an element if the next or the previous
%element is also flagged for removal
contiguousToKeep=[diff(toRemove(:))==1; false] | [false; (toRemove(1:end-1)-toRemove(2:end))==-1];
notContiguous=~contiguousToKeep;
%And the relevances associated with the elements flagged for removal
contRel=relevance(toRemove);
% Now we rearrange contiguousToKeep into two rows, so that
% if both rows are true in a given column, we have a case of two
% contiguous points that are both flagged for removal.
% This process is dependent on the rearrangement, as contiguous
% elements can end up in different columns, so it has to be done
% twice to make sure no contiguous elements are removed
nContiguous=length(contiguousToKeep);
for paddingMode=1:2
%The rearrangement is only possible if we have an even number of
%elements, so we add one dummy element at the end if needed
if paddingMode==1
if mod(nContiguous,2)
pcontiguous=[contiguousToKeep; false];
pcontRel=[contRel; -Inf];
else
pcontiguous=contiguousToKeep;
pcontRel=contRel;
end
else
if mod(nContiguous,2)
pcontiguous=[false; contiguousToKeep];
pcontRel=[-Inf; contRel];
else
pcontiguous=[false; contiguousToKeep(1:end-1)];
pcontRel=[-Inf; contRel(1:end-1)];
end
end
contiguousPairs=reshape(pcontiguous,2,[]);
pcontRel=reshape(pcontRel,2,[]);
%finding columns with contiguous elements
contCols=all(contiguousPairs);
if ~any(contCols) && paddingMode==2
break;
end
%finding the row of the more relevant element of each column
[~, lesserElementRow]=max(pcontRel);
%The index in contigous of the first element of each pair is
if paddingMode==1
firstElementIdx=((1:size(contiguousPairs,2))*2)-1;
else
firstElementIdx=((1:size(contiguousPairs,2))*2)-2;
end
% and the index in contiguousToKeep of the more relevant element of
% each pair is
lesserElementIdx=firstElementIdx+lesserElementRow-1;
%now we unflag the more relevant element of each pair, so it is
%kept (its less relevant neighbor remains flagged for removal)
contiguousToKeep(lesserElementIdx(contCols))=false;
end
%and now we drop the unflagged contiguous points from the toRemove
%list
toRemove=toRemove(contiguousToKeep | notContiguous);
if any(diff(toRemove(:))==1) && doDisplay
warning([num2str(sum(diff(toRemove(:))==1)) ' contiguous elements removed in one iteration.'])
end
toRemoveLogical=false(pCount,1);
toRemoveLogical(toRemove)=true;
X=X(~toRemoveLogical);
Y=Y(~toRemoveLogical);
indices=indices(~toRemoveLogical);
pCount=length(X);
nRemoved=sum(toRemoveLogical);
if doDisplay
disp(['Iteration ' num2str(iterCount) ', Point count = ' num2str(pCount) ' (' num2str(nRemoved) ' removed)'])
end
if nRemoved==0
break;
end
end
end
function d = ppDistance(p1,p2)
d=sqrt((p1(:,1)-p2(:,1)).^2+(p1(:,2)-p2(:,2)).^2);
end

Generating a smooth sinusoidal wave

I am creating a program to generate a sinusoidal wave over a long period of time.
Currently I am doing it like this every update, with a starting value for time of 0.0f:
time += 0.025f;
if (time > 1.0f)
{
    time -= 2.0f;
}
The problem with this approach is that, as you can see, there is some value beyond which my calculations start to break if time exceeds it, so I need to reset time back to something less than that value.
Doing it this way, there are obvious jumps in my wave once it passes that threshold.
What's the method to make a smooth sine wave without this limitation?
You can use the trigonometric addition theorems to get an iteration for the sequence of sine values.
sin(A+B) + sin(A-B) = 2*sin(A)*cos(B)
Thus, if you want to generate the sequence of values sin(w*k*dt), you only have to compute
s[0] = 0, s[1] = sin(w*dt), cc = 2*cos(w*dt)
and then iterate
s[k+1] = cc*s[k] - s[k-1]
This linear recursion has eigenvalues on the unit circle and thus accumulates floating point errors, which may lead to phase shift and changes in amplitude over very long time spans. However, locally it will always look like a sine wave.
The second effect can be avoided by iterating the cosine sequence c[k]=cos(w*k*dt) at the same time,
s[k+1] = c[1]*s[k] + s[1]*c[k]
c[k+1] = c[1]*c[k] - s[1]*s[k]
and periodically rescaling the pair c[k], s[k] to have Euclidean length 1.
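A minimal sketch of this coupled recursion with periodic rescaling, in Python (the function name and the rescaling interval are my own choices):

import math

def sine_stream(w, dt, n, rescale_every=1000):
    # yields sin(w*k*dt) for k = 0..n-1 via the rotation recursion
    # s[k+1] = c1*s[k] + s1*c[k],  c[k+1] = c1*c[k] - s1*s[k]
    s1, c1 = math.sin(w * dt), math.cos(w * dt)
    s, c = 0.0, 1.0                       # sin(0), cos(0)
    for k in range(n):
        yield s
        s, c = c1 * s + s1 * c, c1 * c - s1 * s
        if k % rescale_every == 0:        # pull (s, c) back onto the unit circle
            norm = math.hypot(s, c)
            s, c = s / norm, c / norm

vals = list(sine_stream(w=2 * math.pi * 5, dt=1 / 1000, n=1_000_000))  # no jumps, no amplitude drift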

heat transfer for spherical coordinates boundary conditions implementation

I want to apply heat transfer (heat conduction and convection) to a hemisphere. It is transient, homogeneous heat transfer in spherical coordinates, with no heat generation. The hemisphere starts at Tinitial = 20 degrees (room temperature); the external environmental temperature is -22 degrees. You can imagine the hemisphere as a solid material. It is also a non-linear model, because the thermal conductivity changes once the material is frozen, and this changes the temperature profile.
I want to find the temperature profile of this solid over time, until the center temperature reaches -22 degrees.
In this case, temperature depends on 3 parameters: T(r,θ,t), i.e. radius, angle, and time.
(1/α) ∂T(r,θ,t)/∂t = (1/r²) ∂/∂r( r² ∂T(r,θ,t)/∂r ) + (1/(r² sinθ)) ∂/∂θ( sinθ ∂T(r,θ,t)/∂θ )
I applied the finite difference method using MATLAB; however, the boundary conditions have issues. There is convection at the surface of the hemisphere and conduction at the inner nodes, and the bottom of the hemisphere has a constant temperature equal to the air temperature (-22). You can see the scripts I am using for the BCs below.
% Temperature at surface of hemisphere solid boundary node
for i=nodes
for j=1:1:(nodes-1)
Qcd_ot(i,j)= ((k(i,j)+ k(i-1,j))/2)*A(i-1,j)*(( Told(i,j)-Told(i-1,j))/dr); % heat conduction out of node
Qcv(i,j) = h*(Tair-Told(i,j))*A(i,j); % heat transfer through convection on surface
Tnew(i,j) = ((Qcv(i,j)-Qcd_ot(i,j))/(mass(i,j)*cp(i,j))/2)*dt + Told(i,j);
end % end of for loop
end
% Temperature at inner nodes
for i=2:1:(nodes-1)
for j=2:1:(nodes-1)
Qcd_in(i,j)= ((k(i,j)+ k(i+1,j))/2)*A(i,j) *((2/R)*(( Told(i+1,j)-Told(i,j))/(2*dr)) + ((Told(i+1,j)-2*Told(i,j)+Told(i-1,j))/(dr^2)) + ((cot(y)/(R^2))*((Told(i,j+1)-Told(i,j-1))/(2*dy))) + (1/(R^2))*(Told(i,j+1)-2*Told(i,j)+ Told(i,j-1))/(dy^2));
Qcd_out(i,j)= ((k(i,j)+ k(i-1,j))/2)*A(i-1,j)*((2/R)*(( Told(i,j)-Told(i-1,j))/(2*dr)) +((Told(i+1,j)-2*Told(i,j)+Told(i-1,j))/(dr^2)) + ((cot(y)/(R^2))*((Told(i,j+1)-Told(i,j-1))/(2*dy))) + (1/(R^2))*(Told(i,j+1)-2*Told(i,j)+ Told(i,j-1))/(dy^2));
Tnew(i,j) = ((Qcd_in(i,j)-Qcd_out(i,j))/(mass(i,j)*cp(i,j)))*dt + Told(i,j);
end %end for loop
end % end for loop
%Temperature for at center line nodes
for i=2:1:(nodes-1)
for j=1
Qcd_line(i,j)=((k(i,j)+ k(i+1,j))/2)*A(i,j)*(Told(i+1,j)-Told(i,j))/dr;
Qcd_lineout(i,j)=((k(i,j)+ k(i-1,j))/2)*A(i-1,j)*(Told(i,j)-Told(i-1,j))/dr;
Tnew(i,j)= ((Qcd_line(i,j)-Qcd_lineout(i,j))/(mass(i,j)*cp(i,j)))*dt + Told(i,j);
end
end
% Temperature at bottom point (center) of the hemisphere solid
for i=1
for j=1:1:(nodes-1)
Qcd_center(i,j)=(((k(i,j)+k(i+1,j))/2)*A(i,j)*(Told(i+1,j)-Tair)/dr);
Tnew(i,j)= ((Qcd_center(i,j))/(mass(i,j)*cp(i,j)))*dt + Told(i,j);
end
end
% Temperature at all bottom points of the hemisphere
Tnew(:,nodes)=-22;
Told=Tnew;
t=t+dt;
The Tnew temperature values grow exponentially once the program runs, and then become NaN. It is supposed to show me the cooling and freezing temperature profile of the solid until it reaches the air temperature. I cannot figure out why it behaves like that.
I would like to hear your suggestions for the BC implementation in this program, or how I should change it for these conditions. Thanks in advance!
Your code is too long to read and understand completely, but it looks like you are using a simple forward Euler scheme, is that correct? If so, try reducing the time step dt, maybe by a lot, since this method can become numerically unstable if dt is too big. This might slow down the computation (again, by a lot), but that is the price you pay for such a simple algorithm. There are alternative methods that do not suffer from this instability, but they are much harder to implement, since you need to solve a system of equations.
I did some thermal simulations using this simple scheme a long time ago. I found that the stability criterion was dt < (dx)^2 * c_p * rho / (6 * k), which should be valid for a simulation on a 3D cartesian grid, where dx is the spatial step, c_p is the specific heat, rho the density and k the thermal conductivity of the material. I don't know how to convert this to your case with spherical coordinates. The thing I learned then was to choose small time steps, but more importantly, as large a dx as possible: when you reduce dx by a factor of 2, you also need to reduce dt by a factor of 4 to keep things stable. At the same time, for a 3D problem, the number of elements will increase by a factor of 8. So the total simulation time scales with 1 / (dx)^5!!!
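To make the criterion concrete, here is a minimal Python sketch (1-D for brevity; the material constants are rough placeholders, not values from the question) of choosing dt from it before taking a forward Euler step:

import numpy as np

k, rho, cp = 2.2, 917.0, 2100.0   # conductivity [W/(m K)], density [kg/m^3], specific heat [J/(kg K)]
dx = 1e-3                         # spatial step [m]

dt_max = dx**2 * cp * rho / (6.0 * k)   # stability limit from the criterion above
dt = 0.5 * dt_max                       # stay safely below it
alpha = k / (rho * cp)                  # thermal diffusivity [m^2/s]

T = np.full(100, 20.0)                  # initial temperature profile [deg C]
T[0] = T[-1] = -22.0                    # fixed boundaries at air temperature
# one forward Euler step of 1-D conduction; repeat in a loop to march in time
T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])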

Collision Detection between Accelerating Spheres

I am writing a physics engine/simulator which incorporates 3D space flight, planetary/stellar gravitation, ship thrust and relativistic effects. So far, it is going very well, however, one thing that I need help with is the math of the collision detection algorithm.
The iterative simulation of movement that I am using is basically as follows:
(Note: 3D Vectors are ALL CAPS.)
For each obj
    obj.ACC = Sum(all acceleration influences)
    obj.POS = obj.POS + (obj.VEL * dT) + (obj.ACC * dT^2)/2   (*EQ.2*)
    obj.VEL = obj.VEL + (obj.ACC * dT)
Next
Where:
obj.ACC is the acceleration vector of the object
obj.POS is the position or location vector of the object
obj.VEL is the velocity vector of the object
obj.Radius is the radius (scalar) of the object
dT is the time delta or increment
What I basically need to do is to find some efficient formula that derives from (EQ.2) above for two objects (obj1, obj2) and tell if they ever collide, and if so, at what time. I need the exact time both so that I can determine if it is in this particular time increment (because acceleration will be different at different time increments) and also so that I can locate the exact position (which I know how to do, given the time)
For this engine, I am modelling all objects as spheres, all this formula/algorithm needs to do is to figure out at what points:
(obj1.POS - obj2.POS).Distance = (obj1.Radius + obj2.Radius)
where .Distance is a positive scalar value. (You can also square both sides if this is easier, to avoid the square root function implicit in the .Distance calculation).
(yes, I am aware of many, many other collision detection questions, however, their solutions all seem to be very particular to their engine and assumptions, and none appear to match my conditions: 3D, spheres, and acceleration applied within the simulation increments. Let me know if I am wrong.)
Some Clarifications:
1) It is not sufficient for me to check for Intersection of the two spheres before and after the time increment. In many cases their velocities and position changes will far exceed their radii.
2) RE: efficiency, I do not need help (at this point anyway) with respect to determine likely candidates for collisions, I think that I have that covered.
Another clarification, which seems to be coming up a lot:
3) My equation (EQ.2) of incremental movement is a quadratic equation that applies both Velocity and Acceleration:
obj.POS = obj.POS + (obj.VEL * dT) + (obj.ACC * dT^2)/2
The physics engines that I have seen (and certainly every game engine that I have ever heard of) use only linear equations of incremental movement, applying only velocity:
obj.POS = obj.POS + (obj.VEL * dT)
This is why I cannot use the commonly published solutions for collision detection found on StackOverflow, on Wikipedia and all over the Web, such as finding the intersection/closest approach of two line segments. My simulation deals with variable accelerations that are fundamental to the results, so what I need is the intersection/closest approach of two parabolic segments.
On the webpage AShelley referred to, the Closest Point of Approach method is developed for the case of two objects moving at constant velocity. However, I believe the same vector-calculus method can be used to derive a result in the case of two objects both moving with constant non-zero acceleration (quadratic time dependence).
In this case, the time derivative of the distance-squared function is 3rd order (cubic) instead of 1st order. Therefore there will be 3 solutions to the Time of Closest Approach, which is not surprising since the paths of both objects are curved, so multiple intersections are possible. For this application, you would probably want to use the earliest value of t which is within the interval defined by the current simulation step (if such a time exists).
I worked out the derivative equation which should give the times of closest approach:
0 = |D_ACC|^2 * t^3 + 3 * dot(D_ACC, D_VEL) * t^2 + 2 * [ |D_VEL|^2 + dot(D_POS, D_ACC) ] * t + 2 * dot(D_POS, D_VEL)
where:
D_ACC = obj1.ACC-obj2.ACC
D_VEL = obj1.VEL-obj2.VEL (before update)
D_POS = obj1.POS-obj2.POS (also before update)
and dot(A, B) = A.x*B.x + A.y*B.y + A.z*B.z
(Note that the square of the magnitude |A|^2 can be computed using dot(A, A))
To solve this for t, you'll probably need to use formulas like the ones found on Wikipedia.
Of course, this will only give you the moment of closest approach. You will need to test the distance at this moment (using something like Eq. 2). If it is greater than (obj1.Radius + obj2.Radius), it can be disregarded (i.e. no collision). However, if the distance is less, that means the spheres collide before this moment. You could then use an iterative search to test the distance at earlier times. It might also be possible to come up with another (even more complicated) derivation which takes the size into account, or possible to find some other analytic solution, without resorting to iterative solving.
Edit: because of the higher order, some of the solutions to the equation are actually moments of farthest separation. I believe in all cases either 1 of the 3 solutions or 2 of the 3 solutions will be a time of farthest separation. You can test analytically whether you're at a min or a max by evaluating the second derivative with respect to time (at the values of t which you found by setting the first derivative to zero):
D''(t) = 3 * |D_ACC|^2 * t^2 + 6 * dot(D_ACC, D_VEL) * t + 2 * [ |D_VEL|^2 + dot(D_POS, D_ACC) ]
If the second derivative evaluates to a positive number, then you know the distance is at a minimum, not a maximum, for the given time t.
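As a sketch of how to use this (Python/NumPy; the function name is mine): find the real roots of the cubic, keep those inside the current step, and discard maxima using the second-derivative test above:

import numpy as np

def closest_approach_times(d_pos, d_vel, d_acc, dt_step):
    # coefficients of the cubic 0 = |D_ACC|^2 t^3 + 3 dot(D_ACC,D_VEL) t^2 + ...
    a3 = np.dot(d_acc, d_acc)
    a2 = 3.0 * np.dot(d_acc, d_vel)
    a1 = 2.0 * (np.dot(d_vel, d_vel) + np.dot(d_pos, d_acc))
    a0 = 2.0 * np.dot(d_pos, d_vel)
    roots = np.roots([a3, a2, a1, a0])          # up to 3 roots, possibly complex
    times = [r.real for r in roots
             if abs(r.imag) < 1e-9 and 0.0 <= r.real <= dt_step]
    # second-derivative test: keep minima of the distance only
    return sorted(t for t in times if 3*a3*t*t + 2*a2*t + a1 > 0)

For each returned t, evaluate the actual separation with Eq. 2 and compare it against obj1.Radius + obj2.Radius, as described above.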
Draw a line between the start location and end location of each sphere. If the resulting line segments intersect, the spheres definitely collided at some point, and some clever math can find at what time the collision occurred. Also make sure to check whether the minimum distance between the segments (if they don't intersect) is ever less than 2*radius. This will also indicate a collision.
From there you can backstep your delta time to happen exactly at collision so you can correctly calculate the forces.
Have you considered using a physics library which already does this work? Many libraries use far more advanced and more stable (better integrators) systems for solving the systems of equations you're working with. Bullet Physics comes to mind.
The OP asked for the time of collision. A slightly different approach will compute it exactly...
Remember that the position projection equation is:
NEW_POS=POS+VEL*t+(ACC*t^2)/2
If we replace POS with D_POS=POS_A-POS_B, VEL with D_VEL=VEL_A-VEL_B, and ACC with D_ACC=ACC_A-ACC_B for objects A and B, we get:
D_NEW_POS=D_POS+D_VEL*t+(D_ACC*t^2)/2
This is the formula for vectored distance between the objects. In order to get the squared scalar distance between them, we can take the square of this equation, which after expansion looks like:
distsq(t) = D_POS^2+2*dot(D_POS,D_VEL)*t + (dot(D_POS, D_ACC)+D_VEL^2)*t^2 + dot(D_VEL,D_ACC)*t^3 + D_ACC^2*t^4/4
In order to find the time where collision occurs, we can set the equation equal to the square of the sum of radii and solve for t:
0 = D_POS^2-(r_A+r_B)^2 + 2*dot(D_POS,D_VEL)*t + (dot(D_POS, D_ACC)+D_VEL^2)*t^2 + dot(D_VEL,D_ACC)*t^3 + D_ACC^2*t^4/4
Now we can solve this equation using the quartic formula.
The quartic formula will yield 4 roots, but we are only interested in real roots. If there is a double real root, then the two objects touch edges at exactly one point in time. If there are two real roots, then the objects continuously overlap between root 1 and root 2 (i.e. root 1 is the time when collision starts and root 2 is the time when collision stops). Four real roots means that the objects collide twice, continuously between root pairs 1,2 and 3,4.
In R, I used polyroot() to solve as follows:
# initial positions
POS_A=matrix(c(0,0),2,1)
POS_B=matrix(c(2,0),2,1)
# initial velocities
VEL_A=matrix(c(sqrt(2)/2,sqrt(2)/2),2,1)
VEL_B=matrix(c(-sqrt(2)/2,sqrt(2)/2),2,1)
# acceleration
ACC_A=matrix(c(sqrt(2)/2,sqrt(2)/2),2,1)
ACC_B=matrix(c(0,0),2,1)
# radii
r_A=.25
r_B=.25
# deltas
D_POS=POS_B-POS_A
D_VEL=VEL_B-VEL_A
D_ACC=ACC_B-ACC_A
# quartic coefficients
z=c(t(D_POS)%*%D_POS-(r_A+r_B)^2, 2*t(D_POS)%*%D_VEL, t(D_VEL)%*%D_VEL+t(D_POS)%*%D_ACC, t(D_ACC)%*%D_VEL, .25*(t(D_ACC)%*%D_ACC))
# get roots
roots=polyroot(z)
# In this case there are only two real roots...
root1=Re(roots[1])
root2=Re(roots[2])
# trajectory over time
pos=function(p,v,a,t){
    T=t(matrix(t,length(t),2))
    return(t(matrix(p,2,length(t))+matrix(v,2,length(t))*T+.5*matrix(a,2,length(t))*T*T))
}
# plot A in red and B in blue
t=seq(0,2,by=.1) # from 0 to 2 seconds.
a1=pos(POS_A,VEL_A,ACC_A,t)
a2=pos(POS_B,VEL_B,ACC_B,t)
plot(a1,type='o',col='red')
lines(a2,type='o',col='blue')
# points of a circle with center 'p' and radius 'r'
circle=function(p,r,s=36){
    e=matrix(0,s+1,2)
    for(i in 1:s){
        e[i,1]=cos(2*pi*(1/s)*i)*r+p[1]
        e[i,2]=sin(2*pi*(1/s)*i)*r+p[2]
    }
    e[s+1,]=e[1,]
    return(e)
}
# plot circles with radius r_A and r_B at time of collision start in black
lines(circle(pos(POS_A,VEL_A,ACC_A,root1),r_A))
lines(circle(pos(POS_B,VEL_B,ACC_B,root1),r_B))
# plot circles with radius r_A and r_B at time of collision stop in gray
lines(circle(pos(POS_A,VEL_A,ACC_A,root2),r_A),col='gray')
lines(circle(pos(POS_B,VEL_B,ACC_B,root2),r_B),col='gray')
Object A follows the red trajectory from the lower left to the upper right. Object B follows the blue trajectory from the lower right to the upper left. The two objects collide continuously between time 0.9194381 and time 1.167549. The two black circles just touch, showing the beginning of overlap - and overlap continues in time until the objects reach the location of the gray circles.
Seems like you want the Closest Point of Approach (CPA). If the distance at that point is less than the sum of the radii, you have a collision. There is example code in the link. You can calculate it each frame with the current velocity, and check whether the CPA time is less than your tick size. You could even cache the CPA time, and only update it when acceleration is applied to either item.
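The linked page has its own example code; just to capture the idea, a constant-velocity CPA check might look like this Python sketch (names are mine):

import numpy as np

def cpa_time(p1, v1, p2, v2):
    # time minimizing |(p1-p2) + (v1-v2)*t|; relative motion is a straight line
    dp, dv = np.asarray(p1) - np.asarray(p2), np.asarray(v1) - np.asarray(v2)
    dv2 = np.dot(dv, dv)
    return 0.0 if dv2 == 0.0 else -np.dot(dp, dv) / dv2

def collides_this_tick(p1, v1, r1, p2, v2, r2, tick):
    t = min(max(cpa_time(p1, v1, p2, v2), 0.0), tick)   # clamp the CPA into this tick
    gap = np.linalg.norm((np.asarray(p1) + np.asarray(v1) * t)
                         - (np.asarray(p2) + np.asarray(v2) * t))
    return gap <= r1 + r2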
