Modifying a multiplication calculation to use delta time - math

function(deltaTime) {
    x = x * FACTOR; // FACTOR = 0.9
}
This function is called in a game loop. First assume that it's running at a constant 30 FPS, so deltaTime is always 1/30.
Now the game is changed so deltaTime isn't always 1/30 but becomes variable. How can I incorporate deltaTime in the calculation of x to keep the "effect per second" the same?
And what about
function(deltaTime) {
    x += (target - x) * FACTOR; // FACTOR = 0.2
}

x = x * Math.pow(0.9, deltaTime*30)
Edit
For your new update:
x = (x-target) * Math.pow(1-FACTOR, deltaTime*30) + target;
To show how I got there:
Let x0 be the initial value, and xn be the value after n/30 seconds. Also let T=target, F=factor. Then:
x1 = x0 + (T-x0)F = (1-F)x0 + TF
x2 = (1-F)x1 + TF = (1-F)^2 * x0 + (1-F)TF + TF
Continuing with x3,x4,... will show:
xn = (1-F)^n * x0 + TF * (1 + (1-F) + (1-F)^2 + ... + (1-F)^(n-1))
The geometric sum in brackets equals (1 - (1-F)^n) / F, so the second term simplifies to T(1 - (1-F)^n), giving
xn = (x0 - T)(1-F)^n + T,
which is the result above. This really only proves the result for integer n, but it should work for all values.
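A minimal Python sketch of both frame-rate-independent updates (the 0.9 and 0.2 factors and the 30 FPS reference rate come from the question; the sample values and everything else here are illustrative):

import math

REF_FPS = 30    # the original fixed frame rate
FACTOR = 0.9    # per-frame multiplier from the first snippet
SMOOTH = 0.2    # per-frame smoothing factor from the second snippet

def decay(x, delta_time):
    # Equivalent to applying "x = x * 0.9" once per 1/30 s frame.
    return x * math.pow(FACTOR, delta_time * REF_FPS)

def approach(x, target, delta_time):
    # Equivalent to applying "x += (target - x) * 0.2" once per 1/30 s frame.
    return (x - target) * math.pow(1 - SMOOTH, delta_time * REF_FPS) + target

# Sanity check: two 1/60 s steps should match one 1/30 s step.
x, target = 10.0, 100.0
print(decay(decay(x, 1/60), 1/60), decay(x, 1/30))
print(approach(approach(x, target, 1/60), target, 1/60), approach(x, target, 1/30))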

x = x * powf(0.9, deltaTime / (1.0f / 30.0f))


Plot a point in 3D space perpendicular to a line vector

I'm attempting to calculate a point in 3D space which is orthogonal/perpendicular to a line vector.
So I have P1 and P2 which form the line. I’m then trying to find a point with a radius centred at P1, which is orthogonal to the line.
I'd like to do this using trigonometry, without any programming language specific functions.
At the moment I'm testing how accurate this function is by plotting a circle around the line vector.
I also rotate the line vector in 3D space to see what happens to the plotted circle, this is where my results vary.
Some of the unwanted effects include:
The circle rotating and passing through the line vector.
The circle's radius appearing to reduce to zero before growing as the line vector changes direction.
I used this question to get me started, and have since made some adjustments to the code.
Any help with this would be much appreciated - I've spent 3 days drawing circles now. Here's my code so far
//Define points which form the line vector
dx = p2x - p1x;
dy = p2y - p1y;
dz = p2z - p1z;
// Normalize line vector
d = sqrt(dx*dx + dy*dy + dz*dz);
// Line vector
v3x = dx/d;
v3y = dy/d;
v3z = dz/d;
// Angle and distance to plot point around line vector
angle = 123 * pi/180 //convert to radians
radius = 4;
// Begin calculating point
s = sqrt(v3x*v3x + v3y*v3y + v3z*v3z);
// Calculate v1.
// I have been playing with these variables (v1x, v1y, v1z) to try out different configurations.
v1x = s * v3x;
v1y = s * v3y;
v1z = s * -v3z;
// Calculate v2 as cross product of v3 and v1.
v2x = v3y*v1z - v3z*v1y;
v2y = v3z*v1x - v3x*v1z;
v2z = v3x*v1y - v3y*v1x;
// Point in space around the line vector
px = p1x + (radius * (v1x * cos(angle)) + (v2x * sin(angle)));
py = p1y + (radius * (v1y * cos(angle)) + (v2y * sin(angle)));
pz = p1z + (radius * (v1z * cos(angle)) + (v2z * sin(angle)));
EDIT
After wrestling with this for days while in lockdown, I've finally managed to get this working. I'd like to thank MBo and Futurologist for their invaluable input.
Although I wasn't able to get their examples working (more likely due to me being at fault), their answers pointed me in the right direction and to that eureka moment!
The key was in swapping the correct vectors.
So thank you to you both, you really helped me along with this. This is my final (working) code:
//Set some vars
angle = 123 * pi/180;
radius = 4;
//P1 & P2 represent vectors that form the line.
dx = p2x - p1x;
dy = p2y - p1y;
dz = p2z - p1z;
d = sqrt(dx*dx + dy*dy + dz*dz)
//Normalized vector
v3x = dx/d;
v3y = dy/d;
v3z = dz/d;
//Store vector elements in an array
p = [v3x, v3y, v3z];
//Store vector elements in second array, this time with absolute value
p_abs = [abs(v3x), abs(v3y), abs(v3z)];
//Find elements with MAX and MIN magnitudes
maxval = max(p_abs[0], p_abs[1], p_abs[2]);
minval = min(p_abs[0], p_abs[1], p_abs[2]);
//Initialise 3 variables to store which array indexes contain the (max, medium, min) vector magnitudes.
maxindex = 0;
medindex = 0;
minindex = 0;
//Loop through p_abs array to find which magnitudes are equal to maxval & minval. Store their indexes for use later.
for(i = 0; i < 3; i++) {
    if (p_abs[i] == maxval) maxindex = i;
    else if (p_abs[i] == minval) minindex = i;
}
//Find the remaining index which has the medium magnitude
for(i = 0; i < 3; i++) {
    if (i != maxindex && i != minindex) {
        medindex = i;
        break;
    }
}
//Store the maximum magnitude for now.
storemax = (p[maxindex]);
//Swap the 2 indexes that contain the maximum & medium magnitudes, negating maximum. Set minimum magnitude to zero.
p[maxindex] = (p[medindex]);
p[medindex] = -storemax;
p[minindex] = 0;
//Calculate v1. Perpendicular to v3.
s = sqrt(v3x*v3x + v3z*v3z + v3y*v3y);
v1x = s * p[0];
v1y = s * p[1];
v1z = s * p[2];
//Calculate v2 as cross product of v3 and v1.
v2x = v3y*v1z - v3z*v1y;
v2y = v3z*v1x - v3x*v1z;
v2z = v3x*v1y - v3y*v1x;
//For each circle point.
circlepointx = p2x + radius * (v1x * cos(angle) + v2x * sin(angle))
circlepointy = p2y + radius * (v1y * cos(angle) + v2y * sin(angle))
circlepointz = p2z + radius * (v1z * cos(angle) + v2z * sin(angle))
Your question is too vague, but I can guess what you really want.
You have a line through two points p1 and p2. You want to build a circle of radius r centered at p1 and perpendicular to the line.
First find the direction vector of this line - you already know how - the normalized vector v3.
Now you need an arbitrary vector perpendicular to v3: find the components of v3 with the largest and the second-largest magnitudes. For example, say abs(v3y) is the largest and abs(v3x) has the second-largest magnitude. Exchange them, negate the largest, and set the third component to zero:
p = (-v3y, v3x, 0)
This vector is normal to v3 (their dot product is zero).
Now normalize it:
pp = p / length(p)
Now get the binormal vector as the cross product of v3 and pp (it has unit length, no need to normalize); it is perpendicular to both v3 and pp:
b = v3 x pp
Now build the needed circle:
circlepoint(theta) = p1 + radius * pp * Cos(theta) + radius * b * Sin(theta)
Also note that the angle in radians is
angle = degrees * pi / 180
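A minimal NumPy sketch of this construction, with illustrative values (the same test points as the Python example further down; variable names are mine):

import numpy as np

p1 = np.array([1.0, 1.0, 1.0])
p2 = np.array([3.0, 2.0, 3.0])
radius = 4.0
theta = np.radians(123.0)

v3 = (p2 - p1) / np.linalg.norm(p2 - p1)   # unit direction of the line

# Arbitrary vector perpendicular to v3: exchange the two largest-magnitude
# components, negate the largest, and zero the smallest.
order = np.argsort(np.abs(v3))             # indices sorted by |component|, ascending
p = np.zeros(3)
p[order[1]] = -v3[order[2]]                # negated largest component, moved
p[order[2]] = v3[order[1]]                 # second-largest component, moved
pp = p / np.linalg.norm(p)                 # normalize

b = np.cross(v3, pp)                       # binormal, already unit length

circlepoint = p1 + radius * (pp * np.cos(theta) + b * np.sin(theta))
print(circlepoint)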
#Input:
# Pair of points which determine line L:
P1 = [x_P1, y_P1, z_P1]
P2 = [x_P2, y_P2, z_P2]
# Radius:
Radius = R
# unit vector aligned with the line passing through the points P1 and P2:
V3 = P1 - P2
V3 = V3 / norm(V3)
# from the three basis vectors, e1 = [1,0,0], e2 = [0,1,0], e3 = [0,0,1]
# pick the one that is the most transverse to vector V3
# this means, look at the entries of V3 = [x_V3, y_V3, z_V3] and check which
# one has the smallest absolute value and record its index. Take the coordinate
# vector that has 1 at that selected index. In other words,
# if min( abs(x_V3), abs(y_V3), abs(z_V3) ) = abs(y_V3),
# then argmin( abs(x_V3), abs(y_V3), abs(z_V3) ) = 2 and so take e = [0,1,0]:
e = [0,0,0]
i = argmin( abs(V3[1]), abs(V3[2]), abs(V3[3]) )
e[i] = 1
# a unit vector perpendicular to both e and V3:
V1 = cross(e, V3)
V1 = V1 / norm(V1)
# third unit vector perpendicular to both V3 and V1:
V2 = cross(V3, V1)
# an arbitrary point on the circle (i.e. equation of the circle with parameter s):
P = P1 + Radius*( np.cos(s)*V1 + np.sin(s)*V2 )
# E.g. say you want to find point P on the circle, 60 degrees relative to vector V1:
s = pi/3
P = P1 + Radius*( cos(s)*V1 + sin(s)*V2 )
Test example in Python:
import numpy as np
#Input:
# Pair of points which determine line L:
P1 = np.array([1, 1, 1])
P2 = np.array([3, 2, 3])
Radius = 3
V3 = P1 - P2
V3 = V3 / np.linalg.norm(V3)
e = np.array([0,0,0])
e[np.argmin(np.abs(V3))] = 1
V1 = np.cross(e, V3)
V1 = V1 / np.linalg.norm(V1)
V2 = np.cross(V3, V1)
# E.g., say you want to find the point P on the circle, 60 degrees relative to vector V1:
s = np.pi/3
P = P1 + Radius*( np.cos(s)*V1 + np.sin(s)*V2 )
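To trace the whole circle rather than a single point, one can sweep the parameter s. A small follow-up sketch using the V1, V2, P1 and Radius computed above:

thetas = np.linspace(0, 2 * np.pi, 100)
circle = P1 + Radius * (np.cos(thetas)[:, None] * V1 + np.sin(thetas)[:, None] * V2)
# circle is a 100x3 array of points on the circle around P1, perpendicular to the line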

finding intersection point using scilab

How can I find intersection points in the graph shown below using fsolve function (from scilab)?
Here is what I've tried so far:
function y=f(x)
y = 30 + 0 * x;
endfunction
function y= g(x)
y=zeros(x)
k1 = find(x >= 5 & x <= 11);
if k1<>[] then
y(k1)= -59.535905 +24.763399*x(k1) -3.135727*x(k1)^2+0.1288967*x(k1)^3;
end;
k2=find(x >= 11 & x <= 12);
if k2 <> [] then
y(k2)=1023.4465 - 270.59543 * x(k2) + 23.715076 * x(k2)^2 - 0.684764 * x(k2)^3;
end;
k3 = find(x >= 12 & x <= 17);
if k3 <> [] then
y(k3) =-307.31448 + 62.094807 *x(k3) - 4.0091108 * x(k3)^2 + 0.0853523 * x(k3)^3;
end;
k4 = find(x >= 17 & x <= 50);
if k4 <> [] then
y(k4) = 161.42601 - 20.624104 *x(k4) + 0.8567075 * x(k4)^2 - 0.0100559 * x(k4)^3;
end;
endfunction
t=[5:50];
plot(t, g(t));
plot2d(t, f(t));
deff('res = fct', ['res(1) = f(x)'; 'res(2) = g(x)']);
k1=[5, 45];
xsol1 = fsolve(k1, f, g)
Your original post was utterly unreadable and chaotic. It took me a while to edit it and understand what you are trying to achieve. However, I will try to help you. Let's go step by step:
I am not sure why you have used the find function this way; probably you were trying to vectorize the g function? Please consider that Scilab does not broadcast functions over arrays by default. You need to either vectorize them or use feval to do so. Please read this other answer I have written before. find is a vectorized operation applied to an array, a Boolean operation and a scalar, returning the elements of the array which satisfy the operation. For example, from the find page:
beers = ["Desperados", "Leffe", "Kronenbourg", "Heineken"];
find(beers == "Leffe")
returns 2 and
A = rand(1, 20);
w = find(A < 0.4)
returns those elements of array A which are smaller than 0.4.
Please learn about conditionals, specifically the if, then, elseif, else, end statements. If you learn this you will not use the find function in that way. If you have many ifs in a row, try using select, case, else, end instead. Your second function could be written as:
function y = g(x)
    if x < 5 | 50 < x then
        error("Out of range");
    elseif x <= 11 then
        y = -59.535905 + 24.763399 * x - 3.135727 * x^2 + 0.1288967 * x^3;
        return;
    elseif x <= 12 then
        y = 1023.4465 - 270.59543 * x + 23.715076 * x^2 - 0.684764 * x^3;
        return;
    elseif x <= 17 then
        y = -307.31448 + 62.094807 * x - 4.0091108 * x^2 + 0.0853523 * x^3;
        return;
    else
        y = 161.42601 - 20.624104 * x + 0.8567075 * x^2 - 0.0100559 * x^3;
    end
endfunction
Now apparently you want to find the points on this curve which have a value of 30. Although there are methods to find these points automatically, plotting can be very helpful for finding the proper range:
t = [5:50];
plot(t, feval(t, g) - 30)
showing that the two solutions are in the ranges 20 < x1 < 30 and 40 < x2 < 50.
Now if we use fsolve with the proper initial values it gives us good results:
--> deff('[y] = g2(x)', 'y = g(x) - 30');
--> fsolve([25; 45], g2)
ans =
26.67373
48.396547
The third parameter of the fsolve function is the Jacobian / derivative of the g(x) function. You either should calculate the derivatives of the above polynomials manually (or use a proper symbolic software like Maxima), or define them as polynomials using the poly function. See this tutorial for example. Then differentiate them, defining a new function like dgdx.
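As an illustration of that last step (in Python/NumPy rather than Scilab, since the idea is the same regardless of language): represent a cubic piece by its coefficients, differentiate it, and evaluate the derivative where needed. The coefficients below are those of the fourth piece of g (valid for 17 <= x <= 50); everything else is illustrative.

import numpy as np

# 161.42601 - 20.624104*x + 0.8567075*x^2 - 0.0100559*x^3, in increasing powers of x
g4 = np.polynomial.Polynomial([161.42601, -20.624104, 0.8567075, -0.0100559])
dg4 = g4.deriv()                       # derivative polynomial
print(dg4(26.67373), dg4(48.396547))   # slope of g at the two approximate solutions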

Automating a function to return an expression with math constants and unknowns

I am trying to build a transition matrix from panel data observations in order to obtain the ML estimators of a weighted transition matrix. A key step is obtaining the likelihood function for each individual. Say you have the following data frame:
ID Feature1 Feature2 Transition
120421006 10000 1 ab
120421006 12000 0 ba
120421006 10000 1 ab
123884392 3000 1 ab
123884392 2000 0 ba
908747738 1000 1 ab
The idea is to return, for each agent, the log-likelihood of his path. For agent 120421006, for instance, this boils down to (ignoring the initial term)
LL = log(exp(Yab)/1 + exp(Yab)) + log(exp(Yba) /(1 + exp(Yba))) +
log(exp(Yab)/1 + exp(Yab))
i.e,
log(exp(Y_transition)/(1 + exp(Y_transition)))
where Y_transition = x*Feature1 + y*Feature2 for that transition, and x and y are unknowns.
For example, for individual 120421006, this would boil down to an expression with three elements, since he transitions thrice, and the function would return
LL = log(exp(10000x + 1y)/ 1 + exp(10000x + 1y)) +
log(exp(12000x + 0y)/ 1 + exp(12000x + 0y)) +
log(exp(10000x + 1y)/ 1 + exp(10000x + 1y))
And here's the catch: I need x and y to return as unknowns, since the objective is to obtain a sum over the likelihoods of all individuals in order to pass it to an ML estimator. How would you automate a function that returns this output for all IDs?
Many thanks in advance
First you have to decide how flexible your function has to be. I am leaving it fairly rigid, but you can alter it to your taste.
First, you have to input the initial guess parameters, which you will supply to the optimizer. Then, declare your data and the variables to be used in your estimation.
Assuming you will always have only 2 variables (you can change it later):
y <- function(initial_param, data, features){
    x = initial_param[1]
    y = initial_param[2]
    F1 = data[, features[1]]
    F2 = data[, features[2]]
    LL = log(exp(F1 * x + F2 * y) / (1 + exp(F1 * x + F2 * y)))
    return(-sum(LL))
}
This function returns the sum of minus the log likelihood, given that most optimizers try to find the parameters at which your function reaches a minimum, by default.
To find your parameters just supply the below function with your likelihood function y, the initial parameters, data set and a vector with the names of your variables:
nlm(f = y, p = your_starting_guess, data = your_data,
    features = c("name_of_first_feature", "name_of_second_feature"), iterlim = 1000, hessian = FALSE)
Create the function:
fun = function(x){
    a = paste0("exp(", x[1], "*x", "+", x[2], "*y)")
    parse(text = paste("sum(", paste0("log(", a, "/(1+", a, "))"), ")"))
}
by(test[2:3],test[,1],fun)
sum(log(exp(c(10000, 12000, 10000) * x + c(1, 0, 1) * y)/(1 +
exp(c(10000, 12000, 10000) * x + c(1, 0, 1) * y))))
--------------------------------------------------------------------
sum(log(exp(c(3000, 2000) * x + c(1, 0) * y)/(1 + exp(c(3000,
2000) * x + c(1, 0) * y))))
--------------------------------------------------------------------
sum(log(exp(1000 * x + 1 * y)/(1 + exp(1000 * x + 1 * y))))
Taking an example of x = 0 and y = 3, we can evaluate this:
x=0
y=3
sapply(by(test[2:3],test[,1],fun),eval)
[1] -0.79032188 -0.74173453 -0.04858735
in your example above:
x=0
y=3
log(exp(10000*x + 1*y)/ (1 + exp(10000*x + 1*y))) + # there should be parentheses (the question's version lacks them)
log(exp(12000*x + 0*y)/ (1 + exp(12000*x + 0*y))) +
log(exp(10000*x + 1*y)/( 1 + exp(10000*x + 1*y)))
[1] -0.7903219
To get what you need within the comments:
fun1 = function(x){
    a = paste0("exp(", x[1], "*x", "+", x[2], "*y)")
    paste("sum(", paste0("log(", a, "/(1+", a, "))"), ")")
}
paste(by(test[2:3],test[,1],fun1),collapse = "+")
[1] "sum( log(exp(c(10000, 12000, 10000)*x+c(1, 0, 1)*y)/(1+exp(c(10000, 12000, 10000)*x+c(1, 0, 1)*y))) )+sum( log(exp(c(3000, 2000)*x+c(1, 0)*y)/(1+exp(c(3000, 2000)*x+c(1, 0)*y))) )+sum( log(exp(1000*x+1*y)/(1+exp(1000*x+1*y))) )"
But it doesn't make much sense to group them and then sum all of them; that is the same as just summing them without grouping by ID, which would be simpler and faster.

Trilateration with limits?

I need help solving an issue that came up in one of my small robot experiments. The basic idea is that each little robot can approximate the distance from itself to an object; however, the approximation I'm getting is far too rough, and I'm hoping to calculate something more accurate.
So:
Input: A list of vertices (v_1, v_2, ... v_n) - the robots - and a vertex v_* - the object.
Output: The coordinates of the unknown vertex v_* (the object)
Each vertex v_1 to v_n's coordinates are well known (supplied by calling getX() and getY() on the vertex), and it's possible to get the approximate range to v_* by calling getApproximateDistance(v_*). The function getApproximateDistance() returns two variables, minDistance and maxDistance - the actual distance lies in between these.
So what I've been trying to do to obtain the coordinates for v_* is to use trilateration; however, I can't seem to find a formula for doing trilateration with limits (lower and upper bound), so that's really what I'm looking for (I'm not really good enough at math to figure it out myself).
Note: is triangulation the way to go instead?
Note: I would also love to know a way to make performance/accuracy trade-offs.
An example of data:
Vertex   `getX()`   `getY()`   `minDistance`   `maxDistance`
`v_1`    2          2          0.5             1
`v_2`    1          2          0.3             1
`v_3`    1.5        1          0.3             0.5
Picture to show data: http://img52.imageshack.us/img52/6414/unavngivetcb.png
It's obvious that the approximation for v_1 can be better than [0.5; 1], as the figure that the above data creates is a small cut of an annulus (limited by v_3). But how would I calculate that, and possibly find the approximate position within that figure (this figure is possibly concave)?
Would this be better suited for MathOverflow?
I would go for a simple discrete approach. The implicit formula for an annulus is trivial, and the intersection of multiple annuli, even when there are many of them, can be computed somewhat efficiently with a scanline-based approach.
For getting high accuracy with a fast computation, an option could be using a multiresolution approach (i.e. first starting in low-res and then recomputing in high-res only the samples that are close to a valid point).
A small Python toy I wrote can generate a 400x400 pixel image of the intersection area in about 0.5 secs (this is the kind of computation that would get a 100x speedup if done in C).
# x, y, r0, r1
data = [(2.0, 2.0, 0.5, 1.0),
        (1.0, 2.0, 0.3, 1.0),
        (1.5, 1.0, 0.3, 0.5)]

x0 = max(x - r1 for x, y, r0, r1 in data)
y0 = max(y - r1 for x, y, r0, r1 in data)
x1 = min(x + r1 for x, y, r0, r1 in data)
y1 = min(y + r1 for x, y, r0, r1 in data)

def hit(x, y):
    for cx, cy, r0, r1 in data:
        if not (r0**2 <= ((x - cx)**2 + (y - cy)**2) <= r1**2):
            return False
    return True

res = 400
step = 16
white = chr(255)
grey = chr(192)
black = chr(0)
img = [black] * (res * res)

# Low-res pass
cells = {}
for i in xrange(0, res, step):
    y = y0 + i * (y1 - y0) / res
    for j in xrange(0, res, step):
        x = x0 + j * (x1 - x0) / res
        if hit(x, y):
            for h in xrange(-step*2, step*3, step):
                for v in xrange(-step*2, step*3, step):
                    cells[(i+v, j+h)] = True

# High-res pass
for i in xrange(0, res, step):
    for j in xrange(0, res, step):
        if cells.get((i, j), False):
            img[i * res + j] = grey
            img[(i + step - 1) * res + j] = grey
            img[(i + step - 1) * res + (j + step - 1)] = grey
            img[i * res + (j + step - 1)] = grey
            for v in xrange(step):
                y = y0 + (i + v) * (y1 - y0) / res
                for h in xrange(step):
                    x = x0 + (j + h) * (x1 - x0) / res
                    if hit(x, y):
                        img[(i + v)*res + (j + h)] = white

open("result.pgm", "wb").write(("P5\n%i %i 255\n" % (res, res)) +
                               "".join(img))
Another interesting option could be using a GPU, if available. Starting from a white picture and drawing the exterior of each annulus in black will, at the end, leave the intersection area in white.
img = QImage(res, res, QImage.Format_RGB32)
dc = QPainter(img)
dc.fillRect(0, 0, res, res, QBrush(QColor(255, 255, 255)))
dc.setPen(Qt.NoPen)
dc.setBrush(QBrush(QColor(0, 0, 0)))
for x, y, r0, r1 in data:
    xa1 = (x - r1 - x0) * res / (x1 - x0)
    xb1 = (x + r1 - x0) * res / (x1 - x0)
    ya1 = (y - r1 - y0) * res / (y1 - y0)
    yb1 = (y + r1 - y0) * res / (y1 - y0)
    xa0 = (x - r0 - x0) * res / (x1 - x0)
    xb0 = (x + r0 - x0) * res / (x1 - x0)
    ya0 = (y - r0 - y0) * res / (y1 - y0)
    yb0 = (y + r0 - y0) * res / (y1 - y0)
    p = QPainterPath()
    p.addEllipse(QRectF(xa0, ya0, xb0-xa0, yb0-ya0))
    p.addEllipse(QRectF(xa1, ya1, xb1-xa1, yb1-ya1))
    p.addRect(QRectF(0, 0, res, res))
    dc.drawPath(p)
and the computation part for an 800x800 resolution image takes about 8ms (and I'm not sure it's hardware accelerated).
If only the barycenter of the intersection is to be computed, then there is no memory allocation at all. For example, a "brute-force" approach is just a few lines of C:
typedef struct TReading {
    double x, y, r0, r1;
} Reading;

int hit(double xx, double yy,
        Reading *readings, int num_readings)
{
    while (num_readings--)
    {
        double dx = xx - readings->x;
        double dy = yy - readings->y;
        double d2 = dx*dx + dy*dy;
        if (d2 < readings->r0 * readings->r0) return 0;
        if (d2 > readings->r1 * readings->r1) return 0;
        readings++;
    }
    return 1;
}

int computeLocation(Reading *readings, int num_readings,
                    int resolution,
                    double *result_x, double *result_y)
{
    // Compute bounding box of interesting zone
    double x0 = -1E20, y0 = -1E20, x1 = 1E20, y1 = 1E20;
    for (int i=0; i<num_readings; i++)
    {
        if (readings[i].x - readings[i].r1 > x0)
            x0 = readings[i].x - readings[i].r1;
        if (readings[i].y - readings[i].r1 > y0)
            y0 = readings[i].y - readings[i].r1;
        if (readings[i].x + readings[i].r1 < x1)
            x1 = readings[i].x + readings[i].r1;
        if (readings[i].y + readings[i].r1 < y1)
            y1 = readings[i].y + readings[i].r1;
    }

    // Scan processing
    double ax = 0, ay = 0;
    int total = 0;
    for (int i=0; i<=resolution; i++)
    {
        double yy = y0 + i * (y1 - y0) / resolution;
        for (int j=0; j<=resolution; j++)
        {
            double xx = x0 + j * (x1 - x0) / resolution;
            if (hit(xx, yy, readings, num_readings))
            {
                ax += xx; ay += yy; total += 1;
            }
        }
    }
    if (total)
    {
        *result_x = ax / total;
        *result_y = ay / total;
    }
    return total;
}
And on my PC it can compute the barycenter with resolution = 100 in 0.08 ms (x=1.50000, y=1.383250) or with resolution = 400 in 1.3 ms (x=1.500000, y=1.383308). Of course a double-step speedup could be implemented even for the barycenter-only version.
I would switch from "max/min" to trying to minimize an error function. That gets you to the problem discussed at Finding a point that best fits the intersection of n spheres which is more tractable than intersecting a series of complicated shapes. (And what if one robot's sensor is messed up and it gives an impossible value? That variation will still usually give a reasonable answer.)
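A small sketch of that error-function idea in Python with SciPy (treating the midpoint of [minDistance, maxDistance] as the measured range; the data and names below are illustrative, not part of the linked answer):

import numpy as np
from scipy.optimize import least_squares

# (x, y, r_min, r_max) for each robot, using the question's example data
readings = [(2.0, 2.0, 0.5, 1.0),
            (1.0, 2.0, 0.3, 1.0),
            (1.5, 1.0, 0.3, 0.5)]

def residuals(p):
    # one residual per robot: distance from the estimate minus the mid-range measurement
    return [np.hypot(p[0] - x, p[1] - y) - 0.5 * (r0 + r1)
            for x, y, r0, r1 in readings]

estimate = least_squares(residuals, x0=[1.5, 1.5]).x
print(estimate)   # best-fit position of the object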
Not sure about your case, but in a typical robotics application you're going to be reading sensors periodically and crunching the data. If that's the case, you're trying to estimate the location based on noisy data and that's a common problem. As a simple (less rigorous) method, you could take the existing position and adjust it toward or away from each known point. Take the measured distance to target minus the present distance to target, multiply that delta (error) by some value between 0 and 1, and move your estimated position that much toward the target. Repeat for each target. Then repeat each time you get a new set of measurements. The multiplier will have an effect like a low-pass filter, smaller values will give you a more stable position estimate with slower response to movement. For the distance, use the average of the min and max. If you can put tighter bounds on the range to one target, you can increase the multiplier closer to 1 for just that target.
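A minimal Python sketch of that adjust-toward-or-away-from-each-target loop (the gain value and the data are illustrative):

import numpy as np

# (x, y, r_min, r_max) per known point, from the example data
readings = [(2.0, 2.0, 0.5, 1.0),
            (1.0, 2.0, 0.3, 1.0),
            (1.5, 1.0, 0.3, 0.5)]

pos = np.array([1.5, 1.5])   # current estimate, e.g. the last known position
gain = 0.3                   # multiplier between 0 and 1, acts like a low-pass filter

for _ in range(50):          # in practice: once per new set of measurements
    for x, y, r0, r1 in readings:
        target = np.array([x, y])
        offset = pos - target
        dist = np.linalg.norm(offset)
        if dist == 0:
            continue
        measured = 0.5 * (r0 + r1)   # use the average of min and max for the distance
        error = measured - dist      # positive: move away from the target, negative: move toward it
        pos = pos + gain * error * (offset / dist)

print(pos)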
This is of course a crude position estimator. The math guys can probably be more rigorous, but also more complicated. The solution definitely doesn't have anything to do with intersecting areas and working with geometric shapes.

How to generate a lower frequency version of a signal in Matlab?

With a sine input, I tried to modify its frequency by cutting some lower frequencies in the spectrum, shifting the main frequency towards zero. As the signal is not fftshifted, I tried to do that by eliminating some samples at the beginning and at the end of the fft vector:
interval = 1;
samplingFrequency = 44100;
signalFrequency = 440;
sampleDuration = 1 / samplingFrequency;
timespan = 1 : sampleDuration : (1 + interval);
original = sin(2 * pi * signalFrequency * timespan);
fourierTransform = fft(original);
frequencyCut = 10; %% Hertz
frequencyCut = floor(frequencyCut * (length(original) / samplingFrequency) / 4); %% Samples
maxFrequency = length(fourierTransform) - (2 * frequencyCut);
signal = ifft(fourierTransform(frequencyCut + 1:maxFrequency), 'symmetric');
But it didn't work as expected. I also tried to remove the center part of the spectrum, but it yielded a higher frequency sine wave too.
How to make it right?
#las3rjock:
it's more like downsampling the signal itself, not the FFT..
Take a look at downsample.
Or you could create a timeseries object, and resample it using the resample method.
EDIT:
a similar example :)
% generate a signal
Fs = 200;
f = 5;
t = 0:1/Fs:1-1/Fs;
y = sin(2*pi * f * t) + sin(2*pi * 2*f * t) + 0.3*randn(size(t));
% downsample
n = 2;
yy = downsample([t' y'], n);
% plot
subplot(211), plot(t,y), axis([0 1 -2 2])
subplot(212), plot(yy(:,1), yy(:,2)), axis([0 1 -2 2])
A crude way to downsample your spectrum by a factor of n would be
% downsample by a factor of 2
n = 2; % downsampling factor
newSpectrum = fourierTransform(1:n:end);
For this to be a lower-frequency signal on your original time axis, you will need to zero-pad this vector up to the original length on both the positive and negative ends. This will be made much simpler using fftshift:
pad = length(fourierTransform);
fourierTransform = [zeros(1,pad/4) fftshift(newSpectrum) zeros(1,pad/4)];
To recover the downshifted signal, you fftshift back before applying the inverse transform:
signal = ifft(fftshift(fourierTransform));
EDIT: Here is a complete script which generates a plot comparing the original and downshifted signal:
% generate original signal
interval = 1;
samplingFrequency = 44100;
signalFrequency = 440;
sampleDuration = 1 / samplingFrequency;
timespan = 1 : sampleDuration : (1 + interval);
original = sin(2 * pi * signalFrequency * timespan);
% plot original signal
subplot(211)
plot(timespan(1:1000),original(1:1000))
title('Original signal')
fourierTransform = fft(original)/length(original);
% downsample spectrum by a factor of 2
n = 2; % downsampling factor
newSpectrum = fourierTransform(1:n:end);
% zero-pad the positive and negative ends of the spectrum
pad = floor(length(fourierTransform)/4);
fourierTransform = [zeros(1,pad) fftshift(newSpectrum) zeros(1,pad)];
% inverse transform
signal = ifft(length(original)*fftshift(fourierTransform),'symmetric');
% plot the downshifted signal
subplot(212)
plot(timespan(1:1000),signal(1:1000))
title('Shifted signal')
Plot of original and downshifted signals http://img5.imageshack.us/img5/5426/downshift.png
