How can I draw a perpendicular to a line segment from a given point? My line segment is defined by (x1, y1) and (x2, y2). If I drop a perpendicular from a point (x3, y3), it meets the line at a point (x4, y4). I want to find this (x4, y4).
I solved the equations for you:
k = ((y2-y1) * (x3-x1) - (x2-x1) * (y3-y1)) / ((y2-y1)^2 + (x2-x1)^2)
x4 = x3 - k * (y2-y1)
y4 = y3 + k * (x2-x1)
Where ^2 means squared
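If you want this as code, here is a minimal Python sketch of the same formulas (the function name and example values are my own):
def foot_of_perpendicular(x1, y1, x2, y2, x3, y3):
    # k from the formula above; assumes the segment has non-zero length
    k = ((y2 - y1) * (x3 - x1) - (x2 - x1) * (y3 - y1)) / ((y2 - y1)**2 + (x2 - x1)**2)
    x4 = x3 - k * (y2 - y1)
    y4 = y3 + k * (x2 - x1)
    return x4, y4

# Example: the foot of the perpendicular from (0, 2) onto the x-axis is (0, 0)
print(foot_of_perpendicular(0, 0, 5, 0, 0, 2))  # (0.0, 0.0)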
From wiki:
In algebra, for any linear equation y = mx + b, the perpendiculars will all have a slope of (-1/m), the opposite reciprocal of the original slope. It is helpful to memorize the slogan "to find the slope of the perpendicular line, flip the fraction and change the sign." Recall that any whole number a is itself over one, and can be written as (a/1).
To find the perpendicular of a given line which also passes through a particular point (x, y), solve the equation y = (-1/m)x + b, substituting in the known values of m, x, and y to solve for b.
The slope of the line, m, through (x1, y1) and (x2, y2) is m = (y1 - y2) / (x1 - x2)
I agree with peter.murray.rust, vectors make the solution clearer:
// first convert line to normalized unit vector
double dx = x2 - x1;
double dy = y2 - y1;
double mag = sqrt(dx*dx + dy*dy);
dx /= mag;
dy /= mag;
// translate the point and get the dot product
double lambda = (dx * (x3 - x1)) + (dy * (y3 - y1));
x4 = (dx * lambda) + x1;
y4 = (dy * lambda) + y1;
You know the point (x3, y3), and since the perpendicular's slope is the negative reciprocal of the original slope m, you can write its point-slope equation directly. Together with the original line you then have two equations and can solve for their intersection:
y-y3=-(1/m)*(x-x3)
y-y1=m*(x-x1)
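If it helps, here is a small Python sketch (my own naming) that solves those two point-slope equations directly; it assumes the original line is neither vertical nor horizontal, so both slopes exist:
def intersect_with_perpendicular(x1, y1, x2, y2, x3, y3):
    m = (y2 - y1) / (x2 - x1)              # slope of the original line (x1 != x2, m != 0)
    # original line:      y = m*(x - x1) + y1
    # perpendicular line: y = -(1/m)*(x - x3) + y3
    x4 = (m * x1 - y1 + x3 / m + y3) / (m + 1 / m)
    y4 = m * (x4 - x1) + y1
    return x4, y4

print(intersect_with_perpendicular(0, 0, 5, 5, 0, 2))  # (1.0, 1.0)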
You will often find that using vectors makes the solution clearer...
Here is a routine from my own library:
public class Line2 {
Real2 from;
Real2 to;
Vector2 vector;
Vector2 unitVector = null;
public Real2 getNearestPointOnLine(Real2 point) {
unitVector = to.subtract(from).getUnitVector();
Vector2 lp = new Vector2(point.subtract(this.from));
double lambda = unitVector.dotProduct(lp);
Real2 vv = unitVector.multiplyBy(lambda);
return from.plus(vv);
}
}
You will have to implement Real2 (a point) and Vector2 and dotProduct() but these should be simple:
The code then looks something like:
Real2 p1 = new Real2(x1, y1);
Real2 p2 = new Real2(x2, y2);
Real2 p3 = new Real2(x3, y3);
Line2 line = new Line2(p1, p2);
Real2 p4 = line.getNearestPointOnLine(p3);
The library (org.xmlcml.euclid) is at:
http://sourceforge.net/projects/cml/
and there are unit tests which will exercise this method and show you how to use it.
@Test
public final void testGetNearestPointOnLine() {
Real2 p = l1112.getNearestPointOnLine(new Real2(0., 0.));
Real2Test.assertEquals("point", new Real2(0.4, -0.2), p, 0.0000001);
}
Compute the slope of the line joining points (x1,y1) and (x2,y2) as m=(y2-y1)/(x2-x1)
Equation of the line joining (x1,y1) and (x2,y2) using point-slope form of line equation, would be y-y2 = m(x-x2)
Slope of the line joining (x3,y3) and (x4,y4) would be -(1/m)
Again, equation of the line joining (x3,y3) and (x4,y4) using point-slope form of line equation, would be y-y3 = -(1/m)(x-x3)
Solve these two line equations as you solve a linear equation in two variables and the values of x and y you get would be your (x4,y4)
I hope this helps.
cheers
Find the slopes of both lines, say m1 and m2; then m1*m2 = -1 is the condition for perpendicularity.
Matlab function code for this problem:
function Pr=getSpPoint(Line,Point)
% getSpPoint(): find Perpendicular on a line segment from a given point
x1=Line(1,1);
y1=Line(1,2);
x2=Line(2,1);
y2=Line(2,2);
x3=Point(1,1);
y3=Point(1,2);
px = x2-x1;
py = y2-y1;
dAB = px*px + py*py;
u = ((x3 - x1) * px + (y3 - y1) * py) / dAB;
x = x1 + u * px;
y = y1 + u * py;
Pr=[x,y];
end
Mathematica introduced the function RegionNearest[] in version 10, 2014. This function could be used to return an answer to this question:
{x4,y4} = RegionNearest[Line[{{x1,y1},{x2,y2}}],{x3,y3}]
This is mostly a duplicate of Arnkrishn's answer. I just wanted to complete it with a Mathematica code snippet:
m = (y2 - y1)/(x2 - x1)
eqn1 = y - y3 == -(1/m)*(x - x3)
eqn2 = y - y1 == m*(x - x1)
Solve[eqn1 && eqn2, {x, y}]
This is a C# implementation of the accepted answer. It's also using ArcGis to return a MapPoint as that's what we're using for this project.
private MapPoint GenerateLinePoint(double startPointX, double startPointY, double endPointX, double endPointY, double pointX, double pointY)
{
double k = ((endPointY - startPointY) * (pointX - startPointX) - (endPointX - startPointX) * (pointY - startPointY)) / (Math.Pow(endPointY - startPointY, 2)
+ Math.Pow(endPointX - startPointX, 2));
double resultX = pointX - k * (endPointY - startPointY);
double resultY = pointY + k * (endPointX - startPointX);
return new MapPoint(resultX, resultY, 0, SpatialReferences.Wgs84);
}
Thanks to Ray as this worked perfectly for me.
Just for the sake of completeness, here is a solution using homogeneous coordinates.
The homogeneous points are:
p1 = (x1,y1,1), p2 = (x2,y2,1), p3 = (x3,y3,1)
A line through two points is their cross product:
l_12 := p1 x p2 = (y1-y2, x2-x1, x1*y2 - x2*y1)
The (signed) distance of a point to a line is their dot product:
d := l_12 * p3 = x3*(y1-y2) + y3*(x2-x1) + x1*y2 - x2*y1
The vector from p4 to p3 is d times the normal vector of l_12 divided by the squared length of the normal vector, so:
n2 := (y1-y2)^2 + (x2-x1)^2
p4 := p3 - d/n2*(y1-y2, x2-x1, 0)
Note: if you divide l_12 by the length of the normal vector
l_12 := l_12 / sqrt((y1-y2)^2 + (x2-x1)^2)
the distance d will be the euclidean distance.
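For reference, a small Python sketch of this homogeneous-coordinates recipe (the function name is mine; it assumes p1 and p2 are distinct):
def foot_homogeneous(p1, p2, p3):
    x1, y1 = p1
    x2, y2 = p2
    x3, y3 = p3
    # line l_12 = p1 x p2 (homogeneous cross product)
    a, b, c = y1 - y2, x2 - x1, x1 * y2 - x2 * y1
    d = a * x3 + b * y3 + c          # l_12 . p3
    n2 = a * a + b * b               # squared length of the normal vector
    return (x3 - d / n2 * a, y3 - d / n2 * b)

print(foot_homogeneous((0, 0), (5, 0), (2, 3)))  # (2.0, 0.0)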
First, calculate the linear function determined by the points
(x1,y1),(x2,y2).
We get:
f1(x) = m*x + b1, where m and b1 are constants.
This step is easy to calculate by the formula of a linear function between two points.
Then, calculate the linear function f2 that goes through (x3,y3).
Its slope is -(1/m), the negative reciprocal of the slope of f1.
Then calculate the constant b2 from the coordinates of the point (x3,y3).
We get f2(x) = -(1/m)*x + b2, where b2 is a constant.
The last thing to do is to find the intersection of f1 and f2.
You can find x by solving the equation -(1/m)*x + b2 = m*x + b1, then plug x into one of the equations to find y.
This is a vectorized Matlab function for finding pairwise projections of m points onto n line segments. Here xp and yp are m by 1 vectors holding coordinates of m different points, and x1, y1, x2 and y2 are n by 1 vectors holding coordinates of start and end points of n different line segments.
It returns m by n matrices, x and y, where x(i, j) and y(i, j) are coordinates of projection of i-th point onto j-th line.
The actual work is done in the first few lines; the rest of the function runs a self-test demo, in case it is called with no parameters. It's relatively fast: I managed to find projections of 2k points onto 2k line segments in less than 0.05 s.
function [x, y] = projectPointLine(xp, yp, x1, y1, x2, y2)
if nargin > 0
xd = (x2-x1)';
yd = (y2-y1)';
dAB = xd.*xd + yd.*yd;
u = bsxfun(@rdivide, bsxfun(@times, bsxfun(@minus, xp, x1'), xd) + ...
bsxfun(@times, bsxfun(@minus, yp, y1'), yd), dAB);
x = bsxfun(@plus, x1', bsxfun(@times, u, xd));
y = bsxfun(@plus, y1', bsxfun(@times, u, yd));
else
nLine = 3;
nPoint = 2;
xp = rand(nPoint, 1) * 2 -1;
yp = rand(nPoint, 1) * 2 -1;
x1 = rand(nLine, 1) * 2 -1;
y1 = rand(nLine, 1) * 2 -1;
x2 = rand(nLine, 1) * 2 -1;
y2 = rand(nLine, 1) * 2 -1;
tic;
[x, y] = projectPointLine(xp, yp, x1, y1, x2, y2);
toc
close all;
plot([x1'; x2'], [y1'; y2'], '.-', 'linewidth', 2, 'markersize', 20);
axis equal;
hold on
C = lines(nPoint + nLine);
for i=1:nPoint
scatter(x(i, :), y(i, :), 100, C(i+nLine, :), 'x', 'linewidth', 2);
scatter(xp(i), yp(i), 100, C(i+nLine, :), 'x', 'linewidth', 2);
end
for i=1:nLine
scatter(x(:, i)', y(:, i)', 100, C(i, :), 'o', 'linewidth', 2);
end
end
end
I want to calculate a point on a given line that is perpendicular from a given point.
I have a line segment AB and have a point C outside line segment. I want to calculate a point D on AB such that CD is perpendicular to AB.
I have to find point D.
It is quite similar to this, but I also want to take the Z coordinate into account, as the result does not show up correctly in 3D space.
Proof:
Point D is on a line CD perpendicular to AB, and of course D belongs to AB.
Write down the Dot product of the two vectors CD.AB = 0, and express the fact D belongs to AB as D=A+t(B-A).
We end up with 3 equations:
Dx=Ax+t(Bx-Ax)
Dy=Ay+t(By-Ay)
(Dx-Cx)(Bx-Ax)+(Dy-Cy)(By-Ay)=0
Substituting the first two equations into the third one gives:
(Ax+t(Bx-Ax)-Cx)(Bx-Ax)+(Ay+t(By-Ay)-Cy)(By-Ay)=0
Distributing to solve for t gives:
(Ax-Cx)(Bx-Ax)+t(Bx-Ax)(Bx-Ax)+(Ay-Cy)(By-Ay)+t(By-Ay)(By-Ay)=0
which gives:
t= -[(Ax-Cx)(Bx-Ax)+(Ay-Cy)(By-Ay)]/[(Bx-Ax)^2+(By-Ay)^2]
getting rid of the negative signs:
t=[(Cx-Ax)(Bx-Ax)+(Cy-Ay)(By-Ay)]/[(Bx-Ax)^2+(By-Ay)^2]
Once you have t, you can figure out the coordinates for D from the first two equations.
Dx=Ax+t(Bx-Ax)
Dy=Ay+t(By-Ay)
function getSpPoint(A,B,C){
var x1=A.x, y1=A.y, x2=B.x, y2=B.y, x3=C.x, y3=C.y;
var px = x2-x1, py = y2-y1, dAB = px*px + py*py;
var u = ((x3 - x1) * px + (y3 - y1) * py) / dAB;
var x = x1 + u * px, y = y1 + u * py;
return {x:x, y:y}; //this is D
}
There is a simple closed form solution for this (requiring no loops or approximations) using the vector dot product.
Imagine your points as vectors where point A is at the origin (0,0) and all other points are referenced from it (you can easily transform your points to this reference frame by subtracting point A from every point).
In this reference frame point D is simply the vector projection of point C on the vector B which is expressed as:
// Per Wikipedia this is more efficient than the standard (C . Bhat) * Bhat
Vector projection = Vector.DotProduct(C, B) / Vector.DotProduct(B, B) * B
The result vector can be transformed back to the original coordinate system by adding point A to it.
A point on line AB can be parametrized by:
M(x)=A+x*(B-A), for x real.
You want D=M(x) such that DC and AB are orthogonal:
dot(B-A,C-M(x))=0.
That is: dot(B-A,C-A-x*(B-A))=0, or dot(B-A,C-A)=x*dot(B-A,B-A), giving:
x=dot(B-A,C-A)/dot(B-A,B-A) which is defined unless A=B.
What you are trying to do is called vector projection
Here I have converted the answer code from "cuixiping" to Matlab:
function Pr=getSpPoint(Line,Point)
% getSpPoint(): find Perpendicular on a line segment from a given point
x1=Line(1,1);
y1=Line(1,2);
x2=Line(2,1);
y2=Line(2,2);
x3=Point(1,1);
y3=Point(1,2);
px = x2-x1;
py = y2-y1;
dAB = px*px + py*py;
u = ((x3 - x1) * px + (y3 - y1) * py) / dAB;
x = x1 + u * px;
y = y1 + u * py;
Pr=[x,y];
end
I didn't see this answer offered, but Ron Warholic had a great suggestion with the Vector Projection. ACD is merely a right triangle.
Create the vector AC i.e (Cx - Ax, Cy - Ay)
Create the Vector AB i.e (Bx - Ax, By - Ay)
The dot product of AC and AB equals length(AC)*length(AB)*cos(theta), where theta is the angle between the vectors, so cos(theta) = (ACx*ABx + ACy*ABy) / (length(AC)*length(AB)).
Length of a vector is sqrt(x*x + y*y)
Length of AD = cos(theta)*length(AC)
Normalize AB i.e (ABx/length(AB), ABy/length(AB))
D = A + NAB*length(AD)
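A small Python sketch of those steps, mainly to make the cosine bookkeeping explicit (naming is mine; it assumes A, B and C are distinct points):
import math

def project_via_cosine(A, B, C):
    ACx, ACy = C[0] - A[0], C[1] - A[1]
    ABx, ABy = B[0] - A[0], B[1] - A[1]
    len_AB = math.hypot(ABx, ABy)
    len_AC = math.hypot(ACx, ACy)
    cos_theta = (ACx * ABx + ACy * ABy) / (len_AC * len_AB)
    len_AD = cos_theta * len_AC                  # length of the adjacent side of the right triangle
    nABx, nABy = ABx / len_AB, ABy / len_AB      # normalized AB
    return (A[0] + nABx * len_AD, A[1] + nABy * len_AD)

print(project_via_cosine((0, 0), (5, 0), (2, 3)))  # (2.0, 0.0)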
For anyone who might need this in C# I'll save you some time:
double Ax, Ay, Az;   // segment start point A (fill in your values)
double Bx, By, Bz;   // segment end point B
double Cx, Cy, Cz;   // the point to project (the Z values are not used by this 2D formula)
double t = ((Cx - Ax) * (Bx - Ax) + (Cy - Ay) * (By - Ay)) / (Math.Pow(Bx - Ax, 2) + Math.Pow(By - Ay, 2));
double Dx = Ax + t*(Bx - Ax);
double Dy = Ay + t*(By - Ay);
Here is another Python implementation without using a for loop. It works for any number of points and any number of line segments. Given p_array as a set of points, and x_array, y_array as the vertices of continuous line segments (a polyline).
This uses the equation y = m*x + n, and the fact that the slope of a perpendicular line segment is -1/m.
import numpy as np

def ortoSegmentPoint(p_array, x_array, y_array):
    """
    :param p_array: np.array([[ 718898.941 9677612.901 ], [ 718888.8227 9677718.305 ], [ 719033.0528 9677770.692 ]])
    :param y_array: np.array([9677656.39934991 9677720.27550726 9677754.79])
    :param x_array: np.array([718895.88881594 718938.61392781 718961.46])
    :return: [POINT, LINE] indexes where point is orthogonal to line segment
    """
    # slope "m" of each segment, y = mx + n
    m_array = np.divide(y_array[1:] - y_array[:-1], x_array[1:] - x_array[:-1])
    # inverted slope, 1/m
    inv_m_array = np.divide(1, m_array)
    # intercept "n", y = mx + n
    n_array = y_array[:-1] - x_array[:-1] * m_array
    # intercept "n_orto" of the perpendicular line through each point
    n_orto_array = np.array(p_array[:, 1]).reshape(len(p_array), 1) + inv_m_array * np.array(p_array[:, 0]).reshape(len(p_array), 1)
    # points where the perpendiculars intersect the segments
    x_intersec_array = np.divide(n_orto_array - n_array, m_array + inv_m_array)
    y_intersec_array = m_array * x_intersec_array + n_array
    # list the segment coordinates in pairs
    x_coord = np.array([x_array[:-1], x_array[1:]]).T
    y_coord = np.array([y_array[:-1], y_array[1:]]).T
    # rows: number of points, columns: number of segments
    maskX = np.where(np.logical_and(x_intersec_array < np.max(x_coord, axis=1), x_intersec_array > np.min(x_coord, axis=1)), True, False)
    maskY = np.where(np.logical_and(y_intersec_array < np.max(y_coord, axis=1), y_intersec_array > np.min(y_coord, axis=1)), True, False)
    mask = maskY * maskX
    return np.argwhere(mask == True)
As Ron Warholic and Nicolas Repiquet answered, this can be solved using vector projection. For completeness I'll add a python/numpy implementation of this here in case it saves anyone else some time:
import numpy as np
# Define some test data that you can solve for directly.
first_point = np.array([4, 4])
second_point = np.array([8, 4])
target_point = np.array([6, 6])
# Expected answer
expected_point = np.array([6, 4])
# Create vector for first point on line to perpendicular point.
point_vector = target_point - first_point
# Create vector for first point and second point on line.
line_vector = second_point - first_point
# Create the projection vector that will define the position of the resultant point with respect to the first point.
projection_vector = (np.dot(point_vector, line_vector) / np.dot(line_vector, line_vector)) * line_vector
# Alternative method proposed in another answer if for whatever reason you prefer to use this.
_projection_vector = (np.dot(point_vector, line_vector) / np.linalg.norm(line_vector)**2) * line_vector
# Add the projection vector to the first point
projected_point = first_point + projection_vector
# Test
(projected_point == expected_point).all()
Since you're not stating which language you're using, I'll give you a generic answer:
Just have a loop passing through all the points in your AB segment, "draw a segment" to C from them, get the distance from C to D and from A to D, and apply the Pythagorean theorem. If AD^2 + CD^2 = AC^2, then you've found your point.
Also, you can optimize your code by starting the loop by the shortest side (considering AD and BD sides), since you'll find that point earlier.
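A rough Python sketch of that brute-force idea (the step count and names are my own): walk along AB in small steps and keep the sample where the Pythagorean identity comes closest to holding.
def brute_force_foot(A, B, C, steps=1000):
    AC2 = (C[0] - A[0])**2 + (C[1] - A[1])**2
    best, best_err = A, float("inf")
    for i in range(steps + 1):
        t = i / steps
        D = (A[0] + t * (B[0] - A[0]), A[1] + t * (B[1] - A[1]))
        AD2 = (D[0] - A[0])**2 + (D[1] - A[1])**2
        CD2 = (D[0] - C[0])**2 + (D[1] - C[1])**2
        err = abs(AD2 + CD2 - AC2)   # zero when ADC is a right angle
        if err < best_err:
            best, best_err = D, err
    return best

print(brute_force_foot((0, 0), (5, 0), (2, 3)))  # approximately (2.0, 0.0)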
Here is a python implementation based on Corey Ogburn's answer from this thread.
It projects the point q onto the line segment defined by p1 and p2 resulting in the point r.
It will return None if r falls outside of the line segment:
import math

def is_point_on_line(p1, p2, q):
    if (p1[0] == p2[0]) and (p1[1] == p2[1]):
        p1[0] -= 0.00001

    U = ((q[0] - p1[0]) * (p2[0] - p1[0])) + ((q[1] - p1[1]) * (p2[1] - p1[1]))
    Udenom = math.pow(p2[0] - p1[0], 2) + math.pow(p2[1] - p1[1], 2)
    U /= Udenom

    r = [0, 0]
    r[0] = p1[0] + (U * (p2[0] - p1[0]))
    r[1] = p1[1] + (U * (p2[1] - p1[1]))

    minx = min(p1[0], p2[0])
    maxx = max(p1[0], p2[0])
    miny = min(p1[1], p2[1])
    maxy = max(p1[1], p2[1])

    is_valid = (minx <= r[0] <= maxx) and (miny <= r[1] <= maxy)

    if is_valid:
        return r
    else:
        return None
I'm trying to draw a line between two (2D) points when the user swipes their finger across a touch screen. To do this, I plan on drawing a rectangle on every touch update between the X and Y of the previous touch update and the X and Y of the latest touch update. This should create a continuous and solid line as the user swipes their finger across the screen. However, I would also like this line to have an arbitrary width. My question is, how should I go about calculating the coordinates I need for each rectangle (x1, y1, x2, y2)?
--
Also: if anyone has any information on how I could then go about applying anti-aliasing to this line it'd be a massive help.
Calculate a vector between start and end points
V.X := Point2.X - Point1.X;
V.Y := Point2.Y - Point1.Y;
Then calculate a perpendicular to it (just swap X and Y coordinates)
P.X := V.Y; //Use separate variable otherwise you overwrite X coordinate here
P.Y := -V.X; //Flip the sign of either the X or Y (edit by adam.wulf)
Normalize that perpendicular
Length = sqrt(P.X * P.X + P.Y * P.Y); //That's the length of the perpendicular
N.X = P.X / Length;
N.Y = P.Y / Length; //Now N is normalized perpendicular
Calculate 4 points that form a rectangle by adding normalized perpendicular and multiplying it by half of the desired width
R1.X := Point1.X + N.X * Width / 2;
R1.Y := Point1.Y + N.Y * Width / 2;
R2.X := Point1.X - N.X * Width / 2;
R2.Y := Point1.Y - N.Y * Width / 2;
R3.X := Point2.X + N.X * Width / 2;
R3.Y := Point2.Y + N.Y * Width / 2;
R4.X := Point2.X - N.X * Width / 2;
R4.Y := Point2.Y - N.Y * Width / 2;
Draw rectangle using these 4 points.
EDIT: To answer the comments: if X and Y are the same then the line is exactly diagonal, and the perpendicular to a diagonal is another diagonal. Normalization is a method of making a length equal to 1, so that the width of your line in this example will not depend on the perpendicular's length (which is equal to the line's length here).
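For reference, a compact Python sketch of the steps above (names are mine; it assumes the two points are distinct):
import math

def line_to_quad(p1, p2, width):
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]   # vector from start to end
    px, py = vy, -vx                        # perpendicular (swap and flip one sign)
    length = math.hypot(px, py)
    nx, ny = px / length, py / length       # normalized perpendicular
    hw = width / 2.0
    r1 = (p1[0] + nx * hw, p1[1] + ny * hw)
    r2 = (p1[0] - nx * hw, p1[1] - ny * hw)
    r3 = (p2[0] + nx * hw, p2[1] + ny * hw)
    r4 = (p2[0] - nx * hw, p2[1] - ny * hw)
    return r1, r2, r3, r4

print(line_to_quad((0, 0), (10, 0), 4))  # ((0.0, -2.0), (0.0, 2.0), (10.0, -2.0), (10.0, 2.0))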
Easy way (I'll call the "width" the thickness of the line):
We need to calculate 2 values, the shift on the x axis and the shift on the y axis for each of the 4 corners. Which is easy enough.
The dimensions of the line are:
width = x2 - x1
height = y2 - y1
Now the x shift (let's call it xS):
xS = (thickness * height / length of line) / 2
yS = (thickness * width / length of line) / 2
To find the length of the line, use Pythagoras's theorem:
length = square_root(width * width + height * height)
Now you have the x shift and y shift.
First coordinate is: (x1 - xS, y1 + yS)
Second: (x1 + xS, y1 - yS)
Third: (x2 + xS, y2 - yS)
Fourth: (x2 - xS, y2 + yS)
And there you go! (Those coordinates are drawn counterclockwise, the default for OpenGL)
If I understand you correctly, you have two end points, say A(x1,y1) and B(x2,y2), and an arbitrary width for the rectangle, say w. I assume the end points are at the middle of the rectangle's shorter sides, meaning the final rectangle's corners are at distance w/2 from A and B.
You can compute the slope of the line by;
s1 = (y2 - y1) / (x2 - x1) // assuming x1 != x2
The slope of the shorter sides is nothing but s2 = -1/s1.
We have slope, we have distance and we have the reference points.
We then can derive two equations for each corner point:
For one corner closer to A
C(x3,y3):
(y3 - y1) / (x3 - x1) = s2 // by slope
(y3 - y1)^2 + (x3 - x1)^2 = (w/2)^2 // by distance
replacing (y3 - y1) by a and (x3 - x1) by b yields
a = b * s2 // slope equation
// replace a by b*s2
b^2 * s2^2 + b^2 = (w/2)^2 // distance equation
b^2 = (w/2)^2 / (s2^2+1)
b = sqrt((w/2)^2 / (s2^2+1))
we know w and s2 and hence compute b
If b is known, we can deduce x3
x3 = b + x1
and a, as well
a = b * s2
and so y3
y3 = b*s2 + y1
we have one corner point C(x3,y3).
To compute the other corner point closer to A, say D(x4,y4), the slope equation can be constructed as
(y1 - y4) / (x1 - x4) = s2
and the calculations listed above should be applied.
Other two corners can be calculated with the steps listed here replacing A(x1, y1) with B(x2,y2).
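Here is a Python sketch of that slope-based derivation for the two corners nearest A (naming is mine; it assumes the segment is neither vertical nor horizontal, so both s1 and s2 exist):
import math

def corners_near_A(x1, y1, x2, y2, w):
    s1 = (y2 - y1) / (x2 - x1)     # slope of AB
    s2 = -1.0 / s1                 # slope of the short side
    b = math.sqrt((w / 2.0)**2 / (s2**2 + 1))
    C = (x1 + b, y1 + b * s2)      # one corner at distance w/2 from A
    D = (x1 - b, y1 - b * s2)      # the opposite corner near A
    return C, D

print(corners_near_A(0, 0, 4, 4, 2))  # roughly ((0.707, -0.707), (-0.707, 0.707))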
I have a square bitmap of a circle and I want to compute the normals of all the pixels in that circle as if it were a sphere of radius 1:
The sphere/circle is centered in the bitmap.
What is the equation for this?
Don't know much about how people program 3D stuff, so I'll just give the pure math and hope it's useful.
Sphere of radius 1, centered on origin, is the set of points satisfying:
x^2 + y^2 + z^2 = 1
We want the 3D coordinates of a point on the sphere where x and y are known. So, just solve for z:
z = ±sqrt(1 - x^2 - y^2)
Now, let us consider a unit vector pointing outward from the sphere. It's a unit sphere, so we can just use the vector from the origin to (x, y, z), which is, of course, <x, y, z>.
Now we want the equation of a plane tangent to the sphere at (x, y, z), but this will be using its own x, y, and z variables, so instead I'll make it tangent to the sphere at (x0, y0, z0). This is simply:
x0*x + y0*y + z0*z = 1
Hope this helps.
(OP):
you mean something like:
const int R = 31, SZ = power_of_two(R*2);
std::vector<vec4_t> p;
for(int y=0; y<SZ; y++) {
for(int x=0; x<SZ; x++) {
const float rx = (float)(x-R)/R, ry = (float)(y-R)/R;
if(rx*rx+ry*ry > 1) { // outside sphere
p.push_back(vec4_t(0,0,0,0));
} else {
vec3_t normal(rx,sqrt(1.-rx*rx-ry*ry),ry);
p.push_back(vec4_t(normal,1));
}
}
}
It does produce a nice sphere-like shading if I treat the normals as colours and blit it; is it right?
(TZ)
Sorry, I'm not familiar with those aspects of C++. Haven't used the language very much, nor recently.
This formula is often used for "fake-envmapping" effect.
double x = 2.0 * pixel_x / bitmap_size - 1.0;
double y = 2.0 * pixel_y / bitmap_size - 1.0;
double r2 = x*x + y*y;
if (r2 < 1)
{
// Inside the circle
double z = sqrt(1 - r2);
.. here the normal is (x, y, z) ...
}
Obviously you're limited to assuming all the points are on one half of the sphere or similar, because of the missing dimension. Past that, it's pretty simple.
The middle of the circle has a normal facing precisely in or out, perpendicular to the plane the circle is drawn on.
Each point on the edge of the circle is facing away from the middle, and thus you can calculate the normal for that.
For any point between the middle and the edge, you use the distance from the middle, and some simple trig (which eludes me at the moment). A lerp is roughly accurate at some points, but not quite what you need, since it's a curve. Simple curve though, and you know the beginning and end values, so figuring them out should only take a simple equation.
I think I get what you're trying to do: generate a grid of depth data for an image. Sort of like ray-tracing a sphere.
In that case, you want a Ray-Sphere Intersection test:
http://www.siggraph.org/education/materials/HyperGraph/raytrace/rtinter1.htm
Your rays will be simple perpendicular rays, based off your U/V coordinates (times two, since your sphere has a diameter of 2). This will give you the front-facing points on the sphere.
From there, calculate normals as below (point - origin, the radius is already 1 unit).
Ripped off from the link above:
You have to combine two equations:
Ray: R(t) = R0 + t * Rd , t > 0 with R0 = [X0, Y0, Z0] and Rd = [Xd, Yd, Zd]
Sphere: S = the set of points [xs, ys, zs], where (xs - xc)^2 + (ys - yc)^2 + (zs - zc)^2 = Sr^2
To do this, calculate your ray (x * pixel / width, y * pixel / width, z: 1), then:
A = Xd^2 + Yd^2 + Zd^2
B = 2 * (Xd * (X0 - Xc) + Yd * (Y0 - Yc) + Zd * (Z0 - Zc))
C = (X0 - Xc)^2 + (Y0 - Yc)^2 + (Z0 - Zc)^2 - Sr^2
Plug into quadratic equation:
t0, t1 = (-B ± (B^2 - 4*C)^(1/2)) / 2
Check discriminant (B^2 - 4*C), and if real root, the intersection is:
Ri = [xi, yi, zi] = [x0 + xd * ti , y0 + yd * ti, z0 + zd * ti]
And the surface normal is:
SN = [(xi - xc)/Sr, (yi - yc)/Sr, (zi - zc)/Sr]
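For reference, a small Python sketch of the general ray-sphere test quoted above, assuming the ray direction is already normalized (so A = 1); names are mine:
import math

def ray_sphere_hit(R0, Rd, center, radius):
    # R0: ray origin, Rd: normalized ray direction, both as (x, y, z) tuples
    oc = [R0[i] - center[i] for i in range(3)]
    B = 2 * sum(Rd[i] * oc[i] for i in range(3))
    C = sum(oc[i] * oc[i] for i in range(3)) - radius * radius
    disc = B * B - 4 * C
    if disc < 0:
        return None                      # ray misses the sphere
    t = (-B - math.sqrt(disc)) / 2       # nearest intersection
    hit = tuple(R0[i] + t * Rd[i] for i in range(3))
    normal = tuple((hit[i] - center[i]) / radius for i in range(3))
    return hit, normal

print(ray_sphere_hit((0, 0, -5), (0, 0, 1), (0, 0, 0), 1))  # hit (0, 0, -1), normal (0, 0, -1)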
Boiling it all down:
So, since we're talking unit values, and rays that point straight at Z (no x or y component), we can boil down these equations greatly:
Ray:
X0 = 2 * pixelX / width
Y0 = 2 * pixelY / height
Z0 = 0
Xd = 0
Yd = 0
Zd = 1
Sphere:
Xc = 1
Yc = 1
Zc = 1
Factors:
A = 1 (unit ray)
B
= 2 * (0 + 0 + (0 - 1))
= -2 (no x/y component)
C
= (X0 - 1) ^ 2 + (Y0 - 1) ^ 2 + (0 - 1) ^ 2 - 1
= (X0 - 1) ^ 2 + (Y0 - 1) ^ 2
Discriminant
= (-2) ^ 2 - 4 * 1 * C
= 4 - 4 * C
From here:
If discriminant < 0:
Z = ?, Normal = ?
Else:
t = (2 + discriminant^(1/2)) / 2
If t < 0 (hopefully never or always the case)
t = -t
Then:
Z: t
Nx: Xi - 1
Ny: Yi - 1
Nz: t - 1
Boiled farther still:
Intuitively it looks like C (X^2 + Y^2) and the square-root are the most prominent figures here. If I had a better recollection of my math (in particular, transformations on exponents of sums), then I'd bet I could derive this down to what Tom Zych gave you. Since I can't, I'll just leave it as above.
I'm in need of help solving an issue. The problem came up in one of my small robot experiments. The basic idea is that each little robot has the ability to approximate the distance from itself to an object; however, the approximation I'm getting is way too rough, and I'm hoping to calculate something more accurate.
So:
Input: A list of vertices (v_1, v_2, ... v_n), i.e. the robots
Output: The coordinates of the unknown vertex v_* (the object)
Each vertex v_1 to v_n has well-known coordinates (supplied by calling getX() and getY() on the vertex), and it's possible to get the approximate range to v_* by calling getApproximateDistance(v_*). This function returns two values, minDistance and maxDistance; the actual distance lies between these.
So what I've been trying to do to obtain the coordinates for v_* is to use trilateration; however, I can't seem to find a formula for doing trilateration with limits (lower and upper bound), so that's really what I'm looking for (I'm not really good enough at math to figure it out myself).
Note: is triangulation the way to go instead?
Note: I would also love to know a way to make performance/accuracy trade-offs.
An example of data:
Vertex | getX() | getY() | minDistance | maxDistance
v_1    | 2      | 2      | 0.5         | 1
v_2    | 1      | 2      | 0.3         | 1
v_3    | 1.5    | 1      | 0.3         | 0.5
Picture to show data: http://img52.imageshack.us/img52/6414/unavngivetcb.png
It's obvious that the approximation for v_1 can be better than [0.5; 1], as the figure that the above data creates is a small cut of an annulus (limited by v_3). However, how would I calculate that, and possibly find the approximation within that figure (the figure is possibly concave)?
Would this be better suited for MathOverflow?
I would go for a simple discrete approach. The implicit formula for an annulus is trivial, and the intersection of multiple annuli, even if there are many of them, can be computed somewhat efficiently with a scanline-based approach.
For high accuracy with a fast computation, an option could be a multiresolution approach (i.e. first starting in low-res and then recomputing in high-res only the samples that are close to a valid point).
A small python toy I wrote can generate a 400x400 pixel image of the intersection area in about 0.5 secs (this is the kind of computation that would get a 100x speedup if done with C).
# x, y, r0, r1
data = [(2.0, 2.0, 0.5, 1.0),
        (1.0, 2.0, 0.3, 1.0),
        (1.5, 1.0, 0.3, 0.5)]

x0 = max(x - r1 for x, y, r0, r1 in data)
y0 = max(y - r1 for x, y, r0, r1 in data)
x1 = min(x + r1 for x, y, r0, r1 in data)
y1 = min(y + r1 for x, y, r0, r1 in data)

def hit(x, y):
    for cx, cy, r0, r1 in data:
        if not (r0**2 <= ((x - cx)**2 + (y - cy)**2) <= r1**2):
            return False
    return True

res = 400
step = 16
white = chr(255)
grey = chr(192)
black = chr(0)
img = [black] * (res * res)

# Low-res pass
cells = {}
for i in xrange(0, res, step):
    y = y0 + i * (y1 - y0) / res
    for j in xrange(0, res, step):
        x = x0 + j * (x1 - x0) / res
        if hit(x, y):
            for h in xrange(-step*2, step*3, step):
                for v in xrange(-step*2, step*3, step):
                    cells[(i+v, j+h)] = True

# High-res pass
for i in xrange(0, res, step):
    for j in xrange(0, res, step):
        if cells.get((i, j), False):
            img[i * res + j] = grey
            img[(i + step - 1) * res + j] = grey
            img[(i + step - 1) * res + (j + step - 1)] = grey
            img[i * res + (j + step - 1)] = grey
            for v in xrange(step):
                y = y0 + (i + v) * (y1 - y0) / res
                for h in xrange(step):
                    x = x0 + (j + h) * (x1 - x0) / res
                    if hit(x, y):
                        img[(i + v)*res + (j + h)] = white

open("result.pgm", "wb").write(("P5\n%i %i 255\n" % (res, res)) +
                               "".join(img))
Another interesting option could be using a GPU if available. Starting from a white picture and drawing in black the exterior of each annulus will leave at the end the intersection area in white.
For example with Python/Qt the code for doing this computation is simply:
# assuming PyQt4; data, res, x0, x1, y0, y1 come from the previous snippet
from PyQt4.QtCore import Qt, QRectF
from PyQt4.QtGui import QImage, QPainter, QBrush, QColor, QPainterPath

img = QImage(res, res, QImage.Format_RGB32)
dc = QPainter(img)
dc.fillRect(0, 0, res, res, QBrush(QColor(255, 255, 255)))
dc.setPen(Qt.NoPen)
dc.setBrush(QBrush(QColor(0, 0, 0)))

for x, y, r0, r1 in data:
    xa1 = (x - r1 - x0) * res / (x1 - x0)
    xb1 = (x + r1 - x0) * res / (x1 - x0)
    ya1 = (y - r1 - y0) * res / (y1 - y0)
    yb1 = (y + r1 - y0) * res / (y1 - y0)
    xa0 = (x - r0 - x0) * res / (x1 - x0)
    xb0 = (x + r0 - x0) * res / (x1 - x0)
    ya0 = (y - r0 - y0) * res / (y1 - y0)
    yb0 = (y + r0 - y0) * res / (y1 - y0)
    p = QPainterPath()
    p.addEllipse(QRectF(xa0, ya0, xb0-xa0, yb0-ya0))
    p.addEllipse(QRectF(xa1, ya1, xb1-xa1, yb1-ya1))
    p.addRect(QRectF(0, 0, res, res))
    dc.drawPath(p)
and the computation part for an 800x800 resolution image takes about 8ms (and I'm not sure it's hardware accelerated).
If only the barycenter of the intersection is to be computed then there is no memory allocation at all. For example a "brute-force" approach is just a few lines of C
typedef struct TReading {
double x, y, r0, r1;
} Reading;
int hit(double xx, double yy,
Reading *readings, int num_readings)
{
while (num_readings--)
{
double dx = xx - readings->x;
double dy = yy - readings->y;
double d2 = dx*dx + dy*dy;
if (d2 < readings->r0 * readings->r0) return 0;
if (d2 > readings->r1 * readings->r1) return 0;
readings++;
}
return 1;
}
int computeLocation(Reading *readings, int num_readings,
int resolution,
double *result_x, double *result_y)
{
// Compute bounding box of interesting zone
double x0 = -1E20, y0 = -1E20, x1 = 1E20, y1 = 1E20;
for (int i=0; i<num_readings; i++)
{
if (readings[i].x - readings[i].r1 > x0)
x0 = readings[i].x - readings[i].r1;
if (readings[i].y - readings[i].r1 > y0)
y0 = readings[i].y - readings[i].r1;
if (readings[i].x + readings[i].r1 < x1)
x1 = readings[i].x + readings[i].r1;
if (readings[i].y + readings[i].r1 < y1)
y1 = readings[i].y + readings[i].r1;
}
// Scan processing
double ax = 0, ay = 0;
int total = 0;
for (int i=0; i<=resolution; i++)
{
double yy = y0 + i * (y1 - y0) / resolution;
for (int j=0; j<=resolution; j++)
{
double xx = x0 + j * (x1 - x0) / resolution;
if (hit(xx, yy, readings, num_readings))
{
ax += xx; ay += yy; total += 1;
}
}
}
if (total)
{
*result_x = ax / total;
*result_y = ay / total;
}
return total;
}
And on my PC it can compute the barycenter with resolution = 100 in 0.08 ms (x=1.50000, y=1.383250) or with resolution = 400 in 1.3 ms (x=1.500000, y=1.383308). Of course a double-step speedup could be implemented even for the barycenter-only version.
I would switch from "max/min" to trying to minimize an error function. That gets you to the problem discussed at Finding a point that best fits the intersection of n spheres which is more tractable than intersecting a series of complicated shapes. (And what if one robot's sensor is messed up and it gives an impossible value? That variation will still usually give a reasonable answer.)
Not sure about your case, but in a typical robotics application you're going to be reading sensors periodically and crunching the data. If that's the case, you're trying to estimate the location based on noisy data and that's a common problem. As a simple (less rigorous) method, you could take the existing position and adjust it toward or away from each known point. Take the measured distance to target minus the present distance to target, multiply that delta (error) by some value between 0 and 1, and move your estimated position that much toward the target. Repeat for each target. Then repeat each time you get a new set of measurements. The multiplier will have an effect like a low-pass filter, smaller values will give you a more stable position estimate with slower response to movement. For the distance, use the average of the min and max. If you can put tighter bounds on the range to one target, you can increase the multiplier closer to 1 for just that target.
This is of course a crude position estimator. The math guys can probably be more rigorous, but also more complicated. The solution definitely doesn't have anything to do with intersecting areas and working with geometric shapes.
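If it helps, here is a minimal Python sketch of that adjust-toward-each-target estimator (the names and the gain value are my own):
def update_estimate(est, readings, gain=0.2):
    """readings: list of (x, y, min_dist, max_dist) tuples, one per known point."""
    ex, ey = est
    for x, y, dmin, dmax in readings:
        target = (dmin + dmax) / 2.0          # use the average of min and max
        dx, dy = ex - x, ey - y
        current = (dx * dx + dy * dy) ** 0.5
        if current == 0:
            continue
        error = target - current              # positive: move away, negative: move closer
        ex += gain * error * dx / current
        ey += gain * error * dy / current
    return ex, ey

# call this every time a new set of measurements arrives
est = (0.0, 0.0)
readings = [(2, 2, 0.5, 1), (1, 2, 0.3, 1), (1.5, 1, 0.3, 0.5)]
for _ in range(200):
    est = update_estimate(est, readings)
print(est)  # a smoothed estimate of the object position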
Finding a good way to do this has stumped me for a while now: assume I have a selection box with a set of points in it. By dragging the corners you can scale the (distance between) points in the box. Now for an axis aligned box this is easy. Take a corner as an anchor point (subtract this corner from each point, scale it, then add it to the point again) and multiply each points x and y by the factor with which the box has gotten bigger.
But now take a box that is not aligned with the x and y axis. How do you scale the points inside this box when you drag its corners?
Any box is contained inside a circle.
You find the circle which bounds the box, find its center, and do exactly the same as you do with an axis-aligned box.
You pick one corner of the rectangle as the origin. The two edges connected to it will be the basis (u and v, which should be perpendicular to each other). You would need to normalize them first.
Subtract the origin from the coordinates and calculate the dot product with the scaling vector (u) and with the other vector (v). This gives you how much u and v contribute to the coordinate.
Then you scale the component you want. To get the final coordinate, you just multiply the (now scaled) components with their respective vectors, and add them together.
For example:
Points: p1 = (3,5) and p2 = (6,4)
Selection corners: (0,2),(8,0),(9,4),(1,6)
selected origin = (8,0)
u = ((0,2)-(8,0))/|(0,2)-(8,0)| = <-0.970, 0.242>
v = <-0.242, -0.970>
(v is u, but with flipped coordinates, and one of them negated)
p1´ = p1 - origin = (-5, 5)
p2´ = p2 - origin = (-2, 4)
p1_u = p1´ . u = -0.970 * (-5) + 0.242 * 5 = 6.063
p1_v = p1´ . v = -0.242 * (-5) - 0.970 * 5 = -3.638
Scale p1_u by 0.5: 3.032
p1_u * u + p1_v * v + origin = <5.941, 4.265>
Same for p2: <7.412, 3.647>
As you maybe can see, they have moved towards the line (8,0)-(9,4), since we scaled by 0.5, with (8,0) as the origin.
Edit: This turned out to be a little harder to explain than I anticipated.
In python code, it could look something like this:
def scale(points, origin, u, scale):
    # normalize
    len_u = (u[0]**2 + u[1]**2) ** 0.5
    u = (u[0]/len_u, u[1]/len_u)
    # create v
    v = (-u[1], u[0])
    ret = []
    for x, y in points:
        # subtract origin
        x, y = x - origin[0], y - origin[1]
        # calculate dot product
        pu = x * u[0] + y * u[1]
        pv = x * v[0] + y * v[1]
        # scale
        pu = pu * scale
        # transform back to normal space
        x = pu * u[0] + pv * v[0] + origin[0]
        y = pu * u[1] + pv * v[1] + origin[1]
        ret.append((x, y))
    return ret
>>> scale([(3,5),(6,4)],(8,0),(-8,2),0.5)
[(5.9411764705882355, 4.2647058823529411), (7.4117647058823533, 3.6470588235294117)]
Let's say that the box is defined as a set of four points (P1, P2, P3 and P4).
For the sake of simplicity, we'll say you are dragging P1, and that P3 is the opposite corner (the one you are using as an anchor).
Let's label the mouse position as M, and the new points you wish to calculate as N1, N2 and N4. P3 will, of course, remain the same.
Your scaling factor can be simply computed using vector subtraction and the vector dot product:
scale = ((M - P3) dot (P1 - P3)) / ((P1 - P3) dot (P1 - P3))
And the three new points can be found using scalar multiplication and vector addition:
N1 = scale*P1 + (1 - scale)*P3
N2 = scale*P2 + (1 - scale)*P3
N4 = scale*P4 + (1 - scale)*P3
edit: I see that MizardX has answered the question already, so my answer is here to help with that difficult explanation. I hope it helps!
edit: here is the algorithm for non-proportional scaling. In this case, N1 is equal to M (the point being dragged follows the mouse), so the only points of interest are N2 and N4:
N2 = ((M - P3) dot (P2 - P3)) / ((P2 - P3) dot (P2 - P3)) * (P2 - P3) + P3
N4 = ((M - P3) dot (P4 - P3)) / ((P4 - P3) dot (P4 - P3)) * (P4 - P3) + P3
where * represents scalar multiplication
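A short Python sketch of both variants (proportional and non-proportional) built from those formulas; function names are mine:
def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1])

def lerp(p, q, t):                        # t*p + (1 - t)*q
    return (t * p[0] + (1 - t) * q[0], t * p[1] + (1 - t) * q[1])

def drag_proportional(M, P1, P2, P3, P4):
    scale = dot(sub(M, P3), sub(P1, P3)) / dot(sub(P1, P3), sub(P1, P3))
    return lerp(P1, P3, scale), lerp(P2, P3, scale), lerp(P4, P3, scale)   # N1, N2, N4

def drag_free(M, P2, P3, P4):
    def project(P):                       # project M - P3 onto the edge P - P3
        t = dot(sub(M, P3), sub(P, P3)) / dot(sub(P, P3), sub(P, P3))
        return (P3[0] + t * (P[0] - P3[0]), P3[1] + t * (P[1] - P3[1]))
    return M, project(P2), project(P4)    # N1 (= M), N2, N4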
edit: Here is some C++ code which answers the question. I'm sure this question is long-dead by now, but it was an interesting problem, and I had some fun writing the code.
#include <vector>
class Point
{
public:
float x;
float y;
Point() { x = y = 0; }
Point(float nx, float ny) { x = nx; y = ny; }
};
Point operator-(const Point& A, const Point& B) { return Point(A.x-B.x, A.y-B.y); }
Point operator+(const Point& A, const Point& B) { return Point(A.x+B.x, A.y+B.y); }
Point operator*(float sc, const Point& P) { return Point(sc*P.x, sc*P.y); }
float dot_product(Point A, Point B) { return A.x*B.x + A.y*B.y; }
struct Rect { Point point[4]; };
void scale_points(const Rect& box, int anchor, Point mouse, std::vector<Point>& points)
{
    const Point& P3 = box.point[anchor];
    const Point& P2 = box.point[(anchor + 1)%4];
    const Point& P4 = box.point[(anchor + 3)%4];
    Point A = P4 - P3;                 // one edge of the box, from the anchor
    Point B = P2 - P3;                 // the other edge, from the anchor
    // scaled versions of the two edges, driven by the mouse position
    Point aFactor = dot_product(mouse - P3, A) / dot_product(A, A) * A;
    Point bFactor = dot_product(mouse - P3, B) / dot_product(B, B) * B;
    for (size_t i = 0; i < points.size(); i++)
    {
        Point P = points[i] - P3;
        // decompose P in the (A, B) basis and rebuild it from the scaled edges
        points[i] = P3 + dot_product(P, A) / dot_product(A, A) * aFactor
                       + dot_product(P, B) / dot_product(B, B) * bFactor;
}
}