I am attempting to find the closest point on a finite plane that is defined by 3 points in 3D space, with edges perpendicular and parallel to one another. A fourth point (p) is the point I am calculating the distance from.
Currently, I am projecting the point onto the 'infinite' plane defined by the normal of the 3 points and testing whether the projected point is within the bounds of the finite plane. If it is not, I calculate the closest point on each edge and select the minimum. If it is within the bounds of the plane, I just use the distance given by the plane equation.
At the moment, my calculations for testing that the projected point is within the bounds of the plane are off.
To check that the correct calculations are being done elsewhere, I am also testing for the closest point on each edge of the plane to the point in question. I have a 'useInfinite' variable that I am using to verify that the 'infinite' plane and the distance from it have been calculated properly (which also appears fine).
This is currently being written in BlinkScript, with positional data that has been rendered into an image for testing.
I have attached an image that shows the lines where the plane bounds are supposed to be. The shaded area is what is currently being calculated as the bounds of the plane.
Is there a better, more accurate, or more appropriate way (under these conditions) to test the distance to a finite plane, as I can't figure out what I have done incorrectly with these equations?
/*
  a, b, c: points on the plane (edges ba and ca are perpendicular)
  p: point to calculate the distance from the plane
  useInfinite, expand: parameters defined elsewhere in the kernel
*/
float distanceFromPlane(float3 a, float3 b, float3 c, float3 p) {
  float3 ba = b - a;
  float3 ca = c - a;

  // distance from the infinite plane
  float3 N = cross(ba, ca);
  float3 n = normalize(N);
  float3 v = a - p;
  float dist = fabs(dot(v, n));

  // if bounded plane
  if (!useInfinite) {
    // closest coplanar point to p
    float3 pl_cp = p - (dist * n);
    float3 pa = a - pl_cp;
    float l = dot(a, ba);
    float m = dot(pa, ba);
    float o = dot(b, ba);
    float q = dot(a, ca);
    float r = dot(pa, ca);
    float s = dot(c, ca);
    // test if the coplanar point is in bounds
    if (!((l <= m && m <= o) && (q <= r && r <= s))) {
      /* draw lines to test that the shaded area is in bounds */
      // distance to the closest edge
      float3 d = c + b - a; // 4th corner of the rectangle
      // closest point on each edge
      float labp = length(p - closestPoint(a, b, p));
      float lacp = length(p - closestPoint(a, c, p));
      float lbdp = length(p - closestPoint(b, d, p));
      float lcdp = length(p - closestPoint(c, d, p));
      // find the minimum closest edge point
      dist = min(labp, lacp);
      dist = min(dist, lbdp);
      dist = min(dist, lcdp);
    }
  }
  dist -= expand;
  if (dist < 0.001f) dist = 0.001f;
  return dist;
}
(Image: render showing the distance calculations and the incorrectly computed bounds of the finite plane; the drawn box of lines is the correct location.)
What do scalar products such as l = dot(a, ba), o, q, and s mean?
As far as I understand, abc is a triangle with a right angle at a, and you want to determine whether pl_cp lies inside the rectangle with corners a, b, c, and d.
The calculation of pl_cp is correct. You can then represent the vector pa as a linear combination of the basis vectors ba and ca:
Edit:
I have used pa = pl_cp - a to make it consistent with ba and ca
pa = u * ba + v * ca
u = dot(pa, ba) / dot(ba, ba)
v = dot(pa, ca) / dot(ca, ca)
If the u and v values both lie in the range 0..1, then the point pl_cp is inside the rectangle.
If you actually need the closest edge, it can be found using the u and v values (to compare distances, multiply by the lengths of ba and ca).
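Putting the u/v test together, here is a minimal sketch in C++ (the Vec3 type and its helpers are hypothetical stand-ins for BlinkScript's float3, operator-, and dot):

struct Vec3 { float x, y, z; };
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// true if pl_cp lies inside the rectangle spanned by the edges
// ba = b - a and ca = c - a from the corner a
bool insideRect(Vec3 a, Vec3 b, Vec3 c, Vec3 pl_cp) {
    Vec3 ba = b - a;
    Vec3 ca = c - a;
    Vec3 pa = pl_cp - a;                  // note: pl_cp - a, not a - pl_cp
    float u = dot(pa, ba) / dot(ba, ba);  // coordinate along ba
    float v = dot(pa, ca) / dot(ca, ca);  // coordinate along ca
    return 0.0f <= u && u <= 1.0f && 0.0f <= v && v <= 1.0f;
}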
Related
I'm trying to write a function that returns true if a ray intersects a sphere, and the code I'm referencing goes something like this:
// given Sphere and Ray as arguments
invert the Sphere matrix
make new Ray object
origin of this object = old Ray origin * inverted Sphere matrix
direction = old Ray direction * inverted Sphere matrix
a = |new direction| ^ 2
b = dot product of new origin and new direction
c = |new origin| ^ 2 - 1
det = b*b - a*c
if det > 0 there is an intersection
I'm stuck on understanding why we need to invert the sphere matrix first and then multiply the ray's origin and direction by it. I'm also confused about how to derive the quadratic equation variables a, b, and c at the end. I know I have to combine the parametric equation for a ray (p + td) with the implicit equation for a unit sphere (x · x - 1 = 0), but I can't figure out how to do so.
You need to invert the sphere matrix to get the ray into the sphere's coordinate frame, which, if the sphere is not scaled, is the same as simply setting new_origin = origin - sphere_center (and using the original direction).
The equation is formed from the formula:
|new_dir*t + new_origin|^2 = r^2 (presumably r is 1)
If you expand it, you get:
|new_dir|^2*t^2 + 2*(new_origin·new_dir)*t + |new_origin|^2-r^2 = 0
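For concreteness, here is a minimal C++ sketch of that discriminant test, assuming the ray has already been transformed into the sphere's frame (the Vec3 type is an assumed stand-in). Note that b here is half the usual quadratic coefficient, which is why the discriminant is b*b - a*c rather than b*b - 4*a*c:

struct Vec3 { double x, y, z; };
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// o, d: ray origin and direction in the sphere's frame; r: sphere radius
bool raySphereIntersects(Vec3 o, Vec3 d, double r) {
    double a = dot(d, d);          // |new_dir|^2
    double b = dot(o, d);          // new_origin . new_dir (the half-b)
    double c = dot(o, o) - r * r;  // |new_origin|^2 - r^2
    return b * b - a * c > 0.0;    // two real roots: the ray's line crosses the sphere
}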
Given n images and a projection matrix for each image, how can I calculate the ray (line) emitted by each pixel of the images, which intersects one of the three planes of the real-world coordinate system? The object captured by the camera is at the same position; only the camera's position is different for each image. That's why there is a separate projection matrix for each image.
As far as my research suggests, this is the inverse of the 3D-to-2D projection. Since information is lost when projecting to 2D, it's only possible to calculate the ray (line) in the real-world coordinate system, which is fine.
An example projection matrix P, calculated from the given K, R, and t components according to P = K*[R|t]:
K = [ 3310.400000    0.000000  316.730000
         0.000000 3325.500000  200.550000
         0.000000    0.000000    1.000000 ]

R = [ -0.14396457836077139000  0.96965263281337499000  0.19760617153779569000
      -0.90366580603479685000 -0.04743335255026152200 -0.42560419233334673000
      -0.40331536459778505000 -0.23984130575212276000  0.88306936201487163000 ]

t = [ -0.010415508744
      -0.0294278883669
       0.673097816109 ]

P = [  -604.322  3133.973   933.850  178.711
      -3086.026  -205.840 -1238.247   37.127
         -0.403    -0.240     0.883    0.673 ]
I am using the "DinoSparseRing" data set available at http://vision.middlebury.edu/mview/data
for (int i = 0; i < 16; i++) {
    RealMatrix rotationMatrix = MatrixUtils.createRealMatrix(rotationMatrices[i]);
    RealVector translationVector = MatrixUtils.createRealVector(translationMatrices[i]);
    // construct the projection matrix according to K*[R|t]
    RealMatrix projMatrix = getP(kalibrationMatrices[i], rotationMatrices[i], translationMatrices[i]);
    // getM returns the first 3x3 block of the 3x4 projection matrix
    RealMatrix projMInverse = MatrixUtils.inverse(getM(projMatrix));
    // compute the camera center: c = -R^T * t
    RealVector c = rotationMatrix.transpose().scalarMultiply(-1.f).operate(translationVector);
    // compute the unprojected point and direction vector for each pixel
    for (int m = 0; m < image_m_num_pixel; m++) {
        for (int n = 0; n < image_n_num_pixel; n++) {
            double[] projectedPoint = new double[]{n, m, 1};
            // undo the perspective divide
            projectedPoint[0] *= projectedPoint[2];
            projectedPoint[1] *= projectedPoint[2];
            // undo the projection by multiplying by the inverse
            RealVector projectedPointVector = MatrixUtils.createRealVector(projectedPoint);
            RealVector unprojectedPointVector = projMInverse.operate(projectedPointVector);
            // compute the direction vector
            RealVector directionVector = unprojectedPointVector.subtract(c);
            // normalize the direction vector
            double dist = Math.sqrt((directionVector.getEntry(0) * directionVector.getEntry(0))
                    + (directionVector.getEntry(1) * directionVector.getEntry(1))
                    + (directionVector.getEntry(2) * directionVector.getEntry(2)));
            directionVector.setEntry(0, directionVector.getEntry(0) * (1.0 / dist));
            directionVector.setEntry(1, directionVector.getEntry(1) * (1.0 / dist));
            directionVector.setEntry(2, directionVector.getEntry(2) * (1.0 / dist));
        }
    }
}
The following two plots show the outer rays for each image (16 images in total). The blue end is the camera point, and the cyan box is a bounding box containing the object captured by the camera. One can clearly see the rays projecting back to the object in the world coordinate system.
To define the ray you need a start point (which is the camera/eye position) and a direction vector, which can be calculated using any point on the ray.
For a given pixel in the image, you have a projected X and Y (zeroed at the center of the image) but no Z depth value. However, the real-world coordinates corresponding to all possible depth values for that pixel all lie on the ray you are trying to calculate, so you can just choose an arbitrary non-zero Z value, since any point on the ray will do.
float projectedX = (x - imageCenterX) / (imageWidth * 0.5f);
float projectedY = (y - imageCenterY) / (imageHeight * 0.5f);
float projectedZ = 1.0f; // any arbitrary value
Now that you have a 3D projected co-ordinate you can undo the projection by applying the perspective divide in reverse by multiplying X and Y by Z, then multiplying the result by the inverse projection matrix to get the unprojected point.
// undo perspective divide (redundant if projectedZ = 1, shown for completeness)
projectedX *= projectedZ;
projectedY *= projectedZ;
Vector3 projectedPoint = new Vector3(projectedX, projectedY, projectedZ);
// undo projection by multiplying by inverse:
Matrix invProjectionMat = projectionMat.inverse();
Vector3 unprojectedPoint = invProjectionMat.multiply(projectedPoint);
Subtract the camera position from the unprojected point to get the direction vector from the camera to the point, and then normalize it. (This step assumes that the projection matrix defines both the camera position and orientation; if the position is stored separately, then you don't need to do the subtraction.)
Vector3 directionVector = unprojectedPoint.subtract(cameraPosition);
directionVector.normalize();
The ray is defined by the camera position and the normalized direction vector. You can then intersect it with any of the X, Y, Z planes.
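As a concrete example of that last step, here is a minimal C++ sketch (Vec3 is an assumed stand-in for whatever vector type you use) that intersects the ray with the z = 0 coordinate plane; the x = 0 and y = 0 planes work the same way with the axes swapped:

struct Vec3 { double x, y, z; };

// returns true and writes the hit point if the ray reaches the plane z = 0
bool intersectPlaneZ0(Vec3 origin, Vec3 dir, Vec3 &hit) {
    if (dir.z == 0.0) return false;  // ray parallel to the plane
    double t = -origin.z / dir.z;    // solve origin.z + t * dir.z = 0
    if (t < 0.0) return false;       // plane lies behind the ray origin
    hit = { origin.x + t * dir.x, origin.y + t * dir.y, 0.0 };
    return true;
}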
So I'm trying to write code that checks whether a ray intersects a flat circular disk, and I was hoping to get it checked out here. My disk is always centered on the negative z-axis, so its normal vector should be (0, 0, -1).
The way I'm doing it is to first calculate the ray-plane intersection and then determine whether that intersection point is within the "scope" of the disk.
In my code I am getting some numbers that seem off, and I am not sure if the problem is in this method or possibly somewhere else. So if there is something wrong with this code, I would appreciate your feedback! =)
Here is my code:
float d = z_intercept; //This is where disk intersects z-axis. Can be + or -.
ray->d = Normalize(ray->d);
Point p(0, 0, d); //This is the center point of the disk
Point p0(0, 1, d);
Point p1(1, 0, d);
Vector n = Normalize(Cross(p0-p, p1-p));//Calculate normal
float diameter = DISK_DIAMETER; //Constant value
float t = (-d-Dot(p-ray->o, n))/Dot(ray->d, n); //Calculate the plane intersection
Point intersection = ray->o + t*ray->d;
return (Distance(p, intersection) <= diameter/2.0f); //See if within disk
//This is my code to calculate distance
float RealisticCamera::Distance(Point p, Point i)
{
    return sqrt((p.x-i.x)*(p.x-i.x) + (p.y-i.y)*(p.y-i.y) + (p.z-i.z)*(p.z-i.z));
}
"My disk is always centered on the negative z axis so its normal vector should be (0,0, -1)."
This fact simplifies calculations.
Degenerate case: ray->d.z = 0 -> if ray->o.z = d then the ray lies in the disk plane (check it as a 2D problem); otherwise the ray is parallel to the plane and there is no intersection.
Common case: t = (d - ray->o.z) / ray->d.z
If t has a positive value, find x and y for this t, and check x^2 + y^2 <= disk_radius^2.
The calculation of t is wrong.
Points on the ray are:
ray->o + t * ray->d
in particular, the z coordinate of a point on the ray is:
ray->o.z() + t * ray->d.z()
which must be equal to d. Solving for t gives
t = ( d - ray->o.z() ) / ray->d.z()
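Combining the two answers above, here is a minimal C++ sketch of the simplified test for a horizontal disk (Vec3 and the field names are assumed stand-ins for the Point/Vector/Ray types in the question):

struct Vec3 { float x, y, z; };

// o, d: ray origin and direction; the disk sits in the plane z = zIntercept
bool rayHitsDisk(Vec3 o, Vec3 d, float zIntercept, float radius) {
    if (d.z == 0.0f) return false;       // parallel ray (or the in-plane degenerate case)
    float t = (zIntercept - o.z) / d.z;  // the corrected t
    if (t < 0.0f) return false;          // disk is behind the ray origin
    float x = o.x + t * d.x;
    float y = o.y + t * d.y;
    return x * x + y * y <= radius * radius;  // inside the disk radius?
}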
Let's say I have a point (x, y, z) and a plane with point (a, b, c) and normal (d, e, f). I want to find the point that is the result of the orthogonal projection of the first point onto the plane. I am using this in 3D graphics programming; I want to achieve a sort of clipping onto the plane.
The projection of a point q = (x, y, z) onto a plane given by a point p = (a, b, c) and a normal n = (d, e, f) is
q_proj = q - dot(q - p, n) * n
This calculation assumes that n is a unit vector.
I've implemented this function in Qt using QVector3D:
QVector3D getPointProjectionInPlane(QVector3D point, QVector3D planePoint, QVector3D planeNormal)
{
    // q_proj = q - dot(q - p, n) * n
    QVector3D normalizedPlaneNormal = planeNormal.normalized();
    QVector3D pointProjection = point - QVector3D::dotProduct(point - planePoint, normalizedPlaneNormal) * normalizedPlaneNormal;
    return pointProjection;
}
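A quick hypothetical usage, projecting a point onto the z = 0 plane through the origin:

// project (1, 2, 3) onto the plane through the origin with normal +z;
// the expected result is (1, 2, 0)
QVector3D projected = getPointProjectionInPlane(
    QVector3D(1.0f, 2.0f, 3.0f),    // point q
    QVector3D(0.0f, 0.0f, 0.0f),    // plane point p
    QVector3D(0.0f, 0.0f, 1.0f));   // plane normal n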
Given curves of type Circle and Circular-Arc in 3D space, what is a good way to compute accurate bounding boxes (world axis aligned)?
Edit: found a solution for circles; still need help with arcs.
C# snippet for solving BoundingBoxes for Circles:
public static BoundingBox CircleBBox(Circle circle)
{
    Point3d O = circle.Center;
    Vector3d N = circle.Normal;
    double ax = Angle(N, new Vector3d(1, 0, 0));
    double ay = Angle(N, new Vector3d(0, 1, 0));
    double az = Angle(N, new Vector3d(0, 0, 1));
    Vector3d R = new Vector3d(Math.Sin(ax), Math.Sin(ay), Math.Sin(az));
    R *= circle.Radius;
    return new BoundingBox(O - R, O + R);
}

private static double Angle(Vector3d A, Vector3d B)
{
    double dP = A * B;
    if (dP <= -1.0) { return Math.PI; }
    if (dP >= +1.0) { return 0.0; }
    return Math.Acos(dP);
}
One thing that's not specified is how you convert that angle range to points in space. So we'll start there and assume that the angle 0 maps to O + r·X and the angle π/2 maps to O + r·Y, where O is the center of the circle and X = (x1, x2, x3) and Y = (y1, y2, y3) are unit vectors.
So the circle is swept out by the function
P(θ) = O + r·cos(θ)·X + r·sin(θ)·Y
where θ is in the closed interval [θstart, θend].
The derivative of P is
P'(θ) = -r·sin(θ)·X + r·cos(θ)·Y
For the purpose of computing a bounding box we're interested in the points where one of the coordinates reaches an extremal value, hence points where one of the coordinates of P' is zero.
Setting -r·sin(θ)·xi + r·cos(θ)·yi = 0, we get
tan(θ) = sin(θ)/cos(θ) = yi/xi.
So we're looking for θ where θ = arctan(yi/xi), for i in {1, 2, 3}.
You have to watch out for the details of the range of arctan(), for avoiding divide-by-zero, and for the fact that if θ is a solution then so is θ ± k·π; I'll leave those details to you.
All you have to do is find the set of θ values corresponding to extremal values in your angle range, compute the bounding box of their corresponding points on the circle, and you're done. It's possible that there are no extremal values in the angle range, in which case you compute the bounding box of the points corresponding to θstart and θend. In fact, you may as well initialize your solution set of θ's with those two values so you don't have to special-case it.
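Put together, here is a minimal C++ sketch of this recipe, assuming θstart ≤ θend with a range no wider than 2π (Vec3 and the parameter layout are assumptions; atan2 sidesteps the arctan range and divide-by-zero caveats):

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

// P(theta) = O + r*cos(theta)*X + r*sin(theta)*Y
static Vec3 pointAt(Vec3 O, Vec3 X, Vec3 Y, double r, double th) {
    double c = std::cos(th), s = std::sin(th);
    return { O.x + r * (c * X.x + s * Y.x),
             O.y + r * (c * X.y + s * Y.y),
             O.z + r * (c * X.z + s * Y.z) };
}

void arcBBox(Vec3 O, Vec3 X, Vec3 Y, double r,
             double thStart, double thEnd, Vec3 &lo, Vec3 &hi) {
    const double PI = 3.14159265358979323846;
    // always include the arc endpoints, so the no-extremum case needs no special handling
    std::vector<double> thetas = { thStart, thEnd };
    const double xs[3] = { X.x, X.y, X.z };
    const double ys[3] = { Y.x, Y.y, Y.z };
    for (int i = 0; i < 3; ++i) {
        double th = std::atan2(ys[i], xs[i]);   // tan(th) = yi/xi, result in (-pi, pi]
        for (int k = -2; k <= 2; ++k) {         // the th +/- k*pi copies
            double cand = th + k * PI;
            if (cand >= thStart && cand <= thEnd) thetas.push_back(cand);
        }
    }
    lo = hi = pointAt(O, X, Y, r, thetas[0]);
    for (double th : thetas) {
        Vec3 p = pointAt(O, X, Y, r, th);
        lo = { std::min(lo.x, p.x), std::min(lo.y, p.y), std::min(lo.z, p.z) };
        hi = { std::max(hi.x, p.x), std::max(hi.y, p.y), std::max(hi.z, p.z) };
    }
}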