I have a polygon defined by n points and a polygon normal.
Now I want to get the plane of the polygon defined by
a plane normal=(nx,ny,nz)
and a constant d (distance from the origin to the plane).
The plane normal is equal to the polygon normal, but how can I calculate d?
Desired plane equation: nx*x + ny*y + nz*z + d = 0.0
Take any point p=(px, py, pz) on the plane and plug it into the equation to obtain d.
So if your equation is
nx·x + ny·y + nz·z + d = 0
then you get
d = − (nx·px + ny·py + nz·pz).
Another common formulation uses d as the right-hand side of the equation, in which case the sign is reversed, i.e. for the equation
nx·x + ny·y + nz·z = d
you get
d = nx·px + ny·py + nz·pz.
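For example, a minimal sketch in C++ (a hypothetical Vec3 struct; adapt to whatever vector type your polygon actually uses):

// Hypothetical minimal vector type; adapt to whatever your polygon uses.
struct Vec3 { double x, y, z; };

// d for the convention nx*x + ny*y + nz*z + d = 0
double planeD(const Vec3& n, const Vec3& p)
{
    return -(n.x * p.x + n.y * p.y + n.z * p.z);
}
// For the convention nx*x + ny*y + nz*z = d, drop the minus sign.

Any vertex of the polygon works as p; if the polygon is not perfectly planar, averaging d over all vertices is one reasonable choice.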
I'm trying to write a function that returns true if a ray intersects a sphere and the code I'm referencing goes something like this:
// given Sphere and Ray as arguments
invert the Sphere matrix
make new Ray object
origin of this object = old Ray origin * inverted Sphere matrix
direction = old Ray direction * inverted Sphere matrix
a = |new direction| ^ 2
b = dot product of new origin and new direction
c = |new origin| ^ 2 - 1
det = b*b - a*c
if det > 0 there is an intersection
I'm stuck on understanding why we need to invert the Sphere matrix first and then multiply the Ray's origin and direction by it. I'm also confused about how to derive the quadratic equation variables a, b, and c at the end. I know I have to combine the parametric equation of a ray (p + td) with the implicit equation of a unit sphere (x dot x - 1 = 0), but I can't figure out how to do so.
You need to invert the sphere matrix to bring the ray into the sphere's coordinate frame, which, if the sphere is not scaled, is the same as simply setting new_origin = origin - sphere_center (and using the original direction).
The quadratic is formed from the equation:
|new_dir*t + new_origin|^2 = r^2 (presumably r is 1)
If you expand it, you get:
|new_dir|^2*t^2 + 2*(new_origin·new_dir)*t + |new_origin|^2-r^2 = 0
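As a rough sketch of the whole test (hypothetical types; it assumes the ray has already been transformed into the sphere's local frame, where the sphere is a unit sphere at the origin):

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// origin and dir are the ray already expressed in the sphere's local frame.
// Returns true if the ray's line hits the unit sphere.
bool hitsUnitSphere(const Vec3& origin, const Vec3& dir)
{
    double a = dot(dir, dir);            // |new direction|^2
    double b = dot(origin, dir);         // half of the usual 2*(origin . dir) term
    double c = dot(origin, origin) - 1;  // |new origin|^2 - r^2, with r = 1
    double det = b*b - a*c;              // reduced discriminant of a*t^2 + 2*b*t + c = 0
    return det > 0.0;                    // > 0: two hit points, = 0: tangent
}

If you also need the hit parameter, it is t = (-b ± sqrt(det)) / a with this halved b.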
I have a point P(x, y, z) in 3D and a view plane Ax + By + Cz + d = 0. A point on the plane is E. Now I want to project that 3D point onto the plane and get the 2D coordinates of the projected point relative to the point E.
P(x, y, z) = the 3D point I want to project onto the plane.
Plane: Ax + By + Cz + d = 0, so the normal is n = (A, B, C).
E(ex, ey, ez) = a point on the plane (the eye position of the camera).
What I am doing right now is finding the nearest point on the plane to P and then subtracting E from that point. Is this right?
Please help me. Thanks.
The closest point is along the normal to the plane. So define a point Q that is offset from P along that normal.
Q = P - n*t
Then solve for t that puts Q in the plane:
dot(Q,n) + d = 0
dot(P-n*t,n) + d = 0
dot(P,n) - t*dot(n,n) = -d
t = (dot(P,n)+d)/dot(n,n)
Where dot((x1,y1,z1),(x2,y2,z2)) = x1*x2 + y1*y2 + z1*z2
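Putting that together, a small sketch (hypothetical Vec3 struct; the plane is given by its normal n and constant d as above):

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Closest point Q on the plane dot(X, n) + d = 0 to the point P.
Vec3 closestPointOnPlane(const Vec3& P, const Vec3& n, double d)
{
    double t = (dot(P, n) + d) / dot(n, n);
    return { P.x - n.x * t, P.y - n.y * t, P.z - n.z * t };
}

Subtracting E from the result then gives the offset of the projected point relative to E, but it is still a 3D vector; getting true 2D coordinates needs two axes in the plane, as the next answer explains.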
You get a point on the plane as p0 = (0, 0, -d/C), assuming C is not zero. I assume the normal has unit length.
The component of p - p0 along n is dot(p - p0, n) * n, so the projection of p onto the plane is p - dot(p - p0, n) * n.
If you want 2D coordinates on the plane, you have to provide a basis/coordinate system, e.g. two linearly independent vectors that span the plane. The coordinates depend on these basis vectors.
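One possible sketch of such a basis and the resulting 2D coordinates (hypothetical types; it assumes n has unit length and is not parallel to the x axis, which is used to seed the basis):

#include <cmath>

struct Vec3 { double x, y, z; };
struct Vec2 { double x, y; };

double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
Vec3 normalize(const Vec3& a)
{
    double len = std::sqrt(dot(a, a));
    return { a.x/len, a.y/len, a.z/len };
}

// 2D coordinates of the projected point Pproj relative to E, in the plane with unit normal n.
Vec2 planeCoords(const Vec3& Pproj, const Vec3& E, const Vec3& n)
{
    Vec3 seed = { 1, 0, 0 };            // assumption: n is not parallel to the x axis
    Vec3 u = normalize(cross(n, seed)); // first in-plane axis
    Vec3 v = cross(n, u);               // second in-plane axis, already unit length
    Vec3 rel = sub(Pproj, E);
    return { dot(rel, u), dot(rel, v) };
}

A different seed vector gives a different (equally valid) coordinate system on the same plane.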
I'm trying to calculate the intersection of two tangents to one circle. I tried using a raycasting-style function to do it but can't get any reliable results. This picture should help explain:
I've googled and searched Stack Overflow but can't find anything similar to this problem. Any help?
Well, if your variables are:
C = (cx, cy) - Circle center
A = (x1, y1) - Tangent point 1
B = (x2, y2) - Tangent point 2
The lines from the circle center to the two points A and B are CA = A - C and CB = B - C respectively.
You know that a tangent is perpendicular to the line from the center. In 2D, to get a line perpendicular to a vector (x, y) you just take (y, -x) (or (-y, x))
So your two (parametric) tangent lines are:
L1(u) = A + u * (CA.y, -CA.x)
= (A.x + u * CA.y, A.y - u * CA.x)
L2(v) = B + v * (CB.y, -CB.x)
= (B.x + v * CB.y, B.y - v * CB.x)
Then to calculate the intersection of two lines you just need to use standard intersection tests.
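As a sketch of that last step (hypothetical Vec2 struct; it solves L1(u) = L2(v) with Cramer's rule and bails out when the tangents are parallel):

#include <cmath>
#include <optional>

struct Vec2 { double x, y; };

// Intersection of L1(u) = A + u*d1 and L2(v) = B + v*d2, if the lines are not parallel.
std::optional<Vec2> intersectLines(Vec2 A, Vec2 d1, Vec2 B, Vec2 d2)
{
    double det = d1.x * (-d2.y) - (-d2.x) * d1.y;    // determinant of the 2x2 system [d1 -d2]
    if (std::fabs(det) < 1e-12) return std::nullopt; // parallel tangents, no single intersection
    double u = ((B.x - A.x) * (-d2.y) - (-d2.x) * (B.y - A.y)) / det;
    return Vec2{ A.x + u * d1.x, A.y + u * d1.y };
}

// Usage with the values above:
//   intersectLines(A, { CA.y, -CA.x }, B, { CB.y, -CB.x })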
The answer by Peter Alexander assumes that you know the center of the circle, which is not obvious from your figure http://oi54.tinypic.com/e6y62f.jpg.
Here is a solution without knowing the center:
The point C (in your figure) is the intersection of the tangent at A(x, y) with the line L that is perpendicular to AB and bisects it. A parametric equation for the line L can be derived as follows:
The midpoint of AB is M = ((x+x2)/2, (y+y2)/2), where B = (x2, y2). The vector perpendicular to AB is N = (y2-y, x-x2). The vector equation of the line L is hence
L(t) = M + t N, where t is a real number.
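A brief sketch of that construction (hypothetical Vec2 struct; the tangent direction T at A is assumed to be known, e.g. read off your figure, since this answer does not derive it):

struct Vec2 { double x, y; };

// C = intersection of the tangent through A (direction T, assumed known)
// with the perpendicular bisector L(t) = M + t*N of the chord AB.
// Assumes the tangent is not parallel to the bisector.
Vec2 tangentIntersection(Vec2 A, Vec2 T, Vec2 B)
{
    Vec2 M = { (A.x + B.x) / 2, (A.y + B.y) / 2 };  // midpoint of AB
    Vec2 N = { B.y - A.y, A.x - B.x };              // perpendicular to AB
    // Solve A + s*T = M + t*N for s (Cramer's rule on [T -N][s; t] = M - A).
    double det = T.x * (-N.y) - (-N.x) * T.y;
    double s = ((M.x - A.x) * (-N.y) - (-N.x) * (M.y - A.y)) / det;
    return { A.x + s * T.x, A.y + s * T.y };
}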
I have the center (xyz - in 3 dimensional space) and the radius of two spheres A and B.
Now I have to figure out a point or more than 1 point where these spheres meet. It is fairly easy to figure out if the two spheres collide or not, but how do I find out the points of intersection of 2 spheres?
Any help would be greatly appreciated.
The curve where they intersect is a circle. The equation for the radius of the circle is a bit complicated, but is shown here, in eqn. 8, and the distance of the circle from the center of one of the spheres is shown in eqn. 5.
If the radius of the smaller sphere is A, and the bigger is B, and their centers are D units apart, then the points of intersection are on a circle of radius r centered on a point directly between the centers of the two spheres, which is y units from the center of the bigger sphere, and x units from the center of the other, where
y = 1/2 (D + (B^2 - A^2)/D)
and
x = 1/2 (D - (B^2 - A^2)/D)
with radius
r = sqrt(B^2 - y^2) = sqrt(A^2 - x^2)
If you need the equation for this circle, the best way is to represent it as a set of three parameterized equations, where the x, y, and z coordinates are each expressed as a function of some t, which represents the angle of the radius vector as it travels around the circle once, from zero to 2*PI...
To construct these equations, think about expressing the point which is the radius r from the center, on the 2D plane which is normal to the line between the two spheres.
The derivation is as follows: draw a line between the centers of the two spheres and label it D.
Designate a point on this line as the center of the final solution circle; label it point O.
Label the smaller portion of D as x, and the larger portion as y.
Draw a line from O perpendicular to D, of some length r, to represent the radius of the solution circle.
Label the end of this radius as Q.
Now draw B from the center of the larger sphere to Q, and A from the center of the smaller sphere to Q.
From Pythagoras:
B^2 = y^2 + r^2 and A^2 = x^2 + r^2
so, after eliminating r and a bit of algebra,
y - x = (B^2 - A^2) / (x+y)
But x+y = D so,
y - x = (B^2 - A^2) / D
Adding the equation x+y = D to the above eliminates x, giving
2y = D + (B^2 - A^2) / D
or,
y = 1/2 ( D + (B^2 - A^2) / D )
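A sketch that turns this into code (hypothetical types; cB and cA are the centers of the bigger and smaller spheres, and the spheres are assumed to actually intersect so the square root is real):

#include <cmath>

struct Vec3 { double x, y, z; };

struct IntersectionCircle
{
    Vec3   center;  // point O on the line between the two centers
    double radius;  // r
    Vec3   normal;  // unit vector from the bigger sphere's center toward the smaller one
};

// cB, B: center and radius of the bigger sphere; cA, A: center and radius of the smaller one.
IntersectionCircle sphereSphereCircle(Vec3 cB, double B, Vec3 cA, double A)
{
    Vec3 diff = { cA.x - cB.x, cA.y - cB.y, cA.z - cB.z };
    double D = std::sqrt(diff.x*diff.x + diff.y*diff.y + diff.z*diff.z);
    Vec3 n = { diff.x / D, diff.y / D, diff.z / D };
    double y = 0.5 * (D + (B*B - A*A) / D);   // distance of O from the bigger sphere's center
    double r = std::sqrt(B*B - y*y);          // radius of the intersection circle
    Vec3 O = { cB.x + n.x * y, cB.y + n.y * y, cB.z + n.z * y };
    return { O, r, n };
}

Points on the circle are then O + r*(cos(t)*u + sin(t)*v), with u and v any two unit vectors perpendicular to the returned normal and t running from 0 to 2*PI, which is the parameterization described above.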
My problem:
How can I take two 3D points and lock them to a single axis? For instance, so that both their z coordinates are 0.
What I'm trying to do:
I have a set of 3D coordinates in a scene, representing a box with a pyramid on it. I also have a camera, represented by another 3D coordinate. I subtract the camera coordinate from the scene coordinate and normalize it, returning a vector that points to the camera. I then do ray-plane intersection with a plane that is behind the camera point.
O + tD
Where O (origin) is the camera position, D is the direction from the scene point to the camera, and t is the parameter value at which the ray intersects the plane.
If that doesn't make sense, here's a crude drawing:
I've searched far and wide, and as far as I can tell, this is called using a "pinhole camera".
The problem is not my camera rotation, I've eliminated that. The trouble is in translating the intersection point to barycentric (uv) coordinates.
The translation on the x-axis looks like this:
// u axis: the plane normal rotated 90 degrees about the z axis
uaxis.x = -a_PlaneNormal.y;
uaxis.y = a_PlaneNormal.x;
uaxis.z = a_PlaneNormal.z;
// v axis: perpendicular to both the u axis and the plane normal
point vaxis = uaxis.CopyCrossProduct(a_PlaneNormal);
// project the intersection point onto the two axes
point2d.x = intersection.DotProduct(uaxis);
point2d.y = intersection.DotProduct(vaxis);
return point2d;
While the translation on the z-axis looks like this:
// u axis: the plane normal rotated 90 degrees about the y axis
uaxis.x = -a_PlaneNormal.z;
uaxis.y = a_PlaneNormal.y;
uaxis.z = a_PlaneNormal.x;
// v axis: perpendicular to both the u axis and the plane normal
point vaxis = uaxis.CopyCrossProduct(a_PlaneNormal);
// project the intersection point onto the two axes
point2d.x = intersection.DotProduct(uaxis);
point2d.y = intersection.DotProduct(vaxis);
return point2d;
My question is: how can I turn a ray-plane intersection point into barycentric (uv) coordinates for both the x-axis and the z-axis cases?
The usual formula for points (p) on a line, starting at (p0) with vector direction (v) is:
p = p0 + t*v
The criterion for a point (p) on a plane containing (p1) and with normal (n) is:
(p - p1).n = 0
So, plug&chug:
(p0 + t*v - p1).n = (p0-p1).n + t*(v.n) = 0
-> t = (p1-p0).n / v.n
-> p = p0 + ((p1-p0).n / v.n)*v
To check:
(p - p1).n = (p0-p1).n + ((p1-p0).n / v.n)*(v.n)
= (p0-p1).n + (p1-p0).n
= 0
If you want to fix the Z coordinate at a particular value, you need to choose a normal along the Z axis (which will define a plane parallel to XY plane).
Then, you have:
n = (0,0,1)
-> p = p0 + ((p1.z-p0.z)/v.z) * v
-> x and y offsets from p0 = ((p1.z-p0.z)/v.z) * (v.x,v.y)
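For instance, a minimal sketch of that special case (hypothetical Vec3 struct; it assumes v.z is nonzero, i.e. the ray is not parallel to the XY plane):

struct Vec3 { double x, y, z; };

// Intersection of the line p0 + t*v with the plane z = zPlane, i.e. n = (0,0,1).
Vec3 intersectWithZPlane(Vec3 p0, Vec3 v, double zPlane)
{
    double t = (zPlane - p0.z) / v.z;   // the (p1.z - p0.z) / v.z from above
    return { p0.x + t * v.x, p0.y + t * v.y, zPlane };
}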
Finally, if you're trying to build a virtual "camera" for 3D computer graphics, the standard way to do this kind of thing is homogeneous coordinates. Ultimately, working with homogeneous coordinates is simpler (and usually faster) than the kind of ad hoc 3D vector algebra I have written above.