Given a 2D point that is on the surface of a triangle, where each corner of the triangle is a 3D point, how can you compute the corresponding 3D point of the 2D point?
To get the 3D location of a particular 2D point on a triangle, use barycentric coordinates to interpolate the locations of the 3D vertices:
2D coordinates: u,v such that 0 <= u,v <= 1 and u+v <= 1
-> barycentric coordinates: add t such that t+u+v = 1 -> t = 1-(u+v)
3D vertices: V1, V2, and V3
-> result = u*V1 + v*V2 + t*V3
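As a minimal sketch in Python/NumPy (the function name and example values here are my own, not from the original answer):

import numpy as np

def point_on_triangle(V1, V2, V3, u, v):
    # Interpolate the 3D vertices with barycentric weights u, v, t.
    t = 1.0 - (u + v)                 # t + u + v = 1
    return u * V1 + v * V2 + t * V3

# Example: u = v = 1/3 gives the centroid of the triangle.
V1 = np.array([0.0, 0.0, 0.0])
V2 = np.array([1.0, 0.0, 0.0])
V3 = np.array([0.0, 1.0, 1.0])
print(point_on_triangle(V1, V2, V3, 1/3, 1/3))   # [0.333... 0.333... 0.333...]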
I am searching for the location of a point that lies on a plane. The relative location on the plane is given in u/v coordinates.
The normal vector n is the vector from (0,0,0) to the center of the plane (its length may be anything nonzero, if more convenient).
The plane has no rotation around the n vector - u always lies in the xy plane and v points toward the z (up) axis.
I feel like there should be a simple formula for this, given Vector3 n along with the coordinates u and v, but I'm stuck here.
You have the coordinates of the origin of the plane, namely (x, y, z)
Now we need the unit vectors u and v in global coordinates.
u is (y, -x, 0), normalized.
v is (-z*x/r, -z*y/r, r), normalized, where r = sqrt(x^2 + y^2).
Now you can add u times the unit u vector and v times the unit v vector to the location of the plane origin in xyz coordinates.
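A sketch of this construction in Python/NumPy (names are my own; it assumes n is not parallel to the z axis, since r would then be zero):

import numpy as np

def plane_point(n, u, v):
    # n points from the origin to the plane origin and is also the normal.
    x, y, z = n
    r = np.hypot(x, y)                       # r = sqrt(x^2 + y^2)
    u_axis = np.array([y, -x, 0.0]) / r      # unit u vector, lies in the xy plane
    v_axis = np.array([-z * x / r, -z * y / r, r])
    v_axis /= np.linalg.norm(v_axis)         # unit v vector, completes the basis
    return np.asarray(n) + u * u_axis + v * v_axis

print(plane_point(np.array([1.0, 0.0, 0.0]), u=2.0, v=3.0))   # [ 1. -2.  3.]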
I have a 3D vector defined by start and end coordinates (x0,y0,z0 and x1,y1,z1). I also know the angles made by this vector with the x, y, z axes. Does someone know how I can find the angles made by the vector with the xy, yz and zx planes?
The projection of the given segment onto the OXY plane is the segment with coordinates (x1, y1) - (x2, y2) (renaming the endpoints 1 and 2).
It forms an angle relative to the OX axis:
Axy = atan2(y2-y1, x2-x1)
The angle between the segment and its projection onto the OXY plane is
Pxy = arcsin((z2 - z1) / Sqrt((x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2))
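A sketch of both formulas in Python (the function name is my own):

import math

def segment_angles(p1, p2):
    x1, y1, z1 = p1
    x2, y2, z2 = p2
    dx, dy, dz = x2 - x1, y2 - y1, z2 - z1
    axy = math.atan2(dy, dx)        # angle of the OXY projection relative to OX
    pxy = math.asin(dz / math.sqrt(dx*dx + dy*dy + dz*dz))   # elevation above OXY
    return axy, pxy

# A segment rising at 45 degrees gives Pxy = pi/4.
print(segment_angles((0, 0, 0), (1, 1, math.sqrt(2))))   # (0.785..., 0.785...)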
I have the intrinsic and extrinsic parameters of the camera.
The extrinsic is a 4 x 4 matrix with rotation and translation.
I have sample data as below; I have one of these per camera image taken.
2.11e-001 -3.06e-001 -9.28e-001 7.89e-001
6.62e-001 7.42e-001 -9.47e-002 1.47e-001
7.18e-001 -5.95e-001 3.60e-001 3.26e+000
0.00e+000 0.00e+000 0.00e+000 1.00e+000
I would like to plot the visualization as given on the Matlab calibration toolkit page.
However, I'm unable to figure out the math of how to plot these two views.
The only lead I have is from this page http://en.wikipedia.org/wiki/Camera_resectioning, which tells me that the camera position can be found by C = -R' * T.
Any idea how to achieve this task?
Assume the corners of the plane that you want to draw are 3x1 column vectors, a = [0 0 0]', b = [w 0 0]', c = [w h 0]' and d = [0 h 0]'.
Assume that the calibration matrix that you provide is A and consists of a rotation matrix R = A(1:3, 1:3) and a translation T = A(1:3, 4).
To draw the first view
For every pose A_i with rotation R_i and translation T_i, transform each corner x_w (that is a, b, c or d) of the plane to its coordinates x_c in the camera by
x_c = R_i*x_w + T_i
Then draw the plane with transformed corners.
To draw the camera, its centre of projection in camera coordinates is [0 0 0]' and the camera x axis is [1 0 0]', y axis is [0 1 0]' and z axis is [0 0 1]'.
Note that in the drawing, the camera y-axis is pointing down, so you might want to apply an additional rotation on all the computed coordinates by multiplication with B = [1 0 0; 0 0 1; 0 -1 0].
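A sketch of this first view in Python/NumPy (the 4x4 extrinsic matrix A and the plane size w, h are assumptions; the plotting itself is omitted):

import numpy as np

def plane_in_camera(A, w=1.0, h=1.0):
    # Plane corners a, b, c, d as columns of a 3x4 matrix.
    corners = np.array([[0, 0, 0], [w, 0, 0], [w, h, 0], [0, h, 0]], float).T
    R, T = A[:3, :3], A[:3, 3:4]
    x_c = R @ corners + T                                     # x_c = R_i*x_w + T_i
    B = np.array([[1, 0, 0], [0, 0, 1], [0, -1, 0]], float)   # y-down to y-up flip
    return (B @ x_c).T                                        # corners, ready to draw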
To draw the second view
Drawing the plane is trivial since we are in world coordinates. Just draw the plane using a, b, c and d.
To draw the cameras, each camera centre is c = -R'*T. The camera axes are the rows of the rotation matrix R, so for instance, in the matrix you provided, the x-axis is
[2.11e-001 -3.06e-001 -9.28e-001]'. You can also draw a camera by transforming each point x_c given in camera coordinates to world coordinates x_w by x_w = R'*(x_c - T) and draw it.
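A sketch of this second view in Python/NumPy, using the sample matrix from the question (rounded); the plotting calls are left out:

import numpy as np

def camera_pose_in_world(A):
    R, T = A[:3, :3], A[:3, 3]
    centre = -R.T @ T      # c = -R'*T
    axes = R               # rows of R are the camera x, y, z axes in world frame
    return centre, axes

A = np.array([[0.211, -0.306,  -0.928,  0.789],
              [0.662,  0.742,  -0.0947, 0.147],
              [0.718, -0.595,   0.360,  3.26 ],
              [0.0,    0.0,     0.0,    1.0  ]])
centre, axes = camera_pose_in_world(A)
print(centre)     # camera position in world coordinates
print(axes[0])    # camera x-axis, the first row quoted above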
There is now an example in OpenCV for visualizing the extrinsics generated from their camera calibration example.
It outputs something similar to what the original question asks for.
I have a point P(x,y,z) in 3D and a view plane Ax + By + Cz + d = 0. A point in the plane is E. Now I want to project that 3D point onto the plane and get the 2D coordinates of the projected point relative to the point E.
P(x,y,z) = 3D point which I want to project onto the plane.
Plane Ax + By + Cz + d = 0, so normal n = (A,B,C)
E(ex,ey,ez) = a point in the plane (eye position of the camera)
What I am doing right now is to get the nearest point in the plane to P, then subtract E from that point. I suspect that this is right?
Please help me. Thanks.
The closest point is along the normal to the plane. So define a point Q that is offset from P along that normal.
Q = P - n*t
Then solve for t that puts Q in the plane:
dot(Q,n) + d = 0
dot(P-n*t,n) + d = 0
dot(P,n) - t*dot(n,n) = -d
t = (dot(P,n)+d)/dot(n,n)
Where dot((x1,y1,z1),(x2,y2,z2)) = x1*x2 + y1*y2 + z1*z2
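A sketch of this solution in Python/NumPy (names and example values are my own):

import numpy as np

def project_to_plane(P, n, d):
    # Offset P along the normal until it satisfies dot(Q,n) + d = 0.
    t = (np.dot(P, n) + d) / np.dot(n, n)
    return P - t * n

P = np.array([1.0, 2.0, 5.0])
n = np.array([0.0, 0.0, 1.0])               # plane z - 1 = 0
print(project_to_plane(P, n, d=-1.0))       # [1. 2. 1.]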
You get a point on the plane as p0 = (0, 0, -d/C), assuming C ≠ 0. I assume the normal has unit length.
The component of p - p0 in the direction of n is dot(p-p0, n) * n, so the projection onto the plane is p - dot(p-p0, n)*n.
If you want 2D coordinates on the plane, you have to provide a basis/coordinate system, e.g. two linearly independent vectors which span the plane. The coordinates depend on these basis vectors.
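One possible basis, sketched in Python/NumPy (this particular choice is my own and assumes n is not parallel to (0, 0, 1)):

import numpy as np

def plane_coords(Q, E, n):
    # Q is the projected point, E the reference point, n the plane normal.
    n = np.asarray(n, float)
    n /= np.linalg.norm(n)
    u = np.cross(n, [0.0, 0.0, 1.0])
    u /= np.linalg.norm(u)                  # first basis vector, in the plane
    v = np.cross(n, u)                      # second basis vector, unit by construction
    rel = np.asarray(Q) - np.asarray(E)
    return np.dot(rel, u), np.dot(rel, v)   # 2D coordinates relative to E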
My problem:
How can I take two 3D points and lock them to a single axis? For instance, so that both their z coordinates are 0.
What I'm trying to do:
I have a set of 3D coordinates in a scene, representing a box with a pyramid on it. I also have a camera, represented by another 3D coordinate. I subtract the camera coordinate from the scene coordinate and normalize it, returning a vector that points to the camera. I then do ray-plane intersection with a plane that is behind the camera point.
O + tD
Where O (origin) is the camera position, D is the direction from the scene point to the camera, and t is the parameter at which the ray intersects the plane.
I've searched far and wide, and as far as I can tell, this is called using a "pinhole camera".
The problem is not my camera rotation, I've eliminated that. The trouble is in translating the intersection point to barycentric (uv) coordinates.
The translation on the x-axis looks like this:
uaxis.x = -a_PlaneNormal.y;
uaxis.y = a_PlaneNormal.x;
uaxis.z = a_PlaneNormal.z;
point vaxis = uaxis.CopyCrossProduct(a_PlaneNormal);
point2d.x = intersection.DotProduct(uaxis);
point2d.y = intersection.DotProduct(vaxis);
return point2d;
While the translation on the z-axis looks like this:
uaxis.x = -a_PlaneNormal.z;
uaxis.y = a_PlaneNormal.y;
uaxis.z = a_PlaneNormal.x;
point vaxis = uaxis.CopyCrossProduct(a_PlaneNormal);
point2d.x = intersection.DotProduct(uaxis);
point2d.y = intersection.DotProduct(vaxis);
return point2d;
My question is: how can I turn a ray-plane intersection point into barycentric (uv) coordinates on both the x and the z axis?
The usual formula for points (p) on a line, starting at (p0) with vector direction (v) is:
p = p0 + t*v
The criterion for a point (p) on a plane containing (p1) and with normal (n) is:
(p - p1).n = 0
So, plug&chug:
(p0 + t*v - p1).n = (p0-p1).n + t*(v.n) = 0
-> t = (p1-p0).n / v.n
-> p = p0 + ((p1-p0).n / v.n)*v
To check:
(p - p1).n = (p0-p1).n + ((p1-p0).n / v.n)*(v.n)
= (p0-p1).n + (p1-p0).n
= 0
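A direct transcription of this derivation into Python/NumPy (names are my own):

import numpy as np

def line_plane_intersection(p0, v, p1, n):
    t = np.dot(p1 - p0, n) / np.dot(v, n)   # t = (p1-p0).n / v.n
    return p0 + t * v                       # p lies in the plane through p1

# Ray from (0,0,5) straight down, plane through the origin with normal +z.
p = line_plane_intersection(np.array([0.0, 0.0, 5.0]), np.array([0.0, 0.0, -1.0]),
                            np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
print(p)   # [0. 0. 0.]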
If you want to fix the Z coordinate at a particular value, you need to choose a normal along the Z axis (which will define a plane parallel to XY plane).
Then, you have:
n = (0,0,1)
-> p = p0 + ((p1.z-p0.z)/v.z) * v
-> x and y offsets from p0 = ((p1.z-p0.z)/v.z) * (v.x,v.y)
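The same special case as a small Python/NumPy helper (my own sketch):

import numpy as np

def move_to_z(p0, v, z):
    t = (z - p0[2]) / v[2]     # ((p1.z - p0.z) / v.z) with p1.z = z
    return p0 + t * v          # the point on the ray with the given z

p0 = np.array([1.0, 1.0, 5.0])
v = np.array([0.5, 0.0, -1.0])
print(move_to_z(p0, v, 0.0))   # [3.5 1.  0. ]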
Finally, if you're trying to build a virtual "camera" for 3D computer graphics, the standard way to do this kind of thing is homogeneous coordinates. Ultimately, working with homogeneous coordinates is simpler (and usually faster) than the kind of ad hoc 3D vector algebra I have written above.