Plot 2d point on 3D plane drawn in 2d - math

I am attempting to draw a point from a 2d plane on a 3d plane that is drawn in 2d. I'm not sure how to adjust the y position based on the angle of the perspective. As you can see in the image linked below (Stack Overflow won't let me include the image because I just signed up), if the point is at the center point of the rectangle, it would need to be shifted up slightly when viewed from an angle to account for distance from the viewer. Can anyone provide an equation to help?

Say the point in the rectangle is given by (x,y), and the coordinates we're looking for in the second image are (x', y').
w = y + y0
y' = k atan(w/h)
r = sqrt(h^2 + w^2)
x' = k atan(x/r)
where k is a scaling factor for the whole image, h is the altitude of the viewpoint above the plane, and y0 is, roughly, the distance to the object.
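As a rough sketch of these formulas in JavaScript (the values of k, h and y0 are assumptions you would choose for your own scene):

// map a point (x, y) on the ground rectangle to screen coordinates (x', y')
function projectPoint(x, y, k, h, y0) {
  let w = y + y0;                     // depth of the point from the viewer
  let yPrime = k * Math.atan(w / h);  // vertical screen coordinate
  let r = Math.sqrt(h * h + w * w);   // distance from the viewpoint to the point's row
  let xPrime = k * Math.atan(x / r);  // horizontal screen coordinate
  return { x: xPrime, y: yPrime };
}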

Related

Plane point rotation to a specific plane

I have a system where one axis rotates from 0 to 2π. This movement sweeps out an angled plane (see the axis-movement sketch).
This yellow plane is my target plane. I know the yellow plane's normal vector and its constant, and I want to calculate the XYZ position on the yellow plane from the rotation value of the axis (tool). My working "solution" is to first calculate the XYZ coordinate on a simpler vertical plane with normal [1 0 0]: since I know the sphere origin and the radius, it is easy to calculate any XYZ position on that plane from the axis angle.
But my problem is that, now that I have the XYZ position on the gray plane, how can I get the corresponding position on the yellow plane (from the gray plane to the yellow plane)? Any suggestions would be appreciated.
The solution to this was simple; I made it more complicated than necessary. There was no need to transform the points from one plane to another, as these values can be calculated directly from the sphere origin and the plane orientation values.
// convert the axis rotation from degrees to radians
let radAngle = angle * Math.PI / 180;
let beta = (90 * Math.PI / 180) - radAngle; // rotation value on the circle
let gamma = Math.acos(yellow.normal.x);     // plane orientation
// temporary vars
let cb = Math.cos(beta);
let sb = Math.sin(beta);
let cg = Math.cos(gamma);
let sg = Math.sin(gamma);
// XYZ position on the tilted (yellow) plane
let x = sphere.origin.x + sphere.radius * (cg * sb);
let y = sphere.origin.y + sphere.radius * (sg * sb);
let z = sphere.origin.z + sphere.radius * cb;
Rotation sample
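As a rough illustration only (the values below are made up), the snippet expects inputs shaped like this:

let sphere = { origin: { x: 0, y: 0, z: 0 }, radius: 50 }; // sphere the tool tip moves on
let yellow = { normal: { x: 0.866, y: 0.5, z: 0 } };       // unit normal of the target plane
let angle = 45;                                            // axis rotation in degrees
// running the lines above with these values gives the XYZ position on the yellow plane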

Translation coordinates for a circle under a certain angle

I have 2 circles that collide at a certain collision point and under a certain collision angle, which I calculate using this formula:
C1(x1,y1) C2(x2,y2)
and the angle between the line uniting their centres and the x axis is
X = arctan(|y2 - y1| / |x2 - x1|)
What I want is to translate the top circle along that same angle X at which it collided with the other circle, and I don't know what translation coordinates to use for a proper, straight translation.
For what I think you mean, here's how to do it cleanly.
Think in vectors.
Suppose the centre of the bottom circle has coordinates (x1,y1), and the centre of the top circle has coordinates (x2,y2). Then define two vectors
support = (x1,y1)
direction = (x2,y2) - (x1,y1)
now, the line between the two centres is fully described by the parametric representation
line = support + k*direction
with k any value in (-inf,+inf). At the initial time, substituting k=1 into the equation above indeed gives the coordinates of the top circle's centre. At some later time t, the value of k will have increased, and substituting that new value of k into the equation gives the new coordinates of the centre of the top circle.
How much k increases per unit of time corresponds to the speed of the circle, and I leave that entirely up to you :)
Doing it this way, you never need to mess around with any angles and/or coordinate transformations etc. It even works in 3D (provided you add in z-coordinates everywhere).
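A minimal JavaScript sketch of this, assuming the centres are plain {x, y} objects and k is whatever value your speed gives you at a given time:

// new centre of the top circle for a given k: line = support + k*direction
function pointOnLine(c1, c2, k) {
  let support = { x: c1.x, y: c1.y };                  // bottom circle's centre
  let direction = { x: c2.x - c1.x, y: c2.y - c1.y };  // bottom centre -> top centre
  return { x: support.x + k * direction.x,
           y: support.y + k * direction.y };
}
// k = 1 reproduces the top circle's current centre; k > 1 moves it further along the same line
let newCentre = pointOnLine({ x: 0, y: 0 }, { x: 3, y: 4 }, 1.5);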

Transforming a rectangle into a ring

I have a rectangle that I need to 'bend' into a ring, i.e. the top edge of the rectangle must map to the outer circle of the ring, the bottom to the inner circle, and the sides of the rectangle should join.
Here's an extremely crude sketch of the rectangle and ring:
If it is helpful or necessary, I can deal with the rectangle as a collection of horizontal lines, and the ring as a collection of circles.
The rectangle has a horizontal gradient from a to b that should map so that the gradient progresses in a circular direction.
I can see that this is a non-linear transform, but am lost as to where to look to learn the techniques to solve this problem. Could anyone with suitable experience in CG point me to the right text, the right algorithm name, or the right graphics library to make my ring?
Try just using polar coordinates. If you map x to r and y to θ (normalising so that θ runs from 0 to 2π), then adding an offset to r will vary the radius of the ring and adding an offset to θ will rotate it around the circle.
r = f*x + a
g = 2*pi/(max_y - min_y)
theta = g*y + b
where a and b are those offsets, f scales the width of the ring, and g normalises the length of the rectangle to 2π. The transform back from these polar coordinates to Cartesian (i.e. screen) coordinates is just:
x' = r cos(theta)
y' = r sin(theta)
You then have 3 coordinate systems: (x,y) for the original rectangle, (r,θ) for the polar coordinates of the ring and (x',y') for the screen coordinates.
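A small JavaScript sketch of the mapping, where f, a, b and the rectangle's y-range are assumptions you would tune for your ring:

// map a rectangle point (x, y) to screen coordinates (x', y') on the ring
function rectToRing(x, y, minY, maxY, f, a, b) {
  let r = f * x + a;                       // x controls the radius
  let g = (2 * Math.PI) / (maxY - minY);   // normalise the rectangle's length to 2*pi
  let theta = g * (y - minY) + b;          // y controls the angle around the ring
  return { x: r * Math.cos(theta), y: r * Math.sin(theta) };
}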

Determine if 3D point is inside 2D Circle

I wish to determine if a Point P(x,y,z) is inside a 2D circle in 3D space defined by its center C (cx, cy, cz), radius R, and normal to the plane the circle lies on N.
I know that a point P lying on a 2D circle in 3D space is defined by:
P = R*cos(t)*U + R*sin(t)*(N x U) + C
where U is a unit vector from the center of the circle to any point on the circle. But given a point Q, how do I know if Q is on or inside the circle? What is the appropriate parameter t to choose? And which coordinates do I compare the point Q against to see whether it lies within the circle?
Thanks.
Project P onto the plane containing the circle, call that P'. P will be in the circle if and only if |P - P'| = 0 and |P' - C| < R.
I'd do this by breaking it into two parts:
Find out if the point is on the same plane as the circle (i.e. check that the dot product of the vector from the center to the point with the normal is zero).
Find out if it's inside the sphere containing the circle (i.e. check that the distance from the center to the point is smaller than the radius).
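Putting both answers together, a minimal sketch in JavaScript (vectors as plain {x, y, z} objects, N assumed to be a unit normal, and a small tolerance used for the planarity test):

// true if P lies on the circle's plane and within radius R of the centre C
function pointInCircle(P, C, N, R, eps) {
  let d = { x: P.x - C.x, y: P.y - C.y, z: P.z - C.z }; // vector from centre to point
  let offPlane = d.x * N.x + d.y * N.y + d.z * N.z;     // signed distance from the plane
  if (Math.abs(offPlane) > eps) return false;           // not on the plane of the circle
  let dist2 = d.x * d.x + d.y * d.y + d.z * d.z;        // squared distance to the centre
  return dist2 <= R * R;                                // inside (or on) the circle
}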

How can I turn a ray-plane intersection point into barycentric coordinates?

My problem:
How can I take two 3D points and lock them to a single axis? For instance, so that both their z coordinates are 0.
What I'm trying to do:
I have a set of 3D coordinates in a scene, representing a box with a pyramid on it. I also have a camera, represented by another 3D coordinate. I subtract the camera coordinate from the scene coordinate and normalize it, returning a vector that points to the camera. I then do ray-plane intersection with a plane that is behind the camera point.
O + tD
Where O (origin) is the camera position, D is the direction from the scene point to the camera, and t is the time it takes for the ray to intersect the plane from the camera point.
If that doesn't make sense, here's a crude drawing:
I've searched far and wide, and as far as I can tell, this is called using a "pinhole camera".
The problem is not my camera rotation, I've eliminated that. The trouble is in translating the intersection point to barycentric (uv) coordinates.
The translation on the x-axis looks like this:
uaxis.x = -a_PlaneNormal.y;
uaxis.y = a_PlaneNormal.x;
uaxis.z = a_PlaneNormal.z;
point vaxis = uaxis.CopyCrossProduct(a_PlaneNormal);
point2d.x = intersection.DotProduct(uaxis);
point2d.y = intersection.DotProduct(vaxis);
return point2d;
While the translation on the z-axis looks like this:
uaxis.x = -a_PlaneNormal.z;
uaxis.y = a_PlaneNormal.y;
uaxis.z = a_PlaneNormal.x;
point vaxis = uaxis.CopyCrossProduct(a_PlaneNormal);
point2d.x = intersection.DotProduct(uaxis);
point2d.y = intersection.DotProduct(vaxis);
return point2d;
My question is: how can I turn a ray-plane intersection point into barycentric coordinates on both the x and the z axes?
The usual formula for points (p) on a line, starting at (p0) with vector direction (v) is:
p = p0 + t*v
The criterion for a point (p) on a plane containing (p1) and with normal (n) is:
(p - p1).n = 0
So, plug&chug:
(p0 + t*v - p1).n = (p0-p1).n + t*(v.n) = 0
-> t = (p1-p0).n / v.n
-> p = p0 + ((p1-p0).n / v.n)*v
To check:
(p - p1).n = (p0-p1).n + ((p1-p0).n / v.n)*(v.n)
= (p0-p1).n + (p1-p0).n
= 0
If you want to fix the Z coordinate at a particular value, you need to choose a normal along the Z axis (which will define a plane parallel to XY plane).
Then, you have:
n = (0,0,1)
-> p = p0 + ((p1.z-p0.z)/v.z) * v
-> x and y offsets from p0 = ((p1.z-p0.z)/v.z) * (v.x,v.y)
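As a sketch of this derivation in JavaScript (vectors as {x, y, z} objects; a ray parallel to the plane returns null):

// intersect the ray p = p0 + t*v with the plane through p1 having normal n
function rayPlaneIntersection(p0, v, p1, n) {
  let dot = (a, b) => a.x * b.x + a.y * b.y + a.z * b.z;
  let denom = dot(v, n);
  if (Math.abs(denom) < 1e-12) return null;             // ray is parallel to the plane
  let t = dot({ x: p1.x - p0.x, y: p1.y - p0.y, z: p1.z - p0.z }, n) / denom;
  return { x: p0.x + t * v.x, y: p0.y + t * v.y, z: p0.z + t * v.z };
}
// fixing z at zPlane is the special case n = (0,0,1) and p1 = (0,0,zPlane)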
Finally, if you're trying to build a virtual "camera" for 3D computer graphics, the standard way to do this kind of thing is homogeneous coordinates. Ultimately, working with homogeneous coordinates is simpler (and usually faster) than the kind of ad hoc 3D vector algebra I have written above.
