How to translate a 3D mesh, given a view direction and a change in cursor position

My question is similar to 3D Scene Panning in perspective projection (OpenGL) except I don't know how to compute the direction in which to move the mesh.
I have a program in which various meshes can be selected. Once a mesh is selected I want it to translate when click-dragging the cursor. When the cursor moves up, I want the mesh to move up, and so on for the appropriate direction. In other words, I want the mesh to translate in directions along the plane that is perpendicular to the viewing direction.
I have the Vector2 for the Delta (x, y) in cursor position, and I have the Vector3 viewDirection of the camera and the center of the mesh. How can I figure out which way to translate the mesh in 3D space with the Delta and viewDirection? Will I need other information in order to do this calculation (such as the up, or eye)?
It doesn't matter if the scale of the translation is off; I'm just trying to figure out the direction right now.
EDIT: for some reason I was confused about getting the up direction. Clearly it can be calculated by applying the camera rotation to the specified perspective up vector.

You'll need an additional vector, upDirection, which is the unit vector pointing "up" from your camera. You can now take the cross product of viewDirection and upDirection to get rightDirection, the vector pointing "right" from your camera.
You want to map y deltas to motion along upDirection (or -upDirection) and x deltas to motion along rightDirection. These vectors are in world space.
You may want to scale the translation speed to match the mouse speed. If you are using a perspective projection, you'll want to scale the translation speed with your model's depth with respect to your camera (the further the object is from your camera, the faster you will need to move it if you want it to keep pace with the mouse).
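A minimal sketch of that mapping, assuming numpy, unit-length viewDirection/upDirection vectors, and a hypothetical vertical FOV and viewport height for the perspective depth scaling:

    import numpy as np

    def drag_translation(delta, view_dir, up_dir, cam_pos, mesh_center, fov_y, viewport_h):
        # Camera-plane basis in world space.
        right_dir = np.cross(view_dir, up_dir)        # points "right" from the camera
        right_dir /= np.linalg.norm(right_dir)
        true_up = np.cross(right_dir, view_dir)       # re-orthogonalized "up"

        # Depth of the mesh along the view direction.
        depth = np.dot(mesh_center - cam_pos, view_dir)

        # World-space size of one pixel at that depth (perspective scaling).
        pixel_scale = 2.0 * depth * np.tan(fov_y / 2.0) / viewport_h

        # Screen y usually grows downward, hence the minus sign on delta[1].
        return (delta[0] * right_dir - delta[1] * true_up) * pixel_scale

If you only care about direction, the pixel_scale factor can be dropped entirely.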

Related

Adjust the orthographic camera to fit a 3D object (Three.js)

I'm building a scene, which I want to view through the orthographic camera, from an angle. I do the following:
Build the scene.
Move the OrbitControls' (camera's) target to the center of the scene.
Move the camera by a certain (unit) vector using spherical coordinates.
Try to adjust the camera's left/right/top/bottom params to keep the object in the view, centered. Also considered adjusting the zoom.
My simplified, ideally positioned scene looks like this: [screenshot of the scene omitted]
So I guess it is a problem of calculating the positions of the object's extremities after the (spherical) transformation and projecting them back into Cartesian coordinates. I tried to use the Euler transform helper, but it depends on the order of transformation for each of the axes. Quaternions are also non-commutative, and I'm lost. Perhaps I need to calculate how the widths/heights of the diagonals would change after the transformation and use those?
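One way to make that calculation concrete, sketched in generic vector math rather than the three.js API (numpy, under my own assumptions about the camera basis): express every bounding-box corner in the camera's frame and fit the frustum to the extents.

    import numpy as np

    def fit_ortho_frustum(corners, cam_pos, view_dir, up_dir):
        # Orthonormal camera basis (right, up) in world space.
        right = np.cross(view_dir, up_dir)
        right /= np.linalg.norm(right)
        up = np.cross(right, view_dir)

        # Corner coordinates relative to the camera, projected onto the basis.
        rel = np.asarray(corners) - cam_pos     # shape (8, 3)
        x = rel @ right                         # horizontal extents
        y = rel @ up                            # vertical extents

        # left, right, bottom, top values that exactly frame the object.
        return x.min(), x.max(), y.min(), y.max()

Equivalently, you could keep the frustum fixed and adjust the camera's zoom so the larger of the two extents fits.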

Ray Tracer Camera - Orthographic to Perspective Projection

I am implementing a ray tracer and it currently has an orthographic projection. I want to make it into a perspective projection. I know that in orthographic projection you send out a ray from every pixel and check for intersections. In perspective projection, the rays share a single starting position rather than starting from every pixel.
So I assume that in perspective projection the ray's starting position should be the camera's position. The problem is that I don't think I ever explicitly placed a camera, so I do not know what to change my ray's starting position to.
How can I determine where my camera is placed? I tried (0,0,0), but that just leaves me with a blank image so I don't think it is right.
In orthographic projection, the rays through each pixel would have the same direction, as if the rays originated from a camera behind the screen placed at an infinite distance.
For perspective projection, the camera has to be placed at a finite distance behind the screen. Each ray should originate from the camera and go through each pixel of the screen. The distance between the screen and camera depends on the viewing angle.
You can triangulate the distance from the camera to your object by first picking an angle for the perspective projection. A simple example: pick an angle of 60° for the vertical field of view (FOV), assume your object's center is at (0,0,0), and place the camera looking down the Z axis towards the center of your object. This forms a triangle, and you can compute the distance with trigonometry: distance = (objectHeight / 2) / tan(fov / 2). So you place the camera at (0, 0, distance). You can use the same concept for your actual object location.
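A sketch of that triangulation plus per-pixel ray generation (Python; math.tan expects radians, and all names here are illustrative):

    import math

    fov_y_deg = 60.0
    object_height = 2.0                          # assumed object size
    half_angle = math.radians(fov_y_deg / 2.0)

    # Distance at which the object exactly fills the vertical FOV.
    distance = (object_height / 2.0) / math.tan(half_angle)
    camera_pos = (0.0, 0.0, distance)            # looking down -Z toward the origin

    def primary_ray(px, py, width, height):
        # Map the pixel center to [-1, 1] NDC, correcting for aspect ratio.
        ndc_x = (2.0 * (px + 0.5) / width - 1.0) * (width / height)
        ndc_y = 1.0 - 2.0 * (py + 0.5) / height
        direction = (ndc_x * math.tan(half_angle),
                     ndc_y * math.tan(half_angle),
                     -1.0)
        # The origin is constant; only the direction varies per pixel.
        return camera_pos, direction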

How to get new camera direction vector when moving an arbitrary relative angle

I am implementing a camera class and am getting stuck with some things
Let's suppose the camera is at Point (0,0,0) looking at a certain direction with its corresponding UP and RIGHT vectors.
I have a joystick control which allows you to go forward-backwards, or change orientation by moving (left-right) or (up-down), according to the above mentioned vectors.
How can I know, given the 3 vectors, what the resulting direction vector is if, for instance, I want to turn N degrees to the right?
If you are talking about rotating your camera, here is how it is done: every rotation is a matrix that transforms coordinates, so all you have to do is calculate the matrix of your rotation and then apply it to the Dir, Up and Right vectors of your camera to get the new ones after the rotation is done.
Here is a little reading about rotation matrices (read the section on 3D rotations):
http://mathworld.wolfram.com/RotationMatrix.html
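For instance, turning N degrees to the right is a rotation about the camera's Up vector. A numpy sketch using Rodrigues' rotation formula (the sign convention and parameter names are assumptions):

    import numpy as np

    def rotate_about_axis(v, axis, angle):
        # Rodrigues' formula: rotate v by angle (radians) around a unit axis.
        axis = axis / np.linalg.norm(axis)
        return (v * np.cos(angle)
                + np.cross(axis, v) * np.sin(angle)
                + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

    def turn_right(cam_dir, cam_up, cam_right, degrees):
        # Yaw around Up: Dir and Right change, Up stays fixed.
        a = np.radians(-degrees)      # sign depends on your handedness convention
        return (rotate_about_axis(cam_dir, cam_up, a),
                cam_up,
                rotate_about_axis(cam_right, cam_up, a))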

Traveling along the surface of a sphere using quaternions

I'm programming a 3D game where the user controls a first-person camera, and movement is constrained to the inside surface of a sphere. I've managed to constrain the movement, but I'm having trouble figuring out how to manage the camera orientation using quaternions. Ideally the camera up vector should point along the normal of the sphere towards its center, and the user should be able to look around freely - as if he were always at the bottom of the sphere, no matter where he moves.
Presumably you have two vectors describing the camera's orientation. One will be your V'up, describing which way is up relative to the camera orientation, and the other will be your V'norm, which will be the direction the camera is aimed. You will also have a position p', where your camera is located at some time. You define a canonical orientation and position given by, say:
Vup = <0, 1, 0>
Vnorm = <0, 0, 1>
p = <0, -1, 0>
Given a quaternion rotation q you then apply your rotation to those vectors to get:
V'up = q Vup q^-1
V'norm = q Vnorm q^-1
p' = q p q^-1
In your particular situation, you define q to incrementally accumulate the various rotations that result in the final rotation you apply to the camera. The effect will be that it looks like what you're describing. That is, you move the camera inside a statically oriented and positioned sphere rather than moving the sphere around a statically oriented and positioned camera.
Each increment is computed by a rotation of some angle θ about the vector V = V'up x V'norm.
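A sketch of applying the accumulated q to the canonical vectors, with the quaternion math written out in plain Python (the (w, x, y, z) storage convention is mine):

    import numpy as np

    def quat_mul(a, b):
        # Hamilton product of quaternions stored as (w, x, y, z).
        aw, ax, ay, az = a
        bw, bx, by, bz = b
        return np.array([aw*bw - ax*bx - ay*by - az*bz,
                         aw*bx + ax*bw + ay*bz - az*by,
                         aw*by - ax*bz + ay*bw + az*bx,
                         aw*bz + ax*by - ay*bx + az*bw])

    def quat_rotate(q, v):
        # v' = q v q^-1, embedding v as the pure quaternion (0, v).
        p = np.array([0.0, *v])
        q_inv = q * np.array([1.0, -1.0, -1.0, -1.0])   # conjugate = inverse for unit q
        return quat_mul(quat_mul(q, p), q_inv)[1:]

    def axis_angle(axis, theta):
        # Incremental rotation of angle theta about a unit axis.
        axis = axis / np.linalg.norm(axis)
        return np.array([np.cos(theta / 2), *(np.sin(theta / 2) * axis)])

    # Canonical orientation and position from the answer.
    v_up   = np.array([0.0, 1.0, 0.0])
    v_norm = np.array([0.0, 0.0, 1.0])
    p      = np.array([0.0, -1.0, 0.0])

    # One increment: rotate by theta about V = V'up x V'norm, then re-derive the basis.
    q = axis_angle(np.cross(v_up, v_norm), 0.1)
    v_up, v_norm, p = quat_rotate(q, v_up), quat_rotate(q, v_norm), quat_rotate(q, p)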
Quaternions are normally used to avoid gimbal lock in free space motion (flight sims, etc.). In your case, you actually want the gimbal effect, since a camera that is forced to stay upright will inevitably behave strangely when it has to point almost straight up or down.
You should be able to represent the camera's orientation as just a latitude/longitude pair indicating the direction the camera is pointing.
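A tiny sketch of that representation (Python; the angle conventions here are mine):

    import math

    def latlon_to_direction(lat, lon):
        # Unit view direction for a latitude/longitude pair, in radians.
        return (math.cos(lat) * math.sin(lon),
                math.sin(lat),
                math.cos(lat) * math.cos(lon))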

Polygon math

Given a list of points that form a simple 2d polygon oriented in 3d space and a normal for that polygon, what is a good way to determine which points are specific 'corner' points?
For example, which point is at the lower left, or the lower right, or the top most point? The polygon may be oriented in any 3d orientation, so I'm pretty sure I need to do something with the normal, but I'm having trouble getting the math right.
Thanks!
You would need more information in order to make that decision. A set of (co-planar) points and a normal is not enough to give you a concept of "lower left" or "top right" or any such relative identification.
Viewing the polygon from the direction of the normal (so that it appears as a simple 2D shape) is a good start, but that shape could be rotated to any arbitrary angle.
Is there some other information in the 3D world that you can use to obtain a coordinate-system reference?
What are you trying to accomplish by knowing the extreme corners of the shape?
Are you looking for a bounding box?
I'm not sure the normal has anything to do with what you are asking.
To get a Bounding box, keep 4 variables: MinX, MaxX, MinY, MaxY
Then loop through all of your points, checking the X values against MaxX and MinX, and your Y values against MaxY and MinY, updating them as needed.
When looping is complete, your box is defined by (MinX, MinY) as the upper left, (MaxX, MinY) as the upper right, and so on...
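As a sketch in plain Python (assuming the points are (x, y) pairs):

    def bounding_box(points):
        # Track the extremes in a single pass.
        min_x = min_y = float("inf")
        max_x = max_y = float("-inf")
        for x, y in points:
            min_x, max_x = min(min_x, x), max(max_x, x)
            min_y, max_y = min(min_y, y), max(max_y, y)
        # Corners: (min_x, min_y) upper left, (max_x, min_y) upper right, ...
        return min_x, min_y, max_x, max_y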
Response to your comment:
If you want your box after a projection, what you need is to get the "transformed" points. Then apply the bounding box loop as stated above.
Transformed usually implies 2D screen coordinates after a projection (scene render), but it could also mean the 2D points on any plane that you projected onto.
A possible algorithm (sketched in code after this list) would be:
Find the normal, which you can do by using the cross product of vectors connecting two pairs of different corners
Create a transformation matrix to rotate the polygon so that it is planar in XY space (i.e. normal aligned along the Z axis)
Calculate the coordinates of the bounding box or whatever other definition of corners you are using (as the polygon is now aligned in 2D space this is a considerably simpler problem)
Apply the inverse of the transformation matrix used in step 2 to transform these coordinates back to 3D space.
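A numpy sketch of those four steps (the normal-to-Z rotation is built with a Rodrigues-style alignment; the helper names are mine, and the anti-parallel case is left unhandled):

    import numpy as np

    def align_normal_to_z(normal):
        # Rotation matrix taking `normal` to +Z (undefined if normal == -Z).
        n = normal / np.linalg.norm(normal)
        z = np.array([0.0, 0.0, 1.0])
        v = np.cross(n, z)
        c = np.dot(n, z)
        vx = np.array([[0.0, -v[2], v[1]],
                       [v[2], 0.0, -v[0]],
                       [-v[1], v[0], 0.0]])
        return np.eye(3) + vx + vx @ vx / (1.0 + c)

    def box_corners_3d(points):                  # points: (N, 3) array
        # 1. Normal from the cross product of two edge vectors.
        normal = np.cross(points[1] - points[0], points[2] - points[0])
        # 2. Rotate the polygon so it lies in the XY plane.
        rot = align_normal_to_z(normal)
        flat = points @ rot.T                    # Z is now (nearly) constant
        # 3. 2D bounding box in the rotated frame.
        lo, hi = flat.min(axis=0), flat.max(axis=0)
        box = np.array([[lo[0], lo[1], lo[2]],
                        [hi[0], lo[1], lo[2]],
                        [hi[0], hi[1], lo[2]],
                        [lo[0], hi[1], lo[2]]])
        # 4. Inverse rotation returns the corners to the original 3D space.
        return box @ rot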
I believe that your question requires some additional information - namely the coordinate system with respect to which any point could be considered "topmost", or "leftmost".
Don't forget that whilst the normal tells you which way the polygon is facing, it doesn't on its own tell you which way is "up". It's possible to rotate (or "roll") around the normal vector and still be facing in the same direction.
This is why most 3D rendering systems have a camera which contains not only a "view" vector, but also "up" and "right" vectors. Changes to the latter two achieve the effect of the camera "rolling" around the view vector.
Project it onto a plane and get the bounding box.
I have a silly idea, but at the risk of gaining a negative point, I'll give it a try:
1. Get the minimum/maximum value along each three-dimensional axis over every point of your 2D polygon. A single pass with a loop/iterator over the list of values for every point will suffice, simply replacing the minimum and maximum values as you go. The end result is a list that has the "lowest" X, Y, Z coordinates and the "highest" X, Y, Z coordinates.
2. Iterate through this list of min/max values to create each point ("corner") of a "bounding box" around the object. The result should be a box that always contains the object regardless of the axis examined or the orientation (no point on the polygon will ever exceed the maximums or minimums you collect).
3. Then get the distance of each "2D polygon" point to each corner location on the "bounding box"; the shorter the distance between points, the "closer" it is to that "corner".
Far from optimal, certainly crummy, but certainly quick. You could probably post-capture this during the object's rotation, by simply looking for the min/max of each rotated x/y/z value, and retaining a list of those values ahead of time.
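A compact numpy version of that min/max-plus-nearest-corner idea (a sketch only; which corner counts as "lower left" still depends on an external frame):

    import numpy as np
    from itertools import product

    def nearest_corner(points):                  # points: (N, 3) array
        # 3D min/max in one pass.
        lo, hi = points.min(axis=0), points.max(axis=0)
        # All 8 corners of the axis-aligned bounding box.
        corners = np.array(list(product(*zip(lo, hi))))
        # For each polygon point, the index of its closest box corner.
        dists = np.linalg.norm(points[:, None, :] - corners[None, :, :], axis=2)
        return corners, dists.argmin(axis=1)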
If you can assume that there are some constraints on the shapes, then you might be able to get away with knowing less information. For example, if your shape was the composition of a small square with a long thin triangle on one side (i.e. a simple symmetrical geometry), then you could compare the distance from each list point to the "center of mass." The largest distance would identify the tip of the cone, the second largest would be the two points farthest from the tip of the cone, etc. If there was some order to the list, like points being entered in counterclockwise order (about the normal), you could identify all the points.

This sounds like a bit of computation, so it might be reasonable to try to include some extra info with your shapes, like the "center of mass" and a reference point that is located "up" above the COM (but not along the normal). This will give you an "up" vector that you can cross with the normal to define some body coordinates, for example. Also, the normal can be defined by an ordering of the point list.

If you can't assume anything about the shapes (or even if the shapes were symmetrical, for example), then you will need more data. It depends on your constraints.
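A sketch of the center-of-mass comparison (numpy; assumes the vertex list is an (N, 3) array):

    import numpy as np

    def rank_by_distance_from_com(points):
        com = points.mean(axis=0)                    # "center of mass" of the vertices
        d = np.linalg.norm(points - com, axis=1)
        # Indices sorted farthest-first: for the square-plus-triangle example,
        # order[0] is the tip of the "cone".
        order = np.argsort(-d)
        return order, d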
If you know that the polygon in 3D is "flat", you can use the normal to transform all 3D points of the vertices to a 2D representation (of the points with respect to the plane in which the polygon is located) - but this still leaves you with defining the origin of this coordinate system (which doesn't really matter for your problem) and the orientation of at least one of the axes (if you want orthogonal axes you can still rotate them around your chosen origin) - and this is where the trouble starts.
I would recommend using the Y-axis of your 3D coordinate system, projecting it onto your plane and using the resulting direction as "up" - but then you are in trouble in case your plane is orthogonal to the Y-axis (now you might want to use the projected Z-axis as "up").
The math is rather simple: you can use the inner product (a.k.a. scalar product) for the projection onto your plane, and some matrix operations to convert to the 2D coordinate system - you can get all of it by googling for raytracer algorithms for polygons.
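A sketch of that projection and the resulting 2D coordinates (numpy; the Z-axis fallback follows the answer's suggestion):

    import numpy as np

    def plane_basis(normal):
        # Project the world Y axis onto the plane to use as "up".
        n = normal / np.linalg.norm(normal)
        y = np.array([0.0, 1.0, 0.0])
        up = y - np.dot(y, n) * n                # inner-product projection
        if np.linalg.norm(up) < 1e-6:            # plane orthogonal to the Y axis
            z = np.array([0.0, 0.0, 1.0])
            up = z - np.dot(z, n) * n
        up /= np.linalg.norm(up)
        right = np.cross(up, n)                  # completes an orthogonal basis
        return right, up

    def to_2d(point, origin, right, up):
        # 2D coordinates of a 3D point in the chosen plane frame.
        d = point - origin
        return np.dot(d, right), np.dot(d, up)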
