Given a Vector3 of Euler angles, how could one mathematically find the direction that the object is facing?
In other words, how does Unity calculate the 'transform.forward' vector?
You should read up on spherical polar coordinates, noting the difference between conventional polar coordinates and Euler angles. But anyway, the formula is (cos(pitch)cos(yaw), cos(pitch)sin(yaw), sin(pitch)). Note that roll has no effect here.
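As a minimal sketch in Python (assuming pitch and yaw are in radians, and using the mathematical convention above rather than Unity's Y-up, Z-forward, degrees-based convention):

    import math

    def forward_from_euler(pitch, yaw):
        # Direction vector from pitch and yaw; roll does not affect it.
        return (math.cos(pitch) * math.cos(yaw),
                math.cos(pitch) * math.sin(yaw),
                math.sin(pitch))

    print(forward_from_euler(0.0, 0.0))  # pitch = yaw = 0 gives (1, 0, 0)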
Related
I have a 3d point that I would like to rotate using angles (yaw, pitch and roll) around { 0, 0, 0 }.
How would I go about it, without converting the angles into a matrix?
Well, you don't really "convert angles into a matrix". Strictly speaking, rotations are linear transformations, and in general a rotation is specified by an angle and an axis (vector) about which the rotation occurs. The two easiest ways to define an angle/axis rotation are quaternions and rotation matrices. There may be other approaches, but these two are used largely because they are the easiest methods anyone has proposed to date. Of the two, I personally prefer quaternions for rotations since they are easier to implement and require fewer arithmetic operations. 3x3 matrices have the benefit of handling general 3D->3D linear transformations; 4x4 matrices can perform general 3D->3D affine transformations on 3D vectors.
If you want to use separate rotations for yaw, pitch and roll, you should probably review issues related to Euler Angles. You can model these using either rotation matrices or quaternions. Both approaches will essentially be equivalent. It's simply a matter of defining a sequence of angle / axis pairs and multiplying them to get the final rotation. That rotation is then applied to whatever points you have to arrive at the rotated value.
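Here is a small Python sketch of that idea: build each angle/axis pair as a quaternion, multiply them into a single rotation, and apply it to a point. The axis assignment (yaw about Z, pitch about Y, roll about X) and the multiplication order are assumptions, since Euler-angle conventions vary:

    import math

    def quat_from_axis_angle(axis, angle):
        # Unit quaternion (w, x, y, z) for a rotation of `angle` radians about `axis`.
        s = math.sin(angle / 2.0)
        return (math.cos(angle / 2.0), axis[0] * s, axis[1] * s, axis[2] * s)

    def quat_mul(a, b):
        # Hamilton product of two quaternions.
        aw, ax, ay, az = a
        bw, bx, by, bz = b
        return (aw*bw - ax*bx - ay*by - az*bz,
                aw*bx + ax*bw + ay*bz - az*by,
                aw*by + ay*bw + az*bx - ax*bz,
                aw*bz + az*bw + ax*by - ay*bx)

    def rotate_point(q, p):
        # Rotate point p by unit quaternion q via q * p * conjugate(q).
        qp = quat_mul(quat_mul(q, (0.0, p[0], p[1], p[2])),
                      (q[0], -q[1], -q[2], -q[3]))
        return qp[1:]

    def euler_to_quat(yaw, pitch, roll):
        # Assumed convention: yaw about Z, pitch about Y, roll about X, roll applied first.
        qz = quat_from_axis_angle((0, 0, 1), yaw)
        qy = quat_from_axis_angle((0, 1, 0), pitch)
        qx = quat_from_axis_angle((1, 0, 0), roll)
        return quat_mul(qz, quat_mul(qy, qx))

    q = euler_to_quat(math.pi / 2, 0.0, 0.0)  # 90-degree yaw
    print(rotate_point(q, (1.0, 0.0, 0.0)))   # approximately (0, 1, 0)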
I have an object in 3D space where all I have is a position and an Euler-angle rotation. How can I calculate forward and up vectors from the information I have?
I know that I can calculate the forward vector in this way:
Vector3 forward = (target.getPosition() - object.getPosition()).normalize();
.. where target is any point along the axis the object is looking down. Using the information I have, how can I pick an arbitrary point in this way to normalize?
I'm not sure how to go about solving the "up" vector at all.
First create a transform matrix from your Euler angles (with the same method you use while rendering). Then extract the axis vectors for forward and up from it directly. For example, my view matrices use the Z axis for forward/backward and the X axis for left/right, so I would just use those two. You will find the location of the axis vectors within the matrix here:
Understanding 4x4 homogenous transform matrices
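A short Python/NumPy sketch of that approach, where the rotation order (Rz * Ry * Rx) and the choice of which columns mean "forward" and "up" are assumptions that must match your renderer's conventions:

    import numpy as np

    def rotation_matrix(yaw, pitch, roll):
        # 3x3 rotation matrix from Euler angles in radians, assumed order Rz * Ry * Rx.
        cy, sy = np.cos(yaw), np.sin(yaw)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cr, sr = np.cos(roll), np.sin(roll)
        rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        return rz @ ry @ rx

    R = rotation_matrix(0.3, 0.2, 0.0)
    forward = R[:, 2]  # the rotated Z axis (assumed forward, as in the answer above)
    up      = R[:, 1]  # the rotated Y axis (assumed up)
    print(forward, up)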
I have a plane which is rotated 90 degrees around an unknown axis. I know a point and normal for the plane before and after the rotation. How can I find the axis of rotation?
I've done a sketch to illustrate - it's 2D but the problem is actually 3D.
I worked it out with some help from #davin.
Use the cross product to find the direction of the rotation axis. The two known points on the planes and the unknown point on the rotation axis make an isosceles triangle, so simple geometry finds the unknown point.
The axis of rotation is an eigenvector of the rotation matrix. Moreover, it has eigenvalue 1. Every rotation matrix has such an eigenvector. Then just apply the translation to the eigenvector (presuming you're rotating and then translating) to get the final axis of rotation.
Mathematically, you need to solve Rv = v, which is equivalent to finding the nullspace of R-I.
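A quick numerical sketch of that in Python with NumPy, using a known 90-degree rotation about Z as the example R:

    import numpy as np

    # Example R: 90-degree rotation about the Z axis.
    R = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])

    # Solve R v = v: pick the eigenvector whose eigenvalue is (closest to) 1.
    vals, vecs = np.linalg.eig(R)
    axis = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    print(axis / np.linalg.norm(axis))  # (0, 0, 1) up to sign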
This is related to a problem described in another question (images there):
Opengl shader problems - weird light reflection artifacts
I have a .obj importer that creates a data structure and calculates the tangents and bitangents. Here is the data for the first triangle in my object:
My understanding of tangent space is that the normal points outward from the vertex, the tangent is perpendicular (orthogonal?) to the normal vector and points in the direction of positive S in the texture, and the bitangent is perpendicular to both. I'm not sure what you call it but I thought that these 3 vectors formed what would look like a rotated or transformed x,y,z axis. They wouldn't be 3 randomly oriented vectors, right?
Also my understanding: the normals in a normal map provide a new normal vector. But in tangent-space normal maps there is no built-in orientation between the RGB-encoded normal and the per-vertex normal. So you use a TBN matrix to bridge the gap and get them in the same space (or get the lighting in the right space).
But then I saw the object data... My structure has 270 vertices and all of them have a 0 for the Tangent Y. Is that correct for tangent data? Are these tangents in like a vertex normal space or something? Or do they just look completely wrong? Or am I confused about how this works and my data is right?
To get closer to solving my problem in the other question I need to make sure my data is right and my understanding on how tangent space lighting math works.
The tangent and bitangent vectors point in the direction of the S and T components of the texture coordinate (U and V for people not used to OpenGL terms). So the tangent vector points along S and the bitangent points along T.
So yes, these do not have to be orthogonal to either the normal or each other. They follow the direction of the texture mapping. Indeed, that's their purpose: to allow you to transform normals from model space into the texture's space. They define a mapping from model space into the space of the texture.
The tangent and bitangent will only be orthogonal to each other if the S and T components at that vertex are orthogonal. That is, if the texture mapping has no shearing. And while most texture mapping algorithms will try to minimize shearing, they can't eliminate it. So if you want an accurate matrix, you need a non-orthogonal tangent and bitangent.
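For reference, here is a minimal Python/NumPy sketch of the standard per-triangle tangent/bitangent computation from positions and UVs (real importers accumulate and average these per vertex; the function name is made up):

    import numpy as np

    def triangle_tangent_bitangent(p0, p1, p2, uv0, uv1, uv2):
        # Solve e1 = du1*T + dv1*B and e2 = du2*T + dv2*B for tangent T and bitangent B.
        e1, e2 = p1 - p0, p2 - p0
        du1, dv1 = uv1 - uv0
        du2, dv2 = uv2 - uv0
        r = 1.0 / (du1 * dv2 - du2 * dv1)  # fails if the UV triangle is degenerate
        tangent   = r * (dv2 * e1 - dv1 * e2)
        bitangent = r * (du1 * e2 - du2 * e1)
        return tangent, bitangent

    p0, p1, p2 = (np.array(v, float) for v in [(0, 0, 0), (1, 0, 0), (0, 1, 0)])
    uv0, uv1, uv2 = (np.array(v, float) for v in [(0, 0), (1, 0), (0, 1)])
    print(triangle_tangent_bitangent(p0, p1, p2, uv0, uv1, uv2))

Note that nothing in this construction forces the tangent's Y component to be zero in general; seeing Tangent Y = 0 on every vertex usually just reflects how that particular mesh's UVs are laid out relative to its geometry.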
I need to project a 3D object onto a sphere's surface (uhm.. like casting a shadow).
AFAIR this should be possible with a projection matrix.
If the "shadow receiver" was a plane, then my projection matrix would be a 3D to 2D-plane projection, but my receiver in this case is a 3D spherical surface.
So given sphere1(centerpoint, radius), sphere2(othercenter, otherradius) and an eyepoint, how can I compute a matrix that projects all points from sphere2 onto sphere1 (like casting a shadow)?
Do you mean that given a vertex v you want the following projection:
v' = centerpoint + (v - centerpoint) * (radius / |v - centerpoint|)
This is not possible with a projection matrix. You could easily do it in a shader though.
Matrices are commonly used to represent linear operations, like projection onto a plane.
In your case, the resulting vertices aren't deduced from input using a linear function, so this projection is not possible using a matrix.
If the sphere1 is sphere((0,0,0),1), that is, the sphere of radius 1 centered at the origin, then you're in effect asking for a way to convert any location (x,y,z) in 3D to a corresponding location (x', y', z') on the unit sphere. This is equivalent to vector renormalization: (x',y',z') = (x,y,z)/sqrt(x^2+y^2+z^2).
If sphere1 is not the unit sphere, but is say sphere((a,b,c),R) you can do mostly the same thing:
(x',y',z') = R*(x-a,y-b,z-c) / sqrt((x-a)^2+(y-b)^2+(z-c)^2) + (a,b,c). This is equivalent to changing coordinates so the first sphere is the unit sphere, solving the problem, then changing coordinates back.
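A small Python sketch of that formula, with an explicit guard for the division-by-zero case discussed below:

    import numpy as np

    def project_onto_sphere(p, center, radius):
        # Nonlinear projection: push p radially onto the sphere's surface.
        p = np.asarray(p, float)
        c = np.asarray(center, float)
        d = p - c
        dist = np.linalg.norm(d)
        if dist == 0.0:
            raise ValueError("the sphere's center has no well-defined projection")
        return c + radius * d / dist

    print(project_onto_sphere((2.0, 0.0, 0.0), (0.0, 0.0, 0.0), 1.0))  # [1. 0. 0.]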
As people have pointed out, these functions are nonlinear, so the projection cannot be called a "matrix." But if you prefer for some reason to start with a projection matrix, you could project first from 3D to a plane, then from a plane to the sphere. I'm not sure if that would be any better though.
Finally, let me point out that linear maps don't produce division-by-zero errors, but if you look closely at the formulas above, you'll see that this map can. Geometrically, that's because it's hard to project the center point of a sphere to its boundary.