Vector transformations in OpenGL ES

Why are vector transformations done in reverse order in OpenGL ES? Is it because the vectors are stored in column-matrix form? It seems that they have made things unnecessarily difficult.

It probably has to do with the fact that vector transformations aren't commutative. Changing the order can give you a different result.
A simple thought experiment proves the point:
For a unit vector (1, 0, 0), a +90 degree rotation about the z-axis, followed by a +90 degree rotation about the x-axis results in a vector (0, 0, 1).
If you start with the +90 degree rotation about the x-axis, followed by the +90 degree rotation about the z-axis, you'll have a vector (0, 1, 0).
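A minimal sketch of that experiment using GLM (the same library used in the glm snippets further down this page). With column vectors (v' = M * v) the matrix written closest to the vector is applied first, which is also why the transforms appear in "reverse" order in code:

#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    const glm::vec4 v(1.0f, 0.0f, 0.0f, 0.0f);  // unit vector (1, 0, 0); w = 0 marks a direction
    const glm::mat4 rotZ = glm::rotate(glm::mat4(1.0f), glm::radians(90.0f), glm::vec3(0, 0, 1));
    const glm::mat4 rotX = glm::rotate(glm::mat4(1.0f), glm::radians(90.0f), glm::vec3(1, 0, 0));

    const glm::vec4 a = rotX * rotZ * v;  // z rotation first, then x -> (0, 0, 1)
    const glm::vec4 b = rotZ * rotX * v;  // x rotation first, then z -> (0, 1, 0)

    std::printf("z then x: (%.1f, %.1f, %.1f)\n", a.x, a.y, a.z);
    std::printf("x then z: (%.1f, %.1f, %.1f)\n", b.x, b.y, b.z);
    return 0;
}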

Related

How to calculate direction vectors from axis-angle rotation?

I'm representing rotations for actors in my rendering engine using a vec4 with axis-angle notation. The first 3 components (x, y, z) represent the (normalized) axis of rotation, and the last component (w) represents the angle (in radians) we have rotated about this axis.
For example,
With axis (0, 1, 0) and angle 0, up is (0, 1, 0) and forward is (0, 0, -1).
With axis (0, 0, 1) and angle 180, up is (0, 0, 1) and forward is (0, -1, 0).
My current solution (which doesn't work), looks like this:
// glm::vec4 Movable::getOrientation();
// glm::vec3 FORWARD(0.0f, 0.0f, -1.0f);
glm::vec3 Movable::getForward() {
    return glm::vec3(glm::rotate(
        this->getOrientation().w, glm::vec3(this->getOrientation())) *
        glm::vec4(FORWARD, 1.0f));
}
I've defined the up direction to be the same as the rotational axis, but I'm having trouble calculating the forward directional vector for an arbitrary axis. What is the easiest way to do this? I'd like to take advantage of glm functions wherever possible.
One thing to keep in mind about axis-angle is that "up" should mean the same thing for every rotation with an angle of 0, because an angle of 0 is no rotation at all, no matter which way the axis points. So you can't just say up is in the direction of the axis. The proper way to calculate forward and up is to start with two reference vectors, say (1, 0, 0) for forward and (0, 1, 0) for up, and then apply the rotation to both of those vectors to obtain the new forward and up.
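As a rough sketch of that approach with GLM, assuming the vec4 layout from the question (xyz = normalized axis, w = angle in radians): build a quaternion with glm::angleAxis and rotate both reference vectors with it. FORWARD matches the question's constant; rotatedForward and rotatedUp are hypothetical helper names.

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Reference directions for the identity (un-rotated) orientation.
static const glm::vec3 FORWARD(0.0f, 0.0f, -1.0f);
static const glm::vec3 UP(0.0f, 1.0f, 0.0f);

// orientation.xyz = normalized rotation axis, orientation.w = angle in radians.
glm::vec3 rotatedForward(const glm::vec4& orientation) {
    const glm::quat q = glm::angleAxis(orientation.w, glm::vec3(orientation));
    return q * FORWARD;  // rotate the reference forward vector
}

glm::vec3 rotatedUp(const glm::vec4& orientation) {
    const glm::quat q = glm::angleAxis(orientation.w, glm::vec3(orientation));
    return q * UP;       // rotate the reference up vector (not the axis itself)
}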

What is the best way to calculate an element's angle when multiple rotates are applied to it

If you do a rotateX(180deg) rotateY(180deg) it's upside down now. So if the mouse is set to move a child element up on drag that child element will now be moving down (depending on how you have things set up).
-webkit-transform: rotateX(?deg) rotateY(?deg) rotateZ(?deg); // where does it point?
ONLY SETUP FOR WEBKIT
Take a look at the fiddle (code is a mess, stripped down). Draw 360 tic marks, arranged in a circle, on your computer monitor. How can you tell what tic mark the arrow is pointing to (assuming the box is at the exact center of the circle)?
*edit - the transform-origin being used is at the center of the cube
Note: Everything that follows assumes you are using a vector that passes through the origin, as in this example. In your original example the vector is additionally offset from the origin by the vector [0, 0, 60]. This complicates calculations slightly, so I have used the simplified version in my explanation.
Your vector is currently defined by consecutively applied rotations to a predefined vector. Here is how you can use those rotations to determine the cartesian coordinates of the final vector:
Let us say your vector is [0, 1, 0] (assuming the arrow is 1 unit long and starts at the origin)
Apply the x, y and z rotations by multiplying your vector by the corresponding rotation matrices, replacing θ with the matching angle in each case (keep in mind that the order of multiplication matters, as illustrated at the top of this page):

Rx(θ) = [ 1       0        0      ]
        [ 0       cos θ   -sin θ  ]
        [ 0       sin θ    cos θ  ]

Ry(θ) = [  cos θ   0    sin θ ]
        [  0       1    0     ]
        [ -sin θ   0    cos θ ]

Rz(θ) = [ cos θ   -sin θ   0 ]
        [ sin θ    cos θ   0 ]
        [ 0        0       1 ]
The resulting vector is your original vector transformed by the specified x, y and z rotations
Once you have obtained the rotated vector, finding the projection of the vector on the x-y plane becomes easy.
For example, considering the vector [10, 20, 30] (cartesian coordinates), the projection on the x-y plane is the vector [10, 20, 0]. The angle of this vector from the horizontal can be calculated as:
tan⁻¹(20/10) = 1.107 rad (counter-clockwise from the positive x axis)
             = 63.43 deg (counter-clockwise from the positive x axis)
This means the arrow points between the 63rd and 64th "tick marks" counting counter clockwise from the one pointing directly to the right.
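Putting the recipe together as a GLM sketch (arrowAngleDegrees is a hypothetical helper; the order in which the three matrices are multiplied below is an assumption, so match it to the order your CSS transform actually applies the rotations):

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Apply the rotateX/rotateY/rotateZ angles (degrees, as in the CSS) to the
// arrow's initial direction, project onto the x-y plane, and return the angle
// from the positive x axis in degrees.
float arrowAngleDegrees(float rx, float ry, float rz) {
    const glm::vec4 arrow(0.0f, 1.0f, 0.0f, 0.0f);  // arrow 1 unit long, starting at the origin

    const glm::mat4 Rx = glm::rotate(glm::mat4(1.0f), glm::radians(rx), glm::vec3(1, 0, 0));
    const glm::mat4 Ry = glm::rotate(glm::mat4(1.0f), glm::radians(ry), glm::vec3(0, 1, 0));
    const glm::mat4 Rz = glm::rotate(glm::mat4(1.0f), glm::radians(rz), glm::vec3(0, 0, 1));

    // With column vectors the right-most matrix is applied first,
    // so this applies the x rotation, then y, then z.
    const glm::vec4 v = Rz * Ry * Rx * arrow;

    // The projection onto the x-y plane is (v.x, v.y, 0); its angle from the
    // positive x axis, counter-clockwise, is atan2(y, x).
    return glm::degrees(std::atan2(v.y, v.x));
}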

How to calculate azimuth & elevation relative to a camera direction of view in 3D ...?

I'm a bit rusty here.
I have a vector (camDirectionX, camDirectionY, camDirectionZ) that represents my camera direction of view.
I have a (camX, camY, camZ) that is my camera position.
Then, I have an object placed at (objectX, objectY, objectZ)
How can I calculate, from the camera's point of view, the azimuth & elevation of my object?
The first thing I would do, to simplify the problem, is transform the coordinate space so the camera is at (0, 0, 0) and pointing straight down one of the axes (so the direction is say (0, 0, 1)). Translating so the camera is at (0, 0, 0) is pretty trivial, so I won't go into that. Rotating so that the camera direction is (0, 0, 1) is a little trickier...
One way of doing it is to construct the full orthonormal basis of the camera, then stick that in a rotation matrix and apply it. The "orthonormal basis" of the camera is a fancy way of saying the three vectors that point forward, up, and right from the camera. They should all be at 90 degrees to each other (which is what the ortho bit means), and they should all be of length 1 (which is what the normal bit means).
You can get these vectors with a bit of cross-product trickery: the cross product of two vectors is perpendicular (at 90 degrees) to both.
To get the right-facing vector, we can just cross-product the camera direction vector with (0, 1, 0) (a vector pointing straight up). You'll need to normalise the vector you get out of the cross-product.
To get the up vector of the camera, we can cross product the right-facing vector we just calculated with the camera direction vector (in that order; reversing the operands gives a vector that points down). Assuming both input vectors are normalised, this shouldn't need normalising.
We now have the orthonormal basis of the camera. If we stick these vectors into the rows of a 3x3 matrix, we get a rotation matrix that will transform our coordinate space so the camera is pointing straight down one of the axes (which one depends on the order you stick the vectors in).
It's now fairly easy to calculate the azimuth and elevation of the object.
To get the azimuth, just do an atan2 on the x/z coordinates of the object.
To get the elevation, project the object coordinates onto the x/z plane (just set the y coordinate to 0), then do:
acos(dot(normalise(object coordinates), normalise(projected coordinates)))
This will always give a positive angle -- you probably want to negate it if the object's y coordinate is less than 0.
The code for all of this will look something like:
fwd = vec3(camDirectionX, camDirectionY, camDirectionZ)
cam = vec3(camX, camY, camZ)
obj = vec3(objectX, objectY, objectZ)
# if fwd is already normalised you can skip this
fwd = normalise(fwd)
# translate so the camera is at (0, 0, 0)
obj -= cam
# calculate the orthonormal basis of the camera
right = normalise(cross(fwd, (0, 1, 0)))
up = cross(right, fwd)
# rotate so the camera is pointing straight down the z axis
# (this is essentially a matrix multiplication)
obj = vec3(dot(obj, right), dot(obj, up), dot(obj, fwd))
azimuth = atan2(obj.x, obj.z)
proj = vec3(obj.x, 0, obj.z)
elevation = acos(dot(normalise(obj), normalise(proj)))
if obj.y < 0:
    elevation = -elevation
One thing to watch out for is that the cross-product of your original camera vector with (0, 1, 0) will return a zero-length vector when your camera is facing straight up or straight down. To fully define the orientation of the camera, I've assumed that it's always "straight", but that doesn't mean anything when it's facing straight up or down -- you need another rule.
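For reference, here is a GLM version of the same steps (azimuthElevation is a hypothetical wrapper; angles are returned in radians):

#include <cmath>
#include <glm/glm.hpp>

void azimuthElevation(glm::vec3 fwd, glm::vec3 cam, glm::vec3 obj,
                      float& azimuth, float& elevation) {
    fwd = glm::normalize(fwd);                    // skip if fwd is already normalised

    obj -= cam;                                   // translate so the camera is at (0, 0, 0)

    // orthonormal basis of the camera (see the caveat above about fwd
    // pointing straight up or down)
    const glm::vec3 right = glm::normalize(glm::cross(fwd, glm::vec3(0, 1, 0)));
    const glm::vec3 up = glm::cross(right, fwd);

    // rotate so the camera points straight down the z axis
    obj = glm::vec3(glm::dot(obj, right), glm::dot(obj, up), glm::dot(obj, fwd));

    azimuth = std::atan2(obj.x, obj.z);

    const glm::vec3 proj(obj.x, 0.0f, obj.z);     // projection onto the x-z plane
    elevation = std::acos(glm::dot(glm::normalize(obj), glm::normalize(proj)));
    if (obj.y < 0.0f)
        elevation = -elevation;
}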

Combine Rotation Axis Vectors

I'm experimenting with using axis-angle vectors for rotations in my hobby game engine. This is a 3-component vector along the axis of rotation whose length is the rotation angle in radians. I like them because:
Unlike quats or rotation matrices, I can actually see the numbers and visualize the rotation in my mind
They take a little less memory than quaternions or matrices.
I can represent values outside the range of -Pi to Pi (This is important if I store an angular velocity)
However, I have a tight loop that updates the rotation of all of my objects (tens of thousands) based on their angular velocity. Currently, the only way I know to combine two rotation axis vectors is to convert them to quaternions, multiply them, and then convert the result back to an axis/angle. Through profiling, I've identified this as a bottleneck. Does anyone know a more straightforward approach?
Your representation is equivalent to quaternion rotation, provided your rotation vectors are unit length. If you don't want to use some canned quaternion data structure you should simply ensure your rotation vectors are of unit length, and then work out the equivalent quaternion multiplications / reciprocal computation to determine the aggregate rotation. You might be able to reduce the number of multiplications or additions.
If your angle is the only thing that is changing (i.e. the axis of rotation is constant), then you can simply use a linear scaling of the angle and, if you'd like, mod it to be in the range [0, 2π). So, if you have a rotation rate of α radians per second, starting from an initial angle of θ0 at time t0, then the rotation angle at time t is given by:
θ(t) = (θ0 + α(t - t0)) mod 2π
You then just apply that rotation to your collection of vectors.
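A tiny sketch of that fixed-axis case (angleAt is a hypothetical helper; alpha is the rotation rate in radians per second):

#include <cmath>

float angleAt(float theta0, float alpha, float t0, float t) {
    const float twoPi = 2.0f * 3.14159265358979f;
    float theta = std::fmod(theta0 + alpha * (t - t0), twoPi);
    if (theta < 0.0f)
        theta += twoPi;   // std::fmod can return a negative remainder
    return theta;         // wrapped into [0, 2*pi)
}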
If none of this improves your performance, you should consider using a canned quaternion library, as such things are already optimized for the kinds of application you're discussing.
You can keep them as angle axis values.
Build a cross-product (anti-symmetric) matrix using the angle axis values (x,y,z) and weight the elements of this matrix by multiplying them by the angle value. Now sum up all of these cross-product matrices (one for each angle axis value) and find the final rotation matrix by using the matrix exponential.
If matrix A represents this cross-product matrix (built from Angle Axis value) then,
exp(A) is equivalent to the rotation matrix R (i.e., equivalent to your quaternion in matrix form).
Therefore,
exp(A1 + A2) = R1 * R2
(strictly speaking, this identity only holds when A1 and A2 commute, e.g. rotations about the same axis; otherwise it is only an approximation for small rotations)
probably a more expensive calculation in the end...
You should use unit quaternions rather than scaled vectors to represent your rotations. It can be shown (not by me) that any representation of rotations using three parameters will run into problems (i.e. is singular) at some point. In your case it occurs where your vector has a length of 0 (i.e. the identity) and at lengths of 2π, 4π, etc. In these cases the representation becomes singular. Unit quaternions and rotation matrices do not have this problem.
From your description, it sounds like you are updating your rotation state as a result of numerical integration. In this case you can update your rotation state by converting your rotational rate (\omega) to a quaternion rate (q_dot). If we represent your quaternion as q = [q0 q1 q2 q3] where q0 is the scalar part then:
q_dot = (1/2) * E * \omega
where
    [ -q1 -q2 -q3 ]
E = [  q0 -q3  q2 ]
    [  q3  q0 -q1 ]
    [ -q2  q1  q0 ]
Then your update becomes
q(k+1) = q(k) + q_dot*dt
for simple (Euler) integration; re-normalise q(k+1) afterwards so it stays a unit quaternion. You could choose a different integrator if you wish.
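As a sketch of that update with GLM quaternions (assuming omega is the body-frame angular velocity in radians per second): multiplying q by the pure quaternion (0, omega) is the same operation as the E matrix above.

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

glm::quat integrateOrientation(glm::quat q, const glm::vec3& omega, float dt) {
    const glm::quat omegaQuat(0.0f, omega.x, omega.y, omega.z);  // glm::quat takes (w, x, y, z)
    const glm::quat qDot = 0.5f * (q * omegaQuat);               // q_dot = (1/2) * E * omega
    q = q + qDot * dt;                                           // simple Euler step
    return glm::normalize(q);                                    // keep q unit length
}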
Old question, but another example of Stack Overflow answering questions the OP wasn't asking. The OP already listed his reasoning for not using quaternions to represent velocity. I was in the same boat.
That said, here is how you combine two angular velocities when each is represented by a vector along the axis of rotation whose magnitude is the amount of rotation:
Just add them together, component by component. Hope that helps some other soul out there.

Calculating 'up vector' from transformation matrix in 3D

I just ran into a strange problem with my 3D project. Everyone knows the algorithm for calculating a LookAt vector, but it is not so easy to calculate the "up" vector from a transformation matrix (or at least maybe I simply missed something).
The problem is the following:
The "up" vector is (0, 1, 0) for the identity rotation matrix, and I want to rotate it with the matrix, but not scale or translate it. If you have a simple rotation matrix the procedure is easy (multiply the vector by the matrix). BUT if the matrix also contains translation and scaling (e.g. it was produced by multiplying several other matrices), this won't work, as the vector would be translated and scaled as well.
My question is how to get this "up" vector from a single transformation matrix, presuming the vector (0, 1, 0) corresponds to the identity rotation matrix.
Translation actually does affect it. Let's say in the example the transformation matrix didn't do any scaling or rotation, but did translate it 2 units in the Z direction. Then when you transform (0,1,0) you get (0,1,2), and then normalizing it gives (0,1/sqrt(5), 2/sqrt(5)).
What you want to do is take the difference between the transformation of (0,1,0) and the transformation of (0,0,0), and then normalize the resulting vector. In the above example you would take (0,1,2) minus (0,0,2) (0,0,2 being the transformation of the zero vector) to get (0,1,0) as desired.
Apply your matrix to both endpoints of the up vector -- (0, 0, 0) and (0, 1, 0). Calculate the vector between those two points, and then scale it to get a unit vector. That should take care of the translation concern.
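A sketch of that approach with GLM (upFromTransform is a hypothetical helper name):

#include <glm/glm.hpp>

glm::vec3 upFromTransform(const glm::mat4& M) {
    // Transform both endpoints of the up vector as points (w = 1), then take
    // the difference: the translation part of M cancels out.
    const glm::vec3 tip  = glm::vec3(M * glm::vec4(0.0f, 1.0f, 0.0f, 1.0f));
    const glm::vec3 base = glm::vec3(M * glm::vec4(0.0f, 0.0f, 0.0f, 1.0f));
    return glm::normalize(tip - base);  // normalising removes any scaling
}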
Simply multiply the up vector (0,1,0) with the transformation and normalize; you'll get the new calculated up vector that way. (For a 4x4 matrix this only works if you multiply it as a direction, i.e. with w = 0, so the translation column is ignored.)
I'm no expert at matrix calculations, but it strikes me as a simple matter of calculating the up vector for the multiplied matrix and normalizing the resulting vector to a unit vector. Translation shouldn't affect it at all, and scaling is easily defeated by normalizing.
I am aware this is an OLD thread, but felt it was necessary to point this out to anyone else stumbling upon this question.
In linear algebra, we are taught to look at a matrix as a collection of basis vectors, each representing a direction in space available to describe a relative position from the origin.
The basis vectors of any matrix (the vectors that describe the cardinal directions) can be read directly from the matrix's columns.
Simply put, your first column is your "x++" vector, your second is the "y++" vector, and the third is the "z++" vector. If you are working with 4x4 matrices in 3D, the fourth column relates to translation of the origin; the fourth element of each basis column and the entire fourth column can be ignored for the sake of extracting directions.
Example: let us consider a matrix representing a 90 degree rotation about the y axis.
[0, 0, -1]
[0, 1, 0]
[1, 0, 0]
Given the question's convention that (0, 1, 0) is up for the identity matrix, the up vector is read straight from the second column: (0, 1, 0), unchanged, because the rotation is about the y axis itself. The third column shows where the z basis vector ends up: (-1, 0, 0), i.e. it now points down the negative x axis. You can read off the positive cardinal directions from the columns this way, and negating them gives their opposite counterparts.
Once you have a matrix from which to extract the directions, no non-trivial calculations are necessary.
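A sketch of reading it straight from the columns with GLM, which stores matrices column-major, so m[1] is the second column, i.e. the transformed y (up) axis (upFromColumns is a hypothetical helper name; this assumes the column-vector convention used by GLM/OpenGL):

#include <glm/glm.hpp>

glm::vec3 upFromColumns(const glm::mat4& m) {
    // m[0] = x axis, m[1] = y (up) axis, m[2] = z axis, m[3] = translation.
    // Normalising guards against any scaling baked into the matrix.
    return glm::normalize(glm::vec3(m[1]));
}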
