Is there an algorithm to apply a rotation onto Euler angles? - math

I'm working on some local space functions for a little project.
I've got a function called getRotation(), which should return the Euler angles for an object given its parent's rotation and its own local rotation.
So given two rotation vectors:
Vector3f parentRotation
Vector3f localRotation
How can I apply the local rotation onto the parent rotation?
If it helps I'm using Java with LWJGL 2.

The solution here was to move to a transformation matrix, as suggested by Spektre.
While this may be possible with Vector3f Euler angles, they just aren't suitable for game development, at least not with this stack, as they introduce many problems.
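For reference, a minimal sketch of that matrix approach in Java with LWJGL 2's org.lwjgl.util.vector classes (the Z-Y-X rotation order and radian angles are assumptions; adjust to your own conventions):
import org.lwjgl.util.vector.Matrix4f;
import org.lwjgl.util.vector.Vector3f;

// Build a rotation matrix from Euler angles (radians); the Z-Y-X order here is just one common convention.
static Matrix4f fromEuler(Vector3f e) {
    Matrix4f m = new Matrix4f();               // new Matrix4f() is the identity
    m.rotate(e.z, new Vector3f(0, 0, 1));
    m.rotate(e.y, new Vector3f(0, 1, 0));
    m.rotate(e.x, new Vector3f(1, 0, 0));
    return m;
}

// World rotation of a child = parent rotation followed by its local rotation.
static Matrix4f getRotation(Vector3f parentRotation, Vector3f localRotation) {
    Matrix4f parent = fromEuler(parentRotation);
    Matrix4f local  = fromEuler(localRotation);
    return Matrix4f.mul(parent, local, null);  // parent * local
}
Composing parent * local this way keeps the local rotation expressed in the parent's frame, which is usually what a nested getRotation() needs.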

Related

Understanding Angular velocities and their application

I recently had to convert euler rotation rates to vectorial angular velocity.
From what I understand, in a local referential, we can express the vectorial angular velocity by:
R = [rollRate, pitchRate, yawRate] (which is the correct order relative to the referential I want to use).
I also know that we can convert angular velocities to rotations (quaternion) for a given time-step via:
alpha = |R| * ts
nR = R / |R| * sin(alpha) <-- normalize and multiply each element by sin(alpha)
Q = [nRx i, nRy j, nRz k, cos(alpha)]
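For concreteness, a direct Java transcription of that conversion exactly as written (a sketch; the names are mine and angles are in radians; note that many axis-angle references use the half angle alpha/2 for the quaternion components):
// Sketch: the conversion above, as written, from an angular velocity vector R
// (rad per time unit) and a time-step ts to a quaternion [x, y, z, w].
static float[] rateToQuaternion(float[] R, float ts) {
    float mag = (float) Math.sqrt(R[0]*R[0] + R[1]*R[1] + R[2]*R[2]);
    if (mag == 0) return new float[] {0, 0, 0, 1};     // no rotation
    float alpha = mag * ts;
    float s = (float) Math.sin(alpha) / mag;           // normalize, then scale by sin(alpha)
    return new float[] {R[0]*s, R[1]*s, R[2]*s, (float) Math.cos(alpha)};
}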
When I test this for each axis individually, I find results that I totally expect (i.e. a 90° pitch/time-unit rate for 1 time unit => a 90° pitch angle).
When I use two axes for my rotation rates however, I don't fully understand the results:
For example, if I use rollRate = 0, pitchRate = 90, yawRate = 90, apply the rotation for a given time-step and convert the resulting quaternion back to euler, I obtain the following results:
(ts = 0.1) Roll: 0.712676, Pitch: 8.96267, Yaw: 9.07438
(ts = 0.5) Roll: 21.058, Pitch: 39.3148, Yaw: 54.9771
(ts = 1.0) Roll: 76.2033, Pitch: 34.2386, Yaw: 137.111
I understand that a "smooth" continuous rotation might change the roll component midway.
What I don't understand, however, is why after a full unit of time with a 90°/time-unit pitch rate combined with a 90°/time-unit yaw rate I end up with these pitch and yaw angles, and why I still have roll (I would have expected them to end up at [0°, 90°, 90°]).
I am pretty confident in both my axis + angle to quaternion and my quaternion to Euler formulas, as I've tested these extensively (both via unit testing and via field testing); I'm not sure, however, about the Euler rotation rate to angular-velocity "conversion".
My first bet would be that I do not understand how Euler rotation-rate axes interact with each other; my second would be that this "conversion" between Euler rotation rates and the angular velocity vector is incorrect.
Euler angles are not a good way of representing arbitrary angular movement. They are just a simplification used for graphics, games and robotics. They have some pretty hard restrictions, like your rotations consisting of only N perpendicular axes in N-D space. That is not how rotation works in the real world. On top of this, the spherical representation of the reper (reference frame) endpoint creates a lot of singularities (you know, when you cross the poles...).
Rotational movement is analogous to translation:
position speed acceleration
pos = Integral(vel) = Integral(Integral(acc))
ang = Integral(omg) = Integral(Integral(eps))
In an update timer, this can be rewritten as:
vel+=acc*dt; pos+=vel*dt;
omg+=eps*dt; ang+=omg*dt;
where dt is elapsed time (Timer interval).
The problem with rotation is that you cannot superimpose it like translation. Each rotation has its own axis (which does not need to be axis-aligned, nor centered), and each rotation affects the axis orientation of all the others too, so their order matters a lot. On top of all this there is also the gyroscopic moment, which creates a third rotation from any two whose axes are not parallel. Put all of this together and suddenly you see that Euler angles do not match the real geometry/physics of rotation. They can describe orientation and fake its rotation up to a degree, but do not expect them to make real sense once used for a physics simulation.
A real simulation would require a list of rotations described by their axis (not just direction but also origin) and angular speed (and its change), and in each simulation step the recomputation of the axes, as they will change (unless only a single rotation is present).
This can be done by using cumulative homogeneous transform matrices along with incremental rotations.
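A minimal sketch of that idea in plain Java, accumulating the orientation as a 3x3 matrix and applying a small axis-angle rotation each step (the row-major layout and helper shape are my assumptions, not a fixed recipe):
// Sketch: accumulate orientation as a 3x3 matrix by applying a small incremental
// rotation (axis-angle) each simulation step, instead of summing Euler angles.
static double[][] rotateIncremental(double[][] M, double[] axis, double omega, double dt) {
    double a = omega * dt;                               // small angle for this step
    double c = Math.cos(a), s = Math.sin(a), t = 1 - c;
    double x = axis[0], y = axis[1], z = axis[2];        // axis must be unit length
    double[][] R = {                                     // Rodrigues rotation matrix
        {t*x*x + c,   t*x*y - s*z, t*x*z + s*y},
        {t*x*y + s*z, t*y*y + c,   t*y*z - s*x},
        {t*x*z - s*y, t*y*z + s*x, t*z*z + c  }
    };
    double[][] out = new double[3][3];                   // out = R * M
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            for (int k = 0; k < 3; k++)
                out[i][j] += R[i][k] * M[k][j];
    return out; // re-orthonormalize every few hundred steps to fight floating point drift
}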
Sadly, the majority of programmers prefer Euler angles and quaternions, simply because they do not know that there are better and simpler options, and once they do, they stick to Euler angles anyway as matrix math seems more complicated to them... That is why many games nowadays have gimbal locks, major rotation errors and glitches, and unrealistic physics.
Do not get me wrong, they still have their uses (like, for example, restricting free look for a camera, etc.), but they are misused for stuff they are the worst option for.

How to get new camera direction vector when moving an arbitrary relative angle

I am implementing a camera class and am getting stuck with some things
Let's suppose the camera is at Point (0,0,0) looking at a certain direction with its corresponding UP and RIGHT vectors.
I have a joystick control which allows you to go forward-backwards, or change orientation by moving (left-right) or (up-down), according to the above mentioned vectors.
How can I know, given the 3 vectors, what the resulting direction vector is if, for instance, I want to turn N degrees to the right?
If you are talking about rotating your camera, here is how it is done: every rotation is a matrix that transforms coordinates, so all you have to do is calculate the matrix of your rotation and then apply it to the Dir, Up and Right vectors of your camera to get the new ones after the rotation is done.
Here is a little reading about rotation matrices (read the section of 3D rotations):
http://mathworld.wolfram.com/RotationMatrix.html
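As a concrete sketch (plain Java arrays, helper name is mine), turning N degrees to the right can be done by rotating Dir and Right around the Up axis using Rodrigues' rotation formula:
// Sketch: rotate vector v around unit axis u by angle radians (Rodrigues' formula).
static float[] rotateAround(float[] v, float[] u, float angle) {
    float c = (float) Math.cos(angle), s = (float) Math.sin(angle);
    float dot = u[0]*v[0] + u[1]*v[1] + u[2]*v[2];
    float[] cross = { u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0] };
    float[] r = new float[3];
    for (int i = 0; i < 3; i++)
        r[i] = v[i]*c + cross[i]*s + u[i]*dot*(1 - c);
    return r;
}

// Turning N degrees to the right (the sign of the angle depends on your axis conventions):
// dir   = rotateAround(dir,   up, (float) Math.toRadians(-N));
// right = rotateAround(right, up, (float) Math.toRadians(-N));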

Find how high up the camera is looking with a rotation Matrix

I'm working on an FPS with the jPCT library. One key thing that all FPS's need is to prevent the players from looking behind them by pulling the mouse too far up/down. Currently, I'm using some example code found on the jPCT's website that keeps track of how many angles have been added to the camera, but I'm worried about rounding issues with all the angles in radians. I can get a rotation Matrix from jPCT's camera, and I know that it contains the information to figure out how "high" up the player is looking, but I have no clue how to get it out of the matrix.
What would I look for in the rotation matrix that will tell me if the player is looking more "up" than straight up or more "down" than straight down?
If you're updating your matrix each time the player moves you're going to run into trouble due to floating point errors and your rotation matrix will turn into a skew matrix. One solution is to orthonormalise the matrix every now and then but usually it's better to simply keep the player's pitch, yaw (and roll if you need it) as floats and build your matrix from those angles when the player changes orientation, looks up/down etc. If you use optimised code for each angle (or a single method for converting Euler angles to a matrix) it's not slower than what you seem to be doing right now. You won't run into Gimbal lock issues as the camera orientation will be restricted anyway.
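A sketch of that angle-tracking approach (the clamp limit and method names are illustrative, not jPCT API):
// Sketch: keep pitch/yaw as plain floats, clamp pitch, rebuild the camera matrix from them.
private float pitch = 0f;   // radians, up/down
private float yaw   = 0f;   // radians, left/right

void mouseLook(float dPitch, float dYaw) {
    yaw += dYaw;
    float limit = (float) Math.toRadians(89.0);        // stop just short of straight up/down
    pitch = Math.max(-limit, Math.min(limit, pitch + dPitch));
    // Rebuild the rotation matrix from (pitch, yaw) here with your Euler-to-matrix routine,
    // rather than accumulating rotations into the matrix itself.
}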
As for your specific question, I think you'd need to calculate the angle between the matrix Z axis (the third row or column, depending on how your matrices are oriented) and an unrotated vector pointing down your Z axis.
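A sketch of that check, assuming a row-major rotation matrix whose third row is the camera's Z axis (use the third column for column-major matrices):
// Sketch: angle between the camera's rotated Z axis (third row of the matrix here)
// and the unrotated world Z axis (0, 0, 1). 0 means looking straight along world Z.
static double angleFromWorldZ(float[][] m) {
    float zz = m[2][2];                                 // dot(rotatedZ, worldZ) for unit rows
    return Math.acos(Math.max(-1.0, Math.min(1.0, zz)));
}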

Finding a Quaternion from Gyroscope Data?

I've been trying to build a filter that can successfully combine accelerometer, compass (geomagnetic), and gyroscope data to produce a smooth augmented reality experience. After reading this post along with lots of discussions, I finally found a good algorithm to correct my sensor data. Most examples I've read show how to correct accelerometers with gyroscopes, but not how to correct compass + accelerometer data with a gyroscope. This is the algorithm I've settled on, which works great except that I run into gimbal lock when I try to look at the scene while not facing north. The algorithm is essentially a balance filter, only implemented in 3D.
Initialization Step:
Initialize a world rotation matrix using the (noisy) accelerometer and compass sensor data (this is provided by the Android already)
Update Steps:
1. Integrate the gyroscope reading (time_delta * reading) for each axis (x, y, z)
2. Rotate the world rotation matrix using the Euler angles supplied by the integration
3. Find the quaternion from the newly rotated matrix
4. Find the rotation matrix from the unfiltered accelerometer + compass data (using the OS-provided function; I think it uses an angle/axis calculation)
5. Get the quaternion from the matrix generated in the previous step
6. Slerp between the gyroscope-derived quaternion (step 3) and the accelerometer/compass-derived quaternion (step 5), using a coefficient based on some experimental magic
7. Convert back to a matrix and use that to draw the scene.
My problem is that when I'm facing North and then try to look south, the whole thing blows up and it appears to be gimbal lock. After a few gimbal locks, the whole filter is in an undefined state. Searching around I hear everybody saying "Just use Quaternions" but I'm afraid it's not that simple (at least not to me) and I know there's something I'm just missing. Any help would be greatly appreciated.
The biggest reason to use quaternions is to avoid the singularity problem with Euler angles. You can directly rotate a quaternion with gyro data.
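A minimal sketch of that direct update (quaternion stored scalar-first as [w, x, y, z], matching the multiplication code further down; gyro rates in rad/s; the names are mine):
// Sketch: integrate body-frame gyro rates (rad/s) directly into the orientation
// quaternion q = [w, x, y, z] (scalar first, same layout as the code below).
static float[] gyroStep(float[] q, float gx, float gy, float gz, float dt) {
    float mag = (float) Math.sqrt(gx*gx + gy*gy + gz*gz);
    if (mag < 1e-9f) return q;                          // no measurable rotation this step
    float half = mag * dt * 0.5f;                       // quaternions use the half angle
    float s = (float) Math.sin(half) / mag;
    float[] dq = {(float) Math.cos(half), gx*s, gy*s, gz*s};
    // q_new = q * dq (delta applied in the local/body frame)
    return new float[] {
        q[0]*dq[0] - q[1]*dq[1] - q[2]*dq[2] - q[3]*dq[3],
        q[0]*dq[1] + q[1]*dq[0] + q[2]*dq[3] - q[3]*dq[2],
        q[0]*dq[2] - q[1]*dq[3] + q[2]*dq[0] + q[3]*dq[1],
        q[0]*dq[3] + q[1]*dq[2] - q[2]*dq[1] + q[3]*dq[0]
    };
}
// After each step, slerp/nlerp the result a small amount toward the quaternion built from
// the accelerometer + compass matrix and renormalize; that is the blending step described above.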
Many apologies if this information is delayed or not specifically useful, but it may be useful to others, as I found it after some research:
a. Using a Kalman (linear or non-linear) filter you do the following:
Use the gyro to integrate the delta angle, while the accelerometers tell you the outer limit.
b. Euler rates are different from the gyro rates of angle change, so you'll need a quaternion or Euler representation:
The quaternion approach is non-trivial, but the two main steps are:
1. For roll, pitch, yaw you get three quaternions of the form cos(w) + sin(w)·v, where cos(w) is the scalar part and sin(w)·v is the vector part (or, when coding, just more variables).
Then simply multiply all 3 quaternions to get a delta quaternion,
i.e. quatDelta[0] = c1*c2*c3 - s1*s2*s3;
quatDelta[1] = c1*c2*s3 + s1*s2*c3;
quatDelta[2] = s1*c2*c3 + c1*s2*s3;
quatDelta[3] = c1*s2*c3 - s1*c2*s3;
where c1, c2, c3 are the cosines of roll, pitch, yaw and s1, s2, s3 the sines of the same (actually of half of those pre-integrated gyro angles).
2. Then just multiply by the old quaternion you had:
newQuat[0]=(quaternion[0]*quatDelta[0] - quaternion[1]*quatDelta[1] - quaternion[2]*quatDelta[2] - quaternion[3]*quatDelta[3]);
newQuat[1]=(quaternion[0]*quatDelta[1] + quaternion[1]*quatDelta[0] + quaternion[2]*quatDelta[3] - quaternion[3]*quatDelta[2]);
newQuat[2]=(quaternion[0]*quatDelta[2] - quaternion[1]*quatDelta[3] + quaternion[2]*quatDelta[0] + quaternion[3]*quatDelta[1]);
newQuat[3]=(quaternion[0]*quatDelta[3] + quaternion[1]*quatDelta[2] - quaternion[2]*quatDelta[1] + quaternion[3]*quatDelta[0]);
As you loop through the code it gets updated, so only quaternion is a global variable, not the rest.
3. Lastly if you want Euler angles from them then do the following:
euler[2] = atan2(2.0*(quaternion[0]*quaternion[1] + quaternion[2]*quaternion[3]), 1 - 2.0*(quaternion[1]*quaternion[1] + quaternion[2]*quaternion[2]));
euler[1] = safe_asin(2.0*(quaternion[0]*quaternion[2] - quaternion[3]*quaternion[1]));
euler[0] = atan2(2.0*(quaternion[0]*quaternion[3] + quaternion[1]*quaternion[2]), 1 - 2.0*(quaternion[2]*quaternion[2] + quaternion[3]*quaternion[3]));
euler[1] is pitch and so on..
I just wanted to outline the general steps of a quaternion implementation. There may be some minor errors, but I tried this myself and it works. Please note that when converting to Euler angles you will get singularities, also known as "gimbal lock".
An important note here is that this is not my work; I found it on the internet and wanted to thank whoever wrote this priceless code... Cheers

Quaternion rotation matrix unexpectedly has the opposite sense

I have some understanding problem concerning quaternions.
In order to have my world objects rotate in the correct way, I need to invert their quaternion rotation when refreshing the object world matrix.
I create the object rotation with this code:
Rotation = Quaternion.RotationMatrix(
Matrix.LookAtRH(Position,
Position + new Vector3(_moveDirection.X, 0, _moveDirection.Y),
Vector3.Up)
);
and refresh the object World matrix like this:
Object.World = Matrix.RotationQuaternion(Rotation)
* Matrix.Translation(Position);
This is not working; it makes the object rotate in the opposite way compared to what it should!
This is the way that makes my object rotate correctly:
Object.World = Matrix.RotationQuaternion(Quaternion.invert(Rotation))
* Matrix.Translation(Position);
Why do I have to invert the object rotation?
This isn't a quaternion problem so much as it is a usage and/or documentation issue with the DirectX call you're using. The transformation the call gives is the one that happens when you move the camera. If you're keeping the camera fixed and moving the world, you're swapping what's moving and what's fixed. These coordinate transformations are inverses of each other, which is why taking the inverse works for you.
You don't need to take an explicit inverse, though. Just swap the order of the first two arguments.

Resources