Quaternion - Angle computation using accelerometer and gyroscope - math

I have been using a 6-DOF LSM6DS0 IMU (accelerometer and gyroscope), and I am trying to calculate the angle of rotation around all three axes. I have tried many methods but am not getting the expected results.
Methods tried:
(i) Complementary filter approach - I am able to get the angles using the formula provided in the link Angle computation method.
But the problem is that the angles are not at all consistent and drift a lot. Moreover, when the IMU is rotated around one axis, the angles calculated for the other axes wobble too much.
(ii) Quaternion-based angle calculation: There were plenty of resources claiming the angles are calculated very well using the quaternion approach, but none had a clear explanation. I have used this method to update the quaternion for every set of values read from the IMU, but the link didn't explain how to calculate the angles from the quaternion.
I have used the glm math library to convert the quaternion to Euler angles and have also tried the formula specified in the wiki link. With this method, since asin in the pitch calculation returns only -90 to +90 degrees, I am not able to rotate the object in 3D the way they do in the mentioned link.
Has anyone tried the quaternion-to-angle conversion before? I need to calculate the angles around all three axes in the range 0 to 360 degrees or -180 to +180 degrees.
Any help could be really appreciated. Thanks in advance.

http://davidegironi.blogspot.com.ar/2013/02/avr-atmega-mpu6050-gyroscope-and.html#.VzjDKfnhCUk
Sq = [q0, q1, q2, q3]
//sensor quaternion can be translated to aerospace sequence of yaw/pitch/roll angles
yaw = atan2(2*q1*q2 - 2*q0*q3, 2*(q0^2) + 2*(q1^2) - 1)
pitch = -asin(2*q1*q3 + 2*q0*q2)
roll = atan2(2*q2*q3 - 2*q0*q1, 2*(q0^2) + 2*(q3^2) - 1)
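For reference, here is a minimal Python sketch of that aerospace-sequence conversion, assuming a unit quaternion stored as [q0, q1, q2, q3] with q0 as the scalar part, exactly as in the formulas above (the function name is mine):

import math

def quaternion_to_ypr(q0, q1, q2, q3):
    # Convert a unit quaternion (scalar-first) to yaw/pitch/roll in radians,
    # using the aerospace-sequence formulas quoted above.
    yaw   = math.atan2(2*q1*q2 - 2*q0*q3, 2*q0*q0 + 2*q1*q1 - 1)
    pitch = -math.asin(max(-1.0, min(1.0, 2*q1*q3 + 2*q0*q2)))  # clamp to avoid NaN from rounding
    roll  = math.atan2(2*q2*q3 - 2*q0*q1, 2*q0*q0 + 2*q3*q3 - 1)
    return yaw, pitch, roll

# identity quaternion -> all angles zero
print([math.degrees(a) for a in quaternion_to_ypr(1.0, 0.0, 0.0, 0.0)])

Note that pitch is inherently limited to ±90° by asin; that limitation comes from the Euler-angle representation itself, not from glm or any particular library.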

Related

Understanding Angular velocities and their application

I recently had to convert euler rotation rates to vectorial angular velocity.
From what I understand, in a local referential, we can express the vectorial angular velocity by:
R = [rollRate, pitchRate, yawRate] (which is the correct order relative to the referential I want to use).
I also know that we can convert angular velocities to rotations (quaternion) for a given time-step via:
alpha = |R| * ts
nR = R / |R| * sin(alpha) <-- normalize and multiply each element by sin(alpha)
Q = [nRx i, nRy j, nRz k, cos(alpha)]
When I test this for each axis individually, I find exactly the results I expect (i.e. a 90° pitch/time-unit rate applied for 1 time unit => a 90° pitch angle).
When I use two axes for my rotation rates however, I don't fully understand the results:
For example, if I use rollRate = 0, pitchRate = 90, yawRate = 90, apply the rotation for a given time-step and convert the resulting quaternion back to euler, I obtain the following results:
(ts = 0.1) Roll: 0.712676, Pitch: 8.96267, Yaw: 9.07438
(ts = 0.5) Roll: 21.058, Pitch: 39.3148, Yaw: 54.9771
(ts = 1.0) Roll: 76.2033, Pitch: 34.2386, Yaw: 137.111
I understand that a "smooth" continuous rotation might change the roll component midway.
What I don't understand, however, is why after a full unit of time with a 90°/time-unit pitchRate combined with a 90°/time-unit yawRate I end up with these pitch and yaw angles, and why I still have roll (I would have expected them to end up at [0°, 90°, 90°]).
I am pretty confident on both my axis + angle to quaternion and on my quaternion to euler formulas as I've tested these extensively (both via unit-testing and via field testing), I'm not sure however about the euler rotation rate to angular-velocity "conversion".
My first bet would be that I do not understand how the Euler rotation-rate axes interact with each other; my second would be that this "conversion" between Euler rotation rates and the angular velocity vector is incorrect.
Euler angles are not a good way of representing arbitrary angular movement. They are just a simplification used in graphics, games and robotics. They come with some pretty hard restrictions, like rotations being composed of only N perpendicular axes in N-D space, which is not how rotation works in the real world. On top of that, this spherical representation of a frame endpoint creates a lot of singularities (you know, when you cross the poles ...).
Rotational movement is analogous to translation (position/velocity/acceleration correspond to angle/angular velocity/angular acceleration):
pos = Integral(vel) = Integral(Integral(acc))
ang = Integral(omg) = Integral(Integral(eps))
Inside an update timer this can be rewritten as:
vel+=acc*dt; pos+=vel*dt;
omg+=eps*dt; ang+=omg*dt;
where dt is the elapsed time (the timer interval).
The problem with rotation is that you cannot superimpose it like translation. Each rotation has its own axis (which does not need to be axis-aligned, nor centered), and each rotation affects the axis orientation of all the others, so their order matters a lot. On top of all this there is also the gyroscopic moment, which creates a third rotation from any two whose axes are not parallel. Put all of this together and you see that Euler angles do not match the real geometry/physics of rotation. They can describe orientation and fake rotation up to a degree, but do not expect them to make real sense once used for physics simulation.
A real simulation would require a list of rotations described by their axis (not just direction but also origin) and angular speed (and its change), and in each simulation step the axes would have to be recomputed, as they change (unless only a single rotation is present).
This can be done by using cumulative homogeneous transform matrices along with incremental rotations.
Sadly, the majority of programmers prefer Euler angles and quaternions, simply because they do not know there are better and simpler options, and once they do, they stick with Euler angles anyway because matrix math seems more complicated to them... That is why so many games nowadays have gimbal lock, major rotation errors and glitches, and unrealistic physics.
Do not get me wrong, they still have their uses (for example restricting free look for a camera, etc.), but they are misused for things they are the worst option for.
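To make the cumulative-transform idea concrete, here is a minimal Python sketch (my own illustration, not the answerer's code): the orientation is kept as a 3x3 matrix and, each timestep, an incremental rotation built from the body-frame angular velocity via the axis-angle (Rodrigues) formula is multiplied onto it.

import numpy as np

def incremental_rotation(omega, dt):
    # Rotation matrix for rotating by omega (rad/s, body-frame vector) over dt seconds,
    # built from the axis-angle (Rodrigues) formula.
    angle = np.linalg.norm(omega) * dt
    if angle < 1e-12:
        return np.eye(3)
    axis = omega / np.linalg.norm(omega)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

R = np.eye(3)                           # cumulative orientation (body -> world)
omega = np.radians([0.0, 90.0, 90.0])   # example rates from the question, deg/time-unit -> rad
dt = 0.01
for _ in range(100):                    # integrate over one time unit in small steps
    R = R @ incremental_rotation(omega, dt)   # omega assumed constant in the body frame
print(R)                                # final orientation matrix

With a small enough dt this stays close to the exact integral, and re-orthonormalising R occasionally keeps numerical drift under control; a full rigid-body simulation would also update omega itself from the applied torques, as the answer notes.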

Quaternion issues

I'm working with an MPU9250 to control a 3D position. I have an object that can rotate about 3 axes (roll, pitch and yaw), but I have run into some problems.
Initially I tried to calculate Euler angles from quaternions, but I discovered that Euler angles have two discontinuities at +/- 90 degrees, so if I rotate the MPU through 365 degrees it gives me wrong values, as if the reading were shifting while rotating.
I read that it is possible to convert quaternions to a DCM (direction cosine matrix), but I find this algorithm too heavy for the Arduino processor.
The last possibility that comes to mind is to control the position directly with quaternions, so I have to "forecast" the target quaternion and stop the rotation when the MPU gives me that same quaternion.
If anyone knows how to implement this, or any other way, please let me know your suggestions.
Many thanks,
Luca

Converting XYZ to XY (world coords to screen coords)

Is there a way to convert that data:
Object position which is a 3D point (X, Y, Z),
Camera position which is a 3D point (X, Y, Z),
Camera yaw, pitch, roll (-180:180, -90:90, 0)
Field of view (-45°:45°)
Screen width & height
into the 2D point on the screen (X, Y)?
I'm looking for proper math calculations according to this exact set of data.
It's difficult, but it's possible to do it for yourself.
There are lots of libraries that do this for you, but it is more satisfying if you do it yourself:
This is possible, and I have written my own 3D engine that does this for objects in JavaScript using the HTML5 Canvas. You can see my code here, and try a 3D maze game I wrote here, to understand what I will talk about below...
The basic idea is to work in steps. To start, you have to forget about the camera angles (yaw, pitch and roll), as these come later, and just imagine you are looking down the y axis. Then the basic idea is to calculate, using trig, the yaw and pitch angles to your object's coordinate. By this I mean: imagining you are looking through a letterbox, the yaw angle would be the angle in degrees left or right of your coordinate (so both positive and negative) from the centre/mid line, and the pitch the angle up and down from it. Taking these angles, you can map them to the x and y 2D coordinate system.
The calculations for the angles are:
yaw = atan((coord.x - cam.x) / (coord.y - cam.y))
pitch = atan((coord.z - cam.z) / (coord.y - cam.y))
with coord.x, coord.y and coord.z being the coordinates of the object, and the same for the camera (cam.x, cam.y and cam.z). These calculations also assume a Cartesian coordinate system with the axes being: z up, y forward and x right, so the x difference drives the left/right (yaw) angle and the z difference the up/down (pitch) angle.
From here, the next step is to map this angle in the 3D world to a coordinate which you can use in a 2D graphical representation.
To map these angles onto your screen, you need to scale them up as distances from the mid line. This means multiplying them by screen width / fov (and screen height / fov for the vertical direction). Finally, these distances will be positive or negative (as each is an angle measured from the mid line), so to actually draw a point on a canvas you need to add it to half of the screen width (or height).
So this would mean your canvas coordinate would be:
x = width / 2 + (yaw * (width / fov))
y = height / 2 + (pitch * (height / fov))
where width and height are the dimensions of your screen, fov is the camera's field of view, and yaw and pitch are the respective angles of the object from the camera.
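Putting the two steps together, a minimal Python sketch of this projection (my own illustration; it assumes the z-up, y-forward, x-right convention above, fov and angles in degrees, and an object in front of the camera):

import math

def project(coord, cam, fov, width, height):
    # Project a 3D point onto the screen, ignoring the camera's own yaw/pitch/roll.
    # Assumes z up, y forward, x right, and that the object lies in front of the camera.
    dx, dy, dz = coord[0] - cam[0], coord[1] - cam[1], coord[2] - cam[2]
    yaw = math.degrees(math.atan2(dx, dy))    # horizontal angle from the mid line
    pitch = math.degrees(math.atan2(dz, dy))  # vertical angle from the mid line
    x = width / 2 + yaw * (width / fov)
    y = height / 2 + pitch * (height / fov)
    return x, y

# a point straight ahead of the camera lands in the middle of the screen
print(project((0.0, 10.0, 0.0), (0.0, 0.0, 0.0), fov=90.0, width=800, height=600))

To add the camera's own yaw/pitch/roll, you would first rotate the object's position into the camera's frame (the world-transformation-matrix step mentioned below) and then apply the same projection.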
You have now achieved the first big step which is mapping a 3D coordinate down to 2D. If you have managed to get this all working, I would suggest trying multiple points and connecting them to form shapes. Also try moving your cameras position to see how the perspective changes as you will soon see how realistic it already looks.
In addition, if this worked fine for you, you can move on to having the camera not only change its position in the 3D world but also change its perspective, i.e. its yaw, pitch and roll angles. I will not go into this entirely now, but the basic idea is to use 3D world transformation matrices. You can read up on them here, but they do get quite complicated; however, I can give you the calculations if you get this far.
It might help to read (old style) OpenGL specs:
https://www.khronos.org/registry/OpenGL/specs/gl/glspec14.pdf
See section 2.10
Also:
https://www.khronos.org/opengl/wiki/Vertex_Transformation
Might help with more concrete examples.
Also, for "proper math" look up 4x4 matrices, projections, and homogeneous coordinates.
https://en.wikipedia.org/wiki/Homogeneous_coordinates
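For the "proper math" route, here is a minimal Python sketch of a perspective projection with a 4x4 matrix and homogeneous coordinates (my own illustration, following the classic OpenGL convention from the links above; the camera's position and yaw/pitch/roll would be applied beforehand as a view matrix):

import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    # Standard OpenGL-style perspective projection matrix.
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0, 0, -1, 0],
    ])

# point in eye space (x right, y up, looking down -z, as in classic OpenGL)
p_eye = np.array([1.0, 0.5, -5.0, 1.0])
clip = perspective(90.0, 800 / 600, 0.1, 100.0) @ p_eye
ndc = clip[:3] / clip[3]                   # perspective divide
sx = (ndc[0] * 0.5 + 0.5) * 800            # viewport transform to pixel x
sy = (1.0 - (ndc[1] * 0.5 + 0.5)) * 600    # flip y for screen coordinates
print(sx, sy)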

Gravity Compensation in Accelerometer Data

Given a 9-DOF IMU (accelerometer, gyroscope and magnetometer), I want to remove/compensate for the effect of gravity in the accelerometer reading (the accelerometer can rotate freely). The sensor gives its orientation as a quaternion relative to a (magnetic) north, west and up reference frame.
I found this http://www.varesano.net/blog/fabio/simple-gravity-compensation-9-dom-imus
but couldn't understand the basis for the given equation.
How could I achieve this given above information?
You need to rotate the accelerometer reading by the quaternion into the Earth frame of reference (into the coordinate system of the room, if you like), then subtract gravity. The remaining acceleration is the acceleration of the sensor in the Earth frame of reference, often referred to as linear acceleration or user acceleration.
In pseudo-code, something like this
acceleration = [ax, ay, az] // accelerometer reading
q // quaternion corresponding to the orientation
gravity = [0, 0, -9.81] // gravity on Earth in m/s^2
a_rotated = rotate(acceleration, q) // rotate the measured acceleration into
// the Earth frame of reference
user_acceleration = a_rotated - gravity
You say that you can get q through the API. The only nontrivial step is to implement the rotate() function.
To compute the image of a vector v when rotated by q, the following formula should be applied: v_rotated = q v q⁻¹. To compute it with floating point numbers, you need to work out the formulas yourself; they are available at Using quaternion rotations.
As far as I can tell, the link you provided does exactly this, you see the expanded formulas there and now you know where they came from. Also, the linked content seems to measure gravity in g, that is, gravity is [0,0,-1].
Watch out for sign conventions (whether you consider gravity [0,0,-1] or [0,0,1]) and handedness of your coordinate systems!
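Putting the pieces together, a minimal Python sketch of this approach (my own illustration, not from the linked article): it assumes a unit quaternion in [w, x, y, z] order describing the sensor-to-Earth orientation, and it applies the rotation by expanding q v q⁻¹ into the equivalent 3x3 rotation matrix. The sign convention follows the pseudocode above (an accelerometer at rest reads [0, 0, -9.81]); flip the signs if your device reports the opposite, as the warning above says.

import numpy as np

def rotate(v, q):
    # Rotate vector v by unit quaternion q = [w, x, y, z], i.e. compute q v q^-1
    # via the equivalent rotation matrix.
    w, x, y, z = q
    R = np.array([
        [1 - 2*(y*y + z*z),     2*(x*y - w*z),     2*(x*z + w*y)],
        [    2*(x*y + w*z), 1 - 2*(x*x + z*z),     2*(y*z - w*x)],
        [    2*(x*z - w*y),     2*(y*z + w*x), 1 - 2*(x*x + y*y)],
    ])
    return R @ np.asarray(v)

acceleration = np.array([0.0, 0.0, -9.81])  # example reading: sensor at rest, level
q = np.array([1.0, 0.0, 0.0, 0.0])          # identity orientation: body frame = Earth frame
gravity = np.array([0.0, 0.0, -9.81])

a_rotated = rotate(acceleration, q)          # measured acceleration in the Earth frame
user_acceleration = a_rotated - gravity
print(user_acceleration)                     # ~[0, 0, 0] for a sensor at rest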
I assume your accelerometer reading is in the sensor body frame. First we need to represent the accelerometer data with respect to the inertial frame, and then subtract gravity.
If you are directly using Euler angles rather than a quaternion, then you need to compute the rotation matrix
R = [ ctheta*cpsi, -cphi*spsi + sphi*stheta*cpsi,  sphi*spsi + cphi*stheta*cpsi;
      ctheta*spsi,  cphi*cpsi + sphi*stheta*spsi, -sphi*cpsi + cphi*stheta*spsi;
     -stheta,       sphi*ctheta,                   cphi*ctheta ]
(It's given in MATLAB notation; c and s abbreviate cos and sin, so cphi = cos(phi), stheta = sin(theta), etc.) Here phi stands for the roll angle, theta for pitch, and psi for yaw. This R matrix maps from the body to the inertial frame; in flight dynamics it is also known as the transpose of the direction cosine matrix (DCM).
After applying this matrix multiplication, you then subtract gravity from the z direction in order to eliminate the static component, i.e., gravity.
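A matching Python sketch for this Euler-angle variant (again my own illustration; it assumes roll/pitch/yaw in radians and the convention that a level sensor at rest reads +9.81 on its z axis, so adjust the signs to your device as noted in the previous answer):

import numpy as np

def body_to_inertial(phi, theta, psi):
    # Rotation matrix R (body -> inertial) for roll phi, pitch theta, yaw psi,
    # matching the MATLAB-style matrix above.
    cphi, sphi = np.cos(phi), np.sin(phi)
    ctheta, stheta = np.cos(theta), np.sin(theta)
    cpsi, spsi = np.cos(psi), np.sin(psi)
    return np.array([
        [ctheta*cpsi, -cphi*spsi + sphi*stheta*cpsi,  sphi*spsi + cphi*stheta*cpsi],
        [ctheta*spsi,  cphi*cpsi + sphi*stheta*spsi, -sphi*cpsi + cphi*stheta*spsi],
        [-stheta,      sphi*ctheta,                   cphi*ctheta],
    ])

# example: level sensor at rest, reporting +1 g on its body z axis
a_body = np.array([0.0, 0.0, 9.81])
a_inertial = body_to_inertial(0.0, 0.0, 0.0) @ a_body
linear_acceleration = a_inertial - np.array([0.0, 0.0, 9.81])  # subtract gravity from the z direction
print(linear_acceleration)  # ~[0, 0, 0]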

3d rotation around the origin

I know there are plenty of questions about 3D rotation that have been answered here, but all of them seem to deal with rotation matrices and quaternions in OpenGL (and I don't really care if I get gimbal lock). I need to get the 3D coordinates, e.g. (x, y, z), of a point that must always be the same distance, which I'll call "d" for now, from the origin. The only information I have as input is the delta-x and delta-y of the mouse across the screen. So far here is what I have tried:
First:
thetaxz+=(omousex-mouseX)/( width );
thetaxy+=(omousey-mouseY)/( height);
(thetaxy is the angle in radians on the x,y axis and thetaxz on the x,z axis)
(I limit both angles so that if they are less than or equal to 0 they equal 2*PI)
Second:
pointX=cos(thetaxz)*d;
pointY=sin(thetaxy)*d;
(pointX is the point's x coordinate and pointY is the y)
Third:
if(thetaxz<PI){
pointZ=sqrt(sq(d)-sq(eyeX/d)-sq(eyeY/d));
}else{
pointZ=-sqrt(abs(sq(d)-sq(eyeX/d)-sq(eyeY/d)));
}
(sq() is a function that squares and abs() is an absolute value function)
(pointZ should be the point's z coordinate, and it is, except at the crossing between the positive and negative z hemispheres. As it approaches that edge, the point gets stretched in x and y further than the distance it is always supposed to be at, and seemingly randomly, around 0.1-0.2 radians of thetaxz, the z coordinate becomes NaN or undefined.)
I have thought about this for a while, and truthfully I'm having difficulty wrapping my head around the concept of quaternions and rotation matrices; however, if you can show me how to use them to generate actual coordinates, I would be glad to learn. I would still prefer it if I could just use some trigonometry on a few axes. Thank you in advance for any help, and if you need more information please just ask.
Hint/last minute idea: I think it may have something to do with the z position affecting the x and y positions back but I am not sure.
EDIT: I drew a diagram:
If you truly want any success in this, you're going to have to bite the bullet and learn about rotation matrices and / or quaternion rotations. There may be other ways to do what you want, but rotation matrices and quaternion rotations are used simply because they are widely understood and among the simplest means of expressing and applying rotations to vectors. Any other representation somebody can come up with will probably be a more complex reformulation of one or both of these. In fact it can be shown rotation is a linear transformation and so can be expressed as a matrix. Quaternion rotations are just a simplified means of rotating vectors in 3D, and therefore have equivalent matrix representations.
That said, it sounds like you're interested in grabbing an object in your scene with a mouse click and rotating in a natural sort of way. If that's the case, you should look at the ArcBall method (there are numerous examples you may want to look over). This still requires you know something of quaternions. You will also find that an at least minimal comprehension of the basic aspects of linear algebra will be helpful.
Update: Based on your diagram and the comments it contains, it looks like all you are really trying to do is convert spherical coordinates to Cartesian coordinates. As long as we agree on the notation, that's easy. Let θ be the angle you're calling XY, that is, the angle from the X axis rotating about the Z axis; this is called the azimuth angle and will be in the range [0, 2π) radians or [0°, 360°). Let Φ be the angle between the XY plane and your vector; this is called the elevation angle and will be in the range [-π/2, +π/2] or [-90°, +90°], and it corresponds to the angle you're calling the XZ angle (rotation in the XZ plane about the Y axis). There are other conventions, so make sure you're consistent. Anyway, the conversion is simply:
x = d∙cos(Φ)∙cos(θ)
y = d∙cos(Φ)∙sin(θ)
z = d∙sin(Φ)
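Tying this back to the mouse input, a minimal Python sketch (my own illustration; the 0.01 scale factors turning mouse deltas into radians are arbitrary placeholders):

import math

def spherical_to_cartesian(d, theta, phi):
    # Convert distance d, azimuth theta and elevation phi to Cartesian (x, y, z).
    x = d * math.cos(phi) * math.cos(theta)
    y = d * math.cos(phi) * math.sin(theta)
    z = d * math.sin(phi)
    return x, y, z

# accumulate mouse movement into the two angles
theta, phi, d = 0.0, 0.0, 5.0
dx, dy = 12, -7                               # example mouse deltas in pixels
theta = (theta + dx * 0.01) % (2 * math.pi)   # azimuth wraps around [0, 2*pi)
phi = max(-math.pi / 2, min(math.pi / 2, phi + dy * 0.01))  # clamp elevation to [-pi/2, pi/2]
print(spherical_to_cartesian(d, theta, phi))

Clamping the elevation to [-π/2, π/2] also avoids the hemisphere-crossing NaN problem described in the question, since z is computed directly from Φ instead of from a square root.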
