Align IMU orientations and then get relative rotations - math

I am using two IMUs of the same type (BHI160, i.e. orientation is reported relative to north; when aligned with north, the IMU's local y-axis points in the north direction) on two objects, let's say pens, with the added difficulty that if I place the two objects in parallel, both IMUs' z-axes point upwards, but one IMU is rotated 180° around the z-axis relative to the other.
Now, if I understand the math here correctly, the quaternion data I receive from an IMU is the half-angle-rotation relative to the north direction, so that q * north_dir * q_inv = IMU_y_axis (with north_dir and IMU_y_axis being 3D vectors in global space, or pure quaternions for the sake of this computation).
Due to the rotation of the IMUs, I would assume that when both pens are pointing in the same direction, I should be able to compute the second pen's orientation as q_2 = q_rot_z * q_1, where q_rot_z equals a 90° rotation around the z-axis -- following the intuition that if I point both pens towards the north, I would obtain the global direction of pen 2's y-axis (i.e. pen 1's y-axis rotated around the z-axis by 180°) by computing q_rot_z * north_dir * q_rot_z_inv.
Is it thus correct that if I want to know the relative rotation of the pen tips (say, the rotation I need to go from the first pen's tip to the tip of the second one), I need to compute q_r = q_2 * q_rot_z_inv * q_1_inv in order to get from tip 1 to tip 2 by computing q_r * q_1? Or does the "prior" rotation around the z-axis not matter in this case and I only need to compute q_r = q_2 * q_1_inv as usual?
Edit:
This is basically an extension of this question, but I would like to know whether the same answer also applies in my case, or whether the known relative IMU rotation would need to be included as well.

Let's go through this step by step. You have a global coordinate system G, which is aligned to the north direction. It does not really matter how it is aligned or if it is aligned at all.
Then we have two IMUs with their respective coordinate systems I1 and I2. The coordinate systems are given as the rotation from the global system to the local systems. In the following, we will use the notation R[G->I1] for that. This represents a rotation from G to I1. If you transform any vector in G with this rotation, you will get the same vector of I1 expressed in the coordinate system G. Let's denote the transformation of a vector v with transform T by T ° v. The following figure illustrates this:
In this figure, I have added a translation to the transform (which quaternions can of course not represent). This is just meant to make the point clearer. So, we have a vector v. The same vector can lie in either coordinate system G or I. And the transformed vector R[G->I] ° v represents the v of I in the coordinate system of G. Please make sure that this is actually the rotation that you get from the IMUs. It is also possible that you get the inverse transform (this would be the system transform view, whereas we use the model transform view). This changes little in the following derivations. Therefore, I will stick to this first assumption. If you need the inverse, just adjust the formulas accordingly.
As you already know, the operation R ° v can be done by turning v into a pure quaternion, calculating R * v * conjugate(R), and turning it into a vector again (or work with pure quaternions throughout the process).
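For reference, a minimal Eigen sketch of this sandwich product (the 90° test rotation and the vector names are made up for illustration):

#include <iostream>
#include <Eigen/Geometry>

int main() {
    const float kPi = 3.14159265358979f;
    // Made-up example reading: a 90° rotation about the global z-axis.
    Eigen::Quaternionf q(Eigen::AngleAxisf(kPi / 2.0f, Eigen::Vector3f::UnitZ()));
    Eigen::Vector3f north_dir(0.0f, 1.0f, 0.0f);  // global y points north

    // R ° v as the sandwich product R * v * conjugate(R), v as a pure quaternion.
    Eigen::Quaternionf v(0.0f, north_dir.x(), north_dir.y(), north_dir.z());
    Eigen::Quaternionf sandwiched = q * v * q.conjugate();

    // Eigen's operator* between a quaternion and a vector does the same thing.
    Eigen::Vector3f rotated = q * north_dir;

    std::cout << sandwiched.vec().transpose() << "\n"
              << rotated.transpose() << "\n";  // both print (-1, 0, 0)
    return 0;
}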
Now the pens come into play. The pen has an intrinsic coordinate system, which you can define arbitrarily. From your descriptions, it seems as if you want to define it such that the pen's local y-axis points towards the tip. So, we have an additional coordinate system per pen with the according rotation R[I1->P1] and R[I2->P2]. We can concatenate the rotations to find the global orientations (* is quaternion multiplication):
R[G->P1] = R[G->I1] * R[I1->P1]
R[G->P2] = R[G->I2] * R[I2->P2]
In the way that you defined the pen's local coordinate system, we know that R[I1->P1] is the identity (the local coordinate system is aligned with the IMU) and that R[I2->P2] is a rotation of 180° about the z-axis. So, this simplifies to:
R[G->P1] = R[G->I1]
R[G->P2] = R[G->I2] * RotateZ(180°)
Note that the z-rotation is performed in the local coordinate system of the IMU (it is multiplied at the right side). I don't know why you think that it should be 90°. It is really a rotation of 180°.
If you want to find the relative rotation between the tips, you first need to define in which coordinate system the rotation should be expressed. Let's say we want to express the rotation in the coordinate system of P1. Then, what you want to find is a rotation R[P1->P2], such that
R[G->P1] * R[P1->P2] = R[G->P2]
This solves to
R[P1->P2] = conjugate(R[G->P1]) * R[G->P2]
If you plug the above definitions in, you would get:
R[P1->P2] = conjugate(R[G->I1]) * R[G->I2] * RotateZ(180°)
And that's it.
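If it helps, here is a small Eigen sketch of that final formula (the parameter names are placeholders for the two IMU readings):

#include <Eigen/Geometry>

// R[P1->P2] = conjugate(R[G->I1]) * R[G->I2] * RotateZ(180°), expressed in P1.
Eigen::Quaternionf relativeTipRotation(const Eigen::Quaternionf& q_g_i1,
                                       const Eigen::Quaternionf& q_g_i2) {
    const float kPi = 3.14159265358979f;
    // R[I2->P2]: the 180° rotation about IMU 2's local z-axis.
    Eigen::Quaternionf rot_z_180(Eigen::AngleAxisf(kPi, Eigen::Vector3f::UnitZ()));
    return q_g_i1.conjugate() * q_g_i2 * rot_z_180;
}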
It is pretty likely that you want something slightly different. That's why I explained it in such detail, so you will be able to modify the calculations accordingly.

Related

Understanding Angular velocities and their application

I recently had to convert Euler rotation rates to a vectorial angular velocity.
From what I understand, in a local referential, we can express the vectorial angular velocity by:
R = [rollRate, pitchRate, yawRate] (which is the correct order relative to the referential I want to use).
I also know that we can convert angular velocities to rotations (quaternion) for a given time-step via:
alpha = |R| * ts
nR = R / |R| * sin(alpha) <-- normalize and multiply each element by sin(alpha)
Q = [nRx i, nRy j, nRz k, cos(alpha)]
When I test this for each axis individually, I find results that I totally expect (i.e. 90° pitch/time-unit for 1 time unit => 90° pitch angle).
When I use two axes for my rotation rates however, I don't fully understand the results:
For example, if I use rollRate = 0, pitchRate = 90, yawRate = 90, apply the rotation for a given time-step, and convert the resulting quaternion back to Euler angles, I obtain the following results:
(ts = 0.1) Roll: 0.712676, Pitch: 8.96267, Yaw: 9.07438
(ts = 0.5) Roll: 21.058, Pitch: 39.3148, Yaw: 54.9771
(ts = 1.0) Roll: 76.2033, Pitch: 34.2386, Yaw: 137.111
I understand that a "smooth" continuous rotation might change the roll component midway.
What I don't understand, however, is why after a full unit of time with a 90°/time-unit pitchRate combined with a 90°/time-unit yawRate I end up with these pitch and yaw angles, and why I still have roll (I would have expected them to end up at [0°, 90°, 90°]).
I am pretty confident in both my axis+angle-to-quaternion and my quaternion-to-Euler formulas, as I've tested these extensively (both via unit testing and via field testing). I'm not sure, however, about the Euler-rotation-rate to angular-velocity "conversion".
My first bet would be that I do not understand how Euler rotation-rate axes interact with each other; my second would be that this "conversion" between Euler rotation rates and the angular velocity vector is incorrect.
Euler angles are not a good way of representing arbitrary angular movement. They are just a simplification used for graphics, games, and robotics. They come with some pretty hard restrictions, like rotations consisting of only N perpendicular axes in N-D space. That is not how rotation works in the real world. On top of this, the spherical representation of the frame endpoint creates a lot of singularities (you know, when you cross the poles ...).
Rotational movement is the analogue of translation:

pos = Integral(vel) = Integral(Integral(acc))
ang = Integral(omg) = Integral(Integral(eps))

where pos, vel, acc are position, velocity, and acceleration, and ang, omg, eps are their rotational counterparts: angle, angular velocity, and angular acceleration. In some update timer, that can be rewritten as:
vel+=acc*dt; pos+=vel*dt;
omg+=eps*dt; ang+=omg*dt;
where dt is elapsed time (Timer interval).
The problem with rotation is that you cannot superimpose it like translation. Each rotation has its own axis (which does not need to be axis-aligned, nor centered), and each rotation affects the axis orientation of all the others, so their order matters a lot. On top of all this, there is also the gyroscopic moment, which creates a third rotation from any two whose axes are not parallel. Put all of this together and suddenly you see that Euler angles do not match the real geometry/physics of rotation. They can describe an orientation and fake its rotation up to a degree, but do not expect them to make real sense once used for physics simulation.
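For contrast, here is a minimal quaternion-based sketch (Eigen, with illustrative names, not the asker's code) of integrating the angular velocity vector one step at a time; each step rotates about a single combined axis, which is why simultaneous pitch and yaw rates mix and leak into roll over time:

#include <Eigen/Geometry>

// One integration step: apply a body-frame angular velocity (rad/s) to an
// orientation quaternion over timestep dt.
Eigen::Quaternionf integrateStep(const Eigen::Quaternionf& q,
                                 const Eigen::Vector3f& omega_body, float dt) {
    float angle = omega_body.norm() * dt;
    if (angle < 1e-8f) return q;  // negligible rotation this step
    Eigen::Quaternionf dq(Eigen::AngleAxisf(angle, omega_body.normalized()));
    return (q * dq).normalized();  // right-multiply: increment is in the body frame
}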
A real simulation would require a list of rotations described by axis (not just direction but also origin) and angular speed (and its change), and in each simulation step the axes would need to be recomputed, as they will change (unless only a single rotation is present).
This can be done by using cumulative homogeneous transform matrices along with incremental rotations.
Sadly, the majority of programmers prefer Euler angles and quaternions, simply from not knowing that there are better and simpler options, and once they do, they stick to Euler angles anyway, as matrix math seems more complicated to them... That is why most games nowadays have gimbal locks, major rotation errors and glitches, and unrealistic physics.
Do not get me wrong, they still have their uses (like, for example, restricting free look for a camera), but they are misused for stuff they are the worst option for.

Rotating a line defined by two points in 3D

I have edited this question for additional clarity and because of some of the answers below.
I have an electromagnetic motion tracker which tracks a sensor and gives me a point in global space (X, Y, Z). It also tracks the rotation of the sensor and gives Euler angles (Yaw, Pitch, Roll).
The sensor is attached to a rigid body on a baseball cap which sits on the head of a person. However, I wish to track the position of a specific facial feature (nose for example) which I infer from the motion tracker sensor's position and orientation.
I have estimated the spatial offset between the motion tracker and the facial features I want to track. I have done this by simply measuring the offset along the X, Y and Z axis.
Based on a previous answer to this question, I have composed a rotation matrix from the Euler angles given to me by the motion tracker. However, I am stuck on how I should use this rotation matrix, the position of the sensor in global space, and the spatial offset between that sensor and the nose to give me the position of the nose in global space.
The sensor will give you a rotation matrix (via the Euler angles) and a position (which should be that of the center of rotation).
Whatever item is rigidly fastened to the sensor, such as the nose, will undergo the same motion. Then knowing the relative coordinates of the nose and the sensor, you get the relation
Q = R.q + P
where R is the rotation matrix, P the position vector of the sensor and q the relative coordinates of the nose.
Note that the relation between the rotation matrix and the angles can be computed using one of these formulas: https://en.wikipedia.org/wiki/Euler_angles#Rotation_matrix. (You will need to read the article carefully to figure out which of the 12 possibilities your case is.)
In principle, you determine R and P from the readings of the sensor, but you are missing the coordinates q. There are several approaches:
you determine those coordinates explicitly by measuring the distances along virtual axes located at the rotation center and properly aligned.
you determine the absolute coordinates Q of the nose corresponding to known R and P; then q is given by R'(Q - P) where R' denotes the transpose of R (which is also its inverse). To obtain Q, you can just move the sensor center to the nose without moving the head.
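A sketch of both relations with Eigen (Q = R.q + P for tracking, q = R'(Q - P) for the calibration step; the function names are placeholders):

#include <Eigen/Geometry>

// Nose position in global space from a sensor reading (R, P) and the fixed
// offset q of the nose in the sensor's frame: Q = R.q + P.
Eigen::Vector3f nosePosition(const Eigen::Matrix3f& R, const Eigen::Vector3f& P,
                             const Eigen::Vector3f& q) {
    return R * q + P;
}

// Calibration: with a known nose position Q for one reading (R, P), recover
// the offset q = R'(Q - P), R' being the transpose (= inverse) of R.
Eigen::Vector3f calibrateOffset(const Eigen::Matrix3f& R, const Eigen::Vector3f& P,
                                const Eigen::Vector3f& Q) {
    return R.transpose() * (Q - P);
}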

3D: Check point inside elliptical cone

I seem to have searched the whole internet trying to find an implementation of checking whether a 3D point is within an elliptical cone defined by (origin, length, horizontal angle, vertical angle), unfortunately without success; I only really found one math solution, which I did not understand.
Now, I am aware of how to implement it using a normal cone:
inRange = magnitude(point - origin) <= length;
heading = normalized(point - origin);
return dot(forward, heading) >= cos(angle) && inRange;
However, the detection there is far too tall vertically. I would really like to implement a more realistic vision cone for a game AI, but this requires the cone to be shaped more like the human field of view: wider than it is tall.
Thanks a lot for any help :)
Given a 3D elliptic cone, with base at B=(x_B,y_B,z_B), height h along the cone axis k=(k_x,k_y,k_z), major base radius a, minor base radius b, and direction along the major axis i=(i_x,i_y,i_z), you need to find if a point P=(x,y,z) lies inside the cone. It is your choice how to parametrize the major axis direction, and I think you are trying to use spherical coordinates with two angles.
Here are the steps to take:
Establish a coordinate system with origin on the base B and with the local x axis along your major axis i. The local z axis should be towards the tip along k. Finally the local y axis should be
j=cross(k,i)=(i_z*k_y-i_y*k_z, i_x*k_z-i_z*k_x, i_y*k_x-i_x*k_y)
j=normalize(j)
Your 3×3 rotation matrix is defined by the columns E=[i,j,k]
Transform your point P=(x,y,z) into the local coordinates with
P2 = transpose(E)*(P-B) = (x2,y2,z2)
Now establish how far along the axis of the cone the point is, with s=(h-z2)/h, where s=0 at the tip and s=1 at the base.
If s>1 or s<0 then the point is outside
Otherwise if s>0 you need to check that (x2/(s*a))^2+(y2/(s*b))^2<=1 for the point to be inside.
If s=0 then check that x2=0 and y2=0 for the point being exactly at the tip.
If you cannot do basic vector algebra, like cross products, 3D transformations, and normalization, then I suggest you have some reading to do before you can understand what is going on here.
Note:
// | i_x i_y i_z |
// transpose(E) = | j_x j_y j_z |
// | k_x k_y k_z |
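Put together, the steps above might look like this (a sketch assuming i and k are unit-length and perpendicular):

#include <Eigen/Geometry>

// Point-in-elliptic-cone test following the steps above.
// B: base center, i: major-axis direction, k: cone axis (toward the tip),
// h: height, a: major base radius, b: minor base radius.
bool insideEllipticCone(const Eigen::Vector3f& P, const Eigen::Vector3f& B,
                        const Eigen::Vector3f& i, const Eigen::Vector3f& k,
                        float h, float a, float b) {
    Eigen::Vector3f j = k.cross(i).normalized();  // local y axis
    Eigen::Matrix3f E;                            // rotation, columns [i, j, k]
    E.col(0) = i; E.col(1) = j; E.col(2) = k;

    Eigen::Vector3f p2 = E.transpose() * (P - B); // point in local coordinates
    float s = (h - p2.z()) / h;                   // 0 at the tip, 1 at the base
    if (s < 0.0f || s > 1.0f) return false;       // beyond the tip or the base
    if (s == 0.0f)                                // exactly at the tip
        return p2.x() == 0.0f && p2.y() == 0.0f;
    float ex = p2.x() / (s * a);                  // scaled ellipse coordinates
    float ey = p2.y() / (s * b);
    return ex * ex + ey * ey <= 1.0f;
}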

Calculating modelview matrix for 2D camera using Eigen

I'm trying to calculate the modelview matrix of my 2D camera but I can't get the formula right. I use the Affine3f transform class so that the matrix is compatible with OpenGL. This is the closest I got by trial and error. This code rotates and scales the camera fine, but if I apply translation and rotation at the same time, the camera movement gets messed up: the camera moves in a rotated fashion, which is not what I want. (This is probably due to the fact that I first apply the rotation matrix and then the translation.)
Eigen::Affine3f modelview;
modelview.setIdentity();
modelview.translate(Eigen::Vector3f(camera_offset_x, camera_offset_y, 0.0f));
modelview.scale(Eigen::Vector3f(camera_zoom_x, camera_zoom_y, 0.0f));
modelview.rotate(Eigen::AngleAxisf(camera_angle, Eigen::Vector3f::UnitZ()));
modelview.translate(Eigen::Vector3f(camera_x, camera_y, 0.0f));
[loadmatrix_to_gl]
What I want is for the camera to rotate and scale around an offset position in screen space {(0,0) is the middle of the screen in this case} and then be positioned along the global xy-axes in world space {(0,0) is also initially at the middle of the screen} to the final position. How would I do this?
Note that I have set up also an orthographic projection matrix, which may affect this problem.
If you want a 2D image, rendered in the XY plane with OpenGL, to (1) rotate counter-clockwise by a around point P, (2) scale by S, and then (3) translate so that pixels at C (in the newly scaled and rotated image) are at the origin, you would use this transformation:
translate by -P (this moves the pixels at P to the origin)
rotate by a
translate by P (this moves the origin back to where it was)
scale by S (if you did this earlier, your rotation would be messed up)
translate by -C
If the 2D image were being rendered at the origin, you'd also need to end by translating by some value along the negative z-axis to be able to see it.
Normally, you'd just do this with OpenGL basics (glTranslatef, glScalef, glRotatef, etc.), and you would do them in the reverse order that I've listed them. Since you want to use glLoadMatrix, you'd do things in the order I described with Eigen. It's important to remember that OpenGL expects a column-major matrix (but that seems to be the default for Eigen, so that's probably not a problem).
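A sketch of those five steps with Eigen's pre-multiplying methods, which let the code follow the listed order (the function and parameter names are placeholders; P is the pivot, a the angle, S the scale, C the camera position):

#include <Eigen/Geometry>

Eigen::Affine3f makeModelview(const Eigen::Vector2f& P, float a, float S,
                              const Eigen::Vector2f& C) {
    Eigen::Affine3f mv = Eigen::Affine3f::Identity();
    mv.pretranslate(Eigen::Vector3f(-P.x(), -P.y(), 0.0f));       // 1. pixels at P -> origin
    mv.prerotate(Eigen::AngleAxisf(a, Eigen::Vector3f::UnitZ())); // 2. rotate by a
    mv.pretranslate(Eigen::Vector3f(P.x(), P.y(), 0.0f));         // 3. move the origin back
    mv.prescale(Eigen::Vector3f(S, S, 1.0f));                     // 4. scale by S
    mv.pretranslate(Eigen::Vector3f(-C.x(), -C.y(), 0.0f));       // 5. translate by -C
    return mv;  // pass mv.data() to glLoadMatrixf (column-major)
}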
JCooper did great explaining the steps to construct the initial matrix.
However, I eventually solved the problem a bit differently. There were a few additional things and steps that were not obvious to me at the time; see the comments on JCooper's answer. The first is to realize that all matrix operations are relative.
Thus, if you want to position or move the camera along absolute xy-axes, you must first decompose the matrix to extract its absolute position with unchanged axes. Then you translate the matrix by the difference between the old and new positions.
Here is a way to do this with Eigen:
First compute the scalar determinant D of the Affine2f matrix cmat. With Eigen this is done with D = cmat.linear().determinant();. Next, compute the 'reverse' matrix matrev of the current rotation+scale block RS using D: matrev = (RS.array() / (1.0f / D)).matrix();, where RS is cmat.matrix().topLeftCorner(2,2).
The absolute camera position P is then given by P = matrev * -C, where C is cmat.matrix().col(2).head<2>().
Now we can reposition the camera anywhere along the absolute axes while keeping the rotation+scaling the same: V = RS * (T - P), where RS is the same as before, T is the new position vector, and P is the decomposed position vector.
cmat is then simply translated by V to move the camera: cmat.pretranslate(V).
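A consolidated sketch of that procedure, substituting Eigen's inverse() for the hand-rolled 'reverse' matrix (names follow the text above; T is the desired absolute position):

#include <Eigen/Dense>
#include <Eigen/Geometry>

// Move a 2D camera transform to an absolute position T while keeping its
// rotation and scale unchanged.
void moveCameraTo(Eigen::Affine2f& cmat, const Eigen::Vector2f& T) {
    Eigen::Matrix2f RS = cmat.matrix().topLeftCorner<2, 2>(); // rotation+scale block
    Eigen::Vector2f C  = cmat.matrix().col(2).head<2>();      // current translation
    Eigen::Vector2f P  = RS.inverse() * -C;                   // decomposed absolute position
    Eigen::Vector2f V  = RS * (T - P);                        // translation to apply
    cmat.pretranslate(V);
}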

Adjust camera co-ordinates to represent change in azimuth, elevation and roll values

I'm currently working with libQGLViewer, and I'm receiving a stream of data from my sensor, holding azimuth, elevation, and roll values (three Euler angles).
The problem can be considered as the camera representing an aeroplane, and the changes in azimuth, elevation and roll the plane moving.
I need a general set of transformation matrices to transform the camera point and the up vector to represent this, but I'm unsure how to calculate them, since the axis to rotate about changes after each rotation (I think?).
Either that, or just some way to pass the azimuth, elevation, and roll values to the camera and have some function do it for me? I understand that cameraPosition.setOrientation(Quaternion something) might work, but I couldn't really understand it. Any ideas?
For example, you could just take the three matrices for rotation about the coordinate axes, plug in your angles respectively, and multiply these three matrices together to get the final rotation matrix (but use the correct multiplication order).
You can also just compute a quaternion from the euler angles. Look here for ideas. Just keep in mind that you always have to use the correct order of the euler angles (whatever your three values mean), perhaps with some experimentation (those different euler conventions always make me crazy).
EDIT: In response to your comment: this is accounted for by the order of rotations. Matrices applied like v' = XYZv correspond to rotation about z, then unchanged y, and then unchanged x, which is equal to x, then y', and then z''. So you have to keep an eye on the axes (what your words like azimuth mean) and the order in which you rotate about those axes.
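For instance, a hedged Eigen sketch assuming a z-y'-x'' (azimuth, then elevation, then roll) order; your sensor's convention may differ:

#include <Eigen/Geometry>

// Compose a quaternion from azimuth/elevation/roll (radians), assuming
// rotation about z first, then y, then x. Swap the order if your sensor's
// convention differs.
Eigen::Quaternionf fromAzElRoll(float azimuth, float elevation, float roll) {
    return Eigen::Quaternionf(
        Eigen::AngleAxisf(azimuth,   Eigen::Vector3f::UnitZ()) *
        Eigen::AngleAxisf(elevation, Eigen::Vector3f::UnitY()) *
        Eigen::AngleAxisf(roll,      Eigen::Vector3f::UnitX()));
}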
