IMU software-based Orientation Adjustment - math

I have a six-axis IMU that outputs orientation data in Quaternion, Euler angle, and YPR formats. This data is used in a quadcopter PID controller for stabilization.
I have built a test stand for tuning the quadcopter; however, the axis of rotation for the test stand is ~45° rotated from the IMU measurement axes.
Previously, I managed to adjust for the offset using a rotation matrix, but (after some changes to the program) I am now dealing with angles/quaternions instead of vectors, and that method no longer applies directly (at least not without intermediate steps) to the measurement data.
This leads to my question: how can I apply this 45° transformation to the output data (in the provided formats: quaternion, Euler angles, YPR) in order to measure the attitude along the test stand's axis of rotation?
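A minimal sketch of one common approach (not from the thread), assuming the offset is a pure 45° rotation about the IMU z-axis; the axis, sign, and which side to multiply on depend on how your frames are defined:

    # Sketch: re-express an IMU attitude along test-stand axes by
    # conjugating with the fixed offset (a change of basis).
    # Assumption: the 45 deg offset is about the IMU z-axis.
    import numpy as np
    from scipy.spatial.transform import Rotation as R

    offset = R.from_euler('z', 45, degrees=True)   # stand axes -> IMU axes (assumed)

    def imu_to_stand(q_imu_xyzw):
        """Conjugate the measured attitude by the fixed offset so that
        roll/pitch/yaw are read out along the test-stand axes."""
        r_imu = R.from_quat(q_imu_xyzw)            # scalar-last [x, y, z, w]
        return offset.inv() * r_imu * offset

    # Example: a 30 deg tilt about the IMU x-axis shows up split across
    # the stand's x and y axes after the change of basis.
    q = R.from_euler('x', 30, degrees=True).as_quat()
    r = imu_to_stand(q)
    print(r.as_quat())                             # quaternion form
    print(r.as_euler('ZYX', degrees=True))         # yaw/pitch/roll form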

Related

Solving absolute scale problem in Visual Odometry using IMU data and/or distance meter

The visual odometry gives me the x, y, z position, but only up to an unknown absolute scale.
I also have IMU data (from which I can obtain, for example, roll, pitch, and yaw).
I also have a distance meter (which gives a varying distance to the current object).
Is there an easy way to combine this information and obtain the corresponding scale?
Thanks!
If you have access to a 3D accelerometer and a 3D gyroscope, you can estimate the accelerometer and gyroscope biases (e.g. with a Kalman filter) AND estimate the scale. This paper (https://www.researchgate.net/publication/220061491_Fusion_of_IMU_and_Vision_for_Absolute_Scale_Estimation_in_Monocular_SLAM) describes two approaches for estimating the scale given calibrated IMU data.
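As a much simpler illustration than the paper's filters (a hypothetical sketch, not the paper's method): once you can form matching displacement segments from both sources, the scale is a one-parameter least-squares fit.

    # Hypothetical sketch: given N matching displacement segments --
    # unscaled VO deltas d_vo and metric deltas d_imu from bias-corrected,
    # double-integrated accelerometer data -- the scale s minimizing
    # sum ||s * d_vo - d_imu||^2 has a closed form.
    import numpy as np

    def estimate_scale(d_vo, d_imu):
        """d_vo, d_imu: (N, 3) arrays of corresponding displacement segments."""
        d_vo, d_imu = np.asarray(d_vo, float), np.asarray(d_imu, float)
        return float(np.sum(d_vo * d_imu) / np.sum(d_vo * d_vo))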

Rotating a line defined by two points in 3D

I have edited this question for additional clarity and because of some of the answers below.
I have an electromagnetic motion tracker which tracks a sensor and gives me a point in global space (X, Y, Z). It also tracks the rotation of the sensor and gives Euler angles (Yaw, Pitch, Roll).
The sensor is attached to a rigid body on a baseball cap which sits on the head of a person. However, I wish to track the position of a specific facial feature (nose for example) which I infer from the motion tracker sensor's position and orientation.
I have estimated the spatial offset between the motion tracker and the facial feature I want to track. I did this by simply measuring the offset along the X, Y, and Z axes.
Based on a previous answer to this question, I have composed a rotation matrix from the Euler angles given to me by the motion tracker. However, I am stuck on how to use this rotation matrix, the position of the sensor in global space, and the spatial offset between that sensor and the nose to obtain the position of the nose in global space.
The sensor will give you a rotation matrix (via the Euler angles) and a position (which should be that of the center of rotation).
Whatever item is rigidly fastened to the sensor, such as the nose, will undergo the same motion. Then knowing the relative coordinates of the nose and the sensor, you get the relation
Q = R.q + P
where R is the rotation matrix, P the position vector of the sensor and q the relative coordinates of the nose.
Note that the relation between the rotation matrix and the angles can be computed using one of these formulas: https://en.wikipedia.org/wiki/Euler_angles#Rotation_matrix. (You will need to read the article carefully to determine which of the 12 possibilities is your case.)
In principle, you determine R and P from the readings of the sensor, but you are missing the coordinates q. There are several approaches:
- You determine those coordinates explicitly by measuring the distances along virtual axes located at the rotation center and properly aligned.
- You determine the absolute coordinates Q of the nose corresponding to known R and P; then q is given by R'(Q - P), where R' denotes the transpose of R (which is also its inverse). To obtain Q, you can just move the sensor center to the nose without moving the head.
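A small sketch of both steps, assuming intrinsic yaw-pitch-roll ('ZYX') angles; check your tracker's convention against the Wikipedia table linked above:

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def nose_position(yaw, pitch, roll, P, q):
        """Q = R.q + P: map the nose's sensor-relative offset q to global
        coordinates, given the sensor's Euler angles and position P."""
        Rm = R.from_euler('ZYX', [yaw, pitch, roll], degrees=True).as_matrix()
        return Rm @ np.asarray(q) + np.asarray(P)

    def nose_offset(yaw, pitch, roll, P, Q):
        """Calibration step: q = R'(Q - P), where R' is the transpose of R."""
        Rm = R.from_euler('ZYX', [yaw, pitch, roll], degrees=True).as_matrix()
        return Rm.T @ (np.asarray(Q) - np.asarray(P))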

Tilt of an object from the normals

I have a flat object (not perfectly flat; flatness in the range of 25µm) which I measured twice (the measuring concept is not important here), applying a tilt between the two measurements.
I have the normal at each point of the surface, and from these normals I want to determine the tilt that was applied.
My approach was to calculate the average normal of each measurement and then calculate the angle between the two average normals.
Could you please suggest another solution or confirm mine?
Many thanks in advance
Your solution should work, but you have to measure the normals:
- evenly distributed over the whole area, or
- always at the same points of the object (which I assume is not the case).
If this condition is not met, it can lower the accuracy a lot.
Now, to be sure we are talking about the same thing (in the image):
the red vectors 1,2 are the average normals.
The angle between them is not the tilt!
It is the angle between the plates (combined around two axes).
So if you want the tilt about just one axis, you have to project these normals
onto the plane you want the tilt to be in (the blue vectors 1,2 on the right; let them be n1, n2).
The angle between these projected vectors is the tilt:
tilt = acos( (n1.n2) / (|n1|.|n2|) )
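A short numeric sketch of this projection step, assuming the average normals n1, n2 and the normal of the target plane are already known:

    import numpy as np

    def tilt_about_plane(n1, n2, plane_normal):
        """Project both average normals onto the plane the tilt should lie
        in, then apply tilt = acos( (n1.n2) / (|n1|.|n2|) ) to the
        projected vectors."""
        n1, n2, pn = (np.asarray(v, float) for v in (n1, n2, plane_normal))
        pn /= np.linalg.norm(pn)
        p1 = n1 - np.dot(n1, pn) * pn   # remove the out-of-plane component
        p2 = n2 - np.dot(n2, pn) * pn
        c = np.dot(p1, p2) / (np.linalg.norm(p1) * np.linalg.norm(p2))
        return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))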
Another method
Without knowing the measurement possibilities and the object shape, it is hard to suggest another measurement method to validate yours.
Anyway, if you can measure distances, then for plate-like objects you can do this:
measure a0, a1, b and compute the angle
ang = atan2(a0 - a1, b)
Do this for the second measurement as well, and then:
tilt = ang2 - ang1
[notes]
if the tilt plane is one of the base planes (xy, xz or yz), just ignore the unused axis coordinate
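As a worked example of this method (the readings below are made-up numbers):

    import math

    def plate_angle(a0, a1, b):
        """ang = atan2(a0 - a1, b): inclination of a plate-like object from
        two height readings a0, a1 taken a known distance b apart."""
        return math.atan2(a0 - a1, b)

    # Repeat for both measurements; the applied tilt is the difference.
    ang1 = plate_angle(4.10, 4.60, 100.0)
    ang2 = plate_angle(5.20, 4.90, 100.0)
    tilt = ang2 - ang1   # radians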

Adjust camera co-ordinates to represent change in azimuth, elevation and roll values

I'm currently working with libQGLViewer, and I'm receiving a stream of data from my sensor holding azimuth, elevation, and roll values: three Euler angles.
The problem can be thought of as the camera representing an aeroplane, with the changes in azimuth, elevation, and roll being the plane's motion.
I need a general set of transformation matrices to transform the camera position and the up vector to represent this, but I'm unsure how to calculate them, since the axis to rotate about changes after each rotation (I think?).
Either that, or just some way to pass the azimuth, elevation, and roll values to the camera and have some function do it for me? I understand that cameraPosition.setOrientation(Quaternion something) might work, but I couldn't really understand it. Any ideas?
For example, you could just take the three matrices for rotation about the coordinate axes, plug in your angles respectively, and multiply the three matrices together to get the final rotation matrix (but use the correct multiplication order).
You can also just compute a quaternion from the Euler angles. Look here for ideas. Just keep in mind that you always have to use the correct order of the Euler angles (whatever your three values mean), perhaps with some experimentation (those different Euler conventions always make me crazy).
EDIT: In response to your comment: this is accounted for by the order of rotations. Matrices applied as v' = XYZv correspond to rotation about z, then unchanged y, then unchanged x, which is equal to rotation about x, then y', then z''. So you have to keep an eye on the axes (what your words like azimuth mean) and the order in which you rotate about those axes.
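A minimal sketch of both suggestions, assuming azimuth, elevation, and roll map to intrinsic rotations about z, y', and x''; verify that mapping against your sensor's convention:

    from scipy.spatial.transform import Rotation as R

    az, el, roll = 30.0, 10.0, 5.0   # degrees; made-up example values

    # Intrinsic z, y', x'' -- i.e. Rz(az) * Ry(el) * Rx(roll) in matrix form.
    r = R.from_euler('ZYX', [az, el, roll], degrees=True)
    print(r.as_matrix())   # the composed rotation matrix
    print(r.as_quat())     # the same rotation as a quaternion [x, y, z, w]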

Quaternion Interpolation w/ Rate Matching

I have an object with an orientation and the rotational rates about each of the body axes. I need to find a smooth transition from this state to a second state with a different set of rates. Additionally, I have constraints on how fast I can rotate/accelerate about each axis.
I have explored quaternion slerps, and while I can use them to smoothly interpolate between the states, I don't see an easy way to work the rate matching into them.
This feels like an exercise in differential equations and path planning, but I'm not sure exactly how to formulate the problem so that the algorithms that are out there can work on it.
Any suggestions for algorithms that can help solve this and/or tips on formulating the problem to work with those algorithms would be greatly appreciated.
[Edit - here is an example of the type of problem I'm working on]
Think of a gunner on a helicopter that needs to track a target as the helicopter is flying. For the sake of argument, he needs to be on the target from the time it rises over the horizon to the time it is no longer in view. The relative rate of this target is not constant, but I assume that through the aggregation of several 'rate matching' maneuvers I can approximate this tracking fairly well. I can calculate the gun orientation and tracking rates required at any point, it's just generating a profile from some discrete orientations and rates that is stumping me.
Thanks!
First of all, your rotational rates about each axis compose into an angular-rate vector (i.e. w = [w_x w_y w_z]^T). You can then separate the magnitude of the rotation from its axis: the magnitude is w_mag = |w|, and the axis is the unit vector u = w/w_mag. You can then update your gross rotation by composing it with an incremental rotation in your favorite representation (rotation matrices, quaternions). If your starting rotation is R_0 and your incremental rotation is defined by R_inc(w_mag*dt, u), then you follow these composition rules:
R_1 = R_0 * R_inc
R_k+1 = R_k * R_inc
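A minimal sketch of this update, assuming body-frame rates (hence the right multiplication); the rotation vector w*dt encodes exactly the angle w_mag*dt about the axis u:

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def step(r_k, w, dt):
        """R_(k+1) = R_k * R_inc: compose the current rotation with the
        incremental rotation of angle |w|*dt about the axis w/|w|."""
        return r_k * R.from_rotvec(np.asarray(w) * dt)

    # Example: a constant 0.1 rad/s about the body x-axis, integrated for 1 s.
    r = R.identity()
    for _ in range(100):
        r = step(r, [0.1, 0.0, 0.0], 0.01)
    print(r.as_rotvec())   # ~[0.1, 0, 0]: 0.1 rad about x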
enjoy.
