I have edited this question for additional clarity and because of some of the answers below.
I have an electromagnetic motion tracker which tracks a sensor and gives me a point in global space (X, Y, Z). It also tracks the rotation of the sensor and gives Euler angles (Yaw, Pitch, Roll).
The sensor is attached to a rigid body on a baseball cap which sits on the head of a person. However, I wish to track the position of a specific facial feature (nose for example) which I infer from the motion tracker sensor's position and orientation.
I have estimated the spatial offset between the motion tracker and the facial features I want to track. I have done this by simply measuring the offset along the X, Y and Z axes.
Based on a previous answer to this question, I have composed a rotation matrix from the Euler angles given to me by the motion tracker. However, I am stuck on how I should use this rotation matrix, the position of the sensor in global space, and the spatial offset between that sensor and the nose to give me the position of the nose in global space.
The sensor will give you a rotation matrix (via the Euler angles) and a position (which should be that of the center of rotation).
Whatever item is rigidly fastened to the sensor, such as the nose, will undergo the same motion. Then knowing the relative coordinates of the nose and the sensor, you get the relation
Q = R.q + P
where R is the rotation matrix, P the position vector of the sensor and q the relative coordinates of the nose.
Note that the relation between the rotation matrix and the angles can be computed using one of these formulas: https://en.wikipedia.org/wiki/Euler_angles#Rotation_matrix. (You will need to read the article carefully to work out which of the 12 possibilities corresponds to your case.)
In principle, you determine R and P from the readings of the sensor, but you are missing the coordinates q. There are several approaches:
you determine those coordinates explicitly by measuring the distances along virtual axes located at the rotation center and properly aligned.
you determine the absolute coordinates Q of the nose corresponding to known R and P; then q is given by R'(Q - P) where R' denotes the transpose of R (which is also its inverse). To obtain Q, you can just move the sensor center to the nose without moving the head.
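For illustration, here is a minimal Eigen (C++) sketch of the relation Q = R.q + P and of the calibration step q = R'(Q - P) from the second approach. It assumes an intrinsic yaw-pitch-roll (Z-Y-X) Euler convention, which you must check against your tracker's actual convention as noted above.
#include <Eigen/Geometry>

// Sketch only: assumes an intrinsic yaw (Z), pitch (Y), roll (X) convention;
// check which of the 12 conventions your tracker actually uses and adjust.
Eigen::Vector3d nosePosition(double yaw, double pitch, double roll,
                             const Eigen::Vector3d &P,  // sensor position in global space
                             const Eigen::Vector3d &q)  // nose offset measured in the sensor frame
{
    Eigen::Matrix3d R;
    R = Eigen::AngleAxisd(yaw,   Eigen::Vector3d::UnitZ())
      * Eigen::AngleAxisd(pitch, Eigen::Vector3d::UnitY())
      * Eigen::AngleAxisd(roll,  Eigen::Vector3d::UnitX());
    return R * q + P;                      // Q = R.q + P
}

// Calibration: recover q from one reading where the global nose position Q is known.
Eigen::Vector3d noseOffset(const Eigen::Matrix3d &R,
                           const Eigen::Vector3d &P,
                           const Eigen::Vector3d &Q)
{
    return R.transpose() * (Q - P);        // q = R'(Q - P)
}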
I am using two IMUs of the same type (BHI160, i.e. the orientation is relative to north, and on alignment with north the IMU's local y-axis points in the north direction) on two objects, let's say pens, with the added difficulty that if I place the two objects in parallel, both IMUs' z-axes point upwards, but one IMU is rotated 180° around the z-axis relative to the other.
Now, if I understand the math here correctly, the quaternion data I receive from an IMU is the half-angle-rotation relative to the north direction, so that q * north_dir * q_inv = IMU_y_axis (with north_dir and IMU_y_axis being 3D vectors in global space, or pure quaternions for the sake of this computation).
Due to the rotation of the IMUs, I would assume that when both pens are pointing in the same direction, I should be able to compute the second pen's orientation as q_2 = q_rot_z * q_1, where q_rot_z equals a 90° rotation around the z-axis -- following the intuition that if I point both pens towards the north, I would obtain the global direction of pen 2's y-axis (i.e. pen 1's y-axis rotated around the z-axis by 180°) by computing q_rot_z * north_dir * q_rot_z_inv.
Is it thus correct that if I want to know the relative rotation of the pen tips (say, the rotation I need to go from the first pen's tip to the tip of the second one), I need to compute q_r = q_2 * q_rot_z_inv * q_1_inv in order to get from tip 1 to tip 2 by computing q_r * q_1? Or does the "prior" rotation around the z-axis not matter in this case and I only need to compute q_r = q_2 * q_1_inv as usual?
Edit:
this is basically an extension of this question, but I would like to know whether the same answer also applies in my case or whether the known relative IMU rotation needs to be included as well
Let's go through this step by step. You have a global coordinate system G, which is aligned to the north direction. It does not really matter how it is aligned or if it is aligned at all.
Then we have two IMUs with their respective coordinate systems I1 and I2. The coordinate systems are given as the rotation from the global system to the local systems. In the following, we will use the notation R[G->I1] for that. This represents a rotation from G to I1. If you transform any vector in G with this rotation, you will get the same vector of I1 expressed in the coordinate system G. Let's denote the transformation of a vector v with transform T by T ° v. The following figure illustrates this:
In this figure, I have added a translation to the transform (which quaternions can of course not represent). This is just meant to make the point clearer. So, we have a vector v. The same vector can lie in either coordinate system G or I. And the transformed vector R[G->I] ° v represents the v of I in the coordinate system of G. Please make sure that this is actually the rotation that you get from the IMUs. It is also possible that you get the inverse transform (this would be the system transform view, whereas we use the model transform view). This changes little in the following derivations. Therefore, I will stick to this first assumption. If you need the inverse, just adjust the formulas accordingly.
As you already know, the operation R ° v can be done by turning v into a pure quaternion, calculating R * v * conjugate(R), and turning it into a vector again (or work with pure quaternions throughout the process).
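For reference, Eigen's quaternion-vector product does exactly this, so you rarely need to build the pure quaternion yourself. A tiny sketch:
#include <Eigen/Geometry>

// R ° v: Eigen's operator* computes R * v * conjugate(R) internally,
// so rotating a plain 3D vector is just:
Eigen::Vector3f rotateVector(const Eigen::Quaternionf &R, const Eigen::Vector3f &v)
{
    return R * v;
}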
Now the pens come into play. The pen has an intrinsic coordinate system, which you can define arbitrarily. From your descriptions, it seems as if you want to define it such that the pen's local y-axis points towards the tip. So, we have an additional coordinate system per pen with the according rotation R[I1->P1] and R[I2->P2]. We can concatenate the rotations to find the global orientations (* is quaternion multiplication):
R[G->P1] = R[G->I1] * R[I1->P1]
R[G->P2] = R[G->I2] * R[I2->P2]
In the way that you defined the pen's local coordinate system, we know that R[I1->P1] is the identity (the local coordinate system is aligned with the IMU) and that R[I2->P2] is a rotation of 180° about the z-axis. So, this simplifies to:
R[G->P1] = R[G->I1]
R[G->P2] = R[G->I2] * RotateZ(180°)
Note that the z-rotation is performed in the local coordinate system of the IMU (it is multiplied at the right side). I don't know why you think that it should be 90°. It is really a rotation of 180°.
If you want to find the relative rotation between the tips, you first need to define in which coordinate system the rotation should be expressed. Let's say we want to express the rotation in the coordinate system of P1. Then, what you want to find is a rotation R[P1->P2], such that
R[G->P1] * R[P1->P2] = R[G->P2]
This solves to
R[P1->P2] = conjugate(R[G->P1]) * R[G->P2]
If you plug the above definitions in, you would get:
R[P1->P2] = conjugate(R[G->I1]) * R[G->I2] * RotateZ(180°)
And that's it.
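As a concrete sketch of that final formula in Eigen (q1 and q2 stand for the raw IMU readings R[G->I1] and R[G->I2], under the model-transform assumption above):
#include <Eigen/Geometry>

// R[P1->P2] = conjugate(R[G->I1]) * R[G->I2] * RotateZ(180°),
// expressed in the coordinate system of P1.
Eigen::Quaternionf relativeTipRotation(const Eigen::Quaternionf &q1,  // R[G->I1] from IMU 1
                                       const Eigen::Quaternionf &q2)  // R[G->I2] from IMU 2
{
    const float kPi = 3.14159265358979f;
    const Eigen::Quaternionf rotZ180(Eigen::AngleAxisf(kPi, Eigen::Vector3f::UnitZ()));
    return q1.conjugate() * q2 * rotZ180;
}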
It is pretty likely that you want something slightly different. That's why I explained it in such detail, so you will be able to modify the calculations accordingly.
I'd need a formula to calculate the 3D position and direction or orientation of a camera in the following situation:
The camera's starting position is looking directly into the center of the Earth; the green line goes straight up to the sky.
The position that the camera needs to move to looks like this:
Starting position probably shouldn't matter, but the question is:
How do I calculate the camera position and direction given the 3D coordinates of any point on the globe? In the camera's final position, the distance from the Earth is always fixed, and from the desired camera point of view the chosen point should appear at the rightmost point of the globe.
I think what you want for camera position is a point on the intersection of a plane parallel to the tangent plane at the location, but somewhat further from the Center, and a sphere representing the fixed distance the camera should be from the center. The intersection will be a circle, so there are infinitely many camera positions that work.
The camera direction will be half determined by the location and half determined by how much earth you want in the picture.
Suppose (0,0,0) is the center of the earth, Re is the radius of the earth, and (a,b,c) is the location on the earth you want to look at. If it's in terms of latitude and longitude, you should convert to Cartesian coordinates, which is straightforward. Your camera should be on a plane perpendicular to the vector (a,b,c), at a distance kRe from the center, where k>1 is some number you can adjust. The equation for the plane is then ax+by+cz=d where d = kRe^2. Note that the plane passes through the point (ka,kb,kc) in space, which is what we wanted.
Since you want the camera to be at a fixed distance from the center, say h*Re where 1 < k < h, you need to find points on ax+by+cz=d for which x^2+y^2+z^2 = h^2*Re^2. So we need the intersection of the plane and a sphere. It will be easier to manage if we have a coordinate system on the plane, which we get from an orthonormal system that includes (a,b,c). A good candidate for the second vector in the orthonormal system is the projection of the z-axis (the polar axis, I assume). Projecting (0,0,1) onto (a,b,c),
proj_(a,b,c)(0,0,1) = (a,b,c).(0,0,1)/|(a,b,c)|^2 (a,b,c)
= c/Re^2 (a,b,c)
Then the "horizontal component" of (0,0,1) is
u = proj_Plane(0,0,1) = (0,0,1) - c/Re^2 (a,b,c)
= (-ac/Re^2,-bc/Re^2,1-c^2/Re^2)
You can normalize the vector to length 1 if you wish but there's no need. However, you do need to calculate and store the square of the length of the vector, namely
|u|^2 = ((ac)^2 + (bc)^2 + (Re^2-c^2)^2)/Re^4 = (Re^2 - c^2)/Re^2
We could complicate this further by taking the cross product of (0,0,1) and the previous vector to get the third vector in the orthonormal system, then obtain a parametric equation for the intersection of the plane and sphere on which the camera lies, but if you just want the simplest answer we won't do that.
Now we need to solve for t such that
|(ka,kb,kc)+t(-ac/Re^2,-bc/Re^2,1-c^2/Re^2)|^2 = h^2 Re^2
|(ka,kb,kc)|^2 + 2tk (a,b,c).u + t^2 |u|^2 = h^2 Re^2
Since (a,b,c) and u are perpendicular, the middle term drops out, and you have
t^2 = (h^2 Re^2 - k^2 Re^2)/|u|^2.
Substituting that value of t into
(ka,kb,kc)+t(-ac/Re^2,-bc/Re^2,1-c^2/Re^2)
gives the position of the camera in space.
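If it helps, here is the derivation above transcribed into a small plain-C++ routine (names are mine; it assumes (a,b,c) is not at a pole, where u would vanish):
#include <cmath>

struct Vec3 { double x, y, z; };

// Camera position from the derivation above: start at (ka,kb,kc) on the plane
// perpendicular to (a,b,c), then move along u (the projection of the polar axis
// onto that plane) until the distance from the earth's center equals h*Re.
// Degenerate if (a,b,c) is at a pole, because u is then the zero vector.
Vec3 cameraPosition(Vec3 p,          // (a,b,c): point on the earth to look at, |p| = Re
                    double Re,       // earth radius
                    double k,        // plane offset factor, k > 1
                    double h,        // camera distance from the center is h*Re, h > k
                    bool positiveT)  // pick one of the two symmetric solutions
{
    const double a = p.x, b = p.y, c = p.z;
    const Vec3 u = { -a*c/(Re*Re), -b*c/(Re*Re), 1.0 - c*c/(Re*Re) };
    const double uLen2 = (Re*Re - c*c) / (Re*Re);              // |u|^2
    double t = std::sqrt((h*h - k*k) * Re*Re / uLen2);         // t^2 = (h^2 - k^2) Re^2 / |u|^2
    if (!positiveT) t = -t;
    return { k*a + t*u.x, k*b + t*u.y, k*c + t*u.z };
}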
As for direction, you'll have to experiment with that a bit. Some vector that looks like
(a,b,c) + s(-ac/Re^2,-bc/Re^2,1-c^2/Re^2)
should work. It's hard to say a priori because it depends on the camera magnification, width of the view screen, etc. I'm not sure offhand whether you'll need positive or negative values for s. You may also need to rotate the camera viewport, possibly by 90 degrees, I'm not sure.
If this doesn't work out, it's possible I made an error. Let me know how it works out and I'll check.
I have azimuth, elevation and the direction vector of the sun. I want to place a view point along the sun ray direction at some distance. Can anyone describe or provide a link to a resource that will help me understand and implement the required steps?
I used a Cartesian coordinate system to find the direction vector from the azimuth and elevation, and then the following to find the viewport origin (see the image in this question):
x = distance
y = distance* tan azimuth
z = distance * tan elevation.
I want to find that distance value... how?
The azimuthal coordinate system is referenced to the NEH (geometric North, East, High(Up)) reference frame!
Your linked image references the -Y axis instead, which is not right unless you are not rendering the world but doing some nonlinear graph-plot projection, so which one is it?
BTW, here (ECEF/WGS84 and NEH) you can find out how to compute NEH for WGS84.
As far as I can see you have a bad conversion between coordinates, so just to be clear, this is how it looks:
On the left is the global Earth view and one NEH computed for its position (its origin). In the middle is a surface-aligned side view and on the right a surface-aligned top view. Blue, magenta and green are the input azimuthal coordinates; brown are the x, y, z Cartesian projections (where each coordinate lies on its axis), so:
Dist'= Dist *cos(Elev );
z = Dist *sin(Elev );
x = Dist'*cos(Azimut);
y =-Dist'*sin(Azimut);
if you use different reference frame or axis orientations then change it accordingly ...
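If it helps, here are those four lines as a small plain-C++ routine (angles in radians, NEH frame as above; flip the sign of y for a different axis orientation):
#include <cmath>

// Azimuth/elevation (in radians) to Cartesian, following the formulas above.
// Assumes the NEH frame from this answer; flip signs/axes for other conventions.
void azElToCartesian(double azimuth, double elevation, double dist,
                     double &x, double &y, double &z)
{
    const double distH = dist * std::cos(elevation);   // Dist' (horizontal component)
    z =  dist  * std::sin(elevation);
    x =  distH * std::cos(azimuth);
    y = -distH * std::sin(azimuth);                     // sign depends on the axis orientation
}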
I suspect you use 4x4 homogeneous transform matrices
for representing coordinate systems and also to hold your view-port so look here:
transform matrix anatomy
constructing the view-port
You need the X,Y,Z axis vectors and the O origin position. O you already have (at least you think so), and the Z axis is the ray direction, so you should have it too. Now just compute X,Y as an alignment to something (otherwise the view will rotate around the ray); I use NEH for that, so:
view.Z=Ray.Dir // ray direction
view.Y=NEH.Z // NEH up vector
view.X=view.Y x view.Z // cross product makes view.X perpendicular to Y and Z
view.Y=view.Z x view.X // just to make all three axes perpendicular to each other
view.O=ground position - (distance*Ray.Dir);
To make it a valid view_port you have to:
view = inverse(view)*projection_matrix;
You need inverse matrix computation for that
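Here is a rough Eigen sketch of the view-port construction above (an illustration of the pseudocode under the stated assumptions, not a drop-in implementation):
#include <Eigen/Geometry>

// Build the view basis from the ray direction and an up vector (NEH.Z),
// mirroring the pseudocode above, as a 4x4 homogeneous transform.
// Invert it and multiply by the projection matrix to get the final view-port.
Eigen::Matrix4d buildView(const Eigen::Vector3d &rayDir,     // ray direction
                          const Eigen::Vector3d &up,         // NEH up vector at the ground position
                          const Eigen::Vector3d &groundPos,
                          double distance)
{
    const Eigen::Vector3d Z = rayDir.normalized();
    Eigen::Vector3d       Y = up.normalized();
    const Eigen::Vector3d X = Y.cross(Z).normalized();   // perpendicular to Y and Z
    Y = Z.cross(X);                                       // make all three axes perpendicular
    const Eigen::Vector3d O = groundPos - distance * Z;   // view.O

    Eigen::Matrix4d view = Eigen::Matrix4d::Identity();
    view.block<3,1>(0,0) = X;
    view.block<3,1>(0,1) = Y;
    view.block<3,1>(0,2) = Z;
    view.block<3,1>(0,3) = O;
    return view;   // then: view = view.inverse() * projection_matrix
}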
If you want the whole thing
then you also want to add the Sun/Earth position computation; in that case look here:
complete Earth-Sun position by Kepler's equation
The distance
Now that it is clear what is behind it, you just need to set the distance. If you want to set it to the Sun, then it will be distance = 1.0 AU (astronomical unit), but that is a huge distance, and with perspective your Earth would be very small. Instead use some closer distance to match your view size; look here:
How to position the camera so that the object always has the same size
I'm trying to calculate the modelview matrix of my 2D camera but I can't get the formula right. I use the Affine3f transform class so the matrix is compatible with OpenGL. This is the closest I got by trial and error. This code rotates and scales the camera OK, but if I apply translation and rotation at the same time the camera movement gets messed up: the camera moves in a rotated fashion, which is not what I want. (This is probably due to the fact that I first apply the rotation matrix and then the translation.)
Eigen::Affine3f modelview;
modelview.setIdentity();
modelview.translate(Eigen::Vector3f(camera_offset_x, camera_offset_y, 0.0f));
modelview.scale(Eigen::Vector3f(camera_zoom_x, camera_zoom_y, 0.0f));
modelview.rotate(Eigen::AngleAxisf(camera_angle, Eigen::Vector3f::UnitZ()));
modelview.translate(Eigen::Vector3f(camera_x, camera_y, 0.0f));
[loadmatrix_to_gl]
What I want is for the camera to rotate and scale around an offset position in screen space {(0,0) is the middle of the screen in this case} and then be positioned along the global xy-axes in world space {(0,0) is also initially at the middle of the screen} to the final position. How would I do this?
Note that I have also set up an orthographic projection matrix, which may affect this problem.
If you want a 2D image, rendered in the XY plane with OpenGL, to (1) rotate counter-clockwise by a around point P, (2) scale by S, and then (3) translate so that pixels at C (in the newly scaled and rotated image) are at the origin, you would use this transformation:
translate by -P (this moves the pixels at P to the origin)
rotate by a
translate by P (this moves the origin back to where it was)
scale by S (if you did this earlier, your rotation would be messed up)
translate by -C
If the 2D image were being rendered at the origin, you'd also need to end by translating by some value along the negative z-axis to be able to see it.
Normally, you'd just do this with OpenGL basics (glTranslatef, glScalef, glRotatef, etc.), and you would do them in the reverse order that I've listed them. Since you want to use glLoadMatrix, you'd do things in the order I described with Eigen. It's important to remember that OpenGL is expecting a column-major matrix (but that seems to be the default for Eigen, so that's probably not a problem).
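For concreteness, here is one way the five steps can be written with Eigen, using the pre* variants so the calls read top-to-bottom in the same order as the list (P, a, S and C are placeholders, not names from the original question):
#include <Eigen/Geometry>

// Steps 1-5 from the answer, written top-to-bottom using the pre* variants
// (each pre* call takes effect after the ones already issued).
// P, a, S and C are placeholders for your own pivot, angle, scale and offset.
Eigen::Affine3f makeModelview(const Eigen::Vector3f &P, float a,
                              const Eigen::Vector3f &S, const Eigen::Vector3f &C)
{
    Eigen::Affine3f m = Eigen::Affine3f::Identity();
    m.pretranslate(-P);                                           // 1. move pixels at P to the origin
    m.prerotate(Eigen::AngleAxisf(a, Eigen::Vector3f::UnitZ()));  // 2. rotate by a
    m.pretranslate(P);                                            // 3. move the origin back
    m.prescale(S);                                                // 4. scale by S
    m.pretranslate(-C);                                           // 5. translate by -C
    return m;   // m.matrix().data() is column major, ready for glLoadMatrixf
}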
JCooper did a great job explaining the steps to construct the initial matrix.
However, I eventually solved the problem a bit differently. There were a few additional things and steps that were not obvious to me at the time; see the comments on JCooper's answer. The first is to realize that all matrix operations are relative.
Thus if you want to position or move the camera with absolute xy-axes, you must first decompose the matrix to extract its absolute position with unchanged axes. Then you translate the matrix by the difference of the old and new position.
Here is a way to do this with Eigen:
First compute the determinant D of the Affine2f matrix cmat's linear (rotation+scale) block RS, where RS is cmat.matrix().topLeftCorner(2,2) (i.e. cmat.linear()). With Eigen this is done with D = cmat.linear().determinant();. Next compute the inverse matrix invmat of RS; you can write the 2x2 inverse out by hand using D, or simply let Eigen do it with invmat = cmat.linear().inverse();.
The absolute camera position P is then given by P = invmat * -C where C is cmat.matrix().col(2).head<2>()
Now we can reposition the camera anywhere along the absolute axes while keeping the rotation+scaling the same: V = RS * (T - P), where RS is the same as before, T is the new position vector and P is the decomposed position vector.
cmat is then simply translated by V to move the camera: cmat.pretranslate(V)
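A compact Eigen sketch of those steps (function and variable names are mine; adjust signs if your modelview convention differs):
#include <Eigen/Geometry>

// Reposition the camera to the absolute world position T while keeping the
// current rotation+scale, following the decomposition described above.
void repositionCamera(Eigen::Affine2f &cmat, const Eigen::Vector2f &T)
{
    const Eigen::Matrix2f RS     = cmat.linear();         // rotation+scale block (top-left 2x2)
    const Eigen::Matrix2f invmat = RS.inverse();
    const Eigen::Vector2f C      = cmat.translation();    // translation column
    const Eigen::Vector2f P      = invmat * (-C);          // current absolute camera position
    const Eigen::Vector2f V      = RS * (T - P);           // translation needed in matrix space
    cmat.pretranslate(V);                                   // move the camera
}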
I'm trying to figure out some calculations using arcs in 3D space but am a bit lost. Let's say that I want to animate an arc in 3D space to connect two x,y,z coordinates (both coordinates have a z value of 0, and are just points on a plane). I'm controlling the arc by sending it a starting x,y,z position, a rotation, a velocity, and a gravity value. If I know both the x,y,z coordinates that need to be connected, is there a way to calculate the necessary rotation, velocity, and gravity values to connect the starting x,y,z coordinate to the ending one?
Thanks.
EDIT: Thanks tom10. To clarify, I'm making "arcs" by creating a parabola with particles. I'm trying to figure out how to determine where a parabola (formed by a series of particles, starting from a beginning x,y,z with a velocity, rotation, and gravity) will end (the last x,y,z coordinates). So if these are the two coordinates that need to be connected:
x1=240;
y1=140;
z1=0;
x2=300;
y2=200;
z2=0;
how can the rotation, velocity, and gravity of this parabola be calculated, using only these variables to start the formation of the parabola:
x1=240;
y1=140;
z1=0;
rotation;
velocity;
gravity;
I am trying to keep the angle a constant value.
This link describes the ballistic trajectory to "hit a target at range x and altitude y when fired from (0,0) and with initial velocity v the required angle(s) of launch θ", which is what you want, right? To get your variables into the right form, set the rotation angle (in the x-y plane) so you're pointing in the right direction, that is atan2(y2-y1, x2-x1). From then on, to match the usual terminology for the 2D problem, rewrite your z as y, and the horizontal distance to the target (which is sqrt((x2-x1)^2 + (y2-y1)^2)) as x; then you can directly use the formula in the link.
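In case it helps, here is the angle-of-reach formula from that link as a small C++ routine (variable names are mine; x and y here are the rewritten horizontal distance and altitude):
#include <cmath>

// Angle(s) of launch needed to hit a target at horizontal distance x and altitude y,
// fired from the origin with speed v under gravity g (angle-of-reach formula from the link).
// Returns false if the target is out of reach at that speed.
bool launchAngles(double v, double g, double x, double y,
                  double &thetaLow, double &thetaHigh)
{
    const double disc = v*v*v*v - g * (g*x*x + 2.0*y*v*v);
    if (disc < 0.0) return false;                  // unreachable
    const double root = std::sqrt(disc);
    thetaLow  = std::atan((v*v - root) / (g*x));   // flatter trajectory
    thetaHigh = std::atan((v*v + root) / (g*x));   // steeper trajectory
    return true;
}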
Do the same as you'd do in 2D. You just have to transform your figures by rotating the axes so that one of the coordinates becomes zero; then solve and undo the rotation.