Trying to find the relative transformation between two positions - XNA - math

I have written a simple AR program in XNA and I am now trying to find the relative transformation between my 2 markers.
I have located my markers relative to my camera and have extracted translation and rotation matrices for them.
What I am trying to do is find the relative translation needed to get from marker 1 to marker 2. For instance, if marker 1 and marker 2 were lying on the same Z plane, the Z translation component would be 0 mm.
The image below is the application working for 2 positions on the same plane:
I assumed that by simply multiplying the matrix of the second marker by the inverse of the first marker's matrix I could get the translation. However, I am getting completely wrong results.
The code I am running is as follows:
// Per marker: estimate the pose relative to the camera.
posit.EstimatePose(points, out matrix, out trans);

float yaw, pitch, roll;
matrix.ExtractYawPitchRoll(out yaw, out pitch, out roll);

// The sign flips adapt the estimated pose to XNA's coordinate conventions.
Matrix rotation =
    Matrix.CreateFromYawPitchRoll(-yaw, -pitch, roll);
Matrix translation =
    Matrix.CreateTranslation(new Vector3(trans.X, trans.Y, -trans.Z));
Matrix complete = rotation * translation;

// Store the rotation, translation and combined matrix for this marker.
List<Matrix> all = new List<Matrix>();
all.Add(rotation);
all.Add(translation);
all.Add(complete);
matrixes.Add(all);
} // end of the per-marker loop
// Relative transform: inverse of marker 1's combined matrix times marker 2's.
Matrix res = Matrix.Invert(matrixes[0][2]) * matrixes[1][2];
Vector3 scaleR;
Vector3 translationR;
Quaternion rotationR;
res.Decompose(out scaleR, out rotationR, out translationR);
The result:
TranslationR : {X:-103.4285 Y:-104.1754 Z:104.9243}
I have overlaid 3D axes onto the image as shown above using XNA, so I assume the rotation and translation relative to the camera have been worked out correctly.
It seems like I am doing something wrong along the way when calculating the translation. I would definitely not expect the Z to equal 104 mm. I was expecting something along the lines of:
{X:0 Y:150 Z:0}

I've done something similar to this before; however, it was using 3x3 matrices in a 2D environment (with X,Y translate, rotate, skew). Are the matrices in question 4x4?
Yes, you are right: to find the matrix that transforms object A with matrix M1 to object B with matrix M2, you can compute M1' * M2 (where M1' is the inverse of M1).
The problem you may be running into is that a matrix is composed of rotation, translation, scale and other transformations (e.g. skew/perspective). Decomposing a matrix into its component parts often yields a non-unique answer. It's like quadratic equations: there is more than one solution.
Another issue may be that matrix multiplication is not commutative and you are simply performing it the wrong way around. If you compute M1' * M2 and M2 * M1', you will get different results.
Please give it a try (switching the matrix order). Also, I'd look into the matrix decomposition function you used: what values of rotation and scaling are you getting at the output? Are your objects rotated or scaled? If not, you should get an identity rotation and unit scale. Note that it is possible for more than one combination of rotation and translation to produce the same end result, and the decomposition function doesn't know which one you are looking for.
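For example, a quick way to test both orderings, reusing the matrixes list from the code in the question (a sketch, not a drop-in fix):
// Both candidate orderings; only one will match your convention.
Matrix resA = Matrix.Invert(matrixes[0][2]) * matrixes[1][2];
Matrix resB = matrixes[1][2] * Matrix.Invert(matrixes[0][2]);
Vector3 scaleA, transA, scaleB, transB;
Quaternion rotA, rotB;
resA.Decompose(out scaleA, out rotA, out transA);
resB.Decompose(out scaleB, out rotB, out transB);
// Compare transA and transB against the expected offset; if the markers
// are not rotated relative to each other, the scales should be ~(1,1,1)
// and the rotations near identity.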
To extract just the translation component, you can use the method from this page:
v_t = (M14, M24, M34)^T
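Note that the formula above assumes column-major storage. XNA's Matrix type is row-major, so the translation lives in M41/M42/M43 and is also exposed directly through the Translation property; a minimal sketch using the res matrix from your code:
// Row-major XNA layout: translation is the fourth row.
Vector3 vt = new Vector3(res.M41, res.M42, res.M43);
// Equivalent built-in shortcut:
Vector3 vt2 = res.Translation;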
What do you get when you try that?

What I am trying to do is to find out the relative translation to get
to marker 2 from marker 1.
Vector3 relativeTranslation = marker2Matrix.Translation - marker1Matrix.Translation;
My answer seems overly simplistic, so maybe I'm not grasping your question completely, but this creates a vector that, when added to marker 1's location (translation), will get you to marker 2's location.
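One caveat: that difference is expressed in world (camera) space. If an expected result like {X:0 Y:150 Z:0} is meant in marker 1's local frame, the world-space difference still has to be rotated into that frame; a sketch, assuming marker1Matrix decomposes cleanly:
Vector3 worldOffset = marker2Matrix.Translation - marker1Matrix.Translation;
// Re-express the offset along marker 1's own axes by undoing its rotation
// (translation plays no part when transforming a direction).
Vector3 scale1, trans1;
Quaternion rot1;
marker1Matrix.Decompose(out scale1, out rot1, out trans1);
Vector3 localOffset = Vector3.Transform(worldOffset, Quaternion.Inverse(rot1));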

Related

Producing a forward/directional vector from just the euler rotation components and position

I'm currently having quite a lot of trouble producing a forward vector based on my accessible transform components. I'm working within SparkAR, where objects are built around a transform class with position, rotation (euler) and scale, as expected; however, it doesn't provide any built-in methods for retrieving general information such as an object's forward and up vectors, a transform matrix, etc.
This leaves me with the task of trying to put this together myself from the available data. The end result is simply to be able to calculate the angle between the forward vectors of 2 objects, mapped to a 0-1 range and used as a texture blending value. The goal is for it to simulate fake dynamic lighting through texture blends, based on how similar the local object's forward vector is to the forward vector of the directional light.
I must have read 20+ different StackOverflow results covering similar questions by this point, but I've been unable to get correct results based on my testing of their suggestions, and at this point I feel like I need some more direct feedback on my method before I tear my eyes out.
My current process is as follows:
1) Retrieve euler rotation from the animated joint I want to compare against (it's in radians)
2) Convert radian values to degrees
3) Attempt to convert the euler values to a forward vector using the following calculation sample:
x = cos(yaw)cos(pitch)
y = sin(yaw)cos(pitch)
z = sin(pitch)
I tried a couple of other variations on that for different axis orders, which didn't provide much of a change at all.
Some small notes before continuing: X is pitch, Y is yaw and Z is roll. Positive Z is towards the screen, positive X is to the right, positive Y is up.
4) Normalise the vector (although it's unnecessary given the results I get).
5) Perform a dot product of the vector against a set direction vector; in this case, for testing purposes, I'm simply using (0,0,1).
6) Compare the resulting value against the expected result - incorrect.
Within a single 90 degree turn, the value retrieved from the dot product, which by my understanding should be 1 when facing the same direction and -1 when facing the inverse, oscillates between these two range endpoints 15-20 times.
Another thing worth noting here is that I did have to swap the Y and Z components of the calculation to produce a non-zero result, so the current forward vector calculation from euler is as follows:
x = cos(yaw)cos(pitch)
y = sin(pitch)
z = sin(yaw)cos(pitch)
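For reference, a minimal sketch of steps 3-5 with that swapped formula (written in C# here purely for concreteness; SparkAR itself is scripted, and all names are mine). One thing the sketch makes explicit: the trig functions expect radians, so the degree values from step 2 must not be what gets fed in:
// Forward vector from yaw (Y) and pitch (X); angles must be in RADIANS.
// Roll does not affect the forward direction.
double x = Math.Cos(yaw) * Math.Cos(pitch);
double y = Math.Sin(pitch);
double z = Math.Sin(yaw) * Math.Cos(pitch);
// Step 5: dot product against the reference direction (0,0,1);
// +1 = same direction, -1 = opposite, 0 = perpendicular.
double dot = z; // (x,y,z) . (0,0,1) reduces to the z component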
I'm really not sure where to go from here to produce the result I'm looking for, as I simply don't understand what about the current calculations is going wrong, so I can't begin fixing it.
The passed-in vector, pre-dot-product, at the rotations 0, 90, 180 and 270 is as follows:
( 1 , 0, -0.00002)
(-0.44812, 0, -0.89397)
( 1 , 0, 0.00002)
(-0.44812, 0, 0.89397)
I can see that the euler angles going into the calculations are definitely correct, so the RadToDeg conversion isn't screwing up the input.
Is the calculation I'm using for trying to produce a forward vector from euler rotations incorrect? Is there something I should be using instead?
Any advice on moving forward with this issue would be much appreciated.
Thanks.

Project 3D velocity values from vector field around a sphere to create flow lines

I just cannot figure out how to make a point with a given velocity move around in Cartesian space in my visualization while staying on a sphere (planet).
The input:
Many points with:
A Vector3 position in XYZ (lat/lon coordinates transformed with the spherical function below).
A Vector3 velocity (e.g. 1.0 m/s eastward, 0.0 m/s elevation change, 2.0 m/s northward).
Note these are not degrees, just meters/second, which are similar to my world-space units.
Just adding the velocities to the point locations will make the points fly off the sphere, which makes sense. Therefore the velocities need to be transformed so that the points stay on the sphere.
Goal: The goal is to create some flowlines around a sphere, for example like this:
Example image of vectors around a globe
So, I have been trying variations on the basic idea of taking the normal from the center of my sphere to the point, crossing it with the velocity to get a perpendicular vector, and crossing again to get a tangent:
// Sphere is always at (0,0,0); the subtraction is kept just for completeness.
float3 normal = objectposition - float3(0,0,0);
// Vector perpendicular to both the normal and the velocity.
float3 tangent = cross(normal, velocity);
// Cross with the normal again to get the surface-tangent direction.
float3 newVector = cross(normal, tangent);
// Rescale with the length (magnitude) of the original velocity so that
// the speed is part of the velocity again (note: float3, not float).
float3 final_velocity = normalize(newVector) * length(velocity);
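For completeness, the projected velocity would then be used to advance the point, after which the point is re-snapped to the sphere surface so numerical drift cannot accumulate; a small C#-style sketch (dt is a hypothetical time step in seconds, use float3/Vector3 as your environment dictates):
// Advance along the surface-tangent velocity, then re-project
// onto the sphere of the given radius.
position += final_velocity * dt;
position = Vector3.Normalize(position) * radius;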
However, this only works for an area of the data; it looks like it only works on half of the western hemisphere (say, the US). To get it (partially) working in the south-eastern area (say, South Africa) I had to switch the U and V components.
The XYZ coordinates of the sphere are created using spherical coordinates:
x = radius * Math.Cos(lat) * Math.Cos(lon);
y = radius * Math.Sin(lat);
z = radius * Math.Cos(lat) * Math.Sin(lon);
Of course I have also tried all kinds of variations: multiplying with different "up/right" vectors like float3(0,1,0) or float3(0,0,1), switching around the U/V/W components, etc., to transform the velocity into something that works well. But after about 30 hours of making no progress, I hope that someone can help me with this and point me in the right direction. The problem is basically that only a part of the sphere is correct.
Considering that a part of the data visualizes just fine, I think it should be possible with cross and dot products. As performance is really important here, I am trying to stay away from 'expensive' trigonometry operations if possible.
I have tried switching the velocity components, and in all cases one area/hemisphere works fine and the others don't. For example, switching U and V around (ignoring W for a while) makes both Africa and the US work well. But starting halfway across the US, things go wrong again.
To illustrate the issue a bit better, a couple of images. The large purple image has been generated using QGIS3, and shows how it should be:
Unfortunately I have a new SO account and cannot post images yet. Therefore a link, sorry.
Correct: Good result
Incorrect: Bad result
Really hope that someone can shed some light on this issue. Do I need a rotation matrix to rotate the velocity vector, or is multiplying with (a correct) normal/tangent enough? It looks like that to me, except for these strange oddities, and somehow I have the feeling I am overlooking something trivial.
However, math is really not my thing and deciphering formulas is quite a challenge for me. So please bear with me and try to keep the language relatively simple (vector function names are much easier for me than scientific math notation). That I got this far is already quite an achievement for me.
I tried to be as clear as possible, but if things are unclear, I am happy to elaborate further.
After quite some frustration I managed to get it done. I am posting just the key information that was needed to solve this, after weeks of reading and trying things.
The most important thing is to convert the velocity using rotation matrix no. 6 from ECEF to ENU coordinates. I tried to copy the matrix from the site, but it does not paste well. So, some code instead:
Matrix3x3:
-sinLon,          cosLon,           0,
-cosLon * sinLat, -sinLon * sinLat, cosLat,
cosLon * cosLat,  sinLon * cosLat,  sinLat
Lon/Lat has to be acquired through a Cartesian to polar coordinate conversion function for the given location where your velocity applies.
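With the spherical mapping from the question, that conversion is just the inverse trig; a sketch (y-up convention, angles in radians):
// Inverse of the spherical mapping used to build the sphere.
lat = Math.Asin(y / radius);
lon = Math.Atan2(z, x);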
I would have preferred a method that required no sin/cos functions, but I am not sure that is possible after all.
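For illustration, a sketch of how the matrix is applied (C#-style; all names are mine, and it assumes a z-up ECEF frame as in the linked reference, so with the y-up spherical mapping used earlier on this page the axes would need to be permuted accordingly):
// Apply the ECEF -> ENU rotation above to a velocity v = (vx, vy, vz)
// at the given lat/lon (radians).
double sLat = Math.Sin(lat), cLat = Math.Cos(lat);
double sLon = Math.Sin(lon), cLon = Math.Cos(lon);
double east  = -sLon * vx        + cLon * vy;
double north = -cLon * sLat * vx - sLon * sLat * vy + cLat * vz;
double up    =  cLon * cLat * vx + sLon * cLat * vy + sLat * vz;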

OpenGL: equation of the line going through a point defined by a 4x4 matrix? (camera, for example)

I would like to know the set of 3 equations (in world coordinates) of the line going through my camera (perpendicular to the camera screen), the position and rotation of my camera in world coordinates being defined by a 4x4 matrix.
Any idea?
A parametric line is simple: just extract the Z-axis direction vector Z and the origin point O from the direct camera matrix (see the link below on how to do it). Then any point P on your line is defined as:
P(t) = O + t*Z
where t is your parameter. The camera view direction is usually -Z for OpenGL perspective; in such a case:
t = (-inf,0>
Depending on your projection you might want to use:
t = <-z_far,-z_near>
The problem is that there are many combinations of conventions, so you need to know whether your matrix is in row-major or column-major order (so you know whether the direction vectors and origin are in rows or columns). Also, the camera matrix in gfx is usually the inverse one, so you may need to invert it first. For more info about this see:
Understanding 4x4 homogenous transform matrices
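As a concrete sketch with XNA-style row-major matrices (so directions sit in rows; with column-major OpenGL matrices you would read columns instead, and invert the view matrix first as noted above; cameraWorld is a hypothetical name):
// Origin and view axis from a direct camera world matrix.
// In XNA, Translation is the fourth row and Backward the third row (+Z);
// the OpenGL view direction would be -Z, i.e. cameraWorld.Forward.
Vector3 O = cameraWorld.Translation;
Vector3 Z = cameraWorld.Backward;
// Any point on the line, for a chosen parameter float t:
Vector3 P = O + t * Z;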

How to calculate rotation axis and angle?

I am trying to rotate a model in 3D so that it faces the right direction. The rotation I want is fairly trivial and can be broken down into two steps:
Rotate the model 90 degrees on its x-axis.
Rotate the model 180 degrees on its z-axis (relative to the first rotation).
The way to set a model's rotation in the framework I'm using (openFrameworks) is by calling its setRotation method. This method takes an angle, as well as floats x, y and z that specify the axis of rotation. How do I calculate the axis of rotation and angle for this particular rotation? I can't rotate the model two times sequentially because any call to setRotation overwrites previous rotations.
Please let me know if I can provide more information or clarity.
EDIT: In case anyone has the same question, this post helped me a lot.
Weird that you cannot apply more than one transform... maybe you are just using the wrong function, but anyway:
If you have direct access to the transform matrix (or via get/set):
google transform matrices if you do not have the background knowledge
I suspect you are using 4x4 homogeneous Cartesian transform matrices
transform matrix anatomy
generate the first rotation matrix and store it in M1
you can use setRotation for that
generate the second rotation matrix and store it in M2
multiply them: M = M1 * M2
use this M instead of setRotation
If you do not have direct access to the transform matrix and have to use just setRotation:
in that case you have to go through quaternions; the 4D angle/axis vector you call setRotation with can be derived from one
google quaternion math and find how to combine 2 rotations
I do not use them myself so I cannot help with that, but there are equations out there
that convert a 3x3 rotation matrix into a quaternion and back,
so you can still use the algorithm above:
obtain M
extract the rotation matrix from it (it is just the submatrix you get by omitting the last row and column)
compute the quaternion from it
and call setRotation with the result
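A sketch of that algorithm (in XNA-style C# rather than openFrameworks C++, purely for consistency with the rest of this page; the clamp guards against rounding just outside [-1,1]):
// Step 1: the two component rotations (90 degrees on x, then 180 on z).
Matrix m1 = Matrix.CreateRotationX(MathHelper.PiOver2);
Matrix m2 = Matrix.CreateRotationZ(MathHelper.Pi);
// Step 2: combine them (order matters; swap if the result is mirrored).
Matrix m = m1 * m2;
// Step 3: rotation matrix -> quaternion -> axis/angle, which is exactly
// the form setRotation(angle, x, y, z) wants.
Quaternion q = Quaternion.CreateFromRotationMatrix(m);
float angle = 2f * (float)Math.Acos(MathHelper.Clamp(q.W, -1f, 1f));
float s = (float)Math.Sqrt(Math.Max(0f, 1f - q.W * q.W));
Vector3 axis = (s < 1e-6f)
    ? Vector3.UnitX // angle ~ 0, so the axis is arbitrary
    : new Vector3(q.X, q.Y, q.Z) / s;
// openFrameworks generally works in degrees rather than radians:
float degrees = MathHelper.ToDegrees(angle);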

Calculating modelview matrix for 2D camera using Eigen

I'm trying to calculate the modelview matrix of my 2D camera but I can't get the formula right. I use the Affine3f transform class so the matrix is compatible with OpenGL. This is the closest I got by trial and error. This code rotates and scales the camera fine, but if I apply translation and rotation at the same time the camera movement gets messed up: the camera moves in a rotated fashion, which is not what I want. (This is probably due to the fact that I first apply the rotation matrix and then the translation.)
Eigen::Affine3f modelview;
modelview.setIdentity();
// Screen-space offset: the pivot around which rotation/zoom should happen.
modelview.translate(Eigen::Vector3f(camera_offset_x, camera_offset_y, 0.0f));
modelview.scale(Eigen::Vector3f(camera_zoom_x, camera_zoom_y, 0.0f));
modelview.rotate(Eigen::AngleAxisf(camera_angle, Eigen::Vector3f::UnitZ()));
// Intended world-space camera position.
modelview.translate(Eigen::Vector3f(camera_x, camera_y, 0.0f));
[loadmatrix_to_gl]
What I want is for the camera to rotate and scale around the offset position in screen space {(0,0) is the middle of the screen in this case} and then be positioned along the global xy-axes in world space {(0,0) is also initially at the middle of the screen} to the final position. How would I do this?
Note that I have also set up an orthographic projection matrix, which may affect this problem.
If you want a 2D image, rendered in the XY plane with OpenGL, to (1) rotate counter-clockwise by a around point P, (2) scale by S, and then (3) translate so that the pixels at C (in the newly scaled and rotated image) end up at the origin, you would use this transformation:
translate by -P (this moves the pixels at P to the origin)
rotate by a
translate by P (this moves the origin back to where it was)
scale by S (if you did this earlier, your rotation would be messed up)
translate by -C
If the 2D image is being rendered at the origin, you'd also need to end by translating by some value along the negative z axis to be able to see it.
Normally, you'd just do this with OpenGL basics (glTranslatef, glScalef, glRotatef, etc.), and you would do them in the reverse order from how I've listed them. Since you want to use glLoadMatrix, you'd do things in the order described above with Eigen. It's important to remember that OpenGL expects a column-major matrix (but that seems to be the default for Eigen, so it's probably not a problem).
JCooper did a great job explaining the steps to construct the initial matrix.
However, I eventually solved the problem a bit differently. There were a few additional things and steps that were not obvious to me at the time; see the comments on JCooper's answer. The first is to realize that all matrix operations are relative.
Thus, if you want to position or move the camera along absolute xy-axes, you must first decompose the matrix to extract its absolute position with unchanged axes. Then you translate the matrix by the difference between the old and new positions.
Here is a way to do this with Eigen:
First compute the scalar determinant D of the Affine2f matrix cmat. With Eigen this is done with D = cmat.linear().determinant();. Next compute the 'reverse' matrix matrev of the current rotation+scale block RS using D: matrev = (RS.array() / (1.0f / D)).matrix();, where RS is cmat.matrix().topLeftCorner(2,2).
The absolute camera position P is then given by P = matrev * -C, where C is cmat.matrix().col(2).head<2>().
Now we can reposition the camera anywhere along the absolute axes while keeping the rotation+scaling the same: V = RS * (T - P), where RS is the same as before, T is the new position vector and P is the decomposed position vector.
cmat is then simply translated by V to move the camera: cmat.pretranslate(V).
