Oriented point (XYZ+Yaw/Pitch/Roll) frame to frame transform - math

I have a 3D oriented point (XYZ + Yaw/Pitch/Roll). Let's say this point is based on "user frame 1" (UF1); the frame is defined in relation to the world frame (WF).
I also have another "user frame", let's call it "user frame 2" (UF2), that is also defined in reference to the world frame.
How would I take my 3D oriented point (6 coordinates: 3 location + 3 rotation) from one frame to another?
For example, since all user frames are related to the world frame, how would I calculate (transform) my 3D oriented point from UF1 to WF? Or vice versa, or any combination? (UF1->WF, WF->UF2, etc.)
The resulting point must have the 6 coordinates (3 location + 3 rotation) in relation to the destination frame.
PS: I'm mainly working in C#, so if possible code sample or pseudo-code to accomplish this would be appreciated.

What you are looking for is the transformation matrix from one coordinate system to another.
It is a 4x4 matrix fully determined by the 3 translation parameters and the 3 rotation angles between the two coordinate systems.
The 9 rotation coefficients depend on the rotation angles (Euler angles or a quaternion, depending on what you are using). Be aware that if you are using Euler angles, the order is important: a rotation vector (rx, ry, rz) does not describe the same rotation if you turn around the X axis first, then Y, then Z, as it does if you turn first around Z, then Y, then X, for example.
I suggest you read this, it will help you compute the matrix:
http://brainvoyager.com/bv/doc/UsersGuide/CoordsAndTransforms/SpatialTransformationMatrices.html
Note that all this uses homogeneous coordinates.
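As a sketch of the whole pipeline, here it is in Python (chosen for brevity; the structure translates directly to C#). The Z-Y-X Euler order and all frame/point values are assumptions for illustration:

```python
import math

def pose_to_matrix(x, y, z, yaw, pitch, roll):
    """4x4 homogeneous matrix from XYZ + Yaw/Pitch/Roll. The Euler order
    is an assumption here: intrinsic Z (yaw), then Y (pitch), then X (roll)."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr, x],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr, y],
        [-sp,     cp * sr,                cp * cr,                z],
        [0.0,     0.0,                    0.0,                    1.0],
    ]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def matrix_to_pose(m):
    """Recover the 6 coordinates (same Z-Y-X convention; ignores gimbal lock)."""
    yaw = math.atan2(m[1][0], m[0][0])
    pitch = math.asin(-m[2][0])
    roll = math.atan2(m[2][1], m[2][2])
    return (m[0][3], m[1][3], m[2][3], yaw, pitch, roll)

# UF1 as seen from WF (made-up values), and a point expressed in UF1:
uf1_in_wf = pose_to_matrix(1.0, 2.0, 0.0, math.radians(90), 0.0, 0.0)
point_in_uf1 = pose_to_matrix(1.0, 0.0, 0.0, 0.0, 0.0, 0.0)

# UF1 -> WF is a single matrix product; WF -> UF1 would use the inverse.
point_in_wf = matrix_to_pose(mat_mul(uf1_in_wf, point_in_uf1))
```

For the reverse direction (WF -> UF1), invert the frame matrix first; for a rigid transform the inverse is [R^T, -R^T p; 0, 1].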

Related

Rotate Image Orientation(Patient) in DICOM

I have extracted a 3D surface from an MRI acquisition and the coordinates of the points describing this surface are (I believe) with respect to the reference system of the first image of the series (I mean that the origin corresponds to the Image Position(Patient) and the axes orientation to the Image Orientation(Patient)).
I have another set of images with a different Image Position(Patient) and a different Image Orientation(Patient); I want to rotate and translate the surface extracted from the first set in order to have it match exactly the second set of images.
I'm having trouble finding the correct 4x4 matrix that would do the job; once I have it, I know how to apply it to my surface.
Any kind of help would be greatly appreciated, thank you.
Simon
This page explains how to form a transformation matrix from the geometry information in the DICOM headers. These transformation matrices are used to transform from the volume coordinate system (pixel-x, pixel-y, slice number) to the patient/world coordinate system (in millimeters).
The basic idea for transforming from volume 1 to volume 2 is to transform from volume 1 to patient coordinates and then from patient coordinates to volume 2. Multiplying both matrices yields the matrix that transforms directly from volume 1 to volume 2.
Caution: Obviously, there can be no guarantee that every coordinate in v1 matches a coordinate in v2, i.e. the stacks may have different size and/or position.
So you have:
M1 - the matrix to transform from volume 1 to the world coordinate system and
M2 - the matrix to transform from volume 2 to the world coordinate system
Then, using column vectors:
M2^(-1) * M1 is the matrix to transform a position vector from volume 1 to volume 2, and
M1^(-1) * M2 is the matrix to transform a position vector from volume 2 to volume 1
(input and output are pixel-x, pixel-y, slice number in the respective volumes).
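The composition can be sketched as follows, with column vectors and plain nested lists for the matrices; the spacings and origins are made up so the result is checkable by hand:

```python
# Hypothetical header values: spacing and origin only (no rotation),
# purely to keep the example simple.
M1 = [[2.0, 0.0, 0.0, 10.0],   # volume 1: 2 mm voxels, origin (10, 20, 30)
      [0.0, 2.0, 0.0, 20.0],
      [0.0, 0.0, 2.0, 30.0],
      [0.0, 0.0, 0.0, 1.0]]
M2 = [[1.0, 0.0, 0.0, 10.0],   # volume 2: 1 mm voxels, same origin
      [0.0, 1.0, 0.0, 20.0],
      [0.0, 0.0, 1.0, 30.0],
      [0.0, 0.0, 0.0, 1.0]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_inv(m):
    """Invert a 4x4 matrix by Gauss-Jordan elimination with pivoting."""
    a = [row[:] + [float(i == j) for j in range(4)] for i, row in enumerate(m)]
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        p = a[col][col]
        a[col] = [v / p for v in a[col]]
        for r in range(4):
            if r != col:
                f = a[r][col]
                a[r] = [v - f * w for v, w in zip(a[r], a[col])]
    return [row[4:] for row in a]

# v1 -> world, then world -> v2, composed into one matrix:
v1_to_v2 = mat_mul(mat_inv(M2), M1)

def apply_h(m, voxel):
    """Apply a 4x4 matrix to a (pixel-x, pixel-y, slice) triple."""
    x, y, z = voxel
    return [m[i][0] * x + m[i][1] * y + m[i][2] * z + m[i][3] for i in range(3)]
```

Here voxel (1, 0, 0) of the 2 mm volume lands on voxel (2, 0, 0) of the 1 mm volume, as expected.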

inverse interpolation of multidimensional grids

I am working on a project interpolating sample data {(x_i, y_i)} where the inputs x_i lie in 4D space and the outputs y_i lie in 3D space. I need to generate look-up tables for both directions. I managed to generate the 4D -> 3D table, but the 3D -> 4D one is tricky. The sample data are not on regular grid points, and the mapping is not one-to-one. Is there any known method for treating this situation? I did some searching online, but what I found only covers 3D -> 3D mappings, which are not suitable for this case. Thank you!
To answer the questions of Spektre:
X(3D) -> Y(4D) is the case 1X -> nY
I want to generate a table such that for any given X we can find a value for Y. The sample data do not occupy the whole domain of X, but that's fine; we only need accuracy for points inside the domain of the sample data. For example, we have sample data like {(x1,x2,x3) -> (y1,y2,y3,y4)}. It is possible that we also have a sample {(x1,x2,x3) -> (y1_1,y2_1,y3_1,y4_1)}, but that is OK. We need a table where any (a,b,c) in space X corresponds to ONE (e,f,g,h) in space Y. There might be more than one choice, but we only need one. (Sorry for any confusing symbols.)
One possible way to deal with this: since I have already established a smooth mapping Y -> X, I can use Newton's method or some other method to search backwards for the point y corresponding to any given x. But it is not accurate enough, and it is time-consuming, because I need to do a search for each point in the table, and the error is the sum of the model error and the search error.
So I want to know whether it is possible to find a mapping that directly interpolates the sample data instead of doing the kind of search described in 3.
You are looking for projections/mappings.
As you mentioned, you have a projection X(3D) -> Y(4D) which is not one-to-one in your case, so which case is it: (1 X -> n Y), (n X -> 1 Y), or (n X -> m Y)?
You want to use a look-up table.
I assume you just want to generate all X for a given Y. The problem with non-(1 to 1) mappings is that you can use a look-up table only if it has
all valid points
or the mapping has some geometric or mathematical symmetry (for example, the distance between points in X and Y space is similar, and the mapping is continuous).
You cannot interpolate between generic mapped points, so the question is: what kind of mapping/projection do you have in mind?
First, the 1->1 projection/mapping interpolation:
if your X->Y projection mapping is suitable for interpolation,
then for 3D->4D use tri-linear interpolation. Find the closest 8 points (one in each direction along each axis, forming a grid cube) and interpolate between them along all 3 input dimensions.
if your X<-Y projection mapping is suitable for interpolation,
then for 4D->3D use quatro-linear interpolation. Find the closest 16 points (forming a grid hypercube) and interpolate between them along all 4 input dimensions.
Now what about 1->n or n->m projections/mappings?
That depends solely on the projection/mapping properties, which I know nothing of. Try to provide an example of your datasets; adding an image would be best.
[edit1] 1 X <- n Y
I would still use quatro-linear interpolation. You will still need to search your Y table, but if you group it like a 4D grid then it should be easy enough.
find the 16 closest points in the Y-table to your input Y point
These points should be the closest points to your Y in the +/- direction along each axis. In 3D it looks like this:
the red point is your input Y point
the blue points are the found closest points (grid); they do not need to be as symmetric as in the image.
Please don't ask me to draw a 4D example that makes sense :) (at least not to a sober mind)
interpolation
find the corresponding X points. If there is more than one per point, choose the one closest to the others ... Now you should have 16 X points and 16+1 Y points. From the Y points you just need to calculate the distances along the lines from your input Y point. These distances are used as the parameters for the linear interpolations. Normalize them to <0,1>, where
0 means 'left' and 1 means 'right' point
0.5 means exact middle
You will need this scalar distance in each Y-domain dimension. Now just compute all the X points along the linear interpolations until you get the corresponding red point in the X-domain.
With tri-linear interpolation (3D) there are 4+2+1=7 linear interpolations (as in the image). With quatro-linear interpolation (4D) there are 8+4+2+1=15 linear interpolations.
linear interpolation
X = X0 + (X1-X0)*t
X is interpolated point
X0,X1 are the 'left','right' points
t is the distance parameter <0,1>
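The cascade of lerps can be sketched for the 3D (tri-linear, 4+2+1) case; the corner values here are an assumed toy function f(x, y, z) = x + y + z:

```python
def lerp(x0, x1, t):
    """One linear interpolation: X = X0 + (X1 - X0) * t."""
    return x0 + (x1 - x0) * t

def trilinear(corners, tx, ty, tz):
    """corners[ix][iy][iz] holds the 8 cube-corner values; tx, ty, tz are
    the normalized <0,1> distances. 4 + 2 + 1 = 7 lerps in total."""
    c00 = lerp(corners[0][0][0], corners[1][0][0], tx)   # 4 lerps along x
    c10 = lerp(corners[0][1][0], corners[1][1][0], tx)
    c01 = lerp(corners[0][0][1], corners[1][0][1], tx)
    c11 = lerp(corners[0][1][1], corners[1][1][1], tx)
    c0 = lerp(c00, c10, ty)                              # 2 lerps along y
    c1 = lerp(c01, c11, ty)
    return lerp(c0, c1, tz)                              # 1 lerp along z

# Toy corner values f(x, y, z) = x + y + z on the unit cube:
corners = [[[float(ix + iy + iz) for iz in (0, 1)]
            for iy in (0, 1)] for ix in (0, 1)]
center = trilinear(corners, 0.5, 0.5, 0.5)   # 1.5 for this linear f
```

The 4D quatro-linear version is the same pattern one level deeper: 8 lerps along the first axis, then 4, 2, and 1.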

How do I get position and x/y/z axis out of a LH 4x4 world matrix?

As far as I know, Direct3D works with an LH coordinate system, right?
So how would I get the position and the x/y/z axes (the local orientation axes) out of an LH 4x4 (world) matrix?
Thanks.
In case you don't know: LH stands for left-handed
If the 4x4 matrix is what I think it is (a homogeneous rigid body transformation matrix, same as an element of SE(3)) then it should be fairly easy to get what you want. Any rigid body transformation can be represented by a 4x4 matrix of the form
g_ab = [ R, p;
0, 1]
in block matrix notation. The ab subscript denotes that the transformation takes the coordinates of a point represented in frame b and tells you what the coordinates are as represented in frame a. R here is a 3x3 rotation matrix, and p is a vector that, when the rotation matrix is the identity (no rotation), gives you the coordinates of the origin of b in frame a. Usually, however, a rotation is present, so you have to do as below.
The position of the coordinate system described by the matrix is given by applying the transformation to the point (0,0,0). This will tell you what world coordinates that point is located at. The trick is that, when dealing with SE(3), you have to append a 1 to points and a 0 to vectors, which makes them vectors of length 4 instead of length 3, and hence operable on by the matrix. So, to transform the point (0,0,0) in your local coordinate frame to the world frame, you right-multiply your matrix (let's call it g_SA) by the vector (0,0,0,1). To get the world coordinates of a vector (x,y,z) you multiply the matrix by (x,y,z,0). You can think of it this way: vectors are differences of points, so the 1 in the last element cancels away. So, for example, to find the representation of your local x-axis in world coordinates, you multiply g_SA*(1,0,0,0). To find the y-axis you do g_SA*(0,1,0,0), and so on.
The best place I've seen this discussed (and where I learned it from) is A Mathematical Introduction to Robotic Manipulation by Murray, Li and Sastry and the chapter you are interested in is 2.3.1.
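A minimal sketch of the extraction, assuming the row-major [R, p; 0, 1] column-vector layout described above (note that Direct3D conventionally stores transforms transposed for row vectors, in which case the axes and position are rows instead of columns):

```python
def decompose(g):
    """Split a 4x4 homogeneous transform g = [R, p; 0, 1] into the
    position and the three local axes, per the multiplications above."""
    position = [g[0][3], g[1][3], g[2][3]]  # g * (0,0,0,1)
    x_axis = [g[0][0], g[1][0], g[2][0]]    # g * (1,0,0,0)
    y_axis = [g[0][1], g[1][1], g[2][1]]    # g * (0,1,0,0)
    z_axis = [g[0][2], g[1][2], g[2][2]]    # g * (0,0,1,0)
    return position, x_axis, y_axis, z_axis

# Made-up example: 90-degree rotation about z plus a translation (1, 2, 3):
g_SA = [[0.0, -1.0, 0.0, 1.0],
        [1.0,  0.0, 0.0, 2.0],
        [0.0,  0.0, 1.0, 3.0],
        [0.0,  0.0, 0.0, 1.0]]
position, x_axis, y_axis, z_axis = decompose(g_SA)
```

The local x-axis (1,0,0) maps to world (0,1,0) here, as a 90-degree z-rotation should.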

Rotation matrix openCV

I would like to know how to find the rotation matrix for a set of features in a frame.
I will be more specific. I have 2 frames with 20 features each, let's say frame 1 and frame 2. I can estimate the locations of the features in both frames. For example, say a certain feature is at location (x, y) in frame 1, and I know exactly where it is in frame 2, say at (x', y').
My question is that the features have moved and probably rotated, so I want to know how to compute the rotation matrix. I know the rotation matrix for 2D:
But I don't know how to compute the angle. I tried the OpenCV function cv2DRotationMatrix(), but the problem, as I mentioned above, is that I don't know how to compute the angle for it. Another problem is that it returns a 2x3 matrix, which won't work out: if I take my 20x2 matrix (20 features with their (x, y) locations) and multiply it by that 2x3 matrix, I get a 20x3 matrix, which doesn't seem realistic since I'm working in 2D.
So what should I do? To be more specific again: how do I compute the angle to use in the matrix?
I'm not sure I've understood your question, but if you want to find out the angle of rotation resulting from an arbitrary transform...
A simple hack is to transform the points [0 0] and [1 0] and take the angle of the ray from the first transformed point to the second.
o = M • [0 0]
x = M • [1 0]
d = x - o
θ = atan2(d.y, d.x)
This doesn't consider skew and other non-orthogonal transforms, for which the notion of "angle" is vague.
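This hack can be sketched as follows, assuming a 2x3 affine matrix of the kind cv2DRotationMatrix() returns; the example matrix is a made-up 30-degree rotation with some translation:

```python
import math

def rotation_angle(transform):
    """Angle of a 2x3 affine transform [[a, b, tx], [c, d, ty]]:
    map (0,0) and (1,0), then take atan2 of the difference."""
    def apply(m, x, y):
        return (m[0][0] * x + m[0][1] * y + m[0][2],
                m[1][0] * x + m[1][1] * y + m[1][2])
    ox, oy = apply(transform, 0.0, 0.0)   # o = M . [0 0]
    px, py = apply(transform, 1.0, 0.0)   # x = M . [1 0]
    return math.atan2(py - oy, px - ox)   # angle of d = x - o

# Made-up 30-degree rotation plus translation (5, -2):
a = math.radians(30)
M = [[math.cos(a), -math.sin(a), 5.0],
     [math.sin(a),  math.cos(a), -2.0]]
```

The translation cancels out in the difference, so only the rotational part affects the result.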
Have a look at this function:
cvGetAffineTransform
You give it three points in the first frame and three in the second, and it computes the affine transformation matrix (translation + rotation).
If you want, you could also try
cvGetPerspectiveTransform
With that, you can get translation + rotation + skew + a lot of others.

How do you calculate the reflex angle given two vectors in 3D space?

I want to calculate the angle between two vectors a and b. Let's assume these are at the origin. This can be done with
theta = arccos((a . b) / (|a| * |b|))
However, arccos gives you an angle in [0, pi], i.e. it will never give you an angle greater than 180 degrees, but that is what I want. So how do you find out when the vectors have gone past the 180 degree mark? In 2D I would simply let the sign of the y-component of one of the vectors determine which quadrant the vector is in. But what is the easiest way to do it in 3D?
EDIT: I wanted to keep the question general, but here we go. I'm programming this in C, and the code I use to get the angle is theta = acos(dot(a, b) / (mag(a) * mag(b))), so how would you programmatically determine the orientation?
This works in 2D because you have a plane defined in which you define the rotation.
If you want to do this in 3D, there is no such implicit 2D plane. You could transform your 3D coordinates to a 2D plane going through all three points, and do your calculation inside this plane.
But, there are of course two possible orientations for the plane, and that will affect which angles will be > 180 or smaller.
I came up with the following solution that takes advantage of the direction change of the cross product of the two vectors:
Make a vector n = a X b and normalize it. This vector is normal to the plane spanned by a and b.
Whenever a new angle is calculated, compare the current normal with the old one. For the comparison, treat the old and current normals as points and compute the distance between them. If this distance is 2, the normal has flipped (i.e. the cross product a X b now points the opposite way).
You might want to use a threshold for the distance, as the distance right after a flip might be shorter than 2, depending on how the vectors a and b are oriented and how often you update the angle.
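A sketch of this flip test, with made-up vectors and an assumed threshold of 1.0 (halfway between 0 and 2):

```python
import math

def unit_normal(a, b):
    """Normalized cross product a x b."""
    n = (a[1] * b[2] - a[2] * b[1],
         a[2] * b[0] - a[0] * b[2],
         a[0] * b[1] - a[1] * b[0])
    mag = math.sqrt(sum(c * c for c in n))
    return tuple(c / mag for c in n)

def has_flipped(n_old, n_new, threshold=1.0):
    """Treat the two unit normals as points; a distance near 2 means the
    cross product flipped. The 1.0 threshold is an assumed halfway cut."""
    d = math.sqrt(sum((o - c) ** 2 for o, c in zip(n_old, n_new)))
    return d > threshold

a = (1.0, 0.0, 0.0)
n_before = unit_normal(a, (0.0, 1.0, 0.1))    # b just short of the flip
n_after = unit_normal(a, (0.0, -1.0, -0.1))   # b swung past the 180 mark
n_small = unit_normal(a, (0.1, 1.0, 0.0))     # a small, ordinary update
```

Here `n_before` to `n_after` is a flip (distance 2), while `n_before` to `n_small` is not.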
One solution that you could use:
What you effectively need to do is create a plane that one of the vectors is coplanar to.
Taking the cross product of both vectors gives you a vector perpendicular to both; crossing that with one of the vectors then gives the normal of a plane containing that vector (see the edit below). You can get the angle between this normal and the vector you need the signed angle for, and use that angle to determine the sign.
If the angle is greater than 90 degrees, then it is below the created plane; less than 90 degrees, and it is above.
Depending on cost of calculations, the dot product can be used at this stage instead of the angle.
Just make sure that you always calculate the normals by the same order of vectors.
This is more easily usable if you're using the XYZ axes as what you're comparing against, since you already have the vectors needed for the plane.
There are possibly more efficient solutions, but this is one I came up with.
Edit: clarification of created vectors
a X b = p. This is perpendicular to both a and b.
Then, do either:
a X p or b X p to create another vector that is the normal to the plane created by the 2 vectors. The choice of vector depends on which one you're trying to find the angle for.
Strictly speaking, two 3D vectors always have two angles between them: one less than or equal to 180 degrees, the other greater than or equal to 180. Arccos gives you one of them; you can get the other by subtracting from 360. Think of it this way: imagine two lines intersecting. You have 4 angles there, 2 of one value and 2 of another. What's the angle between the lines? There is no single answer. Same here. Without some kind of extra criterion, you cannot, in theory, tell which of the two angle values should be taken into account.
EDIT: So what you really need is an arbitrary example of fixing an orientation. Here's one: we look from the positive Z direction. If the plane between the two vectors contains the Z axis, we look from the positive Y direction. If the plane is YZ, we look from the positive X direction. I'll think how to express this in coordinate form, then edit again.
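Both values can be computed directly (a minimal sketch; which one to keep is exactly the extra criterion discussed above):

```python
import math

def both_angles(a, b):
    """Return (theta, reflex) for 3D vectors a and b: arccos gives theta
    in [0, pi]; the other angle is 2*pi - theta."""
    dot = sum(x * y for x, y in zip(a, b))
    mag = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    theta = math.acos(dot / mag)          # always in [0, pi]
    return theta, 2.0 * math.pi - theta   # the reflex counterpart
```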
