Convert local coordinates of aligned Agisoft camera outputs into a relative coordinate system

After aligning photos in Agisoft Metashape, the camera outputs are placed in an arbitrary local coordinate system; the export includes the omega, phi, and kappa angles and the corresponding rotation matrices.
The coordinates of image 0, for example, don't start at the origin as a reference; they start at arbitrary values.
Example showing two images:
`PhotoID, Omega, Phi, Kappa, r11, r12, r13, r21, r22, r23, r31, r32, r33
frame0 -2.6739376830612445 -26.8628145584408102 -2.6585413651258838 0.8911308174580325 -0.0252758246719107 0.4530419394092935 0.0413784373987947 0.9988138381614838 -0.0256659621981955 -0.4518558299889612 0.0416178974034010 0.8911196662181277
frame1 -1.7705163287532029 -26.2306519040478179 -1.6659380364959333 0.8966428925733227 -0.0154081202025439 0.4424862856965952 0.0260782313117883 0.9994971141912699 -0.0180400824547062 -0.4419858018640347 0.0277147714251388 0.8965938001098707`
How can I 1) translate these into a relative coordinate system with respect to image 0, and
2) translate these two into 2D coordinates, without worrying about the rotation values?
Thanks in advance
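A minimal NumPy sketch of the usual way to express frame1 relative to frame0, using the r11..r33 values from the two rows above. Whether Metashape's exported matrices are camera-to-world or world-to-camera depends on the export convention, so the composition order below is an assumption to verify, and R0, R1, R_rel are just illustrative names:

```python
import numpy as np

# Rotation matrices copied row-major from the r11..r33 columns above.
R0 = np.array([[ 0.8911308174580325, -0.0252758246719107,  0.4530419394092935],
               [ 0.0413784373987947,  0.9988138381614838, -0.0256659621981955],
               [-0.4518558299889612,  0.0416178974034010,  0.8911196662181277]])
R1 = np.array([[ 0.8966428925733227, -0.0154081202025439,  0.4424862856965952],
               [ 0.0260782313117883,  0.9994971141912699, -0.0180400824547062],
               [-0.4419858018640347,  0.0277147714251388,  0.8965938001098707]])

# Rotation of frame1 expressed in frame0's coordinate system.
# If the exported matrices are camera-to-world, the relative rotation is R0.T @ R1;
# if they are world-to-camera, it is R1 @ R0.T. Verify against Metashape's
# export convention before relying on either form.
R_rel = R0.T @ R1

# Camera centers (not shown in the export above) would be handled the same way:
# subtract frame0's center and rotate the difference into frame0's axes.
```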

Related

How to combine two 3x3 rotation matrices to align accelerometer axes with another coordinate system

I am trying to align the X, Y, Z axes from an accelerometer to the X, Y, Z axes of a vehicle.
My approach consists of two steps:
Step 1: Find a rotation matrix Rz to align the Z axes of both coordinate systems. I know the acceleration vector of the vehicle is A_veh=(0,0,1000) (values in mg) when it is stopped on a level surface, so I measure the acceleration reported by the accelerometer at that very moment and get A_acc=(6,64,1016) (values in mg). I then find a rotation matrix to align A_acc with the Z-axis of the vehicle (A_veh). I form the rotation matrix using the implementation of the Rodrigues formula listed here, and I get:
`Rz[][] = 0.999983 -0.000185 -0.005894
-0.000185 0.998022 -0.062866
0.005894 0.062866 0.998005`
To verify the rotation matrix, I multiply the A_acc vector by Rz and get an aligned vector A_z_aligned=(0.000000,0.000000,1018) (in mg), which is expected since all the acceleration is on the Z-axis of the vehicle.
Image showing debug info about the process of getting Rz
Step 2: Find a rotation matrix Rxy to align the X and Y axes of both coordinate systems. Measuring some acceleration of the vehicle with an external sensor, I get the acceleration vector A_veh=(-323,0,1009) for the vehicle, and from the accelerometer I get A_acc=(-11,-322,1009) (in mg). Repeating the procedure from step 1, I get the rotation matrix Rxy:
`Rxy[][] = 0.954307 0.135202 -0.266494
-0.050168 0.951624 0.303143
0.294151 -0.275922 0.914924`
To verify this rotation matrix, I multiply the A_acc vector by Rxy and get an aligned vector A_xy_aligned=(-322,0,1008), which is expected.
Image showing debug info about the process of getting Rxy
Question: How can I "combine" the two rotation matrices in order to have one rotation matrix that allows me to align any raw acceleration vector read from the accelerometer to the vehicle axes?
I have tried multiplying the raw acceleration vectors by Rz and then by Rxy, and it does not seem to work.
I have also tried multiplying both matrices, Rz*Rxy=R, and using R to align the raw vectors, but that does not work either.
I need some help with this topic.
Thanks,
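A minimal NumPy sketch of how composition order works for column vectors, with the matrices copied from above: the rotation applied first must sit closest to the vector, so applying Rz and then Rxy corresponds to the single matrix Rxy * Rz, not Rz * Rxy. Note also that for the combined matrix to represent the full alignment, Rxy has to be derived from measurements that have already been rotated by Rz; the variable names below are illustrative only:

```python
import numpy as np

Rz = np.array([[ 0.999983, -0.000185, -0.005894],
               [-0.000185,  0.998022, -0.062866],
               [ 0.005894,  0.062866,  0.998005]])
Rxy = np.array([[ 0.954307,  0.135202, -0.266494],
                [-0.050168,  0.951624,  0.303143],
                [ 0.294151, -0.275922,  0.914924]])

# Applying Rz first and then Rxy to a column vector v:
#   v_aligned = Rxy @ (Rz @ v) = (Rxy @ Rz) @ v
R = Rxy @ Rz

v_raw = np.array([6.0, 64.0, 1016.0])   # example raw reading from the question
v_aligned = R @ v_raw
```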

Convert earth-centric coordinate frame to coordinate frame aligned to tangential plane?

Given that earth is perfectly spherical with radius R.
The earth-centric coordinate system E is defined as follows:
The center of this sphere is the origin,
Earth's north pole represents the z-axis.
Latitude 0 and longitude 0 represent the x-axis.
Latitude 0 and longitude 90 represent the y-axis.
Now, at any given latitude, longitude, and altitude, we can define a local coordinate system S whose y-z plane is tangential to the earth's surface, whose z axis points toward the north pole, and whose x axis is perpendicular to this plane.
I need a 4x4 transformation matrix to transform a 3d point from earth-centric coordinate system E to this local coordinate system S.
The transformation matrix from S to E can be composed as a product of matrices:
1) Shift along the X axis by R + Altitude
2) Rotation about the Y-axis by Latitude
3) Rotation about the Z-axis by Longitude
Then take the inverse of this matrix to get the E-to-S transform.
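A minimal NumPy sketch of that composition; rot_y, rot_z, and translate_x are illustrative helpers, and the sign of the Y rotation depends on how the axes of S are oriented, so it should be checked against a known point:

```python
import numpy as np

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[ c, 0, s, 0],
                     [ 0, 1, 0, 0],
                     [-s, 0, c, 0],
                     [ 0, 0, 0, 1]], dtype=float)

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]], dtype=float)

def translate_x(d):
    m = np.eye(4)
    m[0, 3] = d
    return m

def s_to_e(lat_deg, lon_deg, alt, R=6371000.0):
    """Compose: shift along X by R+alt, rotate about Y by latitude, rotate about Z by longitude."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    # The negative latitude makes positive latitudes move the point toward +Z (north);
    # flip the sign if your S axes are defined the other way around.
    return rot_z(lon) @ rot_y(-lat) @ translate_x(R + alt)

def e_to_s(lat_deg, lon_deg, alt, R=6371000.0):
    """The E -> S transform is the inverse of the S -> E transform."""
    return np.linalg.inv(s_to_e(lat_deg, lon_deg, alt, R))
```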
Assuming that earth is spherical, this is actually not that hard.
Spherical coordinates to the rescue (see here)! A sphere can be parametrized by two angles (as already mentioned in the problem statement). Based on this, you can formulate equations to convert to Cartesian coordinates. If you compute the derivative of those equations with respect to each angle, you get equations for the tangent and bitangent at any point on the sphere. For the normal, you can either use the vector pointing from the center to the point on the sphere or the cross product of tangent and bitangent. Formulas for the tangent and bitangent are also given in the link above.
Now you have an orthogonal frame at each point on the sphere built from the three vectors: tangent, bitangent, and normal. The only missing part is the translation, which is simply the vector pointing from the center to the point on the sphere. Given all the necessary ingredients, you can create a 4x4 matrix from those axes using standard libraries like glm, or simply place those vectors as the columns of your matrix (don't forget to normalize the tangent, bitangent, and normal!). Depending on whether you use row-major or column-major matrices, you may need to transpose it.
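A minimal NumPy sketch of this construction, with the columns ordered to match the S axes from the question (x = normal, y = tangent, z = bitangent); e_to_s_matrix is an illustrative name, not something from the answer:

```python
import numpy as np

def e_to_s_matrix(lat_deg, lon_deg, alt, R=6371000.0):
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    # Point on the sphere at the given altitude, in earth-centric coordinates E.
    p = (R + alt) * np.array([np.cos(lat) * np.cos(lon),
                              np.cos(lat) * np.sin(lon),
                              np.sin(lat)])
    normal = p / np.linalg.norm(p)                        # outward radial direction
    tangent = np.array([-np.sin(lon), np.cos(lon), 0.0])  # derivative w.r.t. longitude (east)
    bitangent = np.array([-np.sin(lat) * np.cos(lon),     # derivative w.r.t. latitude (north)
                          -np.sin(lat) * np.sin(lon),
                           np.cos(lat)])
    s_to_e = np.eye(4)
    s_to_e[:3, 0] = normal      # S's x axis: perpendicular to the tangent plane
    s_to_e[:3, 1] = tangent     # S's y axis
    s_to_e[:3, 2] = bitangent   # S's z axis: points toward the north pole
    s_to_e[:3, 3] = p           # translation: center of the sphere to the surface point
    # Inverting the S -> E matrix gives the requested E -> S transform.
    return np.linalg.inv(s_to_e)
```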

Rotate Image Orientation(Patient) in DICOM

I have extracted a 3D surface from an MRI acquisition and the coordinates of the points describing this surface are (I believe) with respect to the reference system of the first image of the series (I mean that the origin corresponds to the Image Position(Patient) and the axes orientation to the Image Orientation(Patient)).
I have another set of images with a different Image Position(Patient) and a different Image Orientation(Patient); I want to rotate and translate the surface extracted from the first set in order to have it match exactly the second set of images.
I'm having trouble finding the correct 4x4 matrix that would do the job; once I have it, I know how to apply it to my surface.
Any kind of help would be greatly appreciated, thank you.
Simon
This page explains how to form a transformation matrix from the geometry information in the DICOM headers. These transformation matrices are used to transform from the volume coordinate system (pixel-x, pixel-y, slice number) to the patient/world coordinate system (in millimeters).
The basic idea for transforming from volume 1 to volume 2 is to transform from volume 1 to patient coordinates and then from patient coordinates to the volume 2 coordinate system. Multiplying both matrices yields the matrix to transform directly from volume 1 to volume 2.
Caution: Obviously, there can be no guarantee that every coordinate in v1 matches a coordinate in v2, i.e. the stacks may have different size and/or position.
So you have:
M1 - the matrix to transform from volume 1 to the world coordinate system, and
M2 - the matrix to transform from volume 2 to the world coordinate system.
Then
M1 * (M2^(-1)) is the matrix to transform a position vector from volume 1 to volume 2 (input and output are pixel-x, pixel-y, slice number),
and
M2 * (M1^(-1)) is the matrix to transform a position vector from volume 2 to volume 1.
(These orderings assume row vectors; with column vectors the products are M2^(-1) * M1 and M1^(-1) * M2, respectively.)
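A minimal NumPy sketch under the column-vector convention; dicom_affine is an illustrative helper, not a library function, and the slice direction and spacing have to be obtained separately (e.g. from the cross product of the row and column cosines and the distance between the Image Position (Patient) values of successive slices):

```python
import numpy as np

def dicom_affine(ipp, iop, pixel_spacing, slice_dir, slice_spacing):
    """Map voxel indices (column, row, slice, 1) to patient coordinates in mm."""
    row_cos = np.asarray(iop[:3], float)   # direction along which the column index increases
    col_cos = np.asarray(iop[3:], float)   # direction along which the row index increases
    m = np.eye(4)
    m[:3, 0] = row_cos * pixel_spacing[1]  # spacing between columns
    m[:3, 1] = col_cos * pixel_spacing[0]  # spacing between rows
    m[:3, 2] = np.asarray(slice_dir, float) * slice_spacing
    m[:3, 3] = np.asarray(ipp, float)      # Image Position (Patient) of the first slice
    return m

# With column vectors, M1 maps volume-1 voxels to patient mm and M2 maps
# volume-2 voxels to patient mm, so the direct volume-1 -> volume-2 mapping is:
# v1_to_v2 = np.linalg.inv(M2) @ M1
```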

How to transform a co-ordinate value in 45 deg-135 deg co-ordinate system to earth co-ordinate system?

I get a series of square binary images as in the picture below,
I want to find the red point, which is the point of intersection of four blocks (2 black and 2 white). To do so, I take the sum of all pixel values along the diagonal directions of the square image, i.e. at 45 deg and 135 deg respectively. The intersection of the 45 deg line with the maximum pixel sum and the 135 deg line with the minimum pixel sum is where my red point is.
Now that I have the coordinates of the red point in the 45 deg-135 deg coordinate system, how do I transform them to earth coordinates?
In other words, given a point in the 45 deg-135 deg coordinate system, how do I find the corresponding coordinate values in the x-y coordinate system? What is the transformation matrix?
Some more information that might help:
1) if the image is a 60x60 image, I get 120 values in the 45 deg-135 deg system, since I scan each row followed by each column to add the pixels.
I don't know much about MATLAB, but in general all you need to do is rotate your grid by 45 degrees.
Here's a helpful link that shows you the rotation matrix you need:
wikipedia rotation matrix article
The new coordinates for a point after 2D rotation look like this:
x' = x cos(theta) - y sin(theta)
y' = x sin(theta) + y cos(theta)
replace theta with 45 (or maybe -45) and you should be all set.
If your red dot starts out at (x,y), then after the -45 degree rotation it will have the new coordinates (x',y'), which are defined as follows:
x' = x cos(-45) - y sin (-45)
y' = x sin (-45) + y cos (-45)
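A small Python sketch of those formulas; rotate_point is just an illustrative helper:

```python
import math

def rotate_point(x, y, theta_deg=-45.0):
    """Rotate the point (x, y) about the origin by theta degrees."""
    t = math.radians(theta_deg)
    x_new = x * math.cos(t) - y * math.sin(t)
    y_new = x * math.sin(t) + y * math.cos(t)
    return x_new, y_new
```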
Sorry if I misunderstood your question, but why do you rotate the image? The x-value of your red point is just the point where the derivative in the x-direction has the maximum absolute value, and the same holds for the y-value with the derivative in the y-direction.
Assume you have the following image
If you take the first row of the image, it has all 1s at the beginning and zeroes for most of the width. The plot of this line looks like this.
Now you convolve this line with the kernel {-1,1}, which is just a single loop over your line, and you get
Going through this result and extracting the position of the entry with the highest value gives you 72. Therefore, the x-position of the red point is 73 (since the kernel of the convolution finds the derivative one point too soon).
Therefore, if data is the image matrix of the binary image above, extracting the red point position is close to a one-liner in Mathematica:
Last[Transpose[Position[ListConvolve[{-1, 1}, #] & /@
  {data[[1]], Transpose[data][[1]]}, 1 | -1]]] + 1
Here you get {73, 86}, which is the correct position if y=0 is the top row. This method can be implemented in a few minutes in any language.
Remarks:
The approximated derivative, which is the result of the convolution, can be either negative or positive, depending on whether it is a change from 0 to 1 or vice versa. If you want to search for the highest value, you have to take the absolute value of the convolution result.
Remember that the first row of the image matrix is not always the top row of the displayed image; this depends on the software you are using. If you get wrong y-values, be aware of that.
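A NumPy sketch equivalent to the Mathematica one-liner above; indices here are 0-based (unlike the 1-based Mathematica example), and find_corner is an illustrative name:

```python
import numpy as np

def find_corner(data):
    """Locate the transition point along the first row and the first column
    of a binary image by looking at the discrete derivative."""
    img = np.asarray(data)
    # np.diff approximates the derivative; the transition is where |diff| == 1.
    x = int(np.argmax(np.abs(np.diff(img[0, :])))) + 1   # first pixel after the x transition
    y = int(np.argmax(np.abs(np.diff(img[:, 0])))) + 1   # first pixel after the y transition
    return x, y
```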

width of a frustum at a given distance from the near plane

I'm using CML to manage the 3D math in an OpenGL-based interface project I'm making for work. I need to know the width of the viewing frustum at a given distance from the eye point, which is kept as a part of a 4x4 matrix that represents the camera. My goal is to position gui objects along the apparent edge of the viewport, but at some distance into the screen from the near clipping plane.
CML has a function to extract the planes of the frustum, giving them back in Ax + By + Cz + D = 0 form. This frustum is perpendicular to the camera, which isn't necessarily aligned with the z axis of the perspective projection.
I'd like to extract x and z coordinates so as to pin graphical elements to the sides of the screen at different distances from the camera. What is the best way to go about doing it?
Thanks!
This seems to be a duplicate of Finding side length of a cross-section of a pyramid frustum/truncated pyramid, if you already have a cross-section of known width at a known distance from the apex. If you don't have that and you want to derive the answer yourself, you can follow these steps.
1) Take two adjacent planes and find their line of intersection L1. You can use the steps here. Really, what you need is the direction vector of the line.
2) Take two more planes, one the same as in the previous step, and find their line of intersection L2. Note that the side planes of the frustum all pass through its apex (the eye point), so L1 and L2 intersect there; treat the apex as the origin in the following steps.
3) Draw yourself a picture of the direction vectors for L1 and L2, tails at the origin. These form an angle; call it theta. Find theta using the formula for the angle between two vectors, e.g. here.
4) Draw the bisector of that angle, then draw a perpendicular to the bisector at the distance d you want from the origin (this creates an isosceles triangle, bisected into two congruent right triangles). The length of the perpendicular is your desired frustum width w. Note that w is twice the length of one of the bases of the right triangles.
5) Let r be the length of the hypotenuses of the right triangles. Then r cos(theta/2) = d and r sin(theta/2) = w/2, so tan(theta/2) = (w/2)/d, which implies w = 2d tan(theta/2). Since you know d and theta, you are done.
Note that we have found the length of one side of a cross-section of a frustum. This works for any perpendicular cross-section of any frustum, and it can be extended to handle a non-perpendicular cross-section.
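A minimal NumPy sketch of the final formula; edge_direction and frustum_width are illustrative helpers, and both edge directions need to be oriented so they point away from the apex before taking the angle:

```python
import numpy as np

def edge_direction(plane1, plane2):
    """Direction vector of the intersection line of two planes given as (A, B, C, D)."""
    d = np.cross(np.asarray(plane1[:3], float), np.asarray(plane2[:3], float))
    return d / np.linalg.norm(d)

def frustum_width(dir1, dir2, d):
    """w = 2 * d * tan(theta / 2), with theta the angle between the two edge directions."""
    cos_t = np.clip(np.dot(dir1, dir2), -1.0, 1.0)
    theta = np.arccos(cos_t)
    return 2.0 * d * np.tan(theta / 2.0)
```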
