Oceananigans JLD2 (HDF5) output, manipulate the time axis - Julia

I ran an ocean model using Oceananigans in Julia and used JLD2OutputWriter to write the output to a JLD2 (HDF5-based) file. The results consist of several 3D variables evolving over time. To read the file, I did the following:
using Oceananigans
d4 = FieldDataset("./test4_lang.jld2")
d4.fields["u"].times
d4 has the following fields:
julia> d4.fields
Dict{String, FieldTimeSeries} with 5 entries:
"v" => 32×32×32×103 FieldTimeSeries{InMemory} located at (Center, Face, Center) on CPU…
"w" => 32×32×33×103 FieldTimeSeries{InMemory} located at (Center, Center, Face) on CPU…
"b" => 32×32×32×103 FieldTimeSeries{InMemory} located at (Center, Center, Center) on CPU…
"u" => 32×32×32×103 FieldTimeSeries{InMemory} located at (Face, Center, Center) on CPU…
"νₑ" => 32×32×32×103 FieldTimeSeries{InMemory} located at (Center, Center, Center) on CPU…
Now, my question is: what should I do if I want to replace the time axis of this data structure with a completely new time axis array, for example 1, 2, 3, ..., 103, where 103 is the length of the time axis?

Related

Convert local coordinates of aligned Agisoft camera output into a relative coordinate system

The outputs after aligning photos in Agisoft Metashape are placed in an arbitrary local coordinate system; the export includes the omega, phi and kappa angles and the corresponding rotation matrices.
The coordinates of image 0, for example, do not start at the origin as a reference; they start at arbitrary values.
Example showing two images:
PhotoID, Omega, Phi, Kappa, r11, r12, r13, r21, r22, r23, r31, r32, r33
frame0 -2.6739376830612445 -26.8628145584408102 -2.6585413651258838 0.8911308174580325 -0.0252758246719107 0.4530419394092935 0.0413784373987947 0.9988138381614838 -0.0256659621981955 -0.4518558299889612 0.0416178974034010 0.8911196662181277
frame1 -1.7705163287532029 -26.2306519040478179 -1.6659380364959333 0.8966428925733227 -0.0154081202025439 0.4424862856965952 0.0260782313117883 0.9994971141912699 -0.0180400824547062 -0.4419858018640347 0.0277147714251388 0.8965938001098707
How can I 1) translate these into a relative coordinate system with respect to image 0, and
2) translate these to 2D coordinates, without worrying about the rotation values?
Thanks in advance.
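For the rotation part, here is a minimal numpy sketch of the usual approach: express every camera in the frame of image 0 by multiplying with the inverse of image 0's rotation. The matrices are copied from the two example rows above; numpy and the sign convention are the only assumptions. If camera positions were exported as well, the relative translation would be R0.T @ (C1 - C0).

import numpy as np

# Rotation matrices r11..r33 copied from the two example rows (row-major).
R0 = np.array([[ 0.8911308174580325, -0.0252758246719107,  0.4530419394092935],
               [ 0.0413784373987947,  0.9988138381614838, -0.0256659621981955],
               [-0.4518558299889612,  0.0416178974034010,  0.8911196662181277]])
R1 = np.array([[ 0.8966428925733227, -0.0154081202025439,  0.4424862856965952],
               [ 0.0260782313117883,  0.9994971141912699, -0.0180400824547062],
               [-0.4419858018640347,  0.0277147714251388,  0.8965938001098707]])

# Relative rotation of frame1 expressed in the frame of frame0.
# R0 is orthonormal, so its inverse is its transpose.  Depending on whether
# Metashape's matrices map camera -> world or world -> camera, you may need
# R1 @ R0.T instead.
R_rel = R0.T @ R1

print(R0.T @ R0)   # ~ identity: frame0 becomes the reference in this system
print(R_rel)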

how to calculate the coordinates on a polyline perpendicular to point (in 3D)?

I have two approximately parallel polylines representing railway tracks, consisting of hundreds (maybe thousands) of x, y, z coordinates. The two lines stay approximately 1.435m apart, but bend and curve as a railway would.
If I pick a point on one of the polylines, how do I find the point which is perpendicular to it on the other, parallel polyline?
I take it CAD programs use the cross product to find the distance / point and it chooses the line to snap to based on where your mouse is hovering.
I would like to achieve the same thing, but without hovering your mouse over the line.
Is there a way to simply compute the closest line segment on the parallel line? Or to see which segment of the polyline passes through a perpendicular plane at the selected point?
It isn't practical to loop through the segments as there are so many of them.
In python the input would be something like point x, y, z on rail1 and I would be looking to output point x, y, z on rail2.
Many thanks.
You want the point of minimum distance on the other track.
If the other track is defined by line segments, each spanning two points pos_1 and pos_2 with a parameter t going between 0 and 1:
pos(t) = pos_1 + t * ( pos_2 - pos_1 )
then you need to find the t value that produces the minimum distance to the point. Place a temporary coordinate system on the point and express the ends of each line segment, pos_1 and pos_2, in coordinates relative to the point of interest.
The value of t for the closest point is
t = ( dot(pos_1,pos_1) - dot(pos_1,pos_2) ) / ( dot(pos_1,pos_1) - 2*dot(pos_1,pos_2) + dot(pos_2,pos_2) )
where dot(a,b) = ax*bx + ay*by + az*bz is the vector dot product.
Now if the resulting t is between 0 and 1, then the closest point is on this segment, and its coordinates are given by
pos(t) = pos_1 + t * ( pos_2 - pos_1 )
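A minimal Python sketch of this test, applied segment by segment along the other polyline (the function names and the brute-force scan are illustrative assumptions; the clamp to [0, 1] handles the case where the perpendicular foot falls outside a segment):

import numpy as np

def closest_point_on_segment(p, a, b):
    '''Closest point to p on the segment a-b, using the t formula above.'''
    # Express the segment ends in coordinates relative to the point of interest.
    pos_1, pos_2 = a - p, b - p
    denom = pos_1 @ pos_1 - 2 * (pos_1 @ pos_2) + pos_2 @ pos_2
    if denom == 0.0:                       # degenerate segment (a == b)
        return a
    t = (pos_1 @ pos_1 - pos_1 @ pos_2) / denom
    t = min(max(t, 0.0), 1.0)              # keep the result on the segment
    return a + t * (b - a)

def closest_point_on_polyline(p, polyline):
    '''Brute-force scan over all segments; returns the overall closest point.'''
    candidates = [closest_point_on_segment(p, a, b)
                  for a, b in zip(polyline[:-1], polyline[1:])]
    dists = [np.linalg.norm(q - p) for q in candidates]
    return candidates[int(np.argmin(dists))]

# Point on rail1 and the polyline of rail2 as an (N, 3) array.
rail2 = np.array([[0.0, 0.0, 0.0], [10.0, 0.5, 0.0], [20.0, 1.5, 0.1]])
print(closest_point_on_polyline(np.array([9.0, 1.9, 0.0]), rail2))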

Kitti Velodyne point to pixel coordinate

Given a Velodyne point, how do I get the pixel coordinate in each camera?
Using pykitti
point_cam0 = data.calib.T_cam0_velo.dot(point_velo)
We can get the projection onto the image, which is equation 7 of the KITTI dataset paper:
y = P_rect^(i) R_rect^(0) T_velo^cam x
But from there, how do I get the actual pixel coordinates in each image?
Update: PyKitti version 0.2.1 exposes projection matrices for all cameras.
I recently faced the same problem. For me, the problem was that pykitti didn't expose the Prect and Rrect matrices for all cameras.
For pykitti 0.2.1 and later, use Prect and Rrect from the calibration data.
For previous versions, you have two options:
Enter the matrices by hand (data is in the .xml calibration file for each sequence).
Use this fork of pykitti: https://github.com/Mi-lo/pykitti/
Then, you can use equation 7 to project a velodyne point into an image. Note that:
You will need the 3D points as a 4xN array in homogeneous coordinates. Points returned by pykitti are an Nx4 numpy array, with the reflectance in the 4th column. You can prepare the points with the prepare_velo_points function below, which keeps only points with reflectance > 0, then replaces the reflectance values with 1 to get homogeneous coordinates.
The velodyne is 360°. Equation 7 will give you a result even for points that are behind the camera (they will get projected as if they were in front, but vertically mirrored). To avoid this, you should project only points that are in front of the camera. For this, you can use the function project_velo_points_in_img below. It returns 2d points in homogeneous coordinates so you should discard the 3rd row.
Here are the functions I used:
def prepare_velo_points(pts3d_raw):
    '''Replaces the reflectance value by 1 and transposes the array, so
    points can be directly multiplied by the camera projection matrix.'''
    pts3d = pts3d_raw
    # Keep only points with reflectance > 0.
    pts3d = pts3d[pts3d[:, 3] > 0, :]
    pts3d[:, 3] = 1
    return pts3d.transpose()

def project_velo_points_in_img(pts3d, T_cam_velo, Rrect, Prect):
    '''Project 3D points into a 2D image. Expects pts3d as a 4xN
    numpy array. Returns the 2D projections of the points that are
    in front of the camera only, and the corresponding 3D points.'''
    # 3D points in the camera reference frame.
    pts3d_cam = Rrect.dot(T_cam_velo.dot(pts3d))
    # Before projecting, keep only points with z >= 0
    # (points that are in front of the camera).
    idx = (pts3d_cam[2, :] >= 0)
    pts2d_cam = Prect.dot(pts3d_cam[:, idx])
    return pts3d[:, idx], pts2d_cam / pts2d_cam[2, :]
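A minimal usage sketch follows. How you obtain T_cam_velo, the rectifying rotation and Prect depends on your pykitti version, so placeholder identity matrices are used here; the point is that the 3x3 rectifying rotation must be expanded to 4x4 before it can multiply the homogeneous 4xN points, and that the first two rows of the result are the pixel coordinates.

import numpy as np

# Placeholder calibration matrices: replace with the real ones for your sequence.
T_cam_velo = np.eye(4)                              # 4x4 rigid transform, velodyne -> camera
R_rect_3x3 = np.eye(3)                              # 3x3 rectifying rotation
Prect = np.hstack([np.eye(3), np.zeros((3, 1))])    # 3x4 projection matrix

# Expand the rectifying rotation to 4x4 so it can act on homogeneous points.
Rrect = np.eye(4)
Rrect[:3, :3] = R_rect_3x3

scan = np.random.rand(1000, 4)                      # stand-in for one Nx4 velodyne scan
pts3d = prepare_velo_points(scan)                   # 4xN homogeneous points
pts3d_kept, pts2d = project_velo_points_in_img(pts3d, T_cam_velo, Rrect, Prect)
u, v = pts2d[0, :], pts2d[1, :]                     # pixel coordinates (drop the 3rd row)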
Hope this helps!

Aligning Shapes on a Plane, Algorithm

I am developing a simple diagram tool with shapes on a plane using Flex.
At first I was using a simple 20x20 grid.
But the really cool stuff out there is the auto axis magnet effect (that's what I call it, at least). To show what I mean, I made a small video of Balsamiq:
http://screenr.com/clB
http://www.balsamiq.com/
As you can see, it aligns on the vertical and horizontal border and center axes.
Borders: gray axes
Horizontal center alignment (height/2): blue axis
No vertical center alignment (width/2) axis
Some intermediate padding space of 25px: green axes
How do you think such an algorithm works?
For now I will work without rotation.
Given a selected shape S1 at top-left position (x, y), with width w and height h,
look at all shapes intersecting two zones:
from xmin = x to xmax = x + w for y > 0,
from ymin = y to ymax = y + h for x > 0.
Once I have the list of shapes concerned, I check whether any of these conditions match
(when I use '=' it is an approximation; plus or minus 2 pixels will give the wanted 'magnet' effect):
S1 x = S'x => Gray line at x
S1 x+w = S'x => Gray line at x+w
S1 y = S'y => Gray line at y
S1 y+h = S'y => Gray line at y+h
S1 x = S'x and S1 x+w = S'x+w => Blue line at x + w/2
And given a padding magnet of 20px:
S1 x = S'x + PADD => green line at S1 x
S1 x = S'x - PADD => green line at S1 x
S1 y = S'y + PADD => green line at S1 y
S1 y = S'y - PADD => green line at S1 y
What are your thoughts about this?
I wrote Balsamiq's snapping algorithm. You're pretty close. The one "clever" thing we do (if I may say so myself) is to pre-populate two sparse arrays with snapping coordinates onMouseDown, so that they are easy/fast/cheap to look up onMouseMove. What I do onMouseDown is this:
let's talk about x coordinates (then repeat the same thing for y):
say GRAVITY is 5
look at all the shapes
for each shape, look at the left edge, say it's at 100. Populate the xSnappingPositions object from 100-GRAVITY (95) to 100+GRAVITY (105) with the number 100. Repeat for right edge
Then on onMouseMove, you look at the x and y of the control you're dragging. Is there something in xSnappingPositions and ySnappingPositions that matches the left edge now? If so, go to the value saved in the array instead of using the position detected by the mouse (i.e. snap to it). Repeat the check for the right edge, center, etc.
I hope this helps!
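A minimal Python sketch of this idea (dictionaries stand in for the sparse arrays; the shape fields x, y, w, h and the two helper names are illustrative assumptions):

GRAVITY = 5

def build_snapping_positions(shapes, dragged):
    '''onMouseDown: map every x (or y) within GRAVITY of a shape edge to that edge.'''
    x_snap, y_snap = {}, {}
    for s in shapes:
        if s is dragged:
            continue
        for edge_x in (s.x, s.x + s.w):              # left and right edges
            for dx in range(-GRAVITY, GRAVITY + 1):
                x_snap[edge_x + dx] = edge_x
        for edge_y in (s.y, s.y + s.h):              # top and bottom edges
            for dy in range(-GRAVITY, GRAVITY + 1):
                y_snap[edge_y + dy] = edge_y
    return x_snap, y_snap

def snap(x, y, x_snap, y_snap):
    '''onMouseMove: cheap dictionary lookups instead of scanning all shapes.'''
    return x_snap.get(x, x), y_snap.get(y, y)

The same lookup is then repeated with x + w and x + w/2 (and the y equivalents) to snap the right edge and the centre as well.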

Help constructing an OBB, trying to represent a matrix with 3 vectors

I am presently trying to construct an OBB (Oriented Bounding Box) using the source and math contained in the book "Real Time Collision Detection".
One problem with the code contained in this book is that it does very little to explain what the parameters mean for the methods.
I am trying to figure out what I need to feed my setOBB() method (I wrote this one). It goes like this:
void PhysicalObject::setOBB( Ogre::Vector3 centrePoint, Ogre::Vector3 localAxes[3], Ogre::Vector3 positiveHalfwidthExtents )
{
    // Ogre::Vector3 c;    // OBB centre point
    // Ogre::Vector3 u[3]; // Local x-, y-, and z-axes (rotation matrix)
    // Ogre::Vector3 e;    // Positive halfwidth extents of OBB along each axis
    m_obb.c = centrePoint;
    m_obb.u[0] = localAxes[0];
    m_obb.u[1] = localAxes[1];
    m_obb.u[2] = localAxes[2];
    m_obb.e = positiveHalfwidthExtents;
}
Looking at the parameters it wants above, I believe I understand the first and third:
The first is the centre position of the object.
The second is my problem: I believe it wants a matrix represented using an array of 3 vectors, but how?
The third is a vector containing the distance between the centre point and the edge of the OBB in each of the x, y, z directions.
Here is what I'm doing currently:
// Build the OBB
Ogre::Vector3 rotation[3];
Ogre::Vector3 centrePoint = sphere->getPosition();
rotation[0] = ?
rotation[1] = ?
rotation[2] = ?
Ogre::Vector3 halfEdgeLengths = Ogre::Vector3( 1, 1, 1 );
myObject->setOBB( centrePoint, rotation, halfEdgeLengths );
How can I represent a matrix using three vectors (which I cannot avoid doing this way)? Thanks.
A 3x3 matrix representing a rotation/scale in 3D space is nothing more than three vectors side by side.
Each column vector is a rotated and scaled main axis: the first column is the scaled and rotated x axis, the second and third are y and z.
(Ogre uses column-major matrices.)
So localAxes[3] is simply a rotation, and you can get it from a Quaternion.
Ogre::Vector3 rotation[3];
Ogre::Quaternion orientation = sphere->getOrientation();
orientation.ToAxes(rotation);
// Now rotation has the three axes representing the orientation of the sphere.
OK, so the other question is: this functionality requires it to be a matrix, right? I'm assuming so, or you'd just use the individual axis vectors in the places they are needed during the calculation.
That said, if you stuff these vectors into a matrix in a uniform way, you'll have them either as the rows or as the columns of the matrix and, more or less, a 50% chance of getting it right, depending on how the functions accepting the matrix expect it to be formatted. So set up the 2D array as the matrix and go forward with caution, I'd suggest. For example:
float matrix[3][3] = { { x1, x2, x3 },
                       { y1, y2, y3 },
                       { z1, z2, z3 } };
Flip if you need to go column-major.
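A small numpy sketch of that row-versus-column ambiguity (the axis values are made up; it only shows that transposing switches between the two conventions mentioned above):

import numpy as np

# Three made-up orthonormal axis vectors of an oriented box.
x_axis = np.array([0.8, 0.6, 0.0])
y_axis = np.array([-0.6, 0.8, 0.0])
z_axis = np.array([0.0, 0.0, 1.0])

rows = np.vstack([x_axis, y_axis, z_axis])            # axes stored as rows
cols = np.column_stack([x_axis, y_axis, z_axis])      # axes stored as columns

# One layout is the transpose of the other; which one a function expects
# is a convention you have to check (Ogre, as noted above, is column-major).
print(np.allclose(rows.T, cols))   # True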
