3d.io Scene Coordinate System - aframe

I am writing code that relies on the coordinates of furniture items in a 3D scene.
From what I gather, each furniture piece has its own coordinates, which are relative to its parent, whether that parent is the level, a group, etc.
If the parent is the "level", the furniture piece's coordinates directly describe where it sits in the 3D scene.
However, if a furniture piece has a non-level parent, its coordinates are relative to that parent (and any parents above it).
In my code I am writing a recursive function that takes a furniture node and grabs its coordinates.
As the function recurses, it climbs the chain of parents and adds each parent's coordinates onto the original coordinates.
When the class of the current node is "io3d-level", the recursion stops and returns the accumulated coordinates, since a node with class "io3d-level" always has coordinates of {0, 0, 0}.
In short: starting from a furniture node, the function climbs the parent structure, adding each parent's coordinates onto the furniture node's original coordinates, until the current node is the level, at which point it stops and returns the result.
This recursive function produces correct results in almost all cases.
However, for a minority of furniture nodes it does not return accurate coordinates.
The assumption behind it is that a furniture node's true coordinates, relative to the level, can be obtained by summing the coordinates of all of its parents (x coordinates with x coordinates, z coordinates with z coordinates, and so on).
Is this assumption false?
Am I perhaps misinterpreting the coordinate system that underlies 3d.io scenes?

This can be done directly, without inaccuracies, using the three.js object handle:
let worldCoordinates = obj.data3dView.threeParent.getWorldPosition();

I assume something goes wrong in your code, so it's best to share it so we don't have to guess.
Here is a small snippet that applies the parent's location to an element when dealing with the scene structure:
// apply parent location
function applyLocation(element, parent) {
  // Rotate element on the XZ plane around parent's center
  var angleY = -parent.ry * Math.PI / 180
  var rotatedX = element.x * Math.cos(angleY) - element.z * Math.sin(angleY)
  var rotatedZ = element.z * Math.cos(angleY) + element.x * Math.sin(angleY)
  // Get parent space coordinates for our element
  element.x = parent.x + rotatedX
  element.y += parent.y
  element.z = parent.z + rotatedZ
  element.ry += parent.ry
  return element
}
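Summing positions alone is not enough because a group parent can also be rotated, which is exactly what the ry handling above accounts for. As a minimal sketch of how the snippet could be used to walk up to the level (the parent and class fields are assumptions about how your node tree is represented):

function getLevelSpaceCoordinates(node) {
  // start from a copy of the furniture item's local coordinates
  var element = { x: node.x, y: node.y, z: node.z, ry: node.ry }
  var parent = node.parent
  // apply each ancestor's rotation and translation, innermost parent first,
  // stopping at the level (which sits at {0, 0, 0} anyway)
  while (parent && parent.class !== 'io3d-level') {
    element = applyLocation(element, parent)
    parent = parent.parent
  }
  return element // coordinates relative to the level
}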

Related

How to code a pathfinding player in godot using A*?

I'm a newbie in Godot. I have to use A* to move the player to the goal position, but I do not know how to start, please help! Basically I have just 2 tiles in the tilemap, one of which is allowed to be walked over. I guess I have to extract the allowed tiles and calculate the distance between the player position and the goal position to get the real distance, and then check cell by cell which has the lowest cost, but I do not know how to do that.
func get_player_init_pos():
    var pos = map_to_world(Vector2(54, 1))
    pos.y += half_cell_size.y
    return pos

func is_tile_vacant(pos, direction):
    var curr_tile = world_to_map(pos)
    var next_tile = get_cellv(curr_tile + direction)
    var next_tile_pos = Vector2()
    if(next_tile == 0):
        next_tile_pos = map_to_world(curr_tile + direction)
    else:
        next_tile_pos = pos
    return next_tile_pos
I have this; the first part of the code locates the player in the map and the second checks the tile walls in the map.
You could roll your own pathfinding algorithm. However, there is little point in doing so, since Godot has an AStar class you can use. You probably don't need that either, though, because Godot has a navigation system, and that is what I'm going to suggest you use.
First of all, you can specify both navigation and collision polygons on your tiles. You need the navigation polygons, so go ahead and add them.
Second, you want a Navigation2D node in the scene tree, with your TileMap as a child.
And third, you can ask the Navigation2D for a path with get_simple_path: you pass the start and end positions as arguments and you get back an array of points that make up the path.
Since you mention A*, I'll briefly explain using the AStar class too, anyway.
First, you need to add the cells with add_point. It requires ids. It is a good idea to be clever with the ids so you can compute the id for a given position, for example x * width + y if you know the size.
So you can iterate over the tiles in your TileMap and call add_point for each one (you don't need to add cells that are not passable).
Then you need to specify the connections with connect_points (it takes the ids of the points as parameters).
And finally you can call get_point_path, passing the start and end ids. Again, it gives you an array of points.
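The only fiddly part is the id bookkeeping. As a rough sketch of that arithmetic (written in JavaScript for brevity; the same pattern carries over to GDScript, and the grid size and the isPassable check are assumptions):

// id scheme from above: unique per cell, computable from the position
function cellId(x, y, width) {
  return x * width + y
}

// Collect the ids you would feed to add_point and the id pairs for connect_points.
function buildGridGraph(width, height, isPassable) {
  var points = []
  var connections = []
  for (var x = 0; x < width; x++) {
    for (var y = 0; y < height; y++) {
      if (!isPassable(x, y)) continue
      points.push(cellId(x, y, width))
      // connect only to the right and downward neighbours to avoid duplicate pairs
      if (x + 1 < width && isPassable(x + 1, y))
        connections.push([cellId(x, y, width), cellId(x + 1, y, width)])
      if (y + 1 < height && isPassable(x, y + 1))
        connections.push([cellId(x, y, width), cellId(x, y + 1, width)])
    }
  }
  return { points: points, connections: connections }
}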

Find center of a fixed-width bounding box

Given a collection of points, I'd like to find the center of a bounding box (fixed-length and width) that maximizes the number of points within said box. I'm at a loss for an efficient way to do this.
Algorithm with complexity O(N^2*logN) (I hope that a better one exists):
Edit: an article exploiting interval trees claims O(N*logN) complexity.
Sort data array A by X coordinate.
Scan A with a sweep line from left to right.
For every point A[k], take LeftX = A[k].X as the left coordinate of the vertical band, and RightX = LeftX + Width as the right coordinate of the band.
Copy the points inside the band to another array B.
Sort B by Y coordinate.
Scan B with a sweep line from top to bottom.
For every point B[i], take TopY = B[i].Y as the top coordinate of the rectangle, and calculate BottomY = TopY + Height.
Use binary search in B: B[j] is the last point in B with B[j].Y <= BottomY.
The number of points in the current rectangle is N(k, i) = j - i + 1.
Check whether N(k, i) is the maximum seen so far.
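A minimal sketch of that sweep in JavaScript (filtering each band with a linear pass for brevity, rather than the interval-tree variant; point objects with x/y fields are assumed):

// points: array of {x, y}; width/height: the fixed box dimensions
function bestBoxCenter(points, width, height) {
  var A = points.slice().sort(function (p, q) { return p.x - q.x })
  var best = { count: 0, center: null }
  for (var k = 0; k < A.length; k++) {
    var leftX = A[k].x
    var rightX = leftX + width
    // points inside the vertical band, sorted by Y
    var B = A.filter(function (p) { return p.x >= leftX && p.x <= rightX })
             .sort(function (p, q) { return p.y - q.y })
    for (var i = 0; i < B.length; i++) {
      var topY = B[i].y
      var bottomY = topY + height
      // binary search for the last index j with B[j].y <= bottomY
      var lo = i, hi = B.length - 1, j = i
      while (lo <= hi) {
        var mid = (lo + hi) >> 1
        if (B[mid].y <= bottomY) { j = mid; lo = mid + 1 } else { hi = mid - 1 }
      }
      var count = j - i + 1
      if (count > best.count) {
        best = { count: count, center: { x: leftX + width / 2, y: topY + height / 2 } }
      }
    }
  }
  return best
}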
This seems like a difficult problem; here is my idea:
Hold a graph in which each node holds a rectangle and a subset of points. The rectangle defines the area in which placing the bounding box would overlap all the points in the subset.
To build the graph:
Start with a root node holding the empty set and the rect [top:-inf, bottom:inf, left:-inf, right:inf].
For each point in the collection, call this recursive function with the root node (pseudo code):
function addPoint(node, point)
    // check that you didn't already try to add this point to this node
    // node.tested can be a hash set
    if (node.tested contains point)
        return
    node.tested.add(point)
    newRect = node.rect.intersectWith(boundingBoxAround(point))
    // if the bounding box around the point does not intersect the rectangle, return
    if (newRect is invalid) // rect is invalid if right < left or bottom < top
        return
    node.addChild(new node(newRect, node.pointSet U {point}))
    for each child of node
        addPoint(child, point)
Now you just pick the node with the largest subset; you can keep track of that while building the graph so you don't need to run through the graph again.
I hope my idea is clear, let me know if I can explain it better.

Aligning a point cloud on a grid

I have to measure the Z-distances for corresponding points of two clouds.
I intend to iterate through one cloud and calculate the distance between Z coordinates, using the same X and Y in the other cloud.
Unfortunately it doesn't work, as there is never a point at those exact X-Y coordinates in the second cloud. My current workaround is to search for the closest point in the second cloud to the X-Y of the first cloud. It works, but it is very slow.
Is there a way to align the X and Y coordinates of points to a defined grid using PCL? That way I hope the X-Y coordinates will match better.
EDIT
Ok, here are some images and more explanation.
Top view
Side view
There is a scan of a saddle and a horse's back. Both are made independently but aligned along the Z-axis; the Z-axes of both are parallel.
I want to create a model of a layer which fits exactly under the saddle (not just a rectangular pad).
So, given a thickness of the layer, I want to iterate through the saddle points and find the Z-distance to the corresponding point on the horse's back. As the coordinates are floats, there is almost never a point on the horse with the same XY as on the saddle.
I think that if I could align all points to a grid with a given density, there would be a corresponding XY point on the horse for each XY saddle point above it.
I am not really sure if that is what you mean, but maybe the "grid" you are talking about could just be the image plane? So instead of using the 3D point cloud you could take the depth maps/depth images and just compare the values of two depth maps at the same image coordinates. This would assume that the recordings are already aligned.
If you only have the point cloud data you'd have to perform a projection onto the plane (for this you'd have to know the intrinsics of the camera).
Another option might be aligning the clouds using a registration method (e.g. ICP). Then you could also get the (sum of) distance(s) for corresponding points of the clouds.
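For the depth-map variant, the comparison itself is just a per-pixel difference. A minimal sketch (assuming two already aligned depth maps of the same size, with 0 meaning "no data"):

// saddleDepth/horseDepth: flat arrays of length width * height
function depthDifferences(saddleDepth, horseDepth, width, height) {
  var diff = new Float32Array(width * height)
  for (var i = 0; i < width * height; i++) {
    if (saddleDepth[i] > 0 && horseDepth[i] > 0) {
      diff[i] = horseDepth[i] - saddleDepth[i] // Z distance at this pixel
    } else {
      diff[i] = NaN // no correspondence at this pixel
    }
  }
  return diff
}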
I've implemented a proof of concept and want to share it. However, I'd appreciate a "proper" solution - a PCL API function probably.
bool alignToGrid( pcl::PointCloud<pcl::PointXYZRGBNormal>::Ptr cloud, QMap<QString, float > & grid, int density )
{
    // Working point (coloured blue), reused for each input point
    pcl::PointXYZRGBNormal p1;
    p1.r = 0;
    p1.g = 0;
    p1.b = 255;
    // Collect the Z values of all points that fall into each grid cell
    QMap<QString, QList<float> > tmpGridMap;
    for( std::vector<pcl::PointXYZRGBNormal, Eigen::aligned_allocator<pcl::PointXYZRGBNormal> >::iterator it1 = cloud->points.begin();
         it1 != cloud->points.end(); it1++ )
    {
        p1.x = it1->x;
        p1.y = it1->y;
        p1.z = it1->z;
        // Snap the XY position onto the grid (e.g. density 1000 => ~1 mm cells)
        int gridx = p1.x * density;
        int gridy = p1.y * density;
        QString pos = QString("%1x%2").arg(gridx).arg(gridy);
        tmpGridMap[pos].append(p1.z);
    }
    // Average the collected Z values per grid cell
    for (QMap<QString, QList<float> >::iterator it = tmpGridMap.begin(); it != tmpGridMap.end(); ++it)
    {
        float meanZ = 0;
        foreach( float f, it.value() )
        {
            meanZ += f;
        }
        meanZ /= it.value().size();
        grid[it.key()] = meanZ;
    }
    return true;
}
The idea is to iterate through a cloud and keep/create only points whose XY coordinates lie on the defined grid. A density of 1000 for Kinect clouds results in roughly a 1 mm grid.
All points around a grid point are used to build the Z-average.
The cloud itself remains unmodified. The output is a map from XY position to Z. The XY position is stored as a string (weird, I know) in the form "XxY". Using this map it is easy to find corresponding XY points in other grid-aligned clouds.
Now I am able to map my clouds using any density; in the images, e.g. 1 mm and 1 cm.

3D perspective 'grab' panning with DirectX

I am implementing a pan tool in our software's 3D view which is supposed to work much like the grab tool of, say, Photoshop or Acrobat Reader. That is, the point the user grabs onto with the mouse (clicks and holds, then moves the mouse) stays under the mouse cursor as the mouse moves.
This is a common paradigm and one that's been asked about on SO before, the best answer being to this question about the technique in OpenGL. There is another that also has some hints, and I have been reading this very informative CodeProject article. (It doesn't explain many of its code examples' variables etc, but from reading the text I think I understand the technique.) But, I have some implementation issues because my 3D environment's navigation is set up quite differently to those articles, and I am seeking some guidance.
My technique - and this might be fundamentally flawed, so please say so - is:
The scene 'camera' is stored as two D3DXVECTOR3 points: the eye position and a look point. The view matrix is constructed using D3DXMatrixLookAtLH like so:
const D3DXVECTOR3 oUpVector(0.0f, 1.0f, 0.0f); // Keep up "up", always.
D3DXMatrixLookAtLH(&m_oViewMatrix, &m_oEyePos, &m_oLook, &oUpVector);
When the mouse button is pressed, shoot a ray through that pixel and find: the coordinate (in unprojected scene / world space) of the pixel that was clicked on; the intersection of that ray with the near plane; and the distance between the near-plane point and object, which is the length between those two points. Store this and the mouse position, and the original navigation (eye and look).
// Get the clicked-on point in unprojected (normal) world space
D3DXVECTOR3 o3DPos;
if (Get3DPositionAtMouse(roMousePos, o3DPos)) { // fails if nothing under the mouse
// Mouse location when panning started
m_oPanMouseStartPos = roMousePos;
// Intersection at near plane (z = 0) of the ray from camera to clicked spot
D3DXVECTOR3 oRayVector;
CalculateRayFromPixel(m_oPanMouseStartPos, m_oPanPlaneZ0StartPos, oRayVector);
// Store original eye and look points
m_oPanOriginalEyePos = m_oEyePos;
m_oPanOriginalLook = m_oLook;
// Store the distance between near plane and the object, and the object position
m_dPanPlaneZ0ObjectDist = fabs(D3DXVec3Length(&(o3DPos - m_oPanPlaneZ0StartPos)));
m_oPanOriginalObjectPos = o3DPos;
Get3DPositionAtMouse is a known-ok method which picks a 3D coordinate under the mouse. CalculateRayFromPixel is a known-ok method which takes in a screen-space mouse coordinate and casts a ray, and fills the other two parameters with the ray intersection at the near plane (Z = 0) and the normalised ray vector.
When the mouse moves, cast another ray at the new position, but using the old (original) view matrix. (Thanks to Nico below for pointing this out.) Calculate where the object should be by extending the ray from the near plane by the stored distance between the object and the near plane (this way, the original object point and the new object point should lie in a plane parallel to the near plane). Move the eye and look coordinates by this much. Eye and Look are set from their original (when panning started) values, with the difference computed from the original and new mouse positions. This is to reduce any precision loss from incrementing or decrementing by granular (integer) pixel movements as the mouse moves, i.e. it calculates the whole difference in navigation every time.
// Set navigation back to original (as it was when started panning) and cast a ray for the mouse
m_oEyePos = m_oPanOriginalEyePos;
m_oLook = m_oPanOriginalLook;
UpdateView();
D3DXVECTOR3 oRayVector;
D3DXVECTOR3 oNewPlaneZPos;
CalculateRayFromPixel(roMousePos, oNewPlaneZPos, oRayVector);
// Now intersect that ray (ray through the mouse pixel, using the original navigation)
// to hit the plane the object is in. Function uses a "line", so start at near plane
// and the line is of the length of the far plane away
D3DXVECTOR3 oNew3DPos;
D3DXPlaneIntersectLine(&oNew3DPos, &m_oPanObjectPlane, &oNewPlaneZPos, &(oRayVector * GetScene().GetFarPlane()));
// The eye/look difference /should/ be as simple as:
// const D3DXVECTOR3 oDiff = (m_oPanOriginalObjectPos - oNew3DPos);
// But that lags and is slow, ie the objects trail behind. I don't know why. What does
// work is to scale the from-to difference by the distance from the camera relative to
// the whole scene distance
const double dDist = D3DXVec3Length(&(oNew3DPos - m_oPanOriginalEyePos));
const double dTotalDist = GetScene().GetFarPlane() - GetScene().GetNearPlane();
const D3DXVECTOR3 oDiff = (m_oPanOriginalObjectPos - oNew3DPos) * (1.0 + (dDist / dTotalDist));
// Adjust the eye and look points by the same amount, so orthogonally changed
m_oEyePos = m_oPanOriginalEyePos + oDiff;
m_oLook = m_oPanOriginalLook + oDiff;
Diagram
This diagram is my working sketch for implementing this, and hopefully explains the above much more simply than the text. You can see a moving point, and where the camera has to move to keep that point at the same relative position. The clicked-on point (the ray from the camera to the object) is just to the right of the straight-ahead ray representing the center pixel.
The problem
But, as you've probably guessed, this doesn't work as I hope. What I wanted to see was the clicked-on object moving with the mouse cursor. What I actually see is that the object moves in the direction of the mouse, but not enough, ie it does not keep the clicked-on point under the cursor. Secondly, the movement flickers and jumps around, jittering by up to twenty or thirty pixels sometimes, then flickers back. If I replace oDiff with something constant this doesn't occur.
Any ideas, or code samples showing how to implement this with DirectX (D3DX, DX matrix order, etc) will be gratefully read.
Edit
Commenter Nico below pointed out that when calculating the new position using the mouse cursor's moved position, I needed to use the original view matrix. Doing so helps a lot, and the objects stay near the mouse position. However, it's still not exact. What I've noticed is that at the center of the screen, it is exact; as the mouse moves further from the center, it gets out by more and more. This seemed to change based on how far away the object was, too. By pure 'I have no idea what I'm doing' guesswork, I scaled this by a factor of the near/far plane and how far away the object was, and this brings it very close to the mouse cursor, but still a few pixels away (1 to, say, 30 at the extreme edge of the screen, which is enough to make it feel wrong.)
Here's how I solve this problem.
float fieldOfView = 45.0f;
float halfFOV = (fieldOfView / 2.0f) * (DEGREES_TO_RADIANS);
float distanceToObject = // compute the world space distance from the camera to the object you want to pan
float projectionToWorldScale = distanceToObject * tan( halfFOV );
Vector mouseDeltaInScreenSpace = // the delta mouse in pixels that we want to pan
Vector mouseDeltaInProjectionSpace = Vector( mouseDeltaInScreenSpace.x * 2 / windowPixelSizeX, mouseDeltaInScreenSpace.y * 2 / windowPixelSizeY ); // ( the "*2" is because the projection space is from -1 to 1)
// go from normalized device coordinate space to world space (at origin)
Vector cameraDelta = -mouseDeltaInProjectionSpace * projectionToWorldScale;
// now translate your camera by "cameraDelta".
Note this works for a field-of-view aspect ratio of 1; I think you would have to break the "scale" up into separate x and y components if the vertical field of view were different from the horizontal field of view.
Also, you mentioned a "look at" vector. I'm not sure how my math would need to change for that, since my camera is always looking straight down the z-axis.
One problem is your calculation of the new 3d position. I am not sure if this is the root cause, but you might try it. If it doesn't help, just post a comment.
The problem is that your offset vector is not parallel to the znear plane. This is because the two rays are not parallel. Therefore, if they have the same length behind znear, the distances of their end points to the znear plane cannot be equal.
You can calculate the offset vector with the theorem of intersecting lines. If zNearA and zNearB are the intersection points of the znear plane with ray A and ray B respectively, then the theorem states:
Length(original_position - cam_position) / Length(offset_vector) = Length(zNearA - cam_position) / Length(zNearB - zNearA)
And therefore
offset_vector = Length(original_position - cam_position) / Length(zNearA - cam_position) * (zNearB - zNearA)
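In code, the same correction could look roughly like this (a sketch with a minimal vector helper rather than the D3DX calls; the function and parameter names are placeholders):

function length(v) { return Math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z) }
function sub(a, b) { return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z } }
function scale(v, s) { return { x: v.x * s, y: v.y * s, z: v.z * s } }

// originalPosition: clicked-on object point; camPosition: original eye position;
// zNearA/zNearB: intersections of the old and new mouse rays with the znear plane
function panOffset(originalPosition, camPosition, zNearA, zNearB) {
  // scale the displacement on the near plane up to the object's depth,
  // so the resulting offset stays parallel to the znear plane
  var ratio = length(sub(originalPosition, camPosition)) / length(sub(zNearA, camPosition))
  return scale(sub(zNearB, zNearA), ratio)
}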
Then you can be sure to move on a line that is parallel to the znear plane.
Just try it out and see if it helps.

Generating a 3D prism from any 2D polygon

I am creating a 2D sprite game in Unity, which is a 3D game development environment.
I have constrained all translation of objects to the XY-plane and rotation to the Z-axis.
My problem is that the meshes used to detect collisions between objects must still be 3D. I need to detect collisions between the player object (a capsule collider) and a sprite (whose collision volume is defined by a polygonal prism).
I am currently writing the level editor and need to let the user define the collision area for any given tile. In the image below the user clicks the points P1, P2, P3, P4 in that order.
Obviously the points join up to form a quadrilateral. This is the collision area I want; however, I must then convert it to a 3D mesh. Basically I need to generate an extrusion of the polygon, then assign the vertex winding and triangles etc. The vertex positions are not a problem to figure out, as they are merely a translation of the polygon down the z-axis.
I am having trouble creating an algorithm for assigning the winding order of the vertices, especially since the mesh must consist only of triangles.
Obviously the structure I have illustrated is not important, the polygon may be any 2d shape and will always need to form a prism.
Does anyone know any methods for this?
Thank you all very much for your time.
A simple algorithm that comes to mind is something like this:
extrudedNormal = faceNormal.multiplyScale(sizeOfExtrusion);//multiply the face normal by the extrusion amt. = move along normal
for each(vertex in face){
vPrime = vertex.clone();//copy the position of each vertex to a new object to be modified later
vPrime.addSelf(extrudedNormal);//add translation in the direction of the normal, by the extrusion amount
}
So the idea is basic:
- clone the face normal and move it in the same direction by the amount you want to extrude by
- clone the face vertices and move them using the moved (extruded) normal position
For a more complete, feature rich example, refer to the Procedural Modeling Unity samples. They include a nice Mesh extrusion sample too (see ExtrudedMeshTrail.js which uses MeshExtrusion.cs).
Goodluck!
To create the extruded walls:
For each vertex a (with coordinates ax, ay) in your polygon:
- call the next vertex 'b' (with coordinates bx, by)
- create the extruded rectangle corresponding to the line from 'a' to 'b':
- The rectangle has vertices (ax,ay,z0), (ax,ay,z1), (bx,by,z0), (bx,by,z1)
- This rectangle can be created from two triangles:
- (ax,ay,z0), (ax,ay,z1), (bx,by,z0) and (ax,ay,z1), (bx,by,z0), (bx,by,z1)
If you want to create a triangle strip instead, it's even simpler. For each vertex a, just add (ax,ay,z0) and (ax,ay,z1). Whichever vertex you processed first will also need to be processed again after looping over all other vertices.
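A minimal sketch of the wall construction described above (the polygon is given as an array of {x, y} vertices; the second triangle is reordered so both triangles share the same winding, and you may need to flip the winding depending on whether your polygon is clockwise or counter-clockwise):

function buildWalls(polygon, z0, z1) {
  var vertices = []
  var triangles = [] // indices into vertices, three per triangle
  for (var i = 0; i < polygon.length; i++) {
    var a = polygon[i]
    var b = polygon[(i + 1) % polygon.length] // next vertex, wrapping around
    var base = vertices.length
    vertices.push(
      { x: a.x, y: a.y, z: z0 }, // base + 0
      { x: a.x, y: a.y, z: z1 }, // base + 1
      { x: b.x, y: b.y, z: z0 }, // base + 2
      { x: b.x, y: b.y, z: z1 }  // base + 3
    )
    // the rectangle from 'a' to 'b', split into two triangles
    triangles.push(base, base + 1, base + 2)
    triangles.push(base + 2, base + 1, base + 3)
  }
  return { vertices: vertices, triangles: triangles }
}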
To create the end-caps:
This step is probably unnecessary for collision purposes. But, one simple technique is here: http://www.siggraph.org/education/materials/HyperGraph/scanline/outprims/polygon1.htm
Each resulting triangle should be added at depth z0 and z1.
