How to implement a KdTree using PCLPointCloud2 as loaded by loadOBJFile in the Point Cloud Library?

Okay, so I have one OBJ file which I read into a PCLPointCloud2. Now I want to feed it into a k-d tree, which does not take PCLPointCloud2 as input. I want to query whether any general point lies on the surface of my OBJ file.
I am finding their documentation hard to understand. So how can this be done?
Also, kindly point me to a good, easily interpretable reference. And what is "PointT", by the way? Is it a custom type defined by us? Please elaborate.

Look at the code in the provided tool pcl_mesh_sampling (in the PCL code directory under tools/mesh_sampling.cpp). It is relatively simple. It loads a model from PLY or OBJ, then for each triangle it samples random points from the triangle. The final point cloud then undergoes voxel-grid downsampling to make the points relatively uniform. Alternatively, you can just run the pcl_mesh_sampling program on your OBJ file to get an output PCD, which you can then visualise with pcl_viewer before loading the PCD file into your own code.
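For reference, the per-triangle step boils down to uniform sampling in barycentric coordinates. A standalone sketch (the Vec3 type and function name are illustrative, not the tool's actual code):

#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

// Uniformly sample one point on triangle (a, b, c).
Vec3 samplePointOnTriangle(const Vec3& a, const Vec3& b, const Vec3& c,
                           std::mt19937& rng)
{
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    float r1 = std::sqrt(uni(rng));  // sqrt makes the density area-uniform
    float r2 = uni(rng);
    float u = 1.0f - r1, v = r1 * (1.0f - r2), w = r1 * r2;  // u + v + w == 1
    return { u * a.x + v * b.x + w * c.x,
             u * a.y + v * b.y + w * c.y,
             u * a.z + v * b.z + w * c.z };
}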
Once you have the final point cloud, you can build and use a KD-Tree as per http://pointclouds.org/documentation/tutorials/kdtree_search.php
PointT is the template argument. The Point Cloud Library can handle a variety of point types, from the simple PointXYZ (having just x, y, z) to more complicated points like PointXYZRGBNormal (having x, y, z, normal_x, normal_y, normal_z, curvature, and r, g, b channels). Each algorithm is templated on the point type that you want to use. It would probably be easiest to use PointXYZ with your OBJ file, so use pcl::PointXYZ for all your template arguments. For more on templates see http://www.tutorialspoint.com/cplusplus/cpp_templates.htm and http://pointclouds.org/documentation/tutorials/adding_custom_ptype.php. A sketch of the whole pipeline follows.
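Putting the pieces together for your case: loadOBJFile can fill a PCLPointCloud2, and pcl::fromPCLPointCloud2 converts that blob into the typed cloud the KD-tree expects. A minimal sketch (the file name and query point are placeholders):

#include <pcl/conversions.h>
#include <pcl/io/obj_io.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <vector>

int main()
{
    // Load the OBJ vertices into the untyped blob.
    pcl::PCLPointCloud2 cloud2;
    pcl::io::loadOBJFile("model.obj", cloud2);

    // Convert the blob to a typed cloud the KD-tree can consume.
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::fromPCLPointCloud2(cloud2, *cloud);

    // Build the tree and run a nearest-neighbour query.
    pcl::KdTreeFLANN<pcl::PointXYZ> tree;
    tree.setInputCloud(cloud);

    pcl::PointXYZ query(0.0f, 0.0f, 0.0f);
    std::vector<int> indices(1);
    std::vector<float> sqr_distances(1);
    if (tree.nearestKSearch(query, 1, indices, sqr_distances) > 0)
    {
        // sqr_distances[0] is the squared distance to the closest vertex;
        // a small value suggests the query lies near the sampled surface.
    }
    return 0;
}

Note that this only tells you the distance to the nearest sampled point, which is why densifying the mesh with pcl_mesh_sampling first matters.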
Update (reply to latest comment)
Added here because this reply is too long for a comment.
I think I see what you are getting at. When you sample points from the mesh and build a KD-tree of the object surface, for each point you keep track of which faces are near that point (probably all the faces adjacent to the face from which the point was sampled should be sufficient; just one face is definitely insufficient). Then, when the query point is given, you find the nearest point in the KD-tree and check whether the query point is on the "outside" or the inside of the full list of nearby faces associated with that point in the KD-tree. If it is on the "inside" of all of them, perhaps it is an interior point, but I cannot guarantee that this is true. That is my thinking on the question at the moment. I do wonder, though, whether you really want a mesh-based approach. By the way, if you break your mesh up into convex parts, then you get nice guarantees when processing each convex part.
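For illustration, the per-face inside/outside check can be a signed-distance test against each face's supporting plane, assuming outward-pointing face normals (a sketch, not a guaranteed-correct interior test, as noted above):

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// True if q is on the inner (negative) side of the plane through 'vertex'
// with outward normal 'normal'.
bool insideFacePlane(const Vec3& q, const Vec3& vertex, const Vec3& normal)
{
    Vec3 d = { q.x - vertex.x, q.y - vertex.y, q.z - vertex.z };
    return dot(normal, d) <= 0.0;
}

// A query point is an interior candidate only if insideFacePlane() holds
// for every face associated with its nearest neighbour in the KD-tree.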

Related

Adapt geometry on printed points

I draw a vector geometry with some calibration points around it.
I print this geometry and then physically scan the printed calibration points (I can't scan the geometry, I can only scan the calibration points).
When I acquire these points, they are no longer in their original positions because of print errors or bad print calibration.
The question is:
Is there an algorithm that helps me adapt the original geometry based on the newly scanned points?
In practice, I need to warp the geometry in order to obtain the real geometry as printed, with the same print error that I have on the calibration points.
The distortion comes from the physical distortion of the material (not paper but cloth) during the printing process. I can't know in advance how much the material will distort during printing.
Yes, there are algorithms to help you with that. In general, you need to learn/find the transformation between the two images that you have.
Typical geometric transformations are affine transformations (shift, scale, rotation, shear, reflection), which need at least three control points, or piecewise local linear / local weighted mean transformations, which need at least 4-6 control points. The more control points you have, the better, in general.
Given a set of control points in one image and the corresponding set of control points in the other image, there are algorithms for finding the optimal transformation between them once you specify a class (affine or piecewise local linear). See for example fitgeotrans in Matlab. I don't know exactly how it solves the problem, but I guess by some kind of optimization. It should be easy to find available implementations for other programming languages (Python, C, Java).
What remains is finding the correspondence between the control points in the two images. For a few images you may be able to do that by hand, but in the general case you will want to automate this as well. General image registration algorithms like imregister should do well on your images. They give you a good initial estimate for the transformation (which may already be sufficient), so that identifying the corresponding point pairs becomes trivial (always take the nearest) and allows refining.
So I advise you to first just try to register the images (grayscale data) with an identity transformation as the starting value. Then identify corresponding point pairs and refine the transformation using either an affine or a piecewise/local transformation. Then apply the transformation to the geometry to get the printed geometry. Depending on your choice of programming language you will find many implementations that do the job. A least-squares sketch of the affine case follows.
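For the affine case, the fit from control-point pairs reduces to linear least squares. A minimal sketch using Eigen (the library choice and function name are my assumptions; fitgeotrans solves the same problem):

#include <Eigen/Dense>
#include <vector>

// Fit x' = A*x + t from pairs src[i] -> dst[i] by least squares.
// Needs at least three non-collinear control points.
Eigen::Matrix3d fitAffine2D(const std::vector<Eigen::Vector2d>& src,
                            const std::vector<Eigen::Vector2d>& dst)
{
    const int n = static_cast<int>(src.size());
    Eigen::MatrixXd M(2 * n, 6);
    Eigen::VectorXd b(2 * n);
    for (int i = 0; i < n; ++i)
    {
        // Two rows per correspondence: one for x', one for y'.
        M.row(2 * i)     << src[i].x(), src[i].y(), 1, 0, 0, 0;
        M.row(2 * i + 1) << 0, 0, 0, src[i].x(), src[i].y(), 1;
        b(2 * i)     = dst[i].x();
        b(2 * i + 1) = dst[i].y();
    }
    // Least-squares solution of the overdetermined system M*p = b.
    Eigen::VectorXd p = M.colPivHouseholderQr().solve(b);
    Eigen::Matrix3d T;
    T << p(0), p(1), p(2),
         p(3), p(4), p(5),
         0.0,  0.0,  1.0;
    return T;  // homogeneous 2D affine transform
}

Applying the returned transform to every vertex of the original geometry then yields the warped (printed) geometry.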

Create graph of distinct routes

Problem: I have a binary image with traversable and blocked cells. On this map, points of interest (POIs) are set. My goal is to create a graph from these POIs respecting obstacles (see images) which represents all possible and truly distinct paths. Two paths are truly distinct if they cannot be joined into one path. E.g. if the outside of the building in picture 1 were accessible, a path around the building could not be merged with one through the building.
Researched: I have looked at maze solvers and various shortest-path algorithms (e.g. A*, Theta*, Phi*), and while they'd be useful for this problem, they only search for a path between two points and don't consider already established routes.
Best guess: I am considering using Phi* to search for all possible routes and merging them afterwards using magic (ideas?), but this will not give me truly distinct alternatives.
Can someone help?
P.S.: I'm using C++ and am not really eager to do this all by myself, so if there is a library which already does this... :)
I found (and decided to use) a parallel thinning algorithm (Zhang-Suen for now) to create an image skeleton. This effectively assumes that the geometry shapes the common routes, which I think is fine.
Using the Rutovitz crossing number I can extract bifurcations and crossings from the resulting skeleton. Then I determine the shortest line of sight from my points of interest to the extracted crossings (using Bresenham's algorithm) to connect them to the graph.
I hope this will help someone along the road :)
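For reference, the Rutovitz crossing number of a skeleton pixel is the number of 0->1 transitions when walking its 8-neighbourhood cyclically; a value of 3 or more marks a bifurcation or crossing. A minimal sketch (the row-major image layout is an assumption):

#include <cstdlib>
#include <vector>

// Caller must keep (x, y) off the image border.
int crossingNumber(const std::vector<unsigned char>& skeleton, int width,
                   int x, int y)
{
    // 8-neighbourhood in clockwise order, starting east.
    static const int dx[8] = { 1, 1, 0, -1, -1, -1, 0, 1 };
    static const int dy[8] = { 0, 1, 1, 1, 0, -1, -1, -1 };
    int sum = 0;
    for (int i = 0; i < 8; ++i)
    {
        int j = (i + 1) % 8;
        int a = skeleton[(y + dy[i]) * width + (x + dx[i])] ? 1 : 0;
        int b = skeleton[(y + dy[j]) * width + (x + dx[j])] ? 1 : 0;
        sum += std::abs(b - a);
    }
    return sum / 2;  // 1: end point, 2: simple path, >= 3: bifurcation
}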

How do I best map an unorganized point cloud back to its organized ancestor?

I get an organized point cloud (using PCL and an ASUS Xtion Pro Live), which of course contains NaNs and the like. I also get an RGB image of the same scene.
The first step in processing is removing those NaNs, which makes the point cloud unorganized. I then perform a few other steps, but those aren't relevant to the question (I think; see P.S.1). What COULD be relevant (I'm not sure) is that I run extraction multiple times, and so have quite a few intermediate point clouds. I believe this means I can no longer assume that the points are in the same order they were at the start.
For clarification, I do understand what an unorganized point cloud is and how it differs from an organized one, both theoretically and in terms of how the data is actually stored.
After chopping off various points, I now have a much smaller point cloud which consists only of points from the original point cloud (but far fewer of them). How do I map these points back to the matching points in the original point cloud? I could probably iterate through the entire cloud to find matches, but that seems hacky. Is there a better way to do this?
My main aim is to be able to say 'point A in my final point cloud is of interest to me' and, furthermore, to map that to pixel K in the RGB image I first obtained. It seems to me that matching the final point cloud with the initial one is the best way to do this, but alternatives are also welcome.
P.S.1 - One of the last steps in my process is finding a convex hull and then extracting a polygonal prism from the original point cloud. If all else fails, I will just interrogate the (20-50) points on the convex hull to match them with my initial point cloud (minimizing computation) and hence with the original RGB image.
P.S.2 - Random musing - since I know the original size of the RGB image and the position of the points relative to the camera used to take them, and can trivially obtain the camera parameters, could I simply ray-trace through each point in my final point cloud to produce an RGB image? The image may need registration with the 'real' RGB image, though it probably won't, since nothing will have actually moved except for rounding error.
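For readers: the index-tracking idea raised above has direct support in PCL, since removeNaNFromPointCloud can report, for every surviving point, its index in the organized input. A sketch (the composition step at the end is an assumption about the later filters):

#include <pcl/filters/filter.h>   // pcl::removeNaNFromPointCloud
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <vector>

void stripNaNsKeepMapping(const pcl::PointCloud<pcl::PointXYZ>& organized,
                          pcl::PointCloud<pcl::PointXYZ>& dense,
                          std::vector<int>& mapping)
{
    // mapping[i] becomes the index of dense.points[i] in 'organized'.
    pcl::removeNaNFromPointCloud(organized, dense, mapping);
    // The RGB pixel of point i is then
    //   (mapping[i] % organized.width, mapping[i] / organized.width).
    // If a later filter kept input indices 'kept', the chains compose:
    //   original_index = mapping[kept[k]].
}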

Constrained (Delaunay) Triangulation

For a university project I need to implement a computer graphics paper that was released a couple of years ago. At one point, I need to triangulate the results I get from my simulation. I guess it's easier to explain what I need by looking at a picture contained in the paper:
Let's say I already have all the information it takes to reconstruct the contour lines that you can see in the second thumbnail. Using those, I need to do a triangulation with those silhouettes as constraints. I have searched the internet for triangulation libraries like CGAL, VTK, Triangle, Triangle++, ... but I always ended up throwing my hands up in horror. I am not a good programmer, and it seems impossible to me to get into one of those APIs before the deadline of this project passes.
I would appreciate any kind of help, like code snippets, tips, etc.
I know that the algorithms need segments (pairs of points) as input, so let's say I have one std::vector containing all pairs of points defining the silhouette as well as the left and right sides of the rectangle.
Can you somehow give me a code snippet, e.g. for CGAL, that I could use for my purpose? First of all I just want to achieve the state of the third thumbnail. Later on I will have to do some displacement within the "cracks" and finally write the information into a VBO for OpenGL rendering.
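For readers: a minimal constrained-Delaunay sketch with CGAL, with placeholder segment endpoints standing in for the real silhouette vector:

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Constrained_Delaunay_triangulation_2.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Constrained_Delaunay_triangulation_2<K>       CDT;
typedef CDT::Point                                          Point;

int main()
{
    CDT cdt;
    // Every silhouette segment becomes a constrained edge.
    cdt.insert_constraint(Point(0, 0), Point(4, 0));
    cdt.insert_constraint(Point(4, 0), Point(4, 3));
    cdt.insert_constraint(Point(4, 3), Point(0, 3));
    cdt.insert_constraint(Point(0, 3), Point(0, 0));

    // The triangulation honours the constraints; walk the faces.
    for (CDT::Finite_faces_iterator fit = cdt.finite_faces_begin();
         fit != cdt.finite_faces_end(); ++fit)
    {
        // fit->vertex(0..2)->point() gives the triangle corners.
    }
    return 0;
}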
I have started working it out with CGAL. One simple problem still drives me crazy:
It is possible to attach information (like ints) to points before handing them to the triangulator object. I do this because I need, on the one hand, an int flag that I use later to define my texture coordinates, and on the other hand an index which I use to create an indexed VBO.
http://doc.cgal.org/latest/Triangulation_2/Triangulation_2_2info_insert_with_pair_iterator_2_8cpp-example.html
But instead of points I only want to insert constraint edges. If I insert both, CGAL returns strange results, since points have been fed in twice (once as a point and once as a point of a constrained edge).
http://doc.cgal.org/latest/Triangulation_2/Triangulation_2_2constrained_8cpp-example.html
Is it possible to attach information to "Constraints" in the same way as to points, so that I can use only the call cdt.insert_constraint(Point(j,0), Point(j,6)); before I iterate over the resulting faces?
Later, when I loop over the triangles, I need some way to access the int flags that I defined before. Like this, but not on actual points, rather on the "ends" defined by the constraint edges:
for (CDT::Finite_faces_iterator fit = m_cdt.finite_faces_begin();
     fit != m_cdt.finite_faces_end(); ++fit, ++k)
{
    int j = k * 3;
    for (int i = 0; i < 3; ++i)
    {
        indices[j + i] = fit->vertex(i)->info().first;  // the attached index
    }
}
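One possible way around the double insertion (a sketch, not a confirmed answer): insert each endpoint yourself first, attach the info to the returned vertex handle, and only then constrain between the handles. CGAL's insert() returns the existing handle when the point is already present, so shared endpoints are not duplicated. The typedefs assume a vertex base with info carrying the (index, flag) pair:

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Constrained_Delaunay_triangulation_2.h>
#include <CGAL/Constrained_triangulation_face_base_2.h>
#include <CGAL/Triangulation_data_structure_2.h>
#include <CGAL/Triangulation_vertex_base_with_info_2.h>
#include <utility>

typedef CGAL::Exact_predicates_inexact_constructions_kernel                K;
typedef CGAL::Triangulation_vertex_base_with_info_2<std::pair<int,int>, K> Vb;
typedef CGAL::Constrained_triangulation_face_base_2<K>                     Fb;
typedef CGAL::Triangulation_data_structure_2<Vb, Fb>                       Tds;
typedef CGAL::Constrained_Delaunay_triangulation_2<K, Tds>                 CDT;
typedef CDT::Point                                                         Point;

void addConstraintWithInfo(CDT& cdt, const Point& a, const Point& b,
                           std::pair<int,int> infoA, std::pair<int,int> infoB)
{
    CDT::Vertex_handle va = cdt.insert(a);  // returns existing handle if present
    va->info() = infoA;                     // (VBO index, texture flag)
    CDT::Vertex_handle vb = cdt.insert(b);
    vb->info() = infoB;
    cdt.insert_constraint(va, vb);
}

With this, the face loop above can keep reading fit->vertex(i)->info().first for the VBO index.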

How does a non-tile-based map work?

Ok, here is the thing. Recently I decided I wanted to understand how random map generation works. I found some papers and some discussions. The most interesting ones were the "diamond-square algorithm" and "midpoint displacement". I still have to try applying those in software, but other than that, I ran into this site: http://www-cs-students.stanford.edu/~amitp/game-programming/polygon-map-generation/
As you can see, the idea is to use polygons. But I have no idea how to apply that to a tile-based map, nor how to create those polygons using the tools I have (C++ and SDL). I am assuming there is no way to do it (please correct me if I am wrong). But if I am not, how does a non-tile map work, and how are these polygons generated?
This answer will not give you directly the answers you're looking for, but hopefully will get you close enough!
The Problem
I think what blocks you is how to represent the data. You're probably used to a 2D grid that simply represents the type of each tile. As you know, this is fine for handling a tile-based map, but doesn't properly allow you to model worlds where tiles have a different shape.
Graphs
What I suggest is to see the problem a bit differently. A grid is nothing more than a graph (more info) with nodes that have 4 (or 8, if you allow diagonals) implicit neighbor nodes. So first, what I would do in your place is move from your strict standard 2D grid to a "looser" graph, where each node has a position, a list of neighbors (in most cases you'll have corners with 2 neighbors, borders with 3 and "middle" tiles with 4), and finally a rendering component which simply draws your tile on screen at the given position. Once this is done, you should be able to get the exact same results on screen that you currently have with your "2D tile-based" engine, by simply calling the rendering component for each node whose bounding box (not in the list of fields above, but I'll get back to this later) intersects the camera's frustum (in a 2D world, most likely whether the position +/- the size intersects the RECT currently being drawn). A minimal sketch of such a node follows.
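Something like this, with illustrative field names:

#include <vector>

struct Node
{
    float x = 0.0f, y = 0.0f;        // position in world space
    int tileType = 0;                // what the rendering component draws
    std::vector<Node*> neighbors;    // 2, 3 or 4 entries for a grid node

    // A bounding box for camera culling can be derived from the position
    // and the tile size; the rendering component draws at (x, y).
};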
Search
The more generic approach will also help you do things like pathfinding with generic algorithms that explore nodes until they find a valid path (see A* or Dijkstra). Even if you decide to stick to a good old 2D tile-map game, these techniques will still be useful!
Yeah but I want Polygons
I hear you! So, if you want polygons, basically all you need to do is add to your nodes a list of vertices and the appropriate data you might need to render your polygons (vertex colors, textures and U/V maps, etc.), and update your rendering component to make the appropriate OpenGL calls (this, for example, should help) to draw your nodes. The first step in iteratively upgrading your 2D tile engine to a polygon-map engine would be, for each tile in your map, to give the node two triangles, a texture resource (the tile), and U/V mappings (0,0 - 0,1 - 1,0 and 1,1). Once this step is done, you will have a "generic" polygon-based tile-map engine. Most of this data can be created procedurally by calculating coordinates from the tile position, tile size, etc. A sketch of the extended node follows.
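Extending the earlier node sketch with polygon data might look like this (again, names are illustrative):

#include <vector>

struct Vertex
{
    float x, y;   // position
    float u, v;   // texture coordinates
};

struct PolygonNode
{
    std::vector<Vertex> vertices;          // e.g. two triangles per square tile
    std::vector<PolygonNode*> neighbors;
    unsigned int textureId = 0;            // e.g. an OpenGL texture name

    // For a square tile at (tx, ty) with size s, the six triangle vertices
    // and the corner U/Vs (0,0) (0,1) (1,0) (1,1) can be generated
    // procedurally, as described above.
};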
Convex Polygons
If you think you might ever need NPCs to navigate your map, or want to allow your player to navigate by clicking the map, I suggest you always use convex polygons (the triangle being the simplest form of convex polygon). This lets your code assume that any two positions within the same polygon can be reached in a straight line.
Complex Maps
Based on the link you provided, you want rather complex maps. In this case, the author used Voronoi diagrams to generate the polygons of the map. There are existing solutions for triangulations like that, but you might also want to use other techniques that are easier to work with if you're just switching to 3D, like this one for example. Once you have interesting results, you should consider implementing serialization to save/load your map data from the game. If you want to create an editor, be aware that it might be a lot of work, but it can be worth it if you want people to help you create maps or to add elements to the maps (like geometry that's not part of the terrain).
I went all over the place with this answer, but hopefully it helps!
Just iterate over all the tiles and do a hit test from the centre of each tile to the polygons. Set the type of each tile to the type of the polygon it lands in. Did you need more than that?
EDIT: Sorry, I realize that probably isn't helpful. Playing with procedural algorithms can be fun and profitable. Start with a loop that iterates over all tiles and randomly chooses whether or not each tile is occupied. Then iterate over them again and mark each tile occupied if it or one of its neighbours was. A sketch of both passes follows.
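A minimal two-pass sketch (the grid size and the 45% fill rate are arbitrary choices):

#include <cstdlib>
#include <ctime>
#include <vector>

int main()
{
    const int W = 64, H = 64;
    std::srand(static_cast<unsigned>(std::time(0)));

    // Pass 1: occupy each tile at random.
    std::vector<char> map(W * H);
    for (int i = 0; i < W * H; ++i)
        map[i] = (std::rand() % 100 < 45) ? 1 : 0;

    // Pass 2: a tile becomes occupied if it or a 4-neighbour was.
    std::vector<char> next(W * H, 0);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
        {
            bool occ = map[y * W + x] != 0;
            if (x > 0)     occ = occ || map[y * W + x - 1] != 0;
            if (x < W - 1) occ = occ || map[y * W + x + 1] != 0;
            if (y > 0)     occ = occ || map[(y - 1) * W + x] != 0;
            if (y < H - 1) occ = occ || map[(y + 1) * W + x] != 0;
            next[y * W + x] = occ ? 1 : 0;
        }
    map.swap(next);
    return 0;
}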
Also, check out the source code for this: http://dustinfreeman.org/toys/wall7-dustin.html
