What class/functions should I use to visualize a 2D point cloud? All the documentation online seems to point only to 3D point cloud visualization.
You're right: the visualization module of PCL is designed for 3D point clouds. But you can perfectly well visualize a 2D point cloud in 3D. You will just have one of the coordinates set to 0 (or any other constant).
Your point cloud will appear planar and, as the viewer is 3D, you will be able to move and rotate around it.
If you really need to constrain the camera, I think you will need to write your own visualizer using VTK. This will require more effort, especially if you are not familiar with the way VTK works. (vtkInteractorStyleRubberBand2D is the type of 2D camera interactor that could be used.)
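As a starting point, here is a minimal sketch of the first suggestion using pcl::visualization::PCLVisualizer; the grid of points and the camera placement are just illustrative values, not anything prescribed by PCL.

```cpp
// A minimal, illustrative sketch: a flat (z = 0) cloud shown in the standard
// 3D viewer. The grid of points and the camera placement are just examples.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/visualization/pcl_visualizer.h>

int main()
{
  // Build a "2D" cloud by keeping z constant (0 here).
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  for (float x = 0.f; x < 1.f; x += 0.05f)
    for (float y = 0.f; y < 1.f; y += 0.05f)
      cloud->push_back(pcl::PointXYZ(x, y, 0.f));

  pcl::visualization::PCLVisualizer viewer("2D cloud");
  viewer.addPointCloud<pcl::PointXYZ>(cloud, "cloud");
  viewer.setCameraPosition(0.5, 0.5, 2.0,   // camera above the plane
                           0.5, 0.5, 0.0,   // looking at the cloud centre
                           0.0, 1.0, 0.0);  // up vector
  viewer.spin();
  return 0;
}
```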
I am looking to find all the planes in a 3D point cloud using PCL. The tutorial has a video showing two different planes being detected:
http://pointclouds.org/documentation/tutorials/planar_segmentation.php
But if I look at the source code snippet, I think it assumes that there is only one plane in the point cloud.
Is it possible to use PCL to detect all the planar sections in a point cloud using RANSAC?
Have a look at this cluster extraction tutorial. From line 44 to line 69 you can see how "all" planes are removed from the cloud. The trick is to set the filter to negative with .setNegative(true) to extract the cloud without the planar surface.
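In case the tutorial link goes stale, here is a minimal sketch of that "segment a plane, remove its inliers, repeat" loop; the distance threshold and the 30 % stopping criterion are placeholder choices, not values taken from the tutorial.

```cpp
// Iteratively segment planes with RANSAC and remove each plane's inliers
// until most of the cloud has been explained. Thresholds are placeholders.
#include <cstddef>
#include <vector>
#include <pcl/ModelCoefficients.h>
#include <pcl/point_types.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/segmentation/sac_segmentation.h>

void extractAllPlanes(pcl::PointCloud<pcl::PointXYZ>::Ptr cloud,
                      std::vector<pcl::PointCloud<pcl::PointXYZ>::Ptr>& planes)
{
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setOptimizeCoefficients(true);
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setMaxIterations(100);
  seg.setDistanceThreshold(0.02);          // placeholder tolerance

  const std::size_t original_size = cloud->size();
  while (cloud->size() > 0.3 * original_size)  // stop once most points are explained
  {
    pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);
    pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
    seg.setInputCloud(cloud);
    seg.segment(*inliers, *coefficients);
    if (inliers->indices.empty())
      break;                               // no more planes found

    pcl::ExtractIndices<pcl::PointXYZ> extract;
    extract.setInputCloud(cloud);
    extract.setIndices(inliers);

    // Keep the plane itself ...
    pcl::PointCloud<pcl::PointXYZ>::Ptr plane(new pcl::PointCloud<pcl::PointXYZ>);
    extract.setNegative(false);
    extract.filter(*plane);
    planes.push_back(plane);

    // ... and continue with everything that is NOT on it.
    pcl::PointCloud<pcl::PointXYZ>::Ptr rest(new pcl::PointCloud<pcl::PointXYZ>);
    extract.setNegative(true);
    extract.filter(*rest);
    cloud.swap(rest);
  }
}
```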
I'm currently rendering a 3D model (Wavefront .obj format) in my Qt program. Right now, I'm rendering the model using Scene3D in QML, and I'm able to get it to display in the viewing area. What I would like to do is have a user click on the model and generate a 2D cross section of the slice that I would like to plot on a different window. I'm quite new to 3D rendering, and a lot of Qt documentation isn't very descriptive. I've been reading Qt documentation, experimenting, and searching online with no luck. How can I create 2D slices of a 3D object Model in Qt 3D, preferably in QML? What Qt libraries or classes can I use to achieve this?
Unfortunately, the fact that models are stored as a set of surfaces makes this hard. Qt probably doesn't have a built-in method for this.
Consider, for example, that a model made of faces might be missing a face. What now? Can you interpolate across that gap consistently from different angles? What about the fact that a cross-section probably won't contain any vertices?
But, of course, it can be solved. First, just don't allow unclosed surfaces (meshes with holes). Second, to find the vertices of your cross-section, intersect every edge in your model with the plane you're using; wherever there's an intersection, there's a point. Third, to find the edges, look at the list of vertices: any two that come from edges of the same polygon in the mesh should be connected by an edge in the cross-section. To find which direction the edge should go, project the normal of that polygon onto the plane you're using. For filling, I don't really know what to do. I guess that's whatever you want it to be.
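Here is a small, self-contained sketch of the edge/plane intersection step described above. The Vec3 type, the plane representation (point plus normal) and the indexed-triangle mesh layout are assumptions for illustration, not anything provided by Qt.

```cpp
// Collect the line segments where a triangle mesh crosses a cutting plane.
// Degenerate cases (a vertex lying exactly on the plane) are ignored here.
#include <array>
#include <optional>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 lerp(Vec3 a, Vec3 b, float t) {
  return {a.x + t * (b.x - a.x), a.y + t * (b.y - a.y), a.z + t * (b.z - a.z)};
}

// Intersect the segment a-b with the plane through 'origin' with normal 'n'.
static std::optional<Vec3> segmentPlane(Vec3 a, Vec3 b, Vec3 origin, Vec3 n) {
  float da = dot(sub(a, origin), n);   // signed distance of a to the plane
  float db = dot(sub(b, origin), n);   // signed distance of b to the plane
  if ((da > 0) == (db > 0)) return std::nullopt;  // both on the same side
  float t = da / (da - db);            // parameter where the sign changes
  return lerp(a, b, t);
}

// For each triangle, collect the (0 or 2) crossing points; each pair forms
// one edge of the cross-section polyline.
std::vector<std::array<Vec3, 2>> crossSection(
    const std::vector<Vec3>& vertices,
    const std::vector<std::array<int, 3>>& triangles,
    Vec3 planeOrigin, Vec3 planeNormal) {
  std::vector<std::array<Vec3, 2>> segments;
  for (const auto& tri : triangles) {
    std::vector<Vec3> hits;
    for (int i = 0; i < 3; ++i) {
      Vec3 a = vertices[tri[i]];
      Vec3 b = vertices[tri[(i + 1) % 3]];
      if (auto p = segmentPlane(a, b, planeOrigin, planeNormal)) hits.push_back(*p);
    }
    if (hits.size() == 2) segments.push_back({hits[0], hits[1]});
  }
  return segments;
}
```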
I'm trying to detect squares using the Point Cloud Library. I have PCL data from a 3D lidar in which I need to find squares. RANSAC doesn't have a model for a square. I would like to know the most efficient method for square detection.
If you are looking for a filled square, the SACMODEL_PLANE should be able to find it. You may need to cluster the inliers of the plane model, and filter the clusters to find the location of the square.
If you are looking for the outline of a square, the SACMODEL_LINE should be able to find the 4 sides separately. You will then need some logic to filter out lines that do not belong, as well as to combine the inliers of the correct lines.
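If you go the SACMODEL_PLANE route, a minimal sketch of the "cluster the plane inliers" step might look like the following; the tolerances are placeholder values. Each resulting cluster could then be checked for roughly equal side lengths, e.g. via pcl::getMinMax3D on the cluster's points.

```cpp
// Euclidean clustering of the points that belong to the detected plane.
#include <vector>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

std::vector<pcl::PointIndices>
clusterPlaneInliers(const pcl::PointCloud<pcl::PointXYZ>::Ptr& plane_cloud)
{
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(plane_cloud);

  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.02);   // 2 cm, placeholder
  ec.setMinClusterSize(100);      // placeholder
  ec.setMaxClusterSize(25000);    // placeholder
  ec.setSearchMethod(tree);
  ec.setInputCloud(plane_cloud);

  std::vector<pcl::PointIndices> clusters;
  ec.extract(clusters);
  return clusters;
}
```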
I have a city square with people, cars, trees and buildings in PCL format. I need to automatically determine the ground plane and project these objects onto that ground plane to get a 2D map of occupied places.
Any idea?
I think the best thing to do here would be to familiarise yourself with the following two PCL tutorials:
http://pointclouds.org/documentation/tutorials/planar_segmentation.php
http://pointclouds.org/documentation/tutorials/project_inliers.php
The first tutorial makes use of the RANSAC algorithm to find a dominant plane in a scene. I use it to find tables and floors in robotics scenarios. You would use it to find your dominant ground plane.
The second tutorial shows how to project points directly onto a plane. This is what you would use to make your 3D point cloud into a 2D one. Note that, despite the "inlier" keyword, you can pass your whole point cloud to be projected onto the plane.
Actually, if you are after "occupied" places, you might want to project all of the points that aren't in the ground plane (i.e. the outliers) and that are above it. You can use a PCL filter such as PlaneClipper3D for this, or simply take the complement of the plane inliers from the plane-segmentation operation.
If the plane that you end up with (containing all your projected points) is not in the coordinate frame you want, you may wish to rotate the whole lot, for example, to align with the coordinate axes so that all z-coordinates are zero. See pcl::transformPointCloud for this (the transform will be obtainable from the plane coefficients returned from the plane segmentation).
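Putting the two tutorials together, a rough sketch of the pipeline might look like this; the distance threshold is a placeholder, and the extra filtering of points above the plane (e.g. with PlaneClipper3D) is left out for brevity.

```cpp
// Segment the ground plane, keep the points that are NOT on it, and flatten
// them onto the plane with ProjectInliers to obtain a 2D "occupancy" cloud.
#include <pcl/ModelCoefficients.h>
#include <pcl/point_types.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/filters/project_inliers.h>
#include <pcl/segmentation/sac_segmentation.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr
occupancyCloud(const pcl::PointCloud<pcl::PointXYZ>::Ptr& scene)
{
  pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);
  pcl::PointIndices::Ptr ground(new pcl::PointIndices);

  // 1. Find the dominant (ground) plane with RANSAC.
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setOptimizeCoefficients(true);
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.05);          // placeholder tolerance in metres
  seg.setInputCloud(scene);
  seg.segment(*ground, *coefficients);

  // 2. Keep everything that is NOT part of the ground plane (the "objects").
  pcl::PointCloud<pcl::PointXYZ>::Ptr objects(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::ExtractIndices<pcl::PointXYZ> extract;
  extract.setInputCloud(scene);
  extract.setIndices(ground);
  extract.setNegative(true);               // complement of the plane inliers
  extract.filter(*objects);

  // 3. Project those points onto the ground plane to get a flat 2D map.
  pcl::PointCloud<pcl::PointXYZ>::Ptr projected(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::ProjectInliers<pcl::PointXYZ> proj;
  proj.setModelType(pcl::SACMODEL_PLANE);
  proj.setInputCloud(objects);
  proj.setModelCoefficients(coefficients);
  proj.filter(*projected);

  return projected;
}
```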
I hope this is helpful and not at too basic a level, though the question was rather general so I suppose it should be okay.
I am completely new to 3D and started with Jeff Lamarche's tutorials as an introduction to OpenGL ES for iPhone. So far, I am able to draw a spinning sphere, which will be the base of my application.
What I want to do is render a planet Earth from 2D GIS vector data (polygons, lines or points with latitude/longitude or x/y coordinates).
I want to be able to turn different layers on/off and maybe be able to identify an object that would be touched.
My questions are:
would it be easier to rasterize my vector data to use it as an image texture, or to apply the vector data directly onto the sphere (keeping in mind that I want to turn the layers on/off, the touch-enabled objects being optional)?
would it be easier to use software like Blender to draw the planet and add the layers, rather than starting with the procedural sphere I already have?
does the export from Blender to OpenGL work well?
This kind of question is difficult to answer in general. Technically, your intention sounds a lot like you want to write a program like Google Earth or KDE Marble. Since you're referring to GIS data, you will require very high resolution; textures only make sense for limited-resolution data.
GIS applications usually use hybrid approaches, where some vector data are rendered directly (roads, water bodies, borders), while other data are rendered to texture, and the texture, or more accurately texture tiles, are used as a cache, for example for building outlines in dense cities or the like. However, data as it comes from, say, OSM can be rendered directly as vector data, since it is not very dense.
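If you do render some vector data directly on your procedural sphere, the core step is just mapping each latitude/longitude vertex onto the sphere surface. A small sketch follows; it is independent of any particular GL framework, and the Vec3 type is an assumption for illustration.

```cpp
// Map GIS lat/lon vertices onto a sphere so a polyline can be drawn directly.
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

// latitude/longitude in degrees -> point on a sphere of the given radius
Vec3 latLonToSphere(float latDeg, float lonDeg, float radius = 1.0f) {
  const float kPi = 3.14159265358979f;
  const float deg2rad = kPi / 180.0f;
  float lat = latDeg * deg2rad;
  float lon = lonDeg * deg2rad;
  return { radius * std::cos(lat) * std::cos(lon),
           radius * std::cos(lat) * std::sin(lon),
           radius * std::sin(lat) };
}

// Convert a GIS polyline (sequence of lat/lon pairs) into a 3D line strip
// that can be uploaded to the renderer as a vertex buffer.
std::vector<Vec3> polylineToSphere(const std::vector<std::pair<float, float>>& latLon) {
  std::vector<Vec3> out;
  out.reserve(latLon.size());
  for (const auto& p : latLon)
    out.push_back(latLonToSphere(p.first, p.second));
  return out;
}
```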