Detecting squares in 3D lidar data using the Point Cloud Library

I'm trying to detect squares using the Point Cloud Library. I have point cloud data from a 3D lidar in which I need to find squares. RANSAC doesn't have a model for a square. I would like to know the most efficient method for square detection.

If you are looking for a filled square, SACMODEL_PLANE should be able to find it. You may need to cluster the inliers of the plane model and filter the clusters to find the location of the square.
If you are looking for the outline of a square, SACMODEL_LINE should be able to find the four sides separately. You will then need some logic to filter out lines that do not belong, as well as to combine the inliers of the correct lines.
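As a minimal sketch of the plane-fitting step (C++, assuming PCL and pcl::PointXYZ input; the function name, the 0.01 distance threshold and the clustering follow-up are illustrative choices, not a fixed recipe):

#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/segmentation/sac_segmentation.h>

// Fit the dominant plane in `cloud` with RANSAC and return its inlier indices.
// Swapping SACMODEL_PLANE for SACMODEL_LINE (run repeatedly, removing inliers
// each time) would instead pick up the four sides of the outline case.
pcl::PointIndices::Ptr fitPlane(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,
                                pcl::ModelCoefficients::Ptr& coefficients)
{
    pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
    coefficients.reset(new pcl::ModelCoefficients);

    pcl::SACSegmentation<pcl::PointXYZ> seg;
    seg.setOptimizeCoefficients(true);
    seg.setModelType(pcl::SACMODEL_PLANE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setDistanceThreshold(0.01);   // metres; tune to your lidar noise
    seg.setInputCloud(cloud);
    seg.segment(*inliers, *coefficients);
    return inliers;
}
// Next steps: extract the inliers, cluster them (e.g. EuclideanClusterExtraction)
// and test each cluster's extent and aspect ratio to decide whether it is the square.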

Related

Aligning two clouds using two manually selected points

I'm maintaining software which uses PCL. I'm not very experienced in PCL myself; I've only tried some examples and tried to understand the official PCL documentation (which is unfortunately mainly sparse, doxygen-generated text). My impression is that only PCL contributors have a real chance to use the library efficiently.
One feature I have to fix in the software is aligning two clouds. The clouds are two objects which should be stacked together with a layer in between (the actual task is to calculate the volume of the layer).
I hope the picture explains the task well. The objects are scanned from the sides to be stacked (one from above and the other from below). On both clouds the user manually selects two points. Then, as I hope, there should be a means in PCL to align the two clouds given the clouds and the coordinates of the points. The alignment is required only in the X-Y plane.
Unfortunately I can't find out which function I should use for this, partly because the PCL documentation is IMHO really humble, partly because of my lack of experience.
My desperate idea was to stack the clouds using P1 as the origin of both and then rotate the second cloud manually using the calculated angle between P11,P21 and P12,P22. This works, but since the task appears to me to be very common, I'd expect PCL to provide a dedicated function for it.
Could you point me to a proper API-function, code-snippet, example, similar project or a good book helping to understand PCL API and usage?
Many thanks!
I think this problem does not need PCL. It is simple enough to form the correct linear equation and solve it.
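For reference, a rough sketch of that direct calculation in plain C++ (I'm assuming P11/P12 are the points picked on the first cloud and P21/P22 the corresponding points on the second; the helper name is made up):

#include <cmath>

struct Pt2 { double x, y; };

// Rigid 2D transform (rotation + translation) that maps the segment P11->P12
// of the first cloud onto the segment P21->P22 of the second cloud.
// Apply it to every point of the first cloud to align the two in the X-Y plane.
void alignFromTwoPoints(Pt2 P11, Pt2 P12, Pt2 P21, Pt2 P22,
                        double& cosA, double& sinA, double& tx, double& ty)
{
    double ax = P12.x - P11.x, ay = P12.y - P11.y;   // picked segment in cloud 1
    double bx = P22.x - P21.x, by = P22.y - P21.y;   // picked segment in cloud 2
    double angle = std::atan2(by, bx) - std::atan2(ay, ax);
    cosA = std::cos(angle);
    sinA = std::sin(angle);
    // translation so that P11 lands exactly on P21 after the rotation
    tx = P21.x - (cosA * P11.x - sinA * P11.y);
    ty = P21.y - (sinA * P11.x + cosA * P11.y);
}
// A point (x, y) of the first cloud then maps to
// (cosA*x - sinA*y + tx, sinA*x + cosA*y + ty).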
If you want to use PCL without worrying about the maths too much (though, if the above is a mystery to you, then studying some computational geometry would be very useful), here is my suggestion.
Most PCL operations work on 3D point clouds. I understand from your question that you only have 2D point clouds OR you don't care about the 3rd dimension. In this case if I were you I would represent the points as a 3D point cloud and set the z dimension to zero.
You will only need two point clouds with 3 points each, as that is how many points you are feeding to the transformation estimation algorithm. The first 2 points in each cloud will be the points chosen by the user. The third one will be any point that you have chosen and that you know is the same in both clouds. You need this third one, otherwise the transform is still ambiguous if a general transform is being computed. You can, however, calculate such a point, since you know 2 points already and you know that all the points lie on a common plane (you have projected them by dropping the z values). Just don't choose it collinear with the other two points: for example, halfway between the two points and 2 cm in the perpendicular direction (making sure to go in the correct direction in both clouds).
Then you can use the estimateRigidTransformation functions to find the transform.
http://docs.pointclouds.org/1.7.0/classpcl_1_1registration_1_1_transformation_estimation_s_v_d.html
This function is also good for over-determined problems (it is the workhorse of the ICP algorithm in PCL) but as long as you have enough points to determine the transform it should work.
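To make that concrete, a minimal sketch of the call (the function and variable names are mine; it assumes the three corresponding points were filled into two small clouds in the same order, with z set to zero as described above):

#include <pcl/common/transforms.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/registration/transformation_estimation_svd.h>

// src_pts and tgt_pts hold the 3 corresponding points (z = 0) in the same order;
// full_src is the complete first cloud, full_src_aligned receives the result.
void alignClouds(const pcl::PointCloud<pcl::PointXYZ>& src_pts,
                 const pcl::PointCloud<pcl::PointXYZ>& tgt_pts,
                 const pcl::PointCloud<pcl::PointXYZ>& full_src,
                 pcl::PointCloud<pcl::PointXYZ>& full_src_aligned)
{
    pcl::registration::TransformationEstimationSVD<pcl::PointXYZ, pcl::PointXYZ> est;
    Eigen::Matrix4f transform;
    est.estimateRigidTransformation(src_pts, tgt_pts, transform);

    // Apply the estimated rigid transform to the whole first cloud.
    pcl::transformPointCloud(full_src, full_src_aligned, transform);
}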

Map 3D point cloud onto surface then flatten

Mapping a point cloud onto a 3D "fabric" then flattening.
So I have a scientific dataset consisting of a 3D point cloud; the points lie on a curved surface. In order to perform quantitative analysis, however, I need to map the point cloud onto a surface I can then flatten. I thought about using mapping tools, sort of like in the case of the 3D world being flattened onto a map, but I'm not sure how to even begin, as I have no experience in cartography and maybe I'm trying to solve an easy problem with the wrong tools.
Just to briefly describe the dataset: imagine entirely transparent curtains on the window with small dots on them. If I could use that dot pattern to fit the material the dots are on, I could then "straighten" it and do meaningful analysis on the spread of the dots. I'm guessing the procedure would be to first manually fit the "sheet" onto the point cloud data using contours or something along those lines, then flatten the sheet, thus putting the points into a 2D array. Ultimately I'll probably also reduce that to 1D, but I assume I need the intermediate 2D step, as the length of the 2nd dimension is variable (i.e. one end of the sheet is shorter than the other but still corresponds to the same position in terms of contours). I'm using Matlab and Amira, though I'm always happy to learn new tools!
Any advice or hints how to approach are much appreciated!
You can use a space-filling curve to reduce the 3D complexity to 1D. I use a Hilbert curve to index lat-lng pairs on a 2D map. You can do the same with 3D space, but it's easier to start with a simpler curve, for example a Z-order (Morton) curve. Space-filling curves are often used in mapping applications. A space-filling curve also adds some proximity information and a new sort order to the 3D points.
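For illustration, a minimal 3D Z-order (Morton) encoder in C++ (the 21-bit quantisation, the cell-size idea and the function names are my own choices; a Hilbert curve preserves locality better but is more work to implement):

#include <cstdint>

// Spread the lower 21 bits of v so there are two zero bits between each bit
// (the standard 64-bit "split by 3" used for 3D Morton codes).
uint64_t expandBits(uint64_t v) {
    v &= 0x1FFFFFull;
    v = (v | (v << 32)) & 0x1F00000000FFFFull;
    v = (v | (v << 16)) & 0x1F0000FF0000FFull;
    v = (v | (v << 8))  & 0x100F00F00F00F00Full;
    v = (v | (v << 4))  & 0x10C30C30C30C30C3ull;
    v = (v | (v << 2))  & 0x1249249249249249ull;
    return v;
}

// Interleave quantised x, y, z (each in [0, 2^21)) into one 63-bit Z-order key.
// Quantise beforehand, e.g. xi = (uint32_t)((x - min_x) / cell_size).
uint64_t morton3D(uint32_t x, uint32_t y, uint32_t z) {
    return (expandBits(z) << 2) | (expandBits(y) << 1) | expandBits(x);
}

Sorting the points by this key gives the 1D ordering, with nearby keys usually corresponding to nearby points.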
You can try to build a surface that approximates your dataset, then unfold that surface together with the points you want. Solid3dtech.com has a tool to unfold surfaces along with curves or points.

Point cloud: project city square to ground plane

I have a city square with people, cars, trees and buildings in PCL format. I need to automatically determine the ground plane and project these objects onto that ground plane to get a 2D map of occupied places.
Any idea?
I think the best thing to do here would be to familiarise yourself with the following two PCL tutorials:
http://pointclouds.org/documentation/tutorials/planar_segmentation.php
http://pointclouds.org/documentation/tutorials/project_inliers.php
The first tutorial makes use of the RANSAC algorithm to find a dominant plane in a scene. I use it to find tables and floors in robotics scenarios. You would use it to find your dominant ground plane.
The second tutorial shows how to project points directly onto a plane. This is what you would use to make your 3D point cloud into a 2D one. Note that, despite the "inlier" keyword, you can pass your whole point cloud to be projected onto the plane.
Actually, if you are after "occupied" places, you might want to project all of the points that aren't in the ground plane (i.e. the outliers) and that are above it. You can use a PCL filter such as PlaneClipper3D for this, or just take the complement of the inliers from the plane-segmentation operation.
If the plane that you end up with (containing all your projected points) is not in the coordinate frame you want, you may wish to rotate the whole lot, for example, to align with the coordinate axes so that all z-coordinates are zero. See pcl::transformPointCloud for this (the transform will be obtainable from the plane coefficients returned from the plane segmentation).
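A minimal sketch stitching the two tutorials together (C++; the function name projectObjectsToGround and the assumption that you already ran the plane segmentation are mine):

#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/filters/project_inliers.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/sample_consensus/model_types.h>

// `inliers` and `coefficients` are the outputs of the SACMODEL_PLANE
// segmentation from the first tutorial; `map2d` receives the flattened points.
void projectObjectsToGround(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,
                            const pcl::PointIndices::Ptr& inliers,
                            const pcl::ModelCoefficients::Ptr& coefficients,
                            pcl::PointCloud<pcl::PointXYZ>::Ptr& map2d)
{
    map2d.reset(new pcl::PointCloud<pcl::PointXYZ>);

    // Keep everything that is NOT on the ground plane (people, cars, trees...).
    pcl::PointCloud<pcl::PointXYZ>::Ptr objects(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::ExtractIndices<pcl::PointXYZ> extract;
    extract.setInputCloud(cloud);
    extract.setIndices(inliers);
    extract.setNegative(true);
    extract.filter(*objects);

    // Flatten the remaining points onto the ground plane to get the 2D map.
    pcl::ProjectInliers<pcl::PointXYZ> proj;
    proj.setModelType(pcl::SACMODEL_PLANE);
    proj.setInputCloud(objects);
    proj.setModelCoefficients(coefficients);
    proj.filter(*map2d);
}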
I hope this is helpful and not at too basic a level, though the question was rather general so I suppose it should be okay.

Rendering 3D surfaces

I've got data representing 3D surfaces (i.e. earthquake fault planes) in xyz point format. I'd like to create a 3D representation of these surfaces. I've had some success using rgl and akima; however, they can't really handle geometry that may fold back on itself or have multiple z values at the same x,y point. Alternatively, using geometry (the convhulln function from qhull) I can create convex hulls that show up nicely in rgl, but these are closed surfaces, whereas in reality the objects are open (they don't completely enclose the point set). Is there a way to create these surfaces and render them, preferably in rgl?
EDIT
To clarify, the points are in a point cloud that defines the surface. They have varying density of coverage across the surface. However, the main issue is that the surface is one-sided, not closed, and I don't know how to generate a mesh/surface that isn't closed for more complex geometry.
As an example...
require(rgl)
require(akima)
faultdata <- cbind(c(1,1,1,2,2,2), c(1,1,1,2,2,2), c(10,20,-10,10,20,-10))
x <- faultdata[,1]
y <- faultdata[,2]
z <- faultdata[,3]
# interpolate y as a function of (x, z), since the fault plane is near-vertical
s <- interp(x, z, y, duplicate="strip")
surface3d(s$x, s$y, s$z, col="red", add=TRUE)
This creates generally what I want. However, for planes that are more complex this doesn't necessarily work. e.g. where the data are:
faultdata<-cbind(c(2,2,2,2,2,2),c(1,1,1,2,2,2),c(10,20,-10,10,20,-10))
I can't use this approach because the points are all vertically co-planar. I also can't use convhulln because of the same issue, and in general I don't want a closed hull, I want a surface. I looked at alphashape3d and it looks promising, but I'm not sure how to go about using it for this problem.
How do you determine how the points are connected together as a surface? By distance? That can be one way, and the alphashape3d package might be of use. Otherwise, if you know exactly how they are to be connected, then you can visualize it directly with rgl structures.

3D mesh to particle cloud conversion

I need to convert arbitrary triangulated 3D mesh to cloud of particles that are uniformly spaced.
My first thought was to find a way to fill one 3D triangle with particles, then fill each triangle of the mesh and remove duplicated particles on the edges, but that's just hard and a lot of work. I was hoping for a more mathematical approach.
Can anyone point me to an algorithm which can help me do my task correctly... well, at least approximately?
Thanks
There are two main options:
Voxelization of the mesh. It's easy to implement the conversion of a mesh to voxels, but it's inaccurate, since uniform spacing cannot be achieved: the distance between cubes can be x, x*sqrt(2) or x*sqrt(3), depending on whether neighbouring cubes are in the same plane and adjacent.
Poisson disk sampling on the surface. Harder to implement, with little research material and code available, but mathematically very sound. Some links (a rough code sketch of a simpler approximation follows after the links):
http://research.microsoft.com/apps/pubs/default.aspx?id=135760
http://web.mysites.ntu.edu.sg/cwfu/public/Shared%20Documents/dualtiling/index.html
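If an approximation is good enough, a middle ground is area-weighted random sampling of the triangles followed by voxel thinning; a rough C++ sketch (all names and the oversampling factor are my own, and this is not a true Poisson disk sampler):

#include <algorithm>
#include <array>
#include <cmath>
#include <cstdint>
#include <random>
#include <unordered_set>
#include <vector>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static double norm(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

// Scatter candidate points on every triangle (count proportional to its area,
// positions from random barycentric coordinates), then keep at most one point
// per cubic cell of edge `spacing`. The result is approximately evenly spaced.
std::vector<Vec3> meshToParticles(const std::vector<std::array<Vec3, 3>>& tris,
                                  double spacing) {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    std::unordered_set<uint64_t> occupied;   // voxel hash, 21 bits per axis
    std::vector<Vec3> particles;
    const double candidatesPerArea = 4.0 / (spacing * spacing);  // oversampling

    auto voxelKey = [&](const Vec3& p) {
        auto q = [&](double v) { return (uint64_t)(int64_t)std::floor(v / spacing) & 0x1FFFFF; };
        return (q(p.x) << 42) | (q(p.y) << 21) | q(p.z);
    };

    for (const auto& t : tris) {
        double area = 0.5 * norm(cross(sub(t[1], t[0]), sub(t[2], t[0])));
        int n = std::max(1, (int)std::round(area * candidatesPerArea));
        for (int i = 0; i < n; ++i) {
            double u = uni(rng), v = uni(rng);
            if (u + v > 1.0) { u = 1.0 - u; v = 1.0 - v; }   // fold into the triangle
            Vec3 p{t[0].x + u * (t[1].x - t[0].x) + v * (t[2].x - t[0].x),
                   t[0].y + u * (t[1].y - t[0].y) + v * (t[2].y - t[0].y),
                   t[0].z + u * (t[1].z - t[0].z) + v * (t[2].z - t[0].z)};
            if (occupied.insert(voxelKey(p)).second) particles.push_back(p);
        }
    }
    return particles;
}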
You could convert the TIN to a raster using a GIS package or software such as R, then retrieve one point at the center of each pixel representing the value. (Example in ArcGIS)
EDIT: If the irregular 3D mesh has multiple heights per {x, y}, a similar approach would be to sample the mesh using a voxel "grid" and keep one value per voxel. GRASS GIS has functionality to take the vertices of the TIN (3D mesh), convert them to voxels, and then convert back to a regular 3D point cloud.
