How to use PCL to discretize a 3D point cloud into a 2D grid over the xy plane, without using VoxelGrid? - point-cloud-library

I want to project the 3D point cloud onto a 2D grid over the xy plane, where each grid cell is 20cm*20cm. How can I achieve this effectively?
I do NOT want to use the VoxelGrid method, because I want to retain every point and process them in the next step (apply a Gaussian kernel to every column and use EM on each grid cell).

As discussed in the comments, you can achieve what you want with the OctreePointCloudPointVector class.
Here is an example of how to use the class:
#include <pcl/point_cloud.h>
#include <pcl/io/pcd_io.h>
#include <pcl/octree/octree_pointcloud_pointvector.h>
#include <string>
#include <vector>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;
using CloudPtr = Cloud::Ptr;
using OctreeT = pcl::octree::OctreePointCloudPointVector<pcl::PointXYZ>;

int main(int argc, char** argv)
{
    if(argc < 2)
        return 1;

    // load cloud
    CloudPtr cloud(new Cloud);
    pcl::io::loadPCDFile(argv[1], *cloud);
    CloudPtr cloud_projected(new Cloud(*cloud));

    // project to XY plane
    for(auto& pt : *cloud_projected)
        pt.z = 0.0f;

    // create octree, set resolution to 20cm
    OctreeT octree(0.2);
    octree.setInputCloud(cloud_projected);
    octree.addPointsFromInputCloud();

    // we are going to store the point indices of the octree leaves here
    std::vector<std::vector<int>> indices_vec;
    indices_vec.reserve(octree.getLeafCount());

    // traverse the octree leaves and store the indices of each leaf
    const auto it_end = octree.leaf_depth_end();
    for(auto it = octree.leaf_depth_begin(); it != it_end; ++it)
    {
        auto& leaf = it.getLeafContainer();
        std::vector<int> indices;
        leaf.getPointIndices(indices);
        indices_vec.push_back(indices);
    }

    // save each leaf (grid cell) to its own file
    int cnt = 0;
    for(const auto& indices : indices_vec)
    {
        Cloud leaf(*cloud, indices);
        pcl::io::savePCDFileBinary("leaf_" + std::to_string(cnt++) + ".pcd", leaf);
    }
}
You can see the output by calling pcl_viewer:
pcl_viewer leaf_*.pcd
See sample output
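If you would rather not bring in the octree at all, the same 2D binning can be done directly with a hash map keyed by the integer cell coordinates, which also keeps every point. This is only a sketch of that alternative, not part of the answer above; the function name and the 0.2 m cell size are assumptions:
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Bin the indices of all points into 20cm x 20cm cells over the xy plane.
std::unordered_map<std::uint64_t, std::vector<int>>
binToGrid(const pcl::PointCloud<pcl::PointXYZ>& cloud, float cell_size = 0.2f)
{
    std::unordered_map<std::uint64_t, std::vector<int>> grid;
    for (int i = 0; i < static_cast<int>(cloud.size()); ++i)
    {
        const auto ix = static_cast<std::int32_t>(std::floor(cloud[i].x / cell_size));
        const auto iy = static_cast<std::int32_t>(std::floor(cloud[i].y / cell_size));
        // pack the two signed 32-bit cell coordinates into a single 64-bit key
        const std::uint64_t key =
            (static_cast<std::uint64_t>(static_cast<std::uint32_t>(ix)) << 32) |
            static_cast<std::uint32_t>(iy);
        grid[key].push_back(i);  // every point index is kept, nothing is downsampled
    }
    return grid;
}
Each value of the map is then exactly one "column" of points that you can feed to your Gaussian kernel / EM step.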

You can achieve this using https://github.com/daavoo/pyntcloud with the following code:
from pyntcloud import PyntCloud
cloud = PyntCloud.from_file("some_cloud.ply")
# 0.2 assuming your point cloud units are meters
voxelgrid_id = cloud.add_structure("voxelgrid", size_x=0.2, size_y=0.2)
voxelgrid = cloud.structures[voxelgrid_id]
You can learn more about VoxelGrid here:
https://github.com/daavoo/pyntcloud/blob/master/examples/%5Bstructures%5D%20VoxelGrid.ipynb

What do you mean by a 2D grid over the xy plane? Do you still want the z value to keep its original value, or do you want to project the point cloud onto the XY plane first?
Keep Z value
If you want to keep the Z values, just set the Z leaf size of VoxelGrid to infinity (or a very large number).
pcl::VoxelGrid<pcl::PCLPointCloud2> sor;
sor.setInputCloud (cloud);
sor.setLeafSize (0.01f, 0.01f, 100000.0f);
sor.filter (*cloud_filtered);
Project Cloud to XY plane first
Projecting a cloud onto the XY plane is nothing more than setting the Z value of each point to 0.
for(auto& pt : cloud)
pt.z = 0.0f;
Now you can do normal VoxelGrid on the projected point cloud.
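For completeness, a minimal sketch of that second route, assuming pcl::PointXYZ clouds named cloud_projected (with z already set to 0) and cloud_filtered, and the 20 cm cell from the question:
#include <pcl/filters/voxel_grid.h>

pcl::VoxelGrid<pcl::PointXYZ> grid;
grid.setInputCloud(cloud_projected);   // z is already 0 for every point
grid.setLeafSize(0.2f, 0.2f, 1.0f);    // the z leaf size no longer matters
grid.filter(*cloud_filtered);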

Related

QWT moving canvas

I'm using QWT library for my widget, there are some curves on the canvas, like this:
void Plot::addCurve1( double x, double y, const char *CurveName,
const char *CurveColor,const char *CurveType )
{
...
*points1 << QPointF(x, y);
curve1->setSamples( *points1 );
curve1->attach( this );
...
}
So all my curves share the same coordinate system. I'm trying to build a navigation interface, so I can enter a step into a TextEdit (for example) and move by that step, or jump to the end/start of my defined curve.
I've found a method in the QwtPlotPanner class that gives me that ability:
double QWT_widget::move_XLeft()
{
//getting step from TextEdit
QString xValStr = _XNavDiscrepancies->toPlainText();
double xVal = xValStr.toDouble();
// moveCanvas(int dx, int dy) - the method of QwtPlotPanner
plot->panner->moveCanvas(xVal,0);
x_storage = x_storage - xVal;
return x_storage;
}
It works OK, but the displacement is in pixels, and I need to tie it to my defined curve and its coordinate system.
The Qwt User's Guide says:
Adjust the enabled axes according to dx/dy
Parameters
dx Pixel offset in x direction
dy Pixel offset in y direction
And this is the only information I've found. How can I convert a step in pixels into a step in my coordinate system? I need to go to the end of my curve, so should I take the last QPointF(x,y) of my curve and convert it to a pixel step? Or maybe I'm using the wrong class/method?
Thank you very much :)
Thanks to @Pavel Gridin:
(https://ru.stackoverflow.com/a/876184/251026)
"For conversion from pixels to coordinates and back there are two
methods: QwtPlot::transform and QwtPlot::invTransform"
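As a rough sketch of how those two methods could be plugged into the function above (the choice of the xBottom axis is an assumption, not part of the linked answer), convert the step from plot coordinates into pixels before calling moveCanvas:
// convert a step given in plot coordinates into a pixel offset for moveCanvas()
double QWT_widget::move_XLeft()
{
    QString xValStr = _XNavDiscrepancies->toPlainText();
    double xVal = xValStr.toDouble();
    // QwtPlot::transform maps a plot coordinate to a paint-device (pixel) position
    double pixelStep = plot->transform(QwtPlot::xBottom, xVal)
                     - plot->transform(QwtPlot::xBottom, 0.0);
    plot->panner->moveCanvas(static_cast<int>(pixelStep), 0);
    x_storage = x_storage - xVal;
    return x_storage;
}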

Occlusion estimation in pointcloud using pcl voxelgridOcclusionEstimation

I need to find out which points of a point cloud are visible from an RGBD sensor located at the origin (0,0,0). I tried to use the voxelgridOcclusionEstimation class of pcl to determine the region of the cloud visible to the sensor. It uses a ray-tracing technique.
As an experiment, I tried to get the visible region of a sphere whose center satisfies one of the following:
center is along x
center is along y
center is along z
center is along xz plane
center is along y z plane
center is along x y plane.
The sensor is at the origin with zero rotation in all cases.
voxelgridOcclusionEstimation yields weird results. The green region denotes the visible region, while the red represents the occluded region.
My code is:
int main(int argc, char * argv[])
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_in(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_occluded(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_visible(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::io::loadPCDFile(argv[1], *cloud_in);
    Eigen::Quaternionf quat(1,0,0,0);
    cloud_in->sensor_origin_ = Eigen::Vector4f(0,0,0,0);
    cloud_in->sensor_orientation_ = quat;
    pcl::VoxelGridOcclusionEstimation<pcl::PointXYZ> voxelFilter;
    voxelFilter.setInputCloud(cloud_in);
    float leaf_size = atof(argv[2]);
    voxelFilter.setLeafSize(leaf_size, leaf_size, leaf_size);
    voxelFilter.initializeVoxelGrid();
    std::vector<Eigen::Vector3i, Eigen::aligned_allocator<Eigen::Vector3i> > occluded_voxels;
    for (size_t i = 0; i < cloud_in->size(); i++)
    {
        PointT pt = cloud_in->points[i];
        Eigen::Vector3i grid_cordinates = voxelFilter.getGridCoordinates(pt.x, pt.y, pt.z);
        int grid_state;
        int ret = voxelFilter.occlusionEstimation(grid_state, grid_cordinates);
        if (grid_state == 1)
        {
            cloud_occluded->push_back(cloud_in->points[i]);
        }
        else
        {
            cloud_visible->push_back(cloud_in->points[i]);
        }
    }
    pcl::io::savePCDFile(argv[3], *cloud_occluded);
    pcl::io::savePCDFile(argv[4], *cloud_visible);
    return 0;
}
Your code seems to work except for the typo and missing point type definitions. Try with a different point cloud for better visual analysis.
Edit: On the other hand, this seems to behave oddly with, for example, the milk carton cloud from here: http://pointclouds.org/documentation/tutorials/supervoxel_clustering.php#supervoxel-clustering.
The voxelgridOcclusionEstimation class works, but the grid width is very important. If it is made very small, there will be unoccupied voxels in the foreground, which will let the cast rays pass through to the background. If it is set very large, the surface will not be represented correctly. This is more difficult if the model does not have uniform point density, as is the case with data captured by RGBD sensors.
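As a rough, hedged sketch (not part of the answer above): one way to pick a leaf size for non-uniform clouds is to derive it from the average nearest-neighbour distance, so that neighbouring surface points land in adjacent, occupied voxels. All names here are hypothetical:
#include <pcl/point_types.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <cmath>
#include <vector>

// Estimate a leaf size as a multiple of the mean nearest-neighbour distance.
float estimateLeafSize(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud, float factor = 2.0f)
{
    pcl::KdTreeFLANN<pcl::PointXYZ> tree;
    tree.setInputCloud(cloud);
    std::vector<int> idx(2);
    std::vector<float> sq_dist(2);
    double sum = 0.0;
    for (const auto& pt : cloud->points)
    {
        tree.nearestKSearch(pt, 2, idx, sq_dist);  // index 0 is the query point itself
        sum += std::sqrt(sq_dist[1]);
    }
    return factor * static_cast<float>(sum / cloud->size());
}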

Perspective Projection effect correction

I was trying to plot 8 points in 3D space, taken from the 8 vertices of the 3D shape shown above.
I used the following code:
#include "Coordinates2d.h"
#include "Point3d.h"
const double zoom = 500;
int main()
{
Coordinates2d::ShowWindow("3D Primitives!");
std::vector<Point3d> points;
points.push_back(Point3d(0,0,20));
points.push_back(Point3d(0,100,20));
points.push_back(Point3d(120,100,20));
points.push_back(Point3d(120,0,20));
points.push_back(Point3d(0,0,120));
points.push_back(Point3d(0,100,120));
points.push_back(Point3d(120,100,120));
points.push_back(Point3d(120,0,120));
for(int i=0 ; i<points.size() ; i++)
{
Coordinates2d::Draw(points[i], zoom);
}
Coordinates2d::Wait();
}
Where Point3d is defined as follows:
#ifndef _POINT_3D_
#define _POINT_3D_

#include "graphics.h"
#include "Matrix.h"
#include "Point2d.h"
#include <cmath>
#include <iostream>

struct Point3d
{
    double x;
    double y;
    double z;

public:
    Point3d();
    Point3d(double x, double y, double z);
    Point3d(Point3d const & point);
    Point3d & operator=(Point3d const & point);
    Point3d & operator+(int scalar);
    bool operator==(Point3d const & point);
    bool operator!=(Point3d const & point);

    Point3d Round()
    {
        return Point3d(floor(this->x + 0.5), floor(this->y + 0.5), floor(this->z + 0.5));
    }

    void Show()
    {
        std::cout << "(" << x << ", " << y << ", " << z << ")";
    }

    bool IsValid();
    double Distance(Point3d & point);
    void SetMatrix(const Matrix & mat);
    Matrix GetMatrix() const;

    Point2d ConvertTo2d(double zoom)
    {
        return Point2d(x*zoom/(zoom-z), y*zoom/(zoom-z));
    }
};
#endif
#ifndef _COORDINATES_2D_
#define _COORDINATES_2D_

#include "graphics.h"
#include "Point2d.h"
#include "Point3d.h"
#include "Line3d.h"

class Coordinates2d
{
private:
    static Point2d origin;
public:
    static void Wait();
    static void ShowWindow(char str[]);
private:
    static void Draw(Point2d & pt);
public:
    static void Draw(Point3d & pt, double zoom)
    {
        Coordinates2d::Draw(pt.ConvertTo2d(zoom));
    }
};
#endif
I was expecting the output to be the following:
But the output became like the following:
I am actually interested in moving my viewing camera.
How can I achieve my desired result?
I see from the comments that you achieved your desired result with a clever formula. If you're interested in doing it the 'standard' graphics way, using matrices, I hope this post will help you.
I found an excellent page explaining projection matrices for OpenGL, which also extends to the general mathematics of projection.
If you want to go in depth, it is a very well written article: it explains its steps in detail and is overall highly recommendable.
The below image shows the first part of what you're trying to do.
So the image on the left is the 'viewing volume' that you want your camera to see. You can see that in this case, the Center of Projection (basically the focal point of the camera) is at the origin.
But wait, you say, I don't WANT the center of projection to be at the origin! I know, we'll cover that later.
What we're doing here is taking the strangely shaped volume on the left and converting it to what we call 'normalized coordinates' on the right. So we're mapping our viewing volume onto the range of -1 to 1 in each direction. Basically, we mathematically stretch the irregularly shaped viewing volume into this 2x2x2 cube centered at the origin.
This operation is accomplished through the following matrix, again, from the excellent article I linked above.
So note you have six variables.
t = top
b = bottom
l = left
r = right
n = near
f = far
Those six variables define your viewing volume. Far is not labeled in the above image, but it is the distance of the furthest plane from the origin.
The above image shows the projection matrix that puts our viewing volume into normalized coordinates. Once coordinates are in this form, you can flatten them by simply ignoring the z coordinate, which is similar to some of the work you have done (nice work!).
So we're all set with that for viewing things from the origin. But let's say we don't want to view from the origin, and would prefer to view from, say, somewhere behind and to the side.
Well, we can do that! But instead of moving our viewing area (we have the math all nicely worked out right here), it is, perhaps counterintuitively, easier to move all the points we are trying to view.
This can be done by multiplying all of the points by a translation matrix.
Here is the wikipedia page for translation, from which I took the following matrix.
Vx, Vy, and Vz are the amount we want to move things in the x, y, and z directions. Keep in mind, if we want to move the camera in the positive x direction, we need a negative Vx, and vice versa. This is because we are moving the points instead of the camera. Feel free to try it and see, if you want.
You may also have noticed that both of the matrices I showed are 4x4, and your coordinates are 3x1. This is because the matrices are meant to be used with homogeneous coordinates. These seem strange because they use 4 variables to represent a 3D point, but it's just x, y, z, and w, where you set w = 1 for your points. I believe this variable is used for depth buffers, among other things, but it is basically ubiquitously present in graphics' matrix math, so you'll want to get used to using it.
Now that you have these matrices, you can apply the translation one to your points, then apply the perspective one to those points you got out. Then simply ignore the z components, and there you are! You have a 2D image from -1 to 1 in the x and y directions.
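Since the matrix images referenced above may not survive here, a small code sketch of both matrices may help. This is only an illustration of the standard OpenGL-style frustum and translation matrices described in the linked article, with hypothetical helper names; it is not taken from the asker's code:
#include <array>

using Mat4 = std::array<std::array<double, 4>, 4>;  // row-major 4x4 matrix

// Projection matrix for the viewing volume (l, r, b, t, n, f) -- the glFrustum matrix.
Mat4 frustum(double l, double r, double b, double t, double n, double f)
{
    Mat4 m = {};
    m[0] = { 2*n/(r-l), 0.0,        (r+l)/(r-l),  0.0 };
    m[1] = { 0.0,       2*n/(t-b),  (t+b)/(t-b),  0.0 };
    m[2] = { 0.0,       0.0,       -(f+n)/(f-n), -2*f*n/(f-n) };
    m[3] = { 0.0,       0.0,       -1.0,          0.0 };
    return m;
}

// Translation by (vx, vy, vz); to "move the camera" by +x, translate the points by -x.
Mat4 translation(double vx, double vy, double vz)
{
    Mat4 m = {};
    m[0] = { 1.0, 0.0, 0.0, vx };
    m[1] = { 0.0, 1.0, 0.0, vy };
    m[2] = { 0.0, 0.0, 1.0, vz };
    m[3] = { 0.0, 0.0, 0.0, 1.0 };
    return m;
}

// Apply a 4x4 matrix to a homogeneous point (x, y, z, w).
std::array<double, 4> apply(const Mat4& m, const std::array<double, 4>& p)
{
    std::array<double, 4> out = {};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            out[i] += m[i][j] * p[j];
    return out;
}
After multiplying a point (x, y, z, 1) by the translation and then the frustum matrix, divide x and y by the resulting w (the perspective divide) and ignore z to get the normalized 2D coordinates described above.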

How do I take a 2D point, and project it into a 3D Vector by a perspective camera

I have a 2D point (x,y) and I want to project it to a vector so that I can perform a ray trace to check whether the user clicked on a 3D object. I have written all the other code, except that when I got back to my function that builds the vector from the xy coordinates of the mouse, I realized I was not accounting for field of view, and I don't want to guess what the factor would be, as 'voodoo' fixes are not a good idea for a library. Any math-magicians want to help? :-)
Here's my current code, which needs the camera's FOV applied:
sf::Vector3<float> Camera::Get3DVector(int Posx, int Posy, sf::Vector2<int> ScreenSize){
    // not using a "wide lens", and will maintain the aspect ratio of the viewport
    int window_x = Posx - ScreenSize.x/2;
    int window_y = (ScreenSize.y - Posy) - ScreenSize.y/2;
    float Ray_x = float(window_x)/float(ScreenSize.x/2);
    float Ray_y = float(window_y)/float(ScreenSize.y/2);
    sf::Vector3<float> Vector(Ray_x, Ray_y, -_zNear);
    // to global cords
    return MultiplyByMatrix((Vector/LengthOfVector(Vector)), _XMatrix, _YMatrix, _ZMatrix);
}
You're not too far off. One thing is to make sure your mouse coordinates are in -1 to 1 space (not 0 to 1).
Then you create 2 vectors:
Vector3 orig = Vector3(mouse.X,mouse.Y,0.0f);
Vector3 far = Vector3(mouse.X,mouse.Y,1.0f);
You also need to use the inverse of your perspective transform (or of the view-projection if you want world space):
Matrix ivp = Matrix::Invert(Projection)
Then you do:
Vector3 rayorigin = Vector3::TransformCoordinate(orig,ivp);
Vector3 rayfar = Vector3::TransformCoordinate(far,ivp);
If you want a ray, you also need direction, which is simply:
Vector3 raydir = Normalize(rayfar-rayorigin);
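If you happen to be using GLM, the same unprojection can be written with glm::unProject, which composes the inverse view-projection for you. This is just a hedged sketch with hypothetical names; the original answer does not use GLM:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::unProject

// Build a world-space ray direction from window-space mouse coordinates.
// viewport is (x, y, width, height); mouseY is assumed to have its origin at the top.
glm::vec3 mouseRayDirection(float mouseX, float mouseY,
                            const glm::mat4& view, const glm::mat4& proj,
                            const glm::vec4& viewport)
{
    const float winY = viewport.w - mouseY;  // flip to the bottom-left origin GLM expects
    const glm::vec3 nearPoint = glm::unProject(glm::vec3(mouseX, winY, 0.0f), view, proj, viewport);
    const glm::vec3 farPoint  = glm::unProject(glm::vec3(mouseX, winY, 1.0f), view, proj, viewport);
    return glm::normalize(farPoint - nearPoint);
}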

CSG operations on implicit surfaces with marching cubes

I render isosurfaces with marching cubes (or perhaps marching squares, as this is 2D) and I want to do set operations like set difference, intersection and union. I thought this would be easy to implement, by simply choosing between two vertex scalars from two different implicit surfaces, but it is not.
For my initial testing, I tried with two circles and the set operation difference, i.e. A - B. One circle is moving and the other one is stationary. Here's the approach I tried when picking vertex scalars and when classifying corner vertices as inside or outside. The code is written in C++. OpenGL is used for rendering, but that's not important. Normal rendering without any CSG operations gives the expected result.
void march(const vec2& cmin, //min x and y for the grid cell
           const vec2& cmax, //max x and y for the grid cell
           std::vector<vec2>& tri,
           float iso,
           float (*cmp1)(const vec2&), //distance from stationary circle
           float (*cmp2)(const vec2&)  //distance from moving circle
          )
{
    unsigned int squareindex = 0;
    float scalar[4];
    vec2 verts[8];
    /* initial setup of the grid cell */
    verts[0] = vec2(cmax.x, cmax.y);
    verts[2] = vec2(cmin.x, cmax.y);
    verts[4] = vec2(cmin.x, cmin.y);
    verts[6] = vec2(cmax.x, cmin.y);
    float s1, s2;
    /**********************************
    ********For-loop of interest******
    *******Set difference between ****
    *******two implicit surfaces******
    **********************************/
    for(int i=0, j=0; i<4; ++i, j+=2){
        s1 = cmp1(verts[j]);
        s2 = cmp2(verts[j]);
        if((s1 < iso)){          //if inside circle1
            if((s2 < iso)){      //if inside circle2
                scalar[i] = s2;  //then set the scalar to the moving circle
            } else {
                scalar[i] = s1;         //only inside circle1
                squareindex |= (1<<i);  //mark as inside
            }
        }
        else {
            scalar[i] = s1; //inside neither circle
        }
    }
    if(squareindex == 0)
        return;
    /* Usual interpolation between edge points to compute
       the new intersection points */
    verts[1] = mix(iso, verts[0], verts[2], scalar[0], scalar[1]);
    verts[3] = mix(iso, verts[2], verts[4], scalar[1], scalar[2]);
    verts[5] = mix(iso, verts[4], verts[6], scalar[2], scalar[3]);
    verts[7] = mix(iso, verts[6], verts[0], scalar[3], scalar[0]);
    for(int i=0; i<10; ++i){ //10 = maximum 3 triangles, + one end token
        int index = triTable[squareindex][i]; //look up our indices for triangulation
        if(index == -1)
            break;
        tri.push_back(verts[index]);
    }
}
This gives me weird jaggies:
(source: mechcore.net)
It looks like the CSG operation is done without interpolation. It just "discards" the whole triangle. Do I need to interpolate in some other way, or combine the vertex scalar values? I'd love some help with this.
A full testcase can be downloaded HERE
EDIT: Basically, my implementation of marching squares works fine. It is my scalar field which is broken, and I wonder what the correct way would look like. Preferably I'm looking for a general approach to implement the three set operations I discussed above, for the usual primitives (circle, rectangle/square, plane)
EDIT 2: Here are some new images after implementing the answerer's whitepaper:
1.Difference
2.Intersection
3.Union
EDIT 3: I implemented this in 3D too, with proper shading/lighting:
1.Difference between a greater sphere and a smaller sphere
2.Difference between a greater sphere and a smaller sphere in the center, clipped by two planes on both sides, and then union with a sphere in the center.
3.Union between two cylinders.
This is not how you mix the scalar fields. Your scalars say one thing, but your inside/outside flags say another. First merge the fields, then render as if you were marching a single compound object:
for(int i=0, j=0; i<4; ++i, j+=2){
    s1 = cmp1(verts[j]);
    s2 = cmp2(verts[j]);
    float s = max(s1, iso - s2); // This is the secret sauce
    if(s < iso) { // inside circle1, but not inside circle2
        squareindex |= (1<<i);
    }
    scalar[i] = s;
}
This article might be helpful: Combining CSG modeling with soft blending using Lipschitz-based implicit surfaces.
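For the other two set operations the question asks about, the usual signed-distance-field identities apply. This is a general sketch, not taken from the answer above, and it assumes the plain signed-distance convention where a point is inside a shape when its field value is negative (iso = 0):
#include <algorithm>

// Standard CSG combinations of two signed distance fields s1 (shape A) and s2 (shape B).
float csg_union(float s1, float s2)        { return std::min(s1, s2); }
float csg_intersection(float s1, float s2) { return std::max(s1, s2); }
float csg_difference(float s1, float s2)   { return std::max(s1, -s2); } // A minus B
The merged field is then marched exactly like a single object, as in the loop above.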
