I need to find out which points of a point cloud are visible from an RGBD sensor located at the origin (0,0,0). I tried to use the VoxelGridOcclusionEstimation class of PCL to determine the visible region of the cloud as seen by the sensor. It uses a ray tracing technique.
As an experiment, I tried to get the visible region of a sphere whose center satisfies one of the following:
center is along the x axis
center is along the y axis
center is along the z axis
center is in the xz plane
center is in the yz plane
center is in the xy plane
The sensor is at the origin with zero rotation in all cases.
VoxelGridOcclusionEstimation yields weird results. The green region denotes the visible region, while the red represents the occluded region.
My code is:
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <pcl/filters/voxel_grid_occlusion_estimation.h>
#include <cstdlib>

typedef pcl::PointXYZ PointT;

int main(int argc, char * argv[])
{
    pcl::PointCloud<PointT>::Ptr cloud_in(new pcl::PointCloud<PointT>);
    pcl::PointCloud<PointT>::Ptr cloud_occluded(new pcl::PointCloud<PointT>);
    pcl::PointCloud<PointT>::Ptr cloud_visible(new pcl::PointCloud<PointT>);
    pcl::io::loadPCDFile(argv[1], *cloud_in);

    // sensor at the origin, identity orientation
    Eigen::Quaternionf quat(1, 0, 0, 0);
    cloud_in->sensor_origin_ = Eigen::Vector4f(0, 0, 0, 0);
    cloud_in->sensor_orientation_ = quat;

    pcl::VoxelGridOcclusionEstimation<PointT> voxelFilter;
    voxelFilter.setInputCloud(cloud_in);
    float leaf_size = atof(argv[2]);
    voxelFilter.setLeafSize(leaf_size, leaf_size, leaf_size);
    voxelFilter.initializeVoxelGrid();

    for (size_t i = 0; i < cloud_in->size(); i++)
    {
        PointT pt = cloud_in->points[i];
        Eigen::Vector3i grid_coordinates = voxelFilter.getGridCoordinates(pt.x, pt.y, pt.z);
        int grid_state;
        voxelFilter.occlusionEstimation(grid_state, grid_coordinates);
        if (grid_state == 1) // 1 = occluded, 0 = visible
        {
            cloud_occluded->push_back(cloud_in->points[i]);
        }
        else
        {
            cloud_visible->push_back(cloud_in->points[i]);
        }
    }
    pcl::io::savePCDFile(argv[3], *cloud_occluded);
    pcl::io::savePCDFile(argv[4], *cloud_visible);
    return 0;
}
Your code seems to work, apart from the typo and the missing point type definition. Try with a different point cloud for better visual analysis.
Edit: On the other hand, this seems to behave oddly with, for example, the milk cartons cloud from here: http://pointclouds.org/documentation/tutorials/supervoxel_clustering.php#supervoxel-clustering.
The VoxelGridOcclusionEstimation class works, but the grid resolution is very important. If the leaf size is made very small, there will be unoccupied voxels in the foreground, which lets the cast rays pass through to the background. If it is made very large, the surface will not be represented accurately. This gets harder when the model does not have uniform point density, as is the case for data captured by RGBD sensors.
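To see this effect, a minimal sketch (assuming the includes and PointT typedef from the code above, plus <iostream>; the sweep range is arbitrary) that tries several leaf sizes and reports what fraction of the points is classified visible:

#include <iostream>

// Sweep the leaf size and report the visible fraction, to pick a resolution
// that matches the cloud's point density.
void sweepLeafSizes(const pcl::PointCloud<PointT>::Ptr& cloud_in)
{
    for (float leaf_size = 0.005f; leaf_size <= 0.08f; leaf_size *= 2.0f)
    {
        pcl::VoxelGridOcclusionEstimation<PointT> filter;
        filter.setInputCloud(cloud_in);
        filter.setLeafSize(leaf_size, leaf_size, leaf_size);
        filter.initializeVoxelGrid();

        size_t visible = 0;
        for (size_t i = 0; i < cloud_in->size(); i++)
        {
            const PointT& pt = cloud_in->points[i];
            int grid_state;
            filter.occlusionEstimation(grid_state,
                                       filter.getGridCoordinates(pt.x, pt.y, pt.z));
            if (grid_state == 0) // 0 = visible
                ++visible;
        }
        std::cout << "leaf size " << leaf_size << ": "
                  << (100.0 * visible) / cloud_in->size() << "% visible\n";
    }
}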
I have a QGraphicsView in my Qt application on which the user can draw curves. The curves consist of QGraphicsEllipseItems and QGraphicsPathItems, which connect the adjacent ellipses.
I want to get a list of QPointFs that lie within a given curve. I tried creating a local QPainterPath representing the whole curve and iterating over all the points of its bounding rectangle to see which ones fall inside the curve. The code looks like:
QPainterPath curvePath = edges[index]->at(0)->path();
for(int i = 1; i < edges[index]->size(); i++)
{
    curvePath.connectPath(edges[index]->at(i)->path());
}

QRectF curveRect = curvePath.boundingRect();
qreal left = curveRect.left();
qreal right = curveRect.right();
qreal top = curveRect.top();
qreal bottom = curveRect.bottom();

for(qreal i = left; i < right; i++)
    for(qreal j = top; j < bottom; j++)
    {
        QPointF pointToCheck(i, j);
        if(curvePath.contains(pointToCheck))
            list.append(pointToCheck);
    }
where edges is a QList of QLists of QGraphicsPathItems, and list is a QList<QPointF>. It works fine for the calculations (the point of applying this is to increase calculation precision), but it really slows down my application, since those calculations are made quite often.
Is there a more efficient way to implement this?
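One common alternative (a sketch, not from the original thread; the helper name is made up): rasterize the filled path once into an offscreen image and scan the pixels, instead of calling QPainterPath::contains() for every candidate point:

#include <QImage>
#include <QList>
#include <QPainter>
#include <QPainterPath>

// Collect all integer points inside the path by drawing the filled path into
// an image once, then reading back which pixels were covered.
QList<QPointF> pointsInsidePath(const QPainterPath &curvePath)
{
    const QRect rect = curvePath.boundingRect().toAlignedRect();
    QImage mask(rect.size(), QImage::Format_ARGB32_Premultiplied);
    mask.fill(Qt::transparent);

    QPainter painter(&mask);
    painter.translate(-rect.topLeft()); // map path coordinates to image pixels
    painter.fillPath(curvePath, Qt::black);
    painter.end();

    QList<QPointF> list;
    for (int y = 0; y < mask.height(); ++y)
        for (int x = 0; x < mask.width(); ++x)
            if (qAlpha(mask.pixel(x, y)) > 0) // pixel was covered by the fill
                list.append(rect.topLeft() + QPoint(x, y));
    return list;
}

Whether this wins depends on the size of the bounding rectangle, but it replaces many per-point path evaluations with a single rasterization pass.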
I want to project a 3D point cloud onto a 2D grid over the xy plane, where each grid cell is 20 cm × 20 cm. How can I achieve this efficiently?
I do NOT want to use the VoxelGrid method, because I want to retain every point and process them in the next step (a Gaussian kernel over every column, and EM on each grid cell).
As discussed in the comments, you can achieve what you want with the OctreePointCloudPointVector class.
Here is an example of how to use the class:
#include <pcl/point_cloud.h>
#include <pcl/io/pcd_io.h>
#include <pcl/octree/octree_pointcloud_pointvector.h>
using Cloud = pcl::PointCloud<pcl::PointXYZ>;
using CloudPtr = Cloud::Ptr;
using OctreeT = pcl::octree::OctreePointCloudPointVector<pcl::PointXYZ>;
int main(int argc, char** argv)
{
if(argc < 2)
return 1;
// load cloud
CloudPtr cloud(new Cloud);
pcl::io::loadPCDFile(argv[1], *cloud);
CloudPtr cloud_projected(new Cloud(*cloud));
// project to XY plane
for(auto& pt : *cloud_projected)
pt.z = 0.0f;
// create octree, set resolution to 20cm
OctreeT octree(0.2);
octree.setInputCloud(cloud_projected);
octree.addPointsFromInputCloud();
// we are going to store the point indices of each octree leaf here
std::vector<std::vector<int>> indices_vec;
indices_vec.reserve(octree.getLeafCount());
// traverse the octree leafs and store the indices
const auto it_end = octree.leaf_depth_end();
for(auto it = octree.leaf_depth_begin(); it != it_end; ++it)
{
auto& leaf = it.getLeafContainer(); // take a reference, avoid copying the container
std::vector<int> indices;
leaf.getPointIndices(indices);
indices_vec.push_back(indices);
}
// save each leaf to its own file
int cnt = 0;
for(const auto& indices : indices_vec)
{
Cloud leaf(*cloud, indices);
pcl::io::savePCDFileBinary("leaf_" + std::to_string(cnt++) + ".pcd", leaf);
}
}
You can see the output by calling pcl_viewer:
pcl_viewer leaf_*.pcd
You can achieve this using https://github.com/daavoo/pyntcloud with the following code:
from pyntcloud import PyntCloud
cloud = PyntCloud.from_file("some_cloud.ply")
# 0.2, assuming your point cloud units are meters
voxelgrid_id = cloud.add_structure("voxelgrid", size_x=0.2, size_y=0.2)
voxelgrid = cloud.structures[voxelgrid_id]
You can learn more about VoxelGrid here:
https://github.com/daavoo/pyntcloud/blob/master/examples/%5Bstructures%5D%20VoxelGrid.ipynb
What do you mean by a 2D grid over the xy plane? Do you still want the z value to be the original value, or do you want to project the point cloud onto the XY plane first?
Keep Z value
If you want to keep the Z values, just set the leaf size for Z of VoxelGrid to infinite (or a very large number).
pcl::VoxelGrid<pcl::PCLPointCloud2> sor;
sor.setInputCloud (cloud);
sor.setLeafSize (0.01f, 0.01f, 100000.0f);
sor.filter (*cloud_filtered);
Project Cloud to XY plane first
Projecting a cloud onto the XY plane is nothing more than setting the Z value of each point to 0.
for(auto& pt : cloud)
pt.z = 0.0f;
Now you can do normal VoxelGrid on the projected point cloud.
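Putting the two snippets together, a minimal end-to-end sketch (the file names, the 0.2 m leaf size from the question, and using the PointXYZ variant of VoxelGrid instead of the PCLPointCloud2 one are assumptions):

#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <pcl/filters/voxel_grid.h>

int main()
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_filtered(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::io::loadPCDFile("input.pcd", *cloud);

    // project onto the XY plane
    for (auto& pt : *cloud)
        pt.z = 0.0f;

    // 20 cm x 20 cm cells; the Z leaf size is irrelevant after projection
    pcl::VoxelGrid<pcl::PointXYZ> sor;
    sor.setInputCloud(cloud);
    sor.setLeafSize(0.2f, 0.2f, 1.0f);
    sor.filter(*cloud_filtered);

    pcl::io::savePCDFileBinary("downsampled.pcd", *cloud_filtered);
    return 0;
}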
I'm using the Qwt library for my widget; there are some curves on the canvas, built like this:
void Plot::addCurve1( double x, double y, const char *CurveName,
const char *CurveColor,const char *CurveType )
{
...
*points1 << QPointF(x, y);
curve1->setSamples( *points1 );
curve1->attach( this );
...
}
So all my curves share the same coordinate system. I'm trying to build a navigation interface, so I could enter a step into a TextEdit (for example) and move by that step, or jump to the start/end of my defined curve.
I've found a method in the QwtPlotPanner class that gives me this opportunity:
double QWT_widget::move_XLeft()
{
    // getting the step from the TextEdit
    QString xValStr = _XNavDiscrepancies->toPlainText();
    double xVal = xValStr.toDouble();

    // moveCanvas(int dx, int dy) is the method of QwtPlotPanner
    plot->panner->moveCanvas(xVal, 0);
    x_storage = x_storage - xVal;
    return x_storage;
}
It works OK, but the displacement is in pixels, and I need to tie it to my curve and its coordinate system.
The Qwt User's Guide says:
Adjust the enabled axes according to dx/dy
Parameters
dx Pixel offset in x direction
dy Pixel offset in y direction
And this is the only information I've found. How can I convert a pixel step into a step in my coordinate system? I need to go to the end of my curve, so should I take the last QPointF(x, y) of my curve and convert it to a pixel step? Or maybe I'm using the wrong class/method?
Thank you very much :)
Thanks to @Pavel Gridin:
(https://ru.stackoverflow.com/a/876184/251026)
"For conversion from pixels to coordinates and back there are two
methods: QwtPlot::transform and QwtPlot::invTransform"
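For example, a sketch (assuming Qwt 6 and the same plot, panner, and x_storage members as in the question) that converts the coordinate step to pixels before panning:

double QWT_widget::move_XLeft()
{
    // step expressed in plot (curve) coordinates
    QString xValStr = _XNavDiscrepancies->toPlainText();
    double xVal = xValStr.toDouble();

    // QwtPlot::transform maps a plot coordinate to a pixel position on the
    // given axis, so the difference of two mapped values is the pixel offset
    double pixelStep = plot->transform(QwtPlot::xBottom, x_storage)
                     - plot->transform(QwtPlot::xBottom, x_storage - xVal);

    plot->panner->moveCanvas(qRound(pixelStep), 0);
    x_storage = x_storage - xVal;
    return x_storage;
}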
I was trying to plot 8 points in a 3D space, from the 8 vertices of the 3D shape above.
I used the following code:
#include "Coordinates2d.h"
#include "Point3d.h"
const double zoom = 500;
int main()
{
Coordinates2d::ShowWindow("3D Primitives!");
std::vector<Point3d> points;
points.push_back(Point3d(0,0,20));
points.push_back(Point3d(0,100,20));
points.push_back(Point3d(120,100,20));
points.push_back(Point3d(120,0,20));
points.push_back(Point3d(0,0,120));
points.push_back(Point3d(0,100,120));
points.push_back(Point3d(120,100,120));
points.push_back(Point3d(120,0,120));
for(int i=0 ; i<points.size() ; i++)
{
Coordinates2d::Draw(points[i], zoom);
}
Coordinates2d::Wait();
}
Where Point3d is like the following:
#ifndef _POINT_3D_
#define _POINT_3D_
#include "graphics.h"
#include "Matrix.h"
#include "Point2d.h"
#include <cmath>
#include <iostream>
struct Point3d
{
double x;
double y;
double z;
public:
Point3d();
Point3d(double x, double y, double z);
Point3d(Point3d const & point);
Point3d & operator=(Point3d const & point);
Point3d & operator+(int scalar);
bool operator==(Point3d const & point);
bool operator!=(Point3d const & point);
Point3d Round()
{
return Point3d(floor(this->x + 0.5), floor(this->y + 0.5), floor(this->z + 0.5));
}
void Show()
{
std::cout<<"("<<x<<", "<<y<<", "<<z<<")";
}
bool IsValid();
double Distance(Point3d & point);
void SetMatrix(const Matrix & mat);
Matrix GetMatrix() const;
Point2d ConvertTo2d(double zoom)
{
return Point2d(x*zoom/(zoom-z), y*zoom/(zoom-z));
}
};
#endif
#ifndef _COORDINATES_2D_
#define _COORDINATES_2D_
#include "graphics.h"
#include "Point2d.h"
#include "Point3d.h"
#include "Line3d.h"
class Coordinates2d
{
private:
static Point2d origin;
public:
static void Wait();
static void ShowWindow(char str[]);
private:
static void Draw(Point2d & pt);
public:
static void Draw(Point3d & pt, double zoom)
{
Coordinates2d::Draw(pt.ConvertTo2d(zoom));
}
};
#endif
I was expecting the output to be the following: (expected output image)
But the output came out like the following: (actual output image)
I am actually interested in moving my viewing camera.
How can I achieve my desired result?
I see from the comments that you achieved your desired result with a clever formula. If you're interested in doing it the 'standard' graphics way, using matrices, I hope this post will help you.
I found an excellent page written explaining projection matrices for OpenGL, which also extends to the general mathematics of projection.
If you want to go in depth, that article is very well written, explains its steps in detail, and is overall highly commendable.
The below image shows the first part of what you're trying to do.
So the image on the left is the 'viewing volume' that you want your camera to see. You can see that in this case, the Center of Projection (basically the focal point of the camera) is at the origin.
But wait, you say, I don't WANT the center of projection to be at the origin! I know, we'll cover that later.
What we're doing here is taking the strangely shaped volume on the left and converting it to what we call 'normalized coordinates' on the right. So we're mapping our viewing volume onto the range of -1 to 1 in each direction. Basically, we mathematically stretch the irregularly shaped viewing volume into this 2x2x2 cube centered at the origin.
This operation is accomplished through the following matrix, again, from the excellent article I linked above.
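In case the matrix image does not come through, this is the standard OpenGL perspective projection matrix that the article derives:

$$
\begin{pmatrix}
\frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\
0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\
0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\
0 & 0 & -1 & 0
\end{pmatrix}
$$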
So note you have six variables.
t = top
b = bottom
l = left
r = right
n = near
f = far
Those six variables define your viewing volume. Far is not labeled on the above image, but it is the distance of the farthest plane from the origin.
The matrix above puts our viewing volume into normalized coordinates. Once coordinates are in this form, you can make the scene flat by simply ignoring the z coordinate, which is similar to some of the work you have done (nice work!).
So we're all set for viewing things from the origin. But let's say we don't want to view from the origin, and would prefer to view from, say, somewhere behind and to the side.
Well, we can do that! But instead of moving our viewing volume (we have the math all nicely worked out right here), it is, perhaps counter-intuitively, easier to move all the points we are trying to view.
This can be done by multiplying all of the points by a translation matrix.
Here is the Wikipedia page for translation, from which I took the following matrix.
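Reproduced here in case the image is missing:

$$
T = \begin{pmatrix}
1 & 0 & 0 & V_x \\
0 & 1 & 0 & V_y \\
0 & 0 & 1 & V_z \\
0 & 0 & 0 & 1
\end{pmatrix}
$$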
Vx, Vy, and Vz are the amounts we want to move things in the x, y, and z directions. Keep in mind that if we want to move the camera in the positive x direction, we need a negative Vx, and vice versa. This is because we are moving the points instead of the camera. Feel free to try it and see, if you want.
You may also have noticed that both of the matrices I showed are 4x4, while your coordinates are 3x1. This is because the matrices are meant to be used with homogeneous coordinates. These seem strange because they use 4 variables to represent a 3D point, but it's just x, y, z, and w, where you set w = 1 for your points. I believe this variable is used for depth buffers, among other things, but it is basically ubiquitous in graphics' matrix math, so you'll want to get used to using it.
Now that you have these matrices, you can apply the translation to your points, then apply the projection to the points that come out. Then simply ignore the z components, and there you are! You have a 2D image from -1 to 1 in the x and y directions.
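A minimal sketch of that pipeline (plain arrays rather than your Point3d class; the frustum values and camera offset are made-up numbers for illustration):

#include <cstdio>

// Multiply a 4x4 matrix by a homogeneous column vector.
static void multiply(const double m[4][4], const double in[4], double out[4])
{
    for (int row = 0; row < 4; ++row)
    {
        out[row] = 0.0;
        for (int col = 0; col < 4; ++col)
            out[row] += m[row][col] * in[col];
    }
}

int main()
{
    // made-up viewing volume: left, right, bottom, top, near, far
    const double l = -100, r = 100, b = -100, t = 100, n = 10, f = 1000;
    const double proj[4][4] = {
        { 2*n/(r-l), 0,          (r+l)/(r-l),   0           },
        { 0,         2*n/(t-b),  (t+b)/(t-b),   0           },
        { 0,         0,         -(f+n)/(f-n),  -2*f*n/(f-n) },
        { 0,         0,         -1,             0           }
    };
    // move the camera to (50, 0, 200) by translating every point by (-50, 0, -200)
    const double trans[4][4] = {
        { 1, 0, 0, -50  },
        { 0, 1, 0,  0   },
        { 0, 0, 1, -200 },
        { 0, 0, 0,  1   }
    };

    const double point[4] = { 120, 100, 20, 1 }; // homogeneous: w = 1
    double translated[4], projected[4];
    multiply(trans, point, translated);    // first move the world...
    multiply(proj, translated, projected); // ...then project it

    // perspective divide maps into the -1..1 cube; drop z for the 2D image
    std::printf("2D point: (%f, %f)\n",
                projected[0] / projected[3], projected[1] / projected[3]);
    return 0;
}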
This is probably a pretty simple thing, but my knowledge of DirectX is just not up to par with what I'm trying to achieve.
For the moment I am trying to create a vehicle that moves around on terrain. I am attempting to make the vehicle recognize the terrain by creating a square (4 D3DXVECTOR3 points) around the vehicle, whose points each detect the height of the terrain and adjust the vehicle accordingly.
The vehicle is a simple object derived from Microsoft sample code. It has a world matrix, coordinates, rotations, etc.
What I am trying to achieve is to make these points move along with the vehicle, turning when it does, so they can detect the difference in height. This requires me to update the points each time the vehicle moves, but I cannot for the life of me figure out how to get them to rotate properly.
So, in summary, I am looking for a simple way to rotate a vector about an origin (my vehicle's coordinates).
These points are situated near the vehicle's wheels, so if it worked they would stay there regardless of the vehicle's y-axis rotation.
Here's what I've tried:
D3DXVECTOR3 vec;
D3DXVec3TransformCoord(&vec, &SquareTopLeftPoint, &matRotationY);
SquareTopLeftPoint = vec;
This resulted in the point spinning madly out of control and leaving the map.
xRot = VehicleCoordinateX + cos(RotationY) * (SquareTopleftX - VehicleCoordinateX) - sin(RotationY) * (SquareTopleftZ - VehicleCoordinateZ);
yRot = VehicleCoordinateZ + sin(RotationY) * (SquareTopleftX - VehicleCoordinateX) + cos(RotationY) * (SquareTopleftZ - VehicleCoordinateZ);
BoxPoint refers to the vector I am attempting to rotate.
Vehicle is of course the origin of rotation
RotationY is the amount it has rotated.
This is the code for one of the four vectors in the square, but I assume once I get one right, the rest are just copy-paste.
No matter what I try, the point either does not move or spirals out of control, leaving the map altogether.
Here is a snippet of my object class
class Something
{
public:
float x, y, z;
float speed;
float rx, ry, rz;
float sx, sy, sz;
float width;
float length;
float frameTime;
D3DXVECTOR3 initVecDir;
D3DXVECTOR3 currentVecDir;
D3DXMATRIX matAllRotations;
D3DXMATRIX matRotateX;
D3DXMATRIX matRotateY;
D3DXMATRIX matRotateZ;
D3DXMATRIX matTranslate;
D3DXMATRIX matWorld;
D3DXMATRIX matView;
D3DXMATRIX matProjection;
D3DXMATRIX matWorldViewProjection;
//these points represent a box that is used for collision with terrain.
D3DXVECTOR3 frontLeftBoxPoint;
D3DXVECTOR3 frontRightBoxPoint;
D3DXVECTOR3 backLeftBoxPoint;
D3DXVECTOR3 backRightBoxPoint;
};
I was thinking it might be possible to do this using D3DXVec3TransformCoord
D3DXMatrixTranslation(&matTranslate, origin.x,0,origin.z);
D3DXMatrixRotationY(&matRotateY, ry);
D3DXMatrixTranslation(&matTranslate2,width,0,-length);
matAllRotations = matTranslate * matRotateY * matTranslate2;
D3DXVECTOR3 newCoords;
D3DXVECTOR3 oldCoords = D3DXVECTOR3(x,y,z);
D3DXVec3TransformCoord(&newCoords, &oldCoords, &matAllRotations);
Turns out that what I needed to do was:
Translate by -origin.
Rotate.
Translate by origin.
What I was doing was:
Translate to origin.
Rotate.
Translate by length/width.
I thought it was the same.
D3DXMATRIX matTranslate2;
D3DXMatrixTranslation(&matTranslate,-origin.x,0,-origin.z);
D3DXMatrixRotationY(&matRotateY,ry);
D3DXMatrixTranslation(&matTranslate2,origin.x,0,origin.z);
//D3DXMatrixRotationAxis(&matRotateAxis,&origin,ry);
D3DXMATRIX matAll = matTranslate * matRotateY * matTranslate2;
D3DXVECTOR4 newCoords;
D3DXVECTOR4 oldCoords = D3DXVECTOR4(x,y,z,1);
D3DXVec4Transform(&newCoords,&oldCoords,&matAll);
//D3DXVec4TransformCoord(&newCoords, &oldCoords, &matAll);
return newCoords;
Without knowing more about your code I can't say exactly what it does. However, one 'easy' way to think about this problem, if you know the heading angle of your vehicle in world coordinates, is to express your points in coordinates relative to the vehicle's center, rotate them around the center with a simple rotation matrix according to the heading, and then add the vehicle's center to the resulting coordinates:
x = vehicle_center_x + cos(heading) * corner_x - sin(heading) * corner_y
y = vehicle_center_y + sin(heading) * corner_x + cos(heading) * corner_y
(Flip the sign of both sin terms if your heading increases clockwise.)
Keep in mind that corner_x and corner_y are expressed in coordinates relative to the vehicle -- NOT relative to the world.
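As a minimal sketch (the struct, function name, and numbers are made up for illustration), applying those formulas to all four collision points:

#include <cmath>
#include <cstdio>

struct Vec2 { float x, y; };

// Rotate a vehicle-relative corner offset by the heading and translate it
// into world space around the vehicle's center (same formulas as above).
Vec2 cornerToWorld(const Vec2& center, const Vec2& corner, float heading)
{
    Vec2 world;
    world.x = center.x + std::cos(heading) * corner.x - std::sin(heading) * corner.y;
    world.y = center.y + std::sin(heading) * corner.x + std::cos(heading) * corner.y;
    return world;
}

int main()
{
    const Vec2 center = { 10.0f, 5.0f }; // vehicle position in the world
    const Vec2 corners[4] = {            // wheel points relative to the center
        { +1.0f, +2.0f }, { -1.0f, +2.0f },
        { +1.0f, -2.0f }, { -1.0f, -2.0f }
    };
    const float heading = 0.5f;          // radians

    for (const Vec2& c : corners)
    {
        const Vec2 w = cornerToWorld(center, c, heading);
        std::printf("corner at (%f, %f)\n", w.x, w.y); // feed the terrain height lookup here
    }
    return 0;
}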