Perspective projection effect correction

I was trying to plot 8 points in 3D space, taken from the 8 vertices of a 3D cube.
I used the following code:
#include "Coordinates2d.h"
#include "Point3d.h"
const double zoom = 500;
int main()
{
    Coordinates2d::ShowWindow("3D Primitives!");
    std::vector<Point3d> points;
    points.push_back(Point3d(  0,   0,  20));
    points.push_back(Point3d(  0, 100,  20));
    points.push_back(Point3d(120, 100,  20));
    points.push_back(Point3d(120,   0,  20));
    points.push_back(Point3d(  0,   0, 120));
    points.push_back(Point3d(  0, 100, 120));
    points.push_back(Point3d(120, 100, 120));
    points.push_back(Point3d(120,   0, 120));
    for(size_t i = 0; i < points.size(); i++)
    {
        Coordinates2d::Draw(points[i], zoom);
    }
    Coordinates2d::Wait();
}
where Point3d is defined as follows:
#ifndef _POINT_3D_
#define _POINT_3D_
#include "graphics.h"
#include "Matrix.h"
#include "Point2d.h"
#include <cmath>
#include <iostream>
struct Point3d
{
    double x;
    double y;
    double z;
public:
    Point3d();
    Point3d(double x, double y, double z);
    Point3d(Point3d const & point);
    Point3d & operator=(Point3d const & point);
    Point3d & operator+(int scalar);
    bool operator==(Point3d const & point);
    bool operator!=(Point3d const & point);
    Point3d Round()
    {
        return Point3d(floor(this->x + 0.5), floor(this->y + 0.5), floor(this->z + 0.5));
    }
    void Show()
    {
        std::cout << "(" << x << ", " << y << ", " << z << ")";
    }
    bool IsValid();
    double Distance(Point3d & point);
    void SetMatrix(const Matrix & mat);
    Matrix GetMatrix() const;
    Point2d ConvertTo2d(double zoom)
    {
        return Point2d(x*zoom/(zoom-z), y*zoom/(zoom-z));
    }
};
#endif
And Coordinates2d is like the following:
#ifndef _COORDINATES_2D_
#define _COORDINATES_2D_
#include "graphics.h"
#include "Point2d.h"
#include "Point3d.h"
#include "Line3d.h"
class Coordinates2d
{
private:
    static Point2d origin;
public:
    static void Wait();
    static void ShowWindow(char str[]);
private:
    static void Draw(Point2d & pt);
public:
    static void Draw(Point3d & pt, double zoom)
    {
        Coordinates2d::Draw(pt.ConvertTo2d(zoom));
    }
};
#endif
I was expecting the output to be the following:
But the output came out like the following:
I am actually interested in moving my viewing camera.
How can I achieve my desired result?

I see from the comments that you achieved your desired result with a clever formula. If you're interested in doing it the 'standard' graphics way, using matrices, I hope this post will help you.
I found an excellent page explaining projection matrices for OpenGL, which also extends to the general mathematics of projection.
If you want to go in depth, it is a very well written article that explains each of its steps in detail, and is just overall highly commendable.
The below image shows the first part of what you're trying to do.
So the image on the left is the 'viewing volume' that you want your camera to see. You can see that in this case, the Center of Projection (basically the focal point of the camera) is at the origin.
But wait, you say, I don't WANT the center of projection to be at the origin! I know, we'll cover that later.
What we're doing here is taking the strangely shaped volume on the left and converting it to what we call 'normalized coordinates' on the right. So we're mapping our viewing volume onto the range of -1 to 1 in each direction. Basically, we mathematically stretch the irregularly shaped viewing volume into this 2x2x2 cube centered at the origin.
This operation is accomplished through the projection matrix shown below, again from the excellent article I linked above.
Note that it has six variables:
t = top
b = bottom
l = left
r = right
n = near
f = far
Those six variables define your viewing volume. Far is not labeled in the above image, but it is the distance of the farthest plane from the origin.
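The matrix itself (this is the standard OpenGL perspective projection matrix, as derived in the linked article) is:

$$
\begin{pmatrix}
\frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\
0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\
0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\
0 & 0 & -1 & 0
\end{pmatrix}
$$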
This matrix puts our viewing volume into normalized coordinates. Once coordinates are in this form, you can make the image flat by simply ignoring the z coordinate, which is similar to some of the work you have done (nice work!).
So we're all set for viewing things from the origin. But let's say we don't want to view from the origin, and would prefer to view from, say, somewhere behind and to the side.
Well, we can do that! But instead of moving our viewing area (we have the math all nicely worked out right here), it is, perhaps counterintuitively, easier to move all the points we are trying to view.
This can be done by multiplying all of the points by a translation matrix.
Here is the Wikipedia page for translation, from which I took the following matrix.
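For reference, the standard homogeneous translation matrix is:

$$
T = \begin{pmatrix}
1 & 0 & 0 & V_x \\
0 & 1 & 0 & V_y \\
0 & 0 & 1 & V_z \\
0 & 0 & 0 & 1
\end{pmatrix}
$$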
Vx, Vy, and Vz are the amount we want to move things in the x, y, and z directions. Keep in mind, if we want to move the camera in the positive x direction, we need a negative Vx, and vice versa. This is because we are moving the points instead of the camera. Feel free to try it and see, if you want.
You may also have noticed that both of the matrices I showed are 4x4, while your coordinates are 3x1. This is because the matrices are meant to be used with homogeneous coordinates. These seem strange because they use 4 variables to represent a 3D point, but it's just x, y, z, and w, where you set w = 1 for your points. I believe this variable is also used for depth buffers, among other things, but it is basically ubiquitous in graphics matrix math, so you'll want to get used to using it.
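Concretely, a 3D point is promoted to a 4-vector by appending $w = 1$, and any nonzero scalar multiple of a homogeneous vector represents the same point, so you can always recover the 3D point by dividing through by $w$:

$$(x, y, z) \mapsto (x, y, z, 1), \qquad (x, y, z, w) \sim \left(\frac{x}{w}, \frac{y}{w}, \frac{z}{w}\right) \quad \text{for } w \neq 0$$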
Now that you have these matrices, you can apply the translation one to your points, then apply the perspective one to the points you get out. Then perform the perspective divide (divide x and y by w) and simply ignore the z component, and there you are! You have a 2D image from -1 to 1 in the x and y directions.
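As a rough, self-contained sketch of that pipeline (this is not your Matrix class; the frustum bounds and the camera position of z = +200 are assumed values for illustration):

#include <array>
#include <cstdio>

using Vec4 = std::array<double, 4>;
using Mat4 = std::array<std::array<double, 4>, 4>;

// multiply a 4x4 matrix by a homogeneous point
Vec4 mul(const Mat4 & m, const Vec4 & v)
{
    Vec4 res{0, 0, 0, 0};
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            res[i] += m[i][j] * v[j];
    return res;
}

int main()
{
    // viewing volume: symmetric frustum, near = 1, far = 500
    double l = -1, r = 1, b = -1, t = 1, n = 1, f = 500;
    Mat4 proj = {{
        { 2*n/(r-l), 0,          (r+l)/(r-l),  0           },
        { 0,         2*n/(t-b),  (t+b)/(t-b),  0           },
        { 0,         0,         -(f+n)/(f-n), -2*f*n/(f-n) },
        { 0,         0,         -1,            0           }
    }};
    // camera at z = +200 looking down -z, so translate the points by Vz = -200
    Mat4 trans = {{
        { 1, 0, 0,  0   },
        { 0, 1, 0,  0   },
        { 0, 0, 1, -200 },
        { 0, 0, 0,  1   }
    }};
    Vec4 p = {120, 100, 20, 1}; // one of your cube vertices, with w = 1
    Vec4 clip = mul(proj, mul(trans, p));
    // perspective divide: x and y land in [-1, 1] if the point is visible
    std::printf("(%f, %f)\n", clip[0] / clip[3], clip[1] / clip[3]);
    return 0;
}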

Related

How to use PCL to discretize a 3D point cloud into a 2D grid over the xy plane, without using VoxelGrid?

I want to project the 3D point cloud onto a 2D grid over the xy plane, where each grid cell is 20cm x 20cm. How can I achieve this effectively?
I do NOT want to use the VoxelGrid method, because I want to retain every point and deal with them in the next step (a Gaussian kernel over every column, and EM over each grid cell).
As discussed in the comments, you can achieve what you want with OctreePointCloudPointVector class.
Here is an example how to use the class:
#include <pcl/point_cloud.h>
#include <pcl/io/pcd_io.h>
#include <pcl/octree/octree_pointcloud_pointvector.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;
using CloudPtr = Cloud::Ptr;
using OctreeT = pcl::octree::OctreePointCloudPointVector<pcl::PointXYZ>;

int main(int argc, char** argv)
{
    if(argc < 2)
        return 1;
    // load cloud
    CloudPtr cloud(new Cloud);
    pcl::io::loadPCDFile(argv[1], *cloud);
    CloudPtr cloud_projected(new Cloud(*cloud));
    // project to XY plane
    for(auto& pt : *cloud_projected)
        pt.z = 0.0f;
    // create octree, set resolution to 20cm
    OctreeT octree(0.2);
    octree.setInputCloud(cloud_projected);
    octree.addPointsFromInputCloud();
    // we will store the point indices of each octree leaf here
    std::vector<std::vector<int>> indices_vec;
    indices_vec.reserve(octree.getLeafCount());
    // traverse the octree leaves and store the indices
    const auto it_end = octree.leaf_depth_end();
    for(auto it = octree.leaf_depth_begin(); it != it_end; ++it)
    {
        auto& leaf = it.getLeafContainer();
        std::vector<int> indices;
        leaf.getPointIndices(indices);
        indices_vec.push_back(indices);
    }
    // save each leaf to its own file
    int cnt = 0;
    for(const auto& indices : indices_vec)
    {
        Cloud leaf(*cloud, indices);
        pcl::io::savePCDFileBinary("leaf_" + std::to_string(cnt++) + ".pcd", leaf);
    }
}
You can see the output by calling pcl_viewer:
pcl_viewer leaf_*.pcd
See sample output
You can achieve this using https://github.com/daavoo/pyntcloud with the following code:
from pyntcloud import PyntCloud
cloud = PyntCloud.from_file("some_cloud.ply")
# 0.2 assuming your point cloud units are meters
voxelgrid_id = cloud.add_structure("voxelgrid", size_x=0.2, size_y=0.2)
voxelgrid = cloud.structures[voxelgrid_id]
You can learn more about VoxelGrid here:
https://github.com/daavoo/pyntcloud/blob/master/examples/%5Bstructures%5D%20VoxelGrid.ipynb
What do you mean by a 2D grid over the xy plane? Do you still want the z value to keep its original value, or do you want to project the point cloud onto the XY plane first?
Keep Z value
If you want to keep the Z values, just set the leaf size for Z of VoxelGrid to infinite (or a very large number).
pcl::VoxelGrid<pcl::PCLPointCloud2> sor;
sor.setInputCloud (cloud);
sor.setLeafSize (0.01f, 0.01f, 100000.0f);
sor.filter (*cloud_filtered);
Project Cloud to XY plane first
Projecting a cloud onto the XY plane is nothing more than setting the Z value of each point to 0:
for(auto& pt : cloud)
    pt.z = 0.0f;
Now you can do normal VoxelGrid on the projected point cloud.
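Put together, a minimal end-to-end sketch (the file names are hypothetical):

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <pcl/filters/voxel_grid.h>

int main()
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::io::loadPCDFile("input.pcd", *cloud);

    // project onto the XY plane first
    for (auto& pt : *cloud)
        pt.z = 0.0f;

    // 20cm x 20cm grid; the Z leaf size no longer matters since z == 0 everywhere
    pcl::VoxelGrid<pcl::PointXYZ> sor;
    sor.setInputCloud(cloud);
    sor.setLeafSize(0.2f, 0.2f, 1.0f);

    pcl::PointCloud<pcl::PointXYZ> cloud_filtered;
    sor.filter(cloud_filtered);
    pcl::io::savePCDFileBinary("filtered.pcd", cloud_filtered);
    return 0;
}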

QWT moving canvas

I'm using the Qwt library for my widget; there are some curves on the canvas, drawn like this:
void Plot::addCurve1( double x, double y, const char *CurveName,
                      const char *CurveColor, const char *CurveType )
{
    ...
    *points1 << QPointF(x, y);
    curve1->setSamples( *points1 );
    curve1->attach( this );
    ...
}
So, all my curves have the same coordinate system. I'm trying to build a navigation interface, so I could type a step into a TextEdit (for example) and move by that step, or go to the end/start of a defined curve.
I've found a method in the QwtPlotPanner class that gives me this opportunity:
double QWT_widget::move_XLeft()
{
    // getting the step from the TextEdit
    QString xValStr = _XNavDiscrepancies->toPlainText();
    double xVal = xValStr.toDouble();
    // moveCanvas(int dx, int dy) - the method of QwtPlotPanner
    plot->panner->moveCanvas(xVal, 0);
    x_storage = x_storage - xVal;
    return x_storage;
}
So it works OK, but the displacement is in pixels, and I need it to stick to my defined curve and its coordinate system.
The Qwt User's Guide says:
Adjust the enabled axes according to dx/dy
Parameters
dx Pixel offset in x direction
dy Pixel offset in y direction
And this is the only information I've found. How can I convert a pixel step into a step in my coordinate system? I need to go to the end of my curve, so should I take the last QPointF(x,y) of my curve and convert it to a pixel step? Or maybe I'm using the wrong class/method?
Thank you very much :)
Thanks to @Pavel Gridin (https://ru.stackoverflow.com/a/876184/251026):
"For conversion from pixels to coordinates and back there are two
methods: QwtPlot::transform and QwtPlot::invTransform"
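For example, a hedged sketch of move_XLeft using QwtPlot::transform to turn a step in plot coordinates into the pixel offset that moveCanvas expects (assuming a Qwt version where transform returns the pixel position as a double):

double QWT_widget::move_XLeft()
{
    // the step is entered in plot (curve) coordinates, not pixels
    QString xValStr = _XNavDiscrepancies->toPlainText();
    double xVal = xValStr.toDouble();

    // pixel distance between x = 0 and x = xVal on the bottom axis
    double px0 = plot->transform(QwtPlot::xBottom, 0.0);
    double px1 = plot->transform(QwtPlot::xBottom, xVal);

    plot->panner->moveCanvas(qRound(px1 - px0), 0);
    x_storage = x_storage - xVal;
    return x_storage;
}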

Make an object orbit around another with degrees only (no radians, integer-only math)

I'm currently developing something that requires objects to orbit around one another. However, I am limited: I cannot use radians to achieve this. I have access to sin and cos, and to degrees, but not radians (they break everything). Note that this is within Minecraft, and the values there cannot hold floats or doubles, so an answer like 0.017 becomes 17. For this reason, I cannot use radians.
The function that calculates sin and cos is limited to the range -180 to 180. This means I cannot simply turn 0.787 radians into 787, as that is out of the limit, and the answer returned would be completely wrong.
Right now the code would be something like this:
var distance = 100; // from the centre of orbit
var degrees = 45; // around a 360 degree orbit
var radians = degrees * (Math.PI / 180);
var x = Math.cos(radians) * distance;
var y = Math.sin(radians) * distance;
But that code completely relies on degrees being converted into radians. I cannot do this because of Minecraft's integer limits and how the functions calculate sin and cos. It is simply not possible.
So the main question is:
How can I find the future position of an object with just degrees, sin, and cos?
(Perhaps base the answer as if the degrees were 45 for instance)
Here is a great example picture:
Why not make your own LUT in fixed point? Something like this in C++:
const int fp = 1000; // fixed point precision
const int mycos[360] = { 1000, 999, 999, 998, 997, 996, 994, 992, 990, ... };
float x, y, x0 = 0, y0 = 0, r = 50;
int ang = 45; // angle in degrees
x = x0 + ((r * mycos[ ang       % 360]) / fp);
y = y0 + ((r * mycos[(ang + 90) % 360]) / fp);
Also you can write a script that will create the LUT for you. Each value in the LUT is computed like this:
LUT[i] = fp*cos(i*M_PI/180); // i = 0,1,2,...359
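For instance, a small one-off generator (standard C++, run offline) that prints such a table:

#include <cmath>
#include <cstdio>

int main()
{
    const int fp = 1000;                      // fixed point precision
    const double pi = 3.14159265358979323846;
    for (int i = 0; i < 360; i++)             // one entry per degree
        std::printf("%d,%c", (int)std::lround(fp * std::cos(i * pi / 180.0)),
                    (i % 15 == 14) ? '\n' : ' ');
    return 0;
}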
Now, to normalize the angle before use:
ang %= 360;
if (ang<0) ang+=360;
There are also ways to compute sin/cos tables with integer variables only. We used this in the 8-bit era in asm on the Z80, and later in x86 demos, so it is possible to write code that would create the table directly in a Minecraft script without the need of another compiler.
You can even change the angular units to a power of 2 instead of 360, so you can get rid of the modulo, and set fp to a power of 2 minus 1 so you do not even need to divide. After some digging in my source archives I found my ancient TASM MS-DOS demo which uses this technique. After porting it to C++ and tweaking the constants, here is the C++ result:
int mysinLUT[256];

void mysin_init100() // <-100,+100>
{
    int bx, si = 620, cx = 0, dx; // si ~ amplitude
    for (bx = 0; bx < 256; bx++)
    {
        mysinLUT[bx] = (cx >> 8);
        cx += si;
        dx = 41 * cx;
        if (dx < 0) dx = -((-dx) >> 16); else dx >>= 16;
        si -= dx;
    }
}

void mysin_init127() // <-127,+127>
{
    int bx, si = 793, cx = 0, dx; // si ~ amplitude
    for (bx = 0; bx < 256; bx++)
    {
        mysinLUT[bx] = (cx >> 8) + 1;
        cx += si;
        dx = 41 * cx;
        if (dx < 0) dx = -((-dx) >> 16); else dx >>= 16;
        si -= dx;
    }
}

int mysin(int a) { return mysinLUT[(a     ) & 255]; }
int mycos(int a) { return mysinLUT[(a + 64) & 255]; }
The constants are set so the LUT holds a rough approximation of sine within the range <-100,+100> or <-127,+127> (depending on which init you call), and the angle period is 256 instead of 360. You need to call mysin_init???() once to initialize the LUT; after that you can use mysin and mycos. Just do not forget to divide the final result by 100 or shift it right by 7.
When I render an overlay of the real and the approximated circle using VCL:
void draw()
{
    // select range
    // #define range100
    #define range127

    // init sin LUT just once
    static bool _init = true;
    if (_init)
    {
        _init = false;
        #ifdef range100
        mysin_init100();
        #endif
        #ifdef range127
        mysin_init127();
        #endif
    }

    int a, x, y, x0, y0, r;
    // clear screen
    bmp->Canvas->Brush->Color = clWhite;
    bmp->Canvas->FillRect(TRect(0, 0, xs, ys));
    // compute circle size from window resolution xs,ys
    x0 = xs >> 1;
    y0 = ys >> 1;
    r = x0; if (r > y0) r = y0; r = (r * 7) / 10;
    // render real circle
    bmp->Canvas->Pen->Color = clRed;
    bmp->Canvas->Ellipse(x0 - r, y0 - r, x0 + r, y0 + r);
    // render approximated circle
    bmp->Canvas->Pen->Color = clBlack;
    for (a = 0; a <= 256; a++)
    {
        #ifdef range100
        x = x0 + ((r * mycos(a)) / 100);
        y = y0 - ((r * mysin(a)) / 100);
        #endif
        #ifdef range127
        // if >> is signed (copying MSB)
        x = x0 + ((r * mycos(a)) >> 7);
        y = y0 - ((r * mysin(a)) >> 7);
        // if >> is unsigned (inserting 0) and all circle points are non negative
        // x = ( (x0 << 7) + (r * mycos(a)) ) >> 7;
        // y = ( (y0 << 7) - (r * mysin(a)) ) >> 7;
        // this should work no matter what
        // x = r * mycos(a); if (x < 0) x = -((-x) >> 7); else x >>= 7; x = x0 + x;
        // y = r * mysin(a); if (y < 0) y = -((-y) >> 7); else y >>= 7; y = y0 - y;
        // this works no matter what but uses signed division
        // x = x0 + ((r * mycos(a)) / 127);
        // y = y0 - ((r * mysin(a)) / 127);
        #endif
        if (!a) bmp->Canvas->MoveTo(x, y);
        else    bmp->Canvas->LineTo(x, y);
    }
    Form1->Canvas->Draw(0, 0, bmp);
    // bmp->SaveToFile("out.bmp");
}
the result looks like this:
Red is the real circle and black is the circle using mysin and mycos. As you can see there are small deviations due to the accuracy of the approximation, but no floating point operation is used here. It is weird that the 3 bit-shift methods produce different numbers (it must be some optimization of my compiler); the constants are tweaked for the first method.

Occlusion estimation in pointcloud using pcl voxelgridOcclusionEstimation

I need to find out which points of a point cloud are visible from an RGBD sensor located at the origin (0,0,0). I tried to use the VoxelGridOcclusionEstimation class of PCL, which uses a ray tracing technique, to determine the visible region of the cloud as seen by the sensor.
As an experiment, I tried to get the visible region of a sphere whose center satisfies one of the following:
center is along x
center is along y
center is along z
center is along xz plane
center is along yz plane
center is along xy plane.
The sensor is at the origin with zero rotation in all cases.
VoxelGridOcclusionEstimation yields weird results. The green region denotes the visible region, while the red represents the occluded region.
My code is:
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <pcl/filters/voxel_grid_occlusion_estimation.h>
#include <cstdlib>

int main(int argc, char * argv[])
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_in(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_occluded(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_visible(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::io::loadPCDFile(argv[1], *cloud_in);

    // sensor at the origin with zero rotation
    Eigen::Quaternionf quat(1, 0, 0, 0);
    cloud_in->sensor_origin_ = Eigen::Vector4f(0, 0, 0, 0);
    cloud_in->sensor_orientation_ = quat;

    pcl::VoxelGridOcclusionEstimation<pcl::PointXYZ> voxelFilter;
    voxelFilter.setInputCloud(cloud_in);
    float leaf_size = atof(argv[2]);
    voxelFilter.setLeafSize(leaf_size, leaf_size, leaf_size);
    voxelFilter.initializeVoxelGrid();

    for (size_t i = 0; i < cloud_in->size(); i++)
    {
        pcl::PointXYZ pt = cloud_in->points[i];
        Eigen::Vector3i grid_coordinates = voxelFilter.getGridCoordinates(pt.x, pt.y, pt.z);
        int grid_state;
        voxelFilter.occlusionEstimation(grid_state, grid_coordinates);
        if (grid_state == 1)
            cloud_occluded->push_back(cloud_in->points[i]);
        else
            cloud_visible->push_back(cloud_in->points[i]);
    }
    pcl::io::savePCDFile(argv[3], *cloud_occluded);
    pcl::io::savePCDFile(argv[4], *cloud_visible);
    return 0;
}
Your code seems to work except for the typo and the missing point type definitions. Try with a different point cloud for better visual analysis.
Edit: on the other hand, this seems to behave oddly with, for example, the milk cartons cloud from here: http://pointclouds.org/documentation/tutorials/supervoxel_clustering.php#supervoxel-clustering.
The VoxelGridOcclusionEstimation class works, but the grid width is very important. If it is made very small, there will be unoccupied voxels in the foreground, which will let the cast rays pass through to the background. If it is set very large, the surface will not be correctly represented. This is more difficult if the model does not have uniform point density, as in the case of data captured by RGBD sensors.

In Qt, drawPoint does not plot anything if negative-valued parameters are supplied

In Qt Creator, the drawPoint() method does not draw a point if negative-valued parameters are passed.
The following is code for Bresenham's circle algorithm, but it is not working in Qt Creator: it just plots the circle in one quadrant.
Bresenham::Bresenham(QWidget *parent) : QWidget(parent)
{}

void Bresenham::paintEvent(QPaintEvent *e)
{
    Q_UNUSED(e);
    QPainter qp(this);
    drawPixel(&qp);
}

void Bresenham::drawPixel(QPainter *qp)
{
    QPen pen(Qt::red, 2, Qt::SolidLine);
    qp->setPen(pen);
    int x = 0, y, d, r = 100;
    y = r;
    d = 3 - 2 * r;
    do
    {
        qp->drawPoint(x, y);
        qp->drawPoint(y, x);
        qp->drawPoint(y, -x);
        qp->drawPoint(x, -y);
        qp->drawPoint(-x, -y);
        qp->drawPoint(-y, -x);
        qp->drawPoint(-x, y);
        qp->drawPoint(-y, x);
        if (d < 0)
        {
            d = d + 4 * x + 6;
        }
        else
        {
            d = d + (4 * x - 4 * y) + 10;
            y = y - 1;
        }
        x = x + 1;
    } while (x < y);
}
You need to translate the Qt coordinate system to the classic Cartesian one. Choose a new center QPoint orig and replace all
qp->drawPoint(x,y);
with
qp->drawPoint(orig + QPoint(x,y));
The Qt coordinate system's origin is at (0,0) and the y-axis is inverted. For instance, a segment from A(2,7) to B(6,1) looks like this:
Notice how there is only the positive-x, positive-y quadrant. For simplicity, assume that no negative coordinates exist.
Note:
For performance reasons it is better to compute all the points first and then draw them all using
QPainter::drawPoints ( const QPoint * points, int pointCount );
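A sketch of that idea applied to the loop above (centering the circle in the widget is an assumption; orig can be any point you like):

void Bresenham::drawPixel(QPainter *qp)
{
    QPen pen(Qt::red, 2, Qt::SolidLine);
    qp->setPen(pen);

    const QPoint orig(width() / 2, height() / 2); // chosen center
    QVector<QPoint> pts;

    const int r = 100;
    int x = 0, y = r, d = 3 - 2 * r;
    do
    {
        // collect all eight symmetric octant points, shifted by orig
        pts << orig + QPoint( x,  y) << orig + QPoint( y,  x)
            << orig + QPoint( y, -x) << orig + QPoint( x, -y)
            << orig + QPoint(-x, -y) << orig + QPoint(-y, -x)
            << orig + QPoint(-x,  y) << orig + QPoint(-y,  x);
        if (d < 0) d += 4 * x + 6;
        else { d += 4 * (x - y) + 10; y--; }
        x++;
    } while (x < y);

    // one call instead of eight drawPoint() calls per iteration
    qp->drawPoints(pts.constData(), pts.size());
}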
