How can I filter an organized point cloud without making it unorganized? - point-cloud-library

I am working on filtering point cloud data. I read in multiple scientific publications that it's preferable to work on an organized point cloud, since filtering, segmentation, and so on can be more efficient when the cloud is organized. So I organized mine using spherical projection, and that works fine.
However, after passing the projected cloud through multiple filters (voxel grid, crop, outlier removal), it reverts to the unorganized format, which I don't want. I still want to segment and treat the surface and so on, but on the organized format, to profit computationally from it.
I could apply the projection again, but that upsamples the cloud, which is logical, yet not what I want after downsampling through the filters.
Is there any way to instantiate the filters so that they handle the data in a way that doesn't deorganize it?
Here is some info about the data pipeline:
Projected cloud Size: 65536
The projected cloud is organized: 1
The projected cloud is dense: 1
Projected PointCloud width: 1024
Projected PointCloud height: 64
Projecting took 11 milliseconds
PointCloud after cropping: 51145 data points (x y z intensity).
PointCloud after extracting the ego: 51115 data points (x y z intensity).
PointCloud after filtering with voxel grid: 17340 data points (x y z intensity).
PointCloud after Radius Outlier Removal filter: 17303 data points (x y z intensity).
Filtered Cloud Size: 51145
The filtered Cloud is organized: 0
Filtered PointCloud width: 51145
Filtered PointCloud height: 1
Filtering took 21 milliseconds

When setting up the filter, do the following before applying it:
your_filter.setKeepOrganized(true);
This keeps the cloud's width and height intact: removed points are not dropped but overwritten with NaN, so the output stays organized (though it is no longer dense). Note that setKeepOrganized() exists on filters derived from pcl::FilterIndices, such as PassThrough, CropBox, StatisticalOutlierRemoval and RadiusOutlierRemoval; VoxelGrid resamples the cloud onto new points, so it inherently cannot preserve organization.
For example, with a statistical outlier removal filter:
#include <pcl/filters/statistical_outlier_removal.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_filtered(new pcl::PointCloud<pcl::PointXYZ>);
pcl::StatisticalOutlierRemoval<pcl::PointXYZ> stat_filter;
stat_filter.setInputCloud(original_cloud);
stat_filter.setMeanK(50);
stat_filter.setStddevMulThresh(1.0);
// keeps `cloud_filtered` organized: removed points become NaN instead of being dropped
stat_filter.setKeepOrganized(true);
stat_filter.filter(*cloud_filtered);

Related

Given a set of points with x, y and z coordinates whose bounds are 0 to 1 (inclusive), determine if they're all uniformly distributed (or close to)

I'm trying to determine whether a set of points is uniformly distributed in a 1 x 1 x 1 cube. Each point comes with x, y, and z coordinates that correspond to its location in the cube.
A trivial way I can think of is to flatten the set of points into two graphs and check how the points are distributed in each; however, I do not know whether that's a correct way of doing so.
Does anyone have any ideas?
I would compute a point density map and then check for anomalies in it:
definitions
Let's assume we have N points to test. If the points are uniformly distributed, they should form a "uniform grid" of m*m*m points:
m * m * m = N
m = N^(1/3)
To account for disturbances from the uniform grid and to assess statistics, you need to divide your cube into a grid of smaller cubes, where each grid cube holds several points (so statistical properties can be computed). Let's assume k >= 5 points per grid cube, so:
cubes = m/k
create a 3D array of counters
We simply need an integer counter for each grid cube, so:
int map[cubes][cubes][cubes];
Fill it with zeroes.
process all points p(x,y,z) and update map[][][]
Simply loop through all of your points, compute the grid cube position each point belongs to, and increment its counter:
map[int(x*(cubes-1))][int(y*(cubes-1))][int(z*(cubes-1))]++;
compute average count of the map[][][]
A simple average like this will do:
avg = 0;
for (xx = 0; xx < cubes; xx++)
    for (yy = 0; yy < cubes; yy++)
        for (zz = 0; zz < cubes; zz++)
            avg += map[xx][yy][zz];
avg /= cubes*cubes*cubes;
Now just compute the average absolute distance to this average:
d = 0;
for (xx = 0; xx < cubes; xx++)
    for (yy = 0; yy < cubes; yy++)
        for (zz = 0; zz < cubes; zz++)
            d += fabs(map[xx][yy][zz] - avg);
d /= cubes*cubes*cubes;
Now d holds a metric telling how far the points are from uniform density, where 0 means a perfectly uniform distribution. So just threshold it... d also depends on the number of points; my intuition tells me d >= k means totally non-uniform, so if you want to make the test more robust you can do something like this (the threshold might need tweaking):
d/=k;
if (d<0.25) uniform;
else nonuniform;
As you can see, all of this is O(N) time, so it should be fast enough for you. If it isn't, you can evaluate only every 10th point, but that works only if the order of the points is random; if it isn't, you would need to pick N/10 random points instead. The 10 can be any constant, but keep in mind that you still need enough points for the statistics to represent your set, so I would not go below 250 points (though that depends on what exactly you need).
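For concreteness, here is a minimal sketch of the whole procedure in Python/NumPy (the function name and the (N, 3) array layout are my assumptions, not part of the original recipe):

import numpy as np

def uniformity_metric(points, k=5):
    # Density-map uniformity test as described above.
    # Assumes points is an (N, 3) array with all coordinates in [0, 1].
    n = len(points)
    m = n ** (1.0 / 3.0)                # uniform grid resolution: m*m*m = N
    cubes = max(int(m / k), 1)          # grid cubes per axis
    # bin each point into its grid cube and count occupancy
    idx = (points * (cubes - 1)).astype(int)
    counts = np.zeros((cubes, cubes, cubes), dtype=int)
    np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
    # average absolute deviation from the mean count, normalized by k
    avg = counts.mean()
    d = np.abs(counts - avg).mean()
    return d / k

# usage: smaller values mean a more uniform density (0 = perfectly uniform grid)
print(uniformity_metric(np.random.rand(100000, 3)))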
Here are a few of my answers using the density map technique:
Finding holes in 2d point sets?
Location of highest density on a sphere

how to plot specific segment from density.lpp

I use density.lpp for kernel density estimation. I want to pick a specific segment and plot the estimate along that segment. As an example, I have a road which is a combination of two segments. Each segment has a different length, so I don't know how many pieces each of them is divided into.
Here are the locations of the vertices and the road segment ids:
https://www.dropbox.com/s/fmuul0b6lus279c/R.csv?dl=0
Below is the code I used to create a spatial lines data frame, generate random points on the network, and get the density estimate.
Is there a way to know how many pieces each segment is divided into? Or, if I want to plot location vs. estimate for a chosen segment, how can I do that? Using dimyx=100 created 199 estimation points, but I don't know how many of them belong to Swid=1 or Swid=2.
One approach I used was gDistance; it works fine in this problem because these segments connect in one direction. However, when there is a 4-way connection, some of the lambda values attach to other segments that do not belong to the segment in question. I provided a picture and circled 2 points; when I used gDistance, those points connected to other segments. Any ideas?
R=read.csv("R.csv",header=T,sep=",")
R2.1=dplyr::select(R, X01,Y01,Swid)
coordinates(R2.1) = c("X01", "Y01")
proj4string(R2.1)=CRS("+proj=utm +zone=17 +datum=NAD83 +units=m +no_defs +ellps=GRS80 +towgs84=0,0,0")
plot(R2.1,main="nodes on the road")
##
LineXX <- lapply(split(R2.1, R2.1$Swid), function(x) Lines(list(Line(coordinates(x))), x$Swid[1L]))
##
linesXY <- SpatialLines(LineXX)
data <- data.frame(Swid = unique(R2.1$Swid))
rownames(data) <- data$Swid
lxy <- SpatialLinesDataFrame(linesXY, data)
proj4string(lxy)=proj4string(trtrtt.original)
W.1=as.linnet.SpatialLines(lxy)
Rand1=runiflpp(250, W.1)
Rand1XY=coords(Rand1)[,1:2]
W2=owin(xrange=c(142751.98, 214311.26), yrange=c(3353111, 3399329))
Trpp=ppp(x=Rand1XY$x, y=Rand1XY$y, window=W2) ### planar point object
L.orig=lpp(Trpp,W.1) # discrete
plot(L.orig,main="Original with accidents")
S1=bw.scott(L.orig)[1] # in case to change bandwitdh
Try274=density(L.orig,S1,distance="path",continuous=TRUE,dimyx=100)
L=as.linnet(L.orig)
length(Try274[!is.na(Try274$v)])
[1] 199
This is a question about the spatstat package.
The result of density.lpp is an object of class linim. For any such object, you can use as.data.frame to extract the data. This yields a data frame with one row for each sample point on the network. For each sample point, the data are xc, yc (coordinates of the nearest pixel centre), x, y (exact coordinates of the sample point on the network), seg (identifier of the segment), tp (relative position along the segment) and values (the density value). If you split the data frame by the seg column, you will get the data for individual segments of the network.
However, it seems that you may want information about the internal workings of density.lpp. In order to achieve adequate accuracy during the computation phase, density.lpp subdivides each network segment into many short segments (using a complex set of rules). This information is lost when the final results are discretised into a linim object and returned. The attribute "dx" reports the length of the short segments that were used in the computation phase, but that's all.
If you email me directly I can show you how to extract the internal information.

Kitti Velodyne point to pixel coordinate

From the Velodyne points, how do I get the pixel coordinates for each camera?
Using pykitti
point_cam0 = data.calib.T_cam0_velo.dot(point_velo)
We can get the projection on the image, which is equation 7 of the KITTI dataset paper:
y = P_rect(i) * R_rect(0) * T_velo_cam * x
But from there, how do I get the actual pixel coordinates on each image?
Update: PyKitti version 0.2.1 exposes projection matrices for all cameras.
I recently faced the same problem. For me, the issue was that pykitti didn't expose the Prect and Rrect matrices for all cameras.
For pykitti > 0.2.1, use Prect and Rrect from the calibration data.
For previous versions, you have two options:
Enter the matrices by hand (data is in the .xml calibration file for each sequence).
Use this fork of pykitti: https://github.com/Mi-lo/pykitti/
Then, you can use equation 7 to project a velodyne point into an image. Note that:
You will need the 3D points as a 4xN array in homogeneous coordinates. Points returned by pykitti are an Nx4 numpy array, with the reflectance in the 4th column. You can prepare the points with the prepare_velo_points function below, which keeps only points with reflectance > 0, then replaces the reflectance values with 1 to get homogeneous coordinates.
The velodyne scans 360°. Equation 7 will give you a result even for points that are behind the camera (they will get projected as if they were in front, but vertically mirrored). To avoid this, you should project only points that are in front of the camera. For this, you can use the project_velo_points_in_img function below. It returns 2D points in homogeneous coordinates, so you should discard the 3rd row.
Here are the functions I used:
def prepare_velo_points(pts3d_raw):
    '''Replaces the reflectance value by 1, and transposes the array, so
       points can be directly multiplied by the camera projection matrix.'''
    pts3d = pts3d_raw
    # keep only points with reflectance > 0
    pts3d = pts3d[pts3d[:, 3] > 0, :]
    # homogeneous coordinates: set the 4th column to 1
    pts3d[:, 3] = 1
    return pts3d.transpose()
def project_velo_points_in_img(pts3d, T_cam_velo, Rrect, Prect):
    '''Projects 3D points into the 2D image. Expects pts3d as a 4xN
       numpy array. Returns the 2D projections of the points that
       are in front of the camera, and the corresponding 3D points.'''
    # 3D points in the camera reference frame
    pts3d_cam = Rrect.dot(T_cam_velo.dot(pts3d))
    # before projecting, keep only points with z >= 0
    # (points that are in front of the camera)
    idx = pts3d_cam[2, :] >= 0
    pts2d_cam = Prect.dot(pts3d_cam[:, idx])
    return pts3d[:, idx], pts2d_cam / pts2d_cam[2, :]
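As a usage illustration (not part of the original answer), here is a sketch with pykitti's raw-data loader; the dataset path, date and drive are placeholders, and it assumes a pykitti version whose calib object exposes T_cam0_velo (with the rectifying rotation already folded in, so an identity can be passed for Rrect) and P_rect_20:

import numpy as np
import pykitti

# placeholders: point these at your local copy of the KITTI raw data
data = pykitti.raw('/path/to/kitti/raw', '2011_09_26', '0001')
velo = data.get_velo(0)                      # Nx4 array: x, y, z, reflectance

pts3d = prepare_velo_points(velo)            # 4xN homogeneous points
pts3d_kept, pts2d = project_velo_points_in_img(
    pts3d, data.calib.T_cam0_velo, np.eye(4), data.calib.P_rect_20)
u, v = pts2d[0, :], pts2d[1, :]              # pixel coordinates in camera 2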
Hope this helps!

How to plot a map in R without geographic coordinates?

I have a data set with some meteorological stations. The locations of these stations are given as positions in a grid that is used for the estimation of some models, i.e., they have x and y values with ranges of around (0, 600) for x and (0, 500) for y. If I plot them, the "map" looks like this:
[scatter plot of the stations at their grid x-y locations]
Now, I would like to have a real map of Europe under these points. How can I do this?
I have some sort of IDs that I could partially match to a database from the World Meteorological Organization, which allowed me to get the longitudes and latitudes for a subset of these stations (around 15% are missing, however). If I plot those on a real map, using for example the maps package for R, I get the locations on a map which looks like this:
[map of the matched stations at their true coordinates]
However, for some stations in my data set no IDs are given, so I cannot match them to their true coordinates. The second plot thus contains about 15% fewer stations than the first one, but I would like to show all points from the first plot on a real map (as in the second plot). I thus need a way to get from the x-y locations to the true geographical locations, or a way to transform a map so that it matches the first plot.
It is impossible to recover the missing true locations, so my question is how I can draw a map under the points given by the x and y locations in the grid.

inverse interpolation of multidimensional grids

I am working on a project interpolating sample data {(x_i, y_i)}, where the input x_i lies in a 4D space and the output y_i lies in a 3D space. I need to generate lookup tables for both directions. I managed to generate the 4D -> 3D table, but the 3D -> 4D one is tricky. The sample data are not on regular grid points, and it is not a one-to-one mapping. Is there any known method for treating this situation? I did some searching online, but what I found only covers 3D -> 3D mappings, which are not suitable for this case. Thank you!
To answer the questions of Spektre:
X(3D) -> Y(4D) is the case 1 X -> n Y
I want to generate a table such that for any given X we can find a value for Y. The sample data do not occupy the whole domain of X, but that's fine; we only need accuracy for points inside the domain of the sample data. For example, we have sample data like {(x1,x2,x3) -> (y1,y2,y3,y4)}. It is possible we also have a sample {(x1,x2,x3) -> (y1_1,y2_1,y3_1,y4_1)}, but that is OK. We need a table where any (a,b,c) in space X corresponds to ONE (e,f,g,h) in space Y. There might be more than one choice, but we only need one. (Sorry for any confusing symbols.)
One possible way to deal with this: since I have already established a smooth mapping from Y -> X, I can use Newton's method or any other root-finding method to reverse-search the point y for any given x. But it is not accurate enough and it is time consuming, because I need to do a search for each point in the table, and the error is the sum of the model error and the search error.
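For concreteness, that reverse search could be sketched like this in Python (f, x_target and y0 are hypothetical stand-ins for the smooth Y -> X model, a queried X point and an initial guess):

import numpy as np
from scipy.optimize import minimize

def reverse_search(f, x_target, y0):
    # find a y (4D) with f(y) ~= x_target (3D) by minimizing the
    # squared residual of the known smooth Y -> X mapping f
    res = minimize(lambda y: np.sum((f(y) - x_target) ** 2), y0)
    return res.x

# toy mapping from 4D to 3D, just to make the sketch runnable
f = lambda y: np.array([y[0] + y[3], y[1] - y[3], 2.0 * y[2]])
y = reverse_search(f, np.array([1.0, 2.0, 3.0]), np.zeros(4))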
So I want to know whether it is possible to find a mapping that directly interpolates the sample data instead of doing that kind of search.
You are looking for projections/mappings
As you mentioned, you have a projection X(3D) -> Y(4D) which is not one-to-one in your case, so which case is it: (1 X -> n Y), (n X -> 1 Y), or (n X -> m Y)?
you want to use a look-up table
I assume you just want to generate all X for a given Y. The problem with non-one-to-one mappings is that you can use a lookup table only if it has
all valid points
or the mapping has some geometric or mathematical symmetry (for example, the distance between points in X and Y space is similar, and the mapping is continuous).
You cannot interpolate between generic mapped points, so the question is what kind of mapping/projection you have in mind.
First, the 1 -> 1 projection/mapping interpolation:
if your X -> Y projection mapping is suitable for interpolation,
then for 3D -> 4D use tri-linear interpolation: find the closest 8 points (along each axis, so they form a grid hypercube) and interpolate between them in all 4 dimensions.
if your X <- Y projection mapping is suitable for interpolation,
then for 4D -> 3D use quatro-linear interpolation: find the closest 16 points (along each axis, so they form a grid hypercube) and interpolate between them in all 3 dimensions.
Now, what about 1 -> n or n -> m projections/mappings?
That depends solely on the projection/mapping properties, of which I know nothing. Try to provide an example of your datasets; adding an image would be best.
[edit1] 1 X <- n Y
I would still use quatro-linear interpolation. You will still need to search your Y table, but if you group it like a 4D grid then it should be easy enough.
find the 16 closest points in the Y-table to your input Y point
These points should be the closest points to your Y in each +/- direction of all axes. In 3D it looks like this:
the red point is your input Y point
the blue points are the found closest points (grid); they do not need to be as symmetric as in the image.
Please do not ask me to draw a 4D example that makes sense :) (at least not to a sober mind).
interpolation
Find the corresponding X points. If there is more than one per point, choose the one closest to the others... Now you should have 16 X points and 16+1 Y points. Then, from the Y points, you need to calculate the distances along the lines from your input Y point. These distances are used as the parameters for the linear interpolations. Normalize them to <0,1>, where
0 means the 'left' point and 1 means the 'right' point
0.5 means the exact middle.
You will need this scalar distance in each of the Y-domain dimensions. Now just compute all the X points along the linear interpolations until you get the corresponding red point in the X-domain.
With tri-linear interpolation (3D) there are 4+2+1 = 7 linear interpolations (as in the image). For quatro-linear interpolation (4D) there are 8+4+2+1 = 15 linear interpolations.
linear interpolation
X = X0 + (X1-X0)*t
X is the interpolated point
X0, X1 are the 'left' and 'right' points
t is the distance parameter in <0,1>
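Here is a minimal Python sketch of this lerp cascade (the names and the corner-array layout are my assumptions). For 3D it performs exactly the 4+2+1 = 7 lerps counted above; one more outer level gives the 4D case with 8+4+2+1 = 15 lerps:

import numpy as np

def lerp(x0, x1, t):
    # linear interpolation: X = X0 + (X1 - X0) * t, with t in <0,1>
    return x0 + (x1 - x0) * t

def trilinear(corners, tx, ty, tz):
    # corners: (2, 2, 2, dim) array holding the 8 hypercube corner values
    c = lerp(corners[0], corners[1], tx)   # 4 lerps along the 1st axis
    c = lerp(c[0], c[1], ty)               # 2 lerps along the 2nd axis
    return lerp(c[0], c[1], tz)            # 1 final lerp: 4+2+1 = 7 total

def quatrolinear(corners, ts):
    # corners: (2, 2, 2, 2, dim) array of the 16 corner values,
    # ts: the 4 normalized distance parameters; 8+4+2+1 = 15 lerps
    c = corners
    for t in ts:
        c = lerp(c[0], c[1], t)
    return c

# usage: interpolate 3D output values at the centre of the hypercube
print(trilinear(np.random.rand(2, 2, 2, 3), 0.5, 0.5, 0.5))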
