PDAL diff execution - point-cloud-library

I have two point clouds: one is of an area before development, one is after.
I want to isolate the land-surface points (A) that are now under the new buildings, and I want to isolate the new buildings themselves (B).
I see PDAL has a diff function; would this enable me to export A and B as described above?
I would also like to add a tolerance of ±10 cm to the selection; is this possible?

This is not the intent of pdal diff. According to https://www.pdal.io/apps/diff.html:
The command checks for the equivalence of the following items:
Different schema
Expected count
Metadata
Actual point count
Byte-by-byte point data
A separate PDAL kernel would be required for what you are looking to do.
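If you just want to see what pdal diff actually reports, here is a minimal sketch of invoking it from Python. It assumes PDAL is installed and on the PATH; before.las and after.las are placeholder file names.

```python
import subprocess

# Run the PDAL "diff" application on the two clouds (file names are placeholders).
# It reports differences in schema, point counts, metadata and byte-by-byte point
# data -- it does not extract or export subsets of points.
result = subprocess.run(
    ["pdal", "diff", "before.las", "after.las"],
    capture_output=True, text=True, check=False
)

print(result.stdout or result.stderr)
```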

Related

How to delete areas of polygons that overlap?

I am trying to take a single layer of polygon buffers and delete the areas in which these circular buffers overlap (or intersect... I am not sure about the correct terminology here). I have so far tried the Intersection tool and the Symmetrical Difference tool, but these require two layer inputs, and I am working with only a single layer. How can I accomplish this? I am working in QGIS.
Here is what I am working with:
I simply want to select and delete the areas where these circles overlap. I have searched this extensively online, but cannot find a solution that works for me, since I am only working with one layer.
You can try a Select by Location analysis: if you set the "Where the feature" condition to "Overlap", it selects just the overlapping features.
Then delete them afterwards.
You can perform an SQL query with the Execute SQL tool [https://docs.qgis.org/3.16/en/docs/user_manual/processing_algs/qgis/vectorgeneral.html#id85],
where Additional input datasources is your circle layer, which must have a unique-value attribute; in my example the attribute is called fid.
The query is:
select c1.fid as fid1, c2.fid as fid2, intersection(c1.geometry,c2.geometry) as geometry
from input1 as c1, input1 as c2
where c1.fid > c2.fid and st_intersects(c1.geometry,c2.geometry)
The query result is the intersection between the geometries:
After that you can do a symmetrical difference to obtain the areas without the overlaps.
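Outside of QGIS, the same idea (pairwise intersections of a single layer, then removing those areas from the circles) can be sketched with shapely. This is a minimal sketch, assuming the circular buffers are available as shapely polygons rather than as a QGIS layer; the coordinates are stand-in data.

```python
from shapely.geometry import Point
from shapely.ops import unary_union

# Stand-in for the circular buffers of the single input layer
# (in practice these would be read from the layer/shapefile).
circles = [Point(0, 0).buffer(1.0), Point(1.5, 0).buffer(1.0), Point(4, 0).buffer(1.0)]

# Pairwise intersections (the equivalent of the self-join SQL query above).
overlaps = [
    circles[i].intersection(circles[j])
    for i in range(len(circles))
    for j in range(i + 1, len(circles))
    if circles[i].intersects(circles[j])
]
overlap_union = unary_union(overlaps)

# Remove the overlapping areas from every circle
# (the "symmetrical difference" step described above).
cleaned = [c.difference(overlap_union) for c in circles]

for geom in cleaned:
    print(geom.wkt[:60], "...")
```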
There is a new QGIS plugin named "Buffer Without Overlaps" that works fine!

A large number of points to create separate polygons (ArcGIS/QGIS)

Visual example of the data
I used a drone to create a DOF of a small area. During the flight, it takes a photo every 20-ish seconds (about 40 meters of flight). I created a CSV file, which I converted to a point shapefile. In total, I flew 10 so-called "missions" with the drone, each with 100-200 points, which appear as squares on the map. What I want now is to create a polygon shapefile from the point shapefile.
Because those points sometimes overlap, I cannot use the "Aggregate Points" task, as it is only distance-based. I want to make the polygons automatically, using some kind of script. What could help is the fact that the maximum time between two points (i.e. photos taken) is 10-20 seconds, so if the time gap is over 3 minutes, it is another "mission". Can you help with such a script, one that would quickly and automatically create as many polygons as there are missions?
Okay, I think I understand what you are trying to accomplish. Since no one has replied, I am going to give it a quick shot so you have something to try.
I think the best strategy would be to:
Clustering algorithm: Try running a clustering algorithm such as DBSCAN on the timestamp dimension to classify the points into time-based groups rather than distance-based ones (since, as you said, distance-based separation is not enough to properly identify and separate the points). Afterwards, all the points should be classified into different groups, with a group-id column. The maximum-distance parameter of the algorithm should be around 20 seconds, or even a minute (since you said each mission is separated by at least about 3 minutes).
Feature-based points to polygon: Then run your generic Polygon_from_points(...) function that turns the clustered points into polygon shapes based on a specific discriminating feature (which in your case is the group id).
How does this work?: This first separates the groups properly (time-based), and then you should be able to find a generic point-to-polygon tool driven by a feature (ArcGIS should have some).
I don't have an example dataset or any code written, but based on what you described I think it would work. Hope it helps.
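A minimal sketch of the two steps above in Python, assuming the point shapefile has already been read into arrays of x, y and a timestamp in seconds; the stand-in data, the eps value, and the choice of convex hulls as the polygon shape are placeholders to adapt to your data.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from shapely.geometry import MultiPoint

# Stand-in data: three "missions" several minutes apart.
rng = np.random.default_rng(0)
t = np.concatenate([rng.uniform(0, 300, 100),
                    rng.uniform(600, 900, 100),
                    rng.uniform(1200, 1500, 100)])
x = rng.uniform(0, 100, 300)
y = rng.uniform(0, 100, 300)

# Step 1: cluster on the timestamp only, so gaps larger than `eps_seconds`
# start a new group (ideally one group per mission).
eps_seconds = 60.0
labels = DBSCAN(eps=eps_seconds, min_samples=1).fit_predict(t.reshape(-1, 1))

# Step 2: one polygon per group, here simply the convex hull of the group's points.
polygons = {}
for group_id in np.unique(labels):
    pts = MultiPoint(list(zip(x[labels == group_id], y[labels == group_id])))
    polygons[group_id] = pts.convex_hull

for group_id, poly in polygons.items():
    print(group_id, poly.geom_type)
```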

How to fix this line shapefile?

I am creating a line from a point shapefile which is auto-generated. The first time I created the line in ArcGIS, I got a line like this because the points are not in order:
After that I ordered the points according to their location and got a line like this:
But I am unable to create a line like this:
Please give me a solution to fix this in ArcGIS or R. If you need the shapefile I can provide it.
I think there is no bulletproof way to restore the line, as the same dataset can obviously represent different lines, so you would need to use some heuristics. What Rafael described is a very good top-down heuristic if you can reliably detect the start and end points.
Another heuristic could be a bottom-up process to stitch nearby segments into a line: find the nearby points for every point, sort them, and connect the two nearest points. Continue this process, making sure each point has at most two connections and that there are no loops.
A simpler approach that might just work, if the line generally follows some direction, is your idea of sorting the points. But instead of ordering by the x or y coordinate, find a linear approximation of the points, project each point onto that straight line, and sort by the coordinate of the projection.
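A minimal sketch of that projection idea with NumPy, assuming the point coordinates are available as an (n, 2) array; the example data is a stand-in.

```python
import numpy as np

def order_points_along_fitted_line(points):
    """Sort points by their projection onto the best-fit straight line."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # First right-singular vector = direction of the best linear approximation.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    # 1-D coordinate of each point along that direction.
    projection = centered @ direction
    return pts[np.argsort(projection)]

# Example: a noisy roughly-straight line given in shuffled order.
rng = np.random.default_rng(1)
raw = np.column_stack([np.linspace(0, 10, 50), np.linspace(0, 5, 50)])
raw += rng.normal(scale=0.1, size=raw.shape)
rng.shuffle(raw)
ordered = order_points_along_fitted_line(raw)
print(ordered[:5])
```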
One way to go about this would be to treat it as a graph problem.
Create a weighted, fully connected graph where the nodes are the points and each edge weight is the distance between its endpoints. Heuristically identify the "starting" and "ending" points of the line (for example, pick the bottom-left-most point as the start and the top-right-most as the end).
Then you can use a shortest-path algorithm to generate the order in which you connect the points.
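A minimal sketch of that graph idea in Python with networkx. One assumption not in the answer above: each point is connected only to its k nearest neighbours rather than to every other point, so that the shortest path is forced to follow the chain of points instead of jumping straight from start to end. The start/end heuristic and k are placeholders.

```python
import numpy as np
import networkx as nx
from scipy.spatial import cKDTree

def order_points_by_shortest_path(points, k=5):
    """Order points along a line via a shortest path on a k-nearest-neighbour graph."""
    pts = np.asarray(points, dtype=float)
    tree = cKDTree(pts)
    dists, idxs = tree.query(pts, k=k + 1)  # first neighbour is the point itself

    graph = nx.Graph()
    for i, (drow, irow) in enumerate(zip(dists, idxs)):
        for d, j in zip(drow[1:], irow[1:]):
            graph.add_edge(i, int(j), weight=float(d))

    # Heuristic start/end: bottom-left-most and top-right-most points.
    start = int(np.argmin(pts.sum(axis=1)))
    end = int(np.argmax(pts.sum(axis=1)))

    path = nx.shortest_path(graph, source=start, target=end, weight="weight")
    return pts[path]

# Example usage with a noisy curve given in shuffled order.
rng = np.random.default_rng(2)
t = np.linspace(0, np.pi, 80)
curve = np.column_stack([t, np.sin(t)]) + rng.normal(scale=0.01, size=(80, 2))
rng.shuffle(curve)
print(order_points_by_shortest_path(curve)[:5])
```

Note that this only orders the points that end up on the path; any points the path skips would still need to be inserted between their nearest on-path neighbours.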

Finding fairways and ports from ship track points using R

I have some data points from different ships, including lon/lat, time, and ID values, recorded by the AIS devices on the ships. I want to use these point values to create lines which indicate the ship tracks, and then use the track lines to find fairways and ports.
Now I use the trip package in R to build the track data. However, I find that the data points from some ships are not continuous: sometimes points are missing from a segment of a track, and sometimes there are "bad" points (points in a track that jump to a far-away location). I need to filter out the "bad" points and fill in the missing points. When I use the speedfilter function in the trip package to filter the "bad" points, there are two problems:
I set max.speed, but a lot of points below this max.speed are flagged; is that a problem with the CRS?
The speedfilter function always flags the point next to the "bad" point and misses the "bad" point itself.
You can plot your lat/lng data as spatial lines using leaflet.
Concerning
max.speed: According to the manual, track points are tested for speed between the previous/next and the 2nd previous/next points, so I think what you observe is by design.
speedfilter: You should be able to solve the problem by always subtracting 1 from the index or by shifting the returned vector.
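The trip package is R, but the index-shift idea is language-agnostic. Here is a purely illustrative sketch in Python (not the trip API), assuming a logical keep/drop vector where the flag lands on the point after the "bad" one:

```python
import numpy as np

# Suppose the filter output flags (False) the point *after* each bad point:
keep = np.array([True, True, False, True, True, False, True])

# Shift the flags back by one position so each flag lands on the bad point itself.
shifted = np.roll(keep, -1)
shifted[-1] = True  # nothing follows the last point, so keep it

print(shifted)
```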

How to implement KdTree using PCLPointCloud2 used in loadOBJfile in point cloud library?

Okay, so I have an OBJ file which I read into a PCLPointCloud2. Now I want to feed it into a k-d tree, which does not take PCLPointCloud2 as input. I want to query whether any given point lies on the surface of my OBJ file.
I am finding it hard to understand their documentation. So how can it be done?
Also, kindly point me to a good, easily interpretable reference. And what is "PointT", by the way? Is it a custom type defined by us? Please elaborate.
Look at the code in the provided tool pcl_mesh_sampling (in the PCL code directory under tools/mesh_sampling.cpp). It is relatively simple. It loads a model from PLY or OBJ then for each triangle it samples random points from the triangle. The final point cloud then undergoes a voxel-grid sample to make the points relatively uniform. Alternatively, you can just run the pcl_mesh_sampling program on your obj file to get an output PCD which you can then visualise with pcl_viewer before loading the PCD file into your own code.
Once you have the final point cloud, you can build and use a KD-Tree as per http://pointclouds.org/documentation/tutorials/kdtree_search.php
PointT is the template argument. The point cloud library can handle a variety of point types, from simple PointXYZ (having just x,y,z) to more complicated points like PointXYZRGBNormal (having x,y,z,normal_x,normal_y,normal_z, curvature, r, g, and b channels). Each algorithm is templated on the point type that you want to use. It would probably be easier if you used PointXYZ with your OBJ file, so use pcl::PointXYZ for all your template arguments. For more on templates see http://www.tutorialspoint.com/cplusplus/cpp_templates.htm and http://pointclouds.org/documentation/tutorials/adding_custom_ptype.php.
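The linked tutorial is C++, but the kd-tree query itself is the same idea in any language. Here is an analogous sketch in Python with SciPy (not the PCL API), assuming the points sampled from the mesh are available as an (n, 3) array; the stand-in data and the tolerance value are placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

# Stand-in for the point cloud sampled from the mesh (e.g. via pcl_mesh_sampling).
surface_points = np.random.rand(10000, 3)

# Build the kd-tree over the sampled surface points.
tree = cKDTree(surface_points)

# Query point to test against the surface.
query = np.array([0.5, 0.5, 0.5])

# Distance to, and index of, the nearest sampled surface point.
distance, index = tree.query(query, k=1)
print("nearest surface point:", surface_points[index], "at distance", distance)

# A crude on-surface test: within some tolerance related to the sampling density.
tolerance = 0.01
print("on surface (approximately):", distance <= tolerance)
```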
Update (reply to latest comment)
Added here because this reply is too long for a comment.
I think I see what you are getting at. So when you sample points from the mesh surface and build a KD-tree of those points, for each point you keep track of which faces are near it (probably all the faces adjacent to the face from which the point was sampled should be sufficient; just one face is definitely insufficient). Then, when the query point is given, you find the nearest point in the KD-tree and check whether the query point is on the "outside" or the inside of the full list of nearby faces associated with that point in the KD-tree. If it is on the "inside" of all of them, perhaps it is an interior point, but I cannot guarantee that this is true. That is my thinking on the question at the moment. But I do wonder whether you really want a mesh-based approach. By the way, if you break your mesh up into convex parts, then you get nice guarantees when processing each convex part.

Resources