I need to import a shapefile of a drinking water network into hydraulic modelling software.
For this software I need intersection points wherever lines intersect.
So I tried to create all intersections, but this did not give the results I need.
An additional difficulty is that hydraulic systems also contain crossing pipes that have no connection with each other, so these would need to be excluded from the intersection. My first thought was to set a limit length of 1-5 cm for intersecting lines: if lines merely cross, the limit length would be exceeded and the algorithm would not intersect them.
Can someone help me here? At least with the first step - creation of intersections?
Screenshot from one missing intersection
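As a starting point for that first step, here is a minimal GeoPandas/Shapely sketch (the file names are placeholders; this nodes every crossing, so the unconnected crossing pipes would still need to be filtered out afterwards, e.g. with the limit-length idea above):

```python
import geopandas as gpd
from shapely.ops import unary_union

pipes = gpd.read_file("network.shp")   # hypothetical input shapefile

# unary_union "nodes" the linework: every place two pipes touch or cross
# becomes a shared vertex, and each pipe is split into segments there
noded = unary_union(pipes.geometry)

# the union of several lines is normally a MultiLineString: one row per segment
segments = gpd.GeoDataFrame(geometry=list(noded.geoms), crs=pipes.crs)
segments.to_file("network_noded.shp")
```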
Visual example of the data
I used a drone to create a DOF of a small area. During the flight, it takes a photo roughly every 20 seconds (about every 40 meters of flight). I created a CSV file, which I converted to a point shapefile. In total, I flew 10 so-called "missions" with the drone, each with 100-200 points which appear as squares on the map. What I want now is to create a polygon shapefile from the point shapefile.
Because those points sometimes overlap, I cannot use the "Aggregate Points" tool, as it is only distance-based. I want to create the polygons automatically, using some kind of script. What could help is the fact that the maximum time between two points (i.e. photos taken) is 10-20 seconds, so if the time gap is over 3 minutes, it's another "mission". Can you help with such a script, one that would quickly and automatically create as many polygons as there are missions?
Okay, I think I understand what you are trying to accomplish. Since no one replied I am going to give it a quick shot, so you have something to try.
I think the best strategy would be to:
Clustering algorithm: Try running a clustering algorithm such as DBSCAN on the timestamp dimension to classify the points into time-based groups instead of distance-based ones (since, as you said, distance-based separation is not enough to properly identify and separate the points). After that, you should have all the points classified into groups, with a group-id column. The maximum-distance parameter of the algorithm should be around 20 seconds, or even a minute (since you said the missions are separated by at least about 3 minutes).
Feature-based point to polygon: Then you run a generic Polygon_from_points(...) function that transforms the clustered points into polygon shapes based on a specific discriminant feature (which in your case is the group id).
How does this work? This properly separates the groups first (time-based), and then you should be able to find a generic feature-based point-to-polygon tool (ArcGIS should have some).
I don't have an example dataset, but based on what you described I think it would work. Hope it helps.
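A rough, untested sketch of both steps in Python (the CSV name and its column layout are assumptions):

```python
import numpy as np
from sklearn.cluster import DBSCAN
from shapely.geometry import MultiPoint

# hypothetical input: one row per photo -- timestamp in seconds, then x, y
data = np.loadtxt("photos.csv", delimiter=",")
t, xy = data[:, 0], data[:, 1:3]

# cluster on time only: eps=60 s chains photos taken within a minute of each
# other, while missions separated by >= 3 minutes fall into different clusters
labels = DBSCAN(eps=60, min_samples=1).fit_predict(t.reshape(-1, 1))

# step 2: one polygon per mission, here simply the convex hull of its points
polygons = {g: MultiPoint(xy[labels == g].tolist()).convex_hull
            for g in np.unique(labels)}
```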
I am creating a line from an auto-generated point shapefile. The first time I created the line in ArcGIS, I got a line like this, because the points are not in order:
After that I ordered the points according to their location and got a line like this:
But unable to create a line like this:
Please suggest any solution to fix this in ArcGIS or in R. If you need the shapefile, I can provide it.
I think there is no bullet-proof way to restore the line, as the same dataset can obviously represent different lines, so you would need to use some heuristics. What Rafael described is a very good top-down heuristic, if you can reliably detect the start and end points.
Another heuristic could be a bottom-up process that stitches nearby segments into a line: find the nearby points for every point, sort them, and connect each point to its two nearest neighbours. Continue this process, making sure each point has at most two connections and that no loops form.
A simpler approach that might just work, if the line follows some general direction, is your idea of sorting the points. But instead of ordering by the x or y coordinate, find a linear approximation of the points, project each point onto this straight line, and sort by the coordinate of the projection.
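Sketched out, that projection sort is only a few lines, assuming the points sit in an (n, 2) NumPy array (the SVD's principal direction is the least-squares line through the centred points):

```python
import numpy as np

def order_along_fit(pts):
    """Sort points by their projection onto a best-fit straight line."""
    centred = pts - pts.mean(axis=0)             # move the centroid to the origin
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    direction = vt[0]                            # principal direction of the cloud
    return pts[np.argsort(centred @ direction)]  # 1-D coordinate along the fit
```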
One way to go about this would be to treat it as a graph problem.
Create a weighted, fully connected graph where the nodes are the points and each edge weight is the distance between its endpoints. Heuristically identify the "starting" and "ending" points of the line (for example, pick the bottom-leftmost point as the start and the top-rightmost as the end).
Then you can use a shortest path algorithm to generate the order in which you connect the points.
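Note that a literal shortest path in a fully connected graph is just the direct start-to-end edge, so in practice this means a shortest path that visits every point. A minimal greedy nearest-neighbour sketch of that idea (a simplification, not an exact solver), assuming pts is an (n, 2) NumPy array:

```python
import numpy as np

def greedy_order(pts, start):
    """Order points by always hopping to the nearest unvisited one."""
    visited = np.zeros(len(pts), dtype=bool)
    order = [start]
    visited[start] = True
    while not visited.all():
        dist = np.linalg.norm(pts - pts[order[-1]], axis=1)
        dist[visited] = np.inf      # never revisit a point
        nxt = int(dist.argmin())
        order.append(nxt)
        visited[nxt] = True
    return order

# e.g. start from the bottom-leftmost point, as suggested above:
# order = greedy_order(pts, int(np.argmin(pts[:, 0] + pts[:, 1])))
```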
I have lat/lng data of multirotor UAV flights. There are a lot of data points (~13k per flight) and I wish to find line segments in the data. They give me flight speed and direction. I know that most of the flights are guided missions, meaning a point is given to fly to. However, the exact points are unknown to me.
Here is a graph of a single flight lat/lng shifted to near (0,0) so they are visible on the same time-series graph.
I attempted to generate similar data, but there are several constraints and it may take more time to solve than working on the segmenting.
The graphs start and end nearly always at the same point.
Horizontal lines mean the UAV is stationary. These segments are expected.
The beginning and end are always stationary, for takeoff and landing.
There is some level of noise in the lines from the GPS accuracy, though seemingly not that much.
A lot of data points.
The number of segments is unknown.
The noise I could estimate from the segments with a least-squares fit to each line. Currently I'm thinking of sampling the data (to decimate it a little) and constructing lines, then merging lines whose mutual angle is smaller than some x (dependent on the noise) and finding the intersection points of the lines that remain.
Another thought is to look at this problem in the frequency domain. The corners should be quite high-frequency. Maybe I could build a custom filter kernel that would let me use a window function and gain efficiency.
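For the segment-construction part, one ready-made alternative to the decimate-fit-merge plan is Ramer-Douglas-Peucker simplification (a different technique than described above), for example via Shapely; the tolerance is an assumption to be tuned against the GPS noise:

```python
from shapely.geometry import LineString

def segment_corners(track, tolerance):
    """Approximate the corner points of a noisy piecewise-linear track.

    track: the flight's (lon, lat) samples in recorded order.
    tolerance: how far a vertex may sit from the simplified line
    (in coordinate units) before it is kept as a corner.
    """
    simplified = LineString(track).simplify(tolerance, preserve_topology=False)
    return list(simplified.coords)
```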
EDIT: Rewrote the question for more clarity and less rambling.
I have data points from different ships, including lon/lat, time, and ID values recorded by the AIS equipment on board. I want to use these point values to create lines that represent each ship's track, and then use the track lines to find fairways and ports.
Now I use the trip package in R to build the track data. However, I find that the data points from some ships may not be continuous: sometimes points are missing along a segment of the track, and sometimes there are "bad" points (points in a track that jump to a far-away location). I need to filter out the "bad" points and fill in the missing ones. When I use the speedfilter function in the trip package to filter the "bad" points, there are two problems:
I set max.speed, but a lot of points below this max.speed are flagged. Is that a problem with the CRS?
The speedfilter function always flags the point next to the "bad" point and misses the "bad" point itself.
You can plot your lat/lng data as spatial lines using leaflet.
Concerning
max.speed: According to the manual, track points are tested for speed between the previous/next and the 2nd previous/next points. So I think what you observe is intentional.
speedfilter: You should be able to solve the problem by always subtracting 1 from the index, or by shifting the returned vector.
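To illustrate the off-by-one in a language-neutral way (a Python sketch of a generic speed filter, not the trip package's actual code): there are only n-1 hop speeds for n points, so mapping a fast hop back to a point index is exactly where the shift matters:

```python
import numpy as np

def flag_bad_points(xy, t, max_speed):
    """Flag points that arrive faster than max_speed from their predecessor.

    xy: (n, 2) positions, t: (n,) times. speed[i] describes the hop from
    point i to point i+1, so a spike in speed[i] implicates index i+1.
    """
    speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) / np.diff(t)
    bad = np.zeros(len(xy), dtype=bool)
    bad[1:] = speed > max_speed   # blame the arrival point of each fast hop
    return bad
```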
I've implemented a function that checks whether two polygons, p1 and p2, overlap. To verify that p1 overlaps p2, the function takes every edge of p1 and tests whether one of its points is inside p2 and not on an edge of p2 (the polygons may share an edge).
The function works just fine; the problem is that it is called a thousand times, and it makes my program really slow since it has to iterate through each edge point by point. I'm only checking 4 cases of polygon overlap; they are:
If a triangle overlaps a triangle.
If a triangle overlaps a rectangle.
If a triangle overlaps a parallelogram.
If a rectangle overlaps a parallelogram.
Is there any simpler and faster way to check if these cases of overlapping occur?
I'm thinking all you are really looking for is the line-segment intersections. This can be done in O((N+k) log N), where N is the number of line segments (roughly the number of vertices) and k is the number of intersections, using the Bentley-Ottmann algorithm. This would probably be best done using ALL of the polygons at once, instead of considering only two at a time.
The trouble is keeping track of which line segments belong to which polygon. There is also the matter that considering all line segments may be worse than considering only two polygons at a time, in which case you can stop as soon as you get one valid intersection (some intersections may not meet your requirements to count as an overlap). The pairwise method is probably faster, although it may require trying different combinations of polygons; in that case, it's probably better to simply consider all line segments.
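Whichever way you organise it, both Bentley-Ottmann and a brute-force check rest on the same constant-time crossing predicate; a standard orientation-based version (generic, not tied to any particular library):

```python
def orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p): > 0 left turn, < 0 right turn."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_cross(a, b, c, d):
    """True if segments ab and cd properly cross (touching endpoints excluded,
    which matches the requirement that polygons may share an edge)."""
    return (orient(a, b, c) * orient(a, b, d) < 0 and
            orient(c, d, a) * orient(c, d, b) < 0)
```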
You can speed up the intersection test by first checking whether the edge intersects the bounding box of the polygon. Look at Line2D.intersects(Rectangle2D rect).
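The same bounding-box rejection works for whole polygons, too; a small generic sketch (the Line2D call above is the library route in Java):

```python
def bbox(poly):
    """Axis-aligned bounding box of a list of (x, y) vertices."""
    xs, ys = zip(*poly)
    return min(xs), min(ys), max(xs), max(ys)

def bboxes_overlap(p1, p2):
    """Cheap rejection: if the boxes are disjoint, the polygons cannot overlap."""
    ax0, ay0, ax1, ay1 = bbox(p1)
    bx0, by0, bx1, by1 = bbox(p2)
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1
```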
If you have tens of thousands of polygons, you can speed things up further by storing the polygons in a spatial index (a region quadtree, for example). This limits the search to a few polygons instead of brute-force searching all of them. But this probably only makes sense if you have to search often, not just once at program start-up.