R package to visualize time-dependent network of processes

I am trying to visualize a network that is time-dependent. Assuming time runs on the x-axis, I'd like to follow two intuitions: connected nodes appear close to one another (have similar y-values), and nodes with many edges appear near the center of the visualization, while those with fewer sit towards the upper and lower bounds. I'd like nodes not to overlap, and to avoid edges crossing nodes. The y-space is meaningless for my purposes, other than to organize the nodes to achieve the above outcomes. The nodes themselves extend over time, i.e. they are intervals, not points.
Here is a simple mock-up of what I'd like to achieve (each rectangle is a process):
Having looked around, I can't seem to find anything. I am trying to build my own solution in ggplot, creating rounds of connections (the first round has no connections into it, the second round has connections from the first round, the third round has connections from the second round, etc.). The y-score for the first round is based on the number of outward connections, with y-values for subsequent rounds being the mean of the y-scores of all prior-round nodes from which there are connections. I'll then adjust y-scores to prevent any processes from overlapping.
This seems like it should be possible, but it's proving challenging. As I write more and more code, I thought I'd ask if someone had already built a solution for this.
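For what it's worth, here is a minimal sketch of the layered y-placement heuristic described above (plain Python rather than R, since only the layout logic is shown; `processes` and `edges` are hypothetical inputs, and the connection graph is assumed acyclic):

```python
from collections import defaultdict

def layout_y(processes, edges):
    """processes: list of node ids; edges: list of (src, dst) pairs."""
    preds = defaultdict(list)
    out_deg = defaultdict(int)
    for src, dst in edges:
        preds[dst].append(src)
        out_deg[src] += 1

    # Round 1: nodes with nothing pointing at them, ordered so the most
    # connected sit nearest the centre line (y = 0), alternating sides.
    y = {}
    first = sorted((p for p in processes if not preds[p]),
                   key=lambda p: -out_deg[p])
    for rank, p in enumerate(first):
        y[p] = ((rank + 1) // 2) * (1 if rank % 2 else -1)

    # Later rounds: y is the mean of the already-placed predecessors.
    remaining = [p for p in processes if p not in y]
    while remaining:
        ready = [p for p in remaining if all(q in y for q in preds[p])]
        if not ready:  # cycle guard: bail out rather than loop forever
            break
        for p in ready:
            y[p] = sum(y[q] for q in preds[p]) / len(preds[p])
        remaining = [p for p in remaining if p not in y]
    return y
```

Overlap removal is deliberately left out; after this step you would nudge equal y-values apart before handing the coordinates to ggplot.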

Related

Registration of slightly overlapping point cloud

Problem image
I am trying to align/register (>4) 2-D point cloud segments from several laser scanners with high accuracy, producing a perimeter contour of the scanned product. The segments between lasers may look like the image above. The issue is that the calibration process may be both incorrect and not quite accurate enough (possibly with individual elevation tilt errors, so the segments are not shape-wise identical - close but not exact), and I am trying to make the best of the situation.
Visually, the segments have a slight bias in both directions as well as a rotational error compared to each other.
The difficulty is that the segments only partially overlap, contain low but noticeable noise which may be coherent, and the sampled point distribution in the overlapping region is both sparse and uneven, since the cameras are placed roughly 90 degrees apart.
My solution so far is to ignore the rotational bias, estimate the mean bias of selected correspondence points within the overlapping region, and translate each segment by that estimate, working around until I reach the opposite corner. It works somewhat OK, but it becomes a problem for the last set of sensors, since all the errors appear to accumulate there. Additionally, it fails when there is little or no overlapping region.
I am not a specialist, so complicated solutions may be more useful for others than for me. A relatively robust, iterative approach that can be simply coded would be best! I am grateful for any advice on this simple but quite challenging problem.
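For reference, the translate-by-mean-bias idea can be written as an ICP-style loop restricted to translation. A minimal sketch, assuming numpy/scipy and hypothetical (N, 2) point arrays:

```python
import numpy as np
from scipy.spatial import cKDTree

def register_translation(fixed, moving, iters=20, max_pair_dist=5.0):
    """Iteratively estimate a 2-D translation aligning `moving` onto `fixed`."""
    shift = np.zeros(2)
    tree = cKDTree(fixed)                 # nearest-neighbour lookup
    for _ in range(iters):
        moved = moving + shift
        dist, idx = tree.query(moved)     # closest fixed point for each point
        keep = dist < max_pair_dist       # only use the overlapping region
        if not keep.any():
            break
        # mean bias of the matched pairs, added to the running estimate
        shift += (fixed[idx[keep]] - moved[keep]).mean(axis=0)
    return shift
```

Estimating rotation as well (full 2-D ICP, with a Kabsch/SVD step on the matched pairs) is a small extension if the rotational error turns out to matter.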

A large amount of points to create separate polygons (ArcGIS/QGIS)

Visual example of the data
I used a drone to create a DOF of a small area. During the flight, it takes a photo every 20-ish seconds (roughly every 40 meters of flight). I have created a CSV file, which I converted to a point shapefile. In total, I flew 10 so-called "missions" with the drone, each with 100-200 points which appear as squares on the map. What I want now is to create a polygon shapefile from the point shapefile.
Because those points sometimes overlap, I cannot use the "Aggregate Points" tool, as it's only distance-based. I want to make the polygons automatically, using some kind of script. What could help is the fact that the maximum time between two points (i.e. photos taken) within a mission is 10-20 seconds, so if the time gap is over 3 minutes, it's another "mission". Can you help with such a script, which would quickly and automatically create as many polygons as there are missions?
Okay, I think I understand what you are trying to accomplish. Since no one has replied, I am going to give it a quick shot so you have something to try.
I think the best strategy would be to:
Clustering algorithm: Try running a clustering algorithm such as DBSCAN on the timestamp dimension to classify the points into time-based groups, instead of distance-based ones (since, as you said, distance-based separation is not enough to properly identify and separate the points). After that, you should have all the points classified into different groups via a group-id column. The maximum-distance parameter of the algorithm should be around 20 seconds, or even a minute (since you said the missions are separated by at least about 3 minutes).
Feature-based points-to-polygon: Then you run your generic Polygon_from_points(...) function that transforms these clustered points into polygon shapes based on a specific discriminant feature (which in your case is going to be the group id).
How does this work?: This properly separates the groups first (time-based), and then you should be able to find a generic feature-based points-to-polygon tool (ArcGIS should have one).
I don't have an example dataset, nor any code written, but based on what you described I think it would work; a rough sketch of both steps is below. Hope it helps.
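A rough sketch of the two steps, assuming plain Python with scikit-learn and shapely outside ArcGIS (the file and column names are hypothetical):

```python
import pandas as pd
from sklearn.cluster import DBSCAN
from shapely.geometry import MultiPoint

pts = pd.read_csv("drone_points.csv")  # hypothetical: x, y, timestamp columns

# Step 1: cluster on time alone. eps=60 s merges points taken within a
# minute of each other, while missions (>= 3 min apart) stay separate.
secs = pd.to_datetime(pts["timestamp"]).astype("int64").to_numpy() / 1e9
pts["group_id"] = DBSCAN(eps=60, min_samples=5).fit_predict(secs.reshape(-1, 1))

# Step 2: one polygon per group, here simply the convex hull of each cluster
# (group_id == -1 is DBSCAN's noise label and is skipped).
polygons = {gid: MultiPoint(list(zip(grp["x"], grp["y"]))).convex_hull
            for gid, grp in pts.groupby("group_id") if gid != -1}
```

A concave hull (the "Minimum Bounding Geometry" tool in ArcGIS, or shapely's concave_hull in recent versions) would hug the square-shaped missions more tightly than a convex hull.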

segmenting lat/long data graph into lines/vectors

I have lat/lng data from multirotor UAV flights. There are a lot of data points (~13k per flight) and I wish to find line segments in the data; they give me flight speed and direction. I know that most of the flights are guided missions, meaning a point is given to fly to. However, the exact points are unknown to me.
Here is a graph of a single flight's lat/lng, shifted to near (0,0) so both are visible on the same time-series graph.
I attempted to generate similar synthetic data, but there are several constraints, and it might take more time than working on the segmentation itself.
The graphs nearly always start and end at the same point.
Horizontal lines mean the UAV is stationary. These segments are expected.
The beginning and end are always stationary, for takeoff and landing.
There is some noise in the lines from the GPS accuracy, though seemingly not much.
A lot of data points.
The number of segments is unknown.
Given the segments, I could estimate the noise with a least-squares fit of a line to each one. Currently I'm thinking of sampling the data (to decimate it a little) and constructing lines, merging lines whose mutual angle is smaller than some x (dependent on the noise), and finding the intersection points of the lines that are left.
Another thought is to look at this problem in the frequency domain. The corners should be quite high-frequency. Maybe I could make a custom filter kernel that would enable me to use a window function and win in efficiency.
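One simply coded option in the spirit of the sample-and-merge idea is Ramer-Douglas-Peucker simplification, run on each coordinate against time: it keeps only the points that deviate from a straight chord by more than a tolerance, so the survivors are the segment endpoints. A minimal sketch, assuming numpy and a tolerance eps picked from the GPS noise:

```python
import numpy as np

def rdp(points, eps):
    """Ramer-Douglas-Peucker on an (M, 2) array: recursively keep the
    point farthest from the start-end chord while it exceeds eps."""
    start, end = points[0], points[-1]
    chord = end - start
    norm = np.linalg.norm(chord)
    if norm == 0:
        dists = np.linalg.norm(points - start, axis=1)
    else:
        # perpendicular distance of each point to the chord (2-D cross product)
        rel = points - start
        dists = np.abs(chord[0] * rel[:, 1] - chord[1] * rel[:, 0]) / norm
    i = int(np.argmax(dists))
    if dists[i] > eps:
        left = rdp(points[:i + 1], eps)
        right = rdp(points[i:], eps)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])

# e.g. breakpoints of latitude against time; eps in the data's own units
# knots = rdp(np.column_stack([t, lat]), eps=1e-5)
```

Merging near-collinear neighbours afterwards, as you describe, then cleans up any over-segmentation.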
EDIT: Rewrote the question for more clarity and less rambling.

Calculate a dynamic iteration value when zooming into a Mandelbrot

I'm trying to figure out how to automatically adjust the maximum iteration value when moving around in the Mandelbrot fractal.
All the examples I've found use a constant of 1000 or less, but that's not enough when zooming into the set.
Is there a way to determine max_iterations based on, for example, where you are in the Mandelbrot space (x_start, x_end, y_start, y_end)?
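(One common rule of thumb, for comparison with the answer below: scale the cap with the logarithm of the zoom level. The constants here are arbitrary tuning knobs, not derived from theory.)

```python
import math

def dynamic_max_iter(x_start, x_end, base=100, per_decade=50):
    """Heuristic cap: grow iterations with the log of the zoom factor.
    base and per_decade are arbitrary knobs to tune by eye."""
    width = abs(x_end - x_start)       # current view width in Mset space
    zoom = 4.0 / max(width, 1e-300)    # the full set spans roughly 4 units
    return int(base + per_decade * math.log10(max(zoom, 1.0)))
```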
One method I tried was to repeatedly pre-process a small area in the region of the Mset boundary with increasing iterations, until the percentage change in status from one repetition to the next was small. The problem was that this varies in different places on the current map, since the "depth" varies across it. How to find the right place to do it? By logging the "deepest" boundary area during the previous generation (one that will still be within the next zoom area).
But my best strategy was to avoid iterating wherever possible:
Away from the boundary of the Mset, areas of equal depth can be "contoured" and then filled with that depth. It was not an easy algorithm. Basically I followed a raster scan, but when I detected a boundary of iteration change (examining all the neighbours to ensure I wasn't close to the edge of the Mset), I would switch to a curve-stitching method to iterate around a contour back to where it started (obviously not recalculating spots I had already done), and then make a second pass filling in the raster lines within the contour with the iteration level. It was fraught with leaks, but eventually I cracked it.
Within the Mset, I followed the same approach, because the very last thing you want to do is to plough across vast areas and hit the iteration limit.
The difficult area is close to the boundary, where the iteration results can't be related to smooth contours shared with the neighbours. The contour-stitching method won't work there, since there is only ever 1 pixel of a particular depth.
The contour method will also have faults on the lower or Mset sides of this region, but since this area looks chaotic until you zoom deeper, I lived with that.
So having said all that, I simply set the iteration depth as high as I can tolerate, but perhaps you can combine my first paragraph with the area-filling techniques.
BTW, colouring the region adjacent to the Mset looks terrible when an animated smooth playback of the zoom is attempted. For that reason I coloured this area in greyscale, by comparing with neighbours: if there was too much difference, I coloured it 0x808080 at first, then adapted that depending on the predominance of the neighbours' depths. All requiring fine tuning!

How to determine all line segments from a list of points generated from a mouse gesture?

Currently I am interning at a software company, and one of my tasks has been to implement recognition of mouse gestures. One of the senior developers helped me get started and provided code/projects that use the $1 Unistroke Recognizer http://depts.washington.edu/aimgroup/proj/dollar/. I get, in a broad way, what the $1 Unistroke Recognizer is doing and how it works, but I'm a bit overwhelmed trying to understand all of its internals/finer details.
My problem is that I am trying to recognize the gesture of moving the mouse downwards, then upwards. The $1 Unistroke Recognizer determines that the gesture I created was a downwards gesture, which is in fact what it ought to do. What I really would like it to do is say "I recognize a downwards gesture AND THEN an upwards gesture."
I do not know if my incomplete understanding of the $1 Unistroke Recognizer is what has me scratching my head, but does anyone have any ideas on how to recognize two different gestures from moving the mouse downwards then upwards?
Here is an idea I thought might help; I would love for someone who is an expert, or even knows just a bit more than me, to let me know what you think. Any help or resources you know of would be greatly appreciated.
How My Application Currently Works:
My current application works by capturing points from where the mouse cursor is while the user holds down the left mouse button. A list of points then gets fed to the gesture recognizer, and it spits out what it thinks is the best shape/gesture corresponding to the captured points.
My Idea:
What I want to do, before I feed the points to the gesture recognizer, is to somehow go through all the points and break them down into separate lines or curves. This way I could feed each line/curve in one at a time, and from the basic movements of down, up, left, right, diagonals, and curves, I could determine the final shape/gesture.
One way I thought of for determining whether there are separate lines in my list of points is sampling groups of points and looking at their slope. If the slope of one sampled group of points differed by more than X% from another sampled group, it would be safe to assume that there is indeed a separate line present.
What I Think Are Possible Problems In My Thinking:
Where do I determine the end of a line and the start of a separate line? If I were to use the idea of checking the slope of a group of points and then determined that there was a separate line present, that doesn't mean I necessarily found the slope of a separate line. For example, if you were to draw a straight-edged "L" with a right angle and sample the slope of the points around the corner of the "L", you would see that the slope gives a reasonable indication that there is a separate line present, but those points don't correspond to the start of a separate line.
How do I deal with the ever-changing slope of a curved line? The gesture recognizer that I use already handles curves the way I want it to. But I don't want the method I use to determine separate lines to keep looking for these so-called separate lines in a curve, because its slope changes every time I sample a group of points. Would I just stop sampling points once the slope changed more than X% so many times in a row?
I'm not using the correct "type" of math for determining separate lines. Math isn't my strongest subject, but I did do some research. I tried to look into dot products to see if they would point me in some direction, but I don't know if they will. Has anyone used dot products for something like this, or some other method?
Final Thoughts, Remarks, And Thanks:
Part of my problem, I feel, is that I don't know how to completely ask my question. I wouldn't be surprised if this problem has already been asked (in one way or another) and a solution exists that can be Googled. But my Google searches didn't provide any solutions, as I just don't know exactly how to ask my question yet. If you find it confusing, please let me know where and why, and I will help clarify it. In doing so, maybe my searches will become more precise and I will be able to find a solution.
I just want to say thanks again for reading my post. I know it's long, but I didn't really know where else to ask it. I'm going to talk with some other people around the office, but all of my best solutions throughout school have come from the StackOverflow community, so I owe much thanks to you.
Edits To This Post:
(7/6 4:00 PM) Another idea I thought about was comparing all the points before a min/max point. For example, if I moved the mouse downwards then upwards, my starting point would be the current max point, while the point where I start moving the mouse back upwards would be my min point. I could then go ahead and look to see if there are any points after the min point, and if so, say that there could be a new potential line. I don't know how well this will work on other shapes, like stars, but that's another thing I'm going to look into. Has anyone done something similar to this before?
If your problem can be narrowed down to breaking a general curve apart into straight or smoothly curved partial lines, then you could try this.
Comparing the slopes of the segments and identifying breaking points where the difference is greater than some threshold would work in a very simplified case. Imagine a perfectly formed L-shape where you have a right angle between two straight lines. Obviously the corner point would be the only one where the slope difference is above the threshold (as long as the threshold is between 0 and 90 degrees), and thus an identifiable breaking point.
However, the vertical and horizontal lines may be slightly curved, so the threshold would need to be large enough for these small differences in slope to be ignored as breaking points. You'd also have to decide how sharp a corner the algorithm should pick up as a break: is 90 degrees or higher required, or is even 30 degrees enough? This is an important question.
Finally, to make this robust, I would not be satisfied comparing the slopes of two adjacent segments. Hands shake, corners get smoothed out, and the ideal conditions for finding straight lines and sharp corners will probably never occur. For each point investigated as a break, I would take the average slope of the N previous segments and compare it to the average slope of the N following segments. This can be implemented efficiently using a running mean. By choosing a good sample count N (depending on the accuracy of the input, the total number of points, etc.), the algorithm can avoid the noise and make better detections.
Basically the algorithm would be:
For each investigated point (beginning N points into the sequence and ending N points before the end):
Compute the average slope of the N previous segments.
Compute the average slope of the N next segments.
If the difference of the averages is greater than the threshold, mark the current point as a breaking point.
This is quite off the top of my head. You'd have to try it in your application.
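A minimal sketch of that loop, assuming numpy and an (M, 2) array of points; it averages unit direction vectors rather than raw slopes, which sidesteps angle wrap-around:

```python
import numpy as np

def find_breakpoints(points, n=5, thresh_deg=30.0):
    """points: (M, 2) array. Flag index i when the mean direction of the
    n segments before i differs from the n after it by more than thresh."""
    d = np.diff(points, axis=0)                        # segment vectors
    # unit directions; averaging these avoids wrap-around at +/-180 degrees
    u = d / np.maximum(np.linalg.norm(d, axis=1, keepdims=True), 1e-12)
    thresh = np.radians(thresh_deg)
    breaks = []
    for i in range(n, len(u) - n + 1):
        before = u[i - n:i].mean(axis=0)
        after = u[i:i + n].mean(axis=0)
        denom = max(np.linalg.norm(before) * np.linalg.norm(after), 1e-12)
        angle = np.arccos(np.clip(np.dot(before, after) / denom, -1.0, 1.0))
        if angle > thresh:
            breaks.append(i)                           # point i is a break
    return breaks
```

As written this recomputes the window means at every step; maintaining the running mean suggested above brings it down to a single pass over the points.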
If you work with absolute angles, like upwards and downwards, you can simply take the absolute slope between two points (not necessarily adjacent) to determine whether the movement is RIGHT, LEFT, UP, or DOWN (if that is enough of a distinction).
The art is to find a distance between points such that the angle is not essentially random (with 1 px spacing, the angle will always be a multiple of 45°).
There is a Firefox plugin for navigation using mouse gestures that works very well. I think it's FireGestures, but I'm not sure. You may be able to get some inspiration from that one.
Additional thought: if you draw a shape by connecting successive points, then connect back to the first point, the ratio between the enclosed area and the final line segment's length is also an indicator of the gesture's "edginess".
If you are just interested in up/down/left/right, a first approximation is to check 45-degree segments of a circle. This is easily done by checking the horizontal difference between (successive) points against the vertical difference between points.
Say you have a greater positive horizontal difference than vertical difference, then that would be 'RIGHT'.
The only difficulty then comes in, for example, distinguishing UP/DOWN from UP/RIGHT/DOWN. But this can be handled with distances between points: if you determine that the mouse has moved RIGHT for less than, say, 20 pixels, you can ignore that movement.
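A minimal sketch of that quantization (the 20-pixel cutoff from above as min_dist; screen coordinates are assumed, so y grows downward):

```python
def direction(p, q, min_dist=20):
    """Classify the move from point p to point q as UP/DOWN/LEFT/RIGHT,
    ignoring moves shorter than min_dist pixels (y grows downward)."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    if abs(dx) < min_dist and abs(dy) < min_dist:
        return None                      # too small to count
    if abs(dx) > abs(dy):                # the 45-degree sector test
        return "RIGHT" if dx > 0 else "LEFT"
    return "DOWN" if dy > 0 else "UP"

# Collapsing consecutive repeats turns a point list into a gesture string,
# e.g. ['DOWN', 'UP'] for the down-then-up stroke in the question.
```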
