I'm planning a triangle rasterizer on the GPU (GLSL) and I came across an algorithm that could remove barycentric coordinates (which are quite expensive to compute) from rasterization.
Given the triangle ABC with the edges a, b, c and a linear equation f.
In the first step I check each of the edges a, b, c for intersection with f (the triangle is already projected into R²),
then I take the parameter from the edge's linear equation (each edge is a linear equation) and use this parameter in R³ to determine a new equation which connects the two intersection points on two different edges of the triangle.
So now I can fill my triangle line by line (the pixels between the intersection points are filled).
Based on the linear equation in R³ (through the intersection points) I can now determine the R³ coordinates of each pixel, so I have a depth value.
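For illustration, a minimal CPU-side C++ sketch of this scanline approach might look like the following (hypothetical names; no clipping or fill-rule handling):

// Minimal CPU-side sketch of the scanline approach described above (hypothetical
// names).  For each scanline y we find where it crosses the triangle edges,
// then linearly interpolate x and depth between the crossings.
#include <algorithm>
#include <cmath>

struct Vertex { float x, y, z; };   // already projected to screen space, z = depth

// Intersect the horizontal line `y` with edge v0->v1; returns false if no hit.
static bool edgeAtScanline(const Vertex& v0, const Vertex& v1, float y,
                           float& xOut, float& zOut) {
    if (v0.y == v1.y || y < std::min(v0.y, v1.y) || y > std::max(v0.y, v1.y))
        return false;
    float t = (y - v0.y) / (v1.y - v0.y);    // parameter along the edge
    xOut = v0.x + t * (v1.x - v0.x);
    zOut = v0.z + t * (v1.z - v0.z);         // depth interpolated along the edge
    return true;
}

// Fill one triangle, calling plot(x, y, depth) for every covered pixel.
template <typename Plot>
void fillTriangle(const Vertex& A, const Vertex& B, const Vertex& C, Plot plot) {
    const Vertex* e[3][2] = {{&A, &B}, {&B, &C}, {&C, &A}};
    int yMin = (int)std::ceil(std::min({A.y, B.y, C.y}));
    int yMax = (int)std::floor(std::max({A.y, B.y, C.y}));

    for (int y = yMin; y <= yMax; ++y) {
        float xs[3], zs[3]; int n = 0;
        for (int i = 0; i < 3; ++i)
            if (edgeAtScanline(*e[i][0], *e[i][1], (float)y, xs[n], zs[n])) ++n;
        if (n < 2) continue;

        // leftmost and rightmost crossing (a scanline through a vertex can
        // report the same point twice)
        int l = 0, r = 0;
        for (int i = 1; i < n; ++i) { if (xs[i] < xs[l]) l = i; if (xs[i] > xs[r]) r = i; }

        // interpolate depth between the two crossings along the scanline
        for (int x = (int)std::ceil(xs[l]); x <= (int)std::floor(xs[r]); ++x) {
            float s = (xs[r] == xs[l]) ? 0.f : (x - xs[l]) / (xs[r] - xs[l]);
            plot(x, y, zs[l] + s * (zs[r] - zs[l]));
        }
    }
}

Note that this interpolates depth linearly in screen space; texture coordinates and other attributes need perspective-correct interpolation (dividing by w), which is part of why textures become awkward with this scheme.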
I know it's a bit complicated, so if you have any questions please ask me.
So do you think it's possible this way, and if yes, is it faster?
If I forgot anything or made a mistake, please let me know.
If you have any optimization ideas or other suggestions, please let me know as well.
I found out that it is a bit faster, but I have some problems with textures, and the additional calculations would slow the whole thing down, so I think I will use barycentric coordinates.
I am trying to solve a programming interview question that requires one to find the maximum number of points that lie on the same straight line in a 2D plane. I have looked up the solutions on the web. All of them discuss an O(N^2) solution using hashing, such as the one at this link: Here
I understand the part where the common gradient is used to check for collinear points, since that is a common mathematical technique. However, the solution points out that one must beware of vertical lines and overlapping points. I am not sure how these cases can cause problems. Can't I just store the gradient of vertical lines as infinity (a large number)?
Hint:
Three distinct points are collinear if
x_1*(y_2-y_3)+x_2*(y_3-y_1)+x_3*(y_1-y_2) = 0
No need to check for slopes or anything else. You do need to eliminate duplicate points from the set before the search begins, though.
So pick a pair of points, and find all other points that are collinear and store them in a list of lines. For the remaining points do the same, and then compare which lines have the most points.
The first time around you have n-2 tests. The second time around you have n-4 tests, because there is no point in revisiting the first two points. Next time n-6, etc., for a total of about n/2 passes. In the worst case this results in (n/2)*(n/2-1) operations, which is O(n^2) complexity.
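For illustration, a minimal C++ sketch of that test with 64-bit integer coordinates (hypothetical names) could look like this; note that a vertical line needs no special handling:

// Minimal sketch of the collinearity test from the hint, using integer
// arithmetic so vertical lines need no special case (hypothetical names).
#include <cstdint>
#include <iostream>

struct Pt { std::int64_t x, y; };

// Three points are collinear iff x1*(y2-y3) + x2*(y3-y1) + x3*(y1-y2) == 0.
bool collinear(const Pt& a, const Pt& b, const Pt& c) {
    return a.x * (b.y - c.y) + b.x * (c.y - a.y) + c.x * (a.y - b.y) == 0;
}

int main() {
    Pt a{1, 1}, b{1, 5}, c{1, -3};      // three points on a vertical line
    std::cout << std::boolalpha << collinear(a, b, c) << '\n';   // true
    Pt d{2, 2};
    std::cout << collinear(a, b, d) << '\n';                      // false
}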
PS. Whoever decided the canonical answer is using slopes knows very little about planar geometry. People invented homogeneous coordinates for points and lines in a plane for the exact reason of having to represent vertical lines and other degenerate situations.
I have a point p and multiple circles C1,C2,C3.... Cn.
How can I find whether the point is in the intersection area of these circles?
I only know the center (x, y) and the radius of each circle C1...Cn, and the coordinates of the point p(x, y).
I have to write C++ code implementing this.
But first I need to know the logic. Please help.
Just check whether the distance of the point from each circle's center is less than that circle's radius; the point is in the intersection area only if this holds for all of the circles.
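A minimal sketch of that check in C++ (hypothetical names; comparing squared distances avoids the square root):

#include <vector>

struct Circle { double cx, cy, r; };

// True if the point (px, py) lies inside (or on) every circle.
bool inIntersection(double px, double py, const std::vector<Circle>& circles) {
    for (const Circle& c : circles) {
        double dx = px - c.cx, dy = py - c.cy;
        if (dx * dx + dy * dy > c.r * c.r)   // strictly outside this circle
            return false;
    }
    return true;
}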
EDIT: homework? I should not have answered :/
I'd like to draw the 3-simplex which encloses some random points in 3D. So for example:
pts <- rnorm(30)
pts <- matrix(pts, ncol = 3)
With these points, I'd like to compute the vertices of the 3-simplex (irregular tetrahedron) that just encloses these points. Can someone suggest a package/function that will do this? All manner of searching for simplex-related material is dominated by answers that relate to using simplices for other purposes, of which there are many. I just want to compute one and draw it. Seems simple, but I don't seem to know the relevant keywords for what I need.
If nobody can find a suitable package for this, you'll have to settle for doing it yourself, which isn't so difficult if you don't require it to be the absolute tightest fit. See this question over at Mathematics Stack Exchange.
The simplest approach presented in that question seems to me to be translating the origin so that all points lie in the positive orthant (i.e., all coordinates are positive) and then projecting the points to lie within the simplex spanned by the unit vectors. To get this simplex in your original coordinate system you can take the inverse projection and inverse translation of the points in this simplex.
Another approach suggested there is to find the enveloping sphere (for which you can, for instance, use Ritter's algorithm), and then find an enveloping simplex of that sphere, which might be an easier task depending on what you are most comfortable with.
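If you take the sphere route, a rough C++ sketch could look like this (hypothetical names; Ritter's sphere is approximate rather than minimal, and a regular tetrahedron whose insphere has radius r has its vertices at distance 3r from the centre):

#include <array>
#include <cmath>
#include <vector>

struct P3 { double x, y, z; };

static double dist2(const P3& a, const P3& b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Ritter's approximate bounding sphere: start from two (roughly) farthest
// points, then grow the sphere to absorb any point still outside it.
// Assumes pts is not empty.
void ritterSphere(const std::vector<P3>& pts, P3& centre, double& radius) {
    const P3* a = &pts[0];
    const P3* b = a;
    for (const P3& p : pts) if (dist2(*a, p) > dist2(*a, *b)) b = &p;
    const P3* c = b;
    for (const P3& p : pts) if (dist2(*b, p) > dist2(*b, *c)) c = &p;

    centre = { (b->x + c->x) / 2, (b->y + c->y) / 2, (b->z + c->z) / 2 };
    radius = std::sqrt(dist2(*b, *c)) / 2;

    for (const P3& p : pts) {                       // grow to include outliers
        double d = std::sqrt(dist2(centre, p));
        if (d > radius) {
            double newR = (radius + d) / 2;
            double k = (newR - radius) / d;         // move centre towards p
            centre = { centre.x + (p.x - centre.x) * k,
                       centre.y + (p.y - centre.y) * k,
                       centre.z + (p.z - centre.z) * k };
            radius = newR;
        }
    }
}

// Regular tetrahedron enclosing the sphere: its insphere radius is 1/3 of its
// circumradius, so place the vertices at distance 3*radius from the centre.
std::array<P3, 4> tetrahedronAroundSphere(const P3& c, double r) {
    const double s = 3.0 * r / std::sqrt(3.0);
    return {{ { c.x + s, c.y + s, c.z + s },
              { c.x + s, c.y - s, c.z - s },
              { c.x - s, c.y + s, c.z - s },
              { c.x - s, c.y - s, c.z + s } }};
}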
I think you're looking for convhulln in the geometry package, but I'm no expert, so maybe that isn't quite what you are looking for.
I'm looking for a very simple algorithm for computing the polygon intersection/clipping.
That is, given polygons P, Q, I wish to find polygon T which is contained in P and in Q, and I wish T to be maximal among all possible polygons.
I don't mind the run time (I have a few very small polygons), and I can also afford getting an approximation of the polygons' intersection (that is, a polygon with fewer points, but which is still contained in the polygons' intersection).
But it is really important for me that the algorithm be simple (cheaper testing) and preferably short (less code).
edit: please note, I wish to obtain a polygon which represents the intersection. I don't need only a boolean answer to the question of whether the two polygons intersect.
I understand the original poster was looking for a simple solution, but unfortunately there really is no simple solution.
Nevertheless, I've recently created an open-source freeware clipping library (written in Delphi, C++ and C#) which clips all kinds of polygons (including self-intersecting ones). This library is pretty simple to use: https://github.com/AngusJohnson/Clipper2
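A hedged sketch of calling the library from C++, based on my reading of the project's README (please check the repository for the exact, current API):

#include "clipper2/clipper.h"
#include <iostream>

int main() {
    using namespace Clipper2Lib;
    Paths64 subject, clip;
    // MakePath takes a flat list of x,y integer pairs
    subject.push_back(MakePath({0, 0, 100, 0, 100, 100, 0, 100}));
    clip.push_back(MakePath({50, 50, 150, 50, 150, 150, 50, 150}));
    Paths64 solution = Intersect(subject, clip, FillRule::NonZero);
    std::cout << "intersection has " << solution.size() << " polygon(s)\n";
}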
You could use a polygon clipping algorithm to find the intersection between two polygons. However, these tend to be complicated algorithms when all of the edge cases are taken into account.
One implementation of polygon clipping that you can use your favorite search engine to look for is Weiler-Atherton; see the Wikipedia article on Weiler-Atherton.
Alan Murta has a complete implementation of a polygon clipper GPC.
Edit:
Another approach is to first divide each polygon into a set of triangles, which are easier to deal with. The Two-Ears Theorem by Gary H. Meisters does the trick. This page at McGill does a good job of explaining triangle subdivision.
If you use C++, and don't want to create the algorithm yourself, you can use Boost.Geometry. It uses an adapted version of the Weiler-Atherton algorithm mentioned above.
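For reference, a minimal Boost.Geometry call might look like this (patterned on the library's documented intersection example; verify against your Boost version):

#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>
#include <deque>
#include <iostream>

namespace bg = boost::geometry;
using point   = bg::model::d2::point_xy<double>;
using polygon = bg::model::polygon<point>;

int main() {
    polygon p, q;
    bg::read_wkt("POLYGON((0 0, 0 4, 4 4, 4 0, 0 0))", p);
    bg::read_wkt("POLYGON((2 2, 2 6, 6 6, 6 2, 2 2))", q);
    bg::correct(p);   // fix orientation/closure expected by the library
    bg::correct(q);

    std::deque<polygon> output;           // the intersection may be several pieces
    bg::intersection(p, q, output);

    for (const polygon& r : output)
        std::cout << "piece with area " << bg::area(r) << "\n";
}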
You have not given us your representation of a polygon. So I am choosing (more like suggesting) one for you :)
Represent each polygon as one big convex polygon, and a list of smaller convex polygons which need to be 'subtracted' from that big convex polygon.
Now given two polygons in that representation, you can compute the intersection as:
Compute the intersection of the big convex polygons to form the big polygon of the intersection. Then 'subtract' the intersections of all the smaller ones of both to get a list of subtracted polygons.
You get a new polygon following the same representation.
Since convex polygon intersection is easy, this intersection finding should be easy too.
This seems like it should work, but I haven't given it much deeper thought regarding correctness or time/space complexity.
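Since the building block here is intersecting two convex polygons, here is a hedged sketch of that step (Sutherland-Hodgman style clipping of one convex polygon against each edge of the other; hypothetical names, counter-clockwise polygons assumed, degenerate cases ignored):

#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Signed area of triangle o-a-b; positive if b is left of the line o->a.
static double cross(const Pt& o, const Pt& a, const Pt& b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// Intersection point of segment a-b with the infinite line through c-d.
static Pt lineIntersect(const Pt& a, const Pt& b, const Pt& c, const Pt& d) {
    double d0 = cross(c, d, a);          // signed area of a relative to line c-d
    double d1 = cross(c, d, b);          // signed area of b relative to line c-d
    double t  = d0 / (d0 - d1);
    return { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y) };
}

// Clip `subject` against every edge of convex polygon `clip`.
std::vector<Pt> convexIntersection(std::vector<Pt> subject, const std::vector<Pt>& clip) {
    for (std::size_t i = 0; i < clip.size() && !subject.empty(); ++i) {
        const Pt& c = clip[i];
        const Pt& d = clip[(i + 1) % clip.size()];
        std::vector<Pt> out;
        for (std::size_t j = 0; j < subject.size(); ++j) {
            const Pt& a = subject[j];
            const Pt& b = subject[(j + 1) % subject.size()];
            bool aIn = cross(c, d, a) >= 0;   // left of (or on) edge c-d
            bool bIn = cross(c, d, b) >= 0;
            if (aIn) out.push_back(a);
            if (aIn != bIn) out.push_back(lineIntersect(a, b, c, d));
        }
        subject = out;                        // continue with the next clip edge
    }
    return subject;                           // vertices of the intersection
}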
Here's a simple-and-stupid approach: on input, discretize your polygons into a bitmap. To intersect, AND the bitmaps together. To produce output polygons, trace out the jaggy borders of the bitmap and smooth the jaggies using a polygon-approximation algorithm. (I don't remember if that link gives the most suitable algorithms; it's just the first Google hit. You might check out one of the tools out there that convert bitmap images to vector representations. Maybe you could call on them without reimplementing the algorithm?)
The most complex part would be tracing out the borders, I think.
Back in the early 90s I faced something like this problem at work, by the way. I muffed it: I came up with a (completely different) algorithm that would work on real-number coordinates, but seemed to run into a completely unfixable plethora of degenerate cases in the face of the realities of floating-point (and noisy input). Perhaps with the help of the internet I'd have done better!
I don't have a very simple solution, but here are the main steps of the real algorithm:
Build a custom doubly linked list for the polygon vertices and edges. Using std::list won't do, because you must swap next and previous pointers/offsets yourself for a special operation on the nodes. This is the only way to have simple code, and it will give good performance.
Find the intersection points by comparing each pair of edges. Note that comparing each pair of edges gives O(N²) time, but improving the algorithm to O(N·log N) is easy afterwards. For a pair of edges (say a→b and c→d), the intersection point is found by using the parameter (from 0 to 1) on edge a→b, which is given by tₐ=d₀/(d₀-d₁), where d₀ is (a-c)×(d-c) and d₁ is (b-c)×(d-c), and × is the 2D cross product such that p×q=pₓ·qᵧ-pᵧ·qₓ. After having found tₐ, the intersection point is obtained by using it as a linear interpolation parameter on segment a→b: P=a+tₐ(b-a). (A sketch of this computation follows the list of steps.)
Split each edge, adding vertices (and nodes in your linked list) where the segments intersect.
Then you must cross the nodes at the intersection points. This is the operation for which you needed a custom doubly linked list: you must swap some pairs of next pointers (and update the previous pointers accordingly).
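As referenced in the second step, here is a hedged sketch of that edge-edge intersection computation (hypothetical names; parallel and collinear edges are not handled):

// Edge-edge intersection.  Returns the parameters tAB on a->b and tCD on c->d
// plus the intersection point, so both edges can be split there.
#include <optional>

struct V2 { double x, y; };

static double cross2(const V2& p, const V2& q) { return p.x * q.y - p.y * q.x; }
static V2 sub(const V2& p, const V2& q) { return { p.x - q.x, p.y - q.y }; }

struct EdgeHit { double tAB, tCD; V2 point; };

std::optional<EdgeHit> intersectEdges(V2 a, V2 b, V2 c, V2 d) {
    // signed areas of a and b relative to line c->d, and of c and d relative to a->b
    double a0 = cross2(sub(a, c), sub(d, c));
    double a1 = cross2(sub(b, c), sub(d, c));
    double c0 = cross2(sub(c, a), sub(b, a));
    double c1 = cross2(sub(d, a), sub(b, a));
    if (a0 == a1 || c0 == c1) return std::nullopt;     // parallel or degenerate

    double tAB = a0 / (a0 - a1);                       // parameter on a->b
    double tCD = c0 / (c0 - c1);                       // parameter on c->d
    if (tAB < 0 || tAB > 1 || tCD < 0 || tCD > 1)
        return std::nullopt;                           // the segments miss each other

    V2 p { a.x + tAB * (b.x - a.x), a.y + tAB * (b.y - a.y) };
    return EdgeHit{ tAB, tCD, p };
}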
Then you have the raw result of the polygon intersection resolving algorithm. Normally, you will want to select some region according to the winding number of each region. Search for "polygon winding number" for an explanation of this.
If you want to make an O(N·log N) algorithm out of this O(N²) one, you must do exactly the same thing, except that you do it inside a line-sweep algorithm. Look up the Bentley-Ottmann algorithm. The inner algorithm will be the same, with the only difference that you will have a reduced number of edges to compare inside the loop.
The way I worked on the same problem:
1. Break the polygons into line segments.
2. Find intersecting segments using interval trees or a line-sweep algorithm.
3. Find a closed path with adjacent vertices using a Graham scan.
4. Cross-reference step 3 with Dinic's algorithm to dissolve them.
Note: my scenario was different in that the polygons had a common vertex, but I hope this can help.
If you do not care about predictable run time, you could try first splitting your polygons into unions of convex polygons and then pairwise computing the intersection between the sub-polygons.
This would give you a collection of convex polygons such that their union is exactly the intersection of your starting polygons.
If the polygons are not aligned then they have to be aligned. I would do this by finding the centre of the polygon (average in X, average in Y), then incrementally rotating the polygon by a matrix transformation, projecting the points onto one of the axes, and using the angle of minimum standard deviation to align the shapes (you could also use principal components).
For finding the intersection, a simple algorithm would be to define a grid of points. For each point, maintain a count of whether it is inside one polygon, the other polygon, or both (union); there are simple and fast algorithms for this, e.g. http://wiki.unity3d.com/index.php?title=PolyContainsPoint. Count the points inside both polygon1 and polygon2, divide by the number of points in polygon1 or polygon2, and you have a rough (depending on the grid sampling) estimate of the proportion of overlap. The intersection area is given by the points corresponding to the AND operation.
eg.
function get_polygon_intersection($arr, $user_array)
{
$maxx = -999; // choose sensible limits for your application
$maxy = -999;
$minx = 999;
$miny = 999;
$intersection_count = 0;
$not_intersected = 0;
$sampling = 20;
// find min, max values of polygon (min/max variables passed as reference)
get_array_extent($arr, $maxx, $maxy, $minx, $miny);
get_array_extent($user_array, $maxx, $maxy, $minx, $miny);
$inc_x = ($maxx-$minx)/$sampling;
$inc_y = ($maxy-$miny)/$sampling;
// see if x,y is within poly1 and poly2 and count
for($i=$minx; $i<=$maxx; $i+= $inc_x)
{
for($j=$miny; $j<=$maxy; $j+= $inc_y)
{
$in_arr = pt_in_poly_array($arr, $i, $j);
$in_user_arr = pt_in_poly_array($user_array, $i, $j);
if($in_arr && $in_user_arr)
{
$intersection_count++;
}
else
{
$not_intersected++;
}
}
}
// return score as percentage intersection
return 100.0 * $intersection_count/($not_intersected+$intersection_count);
}
This can be a huge approximation depending on your polygons, but here's one:
Compute the center of mass for each polygon.
Compute the min or max or average distance from each point of the polygon to the center of mass.
If C1C2 (where C1/C2 are the centers of the first/second polygon) <= D1 + D2 (where D1/D2 are the distances you computed for the first/second polygon), then the two polygons may "intersect"; if the distance is larger, they certainly do not.
This should be very efficient, as any transformation applied to the polygon applies in the very same way to the center of mass, and the center-to-vertex distances need to be computed only once.
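A minimal sketch of that rejection test in C++ (hypothetical names; D is taken as the maximum centre-to-vertex distance, so a positive result only means the polygons might overlap):

#include <algorithm>
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Center of mass approximated as the average of the vertices.
static Pt centerOfMass(const std::vector<Pt>& poly) {
    Pt c{0, 0};
    for (const Pt& p : poly) { c.x += p.x; c.y += p.y; }
    c.x /= poly.size(); c.y /= poly.size();
    return c;
}

static double maxDistance(const std::vector<Pt>& poly, const Pt& c) {
    double best = 0;
    for (const Pt& p : poly)
        best = std::max(best, std::hypot(p.x - c.x, p.y - c.y));
    return best;
}

// True if the bounding circles overlap, i.e. the polygons might intersect;
// false means they certainly do not.
bool mightIntersect(const std::vector<Pt>& poly1, const std::vector<Pt>& poly2) {
    Pt c1 = centerOfMass(poly1), c2 = centerOfMass(poly2);
    double d1 = maxDistance(poly1, c1), d2 = maxDistance(poly2, c2);
    return std::hypot(c2.x - c1.x, c2.y - c1.y) <= d1 + d2;
}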