Number of triangles with N points inside - math

Given some points in the plane (up to 500 points), no 3 collinear. We have to determine the number of triangles whose vertices are from the given points and that contain exactly N points inside them. How can this be solved efficiently? The naive O(n^4) algorithm is too slow. Any better approach?

You could try thinking of the triangle as the intersection of three half-planes. To find the number of points inside a triangle A, B, C, first consider the set of points on each side of the infinite line through A and B. Call these sets L(AB) and R(AB), for the points to the left and to the right. Do the same with the other two edges to build the sets L(AC), R(AC), L(BC) and R(BC).
So the number of points inside ABC will be the number of points in the intersection of L(AB), L(AC) and L(BC). (You might want to use R(AB) instead, depending on the orientation of the triangle.)
Now consider the full set of 500 points. First take every pair of points A, B and construct the sets L(AB) and R(AB). This takes O(n^3) operations.
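The side-of-line test used to build L(AB) and R(AB) is just a cross-product sign check; a small C++ helper as a sketch (assuming integer coordinates, names my own):

    // Positive if P lies to the left of the directed line A->B, negative if to the
    // right (no three points are collinear, so it is never zero for a third point).
    long long sideOfLine(long long ax, long long ay, long long bx, long long by,
                         long long px, long long py) {
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
    }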
Next we test all triangles and find the intersection of the three sets. If we use a hash-table structure for the sets, finding the intersection reduces to hash-table lookups. Say L(AB) has l elements, L(AC) has m elements and L(BC) has n elements, with l > m > n. For each point in L(BC) we do a lookup in L(AB) and in L(AC), so that is at most 2n hash-table lookups per triangle.
It might be faster to consider a geometric lookup table.
Divide your whole domain into a coarse grid, say a 10-by-10 grid, and put each point into a set G(i,j). We can then split each set L(AB) by grid cell; call these sets L(AB,i,j) and R(AB,i,j). When testing for intersections, first work out which grid cells lie in the intersection. This dramatically reduces the search space, and as each set L(AB,i,j) contains fewer members there will be fewer hash-table lookups.

Actually I happened to encounter a similar problem recently, but the only difference was that there were around 300 points, and I solved it using bitset (C++ STL). For every pair of points, say (x[i],y[i]) and (x[j],y[j]), I formed a bitset<302> B[i][j], where B[i][j][k] stores 1 if the k-th point is above the line through points i and j, and 0 otherwise.
Now, in a brute-force manner, I pick three points to form a triangle, say (x[i],y[i]), (x[j],y[j]) and (x[k],y[k]). A point, say the z-th point, lies inside the triangle if B[i][j][z]==B[i][j][k] && B[j][k][z]==B[j][k][i] && B[k][i][z]==B[k][i][j], because a point inside the triangle is on the same side of each edge as the triangle's third vertex (the one not on that edge).
So I take three bitset variables P=B[i][j], Q=B[j][k] and R=B[k][i], compute their bitwise AND, and apply count() to get the number of set bits, and hence the number of points inside the triangle. Make sure each variable is oriented so that the triangle's third vertex is on its "1" side; if, say, B[i][j][k]==0, take the bitwise NOT (~) of P instead.
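A rough C++ sketch of this bitset approach (assuming at most 300 points with integer coordinates, and reading the required interior count from input; instead of complementing with ~, which would also flip bits for unused indices, it simply stores both directions B[i][j] and B[j][i]):

    #include <bitset>
    #include <iostream>
    #include <vector>
    using namespace std;

    const int MAXN = 302;
    bitset<MAXN> B[MAXN][MAXN];   // B[i][j][k] == 1 iff point k is left of the directed line i->j

    int main() {
        int n, wanted;             // number of points, required count of interior points
        cin >> n >> wanted;
        vector<long long> x(n), y(n);
        for (int i = 0; i < n; ++i) cin >> x[i] >> y[i];

        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j) {
                if (i == j) continue;
                for (int k = 0; k < n; ++k) {
                    if (k == i || k == j) continue;
                    long long cross = (x[j] - x[i]) * (y[k] - y[i])
                                    - (y[j] - y[i]) * (x[k] - x[i]);
                    B[i][j][k] = cross > 0;
                }
            }

        long long triangles = 0;
        for (int i = 0; i < n; ++i)
            for (int j = i + 1; j < n; ++j)
                for (int k = j + 1; k < n; ++k) {
                    // Orient each edge so the opposite vertex is on its "1" side;
                    // interior points are on the "1" side of all three edges.
                    const bitset<MAXN>& P = B[i][j][k] ? B[i][j] : B[j][i];
                    const bitset<MAXN>& Q = B[j][k][i] ? B[j][k] : B[k][j];
                    const bitset<MAXN>& R = B[k][i][j] ? B[k][i] : B[i][k];
                    if ((int)(P & Q & R).count() == wanted) ++triangles;
                }

        cout << triangles << "\n";
        return 0;
    }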
Though the above solution is problem specific, I hope it helps. This is the problem link: http://usaco.org/current/index.php?page=viewproblem&cpid=660

Related

Shortest distance to cover all the (N+1) points. All the N points lie on the x-axis. The remaining point lies anywhere in the coordinate plane

Given (N+1) points. All the N points lie on the x-axis. The remaining one point (the HEAD point) lies anywhere in the coordinate plane.
Given a START point on the x-axis.
Find the shortest distance needed to cover all the points starting from the START point. We can traverse a point multiple times.
Example N+1=4
points on x axis
(0,1),(0,2),(0,3)
HEAD Point
(1,1) //only head point can lie anywhere //Rest all on x axis
START Point
(0,1)
I am looking for a method for how to approach this problem.
Should we visit the HEAD point first, or somewhere in between?
I tried to find a way using graph theory to simplify this problem and reduce the paths that need to be considered. If there is an elegant way to represent this problem using graphs to identify a solution, I was not able to find it. This approach becomes very inefficient as n increases - the time and memory are O(2^n).
Looking at this as a tree graph, the root node would be the START point, then each of its child nodes would be the points it is connected to.
Since the START point and the rest of the points aside from the HEAD all lie on the x-axis, all non-HEAD points only need to be connected to adjacent points on the x-axis. This is because the distance of the path between any two points is the sum of the distances between adjacent points along the path between those two points (the subset of nodes representing points on the x-axis does not need to form a complete graph). This reduces the brute force approach somewhat.
Here's a simple example:
The upper left shows the original problem: points on the x-axis along with the START and HEAD points.
In the upper right, this has been transformed into a graph with each node representing a point from the original problem. The edges represent the paths that can be taken between points. This assumes that the START point only represents the first point in the path. Unlike the other nodes, it is only included in the path once. If that is not the case and the path can return to the START point, this would approximately double the possible paths, but the same approach can be followed.
In the bottom left, the START point, a, is the root of a tree graph, and each node connected to the START point is a child node. This process is repeated for each child node until either:
A path that is obviously not optimal is identified, in which case that node can just be excluded from the graph. See the nodes in red boxes; going back and forth between the same nodes is unnecessary.
All points are included when traversing the tree from the root to that node, producing a potential solution.
Note that when creating the tree graph, each time a node is repeated, its "potential" child nodes are the same as the first time the node was included. By "potential", I mean cases above still need to be checked, because the result might include a nonsensical path, in which case that node would not be included. It is also possible a potential solution results from the path after its child nodes are included.
The last step is to add up the distances for each of the potential solutions to determine which path is shortest.
This requires a careful examination of the different cases.
Assume for now that START (S) is on the far left and HEAD (H) is somewhere in the middle; the path may be something like
H
/ \
S ---- * ----*----* * --- * ----*
Or it might be shorter to go to H and back from one of the other nodes
H
//
S ---- * --- * -- *----------*---*
If S is not at one end you might have something like
H
/ \
* ---- * ----*----* * --- * ----*
--------S
Or even going directly from S to H on the first step
H
/ |
* ---- * ----*----* |
S
A full analysis of cases would be quite extensive.
Actually solving the problem might depend on the number of nodes you have. If the number is small (< 10), then complete enumeration might be possible: just work out every possible path, eliminate the ones which are illegal, and choose the smallest. The number of paths is, I think, on the order of n!, so it's computable for small n.
For large n you can break the problem into small segments. I think it's enough just to consider a small patch with nodes on either side of H and a small patch with nodes on either side of S.
This is not really a solution, but a possible way to think about tackling the problem.
(To be pedantic, stackoverflow.com is not the right site for this question in the Stack Exchange network; Computational Science or an algorithms-focused site might be a better place.)
This is a fun problem. First, let's try to find a brute-force solution, as Poosh did.
Observations about the Shortest Path
No repeated points
You are in a Euclidean geometry, so the triangle inequality holds: for all points a, b, c, we have d(a,c) <= d(a,b) + d(b,c). Thus, whenever an optimal path visits some point more than once, you can skip one of the repeated visits and go directly from its predecessor to its successor without increasing the length. Hence there is an optimal path that contains each point exactly once.
Permutations
Our problem is thus to find the permutation, let's call it M, of the numbers 1...n for points P1...Pn (where P0 is the fixed start point, Pn is the head point, and P1...Pn-1 are ordered by increasing x value) that minimizes the sum of |P_M(i) - P_M(i-1)| for i from 1 to n (with M(0) = 0), where |v| is the vector length sqrt(v_x² + v_y²).
The number of permutations of a set of size n is n!. In this case we have n+1 points, so a brute force approach testing all permutations would have complexity (n+1)!, which is higher than even 2^n and definitely not practical, so we need further observations to improve this.
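As a concrete baseline, here is a rough C++ sketch of that brute-force enumeration (the point type and function name are my own; P[0] is the fixed start point):

    #include <algorithm>
    #include <cmath>
    #include <limits>
    #include <numeric>
    #include <vector>
    using namespace std;

    struct Pt { double x, y; };

    double dist(const Pt& a, const Pt& b) { return hypot(a.x - b.x, a.y - b.y); }

    // Try every ordering of P[1..n-1] after the fixed start P[0] and return the
    // length of the shortest path. Only feasible for very small n, but useful
    // for validating faster approaches against.
    double shortestPathBruteForce(const vector<Pt>& P) {
        int n = (int)P.size();
        vector<int> order(n - 1);
        iota(order.begin(), order.end(), 1);          // indices 1..n-1
        double best = numeric_limits<double>::infinity();
        do {
            double len = 0;
            int prev = 0;                             // the path starts at P[0]
            for (int i : order) { len += dist(P[prev], P[i]); prev = i; }
            best = min(best, len);
        } while (next_permutation(order.begin(), order.end()));
        return best;
    }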
Next Steps
My next step would now be to see if there are any other sequences that can be proven to be not optimal, leading to a reduction in the number of candidates to be tested.
Paths of non-head points
Let's look at all paths (sequences of indices of points) that don't contain the head point and that are parts of the optimal path. If we don't change the start and end point of such a path, then any other transpositions have no effect on the outside environment and we can perform purely local optimizations. We can prove that those sequences must have monotonic (increasing or decreasing) x coordinate values and thus monotonic indices (as they are ordered by ascending x coordinate between indices 0 and n-1):
We are in a purely one-dimensional subspace, and the total distance of the path is thus equal to the sum of the absolute values of the differences in x coordinates between one such point and the next. It is clear that this sum is minimized by ordering by x coordinate in either ascending or descending order, and thus ordering the indices in the same way. Note that this is true for maximal such paths as well as for all contiguous "subpaths" of them.
Wrapping it up
The only choices we have left are:
where do we place the head node in the optimal path?
which way do we order the two paths to the left and right?
This means we have n values for the index of the head node (1...n, 0 is fixed as the start node) and 2x2 values for sort order. So we have 4n choices which we can all calculate and pick the shortest one. One of the sort orders probably determines the other but I leave that to you.
Anyway, the complexity of this algorithm is O(4n) = O(n). Because reading the input of the problem is O(n) and writing the output is as well, I believe this is an algorithm of optimal complexity. However, if we could reformulate the problem somewhat, so that we could read and write the input and output in some compressed form (only the parameters that we actually need to solve the problem), then it is possible that we could do better.
P.S.: I'm not a mathematician so I probably used wrong words for some concepts and missed the usual notation for the variables and functions. I would be glad for some expert to check this for any obvious errors.

Efficiently find a border around a binary group of points

This is more of a mathematical question.
I have a list of 2D coordinates of length N. (Nx2 list)
The coordinates are rounded numbers and form a region. The following is an example:
[figure: the rounded coordinates forming a region]
What I would like is to have a border around these points. Like the following:
[figure: the same region with a border of points around it]
One option to do this is to go through the list and, for each coordinate i, check each of the 8 possible neighbour positions j to see
whether it overlaps with one of the given coordinates k, and
whether it overlaps with an already found border coordinate.
This works well, but needs N*N*8 calculations. For my N=1000 points: 8 million!
Does anyone know how this could be done more efficiently?
Best regards,
Martin
If the size of the grid is constrained and is on the order of N as well, you could do better and get to O(N) by making a 2-D array of ints the size of the grid.
Initialize the grid to zeros.
For each point in the list of points, set the point itself to negative in the grid array and set each neighbor that isn't negative to positive.
When you're done, each point in the 2-D grid array that's positive is the border.
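A rough C++ sketch of this grid-marking idea (assuming the coordinates already fit inside a known W x H grid; names are my own):

    #include <utility>
    #include <vector>
    using namespace std;

    // Marks occupied cells as -1 and their empty neighbours as +1; the +1 cells
    // are the border. O(N) when the grid size is on the order of N.
    vector<pair<int,int>> findBorderGrid(const vector<pair<int,int>>& points, int W, int H) {
        vector<vector<int>> grid(H, vector<int>(W, 0));
        for (auto [x, y] : points) grid[y][x] = -1;            // the points themselves
        for (auto [x, y] : points)
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
                    if (grid[ny][nx] == 0) grid[ny][nx] = 1;   // empty neighbour -> border
                }
        vector<pair<int,int>> border;
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                if (grid[y][x] == 1) border.emplace_back(x, y);
        return border;
    }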
Make a sorted container that enforces uniqueness of coordinates (in the C++ STL, that's a std::set). Go through the points, adding each point's eight neighbours to the set if they aren't already in there. Then go through the points a second time, removing them from the set if they are in the set. The points that remain are the border. That's O(N*log(N)). In general that's the best you can do, but see my other answer for a better algorithm if additional criteria exist.
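And a rough sketch of that set-based version in C++ (function name is my own):

    #include <set>
    #include <utility>
    #include <vector>
    using namespace std;

    // Collect all eight neighbours of every point, then erase the points
    // themselves; what remains is the border. O(N log N) overall.
    vector<pair<int,int>> findBorderSet(const vector<pair<int,int>>& points) {
        set<pair<int,int>> candidates;
        for (auto [x, y] : points)
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    if (dx != 0 || dy != 0) candidates.insert({x + dx, y + dy});
        for (const auto& p : points) candidates.erase(p);      // drop occupied cells
        return vector<pair<int,int>>(candidates.begin(), candidates.end());
    }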

How to test if 2 sets of planes (each defining a volume in 3D space) overlap?

To take a simple example, say there are 2 bounding boxes (not necessarily axis aligned), each defined by 6 planes.
Is there a good way to determine if the volumes defined by each set of planes overlap?
(Only true/false, no need for the intersecting volume).
A solution to this problem, if it's general, should be able to scale up to many sets of planes too.
So far the solutions I've come up with basically rely on converting each set of planes into geometry (vertices and polygons), then performing the intersection as you would to intersect any 2 regular meshes. However I was wondering if there is a more elegant method that doesn't rely on this.
The intersection volume (if any) is the set of all points on the right side of all planes (combined, from both volumes). So, if you can select 3 planes whose intersection is on the right side of all the remaining planes, then the two volumes have an intersection.
This is a linear programming problem. In your case, you only need to find if there is a feasible solution or not; there are standard techniques for doing this.
You can determine the vertices of one of your bodies by mutually intersecting all possible triples that its planes form, and then check whether each of the resulting vertices lies on the good side of the planes defining the second body. When each of the second body's planes is given as base vertex p and normal v, this involves checking whether (x-p).v>=0 .
Assume that your planes are given as base vertices (p, q, r) and normals (u, v, w) respectively. With the normals forming the rows of a matrix M, the intersection is x = inv(M).(p.u, q.v, r.w).
Depending on how regular your two bodies are (e.g. parallelepipeds), many of the dot products and matrix inverses can be precomputed and reused. Perhaps you can share some of your prerequisites.
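A rough C++ sketch of that vertex-then-containment test (the Plane struct and helper names are mine; normals are assumed to point towards the inside of each volume):

    #include <array>
    #include <cmath>
    #include <vector>
    using namespace std;

    using Vec3 = array<double, 3>;
    struct Plane { Vec3 p; Vec3 n; };   // base point and inward-pointing normal

    double dot(const Vec3& a, const Vec3& b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
    Vec3 cross(const Vec3& a, const Vec3& b) {
        return { a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0] };
    }

    // Intersect three planes: solve M x = d, where the rows of M are the normals,
    // using the standard triple-product formula. Returns false if nearly singular.
    bool intersectThreePlanes(const Plane& A, const Plane& B, const Plane& C, Vec3& x) {
        double det = dot(A.n, cross(B.n, C.n));
        if (fabs(det) < 1e-12) return false;
        double dA = dot(A.n, A.p), dB = dot(B.n, B.p), dC = dot(C.n, C.p);
        Vec3 bc = cross(B.n, C.n), ca = cross(C.n, A.n), ab = cross(A.n, B.n);
        for (int i = 0; i < 3; ++i)
            x[i] = (dA * bc[i] + dB * ca[i] + dC * ab[i]) / det;
        return true;
    }

    // True when x lies on the inner side of every plane of the body: (x - p).n >= 0.
    bool insideAll(const vector<Plane>& body, const Vec3& x) {
        for (const auto& pl : body)
            if (dot({x[0]-pl.p[0], x[1]-pl.p[1], x[2]-pl.p[2]}, pl.n) < 0) return false;
        return true;
    }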
Posting this answer since this is one possible solution (just from thinking about the problem).
First calculate a point for each plane set (by intersecting 3 of its planes), and simply check whether either of these points is inside the other plane set. This covers cases where one volume is completely inside another, but of course won't work for partially overlapping volumes.
The following method can check for partial intersections.
For one of the sets, calculate the ray defined by each plane-plane pair.
Clip each of these rays by the other planes in the set (storing a minimum and maximum value per ray).
Discard any rays whose minimum value is greater than their maximum. The resulting rays represent all 'edges' of the volume.
So far all these calculations have been done on a single set of planes, so this information can be calculated once and stored for re-use.
Now continue clipping the rays, but this time use the other set of planes (again, discarding rays with a minimum greater than the maximum).
If there are one or more rays remaining, then there is an intersection.
Note 0): This isn't going to be efficient for large numbers of planes (too many O(n^2) checks going on). In that case converting to polygons and then using more typical geometry tree structures makes more sense.
Note 1): Discarding rays can be done as the plane-pairs are iterated over to avoid first having to store all possible edges, only to discard many.
Note 2): Before clipping all rays with the second set of planes, a quick check could be made by doing a point-inside test between the plane-sets (the point can be calculated using a ray and its min/max). This will work if one shape is inside another, however clipping the rays is still needed for a final result.
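A rough C++ sketch of the ray-clipping step described above (the Plane/Vec3 helpers mirror the earlier sketch; normals again assumed to point inward):

    #include <algorithm>
    #include <array>
    #include <cmath>
    #include <vector>
    using namespace std;

    using Vec3 = array<double, 3>;
    struct Plane { Vec3 p; Vec3 n; };   // base point and inward-pointing normal

    static double dot3(const Vec3& a, const Vec3& b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

    // Clip the ray o + t*d against a set of half-spaces (x - p).n >= 0.
    // tmin/tmax are tightened in place; returns false if the interval becomes
    // empty, i.e. the clipped 'edge' disappears.
    bool clipRay(const Vec3& o, const Vec3& d, const vector<Plane>& planes,
                 double& tmin, double& tmax) {
        for (const auto& pl : planes) {
            Vec3 op = { o[0]-pl.p[0], o[1]-pl.p[1], o[2]-pl.p[2] };
            double s = dot3(op, pl.n);       // signed offset of the ray origin
            double k = dot3(d, pl.n);        // rate of change of that offset along the ray
            // constraint on t: s + t*k >= 0
            if (fabs(k) < 1e-12) {
                if (s < 0) return false;     // parallel to the plane and outside it
                continue;
            }
            double t = -s / k;
            if (k > 0) tmin = max(tmin, t);  // entering the half-space
            else       tmax = min(tmax, t);  // leaving it
            if (tmin > tmax) return false;
        }
        return true;
    }

Each ray would come from a plane-plane pair (direction from the cross product of the two normals, origin any point on both planes); a ray that survives clipping against both plane sets indicates an intersection.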

Fast algorithm to uncross any crossing edges in a set of polygons

I have a number of polygons each represented as a list of points. I'm looking for a fast algorithm to go through the list of polygons and uncross all of the crossed edges until no crossed edges remain.
Pseudocode for the current version:
While True:
    For each pair of polygons:
        for edge1 in first_polygon:
            for edge2 in second_polygon:
                if edges_cross(edge1, edge2): # Uses a line segment intersection test
                    uncross_edges(first_polygon, second_polygon, edge1, edge2)
    If no edges have been uncrossed:
        break
This can be improved a fair bit by replacing the while loop with recursion. However, it's still rather poor in terms of performance.
Below is a simple example of the untangling. There will actually be a large number of polygons and a fair number of points per polygon (around 10-500). The red line shows which two edges are being uncrossed. The result should always be a series of planar graphs, although I'm not sure if there are multiple valid outcomes or just one.
Edit: This time I added the lines first, then added the points, and used a slightly more complex shape. Pretend the points are fixed.
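For reference, the edges_cross primitive the pseudocode assumes can be a standard orientation-based segment intersection test; a C++ sketch (names mine, collinear and endpoint-touching cases ignored for brevity):

    struct Pt { double x, y; };

    // Twice the signed area of triangle (a, b, c): positive for a counter-clockwise turn.
    double orient(const Pt& a, const Pt& b, const Pt& c) {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }

    // True when segments ab and cd properly cross at a single interior point.
    bool edgesCross(const Pt& a, const Pt& b, const Pt& c, const Pt& d) {
        double d1 = orient(a, b, c), d2 = orient(a, b, d);
        double d3 = orient(c, d, a), d4 = orient(c, d, b);
        return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
    }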
First, let us illustrate what you want (if I got it right). Suppose you have two polygons, one of which has an edge (a, b) that intersects an edge (s, r) of the other one. These polygons also have a clockwise orientation, so you know the next vertex after b, and the next vertex after r. Since the edges cross, you remove them both and add four new ones: (a, r), (r, next(b)); (s, b), (b, next(r)). So you again have two polygons. This is illustrated in the following figure. Note that by initially removing only two edges (one from each polygon), all the crossings were resolved.
Speeding up the trivial implementation of O(n^2) per iteration is not entirely easy, and 500 points per polygon is a very small amount to be worried about. If you decide that you need to improve this time, my initial suggestion would be to use the Bentley-Ottmann algorithm in some smart way. The smart way involves running the algorithm, and when you find an intersection, you apply the procedure above to eliminate it and then update the events that guide the algorithm. Hopefully, the events to be handled can be updated without rendering the algorithm useless for this situation, but I don't have a proof of that.
It seems that you want to end up with an embedded planar polygon whose vertices are exactly a given collection of points. The desired "order" on the points is what you get by going around the boundary of the polygon and enumerating the vertices in the order they appear.
For a given collection of points in general there will be more than one embedded polygon with this property; for an example, consider the following list of points:
(-1,-1), (0,0), (1,0), (1,1), (0,1)
This list defines a polygon meeting your criteria (if I understand it correctly). But so does the following ordering of this list:
(-1,-1), (1,0), (0,0), (1,1), (0,1)
Here is one algorithm that will work (I don't know about fast).
First, sort your points by x-coordinate (eg with quicksort) in increasing order (call this list L).
Second, find the convex hull (eg with quickhull); the boundary of the convex hull will contain the leftmost and rightmost points in the sorted list L (call these L[1] and L[n]); let S be the subset of points on the boundary between L[1] and L[n].
The list you want is S in the order it appears in L (which will also be the order it appears in the boundary of the convex hull) followed by the other elements L-S in the reverse of the order they appear in L.
The first two operations should usually take time O(n log n) (worst case O(n^2)); the last will take time O(n). The polygon you get will be the lower boundary of the convex hull (from left to right, say), together with the rest of the points in a "zigzag" above them going from right to left.
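A rough C++ sketch of this construction (names are mine; it builds the lower hull left to right, then appends the remaining points right to left):

    #include <algorithm>
    #include <vector>
    using namespace std;

    struct Pt { double x, y; };

    double cross(const Pt& o, const Pt& a, const Pt& b) {
        return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
    }

    // Order the points into a simple polygon: the lower convex hull from left to
    // right, followed by all remaining points from right to left ("zigzag" above).
    vector<Pt> simplePolygon(vector<Pt> pts) {
        sort(pts.begin(), pts.end(), [](const Pt& a, const Pt& b) {
            return a.x < b.x || (a.x == b.x && a.y < b.y);
        });
        vector<size_t> lower;                       // indices of the lower hull
        for (size_t i = 0; i < pts.size(); ++i) {
            while (lower.size() >= 2 &&
                   cross(pts[lower[lower.size()-2]], pts[lower.back()], pts[i]) <= 0)
                lower.pop_back();
            lower.push_back(i);
        }
        vector<char> onLower(pts.size(), 0);
        for (size_t i : lower) onLower[i] = 1;
        vector<Pt> result;
        for (size_t i : lower) result.push_back(pts[i]);
        for (size_t i = pts.size(); i-- > 0; )      // the rest, right to left
            if (!onLower[i]) result.push_back(pts[i]);
        return result;
    }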

How to determine ordering of 3D vertices

If I have 5 vertices in 3D coordinate space, how can I determine the ordering of those vertices, i.e. clockwise or anticlockwise?
If I elaborate more on this,
I have a 3D model which consists of a set of polygons. Each polygon is a collection of vertices, and I want to calculate the normal of the polygon surface. To calculate the normal I have to consider the vertices in counter-clockwise order. My question is: given a set of vertices, how can I determine whether they are ordered clockwise or counter-clockwise?
This is for navigation mesh generation, where I want to remove the polygons which cannot be walked on by the agent. To do so my approach is to calculate the surface normal (the perpendicular vector of the polygon) and remove the polygon based on its angle with the 2D plane. To calculate the normal I need to know in which order the points are arranged. So, for a given set of points in a polygon, how can I determine the order of the points?
Ex.
polygon1 consists of these 3 points: Vertex1 = [-21.847065 -2.492895 19.569759], Vertex2 = [-22.279873 1.588395 16.017160], Vertex3 = [-17.234818 7.132950 7.453146]. How can I determine their order?
As others have noted, your question isn't entirely clear. Is this for something like a 3D backface culling test? If so, you need a point relative to which the winding direction is determined. Viewed from one side of the polygon the vertices will appear to wind clockwise; from the other side they'll appear to wind counter-clockwise.
But suppose your polygon is convex and properly planar. Take any three consecutive vertices A, B, and C. Then you can find the surface normal vector using the cross product:
N = (B - A) x (C - A)
Taking the dot product of the normal with a vector from the given view point, V, to one of the vertices will give you a value whose sign indicates which way the vertices appear to wind when viewed from V:
w = N . (A - V)
Whether this is positive for clockwise and negative for anticlockwise or the opposite will depend on the handedness of your coordinate system.
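A small C++ sketch of that computation (helper names are mine):

    #include <array>
    using namespace std;

    using Vec3 = array<double, 3>;

    Vec3 sub(const Vec3& a, const Vec3& b) { return { a[0]-b[0], a[1]-b[1], a[2]-b[2] }; }
    double dot(const Vec3& a, const Vec3& b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
    Vec3 cross(const Vec3& a, const Vec3& b) {
        return { a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0] };
    }

    // w = N . (A - V) with N = (B - A) x (C - A): the sign of w tells you which
    // way the vertices A, B, C appear to wind when viewed from V (which sign
    // means clockwise depends on the handedness of your coordinate system).
    double winding(const Vec3& A, const Vec3& B, const Vec3& C, const Vec3& V) {
        Vec3 N = cross(sub(B, A), sub(C, A));   // surface normal of the triangle
        return dot(N, sub(A, V));
    }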
Your question is too poorly defined to give a complete answer, but here's the skeleton of one.
The missing part (the meat, if you will) is a function that takes any two coordinates and tells you which one is 'greater' than the other. Without a solid definition of this, you won't be able to make anything work.
The rest, the skeleton, is pretty simple. Sort your list of vectors using your comparison function. For five vectors, a simple bubble sort will be all you need, although if the number of vertices increases considerably you may want to look into a faster sorting algorithm (e.g. quicksort).
If your chosen language / libraries provide sorting for you, you've already got your skeleton.
EDIT
After re-reading your question, it also occurred to me that since these n vertices define a polygon, you can probably make the assumption that all of them lie on the same plane (if they don't, then good luck rendering that).
So, if you can map the vector coordinates to 2d positions on that plane, you can reduce your problem to ordering them clockwise or counterclockwise in a two dimensional space.
I think your confusion comes from the fact that methods for computing cross products are sometimes taught in terms of clockwiseness, with a check of the clockwiseness of 3 points A,B,C determining the sign of:
(B-A) X (C - A)
However a better definition actually determines this for you.
In general 5 arbitrary points in 3 dimensions can't be said to have a clockwise ordering but 3 can since 3 points always lie in a plane.
