I want to write an algorithm using a dynamic programming technique, which does the following:
Find the number of monotonic paths along the edges of a grid with n × n square cells which do not pass above the diagonal. A monotonic path is one which starts in the lower left corner, finishes in the upper right corner, and consists entirely of edges pointing rightwards or upwards.
I had some ideas, but can't figure out how to do it right.
First, find a base for your recursion by solving a degenerate case (a 0 x 0 grid). Then look for a recurrence step by imagining that part of the problem, say K x M, is already solved, and see if you can expand that solution by adding one row or one column to it, making the solution (K+1) x M or K x (M+1). This should be simple: for each point you add, check that the grid point does not lie above the diagonal, and then add up the number of paths leading to that point from the bottom and from the left. One of these points would be in the K x M part, the other in the additional row or column that you are building.
With the degenerate case and a recursive step in hand, build your solution by first solving a 0 x N problem, then 1 x N, then 2 x N and so on, until you have your N x N solution.
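For illustration, here is a minimal Python sketch of that build-up, assuming the lower-left corner is (0, 0) and "not above the diagonal" means j <= i for a grid point (i, j); each pass of the outer loop extends the solved sub-grid by one column:

    def count_paths(n):
        # col[j] = number of valid paths reaching (i, j) in the current
        # column i; only j <= i (on or below the diagonal) is reachable.
        col = [1] + [0] * n          # column i = 0: only (0, 0) is reachable
        for i in range(1, n + 1):    # extend the solved part by one column
            new = [0] * (n + 1)
            for j in range(i + 1):           # stay on or below the diagonal
                new[j] = col[j]              # paths arriving from the left
                if j > 0:
                    new[j] += new[j - 1]     # paths arriving from below
            col = new
        return col[n]

    print([count_paths(n) for n in range(6)])   # [1, 1, 2, 5, 14, 42]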
Here is a possible recursion that considers only square grids.
There are two kinds of monotone paths over the n×n grid that do not cross the diagonal: those that touch the diagonal at some intermediate point (i,i) with 0 < i < n, and those that don't.
A path over the n×n grid that first touches the diagonal at (i,i) can be split in two: one path over the i×i grid that does not touch the diagonal, and another over the (n-i)×(n-i) grid that may touch the diagonal. This suggests that you can count those with a recursion that considers all possible i.
A path that does not touch the diagonal will start with "right", and end with "up". Between these two moves is a monotone path over a (n-1)×(n-1) grid that does not cross the diagonal but may touch it.
What we are computing here is the nth Catalan number. There is a formula for it if you want to verify your recursive computation.
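If it helps, here is a hedged Python sketch of exactly that recursion (split each path at the first diagonal touch (i,i): a strictly-below part over an i×i grid, counted by the (i-1)-case per the "right ... up" argument, times a free part over an (n-i)×(n-i) grid), cross-checked against the closed form C(n) = binomial(2n, n) / (n + 1):

    from functools import lru_cache
    from math import comb

    @lru_cache(maxsize=None)
    def catalan(n):
        # Sum over the first point (i, i) where the path touches the diagonal.
        if n == 0:
            return 1
        return sum(catalan(i - 1) * catalan(n - i) for i in range(1, n + 1))

    # Closed form for cross-checking the recursive computation.
    assert all(catalan(n) == comb(2 * n, n) // (n + 1) for n in range(15))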
Let the number of paths reaching coordinate (i,j) be P(i,j). Then (assuming the bottom-left corner is (0,0)):
P(i,j) = P(i-1,j) + P(i,j-1)
You can additionally impose the condition that the coordinates never go below the diagonal. That is: i ranges over 0..n, j ranges over 0..n, but i <= j always.
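A short sketch of this table in Python; cells with i > j are simply left at zero, which enforces the constraint:

    def count_paths_table(n):
        # P[i][j] = number of monotonic paths from (0, 0) to (i, j).
        P = [[0] * (n + 1) for _ in range(n + 1)]
        P[0][0] = 1
        for i in range(n + 1):
            for j in range(n + 1):
                if i > j or (i, j) == (0, 0):
                    continue
                if i > 0:
                    P[i][j] += P[i - 1][j]   # paths continuing from (i-1, j)
                if j > 0:
                    P[i][j] += P[i][j - 1]   # paths continuing from (i, j-1)
        return P[n][n]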
Given (N+1) points. N of the points lie on the x-axis; the remaining one (the HEAD point) lies anywhere in the coordinate plane.
Given a START point on the x-axis.
Find the shortest distance needed to cover all the points starting from the START point. We can traverse a point multiple times.
Example (N+1 = 4):
Points on the x-axis: (0,1), (0,2), (0,3)
HEAD point: (1,1) // only the HEAD point can lie anywhere; all the rest are on the x-axis
START point: (0,1)
(The coordinates here appear to be written as (y, x), since the points said to be on the x-axis are listed with a first coordinate of 0.)
I am looking for a method for approaching this problem, in particular whether we should visit the HEAD point first or somewhere in between.
I tried to find a way to simplify this problem using graph theory and reduce the paths that need to be considered. If there is an elegant way to represent this problem using graphs to identify a solution, I was not able to find it. This approach becomes very inefficient as n increases: both time and memory are O(2^n).
Looking at this as a tree graph, the root node would be the START point, then each of its child nodes would be the points it is connected to.
Since the START point and the rest of the points aside from the HEAD all lie on the x-axis, all non-HEAD points only need to be connected to adjacent points on the x-axis. This is because the distance of the path between any two points is the sum of the distances between adjacent points along the path between those two points (the subset of nodes representing points on the x-axis does not need to form a complete graph). This reduces the brute force approach somewhat.
Here's a simple example:
The upper left shows the original problem: points on the x-axis along with the START and HEAD points.
In the upper right, this has been transformed into a graph with each node representing a point from the original problem. The edges represent the paths that can be taken between points. This assumes that the START point only represents the first point in the path. Unlike the other nodes, it is only included in the path once. If that is not the case and the path can return to the START point, this would approximately double the possible paths, but the same approach can be followed.
In the bottom left, the START point, a, is the root of a tree graph, and each node connected to the START point is a child node. This process is repeated for each child node until either:
A path that is obviously not optimal is identified, in which case that node can just be excluded from the graph. See the nodes in red boxes; going back and forth between the same nodes is unnecessary.
All points are included when traversing the tree from the root to that node, producing a potential solution.
Note that when creating the tree graph, each time a node is repeated, its "potential" child nodes are the same as the first time the node was included. By "potential", I mean that the cases above still need to be checked, because the result might include a nonsensical path, in which case that node would not be included. It is also possible that a potential solution only emerges after the node's child nodes are included.
The last step is to add up the distances for each of the potential solutions to determine which path is shortest.
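One way to make this search concrete without hand-crafting the pruning rules is to run Dijkstra's algorithm over (current point, set of visited points) states, which allows revisiting points automatically; the 2^n possible visited-sets match the O(2^n) time and memory noted above. A sketch of that reformulation (not of the tree construction itself):

    import heapq
    from math import hypot

    def shortest_cover(points, start_idx):
        # points: list of (x, y) for all points including HEAD;
        # start_idx: index of the START point.
        n = len(points)
        def d(a, b):
            return hypot(points[a][0] - points[b][0],
                         points[a][1] - points[b][1])
        full = (1 << n) - 1
        start = (start_idx, 1 << start_idx)
        best = {start: 0.0}
        heap = [(0.0, start)]
        while heap:
            cost, (u, mask) = heapq.heappop(heap)
            if mask == full:
                return cost                  # every point has been covered
            if cost > best.get((u, mask), float("inf")):
                continue                     # stale queue entry
            for v in range(n):
                if v == u:
                    continue
                state = (v, mask | (1 << v))
                nd = cost + d(u, v)
                if nd < best.get(state, float("inf")):
                    best[state] = nd
                    heapq.heappush(heap, (nd, state))
        return float("inf")

    # e.g. axis points (1,0), (2,0), (3,0) with HEAD (1,1), starting at (1,0):
    print(shortest_cover([(1, 0), (2, 0), (3, 0), (1, 1)], 0))   # ~3.414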
This requires a careful examination of the different cases.
Assume for now START (S) is on the far left and HEAD (H) is somewhere in the middle; the path may be something like:
H
/ \
S ---- * ----*----* * --- * ----*
Or it might be shorter to go to and from H via one of the other nodes:
H
//
S ---- * --- * -- *----------*---*
If S is not at one end, you might have something like:
H
/ \
* ---- * ----*----* * --- * ----*
--------S
Or even going directly from S to H on the first step:
H
/ |
* ---- * ----*----* |
S
A full analysis of cases would be quite extensive.
Actually solving the problem might depend on the number of nodes you have. If the number is small (< 10), then complete enumeration might be possible: just work out every possible path, eliminate the ones which are illegal, and choose the smallest. The number of paths is, I think, on the order of n!, so it's computable for small n.
For large n you can break the problem into small segments. I think it's enough to consider just a small patch of nodes on either side of H and a small patch of nodes on either side of S.
This is not really a solution, but a possible way to think about tackling the problem.
(To be pedantic, stackoverflow.com is not the right site for this question in the Stack Exchange network; Computational Science: algorithms might be a better place.)
This is a fun problem. First, let's try to find a brute force solution, as Poosh did.
Observations about the Shortest Path
No repeated points
You are in Euclidean geometry, so the triangle inequality holds: for all points a, b, c, d(a,c) <= d(a,b) + d(b,c). Thus, whenever an optimal path contains a point that occurs more than once, you can remove one of the occurrences without increasing the total length, which means there is always an optimal path that contains each point exactly once.
Permutations
Our problem is thus to find the permutation, let's call it M, of the numbers 1...n for points P_1...P_n (where P_0 is the fixed start point and P_n the head point, and P_1...P_(n-1) are ordered by increasing x value) that minimizes the sum of |P_(M_i) - P_(M_(i-1))| for i from 1 to n (with M_0 = 0, the start point), where |v| is the vector length sqrt(v_x² + v_y²).
The number of permutations of a set of size n is n!. In this case we have n+1 points, so a brute force approach testing all permutations would have complexity (n+1)!, which is higher than even 2^n and definitely not practical, so we need further observations to improve this.
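A tiny Python reference implementation of this brute force is still useful for validating faster approaches on small inputs:

    from itertools import permutations
    from math import dist

    def brute_force(points, start_idx=0):
        # Try every order of the non-start points; feasible only for small n.
        rest = [i for i in range(len(points)) if i != start_idx]
        def length(order):
            path = [start_idx, *order]
            return sum(dist(points[a], points[b])
                       for a, b in zip(path, path[1:]))
        best = min(permutations(rest), key=length)
        return list(best), length(best)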
Next Steps
My next step would now be to see if there are any other sequences that can be proven to be not optimal, leading to a reduction in the number of candidates to be tested.
Paths of non-head points
Let's look at all paths (sequences of indices of points) that don't contain the head point and that are parts of the optimal path. If we don't change the start and end point of such a path, then any transposition inside it has no effect on the rest of the path, so we can perform purely local optimizations. We can prove that those sequences must have monotonic (increasing or decreasing) x coordinate values and thus monotonic indices (as the points with indices 1 through n-1 are ordered by ascending x coordinate):
We are in a purely one dimensional subspace and the total distance of the path is thus equal to the sum of the absolute values of the differences in x coordinates between one such point and the next. It is clear that this sum is minimized by ordering by x coordinate in either ascending or descending order and thus ordering the indices in the same way. Note that this is true for maximal such paths as well as for all continuous "subpaths" of them.
Wrapping it up
The only choices we have left are:
where do we place the head node in the optimal path?
which way do we order the two paths to the left and right?
This means we have n values for the index of the head node (1...n, 0 is fixed as the start node) and 2x2 values for sort order. So we have 4n choices which we can all calculate and pick the shortest one. One of the sort orders probably determines the other but I leave that to you.
Anyway, the complexity of this algorithm is O(4n) = O(n) (each candidate's length can be computed in O(1) from the endpoints of the two monotonic runs, so the overall bound holds). Because reading the input of the problem is O(n), and writing the output is as well, I believe this is an algorithm of optimal complexity. However, if we could reformulate the problem somewhat, so that we could read and write the input and output in some compressed form, containing only the parameters we actually need to solve the problem, then it is possible that we could do better.
P.S.: I'm not a mathematician so I probably used wrong words for some concepts and missed the usual notation for the variables and functions. I would be glad for some expert to check this for any obvious errors.
I have two concentric circles, and for each circle three points on its circumference are given.
I need an optimized method to check whether a given random point lies between these circles or not.
You can compute (x²+y²), x, y, 1 for each point. The last entry is simply the constant one. Put these terms for four given points into a matrix and compute its determinant. The determinant will be zero if the points are cocircular. Otherwise the sign will tell you which point is on which side with respect to the circle defined by the other three. Use a simple example to check which sign corresponds to which direction. Be prepared for the fact that the three circle-defining points being oriented in a clockwise or counter-clockwise orientation will affect this sign, too.
Computing a 4×4 determinant can be done horribly inefficiently, too. I'd suggest you compute all the 2×2 minors from the first two rows, and all the 2×2 minors from the last two, then you can combine them to form the full determinant. See this Math SE post for details. If you need further mathematical help (as opposed to programming help), you might find more suitable answers there.
Note that the above works for each circle independently: check whether the point is inside the one, then check whether it is outside the other. It does not make use of the fact that the circles are assumed to be concentric.
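A possible Python sketch, computing the determinant from 2×2 minors as suggested above; the caller still has to calibrate which sign means "inside" for their orientation of the three defining points:

    def det4(m):
        # Laplace expansion along the first two rows: combine 2x2 minors
        # of the top two rows with complementary minors of the bottom two.
        def top(i, j):
            return m[0][i] * m[1][j] - m[0][j] * m[1][i]
        def bot(i, j):
            return m[2][i] * m[3][j] - m[2][j] * m[3][i]
        return (top(0, 1) * bot(2, 3) - top(0, 2) * bot(1, 3)
              + top(0, 3) * bot(1, 2) + top(1, 2) * bot(0, 3)
              - top(1, 3) * bot(0, 2) + top(2, 3) * bot(0, 1))

    def circle_side(p, a, b, c):
        # Rows are (x² + y², x, y, 1); the result is zero when the four
        # points are cocircular, and its sign says which side of circle
        # (a, b, c) the point p lies on (flipping with the orientation
        # of a, b, c, so check the convention with a known example).
        return det4([[x * x + y * y, x, y, 1.0] for (x, y) in (p, a, b, c)])

    def annulus_signs(p, inner, outer):
        # inner, outer: triples of points on each circle.  p lies between
        # the circles when it is outside one and inside the other, i.e.
        # when the two signs differ (for consistently oriented triples).
        return circle_side(p, *inner), circle_side(p, *outer)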
Given some points in the plane (up to 500 points), no 3 collinear. We have to determine the number of triangles whose vertices are from the given points and that contain exactly N points inside them. How can this be solved efficiently? The naive O(n^4) algorithm is too slow. Is there a better approach?
You could try thinking of the triangle as the intersection of three half-planes. To find the number of points inside a triangle A, B, C, first consider the set of points on each side of the infinite line through A and B. Call these sets L(AB) and R(AB) for the points on the left and right. Do the same with the other two edges, building sets L(AC) and R(AC), and L(BC) and R(BC).
So the number of points in ABC will be the number of points in the intersection of L(AB), L(AC) and L(BC). (You might want to consider R(AB) instead depending on the orientation of the triangle).
Now if we want to consider the full set of 500 points. First take all pairs of points AB and construct the sets L(AB) and R(AB). This will take O(n^3) operations.
Next we test all triangles by finding the intersections of the three sets. If we use a hash table structure for the sets, then finding the intersection takes one lookup per candidate point. If L(AB) has l elements, L(AC) has m elements and L(BC) has k elements, with l > m > k, then for each point in L(BC) we need a lookup in L(AB) and in L(AC), so that's a maximum of 2k hash table lookups.
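A rough Python sketch of this scheme, using plain sets (set intersection does the per-point lookups internally); memory grows as O(n³) for the stored sets, which the grid refinement below helps with:

    from itertools import combinations

    def triangle_counts(points):
        # side(i, j, z) > 0 iff point z is strictly left of the directed
        # line i -> j; L[(i, j)] collects the indices left of that line.
        n = len(points)
        def side(i, j, z):
            (ax, ay), (bx, by), (px, py) = points[i], points[j], points[z]
            return (bx - ax) * (py - ay) - (by - ay) * (px - ax)
        L = {}
        for i in range(n):
            for j in range(n):
                if i != j:
                    L[(i, j)] = {z for z in range(n)
                                 if z != i and z != j and side(i, j, z) > 0}
        counts = {}
        for i, j, k in combinations(range(n), 3):
            if side(i, j, k) < 0:
                j, k = k, j       # reorder so (i, j, k) is counter-clockwise
            counts[(i, j, k)] = len(L[(i, j)] & L[(j, k)] & L[(k, i)])
        return counts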
It might be faster to consider a geometric lookup table.
Divide your whole domain into a coarse grid, say 10 by 10. We can then put each point into a set G(i,j), and split each set L(AB) by grid cell; call these sets L(AB,i,j) and R(AB,i,j). When testing for intersections, first work out which grid cells lie in the intersection. This dramatically reduces the search space, and since each set L(AB,i,j) contains fewer members there will be fewer hash table lookups.
Actually, I happened to encounter a similar problem recently; the only difference was that there were around 300 points, and I solved it using bitset (C++ STL). For every pair of points, say (x[i],y[i]) and (x[j],y[j]), I formed a bitset<302> B[i][j], where B[i][j][k] stores 1 if the k-th point is above the line through points i and j, and 0 otherwise.
Now, in a brute force manner, I pick three points to form a triangle, say (x[i],y[i]), (x[j],y[j]) and (x[k],y[k]). A point, say the z-th point, is inside the triangle if B[i][j][z]==B[i][j][k] && B[j][k][z]==B[j][k][i] && B[k][i][z]==B[k][i][j], because a point inside the triangle lies on the same side of each edge as the third vertex of the triangle (the one not on that edge).
So I take three bitset variables P=B[i][j], Q=B[j][k] and R=B[k][i], take their bitwise AND, and apply the count() function to get the number of set bits, and hence the number of points within the triangle. But make sure each variable is normalized so that, for example, P has B[i][j][k]==1; if not, take the bitwise NOT (~) of that variable.
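Here is a sketch of the same trick in Python, with arbitrary-precision integers standing in for C++ std::bitset. One extra wrinkle: complementing a mask also sets the bits of the skipped vertices, so the triangle's own vertex bits are cleared before counting:

    def count_points_in_triangles(pts):
        # Bit z of above[i][j] is 1 iff point z is strictly left of the
        # directed line through points i and j.
        n = len(pts)
        above = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                (ax, ay), (bx, by) = pts[i], pts[j]
                bits = 0
                for z in range(n):
                    if z == i or z == j:
                        continue
                    cross = ((bx - ax) * (pts[z][1] - ay)
                             - (by - ay) * (pts[z][0] - ax))
                    if cross > 0:
                        bits |= 1 << z
                above[i][j] = bits
        full = (1 << n) - 1
        counts = {}
        for i in range(n):
            for j in range(i + 1, n):
                for k in range(j + 1, n):
                    # Normalize each mask so the third vertex's bit is set.
                    P = above[i][j] if above[i][j] >> k & 1 else above[i][j] ^ full
                    Q = above[j][k] if above[j][k] >> i & 1 else above[j][k] ^ full
                    R = above[k][i] if above[k][i] >> j & 1 else above[k][i] ^ full
                    inside = P & Q & R & ~((1 << i) | (1 << j) | (1 << k))
                    counts[(i, j, k)] = bin(inside).count("1")
        return counts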
Though the above solution is problem specific, I hope it helps. This is the problem link: http://usaco.org/current/index.php?page=viewproblem&cpid=660
Given A: a point, B: a point known to lie on a plane P, C: the normal of plane P. Can I determine whether A lies on P by checking that the dot product of (A - B) and C is zero (or within a certain level of precision; I'll probably use 0.0001f)?
I might just be missing some obvious mathematical flaw, but this seems much simpler and faster than transforming the point into a triangle's coordinate space, à la the answer to Check if a point is inside a plane segment.
So, secondly, I guess: if this is a valid check, would it be computationally faster than using matrix transformations if all I want is to see whether the point is on the plane? (And not whether it lies within a polygon on said plane; I'll probably keep using matrix transforms for that.)
You are correct: A lies on the plane through B with normal C if and only if dotProduct(A - B, C) = 0.
To estimate the speed of this sort of thing, you can pretty much just count the multiplications. This formula has just three multiplications, so it's going to be faster than pretty much anything involving matrices.
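For concreteness, a sketch with eps playing the role of the asker's 0.0001f (in practice you may want to scale the tolerance with the magnitudes involved):

    def point_on_plane(a, b, normal, eps=1e-4):
        # a: point to test, b: known point on the plane, normal: plane normal.
        # One dot product: three multiplications and a comparison.
        d = sum((ai - bi) * ni for ai, bi, ni in zip(a, b, normal))
        return abs(d) <= eps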
The above answers are closer to the proof, but not sufficient. It should be intuitive that using just one in-plane vector is insufficient: for one, a point P can be above the plane, and the line dropped from it to the plane would still generate a zero dot product with any single vector lying in the plane, just as a point P on the plane would. The necessary and sufficient condition is that if two vectors u and v can be found in the plane, then the plane is represented unambiguously by their cross product, i.e.
w = u × v. By definition, w is the area vector, which is always perpendicular to the plane.
Then, for the point P in question, construct a third vector s from the base point of u or v to P, and test it against w with the dot product, s.t.
w · s = |w||s| cos(90°) = 0 implies that the point P lies on the plane described by w, which is in turn described by the vectors u and v.
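A small Python sketch of this construction, assuming three non-collinear points a, b, c on the plane:

    def on_plane_from_points(p, a, b, c, eps=1e-9):
        # u and v span the plane; w = u x v is the normal described above.
        u = [b[i] - a[i] for i in range(3)]
        v = [c[i] - a[i] for i in range(3)]
        w = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
        # s runs from the plane to the point in question; test w . s = 0.
        s = [p[i] - a[i] for i in range(3)]
        return abs(sum(wi * si for wi, si in zip(w, s))) <= eps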
If I have 5 vertices in 3D coordinate space, how can I determine the ordering of those vertices, i.e. clockwise or anticlockwise?
If I elaborate more on this,
I have a 3D model which consists of a set of polygons. Each polygon is a collection of vertices, and I want to calculate the normal of the polygon's surface. To calculate the normal I have to consider the vertices in counter-clockwise order. My question is: given a set of vertices, how can I determine whether it is ordered clockwise or counter-clockwise?
This is for navigation mesh generation, where I want to remove the polygons which cannot be walked on by the agent. To do so, my approach is to calculate the surface normal (the perpendicular vector of the polygon) and remove the polygon based on its angle with the 2D plane. To calculate the normal I need to know in which order the points are arranged. So, for a given set of points in a polygon, how can I determine the order of the arrangement of the points?
Ex.
polygon1 consists of Vertex1 = [-21.847065 -2.492895 19.569759], Vertex2 = [-22.279873 1.588395 16.017160], Vertex3 = [-17.234818 7.132950 7.453146]. How can I determine the order of these 3 points?
As others have noted, your question isn't entirely clear. Is this for something like a 3D backface culling test? If so, you need a point relative to which to determine the winding direction. Viewed from one side of the polygon the vertices will appear to wind clockwise; from the other side they'll appear to wind counter-clockwise.
But suppose your polygon is convex and properly planar. Take any three consecutive vertices A, B, and C. Then you can find the surface normal vector using the cross product:
N = (B - A) × (C - A)
Taking the dot product of the normal with a vector from the given view point, V, to one of the vertices will give you a value whose sign indicates which way the vertices appear to wind when viewed from V:
w = N . (A - V)
Whether this is positive for clockwise and negative for anticlockwise or the opposite will depend on the handedness of your coordinate system.
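A minimal Python sketch of both formulas; which sign of w you call "clockwise" is a matter of your coordinate conventions:

    def winding(a, b, c, view):
        # Returns (N, w): the normal N = (B - A) x (C - A) and the sign
        # w = N . (A - V) of the winding as seen from the view point V.
        def sub(p, q):
            return [p[i] - q[i] for i in range(3)]
        def cross(u, v):
            return [u[1] * v[2] - u[2] * v[1],
                    u[2] * v[0] - u[0] * v[2],
                    u[0] * v[1] - u[1] * v[0]]
        n = cross(sub(b, a), sub(c, a))
        w = sum(ni * di for ni, di in zip(n, sub(a, view)))
        return n, w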
Your question is too poorly defined to give a complete answer, but here's the skeleton of one.
The missing part (the meat, if you will) is a function that takes any two coordinates and tells you which one is 'greater' than the other. Without a solid definition of this, you won't be able to make anything work.
The rest, the skeleton, is pretty simple. Sort your list of vectors using your comparison function. For five vectors, a simple bubble sort will be all you need, although if the number of vertices increases considerably you may want to look into a faster sorting algorithm (e.g. quicksort).
If your chosen language / libraries provide sorting for you, you've already got your skeleton.
EDIT
After re-reading your question, it also occurred to me that since these n vertices define a polygon, you can probably make the assumption that all of them lie on the same plane (if they don't, then good luck rendering that).
So, if you can map the vector coordinates to 2d positions on that plane, you can reduce your problem to ordering them clockwise or counterclockwise in a two dimensional space.
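For instance, once the vertices have 2D coordinates in that plane, sorting by angle around the centroid yields a counter-clockwise order; note this sketch assumes the polygon is star-shaped about its centroid (convex polygons in particular):

    from math import atan2

    def order_ccw(points_2d):
        # Sort the projected vertices by angle around their centroid.
        cx = sum(x for x, _ in points_2d) / len(points_2d)
        cy = sum(y for _, y in points_2d) / len(points_2d)
        return sorted(points_2d, key=lambda p: atan2(p[1] - cy, p[0] - cx))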
I think your confusion comes from the fact that methods for computing cross products are sometimes taught in terms of clockwiseness, with a check of the clockwiseness of 3 points A,B,C determining the sign of:
(B - A) × (C - A)
However, a better definition of the cross product actually determines this for you.
In general, 5 arbitrary points in 3 dimensions can't be said to have a clockwise ordering, but 3 can, since 3 points always lie in a plane.