Optimal Pathfinding - dictionary

Background
There is a square map with some obstacles on it. Obstacles are represented by polygons. I implemented the following path-finding algorithm:
1) Choose precision (it'll be denoted by k)
2) Divide map into k x k squares.
3) Make graph from those squares according to the following rules:
- Every node represents one square
- Two nodes are connected if and only if their squares are adjacent and neither of them contains any obstacle.
4) Find shortest path using A* algorithm (or Dijkstra or some other...)
This algorithm works quite well if the map is not dynamic, i.e. obstacles can't be moved.
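For illustration, here is a minimal sketch of steps 2) and 3), assuming a helper blocked(i, j) that reports whether cell (i, j) overlaps any obstacle polygon (that test itself is not shown):

def build_grid_graph(k, blocked):
    # blocked(i, j) is an assumed helper that reports whether cell (i, j)
    # overlaps any obstacle polygon (e.g. via a box-polygon intersection test).
    # Returns an adjacency list mapping each free cell to its free 4-neighbours.
    graph = {}
    for i in range(k):
        for j in range(k):
            if blocked(i, j):
                continue
            nbrs = []
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < k and 0 <= nj < k and not blocked(ni, nj):
                    nbrs.append((ni, nj))
            graph[(i, j)] = nbrs
    return graph

The resulting adjacency list can then be fed to A* or Dijkstra for step 4).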
Questions
1) Is that an efficient approach?
2) What to do if obstacles can be moved?
3) How should other agents be treated? Let's consider a situation where there are 100 agents in a room with two exits. All agents are in one group, and that group is near one of the exits. If all agents go to the nearest exit, it will cause a bottleneck. Some of them should go to the other exit to minimize the total time needed to exit. How can I get such a result?

Use the A* path as a general guideline around static obstacles and perform local obstacle avoidance for dynamic (smaller) obstacles. Reynolds also has an algorithm for the bottleneck problem; he calls it Queueing.
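To give a flavour of that local avoidance step (this is a generic repulsion sketch, not Reynolds' exact method; all names and gains are illustrative):

import math

def steer(pos, waypoint, obstacles, avoid_radius=2.0, avoid_gain=1.5):
    # Head toward the next A* waypoint, but add a repulsive component away
    # from any dynamic obstacle closer than avoid_radius.
    # pos and waypoint are (x, y) tuples, obstacles is a list of (x, y) tuples.
    def norm(v):
        length = math.hypot(*v)
        return (v[0] / length, v[1] / length) if length > 1e-9 else (0.0, 0.0)

    desired = norm((waypoint[0] - pos[0], waypoint[1] - pos[1]))
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        dist = math.hypot(dx, dy)
        if 1e-9 < dist < avoid_radius:
            # repulsion grows as the obstacle gets closer
            push = avoid_gain * (avoid_radius - dist) / avoid_radius
            desired = (desired[0] + push * dx / dist, desired[1] + push * dy / dist)
    return norm(desired)  # unit steering direction for this frame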

Related

Minimum length of lines needed

Suppose we have a set of N points on the Cartesian plane (x_i, y_i), and suppose we connect those points with lines.
Is there a way, using a graph and something like a shortest-path algorithm or a minimum spanning tree, to make every point reachable from every other point while minimizing the total length of the lines?
I thought that maybe I could set the cost of each edge to the distance between its endpoints and use a shortest-path algorithm, but I'm not sure if this is possible.
Any ideas?
I'm not 100% sure what you want, so I'll go through two algorithms.
First: if you just want a robust algorithm, use Dijkstra's algorithm. The only challenge left is to define the edge cost, which would be 1 for neighboring nodes, I assume.
Second: if you want to use a heuristic to estimate the next best node and reduce the running time, use A*, but you need to write a heuristic which underestimates the distance. You could use the Euclidean distance to do so. The edge-cost question stays the same.
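For reference, a minimal sketch of the first suggestion, assuming you have already decided which points are connected (the points and neighbours structures below are illustrative); the edge cost is 1 per edge as assumed above, or optionally the Euclidean distance between the endpoints:

import heapq, math

def dijkstra(points, neighbours, source, unit_cost=True):
    # points: node -> (x, y); neighbours: node -> list of connected nodes.
    # Edge cost is 1 per edge (as suggested above) or, if unit_cost is False,
    # the Euclidean distance between the two endpoints.
    dist = {u: math.inf for u in points}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale queue entry
        for v in neighbours[u]:
            cost = 1.0 if unit_cost else math.dist(points[u], points[v])
            if d + cost < dist[v]:
                dist[v] = d + cost
                heapq.heappush(heap, (dist[v], v))
    return dist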

Shortest distance to cover all (N+1) points. All N points lie on the x-axis; the remaining point lies anywhere in the coordinate plane

Given (N+1) points: N of them lie on the x-axis, and the remaining point (the HEAD point) lies anywhere in the coordinate plane.
You are also given a START point on the x-axis.
Find the shortest distance needed to cover all the points starting from the START point. We can traverse a point multiple times.
Example: N+1 = 4
Points on the x-axis:
(0,1), (0,2), (0,3)
HEAD point:
(1,1)   // only the HEAD point can lie anywhere; the rest are all on the x-axis
START point:
(0,1)
I am looking for a method for how to approach this problem: whether we should visit the HEAD point first, or somewhere in between.
I tried to find a way using graph theory to simplify this problem and reduce the paths that need to be considered. If there is an elegant way to represent this problem using graphs to identify a solution, I was not able to find it. This approach becomes very inefficient as n increases: the time and memory are O(2^n).
Looking at this as a tree graph, the root node would be the START point, then each of its child nodes would be the points it is connected to.
Since the START point and the rest of the points aside from the HEAD all lie on the x-axis, all non-HEAD points only need to be connected to adjacent points on the x-axis. This is because the distance of the path between any two points is the sum of the distances between any adjacent points along the path between those two points (the subset of nodes representing points on the x-axis does not need to form a complete graph). This reduces the brute-force approach somewhat.
Here's a simple example:
The upper left shows the original problem: points on the x-axis along with the START and HEAD points.
In the upper right, this has been transformed into a graph with each node representing a point from the original problem. The edges represent the paths that can be taken between points. This assumes that the START point only represents the first point in the path. Unlike the other nodes, it is only included in the path once. If that is not the case and the path can return to the START point, this would approximately double the possible paths, but the same approach can be followed.
In the bottom left, the START point, a, is the root of a tree graph, and each node connected to the START point is a child node. This process is repeated for each child node until either:
A path that is obviously not optimal is identified, in which case that node can just be excluded from the graph. See the nodes in red boxes; going back and forth between the same nodes is unnecessary.
All points are included when traversing the tree from the root to that node, producing a potential solution.
Note that when creating the tree graph, each time a node is repeated, its "potential" child nodes are the same as the first time the node was included. By "potential", I mean the cases above still need to be checked, because the result might include a nonsensical path, in which case that node would not be included. It is also possible that a potential solution results from the path after its child nodes are included.
The last step is to add up the distances for each of the potential solutions to determine which path is shortest.
This requires a careful examination of the different cases.
Assume for now that START (S) is on the far left and HEAD (H) is somewhere in the middle; the path may be something like
H
/ \
S ---- * ----*----* * --- * ----*
Or it might be shorter to go to H and back from one of the other nodes
H
//
S ---- * --- * -- *----------*---*
If S is not at one end, you might have something like
H
/ \
* ---- * ----*----* * --- * ----*
--------S
Or even go directly from S to H on the first step
H
/ |
* ---- * ----*----* |
S
A full analysis of cases would be quite extensive.
Actually solving the problem might depend on the number of nodes you have. If the number is small (< 10), then complete enumeration might be possible: just work out every possible path, eliminate the ones which are illegal, and choose the smallest. The number of paths is, I think, on the order of n!, so it's computable for small n.
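A minimal sketch of that complete enumeration (it assumes a shortest route never needs to revisit a point, which the straight-line distances guarantee, and simply tries every visiting order):

from itertools import permutations
from math import dist

def shortest_visit_order(start, others):
    # Exhaustive check for small inputs: try every visiting order of the
    # remaining points (x-axis points plus the HEAD point) and keep the
    # cheapest. O(n!) as noted above, so only practical for small n.
    best_len, best_order = float("inf"), None
    for order in permutations(others):
        total, prev = 0.0, start
        for p in order:
            total += dist(prev, p)
            prev = p
        if total < best_len:
            best_len, best_order = total, order
    return best_len, best_order

# Coordinates taken from the question: START (0,1), remaining points, HEAD (1,1)
print(shortest_visit_order((0, 1), [(0, 2), (0, 3), (1, 1)]))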
For large n you can break the problem into small segments. I think it's enough just to consider a small patch with nodes either side of H and a small patch with nodes either side of S.
This is not really a solution, but a possible way to think about tackling the problem.
(To be pedantic, stackoverflow.com is not the right site for this question in the Stack Exchange network. Computational Science: algorithms might be a better place.)
This is a fun problem. First, let's try to find a brute-force solution, as Poosh did.
Observations about the Shortest Path
No repeated points
You are in a Euclidean geometry, thus the triangle inequality holds: for all points a, b, c, the distance d(a,c) <= d(a,b) + d(b,c). Thus, whenever you have an optimal path that contains a point that occurs more than once, you can remove one of the occurrences without making the path longer, which proves that there is an optimal path containing each point exactly once.
Permutations
Our problem is thus to find the permutation, let's call it M, of the numbers 1...n for the points P_1...P_n (where P_0 is the fixed start point, P_n is the head point, and P_1...P_n-1 are ordered by increasing x value) that minimizes the sum of |P_M(i) - P_M(i-1)| for i from 1 to n (taking M(0) = 0), where |v| is the vector length sqrt(v_x² + v_y²).
The number of permutations of a set of size n is n!. In this case we have n+1 points, so a brute force approach testing all permutations would have complexity (n+1)!, which is higher than even 2^n and definitely not practical, so we need further observations to improve this.
Next Steps
My next step would now be to see if there are any other sequences that can be proven to be not optimal, leading to a reduction in the number of candidates to be tested.
Paths of non-head points
Let's look at all paths (sequences of indices of points) that don't contain the head point and that are parts of the optimal path. If we don't change the start and end point of such a path, then any other transpositions have no effect on the outside environment and we can perform purely local optimizations. We can prove that those sequences must have monotonic (increasing or decreasing) x coordinate values and thus monotonic indices (as they are ordered by ascending x coordinate between indices 0 and n-1):
We are in a purely one dimensional subspace and the total distance of the path is thus equal to the sum of the absolute values of the differences in x coordinates between one such point and the next. It is clear that this sum is minimized by ordering by x coordinate in either ascending or descending order and thus ordering the indices in the same way. Note that this is true for maximal such paths as well as for all continuous "subpaths" of them.
Wrapping it up
The only choices we have left are:
where do we place the head node in the optimal path?
which way do we order the two paths to the left and right?
This means we have n values for the index of the head node (1...n, 0 is fixed as the start node) and 2x2 values for sort order. So we have 4n choices which we can all calculate and pick the shortest one. One of the sort orders probably determines the other but I leave that to you.
Anyways, the complexity of this algorithm is O(4n) = O(n). Because reading the input of the problem is O(n) and writing the output is as well, I believe this is an algorithm of optimal complexity. However, if we could reformulate the problem somewhat, so that we could read and write the input and output in some compressed form, as in only the parameters that we actually need to solve the problem, then it is possible that we could do better.
P.S.: I'm not a mathematician so I probably used wrong words for some concepts and missed the usual notation for the variables and functions. I would be glad for some expert to check this for any obvious errors.

Pathfinding - A* with least turns

Is it possible to modify A* to return the shortest path with the least number of turns?
One complication: Nodes can no longer be distinguished solely by their location, because their parent node is relevant in determining future turns, so they have to have a direction associated with them as well.
But the main problem I'm having is how to work the number of turns into the partial path cost (g). If I multiply g by the number of turns taken (t), weird things happen, like a longer path with N turns near the end being favored over a shorter path with N turns near the beginning.
Another less optimal solution I'm considering is: After calculating the shortest path, I could run a second A* iteration (with a different path cost formula), this time bounded within the x/y range of the shortest path, and return the path with the least turns. Any other ideas?
The current "state" of the search is actually represented by two things: The node you're in, and the direction you're facing. What you want is to separate each of those states into different nodes.
So, for each node in the initial graph, split it into E separate nodes, where E is the number of incoming edges. Each of these new nodes represents the old node, but facing in different directions. The outgoing edges of these new nodes will all be the same as the old outgoing edges, but with a different weight. If the old weight was w, then...
If the edge doesn't represent a turn, make the new weight w as well
If the edge does represent a turn, make the new weight w + ε, where ε is some number significantly smaller than the smallest weight.
Then just do a normal A* search. Since none of the weights have decreased, your heuristic will still be admissible, so you can still use the same heuristic.
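For a grid, here is a rough sketch of this state-splitting idea (assumptions: a 4-connected grid of 0/1 cells, unit step cost, and a small turn penalty eps standing in for ε; the names are illustrative):

import heapq
from itertools import count

def a_star_fewest_turns(grid, start, goal, eps=1e-3):
    # grid: 2D list of 0 (free) / 1 (blocked); start, goal: (row, col) tuples.
    # Search states are (cell, incoming direction); a direction change costs eps
    # on top of the step cost of 1, so among equally short paths the one with
    # the fewest turns wins.
    rows, cols = len(grid), len(grid[0])
    dirs = [(0, 1), (1, 0), (0, -1), (-1, 0)]

    def h(cell):
        # Manhattan distance is admissible here: every step costs at least 1.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    tie = count()  # tie-breaker so the heap never compares directions
    heap = [(h(start), next(tie), 0.0, start, None, [start])]
    best = {}
    while heap:
        _, _, g, cell, d, path = heapq.heappop(heap)
        if cell == goal:
            return path
        if best.get((cell, d), float("inf")) <= g:
            continue  # already settled this (cell, direction) state
        best[(cell, d)] = g
        for nd in dirs:
            nxt = (cell[0] + nd[0], cell[1] + nd[1])
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols) or grid[nxt[0]][nxt[1]]:
                continue
            ng = g + 1 + (eps if d is not None and nd != d else 0.0)
            heapq.heappush(heap, (ng + h(nxt), next(tie), ng, nxt, nd, path + [nxt]))
    return None

Because the penalty only ever adds to an edge's weight, the Manhattan heuristic used here stays admissible, matching the point above.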
If you really want to minimize the number of turns (as opposed to finding a nice tradeoff between turns and path length), why not transform your problem space by adding an edge for every pair of nodes connected by an unobstructed straight line; these are the pairs you can travel between without a turn. There are O(n) such edges per node, so the new graph is O(n³) in terms of edges. That makes A* solutions as much as O(n³) in terms of time.
Manhattan distance might be a good heuristic for A*.
Is it possible to modify A* to return the shortest path with the least number of turns?
It is most likely not possible, the reason being that it is an example of the weight-constrained shortest path problem, which is NP-complete and cannot be solved efficiently.
You can find papers that discuss solving this problem e.g. http://web.stanford.edu/~shushman/math15_report.pdf

Path finding for games

What are some path finding algorithms used in games of all types? (Of all types where characters move, anyway) Is Dijkstra's ever used? I'm not really looking to code anything; just doing some research, though if you paste pseudocode or something, that would be fine (I can understand Java and C++).
I know A* is like THE algorithm to use in 2D games. That's great and all, but what about 2D games that are not grid-based? Things like Age of Empires, or Link's Awakening. There aren't distinct square spaces to navigate to, so what do they do?
What do 3D games do? I've read this thingy http://www.ai-blog.net/archives/000152.html, which I hear is a great authority on the subject, but it doesn't really explain HOW, once the meshes are set, the path finding is done. IF A* is what they use, then how is something like that done in a 3D environment? And how exactly do the splines work for rounding corners?
Dijkstra's algorithm calculates the shortest path to all nodes in a graph that are reachable from the starting position. For your average modern game, that would be both unnecessary and incredibly expensive.
You make a distinction between 2D and 3D, but it's worth noting that for any graph-based algorithm, the number of dimensions of your search space doesn't make a difference. The web page you linked to discusses waypoint graphs and navigation meshes; both are graph-based and could in principle work in any number of dimensions. Although there are no "distinct square spaces to move to", there are discrete "slots" in the space that the AI can move to and which have been carefully laid out by the game designers.
Concluding, A* is actually THE algorithm to use in 3D games just as much as in 2D games. Let's see how A* works:
1) At the start, you know the coordinates of your current position and your target position. You make an optimistic estimate of the distance to your destination, for example the length of the straight line between the start position and the target.
2) Consider the adjacent nodes in the graph. If one of them is your target (or contains it, in case of a navigation mesh), you're done.
3) For each adjacent node (in the case of a navigation mesh, this could be the geometric center of the polygon or some other kind of midpoint), estimate the associated cost of traveling along there as the sum of two measures: the length of the path you'd have traveled so far, and another optimistic estimate of the distance that would still have to be covered.
4) Sort your options from the previous step by their estimated cost, together with all options that you've considered before, and pick the option with the lowest estimated cost. Repeat from step 2).
There are some details I haven't discussed here, but this should be enough to see how A* is basically independent of the number of dimensions of your space. You should also be able to see why this works for continuous spaces.
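As a rough, engine-agnostic illustration of those steps (the neighbors, cost and heuristic callables are placeholders supplied by the caller; the same code works for grids, waypoint graphs or navigation meshes):

import heapq, math
from itertools import count

def a_star(neighbors, cost, heuristic, start, goal):
    # neighbors(n) yields adjacent nodes, cost(a, b) is the edge cost, and
    # heuristic(n) is an optimistic (admissible) estimate of the remaining
    # distance to the goal, e.g. the straight-line distance.
    tie = count()  # tie-breaker so the heap never compares nodes directly
    frontier = [(heuristic(start), next(tie), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        _, _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if g > best_g.get(node, math.inf):
            continue  # stale entry
        for nxt in neighbors(node):
            ng = g + cost(node, nxt)
            if ng < best_g.get(nxt, math.inf):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + heuristic(nxt), next(tie), ng, nxt, path + [nxt]))
    return None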
There are some closely related algorithms that deal with certain problems in the standard A* search. For example recursive best-first search (RBFS) and simplified memory-bounded A* (SMA*) require less memory, while learning real-time A* (LRTA*) allows the agent to move before a full path has been computed. I don't know whether these algorithms are actually used in current games.
As for the rounding of corners, this can be done either with distance lines (where corners are replaced by circular arcs), or with any kind of spline function for full-path smoothing.
In addition, algorithms are possible that rely on a gradient over the search space (where each point in space is associated with a value), rather than a graph. These are probably not applied in most games because they take more time and memory, but might be interesting to know about anyway. Examples include various hill-climbing algorithms (which are real-time by default) and potential field methods.
Methods to procedurally obtain a graph from a continuous space exist as well, for example cell decomposition, Voronoi skeletonization and probabilistic roadmap skeletonization. The former would produce something compatible with a navigation mesh (though it might be hard to make it equally efficient as a hand-crafted navigation mesh) while the latter two produce results that will be more like waypoint graphs. All of these, as well as potential field methods and A* search, are relevant to robotics.
Sources:
Artificial Intelligence: A Modern Approach, 2nd edition
Introduction to The Design and Analysis of Algorithms, 2nd edition

Dijkstra's algorithm to find most weighted path

I just want to make sure this would work. Could you find the greatest path using Dijkstra's algorithm? Would you have to initialize the distance to something like -1 first and then change the relax subroutine to check if it's greater?
This is for a problem that will not have any negative weights.
This is actually the problem:
Suppose you are given a diagram of a telephone network, which is a graph G whose vertices represent switch centers, and whose edges represent communication lines between two centers. The edges are marked by their bandwidth; the bandwidth of a path is the bandwidth of its lowest-bandwidth edge. Give an algorithm that, given a diagram and two switch centers a and b, will output the maximum bandwidth of a path between a and b.
Would this work?
EDIT:
I did find this:
Hint: The basic subroutine will be very similar to the subroutine Relax in Dijkstra. Assume that we have an edge (u, v). If min{d[u], w(u, v)} > d[v] then we should update d[v] to min{d[u], w(u, v)} (because the path from a to u and then to v has bandwidth min{d[u], w(u, v)}, which is more than the one we have currently).
Not exactly sure what that's supposed to mean though, since all distances are infinity on initialization. So I don't know how this would work. Any clues?
I'm not sure Dijkstra's is the way to go. Negative weights do bad, bad things to Dijkstra's.
I'm thinking that you could sort by edge weight and start removing the lowest-weight edge (the worst bottleneck), then see if the graph is still connected (or at least your start and end points). The point at which the graph breaks is when you know you took out the bottleneck, and you can look at that edge's value to get the bandwidth. (If I'm not mistaken, each iteration takes O(E) time, and you will need O(E) iterations to find the bottleneck edge, so this is an O(E²) algorithm.)
Edit: you have to realize that the greatest path isn't necessarily the highest bandwidth: you're looking to maximize the value of min({edges in path}), not sum({edges in path}).
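For concreteness, a small sketch of this edge-removal idea (the (u, v, bandwidth) edge format is an assumption; connectivity is rechecked with a BFS after each removal):

from collections import defaultdict, deque

def bottleneck_by_edge_removal(edges, a, b):
    # edges: list of (u, v, bandwidth) triples for an undirected graph.
    # Remove edges from lowest bandwidth upward; the edge whose removal first
    # disconnects a from b is the bottleneck of the best path. Each BFS is
    # O(E) and there are O(E) removals, so this is O(E²) overall.
    remaining = sorted(edges, key=lambda e: e[2])

    def connected(edge_list):
        adj = defaultdict(list)
        for u, v, _ in edge_list:
            adj[u].append(v)
            adj[v].append(u)
        seen, queue = {a}, deque([a])
        while queue:
            x = queue.popleft()
            if x == b:
                return True
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    queue.append(y)
        return a == b

    if not connected(remaining):
        return None  # a and b are not connected at all
    for i, (u, v, w) in enumerate(remaining):
        if not connected(remaining[i + 1:]):
            return w  # removing this edge separates a and b
    return None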
You can solve this easily by modifying Dijkstra's to calculate maximum bandwidth to all other vertices.
You do not need to initialize the start vertex to -1.
Algorithm: MaximumBandwidth(G, a)
Input: A simple undirected weighted graph G with non-negative edge weights, and a distinguished vertex a of G
Output: A label D[u], for each vertex u of G, such that D[u] is the maximum bandwidth available from a to u.
Initialize an empty queue Q;
Start = a;
for each vertex u of G do
    D[u] = 0;
D[a] = +infinity;    // the start has unbounded bandwidth to itself
for all vertices z adjacent to Start do {    ---- 1
    if min(D[Start], w(Start, z)) > D[z] {
        D[z] = min(D[Start], w(Start, z));
        Q.enqueue(z);
    }
}
if Q is not empty {
    Start = Q.dequeue();
    jump to 1
}
else
    finish();
This may not be the most efficient way to calculate the bandwidth, but it's what I could think of for now.
Calculating flow may be more applicable; however, flow allows for multiple paths to be used.
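For comparison, here is a sketch of the search described above rewritten with a priority queue (a max-heap via negated keys) and the relaxation from the hint, D[v] = max(D[v], min(D[u], w(u, v))); the nested-dict graph format is an assumption:

import heapq

def max_bandwidth(graph, a):
    # graph[u] is a dict {neighbour: bandwidth} for an undirected graph.
    # D[u] becomes the best bottleneck bandwidth of any path from a to u.
    D = {u: 0 for u in graph}
    D[a] = float("inf")  # the start has unbounded bandwidth to itself
    heap = [(-D[a], a)]
    while heap:
        d, u = heapq.heappop(heap)
        d = -d
        if d < D[u]:
            continue  # stale entry
        for v, w in graph[u].items():
            cand = min(d, w)  # bottleneck of extending the path to v
            if cand > D[v]:
                D[v] = cand
                heapq.heappush(heap, (-cand, v))
    return D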
Just invert the edge weights. That is, if the edge weight is d, consider it instead as d^-1. Then do Dijkstra's as normal. Initialize all distances to infinity as normal.
You can use Dijkstra's algorithm to find a single longest path, but since you only have two switch centers I don't see why you need to visit each node as in Dijkstra's. There is most likely a more optimal way of doing this, such as a branch and bound algorithm.
Can you adjust some of the logic in algorithm AllPairsShortestPaths(Floyd-Warshall)?
http://www.algorithmist.com/index.php/Floyd-Warshall%27s_Algorithm
Initialize unconnected edges to negative infinity and instead of taking the min of the distances take the max?

Resources