AABB vs Circle (and vice versa) using the separating axis theorem - math

I am following this tutorial for my 2D game's collision handling. The tutorial explains the collision system used in one of my favorite games, "N", and how they used the separating axis theorem efficiently for AABB vs AABB and AABB vs Circle collisions: http://www.metanetsoftware.com/technique/tutorialA.html. I understand the implementation of AABB vs AABB collision handling, but I can't understand AABB vs Circle collision detection, especially the Voronoi regions. I'm totally confused about how and where to start.
AABB vs AABB collision detection
1. Find the candidate axes by taking the normal of each edge.
2. Project all the vertices onto each axis; each projection yields a scalar interval.
3. Compare the resulting intervals on every axis to determine whether a collision is present (see the sketch below).
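For reference, when both boxes are axis-aligned the edge normals are just the x and y axes, so the whole test collapses to interval comparisons. A minimal sketch, with illustrative names:

```cpp
// Minimal SAT check for two axis-aligned boxes. Because both boxes are
// axis-aligned, the only candidate separating axes are the x and y axes,
// and projecting the vertices onto them yields the boxes' own extents.
struct AABB {
    float minX, minY, maxX, maxY;
};

bool overlapOnAxis(float minA, float maxA, float minB, float maxB) {
    // The projected intervals overlap unless one ends before the other begins.
    return maxA >= minB && maxB >= minA;
}

bool aabbVsAabb(const AABB& a, const AABB& b) {
    // A single separating axis is enough to rule out a collision.
    return overlapOnAxis(a.minX, a.maxX, b.minX, b.maxX) &&
           overlapOnAxis(a.minY, a.maxY, b.minY, b.maxY);
}
```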
Can someone please explain how to handle AABB vs Circle collision (and vice versa)?

Since collisions with a circle always come down to a comparison against the radius (in your case, via projection), the closest line segment (edge of the polygon) and the normal vector are the only building blocks you need. The normal vector is easily computed from the endpoints of the line segment (something like unit(y2-y1, x1-x2) ... the negative reciprocal of the slope). Figuring out which edge is closest is the building block that remains, and Voronoi regions give us that last building block.
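A minimal sketch of those two building blocks, assuming a simple Vec2 type (all names here are illustrative):

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// The normal from the segment's endpoints: unit(y2 - y1, x1 - x2),
// i.e., perpendicular to the edge direction.
Vec2 edgeNormal(Vec2 p1, Vec2 p2) {
    float nx = p2.y - p1.y, ny = p1.x - p2.x;
    float len = std::sqrt(nx * nx + ny * ny);
    return { nx / len, ny / len };
}

// Signed distance of the circle's center from the edge's supporting line,
// measured along the normal; collision if |distance| <= radius. The Voronoi
// region is what tells you this edge (rather than an endpoint) is the
// relevant feature to test.
bool circleTouchesEdgeLine(Vec2 center, float radius, Vec2 p1, Vec2 p2) {
    Vec2 n = edgeNormal(p1, p2);
    float d = (center.x - p1.x) * n.x + (center.y - p1.y) * n.y;
    return std::fabs(d) <= radius;
}
```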
You understand collisions between axis-aligned bounding boxes, and I assume you also understand collisions between two circles. I'm assuming you don't understand Voronoi regions. So, where to start? Voronoi diagrams. I highly suggest that you find a diagrammed explanation. This link is quite good. However, depending on how lost you are, perhaps a little additional background (seriously, though, no explanation can beat the visual):
A Voronoi diagram is one of the ubiquitous data structures of computational geometry; any computational geometry book will discuss it. It answers a simple question: where is the closest post office? Given a set of points in a plane (post offices), a Voronoi diagram separates the plane into regions, each containing one of the points. If you are in a particular region, you know which point (post office) is closest to you. If you were a circle, this would be nice for collision detection for a simple reason: the closest point is the most important one to test for collisions.
Note that if you want to mathematically derive a Voronoi diagram, you simply consider all point pairs and calculate all bisecting lines. Then you intersect all of the bisecting lines and throw away the segments that are unimportant because some other point is closer to the point of interest (which happens at every intersection). This leads to a terribly inefficient algorithm, though. The efficient implementation involves another ubiquitous tool of computational geometry: the line-sweep algorithm (Fortune's algorithm, in this case). Its details can be found elsewhere; the important bit is that it considers only the relevant points at any stage of the algorithm.
The Voronoi regions in your tutorial are a little more complex: instead of just points, we have line segments. Fortunately, the line-sweep algorithm handles this nicely; you mostly have to worry about the start and end of each line segment, and conceptually not much changes once you have the basic algorithm down. Again, this is exceptionally helpful for collision detection with a circle: given the Voronoi region, you know which line segment to test collisions against.
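For the AABB in your tutorial the Voronoi-region lookup can even be made implicit: clamping the circle's center to the box lands on whichever feature (edge, corner, or interior) the region would have selected. A minimal sketch, assuming C++17 and illustrative names:

```cpp
#include <algorithm>

// For an axis-aligned box the Voronoi regions can be resolved implicitly:
// clamping the circle's center to the box yields the closest point, which
// lies on an edge (edge region), on a corner (corner region), or is the
// center itself (center inside the box).
bool circleVsAabb(float cx, float cy, float r,
                  float minX, float minY, float maxX, float maxY) {
    float px = std::clamp(cx, minX, maxX);  // closest point on/in the box
    float py = std::clamp(cy, minY, maxY);
    float dx = cx - px, dy = cy - py;
    // As always with circles, the final test is against the radius.
    return dx * dx + dy * dy <= r * r;
}
```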
Does that even help? Feedback appreciated; I'll be happy to clarify anything. Explaining Voronoi diagrams without visuals is probably a bad idea.

Related

How can I make this grid variation of the Voronoi diagram?

I have a problem that reminds me of Voronoi, but I'm hoping that my variation will allow me to avoid using the Voronoi algorithm, and write something quicker.
Here's a horrible image I made in Paint to illustrate my problem:
Say I have an area of a map. Each dot represents a shop, and each square represents a neighbourhood. The Voronoi diagram shows the areas closest to each shop.
If one of those areas dominates a square, then that whole square belongs to that shop.
Is it possible to determine which squares belong to which shop, without having to calculate an intermediate voronoi diagram? It seems as though, since this is like a very rough approximation of a voronoi diagram, there should be a super fast shortcut to generating it.
Perhaps I'm misunderstanding, but can't you just find the vertex which is closest to the centroid of each square?
@user2615897 points out that this isn't generally correct (see comment). Nonetheless, I think it would be a good approximation for a grid that looks like your example (specifically: roughly equal-area cells, with spacings comparable to the square sizes).
My intuition is that without explicitly constructing the diagram, any approach will only be an approximation... but I'm not sure.
This segment of a configuration illustrates the point:
The red vertex is nearest the center of the central square, while the green vertex owns the most area.
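For what it's worth, here is a minimal sketch of the nearest-centroid approximation suggested above (types and names are illustrative); it is O(squares × shops), with no diagram construction:

```cpp
#include <cstddef>
#include <limits>
#include <vector>

struct Point { double x, y; };

// Assign a square to the shop nearest its centroid, skipping the Voronoi
// diagram entirely. Run once per square.
std::size_t nearestShop(const Point& centroid, const std::vector<Point>& shops) {
    std::size_t best = 0;
    double bestD2 = std::numeric_limits<double>::infinity();
    for (std::size_t i = 0; i < shops.size(); ++i) {
        double dx = shops[i].x - centroid.x;
        double dy = shops[i].y - centroid.y;
        double d2 = dx * dx + dy * dy;   // squared distance; no sqrt needed
        if (d2 < bestD2) { bestD2 = d2; best = i; }
    }
    return best;  // index of the shop this square is assigned to
}
```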

Aligning two clouds using two manually selected points

I'm maintaining software which uses PCL. I myself am not very experienced with PCL; I've only tried some examples and tried to understand the official PCL documentation (which is unfortunately mostly sparse, doxygen-generated text). My impression is that only PCL contributors have a real chance of using the library efficiently.
One feature I have to fix in the software is aligning two clouds. The clouds are two objects which should be stacked together with a layer in between (the actual task is to calculate the volume of the layer).
I hope the picture explains the task well. The objects are each scanned from the side to be stacked (one from above and the other from below). On both clouds the user manually selects two points. Then, I hope, there should be a means in PCL to align the two clouds, given the clouds and the coordinates of the points. The alignment is required only in the X-Y plane.
Unfortunately I can't find out which function I should use for this, partly because the PCL documentation is IMHO really modest, partly because of my lack of experience.
My desperate idea was to stack the clouds using P1 as the origin of both, and then rotate the second cloud manually by the angle calculated between P11,P21 and P12,P22. This works, but since the task seems very common to me, I'd expect PCL to provide a dedicated function for it.
Could you point me to a proper API-function, code-snippet, example, similar project or a good book helping to understand PCL API and usage?
Many thanks!
I think this problem does not need PCL. It is simple enough to form the correct linear equation and solve it.
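For instance, assuming P11 and P12 are the user-picked points on the first cloud and P21 and P22 their counterparts on the second, the X-Y rigid transform has a closed form: the rotation angle is the angle between the directions P11→P12 and P21→P22, and the translation then carries the rotated P11 onto P21. A sketch (names mirror the question; the helper function is hypothetical):

```cpp
#include <cmath>

struct Pt { double x, y; };

// Closed-form 2D rigid transform from two point correspondences.
// P11 maps exactly onto P21; P12 maps onto P22 exactly if the two
// segments have equal length (they should, being the same physical points).
void rigidTransform2D(Pt p11, Pt p12, Pt p21, Pt p22,
                      double& angle, double& tx, double& ty) {
    // Angle between the two segment directions.
    double a1 = std::atan2(p12.y - p11.y, p12.x - p11.x);
    double a2 = std::atan2(p22.y - p21.y, p22.x - p21.x);
    angle = a2 - a1;
    // Translation that carries the rotated P11 onto P21.
    double c = std::cos(angle), s = std::sin(angle);
    tx = p21.x - (c * p11.x - s * p11.y);
    ty = p21.y - (s * p11.x + c * p11.y);
}
```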
If you want to use PCL without worrying about the maths too much (though, if the above is a mystery to you, then studying some computational geometry would be very useful), here is my suggestion.
Most PCL operations work on 3D point clouds. I understand from your question that you either only have 2D point clouds or you don't care about the third dimension. In that case, if I were you, I would represent the points as a 3D point cloud and set the z dimension to zero.
You will only need two point clouds with 3 points each, as that is how many points you are feeding to the transformation estimation algorithm. The first two points in each cloud will be the points chosen by the user. The third will be any point that you know is the same in both clouds. You need this third point because otherwise the transform is still ambiguous if a general transform is being computed. However, you can calculate such a point, since you already know two points and you know that all the points lie on a common plane (you have projected them by dropping the z values). Just don't choose it collinear with the other two points; for example, take the point halfway between the two and offset it 2 cm in the perpendicular direction (taking care to go in the correct direction in both clouds).
Then you can use the estimateRigidTransformation functions to find the transform.
http://docs.pointclouds.org/1.7.0/classpcl_1_1registration_1_1_transformation_estimation_s_v_d.html
This function is also good for over-determined problems (it is the workhorse of the ICP algorithm in PCL), but as long as you have enough points to determine the transform, it should work.
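A minimal sketch of that call, assuming pcl::PointXYZ clouds ordered so that src[i] corresponds to tgt[i] (the wrapper function is illustrative):

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/transformation_estimation_svd.h>

// Two tiny clouds of three corresponding points each (the two user-picked
// points plus the constructed third one, z set to zero), fed to
// TransformationEstimationSVD.
Eigen::Matrix4f estimateAlignment(const pcl::PointCloud<pcl::PointXYZ>& src,
                                  const pcl::PointCloud<pcl::PointXYZ>& tgt) {
    pcl::registration::TransformationEstimationSVD<pcl::PointXYZ, pcl::PointXYZ> svd;
    Eigen::Matrix4f transform;
    // Clouds must be the same size, with src[i] corresponding to tgt[i].
    svd.estimateRigidTransformation(src, tgt, transform);
    return transform;  // apply with pcl::transformPointCloud if needed
}
```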

Vector Shape Difference & Intersection

Let me explain my problem:
I have a black vector shape (let's say it's a series of joined, straight lines for now, but it'd be nice if I could also support quadratic curves).
I also have a rectangle of a predefined width and height. I'm going to place it on top of the black shape, and then take the union of the two.
My first issue is that I don't know how to quickly extract vector unions, but I think there is a well-defined formula I can figure out for myself.
My second, and more tricky, issue is how to efficiently detect the position the rectangle needs to be in (i.e., what translation and rotation matrices are needed) in order to maximize the black remaining after the union (see the figure below).
The red outlined shape below is ~33% black; the green is something like 85%; and there are positions for this shape & rectangle wherein either could have 100% coverage.
Obviously, I can brute-force this by trying every translation and rotation value for every point where at least part of the rectangle touches the black shape, keeping track of the position with the most black coverage. The problem is that I can only try a finite number of positions (and may therefore miss the maximum). Apart from that, it feels very inefficient!
Can you think of a more efficient way of tackling this problem?
Something from my Uni days tells me that a Fourier transform might improve the efficiency here, but I can't figure out how I'd do that with a vector shape!
Three ideas that have promise of being faster and/or more precise than a brute-force search:
Suppose you have a 3d physics engine. Define a "cone-shaped" surface where the apex is at say (0,0,-1), the black polygon boundary on the z=0 plane with its centroid at the origin, and the cone surface is formed by connecting the apex with semi-infinite rays through the polygon boundary. Think of a party hat turned upside down and crumpled to the shape of the black polygon. Now constrain the rectangle to be parallel to the z=0 plane and initially so high above the cone (large z value) that it's easy to find a place where it's definitely "inside". Then let the rectangle fall downward under gravity, twisting about z and translating in x-y only as it touches the cone, staying inside all the way down until it settles and can't move any farther. The collision detection and force resolution of the physics engine takes care of the complexities. When it settles, it will be in a position of maximal coverage of the black polygon in a local sense. (If it settles with z<0, then coverage is 100%.) For the convex case it's probably a global maximum. To probabilistically improve the result for non-convex cases (like your example), you'd randomize the starting position, dropping the polygon many times, taking the best result. Note you don't really need a full blown physics engine (though they certainly exist in open source). It's enough to use collision resolution to tell you how to rotate and translate the rectangle in a pseudo-physical way as it twists and slides uniformly down the z axis as far as possible.
Different physics model. Suppose the black area is an attractive field generator in 2d following the usual inverse square rule like gravity and magnetism. Now let the rectangle drift in a damping medium responding to this field. It ought to settle with a maximal area overlapping the black area. There are problems with "nulls" like at the center of a donut, but I don't think these can ever be stable equilibria. Can they? The simulation could easily be done by modeling both shapes as particle swarms. Or, since the rectangle is a simple shape and you are a physicist, you could come up with a closed form for the integral of attractive force between a point and the rectangle. This way only the black shape needs representation as particles. Come to think of it, if you can come up with a closed form for the torque and linear attraction due to two triangles, then you can decompose both shapes with a (e.g. Delaunay) triangulation and get a precise answer. Unfortunately this discussion implies it can't be done analytically. So particle clouds may be the final solution. The good news is that modern processors, particularly GPUs, do very large particle computations with amazing speed. Edit: I implemented this quick and dirty. It works great for convex shapes, but concavities create stable points that aren't what you want. Using the example:
This problem is related to robot path planning, so looking at that literature may turn up some ideas. In RPP you have obstacles and a robot, and you want to find a path the robot can travel while avoiding and/or sliding along them. If the robot is asymmetric and can rotate, then 2d planning is done in a 3d (toroidal) configuration space (C-space) where one dimension is rotation (so it closes on itself). The idea is to "grow" the obstacles in C-space while shrinking the robot to a point. Growing the obstacles is achieved by computing Minkowski differences. If you decompose all polygons into convex shapes, then there is a simple "edge merge" algorithm for computing the MD. When the C-space representation is complete, any 1d path that does not pierce the "grown" obstacles corresponds to a continuous translation/rotation of the robot in world space that avoids the original obstacles. For your problem the white area is the obstacle and the rectangle is the robot. You're looking for any open point at all; this would correspond to 100% coverage. For the less-than-100% case, the C-space would have to be a function on 3d that reflects how "bad" the intersection of the robot is with the obstacle, rather than just a binary value; you'd then look for the least bad point. C-space representation is an open research topic. An octree might work here.
Lots of details to think through in both cases, and they may not pan out at all, but at least these are frameworks to think more about the problem. The physics idea is a bit like using simulated spring systems to do graph layout, which has been very successful.
I don't believe it is possible to find the precise maximum for this problem, so you will need to make do with an approximation.
You could potentially render the vector image into a bitmap and use Haar features for this - they provide a very quick O(1) way of calculating the average colour of a rectangular region.
You'd still need to perform this multiple times for different rotations and positions, but it would bring the algorithmic complexity down from a naive O(n^5) to O(n^3), which may be acceptably fast (with n here being the size of each of the degrees of freedom you are scanning).
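Haar features get their O(1) rectangle sums from an integral image (summed-area table): one O(n^2) precomputation pass, then four lookups per rectangle. A minimal sketch over a grayscale bitmap, with illustrative names:

```cpp
#include <vector>

// Integral image: s is (w+1) x (h+1) with row 0 and column 0 kept at zero,
// so every rectangle sum needs exactly four lookups.
struct Integral {
    int w, h;
    std::vector<long long> s;

    Integral(const std::vector<int>& img, int w, int h)
        : w(w), h(h), s((w + 1) * (h + 1), 0) {
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
                s[(y + 1) * (w + 1) + (x + 1)] =
                    img[y * w + x]
                    + s[y * (w + 1) + (x + 1)]      // above
                    + s[(y + 1) * (w + 1) + x]      // left
                    - s[y * (w + 1) + x];           // double-counted corner
    }

    // Sum over the half-open rectangle [x0, x1) x [y0, y1) in O(1);
    // divide by the area to get the average colour.
    long long sum(int x0, int y0, int x1, int y1) const {
        return s[y1 * (w + 1) + x1] - s[y0 * (w + 1) + x1]
             - s[y1 * (w + 1) + x0] + s[y0 * (w + 1) + x0];
    }
};
```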
Have you thought of keeping track of the remaining white space inside the blocks, with something like if whitespace !== 0?

Path finding for games

What are some path finding algorithms used in games of all types? (Of all types where characters move, anyway) Is Dijkstra's ever used? I'm not really looking to code anything; just doing some research, though if you paste pseudocode or something, that would be fine (I can understand Java and C++).
I know A* is like THE algorithm to use in 2D games. That's great and all, but what about 2D games that are not grid-based? Things like Age of Empires, or Link's Awakening. There aren't distinct square spaces to navigate to, so what do they do?
What do 3D games do? I've read this thingy http://www.ai-blog.net/archives/000152.html, which I hear is a great authority on the subject, but it doesn't really explain HOW, once the meshes are set, the path finding is done. If A* is what they use, then how is something like that done in a 3D environment? And how exactly do the splines work for rounding corners?
Dijkstra's algorithm calculates the shortest path to all nodes in a graph that are reachable from the starting position. For your average modern game, that would be both unnecessary and incredibly expensive.
You make a distinction between 2D and 3D, but it's worth noting that for any graph-based algorithm, the number of dimensions of your search space doesn't make a difference. The web page you linked to discusses waypoint graphs and navigation meshes; both are graph-based and could in principle work in any number of dimensions. Although there are no "distinct square spaces to move to", there are discrete "slots" in the space that the AI can move to and which have been carefully laid out by the game designers.
In conclusion, A* is actually THE algorithm to use in 3D games just as much as in 2D games. Let's see how A* works:
1. At the start, you know the coordinates of your current position and your target position. You make an optimistic estimate of the distance to your destination, for example the length of the straight line between the start position and the target.
2. Consider the adjacent nodes in the graph. If one of them is your target (or contains it, in the case of a navigation mesh), you're done.
3. For each adjacent node (in the case of a navigation mesh, this could be the geometric center of the polygon or some other kind of midpoint), estimate the associated cost of traveling along there as the sum of two measures: the length of the path you'd have traveled so far, and another optimistic estimate of the distance that would still have to be covered.
4. Sort your options from the previous step by their estimated cost, together with all options that you've considered before, and pick the option with the lowest estimated cost. Repeat from step 2.
There are some details I haven't discussed here, but this should be enough to see how A* is basically independent of the number of dimensions of your space. You should also be able to see why this works for continuous spaces.
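To make that concrete, here is a minimal A* sketch over an explicit graph with 2D node positions (a waypoint graph, or navmesh polygons reduced to midpoints, would fit this shape); names are illustrative and the straight-line distance serves as the optimistic estimate:

```cpp
#include <algorithm>
#include <cmath>
#include <functional>
#include <queue>
#include <unordered_map>
#include <vector>

struct Node { float x, y; std::vector<int> neighbors; };

static float dist(const Node& a, const Node& b) {
    return std::hypot(a.x - b.x, a.y - b.y);  // optimistic straight-line estimate
}

std::vector<int> aStar(const std::vector<Node>& graph, int start, int goal) {
    using Entry = std::pair<float, int>;               // (estimated total cost, node)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;
    std::unordered_map<int, float> gCost;              // best known path length so far
    std::unordered_map<int, int> cameFrom;

    gCost[start] = 0.0f;
    open.push({dist(graph[start], graph[goal]), start});

    while (!open.empty()) {
        int current = open.top().second;
        open.pop();
        if (current == goal) {                         // step 2: done
            std::vector<int> path{goal};
            while (path.back() != start) path.push_back(cameFrom[path.back()]);
            std::reverse(path.begin(), path.end());
            return path;
        }
        for (int next : graph[current].neighbors) {    // step 3: expand neighbors
            float g = gCost[current] + dist(graph[current], graph[next]);
            auto it = gCost.find(next);
            if (it == gCost.end() || g < it->second) {
                gCost[next] = g;
                cameFrom[next] = current;
                // step 4: path so far + optimistic remainder orders the queue
                open.push({g + dist(graph[next], graph[goal]), next});
            }
        }
    }
    return {};  // no path exists
}
```

Note that moving to 3D would only change the Node coordinates and the dist function, which is exactly the dimension-independence argued above.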
There are some closely related algorithms that deal with certain problems in the standard A* search. For example recursive best-first search (RBFS) and simplified memory-bounded A* (SMA*) require less memory, while learning real-time A* (LRTA*) allows the agent to move before a full path has been computed. I don't know whether these algorithms are actually used in current games.
As for the rounding of corners, this can be done either with distance lines (where corners are replaced by circular arcs), or with any kind of spline function for full-path smoothing.
In addition, algorithms are possible that rely on a gradient over the search space (where each point in space is associated with a value), rather than a graph. These are probably not applied in most games because they take more time and memory, but might be interesting to know about anyway. Examples include various hill-climbing algorithms (which are real-time by default) and potential field methods.
Methods to procedurally obtain a graph from a continuous space exist as well, for example cell decomposition, Voronoi skeletonization and probabilistic roadmap skeletonization. The former would produce something compatible with a navigation mesh (though it might be hard to make it as efficient as a hand-crafted navigation mesh) while the latter two produce results that will be more like waypoint graphs. All of these, as well as potential field methods and A* search, are relevant to robotics.
Sources:
Artificial Intelligence: A Modern Approach, 2nd edition
Introduction to The Design and Analysis of Algorithms, 2nd edition

Newell's Method to Calculate Plane Equation of Concave Polygon - Improvements?

3 points are needed to define a plane. Newell's method is known to fail if the 3 points are chosen around a concave corner - the normal of the resulting plane will point in the direction opposite to the expected one.
Are there any improvements to Newell's method that help in choosing a valid starting point? Or is there an alternative algorithm that doesn't have this issue?
Since your only interest in the plane is whether a given point is "deeper" or "closer" than it, I suppose you expect the normal of the plane to point towards the viewpoint (or away from it, depending on your convention). So just calculate the dot-product of the normal and the vector from the viewpoint to one of the three points, and look at its sign; if it's positive when you generally expect negative (or vice-versa), then reverse the normal.
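A sketch of that sign check (Vec3 and the helper names are illustrative):

```cpp
struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Flip the normal if it faces away from the viewpoint: test the sign of
// the dot product with the vector from the viewpoint to a point on the plane.
Vec3 orientToward(Vec3 normal, const Vec3& viewpoint, const Vec3& pointOnPlane) {
    Vec3 toPlane{ pointOnPlane.x - viewpoint.x,
                  pointOnPlane.y - viewpoint.y,
                  pointOnPlane.z - viewpoint.z };
    if (dot(normal, toPlane) > 0) {        // pointing away from the viewer
        normal = { -normal.x, -normal.y, -normal.z };
    }
    return normal;
}
```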
Just came across this after some Googling for Newell's Plane algorithm. I'm also interested in using the algorithm, but my reading of various literature leaves me thinking that it is the "three point" method that fails on the concave corner case, while Newell's Method will work correctly. Here is the relevant passage:
This technique, first suggested by Newell (Sutherland et al., 1974), works for concave polygons and polygons containing collinear vertices, as well as for nonplanar polygons, e.g., polygons resulting from perturbed vertex locations...
Newell's method may seem inefficient for planar polygons, since it uses all the vertices of a polygon when, in fact, only three points are needed to define a plane. It should be noted, though, that for arbitrary planar polygons, these three points must be chosen very carefully:
1. Three points uniquely define a plane if and only if they are not collinear; and
2. if the three points are chosen around a "concave" corner, the normal of the resulting plane will point in the direction opposite to the expected one.
Checking for these properties would reduce the efficiency of the three-point method as well as making its coding rather inelegant. A good strategy may be that of using the three-point method for polygons that are already known to be planar and strictly convex (no collinear vertices), and using Newell's method for the rest.
Source: Filippo Tampieri. “Newell's Method for Computing the Plane Equation of a Polygon”. In Graphics Gems III, Academic Press, 1992, pp. 231–232.
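For completeness, a sketch of Newell's method itself, which accumulates the normal over every consecutive vertex pair and so never has to pick three "careful" points (Vec3 is illustrative):

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { double x, y, z; };

// Newell's method: sum the standard products over each edge (a, b).
// Works for concave, collinear-vertex, and slightly nonplanar polygons;
// the result is left unnormalized.
Vec3 newellNormal(const std::vector<Vec3>& poly) {
    Vec3 n{0.0, 0.0, 0.0};
    for (std::size_t i = 0; i < poly.size(); ++i) {
        const Vec3& a = poly[i];
        const Vec3& b = poly[(i + 1) % poly.size()];   // wrap to close the polygon
        n.x += (a.y - b.y) * (a.z + b.z);
        n.y += (a.z - b.z) * (a.x + b.x);
        n.z += (a.x - b.x) * (a.y + b.y);
    }
    return n;  // plane: dot(n, p) + d = 0, with d = -dot(n, any vertex)
}
```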
