2D collision for games without AABB

What exactly is the best way to detect 2D game collisions? I use AABBs (axis-aligned bounding boxes), but if you have a big circle or something, you register a hit when you're still 200 pixels away. Would the best way be to just check whether the pixels in the two images are touching? Please let me know a good method.
EDIT:
OK, so now I realize how simple circle collision is. But say I have an oval or something that isn't really a regular shape, or even a square that is rotated 45 degrees.

If you have circles you can use circle-to-circle collision: just take the distance between the centers and compare it with the sum of the radii.
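A minimal sketch of that test (my addition, in Python; the function name is just illustrative). Comparing squared distances avoids the square root:

def circles_collide(x1, y1, r1, x2, y2, r2):
    # The circles overlap when the distance between centers is at most r1 + r2
    dx = x2 - x1
    dy = y2 - y1
    return dx * dx + dy * dy <= (r1 + r2) ** 2

print(circles_collide(0, 0, 10, 15, 0, 10))  # True: centers 15 px apart, radii sum to 20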
Besides that, it really depends on what you need. There is a plethora of collision-detection algorithms (mostly to speed things up, e.g. by exploiting coherence between frames), but that's beyond the scope of a short general note, and you'd need to specify your problem a bit more.

Related

How to find the sides of a rectangle if you know the sides of a quadrilateral inside the rectangle?

I'm working on an application that uses an accelerometer to measure the sides of a room. I know the measurements will not be exact, but that's fine.
Ideally I would like the program to be able to calculate the sides of any room shape, not only rectangles and squares (and with more than 4 corners), but I'm starting with something simpler (rectangle-shaped rooms).
My problem is not with the accelerometer but with the math side of the code. Because I measure the room by placing the phone on a wall and then walking to the connected wall, I get the measurements of a quadrilateral inscribed in the rectangle. From there, if it's possible, I want to get the measurements of the sides of the rectangle, but I don't really know how.
What I've tried so far:
Divided the quadrilateral inside the rectangle in half to make 2 triangles. Then I calculated the diagonal using the Pythagorean theorem. Then I used the law of cosines to calculate one of the angles, and did the same again to find another. Then I found the 3rd angle from the other 2 angles (c = 180 - a - b). I did this for both triangles.
I don't know whether this is the right approach and I have just missed something simple, or whether I simply don't have enough information to solve for the sides of the rectangle. I have looked at some geometry and trigonometry online and haven't found anything that gives me a solution. But like I said, maybe I missed something simple.
Any push in the right direction would be helpful.
[Figure: the rectangle and the quadrilateral]
The problem lacks a unique solution. Imagine placing a pair of calipers around the quadrilateral. You'll be able to rotate the calipers around it, and at each angle the calipers will be able to close to a different width. Each of those widths is a different possible room dimension.
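To make the ambiguity concrete, here is a small Python sketch (my addition, with a made-up example quadrilateral): rotating the same quadrilateral and taking its tight bounding rectangle at a few different angles yields rectangles of different dimensions, each of which contains and touches the quadrilateral.

import math

quad = [(0.0, 0.0), (4.0, 1.0), (5.0, 4.0), (1.0, 3.0)]  # hypothetical measured shape

def bounding_box_at_angle(points, angle_deg):
    # Rotate the points, then take the axis-aligned bounding box of the result
    a = math.radians(angle_deg)
    rotated = [(x * math.cos(a) - y * math.sin(a),
                x * math.sin(a) + y * math.cos(a)) for x, y in points]
    xs = [p[0] for p in rotated]
    ys = [p[1] for p in rotated]
    return max(xs) - min(xs), max(ys) - min(ys)

for angle in (0, 15, 30, 45):
    w, h = bounding_box_at_angle(quad, angle)
    print("angle %2d deg: rectangle %.2f x %.2f" % (angle, w, h))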
You'll also never get an accurate position measurement using the inertial sensors in a phone to begin with. The accelerometers and gyros aren't even close to accurate enough. GPS is, but only outdoors, away from structures that cause multipath artifacts. Quick and sloppy with a tape measure will win every time.

Implementing pathfinding in tiled 2d world

I have a 2d world made of tiles. Tiles are either passable, non-passable or have some sort of movement penalty.
All entities and tiles have their own hitboxes and sizes for collision detection.
Each tile is 16x16 px.
Most examples I've read seem to suggest moving from the center of one tile to the center of another tile. As we see from the picture below, that red path is hardly optimal, nor does it take entity size into account. The pathfinding nodes are also placed in a 2D array, with only 8 possible directions from each node.
But wouldn't the actual shortest path be something like this?
How should I implement pathfinding?
Should tiles be split into smaller nodes for pathfinding, or is there some other way to get more accurate routes? Even if I split each tile into 10x10 pathfinding nodes, it still wouldn't find the shortest line between 2 points.
Should there be more than 8 directions, and if so, how should that be implemented?
For example, if my world were 50x50 tiles, what should the pathfinding map look like and how should it be generated?
It depends on your definition of "shortest path" and what you plan to do with it.
In your example, it appears that you consider a valid move to be from the center of one tile to the center of any other tile in unobstructed view. How you'd validate moves to partially obstructed tiles is not clear. This differs from the geometrically shortest path, which would obviously hug the wall, and from a realistic shortest path, which would use a unit width and turn radius to avoid walls and sudden changes in direction.
A common approach is to use A* as usual, and then post-process the path in a number of ways to optimize and smooth it. This works both for grid based worlds like yours, and for more general navmeshes.
Gamasutra had a nice overview of this called Toward More Realistic Pathfinding, with a variety of ideas and techniques from smoothing zigzags and adding curves, to optimizing paths for units with acceleration and direction.
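As a rough illustration of that post-processing step (my sketch, not from the article; it assumes the grid is a 2D list indexed as grid[y][x] where True means walkable, and the path is a list of (x, y) tile coordinates): a common smoothing pass is "string pulling", which drops intermediate waypoints whenever a straight line between two nodes crosses only walkable tiles.

def line_of_sight(grid, a, b):
    # Walk the tiles along the segment a -> b with Bresenham's algorithm;
    # the line is clear only if every visited tile is walkable.
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        if not grid[y][x]:
            return False
        if (x, y) == (x1, y1):
            return True
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy

def smooth_path(grid, path):
    # Greedily drop waypoints that are directly visible from an earlier anchor
    if len(path) < 3:
        return list(path)
    smoothed = [path[0]]
    anchor = 0
    for i in range(2, len(path)):
        if not line_of_sight(grid, path[anchor], path[i]):
            smoothed.append(path[i - 1])
            anchor = i - 1
    smoothed.append(path[-1])
    return smoothed

# Example: on an open 5x5 grid, a zig-zag grid path collapses to its endpoints
grid = [[True] * 5 for _ in range(5)]
print(smooth_path(grid, [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2)]))  # [(0, 0), (4, 2)]

Note that a plain Bresenham walk can cut corners diagonally; a stricter supercover traversal (visiting every tile the segment touches) avoids that.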
I had almost the same problem, and I coded precomputation software that computes paths from every tile to every tile, with some optimizations.
You can find the source code here: https://github.com/FurkanGozukara/pathfinding-2d-tile-map
The development video is here: https://www.youtube.com/watch?v=jRTA0iLjv6M
I came up with my own algorithm and implementation, so it is probably not the best nor the most optimized one, but it is already implemented in my free browser-based game MonsterMMORPG and it works great: https://www.monstermmorpg.com/

Vector Shape Difference & Intersection

Let me explain my problem:
I have a black vector shape (let's say it's a series of joined, straight lines for now, but it'd be nice if I could also support quadratic curves).
I also have a rectangle of a predefined width and height. I'm going to place it on top of the black shape, and then take the union of the two.
My first issue is that I don't know how to quickly extract vector unions, but I think there is a well-defined formula I can figure out for myself.
My second, and trickier, issue is how to efficiently find the position the rectangle needs to be in (i.e., what translation and rotation the matrices need) in order to maximize the black remaining after the union (see the figure below).
The red-outlined shape below is ~33% black; the green one is something like 85%; and there are positions for this shape & rectangle where either could reach 100% coverage.
Obviously, I can brute-force this by trying every translation and rotation value for every point where at least part of the rectangle touches the black shape, and keep track of the one with the most black coverage. The problem is, I can only try a finite number of positions (and may therefore miss the maximum). Apart from that, it feels very inefficient!
Can you think of a more efficient way of tackling this problem?
Something from my Uni days tells me that a Fourier transform might improve the efficiency here, but I can't figure out how I'd do that with a vector shape!
Three ideas that have promise of being faster and/or more precise than brute force search:
1. Suppose you have a 3d physics engine. Define a "cone-shaped" surface where the apex is at say (0,0,-1), the black polygon boundary on the z=0 plane with its centroid at the origin, and the cone surface is formed by connecting the apex with semi-infinite rays through the polygon boundary. Think of a party hat turned upside down and crumpled to the shape of the black polygon. Now constrain the rectangle to be parallel to the z=0 plane and initially so high above the cone (large z value) that it's easy to find a place where it's definitely "inside". Then let the rectangle fall downward under gravity, twisting about z and translating in x-y only as it touches the cone, staying inside all the way down until it settles and can't move any farther. The collision detection and force resolution of the physics engine takes care of the complexities. When it settles, it will be in a position of maximal coverage of the black polygon in a local sense. (If it settles with z<0, then coverage is 100%.) For the convex case it's probably a global maximum. To probabilistically improve the result for non-convex cases (like your example), you'd randomize the starting position, dropping the rectangle many times and taking the best result. Note you don't really need a full-blown physics engine (though they certainly exist in open source). It's enough to use collision resolution to tell you how to rotate and translate the rectangle in a pseudo-physical way as it twists and slides uniformly down the z axis as far as possible.
2. Different physics model. Suppose the black area is an attractive field generator in 2d, following the usual inverse-square rule like gravity and magnetism. Now let the rectangle drift in a damping medium, responding to this field. It ought to settle with a maximal area overlapping the black area. There are problems with "nulls", like at the center of a donut, but I don't think these can ever be stable equilibria. Can they? The simulation could easily be done by modeling both shapes as particle swarms. Or, since the rectangle is a simple shape and you are a physicist, you could come up with a closed form for the integral of attractive force between a point and the rectangle. This way only the black shape needs representation as particles. Come to think of it, if you can come up with a closed form for the torque and linear attraction due to two triangles, then you can decompose both shapes with a (e.g. Delaunay) triangulation and get a precise answer. Unfortunately this discussion implies it can't be done analytically, so particle clouds may be the final solution. The good news is that modern processors, particularly GPUs, do very large particle computations with amazing speed. Edit: I implemented this quick and dirty. It works great for convex shapes, but concavities create stable points that aren't what you want. Using the example:
3. This problem is related to robot path planning, and looking at that literature may turn up some ideas. In RPP you have obstacles and a robot, and you want to find a path the robot can travel while avoiding and/or sliding along them. If the robot is asymmetric and can rotate, then 2d planning is done in a 3d (toroidal) configuration space (C-space), where one dimension is rotation (so it closes on itself). The idea is to "grow" the obstacles in C-space while shrinking the robot to a point. Growing the obstacles is achieved by computing Minkowski differences. If you decompose all polygons to convex shapes, then there is a simple "edge merge" algorithm for computing the MD. When the C-space representation is complete, any 1d path that does not pierce the "grown" obstacles corresponds to a continuous translation/rotation of the robot in world space that avoids the original obstacles. For your problem the white area is the obstacle and the rectangle is the robot. You're looking for any open point at all; this would correspond to 100% coverage. For the less-than-100% case, the C-space would have to be a function on 3d that reflects how "bad" the intersection of the robot is with the obstacle, rather than just a binary value. You're looking for the least bad point. C-space representation is an open research topic. An octree might work here.
Lots of details to think through in both cases, and they may not pan out at all, but at least these are frameworks to think more about the problem. The physics idea is a bit like using simulated spring systems to do graph layout, which has been very successful.
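As a concrete illustration of the Minkowski step in the third idea (my sketch, assuming both polygons are convex and given as counter-clockwise vertex lists): the "grown" obstacle is the Minkowski sum of the obstacle with the robot reflected through the origin, and for convex polygons the sum can be computed by merging the two edge lists sorted by angle.

def minkowski_sum(P, Q):
    # Minkowski sum of two convex polygons, each a CCW list of (x, y) vertices
    def reorder(poly):
        # start from the bottom-most (then left-most) vertex
        i = min(range(len(poly)), key=lambda k: (poly[k][1], poly[k][0]))
        return poly[i:] + poly[:i]

    P, Q = reorder(P), reorder(Q)
    P = P + [P[0], P[1]]  # duplicate the first two vertices so edges wrap around
    Q = Q + [Q[0], Q[1]]
    result = []
    i = j = 0
    while i < len(P) - 2 or j < len(Q) - 2:
        result.append((P[i][0] + Q[j][0], P[i][1] + Q[j][1]))
        # compare the directions of the current edges and advance the lower one
        cross = ((P[i + 1][0] - P[i][0]) * (Q[j + 1][1] - Q[j][1])
                 - (P[i + 1][1] - P[i][1]) * (Q[j + 1][0] - Q[j][0]))
        if cross >= 0 and i < len(P) - 2:
            i += 1
        if cross <= 0 and j < len(Q) - 2:
            j += 1
    return result

# Grow a square obstacle by a unit-square "robot" reflected through the origin
obstacle = [(2, 2), (5, 2), (5, 5), (2, 5)]
robot = [(0, 0), (1, 0), (1, 1), (0, 1)]
reflected = [(-x, -y) for x, y in robot]
print(minkowski_sum(obstacle, reflected))  # [(1, 1), (5, 1), (5, 5), (1, 5)]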
I don't believe it is possible to find the precise maximum for this problem, so you will need to make do with an approximation.
You could potentially render the vector image into a bitmap and use an integral image (summed-area table), as used for Haar features: it gives a very quick O(1) way of calculating the average colour of a rectangular region.
You'd still need to perform this multiple times for different rotations and positions, but it would bring the algorithmic complexity down from a naive O(n^5) to O(n^3), which may be acceptably fast (with n here being the size of each of the degrees of freedom you are scanning).
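For reference, a minimal sketch of that rectangle-sum trick (my addition; the names are just illustrative): build a summed-area table once, then the sum (and hence the average colour) of any axis-aligned rectangle comes from four lookups.

def integral_image(img):
    # img is a 2D list of numbers; returns a (h+1) x (w+1) summed-area table
    h, w = len(img), len(img[0])
    sat = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            sat[y + 1][x + 1] = sat[y][x + 1] + row_sum
    return sat

def rect_sum(sat, x0, y0, x1, y1):
    # Sum over the half-open rectangle [x0, x1) x [y0, y1), in O(1)
    return sat[y1][x1] - sat[y0][x1] - sat[y1][x0] + sat[y0][x0]

# Example: fraction of "black" (1) pixels inside the top-left 2x2 window
img = [[1, 0, 1],
       [1, 1, 0],
       [0, 1, 1]]
sat = integral_image(img)
print(rect_sum(sat, 0, 0, 2, 2) / 4.0)  # 0.75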
Have you thought about keeping track of the remaining white space inside the blocks, with something like if whitespace !== 0?

Collision Detection (Ground & Slopes) in 2D Platform Game using Pygame Rects

First off, I am not after any instructions on logic for collision detection; I get it.
What I am trying to work out is the least complicated way to do this with Pygame using Sprites & Rects. I want to be able to check the Player's collisions against the ground, walls & slopes. In theory it is quite straightforward, but I'm having difficulty because it seems like you cannot do this with one Rect.
One Rect is simple enough to get you collisions in the X plane against walls. The same Rect could also be used in the Y plane against solids, but not with slopes, since the collision routines in Pygame check the whole Rect (or mask) rather than, say, just the bottom middle of the Rect. It seems you additionally need a number of "sprites" that are 1x1 pixel, placed in various spots around the Player, to check collisions with.
What's the easiest way to do this, without having a bunch of 3, 4, or more separate "collision pixels" to check against slopes?
Geoff
It sounds to me like you want pixel-perfect collision detection. Here you can find a premade function that seems to be what you want/need.
You could use pygame.mask, which provides pixel-perfect collision detection implemented in C:
http://www.pygame.org/docs/ref/mask.html
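A minimal sketch of how that might look (my addition; it assumes sprite objects with an .image Surface using per-pixel alpha and a .rect for their on-screen position):

import pygame

def pixel_perfect_hit(sprite_a, sprite_b):
    # Build masks from the alpha channels and test overlap at the relative offset;
    # returns the first overlapping point (in sprite_a's local coordinates) or None.
    mask_a = pygame.mask.from_surface(sprite_a.image)
    mask_b = pygame.mask.from_surface(sprite_b.image)
    offset = (sprite_b.rect.x - sprite_a.rect.x, sprite_b.rect.y - sprite_a.rect.y)
    return mask_a.overlap(mask_b, offset)

pygame.sprite.collide_mask does essentially the same thing, and can be passed as the collided argument to pygame.sprite.spritecollide if you are using sprite groups.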

Find X/Y/Z rotation angles from one position to another

I am using a 3D engine called Electro which is programmed using Lua. It's not a very good 3D engine, but I don't have any choice in the matter.
Anyway, I'm trying to take a flat quadrilateral and transform it to be in a specific location and orientation. I know exactly where it is supposed to go (i.e. I know the exact vertices where the corners should end up), but I'm hitting a snag in getting it rotated to the right place.
Electro does not allow you to apply transformation matrices. Instead, you must transform models by using built-in scale, position (that is, translate), and rotation functions. The rotation function takes an object and 3 angles (in degrees):
E.set_entity_rotation(entity, xangle, yangle, zangle)
The documentation does not specify this, but after looking through Electro's source, I'm reasonably certain that the rotation is applied in the order X rotation -> Y rotation -> Z rotation.
My question is this: If my starting object is a flat quadrilateral lying on the X-Z plane centered at the origin, and the destination position is in a different location and orientation where the destination vertices are known, how could I use Electro's rotation function to rotate it into the correct orientation before I move it to the correct place?
I've been racking my brain for two days trying to figure this out, looking at math that I don't understand dealing with Euler angles and such, but I'm still lost. Can anyone help me out?
Can you tell us more about the problem? It sounds odd phrased in this way. What else do you know about the final orientation you have to hit? Is it completely arbitrary or user-specified or can you use more knowledge to help solve the problem? Is there any other Electro API you could use to help?
If you really must solve this general problem, then too bad, it's hard, and underspecified. Here's some guy's code that may work, from euclideanspace.com.
First do the translation to bring one corner of the quadrilateral to the point you'd like it to be, then apply the three rotational transformations in succession:
If you know where the quad is, and you know exactly where it needs to go, and you're certain that there are no distortions of the quad to fit it into the place where it needs to go, then you should be able to figure out the angles using the vector scalar product.
If you have two vectors, the angle between them can be calculated by taking the dot product.
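For what it's worth, a minimal sketch of that computation (my addition, plain Python with made-up example vectors): the angle between vectors a and b is acos(a.b / (|a| |b|)).

import math

def angle_between(a, b):
    # Angle in degrees between vectors a and b (any dimension)
    dot = sum(ai * bi for ai, bi in zip(a, b))
    norm_a = math.sqrt(sum(ai * ai for ai in a))
    norm_b = math.sqrt(sum(bi * bi for bi in b))
    # clamp to [-1, 1] to avoid domain errors from floating-point round-off
    cos_theta = max(-1.0, min(1.0, dot / (norm_a * norm_b)))
    return math.degrees(math.acos(cos_theta))

print(angle_between((0.0, 1.0, 0.0), (0.0, 0.0, 1.0)))  # 90.0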
