How do I determine if a series of points (or polygon) is within a rectangular region? - apache-flex

I have been looking at posts about determining if a point lies within a polygon or not and the answers are either too vague, abstract, or complex for me. So I am going to try to ask my question specific to what I need to do.
I have a set of points that describe a non-straight line (sometimes a closed polygon). I have a rectangular "view" region. I need to determine as efficiently as possible whether any of the line segments (or polygon borders) pass through the view region.
I can't simply test each point to see if it lies within the view region. It is possible for a segment to pass through the region without either of its endpoints actually lying inside the region (i.e. the line is drawn across the region).
Here is an example of what I want to determine (red means the function should return true for the set of points, blue means it should return false, example uses straight lines and rectangles because I am not an artist).
Another condition I want to be able to account for (though the method/function may be a separate one), is to determine not just whether a polygon's border passes through the rectangular region, but whether the region is entirely encompassed by the polygon. The nuance here is that in the situation first described above, if I am only concerned with drawing borders, the method should return false. But in the situation described here, if I need to fill the polygon region then I need the function to return true. I currently do not need to worry about testing "donut" shaped polygons (thank God!).
Here is an example illustrating the nuance (the red rectangle does not have a single vertex or border segment passing through the on-screen region, but it should still be considered on-screen):
For the "does any line segment or polygon border pass through or lie on screen?" problem I know I can come up with a solution (albeit perhaps not an efficient one). Even though it is more verbose, the conditions are clear to me. But the second "is polygon region on screen?" problem is a little harder. I'm hoping someone might have a good suggestion for doing this. And if one solution is easily implemented on top of the other, well, booya.
As always, thank you in advance for any help or suggestions.
PS I have a function for determining line intersection, but it seems like overkill to use it to compare each segment to each side of the on-screen region because the on-screen region is ALWAYS a plain [0, 0, width, height] rectangle. Isn't there some kind of short-cut?

What you are searching for is called a collision detection algorithm. A Google search will lead you to plenty of implementations in various languages, as well as a lot of theory.
There is plenty of geometric theory behind it, from simple bisector calculations to Constrained Delaunay Triangulations and Voronoi Diagrams (to name just a couple of examples). The right choice depends on the shape of the objects, the number of dimensions and, of course, the trade-off between the exactness you need and the computing time you can afford ;-)
Good read

PS I have a function for determining line intersection, but it seems like overkill to use it to compare each segment to each side of the on-screen region because the on-screen region is ALWAYS a plain [0, 0, width, height] rectangle. Isn't there some kind of short-cut?
It's not overkill; it's necessary here. The only kind of shortcut I can think of is to hardcode the values [0, 0, width, height] into that function and simplify it a bit.
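To make the hardcoded-rectangle idea concrete, here is a rough sketch (in C# rather than Flex/ActionScript, purely illustrative; all names are mine) of the usual outcode trick: classify each endpoint against [0, 0, width, height], accept a segment immediately if either endpoint is on-screen, reject it if both endpoints lie beyond the same edge, and only fall back to the full line-intersection test for the remaining ambiguous cases. For the second question: if no segment touches the view at all, a single point-in-polygon test of any view corner (e.g. (0, 0)) against the polygon tells you whether the whole region is encompassed.

    static class ViewTest
    {
        // Outcode bits for the fixed view rectangle [0, 0, width, height].
        const int Left = 1, Right = 2, Top = 4, Bottom = 8;

        static int Outcode(double px, double py, double width, double height)
        {
            int code = 0;
            if (px < 0) code |= Left; else if (px > width) code |= Right;
            if (py < 0) code |= Top; else if (py > height) code |= Bottom;
            return code;
        }

        // Quick test of one segment (a -> b) against the view rectangle.
        //   true  -> an endpoint is on-screen, so the segment is visible;
        //   false -> both endpoints are beyond the same edge, so it cannot be visible;
        //   null  -> ambiguous (it may cut across the view), fall back to the
        //            full segment-vs-rectangle-side intersection test.
        public static bool? SegmentQuickTest(double ax, double ay, double bx, double by,
                                             double width, double height)
        {
            int ca = Outcode(ax, ay, width, height);
            int cb = Outcode(bx, by, width, height);
            if (ca == 0 || cb == 0) return true;
            if ((ca & cb) != 0) return false;
            return null;
        }
    }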

Related

2D space organic projection

I'm currently working on a GLSL shader (EDIT: I'm starting to think that a shader isn't necessarily the best solution, and since I'm doing this in Processing, I can consider a vector-based solution too) that is supposed to render something like this, but filling the entire 2D space (or at least a larger surface):
To do so, I want to map the repeating patterns onto the general leaf shapes that you can see at the top of the sketch below.
My problem is this mapping part: is it possible to find a function that projects XY screen coordinates to another position in such a way that I can map my patterns the way I want? The leaves must have some kind of UV coordinates inside them (to be able to apply the repeating pattern), and the transformation must be a conformal map, because otherwise there would be distortions in the pattern.
I've tried several lines of thought, but I haven't managed to get the final result:
recursion:
The idea is to first cut the plane into stripes, then cut the stripes into leaf shapes that touch the top and the bottom of the stripes (because that's easier), and finally recursively cut the leaves in half until the result looks more random. As long as the borders of the stripes aren't on screen, it shouldn't be too noticeable. The biggest difficulty here is avoiding the distortion.
voronoi:
It may be possible to find a distance function guided by a vector field such that the Voronoi diagram looks more like what I'm after. However, I don't think it will be possible to get the UV mapping I want. If that's the case, a good approximation would do the trick; the result doesn't need to be exact as long as it isn't too noticeable.
distortion:
It could also be possible to find a more direct way to do this projection. While desperately looking for a solution, I came across the fact that a holomorphic complex function (with a non-vanishing derivative) is a conformal map, but I haven't managed to go any further.
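To make that conformal-map idea concrete: any holomorphic function with a non-vanishing derivative preserves angles, so applying one to screen coordinates warps the pattern without distorting it locally. A minimal sketch (C# rather than GLSL, purely illustrative; the choice of exp is arbitrary):

    using System.Numerics;

    static class ConformalDemo
    {
        // Maps a screen position to new UV coordinates through f(z) = exp(z).
        // Any holomorphic f with f'(z) != 0 on the region behaves the same way.
        public static (double U, double V) Warp(double x, double y)
        {
            var w = Complex.Exp(new Complex(x, y));
            return (w.Real, w.Imaginary);
        }
    }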
Finally, there may be another solution I haven't thought of, and I would be glad if someone gave me a complete solution or just a new idea I haven't tried yet.

NetTopologySuite squash multipolygon to polygon

I am working in ASP.NET Core using the most up-to-date GeoAPI and NetTopologySuite versions for Core. What I'm trying to do should be fairly simple, but I can't seem to find the proper way to do it, either through experimentation or googling. Or even what to call it, to be honest, which makes googling harder.
Hopefully someone can kick me in the right direction.
I have a multipolygon which may be made up of one or more polygons. I want to create a buffer around that multipolygon's points out to X distance. This is basically a map overlay with concentric areas of interest. A given point of interest may fall in the original multipolygon's shapes... or it might fall in the first or second buffer area. Kinda like an onion, if the core of an onion had random shapes in it.
That first part is simple. Just iterate the multipolygon's points and apply a buffer to each point using the buffer method:
var bufferZonePoints = new List<IPolygon>();
foreach (var coordinate in multiPolygon.Coordinates)
{
    // Coordinate has no Buffer(); wrap each one in a Point first.
    var circle = this.geometryFactory.CreatePoint(coordinate).Buffer(x);
    bufferZonePoints.Add((IPolygon)circle);
}
var bufferZone = this.geometryFactory.CreateMultiPolygon(bufferZonePoints.ToArray());
That's fine. But it's giving me another multipolygon made up of thousands of points. When I use this as a map overlay, I get a hurricane of circles following the vague outlines of the original shape sort of looking like a spirograph drawing. All I want is basically the outer boundary of all the buffer circles without all the points in the center.
I tried doing a ConvexHull on the multipolygon and it looked correct at first, until I realized that it was shaving off the angles on the outside in order to get the smallest convex polygon all those points fit into (which is what convex hulls do, after all). But that causes problems in the stuff I'm overlaying. Some points of interest may be outside the actual buffer, but end up inside if the convex hull rounds off a bumpy area of the zone (I hope that makes sense).
Basically what I'm trying to do is take that multipolygon made up of all those buffered points and squash it down into a single polygon made up of all the outermost boundaries of the buffers, but without all the spirograph garbage in the middle. I don't really want a ConvexHull. I've also tried Union and the GeometryCombiner class, but none of these are doing what I want.
I don't know if this makes the mud any clearer, but there is a setting in QGIS where, if you plunk down two circles that would overlap, they combine into one big blob like soap bubbles and the boundaries in between vanish. That's kinda what I'm trying to do via code.
Does that make sense? Can anyone help?
After continuing to experiment with my mapping tool, it would appear that Union DOES actually give me the result I wanted.
I started with two polygons that were far enough apart to make it obvious what was going on, did a union on them and got back just the shell of the combination of the two. As I added more of the buffered points to it, the shape became a bit more obvious.
That's pretty well what I wanted.
Thanks anyway though! Hopefully this will help someone else.
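For anyone who lands here later: the dissolved outline can also be had without the per-point loop, because NetTopologySuite's Buffer() on the whole geometry already merges overlapping circles, and Union() of two geometries does the soap-bubble merge described above. A minimal sketch, assuming the same multiPolygon and distance x as in the question, and assuming the zones are meant to follow the whole shape rather than just its vertices:

    // Buffering the whole multipolygon dissolves overlaps into a single outline
    // (the original shape grown by x), with no spirograph artefacts to clean up.
    IGeometry innerZone = multiPolygon.Buffer(x);

    // A second, wider "onion layer" for the next area of interest.
    IGeometry outerZone = multiPolygon.Buffer(2 * x);

    // Two arbitrary geometries can likewise be merged soap-bubble style.
    IGeometry blob = innerZone.Union(outerZone);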

How to tell if a point is within a polygon for texture

This seems to be a rather frequently asked question (hear me out first! :)
I've created a polygon with perlin noise, and it looks like this:
I need to generate a texture from this array of points. (I'm using Monogame/XNA, but I assume this question is somewhat agnostic).
Anyway, researching this problem tells me that many people use raycasting to count how many times a ray from the point crosses the polygon's boundary (an odd number of crossings means inside; an even number, including zero, means outside). This makes sense, but I wonder if there is a better way, given that I have all of the points.
Doing a small raycast for every pixel I want to fill in seems excessive - is this the only/best way?
If I have a small 500px square image I need to fill in, I'll need to do a raycast for 250,000 individual pixels, which seems like an awful lot.
If you want to do this for every pixel, you can use a sweeping line:
Start from the topmost coordinate and examine a horizontal ray from left to right. Calculate all intersections with the polygon and sort them by their x-coordinate. Then iterate over the pixels on that line and keep track of whether you are in or out: whenever you encounter an intersection, switch to the other side. If a pixel is in, set the texture; if not, ignore it. Do this from top to bottom for every possible horizontal line.
The intersection calculation can be sped up in several ways, e.g. by using an acceleration data structure like a grid or quadtree, or by working out beforehand which edges the line will intersect or touch. Then, when you sweep the line, you will already know which edges will cause an intersection.
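A rough sketch of that sweep (C#, close enough to MonoGame territory; the vertex and array names are made up), using the even-odd rule sampled at pixel centres:

    using System;
    using System.Collections.Generic;

    static class PolygonFill
    {
        // Marks which pixels of a width x height texture lie inside the polygon,
        // given the polygon as a closed loop of vertices (last connects to first).
        public static bool[,] ScanlineFill((float X, float Y)[] polygon, int width, int height)
        {
            var inside = new bool[width, height];
            int n = polygon.Length;

            for (int y = 0; y < height; y++)
            {
                float scanY = y + 0.5f;          // sample at pixel centres
                var xs = new List<float>();

                // Intersect the scanline with every edge whose endpoints straddle it.
                for (int i = 0; i < n; i++)
                {
                    var a = polygon[i];
                    var b = polygon[(i + 1) % n];
                    if ((a.Y <= scanY && b.Y > scanY) || (b.Y <= scanY && a.Y > scanY))
                    {
                        float t = (scanY - a.Y) / (b.Y - a.Y);
                        xs.Add(a.X + t * (b.X - a.X));
                    }
                }

                xs.Sort();

                // Fill between successive pairs of intersections (even-odd rule).
                for (int i = 0; i + 1 < xs.Count; i += 2)
                {
                    int xStart = Math.Max(0, (int)Math.Ceiling(xs[i] - 0.5f));
                    int xEnd = Math.Min(width - 1, (int)Math.Floor(xs[i + 1] - 0.5f));
                    for (int x = xStart; x <= xEnd; x++)
                        inside[x, y] = true;
                }
            }
            return inside;
        }
    }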

How does a non-tile-based map work?

OK, here is the thing. Recently I decided I wanted to understand how random map generation works. I found some papers and articles. The most interesting ones were the "Diamond-Square algorithm" and "Midpoint Displacement". I still have to try applying those in software, but other than that, I ran into this site: http://www-cs-students.stanford.edu/~amitp/game-programming/polygon-map-generation/
As you can see, the idea is to use polygons. But I have no idea how to apply that to a tile-based map, or even how to create those polygons using the tools I have (C++ and SDL). I am assuming there is no way to do it (please correct me if I am wrong). But if I am not, how does a non-tile map work, and how are these polygons generated?
This answer will not give you directly the answers you're looking for, but hopefully will get you close enough!
The Problem
I think what blocks you is how to represent the data. You're probably used to a 2D grid that simply stores the type of each tile. As you know, this is fine for handling a tile-based map, but it doesn't properly allow you to model worlds where tiles have different shapes.
Graphs
What I suggest is to look at the problem a bit differently. A grid is nothing more than a graph (more info) with nodes that have 4 (or 8, if you allow diagonals) implicit neighbor nodes. So first, what I would do in your place is move from your strict standard 2D grid to a "looser" graph, where each node has a position, a list of neighbors (in most cases you'll have corners with 2 neighbors, borders with 3 and "middle" tiles with 4) and finally a rendering component which simply draws your tile on screen at the given position. Once this is done, you should be able to get the exact same results on screen that you currently have with your 2D tile-based engine, by simply calling the rendering component of each node whose bounding box (I didn't mention it in the list of things to add to your node, but I'll get back to this later) intersects the camera's frustum (in a 2D world, that most likely means checking whether the position +/- the size intersects the RECT currently being drawn). A minimal sketch of such a node is shown below.
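Here is that sketch (C# rather than the question's C++/SDL, and every name is invented for illustration):

    using System.Collections.Generic;

    // Illustrative only: one node of the "loose" graph described above.
    interface ITileRenderer { void Draw(float x, float y); }

    class MapNode
    {
        public float X, Y;                                     // world-space position
        public float Width, Height;                            // bounding box size
        public List<MapNode> Neighbors = new List<MapNode>();  // 2, 3 or 4 on a grid
        public ITileRenderer Renderer;                         // draws the tile at (X, Y)

        // Draw only when the bounding box overlaps the camera rectangle.
        public void DrawIfVisible(float camX, float camY, float camW, float camH)
        {
            bool visible = X + Width >= camX && X <= camX + camW &&
                           Y + Height >= camY && Y <= camY + camH;
            if (visible) Renderer.Draw(X, Y);
        }
    }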
Search
This more generic approach will also help you do things like pathfinding, with generic algorithms that explore nodes until they find a valid path (see A* or Dijkstra). Even if you decide to stick to a good old 2D tile map game, these techniques would still be useful!
Yeah but I want Polygons
I hear you! So, if you want polygons, basically all you need to do is add to your nodes a list of vertices and whatever data you need to render your polygons (vertex colors, textures and U/V maps, etc.), and update your rendering component to make the appropriate OpenGL calls (this for example should help) to draw your nodes. Once again, the first step in iteratively upgrading your 2D tile engine to a polygon map engine would be to give each of your nodes two triangles, a texture resource (the tile), and U/V mappings (0,0 - 0,1 - 1,0 and 1,1). Once that step is done, you should have a "generic" polygon-based tile map engine. Most of this data can be generated procedurally by calculating coordinates based on tile position, tile size, etc.
Convex Polygons
If you decide that you might ever need NPCs to navigate your map, or want to let your player navigate by clicking the map, I would suggest that you always use convex polygons (the triangle being the simplest form of convex polygon). This lets your code assume that any two positions inside the same polygon can be connected by a straight line.
Complex Maps
Based on the link you provided, you want rather complex maps. In this case, the author used Voronoi Diagrams to generate the polygons of the map. There are existing solutions for doing triangulation like that, but you might also want to use other techniques that are easier to work with if you're just switching to 3D, like this one for example. Once you have interesting results, you should consider implementing serialization to save/load your map data from the game. If you want to create an editor, be aware that it might be a lot of work, but it can be worth it if you want people to help you create maps or add elements to them (like geometry that's not part of the terrain).
I went all over the place with this answer, but hopefully it helps!
Just iterate over all the tiles and do a hit-test from the centre of each tile against the polygons; the tile then takes the type of the polygon it falls in. Did you need more than that?
EDIT: Sorry, I realize that probably isn't helpful. Playing with procedural algorithms can be fun and profitable. Start with a loop that iterates over all tiles and randomly chooses whether or not each tile is occupied. Then iterate over them again and choose whether each tile should be occupied based on whether it or its neighbours are (a rough sketch follows the link below).
Also, check out the source code for this: http://dustinfreeman.org/toys/wall7-dustin.html
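If it helps, here is one way to read that EDIT as code: a random fill followed by a single smoothing pass of a simple cellular automaton (C# for illustration; the fill chance and the neighbour thresholds are arbitrary choices, not taken from the answer):

    using System;

    static class MapGen
    {
        // Random fill, then one pass that keeps or clears tiles based on how many
        // of their 8 neighbours are occupied. Repeat the second step for smoother maps.
        public static bool[,] Generate(int width, int height, double fillChance, Random rng)
        {
            var occupied = new bool[width, height];
            for (int x = 0; x < width; x++)
                for (int y = 0; y < height; y++)
                    occupied[x, y] = rng.NextDouble() < fillChance;

            var smoothed = new bool[width, height];
            for (int x = 0; x < width; x++)
                for (int y = 0; y < height; y++)
                {
                    int count = 0;
                    for (int dx = -1; dx <= 1; dx++)
                        for (int dy = -1; dy <= 1; dy++)
                        {
                            if (dx == 0 && dy == 0) continue;
                            int nx = x + dx, ny = y + dy;
                            if (nx >= 0 && nx < width && ny >= 0 && ny < height && occupied[nx, ny])
                                count++;
                        }
                    // Occupied tiles persist with >= 3 occupied neighbours;
                    // empty tiles fill in when surrounded by >= 5.
                    smoothed[x, y] = occupied[x, y] ? count >= 3 : count >= 5;
                }
            return smoothed;
        }
    }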

What's the point of QPainter::drawConvexPolygon

From the docs:
QPainter offers two methods of painting QPolygons: drawPolygon and drawConvexPolygon.
Nowhere in the documentation is it made clear what the difference between them is. Additionally, the drawConvexPolygon docs state
If the supplied polygon is not convex, i.e. it contains at least one angle larger than 180 degrees, the results are undefined.
So... what is it for? I hoped the method would somehow find the convex hull of my polygon and paint that; however, that doesn't seem to be the case.
The QPainter::drawConvexPolygon() documentation says:
On some platforms (e.g. X11), the drawConvexPolygon() function can be faster than the drawPolygon() function.
So,
drawPolygon() is more generic, as it also allows painting non-convex polygons (but drawing might be slower)
drawConvexPolygon() can only be used to draw convex polygons, but might be faster on specific platforms
For example, when doing 3D rendering, you can use a polygon mesh that consists of convex polygons only to keep rendering simple; in that case the faster drawConvexPolygon() would be the better choice (since you need to paint a large number of convex polygons).
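If you are unsure whether a given polygon qualifies for drawConvexPolygon(), a quick convexity check is cheap: a simple polygon is convex exactly when the cross products of consecutive edges never change sign. A sketch (C# for illustration; the same check ports directly to the C++/Qt side):

    using System.Collections.Generic;

    static class PolygonUtil
    {
        // True when the cross products of consecutive edges all share one sign,
        // i.e. the boundary always turns the same way (convex simple polygon).
        public static bool IsConvex(IReadOnlyList<(double X, double Y)> pts)
        {
            int n = pts.Count;
            if (n < 4) return true;            // a triangle is always convex
            bool positive = false, negative = false;
            for (int i = 0; i < n; i++)
            {
                var a = pts[i];
                var b = pts[(i + 1) % n];
                var c = pts[(i + 2) % n];
                double cross = (b.X - a.X) * (c.Y - b.Y) - (b.Y - a.Y) * (c.X - b.X);
                if (cross > 0) positive = true;
                if (cross < 0) negative = true;
                if (positive && negative) return false;
            }
            return true;
        }
    }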
Determining which part of the polygon is inside and which is outside (for filling purposes) requires different logic depending on whether the polygon is convex. Think about how to determine the inside of a star shape vs. the inside of a rectangle.
