How to tell if a point is within a polygon for texture - math

This seems to be a rather frequently asked question (hear me out first! :)
I've created a polygon with Perlin noise, and it looks like this:
I need to generate a texture from this array of points. (I'm using MonoGame/XNA, but I assume this question is somewhat agnostic.)
Anyway, researching this problem tells me that many people use raycasting to count how many times a ray crosses the polygon's border (an odd number of crossings means inside; an even number, including zero, means outside). This makes sense, but I wonder if there is a better way, given that I have all of the points.
Doing a small raycast for every pixel I want to fill in seems excessive - is this the only/best way?
If I have a small 500px square image I need to fill in, I'll need to do a raycast for 250,000 individual pixels, which seems like an awful lot.

If you want to do this for every pixel, you can use a sweep line:
Start from the topmost coordinate and examine a horizontal line moving left to right. Calculate all intersections with the polygon and sort them by their x-coordinate. Then iterate over all pixels on the line and remember whether you are in or out: whenever you pass an intersection, switch to the other side. If a pixel is in, set the texture; if not, ignore it. Do this from top to bottom for every horizontal line.
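A minimal sketch of that sweep in Python (assuming the polygon is a list of (x, y) vertex tuples and you supply your own set_pixel callback; both names are illustrative, not MonoGame API):

    def scanline_fill(polygon, set_pixel):
        n = len(polygon)
        ys = [y for _, y in polygon]
        for y in range(int(min(ys)), int(max(ys)) + 1):
            xs = []
            # Intersect the horizontal line at height y with every edge.
            for i in range(n):
                (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
                # Half-open test so a vertex shared by two edges counts once;
                # horizontal edges (y1 == y2) are skipped automatically.
                if (y1 <= y < y2) or (y2 <= y < y1):
                    xs.append(x1 + (y - y1) * (x2 - x1) / (y2 - y1))
            xs.sort()
            # Between each odd/even pair of crossings we are inside.
            for left, right in zip(xs[0::2], xs[1::2]):
                for x in range(int(round(left)), int(round(right)) + 1):
                    set_pixel(x, y)

This visits each scanline once and fills only the inside spans, instead of doing an independent raycast per pixel.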
The intersection calculation can be sped up in several ways, e.g. by using an acceleration structure such as a grid or quadtree, or by precomputing which edges can intersect or touch each scanline. Then, as you sweep the line, you already know which edges will cause an intersection.

Related

Rendering highly granular and "zoomed out" data

There was a GIF on the internet where someone used some sort of CAD program and drew multiple vector pictures in it. On the first frame they zoom in on a tiny dot, revealing a whole different vector picture at another scale, and then they proceed to zoom in further on another tiny dot, revealing another detailed picture, repeating several times. Here is the link to the GIF.
Or another similar example: imagine you have a time series with a granularity of a millisecond per sample, and you zoom out to reveal years' worth of data.
My question is: how does such finely detailed data get rendered in the end, when a huge amount of it ends up aliased into a single pixel?
Do you have to go through the whole dataset to render that pixel (i.e. for the time series, go through a million records just to average them into one line; for the CAD drawing, render the whole vector picture and shrink it into a tiny dot), or are there level-of-detail optimizations that can be applied so that you don't have to?
If so, how do they work, and where can one learn about them?
This is a very well-known problem in game development. In the following I am assuming you are using a scene graph, a node-based tree of objects.
Typical solutions involve a mix of these techniques:
Level of detail (LOD): multiple resolutions of the same model, which are shown or hidden so that only one is "visible" at any time. When to hide and show is usually determined by the distance between camera and object, but you could also include the scale of the object as a factor. Modern 3D/CAD software will sometimes offer automatic "simplification" of models, which can be used as the low-res LOD models.
At the lowest level, you could even just use the object's bounding box. Checking whether a bounding box is in view takes only around 1-7 point checks, depending on how you check. And you can utilise object parenting for transitive bounding boxes.
Clipping: if a polygon does not appear in the viewport at all, there is no need to render it. In the GIF you posted, when the camera zooms in on a new scene, all that is left of the larger model is a single polygon in the background.
Re-scaling of world coordinates: as you zoom in, the coordinates of vertices become tiny floating-point numbers. Given that you want all coordinates as precise as possible, and that modern CPUs handle floats with at most 64 bits of precision (and often use only 32 for better performance), it's a good idea to reset the scaling of the visible objects. What I mean is that as your camera zooms in to, say, 1/1000 of the previous view, you can scale up the bigger objects by a factor of 1000 and at the same time adjust the camera position and focal length. Any newly attached small model would use its original scale, thus preserving its precision.
This transition would be invisible to the viewer, but allows you to stay within well-defined 3d coordinates while being able to zoom in infinitely.
On a higher level: as you zoom in and the camera gets closer to an object, it appears as if the world grows bigger relative to the view. While normally the camera moves and the world is multiplied by the camera's matrix, the same effect can be achieved by changing the world coordinates instead of the camera.
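As a concrete sketch of the first technique, distance-based LOD selection might look like this (the list of (max_distance, mesh) pairs and all names are illustrative assumptions, not any particular engine's API):

    def select_lod(lods, camera_pos, object_pos):
        # lods: list of (max_distance, mesh) pairs, sorted nearest-first.
        d = sum((c - o) ** 2 for c, o in zip(camera_pos, object_pos)) ** 0.5
        for max_distance, mesh in lods:
            if d <= max_distance:
                return mesh
        return None  # beyond the last threshold: cull, or draw just the bounding box

To include the object's scale as a factor, as suggested above, divide the distance by the scale before comparing.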
First, you can use caching with tiles, as is done in cartography. You'll still need to go over all the points once, but after that you'll be able to zoom in and out quite rapidly.
But if you don't have extra memory for a cache (it's not that much, actually, far less than the data itself), or don't have time to go over all the points, you can use a probabilistic approach.
It can be as simple as sampling only every other point (or every 10th point, or whatever suits you). This yields decent results for some data. Again, in cartography it works quite well for shorelines, but not so well for houses or administrative borders - anything with a lot of straight lines.
Or you can take a more hardcore probabilistic approach: randomly sample some points, and if, for example, 100 sampled points hit pixel one and only 50 hit pixel two, you can more or less safely assume that pixel one will remain twice as likely to be hit as pixel two if you keep sampling. So you can stop there and just draw pixel one with twice the weight.
Also consider how much data you can and want to put in a pixel. If you draw a pixel in grayscale, there are only 256 possible shades, so you don't need to be more precise than that. And if you're going to draw a pixel in full color, you still need to ask yourself: will anyone notice the difference between something like rgb(123,12,54) and rgb(123,11,54)?
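A sketch of both sampling strategies in Python (px_of, which maps a data point to its pixel column, is an assumed helper):

    import random

    def decimate(points, stride=10):
        # "Every 10th point": fixed-stride subsampling.
        return points[::stride]

    def sampled_column_intensity(points, n_samples, width, px_of):
        # Randomly sample points and count hits per pixel column; a column
        # hit twice as often is drawn twice as heavy.
        hits = [0] * width
        for p in random.choices(points, k=n_samples):
            hits[px_of(p)] += 1
        peak = max(hits) or 1
        return [round(255 * h / peak) for h in hits]  # one 0-255 shade per column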

How to find the sides of a rectangle if you know the sides of a quadrilateral inside the rectangle?

I'm working on an application that uses an accelerometer to measure the sides of a room. I know it will not give exact measurements, but that's fine.
In reality I would like the program to be able to calculate the sides of any room shape not only rectangles and squares (and more than 4 corners), but I'm starting with something more simple (rectangle shaped rooms).
My problem is not with the accelerometer but more with the math aspect of the code. Because I measured the room by placing the phone on a wall and then going to the connected wall, I will get the measurements of a quadrilateral inside the rectangle. From there, if it's possible, I will get the measurements of the sides of the rectangle, but I don't really know how.
What I've tried so far:
I divided the quadrilateral inside the rectangle in half to make two triangles, then calculated the diagonal using the Pythagorean theorem. Then I used the law of cosines to calculate one of the angles, and did the same again to find another. Then I found the third angle from the other two (c = 180 - (a + b)). I did this for both triangles.
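For reference, those steps in symbols (standard notation, with side lengths a, b, c opposite angles A, B, C, and p, q the two measured sides around the split; note the Pythagorean step is only valid if the split actually produces a right angle):

    d = \sqrt{p^2 + q^2}, \qquad
    \cos C = \frac{a^2 + b^2 - c^2}{2ab}, \qquad
    C = 180^\circ - (A + B)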
I don't know if this is the right approach, whether I have missed something simple, or whether I simply don't have enough information to solve for the sides of the rectangle. I have looked into some geometry and trigonometry online and haven't found anything that gives me a solution. But like I said, maybe I missed something simple.
Any push in the right direction would be helpful.
The rectangle and the quadrilateral
The problem lacks a unique solution. Imagine placing a pair of calipers around the quadrilateral. You'll be able to rotate the calipers around it, and at each angle the calipers will be able to close to a different width. Each of those widths is a different possible room dimension.
You'll also never get an accurate position measurement using the inertial sensors in a phone to begin with. The accelerometers and gyros aren't even close to accurate enough. GPS is, but only outdoors, away from structures that cause multipath artifacts. Quick and sloppy with a tape measure will win every time.

Is there a formula to find affected square by sized-brush on a grid?

I am not sure how to put this problem in a single sentence, sorry if the title is misleading.
I am currently developing a simple terrain editor with a circle-shaped brush. The image below shows a few cases that represent my problem.
Additional info: the square size is fixed and uniform. In the current version, my concern is only to find which squares are hit and which are not (the amount of area covered would be useful for weighting the hit, but probably not right now).
My current solution (which is not even correct under certain conditions) is: given a hit at position (x, y) with radius r, loop through all squares from (x - r, y - r) to (x + r, y + r) and apply 2-D box-to-circle collision detection. But I don't think this is optimal (or even correct, IMO).
Can anyone help me with this one? Thank you
Since I can't add a simple comment due to bureaucracy on this website, I have to type it out here.
Anyway, you're in luck, since I was trying to do this recently as well! The way I did it was to iterate through the vertex array and check whether the current vertex falls inside the radius of the circle. But perhaps what you want is to check against each quad's center: if that center falls inside the radius, treat the whole quad as hit.
Of course, performance will vary with the size of your grid, so it's good to iterate through as few quads as possible. How to access those quads in your array is something you'll have to figure out yourself.
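A minimal sketch of that center test, assuming uniform cells of side cell_size where cell (i, j) spans [i*cell_size, (i+1)*cell_size) on each axis (all names are illustrative):

    import math

    def cells_hit_by_brush(cx, cy, radius, cell_size):
        # Only visit cells inside the circle's bounding square, then keep
        # those whose center lies within the radius.
        hit = []
        i0 = math.floor((cx - radius) / cell_size)
        i1 = math.floor((cx + radius) / cell_size)
        j0 = math.floor((cy - radius) / cell_size)
        j1 = math.floor((cy + radius) / cell_size)
        for i in range(i0, i1 + 1):
            for j in range(j0, j1 + 1):
                mx = (i + 0.5) * cell_size  # cell center
                my = (j + 0.5) * cell_size
                if (mx - cx) ** 2 + (my - cy) ** 2 <= radius ** 2:
                    hit.append((i, j))
        return hit

This keeps the loop bounded by the brush's bounding square rather than the whole grid.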

How do I determine if a series of points (or polygon) is within a rectangular region?

I have been looking at posts about determining whether a point lies within a polygon, and the answers are either too vague, too abstract, or too complex for me, so I am going to ask my question specifically about what I need to do.
I have a set of points that describe a non-straight line (sometimes a closed polygon). I have a rectangular "view" region. I need to determine as efficiently as possible whether any of the line segments (or polygon borders) pass through the view region.
I can't simply test whether each point lies within the view region: it is possible for a segment to pass through the region without any of its points actually being inside it (i.e. the line is drawn across the region).
Here is an example of what I want to determine (red means the function should return true for the set of points, blue means it should return false, example uses straight lines and rectangles because I am not an artist).
Another condition I want to be able to account for (though the method/function may be a separate one), is to determine not just whether a polygon's border passes through the rectangular region, but whether the region is entirely encompassed by the polygon. The nuance here is that in the situation first described above, if I am only concerned with drawing borders, the method should return false. But in the situation described here, if I need to fill the polygon region then I need the function to return true. I currently do not need to worry about testing "donut" shaped polygons (thank God!).
Here is an example illustrating the nuance (the red rectangle does not have a single vertex or border segment passing through the on-screen region, but it should still be considered on-screen):
For the "does any line segment or polygon border pass through or lie on screen?" problem I know I can come up with a solution (albeit perhaps not an efficient one). Even though it is more verbose, the conditions are clear to me. But the second "is polygon region on screen?" problem is a little harder. I'm hoping someone might have a good suggestion for doing this. And if one solution is easily implemented on top of the other, well, booya.
As always, thank you in advance for any help or suggestions.
PS I have a function for determining line intersection, but it seems like overkill to use it to compare each segment to each side of the on-screen region because the on-screen region is ALWAYS a plain [0, 0, width, height] rectangle. Isn't there some kind of short-cut?
What you are searching for is called a collision detection algorithm. A Google search will lead you to plenty of implementations in various languages, as well as a lot of theory.
There is plenty of geometric theory behind it, from the simplest bisector calculations to constrained Delaunay triangulations and Voronoi diagrams (those are just examples). It depends on the shape of the objects, the number of dimensions, and of course the trade-off between the exactness needed and the computing time afforded ;-)
Good read
PS I have a function for determining line intersection, but it seems like overkill to use it to compare each segment to each side of the on-screen region because the on-screen region is ALWAYS a plain [0, 0, width, height] rectangle. Isn't there some kind of short-cut?
It's not overkill; it's necessary here. The only kind of shortcut I can think of is to hardcode the values [0, 0, width, height] into that function and simplify it a bit.
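One shortcut of that sort is a Cohen-Sutherland-style outcode test specialized to the [0, 0, width, height] rectangle: a few comparisons trivially accept or reject most segments, and only the ambiguous ones need the full intersection routine. A sketch (names are illustrative):

    LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

    def outcode(x, y, width, height):
        # Which side(s) of the screen rectangle the point lies beyond, if any.
        code = 0
        if x < 0:
            code |= LEFT
        elif x > width:
            code |= RIGHT
        if y < 0:
            code |= BOTTOM
        elif y > height:
            code |= TOP
        return code

    def segment_on_screen(p1, p2, width, height):
        c1 = outcode(p1[0], p1[1], width, height)
        c2 = outcode(p2[0], p2[1], width, height)
        if c1 == 0 or c2 == 0:
            return True   # an endpoint is on screen: trivially visible
        if c1 & c2:
            return False  # both endpoints beyond the same edge: trivially hidden
        return None       # ambiguous: fall back to the exact intersection function

And once no border segment crosses the screen, the "is the screen region entirely inside the polygon?" question reduces to a single point-in-polygon test on any one screen corner, since the whole rectangle must then lie on one side of the border or the other.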

Where can I find information on line growing algorithms?

I'm doing some image processing, and I need to find some information on line growing algorithms. I'm not sure if I'm using the right terminology here, so please call me out on this if needs be.
Imagine my input image is simply a circle on a black background. I'd basically like to extract the coordinates, so that I can draw this circle elsewhere based on them.
Note: I am already using edge detection image filters, but I thought it best to explain with a simple example.
Basically, what I'm looking to do is detect lines in an image and store the result in a data type whereby I have, say, a class called Line holding various Point objects (each containing X/Y coordinates).
    class Line
    {
        Point[] points;
    }

    class Point
    {
        int X, Y;
    }
And this is how I'd like to use it...
    Line line;
    for each pixel in image
    {
        if pixel should be added to line
        {
            add pixel coordinates to line;
        }
    }
I have no idea how to approach this, as you can probably tell, so pointers to any relevant subject matter would be greatly appreciated.
I'm not sure if I'm interpreting you right, but the standard way is to use a Hough transform. It's a two-step process:
From the given image, determine whether each pixel is an edge pixel (this process creates a new "binary" image). A standard way to do this is Canny edge-detection.
Using the binary image of edge pixels, apply the Hough transform. The basic idea is: for each edge pixel, compute all lines through it, and then take the lines that went through the most edge pixels.
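A toy version of step 2 over a binary edge image (a sketch with NumPy assumed, not a production implementation):

    import numpy as np

    def hough_lines(edges, n_theta=180):
        # edges: 2-D boolean/0-1 array from an edge detector.
        h, w = edges.shape
        diag = int(np.ceil(np.hypot(h, w)))
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
        ys, xs = np.nonzero(edges)
        for x, y in zip(xs, ys):
            # Each edge pixel votes for every line x*cos(t) + y*sin(t) = rho.
            rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
            acc[rhos + diag, np.arange(n_theta)] += 1
        return acc, thetas  # peaks in acc are the strongest lines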
Edit: apparently you're looking for the boundary. Here's how you do that.
Recall that the Canny edge detector actually gives you a gradient direction as well (not just the magnitude). The edge itself runs perpendicular to that gradient, so if you pick an edge pixel and step along (or against) the direction perpendicular to its gradient vector, you'll find the next edge pixel. Keep going until you don't hit an edge pixel anymore, and there's your boundary.
What you are talking about is not an easy problem! I have found that this website is very helpful in image processing: http://homepages.inf.ed.ac.uk/rbf/HIPR2/wksheets.htm
One thing to try is the Hough Transform, which detects shapes in an image. Mind you, it's not easy to figure out.
For edge detection, the best is Canny edge detection, also a non-trivial task to implement.
Assuming the following is true:
Your image contains a single shape on a background
You can determine which pixels are background and which pixels are the shape
You only want to grab the boundary of the outside of the shape (this excludes donut-like shapes where you want to trace the inside circle)
You can use a contour tracing algorithm such as the Moore-neighbour algorithm.
Steps:
Find an initial boundary pixel. To do this, start from the bottom-left corner of the image and travel all the way up; if you reach the top, start over at the bottom one pixel to the right, and repeat until you find a shape pixel. Make sure you keep track of the pixel you were at just before you found the shape pixel.
Find the next boundary pixel. Travel clockwise around the last visited boundary pixel, starting from the background pixel you last visited before finding the current boundary pixel.
Repeat step 2 until you revisit the first boundary pixel. Once you visit the first boundary pixel a second time, you've traced the entire boundary of the shape and can stop.
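A compact sketch of those three steps, assuming a shape(x, y) predicate that returns True for shape pixels, the start boundary pixel and backtrack background pixel from step 1, and a shape larger than a single pixel (all names are illustrative):

    # The 8 Moore neighbours, in clockwise order (y grows downward).
    NEIGHBOURS = [(-1, -1), (0, -1), (1, -1), (1, 0),
                  (1, 1), (0, 1), (-1, 1), (-1, 0)]

    def trace_boundary(shape, start, backtrack):
        boundary = [start]
        current, prev = start, backtrack
        while True:
            # Step 2: scan clockwise around `current`, starting from the
            # pixel we backtracked from, until a shape pixel is found.
            k = NEIGHBOURS.index((prev[0] - current[0], prev[1] - current[1]))
            for step in range(1, 9):
                dx, dy = NEIGHBOURS[(k + step) % 8]
                candidate = (current[0] + dx, current[1] + dy)
                if shape(*candidate):
                    # The neighbour checked just before is the new backtrack.
                    pdx, pdy = NEIGHBOURS[(k + step - 1) % 8]
                    prev = (current[0] + pdx, current[1] + pdy)
                    current = candidate
                    break
            # Step 3 (simplified stopping rule): back at the first pixel.
            if current == start:
                return boundary
            boundary.append(current)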
You could take a look at http://processing.org/. The project was created to teach the fundamentals of computer programming within a visual context. There is the language, based on Java, and an IDE to make 'sketches' in. It is a very good package for quickly working with visual objects, and it has good examples of things like edge detection that would be useful to you.
Just to echo the answers above: you want to do edge detection and a Hough transform.
Note that a Hough transform for a circle is slightly tricky (you are solving for 3 parameters: x, y, radius), so you might want to just use a library like OpenCV.
