Clipping a polygon against a rectangle - math

Today I have a (simple) rendering problem for you. My current project reads data from a file to generate an SVG file. Drawing things as polygons is pretty easy thanks to the SVG format, but I have a single problem: some of my polygons are partly in and partly out of the page (meaning that some parts of them are displayed while the rest is not shown, because they fall outside the display limits). In order to optimize the final SVG file I need to reduce my polygons to a simpler form.
Consider the grey rectangle as my page.
Consider the green polygon as the thing I actually draw.
The first picture shows what I actually have, while the second picture shows the final result I want.
At first I thought of splitting my polygon into simple triangles in order to only draw the points within the display limits. But I think a simpler solution exists... if you have it, do not hesitate to share it with me :)
EDIT:
I have this tricky case to handle as well:
Thank you.

Clipping a polygon with a rectangle. We reduce this problem to clipping a polygon with a line. We reduce this to an even simpler problem: clipping one edge of a polygon with a line, which is really just finding the intersection of a line segment with a line (if it exists).
The last problem is pretty easy, considering that your lines are vertical or horizontal. Is that enough?
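For what it's worth, this chain of reductions is essentially the Sutherland-Hodgman algorithm: clip the polygon against each of the rectangle's four boundary lines in turn. Here is a minimal C# sketch under the assumption of an axis-aligned page; the names (Pt, ClipToRect) are mine, not from the question.
using System;
using System.Collections.Generic;

struct Pt
{
    public double X, Y;
    public Pt(double x, double y) { X = x; Y = y; }
}

static class PolyClip
{
    // Clip the polygon against each of the rectangle's four half-planes in turn.
    public static List<Pt> ClipToRect(IReadOnlyList<Pt> poly,
                                      double xMin, double yMin, double xMax, double yMax)
    {
        var result = new List<Pt>(poly);
        result = ClipHalfPlane(result, p => p.X >= xMin, (a, b) => CrossVertical(a, b, xMin));
        result = ClipHalfPlane(result, p => p.X <= xMax, (a, b) => CrossVertical(a, b, xMax));
        result = ClipHalfPlane(result, p => p.Y >= yMin, (a, b) => CrossHorizontal(a, b, yMin));
        result = ClipHalfPlane(result, p => p.Y <= yMax, (a, b) => CrossHorizontal(a, b, yMax));
        return result;
    }

    // Clip every polygon edge against one boundary line: keep vertices that are
    // inside, and insert the segment/line intersection whenever an edge crosses.
    static List<Pt> ClipHalfPlane(List<Pt> poly, Func<Pt, bool> inside, Func<Pt, Pt, Pt> cross)
    {
        var output = new List<Pt>();
        for (int i = 0; i < poly.Count; i++)
        {
            Pt current = poly[i];
            Pt previous = poly[(i + poly.Count - 1) % poly.Count];
            bool curIn = inside(current), prevIn = inside(previous);
            if (curIn)
            {
                if (!prevIn) output.Add(cross(previous, current));
                output.Add(current);
            }
            else if (prevIn)
            {
                output.Add(cross(previous, current));
            }
        }
        return output;
    }

    // Intersection of segment a-b with the vertical line x = x0.
    static Pt CrossVertical(Pt a, Pt b, double x0)
    {
        double t = (x0 - a.X) / (b.X - a.X);
        return new Pt(x0, a.Y + t * (b.Y - a.Y));
    }

    // Intersection of segment a-b with the horizontal line y = y0.
    static Pt CrossHorizontal(Pt a, Pt b, double y0)
    {
        double t = (y0 - a.Y) / (b.Y - a.Y);
        return new Pt(y0, a.X + t * (b.X - a.X));
    }
}
One caveat: when the polygon leaves and re-enters the page (the tricky case from the edit), Sutherland-Hodgman keeps everything in a single output polygon whose pieces are joined by edges running along the page border. In a filled SVG that is usually invisible, but if it matters, a full polygon clipping library such as Clipper splits the result into separate pieces.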

Related

2D space organic projection

I'm currently working on a GLSL shader (EDIT: I'm starting to think that a shader isn't necessarily the best solution, and since I'm doing this in Processing, I can consider a vector-based solution too) that is supposed to render something like this, but filling the entire 2D space (or at least a larger surface):
To do so, I want to map the repeating patterns onto the general leaf shapes that you can see at the top of the sketch below.
My problem is the mapping part: is it possible to find a function that projects XY coordinates on the screen to another position in such a way that I can map my patterns the way I want? The leaves must have some kind of UV coordinates inside them (to be able to apply the repeating pattern), and the transformation must be a conformal map, because otherwise there would be distortions in the pattern.
I've tried several lines of thought but I haven't managed to get the final result:
Recursion:
The idea is to first cut the plane into stripes, then cut the stripes into leaf shapes that touch the top and the bottom of the stripe (because that's easier), and finally recursively cut the leaves in halves until the result looks more random. As long as the borders of the stripes aren't on the screen, it shouldn't be too noticeable. The biggest difficulty here is avoiding distortion.
Voronoi:
It may be possible to find a distance function guided by a vector field such that the Voronoi diagram looks more like what I'm looking for. However, I don't think it will be possible to get the UV mapping I want. If that's the case, a good approximation would do the trick; the result doesn't need to be exact as long as it isn't too noticeable.
Distortion:
It could also be possible to find a more direct way to do this projection. While desperately looking for a solution, I came across the fact that a holomorphic complex function is a conformal map, but I haven't managed to go any further.
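To illustrate what I mean by a conformal transform, here is a tiny sketch (in C# rather than shader code, purely illustrative): pushing screen coordinates through a holomorphic function such as the complex logarithm gives a UV mapping that locally rotates and scales the pattern but never shears it. The names and the choice of Log are just an example, not my actual mapping.
using System;
using System.Numerics;

static class ConformalUv
{
    // Map a screen position to pattern-space UV through w = Log(z).
    // Log is holomorphic away from the origin, so away from z = 0 the map
    // is conformal: the pattern is locally rotated and scaled, never sheared.
    public static (double u, double v) ToPatternUv(double x, double y)
    {
        var z = new Complex(x, y);
        var w = Complex.Log(z);                    // conformal except at z = 0
        double u = w.Real - Math.Floor(w.Real);    // wrap into [0, 1) to tile the pattern
        double v = (w.Imaginary + Math.PI) / (2 * Math.PI);
        return (u, v);
    }
}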
Finally, there may be another solution I haven't thought about, and I would be glad if someone gave me a complete solution or just a new idea I haven't tried yet.

NetTopologySuite squash multipolygon to polygon

I am working in ASP.NET Core using the most up-to-date GeoAPI and NetTopologySuite versions for Core. What I'm trying to do should be fairly simple, but I can't seem to find the proper way to do it, either through experimentation or googling. Or even what to call it, to be honest, which makes googling harder.
Hopefully someone can kick me in the right direction.
I have a multipolygon which may be made up of one or more polygons. I want to create a buffer around that multipolygon's points out to X distance. This is basically a map overlay with concentric areas of interest. A given point of interest may fall within the original multipolygon's shapes... or it might fall in the first or second buffer area. Kinda like an onion, if the core of an onion had random shapes in it.
That first part is simple. Just iterate the multipolygon's points and apply a buffer to each point using the buffer method:
var bufferZonePoints = new List<IGeometry>();
foreach (var coordinate in multiPolygon.Coordinates)
{
    // Coordinate has no Buffer method, so turn each vertex into a point geometry first.
    bufferZonePoints.Add(this.geometryFactory.CreatePoint(coordinate).Buffer(x));
}
// CreateMultiPolygon expects IPolygon[]; each point buffer is a polygon (needs System.Linq).
var bufferZone = this.geometryFactory.CreateMultiPolygon(bufferZonePoints.Cast<IPolygon>().ToArray());
That's fine. But it's giving me another multipolygon made up of thousands of points. When I use this as a map overlay, I get a hurricane of circles following the vague outlines of the original shape, sort of looking like a Spirograph drawing. All I want is basically the outer boundary of all the buffer circles without all the points in the center.
I tried doing a ConvexHull on the multipolygon, and it looked correct at first until I realized that it was shaving off the angles on the outside in order to get the smallest convex polygon all those points fit into (which is what convex hulls do, after all). But that causes problems in the stuff I'm overlaying. Some points of interest may be outside the actual buffer, but end up inside if the convex hull rounds off a bumpy area of the zone. (I hope that makes sense.)
Basically what I'm trying to do is take that multipolygon made up of all those buffered points and squash it down into a single polygon made up of all the outermost boundaries of the buffers, but without all the spirograph garbage in the middle. I don't really want a ConvexHull. I've also tried Union and the GeometryCombiner class, but none of these are doing what I want.
I don't know if this makes the mud any clearer, but there is a behaviour in QGIS where, if you plunk down two circles that overlap, they combine into one big blob like soap bubbles and the boundaries in between vanish. That's kinda what I'm trying to do via code.
Does that make sense? Can anyone help?
After continuing to experiment with my mapping tool, it would appear that Union DOES actually give me the result I wanted.
I started with two polygons that were far enough apart to make it obvious what was going on, did a union on them, and got back just the shell of the combination of them. As I added more of the buffered points to it, the shape became a bit more obvious.
That's pretty well what I wanted.
Thanks anyway though! Hopefully this will help someone else.
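In case it helps, here is a minimal sketch of the buffer-then-union step as I ended up doing it, assuming the GeoAPI IGeometry types used above; BuildBufferZone is just an illustrative name and this.geometryFactory is the factory from the earlier snippet. (For thousands of circles, NTS's CascadedPolygonUnion is typically much faster than unioning pairwise, but the simple version shows the idea.)
IGeometry BuildBufferZone(IGeometry multiPolygon, double x)
{
    IGeometry bufferZone = null;
    foreach (var coordinate in multiPolygon.Coordinates)
    {
        // Buffer each vertex as a point geometry...
        var circle = this.geometryFactory.CreatePoint(coordinate).Buffer(x);
        // ...and fold it into a running union, so overlapping circles merge
        // like soap bubbles and only the outer boundary survives.
        bufferZone = bufferZone == null ? circle : bufferZone.Union(circle);
    }
    return bufferZone;
}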

Elliptic-looking graph

What options should I use to make my graph look like an ellipse? I was messing with the hierarchical option under the layout module, but I haven't gotten anywhere near my desired shape.
My graph flows left to right: the left node group connects to the middle one, and the middle one connects to the right one. It can be pictured as the image below.
Can someone point me in the right direction? Thanks for your expertise.
As for the elliptic shape around the nodes and edges, you can either set a background image for your graph area or create a node of large size (though this way you may have trouble with node repulsion). Unfortunately, there's no way to make sure that all the nodes will always be inside the ellipse (unless you access vis' canvas and deal with it at a low level, or do some other hackery).
Also, AFAIK it is impossible to create those wavy edges, but for the rounded ones you may want to use repulsion physics instead of barnesHut. See also the physicsConfiguration example.

How to tell if a point is within a polygon for a texture

This seems to be a rather frequently asked question (hear me out first! :)
I've created a polygon with Perlin noise, and it looks like this:
I need to generate a texture from this array of points. (I'm using Monogame/XNA, but I assume this question is somewhat agnostic).
Anyway, researching this problem tells me that many people use raycasting to determine how many times a ray crosses the polygon's outline (an odd number of crossings means the point is inside; an even number, including zero, means it's outside). This makes sense, but I wonder if there is a better way, given that I have all of the points.
Doing a small raycast for every pixel I want to fill in seems excessive - is this the only/best way?
If I have a small 500px square image I need to fill in, I'll need to do a raycast for 250,000 individual pixels, which seems like an awful lot.
If you want to do this for every pixel, you can use a sweeping line:
Start from the topmost coordinate and examine a horizontal ray from left to right. Calculate all intersections with the polygon and sort them by their x-coordinate. Then iterate all pixels on the line and remember if you are in or out. Whenever you encounter an intersection, switch to the other side. If some pixel is in, set the texture. If not, ignore it. Do this from top to bottom for every possible horizontal line.
The intersection calculation could be enhanced in several ways, e.g. by using an acceleration data structure like a grid, quadtree, etc., or by examining the intersecting or touching edges of the polygon beforehand. Then, when you sweep the line, you will already know which edges will cause an intersection.
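A rough C# sketch of that sweep, assuming the polygon is given as a closed list of points and all we need is a boolean mask to build the texture from; the names (Pt, FillPolygon) and the pixel-centre convention are my own choices.
using System;
using System.Collections.Generic;

struct Pt
{
    public float X, Y;
    public Pt(float x, float y) { X = x; Y = y; }
}

static class ScanlineFill
{
    // Fill a width x height mask: inside[x, y] is true when pixel (x, y) lies
    // inside the polygon. The polygon is treated as closed (last point connects
    // back to the first).
    public static bool[,] FillPolygon(IReadOnlyList<Pt> poly, int width, int height)
    {
        var inside = new bool[width, height];
        for (int y = 0; y < height; y++)
        {
            float scanY = y + 0.5f;                 // sample at the pixel centre
            var xs = new List<float>();
            for (int i = 0; i < poly.Count; i++)
            {
                Pt a = poly[i], b = poly[(i + 1) % poly.Count];
                // An edge contributes a crossing when exactly one endpoint is above the scanline.
                if ((a.Y <= scanY && b.Y > scanY) || (b.Y <= scanY && a.Y > scanY))
                {
                    float t = (scanY - a.Y) / (b.Y - a.Y);
                    xs.Add(a.X + t * (b.X - a.X));
                }
            }
            xs.Sort();
            // Between each pair of sorted crossings we are inside the polygon.
            for (int k = 0; k + 1 < xs.Count; k += 2)
            {
                int x0 = Math.Max(0, (int)Math.Ceiling(xs[k] - 0.5f));
                int x1 = Math.Min(width - 1, (int)Math.Floor(xs[k + 1] - 0.5f));
                for (int x = x0; x <= x1; x++) inside[x, y] = true;
            }
        }
        return inside;
    }
}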

Where can I find information on line growing algorithms?

I'm doing some image processing, and I need to find some information on line growing algorithms - not sure if I'm using the right terminology here, so please call me out on this if need be.
Imagine my input image is simply a circle on a black background. I'd basically like to extract the coordinates, so that I may draw this circle elsewhere based on them.
Note: I am already using edge detection image filters, but I thought it best to explain with a simple example.
Basically what I'm looking to do is detect lines in an image and store the result in a data type whereby I have, say, a class called Line and various Point objects (containing X/Y coordinates).
class Line
{
    public Point[] Points;   // the ordered points that make up the line
}

class Point
{
    public int X, Y;         // pixel coordinates
}
And this is how I'd like to use it...
Line line;
for each pixel in image
{
if pixel should be added to line
{
add pixel coordinates to line;
}
}
I have no idea how to approach this, as you can probably tell, so pointers to any subject matter would be greatly appreciated.
I'm not sure if I'm interpreting you right, but the standard way is to use a Hough transform. It's a two-step process:
From the given image, determine whether each pixel is an edge pixel (this process creates a new "binary" image). A standard way to do this is Canny edge-detection.
Using the binary image of edge pixels, apply the Hough transform. The basic idea is: for each edge pixel, compute all lines through it, and then take the lines that went through the most edge pixels.
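A hedged sketch of the accumulator in step 2, assuming step 1 produced a boolean edge map; peaks in the returned (theta, rho) table correspond to lines passing through many edge pixels. The names and bin sizes are illustrative.
using System;

static class HoughLines
{
    public static int[,] Accumulate(bool[,] edges, int thetaSteps = 180)
    {
        int width = edges.GetLength(0), height = edges.GetLength(1);
        int maxRho = (int)Math.Ceiling(Math.Sqrt(width * width + height * height));
        // Accumulator indexed by [theta bin, rho + maxRho], so negative rho fits too.
        var acc = new int[thetaSteps, 2 * maxRho + 1];
        for (int x = 0; x < width; x++)
            for (int y = 0; y < height; y++)
            {
                if (!edges[x, y]) continue;
                // Every line through (x, y) satisfies rho = x*cos(theta) + y*sin(theta),
                // so vote for one rho per theta bin.
                for (int t = 0; t < thetaSteps; t++)
                {
                    double theta = Math.PI * t / thetaSteps;
                    int rho = (int)Math.Round(x * Math.Cos(theta) + y * Math.Sin(theta));
                    acc[t, rho + maxRho]++;
                }
            }
        return acc;
    }
}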
Edit: apparently you're looking for the boundary. Here's how you do that.
Recall that the Canny edge detector actually gives you the gradient direction as well (not just the magnitude). The gradient points across an edge, so if you pick an edge pixel and step perpendicular to that vector (in either direction), you'll find the next edge pixel. Keep going until you don't hit an edge pixel anymore, and there's your boundary.
What you are talking about is not an easy problem! I have found that this website is very helpful in image processing: http://homepages.inf.ed.ac.uk/rbf/HIPR2/wksheets.htm
One thing to try is the Hough Transform, which detects shapes in an image. Mind you, it's not easy to figure out.
For edge detection, the best is Canny edge detection, also a non-trivial task to implement.
Assuming the following is true:
Your image contains a single shape on a background
You can determine which pixels are background and which pixels are the shape
You only want to grab the boundary of the outside of the shape (this excludes donut-like shapes where you want to trace the inside circle)
You can use a contour tracing algorithm such as the Moore-neighbour algorithm.
Steps:
Find an initial boundary pixel. To do this, start from the bottom-left corner of the image and travel all the way up; if you reach the top, start over at the bottom, one pixel to the right, and repeat until you find a shape pixel. Make sure you keep track of the location of the pixel you were at just before you found the shape pixel.
Find the next boundary pixel. Travel clockwise around the last visited boundary pixel, starting from the background pixel you last visited before finding the current boundary pixel.
Repeat step 2 until you revisit the first boundary pixel. Once you visit the first boundary pixel a second time, you've traced the entire boundary of the shape and can stop.
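A rough C# sketch of those three steps, assuming a boolean isShape mask that separates shape pixels from background (with y growing downward, as in most image APIs). The names and the simple "stop when the start pixel is reached again" criterion are my own choices; more robust stopping criteria (e.g. Jacob's) exist.
using System.Collections.Generic;

static class MooreTrace
{
    // Clockwise Moore neighbourhood offsets, starting at "west" (y grows downward).
    static readonly (int dx, int dy)[] Clockwise =
    {
        (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1), (0, 1), (-1, 1)
    };

    public static List<(int x, int y)> TraceBoundary(bool[,] isShape)
    {
        int width = isShape.GetLength(0), height = isShape.GetLength(1);
        var boundary = new List<(int x, int y)>();

        // Step 1: scan each column bottom-to-top, moving right, for the first
        // shape pixel, remembering the background pixel visited just before it.
        (int x, int y) start = (-1, -1), backtrack = (-1, -1);
        for (int x = 0; x < width && start.x < 0; x++)
            for (int y = height - 1; y >= 0; y--)
                if (isShape[x, y]) { start = (x, y); backtrack = (x, y + 1); break; }
        if (start.x < 0) return boundary;          // no shape pixel found

        // Steps 2 and 3: walk clockwise around the current boundary pixel,
        // starting from the backtrack pixel, until the start pixel is reached
        // again (it is appended a second time, closing the loop).
        var current = start;
        boundary.Add(current);
        do
        {
            int i = IndexOf(current, backtrack);
            (int x, int y) previous = backtrack;
            bool found = false;
            for (int k = 1; k <= 8 && !found; k++)
            {
                var (dx, dy) = Clockwise[(i + k) % 8];
                var candidate = (x: current.x + dx, y: current.y + dy);
                bool inImage = candidate.x >= 0 && candidate.x < width &&
                               candidate.y >= 0 && candidate.y < height;
                if (inImage && isShape[candidate.x, candidate.y])
                {
                    backtrack = previous;          // background pixel just before the hit
                    current = candidate;
                    boundary.Add(current);
                    found = true;
                }
                else
                {
                    previous = candidate;          // remember the last background pixel
                }
            }
            if (!found) break;                     // isolated single pixel
        } while (current != start);

        return boundary;
    }

    // Index of the backtrack pixel within the clockwise neighbourhood of current.
    static int IndexOf((int x, int y) current, (int x, int y) backtrack)
    {
        for (int i = 0; i < 8; i++)
            if (current.x + Clockwise[i].dx == backtrack.x &&
                current.y + Clockwise[i].dy == backtrack.y)
                return i;
        return 0;
    }
}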
You could take a look at http://processing.org/. The project was created to teach the fundamentals of computer programming within a visual context. There is the language, based on Java, and an IDE to make 'sketches' in. It is a very good package for quickly working with visual objects, and it has good examples of things like edge detection that would be useful to you.
Just to echo the answers above: you want to do edge detection and a Hough transform.
Note that a Hough transform for a circle is slightly tricky (you are solving for 3 parameters: x, y, radius), so you might want to just use a library like OpenCV.
