Generate subdivided triangle-strip cube?

I want to generate a cube where each face is divided into bits, like the following image:
http://img59.imageshack.us/img59/2504/gridcube165c3.jpg
Now, I can do this pretty simply if I'm just rendering quads, by spacing vertices along each face plane at regular intervals, but my problem comes in when I want to turn the whole thing into a triangle strip. I've just got no idea how to unwrap it programmatically; is there some pattern to unwrapping that I'd follow?
I'm thinking of starting with the vertex at the top left corner as Row 0 Column 0 (R0C0). I'd want (first triangle) R0C0, R0C1, R1C1, (second triangle) R0C0, R1C0, R1C1, and so forth; then when I reach the end of a row I guess I'd use a degenerate triangle to move to the next row, and when I reach the end of the face I'd do the same to start a new face.
My main problem is that I can't visualize the program loop that would do this. I can reason out which vertex comes next visually, which is how I worked out the order above, but when I try to think programmatically I just stare blankly.
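For what it's worth, here is a rough sketch of that loop in Python (the function name and the row-major vertex layout are my own assumptions, not tied to any particular API): it walks the grid row by row, emitting a pair of indices per column and a pair of degenerate indices between rows.

def grid_strip_indices(rows, cols):
    # Triangle-strip indices for a grid of rows x cols quads.
    # Assumes vertices are laid out row-major: index = r * (cols + 1) + c.
    vpr = cols + 1                         # vertices per row
    idx = []
    for r in range(rows):
        if r > 0:
            # Degenerate triangles to stitch rows: repeat the last index of
            # the previous row and the first index of this row.
            idx.append(r * vpr + cols)
            idx.append(r * vpr)
        for c in range(cols + 1):
            idx.append(r * vpr + c)        # vertex on row r
            idx.append((r + 1) * vpr + c)  # vertex on row r + 1
    return idx

Each face of the cube would get its own vertex grid (with its own UVs), and the faces themselves can then be joined with the same degenerate-triangle trick you describe.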
Even worse, with the end product I want the generated cube to be UV-mapped with a simple cube-map unwrap (the kind that looks like a T or t).
I guess, really, the best solution would be to find a library that already does this for me.

You could take a look at Ignacio Castaño's 'Optimal Grid Rendering'; even though it's not about triangle strips, it may inspire you.
Otherwise, you could use NVTriStrip library and be done with it.

Related

2D space organic projection

I'm currently working on a GLSL shader (EDIT: I'm starting to think that a shader isn't necessarily the best solution, and as I'm doing this in Processing, I can consider a vector-based solution too) that is supposed to render something like this but filling the entire 2D space (or at least a larger surface):
To do so, I want to map the repeating patterns on the general leaves shapes that you can see on the top of the sketch below.
My problem is this mapping part: is it possible to find a function that projects XY coordinates on the screen to another position in such a way that I can map my patterns the way I want? The leaves must have some kind of UV coordinates inside them (to be able to apply the repeating pattern), and the transformation must be a conformal map because otherwise there would be some distortions in the pattern.
I've tried several lines of thought but I haven't managed to get the final result :
recursion :
The idea is to first cut the plane into stripes, then cut the stripes into leaf shapes that touch the top and the bottom of the stripes (because that's easier), and finally recursively cut the leaves in halves until the result looks more random. As long as the borders of the stripes aren't on the screen, it shouldn't be too noticeable. The biggest difficulty here is avoiding the distortion.
voronoi :
It may be possible to find a distance function guided by a vector field such that the Voronoi diagram looks more like what I'm looking for. However, I don't think it will be possible to have the UV mapping I want. If that's the case, a good approximation would do the trick; the result doesn't need to be exact as long as it isn't too noticeable.
distortion :
It could also be possible to find a more direct way to do this projection. While desperately looking for a solution, I came across the fact that a holomorphic complex function is a conformal map (sketched below), but I haven't managed to go any further.
Finally, there may be another solution I haven't thought about, and I would be glad if someone gave me a complete solution or just a new idea I haven't tried yet.
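Not an answer, but to illustrate the conformal-map idea from the "distortion" point above: any holomorphic function with a non-vanishing derivative preserves angles locally, so warping UV coordinates through one (exp is just an arbitrary example here) keeps a repeating pattern locally undistorted. A tiny Python/NumPy sketch:

import numpy as np

def conformal_warp(u, v):
    # Treat (u, v) as a complex number and push it through a holomorphic
    # function; angles (and hence local pattern shape) are preserved
    # wherever the derivative is non-zero.
    z = u + 1j * v
    w = np.exp(z)          # placeholder: any holomorphic f with f'(z) != 0 works
    return w.real, w.imag

# usage: sample the repeating pattern at conformal_warp(u, v) instead of (u, v)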

Is there a formula to find affected square by sized-brush on a grid?

I am not sure how to put this problem in a single sentence, sorry if the title is misleading.
I am currently developing a simple terrain editor with a circle-shaped brush size. The image below shows a few cases that represent my problem.
additional info: the square size is fixed and uniform and in the current version, my concern is only to find which one is hit and which one is not (the amount of region covered is important for weighting the hit, but probably not right now)
My current solution (which is not even correct under certain conditions) is: given a hit at a position (x, y) with radius r, loop through all squares from (x - radius, y - radius) to (x + radius, y + radius) and apply 2-D box-to-circle collision detection. But I don't think this is optimal (or even correct, IMO).
Can anyone help me with this one? Thank you
Since I can't add a simple comment due to bureaucracy on this website, I have to type it out here.
Anyway, you're in luck, since I was trying to do this recently as well! The way I did it was to iterate through the vertex array and check whether the current vertex falls inside the radius of the circle. But perhaps what you want is to check against each quad's center, and if that center falls inside the radius, add the whole quad as being hit.
Of course, depending on the size of your grid, the performance will vary, so it's good to iterate through as few quads as possible. How you access those quads from the array is something you'll have to figure out yourself.
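A rough Python sketch of that idea, assuming a uniform axis-aligned grid with square side `cell` (all names are made up): it only visits the cells inside the circle's bounding box and keeps a cell if its closest point to the brush centre lies within the radius.

def squares_hit(cx, cy, r, cell):
    # Cells are indexed so that cell (i, j) spans
    # [i*cell, (i+1)*cell) x [j*cell, (j+1)*cell).
    hit = []
    i0, i1 = int((cx - r) // cell), int((cx + r) // cell)
    j0, j1 = int((cy - r) // cell), int((cy + r) // cell)
    for i in range(i0, i1 + 1):            # clamp these to the grid bounds
        for j in range(j0, j1 + 1):
            # closest point of the square to the circle centre
            px = min(max(cx, i * cell), (i + 1) * cell)
            py = min(max(cy, j * cell), (j + 1) * cell)
            if (px - cx) ** 2 + (py - cy) ** 2 <= r * r:
                hit.append((i, j))
    return hit

Testing the square's closest point rather than just its centre also catches cells that the circle only clips at a corner.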

How to tell if a point is within a polygon for a texture

This seems to be a rather frequently asked question (hear me out first! :)
I've created a polygon with Perlin noise, and it looks like this:
I need to generate a texture from this array of points. (I'm using Monogame/XNA, but I assume this question is somewhat agnostic).
Anyway, researching this problem tells me that many people use raycasting to determine how many times a line from the point crosses the polygon's outline (an odd number of crossings means it's inside; an even number, including zero, means it's outside). This makes sense, but I wonder if there is a better way, given that I have all of the points.
Doing a small raycast for every pixel I want to fill in seems excessive - is this the only/best way?
If I have a small 500px square image I need to fill in, I'll need to do a raycast for 250,000 individual pixels, which seems like an awful lot.
If you want to do this for every pixel, you can use a sweeping line:
Start from the topmost coordinate and examine a horizontal ray from left to right. Calculate all intersections with the polygon and sort them by their x-coordinate. Then iterate over all pixels on the line and keep track of whether you are in or out. Whenever you encounter an intersection, switch to the other side. If a pixel is in, set the texture; if not, ignore it. Do this from top to bottom for every possible horizontal line.
The intersection calculation could be enhanced in several ways, e.g. by using an acceleration data structure like a grid, quadtree, etc., or by working out beforehand which edges of the polygon intersect or touch each scanline. Then, when you sweep the line, you will already know which edges will cause an intersection.
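A minimal Python sketch of that sweep, assuming the polygon is a plain list of (x, y) vertices and `set_pixel` is whatever writes into your texture; sampling each scanline at y + 0.5 sidesteps the usual trouble with rays passing exactly through vertices.

def fill_polygon(poly, width, height, set_pixel):
    for y in range(height):
        xs = []
        for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1]):
            # does this edge cross the scanline at y + 0.5?
            if (y0 <= y + 0.5) != (y1 <= y + 0.5):
                t = (y + 0.5 - y0) / (y1 - y0)
                xs.append(x0 + t * (x1 - x0))
        xs.sort()
        # fill between alternating pairs of intersections (in, out, in, out, ...)
        for xa, xb in zip(xs[0::2], xs[1::2]):
            for x in range(int(xa + 0.5), int(xb + 0.5)):
                set_pixel(x, y)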

How to determine all line segments from a list of points generated from a mouse gesture?

Currently I am interning at a software company and one of my tasks has been to implement the recognition of mouse gestures. One of the senior developers helped me get started and provided code/projects that use the $1 Unistroke Recognizer http://depts.washington.edu/aimgroup/proj/dollar/. I get, in a broad way, what the $1 Unistroke Recognizer is doing and how it works, but I am a bit overwhelmed trying to understand all of its internals/finer details.
My problem is that I am trying to recognize the gesture of moving the mouse downwards, then upwards. The $1 Unistroke Recognizer determines that the gesture I created was a downwards gesture, which is in fact what it ought to do. What I really would like it to do is say "I recognize a downwards gesture AND THEN an upwards gesture."
I do not know if my incomplete understanding of the $1 Unistroke Recognizer is what's causing me to scratch my head, but does anyone have any ideas on how to recognize two different gestures from moving the mouse downwards then upwards?
Here is my idea that I thought might help me but would love for someone who is an expert or even knows just a bit more than me to let me know what you think. Any help or resources that you know of would be greatly appreciated.
How My Application Currently Works:
The way that my current application works is that I capture points from where the mouse cursor is while the user holds down the left mouse button. A list of points then gets fed to the gesture recognizer, and it then spits out what it thinks is the best shape/gesture that corresponds to the captured points.
My Idea:
What I wanted to do, before I feed the points to the gesture recognizer, is to somehow go through all the points and break them down into separate lines or curves. This way I could feed each line/curve in one at a time, and from the basic movements of down, up, left, right, diagonals, and curves I could determine the final shape/gesture.
One way I thought would be good for determining whether there are separate lines in my list of points is sampling groups of points and looking at their slope. If the slope of one sampled group of points differed by X% from some other sampled group, then it would be safe to assume that there is indeed a separate line present.
What I Think Are Possible Problems In My Thinking:
Where do I determine the end of a line and the start of a separate line? If I were to use the idea of checking the slope of a group of points and then determined that there was a separate line present, that doesn't mean I necessarily found the slope of a separate line. For example, if you were to draw a straight-edged "L" with a right angle and sample the slope of the points around the corner of the "L", you would see that the slope gives a reasonable indication that there is a separate line present, but those points don't correspond to the start of a separate line.
How do I deal with the ever-changing slope of a curved line? The gesture recognizer that I use already handles curves in the way I want it to. But I don't want the method I use to determine separate lines to keep looking for these so-called separate lines in a curve, because its slope is changing all the time when I sample groups of points. Would I just stop sampling points once the slope changed by more than X% so many times in a row?
I'm not using the correct "type" of math for determining separate lines. Math isn't my strongest subject, but I did do some research. I tried looking into dot products to see if they would point me in some direction, but I don't know if they will. Has anyone used dot products for doing something like this, or some other method?
Final Thoughts, Remarks, And Thanks:
Part of my problem, I feel, is that I don't know how to completely ask my question. I wouldn't be surprised if this problem has already been asked (in one way or another) and a solution exists that can be Googled. But my search results on Google didn't provide any solutions, as I just don't know exactly how to ask my question yet. If you feel it is confusing, please let me know where and why and I will help clarify it. In doing so, maybe my searches on Google will become more precise and I will be able to find a solution.
I just want to say thanks again for reading my post. I know it's long, but I didn't really know where else to ask it. I'm going to talk with some other people around the office, but all of the best solutions I have used throughout school have come from the Stack Overflow community, so I owe much thanks to you.
Edits To This Post:
(7/6 4:00 PM) Another idea I thought about was comparing all the points before a min/max point. For example, if I moved the mouse downwards then upwards, my starting point would be the current max point, while the point where I start moving the mouse back upwards would be my min point. I could then go ahead and look to see whether there are any points after the min point and, if so, say that there could be a new potential line. I don't know how well this will work on other shapes like stars, but that's another thing I'm going to look into. Has anyone done something similar to this before?
If your problem can be narrowed down to breaking apart a general curve into straight or smoothly curved partial lines then you could try this.
Comparing the slope of the segments and identifying breaking points where it is greater than some threshold would work in a very simplified case. Imagine a perfectly formed L-shape where you have a right angle between two straight lines. Obviously the corner point would be the only one where the slope difference is above the threshold, as long as the threshold is between 0 and 90 degrees, and thus an identifiable breaking point.
However, the vertical and horizontal lines may be slightly curved, so the threshold would need to be large enough for these small differences in slope to be ignored as breaking points. You'd also have to decide how sharp a corner the algorithm should pick up as a break. Is 90 deg or higher required, or is even 30 deg enough? This is an important question.
Finally, to make this robust I would not be satisfied comparing the slopes of two adjacent segments. Hands may shake, corners may be smoothed out and the ideal conditions to find straight lines and sharp corners will probably never occur. For each point investigated for a break I would take the average slope of the N previous segments and compare it to the average slope of the N following segments. This can be efficiently implemented using a running mean. By choosing a good sample number N (depending on the accuracy of the input, the total number of points, etc) the algorithm can avoid the noise and make better detections.
Basically the algorithm would be:
For each investigated point (beginning N points into the sequence and ending N points before the end.)
Compute average slope of the N previous segments.
Compute average slope of the N next segments.
If the difference of the averages is greater than the Threshold, mark current point as a breaking point.
This is quite off the top of my head. You'd have to try it in your application.
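A rough Python sketch of that running-average comparison (the point format, the window size N and the threshold are all up to you): it averages segment directions rather than raw slopes, so near-vertical strokes don't blow up.

import math

def breaking_points(points, n, threshold_deg):
    # points: list of (x, y) tuples captured from the mouse.
    def seg_angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    def mean_angle(angles):
        # average directions on the circle to avoid wrap-around problems
        return math.atan2(sum(math.sin(a) for a in angles),
                          sum(math.cos(a) for a in angles))

    angles = [seg_angle(p, q) for p, q in zip(points, points[1:])]
    breaks = []
    for i in range(n, len(angles) - n):
        before = mean_angle(angles[i - n:i])
        after = mean_angle(angles[i:i + n])
        diff = math.atan2(math.sin(after - before), math.cos(after - before))
        if abs(math.degrees(diff)) > threshold_deg:
            breaks.append(i)   # candidate start of a new line at points[i]
    return breaks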
If you work with absolute angles, like upwards and downwards, you can simply take the absolute slope between two points (not necessarily adjacent) to determine whether it's RIGHT, LEFT, UP or DOWN (if that is enough of a distinction).
The art is to find a distance between points such that the angle is not random (with points only 1 px apart, the angle will be a multiple of 45°).
There is a Firefox plugin for navigation using mouse gestures that works very well. I think it's FireGestures, but I'm not sure. I guess you can get some inspiration from that one.
Additional thought: If you draw a shape by connecting successive points, then connecting back to the first point, the ratio between the area and the final line segment's length is also an indicator of the gesture's "edginess".
If you are just interested in up/down/left/right, a first approximation is to check 45-degree segments of a circle. This is easily done by checking the horizontal difference between (successive) points against the vertical difference between points.
Say you have a greater positive horizontal difference than vertical difference; then that would be 'RIGHT'.
The only difficulty then comes, for example, in distinguishing UP/DOWN from UP/RIGHT/DOWN. But this could be done by looking at the distances between points. If you determine that the mouse has moved RIGHT for less than, say, 20 pixels, then you can ignore that movement.
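A tiny sketch of that classification (assuming screen coordinates where y grows downwards): compare the magnitudes of the horizontal and vertical deltas between two sufficiently distant points.

def classify(dx, dy):
    # dx, dy: difference between two points far enough apart to be meaningful
    if abs(dx) >= abs(dy):
        return "RIGHT" if dx > 0 else "LEFT"
    return "DOWN" if dy > 0 else "UP"   # y grows downwards on screen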

Where can I find information on line growing algorithms?

I'm doing some image processing, and I need to find some information on line growing algorithms - not sure if I'm using the right terminology here, so please call me out on this if need be.
Imagine my input image is simply a circle on a black background. I'd basically like to extract the coordinates, so that I may draw this circle elsewhere based on the coordinates.
Note: I am already using edge detection image filters, but I thought it best to explain with a simple example.
Basically what I'm looking to do is detect lines in an image and store the result in a data type whereby I have, say, a class called Line and various Point objects (containing X/Y coordinates).
class Line
{
    Point points[];
}

class Point
{
    int X, Y;
}
And this is how I'd like to use it...
Line line;
for each pixel in image
{
    if pixel should be added to line
    {
        add pixel coordinates to line;
    }
}
I have no idea how to approach this, as you can probably tell, so pointers to any relevant subject matter would be greatly appreciated.
I'm not sure if I'm interpreting you right, but the standard way is to use a Hough transform. It's a two-step process:
From the given image, determine whether each pixel is an edge pixel (this process creates a new "binary" image). A standard way to do this is Canny edge-detection.
Using the binary image of edge pixels, apply the Hough transform. The basic idea is: for each edge pixel, compute all lines through it, and then take the lines that went through the most edge pixels.
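For reference, a bare-bones Python/NumPy sketch of that second step (just the accumulator, no peak finding), assuming `edges` is the binary image from step 1; lines are parameterised as rho = x*cos(theta) + y*sin(theta).

import numpy as np

def hough_lines(edges, n_theta=180):
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    # accumulator indexed by (rho + diag, theta index)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        # every line through this edge pixel gets one vote
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, thetas   # peaks in acc correspond to detected lines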
Edit: apparently you're looking for the boundary. Here's how you do that.
Recall that the Canny edge detector actually gives you a gradient direction as well (not just the magnitude). So if you pick an edge pixel and step along (or against) the direction perpendicular to that gradient, you'll find the next edge pixel. Keep going until you don't hit an edge pixel anymore, and there's your boundary.
What you are talking about is not an easy problem! I have found that this website is very helpful in image processing: http://homepages.inf.ed.ac.uk/rbf/HIPR2/wksheets.htm
One thing to try is the Hough Transform, which detects shapes in an image. Mind you, it's not easy to figure out.
For edge detection, the best is Canny edge detection, also a non-trivial task to implement.
Assuming the following is true:
Your image contains a single shape on a background
You can determine which pixels are background and which pixels are the shape
You only want to grab the boundary of the outside of the shape (this excludes donut-like shapes where you want to trace the inside circle)
You can use a contour tracing algorithm such as the Moore-neighbour algorithm.
Steps:
Find an initial boundary pixel. To do this, start from the bottom-left corner of the image, travel all the way up and if you reach the top, start over at the bottom moving right one pixel and repeat, until you find a shape pixel. Make sure you keep track of the location of the pixel that you were at before you found the shape pixel.
Find the next boundary pixel. Travel clockwise around the last visited boundary pixel, starting from the background pixel you last visited before finding the current boundary pixel.
Repeat step 2 until you revisit the first boundary pixel. Once you visit the first boundary pixel a second time, you've traced the entire boundary of the shape and can stop.
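A rough Python sketch of those three steps, assuming `is_shape(x, y)` distinguishes shape pixels from background and that y grows downwards; it uses the simple "stop when the start pixel is revisited" criterion, which is fine for plain blobs but can end early on some pathological shapes.

def trace_boundary(is_shape, width, height):
    # 8-neighbourhood in clockwise (screen) order, starting from "west"
    nbrs = [(-1, 0), (-1, -1), (0, -1), (1, -1),
            (1, 0), (1, 1), (0, 1), (-1, 1)]

    # Step 1: scan columns bottom-up, left to right, for the first shape pixel.
    start = prev = None
    for x in range(width):
        for y in range(height - 1, -1, -1):
            if is_shape(x, y):
                start = (x, y)
                # the background pixel visited just before (fallback: west)
                prev = (x, y + 1) if y + 1 < height else (x - 1, y)
                break
        if start:
            break
    if start is None:
        return []

    boundary = [start]
    cur = start
    while True:
        # Step 2: walk clockwise around cur, starting from the backtrack pixel.
        k = nbrs.index((prev[0] - cur[0], prev[1] - cur[1]))
        for step in range(1, 9):
            ox, oy = nbrs[(k + step) % 8]
            nx, ny = cur[0] + ox, cur[1] + oy
            if 0 <= nx < width and 0 <= ny < height and is_shape(nx, ny):
                px, py = nbrs[(k + step - 1) % 8]
                prev = (cur[0] + px, cur[1] + py)   # new backtrack pixel
                cur = (nx, ny)
                break
        else:
            return boundary            # isolated pixel, nothing to trace
        if cur == start:               # Step 3: back at the start, done
            return boundary
        boundary.append(cur)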
You could take a look at http://processing.org/. The project was created to teach the fundamentals of computer programming within a visual context. There is the language, based on Java, and an IDE to make 'sketches' in. It is a very good package for quickly working with visual objects and has good examples of things like edge detection that would be useful to you.
Just to echo the answers above, you want to do edge detection and a Hough transform.
Note that a Hough transform for a circle is slightly tricky (you are solving for three parameters: x, y, radius), so you might want to just use a library like OpenCV.
