Generating triangulated road geometry from a graph

What I'm trying to achieve:
Have a look at the following image from this paper
It's taking a road graph that is likely represented as segments/junctions, giving the lines width (call it what you like, sweeping, thickening) and then generating triangulated geometry for the roads.
Why I am asking this question:
This operation seems to be a fairly standard thing to do, but I can't find any papers that directly deal with how to do it. Most GIS / procedural city generation papers focus on the generation of the road graph itself (e.g. creating interesting topologies), but the step of taking the graph data and generating triangle meshes / UVs is always glossed over.
Here's a really nice video of complex road intersections with nice texturing and good-looking junctions. This is the level of quality I'd eventually like to achieve, but an incremental step towards this would be more than acceptable to me. Here's another video showing interactive road graph creation with a 3d visualisation.
There is a paper to go with that video but nothing is said about the triangulation strategy :(
I have my own approach to try that's too long-winded to detail here, but I'd much rather implement an existing solution / algorithm if one exists, as it'll be better than anything I cook up in the next few weeks.
Can anyone point me in the right direction?
Thanks.

What you are seeking is the offset polygon for each of the regions bounded by roads. If all those regions are convex, this is an easy computation. If some are nonconvex, then it is more difficult, but still well-studied. You can find links at Wikipedia under straight skeleton, or here on StackOverflow under "An algorithm for inflating/deflating (offsetting, buffering) polygons."
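For the convex case the computation really is just "shift each edge inward along its normal and intersect adjacent shifted edges". Here is a minimal sketch of that in C++; it's my own illustration rather than anything from the papers above, and it assumes a counter-clockwise polygon with no degenerate edges:

    #include <cmath>
    #include <vector>

    struct Vec2 { double x, y; };

    static double cross(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }

    // Inset a convex, counter-clockwise polygon by distance d (e.g. half the
    // road width). Each edge line is shifted inward along its normal, and
    // adjacent shifted lines are intersected to recover the new corners.
    std::vector<Vec2> insetConvexPolygon(const std::vector<Vec2>& poly, double d)
    {
        const std::size_t n = poly.size();
        std::vector<Vec2> out(n);
        for (std::size_t i = 0; i < n; ++i) {
            const Vec2& a = poly[(i + n - 1) % n]; // previous vertex
            const Vec2& b = poly[i];               // current vertex
            const Vec2& c = poly[(i + 1) % n];     // next vertex

            auto unit = [](Vec2 p, Vec2 q) {
                double len = std::hypot(q.x - p.x, q.y - p.y);
                return Vec2{ (q.x - p.x) / len, (q.y - p.y) / len };
            };
            Vec2 d1 = unit(a, b), d2 = unit(b, c);

            // For a CCW polygon the interior lies to the left of each edge,
            // so the inward normal of direction (x, y) is (-y, x).
            Vec2 n1{ -d1.y, d1.x }, n2{ -d2.y, d2.x };

            // Points on the two shifted edge lines.
            Vec2 p1{ a.x + n1.x * d, a.y + n1.y * d };
            Vec2 p2{ b.x + n2.x * d, b.y + n2.y * d };

            double denom = cross(d1, d2);
            if (std::fabs(denom) < 1e-9) {
                // Nearly collinear edges: just shift the shared vertex.
                out[i] = Vec2{ b.x + n1.x * d, b.y + n1.y * d };
            } else {
                // Intersect p1 + t*d1 with p2 + s*d2.
                double t = cross(Vec2{ p2.x - p1.x, p2.y - p1.y }, d2) / denom;
                out[i] = Vec2{ p1.x + t * d1.x, p1.y + t * d1.y };
            }
        }
        return out;
    }

The strip between the original boundary and the inset boundary is the road surface itself: since both rings have the same vertex count here, you can zip them into quads (two triangles each) and assign UVs from the accumulated distance along the boundary. For nonconvex regions, edges can vanish or split during offsetting, which is exactly what the straight skeleton handles; alternatively, delegate to a polygon-clipping/offsetting library.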

Related

Aligning two clouds using two manually selected points

I'm maintaining software which uses PCL. I'm not very experienced in PCL myself; I've only tried some examples and tried to understand the official PCL documentation (which is unfortunately mostly sparse, doxygen-generated text). My impression is that only PCL contributors have a real chance of using the library efficiently.
One feature I have to fix in the software is aligning two clouds. The clouds are two objects which should be stacked together with a layer in-between (the actual task is to calculate the volume of the layer).
I hope the picture explains the task well. The objects are both scanned from the sides to be stacked (one from above and the other from below). On both clouds the user manually selects two points. Then, I hope, there is a means in PCL to align the two clouds given the clouds and the coordinates of the points. The alignment is required only in the X-Y plane.
Unfortunately I can't figure out which function I should use for this, partly because the PCL documentation is IMHO really sparse, and partly because of my lack of experience.
My desperate idea was to stack the clouds using P1 as the origin of both and then rotate the second cloud manually by the angle calculated between P11,P21 and P12,P22. This works, but since the task appears to me to be very common, I'd expect PCL to provide a dedicated function for it.
Could you point me to a proper API-function, code-snippet, example, similar project or a good book helping to understand PCL API and usage?
Many thanks!
I think this problem does not need PCL. It is simple enough to form the correct linear equation and solve it.
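To spell that out: two point correspondences fully determine a 2D rigid transform. The rotation angle is the signed angle between the two picked-point vectors, and the translation follows from mapping the first picked point onto its counterpart. A minimal sketch, with all names my own:

    #include <cmath>

    struct Pt { double x, y; };

    // Rigid 2D transform (rotation + translation) that maps the second
    // cloud's picked points (b1, b2) onto the first cloud's (a1, a2).
    // Returns the rotation angle; tx/ty receive the translation.
    double alignFromTwoPoints(Pt a1, Pt a2, Pt b1, Pt b2,
                              double& tx, double& ty)
    {
        // Rotation: signed angle from vector (b2 - b1) to vector (a2 - a1).
        Pt va{ a2.x - a1.x, a2.y - a1.y };
        Pt vb{ b2.x - b1.x, b2.y - b1.y };
        double theta = std::atan2(vb.x * va.y - vb.y * va.x,  // cross(vb, va)
                                  vb.x * va.x + vb.y * va.y); // dot(vb, va)

        // Translation: after rotating b1, it must land exactly on a1.
        double c = std::cos(theta), s = std::sin(theta);
        tx = a1.x - (c * b1.x - s * b1.y);
        ty = a1.y - (s * b1.x + c * b1.y);
        return theta;
    }

    // Every point p of the second cloud is then mapped as
    //   p' = (cos(theta)*p.x - sin(theta)*p.y + tx,
    //         sin(theta)*p.x + cos(theta)*p.y + ty).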
If you want to use PCL without worrying about the maths too much (though, if the above is a mystery to you, then studying some computational geometry would be very useful), here is my suggestion.
Most PCL operations work on 3D point clouds. I understand from your question that you only have 2D point clouds, or you don't care about the 3rd dimension. In that case, if I were you, I would represent the points as a 3D point cloud and set the z dimension to zero.
You will only need two point clouds with 3 points each, as that is how many points you are feeding to the transformation estimation algorithm. The first 2 points in each cloud will be the points chosen by the user. The third will be any point that you have chosen and that you know is the same in both clouds. You need this third one because otherwise the transform is still ambiguous if a general transform is being computed. You can calculate such a point, however, since you know 2 points already and you know that all the points lie on a common plane (as you have projected them by dropping the z values). Just don't choose it collinear with the other two points: for example, halfway between the two points and 2cm away in the perpendicular direction (making sure to go in the correct direction in both clouds).
Then you can use the estimateRigidTransformation functions to find the transform.
http://docs.pointclouds.org/1.7.0/classpcl_1_1registration_1_1_transformation_estimation_s_v_d.html
This function is also good for over-determined problems (it is the workhorse of the ICP algorithm in PCL) but as long as you have enough points to determine the transform it should work.
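Roughly, the usage could look like the following. I haven't compiled this against a specific PCL version, so treat the exact signatures as something to verify against the linked docs; the function and variable names are mine:

    #include <pcl/point_types.h>
    #include <pcl/point_cloud.h>
    #include <pcl/registration/transformation_estimation_svd.h>
    #include <pcl/common/transforms.h>

    // Align 'toMove' onto the first cloud's frame, given three corresponding
    // points per cloud (the two user picks plus the constructed third point,
    // all with z = 0 as described above).
    pcl::PointCloud<pcl::PointXYZ>
    alignByThreePoints(const pcl::PointCloud<pcl::PointXYZ>& toMove,
                       const pcl::PointXYZ src[3], const pcl::PointXYZ tgt[3])
    {
        pcl::PointCloud<pcl::PointXYZ> srcCloud, tgtCloud;
        for (int i = 0; i < 3; ++i) {
            srcCloud.push_back(src[i]);
            tgtCloud.push_back(tgt[i]);
        }

        // SVD-based least-squares rigid transform; exact for three clean points.
        pcl::registration::TransformationEstimationSVD<pcl::PointXYZ,
                                                       pcl::PointXYZ> est;
        Eigen::Matrix4f transform;
        est.estimateRigidTransformation(srcCloud, tgtCloud, transform);

        pcl::PointCloud<pcl::PointXYZ> aligned;
        pcl::transformPointCloud(toMove, aligned, transform);
        return aligned;
    }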

Ogre3D - render water with waves by vertex? NURBS?

I have an Ogre3D application and I would like to render a surface that represents the water with waves.
I think I am not the only one that has this purpose, so I was looking for an example to follow.
I imagine that if I want to create a water surface and move it like a wave, I have to create a surface with many vertices (according to the precision I want) and then control the height of each vertex.
As the water surface will be quite big, I think it will take a long time to render, so I was wondering whether it would be better to render it by vertices or NURBS? Or is there a better way?
There's an Ocean example included in the Ogre distribution that you can use as a starting point. I don't remember if it uses any LOD system, but it has quite nice random waves and a Fresnel shader.
NURBS won't help you much, as there's no easy way to push them onto the GPU. They're good for some modelling tasks, but in the end you need to convert them to 'real' geometry.
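For reference, the per-vertex approach from the question boils down to something like the sketch below. It isn't Ogre-specific (in Ogre you would write the resulting heights into a ManualObject or directly into the vertex buffer each frame), and all the wave constants are arbitrary placeholders:

    #include <cmath>
    #include <vector>

    // Heights for an n x n grid of water vertices, recomputed per frame as a
    // sum of a few directional sine waves.
    void updateWaterHeights(std::vector<float>& heights, int n,
                            float spacing, float time)
    {
        struct Wave { float dirX, dirZ, amplitude, wavelength, speed; };
        const Wave waves[] = {
            { 1.0f, 0.0f, 0.30f, 8.0f, 1.0f },
            { 0.7f, 0.7f, 0.15f, 3.5f, 1.7f },
            { 0.0f, 1.0f, 0.05f, 1.2f, 2.3f },
        };

        heights.resize(std::size_t(n) * n);
        for (int z = 0; z < n; ++z) {
            for (int x = 0; x < n; ++x) {
                float wx = x * spacing, wz = z * spacing;
                float h = 0.0f;
                for (const Wave& w : waves) {
                    float k = 2.0f * 3.14159265f / w.wavelength; // wave number
                    float phase = k * (w.dirX * wx + w.dirZ * wz)
                                  + w.speed * time;
                    h += w.amplitude * std::sin(phase);
                }
                heights[std::size_t(z) * n + x] = h;
            }
        }
    }

That said, the Ocean sample does this kind of animation in a vertex shader instead, which scales far better for a large surface because the CPU never has to touch the vertices.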

Path finding for games

What are some path finding algorithms used in games of all types? (Of all types where characters move, anyway) Is Dijkstra's ever used? I'm not really looking to code anything; just doing some research, though if you paste pseudocode or something, that would be fine (I can understand Java and C++).
I know A* is like THE algorithm to use in 2D games. That's great and all, but what about 2D games that are not grid-based? Things like Age of Empires, or Link's Awakening. There aren't distinct square spaces to navigate to, so what do they do?
What do 3D games do? I've read this thingy http://www.ai-blog.net/archives/000152.html, which I hear is a great authority on the subject, but it doesn't really explain HOW, once the meshes are set, the path finding is done. IF A* is what they use, then how is something like that done in a 3D environment? And how exactly do the splines work for rounding corners?
Dijkstra's algorithm calculates the shortest path to all nodes in a graph that are reachable from the starting position. For your average modern game, that would be both unnecessary and incredibly expensive.
You make a distinction between 2D and 3D, but it's worth noting that for any graph-based algorithm, the number of dimensions of your search space doesn't make a difference. The web page you linked to discusses waypoint graphs and navigation meshes; both are graph-based and could in principle work in any number of dimensions. Although there are no "distinct square spaces to move to", there are discrete "slots" in the space that the AI can move to and which have been carefully laid out by the game designers.
Concluding, A* is actually THE algorithm to use in 3D games just as much as in 2D games. Let's see how A* works:
1. At the start, you know the coordinates of your current position and your target position. You make an optimistic estimate of the distance to your destination, for example the length of the straight line between the start position and the target.
2. Consider the adjacent nodes in the graph. If one of them is your target (or contains it, in the case of a navigation mesh), you're done.
3. For each adjacent node (in the case of a navigation mesh, this could be the geometric center of the polygon or some other kind of midpoint), estimate the associated cost of traveling along there as the sum of two measures: the length of the path you'd have traveled so far, and another optimistic estimate of the distance that would still have to be covered.
4. Sort your options from the previous step by their estimated cost, together with all options that you've considered before, and pick the option with the lowest estimated cost. Repeat from step 2.
There are some details I haven't discussed here, but this should be enough to see how A* is basically independent of the number of dimensions of your space. You should also be able to see why this works for continuous spaces.
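To make those steps concrete, here is a compact A* sketch over an abstract graph. The node/edge representation and the straight-line heuristic merely stand in for whatever your waypoint graph or navmesh provides; all names are mine:

    #include <algorithm>
    #include <cmath>
    #include <limits>
    #include <queue>
    #include <utility>
    #include <vector>

    struct Node {
        float x, y;                                // position for the heuristic
        std::vector<std::pair<int, float>> edges;  // (neighbor index, edge cost)
    };

    // Returns the path from start to goal as node indices, empty if unreachable.
    std::vector<int> aStar(const std::vector<Node>& graph, int start, int goal)
    {
        auto h = [&](int i) { // optimistic straight-line estimate (step 1)
            return std::hypot(graph[i].x - graph[goal].x,
                              graph[i].y - graph[goal].y);
        };

        const float INF = std::numeric_limits<float>::infinity();
        std::vector<float> g(graph.size(), INF); // best path cost found so far
        std::vector<int> parent(graph.size(), -1);

        // Min-heap of (f = g + h, node): the "sorted options" of step 4.
        using Entry = std::pair<float, int>;
        std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;

        g[start] = 0.0f;
        open.push({ h(start), start });

        while (!open.empty()) {
            auto [f, cur] = open.top();
            open.pop();
            if (cur == goal) {                     // step 2: target reached
                std::vector<int> path;
                for (int i = goal; i != -1; i = parent[i]) path.push_back(i);
                std::reverse(path.begin(), path.end());
                return path;
            }
            if (f > g[cur] + h(cur)) continue;     // stale queue entry, skip
            for (auto [next, cost] : graph[cur].edges) {
                float cand = g[cur] + cost;        // step 3: path so far + edge
                if (cand < g[next]) {
                    g[next] = cand;
                    parent[next] = cur;
                    open.push({ cand + h(next), next }); // + optimistic remainder
                }
            }
        }
        return {}; // goal unreachable
    }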
There are some closely related algorithms that deal with certain problems in the standard A* search. For example recursive best-first search (RBFS) and simplified memory-bounded A* (SMA*) require less memory, while learning real-time A* (LRTA*) allows the agent to move before a full path has been computed. I don't know whether these algorithms are actually used in current games.
As for the rounding of corners, this can be done either with distance lines (where corners are replaced by circular arcs), or with any kind of spline function for full-path smoothing.
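A common lightweight version of the spline approach is Catmull-Rom smoothing over the waypoint list. A sketch (my own illustration; note that naive smoothing can cut corners into obstacles, so the smoothed curve usually needs re-checking against the world):

    #include <vector>

    struct P { float x, y; };

    // Catmull-Rom interpolation between p1 and p2; p0 and p3 are the
    // neighboring waypoints that shape the curve, t is in [0, 1].
    P catmullRom(P p0, P p1, P p2, P p3, float t)
    {
        float t2 = t * t, t3 = t2 * t;
        auto blend = [&](float a, float b, float c, float d) {
            return 0.5f * (2.0f * b + (c - a) * t
                           + (2.0f * a - 5.0f * b + 4.0f * c - d) * t2
                           + (3.0f * b - a - 3.0f * c + d) * t3);
        };
        return { blend(p0.x, p1.x, p2.x, p3.x),
                 blend(p0.y, p1.y, p2.y, p3.y) };
    }

    // Sample each waypoint segment; endpoints are duplicated so the curve
    // starts and ends exactly on the original path.
    std::vector<P> smoothPath(const std::vector<P>& wp, int samplesPerSegment)
    {
        if (wp.size() < 2) return wp;
        std::vector<P> out;
        for (std::size_t i = 0; i + 1 < wp.size(); ++i) {
            P p0 = wp[i == 0 ? 0 : i - 1];
            P p3 = wp[i + 2 < wp.size() ? i + 2 : wp.size() - 1];
            for (int s = 0; s < samplesPerSegment; ++s)
                out.push_back(catmullRom(p0, wp[i], wp[i + 1], p3,
                                         s / float(samplesPerSegment)));
        }
        out.push_back(wp.back());
        return out;
    }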
In addition, algorithms are possible that rely on a gradient over the search space (where each point in space is associated with a value), rather than a graph. These are probably not applied in most games because they take more time and memory, but might be interesting to know about anyway. Examples include various hill-climbing algorithms (which are real-time by default) and potential field methods.
Methods to procedurally obtain a graph from a continuous space exist as well, for example cell decomposition, Voronoi skeletonization and probabilistic roadmap skeletonization. The former produces something compatible with a navigation mesh (though it might be hard to make it as efficient as a hand-crafted navigation mesh), while the latter two produce results that are more like waypoint graphs. All of these, as well as potential field methods and A* search, are also relevant to robotics.
Sources:
Artificial Intelligence: A Modern Approach, 2nd edition
Introduction to The Design and Analysis of Algorithms, 2nd edition

AABB vs Circle and vice versa using the separating axis theorem

I am following this tutorial for my 2D game's collision handling. It explains the collision system used in one of my favorite games, "N", and how the separating axis theorem is used for collision between AABB vs AABB and AABB vs Circle: http://www.metanetsoftware.com/technique/tutorialA.html. I understand the implementation of AABB vs AABB collision handling, but I couldn't understand AABB vs Circle collision detection, especially the Voronoi regions. I'm totally confused about how/where to start.
AABB vs AABB collision detection:
1. Find the axis for each edge by finding the normal of that edge.
2. Project all the vertices onto the resulting axis; the result of each projection is a scalar value.
3. The resulting scalar values are in turn used to determine whether a collision is present or not.
Can someone please explain how to handle AABB vs Circle collision, and vice versa?
Since collisions with a circle always come down to a comparison against the radius (in your case, via projection), having the closest line segment (edge of the polygon) and the normal vector are the only building blocks you need. The normal vector is easily computed from the points of the line segment (something like unit(y2-y1, x1-x2) ... the negative reciprocal of the slope). Figuring out which edge is closest is the building block that remains. Voronoi regions give us the last building block.
You understand collisions between axis-aligned bounding boxes. I assume you also understand collisions between two circles. I'm assuming you don't understand Voronoi regions. So, where to start? Voronoi diagrams. I highly suggest that you find a diagrammed explanation. This link is quite good. However, depending on how lost you are, perhaps a little additional background (seriously, though, no explanation can beat the visual):
A Voronoi diagram is one of the ubiquitous data structures of computational geometry. Any computational geometry book will discuss the Voronoi diagram. It answers a simple question: where is the closest post office? Given a set of points in a plane (post offices), a Voronoi diagram separates the plane into different regions, each containing one of the points. If you are in a particular region, you know which point (post office) is closest to you. If you were a circle, this would be nice for collision detection for a simple reason: the closest point is the most important one to test for collisions.
Note that if you want to mathematically derive a Voronoi diagram, you simply consider all point pairs and calculate all bisecting lines. Then you intersect all of the bisecting lines and throw away the segments that are unimportant because some other point is closer to the point of interest (which happens at every intersection). This leads to a terribly inefficient algorithm, though. The efficient implementation involves another ubiquitous thing in computational geometry: the line-sweep algorithm. Its details can be found elsewhere; the important bit is that it provides a method of considering only the important points at any stage of the algorithm.
The Voronoi regions in your tutorial are a little more complex. Instead of just points, we have line segments. Fortunately, the line-sweep algorithm handles this nicely. You mostly have to worry about the start or end of the line segments. Conceptually, not much changes once you have the basic algorithm down. Again, this is exceptionally helpful for collision detection with a circle: given the Voronoi region, you know which line segment to test collisions against.
Does that even help? Feedback appreciated. I'll be happy to clarify anything. Explaining Voronoi diagrams without visuals is probably a bad idea.
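For what it's worth, here is what the Voronoi-region idea collapses to in code for the AABB case: clamping the circle's center to the box implicitly selects the correct region (face, edge or corner), and the closest point falls out of it. This is my own sketch, not the tutorial's code:

    #include <cmath>

    struct Circle { float x, y, r; };
    struct AABB   { float minX, minY, maxX, maxY; };

    // Circle vs AABB: clamp the circle's center to the box to obtain the
    // closest point on (or in) the box. Which coordinates get clamped is
    // exactly the Voronoi-region question: clamping both picks a corner
    // region, clamping one picks an edge region, clamping none means the
    // center is inside the box.
    bool circleVsAabb(const Circle& c, const AABB& b)
    {
        float px = std::fmax(b.minX, std::fmin(c.x, b.maxX));
        float py = std::fmax(b.minY, std::fmin(c.y, b.maxY));
        float dx = c.x - px, dy = c.y - py;
        return dx * dx + dy * dy <= c.r * c.r;
    }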

How can I produce visualizations combining network graphs and imaginary maps?

Basically, I'm looking for something like this awesome research project: GMap, which was referenced in this related SO question.
It's a rather novel data visualization that combines a network graph with an imaginary set of regions that looks like a map. Basically, the map-ification helps humans comprehend the enormous data set better.
Cool, huh? GMap doesn't appear to be open source, though I plan to contact the authors.
I already know how to create a network graph with a force-directed layout (currently using Prefuse/Flare), so an answer could be a way to layer a mapping algorithm on top of an existing graph. I'm also not concerned about the client-side at all right now - this would be a backend process, and I am flexible about technology stack and data output at this stage.
There's also this paper that describes the algorithm backing GMap. If you have heard of Voronoi diagrams (which rock, but make my head hurt), this paper is for you. I quit after Calc 1, though, so I'm hoping to avoid remembering what sigmas and epsilons are.
As a start, could you do a simple closest-point sort of algorithm? It would look something like this: you have your force-directed layout and have computed some sort of bounding box. Now you want to render it. Adjust your bounding box to line up with the origin, and then, as you calculate the color of each pixel, find its closest point. This should generate some semblance of regions and should be quite simple to try out; see the sketch below. Of course, it isn't going to be as pretty as GMap, but maybe it's a start? The runtime would be awful, but... I don't know about you, but computing boundary lines directly sounds a lot harder to me.
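A brute-force version of that suggestion: every laid-out node becomes a seed, and each pixel takes the color of its nearest seed, i.e. a rasterized Voronoi diagram. It's O(pixels x nodes), so strictly for quick experiments; all names are mine:

    #include <limits>
    #include <vector>

    struct Seed { float x, y; unsigned color; }; // laid-out node + its color

    // Fill an image so each pixel takes the color of its nearest seed,
    // producing Voronoi-style regions around the graph nodes.
    void rasterizeRegions(std::vector<unsigned>& pixels, int width, int height,
                          const std::vector<Seed>& seeds)
    {
        pixels.assign(std::size_t(width) * height, 0u);
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                float best = std::numeric_limits<float>::infinity();
                unsigned color = 0;
                for (const Seed& s : seeds) {
                    float dx = x - s.x, dy = y - s.y;
                    float d2 = dx * dx + dy * dy; // squared distance suffices
                    if (d2 < best) { best = d2; color = s.color; }
                }
                pixels[std::size_t(y) * width + x] = color;
            }
        }
    }

Coloring seeds by cluster rather than per node yields contiguous "countries"; GMap itself does considerably more work to smooth and label the boundaries.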
