Ogre3D - render water with waves by vertex? nurbs?

I have an Ogre3D application and I would like to render a surface that represents the water with waves.
I think I am not the only one that has this purpose, so I was looking for an example to follow.
I imagine that if I want to create a water surface and move it like a wave, I have to create a surface with many vertices (depending on the precision I want) and then control the height of each vertex.
As the water will be quite big, I think it will take a long time to render, so I was wondering whether it is better to render it by vertex or with NURBS? Or is there a better way?

There's an Ocean example included in the Ogre distribution that you can use as a starting point. I don't remember whether it uses any LOD system, but it has quite nice random waves and a Fresnel shader.
NURBS won't help you much, as there's no easy way to push them onto the GPU. They're good for some modelling tasks, but in the end you need to convert them to 'real' geometry.
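If you do go the plain vertex-grid route, a rough sketch like the one below is a reasonable starting point (this is not the Ocean demo: the material name, grid size and wave function are placeholders, and for a big surface you would want proper normals and some LOD scheme):

```cpp
// Sketch of a dynamic water patch built from a vertex grid in Ogre.
// "WaterMaterial" is a placeholder material name; GRID/SIZE are arbitrary.
#include <Ogre.h>
#include <cmath>

static const int   GRID = 64;        // vertices per side
static const float SIZE = 1000.0f;   // world-space size of the patch
static const float STEP = SIZE / (GRID - 1);

Ogre::ManualObject* createWater(Ogre::SceneManager* sceneMgr)
{
    Ogre::ManualObject* water = sceneMgr->createManualObject("WaterPatch");
    water->setDynamic(true);          // we rewrite the vertices every frame

    water->begin("WaterMaterial", Ogre::RenderOperation::OT_TRIANGLE_LIST);
    for (int z = 0; z < GRID; ++z)
        for (int x = 0; x < GRID; ++x)
        {
            water->position(x * STEP, 0.0f, z * STEP);
            water->normal(0, 1, 0);
            water->textureCoord((float)x / (GRID - 1), (float)z / (GRID - 1));
        }
    for (int z = 0; z < GRID - 1; ++z)
        for (int x = 0; x < GRID - 1; ++x)
        {
            int i = z * GRID + x;
            water->quad(i, i + GRID, i + GRID + 1, i + 1);
        }
    water->end();
    return water;
}

// Call once per frame: displace the heights with a couple of sine waves.
void updateWater(Ogre::ManualObject* water, float time)
{
    water->beginUpdate(0);            // same vertex/index counts as the first build
    for (int z = 0; z < GRID; ++z)
        for (int x = 0; x < GRID; ++x)
        {
            float wx = x * STEP, wz = z * STEP;
            float h = 2.0f * std::sin(0.02f * wx + time)
                    + 1.0f * std::sin(0.03f * wz + 1.7f * time);
            water->position(wx, h, wz);
            water->normal(0, 1, 0);   // recompute from neighbours for correct lighting
            water->textureCoord((float)x / (GRID - 1), (float)z / (GRID - 1));
        }
    for (int z = 0; z < GRID - 1; ++z)
        for (int x = 0; x < GRID - 1; ++x)
        {
            int i = z * GRID + x;
            water->quad(i, i + GRID, i + GRID + 1, i + 1);
        }
    water->end();
}
```

For a large ocean you would typically keep a grid like this only near the camera, or move the wave function into a vertex shader (which, if I remember right, is what the Ocean sample does).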

Related

Aligning two clouds using two manually selected points

I'm maintaining software which uses PCL. I'm not very experienced with PCL myself; I've only tried some examples and tried to understand the official PCL documentation (which is unfortunately mostly sparse, doxygen-generated text). My impression is that only PCL contributors have a real chance of using the library efficiently.
One feature I have to fix in the software is aligning two clouds. The clouds are two objects which should be stacked together with a layer in between (the actual task is to calculate the volume of that layer).
I hope the picture explains the task well. The objects are both scanned from the sides that are to be stacked (one from above and the other from below). On each cloud the user manually selects two points. Then, I hope, there is a means in PCL to align the two clouds given the clouds and the coordinates of those points. The alignment is required only in the X-Y plane.
Unfortunately I can't find out which function I should use for this, partly because the PCL documentation is IMHO rather thin, and partly because of my lack of experience.
My desperate idea was to stack the clouds using P1 as the origin of both and then rotate the second cloud manually by the angle between P11-P21 and P12-P22. This works, but since the task seems very common to me, I'd expect PCL to provide a dedicated function for it.
Could you point me to a proper API-function, code-snippet, example, similar project or a good book helping to understand PCL API and usage?
Many thanks!
I think this problem does not need PCL. It is simple enough to form the correct linear equation and solve it.
If you want to use PCL without worrying about the maths too much (though, if the above is a mystery to you, then studying some computational geometry would be very useful), here is my suggestion.
Most PCL operations work on 3D point clouds. I understand from your question that you only have 2D point clouds OR you don't care about the 3rd dimension. In this case if I were you I would represent the points as a 3D point cloud and set the z dimension to zero.
You will only need two point clouds with 3 points each, as that is how many points you are feeding to the transformation estimation algorithm. The first two points in each cloud are the points chosen by the user. The third is any point you have chosen that you know is the same in both clouds. You need this third one because otherwise the transform is still ambiguous if a general transform is being computed. However, you can calculate such a point, since you already know two points and you know that all the points lie on a common plane (you have projected them by dropping the z values). Just don't choose it collinear with the other two points. For example, take the point halfway between the two picked points and 2 cm away in the perpendicular direction (making sure to go in the correct direction).
Then you can use the estimateRigidTransformation functions to find the transform.
http://docs.pointclouds.org/1.7.0/classpcl_1_1registration_1_1_transformation_estimation_s_v_d.html
This function is also good for over-determined problems (it is the workhorse of the ICP algorithm in PCL) but as long as you have enough points to determine the transform it should work.
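To make that concrete, here is a rough sketch of the three-correspondence idea (the point type, the 2 cm offset and all helper names are my own choices, not anything prescribed by PCL):

```cpp
// Sketch: estimate a rigid transform from two user-picked point pairs plus one
// synthesized third correspondence. Assumes pcl::PointXYZ with z already set to 0.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/registration/transformation_estimation_svd.h>
#include <pcl/common/transforms.h>
#include <Eigen/Core>

Eigen::Matrix4f alignFromTwoPicks(const pcl::PointXYZ& src1, const pcl::PointXYZ& src2,
                                  const pcl::PointXYZ& tgt1, const pcl::PointXYZ& tgt2)
{
    // Third correspondence: midpoint of the two picks, pushed 2 cm off the segment
    // in the perpendicular direction. The same construction is applied to both
    // clouds; this assumes the clouds are not mirrored relative to each other.
    auto third = [](const pcl::PointXYZ& a, const pcl::PointXYZ& b)
    {
        Eigen::Vector2f d(b.x - a.x, b.y - a.y);
        Eigen::Vector2f n(-d.y(), d.x());                  // perpendicular to the segment
        n.normalize();
        Eigen::Vector2f p(0.5f * (a.x + b.x) + 0.02f * n.x(),
                          0.5f * (a.y + b.y) + 0.02f * n.y());
        return pcl::PointXYZ(p.x(), p.y(), 0.0f);
    };

    pcl::PointCloud<pcl::PointXYZ> src, tgt;
    src.push_back(src1); src.push_back(src2); src.push_back(third(src1, src2));
    tgt.push_back(tgt1); tgt.push_back(tgt2); tgt.push_back(third(tgt1, tgt2));

    pcl::registration::TransformationEstimationSVD<pcl::PointXYZ, pcl::PointXYZ> est;
    Eigen::Matrix4f transform = Eigen::Matrix4f::Identity();
    est.estimateRigidTransformation(src, tgt, transform);
    return transform;
}
```

Apply the result to the whole second cloud with pcl::transformPointCloud(cloud, aligned, transform).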

Vector Shape Difference & intersection

Let me explain my problem:
I have a black vector shape (let's say it's a series of joined, straight lines for now, but it'd be nice if I could also support quadratic curves).
I also have a rectangle of a predefined width and height. I'm going to place it on top of the black shape, and then take the union of the two.
My first issue is that I don't know how to quickly extract vector unions, but I think there is a well-defined formula I can figure out for myself.
My second, and trickier, issue is how to efficiently detect the position the rectangle needs to be in (i.e., what translation and rotation are needed by the matrices) in order to maximize the black area remaining after the union (see the figure below).
The red outlined shape below is ~33% black; the green is something like 85%; and there are positions for this shape & rectangle wherein either could have 100% coverage.
Obviously, I can brute-force this by trying every translation and rotation value for every point where at least part of the rectangle is touching the black shape, then keep track of the one with the most black coverage. The problem is, I can only try a finite number of positions (and may therefore miss the maximum). Apart from that, it feels very inefficient!
Can you think of a more efficient way of tackling this problem?
Something from my Uni days tells me that a Fourier transform might improve the efficiency here, but I can't figure out how I'd do that with a vector shape!
Three ideas that have promise of being faster and/or more precise than brute force search:
Suppose you have a 3d physics engine. Define a "cone-shaped" surface where the apex is at say (0,0,-1), the black polygon boundary on the z=0 plane with its centroid at the origin, and the cone surface is formed by connecting the apex with semi-infinite rays through the polygon boundary. Think of a party hat turned upside down and crumpled to the shape of the black polygon. Now constrain the rectangle to be parallel to the z=0 plane and initially so high above the cone (large z value) that it's easy to find a place where it's definitely "inside". Then let the rectangle fall downward under gravity, twisting about z and translating in x-y only as it touches the cone, staying inside all the way down until it settles and can't move any farther. The collision detection and force resolution of the physics engine takes care of the complexities. When it settles, it will be in a position of maximal coverage of the black polygon in a local sense. (If it settles with z<0, then coverage is 100%.) For the convex case it's probably a global maximum. To probabilistically improve the result for non-convex cases (like your example), you'd randomize the starting position, dropping the polygon many times, taking the best result. Note you don't really need a full blown physics engine (though they certainly exist in open source). It's enough to use collision resolution to tell you how to rotate and translate the rectangle in a pseudo-physical way as it twists and slides uniformly down the z axis as far as possible.
Different physics model. Suppose the black area is an attractive field generator in 2d following the usual inverse square rule like gravity and magnetism. Now let the rectangle drift in a damping medium responding to this field. It ought to settle with a maximal area overlapping the black area. There are problems with "nulls" like at the center of a donut, but I don't think these can ever be stable equilibria. Can they? The simulation could be easily done by modeling both shapes as particle swarms. Or, since the rectangle is a simple shape and you are a physicist, you could come up with a closed form for the integral of attractive force between a point and the rectangle. This way only the black shape needs representation as particles. Come to think of it, if you can come up with a closed form for torque and linear attraction due to two triangles, then you can decompose both shapes with a (e.g. Delaunay) triangulation and get a precise answer. Unfortunately this discussion implies it can't be done analytically, so particle clouds may be the final solution. The good news is that modern processors, particularly GPUs, do very large particle computations with amazing speed. A rough sketch of one simulation step is given below.
Edit: I implemented this quick and dirty. It works great for convex shapes, but concavities create stable points that aren't what you want.
This problem is related to robot path planning, and looking at that literature may turn up some ideas. In RPP you have obstacles and a robot, and you want to find a path the robot can travel while avoiding and/or sliding along them. If the robot is asymmetric and can rotate, then 2d planning is done in a 3d (toroidal) configuration space (C-space) where one dimension is rotation (so it closes on itself). The idea is to "grow" the obstacles in C-space while shrinking the robot to a point. Growing the obstacles is achieved by computing Minkowski differences. If you decompose all polygons into convex shapes, then there is a simple "edge merge" algorithm for computing the MD. When the C-space representation is complete, any 1d path that does not pierce the "grown" obstacles corresponds to a continuous translation/rotation of the robot in world space that avoids the original obstacles. For your problem the white area is the obstacle and the rectangle is the robot. You're looking for any open point at all; this would correspond to 100% coverage. For the less-than-100% case, the C-space would have to be a function on 3d that reflects how "bad" the intersection of the robot with the obstacle is, rather than just a binary value, and you're looking for the least bad point. C-space representation is an open research topic; an octree might work here.
Lots of details to think through in both cases, and they may not pan out at all, but at least these are frameworks to think more about the problem. The physics idea is a bit like using simulated spring systems to do graph layout, which has been very successful.
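For what it's worth, a bare-bones sketch of one damped simulation step for the particle-swarm version of the second idea could look like the following (both shapes pre-sampled into particle sets; every constant is made up, and masses are ignored since only the equilibrium matters):

```cpp
// One damped step: every rectangle particle is pulled toward every black-shape
// particle with an inverse-square force; force and torque are accumulated about
// the rectangle's centroid and integrated with heavy damping.
#include <vector>
#include <cmath>

struct Vec2 { double x, y; };

struct RigidState
{
    Vec2   pos{0, 0};     // rectangle centroid (world space)
    double angle  = 0;    // rotation about the centroid
    Vec2   vel{0, 0};
    double angVel = 0;
};

void step(RigidState& s,
          const std::vector<Vec2>& rectLocal,  // rectangle particles, local frame
          const std::vector<Vec2>& black,      // black-shape particles, world frame
          double dt)
{
    const double G = 1e-3, damping = 0.9, eps = 1e-6;
    const double c = std::cos(s.angle), sn = std::sin(s.angle);

    Vec2 force{0, 0};
    double torque = 0;
    for (const Vec2& pl : rectLocal)
    {
        Vec2 r { c * pl.x - sn * pl.y, sn * pl.x + c * pl.y };  // local -> world offset
        Vec2 pw{ s.pos.x + r.x, s.pos.y + r.y };
        for (const Vec2& b : black)
        {
            double dx = b.x - pw.x, dy = b.y - pw.y;
            double d2 = dx * dx + dy * dy + eps;
            double d  = std::sqrt(d2);
            double fx = G / d2 * dx / d, fy = G / d2 * dy / d;  // inverse-square pull
            force.x += fx;  force.y += fy;
            torque  += r.x * fy - r.y * fx;                     // 2D cross product
        }
    }
    s.vel.x  = damping * (s.vel.x + force.x * dt);
    s.vel.y  = damping * (s.vel.y + force.y * dt);
    s.angVel = damping * (s.angVel + torque * dt);
    s.pos.x += s.vel.x * dt;  s.pos.y += s.vel.y * dt;
    s.angle += s.angVel * dt;
}
```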
I don't believe it is possible to find the precise maximum for this problem, so you will need to make do with an approximation.
You could potentially render the vector image into a bitmap and use Haar features for this - they provide a very quick O(1) way of calculating the average colour of a rectangular region.
You'd still need to perform this multiple times for different rotations and positions, but it would bring the algorithmic complexity down from a naive O(n^5) to O(n^3), which may be acceptably fast (with n here being the size of each of the degrees of freedom you are scanning).
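The O(1) rectangle sums behind Haar features come from a summed-area table (integral image). A minimal sketch, assuming the vector shape has already been rasterized to a grayscale coverage buffer (rotation would be handled by re-rasterizing or by rotating the sampling rectangle):

```cpp
// Summed-area table: O(w*h) preprocessing, then any axis-aligned rectangle sum in O(1).
#include <cstddef>
#include <vector>

struct IntegralImage
{
    std::size_t w = 0, h = 0;
    std::vector<double> sum;   // (w+1) x (h+1), row-major, with a zero border row/column

    IntegralImage(const std::vector<double>& img, std::size_t width, std::size_t height)
        : w(width), h(height), sum((width + 1) * (height + 1), 0.0)
    {
        for (std::size_t y = 0; y < h; ++y)
            for (std::size_t x = 0; x < w; ++x)
                sum[(y + 1) * (w + 1) + (x + 1)] =
                      img[y * w + x]
                    + sum[ y      * (w + 1) + (x + 1)]
                    + sum[(y + 1) * (w + 1) +  x     ]
                    - sum[ y      * (w + 1) +  x     ];
    }

    // Sum over the half-open pixel range [x0, x1) x [y0, y1).
    double rectSum(std::size_t x0, std::size_t y0, std::size_t x1, std::size_t y1) const
    {
        return sum[y1 * (w + 1) + x1] - sum[y0 * (w + 1) + x1]
             - sum[y1 * (w + 1) + x0] + sum[y0 * (w + 1) + x0];
    }
};
```

Dividing rectSum by the rectangle's pixel count gives the average coverage for that placement.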
Have you thought about keeping track of the remaining white space inside the blocks, with something like if (whitespace !== 0)?

Generating triangulated road geometry from a graph

What I'm trying to achieve:
Have a look at the following image from this paper
It's taking a road graph that is likely represented as segments/junctions, giving the lines width (call it what you like, sweeping, thickening) and then generating triangulated geometry for the roads.
Why I am asking this question:
This operation seems to be a fairly standard thing to do, but I can't find any papers that directly deal with how to do it. Most GIS / procedural city generation papers focus on the generation of the road graph itself (e.g. creating interesting topologies), but the step of taking the graph data and generating triangle meshes / UVs is always glossed over.
Here's a really nice video of complex road intersections with nice texturing and good-looking junctions. This is the level of quality I'd eventually like to achieve, but an incremental step towards this would be more than acceptable to me. Here's another video showing interactive road graph creation with a 3d visualisation.
There is a paper to go with that video but nothing is said about the triangulation strategy :(
I have my own approach to try that's too long-winded to detail here, but I'd much rather implement an existing solution / algorithm if one exists, as it'll be better than anything I cook up in the next few weeks.
Can anyone point me in the right direction?
Thanks.
What you are seeking is the offset polygon for each of the regions bounded by roads. If all those regions are convex, this is an easy computation. If some are nonconvex, then it is more difficult, but still well-studied. You can find links at Wikipedia under straight skeleton, or here on StackOverflow under "An algorithm for inflating/deflating (offsetting, buffering) polygons."
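For the easy convex case the offset can be computed directly: shift each edge inward along its normal and intersect neighbouring shifted edges. A minimal sketch (counter-clockwise winding assumed; non-convex regions need the straight skeleton or a clipping/offsetting library such as Clipper instead):

```cpp
// Inset a convex, counter-clockwise polygon by `delta` (delta > 0 shrinks it).
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Intersection of two infinite lines, each given as point + direction (assumed non-parallel).
static Pt intersect(Pt p, Pt d1, Pt q, Pt d2)
{
    double denom = d1.x * d2.y - d1.y * d2.x;
    double t = ((q.x - p.x) * d2.y - (q.y - p.y) * d2.x) / denom;
    return { p.x + t * d1.x, p.y + t * d1.y };
}

std::vector<Pt> insetConvex(const std::vector<Pt>& poly, double delta)
{
    const std::size_t n = poly.size();
    std::vector<Pt> shifted(n);   // one point on each inward-shifted edge
    std::vector<Pt> dir(n);       // edge directions
    for (std::size_t i = 0; i < n; ++i)
    {
        Pt a = poly[i], b = poly[(i + 1) % n];
        Pt d{ b.x - a.x, b.y - a.y };
        double len = std::hypot(d.x, d.y);
        Pt inward{ -d.y / len, d.x / len };        // left normal points inward for CCW
        shifted[i] = { a.x + inward.x * delta, a.y + inward.y * delta };
        dir[i] = d;
    }
    std::vector<Pt> out(n);
    for (std::size_t i = 0; i < n; ++i)            // corner i joins edges i-1 and i
        out[i] = intersect(shifted[(i + n - 1) % n], dir[(i + n - 1) % n],
                           shifted[i], dir[i]);
    return out;
}
```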

How do you create 2D vines procedurally for a game?

For a game I am making I want to create 2D vines and vine like structures procedurally. Is there some paper or code snippet that someone can point me to?
Googling results in procedural trees which have straight spiky branches, but I need to create vines which are curvy. Think Jack and the beanstalk type of growth.
http://youtu.be/2wq541W6LyE?t=2m11s
Your particular approach is going to depend on how your game handles drawing and collisions.
An approach popular with Flash-based games is to draw the vine to a bitmap. Since you don't list your programming environment, I'll just explain the steps, not the code.
Start with a circle, then:
1. Draw it.
2. Move it.
3. Scale it down.
4. At a random interval, spawn a "branch" and/or a leaf. Set the scale and position of the branch to match the parent and start its own 1-5 loop.
5. Repeat from step 1 until fully grown (the scale is too small to proceed).
In the move phase it can be handy to use a sine curve to make your vine weave in and out.
You can tweak the settings for how much it curves to get different types of vines.
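Putting those steps together, a rough sketch of the growth loop might look like this (it emits circles as position + radius so it stays independent of any drawing API; every constant is just a tuning knob I made up):

```cpp
// Grow a vine as a chain of shrinking circles, weaving on a sine curve and
// occasionally spawning a smaller branch that runs the same loop.
#include <cmath>
#include <cstdlib>
#include <vector>

struct Circle { float x, y, r; };

void growVine(std::vector<Circle>& out, float x, float y,
              float radius, float heading, float phase)
{
    while (radius > 0.5f)                         // fully grown: too small to proceed
    {
        out.push_back({x, y, radius});            // 1. draw it

        float weave = 0.6f * std::sin(phase);     // weave in and out on a sine curve
        x += radius * std::cos(heading + weave);  // 2. move it
        y += radius * std::sin(heading + weave);
        phase += 0.3f;

        radius *= 0.985f;                         // 3. scale it down

        if (std::rand() % 100 < 3)                // 4. at random intervals, branch
            growVine(out, x, y, radius * 0.7f,
                     heading + ((std::rand() % 2) ? 0.8f : -0.8f), phase);
        // 5. repeat from step 1
    }
}

// Usage: std::vector<Circle> vine; growVine(vine, 0.0f, 0.0f, 10.0f, -1.57f, 0.0f);
```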
Here is a link to a discussion of the topic. Some good source code can be found in the links.
http://groups.google.com/group/flashcodersny/browse_thread/thread/9906041e557e620c
Including source code in Flash:
http://xfiles.funnygarbage.com/~colinholgate/swf/varicoseg.zip
And a JavaScript version that looks more like lightning, but could be adapted to vines without much change:
http://www.brainjam.ca/hyperbolic/01_spite_mrdoob.html

Volume of a 3D closed mesh car object

I have a 3D closed mesh car object with a surface made up of triangles. I want to calculate its volume, center of volume and inertia tensor.
Could you help me?
Regards,
George
For volume...
For each triangular facet, lookup its corner points. Call 'em P,Q,R.
Compute this quantity (I call it the "partial volume"):
pv = Px*Qy*Rz + Py*Qz*Rx + Pz*Qx*Ry - Px*Qz*Ry - Py*Qx*Rz - Pz*Qy*Rx
Add these together for all facets and divide by 6.
Important! The P,Q,R for each facet must be arranged clockwise as seen from outside. (Or all counter-clockwise, as long as it's consistent for all facets.)
If the mesh has any quadrilaterals, just temporarily hallucinate a diagonal joining one pair of opposite corners. That makes it into two triangles.
Practical computational improvement: before doing the math with P, Q and R, subtract the coordinates of some "center" point C from each of them. This can be the center of mass, a midpoint between the min/max x, y and z, or any convenient point inside or near the mesh. This helps minimize truncation errors and gives more accurate volumes.
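A compact sketch of that procedure (triangle soup in, volume out; the reference point C and the consistent winding are exactly as described above):

```cpp
// Sum the signed "partial volumes" P . (Q x R) over all facets and divide by 6.
// Each triangle must be wound consistently as seen from outside the mesh.
#include <array>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

double meshVolume(const std::vector<std::array<Vec3, 3>>& triangles, const Vec3& c)
{
    double sixVol = 0.0;
    for (const auto& t : triangles)
    {
        // Subtract the "center" point C first to reduce truncation error.
        Vec3 p{ t[0].x - c.x, t[0].y - c.y, t[0].z - c.z };
        Vec3 q{ t[1].x - c.x, t[1].y - c.y, t[1].z - c.z };
        Vec3 r{ t[2].x - c.x, t[2].y - c.y, t[2].z - c.z };

        // Scalar triple product -- the "partial volume" pv from the formula above.
        sixVol += p.x * (q.y * r.z - q.z * r.y)
                + p.y * (q.z * r.x - q.x * r.z)
                + p.z * (q.x * r.y - q.y * r.x);
    }
    return std::fabs(sixVol) / 6.0;   // sign depends on the winding convention
}
```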
From a numerical point of view, what you are trying to achieve is quite simple and can be reduced to calculating a few quadratures. Wikipedia will provide the needed information about the maths behind it.
If you are looking for out-of-the-box volume calculation, take a look at this entry.
As for inertia: the shape is not enough, as you also need the distribution of mass.
Well, there isn't much information on the car provided here. You should be able to break the car down into simpler shapes; the higher the degree of approximation you require, the more simple shapes you can break it into. (This could be difficult if the car is somehow dynamically generated and completely different every time, but I don't see that situation making much sense.)
This should help with finding the inertia tensor of various simpler shapes: http://www.gamedev.net/community/forums/topic.asp?topic_id=57001 . Finding the volumes and the like for things like spheres and cubes is pretty common knowledge, so I won't bother linking that.
I think it was Archimedes who discovered that if you submerge the car in a volume of liquid, the displaced liquid will have the same volume as the car.
I'm not sure how that would help you in this case, though. Having a liquid simulation running in the background and submerging the mesh into it sounds a bit over the top. Although I think it would work, and it therefore qualifies as a (somewhat useless) answer. ;^)
