Constrained (Delaunay) Triangulation - graph

For a university project I need to implement a computer graphics paper that was released a couple of years ago. At one point, I need to triangulate the results I get from my simulation. I guess it's easier to explain what I need by looking at a picture contained within the paper:
Let's say I already have all the information it takes to reconstruct the contour lines that you can see in the second thumbnail. Using those, I need to do some triangulation using those silhouettes as constraints. I have searched the internet for triangulation libraries like CGAL, VTK, Triangle, Triangle++, ... but I always ended up throwing my hands up in horror. I am not a good programmer and it seems impossible to me to get into one of those APIs before the deadline of this project passes.
I would appreciate any kind of help like code snippets, tips, etc.
I know that the algorithms need segments (pairs of points) as input, so let's say I have one std::vector containing all pairs of points defining the silhouette as well as the left and right side of the rectangle.
Can you somehow give me a code snippet for, e.g., CGAL that I could use for my purpose? First of all I just want to achieve the state of the third thumbnail. Later on I will have to do some displacement within the "cracks" and finally write the information into a VBO for OpenGL rendering.
I have started working it out with CGAL. One simple problem still drives me crazy:
It is possible to attach information (like ints) to points before adding them to the triangulator object. I do this because I need, on the one hand, an int flag that I use later on to define my texture coordinates and, on the other hand, an index which I use so that I can create an indexed VBO.
http://doc.cgal.org/latest/Triangulation_2/Triangulation_2_2info_insert_with_pair_iterator_2_8cpp-example.html
But instead of points I only want to insert constraint edges. If I insert both, CGAL returns strange results, since the points have then been fed in twice (once as a point and once as an endpoint of a constrained edge).
http://doc.cgal.org/latest/Triangulation_2/Triangulation_2_2constrained_8cpp-example.html
Is it possible to attach information to constraints in the same way as with points, so that I can use only this call, cdt.insert_constraint( Point(j,0), Point(j,6));, before I iterate over the resulting faces?
Later on, when I loop over the triangles, I need some way to access the int flags that I defined before. Like this, but not on actual points, rather on the "ends" defined by the constraint edges:
for (CDT::Finite_faces_iterator fit = m_cdt.finite_faces_begin(); fit != m_cdt.finite_faces_end(); ++fit, ++k) {
    int j = k * 3;
    for (int i = 0; i < 3; i++) {
        indices[j + i] = fit->vertex(i)->info().first;
    }
}
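A minimal sketch of one way around the double insertion (assuming CGAL's insert_constraint overload that takes two vertex handles; the info pair and the example segment are placeholders, not from the thread): insert each endpoint on its own, set its info through the returned vertex handle, then constrain the edge between the two handles.

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Constrained_Delaunay_triangulation_2.h>
#include <CGAL/Triangulation_vertex_base_with_info_2.h>
#include <CGAL/Constrained_triangulation_face_base_2.h>
#include <CGAL/Triangulation_data_structure_2.h>
#include <utility>

typedef CGAL::Exact_predicates_inexact_constructions_kernel                K;
// each vertex carries a pair<texture flag, VBO index>
typedef CGAL::Triangulation_vertex_base_with_info_2<std::pair<int,int>, K> Vb;
typedef CGAL::Constrained_triangulation_face_base_2<K>                     Fb;
typedef CGAL::Triangulation_data_structure_2<Vb, Fb>                       Tds;
typedef CGAL::Constrained_Delaunay_triangulation_2<K, Tds>                 CDT;
typedef CDT::Point                                                         Point;

int main() {
    CDT cdt;
    // hypothetical constraint segment with per-endpoint info
    CDT::Vertex_handle va = cdt.insert(Point(0, 0));
    va->info() = std::make_pair(0 /* texture flag */, 0 /* VBO index */);
    CDT::Vertex_handle vb = cdt.insert(Point(0, 6));
    vb->info() = std::make_pair(1 /* texture flag */, 1 /* VBO index */);
    // constrain the edge between the already-inserted vertices,
    // so the same coordinates are never fed in twice
    cdt.insert_constraint(va, vb);
    return 0;
}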

Related

How to determine OpenGL winding based on normals?

I am trying to understand how to manually generate objects.
I have a mesh; I delete part of it and create new geometry in its place. I have information about the normals of the deleted vertices, on the basis of which I have to build new faces (of a different size and quantity) facing the same direction.
But I don't understand how to choose the correct winding. It sounds easy when the tutorials talk about CCW winding in screen space. But what if I have a bunch of almost chaotic points in model space? How do I determine CCW then, and which axis is used for it? I suspect that the nearest old normals might help. But what is the cheapest method to determine the correct order?
It turned out to be easier than I thought. Take the cross product of the first two edge vectors of the triangle, then take the dot product of the resulting vector with the normal vector; if the result is negative, the vertex order has to be reversed during generation.
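A minimal sketch of that check (hypothetical Vec3 type and function name, not from the original answer):

struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(const Vec3& a, const Vec3& b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(const Vec3& a, const Vec3& b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true if the triangle (v0, v1, v2) already winds so that its face
// normal points the same way as the reference normal; if false, swap two
// vertices when emitting the triangle.
bool windingMatches(const Vec3& v0, const Vec3& v1, const Vec3& v2, const Vec3& referenceNormal) {
    Vec3 faceNormal = cross(sub(v1, v0), sub(v2, v0));
    return dot(faceNormal, referenceNormal) >= 0.0f;
}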

How to create 2D slices of a 3D object model in Qt?

I'm currently rendering a 3D model (Wavefront .obj format) in my Qt program. Right now, I'm rendering the model using Scene3D in QML, and I'm able to get it to display in the viewing area. What I would like to do is have the user click on the model and generate a 2D cross section of that slice, which I would then plot in a different window. I'm quite new to 3D rendering, and a lot of the Qt documentation isn't very descriptive. I've been reading Qt documentation, experimenting, and searching online with no luck. How can I create 2D slices of a 3D object model in Qt 3D, preferably in QML? What Qt libraries or classes can I use to achieve this?
Unfortunately, the fact that models are stored as a set of surfaces makes this hard. Qt probably doesn't have a built-in method for this.
Consider, for example, that a model made of faces might be missing a face. What then? Can you interpolate across that gap consistently from different angles? And what about the fact that a cross-section probably won't contain any of the original vertices?
But, of course, it can be solved. First, just don't allow unclosed surfaces (meshes with holes). Second, to find the vertices of your cross-section, intersect every edge in your model with the plane you're using; wherever there is an intersection, there is a point of the cross-section. Third, to find the edges, look at the list of those vertices: any two that come from edges of the same polygon in the mesh should be connected by an edge in the cross-section. To find which direction the edge should go, project the normal of that polygon onto the plane you're using. For filling, I don't really know what to do. I guess that's whatever you want it to be.
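A minimal sketch of the edge/plane intersection from the second step (hypothetical Vec3 type and helper names, not Qt-specific):

#include <optional>

struct Vec3 { double x, y, z; };

static Vec3   sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3   lerp(const Vec3& a, const Vec3& b, double t) {
    return {a.x + t*(b.x - a.x), a.y + t*(b.y - a.y), a.z + t*(b.z - a.z)};
}

// Plane given as a point on it plus its normal. Returns the intersection
// point of segment ab with the plane, if there is one.
std::optional<Vec3> intersectEdgeWithPlane(const Vec3& a, const Vec3& b,
                                           const Vec3& planePoint, const Vec3& planeNormal) {
    double da = dot(sub(a, planePoint), planeNormal);   // signed (scaled) distances
    double db = dot(sub(b, planePoint), planeNormal);
    if ((da > 0 && db > 0) || (da < 0 && db < 0))
        return std::nullopt;                            // both endpoints on the same side
    double denom = da - db;
    if (denom == 0.0) return std::nullopt;              // edge lies in the plane; handle separately
    double t = da / denom;                              // parameter along ab where it crosses
    return lerp(a, b, t);
}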

How to implement a KdTree using the PCLPointCloud2 loaded by loadOBJFile in Point Cloud Library?

Okay, so I have one OBJ file which I read into a PCLPointCloud2. Now I want to feed it into a k-d tree, which does not take PCLPointCloud2 as input. I want to be able to query whether any general point lies on the surface of my OBJ file.
I am finding it hard to understand their documentation. So how can it be done?
Plus, kindly point me to a good reference that is easy to interpret. And what is "PointT", by the way? Is it a custom type defined by us? Please elaborate.
Look at the code in the provided tool pcl_mesh_sampling (in the PCL code directory under tools/mesh_sampling.cpp). It is relatively simple. It loads a model from PLY or OBJ, then for each triangle it samples random points from it. The final point cloud then undergoes voxel-grid sampling to make the points relatively uniform. Alternatively, you can just run the pcl_mesh_sampling program on your OBJ file to get an output PCD, which you can then visualise with pcl_viewer before loading the PCD file into your own code.
Once you have the final point cloud, you can build and use a KD-Tree as per http://pointclouds.org/documentation/tutorials/kdtree_search.php
PointT is the template argument. The point cloud library can handle a variety of point types, from simple PointXYZ (having just x,y,z) to more complicated points like PointXYZRGBNormal (having x,y,z,normal_x,normal_y,normal_z, curvature, r, g, and b channels). Each algorithm is templated on the point type that you want to use. It would probably be easier if you used PointXYZ with your OBJ file, so use pcl::PointXYZ for all your template arguments. For more on templates see http://www.tutorialspoint.com/cplusplus/cpp_templates.htm and http://pointclouds.org/documentation/tutorials/adding_custom_ptype.php.
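A minimal sketch of the KD-tree step, following the kdtree_search tutorial linked above and assuming the sampled cloud uses pcl::PointXYZ (the query point here is just a placeholder):

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <vector>

int main() {
    // this cloud would normally come from the PCD written by pcl_mesh_sampling
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    cloud->push_back(pcl::PointXYZ(0.0f, 0.0f, 0.0f));
    cloud->push_back(pcl::PointXYZ(1.0f, 0.0f, 0.0f));

    pcl::KdTreeFLANN<pcl::PointXYZ> kdtree;
    kdtree.setInputCloud(cloud);

    pcl::PointXYZ query(0.1f, 0.0f, 0.0f);
    std::vector<int> indices(1);
    std::vector<float> sqrDistances(1);
    // nearest neighbour; the squared distance tells you roughly how far
    // the query point is from the sampled surface
    if (kdtree.nearestKSearch(query, 1, indices, sqrDistances) > 0) {
        // (*cloud)[indices[0]] is the closest surface sample to the query
    }
    return 0;
}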
Update (reply to latest comment)
Added here because this reply is too long for a comment.
I think I see what you are getting at. When you sample points from the mesh and build a KD-tree of the object surface, for each point you keep track of which faces are near that point (probably all the faces adjacent to the face from which the point was sampled should be sufficient; just one face is definitely insufficient). Then, when the query point is given, you find the nearest point in the KD-tree and check whether the query point is on the "outside" or inside of the full list of nearby faces associated with that point in the KD-tree. If it is on the "inside" of all of them, perhaps it is an interior point. But I cannot guarantee that this is true; that is my thinking on the question at the moment. I do wonder, though, whether you really want a mesh-based approach. By the way, if you break your mesh up into convex parts, you get nice guarantees when processing each convex part.

How does a non-tile-based map work?

OK, here is the thing. Recently I decided I wanted to understand how random map generation works. I found some papers and some articles. The most interesting ones were the "Diamond-Square algorithm" and "Midpoint Displacement". I still have to try to apply those in software, but other than that, I ran into this site: http://www-cs-students.stanford.edu/~amitp/game-programming/polygon-map-generation/
As you can see, the idea is to use polygons. But I have no idea how to apply that to a tile-based map, nor even how to create those polygons using the tools I have (C++ and SDL). I am assuming there is no way to do it (please correct me if I am wrong). But if I am not, how does a non-tile map work, and how are these polygons generated?
This answer will not give you directly the answers you're looking for, but hopefully will get you close enough!
The Problem
I think what blocks you is how to represent the data. You're probably used to a 2D grid that simply stores the type of each tile. As you know, this is fine for handling a tile-based map, but it doesn't let you model worlds where tiles have different shapes.
Graphs
What I suggest is to see the problem a bit differently. A grid is nothing more than a graph (more info) with nodes that have 4 (or 8 if you allow diagonals) implicit neighbor nodes. So first, what I would do if I were you would be to move from your strict standard 2D grid to a more "loose" graph, where each node has a position, a size, a list of neighbors (in most cases you'll have corners with 2 neighbors, borders with 3 and "middle" tiles with 4), and finally a rendering component which simply draws your tile on screen at the given position. Once this is done, you should be able to get the exact same results on screen that you currently have with your "2D tile-based" engine by simply calling the rendering component for each node whose bounding box (I didn't explicitly list a bounding box among the things to add to your node, but I'll get back to this later) intersects the camera's frustum (in a 2D world, that most likely means the position +/- the size intersects the RECT currently being drawn).
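A minimal sketch of such a node (hypothetical type and member names):

#include <vector>
#include <cstddef>

struct Vec2 { float x, y; };

// One node of the "loose" graph replacing the implicit 2D grid.
struct MapNode {
    Vec2 position;                       // where the tile sits in world space
    Vec2 size;                           // extent used for the bounding-box test
    int  tileType = 0;                   // what to draw (grass, water, ...)
    std::vector<std::size_t> neighbors;  // indices of adjacent nodes (2-4 for a grid)
};

struct MapGraph {
    std::vector<MapNode> nodes;
};

// Render only nodes whose bounding box overlaps the camera rectangle.
inline bool overlapsCamera(const MapNode& n, Vec2 camMin, Vec2 camMax) {
    return n.position.x < camMax.x && n.position.x + n.size.x > camMin.x &&
           n.position.y < camMax.y && n.position.y + n.size.y > camMin.y;
}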
Search
This more generic approach will also help you do things like pathfinding with generic algorithms that explore nodes until they find a valid path (see A* or Dijkstra). Even if you decided to stick to a good old 2D tile-map game, these techniques would still be useful!
Yeah but I want Polygons
I hear you! So, if you want polygons, basically all you need to do is add to your nodes a list of vertices and the appropriate data you might need to render your polygons (vertex colors, textures and U/V maps, etc.), and update your rendering component to make the appropriate OpenGL calls (this, for example, should help) to draw your nodes. Once again, the first step in iteratively upgrading your 2D tile engine to a polygon map engine would be to give each of your nodes, for each tile in your map, two triangles, a texture resource (the tile), and U/V mappings (0,0 - 0,1 - 1,0 and 1,1). When this step is done, you should have a "generic" polygon-based tile map engine. Most of this data can be generated procedurally by calculating coordinates based on tile position, tile size, etc.
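A minimal sketch of that per-tile quad generation (hypothetical Vertex type and function name):

#include <vector>

struct Vertex { float x, y, u, v; };

// Emit the two triangles (six vertices) for one square tile at grid position
// (tx, ty) with the given tile size; UVs cover the whole tile texture.
std::vector<Vertex> makeTileQuad(int tx, int ty, float tileSize) {
    float x0 = tx * tileSize, y0 = ty * tileSize;
    float x1 = x0 + tileSize, y1 = y0 + tileSize;
    return {
        {x0, y0, 0.f, 0.f}, {x1, y0, 1.f, 0.f}, {x0, y1, 0.f, 1.f},  // first triangle
        {x1, y0, 1.f, 0.f}, {x1, y1, 1.f, 1.f}, {x0, y1, 0.f, 1.f},  // second triangle
    };
}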
Convex Polygons
If you decide that you might ever need NPCs to navigate your map, or want to allow your player to navigate by clicking the map, I would suggest that you always use convex polygons (the triangle being the simplest form of a convex polygon). This lets your code assume that any two positions on the same polygon can be reached from each other in a straight line.
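If you generate polygons procedurally, a quick convexity check can guard that assumption; a minimal sketch (hypothetical names):

#include <vector>
#include <cstddef>

struct Vec2f { float x, y; };

// Returns true if the polygon (vertices in order, CW or CCW) is convex:
// the cross products of consecutive edges must all have the same sign.
bool isConvex(const std::vector<Vec2f>& poly) {
    const std::size_t n = poly.size();
    if (n < 4) return true;                      // triangles are always convex
    bool hasPos = false, hasNeg = false;
    for (std::size_t i = 0; i < n; ++i) {
        Vec2f a = poly[i], b = poly[(i + 1) % n], c = poly[(i + 2) % n];
        float cross = (b.x - a.x) * (c.y - b.y) - (b.y - a.y) * (c.x - b.x);
        if (cross > 0) hasPos = true;
        if (cross < 0) hasNeg = true;
        if (hasPos && hasNeg) return false;      // the sign flips, so not convex
    }
    return true;
}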
Complex Maps
Based on the link you provided, you want rather complex maps. In this case, the author used Voronoi diagrams to generate the polygons of the map. There are already solutions for doing triangulation like that, but you might also want to use other techniques that are easier to work with if you're just switching to 3D, like this one for example. Once you have interesting results, you should consider implementing serialization to save/load your map data from the game. If you want to create an editor, be aware that it might be a lot of work, but it can be worth it if you want people to help you create maps or to add elements to the maps (like geometry that's not part of the terrain).
I went all over the place with this answer, but hopefully it helps!
Just iterate over all the tiles, and do a hit-test from the centre of each tile to the polygons. Set the type of the tile from the type of the polygon it falls in. Did you need more than that?
EDIT: Sorry, I realize that probably isn't helpful. Playing with procedural algorithms can be fun and profitable. Start with a loop that iterates over all tiles and randomly chooses whether or not each tile is occupied. Then iterate over them again and choose whether each tile is occupied based on whether it or its neighbour is.
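One reading of that two-pass idea, as a minimal sketch (hypothetical names):

#include <cstdlib>
#include <vector>

std::vector<std::vector<bool>> randomOccupancy(int w, int h) {
    // first pass: a coin flip per tile
    std::vector<std::vector<bool>> grid(h, std::vector<bool>(w));
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            grid[y][x] = (std::rand() % 2 == 0);

    // second pass: mark a tile occupied if it or any 4-neighbour was
    // occupied in the first pass (one reading of the suggestion above)
    std::vector<std::vector<bool>> smoothed = grid;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            bool neighbourOccupied =
                (x > 0     && grid[y][x - 1]) || (x + 1 < w && grid[y][x + 1]) ||
                (y > 0     && grid[y - 1][x]) || (y + 1 < h && grid[y + 1][x]);
            smoothed[y][x] = grid[y][x] || neighbourOccupied;
        }
    return smoothed;
}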
Also, check out the source code for this: http://dustinfreeman.org/toys/wall7-dustin.html

Very general question about the implementation of vectors, vertices, edges, rays, lines & line segments

This is just a LARGE generalized question regarding rays (and/or line segments, edges, etc.) and their place in a software-rendered 3D engine that may or may not be performing raytracing operations. I'm learning the basics and I'm the first to admit that I don't know much about this stuff, so please be kind. :)
I wondered why a parameterized line is not used instead of a ray (or are they?). I have looked at a few .cpp files around the internet and seen a couple of resources define a Ray.cpp object, one with a vertex and a vector, another with a point and a vector. I'm pretty sure that you can define an infinite line with only a normal or a vector and then define intersecting points along that line to create a line segment as a subset of that infinite line. Are there any current engines implementing lines in this way, or is there a better way to go about this?
To add further complication (or simplicity?), Wikipedia says that in a vector space the end points of a line segment are often vectors, notably u and u + v, which makes a lot of sense if you define a line by vectors in space rather than by intersecting an already defined infinite line. But I cannot find any implementation of this either, which makes me wonder about the validity of my thoughts when applying this in a 3D engine. Even further complication arises when looking at the Flash 3D engine Papervision: I looked at its Ray class and it takes 6 individual number values as its parameters and then returns them as 2 different Number3D (the Papervision equivalent of a vector) data types?!
I'd be very interested to see an implementation that actually uses the CORRECT way of implementing these low-level parts, as per their true definitions.
I'm pretty sure that you can define an infinite line with only a normal or a vector
No, you can't. A vector would define the direction of the line, but all parallel lines share the same direction, so to pick one you need to pin it down with a specific point that the line passes through.
Lines are typically defined in Origin + Direction*K form, where K can take any real value, because that form is easy to use in other math. You could just as well use two points on the line.
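A minimal sketch of that Origin + Direction*K representation (hypothetical names):

struct Vec3 { float x, y, z; };

// Origin + Direction*K form: K in [0, inf) gives a ray, K over all reals gives
// the infinite line, and K in [0, 1] gives the segment from origin to origin + direction.
struct Ray {
    Vec3 origin;
    Vec3 direction;

    Vec3 at(float k) const {
        return { origin.x + k * direction.x,
                 origin.y + k * direction.y,
                 origin.z + k * direction.z };
    }
};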
