Applying transformations in a 2D scene graph

In a 2D scene graph, when I apply a transformation such as a rotation to a node, how should I propagate it to the node's children? If I apply the operation to every child individually, it can take a lot of time. My idea was that while the app renders the scene, on the way down the tree the global rotation is combined with the current node's rotation (and likewise for translation and scale), and on the way back up those global properties are undone. Is there a better way?

You need to use a stack on which you push and pop the current transformation matrix.
def calculateTransformRecursive(self):
    # Save the current global transform so it can be restored afterwards.
    pushGlobalTransform()
    # globalTransform = parent's globalTransform * this node's local transform
    self.calculateAndStoreGlobalTransform()
    for node in self.nodes:
        node.calculateTransformRecursive()
    # Restore the parent's global transform before going back up the tree.
    popGlobalTransform()
This way every node has a global transformation at drawing time, and there's no need to "undo" transformations when going up the tree.
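For illustration, here is a minimal self-contained sketch of the same idea in Python with NumPy, using 3x3 homogeneous matrices for 2D transforms; the Node fields and function names are my own, not from any particular engine:

import numpy as np

class Node:
    def __init__(self, local=None):
        self.local = np.eye(3) if local is None else local  # local transform
        self.world = np.eye(3)                              # derived global transform
        self.nodes = []                                     # child nodes

def calculate_transforms(node, stack):
    # Global transform = parent's global transform * local transform.
    node.world = stack[-1] @ node.local
    stack.append(node.world)                # "push"
    for child in node.nodes:
        calculate_transforms(child, stack)
    stack.pop()                             # "pop" restores the parent's matrix

# Usage: calculate_transforms(root, [np.eye(3)])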

Related

Index of hovered element in QGraphicsPathItem

I have a QGraphicsPathItem which is drawn from a list of Cartesian x,y points.
What would be the best (performance-wise) method of determining when the cursor hovers over one of these points? I presently iterate through the source list and compare each point with the cursor position.
Regards
Qt does not provide a built-in solution for what you want. You should reimplement QGraphicsScene::mouseMoveEvent and check in it which point (if any) is hovered, i.e. determine which point lies within a certain distance of the current mouse position (QGraphicsSceneMouseEvent::pos).
The most computation-intensive task is determining the closest point. A naive approach is to iterate over all the points, but general optimised implementations exist:
QuadTree: 2D implementation
k-d tree: Multidimensional implementation
Nearest Neighbor Search: broad overview
Caching the last result and using the triangle inequality can further improve the performance of this method:
If the mouse currently hovers a point P, the next time you only need to check whether it still hovers that point.
If no point is currently hovered and the nearest point to location P (the last mouse position for which you searched) is at distance d, then no point can be hovered while norm(P - QGraphicsSceneMouseEvent::pos()) < d - hoverThreshold, so the search can be skipped.
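As a hedged sketch of that caching idea with PyQt5 (the point list, the hover threshold and the on_hover callback are my assumptions for illustration, not Qt API):

import math
from PyQt5.QtWidgets import QGraphicsScene

class PointHoverScene(QGraphicsScene):
    def __init__(self, points, hover_threshold=5.0):
        super().__init__()
        self.points = points                  # list of QPointF in scene coords
        self.hover_threshold = hover_threshold
        self.last_pos = None                  # last searched mouse position
        self.last_dist = 0.0                  # distance to nearest point then

    def mouseMoveEvent(self, event):
        pos = event.scenePos()
        if self.last_pos is not None:
            moved = math.hypot(pos.x() - self.last_pos.x(),
                               pos.y() - self.last_pos.y())
            if moved < self.last_dist - self.hover_threshold:
                # Triangle inequality: the nearest point is still too far
                # away to be hovered, so skip the search entirely.
                super().mouseMoveEvent(event)
                return
        # Naive linear search; a k-d tree or quadtree could replace this.
        dist = lambda p: math.hypot(p.x() - pos.x(), p.y() - pos.y())
        nearest = min(self.points, key=dist)
        self.last_pos, self.last_dist = pos, dist(nearest)
        if self.last_dist <= self.hover_threshold:
            self.on_hover(nearest)            # hypothetical hover reaction
        super().mouseMoveEvent(event)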
I usually use QGraphicsScene's itemAt() method to check for graphics items under the cursor.

Find the distance between two points on any 3d surface

I am making a game in Unity3D and I need a pathfinding algorithm that can guide enemies towards the player on a 3D surface. The problem is that the 3D surface can take any shape: a sphere, a cube, a torus and many more.
I tried using A*, but for that I need the distance between two points, and since the surface is curved I cannot compute it easily. I found that you can use the haversine formula if it's a sphere, but that won't work on a torus or an arbitrary 3D shape.
I want this kind of result, except for every kind of object:
https://www.youtube.com/watch?v=hvunNq7yVcU
Is there a way/algorithm that I can use to get that result? I know there is something called a nav mesh, but I need to program it myself, and I cannot find how nav meshes approach this dilemma. I am going to use the triangles of my object as nodes.
So my question boils down to:
Does anyone know an algorithm for pathfinding that works on any 3D surface?
Thanks in advance.
I think your problem is that you are not using a graph. I would suggest looking into a tutorial on how to create a graph in the language you are using (this may also help; there they use edges to connect their nodes, which is needed if you have more than one weight). If you build a graph you will need a node class. Each node must contain pointers to the nodes it is connected to and an ID of some kind. In your case that is probably all you need, but you can also assign a weight to each move if you add an edge class (connectors between nodes). If you do have an edge class, your nodes will hold pointers to edges instead of to other nodes, and each edge will have a weight and a pointer to one or two nodes (depending on whether the path is directed or not). You can also make a graph class to contain all of your nodes and edges.
Summary:
Make a node class and determine whether you need the edge class (if everything has a weight of 1 you can get away without it). Use the node class to build a graph representing your map, with each tile being a node holding pointers to its connected tiles. Use A* or Dijkstra's algorithm to search the graph for the shortest path; see the sketch below.
Note: most examples you will find are for 2D graphs. Yours is no different, except that there are no bounds on it; you just need to connect the nodes to their adjacent tiles.
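As a hedged illustration of that summary, here is a small Python sketch that builds a graph from a triangle list (two triangles are connected when they share an edge) and runs Dijkstra's algorithm over it; the data layout and the cost function are my assumptions:

import heapq
from collections import defaultdict

def build_triangle_graph(triangles):
    # triangles: list of (i, j, k) vertex-index tuples.
    edge_to_tris = defaultdict(list)
    for t, (i, j, k) in enumerate(triangles):
        for edge in ((i, j), (j, k), (k, i)):
            edge_to_tris[frozenset(edge)].append(t)
    graph = defaultdict(set)
    for tris in edge_to_tris.values():
        if len(tris) == 2:                  # two triangles share this edge
            a, b = tris
            graph[a].add(b)
            graph[b].add(a)
    return graph

def dijkstra(graph, start, goal, cost):
    # cost(a, b) could be the distance between triangle centroids.
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist[node]:
            continue
        for nxt in graph[node]:
            nd = d + cost(node, nxt)
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(queue, (nd, nxt))
    # Walk back from goal to start (assumes the goal is reachable).
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]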

How does a non-tile-based map work?

OK, here is the thing. Recently I decided I wanted to understand how random map generation works. I found some papers and articles; the most interesting topics were the "diamond-square algorithm" and "midpoint displacement". I still have to try applying those in software, but other than that, I ran into this site: http://www-cs-students.stanford.edu/~amitp/game-programming/polygon-map-generation/
As you can see, the idea is to use polygons. But I have no idea how to apply that to a tile-based map, or even how to create those polygons with the tools I have (C++ and SDL). I am assuming there is no way to do it (please correct me if I am wrong). But if I am not, how does a non-tile map work, and how are these polygons generated?
This answer will not give you directly the answers you're looking for, but hopefully will get you close enough!
The Problem
I think what blocks you is how to represent the data. You're probably used to a 2D grid that simply represents the type of each tile. As you know, this is fine for a tile-based map, but it doesn't let you model worlds where tiles have different shapes.
Graphs
What I suggest is to see the problem a bit differently. A grid is nothing more than a graph (more info) whose nodes have 4 (or 8, if you allow diagonals) implicit neighbours. So the first thing I would do in your place is move from a strict 2D grid to a looser graph, where each node has a position, a list of neighbours (in most cases corners will have 2 neighbours, borders 3 and "middle" tiles 4), a bounding box, and a rendering component which simply draws the tile on screen at the given position. Once this is done, you should be able to reproduce exactly what your current "2D tile-based" engine shows, by calling the rendering component of each node whose bounding box intersects the camera's frustum (in a 2D world, that most likely means checking whether the position plus/minus the size intersects the rect currently being drawn).
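If it helps, here is a small illustrative sketch of such a node in Python; the field names and the rect layout are my assumptions, not anything your engine prescribes:

from dataclasses import dataclass, field

@dataclass
class MapNode:
    position: tuple                                # (x, y) in world space
    bounds: tuple                                  # (x, y, w, h) bounding box
    neighbors: list = field(default_factory=list)  # adjacent MapNode objects

def visible_nodes(nodes, camera):
    # Yield only nodes whose bounding box intersects the camera rect.
    cx, cy, cw, ch = camera
    for n in nodes:
        x, y, w, h = n.bounds
        if x < cx + cw and x + w > cx and y < cy + ch and y + h > cy:
            yield n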
Search
This more generic approach will also help you do things like pathfinding with generic algorithms that explore nodes until they find a valid path (see A* or Dijkstra). Even if you decide to stick to a good old 2D tile map game, these techniques will still be useful!
Yeah but I want Polygons
I hear you! If you want polygons, basically all you need to do is add to your nodes a list of vertices and whatever data you need to render the polygons (vertex colours, textures and U/V maps, etc.), and update your rendering component to make the appropriate OpenGL calls (this, for example, should help) to draw the nodes. The first step in iteratively upgrading your 2D tile engine to a polygon map engine would be, for each tile in your map, to give the node two triangles, a texture resource (the tile) and U/V mappings (0,0 - 0,1 - 1,0 - 1,1). When this step is done, you have a "generic" polygon-based tile map engine. Most of this data can be generated procedurally by computing coordinates from tile position, tile size, and so on.
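For instance, a minimal sketch of the per-tile data just described in Python (names are assumptions; a real engine would upload this to OpenGL buffers):

def make_tile_quad(x, y, size):
    # Four corners of the tile, its U/V coordinates, and two triangles
    # referencing the corners by index.
    vertices = [(x, y), (x + size, y), (x, y + size), (x + size, y + size)]
    uvs = [(0, 0), (1, 0), (0, 1), (1, 1)]
    triangles = [(0, 1, 2), (2, 1, 3)]
    return vertices, uvs, triangles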
Convex Polygons
If you think you might ever need NPCs to navigate your map, or want to let the player navigate by clicking the map, I suggest you always use convex polygons (the triangle being the simplest form of convex polygon). This lets your code assume that any two positions on the same polygon can be reached from each other in a straight line.
Complex Maps
Based on the link you provided, you want rather complex maps. In that case, the author used Voronoi diagrams to generate the polygons of the map. There are existing solutions for triangulations like that, but you might also want techniques that are easier to work with if you're just switching to 3D, like this one for example. Once you have interesting results, you should consider implementing serialization to save and load your map data from the game. If you want to build an editor, be aware that it may be a lot of work, but it can be worth it if you want people to help you create maps or to add elements to them (like geometry that's not part of the terrain).
I went all over the place with this answer, but hopefully it helps!
Just iterate over all the tiles and do a hit-test from the centre of each tile to the polygons, then assign the polygon's type to the tile. Did you need more than that?
EDIT: Sorry, I realise that probably isn't helpful. Playing with procedural algorithms can be fun and profitable. Start with a loop that iterates over all tiles and randomly chooses whether or not each tile is occupied. Then iterate over them again and set each tile occupied based on whether it or one of its neighbours was occupied.
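A hedged Python sketch of that two-pass idea (the fill ratio and the neighbour rule are arbitrary parameters to experiment with):

import random

def generate_map(width, height, fill=0.45, passes=3):
    # Pass 1: each tile is randomly occupied or not.
    grid = [[random.random() < fill for _ in range(width)] for _ in range(height)]
    # Pass 2 (repeated): a tile becomes occupied if enough of its
    # neighbourhood (itself included) is occupied.
    for _ in range(passes):
        new = [[False] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                occupied = sum(grid[y + dy][x + dx]
                               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                               if 0 <= y + dy < height and 0 <= x + dx < width)
                new[y][x] = occupied >= 5
        grid = new
    return grid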
Also, check out the source code for this: http://dustinfreeman.org/toys/wall7-dustin.html

Generating a 3D prism from any 2D polygon

I am creating a 2D sprite game in Unity, which is a 3D game development environment.
I have constrained all translation of objects to the XY-plane and rotation to the Z-axis.
My problem is that the meshes used to detect collisions between objects must still be 3D. I need to detect collisions between the player object (a capsule collider) and a sprite (whose collision volume is defined by a polygonal prism).
I am currently writing the level editor and need to let the user define the collision area for any given tile. In the image below the user clicks the points P1, P2, P3, P4 in that order.
Obviously the points join up to form a quadrilateral. This is the collision area I want; however, I must then convert it to a 3D mesh. Basically I need to generate an extrusion of the polygon, then assign the vertex winding, triangles etc. The vertex positions are not a problem to figure out, as they are merely a translation of the polygon down the z-axis.
I am having trouble creating an algorithm for assigning the winding order of the vertices, especially since the mesh must consist only of triangles.
Obviously the structure I have illustrated is not important; the polygon may be any 2D shape and will always need to form a prism.
Does anyone know any methods for this?
Thank you all very much for your time.
A simple algorithm that comes to mind is something like this:
// Multiply the face normal by the extrusion amount = move along the normal.
extrudedNormal = faceNormal.multiplyScale(sizeOfExtrusion);
for each (vertex in face) {
    // Copy the position of each vertex to a new object, to be modified next.
    vPrime = vertex.clone();
    // Translate the copy along the normal by the extrusion amount.
    vPrime.addSelf(extrudedNormal);
}
So the idea is basic:
scale the face normal by the amount you want to extrude by
clone the face vertices and move them by the scaled (extruded) normal
For a more complete, feature-rich example, refer to the Procedural Modeling Unity samples. They include a nice mesh extrusion sample too (see ExtrudedMeshTrail.js, which uses MeshExtrusion.cs).
Good luck!
To create the extruded walls:
For each vertex a (with coordinates ax, ay) in your polygon:
- call the next vertex 'b' (with coordinates bx, by)
- create the extruded rectangle corresponding to the edge from 'a' to 'b':
  - the rectangle has vertices (ax,ay,z0), (ax,ay,z1), (bx,by,z0), (bx,by,z1)
  - it can be built from two triangles: (ax,ay,z0), (ax,ay,z1), (bx,by,z0) and (ax,ay,z1), (bx,by,z0), (bx,by,z1)
If you want to create a triangle strip instead, it's even simpler. For each vertex a, just add (ax,ay,z0) and (ax,ay,z1). Whichever vertex you processed first will also need to be processed again after looping over all other vertices.
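A hedged Python sketch of the wall construction described above, with plain tuples instead of Unity types (z0 and z1 are the two depths):

def extrude_walls(polygon, z0, z1):
    # polygon: list of (x, y) tuples in order.
    # Returns triangles as tuples of three (x, y, z) vertices.
    triangles = []
    n = len(polygon)
    for i in range(n):
        ax, ay = polygon[i]
        bx, by = polygon[(i + 1) % n]       # next vertex, wrapping around
        # Two triangles per extruded edge rectangle; depending on the
        # winding of the input polygon you may need to flip their order
        # so the normals face outwards.
        triangles.append(((ax, ay, z0), (ax, ay, z1), (bx, by, z0)))
        triangles.append(((ax, ay, z1), (bx, by, z1), (bx, by, z0)))
    return triangles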
To create the end-caps:
This step is probably unnecessary for collision purposes. But, one simple technique is here: http://www.siggraph.org/education/materials/HyperGraph/scanline/outprims/polygon1.htm
Each resulting triangle should be added at depth z0 and z1.

Matrix multiplication - view/projection, world/projection, etc

In HLSL there's a lot of matrix multiplication, and while I understand how and where to use the matrices, I'm not sure how they are derived or what their actual goals are.
So I was wondering if there was a resource online that explains this, I'm particularly curious about what is the purpose behind multiplying a world matrix by a view matrix and a world+view matrix by a projection matrix.
You can get some info, from a mathematical viewpoint, in this Wikipedia article or on MSDN.
Essentially, when you render a 3D model to the screen, you start with a simple collection of vertices scattered in 3D space. These vertices all have positions expressed in "object space". That is, they usually have coordinates which have no meaning in the scene being rendered, but only express the relations between the vertices of the same model.
For instance, the positions of the vertices of a model could only range from -1 to 1 (or similar, it depends on how the model has been created).
In order to render the model in the correct position, you have to scale, rotate and translate it to its "real" position in your scene. This position is expressed in "world space" coordinates, which express the actual relationships between the vertices of different models in your scene. To do so, you simply multiply each vertex's position by its World matrix. This matrix must be built from the translation/rotation/scale parameters you need to apply for the object to appear in the correct position in the scene.
At this point (after multiplying all vertices of all your models with a world matrix) your vertices are expressed in world coordinates, but you still cannot render them correctly because their position is not relative to your "view" (i.e. your camera). So, this time you multiply everything using a View matrix which reflects the position and orientation of the viewpoint from which you are rendering the scene.
All vertices are now in the correct position, but in order to simulate perspective you still have to multiply everything with a Projection matrix. This last multiplication determines how the position of the vertices changes based on distance from the camera.
And now finally all vertices, starting from their position in "object space", have been moved to the final position on the screen, where they will be rendered, rasterized and then presented.
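A hedged sketch of the chain just described, using NumPy and the column-vector convention (HLSL code often uses row vectors and mul(v, world * view * proj) instead); the projection construction is a standard OpenGL-style perspective matrix, simplified:

import numpy as np

def perspective(fov_y, aspect, near, far):
    f = 1.0 / np.tan(fov_y / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

world = np.eye(4)                   # places the model in the scene
view = np.eye(4)                    # inverse of the camera's transform
proj = perspective(np.radians(60.0), 16.0 / 9.0, 0.1, 100.0)

v_object = np.array([0.5, 0.5, -2.0, 1.0])   # vertex in object space
v_clip = proj @ view @ world @ v_object      # object -> world -> view -> clip
v_ndc = v_clip[:3] / v_clip[3]               # perspective divide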
Online resources: Direct3D Matrices, Projection Matrices, Direct3D Transformation, The Importance of Matrices in the DirectX API.
