I am currently working on a JavaFX 3D application and came across the getNormals() method of the TriangleMesh class.
The TriangleMesh class is used to create user-defined JavaFX 3D objects: getPoints() is used to add points, getFaces() is used to add faces, and getTexCoords() is used to manage the texture of the 3D object. But I am not sure what the getNormals() method in the TriangleMesh class is for.
In the TriangleMesh class, we can set the vertex format to VertexFormat.POINT_TEXCOORD or VertexFormat.POINT_NORMAL_TEXCOORD. But if we set the vertex format to VertexFormat.POINT_NORMAL_TEXCOORD, then we need to add the indices of the normals into the faces, like below: [
p0, n0, t0, p1, n1, t1, p3, n3, t3, // First triangle of a textured rectangle
p1, n1, t1, p2, n2, t2, p3, n3, t3 // Second triangle of a textured rectangle
]
as described in https://docs.oracle.com/javase/8/javafx/api/javafx/scene/shape/TriangleMesh.html
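For reference, here is a minimal sketch of how such a mesh might be assembled (the coordinates, the single shared normal, and the MeshView wrapper are illustrative assumptions, not something given above):

import javafx.scene.shape.MeshView;
import javafx.scene.shape.TriangleMesh;
import javafx.scene.shape.VertexFormat;

// A single textured rectangle (two triangles) in the z=0 plane, built with
// VertexFormat.POINT_NORMAL_TEXCOORD. Each face entry is a
// (point index, normal index, texCoord index) triple per vertex.
static MeshView buildTexturedRectangle() {
    TriangleMesh mesh = new TriangleMesh(VertexFormat.POINT_NORMAL_TEXCOORD);
    mesh.getPoints().addAll(
            0,   0,   0,    // p0
            100, 0,   0,    // p1
            100, 100, 0,    // p2
            0,   100, 0);   // p3
    mesh.getNormals().addAll(
            0, 0, -1);      // n0: one normal shared by all vertices, facing the default camera
    mesh.getTexCoords().addAll(
            0, 0,           // t0
            1, 0,           // t1
            1, 1,           // t2
            0, 1);          // t3
    mesh.getFaces().addAll(
            0, 0, 0,  1, 0, 1,  3, 0, 3,    // first triangle  (p0, p1, p3)
            1, 0, 1,  2, 0, 2,  3, 0, 3);   // second triangle (p1, p2, p3)
    return new MeshView(mesh);
}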
I didn't find any difference in the 3D shape whether I used POINT_TEXCOORD or POINT_NORMAL_TEXCOORD as the vertex format.
So what is the use of the getNormals() method of TriangleMesh in JavaFX?
Thanks in advance.
Use of normals in computer graphics:
The normal is often used in computer graphics to determine a surface's orientation toward a light source for flat shading, or the orientation of each of the corners (vertices) to mimic a curved surface with Phong shading.
The normals affect the shading applied to a face.
The standard shading mechanism for JavaFX 8 is Phong shading with a Phong reflection model. By default, Phong shading assumes a smoothly varying (linearly interpolated) surface normal vector, which allows a sphere to be rendered smoothly even when only limited vertex geometry is supplied. By default, the normal vectors are calculated as perpendicular to the faces.
What JavaFX allows is for you to supply your own normals rather than rely on the default calculated ones. The Phong shading algorithm implementation in JavaFX will then interpolate between the normals that you supply rather than the normals it calculates. Changing the direction of surface normals will change the shading model by altering how the model represents light bouncing off of it, essentially the light will bounce in a different direction with a modified normal.
This example from Wikipedia shows a Phong-shaded sphere on the right. Both spheres actually have the same geometry. The distribution of the normals which contribute to the Phong shading equation is the default, smoothly interpolated one based upon a standard normal calculation for each face (so no user normals supplied). The equation used for calculating the shading is described in the PhongMaterial javadoc, and you can see there the normal's contribution to the shading algorithm, in both the calculation of the diffuse color and the specular highlights.
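To make the normal's role concrete, here is a tiny numeric sketch (plain Java, not JavaFX API; all vectors are illustrative) of the diffuse term, which in the Phong model is proportional to max(N · L, 0):

// Direction from the surface toward the light, as a unit vector.
double[] L = {0, 0, -1};
// Default face-perpendicular normal versus a user-supplied normal tilted ~30 degrees.
double[] nFlat   = {0, 0, -1};
double[] nTilted = {0.5, 0, -0.866};
// Diffuse factors: 1.0 for the flat normal, ~0.87 for the tilted one,
// so the same point is shaded noticeably darker just by changing its normal.
double dFlat   = Math.max(0, nFlat[0]*L[0]   + nFlat[1]*L[1]   + nFlat[2]*L[2]);
double dTilted = Math.max(0, nTilted[0]*L[0] + nTilted[1]*L[1] + nTilted[2]*L[2]);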
Standard 3D model formats, such as obj files, can optionally provide normals:
vn i j k
Polygonal and free-form geometry statement.
Specifies a normal vector with components i, j, and k.
Vertex normals affect the smooth-shading and rendering of geometry.
For polygons, vertex normals are used in place of the actual facet
normals. For surfaces, vertex normals are interpolated over the
entire surface and replace the actual analytic surface normal.
When vertex normals are present, they supersede smoothing groups.
i j k are the i, j, and k coordinates for the vertex normal. They
are floating point numbers
So, why would you want it?
The easiest way to explain might be to look at something known as smoothing groups (please click on the link, I won't embed here due to copyright). As can be seen by the linked image, when the smoothing group is applied to a collection of faces it is possible to get a sharp delineation (e.g. a crease or a corner) between the grouped faces. Specifying normals allows you to accomplish a similar thing to a smoothing group, just with more control because you can specify individual normals for each vertex rather than an overall group of related faces. Note JavaFX allows you to specify smoothing groups via getFaceSmoothingGroups() for instances where you don't want to go to the trouble of defining full normal geometry via getNormals().
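As a minimal sketch of the smoothing-group alternative (the four-face mesh and the group values are assumptions for illustration):

// One int per face; faces sharing a group bit are shaded smoothly across their
// shared edges, while faces in different groups keep a hard crease between them.
static void applySmoothingGroups(javafx.scene.shape.TriangleMesh mesh) {
    // assumes the mesh already has exactly four faces defined via getPoints()/getFaces()
    mesh.getFaceSmoothingGroups().setAll(
            1, 1,   // first two faces smoothed together
            2, 2);  // last two faces form a separate group, creased against the first pair
}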
Another, similar idea is a normal map (or bump map). Such a map stores normal information in an image rather than as vector data (as the getNormals() method does), so it is a slightly different thing, but you can see a similar interaction with the reflection model algorithm.
Background Reading - How to understand Phong Materials (and other things)
Related
I'm making a procedurally generated Minecraft-like voxel terrain in Unity. Mesh generation and albedo channel texturing are flawless; however, I need to apply different normal map textures to different cube faces depending on whether or not they neighbor another cube. A Material accepts only a single normal map file and doesn't provide a sprite-sheet-editor kind of functionality for normal maps. So I have no idea how to use selected slices of the normal map file as if they were albedo textures. I couldn't find any related resources about the problem. Any help will be appreciated. Thanks...
First of all, I'm not an expert in this area, though I am going to try to help you based on my limited and incomplete understanding of parts of Unity.
If there are a finite number of "normal face maps" that can exist, I suggest doing something like you indicated ("sprite sheet") and creating a single texture (also sometimes called a texture atlas) that contains all of these normal maps.
The next step, which I'm not sure the Standard material shader will be able to handle for your situation, is to generate UV/texture coordinates for the normal map and pass those along with your vertex xyz positions to the shader. The UV coordinates need to be specified for each vertex of each face; they are a 2-D (U, V) offset into your atlas of normal maps, given as floating point values in the range [0.0, 1.0] that map to the full X and Y extent of the actual normal texture. For instance, if you had an atlas with a grid of textures in 4 rows and 4 columns, a face that should use the top-left texture would have UV coords of [(0,0), (0.25,0), (0.25,0.25), (0, 0.25)].
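As a rough sketch of that arithmetic (plain Java rather than shader code; the grid layout and method name are assumptions):

// Returns the four corner UVs (as u, v pairs) of the tile at (col, row) in a
// gridSize x gridSize atlas, where the (0, 0) tile has its corner at UV (0, 0).
static float[] tileUVs(int col, int row, int gridSize) {
    float step = 1.0f / gridSize;           // width/height of one tile in UV space
    float u0 = col * step, v0 = row * step;
    float u1 = u0 + step,  v1 = v0 + step;
    return new float[] {
            u0, v0,     // corner 1
            u1, v0,     // corner 2
            u1, v1,     // corner 3
            u0, v1 };   // corner 4
}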
The difficulty here may depend on whether you are already using UV coordinates for other texture mapping (e.g. in the albedo or wherever else). If that is the case, I think the Unity Standard Shader permits two sets of texture coordinates, and if you need more, you might have to roll your own shader or find a shader asset elsewhere that allows for more UV sets. This is where my understanding gets shaky: I'm not exactly sure how the shader uses these two UV coordinate sets, or whether there is some existing convention for how they are used. The standard shader supports secondary/detail maps, which may mean you have to share the UV0 set with all non-detail maps (albedo, normal, height, occlusion, etc.).
I understand how to use Delaunay triangulation on 2D points.
But how do I use Delaunay triangulation on 3D points?
I mean, I want to generate a surface triangle mesh, not a tetrahedral mesh, so how can I use Delaunay triangulation to generate a 3D surface mesh?
Please give me some hint.
To triangulate a 3D point cloud you can use the Ball Pivoting algorithm: https://vgc.poly.edu/~csilva/papers/tvcg99.pdf
There are two meanings of a 3D triangulation. One is when the whole space is filled, likely with tetrahedra (hexahedra and others may also be used). The other is called 2.5D, typically for terrains, where the z is just a property, like the color or whatever, and doesn't influence the resulting triangulation.
If you use Shewchuk's Triangle you can get the 2.5D result.
If you are curious enough, you'll be able to select those tetrahedra that have at least one face not shared with another tetrahedron. These are the same tetrahedra that are "joined" with the infinite/enclosing points. Extract those faces and you have your 3D surface triangulation.
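A minimal sketch of that extraction (the data layout, a list of 4-index tetrahedra, is an assumption): keep every triangular face that occurs in exactly one tetrahedron; those faces form the boundary surface.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

static List<int[]> boundaryFaces(List<int[]> tets) {
    // Map from an orientation-independent face key to one representative face,
    // plus a count of how many tetrahedra contain that face.
    Map<String, int[]> representative = new HashMap<>();
    Map<String, Integer> occurrences = new HashMap<>();
    int[][] faceCorners = {{0, 1, 2}, {0, 1, 3}, {0, 2, 3}, {1, 2, 3}}; // the 4 faces of a tet
    for (int[] t : tets) {
        for (int[] c : faceCorners) {
            int[] face = {t[c[0]], t[c[1]], t[c[2]]};
            int[] key = face.clone();
            Arrays.sort(key);                        // same key regardless of winding
            String k = Arrays.toString(key);
            occurrences.merge(k, 1, Integer::sum);
            representative.putIfAbsent(k, face);
        }
    }
    List<int[]> boundary = new ArrayList<>();
    for (Map.Entry<String, Integer> e : occurrences.entrySet()) {
        if (e.getValue() == 1) {                     // faces seen only once are not shared
            boundary.add(representative.get(e.getKey()));
        }
    }
    return boundary;
}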
If you want "direct" surface reconstruction then you undoubtly need to know in advance which vertices among the total given are in the surface. If you don't know them, perhaps the "maxima method" allows to find them out.
Once your point cloud consists only of surface vertices, the triangulation method can be any one you like, from (adapted) incremental methods (Chew's, Ruppert's, etc.) to the "ball pivoting" and "marching cubes" methods.
The Delaunay tetrahedrization doesn't fit, for two reasons:
- it fills a volume with tetrahedra, instead of defining a surface,
- it fills the convex hull of the points, which is probably not what you expect.
To address the second problem, you need to accept concavities, and this implies that you need to specify a reference scale that tells what level of detail you want. This leads to the concept of Alpha Shapes, which are obtained as a subset of the faces of the Delaunay tetrahedrization.
Lookup "Alpha Shape" in an image search engine.
I have a triangle mesh along with a function which defines the material properties at each point in 3d space. Using a given resolution in object space, I generate a triangular texture for each triangle; specifically, these each end up being right triangles with size corresponding to the actual triangle in the mesh. I have two problems, however: 1) the output texture atlas is large and obviously contains large amounts of dead space, and 2) each vertex for each triangle needs to have its own texcoord, since each triangle's texture ends up in a different part of the atlas.
What algorithms exist to generate a texture atlas from a mesh with known textures for each triangle? I'm looking to share as many texcoords as possible across the mesh, which means that adjacent triangles should have correspondingly adjacent textures in the atlas. Not everything can be shared -- since 3d objects can't always flatten into a 2d surface with constant texture resolution -- but I'm hoping to maximize this.
This is related to a problem described in another question (images there):
Opengl shader problems - weird light reflection artifacts
I have a .obj importer that creates a data structure and calculates the tangents and bitangents. Here is the data for the first triangle in my object:
My understanding of tangent space is that the normal points outward from the vertex, the tangent is perpendicular (orthogonal?) to the normal vector and points in the direction of positive S in the texture, and the bitangent is perpendicular to both. I'm not sure what you call it but I thought that these 3 vectors formed what would look like a rotated or transformed x,y,z axis. They wouldn't be 3 randomly oriented vectors, right?
Also my understanding: The normals in a normal map provide a new normal vector. But in tangent space texture maps there is no built in orientation between the rgb encoded normal and the per vertex normal. So you use a TBN matrix to bridge the gap and get them in the same space (or get the lighting in the right space).
But then I saw the object data... My structure has 270 vertices and all of them have a 0 for the Tangent Y. Is that correct for tangent data? Are these tangents in like a vertex normal space or something? Or do they just look completely wrong? Or am I confused about how this works and my data is right?
To get closer to solving my problem in the other question, I need to make sure my data is right and that my understanding of how tangent-space lighting math works is correct.
The tangent and bitangent vectors point in the direction of the S and T components of the texture coordinate (U and V for people not used to OpenGL terms). So the tangent vector points along S and the bitangent points along T.
So yes, these do not have to be orthogonal to either the normal or each other. They follow the direction of the texture mapping. Indeed, that's their purpose: to allow you to transform normals from model space into the texture's space. They define a mapping from model space into the space of the texture.
The tangent and bitangent will only be orthogonal to each other if the S and T components at that vertex are orthogonal; that is, if the texture mapping has no shearing. And while most texture mapping algorithms will try to minimize shearing, they can't eliminate it. So if you want an accurate matrix, you need a non-orthogonal tangent and bitangent.
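For reference, a sketch of the standard per-triangle derivation (the input layout is assumed: three positions and three texture coordinates as float arrays); the tangent follows increasing S and the bitangent follows increasing T, and neither is forced to be orthogonal to the other or to the normal:

static float[][] tangentBitangent(float[] p0, float[] p1, float[] p2,
                                  float[] uv0, float[] uv1, float[] uv2) {
    // Position deltas along the two triangle edges.
    float[] e1 = {p1[0]-p0[0], p1[1]-p0[1], p1[2]-p0[2]};
    float[] e2 = {p2[0]-p0[0], p2[1]-p0[1], p2[2]-p0[2]};
    // Texture coordinate deltas along the same edges.
    float du1 = uv1[0]-uv0[0], dv1 = uv1[1]-uv0[1];
    float du2 = uv2[0]-uv0[0], dv2 = uv2[1]-uv0[1];
    float r = 1.0f / (du1 * dv2 - du2 * dv1);   // assumes a non-degenerate UV mapping
    float[] tangent = new float[3], bitangent = new float[3];
    for (int i = 0; i < 3; i++) {
        tangent[i]   = r * (dv2 * e1[i] - dv1 * e2[i]);
        bitangent[i] = r * (du1 * e2[i] - du2 * e1[i]);
    }
    return new float[][] {tangent, bitangent};
}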
I am creating a 2D sprite game in Unity, which is a 3D game development environment.
I have constrained all translation of objects to the XY-plane and rotation to the Z-axis.
My problem is that the meshes used to detect collisions between objects must still be in 3D. I need to detect collisions between the player object (a capsule collider) and a sprite (which has its collision volume defined by a polygonal prism).
I am currently writing the level editor, and I need to let the user define the collision area for any given tile. In the image below the user clicks the points P1, P2, P3, P4 in that order.
Obviously the points join up to form a quadrilateral. This is the collision area I want, however I must then convert it to a 3D mesh. Basically I need to generate an extrusion of the polygon, then assign the vertex winding and triangles etc. The vertex positions are not a problem to figure out, as they are merely a translation of the polygon down the z-axis.
I am having trouble creating an algorithm for assigning the winding order of the vertices, especially since the mesh must consist only of triangles.
Obviously the structure I have illustrated is not important; the polygon may be any 2D shape and will always need to form a prism.
Does anyone know any methods for this?
Thank you all very much for your time.
A simple algorithm that comes to mind is something like this:
extrudedNormal = faceNormal.multiplyScale(sizeOfExtrusion); //multiply the face normal by the extrusion amt. = move along the normal
for each(vertex in face){
    vPrime = vertex.clone();           //copy the position of each vertex to a new object
    vPrime.addSelf(extrudedNormal);    //translate the copy along the normal by the extrusion amt.
    extrudedFace.add(vPrime);          //collect the extruded vertices to build the opposite face
}
So the idea is basic:
- clone the face normal and move it in the same direction by the amount you want to extrude by
- clone the face vertices and move them using the moved (extruded) normal position
For a more complete, feature-rich example, refer to the Procedural Modeling Unity samples. They include a nice mesh extrusion sample too (see ExtrudedMeshTrail.js, which uses MeshExtrusion.cs).
Good luck!
To create the extruded walls:
For each vertex a (with coordinates ax, ay) in your polygon:
- call the next vertex 'b' (with coordinates bx, by)
- create the extruded rectangle corresponding to the line from 'a' to 'b':
- The rectangle has vertices (ax,ay,z0), (ax,ay,z1), (bx,by,z0), (bx,by,z1)
- This rectangle can be created from two triangles:
- (ax,ay,z0), (ax,ay,z1), (bx,by,z0) and (ax,ay,z1), (bx,by,z0), (bx,by,z1)
If you want to create a triangle strip instead, it's even simpler. For each vertex a, just add (ax,ay,z0) and (ax,ay,z1). Whichever vertex you processed first will also need to be processed again after looping over all other vertices.
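Here is a minimal sketch of the wall step above (plain Java with illustrative names, not Unity API): it walks the 2D outline and emits the two triangles per edge described above, appending each vertex as (x, y, z) to a flat list.

import java.util.ArrayList;
import java.util.List;

static List<Float> extrudeWalls(float[][] polygon, float z0, float z1) {
    List<Float> triangles = new ArrayList<>();
    int n = polygon.length;
    for (int i = 0; i < n; i++) {
        float[] a = polygon[i];
        float[] b = polygon[(i + 1) % n];   // next vertex, wrapping around to close the loop
        // triangle 1: (a, z0), (a, z1), (b, z0)
        addVertex(triangles, a, z0); addVertex(triangles, a, z1); addVertex(triangles, b, z0);
        // triangle 2: (a, z1), (b, z0), (b, z1)
        addVertex(triangles, a, z1); addVertex(triangles, b, z0); addVertex(triangles, b, z1);
    }
    return triangles;
}

static void addVertex(List<Float> out, float[] xy, float z) {
    out.add(xy[0]); out.add(xy[1]); out.add(z);
}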
To create the end-caps:
This step is probably unnecessary for collision purposes. But, one simple technique is here: http://www.siggraph.org/education/materials/HyperGraph/scanline/outprims/polygon1.htm
Each resulting triangle should be added at depth z0 and z1.