Calculate dimensions of mesh using vertex and face data - math

I have 3d models made in Blender. I need to load them for further rendering using OpenGL ES.
I also want to implement picking, so I need to know the object dimensions. So far I have not found an option like "export dimensions" when exporting to Wavefront .obj in Blender, so I decided to calculate them manually.
So, I have vertex and face data. How do I calculate the object dimensions? A rough result is fine. Or maybe I'm on the wrong track?

Isn't it just the minimum and maximum coordinate value over all vertices for each of the X, Y and Z axes?
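For example, a minimal sketch in Python (assuming the .obj vertices have already been parsed into a list of (x, y, z) tuples; face data isn't needed for this):

import itertools  # not required, just keeping the sketch self-contained

# Axis-aligned bounding box and dimensions from parsed vertex data
def mesh_dimensions(vertices):
    xs, ys, zs = zip(*vertices)
    min_corner = (min(xs), min(ys), min(zs))
    max_corner = (max(xs), max(ys), max(zs))
    # The dimensions are simply max - min on each axis
    size = tuple(mx - mn for mn, mx in zip(min_corner, max_corner))
    return min_corner, max_corner, size

The resulting box is also exactly what you would use for a first, rough picking test.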

Related

Generate a network graph inside a cylinder

I need to solve a problem and I'm really stuck with it, so I want to summon the power of the community to see if anyone has an idea on how to handle it.
I need to create a porous material from a given surface. So, I have a point cloud representing the surface of a cylinder, like the one in the figure, and I need to generate a graph inside it from the points on the surface, to be filled with volume. It is mandatory that all the points of the surface are used (more can be added if necessary), the length of the edges must be an input parameter of the function, and the angle between two connected nodes must always be greater than 45° with respect to the horizontal plane.
My initial idea was to write a while loop that in each iteration creates a random point cloud inside the cylinder between the current z (the maximum z value of the previous iteration) and the current z plus the given edge length. Once this point cloud is created, it joins the surface nodes of the last iteration to the points of this new cloud that satisfy the edge-length and angle conditions, and it continues until the current z is greater than the maximum z of the cylinder surface.
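A simplified sketch of that loop (Python, assuming numpy and networkx; the function name, the 10% length tolerance and the per-layer point count are arbitrary choices for illustration):

import numpy as np
import networkx as nx

def grow_inside_cylinder(surface_pts, edge_len, radius, pts_per_layer=100, seed=0):
    # surface_pts: (N, 3) array with the cylinder-surface point cloud
    rng = np.random.default_rng(seed)
    graph = nx.Graph()
    graph.add_nodes_from(map(tuple, surface_pts))
    z = surface_pts[:, 2].min()
    z_top = surface_pts[:, 2].max()
    # Start from the surface points in the bottom slab
    frontier = surface_pts[surface_pts[:, 2] <= z + edge_len]
    while z < z_top:
        # Random candidate points inside the cylinder, in the slab [z, z + edge_len]
        r = radius * np.sqrt(rng.random(pts_per_layer))
        theta = 2 * np.pi * rng.random(pts_per_layer)
        zs = z + edge_len * rng.random(pts_per_layer)
        layer = np.column_stack([r * np.cos(theta), r * np.sin(theta), zs])
        for p in frontier:
            for q in layer:
                d = q - p
                length = np.linalg.norm(d)
                # Angle of the edge with respect to the horizontal (XY) plane
                angle = np.degrees(np.arcsin(abs(d[2]) / length))
                if abs(length - edge_len) < 0.1 * edge_len and angle > 45:
                    graph.add_edge(tuple(p), tuple(q))
        frontier = layer
        z = zs.max()
    return graph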
The problem with this idea is that it is not consistent and the results are disastrous. So I would like to ask if anyone has a better idea, or knows of any Python libraries that could help me. I am currently using networkx and numpy-stl, but those are not meant to do what I want. ChatGPT is unable to understand me either :(.
Thank you so much for your time!!

how to determine OpenGL winding based on normals?

I am trying to understand how to manually generate objects.
I have a mesh; I delete part of it and create new geometry in its place. I have the normals of the deleted vertices, and on their basis I have to build new faces (of a different size and quantity) facing the same direction.
But I don't understand how to choose the correct winding. It sounds easy when tutorials talk about CCW winding in screen space, but what if I have a bunch of almost chaotic points in model space? How do I determine CCW then, and which axis is it measured against? I suspect that the nearest old normals might help, but what is the cheapest method to determine the correct order?
It turned out to be easier than I thought. Take the cross product of the two edge vectors formed by the triangle's vertices, then take the dot product of the result with the reference normal; if it is negative, swap the order of the vertices when generating the face.
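A small sketch of that check (Python/numpy; v0, v1, v2 are the triangle's vertices and normal is the nearest old normal used as the reference):

import numpy as np

def wind_triangle(v0, v1, v2, normal):
    # Face normal implied by the current vertex order
    face_normal = np.cross(np.subtract(v1, v0), np.subtract(v2, v0))
    # If it points away from the reference normal, swap two vertices to flip the winding
    if np.dot(face_normal, normal) < 0:
        return v0, v2, v1
    return v0, v1, v2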

How to create 2D slices of 3D object model in Qt?

I'm currently rendering a 3D model (Wavefront .obj format) in my Qt program. Right now, I'm rendering the model using Scene3D in QML, and I'm able to get it to display in the viewing area. What I would like to do is have a user click on the model and generate a 2D cross section of the slice that I would like to plot on a different window. I'm quite new to 3D rendering, and a lot of Qt documentation isn't very descriptive. I've been reading Qt documentation, experimenting, and searching online with no luck. How can I create 2D slices of a 3D object Model in Qt 3D, preferably in QML? What Qt libraries or classes can I use to achieve this?
Unfortunately, the fact that models are stored as a set of surfaces makes this hard. Qt probably doesn't have a built-in method for this.
Consider, for example, that a model made of faces might be missing a face. What now? Can you interpolate across that gap consistently from different angles? And what about the fact that a cross-section probably won't contain any of the original vertices?
But, of course, it can be solved. First, just don't allow unclosed surfaces (meshes with holes). Second, to find the vertices of your cross-section, intersect every edge in your model with the plane you're using; wherever there's an intersection, there's a point. Third, to find the edges, look at the list of those points: any two that come from edges of the same polygon in the mesh should be connected by an edge in the cross-section. To find which direction the edge should go, project the normal of that polygon onto the plane you're using. For filling, I don't really know what to do. I guess that's whatever you want it to be.
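The per-edge test from the second step might look roughly like this (Python/numpy sketch; the plane is given by a point on it and its normal):

import numpy as np

def edge_plane_intersection(p0, p1, plane_point, plane_normal, eps=1e-9):
    # Returns the intersection point of segment p0-p1 with the plane, or None
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    direction = p1 - p0
    denom = np.dot(plane_normal, direction)
    if abs(denom) < eps:          # edge (nearly) parallel to the plane
        return None
    t = np.dot(plane_normal, np.asarray(plane_point, float) - p0) / denom
    if 0.0 <= t <= 1.0:           # intersection lies within the segment
        return p0 + t * direction
    return None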

Adapt geometry on printed points

I draw a vector geometry with some calibration points around it.
I print this geometry and then I physically scan the printed calibration points (I can't scan the geometry, I can only scan the calibration points).
When I acquire these points, they are no longer in their original positions because of print error or bad printer calibration.
The question is:
Is there any algorithm that helps me adapt the original geometry based on the newly scanned points?
In practice I need to warp the geometry in order to obtain the real geometry as it was printed, with the same print error that I see on the calibration points.
The distortion comes from the physical distortion of the material (not paper but cloth) during the printing process. I can't know in advance how much the material will distort.
Yes, there are algorithms to help you with that. In general you need to learn/find the transformation between the two images that you have.
Typical geometric transformations are affine transformations (shift, scale, rotation, shear, reflection), which need at least three control points, or piecewise local linear / local weighted mean transformations, which need at least 4-6 control points. The more control points you have, the better in general.
Given a set of control points in one image and the corresponding set of control points in the other image, there are algorithms for finding the optimal transformation between them once you specify a class (affine or piecewise local linear). See for example fitgeotrans in Matlab. I don't know exactly how it solves the problem, but I guess by some kind of optimization. It should be easy to find implementations for other programming languages (Python, C, Java).
What remains is finding the correspondence between the control points in the two images. For a few images you may be able to do that by hand, but in the general case you might want to automate this as well. General image registration algorithms like imregister should do well for your images. They give you a good initial estimate for the transformation (which may already be sufficient), so that identifying the corresponding point pairs becomes trivial (always take the nearest) and allows refinement.
So I advise you to first just try to register the images (grayscale data) with an identity transformation as the starting value. Then identify corresponding point pairs and refine the transformation using either an affine or a piecewise/local transformation. Then apply the transformation to the geometry to get the printed geometry. Depending on your choice of programming language you will find many implementations that do the job.
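For the affine case, a minimal sketch of the fit in Python/numpy (assuming src and dst are already matched (N, 2) arrays of control points; the function names here are placeholders):

import numpy as np

def fit_affine(src, dst):
    # Least-squares 2D affine transform mapping src control points onto dst
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    # Homogeneous source coordinates: [x, y, 1]
    A = np.hstack([src, np.ones((len(src), 1))])
    # Solve A @ M ~= dst for the 3x2 affine matrix M (needs >= 3 non-collinear points)
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def apply_affine(M, points):
    points = np.asarray(points, float)
    return np.hstack([points, np.ones((len(points), 1))]) @ M

apply_affine(M, geometry_points) would then give you the warped (printed) geometry; a piecewise/local transformation would replace fit_affine but be applied the same way.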

Converting 2 point coords to vector coords for angleBetween()?

I'm working on a PyMEL script that allows the user to duplicate a selected object multiple times, using a CV curve and its point coordinates to transform and rotate each copy to a certain point in space.
In order to achieve this, I'm using the two points adjacent to each CV (control vertex) to determine the rotation for the object.
I have managed to retrieve the coordinates of the curve's CVs:
import pymel.core as pm

# Add all points of the curve to the cvDict dictionary
cvDict = {}
for i in range(selSize):
    pointName = 'point%s' % i
    coords = pm.pointPosition('%s.cv[%s]' % (obj, i), w=1)
    # Set up the key for the current point
    cvDict[pointName] = {}
    # Add the coords to the x, y, z subkeys of the dict
    cvDict[pointName]['x'] = coords[0]
    cvDict[pointName]['y'] = coords[1]
    cvDict[pointName]['z'] = coords[2]
Now the problem I'm having is figuring out how to get the angle for each CV.
I stumbled upon the angleBetween() function:
http://download.autodesk.com/us/maya/2010help/CommandsPython/angleBetween.html
In theory, this should be my solution, since I could find the "middle vector" (not sure if that's the mathematical term) at each of the curve's CVs (using the adjacent CVs' coordinates to find a fourth point) and use the above-mentioned function to determine how much I'd have to rotate the object relative to a reference vector, for example on the z axis.
At least in theory - the issue is that the function only takes one set of coords per vector, and I have absolutely no idea how to convert my point coords to that format (since I always have at least two sets of coordinates, one for each point).
Thanks.
If you want to go the long way and not grab the world transforms of the curve, definitely make use of PyMEL's datatypes module. It has everything that Python's native math module does and a few other things that are Maya specific. Also, the math you would require to do this based on CVs can be found here.
Hope that puts you in the right direction.
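For the conversion itself: the vector between two points is just their component-wise difference, so something along these lines should work (a sketch using the cvDict built above; which CVs you treat as the adjacent pair is up to you):

import pymel.core as pm

p0 = pm.datatypes.Vector(cvDict['point0']['x'], cvDict['point0']['y'], cvDict['point0']['z'])
p2 = pm.datatypes.Vector(cvDict['point2']['x'], cvDict['point2']['y'], cvDict['point2']['z'])

# Direction between the two CVs adjacent to point1
direction = (p2 - p0).normal()

# Euler rotation that takes the reference axis (here +Z) onto that direction
rotation = pm.angleBetween(euler=True, v1=(0, 0, 1), v2=tuple(direction))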
If you're going to skip the math, maybe you should just create a locator, path-animate it along the curve, and then sample the result. That would give you completely continuous orientations along the curve. The midpoint-constraint method you've outlined above is limited to one valid sample per curve segment -- if you wanted 1/4 or 3/4 of the way between two CVs, your orientation would be off. Plus you don't have to reinvent all of the many different options for deciding on the secondary axis of rotation, reading curves with funky parameterization, and so forth.
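A rough sketch of that locator approach (PyMEL; 'curve1' is a placeholder curve name and 0.25 an arbitrary sample position -- the axes and world-up settings would need adjusting to your setup):

import pymel.core as pm

loc = pm.spaceLocator()
# Attach the locator to the curve; fractionMode makes uValue a 0..1 arc-length fraction
mpath = pm.pathAnimation(loc, c='curve1', fractionMode=True, follow=True,
                         followAxis='x', upAxis='y', worldUpType='scene')
# pathAnimation keys uValue over the timeline, so remove the keys to drive it directly
pm.cutKey(mpath, attribute='uValue')
pm.setAttr('%s.uValue' % mpath, 0.25)
# Sample the resulting world-space orientation at that point on the curve
orientation = pm.xform(loc, q=True, ws=True, ro=True)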
