How to calculate the half extents of a cube?

I wish to know what formula I should use to get the half extents of a cube or rectangular box.
The library I use to create graphical objects requires the cube's half extents (a term I don't really understand).

I've finally got it. A cube's half extents is a vector representing half the size of the cube along each of its local axes. Example: a cube with size (1,1,1) has half extents of (0.5,0.5,0.5).
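As a minimal sketch (assuming numpy-style vectors; the function name is illustrative):

    import numpy as np

    def half_extents(size):
        """Half extents are simply half the full size along each local axis."""
        return np.asarray(size, dtype=float) / 2.0

    print(half_extents((1, 1, 1)))   # [0.5 0.5 0.5]
    print(half_extents((2, 4, 6)))   # [1. 2. 3.]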

Related

How to calculate a random point inside a cube

I'm trying to figure out the math to find a random point inside a cube.
I have something working, but it doesn't take the rotation of the cube into account.
Here are some images of my results.
Here you can see the cube is rotated to some degree, but when I generate points they keep the shape of the unrotated cube (I think the term is axis-aligned, but I'm not sure).
I'm using a Vector to represent the extent of the cube, but for the life of me I can't figure out how to get the points to follow it when it's rotated.
Can someone point me in the right direction as to how I would do this?
EDIT1:
Now it's misaligned, and it gets even weirder when I rotate it sideways.
Can someone walk me through it from the beginning? I think my baseline math is all wrong to begin with.
Generate the points in the unrotated position, then apply the rotation (and also check the origin of the coordinates).
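A minimal sketch of that approach, assuming numpy and a 3x3 rotation matrix for the cube (the names are illustrative, not from any particular library):

    import numpy as np

    def random_point_in_cube(center, half_extents, rotation):
        """Sample in the axis-aligned cube first, then rotate into place.

        center       -- (3,) world-space center of the cube
        half_extents -- (3,) half size along each local axis
        rotation     -- (3,3) rotation matrix of the cube
        """
        # Uniform point in the local, axis-aligned cube around the origin.
        local = np.random.uniform(-1.0, 1.0, size=3) * half_extents
        # Rotate into world orientation, then translate to the cube's center.
        return rotation @ local + np.asarray(center, dtype=float)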

tilt of object from the normals

I have a flat object (not totally flat; say, flat to within 25 µm) which I measured twice (the measuring method is not important here), applying a tilt between the two measurements.
I have the normal at each point of the surface, and from these normals I want to determine the tilt that was applied.
My approach was to calculate the average normal of each measurement and then calculate the angle between those two averages.
Could you please suggest another solution, or confirm mine?
Many thanks in advance
Your solution should work, but you have to measure the normals either:
evenly distributed over the whole area, or
always at the same points of the object (which I assume is not the case).
If this condition does not hold, it can lower the accuracy a lot.
Now, to be sure we are talking about the same thing:
The red vectors 1,2 (in the original illustration) are the average normals. The angle between them is not the tilt! It is the angle between the plates, combined around two axes. So if you want the tilt about just one axis, you have to project these normals onto the plane you want the tilt to be measured in (the blue vectors 1,2 on the right; call them n1, n2). The angle between these projected vectors is the tilt:
tilt = acos( (n1·n2) / (|n1|·|n2|) )
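A minimal sketch of this average-project-measure computation, assuming numpy arrays of per-point normals and a unit normal for the plane the tilt should be measured in (all names are illustrative):

    import numpy as np

    def tilt_between(normals_a, normals_b, plane_normal):
        """Angle between the average normals, measured in a chosen plane.

        normals_a, normals_b -- (N,3) arrays of measured surface normals
        plane_normal         -- normal of the plane to measure the tilt in
        """
        n1 = normals_a.mean(axis=0)
        n2 = normals_b.mean(axis=0)
        # Project each average normal onto the plane: v - (v . p) p
        p = plane_normal / np.linalg.norm(plane_normal)
        n1 = n1 - np.dot(n1, p) * p
        n2 = n2 - np.dot(n2, p) * p
        # tilt = acos( (n1 . n2) / (|n1| |n2|) )
        cos_t = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
        return np.arccos(np.clip(cos_t, -1.0, 1.0))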
Another method:
Without knowing the measurement possibilities and the object shape, it is hard to suggest another measurement method to validate yours. Anyway, if you can measure distances, then for plate-like objects you can do this:
Measure a0, a1 and b, then compute the angle:
ang = atan2(a0 - a1, b)
Do this for the second measurement as well, and then:
tilt = ang2 - ang1
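A small sketch of that distance-based method, interpreting a0 and a1 as heights measured at two points a distance b apart (an assumption, since the original illustration is not reproduced here):

    import math

    def plate_angle(a0, a1, b):
        """Inclination of a plate from two height measurements b apart."""
        return math.atan2(a0 - a1, b)

    # One angle per measurement; the applied tilt is their difference.
    ang1 = plate_angle(10.00, 10.02, 50.0)
    ang2 = plate_angle(10.00, 10.30, 50.0)
    tilt = ang2 - ang1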
[notes]
if the tilt plane is one of the base planes (xy, xz or yz), then just ignore the unused axis coordinate

Project a grid in screenspace on the world xz plane

I want to project a grid onto the xz-plane, as shown here:
To do that, I created a vertex grid with x and z in the range [-1, 1]. In the shader I multiply the xz screen coordinates of each vertex by the inverse of the View-Projection matrix. Then I adjust the height, depending on the new world xz coordinates, and finally I transform these coordinates back to screen space by multiplying them by the View-Projection matrix.
I don't know why, but I get a very strange plane on the screen. Are the mathematical operations I use correct?
The grid that you initially create, is that in projection space or actual screen co-ords? It sounds like it is in projection space since you only transform it with the inverse of the view-projection matrix to get into world co-ords. I think you need to include the "Window" matrix too i.e. transform them by the inverse of the View-Projection-Window matrix (and similarly on the way back to screen co-ords).
Edit:
I'm probably not understanding exactly what it is you're trying to do so here's some questions back. :)
Are you trying to take the grid that's shown in the screenshot in your question and project that onto world z-x co-ordinates? If so, then why do you start with a grid of z-x values? Also, if you apply an inverse view matrix to those, then surely you would end up with a line, since the camera looks along z; although your second screenshot shows that you are getting a plane. I'm a bit confused.
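For reference, one common way to put a screen-space grid onto the world xz plane is to unproject each grid vertex twice (near and far) and intersect the resulting ray with the plane y = 0; note the perspective divide after applying the inverse matrix, which the question's description omits. A CPU-side sketch, assuming numpy, OpenGL-style NDC (z in [-1, 1]) and a given inverse View-Projection matrix inv_vp (illustrative, not the asker's shader code):

    import numpy as np

    def grid_point_on_xz_plane(ndc_x, ndc_y, inv_vp):
        """Unproject an NDC point at the near and far planes, then
        intersect the resulting ray with the world plane y = 0."""
        def unproject(z):
            p = inv_vp @ np.array([ndc_x, ndc_y, z, 1.0])
            return p[:3] / p[3]          # perspective divide is essential
        near, far = unproject(-1.0), unproject(1.0)
        direction = far - near
        if abs(direction[1]) < 1e-9:
            return None                  # ray parallel to the plane
        t = -near[1] / direction[1]      # solve near.y + t*dir.y = 0
        return near + t * direction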

mapping from normalized device coordinates to view space

I'd like to map from normalized device coordinates back to view space.
The other way around works like this:
view space -> clip space: multiply the homogeneous coordinates by the projection matrix
clip space -> normalized device coordinates: divide (x,y,z,w) by w
Now in normalized device coordinates, all coordinates which were within the view frustum fall into the cube x,y,z ∈ [-1,1] with w=1.
Now I'd like to transform some points on the boundary of that cube back into view coordinates. The projection matrix is nonsingular, so I can use its inverse to get from clip space to view space. But I don't know how to get from normalized device space to clip space, since I don't know how to calculate the 'w' I need to multiply the other coordinates by.
Can someone help me with that? Thanks!
Unless you actually want to recover your clip-space values for some reason, you don't need to calculate the w. Multiply your NDC point by the inverse of the projection matrix and then divide by w to get back to view space.
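A minimal sketch, assuming a numpy 4x4 projection matrix. This works because view-space points have w = 1, so the final divide restores the scale lost in the original perspective divide:

    import numpy as np

    def ndc_to_view(ndc_xyz, projection):
        """Map a point from NDC back to view space without knowing clip w."""
        inv_p = np.linalg.inv(projection)
        p = inv_p @ np.array([*ndc_xyz, 1.0])  # treat the NDC point as w = 1
        return p[:3] / p[3]                    # divide by w -> view space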
The flow graph at the top, and the formulas described on the following page, might help you: http://www.songho.ca/opengl/gl_transform.html

Polygon math

Given a list of points that form a simple 2d polygon oriented in 3d space and a normal for that polygon, what is a good way to determine which points are specific 'corner' points?
For example, which point is at the lower left, or the lower right, or the top most point? The polygon may be oriented in any 3d orientation, so I'm pretty sure I need to do something with the normal, but I'm having trouble getting the math right.
Thanks!
You would need more information in order to make that decision. A set of (co-planar) points and a normal is not enough to give you a concept of "lower left" or "top right" or any such relative identification.
Viewing the polygon from the direction of the normal (so that it appears as a simple 2D shape) is a good start, but that shape could be rotated to any arbitrary angle.
Is there some other information in the 3D world that you can use to obtain a coordinate-system reference?
What are you trying to accomplish by knowing the extreme corners of the shape?
Are you looking for a bounding box?
I'm not sure the normal has anything to do with what you are asking.
To get a Bounding box, keep 4 variables: MinX, MaxX, MinY, MaxY
Then loop through all of your points, checking the X values against MaxX and MinX, and your Y values against MaxY and MinY, updating them as needed.
When looping is complete, your box is defined by the four corner combinations: (MinX, MinY), (MaxX, MinY), (MinX, MaxY) and (MaxX, MaxY).
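A minimal sketch of that loop in plain Python, assuming the points are already 2D pairs:

    def bounding_box_2d(points):
        """points -- iterable of (x, y) pairs.
        Returns (min_x, min_y, max_x, max_y)."""
        min_x = min_y = float("inf")
        max_x = max_y = float("-inf")
        for x, y in points:
            min_x, max_x = min(min_x, x), max(max_x, x)
            min_y, max_y = min(min_y, y), max(max_y, y)
        return min_x, min_y, max_x, max_y

    # The four corners are the combinations of the extremes:
    # (min_x, min_y), (max_x, min_y), (min_x, max_y), (max_x, max_y)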
Response to your comment:
If you want your box after a projection, what you need is the "transformed" points. Then apply the bounding-box loop as stated above.
"Transformed" usually means 2D screen coordinates after a projection (scene render), but it could also mean the 2D points on any plane that you projected onto.
A possible algorithm would be (see the sketch after this list):
Find the normal, which you can do by taking the cross product of vectors connecting two pairs of different corners.
Create a transformation matrix to rotate the polygon so that it is planar in XY space (i.e. normal aligned along the Z axis).
Calculate the coordinates of the bounding box, or whatever other definition of corners you are using (as the polygon is now aligned in 2D space, this is a considerably simpler problem).
Apply the inverse of the transformation matrix used in step 2 to transform these coordinates back to 3D space.
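A sketch of those steps, assuming numpy (rotation_to_z and the tolerance are illustrative helpers, not from any particular library):

    import numpy as np

    def rotation_to_z(normal):
        """Rotation matrix taking `normal` onto the +Z axis."""
        n = normal / np.linalg.norm(normal)
        z = np.array([0.0, 0.0, 1.0])
        v = np.cross(n, z)
        c = np.dot(n, z)
        if np.linalg.norm(v) < 1e-9:                 # already (anti)parallel to Z
            return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
        vx = np.array([[0, -v[2], v[1]],
                       [v[2], 0, -v[0]],
                       [-v[1], v[0], 0]])
        return np.eye(3) + vx + vx @ vx / (1.0 + c)  # Rodrigues' formula

    def corners_in_3d(points):
        """Steps 1-4: rotate into the XY plane, take the 2D bounding box,
        rotate the box corners back into 3D."""
        pts = np.asarray(points, dtype=float)
        n = np.cross(pts[1] - pts[0], pts[2] - pts[0])   # step 1: polygon normal
        r = rotation_to_z(n)                             # step 2
        flat = pts @ r.T                                 # rotate all points
        lo, hi = flat[:, :2].min(axis=0), flat[:, :2].max(axis=0)  # step 3
        z = flat[0, 2]                                   # common plane height
        box = np.array([[lo[0], lo[1], z], [hi[0], lo[1], z],
                        [hi[0], hi[1], z], [lo[0], hi[1], z]])
        return box @ r                                   # step 4: inverse rotation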
I believe that your question requires some additional information - namely the coordinate system with respect to which any point could be considered "topmost", or "leftmost".
Don't forget that whilst the normal tells you which way the polygon is facing, it doesn't on its own tell you which way is "up". It's possible to rotate (or "roll") around the normal vector and still be facing in the same direction.
This is why most 3D rendering systems have a camera which contains not only a "view" vector, but also "up" and "right" vectors. Changes to the latter two achieve the effect of the camera "rolling" around the view vector.
Project it onto a plane and get the bounding box.
I have a silly idea, but at the risk of gaining a negative point, I'll give it a try:
Get the minimum/maximum value along each three-dimensional axis over every point of your 2d polygon. A single pass with a loop/iterator over the list of points will suffice, simply replacing the minimum and maximum values as you go. The end result is a list that has the "lowest" X, Y, Z coordinates and the "highest" X, Y, Z coordinates.
Iterate through this list of min/max values to create each point ("corner") of a "bounding box" around the object. The result should be a box that always contains the object regardless of the axis examined or the orientation (no point on the polygon will ever exceed the maximums or minimums you collect).
Then get the distance of each "2d polygon" point to each corner location on the "bounding box"; the shorter the distance between points, the "closer" it is to that "corner".
Far from optimal, certainly crummy, but certainly quick. You could probably capture this during the object's rotation, by simply looking for the min/max of each rotated x/y/z value, and retaining a list of those values ahead of time.
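A sketch of that idea, assuming numpy (names are illustrative):

    import numpy as np

    def closest_bbox_corners(points):
        """For each 3D axis-aligned bounding-box corner, find the
        polygon point closest to it."""
        pts = np.asarray(points, dtype=float)
        lo, hi = pts.min(axis=0), pts.max(axis=0)      # one pass for min/max
        # All 8 corners: every combination of lo/hi per axis.
        corners = np.array([[x, y, z] for x in (lo[0], hi[0])
                                      for y in (lo[1], hi[1])
                                      for z in (lo[2], hi[2])])
        # Distance of every polygon point to every corner.
        d = np.linalg.norm(pts[:, None, :] - corners[None, :, :], axis=2)
        return corners, d.argmin(axis=0)   # index of the nearest point per corner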
If you can assume some constraints regarding the shapes, then you might be able to get away with knowing less information. For example, if your shape is the composition of a small square with a long thin triangle on one side (i.e. a simple symmetrical geometry), then you could compare the distance from each list point to the "center of mass". The largest distance would identify the tip of the cone, the second largest would be the two points farthest from the tip of the cone, and so on. If there is some order to the list, like points being entered in counter-clockwise order (about the normal), you could identify all the points.

This sounds like a fair bit of computation, so it might be reasonable to include some extra info with your shapes, like the "center of mass" and a reference point that is located "up" above the COM (but not along the normal). This will give you an "up" vector that you can cross with the normal to define some body coordinates, for example. Also, the normal can be defined by an ordering of the point list.

If you can't assume anything about the shapes (or even if the shapes were symmetrical, for example), then you will need more data. It depends on your constraints.
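A small sketch of the distance-from-centroid comparison, assuming numpy (note this uses the simple vertex centroid as the "center of mass", which is an approximation):

    import numpy as np

    def rank_points_by_distance_from_centroid(points):
        """Sort vertex indices by distance from the vertex centroid,
        farthest first (the 'tip' of an asymmetric shape comes out on top)."""
        pts = np.asarray(points, dtype=float)
        com = pts.mean(axis=0)                 # vertex centroid, not area COM
        d = np.linalg.norm(pts - com, axis=1)
        return np.argsort(-d), d               # indices (farthest first), distances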
If you know that the polygon in 3D is "flat", you can use the normal to transform all 3D points of the vertices to a 2D representation (of the points with respect to the plane in which the polygon is located). But this still leaves you with defining the origin of this coordinate system (which doesn't really matter for your problem) and with the orientation of at least one of the axes (if you want orthogonal axes you can still rotate them around your chosen origin), and this is where the trouble starts.
I would recommend using the Y-axis of your 3D coordinate system: project it onto your plane and use the resulting direction as "up". But then you are in trouble in case your plane is orthogonal to the Y-axis (in that case you might want to use the projected Z-axis as "up" instead).
The math is rather simple: you can use the inner product (a.k.a. scalar product) for the projection onto your plane, and some matrix work to convert to the 2D coordinate system. You can find all of it by googling for raytracer algorithms for polygons.
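A minimal sketch of that projected-"up" construction, assuming numpy (the names and the fallback threshold are illustrative):

    import numpy as np

    def plane_basis(normal, world_up=(0.0, 1.0, 0.0)):
        """Build 2D axes in the polygon's plane, using the projected
        world Y axis as 'up' (falling back to Z when Y is degenerate)."""
        n = np.asarray(normal, dtype=float)
        n = n / np.linalg.norm(n)
        up = np.asarray(world_up, dtype=float)
        up = up - np.dot(up, n) * n            # project Y onto the plane
        if np.linalg.norm(up) < 1e-6:          # plane orthogonal to the Y axis
            up = np.array([0.0, 0.0, 1.0])
            up = up - np.dot(up, n) * n        # use the projected Z axis instead
        up = up / np.linalg.norm(up)
        right = np.cross(up, n)                # completes an orthogonal basis
        return right, up

    # 2D coords of a point p relative to an origin o:
    # (np.dot(p - o, right), np.dot(p - o, up))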
