Generate a random point on a 2D disk in 3D space given its normal vector

Is there a simple and efficient method to generate a random (uniformly distributed) point on a disk "hanging" in 3-dimensional space? The disk is defined by its normal.
Ideally, I would like to avoid rotation matrices, as I do not fully understand them, and I know they have issues.
So far, I've tried generating a 3D unit vector and projecting it onto the plane of the disk, which does ensure that the point is within the disk, but not that it's uniformly distributed.
I also tried scaling the generated vector according to some function of its length, but I couldn't get a uniform distribution back regardless.
I had an idea that involved creating 2 vectors perpendicular to each other and the normal, to define a local coordinate system. Then I could generate a point on the unit disk as in 2D and convert the result back to the global coordinate system. This seems like it would be quite efficient, as it involves some precomputation (which I'm completely fine with) and only simple calculations afterwards (this is for a raytracer, so it'll happen a lot). The problem is, I don't know how to reliably calculate the local coordinate system's basis vectors while avoiding possible issues like collinearity.
Any help is much appreciated.

An easy way to calculate orthogonal basis vectors u, v for a plane with normal n = (a, b, c) is to find the component of n with the least absolute value, set it to zero in u, and swap and negate the remaining two components; the rest pretty much follows. For example, if the first component is the one with minimal absolute value, you can pick these basis vectors:
u = (0, -c, b) // n·u = -bc+cb = 0
v = (b²+c², -ab, -ac) // n·v = ab²+ac²-ab²-ac² = 0, u·v = abc-abc = 0
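
As an illustration, here is a minimal Python sketch of the full approach (numpy assumed; disk_basis and random_point_on_disk are illustrative names; the basis construction follows the pattern above, extended to whichever component of n is smallest, and the square root on the radius is what makes the distribution uniform by area):

import numpy as np

def disk_basis(n):
    # Build unit vectors u, v orthogonal to each other and to the unit normal n,
    # zeroing out the component of n with the smallest absolute value.
    a, b, c = n
    i = np.argmin(np.abs(n))
    if i == 0:
        u = np.array([0.0, -c, b])
    elif i == 1:
        u = np.array([-c, 0.0, a])
    else:
        u = np.array([-b, a, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)  # already unit length, since n and u are orthonormal
    return u, v

def random_point_on_disk(center, n, radius, rng=None):
    # Uniform sample on the disk with the given center, unit normal n, and radius.
    rng = rng or np.random.default_rng()
    u, v = disk_basis(n)
    r = radius * np.sqrt(rng.uniform())      # sqrt -> uniform density by area
    theta = rng.uniform(0.0, 2.0 * np.pi)
    return center + r * np.cos(theta) * u + r * np.sin(theta) * v

# usage: random_point_on_disk(np.array([0., 0., 0.]), np.array([0., 0., 1.]), 1.0)

For a raytracer, disk_basis can be precomputed once per disk, so each sample costs only two uniform draws and a handful of multiplications.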

Related

Producing a forward/directional vector from just the euler rotation components and position

I'm currently having quite a lot of trouble producing a forward vector based on my accessible transform components. I'm working within SparkAR, where objects are built around a transform class with position, rotation (euler) and scale, as expected; however, it doesn't provide any built-in methods for retrieving general information such as the relative forward and up vectors of the object, a transform matrix, etc.
This leaves me with the task of trying to put this together myself from the available data. The end result is simply to be able to calculate the angle between the forward vector of 2 objects, which is mapped to a 0-1 range and used as a value of texture blending. The goal is for it to simulate fake dynamic lighting through texture blends based on how similar the local object's forward vector is to the forward vector of the directional light.
I must have read 20+ different StackOverflow results by this point covering similar questions; however, I've been unable to get correct results based on my testing of their suggestions, and at this point I feel like I need some more direct feedback on my method before I tear my eyes out.
My current process is as follows:
1) Retrieve euler rotation from the animated joint I want to compare against (it's in radians)
2) Convert radian values to degrees
3) Attempt to convert the euler values to a forward vector using the following calculation sample:
x = cos(yaw)cos(pitch)
y = sin(yaw)cos(pitch)
z = sin(pitch)
I tried a couple of other variations on that for different axis orders, which didn't provide much of a change at all.
Some small notes before continuing: X is pitch, Y is yaw and Z is roll. Positive Z is towards the screen, positive X is to the right, positive Y is up.
4) Normalise the vector (although it's unnecessary given the results I get).
5) Perform a dot product operation on the vector, against a set direction vector, in this case for testing purposes, I'm simply using (0,0,1).
6) Compare the resulting value against the expected result - incorrect.
Within a single 90 degree turn, the value retrieved from the dot product (which by my understanding should be 1 when facing the same direction and -1 when facing the inverse) oscillates between these two range endpoints 15-20 times.
Another thing worth noting here is that I did have to swap the Y and Z components of the calculation to produce a non-0 result, so the current forward vector calculation from euler is as follows:
x = cos(yaw)cos(pitch)
y = sin(pitch)
z = sin(yaw)cos(pitch)
I'm really not sure where to go from here to produce the result I'm looking for, as I simply don't understand what about the current calculations is going wrong in order to begin fixing it.
The passed in vector, pre-dot product at the rotations 0, 90, 180 and 270 are as follows:
( 1 , 0, -0.00002)
(-0.44812, 0, -0.89397)
( 1 , 0, 0.00002)
(-0.44812, 0, 0.89397)
I can see that the euler angles going into the calculations are definitely correct, so the RadToDeg conversion isn't screwing the input up.
Is the calculation I'm using for trying to produce a forward vector from euler rotations incorrect? Is there something I should be using instead?
Any advice on moving forward with this issue would be much appreciated.
Thanks.
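
For reference, a minimal Python sketch of the conversion described above (function names are illustrative), assuming yaw is about Y, pitch is about X, and the angles are kept in radians throughout; note that most trig implementations expect radians, so values converted to degrees should not be fed to cos/sin:

import math

def forward_from_euler(pitch, yaw):
    # Forward vector using the question's swapped-axis form; angles in radians.
    return (math.cos(yaw) * math.cos(pitch),
            math.sin(pitch),
            math.sin(yaw) * math.cos(pitch))

def blend_factor(f1, f2):
    # Dot product of two unit vectors, remapped from [-1, 1] to [0, 1]
    # for use as a texture-blend value.
    d = sum(a * b for a, b in zip(f1, f2))
    return (d + 1.0) * 0.5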

How to convert points between two coordinate systems with different rotations

Imagine two coordinate systems laid on top of each other, with a rotation and scale difference between the two.
The problem is to convert a point from the non-rotated system to the other. What we do have are four corner points forming a rectangle, with coordinates known in both systems at each point. We also know the rotation difference, and I think I should at least know the scale difference too. How do I convert a point from the non-rotated system to the rotated system? I'm using Unity3D.
Extra points for clarity in math :)
PS: I'm writing this really late, going to edit later for more clarity.
Some linear algebra does the trick:
Express each operation as a matrix and matrix multiply those to combine them into a single resulting matrix (for efficiency).
If translation is involved, you need to add a dimension to your matrices; see homogeneous coordinates.
The reason is that the mappings are affine ones then, not linear ones. You can ignore the extra dimension in the end result. It is just a nice way to embed affine mappings into linear ones, so the algebra is easier.
Example
M = M_trans * M_rot * M_scale
x' = M x
The order here is right to left: vector x is first scaled, then rotated, then translated into vector x'. (Using column vectors).
Hints on the matrices: Rotation Matrix, Scaling Matrix
For deriving 2D formulas when given 3D ones: either keep z = 0 or delete the 3rd row and 3rd column from each matrix.
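
As an illustration, a minimal numpy sketch of this composition for the 2D case with homogeneous coordinates (tx, ty, angle, and s are example parameters):

import numpy as np

def make_transform(tx, ty, angle, s):
    # M = M_trans * M_rot * M_scale, as 3x3 homogeneous matrices.
    T = np.array([[1.0, 0.0, tx],
                  [0.0, 1.0, ty],
                  [0.0, 0.0, 1.0]])
    c, si = np.cos(angle), np.sin(angle)
    R = np.array([[c, -si, 0.0],
                  [si,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    S = np.diag([s, s, 1.0])
    return T @ R @ S

M = make_transform(2.0, 3.0, np.pi / 4, 1.5)
p = np.array([1.0, 0.0, 1.0])   # the point (1, 0) with the extra dimension appended
print(M @ p)                    # scaled, then rotated, then translated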

Fast and efficient way to store orientation in 3D space?

I originally stored an object's origin and orientation in 3D space using 3 vectors representing the object's origin, forward and up directions.
To apply the correct transformation to the modelview matrix stack, I compose the affine transformation matrix using these three vectors.
Translation is trivial; rotation, however, is applied by constructing the correct rotation matrix (depending on the angle and axis of rotation) and applying it to these 3 vectors.
I'm using this method for a very large number of objects, and the rotation/affine matrix composition is causing a performance bottleneck.
I'm wondering if there is a more sensible/efficient way to store the orientation?
I originally stored an object's origin and orientation in 3D space using 3 vectors representing the object's origin, forward and up directions.
Or in other words: you're storing a 3×3 matrix. The matrices OpenGL uses are just the same, though they're 4×4; the only difference to yours is that the element 4,4 is always 1, the elements 0...3,4 are all 0, and the first column 0,0...3 is the cross product of forward and up, usually called right.
This is actually the most concise and directly accessible way to represent an object's placement in 3D space. You can then apply any kind of transformation by performing a single matrix multiplication.
Another method is using a quaternion together with an offset vector. But quaternions must be turned into a matrix if you want your objects to be translatable (or you could chain up a lot of translation/rotation pairings for a transformation hierarchy, but using a matrix actually causes less overhead).
Other than memory, what's stopping you from storing the whole 4x4 affine matrix?
Better yet, ISTR that if the matrix is normalised, the bottom row is usually [0, 0, 0, 1], so you only need to store the top three rows.
A quaternion is probably the most natural/efficient way to store an orientation (it should be better in all respects than forward/up vectors). A quaternion will take 4 values to store, and takes 10 multiplications and 15 additions to convert to a 3x3 rotation matrix -- no divisions or transcendental functions are required.
If you are especially pressed for space, you can probably get by with only 3 values, as you can generate the first element of a unit quaternion robustly from the remaining three. This will take an additional 3 multiplications, 3 additions, and a square root (it is also slightly tricky, since you need to make sure this first element is nonnegative...)
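
A minimal sketch of both conversions mentioned in this answer, using the standard formulas for a unit quaternion (w, x, y, z):

import math

def quat_to_matrix(w, x, y, z):
    # Unit quaternion -> 3x3 rotation matrix; no divisions or transcendentals.
    return [[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]]

def quat_from_xyz(x, y, z):
    # Recover the (nonnegative) w component of a unit quaternion from (x, y, z);
    # max() guards against tiny negative values caused by rounding.
    w = math.sqrt(max(0.0, 1.0 - (x*x + y*y + z*z)))
    return w, x, y, z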
Quaternion transformations may be faster than matrices on current GPGPUs, where access to global memory is much slower than access to local memory.
According to these tables, you compute about twice as much but need to fetch less than half as much memory.
For rendering, where the vertex shader uniforms are stored as local memory, the only reasonable option is matrices.

align one set of 2d points with another using only translation and rotation

I'm working in OpenCV but I don't think there is a function for this. I can find a function for finding affine transformations, but affine transformations include scaling, and I only want to consider rotation + translation.
Imagine I have two sets of points in 2d - let's say each set has exactly 50 points.
E.g. set A = {x1, y1, x2, y2, ... , x50, y50}
set B = {x1', y1', x2', y2', ... , x50', y50'}
I want to find the rotation and translation combination that gets closest to mapping set A onto set B. I guess I would define "closest" as minimising the average distance between points in A and corresponding points in B, i.e., minimising the average distance between (x1, y1) and (x1', y1'), etc.
I guess I could use brute force testing all possible translations and rotations but this would be extremely inefficient. Does anyone know a simpler way?
Thanks!
This problem has a very elegant solution in terms of the singular value decomposition of the cross-covariance matrix of the two (centred) point sets. The name of this is the orthogonal Procrustes problem, after the Greek legend about a fellow who offered travellers a bed that would fit anyone.
The solution comes from finding the nearest orthogonal matrix to a given (not necessarily orthogonal) matrix.
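
A compact numpy sketch of this solution for the 2D case (often called the Kabsch algorithm; corresponding points are the rows of A and B, and the function name is illustrative):

import numpy as np

def best_rotation_translation(A, B):
    # Find R and t minimising the summed squared distances between
    # R @ a + t and the corresponding points b.
    ca, cb = A.mean(axis=0), B.mean(axis=0)   # centroids of the two sets
    H = (A - ca).T @ (B - cb)                 # 2x2 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # keep a proper rotation (no reflection)
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t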
The way I would do it in Excel is to make a couple of columns representing the points.
Cells representing the rotation/translation of one set (no need to rotate and translate both of them).
Then columns representing those same points rotated/translated.
Then another column for the distances between corresponding points.
Then a cell with the sum of the distances between points.
Finally, use Solver to optimize the rotation and translation cells.
If you fix some rotation, you can get an answer using ternary search. Run the search in x, and for every tested x run it in y to get the best value. This gives the correct answer because the function (the sum of corresponding distances) is convex: its restriction to any line is a one-dimensional convex function, and a sum of convex functions is convex.
Instead of brute force over the angle, I can propose a method based on the ternary search. Choose some not-too-large step S. Compute the target function for every angle in (0, S, 2S, ...). Then, if S is small enough, we can exclude some segments (iS, (i + 1)S) from consideration, namely the ones where the function values at angles iS and (i + 1)S are relatively large. Implemented carefully, this can give an answer faster than brute force.
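
As a sketch of the nested ternary search described above (cost is assumed to be the sum of corresponding distances for a fixed rotation, which is convex in the translation; names are illustrative):

def ternary_min(f, lo, hi, iters=60):
    # Minimise a one-dimensional convex function f on [lo, hi].
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

def best_translation(cost, lo, hi):
    # Outer search in x; for every tested x, an inner search finds the best y.
    # The partial minimum over y is itself convex in x, so this is valid.
    best_y = lambda x: ternary_min(lambda y: cost(x, y), lo, hi)
    x = ternary_min(lambda x: cost(x, best_y(x)), lo, hi)
    return x, best_y(x)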

What is a 3D Vector and how does it differ from a 3D point?

Does a 3D vector differ from a 3D point tuple (x,y,z) in the context of 3D game mathematics?
If they are different, then how do I calculate a vector given a 3d point?
The difference is that a vector is an algebraic object that may or may not be given as the set of coordinates in some space. (thanks to bungalobill for correcting my sloppiness).
A point is just a point given by coordinates. Generally, one can conflate the two. If you are given a set of coordinates, and told that they constitute a 'point' with no further information (choice of basis, etc), then you can just hand that set of numbers back and legitimately claim to have produced a vector.
The largest difference between the two is that it makes no sense to do things to one that you can do to the other. For example,
You can add vectors: <1 2 3> + <3 2 1> = <4 4 4>
You can multiply (or scale) a vector by a number (generally called a scalar)
2 * <1 1 1> = <2 2 2>
You can ask how far apart two points are: d((1, 2, 3), (3, 2, 1)) = sqrt((1 - 3)² + (2 - 2)² + (3 - 1)²) = sqrt(8) ~= 2.83
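
A tiny sketch of these operations in Python, representing both vectors and points as plain tuples:

import math

def add(u, v):        # vectors add componentwise
    return tuple(a + b for a, b in zip(u, v))

def scale(k, v):      # scalar multiplication
    return tuple(k * a for a in v)

def distance(p, q):   # distance between two points
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

print(add((1, 2, 3), (3, 2, 1)))       # (4, 4, 4)
print(scale(2, (1, 1, 1)))             # (2, 2, 2)
print(distance((1, 2, 3), (3, 2, 1)))  # 2.828...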
A good intuitive way to think about the association between a vector and a point is that a vector tells you how to get from the origin (that one point in space to which we assign the coordinates (0, 0, 0)) to its associated point.
If you translate your coordinate system, then you get a new vector for the same point. The coordinates that make up the point will undergo the same translation, though, so it's a pretty easy conflation to make between the two.
Likewise, if you rotate the coordinate system or apply some other transformation (e.g. a shear), then the coordinates and the vector associated with the point will also change.
It's also possible for a vector to be something else entirely; for example, a bounded function on the interval [0, 1] is a vector, because you can multiply it by a real number and add it to another function on the interval and it will satisfy certain requirements (namely the axioms of a vector space). In this case one thinks of having one coordinate for each real number x in [0, 1], where the value of that coordinate is just f(x). So that's the easiest example of an infinite dimensional vector space.
There are all sorts of vector spaces and the notion that a vector is a 'point and a direction' (or whatever it's supposed to be) is actually pretty vacuous.
A vector represents a change from one state to another. To create one, you need two states (in this case, points), and then you subtract the initial state from the final state in order to get the resultant vector.
Vectors are a more general idea than a point in 3D space.
Vectors can have 2, 3, or n dimensions. They represent many quantities in the physical world (e.g., velocity, force, acceleration) besides position.
A mathematician would say that a vector is a first order tensor that transforms according to this rule:
u(i) = A(i, j)v(j)
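
In concrete terms this rule is just a matrix-vector product with summation over the repeated index j; a tiny numpy illustration:

import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # a 90-degree rotation, as the example A(i, j)
v = np.array([1.0, 0.0])
u = A @ v                     # u(i) = A(i, j) v(j)
print(u)                      # [0. 1.]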
You need both point and vector because they are different. A point in 3D space denoting position is a vector, but not every vector is a point in 3D space.
Then there's the computer science notion of a vector as a container - it's an abstraction for an array of values or references. This is a different concept from a mathematician's idea of a vector, because every vector container need not obey the first order tensor transformation law (e.g. a Vector of OrderItems). That's yet another separate idea.
It's important to keep all these in mind when talking about vectors and points.
Does a 3D vector differ from a 3D point tuple (x,y,z) in the context of 3D game mathematics?
Traditionally, a vector means a direction and speed. A point could be considered a vector from the world origin over one time step (even though that may not be considered mathematically pure).
If they are different, then how do I calculate a vector given a 3d point?
"Target minus tower" is the common mnemonic.
Be careful with your usage of this: the resulting vector is really the direction scaled by the distance. If you want to turn it into something useful in a game application, you will need to normalize the vector first.
Example: Joe is at (10,0,0) and he wants to go to (10,10,0)
Target-Tower: (10,10,0)-(10,0,0)=(0,10,0)
Normalize the resulting vector: (0,1,0)
Apply "physics": (0,1,0) * speed * elapsed_time (with speed = 3, and we'll say that the computer froze for a whole 2 seconds between the last step and this one, for ease of computation)
=(0,6,0)
Add the resulting vector to Joe's current point in space to get his next point in space: ... = (10,6,0)
Normal = vector/(sqrt(x*x+y*y+z*z))
...I think I have everything here
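
The worked example above as a minimal Python sketch (the helper names are illustrative):

import math

def normalize(v):
    # Normal = vector / sqrt(x*x + y*y + z*z)
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def step_towards(position, target, speed, elapsed_time):
    direction = tuple(t - p for t, p in zip(target, position))  # target - tower
    direction = normalize(direction)
    return tuple(p + d * speed * elapsed_time
                 for p, d in zip(position, direction))

# Joe at (10, 0, 0) heading to (10, 10, 0), speed 3, elapsed time 2 -> (10, 6, 0)
print(step_towards((10, 0, 0), (10, 10, 0), 3.0, 2.0))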
A vector is the change between states; a point is static. Two vectors can be parallel or perpendicular. You can take the cross product of two vectors, which is a third vector. You can multiply a vector by a constant. You can add two vectors.
None of these operations are allowed on points. So, program-wise, if you think of both as C++ classes, there will be many such methods in the vector class but probably only Get and Set for the point.
In the context of game mathematics there is no difference.
Points are elements of an affine space.† Vectors are elements of a vector (aka linear) space. When you choose an origin in an affine space it automatically induces a linear structure on that affine space. The contrary is also true: if you have a vector space it already satisfies all the axioms of an affine space.
The fact is that when it comes to computation, the only way to represent an affine space numerically is to use tuples of numbers, which also form a vector space.
Each object in a game always has an origin, and it is crucial to know where it is. That origin is set relative to the origin of the world, which is set relative to the origin of the camera/viewport. The vertices of the object are represented as vectors: offsets from the object origin. You use matrix multiplication to transform the objects; that too is a purely vector-space operation (you cannot multiply an affine point by a matrix without specifying the origin first). And so on: all those triplets of numbers that we might think of as 'points' are actually vectors in the local coordinate system.
So is there any reason to distinguish between the two outside the study of algebra? It is an unnecessary abstraction, and unnecessary abstractions are harmful (KISS). So my answer is no, just go with a single vector type.
† Or any topological space outside the context of game development.
A vector is like a line segment: a sequence of points that can be represented by just two points, the starting point and the ending point.
If you take the origin as the starting point, then you can describe your vector by giving only the ending point.
