Reading a book, "Introduction to 3D Game Programming with DirectX 12" by Frank Luna, I did not understand how we get the rotation matrix from the rotation rule.
The rotation rule is:
R_n(v) = (n.v)n + cos(theta)(v - (n.v)n) + sin(theta)(n x v)
How do we get the rotation matrix from this rule?
Very nice formula.
Assuming |n| = 1, the term (n.v)n is the projection of v onto the direction of n, which must remain unchanged by the rotation.
The term v - (n.v)n is the projection of v onto the plane normal to n, and is therefore multiplied by cos(theta).
The term n x v is the vector normal to the plane containing n and v, and is therefore multiplied by sin(theta).
I found that the cross product and the projection can both be represented as matrices.
The geometry of rotation about a vector n.
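Putting those three terms together: since (n.v)n = (n n^T)v and n x v is also a matrix times v, every term is a matrix acting on v, and the matrix form drops out. A minimal NumPy sketch (my own illustration, not the book's code; note Luna's book works with row vectors, while this acts on column vectors):

import numpy as np

def rotation_matrix(n, theta):
    """Rotation about the unit axis n by angle theta (radians), acting on column vectors."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)            # enforce |n| = 1
    P = np.outer(n, n)                   # projection matrix: (n.v)n == P @ v
    K = np.array([[0.0, -n[2],  n[1]],   # cross-product matrix: n x v == K @ v
                  [n[2],  0.0, -n[0]],
                  [-n[1], n[0],  0.0]])
    I = np.eye(3)
    # R v = (n.v)n + cos(theta)(v - (n.v)n) + sin(theta)(n x v)
    return P + np.cos(theta) * (I - P) + np.sin(theta) * K

R = rotation_matrix([0.0, 0.0, 1.0], np.pi / 2)
print(R @ np.array([1.0, 0.0, 0.0]))     # ~[0, 1, 0]: x rotated 90 degrees about z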
I am trying to generate two vectors with a given cosine similarity. The input would be the desired cosine similarity (or, equivalently, the angle) and the number of dimensions D, and the output would be two D-dimensional vectors with that similarity between them. I know how to use the cosine similarity function to compute the similarity of two given vectors, but I am lost when trying to go the other way around.
Is there such a procedure or algorithm, and what is it called?
For a given starting vector u and cosine similarity c:
Generate 2 points in n-D space; call them a and b
Project b onto the plane orthogonal to u and containing a
Subtract a from the result to obtain a vector orthogonal to u; call it h
Use [u, h] as a basis and basic trigonometry to generate the desired vector v, i.e. v = c*u/|u| + sqrt(1 - c^2)*h/|h|
The above method is dimension-agnostic as it only uses dot products. The resultant vectors {v} are of unit length and uniformly distributed around u.
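A minimal NumPy sketch of that recipe (my own illustration, slightly simplified: h is obtained directly by removing from a random vector its component along u), assuming c is in [-1, 1]:

import numpy as np

def vector_with_cosine_similarity(u, c, rng=None):
    """Return a unit vector v with cosine similarity c to u, in any dimension."""
    rng = rng or np.random.default_rng()
    u_hat = u / np.linalg.norm(u)
    b = rng.standard_normal(u.shape)          # random direction
    h = b - np.dot(b, u_hat) * u_hat          # component of b orthogonal to u
    h_hat = h / np.linalg.norm(h)
    # Basic trigonometry in the [u_hat, h_hat] basis.
    return c * u_hat + np.sqrt(1.0 - c**2) * h_hat

u = np.array([1.0, 2.0, 3.0, 4.0])
v = vector_with_cosine_similarity(u, 0.3)
print(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))   # ~0.3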
I am trying to bend a mesh along a spline curve and am currently out of ideas. At first I thought I could just add the spline point vectors to the mesh's vertices, but I am looking for a better approach.
How can I bend a mesh along a spline, so that the mesh, with some forward axis vector, follows the spline and bends according to it, and can also be repeated along the spline?
I believe there are many ways to do what you want. Some years ago I worked out an approach using Conformal Geometric Algebra. Of course you can also do it with conventional 3D math, as described in the papers Instant Mesh Deformation and Deformation Styles for Spline-based Skeletal Animation.
A simple method is as follows:
Your spline is a function S(t): R -> R^3; it takes a scalar t in [0, 1] and gives you a point in R^3.
1. Project each mesh vertex onto the spline curve. The projection is orthogonal in the sense that it follows the direction of a vector normal to the curve. So your mesh vertex v_i is projected to a point v'_i on the spline, where S(t_i) = v'_i. Form a vector p_i = v_i - v'_i (which is normal to the curve), so each mesh vertex can be expressed as:
v_i = S(t_i) + p_i
2. Compute an orthonormal coordinate system at "each point" of the spline. That coordinate system is known as the Frenet-Serret frame. The first vector to determine is the tangent to the curve; it is defined as the derivative of S(t), so the tangent is T = dS(t)/dt (normalized). The other two vectors, the normal N and the binormal B, can be computed in different ways; check the above reference papers for that.
3. Express the vector p_i (from step 1) in terms of the Frenet-Serret frame at the point S(t_i), such that p_i is a linear combination of T, N and B. Create a matrix A with columns T, N and B. You need to find x_i such that:
A x_i = p_i
That can be solved by inverting the matrix A (since A is orthogonal, taking the transpose suffices). So each mesh vertex can be computed as:
v_i = S(t_i) + A x_i
You can store the pair (t_i, x_i) instead of v_i (you don't need to store v_i anymore since you can compute it from t_i and x_i).
4. To deform the mesh, translate the spline control points, then recompute the Frenet-Serret frame at each spline point (taking the derivative of S(t) to compute T and updating N and B as suggested in the reference papers above). Once you have the updated T, N and B, you can rebuild the matrix A and compute the new mesh vertex positions using the formula from step 3.
Results can be seen in pictures of the above mentioned papers.
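As a rough sketch of steps 1-4 (my own illustration, not code from the cited papers): it samples the spline at discrete points, uses a fixed up vector instead of a full Frenet-Serret normal/binormal for brevity, and binds each vertex to its closest sample.

import numpy as np

def frame_at(spline_points, i, up=np.array([0.0, 1.0, 0.0])):
    """Orthonormal frame A = [T N B] at sample i (finite-difference tangent, fixed up hint)."""
    T = spline_points[min(i + 1, len(spline_points) - 1)] - spline_points[max(i - 1, 0)]
    T = T / np.linalg.norm(T)
    B = np.cross(T, up); B = B / np.linalg.norm(B)   # degenerate if T is parallel to up
    N = np.cross(B, T)
    return np.column_stack([T, N, B])

def bind(vertices, spline_points):
    """Steps 1-3: for each vertex store (closest sample index ~ t_i, local coordinates x_i)."""
    binding = []
    for v in vertices:
        i = int(np.argmin(np.linalg.norm(spline_points - v, axis=1)))
        p = v - spline_points[i]                      # p_i = v_i - S(t_i)
        A = frame_at(spline_points, i)
        binding.append((i, A.T @ p))                  # x_i = A^T p_i (A is orthogonal)
    return binding

def deform(binding, new_spline_points):
    """Step 4: rebuild vertices on the deformed spline: v_i = S(t_i) + A x_i."""
    return np.array([new_spline_points[i] + frame_at(new_spline_points, i) @ x
                     for i, x in binding])

In practice you would project onto the continuous curve to get t_i exactly and compute N and B as in the papers mentioned above.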
I was wondering if anyone knows of a function in Maxima to find the normalized eigenvectors of a 21x21 matrix?
I am using the function dgeev, but I do not believe these eigenvectors are normalized.
I appreciate any thoughts,
Ben
The eigenvectors computed by dgeev are indeed normalized to have Euclidean norm = 1. Keep in mind that to compute the norm of a complex vector (let's call it v), you want
sqrt (ctranspose (v) . v)
Here ctranspose is the conjugate transpose.
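The same check expressed in NumPy (an illustration, not Maxima code): eig wraps the LAPACK *geev routines, and its eigenvectors come back with unit norm under exactly this conjugate-transpose inner product.

import numpy as np

A = np.random.default_rng(1).standard_normal((21, 21))
eigvals, eigvecs = np.linalg.eig(A)        # wraps LAPACK's dgeev/zgeev

v = eigvecs[:, 0]                          # a (possibly complex) eigenvector
norm = np.sqrt(np.vdot(v, v).real)         # sqrt(ctranspose(v) . v)
print(norm)                                # ~1.0: already normalized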
uniteigenvectors normalizes the eigenvectors but apparently not the eigenvalues.
I wonder if it is possible (and if so, how) to represent an arbitrary M3 matrix transformation as a sequence of simpler transformations (such as translate, scale, skew, rotate).
In other words: how do I calculate the MTranslate, MScale, MRotate and MSkew matrices from MComplex so that the following equation holds:
MComplex = MTranslate * MScale * MRotate * MSkew (or in another order)
Singular Value Decomposition (see also this blog and this PDF). It turns an arbitrary matrix into a composition of three matrices: orthogonal x diagonal x orthogonal. The orthogonal matrices are rotation matrices; the diagonal matrix represents scaling along the primary axes.
The translation throws a monkey wrench into the game, but what you should do is take out the translation part of the matrix so you have a 3x3 matrix, run SVD on that to give you the rotation + scaling, then add the translation part back in. That way you'll have a rotation + scale + rotation + translate composition of four matrices. It's probably possible to do this with three matrices (rotation + scaling along some set of axes + translation), but I'm not sure exactly how... maybe a QR decomposition (Q = orthogonal = rotation, but I'm not sure whether R is skew-only or also has a rotational part).
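A hedged NumPy sketch of that approach for a 4x4 affine matrix acting on column vectors (my own illustration; the factor names follow the answer above):

import numpy as np

def decompose_affine(M):
    """Split a 4x4 affine matrix into translate + rotate + scale + rotate."""
    t = M[:3, 3]                            # translation part, taken out first
    U, s, Vt = np.linalg.svd(M[:3, :3])     # linear part = U @ diag(s) @ Vt
    return t, U, s, Vt                      # U, Vt orthogonal (rotation/reflection), s = scales

rng = np.random.default_rng(0)
M = np.eye(4)
M[:3, :3] = rng.standard_normal((3, 3))
M[:3, 3] = [1.0, 2.0, 3.0]

t, U, s, Vt = decompose_affine(M)
rebuilt = np.eye(4)
rebuilt[:3, :3] = U @ np.diag(s) @ Vt       # rotation * scale * rotation
rebuilt[:3, 3] = t                          # add the translation back in
print(np.allclose(M, rebuilt))              # True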
Yes, but the solution will not be unique. Also, you should put the translation at the end (the order of the rest doesn't matter).
For any given square matrix A there exist infinitely many matrices B and C such that A = B*C. Choose any invertible matrix B (which means that B^-1 exists, or equivalently det(B) != 0), and then C = B^-1*A.
So for your solution, first decompose MC into MT and MS*MR*MSk*I, choosing MT to be some invertible translation matrix. Then decompose the rest into MS and MR*MSk*I, so that MS is an arbitrary scaling matrix. And so on...
Now if at the end of the fun I is an identity matrix (with 1s on the diagonal, 0 elsewhere), you're good. If it is not, start over, but choose different matrices ;-)
In fact, using the method above symbolically, you can create a set of equations that will yield parametrized formulas for all of these matrices.
How useful these decompositions would be for you, well - that's another story.
If you type this into Mathematica or Maxima they'll compute this for you in no time.
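A tiny NumPy illustration of the non-uniqueness point above (any invertible B gives a valid factorization):

import numpy as np

A = np.random.default_rng(0).standard_normal((3, 3))

B = np.diag([2.0, 3.0, 4.0])       # pick any invertible matrix, e.g. a scaling
C = np.linalg.inv(B) @ A           # then C = B^-1 * A
print(np.allclose(A, B @ C))       # True, no matter which invertible B we chose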
How does it actually reduce noise? Can you suggest some nice tutorials?
For square matrices, SVD can be understood in a geometric sense as a transformation acting on a vector.
Consider a square n x n matrix M multiplying a vector v to produce an output vector w:
w = M*v
The singular value decomposition of M is the product of three matrices, M = U*S*V, so w = U*S*V*v. U and V are orthonormal matrices. From a geometric transformation point of view (acting upon a vector by multiplying it), they are combinations of rotations and reflections that do not change the length of the vector they are multiplying. S is a diagonal matrix which represents scaling or squashing with different scaling factors (the diagonal terms) along each of the n axes.
So the effect of left-multiplying a vector v by a matrix M is to rotate/reflect v by M's orthonormal factor V, then scale/squash the result by a diagonal factor S, then rotate/reflect the result by M's orthonormal factor U.
One reason SVD is desirable from a numerical standpoint is that multiplication by orthonormal matrices is an invertible and extremely stable operation (condition number is 1). SVD captures any ill-conditioned-ness in the diagonal scaling matrix S.
One way to use SVD to reduce noise is to do the decomposition, set components that are near zero to be exactly zero, then re-compose.
Here's an online tutorial on SVD.
You might want to take a look at Numerical Recipes.
Singular value decomposition is a method for taking an n x m matrix M and "decomposing" it into three matrices such that M = USV. S is a square diagonal matrix (the only nonzero entries are on the diagonal from top-left to bottom-right) containing the "singular values" of M. U and V are orthogonal, which leads to the geometric understanding of SVD, but that isn't necessary for noise reduction.
With M=USV, we still have the original matrix M with all its noise intact. However, if we only keep the k largest singular values (which is easy, since many SVD algorithms compute a decomposition where the entries of S are sorted in nonincreasing order), then we have an approximation of the original matrix. This works because we assume that the small values are the noise, and that the more significant patterns in the data will be expressed through the vectors associated with larger singular values.
In fact, the resulting approximation is the most accurate rank-k approximation of the original matrix (has the least squared error).
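A short NumPy sketch of that rank-k truncation (illustrative; the data and k are made up):

import numpy as np

def denoise(M, k):
    """Best rank-k approximation of M: keep only the k largest singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)   # s is sorted in nonincreasing order
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
signal = np.outer(rng.standard_normal(50), rng.standard_normal(30))   # a rank-1 "pattern"
noisy = signal + 0.01 * rng.standard_normal(signal.shape)

approx = denoise(noisy, k=1)
print(np.linalg.norm(approx - signal) < np.linalg.norm(noisy - signal))   # True: noise reduced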
To answer the title question: SVD is a generalization of eigenvalues/eigenvectors to non-square matrices.
Say,
$X \in \mathbb{R}^{N \times p}$, then the SVD decomposition of X yields X = UDV^T, where D is diagonal and U and V are orthogonal matrices.
Now X^TX is a square matrix, and its decomposition is X^TX = VD^2V^T, where the columns of V are the eigenvectors of X^TX and D^2 contains the eigenvalues of X^TX.
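A quick NumPy check of that relationship (illustrative only; the columns of V agree with the eigenvectors up to sign and ordering):

import numpy as np

X = np.random.default_rng(0).standard_normal((6, 4))   # N x p, non-square

U, d, Vt = np.linalg.svd(X, full_matrices=False)        # X = U @ diag(d) @ Vt
eigvals, eigvecs = np.linalg.eigh(X.T @ X)              # eigendecomposition of X^T X

# The squared singular values of X are the eigenvalues of X^T X.
print(np.allclose(np.sort(d**2), np.sort(eigvals)))     # True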
SVD can also be used to greatly ease the global fitting (i.e. to all observations simultaneously) of an arbitrary model (expressed as a formula) to data (with respect to two variables and expressed as a matrix).
For example, the data matrix A = D * M^T, where D represents the possible states of a system and M represents its evolution with respect to some variable (e.g. time).
By SVD, A(x,y) = U(x) * S * V^T(y), and therefore D * M^T = U * S * V^T,
so D = U * S * V^T * (M^T)^+, where the "+" indicates a pseudoinverse.
One can then take a mathematical model for the evolution and fit it to the columns of V, each of which is a linear combination of the components of the model (this is easy, as each column is a 1D curve). This yields the model parameters which generate M? (the "?" indicates it is based on fitting).
M * (M?)^+ * V = V?, which allows the residuals R * S^2 = V - V? to be minimized, thus determining D and M.
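A small NumPy illustration of the pseudoinverse step D = U * S * V^T * (M^T)^+ above (made-up D and M, not real data, and not the full fitting procedure):

import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((40, 3))      # states of the system (3 components)
M = rng.standard_normal((25, 3))      # evolution of each component (e.g. over time)
A = D @ M.T                           # data matrix A = D * M^T

U, s, Vt = np.linalg.svd(A, full_matrices=False)
D_recovered = U @ np.diag(s) @ Vt @ np.linalg.pinv(M.T)   # D = U * S * V^T * (M^T)^+
print(np.allclose(D, D_recovered))    # True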
Pretty cool, eh?
The columns of U and V can also be inspected to glean information about the data; for example each inflection point in the columns of V typically indicates a different component of the model.
Finally, and actually addressing your question, it is important to note that although each successive singular value (element of the diagonal matrix S), with its attendant U and V vectors, has a lower signal-to-noise ratio, the separation of the components of the model in these "less important" vectors is actually more pronounced. In other words, if the data is described by a bunch of state changes that follow a sum of exponentials or whatever, the relative weights of each exponential get closer together in the smaller singular values. In other words, the later singular values have vectors which are less smooth (noisier) but in which the change represented by each component is more distinct.