Invertible matrices - Convex or concave set - convex-optimization

Is the set of matrices whose determinant is non-zero a convex or a concave set?
GL(n) = { X : det(X) ≠ 0 }
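A worked counterexample (for any n >= 1): the identity I and its negation -I are both invertible, yet their midpoint is the zero matrix,

\tfrac{1}{2} I + \tfrac{1}{2}(-I) = 0, \qquad \det(0) = 0,

so GL(n) is not convex. ("Concave set" is not a standard notion; a set is either convex or not.)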

Related

Bend a mesh along a spline

I am trying to bend a mesh along a spline curve and am currently out of ideas. At first I thought I could just add the spline point vectors to the mesh's vertices, but I am looking for a more optimized approach.
How can I bend a mesh along a spline, so that the mesh, with some forward axis vector, follows the spline, bends according to it, and also repeats along the spline?
I believe there are many ways to do what you want. Some years ago I worked out an approach using Conformal Geometric Algebra. Of course you can also do it with conventional 3D math, as described in the papers Instant Mesh Deformation and Deformation styles for spline-based skeletal animation.
A simple method is as follows:
Your spline is a function S(t): R -> R^3; it takes a scalar t in [0, 1] and gives you a point in R^3.
1. Project each mesh vertex onto the spline curve. The projection is orthogonal in the sense that it follows the direction of a normal vector to the curve, so mesh vertex v_i is projected to a point v'_i on the spline, where S(t_i) = v'_i. Form the vector p_i = v_i - v'_i (which is normal to the curve), so each mesh vertex can be expressed as:
v_i = S(t_i) + p_i
2. Compute an orthogonal coordinate system at each point of the spline, known as the Frenet-Serret frame. The first vector to determine is the tangent to the curve; it is uniquely determined as the (normalized) derivative of S(t): T = dS(t)/dt. The other two vectors, the normal N and the binormal B, can be computed in different ways; check the reference papers above for that.
3. Express the vector p_i (from step 1) in terms of the Frenet-Serret frame at point S(t_i), i.e. as a linear combination of T, N and B. Create a matrix A with columns T, N and B; you need to find x_i such that:
A x_i = p_i
That system can be solved by inverting the matrix A (since A is orthogonal, taking the transpose suffices). So each mesh vertex can be computed as:
v_i = S(t_i) + A x_i
4. Store the pair (t_i, x_i) instead of v_i (you don't need to store v_i anymore, since you can recompute it from t_i and x_i).
5. To deform the mesh, translate the spline control points, then recompute the Frenet-Serret frame at each spline point (taking the derivative of S(t) to compute T and updating N and B as suggested in the reference papers). Once you have the updated T, N and B, rebuild the matrix A and compute the mesh vertex positions with the formula from step 3.
Results can be seen in pictures of the above mentioned papers.
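A minimal NumPy sketch of steps 1-5 (illustrative only: the frame construction below projects a fixed up vector instead of using the more robust updates from the papers, and the closest-point search that produces each t_i is omitted):

import numpy as np

def frame(spline, dspline, t):
    # Frenet-style frame at parameter t; the returned A has columns T, N, B.
    T = dspline(t)
    T = T / np.linalg.norm(T)
    up = np.array([0.0, 0.0, 1.0])      # breaks down if T is parallel to up
    N = up - np.dot(up, T) * T
    N = N / np.linalg.norm(N)
    B = np.cross(T, N)
    return np.column_stack([T, N, B])

def bind(vertices, ts, spline, dspline):
    # Steps 1-4: store x_i = A^T p_i with p_i = v_i - S(t_i).
    # A is orthogonal, so inv(A) is just its transpose.
    return np.array([frame(spline, dspline, t).T @ (v - spline(t))
                     for v, t in zip(vertices, ts)])

def deform(coords, ts, spline, dspline):
    # Step 5: rebuild v_i = S(t_i) + A x_i from the edited spline.
    return np.array([spline(t) + frame(spline, dspline, t) @ x
                     for x, t in zip(coords, ts)])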

Deriving a lower triangular matrix with positive diagonal in R

I am creating an optimization algorithm where I need to plug in an initial value of A, which is a lower triangular matrix with positive diagonal. My first question is how to generate a random lower triangular matrix with positive diagonal as an initial value matrix in R. And is it a good idea to choose a random matrix? If not, what are better ways to make an initial guess for this type of matrix?
I can imagine there are much better solutions depending on your specific application, but we can set the diagonal to half-normal values (i.e. |e| where e ~ N(0,1)) and set the lower-triangular off-diagonal elements to standard normal values. ...
n <- 10
# half-normal entries on the diagonal
M <- diag(abs(rnorm(n)))
# standard normal entries below the diagonal (filled column-wise)
M[lower.tri(M, diag = FALSE)] <- rnorm(n * (n - 1) / 2)

Left and right eigenvectors in Julia

I have a general real matrix (i.e. not symmetric or Hermitian, etc.), and I would like to find its right eigenvectors and corresponding left eigenvectors in Julia.
Julia's eigen function returns the right eigenvectors only. I can find the left eigenvectors by doing
eigen(copy(M'))
but this requires copying the whole matrix and performing the eigendecomposition again, and there is no guarantee that the eigenvectors will be in the same order. (The copy is necessary because there is no eigen method for matrices of type Adjoint.)
In Python we have scipy.linalg.eig, which can compute the left and right eigenvectors simultaneously in a single pass; this is more efficient and guarantees that they will be in the same order. Is there something similar in Julia?
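For reference, a minimal sketch of that SciPy call:

import numpy as np
from scipy.linalg import eig

A = np.array([[1.0, 0.1],
              [0.1, 1.0]])
w, vl, vr = eig(A, left=True, right=True)   # eigenvalues, left and right eigenvectors
# right eigenvectors: A vr[:, i] = w[i] vr[:, i]
assert np.allclose(A @ vr, vr * w)
# left eigenvectors: vl[:, i]^H A = w[i] vl[:, i]^H
assert np.allclose(vl.conj().T @ A, np.diag(w) @ vl.conj().T)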
The left eigenvectors can be computed by taking the inverse of the matrix formed by the right eigenvectors:
using LinearAlgebra
A = [1 0.1; 0.1 1]
F = eigen(A)
Q = eigvecs(F) # right eigenvectors
QL = inv(eigvecs(F)) # left eigenvectors
Λ = Diagonal(eigvals(F))
# check the results
A * Q ≈ Q * Λ # returns true
QL * A ≈ Λ * QL # returns true, too
# in general we have:
A ≈ Q * Λ * inv(Q)
In the above example the rows of QL are the left eigenvectors.
If the left eigenvectors only need to be applied to a vector v, it is preferable to compute Q \ v instead of inv(Q) * v.
I use the SVD factorization, which decomposes a matrix into three matrices: U S V' = M. The columns of U are the left singular vectors, the columns of V are the right singular vectors, and S is diagonal and holds the singular values, which are the square roots of the eigenvalues of M'M.
Note that singular vectors coincide with eigenvectors only in a special case: U = V if and only if M is symmetric and positive semidefinite (as in PCA, where both are used interchangeably on a correlation matrix).
Here are some sources where I confirmed my info:
https://www.cc.gatech.edu/~dellaert/pubs/svd-note.pdf
https://en.wikipedia.org/wiki/Singular_value_decomposition
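A quick numerical check of that special case (a minimal NumPy sketch; the symmetric positive semidefinite matrix is constructed just for illustration):

import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
M = B @ B.T                      # symmetric positive semidefinite by construction

w, Q = np.linalg.eigh(M)         # eigenvalues, sorted ascending
U, s, Vt = np.linalg.svd(M)      # singular values, sorted descending

# for a symmetric PSD matrix the singular values equal the eigenvalues
assert np.allclose(s, w[::-1])
# and U, V coincide with the eigenvectors up to column order and sign
assert np.allclose(np.abs(U), np.abs(Q[:, ::-1]))
assert np.allclose(np.abs(Vt.T), np.abs(Q[:, ::-1]))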

Calculate Rao's quadratic entropy

Rao's QE is a weighted sum over a Euclidean distance matrix. I have the vectors for the elements of the d_ij's in a data table dt, one column per element (say there are x of them). p is the final column. nrow = S. The double sums run over the lower-left (or, since the matrix is symmetric, upper-right) elements of the distance matrix.
If I only needed an unweighted distance matrix I could simply call dist() on the x columns. How do I weight the d_ij's by the product of p_i and p_j?
An example data set is at https://github.com/GeraldCNelson/nutmod/blob/master/RaoD_example.csv, with the p's in the column called foodQ.ratio.
You still start with dist for the raw Euclidean distance matrix; call it D. As you will read in R - How to get row & column subscripts of matched elements from a distance matrix, a "dist" object is not a real matrix but a 1D array, so first do D <- as.matrix(D) (or D <- dist2mat(D) from that answer) to convert it to a complete matrix before the following.
Now, let p be the vector of weights; Rao's QE is just the quadratic form p'Dp / 2:
c(crossprod(p, D %*% p)) / 2
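The division by 2 matches your double sum over the lower triangle: since D is symmetric with zero diagonal,

\sum_{i < j} d_{ij} p_i p_j = \frac{1}{2} \sum_{i=1}^{S} \sum_{j=1}^{S} d_{ij} p_i p_j = \frac{1}{2} p^\top D p.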
Note, I am not doing everything in the most efficient way: I perform the symmetric matrix-vector multiplication D %*% p using the full D rather than just its lower triangular part, because R has no user-level routine for triangular matrix-vector multiplication. So I compute the full quadratic form and then divide by 2.
This doubles the necessary computation, and making D a full matrix also doubles memory costs. But if your problem is of small to medium size, this is absolutely fine. For large problems, if you are an R and C wizard, call the BLAS routine dtrmv (or even dtpmv) for the triangular matrix-vector computation.
Update
I just found this simple paper, Rao's quadratic entropy as a measure of functional diversity based on multiple traits, on the definition and use of Rao's QE. It mentions that we can replace the Euclidean distance with the Mahalanobis distance. In case you want to do this, use my code in Mahalanobis distance of each pair of observations for fast computation of the Mahalanobis distance matrix.

Split a transform matrix into orthogonal matrix and scale matrix

If I have a matrix built from scale, translate, and rotation transforms, I want to split this matrix into two matrices: one a rotation+translation matrix, the other a scale matrix.
I want to compute the correct transform for surface normal vectors, so I need the orthogonal part of the matrix for that computation.
Any ideas?
If I have a matrix built from scale, translate, and rotation transforms, I want to split this matrix into two matrices: one a rotation+translation matrix, the other a scale matrix.
I'm assuming this matrix you are talking about is the 4x4 matrix that is widely used by some and widely despised by others, with the fourth row being 0,0,0,1.
I'll call these two operations "scale" and "rotate+translate". Note well: these operations are not commutative. Scaling a 3-vector and then rotating/translating the scaled vector yields a different result than performing the operations in the reverse order.
Case 1, operation is "rotate+translate", then "scale".
Let SR = S*R, where S is a 3x3 diagonal matrix with positive diagonal elements (a scaling matrix) and R is a 3x3 orthonormal rotation matrix. The rows of the matrix SR will be orthogonal to one another, but the columns will not be. The scale factors are the Euclidean norms (the square roots of the sums of squares) of the rows of SR.
Algorithm:
Given 4x4 matrix A, produce 4x4 scaling matrix S, 4x4 rotation+translation matrix T
A = [ SR (3x3)   Sx (3x1) ]
    [ 0  (1x3)   1        ]
Partition A into a 3x3 matrix SR and a 3-vector Sx as depicted above.
Construct the scaling matrix S. The first three diagonal elements are the vector norms of the rows of matrix SR; the last diagonal element is 1.
Construct the 4x4 rotation+translation matrix T by dividing each row of A by the corresponding scale factor.
Case 2, operation is "scale", then "rotate+translate".
Now consider the case RS = R*S. Here the columns of the upper 3x3 block of A will be orthogonal to one another, but the rows will not be. In this case the scale factors are the Euclidean norms of the columns of the matrix RS.
Algorithm:
Given 4x4 matrix A, produce 4x4 rotation+translation matrix T, 4x4 scaling matrix S
A = [ RS (3x3)   x (3x1) ]
    [ 0  (1x3)   1       ]
Partition A into a 3x3 matrix RS and a 3-vector x as depicted above.
Construct the scaling matrix S. The first three diagonal elements are the vector norms of the columns of matrix RS; the last diagonal element is 1.
Construct the 4x4 rotation+translation matrix T by dividing each of the first three columns of A by the corresponding scale factor.
If the scaling is not uniform (e.g., scale x by 2, y by 4, z by 1/2), you can tell the order of operations by looking at the inner products of the rows and of the columns of the upper 3x3 matrix with one another. Scaling last (my case 1) means the row inner products will be very close to zero but the column inner products will be nonzero. Scaling first (my case 2) reverses the situation. If the scaling is uniform, there is no way to tell which case is which; you need to know beforehand.
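A minimal NumPy sketch of both cases (the function name and signature are mine, for illustration):

import numpy as np

def split_transform(A, scale_first):
    # Returns (T, S) with T a 4x4 rotation+translation matrix and S a
    # 4x4 scaling matrix, so that A = T @ S (case 2) or A = S @ T (case 1).
    A = np.asarray(A, dtype=float)
    M = A[:3, :3]
    S = np.eye(4)
    T = A.copy()
    if scale_first:                       # case 2: A = T @ S
        s = np.linalg.norm(M, axis=0)     # norms of the columns
        T[:3, :3] = M / s                 # divide each column by its factor
    else:                                 # case 1: A = S @ T
        s = np.linalg.norm(M, axis=1)     # norms of the rows
        T[:3, :] = A[:3, :] / s[:, None]  # divide each row (incl. translation)
    S[:3, :3] = np.diag(s)
    return T, S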
Just an idea:
Multiply the matrix by the unit vector (1/sqrt(3), 1/sqrt(3), 1/sqrt(3)),
check the length of the resulting vector,
and scale the matrix by the reciprocal of that length. Now you have an orthogonal matrix (this assumes the scaling is uniform).
Create a new scale matrix with the scale you found.
Remove the translation to get a 3x3 matrix.
Perform the polar decomposition via SVD.
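For that last step, a minimal sketch of the polar decomposition via SVD (NumPy; M is the 3x3 matrix with translation removed):

import numpy as np

def polar_decompose(M):
    # M = U diag(s) Vt  =>  M = (U Vt) (V diag(s) Vt) = R P,
    # with R orthogonal (the rotation) and P symmetric positive
    # semidefinite (the scale/stretch part).
    U, s, Vt = np.linalg.svd(M)
    R = U @ Vt
    P = Vt.T @ np.diag(s) @ Vt
    return R, P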
