Multiplying by inverse matrices?

If I have a matrix which is a combined World*View*Projection and I multiply it by the inverse of the projection, does that yield the WorldView matrix or something else? If not, how can I extract the WorldView matrix from a WorldViewProjection matrix?
Thanks for any help :)

If you multiply on the right by the inverse of Projection, you will get World*View.
If you multiply on the left, you'll get something entirely different, since matrix multiplication isn't commutative.
This assumes that Projection has an inverse. Not all matrices do.
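To see why, expand the product and use the associativity of matrix multiplication (assuming the row-vector convention where WorldViewProjection = World*View*Projection):

(W * V * P) * P^{-1} = (W * V) * (P * P^{-1}) = (W * V) * I = W * V

Multiplying on the left instead gives P^{-1} * W * V * P, which is a different matrix altogether.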

Related

Maxima: Eigenvectors output

So I solve for the eigenvectors of a matrix in Maxima.
a:matrix([10,10],[-4,-3]);
/* outputs the matrix */
vec:eigenvectors(a);
[[[5,2],[1,1]],[[[1,-1/2]],[[1,-4/5]]]]
I've hand-calculated the eigenvalues and the (1x2) eigenvectors as 5: [-2,1] and 2: [-5,4], which are correct. What is Maxima outputting?
Eigenvectors are only determined up to a multiplicative constant. That is, if x is an eigenvector, then so is a*x where a is a scalar. I think if you look at your result and Maxima's result, you'll see that they are equivalent in that sense.
There are different normalization schemes. Looks like Maxima makes the first element 1. Another common scheme is to make the norm of the eigenvector equal to 1. Or one can just leave them unnormalized.
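As for what the output means: Maxima's eigenvectors returns a list of the form [[eigenvalues, multiplicities], [eigenvectors]], so here the eigenvalues 5 and 2 each have multiplicity 1, followed by one basis eigenvector for each. Rescaling Maxima's vectors recovers the hand-calculated ones:

-2 * [1, -1/2] = [-2, 1]    /* eigenvalue 5 */
-5 * [1, -4/5] = [-5, 4]    /* eigenvalue 2 */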

How to square a symmetric matrix using only upper or lower triangle?

I am attempting to calculate a topological overlap measure matrix without using TOMsimilarity() in the WGCNA package. In this calculation, I need to square a large (44000x44000) symmetric matrix.
Is it possible to do this by only using either the upper or lower triangle of the matrix?
I've seen it done by creating a distance matrix from the symmetric matrix, but I was hoping someone could guide me in another direction.
The goal is to complete the calculation as quickly as possible.
Currently, the code is as follows:
correlation <- cor(data)                    # pairwise correlations between columns
adjacency <- (0.5 * (1 + correlation))^2    # rescale to [0,1], soft-threshold with power 2
k <- apply(adjacency, 1, sum)               # connectivity of each node
summatrix <- matrix(k, ncol = length(k), nrow = length(k))
min.k <- pmin(summatrix, t(summatrix))      # min(k_i, k_j) for every pair
num <- adjacency %*% adjacency + adjacency  # shared-neighbour term plus direct adjacency
den <- min.k + 1 - adjacency
tom <- num / den                            # topological overlap
diag(tom) <- 1
disstom <- 1 - tom                          # dissimilarity version
Thanks in advance!
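One idea worth trying: for a symmetric matrix A, A %*% A equals crossprod(A), and base R's crossprod() with a single argument dispatches to a symmetric BLAS kernel (DSYRK) that computes only one triangle of the result and mirrors it, so it is usually roughly twice as fast as the general %*%. A minimal sketch, assuming the adjacency matrix from the code above:

A <- adjacency              # symmetric, since cor(data) is symmetric
stopifnot(isSymmetric(A))   # the shortcut below relies on symmetry
A2 <- crossprod(A)          # t(A) %*% A, which equals A %*% A for symmetric A
# all.equal(A2, A %*% A)    # sanity check on a small test matrix first

For a dense 44000x44000 matrix the product still dominates the run time, so linking R against a multithreaded BLAS (OpenBLAS, MKL, etc.) will usually buy far more than any triangle trick written at the R level.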

Normalized Eigenvectors Maxima

I was wondering if anyone knows of any function in Maxima to find the normalized eigenvectors of a 21x21 matrix?
I am using the function dgeev, but I do not believe these eigenvectors are normalized.
I appreciate any thoughts,
Ben
The eigenvectors computed by dgeev are indeed normalized to have Euclidean norm = 1. Keep in mind that to compute the norm of a complex vector (let's call it v), you want
sqrt (ctranspose (v) . v)
Here ctranspose is the conjugate transpose.
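So if a vector v does come back unnormalized, dividing by that norm yields a unit eigenvector, in the same notation as above:

u = v / sqrt (ctranspose (v) . v)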
ueivectors normalizes the eigenvectors but apparently not the eigenvalues.

R: how to compute the distance between the columns of a matrix?

The R function dist "computes and returns the distance matrix computed by using the specified distance measure to compute the distances between the rows of a data matrix".
However, I want the distance measure to be computed between the columns of a data matrix, not the rows! How can I do that?
Do I need to transpose the matrix? If so, how? If not, should I use a different function?
Maybe you can use the R function t?
t(x) will transpose the matrix x.
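A minimal example (x here is just a toy matrix made up for illustration): transposing first makes dist() compare what were originally the columns:

x <- matrix(rnorm(20), nrow = 4)   # 4 observations (rows), 5 variables (columns)
d <- dist(t(x))                    # distances between the 5 columns of x
# dist(x) would instead compare the 4 rows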

Normalizing a matrix with respect to a constraint

I am doing a project which requires me to normalize a sparse NxN matrix. I read somewhere that we can normalize a matrix so that its eigenvalues lie in [-1,1] by multiplying it with a diagonal matrix D such that N = D^{-1/2}*A*D^{-1/2}.
But I am not sure what D is here. Also, is there a function in Matlab that can do this normalization for sparse matrices?
It's possible that I am misunderstanding your question, but as it reads it makes no sense to me.
A matrix is just a representation of a linear transformation. Given that a matrix A corresponds to a linear transformation T, any matrix of the form B^{-1} A B (called the conjugate of A by B) for an invertible matrix B corresponds to the same transformation, represented in a different basis. In particular, the eigenvalues of a matrix correspond to the eigenvalues of the linear transformation, so conjugating by an invertible matrix cannot change the eigenvalues.
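A one-line check of that claim: if A v = lambda v and w = B^{-1} v, then

(B^{-1} A B) w = B^{-1} A (B B^{-1}) v = B^{-1} (A v) = lambda (B^{-1} v) = lambda w

so the conjugated matrix has exactly the same eigenvalues, with each eigenvector mapped by B^{-1}.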
It's possible that you meant that you want to scale the eigenvectors so that each has unit length. This is a common thing to do, since then the eigenvalues tell you how far a vector of unit length is magnified by the transformation.
