Diagonalizable matrices and their relation to eigenspaces - math

If a 4x4 matrix A has two eigenvalues, 3 and 0, and the eigenspace of each has dimension 2, is the matrix diagonalizable?

Yes, since you can find 4 linearly independent eigenvectors in those two two-dimensional eigenspaces: the eigenspace dimensions add up to 4, the size of the matrix. Writing your matrix with respect to the basis consisting of those 4 vectors yields a diagonal matrix. But I think this question is too specific to be asked here. You can learn basic linear algebra from books such as Linear Algebra by Serge Lang.
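As a quick numerical illustration (a sketch in Python with NumPy; the specific matrix below is made up, since the question only fixes the eigenvalues and eigenspace dimensions), one can construct such a matrix and verify that it has four linearly independent eigenvectors:

```python
import numpy as np

# Illustrative 4x4 matrix with eigenvalues 3, 3, 0, 0, built by conjugating
# diag(3, 3, 0, 0) with an (almost surely) invertible matrix P.
rng = np.random.default_rng(0)
P = rng.random((4, 4)) + np.eye(4)
A = P @ np.diag([3.0, 3.0, 0.0, 0.0]) @ np.linalg.inv(P)

# Both eigenspaces are 2-dimensional, so there are 4 independent eigenvectors
# and A = V diag(w) V^{-1} is a diagonalization.
w, V = np.linalg.eig(A)
print(np.round(np.sort(w.real), 6))                        # [0. 0. 3. 3.]
print(np.linalg.matrix_rank(V))                            # 4
print(np.allclose(V @ np.diag(w) @ np.linalg.inv(V), A))   # True
```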

Related

k means clustering on matrix

I am trying to cluster multidimensional functional data with the "kmeans" algorithm. By that I mean: I no longer have a single vector per row or individual, but rather a 3x3 observation matrix per individual. For example, individual 1 has the following observations:
(x1, x2, x3),(y1,y2,y3),(z1,z2,z3).
The same structure of observations is also given for the other individuals. Do you know how I can cluster with "kmeans" using all 3 observation vectors, and not only one observation vector, as "kmeans" clustering is normally used?
Would you do it for each observation vector, e.g. (x1, x2, x3), separately and then combine the information somehow? I want to do this with the kmeans() function in R.
Many thanks for your answers!
Using k-means you interpret each observation as a point in an N-dimensional vector space. Then you minimize the distances between your observations and the cluster centers.
Since the data is viewed as points in an N-dimensional space, the actual arrangement of the values does not matter.
You can therefore either tell your k-means routine to use a matrix norm, for example the Frobenius norm, to compute the distances, or you can flatten your observations from 3 by 3 matrices to 1 by 9 vectors. The Frobenius norm of an NxN matrix is equivalent to the Euclidean norm of the corresponding 1xN^2 vector.
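As an illustration of the flattening approach, here is a minimal sketch (the question asks about kmeans() in R; this uses Python with scikit-learn, and the data layout is an assumption):

```python
import numpy as np
from sklearn.cluster import KMeans

# Assumed layout: one 3x3 observation matrix per individual,
# stacked into an array of shape (n_individuals, 3, 3).
rng = np.random.default_rng(42)
n_individuals = 100
X = rng.normal(size=(n_individuals, 3, 3))

# Flatten each 3x3 matrix into a 9-dimensional vector; the Euclidean distance
# between flattened vectors equals the Frobenius distance between the matrices.
X_flat = X.reshape(n_individuals, -1)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_flat)
print(km.labels_[:10])
```

In R the analogue would be to build one flattened 9-element row per individual and pass the resulting matrix to kmeans() with the desired number of centers.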
Just pass all three columns as the argument to kmeans(); it will calculate the distances in three dimensions, if that is what you are looking for.

SVD in LSI in the book Introduction to Information Retrieval

In Example 18.4 of the book Introduction to Information Retrieval, the term-document matrix is decomposed using SVD. My question is: why is Σ a 5x5 matrix in the example? Shouldn't it be a 5x6 matrix? Is the book wrong?
Here is the link to Chapter 18 of the book Introduction to Information Retrieval. Thanks!
The book is correct. A term-document matrix (of dimension TxD, terms by documents) is split into a product of three matrices. The middle matrix (denoted Σ in the book) is the key matrix, whose dimension is TxT (T = 5 in the example).
Intuitively, you can think of this matrix as encoding the relationships between terms. In the best case, all the column vectors of this matrix would be linearly independent, meaning that they form a basis of the term space and there is no dependence between the terms. However, this is not true in practice. You'll find that the rank of this matrix is typically a few orders of magnitude less than T (say T'), meaning that there are T - T' linearly dependent column vectors in the matrix.
One can then take a lower-order approximation of this matrix by considering only a T'xT' term matrix. In effect, you take the principal eigenvalues of the matrix and project your vectors onto the corresponding eigenvectors (treated as a new basis) using rotation and scaling. That's precisely what spectral decomposition or PCA (or LSA) does.
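The shapes can be checked directly with a reduced SVD; here is a small sketch in Python/NumPy, with a random matrix standing in for the book's 5x6 term-document matrix:

```python
import numpy as np

# Stand-in for the 5x6 term-document matrix of Example 18.4
# (5 terms, 6 documents); the actual entries don't matter for the shapes.
C = np.random.default_rng(1).random((5, 6))

# Reduced SVD: Sigma is square of size min(5, 6) = 5, matching the book.
U, s, Vt = np.linalg.svd(C, full_matrices=False)
print(U.shape, s.shape, Vt.shape)   # (5, 5) (5,) (5, 6)

# Low-rank approximation used by LSI: keep only the k largest singular values.
k = 2
C_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print(np.linalg.matrix_rank(C_k))   # 2
```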

How do I find the matrix of the linear transformation?

Going through the text Linear Algebra by A. O. Morris (2nd edition), I am trying to understand something about linear transformations.
There is a problem where the R-bases of U and V are given as
{u1, u2} and {v1,v2,v3} respectively and the linear transformation from U to V is given by
Tu1=v1+2v2-v3
Tu2=v1-v2
The problem is to
a) find the matrix of T relative to these bases,
b) the matrix relative to the R-bases
{-u1+u2,2u1-u2} and {v1,v1+v2,v1+v2+v3},
and c) the relationship between the two matrices.
From the very good treatment of the subject here https://math.stackexchange.com/questions/12383/determine-the-matrix-relative-to-a-given-basis I figured out that the first matrix of T (call it C) has columns
(1,2,-1),(1,-1,0).
Then for part b) I work out a matrix A, which I take to be the change of basis from the new ordered basis of U to the standard basis of U:
{(-1,2),(2,1)}
and a matrix B, which I take to be the change of basis from the new ordered basis of V to the standard basis of V:
{(1,1,1),(0,1,1),(0,0,1)}
I then find the inverse of B and form the product
B^{-1} C A as the answer to part c).
Somehow I do not seem to get it. I am completely at sea. I would appreciate help to understand this.
My sincere apologies for not being able to use Latex.
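For reference, the general change-of-basis relation the question is working toward can be written as below, where C is the matrix of T relative to the original bases, and A and B hold the coordinates of the new basis vectors of U and V (expressed in the old bases) as their columns:

```latex
\[
  [T]_{\text{new}} \;=\; B^{-1}\, C\, A,
  \qquad
  C =
  \begin{pmatrix}
    1 & 1 \\
    2 & -1 \\
    -1 & 0
  \end{pmatrix}.
\]
```

This matches the product B^{-1} C A described above, provided A and B are built with the new basis vectors' coordinates as columns.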

Calculating the trace of a matrix to the power k

I need to calculate the trace of a matrix to the power of 3 and 4, and it needs to be as fast as possible.
The matrix here is an adjacency matrix of a simple graph, therefore it is square, symmetric, its entries are always 1 or 0 and the diagonal elements are always 0.
Optimization is trivial for the trace of the matrix to the power of 2:
- We only need the diagonal entries (i,i) for the trace and can skip all others.
- As the matrix is symmetric, each such entry is just the entries of the i-th row squared and summed up.
- And as the entries are just 1 or 0, the squaring can be skipped.
Another idea I found on Wikipedia was summing up all elements of the Hadamard product, i.e. the entry-wise product, but I don't know how to extend this method to the powers 3 and 4.
See http://en.wikipedia.org/wiki/Trace_(linear_algebra)#Properties
Maybe I'm just blind but I can't think of a simple solution.
In the end I need a C++ implementation, but I think that's not important to the question.
Thanks in advance for any help.
The trace is the sum of the eigenvalues, and the eigenvalues of a matrix power are just the eigenvalues raised to that power.
That is, if l_1, ..., l_n are the eigenvalues of your matrix, then trace(M^p) = l_1^p + l_2^p + ... + l_n^p.
Depending on your matrix, you may want to go with computing the eigenvalues and then summing. If your matrix has low rank (or can be well approximated by a low-rank matrix), you can compute the eigenvalues very cheaply (a partial eigendecomposition has complexity O(n*k^2), where k is the rank).
Edit: You mention in the comments that it's 1600x1600, in which case finding all the eigenvalues should be no problem. Here's one of many C++ libraries you can use for this: http://code.google.com/p/redsvd/
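A sketch of the eigenvalue approach in Python/NumPy (the final implementation is meant to be C++, but the idea carries over directly; the graph below is random and only for illustration):

```python
import numpy as np

# Random symmetric 0/1 adjacency matrix with a zero diagonal (illustrative;
# the question mentions 1600x1600, a smaller size is used here).
rng = np.random.default_rng(0)
n = 400
upper = np.triu(rng.integers(0, 2, size=(n, n)), k=1)
A = (upper + upper.T).astype(float)

# For a symmetric matrix, eigvalsh is the appropriate (and fast) routine.
w = np.linalg.eigvalsh(A)

trace_A3 = np.sum(w ** 3)   # equals trace(A^3), up to floating-point error
trace_A4 = np.sum(w ** 4)   # equals trace(A^4), up to floating-point error

print(trace_A3, np.trace(np.linalg.matrix_power(A, 3)))
print(trace_A4, np.trace(np.linalg.matrix_power(A, 4)))
```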
Ok, I just figured this one out myself.
The important thing I did not know was this:
If A is the adjacency matrix of the directed or undirected graph G, then the matrix A^n (i.e., the matrix product of n copies of A) has an interesting interpretation: the entry in row i and column j gives the number of (directed or undirected) walks of length n from vertex i to vertex j. This implies, for example, that the number of triangles in an undirected graph G is exactly the trace of A^3 divided by 6.
(Copied from http://en.wikipedia.org/wiki/Adjacency_matrix#Properties)
Retrieving the number of closed walks of a given length from node i back to i, for all n nodes, can be done cheaply for sparse graphs by using adjacency lists instead of matrices.
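A small sketch of the walk-counting idea in Python (the triple loop over adjacency lists below is one straightforward way to count closed walks of length 3, i.e. trace(A^3); the graph and names are illustrative):

```python
import numpy as np

# Illustrative random undirected graph, given as a 0/1 adjacency matrix.
rng = np.random.default_rng(3)
n = 50
upper = np.triu(rng.integers(0, 2, size=(n, n)), k=1)
A = upper + upper.T

# Adjacency lists.
adj = [np.flatnonzero(A[i]).tolist() for i in range(n)]

# Closed walks of length 3 starting and ending at i: i -> j -> k -> i.
closed_walks_3 = 0
for i in range(n):
    for j in adj[i]:
        for k in adj[j]:
            if A[k, i]:
                closed_walks_3 += 1

# trace(A^3) counts exactly these walks; each triangle is counted 6 times.
print(closed_walks_3, np.trace(np.linalg.matrix_power(A, 3)))
print("triangles:", closed_walks_3 // 6)
```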
Nevertheless, thanks for your answers!

Normalizing a matrix with respect to a constraint

I am doing a project which requires me to normalize a sparse NxN matrix. I read somewhere that we can normalize a matrix A so that its eigenvalues lie in [-1, 1] by multiplying it on both sides with a diagonal matrix D, i.e. N = D^{-1/2} * A * D^{-1/2}.
But I am not sure what D is here. Also, is there a function in Matlab that can do this normalization for sparse matrices?
It's possible that I am misunderstanding your question, but as it reads it makes no sense to me.
A matrix is just a representation of a linear transformation. Given that a matrix A corresponds to a linear transformation T, any matrix of the form B^{-1} A B (called the conjugate of A by B) for an invertible matrix B corresponds to the same transformation, represented in a different basis. In particular, the eigenvalues of a matrix are the eigenvalues of the underlying linear transformation, so conjugating by an invertible matrix cannot change the eigenvalues.
It's possible that you meant that you want to scale the eigenvectors so that each has unit length. This is a common thing to do, since then the eigenvalues tell you how far a vector of unit length is magnified by the transformation.
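If the D intended here is the diagonal matrix of row sums (a common choice in spectral graph theory, but an assumption on my part, since the question does not say), then for a symmetric nonnegative A the matrix D^{-1/2} A D^{-1/2} does have eigenvalues in [-1, 1], and the construction works fine for sparse matrices. A minimal sketch in Python/SciPy rather than Matlab:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Illustrative sparse symmetric nonnegative matrix A (think adjacency matrix).
n = 1000
A = sp.random(n, n, density=0.01, random_state=7, format="csr")
A = A + A.T

# Assumed D: the diagonal matrix of row sums ("degrees").
d = np.asarray(A.sum(axis=1)).ravel()
d_inv_sqrt = 1.0 / np.sqrt(np.where(d > 0, d, 1.0))   # guard against empty rows
D_inv_sqrt = sp.diags(d_inv_sqrt)

# N = D^{-1/2} A D^{-1/2}; for this choice of D (and nonnegative symmetric A)
# the eigenvalues of N lie in [-1, 1].
N = D_inv_sqrt @ A @ D_inv_sqrt

# Largest-magnitude eigenvalue as a quick check.
print(eigsh(N, k=1, which="LM", return_eigenvectors=False))
```

In Matlab the same construction takes a couple of lines using sum(A,2), spdiags, and sparse matrix products.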

Resources