Finding connected components in adjacency matrix

I was wondering: is it possible to find the connected components in an adjacency matrix?
Assuming I have the following adjacency matrix (the rows and columns index different sets of nodes), if I draw the adjacency graph it is easy to find the connected components, which are:
1)
A2 -> B3
A2 -> B1
A1 -> B1
2)
A3 -> B2
I was wondering: are there any operations I can do on the matrix to find the connected components? And assuming there is a way to find the connected components from the matrix without needing the graph, why does that method work? What is the proof?
Thanks very much.
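For illustration, here is a hedged sketch in R (mine, not from the original thread) of one purely matrix-based approach. The key fact is that entry (i, j) of A^k counts the walks of length k from node i to node j, so the zero/nonzero pattern of (I + A)^n tells you which nodes can reach which, and identical rows of that pattern are exactly the connected components. Since the rows and columns index different node sets here, the bipartite matrix M is first made square and symmetric via the block form [0 M; M' 0]:

M <- matrix(c(1, 0, 0,    # rows A1..A3, columns B1..B3:
              1, 0, 1,    # edges A1-B1, A2-B1, A2-B3, A3-B2
              0, 1, 0), nrow = 3, byrow = TRUE)
n <- sum(dim(M))
A <- rbind(cbind(matrix(0, 3, 3), M),      # square symmetric adjacency of
           cbind(t(M), matrix(0, 3, 3)))   # all 6 nodes: [0 M; M' 0]
reach <- diag(n) + A
for (k in 1:n) reach <- ((reach %*% (diag(n) + A)) > 0) * 1  # reachability closure
nodes <- c("A1", "A2", "A3", "B1", "B2", "B3")
split(nodes, apply(reach, 1, paste, collapse = ""))          # groups rows by pattern

This reproduces the two components from the question, {A1, A2, B1, B3} and {A3, B2}; the walk-counting property of matrix powers (quoted in the trace question further down this page) is the reason it works.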

Related

Does PCA give us an ordered list of features, from most important to least important?

I have a general question about principal component analysis:
I know PCA gives us the direction in which the data have the most variation. I am wondering: can PCA give us an ordering of the features, from the most important one to the least?
For example, if I have 8 features f1,...,f8, it would tell me that f5 is the most important, then f3, then f8, then f4, and so on.
If yes, what function should I call in R?
Each principal component is a vector of length p. If you have eight features, PCA will produce 8 principal components, each of length 8, in which each element is a coefficient (loading) for features 1 through 8. The elements of the principal components are in the same order as your variables. The larger the absolute value of the element with the corresponding index (i = 1 ... p), the more that variable contributes to that PC. The first PC usually captures the most variation, followed by PC2, and so on.
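To make this concrete, here is a minimal R sketch with made-up data (prcomp is one standard choice of function; the ranking rule is the |loading| criterion described above):

X <- matrix(rnorm(100 * 8), ncol = 8)    # toy data: 100 observations, 8 features
colnames(X) <- paste0("f", 1:8)
pc <- prcomp(X, scale. = TRUE)
pc$rotation[, 1]                         # loadings of each feature on PC1
names(sort(abs(pc$rotation[, 1]), decreasing = TRUE))  # features ranked by |loading| on PC1

Note that this ranks contributions to PC1 only; it is not a single global importance ordering of the features.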

Simple Orthographic Structure from Motion using R -- Determining Metric Constraints

I would like to build a simple structure from motion program according to Tomasi and Kanade [1992]. The article can be found below:
https://people.eecs.berkeley.edu/~yang/courses/cs294-6/papers/TomasiC_Shape%20and%20motion%20from%20image%20streams%20under%20orthography.pdf
This method seems elegant and simple, however, I am having trouble calculating the metric constraints outlined in equation 16 of the above reference.
I am using R and have outlined my work thus far below:
Given a set of images, I want to track the corners of the three cabinet doors and the one picture (black points on the images). First we read in the points as a measurement matrix w, with the x-coordinates of the P tracked points across the F frames stacked on top of the corresponding y-coordinates (so w is '2FxP'; here F = 8 and P = 16).
Ultimately, we want to factorize w into a rotation matrix R and shape matrix S that describe the 3 dimensional points. I will spare as many details as I can but a complete description of the maths can be gleaned from the Tomasi and Kanade [1992] paper.
I supply w below:
w.vector=c(0.2076,0.1369,0.1918,0.1862,0.1741,0.1434,0.176,0.1723,0.2047,0.233,0.3593,0.3668,0.3744,0.3593,0.3876,0.3574,0.3639,0.3062,0.3295,0.3267,0.3128,0.2811,0.2979,0.2876,0.2782,0.2876,0.3838,0.3819,0.3819,0.3649,0.3913,0.3555,0.3593,0.2997,0.3202,0.3137,0.31,0.2718,0.2895,0.2867,0.825,0.7703,0.742,0.7251,0.7232,0.7138,0.7345,0.6911,0.1937,0.1248,0.1723,0.1741,0.1657,0.1313,0.162,0.1657,0.8834,0.8118,0.7552,0.727,0.7364,0.7232,0.7288,0.6892,0.4309,0.3798,0.4021,0.3965,0.3844,0.3546,0.3695,0.3583,0.314,0.3065,0.3989,0.3876,0.3857,0.3781,0.3989,0.3593,0.5184,0.4849,0.5147,0.5193,0.5109,0.4812,0.4979,0.4849,0.3536,0.3517,0.4121,0.3951,0.3951,0.3781,0.397,0.348,0.5175,0.484,0.5091,0.5147,0.5128,0.4784,0.4905,0.4821,0.7722,0.7326,0.7326,0.7232,0.7232,0.7119,0.7402,0.7006,0.4281,0.3779,0.3918,0.3863,0.3825,0.3472,0.3611,0.3537,0.8043,0.7628,0.7458,0.7288,0.727,0.7213,0.7364,0.6949,0.5789,0.5491,0.5761,0.5817,0.5733,0.5444,0.5537,0.5379,0.3649,0.3536,0.4177,0.3951,0.3857,0.3819,0.397,0.3461,0.697,0.671,0.6821,0.6821,0.6719,0.6412,0.6468,0.6235,0.3744,0.3649,0.4159,0.3819,0.3781,0.3612,0.3763,0.314,0.7008,0.6691,0.6794,0.6812,0.6747,0.6393,0.6412,0.6235,0.7571,0.7345,0.7439,0.7496,0.7402,0.742,0.7647,0.7213,0.5817,0.5463,0.5696,0.5779,0.5761,0.5398,0.551,0.5398,0.7665,0.7326,0.7439,0.7345,0.7288,0.727,0.7515,0.7062,0.8301,0.818,0.8571,0.8878,0.8766,0.8561,0.858,0.8394,0.4121,0.3876,0.4347,0.397,0.38,0.3631,0.3668,0.2971,0.912,0.8962,0.9185,0.939,0.9259,0.898,0.8887,0.8571,0.3989,0.3781,0.4215,0.3725,0.3612,0.3461,0.3423,0.2782,0.9092,0.8952,0.9176,0.9399,0.925,0.8971,0.8887,0.8571,0.4743,0.4536,0.4894,0.4517,0.446,0.4328,0.4385,0.3706,0.8273,0.8171,0.8571,0.8878,0.8766,0.8543,0.8561,0.8394,0.4743,0.4554,0.4969,0.4668,0.4536,0.4404,0.4536,0.3857)
w=matrix(w.vector,ncol=16,nrow=16,byrow=FALSE)
Then create the registered measurement matrix wm according to equation (2), by subtracting the row means:
wm = w - rowMeans(w)
We can decompose wm into a '2FxP' matrix o1, a diagonal 'PxP' matrix e, and a 'PxP' matrix o2 using a singular value decomposition.
svdwm <- svd(wm)
o1 <- svdwm$u
e <- diag(svdwm$d)
o2 <- t(svdwm$v) ## don't forget the transpose!
However, because of noise, we only keep the first 3 columns of o1, the first 3 values of e, and the first 3 rows of o2:
o1p <- svdwm$u[,1:3]
ep <- diag(svdwm$d[1:3])
o2p <- t(svdwm$v)[1:3,] ## don't forget the transpose!
Now we can solve for rhat and shat in equation (14):
rhat <- o1p %*% ep^(1/2) ## elementwise ^ equals the matrix square root here because ep is diagonal
shat <- ep^(1/2) %*% o2p
However, these results are not unique, and we still need to solve for the true R and S via equation (15), R = rhat %*% Q and S = solve(Q) %*% shat, using the metric constraints of equation (16).
Now I need to find Q. I believe there are two potential methods but am unclear how to employ either.
Method 1 involves solving for B, where B = Q %*% t(Q), then using Cholesky decomposition to find Q. Method 1 appears to be the common choice in the literature; however, little detail is given as to how to actually solve the linear system. It is apparent that B is a '3x3' symmetric matrix with 6 unknowns. However, given the metric constraints (equations 16), I don't know how to solve for 6 unknowns given 3 equations. Am I forgetting a property of symmetric matrices?
Method 2 involves using non-linear methods to estimate Q and is less commonly used in the structure from motion literature.
Can anyone offer some advice as to how to go about solving this problem? Thanks in advance and let me know if I need to be more clear in my question.
Write B = Q %*% t(Q), a symmetric 3x3 matrix with six unknowns B11, B12, B13, B22, B23, B33, and let i_f and j_f denote the two rows of rhat belonging to frame f. Then the metric constraints of equation (16):
i_f^T Q Q^T i_f = 1 can be written as i_f^T B i_f = 1.
j_f^T Q Q^T j_f = 1 can be written as j_f^T B j_f = 1.
i_f^T Q Q^T j_f = 0 can be written as i_f^T B j_f = 0.
so our equations are linear in the six unknowns of B.
So the first equation, with i_f = (i1, i2, i3), can be written as:
i1^2*B11 + 2*i1*i2*B12 + 2*i1*i3*B13 + i2^2*B22 + 2*i2*i3*B23 + i3^2*B33 = 1
which is equivalent to the dot product g(i_f, i_f) . b, where b is the vector of unknowns.
To keep it short we define now:
b = (B11, B12, B13, B22, B23, B33)^T
g(u, v) = (u1*v1, u1*v2 + u2*v1, u1*v3 + u3*v1, u2*v2, u2*v3 + u3*v2, u3*v3)
(yes, this is a vector...)
So for all equations in all different frames f, we can write one big linear system G b = c: the rows of G are g(i_f, i_f), g(j_f, j_f) and g(i_f, j_f) for f = 1..F, and the right-hand side c contains a 1 for each of the two unit-length constraints per frame and a 0 for each orthogonality constraint.
Now you just need to solve for b, reassemble the symmetric B matrix, and recover Q from B using Cholesky decomposition or whatever...
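For concreteness, here is one way the above could look in R, continuing from rhat and shat. This is a sketch of my own, not code from the original answer; it assumes the first F rows of w (and hence of rhat) are the x-rows i_f and the last F rows are the y-rows j_f, and that the estimated B comes out positive definite so that chol() applies:

g <- function(u, v) {                      # coefficient row for (B11,B12,B13,B22,B23,B33)
  c(u[1]*v[1],
    u[1]*v[2] + u[2]*v[1],
    u[1]*v[3] + u[3]*v[1],
    u[2]*v[2],
    u[2]*v[3] + u[3]*v[2],
    u[3]*v[3])
}
nf <- nrow(rhat) / 2                       # number of frames F
G <- matrix(0, 3 * nf, 6)
rhs <- c(rep(1, 2 * nf), rep(0, nf))       # |i_f| = 1, |j_f| = 1, i_f . j_f = 0
for (f in 1:nf) {
  i_f <- rhat[f, ]
  j_f <- rhat[nf + f, ]
  G[f, ]          <- g(i_f, i_f)
  G[nf + f, ]     <- g(j_f, j_f)
  G[2 * nf + f, ] <- g(i_f, j_f)
}
b <- qr.solve(G, rhs)                      # least-squares solution of G b = rhs
B <- matrix(c(b[1], b[2], b[3],
              b[2], b[4], b[5],
              b[3], b[5], b[6]), 3, 3)
Q <- t(chol(B))                            # chol() returns U with B = t(U) %*% U, so Q = t(U)
R <- rhat %*% Q                            # equation (15)
S <- solve(Q) %*% shat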

Calculating the trace of a matrix to the power k

I need to calculate the trace of a matrix to the power of 3 and 4 and it needs to be as fast as it can get.
The matrix here is an adjacency matrix of a simple graph, therefore it is square, symmetric, its entries are always 1 or 0 and the diagonal elements are always 0.
Optimization is trivial for the trace of the matrix to the power of 2:
We only need the diagonal entries (i,i) for the trace, skip all others
As the matrix is symmetric these entries are just the entries of the i-th row squared and summed up
And as the entries are just 1 or 0 the square-operation can be skipped
Another idea I found on Wikipedia was summing up all the elements of the Hadamard product (entry-wise multiplication), but I don't know how to extend this method to the powers of 3 and 4.
See http://en.wikipedia.org/wiki/Trace_(linear_algebra)#Properties
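For what it's worth, the Hadamard idea does extend; this is a sketch of mine, not from the thread. Since trace(B %*% C) = sum(B * t(C)), and both A and A2 = A %*% A are symmetric, a single matrix product gives both traces:

A <- matrix(c(0, 1, 1, 1,   # small symmetric 0/1 adjacency for demonstration
              1, 0, 1, 0,
              1, 1, 0, 0,
              1, 0, 0, 0), 4, 4)
A2 <- A %*% A                # the only O(n^3) step
sum(A2 * A)                  # trace(A^3), here 6: one triangle {1,2,3}
sum(A2 * A2)                 # trace(A^4)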
Maybe I'm just blind but I can't think of a simple solution.
In the end I need a C++ implementation, but I think that's not important to the question.
Thanks in advance for any help.
The trace is the sum of the eigenvalues and the eigenvalues of a matrix power are just the eigenvalues to that power.
That is, if l_1, ..., l_n are the eigenvalues of your matrix, then trace(M^p) = l_1^p + l_2^p + ... + l_n^p.
Depending on your matrix you may want to go with computing the eigenvalues and then summing. If your matrix has low rank (or can be well approximated with a low rank matrix) you can compute the eigenvalues very cheaply (a partial eigendecomposition has complexity O(n*k^2) where k is the rank).
Edit: You mention in the comments that it's 1600x1600 in which case finding all the eigenvalues should be no problem. Here's one of many C++ codes that you can use for this http://code.google.com/p/redsvd/
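As a quick illustration in R (the question ultimately wants C++, but the idea is language-independent; this sketch is mine):

set.seed(7)
A <- matrix(0, 5, 5)
A[upper.tri(A)] <- rbinom(10, 1, 0.5)
A <- A + t(A)                                   # random symmetric 0/1 adjacency, zero diagonal
ev <- eigen(A, symmetric = TRUE, only.values = TRUE)$values
sum(ev^3)                                       # trace(A^3); triangles = this / 6
sum(ev^4)                                       # trace(A^4)
all.equal(sum(ev^3), sum(diag(A %*% A %*% A)))  # sanity check against the direct product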
Ok, I just figured this one out myself.
The important thing I did not know was this:
If A is the adjacency matrix of the directed or undirected graph G, then the matrix A^n (i.e., the matrix product of n copies of A) has an interesting interpretation: the entry in row i and column j gives the number of (directed or undirected) walks of length n from vertex i to vertex j. This implies, for example, that the number of triangles in an undirected graph G is exactly the trace of A^3 divided by 6.
(Copied from http://en.wikipedia.org/wiki/Adjacency_matrix#Properties)
Retrieving the number of walks of a given length from node i back to i for all n nodes can essentially be done in O(n) when dealing with sparse graphs and using adjacency lists instead of matrices.
Nevertheless, thanks for your answers!

LU decomposition of rectangular matrices

Method lu of package Matrix works fine for square matrices. However, I can't see why there is that square restriction. How can I perform LU decomposition on a rectangular matrix?
You can embed it into an identity matrix:
[ a11 a12 a13 ]
[ a21 a22 a23 ]
[ 0 0 1 ]
LU decomposition is for square matrices only. You may want to check Wikipedia for a refresher.
Non-square matrices mean different things.
If it has more rows than columns (more equations than unknowns), you need a least-squares approximation. You can pre-multiply both sides by the transpose of A and use LU decomposition on that. The result is the least-squares "best" solution.
If it has fewer rows than columns (more unknowns than equations), you need Singular Value Decomposition (SVD). It'll give you the best solution and the null space as well.
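To tie this back to the R question, here is a hedged sketch (my example, not from the answers) of the more-rows-than-columns case: pre-multiplying by the transpose produces a square system, on which lu from package Matrix works:

library(Matrix)
set.seed(1)
A <- Matrix(rnorm(12), nrow = 4, ncol = 3)  # rectangular: 4 equations, 3 unknowns
b <- Matrix(rnorm(4), ncol = 1)
AtA <- crossprod(A)                         # t(A) %*% A -- square and symmetric
Atb <- crossprod(A, b)
lu(AtA)                                     # LU factorization now succeeds
x <- solve(AtA, Atb)                        # the least-squares "best" solution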

Align point clouds via 3 points correlation?

Let's say I have two point clouds: the first has 3 points {x1,y1,z1}, {x2,y2,z2}, {x3,y3,z3}, and the second has the same points as {xx1,yy1,zz1}, {xx2,yy2,zz2}, {xx3,yy3,zz3}. I assume that to align the second point cloud to the first I have to multiply the second one's points by a matrix T [3x3 matrix].
1) So how do I find this transform matrix (T)? I tried to do the equations by hand but failed to solve them. Is there a solution somewhere? I'm pretty sure I'm not the first one to stumble into this problem.
2) I assume that the matrix might include skewing and shearing. Is there a way to find a matrix with only 7 degrees of freedom (3 translation, 3 rotation, 1 scale)?
The transformation matrix T1 that takes the unit vectors {1, 0, 0}, {0, 1, 0}, and {0, 0, 1} to {x1, y1, z1}, {x2, y2, z2}, {x3, y3, z3} is simply
| x1 x2 x3 |
T1 = | y1 y2 y3 |
| z1 z2 z3 |
And likewise the transformation T2 that takes those 3 unit vectors to the second set of points is
| xx1 xx2 xx3 |
T2 = | yy1 yy2 yy3 |
| zz1 zz2 zz3 |
Therefore, the matrix that takes the first three points to the second three points is given by T2 * T1^-1. If T1 is non-singular, then this transformation is uniquely determined, so it has no degrees of freedom. If T1 is a singular matrix, then there could be no solutions, or there could be infinitely many solutions.
When you say you want 7 degrees of freedom, this is somewhat of a misuse of terminology. In the general case, this matrix is composed of 3 rotational degrees of freedom, 3 scaling degrees, and 3 shearing degrees, making a total of 9. You can figure out these parameters by performing a QR factorization. The Q matrix gives you the rotational parameters, and the R matrix gives you the scaling parameters (along the diagonal) and the shearing parameters (above the diagonal).
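As a hedged R sketch of this answer (the numbers are made up for testing; with points as the columns of T1 and T2, the recovered matrix is T2 times the inverse of T1):

T1 <- matrix(c(1, 0, 0,
               1, 1, 0,
               0, 1, 2), 3, 3)        # first cloud's three points as columns
T.true <- matrix(c(cos(pi/6), sin(pi/6), 0,
                  -sin(pi/6), cos(pi/6), 0,
                   0,         0,         1), 3, 3)  # some known transform (a rotation)
T2 <- T.true %*% T1                   # second cloud = transformed first cloud
T.est <- T2 %*% solve(T1)             # T2 * T1^-1
all.equal(T.est, T.true)              # TRUE: the transform is recovered exactly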
Adam Rosenfield's approach is correct, but the solution T2 * Inv(T1) is wrong: since matrix multiplication is not commutative (A * B != B * A), the result is Inv(T1) * T2.
The seven-parameter transformation that you are talking about is referred to as a 3D conformal transformation, or sometimes a 3D similarity transformation, given that the two clouds are similar. If the two shapes are identical, Adam Rosenfield's solution is good. Where there are small differences and you wish to get a best fit, the most commonly used solution is a Helmert transformation, which uses a least-squares approach to minimise the residuals. The Wikipedia and Google material on this doesn't seem great at a glance. My reference on this is Ghilani & Wolf's Adjustment Computations, p. 345. This is also a great book on matrix math as applied to spatial problems and a good addition to the library.
edit: Adam's 9-parameter version of this transformation is referred to as an affine transformation.
Here is an example of computing least-squares estimates of the parameters of a 2D affine transformation in R.
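The linked example is not reproduced here; as a stand-in, here is a minimal sketch of mine of what such a least-squares fit can look like, with the affine model x' = a*x + b*y + c, y' = d*x + e*y + f estimated by one lm() per output coordinate:

src <- matrix(c(0, 0,  1, 0,  0, 1,  1, 1,  2, 1), ncol = 2, byrow = TRUE)  # source points
dst <- src %*% matrix(c(0.9, -0.2, 0.3, 1.1), 2, 2) +                       # simulated affine map
       matrix(c(2, -1), nrow(src), 2, byrow = TRUE)                         # plus translation
fit.x <- lm(dst[, 1] ~ src[, 1] + src[, 2])   # recovers c, a, b
fit.y <- lm(dst[, 2] ~ src[, 1] + src[, 2])   # recovers f, d, e
coef(fit.x); coef(fit.y)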
