Laplace expansion for determinants in R

I have a 21x21 matrix. I would like to use R to apply Laplace expansion along the first row, so as to display only the last step of the determinant calculation (the 2x2 minors).
Unfortunately, despite my best efforts, I can't figure out how this could be done.
To make this clearer, here is an example with a 3x3 matrix,
e.g. r <- matrix(1:9, 3, 3)
My aim is to expand along the first row so as to obtain the three cofactors. These should be displayed so that the three minor matrices, the corresponding multiplying elements of the first row, and the signs of the permutation are all distinguishable.
For a visual, take a look at the first example at http://en.wikipedia.org/wiki/Laplace_expansion.
Any suggestions? Thank you
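For reference, a minimal sketch in R of the expansion being described (laplace_first_row is a hypothetical name, not an existing function): it loops over the first row, prints each signed element together with its minor, and sums the signed cofactor terms.

# Cofactor expansion along row 1: det(A) = sum_j (-1)^(1+j) * A[1,j] * det(M_1j)
laplace_first_row <- function(A) {
  n <- ncol(A)
  total <- 0
  for (j in 1:n) {
    minor <- A[-1, -j, drop = FALSE]  # delete row 1 and column j
    sgn <- (-1)^(1 + j)               # sign of the permutation
    cat("term", j, ": sign", sgn, "* element", A[1, j], "* det of minor:\n")
    print(minor)
    total <- total + sgn * A[1, j] * det(minor)
  }
  total
}

r <- matrix(1:9, 3, 3)
laplace_first_row(r)  # equals det(r), here 0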

Related

Migration Matrix in a Two-Dimensional Torus-Like Space: create a migration matrix in R

I'm working on a standard problem concerning random walks in two dimensions.
In particular, I'm trying to generate a migration matrix for a toroidal model (Toroidal Transition Matrix) in R.
I have previously posted the same problem here: Transition matrix for a two-dimensional random walk in a torus: compute matrix in R
However, I did not get any feedback.
Similarly to what I mentioned in that post, I decided to assume independent movement along each dimension. Therefore, instead of calculating a toroidal migration matrix and retrieving the transition probabilities from it, I multiplied independent probabilities from two separate one-dimensional circular models.
I'm not sure whether this is formally correct and I would like some opinions in this regard.
Yes, to walk within the torus with each dimension being independent, you just multiply the transition probabilities. If there are n states in your circular graph, then there are n^2 states in your torus, so you should expect an n^2 x n^2 transition matrix.
To give any more detail you'll have to specify exactly what you need to know.
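As a concrete sketch (assuming a simple ring walk in which the walker steps left or right with probability 1/2; circ_walk is a hypothetical helper), the torus matrix is the Kronecker product of the two one-dimensional matrices:

# One-dimensional circular (ring) random walk: step left/right with prob 1/2
circ_walk <- function(n) {
  P <- matrix(0, n, n)
  for (i in 1:n) {
    P[i, ifelse(i == n, 1, i + 1)] <- 0.5  # step right, wrapping around
    P[i, ifelse(i == 1, n, i - 1)] <- 0.5  # step left, wrapping around
  }
  P
}

n <- 4
P1 <- circ_walk(n)
# Independent moves along each dimension => Kronecker product,
# giving the n^2 x n^2 torus transition matrix
P_torus <- kronecker(P1, P1)
dim(P_torus)                            # 16 x 16
all(abs(rowSums(P_torus) - 1) < 1e-12)  # rows still sum to 1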

Forming a query vector in LSA

After performing the SVD of a term-document matrix and getting a reduced-rank approximation U_k Sigma_k V_k^T, various sources state the reduced query vector formula q_k = Sigma_k^{-1} U_k^T q, which folds the query q into the reduced space. It seems easy to see how it's derived.
However, in this link, the query vector is calculated as the centroid of the corresponding reduced term vectors. I tried to check whether the two are the same, but the results were different.
What is the difference between the two and what are the pros/cons of using either?
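A small sketch on toy data, assuming the folding-in formula above and taking the reduced term vectors to be the rows of U_k: the two constructions differ by the Sigma_k^{-1} scaling (and a constant factor), so cosine-similarity rankings generally differ as well.

# Term-document matrix A (terms in rows), rank-k SVD
set.seed(1)
A <- matrix(rpois(8 * 5, 2), nrow = 8)  # 8 terms, 5 documents (toy data)
k <- 2
s <- svd(A)
Uk <- s$u[, 1:k]
Sk <- diag(s$d[1:k])

q <- c(1, 1, 0, 0, 0, 0, 0, 0)          # query containing terms 1 and 2

# (1) Folding-in: q_k = Sigma_k^{-1} U_k^T q
q_fold <- solve(Sk) %*% t(Uk) %*% q

# (2) Centroid of the reduced term vectors of the query's terms
q_cent <- colMeans(Uk[q > 0, , drop = FALSE])

cbind(fold = as.vector(q_fold), centroid = q_cent)  # generally not equal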

R: how to create a large matrix by combining small matrix blocks

I'm working on a constrained optimization problem using the Lagrange multiplier method, and I'm trying to build a huge sparse matrix in R in order to calculate the values.
Here's what the matrices would look like; see the link below for the details of the problem if needed.
Implementation of Lagrange Multiplier to solve constrained optimization problem.
Here's the code I've come up with, sorry if my approach seems clumsy to you, since I'm new to matrix manipulation and programming.
First, I imported the 3154 by 30 matrix from a csv file and then combined all its columns into one. Then I created a diagonal matrix to imitate the upper left corner of the target matrix.
Then, to imitate the lower left corner, I created a 3154x3154 identity matrix and tried to replicate it 30 times.
I have two questions here:
When I tried to cbind the diagonal sparse matrix, it returned a combination of two lists instead of a matrix, so I had to convert it to a dense matrix, which takes too much memory. I'd like to know if there's a better way to accomplish this.
I want to know if there's a formula for cbind-ing a matrix multiple times, since I need to replicate the matrix 30 times; I'm curious if there's a cleaner way to get around all the typing. (This was solved thanks to @Jthorpe.)
I was going to do the same thing for the rest of the matrices. I know this is not the best approach to this problem, so please feel free to suggest any smarter way of doing it. Thanks!
library(Matrix)
dist_data <- read.csv("/Users/xxxxx/dist_mat.csv", header = TRUE)
c <- ncol(dist_data)  # number of clusters - 30
n <- nrow(dist_data)  # number of observations - 3153
# Create a (c*n + n + c) = 3153*30 + 3153 + 30 = 97,773 coefficient matrix
dist_list <- unlist(dist_data, use.names = FALSE)       # stack the 30 columns into one vector
Coeff_mat <- 2 * .sparseDiagonal(c * n, x = dist_list)  # upper left block
diag_n <- .sparseDiagonal(n)                            # n x n sparse identity ('diag' would mask base::diag)
# Replicate the identity 30 times side by side; as.matrix() densifies each
# block, which is what eats the memory
Uin <- do.call(cbind, rep(list(as.matrix(diag_n)), c))
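If memory is the main concern, a sketch of an alternative that never densifies the blocks (Uin_sparse is a hypothetical name): replicate the sparse identity with a Kronecker product, which the Matrix package keeps sparse.

# [I I ... I]: Kronecker product of a 1 x c row of ones with the n x n identity
Uin_sparse <- kronecker(Matrix(1, 1, c, sparse = TRUE), .sparseDiagonal(n))
dim(Uin_sparse)  # n x (n*c), stored as a sparse Matrix
# Equivalently: do.call(cbind, rep(list(.sparseDiagonal(n)), c))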

How to reconstruct the original data matrix after applying PCA in R

I've applied PCA on my data using the function prcomp in R.
This function returns the following:
Variation
Rotation Matrix
Standard Deviation
Scores (X)
My question is: how can I reconstruct the reduced version of the data after choosing, for example, two principal components?
Going back and forth between PCA and 'normal' space is done using the rotation matrix. Just take a close look at the matrix algebra in the chapter on PCA in your favorite multivariate statistics book. To truncate (or reduce) your dataset, just limit the rotation matrix to the PC axes you want, e.g. the first two.
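For instance, a sketch with prcomp on a built-in dataset, assuming the default centering and no scaling: truncate both the scores and the rotation to the first k PCs, multiply back, and undo the centering.

pca <- prcomp(USArrests)  # centered by default, not scaled
k <- 2
# Rank-k reconstruction: X_hat = scores_k %*% t(rotation_k), then re-add the means
X_hat <- pca$x[, 1:k] %*% t(pca$rotation[, 1:k])
X_hat <- sweep(X_hat, 2, pca$center, "+")
head(X_hat)  # approximates the original data
# If you used scale. = TRUE, also multiply the columns back by pca$scale first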

Finding full QR decomposition from reduced QR

What's the best way to find additional orthonormal columns of Q? I have computed the reduced QR decomposition already, but need the full QR decomposition.
I assume there is a standard approach to this, but I've been having trouble finding it.
You might wonder why I need the full Q matrix. I'm using it to apply a constraint matrix for "natural" splines to a truncated power series basis expansion. I'm doing this in Java, but am looking for a language-independent answer.
Successively add columns to Q in the following way:
Pick a vector not already in the span of Q.
Orthogonalize it with respect to the columns of Q.
Add the orthogonalized vector as a new column of Q.
Add a row of zeros to the bottom of R.
For reference, see these illustrative albeit mathematical lecture notes.
Just in case: the "orthogonalization" of a new vector against existing ones is the classical Gram-Schmidt process, and there is a variant (modified Gram-Schmidt) that is numerically stable.
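As a language-independent illustration, here is a sketch of that procedure in R (in R itself, qr.Q(qr(A), complete = TRUE) already returns the full Q): extend a reduced n x k Q by orthogonalizing random vectors against it.

extend_to_full_Q <- function(Q, tol = 1e-10) {
  n <- nrow(Q)
  while (ncol(Q) < n) {
    v <- rnorm(n)                        # random vector, almost surely outside span(Q)
    v <- v - Q %*% crossprod(Q, v)       # orthogonalize against the columns of Q
    v <- v - Q %*% crossprod(Q, v)       # re-orthogonalize for numerical stability
    nv <- sqrt(sum(v^2))
    if (nv > tol) Q <- cbind(Q, v / nv)  # append as a new orthonormal column
  }
  Q
}

A <- matrix(rnorm(5 * 3), 5, 3)
qrA <- qr(A)
Qr <- qr.Q(qrA)                    # reduced 5 x 3 Q
Qf <- extend_to_full_Q(Qr)         # full 5 x 5 Q
max(abs(crossprod(Qf) - diag(5)))  # ~ 0
# The full R is then rbind(qr.R(qrA), matrix(0, 2, 3))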
