Correlation between features with R

I want to calculate the correlation between features, where each feature is a 50×100 matrix. My question is: how can I use R to calculate the correlation between features, rather than the correlation of the columns inside each matrix?
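One common approach, assuming each whole matrix should be treated as a single observation of a feature, is to flatten every 50×100 matrix into one long vector and then correlate the features pairwise. The feature names below are hypothetical stand-ins:

```r
# Sketch: treat each 50x100 feature matrix as one long vector,
# then correlate the features pairwise. Data here is random filler.
set.seed(1)
features <- list(
  f1 = matrix(rnorm(50 * 100), nrow = 50),
  f2 = matrix(rnorm(50 * 100), nrow = 50),
  f3 = matrix(rnorm(50 * 100), nrow = 50)
)

# Flatten each matrix column-wise into a 5000-element vector
flat <- sapply(features, as.vector)   # 5000 x 3 matrix, one column per feature

# cor() on the flattened matrix gives feature-by-feature correlations
feature_cor <- cor(flat)              # 3 x 3 correlation matrix
```

Whether flattening is appropriate depends on what the rows and columns of each matrix mean; if only a summary per matrix is meaningful, correlate those summaries instead.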

Related

Is there an R function to compare two variance-covariance matrices via fit indicators?

I obtained two variance-covariance matrices from two different samples. Both contain data on the same variables. I would like to estimate their similarity according to fit indices; that is, I am interested in whether the pattern of covariances between the variables is similar or different in the two samples. I am familiar with fit indices from structural equation modeling (e.g., chi-square, GFI, CFI, RMSEA, SRMR), which compare an empirical variance-covariance matrix with a model-implied variance-covariance matrix. Is there a way to obtain these fit indices for the comparison of two empirical variance-covariance matrices?
I tried compareCov, which only gives a visual comparison.
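As an illustrative sketch (not a standard package function), an SRMR-style index can be adapted to compare two empirical covariance matrices directly, by standardizing both to correlations and averaging the squared residuals over the lower triangle:

```r
# Sketch: an SRMR-style index comparing two empirical covariance
# matrices directly, rather than an empirical vs. a model-implied one.
# This is an illustrative adaptation, not a standard fit index.
srmr_between <- function(S1, S2) {
  R1 <- cov2cor(S1)                    # standardize covariances to correlations
  R2 <- cov2cor(S2)
  lt <- lower.tri(R1, diag = TRUE)     # lower triangle incl. diagonal, as in SRMR
  sqrt(mean((R1[lt] - R2[lt])^2))
}

set.seed(42)
X1 <- matrix(rnorm(200 * 4), ncol = 4)
X2 <- matrix(rnorm(200 * 4), ncol = 4)
srmr_between(cov(X1), cov(X2))         # 0 iff the correlation patterns match
```

For indices with a known sampling distribution (e.g., a chi-square test of covariance equality), a multi-group SEM with equality constraints across the two samples would be the more principled route.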

Multiple weight matrices in an SLX model in R

Are there any packages or commands that allow multiple weight matrices in a spatial lag of X (SLX) model?
I want to include two different weight matrices with one dependent variable, but I cannot find a package for it.
Theoretically, in spatial analysis, is including multiple W matrices inappropriate? If it is possible, how can I conduct the analysis with W1 and W2? Do I have to do it by hand (that is, create each lagged variable by multiplying a W matrix with the key variable, then run an OLS regression with those variables)? Is that the right way to apply multiple weight matrices?
Thanks!
Dongjin
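The "by hand" route described above can be sketched with base R alone: row-standardize each weight matrix, build one spatial lag of the regressor per W, and include both lags in an OLS fit. Everything below (the weight matrices, coefficients, and sample size) is made up for illustration:

```r
# Sketch: an SLX-style model with two spatial weight matrices, built by
# hand. W1 and W2 are random row-standardized weight matrices here;
# in practice they would come from your spatial structure.
set.seed(7)
n  <- 50
x  <- rnorm(n)
W1 <- matrix(runif(n * n), n, n); diag(W1) <- 0
W2 <- matrix(runif(n * n), n, n); diag(W2) <- 0
W1 <- W1 / rowSums(W1)               # row-standardize
W2 <- W2 / rowSums(W2)

Wx1 <- as.vector(W1 %*% x)           # spatial lag of x under W1
Wx2 <- as.vector(W2 %*% x)           # spatial lag of x under W2
y   <- 1 + 0.5 * x + 0.3 * Wx1 - 0.2 * Wx2 + rnorm(n)

# OLS with both lagged regressors -- the "by hand" SLX
fit <- lm(y ~ x + Wx1 + Wx2)
coef(fit)
```

Since SLX contains no spatially lagged dependent variable, OLS estimation of the manually constructed lags is consistent, which is why the by-hand approach is commonly considered acceptable here.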

Why is the calculation of eigenvectors and eigenvalues so effective when performing PCA?

The core of Principal Component Analysis (PCA) lies in calculating eigenvalues and eigenvectors of the variance-covariance matrix of some dataset (for example, a matrix of multivariate data coming from a set of individuals). The textbook knowledge I have is that:
a) by multiplying the original data matrix with these eigenvectors one can calculate "scores" (as many as the original set of variables) which are independent from each other;
b) the eigenvalues summarize the amount of variance of each score.
These two properties make this process a very effective data transformation technique for simplifying the analysis of multivariate data.
My question is: why is that so? Why does calculating eigenvalues and eigenvectors of a variance-covariance matrix result in such unique properties of the scores?
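The two textbook properties can be checked numerically. Because the covariance matrix S is symmetric, its eigendecomposition S = VΛVᵀ has orthonormal eigenvectors, so the covariance of the scores XV is VᵀSV = Λ: a diagonal matrix, meaning zero correlation between scores, with the eigenvalues on the diagonal as the score variances. A quick sketch on random data:

```r
# Sketch verifying the two textbook PCA properties on random data.
set.seed(3)
X <- scale(matrix(rnorm(200 * 5), ncol = 5), center = TRUE, scale = FALSE)
S <- cov(X)
e <- eigen(S)

scores <- X %*% e$vectors        # project data onto the eigenvectors

# (a) the scores are uncorrelated: off-diagonal covariances are ~0
C <- cov(scores)
max(abs(C[upper.tri(C)]))        # effectively zero

# (b) the variance of each score equals the corresponding eigenvalue
diag(C)
e$values
```

This is exactly why the decomposition is so effective: it rotates the data into axes along which the variables no longer covary, with the eigenvalues ranking those axes by the variance they carry.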

Plotting randomly selected columns of correlation matrix

I have a really big similarity matrix with 444 columns. I want to plot a heatmap or corrplot to compare different similarity matrices, but I can't use all the columns. I want to take a random sample of columns and then plot a heatmap, but I don't want to recompute the similarities for these columns, as that takes a lot of time for some of my similarity functions. Any ideas how I could take a random sample of columns from a similarity matrix (it has the same structure as a correlation matrix) to plot a heatmap for them?

Create an artificial correlation matrix

I want to do some testing of a program, and I would like to have a really big matrix.
Is there any tool that can generate an artificial correlation matrix?
Pick n random n-dimensional unit vectors. The dot product of any two such vectors is then their correlation. Use that fact to make a random n × n correlation matrix.
Is this really a correlation matrix? Make each dimension an independent standard normal distribution. The coefficients of each vector then describe a random variable (a linear combination of those normals). Because the vectors have unit length, each variable has variance 1, and those random variables have exactly the specified correlations. So yes, this is actually going to be a correlation matrix.
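The construction above is a few lines in R: draw random vectors, normalize the columns to unit length, and take the Gram matrix, which is symmetric, positive semi-definite, and has a unit diagonal:

```r
# Sketch of the construction above: n random n-dimensional vectors,
# normalized to unit length; their Gram matrix is a valid correlation matrix.
set.seed(99)
n <- 6
V <- matrix(rnorm(n * n), nrow = n)        # one column per variable
V <- sweep(V, 2, sqrt(colSums(V^2)), "/")  # normalize columns to length 1

R <- t(V) %*% V                            # dot products = correlations

diag(R)                                    # all 1 by construction
min(eigen(R, symmetric = TRUE)$values)     # >= 0 up to rounding: PSD
```

For a big test matrix, raise `n`; drawing the vectors from a higher-dimensional space than n tends to push the off-diagonal correlations closer to zero.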
There is also a repository of sample matrix data for use in comparing algorithms available at the Matrix Market (free, despite the name).