coercing numeric and then graph from matrix in R

I have a symmetric distance matrix (x):
0 2.6096 2.3601 5.6109
2.6096 0 1.7045 6.8441
2.3601 1.7045 0 6.5946
5.6109 6.8441 6.5946 0
which I would like to analyse as a graph, in order to compute its spectral density. To do that, I want to follow these steps (with igraph):
x_mat <- as.matrix(x, matrix.type="adjacency") # get adjacency matrix
x_graph <- graph.adjacency(x_mat)              # convert to graph
x_lap <- graph.laplacian(x_graph)              # get the graph Laplacian
x_eig <- eigen(x_lap,symmetric=TRUE,only.values=TRUE)
(I'm not sure how to plot the spectral density, but I'm not even there yet)
But I'm having trouble from the start. I can get my matrix to be a matrix:
x_mat <- as.matrix(x,matrix.type="adjacency")
is.matrix(x_mat)
[1] TRUE
x_mat
[,1]
[1,] Numeric,39204
But I cannot coerce it to be numeric
mode(x_mat) <- "numeric"
Error in eval(expr, envir, enclos) :
  (list) object cannot be coerced to type 'double'
I need the adjacency matrix to be numeric in order to move along my pipeline. Any advice? Alternative methods to achieve my goal, of course, are also welcome.
Thanks in advance.

data.matrix should provide what you need.
df <- read.table(header=F, text='
0 2.6096 2.3601 5.6109
2.6096 0 1.7045 6.8441
2.3601 1.7045 0 6.5946
5.6109 6.8441 6.5946 0
')
mat <- data.matrix(df)
is.matrix(mat)
[1] TRUE
is.numeric(mat)
[1] TRUE
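From here the asker's igraph pipeline should go through. Below is a minimal sketch of the remaining steps, assuming the current igraph function names (graph_from_adjacency_matrix and laplacian_matrix supersede graph.adjacency and graph.laplacian); the density() plot at the end is just one plausible way to visualize the spectral density:
library(igraph)
# Build a weighted, undirected graph from the numeric distance matrix
g <- graph_from_adjacency_matrix(mat, mode = "undirected", weighted = TRUE)
# Graph Laplacian; returned sparse, so densify it for eigen()
lap <- as.matrix(laplacian_matrix(g))
# Eigenvalues of the symmetric Laplacian
ev <- eigen(lap, symmetric = TRUE, only.values = TRUE)$values
# Kernel density estimate of the spectrum
plot(density(ev), main = "Spectral density")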

Related

Most efficient way in base R to do pairwise correlations between thousands of columns in a matrix [duplicate]

I'm new to R, so I apologize if this is a straightforward question, however I've done quite a bit of searching this evening and can't seem to figure it out. I've got a data frame with a whole slew of variables, and what I'd like to do is create a table of the correlations among a subset of these, basically the equivalent of "pwcorr" in Stata, or "correlations" in SPSS. The one key to this is that not only do I want the r, but I also want the significance associated with that value.
Any ideas? This seems like it should be very simple, but I can't seem to figure out a good way.
Bill Venables offers this solution in an answer on the R mailing list, to which I've made some slight modifications:
cor.prob <- function(X, dfr = nrow(X) - 2) {
  R <- cor(X)
  above <- row(R) < col(R)
  r2 <- R[above]^2
  Fstat <- r2 * dfr / (1 - r2)
  R[above] <- 1 - pf(Fstat, 1, dfr)
  cor.mat <- t(R)
  cor.mat[upper.tri(cor.mat)] <- NA
  cor.mat
}
So let's test it out:
set.seed(123)
data <- matrix(rnorm(100), 20, 5)
cor.prob(data)
[,1] [,2] [,3] [,4] [,5]
[1,] 1.0000000 NA NA NA NA
[2,] 0.7005361 1.0000000 NA NA NA
[3,] 0.5990483 0.6816955 1.0000000 NA NA
[4,] 0.6098357 0.3287116 0.5325167 1.0000000 NA
[5,] 0.3364028 0.1121927 0.1329906 0.5962835 1
Does that line up with cor.test?
cor.test(data[,2], data[,3])
Pearson's product-moment correlation
data: data[, 2] and data[, 3]
t = 0.4169, df = 18, p-value = 0.6817
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
-0.3603246 0.5178982
sample estimates:
cor
0.09778865
Seems to work ok.
Here is something that I just made. I stumbled on this post because I was looking for a way to take every pair of variables and get a tidy n-by-4 dataframe: column 1 is a variable, column 2 is a variable, and columns 3 and 4 are the absolute value of their correlation and the correlation itself. Just pass the function a dataframe of numeric and integer values.
pairwiseCor <- function(dataframe){
  pairs <- combn(names(dataframe), 2, simplify=FALSE)
  df <- data.frame(Variable1=rep(0,length(pairs)), Variable2=rep(0,length(pairs)),
                   AbsCor=rep(0,length(pairs)), Cor=rep(0,length(pairs)))
  for(i in 1:length(pairs)){
    df[i,1] <- pairs[[i]][1]
    df[i,2] <- pairs[[i]][2]
    df[i,3] <- round(abs(cor(dataframe[,pairs[[i]][1]], dataframe[,pairs[[i]][2]])),4)
    df[i,4] <- round(cor(dataframe[,pairs[[i]][1]], dataframe[,pairs[[i]][2]]),4)
  }
  pairwiseCorDF <- df
  pairwiseCorDF <- pairwiseCorDF[order(pairwiseCorDF$AbsCor, decreasing=TRUE),]
  row.names(pairwiseCorDF) <- 1:length(pairs)
  pairwiseCorDF <<- pairwiseCorDF  # also assigns the result to the global environment
  pairwiseCorDF
}
This is what the output is:
> head(pairwiseCorDF)
Variable1 Variable2 AbsCor Cor
1 roll_belt accel_belt_z 0.9920 -0.9920
2 gyros_dumbbell_x gyros_dumbbell_z 0.9839 -0.9839
3 roll_belt total_accel_belt 0.9811 0.9811
4 total_accel_belt accel_belt_z 0.9752 -0.9752
5 pitch_belt accel_belt_x 0.9658 -0.9658
6 gyros_dumbbell_z gyros_forearm_z 0.9491 0.9491
I've found that the R package picante does a nice job of dealing with this problem. You can simply pass your dataset to the cor.table function and get a table of correlations and p-values for all of your variables. You can specify Pearson's r or Spearman's rho in the function. See this link for help:
http://www.inside-r.org/packages/cran/picante/docs/cor.table
Also remember to remove any non-numeric columns from your dataset prior to running the function. Here's an example piece of code:
install.packages("picante")
library(picante)
#Insert the name of your dataset in the code below
cor.table(dataset, cor.method="pearson")
You can use the sjt.corr function of the sjPlot package, which gives you a nicely formatted correlation table, ready for use in your Office application.
Simplest function call is just to pass the data frame:
sjt.corr(df)
See the sjPlot documentation for examples.
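One more option, offered here as a hedged suggestion rather than part of the answers above: the rcorr function in the Hmisc package returns both the correlation matrix and the matrix of p-values in one call.
library(Hmisc)
set.seed(123)
data <- matrix(rnorm(100), 20, 5)
res <- rcorr(data, type = "pearson")  # or type = "spearman"
res$r  # correlation coefficients
res$P  # corresponding p-values (NA on the diagonal)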

Error in as.Date.numeric() : 'origin' must be supplied

I have received a paper whose authors included the R files for their empirical results. Nevertheless, I have some problems when trying to run their code:
data <- vni$R[198:length(vni$R)]; date <- vni$Date[198:length(vni$R)]
l <- length(data)
rw_length <- 52 # 52 weeks (~ 1 year)
bound <- vector()
avr <- vector()
for (i in (rw_length+1):l) {
  AVR.test <- AutoBoot.test(data[(i-rw_length):i], nboot=2000, "Normal", c(0.025, 0.975))
  bound <- append(bound, AVR.test$CI.stat)
  avr <- append(avr, AVR.test$test.stat)
}
lower <- bound[seq(1, length(bound), 2)]
upper <- bound[seq(2, length(bound), 2)]
results <- matrix(c(date[(rw_length+1):l],data[(rw_length+1):l],avr,upper, lower),ncol=5, dimnames = list(c(),c("Date", "Return", "AVR", "Upper", "Lower")))
And I get the following error on the line that builds results:
Error in as.Date.numeric(e) : 'origin' must be supplied
My dataset is:
Date P R
1 2001-03-23 259.60 0.0000000000
2 2001-03-30 269.30 0.0366840150
3 2001-04-06 284.69 0.0555748690
4 2001-04-13 300.36 0.0535808860
5 2001-04-20 317.76 0.0563146260
...
935 2019-02-15 950.89 0.0454163960
936 2019-02-22 988.91 0.0392049380
937 2019-03-01 979.63 -0.0094283770
Could you please help me with this issue?
Thanks a lot!
Everything in a matrix must be the same class. This often surfaces when a string lands among numbers:
m <- matrix(0, nr=2, nc=2)
m
# [,1] [,2]
# [1,] 0 0
# [2,] 0 0
m[1] <- "a"
m
# [,1] [,2]
# [1,] "a" "0"
# [2,] "0" "0"
In this case, you have Date (first column) and numeric (all others? no idea what AutoBoot is). And because R tries to coerce from the least-complex to the most-complex class (from numeric to Date), the non-Date objects are the ones being converted.
matrix(c(Sys.Date(), 1.1))
# Error in as.Date.numeric(e) : 'origin' must be supplied
I suggest that trying to store this in a matrix is therefore fundamentally flawed. If you want to store a Date object among numbers, you have two options:
Store it as a data.frame, where each column can have its own class (a sketch of this follows below).
Pre-convert the "Date" data to numeric and store it as a number. This means that if/when you need the dates to be of class Date again, you'll need to call as.Date(..., origin="1970-01-01").
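A minimal sketch of the data.frame route, reusing the variable names from the question (untested against the original data, since AutoBoot.test is not shown):
# Each column keeps its own class: Date stays Date, the rest stay numeric
results <- data.frame(
  Date   = date[(rw_length + 1):l],
  Return = data[(rw_length + 1):l],
  AVR    = avr,
  Upper  = upper,
  Lower  = lower
)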

text similarity between two tf-idf matrices

I have two XML text files; using the quanteda and tm packages, I have tokenized them and transformed them into tf-idf matrices.
How can I calculate the similarity between these two files, for example using Jaccard?
I have tried dist(), cosine(), and text2vec, but I encounter errors with all of them. For example:
cosine(x = pta2.tokens.tfidf, y = pta3.tokens.tfidf)
Error in cosine(x = pta2.tokens.tfidf, y = pta3.tokens.tfidf) :
argument mismatch. Either one matrix or two vectors needed as input.
simi <- sim2(pta2.tokens.tfidf, pta3.tokens.tfidf, method = "jaccard", norm = "none")
Error: ncol(x) == ncol(y) is not TRUE
The problem is that you have a data.frame with string values, and you are using distance functions that need a numeric matrix as input.
DIST
you need a numeric matrix:
?dist
Usage
dist(x, method = "euclidean", diag = FALSE, upper = FALSE, p=2)
Arguments
x a numeric matrix, data frame or "dist" object.
COSINE
you need numeric values:
?cosine
Usage
cosine(x, y, use = "everything", inverse = FALSE)
Arguments
x A numeric dataframe/matrix or vector
SIM2
Your error is due to the difference in the number of columns of pta2.tokens.tfidf and pta3.tokens.tfidf. Here is an example of the error:
df1<-as.matrix(data.frame(a=c("a","b","c"),b=c("d","e","f")))
df2<-as.matrix(data.frame(c=c("a","b","c"),d=c("d","e","f"),e=c("g","h","i")))
sim2(df1,df2)
Error: ncol(x) == ncol(y) is not TRUE
But even if the dimensions match, this method will not work, because it needs numeric input:
sim2(df1,df1)
Error in m^2 : non-numeric argument to binary operator
You need numeric matrices with the same dimensions, like this:
df3 <- as.matrix(data.frame(a=c(1,2,3), b=c(4,5,6)))
df4 <- as.matrix(data.frame(a=c(3,2,3), b=c(3,3,6)))
sim2(df3, df4)
[,1] [,2] [,3]
[1,] 0.8574929 0.9417419 0.9761871
[2,] 0.9191450 0.9785498 0.9965458
[3,] 0.9486833 0.9922779 1.0000000
A possible solution
Use the stringdist function from the stringdist package; here is a toy example.
Two data.frames with string values:
df1<-data.frame(a=c("abc","bav","cda"),b=c("ddd","ese","feff"))
df2<-data.frame(a=c("abc","gfb","cdd"),b=c("dsd","eeesfd","fafe"))
Function to compare string values in two big data.frames:
library(stringdist)
f <- function(i, df1, df2) {
  f2 <- function(y, list1, list2) {
    stringdist(list1[y], list2[y], method = "jw")  # Jaro-Winkler distance
  }
  unlist(lapply(seq_along(df1[, i]), f2, list1 = df1[, i], list2 = df2[, i]))
}
dist_matrix<-do.call(cbind,lapply(seq(1:ncol(df1)),f,df1=df1,df2=df2))
Distance matrix
dist_matrix
[,1] [,2]
[1,] 0.0000000 0.2222222
[2,] 1.0000000 0.2777778
[3,] 0.2222222 0.3333333
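Alternatively, since the tf-idf matrices came from quanteda in the first place, the usual fix is to build a single dfm over both documents so the feature columns line up, then compare the rows. The sketch below assumes pta2_text and pta3_text hold the raw document strings (not shown in the question); in recent quanteda versions textstat_simil lives in the quanteda.textstats package:
library(quanteda)
library(quanteda.textstats)
# pta2_text / pta3_text are placeholders for the two documents' raw text
corp <- corpus(c(pta2 = pta2_text, pta3 = pta3_text))
# One dfm over both documents: the feature columns match by construction
d <- dfm_tfidf(dfm(tokens(corp)))
# Pairwise document similarity ("jaccard" is also an accepted method)
textstat_simil(d, method = "cosine")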

Generate multivariate normal r.v.'s with rank-deficient covariance via Pivoted Cholesky Factorization

I'm just beating my head against the wall trying to get a Cholesky decomposition to work in order to simulate correlated price movements.
I use the following code:
cormat <- as.matrix(read.csv("http://pastebin.com/raw/qGbkfiyA"))
cormat <- cormat[,2:ncol(cormat)]
rownames(cormat) <- colnames(cormat)
cormat <- apply(cormat,c(1,2),FUN = function(x) as.numeric(x))
chol(cormat)
#Error in chol.default(cormat) :
# the leading minor of order 8 is not positive definite
cholmat <- chol(cormat, pivot=TRUE)
#Warning message:
# In chol.default(cormat, pivot = TRUE) :
# the matrix is either rank-deficient or indefinite
rands <- array(rnorm(ncol(cholmat)), dim = c(10000,ncol(cholmat)))
V <- t(t(cholmat) %*% t(rands))
#Check for similarity
cor(V) - cormat ## Not all zeros!
#Check the standard deviations
apply(V,2,sd) ## Not all ones!
I'm not really sure how to properly use the pivot = TRUE statement to generate my correlated movements. The results look totally bogus.
Even with a simple matrix, if I try out pivot = TRUE I get bogus results...
cormat <- matrix(c(1,.95,.90,.95,1,.93,.90,.93,1), ncol=3)
cholmat <- chol(cormat)
# No Error
cholmat2 <- chol(cormat, pivot=TRUE)
# No warning... pivot changes column order
rands <- array(rnorm(ncol(cholmat)), dim = c(10000,ncol(cholmat)))
V <- t(t(cholmat2) %*% t(rands))
#Check for similarity
cor(V) - cormat ## Not all zeros!
#Check the standard deviations
apply(V,2,sd) ## Not all ones!
There are two errors in your code:
You did not use the pivoting index to revert the pivoting applied to the Cholesky factor. Note that pivoted Cholesky factorization of a positive semi-definite matrix A computes:
P'AP = R'R
where P is a column pivoting matrix and R is an upper triangular matrix. To recover A from R, we need to apply the inverse of P (i.e., P'):
A = PR'RP' = (RP')'(RP')
A multivariate normal with covariance matrix A is then generated by:
XRP'
where X is multivariate normal with zero mean and identity covariance.
Your generation of X,
X <- array(rnorm(ncol(R)), dim = c(10000, ncol(R)))
is wrong. First, the number of columns should be the numerical rank r of A (extracted below from the pivoted factor), not ncol(R). Second, rnorm(ncol(R)) is recycled along the columns, so the resulting matrix is not random at all; as a consequence, cor(X) is never close to an identity matrix. The correct code is:
X <- matrix(rnorm(10000 * r), 10000, r)
As a model implementation of the above theory, consider your toy example:
A <- matrix(c(1,.95,.90,.95,1,.93,.90,.93,1), ncol=3)
We compute the upper triangular factor (suppressing the possible rank-deficiency warning) and extract the reverse pivoting index and the numerical rank:
R <- suppressWarnings(chol(A, pivot = TRUE))
piv <- order(attr(R, "pivot")) ## reverse pivoting index
r <- attr(R, "rank") ## numerical rank
Then we generate X. For a better result, we centre X so that its column means are 0.
X <- matrix(rnorm(10000 * r), 10000, r)
## for best effect, we centre `X`
X <- sweep(X, 2L, colMeans(X), "-")
Then we generate target multivariate normal:
## compute `V = RP'`
V <- R[1:r, piv]
## compute `Y = X %*% V`
Y <- X %*% V
We can verify that Y has target covariance A:
cor(Y)
# [,1] [,2] [,3]
#[1,] 1.0000000 0.9509181 0.9009645
#[2,] 0.9509181 1.0000000 0.9299037
#[3,] 0.9009645 0.9299037 1.0000000
A
# [,1] [,2] [,3]
#[1,] 1.00 0.95 0.90
#[2,] 0.95 1.00 0.93
#[3,] 0.90 0.93 1.00
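As a quick sanity check (not part of the original answer), the identity A = (RP')'(RP') can be verified directly with crossprod:
V <- R[1:r, piv]            # V = R P'
max(abs(crossprod(V) - A))  # t(V) %*% V should reproduce A up to rounding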

Vectorized element-wise division on Sparse Matrices in R

A/B in R performs element-wise division on matrices.
However, if I generate a sparse matrix from the Matrix package, and try to divide A/B, I get this error:
> class(N)
[1] "dgCMatrix"
attr(,"package")
[1] "Matrix"
> N/N
Error in asMethod(object) :
Cholmod error 'problem too large' at file ../Core/cholmod_dense.c, line 105
>
Interesting. When the sparse matrix is small in total size, I get this behavior:
> m <- sparseMatrix(i=c(1,2,1,3), j=c(1,1,3,3), x=c(1,2,1,4))
> m/m
3 x 3 Matrix of class "dgeMatrix"
[,1] [,2] [,3]
[1,] 1 NaN 1
[2,] 1 NaN NaN
[3,] NaN NaN 1
>
But when it's moderately sized (~ 20000 elements), I get the Cholmod error.
Is there a workaround or a more proper way to do element-wise division on sparse matrices in R?
The problem with element-wise division is that if your matrices are both sparse, then you'll have a lot of Inf and NaN in the result, and these make it dense. That's why you get the out-of-memory errors.
If you want to replace Inf and NaN with zeros in the result, then the solution is relatively easy: you just get the summary() of both matrices and work with the indices and values directly.
You'll need to restrict the A and B index vectors to their intersection and perform the division on that. To get the intersection of index pairs, one can use merge().
Here is a quick and dirty implementation:
# Some example data
library(Matrix)
A <- sparseMatrix(i=c(1,1,2,3), j=c(1,3,1,3), x=c(1,1,2,3))
B <- sparseMatrix(i=c(3,2,1), j=c(3,2,1), x=c(3,2,1))
sdiv <- function(X, Y, names=dimnames(X)) {
  sX <- summary(X)                        # triplet (i, j, x) view of X
  sY <- summary(Y)
  sRes <- merge(sX, sY, by=c("i", "j"))   # keep only index pairs present in both
  sparseMatrix(i=sRes[,1], j=sRes[,2], x=sRes[,3]/sRes[,4],
               dimnames=names)
}
sdiv(A, B)
# 3 x 3 sparse Matrix of class "dgCMatrix"
#
# [1,] 1 . .
# [2,] . . .
# [3,] . . 1
Thanks to flodel for the suggestion about using summary and merge.
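To convince yourself the result is right, here is a quick cross-check (a sketch, not part of the original answer) against dense division with the non-finite entries zeroed out:
dense <- as.matrix(A) / as.matrix(B)
dense[!is.finite(dense)] <- 0   # zero out the Inf/NaN entries
all.equal(as.matrix(sdiv(A, B)), dense, check.attributes = FALSE)
# should print TRUE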
