Speeding up correlation matrix calculation in R

I have a dataframe with 49 variables and 4M rows. I want to calculate the correlation matrix of 49 x 49. All columns are of class numeric.
Here's a sample:
df <- data.frame(replicate(49,sample(0:50,4000000,rep=TRUE)))
I used the standard cor function.
cor_matrix <- cor(df, use = "pairwise.complete.obs")
This is taking a really long time. I have 16 GB of RAM and a single-core 2.60 GHz i5.
Is there a way to make this calculation faster on my desktop?

There's a faster version of the cor function in the WGCNA package (used for inferring gene networks based on correlations). On my 3.1 GHz i7 with 16 GB of RAM it computes the same 49 x 49 matrix about 20x faster:
mat <- replicate(49, as.numeric(sample(0:50,4000000,rep=TRUE)))
system.time(
cor_matrix <- cor(mat, use = "pairwise.complete.obs")
)
user system elapsed
40.391 0.017 40.396
system.time(
cor_matrix_w <- WGCNA::cor(mat, use = "pairwise.complete.obs")
)
user system elapsed
1.822 0.468 2.290
all.equal(cor_matrix, cor_matrix_w)
[1] TRUE
Check the function's help file for details on how the two implementations differ when your data contains many missing observations.
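If your columns in fact contain no missing values, another option worth knowing (a minimal sketch, not tied to any package) is to compute the correlation matrix via a cross-product of the scaled data, which pushes the heavy lifting into BLAS:
# For a complete (NA-free) numeric matrix, cor(mat) is identical to
# crossprod(scale(mat)) / (nrow(mat) - 1). crossprod() is a single BLAS call,
# so with an optimized BLAS (OpenBLAS, MKL) this can be very fast.
fast_cor_complete <- function(mat) {
  crossprod(scale(mat)) / (nrow(mat) - 1)
}
# sanity check (should be TRUE when mat has no NAs):
# all.equal(fast_cor_complete(mat), cor(mat))
This shortcut does not handle missing values, which is exactly what use = "pairwise.complete.obs" is for, so it only applies if your 4M rows are complete.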

Related

silhouette calculation in R for large data

I want to calculate silhouette for cluster evaluation. There are some packages in R, for example cluster and clValid. Here is my code using cluster package:
# load the data
# data from the UCI website with 434874 obs. and 3 variables
data <- read.csv("./data/spatial_network.txt",sep="\t",header = F)
# apply kmeans
km_res <- kmeans(data, 20, iter.max = 1000,
                 nstart = 20, algorithm = "MacQueen")
# calculate silhouette
library(cluster)
sil <- silhouette(km_res$cluster, dist(data))
# plot silhouette
library(factoextra)
fviz_silhouette(sil)
The code works well for smaller data, say data with 50,000 obs; however, I get an error like "Error: cannot allocate vector of size 704.5 Gb" when the data size is a bit larger. The same problem arises for the Dunn index and other internal indices on large datasets.
I have 32 GB of RAM in my computer. The problem comes from calculating dist(data). I am wondering whether it is possible to avoid computing dist(data) in advance and instead calculate the corresponding distances on the fly, as they are required by the silhouette formula.
I appreciate your help regarding this problem and how I can calculate silhouette for large and very large datasets.
You can implement Silhouette yourself.
It only needs every distance twice, so storing an entire distance matrix is not necessary. It may run a bit slower because it computes distances twice, but at the same time the better memory efficiency may well make up for that.
It will still take a LONG time though.
You should consider using only a subsample (do you really need to consider all points?), as well as alternatives such as the Simplified Silhouette, in particular with k-means... You gain very little from extra data with such methods, so you may as well just use a subsample.
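A minimal sketch of such a do-it-yourself silhouette, assuming a numeric data matrix X and a cluster label vector cl (both placeholder names); each point's distances are recomputed on the fly, so only a length-n vector is ever held in memory:
silhouette_lowmem <- function(X, cl) {
  n <- nrow(X)
  sil <- numeric(n)
  for (i in seq_len(n)) {
    # Euclidean distances from point i to all points, computed on demand
    d_i <- sqrt(rowSums(sweep(X, 2, X[i, ])^2))
    same <- cl == cl[i]
    # a(i): mean distance to the other members of i's own cluster
    # (singleton clusters are not special-cased here)
    a_i <- sum(d_i[same]) / max(sum(same) - 1, 1)
    # b(i): smallest mean distance to any other cluster
    b_i <- min(tapply(d_i[!same], cl[!same], mean))
    sil[i] <- (b_i - a_i) / max(a_i, b_i)
  }
  sil
}
# e.g. mean(silhouette_lowmem(as.matrix(data), km_res$cluster)) gives the average width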
Anony-Mousse's answer is perfect, particularly the suggestion to subsample. This is very important for very large datasets due to the increase in computational cost.
Here is another solution for calculating internal measures such as the silhouette and the Dunn index, using the clusterCrit R package. clusterCrit computes clustering validation indices and does not require the entire distance matrix in advance. However, it might still be slow, as Anony-Mousse discussed. Please see the link below for the clusterCrit documentation:
https://www.rdocumentation.org/packages/clusterCrit/versions/1.2.8/topics/intCriteria
clusterCrit also calculates most of the internal measures for cluster validation.
Example:
library(clusterCrit)
intCriteria(as.matrix(data), as.integer(km_res$cluster),
            c("Silhouette", "Calinski_Harabasz", "Dunn"))
Alternatively, if you want to calculate the silhouette index without building the full distance matrix, you can use the clues package, which improves on both the time and the memory used by the cluster package. Here is an example:
library(rbenchmark)
library(cluster)
library(clues)
set.seed(123)
x = c(rnorm(1000,0,0.9), rnorm(1000,4,1), rnorm(1000,-5,1))
y = c(rnorm(1000,0,0.9), rnorm(1000,6,1), rnorm(1000, 5,1))
cluster = rep(as.factor(1:3),each = 1000)
df <- cbind(x,y)
head(df)
x y
[1,] -0.50442808 -0.13527673
[2,] -0.20715974 -0.29498142
[3,] 1.40283748 -1.30334876
[4,] 0.06345755 -0.62755613
[5,] 0.11635896 2.33864121
[6,] 1.54355849 -0.03367351
Runtime comparison between the two functions
benchmark(f1 = silhouette(as.integer(cluster), dist = dist(df)),
          f2 = get_Silhouette(y = df, mem = cluster))
test replications elapsed relative user.self sys.self user.child sys.child
1 f1 100 15.16 1.902 13.00 1.64 NA NA
2 f2 100 7.97 1.000 7.76 0.00 NA NA
Comparison in memory usage between the two functions
library(pryr)
object_size(silhouette(as.integer(cluster), dist = dist(df)))
73.9 kB
object_size(get_Silhouette(y = df, mem = cluster))
36.6 kB
In conclusion, clues::get_Silhouette reduces both the time and the memory used while producing the same result.

Speed up raster::extract with weights in r

I want to extract the precise mean value of raster values from an area extent defined by a polygon in r. This works using raster::extract with the option weights=TRUE. However, this operation becomes prohibitively slow with large rasters and the function doesn't seem to be parallelized, thus beginCluster() ... endCluster() does not speed up the process.
I need to extract the values for a range of rasters, exemplified here as r, r10 and r100. Is there a way to speed this up in r, or is there an alternative way of doing this in GDAL?
library(raster)  # raster attaches sp, which provides Polygons() and SpatialPolygons()
r <- raster(nrow=1000, ncol=1000, vals=sample(seq(0,0.8,0.01),1000000,replace=TRUE))
r10 <- aggregate(r, fact=10)
r100 <- aggregate(r, fact=100)
v = Polygons(list(Polygon(cbind(c(-100,100,80,-120), c(-70,0,70,0)))), ID = "a")
v = SpatialPolygons(list(v))
plot(r)
plot(r10)
plot(r100)
plot(v, add=T)
system.time({
  precise.mean <- raster::extract(r100, v, method="simple", weights=T,
                                  normalizeWeights=T, fun=mean)
})
user system elapsed
0.251 0.000 0.253
> precise.mean
[,1]
[1,] 0.3994278
system.time({
  precise.mean <- raster::extract(r10, v, method="simple", weights=T,
                                  normalizeWeights=T, fun=mean)
})
user system elapsed
7.447 0.000 7.446
precise.mean
[,1]
[1,] 0.3995429
In the end I solved the problem using gdalUtils, working directly on the hard disk.
I used gdalwarp() to reduce the raster resolution to that of r10 and r100.
Then gdalwarp() again to bring the resulting raster back up to the original resolution of r.
Then gdalwarp() with cutline="v.shp", crop_to_cutline=TRUE to mask the raster to the vector v.
And then gdalinfo() with statistics enabled, combined with grep("Mean=", x) on its output, to extract the mean values.
All of this was wrapped in a foreach() %dopar% loop to process a number of rasters and resolutions.
While complicated and probably not as precise as raster::extract, it did the job.
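A rough sketch of that workflow, assuming the raster and polygon have been written to "r.tif" and "v.shp" (placeholder file names) and using placeholder cell sizes; gdalUtils simply shells out to the GDAL command-line tools:
library(gdalUtils)

orig_res <- c(0.36, 0.18)   # original cell size of r (placeholder values)

# 1. resample to a coarser resolution (factor 10), averaging cell values
gdalwarp("r.tif", "r10.tif", tr = orig_res * 10, r = "average", overwrite = TRUE)
# 2. resample back to the original resolution
gdalwarp("r10.tif", "r10_fine.tif", tr = orig_res, r = "near", overwrite = TRUE)
# 3. mask/crop to the polygon v
gdalwarp("r10_fine.tif", "r10_masked.tif", cutline = "v.shp",
         crop_to_cutline = TRUE, overwrite = TRUE)
# 4. pull the mean out of the gdalinfo statistics
info <- gdalinfo("r10_masked.tif", stats = TRUE)
info[grep("Mean=", info)]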
It should actually run faster if you first call beginCluster (the function then deals with the parallelization). Even better would be to use version 2.7-14, which has a much faster implementation. It is currently under review at CRAN, but you can also get it here: https://github.com/rspatial/raster
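A minimal sketch of that suggestion, reusing the extract() call from the question (the number of cores, 4, is an arbitrary example):
library(raster)
beginCluster(4)   # start a cluster; per the suggestion above, extract() then parallelizes
precise.mean <- raster::extract(r10, v, method = "simple", weights = TRUE,
                                normalizeWeights = TRUE, fun = mean)
endCluster()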

Speed up Simulation in R with Code Optimization

The generic version of what I am trying to do is to conduct a simulation study where I manipulate a few variables to see how that impacts a result. I'm having some speed issues with R. The latest simulation worked with a few iterations (10 per experiment). However, when I moved to the large-scale version (10k per experiment), the simulation has now been running for 14 hours (and is still going).
Below is the code (with comments) that I am running. Being a rookie with R, I am struggling to optimize the simulation for efficiency. My hope is to learn from the comments and suggestions provided here and to apply them to future simulation studies.
Let me say a few things about what this code is supposed to do. I am manipulating two variables: effect size and sample size. Each combination is run 10k times (i.e., 10k experiments per condition). I initialize a data frame to store my results (called Results). I loop over three variables: Effect size, sample size, and iterations (10k).
Within the loops, I initialize four NULL vectors: p.test, p.rep, d.test, and d.rep. The former two capture the p-value of the initial t-test and the p-value of the replication (run under similar conditions). The latter two hold the effect sizes (Cohen's d).
I generate my random data from a standard normal for the control condition (DVcontrol), and I use my effect size as the mean for the experimental condition (DVexperiment). I take the difference between the values and pass the result to R's t-test function (paired-samples t-test). I store the results in a list called Trial and rbind it to the Results data frame. This process is repeated 10k times per condition until completion.
# Set Simulation Parameters
## Effect Sizes (ES equals the mean difference when SD = variance = 1)
effect_size_range <- seq(0, 2, .1) ## ES
## Sample Sizes
sample_size_range <- seq(10, 1000, 10) ## SS
## Iterations for each ES-SS Combination
iter <- 10000

# Initialize the Results Data Frame
Results <- data.frame()

# Set Random Seed
set.seed(12)

# Loop over the Different ESs
for (ES in effect_size_range) {
  # Loop over the Different Sample Sizes
  for (SS in sample_size_range) {
    # Create p-value and Effect-Size Vectors
    p.test <- NULL
    p.rep <- NULL
    d.test <- NULL
    d.rep <- NULL
    # Loop over the Iterations
    for (i in 1:iter) {
      # Generate Test Data
      DVcontrol <- rnorm(SS, mean=0, sd=1)
      DVexperiment <- rnorm(SS, mean=ES, sd=1)
      DVdiff <- DVexperiment - DVcontrol
      p.test[i] <- t.test(DVdiff, alternative="greater")$p.value
      d.test[i] <- mean(DVdiff) / sd(DVdiff)
      # Generate Replication Data
      DVcontrol <- rnorm(iter, mean=0, sd=1)
      DVexperiment <- rnorm(iter, mean=ES, sd=1)
      DVdiff <- DVexperiment - DVcontrol
      p.rep[i] <- t.test(DVdiff, alternative="greater")$p.value
      d.rep[i] <- mean(DVdiff) / sd(DVdiff)
    }
    # Results
    Trial <- list(ES=ES, SS=SS,
                  d.test=mean(d.test), d.rep=mean(d.rep),
                  p.test=mean(p.test), p.rep=mean(p.rep),
                  r=cor(p.test, p.rep, method="kendall"),
                  r.log=cor(log2(p.test)*(-1), log2(p.rep)*(-1), method="kendall"))
    Results <- rbind(Results, Trial)
  }
}
Thanks in advance for your comments and suggestions,
Josh
The general approach to optimization is to run a profiler to determine what portion of the code the interpreter spends the most time in, and then to optimize that portion. Let's say your code resides in a file called test.R. In R, you can profile it by running the following sequence of commands:
Rprof() ## Start the profiler
source( "test.R" ) ## Run the code
Rprof( NULL ) ## Stop the profiler
summaryRprof() ## Display the results
(Note that these commands will generate a file Rprof.out in the directory of your R session.)
If we run the profiler on your code (with iter <- 10, rather than iter <- 10000), we get the following profile:
# $by.self
# self.time self.pct total.time total.pct
# "rnorm" 1.56 24.53 1.56 24.53
# "t.test.default" 0.66 10.38 2.74 43.08
# "stopifnot" 0.32 5.03 0.86 13.52
# "rbind" 0.32 5.03 0.52 8.18
# "pmatch" 0.30 4.72 0.34 5.35
# "mean" 0.26 4.09 0.42 6.60
# "var" 0.24 3.77 1.38 21.70
From here, we observe that rnorm and t.test are your most expensive operations (shouldn't really be a surprise as these are in your inner-most loop).
Once you have figured out where the expensive function calls are, the actual optimization consists of two steps:
Optimize the function, and/or
Optimize the number of times the function is called.
Since t.test and rnorm are built-in R functions, your only option for Step 1 above is to look for alternative packages that may have faster implementations of sampling from the normal distribution and/or running multiple t tests. Step 2 is really about restructuring your code in a way that does not recompute the same thing multiple times. For example, the following lines of code do not depend on i:
# Generate Test Data
DVcontrol <- rnorm(SS, mean=0, sd=1)
DVexperiment <- rnorm(SS, mean=ES, sd=1)
Does it make sense to move these outside the loop, or do you really need a new sample of your test data for each different value of i?
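To make Step 1 concrete for this particular case (just one possible approach): the one-sided, one-sample p-value that t.test() returns can be computed directly from the sample mean and standard deviation, skipping t.test()'s per-call overhead. A hedged sketch:
# Direct computation of t.test(x, alternative = "greater")$p.value for a
# one-sample test, without the bookkeeping t.test() does on every call.
fast_p_greater <- function(x) {
  n <- length(x)
  tstat <- mean(x) / (sd(x) / sqrt(n))
  pt(tstat, df = n - 1, lower.tail = FALSE)
}

# sanity check against t.test (should be TRUE)
x <- rnorm(50, mean = 0.3)
all.equal(fast_p_greater(x), t.test(x, alternative = "greater")$p.value)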

R: Faster way of computing large distance matrix

I am computing a distance matrix between a large number of locations (5,000) on a sphere (using the Haversine distance function).
Here is my code:
require(geosphere)
x=rnorm(5000)
y=rnorm(5000)
xy1=cbind(x,y)
The time taken for computing the distance matrix is
system.time( outer(1:nrow(xy1), 1:nrow(xy1), function(i,j) distHaversine(xy1[i,1:2],xy1[j,1:2])))
The time taken to execute this is very high. Any suggestions on how to reduce the computation time? Thanks.
Try the built-in function in the geosphere package?
z <- distm( xy1 )
The default distance function for distm() - which calculates a distance matrix between a set of points - is the Haversine ("distHaversine") formula, but you may specify another using the fun argument.
On my 2.6GHz Core i7 rMBP this takes about 5 seconds for 5,000 points.
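If you want to be explicit about which formula is used rather than relying on the default, you can pass it through the fun argument mentioned above:
library(geosphere)
# explicitly request the Haversine formula, matching the original question
z <- distm(xy1, fun = distHaversine)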
Below I add a solution using the spatialrisk package. The key functions in this package are written in C++ (Rcpp) and are therefore very fast.
library(geosphere)
library(spatialrisk)
library(data.table)
x=rnorm(5000)
y=rnorm(5000)
xy1 = data.table(x,y)
# Cross join two data tables
coordinates_dt <- optiRum::CJ.dt(xy1, xy1)
system.time({
z <- distm( xy1 )
})
# user system elapsed
# 14.163 3.700 19.072
system.time({
distances_m <- coordinates_dt[, dist_m := spatialrisk::haversine(y, x, i.y, i.x)]
})
# user system elapsed
# 2.027 0.848 2.913

Fast ANOVA computation in R

I have a dataframe with the following dimensions:
dim(b)
[1] 974 433685
The columns represent variables that I want to run ANOVAs on (i.e., I want to run 433,685 ANOVAs). Sample size is 974. The last column is the 'group' variable.
I've come up with 3 different methods, but all are too slow due to the number of tests.
First, let's generate a small practice dataset to play with:
dat = as.data.frame(matrix(runif(10000*500), ncol = 10000, nrow = 500))
dat$group = rep(letters[1:10], 50)  # one group label per row (500 rows)
Method 1 (based on 'sapply'):
system.time(sapply(dat[,-length(dat)], function(x) aov(x~group, data=dat) ))
user system elapsed
143.76 0.33 151.79
Method 2 (based on 'mclapply' from the 'parallel' package):
library(parallel)
options(mc.cores=3)
system.time(mclapply(dat[,-length(dat)], function(x) aov(x~group, data=dat) ))
user system elapsed
141.76 0.21 142.58
Method 3 (based on 'cbind'-ing the LHS):
formula = as.formula( paste0("cbind(", paste(names(dat)[-length(dat)],collapse=","), ")~group") )
system.time(aov(formula, data=dat))
user system elapsed
10.00 0.22 10.25
In the practice dataset, Method 3 is a clear winner. However, when I do this on my actual data, computing on just 10 (of 433,685) columns using Method 3 takes this long:
user system elapsed
119.028 5.430 124.414
Not sure why it takes substantially longer on my actual data. I have access to a Linux cluster with upwards of 16 cores and 72GB of RAM.
Is there any way to compute this faster?
For simultaneously fitting many general linear models (such as ANOVA) using the same design matrix, the Bioconductor/R limma package provides a very fast lmFit() function. This is how to fit an ANOVA model using limma:
library(limma)
# generate some data
# (same dimensions as in your question)
nrows <- 1e4
ncols <- 5e2
nlevels <- 10
dat <- matrix(
  runif(nrows * ncols),
  nrow = nrows,
  ncol = ncols
)
group <- factor(rep(
  letters[1:nlevels],
  ncols / nlevels
))
# construct the design matrix
# (same as implicitly used in your question)
dmat <- model.matrix(~ group)
# fit the ANOVA model
fit <- lmFit(dat, dmat)
On my laptop it finished in 0.4 - 0.45 seconds, on data of the same dimensions as the data in your question.
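If you also need per-variable test statistics from that fit, the usual limma follow-up is eBayes() plus topTable(); note that these are moderated F-tests, so they will differ slightly from classical per-column aov() results:
# Moderated F-test of the group effect for every variable (row of dat)
fit <- eBayes(fit)
res <- topTable(fit, coef = 2:nlevels, number = Inf, sort.by = "none")
head(res[, c("F", "P.Value", "adj.P.Val")])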
