Fast ANOVA computation in R

I have a dataframe with the following dimensions:
dim(b)
[1] 974 433685
The columns represent variables that I want to run ANOVAs on (i.e., I want to run 433,685 ANOVAs). Sample size is 974. The last column is the 'group' variable.
I've come up with 3 different methods, but all are too slow due to the number of tests.
First, let's generate a small practice dataset to play with:
dat = as.data.frame(matrix(runif(10000*500), ncol = 10000, nrow = 500))
dat$group = rep(letters[1:10], 50)
Method 1 (based on 'sapply'):
system.time(sapply(dat[,-length(dat)], function(x) aov(x~group, data=dat) ))
user system elapsed
143.76 0.33 151.79
Method 2 (based on 'mclapply' from the 'parallel' package):
library(parallel)
options(mc.cores=3)
system.time(mclapply(dat[,-length(dat)], function(x) aov(x~group, data=dat) ))
user system elapsed
141.76 0.21 142.58
Method 3 (based on 'cbind'-ing the LHS):
formula = as.formula( paste0("cbind(", paste(names(dat)[-length(dat)],collapse=","), ")~group") )
system.time(aov(formula, data=dat))
user system elapsed
10.00 0.22 10.25
In the practice dataset, Method 3 is a clear winner. However, when I do this on my actual data, computing on just 10 (of 433,685) columns using Method 3 takes this long:
user system elapsed
119.028 5.430 124.414
Not sure why it takes substantially longer on my actual data. I have access to a Linux cluster with upwards of 16 cores and 72GB of RAM.
Is there any way to compute this faster?

For simultaneously fitting many general linear models (such as ANOVA) using the same design matrix, the Bioconductor/R limma package provides a very fast lmFit() function. This is how to fit an ANOVA model using limma:
library(limma)
# generate some data
# (same size as in your question, but with variables as rows, as limma expects)
nrows <- 1e4
ncols <- 5e2
nlevels <- 10
dat <- matrix(
  runif(nrows * ncols),
  nrow = nrows,
  ncol = ncols
)
group <- factor(rep(
  letters[1:nlevels],
  ncols / nlevels
))
# construct the design matrix
# (same as implicitly used in your question)
dmat <- model.matrix(~ group)
# fit the ANOVA model
fit <- lmFit(dat, dmat)
On my laptop it finished in 0.4 - 0.45 seconds, on data of the same dimensions as the data in your question.
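If you also want per-variable ANOVA F-statistics and p-values from that fit (my own addition, not part of the original answer), the usual limma follow-up is eBayes() plus topTable(); note that the moderated F-test shares variance information across variables, so its results are close to, but not identical with, running hundreds of thousands of separate aov() calls:
# moderated F-test over the 9 group coefficients (columns 2:10 of dmat)
fit <- eBayes(fit)
res <- topTable(fit, coef = 2:nlevels, number = Inf, sort.by = "none")
head(res[, c("F", "P.Value", "adj.P.Val")])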

Related

Speed up Simulation in R with Code Optimization

The generic version of what I am trying to do is to conduct a simulation study where I manipulate a few variables to see how that impacts a result. I'm having some speed issues with R. The latest simulation worked with a few iterations (10 per experiment). However, when I moved to a large scale (10k per experiment) version, the simulation has been running for 14 hours (and is still running).
Below is the code (with comments) that I am running. Being a rookie with R, I am struggling to optimize the simulation to be efficient. My hope is to learn from the comments and suggestions provided here to optimize this code, and to use them in future simulation studies.
Let me say a few things about what this code is supposed to do. I am manipulating two variables: effect size and sample size. Each combination is run 10k times (i.e., 10k experiments per condition). I initialize a data frame to store my results (called Results). I loop over three variables: Effect size, sample size, and iterations (10k).
Within the loops, I initialize four NULL components: p.test, p.rep, d.test, and d.rep. The former two capture the p-value of the initial t-test and the p-value of the replication (replicated under similar conditions). The latter two calculate the effect size (Cohen's d).
I generate my random data from a standard normal for the control condition (DVcontrol), and I use my effect size as the mean for the experimental condition (DVexperiment). I take the difference between the values and throw the result into the t-test function in R (paired-samples t-test). I store the results in a list called Trial and rbind this to the Results data frame. This process is repeated 10k times until completion.
# Set Simulation Parameters
## Effect Sizes (ES is equal to mean difference when SD equals Variance equals 1)
effect_size_range <- seq(0, 2, .1) ## ES
## Sample Sizes
sample_size_range <- seq(10, 1000, 10) ## SS
## Iterations for each ES-SS Combination
iter <- 10000

# Initialize the Vector of Results
Results <- data.frame()

# Set Random Seed
set.seed(12)

# Loop over the Different ESs
for (ES in effect_size_range) {
  # Loop over the Different Sample Sizes
  for (SS in sample_size_range) {
    # Create p-value Vectors
    p.test <- NULL
    p.rep <- NULL
    d.test <- NULL
    d.rep <- NULL
    # Loop over the iterations
    for (i in 1:iter) {
      # Generate Test Data
      DVcontrol <- rnorm(SS, mean = 0, sd = 1)
      DVexperiment <- rnorm(SS, mean = ES, sd = 1)
      DVdiff <- DVexperiment - DVcontrol
      p.test[i] <- t.test(DVdiff, alternative = "greater")$p.value
      d.test[i] <- mean(DVdiff) / sd(DVdiff)
      # Generate Replication Data
      DVcontrol <- rnorm(iter, mean = 0, sd = 1)
      DVexperiment <- rnorm(iter, mean = ES, sd = 1)
      DVdiff <- DVexperiment - DVcontrol
      p.rep[i] <- t.test(DVdiff, alternative = "greater")$p.value
      d.rep[i] <- mean(DVdiff) / sd(DVdiff)
    }
    # Results
    Trial <- list(ES = ES, SS = SS,
                  d.test = mean(d.test), d.rep = mean(d.rep),
                  p.test = mean(p.test), p.rep = mean(p.rep),
                  r = cor(p.test, p.rep, method = "kendall"),
                  r.log = cor(log2(p.test) * (-1), log2(p.rep) * (-1), method = "kendall"))
    Results <- rbind(Results, Trial)
  }
}
Thanks in advance for your comments and suggestions,
Josh
The general approach to optimization is to run a profiler to determine what portion of the code the interpreter spends the most time in, and then to optimize that portion. Let's say your code resides in a file called test.R. In R, you can profile it by running the following sequence of commands:
Rprof() ## Start the profiler
source( "test.R" ) ## Run the code
Rprof( NULL ) ## Stop the profiler
summaryRprof() ## Display the results
(Note that these commands will generate a file Rprof.out in the directory of your R session.)
If we run the profiler on your code (with iter <- 10, rather than iter <- 10000), we get the following profile:
# $by.self
#                   self.time self.pct total.time total.pct
# "rnorm"                1.56    24.53       1.56     24.53
# "t.test.default"       0.66    10.38       2.74     43.08
# "stopifnot"            0.32     5.03       0.86     13.52
# "rbind"                0.32     5.03       0.52      8.18
# "pmatch"               0.30     4.72       0.34      5.35
# "mean"                 0.26     4.09       0.42      6.60
# "var"                  0.24     3.77       1.38     21.70
From here, we observe that rnorm and t.test are your most expensive operations (which shouldn't really be a surprise, as they sit in your innermost loop).
Once you have figured out where the expensive function calls are, the actual optimization consists of two steps:
Optimize the function, and/or
Optimize the number of times the function is called.
Since t.test and rnorm are built-in R functions, your only option for Step 1 above is to look for alternative packages that may have faster implementations of sampling from the normal distribution and/or running multiple t tests. Step 2 is really about restructuring your code in a way that does not recompute the same thing multiple times. For example, the following lines of code do not depend on i:
# Generate Test Data
DVcontrol <- rnorm(SS, mean=0, sd=1)
DVexperiment <- rnorm(SS, mean=ES, sd=1)
Does it make sense to move these outside the loop, or do you really need a new sample of your test data for each different value of i?
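If you do need fresh samples on every iteration, another route (my own sketch, not part of the original answer; sim_cell is a hypothetical helper) is to vectorize the inner loop: draw all iter samples for one ES/SS cell at once and compute the t statistics, p-values and Cohen's d from column means and standard deviations, instead of calling rnorm() and t.test() 2 * iter times:
sim_cell <- function(ES, SS, iter) {
  # DVexperiment - DVcontrol is N(ES, sqrt(2)), so draw the whole SS x iter block at once
  diffs <- matrix(rnorm(SS * iter, mean = ES, sd = sqrt(2)), nrow = SS)
  m <- colMeans(diffs)
  s <- sqrt(colSums(sweep(diffs, 2, m)^2) / (SS - 1))   # column standard deviations
  tstat <- m / (s / sqrt(SS))
  p <- pt(tstat, df = SS - 1, lower.tail = FALSE)        # one-sided "greater" p-values
  list(p = p, d = m / s)                                 # d is Cohen's d, as in your loop
}
One call to sim_cell(ES, SS, iter) then replaces the whole for(i in 1:iter) loop for the test data (and a second call with the appropriate sample size covers the replication part), after which the means and Kendall correlations can be computed exactly as before.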

Monte Carlo simulation of correlation between two Brownian motions (continuous random walks)

y <- cumsum(rnorm(100,0,1)) # random walk with standard normal increments
y.ts <- ts(y)
x <- cumsum(rnorm(100,0,1))
x
x.ts <- ts(x)
ts.plot(y.ts,ty= "l", x.ts) # plot the two random walks
Regression.Q1 = lm(y~x)
summary(Regression.Q1)
t.test1 <- (summary(Regression.Q1)$coef[2,3]) # T-test computation
y[t] = y[t-1] + epsilon[t]
epsilon[t] ~ N(0,1)
set.seed(1)
t=1000
epsilon=sample(c(-1,1), t, replace = 1) # Generate k random walks across time {0, 1, ... , T}
N=T=1e3
y=t(apply(matrix(sample(c(-1,1),N*T,rep=TRUE),ncol=T),1,cumsum))
y[1]<-0
for (i in 2:t) {
  y[i] <- y[i-1] + epsilon[i]
}
I need to:
Repeat the process 1,000 times (Monte Carlo simulations), namely build a loop around the previous program and each time save the t statistic. You will have a sequence of 1,000 t-tests: S = (t-test1, t-test2, ..., t-test1000). Count the number of times the absolute value of the 1,000 t-tests > 1.96, the critical value at a 5% significance level. If the series were I(0) you would find roughly 5%. That won't be the case here (spurious regression).
What do I need to add to save the respective coefficients?
Your posted code related to y[t] = y[t-1] + epsilon[t] is not real working code, but I can see that you are trying to store all 1000 * 2 random walks. There is actually no need to do this: we only care about the t-scores, not the realizations of the random walks themselves.
For this kind of problem, where we aim to replicate a procedure many times, it is handy to first write a function that executes the procedure once. You already have working code for this; we just need to wrap it in a function (removing unnecessary parts like the plot):
sim <- function () {
  y <- cumsum(rnorm(100, 0, 1))
  x <- cumsum(rnorm(100, 0, 1))
  coef(summary(lm(y ~ x)))[2, 3]
}
This function takes no input; it only returns the t-score for one experiment.
Now, we are going to repeat this 1000 times. We could write a for loop, but the function replicate is easier (read ?replicate if necessary):
S <- replicate(1000, sim())
Note this will take some time, much slower than it should be for such a simple task, because both lm and summary.lm are slow. A much faster way will be shown later.
Now S is a vector with 1000 values, which is the "sequence of 1,000 t-tests" you want. To get "the number of times the absolute value of the 1,000 t-tests > 1.96", we can just use:
sum(abs(S) > 1.96)
# [1] 756
The result 756 is just what I get; you will get something different as the simulation is random. But it will always be quite a large number as expected.
A faster version of sim:
fast_sim <- function () {
  y <- cumsum(rnorm(100, 0, 1))
  x <- cumsum(rnorm(100, 0, 1))
  y <- y - mean(y)
  x <- x - mean(x)
  xty <- crossprod(x, y)[1]
  xtx <- crossprod(x)[1]
  b <- xty / xtx
  sigma <- sqrt(sum((y - x * b) ^ 2) / 98)
  b * sqrt(xtx) / sigma
}
This function computes the simple linear regression by hand instead of calling lm, and the t-score without summary.lm: with centered x and y, the slope is b = x'y / x'x, its standard error is sigma / sqrt(x'x), where sigma is the residual standard error on 98 degrees of freedom, and the t-score is therefore b * sqrt(x'x) / sigma.
S <- replicate(1000, fast_sim())
sum(abs(S) > 1.96)
# [1] 778
An alternative way is to use cor.test:
fast_sim2 <- function () {
  y <- cumsum(rnorm(100, 0, 1))
  x <- cumsum(rnorm(100, 0, 1))
  unname(cor.test(x, y)[[1]])
}
S <- replicate(1000, fast_sim2())
sum(abs(S) > 1.96)
# [1] 775
Let's have a benchmark:
system.time(replicate(1000, sim()))
# user system elapsed
# 1.860 0.004 1.867
system.time(replicate(1000, fast_sim()))
# user system elapsed
# 0.088 0.000 0.090
system.time(replicate(1000, fast_sim2()))
# user system elapsed
# 0.308 0.004 0.312
cor.test is much faster than lm + summary.lm, but manual computation is even faster!
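As a quick sanity check of the manual formula (my own addition, not from the answer), on a single fixed dataset the hand-computed t-score matches the one summary(lm()) reports:
set.seed(42)
y <- cumsum(rnorm(100)); x <- cumsum(rnorm(100))
t_lm <- coef(summary(lm(y ~ x)))[2, 3]
yc <- y - mean(y); xc <- x - mean(x)
b <- crossprod(xc, yc)[1] / crossprod(xc)[1]
sigma <- sqrt(sum((yc - xc * b)^2) / 98)
t_manual <- b * sqrt(crossprod(xc)[1]) / sigma
all.equal(t_lm, t_manual)   # TRUE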

Combining binary classification algorithms

I have several algorithms which solve a binary classification problem (with response 0 or 1) by assigning to each observation a probability of the target value being equal to 1. All the algorithms try to minimize the log loss function
logloss = -(1/N) * sum_i [ y_i * log(p_i) + (1 - y_i) * log(1 - p_i) ]
where N is the number of observations, y_i is the actual target value and p_i is the probability of 1 predicted by the algorithm. Here is some R code with sample data:
actual.response = c(1,0,0,0,1)
prediction.df = data.frame(
  method1 = c(0.5080349, 0.5155535, 0.5338271, 0.4434838, 0.5002529),
  method2 = c(0.5229466, 0.5298336, 0.5360780, 0.4217748, 0.4998602),
  method3 = c(0.5175378, 0.5157711, 0.5133765, 0.4372109, 0.5215695),
  method4 = c(0.5155535, 0.5094510, 0.5201827, 0.4351625, 0.5069823)
)
log.loss = colSums(-1/length(actual.response)*(actual.response*log(prediction.df)+(1-actual.response)*log(1-prediction.df)))
The sample code gives the log loss for each algorithm:
  method1   method2   method3   method4
0.6887705 0.6824404 0.6659796 0.6719181
Now I want to combine these algorithms so I can minimize the log loss even further. Is there any R package which can do this for me? I would appreciate references to any algorithms, articles, books or research papers which solve this kind of problem. Note that as a final result I want the predicted probabilities for each class, not plain 0/1 responses.
This is called ensemble learning (Wikipedia).
Check out this article: "an intro to ensemble learning in r."
Here is an example I did using the Cornell movie review data, which can be downloaded by clicking the link. I used the data set with 1000 positive and 1000 negative reviews. Once you get the data into R:
library(RTextTools)
library(tm)
library(glmnet)
library(ipred)
library(randomForest)
library(data.table)
## create a column of sentiment score. 0 for negative and 1 for
## positive.
text_neg$pos_neg<-rep(0,1000)
text_pos$pos_neg<-rep(1,1000)
## Combine into 1 data.table and rename.
text_all<-rbind(text_neg, text_pos)
##dont forget to shuffle
set.seed(26)
text2<-text_all[sample(nrow(text_all)),]
## turn the data.frame into a document term matrix. This uses the handy
## RTextTools wrappers and functions.
doc_matrix <- create_matrix(text2$V1, language="english", removeNumbers=TRUE,
                            stemWords=TRUE, removeSparseTerms=.98)
ncol(data.frame(as.matrix(doc_matrix)))
## 2200 variables at .98 sparsity. runs pretty slow...
## create a container with the very nice RTextTools package
container <- create_container(doc_matrix, text2$pos_neg,
                              trainSize=1:1700, testSize=1701:2000, virgin=FALSE)
## train the data
time_glm<-system.time(GLMNET <- train_model(container,"GLMNET"));
time_glm #1.19
time_slda<-system.time(SLDA <- train_model(container,"SLDA"));
time_slda #45.03
time_bag<-system.time(BAGGING <- train_model(container,"BAGGING"));
time_bag #59.24
time_rf<-system.time(RF <- train_model(container,"RF")); time_rf #69.59
## classify with the models
GLMNET_CLASSIFY <- classify_model(container, GLMNET)
SLDA_CLASSIFY <- classify_model(container, SLDA)
BAGGING_CLASSIFY <- classify_model(container, BAGGING)
RF_CLASSIFY <- classify_model(container, RF)
## summarize results
analytics <- create_analytics(container, cbind(SLDA_CLASSIFY, BAGGING_CLASSIFY,
                                               RF_CLASSIFY, GLMNET_CLASSIFY))
summary(analytics)
This ran an ensemble classifier using the 4 different methods (random forest, GLMNET, SLDA and bagging). The ensemble summary at the end shows:
# ENSEMBLE SUMMARY
#
#        n-ENSEMBLE COVERAGE  n-ENSEMBLE RECALL
# n >= 1                1.00               0.86
# n >= 2                1.00               0.86
# n >= 3                0.89               0.89
# n >= 4                0.63               0.96
This means that when all 4 methods agreed on whether the review was positive or negative, the ensemble had a 96% recall rate. But be careful: with a binary outcome (2 choices) and 4 different algorithms, there is bound to be a lot of agreement.
See the RTextTools documentation for more explanation. They also do an almost identical example with U.S. Congress data that I more or less mimicked in the above example.
Hope this was helpful.
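If what you ultimately want is a single combined probability that minimizes log loss, a simple alternative to a full classifier ensemble is stacking the predicted probabilities. Here is a minimal sketch of my own (not part of the RTextTools workflow above; log_loss and stack_weights are hypothetical helpers), reusing the objects from your question; in practice the weights should be fitted on held-out data rather than the same five observations:
log_loss <- function(y, p) -mean(y * log(p) + (1 - y) * log(1 - p))

stack_weights <- function(actual, preds) {
  # search over convex weights (softmax keeps them positive and summing to 1)
  obj <- function(v) {
    w <- exp(v) / sum(exp(v))
    log_loss(actual, as.matrix(preds) %*% w)
  }
  v <- optim(rep(0, ncol(preds)), obj)$par
  exp(v) / sum(exp(v))
}

w <- stack_weights(actual.response, prediction.df)
combined <- as.matrix(prediction.df) %*% w
log_loss(actual.response, combined)   # should be no worse than the best single method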

Speed up glm in apply function in R

My question is based on the following situation:
I have a matrix with 20 rows and >100,000 columns. I would like to apply the glm function and extract the likelihood ratio statistic for each of the columns. So far, I have tried to implement it in this manner. For example:
X <- gl(5, 4, length = 20); Y <- gl(4, 1, length = 20)
X <- factor(X); Y <- factor(Y)
matrix <- matrix(sample.int(15, size = 20*100000, replace = TRUE), nrow = 20, ncol = 100000)
apply(matrix, 2, function(x) glm(x ~ X+Y, poisson)$deviance)
Is there any way to speed up the computation time? I figured that since each vector that is used in glm is not big at all (vector of length 20), speedglm is not helpful here.
I would be glad if anyone could give me advice on this. Thank you very much in advance!
I ran a test of 1000 columns. It only took 2.4 seconds.
system.time(apply(matrix[,1:1000], 2, function(x) glm(x ~ X+Y, poisson)$deviance))
user system elapsed
2.40 0.00 2.46
I also tried 50,000 and it seemed to scale very linearly.
Therefore you only need to wait about 4 minutes to compute all 100,000 columns, so I don't see a real problem. That said, the bottleneck is the overhead of calling the glm() function 100,000 times; try to avoid running a high-level function that many times.
To run faster, listed in ascending order of effort (a sketch of option 1 follows this list):
1. wrap it in a parallel loop (2x-4x speed-up)
2. figure out how to perform the calculation as matrix multiplications in R (~50x)
3. implement it with Rcpp (~100x)
None of these, however, will take you less than 4 minutes to implement.
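Below is a minimal sketch of option 1 (my own addition, not from the answer), using the parallel package on the objects from the question; it assumes the code is run at the top level so that the exported objects are visible to the workers:
library(parallel)
cl <- makeCluster(4)                            # adjust to the cores you actually have
clusterExport(cl, c("matrix", "X", "Y"))        # ship the data matrix and the factors to the workers
blocks <- clusterSplit(cl, seq_len(ncol(matrix)))  # one block of column indices per worker
deviances <- unlist(parLapply(cl, blocks, function(idx) {
  apply(matrix[, idx, drop = FALSE], 2,
        function(x) glm(x ~ X + Y, family = poisson)$deviance)
}))
stopCluster(cl)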

Speeding up correlation matrix calculation in R

I have a dataframe with 49 variables and 4M rows. I want to calculate the 49 x 49 correlation matrix. All columns are of class numeric.
Here's a sample:
df <- data.frame(replicate(49,sample(0:50,4000000,rep=TRUE)))
I used the standard cor function.
cor_matrix <- cor(df, use = "pairwise.complete.obs")
This is taking a really long time. I have 16 GB of RAM and a single-core 2.60 GHz i5.
Is there a way to make this calculation faster on my desktop?
There's a faster version of the cor function in the WGCNA package (used for inferring gene networks based on correlations). On my 3.1 GHz i7 w/ 16 GB of RAM it can solve the same 49 x 49 matrix about 20x faster:
mat <- replicate(49, as.numeric(sample(0:50,4000000,rep=TRUE)))
system.time(
cor_matrix <- cor(mat, use = "pairwise.complete.obs")
)
user system elapsed
40.391 0.017 40.396
system.time(
cor_matrix_w <- WGCNA::cor(mat, use = "pairwise.complete.obs")
)
user system elapsed
1.822 0.468 2.290
all.equal(cor_matrix, cor_matrix_w)
[1] TRUE
Check the function's help file for details on how the two versions differ when your data contains missing observations.
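If your data has no missing values at all, another option (my own addition, not from the answer) is to compute the correlation matrix directly as a cross-product of the standardized columns, which goes through an optimized BLAS routine:
mat_s <- scale(mat)                                   # center and scale each column
cor_crossprod <- crossprod(mat_s) / (nrow(mat_s) - 1)
all.equal(cor_matrix, cor_crossprod, check.attributes = FALSE)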

Resources