Why is R slow on this random permutation function?

I am new to R (Revolution Analytics R) and have been translating some Matlab functions into R.
Question: Why is the function GRPdur(n) so slow?
GRPdur = function(n){
    #
    # Durstenfeld's Permute algorithm, CACM 1964
    # generates a random permutation of {1,2,...n}
    #
    p = 1:n                           # start with identity p
    for (k in seq(n, 2, -1)){
        r = 1 + floor(runif(1) * k)   # random integer between 1 and k
        tmp = p[k]
        p[k] = p[r]                   # Swap(p(r), p(k))
        p[r] = tmp
    }
    return(p)
}
Here is what I get on a Dell Precision 690, 2×Quad-core Xeon 5345 @ 2.33 GHz, Windows 7 64-bit:
> system.time(GRPdur(10^6))
user system elapsed
15.30 0.00 15.32
> system.time(sample(10^6))
user system elapsed
0.03 0.00 0.03
Here is what I get in Matlab 2011b
>> tic;p = GRPdur(10^6);disp(toc)
0.1364
tic;p = randperm(10^6);disp(toc)
0.1116
Here is what I get in Matlab 2008a
>> tic;p=GRPdur(10^6);toc
Elapsed time is 0.124169 seconds.
>> tic;p=randperm(10^6);toc
Elapsed time is 0.211372 seconds.
>>
LINKS: GRPdur is part of RPGlab, a package of Matlab functions I wrote that generates and tests various random permutation generators. The notes can be viewed separately here: Notes on RPGlab.
The original Durstenfeld Algol program is here.

Both Matlab and S (later R) started out as thin wrappers around FORTRAN functions for doing math stuff.
In S/R the for-loops have "always" been slow, but that has been OK because there are usually vectorized ways of expressing the problem. Also, R has thousands of functions in Fortran or C that do higher-level things quickly. For instance, the sample function does exactly what your for-loop does, but much more quickly.
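A quick illustration of that equivalence (a minimal sketch, not part of the original answer), using sample.int, which skips a little of sample()'s argument handling:
# sample.int(n) returns a uniformly random permutation of 1:n,
# which is exactly what GRPdur(n) builds one swap at a time.
set.seed(1)
p <- sample.int(10)
p
identical(sort(p), 1:10)   # TRUE: a permutation of 1:10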
So why then is MATLAB much better at executing scripted for-loops? Two simple reasons: RESOURCES and PRIORITIES.
MathWorks, the company behind MATLAB, is rather big, with around 2000 employees. They decided years ago to prioritize improving the performance of scripts. They hired a bunch of compiler experts and spent years developing a Just-In-Time (JIT) compiler that takes the script code and turns it into assembler code. They did a very good job too. Kudos to them!
R is open source, and the R core team works on improving R in their spare time. Luke Tierney of R core has worked hard and developed a compiler package for R that compiles R scripts to byte code. It does NOT turn it into assembler code however, but works pretty well. Kudos to him!
...But the amount of effort put into the R compiler vs. the MATLAB compiler is simply much less, and therefore the result is slower:
system.time(GRPdur(10^6)) # 9.50 secs
# Compile the function...
f <- compiler::cmpfun(GRPdur)
system.time(f(10^6)) # 3.69 secs
As you can see, the for-loop became 3x faster by compiling it to byte code. Another difference is that the R JIT compiler is not enabled by default as it is in MATLAB.
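As a hedged aside (not part of the original answer's timings): on the R versions of that era you could switch the byte-code JIT on yourself via compiler::enableJIT, instead of calling cmpfun() on each function by hand; in recent R versions it is enabled by default.
library(compiler)
enableJIT(3)                # JIT levels run from 0 (off) to 3 (most aggressive)
system.time(GRPdur(10^6))   # should now land near the cmpfun() timing above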
UPDATE: Just for the record, a slightly more optimized R version (based on Knuth's algorithm), where the random number generation has been vectorized as @joran suggested:
f <- function(n) {
    p <- integer(n)
    p[1] <- 1L
    rv <- runif(n, 1, 1:n)   # rv[k] ~ Uniform(1, k); non-integer indices get truncated below
    for (k in 2:n) {
        r <- rv[k]
        p[k] <- p[r]         # Swap(p(r), p(k))
        p[r] <- k
    }
    p
}
g <- compiler::cmpfun(f)
system.time(f(1e6)) # 4.84
system.time(g(1e6)) # 0.98
# Compare to Joran's version:
system.time(GRPdur1(10^6)) # 6.43
system.time(GRPdur2(10^6)) # 1.66
...still an order of magnitude slower than MATLAB. But again, just use sample or sample.int, which apparently beats MATLAB's randperm by 3x!
system.time(sample.int(10^6)) # 0.03

Because you wrote a C program in an R skin
n = 10^6L
p = 1:n
system.time(sample(p, n))
 user  system elapsed
 0.03    0.00    0.03

My response to the OP's request was too long to fit in a comment, so here's what I was referring to:
# Create r outside the for loop
GRPdur1 <- function(n){
    p <- 1:n
    k <- seq(n, 2, -1)
    r <- 1 + floor(runif(length(k)) * k)
    for (i in 1:length(k)){
        tmp <- p[k[i]]
        p[k[i]] <- p[r[i]]
        p[r[i]] <- tmp
    }
    return(p)
}
library(compiler)
GRPdur2 <- cmpfun(GRPdur1)
set.seed(1)
out1 <- GRPdur(100)
set.seed(1)
out2 <- GRPdur1(100)
# Check that GRPdur1 is generating identical output
identical(out1,out2)
system.time(GRPdur(10^6))
user system elapsed
12.948 0.389 13.232
system.time(GRPdur2(10^6))
user system elapsed
1.908 0.018 1.910
Not quite 10x, but more than the 3x Tommy showed just using the compiler. For a somewhat more accurate timing:
library(rbenchmark)
benchmark(GRPdur(10^6),GRPdur2(10^6),replications = 10)
test replications elapsed relative user.self sys.self
1 GRPdur(10^6) 10 127.315 6.670946 124.358 3.656
2 GRPdur2(10^6) 10 19.085 1.000000 19.040 0.222
So the 10x comment was (perhaps not surprisingly, being based on a single system.time run) optimistic, but the vectorization gains you a fair bit more speed over what the byte compiler does.

Related

Speed up Simulation in R with Code Optimization

The generic version of what I am trying to do is to conduct a simulation study where I manipulate a few variables to see how that impacts a result. I'm having some speed issues with R. The latest simulation worked with a few iterations (10 per experiment). However, when I moved to a large scale (10k per experiment) version, the simulation has been running for 14 hours (and is still running).
Below is the code (with comments) that I am running. Being a rookie with R, I am struggling to optimize the simulation to be efficient. My hope is to learn from the comments and suggestions provided here, both to optimize this code and to apply them to future simulation studies.
Let me say a few things about what this code is supposed to do. I am manipulating two variables: effect size and sample size. Each combination is run 10k times (i.e., 10k experiments per condition). I initialize a data frame to store my results (called Results). I loop over three variables: Effect size, sample size, and iterations (10k).
Within the loops, I initialize four NULL components: p.test, p.rep, d.test, and d.rep. The former two capture the p-value of the initial t-test and the p-value of the replication (replicated under similar conditions). The latter two calculate the effect size (Cohen's d).
I generate my random data from a standard normal for the control condition (DVcontrol), and I use my effect size as the mean for the experimental condition (DVexperiment). I take the difference between the values and throw the result into the t-test function in R (paired-samples t-test). I store the results in a list called Trial and rbind this to the Results data frame. This process is repeated 10k times until completion.
# Set Simulation Parameters
## Effect Sizes (ES is equal to mean difference when SD equals Variance equals 1)
effect_size_range <- seq(0, 2, .1)      ## ES
## Sample Sizes
sample_size_range <- seq(10, 1000, 10)  ## SS
## Iterations for each ES-SS Combination
iter <- 10000

# Initialize the Vector of Results
Results <- data.frame()

# Set Random Seed
set.seed(12)

# Loop over the Different ESs
for (ES in effect_size_range) {
    # Loop over the Different Sample Sizes
    for (SS in sample_size_range) {
        # Create p-value Vectors
        p.test <- NULL
        p.rep  <- NULL
        d.test <- NULL
        d.rep  <- NULL
        # Loop over the iterations
        for (i in 1:iter) {
            # Generate Test Data
            DVcontrol    <- rnorm(SS, mean = 0,  sd = 1)
            DVexperiment <- rnorm(SS, mean = ES, sd = 1)
            DVdiff       <- DVexperiment - DVcontrol
            p.test[i] <- t.test(DVdiff, alternative = "greater")$p.value
            d.test[i] <- mean(DVdiff) / sd(DVdiff)
            # Generate Replication Data
            DVcontrol    <- rnorm(iter, mean = 0,  sd = 1)
            DVexperiment <- rnorm(iter, mean = ES, sd = 1)
            DVdiff       <- DVexperiment - DVcontrol
            p.rep[i] <- t.test(DVdiff, alternative = "greater")$p.value
            d.rep[i] <- mean(DVdiff) / sd(DVdiff)
        }
        # Results
        Trial <- list(ES = ES, SS = SS,
                      d.test = mean(d.test), d.rep = mean(d.rep),
                      p.test = mean(p.test), p.rep = mean(p.rep),
                      r = cor(p.test, p.rep, method = "kendall"),
                      r.log = cor(log2(p.test) * (-1), log2(p.rep) * (-1), method = "kendall"))
        Results <- rbind(Results, Trial)
    }
}
Thanks in advance for your comments and suggestions,
Josh
The general approach to optimization is to run a profiler to determine what portion of the code the interpreter spends the most time in, and then to optimize that portion. Let's say your code resides in a file called test.R. In R, you can profile it by running the following sequence of commands:
Rprof() ## Start the profiler
source( "test.R" ) ## Run the code
Rprof( NULL ) ## Stop the profiler
summaryRprof() ## Display the results
(Note that these commands will generate a file Rprof.out in the directory of your R session.)
If we run the profiler on your code (with iter <- 10, rather than iter <- 10000), we get the following profile:
# $by.self
# self.time self.pct total.time total.pct
# "rnorm" 1.56 24.53 1.56 24.53
# "t.test.default" 0.66 10.38 2.74 43.08
# "stopifnot" 0.32 5.03 0.86 13.52
# "rbind" 0.32 5.03 0.52 8.18
# "pmatch" 0.30 4.72 0.34 5.35
# "mean" 0.26 4.09 0.42 6.60
# "var" 0.24 3.77 1.38 21.70
From here, we observe that rnorm and t.test are your most expensive operations (shouldn't really be a surprise as these are in your inner-most loop).
Once you have figured out where the expensive function calls are, the actual optimization consists of two steps:
Optimize the function, and/or
Optimize the number of times the function is called.
Since t.test and rnorm are built-in R functions, your only option for Step 1 above is to look for alternative packages that may have faster implementations of sampling from the normal distribution and/or running multiple t tests. Step 2 is really about restructuring your code in a way that does not recompute the same thing multiple times. For example, the following lines of code do not depend on i:
# Generate Test Data
DVcontrol <- rnorm(SS, mean=0, sd=1)
DVexperiment <- rnorm(SS, mean=ES, sd=1)
Does it make sense to move these outside the loop, or do you really need a new sample of your test data for each different value of i?
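As a hedged aside (not part of the original answer): much of t.test's cost in this tight loop is method dispatch and argument checking. For a one-sample, one-sided test against zero you can compute the same p-value directly; the helper name below is made up for illustration.
# Hypothetical helper: one-sided, one-sample t-test p-value against mu = 0,
# skipping t.test()'s dispatch and input checks.
fast_p_greater <- function(x) {
    n  <- length(x)
    tt <- mean(x) / (sd(x) / sqrt(n))       # t statistic for H0: mean = 0
    pt(tt, df = n - 1, lower.tail = FALSE)  # alternative = "greater"
}
# Quick check against t.test on simulated data
x <- rnorm(100, mean = 0.3)
all.equal(fast_p_greater(x), t.test(x, alternative = "greater")$p.value)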

R How to get total CPU time with foreach?

I am trying to get the total CPU hours of code run in parallel (using foreach from the package doParallel), but I'm not sure how to go about doing this. I have used proc.time(), but it just returns a difference in 'real' time. From what I have read, system.time() essentially does the same as proc.time(). How do I get the total CPU hours of R code run in parallel?
A little trick is to return the measured runtime together with your computation result in a list. The example below uses system.time() to get the runtime, which works the same way as proc.time().
NOTE: this is a modified example from my blog post, R with Parallel Computing from User Perspectives.
# fake code to show how to get runtime of each process in foreach
library(foreach)
library(doParallel)

# Real physical cores in my computer
cores <- detectCores(logical = FALSE)
cl <- makeCluster(cores)
registerDoParallel(cl, cores = cores)

system.time(
    res.gather <- foreach(i = 1:cores, .combine = 'list') %dopar% {
        s.time <- system.time({
            set.seed(i)
            res <- matrix(runif(10^6), nrow = 1000, ncol = 1000)
            res <- exp(sqrt(res) * sqrt(res^3))
        })
        list(result = res, runtime = s.time)
    }
)

stopImplicitCluster()
stopCluster(cl)
Thus, the runtime of each worker is saved in res.gather and you can retrieve it easily. Add them up and you know the total time used by your parallel program.
> res.gather[[1]]$runtime
user system elapsed
0.42 0.04 0.48
> res.gather[[2]]$runtime
user system elapsed
0.42 0.03 0.47
> res.gather[[1]]$runtime[3] + res.gather[[2]]$runtime[3]
elapsed
   0.95
Finally, the combined runtime of the 2 worker sessions is 0.95 sec, not counting the wait time of the R master.
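A hedged sketch for totalling the timings programmatically instead of adding elements by hand (assuming res.gather is a flat list of worker results, as in the two-worker run above):
# "elapsed" totals the workers' wall-clock time; "user.self" + "sys.self" is
# closer to the actual CPU time consumed by the workers.
total_elapsed <- sum(sapply(res.gather, function(x) x$runtime[["elapsed"]]))
total_cpu     <- sum(sapply(res.gather, function(x)
                     x$runtime[["user.self"]] + x$runtime[["sys.self"]]))
total_elapsed
total_cpu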

Running time foreach package [duplicate]

(Marked as a duplicate of: Why is the parallel package slower than just using apply?)
I have a problem using the foreach package in R. When I run this code:
tmp=proc.time()
x<-for(i in 1:1000){sqrt(i)}
x
proc.time()-tmp
and this code :
tmp=proc.time()
x<- foreach(i=1:1000) %dopar% sqrt(i)
x
proc.time()-tmp
The R console shows, for the parallel version:
   user  system elapsed
  0.464   0.776   0.705
and for the normal loop:
   user  system elapsed
  0.001   0.000   0.001
So the normal loop runs faster... Is it normal?
Thanks for your help.
Parallel processing won't speed up simple operations like sqrt(x). Ideally you use it for more complex operations, or you chunk the work, something like:
x <- foreach(i = 0:9, .combine = 'c') %dopar% sqrt(seq(i * 10000000, (i + 1) * 10000000 - 1))
x
It takes more time to hand the tasks to the worker processes than it does to perform them. If you look at the processors used in your system monitor/task manager, you'll see that only one processor is used, regardless of the backend you set up.
Edit: It seems that you have no parallel backend set up for your foreach loop, so it will default to sequential mode anyway. An easy way to set up the parallel backend is
library(doParallel)
ncores = detectCores()
clust = makeCluster(ncores - 2)
registerDoParallel(clust)
#do stuff here
#...
stopCluster(clust)
Depending on your system, you may need to do more outside of R in order to get the backend set up.
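A quick hedged check, using helpers exported by the foreach package, that a backend is actually registered before you rely on %dopar%:
library(foreach)
getDoParName()      # name of the registered backend, e.g. after registerDoParallel()
getDoParWorkers()   # number of workers; 1 means %dopar% will effectively run sequentially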
Here is some test code you can use to set up a parallel experiment on Windows:
library(foreach)
library(doParallel)
cl <- makePSOCKcluster(2)
registerDoParallel(cl)
system.time({
x <- foreach(i=1:1000) %do% Sys.sleep(0.001)
})
system.time({
x <- foreach(i=1:1000) %dopar% Sys.sleep(0.001)
})
stopCluster(cl)
You should find that the parallel implementation runs in roughly half the time of the serial one:
> system.time({
+ x <- foreach(i=1:1000) %do% Sys.sleep(0.001)
+ })
user system elapsed
0.08 0.00 12.55
>
> system.time({
+ x <- foreach(i=1:1000) %dopar% Sys.sleep(0.001)
+ })
user system elapsed
0.23 0.00 6.09
Note that parallel computing is not a silver bullet. There is a fixed startup cost as well as a communication cost. See Amdahl's law.
In general, it is only worth doing parallel computing if your task is taking a long time to run.
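A small illustrative sketch of Amdahl's law (not from the original answer): if a fraction p of the work can be parallelized over n workers, the best-case speedup is 1 / ((1 - p) + p / n).
# Best-case speedup under Amdahl's law
amdahl_speedup <- function(p, n) 1 / ((1 - p) + p / n)
amdahl_speedup(p = 0.9, n = 4)   # ~3.08x, not 4x, even when 90% of the work is parallel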

Is there an efficient way to parallelize mapply?

I have many rows and on every row I compute the uniroot of a non-linear function. I have a quad-core Ubuntu machine which hasn't stopped running my code for two days now. Not surprisingly, I'm looking for ways to speed things up ;-)
After some research, I noticed that only one core is currently used and parallelization is the thing to do. Digging deeper, I came to the conclusion (maybe incorrectly?) that the package foreach isn't really meant for my problem because too much overhead is produced (see, for example, SO). A good alternative seems to be multicore for Unix machines. In particular, the pvec function seems to be the most efficient one after I checked the help page.
However, if I understand it correctly, this function only takes one vector and splits it up accordingly. I need a function that can be parallelized, but that takes multiple vectors (or a data.frame instead), just like the mapply function does. Is there anything out there that I missed?
Here is a small example of what I want to do: (Note that I include a plyr example here because it can be an alternative to the base mapply function and it has a parallelize option. However, it is slower in my implementation and internally, it calls foreach to parallelize, so I think it won't help. Is that correct?)
library(plyr)
library(foreach)
n <- 10000
df <- data.frame(P   = rnorm(n, mean = 100, sd = 10),
                 B0  = rnorm(n, mean = 40,  sd = 5),
                 CF1 = rnorm(n, mean = 30,  sd = 10),
                 CF2 = rnorm(n, mean = 30,  sd = 5),
                 CF3 = rnorm(n, mean = 90,  sd = 8))
get_uniroot <- function(P, B0, CF1, CF2, CF3) {
    uniroot(function(x) {-P + B0 + CF1/x + CF2/x^2 + CF3/x^3},
            lower = 1,
            upper = 10,
            tol = 0.00001)$root
}
system.time(x1 <- mapply(get_uniroot, df$P, df$B0, df$CF1, df$CF2, df$CF3))
# user  system elapsed
# 0.91    0.00    0.90
system.time(x2 <- mdply(df, get_uniroot))
# user  system elapsed
# 5.85    0.00    5.85
system.time(x3 <- foreach(P = df$P, B0 = df$B0, CF1 = df$CF1, CF2 = df$CF2,
                          CF3 = df$CF3, .combine = "c") %do% {
    get_uniroot(P, B0, CF1, CF2, CF3)
})
# user   system elapsed
# 10.30    0.00   10.36
all.equal(x1, x2$V1) # TRUE
all.equal(x1, x3)    # TRUE
Also, I tried to implement Ryan Thompson's function chunkapply from the SO link above (I only removed the doMC part, because I couldn't install it; his example still works, even after adjusting his function), but I didn't get it to work. However, since it uses foreach, I figured the same arguments mentioned above apply, so I didn't try for too long.
#chunkapply(get_uniroot, list(P=df$P, B0=df$B0, CF1=df$CF1, CF2=df$CF2, CF3=df$CF3))
#Error in { : task 1 failed - "invalid function value in 'zeroin'"
PS: I know that I could just increase tol to reduce the number of steps that are necessary to find a uniroot. However, I already set tol as big as possible.
I'd use the parallel package that's built into R 2.14 and work with matrices. You could then simply use mclapply like this:
library(parallel)  # provides mclapply (fork-based, so mc.cores > 1 works on Unix-alikes only)
dfm <- as.matrix(df)
result <- mclapply(seq_len(nrow(dfm)),
                   function(x) do.call(get_uniroot, as.list(dfm[x, ])),
                   mc.cores = 4L)
unlist(result)
This is basically doing the same mapply does, but in a parallel way.
But...
Mind you that parallelization always comes with some overhead as well. As I explained in the question you link to, going parallel only pays off if your inner function takes significantly longer to compute than the overhead involved. In your case, your uniroot function works pretty fast. You might then consider cutting your data frame into bigger chunks, and combining both mapply and mclapply. A possible way to do this is:
ncores <- 4
id <- floor(
    quantile(0:nrow(df),
             1 - (0:ncores)/ncores)
)
idm <- embed(id, 2)

mapply_uniroot <- function(id) {
    tmp <- df[(id[1] + 1):id[2], ]
    mapply(get_uniroot, tmp$P, tmp$B0, tmp$CF1, tmp$CF2, tmp$CF3)
}
result <- mclapply(nrow(idm):1,
                   function(x) mapply_uniroot(idm[x, ]),
                   mc.cores = ncores)
final <- unlist(result)
This might need some tweaking, but it essentially breaks your df into exactly as many bits as there are cores, and runs the mapply on every core. To show that this works:
> x1 <- mapply(get_uniroot, df$P, df$B0, df$CF1, df$CF2, df$CF3)
> all.equal(final,x1)
[1] TRUE
It's an old topic, but FYI you now have parallel::mcmapply (the doc is here). Don't forget to set mc.cores in the options. I usually use mc.cores = parallel::detectCores() - 1 to leave one CPU free for OS operations.
x4 <- mcmapply(get_uniroot, df$P, df$B0, df$CF1, df$CF2, df$CF3, mc.cores = parallel::detectCores() - 1)
This isn't exactly a best practices suggestion, but considerable speed-up can be had by identifying the root for all parameters in a 'vectorized' fashion. For instance,
bisect <- function(f, interval, ..., lower = min(interval), upper = max(interval),
                   f.lower = f(lower, ...), f.upper = f(upper, ...), maxiter = 20)
{
    nrow <- length(f.lower)
    bounds <- matrix(c(lower, upper), nrow, 2, byrow = TRUE)
    for (i in seq_len(maxiter)) {
        ## move lower or upper bound to mid-point, preserving opposite signs
        mid <- rowSums(bounds) / 2
        updt <- ifelse(f(mid, ...) > 0, 0L, nrow) + seq_len(nrow)
        bounds[updt] <- mid
    }
    rowSums(bounds) / 2
}
and then
> system.time(x2 <- with(df, {
+ f <- function(x, PB0, CF1, CF2, CF3)
+ PB0 + CF1/x + CF2/x^2 + CF3/x^3
+ bisect(f, c(1, 10), PB0, CF1, CF2, CF3)
+ }))
user system elapsed
0.180 0.000 0.181
> range(x1 - x2)
[1] -6.282406e-06 6.658593e-06
versus about 1.3s for application of uniroot separately to each. This also combined P and B0 into a single value ahead of time, since that is how they enter the equation.
The bounds on the final value are +/- diff(interval) * (.5 ^ maxiter) or so. A fancier implementation would replace bisection with linear or quadratic interpolation (as in the reference cited in ?uniroot), but then uniform efficient convergence (and in all cases error handling) would be more tricky to arrange.
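A quick numeric check of that bound (a one-liner sketch, not from the original answer):
# Width of the bisection bracket after 20 halvings of the interval c(1, 10)
diff(c(1, 10)) * 0.5^20   # ~8.6e-06, consistent with range(x1 - x2) above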

Problems using foreach parallelization

I'm trying to compare parallelization options. Specifically, I'm comparing the standard SNOW and multicore implementations to those using doSNOW or doMC and foreach. As a sample problem, I'm illustrating the central limit theorem by computing the means of samples drawn from a standard normal distribution many times. Here's the standard code:
CltSim <- function(nSims = 1000, size = 100, mu = 0, sigma = 1) {
    sapply(1:nSims, function(x) {
        mean(rnorm(n = size, mean = mu, sd = sigma))
    })
}
Here's the SNOW implementation:
library(snow)
cl <- makeCluster(2)
ParCltSim <- function(cluster, nSims = 1000, size = 100, mu = 0, sigma = 1) {
    parSapply(cluster, 1:nSims, function(x) {
        mean(rnorm(n = size, mean = mu, sd = sigma))
    })
}
Next, the doSNOW method:
library(foreach)
library(doSNOW)
registerDoSNOW(cl)
FECltSim <- function(nSims = 1000, size = 100, mu = 0, sigma = 1) {
    x <- numeric(nSims)
    foreach(i = 1:nSims, .combine = cbind) %dopar% {
        x[i] <- mean(rnorm(n = size, mean = mu, sd = sigma))
    }
}
I get the following results:
> system.time(CltSim(nSims=10000, size=100))
user system elapsed
0.476 0.008 0.484
> system.time(ParCltSim(cluster=cl, nSims=10000, size=100))
user system elapsed
0.028 0.004 0.375
> system.time(FECltSim(nSims=10000, size=100))
user system elapsed
8.865 0.408 11.309
The SNOW implementation shaves off about 23% of computing time relative to an unparallelized run (time savings get bigger as the number of simulations increase, as we would expect). The foreach attempt actually increases run time by a factor of 20. Additionally, if I change %dopar% to %do% and check the unparallelized version of the loop, it takes over 7 seconds.
Additionally, we can consider the multicore package. The simulation written for multicore is
library(multicore)
MCCltSim <- function(nSims = 1000, size = 100, mu = 0, sigma = 1) {
    unlist(mclapply(1:nSims, function(x) {
        mean(rnorm(n = size, mean = mu, sd = sigma))
    }))
}
We get an even better speed improvement than SNOW:
> system.time(MCCltSim(nSims=10000, size=100))
user system elapsed
0.924 0.032 0.307
Starting a new R session, we can attempt the foreach implementation using doMC instead of doSNOW, calling
library(doMC)
registerDoMC()
then running FECltSim() as above, still finding
> system.time(FECltSim(nSims=10000, size=100))
user system elapsed
6.800 0.024 6.887
This is "only" a 14-fold increase over the non-parallelized runtime.
Conclusion: My foreach code is not running efficiently under either doSNOW or doMC. Any idea why?
Thanks,
Charlie
To follow on something Joris said, foreach() is best when the number of jobs does not hugely exceed the number of processors you will be using. Or more generally, when each job takes a significant amount of time on its own (seconds or minutes, say). There is a lot of overhead in creating the threads, so you really don't want to use it for lots of small jobs. If you were doing 10 million sims rather than 10 thousand, and you structured your code like this:
nSims <- 1e7
nBatch <- 1e6
foreach(i = 1:(nSims/nBatch), .combine = c) %dopar% {
    replicate(nBatch, mean(rnorm(n = size, mean = mu, sd = sigma)))
}
I bet you would find that foreach was doing pretty well.
Also note the use of replicate() for this kind of application rather than sapply. Actually, the foreach package has a similar convenience function, times(), which could be applied in this case. Of course, if your code is not doing a simple simulation with identical parameters every time, you will need sapply() and foreach().
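A small hedged sketch of that times() helper (it repeats an expression that takes no loop variable; the exact return structure depends on the .combine defaults):
library(foreach)
# Run the same simulation expression five times without an explicit index
res <- times(5) %do% mean(rnorm(100))
res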
To start with, you could write your foreach code a bit more concisely:
FECltSim <- function(nSims = 1000, size = 100, mu = 0, sigma = 1) {
    foreach(i = 1:nSims, .combine = c) %dopar% {
        mean(rnorm(n = size, mean = mu, sd = sigma))
    }
}
This gives you a vector directly, so there is no need to explicitly build one within the loop. There is also no need to use cbind, as each result is just a single number, so .combine=c will do.
The thing with foreach is that it creates quite a lot of overhead to communicate between the cores and to fit the results from the different cores together. A quick look at the profile shows this pretty clearly:
$by.self
self.time self.pct total.time total.pct
$ 5.46 41.30 5.46 41.30
$<- 0.76 5.75 0.76 5.75
.Call 0.76 5.75 0.76 5.75
...
More than 40% of the time it is busy selecting things. It also uses a lot of other functions for the whole operation. Actually, foreach is only advisable if you have relatively few rounds through very time consuming functions.
The other two solutions are built on a different technology, and do far less in R. On a side note, snow was originally developed to work on clusters rather than on single workstations, which is what multicore targets.
