Parallel processing in R done wrong?

I have some code that I am trying to run in parallel using the foreach package. The code works, but on a computer with 4 cores it takes about 26 minutes, and when I switch to one with 32 cores it still takes 13 minutes to finish. I was wondering whether I am doing something wrong, since I am using 8 times as many cores but only halving the runtime. My code looks like this:
no_cores <- detectCores()
cl <- makeCluster(no_cores)
registerDoParallel(cl)
Xenopus_Data <- foreach(b = 1:length(newly_populated_vec),
                        .packages = c("raster", "gdistance", "rgdal", "sp")) %dopar% {
  Xenopus_Walk(altdata = altdata, water = water, habitat_suitability = habitat_suitability,
               max_range_without_water = max_range_without_water,
               max_range = max_range, slope = slope, Start_Pt = newly_populated_vec[b])
}
stopCluster(cl)
For the computer with 4 cores I get the following time:
Time_of_Start
[1] "2016-07-12 13:07:23 CEST"
Time_of_end
[1] "2016-07-12 13:33:10 CEST"
And for the one with 32 cores:
Time_of_Start
[1] "2016-07-12 14:35:48 CEST"
Time_of_end
[1] "2016-07-12 14:48:08 CEST"
Is this normal? And if so, does anyone know how to speed it up further, maybe using different packages?
Any type of help is greatly appreciated!
EDIT: These are the times I get after applying the suggested corrections. For 32 cores:
user system elapsed
5.99 40.78 243.97
For 4 cores:
user system elapsed
1.91 0.94 991.71
Note that before, I was running the calculation multiple times via some loops, which is why the computation time decreased so drastically; but one can still see that the gap between the two machines has widened, I believe.

Try this and let me know if your problem is solved:
library(doParallel)
library(foreach)
registerDoParallel(cores=detectCores())
n <- length(newly_populated_vec)
cat("\nN = ", n, " | Parallel workers count = ", getDoParWorkers(), "\n\n", sep="")
t0 <- proc.time()
Xenopus_Data <- foreach(b = 1:n,
                        .packages = c("raster", "gdistance", "rgdal", "sp"),
                        .combine = rbind) %dopar% {
  Xenopus_Walk(
    water = water,
    altdata = altdata,
    habitat_suitability = habitat_suitability,
    max_range_without_water = max_range_without_water,
    max_range = max_range,
    slope = slope,
    Start_Pt = newly_populated_vec[b])
}
TIME <- proc.time() - t0
Also, try to monitor the logical cores on your PC/laptop to check whether all of them are involved in the computation (Task Manager on Windows, htop on Linux).
Please also keep in mind that doubling the number of cores does not necessarily double performance.
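To make that last point concrete, here is a minimal sketch (my own addition, with a guessed single-core runtime and serial fraction, not values measured from the question) of how Amdahl's law bounds the achievable speedup: any part of the job that stays serial (cluster setup, exporting data to the workers, combining results) quickly dominates as the core count grows.
# Minimal sketch: Amdahl's law. If a fraction s of the work is serial,
# the best possible runtime on n cores is T1 * (s + (1 - s) / n).
# T1 and s below are hypothetical, not measured.
amdahl_time <- function(T1, s, n) T1 * (s + (1 - s) / n)

T1 <- 90    # assumed single-core runtime in minutes
s  <- 0.15  # assumed serial fraction

data.frame(cores = c(1, 4, 32),
           minutes = round(amdahl_time(T1, s, c(1, 4, 32)), 1))
Even with a modest serial fraction like this, going from 4 to 32 cores gives nowhere near an 8x reduction in runtime.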

Related

How do I time my script in R? [duplicate]

This question already has answers here:
Measuring function execution time in R
(15 answers)
Closed 8 years ago.
I guess I have a simple and straightforward question.
I am running a script, and for each function I want to time its runtime. I suppose there is a function for timing my functions. Can anybody help me here?
I have been searching, but I keep finding functions for time series and time intervals, which is not what I am looking for.
As others mentioned in the comments, the simplest way is with system.time. Here is some example code from the system.time manual page:
require(stats)
system.time(for(i in 1:100) mad(runif(1000)))
## Not run:
exT <- function(n = 10000) {
  # Purpose: Test if system.time works ok; n: loop size
  system.time(for (i in 1:n) x <- mean(rt(1000, df = 4)))
}
#-- Try to interrupt one of the following (using Ctrl-C / Escape):
exT()              #- about 4 secs on a 2.5GHz Xeon
system.time(exT()) #~ +/- same
On my machine, once the function exT() is called, this is my output:
user system elapsed
2.916 0.004 2.925
And for the function system.time(exT()) I get the following output:
user system elapsed
3.004 0.016 3.026
This means that for the first case the elapsed time is 2.925 seconds and 3.026 for the second.
However, if you want to perform benchmark tests, you should use the package rbenchmark (go here). This is a library which consists of one function:
The library consists of just one function, benchmark, which is a
simple wrapper around system.time.
On the link I've provided, you can see more examples of how to use this package. There are 4 examples there, which are pretty good.
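For completeness, a minimal sketch of what a call to benchmark() might look like (the two expressions compared and the column selection are placeholders of my own):
library(rbenchmark)

# Compare two ways of computing the same result; `replications` controls how
# many times each expression is evaluated, and `relative` reports times
# relative to the fastest one.
benchmark(loop   = {x <- numeric(1000); for (i in 1:1000) x[i] <- sqrt(i)},
          vector = sqrt(1:1000),
          replications = 100,
          columns = c("test", "replications", "elapsed", "relative"))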

R probability simulation that won't terminate?

I'm teaching a statistics class where I'm having students explore questions in probability and statistics through simulation using R. Recently there was some confusion about the probability of getting exactly two 6's when rolling 5 dice. The answer is choose(5,2)*5^3/6^5, but some students were convinced that "order shouldn't matter"; i.e. that the answer should be choose(5,2)*choose(25,3)/choose(30,5).
I thought it would be fun to have them simulate rolling 5 dice thousands of times, keeping track of the empirical probability for each experiment, and then repeat the experiment many times. The problem is that the two numbers above are sufficiently close that it's quite hard to get a simulation to tease out the difference in a statistically significant fashion (of course I could just be doing it wrong).
I tried rolling 5 dice 100000 times, then repeating the experiment 10000 times. This took an hour or so to run on my i7 Linux machine and still allowed for a 25% chance that the correct answer is choose(5,2)*choose(25,3)/choose(30,5). So I increased the number of dice rolls per experiment to 10^6. Now the code has been running for over 2 days and shows no sign of finishing. I'm confused by this, as I only increased the number of operations by an order of magnitude, implying that the run time should be closer to 10 hours.
Second question: Is there a better way to do this? See code posted below:
probdist = rep(0, 10000)
for (j in 1:length(probdist)) {
  outcome = rep(0, 1000000)
  for (k in 1:1000000) {
    rolls = sample(1:6, 5, replace = T)
    if (length(rolls[rolls == 6]) == 2) outcome[k] = 1
  }
  probdist[j] = sum(outcome) / length(outcome)
}
A good rule of thumb is to never, ever write a for loop in R. Here's an alternative solution:
doSample <- function()
{
sum(sample(1:6,size=5,replace=TRUE)==6)==2
}
> system.time(samples <- replicate(n=10000,expr=doSample()))
user system elapsed
0.06 0.00 0.06
> mean(samples)
[1] 0.1588
> choose(5,2)*5^3/6^5
[1] 0.160751
Doesn't seem to be too accurate with 10,000 samples. Better with 100,000:
> system.time(samples <- replicate(n=100000,expr=doSample()))
user system elapsed
0.61 0.02 0.61
> mean(samples)
[1] 0.16135
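As a quick sanity check (my addition, not part of the answer above): the analytic value is just a binomial probability, so it can also be obtained directly with dbinom():
# P(exactly two 6s in five independent rolls) as a binomial probability;
# this agrees with choose(5,2)*5^3/6^5.
dbinom(2, size = 5, prob = 1/6)
## [1] 0.160751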
I had originally awarded a correct answer check to M. Berk for his/her suggestion to use the R replicate() function. Further investigation has forced me to rescind my previous endorsement. It turns out that replicate() is just a wrapper for sapply(), which doesn't actually offer any performance benefit over a for loop (this seems to be a common misconception). In any case, I prepared three versions of the simulation, two using a for loop and one using replicate() as suggested, and ran them one after the other, starting from a fresh R session each time, in order to compare the execution times:
# dice26dist1.r: for() loop version with unnecessary array allocation
probdist = rep(0, 100)
for (j in 1:length(probdist)) {
  outcome = rep(0, 1000000)
  for (k in 1:1000000) {
    rolls = sample(1:6, 5, replace = T)
    if (length(rolls[rolls == 6]) == 2) outcome[k] = 1
  }
  probdist[j] = sum(outcome) / length(outcome)
}
system.time(source('dice26dist1.r'))
user system elapsed
596.365 0.240 598.614
# dice26dist2.r: for() loop version
probdist = rep(0, 100)
for (j in 1:length(probdist)) {
  outcomes = 0
  for (k in 1:1000000) {
    rolls = sample(1:6, 5, replace = T)
    if (length(rolls[rolls == 6]) == 2) outcomes = outcomes + 1
  }
  probdist[j] = outcomes / 1000000
}
system.time(source('dice26dist2.r'))
user system elapsed
506.331 0.076 508.104
# dice26dist3.r: replicate() version
doSample <- function() {
  sum(sample(1:6, size = 5, replace = TRUE) == 6) == 2
}
probdist = rep(0, 100)
for (j in 1:length(probdist)) {
  samples = replicate(n = 1000000, expr = doSample())
  probdist[j] = mean(samples)
}
system.time(source('dice26dist3.r'))
user system elapsed
804.042 0.472 807.250
From this you can see that the replicate() version is considerably slower than either of the for-loop versions by any system.time metric. I had originally thought that my problem was mostly due to cache misses from allocating the million-element outcome[] array, but comparing the times of dice26dist1.r and dice26dist2.r indicates that this has only a nominal impact on performance (although the impact on system time is considerable: a >300% difference).
One might argue that I'm still using for loops in all three simulations, but as far as I can tell this is completely unavoidable when simulating a random process; I have to simulate actually going through the random process (in this case, rolling 5 dice) every time. I would love to know about any technique that would allow me to avoid using a for loop (in a way that improves performance, of course). I understand that this problem would lend itself very effectively to parallelization, but I'm talking about using a single R session -- is there a way to make this faster?
Vectorization is almost always preferred to any for loop. In this case, you should see substantial speedup by generating all your dice throws first, then checking how many in each group of five equal 6.
set.seed(5)
N <- 1e6
foo <- matrix(sample(1:6, 5*N, replace=TRUE), ncol=5)
p <- mean(rowSums(foo==6)==2)
se <- sqrt(p*(1-p)/N)
p
## [1] 0.160382
Here's a 95% confidence interval:
p + se*qnorm(0.975)*c(-1,1)
## [1] 0.1596628 0.1611012
We can see that the true answer (ans1) is in the interval, but the false answer (ans2) is not, or we could perform significance tests; the p-value when testing the true answer is 0.31 but for the false answer is 0.0057.
(ans1 <- choose(5,2)*5^3/6^5)
## [1] 0.160751
pnorm(abs((ans1-p)/se), lower=FALSE)*2
## [1] 0.3145898
ans2 <- choose(5,2)*choose(25,3)/choose(30,5)
## [1] 0.1613967
pnorm(abs((ans2-p)/se), lower=FALSE)*2
## [1] 0.005689008
Note that I'm generating all the dice throws at once; if memory is an issue, you could split this up into pieces and combine, as you did in your original post. This is possibly what caused your unexpected increase in run time: if it was necessary to use swap memory, this would slow things down substantially. If so, it is better to increase the number of times you run the loop, not the number of rolls within the loop.
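A minimal sketch of that chunking idea (the chunk size and variable names here are my own):
# Run the vectorized simulation in pieces so that only `chunk` * 5 draws
# are held in memory at any one time, then pool the counts at the end.
set.seed(5)
N     <- 1e6   # total number of experiments
chunk <- 1e5   # experiments per piece (tune to available memory)
hits  <- 0
for (start in seq(1, N, by = chunk)) {
  rolls <- matrix(sample(1:6, chunk * 5, replace = TRUE), ncol = 5)
  hits  <- hits + sum(rowSums(rolls == 6) == 2)
}
hits / N  # estimated probability of exactly two 6s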

Is it possible to reduce computing time by a factor higher than your number of cores using R package multicore?

I had an R routine that spent most of its time in an lapply call of the form:
lapply(X, FUN, ...)
where X is a list with 400 elements. The total time of execution was 11.88 sec.
Then I decided to use the multicore package and made the following change to my routine:
mclapply(X, FUN, ...)
After that, I was surprised to see that the computing time dropped to 0.66 sec, i.e. only about 5% of the original time. This surprised me because I was expecting something around 25% of the original time, since the processor in my laptop is an
Intel® Core™ i5 CPU M 560 @ 2.67GHz × 4
Can someone explain to me where this extra reduction in time comes from? Is it that each core can itself parallelize computations?
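For anyone who wants to reproduce this kind of lapply vs. mclapply comparison, a minimal sketch might look like the following (the toy FUN is mine and only a stand-in for the real workload; the multicore functionality now lives in the base parallel package, and mc.cores > 1 is not supported on Windows):
library(parallel)

X   <- as.list(1:400)                        # 400 elements, as in the question
FUN <- function(x) { Sys.sleep(0.01); x^2 }  # placeholder for the real workload

system.time(res_serial   <- lapply(X, FUN))
system.time(res_parallel <- mclapply(X, FUN, mc.cores = 4))

identical(res_serial, res_parallel)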

Snowfall's sfApply and sfClusterApplyLB is slower than normal loop or sapply [duplicate]

This question already has answers here:
Why is the parallel package slower than just using apply?
(3 answers)
Closed 9 years ago.
When I run this code in R, the loop and sapply are faster than snowfall's functions. What am I doing wrong? (I am using Windows 8.)
library(snowfall)
library(rbenchmark)  # provides benchmark()

a <- 2
sfInit(parallel = TRUE, cpus = 4)
wrapper <- function(x) { ((x * a)^2) / 3 }
sfExport('a')
values <- seq(0, 100, 1)
benchmark(for (i in 1:length(values)) { wrapper(i) },
          sapply(values, wrapper),
          sfLapply(values, wrapper),
          sfClusterApplyLB(values, wrapper))
sfStop()
Elapsed times after 100 replications:
loop              0.05
sapply            0.07
sfClusterApplyLB  2.94
sfLapply          0.26
If the function that is sent to each of the worker nodes takes only a small amount of time, the overhead of parallelization causes the overall task to take longer than running the job serially. When the jobs that are sent to the worker nodes take a significant amount of time (at least several seconds), then parallelization will really show improved performance.
See also:
Why is the parallel package slower than just using apply?
Searching for [r] parallel will yield at least 20 questions like yours, including more details as to what you can do to solve the problem.
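A minimal sketch of that effect (the 50 ms sleep is an artificial stand-in of mine for a genuinely expensive task; with work this heavy per element, the parallel run should come out clearly ahead of the serial one):
library(snowfall)

slow_wrapper <- function(x) { Sys.sleep(0.05); ((x * 2)^2) / 3 }  # ~50 ms per call
values <- seq(0, 100, 1)

# Serial baseline
system.time(r1 <- sapply(values, slow_wrapper))

# Parallel version: the per-task cost is now large enough to amortise
# the communication overhead.
sfInit(parallel = TRUE, cpus = 4)
system.time(r2 <- sfLapply(values, slow_wrapper))
sfStop()

all.equal(r1, unlist(r2))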

Multicore and memory usage in R under Ubuntu

I am running R on an Ubuntu workstation with 8 virtual cores and 8 GB of RAM. I was hoping to routinely use the multicore package to make use of the 8 cores in parallel; however, I find that the whole R process gets duplicated 8 times.
As R actually seems to use much more memory than is reported by gc (by a factor of 5, even after gc()), this means that even relatively mild memory usage (one 200 MB object) becomes intractably memory-heavy once duplicated 8 times.
I looked into bigmemory to have the child processes share the same memory space; but it would require some major rewriting of my code as it doesn't deal with dataframes.
Is there a way to make R as lean as possible before forking, i.e. have the OS reclaim as much memory as possible?
EDIT:
I think I understand what is going on now. The problem is not where I thought it was -- objects that exist in the parent thread and are not manipulated do not get duplicated eight times. Instead my problem, I believe, came from the nature of the manipulation I am making each child process perform. Each has to manipulate a big factor with hundreds of thousands of levels, and I think this is the memory-heavy bit. As a result, it is indeed the case that the overall memory load is proportional to the number of cores; but not as dramatically as I thought.
Another lesson I learned is that with 4 physical cores plus hyperthreading, hyperthreading is typically not a good idea for R: the gain is minimal, and the memory cost may be non-trivial. So I'll be working on 4 cores from now on.
For those who would like to experiment, this is the type of code I was running:
# Create data
sampdata <- data.frame(id = 1:1000000)
for (letter in letters) {
  sampdata[, letter] <- rnorm(1000000)
}
sampdata$groupid = ceiling(sampdata$id / 2)
# Enable multicore
library(multicore)
options(cores = 4)  # number of cores to distribute the job to
# Actual job
system.time(do.call("cbind",
  mclapply(subset(sampdata, select = c(a:z)),
           function(x) tapply(x, sampdata$groupid, sum))
))
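As a side note on the hyperthreading point in the edit above: with the base parallel package (which has since absorbed multicore) you can ask for physical rather than logical cores, although whether the distinction is honoured depends on the platform:
library(parallel)

detectCores()                  # logical cores (includes hyperthreads)
detectCores(logical = FALSE)   # physical cores, where the platform reports them

# e.g. restrict mclapply() to the physical cores:
options(mc.cores = detectCores(logical = FALSE))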
Have you tried data.table?
> system.time(ans1 <- do.call("cbind",
lapply(subset(sampdata,select=c(a:z)),function(x)tapply(x,sampdata$groupid,sum))
))
user system elapsed
906.157 13.965 928.645
> require(data.table)
> DT = as.data.table(sampdata)
> setkey(DT,groupid)
> system.time(ans2 <- DT[,lapply(.SD,sum),by=groupid])
user system elapsed
186.920 1.056 191.582 # 4.8 times faster
> # massage minor diffs in results...
> ans2$groupid=NULL
> ans2=as.matrix(ans2)
> colnames(ans2)=letters
> rownames(ans1)=NULL
> identical(ans1,ans2)
[1] TRUE
Your example is very interesting. It is reasonably large (200MB), there are many groups (1/2 million), and each group is very small (2 rows). The 191s can probably be improved by quite a lot, but at least it's a start. [March 2011]
And now this idiom (i.e. lapply(.SD, ...)) has been improved a lot. With v1.8.2, on a faster computer than in the test above, and with the latest version of R, etc., here is the updated comparison:
sampdata <- data.frame(id = 1:1000000)
for (letter in letters) sampdata[, letter] <- rnorm(1000000)
sampdata$groupid = ceiling(sampdata$id/2)
dim(sampdata)
# [1] 1000000 28
system.time(ans1 <- do.call("cbind",
lapply(subset(sampdata,select=c(a:z)),function(x)
tapply(x,sampdata$groupid,sum))
))
# user system elapsed
# 224.57 3.62 228.54
DT = as.data.table(sampdata)
setkey(DT,groupid)
system.time(ans2 <- DT[,lapply(.SD,sum),by=groupid])
# user system elapsed
# 11.23 0.01 11.24 # 20 times faster
# massage minor diffs in results...
ans2[,groupid:=NULL]
ans2[,id:=NULL]
ans2=as.matrix(ans2)
rownames(ans1)=NULL
identical(ans1,ans2)
# [1] TRUE
sessionInfo()
R version 2.15.1 (2012-06-22)
Platform: x86_64-pc-mingw32/x64 (64-bit)
locale:
[1] LC_COLLATE=English_United Kingdom.1252 LC_CTYPE=English_United Kingdom.1252
[3] LC_MONETARY=English_United Kingdom.1252 LC_NUMERIC=C
[5] LC_TIME=English_United Kingdom.1252
attached base packages:
[1] stats graphics grDevices datasets utils methods base
other attached packages:
[1] data.table_1.8.2 RODBC_1.3-6
Things I've tried on Ubuntu 64-bit R, ranked in order of success:
1. Work with fewer cores, as you are doing.
2. Split the mclapply jobs into pieces, and save the partial results to a database using DBI with append=TRUE (sketched below).
3. Use the rm function along with gc() often.
I have tried all of these, and mclapply still begins to create larger and larger processes as it runs, leading me to suspect each process is holding onto some sort of residual memory it really doesn't need.
P.S. I was using data.table, and it seems each child process copies the data.table.
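A minimal sketch of the split-into-pieces idea from the list above (the chunking of columns and the in-memory collection of results are my own choices; the original suggestion appended the partial results to a database via DBI instead):
library(parallel)

# Smaller stand-in for the sampdata example from the question.
sampdata <- data.frame(id = 1:100000)
for (letter in letters) sampdata[[letter]] <- rnorm(100000)
sampdata$groupid <- ceiling(sampdata$id / 2)

col_chunks <- split(letters, ceiling(seq_along(letters) / 7))  # 4 pieces of work

results <- list()
for (i in seq_along(col_chunks)) {
  results[[i]] <- mclapply(sampdata[col_chunks[[i]]],
                           function(x) tapply(x, sampdata$groupid, sum),
                           mc.cores = 4)
  gc()  # give the OS a chance to reclaim memory between pieces
}
ans <- do.call("cbind", unlist(results, recursive = FALSE))
dim(ans)  # one row per groupid, one column per letter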
