R: Foreach Parallelized

I want to run a function 100 times. The function itself contains a for loop that runs 4000 times. I put my code on EC2 to run it on multiple cores, but I am not sure if I am doing it correctly because it doesn't reveal whether it's actually utilizing all the cores. Does the code below make sense?
# arbitrary function:
x = function() {
  y = c()
  for (i in 1:4000) {
    y = c(y, i)
  }
  return(y)
}
# helper function
loop.helper <- function(n.times) {
  results = list()
  for (i in 1:n.times) {
    results[[i]] = x()
  }
  return(results)
}
# parallel setup
require(foreach)
require(parallel)
require(doParallel)
cores = detectCores() # 32
cl <- makeCluster(cores) # create the workers
registerDoParallel(cl, cores = cores) # register the backend
This is my problem: I am not sure whether it should be this:
out <- foreach(i = 1:cores) %dopar% {
  loop.helper(n.times = 100)
}
or should it be this:
out <- foreach(i = 1:100) %dopar% {
  x()
}
Both of them work, but I am not sure whether the first one will distribute the tasks across the 32 cores I have, or whether the second foreach loop implementation does that automatically.
thanks

out <- foreach(i = 1:100) %dopar% {
  x()
}
Is the correct way to do it. The foreach package will automatically distribute the 100 tasks among the registered cores (32 cores, in your case).
If you read the package documentation and work through some of the examples, this should become quite clear.
EDIT:
To respond to @user1234440's comment:
Some considerations:
There is some overhead required to set up and manage the parallel tasks (e.g. dispatching the multiple jobs to run concurrently and then combining the results at the end). For trivial tasks or small jobs, running in parallel can sometimes take longer than a simple sequential loop, simply because setting up the parallel processes costs more time than it saves. However, for most tasks that require non-trivial computation, you will likely see a speed improvement.
Also, from what I have read, you will see diminishing returns as you use more cores (e.g. using 8 cores may not necessarily be 2x faster than using 4 cores, but may only be 1.5x faster). In addition, from my personal experience, using ALL the available cores on my system resulted in some performance degradation. I think this was because I was dedicating all of my system resources to the parallel job and it was slowing down my other system processes.
That being said, I have almost always experienced speed improvements when using the parallel processing power offered by the foreach function. For your example of running 100 jobs with 32 cores, 4 cores will receive 4 jobs, and the other 28 cores will receive 3 jobs. Now it will be as if 32 computers are running mini for loops, iterating through the 4 or 3 jobs that were distributed to each of the cores. After each loop is completed, the results are combined and returned to you.
Conversely, if completing the 100 tasks is faster with a simple for loop than with a parallel foreach loop, then repeating those 100 tasks 4000 times will also be faster in a regular for loop than in a parallelized foreach loop.
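If you want to check whether parallelization pays off for your particular x(), a rough timing comparison along these lines can help (a sketch; the 4-worker cluster is chosen arbitrarily for the test):
library(foreach)
library(doParallel)

cl <- makeCluster(4)    # worker count chosen arbitrarily for this check
registerDoParallel(cl)

seq.time <- system.time(for (i in 1:100) x())[3]              # plain sequential loop
par.time <- system.time(foreach(i = 1:100) %dopar% x())[3]    # same work via foreach
stopCluster(cl)

c(sequential = seq.time, parallel = par.time)
If the parallel number isn't clearly smaller, the overhead is eating the gains for tasks this small.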

Since you want to execute the function "x" 100 times, you can do that with:
out <- foreach(i = 1:100) %dopar% {
  x()
}
This correctly returns a list of 100 vectors. Your other solution is wrong because it will execute the function "x" cores * 100 times, returning a list of cores lists of 100 vectors.
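To see the difference concretely, compare the shapes of the two results (a quick sketch, assuming the cluster from your question is already registered; .export = 'x' is added so loop.helper can find x() on the workers):
out1 <- foreach(i = 1:cores, .export = 'x') %dopar% loop.helper(n.times = 100)
length(out1)   # 'cores' elements, each a list of 100 vectors, i.e. cores * 100 calls to x()

out2 <- foreach(i = 1:100) %dopar% x()
length(out2)   # 100 elements, one vector per call, i.e. 100 calls to x()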
You may be confused because it is common to write parallel loops that use one iteration for each core. For instance, you could also execute "x" 100 times like this:
out <- foreach(i = 1:cores, .combine = 'c') %dopar% {
  results <- vector('list', 25)   # 25 calls per worker; with 4 workers that gives 4 * 25 = 100 calls
  for (j in 1:25) {
    results[[j]] <- x()
  }
  results
}
This also returns a list of 100 vectors (assuming four workers, so that 4 * 25 = 100), and it will be somewhat more efficient. This technique is called "task chunking", and it can give significantly better performance when the individual tasks are short. Your helper-based solution is almost like this, except that the helper function should execute fewer iterations and the resulting lists should be combined, which I do by using c as the combine function.
It's important to realize that you can't control the number of cores that are used via the iteration variable in the foreach loop: that is controlled via the registerDoParallel function. But most parallel backends, including doParallel, will simply map "cores" tasks onto "cores" workers. It's also important to realize that you don't truly control the number of CPU cores that will be used by those worker processes. You control the number of processes that will be created to execute tasks when you call makeCluster, but ultimately it is up to the operating system to schedule those processes on the cores of the CPU, so the "cores" argument is something of a misnomer.
Also note that for your example, you should call registerDoParallel as:
registerDoParallel(cl)
Since you specified a value for the cl argument, the cores argument is ignored, although the documentation doesn't make that clear.
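Putting those pieces together, a minimal end-to-end version of your workflow might look like this (the stopCluster() call is worth adding so the worker processes are shut down when you're done):
library(foreach)
library(doParallel)

cores <- detectCores()    # 32 on your EC2 instance
cl <- makeCluster(cores)
registerDoParallel(cl)    # no 'cores' argument needed when 'cl' is given

out <- foreach(i = 1:100) %dopar% {
  x()
}

stopCluster(cl)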

Related

Specify number of cores for R doParallel

I'm trying to specify a parallel process in R to use 3 out of 4 possible cores on my computer to leave a bit of CPU power for other processes while this runs in the background. My code looks something like this:
library(doParallel)
cl <- makePSOCKcluster(3)
registerDoParallel(cl)
results <- foreach(i = 1:10) %dopar% {
  # ...some processes to be parallelized...
}
stopCluster(cl)
When I run this and look in task manager, all cores are running at 100%. Is there a way to only use 3 cores, or is this not possible?
Thanks!
I'm sure this has been answered elsewhere, but ...
cl <- makePSOCKcluster(floor(detectCores() * 0.875))
OR
cl <- makePSOCKcluster(detectCores() - 1)
Either will work for this.
Check out the help page for detectCores(). One final warning: I once put detectCores() inside a loop thinking it was fast. It's not, so if you need the value more than a few times, assign it to a variable.
Finally, I very much favor parallelization using furrr (future_map, etc.) instead of foreach() %dopar% these days.
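For completeness, a rough furrr equivalent of the 3-worker setup in the question might look like this (a sketch; sqrt() just stands in for the real work):
library(future)
library(furrr)

plan(multisession, workers = 3)   # 3 background R sessions, leaving a core free
results <- future_map(1:10, function(i) {
  # ...some processes to be parallelized...
  sqrt(i)
})
plan(sequential)                  # shut the workers down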
Using mcaffinity before starting the parallel process, you can limit the number of cores.
parallel::mcaffinity(1:3)
This allows your R process to use only the first 3 cores (on platforms that support setting process affinity, such as Linux).
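A sketch of how that might be combined with a fork-based backend (an assumption on my part; it only applies on platforms that support process affinity, essentially Linux):
library(parallel)
library(doParallel)              # also attaches foreach

parallel::mcaffinity(1:3)        # pin this R process to the first 3 cores
registerDoParallel(cores = 3)    # forked workers inherit the affinity mask
results <- foreach(i = 1:10) %dopar% {
  sqrt(i)                        # stand-in for the real work
}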

load-balancing in R foreach loops

Is there a way to modify how an R foreach loop does load balancing with the doParallel backend? When parallelizing tasks that have very different execution times, it can happen that all nodes but one have finished their tasks while the last one still has several tasks to do. Here is a toy example:
library(foreach)
library(doParallel)
library(iterators) # provides iter()
registerDoParallel(4)
waittime = c(10, 1, 1, 1, 10, 1, 1, 1, 10, 1, 1, 1, 10, 1, 1, 1)
w = iter(waittime)
foreach(i = w) %dopar% {
  message(paste("waiting", i, "on", Sys.getpid()))
  Sys.sleep(i)
}
Basically, the code registers 4 cores. For each iteration i, the task is to wait waittime[i] seconds. However, the default load balancing in the foreach loop seems to pre-assign the tasks to the registered cores, so in the example above the first core receives all the tasks with waittime = 10, while the 3 others receive only tasks with waittime = 1; those 3 cores will have finished all of their tasks before the first one has finished its first.
Is there a way to make foreach() distribute the tasks one at a time? I.e. in the above case, I'd like the first 4 tasks to be distributed among the 4 cores, and then each subsequent task to be sent to the next available core.
Thanks.
I haven't tested it myself, but the doParallel backend provides a preschedule option akin to the mc.preschedule argument in mclapply(). (See section 7 of the doParallel vignette.)
You might try:
mcoptions <- list(preschedule = FALSE)
foreach(i = w, .options.multicore = mcoptions) %dopar% {
  message(paste("waiting", i, "on", Sys.getpid()))
  Sys.sleep(i)
}
Apologies for posting as an answer but I have insufficient rep to comment. Is it possible that you could rewrite your code to make use of parLapplyLB or parSapplyLB?
parLapplyLB, parSapplyLB are load-balancing versions, intended for use when applying FUN to different elements of X takes quite variable amounts of time, and either the function is deterministic or reproducible results are not required.
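For the toy example in the question, that might look like the following sketch; work is handed out to workers as they become free rather than pre-assigned:
library(parallel)

cl <- makeCluster(4)
waittime <- c(10, 1, 1, 1, 10, 1, 1, 1, 10, 1, 1, 1, 10, 1, 1, 1)
res <- parLapplyLB(cl, waittime, function(t) {
  Sys.sleep(t)    # stand-in for a task of variable duration
  Sys.getpid()    # report which worker handled it
})
stopCluster(cl)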

Parallel computing taking same or more time

I am trying to set up parallel computing in R for a large simulation, but I noticed that there is no improvement in time.
I tried a simple example:
library(foreach)
library(doParallel)
stime <- system.time(for (i in 1:10000) rnorm(10000))[3]
print(stime)
# elapsed
#  10.823
cl <- makeCluster(2)
registerDoParallel(cores = 2)
stime <- system.time(ls <- foreach(s = 1:10000) %dopar% rnorm(10000))[3]
stopCluster(cl)
print(stime)
# elapsed
#  29.526
The elapsed time is more than twice what it was in the original case without parallel computing.
Obviously I am doing something wrong but I cannot figure out what it is.
Performing many tiny tasks in parallel can be very inefficient. The standard solution is to use chunking:
ls <- foreach(s = 1:2) %dopar% {
  for (i in 1:5000) rnorm(10000)
}
Instead of executing 10,000 tiny tasks in parallel, this loop executes two larger tasks, and runs almost twice as fast as the sequential version on my Linux machine.
Also note that your "foreach" example is actually sending a lot of data from the workers to the master. My "foreach" example throws that data away just like your sequential example, so I think it's a better comparison.
If you need to return a large amount of data then a fair comparison would be:
ls <- lapply(rep(10000, 10000), rnorm)
versus:
ls <- foreach(s = 1:2, .combine = 'c') %dopar% {
  lapply(rep(10000, 5000), rnorm)
}
On my Linux machine the times are 8.6 seconds versus 7.0 seconds. That's not impressive due to the large communication to computation ratio, but it would have been much worse if I hadn't used chunking.
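If you'd rather not hard-code the number of chunks, one option (a sketch, using idiv() from the itertools package) is to split the iteration count across however many workers are registered:
library(foreach)
library(doParallel)
library(itertools)

cl <- makeCluster(2)
registerDoParallel(cl)

# idiv() yields one chunk size per worker; each task then generates its share of the vectors
ls <- foreach(n = idiv(10000, chunks = getDoParWorkers()), .combine = 'c') %dopar% {
  lapply(rep(10000, n), rnorm)
}

stopCluster(cl)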

Foreach & SNOW do not work on Windows

I want to use a foreach loop on a Windows machine to make use of multiple cores in a CPU-heavy computation. However, I cannot get the processes to do any work.
Here is a minimal example of what I think should work, but doesn't:
library(snow)
library(doSNOW)
library(foreach)
cl <- makeSOCKcluster(4)
registerDoSNOW(cl)
pois <- rpois(1e6, 1500) # 1e6 draws from a Poisson distribution with mean 1500
x <- foreach(i = 1:1e6) %dopar% {
  runif(pois[i]) # draw from the uniform distribution pois[i] times
}
stopCluster(cl)
SNOW does create the 4 "slave" processes, but they don't appear to do any work.
I hope this isn't a duplicate, but I cannot find anything with the search terms I can come up with.
It's probably working (at least it does on my Mac). However, one call to runif takes such a small amount of time that almost all of the time is spent on overhead, and the child processes spend a negligible amount of CPU on the actual tasks.
x <- foreach(i = 1:20) %dopar% {
  system.time(runif(pois[i]))
}
x[[1]]
#    user  system elapsed
#       0       0       0
Parallelization makes sense if you have some heavy computations that cannot be optimized further. That's not the case in your example. You don't need 1e6 calls to runif; one would be sufficient (e.g., runif(sum(pois)) and then split the result).
PS: Always test with a smaller example.
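To make that concrete, a sketch of the single-call approach on a scaled-down example (it reproduces the structure of the result, not the same random numbers):
pois <- rpois(1e4, 1500)                    # smaller than the question's 1e6, per the PS above
u <- runif(sum(pois))                       # one call to runif instead of length(pois) calls
x <- split(u, rep(seq_along(pois), pois))   # a list of length(pois) vectors with pois[i] values each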
Although this particular example isn't worth executing in parallel, it's worth noting that since it uses doSNOW, the entire pois vector is auto-exported to all of the workers even though each worker only needs a fraction of it. However, you can avoid auto-exporting any data to the workers by iterating over pois itself:
x <- foreach(p = pois) %dopar% {
  runif(p)
}
Now the elements of pois are sent to the workers in the tasks, so each worker only receives the data that's actually needed to perform its tasks. This technique isn't important when using doMC, since the doMC workers get pois for free.
You can also often improve performance enormously by processing pois in larger chunks using an iterator function such as "isplitVector" from the itertools package.
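For instance, a chunked version of this example might look like the following sketch (4 chunks chosen arbitrarily):
library(snow)
library(doSNOW)
library(foreach)
library(itertools)

cl <- makeSOCKcluster(4)
registerDoSNOW(cl)

pois <- rpois(1e6, 1500)
x <- foreach(p = isplitVector(pois, chunks = 4), .combine = 'c') %dopar% {
  lapply(p, runif)    # each task gets one chunk of pois and still returns one vector per element
}

stopCluster(cl)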

How to avoid duplicating objects with foreach

I have a very large string vector and would like to do a parallel computation using the foreach and doSNOW packages. I noticed that foreach makes copies of the vector for each process, which exhausts system memory quickly. I tried to break the vector into smaller pieces in a list object, but still do not see any reduction in memory usage. Does anyone have thoughts on this? Below is some demo code:
library(foreach)
library(doSNOW)
library(snow)
x <- rep('some string', 200000000)
# split x into smaller pieces in a list object
splits <- getsplits(x, mode = 'bysize', size = 1000000)
tt <- vector('list', length(splits$start))
for (i in 1:length(tt)) tt[[i]] <- x[splits$start[i]:splits$end[i]]
ret <- foreach(i = 1:length(splits$start), .export = c('somefun'), .combine = c) %dopar% somefun(tt[[i]])
The style of iterating that you're using generally works well with the doMC backend because the workers can effectively share tt by the magic of fork. But with doSNOW, tt will be auto-exported to the workers, using lots of memory even though they only actually need a fraction of it. The suggestion made by @Beasterfield to iterate directly over tt resolves that issue, but it's possible to be even more memory efficient through the use of iterators and an appropriate parallel backend.
In cases like this, I use the isplitVector function from the itertools package. It splits a vector into a sequence of sub-vectors, allowing them to be processed in parallel without losing the benefits of vectorization. Unfortunately, with doSNOW, it will put these sub-vectors into a list in order to call the clusterApplyLB function in snow since clusterApplyLB doesn't support iterators. However, the doMPI and doRedis backends will not do that. They will send the sub-vectors to the workers right from the iterator, using almost half as much memory.
Here's a complete example using doMPI:
suppressMessages(library(doMPI))
library(itertools)
cl <- startMPIcluster()
registerDoMPI(cl)
n <- 20000000
chunkSize <- 1000000
x <- rep('some string', n)
somefun <- function(s) toupper(s)
ret <- foreach(s = isplitVector(x, chunkSize = chunkSize), .combine = 'c') %dopar% {
  somefun(s)
}
print(length(ret))
closeCluster(cl)
mpi.quit()
When I run this on my MacBook Pro with 4 GB of memory
$ time mpirun -n 5 R --slave -f split.R
it takes about 16 seconds.
You have to be careful with the number of workers that you create on the same machine, although decreasing the value of chunkSize may allow you to start more.
You can decrease your memory usage even more if you're able to use an iterator that doesn't require all of the strings to be in memory at the same time. For example, if the strings are in a file named 'strings.txt', you can use s=ireadLines('strings.txt', n=chunkSize).
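A sketch of that variant, assuming one string per line in 'strings.txt' (somefun, chunkSize, and the registered doMPI backend as above):
library(iterators)   # provides ireadLines()

ret <- foreach(s = ireadLines('strings.txt', n = chunkSize), .combine = 'c') %dopar% {
  somefun(s)
}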
