Strategic considerations for nested parallel computing in R with foreach

I am using the foreach-package in R to run some code in parallel.
Technically it works for me, but the computational improvement is fairly moderate, to be honest.
Since I have limited experience with parallel computing and only found fairly outdated (or questionable) articles online, I hoped I could pick your brains about strategies for improving efficiency.
The situation is as follows: I have one outer loop (several thousand iterations, therefore fewer cores than iterations) and multiple smaller inner loops (each with fewer iterations than available cores). The computationally demanding parts occur within the smaller inner loops.
So far I have been running the outer loop with foreach and %dopar%, and the inner ones with standard for loops.
My reasoning was that if I ran the inner loops in parallel instead, most cores would remain unused, since the loops themselves have fewer iterations than there are cores available.
Please see some pseudo-code for illustration:
foreach(it = 1:10000, .inorder = FALSE) %dopar% {
  # some simple computations
  for (i in 1:10) {
    # some demanding computation
  }
  # some more simple computations
  for (i in 1:10) {
    # and some more demanding computation
  }
}
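(For completeness, here is a minimal sketch of the backend registration this pseudo-code assumes; doParallel and the worker count of 8 are assumptions based on my current machine.)
library(doParallel)
# minimal sketch of the backend setup assumed by the pseudo-code above;
# doParallel and 8 workers (the eight-core development machine) are assumptions
cl <- makeCluster(8)
registerDoParallel(cl)
# ... foreach(it = 1:10000, .inorder = FALSE) %dopar% { ... } as above ...
stopCluster(cl)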
My question is: based on your experience, is it more efficient to run the inner or the outer loops in parallel (or both), and is my reasoning correct that cores would go unused if I parallelized the inner loops instead?
Just FYI, I am currently developing on an eight-core machine with 16 GB RAM but will ultimately move to a cloud machine with 16 cores and 50 GB RAM.
I hope you can share your experience, since I believe this is a decision other people will face occasionally as well.
Best,
Oliver

Related

foreach doparallel on GPU

I have this code for writing my results in parallel. I am using the foreach and doParallel libraries in R.
output_location <- '/home/Desktop/pp/'
library(foreach)
library(doParallel)
library(data.table)

no_cores <- detectCores()
registerDoParallel(makeCluster(no_cores))

a <- Sys.time()
foreach(i = 1:100, .packages = c('foreach', 'doParallel'),
        .options.multicore = mcoptions) %dopar% {      # mcoptions is assumed to be defined earlier
  result <- my_functon(arg1, arg2)                      # my_functon, arg1, arg2 are defined elsewhere
  write(result, file = paste0(output_location, "out", i, ".csv"))
  gc()
}
With this code it uses 4 cores of the CPU, so the writing takes very little time. But I want foreach/doParallel to use the GPU. Is there any method for running the foreach/doParallel loop on the GPU? gputools and gpuR are some GPU-supporting R packages, but they are mainly for mathematical computations like gpuMatMult() and gpuMatrix(). I am looking for a way to run the loop itself on the GPU. Any help or guidance will be great.
Parallelization with foreach or similar tools works because you have multiple CPUs (or a CPU with multiple cores), which can process multiple tasks at once. A GPU also has multiple cores, but these are already used to process a single task in parallel. So if you want to parallelize further, you will need multiple GPUs.
However, keep in mind that GPUs are faster than CPUs only for certain types of applications, matrix operations on large matrices being a prime example. See the performance section here for a recent comparison of one particular case. So it might make sense to consider whether the GPU is the right tool for your problem.
In addition, file I/O will always go via the CPU.
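To make that concrete, the GPU packages mentioned in the question target workloads like the following; this is only a minimal sketch with gpuR, where the matrix size and a working OpenCL device are assumptions:
library(gpuR)
# minimal sketch: a large matrix multiply offloaded to the GPU via gpuR
# (the 2000 x 2000 size and an available OpenCL device are assumptions)
n <- 2000
A <- gpuMatrix(matrix(rnorm(n * n), n, n))
B <- gpuMatrix(matrix(rnorm(n * n), n, n))
C <- A %*% B  # the multiplication runs on the GPU device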

When (if ever) should I tell R parallel to not use all cores?

I've been using this code:
library(parallel)
cl <- makeCluster( detectCores() - 1)
clusterCall(cl, function(){library(imager)})
then I have a wrapper function looking something like this:
d <- matrix  # loading a batch of data into a matrix
res <- parApply(cl, d, 1, FUN, ...)
# upload `res` somewhere
I tested on my notebook, which has 8 logical cores (4 physical cores with hyperthreading). When I ran it on a 50,000-row, 800-column matrix, it took 177.5 s to complete, and for most of that time the 7 worker cores were kept at near 100% (according to top); it then sat there for the last 15 or so seconds, which I guess was combining results. According to system.time(), user time was 14 s, so that matches.
Now I'm running on EC2, a 36-core c4.8xlarge, and I'm seeing it spending almost all of its time with just one core at 100%. More precisely: There is an approx 10-20 secs burst where all cores are being used, then about 90 secs of just one core at 100% (being used by R), then about 45 secs of other stuff (where I save results and load the next batch of data). I'm doing batches of 40,000 rows, 800 columns.
The long-term load average, according to top, is hovering around 5.00.
Does this seem reasonable? Or is there a point where R parallelism spends more time with communication overhead, and I should be limiting to e.g. 16 cores. Any rules of thumb here?
For reference, the system I'm using: "Linux 4.4.5-15.26.amzn1.x86_64 (amd64)", R version 3.2.2 (2015-08-14).
UPDATE: I tried with 16 cores. For the smallest data, run-time increased from 13.9s to 18.3s. For the medium-sized data:
With 16 cores:
user system elapsed
30.424 0.580 60.034
With 35 cores:
user system elapsed
30.220 0.604 54.395
I.e. the overhead part took the same amount of time, but the parallel bit had fewer cores so took longer, and so it took longer overall.
I also tried using mclapply(), as suggested in the comments. It did appear to be a bit quicker (something like 330s vs. 360s on the particular test data I tried it on), but that was on my notebook, where other processes, or over-heating, could affect the results. So, I'm not drawing any conclusions on that yet.
There are no useful rules of thumb — the number of cores that a parallel task is optimal for is entirely determined by said task. For a more general discussion see Gustafson’s law.
The high single-core portion that you’re seeing in your code probably comes from the end phase of the algorithm (the “join” phase), where the parallel results are collated into a single data structure. Since this far surpasses the parallel computation phase, this may indeed be an indication that fewer cores could be beneficial.
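If you want to find the sweet spot empirically rather than by rule of thumb, a small benchmark over a few cluster sizes on one representative batch is usually the quickest way; this is just a sketch reusing d and FUN from your own code:
library(parallel)
# hypothetical benchmark: run the same batch with different cluster sizes and compare
# elapsed times; d and FUN are the matrix and scoring function from the question
for (n in c(4, 8, 16, 35)) {
  cl <- makeCluster(n)
  clusterCall(cl, function() library(imager))
  t <- system.time(res <- parApply(cl, d, 1, FUN))
  stopCluster(cl)
  cat(n, "cores:", t["elapsed"], "seconds\n")
}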
I'd add that, in case you are not aware of this wonderful resource for parallel computing in R, you may find Norman Matloff's recent book Parallel Computing for Data Science: With Examples in R, C++ and CUDA very helpful. I'd highly recommend it (I learnt a lot, not coming from a CS background).
The book answers your question in depth (Chapter 2 specifically) and gives a high-level overview of the causes of overhead that lead to bottlenecks in parallel programs.
Quoting section 2.1, which partially (if implicitly) answers your question:
There are two main performance issues in parallel programming:
Communications overhead: Typically data must be transferred back and forth between processes. This takes time, which can take quite a toll on performance. In addition, the processes can get in each other's way if they all try to access the same data at once. They can collide when trying to access the same communications channel, the same memory module, and so on. This is another sap on speed. The term granularity is used to refer, roughly, to the ratio of computation to overhead. Large-grained or coarse-grained algorithms involve large enough chunks of computation that the overhead isn't much of a problem. In fine-grained algorithms, we really need to avoid overhead as much as possible.
^ When overhead is high, fewer cores for the problem at hand can give a shorter total computation time.
Load balance: As noted in the last chapter, if we are not careful in the way in which we assign work to processes, we risk assigning much more work to some than to others. This compromises performance, as it leaves some processes unproductive at the end of the run, while there is still work to be done.
When, if ever, should you not use all cores? One example from my personal experience: I run daily cron jobs in R on data that amounts to 100-200 GB in RAM, where multiple cores are used to crunch blocks of data. I've indeed found that running with, say, 6 out of 32 available cores can be faster than using 20-30 of them. A major reason was the memory requirement of the child processes: after a certain number of child processes were running, memory usage got high and things slowed down considerably.
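As a hedged illustration of that last point (data_blocks and crunch_block are hypothetical names), capping the worker count is a one-argument change:
library(parallel)
# hypothetical sketch: cap the number of forked workers instead of using all 32 cores,
# so that the per-worker memory footprint stays manageable
results <- mclapply(data_blocks, crunch_block, mc.cores = 6)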

Parallel R with foreach and mclapply at the same time

I am implementing a parallel processing system which will eventually be deployed on a cluster, but I'm having trouble working out how the various methods of parallel processing interact.
I need to use a for loop to run a big block of code which contains several operations on large lists of matrices. To speed this up, I want to parallelise the for loop with foreach() and parallelise the list operations with mclapply().
example pseudocode:
library(doParallel)
library(parallel)

cl <- makeCluster(2)
registerDoParallel(cl)
outputs <- foreach(k = 1:2, .packages = "various packages") %dopar% {  # placeholder for the packages you need
  l_output1 <- mclapply(l_input1, FUN, mc.cores = 2)  # FUN stands in for the real function
  l_output2 <- mclapply(l_input2, FUN, mc.cores = 2)
  mapply(cbind, l_output1, l_output2, SIMPLIFY = FALSE)  # returned as the foreach result
}
This seems to work. My questions are:
1) is it a reasonable approach? They seem to work together on my small scale tests, but it feels a bit kludgy.
2) how many cores/processors will it use at any given time? When I upscale it to a cluster, I will need to understand how much I can push this (the foreach only loops 7 times, but the mclapply lists are up to 70 or so big matrices). It appears to create 6 "cores" as written (presumably 2 for the foreach, and 2 for each mclapply).
I think it's a very reasonable approach on a cluster because it allows you to use multiple nodes while still using the more efficient mclapply across the cores of the individual nodes. It also allows you to do some of the post-processing on the workers (calling cbind in this case) which can significantly improve performance.
On a single machine, your example will create a total of 10 additional processes: two by makeCluster which each call mclapply twice (2 + 2(2 + 2)). However, only four of them should use any significant CPU time at a time. You could reduce that to eight processes by restructuring the functions called by mclapply so that you only need to call mclapply once in the foreach loop, which may be more efficient.
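A hedged sketch of that restructuring (FUN again stands in for the real function): iterate once over the indices and do the cbind inside the forked worker:
outputs <- foreach(k = 1:2, .packages = "parallel") %dopar% {
  # one mclapply call per foreach iteration: each forked worker handles the matching
  # elements of l_input1 and l_input2 and cbinds them directly
  mclapply(seq_along(l_input1), function(j) {
    cbind(FUN(l_input1[[j]]), FUN(l_input2[[j]]))  # FUN stands in for the real function
  }, mc.cores = 2)
}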
On multiple machines, you will create the same number of processes, but only two processes per node will use much CPU time at a time. Since they are spread out across multiple machines it should scale well.
Be aware that mclapply may not play nicely if you use an MPI cluster. MPI doesn't like you to fork processes, as mclapply does. It may just issue some stern warnings, but I've also seen other problems, so I'd suggest using a PSOCK cluster which uses ssh to launch the workers on the remote nodes rather than using MPI.
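For example, a PSOCK cluster spanning two remote nodes could be set up like this (the host names are placeholders):
library(parallel)
library(doParallel)
# sketch: launch PSOCK workers over ssh on two remote nodes (host names are placeholders)
cl <- makePSOCKcluster(c("node1", "node2"))
registerDoParallel(cl)
# ... foreach(...) %dopar% { ... mclapply(...) ... } as above ...
stopCluster(cl)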
Update
It looks like there is a problem calling mclapply from cluster workers created by the "parallel" and "snow" packages. For more information, see my answer to a problem report.

mclapply cores spending lots of time in uninterruptible sleep

This is a somewhat generic question, for which I apologize, but I can't generate a code example that reproduces the behavior. My question is this: I'm scoring a largish data set (~11 million rows with 274 dimensions) by subdividing it into a list of data frames and then running a scoring function on 16 cores of a 24-core Linux server using mclapply. Each data frame in the list is allocated to a spawned instance and scored, returning a list of data frames of predictions. While mclapply is running, the various R instances are spending a lot of time in uninterruptible sleep, more than they're spending running. Has anyone else experienced this using mclapply? I'm a Linux neophyte; from an OS perspective, does this make any sense? Thanks.
You need to be careful when using mclapply to operate on large data sets. It's easy to create too many workers for the amount of memory on your computer and the amount of memory used by your computation. It's hard to predict the memory requirements due to the complexity of R's memory management, so it's best to monitor memory usage carefully using a tool such as "top" or "htop".
You may be able to decrease the memory usage by splitting your work into more but smaller tasks, since that may reduce the memory needed by the computation. I don't think that the choice of prescheduling affects the memory usage much, since mclapply will never fork more than mc.cores workers at a time, regardless of the value of mc.preschedule.
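A hedged sketch of that suggestion (big_df and score_chunk are hypothetical names): split the data into many more chunks than cores, so each forked worker only holds a small slice at any time:
library(parallel)
# hypothetical sketch: 128 small chunks instead of 16 big ones; each of the 16 forked
# workers processes one small chunk at a time (chunk assignment here is round-robin)
chunks <- split(big_df, rep(1:128, length.out = nrow(big_df)))
preds  <- mclapply(chunks, score_chunk, mc.cores = 16)
result <- do.call(rbind, preds)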

What R parallelization/HPC packages allow for parallelization within a loop?

Suppose I have a hierarchical Bayesian model with $V$ first-level nodes, where $V$ is very large, and I am going to do $S$ simulations. My thinking is that I could benefit from parallelizing the computation of each of those first-level nodes, and of course from running multiple chains in parallel. So I would have two for or *apply levels: one for the parallelization of the multiple chains, and one for the parallelization of the first-level node computations within an iteration for a particular chain. In what R packages, if any, is this possible? Thank you.
As requested, here is some high-level pseudo-code for something I'd want to do:
for node in top.cluster {
  for draw in simulation {
    draw population.level.variables from population.level.conditionals
    for node in bottom.cluster {
      draw random.effect[node] from random.effect.conditionals[node]
    }
  }
}
Does this make more sense?
In general, it is best to parallelize at the outermost level of the calculation, as that avoids communication overhead as much as possible. Unless you tell us more specifics, I don't see a point in parallelizing at two explicit levels of the code.
Here are some exceptions:
Of course that is not (easily) possible if, in your outer loop, each iteration depends on the results of the previous one.
Another caveat is that you'd need sufficient memory for this high-level parallelization, as (possibly) n copies of the data need to be held in RAM.
In R, you can do implicitly* parallelized matrix calculations by using a parallelized BLAS (I use OpenBLAS), which also doesn't need more memory. Depending on how much of your calculations are done by the BLAS, you may want to tweak the "outer" parallelization and the number of threads used by the BLAS.
* without any change to your code
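One way to tweak the BLAS thread count from within R is the RhpcBLASctl package (whether it controls your particular BLAS build is something you'd need to check):
library(RhpcBLASctl)
# sketch: keep (workers x BLAS threads) at or below the physical core count
# to avoid oversubscription when combining explicit parallelism with a parallel BLAS
blas_set_num_threads(2)  # threads per process; 2 is an arbitrary example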
Here's the high-performance computing task view, which gives you an overview of packages.
Personally, I mostly use snow + the parallelized BLAS.
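For example (a minimal sketch, with run_chain as a hypothetical per-chain function): one snow worker per MCMC chain, with the parallel BLAS handling the matrix work inside each chain:
library(snow)
# sketch: one SOCK worker per chain; run_chain is a hypothetical function that
# runs a single chain and returns its draws
cl <- makeSOCKcluster(4)
chains <- clusterApply(cl, 1:4, run_chain)
stopCluster(cl)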
