Force foreach to combine at some point during a %dopar% - r

I'm currently facing an issue using foreach (with doParallel) in R. I use sum as the combine function, but it appears that R waits for all the tasks to be completed before performing the sum. The problem is that my objects are heavy and numerous, and my machine is unable to store all of them at once.
Is there any way to sum on the fly (in other words, to force a call to the combine function during the foreach loop)? Thank you :)
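For reference, a minimal sketch of the kind of loop being described, using "+" as the combine function; make_heavy_object is a hypothetical stand-in for the heavy per-task computation:
library(doParallel)
registerDoParallel(cores = 4)
# with the simpler backends, every task result is collected before the
# "+" reductions are applied, which is exactly the memory problem described
total <- foreach(i = 1:1000, .combine = "+") %dopar% {
  make_heavy_object(i)   # hypothetical function returning a large object
}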

Related

Executing parallel function calls in R

Currently I am using a foreach loop in R to run parallel function calls on multiple cores of the same machine, and the code looks something like this:
library(doParallel)
registerDoParallel(cores = 12)   # 12 cores, as described below
result <- foreach(i = 1:length(list_of_dataframes)) %dopar% {
  some_function(list_of_dataframes[[i]])
}
In this list_of_dataframes, each data frame is for one product and has a different number of columns; some_function is a function that does a modelling task on each data frame. There are multiple function calls inside this function: some do data wrangling, others perform some sort of variable selection, and so on. The result is a list of lists, with each sub-list being a list of 3 data frames. For now, I have barely 500 products, and I am using a 32 GB machine with 12 cores to perform this task in parallel using doParallel and foreach.
My first question is: how do I scale this up, say to 500,000 products, and which framework would be ideal for this? My second question is: can I use SparkR for this? Is Spark meant to perform tasks like these? Would sparkR.lapply() be a good thing to use? I have read that it should be used as a last resort.
I am very new to all this parallel stuff; any help or suggestions would be of great help. Thanks in advance.

R mclapply vs foreach

I use mclapply for all my "embarrassingly parallel" computations. I find it clean and easy to use, and when I set mc.cores = 1 and mc.preschedule = TRUE I can insert browser() in the function passed to mclapply and debug line by line just like in regular R. This is a huge help in getting code to production quicker.
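For context, a rough sketch of that debugging pattern (my_fun is a hypothetical worker function):
library(parallel)
my_fun <- function(x) {
  browser()   # hit interactively, because mc.cores = 1 falls back to serial lapply
  x^2
}
res <- mclapply(1:4, my_fun, mc.cores = 1, mc.preschedule = TRUE)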
What does foreach offer that mclapply does not? Is there a reason I should consider writing foreach code going forward?
If I understand correctly, both can use the multicore approach to parallel computations (permitting forking) which I like to use for performance reasons.
I have seen foreach being used in various packages and have read the basics of it, but frankly I don't find it as easy to use. I am also unable to figure out how to get browser() to work in foreach function calls (yes, I have read the thread "browser mode with foreach %dopar%", but it didn't help me get the browser to work right).
The problem is almost the same as described here: Understanding the differences between mclapply and parLapply in R.
mclapply creates clones (forks) of the master process for each worker (thread/core) at the point mclapply is called, so every worker starts from an identical copy of the master's state. Unfortunately, forking isn't possible on Windows, where, in contrast to multicore, foreach and parLapply always use multisession parallelism.
When using parLapply or foreach with %dopar%, you generally have to perform the following additional steps: create a PSOCK cluster, register the cluster if desired, load the necessary packages on the cluster workers, and export the necessary data and functions to the global environment of the cluster workers.
That is why foreach has parameters like .packages and .export which enable us to distribute everything needed across sessions.
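A minimal sketch of those steps with doParallel (the data object and package are purely illustrative):
library(doParallel)
cl <- makeCluster(4)                       # 1. create a PSOCK cluster
registerDoParallel(cl)                     # 2. register it as the %dopar% backend
my_data <- data.frame(x = rnorm(100))      # hypothetical object the workers need
clusterExport(cl, "my_data")               # 3. export data to the workers
clusterEvalQ(cl, library(stats))           # 4. load required packages on the workers
res <- foreach(i = 1:4) %dopar% {          # .packages/.export can do steps 3-4 per loop instead
  mean(my_data$x) + i
}
stopCluster(cl)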
The future package vignette describes the differences between multicore and multisession processing: https://cran.r-project.org/web/packages/future/vignettes/future-1-overview.html
As Steve Weston (the author of foreach) says here, when using foreach with doParallel as the backend you can initialize the workers; this can be helpful for setting up, say, a database connection once per worker instead of once per task.
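As a rough sketch of that per-worker initialization (the SQLite file and query are purely illustrative, and DBI/RSQLite are assumed to be installed):
library(doParallel)
library(DBI)
cl <- makeCluster(4)
registerDoParallel(cl)
clusterEvalQ(cl, {
  library(DBI)
  con <- dbConnect(RSQLite::SQLite(), "my_database.sqlite")   # one connection per worker
  NULL                                                        # don't ship the connection back
})
res <- foreach(id = 1:100) %dopar% {
  dbGetQuery(con, paste("SELECT * FROM results WHERE id =", id))   # con lives on the worker
}
clusterEvalQ(cl, dbDisconnect(con))
stopCluster(cl)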

Spawn processes that run in parallel in R

I am writing a script that needs to run continuously, storing information in a MySQL database.
However, at some point in the day I would like to produce some summaries of the data being collected, but writing this in the same script would stop the data collection while the summaries are computed. Here's a sketch of the problem:
while (TRUE) {
  # get data and store it in the relational database
  # at some point of the day (or time interval) do some summaries
  if (format(Sys.time(), "%H:%M") == certain_time) {   # e.g. certain_time <- "18:00"
    source("analyze_data.R")
  }
}
The problem is that I'd like the data collection not to stop, with the summaries being executed by another core of the computer.
I have seen references to the packages parallel and multicore, but my impression is that they are aimed at repetitive tasks applied over vectors or lists.
You can use parallel to fork a process but you are right that the program will wait eternally for all the forked processes to come back together before proceeding (that is kind of the use case of parallel).
Why not run two separate R programs, one that collects the data and one that grabs it? Then, you simply run one continuously in the background and the other at set times. The problem then becomes one of getting the data out of the continuous data gathering program and into the summary program.
Do the logic outside of R:
Write two scripts: one with a while loop that stores the data, the other with the summarising check. Run the while loop in one process and just leave it running.
Meanwhile, run your other (checking) script on demand to crunch the data, or put it in a cron job.
There are robust tools outside of R to handle this kind of thing; why do it inside R?
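For illustration, a minimal sketch of that two-script layout (file names and the cron schedule are illustrative):
# collector.R -- started once and left running, e.g. with: Rscript collector.R &
while (TRUE) {
  # get data and store it in the MySQL database
  Sys.sleep(1)
}
# analyze_data.R -- run on demand, or from a cron entry such as:
#   0 18 * * * Rscript /path/to/analyze_data.R
# it reads what collector.R has stored and writes the daily summaries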

Using R Parallel with other R packages

I am working on a very time-intensive analysis using the lqmm package in R. I set the model running on Thursday; it is now Monday and it is still running. I am confident in the model itself (tested as a standard MLM), and I am confident in my lqmm code (I have run several other very similar LQMMs with the same dataset, and they all took over a day to run). But I'd really like to figure out how to make this run faster, if possible, using the parallel processing capabilities of the machines I have access to (note: all are Microsoft Windows based).
I have read through several tutorials on using parallel, but I have yet to find one that shows how to use the parallel package in concert with other R packages. Am I overthinking this, or is it not possible?
Here is the code that I am running, using the R package lqmm:
install.packages("lqmm")
library(lqmm)
g1.lqmm <- lqmm(y ~ x + IEP + pm + sd + IEPZ + IEP*x + IEP*pm + IEP*sd + IEP*IEPZ + x*pm + x*sd + x*IEPZ,
                random = ~ 1 + x + IEP + pm + sd + IEPZ,
                group = peers,
                tau = c(.1, .2, .3, .4, .5, .6, .7, .8, .9),
                na.action = na.omit,
                data = g1data)
The dataset has 122433 observations on 58 variables. All variables are z-scored or dummy coded.
The dependent libraries will need to be loaded on all your nodes. The function clusterEvalQ in the parallel package is intended for this purpose. You might also need to export some of your data to the global environments of your subnodes; for this you can use the clusterExport function. Also view this page for more info on other relevant functions that might be useful to you.
In general, to speed up your application by using multiple cores, you will have to split up your problem into multiple subpieces that can be processed in parallel on different cores. To achieve this in R, you will first need to create a cluster and assign a particular number of cores to it. Next, you will have to register the cluster, export the required variables to the nodes, and then evaluate the necessary libraries on each of your subnodes. The exact way you set up your cluster and launch the nodes will depend on the sublibraries and functions you will use. As an example, your cluster setup might look like this when you choose to utilize the doParallel package (and most of the other parallelisation sublibraries/functions):
library(doParallel)
nrCores <- detectCores()
cl <- makeCluster(nrCores)                              # create the cluster
registerDoParallel(cl)                                  # register it as the %dopar% backend
clusterExport(cl, c("g1data"), envir = environment())   # ship the dataset to the workers
clusterEvalQ(cl, library(lqmm))                         # load lqmm on every worker
The cluster is now prepared. You can now assign subparts of the global task to each individual node in your cluster. In the general example below, each node in your cluster will process subpart i of the global task. In the example we will use the foreach %dopar% functionality that is provided by the doParallel package:
The doParallel package provides a parallel backend for the foreach/%dopar% function using the parallel package of R 2.14.0 and later.
resultList <- foreach(i = 1:nrCores) %dopar% {
  # process part i of your data
}
Subresults will automatically be added to resultList. Finally, when all subprocesses are finished, we shut down the cluster and merge the results:
stopCluster(cl)
# merge data..
Since your question was not specifically on how to split up your data I will let you figure out the details of this part for yourself. However, you can find a more detailed example using the doParallel package in my answer to this post.
It sounds like you want to use parallel computing to make a single call of the lqmm function execute more quickly. To do that, you either have to:
Split the one call of lqmm into multiple function calls;
Parallelize a loop inside lqmm.
Some functions can be split up into multiple smaller pieces by specifying a smaller iteration value. Examples include parallelizing randomForest over the ntree argument, or parallelizing kmeans over the nstart argument. Another common case is to split the input data into smaller pieces, operate on the pieces in parallel, and then combine the results. That is often done when the input data is a data frame or a matrix.
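As an illustration of the first pattern, splitting randomForest over its ntree argument (x and y are hypothetical predictors and response; this mirrors the example in the foreach documentation):
library(doParallel)
library(randomForest)
registerDoParallel(cores = 4)
# grow 500 trees as 4 forests of 125 trees each, merged with randomForest::combine
rf <- foreach(ntree = rep(125, 4), .combine = randomForest::combine,
              .multicombine = TRUE, .packages = "randomForest") %dopar% {
  randomForest(x, y, ntree = ntree)
}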
But many times, in order to parallelize a function you have to modify it. That may actually be easier, because you may not have to figure out how to split up the problem and combine the partial results: you may only need to convert an lapply call into a parallel lapply, or convert a for loop into a foreach loop, as in the sketch below. However, it's often time consuming to understand the code. It's also a good idea to profile the code, so that your parallelization really does speed up the function call.
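For instance, an internal serial loop of the hypothetical form below can often be converted with minimal changes (chunks and fit_one are placeholders; cl is an existing cluster):
res <- lapply(chunks, fit_one)            # original serial code inside the package
res <- parLapply(cl, chunks, fit_one)     # parallel drop-in replacement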
I suggest that you download the source distribution of the lqmm package and start reading the code. Try to understand its structure and get an idea of which loops could be executed in parallel. If you're lucky, you might figure out a way to split one call into multiple calls, but otherwise you'll have to rebuild a modified version of the package on your machine.

When does foreach call .combine?

I have written some code using foreach which processes and combines a large number of CSV files. I am running it on a 32-core machine, using %dopar% and registering 32 cores with doMC. I have set .inorder = FALSE, .multicombine = TRUE, .verbose = TRUE, and have a custom combine function.
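As a rough sketch of the setup being described (process_csv and my_combine are hypothetical stand-ins for the per-file work and the custom space-saving combine function):
library(doMC)
registerDoMC(32)
files <- list.files("data", pattern = "\\.csv$", full.names = TRUE)
result <- foreach(f = files, .combine = my_combine, .multicombine = TRUE,
                  .inorder = FALSE, .verbose = TRUE) %dopar% {
  process_csv(f)
}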
I notice that if I run this on a sufficiently large set of files, it appears that R attempts to process EVERY file before calling .combine the first time. My evidence is that in monitoring my server with htop, I initially see all cores maxed out, and then for the remainder of the job only one or two cores are used while it does the combines in batches of ~100 (.maxcombine's default), as seen in the verbose console output. What's really telling is that the more jobs I give to foreach, the longer it takes to see "First call to combine"!
This seems counter-intuitive to me; I naively expected foreach to process .maxcombine files, combine them, then move on to the next batch, combining those with the output of the last call to .combine. I suppose for most uses of .combine it wouldn't matter as the output would be roughly the same size as the sum of the sizes of inputs to it; however my combine function pares down the size a bit. My job is large enough that I could not possibly hold all 4200+ individual foreach job outputs in RAM simultaneously, so I was counting on my space-saving .combine and separate batching to see me through.
Am I right that .combine doesn't get called until ALL my foreach jobs are individually complete? If so, why is that, and how can I optimize for that (other than making the output of each job smaller) or change that behavior?
The short answer is to use either doMPI or doRedis as your parallel backend. They work more as you expect.
The doMC, doSNOW and doParallel backends are relatively simple wrappers around functions such as mclapply and clusterApplyLB, and don't call the combine function until all of the results have been computed, as you've observed. The doMPI, doRedis, and (now defunct) doSMP backends are more complex, and get inputs from the iterators as needed and call the combine function on-the-fly, as you have assumed they would. These backends have a number of advantages in my opinion, and allow you to handle an arbitrary number of tasks if you have appropriate iterators and combine function. It surprises me that so many people get along just fine with the simpler backends, but if you have a lot of tasks, the fancy ones are essential, allowing you to do things that are quite difficult with packages such as parallel.
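A minimal registration sketch for doMPI (assuming a working MPI installation; my_combine and process_csv are the hypothetical per-file work and combine function from the question, and the rest of the foreach call stays unchanged):
library(doMPI)
cl <- startMPIcluster(count = 4)   # launch 4 MPI workers
registerDoMPI(cl)
result <- foreach(f = files, .combine = my_combine, .multicombine = TRUE,
                  .inorder = FALSE) %dopar% {
  process_csv(f)
}
closeCluster(cl)                   # shut the workers down (and finalize MPI, e.g. mpi.quit(), before exiting)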
I've been thinking about writing a more sophisticated backend based on the parallel package that would handle results on the fly like my doMPI package, but there hasn't been any call for it to my knowledge. In fact, yours is the only question of this sort that I've seen.
Update
The doSNOW backend now supports on-the-fly result handling. Unfortunately, this can't be done with doParallel because the parallel package doesn't export the necessary functions.
