I'd like to do parallel processing in R using the packages 'doParallel' and 'foreach'. The idea is to parallelize computations whose results I don't need to keep. From what I can tell, the 'foreach' operator always returns some kind of result that takes up RAM. So I'd like help getting an empty result from parallel processing loops.
# 1. Packages
library(doParallel)
library(foreach)
# 2. Create and run app cluster
cluster_app <- makeCluster(detectCores())
registerDoParallel(cluster_app)
# 3. Loop with result
list_i <- foreach(i = 1:100) %dopar% {
  print(i)
}
# 4. List is not empty
list_i
# 5. How to make a loop with an empty 'list_i'?
# TODO: make 'list_i' equal to NULL or NA
# 6. Stop app cluster
stopCluster(cluster_app)
Here is the solution I found:
# 1. Packages
library(doParallel)
library(foreach)
# 2. Create and run app cluster
cluster_app <- makeCluster(detectCores())
registerDoParallel(cluster_app)
# 3. Loop with result
list_i <- foreach(i = 1:100) %dopar% {
  print(i)
}
list_i
# 4. Mock data processing
mock_data <- function(x) {
  data.frame(matrix(NA, nrow = x, ncol = x))
}
# 5. How to make a loop with an empty 'list_i'?
list_i <- foreach(i = 1:10, .combine = 'c') %dopar% {
  # 1. Calculations
  mock_data(i)
  # 2. Result
  NULL
}
# The result is a single NULL (not a data set)
list_i
# 6. Stop app cluster
stopCluster(cluster_app)
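An alternative, not from the original post: instead of returning NULL from the body and combining with 'c', you can pass a combine function that simply discards everything, so nothing accumulates on the master. A minimal sketch of that idea; it reuses mock_data from above and spins up a fresh cluster, since the previous one was stopped:
library(doParallel)
library(foreach)
cluster_app <- makeCluster(detectCores())
registerDoParallel(cluster_app)
# The combine function throws all results away, so the loop's value is NULL
foreach(i = 1:10, .combine = function(...) NULL,
        .multicombine = TRUE) %dopar% {
  mock_data(i)  # heavy computation whose result we do not keep
  NULL          # return something tiny so little is sent back to the master
}
stopCluster(cluster_app)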
Related
I have found a feature/bug in the foreach package which I do not understand. Perhaps someone can explain this behaviour to me:
I created a loop with the foreach package (I use it together with multicore calculations, but here in a sequential example; the bug appears in both variants). The loop runs `runs` times, and every iteration returns a list with `clusters` entries. So I expect a list with `runs` entries, each of which is a list with `clusters` entries.
My code was the following one:
library(foreach)
clusters <- 10
runs <- 100
temp <- foreach(r = 1:runs,
                .combine = 'list',
                .multicombine = TRUE) %do% {
  signal_all <- lapply(1:clusters, function(x) {
    return(1)
  })
  return(signal_all)
} ## end do
With this code all works as expected: temp is a list with 100 entries, each of which is a list with 10 entries.
But when increasing runs <- 101, the expected list structure is destroyed: temp becomes a list with only two entries, the first holding the first 100 results and the second holding result 101. When commenting out the line .combine = 'list', all works as expected.
library(foreach)
clusters <- 10
runs <- 100
temp <- foreach(r = 1:runs,
                .multicombine = TRUE) %do% {
  signal_all <- lapply(1:clusters, function(x) {
    return(1)
  })
  return(signal_all)
} ## end do
Can someone explain this behaviour?
Thanks for any help!
Meanwhile I have found a solution.
The foreach function knows that some combine functions (e.g. c or cbind) take many arguments, and will call them with up to 100 arguments (by default) in order to improve performance. With the argument .maxcombine you can set that maximum manually.
library(foreach)
clusters <- 10
runs <- 101
temp <- foreach(r = 1:runs,
                .combine = 'list',
                .maxcombine = runs,
                .multicombine = TRUE) %do% {
  signal_all <- lapply(1:clusters, function(x) {
    return(1)
  })
  return(signal_all)
} ## end do
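To see why the structure changes at 101 iterations: with the default .maxcombine of 100, foreach first combines the first 100 results with one call to list(), and then makes a second call to combine that intermediate list with result 101. A plain-R sketch of the effect (illustrative values, not from the original post):
res <- lapply(1:101, function(r) as.list(rep(1, 10)))  # 101 results of 10 entries each
first_batch <- do.call(list, res[1:100])  # foreach's first combine call
temp <- list(first_batch, res[[101]])     # the second call nests the batches
length(temp)  # 2 instead of 101: the expected structure is destroyed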
I'd like to know whether the cpv function from the trotter package works with %dopar%. I'm getting the following error:
task 1 failed - "object of type 'S4' is not subsettable"
Here's a small example:
library(doParallel)
library(trotter)
registerDoParallel(cores = 2)
x <- letters
combos <- cpv(2, 1:4)
print(combos)
num_combos <- length(combos)
results_list <- foreach(combo_num = 1:num_combos) %dopar% { # many iterations
  y <- x[combos[combo_num]]
  # time-consuming stuff follows that involves using y
}
If I replace %dopar% with %do% (or simply use a for loop), it works fine.
Depending on the cluster type, one needs to explicitly specify the packages used via the .packages argument. The following should work:
library(doParallel)
library(trotter)
cl <- makePSOCKcluster(2)
registerDoParallel(cl=cl)
x <- letters
combos <- cpv(2, 1:4)
num_combos <- length(combos)
rl <- foreach(combo_num = 1:num_combos, .packages = "trotter") %dopar% {
  x[combos[combo_num]]
}
There are several packages in R to simplify running code in parallel, like foreach and future. Most of these have constructs which are like lapply or a for loop: they carry on until all the tasks have finished.
Is there a simple parallel version of Find? That is, I would like to run several tasks in parallel. I don't need all of them to finish, I just need to get the first one that finishes (maybe with a particular result). After that the other tasks can be killed, or left to finish on their own.
Conceptual code:
hunt_needle <- function (x, y) x %in% (y-1000):y
x <- sample.int(1000000, 1)
result <- parallel_find(seq(1000, 1000000, 1000), hunt_needle)
# should return the first value for which hunt_needle is true
You can use shared memory so that processes can communicate with one another.
For that, you can use package bigstatsr (disclaimer: I'm the author).
Choose a block size and do:
# devtools::install_github("privefl/bigstatsr")
library(bigstatsr)
# Data example
cond <- logical(1e6)
cond[sample(length(cond), size = 1)] <- TRUE
ind.block <- bigstatsr:::CutBySize(length(cond), block.size = 1000)
cl <- parallel::makeCluster(nb_cores())
doParallel::registerDoParallel(cl)
# This value (in an on-disk matrix) is shared by processes
found_it <- FBM(1, 1, type = "integer", init = 0L)
library(foreach)
res <- foreach(ic = sample(rows_along(ind.block)), .combine = 'c') %dopar% {
  if (found_it[1]) return(NULL)
  ind <- bigstatsr:::seq2(ind.block[ic, ])
  find <- which(cond[ind])
  if (length(find)) {
    found_it[1] <- 1L
    return(ind[find[1]])
  } else {
    return(NULL)
  }
}
parallel::stopCluster(cl)
# Verification
all.equal(res, which(cond))
Basically, once a solution is found, the remaining computations are no longer needed, and the other processes know it because you put a 1 in found_it, which is shared between all processes.
As your question is not reproducible and I don't understand everything you need, you may have to adapt this solution a little bit.
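For the hunt_needle example from the question, the same pattern might look roughly like this (a sketch under the same assumptions; the block size of 50 and the .packages argument are my additions, and parallel_find from the question remains a hypothetical name):
library(bigstatsr)
library(foreach)
cl <- parallel::makeCluster(nb_cores())
doParallel::registerDoParallel(cl)
hunt_needle <- function(x, y) x %in% (y - 1000):y
x <- sample.int(1000000, 1)
candidates <- seq(1000, 1000000, 1000)
blocks <- split(candidates, ceiling(seq_along(candidates) / 50))  # 50 values per block
found_it <- FBM(1, 1, type = "integer", init = 0L)  # shared "stop" flag
res <- foreach(b = blocks, .combine = 'c', .packages = "bigstatsr") %dopar% {
  if (found_it[1]) return(NULL)  # another worker already found the needle
  hit <- b[vapply(b, function(y) hunt_needle(x, y), logical(1))]
  if (length(hit)) {
    found_it[1] <- 1L
    hit[1]
  } else NULL
}
parallel::stopCluster(cl)
res  # the first y for which hunt_needle(x, y) is TRUE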
I would like to know if/how it would be possible to return multiple outputs as part of foreach dopar loop.
Let's take a very simplistic example. Let's suppose I would like to do 2 operations as part of the foreach loop, and would like to return or save the results of both operations for each value of i.
If there were only one output to return, it would be as simple as:
library(foreach)
library(doParallel)
cl <- makeCluster(3)
registerDoParallel(cl)
oper1 <- foreach(i=1:100000) %dopar% {
  i+2
}
oper1 would be a list with 100000 elements, each element being the result of the operation i+2 for the corresponding value of i.
Suppose now I would like to return or save the results of two different operations separately, e.g. i+2 and i+3. I tried the following:
oper1 <- list()
oper2 <- foreach(i=1:100000) %dopar% {
  oper1[[i]] <- i+2
  return(i+3)
}
hoping that the results of i+2 will be saved in the list oper1, and that the results of the second operation i+3 will be returned by foreach. However, nothing gets populated in the list oper1! In this case, only the result of i+3 gets returned from the loop.
Is there any way of returning or saving both outputs in two separate lists?
Don't try to use side-effects with foreach or any other parallel program package. Instead, return all of the values from the body of the foreach loop in a list. If you want your final result to be a list of two lists rather than a list of 100,000 lists, then specify a combine function that transposes the results:
comb <- function(x, ...) {
  lapply(seq_along(x),
         function(i) c(x[[i]], lapply(list(...), function(y) y[[i]])))
}
oper <- foreach(i=1:10, .combine='comb', .multicombine=TRUE,
                .init=list(list(), list())) %dopar% {
  list(i+2, i+3)
}
oper1 <- oper[[1]]
oper2 <- oper[[2]]
Note that this combine function requires the use of the .init argument to set the value of x for the first invocation of the combine function.
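If the custom combine feels opaque, an equivalent approach (my addition, not part of the original answer) is to return one small list per iteration and transpose on the master afterwards:
oper <- foreach(i = 1:10) %dopar% {
  list(i + 2, i + 3)  # one two-element list per iteration
}
# Transpose: collect the k-th element of every inner list
oper1 <- lapply(oper, `[[`, 1)
oper2 <- lapply(oper, `[[`, 2)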
I prefer to use a class to hold multiple results for a %dopar% loop.
This example spins up 3 cores, calculates multiple results on each core, then returns the list of results to the calling thread.
Tested under RStudio, Windows 10, and R v3.3.2.
library(foreach)
library(doParallel)
# Create class which holds multiple results for each loop iteration.
# Each loop iteration populates two properties: $result1 and $result2.
# For a great tutorial on S3 classes, see:
# http://www.cyclismo.org/tutorial/R/s3Classes.html#creating-an-s3-class
multiResultClass <- function(result1 = NULL, result2 = NULL)
{
  me <- list(
    result1 = result1,
    result2 = result2
  )
  ## Set the name for the class
  class(me) <- append(class(me), "multiResultClass")
  return(me)
}
cl <- makeCluster(3)
registerDoParallel(cl)
oper <- foreach(i=1:10) %dopar% {
  result <- multiResultClass()
  result$result1 <- i+1
  result$result2 <- i+2
  return(result)
}
stopCluster(cl)
oper1 <- oper[[1]]$result1
oper2 <- oper[[1]]$result2
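Note that oper[[1]]$result1 only extracts the results of the first iteration. To gather each field across all iterations, you could for instance do the following (my addition, not part of the original answer):
# Collect $result1 and $result2 over all 10 iterations
all_result1 <- sapply(oper, function(r) r$result1)  # 2 3 4 ... 11
all_result2 <- sapply(oper, function(r) r$result2)  # 3 4 5 ... 12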
This toy example shows how to return multiple results from a %dopar% loop.
This example:
Spins up 3 cores.
Renders a graph on each core.
Returns the graph and an attached message.
Prints each graph and its attached message.
I found this really useful for speeding up an Rmarkdown job that prints 1,800 graphs into a PDF document.
Tested under Windows 10, RStudio, and R v3.3.2.
R code:
# Demo of returning multiple results from a %dopar% loop.
library(foreach)
library(doParallel)
library(ggplot2)
cl <- makeCluster(3)
registerDoParallel(cl)
# Create class which holds multiple results for each loop iteration.
# Each loop iteration populates two properties: $resultPlot and $resultMessage.
# For a great tutorial on S3 classes, see:
# http://www.cyclismo.org/tutorial/R/s3Classes.html#creating-an-s3-class
plotAndMessage <- function(resultPlot = NULL, resultMessage = "?")
{
  me <- list(
    resultPlot = resultPlot,
    resultMessage = resultMessage
  )
  # Set the name for the class
  class(me) <- append(class(me), "plotAndMessage")
  return(me)
}
oper <- foreach(i=1:5, .packages=c("ggplot2")) %dopar% {
  x <- c(i:(i+2))
  y <- c(i:(i+2))
  df <- data.frame(x,y)
  p <- ggplot(df, aes(x,y))
  p <- p + geom_point()
  message <- paste("Hello, world! i=",i,"\n",sep="")
  result <- plotAndMessage()
  result$resultPlot <- p
  result$resultMessage <- message
  return(result)
}
# Print resultant plots and messages. Despite running on multiple cores,
# 'foreach' guarantees that the plots arrive back in the original order.
foreach(i=1:5) %do% {
  # Print message attached to plot.
  cat(oper[[i]]$resultMessage)
  # Print plot.
  print(oper[[i]]$resultPlot)
}
stopCluster(cl)
There are some informative posts on how to create a counter for loops in an R program. However, how do you create something similar when using the parallelized version with foreach()?
Edit: After an update to the doSNOW package it has become quite simple to display a nice progress bar when using %dopar%, and it works on Linux, Windows and OS X.
doSNOW now officially supports progress bars via the .options.snow argument.
library(doSNOW)
cl <- makeCluster(2)
registerDoSNOW(cl)
iterations <- 100
pb <- txtProgressBar(max = iterations, style = 3)
progress <- function(n) setTxtProgressBar(pb, n)
opts <- list(progress = progress)
result <- foreach(i = 1:iterations, .combine = rbind,
                  .options.snow = opts) %dopar% {
  s <- summary(rnorm(1e6))[3]
  return(s)
}
close(pb)
stopCluster(cl)
Yet another way of tracking progress, if you keep in mind the total number of iterations, is to set .verbose = TRUE, as this will print to the console which iterations have been finished.
Previous solution for Linux and OS X
On Ubuntu 14.04 (64 bit) and OS X (El Capitan) the progress bar is displayed even when using %dopar%, if outfile = "" is set in the makeCluster function. It does not seem to work under Windows. From the help on makeCluster:
outfile: Where to direct the stdout and stderr connection output from the workers. "" indicates no redirection (which may only be useful for workers on the local machine). Defaults to ‘/dev/null’ (‘nul:’ on Windows).
Example code:
library(foreach)
library(doSNOW)
cl <- makeCluster(4, outfile="") # number of cores. Notice 'outfile'
registerDoSNOW(cl)
iterations <- 100
pb <- txtProgressBar(min = 1, max = iterations, style = 3)
result <- foreach(i = 1:iterations, .combine = rbind) %dopar% {
  s <- summary(rnorm(1e6))[3]
  setTxtProgressBar(pb, i)
  return(s)
}
close(pb)
stopCluster(cl)
The resulting progress bar looks a little odd, since a new bar is printed for every progression of the bar, and since a worker may lag a bit, which causes the progress bar to go back and forth occasionally.
You can also get this to work with the progress package.
# loading parallel and doSNOW package and creating cluster ----------------
library(parallel)
library(doSNOW)
numCores<-detectCores()
cl <- makeCluster(numCores)
registerDoSNOW(cl)
# progress bar ------------------------------------------------------------
library(progress)
iterations <- 100 # used for the foreach loop
pb <- progress_bar$new(
  format = "letter = :letter [:bar] :elapsed | eta: :eta",
  total = iterations,  # 100
  width = 60)
progress_letter <- rep(LETTERS[1:10], 10)  # token reported in progress bar
# allowing progress bar to be used in foreach -----------------------------
progress <- function(n) {
  pb$tick(tokens = list(letter = progress_letter[n]))
}
opts <- list(progress = progress)
# foreach loop ------------------------------------------------------------
library(foreach)
foreach(i = 1:iterations, .combine = rbind, .options.snow = opts) %dopar% {
summary(rnorm(1e6))[3]
}
stopCluster(cl)
This code is a modified version of the doRedis example, and will make a progress bar even when using %dopar% with a parallel backend:
#Load Libraries
library(foreach)
library(utils)
library(iterators)
library(doParallel)
library(snow)
#Choose number of iterations
n <- 1000
#Progress combine function
f <- function() {
  pb <- txtProgressBar(min = 1, max = n - 1, style = 3)
  count <- 0
  function(...) {
    count <<- count + length(list(...)) - 1
    setTxtProgressBar(pb, count)
    Sys.sleep(0.01)
    flush.console()
    c(...)
  }
}
#Start a cluster
cl <- makeCluster(4, type='SOCK')
registerDoParallel(cl)
# Run the loop in parallel
k <- foreach(i = icount(n), .final = sum, .combine = f()) %dopar% {
  log2(i)
}
head(k)
#Stop the cluster
stopCluster(cl)
You have to know the number of iterations and the combination function ahead of time.
This is now possible with the parallel package. Tested with R 3.2.3 on OSX 10.11, running inside RStudio, using a "PSOCK"-type cluster.
library(doParallel)
# default cluster type on my machine is "PSOCK", YMMV with other types
cl <- parallel::makeCluster(4, outfile = "")
registerDoParallel(cl)
n <- 10000
pb <- txtProgressBar(0, n, style = 3)
invisible(foreach(i = icount(n)) %dopar% {
  setTxtProgressBar(pb, i)
})
stopCluster(cl)
Strangely, it only displays correctly with style = 3.
You save the start time with Sys.time() before the loop. Loop over rows or columns or something of which you know the total. Then, inside the loop, you can calculate the time run so far (see difftime), the percentage complete, the speed and the estimated time left. Each process can print those progress lines with the message function. You'll get output something like
1/1000 complete # 1 items/s, ETA: 00:00:45
2/1000 complete # 1 items/s, ETA: 00:00:44
Obviously the looping order will greatly affect how well this works. I don't know about foreach, but with multicore's mclapply you'd get good results using mc.preschedule = FALSE, which means that items are allocated to the processes one by one, in order, as previous items complete.
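A minimal sketch of that manual-ETA idea (sequential here for clarity; the .POSIXct formatting trick is just one way to render seconds as HH:MM:SS):
n <- 1000
start <- Sys.time()
for (i in seq_len(n)) {
  # ... do the actual work for item i here ...
  elapsed <- as.numeric(difftime(Sys.time(), start, units = "secs"))
  speed <- i / elapsed   # items per second
  eta <- (n - i) / speed # seconds remaining
  message(sprintf("%d/%d complete # %.0f items/s, ETA: %s", i, n, speed,
                  format(.POSIXct(eta, tz = "GMT"), "%H:%M:%S")))
}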
This code implements a progress bar tracking a parallelized foreach loop using the doMC backend, and using the excellent progress package in R. It assumes that all cores, specified by numCores, do an approximately equal amount of work.
library(foreach)
library(doMC)
library(progress)
iterations <- 100
numCores <- 8
registerDoMC(cores=numCores)
pbTracker <- function(pb, i, numCores) {
  if (i %% numCores == 0) {
    pb$tick()
  }
}
pb <- progress_bar$new(
  format = " progress [:bar] :percent eta: :eta",
  total = iterations / numCores, clear = FALSE, width = 60)
output <- foreach(i = 1:iterations) %dopar% {
  pbTracker(pb, i, numCores)
  Sys.sleep(1/20)
}
The following code will produce a nice progress bar in R for the foreach control structure. It will also work with graphical progress bars by replacing txtProgressBar with the desired progress bar object.
# Gives us the foreach control structure.
library(foreach)
# Gives us the progress bar object.
library(utils)
# Gives us the icount() iterator used below.
library(iterators)
# Some number of iterations to process.
n <- 10000
# Create the progress bar.
pb <- txtProgressBar(min = 1, max = n, style=3)
# The foreach loop we are monitoring. This foreach loop will log2 all
# the values from 1 to n and then sum the result.
k <- foreach(i = icount(n), .final = sum, .combine = c) %do% {
  setTxtProgressBar(pb, i)
  log2(i)
}
# Close the progress bar.
close(pb)
While the code above answers your question in its most basic form, a better and much harder question is whether you can create an R progress bar which monitors the progress of a foreach statement when it is parallelized with %dopar%. Unfortunately I don't think it is possible to monitor the progress of a parallelized foreach in this way, but I would love for someone to prove me wrong, as it would be a very useful feature.