I have a process I want to run in parallel, but some tasks fail due to a strange error. I am now considering catching those failures during the combine step and recomputing the failing tasks on the master process. However, I don't know how to write such a function for .combine.
How should it be written?
I know how to write a .combine function in general (this answer provides an example), but it doesn't show how to handle failing tasks, nor how to repeat a task on the master.
I would do something like:
foreach(i = 1:100, .combine = function(x, y) { tryCatch(?) }) %dopar% {
  long_process_which_fails_randomly(i)
}
However, how do I access the input of the failing task inside the .combine function (if that is even possible)? Or should I make the %dopar% body return a flag or a list so the task can be recomputed later?
To execute tasks in the combine function, you need to include extra information in the result object returned by the body of the foreach loop. In this case, that would be an error flag and the value of i. There are many ways to do this, but here's an example:
comb <- function(results, x) {
  i <- x$i
  result <- x$result
  if (x$error) {
    cat(sprintf('master computing failed task %d\n', i))
    # Could call the function repeatedly until it succeeds,
    # but that could hang the master
    result <- try(fails_randomly(i))
  }
  results[i] <- list(result)  # guard against a NULL result
  results
}
r <- foreach(i = 1:100, .combine = 'comb',
             .init = vector('list', 100)) %dopar% {
  tryCatch({
    list(error = FALSE, i = i, result = fails_randomly(i))
  },
  error = function(e) {
    list(error = TRUE, i = i, result = e)
  })
}
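The snippets above assume a registered parallel backend and some fails_randomly function, neither of which is defined in the question. Here is one hypothetical setup (the failure rate is made up) so the example can be run end to end:
library(doParallel)
cl <- makeCluster(2)
registerDoParallel(cl)
# Stand-in for the real long-running task: fails roughly half of the time
fails_randomly <- function(i) {
  if (runif(1) < 0.5) stop("task ", i, " failed")
  i * 2
}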
I'd be tempted to deal with this problem by executing the parallel loop repeatedly until all the tasks have been computed:
x <- rnorm(100)
results <- lapply(x, function(i) simpleError(''))
# Might want to put a limit on the number of retries
repeat {
  ix <- which(sapply(results, function(x) inherits(x, 'error')))
  if (length(ix) == 0)
    break
  cat(sprintf('computing tasks %s\n', paste(ix, collapse = ',')))
  r <- foreach(i = x[ix], .errorhandling = 'pass') %dopar% {
    fails_randomly(i)
  }
  results[ix] <- r
}
Note that this solution uses the .errorhandling option which is very useful if errors can occur. For more information on this option, see the foreach man page.
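As a quick illustration of how the option behaves (a toy sketch, assuming a backend is registered as above):
# .errorhandling = 'stop'   (default): the whole foreach call fails on the first error
# .errorhandling = 'remove': results of failing tasks are silently dropped
# .errorhandling = 'pass'  : the error object is returned in place of the result
foreach(i = 1:3, .errorhandling = 'pass') %dopar% {
  if (i == 2) stop("boom") else i
}
# element 2 of the returned list is the error condition; elements 1 and 3 are 1 and 3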
I am doing some heavy computations which I would like to speed up by performing it in a parallel loop. Moreover, I want the result of each calculation to be assigned to the global environment based on the name of the data currently processed:
fun <- function(arg) {
assign(arg, arg, envir = .GlobalEnv)
}
For loop
In a simple for loop, that would look like the following, and it works just fine:
for_fun <- function() {
  data <- letters[1:10]
  for (i in 1:length(data)) {
    dat <- quote(data[i])
    call <- call("fun", dat)
    eval(call)
  }
}
# Works as expected
for_fun()
In this function, I first get some data, loop over it, quote it (although not necessary) to be used in a function call. In reality, this function name is also dynamic which is why I am doing it this way.
Foreach
Now, I want to speed this up. My first thought was to use the foreach package (with a doParallel backend):
foreach_fun <- function() {
  # Set up parallel backend
  cl <- parallel::makeCluster(parallel::detectCores())
  doParallel::registerDoParallel(cl)
  data <- letters[1:10]
  foreach(i = 1:length(data)) %dopar% {
    dat <- quote(data[i])
    call <- call("fun", dat)
    eval(call)
  }
  # Stop the parallel backend
  parallel::stopCluster(cl)
  doParallel::stopImplicitCluster()
}
# Error in { : task 1 failed - "could not find function "fun""
foreach_fun()
Replacing the whole quote-call-eval procedure with simply fun(data[i]) resolves the error but still nothing gets assigned.
Future
To ensure it wasn't a problem with the foreach package, I also tried the future package (although I am not familiar with it).
future_fun <- function() {
  # Plan a parallel future
  cl <- parallel::makeCluster(parallel::detectCores())
  future::plan(cluster, workers = cl)
  data <- letters[1:10]
  # Create an explicit future
  future(expr = {
    for (i in 1:length(data)) {
      dat <- quote(data[i])
      call <- call("fun", dat)
      eval(call)
    }
  })
  # Stop the parallel future
  parallel::stopCluster(cl)
  future::plan(sequential)
}
# No errors but nothing assigned
# probably the future was never evaluated
future_fun()
Forcing the future to be evaluated (f <- future(...); value(f)) triggers the same error as by using foreach: Error in { : task 1 failed - "could not find function "fun""
Summary
In short, my questions are:
How do you assign variables to the global environment in a parallel loop?
Why does the function lookup fail?
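Some context on both points: with a cluster backend each worker is a separate R process with its own global environment, so assign() inside %dopar% only modifies the worker's copy, and fun, defined in the master's global environment, is not automatically exported when the loop runs inside another function (hence the lookup error; .export = "fun" would cure that part). A minimal sketch of the usual workaround is to return the values from the loop and assign them on the master afterwards:
data <- letters[1:10]
# iterate over the values directly so nothing needs to be exported
results <- foreach(d = data) %dopar% {
  d   # just return the value; assigning on the worker never reaches the master
}
for (i in seq_along(data)) {
  assign(data[i], results[[i]], envir = .GlobalEnv)
}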
There are several packages in R to simplify running code in parallel, like foreach and future. Most of these have constructs which are like lapply or a for loop: they carry on until all the tasks have finished.
Is there a simple parallel version of Find? That is, I would like to run several tasks in parallel. I don't need all of them to finish, I just need to get the first one that finishes (maybe with a particular result). After that the other tasks can be killed, or left to finish on their own.
Conceptual code:
hunt_needle <- function (x, y) x %in% (y-1000):y
x <- sample.int(1000000, 1)
result <- parallel_find(seq(1000, 1000000, 1000), hunt_needle)
# should return the first value for which hunt_needle is true
You can use shared memory so that processes can communicate with one another.
For that, you can use package bigstatsr (disclaimer: I'm the author).
Choose a block size and do:
# devtools::install_github("privefl/bigstatsr")
library(bigstatsr)
# Data example
cond <- logical(1e6)
cond[sample(length(cond), size = 1)] <- TRUE
ind.block <- bigstatsr:::CutBySize(length(cond), block.size = 1000)
cl <- parallel::makeCluster(nb_cores())
doParallel::registerDoParallel(cl)
# This value (in an on-disk matrix) is shared by processes
found_it <- FBM(1, 1, type = "integer", init = 0L)
library(foreach)
res <- foreach(ic = sample(rows_along(ind.block)), .combine = 'c') %dopar% {
  if (found_it[1]) return(NULL)
  ind <- bigstatsr:::seq2(ind.block[ic, ])
  find <- which(cond[ind])
  if (length(find)) {
    found_it[1] <- 1L
    return(ind[find[1]])
  } else {
    return(NULL)
  }
}
parallel::stopCluster(cl)
# Verification
all.equal(res, which(cond))
Basically, once a solution is found, the remaining blocks no longer need to be computed, and the other processes know this because a 1 has been written to found_it, which is shared between all of them.
As your question is not reproducible and I don't understand everything you need, you may have to adapt this solution a little bit.
I have a huge list (huge_list). A function (inner_fun) is called for each element of the list; inner_fun takes around 0.5 seconds and returns a simple numeric vector of size 3. I am trying to parallelise this. After going through many articles, I read that it is better to divide the work into chunks when the parallelised function is very quick, so I divided it based on the number of cores. But there is no time benefit, and I don't understand why. My main concern is whether I am doing something wrong in the code. I am not posting the exact code here, but I have tried to replicate it as closely as possible.
A few observations:
dummy_fun and dummy_fun2 take around 10 hours with the number of parallel workers set to 11.
With no parallelism, this takes around 20 hours.
With 2 parallel workers, it completes in 15 hours.
I am using a machine with 12 cores, 60 GB of RAM, and Ubuntu.
Code to make cluster
no_of_clusters <- detectCores() - 1
cl <- makeCluster(no_of_clusters)
registerDoParallel(cl)
clusterExport(cl, varlist = c("arg1", "arg2", "inner_fun"))
Function without chunks
dummy_fun<-function(arg1,arg2,huge_list){
g <- foreach (i = 1: length(huge_list),.combine=rbind,
.multicombine=TRUE) %dopar% {
inner_fun(i,arg1,arg2,huge_list[i])
}
return(g)
}
Function with chunks
dummy_fun2 <- function(arg1, arg2, huge_list) {
  il <- 1:length(huge_list)
  il2 <- split(il, ceiling(seq_along(il) / (length(il) / (detectCores() - 1))))
  g <- foreach(i = il2, .combine = rbind, .multicombine = TRUE) %dopar% {
    ab1 <- lapply(i, function(li) {
      inner_fun(li, arg1, arg2, huge_list[li])
    })
    do.call(rbind, ab1)
  }
  return(g)
}
You got the chunks wrong. It's not about dividing the indices into chunks of length no_of_clusters, but rather about dividing them into no_of_clusters chunks.
Try this out:
dummy_fun2 <- function(arg1, arg2, huge_list, inner_fun, ncores) {
  cl <- parallel::makeCluster(ncores)
  doParallel::registerDoParallel(cl)
  on.exit(parallel::stopCluster(cl), add = TRUE)
  L <- length(huge_list)
  inds <- split(seq_len(L), sort(rep_len(seq_len(ncores), L)))
  foreach(l = seq_along(inds), .combine = rbind) %dopar% {
    ab1 <- lapply(inds[[l]], function(i) {
      inner_fun(i, arg1, arg2, huge_list[i])
    })
    do.call(rbind, ab1)
  }
}
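For illustration, here is a toy call; toy_inner and toy_list are made-up stand-ins for the real inner_fun and huge_list:
toy_inner <- function(i, arg1, arg2, x) c(i, arg1, arg2 + unlist(x))
toy_list <- as.list(runif(20))
res <- dummy_fun2(arg1 = 1, arg2 = 2, huge_list = toy_list,
                  inner_fun = toy_inner, ncores = 2)
dim(res)  # 20 x 3: one row per element of toy_list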
Further remarks:
it's often useless to use more than half of the cores you have on your computer.
the .multicombine option is automatically enabled when .combine is rbind, but the .maxcombine option is important too (its default of 100 may need to be raised). Here we use lapply for the sequential part, so this remark doesn't matter.
it's useless to export lots of objects when using foreach; it already exports what is necessary from the environment of dummy_fun2.
are you sure you want to use huge_list[i] (get a list of one element) rather than huge_list[[i]] (get the i-th element of the list)? The difference is illustrated just below.
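To make that last remark concrete, a quick toy illustration of the two subsetting operators:
lst <- list(10, 20, 30)
lst[2]    # a list of length one containing 20
lst[[2]]  # the element itself: 20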
I've looked through much of the documentation and done a fair amount of Googling, but can't find an answer to the following question: Is there a way to induce 'next-like' functionality in a parallel foreach loop using the foreach package?
Specifically, I'd like to do something like (this doesn't work with next but does without):
foreach(i = 1:10, .combine = "c") %dopar% {
n <- i + floor(runif(1, 0, 9))
if (n %% 3) {next}
n
}
I realize I can nest my brackets, but if I want to have a few next conditions over a long loop this very quickly becomes a syntax nightmare.
Is there an easy workaround here (either next-like functionality or a different way of approaching the problem)?
You could put your code in a function and call return. It's not clear from your example what you want to happen when n %% 3 is nonzero, so I'll return NA.
funi <- function(i) {
  n <- i + floor(runif(1, 0, 9))
  if (n %% 3) return(NA)
  n
}
foreach(i = 1:10, .combine = "c") %dopar% { funi(i) }
Although it seems strange, you can use a return in the body of a foreach loop, without the need for an auxiliary function (as demonstrated by @Aaron):
r <- foreach(i = 1:10, .combine = 'c') %dopar% {
  n <- i + floor(runif(1, 0, 9))
  if (n %% 3) return(NULL)
  n
}
A NULL is returned in this example; since NULL values are filtered out by the c function, this can be a useful way to drop results entirely, as the short example below shows.
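The c function simply drops NULLs when concatenating, which is why those tasks disappear from the combined result:
c(1, NULL, 3)
# [1] 1 3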
Also, although it doesn't work well for your example, the when function can take the place of next at times, and is useful for preventing the computation from taking place at all:
r <- foreach(i = 1:5, .combine = 'c') %:%
  foreach(j = 1:5, .combine = 'c') %:%
  when(i != j) %dopar% {
    10 * i + j
  }
The inner expression is only evaluated 20 times, not 25. This is particularly useful with nested foreach loops, since when has access to all of the upstream iterator values.
Update
If you want to filter out NULLs when returning the results in a list, you need to write your own combine function. Here's a complete example that demonstrates a combine function that works like the default combine function but includes a filtering mechanism:
library(doSNOW)
cl <- makeSOCKcluster(3)
registerDoSNOW(cl)
filteredlist <- function(a, ...) {
  values <- list(...)
  c(a, values[!sapply(values, is.null)])
}
r <- foreach(i = 1:200, .combine = 'filteredlist', .init = list(),
             .multicombine = TRUE) %dopar% {
  # filter out odd values of i
  if (i %% 2) return(NULL)
  i
}
Note that this code works correctly when there are more than 100 task results (100 is the default value of the .maxcombine option).
I am trying to use foreach and am having problems making the .combine function scalable. For example, here is a simple combine function
MyComb <- function(part1, part2) {
  xs <- c(part1$x, part2$x)
  ys <- c(part1$y, part2$y)
  return(list(xs, ys))
}
When I use this function as the .combine of a foreach statement with more than two iterations, the result comes back incorrectly. For example, this works:
x = foreach(i=1:2,.combine=MyComb) %dopar% list("x"=i*2,"y"=i*3)
But not this:
x = foreach(i=1:3,.combine=MyComb) %dopar% list("x"=i*2,"y"=i*3)
Is there a way to generalize the combine function to make it scalable to n iterations?
Your .combine function must either take two pieces and return something that "looks" like a piece (i.e., something that could be passed back in as a part), or take many arguments and put all of them together at once (with the same restriction). So at a minimum, your MyComb must return a list with components x and y, which is what each piece produced by your %dopar% body looks like.
A couple of ways to do this:
MyComb1 <- function(part1, part2) {
  list(x = c(part1$x, part2$x), y = c(part1$y, part2$y))
}
x = foreach(i=1:3,.combine=MyComb1) %dopar% list("x"=i*2,"y"=i*3)
This version takes only two pieces at a time.
MyComb2 <- function(...) {
  dots <- list(...)
  ret <- lapply(names(dots[[1]]), function(e) {
    unlist(sapply(dots, '[[', e))
  })
  names(ret) <- names(dots[[1]])
  ret
}
s = foreach(i=1:3,.combine=MyComb2) %dopar% list("x"=i*2,"y"=i*3)
x = foreach(i=1:3,.combine=MyComb2, .multicombine=TRUE) %dopar% list("x"=i*2,"y"=i*3)
This one can take multiple pieces at a time and combine them. It is more general (but more complex).
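As a quick sanity check (assuming a registered backend), both calls above should produce the same combined result:
str(x)
# List of 2
#  $ x: num [1:3] 2 4 6
#  $ y: num [1:3] 3 6 9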