Objects unavailable to some nodes in multiple cluster parallel processing - r

I am trying to optimize a function with the genoud genetic optimizer in R, which is part of library(rgenoud). I have limited experience with setting up parallel processing in R.
genoud has a built-in cluster option that makes a cluster from a vector of known machine names. I want to set up parallel processing on a single multi-core machine. I understand the local machine's name is localhost and that repeating the name will use more than one core on the machine.
This approach seems to be working fine for simple functions such as:
genout <- genoud(sin, 1, cluster=c('localhost','localhost','localhost'))
However, when I optimize more complex functions, some of the worker nodes or environments cannot find all of the functions involved. Here is an example:
fun1 <- function(x) {sin(x)}
fun2 <- function(x) {
  x <- fun1(x)
  ret <- x + cos(x)
  return(ret)
}
genout <- genoud(fun2, 1, cluster=c('localhost','localhost','localhost'))
This gives error
Error in checkForRemoteErrors(val) :
3 nodes produced errors; first error: could not find function "fun1"
One solution seems to be to embed fun1 into fun2. However, this seems inefficient if fun2 is run a large number of times (in other cases than the silly example). Is it true that the only way to solve this problem is embedding fun1 in fun2?
Edit: Even when embedding, there are more problems when objects need to be passed to fun2 via genoud, see:
y=1
fun2 <- function(x,z) {
  fun1 <- function(x) {sin(x)}
  x <- fun1(x)
  ret <- x + cos(x)*z
  return(ret)
}
genout <- genoud(fun2, 1, cluster=c('localhost','localhost','localhost'), z=y)
Error in checkForRemoteErrors(val) :
3 nodes produced errors; first error: object 'y' not found

I think the problem is that the genoud function is not properly evaluating and sending the additional arguments to the workers. In this case, the argument z is sent to the workers without being evaluated, so the workers have to evaluate it, but they fail because the variable y wasn't sent to the workers.
A work-around is to export the necessary variables to the workers using clusterExport, which requires you to explicitly create the cluster object for genoud to use:
library(rgenoud)
library(parallel)
cl <- makePSOCKcluster(3)
fun1 <- function(x) {sin(x)}
fun2 <- function(x, z) {
  x <- fun1(x)
  x + cos(x) * z
}
y <- 1
clusterExport(cl, c('fun1', 'y'))
genout <- genoud(fun2, 1, cluster=cl, z=y)
This also exports fun1, which is necessary since genoud doesn't know that fun2 depends on it.
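A housekeeping note beyond the original answer: because the cluster object is created explicitly, it can be reused for further genoud() calls, and it should be shut down when you are finished. If fun1 or fun2 needed a package, that package would also have to be loaded on each worker; somePackage below is only a placeholder:
# load any package the objective function needs on every worker
# (somePackage is a placeholder, not a dependency of this example)
# clusterEvalQ(cl, library(somePackage))

# shut the workers down once all genoud() runs are done
stopCluster(cl)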

Related

R: get list and environment of all variables and functions within a given function (for parallel processing)

I am using foreach for parallel processing, which requires manually passing functions, via a list, to the environments of the worker cores. I want to automate this process and cover all use cases. This is easy for simple functions that use only their own local variables, but complications arise as soon as the functions to be run in parallel use arguments and variables defined in another environment. Consider the following case:
global.variable <- 3
global.function <- function(j){
  res <- j^2
  return(res)
}
compute.in.parallel <- function(i){
  res <- global.function(i+global.variable)
  return(res)
}
pop <- seq(10)
do <- function(pop,fun){
  require(doParallel)
  require(foreach)
  cl <- makeCluster(16)
  registerDoParallel(cl)
  clusterExport(cl,list("global.variable","global.function"),envir=globalenv())
  results <- foreach(i=pop) %dopar% fun(i)
  stopCluster(cl)
  return(results)
}
do(pop,compute.in.parallel)
This works because I manually pass global.variable and global.function to the cores as well (note that compute.in.parallel itself is automatically within scope):
clusterExport(cl,list("global.variable","global.function"),envir=globalenv())
But I want to do this automatically, which requires building a character vector of all variables and functions that are used (but not defined, passed, or contained) within compute.in.parallel. How do I do this?
My current workaround is to dump all available variables to the cores:
clusterExport(cl,as.list(unique(c(ls(.GlobalEnv),ls(environment())))),envir=environment())
This is, however, not satisfactory: it does not cover variables in package namespaces and other hidden environments, and it generally passes far too many variables to the cores, creating significant overhead with every parallel run.
Any suggested improvements?
Just pass all arguments that are needed in do(), rather than using global variables.
compute.in.parallel <- function(i, global.variable, global.function) {
  global.function(i + global.variable)
}
do <- function(pop, fun, ncores = parallel::detectCores() - 1, ...) {
  require(foreach)
  cl <- parallel::makeCluster(ncores)
  on.exit(parallel::stopCluster(cl), add = TRUE)
  doParallel::registerDoParallel(cl)
  foreach(i = pop) %dopar% fun(i, ...)
}
do(seq(10), compute.in.parallel,
   global.variable = 3,
   global.function = function(j) j^2)
The future framework automatically identifies and exports globals by default. The doFuture package provides a generic future backend adaptor for foreach. If you use that, the following works:
do <- function(pop, fun) {
  library("doFuture")
  registerDoFuture()
  cl <- parallel::makeCluster(2)
  old_plan <- plan(cluster, workers = cl)
  on.exit({
    plan(old_plan)
    parallel::stopCluster(cl)
  })
  foreach(i = pop) %dopar% fun(i)
}
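If you would rather stay with an explicit cluster, the detection step the question asks about can be approximated with codetools::findGlobals(), which lists the names a closure uses but does not define. This is only a sketch: the helper name export_globals is made up here, and it does not recurse into the globals it finds (so globals used inside global.function itself would still be missed):
library(parallel)
library(codetools)  # recommended package shipped with most R installations

export_globals <- function(cl, fun, envir = globalenv()) {
  # names used by `fun` but not defined inside it
  globs <- findGlobals(fun, merge = TRUE)
  # keep only names actually bound in `envir`; base functions such as `+`
  # are dropped because the workers already have them
  globs <- globs[vapply(globs, exists, logical(1),
                        envir = envir, inherits = FALSE)]
  if (length(globs) > 0) clusterExport(cl, globs, envir = envir)
  invisible(globs)
}

cl <- makeCluster(2)
export_globals(cl, compute.in.parallel)  # exports global.variable and global.function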

run r*ply like function in parallel [duplicate]

I am fond of the parallel package in R and how easy and intuitive it is to do parallel versions of apply, sapply, etc.
Is there a similar parallel function for replicate?
You can just use the parallel versions of lapply or sapply: instead of saying "replicate this expression n times", you apply over 1:n, and instead of giving an expression, you wrap that expression in a function that ignores the argument sent to it.
Possibly something like:
#create cluster
library(parallel)
cl <- makeCluster(detectCores()-1)
# get library support needed to run the code
clusterEvalQ(cl,library(MASS))
# put objects in place that might be needed for the code
myData <- data.frame(x=1:10, y=rnorm(10))
clusterExport(cl,c("myData"))
# Set a different seed on each member of the cluster (just in case)
clusterSetRNGStream(cl)
#... then parallel replicate...
parSapply(cl, 1:10000, function(i,...) { x <- rnorm(10); mean(x)/sd(x) } )
#stop the cluster
stopCluster(cl)
as the parallel equivalent of:
replicate(10000, {x <- rnorm(10); mean(x)/sd(x) } )
Using clusterEvalQ as a model, I think I would implement a parallel replicate as:
parReplicate <- function(cl, n, expr, simplify=TRUE, USE.NAMES=TRUE)
  parSapply(cl, integer(n), function(i, ex) eval(ex, envir=.GlobalEnv),
            substitute(expr), simplify=simplify, USE.NAMES=USE.NAMES)
The arguments simplify and USE.NAMES are compatible with sapply rather than replicate, but they make it a better wrapper around parSapply in my opinion.
Here's an example derived from the replicate man page:
library(parallel)
cl <- makePSOCKcluster(3)
hist(parReplicate(cl, 100, mean(rexp(10))))
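One detail the snippet above leaves out: for simulations you will usually also want reproducible, independent random-number streams on the workers, which clusterSetRNGStream() provides (reusing the cl created just above):
clusterSetRNGStream(cl, iseed = 123)        # one L'Ecuyer-CMRG stream per worker
hist(parReplicate(cl, 100, mean(rexp(10)))) # now reproducible given iseed
stopCluster(cl)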
The future.apply package provides a plug-in replacement for replicate() that runs in parallel and uses statistically sound parallel random number generation out of the box:
library(future.apply)
plan(multisession, workers = 4)
y <- future_replicate(100, mean(rexp(10)))

Parallel model scoring R

I'm trying to use the snow package to score an elastic net model in R, but I can't figure out how to get the predict function to run across multiple nodes in the cluster. The code below contains both a timing benchmark and the actual code producing the error:
##############
#Snow example#
##############
library(snow)
library(glmnet)
library(mlbench)
data(BostonHousing)
BostonHousing$chas<-as.numeric(BostonHousing$chas)
ind<-as.matrix(BostonHousing[,1:13],col.names=TRUE)
dep<-as.matrix(BostonHousing[,14],col.names=TRUE)
fit_lambda<-cv.glmnet(ind,dep)
#fit elastic net
fit_en<<-glmnet(ind,dep,family="gaussian",alpha=0.5,lambda=fit_lambda$lambda.min)
ind_exp<-rbind(ind,ind)
#single thread baseline
i<-0
while(i < 2000){
  ind_exp<-rbind(ind_exp,ind)
  i = i+1
}
system.time(st<-predict(fit_en,ind_exp))
#formula for parallel execution
pred_en<-function(x){
  x<-as.matrix(x)
  return(predict(fit_en,x))
}
#make the cluster
cl<-makeSOCKcluster(4)
clusterExport(cl,"fit_en")
clusterExport(cl,"pred_en")
#parallel baseline
system.time(mt<-parRapply(cl,ind_exp,pred_en))
I have been able to parallelize via forking on a Linux box using multicore, but I ended up having to use a pretty poorly performing mclapply combined with unlist and was looking for a better way to do it with snow (that would incidentally work on both my dev windows PC and my prod Linux servers). Thanks SO.
I should start by saying that the predict.glmnet function doesn't seem to be compute intensive enough to be worth parallelizing. But this is an interesting example, and my answer may be helpful to you, even if this particular case isn't worth parallelizing.
The main problem is that the parRapply function is a parallel wrapper around apply, which in turn calls your function on the rows of the submatrices, which isn't what you want. You want your function to be called directly on the submatrices. Snow doesn't contain a convenience function that does that, but it's easy to write one:
rowchunkapply <- function(cl, x, fun, ...) {
  do.call('rbind', clusterApply(cl, splitRows(x, length(cl)), fun, ...))
}
Another problem in your example is that you need to load glmnet on the workers so that the correct predict function is called. You also don't need to explicitly export the pred_en function, since that is handled for you.
Here's my version of your example:
library(snow)
library(glmnet)
library(mlbench)
data(BostonHousing)
BostonHousing$chas <- as.numeric(BostonHousing$chas)
ind <- as.matrix(BostonHousing[,1:13], col.names=TRUE)
dep <- as.matrix(BostonHousing[,14], col.names=TRUE)
fit_lambda <- cv.glmnet(ind, dep)
fit_en <- glmnet(ind, dep, family="gaussian", alpha=0.5,
lambda=fit_lambda$lambda.min)
ind_exp <- do.call("rbind", rep(list(ind), 2002))
# make and initialize the cluster
cl <- makeSOCKcluster(4)
clusterEvalQ(cl, library(glmnet))
clusterExport(cl, "fit_en")
# execute a function on row chunks of x and rbind the results
rowchunkapply <- function(cl, x, fun, ...) {
  do.call('rbind', clusterApply(cl, splitRows(x, length(cl)), fun, ...))
}
# worker function
pred_en <- function(x) {
  predict(fit_en, x)
}
mt <- rowchunkapply(cl, ind_exp, pred_en)
You may also be interested in using the cv.glmnet parallel option, which uses the foreach package.
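For reference, a small sketch of that option; it parallelizes the cross-validation folds (not the prediction step) and only needs a foreach backend registered first:
library(doParallel)
registerDoParallel(cores = 4)                       # register a foreach backend
fit_lambda <- cv.glmnet(ind, dep, parallel = TRUE)  # folds are fitted in parallel
stopImplicitCluster()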

Variable scope in boot in a multiclustered parallel approach

I'm trying to figure out how to pass functions and packages to the boot() function when running parallel computations. It seems very expensive to load a package or define functions inside a loop. The foreach() function that I often use for other parallel tasks has .packages and .export arguments that handle this nicely (see this SO question), but I can't figure out how to do the same with the boot package.
Below is a meaningless example that shows what happens when switching to parallel:
library(boot)
myMean <- function(x) mean(x)
meaninglessTest <- function(x, i){
  return(myMean(x[i]))
}
x <- runif(1000)
bootTest <- function(){
  out <- boot(data=x, statistic=meaninglessTest, R=10000, parallel="snow", ncpus=4)
  return(boot.ci(out, type="perc"))
}
bootTest()
This complains (as expected) that it can't find myMean.
Side note: when running this example, it is slower than the single-core version, probably because splitting this simple task over multiple cores costs more time than the task itself. Why isn't the default to split the job into even batches of R/ncpus replicates per core? Is there a reason this isn't the default behavior?
Update on the sidenote: As Steve Weston noted, the parLapply that boot() uses actually splits the job into even batches/chunks. The function is a neat wrapper for clusterApply:
docall(c, clusterApply(cl, splitList(x, length(cl)), lapply,
fun, ...))
I'm a little surprised that this doesn't perform better when I scale up the number of repetitions:
> library(boot)
> set.seed(10)
> x <- runif(1000)
>
> Reps <- 10^4
> start_time <- Sys.time()
> res <- boot(data=x, statistic=function(x, i) mean(x[i]),
+ R=Reps, parallel="no")
> Sys.time()-start_time
Time difference of 0.52335 secs
>
> start_time <- Sys.time()
> res <- boot(data=x, statistic=function(x, i) mean(x[i]),
+ R=Reps, parallel="snow", ncpus=4)
> Sys.time()-start_time
Time difference of 3.539357 secs
>
> Reps <- 10^5
> start_time <- Sys.time()
> res <- boot(data=x, statistic=function(x, i) mean(x[i]),
+ R=Reps, parallel="no")
> Sys.time()-start_time
Time difference of 5.749831 secs
>
> start_time <- Sys.time()
> res <- boot(data=x, statistic=function(x, i) mean(x[i]),
+ R=Reps, parallel="snow", ncpus=4)
> Sys.time()-start_time
Time difference of 23.06837 secs
I hope that this is only due to the very simple mean function and that more complex cases behave better. I must admit that I find it a little disturbing, as the cluster initialization time should be the same in the 10,000 and 100,000 cases, yet the absolute time difference increases and the 4-core version takes 5 times longer. I guess this must be an effect of the list merging, as I can't find any other explanation for it.
If the function to be executed in parallel (meaninglessTest in this case) has extra dependencies (such as myMean), the standard solution is to export those dependencies to the cluster via the clusterExport function. That requires creating a cluster object and passing it to boot via the "cl" argument:
library(boot)
library(parallel)
myMean <- function(x) mean(x)
meaninglessTest <- function(x, i){
  return(myMean(x[i]))
}
cl <- makePSOCKcluster(4)
clusterExport(cl, 'myMean')
x <- runif(1000)
bootTest <- function() {
  out <- boot(data=x, statistic=meaninglessTest, R=10000,
              parallel="snow", ncpus=4, cl=cl)
  return(boot.ci(out, type="perc"))
}
bootTest()
stopCluster(cl)
Note that once the cluster workers have been initialized, they can be used by boot many times and do not need to be reinitialized, so it isn't that expensive.
To load packages on the cluster workers, you can use clusterEvalQ:
clusterEvalQ(cl, library(randomForest))
That's nice and simple, but for more complex worker initialization, I usually create a "worker init" function and execute it via clusterCall, which is perfect for executing a function once on each of the workers.
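A minimal sketch of that pattern (worker_init is just an illustrative name, using the cl from above):
worker_init <- function() {
  # run once per worker: load packages, set options, etc.
  library(boot)
  options(warn = 1)
  NULL  # return something tiny so no large object is shipped back to the master
}
clusterCall(cl, worker_init)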
As for your side note, the performance is bad because the statistic function does so little work, as you say, but I'm not sure why you think that the work isn't being split evenly between the workers. The parLapply function is used to do the work in parallel in this case, and it does split the work evenly and rather efficiently, but that doesn't guarantee better performance than running sequentially using lapply. But perhaps I'm misunderstanding your question.

parSapply not finding objects in global environment

I am trying to run code on several cores (I tried both the snow and parallel packages). I have
cl <- makeCluster(2)
y <- 1:10
sapply(1:5, function(x) x + y) # Works
parSapply(cl, 1:5, function(x) x + y)
The last line returns the error:
Error in checkForRemoteErrors(val) :
2 nodes produced errors; first error: object 'y' not found
Clearly parSapply isn't finding y in the global environment. Any ways to get around this? Thanks.
The nodes don't know about the y in the global environment on the master. You need to tell them somehow.
library(parallel)
cl <- makeCluster(2)
y <- 1:10
# add y to function definition and parSapply call
parSapply(cl, 1:5, function(x,y) x + y, y)
# export y to the global environment of each node
# then call your original code
clusterExport(cl, "y")
parSapply(cl, 1:5, function(x) x + y)
It is worth mentioning that your example will work if parSapply is called from within a function, although the real issue is where the function function(x) x + y is created. For example, the following code works correctly:
library(parallel)
fun <- function(cl, y) {
  parSapply(cl, 1:5, function(x) x + y)
}
cl <- makeCluster(2)
fun(cl, 1:10)
stopCluster(cl)
This is because functions that are created in other functions are serialized along with the local environment in which they were created, while functions created from the global environment are not serialized along with the global environment. This can be useful at times, but it can also lead to a variety of problems if you're not aware of the issue.
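A quick sketch of the kind of problem meant here: because the whole enclosing environment is serialized, large local objects travel to every worker even if the inner function never touches them.
fun_heavy <- function(cl, y) {
  big <- rnorm(1e7)  # ~80 MB the workers never use...
  # ...but it lives in the enclosing environment of the anonymous function below,
  # so it is serialized and sent to every worker along with y
  parSapply(cl, 1:5, function(x) x + y)
}
Passing y as an explicit argument (as in parSapply(cl, 1:5, function(x, y) x + y, y) above) avoids shipping the extra baggage.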
