Does R have any mechanism to run different calculations in different threads (a Windows-like mechanism of threads/tasks)? Suppose I have two functions:
func1 <- function(x) { return(x^2) }
func2 <- function(y) { return(y^3) }
I need to execute something like this (imagined pseudocode):

thread1 <- thread_run(func1)
thread2 <- thread_run(func2)

with some synchronization mechanism, like:

wait(thread1)
wait(thread2)
You can do that with the future package:

install.packages("future")
library(future)
Then drop the imagined thread_run() wrapper and turn each assignment into a future assignment:

thread1 %<-% func1(x)
thread2 %<-% func2(y)

Reading thread1 or thread2 later blocks until the corresponding future has resolved, which plays the role of wait().
More to read here: http://www.r-bloggers.com/a-future-for-r-slides-from-user-2016/
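Putting it together, here is a minimal runnable sketch; plan(multisession) is one choice of backend (an assumption, since the post does not specify one):

library(future)
plan(multisession) # evaluate futures in parallel background R sessions

func1 <- function(x) x^2
func2 <- function(y) y^3

thread1 %<-% func1(10) # starts evaluating asynchronously
thread2 %<-% func2(10)

# Reading the values blocks until each future resolves, i.e. the wait() step:
thread1 # 100
thread2 # 1000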
I have an expensive problem I'm trying to split into pieces.
It's an optimization problem consisting of an initial expensive setup step followed by a recursive structure, such that the workers can only perform one step at a time before the results must be collected and a new task sent to the workers.
A complicating feature is that the initial setup step for the sub-computations has to be performed directly on each worker and cannot be exported to the workers via clusterExport or similar.
I had hoped to be able to use clusterApply to assign the outcome of this initial setup to be stored on the specific worker, but can't seem to achieve this.
The first part of my code below shows my current attempts and describes what I would like, the second shows an attempt to see all objects available on the worker and where they are located.
library(parallel)

### What I would like to do:
test2 <- function() {
  MYOBJECT <- 0
  cl <- makeCluster(2, type = 'PSOCK')
  clusterExport(cl, c('MYOBJECT'), envir = environment())
  clusterApply(cl, 1:2, function(x) { # attempt to modify/create MYOBJECT on the worker processes
    y <- x * 2 # expensive operation I only want to do once, that *cannot* be exported to the worker
    MYOBJECT <<- y
    MYOBJECT <- y
    assign('MYOBJECT', y, envir = parent.frame())
  })
  clusterApply(cl, 1:2, function(x) MYOBJECT * .5) # cheap operation to be done many times
}
test2() # should return a list of 1 and 2, without assigning into the test2 function environment / re-exporting
# trying to find out where MYOBJECT is on the worker
test <- function() {
  MYOBJECT <- 1
  cl <- makeCluster(1, type = 'PSOCK')
  clusterExport(cl, c('MYOBJECT'), envir = environment())
  clusterApply(cl, 1, function(x) {
    MYOBJECT <<- list('hello')
    assign('MYOBJECT', list('hellohello'), envir = parent.frame())
  })
  clusterApply(cl, 1, function(x)
    lapply(sys.frames(), ls) # where is MYOBJECT?
  )
}

test()
The solution turned out to be simple: to modify the contents of individual workers persistently, the assignment inside the clusterApply function needs to be made to the worker's global environment.
library(parallel)

### Working version:
test2 <- function() {
  MYOBJECT <- 0
  cl <- makeCluster(2, type = 'PSOCK')
  clusterExport(cl, c('MYOBJECT'), envir = environment())
  clusterApply(cl, 1:2, function(x) {
    y <- x * 2 # expensive operation, done only once per worker
    assign('MYOBJECT2', y, envir = globalenv()) # persists in the worker's global env
  })
  clusterApply(cl, 1:2, function(x) MYOBJECT2 * .5) # cheap operation to be done many times
}
test2() # returns a list of 1 and 2, without assigning into the test2 function environment / re-exporting
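For reference, clusterEvalQ() achieves the same persistence without an explicit assign() call, because the expression it sends is evaluated in each worker's global environment. A minimal sketch, with Sys.getpid() standing in for the expensive per-worker setup:

library(parallel)
cl <- makeCluster(2, type = 'PSOCK')

# Assignments inside clusterEvalQ() land in each worker's global environment,
# so MYOBJECT persists across subsequent cluster calls.
clusterEvalQ(cl, { MYOBJECT <- Sys.getpid() })

clusterApply(cl, 1:2, function(x) MYOBJECT * x) # cheap repeated step
stopCluster(cl)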
In R, is there a way to exit from the calling function and return a value? Something like return(), but from the parent function?
parent <- function() {
  child()
  # stuff afterward should not be executed
}

child <- function() {
  returnFromParent("a message returned by parent()")
}
It seems stop() is doing something like that. What I want to do is to write a small replacement for stop() that returns the message that stop() writes to stderr.
Update after G5W's suggestion: I have a large number of checks, each resulting in a stop() if the test fails. Subsequent conditions cannot be evaluated if earlier checks fail, so the function must exit after the first failing one. Doing this 'properly' would require building up a huge if/else construct, which I wanted to avoid.
Got it. I guess I was looking for something like this:
parent <- function() {
  parent_killing_child()
  print("do not run this")
}

parent_killing_child <- function() {
  do.call(return, list("my message"), envir = sys.frame(-1))
}
parent()
Thanks for all the advice.
Disclaimer: This sounds like an XY problem. Printing the stop() message to stdout has little to no value: if running interactively it should not be a problem, and in a script you can just use the usual shell redirection 2>&1 to send stderr messages to stdout, or perhaps use sink(), as in the answer to this question.
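For reference, a minimal sink() sketch: base R's sink(file, type = "message") diverts the message stream (messages, warnings, errors) to an open connection.

zz <- file("errors.log", open = "wt")
sink(zz, type = "message")

try(stop("this message goes to errors.log")) # try() lets the script continue

# restore the message stream and close the file
sink(type = "message")
close(zz)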
Now, if I understood properly what you're after, I'd do something like the following to avoid too much refactoring.
First define a function to handle errors:
my_stop <- function() {
  e <- geterrmessage()
  print(e)
}
Now configure the system to send errors to your function (error handler) and suppress error messages:
options(error = my_stop)
options(show.error.messages=FALSE)
Now let's test it:
f1 <- function() {
  f2()
  print("This should not be seen")
}

f2 <- function() {
  stop("This is a child error message")
}
Output:
> f1()
[1] "Error in f2() : This is a child error message\n"
For the parent function, make a list of tests. Then loop over the tests, and return your message at the first failed test. Subsequent tests will not be executed after the first failure.
Sample code:
test1 <- function() { criteria <- TRUE;  return(ifelse(criteria, TRUE, FALSE)) }
test2 <- function() { criteria <- FALSE; return(ifelse(criteria, TRUE, FALSE)) }
test3 <- function() { criteria <- TRUE;  return(ifelse(criteria, TRUE, FALSE)) }

parent <- function() {
  tests <- c('test1', 'test2', 'test3')
  for (i in seq_along(tests)) {
    passed <- do.call(tests[i], args = list())
    #print(passed)
    if (!passed) {
      return(paste("Testing failed on test ", i, ".", sep = ''))
    }
  }
  return('Congrats! All tests passed!')
}
parent()
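Running parent() here returns "Testing failed on test 2." because test2() is the first test to fail; test3() is never evaluated.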
Update
Kudos to @chris for their clever application of do.call() in their successful solution.
In the five years since then, the tidyverse team has released the rlang package, which provides the apt function rlang::return_from() in tandem with rlang::return_to().
While base::return() can only return from the current local frame, these two functions will return from any frame on the current evaluation stack, between the global and the currently active context. They provide a way of performing arbitrary non-local jumps out of the function currently under evaluation.
Solution
Thus, you can simply do
child <- function() {
  rlang::return_from(
    # Return from the parent context (1 frame back).
    frame = rlang::caller_env(n = 1),
    # Return the message text.
    value = "some text returned by parent()"
  )
}
where the parent is identified via rlang::caller_env().
Results
When called from a parent() function
parent <- function() {
  child()
  # stuff afterward should not be executed
  return("text that should NOT be returned by parent()")
}
the child() function forces the parent to return early, like this:
parent()
#> [1] "some text returned by parent()"
Bonus
See my solution here for throwing an error from a parent (or from any arbitrary "ancestor").
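Relatedly, here is a minimal sketch of that idea with rlang's current API: rlang::abort() accepts a call argument, so the error can be reported as coming from an ancestor frame (caller_env() is an illustrative choice).

child <- function() {
  # Report the error as if it were signalled by child()'s caller.
  rlang::abort("failure detected in a check", call = rlang::caller_env())
}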
Let's suppose that I want to apply, in a parallel fashion, myfunction to each row of myDataFrame. Suppose that otherDataFrame is a data frame with two columns, COLUMN1_odf and COLUMN2_odf, used for some reason in myfunction. So I would like to write code using parApply like this:
clus <- makeCluster(4)
clusterExport(clus, list("myfunction", "%>%"))

myfunction <- function(fst, snd) {
  # otherFunction and aGlobalDataFrame are defined in the global env
  otherFunction(aGlobalDataFrame)
  # some code to create otherDataFrame **INTERNALLY** to this function
  otherDataFrame %>% filter(COLUMN1_odf == fst & COLUMN2_odf == snd)
  return(otherDataFrame)
}

do.call(bind_rows, parApply(clus, myDataFrame, 1, function(r) { myfunction(r[1], r[2]) }))
The problem here is that R doesn't recognize COLUMN1_odf and COLUMN2_odf even if I insert them in clusterExport. How can I solve this problem? Is there a way to "export" all the objects that snow needs, so that I don't have to enumerate each of them?
EDIT 1: I've added a comment (in the code above) to specify that otherDataFrame is created internally to myfunction.
EDIT 2: I've added some pseudo-code to generalize myfunction: it now uses a global data frame (aGlobalDataFrame) and another function (otherFunction).
After some experiments, I solved my problem (with Benjamin's suggestion, and considering the 'edit' I added to the question) with:
clus <- makeCluster(4)
clusterEvalQ(clus, { library(dplyr); library(magrittr) })
clusterExport(clus, c("myfunction", "otherFunction", "aGlobalDataFrame"))

myfunction <- function(fst, snd) {
  # otherFunction and aGlobalDataFrame are defined in the global env
  otherFunction(aGlobalDataFrame)
  # some code to create otherDataFrame **INTERNALLY** to this function
  otherDataFrame %>% dplyr::filter(COLUMN1_odf == fst & COLUMN2_odf == snd)
  return(otherDataFrame)
}

do.call(bind_rows, parApply(clus, myDataFrame, 1,
                            function(r) { myfunction(r[1], r[2]) }))
In this way I've registered aGlobalDataFrame, myfunction, and otherFunction: in short, all the functions and data used by the function that does the parallelized work (myfunction itself).
Now that I'm not looking at this on my phone, I can see a couple of issues.
First, you are not actually creating otherDataFrame in your function. You are trying to pipe an existing otherDataFrame into filter, and if otherDataFrame doesn't exist in the environment, the function will fail.
Second, unless you have already loaded the dplyr package into your cluster environments, you will be calling the wrong filter function.
Lastly, when you've called parApply, you haven't specified anywhere what fst and snd are supposed to be. Give the following a try:
clus <- makeCluster(4)
clusterEvalQ(clus, { library(dplyr); library(magrittr) })
clusterExport(clus, "myfunction")

myfunction <- function(otherDataFrame, fst, snd) {
  dplyr::filter(otherDataFrame, COLUMN1_odf == fst & COLUMN2_odf == snd)
}

# "fst" and "snd" below are placeholders for the relevant column names of
# myDataFrame; extra arguments to parApply() are passed on to the function.
do.call(bind_rows, parApply(clus, myDataFrame, 1,
                            function(r, df, fst, snd) { myfunction(df, r[fst], r[snd]) },
                            df = otherDataFrame, fst = "fst", snd = "snd"))
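Passing otherDataFrame and the column names through parApply()'s ... arguments keeps the workers from depending on objects in their global environments at all.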
I have a for loop that is something like this:
for (i in 1:150000) {
  tempMatrix <- functionThatDoesSomething() # calling a function
  finalMatrix <- cbind(finalMatrix, tempMatrix)
}
Could you tell me how to make this parallel?
I tried the following, based on an example online, but am not sure if the syntax is correct. It also didn't increase the speed much.
finalMatrix <- foreach(i = 1:150000, .combine = cbind) %dopar% {
  tempMatrix <- functionThatDoesSomething() # calling a function
  cbind(finalMatrix, tempMatrix)
}
Thanks for your feedback. I did look up parallel after I posted this question.
Finally, after a few tries, I got it running. I have added the code below in case it is useful to others.
library(foreach)
library(doParallel)

# set up parallel backend to use many processors
cores <- detectCores()
cl <- makeCluster(cores[1] - 1) # leave one core free so as not to overload your computer
registerDoParallel(cl)

finalMatrix <- foreach(i = 1:150000, .combine = cbind) %dopar% {
  tempMatrix <- functionThatDoesSomething() # calling a function
  # do other things if you want
  tempMatrix # equivalent to finalMatrix <- cbind(finalMatrix, tempMatrix)
}

# stop cluster
stopCluster(cl)
Note: if the user allocates too many processes, they may get this error: Error in serialize(data, node$con) : error writing to connection.
Note: if .combine in the foreach statement is rbind, the final object returned will be created by appending the output of each iteration row-wise.
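For example (a sketch; functionThatDoesSomething() is assumed to return one row at a time):

finalMatrix <- foreach(i = 1:150000, .combine = rbind) %dopar% {
  functionThatDoesSomething() # each iteration's result becomes one row
}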
Hope this is useful for folks trying out parallel processing in R for the first time like me.
References:
http://www.r-bloggers.com/parallel-r-loops-for-windows-and-linux/
https://beckmw.wordpress.com/2014/01/21/a-brief-foray-into-parallel-processing-with-r/
The language is R.
I have a couple of files:
utilities.non.foo.R
utilities.foo.R
utilities.R
foo is an in-house package that has been cobbled together (for image processing, although this is irrelevant). It works great, but only on Linux machines, and it is a huge pain to try and compile it even on those.
Basically, utilities.foo.R contains a whole lot of functions that require package foo.
The functions in here are called functionname.foo.
I'm about to start sharing this code with external collaborators who don't have this package or Linux, so I've written a file utilities.non.foo.R, which contains all the functions in utilities.foo.R, except the dependency on package foo has been removed.
These functions are all called functionname.non.foo.
The file utilities.R has a whole heap of this, for each function:
functionname <- function(...) {
  if (fooIsLoaded()) {
    functionname.foo(...)
  } else {
    functionname.non.foo(...)
  }
}
The idea is that one only needs to load utilities.R and if you happen to have package foo (e.g. my internal collaborators), you will use that backend. If you don't have foo (external collaborators), you'll use the non-foo backend.
My question is: is there some way to do the redirection for each function name without explicitly writing the above bit of code for every single function name?
This reminds me of how there is a print generic with methods print.table, print.data.frame, etc., but the user only needs to call print, and the appropriate method is chosen automatically.
I'd like to have that, except the method.class would be more like method.depends_on_which_package_is_loaded.
Is there any way to avoid writing a redirection function per function in my utilities.R file?
As Dirk says, just use a package. In this case, put all your new *.non.foo functions in a new package, which is also called foo. Distribute this foo to your collaborators, instead of your in-house version. That way your utilities code can just be
functionname <- function(...) functionname.foo(...)
without having to make any checks at all.
Here is an idea: write a function that sets f to either f.foo or f.non.foo. It could be called in a loop, over all functions in a given namespace (or all functions whose name ends in .foo).
dispatch <- function(s) {
  if (fooIsLoaded()) {
    f <- get(paste(s, "foo", sep = "."))
  } else {
    f <- get(paste(s, "non.foo", sep = "."))
  }
  assign(s, f, envir = .GlobalEnv) # you may want to use a namespace
}
f.foo <- function() cat("foo\n")
f.non.foo <- function() cat("non-foo\n")
fooIsLoaded <- function() TRUE
dispatch("f")
f()
fooIsLoaded <- function() FALSE
dispatch("f")
f()
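The loop mentioned above could look like this (a sketch, assuming all the .foo/.non.foo pairs live in the global environment):

# Call dispatch() for every function whose name ends in ".foo" (but not ".non.foo").
for (nm in ls(.GlobalEnv)) {
  if (grepl("\\.foo$", nm) && !grepl("\\.non\\.foo$", nm)) {
    dispatch(sub("\\.foo$", "", nm))
  }
}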
A simpler solution would be to give the same name to both functions, but put them in different namespaces/packages.
This sounds quite inefficient and inelegant, but how about:
funify <- function(f, g, package = "ggplot2") {
  if (paste("package:", package, sep = "") %in% search()) {
    f
  } else {
    message("how do you intend to work without ", package)
    g
  }
}
detach(package:ggplot2)
foo <- funify(paste, function(x) letters[x])
foo(1:10)

library(ggplot2)
foo <- funify(paste, function(x) letters[x])
foo(1:10)
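With ggplot2 detached, the fallback branch runs: funify() emits its message and foo(1:10) returns letters[1:10]. Once ggplot2 is attached again, foo is simply paste, so foo(1:10) returns the numbers coerced to character.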