R, dplyr and snow: how to parallelize functions which use dplyr

Let's suppose that I want to apply myfunction, in parallel, to each row of myDataFrame. Suppose that otherDataFrame is a dataframe with two columns, COLUMN1_odf and COLUMN2_odf, which are used inside myfunction. So I would like to write code using parApply like this:
clus <- makeCluster(4)
clusterExport(clus, list("myfunction","%>%"))
myfunction <- function(fst, snd) {
  # otherFunction and aGlobalDataFrame are defined in the global env
  otherFunction(aGlobalDataFrame)
  # some code to create otherDataFrame **INTERNALLY** to this function
  otherDataFrame %>% filter(COLUMN1_odf == fst & COLUMN2_odf == snd)
  return(otherDataFrame)
}
do.call(bind_rows, parApply(clus, myDataFrame, 1, function(r) { myfunction(r[1], r[2]) }))
The problem here is that R doesn't recognize COLUMN1_odf and COLUMN2_odf even if I insert them in clusterExport. How can I solve this problem? Is there a way to "export" all the objects that snow needs so that I don't have to enumerate each of them?
EDIT 1: I've added a comment (in the code above) to make clear that otherDataFrame is created internally to myfunction.
EDIT 2: I've added some pseudo-code to generalize myfunction: it now uses a global dataframe (aGlobalDataFrame) and another function (otherFunction).

After some experiments, I solved my problem (with Benjamin's suggestion and taking into account the edits I added to the question) with:
clus <- makeCluster(4)
clusterEvalQ(clus, {library(dplyr); library(magrittr)})
clusterExport(clus, c("myfunction", "otherFunction", "aGlobalDataFrame"))
myfunction <- function(fst, snd) {
  # otherFunction and aGlobalDataFrame are defined in the global env
  otherFunction(aGlobalDataFrame)
  # some code to create otherDataFrame **INTERNALLY** to this function
  otherDataFrame %>% dplyr::filter(COLUMN1_odf == fst & COLUMN2_odf == snd)
  return(otherDataFrame)
}
do.call(bind_rows, parApply(clus, myDataFrame, 1,
                            function(r) { myfunction(r[1], r[2]) }))
In this way I've registered aGlobalDataFrame, myfunction and otherFunction: in short, all the functions and data used by the function that parallelizes the job (myfunction itself).
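(As for the last part of the question, exporting everything without enumerating it: one option, which is my own assumption rather than something from this thread, is to pass ls() as the list of names, so every object currently in the global environment is exported to the workers. Convenient, but it copies everything, which can be wasteful for large sessions.)
# Sketch (assumption, not the accepted approach): export all global objects at once.
clus <- makeCluster(4)
clusterEvalQ(clus, library(dplyr))
clusterExport(clus, ls())   # ls() at top level returns the names of every global object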

Now that I'm not looking at this on my phone, I can see a couple of issues.
First, you are not actually creating otherDataFrame in your function. You are trying to pipe an existing otherDataFrame into filter, and if otherDataFrame doesn't exist in the environment, the function will fail.
Second, unless you have already loaded the dplyr package into your cluster environments, you will be calling the wrong filter function (stats::filter rather than dplyr::filter).
Lastly, when you've called parApply, you haven't specified anywhere what fst and snd are supposed to be. Give the following a try:
clus <- makeCluster(4)
clusterEvalQ(clus, {library(dplyr); library(magrittr)})
clusterExport(clus, "myfunction")
myfunction <- function(otherDataFrame, fst, snd) {
  dplyr::filter(otherDataFrame, COLUMN1_odf == fst & COLUMN2_odf == snd)
}
do.call(bind_rows, parApply(clus, myDataFrame, 1,
                            function(r, fst, snd) { myfunction(otherDataFrame, r[fst], r[snd]) },
                            "[fst]", "[snd]"))

Related

Is it valid to access global variables in an R function, and how to assign them in a package?

I have a package which provides a script and some functions. Within the script I assign a variable which will be used by the function. This works if the function gets executed within the script but might fail if I just call the function since the variable doesn't exist.
If I use devtools::check() I get warnings that the variable within the function isn't defined. How can I handle this properly?
Edit
I am thinking about using get() within the function to assign the variable locally and get rid of these warnings. So the question is: is myp2 the correct way of doing something like this? Maybe some tryCatch to handle errors?
ab <- c(1,2,3)

myp1 <- function() {
  print(ab)
  return(1)
}

myp2 <- function() {
  ab <- get('ab')
  print(ab)
  return(1)
}

myp1()
myp2()
You could do something like
if (!exists("your variable")) {
  stop("You have not defined your variable")
}
This would check whether the object you are looking for exists. A better practice would be to make the variable an argument of the function (optionally with a sensible default) and pass in the object you are looking for.
myp <- function(x) {
  print(x)
  return(1)
}

ab <- c(1,2,3)
myp(x = ab)
If possible, it would also be better to replace the script with a function.

Applying an operation to every dataframe in the global environment

I would like to create a prediction matrix (using mice) for each dataframe in my workspace. I thought of doing the following:
library(mice)
PredMatr = list()
try(for (i in 1:length(ls())) {
  PredMatr[[i]] = quickpred(get(ls()[i]), mincor = .1)
})
But it stops when it encounters something in the workspace that isn't a dataframe. How could I adapt my code to make the operation conditional on the object being a dataframe?
You can use eapply to test which objects in the environment have class data.frame and only work with those. For example:
Myls <- ls(sorted = FALSE)[eapply(.GlobalEnv, class) == "data.frame"]
and now Myls is a character vector of the names of the objects that are data.frames. These can then be fed into get().
eapply is like lapply, but it applies a function to every object in an environment rather than to every element of a list.
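A tiny self-contained sketch of that difference (the environment and objects here are made up purely for illustration):
# eapply() maps a function over the objects in an environment, much as
# lapply() maps over the elements of a list; the result is a named list.
e <- new.env()
e$df1 <- data.frame(x = 1:3)
e$v1  <- 1:5

eapply(e, class)
# a named list along the lines of list(v1 = "integer", df1 = "data.frame")
# (the iteration order over an environment is not guaranteed)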
Edit to add:
To use this in the original problem you can do the following:
library(mice)
PredMatr = list()
Myls <- ls(sorted = FALSE)[eapply(.GlobalEnv, class) == "data.frame"]
try(for (i in 1:length(Myls)) {
  PredMatr[[i]] = quickpred(get(Myls[i]), mincor = .1)
})
You could add
if(!is.data.frame(get(ls()[i]))) next;
to your code, then the loop will skip to the next iteration when it encounters a non-data.frame structure.
Answer to comment
library(mice)
PredMatr = list()
try(for (i in 1:length(ls())) {
  if (!is.data.frame(get(ls()[i]))) next
  PredMatr[[i]] = quickpred(get(ls()[i]), mincor = .1)
})
Should do the trick.

How to list all the function signatures in an R file?

Is there an R function that lists all the functions in an R script file along with their arguments?
i.e. an output of the form:
func1(var1, var2)
func2(var4, var10)
.
.
.
func10(varA, varB)
Using [sys.]source has the very undesirable side-effect of executing the source inside the file. At the worst this has security problems, but even “benign” code may simply have unintended side-effects when executed. At best it just takes unnecessary time (and potentially a lot).
It’s actually unnecessary to execute the code, though: it is enough to parse it, and then do some syntactical analysis.
The actual code is trivial:
file_parsed = parse(filename)
functions = Filter(is_function, file_parsed)
function_names = unlist(Map(function_name, functions))
And there you go, function_names contains a vector of function names. Extending this to also list the function arguments is left as an exercise to the reader. Hint: there are two approaches. One is to eval the function definition (now that we know it’s a function definition, this is safe); the other is to “cheat” and just get the list of arguments to the function call.
The implementation of the functions used above is also not particularly hard. There’s probably even something already in R core packages (‘utils’ has a lot of stuff) but since I’m not very familiar with this, I’ve just written them myself:
is_function = function (expr) {
  if (! is_assign(expr)) return(FALSE)
  value = expr[[3L]]
  is.call(value) && as.character(value[[1L]]) == 'function'
}

function_name = function (expr) {
  as.character(expr[[2L]])
}

is_assign = function (expr) {
  is.call(expr) && as.character(expr[[1L]]) %in% c('=', '<-', 'assign')
}
This correctly recognises function declarations of the forms
f = function (…) …
f <- function (…) …
assign('f', function (…) …)
It won’t work for more complex code, since assignments can be arbitrarily complex and in general are only resolvable by actually executing the code. However, the three forms above probably account for ≫ 99% of all named function definitions in practice.
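As a small sketch of the “cheat” approach from the hint (building on the helpers above, and with the same limitations): for the assignment forms recognised by is_function, expr[[3L]] is the function(…) call and its second element is the pairlist of formals, so the argument names can be read off without evaluating anything.
function_args = function (expr) {
  # expr[[3L]] is the `function(…)` call; element 2 holds its formals.
  names(as.list(expr[[3L]][[2L]]))
}

signatures = function (filename) {
  functions = Filter(is_function, parse(filename))
  vapply(functions, function (expr) {
    sprintf('%s(%s)', function_name(expr),
            paste(function_args(expr), collapse = ', '))
  }, character(1L))
}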
UPDATE: Please refer to the answer by @Konrad Rudolph instead
You can create a new environment, source your file in that environment and then list the functions in it using lsf.str() e.g.
test.env <- new.env()
sys.source("myfile.R", envir = test.env)
lsf.str(envir=test.env)
rm(test.env)
or if you want to wrap it as a function:
listFunctions <- function(filename) {
  temp.env <- new.env()
  sys.source(filename, envir = temp.env)
  functions <- lsf.str(envir = temp.env)
  rm(temp.env)
  return(functions)
}

using callCC with higher-order functions in R

I'm trying to figure out how to get R's callCC function for short-circuiting evaluation of a function to work with functions like lapply and Reduce.
Motivation
This would make Reduce and lapply able to do better than O(n), by allowing you to exit a computation early.
For example, if I'm searching for a value in a list I could map a 'finder' function across the list, and the second it is found lapply stops running and that value is returned (much like breaking a loop, or using a return statement to break out early).
The problem is I am having trouble writing the functions that lapply and Reduce should take using a style that callCC requires.
Example
Say I'm trying to write a function to find the value '100' in a list: something equivalent to
imperativeVersion <- function (xs) {
  for (val in xs) if (val == 100) return(val)
}
The function to pass to lapply would look like:
find100 <- function (val) { if (val == 100) SHORT_CIRCUIT(val) }
functionalVersion <- function (xs) lapply(xs, find100)
This (obviously) crashes, since the short circuiting function hasn't been defined yet.
callCC( function (SHORT_CIRCUIT) lapply(1:1000, find100) )
The problem is that this also crashes, because the short circuiting function wasn't around when find100 was defined. I would like for something similar to this to work.
The following works because SHORT_CIRCUIT IS defined at the time that the function passed to lapply is created.
callCC(
  function (SHORT_CIRCUIT) {
    lapply(1:1000, function (val) {
      if (val == 100) SHORT_CIRCUIT(val)
    })
  }
)
How can I make SHORT_CIRCUIT be defined in the function passed to lapply without defining it inline like above?
I'm aware this example can be achieved using loops, Reduce or any number of other ways. I am looking for a solution to the problem of using callCC with lapply and Reduce specifically.
If I was vague or any clarification is needed please leave a comment below. I hope someone can help with this :)
Edit One:
The approach should be 'production-quality'; no deparsing functions or similar black magic.
I found a solution to this problem:
find100 <- function (val) {
  if (val == 100) SHORT_CIRCUIT(val)
}

short_map <- function (fn, coll) {
  callCC(function (SHORT_CIRCUIT) {
    clone_env <- new.env(parent = environment(fn))
    clone_env$SHORT_CIRCUIT <- SHORT_CIRCUIT
    environment(fn) <- clone_env
    lapply(coll, fn)
  })
}

short_map(find100, c(1, 2, 100, 3))
The trick to making higher-order functions work with callCC is to assign the short-circuiting function into the input function's environment before carrying on with the rest of the program. I made a clone of the environment to avoid unintended side effects.
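The question also mentions Reduce; presumably the same environment-cloning trick carries over. A hedged sketch (short_reduce and add_until_100 are made-up names, not from the thread):
# Same idea as short_map() above, but wrapping Reduce(): the folding function
# can bail out early by calling SHORT_CIRCUIT() with the value to return.
short_reduce <- function (fn, coll, init) {
  callCC(function (SHORT_CIRCUIT) {
    clone_env <- new.env(parent = environment(fn))
    clone_env$SHORT_CIRCUIT <- SHORT_CIRCUIT
    environment(fn) <- clone_env
    Reduce(fn, coll, init)
  })
}

# e.g. stop folding as soon as the running total would exceed 100
add_until_100 <- function (acc, x) {
  if (acc + x > 100) SHORT_CIRCUIT(acc) else acc + x
}

short_reduce(add_until_100, 1:1000, 0)   # returns 91 (= 1 + 2 + ... + 13)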
You can achieve this using metaprogramming in R.
@alexis_laz's approach was in fact already metaprogramming.
However, he used strings, which are a dirty hack and error-prone, so you did well to reject that.
The correct way to refine @alexis_laz's approach is to work at the code (expression) level. In base R this is done using substitute(). There are also packages for this, e.g. rlang by Hadley Wickham, but I give you a base R solution (fewer dependencies).
lapply_ <- function(lst, FUN) {
  eval.parent(
    substitute(
      callCC(function(return_) {
        lapply(lst_, FUN_)
      }),
      list(lst_ = lst, FUN_ = substitute(FUN))))
}
Your SHORT_CIRCUIT function is really a more general control-flow return function (a break that takes the value to return), so I call it return_.
We want an lapply_ function in whose FUN= argument we can use return_ to break out of the usual lapply().
As you showed, this is the aim:
callCC(
  function (return_) {
    lapply(1:1000, function (x) if (x == 100) return_(x))
  }
)
The problem is that we want to be able to generalize this expression.
We want
callCC(
  function(return_) lapply(lst, FUN_)
)
where, inside the function definition we supply for FUN_, we can use return_.
The function definition can only see return_, however, if we insert its code literally into this expression.
This is exactly what @alexis_laz tried to do using a string and eval.
You did the same thing by manipulating environments.
We can safely achieve the insertion of literal code using substitute(expr, replacer_list) where expr is the code to be manipulated and replacer_list is the lookup table for the replacement of code.
By substitute(FUN) we take the literal code given for FUN= of lapply_ without evaluating it. This expression returns literal quoted code (better than the string in @alexis_laz's approach).
The big substitute command says: "Take the expression callCC(function(return_) lapply(lst_, FUN_)), replace lst_ in this expression by the list given for lst, and replace FUN_ by the literal quoted expression given for FUN."
This replaced expression is then evaluated in the parent environment (eval.parent()) meaning: the resulting expression replaces the lapply_() call and is executed exactly where it was placed.
Such use of eval.parent() (or eval(..., envir = parent.frame())) is foolproof (otherwise tidyverse packages wouldn't be production-level ...).
So in this way, you can generalize callCC() calls.
lapply_(1:1000, FUN=function(x) if (x==100) return_(x))
## [1] 100
I don't know if it can be of use, but:
find100 <- "function (val) { if (val == 100) SHORT_CIRCUIT(val) }"
callCC( function (SHORT_CIRCUIT) lapply(1:1000, eval(parse(text = find100))) )
#[1] 100

automatic redirection of functions

The language is R.
I have a couple of files:
utilities.non.foo.R
utilities.foo.R
utilities.R
foo is an in-house package that has been cobbled together (for image processing, although this is irrelevant). It works great, but only on Linux machines, and it is a huge pain to try and compile it even on those.
Basically, utilities.foo.R contains a whole lot of functions that require package foo.
The functions in here are called functionname.foo.
I'm about to start sharing this code with external collaborators who don't have this package or Linux, so I've written a file utilities.non.foo.R, which contains all the functions in utilities.foo.R, except the dependency on package foo has been removed.
These functions are all called functionname.non.foo.
The file utilities.R has a whole heap of this, for each function:
functionname <- function(...) {
  if (fooIsLoaded()) {
    functionname.foo(...)
  } else {
    functionname.non.foo(...)
  }
}
The idea is that one only needs to load utilities.R and if you happen to have package foo (e.g. my internal collaborators), you will use that backend. If you don't have foo (external collaborators), you'll use the non-foo backend.
My question is: is there some way to do the redirection for each function name without explicitly writing the above bit of code for every single function name?
This reminds me of how (e.g.) there is a print method, a print.table, print.data.frame, etc, but the user only needs to use print and which method is used is chosen automatically.
I'd like to have that, except the method.class would be more like method.depends_on_which_package_is_loaded.
Is there any way to avoid writing a redirection function per function in my utilities.R file?
As Dirk says, just use a package. In this case, put all your new *.non.foo functions in a new package, which is also called foo. Distribute this foo to your collaborators, instead of your in-house version. That way your utilities code can just be
functionname <- function(...) functionname.foo(...)
without having to make any checks at all.
Here is an idea: write a function that sets f to either f.foo or f.non.foo. It could be called in a loop over all functions in a given namespace (or all functions whose names end in .foo); a sketch of such a loop follows the example below.
dispatch <- function(s) {
  if (fooIsLoaded()) {
    f <- get(paste(s, "foo", sep = "."))
  } else {
    f <- get(paste(s, "non.foo", sep = "."))
  }
  assign(s, f, envir = .GlobalEnv)  # You may want to use a namespace
}

f.foo     <- function() cat("foo\n")
f.non.foo <- function() cat("non-foo\n")

fooIsLoaded <- function() TRUE
dispatch("f")
f()

fooIsLoaded <- function() FALSE
dispatch("f")
f()
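For the loop mentioned above, a hedged sketch building on dispatch() (the pattern handling is my assumption; note that names ending in .non.foo must be excluded from the match):
# Redirect every functionname.foo / functionname.non.foo pair in one pass,
# instead of writing a wrapper per function.
foo_names <- grep("\\.foo$", ls(envir = .GlobalEnv), value = TRUE)
foo_names <- foo_names[!grepl("\\.non\\.foo$", foo_names)]  # drop the .non.foo twins
for (nm in sub("\\.foo$", "", foo_names)) dispatch(nm)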
A simpler solution would be to give the same name
to both functions, but put them in different namespaces/packages.
This sounds quite inefficient and inelegant, but how about
funify = function(f, g, package = "ggplot2") {
  if (paste("package:", package, sep = "") %in% search()) f else {
    message("how do you intend to work without ", package)
    g
  }
}
detach(package:ggplot2)
foo = funify(paste, function(x) letters[x])
foo(1:10)
library(ggplot2)
foo = funify(paste, function(x) letters[x])
foo(1:10)
