R Script as a Function

I have a long script that involves data manipulation and estimation. I have it set up to use a set of parameters, but I would like to be able to run it multiple times with different sets of inputs, kind of like a function.
Running the script produces plots and saves estimates to a csv, I am not particularly concerned with the objects it creates.
I would rather not wrap the script in a function as it is meant to be used interactively.
How do people go about doing something like this?
I found this for command-line arguments: How to pass command-line arguments when source() an R file, but it still doesn't solve the interactive problem.

I have dealt with something similar before. Below is the solution I came up with.
I basically use list2env to push variables to either the global environment or the function's local environment,
and I then source the script in the designated environment.
This can be quite useful, especially when coupled with exists, as shown in the example below, which allows you to keep your script stand-alone.
These two questions may also be of help:
Source-ing an .R script within a function and passing a variable through (RODBC)
How to pass command-line arguments when source() an R file
# Function ----------------------------------------------------------------
subroutine <- function(file, param = list(), local = TRUE, ...) {
  list2env(param, envir = if (local) environment() else globalenv())
  source(file, local = local, ...)
}
# Example -----------------------------------------------------------------
# Create an example script
tmp <- "test_subroutine.R"
cat("if (!exists('msg')) msg <- 'no argument provided'; print(msg)", file = tmp)
# Example of using exists in the script to keep it stand-alone
subroutine(tmp)
# Evaluate in functions environment
subroutine(tmp, list(msg = "use function's environment"), local = TRUE)
exists("msg", envir = globalenv()) # FALSE
# Evaluate in global environment
subroutine(tmp, list(msg = "use global environment"), local = FALSE)
exists("msg", envir = globalenv()) # TRUE
unlink(tmp)
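To address the original goal of running one script over several input sets, the helper above can be driven by a loop. A minimal sketch; the script name and the parameter names (cutoff, label) are hypothetical stand-ins for whatever your script expects:
# Hypothetical driver: run the same analysis script once per parameter set
param_sets <- list(
  list(cutoff = 0.05, label = "strict"),
  list(cutoff = 0.10, label = "lenient")
)
for (p in param_sets) {
  subroutine("analysis_script.R", param = p, local = TRUE)
}
Since the script saves its plots and CSV output itself, nothing needs to be returned; each iteration just runs with its own inputs.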

Just to clarify what was alluded to in Hansi's comment, here is one approach to this issue:
Wrap the script into a function, since this will let you go up one level of abstraction if needed, and will also make it easier to call the function whenever it is needed in any other script.
In cases where you want to use the script interactively, you can put a browser() call somewhere in your script. At the point where browser() is called, the function will pause and keep the environment as-is within the function, and you can then step through the function and use R interactively from within the function.
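A minimal sketch of that pattern, with made-up parameter names and a made-up data column; the real script body goes where the placeholders are:
run_analysis <- function(input_file, threshold = 0.5, debug = FALSE) {
  dat <- read.csv(input_file)
  if (debug) browser()  # pauses here: inspect dat, step through, experiment
  est <- mean(dat$value > threshold)  # placeholder for the real estimation code
  write.csv(data.frame(estimate = est), "estimates.csv", row.names = FALSE)
}
# Batch use: run_analysis("data1.csv")
# Interactive use: run_analysis("data1.csv", debug = TRUE) drops you into the browser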

In the base package, check ?commandArgs; you can use it to parse out arguments from the command line.
If I have a script, test.R, containing the code:
args <- commandArgs(trailingOnly = TRUE)
for (arg in args) {
  print(arg)
}
and I call it from the command line with Rscript as follows:
Rscript test.R arg1 arg2 arg3
The output is:
[1] "arg1"
[1] "arg2"
[1] "arg3"

Related

Separate scripts from .GlobalEnv: source a script that sources other scripts

This question is similar to Source script to separate environment in R, not the global environment, but with a key twist.
Consider a script that sources another script:
# main.R
source("funs.R")
x <- 1
# funs.R
hello <- function() {message("Hi")}
I want to source the script main.R and keep everything in a "local" environment, say env <- new.env(). Normally, one could call source("main.R", local = env) and expect everything to be in the env environment. However, that's not the case here: x is part of env, but the function hello is not! It is in .GlobalEnv.
Question: How can I source a script to a separate environment in R, even if that script itself sources other scripts, and without modifying the other scripts being sourced?
Thanks for helping, and let me know if I can clarify anything.
EDIT 1: Updated question to be explicit that the scripts being sourced cannot be modified (assume they are not under your control).
You can use trace to inject code in functions,
so you could force all source calls to set local = TRUE.
Here I just override it if local is FALSE in case any nested calls to source actually set it to other environments due to special logic of their own.
env <- new.env()
# use !isTRUE if you want to support older R versions (<3.5.0)
tracer <- quote(
  if (isFALSE(local)) {
    local <- TRUE
  }
)
trace(source, tracer, print = FALSE, where = .GlobalEnv)
# if you're doing this inside a function, uncomment the next line
#on.exit(untrace(source, where = .GlobalEnv))
source("main.R", local = env)
As mentioned in the code,
if you wrap this logic in a function,
consider using on.exit to make sure you untrace even if there are errors.
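For example, a minimal sketch of such a wrapper (the name source_into is made up):
source_into <- function(file, env = new.env()) {
  tracer <- quote(if (isFALSE(local)) local <- TRUE)
  trace(source, tracer, print = FALSE, where = .GlobalEnv)
  on.exit(untrace(source, where = .GlobalEnv))  # untrace even if source() errors
  source(file, local = env)
  env
}
env <- source_into("main.R")
env$hello()
#> Hi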
EDIT: as mentioned in the comments,
this could have issues if some of the scripts you load assume there is one (global) environment where everything ends up.
I suppose you could change the tracer to something like
tracer <- quote(
  if (missing(local)) {
    local <- TRUE
  }
)
or maybe
tracer <- quote(
  if (isFALSE(local)) {
    # fetch the specific environment you created
    local <- get("env", .GlobalEnv)
  }
)
The former assumes that if the script didn't specify local at all,
it doesn't care which environment ends up holding everything.
The latter assumes that source calls that didn't specify local, or set it to FALSE, want everything to end up in one environment,
and modifies the logic to use your environment instead of the global one.
Disclaimer: Very ugly and potentially dangerous, but whatever.
Redefine source:
env <- new.env()
source <- function(...) base::source(..., local = env)
source("main.R")
# just remove your redefinition when you don't need it anymore
rm(source)
The best way to protect yourself from side effects of code you cannot control is isolation. You can use callr to easily execute the scripts isolated in a separate R session:
using environments:
env <- new.env()
env <- as.environment(callr::r(function(env) {
  list2env(env, .GlobalEnv)
  source("main.R")
  as.list(.GlobalEnv)
}, args = list(as.list(env))))
env
#> <environment: 0x0000000018124878>
env$hello()
#> Hi
simpler version sticking to lists:
params <- list()
results <- callr::r(function(params) {
  list2env(params, .GlobalEnv)
  source("main.R")
  as.list(.GlobalEnv)
}, args = list(params))
results
#> $x
#> [1] 1
#>
#> $hello
#> function ()
#> {
#> message("Hi")
#> }
results$hello()
#> Hi
The params part is only needed if you actually need to provide input to the scripts (it is not used in your example).
Obviously, this will not work for open connections and similar stuff. In that case, you might want to look into callr::r_session.
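A minimal sketch of the r_session alternative: it keeps one background R process alive, so state persists between calls without touching your own global environment:
rs <- callr::r_session$new()
rs$run(function() source("main.R"))  # runs in the session's own global env
rs$run(function() x)                 # objects persist between calls
#> [1] 1
rs$run(function() exists("hello"))   # the sourced function is there too
#> [1] TRUE
rs$close()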

Calling an R function in a different environment

I feel like it should be fairly straightforward to do this, but I can't for the life of me find a solution... I want to evaluate an R function in an environment different from the one where it was defined.
What I'd like:
# A simple function
f <- function() {
  x + 1
}
# Create an env and assign x <- 3
env <- new.env()
assign("x", 3, envir = env)
# Call f on env
call_on_env(f, env)
#> 4
The closest I got to "call_on_env()" was:
# Quote call and evaluate
quo <- quote(f())
eval(quo, envir = env)
Unfortunately the code above returns an error: Error in f() : object 'x' not found. So then... Is there a way for me to evaluate f() on env?
Edit: I'm able to send f() to env and then call it, but this leaves f() permanently there. For context [see below], I want to call the function in parallel with some pre-loaded packages.
Context: I'm calling a function in parallel with parallel::clusterMap() and I'd like for the packages loaded in my global environment to also be loaded on the clusters. As far as I can tell, parallel::clusterExport() can only export a list of variables, so it doesn't work for me...
Move f into env
environment(f) <- env
f()
# [1] 4
Note: Evaluation of objects across different environments is not desirable, as you have encountered here. It's best to keep all objects that you plan to interact with one another in the same environment.
If you don't want to change the environment of f, you could put all the above into a new function.
fx <- function(f, env) {
  environment(f) <- env
  f()
}
fx(f, env)
# [1] 4
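As an aside, for the parallel context mentioned in the question: you can load packages directly on the workers with parallel::clusterEvalQ(), rather than trying to push loaded packages through an environment. A sketch (the package is just an example):
library(parallel)
cl <- makeCluster(2)
clusterEvalQ(cl, library(MASS))  # evaluate library() on every worker
res <- clusterMap(cl, function(a, b) a + b, 1:3, 4:6)
stopCluster(cl)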
The source() function might help:
source('scriptfilename.R')
If the file is located in another path then use:
source('YOURPATH/scriptfilename.R')
When you run source(), it will pull all of the functions in that file into your current environment. You can then call any of the functions defined in that script.
However, I wouldn't recommend referencing functions or scripts outside of your R project folder structure, since the links will break if you share your project folder with others.

R functions that execute functions

I'm trying to break common lines of code used in a fairly large R script out into encapsulated functions... however, they don't seem to run the intended code when called. I feel like I'm missing some conceptual piece of how R works, or of functional programming in general.
Examples:
Here's a piece of code I'd like to call to clear the workspace -
clearWorkSpace <- function() {
  rm(list = ls(all = TRUE))
}
As noted, the code inside the function executes as expected when run directly; however, when the function itself is called, the global environment is not cleared.
Again, here's a function intended to load all dependency files -
loadDependencies <- function() {
  dep_files <- list.files(path = "./dependencies")
  for (file in dep_files) {
    file_path <- paste0("./dependencies/", file)
    source(file_path, local = TRUE)
  }
}
If possible, it'd be great to be able to encapsulate code into easy to read functions. Thanks for your help in advance.
What you are calling workspace is more properly referred to as the global environment.
Functions execute in their own environments. This is, for example, why you don't see the variables defined inside a function in your global environment. Also how a function knows to use a variable named x defined in the function body rather than some x you might happen to have in your global environment.
Most functions don't modify the external environments, which is good! It's the functional programming paradigm. Functions that do modify environments, such as rm and source, usually take arguments so that you can be explicit about which environment is modified. If you look at ?rm you'll see an envir argument, and that argument is most of what its Details section describes. source has a local argument:
local - TRUE, FALSE or an environment, determining where the parsed expressions are evaluated. FALSE (the default) corresponds to the user's workspace (the global environment) and TRUE to the environment from which source is called.
You explicitly set local = TRUE when you call source, which explicitly tells source to only modify the local (inside the function) environment, so of course your global environment is untouched!
To make your functions work as I assume you want them to, you could modify clearWorkSpace like this:
clearWorkSpace <- function() {
  rm(list = ls(all = TRUE, envir = .GlobalEnv), envir = .GlobalEnv)
}
And for loadDependencies, simply delete the local = TRUE (or, more explicitly, set local = FALSE or local = .GlobalEnv). You could also re-write it in a more R-like way:
loadDependencies <- function() {
  invisible(lapply(list.files(path = "./dependencies", full.names = TRUE), source))
}
For both of these (especially with the simplified dependency running above) I'd question whether you really need these wrapped up in functions. Might be better to just get in the habit of restarting R when you resume work on a project and keeping invisible(lapply(list.files(path = "./dependencies", full.names = TRUE), source)) at the top of your script...
For more reading on environments, see the Environments section of Advanced R. Notably, there are several ways to specify environments that might be useful for different use cases, rather than hard-coding the global environment.
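For instance, a minimal sketch of a version that takes the target environment as an argument instead of hard-coding the global environment (the name clearEnv is made up):
clearEnv <- function(envir = parent.frame()) {
  # by default clears the caller's environment; pass .GlobalEnv explicitly
  rm(list = ls(envir = envir, all.names = TRUE), envir = envir)
}
clearEnv(.GlobalEnv)  # note: also removes clearEnv itself if it is defined there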
In theory you just need to do something like:
rm(list = ls(all = TRUE, envir = .GlobalEnv), envir = .GlobalEnv)
That is, you set the environment explicitly (although it may be better here to use the pos argument). But this will also delete the clearWorkSpace function itself, since it is defined in the global environment, so a second call will fail.
Personally, I never use rm within a function or a local call. My understanding is that rm is intended to be called from the console to clear the workspace.

R user-defined functions in new environment

I use some small user-defined functions as helpers. These functions are all stored in an R_HOME_USER/helper directory. Until now, these functions were sourced at R start-up; the overall method is something like lapply(my.helper.list, source). I now want these functions to be sourced but not to appear in my environment, as they pollute it.
A first, clean approach would be to build a package with all my helper .R files; for now, I do not want to follow this method. A second approach would be to name these helpers with a leading dot, but having to type .helper1() at the prompt annoys me.
The best way would be to define these helpers in a specific and accessible environment, but I am struggling with the code. My idea is to first create a new environment:
.helperEnv <- new.env(parent = baseenv())
attach(.helperEnv, name = '.helperEnv')
Fine, search() returns '.helperEnv' in the list. Then I run:
assign('helper1', helper1, envir = .helperEnv)
rm(helper1)
Fine, ls(.helperEnv) returns 'helper1', and this function no longer appears in my environment.
The issue is I can't run helper1 (object not found). I guess I am not on the right track and would appreciate some hints.
I think you should assign the pos argument in your call to attach as a negative number. Note also that attach() copies the objects into a new environment on the search path, so you need to fill .helperEnv before attaching it; assignments made afterwards are not visible in the attached copy:
.helperEnv <- new.env()
.helperEnv$myfunc <- function(x) x^3 + 1
attach(.helperEnv, name = "helper", pos = -1)
ls()
# character(0)
myfunc
# function(x) x^3 + 1
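Building on that, a sketch that sources every helper file directly into the environment before attaching it, using sys.source() so nothing lands in the global environment (the helpers/ path is made up):
.helperEnv <- new.env()
for (f in list.files("helpers", pattern = "\\.R$", full.names = TRUE)) {
  sys.source(f, envir = .helperEnv)  # definitions go into .helperEnv only
}
attach(.helperEnv, name = "helperEnv")  # attach after filling the environment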

Write access to commandArgs?

I know that I can use commandArgs to read the command line arguments passed to a script in R, but I would like to debug a command line script by sourceing it in R and making it run using custom command line arguments. Is there a way of modifying the command line arguments without modifying the script file?
My scripts are normally using the optparse package for actual argument parsing, if that helps.
I'll try and expand what I said in a comment.
The Python way of writing scripts usually involves detecting whether the file is being run as a script, handling the args, and then calling functions defined in the file. Something like:
import sys

def foo(x):
    return x * 2

if __name__ == "__main__":
    v = sys.argv[1]
    print(foo(v))
This has the advantage that you can import the file into an interactive python session and the code in the 'if' block doesn't run. You can then test the foo function interactively.
Now is there a way you can check in R if the file is being run as a script, or being sourced from an interactive session?
foo <- function(x) {
  x * 2
}

if (!interactive()) {
  x <- as.numeric(commandArgs(trailingOnly = TRUE)[1])
  print(foo(x))
}
If run with Rscript argtest.R 22, it will print 44; if you run R interactively and do source("argtest.R"), it won't run the code in the if block. It's a nice pattern.
How about simply overwriting it with your own definition, e.g.
commandArgs <- function(trailingOnly = FALSE) {
  # include "--args" so that trailingOnly = TRUE returns the fake arguments
  args <- c("/foo/bar", "--args", "baz")
  # copied from base:::commandArgs
  if (trailingOnly) {
    m <- match("--args", args, 0L)
    if (m)
      args[-seq_len(m)]
    else
      character()
  }
  else args
}
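A short usage sketch, assuming a hypothetical script debug_me.R that calls commandArgs(trailingOnly = TRUE) at the top level:
# with the shadowing commandArgs() above defined in the global environment:
source("debug_me.R")  # the script now sees "baz" as its trailing argument
rm(commandArgs)       # drop the shadow to restore base::commandArgs
Note that this only helps scripts that call commandArgs() themselves; package code such as optparse resolves commandArgs in the base namespace and will not see the shadow, so there you would pass the fake arguments to parse_args() explicitly instead.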
The simplest solution is to replace source() with system(). Try
system("Rscript file_to_source.R 1 2 3")
