`R` equivalent to Python's `from src.file import function, class, variable`?

I have defined three different S4 classes in their own scripts in my R package. One of those classes uses the other two classes.
I see that devtools::load_all() loads the scripts in alphabetical order, so that if a script depends on another that is later in alphabetical order, there may be problems. Observe:
Example script a.r:
setClass("a", slots = c(name = "character"))
Example script b.r:
setClass("b", slots = c(name = "character", a = "a", c = "c"))
Example script c.r:
setClass("c", slots = c(name = "character"))
When I run devtools::load_all(), the following warning appears:
Warning messages:
1: undefined slot classes in definition of "b": c(class "c")
I do not want to rename my scripts simply to put them in alphabetical order based on when I want them to be loaded.
I do not want to define those classes in a single script, because I want to keep the code more modular.
How do I ensure that the script defining the dependent class has access to the other classes:
Regardless of the names of the scripts those classes are in?
Without resorting to source(), since this would import other functions, objects, and variables from that script that are not needed.
In python, this is relatively trivial. One uses a syntax like:
from <relative path to .py file that defines those objects> import <desired objects>
In R, I am spinning in circles trying to accomplish something similar.

Since your b.r script needs class "c" to be defined, I strongly recommend you make that dependency explicit at the top of b.r rather than relying on the load order. Otherwise, when someone loads b.r on its own instead of via devtools::load_all(), there will be a failure anyway. Always avoid depending on the user executing something "extra".
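One concrete way to make that dependency explicit in a package is a sketch like the following, assuming you document with roxygen2 (otherwise you can write an equivalent Collate field in DESCRIPTION by hand): the @include tag tells roxygen2 to generate a Collate field, so a.r and c.r are sourced before b.r regardless of file names.
# b.r -- sketch assuming roxygen2; run devtools::document() so the @include
# tag below is turned into a Collate field in DESCRIPTION, which
# devtools::load_all() and R CMD build then respect
#' @include a.r c.r
NULL

setClass("b", slots = c(name = "character", a = "a", c = "c"))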

Related

A note on graphics::curve() in R CMD check

I use the following code in my own package.
graphics::curve( foo (x) )
When I run R CMD check, it gives the following NOTE. How do I get rid of the NOTE?
> checking R code for possible problems ... NOTE
foo: no visible binding for global variable 'x'
Undefined global functions or variables:
x
Edit in response to the answers:
I tried the answer as follows.
function(...){
  utils::globalVariables("x")
  graphics::curve( sin(x) )
}
But it did not work, so for now I use the following code instead:
function(...){
  x <- 1 # not actually used; it exists only to avoid the NOTE about "x"
  graphics::curve( sin(x) )
}
The last version removes the NOTE.
Hmm, I guess the answer is correct, but it does not work for me and I am not sure why.
Two things:
Add
utils::globalVariables("x")
This can be added in a file of its own (e.g., globals.R), or (my technique) within the file that contains that code.
It is not an error to declare the same variable names in multiple files, so the same-file technique prevents you from accidentally removing the declaration when you remove one (but not another) reference. From the help docs: "Repeated calls in the same package accumulate the names of the global variables".
This must go outside of any function declarations, on its own at the top level. It is included in the package source (it needs to be, in order to have an effect on the check process), but otherwise it has no impact on the package.
Add
importFrom(utils,globalVariables)
to your package NAMESPACE file, since you are now using that function (unless you want another CHECK warning about objects not found in the global environment :-).
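Put together, a minimal sketch of the fix (globals.R is just a conventional file name, and foo() stands in for whatever function triggers the NOTE):
# R/globals.R -- top-level declaration; it only affects R CMD check
utils::globalVariables("x")

# R/foo.R -- this function can now use `x` inside curve() without the NOTE
foo <- function(...) {
  graphics::curve( sin(x) )
}

# NAMESPACE
importFrom(utils, globalVariables)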

Externally set default argument of a function from a package

I am building a package with functions that have default arguments. I would like to find a clean way to set the values of the default arguments once the function has been imported.
My first attempt was to set the default values to unknown objects (within the package). Then, when I load the package, I would have an external script that assigns a value to those unknown objects.
But it does not seem very clean since I am compiling a function with an unknown object.
My issue is that I will need other people to use the functions, and since they have many arguments I want to keep the code as concise as possible. And many arguments can be set by a configuration script before running the program.
So, for instance, I define in my package:
function_try <- function(x = VAL){
  return(x)
}
I build and load the package, and then I have an external script (or a config file it reads) that does
VAL <- "hello"
And then a user of the function can just run
function_try()
I would use options for that. So your function looks like:
function_try <- function(x = getOption("mypackage.default.value")) x
In your external script you make sure that the option value is set:
options(mypackage.default.value = "hello")
IMHO that is a clean solution, because anybody reading your function will see at first sight that a certain option value is used as a default, and will also have a clear understanding of how to override this value if needed.
I would also define a fallback value in your package's .onLoad() to make sure that the value is defined in the first place. Your functions can then even react to this fallback value and issue a meaningful warning if they realize that the external script did not, for whatever reason, provide a new value.
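A minimal sketch of that fallback, assuming the package is called mypackage (the option name and the "unset" sentinel are purely illustrative):
# R/zzz.R -- set a fallback only if the user has not already set the option
.onLoad <- function(libname, pkgname) {
  if (is.null(getOption("mypackage.default.value"))) {
    options(mypackage.default.value = "unset")
  }
}

# elsewhere in the package:
function_try <- function(x = getOption("mypackage.default.value")) {
  if (identical(x, "unset")) {
    warning("mypackage.default.value was not set; using the fallback")
  }
  x
}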

How to suggest hints to Rstudio for auto completion for my code?

Normally, if a is a data.frame, one can autocomplete the column names by typing a$ and pressing Tab. The chunked package has a nice feature where if you run
a <- chunked::read_csv_chunkwise("some.csv")
then when you type a[ and press Tab, it will show a list of variables via autocompletion even though a is not a data.frame.
I was trying to replicate this for my own code but I couldn't find any relevant resources after googling for "rstudio autocompletion" and various other searches.
I note that class(a) returns
[1] "chunkwise" "tbl"
I had a look at all the functions that belong to the S3 class "chunked" and I note that it has a method called tbl_vars, so I thought maybe that's what Rstudio uses to do the autocomplete.
So to test it out I tried
library(data.table) # for fread()
library(dplyr)      # exports the tbl_vars() generic

write.csv(data.frame(a = 1, b = 2), file = "test.csv", row.names = FALSE)
tbl_vars.test_auto_complete <- function(fs) {
  names(fread(fs$path))
}
test_auto_complete <- list(path = "test.csv")
class(test_auto_complete) <- "test_auto_complete"
tbl_vars(test_auto_complete)
[1] "a" "b"
But then when I type test_auto_complete and press Tab, the auto-completion doesn't show the variables that I want.
How can we give hints to RStudio to make auto-completion work?
For objects that inherit from the tbl class, RStudio does indeed call tbl_vars() to populate completions. (This is an RStudio-specific autocompletion system feature.)
In your example, the object you're creating does not inherit from tbl, so this autocompletion pathway doesn't kick in.
However, this form of 'ad-hoc' S3 dispatch (where you define S3 methods directly at the top level of a script) is not detected by RStudio, so you won't be able to verify the behaviour with test code like yours. You'll have to explicitly define and register the S3 method in an R package.
Alternatively, you can try explicitly registering the S3 method with something like:
registerS3method("tbl_vars", "test_auto_complete", tbl_vars.test_auto_complete)
for inline testing.
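For quick interactive testing, a sketch that combines both points above (the object must also inherit from tbl for RStudio's completion pathway to apply; the read.csv helper is just illustrative):
library(dplyr) # provides the tbl_vars() generic

tbl_vars.test_auto_complete <- function(x) {
  names(utils::read.csv(x$path, nrows = 1))
}
registerS3method("tbl_vars", "test_auto_complete", tbl_vars.test_auto_complete)

# "tbl" in the class vector is what makes RStudio consult tbl_vars()
test_auto_complete <- structure(list(path = "test.csv"),
                                class = c("test_auto_complete", "tbl"))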

R how to restrict the names that are in scope to those I create explicitly?

I thought that it would be enough to use fully qualified names to avoid polluting my scope with names I did not explicitly introduce, but apparently, with R, this is not the case.
For example,
% R_PROFILE_USER= /usr/bin/R --quiet --no-save --no-restore
> ls(all = TRUE)
character(0)
> load("/home/berriz/_/projects/fda/deseq/.R/data_for_deseq.RData")
> ls(all = TRUE)
[1] "a" "b" "c"
> ?rlog
No documentation for ‘rlog’ in specified packages and libraries:
you could try ‘??rlog’
So far, so good. In particular, as the last command shows, the interpreter knows nothing of rlog.
But after I run
> d <- DESeq2::DESeqDataSetFromMatrix(countData = a, colData = b, design = c)
...then, henceforth, the command ?rlog will produce a documentation page for a function I did not explicitly introduce into the environment (and did not refer to with a fully qualified name).
I find this behavior disconcerting.
In particular, I don't know when some definition I have explicitly made will be silently shadowed as a side-effect of some seemingly unrelated command.
How can I control what the environment can see?
Or to put it differently, how can I prevent side effects like the one illustrated above?
I'm not sure "scope" means the same thing in R as it does in other languages. R uses "environments" (see http://adv-r.had.co.nz/Environments.html for a detailed explanation). Your scope in R includes all environments that are loaded, and as you have discovered, the user doesn't explicitly control every environment that is loaded.
For example,
ls()
lists the objects in your default environment '.GlobalEnv'
search()
lists the currently loaded environments.
ls(name='package:stats')
In default R installations, 'package:stats' is one of the environments loaded on startup.
By default, everything you create is stored in the global environment.
ls(name='.GlobalEnv')
You can explicitly reference objects you create by referencing their environment with the $ syntax.
x <- c(1,2,3)
.GlobalEnv$x
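To see exactly which environments a call attaches as a side effect, and to undo it afterwards, a small sketch along these lines (the DESeq2 call is the one from the question and assumes a, b, and c are already loaded):
before <- search()
d <- DESeq2::DESeqDataSetFromMatrix(countData = a, colData = b, design = c)
setdiff(search(), before)  # environments attached as a side effect

# any of them can be removed again without unloading its namespace, e.g.:
# detach("package:SomeAttachedPackage", character.only = TRUE)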

R: disentangling scopes

My question is about avoiding namespace pollution when writing modules in R.
Right now, in my R project, I have functions1.R with doFoo() and doBar(), functions2.R with other functions, and main.R with the main program in it, which first does source('functions1.R'); source('functions2.R'), and then calls the other functions.
I've been starting the program from the R GUI in Mac OS X, with source('main.R'). This is fine the first time, but after that, the variables that were defined the first time through the program are still defined the second time functions*.R are sourced, and so the functions see a whole bunch of extra variables.
I don't want that! I want an "undefined variable" error when my function uses a variable it shouldn't! Twice this has given me very late nights of debugging!
So how do other people deal with this sort of problem? Is there something like source(), but that makes an independent namespace that doesn't fall through to the main one? Making a package seems like one solution, but it seems like a big pain in the butt compared to e.g. Python, where a source file is automatically a separate namespace.
Any tips? Thank you!
I would explore two possible solutions to this.
a) Think in a more functional manner. Don't create any variables outside of a function. So, for example, main.R should contain one function main(), which sources in the other files and does the work (a sketch follows after the code below). When main() returns, none of the clutter will remain.
b) Clean things up manually:
# main.R
prior_variables <- ls()
source('functions1.R')
source('functions2.R')
# stuff happens
rm(list = setdiff(ls(), prior_variables))
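A minimal sketch of approach (a), using local = TRUE so the sourced functions live only inside main() (file names taken from the question):
# main.R
main <- function() {
  source('functions1.R', local = TRUE) # functions exist only inside main()
  source('functions2.R', local = TRUE)
  # ... do the work here ...
}
main()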
The main function you want to use is sys.source(), which will load your functions/variables into a namespace ("environment" in R) other than the global one. One other thing you can do in R that is fantastic is to attach namespaces to your search() path so that you need not reference the namespace directly. That is, if "namespace1" is on your search path, a function within it, say "fun1", need not be called as namespace1.fun1() as in Python, but simply as fun1().
Regarding method resolution order: if there are many functions with the same name, the one in the environment that appears first in the search() list will be called. To call a function in a particular namespace explicitly, one of many possible syntaxes - albeit a bit ugly - is get("fun1", "namespace1")(...) where ... are the arguments to fun1(). This also works with variables, using the syntax get("var1", "namespace1").
I do this all the time (I usually load just functions, but the distinction between functions and variables in R is small), so I've written a few convenience functions that I load from my ~/.Rprofile.
name.to.env <- function(env.name)
  ## returns the named environment on the search() path
  pos.to.env(grep(env.name, search()))

attach.env <- function(env.name)
  ## creates and attaches the environment to the search path if it doesn't already exist
  if( all(regexpr(env.name, search()) < 0) ) attach(NULL, name = env.name, pos = 2)

populate.env <- function(env.name, path, ...) {
  ## populates the environment with the functions in a file or directory,
  ## creating and attaching the named environment to the search() path
  ## if it doesn't already exist
  attach.env(env.name)
  if( file.info(path[1])$isdir )
    lapply(list.files(path, full.names = TRUE, ...),
           sys.source, name.to.env(env.name))
  else
    lapply(path, sys.source, name.to.env(env.name))
  invisible()
}
Example usage:
populate.env("fun1","pathtofile/functions1.R")
populate.env("fun2","pathtofile/functions2.R")
and so on, which will create two separate namespaces: "fun1" and "fun2", which are attached to the search() path ("fun2" will be higher on the search() list in this case). This is akin to doing something like
attach(NULL,name="fun1")
sys.source("pathtofile/functions1.R",pos.to.env(2))
manually for each file ("2" is the default position on the search() path). The way that populate.env() is written, if a directory, say "functions/", contains many R files without conflicting function names, you can call it as
populate.env("myfunctions","functions/")
to load all functions (and variables) into a single namespace. With name.to.env(), you can also do something like
with(name.to.env("fun1"), doStuff(var1))
or
evalq(doStuff(var1), name.to.env("fun1"))
Of course, if your project grows big and you have lots and lots of functions (and variables), writing a package is the way to go.
If you switch to using packages, you get namespaces as a side-benefit (provided you use a NAMESPACE file). There are other advantages for using packages.
If you were really trying to avoid packages (which you shouldn't), then you could try assigning your variables in specific environments.
Well, avoiding namespace pollution, as you put it, is just a matter of diligently partitioning the namespace and keeping your global namespace uncluttered.
Here are the essential functions for those two kinds of tasks:
Understanding/Navigating the Namespace Structure
At start-up, R creates a new environment to store all objects created during that session--this is the "global environment".
# to get a reference to that environment:
globalenv()
But this isn't the root environment. The root is an environment called "the empty environment"--all environments chain back to it:
emptyenv()
returns: <environment: R_EmptyEnv>
# to view all of the chained parent environments (which includes '.GlobalEnv'):
search()
Creating New Environments:
workspace1 = new.env()
is.environment(workspace1)
returns: [1] TRUE
class(workspace1)
returns: [1] "environment"
# add an object to this new environment:
with(workspace1, attach(what = "/Users/doug/Documents/test_obj.RData",
                        name = deparse(substitute(what)), warn.conflicts = TRUE, pos = 2))
# verify that it's there:
exists("test_obj", where=workspace1)
returns: [1] TRUE
# to locate the new environment (if it's not visible from your current environment)
parent.env(workspace1)
returns: <environment: R_GlobalEnv>
objects(".GlobalEnv")
returns: [1] "test_obj"
Coming from Python, et al., this system (at first) seemed to me like a room full of carnival mirrors. The R gurus, on the other hand, seem to be quite comfortable with it. I'm sure there are a number of reasons why, but my intuition is that they don't let environments persist. I notice that R beginners use attach(), as in attach(this_dataframe); I've noticed that experienced R users don't do that; they use with() instead, e.g.,
with(this_dataframe, tapply(etc....))
(I suppose they would achieve the same thing if they used 'attach' then 'detach' but 'with' is faster and you don't have to remember the second step.) In other words, namespace collisions are avoided in part by limiting the objects visible from the global namespace.
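A tiny sketch of the contrast (the data frame here is made up):
df <- data.frame(height = c(150, 160, 170), group = c("a", "b", "a"))

# attach/detach: the column names sit on the search path until you remember to detach
attach(df)
mean(height)
detach(df)

# with(): the names are visible only inside the call, nothing is left behind
with(df, mean(height))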
