Finding help files for modularized package functions in R

I am writing an R package which is organized into modules as per Sebastian Warnholz's modules package. Each function is organized into its own R file under, e.g., R/m1/fun.R. Each one of those files begins with roxygen code. The modules are defined in another file, R/modules.R. Here's an idea of how that file is structured:
#' Module 1
#' @name m1
#' @export
m1 <- modules::module({
  expose("R/m1/fun.R")
  expose("R/m1/foo.R")
})
The package checks and builds cleanly, and I can call functions by issuing m1$fun() and m1$foo(). However, calling the help files doesn't work, no matter what I try (i.e., combinations of ? or help() with the function names, with or without the module prefix m1$).
Actually, I can't even expect the help files to be there, because after running devtools::document(), the roxygen code is not converted into man/*.Rd files. So I guess my problem is getting devtools::document() to search the R subfolders. Running devtools::document("R/m1") doesn't do the trick, though.
One thing that works is putting the function scripts in the parent folder, R/, but then they lose the module scope, and the help files (but not the functions themselves) become visible at package level. Moreover, the "usage" section will state "foo(...)" instead of "m1$foo(...)", which seems inadequate, but I am not sure that is currently fixable. This is my first time working with modules, so I was wondering if there's a cleaner way of organizing my functions and help files.
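(A possible workaround I have not verified: under the flat layout, document each exposed function as a standalone help topic and override the usage line so it shows the module prefix, along these lines, although R CMD check may then warn about the non-standard usage:
#' Foo
#'
#' @name foo
#' @usage m1$foo(x)
#' @param x Input value
NULL
This would at least make ?foo resolve to a help page whose usage section reads m1$foo(x).)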

Related

Where to put R files that generate package data

I am currently developing an R package and want it to be as clean as possible, so I try to resolve all WARNINGs and NOTEs displayed by devtools::check().
One of these notes is related to some code I use for generating sample data to go with the package:
checking top-level files ... NOTE
Non-standard file/directory found at top level:
'generate_sample_data.R'
It's an R script currently placed in the package root directory and not meant to be distributed with the package (because it doesn't really seem useful to include).
So here's my question:
Where should I put such a file or how do I tell R to leave it be?
Is .Rbuildignore the right way to go?
Currently devtools::build() puts the R script in the final package, so I shouldn't just ignore the NOTE.
As suggested in http://r-pkgs.had.co.nz/data.html, it makes sense to use ./data-raw/ for scripts/functions that are necessary for creating/updating data but not needed in the package itself. After adding ./data-raw/ to ./.Rbuildignore, the package build will ignore anything within that directory. (And, as you commented, there is a helper function, devtools::use_data_raw().)
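For illustration, a minimal sketch of this setup (the file and object names are hypothetical):
# data-raw/generate_sample_data.R
sample_data <- data.frame(x = rnorm(100), y = runif(100))
devtools::use_data(sample_data, overwrite = TRUE)  # writes data/sample_data.rda

# .Rbuildignore entry (one Perl regex per line)
^data-raw$
The script stays in version control but is excluded from the built package, while the generated data/sample_data.rda ships with it.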

How to use utils::globalVariables

Following your recommendations (or trying to do it, at least), I have tried some options, but the problem remains, so there must be something I am missing.
I have included a more complete code example below:
setwd("C:/naapp")
#' @import utils
#' @import devtools
I have tried with and without using suppressForeignCheck:
if (getRversion() >= "2.15.1") {
  utils::globalVariables(c("eleven"))
  utils::suppressForeignCheck(c("eleven"))
}
myFunctionSum <- function(X) { print(X + eleven) }
myFunctionMul <- function(X) { print(X * eleven) }
myFunction11 <- function(X) {
  assign("eleven", 11, envir = environment(myFunctionMul))
}
Maybe I should use a particular environment?
package.skeleton(name = "myPack11", list = ls(),
                 path = "C:/naapp", force = TRUE,
                 code_files = character())
I remove the "man" directory from myPack11, otherwise I would get an error because the help files are empty.
I add the imports utils and devtools to the DESCRIPTION file.
Then I run the check:
devtools::check("myPack11")
And I still get this NOTE:
# checking R code for possible problems ... NOTE
# myFunctionMul: no visible binding for global variable 'eleven'
# myFunctionSum: no visible binding for global variable 'eleven'
# Undefined global functions or variables: eleven
I have also tried making an environment, combining Tomas Kalibera's suggestion and an example I found on the Internet.
myEnvir <- new.env()
myEnvir$eleven <- 11
etc
In this case, I get the same NOTE, but with "myEnvir" instead of "eleven".
First version of the question
I am trying to use "globalVariables" from the utils package. I am building an interface in R and am planning to submit it to CRAN. This is my first time, so sorry if the question is very basic. I have read the help and tried to find examples, but I still don't know how to use it.
I have made a little silly example to illustrate my question, which is:
Where exactly do I have to place the following line?
if(getRversion() >= "2.15.1"){utils::globalVariables("eleven")}
My example has three functions. myFunction11 creates the global variable "eleven" and the other two functions manipulate it. In my real code, I cannot use arguments in the functions that are called by means of a button. Consider that this is just a silly example to learn how to use globalVariables (to avoid binding notes).
myFunction11 <- function() {
  assign("eleven", 11, envir = environment(myFunctionSum))
}
myFunctionSum <- function(X) {
  print(X + eleven)
}
myFunctionMul <- function(X) {
  print(X * eleven)
}
Thank you in advance
I thought that the file globals.R would be automatically generated when using globalVariables. The problem was that I needed to create the package skeleton first, then create the file globals.R, add it to the R directory in the package, and check the package.
So, I needed to place this in a different file:
#' @import utils
utils::globalVariables(c("eleven"))
and save it.
The documentation clearly says:
## In the same source file (to remind you that you did it) add:
if(getRversion() >= "2.15.1") utils::globalVariables(c(".obj1", "obj2"))
so put it in the same source file as your functions. It can go in any of your R source files, but the comment above recommends you put it close to your code. Looking at a number of GitHub packages reveals another common pattern: a globals.R file containing the call. This is probably a bad idea, though: if you later remove the global from your package but neglect to update globals.R, you could mask a problem. Putting it right next to the functions that use it will hopefully remind you when you edit those functions.
Make sure you put it outside any function definitions in the file, or it won't get seen.
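For example, a file-level sketch of the recommended placement, reusing the variable from your example (the file name is illustrative):
# R/my_functions.R
if (getRversion() >= "2.15.1") utils::globalVariables(c("eleven"))

myFunctionSum <- function(X) {
  print(X + eleven)
}
The call sits at the top level of the file, right next to the function that references eleven, not inside it.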
You cannot modify bindings in a package namespace once the package is loaded (and the namespace sealed, and bindings locked). The check tool helps you spot violations of this restriction, so you find out about the problem when checking the package rather than while running it. globalVariables is just a call to silence check when it looks for these violations, which is undesirable in almost all cases.
If you really need mutable state in a package, you can create a new environment (using new.env) and bind it to an (unexported) "global" variable in your namespace. This binding will be locked, but that is fine, because in R you can change an environment in place (adding or removing elements effectively modifies its contents).
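A minimal sketch of that pattern (all names are illustrative):
# R/state.R -- .pkg_state is not exported; its binding is locked, its contents are not
.pkg_state <- new.env(parent = emptyenv())

set_eleven <- function(value = 11) {
  .pkg_state$eleven <- value  # modifying the environment in place is allowed
  invisible(value)
}

get_eleven <- function() {
  .pkg_state$eleven
}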
The best situation, however, is when you can keep all mutable state in user objects (passed as arguments into functions, with modified versions returned as their output values).

How to call an R script from another R script, both in the same package?

I'm building a package that uses two main functions. One of the functions, in model.R, requires a special type of simulation (sim.R) and a way to set up the results in a table (table.R).
In a sharable package, how do I call both the sim.R and table.R files from within model.R? I've tried source("sim.R") and source("R/sim.R") but that call doesn't work from within the package. Any ideas?
Should I just copy and paste the codes from sim.R and table.R into the model.R script instead?
Edit:
I have all the scripts in the R directory, and the DESCRIPTION and NAMESPACE files are all set. I just have multiple scripts in the R directory: R/ contains premodel.R, model.R, sim.R, and table.R. I need the model.R script to use functions from both sim.R and table.R, which are located in the same directory in the package (i.e., R/).
To elaborate on joran's point, when you build a package you don't need to source functions.
For example, imagine I want to make a package named TEST. I will begin by generating a directory (i.e., folder) named TEST. Within TEST I will create another folder named R; in that folder I will include all the R scripts containing the package's functions.
At a minimum you also need to include a DESCRIPTION and a NAMESPACE file. A man folder (for help files) and a tests folder (for unit tests) are also nice to include.
Making a package is pretty easy. Here is a blog with a straightforward introduction: http://hilaryparker.com/2014/04/29/writing-an-r-package-from-scratch/
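For reference, the minimal layout described above would look something like this:
TEST/
  DESCRIPTION
  NAMESPACE
  R/
    model.R
    sim.R
    table.R
  man/
  tests/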
As others have pointed out, you don't have to source R files in a package. The package loading mechanism will take care of loading the namespace and making all exported functions available. So usually you don't have to worry about any of this.
There are exceptions however. If you have multiple files with R code situations can arise where the order in which these files are processed matters. Often it doesn't matter or the default order used by R happens to be fine. If you find that there are some dependencies within your package that aren't resolved properly you may be faced with a situation where a custom processing order for the R files is required. The DESCRIPTION file offers the optional Collate field for this purpose. Simply list all your R files in the order they should be processed to satisfy the dependencies.
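For example, with the files from this question, a DESCRIPTION containing a Collate field might look like this (the order shown is only illustrative):
Collate:
    'sim.R'
    'table.R'
    'model.R'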
If all your files are in the R directory, every function will be in memory after you build the package or run devtools::load_all().
You may have issues, though, if your files contain code that is not inside a function.
R loads files in alphabetical order.
Usually, this is not a problem, because functions are evaluated when they are called for execution, not at loading time (i.e., a function can refer to another function not yet defined, even in the same file).
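For instance, this is fine even though g is defined after f (or in a file that loads later):
f <- function(x) g(x) + 1  # g need not exist yet when f is defined
g <- function(x) x * 2
f(3)  # returns 7; g is only looked up when f runs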
But if you have code outside a function in model.R, that code will be executed immediately when the file is loaded, and your package build will usually fail with:
ERROR: lazy loading failed for package 'yourPackageName'
If this is the case, wrap the stray code in model.R into a function so you can call it later, once the package (and any external libraries) have fully loaded.
If this piece of code is there to initialize some value, consider using use_data() to have R take care of loading the data into the environment for you (see the sketch after this answer).
If this piece of code is just interactive code written while testing and developing the package itself, you should consider putting it elsewhere or wrapping it in a function anyway.
If you really need that code to be executed at loading time, or really have dependencies to resolve, then you must add the Collate field to the DESCRIPTION file, as already stated by Peter Humburg, to force the order in which R loads the files.
Roxygen2 can help you: put this before your code
#' @include sim.R table.R
and call roxygenize(); the Collate field will be generated for you in the DESCRIPTION file.
But even then, the external libraries you depend on are not yet loaded by the package, which can lead to failure again at build time.
In conclusion, you'd better not leave code outside functions in a .R file if it's located inside a package.
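As a hedged sketch of the use_data() suggestion above (the object name is illustrative), run this once interactively rather than at load time:
eleven <- 11
devtools::use_data(eleven, internal = TRUE)  # stores the value in R/sysdata.rda
Package functions can then refer to eleven directly, with no top-level initialization code left in R/.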
Since you're building a package, the reason you're having trouble accessing the other functions in your R/ directory is that you first need to run:
library(devtools)
document()
from within the working directory of your package. Now each function in your package should be accessible to any other function. Then, to finish up, do:
build()
install()
although it should be noted that a simple document() call will already be sufficient to solve your problem.
Make your functions global by defining them with <<- instead of <- and they will become available to any other script running in that environment.

How to manage R extension / package documentation with grace (or at least without pain)

Now and then I wrap project-specific code in R packages. I use the documentation files, as suggested by Writing R Extensions, to document the application of the code.
So once you have set up your project and done all the editing of the .Rd files, how do you manage painless, clean versioning without rewriting or heavy copy-pasting of all the documentation files when the code or, even worse, the code structure changes?
To be more verbose: my current workflow is that I issue package.skeleton(), edit the .Rd files, then run R CMD check and R CMD build. When I make changes to my code I need to redo the above, maybe appending '.2.0.1' or whatever in order to preserve the precursor version. Before running R CMD check I need to repopulate all the .Rd files with great care in order to get a clean check and successful compilation of the TeX files. This is really silly and sometimes a real pain, e.g. if you want to address all the warnings or LaTeX has a bad day.
What tricks do you use? Please share your workflow.
The solution you're looking for is roxygen2.
RStudio provides a handy guide, but briefly you document your function in-line with the function definition:
#' Function doing something
#'
#' Extended description goes here
#' @param x Input foo blah
#' @return A numeric vector of length one containing foo
myFunc <- function(x) NULL
If you're using RStudio (and maybe ESS also?) the Build Package command automagically creates the .Rd files for you. If not, you can read the roxygen2 documentation for the commands to generate the docs.
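If you're working outside RStudio, the equivalent (assuming devtools is installed) is simply:
devtools::document()  # runs roxygen2 and rewrites man/*.Rd and NAMESPACE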

Using source subdirectories within R packages with roxygen2

I would like to use a directory structure within the R folder for the source code of a package. For example, within my R folder I have an algos folder with functions I want to export and document. However, roxygen2 by default does not seem to go through the subfolders of the R folder.
I tried to use the @include keyword as follows for a file at R/algos/algo1.r:
#' @include algos/algo1.r
but without success. Is there a simple way to use subfolders for the R source code?
Writing R Extensions has this to say (in Section 1.1.5) about subdirectories under the R directory:
The R and man subdirectories may contain OS-specific subdirectories named unix or windows.
Implied in this is that they can't have subdirectories other than those two. This is confirmed in an r-devel thread and again later in another r-devel thread.
Another straightforward alternative that I've started to use is to simply define related functions within the same .R file and name these files something that unifies the functions. In the example above one could have something like algos.R in the /R folder and within algos.R:
#' roxygen_header1
#' @export
algo1 <- function(...) {}

#' roxygen_header2
#' @export
algo2 <- function(...) {}
I think this makes navigating R/ much more intuitive (at least for the developer, but probably for users too).
