Referencing user-created functions in R from separate scripts - r

I'm trying to re-use some code that I've already written but often need to re-execute for various projects (i.e., I'd like to apply some object-oriented principles to my R code). I know that a framework exists for publishing new packages on CRAN, but the code I have isn't something that would be valuable to other parties.
Essentially I'd like to either create my own local packages and reference them using a require() call, or at the very least call functions that I've saved in separate .R files as needed.
I've searched around online and found several lengthy articles about creating packages and compiling them using Rtools (I'm on a Windows OS), but since I'm not writing C this seems like overkill for my simple purposes. To offer an example of what I'm referring to: I have a script that removes unwanted characters from string data, which I constantly need to copy/paste into new scripts; I don't want to do this and would prefer to just do something like require(myFunction).
Is there a simple way to solve this problem, or am I best served by grabbing Rtools and compiling my custom functions locally?

Creating an R package is actually super easy. The link from Alex is how I started my first package; here's a slightly simplified version of the walkthrough I give my students. (NB: full credit to Hilary Parker, the author of the original blog post.)
First install devtools and roxygen:
install.packages("devtools")
library("devtools")
install.packages("roxygen2")
library("roxygen2")
Make a new directory for your functions:
setwd("/path/to/parentdirectory")
create("mypackage")
Add your functions to a file (or files, named anything ending in .R) in the package's R directory. A file can contain one function or several, and should look like this:
mymeanfun <- function(x){
  mean(x)
}

myfilterfun <- function(x, y){
  filter(x, y)
}
Now you should document the code. You can document (and import) using roxygen. Make sure you @import (or @importFrom) functions from any other packages, and @export the functions you want available. Roxygen and devtools will take care of everything else (the NAMESPACE, requires, etc.) until you get more advanced. Everything else is optional:
#' My Mean Function
#'
#' Takes the mean
#' @param x any default data type
#' @export
#' @examples
#' mymeanfun(c(1, 2, 3))
mymeanfun <- function(x){
  mean(x)
}
#' My Filter Function
#'
#' Identical to dplyr::filter
#' @param x a data.frame
#' @export
#' @importFrom dplyr filter
myfilterfun <- function(x, y){
  filter(x, y)
}
Now run document() from roxygen2 in the directory you just created:
setwd("./mypackage")
document()
You are now up and running - I'd recommend putting it on github and installing from there:
install_github("yourgithubname/mypackage")
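If you'd rather keep it off GitHub, you can also install straight from the local source directory (a sketch; adjust the path to wherever you created the package):
# Install directly from the local package directory instead of GitHub
devtools::install("/path/to/parentdirectory/mypackage")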
From then on, you can just call:
library(mypackage)
whenever you need your functions.
For more details and better documentation practices, see Hadley Wickham's R Packages book.
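If even that feels like overkill for a one-off helper, simply sourcing the file from each new script also works, though you give up documentation and namespacing (a sketch; the path and file name are placeholders):
# Quick alternative: load the functions into the global environment
source("C:/path/to/myFunctions.R")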

Related

Does roxygen2 work for R scripts in data-raw?

I am using RStudio to create a package for a piece of data analysis I'm doing. To put my raw data into the package, I'm using devtools::use_data_raw() as per this article.
I have a script load-raw-data.R that loads the raw data and assembles it into a dataframe, then calls devtools::use_data() on this dataframe to add it to the package. load-raw-data.R is in /data-raw, not /R, as per the article. I've added documentation to the functions in this script via a roxygen2 skeleton; however, when I build the documentation, the .Rd files for these functions are not built. I presume this is because roxygen2 only looks in /R. Is there a way to tell roxygen2 to look in /data-raw as well? Or have I misunderstood something along the way?
Update: following @phil's suggestion
@phil - thanks - I tried this for one of the functions (load_data_files) in the load-raw-data.R script (see below for the documentation added to R/data.R), but on rebuilding the package I get an error: 'load_data_files' is not an exported object from 'namespace:clahrcnwlhf'. I have included the @export tag in the documentation in R/data.R. Any thoughts on how I might resolve this?
# This script loads the individual component files of the raw dataset
# and stitches them together, saving the result as an .RData file

#' load_data_files
#'
#' load_data_files loads in a set of Excel files as dataframes
#'
#' @param fl list of paths of the files to be loaded
#'
#' @return A list of dataframes, one for each of the file paths in fl.
#' @export
"load_data_files"

Is there a way to automatically generate `Imports` section in the DESCRIPTION file?

When developing an R package, it is common for me to just turn on Packrat, use a localized repository, and develop as I explore in a session. But when publishing the package, it is a big headache to recall and manually add every dependency I have used. Is there a (semi-)automatic way to do this?
For example, in NodeJS development, we can just use npm install --save and the dependency will be added automatically to package.json.
Yes, use roxygen2 to generate your NAMESPACE file.
An example of how to generate package-level documentation:
#' @details
#' \tabular{ll}{
#' Package: \tab \cr
#' Type: \tab Package\cr
#' Version: \tab 1.0.0\cr
#' Date: \tab 2016-05-15\cr
#' License: \tab MIT \cr
#' }
#' @useDynLib pkg
#' @importFrom Rcpp evalCpp
#' @importFrom methods is
#' @importFrom stats ts as.ts is.ts
#' @import ggplot2
"_PACKAGE"
Note: I tend to keep my import statements together in the package-level documentation. You can use these same statements within function documentation.
Use @importFrom <pkg> <function1> <function2> for specific imports.
Otherwise, use @import <pkg> for all functions in a given package.
Edit
To lock it to a specific version, you will need to modify your DESCRIPTION file's Imports: section like so:
Imports:
Rcpp (>= 0.12.5),
scales (<= 0.4.0),
grid (== 0.7-4),
stats
A common way to do this (although not one that's used much by "old school" R programmers, who as far as I know maintain their NAMESPACE files by hand ...) is to manually insert roxygen2 @importFrom statements in the code ... e.g.
## returns a true family() object iff one was given
## to glmmTMB() in the first place ....
##' @importFrom stats family
##' @export
family.glmmTMB <- function(object, ...) {
  object$modelInfo$family
}
although it would certainly be helpful to have an automated system, to provide hints if nothing else. Certainly maintaining dependencies in this way (making sure to add @importFrom statements as needed for every function that has a dependency) is better than trying to keep track of when dependencies are (still) needed after developing the code.
@import and @importFrom (which translate to import and importFrom directives in the NAMESPACE) are both legal; the Writing R Extensions guide says
Using importFrom selectively rather than import is good practice and recommended notably when importing from packages with more than a dozen exports.
("good practice" translates to "CRAN maintainers will yell if you don't" ...)
I don't know if there's a way to do versioned importFrom statements: R puts package version dependencies in the Imports: field in the DESCRIPTION file and importFrom statements in the NAMESPACE file: roxygen2 generates this information automatically, but I don't think the mapping from roxygen2 directives to NAMESPACE/DESCRIPTION information is fine-grained enough to support this ...
Running R CMD check --as-cran gives hints about functions from recommended packages that need to be imported.

Best way to use support function in R to stay DRY

While working on my first R package, I noticed that when the package structure gets created, the man directory contains a documentation file for each function/method in the code.
In order to stay DRY (don't repeat yourself), I use some functions as "auxiliary" functions in loops or iterations. How can I tell R that I do not want to provide documentation for them, given that they should not be called directly by the end user?
Use the roxygen2 and devtools packages to document your functions and build your package.
#' Function 1 Title
#'
#' Describe what function 1
#' does in a paragraph. This function
#' will be exported for external use because
#' it includes the @export tag.
#'
#' @param parameter1 describe the first parameter
#' @param parameter2 describe the second parameter
#' @examples
#' function1(letters[1:10], 1:10)
#' @export
function1 <- function(parameter1, parameter2) {
  paste(parameter1, parameter2)
}

#' Function 2 Title
#'
#' Description here. This will not
#' be added to the NAMESPACE.
#'
#' @param parameter1
function2 <- function(parameter1) {
  parameter1
}
Once you have all your documentation, use the tools in the devtools package to build, document, and check your package. It will automatically update the man files and DESCRIPTION, and add / remove functions from the NAMESPACE.
document()
build()
check()
I also recommend using the rbundler package to control how you load packages.
If you do not export them via the NAMESPACE, you are not expected to provide documentation.
Another (older) way is to simply create one file, say internal.Rd, and define a bunch of \alias{foo}, \alias{bar}, \alias{frob} entries in it; that way codetools is happy too.
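A minimal sketch of such a file (mypackage, foo, bar, and frob are placeholders):
% man/mypackage-internal.Rd
\name{mypackage-internal}
\title{Internal mypackage objects}
\alias{foo}
\alias{bar}
\alias{frob}
\description{Internal functions, not intended to be called by the user.}
\keyword{internal}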
Thanks @joshua-ulrich and @dirk-eddelbuettel.
According to "Writing R Extensions":
The man subdirectory should contain (only) documentation files for the objects in the package in R documentation (Rd) format. The documentation filenames must start with an ASCII (lower or upper case) letter or digit and have the extension .Rd (the default) or .rd. Further, the names must be valid in ‘file://’ URLs, which means they must be entirely ASCII and not contain ‘%’. See Writing R documentation files, for more information. Note that all user-level objects in a package should be documented; if a package pkg contains user-level objects which are for “internal” use only, it should provide a file pkg-internal.Rd which documents all such objects, and clearly states that these are not meant to be called by the user. See e.g. the sources for package grid in the R distribution for an example. Note that packages which use internal objects extensively should not export those objects from their namespace, when they do not need to be documented (see Package namespaces).
By the way, is there any convention for including comments in the code so that the man files pick up the function description, argument descriptions, etc. directly from the code?

Rd file name conflict when extending an S4 method of some other package

Actual question
How do I avoid Rd file name conflicts when
an S4 generic and its method(s) are not necessarily all defined in the same package (the package containing (some of) the custom method(s) depends on the package containing the generic), and
using roxygenize() from package roxygen2 to generate the actual Rd files?
I'm not sure if this is a roxygen2 problem or a common problem when the generic and its method(s) are scattered across packages (which IMHO is definitely a realistic use-case scenario if you follow a modular programming style).
What's the recommended way to handle these situations?
Illustration
In package pkga
Suppose in package pkga you defined a generic method foo and that you've provided the respective roxygen code that roxygenize() picks up to generate the Rd file:
#' Test function
#'
#' Test function.
#'
#' @param ... Further arguments.
#' @author Janko Thyson \email{janko.thyson@@rappster.de}
#' @example inst/examples/foo.R
#' @docType methods
#' @rdname foo-methods
#' @export
setGeneric(
  name = "foo",
  signature = c("x"),
  def = function(
    x,
    ...
  ) {
    standardGeneric("foo")  # must match the generic's name
  }
)
When roxygenizing your package, a file called foo-methods.Rd is created in the man subdirectory; it serves as the reference Rd file for all methods that might be created for this generic. So far so good. If all of the methods for this generic are also part of your package, everything's good. For example, this roxygen code would make sure that documentation is added to foo-methods.Rd for the ANY-method of foo:
#' @param x \code{ANY}.
#' @return \code{TRUE}.
#' @rdname foo-methods
#' @aliases foo,ANY-method
#' @export
setMethod(
  f = "foo",
  signature = signature(x = "ANY"),
  definition = cmpfun(function(  # cmpfun() is from the compiler package
    x,
    ...
  ) {
    return(TRUE)
  }, options = list(suppressAll = TRUE))
)
However, if package pkga provides the generic for foo and you decide in some other package (say pkgb) to add a foo-method for x being of class character, then R CMD check will tell you that there is a name clash with respect to Rd file names and/or aliases (as there already exists a Rd file foo-methods.Rd in pkga):
In package pkgb
#' @param x \code{character}.
#' @return \code{character}.
#' @rdname foo-methods
#' @aliases foo,character-method
#' @export
setMethod(
  f = "foo",
  signature = signature(x = "character"),
  definition = cmpfun(function(
    x,
    ...
  ) {
    return(x)
  }, options = list(suppressAll = TRUE))
)
To be more precise, this is the error that's thrown/written to the file 00install.out:
Error : Q:/pkgb/man/foo-methods.Rd: Sections \title, and \name must exist and be unique in Rd files
ERROR: installing Rd objects failed for package 'pkgb'
Due diligence
I tried to change the values for @rdname and @aliases to foo_pkgb* (instead of foo*), but \title and \name are still set to foo when roxygenizing, and thus the error remains. Any ideas besides manually editing the Rd files generated by roxygenize()?
EDIT 2012-12-01
In light of starting the bounty, the actual question might get a slightly broader flavor:
How can we implement some sort of an "inter-package" check with respect to Rd files and/or how can we consolidate S4 method help files scattered across packages into one single Rd file in order to present a single source of reference to the end-user?
The basic question is indeed "roxygenize"-only.
That's why I had never seen the problem.
While there are good reasons for the roxygenizing approach of package development,
I still see a very good reason not to go there:
Plea for much less extreme roxygenation
The resulting help pages tend to look extremely boring, not only the auto generated *.Rd files but also the rendered result.
E.g.
examples are often minimal, do not contain comments, and are often not well formatted (spacing, new lines, ...)
mathematical issues are rarely explained via \eqn{} or \deqn{}
\describe{..} and similar higher-level formatting is rarely used
Why is that? Because
1) reading and editing roxygen comments is so much more "cumbersome", or at least visually unrewarding, than reading and editing *.Rd files in ESS or RStudio (or another IDE with built-in *.Rd support);
2) if you are used to documentation being the thing that's automatically generated at the end of your package building/checking, you tend not to consider well-written R documentation as something important (but rather your R code, to which all the docs are just a comment :-)
The result of all that: people prefer writing documentation about their functions in vignettes or even blogs, GitHub gists, YouTube videos, or ... where it is very nice at the time of authoring, but is
pretty much detached from the code and bound to get outdated and wither (and hence, via Google searches, mislead your useRs)
--> the original motivation of roxygen, having code and documentation in the same place, is entirely defeated.
I like roxygen and use it extensively at the time I create a new function...
and I keep and maintain it as long as my function is not in a package, or is not exported.
Once I decide to export it,
I run (the ESS equivalent of) roxygenize() once
and from then on take the small extra burden of maintaining a *.Rd file that is well formatted, contains its own comments (for me as author), has many nice examples, has its own revision control (git / svn / ...) history, etc.
I managed to generate NAMESPACE and *.Rd files for S4 methods for generics defined in another package than mine.
It took me the following steps:
Create NAMESPACE by hand as a workaround to a known roxygen2 bug.
Writing a NAMESPACE by hand is not so difficult at all (see the sketch after this list)!
Switch off NAMESPACE generation by roxygen2 in RStudio:
Build > more > Configure build tools > configure roxygen > do not use roxygen2 to generate NAMESPACE.
Import the package containing the generic, and export the S4 methods using exportMethods.
Write separate roxygen2 documentation for each of the S4 methods. Do not combine roxygen2 documentation (as I generally do for different methods of the same generic).
Add explicit roxygen tags @title and @description to the roxygen documentation of the S4 methods. Write @description explicitly, even if its value is identical to that of @title.
That makes it work for me.
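For illustration, a hand-written NAMESPACE for this setup might look like the following sketch (assuming the generic foo lives in pkga, as in the question above):
# NAMESPACE, written by hand
import(pkga)
exportMethods(foo)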

Is it possible to use R package data in testthat tests or run_examples()?

I'm working on developing an R package, using devtools, testthat, and roxygen2. I have a couple of data sets in the data folder (foo.txt and bar.csv).
My file structure looks like this:
/ mypackage
/ data
* foo.txt, bar.csv
/ inst
/ tests
* run-all.R, test_1.R
/ man
/ R
I'm pretty sure 'foo' and 'bar' are documented correctly:
#' Foo data
#'
#' Sample foo data
#'
#' @name foo
#' @docType data
NULL

#' Bar data
#'
#' Sample bar data
#'
#' @name bar
#' @docType data
NULL
I would like to use the data in 'foo' and 'bar' in my documentation examples and unit tests.
For example, I would like to use these data sets in my testthat tests by calling:
data(foo)
data(bar)
expect_that(foo$col[1], equals(bar$col[1]))
And, I would like the examples in the documentation to look like this:
#' @examples
#' data(foo)
#' functionThatUsesFoo(foo)
If I try to call data(foo) while developing the package, I get the error "data set 'foo' not found". However, if I build the package, install it, and load it - then I can make the tests and examples work.
My current work-arounds are to not run the example:
#' @examples
#' \dontrun{data(foo)}
#' \dontrun{functionThatUsesFoo(foo)}
And in the tests, pre-load the data using a path specific to my local computer:
foo <- read.delim(pathToFoo, sep="\t", fill = TRUE, comment.char="#")
bar <- read.delim(pathToBar, sep=";", fill = TRUE, comment.char="#")
expect_that(foo$col[1], equals(bar$col[1]))
This does not seem ideal - especially since I'm collaborating with others - requiring all the collaborators to have the same full paths to 'foo' and 'bar'. Plus, the examples in the documentation look like they can't be run, even though once the package is installed, they can.
Any suggestions? Thanks much.
Importing non-RData files within examples/tests
I found a solution to this problem by peering at the JSONIO package, which obviously needed to provide some examples of reading files other than those of the .RData variety.
I got this to work in function-level examples, and it satisfies both R CMD check mypackage and testthat::test_package().
(1) Re-organize your package structure so that the example data directory is within inst. At some point R CMD check mypackage told me to move non-RData data files to inst/extdata, so in this new structure the data directory is also renamed accordingly.
/ mypackage
/ inst
/ tests
* run-all.R, test_1.R
/ extdata
* foo.txt, bar.csv
/ man
/ R
/ tests
* run-testthat-mypackage.R
(2) (Optional) Add a top-level tests directory so that your new testthat tests are now also run during R CMD check mypackage.
The run-testthat-mypackage.R script should have at minimum the following two lines:
library("testthat")
test_package("mypackage")
Note that this is the part that allows testthat to be called during R CMD check mypackage, and not necessary otherwise. You should add testthat as a "Suggests:" dependency in your DESCRIPTION file as well.
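In the DESCRIPTION file, that is just:
Suggests:
    testthat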
(3) Finally, the secret-sauce for specifying your within-package path:
barfile <- system.file("extdata", "bar.csv", package="mypackage")
bar <- read.csv(barfile)
# remainder of example/test code here...
If you look at the output of the system.file() command, it is returning the full system path to your package within the R framework. On Mac OS X this looks something like:
"/Library/Frameworks/R.framework/Versions/2.15/Resources/library/mypackage/extdata/bar.csv"
The reason this seems okay to me is that you don't hard code any path features other than those within your package, so this approach should be robust relative to other R installations on other systems.
data() approach
As for the data() semantics, as far as I can tell this is specific to R binary (.RData) files in the top-level data directory. So you can circumvent my example above by pre-importing the data files and saving them with the save() command into your data directory. However, this assumes you only need to show an example in which the data is already loaded into R, as opposed to also reproducibly demonstrating the upstream process of importing the files.
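A minimal sketch of that one-off conversion, assuming foo.txt is tab-delimited and bar.csv is a standard CSV sitting in inst/extdata (run from the package root):
# Import the raw files once and store them as .RData in the top-level
# data directory so that data(foo) and data(bar) work after installation.
foo <- read.delim("inst/extdata/foo.txt")
bar <- read.csv("inst/extdata/bar.csv")
save(foo, file = "data/foo.RData")
save(bar, file = "data/bar.RData")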
Per @hadley's comment, the .RData conversion will work well.
As for the broader question of team collaboration with different environments across team members, a common pattern is to agree on a single environment variable, e.g., FOO_PROJECT_ROOT, that everyone on the team will set up appropriately in their environment. From that point on you can use relative paths, including across projects.
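In R, that convention might look like the following sketch (FOO_PROJECT_ROOT is whatever variable name the team agrees on):
# Each team member sets FOO_PROJECT_ROOT in their own environment,
# e.g. in ~/.Renviron; all paths are then resolved relative to it.
root <- Sys.getenv("FOO_PROJECT_ROOT")
foo <- read.delim(file.path(root, "data", "foo.txt"))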
An R-specific approach would be to agree on some data/functions that every team member will set up in their .Rprofile files. That's, for example, how devtools finds packages in non-standard locations.
Last but not least, though it is not optimal, you can actually put developer-specific code in your repository. If @hadley does it, it's not such a bad thing. See, for example, how he activates certain behaviors in testthat in his own environment.
