Convenient way to load (and if needed install) a package in R

A user may work on many PCs. Good code runs no matter which PC it is run on. Assuming one does not want to rely on preference and option files, what is the best way to make sure a package is loaded (and installed first if needed)?
The library command is cool, but the require command is better suited here because it returns whether the package is available. Even require, though, does not get the job done on its own.
Triggering a re-install that is not needed (e.g., in RStudio) causes a prompt to restart the R session, which is why unnecessary installs are best avoided.
One possible trick, A, is to do this (so as not to type the package name too often):
doInstall <- TRUE
toInstall <- c("downloader")
if (doInstall) install.packages(toInstall)
lapply(toInstall, library, character.only = TRUE)
or a worse trick B would be
if (!require(downloader)) {
  install.packages("downloader")
  require(downloader)
}
Is there a "2015 way" of doing it with one command - something like
justdoitall(c("downloader","dplyr"))

Here is an example of installing package zipcode using the pacman approach.
if (!require("pacman")) install.packages("pacman")
pacman::p_load(zipcode)
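The same approach covers the multi-package case from the question, since p_load accepts several package names at once and installs whatever is missing:
if (!require("pacman")) install.packages("pacman")
pacman::p_load(downloader, dplyr)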

Assuming one does not want to rely on preference and option files
That rules out putting anything in .Rprofile or using external packages, so we're stuck with base R to solve your problem. If that's the case, then the answer is that you can't do this much better than what you have written in your question (I prefer B to A).
If you're willing to bend a little bit and require the user to load a package first (which could be done on startup via .Rprofile), there are a few options that do exactly what you want.
installr::require2 and pacman::p_load do what you ask. Disclosure: I am an author/maintainer of pacman. I agree with your sentiment that we shouldn't rely on options or external files, especially if we plan on sharing the code. I use pacman pretty much every day (it has much more use than just installing/loading packages), but for the most part these types of functions should be treated as tools for interactive use. If you want portable, shareable code without worries about whether packages will be available, you will have to resort to something along the lines of what you have in your question.
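For reference, a self-contained base-R version of the justdoitall() idea from the question might look like this (the function name and the CRAN mirror are just placeholders):
justdoitall <- function(pkgs, repos = "https://cloud.r-project.org") {
  # install anything that is missing, then attach everything
  missing <- pkgs[!pkgs %in% rownames(installed.packages())]
  if (length(missing) > 0) install.packages(missing, repos = repos)
  invisible(lapply(pkgs, library, character.only = TRUE))
}
justdoitall(c("downloader", "dplyr"))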

Related

Additional information to ensure R code will run on another computer?

sessionInfo() includes very useful info that will improve the chances of someone being able to run your code on their machine, including:
OS and version
R version
Attached packages
What other info can be provided with an R script to ensure someone else will be able to run it in their environment?
NB please include how to get that info (i.e. what command to run or where to look for it)
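For reference, each of the items listed above can be obtained with base R commands:
Sys.info()[c("sysname", "release")]  # operating system and version
R.version.string                     # R version
sessionInfo()                        # attached packages (with their versions)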
While this is not a complete answer, I tend to include this function with the scripts I send along, as it will download a package if the computer does not have it. Note that this is mainly a suggestion for scripts; for packages, you can explicitly declare which versions of other packages your package depends on.
package_load <- function(packages = NULL, quiet = TRUE,
                         verbose = FALSE, warn.conflicts = FALSE) {
  # download required packages if they're not installed already
  pkgsToDownload <- packages[!(packages %in% installed.packages()[, "Package"])]
  if (length(pkgsToDownload) > 0)
    install.packages(pkgsToDownload, repos = "http://cran.us.r-project.org",
                     quiet = quiet, verbose = verbose)
  # then load them
  for (pkg in packages)
    require(pkg, character.only = TRUE, quietly = quiet,
            warn.conflicts = warn.conflicts)
}
## Example of use
package_load(c('dplyr', 'rgdal'))
This is helpful for one-off scripts, as it gets over the hurdle of a different computer not having the appropriate packages. However, I generally suggest that folks make sure their version of R is up to date as well.
Is this the best solution? Probably not, but it does help with minor scripts you send along to others. For a larger code base, it would probably be better to put together a package or a Docker image.
I think the criteria you listed are the "basics" of script reusability. The next level would be the possible interactions of your script with its environment (e.g., R Shiny scripts use web features, so giving the web browser and version used to produce the script is good practice). Another useful kind of information is comments specifying the expected inputs and outputs.
NB: I would specify "attached packages and their versions", just to be sure...
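A minimal way to record that information alongside a script (the file name is just an example):
# capture the full session state: R version, OS, attached packages and versions
writeLines(capture.output(sessionInfo()), "sessionInfo.txt")
# or check a single package's version
packageVersion("dplyr")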

Are there any good resources/best-practices to "industrialize" code in R for a data science project?

I need to "industrialize" an R code for a data science project, because the project will be rerun several times in the future with fresh data. The new code should be really easy to follow even for people who have not worked on the project before and they should be able to redo the whole workflow quite quickly. Therefore I am looking for tips, suggestions, resources and best-practices on how to achieve this objective.
Thank you for your help in advance!
You can make an R package out of your project, because a package has everything you need for a standalone project that you want to share with others:
Easy to share, download and install
R has a very efficient documentation system for your functions and objects when you work within RStudio. Combined with roxygen2, it enables you to document every function precisely, and it makes the code clearer since you can avoid inline comments (but please add them anyway where needed); see the short roxygen sketch after this list.
You can specify quite easily which dependencies your package needs, so that everyone knows what to install for your project to work. You can also use packrat if you want to mimic Python's virtualenv.
R also provides a long-format documentation system through vignettes, which are similar to a printed notebook: you can display code, text, code results, etc. This is where you will write guidelines and methods on how to use the functions, provide detailed instructions for a certain method, and so on. Once the package is installed, vignettes are automatically included and available to all users.
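As a short illustration of the roxygen2 documentation point above (the function itself is only a toy example):
#' Add two numbers
#'
#' @param x A numeric value.
#' @param y A numeric value.
#' @return The sum of x and y.
#' @export
add_numbers <- function(x, y) {
  x + y
}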
The only downside is the following: since R is a functional programming language, a package consists mainly of functions, plus some other relevant objects (data, for instance), but not really scripts.
More details about that last point: if your project consists of a script that calls a set of functions to do something, the script cannot directly appear within the package. Two options here: a) write a dispatcher function that runs a set of functions to do the job, so that users just have to call one function to run the whole method (not great for maintenance); b) put the whole script in a vignette (see above). With this second method, people just have to write a single R file (which can be copy-pasted from the vignette), which may look like this:
library(mydatascienceproject)
library(...)
...
dothis()
dothat()
finishwork()
That enables you to execute the whole work from a terminal or a remote machine with Rscript, with the following command (using argparse to add arguments, as sketched below):
Rscript myautomatedtask.R --arg1 anargument --arg2 anotherargument
And finally, if you write a bash file calling Rscript, you can automate everything!
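For reference, a minimal sketch of what myautomatedtask.R might contain, assuming the argparse package is installed (the project package and the task functions are the hypothetical ones from the example above):
library(argparse)
library(mydatascienceproject)  # hypothetical project package from above

# define a command-line interface matching the Rscript call above
parser <- ArgumentParser()
parser$add_argument("--arg1", help = "first argument")
parser$add_argument("--arg2", help = "second argument")
args <- parser$parse_args()

dothis()
dothat()
finishwork()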
Feel free to read Hadley Wickham's book about R packages; it is super clear, full of best practices, and a great help in writing your own packages.
One can get lost in the multiple files in the project's folder, so it should be structured properly: link
Naming conventions that I use: first, second.
Set the random seed so that the outputs are reproducible.
Documentation is important: you can use the Roxygen skeleton in RStudio (default Ctrl+Alt+Shift+R).
I usually separate the code into smaller, logically cohesive scripts and use a main.R script that calls the others (see the skeleton after this list).
If you use a special set of libraries, you can consider using packrat. Once you set it up, it manages the installed project-specific libraries for you.
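Putting a few of these tips together, a minimal main.R skeleton might look like this (all file names are illustrative):
set.seed(42)  # fixed seed so the outputs are reproducible

# run the smaller, logically cohesive scripts in order
source("R/load_data.R")
source("R/clean_data.R")
source("R/fit_model.R")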

How can I tell which packages I am not using in my R script?

As my code evolves from version to version, I'm aware that there are some packages for which I've since found better/more appropriate alternatives for the task at hand, or whose purpose was limited to a section of code that I've now phased out.
Is there any easy way to tell which of the loaded packages are actually used in a given script? My header is beginning to get cluttered.
Update 2020-04-13
I've now updated the referenced function to use the abstract syntax tree (AST) instead of using regular expressions as before. This is a much more robust way of approaching the problem (it's still not completely ironclad). This is available from version 0.2.0 of funchir, now on CRAN.
I've just got around to writing a quick-and-dirty function to handle this which I call stale_package_check, and I've added it to my package (funchir).
e.g., if we save the following script as test.R:
library(data.table)
library(iotools)
DT = data.table(a = 1:3)
Then (from the directory with that script) running funchir::stale_package_check('test.R') gives us:
Functions matched from package data.table: data.table
**No exported functions matched from iotools**
Have you considered using packrat?
packrat::clean() would remove unused packages, for example.
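For example, a minimal packrat workflow run from the project directory (all three calls are part of packrat's documented API):
packrat::init()    # create a project-specific library and snapshot it
packrat::status()  # report packages that are installed but no longer used
packrat::clean()   # remove the unused ones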
I've written a command-line script to accomplish this task. You can find it in this GitHub gist. I'm sure there are edge cases that it misses, but it works pretty well, on both R scripts and Rmd files.
My approach is always to close my R script or IDE (i.e., RStudio) and then start it again.
After this I run my function without loading any dependencies/packages beforehand.
This should result in various warning and error messages telling you which functions couldn't be found and executed. These in turn give you hints about which packages need to be loaded beforehand and which ones you can leave out.

R: what's the proper way to overwrite a function from a package?

I am using an R package in which there are 2 functions, f1 and f2 (with f2 calling f1).
I wish to overwrite function f1.
Since R 2.15 and the mandatory use of namespaces in packages, if I just source the new function, it is indeed available in the global environment (i.e., calling f1(x) in the console returns the new result). However, calling f2 will still use the packaged f1 (because the namespace modifies the search path and seals it, as explained here in the Writing R Extensions manual).
What is the proper way to completely replace f1 with the new one (apart from rebuilding the package)? This can be useful in several situations, for instance if there is a bug in a package that you have not developed, or if you don't want to rebuild your packages every day while they are still under development.
I know about function
assignInNamespace("f1",f1,ns="mypackage")
However, the help page ?assignInNamespace is a bit enigmatic and seems to discourage people from using it without giving more information, and I couldn't find any best-practice recommendations in the official CRAN documentation. Moreover, after calling this function:
# Either of these 2 calls returns the new function
mypackage::f1
getFromNamespace(x = "f1", envir = as.environment("package:mypackage"))
# while this one still returns the old packaged version
getFunction(name = "f1", where = as.environment("package:mypackage"))
This is very disturbing. How is the search path affected?
For now I am doing ugly things such as modifying the lockEnvironment function so that library doesn't lock the package namespace, which lets me lock it at a later stage once I have replaced f1 (this really does not seem like good practice).
So basically I have 2 questions:
what exactly does assignInNamespace do in the case of a package namespace (which is supposed to be locked)?
what are the good practices?
Many thanks for sharing your experience.
EDIT: people interested in this question might find this blog post extremely interesting.
There are lots of different cases here.
If it's a bug in someone else's package
Then the best practice is to contact the package maintainer and persuade them to fix it. That way everyone gets the fix, not just you.
If it's a bug while developing your own package
Then you need a workflow where rebuilding packages is easy, like using the devtools package and typing build(mypackage), or clicking a button ("Build & Reload" in RStudio; "R CMD build" in Architect).
If you just want different behaviour from an existing package
If it isn't a bug as such, or the package maintainer won't make the fix that you want, then you'll have to maintain your own copy of f1. Using assignInNamespace to override it in the existing package is OK for exploring, but it's a bit hacky, so it isn't really suitable as a permanent solution.
Your best bet is to create your own package containing copies of f1 and f2. This is less effort than it sounds, since you can just define f2 <- existingpackage::f2.
In response to the comment:
The second and third cases make sense if you are alone, but they require building and installing the packages, which is tricky in the case of my organisation, as the packages are deployed on dozens of computers and I need root access to update them.
So take a copy of the existing package source, apply your patch, and host it on your company network, GitHub, or Bitbucket. Then the updated package can be installed programmatically via
install.packages("//some/network/path/mypackage_0.0-1.tar.gz", repos = NULL)
or
library(devtools)
install_github("mypackage", "mygithubusername")
Since the installation is just a line of code, you can easily push it to as many machines as you like. You don't need root access either; just install the package to a library folder that doesn't require root access to write to. (Read the Startup and .libPaths help pages for how to define a new library.) You'll need network access to those machines, but I can't help you with that: speak to your network administrator or your boss or whoever can get you permission.
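As a sketch of that last point, installing to a user-writable library without root access could look like this (the library path is just an example):
lib <- "~/R/library"  # any folder you can write to
dir.create(lib, recursive = TRUE, showWarnings = FALSE)
.libPaths(c(lib, .libPaths()))  # put it first on the search path
install.packages("//some/network/path/mypackage_0.0-1.tar.gz",
                 repos = NULL, lib = lib)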
In case the function has no explicit binding within the package:
# unlock the namespace and its bindings, swap in the new function, then re-lock
rlang::env_unlock(env = asNamespace('mypackage'))
rlang::env_binding_unlock(env = asNamespace('mypackage'))
assign('f1', f1, envir = asNamespace('mypackage'))
rlang::env_binding_lock(env = asNamespace('mypackage'))
rlang::env_lock(asNamespace('mypackage'))

emacs ess crashes when trying to access help

My emacs/ESS session crashes when I try to access help. This happens when I have two packages loaded that provide the same function; for example:
library(lubridate)
library(data.table)
?month
In RGui, an interface pops up and asks me to choose which package I want help from; Emacs just crashes. A similar issue happens with install.packages, but there is a way to specify the mirror (see: Is there a way to install R packages using emacs?).
Is there a similar trick with help?
Well, there is no foolproof solution for the time being, as nobody really understands why these crashes happen. I assume you are on Windows, right?
There are plans in ESS to completely internalize all the help (and other) calls in order not to depend on R dialogs; hopefully in the next version.
For the time being, put this into your .Rprofile:
# copy the internal help-topic search function,
# force it to return only the first match,
# and put the patched copy back into the utils namespace
tis <- utils:::index.search
formals(tis)[["firstOnly"]] <- TRUE
assignInNamespace("index.search", tis, "utils")
It basically makes the help system pick the first package in which the topic is found. In your case the month help page in the data.table package will be ignored. Not a big deal, as common topic names are quite rare anyway.
I found out that loading library(tcltk) solves this problem: the menu appears even when called from emacs+ESS. I added library(tcltk) to my Rprofile.site and now everything works great, both for install.packages() and for accessing help when multiple packages load the same function.
