Virtual environment in R?

I've found several posts about best practice, reproducibility and workflow in R, for example:
How to increase longer term reproducibility of research (particularly using R and Sweave)
Complete substantive examples of reproducible research using R
One of the major preoccupations is ensuring portability of code, in the sense that moving it to a new machine (possibly running a different OS) is relatively straightforward and gives the same results.
Coming from a Python background, I'm used to the concept of a virtual environment. When coupled with a simple list of required packages, this goes some way to ensuring that the installed packages and libraries are available on any machine without too much fuss. Sure, it's no guarantee - different OSes have their own foibles and peculiarities - but it gets you 95% of the way there.
Does such a thing exist within R, even if it's not as sophisticated? For example, simply maintaining a plain-text list of required packages and a script that will install any that are missing?
I'm about to start using R in earnest for the first time, probably in conjunction with Sweave, and would ideally like to start in the best way possible! Thanks for your thoughts.

I'm going to use the comment posted by @cboettig in order to resolve this question.
Packrat
Packrat is a dependency management system for R. It gives you three important advantages, all of them focused on your portability needs (a minimal workflow sketch follows the list):
Isolated : Installing a new or updated package for one project won’t break your other projects, and vice versa. That’s because packrat gives each project its own private package library.
Portable: Easily transport your projects from one computer to another, even across different platforms. Packrat makes it easy to install the packages your project depends on.
Reproducible: Packrat records the exact package versions you depend on, and ensures those exact versions are the ones that get installed wherever you go.
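A minimal sketch of the packrat workflow, assuming packrat is installed from CRAN ("dplyr" below is only an example dependency):
install.packages("packrat")
packrat::init()               # give this project its own private library
install.packages("dplyr")     # now installs into the project library
packrat::snapshot()           # record exact versions in packrat.lock
packrat::restore()            # on another machine: rebuild that library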
What's next?
Walkthrough guide: http://rstudio.github.io/packrat/walkthrough.html
Most common commands: http://rstudio.github.io/packrat/commands.html
Using Packrat with RStudio: http://rstudio.github.io/packrat/rstudio.html
Limitations and caveats: http://rstudio.github.io/packrat/limitations.html
Update: Packrat has been soft-deprecated and is now superseded by renv, so you might want to check this package instead.

The Anaconda package manager conda supports creating R environments.
conda create -n r-environment r-essentials r-base
conda activate r-environment
I have had a great experience using conda to maintain different Python installations, both user-specific and several versions for the same user. I have tested R with conda and the Jupyter notebook and it works great, at least for my needs, which include RNA-sequencing analyses using DESeq2 and related packages, as well as data.table and dplyr. There are many Bioconductor packages available in conda via bioconda, and according to the comments on this SO question, it seems like install.packages() might work as well.
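For instance, once the environment is activated, a Bioconductor package can be installed from within R via BiocManager (a sketch; DESeq2 is just the example mentioned above, and the equivalent bioconda route is noted in the comment):
if (!requireNamespace("BiocManager", quietly = TRUE))
    install.packages("BiocManager")
BiocManager::install("DESeq2")   # or, from the shell: conda install -c bioconda bioconductor-deseq2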

It looks like there is another option from RStudio devs, renv. It's available on CRAN and supersedes Packrat.
In short, you use renv::init() to initialize your project library, and use renv::snapshot() / renv::restore() to save and load the state of your library.
I prefer this option to conda R environments because here everything is stored in the file renv.lock, which can be committed to a Git repo and distributed to the team.
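A minimal sketch of that workflow:
install.packages("renv")
renv::init()        # create the project library and an initial renv.lock
# ...install and use packages as usual...
renv::snapshot()    # record the current package versions in renv.lock
renv::restore()     # on another machine: reinstall exactly those versions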

To add to this:
Note:
1. Have Anaconda installed already
2. Assume your working directory is "C:\"
To create the desired environment -> "r_environment_name"
C:\>conda create -n "r_environment_name" r-essentials r-base
To see available environments
C:\>conda info --envs
.
..
...
To activate environment
C:\>conda activate "r_environment_name"
(r_environment_name) C:\>
Launch Jupyter Notebook and let the party begin
(r_environment_name) C:\> jupyter notebook
For a similar "requirements.txt", perhaps this link will help -> Is there something like requirements.txt for R?
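In the meantime, a bare-bones version of that idea is easy to sketch in R; the file name requirements.txt is just an assumed convention borrowed from Python:
required <- readLines("requirements.txt")                      # one package name per line
missing  <- setdiff(required, rownames(installed.packages()))
if (length(missing)) install.packages(missing)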

Check out roveR, the R container management solution. For details, see https://www.slideshare.net/DavidKunFF/ownr-technical-introduction, in particular slide 12.
To install roveR, execute the following command in R:
install.packages("rover", repos = c("https://lair.functionalfinances.com/repos/shared", "https://lair.functionalfinances.com/repos/cran"))
To make full use of the power of roveR (including installing specific versions of packages for reproducibility), you will need access to a laiR. For CRAN, you can use our laiR instance at https://lair.ownr.io; for uploading your own packages and sharing them with your organization, you will need a laiR license. You can contact us at the email address in the presentation linked above.

Related

Why can R be linked to a shared BLAS later even if it was built with `--with-blas = lblas`?

The BLAS section in the R installation and administration manual says that when R is built from source with the configuration parameter --without-blas, it will build Netlib's reference BLAS into a standalone shared library at R_HOME/lib/libRblas.so, alongside the standard R shared library R_HOME/lib/libR.so. This makes it easier for users to switch between and benchmark different tuned BLAS libraries in an R environment. The guide suggests that researchers might use a symbolic link to libRblas.so to achieve this, and this article gives more details on that.
By contrast, when simply installing a pre-compiled binary version of R, either from CRAN's mirrors or from Ubuntu's repository (for Linux users like me), in theory it should be harder to switch between different BLAS libraries without rebuilding R, because a pre-compiled R is configured with --with-blas = (some BLAS library). We can easily check this, either by reading the configuration file at R_HOME/etc/Makeconf or by checking the result of R CMD config BLAS_LIBS. For example, on my machine it returns -lblas, so R was linked against the reference BLAS at build time. As a result, there is no R_HOME/lib/libRblas.so, only R_HOME/lib/libR.so.
However, this R blog post says that it is possible to switch between different BLAS libraries even if R was not installed from source. The author tried ATLAS and OpenBLAS from Ubuntu's repository and then used update-alternatives --config to switch between them. It is also possible to configure and install a tuned BLAS from source, add it to the "alternatives" through update-alternatives --install, and later switch in the same way. The BLAS library (a symbolic link) in this case is found at /usr/lib/libblas.so.3, which is on both Ubuntu's and R's LD_LIBRARY_PATH. I have tested this and it does work! But I am very surprised at how R achieves it. As I said, R should have been tied to the BLAS library configured at build time, i.e., I would expect all BLAS routines to be integrated into R_HOME/lib/libR.so. So why is it still possible to change the BLAS via /usr/lib/libblas.so.3?
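(For reference, this is how I check which BLAS a session actually uses; sessionInfo() reports the loaded BLAS/LAPACK paths in R >= 3.4.0.)
sessionInfo()    # look for the "BLAS:" and "LAPACK:" lines in the output
La_version()     # reports the LAPACK version in use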
Thanks if someone can please explain this.

How can I ensure a consistent R environment among different users on the same server?

I am writing a protocol for a reproducible analysis using an in-house package "MyPKG". Each user will supply their own input files; other than the inputs, the analyses should be run under the same conditions. (e.g. so that we can infer that different results are due to different input files).
MyPKG is under development, so library(MyPKG) will load whichever was the last version that the user compiled in their local library. It will also load any dependencies found in their local libraries.
But I want everyone to use a specific version (MyPKG_3.14) for this analysis while still allowing development of more recent versions. If I understand correctly, "R --vanilla" will load the same dependencies for everyone.
Once we are done, we will save the working environment as a VM to maintain a stable reproducible environment. So a temporary (6 month) solution will suffice.
I have come up with two potential solutions, but am not sure if either is sufficient.
ask the server admin to install MyPKG_3.14 into the default R path and then provide the following code in the protocol:
R --vanilla
library(MyPKG)
....
or
compile MyPKG_3.14 in a specific library, e.g. lib.loc = "/home/share/lib/R/MyPKG_3.14", and then provide
R --vanilla
library(MyPKG)
Are both of these approaches sufficient to ensure that everyone is running the same version?
Is one preferable to the other?
Are there other unforeseen issues that may arise?
Is there a preferred option for standardising the multiple analyses?
Should I include a test of the output of sessionInfo()?
Would it be better to create a single account on the server for everyone to use?
A couple of points:
Use system-wide installations of packages. E.g., the Debian / Ubuntu binary for R (incl. the CRAN ports) will try to use /usr/local/lib/R/site-library (which users can install to as well, if they are added to the group owning the directory). That way everybody gets the same version
Use system-wide configuration, e.g. prefer $R_HOME/etc/ over the dotfiles below ~/. For the same reason, the Debian / Ubuntu package offers softlinks in /etc/R/
Use R's facilities to query its packages (e.g. installed.packages()) to report packages and versions (see the sketch at the end of this answer).
Use, where available, OS-level facilities to query OS release and version. This, however, is less standardized.
Regarding the last point my box at home says
edd@max:~$ lsb_release -a | tail -4
Distributor ID: Ubuntu
Description:    Ubuntu 12.04.1 LTS
Release:        12.04
Codename:       precise
edd@max:~$
which is a start.
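To complement that, here is a small sketch of the R-level reporting from the third point; the output file name is just an example:
pkgs <- installed.packages()[, c("Package", "Version", "LibPath")]
write.csv(pkgs, "package-versions.csv", row.names = FALSE)   # compare this file across users
sessionInfo()                                                # R version, OS, attached packages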

Generating dependency graph of Autotools packages

I have a software system comprising 30+ open-source packages, most of them using the GNU Autotools suite.
Are there tools to automatically generate package-to-package dependency graph? I.e. I'd like to see something like gst-plugins-good -> gst-plugins-base -> gstreamer -> glib.
I don't think so, but you could probably whip something together with this knowledge (a rough sketch follows these steps):
Scan the file named either configure.ac or configure.in in the package's root directory.
Look for a string of the form PKG_CHECK_MODULES([...],[...]...)
The second argument of that macro consists of package requirements of the form package or package >= version separated by whitespace.
The requirement string might not be the same as the package tarball name; a tarball that contains package.pc or package.pc.in provides the package package.
This only works for dependencies that use pkg-config. Some don't and you'll need to keep track of those dependencies by hand.
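A rough sketch of that scan in R (to match the rest of this page); the path is hypothetical, and multi-line macro calls or M4 quoting would need more careful parsing than this:
path  <- "gst-plugins-good/configure.ac"
calls <- grep("PKG_CHECK_MODULES", readLines(path, warn = FALSE), value = TRUE)
print(calls)   # the second macro argument holds the "package >= version" requirements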
Probably not, because this is a hard problem. If there were only one way to build a package, it might not be too bad, but in general that isn't the case. You have the --enable-foo and --with-foo options that you can pass to configure. Those are sometimes package-dependent too, requiring more packages. Most Linux distros (I think, but am not completely sure) maintain these sorts of dependency lists by hand for yum or zypper or apt or whatever the package manager is, and only one layer deep, leaving it up to the package manager to traverse the graph. The packages for the distro are only built one way. It's not unusual for these lists to be broken, either.

R: combining multiple library locations with most up-to-date packages

Question: How do I move all of the most up-to-date R packages into one simple location that R (and everything else) will use from now and forever for my packages?
I have been playing around with R on Ubuntu 10.04 using variously RGedit, RCmdr, R shell, and RStudio. Meanwhile, I have installed packages, updated packages, and re-updated packages via apt, synaptic, install.packages(), etc... which apparently means these packages get placed everywhere, and (with the occasional sudo tossed in) with different permissions.
Currently I have different versions of different (and repeated) packages in:
/home/me/R/i486-pc-linux-gnu-library/2.10
/home/me/R/i486-pc-linux-gnu-library/2.14
/home/me/R/i486-pc-linux-gnu-library/
/usr/local/lib/R/site-library
/usr/lib/R/site-library
/usr/lib/R/library
First - I'm a single user, on a single machine - I don't want multiple library locations, I just want it to work.
Second - I am on an extremely slow connection, and can't keep just downloading packages repeatedly.
So - is there an easy way to merge all these library locations into one simple location? Can I just copy the folders over?
How do I set it in concrete that this is and always will be where anything R related looks for and updates packages?
This is maddening.
Thanks for your help.
Yes, it should almost work to just copy the folders over. But pre-2.14 packages WITHOUT a NAMESPACE file probably won't work in R 2.14 where all packages must have a namespace...
And you'd want to manually ensure you only copy the latest version of each package if you have multiple versions...
If you type .libPaths(), it will tell you where R looks for packages. The first in the list is where new packages are typically installed. I suspect that .libPaths() might return different things from RStudio vs. Rcmdr, etc.
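A quick sketch for auditing the situation from any of those front-ends:
.libPaths()                                           # every library R searches, in order
pkgs <- installed.packages()[, c("Package", "Version", "LibPath")]
dup  <- duplicated(pkgs[, "Package"]) | duplicated(pkgs[, "Package"], fromLast = TRUE)
pkgs[dup, ]                                           # packages installed in more than one place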
After piecing together various bits of info here goes: A complete moron's guide to the R packages directory organization:
NB1 - this is my experience with Ubuntu - your mileage may vary
NB2 - I'm a single user on a single machine, and I like things simple.
Ubuntu puts anything installed via apt, or synaptic in:
/usr/lib/R/site-library
/usr/lib/R/library
directories. The default vanilla R install will try to install packages here:
/usr/local/lib/R/site-library
Since these are system directories the user does not have write privileges to, depending on how you are interacting with R you might be prompted with a friendly "Hey buddy - we can't write there, do you want us to put your packages in your home directory?", which seems innocent and reasonable enough... assuming you never change your GUI or your working environment. If you do, the new GUI/environment might not be looking in the directory where the packages were placed, so it won't find them. (Most interfaces have a way for you to point to where your personal library of packages is, but who wants to muck about in config files?)
What seems to be the best practice for me (and feel free to correct me if I'm wrong) with a default install setup on Ubuntu is to do any package management from a basic R shell as sudo (sudo R) and from there do your install.packages() voodoo. This seems to put packages in the /usr/local/lib/R/site-library directory.
At the same time, update.packages() will update the files in the /usr/lib/R/site-library and /usr/lib/R/library directories, as well as /usr/local/lib/R/site-library.
(As for the /usr/lib/R/ split, it looks like /library/ has the core packages, while /site-library/ holds anything added, assuming it was installed by apt...)
Any packages previously installed in the wrong place can be moved to the /usr/local/lib/R/site-library directory (assuming you are sudoing it) just by moving the directories (thanks @Tommy), but as /usr/lib/R/ is controlled by apt, it's best not to add or subtract anything there...
Whew. Anyway - simple enough, and in simple language. Thanks everyone for the help.
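One last sketch: to really set the location in concrete, the user library can be pinned once in ~/.Renviron, which every R interface reads at startup (the path below is just an example):
# run once; afterwards every R front-end uses the same personal library
cat("R_LIBS_USER=/home/me/R/library\n", file = "~/.Renviron", append = TRUE)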

Dependency management in R

Does R have a dependency management tool to facilitate project-specific dependencies? I'm looking for something akin to Java's maven, Ruby's bundler, Python's virtualenv, Node's npm, etc.
I'm aware of the "Depends" clause in the DESCRIPTION file, as well as the R_LIBS facility, but these don't seem to work in concert to provide a solution to some very common workflows.
I'd essentially like to be able to check out a project and run a single command to build and test the project. The command should install any required packages into a project-specific library without affecting the global R installation. E.g.:
my_project/.Rlibs/*
Unfortunately, Depends: within the DESCRIPTION file is all you get, for the following reasons:
R itself is reasonably cross-platform, but that means we need this to work across platforms and OSs
Encoding Depends: beyond R packages requires encoding the dependencies in a portable manner across operating systems; good luck encoding even something simple such as 'a PNG graphics library' in a way that can be resolved unambiguously across systems
Windows does not have a package manager
AFAIK OS X does not have a package manager that mixes what Apple ships and what other Open Source projects provide
Even among Linux distributions, you do not get consistency: just take RStudio as an example which comes in two packages (which all provide their dependencies!) for RedHat/Fedora and Debian/Ubuntu
This is a hard problem.
The packrat package is precisely meant to achieve the following:
install any required packages into a project-specific library without affecting the global R installation
It allows installing different versions of the same packages in different project-local package libraries.
I am adding this answer even though this question is 5 years old, because this solution apparently didn't exist yet at the time the question was asked (as far as I can tell, packrat first appeared on CRAN in 2014).
Update (November 2019)
The new R package renv replaced packrat.
As a stop-gap, I've written a new rbundler package. It installs project dependencies into a project-specific subdirectory (e.g. <PROJECT>/.Rbundle), allowing the user to avoid using global libraries.
rbundler on Github
rbundler on CRAN
We've been using rbundler at Opower for a few months now and have seen a huge improvement in developer workflow, testability, and maintainability of internal packages. Combined with our internal package repository, we have been able to stabilize development of a dozen or so packages for use in production applications.
A common workflow:
Check out a project from github
cd into the project directory
Fire up R
From the R console:
library(rbundler)
bundle('.')
All dependencies will be installed into ./.Rbundle, and an .Renviron file will be created with the following contents:
R_LIBS_USER='.Rbundle'
Any R operations run from within this project directory will adhere to the project-specific library and package dependencies. Note that, while this method uses the package DESCRIPTION file to define dependencies, it needn't have an actual package structure. Thus, rbundler becomes a general tool for managing an R project, whether it be a simple script or a full-blown package.
You could use the following workflow:
1) Create a script file which contains everything you want to set up, and store it in your project directory as e.g. projectInit.R
2) source this script from your .Rprofile (or any other file executed by R at startup) with a try statement
try(source("./projectInit.R"), silent=TRUE)
This guarantees that R starts without an error message even when no projectInit.R is found
3) if you start R in your project directory, the projectInit.R file will be sourced if present in the directory and you are ready to go
This is from a Linux perspective, but it should work the same way under Windows and macOS.
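For completeness, here is a hypothetical projectInit.R along these lines: it points R at a project-local library like the my_project/.Rlibs from the question and installs anything missing (the dependency list is just an example):
lib <- file.path(getwd(), ".Rlibs")
dir.create(lib, showWarnings = FALSE)
.libPaths(c(lib, .libPaths()))                      # search the project library first
required <- c("dplyr")                              # example dependency list
missing  <- setdiff(required, rownames(installed.packages(lib.loc = lib)))
if (length(missing)) install.packages(missing, lib = lib)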
