I would like users to be able to select a directory interactively in R. The solution needs to work on different platforms (at least on Linux, Windows and Mac machines that have a graphical desktop environment). And it needs to be robust enough to work on a variety of computers. I've run into problems with the variants I know of:
file.choose() unfortunately only works for files - it won't let you select a directory. Other than this limitation, file.choose is a good example of the type of solution I'm looking for: it works across platforms and has no external dependencies that may be unavailable on a particular computer.
choose.dir() only works on Windows.
tk_choose.dir() from library(tcltk) was my preferred solution until recently. But I've had users report that this throws an error
log4cplus:ERROR No appenders could be found for logger (AdSyncNamespace).
log4cplus:ERROR Please initialize the log4cplus system properly.
which we tracked back to the Autodesk360 software being installed, which for some reason interferes with tcltk. So this is not a suitable solution unless there is a fix for it. (The only remedy I've found by googling is to uninstall Autodesk360, which won't work for users who installed it because they actually use it.)
This answer suggests the following as a possible alternative:
library(rJava)
library(rChoiceDialogs)
jchoose.dir()
But, as an example of the sort of thing that can go wrong with this, when I tried to install.packages("rJava") I got:
checking whether JNI programs can be compiled... configure: error:
Cannot compile a simple JNI program. See config.log for details.
Make sure you have Java Development Kit installed and correctly
registered in R. If in doubt, re-run "R CMD javareconf" as root.
ERROR: configuration failed for package ‘rJava’
* removing ‘/home/dominic/R/x86_64-pc-linux-gnu-library/3.3/rJava’
Warning in install.packages :
  installation of package ‘rJava’ had non-zero exit status
I managed to fix this on my own machine (Linux running OpenJDK) by installing the OpenJDK compiler via the Linux package manager and then running sudo R CMD javareconf. But I can't expect random users with varying levels of computer expertise to jump through hoops just so that they can select a directory. Even if they do manage to fix it, it looks bad when every other piece of software they use manages to open a directory-selection dialogue without any problems.
So my question: Is there a robust method that can reliably be expected to "just work" (like file.choose does for files) on a variety of platforms, and that does not expect the end user to be computer literate enough to solve these kinds of issues (such as incompatibilities with Autodesk360 or unresolved Java dependencies)?
In the time since posting this question and an earlier version of this answer, I've managed to test the various options that have been suggested on a range of computers. This process has converged on a fairly simple solution. The only cases I have found where tcltk::tk_choose.dir() fails due to conflicts are on Windows computers running Autodesk software. But on Windows, we have utils::choose.dir available instead. So the answer I am currently running with is:
choose_directory = function(caption = 'Select data directory') {
if (exists('choose.dir', where = asNamespace('utils'), mode = 'function')) {
# utils::choose.dir() is only available on Windows builds of R
utils::choose.dir(caption = caption)
} else {
tcltk::tk_choose.dir(caption = caption)
}
}
For completeness, I think it is useful to summarise some of the issues with other approaches and why they do not meet the criteria of being generally robust on a variety of platforms (including robustness against potentially unresolved external dependencies that can't be fixed from within R and that may require administrator privileges and/or expertise to fix):
easycsv::choose_dir in Linux depends on zenity, which may not be available.
rstudioapi::selectDirectory requires that we are running an RStudio version greater than 1.1.287.
rChoiceDialogs::rchoose.dir requires not only that a Java runtime environment is installed, but also that a Java compiler is installed and configured correctly to work with rJava.
utils::menu does not work if the R function is run from the command line, rather than in an interactive session. Also on Linux X11 it frequently leaves an orphan window open after execution, which can't be readily closed.
gWidgets2::gfile has an external dependency on gtk2, tcltk, or Qt. Resolving these dependencies was found to be non-trivial in some cases.
Archived earlier version of this answer
Finally, an earlier version of this answer contained some longer code that tries out several possible solutions to find one that works. Although I have settled on the simple version above, I leave this version archived here in case it proves useful to someone else.
What it tries:
Check whether the function utils::choose.dir exists (will only be available on Windows). If so, use that
Check whether the user is working from within RStudio version 1.1.287 or greater. If so use the RStudio API.
Check if we can load the tcltk package and then open and close a tcltk window without throwing an error. If so, use tcltk.
Check whether we can load gWidgets2 and the RGtk2 widgets. If so, use gWidgets2. I don't try to load the tcltk widgets here, because if they worked, presumably we would already be using the tcltk package. I also do not try to load the Qt widgets, as they seem somewhat unmaintained and are not currently available on CRAN.
Check if we can load rJava and rChoiceDialogs. If so, use rChoiceDialogs.
If none of the above are successful, use a fallback position of requesting the directory name at the console.
Here's the longer version of the code:
# First a helper function to load packages, installing them first if necessary
# Returns logical value for whether successful
ensure_library = function (lib.name){
x = require(lib.name, quietly = TRUE, character.only = TRUE)
if (!x) {
install.packages(lib.name, dependencies = TRUE, quiet = TRUE)
x = require(lib.name, quietly = TRUE, character.only = TRUE)
}
x
}
select_directory_method = function() {
# Tries out a sequence of potential methods for selecting a directory to find one that works
# The fallback default method if nothing else works is to get user input from the console
if (!exists('.dir.method')){ # if we already established the best method, just use that
# otherwise lets try out some options to find the best one that works here
if (exists('choose.dir', where = asNamespace('utils'), mode = 'function')) {
.dir.method = 'choose.dir'
} else if (ensure_library('rstudioapi') && rstudioapi::isAvailable() && rstudioapi::getVersion() > '1.1.287') {
.dir.method = 'RStudioAPI'
} else if (ensure_library('tcltk') &&
!inherits(try({tt <- tktoplevel(); tkdestroy(tt)}, silent = TRUE), "try-error")) {
.dir.method = 'tcltk'
} else if (ensure_library('gWidgets2') && ensure_library('RGtk2')) {
.dir.method = 'gWidgets2RGtk2'
} else if (ensure_library('rJava') && ensure_library('rChoiceDialogs')) {
.dir.method = 'rChoiceDialogs'
.dir.method = 'rChoiceDialogs'
} else {
.dir.method = 'console'
}
assign('.dir.method', .dir.method, envir = .GlobalEnv) # remember the chosen method for later
}
return(.dir.method)
}
choose_directory = function(method = select_directory_method(), title = 'Select data directory') {
switch (method,
'choose.dir' = choose.dir(caption = title),
'RStudioAPI' = selectDirectory(caption = title),
'tcltk' = tk_choose.dir(caption = title),
'rChoiceDialogs' = rchoose.dir(caption = title),
'gWidgets2RGtk2' = gfile(type = 'selectdir', text = title),
readline('Please enter directory path: ')
)
}
Here is a simple directory navigation menu (using menu() from utils):
d <- 1
while (d != 0) {
  a <- unlist(strsplit(getwd(), "/"))
  b <- list.dirs(recursive = FALSE, full.names = FALSE)
  current <- paste("..", a[length(a) - 1], a[length(a)], sep = "/")
  d <- menu(c("..", b), title = current, graphics = TRUE)
  if (d == 0) break                     # user cancelled the menu
  if (d == 1) {
    # go up one level
    setwd(paste0(paste(a[1:(length(a) - 1)], collapse = "/"), "/"))
  } else {
    # descend into the selected subdirectory
    setwd(paste0(paste(a, collapse = "/"), "/", b[d - 1]))
  }
}
Note: I did not (yet) test it under different systems. Here is what the documentation says:
If graphics = TRUE and a windowing system is available (Windows, macOS or X11 via Tcl/Tk) a listbox widget is used, otherwise a text menu. It is an error to use menu in a non-interactive session.
One limitation: The title = can only be a single line.
You can use the choose_dir function from easycsv.
It works on Windows, Linux and OSX:
easycsv::choose_dir() # can be run without parameters to prompt a folder selection window
For some use cases, a little trick might be to use dirname() around file.choose():
dir <- dirname(file.choose())
This will return the directory. It does, however, require at least one file to be present in the directory.
Suggestion for adaptation of choose_directory() as mentioned in my comment (06.09.2018 RFelber):
choose_directory <- function(ini_dir = getwd(),
method = select_directory_method(),
title = 'Select data directory') {
switch(method,
'choose.dir' = choose.dir(default = ini_dir, caption = title),
'RStudioAPI' = selectDirectory(path = ini_dir, caption = title),
'tcltk' = tk_choose.dir(default = ini_dir, caption = title),
'rChoiceDialogs' = rchoose.dir(default = ini_dir, caption = title),
'gWidgets2RGtk2' = gfile(type = 'selectdir', text = title, initial.dir = ini_dir),
readline('Please enter directory path: ')
)
}
My server.R contains the following code for dynamically installing packages when needed:
package <- input$chip
if (!require(package, character.only=T, quietly=T)) {
source("https://bioconductor.org/biocLite.R")
biocLite(package, ask = F, suppressUpdates = T, suppressAutoUpdate = T)
library(package, character.only=T)
}
ui.R has a select input element where the user can select one of the following bioconductor packages:
selectInput(inputId = 'chip', label='Chip', choices=c('Mouse Gene 1.0'='mogene10sttranscriptcluster.db',
'Mouse Gene 2.0'='mogene20sttranscriptcluster.db',
'Human Gene 1.0'='hugene10sttranscriptcluster.db',
'Human Genome U133A 2.0'='hgu133a2.db'))
So, based on what chip the user selects, the corresponding annotation package should get loaded, and if it is not already installed, it should install it.
This works on my local machine. But when I try to deploy my app on shinyapps.io, I get the following error:
Error:
* Application depends on package "package" but it is not installed. Please resolve before continuing.
I know that it is unable to recognize the package in biocLite(package, ask = F, suppressUpdates = T, suppressAutoUpdate = T). The deployment process thinks that package is a library name and not a variable and is unable to evaluate its value.
Is there any way to resolve this? Or do I have to explicitly load all required packages? The problem with explicitly loading the annotation packages is that these packages are so big they take up a lot of memory, which is why I wanted to load these packages only when required.
An alternative is to use an if-else chain or a switch statement to install packages based on the condition:
package <- switch(input$chip,
  'mogene10sttranscriptcluster.db' = 'mogene10sttranscriptcluster.db',
  'mogene20sttranscriptcluster.db' = 'mogene20sttranscriptcluster.db',
  'hugene10sttranscriptcluster.db' = 'hugene10sttranscriptcluster.db',
  'hgu133a2.db' = 'hgu133a2.db')
library(package, character.only = TRUE)
But even in this case, the deployment process won't be able to evaluate the package value.
Thanks!
UPDATE:
Taking Yihui's suggestion, I modified my code to:
package <- input$genome
if(!do.call(require, list(package = package, character.only = T, quietly = T))){
do.call(biocLite, list(pkgs = package, ask = F, suppressUpdates = T, suppressAutoUpdate = T))
do.call(library, list(package = package, character.only = TRUE))
}
The application is able to deploy now, but it throws me this error:
Error: unable to install packages
Unfortunately, you have to fool the shinyapps (or rsconnect) package a bit so that it does not detect package as a literal package name. For example, you may use do.call():
do.call(library, list(package = package, character.only = TRUE))
The ShinyApps.io server does not allow you to install packages on the fly (strictly speaking, this is not true, but I don't want to show you how). You have to declare all packages you need in the app as dependencies beforehand. Again, it is a hack:
if (FALSE) {
library(mogene10sttranscriptcluster.db)
library(mogene20sttranscriptcluster.db)
library(hugene10sttranscriptcluster.db)
library(hgu133a2.db)
}
Then ShinyApps.io will detect these packages as dependencies and pre-install them for you. What you need to do in your app is simply load them, and you don't need to install them by yourself.
I wanted to try some new package. I installed it, it required a lot of dependencies, so it installed plenty of other packages. I tried it and I am not impressed - now I would like to uninstall that package including all the dependencies!
Is there any way to remove given packages including all dependencies which are not needed by any other package in the system?
I looked at ?remove.packages but there is no option to do this.
Here is some code that will allow you to remove a package and its unneeded dependencies. Note that its interpretation of "unneeded" dependent packages is the set of packages that this package depends on but that are not used in any other package. This means that it will also default to suggesting to uninstall packages that have no reverse dependencies. Thus I've implemented it as an interactive menu (like in update.packages) to give you control over what to uninstall.
library("tools")
removeDepends <- function(pkg, recursive = FALSE){
d <- package_dependencies(db = installed.packages(), recursive = recursive)
depends <- if(!is.null(d[[pkg]])) d[[pkg]] else character()
needed <- unique(unlist(d[!names(d) %in% c(pkg,depends)]))
toRemove <- depends[!depends %in% needed]
if(length(toRemove)){
toRemove <- select.list(c(pkg,sort(toRemove)), multiple = TRUE,
title = "Select packages to remove")
remove.packages(toRemove)
return(toRemove)
} else {
invisible(character())
}
}
# Example
install.packages("YplantQMC") # installs an unneeded dependency "LeafAngle"
c("YplantQMC","LeafAngle") %in% installed.packages()[,1]
## [1] TRUE TRUE
removeDepends("YplantQMC")
c("YplantQMC","LeafAngle") %in% installed.packages()[,1]
## [1] FALSE FALSE
Note: The recursive option may be particularly useful. If package dependencies further depend on other unneeded packages, setting recursive = TRUE is vital. If dependencies are shallow (i.e., only one level down the dependency tree), this can be left as FALSE (the default).
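For example, using the function defined above on the same example package, but also considering second-level dependencies:
removeDepends("YplantQMC", recursive = TRUE)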
There is in fact a function remove.packages() in base R. It lives in the utils package (which is normally already attached):
library(utils)
remove.packages("packageName")
It's not entirely clear to me how much recursive cleanup this function does.
There are base R ways to handle this, but I'm going to recommend a package (I know you're trying to get rid of these). I'm recommending this package for two reasons: (1) it solves two problems you're having, and (2) Dason K. and I are developing this package (full disclosure). This package's value lies in its consistent, easy-to-remember function names. It also does some combined operations. Note that you could do all of this in base R, but this question is already pretty localized, so I'm going to use a tool that makes answering easier.
This package will:
allow you to delete package and dependencies
allow you to install packages in a temporary directory rather than main library
The caveat is that you can't be 100% certain that the package dependency wasn't already there, installed by the user previously. Therefore, I would take care at every step of this solution that you're not deleting things that are important. This solution relies on two factors: (1) pacman and (2) file.info. We'll assume that dependencies that were modified within a certain (user-defined) threshold of time are indeed unwanted packages. Note the word assume here.
I made this reproducible for the folks at home in that the answer will randomly install a package from CRAN with additional dependencies (it installs a package you do not already have locally that has 3 or more dependencies; I chose it randomly so as not to single out any package).
Making a reproducible example
library(pacman)
(available <- p_cran())
(randoms <- setdiff(available, p_lib()))
(mypackages <- p_lib())
ndeps <- 1
while(ndeps < 3) {
package <- sample(randoms, 1)
deps <- unlist(p_depends(package, character.only=TRUE), use.names=FALSE)
ndeps <- length(setdiff(deps, mypackages))
}
package
p_install(package, character.only = TRUE)
Uninstalling package
We will assign the package name from the first part to package, or the OP can use the unwanted package they installed and assign that to package (my random package happened to be package <- "OrdinalLogisticBiplot"). This deletion process should, ideally, be done in a clean R session with no add-on packages (except pacman) loaded.
## function to grab file info date/time modified
infograb <- function(x) file.info(file.path(p_path(), x))[["mtime"]]
## determine the differences in times modified for "package"
## and all other packages in library
diffs <- as.numeric(infograb(package)) - sapply(p_lib(), infograb)
## user defined threshold
threshold <- 15
## determine packages just installed within the time frame of the unwanted package
(delete_deps <- diffs[diffs < threshold & diffs >= 0])
## recursively find all packages that could have been installed
potential_depends <- unlist(lapply(unlist(p_depends(package, character=TRUE)),
p_depends, character=TRUE, recursive=TRUE))
## delete packages that are both on the lists of (1) installed within time
## frame of unwanted package and a dependency of that package
p_delete(intersect(names(delete_deps), potential_depends), character.only = TRUE)
This approach makes some big assumptions.
A better approach from the get go
p_temp(package_to_try)
This allows you to try it out first and not have it muddy your local library.
If you're unimpressed with pacman you can use the method described above to delete it.
Here is a quick solution to have a look at the packages that are not required by any other locally installed packages.
library(pacman)
ip <- installed.packages()[,1]
deps <- lapply(1:length(ip), function(i) tryCatch(p_depends(ip[i], local = TRUE)$Imports, error = function(e) return(NULL)))
packages.on.which.things.depend <- sort(unique(unlist(deps)))
packages.on.which.nothing.depends <- setdiff(ip, packages.on.which.things.depend)
packages.on.which.nothing.depends
Then, have a quick look at it and remove the ones you are not using (just be careful and do not try to uninstall base).
After you have determined which ones you use and which you don’t use, you may proceed with something like
i.need <- c("AER", "car", "devtools", "glmnet", "gmm", "Hmisc", "pacman", "plm", "RcppArmadillo", "RcppEigen", "rmarkdown", "rugarch", "base", "datasets")
un <- setdiff(packages.on.which.nothing.depends, i.need)
un
remove.packages(un)
Rinse and repeat until there are no unneeded orphaned packages. R should not allow you to remove built-in system packages.
Is anyone aware of a package that downloads a dataset from the internet during the installation process and then prepares and saves it so that it is available when loading the package using library(packageName)? Are there any drawbacks in this approach (besides the obvious one that package installation will fail if the data source is unavailable or the data format has changed)?
EDIT: Some background. The data is three tab-separated files in a ZIP archive, owned by a federal statistics office and generally freely accessible. I have R code which downloads, extracts and prepares the data; in the end, three data frames are created which could be saved in .RData format.
I am thinking about creating two packages: A "data" package that provides the data, and a "code" package that operates on it.
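(For concreteness, the preparation code is roughly of the following shape; the URL and file names here are just placeholders:)
tmp <- tempfile(fileext = ".zip")
download.file("http://example.org/statistics.zip", tmp)   # placeholder URL
files <- unzip(tmp, exdir = tempdir())                    # the three tab-separated files
df1 <- read.delim(files[1])
df2 <- read.delim(files[2])
df3 <- read.delim(files[3])
save(df1, df2, df3, file = "prepared_data.RData")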
I did this mockup before, while you were posting your edit. I presume it would work, but it is not tested. I've commented it so you can see what you would need to change. The idea here is to check whether an expected object is available in the current working environment. If it is not, check whether the file that the data can be found in is in the current working directory. If that is not found, prompt the user to download the file, then proceed from there.
myFunction <- function(this, that, dataset = NULL) {
# We're giving the user a chance to specify the dataset.
# Maybe they have already downloaded it and saved it.
if (is.null(dataset)) {
# Check to see if the object is already in the workspace.
# If it is not, check to see whether the .RData file that
# contains the object is in the current working directory.
if (!exists("OBJECTNAME", where = 1)) {
if (isTRUE(list.files(
pattern = "^DATAFILE.RData$") == "DATAFILE.RData")) {
load("DATAFILE.RData")
# If neither of those are successful, prompt the user
# to download the dataset.
} else {
ans = readline(
"DATAFILE.RData dataset not found in working directory.
OBJECTNAME object not found in workspace. \n
Download and load the dataset now? (y/n) ")
if (ans != "y")
return(invisible())
# I usually use RCurl in case the URL is https
require(RCurl)
baseURL = c("http://some/base/url/")
# Here, we actually download the data
temp = getBinaryURL(paste0(baseURL, "DATAFILE.RData"))
# Here we load the data
load(rawConnection(temp), envir=.GlobalEnv)
message("OBJECTNAME data downloaded from \n",
paste0(baseURL, "DATAFILE.RData \n"),
"and added to your workspace\n\n")
rm(temp, baseURL)
}
}
dataset <- OBJECTNAME
}
TEMP <- dataset
## Other fun stuff with TEMP, this, and that.
}
Two packages, hosted at Github
Here's another approach, building on the comments between #juba and me. The basic concept is to have, as you describe, one package for the code and one for the data. This function would be part of the package that contains your code. It will:
Check to see if the data package is installed
Check to see if the version of the data package you have installed matches the version at Github, which we are going to assume is the most up to date version.
When it fails any of the checks, it asks the user if they want to update their installation of the package. In this case, for demonstration, I've linked to one of my packages in progress at Github. This should give you an idea of what you need to substitute to get it to work with your own package once you've hosted it there.
CheckVersionFirst <- function() {
# Check to see if installed
if (!"StataDCTutils" %in% installed.packages()[, 1]) {
Checks <- "Failed"
} else {
# Compare version numbers
require(RCurl)
temp <- getURL("https://raw.github.com/mrdwab/StataDCTutils/master/DESCRIPTION")
CurrentVersion <- gsub("^\\s|\\s$", "",
gsub(".*Version:(.*)\\nDate.*", "\\1", temp))
if (packageVersion("StataDCTutils") == CurrentVersion) {
Checks <- "Passed"
}
if (packageVersion("StataDCTutils") < CurrentVersion) {
Checks <- "Failed"
}
}
switch(
Checks,
Passed = { message("Everything looks OK! Proceeding!") },
Failed = {
ans = readline(
"'StataDCTutils is either outdated or not installed. Update now? (y/n) ")
if (ans != "y")
return(invisible())
require(devtools)
install_github("StataDCTutils", "mrdwab")
})
# Some cool things you want to do after you are sure the data is there
}
Try it out with CheckVersionFirst().
Note: This would succeed only if you religiously remember to update the version number in your DESCRIPTION file every time you push a new version of the data to GitHub!
So, to clarify/recap/expand, the basic idea would be to:
Periodically push the updated version of your data package to Github, being sure to change the version number of the data package in its DESCRIPTION file when you do so.
Integrate this CheckVersionFirst() function as an .onLoad event in your code package (obviously modify the function to match your account and package name); a minimal sketch of this hook follows this list.
Change the commented line that reads # Some cool things you want to do after you are sure the data is there to reflect the cool things you actually want to do, which would probably start with library(YOURDATAPACKAGE) to load the data....
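A minimal sketch of the .onLoad hook mentioned above, placed by convention in the code package's zzz.R (this assumes CheckVersionFirst() is defined in that package):
.onLoad <- function(libname, pkgname) {
  # run the installation/version check whenever the code package is loaded
  CheckVersionFirst()
}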
This method may not be efficient, but it is a good workaround. If you are making a package that needs regularly updated data, first make a package which holds that data. It does not need any functions, but I like the concept of a setter (which you might not need in this case) and a getter.
Then when you make your package, have the 'data'-package as a dependency. This way, whenever someone installs your package, he/she will always have the latest data.
On your part, you'll just have to swap out the data in your 'data' package, and upload it to the repo you want.
If you don't know how to build a package, check ?package.skeleton, R CMD build and R CMD check.
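A minimal sketch of that workflow, assuming the three prepared data frames already exist in your workspace (package and object names are placeholders):
# Build a bare 'data' package around the prepared data frames
package.skeleton(name = "myDataPkg", list = c("df1", "df2", "df3"))
# then, from the shell: R CMD build myDataPkg  and  R CMD check myDataPkg_1.0.tar.gz
# In your 'code' package's DESCRIPTION, declare the dependency:
#   Depends: myDataPkg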
I seem to be sharing a lot of code with coauthors these days. Many of them are novice/intermediate R users and don't realize that they have to install packages they don't already have.
Is there an elegant way to call installed.packages(), compare that to the ones I am loading and install if missing?
Yes. If you have your list of packages, compare it to the output from installed.packages()[,"Package"] and install the missing packages. Something like this:
list.of.packages <- c("ggplot2", "Rcpp")
new.packages <- list.of.packages[!(list.of.packages %in% installed.packages()[,"Package"])]
if(length(new.packages)) install.packages(new.packages)
Otherwise:
If you put your code in a package and make them dependencies, then they will automatically be installed when you install your package.
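For example, a hypothetical DESCRIPTION excerpt for such a package (the names are only illustrative) would declare them like this:
Package: myAnalysisTools
Depends: ggplot2, Rcpp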
Dason K. and I have the pacman package that can do this nicely. The function p_load in the package does this. The first line is just to ensure that pacman is installed.
if (!require("pacman")) install.packages("pacman")
pacman::p_load(package1, package2, package_n)
You can just use the return value of require:
if(!require(somepackage)){
install.packages("somepackage")
library(somepackage)
}
I use library after the install because it will throw an exception if the install wasn't successful or the package can't be loaded for some other reason. You can make this more robust and reusable:
dynamic_require <- function(package){
if(eval(parse(text=paste("require(",package,")")))) return(TRUE)
install.packages(package)
return(eval(parse(text=paste("require(",package,")"))))
}
The downside to this method is that you have to pass the package name in quotes, which you don't do for the real require.
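For example, using the function defined above (note the quotes around the package name):
dynamic_require("ggplot2")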
A lot of the answers above (and on duplicates of this question) rely on installed.packages which is bad form. From the documentation:
This can be slow when thousands of packages are installed, so do not use this to find out if a named package is installed (use system.file or find.package) nor to find out if a package is usable (call require and check the return value) nor to find details of a small number of packages (use packageDescription). It needs to read several files per installed package, which will be slow on Windows and on some network-mounted file systems.
So, a better approach is to attempt to load the package using require and install it if loading fails (require will return FALSE if the package isn't found). I prefer this implementation:
using<-function(...) {
libs<-unlist(list(...))
req<-unlist(lapply(libs,require,character.only=TRUE))
need<-libs[req==FALSE]
if(length(need)>0){
install.packages(need)
lapply(need,require,character.only=TRUE)
}
}
which can be used like this:
using("RCurl","ggplot2","jsonlite","magrittr")
This way it loads all the packages, then goes back and installs all the missing packages (which if you want, is a handy place to insert a prompt to ask if the user wants to install packages). Instead of calling install.packages separately for each package it passes the whole vector of uninstalled packages just once.
Here's the same function but with a windows dialog that asks if the user wants to install the missing packages
using<-function(...) {
libs<-unlist(list(...))
req<-unlist(lapply(libs,require,character.only=TRUE))
need<-libs[req==FALSE]
n<-length(need)
if(n>0){
libsmsg<-if(n>2) paste(paste(need[1:(n-1)],collapse=", "),",",sep="") else need[1]
print(libsmsg)
if(n>1){
libsmsg<-paste(libsmsg," and ", need[n],sep="")
}
libsmsg<-paste("The following packages could not be found: ",libsmsg,"\n\r\n\rInstall missing packages?",collapse="")
if(winDialog(type = c("yesno"), libsmsg)=="YES"){
install.packages(need)
lapply(need,require,character.only=TRUE)
}
}
}
if (!require('ggplot2')) install.packages('ggplot2'); library('ggplot2')
"ggplot2" is the package. It checks to see if the package is installed, if it is not it installs it. It then loads the package regardless of which branch it took.
TL;DR you can use find.package() for this.
Almost all the answers here rely on either (1) require() or (2) installed.packages() to check if a given package is already installed or not.
I'm adding an answer because these are unsatisfactory for a lightweight approach to answering this question.
require has the side effect of loading the package's namespace, which may not always be desirable
installed.packages is a bazooka to light a candle -- it will check the universe of installed packages first, then we check if our one (or few) package(s) are "in stock" at this library. No need to build a haystack just to find a needle.
This answer was also inspired by #ArtemKlevtsov's great answer in a similar spirit on a duplicated version of this question. He noted that system.file(package = x) has the desired effect of returning '' if the package isn't installed, and something with nchar > 1 otherwise.
If we look under the hood of how system.file accomplishes this, we can see it uses a different base function, find.package, which we could use directly:
# a package that exists
find.package('data.table', quiet=TRUE)
# [1] "/Library/Frameworks/R.framework/Versions/4.0/Resources/library/data.table"
# a package that does not
find.package('InstantaneousWorldPeace', quiet=TRUE)
# character(0)
We can also look under the hood at find.package to see how it works, but this is mainly an instructive exercise -- the only ways to slim down the function that I see would be to skip some robustness checks. But the basic idea is: look in .libPaths() -- any installed package pkg will have a DESCRIPTION file at file.path(.libPaths(), pkg), so a quick-and-dirty check is file.exists(file.path(.libPaths(), pkg, 'DESCRIPTION')).
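Wrapped up, that quick-and-dirty check might look like the following sketch (the helper name is my own, not from the original answer):
is_installed_quick <- function(pkg) {
  # TRUE if a DESCRIPTION file for pkg exists in any library path
  any(file.exists(file.path(.libPaths(), pkg, "DESCRIPTION")))
}
is_installed_quick("data.table")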
This solution will take a character vector of package names and attempt to load them, or install them if loading fails. It relies on the return behaviour of require to do this because...
require returns (invisibly) a logical indicating whether the required package is available
Therefore we can simply see if we were able to load the required package and if not, install it with dependencies. So given a character vector of packages you wish to load...
foo <- function(x){
for( i in x ){
# require returns TRUE invisibly if it was able to load package
if( ! require( i , character.only = TRUE ) ){
# If package was not able to be loaded then re-install
install.packages( i , dependencies = TRUE )
# Load package after installing
require( i , character.only = TRUE )
}
}
}
# Then try/install packages...
foo( c("ggplot2" , "reshape2" , "data.table" ) )
Although the answer from Shane is really good, for one of my projects I needed to remove the output messages and warnings and install packages automagically. I have finally managed to get this script:
InstalledPackage <- function(package)
{
available <- suppressMessages(suppressWarnings(sapply(package, require, quietly = TRUE, character.only = TRUE, warn.conflicts = FALSE)))
missing <- package[!available]
if (length(missing) > 0) return(FALSE)
return(TRUE)
}
CRANChoosen <- function()
{
return(getOption("repos")["CRAN"] != "#CRAN#")
}
UsePackage <- function(package, defaultCRANmirror = "http://cran.at.r-project.org")
{
if(!InstalledPackage(package))
{
if(!CRANChoosen())
{
chooseCRANmirror()
if(!CRANChoosen())
{
options(repos = c(CRAN = defaultCRANmirror))
}
}
suppressMessages(suppressWarnings(install.packages(package)))
if(!InstalledPackage(package)) return(FALSE)
}
return(TRUE)
}
Use:
libraries <- c("ReadImages", "ggplot2")
for(library in libraries)
{
if(!UsePackage(library))
{
stop("Error!", library)
}
}
# List of packages for session
.packages = c("ggplot2", "plyr", "rms")
# Install CRAN packages (if not already installed)
.inst <- .packages %in% rownames(installed.packages())
if(length(.packages[!.inst]) > 0) install.packages(.packages[!.inst])
# Load packages into session
lapply(.packages, require, character.only=TRUE)
Use packrat so that the shared libraries are exactly the same and not changing other's environment.
In terms of elegance and best practice, I think you're fundamentally going about it the wrong way. The packrat package was designed for these issues. It is developed by RStudio. Instead of your collaborators having to install dependencies and possibly mess up their environment, packrat uses its own directory, installs all the dependencies for your programs in there, and doesn't touch anyone's existing environment.
Packrat is a dependency management system for R.
R package dependencies can be frustrating. Have you ever had to use trial-and-error to figure out what R packages you need to install to make someone else’s code work–and then been left with those packages globally installed forever, because now you’re not sure whether you need them? Have you ever updated a package to get code in one of your projects to work, only to find that the updated package makes code in another project stop working?
We built packrat to solve these problems. Use packrat to make your R projects more:
Isolated: Installing a new or updated package for one project won’t break your other projects, and vice versa. That’s because packrat gives each project its own private package library.
Portable: Easily transport your projects from one computer to another, even across different platforms. Packrat makes it easy to install the packages your project depends on.
Reproducible: Packrat records the exact package versions you depend on, and ensures those exact versions are the ones that get installed wherever you go.
https://rstudio.github.io/packrat/
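A minimal sketch of adopting packrat in a project (the project path is a placeholder; init(), snapshot() and restore() are packrat's core entry points):
install.packages("packrat")
packrat::init("~/myproject")   # give the project its own private package library
# ... install and use packages as usual inside the project ...
packrat::snapshot()            # record the exact package versions in use
packrat::restore()             # reinstall that snapshot on another machine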
This is the purpose of the rbundler package: to provide a way to control the packages that are installed for a specific project. Right now the package works with the devtools functionality to install packages to your project's directory. The functionality is similar to Ruby's bundler.
If your project is a package (recommended) then all you have to do is load rbundler and bundle the packages. The bundle function will look at your package's DESCRIPTION file to determine which packages to bundle.
library(rbundler)
bundle('.', repos="http://cran.us.r-project.org")
Now the packages will be installed in the .Rbundle directory.
If your project isn't a package, then you can fake it by creating a DESCRIPTION file in your project's root directory with a Depends field that lists the packages that you want installed (with optional version information):
Depends: ggplot2 (>= 0.9.2), arm, glmnet
Here's the github repo for the project if you're interested in contributing: rbundler.
You can simply use the setdiff function to get the packages that aren't installed and then install them. In the sample below, we check if the ggplot2 and Rcpp packages are installed before installing them.
unavailable <- setdiff(c("ggplot2", "Rcpp"), rownames(installed.packages()))
install.packages(unavailable)
In one line, the above can be written as:
install.packages(setdiff(c("ggplot2", "Rcpp"), rownames(installed.packages())))
The current version of RStudio (>=1.2) includes a feature to detect missing packages in library() and require() calls, and prompts the user to install them:
Detect missing R packages
Many R scripts open with calls to library() and require() to load the packages they need in order to execute. If you open an R script that references packages that you don’t have installed, RStudio will now offer to install all the needed packages in a single click. No more typing install.packages() repeatedly until the errors go away!
https://blog.rstudio.com/2018/11/19/rstudio-1-2-preview-the-little-things/
This seems to address the original concern of OP particularly well:
Many of them are novice/intermediate R users and don't realize that they have to install packages they don't already have.
Sure.
You need to compare 'installed packages' with 'desired packages'. That's very close to what I do with CRANberries as I need to compare 'stored known packages' with 'currently known packages' to determine new and/or updated packages.
So do something like
AP <- available.packages(contrib.url(repos[i,"url"])) # available at repos[i]
to get all known packages, make a similar call for currently installed packages, and compare that to a given set of target packages.
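A minimal sketch of that comparison, using the default repositories and example target packages (none of this is from the original answer):
AP <- rownames(available.packages())   # all packages known to the repository
IP <- rownames(installed.packages())   # all packages installed locally
target <- c("ggplot2", "Rcpp")         # the 'desired packages'
missing <- setdiff(target, IP)
if (length(missing)) install.packages(intersect(missing, AP))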
The following simple function works like a charm:
usePackage<-function(p){
# load a package if installed, else load after installation.
# Args:
# p: package name in quotes
if (!is.element(p, installed.packages()[,1])){
print(paste('Package:',p,'Not found, Installing Now...'))
install.packages(p, dep = TRUE)}
print(paste('Loading Package :',p))
require(p, character.only = TRUE)
}
(Not mine; I found this on the web some time back and have been using it since then. I'm not sure of the original source.)
I use the following function to install a package if require("<package>") exits with a "package not found" error. It will query both the CRAN and Bioconductor repositories for the missing package.
Adapted from the original work by Joshua Wiley,
http://r.789695.n4.nabble.com/Install-package-automatically-if-not-there-td2267532.html
install.packages.auto <- function(x) {
x <- as.character(substitute(x))
if(isTRUE(x %in% .packages(all.available=TRUE))) {
eval(parse(text = sprintf("require(\"%s\")", x)))
} else {
#update.packages(ask= FALSE) #update installed packages.
eval(parse(text = sprintf("install.packages(\"%s\", dependencies = TRUE)", x)))
}
if(isTRUE(x %in% .packages(all.available=TRUE))) {
eval(parse(text = sprintf("require(\"%s\")", x)))
} else {
source("http://bioconductor.org/biocLite.R")
#biocLite(character(), ask=FALSE) #update installed packages.
eval(parse(text = sprintf("biocLite(\"%s\")", x)))
eval(parse(text = sprintf("require(\"%s\")", x)))
}
}
Example:
install.packages.auto(qvalue) # from bioconductor
install.packages.auto(rNMF) # from CRAN
PS: update.packages(ask = FALSE) & biocLite(character(), ask=FALSE) will update all installed packages on the system. This can take a long time and consider it as a full R upgrade which may not be warranted all the time!
Today, I stumbled on two handy functions provided by the rlang package, namely, is_installed() and check_installed().
From the help page (emphasis added):
These functions check that packages are installed with minimal side effects. If installed, the packages will be loaded but not attached.
is_installed() doesn't interact with the user. It simply returns TRUE or FALSE depending on whether the packages are installed.
In interactive sessions, check_installed() asks the user whether to install missing packages. If the user accepts, the packages are installed [...]. If the session is non interactive or if the user chooses not to install the packages, the current evaluation is aborted.
interactive()
#> [1] FALSE
rlang::is_installed(c("dplyr"))
#> [1] TRUE
rlang::is_installed(c("foobarbaz"))
#> [1] FALSE
rlang::check_installed(c("dplyr"))
rlang::check_installed(c("foobarbaz"))
#> Error:
#> ! The package `foobarbaz` is required.
Created on 2022-03-25 by the reprex package (v2.0.1)
I have implemented the function to install and load required R packages silently. Hope might help. Here is the code:
# Function to Install and Load R Packages
Install_And_Load <- function(Required_Packages)
{
Remaining_Packages <- Required_Packages[!(Required_Packages %in% installed.packages()[,"Package"])];
if(length(Remaining_Packages))
{
install.packages(Remaining_Packages);
}
for(package_name in Required_Packages)
{
library(package_name,character.only=TRUE,quietly=TRUE);
}
}
# Specify the list of required packages to be installed and load
Required_Packages=c("ggplot2", "Rcpp");
# Call the Function
Install_And_Load(Required_Packages);
A quite basic one:
pkgs = c("pacman","data.table")
if(length(new.pkgs <- setdiff(pkgs, rownames(installed.packages())))) install.packages(new.pkgs)
Thought I'd contribute the one I use:
testin <- function(package){
  if (!package %in% rownames(installed.packages())) install.packages(package)
}
testin("packagename")
Regarding your main objective "to install libraries they don't already have", and regardless of whether you use installed.packages(): the following function masks the original require function. It tries to load and check the named package "x"; if it's not installed, it installs it directly, including dependencies, and finally loads it normally. You can rename the function from 'require' to 'library' to maintain integrity. The only limitation is that package names should be quoted.
require <- function(x) {
if (!base::require(x, character.only = TRUE)) {
install.packages(x, dep = TRUE) ;
base::require(x, character.only = TRUE)
}
}
So you can load and install packages the old-fashioned R way:
require ("ggplot2")
require ("Rcpp")
lapply_install_and_load <- function (package1, ...)
{
  #
  # convert arguments to vector
  #
  packages <- c(package1, ...)
  #
  # check if loaded and installed
  #
  loaded <- packages %in% (.packages())
  names(loaded) <- packages
  #
  installed <- packages %in% rownames(installed.packages())
  names(installed) <- packages
  #
  # start loop to determine if each package is installed
  #
  load_it <- function (p, loaded, installed)
  {
    if (loaded[p])
    {
      print(paste(p, "loaded"))
    }
    else
    {
      print(paste(p, "not loaded"))
      if (installed[p])
      {
        print(paste(p, "installed"))
        do.call("library", list(p))
      }
      else
      {
        print(paste(p, "not installed"))
        install.packages(p)
        do.call("library", list(p))
      }
    }
  }
  #
  lapply(packages, load_it, loaded, installed)
}
source("https://bioconductor.org/biocLite.R")
if (!require("ggsci")) biocLite("ggsci")
Using the lapply family and an anonymous-function approach, you may:
Try to attach all listed packages.
Install only the missing ones (using ||'s lazy evaluation).
Attempt to attach again those that were missing in step 1 and installed in step 2.
Print each package final load status (TRUE / FALSE).
req <- substitute(require(x, character.only = TRUE))
lbs <- c("plyr", "psych", "tm")
sapply(lbs, function(x) eval(req) || {install.packages(x); eval(req)})
plyr psych tm
TRUE TRUE TRUE
I use the following, which checks whether a package is installed (updating dependencies if it needs to be installed) and then loads the package.
p<-c('ggplot2','Rcpp')
install_package<-function(pack)
{if(!(pack %in% row.names(installed.packages())))
{
update.packages(ask=F)
install.packages(pack,dependencies=T)
}
require(pack,character.only=TRUE)
}
for(pack in p) {install_package(pack)}
Here's my code for it:
packages <- c("dplyr", "gridBase", "gridExtra")
package_loader <- function(x){
  for (i in seq_along(x)){
    # install the package if it is not already present, then load it
    if (!x[i] %in% rownames(installed.packages())) {
      install.packages(x[i], dep = TRUE)
    }
    require(x[i], character.only = TRUE)
  }
}
package_loader(packages)
library <- function(x){
x = toString(substitute(x))
if(!require(x,character.only=TRUE)){
install.packages(x)
base::library(x,character.only=TRUE)
}}
This works with unquoted package names and is fairly elegant (cf. GeoObserver's answer)
In my case, I wanted a one liner that I could run from the commandline (actually via a Makefile). Here is an example installing "VGAM" and "feather" if they are not already installed:
R -e 'for (p in c("VGAM", "feather")) if (!require(p, character.only=TRUE)) install.packages(p, repos="http://cran.us.r-project.org")'
From within R it would just be:
for (p in c("VGAM", "feather")) if (!require(p, character.only=TRUE)) install.packages(p, repos="http://cran.us.r-project.org")
There is nothing here beyond the previous solutions except that:
I keep it to a single line
I hard code the repos parameter (to avoid any popups asking about the mirror to use)
I don't bother to define a function to be used elsewhere
Also note the important character.only=TRUE (without it, the require would try to load the package p).
Let me share a bit of madness:
c("ggplot2","ggsci", "hrbrthemes", "gghighlight", "dplyr") %>% # What will you need to load for this script?
(function (x) ifelse(t =!(x %in% installed.packages()),
install.packages(x[t]),
lapply(x, require)))
There is a new-ish package (I am a codeveloper), Require, that is intended to be part of a reproducible workflow, meaning the function produces the same output the first time it is run or subsequent times, i.e., the end-state is the same regardless of starting state. The following installs any missing packages (I include require = FALSE to strictly address the original question... normally I leave this on the default because I will generally want them loaded to the search path).
These two lines are at the top of every script I write (adjusting the package selection as necessary), allowing the script to be used by anybody in any condition (including any or all dependencies missing).
if (!require("Require")) install.packages("Require")
Require::Require(c("ggplot2", "Rcpp"), require = FALSE)
You can thus use this in your script or pass it to anyone.