I am wondering if there's a way to use install.packages() or other related functions to do the following: only download the sources (i.e. tar.gz files) of the specified packages and all their dependencies into a specified folder (on Windows).
One reason to do this is: say I have a Linux account that is not enabled for internet access. In order to install the packages on the Linux machine, I would first download all the needed sources on my Windows machine, then ftp them over to the Linux machine, and install them on the Linux machine using
install.packages('/home/me/R/Packages/blah.tar.gz', repos = NULL)
I recently had a problem where I wanted to download all dependencies, and I solved it as follows:
Say I want all the dependencies and imports of ggplot2 and MASS:
getPackages <- function(packs){
  packages <- unlist(
    tools::package_dependencies(packs, available.packages(),
                                which = c("Depends", "Imports"), recursive = TRUE)
  )
  packages <- union(packs, packages)
  packages
}
packages <- getPackages(c("ggplot2", "MASS"))
I can now download the packages to another directory.
download.packages(packages, destdir="whereyouactuallywantthefiles",
                  type="source")
From there if you want to make a local repo on your Linux PC, follow the instructions here.
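In case that link goes stale: the essence of a local source repository is just a src/contrib directory holding the tarballs plus a PACKAGES index written by tools::write_PACKAGES(); a minimal sketch (paths are only examples, reusing /home/me/R/Packages from the question):
## on the Linux machine, after copying the tarballs over
repo <- path.expand("~/localrepo")
dir.create(file.path(repo, "src", "contrib"), recursive = TRUE)
file.copy(Sys.glob("/home/me/R/Packages/*.tar.gz"), file.path(repo, "src", "contrib"))
tools::write_PACKAGES(file.path(repo, "src", "contrib"), type = "source")
## then point install.packages() at the local repo
install.packages("ggplot2", repos = paste0("file://", repo), type = "source")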
Try download.packages(c("xts", "rms"), "c:/TEMP", .....) instead of install.packages(); you can directly give it a target directory in the 2nd argument.
Edit several years later: As stated in other answers and comments, several helper functions have by now been added to R's tools and utils packages. R 3.4.0 will have tools::CRAN_package_db() to download the top-level PACKAGES.rds file (and of course you could just combine download.file() and readRDS() for that too).
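A rough illustration of both routes (the exact URL of the metadata file is my assumption; adjust for your mirror):
## R >= 3.4.0
db <- tools::CRAN_package_db()
head(db[, c("Package", "Version", "Depends", "Imports")])
## or, by hand, grab the repository index and read it
tmp <- tempfile(fileext = ".rds")
download.file("https://cloud.r-project.org/src/contrib/PACKAGES.rds", tmp, mode = "wb")
idx <- readRDS(tmp)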
There are now better options for this in the tools package that comes with base R: package_dependencies(). See, for example, the answer from @sebastian-c and this recent Q&A for a related use case.
There is an unexported getDependencies() function in the utils package. I haven't studied how it works, but combining that with @Dirk's answer should get you most of the way there.
Basically though, it appears you use it like:
utils:::getDependencies(pkgs, dependencies, available, lib)
where pkgs is the character vector of packages to install, dependencies is a character vector of types of dependencies (Depends, Enhances etc) that you want, available is the output from available.packages() and lib is the library location for the packages within which dependencies are evaluated.
If you debug install.packages(), you can see that it basically does the getDependencies() step and then @Dirk's download.packages() step before it actually starts installing anything.
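For illustration only, a hypothetical call mirroring that signature (the function is unexported, so it may change between R versions):
av <- available.packages()
deps <- utils:::getDependencies(c("ggplot2", "MASS"),
                                dependencies = c("Depends", "Imports"),
                                available = av, lib = .libPaths()[1])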
I installed the latest version of R (3.5.0) and copied all the packages from my old version (3.4.3) into the new version's library folder (3.5).
I think this is not a good way to copy and paste the packages, because RStudio asks me to reinstall the packages I call. For example, it gives me this error when I try to load zoo:
Error: package or namespace load failed for ‘zoo’:
package ‘zoo’ was installed by an R version with different internals; it
needs to be reinstalled for use with this R version
What should I do to copy them the right way?
It is much safer to re-build packages for the newer version of R rather than copying them.
The easiest way to re-build all the packages is to save the list of installed packages from the old version of R to a file, then load it into the new version of R and install them:
# In old version of R:
ip <- installed.packages()[,1]
write(ip,"rpackages_in_3.4.3.txt")
q()
# In new version of R:
ip_3.4.3 <- readLines("rpackages_in_3.4.3.txt")
setRepositories(graphics=FALSE, ind=1:6)
install.packages(ip_3.4.3)
There is also the installr package, which might be useful for this purpose:
https://cran.r-project.org/web/packages/installr/installr.pdf
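For instance (assuming installr is installed; as far as I remember, updateR() runs interactively on Windows and offers to copy and update your packages from the old library as part of the upgrade):
install.packages("installr")
installr::updateR()   # walks through the upgrade step by step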
For Windows at least, and perhaps other platforms, what you have done plus what @Ben Bolker suggests is exactly what the manual says most people should do:
For most people the best thing to do is to [...] copy any installed packages to the library folder in the new installation, run update.packages(checkBuilt=TRUE, ask=FALSE) in the new R and then delete anything left of the old installation.
From: https://cran.r-project.org/bin/windows/base/rw-FAQ.html#What_0027s-the-best-way-to-upgrade_003f
However, they also qualify that by saying it is "a matter of taste", so if you find another method that works for you, go with that; I just wanted to point out that the method you tried is valid and even suggested by the documentation.
UPDATE: I just updated R on my own system. Since I use a fixed location for my packages (i.e. no version number in the path), I didn't even have to copy them from one place to another; I only ran the update.packages(checkBuilt = TRUE, ask = FALSE) part and it worked fine.
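For reference, a fixed (version-independent) library can be set up like the sketch below, e.g. in ~/.Rprofile; the path is only an example, and setting R_LIBS_USER in ~/.Renviron achieves the same thing:
# use one library folder across R versions (make sure the directory exists)
.libPaths(c("C:/Users/me/R/library", .libPaths()))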
I have code that tracks objects in images. This code uses a few functions from the clue package, so clue is already installed on my system. Now I have created a package using the same code.
My DESCRIPTION file has the following lines:
Depends: R (>= 3.4.3),
clue
Because clue is already installed, I thought it would not get installed again when I use install("mypackage"), but to my surprise it re-installed the package. I have tried this with other installed packages, too: whether I list them under Depends or Imports, they get re-installed. I do not want to re-install packages that are already on my system. Is there a way to tell the R package installer to avoid re-installing packages that exist on the user's system? Some of these packages are quite large and take a long time to install. In addition, I have installed some packages with binary sources/dependencies that required me to give paths to several libraries.
You can just use
install.packages(..., dependencies = FALSE)
or if you use devtools::install:
install(..., dependencies = FALSE)
I have an R script that I call from Python using rpy2. It uses dplyr, doBy, and ggplot2. The script has install.packages commands for these 3 packages. Even though the packages are already installed, it still downloads, builds, and installs them, which is very time consuming. Is there a way to have it only do the install if the package is not already installed?
Also, I run this in a Docker container, so after the container is instantiated the packages are not there the first time the script runs. Is there a way to pre-load the packages? In that case I would not need the install.packages commands for these packages and my question above would become moot.
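For the Docker part, something like pre-installing at image build time is what I have in mind; the Dockerfile wiring here is just my guess:
# install_packages.R -- run once while the image is built, e.g. from the Dockerfile:
#   RUN Rscript /tmp/install_packages.R
install.packages(c("dplyr", "doBy", "ggplot2"),
                 repos = "https://cloud.r-project.org")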
I always use:
if (!require("package")) install.packages("package")
So if the package isn't available in the library, it will be installed.
install.packages( setdiff(required_packages, installed.packages()[,"Package"]) )
If you define required_packages as a character vector of the names of the packages you need, this line will only install the packages you don't currently have.
So for your case:
required_packages <- c("dplyr", "doBy", "ggplot2")
install.packages( setdiff(required_packages, installed.packages()[,"Package"]) )
I have built an R package, i.e. I have the mypackage.tar.gz file. This package depends on several other packages, all downloadable and installable from any CRAN mirror.
Now I want to install this package on a system where the dependencies are not yet installed, and I would like that the dependencies will be downloaded and installed automatically when I install my package.
I tried:
install.packages("mypackage.tar.gz",type="source",dependencies=TRUE,repos="http://a.cran.mirror")
but it searches for mypackage.tar.gz on the mirror (and obviously does not find it), while if I set repos=NULL it correctly tries to install the local package file (as documented), but then it does not resolve the dependencies.
So my question is: is there a way to perform a 'mixed' installation (local package with online dependencies), or is the only way to manually install all the dependencies first?
You could use install from the devtools package. Just run install("<directory of your package>", dependencies = TRUE). Its help states:
Uses R CMD INSTALL to install the package. Will also try to install dependencies of the package from CRAN, if they're not already installed.
Here, I'm using untar() with devtools::install() and passing in a directory to which the source tarball has been extracted.
d <- tempdir()
untar("mypackage.tar.gz", compressed="gzip", exdir=d)
devtools::install(file.path(d, "mypackage"), dependencies=TRUE,
                  repos="https://cloud.r-project.org/")
If you want to install from multiple repos, you can provide a list of them. For example, to use both Bioconductor and CRAN, you could run:
devtools::install(file.path(d, "mypackage"), dependencies=TRUE,
                  repos=BiocManager::repositories())
NOTE: I can't figure out how to directly pass the tarball to install(), but this solution works in the meantime and leaves no clutter because we extract to a temp directory. It seems install_local() should be able to take a tarball, but I am getting an error when attempting to do so.
If you have already installed your local package, you should be able to use a couple of functions in tools to install the dependencies from CRAN:
library('tools')
installFoundDepends(pkgDepends('mypackage', local = FALSE)$Found)
Note: You can pass args (like repos) through installFoundDepends to install.packages.
You can also use the Depends element from the pkgDepends output to pass directly to install.packages:
install.packages(pkgDepends('mypackage')$Depends)
UPDATE: Apparently it is not possible to install a local package with dependencies=TRUE (i.e. have the dependencies pulled in automatically). This seems odd, since you can do that for a remote package from a repository. The reason (looking at the source code) is that if(is.null(repos) & missing(contriburl)), installation is handled via system calls to R CMD INSTALL, which has no dependency-related arguments.
Such an old question and so many answers, but unfortunately none of them presents the canonical way to address the problem.
R was designed to handle situations like this; no extra packages are needed. One has to create a local repository and then use it, together with a CRAN URL, as a repository source when installing.
Below is code that presents the complete process.
## double check our dependency is not yet installed
## remove.packages("data.table")
"data.table" %in% rownames(installed.packages())
#[1] FALSE
## create our pkg
hello = function() "world"
package.skeleton(name="pkg", list="hello")
#...
cat("Imports: data.table\n", file="pkg/DESCRIPTION", append=TRUE)
unlink(c("pkg/Read-and-delete-me", "pkg/man"), recursive=TRUE)
rm(hello)
## publish our pkg in current working directory
system("R CMD build pkg")
#...
dir.create("src/contrib", recursive=TRUE)
file.rename("pkg_1.0.tar.gz", "src/contrib/pkg_1.0.tar.gz")
#[1] TRUE
tools::write_PACKAGES("src/contrib")
## install pkg and its dependencies automatically
install.packages("pkg", repos=c(
  paste0("file://", getwd()),
  "https://cloud.r-project.org"
))
#Installing package into '/home/jan/R/x86_64-pc-linux-gnu-library/4.2'
#(as 'lib' is unspecified)
#also installing the dependency 'data.table'
#...
## test
library(pkg)
hello()
#[1] "world"
"data.table" %in% rownames(installed.packages())
#[1] TRUE
On Windows one may need to specify type="source" and amend the paths.
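A sketch of what those amendments might look like (the extra slash before the drive letter in the file URL is my assumption; untested):
install.packages("pkg", type="source", repos=c(
  paste0("file:///", getwd()),   # e.g. file:///C:/Users/jan/repo
  "https://cloud.r-project.org"
))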
If you are not opposed to using another package that manages this for you, this can nowadays be achieved easily with the {remotes} package.
install.packages("remotes")
remotes::install_local("mypackage.tar.gz")
You can specify further options, e.g. which dependencies you want (also those in 'Suggests', and so on):
?remotes::install_local
{remotes} itself does not have dependencies afaik, so it does not add too much clutter to your environment.
The forecast package for R has been updated to version 2.12, but currently only Windows binaries for 2.11 are available on CRAN.
How do I install an R package from the source on Windows?
I know this is an old question, but it came up first in my Google search for this same question even though I already knew the answer and just wanted something to copy and paste. That makes it worth improving the answer for future reference. So here is what works for me:
Install Rtools, then:
install.packages(path_to_file, repos = NULL, type="source")
Two answers that may help you avoid the hassle of installing Rtools:
Use http://win-builder.r-project.org/ to build a binary version, download it, and install it using install.packages(..., repos=NULL) (see the sketch after these two options).
If the package has no binary component, i.e. no src directory with C, C++, or Fortran code that needs to be compiled during installation (not true for forecast, but possibly useful some other time), then simply specifying type="source" in the install.packages call, whether installing from a repository or from a local copy of the source tarball (.tar.gz file), will install the source package, even on Windows.
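A rough sketch of both options (the file and package names are hypothetical):
## option 1: install a binary .zip built by win-builder
install.packages("C:/Downloads/forecast_2.12.zip", repos = NULL)
## option 2: a package with no compiled code installs from source without Rtools
install.packages("somepkg", type = "source")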
Start by reviewing the section on Windows packages in the R Installation and Administration manual, then carefully follow the instructions from The Windows toolset appendix.
I know it's usually bad form to mainly provide links in an answer, but these are links to the canonical references on this topic. I simply link to them rather than summarize their contents, since they should be accurate for the most current R release.
I'm not sure if this is the best way, but I found the following method to work (based in part on the answers above):
1) Download the package .tar.gz file
2) Move the package to the directory with your user R libraries (e.g., in my case it was "C:/Users/yourUserName/Documents/R/win-library/3.3")
3) Within RStudio (or elsewhere, probably), run the command install.packages("packageName.tar.gz", repos=NULL, type="source")
That worked for me at least. Hope it's helpful!
Download the package *.tar.gz file.
Make sure you have Rtools installed.
Make sure the R and Rtools paths are added to the PATH environment variable.
Open a command prompt and type R CMD INSTALL packagename.tar.gz.
It should work.
To install a package from a .tar.gz file, follow these steps:
Launch R to get the R command prompt
Type: install.packages("<path_to_tar.gz_file>", repos = NULL)
or launch directly:
R CMD INSTALL <path_to_.tar.gz_file>
You need to have R installed, but you don't need Rtools (as long as the package has no code that needs compiling).