RStudio Cloud installed packages are not saved

Whenever I start a new project I have to install the tidyverse and lubridate packages all over again. I thought I would only have to update packages, not reinstall them every time I start a new project. What am I missing? Do I have to save the installed packages somehow? Also, when I run .libPaths() and go to the packages directory, it is always empty.

In RStudio Cloud you have to create a new workspace, because there is no option to designate a base project in the default workspace:
Create a new workspace.
Create a new project and install the packages you need.
Go back to your new workspace --> settings --> set the project as the base project.
After these steps, every project you create in that workspace will include the packages you installed in your base project.
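For example, in the base project you would install the packages from the question once, then confirm where they landed:
install.packages(c("tidyverse", "lubridate"))
.libPaths()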

Related

Ignore a dependency during R CMD check

I have a package that I developed to enable my team (and perhaps other interested users) to install and use a particular R package (RQDA) that was archived on CRAN. I have hosted this package on GitHub and am trying to set up GitHub Actions so that I have a CI workflow in place.
Whenever I run R CMD check locally everything is fine, but when I push to GitHub the build fails. This is because, by default, Actions tries to install that same (archived) package. Expectedly, this fails.
So, my question is this: is there a way I can disable the check for a specific package dependency? There are no plans to ever send this package to CRAN, so I am happy to bypass their package policy in this instance.
Two possible ways:
Upload the source for RQDA to a GitHub repo, or another publicly accessible location, and put a Remotes: line in your DESCRIPTION file (see the sketch after this list).
Save the package to cloud storage, e.g. an S3 bucket or Azure storage container, and download it from there as a separate workflow step prior to checking.
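A minimal sketch of the first option's DESCRIPTION entry (the GitHub location is a placeholder, not a real repository):
Remotes: github::your-org/RQDA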
This is how I was able to deal with the problem:
The changes were made entirely in the workflow file under ./.github/workflows/. One of the steps there installs the R package dependencies for the project:
- name: Install dependencies
  run: |
    remotes::install_deps(dependencies = TRUE)
    remotes::install_cran("rcmdcheck")
  shell: Rscript {0}
The first thing I did was to change the dependencies argument to NA, so that only packages listed in Depends, Imports, and LinkingTo are installed. (The RQDA dependency that was giving me trouble is under Suggests.)
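That is, the install step becomes:
remotes::install_deps(dependencies = NA)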
There was still an error, but this time it came with some guidance, which involved going to the Check job and setting the environment variable _R_CHECK_FORCE_SUGGESTS_ to false.
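A sketch of what that might look like in the workflow file; the rcmdcheck arguments are illustrative, not taken from the original workflow:
- name: Check
  env:
    _R_CHECK_FORCE_SUGGESTS_: false
  run: rcmdcheck::rcmdcheck(args = "--no-manual", error_on = "warning")
  shell: Rscript {0}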
The check now works as expected.

R: instructions for unbundling and using a packrat snapshot

I used packrat (v 0.4.8-1) to create a snapshot and bundle of the R package dependencies that go along with the corresponding R code. I want to provide the R code and packrat bundle to others to make the work I am doing (including the R environment) fully reproducible.
I tested unbundling using a different computer from the one I used to write the R code and create the bundle. I opened an R code file in RStudio and called library(packrat) to load packrat (also v 0.4.8-1). I then called packrat::unbundle(bundle = "directory", where = "directory"), which unbundled successfully. But subsequently calling packrat::restore() gave me the error "This project has not yet been packified. Run 'packrat::init()' to init packrat". It seems like init() should not be necessary, because I am not trying to create a new snapshot, but rather to use the one in the bundle. The packrat page (https://rstudio.github.io/packrat/) and CRAN provide very little documentation about unbundling to help troubleshoot this, or that I could point users of my code to for instructions (they will likely be familiar with R, but may not have used packrat).
So, can someone please provide clear step-by-step instructions for how users of a bundled snapshot should unbundle it, and then use that saved snapshot to run an R code file?
After some experimenting, I found an approach that seems to have worked so far.
I have provided users with three files:
- tar.gz (the packrat bundle file)
- unbundle.R (an R code file that includes a library statement to load the packrat library, and the unbundle command for the tar.gz file)
- unbundle_readme.txt
The readme file includes instructions similar to those below, and so far users have been able to run the R code using the package dependencies. The readme tells users about requirements (R, RStudio, packrat, and the R package development prerequisites: Rtools for Windows, Xcode for Mac), and includes the output of sessionInfo() to document the R package versions the code should use after the instructions are followed. In the example below, 'code_folder' refers to a folder within the tar.gz file that contains the R code and associated input files.
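A minimal sketch of what unbundle.R might contain (the bundle file name is a placeholder):
library(packrat)
# unbundle() expands the bundle and, by default (restore = TRUE),
# restores the snapshot into the project's private library
packrat::unbundle(bundle = "my_project_bundle.tar.gz", where = ".")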
Example unbundle instructions:
Step 1
Save, but do not expand/unzip, the tar file to a directory. Problems with accessing the saved package dependencies are more likely when a program other than R or RStudio is used to unbundle the tar file. If the tar file has already been expanded, re-save the tar file to a new directory, which should not be the same directory as the expanded tar file, or a subdirectory of it.
Step 2
Save unbundle.R in the same directory as the tar file.
Step 3
Open unbundle.R in RStudio.
Step 4
Execute unbundle.R. (This will create a subfolder 'code_folder'. Please note that this step may take 5-15 minutes to run.)
Step 5
Close RStudio.
Step 6
Navigate to the subfolder 'code_folder'.
Step 7
Open an R script in RStudio. (The package library should correspond to that listed below. This will indicate that RStudio is accessing the saved package dependencies.)
Step 8
Execute the R code, which will use the project package library.
After the package library has been loaded using the above steps, it is not necessary to re-load it for each script. RStudio will continue to access the package dependencies for each script you open within the RStudio session. If you subsequently close RStudio and then open scripts from within the unbundle directory, RStudio should still access the dependencies without requiring you to re-load the saved package snapshot.

R package development - old version of function used in project

I am developing a package locally with devtools in RStudio. After modifying a function, when I try to call it from a project, R keeps using the old version of the function.
My workflow is to:
Modify the function and save.
Call Build & Reload.
Test the function with some example code in the package development project (I often run another Build & Reload after that).
Go to the project I want to use the function in.
Call library(my_library).
But the modification I just made does not take effect. What is wrong with this workflow?
?devtools::build:
Building converts a package source directory into a single bundled file. If binary = FALSE this creates a tar.gz package that can be installed on any platform, provided they have a full development environment (although packages without source code can typically be installed out of the box). If binary = TRUE, the package will have a platform specific extension (e.g. .zip for windows), and will only be installable on the current platform, but no development environment is needed.
My reading of this is that you still need to devtools::install() your package. Building just creates the bundled file; it doesn't install the new version.
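A minimal sketch of the step this answer suggests is missing (the paths and names are placeholders):
# from the package development directory (path is hypothetical)
devtools::install("~/path/to/my_package")
# then restart R in the other project and re-attach the package
library(my_library)
During development, devtools::load_all() is a quicker alternative to the full build-and-install cycle.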

Meteor: Possible to reference a local package while writing a local package?

I have a local package that I am writing. I wanted to extend some functionality from a few of the core meteor packages. In order to do this, I was creating another local package.
So now local package A is attempting to use local package B. But even after adding the symlink to package A's packages folder, I get an unknown package error when running test-packages.
meteor add local-package-b
Doesn't work because meteor complains: You're not in a Meteor project directory
Is this even possible or do I need to publish to atmosphere first? I wanted to keep the additional package local as it's going to be a handful of helper methods.
I've double checked the name in package B's package.js file and it matches with my api.use in package A's package.js file.
Symlinks shouldn't be necessary here. Local packages are found by checking:
Anything under the directory pointed to by the environment variable PACKAGE_DIRS.
Anything under the packages directory in the current application directory.
Core packages.
As long as your two packages reside in one of those three locations, they should be found. Also see my article on local packages for more details on (1).
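For example, (1) just requires exporting the variable before starting Meteor; the path here is a placeholder:
export PACKAGE_DIRS="$HOME/shared-meteor-packages"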
Based on our conversation below, the main issue had to do with the approach to creating a new package. While it's being written, it's necessary that a package be added to (or tested by) an app. Here are the steps I take when working on a shared, local package:
Create a new app.
meteor create --package mypackage
meteor add mypackage
Finish writing and testing the package.
Move mypackage under my PACKAGE_DIRS directory so it will be available to other apps.

Move R packages to a new computer which has no internet

Normally I install packages using:
install.packages("foo")
and a Repo over the internet. But I have a new machine now where I want to replicate the packages from my existing installation without having to pull everything off the internet all over again. (I've a ton of packages and slow internet access)
Both machines are Windows and run the same R version. (2.13.1)
Is there a way to do this? The closest I've gotten: I know I can install from local zip files using:
install.packages("pathtozip", repos = NULL)
But does R store all Zips somewhere? I found a few in locations like:
C:\Documents and Settings\foouser\Local Settings\Temp\RtmpjNKkyp\downloaded_packages
But not all.
Any tips?
The function .libPaths will give you a vector of all the library locations on your machine. Run it on your old machine to find all of them. You can simply copy all those files into the libraries on your new machine (run .libPaths on it too to find out where).
Alternatively, if you want to set up a real repository (i.e. basically a CRAN mirror) on your computer or on a network drive you can update, you can put binary or source packages into a folder and run tools::write_PACKAGES on that folder. You can then run install.packages using the contriburl argument and point it to your repository folder.
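A rough sketch of the repository approach, assuming Windows binary packages and placeholder paths:
# create the standard contrib layout for Windows binaries (R 2.13)
repo <- "C:/R-repo/bin/windows/contrib/2.13"
dir.create(repo, recursive = TRUE)
# copy your downloaded .zip packages into 'repo', then index them:
tools::write_PACKAGES(repo, type = "win.binary")
# install from the local repository on the new machine:
install.packages("foo", contriburl = paste0("file:///", repo))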
All the packages that you have installed are stored in a folder called win-library\r-version, for example C:\Users\Ehsan\Documents\R\win-library\2.15. So it is enough to copy all the folders inside 2.15 to the same folder on your new machine. Because you have the same version of R, you do not need to update them with update.packages().
On your original computer, run
write.csv(unique(data.frame(installed.packages())[, 1]), "packages.csv", row.names = FALSE)
Save this .csv into the working directory of your new computer, then run
install.packages(as.character(read.csv("packages.csv")[, 1]))
You can check what your working directory is using getwd(). (Note that this approach re-downloads every package from the repository, so it replicates the set of installed packages rather than avoiding the network.)
