I am developing a package locally with devtools in RStudio. After modifying a function, when I try to call it from a project, R keeps using the old version of the function.
My workflow is to:
Modify the function and save
Call Build & Reload
Test the function with some example code in the package development project (I often run another Build & Reload after that)
Go to the project I want to use the function in
Call library(my_library)
But the modification I just made does not take effect. What is wrong with this workflow?
From ?devtools::build:
Building converts a package source directory into a single bundled file. If binary = FALSE this creates a tar.gz package that can be installed on any platform, provided they have a full development environment (although packages without source code can typically be installed out of the box). If binary = TRUE, the package will have a platform specific extension (e.g. .zip for windows), and will only be installable on the current platform, but no development environment is needed.
My reading of this is that you still need to devtools::install() your package. Building just creates the binary; it doesn't install the new version.
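In practice that means running devtools::install() after each change, or using devtools::load_all() while iterating. A minimal sketch, assuming the package source lives at path/to/my_library:

# Rebuild and install the edited package, then restart R in the
# other project and call library(my_library) again
devtools::install("path/to/my_library")

# Or, during development, load the code without a full install
devtools::load_all("path/to/my_library")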
I have a package that I developed to enable my team (and perhaps other interested users) to install and use a particular R package (RQDA) that was archived on CRAN. I have hosted this package on GitHub and am trying to set up GitHub Actions so that I have a CI workflow in place.
Whenever I run R CMD check locally everything is fine, but when I push to GitHub the build fails. This is because, by default, Actions tries to install that same (archived) package. Expectedly, this fails.
So, my question is this: is there a way I can disable the check for a specific package dependency? There are no plans to ever send this package to CRAN, so I am happy to bypass their package policy in this instance.
Two possible ways:
Upload the source for RQDA to a GitHub repo, or other publicly accessible location, and put a Remotes: line in your DESCRIPTION file (see the sketch after this list)
Save the package to cloud storage, eg an S3 bucket or Azure storage container, and download it from there as a separate workflow step prior to checking
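For the first option, the DESCRIPTION entries might look like this (the GitHub repository name is a hypothetical placeholder):

Suggests: RQDA
Remotes: github::yourname/RQDA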
This is how I was able to deal with the problem:
The changes I made were entirely in the workflow file under ./.github/workflows/. One of the steps there installs the R package dependencies for the project:
- name: Install dependencies
  run: |
    remotes::install_deps(dependencies = TRUE)
    remotes::install_cran("rcmdcheck")
  shell: Rscript {0}
The first thing I did was to change the dependencies argument to NA so that only packages listed in Depends and Imports are installed. (The RQDA dependency that was giving me trouble is under Suggests.)
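Reconstructing from that description, the install line becomes:

remotes::install_deps(dependencies = NA)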
There was still an error, but this time with some guidance: it involved going to the Check job and setting the environment variable _R_CHECK_FORCE_SUGGESTS_ to false.
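In the workflow file that might look something like this; only the env entry comes from the guidance above, the rest of the step is an assumption modeled on the usual r-lib template:

- name: Check
  env:
    _R_CHECK_FORCE_SUGGESTS_: false
  run: rcmdcheck::rcmdcheck(args = "--no-manual", error_on = "error")
  shell: Rscript {0}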
The check now works as expected.
I often ]dev Pkg, but I want the dev'ed package to be stored somewhere other than the default location for convenient access.
I don't want to change the path used by ]add Pkg, which seems to be controlled by DEPOT_PATH.
Is there a way to change only the path for ]dev Pkg, i.e. the path in which the dev'ed package is stored?
You can set the environment variable JULIA_PKG_DEVDIR to change where development packages are installed. See the develop docs for more info.
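A minimal sketch (the directory is a hypothetical example):

# in the shell, before starting Julia:
#   export JULIA_PKG_DEVDIR=~/julia-dev
# or from within Julia, before the dev call:
ENV["JULIA_PKG_DEVDIR"] = expanduser("~/julia-dev")
using Pkg
Pkg.develop("Example")   # Example.jl is now cloned under ~/julia-dev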
As @crstnbr noted, an alternative is to use the --local option to the pkg> dev command to install a development version of the package in a dev directory within the current project. This could make sense if you're developing your own package MyCode.jl which relies on Example.jl and you need to make a hot fix to Example.jl. Then your Pkg REPL command would look like this:
(MyCode) pkg> dev --local Example
If you would like to make changes to a third-party package and submit those changes as a pull request on Github, there are a few more steps in the process. See this Discourse thread for more details on that process.
Not quite what you're asking for, but you can of course always git clone the package to a path of your choice and then dev path/to/the/local/clone/of/the/pkg.
You can even do this from within julia:
using Pkg
Pkg.GitTools.clone("<pkg url>", "<local path>")   # clone the repository to a path of your choice
Pkg.develop(PackageSpec(path="<local path>"))     # dev the local clone
I used packrat (v0.4.8-1) to create a snapshot and bundle of the R package dependencies that go along with the corresponding R code. I want to provide the R code and packrat bundle to others to make the work I am doing (including the R environment) fully reproducible.
I tested unbundling using a different computer from the one I used to write the R code and create the bundle. I opened an R code file in RStudio and called library(packrat) to load packrat (also v0.4.8-1). I then called packrat::unbundle(bundle = "directory", where = "directory"), which unbundled successfully. But subsequently calling packrat::restore() gave me the error "This project has not yet been packified. Run 'packrat::init()' to init packrat". It seems like init() should not be necessary, because I am not trying to create a new snapshot but rather to use the one in the bundle. The packrat page (https://rstudio.github.io/packrat/) and CRAN provide very little documentation about unbundling to help troubleshoot this, or that I could point users of my code to for instructions (they will likely be familiar with R, but may not have used packrat).
So, can someone please provide clear step-by-step instructions for how users of a bundled snapshot should unbundle it, and then use that saved snapshot to run an R code file?
After some experimenting, I found an approach that seems to have worked so far.
I have provided users with three files:
- tar.gz (the packrat bundle file)
- unbundle.R (an R script that loads the packrat library and calls the unbundle command on the tar.gz file)
- unbundle_readme.txt
The readme file includes instructions similar to those below, and so far users have been able to run the R code using the package dependencies. The readme file tells users about requirements (R, RStudio, packrat, R package development prerequisites (Rtools for Windows, Xcode for Mac)), and includes the output of sessionInfo() to document the R package versions that the R code should use after the instructions are followed. In the example below, 'code_folder' refers to a folder within the tar.gz file that contains R code and associated input files.
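For reference, the unbundle.R script can be as small as this (the file name is a placeholder):

library(packrat)
packrat::unbundle(bundle = "my_bundle.tar.gz", where = ".")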
Example unbundle instructions:
Step 1
Save, but do not expand/unzip, the tar file to a directory. Problems with accessing the saved package dependencies are more likely when a program other than R or RStudio is used to unbundle the tar file. If the tar file has already been expanded, re-save the tar file to a new directory, which should not be the same directory as the expanded tar file or a subdirectory of it.
Step 2
Save unbundle.R in the same directory as the tar file
Step 3
Open unbundle.R using RStudio
Step 4
Execute unbundle.R
(This will create a subfolder ‘code_folder’.
Please note that this step may take 5-15 minutes to run.)
Step 5
Close RStudio
Step 6
Navigate to the subfolder ‘code_folder’
Step 7
Open an R script using RStudio
(The package library should correspond to that listed below. This indicates that RStudio is accessing the saved package dependencies.)
Step 8
Execute the R code, which will utilize the project package library.
After the package library has been loaded using the above steps, it is not necessary to re-load it for each script. RStudio will continue to access the package dependencies for each script you open within the RStudio session. If you subsequently close RStudio and then open scripts from within the unbundle directory, RStudio should still access the dependencies without requiring re-loading of the saved package snapshot.
I have a local package that I am writing. I wanted to extend some functionality from a few of the core meteor packages. In order to do this, I was creating another local package.
So now local package A is attempting to use local package B. But even after adding the symlink to package A's packages folder, I get an unknown package error when running test-packages.
meteor add local-package-b
Doesn't work because meteor complains: You're not in a Meteor project directory
Is this even possible or do I need to publish to atmosphere first? I wanted to keep the additional package local as it's going to be a handful of helper methods.
I've double checked the name in package B's package.js file and it matches with my api.use in package A's package.js file.
Symlinks shouldn't be necessary here. Local packages are found by checking:
Anything under the directory pointed to by the environment variable PACKAGE_DIRS.
Anything under the packages directory in the current application directory.
Core packages.
As long as your two packages reside in one of those three locations, they should be found. Also see my article on local packages for more details on (1).
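For example, to use (1) you could point the variable at a shared directory in your shell before running the app (the path is a hypothetical placeholder):

export PACKAGE_DIRS=~/code/shared-meteor-packages
meteor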
Based on our conversation below, the main issue had to do with the approach to creating a new package. While it's being written, it's necessary that a package be added to (or tested by) an app. Here are the steps I take when working on a shared, local package:
Create a new app.
meteor create --package mypackage
meteor add mypackage
Finish writing and testing the package.
Move mypackage under my PACKAGE_DIRS directory so it will be available to other apps.
Does R have a dependency management tool to facilitate project-specific dependencies? I'm looking for something akin to Java's maven, Ruby's bundler, Python's virtualenv, Node's npm, etc.
I'm aware of the "Depends" clause in the DESCRIPTION file, as well as the R_LIBS facility, but these don't seem to work in concert to provide a solution to some very common workflows.
I'd essentially like to be able to check out a project and run a single command to build and test the project. The command should install any required packages into a project-specific library without affecting the global R installation. E.g.:
my_project/.Rlibs/*
Unfortunately, Depends: within the DESCRIPTION file is all you get, for the following reasons:
R itself is reasonably cross-platform, but that means we need this to work across platforms and OSs
Encoding Depends: beyond R packages requires encoding the Depends in a portable manner across operating systems---good luck encoding even something simple such as 'a PNG graphics library' in a way that can be resolved unambiguously across systems
Windows does not have a package manager
AFAIK OS X does not have a package manager that mixes what Apple ships and what other Open Source projects provide
Even among Linux distributions, you do not get consistency: just take RStudio as an example which comes in two packages (which all provide their dependencies!) for RedHat/Fedora and Debian/Ubuntu
This is a hard problem.
The packrat package is precisely meant to achieve the following:
install any required packages into a project-specific library without affecting the global R installation
It allows installing different versions of the same packages in different project-local package libraries.
I am adding this answer even though this question is 5 years old, because this solution apparently didn't exist yet at the time the question was asked (as far as I can tell, packrat first appeared on CRAN in 2014).
Update (November 2019)
The new R package renv has replaced packrat.
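A minimal sketch of the equivalent renv workflow (the dplyr install is just an illustration):

install.packages("renv")
renv::init()                # create a project-local library and renv.lock
install.packages("dplyr")   # installs into the project library
renv::snapshot()            # record exact package versions in renv.lock
renv::restore()             # on another machine, reinstall those versions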
As a stop-gap, I've written a new rbundler package. It installs project dependencies into a project-specific subdirectory (e.g. <PROJECT>/.Rbundle), allowing the user to avoid using global libraries.
rbundler on Github
rbundler on CRAN
We've been using rbundler at Opower for a few months now and have seen a huge improvement in developer workflow, testability, and maintainability of internal packages. Combined with our internal package repository, we have been able to stabilize development of a dozen or so packages for use in production applications.
A common workflow:
Check out a project from github
cd into the project directory
Fire up R
From the R console:
library(rbundler)
bundle('.')
All dependencies will be installed into ./.Rbundle, and an .Renviron file will be created with the following contents:
R_LIBS_USER='.Rbundle'
Any R operations run from within this project directory will adhere to the project-specific library and package dependencies. Note that, while this method uses the package DESCRIPTION file to define dependencies, the project needn't have an actual package structure. Thus, rbundler becomes a general tool for managing an R project, whether it be a simple script or a full-blown package.
You could use the following workflow:
1) Create a script file which contains everything you want to set up, and store it in your project directory as e.g. projectInit.R
2) Source this script from your .Rprofile (or any other file executed by R at startup) with a try statement:
try(source("./projectInit.R"), silent=TRUE)
This guarantees that R starts without an error message even when no projectInit.R is found
3) If you start R in your project directory, the projectInit.R file will be sourced if present, and you are ready to go
This is from a Linux perspective, but it should work the same way under Windows and macOS.
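As an illustration, a projectInit.R along these lines would give the project its own library (paths and package names are hypothetical):

# projectInit.R -- sourced automatically via the try() line in .Rprofile
dir.create(".Rlibs", showWarnings = FALSE)
.libPaths(c("./.Rlibs", .libPaths()))   # prepend a project-local library
deps <- c("ggplot2", "dplyr")           # hypothetical project dependencies
missing <- deps[!vapply(deps, requireNamespace, logical(1), quietly = TRUE)]
if (length(missing)) install.packages(missing, lib = "./.Rlibs")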