I'm building an R Markdown document that relies on a frequently updated data package hosted on GitHub.
How do I make sure that the document is always built using the latest version of the package, without installing the package on each build?
You can see the list of commits to a package by getting the commits page for the package. For example,
https://github.com/tidyverse/dplyr/commits
shows that there were commits today. If you save a copy of the top hash on that page (currently af75177) and update whenever it changes, you can be sure you have the latest version.
However, this is likely a bad policy. The package is not necessarily in a working condition after a commit: perhaps the author is planning another one a minute later to finish some update. It's much safer to use update.packages() and only get the updates that are judged to be stable enough to be sent to and accepted on CRAN.
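If you do decide to track the GitHub commit directly as described above, a minimal sketch of that approach could look like the following. It assumes the package was originally installed with remotes::install_github(), which records the installed commit in the DESCRIPTION field RemoteSha; tidyverse/dplyr is just the example from above and would be replaced by your data package.

    library(jsonlite)   # for fromJSON()
    library(remotes)

    repo <- "tidyverse/dplyr"   # substitute your data package's repository
    pkg  <- "dplyr"             # the package name inside that repository

    # Latest commit SHA on the default branch, via the GitHub API
    latest_sha <- fromJSON(
      sprintf("https://api.github.com/repos/%s/commits?per_page=1", repo)
    )$sha[1]

    # SHA recorded at install time (errors or is NULL if the package was
    # not installed from GitHub, in which case we install it fresh)
    installed_sha <- tryCatch(
      utils::packageDescription(pkg)$RemoteSha,
      error = function(e) NULL
    )

    # Reinstall only when the upstream commit has changed
    if (is.null(installed_sha) || !identical(installed_sha, latest_sha)) {
      remotes::install_github(repo)
    }

In an R Markdown setting this could live in an early setup chunk, so the package is only reinstalled when the upstream repository actually moves.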
Related
I've created an R package and I'd like to upload it to CRAN via GitHub Actions whenever I merge changes into the master branch. I've found a lot of examples of R actions, and I've even looked at how some of the most popular packages like dplyr do it. Even though I've found the devtools::release() helper function, I still haven't seen a workflow that submits a package to CRAN when changes are merged into the master branch. Do package developers do this manually? Is there any reason why this hasn't been automated?
CRAN works quite differently from other language repositories, as uploads are not fully automated like in e.g. PyPI.
When you upload a new package, it is subject to verification from an actual human. When you update a package, if it triggers certain checks it will also be subject to a new review from a human. When a package uploads successfully and passes the first verification, many automated checks are run for it over the course of weeks (e.g. different OSes, compilers, compiler options, architectures, sanitizers, valgrind, etc.), and precompiled binaries are automatically generated for some platforms and R versions from your source code.
The CRAN policies explicitly state that frequent updates are not allowed: you're not supposed to submit updates any more often than once every couple of months, so I don't think this level of automation would be worth it.
Even if you do want to automate this process, there is an email confirmation step in the middle, so you'd perhaps have to do something with Selenium plus other scripts.
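For reference, this is roughly what the manual route looks like today; devtools::release() is interactive, and the final CRAN confirmation email still has to be handled by hand:

    # Run R CMD check locally first
    devtools::check()

    # Interactive submission: release() asks a series of confirmation
    # questions, then uploads the package to CRAN's submission form;
    # CRAN replies with an email that must be confirmed manually
    devtools::release()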
BTW if you are worried about complicated building processes and are using RStudio, you can configure on a per-project basis what arguments to use when building source or binary distributions of your package.
I am developing an R package that will not be hosted on GitHub or submitted to CRAN. I am using git for version control. I would like to give my users the ability to load older versions of the package. I've read here about usethis::use_version() for versioning my package. This will track the versions using git, but I'm wondering if there's a straightforward way for my end users to load an older version without having to use git themselves. For packages hosted on CRAN, I know the versions package can be used to achieve this.
Right now my best solution is to create a copy of the R package in a new directory when starting work on a new version. Then the end users can load the version they want by choosing the appropriate directory. If there is a better solution than this, I would be interested to hear it.
The remotes::install_git() function has a ref= parameter that accepts a commit or tag name. If you tag your releases, then you can install whichever version you want by passing the corresponding tag. Your users don't need to run git themselves, but they will need access to the git repo to pull the correct version.
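For example, a minimal sketch (the URL and tag name below are placeholders for wherever your repository actually lives):

    # Install the version tagged "v0.2.0"; the tag would have been created
    # earlier in the package repo with `git tag v0.2.0`
    remotes::install_git(
      "https://git.example.com/me/mypackage.git",
      ref = "v0.2.0"   # a tag or commit SHA
    )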
If you want to host your own repository for your users, you can also look into something like miniCRAN or drat. Since those give you what is essentially a CRAN-like repository for your packages, you can probably use existing tools like the versions package to interact with the repo (assuming you keep older versions around in the same way CRAN does).
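As a rough sketch of the drat route (the paths and file names here are placeholders): you insert built packages into a CRAN-like directory, and users install from it with the ordinary tooling.

    # Maintainer side: add a built source package to a local drat repository
    drat::insertPackage("mypackage_0.2.0.tar.gz", repodir = "/srv/dratrepo")

    # User side: install from that repository like any other CRAN-style repo
    install.packages("mypackage", repos = "file:///srv/dratrepo")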
How do you best pin package versions in R?
Rejected strategy 1: Pin to CRAN source tar.gzs
Doesn't work if you want to pin it at the latest version since CRAN does not put the tip version in the archive (duh)
Rejected strategy 2: Use devtools
Don't want to, because it takes ages to compile and adds lots of stuff I don't want to use
Rejected strategy 3: Vendor
Would rather avoid having to copy all source
To provide a little more information on packrat, which I use for this purpose, here is the description from its website:
R package dependencies can be frustrating. Have you ever had to use trial-and-error to figure out what R packages you need to install to make someone else's code work, and then been left with those packages globally installed forever, because now you're not sure whether you need them? Have you ever updated a package to get code in one of your projects to work, only to find that the updated package makes code in another project stop working?

We built packrat to solve these problems. Use packrat to make your R projects more:

Isolated: Installing a new or updated package for one project won't break your other projects, and vice versa. That's because packrat gives each project its own private package library.

Portable: Easily transport your projects from one computer to another, even across different platforms. Packrat makes it easy to install the packages your project depends on.

Reproducible: Packrat records the exact package versions you depend on, and ensures those exact versions are the ones that get installed wherever you go.
Packrat stores the versions of the packages you use in the packrat.lock file, and then downloads those versions from CRAN whenever you run packrat::restore(). It is much lighter weight than devtools, but can still take some time to re-download all of the packages (depending on which packages you are using).
If you prefer to store all of the sources in a zip file, you can use packrat::snapshot() to pull down the sources and update packrat.lock, and then packrat::bundle() to "bundle" everything up. The aim is to make projects and research reproducible and portable over time by storing the package versions and dependencies used in the original design (along with the source code, so there is no dependency on an OS-specific binary).
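Putting that together, a minimal sketch of the workflow (the project path is a placeholder):

    install.packages("packrat")

    packrat::init("~/projects/my-analysis")   # private library + packrat.lock for this project
    packrat::snapshot()                       # record the exact package versions in use
    packrat::bundle()                         # optional: bundle project and package sources into a tarball

    # Later, on another machine or after unbundling:
    packrat::restore()                        # reinstall exactly the recorded versions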
There is much more information on the website I linked to, and you can see current activity on the git repo. I have encountered a few cases that work in a less-than-ideal way (packages not on CRAN have some issues at times), but the git repo still seems to be pretty active with issues/patches which is encouraging.
The development of RStudio and the packages devtools and roxygen2 has made R package creation pretty easy. I use GitHub for version control and devtools allows others to easily install directly from my account.
As my package gradually changes with each version, I'm wondering if I should be maintaining .zip files (or other format) of my past stable builds, in case anyone would ever want to use a previous version.
It's easy to download a .zip of an R package directly from GitHub, but I'm wondering if I should add this to the same GitHub directory (e.g. https://github.com/myaccount/mypackage/previous_versions/mypackage_0.1.zip) without messing up somebody's installation via install_github("myaccount/mypackage").
So, the main Qs are:
Should I keep an old package version at all?
Should I keep old package versions in a sub-folder of my GitHub R package directory?
Should I save .zip files downloaded from GitHub as my old version, or produce a Source or Binary file during the package build itself (i.e. in RStudio)?
Is this a superfluous activity if one isn't yet willing to publish to CRAN?!
When you think your package is in a good, solid state, you should tag a release. This archives the branch at that point in time and stores a zip file and a tar.gz file of the source code.
I tend to mark my CRAN packages as a release each time I release to CRAN (for example, see https://github.com/nutterb/pixiedust/releases), with some intermediate tags that I consider noteworthy.
Another good strategy for managing changes between tagged releases is to maintain a development branch alongside your main branch. That way your development changes won't pollute or break anything being used by those pulling from your main branch. It leaves you free to experiment in the dev branch while always having a clean, working copy to push to and restore from.
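If you prefer to drive this from R rather than the GitHub web interface, a hedged sketch using usethis helpers (this assumes the package repository is already connected to GitHub):

    usethis::use_version("minor")    # bump the version in DESCRIPTION, e.g. 0.1.0 -> 0.2.0
    usethis::use_github_release()    # draft a GitHub release for the current version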
1. Should I keep an old package version at all?
It's subjective, but I'd definitely say "yes" unless there's a space constraint, which is probably unlikely.
This serves two purposes. One is your own convenience, such as making sure you always have a quick way to compare the results of older versions against a newer one.
The other is that people often need older versions of packages, such as if someone wants to use your package but they're using an older version of R on a server where the policies prevent an update to R. Perhaps a newer version of your package includes a new dependency which only works with a package that depends on a certain version of R or higher.
Of course, packages can always be installed without the compressed or binary files, but it's a nice convenience.
2. Should I keep old package versions in a sub-folder of my GitHub R package directory?
I would put it in a trunk or special subfolder that won't be automatically downloaded when someone tries to install_github or clone your master branch. Having a separate branch is a good idea.
3. Should I save .zip files downloaded from GitHub as my old version, or produce a Source or Binary file during the package build itself (i.e. in RStudio)?
As the package author you're in a position to know if these differ significantly and which if either is better, but by default I'd recommend the RStudio build because I assume (if you're like me) that you're less likely to include unnecessary files this way.
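If you'd rather script the build than click through RStudio, a small sketch (devtools here is an assumption; `R CMD build` from the command line does the same job):

    # Produce the source archive, e.g. mypackage_0.1.0.tar.gz
    devtools::build("path/to/mypackage")

    # Produce a platform-specific binary build instead
    devtools::build("path/to/mypackage", binary = TRUE)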
4. Is this a superfluous activity if one isn't yet willing to publish to CRAN?!
No, not necessarily. If people rely on your package then it really doesn't matter if it's on CRAN or not. In fact, not being on CRAN may be a reason to be more proactive like this to ensure that your users will always have access to the needed version of your package.
I use the Revolution R Enterprise distribution, which is built upon R 3.2.2. Hence, I have an interest in only employing package versions that are based on this R release as well. Checking packages like 'checkpoint' or the Revolution MRAN page, I only found ways to access snapshots of CRAN by date. Is there a way to install the most recent package versions that are still compatible with a certain R release?
I found a heuristic solution to my own problem:
Find out the release date of the stable R version that succeeded your working version.
Set up an R script that calls all the packages you need for your project via individual library() or require() calls.
Use checkpoint(release date minus at least one day) to automatically create a project-specific library that is compatible with your working R version.
Step 2 is a failsafe way to ensure that all necessary packages are detected. I had originally called them via sapply(package.list, require), which checkpoint() was not able to handle. A possible caveat to this solution is that it may not deliver the very latest version of a package that is still compatible with your older R version. Alternatively, one could use the prerelease date instead of the stable release date to be absolutely sure about compatibility.
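A minimal sketch of that recipe for the R 3.2.2 case; the date assumes the next stable release was R 3.2.3 on 2015-12-10, so the snapshot is taken the day before, and the library() calls are only examples:

    library(checkpoint)

    # CRAN snapshot from the day before the first release after R 3.2.2
    checkpoint("2015-12-09")

    # Explicit library()/require() calls so checkpoint() can detect every
    # package the project needs (programmatic loading such as
    # sapply(package.list, require) is not picked up by its scan)
    library(dplyr)
    library(ggplot2)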