This is a follow-up to a question I posted earlier. To summarize, I am writing an R package called Slidify, which makes use of several external non-R libraries. My earlier question was about how to manage these dependencies.
Several solutions were proposed, of which the most attractive was to package the external libraries as a separate R package and make it a dependency for Slidify. This is the strategy followed by the xlsx package, which ships its Java dependencies in a separate package, xlsxjars.
An alternative is for me to provide the external libraries as a download and include an install_libraries function within Slidify, which would automatically fetch the required files and place them in the package directory. I could also add an update_libraries function that would update them if things change.
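For illustration, a minimal sketch of what such a function might look like; the download URL and the "libraries" subdirectory are hypothetical placeholders:

# Hypothetical sketch: fetch the external (non-R) libraries and place
# them inside the installed package's directory. The URL is a placeholder.
install_libraries <- function(url = "http://example.com/slidify-libraries.zip") {
  pkg_dir <- system.file(package = "slidify")  # path to the installed package
  tmp <- tempfile(fileext = ".zip")
  download.file(url, tmp, mode = "wb")         # binary-safe download
  unzip(tmp, exdir = file.path(pkg_dir, "libraries"))
  unlink(tmp)
}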
My question is: are there any specific advantages to doing the CRAN dance for external libraries that are not R-based? Am I missing something here?
As discussed in the comment thread, for a package like Slidify with a number of large, (mostly) fixed, and portable files, a "resource" package makes more sense:
you will know the path where it is installed, as the package itself will tell you (see the sketch after this list)
users can't accidentally put it somewhere else
you get CRAN tests
you get CRAN distribution, mirrors, ...
users already know install.packages() etc
development of the package that uses these fixed parts stays nimble, because it is not held back by the large support files
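To illustrate the first point: assuming a hypothetical resource package named slidifyLibraries, the installed path is always recoverable with system.file():

# Locate files shipped inside an installed package; the package name
# "slidifyLibraries" and the "libraries" subdirectory are hypothetical.
lib_path <- system.file("libraries", package = "slidifyLibraries")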
Related
I'm planning to submit my first package to CRAN. I've heard that you should not have any errors, warnings, or notes. However, I get a NOTE stating that there are too many package dependencies:
"Imports includes 24 non-default packages.
Importing from so many packages makes the package vulnerable to any of
them becoming unavailable. Move as many as possible to Suggests and
use conditionally."
Is this note something I have to address in regard to a CRAN submission?
Does it make a difference to state that all/most packages used are OK to include because they are well-maintained?
Is it possible to use tidyverse as a dependency instead of each individual package? (I understand that this to some extent defeats the purpose of the limit; then again, a 20-package limit feels rather arbitrary anyway, and the focus should also be on using well-maintained packages.)
Warnings in tests
I have created test cases for the package; however, in order to stay within the size limit I need to use fewer cases than I normally would, and this creates various warnings when running the tests. Is it OK to have these test-related warnings when submitting to CRAN?
Thanks in advance!
John
In most cases, NOTEs won't automatically cause a reviewer to reject your submission, assuming you otherwise pass R CMD check --as-cran [yourpackage]. In this case, though, I would take the advice to heart.
First, decide if you really, truly need all those imports at all, let alone as Imports. That does seem like a very large collection. Make sure you can't, for example, call some functions in referenced packages A, B, C, and D rather than similar functions in packages K, Q, and T (listing your references from A to X). If you're only using one standalone function from a package, i.e. a function which doesn't depend on any other item in that package, copy the source code from there, with attribution, into your package's source directory.
Second, only import packages if they're needed for your functions to be able to execute regardless of their argument lists. Packages which only support specific "modes" or options should be moved to Suggests and used conditionally.
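As a sketch of the usual pattern for a package moved to Suggests (ggplot2 here is just an illustrative choice):

# Use a Suggests package conditionally; fail gracefully when it is absent.
plot_fancy <- function(data) {
  if (!requireNamespace("ggplot2", quietly = TRUE)) {
    stop("Package 'ggplot2' is needed for this function; please install it.")
  }
  ggplot2::ggplot(data, ggplot2::aes(x = x, y = y)) + ggplot2::geom_point()
}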
The relevant portion of "Writing R Extensions", which I hope you've read, is quoted below.
All packages that are needed to successfully run R CMD check on the package must be listed in one of ‘Depends’ or ‘Suggests’ or ‘Imports’. Packages used to run examples or tests conditionally (e.g. via if(require(pkgname))) should be listed in ‘Suggests’ or ‘Enhances’. (This allows checkers to ensure that all the packages needed for a complete check are installed.) In particular, packages providing “only” data for examples or vignettes should be listed in ‘Suggests’ rather than ‘Depends’ in order to make lean installations possible.

Version dependencies in the ‘Depends’ and ‘Imports’ fields are used by library when it loads the package, and install.packages checks versions for the ‘Depends’, ‘Imports’ and (for dependencies = TRUE) ‘Suggests’ fields. It is increasingly important that the information in these fields is complete and accurate: it is for example used to compute which packages depend on an updated package and which packages can safely be installed in parallel. This scheme was developed before all packages had namespaces (R 2.14.0 in October 2011), and good practice changed once that was in place. Field ‘Depends’ should nowadays be used rarely, only for packages which are intended to be put on the search path to make their facilities available to the end user (and not to the package itself): for example it makes sense that a user of package latticeExtra would want the functions of package lattice made available. Almost always packages mentioned in ‘Depends’ should also be imported from in the NAMESPACE file: this ensures that any needed parts of those packages are available when some other package imports the current package. The ‘Imports’ field should not contain packages which are not imported from (via the NAMESPACE file or :: or ::: operators), as all the packages listed in that field need to be installed for the current package to be installed. (This is checked by R CMD check.) R code in the package should call library or require only exceptionally. Such calls are never needed for packages listed in ‘Depends’ as they will already be on the search path. It used to be common practice to use require calls for packages listed in ‘Suggests’ in functions which used their functionality, but nowadays it is better to access such functionality via :: calls.
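In DESCRIPTION terms, the guidance boils down to a split like the following (the package names are purely illustrative):

Imports:
    httr,
    jsonlite
Suggests:
    ggplot2,
    testthat

Everything under Imports must be installed for your package to install at all; everything under Suggests is only touched conditionally, via requireNamespace() or :: calls as described above.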
How do you best pin package versions in R?
Rejected strategy 1: Pin to CRAN source tar.gzs
Doesn't work if you want to pin it at the latest version since CRAN does not put the tip version in the archive (duh)
Rejected strategy 2: Use devtools
Don't want to, because it takes ages to compile and adds lots of stuff I don't want to use
Rejected strategy 3: Vendor
Would rather avoid having to copy all source
To provide a little bit more information on packrat, which I use for this purpose, here is a quote from the website:
R package dependencies can be frustrating. Have you ever had to use trial-and-error to figure out what R packages you need to install to make someone else's code work, and then been left with those packages globally installed forever, because now you're not sure whether you need them? Have you ever updated a package to get code in one of your projects to work, only to find that the updated package makes code in another project stop working?
We built packrat to solve these problems. Use packrat to make your R projects more:

Isolated: Installing a new or updated package for one project won't break your other projects, and vice versa. That's because packrat gives each project its own private package library.

Portable: Easily transport your projects from one computer to another, even across different platforms. Packrat makes it easy to install the packages your project depends on.

Reproducible: Packrat records the exact package versions you depend on, and ensures those exact versions are the ones that get installed wherever you go.
Packrat stores the versions of the packages you use in the packrat.lock file, and then downloads each version from CRAN whenever you run packrat::restore(). It is much lighter-weight than devtools, but can still take some time to re-download all of the packages (depending on the packages you are using).
If you prefer to store all of the sources in a single archive, you can use packrat::snapshot() to pull down the sources and update the packrat.lock, and then packrat::bundle() to "bundle" everything up. The aim is to make projects and research reproducible and portable over time by storing the package versions and dependencies used in the original design (along with the source code, so that the OS dependency on a binary is avoided).
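A sketch of that workflow, assuming packrat has already been initialized for the project:

# Record the exact package versions in packrat.lock and store the sources
packrat::snapshot()
# Bundle the project (code, lockfile, and package sources) into one tarball
tarball <- packrat::bundle()
# On another machine: unpack the bundle and restore the private library
packrat::unbundle(tarball, where = "~/projects")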
There is much more information on the website I linked to, and you can see current activity on the git repo. I have encountered a few cases that work in a less-than-ideal way (packages not on CRAN have some issues at times), but the git repo still seems to be pretty active with issues and patches, which is encouraging.
The development of RStudio and the packages devtools and roxygen2 has made R package creation pretty easy. I use GitHub for version control and devtools allows others to easily install directly from my account.
As my package gradually changes with each version, I'm wondering if I should be maintaining .zip files (or other format) of my past stable builds, in case anyone would ever want to use a previous version.
It's easy to download a .zip of an R package directly from GitHub, but I'm wondering if I should add this to the same GitHub directory (e.g. https://github.com/myaccount/mypackage/previous_versions/mypackage_0.1.zip) without messing up somebody's installation via install_github("myaccount/mypackage").
So, the main Qs are:
Should I keep an old package version at all?
Should I keep old package versions in a sub-folder of my GitHub R package directory?
Should I save .zip files downloaded from GitHub as my old version, or produce a Source or Binary file during the package build itself (i.e. in RStudio)?
Is this a superfluous activity if one isn't yet willing to publish to CRAN?!
When you think your package is in a good, solid place, you should tag a release. This archives the branch at that point in time and stores a zip file and a tar.gz file of the source code.
I tend to tag a release each time I submit a package to CRAN (for example, see https://github.com/nutterb/pixiedust/releases), along with some intermediate tags that I consider noteworthy.
Another good strategy for managing changes between tagged releases is to maintain a development branch alongside your main branch. That way your development changes won't pollute or break anything being used by those pulling from your main branch, and it leaves you free to experiment in the dev branch while always having a clean, working copy to push to and restore from.
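One practical payoff of tagging releases is that users can install any tagged version straight from GitHub; a sketch, reusing the hypothetical account, package, and tag names from the question:

# Install a specific tagged release from GitHub; names are placeholders.
devtools::install_github("myaccount/mypackage@v0.1.0")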
1. Should I keep an old package version at all?
It's subjective, but I'd definitely say "yes" unless there's a space constraint, which is unlikely.
This serves two purposes. One is your own convenience: for example, you may want a quick way to test the results of an older version against a newer one.
The other is that people often need older versions of packages, such as if someone wants to use your package but they're using an older version of R on a server where the policies prevent an update to R. Perhaps a newer version of your package includes a new dependency which only works with a package that depends on a certain version of R or higher.
Of course, packages can always be installed without the compressed or binary files, but it's a nice convenience.
2. Should I keep old package versions in a sub-folder of my GitHub R package directory?
I would put it in a trunk or special subfolder that won't be automatically downloaded when someone runs install_github or clones your master branch. Having a separate branch is a good idea.
3. Should I save .zip files downloaded from GitHub as my old version, or produce a Source or Binary file during the package build itself (i.e. in RStudio)?
As the package author, you're in a position to know whether these differ significantly and which, if either, is better; but by default I'd recommend the RStudio build, because I assume (if you're like me) that you're less likely to include unnecessary files this way.
4. Is this a superfluous activity if one isn't yet willing to publish to CRAN?!
No, not necessarily. If people rely on your package, then it really doesn't matter whether it's on CRAN or not. In fact, not being on CRAN may be a reason to be more proactive like this, to ensure that your users will always have access to the version they need.
I'm not new to R, but I am new to finding errors in CRAN packages that I wish to correct. In my case, I like to upload packages under development to GitHub; then, if errors are found, people can open pull requests so they get fixed. But not everyone chooses to go down this route.
My question relates to the above: if I find a (substantial) error in a widely used CRAN package (which I need to import in my own package), and I have fixed it, what are the steps to take? In particular, if
the CRAN package does not have a project page (github etc.) and
the author is not replying to e-mails
Currently my solution is to upload a copy of the 'corrected' package to my GitHub page and instruct people to install that version before using my own. This is cumbersome and not an elegant solution. Are there better alternatives?
This is the good and the bad of R... sometimes packages are forsaken!
Get the source code and create your own package. If it is useful for you, it will be useful for others!
There is a lot of documentation on how to create packages, for example:
http://www.r-bloggers.com/create-an-r-package-in-under-6-minutes/
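Once the fixed fork is on GitHub, users can install it directly; a sketch (the account and repository names are placeholders):

# Install the corrected fork instead of the CRAN version
remotes::install_github("myaccount/fixedpackage")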
In order to be able to compare two versions of a package, I need to be able to choose which version of the package I load. R's package system is set by default to overwrite existing packages, so that you always have the latest version. How do I override this behaviour?
My thoughts so far are:
I could get the package sources, edit the descriptions to give them different names, and build, in effect, two different packages. I'd rather be able to work directly with the binaries, though, as it is much less hassle.
I don't necessarily need to have both versions of the package loaded at the same time (just installed somewhere at the same time). I could perhaps mess about with Sys.getenv('R_HOME') to change where R installs packages, and then .libPaths() to change where R looks for them. This seems hacky, though, so does anyone have any better ideas?
You could selectively alter the library path. For complete transparency, keep both out of your usual path and then do
library(foo, lib.loc="~/dev/foo/v1") ## loads v1
and
library(foo, lib.loc="~/dev/foo/v2") ## loads v2
The same works for install.packages(), of course. All these commands have a number of arguments, so the hooks you are looking for may already be present. So don't look at changing R_HOME; rather, look at help(install.packages) (assuming you install from source).
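For instance, a sketch of installing two versions side by side from source tarballs into those two paths (the file names are placeholders, and the directories must already exist):

# Install two versions of the same source package into separate libraries
install.packages("foo_1.0.tar.gz", lib = "~/dev/foo/v1", repos = NULL, type = "source")
install.packages("foo_2.0.tar.gz", lib = "~/dev/foo/v2", repos = NULL, type = "source")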
But AFAIK you cannot load the same package twice under the same name.
Many years have passed since the accepted answer, which is of course still valid. It might, however, be worthwhile to mention a few new options that have arisen in the meantime:
Managing multiple versions of packages
For managing multiple versions of packages on a project (directory) level, the packrat tool can be useful: https://rstudio.github.io/packrat/. In short
Packrat enhances your project directory by storing your package dependencies inside it, rather than relying on your personal R library that is shared across all of your other R sessions.
This basically means that each of your projects can have its own "private library", isolated from the user and system libraries. If you are using RStudio, packrat is very neatly integrated and easy to use.
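Getting started is a single call pointing at the project directory; a minimal sketch, with a placeholder path:

# Turn a directory into a packrat project with its own private library
packrat::init("~/projects/myproject")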
Installing custom package versions
In terms of installing a custom version of a package, there are many ways; perhaps the most convenient is using the devtools package. For example:
devtools::install_version("ggplot2", version = "0.9.1")
Alternatively, as suggested by Richie, there is now a more lightweight package called remotes that is a result of the decomposition of devtools into smaller packages, with very similar usage:
remotes::install_version("ggplot2", version = "0.9.1")
More info on the topic can be found at:
https://support.rstudio.com/hc/en-us/articles/219949047-Installing-older-versions-of-packages
I have worked with R for a long time now, and it's only today that I thought about this. The idea came from my dabbling with Python, where the first step I had to take was to manage what Pythonistas call "virtual environments". They even have dedicated tools for this seemingly important task. I read more about this aspect and why they take it so seriously, and finally realized that it is a neat and important way to manage different projects with conflicting dependencies. I wondered why R doesn't have this feature and found that the concept of "environments" actually exists in R; it just isn't introduced to newbies the way it is in Python. So check the documentation on this, and it will solve your issue.
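For what it's worth, the closest library-level analogue to a Python virtualenv is a project-specific library path via .libPaths(); a minimal sketch (the path is a placeholder):

# Prepend a project-specific library so installs and loads resolve there first
dir.create("~/projects/myproject/r-lib", recursive = TRUE, showWarnings = FALSE)
.libPaths(c("~/projects/myproject/r-lib", .libPaths()))
install.packages("ggplot2")   # now installs into the project library by default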
Sorry for rambling but I thought it would help.