Is there a way to automatically load a data object from a package into memory when the package is loaded (but not yet attached)? I.e. the opposite of lazy loading? The object is used in one of the package functions, so it needs to be available at all times.
When the package is built with LazyData: false, the data object is not exported by the package at all and needs to be loaded manually with data(). We could use something like:
.onLoad <- function(lib, pkg) {
  data(mydata, package = pkg)
}
However, data() loads the object into the global environment. I would strongly prefer to load it into the package environment (which is what LazyData does) to prevent masking conflicts.
A workaround is to bypass the data mechanics completely and simply hardcode the object in the package. The package file R/myscore.R would then look like
# top-level code in R/ runs when the package is installed, so the model
# object gets baked into the package namespace
mymodel <- readRDS("inst/mymodel.rds")

myscore <- function(newdata) {
  predict(mymodel, newdata)
}
But this will lead to a huge package database (the R/pkgname.rdb file in the installed package) for large data objects, and I am not sure what the consequences of that are.
As you say
The object is used in one of the package functions, so it needs to be available at all time.
I think the author of that package should really NOT use data(.) for that.
Instead, he should define the object inside the package's R/ directory, either by simple R code in an R/*.R file,
or by using the sysdata.rda approach that is explained in the famous first reference for all these questions,
"Writing R Extensions". In both cases the package author can also export the object, which is often desirable for other users, as in your case.
Of course this needs a polite conversation between you and the package author, and will only apply to the next version of that package.
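For illustration, a minimal sketch of the sysdata.rda approach (file and object names are made up; usethis::use_data() is optional convenience over save()):

# run once from the package source directory; objects saved in R/sysdata.rda
# are loaded into the package namespace whenever the package is loaded
mymodel <- readRDS("data-raw/mymodel.rds")   # hypothetical source of the object
usethis::use_data(mymodel, internal = TRUE)  # writes R/sysdata.rda
# equivalently, without usethis:
save(mymodel, file = "R/sysdata.rda")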
I'm going to post this since it seems to work for my use case.
.onLoad() is:
function(lib, pkg) {
  data(mydata, package = pkg,
       envir = parent.env(environment()))
}
You also need Imports: utils in DESCRIPTION and importFrom(utils, data) in NAMESPACE in order to pass R CMD check.
In my case I don't need the data object to be visible to the user; I need it to be visible to one of the functions in the package. If you need it visible to the user, that's going to be even harder (I think) because, as far as I can tell, you can't export data, just functions. The only way I've thought of to export data is to export a wrapper function for the data, along the lines of the sketch below.
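A minimal sketch of such a wrapper (get_mydata is a made-up name):

#' @export
get_mydata <- function() {
  # mydata lives in the package environment, loaded there by .onLoad() above
  mydata
}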
Currently I have in my package DESCRIPTION a dependency on dbplyr:
Imports:
    dbplyr,
    dplyr
dbplyr is useful almost solely because of the S3 methods it defines: https://github.com/tidyverse/dbplyr/blob/main/NAMESPACE. The actual functions you call to use dbplyr are almost entirely from dplyr.
By putting dbplyr in my Imports, it should automatically get loaded, but not attached, which should be enough to register its S3 methods: https://r-pkgs.org/dependencies-mindset-background.html#sec-dependencies-attach-vs-load.
This seems to work fine, but whenever I R CMD check, it tells me:
N checking dependencies in R code (10.8s)
Namespace in Imports field not imported from: ‘dbplyr’
All declared Imports should be used.
Firstly, why does R CMD check even check this, considering that it often makes sense to load packages without importing them? Secondly, how am I supposed to satisfy R CMD check without loading things into my namespace that I don't want or need?
I am pretty sure two of your assumptions are false.
First, putting Imports: dbplyr into your DESCRIPTION file won't load it, so its methods won't be loaded from that alone. Basically the Imports field in the DESCRIPTION file just guarantees that dbplyr is available to be loaded when requested. If you import something via the NAMESPACE file, that will cause it to be loaded. If you evaluate dbplyr::something that will cause it to be loaded. Executing loadNamespace("dbplyr") is another way, and there are a few others. You may also load some other package that loads it.
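For instance, any of these (a sketch; dbplyr::sql is just an arbitrary exported object) would trigger the load:

loadNamespace("dbplyr")                     # load the namespace without attaching it
requireNamespace("dbplyr", quietly = TRUE)  # same, but returns FALSE instead of erroring
invisible(dbplyr::sql)                      # `::` access loads the namespace as a side effect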
Second, I think you have misinterpreted the error message. It isn't saying that you loaded it without importing it (though it would complain about that too), it is saying that it can't detect any use of it in your package, so maybe it shouldn't be a requirement for installing your package.
Unfortunately, the code to detect uses is fallible, so it sometimes misses uses. Examples I've heard about are:
if the package is only used in the default value for a function argument. This has been fixed in R-devel.
if the package is only used during the build to construct some object, e.g. code like someclass <- R6::R6Class( ... ) needs R6, but the check code won't see it because it looks at someclass, not at the source code that created it.
if the use of the package is hidden by specifying the name of the package in a character variable.
if the need for the package is indirect, e.g. you need to use ggplot2::geom_hex. That needs the hexbin package, but ggplot2 only declares it as "Suggested".
These examples come from this discussion: https://github.com/hadley/r-pkgs/issues/828#issuecomment-1421353457 .
The recommended workaround there is to create an object that refers to the imported package explicitly, e.g. putting the line
dummy_r6 <- function() R6::R6Class
into your package is enough to suppress the note without actually loading R6. (It will be loaded if you ever call this function.)
However, your requirement is stronger: you do need to make sure dbplyr is loaded if you want its methods to be used. I'd put something in your .onLoad() function that triggers the load. For example,
.onLoad <- function(lib, pkg) {
  # Make sure the dbplyr methods are loaded
  loadNamespace("dbplyr")
}
EDITED TO ADD: As pointed out in the comments, there's a bug in the check code that means it won't detect this as being a use of dbplyr. You really need to do both things, e.g.
.onLoad <- function(lib, pkg) {
  # Make sure the dbplyr methods are loaded
  loadNamespace("dbplyr")
  # Work around bug in code checking in R 4.2.2 for use of packages
  dummy <- function() dbplyr::across_apply_fns
}
The function used in the dummy construction is arbitrary; it probably doesn't even need to exist, but I chose one that does.
I created a local package of personal functions to be used easily within R. One of these is meant to be used within a wrapper function from the lidR package (i.e. grid_metrics). For this reason I took the scheme of this script as a reference, exporting both the long name (e.g. my_metrics(param1, param2, ...)) and the lazy one (e.g. .my_metrics), because I really like its ease of use.
Nevertheless, if I load my package and then call the lazy function
library(mypackage)
test = grid_metrics(las, .my_metrics, 20)
it does not work, so I have to load the function into memory by running its code from the file. At that point, I can use it in both forms.
Within the NAMESPACE file I can see that both forms are exported, so my last guess is that this might somehow be related to lazyeval, but I don't get how.
It turned out that the problem was related to the DESCRIPTION field in which the lidR package was declared: when I moved lidR from Imports to Depends, the issue was solved.
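For reference, the change amounts to this in DESCRIPTION (other fields omitted):

Before (not working):
Imports:
    lidR

After (working):
Depends:
    lidR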
I am writing a function that will take the name of an installed package and return a data frame listing all the data frames available in that package along with the number and types of variables in those data frames.
In order to do this, I need to require the package temporarily so I can access its data sets. The problem I have is that requiring a package also introduces a whole lot of extra stuff into the search path and the loaded namespaces beyond just the package in question. I want my function to tidy up after itself, but I can't find a good way to detach everything that got imported when the package was required. In particular, detach seems to detach only the package, but not any of the other imported stuff.
Any advice?
I'm not sure which IDE you are working with, but many of them have tab-completion. If I type ?unload at my console and hit <tab>, I immediately see ??unloadNamespace, so that would be a reasonable function to investigate. You should first look at:
?unloadNamespace
... and then decide if that is sufficient. There is also the detach function, whose associated help page is linked from that help page.
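A sketch of the kind of tidy-up this enables (betareg is just an example package; unloading can still fail for namespaces that other loaded packages import):

before <- loadedNamespaces()
require("betareg", quietly = TRUE)
new_ns <- setdiff(loadedNamespaces(), before)  # everything loaded alongside the package
detach("package:betareg", unload = TRUE)       # drop it from the search path and unload it
for (ns in rev(new_ns)) try(unloadNamespace(ns), silent = TRUE)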
I'm working on an R package at present and trying to follow the best practice guidelines provided by Hadley Wickham at http://r-pkgs.had.co.nz. As part of this, I'm aiming to have all of the package dependencies within the Imports section of the DESCRIPTION file rather than the Depends since I agree with the philosophy of not unnecessarily altering the global environment (something that many CRAN and Bioconductor packages don't seem to follow).
I want to use functions within the Bioconductor package rhdf5 within one of my package functions, in particular h5write(). The issue I've now run into is that it doesn't have its S3 methods declared as such in its NAMESPACE. They are declared using (e.g.)
export(h5write.default)
export(h5writeDataset.matrix)
rather than
S3method(h5write, default)
S3method(h5writeDataset, matrix)
The generic h5write is defined as:
h5write <- function(obj, file, name, ...) {
  res <- UseMethod("h5write")
  invisible(res)
}
In practice, this means that calls to rhdf5::h5write fail because there is no appropriate h5write method registered.
As far as I can see, there are three solutions to this:
Use Depends rather than Imports in the DESCRIPTION file.
Use library("rhdf5") or require("rhdf5") in the code for the relevant function.
Amend the NAMESPACE file for rhdf5 to use S3methods() rather than export().
All of these have disadvantages. Option 1 means that the package is loaded and attached to the search path even if the relevant function in my package is never called. Option 2 means using library in a package, which again attaches the package, and is also deprecated per Hadley Wickham's guidelines. Option 3 would mean relying on the other package author to update their package on Bioconductor, and also means that the S3 methods would no longer be exported, which could in turn break other packages that rely on calling them explicitly.
Have I missed another alternative? I've looked elsewhere on StackOverflow and found the following somewhat relevant questions: Importing S3 method from another package and How to export S3 method so it is available in namespace?, but nothing that directly addresses my issue. Of note, the key difference from the first of these two is that here the generic and the method are both in the same package, but the issue is the use of export rather than S3method.
Sample code to reproduce the error (without needing to create a package):
loadNamespace("rhdf5")
rdhf5::h5write(1:4, "test.h5", "test")
Error in UseMethod("h5write") :
no applicable method for 'h5write' applied to an object of class
"c('integer', 'numeric')
Alternatively, there is a skeleton package at https://github.com/NikNakk/s3issuedemo which provides a single function demonstrateIssue() which reproduces the error message. It can be installed using devtools::install_github("NikNakk/s3issuedemo").
The key here is to import the specific methods in addition to the generic you want to use. Here is how you can get it to work for the default method.
Note: this assumes that the test.h5 file already exists.
#' @importFrom rhdf5 h5write.default
#' @importFrom rhdf5 h5write
#' @export
myFun <- function() {
  h5write(1:4, "test.h5", "test")
}
I also have put up my own small package demonstrating this here.
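For completeness, a quick way to try it out (h5createFile() is rhdf5's helper for creating the test.h5 file that, per the note above, must already exist):

rhdf5::h5createFile("test.h5")
myFun()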
Here is a snippet of an R script doing beta regression on the "GasolineYield" data:
library("betareg")
data("GasolineYield", package = "betareg")
gy_logit <- betareg(yield ~ batch + temp, data = GasolineYield)
It works fine, but if I run the code with the second line deleted, it fails with the message:
Error in terms.formula(form, ...) : object 'GasolineYield' not found
But isn't the data.frame GasolineYield in the package betareg? What actually happens when I call library("betareg")? Isn't all the data inside the package automatically loaded into the current environment? Could anybody help me understand the mechanism behind this?
For the most part, data is included in R packages for the purposes of providing examples and other stuff that is not mission critical. That is why datasets are not automatically loaded into the environment for most packages and you have to load them using the data() command. This is a good thing. It would be a waste of memory, time, and namespace for packages that primarily provide functions to load their data all the time when users don't use it very frequently.
When you load a package, only the things that the package author exports in the NAMESPACE file are made available, and the DESCRIPTION file has a field called LazyData that determines the data behavior as well. By the way, packages often contain functions that are for internal use only and are not exported in the NAMESPACE file.
TL;DR, the package writer determines what stuff will be available when the package is loaded and they specify those items in the NAMESPACE and DESCRIPTION files.
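To see the two access paths side by side (a sketch; whether the :: form works without data() depends on the package's LazyData setting):

data("GasolineYield", package = "betareg")  # explicitly copies the dataset into your workspace
head(GasolineYield)
head(betareg::GasolineYield)  # only works if betareg was built with LazyData: true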