I am currently writing a package using reference classes. I have come across an issue which, from reading various sources:
Method initialisation in R reference classes
Can't reliably use RefClass methods in Snowfall
https://stat.ethz.ch/pipermail/r-devel/2011-June/061261.html
I gather is caused by reference methods not all being copied to every object of the class; rather, they are copied when first accessed.
As an example define:
test <- setRefClass("TEST",
fields = list( a = "numeric"),
methods = list(
addone = function(){
a <<- a+1
},
initialize = function(){
a <<- 1
}
)
)
example <- test$new()
So example is a new object of class TEST. Typing example$ and tabbing in the
console gives
> example$
# example$.->a example$.refClassDef example$.self
# example$a example$initialize
so the method addone is not presented as an option. It is, however, available to call:
example$addone()
Now tabbing again reveals
# >
# > example
# Reference class object of class "TEST"
# Field "a":
# [1] 2
# > example$
# example$.->a example$.refClassDef example$.self
# example$a example$addone example$field
# example$initialize example$show
so now addone, field and show are presented as options.
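Another way to see the same lazy installation, independent of the console's completion mechanism, is to list the object's environment directly. A small sketch (the exact set of names you see may vary slightly by R version):
example2 <- test$new()
ls(example2)      # no "addone" listed yet
example2$addone   # first access installs the method in the object
ls(example2)      # "addone" now appears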
In one of the links above, Martin Morgan advises forcing definition of the methods. This works well:
test <- setRefClass("TEST",
fields = list( a = "numeric"),
methods = list(
addone = function(){
a <<- a+1
},
initialize = function(){
a <<- 1
.self$addone #force definition
}
)
)
example <- test$new()
so now tabbing gives:
# > example$
# example$.->a example$.refClassDef example$.self
# example$a example$addone example$initialize
Some of my classes have over 30 methods, so I would like to do this as succinctly as possible.
I have defined:
test <- setRefClass("TEST",
fields = list( a = "numeric"),
methods = list(
addone = function(){
a <<- a+1
},
initialize = function(){
a <<- 1
eval(parse(text=paste0('.self$',ls(test$def@refMethods))))
}
)
)
example <- test$new()
tabbing now gives:
# > example$
# example$.->a example$.refClassDef example$.self
# example$a example$addone example$callSuper
# example$copy example$export example$field
# example$getClass example$getRefClass example$import
# example$initFields example$initialize example$show
# example$trace example$untrace
Whilst this works, it feels a bit clumsy. Also, test$def@refMethods is used rather than getRefClass("TEST")$def@refMethods, which feels a bit wrong. Has anyone dealt with this issue before?
Is there a better way to approach a solution? Thanks for any advice, and apologies if the question is overly drawn out.
I wonder what your objective is? Function names showing up with tab completion? Then it's worth a post to the R-devel mailing list with a feature request. The original scenario is more elegantly handled with usingMethods, as documented on ?setRefClass (a sketch follows the hack below). A continued hack might be
initialize = function(...) {
methods <- getRefClass(class(.self))$methods()
eval(parse(text=paste0(".self$", methods)))
callSuper(...)
}
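For reference, here is a sketch of what the usingMethods() route mentioned above could look like, adapted from the TEST class in the question. I am assuming that declaring addone inside initialize is enough to have it installed at construction time; that is worth verifying in your own session.
test <- setRefClass("TEST",
    fields = list(a = "numeric"),
    methods = list(
        addone = function() {
            a <<- a + 1
        },
        initialize = function(...) {
            usingMethods(addone)  # declare the method so it is installed along with initialize (assumption; see ?setRefClass)
            a <<- 1
            callSuper(...)
        }
    )
)
example <- test$new()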
Tab completions can be customized via .DollarNames in the utils package, so
.DollarNames.TEST <- function(x, pattern)
grep(pattern, getRefClass(class(x))$methods(), value=TRUE)
Maybe an S3 method could be written at the base of your class hierarchy for this?
I know this is an old question, but it is still the top entry when searching for refClass tab completion on Google, so I'll just add an update:
Instead of using grep in the .DollarNames function as suggested by Martin, use findMatches from the utils package, as it plays better with the different R GUIs around (grep will delete your partially typed name upon hitting tab):
.DollarNames.TEST <- function(x, pattern){
utils:::findMatches(pattern, getRefClass(class(x))$methods())
}
This is also how tab completion is handled internally for lists and data.frames
@Martin Morgan noted that this was termed tab completion. The package rcompletion, and later rcompgen, were tasked with achieving this; they have now been moved into utils.
rcompletion update
I looked through the code in completion.R and, from what I could determine, utils:::.DollarNames.environment was handling tab completion for reference classes.
completion.R
Redefining the function seemed to achieve tab completion:
assignInNamespace(x = ".DollarNames.environment",
                  value = function(x, pattern = "") {
                      y <- NULL
                      if (isS4(x) && !is.null(x[[".refClassDef"]])) {
                          if (.hasSlot(x$.refClassDef, "refMethods")) {
                              y <- x$.refClassDef@refMethods
                              y <- ls(y, all.names = TRUE, pattern = pattern)
                          }
                      }
                      x <- ls(x, all.names = TRUE, pattern = pattern)
                      unique(c(x, y))
                  },
                  ns = "utils")
Some things to note:
I would only use this for my own purposes. Currently I am debugging and documenting a package; I had some longish method names and couldn't remember exactly what they were, so tab completion helps greatly.
Usage of assignInNamespace in a package is frowned upon (if not banned); see ?assignInNamespace.
Forced definition of methods is more advisable.
Related
Does anyone have an idea (or a 'solution') for why, if you define a View() method for an S4 class in order to display a certain slot in the RStudio data.frame viewer, it just won't show it as expected?
What I mean is: if you start a fresh R session, e.g.
View(mtcars)
it will work as expected and deliver the standard RStudio data.frame viewer (including filters etc.).
But if you then define some S4 class, e.g.
foo <- setClass("foo", slots = c(df = "data.frame"))
myfoo <- new('foo', df = mtcars)
and then a View method
setMethod("View", "foo", function(x, title) View(x#df, title))
you will suddenly face one of these two (differing) messages:
Creating a generic function for 'View' from package 'stats' in the global environment
OR this one
Creating a generic function for 'View' from package 'utils' in the global environment
which, IMO, is already "disturbing", since AFAIK the stats package does not even seem to have a View function.
But let's continue and get to what I really wonder about: why do I not get the same result in RStudio's data.frame viewer, but some form of list view instead?
Can this somehow be avoided and made to produce the expected output, i.e. the same type of "View" as for the standalone data.frame?
And to top it all off, once you do this within an R package you end up with the situation in the final pic below, where in the background you see the View(mtcars) output from before devtools::load_all() was run, and in the foreground the result of View(mtcars) after load_all() picked up all the methods/functions in the package. Is that an RStudio bug, or am I doing something wrong here?
Here is a (not-so-nice) workaround until a better solution comes along.
foo <- setClass("foo", slots = c(df = "data.frame"))
myfoo <- new('foo', df = mtcars)
old <- View
View <- function(...) {
if(isS4(...)) {
unclass(...)@df |> format.data.frame() |> old()
} else {
old(...)
}
}
The viewer with the red lines is the default utils::View,
function (x, title)
{
check <- Sys.getenv("_R_CHECK_SCREEN_DEVICE_", "")
msg <- "View() should not be used in examples etc"
if (identical(check, "stop"))
stop(msg, domain = NA)
else if (identical(check, "warn"))
warning(msg, immediate. = TRUE, noBreaks. = TRUE, domain = NA)
if (missing(title))
title <- paste("Data:", deparse(substitute(x))[1])
x0 <- as.data.frame(x)
x <- as.list(format.data.frame(x0))
rn <- row.names(x0)
if (any(rn != seq_along(rn)))
x <- c(list(row.names = rn), x)
if (!is.list(x) || !length(x) || !all(sapply(x, is.atomic)) ||
!max(lengths(x)))
stop("invalid 'x' argument")
if (grepl("darwin", R.version$os))
check_for_XQuartz()
invisible(.External2(C_dataviewer, x, title))
}
RStudio modifies View somewhere [1], maybe here, to a more user-friendly interface, via
function (...)
.rs.callAs(name, hook, original, ...)
where original is utils::View (see e <- environment(View); e$original).
[1] I have not figured out (yet) where exactly.
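A quick way to poke at this from inside an RStudio session (these are RStudio internals and may change between versions):
e <- environment(View)             # the View found on the search path
ls(e)                              # in RStudio, typically "hook", "name" and "original"
identical(e$original, utils::View) # TRUE: the wrapped function is utils::View

e$original(mtcars)                 # the bare utils::View viewer (the one with the red lines)
View(mtcars)                       # RStudio's data.frame viewer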
I've written an import function that gets a single file from an aws s3-bucket.
That function itself is a wrapper around aws.s3::s3read_using() which takes a reading function as its first argument.
Why do I wrap around aws.s3::s3read_using() ? Because I need to do some special error-handling and want the wrapping function to do some Recall() up to a limit... but that's a different story.
Now that I've successfully built and tested my wrapping function, I want to do another wrapping around that:
I want to iterate n times over my wrapper to bind the downloaded files together. The difficulty is now handing the 'reading_function' through to the FUN argument of aws.s3::s3read_using().
I could do that by simply using ... - BUT!
I want to make clear to the USER of my wrapping wrapper that they need to specify that argument.
So I've decided to use rlang's rlang::enexpr() to capture the argument and hand it over to my first wrapper via !!, which in turn captures that argument again with rlang::enexpr() and hands it over, finally, to aws.s3::s3read_using() via rlang::expr(aws.s3::s3read_using(FUN = !!reading_fn, object = s3_object)).
That works perfectly fine and smoothly. My problem is with testing that construct using testthat and mockery.
Here is some broadly simplified code:
my_workhorse_function <- function(fn_to_work_with, value_to_work_on) {
fn <- rlang::enexpr(fn_to_work_with)
# Some other magic happens here - error handling, condition-checking, etc...
out <- eval(rlang::expr((!!fn)(value_to_work_on)))
}
my_iterating_function <- function(fn_to_iter_with, iterate_over) {
fn <- rlang::enexpr(fn_to_iter_with)
out <- list()
for(i in seq_along(iterate_over)) {
out[[i]] <- my_workhorse_function(!!fn, iterate_over[i])
}
return(out)
}
# Works just fine
my_iterating_function(sqrt, c(9:16))
Now, to the test:
# Throws an ERROR: 'Error in `!fn`: invalid argument type'
test_that("my_iterating_function iterates length(iterate_over) times over my_workhorse_function", {
mock_1 <- mockery::mock(1, cycle = TRUE)
stub(my_iterating_function, "my_workhorse_function", mock_1)
expect_equal(my_iterating_function(sqrt, c(9:16)), list(1,1,1,1,1,1,1,1))
expect_called(mock_1, 8)
})
I've used a workaround, but it just doesn't feel right, even though it works:
# Test passed
test_that("my_iterating_function iterates length(iterate_over) times over my_workhorse_function", {
mock_1 <- mockery::mock(1, cycle = TRUE)
stub(my_iterating_function, "my_workhorse_function",
function(fn_to_work_with, value_to_work_on) {
fn <- rlang::enexpr(fn_to_work_with)
out <- mock_1(fn, value_to_work_on)
out})
expect_equal(my_iterating_function(sqrt, c(9:16)), list(1,1,1,1,1,1,1,1))
expect_called(mock_1, 8)
})
I'm using R version 4.1.1, with testthat (3.1.1), mockery (0.4.2) and rlang (0.4.12).
I think you're complicating things here, although maybe I'm not fully understanding your end goal. You can directly pass functions through arguments without any issue. Your example code above can be easily simplified to (keeping the loop just to match your test_that() call):
library(testthat)
library(mockery)
my_workhorse_function <- function(fn_to_work_with, value_to_work_on) {
fn_to_work_with(value_to_work_on)
}
my_iterating_function <- function(fn_to_iter_with, iterate_over) {
out <- list()
for(i in seq_along(iterate_over)) {
out[[i]] <- my_workhorse_function(fn_to_iter_with, iterate_over[i])
}
return(out)
}
# Works just fine
my_iterating_function(sqrt, c(9:16))
#> [[1]]
#> [1] 3
#>
#> ...
test_that("my_iterating_function iterates length(iterate_over) times over my_workhorse_function", {
mock_1 <- mockery::mock(1, cycle = TRUE)
stub(my_iterating_function, "my_workhorse_function", mock_1)
expect_equal(my_iterating_function(sqrt, c(9:16)), list(1,1,1,1,1,1,1,1))
expect_called(mock_1, 8)
})
#> Test passed
You can just pass FUN directly through all of your nested functions. The function you're wrapping with enexpr() was never going to be evaluated until you explicitly call it anyway. You usually use enexpr() when users are supplying expressions, not just functions.
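To illustrate that last point, a minimal sketch (the names here are hypothetical) of the kind of situation where enexpr()/!! actually earn their keep: the caller supplies an expression to be evaluated later in another context, rather than a ready-made function.
# The caller writes an expression in terms of the data's columns; we capture
# it unevaluated and only evaluate it inside the data.
summarise_with <- function(data, summary_expr) {
  captured <- rlang::enexpr(summary_expr)
  eval(captured, envir = data)
}

summarise_with(mtcars, mean(mpg) + sd(mpg))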
For complicated reasons I'm trying to spy on often-used I/O methods in R.
I've got working code, but there's something going on that I don't understand about R's evaluation, which leads to a lengthy comment.
.spy.on.methods <- function() {
attached.packages <- .packages()
pkg.watch <- list(
"data.table" = "fread",
"maptools" = "readShapePoly",
"utils" = "read.csv"
)
for (pkg in names(pkg.watch)) {
methods <- pkg.watch[[pkg]]
if (pkg %in% attached.packages) {
# replace now
.replace.methods.with.spy(methods)
} else {
# replace them right after they're loaded
setHook(
packageEvent(pkg, "attach"),
# parameters included for reader comprehension - we don't need or use them
function(pkg.name, pkg.path) {
# WARNING: you cannot use `methods` here. The reason is the value is
# mutated within this loop and R's lazy evaluation means that ALL
# hook functions will end up having a `methods` value from the final
# iteration of the loop
.replace.methods.with.spy(pkg.watch[[pkg.name]])
},
action = "append"
)
}
}
}
It seems to me that I should be able to replace
.replace.methods.with.spy(pkg.watch[[pkg.name]])
with
.replace.methods.with.spy(methods)
but it doesn't work. Diagnostic print statements indicate that in all cases methods has the value associated with pkg.watch[["utils"]].
I feel as if some combination of force, eval, and substitute is the solution here, but I have not yet found it. Can anyone enlighten me?
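To make the behaviour easier to reproduce outside the hook machinery, here is a stripped-down sketch (hook registration replaced by simply storing closures in a list) of why every closure ends up seeing the final value, together with the local() idiom that freezes a snapshot per iteration:
# All closures share the loop's environment, and `m` is only looked up when a
# closure is eventually called; by then it holds the value from the last iteration.
hooks <- list()
for (m in c("fread", "readShapePoly", "read.csv")) {
  hooks[[m]] <- function() m
}
sapply(hooks, function(f) f())   # every element is "read.csv"

# local() gives each closure its own environment holding a snapshot of the
# current value, so each one remembers the value from its own iteration.
hooks2 <- list()
for (m in c("fread", "readShapePoly", "read.csv")) {
  hooks2[[m]] <- local({
    m_now <- m
    function() m_now
  })
}
sapply(hooks2, function(f) f())  # "fread", "readShapePoly", "read.csv"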
I am developing an R package, and one of the functions implements interaction with users through standard input via readline. I now wonder how to test the behavior of this function, preferably with the testthat library.
It seems that test_that assumes the answer is "" for user input. I wish I could test the behavior conditional on the various answers users may type in.
Below is a small code example. In the actual development, the marryme function is defined in a separate file and exported in the namespace.
devtools::test() gives me an error on the last line because the answer never becomes "y". I would like to test that the function correctly returns TRUE when the user types "y".
library(testthat)
test_that("input", {
marryme <- function() {
ans <- readline("will you marry me? (y/n) > ")
return(ans == "y")
}
expect_false(marryme()) # this is good
expect_true(marryme()) # this is no good
})
Use readLines() with a custom connection
By using readLines() instead of readline(), you can define the connection, which allows you to customize it using global options.
There are two steps you need to take:
Set a default option in your package's zzz.R that points to stdin():
.onAttach <- function(libname, pkgname){
options(mypkg.connection = stdin())
}
In your function, change readline() to readLines(n = 1) and set the connection in readLines() to getOption("mypkg.connection").
Example
Based on your MWE:
library(testthat)
options(mypkg.connection = stdin())
marryme <- function() {
cat("will you marry me? (y/n) > ")
ans <- readLines(con = getOption("mypkg.connection"), n = 1)
cat("\n")
return(ans == "y")
}
test_that("input", {
f <- file()
options(mypkg.connection = f)
ans <- paste(c("n", "y"), collapse = "\n") # set this to the number of tests you want to run
write(ans, f)
expect_false(marryme()) # reads "n" from the file connection
expect_true(marryme())  # reads "y", so this now passes
# reset connection
options(mypkg.connection = stdin())
# close the file
close(f)
})
#> will you marry me? (y/n) >
#> will you marry me? (y/n) >
I'm currently exploring the possibilities of R's Reference Classes and I'm trying to wrap my head around customized accessor functions. The manual states, for fields, that:
The element in the list can alternatively be an accessor function, a
function of one argument that returns the field if called with no
argument or sets it to the value of the argument otherwise. Accessor
functions are used internally and for inter-system interface
applications. Their definition follows the rules for writing methods
for the class: they can refer to other fields and can call other
methods for this class or its superclasses. See the section on
“Implementation” for the internal mechanism used by accessor
functions.
All I've been able to find is accessor functions being used in the context of file storage. Being accustomed to private internal variables and input validation, I would argue that this is where the input data validation should go; see the example below:
Account <-
setRefClass("Account",
fields = list(data = "list",
balance =
function(value){
if (missing(value)){
return(data$balance)
}else{
if (!is.numeric(value))
stop("You can only set the balance to a numeric value!")
data$balance <<- value
}
}),
methods = list(
withdraw = function(x) {
balance <<- balance - x
},
deposit = function(x) {
balance <<- balance + x
}
))
This works as expected:
> a <- Account$new(balance = 0)
>
> a$deposit(10)
> a$balance
[1] 10
>
> a$withdraw(1)
> a$balance
[1] 9
>
> a$balance <- "a"
Error in (function (value) :
You can only set the balance to a numeric value!
What I would like to know is whether there is a reason for not doing this, since it seems like a natural approach but is not mentioned in the manual. Is there a good way of completely hiding the data variable, e.g. using .self <- local({data = list(); .self}) at some point?
I've been struggling with this as well and it appears that there is no way to fully hide the data. One thing that I noticed with your example is that you can still manually change the balance value by calling:
a <- Account$new(balance = 0)
a$data$balance<-"a"
a$balance
#> [1] "a"
The fact that it can still be manipulated like this is, I suspect, the reason it is not recommended. When the manual describes accessor functions, it seems to be referring to the ones you get if you use $accessors() (described in the manual). Example below:
Account <-
setRefClass("Account",
fields = list(balance = "numeric"),
methods = list(
withdraw = function(x) {
balance <<- balance - x
},
deposit = function(x) {
balance <<- balance + x
}
))
Account$accessors("balance")
a<-Account$new("balance"=0)
# you can now get and set the balance with getBalance() and setBalance(x).
# (it automatically capitalizes your field name)
a$setBalance(10)
a$getBalance()
# [10]
Lastly, you can always create the getBalance()/setBalance(x) methods manually in the methods argument of setRefClass if you want to add extra checks.
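A minimal sketch of that manual route, adapting the Account class from above (this is my illustration, not from the original post): the accessors become ordinary methods, so the validation lives in setBalance() rather than in an accessor field.
Account <-
  setRefClass("Account",
              fields = list(balance = "numeric"),
              methods = list(
                getBalance = function() balance,
                setBalance = function(value) {
                  if (!is.numeric(value))
                    stop("You can only set the balance to a numeric value!")
                  balance <<- value
                },
                withdraw = function(x) setBalance(balance - x),
                deposit = function(x) setBalance(balance + x)
              ))

a <- Account$new(balance = 0)
a$deposit(10)
a$getBalance()
# [1] 10
a$setBalance("a")   # stops with "You can only set the balance to a numeric value!"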
Most of this is inferred from the documentation for setRefClass. See this link on Stack Overflow below, which discusses Private Members.