Force evaluation of all lazy function arguments in R

This is my function:
f <- function(a, b, ...){
c(as.list(environment()), list(...))
}
If I call f(a = 2) no error will be raised, although b is missing. I would like to get an error in this case:
Error in f(a = 2) : argument "b" is missing, with no default
What piece of dynamic and efficient code must I add so that this error is raised? I was thinking of something along the lines of force(as.symbol(names(formals()))).
Note: in case you wonder why I need this kind of function: it is a way to standardize a certain kind of list. Such a list must have a and b, and possibly other keys. I could play with objects too...
Solutions: See Carl's answer or comments below.
f <- function(a, b, ...){
sapply(ls(environment()), get, envir = environment(), inherits = FALSE)
c(as.list(environment()), list(...))
}
Or
f <- function(a, b, ...){
stopifnot(all(setdiff(names(formals()), '...') %in% names(as.list(match.call()[-1]))))
c(as.list(environment()), list(...))
}
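A quick usage sketch of either version (hedged; the exact error message differs between the two variants):
f(a = 1, b = 2, c = 3)   # returns a list containing a, b and c
f(a = 2)                 # now fails, because b was never supplied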

An idea: first collect all the arguments of any function anonymously, i.e. regardless of the function, gather the arguments into a list with no preset requirements:
#' A function to grab all arguments of the calling environment, i.e. of a function
#'
#'
#' \code{grab.args}
#'
grab.args <- function() {
envir <- parent.frame()
func <- sys.function(-1)
call <- sys.call(-1)
dots <- match.call(func, call, expand.dots=FALSE)$...
c(as.list(envir), dots)
}
Then, in whatever function you use it in, store the supplied arguments in a list (does_have), collect the arguments that are pre-defined in the environment (should_have), and loop through them to match names and see whether any values are missing. If any are, raise an error naming the missing arguments; if not, do your thing.
#' As an example
#'
f <- function(a, b, ...){
does_have <- grab.args()
should_have <- ls(envir = environment())
check_all <- sapply(should_have, function(i){
!nchar(does_have[[i]])
})
if(any(mapply(isTRUE, check_all))){
need_these <- paste(names(which(mapply(isTRUE,check_all))), collapse = " and ")
cat(sprintf('Values needed for %s', need_these))
}else {
does_have
}
}
Example outputs:
> f(mine = "yours", a = 3)
Values needed for b
> f(b = 12)
Values needed for a
> f(hey = "you")
Values needed for a and b
Edit to throw an actual error...
f <- function(a,b,...){
Filter(missing, sapply(ls(environment()), get, environment()))
}
> f(a = 2, wtf = "lol")
Error in FUN(X[[i]], ...) : argument "b" is missing, with no default

Related

R: S3 Method dispatch depending on arguments

I have a generic function foo that I want to call three different ways depending on the arguments given to it.
foo <- function(...) UseMethod("foo")
#default
foo.default <- function(x, y, ...) {
#does some magic
print("this is the default method")
}
#formula
foo.formula <- function(formula, data = list(), ...) {
print("this is the formula method")
}
#data.frame
foo.data.frame <- function(data, x, y, ...) {
print("this is the data.frame method")
}
Below I show how I expect the method dispatch to work; the actual output is shown under each call (calls that don't behave as I'd like are marked with #ERROR)...
mydata <- data.frame(x=c(1,2,3,4),y=c(5,6,7,8))
#ways to call default function
foo(x = mydata$x, y = mydata$y)
#[1] "this is the default method"
#ways to call formula
foo(formula = mydata$x~mydata$y)
#[1] "this is the formula method"
foo(formula = x~y, data = mydata)
#[1] "this is the formula method"
foo(data = mydata, formula = x~y) #ERROR
#[1] "this is the data.frame method"
#ways to call data.frame method
foo(data = mydata, x = x, y = y)
#[1] "this is the data.frame method"
foo(x = x, y = y, data = mydata) #ERROR
#Error in foo(x = x, y = y, data = mydata) : object 'x' not found
From what I can tell, the method used depends on the class of the first argument. Essentially, I would like the method dispatch to depend on the arguments passed to the generic function foo and not just on the first argument.
I would like the dispatch to have the following priority:
If the formula argument is present, the formula method is used (the data argument should be optional here).
Then, if no formula argument is found but a data argument is present, use the data.frame method (which requires x and y arguments).
Otherwise foo expects the x and y arguments, or it will fail.
Note
I would like to avoid defining the generic function foo as follows
foo <- function(formula, data,...) UseMethod("foo")
While this would fix all my issues (I believe all except the last case), it will cause a devtools::check() warning, because some of the S3 methods would not have the same arguments as the generic function and would no longer be consistent with it (specifically foo.default and foo.data.frame). And I wouldn't like to include the missing arguments, because those methods have no use for them.
As Thomas has pointed out, this is not the standard behavior for S3 classes. If you really want to stick to S3, however, you could write your functions so as to "mimic" UseMethod, even though it won't be pretty and is probably not what you want to do. Nevertheless, here is an idea based on capturing all arguments first and then checking for the presence of your "preferred" argument type:
Get some objects first:
a <- 1; class(a) <- "Americano"
b <- 2; class(b) <- "Espresso"
Let the function in question capture all arguments with dots, and then check for the presence of an argument type in order of your preference:
drink <- function(...){
dots <- list(...)
if(any(sapply(dots, function(cup) class(cup)=="Americano"))){
drink.Americano(...)
} else { # you can add more checks here to get a hierarchy
# try to find appropriate method first if one exists,
# using the first element of the arguments as usual
tryCatch(get(paste0("drink.", class(dots[[1]])))(),
# if no appropriate method is found, try the default method:
error = function(e) drink.default(...))
}
}
drink.Americano <- function(...) print("Hmm, gimme more!")
drink.Espresso <- function(...) print("Triple, please!")
drink.default <- function(...) print("Any caffeine in there?")
drink(a) # "Americano", dispatch hard-coded.
# [1] "Hmm, gimme more!"
drink(b) # "Espresso", not hard-coded, but correct dispatch anyway
# [1] "Tripple, please!"
drink("sthelse") # Dispatches to default method
# [1] "Any caffeine in there?"
drink(a,b,"c")
# [1] "Hmm, gimme more!"
drink(b,"c", a)
# [1] "Hmm, gimme more!"

Pass optional arguments to function, three dots

I'm confused about how ... works.
tt = function(...) {
return(x)
}
Why doesn't tt(x = 2) return 2?
Instead it fails with the error:
Error in tt(x = 2) : object 'x' not found
Even though I'm passing x as an argument?
Because everything you pass in the ... stays in the .... Variables you pass that aren't explicitly captured by a parameter are not expanded into the local environment. The ... should be used for values your current function doesn't need to interact with at all, but some later function does need, so they can easily be passed along inside the .... It's meant for a scenario like
ss <- function(x) {
x
}
tt <- function(...) {
return(ss(...))
}
tt(x=2)
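tt(x = 2) then returns 2, because the dots are forwarded untouched to ss(), which does declare x as a parameter.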
If your function needs the variable x to be defined, it should be a parameter
tt <- function(x, ...) {
return(x)
}
If you really want to expand the dots into the current environment (and I strongly suggest that you do not), you can do something like
tt <- function(...) {
list2env(list(...), environment())
return(x)
}
If you define the three dots as an argument for your function and want it to work, you need to tell your function where the dots actually go. In your example you neither define x as an argument, nor does ... feature elsewhere in the body of your function. An example that actually works is:
tt <- function(x, ...){
mean(x, ...)
}
x <- c(1, 2, 3, NA)
tt(x)
#[1] NA
tt(x, na.rm = TRUE)
#[1] 2
Here ... refers to any other arguments that the function mean might take. Additionally you have a regular argument x. In the first example tt(x) just returns mean(x), whilst in the second example tt(x, na.rm = TRUE) passes the second argument na.rm = TRUE on to mean, so tt returns mean(x, na.rm = TRUE).
Another way that the programmers of R use a lot is list(...) as in
tt <- function(...) {
args <- list(...) # As in this
if("x" %in% names(args))
return(args$x)
else
return("Something else.")
}
tt(x = 2)
#[1] 2
tt(y = 1, 2)
#[1] "Something else."
I believe this is one of their favorite ways, if not the favorite way, of handling the dots arguments.

R: how to find what S3 method will be called on an object?

I know about methods(), which returns all methods for a given class. Suppose I have x and I want to know what method will be called when I call foo(x). Is there a one-liner or package that will do this?
The shortest I can think of is:
sapply(class(x), function(y) try(getS3method('foo', y), silent = TRUE))
and then to check the class of the results... but is there not a built-in for this?
Update
The full one-liner would be:
fm <- function (x, method) {
cls <- c(class(x), 'default')
results <- lapply(cls, function(y) try(getS3method(method, y), silent = TRUE))
Find(function (x) class(x) != 'try-error', results)
}
This will work with most things but be aware that it might fail with some complex objects. For example, according to ?S3Methods, calling foo on matrix(1:4, 2, 2) would try foo.matrix, then foo.numeric, then foo.default; whereas this code will just look for foo.matrix and foo.default.
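A quick hedged illustration of that caveat (the exact class() output depends on your R version):
x <- matrix(1:4, 2, 2)
class(x)
# [1] "matrix" "array"    (R >= 4.0; older versions report just "matrix")
# Neither vector contains "numeric", so fm(x, "foo") never tries foo.numeric,
# even though UseMethod's implicit-class dispatch would.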
findMethod defined below is not a one-liner but its body has only 4 lines of code (and if we required that the generic be passed as a character string it could be reduced to 3 lines of code). It will return a character string representing the name of the method that would be dispatched by the input generic given that generic and its arguments. (Replace the last line of the body of findMethod with get(X(...)) if you want to return the method itself instead.) Internally it creates a generic X and an X method corresponding to each method of the input generic such that each X method returns the name of the method of the input generic that would be run. The X generic and its methods are all created within the findMethod function so they disappear when findMethod exits. To get the result we just run X with the input argument(s) as the final line of the findMethod function body.
findMethod <- function(generic, ...) {
ch <- deparse(substitute(generic))
f <- X <- function(x, ...) UseMethod("X")
for(m in methods(ch)) assign(sub(ch, "X", m, fixed = TRUE), "body<-"(f, value = m))
X(...)
}
Now test it. (Note that the one-liner in the question fails with an error in several of these tests but findMethod gives the expected result.)
findMethod(as.ts, iris)
## [1] "as.ts.default"
findMethod(print, iris)
## [1] "print.data.frame"
findMethod(print, Sys.time())
## [1] "print.POSIXct"
findMethod(print, 22)
## [1] "print.default"
# in this example it looks at 2nd component of class vector as no print.ordered exists
class(ordered(3))
## [1] "ordered" "factor"
findMethod(print, ordered(3))
## [1] "print.factor"
findMethod(`[`, BOD, 1:2, "Time")
## [1] "[.data.frame"
I use this:
s3_method <- function(generic, class, env = parent.frame()) {
fn <- get(generic, envir = env)
ns <- asNamespace(topenv(fn))
tbl <- ns$.__S3MethodsTable__.
for (c in class) {
name <- paste0(generic, ".", c)
if (exists(name, envir = tbl, inherits = FALSE)) {
return(get(name, envir = tbl))
}
if (exists(name, envir = globalenv(), inherits = FALSE)) {
return(get(name, envir = globalenv()))
}
}
NULL
}
For simplicity this doesn't return methods defined by assignment in the calling environment. The global environment is checked for convenience during development. These are the same rules used in r-lib packages.
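So, for example, s3_method("print", class(iris)) walks the class vector, builds the name print.data.frame, and returns that method if it is found in the namespace's methods table (or defined in the global environment), and NULL otherwise.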

Defining a new class of functions in R

So I'm changing the class of some functions that I'm building in R in order to add a description attribute and because I want to use S3 generics to handle everything for me. Basically, I have a structure like
foo <- function(x) x + 1
addFunction <- function(f, description) {
class(f) <- c("addFunction", "function")
attr(f, "description") <- description
f
}
foo <- addFunction(foo, "Add one")
and then I do stuff like
description <- function(x) UseMethod("description")
description.default <- function(x) deparse(substitute(x))
description.addFunction <- function(x) attr(x, "description")
This works fine, but it's not that elegant. I'm wondering if it is possible to define a new class of functions such that instances of this class can be defined in a syntax similar to the function syntax. In other words, is it possible to define addFunction such that foo is generated in the following way:
foo <- addFunction(description = "Add one", x) {
x + 1
}
(or something similar, I have no strong feelings about where the attribute should be added to the function)?
Thanks for reading!
Update: I have experimented a bit more with the idea, but haven't really reached any concrete results yet - so this is just an overview of my current (updated) thoughts on the subject:
I tried the idea of just copying the function() function, giving it a different name and then manipulating it afterwards. However, this does not work, and I would love any input on what is happening here:
> function2 <- `function`
> identical(`function`, function2)
[1] TRUE
> function(x) x
function(x) x
> function2(x) x
Error: unexpected symbol in "function2(x) x"
> function2(x)
Error: incorrect number of arguments to "function"
As function() is a primitive function, I tried looking at the C-code defining it for more clues. I was particularly intrigued by the error message from the function2(x) call. The C-code underlying function() is
/* Declared with a variable number of args in names.c */
SEXP attribute_hidden do_function(SEXP call, SEXP op, SEXP args, SEXP rho)
{
SEXP rval, srcref;
if (TYPEOF(op) == PROMSXP) {
op = forcePromise(op);
SET_NAMED(op, 2);
}
if (length(args) < 2) WrongArgCount("function");
CheckFormals(CAR(args));
rval = mkCLOSXP(CAR(args), CADR(args), rho);
srcref = CADDR(args);
if (!isNull(srcref)) setAttrib(rval, R_SrcrefSymbol, srcref);
return rval;
}
and from this, I conclude that for some reason, at least two of the four arguments call, op, args and rho are now required. From the signature of do_function() I am guessing that the four arguments passed to do_function should be a call, a promise, a list of arguments and then maybe an environment. I tried a lot of different combinations for function2 (including setting up to two of these arguments to NULL), but I keep getting the same (new) error message:
> function2(call("sum", 2, 1), NULL, list(x=NULL), baseenv())
Error: invalid formal argument list for "function"
> function2(call("sum", 2, 1), NULL, list(x=NULL), NULL)
Error: invalid formal argument list for "function"
This error message is returned from the C-function CheckFormals(), which I also looked up:
/* used in coerce.c */
void attribute_hidden CheckFormals(SEXP ls)
{
if (isList(ls)) {
for (; ls != R_NilValue; ls = CDR(ls))
if (TYPEOF(TAG(ls)) != SYMSXP)
goto err;
return;
}
err:
error(_("invalid formal argument list for \"function\""));
}
I'm not fluent in C at all, so from here on I'm not quite sure what to do next.
So these are my updated questions:
Why do function and function2 not behave in the same way? Why do I need to call function2 using a different syntax when they are deemed identical in R?
What are the proper arguments of function2 such that function2([arguments]) will actually define a function?
Some keywords in R such as if and function have special syntax in the way that the underlying functions get called. It's quite easy to use if as a function if desired, e.g.
`if`(1 == 1, "True", "False")
is equivalent to
if (1 == 1) {
"True"
} else {
"False"
}
function is trickier. There's some help on this at a previous question.
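For example (a hedged aside, not from the linked question): you can invoke function programmatically by constructing the call yourself, which is essentially what the %{% helper below does. The formals must be a pairlist whose elements are tagged with symbols, which is what as.pairlist(alist(x = )) produces and what CheckFormals() verifies:
f2 <- eval(call("function", as.pairlist(alist(x = )), quote(x + 1)))
f2(3)
# [1] 4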
For your current problem here's one solution:
# Your S3 methods
description <- function(x) UseMethod("description")
description.default <- function(x) deparse(substitute(x))
description.addFunction <- function(x) attr(x, "description")
# Creates the pairlist for arguments, handling arguments with no defaults
# properly. Also brings in the description
addFunction <- function(description, ...) {
args <- eval(substitute(alist(...)))
tmp <- names(args)
if (is.null(tmp)) tmp <- rep("", length(args))
names(args)[tmp==""] <- args[tmp==""]
args[tmp==""] <- list(alist(x=)$x)
list(args = as.pairlist(args), description = description)
}
# Actually creates the function using the structure created by addFunction and the body
`%{%` <- function(args, body) {
stopifnot(is.pairlist(args$args), class(substitute(body)) == "{")
f <- eval(call("function", args$args, substitute(body), parent.frame()))
class(f) <- c("addFunction", "function")
attr(f, "description") <- args$description
f
}
# Example. Note that the braces {} are mandatory even for one line functions
foo <- addFunction(description = "Add one", x) %{% {
x + 1
}
foo(1)
#[1] 2
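With the description() methods defined at the top of the answer, the attribute round-trips as expected:
description(foo)
#[1] "Add one"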

Passing arguments to iterated function through apply

I have a function like this dummy-one:
FUN <- function(x, parameter){
if (parameter == 1){
z <- x   # placeholder: DO SOMETHING WITH "x"
}
if (parameter == 2){
z <- x^2 # placeholder: DO OTHER STUFF WITH "x"
}
return(z)
}
Now, I would like to use the function on a dataset using apply.
The problem is that apply(data, 1, FUN(parameter = 1)) won't work, as FUN doesn't know what "x" is.
Is there a way to tell apply to call FUN with "x" as the current row/col?
You want apply(data, 1, FUN, parameter = 1). Note the ... in apply's definition:
> args(apply)
function (X, MARGIN, FUN, ...)
NULL
and the corresponding entry in the documentation:
...: optional arguments to ‘FUN’.
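A minimal runnable sketch of this (hedged: FUN and data here are stand-ins, since the originals are only dummies):
FUN <- function(x, parameter){
if (parameter == 1) sum(x) else prod(x)  # placeholder operations
}
data <- matrix(1:6, nrow = 2)
apply(data, 1, FUN, parameter = 1)  # each row is passed as x; parameter = 1 goes through ... to FUN
# [1]  9 12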
You can make an anonymous function within the call to apply so that FUN will know what "x" is:
apply(data, 1, function(x) FUN(x, parameter = 1))
See ?apply for examples at the bottom that use this method.
Here's a practical example of passing arguments along via ... and the *apply family. It's slick, and this seemed like an easy example to explain the use. An important point to remember: when a function takes ..., arguments you want passed through it should be named, so R understands what you're trying to put where. For example, I could have called times <- fperform(longfunction, 10, noise = 5000), but leaving off noise = would not have worked as intended, because the unnamed value would be matched positionally to a different argument along the way instead of reaching longfunction's noise. My personal style is to name all of the arguments whenever ... is involved, just to be safe.
You can see that the argument noise is supplied in the call fperform(FUN = longfunction, ntimes = 10, noise = 5000) but isn't used until two levels further down, in the call diff <- rbind(c(x, runtime(FUN, ...))) and ultimately fun <- FUN(...).
# Made this to take up time
longfunction <- function(noise = 2500, ...) {
lapply(seq(noise), function(x) {
z <- noise * runif(x)
})
}
# Takes a function and clocks the runtime
runtime <- function(FUN, display = TRUE, ...) {
before <- Sys.time()
fun <- FUN(...)
after <- Sys.time()
if (isTRUE(display)) {
print(after-before)
}
else {
after-before
}
}
# Vectorizes runtime() to allow for multiple tests
fperform <- function(FUN, ntimes = 10, ...) {
out <- sapply(seq(ntimes), function(x) {
diff <- rbind(c(x, runtime(FUN, ...)))
})
}
times <- fperform(FUN = longfunction, ntimes = 10, noise = 5000)
avgtime <- mean(times[2,])
print(paste("Average Time difference of ", avgtime, " secs", sep=""))
