I'm trying to use the curve3d function from the emdbook package to create a contour plot of a function defined locally inside another function, as shown in the following minimal example:
library(emdbook)
testcurve3d <- function(a) {
  fn <- function(x, y) {
    x * y * a
  }
  curve3d(fn(x, y))
}
Unexpectedly, this generates the error
> testcurve3d(2)
Error in fn(x, y) : could not find function "fn"
whereas the same idea works fine with the simpler curve function from the graphics package:
testcurve <- function(a) {
  fn <- function(x) {
    x * a
  }
  curve(fn(x))
}
testcurve(2)
The question is how curve3d can be rewritten such that it behaves as expected.
You can temporarily attach the function environment to the search path to get it to work:
testcurve3d <- function(a) {
  fn <- function(x, y) {
    x * y * a
  }
  e <- environment()
  attach(e)
  curve3d(fn(x, y))
  detach(e)
}
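If you go this route, a slightly safer variant (my suggestion, not part of the original answer) is to give the attached environment an explicit name and detach it with on.exit(), so the search path gets cleaned up even if curve3d() throws an error:

testcurve3d <- function(a) {
  fn <- function(x, y) {
    x * y * a
  }
  e <- environment()
  attach(e, name = "testcurve3d_env")              # explicit name on the search path
  on.exit(detach("testcurve3d_env"), add = TRUE)   # always detach, even on error
  curve3d(fn(x, y))
}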
Analysis
The problem comes from this line in curve3d:
eval(expr, envir = env, enclos = parent.frame(2))
At this point, we appear to be 10 frames deep, and fn is defined in parent.frame(8). So you can edit that line in curve3d to use parent.frame(8), but I'm not sure how robust this is. Perhaps parent.frame(sys.nframe() - 2) might be more robust, but as ?sys.parent warns, there can be some strange things going on:
Strictly, sys.parent and parent.frame refer to the context of the
parent interpreted function. So internal functions (which may or may
not set contexts and so may or may not appear on the call stack) may
not be counted, and S3 methods can also do surprising things.
Beware of the effect of lazy evaluation: these two functions look at
the call stack at the time they are evaluated, not at the time they
are called. Passing calls to them as function arguments is unlikely to
be a good idea.
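If you want to see where those frame numbers come from, here is a rough probe (my own sketch, not part of the original answer). It reuses the attach() workaround above so that fn can be found at all, and reports the stack depth each time curve3d() evaluates fn() on its grid; the exact number will depend on your emdbook version:

testcurve3d_probe <- function(a) {
  fn <- function(x, y) {
    # printed once per grid point, so expect repeated output
    cat("fn() evaluated", sys.nframe(), "frames deep\n")
    x * y * a
  }
  e <- environment()
  attach(e)
  on.exit(detach(e), add = TRUE)
  curve3d(fn(x, y))
}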
The eval/parse solution bypasses some worries about variable scope: it passes the values of both the variable and the function directly, rather than passing their names.
library(emdbook)
testcurve3d <- function(a) {
  fn <- eval(parse(text = paste0(
    "function(x, y) {",
    "x*y*", a,
    "}"
  )))
  eval(parse(text = paste0(
    "curve3d(", deparse(fn)[3], ")"
  )))
}
testcurve3d(2)
I have found another solution that I do not like very much, but maybe it will help you.
You can create the function fn as a call object and evaluate it inside curve3d:
fn <- quote((function(x, y) {x*y*a})(x, y))
eval(call("curve3d", fn))
Inside the wrapper function the same problem persists: a must be found in the global environment, but this can be fixed with substitute.
Example:
testcurve3d <- function(a) {
  fn <- substitute((function(x, y) {
    c <- cos(a * pi * x)
    s <- sin(a * pi * y / 3)
    return(c + s)
  })(x, y), list(a = a))
  eval(call("curve3d", fn, zlab = "fn"))
}
par(mfrow = c(1, 2))
testcurve3d(2)
testcurve3d(5)
Related
I have some delicate issues with environments that are currently manifesting themselves in my unit tests. My basic structure is this:
I have a main function main that has many arguments
wrapper is a wrapper function (one of many) that pertains only to selected arguments of main
helper is an intermediate helper function that is used by all wrapper functions
I use eval and match.call() to move between wrappers and the main function smoothly. My issue now is that my tests work when I run them line by line, but not using test_that().
Here is a MWE that shows the problem. If you step through the lines in the test manually, the test passes. However, when the whole test_that() chunk is evaluated, the test fails because one of the arguments cannot be found.
library(testthat)

wrapper <- function(a, b) {
  fun_call <- as.list(match.call())
  ret <- helper(fun_call)
  return(ret)
}

helper <- function(fun_call) {
  fun_call[[1]] <- quote(main)
  fun_call <- as.call(fun_call)
  fun_eval <- eval(as.call(fun_call))
  return(fun_eval)
}

main <- function(a, b, c = 1) {
  ret <- list(a = a, b = b, c = c)
  return(ret)
}

test_that("Test", {
  a <- 1
  b <- 2
  x <- wrapper(a = a, b = b)
  y <- list(a = 1, b = 2, c = 1)
  expect_equal(x, y)
})
With quite some confidence, I suspect I need to modify the default environment used by eval (i.e. parent.frame()), but I am not sure how to do this.
You want to evaluate your call in your parent environment, not the local function environment. Change your helper to
helper <- function(fun_call) {
  fun_call[[1]] <- quote(main)
  fun_call <- as.call(fun_call)
  fun_eval <- eval.parent(fun_call, n = 2)
  return(fun_eval)
}
This assumes that helper is always called from within wrapper, which in turn is called from the environment where the parameters are defined.
It's not clear in this case that you really need all this non-standard evaluation. You might also consider a solution like:
wrapper <- function(a, b) {
  helper(mget(ls()))
}

helper <- function(params) {
  do.call("main", params)
}
Here wrapper just bundles all of its parameter values into a list. You can then pass that list to helper, and do.call will pass it as arguments to your main function. This evaluates the parameters of wrapper when you call it, so you don't have to worry about the execution environment.
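For illustration, calling the do.call-based version directly gives the same result as calling main() yourself:

wrapper(a = 1, b = 2)
# $a
# [1] 1
#
# $b
# [1] 2
#
# $c
# [1] 1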
I am using a package that has 2 functions which ultimately look like the following:
pkgFun1 <- function(group) {
  call <- match.call()
  pkgFun2(call)
}

pkgFun2 <- function(call) {
  eval(call$group)
}
If I just call pkgFun1(group = 2), it works fine. But I want to call it from a function:
myFun <- function(x) {
  pkgFun1(group = x)
}

myFun(x = 2)
## Error in eval(call$group) : object 'x' not found
Is there any way to avoid this error, if I can't modify the package functions, but only myFun?
There are similar questions, such as Issue with match.call or Non-standard evaluation in a user-defined function with lapply or with in R, but my particular issue is that I can't modify the part of code containing the eval call.
It's pkgFun2 that is wrong, so I think you're out of luck without some weird contortions. It needs to pass the appropriate environment to eval(); if you can't modify it, then you can't fix it.
This hack might appear to work, but in real life it doesn't:
pkgFun1 <- function(group) {
  call <- match.call()
  f <- pkgFun2
  environment(f) <- parent.frame()
  f(call)
}
With this, you're calling a copy of pkgFun2 modified so its environment is appropriate to evaluate the call. It works in the test case, but will cause you untold grief in the future, because everything that is not local in pkgFun2 will be searched for in the wrong place. For example,
myFun <- function(x) {
  eval <- function(...) print("Gotcha!")
  pkgFun1(group = x)
}

myFun(x = 2)
# [1] "Gotcha!"
Best is to fix pkgFun2. Here's one fix:
pkgFun1 <- function(group) {
  call <- match.call()
  pkgFun2(call, parent.frame())
}

pkgFun2 <- function(call, envir) {
  eval(call$group, envir = envir)
}
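With that fix in place, the original myFun works unchanged (shown for illustration):

myFun <- function(x) {
  pkgFun1(group = x)
}
myFun(x = 2)
# [1] 2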
Edited to add: Actually, there is another hack that is not so weird that should work with your original pkgFun1 and pkgFun2. If you force the evaluation of x to happen in myFun so that pkgFun1 never sees the expression x, it should work. For example,
myFun <- function(x) {
  do.call("pkgFun1", list(group = x))
}
If you do this, then after myFun(2), the call variable inside pkgFun1 will be pkgFun1(group = 2) and you won't get the error about x.
I'm trying to read a function call as a string and evaluate this function within another function. I'm using eval(parse(text = )) to evaluate the string. The function I'm calling in the string doesn't seem to have access to the environment in which it is nested. In the code below, my "isgreater" function finds the object y, defined in the global environment, but can't find the object x, defined within the function. Does anybody know why, and how to get around this? I have already tried adding the argument envir = .GlobalEnv to both of my evals, to no avail.
str <- "isgreater(y)"
isgreater <- function(y) {
return(eval(y > x))
}
y <- 4
test <- function() {
x <- 3
return(eval(parse(text = str)))
}
test()
Error:
Error in eval(y > x) : object 'x' not found
Thanks to @MrFlick and @r2evans for their useful and thought-provoking comments. As for a solution, I've found that this code works. x must be passed into the function and cannot be a default value. In the code below, my function generates a list of results with the x variable being changed within the function. If anyone knows why this is, I would love to know.
str <- "isgreater(y, x)"
isgreater <- function(y, x) {
return(eval(y > x))
}
y <- 50
test <- function() {
list <- list()
for(i in 1:100) {
x <- i
bool <- eval(parse(text = str))
list <- append(list, bool)
}
return(list)
}
test()
After considering the points made by @r2evans, I have elected to change my approach to the problem so that I do not arrive at this string-parsing step. Thanks a lot, everyone.
I offer the following code, not as a solution, but rather as an insight into how R "works". The code does things that are quite dangerous and should only be examined for its demonstration of how to assert a value for x. Unfortunately, that assertion does destroy the x-value of 3 inside the isgreater-function:
str <- "isgreater(y)"
isgreater <- function(y) {
return(eval( y > x ))
}
y <- 4
test <- function() {
environment(isgreater)$x <- 5
return(eval(parse(text = str) ))
}
test()
#[1] FALSE
The environment<- function is used in the R6 programming paradigm. Take a look at ?R6 if you are interested in working with a more object-oriented set of structures and syntax. (I will note that when I first ran your code, there was an object named x in my workspace and some of my efforts were able to succeed to the extent of not throwing an error, but they were finding that length-10000 vector and filling up my console with logical results until I escaped the console. Yet another argument for passing both x and y to isgreater.)
I noticed that R functions check for missing arguments only at the time when the specific argument is evaluated in the function body.
Example:
f <- function(x, y) {
  Sys.sleep(3)
  return(x + y)
}

f(1)
The function takes 3 seconds to fail and report the missing argument, rather than failing at the start of the call. What is the advantage of such an implementation?
EDIT:
I'm aware of force() and missing(). I would like to know what the advantage is of checking for a missing argument only immediately before it is evaluated, rather than at the start of the function call. Is there a necessary reason for such an implementation?
As a contrived example:
f2 <- function() {
  Sys.sleep(3)
}

f <- function(x, y) {
  if (missing(y)) stop("y missing")
  print(x)
}

f(1, f2())
The "expensive" call to f2() is still avoided by lazy evaluation, but its missingness can be checked without evaluation.
EDIT2:
I guess you can argue that it gives more flexibility for generating default values. In another contrived example:
f <- function(x, y = 1:3) {
  if (missing(x)) {
    x <- y
  }
  x
}

f()
Such code would fail if argument checking were done immediately upon the function call. However, this code is better written as function(x = y, y = 1:3). I suspect such a feature is used by a non-trivial number of codebases, though, and changing the behaviour now would be more trouble than it's worth.
R uses lazy evaluation. That is, arguments to functions are not evaluated until
they are required. This can save both time and memory if it turns out the
argument is not required.
In extremely rare circumstances something is not evaluated that should be.
You can use force to get around the laziness.
Burns, Patrick. 2011. "The R Inferno". http://www.burns-stat.com/pages/Tutor/R_inferno.pdf
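As a small illustration of that saving (my own example, not from the book), an argument that is never used is never evaluated, so the expensive expression below costs nothing:

f_lazy <- function(x, y) {
  x^2    # y is never touched, so its promise is never forced
}
system.time(f_lazy(2, Sys.sleep(3)))
# returns essentially instantly; the 3-second sleep never runs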
So the following code will fail faster:
f <- function(x, y) {
  force(y)
  Sys.sleep(3)
  return(x + y)
}

f(1)
or
f <- function(x, y) {
  if (missing(y)) stop("missing y")
  Sys.sleep(3)
  return(x + y)
}

f(1)
So I'm changing the class of some functions that I'm building in R in order to add a description attribute and because I want to use S3 generics to handle everything for me. Basically, I have a structure like
foo <- function(x) x + 1

addFunction <- function(f, description) {
  class(f) <- c("addFunction", "function")
  attr(f, "description") <- description
  f
}

foo <- addFunction(foo, "Add one")
and then I do stuff like
description <- function(x) UseMethod("description")
description.default <- function(x) deparse(substitute(x))
description.addFunction <- function(x) attr(x, "description")
This works fine, but it's not that elegant. I'm wondering if it is possible to define a new class of functions such that instances of this class can be defined in a syntax similar to the function syntax. In other words, is it possible to define addFunction such that foo is generated in the following way:
foo <- addFunction(description = "Add one", x) {
  x + 1
}
(or something similar, I have no strong feelings about where the attribute should be added to the function)?
Thanks for reading!
Update: I have experimented a bit more with the idea, but haven't really reached any concrete results yet - so this is just an overview of my current (updated) thoughts on the subject:
I tried the idea of just copying the function() function, giving it a different name and then manipulating it afterwards. However, this does not work, and I would love any input on what is happening here:
> function2 <- `function`
> identical(`function`, function2)
[1] TRUE
> function(x) x
function(x) x
> function2(x) x
Error: unexpected symbol in "function2(x) x"
> function2(x)
Error: incorrect number of arguments to "function"
As function() is a primitive function, I tried looking at the C code defining it for more clues. I was particularly intrigued by the error message from the function2(x) call. The C code underlying function() is:
/* Declared with a variable number of args in names.c */
SEXP attribute_hidden do_function(SEXP call, SEXP op, SEXP args, SEXP rho)
{
    SEXP rval, srcref;
    if (TYPEOF(op) == PROMSXP) {
        op = forcePromise(op);
        SET_NAMED(op, 2);
    }
    if (length(args) < 2) WrongArgCount("function");
    CheckFormals(CAR(args));
    rval = mkCLOSXP(CAR(args), CADR(args), rho);
    srcref = CADDR(args);
    if (!isNull(srcref)) setAttrib(rval, R_SrcrefSymbol, srcref);
    return rval;
}
and from this, I conclude that for some reason, at least two of the four arguments call, op, args and rho are now required. From the signature of do_function() I am guessing that the four arguments passed to do_function should be a call, a promise, a list of arguments and then maybe an environment. I tried a lot of different combinations for function2 (including setting up to two of these arguments to NULL), but I keep getting the same (new) error message:
> function2(call("sum", 2, 1), NULL, list(x=NULL), baseenv())
Error: invalid formal argument list for "function"
> function2(call("sum", 2, 1), NULL, list(x=NULL), NULL)
Error: invalid formal argument list for "function"
This error message is returned from the C-function CheckFormals(), which I also looked up:
/* used in coerce.c */
void attribute_hidden CheckFormals(SEXP ls)
{
    if (isList(ls)) {
        for (; ls != R_NilValue; ls = CDR(ls))
            if (TYPEOF(TAG(ls)) != SYMSXP)
                goto err;
        return;
    }
 err:
    error(_("invalid formal argument list for \"function\""));
}
I'm not fluent in C at all, so from here on I'm not quite sure what to do next.
So these are my updated questions:
1. Why do function and function2 not behave in the same way? Why do I need to call function2 using a different syntax when they are deemed identical in R?
2. What are the proper arguments of function2 such that function2([arguments]) will actually define a function?
Some keywords in R such as if and function have special syntax in the way that the underlying functions get called. It's quite easy to use if as a function if desired, e.g.
`if`(1 == 1, "True", "False")
is equivalent to
if (1 == 1) {
  "True"
} else {
  "False"
}
function is trickier. There's some help on this at a previous question.
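For reference, one way to invoke it programmatically (a sketch along the lines of what that question discusses, and essentially what the %{% operator below does internally) is to build the call to `function` yourself and evaluate it:

# Build function(x) x + 1 without using the function keyword directly
f <- eval(call("function", as.pairlist(alist(x = )), quote(x + 1)))
f(2)
# [1] 3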
For your current problem here's one solution:
# Your S3 methods
description <- function(x) UseMethod("description")
description.default <- function(x) deparse(substitute(x))
description.addFunction <- function(x) attr(x, "description")

# Creates the pairlist for arguments, handling arguments with no defaults
# properly. Also brings in the description
addFunction <- function(description, ...) {
  args <- eval(substitute(alist(...)))
  tmp <- names(args)
  if (is.null(tmp)) tmp <- rep("", length(args))
  names(args)[tmp == ""] <- args[tmp == ""]
  args[tmp == ""] <- list(alist(x = )$x)
  list(args = as.pairlist(args), description = description)
}

# Actually creates the function using the structure created by addFunction and the body
`%{%` <- function(args, body) {
  stopifnot(is.pairlist(args$args), class(substitute(body)) == "{")
  f <- eval(call("function", args$args, substitute(body), parent.frame()))
  class(f) <- c("addFunction", "function")
  attr(f, "description") <- args$description
  f
}
# Example. Note that the braces {} are mandatory even for one line functions
foo <- addFunction(description = "Add one", x) %{% {
  x + 1
}

foo(1)
#[1] 2
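And the description() generic defined at the top of this answer picks up the new attribute:

description(foo)
# [1] "Add one"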