Is there a way to use do.call without explicitly providing arguments?

Part of a custom function I am trying to create allows the user to provide a function as a parameter. For example
#Custom function
result <- function(.func){
do.call(.func, list(x,y))
}
#Data
x <- 1:2
y <- 0:1
#Call function
result(.func = function(x,y){ sum(x, y) })
However, the code above assumes that the user is providing a function with arguments x and y. Is there a way to use do.call (or something similar) so that the user can provide a function with different arguments? I think that the correct solution might be along the lines of:
#Custom function
result <- function(.func){
do.call(.func, formals(.func))
}
#Data
m <- 1:3
n <- 0:2
x <- 1:2
y <- 0:1
z <- c(4,6)
#Call function
result(.func = function(m,n){ sum(m, n) })
result(.func = function(x,y,z){ sum(x,y,z) })
But this is not it.

1) Use formals/names/mget to get the values in a list. An optional argument, envir, will allow the user to specify the environment that the variables are located in so it knows where to look. The default if not specified is the parent frame, i.e. the caller.
result1 <- function(.func, envir = parent.frame()) {
do.call(.func, mget(names(formals(.func)), envir))
}
m <- 1:3
n <- 0:2
x <- 1:2
y <- 0:1
z <- c(4,6)
result1(.func = function(m,n) sum(m, n) )
## [1] 9
result1(.func = function(x,y,z) sum(x,y,z) )
## [1] 14
result1(function(Time, demand) Time + demand, list2env(BOD))
## [1] 9.3 12.3 22.0 20.0 20.6 26.8
1a) Another possibility is to evaluate the body. This also works if envir is specified as a data frame whose columns are to be looked up.
result1a <- function(.func, envir = parent.frame()) {
eval(body(.func), envir)
}
result1a(.func = function(m,n) sum(m, n) )
## [1] 9
result1a(.func = function(x,y,z) sum(x,y,z) )
## [1] 14
result1a(function(Time, demand) Time + demand, BOD)
## [1] 9.3 12.3 22.0 20.0 20.6 26.8
2) Another design which is even simpler is to provide a one-sided formula interface. Formulas have environments so we can use that to look up the variables.
result2 <- function(fo, envir = environment(fo)) eval(fo[[2]], envir)
result2(~ sum(m, n))
## [1] 9
result2(~ sum(x,y,z))
## [1] 14
result2(~ Time + demand, BOD)
## [1] 9.3 12.3 22.0 20.0 20.6 26.8
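A small illustration of why this works (my addition, not part of the original answer; make_formula is a hypothetical helper): the formula captures the environment in which it was created, so the variables do not even have to live in the caller's frame.
make_formula <- function() {
  m <- 100  # local variables, invisible to the caller
  n <- 1
  ~ sum(m, n)
}
result2(make_formula())
## [1] 101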
3) Even simpler yet is to just pass the result of the computation as an argument.
result3 <- function(x) x
result3(sum(m, n))
## [1] 9
result3(sum(x,y,z))
## [1] 14
result3(with(BOD, Time + demand))
## [1] 9.3 12.3 22.0 20.0 20.6 26.8

This works.
#Custom function
result <- function(.func){
do.call(.func, lapply(formalArgs(.func), as.name))
}
#Data
m <- 1:3
n <- 0:2
x <- 1:2
y <- 0:1
z <- c(4,6)
#Call function
result(.func = function(m,n){ sum(m, n) })
result(.func = function(x,y,z){ sum(x,y,z) })

This seems like a bit of a pointless function, since the examples in your question imply that what you are trying to do is evaluate the body of the passed function using variables in the calling environment. You can certainly do this easily enough:
result <- function(.func){
eval(body(.func), envir = parent.frame())
}
This gives the expected results from your examples:
x <- 1:2
y <- 0:1
result(.func = function(x,y){ sum(x, y) })
#> [1] 4
and
m <- 1:3
n <- 0:2
x <- 1:2
y <- 0:1
z <- c(4,6)
result(.func = function(m,n){ sum(m, n) })
#> [1] 9
result(.func = function(x,y,z){ sum(x,y,z) })
#> [1] 14
But note that, when the user types:
result(.func = function(x,y){ ...user code... })
They get the same result they would already get if they didn't use your function and simply typed
...user code....
You could argue that it would be helpful with a pre-existing function like mean.default:
x <- 1:10
na.rm <- TRUE
trim <- 0
result(mean.default)
#> [1] 5.5
But this means users have to give their variables exactly the same names as the parameters of the function being passed, and this is just a less convenient way of calling the function.
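To illustrate that point (my addition, using a hypothetical variable name): the body of mean.default looks for objects literally called x, trim and na.rm, so a caller whose data lives under another name has to copy or rename it first.
my_values <- c(2, 4, 6, 8)
x <- my_values   # mean.default's body will only find the data under the name x
result(mean.default)
#> [1] 5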
It might be useful if you could demonstrate a use case where what you are proposing doesn't make the user's code longer or more complex.

You could also use ..., but like the other responses, I don't quite see the value, or perhaps I don't fully understand the use-case.
result <- function(.func, ...){
do.call(.func, list(...))
}
Create function
f1 <- function(a,b) sum(a,b)
Pass f1 and values to result()
result(f1, m,n)
Output:
[1] 9
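Named arguments are forwarded through ... in the same way, for example (my addition):
result(f1, a = m, b = n)
[1] 9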

Here is how I would do it based on your clarifying comments.
Basically, since you say your function will take a data.frame as input, the function you are asking for essentially just reverses the order of the arguments you would pass to do.call(), which takes a function and then a list of arguments. A data.frame is just a special form of list in which all elements (columns) are vectors of equal length (the number of rows).
result <- function(.data, .func) {
# .data is a data.frame, which is a list of argument vectors of equal length
do.call(.func, .data)
}
result(data.frame(a=1, b=1:5), function(a, b) a * b)
result(data.frame(c=1:10, d=1:10), function(c, d) c * d)
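The expected output of those two calls (my computation; the original answer does not show it): the one-row column a is recycled across the five rows of b, and the columns are multiplied element-wise.
## [1] 1 2 3 4 5
## [1]   1   4   9  16  25  36  49  64  81 100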

Related

Evaluate expression in environment passed to function as parameter in R

I am trying to create a function for a theoretical Hessian matrix that I can then evaluate at different locations. First I tried setting expressions as values in a matrix or array, but although I could initially store an expression in a matrix, I couldn't later replace it with the calculated value.
hessian_matrix <- function(gx, respect_to){
  out_mat <- matrix(0, nrow=length(respect_to), ncol=length(respect_to))
  for(i in 1:length(respect_to)){
    for(j in 1:length(respect_to)){
      dthetad2x <- deriv(D(gx, respect_to[i]), respect_to[j], function.arg=TRUE)
      # also tried
      # dthetad2x <- as.expression(D(D(gx, respect_to[i])))
      out_mat[i,j] <- dthetad2x
    }
  }
  return(out_mat)
}
Because that didn't work, I decided to create an environment to house the indices of the Hessian matrix as objects.
hessian_matrix <- function(gx, respect_to){
  out_env <- new.env()
  for(i in 1:length(respect_to)){
    for(j in 1:length(respect_to)){
      dthetad2x <- as.call(D(D(gx, respect_to[i]), respect_to[j]))
      assign(paste0(i,j), dthetad2x, out_env)
    }
  }
  return(out_env)
}
g <- expression(x^3-2*x*y-y^6)
h_g <- hessian_matrix(g, respect_to = c('x', 'y'))
This worked, and when I pass this in as a parameter to evaluate I can see the expressions, but I can't evaluate them. I tried call(), eval(), do.call(), get(), etc., and it didn't work. I also tried assigning the answer within the passed environment, making a new environment to return, and simply using variables.
fisher_observed <- function(h, at_val, params, sum=TRUE){
  out_env <- new.env()
  # add params to passed environment
  for(i in 1:length(at_val)){
    h[[names(at_val)[i]]] <- unname(at_val[i])
  }
  for(i in ls(h)){
    value <- do.call(i, envir=h, at_val)
    assign(i, value, out_env)
  }
  return(h)
}
fisher_observed(h_g, at_val=list(x=1,y=2))
According to the documentation for do.call(), this is how it should be used, but it isn't working when the environment is passed as a parameter in this way.
R can already compute the Hessian matrix for you; you do not have to write the function yourself. You could use deriv or deriv3 as shown below:
g <- expression(x^3 - 2 * x * y - y^6)
eval(deriv3(g, c('x','y')), list(x=1, y=2))
[1] -67
attr(,"gradient")
      x    y
[1,] -1 -194
attr(,"hessian")
, , x

     x  y
[1,] 6 -2

, , y

      x    y
[1,] -2 -480
If you want to use a function, you could do:
hessian <- function(expr, values){
  nms <- names(values)
  f <- eval(deriv3(expr, nms), as.list(values))
  matrix(attr(f, 'hessian'), length(values), dimnames = list(nms, nms))
}
hessian(g, c(x=1,y=2))
   x    y
x  6   -2
y -2 -480
The separate function is not really necessary, though, since you would end up doing the computation twice if you also wanted the gradient along with the Hessian.
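For instance (a small sketch of my own, not part of the original answer), a single deriv3 evaluation yields the value, the gradient and the Hessian together:
f <- eval(deriv3(g, c('x','y')), list(x=1, y=2))
attr(f, "gradient")
      x    y
[1,] -1 -194
matrix(attr(f, "hessian"), 2, 2, dimnames = list(c('x','y'), c('x','y')))
   x    y
x  6   -2
y -2 -480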
I think this (almost) does what you're looking for:
fisher_observed <- function(h, at_val) {
  values <- numeric(length = length(names(h)))
  for (i in seq_len(length(names(h)))) {
    values[i] = purrr::pmap(.l = at_val, function(x, y) eval(h[[names(h)[i]]]))
  }
  names(values) = names(h)
  return(values)
}
This currently returns a named list of evaluated points:
$`21`
[1] -2
$`22`
[1] -480
$`11`
[1] 6
$`12`
[1] -2
you'd still need to re-arrange this into a matrix (which should be fairly easy given that the names are preserved). I think the key thing is that the names must be characters when looking up values in h_g.
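For completeness, one way to do that re-arrangement (my sketch, assuming the 2 x 2 case with the names "11", "12", "21", "22" produced by hessian_matrix above):
vals <- fisher_observed(h_g, at_val = list(x = 1, y = 2))
matrix(unlist(vals)[c("11", "12", "21", "22")], 2, 2, byrow = TRUE)
##      [,1] [,2]
## [1,]    6   -2
## [2,]   -2 -480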
You cannot have a matrix of "calls", but you can have a character matrix and then evaluate it:
hessian_matrix <- function(gx, respect_to){
  out_mat <- matrix("", nrow=length(respect_to), ncol=length(respect_to))
  for(i in 1:length(respect_to)){
    for(j in 1:length(respect_to)){
      dthetad2x <- D(D(gx, respect_to[i]), respect_to[j])
      out_mat[i,j] <- deparse(dthetad2x)
    }
  }
  return(out_mat)
}
g <- expression(x^3-2*x*y-y^6)
h_g <- hessian_matrix(g, respect_to = c('x', 'y'))
h_g
#>      [,1]          [,2]
#> [1,] "3 * (2 * x)" "-2"
#> [2,] "-2"          "-(6 * (5 * y^4))"
apply(h_g, 1:2, \(x) eval(str2lang(x), list(x=1, y=2)))
#>      [,1] [,2]
#> [1,]    6   -2
#> [2,]   -2 -480

How to implicitly pass all arguments down the stack?

Take the following code:
f2 <- function(...) {
print(list(...))
}
f1 <- function(x, y = 1, ...) {
z <- 20
f2(x, y, ...)
}
f1(5, k = 6)
If I change the arguments to f1, and I still want to pass all those arguments to f2, I would need to change the call to f2. Is it possible to write the call to f2 so that it does not name x and y explicitly? Something like the following (non-working code):
f1 <- function(x, y = 1, ...) {
z <- 20
do.call(f2, formals())
}
I can use environment(), but then I need to take care that I call it at the very beginning:
f1 <- function(x, y = 1, ...) {
argv <- c(as.list(environment()), ...)
z <- 20
do.call(f2, argv)
}
Is there maybe a simpler, more direct way?
It's not clear whether you wanted the variable z added to the call, but in either case you can achieve what you are looking for using match.call. You simply swap in the quoted name of f2 as the first element of the matched call; if you also wish to add the missing defaults from the formals of f1, you can find them in formals() and write any missing ones into the matched call. Finally, you evaluate this call.
f1 <- function(x, y = 1, ...) {
mc <- match.call()
form <- names(formals())[!names(formals()) %in% names(mc)]
form <- form[form != "..."]
mc[[1]] <- quote(f2)
mc[form] <- formals()[form]
mc$z <- 20
eval(mc)
}
f2 <- function(...) {
print(list(...))
}
f1(5, k = 6)
#> $x
#> [1] 5
#>
#> $k
#> [1] 6
#>
#> $y
#> [1] 1
#>
#> $z
#> [1] 20
Created on 2020-09-29 by the reprex package (v0.3.0)
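To see what the pieces look like at that point, here is a small sketch (my addition; f1_demo is just a hypothetical name for an instrumented copy of f1):
f1_demo <- function(x, y = 1, ...) {
  mc <- match.call()
  print(mc)                 # the call with matched argument names
  print(names(formals()))   # formals of the generic wrapper, including "..."
  invisible(NULL)
}
f1_demo(5, k = 6)
#> f1_demo(x = 5, k = 6)
#> [1] "x"   "y"   "..."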

R: How to write a function that replaces a function call with another function call?

E.g. I want to transform the code
mean(x)
to
fn(x)
everytime I see mean in the code.
replace_mean <- function(code) {
substitute(code, list(mean = fn)) # doesn't work
substitute(substitute(code), list(mean = fn)) # doesn't work
}
the above two approaches don't work. E.g.
replace_mean(list(mean(y), mean(x)))
What's the best way to do function replacement using NSE in R?
Base R Solutions preferred.
Update: example of the desired output
replace_mean(mean(x)) # fn(x)
replace_mean(list(a = mean(x), mean(ok))) # list(a = fn(x), fn(ok))
The following function, when passed mean(x) and some fn such as sqrt as its two arguments, returns the call object fn(x), i.e. sqrt(x), replacing occurrences of mean with fn.
replace_mean <- function(code, fn) {
do.call("substitute", list(substitute(code), list(mean = substitute(fn))))
}
Examples
1) Basic example
e <- replace_mean(mean(x), sqrt)
e
## sqrt(x)
x <- 4
eval(e)
## [1] 2
2) more complex expression
ee <- replace_mean(mean(x) + mean(x*x), sqrt)
ee
## sqrt(x) + sqrt(x * x)
x <- 4
eval(ee)
## [1] 6
3) apply replace_mean to body of f creating g
f <- function(x) mean(x) + mean(x*x)
g <- f
body(g) <- do.call("replace_mean", list(body(f), quote(sqrt)))
g
## function (x)
## sqrt(x) + sqrt(x * x)
x <- 4
g(x)
## [1] 6
Another way is much uglier: it relies on string manipulation to generate the code you want to run, and then evaluates it.
replace_mean <- function(code) {
  code_subbed = substitute(code)
  # construct the code I want
  code_subbed_subbed = sprintf("substitute(%s, list(mean=quote(fn)))", deparse(code_subbed))
  eval(parse(text = code_subbed_subbed))
}
replace_mean(list(mean(x), a= mean(ok)))
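For reference (my note, not part of the original answer), this returns the same substituted call as the first approach:
## list(fn(x), a = fn(ok))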

How to use R's S3-classes together with parameters?

I fear I am getting something really wrong. The basics are from here, and a basic (minimal) example is understood (I think) and working:
fun.default <- function(x) { # you could add further fun.class1 (works)...
print("default")
return(x[1] + x[2])
}
my_fun <- function(x) {
print("my_fun")
print(x)
res <- UseMethod("fun", x)
print(res)
print("END my_fun...")
return(res)
}
x <- c(1, 2)
my_fun(x)
However, if I want to add parameters, something goes really wrong. From the link above:
Once UseMethod has found the correct method, it’s invoked in a special
way. Rather than creating a new evaluation environment, it uses the
environment of the current function call (the call to the generic), so
any assignments or evaluations that were made before the call to
UseMethod will be accessible to the method.
I tried all variants I could think of:
my_fun_wrong1 <- function(x, y) {
print("my_fun_wrong1")
print(x)
x <- x + y
print(x)
res <- UseMethod("fun", x)
print(res)
print("END my_fun_wrong1...")
return(res)
}
x <- c(1, 2)
# Throws: Error in fun.default(x, y = 2) : unused argument (y = 2)
my_fun_wrong1(x, y = 2)
my_fun_wrong2 <- function(x) {
print("my_fun_wrong2")
print(x)
x <- x + y
print(x)
res <- UseMethod("fun", x)
print(res)
print("END my_fun_wrong2...")
return(res)
}
x <- c(1, 2)
y = 2
# Does not throw an error, but does not give my expected result "7":
my_fun_wrong2(x) # wrong result!?
rm(y)
my_fun_wrong3 <- function(x, ...) {
print("my_fun_wrong3")
print(x)
x <- x + y
print(x)
res <- UseMethod("fun", x)
print(res)
print("END my_fun_wrong3...")
return(res)
}
x <- c(1, 2)
# Throws: Error in my_fun_wrong3(x, y = 2) : object 'y' not found
my_fun_wrong3(x, y = 2)
Edit after G. Grothendieck's answer: using fun.default <- function(x, ...) I get the following.
It runs after the change, but I don't understand the result:
my_fun_wrong1(x, y = 2)
[1] "my_fun_wrong1"
[1] 1 2
[1] 3 4 # Ok
[1] "default"
[1] 3 # I expect 7
As before - I don't understand the result:
my_fun_wrong2(x) # wrong result!?
[1] "my_fun_wrong2"
[1] 1 2
[1] 3 4 # Ok!
[1] "default"
[1] 3 # 3 + 4 = 7?
Still throws an error:
my_fun_wrong3(x, y = 2)
[1] "my_fun_wrong3"
[1] 1 2
Error in my_fun_wrong3(x, y = 2) : object 'y' not found
I think this question is really useful!
fun.default needs ... so that the extra argument is matched.
fun.default <- function(x, ...) {
print("default")
return(x[1] + x[2])
}
x <- c(1, 2)
my_fun_wrong1(x, y = 2)
## [1] "my_fun_wrong1"
## [1] 1 2
## [1] 3 4
## [1] 3
Also, any statements after the call to UseMethod in the generic will not be evaluated, as UseMethod does not return, so it is pointless to put code after it in the generic.
Furthermore, you can't redefine the arguments to UseMethod. The arguments are passed on as they came in.
Suggest going over the help file ?UseMethod although admittedly it can be difficult to read.
Regarding the quote from ?UseMethod that was added to the question, this just means that the methods can access local variables defined in the function calling UseMethod. It does not mean that you can redefine arguments. Below, ff.default refers to the a defined in ff.
a <- 0
ff <- function(x, ...) { a <- 1; UseMethod("ff") }
ff.default <- function(x, ...) a
ff(3)
## [1] 1
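As a follow-up sketch of my own (not part of the answer above): the idiomatic way to get the behaviour my_fun_wrong1 was aiming for is to keep the generic thin and do the modification inside the method, which receives y through the dots.
fun <- function(x, ...) UseMethod("fun")
fun.default <- function(x, y = 0, ...) {
  x <- x + y   # modify inside the method, not before UseMethod
  return(x[1] + x[2])
}
fun(c(1, 2), y = 2)
## [1] 7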

Reference a list of objects in a for loop

Say I have two objects, a and b, and a function f1 in R:
a<- 5
b<- 10
f1 <- function(){
  out <- a + b
  return(out)
}
I want to write a for loop that evaluates the sensitivity of this function to the values of a and b by changing them each and running the function again. I imagine creating a vector of the objects and then running some code like this:
params<- c(a,b)
for(i in params){
store<- i #save the initial value of the object so I can restore it later.
base<-f1() #save function output with original object value
i<- i*1.1 #increase object value by 10%
base.10<- f1() #recalculate and save function output with new object value
calc<- base.10/base #generate a response metric
i<- store #reset the object value to its original value
return(calc)
}
It sounds like you have a function f1 that relies on objects a and b (which are not defined in that function), and you want to test the sensitivity of its output to values of a and b. One way to approach this would be looping through the values you want for the sensitivity analysis and manipulating the parent environment of f1 so it uses these values:
f1 <- function() a + b
sensitivity <- function(params) {
  old.f1.env <- environment(f1)
  grid <- expand.grid(lapply(params, function(x) x * c(1, 1.1)))
  grid$outcome <- apply(grid, 1, function(x) {
    for (n in names(x)) {
      assign(n, x[n])
    }
    environment(f1) <- environment()
    ret <- f1()
    environment(f1) <- old.f1.env
    ret
  })
  grid
}
sensitivity(list(a=5, b=10))
# a b outcome
# 1 5.0 10 15.0
# 2 5.5 10 15.5
# 3 5.0 11 16.0
# 4 5.5 11 16.5
Here, we've computed the function value for a grid of a and b values, both at the original values and at values increased by 10%.
Note that a lot of our work came from specifying the variables in the parent environment of f1. I would encourage you to restructure your code so your function f1 takes the relevant parameters as input. Then you could use:
f1 <- function(a, b) a + b
sensitivity <- function(params) {
grid <- expand.grid(lapply(params, function(x) x * c(1, 1.1)))
grid$outcome <- apply(grid, 1, function(x) do.call(f1, as.list(x)))
grid
}
sensitivity(list(a=5, b=10))
# a b outcome
# 1 5.0 10 15.0
# 2 5.5 10 15.5
# 3 5.0 11 16.0
# 4 5.5 11 16.5
This sounds like a perfect use case for closures.
get_f1 <- function(a, b) {
  f1 <- function(){
    out <- a + b
    return(out)
  }
  return(f1)
}
Then:
my_f1 <- get_f1(a=5, b=10)
my_f1() #uses a=5 and b=10 because they are defined in the envir associated with my_f1
So in your loop you could simply do:
base <- (get_f1(a, b))()
base.10 <- (get_f1(a*1.1, b*1.1))()
Obviously you could define get_f1 with arguments i=c(a, b).
Use a closure (function attached to an environment) rather than tinkering with environments!
tl;dr: closures are awesome
Reading some of your comments, I think this is actually what you want: sensitivity takes a function and a list of arguments and returns the sensitivity of the function to its arguments. (BTW, what you call sensitivity already means something else.)
sensitivity <- function(fun, args) {
  out <- lapply(names(args), function(cur) {
    base10 <- do.call(fun, `[[<-`(args, cur, `[[`(args, cur) * 1.1))
    base10 / do.call(fun, args)
  })
  names(out) <- names(args)
  return(out)
}
Example:
f1 <- function(a,b) a+b
a1 <- list(a=5, b=2)
sensitivity(f1, a1)
This gives
$a
[1] 1.071429
$b
[1] 1.028571
Example 2:
f2 <- function(x, y, z) x^2 +3*y*z
sensitivity(f2, list(x=1, y=2, z=3))
$x
[1] 1.011053
$y
[1] 1.094737
$z
[1] 1.094737
It works "plug-and-play" with any function, BUT it requires you to define f differently (one would say, correctly). I could write something that would work with your function f as it is written but it would be much work and bad taste. If you want code modularity, you just cannot use side effects...
PS: if you would prefer to have a vector returned instead of a list, simply change lapply to sapply in the definition of sensitivity.
This would give for the last example:
> sensitivity(f2, list(x=1, y=2, z=3))
x y z
1.011053 1.094737 1.094737
PPS: any reason why you are not computing the gradient of f rather than doing what you are doing?
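For what it's worth, here is a minimal sketch of that gradient route using base R's deriv (my addition, reusing the expression behind f2):
g2 <- deriv(expression(x^2 + 3*y*z), c("x", "y", "z"), function.arg = TRUE)
attr(g2(x = 1, y = 2, z = 3), "gradient")
     x y z
[1,] 2 9 6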
