I am trying to create a call to a function f whose first argument is a call to another function (for which I've chosen dbinom as an example). The call to dbinom (passed on to f) does not include values for all the arguments as these should be finalised within f, and the completed call is returned by f. Here is my failed minimal attempt:
library(rlang)

f <- function(a_call) {
  call_modify(a_call, x=1)
}
a_call <- call2(dbinom, size=1, prob=0.5)
y <- call2(f, a_call)
The output for y is:
(function(a_call) {
call_modify(a_call, x=1)
})((function (x, size, prob, log = FALSE)
.Call(C_dbinom, x, size, prob, log))(size = 1, prob = 0.5))
This call will:
call a_call without any arguments, and then
pass the result on to f.
If I evaluate y, it errors because dbinom's first argument is missing.
A similar-but-related construct:
> call2(call2(dbinom, x=1, size=1, prob=0.5))
((function (x, size, prob, log = FALSE)
.Call(C_dbinom, x, size, prob, log))(x = 1, size = 1, prob = 0.5))()
I get the sense there is something 'not even wrong' with what I'm trying here, and nesting a call modification is best done another way.
It seems that what you are trying to do is handled more naturally by purrr::partial(), which fills in one or more arguments of a function:
f <- function(a_fun) { purrr::partial(a_fun, x=1) }
a_fun <- purrr::partial(dbinom, size=1, prob=0.5)
y <- f(a_fun)
# y(...) is now effectively dbinom(x=1, size=1, prob=0.5, ...)
y() # 0.5
y(log=TRUE) # -0.6931472
The great thing about partial() is that it can be naturally chained with the %>% pipe:
z <- partial(dbinom, size=1) %>% partial(prob=0.5) %>% partial(x=1)
z(log=TRUE) # -0.6931472
If I understand correctly what you're trying to do,
then maybe this works better:
f <- function(a_call) {
  call_modify(call_standardise(call2(ensym(a_call)),
                               caller_env()),
              x=1)
}
You can call it with either a bare function name or a character string:
f(print)
# print(x = 1)
f("print")
# print(x = 1)
eval(f(print))
# 1
Or with more indirection:
a_call <- expr(print)
eval(call2(f, a_call))
# print(x = 1)
eval(expr(f(!!a_call)))
# print(x = 1)
Since we do a bit of non-standard evaluation here, things get a bit tricky. call_standardise needs to be able to find the function you specify, and it's very probable that it will be found in the environment that calls f, and not necessarily in the environment that calls call_standardise, which would be f's execution environment in this case. That's why caller_env() is explicitly specified when calling call_standardise, even though that's the default for the latter's env argument: default arguments are evaluated in the function's execution environment, whereas explicit arguments are evaluated in the caller's environment.
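That general rule is easy to see without any rlang machinery. Here is a minimal sketch (where_am_i is a hypothetical helper, not part of the original code):

where_am_i <- function(env = environment()) env

# Default used: env is evaluated in where_am_i's execution
# environment, which is created fresh on each call
identical(where_am_i(), globalenv())
# [1] FALSE

# Explicitly supplied: environment() is evaluated at the call site,
# so at top level it is the global environment
identical(where_am_i(environment()), globalenv())
# [1] TRUE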
Here's a contrived-looking example for this problem:
f2 <- function(a_call) {
  call_modify(call_standardise(call2(ensym(a_call))),
              x=1)
}
e <- new.env()
e$foo <- function(x) { x + 1 }
with(e, f(foo))
# foo(x = 1)
with(e, f2(foo))
# Error in eval_bare(node_car(expr), env) : object 'foo' not found
However, if you were to develop a package that provides f, the example is no longer contrived: f would live in your package's environment, and other packages could call it for functions that are only available in their respective namespaces. For more specifics and depictions, check this reference, and maybe try drawing the call tree for my example.
call2 constructs a call by passing evaluated ... arguments on to the callable object (the first argument). For example, the command below prints "y" to the console, because the second argument passed to call2 is evaluated,
> A <- call2(print, x=print('y'))
[1] "y"
and constructs a call to print which takes x="y" as its argument (not x=print("y")):
> A
(function (x, ...)
UseMethod("print"))(x = "y")
In order to get around a_call being evaluated and then passed (to f) in the constructed call, it can be quoted, e.g.
f <- function(a_call) {
call_modify(a_call, x=1)
}
a_call <- call2(dbinom, size=1, prob=0.5)
y <- call2(f, quote(a_call))
Now:
> y
(function(a_call) {
call_modify(a_call, x=1)
})(a_call)
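Evaluating y now resolves the symbol a_call in the calling environment before f runs, so (continuing the example above) the completed call is returned and can itself be evaluated:

completed <- eval(y)  # runs f(a_call): the call is modified, not executed
eval(completed)       # executes dbinom(x=1, size=1, prob=0.5)
# [1] 0.5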
Related
I wrote a wrapper around ftable because I need to compute flat tables with frequency and percentage for many variables. As the ftable method for class "formula" uses non-standard evaluation, the wrapper relies on do.call and match.call to allow the use of the subset argument of ftable (more details in my previous question).
mytable <- function(...) {
do.call(what = ftable,
args = as.list(x = match.call()[-1]))
# etc
}
However, I cannot use this wrapper with lapply() or with():
# example 1: error with "lapply"
lapply(X = warpbreaks[c("breaks",
"wool",
"tension")],
FUN = mytable,
row.vars = 1)
Error in (function (x, ...) : object 'X' not found
# example 2: error with "with"
with(data = warpbreaks[warpbreaks$tension == "L", ],
expr = mytable(wool))
Error in (function (x, ...) : object 'wool' not found
These errors seem to be due to match.call not being evaluated in the right environment.
As this question is closely linked to my previous one, here is a summary of my problems:
The wrapper with do.call and match.call cannot be used with lapply or with.
The wrapper without do.call and match.call cannot use the subset argument of ftable.
And a summary of my questions:
How can I write a wrapper which allows both to use the subset argument of ftable and to be used with lapply and with? I have ideas to avoid the use of lapply and with, but I am looking to understand and correct these errors to improve my knowledge of R.
Is the error with lapply related to the following note from ?lapply?
For historical reasons, the calls created by lapply are unevaluated,
and code has been written (e.g., bquote) that relies on this. This
means that the recorded call is always of the form FUN(X[[i]], ...),
with i replaced by the current (integer or double) index. This is not
normally a problem, but it can be if FUN uses sys.call or match.call
or if it is a primitive function that makes use of the call. This
means that it is often safer to call primitive functions with a
wrapper, so that e.g. lapply(ll, function(x) is.numeric(x)) is
required to ensure that method dispatch for is.numeric occurs
correctly.
The problem with using match.call with lapply is that match.call returns the literal call that was passed into it, without any interpretation. To see what's going on, let's make a simpler function which shows exactly how your function is interpreting the arguments passed into it:
match_call_fun <- function(...) {
call = as.list(match.call()[-1])
print(call)
}
When we call it directly, match.call correctly gets the arguments and puts them in a list that we can use with do.call:
match_call_fun(iris['Species'], 9)
[[1]]
iris["Species"]
[[2]]
[1] 9
But watch what happens when we use lapply (I've only included the output of the internal print statement):
lapply('Species', function(x) match_call_fun(iris[x], 9))
[[1]]
iris[x]
[[2]]
[1] 9
Since match.call gets the literal arguments passed to it, it receives iris[x], not the properly interpreted iris['Species'] that we want. When we pass those arguments into ftable with do.call, it looks for an object x in the current environment, and then returns an error when it can't find it. We need iris[x] to be evaluated in the environment where x is defined before it reaches ftable.
As you've seen, adding envir = parent.frame() fixes the problem. That argument tells do.call to evaluate iris[x] in the parent frame, which is the anonymous function in lapply, where x has its proper meaning. To see this in action, let's make another simple function that uses do.call to print ls() from 3 different environment levels:
z <- function(...) {
print(do.call(ls, list()))
print(do.call(ls, list(), envir = parent.frame()))
print(do.call(ls, list(), envir = parent.frame(2)))
}
When we call z() from the global environment, we see the empty environment inside the function, then the Global Environment:
z()
character(0) # Interior function environment
[1] "match_call_fun" "y" "z" # GlobalEnv
[1] "match_call_fun" "y" "z" # GlobalEnv
But when we call from within lapply, we see that one level of parent.frame up is the anonymous function in lapply:
lapply(1, z)
character(0) # Interior function environment
[1] "FUN" "i" "X" # lapply
[1] "match_call_fun" "y" "z" # GlobalEnv
So, by adding envir = parent.frame(), do.call knows to evaluate iris[x] in the lapply environment where it knows that x is actually 'Species', and it evaluates correctly.
mytable_envir <- function(...) {
tab <- do.call(what = ftable,
args = as.list(match.call()[-1]),
envir = parent.frame())
prop <- prop.table(x = tab,
margin = 2) * 100
bind <- cbind(as.matrix(x = tab),
as.matrix(x = prop))
margin <- addmargins(A = bind,
margin = 1)
round(x = margin,
digits = 1)
}
# This works!
lapply(X = c("breaks","wool","tension"),
FUN = function(x) mytable_envir(warpbreaks[x],row.vars = 1))
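The same fix appears to cover the with() example from the question as well, since parent.frame() then points at the data-masked environment that with() constructs (a quick check of my own, not part of the original answer):

# example 2 from the question, now routed through the envir-aware wrapper
with(data = warpbreaks[warpbreaks$tension == "L", ],
     expr = mytable_envir(wool))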
As for why adding envir = parent.frame() makes a difference even though it appears to be the default: I'm not 100% sure, but my guess is that when the default argument is used, parent.frame() is evaluated inside the do.call function, returning the environment in which do.call is run. What we're doing, however, is evaluating parent.frame() outside do.call, which means it returns one level higher than the default version.
Here's a test function that takes parent.frame() as a default value:
fun <- function(y=parent.frame()) {
print(y)
print(parent.frame())
print(parent.frame(2))
print(parent.frame(3))
}
Now look at what happens when we call it from within lapply both with and without passing in parent.frame() as an argument:
lapply(1, function(y) fun())
<environment: 0x12c5bc1b0> # y argument
<environment: 0x12c5bc1b0> # parent.frame called inside
<environment: 0x12c5bc760> # 1 level up = lapply
<environment: R_GlobalEnv> # 2 levels up = globalEnv
lapply(1, function(y) fun(y = parent.frame()))
<environment: 0x104931358> # y argument
<environment: 0x104930da8> # parent.frame called inside
<environment: 0x104931358> # 1 level up = lapply
<environment: R_GlobalEnv> # 2 levels up = globalEnv
In the first example, the value of y is the same as what you get when you call parent.frame() inside the function. In the second example, the value of y is the same as the environment one level up (inside lapply). So, while they look the same, they're actually doing different things: in the first example, parent.frame is being evaluated inside the function when it sees that there is no y= argument, in the second, parent.frame is evaluated in the lapply anonymous function first, before calling fun, and then is passed into it.
If you only want to pass all the arguments through to ftable, you do not need do.call():
mytable <- function(...) {
tab <- ftable(...)
prop <- prop.table(x = tab,
margin = 2) * 100
bind <- cbind(as.matrix(x = tab),
as.matrix(x = prop))
margin <- addmargins(A = bind,
margin = 1)
return(round(x = margin,
digits = 1))
}
The following lapply() creates a table for each variable separately; I don't know if that is what you want.
lapply(X = c("breaks",
"wool",
"tension"),
FUN = function(x) mytable(warpbreaks[x],
row.vars = 1))
If you want all 3 variables in one table:
warpbreaks$newVar <- LETTERS[3:4]
lapply(X = cbind("c(\"breaks\", \"wool\", \"tension\")",
"c(\"newVar\", \"tension\",\"wool\")"),
FUN = function(X)
eval(parse(text=paste("mytable(warpbreaks[,",X,"],
row.vars = 1)")))
)
Thanks to the answers to this question, the wrapper became:
# function 1
mytable <- function(...) {
do.call(what = ftable,
args = as.list(x = match.call()[-1]),
envir = parent.frame())
# etc
}
Or:
# function 2
mytable <- function(...) {
mc <- match.call()
mc[[1]] <- quote(expr = ftable)
eval.parent(expr = mc)
# etc
}
I can now use the subset argument of ftable, and use the wrapper in lapply:
lapply(X = warpbreaks[c("wool",
"tension")],
FUN = function(x) mytable(formula = x ~ breaks,
data = warpbreaks,
subset = breaks < 15))
However I do not understand why I have to supply envir = parent.frame() to do.call as it is a default argument.
More importantly, these methods do not resolve another issue: I cannot use the subset argument of ftable with mapply.
I would like to have a function accept arguments in the usual R way, most of which will have defaults. But I would also like it to accept a list of named arguments corresponding to some or all of the formals. Finally, I would like arguments supplied to the function directly, and not through the list, to override the list arguments where they conflict.
I could do this with a bunch of nested if-statements. But I have a feeling there is some elegant, concise, R-ish programming-on-the-language solution -- probably multiple such solutions -- and I would like to learn to use them. To show the kind of solution I am looking for:
> arg_lst <- list(x=0, y=1)
> fn <- function(a_list = NULL, x=2, y=3, z=5, ...){
<missing code>
print(c(x, y, z))
}
> fn(a_list = arg_lst, y=7)
Desired output:
x y z
0 7 5
I like a lot about @jdobres's approach, but I don't like the use of assign and the potential scoping breaks.
I also don't like the premise, that a function should be written in a special way for this to work. Wouldn't it be better to write a wrapper, much like do.call, to work this way with any function? Here is that approach:
Edit: solution based on purrr::invoke
Thinking a bit more about this, purrr::invoke() almost gets there, but it will result in an error if a list argument is also passed through .... With slight modifications to its code, we get a working version more concisely. This version seems more robust.
library(purrr)
h_invoke <- function(.f, .x = NULL, ..., .env = NULL) {
  .env <- .env %||% parent.frame()
  args <- c(list(...), as.list(.x))      # switch order so ... comes first
  args <- args[!duplicated(names(args))] # remove duplicates (... wins)
  do.call(.f, args, envir = .env)
}
h_invoke(fn, arg_list, y = 7)
# [1] 0 7 5
Original version borrowing heavily from jdobres's code:
hierarchical_do_call = function(f, a_list = NULL, ...){
formal_args = formals() # get the function's defined inputs and defaults
formal_args[names(formal_args) %in% c('f', 'a_list', '...')] = NULL # remove these two from formals
supplied_args <- as.list(match.call())[-1] # get the supplied arguments
supplied_args[c('f', 'a_list')] = NULL # ...but remove the argument list and the function
a_list[names(supplied_args)] = supplied_args
do.call(what = f, args = a_list)
}
fn = function(x=2, y=3, z=5) {
print(c(x, y, z))
}
arg_list <- list(x=0, y=1)
hierarchical_do_call(f = fn, a_list = arg_list, y=7)
# x y z
# 0 7 5
I'm not sure how "elegant" this is, but here's my best attempt to satisfy the OP's requirements. The if/else logic is actually pretty straightforward (no nesting needed, per se). The real work is in collecting and sanitizing the three different input types (formal defaults, the list object, and any supplied arguments).
fn <- function(a_list = NULL, x = 2, y = 3, z = 5, ...) {
formal_args <- formals() # get the function's defined inputs and defaults
formal_args[names(formal_args) %in% c('a_list', '...')] <- NULL # remove these two from formals
supplied_args <- as.list(match.call())[-1] # get the supplied arguments
supplied_args['a_list'] <- NULL # ...but remove the argument list
# for each uniquely named item among the 3 inputs (argument list, defaults, and supplied args):
for (i in unique(c(names(a_list), names(formal_args), names(supplied_args)))) {
if (!is.null(supplied_args[[i]])) {
assign(i, supplied_args[[i]])
} else if (!is.null(a_list[[i]])) {
assign(i, a_list[[i]])
}
}
print(c(x, y, z))
}
arg_lst <- list(x = 0, y = 1)
fn(a_list = arg_lst, y=7)
[1] 0 7 5
With a little more digging into R's meta-programming functions, it's actually possible to pack this hierarchical assignment into its own function, which is designed to operate on the function environment that called it. This makes it easier to reuse this functionality, but it definitely breaks scope and should be considered dangerous.
The "hierarchical assignment" function, mostly the same as before:
hierarchical_assign <- function(a_list) {
formal_args <- formals(sys.function(-1)) # get the function's defined inputs and defaults
formal_args[names(formal_args) %in% c('a_list', '...')] <- NULL # remove these two from formals
supplied_args <- as.list(match.call(sys.function(-1), sys.call(-1)))[-1] # get the supplied arguments
supplied_args['a_list'] <- NULL # ...but remove the argument list
# for each uniquely named item among the 3 inputs (argument list, defaults, and supplied args):
for (i in unique(c(names(a_list), names(formal_args), names(supplied_args)))) {
if (!is.null(supplied_args[[i]])) {
assign(i, supplied_args[[i]], envir = parent.frame())
} else if (!is.null(a_list[[i]])) {
assign(i, a_list[[i]], envir = parent.frame())
}
}
}
And the usage. Note that the calling function must have an argument named a_list, and it must be passed to hierarchical_assign.
fn <- function(a_list = NULL, x = 2, y = 3, z = 5, ...) {
hierarchical_assign(a_list)
print(c(x, y, z))
}
fn(a_list = arg_lst, y = 7)
[1] 0 7 5
I think do.call() does exactly what you want. It accepts a function and a list as arguments, the list being arguments for the function. I think you will need a wrapper function to create this behavior of "overwriting defaults".
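A minimal sketch of that idea, using utils::modifyList() to let directly supplied arguments override the list (the helper name call_with_defaults is mine, not from the answer):

# Call f with arguments taken from a_list, overridden by
# any arguments supplied directly via ...
call_with_defaults <- function(f, a_list = NULL, ...) {
  args <- utils::modifyList(as.list(a_list), list(...))
  do.call(f, args)
}

fn <- function(x = 2, y = 3, z = 5) print(c(x, y, z))
call_with_defaults(fn, list(x = 0, y = 1), y = 7)
# [1] 0 7 5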
I have a generic function foo that I want to call three different ways depending on the arguments given to it.
foo <- function(...) UseMethod("foo")
#default
foo.default <- function(x, y, ...) {
#does some magic
print("this is the default method")
}
#formula
foo.formula <- function(formula, data = list(), ...) {
print("this is the formula method")
}
#data.frame
foo.data.frame <- function(data, x, y, ...) {
print("this is the data.frame method")
}
Below I show how I expect the method dispatch to work; the actual outputs are presented under each call...
mydata <- data.frame(x=c(1,2,3,4),y=c(5,6,7,8))
#ways to call default function
foo(x = mydata$x, y = mydata$y)
#[1] "this is the default method"
#ways to call formula
foo(formula = mydata$x~mydata$y)
#[1] "this is the formula method"
foo(formula = x~y, data = mydata)
#[1] "this is the formula method"
foo(data = mydata, formula = x~y) #ERROR
#[1] "this is the data.frame method"
#ways to call data.frame method
foo(data = mydata, x = x, y = y)
#[1] "this is the data.frame method"
foo(x = x, y = y, data = mydata) #ERROR
#Error in foo(x = x, y = y, data = mydata) : object 'x' not found
From what I can tell, the method used depends on the class of the first argument. Essentially, I would like the method dispatch to depend on the arguments passed to the generic function foo, and not just on the first argument.
I would like the dispatch to have the following priority:
If the formula argument is present, the formula method is used (the data argument should be optional here)
Otherwise, if the data argument is present, the data.frame method is used (which requires the x and y arguments)
Otherwise, foo expects the x and y arguments or it will fail.
Note
I would like to avoid defining the generic function foo as follows
foo <- function(formula, data,...) UseMethod("foo")
while this would fix all my issues (I believe all except the last case), it would cause a devtools::check() warning, because some of the S3 methods would not have the same arguments as the generic and would no longer be consistent (specifically foo.default and foo.data.frame). And I would not like to include the missing arguments, because those methods have no use for them.
As Thomas has pointed out, this is not the standard behavior of S3 classes. If you really want to stick with S3, however, you could write your functions so as to "mimic" UseMethod, even though it won't be pretty and is probably not what you want to do. Nevertheless, here is an idea based on capturing all arguments first and then checking for the presence of your "preferred" argument type:
Get some objects first:
a <- 1; class(a) <- "Americano"
b <- 2; class(b) <- "Espresso"
Let the function in question capture all arguments with dots, and then check for the presence of an argument type in order of your preference:
drink <- function(...){
  dots <- list(...)
  if (any(sapply(dots, inherits, "Americano"))) {
    drink.Americano(...)
  } else { # you can add more checks here to get a hierarchy
    # try to find an appropriate method first if one exists,
    # using the first element of the arguments as usual
    tryCatch(get(paste0("drink.", class(dots[[1]])))(...),
             # if no appropriate method is found, try the default method:
             error = function(e) drink.default(...))
  }
}
drink.Americano <- function(...) print("Hmm, gimme more!")
drink.Espresso <- function(...) print("Tripple, please!")
drink.default <- function(...) print("Any caffeine in there?")
drink(a) # "Americano", dispatch hard-coded.
# [1] "Hmm, gimme more!"
drink(b) # "Espresso", not hard-coded, but correct dispatch anyway
# [1] "Tripple, please!"
drink("sthelse") # Dispatches to default method
# [1] "Any caffeine in there?"
drink(a,b,"c")
# [1] "Hmm, gimme more!"
drink(b,"c", a)
# [1] "Hmm, gimme more!"
How can I tell R to use the default value of a function argument without (i) omitting the argument in the function call and (ii) knowing what the default value is?
I know I can use the default value of mean in rnorm():
rnorm(n = 100) # by omitting the argument
# or
rnorm(n = 100, mean = 0) # by including it in the call with the default value
But assume I don't know the default value but want to include it explicitly in the function call. How can I achieve that?
You can access the argument list and default values via:
> formals(rnorm)
$n
$mean
[1] 0
$sd
[1] 1
formals("rnorm") also works. Some simple examples:
> rnorm(10,mean = formals(rnorm)$mean)
[1] -0.5376897 0.4372421 0.3449424 -0.9569394 -1.1459726 -0.6109554 0.1907090 0.2991381 -0.2713715
[10] -1.4462570
> rnorm(10,mean = formals(rnorm)$mean + 3)
[1] 2.701544 2.863189 1.709289 2.987687 2.848045 5.136735 2.559616 3.827967 3.079658 5.016970
Obviously, you could store the result of formals(rnorm) ahead of time as well.
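For example, storing them once and reusing them:

# store the defaults once...
defaults <- formals(rnorm)
# ...and splice them back into calls explicitly
rnorm(10, mean = defaults$mean, sd = defaults$sd)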
As #joran has already pointed out, formals() exposes the default values. However, as I understand the question, what you're really after is the construction of the call expression. To that end, it is useful to combine formals() with as.call() to produce the call itself. The following function does just that, by producing a function that produces "argument-completed calls," for a given function name f:
drop_missing <- function(sig) {
sig[!sapply(sig, identical, quote(expr =))]
}
complete_call <- function(f) {
nm <- as.name(f)
sig <- formals(args(f))
make_call <- function() {
args <- match.call()[-1]
sig[names(args)] <- args
as.call(c(nm, drop_missing(sig)))
}
formals(make_call) <- sig
make_call
}
Example usage:
complete_call("log")(1)
#> log(x = 1, base = exp(1))
complete_call("rnorm")(10)
#> rnorm(n = 10, mean = 0, sd = 1)
complete_call("rnorm")()
#> rnorm(mean = 0, sd = 1)
Remarks:
1) The output is a language object. To execute the call, you need to evaluate it, e.g.,
eval(complete_call("rnorm")(10))
#> [1] -0.89428324 -1.78405483 -1.83972728 ... (output truncated)
2) If you want complete_call() to accept a function, rather than the name of a function, you could write nm <- as.name(deparse(substitute(f))) in place of the given assignment. However, that would not work in a nested call, where you would get as.name("f") for nm, because of R's rules of lexical scoping.
3) Without the call to args() in the assignment of sig, complete_call() would only work for closures, since primitive and builtin functions don't have formals.
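Remark 3 is easy to verify directly:

formals(sum)        # NULL: primitives have no formals
formals(args(sum))  # args() builds a closure wrapper, so this returns
                    # the signature with ... and na.rm = FALSE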
In the documentation of sapply and replicate there is a warning about the use of ....
Now, I can accept it as such, but I would like to understand what is behind it. So I've created this little contrived example:
innerfunction<-function(x, extrapar1=0, extrapar2=extrapar1)
{
cat("x:", x, ", xp1:", extrapar1, ", xp2:", extrapar2, "\n")
}
middlefunction<-function(x,...)
{
innerfunction(x,...)
}
outerfunction<-function(x, ...)
{
cat("Run middle function:\n")
replicate(2, middlefunction(x,...))
cat("Run inner function:\n")
replicate(2, innerfunction(x,...))
}
outerfunction(1,2,3)
outerfunction(1,extrapar1=2,3)
outerfunction(1,extrapar1=2,extrapar2=3)
Perhaps I've done something obviously, horribly wrong, but I find the result of this rather upsetting. So can anyone explain to me why, in all of the above calls to outerfunction, I get this output:
Run middle function:
x: 1 , xp1: 0 , xp2: 0
x: 1 , xp1: 0 , xp2: 0
Run inner function:
x: 1 , xp1: 0 , xp2: 0
x: 1 , xp1: 0 , xp2: 0
Like I said: the docs seem to warn for this, but I do not see why this is so.
?replicate, in the Examples section, tells us explicitly that what you are trying to do does not and will not work. In the Note section of ?replicate we have:
If ‘expr’ is a function call, be aware of assumptions about where
it is evaluated, and in particular what ‘...’ might refer to. You
can pass additional named arguments to a function call as
additional named arguments to ‘replicate’: see ‘Examples’.
And if we look at Examples, we see:
## use of replicate() with parameters:
foo <- function(x=1, y=2) c(x,y)
# does not work: bar <- function(n, ...) replicate(n, foo(...))
bar <- function(n, x) replicate(n, foo(x=x))
bar(5, x=3)
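For reference, foo(x = 3) returns c(3, 2) on each of the five replications, which replicate simplifies to a 2 x 5 matrix:

bar(5, x = 3)
#      [,1] [,2] [,3] [,4] [,5]
# [1,]    3    3    3    3    3
# [2,]    2    2    2    2    2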
My reading of the docs is that they do far more than warn you about using ... in replicate() calls; they explicitly document that it does not work. Much of the discussion in that help file relates to the ... argument of the other functions, not necessarily to replicate().
If you look at the code for replicate:
> replicate
function (n, expr, simplify = TRUE)
sapply(integer(n), eval.parent(substitute(function(...) expr)),
simplify = simplify)
<environment: namespace:base>
You see that the function is evaluated in the parent frame, where the ... from your calling function no longer exists.
There actually is a way to do this:
# Simple function:
ff <- function(a,b) print(a+b)
# This will NOT work:
testf <- function(...) {
replicate(expr = ff(...), n = 5)
}
testf(45,56) # argument "b" is missing, with no default
# This will:
testf <- function(...) {
args <- as.list(substitute(list(...)))[-1L]
replicate(expr = do.call(ff, args), n = 5)
}
testf(45,56) # prints 101 five times
An alternative way to do that:
g <- function(x, y) x + y
f <- function(a = 1, ...) {
arg_list <- list(...)
replicate(n = 3, expr = do.call(g, args = arg_list))
}
f(x = 1, y = 2)
# [1] 3 3 3