R: apply a function on data n number of times

I would like to apply the same function a certain number of times to a vector, using the output from the function each time.
A simplified example with a simple function, just to demonstrate:
# sample vector
a <- c(1,2,3)
# function to be applied n times
f1 <- function(x) {
  x^2 + x^3
}
I would like to apply f1 to a, n times; for example, let's say 3 times here.
I heard purrr::reduce or purrr::map() might be a good fit for this, but I couldn't make it work.
The desired output for n = 3 would be equal to f1(f1(f1(a))).

Let's use Reduce (no external library requirements, generally good performance). I'll modify the function slightly to accept a second (ignored) argument:
f1 <- function(x, ign) x^2 + x^3
Reduce(f1, 1:3, init = a)
# [1] 1.872000e+03 6.563711e+09 1.102629e+14
Here's what's happening. Reduce:
uses a binary function to successively combine the elements of a given vector and a possibly given initial value.
The first argument is the function to use, and it should accept two arguments. The first is the value from the previous execution of the function in this reduction. On the first call of the function, it uses the init= value provided.
First call:
f1(c(1,2,3), 1)
# [1] 2 12 36
Second call:
f1(c(2,12,36), 2)
# [1] 12 1872 47952
Third call:
f1(c(12,1872,47952), 3)
# [1] 1.872000e+03 6.563711e+09 1.102629e+14
The second argument 1:3 is used just for its length. Anything of the proper length will work.
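For instance (a quick sketch using the two-argument f1 above), any length-3 second argument gives the same result, since its values are ignored:
Reduce(f1, c("first", "second", "third"), init = a)
# [1] 1.872000e+03 6.563711e+09 1.102629e+14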
If you don't want to redefine f1 just for this reduction, you can always do
Reduce(function(a,ign) f1(a), ...)
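Spelled out with the original one-argument f1 from the question, that looks like this (same result as above):
f1 <- function(x) x^2 + x^3
Reduce(function(a, ign) f1(a), 1:3, init = a)
# [1] 1.872000e+03 6.563711e+09 1.102629e+14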
Benchmark:
library(microbenchmark)
library(purrr)     # accumulate(), reduce()
library(magrittr)  # %>%, extract2()
# build the call f1(f1(f1(a))) once up front; the "precompiled" entry below just evaluates it
r <- Reduce(function(a,b) call("f1", a), 1:3, init=quote(a))
triple_f1 <- function(a) f1(f1(f1(a)))
microbenchmark::microbenchmark(
  base = Reduce(function(a,ign) f1(a), 1:3, a),
  accum = a %>% accumulate(~ .x %>% f1, .init = f1(a)) %>% extract2(3),
  reduc = purrr::reduce(1:3, function(a,ign) f1(a), .init=a),
  whil = {
    # note: this loop applies f1 nine times, not three
    i <- 1
    a <- c(1,2,3)
    while (i < 10) {
      i <- i + 1
      a <- f1(a)
    }
  },
  forloop = {
    out <- a
    for(i in seq_len(3)) out <- f1(out)
  },
  evaluated = {
    r <- Reduce(function(a,b) call("f1", a), 1:3, init=quote(a))
    eval(r)
  },
  precompiled = eval(r),
  anotherfun = triple_f1(a)
)
# Unit: microseconds
# expr min lq mean median uq max neval
# base 5.101 7.3015 18.28691 9.3010 10.8510 848.302 100
# accum 294.201 328.4015 381.21204 356.1520 402.6510 823.602 100
# reduc 27.000 38.1005 57.55694 45.2510 54.2005 747.401 100
# whil 1717.300 1814.3510 1949.03100 1861.8510 1948.9510 2931.001 100
# forloop 1110.001 1167.1010 1369.87696 1205.5010 1292.6500 9935.501 100
# evaluated 6.702 10.2505 22.18598 13.3015 15.5510 715.301 100
# precompiled 2.300 3.2005 4.69090 4.0005 4.5010 26.800 100
# anotherfun 1.400 2.0515 12.85201 2.5010 3.3505 1017.801 100

A plain while loop also works, with no packages needed (here applying f1 to a three times, overwriting a in place):
i <- 1
while (i <= 3) {
  a <- f1(a)
  i <- i + 1
}

Here is an option with accumulate
library(tidyverse)
library(magrittr)   # for extract2()
n <- 3
a %>%
  accumulate(~ .x %>% f1, .init = f1(a)) %>%
  extract2(n)
#[1] 1.872000e+03 6.563711e+09 1.102629e+14
NOTE: accumulate is similar to the base R option Reduce with accumulate = TRUE
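For comparison, a rough base R equivalent of the same idea, keeping every intermediate result and then taking the n-th one after the initial value:
Reduce(function(x, ign) f1(x), seq_len(n), init = a, accumulate = TRUE)[[n + 1]]
#[1] 1.872000e+03 6.563711e+09 1.102629e+14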
checking with the OP's output
f1(f1(f1(a)))
#[1] 1.872000e+03 6.563711e+09 1.102629e+14
Or use a for loop (no external libraries used)
out <- a
for(i in seq_len(n)) out <- f1(out)
out
#[1] 1.872000e+03 6.563711e+09 1.102629e+14

Here's another way to do it with Reduce:
setting the stage
a <- 1:3
f1 <- function(x) x^2 + x^3
constructing a call and evaluating it
N <- 3 # how many times?
r <- Reduce(function(a,b) call("f1", a), rep(NA, N), init=a)
# f1(f1(f1(1:3)))
eval(r)
# [1] 1.872000e+03 6.563711e+09 1.102629e+14
alternative 2
# N defined as above
Reduce(function(x,y) y(x), replicate(N,f1), init=a)
# [1] 1.872000e+03 6.563711e+09 1.102629e+14
alternative 3 (recursive with a global-like variable)
doit <- function(N) {
  i <- 0
  function(fun, x){
    i <<- i + 1
    if(i < N) Recall(fun, fun(x)) else fun(x)
  }
}
doit(3)(f1, a)
# [1] 1.872000e+03 6.563711e+09 1.102629e+14
... or even
doit <- function(N, fun, x) (function(fun, x)
  if((N <<- N - 1) > 0)
    Recall(fun, fun(x)) else
      fun(x))(fun, x)
doit(3, f1, a)
# [1] 1.872000e+03 6.563711e+09 1.102629e+14

Related

Is there a way to use do.call without explicitly providing arguments

Part of a custom function I am trying to create allows the user to provide a function as a parameter. For example
#Custom function
result <- function(.func){
  do.call(.func, list(x,y))
}
#Data
x <- 1:2
y <- 0:1
#Call function
result(.func = function(x,y){ sum(x, y) })
However, the code above assumes that the user is providing a function with arguments x and y. Is there a way to use do.call (or something similar) so that the user can provide a function with different arguments? I think that the correct solution might be along the lines of:
#Custom function
result <- function(.func){
  do.call(.func, formals(.func))
}
#Data
m <- 1:3
n <- 0:2
x <- 1:2
y <- 0:1
z <- c(4,6)
#Call function
result(.func = function(m,n){ sum(m, n) })
result(.func = function(x,y,z){ sum(x,y,z) })
But this is not it.
1) Use formals/names/mget to get the values in a list. An optional argument, envir, allows the user to specify the environment in which the variables are located, so the function knows where to look. The default, if not specified, is the parent frame, i.e. the caller.
result1 <- function(.func, envir = parent.frame()) {
  do.call(.func, mget(names(formals(.func)), envir))
}
m <- 1:3
n <- 0:2
x <- 1:2
y <- 0:1
z <- c(4,6)
result1(.func = function(m,n) sum(m, n) )
## [1] 9
result1(.func = function(x,y,z) sum(x,y,z) )
## [1] 14
result1(function(Time, demand) Time + demand, list2env(BOD))
## [1] 9.3 12.3 22.0 20.0 20.6 26.8
1a) Another possibility is to evaluate the body. This also works if envir is specified as a data frame whose columns are to be looked up.
result1a <- function(.func, envir = parent.frame()) {
  eval(body(.func), envir)
}
result1a(.func = function(m,n) sum(m, n) )
## [1] 9
result1a(.func = function(x,y,z) sum(x,y,z) )
## [1] 14
result1a(function(Time, demand) Time + demand, BOD)
## [1] 9.3 12.3 22.0 20.0 20.6 26.8
2) Another design which is even simpler is to provide a one-sided formula interface. Formulas have environments so we can use that to look up the variables.
result2 <- function(fo, envir = environment(fo)) eval(fo[[2]], envir)
result2(~ sum(m, n))
## [1] 9
result2(~ sum(x,y,z))
## [1] 14
result2(~ Time + demand, BOD)
## [1] 9.3 12.3 22.0 20.0 20.6 26.8
3) Even simpler yet is to just pass the result of the computation as an argument.
result3 <- function(x) x
result3(sum(m, n))
## [1] 9
result3(sum(x,y,z))
## [1] 14
result3(with(BOD, Time + demand))
## [1] 9.3 12.3 22.0 20.0 20.6 26.8
This works.
#Custom function
result <- function(.func){
  do.call(.func, lapply(formalArgs(.func), as.name))
}
#Data
m <- 1:3
n <- 0:2
x <- 1:2
y <- 0:1
z <- c(4,6)
#Call function
result(.func = function(m,n){ sum(m, n) })
result(.func = function(x,y,z){ sum(x,y,z) })
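For what it's worth, the lapply() call here just turns the formal argument names into symbols; when do.call() evaluates the constructed call, those symbols are looked up as ordinary variables (here they resolve to the m, n, etc. defined above). A quick sketch of the list of symbols it builds:
.func <- function(m,n){ sum(m, n) }
lapply(formalArgs(.func), as.name)
# [[1]]
# m
#
# [[2]]
# n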
This seems like a bit of a pointless function, since the examples in your question imply that what you are trying to do is evaluate the body of the passed function using variables in the calling environment. You can certainly do this easily enough:
result <- function(.func){
  eval(body(.func), envir = parent.frame())
}
This gives the expected results from your examples:
x <- 1:2
y <- 0:1
result(.func = function(x,y){ sum(x, y) })
#> [1] 4
and
m <- 1:3
n <- 0:2
x <- 1:2
y <- 0:1
z <- c(4,6)
result(.func = function(m,n){ sum(m, n) })
#> [1] 9
result(.func = function(x,y,z){ sum(x,y,z) })
#> [1] 14
But note that, when the user types:
result(.func = function(x,y){ ...user code... })
They get the same result they would already get if they didn't use your function and simply typed
...user code....
You could argue that it would be helpful with a pre-existing function like mean.default:
x <- 1:10
na.rm <- TRUE
trim <- 0
result(mean.default)
#> [1] 5.5
But this means users have to name their variables as the parameters being passed to the function, and this is just a less convenient way of calling the function.
It might be useful if you could demonstrate a use case where what you are proposing doesn't make the user's code longer or more complex.
You could also use ..., but like the other responses, I don't quite see the value, or perhaps I don't fully understand the use-case.
result <- function(.func, ...){
  do.call(.func, list(...))
}
Create function
f1 <- function(a,b) sum(a,b)
Pass f1 and values to result()
result(f1, m,n)
Output:
[1] 9
Here is how I would do it based on your clarifying comments.
Basically, since you say your function will take a data.frame as input, the function you are asking for essentially just reverses the order of the arguments you pass to do.call(), which takes a function and then a list of arguments. A data.frame is just a special form of list in which all elements (columns) are vectors of equal length (the number of rows).
result <- function(.data, .func) {
  # .data is a data.frame, which is a list of argument vectors of equal length
  do.call(.func, .data)
}
result(data.frame(a=1, b=1:5), function(a, b) a * b)
result(data.frame(c=1:10, d=1:10), function(c, d) c * d)
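For reference, assuming the definitions above, those two calls return:
result(data.frame(a=1, b=1:5), function(a, b) a * b)
# [1] 1 2 3 4 5
result(data.frame(c=1:10, d=1:10), function(c, d) c * d)
# [1]   1   4   9  16  25  36  49  64  81 100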

R: How to write a function that replaces a function call with another function call?

E.g. I want to transform the code
mean(x)
to
fn(x)
every time I see mean in the code.
replace_mean <- function(code) {
  substitute(code, list(mean = fn))             # doesn't work
  substitute(substitute(code), list(mean = fn)) # doesn't work
}
Neither of the above approaches works. E.g.
replace_mean(list(mean(y), mean(x)))
What's the best way to do function replacement using NSE in R?
Base R Solutions preferred.
Update: example output
replace_mean(mean(x)) # fn(x)
replace_mean(list(a = mean(x), mean(ok))) # list(a = fn(x), fn(ok))
The following function, when passed mean(x) and some fn such as sqrt as its two arguments returns the call object fn(x), i.e. sqrt(x), replacing occurrences of mean with fn.
replace_mean <- function(code, fn) {
  do.call("substitute", list(substitute(code), list(mean = substitute(fn))))
}
Examples
1) Basic example
e <- replace_mean(mean(x), sqrt)
e
## sqrt(x)
x <- 4
eval(e)
## [1] 2
2) more complex expression
ee <- replace_mean(mean(x) + mean(x*x), sqrt)
ee
## sqrt(x) + sqrt(x * x)
x <- 4
eval(ee)
## [1] 6
3) apply replace_mean to body of f creating g
f <- function(x) mean(x) + mean(x*x)
g <- f
body(g) <- do.call("replace_mean", list(body(f), quote(sqrt)))
g
## function (x)
## sqrt(x) + sqrt(x * x)
x <- 4
g(x)
## [1] 6
Another way is much uglier: it relies on string manipulation to generate the code you want to run and then evaluates it.
replace_mean <- function(code) {
  code_subbed = substitute(code)
  # construct the code I want, then evaluate it
  code_subbed_subbed = sprintf("substitute(%s, list(mean=quote(fn)))", deparse(code_subbed))
  eval(parse(text = code_subbed_subbed))
}
replace_mean(list(mean(x), a= mean(ok)))
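which returns the rewritten, unevaluated call:
# list(fn(x), a = fn(ok))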

How to compose a list of functions

For example, I have a vector of functions: fun_vec <- c(step1,step2,step3).
Now I want to compose them like this: step1(step2(step3(x))). How do I do this using fun_vec? (Suppose that fun_vec isn't fixed and can have more or fewer functions.)
Similar to Frank's use of freduce, you can use Reduce:
step1 <- function(a) a^2
step2 <- function(a) sum(a)
step3 <- function(a) sqrt(a)
steps <- list(step1, step2, step3)
Reduce(function(a,f) f(a), steps, 1:3)
# [1] 3.741657
step3(step2(step1(1:3)))
# [1] 3.741657
You can see it "in action" with:
Reduce(function(a,f) f(a), steps, 1:3, accumulate=TRUE)
# [[1]]
# [1] 1 2 3
# [[2]]
# [1] 1 4 9
# [[3]]
# [1] 14
# [[4]]
# [1] 3.741657
You can use freduce from the magrittr package:
fun_vec = c(function(x) x^2, function(x) sum(x), function(x) sqrt(x))
library(magrittr)
freduce(1:10, fun_vec)
Alternately, define a function sequence with pipes like...
library(magrittr)
f = . %>% raise_to_power(2) %>% sum %>% sqrt
f(1:10)
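Both give the same value here; as a quick check against writing the composition out by hand:
sqrt(sum((1:10)^2))
# [1] 19.62142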
A similar example: Is there a way to 'pipe through a list'?
Here's a base R recursive approach:
compose <- function(funs) {
  n <- length(funs)
  fcomp <- function(x) funs[[n - 1]](funs[[n]](x))
  if (n > 2) compose(c(funs[1:(n - 2)], fcomp)) else fcomp
}
x <- c(sqrt, log, exp)
compose(x)(2)
# [1] 1.414214
sqrt(log(exp(2)))
# [1] 1.414214
If the number of functions in funs is greater than two, we shorten the list by one by replacing the last two functions by their composition. Otherwise, we return the composition of the last remaining two. It's assumed that initially there are at least two functions in funs.
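A quick sketch with four functions, to watch the recursion shrink the list one step at a time:
fs <- c(sqrt, log, exp, abs)
compose(fs)(-2)
# [1] 1.414214
sqrt(log(exp(abs(-2))))
# [1] 1.414214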
Take a look at purrr::compose. If your functions are stored inside a list, use purrr::invoke to pass that list to compose:
fun_vec <- c( exp, log10, sqrt )
f <- purrr::invoke( purrr::compose, fun_vec )
f(4) # 1.35125
exp( log10( sqrt(4) ) ) # 1.35125

Subsetting with negative indices: best practices?

Say I have a function for subsetting (this is just a minimal example):
f <- function(x, ind = seq(length(x))) {
  x[ind]
}
(Note: one could use only seq(x) instead of seq(length(x)), but I don't find it very clear.)
So, if
x <- 1:5
ind <- c(2, 4)
ind2 <- which(x > 5) # integer(0)
I have the following results:
f(x)
[1] 1 2 3 4 5
f(x, ind)
[1] 2 4
f(x, -ind)
[1] 1 3 5
f(x, ind2)
integer(0)
f(x, -ind2)
integer(0)
For the last result, we would have wanted to get all of x, but this is a common cause of error (as mentioned in the book Advanced R).
So, if I want to make a function for removing indices, I use:
f2 <- function(x, ind.rm) {
  f(x, ind = `if`(length(ind.rm) > 0, -ind.rm, seq(length(x))))
}
Then I get what I wanted:
f2(x, ind)
[1] 1 3 5
f2(x, ind2)
[1] 1 2 3 4 5
My question is:
Can I do something cleaner and that doesn't need passing seq(length(x)) explicitly in f2 but using directly the default value of f's parameter ind when ind.rm is integer(0)?
If you anticipate having "empty" negative indices a lot, you can get a performance improvement for these cases if you can avoid the indexing used by x[seq(x)] as opposed to just x. In other words, if you are able to combine f and f2 into something like:
new_f <- function(x, ind.rm){
  if(length(ind.rm)) x[-ind.rm] else x
}
There will be a huge speedup in the case of empty negative indices.
n <- 1000000L
x <- 1:n
ind <- seq(0L,n,2L)
ind2 <- which(x>n+1) # integer(0)
library(microbenchmark)
microbenchmark(
  f2(x, ind),
  new_f(x, ind),
  f2(x, ind2),
  new_f(x, ind2)
)
all.equal(f2(x, ind), new_f(x, ind)) # TRUE - same result at about same speed
all.equal(f2(x, ind2), new_f(x, ind2)) # TRUE - same result at much faster speed
Unit: nanoseconds
expr min lq mean median uq max neval
f2(x, ind) 6223596 7377396.5 11039152.47 9317005 10271521 50434514 100
new_f(x, ind) 6190239 7398993.0 11129271.17 9239386 10202882 59717093 100
f2(x, ind2) 6823589 7992571.5 11267034.52 9217149 10568524 63417978 100
new_f(x, ind2) 428 1283.5 5414.74 6843 7271 14969 100
What you have isn't bad, but if you want to avoid passing the default value of a default argument you could restructure like this:
f2 <- function(x, ind.rm) {
  `if`(length(ind.rm) > 0, f(x,-ind.rm), f(x))
}
which is slightly shorter than what you have.
On Edit
Based on the comments, it seems you want to be able to pass the function nothing (rather than simply not passing the argument at all), so that it uses the default value. You can do so by writing a function which is set up to receive nothing, also known as NULL. You can rewrite your f as:
f <- function(x, ind = NULL) {
  if(is.null(ind)){ind <- seq(length(x))}
  x[ind]
}
NULL functions as a flag which tells the receiving function to use a default value for the parameter, although that default value must be set in the body of the function.
Now f2 can be rewritten as
f2 <- function(x, ind.rm) {
  f(x, ind = `if`(length(ind.rm) > 0, -ind.rm, NULL))
}
This is slightly more readable than what you have, but at the cost of making the original function slightly longer.
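As a quick check with the x, ind, and ind2 from the question, the NULL-flag version behaves the same way as before:
f2(x, ind)
# [1] 1 3 5
f2(x, ind2)
# [1] 1 2 3 4 5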
To implement "parameter1 = if(cond1) then value1 else default_value_of_param1", I used formals to get default parameters as a call:
f <- function(x, ind.row = seq_len(nrow(x)), ind.col = seq_len(ncol(x))) {
  x[ind.row, ind.col]
}
f2 <- function(x, ind.row.rm = integer(0), ind.col.rm = integer(0)) {
  f.args <- formals(f)
  f(x,
    ind.row = `if`(length(ind.row.rm) > 0, -ind.row.rm, eval(f.args$ind.row)),
    ind.col = `if`(length(ind.col.rm) > 0, -ind.col.rm, eval(f.args$ind.col)))
}
Then:
> x <- matrix(1:6, 2)
> f2(x, 1:2)
[,1] [,2] [,3]
> f2(x, , 1:2)
[1] 5 6
> f2(x, 1, 2)
[1] 2 6
> f2(x, , 1)
[,1] [,2]
[1,] 3 5
[2,] 4 6
> f2(x, 1, )
[1] 2 4 6
> f2(x)
[,1] [,2] [,3]
[1,] 1 3 5
[2,] 2 4 6

R: Nesting Function

In Wolfram Mathematica, there is a function NestList[f,x,n] that produces a list of length n+1 by repeatedly applying the function f to the variable x. See the documentation.
Is there something similar in R?
Executing do.call would make the same computations multiple times.
Example (reaction to USER_1's suggestion):
foo <- function(x) {x+1}
map(0, foo)
# [[1]]
# [1] 1
Just write one. Such a function has to loop anyway (recursion is not advisable if n can get large).
NestList <- function(f, x, n) {
  stopifnot(n > 0)
  res <- rep(x, n + 1)
  for (i in seq_len(n)) res[i + 1] <- f(res[i])
  res
}
NestList(function(x) x^2, 2, 5)
#[1] 2 4 16 256 65536 4294967296
