Does lapply apply the functions in order? - r

I have a list of functions
functions <- list(f1, f2, f3, ...)
And I need to pass an object x through all the functions. I could do it by:
for (fun in functions){
fun(x)
}
The functions do not return anything, but their order is important, i.e. f1(x) must be applied before f2(x).
Thus, I'm thinking of using lapply:
lapply(functions, function(fun) fun(x))
But I don't know whether lapply applies the first function in the list first, or follows some other order. The for loop guarantees the ordering, but it may be slower.
Any idea?

The wording of the question "pass x through ..." suggests that you expect a "composition", i.e. a serial application of each function to the result of the previous one. Neither of your proposed solutions will do that, although you could rework your for loop to do so. Take a look at the ?funprog help page, which I am shamelessly quoting in part:
## Iterative function application:
Funcall <- function(f, ...) f(...)
## Compute log(exp(acos(cos(0))))
Reduce(Funcall, list(log, exp, acos, cos), 0, right = TRUE)
Compare the results of a for loop version with the Reduce version:
> flist <- list(log, exp, acos, cos)
> arg <- 0; for (f in flist) {arg <- f(arg)}
> arg
[1] 6.123234e-17
> Funcall <- function(f, ...) f(...)
> ## Compute log(exp(acos(cos(0))))
> Reduce(Funcall, list(log, exp, acos, cos), 0, right = TRUE)
[1] 0
Printing the intermediate values shows what is actually happening:
arg <- 0; for (f in flist) {arg <- f(arg);cat(arg,"\n")}
-Inf
0
1.570796
6.123234e-17
But they are not the same, since right = TRUE reverses the order of application, which explains the (numerically trivial) difference in the final results. Compare:
arg <- 0; for (f in rev(flist)) {arg <- f(arg);cat(arg,"\n")}
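which prints the intermediate values in the reverse order:
1
0
1
0
As for the original question: lapply does visit the elements of its first argument in order (internally it is just a loop over the list), so for side-effect-only functions lapply(functions, function(fun) fun(x)) applies f1 before f2, exactly like the for loop. A quick sanity check with made-up functions:
functions <- list(function(x) cat("f1\n"), function(x) cat("f2\n"), function(x) cat("f3\n"))
invisible(lapply(functions, function(fun) fun(NULL)))
# f1
# f2
# f3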


Why doesn't R check for missing arguments at start of call?

I noticed that R functions check for missing arguments only at the time when the specific argument is evaluated in the function body.
Example:
f <- function(x, y) {
Sys.sleep(3)
return(x + y)
}
f(1)
The function takes 3 seconds to fail and report the missing argument, rather than failing at the start of the call. What is the advantage of such an implementation?
EDIT:
I'm aware of force() and missing(). I would like to know what the advantage is of checking missing() on an argument immediately before its evaluation rather than at the start of the function call. Is there a necessary reason for such an implementation?
As a contrived example
f2 <- function() {
Sys.sleep(3)
}
f <- function(x, y) {
if (missing(y)) stop("y missing")
print(x)
}
f(1, f2())
The "expensive" call to f2() is still avoided by lazy evaluation, but its missingness can be checked without evaluation.
EDIT2:
I guess you can argue that it gives more flexibility for generating default values. In another contrived example,
f <- function(x, y = 1:3) {
if (missing(x)) {
x <- y
}
x
}
f()
such code would fail if argument checking were done immediately upon the function call. However, this code is better written as function(x = y, y = 1:3). I guess such a feature is relied on by a non-trivial number of codebases, so changing the behaviour now would be more trouble than it's worth.
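Incidentally, that rewrite works precisely because of lazy evaluation: a default can refer to another argument, and the promise is only resolved when x is first used. A minimal sketch:
f <- function(x = y, y = 1:3) x
f()        # 1 2 3: the default for x is the promise `y`
f(x = 10)  # 10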
R uses lazy evaluation. That is, arguments to functions are not evaluated until
they are required. This can save both time and memory if it turns out the
argument is not required.
In extremely rare circumstances something is not evaluated that should be.
You can use force to get around the laziness.
Burns, Patrick. 2011. "The R Inferno". http://www.burns-stat.com/pages/Tutor/R_inferno.pdf
So the following code will fail faster:
f <- function(x, y) {
force(y)
Sys.sleep(3)
return(x + y)
}
f(1)
or
f <- function(x, y) {
if(missing(y)) stop("missing y")
Sys.sleep(3)
return(x + y)
}
f(1)
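Under the hood, each argument is passed as a promise; you can emulate one directly with delayedAssign to see when evaluation actually happens (a small illustration, not part of the answers above):
delayedAssign("y", { cat("evaluating y\n"); Sys.sleep(3); 2 })
# nothing has run yet; y is an unevaluated promise
y + 1  # forcing the promise triggers the cat() and the sleep
# evaluating y
# [1] 3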

Not passing all optional arguments in apply

I am facing a problem with apply passing optional arguments on to a function that does not need them. I understand that apply doesn't know what to do with the optional arguments and just passes them on to the function.
But anyhow, here is what I would like to do:
First I want to specify a list of functions that I would like to use.
functions <- list(length, sum)
Then I would like to create a function which apply these specified functions on a data set.
myFunc <- function(data, functions) {
for (i in 1:length(functions)) print(apply(X=data, MARGIN=2, FUN=functions[[i]]))
}
This works fine.
data <- cbind(rnorm(100), rnorm(100))
myFunc(data, functions)
[1] 100 100
[1] -0.5758939 -5.1311173
But I would also like to use additional arguments for some functions, e.g.
power <- function(x, p) x^p
which doesn't work the way I want. If I modify myFunc to:
myFunc <- function(data, functions, ...) {
for (i in 1:length(functions)) print(apply(X=data, MARGIN=2, FUN=functions[[i]], ...))
}
and functions as
functions <- list(length, sum, power)
and then try my function, I get
myFunc(data, functions, p=2)
Error in FUN(newX[, i], ...) :
2 arguments passed to 'length' which requires 1
How may I solve this issue?
Sorry for the wall of text. Thank you!
You can use Curry from the functional package to fix the parameters you want, put the curried functions in the list of functions to apply, and finally iterate over this list:
library(functional)
power <- function(x, p) x^p
funcs = list(length, sum, Curry(power, p=2), Curry(power, p=3))
lapply(funcs, function(f) apply(data, 2, f))
With your code you can use:
functions <- list(length, sum, Curry(power, p=2))
myFunc(data, functions)
I'd advocate using Colonel's Curry approach, but if you want to stick to base R you can always:
funcs <- list(length, sum, function(x) power(x, 2))
which is roughly what Curry ends up doing.
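If you are curious what that currying amounts to, a minimal base-R sketch is just a closure that stores the fixed arguments (the helper name partial is made up here):
partial <- function(f, ...) {
  fixed <- list(...)  # list(...) forces the fixed arguments now
  function(x) do.call(f, c(list(x), fixed))
}
functions <- list(length, sum, partial(power, p = 2))
myFunc(data, functions)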
One option is to pass the parameters in a list with the arguments needed for each function. You can add those parameters to the others needed for apply using c and then use do.call to call the function. Something like this. I also wrap all the output in a list here rather than using print; your usage may vary.
power <- function(x, p) x^p
myFunc <- function(data, functions, parameters) {
lapply(seq_along(functions), function(i) {
p0 <- list(X=data, MARGIN=2, FUN=functions[[i]])
do.call(apply, c(p0, parameters[[i]]))
})
}
d <- matrix(1:6, nrow=2)
functions <- list(length, sum, power)
parameters <- list(NULL, NULL, p=3)
myFunc(d, functions, parameters)
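For this input the call should return (up to print formatting):
[[1]]
[1] 2 2 2

[[2]]
[1]  3  7 11

[[3]]
     [,1] [,2] [,3]
[1,]    1   27  125
[2,]    8   64  216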
You can use the lazyeval package:
library(lazyeval)
my_evaluate <- function(data, expressions, ...) {
lapply(expressions, function(e) {
apply(data, MARGIN=2, FUN=function(x) {
lazy_eval(e, c(list(x=x), list(...)))
})
})
}
And use it like this:
my_expressions <- lazy_dots(sum = sum(x), sumpow = sum(x^p), length_k = length(x)*k )
data <- cbind(rnorm(100), rnorm(100))
my_evaluate(data, my_expressions, p = 2, k = 2)
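The result is a named list with one length-2 vector per expression (one value per column). Only length_k is deterministic here, since the data are random:
out <- my_evaluate(data, my_expressions, p = 2, k = 2)
str(out)
# List of 3
#  $ sum     : num [1:2] ...  (column sums; values depend on the rnorm draw)
#  $ sumpow  : num [1:2] ...
#  $ length_k: num [1:2] 200 200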

How to use Reduce to create a new function from a list of functions?

I would like to add some functions f1,f2,...,fn so that I have a new function which yields f(x)=f1(x)+...+fn(x) (called pointwise addition). So I have a list of functions and tried
Reduce("funadd",fun.list)
where funadd is defined by
funadd <- function(f1,f2){
retfun <- function(x){
f1(x)+f2(x)
}
retfun
}
When testing funadd on two functions, it works flawlessly. However, when I try to evaluate the result of the Reduce command, I get the error
Error: evaluation nested too deeply: infinite recursion / options(expressions=)?
It's funny that Reduce does not work... Note that "reducing by hand" works:
f <- function(x) x^2
g <- function(x) x^3
h <- function(x) x^4
x <- runif(3)
f(x)+g(x)+h(x)
#[1] 0.9760703 0.1873004 0.1266966
funadd(funadd(f,g),h)(x)
#[1] 0.9760703 0.1873004 0.1266966
Alternatively, you can use this:
funadd2 <- function(...){
function(x) Reduce(`+`, lapply(list(...), function(f) f(x)))
}
funadd2(f,g,h)(x)
#[1] 0.9760703 0.1873004 0.1266966
EDIT: This is what is going on:
Looking at the source code for Reduce, we can see that it (roughly) has a loop doing this:
init <- f
init <- funadd(init, g)
and continuing if there are more elements (init <- funadd(init, h), ...).
This causes the reference to f to be lost in the first loop iteration:
init(x)
# Error: evaluation nested too deeply: infinite recursion / options(expressions=)?
This happens because the f1 in the last retfun is pointing to itself:
identical(environment(init)$f1, init, ignore.environment=FALSE)
# [1] TRUE
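A minimal reproduction of the self-capture, without Reduce (plus1 is a made-up helper):
plus1 <- function(x) x + 1
h <- funadd(plus1, plus1)  # fine: f1 and f2 are promises for `plus1`
h <- funadd(h, plus1)      # f1 is now an unevaluated promise for `h`...
h(1)                       # ...and by now `h` names this very result
# Error: evaluation nested too deeply: infinite recursion / options(expressions=)?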
As @Vincent figured out, this can also be solved by forcing the arguments, i.e., by making a local copy that avoids the lazy evaluation of f1 and f2:
funadd3 <- function(f1,f2){
f1.save <- f1
f2.save <- f2
retfun <- function(x){
f1.save(x)+f2.save(x)
}
retfun
}
Reduce(funadd3, list(f,g,h))(x)
# [1] 0.9760703 0.1873004 0.1266966
Forcing the evaluation of the arguments fixes the problem.
funadd <- function(f1,f2){
force(f1)
force(f2)
retfun <- function(x){
f1(x)+f2(x)
}
retfun
}
r <- Reduce(funadd, list(f, g, h))
r(x) # works
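There is nothing magical about force, by the way; it is literally defined as function(x) x. Its only job is to evaluate the promise while the name still refers to the right function. Printing it shows something along these lines:
force
# function (x) x
# <bytecode: ...>
# <environment: namespace:base>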

apply over nested functions

This is not really a problem, but I'm wondering if there is a more elegant solution:
Let's say I have a vector vec <- rlnorm(10) and I want to apply a non-vectorized function to it, e.g. exp (ignore for the moment that it is vectorized). I can do
sapply( vec, exp )
But when the function I want to apply is nested, the expression immediately becomes less simple:
sapply( vec, function(x) exp( sqrt(x) ) )
This happens to me all the time with the apply and plyr family.
So my question is: is there, in general, an elegant way to nest (or pipe) functions without explicitly defining an (anonymous) function function(x){...}? Something like
# notrun
sapply( vec, sqrt | exp )
or similar.
See the examples for ?Reduce:
## Iterative function application:
Funcall <- function(f, ...) f(...)
## Compute log(exp(acos(cos(0))))
Reduce(Funcall, list(log, exp, acos, cos), 0, right = TRUE)
Here's a more bare-bones implementation with a slightly different interface:
Compose <- function(x, ...)
{
lst <- list(...)
for(i in rev(seq_along(lst)))
x <- lst[[i]](x)
x
}
sapply(0, Compose, log, exp, acos, cos)
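Like the Reduce version, this applies the functions right to left, so it again returns:
# [1] 0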
The package functional includes a Compose function.
library(functional)
id <- Compose(exp, log)
id(2) # 2
Its implementation is simple enough to include in your source, if, say, you don't need the rest of the stuff in the functional package.
R> Compose
function (...)
{
fs <- list(...)
if (!all(sapply(fs, is.function)))
stop("Argument is not a function")
function(...) Reduce(function(x, f) f(x), fs, ...)
}
<environment: namespace:functional>
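Applied to the original example, this gives essentially the pipeline syntax the question asks for; note that functional's Compose applies its functions left to right:
library(functional)
vec <- rlnorm(10)
sapply(vec, Compose(sqrt, exp))  # same as sapply(vec, function(x) exp(sqrt(x)))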

integrate a function with R / prevent integrate from evaluating the integrand vectorially

I want to integrate a function defined with an if statement. The problem is that the R function integrate evaluates my function for several x values at once, as if it were vectorized, so I receive a warning telling me that the condition was evaluated only for the first element of the vector. Consequently, my condition is not taken into account and the following program returns a division-by-zero error.
Is there a way to circumvent that?
f1 <- function(x) dnorm(x,0,1)
myInt <- function(f,lower,upper){
f <- match.fun(f)
integrand <- function(x) if (f(x)==0) 0 else 1/f(x)
integrate(integrand,lower,upper)
}
a=myInt(f1,-Inf,Inf)
There are two things you could do:
1) You could write your function differently so that it is vectorized:
integrand <- function(x) ifelse(f(x) == 0, 0, 1/f(x))
2) You can call Vectorize on it to make it vectorized:
integrand <- Vectorize(integrand)
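Vectorize wraps the function in mapply, so the scalar-only if is evaluated once per element. A quick illustration with a made-up scalar function:
scalar_inv <- function(x) if (x == 0) 0 else 1/x  # `if` makes this scalar-only
vec_inv <- Vectorize(scalar_inv)
vec_inv(c(0, 2, 4))
# [1] 0.00 0.50 0.25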
(Also, I think floating-point mischief might be coming into play here:
> f1(38)
[1] 1.097221e-314
> 1/f1(38)
[1] Inf
)
