Why does the FUN parameter of tapply fail when combined with colwise in R?

I usually use the combination of colwise and tapply to calculate grouped values in a data frame. However, I unexpectedly found that the FUN parameter of tapply does not work correctly with colwise from plyr. An example follows:
Data:
df <- data.frame(a = 1:10, b = rep(1:2, each = 5), c = 2:11)
Normal:
library(plyr)
colwise(tapply)(subset(df, select = c(a, c)), df$b, function(x){sum(x[x > 2])})
The code above is correct and works normally. But if I pass FUN as a named argument, it fails:
colwise(tapply)(subset(df, select = c(a, c)), df$b, FUN = function(x){sum(x[x > 2])})
Error is:
Error in FUN(X[[1L]], ...) :
  unused arguments (function (X, INDEX, FUN = NULL, ..., simplify = TRUE)
{
    FUN <- if (!is.null(FUN)) match.fun(FUN)
    if (!is.list(INDEX)) INDEX <- list(INDEX)
    nI <- length(INDEX)
    if (!nI) stop("'INDEX' is of length zero")
    namelist <- vector("list", nI)
    names(namelist) <- names(INDEX)
    extent <- integer(nI)
    nx <- length(X)
    one <- 1
    group <- rep.int(one, nx)
    ngroup <- one
    for (i in seq_along(INDEX)) {
        index <- as.factor(INDEX[[i]])
        if (length(index) != nx) stop("arguments must have same length")
        namelist[[i]] <- levels(index)
        extent[i] <- nlevels(index)
        group <- group + ngroup * (as.integer(index) - one)
        ngroup <- ngroup * nlevels(index)
    }
    if (is.null(FUN)) return(group)
    ans <- lapply(X = split(X, group), FUN = FUN, ...)
    index <- as.integer(names(ans))
    if (simplify && all(unlist(lapply(ans, length)) == 1)) {
        ansmat <- array(dim = extent, dimnames = namelist)
    ...
Could anyone explain the reason? Thank you in advance.

Well, the issue is that both lapply and tapply have an argument named FUN. Note that colwise(tapply) is a function containing the following line:
out <- do.call("lapply", c(list(filtered, .fun, ...), dots))
Let's go to this line with our debugger by writing
ct <- colwise(tapply); trace(ct, quote(browser()), at = 6)
and then running
ct(subset(df, select = c(a, c)), df$b, FUN = function(x){sum(x[x > 2])})
Now let's print c(list(filtered, .fun, ...), dots). Notice that the first three (unnamed) elements are the data frame, tapply, and df$b, with the FUN argument from above coming in last. However, that argument is named. Since this is a do.call on lapply, instead of being passed along as an optional argument to tapply, it matches lapply's own FUN parameter! So what is happening is that you are turning the call into:
lapply(subset(df, select = c(a, c)), function(x){sum(x[x > 2])}, tapply, df$b)
This, of course, makes no sense, and if you execute the above manually (still in your debugger), you will get the exact same error you are getting. For a simple workaround, try:
tapply2 <- function(.FUN, ...) tapply(FUN = .FUN, ...)
colwise(tapply2)(subset(df, select = c(a, c)), df$b, .FUN = function(x){sum(x[x > 2])})
The plyr package should be checking the ... arguments for names like FUN (or anything else that can interfere with lapply's job), but it does not appear that the author included such a check. You could submit a pull request to the plyr package that implements any of the following workarounds:
1. Define a local
.lapply <- function(`*X*`, `*FUN*`, ...) lapply(X = `*X*`, `*FUN*`, ...)
(minimizing interference further).
2. Check names(list(...)) within the colwise(tapply) function for X and FUN (this can introduce problems if the author intended to prevent evaluation of promises until the child call).
3. Call do.call("lapply", ...) explicitly with named X and FUN, so that you get the intended error:
formal argument "FUN" matched by multiple actual arguments
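To see why the third option yields the better error, here is a minimal sketch (not the actual plyr source) of the do.call with named X and FUN, using the objects from the question:
filtered <- subset(df, select = c(a, c))
dots <- list(FUN = function(x) sum(x[x > 2])) # the user's named FUN
# Naming X and FUN in the do.call makes the collision explicit:
do.call("lapply", c(list(X = filtered, FUN = tapply, df$b), dots))
# Error: formal argument "FUN" matched by multiple actual arguments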

Store the arguments an R function expects in a character vector

Is it possible to find out which arguments a function expects and then store that information in a character vector?
I know args(foo) but it only prints this information and returns NULL.
Why do I need this?
I want to work with the three-dot arguments (dot dot dot, ...) and pass them to different functions.
Let me explain...
The following simple case works.
data <- c(1:10)
cv <- function(x, ...) {
  numerator <- mean(x, ...)
  denominator <- sd(x, ...)
  return(numerator / denominator)
}
cv(data, na.rm = TRUE)
However, in a slightly different case, R will not automatically figure out which arguments match which function.
data <- c(1:10)
roundCv <- function(x, ...) {
  numerator <- mean(x, ...)
  denominator <- sd(x, ...)
  result <- round(numerator / denominator, ...)
  return(result)
}
roundCv(data, na.rm = TRUE, digits = 2)
# Error in sd(x, ...) : unused argument (digits = 2)
If I want to separate those arguments, it gets a little hairy. (The error occurs because sd(x, na.rm = FALSE) has no ... argument of its own, so the stray digits = 2 cannot be silently swallowed the way mean() swallows it.) The approach is not generic but has to be adapted to all functions involved.
data <- c(1:10)
roundCv2 <- function(x, ...) {
  args <- list(...)
  args1 <- args[ names(args) %in% "na.rm"] # For mean/sd
  args2 <- args[!names(args) %in% "na.rm"] # For round
  numerator <- do.call("mean", c(list(x = x), args1))
  denominator <- do.call("sd", c(list(x = x), args1))
  tmp <- numerator / denominator
  do.call("round", c(list(x = tmp), args2))
}
roundCv2(data, na.rm = TRUE, digits = 2)
Is there a simple way to do this?!
If I knew which arguments each function expects, I could handle this generically. That's why I'm asking:
Is it possible to find out which arguments a function expects and then store that information in a character vector?
A shout-out to MrFlick for pointing to similar questions and giving the answer in the comments.
You can use formals() to get a list-like object back, but it won't work for primitive functions. Use it like names(formals(...)).
More details can be found here: https://stackoverflow.com/a/4128401/1553796
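Building on that, here is a minimal sketch of the generic approach (roundCv3 is a hypothetical name; note that formals() returns NULL for primitives such as round, so those are wrapped in args() first):
roundCv3 <- function(x, ...) {
  args <- list(...)
  statArgs  <- args[names(args) %in% names(formals(sd))]          # sd() is a closure
  roundArgs <- args[names(args) %in% names(formals(args(round)))] # round() is primitive
  numerator <- do.call("mean", c(list(x = x), statArgs))
  denominator <- do.call("sd", c(list(x = x), statArgs))
  do.call("round", c(list(numerator / denominator), roundArgs))
}
roundCv3(1:10, na.rm = TRUE, digits = 2)
# [1] 1.82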

R function ambiguous call

I'm working on a project, trying to convert an R function to CUDA C++, but I can't understand some of the R function calls. I'm really new to R and I can't find what I'm looking for. To be exact, this is the main R function code:
for (i in 1:ncy) {
  res <- apply(allsubset, 2, banddepthforonecurve, xdata=x, ydata=y[,i], tau=tau, use=use)
  depth[i] <- sum(res[1,])
  localdepth[i] <- sum(res[2,])
}
The part that I can't really understand is the "banddepthforonecurve" function call. This is the "banddepthforonecurve" function code:
banddepthforonecurve <- function(x, xdata, ydata, tau, use) {
  envsup <- apply(xdata[,x], 1, max)
  envinf <- apply(xdata[,x], 1, min)
  inenvsup <- ydata <= envsup
  inenvinf <- ydata >= envinf
  depth <- all(inenvsup) & all(inenvinf)
  localdepth <- depth & use(envsup-envinf) <= tau
  res <- c(depth,localdepth)
  return(res)
}
When it is called in:
res <- apply(allsubset, 2, banddepthforonecurve, xdata=x, ydata=y[,i], tau=tau, use=use)
I don't really get what is passed as the first parameter "x" of "banddepthforonecurve". I supposed it's something like banddepthforonecurve(i, xdata=x, ydata=y[,i], tau=tau, use=use),
but if I try to run it separately in RStudio to understand it better, I get:
apply(xdata[, x], 1, max) : dim(X) must have a positive length
Why is there no such error when I run the whole R project? What is passed as the "x" parameter when it is called in res <- apply(...)? I hope I was clear; sorry for my bad English. Thank you in advance!
# This apply call
res = apply(X = input, MARGIN = 2, FUN = foo, ...)
# is essentially syntactic sugar for this:
res = list()
for(i in 1:ncol(input)) {
  res[[i]] = foo(input[, i], ...)
}
# plus an attempt to simplify `res` (e.g., to a matrix or vector)
So in your line:
apply(allsubset, 2, banddepthforonecurve, xdata=x, ydata=y[,i], tau=tau, use=use)
In a single iteration of your for loop, the first parameter of banddepthforonecurve (x) will be allsubset[, 1], then allsubset[, 2], ..., allsubset[, ncol(allsubset)].
The xdata parameter is always x, the tau and use parameters are always tau and use, and the for loop iterates over the columns of y to use as the ydata argument. You can think of it as a nested loop: for each column of y, use it as ydata and (via apply) iterate over all the columns of allsubset.
(If the MARGIN argument of apply were 1, it would iterate over rows instead of columns.)
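As for the error seen when calling the function by hand: a single column index makes xdata[, x] drop down to a vector, and apply() needs an object with dimensions. A quick illustration with made-up data (this xdata is hypothetical; in the real project each column of allsubset presumably holds several indices, so xdata[, x] stays a matrix):
xdata <- matrix(rnorm(20), nrow = 5)
apply(xdata[, 1], 1, max)       # Error: dim(X) must have a positive length
apply(xdata[, c(1, 2)], 1, max) # works: xdata[, c(1, 2)] is still a matrix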

Passing an expression into `MoreArgs` of `mapply`

I'm doing some programming using dplyr, and am curious how to pass an expression to mapply, specifically via the MoreArgs argument.
Consider a simple function F that subsets a data.frame based on some ids and a time_range, then outputs a summary statistic based on some other column x.
require(dplyr)
F <- function(ids, time_range, df, date_column, x) {
  date_column <- enquo(date_column)
  x <- enquo(x)
  df %>%
    filter(person_id %chin% ids) %>% # %chin% is fast character matching from data.table (loaded below)
    filter(time_range[1] <= (!!date_column) & (!!date_column) <= time_range[2]) %>%
    summarise(newvar = sum(!!x))
}
We can make up some example data to which we can apply our function F.
person_ids <- lapply(1:2, function(i) sample(letters, size = 10))
time_ranges <- lapply(list(c("2014-01-01", "2014-12-31"),
                           c("2015-01-01", "2015-12-31")), as.Date)
require(data.table)
dt <- CJ(person_id = letters,
         date_col = seq.Date(from = as.Date('2014-01-01'), to = as.Date('2015-12-31'), by = '1 day'))
dt[, z := rnorm(nrow(dt))] # The variable we will later sum over, i.e. apply F to.
We can successfully apply our function to each of our inputs.
F(person_ids[[1]], time_ranges[[1]], dt, date_col, z)
F(person_ids[[2]], time_ranges[[2]], dt, date_col, z)
And so if I wanted, I could write a simple for-loop to solve my problem. But if we try to apply syntactic sugar and wrap everything within mapply, we get an error.
mapply(F, ids = person_ids, time_range = time_ranges, MoreArgs = list(df = dt, date_column = date_col, x = z))
# Error in mapply... object 'date_col' not found
In mapply, MoreArgs is provided as a list whose elements R tries to evaluate, causing the error. As suggested by @Gregor, you can quote the MoreArgs elements that we don't want evaluated immediately, preventing the error and allowing the function to proceed. This can be done with base quote or dplyr's quo:
mapply(F, person_ids, time_ranges, MoreArgs = list(dt, quote(date_col), quote(z)))
mapply(F, person_ids, time_ranges, MoreArgs = list(dt, quo(date_col), quo(z)))
Another option is to use map2 from the purrr package, which is the tidyverse equivalent of mapply with two input vectors. tidyverse functions are set up to work with non-standard evaluation, which avoids the error you're getting with mapply without the need for quoting the arguments:
library(purrr)
map2(person_ids, time_ranges, F, dt, date_col, z)
[[1]]
    newvar
1 40.23419

[[2]]
    newvar
1 71.42327
More generally, you could use pmap, which iterates in parallel over any number of input vectors:
pmap(list(person_ids, time_ranges), F, dt, date_col, z)
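Incidentally, the reason no quoting is needed with map2/pmap is that arguments forwarded through ... travel as unevaluated promises, so enquo() inside F can still capture the original expression. A small sketch (g is just a hypothetical wrapper):
g <- function(...) F(...)
g(person_ids[[1]], time_ranges[[1]], dt, date_col, z) # same result as calling F directly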

Ways to add multiple columns to data frame using plyr/dplyr/purrr

I often have a need to mutate a data frame through the addition of several columns at once using a custom function, preferably with parallelization. Below are the ways I already know how to do this.
Setup
library(dplyr)
library(plyr) # NB: loading plyr after dplyr masks some dplyr verbs; plyr is usually loaded first
library(purrr)
library(doMC)
registerDoMC(2)
df <- data.frame(x = rnorm(10), y = rnorm(10), z = rnorm(10))
Suppose that I want two new columns, foocol = x + y and barcol = (x + y) * 100, but that these are actually complex calculations done in a custom function.
Method 1: Add columns separately using rowwise and mutate
foo <- function(x, y) return(x + y)
bar <- function(x, y) return((x + y) * 100)
df_out1 <- df %>% rowwise() %>% mutate(foocol = foo(x, y), barcol = bar(x, y))
This is not a good solution since it requires two function calls for each row and two "expensive" calculations of x + y. It's also not parallelized.
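For what it's worth, when the calculations are vectorized, a plain mutate() can reuse a column created earlier in the same call, which avoids both the rowwise() overhead and the duplicate x + y (a minimal sketch; df_out1b is just an illustrative name):
df_out1b <- df %>% mutate(foocol = x + y, barcol = foocol * 100)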
Method 2: Trick ddply into rowwise operation
df2 <- df
df2$id <- 1:nrow(df2)
df_out2 <- ddply(df2, .(id), function(r) {
  foocol <- r$x + r$y
  barcol <- foocol * 100
  return(cbind(r, foocol, barcol))
}, .parallel = TRUE)
Here I trick ddply into calling a function on each row by splitting on a unique id column I just created. It's clunky, though, and requires maintaining a useless column.
Method 3: splat
foobar <- function(x, y, ...) {
  foocol <- x + y
  barcol <- foocol * 100
  return(data.frame(x, y, ..., foocol, barcol))
}
df_out3 <- splat(foobar)(df)
I like this solution since you can reference the columns of df in the custom function (which can be anonymous if desired) without explicit indexing. However, this method isn't parallelized.
Method 4: by_row
df_out4 <- df %>% by_row(function(r) {
  foocol <- r$x + r$y
  barcol <- foocol * 100
  return(data.frame(foocol = foocol, barcol = barcol))
}, .collate = "cols")
The by_row function from purrr eliminates the need for the unique id column, but this operation isn't parallelized.
Method 5: pmap_df
df_out5 <- pmap_df(df, foobar)
# or equivalently...
df_out5 <- df %>% pmap_df(foobar)
This is the best option I've found. The pmap family of functions also accepts anonymous functions to apply to the arguments. I believe pmap_df converts df to a list and back, though, so there may be a performance hit.
It's also a bit annoying that I need to reference all the columns I plan on using for calculation in the function definition function(x, y, ...) instead of just function(r) for the row object.
Am I missing any good or better options? Are there any concerns with the methods I described?
How about using data.table?
library(data.table)
foo <- function(x, y) return(x + y)
bar <- function(x, y) return((x + y) * 100)
dt <- as.data.table(df)
dt[, foocol := foo(x, y)]
dt[, barcol := bar(x, y)]
The data.table library is quite fast and has at least some potential for parallelization.
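Both columns can also be added in a single call, which avoids computing x + y twice (a sketch of the same idea; foobar_dt is a hypothetical helper):
foobar_dt <- function(x, y) {
  foocol <- x + y
  list(foocol = foocol, barcol = foocol * 100)
}
dt[, c("foocol", "barcol") := foobar_dt(x, y)]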

Object disappears from namespace in function

I am writing a wrapper to combine any number of datasets row-wise. Since some may have variables the others lack, I first restrict each dataset to the selected variables.
My function looks like this:
rcombine <- function(List, Vars) {
  List2 <- lapply(List, subset, select=Vars)
  Reduce(rbind, List2)
}
When I run the code directly, it works. But in the function, my variable Vars disappears.
For instance:
x <- data.frame('a'=sample(LETTERS, 10), 'b'=sample(LETTERS, 10), 'c'=sample(LETTERS, 10))
y <- data.frame('a'=sample(LETTERS, 10), 'b'=sample(LETTERS, 10), 'e'=sample(LETTERS, 10))
rcombine(list(x, y), c('a', 'b'))
gives me:
Error in eval(expr, envir, enclos) : object 'Vars' not found
but running:
List <- list(x, y)
Reduce(rbind, lapply(List, subset, select=c('a','b')))
Works. I can print Vars from the function, but inside lapply it disappears. What is going on?
subset really shouldn't be used for these types of things: it captures its select argument with substitute() and evaluates it non-standardly in the frame of its caller, and when the call comes from inside lapply, that frame is not where Vars lives. From the help page:
This is a convenience function intended for use interactively. For
programming it is better to use the standard subsetting functions like
[, and in particular the non-standard evaluation of argument subset
can have unanticipated consequences.
For your particular problem, I don't see any issue with simply replacing subset with "[" directly:
rcombine <- function(List, Vars) {
  List2 <- lapply(List, "[", i = , j = Vars, drop = FALSE) # here is the change
  Reduce(rbind, List2)
}

# alternatively...
rcombine <- function(List, Vars) {
  List2 <- lapply(List, function(x) {x[, Vars, drop = FALSE]}) # here is the change
  Reduce(rbind, List2)
}
x <- data.frame('a'=sample(LETTERS, 10), 'b'=sample(LETTERS, 10), 'c'=sample(LETTERS, 10))
y <- data.frame('a'=sample(LETTERS, 10), 'b'=sample(LETTERS, 10), 'e'=sample(LETTERS, 10))
rcombine(list(x, y), c('a', 'b'))
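As a side note, binding once with do.call() is typically faster than pairwise rbind() via Reduce() when the list is long; a minimal variant (rcombine2 is just an illustrative name):
rcombine2 <- function(List, Vars) {
  do.call(rbind, lapply(List, function(d) d[, Vars, drop = FALSE]))
}
rcombine2(list(x, y), c('a', 'b'))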
