R: fast (conditional) subsetting where feasible

I would like to subset rows of my data in several stages. Example data:
library(data.table); set.seed(333); n <- 100
dat <- data.table(id=1:n, x=runif(n,100,120), y=runif(n,200,220), z=runif(n,300,320))
> head(dat)
   id        x        y        z
1:  1 109.3400 208.6732 308.7595
2:  2 101.6920 201.0989 310.1080
3:  3 119.4697 217.8550 313.9384
4:  4 111.4261 205.2945 317.3651
5:  5 100.4024 212.2826 305.1375
6:  6 114.4711 203.6988 319.4913
I am aware that I could apply subset(.) sequentially to achieve this:
> s <- subset(dat, x>119)
> s <- subset(s, y>219)
> subset(s, z>315)
   id        x        y        z
1: 55 119.2634 219.0044 315.6556
My problem is that I need to automate this and it might happen that the subset is empty. In this case, I would want to skip the step(s) that result in an empty set. For example, if my data was
dat2 <- dat[1:50]
> s <- subset(dat2, x > 119)
> s
   id        x        y        z
1:  3 119.4697 217.8550 313.9384
2: 50 119.2519 214.2517 318.8567
the second step, subset(s, y > 219), would come up empty, but I would still want to apply the third step, subset(s, z > 315). Is there a way to apply a subset command only if it results in a non-empty set? I imagine something like subset(s, y > 219, nonzero = TRUE). I would want to avoid constructions like
s <- dat
if (nrow(subset(s, x > 119)) > 0) {s <- subset(s, x > 119)}
if (nrow(subset(s, y > 219)) > 0) {s <- subset(s, y > 219)}
if (nrow(subset(s, z > 315)) > 0) {s <- subset(s, z > 315)}
because I fear the if-then jungle would be rather slow, especially since I need to apply all of this to different data.tables within a list using lapply(.). That's why I am hoping to find a solution optimized for speed.
PS. I only chose subset(.) for clarity; solutions with, e.g., data.table would be just as welcome, if not more so.
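Conceptually, what I imagine is a small wrapper along these lines (an untested sketch; subset_nonempty is a hypothetical name):

subset_nonempty <- function(d, conds) {
  # apply each quoted condition only if at least one row survives it
  for (cond in conds) {
    s <- d[eval(cond, d), ]
    if (nrow(s) > 0) d <- s
  }
  d
}
subset_nonempty(dat, list(quote(x > 119), quote(y > 219), quote(z > 315)))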

I agree with Konrad's answer that this should throw a warning or at least report what happens somehow. Here's a data.table way that will take advantage of indices (see package vignettes for details):
f = function(x, ..., verbose = FALSE) {
  L   = substitute(list(...))[-1]
  mon = data.table(cond = as.character(L))[, skip := FALSE]
  for (i in seq_along(L)) {
    d = eval(substitute(x[cond, verbose = v], list(cond = L[[i]], v = verbose)))
    if (nrow(d)) {
      x = d
    } else {
      mon[i, skip := TRUE]
    }
  }
  print(mon)
  return(x)
}
Usage
> f(dat, x > 119, y > 219, y > 1e6)
        cond  skip
1:   x > 119 FALSE
2:   y > 219 FALSE
3: y > 1e+06  TRUE
   id        x        y        z
1: 55 119.2634 219.0044 315.6556
The verbose option will print extra info provided by the data.table package, so you can see when indices are being used. For example, with f(dat, x == 119, verbose=TRUE), the verbose output reports the index being used.
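For instance (a small illustration of my own, not required by the function), creating a secondary index first should make the verbose output report it:

setindex(dat, x)                  # build a secondary index on column x
indices(dat)                      # "x"
f(dat, x == 119, verbose = TRUE)  # the verbose output should mention the index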
Regarding the concern that "the if-then jungle would be rather slow, especially since I need to apply all of this to different data.tables within a list using lapply(.)": if it's for non-interactive use, it may be better to have the function return list(mon = mon, x = x) to more easily keep track of what the query was and what happened. The verbose console output could also be captured and returned.
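For example, a sketch of that variant (adapted from f above; f2 is a hypothetical name):

f2 <- function(x, ..., verbose = FALSE) {
  L   = substitute(list(...))[-1]
  mon = data.table(cond = as.character(L))[, skip := FALSE]
  for (i in seq_along(L)) {
    d = eval(substitute(x[cond, verbose = v], list(cond = L[[i]], v = verbose)))
    if (nrow(d)) x = d else mon[i, skip := TRUE]
  }
  # return the monitor table alongside the result instead of printing it
  list(mon = mon, x = x)
}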

An interesting approach can be built on a modified version of dplyr's filter(). When a condition is not met, the non_empty_filter function below returns the original data set.
Notes
IMHO, this is fairly non-standard behaviour and should be reported via warning. Of course, this can be removed and has no bearing on the function results.
Function
library(tidyverse)
library(rlang) # enquo

non_empty_filter <- function(df, expr) {
  expr <- enquo(expr)
  res <- df %>% filter(!!expr)
  if (nrow(res) > 0) {
    return(res)
  } else {
    # Indicate that the filter was not applied
    warning("No rows meeting condition")
    return(df)
  }
}
Condition met
Behaviour: Returning one row for which the condition is met.
dat %>%
  non_empty_filter(x > 119 & y > 219)
Results
#    id        x        y        z
# 1: 55 119.2634 219.0044 315.6556
Condition not met
Behaviour: Returning the full data set as the whole condition is not met due to y > 1e6.
dat %>%
  non_empty_filter(x > 119 & y > 219 & y > 1e6)
Results
#      id        x        y        z
#  1:  1 109.3400 208.6732 308.7595
#  2:  2 101.6920 201.0989 310.1080
#  3:  3 119.4697 217.8550 313.9384
#  4:  4 111.4261 205.2945 317.3651
#  5:  5 100.4024 212.2826 305.1375
#  6:  6 114.4711 203.6988 319.4913
#  7:  7 112.1879 209.5716 319.6732
#  8:  8 106.1344 202.2453 312.9427
#  9:  9 101.2702 210.5923 309.2864
# 10: 10 106.1071 211.8266 301.0645
Condition met/not met one-by-one
Behaviour: Skipping filter that would return an empty data set.
dat %>%
  non_empty_filter(y > 1e6) %>%
  non_empty_filter(x > 119) %>%
  non_empty_filter(y > 219)
Results
#    id        x        y        z
# 1: 55 119.2634 219.0044 315.6556
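Since the question mentions applying this across a list of tables, the same chain drops straight into lapply() (a sketch; list_of_dfs is a hypothetical stand-in):

list_of_dfs <- list(full = dat, first50 = dat[1:50])
res <- lapply(list_of_dfs, function(d) {
  d %>%
    non_empty_filter(x > 119) %>%
    non_empty_filter(y > 219) %>%
    non_empty_filter(z > 315)
})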


If rows have a similar element, keep the shorter row

I have a dataframe as shown below.
Data for replication:
x <- data.frame(cluster = c(1,2,3,4,5),
                groups = c('20000127 20000128',
                           '20000127 20000128 20000134',
                           '20000129 20000130 20000131 20000132',
                           '20000133 20000134 20000135 20000136',
                           '20000128 20000133 20000134 20000135 20000136'),
                chr = c(17,26,35,35,44), stringsAsFactors = FALSE)
I'm trying to come up with a way to analyze the 'groups' column for any groups with shared elements and remove the row with the higher character count.
For example,
element 20000128 is present in rows 1, 2 & 5. Since row 1 has the lowest character count, I want to remove rows 2 & 5. I appreciate any help!!
Ideally the end result should only have clusters 1, 3 and 4: each element should appear only once, keeping the clusters with the lowest character count.
Exploring this problem has been fun. I've learned that this is a variation of the set cover problem and is NP-complete.
It would help to understand the scope of your problem. If we are talking 10s of clusters, we could use brute force. If it's thousands of clusters, we are going to have to use an approximation.
I have learned there is an R implementation of the greedy algorithm in the RcppGreedySetCover package.
First we need to convert to two-column long form. We can use dplyr:
library(tidyverse)
longx <- x %>%
  mutate(splitgroups = strsplit(as.character(groups), " ")) %>%
  unnest(splitgroups) %>%
  select(cluster, splitgroups)
Then we can use greedySetCover to approximate the smallest set that covers all elements.
library(RcppGreedySetCover)
greedySetCover(longx)
#100% covered by 3 sets.
#     cluster splitgroups
#  1:       2    20000127
#  2:       3    20000129
#  3:       3    20000130
#  4:       3    20000131
#  5:       3    20000132
#  6:       5    20000128
#  7:       5    20000133
#  8:       5    20000134
#  9:       5    20000135
# 10:       5    20000136
This suggests the set of 2, 3, and 5 covers everything. But this does not fully answer the question because, as you know, there is a set of clusters with a shorter total character count.
However, what we have learned is that the minimum set size is 3 clusters. Now we can test all combinations of 3 clusters.
set.size <- length(unique(greedySetCover(longx)$cluster))
binary.matrix <- table(longx)
combinations <- combn(unique(x$cluster), set.size)
# for each combination: total character count if it covers every element, else NA
total.lengths <- apply(combinations, 2, function(x) {
  if (sum(as.logical(colSums(binary.matrix[x, ]))) == ncol(binary.matrix)) {
    sum(rowSums(binary.matrix[x, ]))
  } else {
    NA
  }
})
min.length <- min(total.lengths, na.rm = TRUE)
min.set <- combinations[, which(total.lengths == min.length)]
x[min.set, ]
#   cluster                                       groups chr
# 1       1                            20000127 20000128  17
# 3       3          20000129 20000130 20000131 20000132  35
# 4       4          20000133 20000134 20000135 20000136  35
Data
x <- data.frame(cluster = c(1,2,3,4,5),
                groups = c('20000127 20000128',
                           '20000127 20000128 20000134',
                           '20000129 20000130 20000131 20000132',
                           '20000133 20000134 20000135 20000136',
                           '20000128 20000133 20000134 20000135 20000136'),
                chr = c(17,26,35,35,44), stringsAsFactors = FALSE)
I had to use a while loop, maybe there's a less loopy solution...
foo <- function(x) {
  i <- 1
  while (i < nrow(x)) {
    grps <- strsplit(x$groups, " ")
    keep <- unlist(lapply(grps, function(x)
      identical(x, grps[[i]]) |
        !any((length(x) > length(grps[[i]]) & duplicated(c(grps[[i]], x))))))
    x <- x[keep, ]
    i <- i + 1
  }
  x
}
foo(x)
  cluster                              groups chr
1       1                   20000127 20000128  17
3       3 20000129 20000130 20000131 20000132  35
4       4 20000133 20000134 20000135 20000136  35
Explanation.
# I created a function to keep things compact and allow it to be used for other datasets.
# The `x` is the argument, assumed to be your data frame.
# 1: foo <- function(x) {
# Start the ball rolling with a counter to use in the while loop.
# 2: i <- 1
# This starts the while loop and will continue until "i" reaches the end of the data.
# But note later that the data may change if there are rows that meet your condition.
# 3: while(i < nrow(x)) {
# Split the groups variable at the " " and store in "grps"
# 4: grps <- strsplit(x$groups, " ")
# This next line does the work.
# It creates a vector of logical indices which are used to remove rows of "x"
# I split this into many lines to explain better.
# 5: keep <- unlist(lapply(grps, function(x) # apply a function to "grps"
# identical(x, grps[[i]]) | # Returns TRUE for each row we are checking
# !any( # Negate the next conditions. They will return rows to remove.
# (length(x) > length(grps[[i]]) & # return TRUE (negated=FALSE) if the length of each x is more than all others
# duplicated(c(grps[[i]], x)))))) # if duplicated, return TRUE (negated=FALSE)
# Update "x" by keeping only the rows that meet the criteria defined in step 5.
# 6: x <- x[keep,]
# Increase i
# 7: i <- i+1
# 8: } # This ends the while loop
# 9: x # Return the result
} # End of function

By group, match values to a particular value

I have a dataset that includes a vote result r for each voter v on a particular decision d. My data thus looks like:
d <- c(1,1,1,1,2,2,2,2,3,3,3,4,4,4,4)
v <- c(6,7,8,9,6,7,8,9,6,7,9,6,7,8,9)
r <- c('y','y','n','n','n','n','n','n','y','y','y','y','y','a','y')
df <- data.frame(d, v, r)
Not every voter votes in every election. What I want to do is see if other voters make the same call as a particular voter (let's say v == 8). Normally I would just use dplyr:
df %>% group_by(d) %>% mutate(like8 = ifelse(r == r[v == 8], 1, 0))
The problem that I have is that that particular voter v == 8 doesn't have a recorded vote for each decision (which is distinct from abstaining votes, which are recorded). Because of this I get the following error.
Error in mutate_impl(.data, dots) :
  Column `like8` must be length 3 (the group size) or one, not 0
What I've done so far is to write up a combination of ifelse and looping in order to get around this issue.
with(df,
for (i in unique(d)) {
if(8 %in% v){
for (j in r[d == i]) {
df$like8[d == i & r == j] <- ifelse(j == r[v == 8], 1, 0)
}
} else {
for (j in r[d == i]){
df$like8[d == i & r == j] <- NA
}
}
}
)
--note: I've never been formally instructed in 'good' programming conventions, so my bracket placement is probably unclear and open to suggestions.
The problem I have is that my actual dataset has over 500,000 observations, and this is extremely slow. I've seen here solutions using data.table for when the value isn't missing, but I don't understand data.table enough to know how to make it work for my case.
Try this:
df %>%
  group_by(d) %>%
  mutate(
    like8 = {
      if (sum(v == 8) > 0) as.numeric(r == r[v == 8])
      else NA
    }
  )
It wraps the test in an if/else statement checking that there is a voter 8 in the group. The as.numeric statement is equivalent to what you wrote, but should be faster when your response is 1/0.
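A quick sanity check of that equivalence (a minimal example of my own; note both forms propagate NA):

x <- c(TRUE, FALSE, NA)
identical(as.numeric(x), ifelse(x, 1, 0))  # TRUE: both give 1, 0, NA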
It is not entirely clear what the expected output is. If we follow the methodology in @Melissa Key's tidyverse answer, the similar approach in data.table (which the OP mentioned in the post) would be
library(data.table)
setDT(df)[, like8 := if(8 %in% v) +(r == r[v == 8]) else NA_integer_, by = d]
df
#     d v r like8
#  1: 1 6 y     0
#  2: 1 7 y     0
#  3: 1 8 n     1
#  4: 1 9 n     1
#  5: 2 6 n     1
#  6: 2 7 n     1
#  7: 2 8 n     1
#  8: 2 9 n     1
#  9: 3 6 y    NA
# 10: 3 7 y    NA
# 11: 3 9 y    NA
# 12: 4 6 y     0
# 13: 4 7 y     0
# 14: 4 8 a     1
# 15: 4 9 y     0
Or we can avoid the if/else by splitting it into two steps and assigning only to the groups that satisfy the condition (8 %in% v):
i1 <- setDT(df)[, .I[8 %in% v], by = d]$V1
df[i1, like8 := +(r == r[v==8]), by = d]
The other values in 'like8' will be filled with NA by default.
data
d <- c(1,1,1,1,2,2,2,2,3,3,3,4,4,4,4)
v <- c(6,7,8,9,6,7,8,9,6,7,9,6,7,8,9)
r <- c('y','y','n','n','n','n','n','n','y','y','y','y','y','a','y')
df <- data.frame(d, v, r)
Another solution using 2 joins:
# initialize column
DT1[, like8 := NA_integer_][
  # set to 0 if voter 8 voted on the decision
  DT1[v == 8L], like8 := 0L, on = .(d)][
  # set to 1 if other voters voted the same in a particular decision
  DT1[v == 8L], like8 := 1L, on = .(d, r)]
data:
library(data.table)
library(microbenchmark)

# generate dummy data
set.seed(0L)
numD <- 100L
numV <- 1e4L
DT <- unique(data.table(d = sample(numD, numD*numV, replace = TRUE),
                        v = sample(numV, numD*numV, replace = TRUE)))
DT[, r := sample(c('y','n','a'), .N, replace = TRUE)]
setorder(DT, d, v, r)

# set key to speed up the subsetting to a voter
setkey(DT, d, v)
DT1 <- copy(DT)
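As a cross-check (my sketch, not part of the original answer), the join result can be compared against the grouped if/else approach on the same data:

DT1[, like8 := NA_integer_][
  DT1[v == 8L], like8 := 0L, on = .(d)][
  DT1[v == 8L], like8 := 1L, on = .(d, r)]
DT2 <- copy(DT)
DT2[, like8 := if (8 %in% v) +(r == r[v == 8]) else NA_integer_, by = d]
identical(DT1$like8, DT2$like8)  # should be TRUE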

Return multiple results of column-to-matrix operations within a data.table

I have a data.table with multiple categorical variables for which I would like to create contrast (or "dummy") variables along with many more numerical variables which I would like to simply pass by reference.
Example dataset:
library('data.table')
d <- data.table(1:3,            # there are lots of numerics, so I want to avoid copying
                letters[1:3],   # convert these to factor then dummy variable
                10:12,
                LETTERS[24:26])
# > d
#    V1 V2 V3 V4
# 1:  1  a 10  X
# 2:  2  b 11  Y
# 3:  3  c 12  Z
The desired result looks like:
> dummyDT(d)
   V1 V3 V2.b V2.c V4.Y V4.Z
1:  1 10    0    0    0    0
2:  2 11    1    0    1    0
3:  3 12    0    1    0    1
which can be produced with:
# this does what I want but is slow and inelegant and not idiomatic data.table
categorToMatrix <- function(x, name_prefix = 'Var'){
  # set levels in order of appearance to avoid default re-sort by alpha
  m <- contrasts(factor(x, levels = unique(x)))
  dimnames(m) <- list(NULL, paste(name_prefix, colnames(m), sep = '.'))
  m
}

dummyDT <- function(d){
  toDummy <- which(sapply(d, function(x) is.factor(x) | is.character(x)))
  if (length(toDummy) > 0){
    dummyComponent <-
      data.table(
        do.call(cbind, lapply(toDummy, function(j) {
          categorToMatrix(d[[j]], name_prefix = names(d)[j])
        }))
      )
    asIs <- (1:ncol(d))[-toDummy]
    if (length(asIs) > 0) {
      allCols <- cbind(d[, asIs, with = FALSE], dummyComponent)
    } else allCols <- dummyComponent
  } else allCols <- d
  return(allCols)
}
(I do not care about maintaining original column ordering.)
I have tried in addition to the above, the approach of splitting each matrix into a list of columns, as in:
# split a matrix into a list of columns and keep track of column names
# expanded from @Tommy's answer at: https://stackoverflow.com/a/6821395/2573061
splitMatrix <- function(m){
  setNames(lapply(seq_len(ncol(m)), function(j) m[, j]), colnames(m))
}
# Example:
splitMatrix(categorToMatrix(d$V2, name_prefix='V2'))
# $V2.b
# [1] 0 1 0
#
# $V2.c
# [1] 0 0 1
which works for an individual column, but when I try to lapply it over multiple columns, the lists somehow get coerced into string-rows and recycled, which baffles me:
dummyDT2 <- function(d){
  stopifnot(inherits(d, 'data.table'))
  toDummy <- which(sapply(d, function(x) is.factor(x) | is.character(x)))
  if (length(toDummy) > 0){
    dummyComponent <- d[, lapply(.SD, function(x) splitMatrix(categorToMatrix(x))),
                        .SDcols = toDummy]
    asIs <- (1:ncol(d))[-toDummy]
    if (length(asIs) > 0) {
      allCols <- cbind(d[, asIs, with = FALSE], dummyComponent)
    } else allCols <- dummyComponent
  } else allCols <- d
  return(allCols)
}
dummyDT2(d)
#    V1 V3    V2
# 1:  1 10 0,1,0
# 2:  2 11 0,0,1
# 3:  3 12 0,1,0
# Warning message:
# In data.table::data.table(...) :
#   Item 2 is of size 2 but maximum size is 3 (recycled leaving remainder of 1 items)
I then tried wrapping splitMatrix with data.table() and got an amusingly laconic error message.
I know that functions like caret::dummyVars exist for data.frame. I am trying to create a data.table optimized version.
Closely related question: How to one-hot-encode factor variables with data.table?
But there are two differences: I do not want full-rank dummy variables (because I'm using this for regression) but rather contrast variables (n-1 of these for n levels), and I have multiple numeric variables that I do not want to OHE.
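For comparison, one base-R route is model.matrix(), though it does not meet all of my requirements: it copies the numeric columns, re-sorts factor levels alphabetically, and names columns V2b rather than V2.b (a sketch; dummyMM is a hypothetical name):

dummyMM <- function(d) {
  toDummy <- names(d)[sapply(d, function(x) is.factor(x) || is.character(x))]
  if (length(toDummy) == 0L) return(d)
  f  <- as.formula(paste("~", paste(toDummy, collapse = " + ")))
  mm <- model.matrix(f, data = d)[, -1, drop = FALSE]  # drop the intercept column
  cbind(d[, setdiff(names(d), toDummy), with = FALSE], as.data.table(mm))
}
dummyMM(d)  # V1, V3, V2b, V2c, V4Y, V4Z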

Referencing a dataframe recursively

Is there a way to have a dataframe refer to itself?
I find myself spending a lot of time writing things like y$Category1[is.na(y$Category1)] <- NULL, which are hard to read and feel like a lot of slow, repetitive typing. I wondered if there was something along the lines of y$Category1[is.na(self)] <- NULL that I could use instead.
Thanks
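In the meantime, the closest I can get is factoring the repetition into a helper built on base R's replace() (a sketch; na_to is a hypothetical name, and it still needs the explicit reassignment):

na_to <- function(x, val) replace(x, is.na(x), val)
y$Category1 <- na_to(y$Category1, 0)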
What a great question. Unfortunately, as @user295691 pointed out in the comments, the issue is with referencing a vector twice: once as the object being indexed and once as the subject of a condition. It does appear impossible to avoid the double reference.
numericVector[cond(numericVector)] <- newVal
What I think we can do is have a nice and neat function so that instead of
# this
y$Category1[is.na(y$Category1)] <- list(NULL)
# we can have this:
NAtoNULL(y$Category1)
For example, the following functions wrap selfAssign() (below):
NAtoNULL(obj) # Replaces NA values in obj with NULL.
NAtoVal(obj, val) # Replaces NA values in obj with val.
selfReplace(obj, toReplace, val) # Replaces toReplace values in obj with val
# and selfAssign can be called directly, but I'm not sure there would be a good reason to
selfAssign(obj, ind, val) # equivalent to obj[ind] <- val
Example:
# sample df
df <- structure(list(subj = c("A", NA, "C", "D", "E", NA, "G"),
                     temp = c(111L, 112L, NA, 114L, 115L, 116L, NA),
                     size = c(0.7133, NA, 0.7457, NA, 0.0487, NA, 0.8481)),
                .Names = c("subj", "temp", "size"),
                row.names = c(NA, -7L), class = "data.frame")
df
  subj temp   size
1    A  111 0.7133
2 <NA>  112     NA
3    C   NA 0.7457
4    D  114     NA
5    E  115 0.0487
6 <NA>  116     NA
7    G   NA 0.8481
# Make some replacements
NAtoNULL(df$size)             # Replace all NA's in df$size with NULL's
NAtoVal(df$temp, 0)           # Replace all NA's in df$temp with 0's
NAtoVal(df$subj, c("B", "E")) # Replace all NA's in df$subj with alternating "B" and "E"
# the modified df is now:
df
  subj temp   size
1    A  111 0.7133
2    B  112   NULL
3    C    0 0.7457
4    D  114   NULL
5    E  115 0.0487
6    E  116   NULL
7    G    0 0.8481
# replace the 0's in temp for NA
selfReplace(df$temp, 0, NA)
# replace NULL's in size for 1's
selfReplace(df$size, NULL, 1)
# replace all "E"'s in subj with alternate c("E", "F")
selfReplace(df$subj, c("E"), c("E", "F"))
df
  subj temp   size
1    A  111 0.7133
2    B  112      1
3    C   NA 0.7457
4    D  114      1
5    E  115 0.0487
6    F  116      1
7    G   NA 0.8481
Right now this works for vectors, but it will fail with *apply. I would love to get it working fully, especially with plyr. The key would be to modify the frame-counting argument n (see the notes below).
FUNCTIONS
The code for the functions are below.
An important point: this does not (yet!) work with *apply / plyr.
I believe it can be made to by modifying the value of n and adjusting sys.parent(.) in match.call(), but it still needs some fiddling.
Any suggestions / modifications would be greatly appreciated.
selfAssign <- function(self, ind, val, n = 1, silent = FALSE) {
  ## assigns val to self[ind] in environment parent.frame(n)
  ## self should be a vector. Currently will not work for matrices or data frames

  ## GRAB THE CORRECT MATCH CALL
  #--------------------------------------
  # if nested function, match.call appropriately
  if (class(match.call()) == "call") {
    mc <- match.call(call = sys.call(sys.parent(1)))
  } else {
    mc <- match.call()
  }
  # needed in case self is complex (ie df$name)
  mc2 <- paste(as.expression(mc[[2]]))

  ## CLEAN UP ARGUMENT VALUES
  #--------------------------------------
  # replace logical indices with numeric indices
  if (is.logical(ind))
    ind <- which(ind)
  # if no indices will be selected, stop here
  if (identical(ind, integer(0)) || is.null(ind)) {
    if (!silent) warning("No indices selected")
    return()
  }
  # if val is a string, we need to wrap it in quotes
  if (is.character(val))
    val <- paste('"', val, '"', sep = "")
  # val cannot directly be NULL, must be list(NULL)
  if (is.null(val))
    val <- "list(NULL)"

  ## CREATE EXPRESSIONS AND EVAL THEM
  #--------------------------------------
  # create expressions to evaluate
  ret <- paste0("'[['(", mc2, ", ", ind, ") <- ", val)
  # evaluate in parent.frame(n)
  eval(parse(text = ret), envir = parent.frame(n))
}

NAtoNULL <- function(obj, n = 1) {
  selfAssign(match.call()[[2]], is.na(obj), NULL, n = n + 1)
}

NAtoVal <- function(obj, val, n = 1) {
  selfAssign(match.call()[[2]], is.na(obj), val, n = n + 1)
}
selfReplace <- function(obj, toReplace, val, n = 1) {
  ## replaces occurrences of toReplace within obj with val

  # determine ind based on value & length of toReplace
  # TODO: this will not work properly for data frames, but neither will selfAssign, yet.
  if (is.null(toReplace)) {
    ind <- sapply(obj, function(x) is.null(x[[1]]))
  } else if (is.na(toReplace)) {
    ind <- is.na(obj)
  } else {
    if (length(obj) > 1) {  # note, this won't work for data frames
      ind <- obj %in% toReplace
    } else {
      ind <- obj == toReplace
    }
  }
  selfAssign(match.call()[[2]], ind, val, n = n + 1)
}
## THIS SHOULD GO INSIDE NAtoNULL, NAtoVal etc.
# todo: modify for use with *apply
if (substr(paste(as.expression(x1)), 1, 10) == "FUN(obj = ") {
  # PASS. This should identify when the call is coming from *apply.
  # In such a case, need to increase n by 1 for apply & lapply. Increase n by 2 for sapply.
  # I'm not sure of the increase required for plyr functions.
}

Aggregate over categories that contain NAs with ddply and lapply?

I would like to aggregate a data.frame over 3 categories, with one of them varying. Unfortunately this one varying category contains NAs (actually it's the reason why it needs to vary). Thus I created a list of data.frames. Every data.frame within this list contains only complete cases with respect to three variables (with only one of them changing).
Let's reproduce this:
library(plyr)
mydata <- warpbreaks
names(mydata) <- c("someValue", "group", "size")
mydata$category <- c(1,2,3)
mydata$categoryA <- c("A","A","X","X","Z","Z")
# add some NA
mydata$category[c(8,10,19)] <- NA
mydata$categoryA[c(14,1,20)] <- NA
# create a list of TRUE/FALSE vectors marking non-NA entries
noNAList <- function(vec){
  res <- !is.na(vec)
  return(res)
}
testTF <- lapply(mydata[, c("category","categoryA")], noNAList)

# create a list of data.frames
selectDF <- function(TFvec){
  res <- mydata[TFvec, ]
  return(res)
}
# check x and see that it may contain NAs as long
# as it's not in one of the 3 categories I want to aggregate over
x <- lapply(testTF, selectDF)
## let's let ddply get to work
doddply <- function(df){
  ddply(df, .(group, size), summarize, sumTest = sum(someValue))
}
y <- lapply(x, doddply); y
y comes very close to what I want to get
$category
  group size sumTest
1     A    L     375
2     A    M     198
3     A    H     185
4     B    L     254
5     B    M     259
6     B    H     169

$categoryA
  group size sumTest
1     A    L     375
2     A    M     204
3     A    H     200
4     B    L     254
5     B    M     259
6     B    H     169
But I need to implement aggregation over a third varying variable, which is in this case category and categoryA. Just like:
  group size category sumTest sumTestTotal
1     A    H        1      46          221
2     A    H        2      46          221
3     A    H        3      93          221
and so forth. How can I add names(x) to lapply, or do I need a loop or environment here?
EDIT:
Note that I want EITHER category OR categoryA added to the mix. In reality I have about 15 mutually exclusive categorical vars.
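To make the goal concrete, the kind of call I am imagining is something like this untested sketch, which threads names(x) through with Map() (it only shows the grouping part, not the sumTestTotal column):

y2 <- Map(function(nm, df) {
  ddply(df, c("group", "size", nm), summarize, sumTest = sum(someValue))
}, names(x), x)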
I think you might be making this really hard on yourself, if I understand your question correctly.
If you want to aggregate the data.frame 'mydata' by three (or four) variables, you would simply do this:
aggregate(someValue ~ group + size + category + categoryA, sum, data=mydata)
  group size category categoryA someValue
1     A    L        1         A        51
2     B    L        1         A        19
3     A    M        1         A        17
4     B    M        1         A        63
aggregate will automatically remove rows that include NA in any of the categories. If someValue is sometimes NA, then you can add the parameter na.rm=T.
I also noted that you put a lot of unnecessary code into functions. For example:
# create a list of data.frames
selectDF <- function(TFvec){
  res <- mydata[TFvec, ]
  return(res)
}
Can be written like:
selectDF <- function(TFvec) mydata[TFvec,]
Also, using lapply to create a list of two data frames without the NA is overkill. Try this code:
x = list(mydata[!is.na(mydata$category), ], mydata[!is.na(mydata$categoryA), ])
I know the question explicitly requests a ddply()/lapply() solution.
But ... if you are willing to come on over to the dark side, here is a data.table-based function that should do the trick:
# Convert mydata to a data.table
library(data.table)
dt <- data.table(mydata, key = c("group", "size"))

# Define workhorse function
myfunction <- function(dt, VAR) {
  E <- as.name(substitute(VAR))
  dt[i = !is.na(eval(E)),
     j = {n <- sum(.SD[, someValue])
          .SD[, list(sumTest = sum(someValue),
                     sumTestTotal = n,
                     share = sum(someValue)/n),
              by = VAR]
         },
     by = key(dt)]
}
# Test it out
s1 <- myfunction(dt, "category")
s2 <- myfunction(dt, "categoryA")
ADDED ON EDIT
Here's how you could run this for a vector of different categorical variables:
catVars <- c("category", "categoryA")
ll <- lapply(catVars,
             FUN = function(X) {
               do.call(myfunction, list(dt, X))
             })
names(ll) <- catVars
lapply(ll, head, 3)
# $category
#      group size category sumTest sumTestTotal     share
# [1,]     A    H        2      46          185 0.2486486
# [2,]     A    H        3      93          185 0.5027027
# [3,]     A    H        1      46          185 0.2486486
#
# $categoryA
#      group size categoryA sumTest sumTestTotal share
# [1,]     A    H         A      79          200 0.395
# [2,]     A    H         X      68          200 0.340
# [3,]     A    H         Z      53          200 0.265
Finally, I found a solution that might not be as slick as Josh's, but it works without any dark forces (data.table). You may laugh: here's my reproducible example using the same sample data as in the question.
qual <- c("category", "categoryA")

# get T / F vectors
noNAList <- function(vec){
  res <- !is.na(vec)
  return(res)
}
selectDF <- function(TFvec) mydata[TFvec, ]

NAcheck <- lapply(mydata[, qual], noNAList)
# create a list of data.frames
listOfDf <- lapply(NAcheck, selectDF)

workhorse <- function(charVec, listOfDf){
  dfs <- list2env(listOfDf, envir = new.env())
  # create expression list
  exlist <- list()
  for (i in 1:length(charVec)){
    exlist[[charVec[i]]] <- parse(text = paste("ddply(", charVec[i],
      ", .(group, size, ", charVec[i], "), summarize, sumTest = sum(someValue))",
      sep = ""))
  }
  res <- lapply(exlist, eval, envir = dfs)
  return(res)
}
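Hypothetical usage (assuming plyr is loaded and mydata from the question):

res <- workhorse(qual, listOfDf)
lapply(res, head, 3)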
Is this more like what you mean? I find your example extremely difficult to understand. In the code below, the method can take any column and aggregate by it, returning multiple aggregation functions of someValue. I then find all the column names you would like to aggregate by and apply the function to that vector.
# Build a method to aggregate by column.
agg.by.col = function(column) {
  by.list = list(mydata$group, mydata$size, mydata[, column])
  names(by.list) = c('group', 'size', column)
  aggregate(mydata$someValue, by = by.list, function(x) c(sum = sum(x), mean = mean(x)))
}

# Find all the column names you want to aggregate by
cols = names(mydata)[!(names(mydata) %in% c('someValue', 'group', 'size'))]

# Apply the method to each column name.
lapply(cols, agg.by.col)
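To keep the category names attached to the results (echoing the names(x) part of the question), the lapply can be wrapped in setNames, e.g.:

results <- setNames(lapply(cols, agg.by.col), cols)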
