I would like to match multiple conditions from independent data tables onto my main data table.
How can I do this using the data.table package?
What would be the most efficient/fastest way?
I have a mock example, with some mock conditions here to illustrate my question:
main_data <- data.frame( pnum = c(1,2,3,4,5,6,7,8,9,10),
age = c(24,35,43,34,55,24,36,43,34,54),
gender = c("f","m","f","f","m","f","m","f","f","m"))
data_1 <- data.frame( pnum = c(1,4,5,8,9),
value_data_1 = c(1, 2, 1, 1, 1),
date = as.Date(c("2019-01-01", "2018-07-01", "2018-01-01", "2016-07-01", "2016-07-01")))
data_2 <- data.frame( pnum = c(1,5,7,8,9),
value_data_2 = c(1, 2, 1, 1, 2),
date = as.Date(c("2019-01-01", "2018-07-01", "2018-01-01", "2016-07-01", "2016-07-01")))
I would like to create a new variable in my main_data table called "matching" that flags the rows that match between data_1 and data_2 under multiple conditions:
First, the value of data_1$value_data_1 has to be equal to 1.
Second, the value of data_2$value_data_2 also has to be equal to 1.
Third, the pnum and the date should match between data_1 and data_2.
When all these conditions are met, I would expect the new output of main_data to look like this:
> main_data
pnum age gender matching
1 1 24 f 1
2 2 35 m 0
3 3 43 f 0
4 4 34 f 0
5 5 55 m 0
6 6 24 f 0
7 7 36 m 0
8 8 43 f 1
9 9 34 f 0
10 10 54 m 0
Until now, I programmed each condition separately and created new placeholder tables in between, but this is not very memory efficient. Is there an efficient way to chain all the conditions using the data.table package specifically?
Here's one way:
library(data.table)
library(magrittr)
setDT(main_data)
setDT(data_1)
setDT(data_2)
main_data %>%
data_1[., on = .(pnum == pnum) ] %>%
data_2[., on = .(pnum == pnum, date == date) ] %>%
.[, matching := fcoalesce(+(value_data_1 == 1 & value_data_2 == 1), 0L) ] %>%
.[, .(pnum, age, gender, matching) ]
# pnum age gender matching
# 1: 1 24 f 1
# 2: 2 35 m 0
# 3: 3 43 f 0
# 4: 4 34 f 0
# 5: 5 55 m 0
# 6: 6 24 f 0
# 7: 7 36 m 0
# 8: 8 43 f 1
# 9: 9 34 f 0
# 10: 10 54 m 0
I used the magrittr package because I find it useful for portraying the flow of the code. It is not at all required; the equivalent pure data.table pipeline for the same code would be:
data_2[
  data_1[main_data, on = .(pnum == pnum)],
  on = .(pnum == pnum, date == date)
][, matching := fcoalesce(+(value_data_1 == 1 & value_data_2 == 1), 0L)
][, .(pnum, age, gender, matching)]
There are other ways to break it out, including the use of temporary (mid-step) variables. (This is mostly style and personal preference.)
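For example, a minimal sketch of the same chain broken up with a temporary mid-step table (assuming the setDT() calls above have already been run):

# step 1: attach data_1 to main_data; step 2: attach data_2 on pnum and date
tmp <- data_1[main_data, on = .(pnum)]
tmp <- data_2[tmp, on = .(pnum, date)]
tmp[, matching := fcoalesce(+(value_data_1 == 1 & value_data_2 == 1), 0L)]
tmp[, .(pnum, age, gender, matching)]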
You can use something like Reduce(merge, list(...))
library(data.table)
setDT(main_data); setDT(data_1); setDT(data_2)
res <- Reduce(function(x, y) {
merge(x, y, by = "pnum", all.x = TRUE)
}, list(main_data, data_1[, -"date"], data_2[, -"date"]))[, `:=`(
matching = 1L - (value_data_1 != 1 | value_data_2 != 1 | is.na(value_data_1) | is.na(value_data_2)),
value_data_1 = NULL,
value_data_2 = NULL
)]
Output
> res[]
pnum age gender matching
1: 1 24 f 1
2: 2 35 m 0
3: 3 43 f 0
4: 4 34 f 0
5: 5 55 m 0
6: 6 24 f 0
7: 7 36 m 0
8: 8 43 f 1
9: 9 34 f 0
10: 10 54 m 0
I have the following data frame in R:
df <- data.frame(name = c('p1_start','p1_end','p2_start','p2_end','p1_start','p1_end','p2_start','p2_end','p1_start','p1_end','p2_start','p2_end','p1_start','p1_end','p2_start','p2_end'),
time = c(1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31),
target = c(1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2),
comb = c(0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1))
And another data frame:
data <- data.frame(time = c(2,5,8,14,14,20,21,26,28,28),
name = c('a','b','c','d','e','f','g','h','i','j'))
So, if we sort df by target and comb, we notice that there are basically "groups". For example, for target=1 and comb=0 there are four entries p1_start, p1_end, p2_start, p2_end, and it is the same for all other target/comb combinations.
On the other side data contains entries with time being a timestamp.
Goal: I want to map the values from both data frames based on time.
Example: The first entry of data has time=2 meaning it happened between p1_start,p1_end so it should get the values target=1 and comb=0 mapped to the data data frame.
Example 2: The entries of data with time=14 happened between p2_start,p2_end so they should get the values target=1 and comb=1 mapped to the data data frame.
Idea: I thought I would iterate over df by target and comb and, for each combination, check if there are rows in data whose time falls in between. The check could be done with the following command:
data[which(data$time > p1_start & data$time < p2_end),]
once I get the rows it is easy to append the values.
Problem: how could I do the iteration? I tried with the following:
df %>%
group_by(target, comb) %>%
print(data[which(data$time > df$p1_start & data$time < df$p2_end),])
But I am getting an error that time has not been initialized.
Your problem is best known as a non-equi join: we need to find the range in one data frame that corresponds to each value in another. This is handled well by the data.table package.
We would first transform your df into a format suitable for performing the join, and then join data with df by time <= end while time >= start. Here is the code:
library(data.table)
setDT(df)[, c("type", "name") := tstrsplit(name, "_", fixed = TRUE)]
df <- dcast(df, ... ~ name, value.var = "time")
cols <- c("target", "comb", "type")
setDT(data)[df, (cols) := mget(paste0("i.", cols)), on = .(time<=end, time>=start)]
After dcast, df looks like this:
target comb type end start
1: 1 0 p1 3 1
2: 1 0 p2 7 5
3: 1 1 p1 11 9
4: 1 1 p2 15 13
5: 2 0 p1 19 17
6: 2 0 p2 23 21
7: 2 1 p1 27 25
8: 2 1 p2 31 29
And the output is:
> data
time name target comb type
1: 2 a 1 0 p1
2: 5 b 1 0 p2
3: 8 c NA NA <NA>
4: 14 d 1 1 p2
5: 14 e 1 1 p2
6: 20 f NA NA <NA>
7: 21 g 2 0 p2
8: 26 h 2 1 p1
9: 28 i NA NA <NA>
10: 28 j NA NA <NA>
Here is a tidyverse solution:
library(tidyr)
library(dplyr)
df %>%
rename(name_df=name) %>%
mutate(x = time +1) %>%
pivot_longer(
cols = c(time, x),
names_to = "helper",
values_to = "time"
) %>%
right_join(data, by="time") %>%
select(time, name, target, comb)
time name target comb
<dbl> <chr> <dbl> <dbl>
1 2 a 1 0
2 5 b 1 0
3 8 c 1 0
4 14 d 1 1
5 14 e 1 1
6 20 f 2 0
7 21 g 2 0
8 26 h 2 1
9 28 i 2 1
10 28 j 2 1
df <- data.frame(name = c('p1_start','p1_end','p2_start','p2_end','p1_start','p1_end','p2_start','p2_end','p1_start','p1_end','p2_start','p2_end','p1_start','p1_end','p2_start','p2_end'),
time = c(1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31),
target = c(1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2),
comb = c(0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1))
data <- data.frame(time = c(2,5,8,14,14,20,21,26,28,28),
name = c('a','b','c','d','e','f','g','h','i','j'))
library(fuzzyjoin)
library(tidyverse)
tmp <- df %>%
separate(name,
into = c("p", "period"),
sep = "_",
remove = TRUE) %>%
pivot_wider(
id_cols = c(p, target, comb),
names_from = period,
values_from = time
) %>%
select(-p)
fuzzy_left_join(
x = data,
y = tmp,
by = c("time" = "start",
"time" = "end"),
match_fun = list(`>=`, `<=`))
#> time name target comb start end
#> 1 2 a 1 0 1 3
#> 2 5 b 1 0 5 7
#> 3 8 c NA NA NA NA
#> 4 14 d 1 1 13 15
#> 5 14 e 1 1 13 15
#> 6 20 f NA NA NA NA
#> 7 21 g 2 0 21 23
#> 8 26 h 2 1 25 27
#> 9 28 i NA NA NA NA
#> 10 28 j NA NA NA NA
Created on 2022-01-11 by the reprex package (v2.0.1)
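If the helper start and end columns are not wanted in the result, a final select() (dplyr, already attached via tidyverse) drops them; a small sketch reusing the objects above:

fuzzy_left_join(
  x = data,
  y = tmp,
  by = c("time" = "start",
         "time" = "end"),
  match_fun = list(`>=`, `<=`)) %>%
  select(-start, -end)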
I have the following data frame as an example
df <- data.frame(score=letters[1:15], total1=1:15, total2=16:30)
> df
score total1 total2
1 a 1 16
2 b 2 17
3 c 3 18
4 d 4 19
5 e 5 20
6 f 6 21
7 g 7 22
8 h 8 23
9 i 9 24
10 j 10 25
11 k 11 26
12 l 12 27
13 m 13 28
14 n 14 29
15 o 15 30
I would like to aggregate my data frame by summing over groups of rows with different names, i.e.
groups sum1 sum2
'a-b-c' 6 51
'd-e-f' 15 60
etc
All the given answers to this kind of question assume that the strings repeat in the row.
The usual aggregate function that I use to obtain the summary delivers a different result:
aggregate(df$total1, by=list(sum1=df$score %in% c('a','b','c'), sum2=df$score %in% c('d','e','f')), FUN=sum)
sum1 sum2 x
1 FALSE FALSE 99
2 TRUE FALSE 6
3 FALSE TRUE 15
If you want a tidyverse solution, here is one possibility:
library(dplyr)
df <- data.frame(score=letters[1:15], total1=1:15, total2=16:30)
df %>%
mutate(groups = case_when(
score %in% c("a","b","c") ~ "a-b-c",
score %in% c("d","e","f") ~ "d-e-f"
)) %>%
group_by(groups) %>%
summarise_if(is.numeric, sum)
returns
# A tibble: 3 x 3
groups total1 total2
<chr> <int> <int>
1 a-b-c 6 51
2 d-e-f 15 60
3 <NA> 99 234
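If the leftover rows (scores g through o) should be dropped rather than summarised under NA, a filter() step before grouping does it; a minimal sketch reusing the same pipeline:

df %>%
  mutate(groups = case_when(
    score %in% c("a","b","c") ~ "a-b-c",
    score %in% c("d","e","f") ~ "d-e-f"
  )) %>%
  filter(!is.na(groups)) %>%
  group_by(groups) %>%
  summarise_if(is.numeric, sum)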
Add a "groups" column with the category value.
df$groups = NA
and then define each group like this:
df$groups[df$score=="a" | df$score=="b" | df$score=="c" ] = "a-b-c"
Finally aggregate by that column.
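A minimal sketch of that final step, assuming total1 and total2 are the value columns as in the example data (rows left with groups = NA are dropped automatically by the formula interface):

aggregate(cbind(total1, total2) ~ groups, data = df, FUN = sum)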
Here's a solution that works for any sized data frame.
df <- data.frame(score=letters[1:15], total1=1:15, total2=16:30)
# I'm adding a row to demonstrate that the grouping pattern works when the
# number of rows is not equally divisible by 3.
df <- rbind(df, data.frame(score = letters[16], total1 = 16, total2 = 31))
# A vector that represents the correct groupings for the data frame.
groups <- c(rep(1:floor(nrow(df) / 3), each = 3),
rep(floor(nrow(df) / 3) + 1, nrow(df) - length(1:(nrow(df) / 3)) * 3))
# Your method of aggregation by `groups`. I'm going to use `data.table`.
require(data.table)
dt <- as.data.table(df)
dt[, group := groups]
aggDT <- dt[, list(score = paste0(score, collapse = "-"),
total1 = sum(total1), total2 = sum(total2)), by = group][
, group := NULL]
aggDT
score total1 total2
1: a-b-c 6 51
2: d-e-f 15 60
3: g-h-i 24 69
4: j-k-l 33 78
5: m-n-o 42 87
6: p 16 31
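As a side note, an equivalent grouping vector can be built more compactly with ceiling(), which also handles the trailing partial group:

groups <- ceiling(seq_len(nrow(df)) / 3)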
I'm working with R.
WHAT I HAVE:
ID_1 ID_2 Date x_1 y_2
1 12 3 2011-12-21 15 10
2 12 13 2011-12-22 50 40
3 3 12 2011-12-22 20 30
4 15 13 2011-12-23 30 20
...
and so on
TARGET:
ID_1 ID_2 Date x_1 y_2 XX_1 YY_2
1 12 3 2011-12-21 15 10 0 0
2 12 13 2011-12-22 50 40 15 0
3 3 12 2011-12-22 20 30 10 50
4 15 13 2011-12-23 30 20 0 40
...
and so on
I want XX_1 and YY_2 to hold the values from the columns x_1 and y_2 corresponding to the previous occurrence of ID_1 and ID_2, or "0" in case no value is available before that date. I don't know how to handle the fact that the same ID can appear in either ID_1 or ID_2 (like IDs 3 and 12 in the example).
Update for @Ekatef, matching ID_1 AND ID_2 (find a match of the whole ID row, even if the order of the IDs is switched):
ID_1 ID_2 Date x_1 y_2 XX_1 YY_2
1 12 3 2011-12-21 15 10 0 0
2 12 13 2011-12-22 50 40 0 0
3 3 12 2011-12-22 20 30 10 15
4 15 13 2011-12-23 30 20 0 0
5 12 13 2011-12-23 10 5 50 40
The OP has requested to copy the previous value for an ID (if any) to the appropriate new column.
This can be solved by reshaping multiple columns simultaneously from wide to long format, finding the previous value by shifting / lagging, and reshaping back to wide format:
library(data.table)
setDT(DF)[, rn := .I]
long <- melt(DF, id.vars = c("rn", "Date"), measure.vars = patterns("^ID", "^x|y"),
value.name = c("ID", "value"))
long[order(Date), previous := shift(value, fill = 0), by = ID]
dcast(long, rn + Date ~ variable, value.var = c("ID", "value", "previous"))
rn Date ID_1 ID_2 value_1 value_2 previous_1 previous_2
1: 1 2011-12-21 12 3 15 10 0 0
2: 2 2011-12-22 12 13 50 40 15 0
3: 3 2011-12-22 3 12 20 30 10 50
4: 4 2011-12-23 15 13 30 20 0 40
Alternatively, the final call to dcast() can be replaced by an update while joining:
DF[long, on = .(rn),
c("XX_1", "YY_2") := .(previous[variable == 1L], previous[variable == 2L])][
, rn := NULL]
DF
ID_1 ID_2 Date x_1 y_2 XX_1 YY_2
1: 12 3 2011-12-21 15 10 0 0
2: 12 13 2011-12-22 50 40 15 0
3: 3 12 2011-12-22 20 30 10 50
4: 15 13 2011-12-23 30 20 0 40
which reproduces exactly OP's expected result.
Data
library(data.table)
DF <- fread(
"i ID_1 ID_2 Date x_1 y_2
1 12 3 2011-12-21 15 10
2 12 13 2011-12-22 50 40
3 3 12 2011-12-22 20 30
4 15 13 2011-12-23 30 20 ",
drop = 1L
)
If I understand you correctly, the target ID should be looked up from left to right and from bottom to top in all the rows strictly above the given ID value. I would write a function to find the coordinates of the preceding ID like this:
# find the indices of the preceded ID value
# #id_matrix == your_data_frame[, c("ID_1", "ID_2")]
# [#i_of_row, #i_of_col] are the coordinates of the considered ID
# i_of_row > 1
FindPreviousID <- function(id_matrix, i_of_row, i_of_col) {
shorten_matrix <- id_matrix[1:(i_of_row - 1),,drop = FALSE]
rev_ind <- match(table = rev(t(shorten_matrix)),
x = id_matrix[i_of_row, i_of_col], nomatch = NA_real_)
n_row_found <- floor((length(shorten_matrix) - rev_ind)/2) + 1
n_col_found <- (length(shorten_matrix) - rev_ind) %% ncol(shorten_matrix) + 1
return(c(row = n_row_found, col = n_col_found))
}
...and use it to calculate XX_1 and YY_2:
# emulate the original dataframe
ID_1 <- c(12,12,3,15,16,3)
ID_2<-c(3,13,12,13,17,15)
ids <- cbind(ID_1, ID_2) # IDs columns
x1 <- c(15, 50, 20, 30, 51, 60)
y2 <- c(10, 40, 30, 20, 53, 62)
vars <- cbind(x1, y2) # x&y columns
# assuming that the first XX_1 & YY_2 should be always 0
indices_XX <- sapply(FUN = function(i) FindPreviousID(id_matrix = ids, i_of_col = 1, i),
X = seq(along.with = ids[, 1])[-1])
indices_YY <- sapply(FUN = function(i) FindPreviousID(id_matrix = ids, i_of_col = 2, i),
X = seq(along.with = ids[, 1])[-1])
# construct XX and YY columns
XX_column <- c(NA, vars[t(indices_XX)])
XX_column[is.na(XX_column)] <- 0
YY_column <- c(NA, vars[t(indices_YY)])
YY_column[is.na(YY_column)] <- 0
Hope that helps :)
Update: If you are interested in finding a pair of IDs instead of a single ID, the function should be redesigned. One possible solution looks like this:
FindPreviousIDsPair <- function(id_matrix, i_of_row) {
shorten_matrix <- id_matrix[1:(i_of_row - 1),,drop = FALSE]
string_to_search_for <- id_matrix[i_of_row, ]
string_to_search_for_sorted <-
string_to_search_for[order(string_to_search_for)]
found_rows_boolean <- sapply(FUN = function(i) all(shorten_matrix[i,
order(shorten_matrix[i, ])] ==
string_to_search_for_sorted), X = 1:(i_of_row - 1))
found_row_n <- ifelse(any(found_rows_boolean),
max(which(found_rows_boolean)), NA_real_)
found_col_of_DI1 <- ifelse(any(found_rows_boolean),
match(string_to_search_for[1], shorten_matrix[found_row_n, ]), NA_real_)
found_col_of_DI2 <- ifelse(any(found_rows_boolean),
match(string_to_search_for[2], shorten_matrix[found_row_n, ]), NA_real_)
return(c(found_row_n, found_col_of_DI1, found_col_of_DI2))
}
Application of the redesigned look-up function to calculate XX and YY:
indices_of_vars <- sapply(FUN = function(i) FindPreviousIDsPair(id_matrix =
ids, i), X = seq(along.with = ids[, 1])[-1])
indices_XX <- indices_of_vars[1:2, ]
indices_YY <- indices_of_vars[c(1, 3), ]
XX_column <- c(NA, vars[t(indices_XX)])
XX_column[is.na(XX_column)] <- 0
YY_column <- c(NA, vars[t(indices_YY)])
YY_column[is.na(YY_column)] <- 0
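To assemble the final table from these pieces, the new columns can be bound to the emulated data; a sketch assuming the objects defined above:

result <- data.frame(ids, vars, XX_1 = XX_column, YY_2 = YY_column)
result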
My dataframe looks like this
ID t1 obs1 t2 obs2 t3 obs3
1 0 a 11 d 0 g
2 0 b 13 e 11 i
3 0 c 0 f 0 h
I need to make sure each ID has at least one t above 10 (delete the row if not). Then, I want to save the lowest t value above 10, but also save the corresponding obs in new columns. (The complicated part of my question is that the lowest t above 10 could be in any column.) The obs corresponding to a given t is located in the next column, so that helps. So my resulting data frame would look like this:
ID t1 obs1 t2 obs2 t3 obs3 lowesttabove10 correspondingobs
1 0 a 11 d 0 g 11 d
2 0 b 13 e 11 i 11 i
With data.table, go to long format:
library(data.table)
setDT(DT)
dat = melt(DT, measure.vars = patterns("^t\\d+$", "^obs\\d+$"), value.name = c("t", "obs"))
setorder(dat, ID, variable)
# ID variable t obs
# 1: 1 1 0 a
# 2: 1 2 11 d
# 3: 1 3 0 g
# 4: 2 1 0 b
# 5: 2 2 13 e
# 6: 2 3 11 i
# 7: 3 1 0 c
# 8: 3 2 0 f
# 9: 3 3 0 h
Find max value per group and mark groups to keep:
IDDT = dat[order(-t),
.(max.variable = first(variable), max.t = first(t), max.obs = first(obs))
, by=ID]
IDDT[, keep := max.t > 10]
# ID max.variable max.t max.obs keep
# 1: 2 2 13 e TRUE
# 2: 1 2 11 d TRUE
# 3: 3 1 0 c FALSE
Find min value over 10 per kept group using a rolling update join:
IDDT[(keep), c("my.variable", "my.t", "my.obs") := {
m = .(ID = ID, t_thresh = 10)
dat[m, on=.(ID, t = t_thresh), roll=-Inf, .(x.variable, x.t, x.obs)]
}]
# ID max.variable max.t max.obs keep my.variable my.t my.obs
# 1: 2 2 13 e TRUE 3 11 i
# 2: 1 2 11 d TRUE 2 11 d
# 3: 3 1 0 c FALSE NA NA NA
I would stop here, with the main data in long format dat and the ID level variables in the separate table IDDT. To filter dat to groups that should be kept: dat[IDDT[(keep), .(ID)], on=.(ID)]. See ?data.table and the other intro materials mentioned when you load the package for details on the syntax.
See ?dcast if you insist on going back to wide.
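For completeness, a sketch of that cast back to wide, joining the summary columns onto the kept groups (note that dcast names the columns t_1, obs_1, etc. rather than t1, obs1):

wide = dcast(dat[IDDT[(keep), .(ID)], on=.(ID)], ID ~ variable, value.var = c("t", "obs"))
wide[IDDT, on=.(ID), `:=`(lowesttabove10 = i.my.t, correspondingobs = i.my.obs)]
wide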
Using base R:
Drop all rows with no t-values above 10:
df1 <- df1[rowSums(df1[, grepl("^t", colnames(df1))] >10) > 0, ]
Determine the group that contains the lowest value above 10 and then retrieve values:
df1$group <- apply(df1[grepl("^t", names(df1))], 1, function(x) which(x == min(x[x > 10])))
df1 <- cbind(df1, do.call(rbind, lapply(seq_len(nrow(df1)),
function(x) setNames(df1[x, paste0(c("t", "obs"), df1$group[x])],
c("lowesttabove10", "correspondingobs")))))
> df1
ID t1 obs1 t2 obs2 t3 obs3 group lowesttabove10 correspondingobs
1 1 0 a 11 d 0 g 2 11 d
2 2 0 b 13 e 11 i 3 11 i
My approach is not neat, but it still works. You can try it.
library(dplyr)
library(reshape)
library(stringr) # for str_detect() used below
df1=melt(df,id='ID')
df2=df1%>%group_by(ID)%>%filter(value>10)%>%dplyr::slice(which.min(value))%>%na.omit()
> df2
# A tibble: 2 x 3
# Groups: ID [2]
ID variable value
<int> <fctr> <chr>
1 1 t2 11
2 2 t3 11
df2$variable=as.character(df2$variable)
C=as.numeric(gsub("[[:alpha:]]", "", df2$variable))
df=df[df$ID%in%df2$ID,]
for (i in 1:length(C)){
DF1=df[i,str_detect(names(df),as.character(C[i]))]
names(DF1)=c('lowesttabove10 ','correspondingobs')
if (i ==1 ){DFF=DF1}else{DFF=rbind(DFF,DF1)}
}
cbind(df,DFF)
ID t1 obs1 t2 obs2 t3 obs3 lowesttabove10 correspondingobs
1 1 0 a 11 d 0 g 11 d
2 2 0 b 13 e 11 i 11 i
This solution uses dplyr and tidyr in one pipeline. dt is the original data, while dt2 is the final output.
library(dplyr)
library(tidyr)
dt2 <- dt %>%
gather(t_group, t_value, starts_with("t")) %>%
gather(obs_group, obs_value, starts_with("obs")) %>%
filter(gsub("t", "", t_group) == gsub("obs", "", obs_group)) %>%
filter(t_value >= 10) %>%
filter(t_value == min(t_value)) %>%
select(ID, lowesttabove10 = t_value, correspondingobs = obs_value) %>%
inner_join(dt, by = "ID") %>%
select(colnames(dt), lowesttabove10, correspondingobs)
dt2
ID t1 obs1 t2 obs2 t3 obs3 lowesttabove10 correspondingobs
1 1 0 a 11 d 0 g 11 d
2 2 0 b 13 e 11 i 11 i
Data:
dt <- read.table(text = "ID t1 obs1 t2 obs2 t3 obs3
1 0 a 11 d 0 g
2 0 b 13 e 11 i
3 0 c 0 f 0 h",
header = TRUE, stringsAsFactors = FALSE)
I would like to reference a column inside the summarise() in dplyr with its index rather than with its name. For example:
> a
id visit timepoint bedroom den
1 0 0 62 NA
2 1 0 53 6.00
3 2 0 56 2.75
4 0 1 55 NA
5 1 2 61 NA
6 2 0 54 NA
7 0 1 58 2.75
8 1 2 59 NA
9 2 2 60 NA
10 0 1 57 NA
# E.g.
a %>% group_by(visit) %>% summarise(avg.bedroom = mean(bedroom, na.rm = T))
# Returns
visit avg.bedroom
<dbl> <dbl>
1 0 4.375
2 1 2.750
3 2 NaN
How could I use the index of column "bedroom" rather than its name in the summarise clause? I tried:
a %>% group_by(visit) %>% summarise("4" = mean(.[[4]], na.rm = T))
but this returned incorrect results:
visit `4`
<dbl> <dbl>
1 0 3.833333
2 1 3.833333
3 2 3.833333
Is my objective achievable, and if yes, how? Thank you.
Perhaps not exactly what you're looking for, but one option would be to use purrr rather than dplyr. Something like
# Read in data
d <- read.table(textConnection(" id visit timepoint bedroom den
1 12 0 62 NA
2 14 0 53 6.00
3 14 0 56 2.75
4 14 1 55 NA
5 14 2 61 NA
6 15 0 54 NA
7 15 1 58 2.75
8 16 2 59 NA
9 16 2 60 NA
10 17 1 57 NA "),
header = TRUE)
library(purrr)
d %>%
split(.$timepoint) %>%
map_dbl(function(x) mean(x[ ,5], na.rm = TRUE))
# 0 1 2
# 4.375 2.750 NaN
Or, with base
aggregate(d[ ,5] ~ timepoint, data = d, mean)
# timepoint d[, 5]
# 1 0 4.375
# 2 1 2.750
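Another option for true index-based selection: look the column name up by position and use dplyr's .data pronoun inside summarise(). A sketch against the question's a, where column 4 is bedroom:

library(dplyr)
a %>%
  group_by(visit) %>%
  summarise(avg.bedroom = mean(.data[[names(a)[4]]], na.rm = TRUE))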
The answer I found is the summarize_at() function of dplyr. Here is how I used summarize_at() to create summary statistics on subsets of my dataframe where the columns were not known in advance (object is my original dataframe which is in a long form and has a column -- room -- that contains the names of the rooms, as well as two other columns, "visit" and "value"):
library(dplyr)
library(tidyr)
# Convert object to a wide form
object$row <- 1 : nrow(object)
y <- spread(object, room, value)
# Remove the row column from y
y <- y %>% select(-row)
# Initialize stat1, the dataframe with the summary
# statistics
stat1 <- data.frame(visit = c(0, 1, 2))
# Find the number of columns that stat1 will eventually
# have
y <- y %>% filter(id == id) %>%
select_if(function(col) mean(is.na(col)) != 1)
n <- ncol(y)
# Append columns with summary statistics to stat1
for (i in 3 : n) {
t <- y %>% group_by(visit) %>%
summarise_at(c(i), mean, na.rm = T)
t[, 2] <- round(t[, 2], 2)
stat1 <- cbind(stat1, t[, 2])
}
# Pass the dataframe stat1 to the list "results"
results$stat1 <- stat1
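In more recent dplyr versions (1.0.0 and later), the whole loop can be replaced by a single summarise() with across(); a sketch assuming the same wide table y built above:

library(dplyr)
stat1 <- y %>%
  select(-id) %>% # drop the id column, assuming it is still present
  group_by(visit) %>%
  summarise(across(where(is.numeric), ~ round(mean(.x, na.rm = TRUE), 2)))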