I am trying to use the group-by functionality of the data.table package in R.
library(data.table)
start <- as.Date('2014-1-1')
end <- as.Date('2014-1-6')
time.span <- seq(start, end, "days")
a <- data.table(date = time.span, value=c(1,2,3,4,5,6), group=c('a','a','b','b','a','b'))
date value group
1 2014-01-01 1 a
2 2014-01-02 2 a
3 2014-01-03 3 b
4 2014-01-04 4 b
5 2014-01-05 5 a
6 2014-01-06 6 b
a[,mean(value),by=group]
> group V1
1: a 2.6667
2: b 4.3333
This works fine.
Since I am working with dates, it can happen that a particular date has not just one group but two.
a <- data.table(date = time.span, value=c(1,2,3,4,5,6), group=list('a',c('a','b'),'b','b','a','b'))
date value group
1 2014-01-01 1 a
2 2014-01-02 2 c("a", "b")
3 2014-01-03 3 b
4 2014-01-04 4 b
5 2014-01-05 5 a
6 2014-01-06 6 b
a[,mean(value),by=group]
> Error in `[.data.table`(a, , mean(value), by = group) :
The items in the 'by' or 'keyby' list are length (1,2,1,1,1,1). Each must be same length as rows in x or number of rows returned by i (6).
I would like the date that has both groups to be used when calculating the mean of group a as well as the mean of group b.
Expected results:
mean a: (1 + 2 + 5) / 3 = 2.6667
mean b: (2 + 3 + 4 + 6) / 4 = 3.75
Is that possible with the data.table package?
Update
Thanks to akrun, my initial issue is solved. After "splitting" the data.table and (in my case) calculating different factors based on the groups, I need the data.table back in its "original" form with unique rows based on the date. My solution so far:
a <- data.table(date = time.span, value=c(1,2,3,4,5,6), group=list('a',c('a','b'),'b','b','a','b'))
b <- a[rep(1:nrow(a), lengths(group))][, group:=unlist(a$group)]
date value group
1 2014-01-01 1 a
2 2014-01-02 2 a
3 2014-01-02 2 b
4 2014-01-03 3 b
5 2014-01-04 4 b
6 2014-01-05 5 a
7 2014-01-06 6 b
# creates new column with mean based on group
b[,factor := mean(value), by=group]
# creates new data.table c without duplicate rows (based on date); if a row has groups a & b it takes the product of their factors
c <- b[,.(value = unique(value), group = list(group), factor = prod(factor)),by=date]
        date value       group      factor
1 2014-01-01     1           a 2.666666667
2 2014-01-02     2 c("a", "b")          10
3 2014-01-03     3           b        3.75
4 2014-01-04     4           b        3.75
5 2014-01-05     5           a 2.666666667
6 2014-01-06     6           b        3.75
I guess it is not the perfect way to do it, but it works. Any suggestions on how I could do it better?
Alternative solution (really slow!!!):
d <- a[rep(1:nrow(a), lengths(group))][,group:=unlist(a$group)][, mean(value), by = group]
for(i in 1:NROW(a)){
  y1 <- 1
  for(j in a[i, group][[1]]){
    y1 <- y1 * d[group == j, V1]
  }
  a[i, factor := y1]
}
My fastest solution so far:
# split rows that have more than one group
b <- a[rep(1:nrow(a), lengths(group))][, group:=unlist(a$group)]
# calculate mean of different groups
b <- b[,factor := mean(value), by=group]
# only keep date + factor columns
b <- b[,.(date, factor)]
# summarise rows by date
b <- b[,lapply(.SD,prod), by=date]
# add summarised factor column to initial data.table
c <- merge(a,b,by='date')
Any chance to make it faster?
One option would be to group by the row sequence: we unlist the list column ('group'), paste the list elements together (toString(..)), use cSplit from splitstackshape with direction='long' to reshape it into 'long' format, and then get the mean of the 'value' column using 'grp' as the grouping variable.
library(data.table)
library(splitstackshape)
a[, grp:= toString(unlist(group)), 1:nrow(a)]
cSplit(a, 'grp', ', ', 'long')[, mean(value), grp]
# grp V1
#1: a 2.666667
#2: b 3.750000
Just realized that another option using splitstackshape would be listCol_l, which unlists a list column into 'long' form. As the output is a data.table, we can use data.table methods to calculate the mean. It is a much more compact way to get the mean.
listCol_l(a, 'group')[, mean(value), group_ul]
# group_ul V1
#1: a 2.666667
#2: b 3.750000
Or another option without using splitstackshape would be to replicate the rows of the dataset by the length of each list element. lengths is a convenient wrapper for sapply(group, length) and is much faster. Then, we change the 'group' column by unlisting the original 'group' from dataset 'a' and get the mean of 'value', grouped by 'group'.
a[rep(1:nrow(a), lengths(group))][,
group:=unlist(a$group)][, mean(value), by = group]
# group V1
#1: a 2.666667
#2: b 3.750000
A shorter solution posted by @mike-h in this question also uses unlist() but groups by the remaining columns:
require(data.table)
a = data.table(date = time.span,
value = c(1,2,3,4,5,6),
group = list('a',c('a','b'),'b','b','a','b'))
a[ , .(group = unlist(group)), .(date, value)][ , mean(value), group ]
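For what it's worth, here is a minimal benchmarking sketch (my own addition, not part of the original answers) comparing the two plain-data.table approaches with the microbenchmark package. On a toy table like this the timings mostly reflect per-call overhead, so rerun it on data of a realistic size:
library(data.table)
library(microbenchmark)

# the example data from the question
time.span <- seq(as.Date('2014-1-1'), as.Date('2014-1-6'), "days")
a <- data.table(date = time.span,
                value = c(1, 2, 3, 4, 5, 6),
                group = list('a', c('a', 'b'), 'b', 'b', 'a', 'b'))

microbenchmark(
  rep_unlist = a[rep(1:nrow(a), lengths(group))][
                  , group := unlist(a$group)][, mean(value), by = group],
  unlist_by  = a[, .(group = unlist(group)), .(date, value)][
                  , mean(value), group],
  times = 100L
)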
Related
If I have data that I can group by a variable, how can I get the last observation of the previous group?
I have the following data:
dt <- data.table(a=c(1,1,1,2,2,2,2,2,3,3,3,3,3,3,4,4,5,5,5,5,5), b=sample.int(21))
I would like to create a new data.table that has the group ID and the difference between the last observation of the group and the last observation of the previous group. So from the above I'd get:
a c
1: 1 NA
2: 2 9
3: 3 1
4: 4 -8
5: 5 5
Thanks!
We group by 'a', get the last element of 'b', and then subtract its lag (obtained with shift) to get the difference from the previous group's last observation:
dt[, .(c = last(b)), a][, c := c - shift(c)][]
Here is a way:
dt[, c := b * (1:.N == .N), by = a]     ## get last row within the group
dt <- dt[b == c]                        ## filter data.table to get rows of interest
dt[, c := c - shift(c, type = "lag")][] ## difference from the previous group's last value, using shift with the lag argument
#   a  b  c
#1: 1 11 NA
#2: 2 10 -1
#3: 3 18  8
#4: 4 19  1
#5: 5 12 -7
data
set.seed(1)
dt <- data.table(a=c(1,1,1,2,2,2,2,2,3,3,3,3,3,3,4,4,5,5,5,5,5), b=sample.int(21))
I would like to apply a function to selected columns that match two different substrings. I've found this post related to my question but I couldn't get an answer from there.
Here is a reproducible example with my failed attempt. For the sake of this example, I want to do a row-wise operation where I sum the values from all columns starting with v and subtract the average of the values from all columns starting with f.
Update: the proposed solution must (a) use the := operator to make the most of data.table's fast performance, and (b) be flexible to operations other than mean and sum, which I used here just for the sake of simplicity.
library(data.table)
# generate data
dt <- data.table(id= letters[1:5],
v1= 1:5,
v2= 1:5,
f1= 11:15,
f2= 11:15)
dt
#> id v1 v2 f1 f2
#> 1: a 1 1 11 11
#> 2: b 2 2 12 12
#> 3: c 3 3 13 13
#> 4: d 4 4 14 14
#> 5: e 5 5 15 15
# what I've tried
dt[, Y := sum( .SDcols=names(dt) %like% "v" ) - mean( .SDcols=names(dt) %like% "f" ) by = id]
We melt the dataset into 'long' format using the measure argument, get the difference between the sum of 'v' and the mean of 'f' grouped by 'id', join on the 'id' column with the original dataset, and assign (:=) 'V1' as the 'Y' variable:
dt[melt(dt, measure = patterns("^v", "^f"), value.name = c("v", "f"))[
   , sum(v) - mean(f), id], Y := V1, on = .(id)]
dt
# id v1 v2 f1 f2 Y
#1: a 1 1 11 11 -9
#2: b 2 2 12 12 -8
#3: c 3 3 13 13 -7
#4: d 4 4 14 14 -6
#5: e 5 5 15 15 -5
Or another option is with Reduce after creating indices for the 'v' and 'f' columns (note that the divisor for the mean must be the number of 'f' columns):
nmv <- which(startsWith(names(dt), "v"))
nmf <- which(startsWith(names(dt), "f"))
l1 <- length(nmf)
dt[, Y := Reduce(`+`, .SD[, nmv, with = FALSE]) - (Reduce(`+`, .SD[, nmf, with = FALSE]) / l1)]
rowSums and rowMeans combined with grep can accomplish this.
dt$Y <- rowSums(dt[, grep("v", names(dt)), with = FALSE]) - rowMeans(dt[, grep("f", names(dt)), with = FALSE])
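If the update's := requirement matters, here is a sketch of an alternative (my own adaptation, assuming data.table >= 1.12.0, where .SDcols accepts patterns()) that keeps the computation inside data.table and lets you swap in other row-wise functions:
library(data.table)
dt <- data.table(id = letters[1:5], v1 = 1:5, v2 = 1:5, f1 = 11:15, f2 = 11:15)

# temporary helper columns; rowSums/rowMeans could be replaced by any row-wise function
dt[, v_sum  := rowSums(.SD),  .SDcols = patterns("^v")]
dt[, f_mean := rowMeans(.SD), .SDcols = patterns("^f")]
dt[, Y := v_sum - f_mean][, c("v_sum", "f_mean") := NULL]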
Suppose, that one has the following dataframe:
x=data.frame(c(1,1,2,2,2,3),c("A","A","B","B","B","B"))
names(x)=c("v1","v2")
x
v1 v2
1 1 A
2 1 A
3 2 B
4 2 B
5 2 B
6 3 B
In this dataframe, I want each value in v1 to correspond to a label in v2. However, as one can see in this example, B has more than one corresponding value.
Is there any elegant and fast way to find which labels in v2 correspond to more than one value in v1?
Ideally, the result should show the values (which in our example should be c(2,3)) as well as the row numbers (which in our example should be r=c(5,6)).
Assuming that we want the index of the unique elements in 'v1', grouped by 'v2', for groups that have more than one unique element, we create a logical index with ave and use that to subset the rows of 'x'.
i1 <- with(x, ave(v1, v2, FUN = function(x)
        length(unique(x)) > 1 & !duplicated(x, fromLast = TRUE))) != 0
x[i1,]
# v1 v2
#5 2 B
#6 3 B
Or a faster option is data.table
library(data.table)
i1 <- setDT(x)[, .I[uniqueN(v1)>1 & !duplicated(v1, fromLast=TRUE)], v2]$V1
x[i1, 'v1', with = FALSE][, rn := i1][]
# v1 rn
#1: 2 5
#2: 3 6
This is my example:
df <- data.frame(ID = as.factor(c(rep("A", 20), rep("B", 15))),
                 var = as.factor(c(rep("w", 5), rep("x", 10), rep("y", 12), rep("z", 8))),
                 obs = runif(35, 0, 10))
What I want to do is, for each 'ID', select a single 'var', either at random or possibly by selecting the 'var' with the most 'obs'. So, for example, at random it could give this:
ID var obs
6 A x 3.44405412
7 A x 1.50957637
8 A x 8.22009420
9 A x 7.47094473
10 A x 8.26098410
11 A x 9.62919537
12 A x 0.10393890
13 A x 0.11298502
14 A x 4.33822574
15 A x 4.20109035
28 B z 1.07697286
29 B z 8.40864310
30 B z 7.62563257
31 B z 0.06885177
32 B z 4.33959316
33 B z 7.98303782
34 B z 8.38335593
35 B z 4.52110318
Thank you in advance for your help.
Here's another data.table approach. To begin...
library(data.table)
setDT(df)
Then, select the var for each ID:
# var with highest #obs
idvar_selected = df[,.(var = .SD[,.N,by=var][which.max(N)]$var), by=ID]
# or... at random, weighted by #obs
idvar_selected = df[,.(var = sample(var,1)), by=ID]
And "join" using the selection:
df[idvar_selected, on=c("ID","var")]
One option using data.table.
We convert the 'data.frame' to 'data.table' (setDT(df)). Grouped by 'ID' and 'var', we create a variable 'N' that gives the number of rows (.N) for each group. Then, we group by 'ID' and subset the rows that have the max value of 'N' (.SD[N==max(N)]). The 'N' column can be assigned to 'NULL' as it is not needed in the expected output.
library(data.table)
setDT(df)[,N := .N , by = .(ID, var)][, .SD[N==max(N)] ,
by = .(ID)][, N:= NULL][]
# ID var obs
# 1: A x 9.2044378
# 2: A x 2.7973557
# 3: A x 7.6382046
# 4: A x 8.0163062
# 5: A x 2.5472509
# 6: A x 6.0488886
# 7: A x 3.7073495
# 8: A x 6.7169025
# 9: A x 6.7298231
#10: A x 3.2043056
#11: B z 5.9973018
#12: B z 6.3014766
#13: B z 0.4663503
#14: B z 3.1951313
#15: B z 2.3874890
#16: B z 3.6881753
#17: B z 1.4802475
#18: B z 9.3776173
By assigning a new column, we are changing the original dataset 'df'. We could remove that column later from the original dataset by
df[, N:=NULL]
Or a modification of the above code without assigning (:=), so that the original dataset remains the same. We concatenate .SD (i.e., the Subset of Data.table) with .N to create the new column 'N', and then subset the rows as before.
setDT(df)[, c(list(N=.N), .SD) ,by =.(ID, var)][,
.SD[N==max(N)], by =ID][, N:= NULL][]
Or, as suggested by @Frank, we can copy(.SD) to avoid the original dataset getting changed, then assign 'N', and do as before.
setDT(df)[,copy(.SD)][,N := .N , by = .(ID, var)][,
.SD[N==max(N)] , by = .(ID)][]
If we want to select a random 'var' within each 'ID', we can use sample to select a single 'var' grouped by 'ID', get a logical vector (var == sample(var, 1)) and subset the rows:
setDT(df)[, .SD[var==sample(var, 1)] , by =ID]
data
set.seed(24)
df <- data.frame(ID=as.factor(c(rep("A",20),rep("B",15))),
var=as.factor(c(rep("w",5),rep("x",10),rep("y",12),rep("z",8))),
obs=runif(35,0,10))
I have the following data.table.
ts,id
1,a
2,a
3,a
4,a
5,a
6,a
7,a
1,b
2,b
3,b
4,b
I want to subset this data.table into two. The criterion is to have approximately the first half of each group (here grouped by column "id") in one data.table and the remaining rows in another data.table. So the expected result is two data.tables, as follows:
ts,id
1,a
2,a
3,a
4,a
1,b
2,b
and
ts,id
5,a
6,a
7,a
3,b
4,b
I tried the following,
z1 = x[, .SD[.I < .N/2, ], by = id]
z1
and got just the following
id ts
a 1
a 2
a 3
Somehow, .I within the .SD isn't working the way I think it should. Any help appreciated.
Thanks in advance.
.I gives the row locations with respect to the whole data.table. Thus it can't be used like that within .SD.
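To see this concretely, here is a small illustration (my own sketch, using the question's data): with a by, .I in j reports each group's row numbers in the full table, so for group 'b' they start at 8 and a condition like .I < .N/2 (here .N/2 = 2) can never be true:
library(data.table)
DT <- data.table(ts = c(1:7, 1:4), id = rep(c("a", "b"), c(7, 4)))
DT[, .(global_row = .I, group_size = .N), by = id]
#     id global_row group_size
#  1:  a          1          7
#  2:  a          2          7
#  ...
#  8:  b          8          4
# 11:  b         11          4
A position that restarts within each group (e.g. seq_len(.N)) avoids the problem.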
Something like
DT[, subset := seq_len(.N) > .N/2,by='id']
subset1 <- DT[(subset)][,subset:=NULL]
subset2 <- DT[!(subset)][,subset:=NULL]
subset1
# ts id
# 1: 4 a
# 2: 5 a
# 3: 6 a
# 4: 7 a
# 5: 3 b
# 6: 4 b
subset2
# ts id
# 1: 1 a
# 2: 2 a
# 3: 3 a
# 4: 1 b
# 5: 2 b
Should work
For more than 2 groups, you could use cut to create a factor with the appropriate number of levels
Something like
DT[, subset := cut(seq_len(.N), 3, labels= FALSE),by='id']
# you could copy a subset for each level to the global environment,
# but this will not be memory efficient!
list2env(setattr(split(DT, DT[['subset']]),'names', paste0('s',1:3)), .GlobalEnv)
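Alternatively (a sketch of my own, assuming data.table >= 1.9.8, which provides a split.data.table method with a by argument), you can keep the pieces in a named list instead of copying them into the global environment:
parts <- split(DT, by = "subset", keep.by = FALSE)
parts[["1"]]  # first third of each group; parts[["2"]] and parts[["3"]] hold the rest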
Here's the corrected version of your expression:
dt[, .SD[, .SD[.I <= .N/2]], by = id]
# id ts
#1: a 1
#2: a 2
#3: a 3
#4: b 1
#5: b 2
The reason yours is not working is that .I and .N are not available in the i-expression (i.e. the first argument of [), so the parent data.table's .I and .N are used (i.e. dt's).
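If the nested .SD reads as opaque, an equivalent idiom (not from the original answer, just a common alternative) computes the row numbers of each group's first half in a grouped call and subsets with them:
# .I[seq_len(.N) <= .N/2] returns the global row numbers of the group's first half
idx <- dt[, .I[seq_len(.N) <= .N/2], by = id]$V1
dt[idx]
#    ts id
# 1:  1  a
# 2:  2  a
# 3:  3  a
# 4:  1  b
# 5:  2  b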