The data set is similar to this:
library(data.table)
uid <- c("a","a","a","b","b","b","c","c","c")
date <- c(2001,2002,2003)
DT <- data.table(id=uid, year=rep(date,3), value= c(1,3,2,1:6))
Q1
Now I want to find which observations have the "value" column increasing year by year.
what I want is like this:
For b and c, the value is increasing the whole time:
4: b 2001 1
5: b 2002 2
6: b 2003 3
7: c 2001 4
8: c 2002 5
9: c 2003 6
In my real data, the recording time span differs for each id.
Besides, I want to calculate, for a given id, how many years the value increases:
ID V1
1: a 1
2: b 2
3: c 2
Thanks a lot if you have any ideas about this.
I'd prefer a data.table method, because of the speed requirements.
I think this does what you want:
DT[order(year)][, sum(diff(value) > 0), by=id]
produces:
id V1
1: a 1
2: b 2
3: c 2
This assumes you have at most one value per year.
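If there can be several values per year, one option (a sketch, not part of the original answer) is to first collapse to a single value per id and year, e.g. the yearly mean, and then apply the same idea:
DT[, .(value = mean(value)), by = .(id, year)][order(year)][, sum(diff(value) > 0), by = id]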
For your first question: if the rows aren't sorted, I'd do a setkey on id, year for the sorting (rather than using base::order, as it is comparatively slow). id is included as well so that you'll get the results in the same order as you expect for question 2.
setkey(DT, id, year)  # sorts by id, then year, by reference
DT[, if (.N == 1L ||
         (.N > 1L && all(value[2:.N] - value[1:(.N - 1)] > 0))
    ) .SD,
   by = list(id)]
id year value
1: b 2001 1
2: b 2002 2
3: b 2003 3
4: c 2001 4
5: c 2002 5
6: c 2003 6
For your second question:
DT[, if (.N == 1L) 1L else sum(value[2:.N] - value[1:(.N - 1)] > 0), by = list(id)]
id V1
1: a 1
2: b 2
3: c 2
I take the 2nd through .N-th values and explicitly subtract the 1st through (.N-1)-th from them because diff, being an S3 generic, takes time to dispatch the right method (here, diff.default); it is much faster to write the computation directly in j.
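For comparison, the diff-based equivalent of the expression above (a sketch; it returns the same result, just with the dispatch overhead described):
DT[, if (.N == 1L) 1L else sum(diff(value) > 0), by = list(id)]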
Related
I want to use data.table to incrementally find new elements, i.e. for every row, I'd like to see whether the values in the list have been seen before. If they have, we ignore them; if not, we select them.
I was able to wrap the elements of each group in a list, but I am unsure how to find the incremental differences.
Here's my attempt:
df = data.table::data.table(id = c('A','B','C','A','B','A','A','A','D','E','E','E'),
                            Value = c(1,2,3,4,3,5,2,3,7,2,3,9))
df_wrapped = df[, .(Values = list(unique(Value))), by = id]
expected_output = data.table::data.table(id = c("A","B","C","D","E"),
                                         Value = list(c(1,4,5,2,3), c(2,3), c(3), c(7), c(2,3,9)),
                                         Diff = list(c(1,4,5,2,3), c(NA), c(NA), c(7), c(9)),
                                         Count = c(5,0,0,1,1))
Thoughts about expected output:
For the first row, all elements are unique, so we include them in the Diff column.
In the second row, 2 and 3 already occurred in row 1, so we ignore them; ditto for row 3.
Similarly, 7 and 9 are seen for the first time in rows 4 and 5, so we include them.
Here's a visual representation:
expected_output
id Value Diff Count
A 1,4,5,2,3 1,4,5,2,3 5
B 2,3 NA 0
C 3 NA 0
D 7 7 1
E 2,3,9 9 1
I'd appreciate any thoughts. I am only looking for data.table-based solutions because of performance issues with my original dataset.
I am not sure why you specifically need to put them in a list, but otherwise here is a small piece that could help you.
df = data.table::data.table(id = c('A','B','C','A','B','A','A','A','D','E','E','E'),
                            Value = c(1,2,3,4,3,5,2,3,7,2,3,9))
df = df[order(id, Value)]                             # sort so the first id to contain a value wins
df = df[duplicated(Value) == FALSE, diff := Value][]  # mark a value only at its first occurrence overall
df = df[, count := uniqueN(diff, na.rm = TRUE), by = id]  # number of first-time values per id
The outcome would be:
> df
id Value diff count
1: A 1 1 5
2: A 2 2 5
3: A 3 3 5
4: A 4 4 5
5: A 5 5 5
6: B 2 NA 0
7: B 3 NA 0
8: C 3 NA 0
9: D 7 7 1
10: E 2 NA 1
11: E 3 NA 1
12: E 9 9 1
Hope this helps, or at least gets you started.
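If you do need the one-row-per-id list shape of expected_output, a possible follow-up on this result (a sketch; note the values come out sorted, and ids with no new values get an empty vector rather than NA in Diff):
df[, .(Values = .(Value), Diff = .(diff[!is.na(diff)]), Count = count[1]), by = id]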
Here is another possible approach:
library(data.table)
df = data.table(
id = c('A','B','C','A','B','A','A','A','D','E','E','E'),
Value = c(1,2,3,4,3,5,2,3,7,2,3,9))
valset <- c()  # running set of values seen in earlier groups
df[, {
  d <- setdiff(Value, valset)         # values in this group not seen before
  valset <- unique(c(valset, Value))  # update the accumulator; j's environment persists across groups
  .(Values = .(Value), Diff = .(d), Count = length(d))
}, by = .(id)]
output:
id Values Diff Count
1: A 1,4,5,2,3 1,4,5,2,3 5
2: B 2,3 0
3: C 3 0
4: D 7 7 1
5: E 2,3,9 9 1
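If relying on a variable carried across groups feels fragile, an alternative sketch collapses to one row per id first and then walks the list column with an explicit loop inside a single j call, where ordinary R scoping applies:
dfw <- df[, .(Values = .(unique(Value))), by = id]
seen <- c()
dfw[, Diff := {
  out <- vector("list", .N)
  for (i in seq_len(.N)) {
    out[[i]] <- setdiff(Values[[i]], seen)   # new values for this id
    seen <- unique(c(seen, Values[[i]]))     # remember everything seen so far
  }
  out
}]
dfw[, Count := lengths(Diff)]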
I am trying to set the previous observation per group to NA, if a certain condition applies.
Assume I have the following data.table:
DT = data.table(group=rep(c("b","a","c"),each=3), v=c(1,1,1,2,2,1,1,2,2), y=c(1,3,6,6,3,1,1,3,6), a=1:9, b=9:1)
and I am using the simple condition:
DT[y == 6]
How can I set the previous rows of DT[y == 6] within DT to NA, namely rows 2 and 8 of DT? That is, how do I set the respectively previous row per group to NA?
Please note: from DT we can see that there are 3 rows where y is equal to 6, but for group a (row 4) I do not want to set the previous row to NA, as the previous row belongs to a different group.
In other words, what I want is the index of the row preceding certain elements in a data.table. Is that possible? It would also be interesting whether one can go back further than 1 period. Thanks for any hints.
You can find the row indices where the current y is not 6 but the next row's y is 6, and then set the whole row to NA:
DT[shift(y, type="lead")==6 & y!=6,
(names(DT)) := lapply(.SD, function(x) NA)]
DT
output:
group v y a b
1: b 1 1 1 9
2: <NA> NA NA NA NA
3: b 1 6 3 7
4: a 2 6 4 6
5: a 2 3 5 5
6: a 1 1 6 4
7: c 1 1 7 3
8: <NA> NA NA NA NA
9: c 2 6 9 1
As usual, Frank comments with a more succinct version:
DT[shift(y, type="lead")==6 & y!=6, names(DT) := NA]
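Note that neither version is group-aware by itself: shift runs over the whole table, so a row at the end of one group could be blanked when the next group happens to start with y == 6; here the y != 6 guard excludes that boundary row. A group-aware sketch (hypothetical), which also lets you go back more than 1 period via shift's n argument:
idx <- DT[, .I[which(shift(y, n = 1L, type = "lead") == 6)], by = group]$V1  # n = 2L goes back 2 rows
DT[idx, names(DT) := NA]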
I am trying to find all the records in my data.table for which there is more than one row with value v in field f.
For instance, we can use this data:
dt <- data.table(f1=c(1,2,3,4,5), f2=c(1,1,2,3,3))
If looking for that property in field f2, we'd get (note the absence of the (3,2) tuple)
f1 f2
1: 1 1
2: 2 1
3: 4 3
4: 5 3
My first guess was dt[.N > 2, list(.N), by = f2], but that actually keeps entries with .N == 1:
dt[.N>2,list(.N),by=f2]
f2 N
1: 1 2
2: 2 1
3: 3 2
The other easy guess, dt[duplicated(dt$f2)], doesn't do the trick, as it keeps one of the 'duplicates' out of the results.
dt[duplicated(dt$f2)]
f1 f2
1: 2 1
2: 5 3
So how can I get this done?
Edited to add example
The question is not entirely clear. Based on the title, it looks like we want to extract all groups with a number of rows (.N) greater than 1:
DT[, if (.N > 1) .SD, by = f]
But the "value v in field f" phrasing makes it confusing.
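Applied to the example data from the question (grouping on f2), this idiom gives the desired rows, though with the grouping column coming first:
dt[, if (.N > 1) .SD, by = f2]
   f2 f1
1:  1  1
2:  1  2
3:  3  4
4:  3  5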
If I understand what you're after correctly, you'll need to do some compound queries:
library(data.table)
DT <- data.table(v1 = 1:10, f = c(rep(1:3, 3), 4))
DT[, N := .N, f][N > 2][, N := NULL][]
# v1 f
# 1: 1 1
# 2: 2 2
# 3: 3 3
# 4: 4 1
# 5: 5 2
# 6: 6 3
# 7: 7 1
# 8: 8 2
# 9: 9 3
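As a footnote to the question's second guess: duplicated() can still be used if you scan from both ends, so that every member of a group of size > 1 is flagged (a sketch on the question's dt, preserving the original column order):
dt[duplicated(f2) | duplicated(f2, fromLast = TRUE)]
   f1 f2
1:  1  1
2:  2  1
3:  4  3
4:  5  3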
I have a data.table:
> (a <- data.table(id = c(1,1,1,2,2,3),
                   attribute = c("a","b","c","a","b","c"),
                   importance = 1:6,
                   key = c("id","importance")))
id attribute importance
1: 1 a 1
2: 1 b 2
3: 1 c 3
4: 2 a 4
5: 2 b 5
6: 3 c 6
I want:
--1-- sort it by the second key in decreasing order (i.e., the most important attributes should come first)
--2-- select the top 2 (or 10) attributes for each id, i.e.:
id attribute importance
3: 1 c 3
2: 1 b 2
5: 2 b 5
4: 2 a 4
6: 3 c 6
--3-- pivot the above:
id attribute.1 importance.1 attribute.2 importance.2
1 c 3 b 2
2 b 5 a 4
3 c 6 NA NA
It appears that the last operation can be done with something like:
a[, {
     tmp <- .SD[.N:1]
     list(a1 = tmp$attribute[1],
          i1 = tmp$importance[1])
   }, by = id]
Is this The Right Way?
How do I do the first two tasks?
I'd do the first two tasks like this:
a[a[, .I[.N:(.N-1)], by=list(id)]$V1]
The inner a[, .I[.N:(.N-1)], by=list(id)] gives you the row indices, in the order you require, for every unique group in id. Then you subset a with the V1 column (which holds those indices).
You'll have to take care of groups with a single row here (for .N == 1, the index .N:(.N-1) evaluates to 1:0), maybe something like:
a[a[, .I[seq.int(.N, max(.N-1L, 1L))], by=list(id)]$V1]
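For the pivot (task 3), a possible sketch building on the subset above; it assumes dcast from data.table (1.9.6 or later) for multiple value.var columns, and the rank column is a helper introduced here:
top2 <- a[a[, .I[seq.int(.N, max(.N - 1L, 1L))], by = list(id)]$V1]
top2[, rank := seq_len(.N), by = id]   # 1 = most important within each id
dcast(top2, id ~ rank, value.var = c("attribute", "importance"))
This yields columns attribute_1, attribute_2, importance_1, importance_2, with NAs where an id has fewer rows.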
I have recently been working with much larger datasets and have started learning and migrating to data.table to improve performance of aggregation/grouping. I have been unable to get certain expressions or functions to group as expected. Here is an example of a basic group-by operation that I am having trouble with.
library(data.table)
category <- rep(1:10, 10)
value <- rnorm(100)
df <- data.frame(category, value)
dt <- data.table(df)
If I want to simply calculate the mean for each category group, this works easily enough:
dt[,mean(value),by="category"]
category V1
1: 1 -0.67555478
2: 2 -0.50438413
3: 3 0.29093723
4: 4 -0.41684790
5: 5 0.33921764
6: 6 0.01970997
7: 7 -0.23684245
8: 8 -0.04280998
9: 9 0.01838804
10: 10 0.44295978
I run into problems if I try to use the scale function, or even a simple expression subtracting the group mean from each value. The grouping is ignored and I get the function/expression applied to each row instead. The following returns all 100 rows instead of 10 grouped categories:
dt[,scale(value),by="category"]
dt[,value-mean(value),by="category"]
I thought recreating scale as a function that returns a numeric vector instead of a matrix might help:
zScore <- function(x) {
  z <- (x - mean(x, na.rm = TRUE)) / sd(x, na.rm = TRUE)
  return(z)
}
dt[,zScore(value),by="category"]
category V1
1: 1 -1.45114132
2: 1 -0.35304528
3: 1 -0.94075418
4: 1 1.44454416
5: 1 1.39448268
6: 1 0.55366652
....
97: 10 -0.43190602
98: 10 -0.25409244
99: 10 0.35496694
100: 10 0.57323480
category V1
This also returns the zScore function applied to all rows (N=100), ignoring the grouping. What am I missing in order to get scale() or a custom function to use the grouping like mean() did above?
You've clarified in the comments that you'd like the same behaviour as:
ddply(df,"category",transform, zscorebycategory=zScore(value))
which gives:
category value zscorebycategory
1 1 0.28860691 0.31565682
2 1 1.17473759 1.33282374
3 1 0.06395503 0.05778463
4 1 1.37825487 1.56643607
etc
The data table option you gave gives:
category V1
1: 1 0.31565682
2: 1 1.33282374
3: 1 0.05778463
4: 1 1.56643607
etc
Which is exactly the same data. However, you'd also like to repeat the value column in your result, and to rename the V1 variable to something more descriptive. data.table gives you the grouping variable in the result, along with the result of the expression you provide. So let's modify that to give the rows you'd like:
Your
dt[,zScore(value),by="category"]
becomes:
dt[,list(value=value, zscorebycategory=zScore(value)),by="category"]
Where the named items in the list become columns in the result.
plyr = data.table(ddply(df,"category",transform, zscorebycategory=zScore(value)))
dt = dt[,list(value=value, zscorebycategory=zScore(value)),by="category"]
identical(plyr, dt)
[1] TRUE
(note I converted your ddply data.frame result into a data.table, to allow the identical command to work).
Your claim that data.table does not group is wrong:
library(data.table)
category <- rep(1:2, each=4)
value <- c(rep(c(1:2),each=2),rep(c(4,10),each=2))
dt <- data.table(category, value)
dt
   category value
1: 1 1
2: 1 1
3: 1 2
4: 1 2
5: 2 4
6: 2 4
7: 2 10
8: 2 10
dt[,value-mean(value),by=category]
category V1
1: 1 -0.5
2: 1 -0.5
3: 1 0.5
4: 1 0.5
5: 2 -3.0
6: 2 -3.0
7: 2 3.0
8: 2 3.0
If you want to scale or transform, this is exactly the behavior you want, because these operations by definition return an object of the same size as their input.
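If the goal is to keep all original rows and simply add the per-group z-score as a column, a common pattern (a sketch; scale() returns a one-column matrix, hence the as.vector()) is:
dt[, zscore := as.vector(scale(value)), by = category]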