I'm looking for a Python-like dictionary structure in R to replace values in a large dataset (>100 MB), and I think the data.table package can help me do this. However, I cannot find an easy way to solve the problem.
For example, I have two data.tables:
Table A:
V1 V2
1: A B
2: C D
3: C D
4: B C
5: D A
Table B:
V3 V4
1: A 1
2: B 2
3: C 3
4: D 4
I want to use B as a dictionary to replace the values in A. So the result I want to get is:
Table R:
V5 V6
1 2
3 4
3 4
2 3
4 1
What I did is:
c2=tB[tA[,list(V2)],list(V4)]
c1=tB[tA[,list(V1)],list(V4)]
Although I specified j=list(V4), it still returned the values of V3 as well. I don't know why.
c2:
V3 V4
1: B 2
2: D 4
3: D 4
4: C 3
5: A 1
c1:
V3 V4
1: A 1
2: C 3
3: C 3
4: B 2
5: D 4
Finally, I combined the two V4 columns and got the result I want.
But I think there should be a much easier way to do this. Any ideas?
Here's an alternative way:
setkey(B, V3)
for (i in seq_len(length(A))) {
    thisA = A[[i]]
    set(A, j = i, value = B[thisA]$V4)
}
# V1 V2
# 1: 1 2
# 2: 3 4
# 3: 3 4
# 4: 2 3
# 5: 4 1
Since thisA is a character column, we don't need the J() (for convenience). Here, A's columns are replaced by reference, which is also memory-efficient. But if you don't want to replace A, you can just use cA <- copy(A) and replace cA's columns instead.
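The non-destructive variant mentioned above could look like this (a sketch, assuming A and B as in the example; cA is an ad-hoc name):

```r
library(data.table)
A <- data.table(V1 = c("A","C","C","B","D"), V2 = c("B","D","D","C","A"))
B <- data.table(V3 = c("A","B","C","D"), V4 = 1:4)
setkey(B, V3)

cA <- copy(A)                       # leave A untouched
for (i in seq_along(cA)) {
    # join each column of cA against B's key and plonk in the looked-up V4
    set(cA, j = i, value = B[cA[[i]]]$V4)
}
```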
Alternatively, using :=:
A[, names(A) := lapply(.SD, function(x) B[J(x)]$V4)]
# or
ans = copy(A)[, names(A) := lapply(.SD, function(x) B[J(x)]$V4)]
(Following user2923419's comment): You can drop the J() if the lookup is a single column of type character (just for convenience).
In 1.9.3, when j is a single column, it returns a vector (based on user request). So, it's a bit more natural data.table syntax:
setkey(B, V3)
for (i in seq_len(length(A))) {
    thisA = A[[i]]
    set(A, j = i, value = B[thisA, V4])
}
I am not sure how fast this is with big data, but chmatch is supposed to be fast.
tA[ , lapply(.SD,function(x) tB$V4[chmatch(x,tB$V3)])]
V1 V2
1: 1 2
2: 3 4
3: 3 4
4: 2 3
5: 4 1
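For comparison, the same dictionary lookup can be done with a plain named vector in base R (a sketch using the example tables; dict is an ad-hoc name, not from the original code):

```r
library(data.table)
tA <- data.table(V1 = c("A","C","C","B","D"), V2 = c("B","D","D","C","A"))
tB <- data.table(V3 = c("A","B","C","D"), V4 = 1:4)

dict <- setNames(tB$V4, tB$V3)      # "A" -> 1, "B" -> 2, ...
tA[, lapply(.SD, function(x) unname(dict[x]))]
```

Indexing a named vector by character keys is R's closest built-in analogue to a dictionary; for very large data, chmatch as above is likely faster.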
This is related to this question from Henrik: Assign multiple columns using := in data.table, by group.
But what if I want to create a new data.table with given column names instead of assigning new columns to an existing one?
f <- function(x){list(head(x,2),tail(x,2))}
dt <- data.table(group=sample(c('a','b'),10,replace = TRUE),val=1:10)
> dt
group val
1: b 1
2: b 2
3: a 3
4: b 4
5: a 5
6: b 6
7: a 7
8: a 8
9: b 9
10: b 10
I want to get a new data.table with predefined column names by calling the function f:
dt[,c('head','tail')=f(val),by=group]
I wish to get this:
group head tail
1: a 1 8
2: a 3 10
3: b 2 6
4: b 5 9
But it gives me an error. What I can do is create the table then change the column names, but that seems cumbersome:
> dt2 <- dt[,f(val),by=group]
> dt2
group V1 V2
1: a 1 8
2: a 3 10
3: b 2 6
4: b 5 9
> colnames(dt2)[-1] <- c('head','tail')
> dt2
group head tail
1: a 1 8
2: a 3 10
3: b 2 6
4: b 5 9
Is it something I can do with one call?
From running your code as-is, this is the error I get:
dt[,c('head','tail')=f(val),by=group]
# Error: unexpected '=' in "dt[,c('head','tail')="
The problem is using = instead of := for assignment.
On to your problem of wanting a new data.table:
dt2 <- dt[, setNames(f(val), c('head', 'tail')), by = group]
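With a deterministic toy table (fixed groups instead of sample(), so the result is reproducible), the one-liner behaves like this:

```r
library(data.table)
f <- function(x) list(head(x, 2), tail(x, 2))
dt <- data.table(group = rep(c("a", "b"), each = 4), val = 1:8)

# setNames() renames the list returned by f, so j produces named columns
dt2 <- dt[, setNames(f(val), c("head", "tail")), by = group]
dt2
#    group head tail
# 1:     a    1    3
# 2:     a    2    4
# 3:     b    5    7
# 4:     b    6    8
```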
I tried to find all possible combinations by group, using the combn function and the data.table package as this post teaches: Generate All ID Pairs, by group with data.table in R.
This gives me the expected result.
dat1 <- data.table(ids=1:4, groups=c("B","A","B","A"))
dat1
ids groups
1: 1 B
2: 2 A
3: 3 B
4: 4 A
dat1[, as.data.table(t(combn(ids, 2))), .(groups)]
groups V1 V2
1: B 1 3
2: A 2 4
But this gives me a strange result. It's very weird; I tried to understand it for about three hours but I can't. Isn't it a bug?
dat2 <- data.table(ids=1:4, groups=c("B","A","B","C"))
dat2
ids groups
1: 1 B
2: 2 A
3: 3 B
4: 4 C
dat2[, as.data.table(t(combn(ids, 2))), .(groups)]
groups V1 V2
1: B 1 3
2: A 1 2
3: C 1 2
4: C 1 3
5: C 1 4
6: C 2 3
7: C 2 4
8: C 3 4
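For what it's worth, the behaviour can be reproduced outside data.table. Per ?combn, when x is a single positive integer, combn(x, m) uses seq_len(x) as the source vector, which would explain the groups above that contain only one id:

```r
# Group B has ids c(1, 3): combinations of the two elements, as expected.
combn(c(1, 3), 2)
#      [,1]
# [1,]    1
# [2,]    3

# Group C has the single id 4: combn(4, 2) silently becomes combn(1:4, 2),
# producing six pairs instead of an error.
identical(combn(4, 2), combn(1:4, 2))
# [1] TRUE
```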
I'd really appreciate your help.
I am trying to find all the records in my data.table for which there is more than one row with value v in field f.
For instance, we can use this data:
dt <- data.table(f1=c(1,2,3,4,5), f2=c(1,1,2,3,3))
If looking for that property in field f2, we'd get (note the absence of the (3,2) tuple)
f1 f2
1: 1 1
2: 2 1
3: 4 3
4: 5 3
My first guess was dt[.N>2,list(.N),by=f2], but that actually keeps entries with .N==1.
dt[.N>2,list(.N),by=f2]
f2 N
1: 1 2
2: 2 1
3: 3 2
The other easy guess, dt[duplicated(dt$f2)], doesn't do the trick, as it keeps one of the 'duplicates' out of the results.
dt[duplicated(dt$f2)]
f1 f2
1: 2 1
2: 5 3
So how can I get this done?
Edited to add example
The question is not clear. Based on the title, it looks like we want to extract all groups with number of rows (.N) greater than 1.
DT[, if(.N>1) .SD, by=f]
But the value v in field f is making it confusing.
If I understand what you're after correctly, you'll need to do some compound queries:
library(data.table)
DT <- data.table(v1 = 1:10, f = c(rep(1:3, 3), 4))
DT[, N := .N, f][N > 2][, N := NULL][]
# v1 f
# 1: 1 1
# 2: 2 2
# 3: 3 3
# 4: 4 1
# 5: 5 2
# 6: 6 3
# 7: 7 1
# 8: 8 2
# 9: 9 3
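Applied to the data from the question, another common idiom (a sketch, not from the answer above) keeps every row whose f2 value occurs more than once, by combining duplicated() scans from both ends:

```r
library(data.table)
dt <- data.table(f1 = 1:5, f2 = c(1, 1, 2, 3, 3))

# a row is kept if its f2 value is duplicated in either direction
dt[duplicated(f2) | duplicated(f2, fromLast = TRUE)]
#    f1 f2
# 1:  1  1
# 2:  2  1
# 3:  4  3
# 4:  5  3
```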
I have a question which is driving me crazy, and I really need your help.
The simplified question is this:
d1 <- data.table(v1 = c("a","b","c","d","d","b","a","c","a","d","b","a"),
                 v2 = 1:12, V3 = rep(1:4, times = 3))
d2 <- data.table(v1 = c("a","b","c","d"), v3 = c(3,2,1,4), v4 = c("y","x","t","e"))
This will yield two data sets:
D1:
v1 v2 V3
1: a 1 1
2: b 2 2
3: c 3 3
4: d 4 4
5: d 5 1
6: b 6 2
7: a 7 3
8: c 8 4
9: a 9 1
10: d 10 2
11: b 11 3
12: a 12 4
> d2
v1 v3 v4
1: a 3 y
2: b 2 x
3: c 1 t
4: d 4 e
As you can see, the values in d1's v1 and d2's v1 are the same. Now I want to join the two data sets by creating a new column in d1 that returns the value of v4 from d2 wherever both v1 and V3 match. I hope to get output looking like this:
> d3
v1 v2 V3 V4
1: a 1 1 NA
2: b 2 2 x
3: c 3 3 NA
4: d 4 4 e
5: d 5 1 NA
6: b 6 2 x
7: a 7 3 y
8: c 8 4 NA
9: a 9 1 NA
10: d 10 2 NA
11: b 11 3 NA
12: a 12 4 NA
The actual data I am using is relatively large: something like joining 113 MB of data with 23 MB. I tried to use a for loop, but because the data is so long, it takes ages to finish the task. I also tried merge and sqldf, but both of them failed to finish the job. Could you please help me with this problem? Thank you very much!
I'd do it like this:
setkey(d1, v1, V3)
d1[d2, v4 := v4][]
For a join of the form x[i], the key for x needs to be set; i may or may not have its key set. So we set the key of d1 here to the columns v1 and V3.
Next, we perform the join d1[d2], which, for each row of d2, finds the rows that match on the key columns of d1 and returns the join result. We're not looking for exactly that result, though. We'd rather add a new column where each matching row gets its value from d2's v4, and is NA otherwise. For this we make use of data.table's sub-assign-by-reference functionality. While joining i to x, we can still provide an expression in j and refer to i's columns. You can also refer to them as i.v4 (usually used when there are columns of the same name in both x and i).
:= adds/updates a column by reference. The LHS of := is the name of the column we want to create, and the RHS v4 is the value we want to assign to it (here, the column from d2). For each matching row, therefore, we assign d2's v4 onto d1's new column v4 by reference (in place, meaning no copy is made), and rows with no match get the default value of NA.
The last [] is just to print the output to screen, as := returns the result invisibly.
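A minimal reproduction using the i.-prefixed form mentioned above (a sketch with the question's tables; i.v4 spells out that the value comes from i, i.e. d2):

```r
library(data.table)
d1 <- data.table(v1 = c("a","b","c","d","d","b","a","c","a","d","b","a"),
                 v2 = 1:12, V3 = rep(1:4, times = 3))
d2 <- data.table(v1 = c("a","b","c","d"), v3 = c(3,2,1,4), v4 = c("y","x","t","e"))

setkey(d1, v1, V3)
d1[d2, v4 := i.v4]      # i.v4: unambiguously d2's v4 column
d1[!is.na(v4)]          # rows where a (v1, V3) pair matched d2
```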
Hope this helps to understand what's going on here.
I have a data.table:
> (a <- data.table(id = c(1,1,1,2,2,3),
                   attribute = c("a","b","c","a","b","c"),
                   importance = 1:6,
                   key = c("id","importance")))
id attribute importance
1: 1 a 1
2: 1 b 2
3: 1 c 3
4: 2 a 4
5: 2 b 5
6: 3 c 6
I want:
--1-- sort it by the second key in the decreasing order (i.e., the most important attributes should come first)
--2-- select the top 2 (or 10) attributes for each id, i.e.:
id attribute importance
3: 1 c 3
2: 1 b 2
5: 2 b 5
4: 2 a 4
6: 3 c 6
--3-- pivot the above:
id attribute.1 importance.1 attribute.2 importance.2
1 c 3 b 2
2 b 5 a 4
3 c 6 NA NA
It appears that the last operation can be done with something like:
a[, {
    tmp <- .SD[.N:1]
    list(a1 = tmp$attribute[1],
         i1 = tmp$importance[1])
}, by = id]
Is this The Right Way?
How do I do the first two tasks?
I'd do the first two tasks like this:
a[a[, .I[.N:(.N-1)], by=list(id)]$V1]
The inner a[, .I[.N:(.N-1)], by=list(id)] gives you, for every unique group in id, the row indices in the order you require. Then you subset a with its V1 column (which holds those indices).
You'll have to take care of negative indices here, maybe something like:
a[a[, .I[seq.int(.N, max(.N-1L, 1L))], by=list(id)]$V1]
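Putting that together with the example table (a quick check: the inner query returns the row numbers 3, 2, 5, 4, 6, which reproduces the --2-- output above, including the one-row group for id 3):

```r
library(data.table)
a <- data.table(id = c(1,1,1,2,2,3),
                attribute = c("a","b","c","a","b","c"),
                importance = 1:6,
                key = c("id","importance"))

# per group: row indices from last to second-to-last, clamped at 1
idx <- a[, .I[seq.int(.N, max(.N - 1L, 1L))], by = id]$V1
idx
# [1] 3 2 5 4 6
a[idx]
```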