data.table merge produces extra columns [R]

Below I define a master dataset of dimensions 12x5. I divide it into four data.tables and want to merge them back together. There is no row ID overlap between the data.tables, but some column names overlap. When I merge them, merge() doesn't recognize the column name matches and creates new columns for every column in each data.table. The final merged data.table should be 12x5, but it comes out as 12x7. I thought that the all = TRUE argument in data.table's merge() would solve this.
library(data.table)
a <- data.table(id = c(1, 2, 3), C1 = c(1, 2, 3))
b <- data.table(id = c(4, 5, 6), C1 = c(1, 2, 3), C2 = c(2, 3, 4))
c <- data.table(id = c(7, 8, 9), C3 = c(5, 2, 7))
d <- data.table(id = c(10, 11, 12), C3 = c(8, 2, 3), C4 = c(4, 6, 8))
setkey(a, "id")
setkey(b, "id")
setkey(c, "id")
setkey(d, "id")
final <- merge(a, b, all = TRUE)
final <- merge(final, c, all = TRUE)
final <- merge(final, d, all = TRUE)
names(final)
dim(final) # outputs the correct number of rows, but too many columns

The problem is with the way you are using the merge() function.
By default, merge() in the data.table package merges two data.tables on the key columns they share. Suppose you create the 'a' and 'b' data.tables like this:
library(data.table)
a <- data.table(id = c(1, 2, 3), C1 = c(1, 2, 3))
b <- data.table(id = c(4, 5, 6), C1 = c(1, 2, 3), C2 = c(2, 3, 4))
setkey(a, "id")
setkey(b, "id")
where 'a' is going to be like:
id C1
1: 1 1
2: 2 2
3: 3 3
and 'b' is going to be like:
id C1 C2
1: 4 1 2
2: 5 2 3
3: 6 3 4
Now, let's first try your code:
merge(a, b, all = TRUE)
This is the result:
id C1.x C1.y C2
1: 1 1 NA NA
2: 2 2 NA NA
3: 3 3 NA NA
4: 4 NA 1 2
5: 5 NA 2 3
6: 6 NA 3 4
This is because merge() uses only the 'id' field (the key shared between data.tables 'a' and 'b') as the merging column, while adding all non-shared columns to the resulting data.table. Now let's specify explicitly which columns to merge on:
merge(a, b, by = c("id", "C1"), all = TRUE)
Now the result is:
id C1 C2
1: 1 1 NA
2: 2 2 NA
3: 3 3 NA
4: 4 1 2
5: 5 2 3
6: 6 3 4
The same applies to the other merge() calls. So try this:
final <- merge(a, b, by = c("id", "C1"), all = TRUE)
final <- merge(final, c, by = "id", all = TRUE) # here you don't strictly need to specify 'by'
final <- merge(final, d, by = c("id", "C3"), all = TRUE)
dim(final)
[1] 12 5
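As an aside, since the four tables have disjoint ids, this is really a row-stacking problem rather than a merge. A minimal sketch with rbindlist(), assuming the tables defined in the question:
library(data.table)
# fill = TRUE pads columns missing from a table with NA
final <- rbindlist(list(a, b, c, d), fill = TRUE)
setkey(final, id)
dim(final)
# [1] 12  5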

Related

Remove rows of a data frame from another data frame but keep duplicates in R

I'm working in R and I have two data frames: one is the base data frame, and the other has the rows that I need to remove from the base one. But I can't use the setdiff() function, because it removes duplicated rows. Here's an example:
a <- data.frame(var1 = c(1, NA, 2, 2, 3, 4, 5),
                var2 = c(1, 7, 2, 2, 3, 4, 5))
b <- data.frame(id = c(2, 4),
                numero = c(2, 4))
And the result must be:
var1 var2
1 1
NA 7
2 2
3 3
5 5
It must be an efficient algorithm, too, because the base dataframe has 3 million rows with 26 columns.
We may need to create a sequence column before joining:
library(data.table)
setDT(a)[, rn := rowid(var1, var2)][
  !setDT(b)[, rn := rowid(id, numero)],
  on = .(var1 = id, var2 = numero, rn)
][, rn := NULL][]
-output
var1 var2
<num> <num>
1: 1 1
2: NA 7
3: 2 2
4: 3 3
5: 5 5
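For intuition on why the sequence column works: rowid() numbers the duplicates within each group, so the anti-join removes only as many copies of a row as appear in b. An illustrative one-liner:
rowid(c(2, 2, 4))
# [1] 1 2 1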

How to join data to only the first matching row with {data.table} in R

I have a look-up table of "firsts" in column d. For example, the first time the patient was admitted because of a specific disease. I would like to join this back into the main data frame via data.table on multiple other conditions.
My problem is that, unfortunately, the main data.table could have multiple records with identical joining criteria that results in multiple "firsts" per patient after the join. Real world data is messy, people!
Is it possible to do a {data.table} join on only the first matching record?
This is similar to this question, but here the multiple matches are in the main data table. I think that mult only works when there are several matching entries in the table being joined in.
reprex:
library(data.table)
set.seed(1724)
d1 <- data.table(a = c(1, 1, 1),
                 b = c(1, 1, 2),
                 c = sample(1:10, 3))
d2 <- data.table(a = 1, b = 1, d = TRUE)
d2[d1, on = c("a", "b")]
a b d c
1: 1 1 TRUE 4
2: 1 1 TRUE 8
3: 1 2 NA 2
desired output
a b d c
1: 1 1 TRUE 4
2: 1 1 NA 8
3: 1 2 NA 2
library(data.table)
set.seed(1724)
d1 = data.table(a = c(1, 1, 1), b = c(1, 1, 2), c = sample(1:10, 3))
d2 = data.table(a = 1, b = 1, d = TRUE)
d1[, i1 := seq_len(.N), by = c("a", "b")]  # row counter within each (a, b) group
d2[, i2 := seq_len(.N), by = c("a", "b")]
d2[d1, on = c("a", "b", "i2==i1")][, "i2" := NULL][]  # first matches first, second matches second, ...
# a b d c
# <num> <num> <lgcl> <int>
#1: 1 1 TRUE 4
#2: 1 1 NA 8
#3: 1 2 NA 2
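A side note on the question's premise: mult applies per row of the i table (d1 here), and each d1 row matches at most one row of d2, so mult = "first" on its own should leave both (1, 1) rows matched. A quick sketch to confirm:
d2[d1, on = c("a", "b"), mult = "first"]  # same three rows as the plain join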
One way would be to turn the values to NA after the join.
library(data.table)
d3 <- d2[d1, on = c("a", "b")]
d3[, d := replace(d, seq_len(.N) != 1, NA), .(a, b)]  # keep d only on the first row per (a, b) group
d3
# a b d c
#1: 1 1 TRUE 4
#2: 1 1 NA 8
#3: 1 2 NA 2
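An equivalent variant (my sketch, not part of the original answer) uses duplicated() with its by argument instead of replace():
d3 <- d2[d1, on = c("a", "b")]
d3[duplicated(d3, by = c("a", "b")), d := NA]  # blank out d on all but the first row per (a, b)
d3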
The easy solution would be to index every row and join on that as well (d2 here is a filtered version of d1):
library(data.table)
set.seed(1724)
d1 <- data.table(a = c(1, 1, 1),
                 b = c(1, 1, 2),
                 c = sample(1:10, 3))
d1[, rid := seq_len(.N)]  # row index within d1
d2 <- d1[, .SD[1], by = c("a"), .SDcols = c("b", "rid")][, d := TRUE] # UPDATE
d2[d1, on = c("a", "b", "rid")]
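For reference, given the seed above and the c values (4, 8, 2) shown in the earlier outputs, this join should return:
#        a     b   rid      d     c
# 1:     1     1     1   TRUE     4
# 2:     1     1     2     NA     8
# 3:     1     2     3     NA     2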

Using a column as a column index to extract value from a data frame in R

I am trying to use the values in one column as column indices to extract values from a data frame. My problem is similar to this topic on R-bloggers. Copying the script here:
df <- data.frame(x = c(1, 2, 3, 4),
                 y = c(5, 6, 7, 8),
                 choice = c("x", "y", "x", "z"),
                 stringsAsFactors = FALSE)
However, instead of having column names in choice, I have column index numbers, so that my data frame looks like this:
df <- data.frame(x = c(1, 2, 3, 4),
                 y = c(5, 6, 7, 8),
                 choice = c(1, 2, 1, 3),
                 stringsAsFactors = FALSE)
I tried using this solution:
df$newValue <-
df[cbind(
seq_len(nrow(df)),
match(df$choice, colnames(df))
)]
Instead of giving me an output that looks like this:
#   x y choice newValue
# 1 1 5      1        1
# 2 2 6      2        6
# 3 3 7      1        3
# 4 4 8      3       NA
my newValue column returns all NAs:
#   x y choice newValue
# 1 1 5      1       NA
# 2 2 6      2       NA
# 3 3 7      1       NA
# 4 4 8      3       NA
What should I modify in the code so that it would read my choice column as column index?
Since the column numbers to extract are already in the data, we don't need match() here. However, the data also contains the 'choice' column itself, which should never be a target for extraction, so we need to turn values that are out of range into NA before subsetting the data frame.
mat <- cbind(seq_len(nrow(df)), df$choice)  # (row, column) index pairs
mat[mat[, 2] > (ncol(df) - 1), ] <- NA      # indices that point at or past 'choice' become NA
df$newValue <- df[mat]
df
#   x y choice newValue
# 1 1 5      1        1
# 2 2 6      2        6
# 3 3 7      1        3
# 4 4 8      3       NA
data
df <- data.frame(x = c(1, 2, 3, 4),
                 y = c(5, 6, 7, 8),
                 choice = c(1, 2, 1, 3))
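An alternative sketch (assuming, as in the answer above, that only the columns to the left of 'choice' are valid targets) maps out-of-range indices to NA up front:
idx <- ifelse(df$choice <= ncol(df) - 1, df$choice, NA)  # invalid column numbers become NA
df$newValue <- df[cbind(seq_len(nrow(df)), idx)]         # NA index rows yield NA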

Combining 3 versions of the same table in R

I scraped some data from a website, but it was really janky and for some reason had little mistakes in it. So I scraped the same data 3 times and produced 3 tables that look like:
library(data.table)
df1 <- data.table(name = c('adam', 'bob', 'carl', 'dan'),
                  id = c(1, 2, 3, 4),
                  thing = c(2, 1, 3, 4),
                  otherthing = c(2, 1, 3, 4))
df2 <- data.table(name = c('adam', 'bob', 'carl', 'dan'),
                  id = c(1, 2, 3, 4),
                  thing = c(1, 1, 1, 4),
                  otherthing = c(2, 2, 3, 4))
df3 <- data.table(name = c('adam', 'bob', 'carl', 'dan'),
                  id = c(1, 2, 3, 4),
                  thing = c(1, 1, 3, 4),
                  otherthing = c(2, 1, 3, 3))
Except I have many more columns. I want to combine the 3 tables together, and when the values for "thing", "otherthing", etc. conflict, I want to pick the value that appears in at least 2 of the 3 tables, and perhaps return NA if there is no 2/3 majority. I'm confident the "name" and "id" fields are good, and they're what I want to merge on.
I was considering renaming the columns to "thing1", "thing2" and "thing3" in the three tables respectively, merging them together, and then writing loops over the names. Is there a more elegant solution? It needs to work for 300+ value columns, although I'm not too worried about speed.
In this example, I think the result should be:
final_result <- data.table(name = c('adam', 'bob', 'carl', 'dan'),
                           id = c(1, 2, 3, 4),
                           thing = c(1, 1, 3, 4),
                           otherthing = c(2, 1, 3, 4))
To generalize the approach from @IceCreamToucan, we can use:
library(dplyr)
n_mode <- function(...) {
  x <- table(c(...))
  if (any(x > 1)) as.numeric(names(x)[which.max(x)])
  else NA_real_  # type-stable NA
}
bind_rows(df1, df2, df3) %>%
  group_by(name, id) %>%
  summarise_all(n_mode)
N.B. Be careful with your namespace and how you name the function; prefer something like n_mode() to avoid a conflict with base::mode(). Finally, if you extend this to more data.frames, you probably want to put them in a list. If that's not possible or practical, you could replace the bind_rows() call with purrr::map_df(ls(pattern = "^df[[:digit:]]+"), get).
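A quick illustrative check of n_mode() on its own:
n_mode(c(1, 1, 2))  # 1, the 2/3 majority value
n_mode(c(1, 2, 3))  # NA, since no value appears more than once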
A data.table version of Jason's solution (you should leave his as accepted):
library(data.table)
n_mode <- function(x) {
  x <- table(x)
  if (any(x > 1)) as.numeric(names(x)[which.max(x)])
  else NA_real_  # type-stable NA so every group returns a double
}
my_list <- list(df1, df2, df3)
rbindlist(my_list)[, lapply(.SD, n_mode), .(name, id)]
# name id thing otherthing
# 1: adam 1 1 2
# 2: bob 2 1 1
# 3: carl 3 3 3
# 4: dan 4 4 4
Here's the output of rbindlist(). Hopefully this makes it clearer why taking n_mode() of all the columns, grouped by name and id, gives the output you want.
rbindlist(my_list)[order(name, id)]
# name id thing otherthing
# 1: adam 1 2 2
# 2: adam 1 1 2
# 3: adam 1 1 2
# 4: bob 2 1 1
# 5: bob 2 1 2
# 6: bob 2 1 1
# 7: carl 3 3 3
# 8: carl 3 1 3
# 9: carl 3 3 3
# 10: dan 4 4 4
# 11: dan 4 4 4
# 12: dan 4 4 3

Row-wise sum for columns with certain names

I have some sample data:
SampleID a b d f ca k l cb
1 0.1 2 1 2 7 1 4 3
2 0.2 3 2 3 4 2 5 5
3 0.5 4 3 6 1 3 9 2
I need to find the row-wise sum of columns that have something in common in their names, e.g. a row-wise sum(a, ca) or a row-wise sum(b, cb). The problem is that I have a large data.frame, and ideally I would like to specify what the column names have in common so that the code picks only those columns to sum.
Thank you beforehand for any assistance.
We can select the columns that contain 'a' with grep(), subset those columns, and apply rowSums(); the same works for the 'b' columns.
# exclude SampleID from the name matching, then shift the indices back by one
rowSums(df1[grep('a', names(df1)[-1]) + 1])
rowSums(df1[grep('b', names(df1)[-1]) + 1])
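If the common part could accidentally match other columns, you can anchor the pattern instead of dropping SampleID by position; for example, to sum only columns whose names end in 'a' or 'b' (a sketch assuming the same df1):
rowSums(df1[grep('a$', names(df1))])  # columns ending in 'a': a, ca
rowSums(df1[grep('b$', names(df1))])  # columns ending in 'b': b, cb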
If you want the output as a data frame, try using dplyr
# Recreating your sample data
df <- data.frame(SampleID = c(1, 2, 3),
                 a = c(0.1, 0.2, 0.5),
                 b = c(2, 3, 4),
                 d = c(1, 2, 3),
                 f = c(2, 3, 6),
                 ca = c(7, 4, 1),
                 k = c(1, 2, 3),
                 l = c(4, 5, 9),
                 cb = c(3, 5, 2))
Process the data
# load dplyr
library(dplyr)
# Sum across columns 'a' and 'ca' (sum(a, ca))
df2 <- df %>%
  select(contains('a'), -SampleID) %>% # 'select' chooses the columns you want
  mutate(row_sum = rowSums(.))         # 'mutate' adds a 'row_sum' column with the sum of the selected columns; use 'transmute' instead to drop them
df2 # have a look
a ca row_sum
1 0.1 7 7.1
2 0.2 4 4.2
3 0.5 1 1.5
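The same pattern works for the 'b' columns; a sketch using transmute() to keep only the new sum:
df %>%
  transmute(row_sum_b = rowSums(select(., contains('b'))))
#   row_sum_b
# 1         5
# 2         8
# 3         6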
