Suppose I have two datasets that I want to left-join,
i <- data.table(id=1:3, k=7:9, l=7:9, m=7:9, n=7:9)
and
x <- data.table(id=c(1,2), x=c(10,20))
To left-join, keeping all lines in i, I execute
x[i, .(id=i.id, k=i.k, l=i.l, m=i.m, n=i.n, x=x.x), on=.(id=id)]
but I wonder whether there is an easier and more efficient way that makes it unnecessary to spell out all the columns from i.
For example, in the reverse case (that is, when I want to keep all rows and columns of x), I could use the := operator, as in x[i, k := i.k, on = .(id = id)]. My understanding is that this also makes things more efficient, because the columns of x do not need to be copied. Is there something comparable for this case?
You can use data.table's setcolorder() after the join:
setcolorder( x[i, on = "id"], c( names(i), "x" ) )
# id k l m n x
# 1: 1 7 7 7 7 10
# 2: 2 8 8 8 8 20
# 3: 3 9 9 9 9 NA
What's wrong with merge?
y <- merge(i, x, all.x = TRUE, by = "id")
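With the sample data above this gives, already in the desired column order and with all rows of i kept (merge.data.table sorts the result by the by column):
y
#    id k l m n  x
# 1:  1 7 7 7 7 10
# 2:  2 8 8 8 8 20
# 3:  3 9 9 9 9 NA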
I would like to join the two data frames:
a <- data.frame(x=c(1,3,5))
b <- data.frame(start=c(0,4),end=c(2,6),y=c("a","b"))
with a condition like (x > start) & (x < end), in order to get a result like this:
#  x    y
#1 1    a
#2 3 <NA>
#3 5    b
I don't want to build a potentially large cartesian product and then keep only the few rows matching the condition, and I'd like a solution using the tidyverse (I am not interested in a solution using SQL, which would be a confession of failure). I thought of the fuzzyjoin package, but I cannot find examples fitting my need: the function applied for the condition takes only two arguments. I also tried to put 'start' and 'end' into a single argument with data.frame(z=I(purrr::map2(b$start,b$end,list)),y=b$y)
# z y
#1 0, 2 a
#2 4, 6 b
but although the data looks fine, fuzzy_left_join doesn't accept it.
I am looking for solutions that work in more general cases (n variables on the LHS, m on the RHS, not necessarily numeric, with arbitrary conditions).
UPDATE
I also want to be able to express conditions like (x == start + 1) | (x == end + 1), which here should give:
# x y
#1 1 a
#2 3 a
#3 5 b
For this case you don't need multi_by or multi_match_fun; this works:
library(fuzzyjoin)
fuzzy_left_join(a, b, by = c(x = "start", x = "end"), match_fun = list(`>`, `<`))
# x start end y
# 1 1 0 2 a
# 2 3 NA NA <NA>
# 3 5 4 6 b
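To get exactly the desired two-column output, the helper columns can be dropped afterwards with dplyr's select():
library(dplyr)
fuzzy_left_join(a, b, by = c(x = "start", x = "end"),
                match_fun = list(`>`, `<`)) %>%
  select(x, y)
#   x    y
# 1 1    a
# 2 3 <NA>
# 3 5    b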
I eventually went through the code of fuzzy_join and found a way to do what I want, even without proper documentation. fuzzy_left_join doesn't work here, but the following does (not really pretty, and it actually does a cartesian product):
library(dplyr)  # for %>% and select()
g <- function(x, y) (x > y[, "start"]) & (x < y[, "end"])
fuzzy_join(a, b, multi_by = list(x = "x", y = c("start", "end")),
           multi_match_fun = g, mode = "left") %>%
  select(x, y)
A data.table approach could be
library(data.table)
name1 <- setdiff(names(setDT(b)), names(setDT(a)))
#perform left outer join and then select required columns
a[b, (name1) := mget(name1), on = .(x > start, x < end)][, .(x, y)]
which gives
x y
1: 1 a
2: 3 <NA>
3: 5 b
Sample data:
a <- data.frame(x = c(1, 3, 5))
b <- data.frame(start = c(0, 4), end = c(2, 6), y = c("a", "b"))
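As an aside, if updating a by reference is not needed, the same result can come from a plain non-equi join in one expression (a sketch; i.x refers to the x column of the outer table a):
b[a, .(x = i.x, y), on = .(start < x, end > x)]
#    x    y
# 1: 1    a
# 2: 3 <NA>
# 3: 5    b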
Update: In case you want to join both data frames on the (x == start + 1) | (x == end + 1) condition, you can try
library(data.table)
DT1 <- as.data.table(a)
DT2 <- as.data.table(b)
#Perform 1st join on "x = start+1" and then another on "x = end+1". Finally row-bind both results.
DT <- rbindlist(list(DT1[DT2[, start_temp := start+1], on = c(x = "start_temp"), .(x, y), nomatch = 0],
DT1[DT2[, end_temp := end+1], on = c(x = "end_temp"), .(x, y), nomatch = 0]))
DT
# x y
#1: 1 a
#2: 5 b
#3: 3 a
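Since the two partial joins are simply row-bound, the original order of a is not preserved. If the order matters, it can be restored afterwards, for example:
setorder(DT, x)
DT
#    x y
# 1: 1 a
# 2: 3 a
# 3: 5 b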
A possible answer, to explain what I am trying to do: extending dplyr in some way. I will be happy to know if there are ways to improve this solution, or problems I didn't see.
The solution avoids the cartesian product, but it duplicates both one of the input data frames and the result into lists of data frames. I didn't include the final selection of columns x and y, which is easy to code.
library(dplyr)

my_left_join <- function(.DATA1, .DATA2, .WHERE)
{
  call <- as.list(match.call())
  df1 <- .DATA1
  df1$._row_ <- seq_len(nrow(df1))
  # split df1 into a list of one-row data frames
  dfl1 <- replyr::replyr_split(df1, "._row_")
  # for each row, filter .DATA2 with the unevaluated .WHERE condition
  eval(substitute(
    dfl2 <- mapply(function(.x)
      {filter(.DATA2, with(.x, WHERE)) %>%
         mutate(._row_ = .x$._row_)}
      , dfl1, SIMPLIFY = FALSE)
    , list(WHERE = call$.WHERE)))
  df2 <- replyr::replyr_bind_rows(dfl2)
  left_join(df1, df2, by = "._row_") %>% select(-._row_)
}
my_left_join(a,b,(x>start)&(x<end))
# x start end y
#1 1 0 2 a
#2 3 NA NA <NA>
#3 5 4 6 b
my_left_join(a,b,(x==(start+1))|(x==(end+1)))
# x start end y
#1 1 0 2 a
#2 3 0 2 a
#3 5 4 6 b
You can try a GenomicRanges solution:
library(GenomicRanges)
# setup GRanges objects
a_gr <- GRanges(1, IRanges(a$x,a$x))
b_gr <- GRanges(1, IRanges(b$start, b$end))
# find overlaps between the two data sets
res <- as.data.frame(findOverlaps(a_gr,b_gr))
# create the expected output
a$y <- NA
a$y[res$queryHits] <- as.character(b$y)[res$subjectHits]
a
x y
1 1 a
2 3 <NA>
3 5 b
I'm using data.table to aggregate, collapse, and group by. I know a method that refers to the column by name directly, but I cannot make it work when referring to the column programmatically (by its position in names(dt)). I know this method:
dt[,X := list(paste(X, collapse = ";")),by = list(Y,Z)]
What I want to do now is:
dt[,names(dt)[1] := list(paste(names(dt)[1], collapse = ";")),by = list(Y,Z)]
But this code just writes the literal string "X" on each line.
Here is an example:
X <- c("a","b","c","d","e","f","g")
Y <- c(1,2,3,4,4,6,4)
Z <- c(10,11,23,8,8,1,3)
dt <- data.table(X,Y,Z)
This is the desired output. I need to refer to the columns programmatically because I'm trying to do this on multiple columns (I have a data frame with 400 columns):
X Y Z
1: a 1 10
2: b 2 11
3: c 3 23
4: d;e 4 8
5: f 6 1
6: g 4 3
You should wrap names(dt)[1] inside get():
dt[,names(dt)[1] := list(paste(get(names(dt)[1]), collapse = ";")),by = list(Y,Z)]
Additionally, if you want to deduplicate your data you can use unique(dt).
To apply your functions to multiple columns, you can use .SD in combination with lapply(). For example, pasting together the first two columns, grouped by Z:
dt[, lapply(.SD, function(x) paste(x, collapse=";")), by=list(Z),.SDcols=names(dt)[1:2]]
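For the 400-column case, the same pattern scales by setting .SDcols to all non-grouping columns (a sketch, assuming Y and Z are the grouping variables):
dt[, lapply(.SD, function(x) paste(x, collapse = ";")),
   by = .(Y, Z), .SDcols = setdiff(names(dt), c("Y", "Z"))]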
An example case is here:
DT = data.table(x=1:4, y=6:9, z=3:6)
setkey(DT, x, y)
Join columns have multiple values:
xc = c(1, 2, 4)
yc = c(6, 9)
DT[J(xc, yc), nomatch=0]
x y z
1: 1 6 3
This use of J() returns only a single row. Actually, I want the join to behave like the %in% operator:
DT[x %in% xc & y %in% yc]
x y z
1: 1 6 3
2: 4 9 6
But using the %in% operator makes the search a vector scan, which is very slow compared to a binary search. In order to get a binary search, I build every possible combination of join values:
xc2 = rep(xc, length(yc))
yc2 = unlist(lapply(yc, rep, length(xc)))
DT[J(xc2, yc2), nomatch=0]
x y z
1: 1 6 3
2: 4 9 6
But building xc2 and yc2 in this way makes the code difficult to read. Is there a better way to get the speed of a binary search with the simplicity of the %in% operator in this case?
Answering to remove this question from the data.table tag's open questions.
The code from Arun's comment, DT[CJ(xc, yc), nomatch=0L], will do the job.
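CJ() (cross join) builds a sorted, keyed data.table from all combinations of its vectors, i.e. exactly the xc2/yc2 table from the question, constructed automatically and joined via binary search:
DT[CJ(xc, yc), nomatch=0L]
#    x y z
# 1: 1 6 3
# 2: 4 9 6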
I am struggling a bit with my tables. I am trying to split some variables (using R), but I am having difficulties with one specific column.
My dataset is like this:
test<-data.frame(
Chrom_no=c(1,1,2,3),
Region=c('12..13','22..23','100','34..36'),
Ref=c('AT','CG','A','AAA'),
Alt=c('TA','GA','T','CGG'),
Prob=c(99,98.7,99,99.9))
I want to separate all the regions that are grouped together. So far, I have solved it for all the columns but the 'Region' one:
ref2 <- strsplit(as.character(test$Ref), '')
alt2<-strsplit(as.character(test$Alt), '')
test2<-data.frame(
Chrom_no=rep(test$Chrom_no, vapply(ref2, FUN=length, FUN.VALUE=integer(1))),
Region=rep(test$Region, vapply(ref2, FUN=length, FUN.VALUE=integer(1))),
Ref=unlist(ref2),
Alt=unlist(alt2),
Prob=rep(test$Prob, vapply(ref2, FUN=length, FUN.VALUE=integer(1))))
I don't know how to fix that column: e.g. for '12..13', 12 should go with Ref=A and 13 with Ref=T (first and second characters, respectively). Things get complicated, as some of the entries span 3 characters (with a corresponding range such as 22..24), and some will span more.
How could I solve this? I have been looking for a solution for the last couple of days, but I am still not sure how to proceed. I apologize if this has already been solved somewhere else.
P.S.: I am aware that in order to strsplit the 'Region' column I need to use '\\..' as the separator.
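A quick check that this pattern splits as intended ('\\..' matches a literal dot followed by any character, so it consumes the whole '..'):
strsplit(as.character(test$Region), '\\..')
# yields "12" "13", then "22" "23", then "100", then "34" "36"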
If I understand your end goal correctly, you can look into using the "data.table" package. With it, you can set up your problem like the following:
library(data.table)
## Change your data.frame to a data.table
DT <- as.data.table(test)
## Convert the relevant columns to be characters instead of factors
DT[, c("Region", "Ref", "Alt") := lapply(.SD, as.character),
.SDcols = c("Region", "Ref", "Alt")]
DT[, list(Chrom_no = rep(Chrom_no, nchar(Ref)), # Expand the Chrom_no
Region = unlist(lapply( # Split Region and use
strsplit(Region, "..", TRUE), # the result to create
function(x) { # the range of values
x <- as.numeric(x) # needed
if (length(x) > 1) seq(x[1], x[2]) else x
})),
Ref = unlist(strsplit(Ref, "")), # Split Ref
Alt = unlist(strsplit(Alt, "")), # Split Alt
Prob = rep(Prob, nchar(Ref)))] # Expand Prob
# Chrom_no Region Ref Alt Prob
# 1: 1 12 A T 99.0
# 2: 1 13 T A 99.0
# 3: 1 22 C G 98.7
# 4: 1 23 G A 98.7
# 5: 2 100 A T 99.0
# 6: 3 34 A C 99.9
# 7: 3 35 A G 99.9
# 8: 3 36 A G 99.9
The above code can probably be streamlined a bit, but I thought this should be enough to get you started.
I have read in a large data file into R using the following command
data <- as.data.set(spss.system.file(paste(path, file, sep = '/')))
The data set contains columns which should not belong, and contain only blanks. This issue has to do with R creating new variables based on the variable labels attached to the SPSS file (Source).
Unfortunately, I have not been able to determine the options necessary to resolve the problem. I have tried all of foreign::read.spss, memisc::spss.system.file, and Hmisc::spss.get, with no luck.
Instead, I would like to read in the entire data set (with ghost columns) and remove unnecessary variables manually. Since the ghost columns contain only blank spaces, I would like to remove any variables from my data.table where the number of unique observations is equal to one.
My data are large, so they are stored in data.table format. I would like to determine an easy way to check the number of unique observations in each column, and drop columns which contain only one unique observation.
require(data.table)
### Create a data.table
dt <- data.table(a = 1:10,
b = letters[1:10],
c = rep(1, times = 10))
### Create a comparable data.frame
df <- data.frame(dt)
### Expected result
unique(dt$a)
### Expected result
length(unique(dt$a))
However, I wish to calculate the number of obs for a large data file, so referencing each column by name is not desired. I am not a fan of eval(parse()).
### I want to determine the number of unique obs in
# each variable, for a large list of vars
lapply(names(df), function(x) {
length(unique(df[, x]))
})
### Unexpected result
length(unique(dt[, 'a', with = F])) # Returns 1
It seems to me the problem is that
dt[, 'a', with = F]
returns an object of class "data.table". It makes sense that the length of this object is 1, since it is a data.table containing 1 variable. We know that data.frames are really just lists of variables, and so in this case the length of the list is just 1.
Here's pseudocode for how I would remedy this, using the data.frame way:
for (x in names(data)) {
unique.obs <- length(unique(data[, x]))
if (unique.obs == 1) {
data[, x] <- NULL
}
}
Any insight as to how I may more efficiently ask for the number of unique observations by column in a data.table would be much appreciated. Alternatively, if you can recommend how to drop observations if there is only one unique observation within a data.table would be even better.
Update: uniqueN
As of version 1.9.6, there is a built in (optimized) version of this solution, the uniqueN function. Now this is as simple as:
dt[ , lapply(.SD, uniqueN)]
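Dropping the single-valued columns then becomes (a sketch; the ..keep prefix, available since data.table 1.10.2, looks keep up in the calling scope):
keep <- names(dt)[sapply(dt, uniqueN) > 1]
dt[, ..keep]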
If you want to find the number of unique values in each column, something like
dt[, lapply(.SD, function(x) length(unique(x)))]
## a b c
## 1: 10 10 1
To get your function to work you need to use with=FALSE within [.data.table, or simply use [[ instead (read fortune(312) as well...)
lapply(names(dt), function(x) nrow(unique(dt[, x, with = FALSE])))  # nrow, since the result is a one-column data.table
or
lapply(names(dt), function(x) length(unique(dt[[x]])))
will work
In one step
dt[, names(dt) := lapply(.SD, function(x) if (length(unique(x)) == 1) NULL else x)]
# or, to avoid calling `.SD`
dt[, Filter(names(dt), f = function(x) length(unique(dt[[x]])) == 1) := NULL]
The approaches in the other answers are good. Another way to add to the mix, just for fun :
for (i in names(DT)) if (length(unique(DT[[i]]))==1) DT[,(i):=NULL]
or, if there may be duplicate column names:
for (i in ncol(DT):1) if (length(unique(DT[[i]]))==1) DT[,(i):=NULL]
NB: (i) on the LHS of := is a trick to use the value of i rather than a column named "i".
Here is a solution to your core problem (I hope I got it right).
require(data.table)
### Create a data.table
dt <- data.table(a = 1:10,
b = letters[1:10],
d1 = "",
c = rep(1, times = 10),
d2 = "")
dt
a b d1 c d2
1: 1 a 1
2: 2 b 1
3: 3 c 1
4: 4 d 1
5: 5 e 1
6: 6 f 1
7: 7 g 1
8: 8 h 1
9: 9 i 1
10: 10 j 1
First, I introduce two columns d1 and d2 that have no values whatsoever. Those you want to delete, right? If so, I just identify those columns and select all other columns in the dt.
only_space <- function(x) {
length(unique(x))==1 && x[1]==""
}
bolCols <- apply(dt, 2, only_space)
dt[, (1:ncol(dt))[!bolCols], with=FALSE]
Somehow, I have the feeling that you could further simplify it...
Output:
a b c
1: 1 a 1
2: 2 b 1
3: 3 c 1
4: 4 d 1
5: 5 e 1
6: 6 f 1
7: 7 g 1
8: 8 h 1
9: 9 i 1
10: 10 j 1
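As for simplifying: sapply() over the data.table treats it directly as a list of columns, avoiding the matrix conversion that apply() performs, and which() turns the logical result into column positions; a sketch:
dt[, which(!sapply(dt, only_space)), with = FALSE]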
There is an easy way to do that using the dplyr library: use the select function, as follows:
library(dplyr)
newdata <- select(old_data, first_variable, second_variable)
Note that you can choose as many variables as you like.
Then you will get the type of data that you want.
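If the goal is still to drop the columns with a single unique value without naming them one by one, recent dplyr (>= 1.0.0) can select by predicate; a sketch:
library(dplyr)
newdata <- old_data %>% select(where(function(x) n_distinct(x) > 1))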