group_by() and summarise() with all combinations (including non-existing combinations) - r

Basically, I want all unique combinations of columns i and j, with NA in column k where a combination is not available (similar to doing a group_by & summarise, but keeping all possible combinations).
Is there any existing function from the tidyverse or else that does the same thing as the one that I wrote below?
library(tidyverse)
df <- tibble(
i = c("a", "a", "b"),
j = c("x", "y", "x"),
k = c(100, 300, 20)
)
# I often write this chunk below after group_by & summarise
df %>%
spread(j, k) %>%
gather(j, k, -i)
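(As a side note: spread() and gather() have since been superseded by pivot_wider() and pivot_longer() in tidyr 1.0.0; a rough equivalent of the chunk above, assuming tidyr >= 1.0.0, would be:)
df %>%
  pivot_wider(names_from = j, values_from = k) %>%
  pivot_longer(-i, names_to = "j", values_to = "k")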

You can use the complete() function from tidyr (part of the tidyverse):
http://tidyr.tidyverse.org/reference/complete.html
df %>% complete(i,j)
This gives you all the combinations of columns i and j, with NA in k for the missing ones.
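With the example data above, that should print roughly as:
df %>% complete(i, j)
# # A tibble: 4 x 3
#   i     j         k
#   <chr> <chr> <dbl>
# 1 a     x       100
# 2 a     y       300
# 3 b     x        20
# 4 b     y        NA
complete() also takes a fill argument if you want something other than NA for the missing cells, e.g. complete(i, j, fill = list(k = 0)).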

As cross_join() is still an open tidyverse feature request on GitHub, expand.grid(unique(df$i), unique(df$j), stringsAsFactors = FALSE) can be used to create a data.frame with all unique possible combinations:
df %>%
right_join(expand.grid(unique(df$i), unique(df$j), stringsAsFactors = FALSE),
by = c("i" = "Var1", "j" = "Var2"))
i j k
1 a x 100
2 b x 20
3 a y 300
4 b y NA
So, OP's approach using spread() and gather() appears to be more concise.
Personally, I do prefer the CJ() function from the data.table package:
library(data.table)
setDT(df)[CJ(i = i, j = j, unique = TRUE), on = .(i, j)]
i j k
1: a x 100
2: a y 300
3: b x 20
4: b y NA
CJ() can be used as a replacement for expand.grid() together with right_join():
df %>%
right_join(data.table::CJ(i = .$i, j = .$j, unique = TRUE))
i j k
1 a x 100
2 a y 300
3 b x 20
4 b y NA
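If you prefer to stay within the tidyverse, tidyr::crossing() can play the same role as expand.grid()/CJ() here (a sketch, using the same df as above):
df %>%
  right_join(crossing(i = unique(df$i), j = unique(df$j)),
             by = c("i", "j"))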

Related

Is there a way to replace rows in one dataframe with another in R?

I'm trying to figure out how to replace rows in one dataframe with another by matching the values of one of the columns. Both dataframes have the same column names.
Ex:
df1 <- data.frame(x = c(1,2,3,4), y = c("a", "b", "c", "d"))
df2 <- data.frame(x = c(1,2), y = c("f", "g"))
Is there a way to replace the rows of df1 with the same row in df2 where they share the same x variable? It would look like this.
data.frame(x = c(1,2,3,4), y = c("f","g","c","d"))
I've been working on this for a while and this is the closest I've gotten -
df1[which(df1$x %in% df2$x),]$y <- df2[which(df1$x %in% df2$x),]$y
But it just replaces the values with NA.
Does anyone know how to do this?
We can use match():
inds <- match(df1$x, df2$x)
df1$y[!is.na(inds)] <- df2$y[na.omit(inds)]
df1
# x y
#1 1 f
#2 2 g
#3 3 c
#4 4 d
First off, well done on producing a nice reproducible example that's directly copy-pastable. That always helps, especially with an example of the expected output. Nice one!
You have several options, but lets look at why your solution doesn't quite work:
First of all, I tried copy-pasting your last line into a new session and got the dreaded factor-error:
Warning message:
In `[<-.factor`(`*tmp*`, iseq, value = 1:2) :
invalid factor level, NA generated
If we look at your data frames df1 and df2 with the str function, you will see that they do not contain text but factors. Factors are not text - in short, they represent categorical data (male vs. female; grades A, B, C, D, and F; etc.) and are really integers that carry a text label. So that could be your issue.
Running your code gives a warning because you are trying to import new factor levels (labels) into df1 that don't exist there. R doesn't know what to do with them, so it just inserts NA values.
As r2evans answered, he used stringsAsFactors = FALSE to stop strings being converted to factors - you can even go as far as disabling the conversion on a session-wide basis with options(stringsAsFactors = FALSE) (and it is the default behaviour from R 4.0.0 onwards - yay!).
After disabling stringsAsFactors, your code works - or does it? Try this on for size:
df2 <- df2[c(2,1),]
df1[which(df1$x %in% df2$x),]$y <- df2[which(df1$x %in% df2$x),]$y
What's in df1 now? Not quite right anymore.
In the first line, I swapped the two rows in df2 and lo and behold, the replaced values in df1 were swapped. Why is that?
Let's deconstruct your statement df2[which(df1$x %in% df2$x),]$y
The call df1$x %in% df2$x returns a logical vector (boolean) of which elements in df1$x are found in df2 - i.e. the first two and not the last two. But it doesn't tell you which positions in the first vector correspond to which in the second.
Calling which(df1$x %in% df2$x) then reduces the logical vector to which indices were TRUE. Again, we do not know which elements correspond to which.
For solutions, I would recommend r2evans, as it doesn't rely on extra packages (although data.table or dplyr are two powerful packages to get to know).
In his solution, he uses merge to perform a "full join" which matches rows based on the value, rather than - well, what you did. With transform, he assigns new variables within the context of the data.frame returned from the merge function called in the first argument.
I think what you need here is a "merge" or "join" operation.
(I add stringsAsFactors=FALSE to the frames so that the merging and later work is without any issue, as factors can be disruptive sometimes.)
Base R:
df1 <- data.frame(x = c(1,2,3,4), y = c("a", "b", "c", "d"), stringsAsFactors = FALSE)
df2 <- data.frame(x = c(1,2), y = c("f", "g"), stringsAsFactors = FALSE)
merge(df1, df2, by = "x", all = TRUE)
# x y.x y.y
# 1 1 a f
# 2 2 b g
# 3 3 c <NA>
# 4 4 d <NA>
transform(merge(df1, df2, by = "x", all = TRUE), y = ifelse(is.na(y.y), y.x, y.y))
# x y.x y.y y
# 1 1 a f f
# 2 2 b g g
# 3 3 c <NA> c
# 4 4 d <NA> d
transform(merge(df1, df2, by = "x", all = TRUE), y = ifelse(is.na(y.y), y.x, y.y), y.x = NULL, y.y = NULL)
# x y
# 1 1 f
# 2 2 g
# 3 3 c
# 4 4 d
Dplyr:
library(dplyr)
full_join(df1, df2, by = "x") %>%
mutate(y = coalesce(y.y, y.x)) %>%
select(-y.x, -y.y)
# x y
# 1 1 f
# 2 2 g
# 3 3 c
# 4 4 d
A join option with data.table where we join on the 'x' column, assign the values of 'y' in second dataset (i.y) to the first one with :=
library(data.table)
setDT(df1)[df2, y := i.y, on = .(x)]
NOTE: It is better to use stringsAsFactors = FALSE (in R 4.0.0 it is the default) or else all the factor levels need to be common to both datasets.
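As a quick check (a sketch, assuming the data.frames were built with stringsAsFactors = FALSE as noted), df1 after the update by reference should look like:
df1
#    x y
# 1: 1 f
# 2: 2 g
# 3: 3 c
# 4: 4 d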

Why is it so hard to pass a non-aggregate function to dcast?

I am using the data.table package for a table like this:
DT <- data.table(id=rep(1:100, each=50),
grp=rep(letters[1:4], each=1250),
time=rep(1:50,100),
outcome=rnorm(5000),
seconds=rep(1:500,10),
weights=rnorm(5000),
response=rep(1:200, each=25),
key=c("grp", "time"))
I would like to create a new (possibly rbindlisted) data table of some summary statistics from this table. I first created two intermediary tables a and b,
a <- DT[, list(mean = weighted.mean(outcome, weights),
median=median(outcome),seconds), by=c("grp","time")]
b <- DT[, list(mean=weighted.mean(response, seconds),
median=median(response)), by=c("grp","time")]
and then am trying to rowbind these together across all groups but still preserve the grouping along the rows. This does not work:
DTfinal <- data.table(DT$grp, DT$time,
outcomemean=a$mean, responsemean=b$mean,
outcomemedian=a$median, responsemedian=b$median)
I don't think a merge works since a and b have different lengths. Rowbinding a and b also mixes up the different means and medians of a and b; ideally I would like an rbindlist that adds some kind of suffix to each column, like c(".a", ".b").
Update:
I get an error (since a and b have different dimensions) doing
DTfinal <- rbindlist(setNames(list(a[, c("grp", "time", "mean", "median"),
with = FALSE],
b[, c("grp", "time", "mean", "median"),
with = FALSE]),
c("a", "b")),
idcol= "id")
dcast(DTfinal, grp + time ~id, value.var = c('mean', 'median'))
where it returns
Aggregate function missing, defaulting to 'length'
We can use rbindlist after placing the datasets in a list
DTfinal <- rbindlist(list(a,b))
dim(DTfinal)
#[1] 400 4
dim(a)
#[1] 200 4
dim(b)
#[1] 200 4
If the two datasets have different numbers of columns and we have a vector of column names that we need to keep:
nm1 <- intersect(names(a), names(b))
rbindlist(list(a[, nm1, with = FALSE], b[, nm1, with = FALSE]), idcol= "id")
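If listing the common columns by hand feels fragile, rbindlist() can also line columns up by name and pad the missing ones with NA via its use.names and fill arguments (a sketch; note this keeps all columns rather than only the shared ones):
rbindlist(list(a = a, b = b), use.names = TRUE, fill = TRUE, idcol = "id")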
Update
If we need to convert to 'wide' format
DTfinal <- rbindlist(setNames(list(a,b), c("a", "b")), idcol= "id")
dcast(DTfinal, grp + time ~id, value.var = c('mean', 'median'))
# grp time mean_a mean_b median_a median_b
# 1: a 1 0.52171471 25.99502 -0.06558068 25
# 2: a 2 0.36445108 25.99010 0.13518412 25
# 3: a 3 0.08993721 25.98522 0.20128790 25
# 4: a 4 -64.04617391 25.98039 0.40999376 25
# 5: a 5 0.81730847 25.97561 -0.03481697 25
# ---
#196: d 46 1.62818374 176.67568 -0.26695999 176
#197: d 47 -1.45259871 176.67340 0.14893356 176
#198: d 48 9.59796683 176.67114 -0.05834959 176
#199: d 49 -2.74285453 176.66890 -0.22094347 176
#200: d 50 1.22109043 176.66667 -0.08172928 176
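Note that dcast() only emits the "Aggregate function missing, defaulting to 'length'" message when some grp/time/id combination maps to more than one row. If that is expected, you can silence it by supplying an aggregate explicitly (a sketch, assuming mean is an acceptable summary):
dcast(DTfinal, grp + time ~ id, value.var = c("mean", "median"),
      fun.aggregate = mean)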

Fastest way to filter a data.frame list column contents in R / Rcpp

I have a data.frame:
df <- structure(list(id = 1:3, vars = list("a", c("a", "b", "c"), c("b",
"c"))), .Names = c("id", "vars"), row.names = c(NA, -3L), class = "data.frame")
with a list column (each with a character vector):
> str(df)
'data.frame': 3 obs. of 2 variables:
$ id : int 1 2 3
$ vars:List of 3
..$ : chr "a"
..$ : chr "a" "b" "c"
..$ : chr "b" "c"
I want to filter the data.frame according to setdiff(vars,remove_this)
library(dplyr)
library(tidyr)
res <- df %>% mutate(vars = lapply(df$vars, setdiff, "a"))
which gets me this:
> res
id vars
1 1
2 2 b, c
3 3 b, c
But to drop the character(0) vars I have to do something like:
res %>% unnest(vars) # and then do the equivalent of nest(vars) again after...
Actual datasets:
560K rows and 3800K rows that also have 10 more columns (to carry along).
(this is quite slow, which leads to question...)
What is the Fastest way to do this in R?
Is there a dplyr/ data.table/ other faster method?
How to do this with Rcpp?
UPDATE/EXTENSION:
can the column modification be done in place rather than by copying the lapply(vars, setdiff(... result?
what's the most efficient way to filter out vars == character(0) if it must be a separate step?
Setting aside any algorithmic improvements, the analogous data.table solution is automatically going to be faster because you won't have to copy the entire thing just to add a column:
library(data.table)
dt = as.data.table(df) # or use setDT to convert in place
dt[, newcol := lapply(vars, setdiff, 'a')][sapply(newcol, length) != 0]
# id vars newcol
#1: 2 a,b,c b,c
#2: 3 b,c b,c
You can also delete the original column (with basically 0 cost), by adding [, vars := NULL] at the end). Or you can simply overwrite the initial column if you don't need that info, i.e. dt[, vars := lapply(vars, setdiff, 'a')].
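On R >= 3.2.0, base lengths() is a faster drop-in for sapply(newcol, length), so the filtering step could also be written as (a minor variation, not from the original answer):
dt[lengths(newcol) != 0]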
Now as far as algorithmic improvements go, assuming your id values are unique for each vars (and if not, add a new unique identifier), I think this is much faster and automatically takes care of the filtering:
dt[, unlist(vars), by = id][!V1 %in% 'a', .(vars = list(V1)), by = id]
# id vars
#1: 2 b,c
#2: 3 b,c
To carry along the other columns, I think it's easiest to simply merge back:
dt[, othercol := 5:7]
# notice the keyby
dt[, unlist(vars), by = id][!V1 %in% 'a', .(vars = list(V1)), keyby = id][dt, nomatch = 0]
# id vars i.vars othercol
#1: 2 b,c a,b,c 6
#2: 3 b,c b,c 7
Here's another way:
# prep
DT <- data.table(df)
DT[,vstr:=paste0(sort(unlist(vars)),collapse="_"),by=1:nrow(DT)]
setkey(DT,vstr)
get_badkeys <- function(x)
unlist(sapply(1:length(x),function(n) combn(sort(x),n,paste0,collapse="_")))
# choose values to exclude
baduns <- c("a","b")
# subset
DT[!J(get_badkeys(baduns))]
This is fairly fast, but it takes up your key.
Benchmarks. Here's a made-up example:
Candidates:
hannahh <- function(df,baduns){
df %>%
mutate(vars = lapply(.$vars, setdiff, baduns)) %>%
filter(lengths(vars) > 0)
}
eddi <- function(df,baduns){
dt = as.data.table(df)
dt[,
unlist(vars)
, by = id][!V1 %in% baduns,
.(vars = list(V1))
, keyby = id][dt, nomatch = 0]
}
stevenb <- function(df,baduns){
df %>%
rowwise() %>%
do(id = .$id, vars = .$vars, newcol = setdiff(.$vars, baduns)) %>%
mutate(length = length(newcol)) %>%
ungroup() %>%
filter(length > 0)
}
frank <- function(df,baduns){
DT <- data.table(df)
DT[,vstr:=paste0(sort(unlist(vars)),collapse="_"),by=1:nrow(DT)]
setkey(DT,vstr)
DT[!J(get_badkeys(baduns))]
}
Simulation:
nvals <- 4
nbads <- 2
maxlen <- 4
nobs <- 1e4
valset <- letters[1:nvals]  # value set; its definition was missing above, so this line is an assumed reconstruction
exdf <- data.table(
  id = 1:nobs,
  vars = replicate(nobs, list(sample(valset, sample(maxlen, 1))))
)
setDF(exdf)
baduns <- valset[1:nbads]
Results:
system.time(frank_res <- frank(exdf,baduns))
# user system elapsed
# 0.24 0.00 0.28
system.time(hannahh_res <- hannahh(exdf,baduns))
# 0.42 0.00 0.42
system.time(eddi_res <- eddi(exdf,baduns))
# 0.05 0.00 0.04
system.time(stevenb_res <- stevenb(exdf,baduns))
# 36.27 55.36 93.98
Checks:
identical(sort(frank_res$id),eddi_res$id) # TRUE
identical(unlist(stevenb_res$id),eddi_res$id) # TRUE
identical(unlist(hannahh_res$id),eddi_res$id) # TRUE
Discussion:
For eddi() and hannahh(), the results scarcely change with nvals, nbads and maxlen. In contrast, when baduns has more than about 20 elements, frank() becomes incredibly slow (like 20+ sec); it also scales with nbads and maxlen a little worse than the other two.
Scaling up nobs, eddi()'s lead over hannahh() stays the same, at about 10x. Against frank(), it sometimes shrinks and sometimes stays the same. In the best nobs = 1e5 case for frank(), eddi() is still 3x faster.
If we switch from a valset of characters to something that frank() must coerce to a character for its by-row paste0 operation, both eddi() and hannahh() beat it as nobs grows.
Benchmarks for doing this repeatedly. This is probably obvious, but if you have to do this "many" times (...how many is hard to say), it's better to create the key column once than to redo the unlist-and-subset work for each set of baduns. In the simulation above, eddi() is about 5x as fast as frank() for a single pass, so I'd only go for frank() if I was doing this subsetting 10+ times.
maxbadlen <- 2
set_o_baduns <- replicate(10,sample(valset,size=sample(maxbadlen,1)))
system.time({
DT <- data.table(exdf)
DT[,vstr:=paste0(sort(unlist(vars)),collapse="_"),by=1:nrow(DT)]
setkey(DT,vstr)
for (i in 1:10) DT[!J(get_badkeys(set_o_baduns[[i]]))]
})
# user system elapsed
# 0.29 0.00 0.29
system.time({
dt = as.data.table(exdf)
for (i in 1:10) dt[,
unlist(vars), by = id][!V1 %in% set_o_baduns[[i]],
.(vars = list(V1)), keyby = id][dt, nomatch = 0]
})
# user system elapsed
# 0.39 0.00 0.39
system.time({
for (i in 1:10) hannahh(exdf,set_o_baduns[[i]])
})
# user system elapsed
# 4.10 0.00 4.13
So, as expected, frank() takes very little time for additional evaluations, while eddi() and hannahh() grow linearly.
Here's another idea:
df %>%
rowwise() %>%
do(id = .$id, vars = .$vars, newcol = setdiff(.$vars, "a")) %>%
mutate(length = length(newcol)) %>%
ungroup()
Which gives:
# id vars newcol length
#1 1 a 0
#2 2 a, b, c b, c 2
#3 3 b, c b, c 2
You could then filter on length > 0 to keep only non-empty newcol
df %>%
rowwise() %>%
do(id = .$id, vars = .$vars, newcol = setdiff(.$vars, "a")) %>%
mutate(length = length(newcol)) %>%
ungroup() %>%
filter(length > 0)
Which gives:
# id vars newcol length
#1 2 a, b, c b, c 2
#2 3 b, c b, c 2
Note: As mentioned by @Arun in the comments, this approach is quite slow. You are better off with the data.table solutions.

Perform a semi-join with data.table

How do I perform a semi-join with data.table? A semi-join is like an inner join except that it only returns the columns of X (not also those of Y), and does not repeat the rows of X to match the rows of Y. For example, the following code performs an inner join:
x <- data.table(x = 1:2, y = c("a", "b"))
setkey(x, x)
y <- data.table(x = c(1, 1), z = 10:11)
x[y]
# x y z
# 1: 1 a 10
# 2: 1 a 11
A semi-join would return just x[1]
More possibilities :
w = unique(x[y,which=TRUE]) # the row numbers in x which have a match from y
x[w]
If there are duplicate key values in x, then that needs :
w = unique(x[y,which=TRUE,allow.cartesian=TRUE])
x[w]
Or, the other way around :
setkey(y,x)
w = !is.na(y[x,which=TRUE,mult="first"])
x[w]
If nrow(x) << nrow(y) then the y[x] approach should be faster.
If nrow(x) >> nrow(y) then the x[y] approach should be faster.
But the anti anti join appeals too :-)
One solution I can think of is:
tmp <- x[!y]
x[!tmp]
In data.table, you can have another data.table as the i expression (i.e., the first argument to data.table's [ method), and that will perform a join, e.g.:
x <- data.table(x = 1:10, y = letters[1:10])
setkey(x, x)
y <- data.table(x = c(1,3,5,1), z = 1:4)
> x[y]
x y z
1: 1 a 1
2: 3 c 2
3: 5 e 3
4: 1 a 4
The ! before the i expression is an extension of the syntax above that performs a 'not-join', as described on p. 11 of the data.table documentation. So the first statement evaluates to a subset of x that doesn't have any rows where the key (column x) is present in y:
> x[!y]
x y
1: 2 b
2: 4 d
3: 6 f
4: 7 g
5: 8 h
6: 9 i
7: 10 j
It is similar to setdiff in this regard. And therefore the second statement returns all the rows in x where the key is present in y.
The ! feature was added in data.table 1.8.4 with the following note in NEWS:
o A new "!" prefix on i signals 'not-join' (a.k.a. 'not-where'), #1384i.
DT[-DT["a", which=TRUE, nomatch=0]] # old not-join idiom, still works
DT[!"a"] # same result, now preferred.
DT[!J(6),...] # !J == not-join
DT[!2:3,...] # ! on all types of i
DT[colA!=6L | colB!=23L,...] # multiple vector scanning approach (slow)
DT[!J(6L,23L)] # same result, faster binary search
'!' has been used rather than '-' :
* to match the 'not-join'/'not-where' nomenclature
* with '-', DT[-0] would return DT rather than DT[0] and not be backwards
compatible. With '!', DT[!0] returns DT both before (since !0 is TRUE in
base R) and after this new feature.
* to leave DT[+J...] and DT[-J...] available for future use
For some reason, the following doesn't work: x[!(x[!y])] - probably data.table is too smart about parsing the argument.
P.S. As Josh O'Brien pointed in another answer, a one-line would be x[!eval(x[!y])].
I'm confused with all the not-joins above, isn't what you want simply:
unique(x[y, .SD])
# x y
#1: 1 a
If x can have duplicate keys, then you can unique y instead:
## Create an example data.table with a three-times-repeated first row
x <- data.table(x = c(1,1,1,2), y = c("a", "a", "a", "b"))
setkey(x, x)
y <- data.table(x = c(1, 1), z = 10:11)
setkey(y, x)
x[eval(unique(y, by = key(y))), .SD] # data.table >= 1.9.8 requires by=key(y)
# x y
# 1: 1 a
# 2: 1 a
# 3: 1 a
Update. Based on all the discussion here, I would do something like this, which should be fast and work in the most general case:
x[eval(unique(y[, key(x), with = FALSE]))]
Here is another, more direct solution:
unique(x[eval(y$x)])
It's more direct and runs faster - here is the comparison in run times with my previous solution:
# Generate some large data
N <- 1000000 * 26
x <- data.table(x = 1:N, y = letters, z = rnorm(N))
setkey(x, x)
y <- data.table(x = sample(N, N/10, replace = TRUE), z = sample(letters, N/10, replace = TRUE))
setkey(y, x)
system.time(r1 <- x[!eval(x[!y])])
user system elapsed
7.772 1.217 11.998
system.time(r2 <- unique(x[eval(y$x)]))
user system elapsed
0.540 0.142 0.723
In a more general case, you can do something like
x[eval(y[, key(x), with = FALSE])]
I tried to write a method that doesn't use any names, which are downright confusing in the OP's example.
sJ <- function(x,y){
ycols <- 1:min(ncol(y),length(key(x)))
yjoin <- unique(y[, ..ycols])
yjoin
}
x[eval(sJ(x,y))]
For Victor's simpler example, this gives the desired output:
x y
1: 1 a
2: 3 c
3: 5 e
This is ~30% slower than Victor's way.
EDIT: And Victor's approach, taking unique before joining, is quite a bit faster:
N <- 1e5*26
x <- data.table(x = 1:N, y = letters, z = rnorm(N))
setkey(x, x)
y <- data.table(x = sample(N, N/10, replace = TRUE), z = sample(letters, N/10, replace = TRUE))
setkey(y, x)
require(microbenchmark)
microbenchmark(
sJ=x[eval(sJ(x,y))],
dolla=unique(x[eval(y$x)]),
brack=x[eval(unique(y[['x']]))]
)
Unit: milliseconds
expr min lq median uq max neval
# sJ 120.22700 125.04900 126.50704 132.35326 217.6566 100
# dolla 105.05373 108.33804 109.16249 118.17613 285.9814 100
# brack 53.95656 61.32669 61.88227 65.21571 235.8048 100
I'm guessing the [[ vs $ doesn't help the speed, but didn't check.
This thread is so old. But I noticed that the solution can be easily derived from the definition of semi-join given in the original post:
"A semi-join is like an inner join except that it only returns the
columns of X (not also those of Y), and does not repeat the rows of X
to match the rows of Y"
library(data.table)
dt1 <- data.table(ProdId = 1:4,
Product = c("Bread", "Cheese", "Pizza", "Butter"))
dt2 <- data.table(ProdId = c(1, 1, 3, 4, 5),
Company = c("A", "B", "C", "D", "E"))
# semi-join
unique(merge(dt1, dt2, by="ProdId")[, names(dt1), with=FALSE])
ProdId Product
1: 1 Bread
2: 3 Pizza
3: 4 Butter
I've simply applied the syntax of an inner join, followed by filtering to the columns of the first table only, with unique() removing rows of the first table that were repeated to match rows of the second table.
Edit: The above approach will match dplyr::semi_join() output only if we have unique rows in the first table. If we need to output all rows, including duplicates from the first table, then we may use the fsetdiff() method shown below.
Another one line data.table solution:
fsetdiff(dt1, dt1[!dt2, on="ProdId"])
ProdId Product
1: 1 Bread
2: 3 Pizza
3: 4 Butter
I've just removed from the first table the anti-join of the first and second. Seems simpler to me. If the first table has duplicate rows, we will need:
fsetdiff(dt1, dt1[!dt2, on="ProdId"], all=T)
The fsetdiff() result with all=T matches the output from dplyr:
dplyr::semi_join(dt1, dt2, by="ProdId")
ProdId Product
1 1 Bread
2 3 Pizza
3 4 Butter
Using another set of data taken from one of the previous posts:
x <- data.table(x = c(1,1,1,2), y = c("a", "a", "a", "b"))
y <- data.table(x = c(1, 1), z = 10:11)
With dplyr:
dplyr::semi_join(x, y, by="x")
x y
1 1 a
2 1 a
3 1 a
With data.table:
fsetdiff(x, x[!y, on="x"], all=T)
x y
1: 1 a
2: 1 a
3: 1 a
Without all=T, the duplicate rows are removed:
fsetdiff(x, x[!y, on="x"])
x y
1: 1 a
The package dplyr supports the following four join types:
inner_join, left_join, semi_join, anti_join
So for the semi-join, try the following code:
library("dplyr")
table1 <- data.table(x = 1:2, y = c("a", "b"))
table2 <- data.table(x = c(1, 1), z = 10:11)
semi_join(table1, table2)
The output is as expected:
# Joining by: "x"
# Source: local data table [1 x 2]
#
# x y
# (int) (chr)
# 1 1 a
Try the following:
w <- y[,unique(x)]
x[x %in% w]
Output will be:
x y
1: 1 a
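For completeness (not from any of the answers above): with data.table >= 1.9.6 the on= argument lets you write the which=TRUE idiom without setting keys at all; a sketch using the small x/y example from the top of this question:
w <- unique(x[y, on = "x", which = TRUE, nomatch = 0])
x[w]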

How to merge two columns in R with a specific symbol?

I have a table read in R as follows:
column1 column2
A B
What command can be used to merge the two columns together as follows?
Column 3
A_B
I'm a bit unsure what you mean by "merge", but is this what you mean?
> DF = data.frame(A = LETTERS[1:10], B = LETTERS[11:20])
> DF$C = paste(DF$A, DF$B, sep="_")
> head(DF)
A B C
1 A K A_K
2 B L B_L
3 C M C_M
4 D N D_N
Or equivalently, as @daroczig points out:
within(DF, C <- paste(A, B, sep='_'))
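Another base R option (a small aside, not from the answer above) is interaction(), which builds the combined labels directly; it returns a factor, so you may want to convert back to character:
DF$C <- as.character(interaction(DF$A, DF$B, sep = "_"))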
My personal favourite involves making use of unite from tidyr:
set.seed(1)
df <- data.frame(colA = sample(LETTERS, 10),
colB = sample(LETTERS, 10))
# packs: pipe + unite
require(magrittr); require(tidyr)
# Unite
df %<>%
unite(ColAandB, colA, colB, remove = FALSE)
Results
> head(df, 3)
ColAandB colA colB
1 G_F G F
2 J_E J E
3 N_Q N Q
Side notes
Personally, I find the remove = TRUE / FALSE functionality of unite very useful. In addition, tidyr fits the dplyr workflow very well and plays well with separate in case you change your mind about the columns being merged. Along the same lines, if NAs are a problem, introducing na.omit into your workflow lets you conveniently drop the undesirable rows before creating the desired column.
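For reference, unite() separates values with "_" by default (controlled by its sep argument), and recent tidyr versions also add an na.rm argument so missing pieces can be dropped instead of becoming the literal string "NA"; a sketch:
df %>%
  unite(ColAandB, colA, colB, sep = "_", na.rm = TRUE, remove = FALSE)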
