I often have two data frames that I wish to join, where I expect all values to join. If not all values are present in both data frames, I want it to return an error.
Here is an MWE:
library(dplyr, warn.conflicts = FALSE)
df1 <- data.frame(
id = c(1:5),
value1 = rep(1, 5)
)
print(df1)
#> id value1
#> 1 1 1
#> 2 2 1
#> 3 3 1
#> 4 4 1
#> 5 5 1
df2 <- data.frame(
id = c(1:4),
value2 = rep(2, 4)
)
print(df2)
#> id value2
#> 1 1 2
#> 2 2 2
#> 3 3 2
#> 4 4 2
df3 <- inner_join(
df1,
df2,
by = "id")
print(df3)
#> id value1 value2
#> 1 1 1 2
#> 2 2 1 2
#> 3 3 1 2
#> 4 4 1 2
# Check if all values have joined
stopifnot(
nrow(df3) == max(nrow(df1), nrow(df2))
)
#> Error: nrow(df3) == max(nrow(df1), nrow(df2)) is not TRUE
Created on 2021-03-31 by the reprex package (v1.0.0)
This works, but I do not like the stopifnot(). It feels cumbersome, particularly if I wish to overwrite df2: then I need to create a temporary variable df2_previous_row_num = nrow(df2) first, and check stopifnot(nrow(df2) == df2_previous_row_num) afterwards.
Also, the nrow() test only works if all values in id are unique. There are other methods, e.g. stopifnot(c(df1$id %in% df3$id, df2$id %in% df3$id)), but again these are ugly.
Really, what I am looking for is a parameter that makes the join fail if some values do not join, something like inner_join(df1, df2, fail_if_not_all_present = TRUE).
I am not attached to the tidyverse - if there is a base R or data.table way of doing this then I would consider those.
Does anyone know anything?
You can try writing a custom inner join function.
custom_inner_join <- function(data1, data2, by, fail_if_not_all_present = FALSE) {
  if (fail_if_not_all_present) {
    # paste the key columns together so multi-column keys compare as single values
    vals1 <- do.call(paste, data1[by])
    vals2 <- do.call(paste, data2[by])
    if (all(vals1 %in% vals2) && all(vals2 %in% vals1)) {
      merge(data1, data2, by)
    } else stop('Not all key values are present')
  } else {
    merge(data1, data2, by)
  }
}
custom_inner_join(df1, df2, 'id')
# id value1 value2
#1 1 1 2
#2 2 1 2
#3 3 1 2
#4 4 1 2
custom_inner_join(df1, df2, 'id', fail_if_not_all_present = TRUE)
Error in custom_inner_join(df1, df2, "id", fail_if_not_all_present = TRUE) :
Not all key values are present
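As an aside: if you can use a recent dplyr (1.1.0 or later, if I recall the version correctly), the join functions gained an unmatched argument that does essentially what the question asks for, so the custom wrapper may not be needed:
library(dplyr)
# errors if any row of either input would be dropped by the inner join
inner_join(df1, df2, by = "id", unmatched = "error")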
I was examining the code below:
library(dplyr)
DF = data.frame('A' = 1:3, 'B' =2:4)
Condition = 'A'
fn1 = function(x) x + 3
fn2 = function(x) x + 5
DF %>% mutate('aa' = 3:5) %>%
{if (Condition == 'A') {
bb = . %>% mutate('A1' = fn1(A), 'B1' = fn1(B))
bb
} else {
bb = . %>% mutate('A1' = fn2(A), 'B1' = fn2(B))
bb
}
}
Basically, I have 2 similar functions, fn1 and fn2. Based on some condition, I want to use one of these functions.
The above implementation returns the following instead of the mutated data frame:
Functional sequence with the following components:
1. mutate(., A1 = fn1(A), B1 = fn1(B))
Use 'functions' to extract the individual functions.
Can you please help me understand how to properly write the pipe sequence so the above code executes?
We could use across within mutate
library(dplyr)
DF %>%
mutate(aa = 3:5, across(c(A, B), ~ if(Condition == 'A') fn1(.)
else fn2(.), .names = "{.col}1"))
-output
A B aa A1 B1
1 1 2 3 4 5
2 2 3 4 5 6
3 3 4 5 6 7
Also, an option is to get the functions in a list and convert the logical condition to a numeric index for subsetting: 1 + (Condition == 'A') evaluates to 2 when the condition is TRUE (selecting fn1) and to 1 otherwise (selecting fn2).
DF %>%
mutate(aa = 3:5,
across(c(A, B), ~ list(fn2, fn1)[[1 + (Condition == 'A')]](.),
.names = "{.col}1"))
-output
A B aa A1 B1
1 1 2 3 4 5
2 2 3 4 5 6
3 3 4 5 6 7
Based on the comments, if we need a custom name for the new columns, create a named vector and replace with str_replace_all
library(stringr)
nm1 <- setNames(c("XXX", "YYY"), names(DF)[1:2])
DF %>%
mutate(aa = 3:5,
across(c(A, B), ~ list(fn2, fn1)[[1 + (Condition == 'A')]](.),
.names = "{str_replace_all(.col, nm1)}"))
A B aa XXX YYY
1 1 2 3 4 5
2 2 3 4 5 6
3 3 4 5 6 7
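As an aside, the conditional pipe in the question fails because . %>% mutate(...) inside the braces defines a magrittr functional sequence (a function) rather than applying it, so bb ends up being a function and that function is what gets returned. Calling mutate() on the dot directly appears to make the original approach work:
DF %>% mutate(aa = 3:5) %>%
  {if (Condition == 'A') mutate(., A1 = fn1(A), B1 = fn1(B))
   else mutate(., A1 = fn2(A), B1 = fn2(B))}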
library(purrr)
library(tibble)
library(dplyr)
Starting list of dataframes
lst <- list(df1 = data.frame(X.1 = as.character(1:2),
heading = letters[1:2]),
df2 = data.frame(X.32 = as.character(3:4),
another.topic = paste("Line ", 1:2)))
lst
#> $df1
#> X.1 heading
#> 1 1 a
#> 2 2 b
#>
#> $df2
#> X.32 another.topic
#> 1 3 Line 1
#> 2 4 Line 2
Expected "combined" dataframe, with new consistent variable names, and old variable names in the first row of each constituent dataframe.
#> id h1 h2
#> 1 df1 X.1 heading
#> 2 df1 1 a
#> 3 df1 2 b
#> 4 df2 X.32 another.topic
#> 5 df2 3 Line 1
#> 6 df2 4 Line 2
add_row requires "Name-value pairs, passed on to tibble(). Values can be defined only for columns that already exist in .data and unset columns will get an NA value."
Which is what I think I have achieved with this:
df_nms <-
map(lst, names) %>%
map(set_names)
#> $df1
#> X.1 heading
#> "X.1" "heading"
#>
#> $df2
#> X.32 another.topic
#> "X.32" "another.topic"
But I cannot tie up the last bit, using a purrr function to add the names to the head of each dataframe. I've tried numerous variations with map2 and pmap; the closest I can get at present (if I treat add_row as a formula, prefixing it with ~, and remove the .y) is a new first row populated with NAs. I think I'm missing how to pass the name-value pairs to the add_row function.
map2(lst, df_nms, add_row(.x, .y, .before = 1)) %>%
map(set_names, c("h1", "h2")) %>%
map_dfr(bind_rows, .id = "id")
#> Error in add_row(.x, .y, .before = 1): object '.x' not found
A pointer to resolve this last step would be most appreciated.
Not quite sure how to do this via purrr map functions, but here is an alternative:
library(dplyr)
bind_rows(lapply(lst, function(i) {
  d1 <- as.data.frame(matrix(names(i), ncol = ncol(i)))
  rbind(d1, setNames(i, names(d1)))
}), .id = 'id')
# id V1 V2
#1 df1 X.1 heading
#2 df1 1 a
#3 df1 2 b
#4 df2 X.32 another.topic
#5 df2 3 Line 1
#6 df2 4 Line 2
Here's an approach using map, rbindlist from data.table and some base R functions:
library(purrr)
library(dplyr)
library(data.table)
map(lst, ~ as.data.frame(unname(rbind(colnames(.x),as.matrix(.x))))) %>%
rbindlist(idcol = "id")
# id V1 V2
#1: df1 X.1 heading
#2: df1 1 a
#3: df1 2 b
#4: df2 X.32 another.topic
#5: df2 3 Line 1
#6: df2 4 Line 2
Alternatively we could use map_df if we use colnames<-:
map_df(lst, ~ as.data.frame(rbind(colnames(.x),as.matrix(.x))) %>%
`colnames<-`(.,paste0("h",seq(1,dim(.)[2]))), .id = "id")
# id h1 h2
#1 df1 X.1 heading
#2 df1 1 a
#3 df1 2 b
#4 df2 X.32 another.topic
#5 df2 3 Line 1
#6 df2 4 Line 2
Key things here are:
Use as.matrix to get rid of the factor / character incompatibility.
Remove names with unname or set them with colnames<-.
Use the idcol = (rbindlist) or .id = (map_df) argument to get the names of the list as a column.
I altered your sample data a bit, setting stringsAsFactors to FALSE when creating the data.frames in lst.
Here is a solution using data.table::rbindlist():
#sample data
lst <- list(df1 = data.frame(X.1 = as.character(1:2),
heading = letters[1:2],
stringsAsFactors = FALSE), # !! <--
df2 = data.frame(X.32 = as.character(3:4),
another.topic = paste("Line ", 1:2),
stringsAsFactors = FALSE) # !! <--
)
DT <- data.table::rbindlist( lapply( lst, function(x) rbind( names(x), x ) ),
use.names = FALSE, idcol = "id" )
setnames(DT, names( lst[[1]] ), c("h1", "h2") )
# id h1 h2
# 1: df1 X.1 heading
# 2: df1 1 a
# 3: df1 2 b
# 4: df2 X.32 another.topic
# 5: df2 3 Line 1
# 6: df2 4 Line 2
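For completeness, the map2 attempt in the question can be rescued: map2 needs a function or a formula (not an evaluated call), and the named vector can be spliced into add_row's name-value dots with !!!. This is a sketch assuming the character-column version of lst above (with factor columns, add_row would complain) and my understanding that add_row() accepts dynamic dots:
map2(lst, df_nms, ~ add_row(.x, !!!.y, .before = 1)) %>%
  map(set_names, c("h1", "h2")) %>%
  bind_rows(.id = "id")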
I have a data frame with several ID columns that each contain only one unique value, and columns that actually contain variables. How can I separate the two?
I have come up with the following approach, using a conditional statement in sapply, but I wondered if there may be a more elegant way to do it.
I am happy with any package, and with any output where the data frames are separated; this can also be a list. Each frame does not need to be assigned to a new object.
mydf <- data.frame(a = 'a', b = 'b', val1 = 1:10, val2 = 10:1)
head(mydf,3)
#> a b val1 val2
#> 1 a b 1 10
#> 2 a b 2 9
#> 3 a b 3 8
id_cols <- mydf[sapply(names(mydf), function(x) {length(unique(mydf[[x]])) == 1})]
variable_cols <- mydf[sapply(names(mydf), function(x) {length(unique(mydf[[x]])) != 1})]
head(id_cols, 3)
#> a b
#> 1 a b
#> 2 a b
#> 3 a b
head(variable_cols, 3)
#> val1 val2
#> 1 1 10
#> 2 2 9
#> 3 3 8
Created on 2020-04-02 by the reprex package (v0.3.0)
A very, very slightly shorter way would be
Var = lengths(lapply(mydf, unique)) > 1
id_cols = mydf[, !Var]
variable_cols = mydf[, Var]
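A tidyverse spelling of the same idea, assuming dplyr 1.0.0 or later for where():
library(dplyr)
# keep columns with exactly one distinct value (the IDs) or more than one (the variables)
id_cols <- mydf %>% select(where(~ n_distinct(.x) == 1))
variable_cols <- mydf %>% select(where(~ n_distinct(.x) > 1))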
I would like to combine a set of data frames into a single data frame by summing columns that have matching variables (instead of appending columns).
For example, given
df1 <- data.frame(A = c(0,0,1,1,1,2,2), B = c(1,2,1,2,3,1,5), x = c(2,3,1,5,3,7,0))
df2 <- data.frame(A = c(0,1,1,2,2,2), B = c(1,1,3,2,4,5), x = c(4,8,4,1,0,3))
df3 <- data.frame(A = c(0,1,2), B = c(5,4,2), x = c(5,3,1))
I want to match by "A" and "B" and sum the values of "x". For this example, I can get the desired result as follows:
library(plyr)
library(dplyr)
# rename columns so that join_all preserves them all:
colnames(df1)[3] <- "x1"
colnames(df2)[3] <- "x2"
colnames(df3)[3] <- "x3"
# join the data frames by matching "A" and "B" values:
res <- join_all(list(df1, df2, df3), by = c("A", "B"), type = "full")
# get the sums and drop superfluous columns:
arrange(res, A, B) %>%
rowwise() %>%
mutate(x = sum(x1, x2, x3, na.rm = TRUE)) %>%
select(A, B, x)
Result:
A B x
<dbl> <dbl> <dbl>
1 0 1 6
2 0 2 3
3 0 5 5
4 1 1 9
5 1 2 5
6 1 3 7
7 1 4 3
8 2 1 7
9 2 2 2
10 2 4 0
11 2 5 3
A more general solution is
library(dplyr)
# function to get the desired result for two data frames:
my_merge <- function(df1, df2)
{
  m1 <- merge(df1, df2, by = c("A", "B"), all = TRUE)
  # sum the two x columns created by the merge (x.x and x.y), treating NA as 0
  m1 <- rowwise(m1) %>%
    mutate(x = sum(x.x, x.y, na.rm = TRUE)) %>%
    select(A, B, x)
  return(m1)
}
l1 <- list(df2, df3) # omit the first data frame
res <- df1 # initial value of the result
for(df in l1) res <- my_merge(res, df) # call the function repeatedly
Is there a more efficient option for combining a large set of data frames? Ideally it should be recursive (i.e. it's better not to join all data frames into one massive data frame before calculating the sums).
An easier option is to bind the rows of the datasets, then group by the columns of interest, and get the summarised output by taking the sum of 'x':
library(tidyverse)
bind_rows(df1, df2, df3) %>%
group_by(A, B) %>%
summarise(x = sum(x))
# A tibble: 11 x 3
# Groups: A [?]
# A B x
# <dbl> <dbl> <dbl>
# 1 0 1 6
# 2 0 2 3
# 3 0 5 5
# 4 1 1 9
# 5 1 2 5
# 6 1 3 7
# 7 1 4 3
# 8 2 1 7
# 9 2 2 2
#10 2 4 0
#11 2 5 3
If there are many objects in the global environment with the pattern "df" followed by some digits
mget(ls(pattern= "^df\\d+")) %>%
bind_rows %>%
group_by(A, B) %>%
summarise(x = sum(x))
Since the OP mentioned memory constraints: if we do the join first and then use rowSums (or + with reduce), it would be more efficient:
mget(ls(pattern= "^df\\d+")) %>%
reduce(full_join, by = c("A", "B")) %>%
transmute(A, B, x = rowSums(.[3:5], na.rm = TRUE)) %>%
arrange(A, B)
# A B x
#1 0 1 6
#2 0 2 3
#3 0 5 5
#4 1 1 9
#5 1 2 5
#6 1 3 7
#7 1 4 3
#8 2 1 7
#9 2 2 2
#10 2 4 0
#11 2 5 3
This could also be done with data.table
library(data.table)
rbindlist(mget(ls(pattern= "^df\\d+")))[, .(x = sum(x)), by = .(A, B)]
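For completeness, a base R equivalent of the bind-then-summarise approach, assuming all the frames share the A, B and x columns:
aggregate(x ~ A + B, do.call(rbind, list(df1, df2, df3)), sum)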
Ideally it should be recursive (i.e. it's better not to join all data frames into one massive data frame before calculating the sums).
If you're memory constrained and willing to sacrifice speed (vs #akrun's data.table approach), use one table at a time in a loop:
library(data.table)
tabs = c("df1", "df2", "df3")
# enumerate all combos for the results table
# initializing sum to 0
res = CJ(A = 0:2, B = 1:5, x = 0)
# loop over tabs, adding on
for (i in seq_along(tabs)){
tab = get(tabs[[i]])
res[tab, on=.(A, B), x := x + i.x][]
rm(tab)
}
If you need to read tables from disk, change tabs to file names and get to fread or whatever function.
I am skeptical of a situation where you can fit all the tables in memory but cannot also fit an rbind-ed copy of them.
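A sketch of that disk-based variant (the file names here are hypothetical):
library(data.table)
files = c("df1.csv", "df2.csv", "df3.csv")
res = CJ(A = 0:2, B = 1:5, x = 0)
for (f in files){
  tab = fread(f)                       # read one table at a time
  res[tab, on=.(A, B), x := x + i.x]   # add its x values onto the running totals
  rm(tab)
}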
Similarly (thanks to #akrun's comment), use his approach pairwise:
res = data.table(get(tabs[[1]]))[0L]
for (i in seq_along(tabs)){
tab = get(tabs[[i]])
res = rbind(res, tab)[, .(x = sum(x)), by=.(A,B)]
rm(tab)
}
I would like to ask if there is a way of removing a group from a dataframe using dplyr (or any other way, for that matter) in the following situation. Let's say I have a dataframe in the following form, grouped by Variable 1:
Variable 1 Variable 2
1 a
1 b
2 a
2 a
2 b
3 a
3 c
3 a
... ...
I would like to remove only the groups that have two consecutive identical values in Variable 2. In the table above, that would remove group 2 (where the values are a, a, b) but not group 3 (where they are a, c, a). So I would get the table below:
Variable 1 Variable 2
1 a
1 b
3 a
3 c
3 a
... ...
To test for consecutive identical values, you can compare a value to the previous value in that column. In dplyr, this is possible with lag. (You could do the same thing by comparing to the next value, using lead; the result comes out the same.)
Group the data by variable1, get the lag of variable2, then add up how many of these duplicates there are in each group. Then filter for just the groups with no duplicates. After that, feel free to remove the dupesInGroup column.
library(tidyverse)
df %>%
group_by(variable1) %>%
mutate(dupesInGroup = sum(variable2 == lag(variable2), na.rm = T)) %>%
filter(dupesInGroup == 0)
#> # A tibble: 5 x 3
#> # Groups: variable1 [2]
#> variable1 variable2 dupesInGroup
#> <int> <chr> <int>
#> 1 1 a 0
#> 2 1 b 0
#> 3 3 a 0
#> 4 3 c 0
#> 5 3 a 0
Created on 2018-05-10 by the reprex package (v0.2.0).
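If you don't need the helper column, the same per-group test can go straight into filter():
df %>%
  group_by(variable1) %>%
  filter(!any(variable2 == lag(variable2), na.rm = TRUE)) %>%
  ungroup()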
Prepare the data frame:
df <- data.frame("Variable 1" = c(1, 1, 2, 2, 2, 3, 3, 3), "Variable 2" = unlist(strsplit("abaabaca", "")))
Write functions to test whether consecutive repetitions are present:
any.consecutive.p <- function(v) {
  # seq_len() guards against length-1 vectors, where 1:(length(v) - 1) would loop over c(1, 0)
  for (i in seq_len(length(v) - 1)) {
    if (v[i] == v[i + 1]) {
      return(TRUE)
    }
  }
  return(FALSE)
}
any.consecutive.in.col.p <- function(df, col) {
any.consecutive.p(df[, col])
}
any.consecutive.p() returns TRUE if it finds a consecutive repetition in the vector v.
any.consecutive.in.col.p() looks for consecutive repetitions in a given column of a data frame.
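As an aside, the first helper could also be written with base rle(), which encodes runs of repeated values; any run longer than 1 means there is a consecutive repetition:
any.consecutive.p <- function(v) {
  any(rle(as.character(v))$lengths > 1)
}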
Split the data frame by the values of Variable.1:
df.l <- split(df, df$Variable.1)
df.l
$`1`
Variable.1 Variable.2
1 1 a
2 1 b
$`2`
Variable.1 Variable.2
3 2 a
4 2 a
5 2 b
$`3`
Variable.1 Variable.2
6 3 a
7 3 c
8 3 a
Finally, go over this list of data frames and test each one for consecutive duplicates in the Variable.2 column.
If a data frame contains any, don't collect it.
Bind the collected data frames by rows.
Reduce(rbind, lapply(df.l, function(df) if(!any.consecutive.in.col.p(df, "Variable.2")) {df}))
Variable.1 Variable.2
1 1 a
2 1 b
6 3 a
7 3 c
8 3 a
Say you want to remove all groups of df, grouped by a, where the column b has consecutive repeated values. You can do that as below.
set.seed(0)
df <- data.frame(a = rep(1:3, rep(3, 3)), b = sample(1:5, 9, T))
# dplyr
library(dplyr)
df %>%
group_by(a) %>%
filter(all(b != lag(b), na.rm = T))
#data.table: per group, return .SD (the group's rows) only when there are no
#consecutive repeats; groups where the condition fails return NULL and are dropped
library(data.table)
setDT(df)
df[, if(all(b != shift(b), na.rm = T)) .SD, by = a]
A benchmark shows data.table is faster:
#Results
# Unit: milliseconds
# expr min lq mean median uq max neval
# use_dplyr() 141.46819 165.03761 201.0975 179.48334 205.82301 539.5643 100
# use_DT() 36.27936 50.23011 64.9218 53.87114 66.73943 345.2863 100
# Method
library(microbenchmark)
set.seed(0)
df <- data.table(a = rep(1:2000, rep(1e3, 2000)), b = sample(1:1e3, 2e6, T))
use_dplyr <- function(x){
df %>%
group_by(a) %>%
filter(all(b != lag(b), na.rm = T))
}
use_DT <- function(x){
df[, if (all(b != shift(b), na.rm = T)) .SD, a]
}
microbenchmark(use_dplyr(), use_DT())