Using purrr functions to do left join and bind rows

I've built a web scraping function that takes a variety of arguments. Let's use the sample arguments for demonstration purposes.
Arguments: year, type, gender and col_types.
My function takes the referenced arguments and scrapes the data to return a df.
I am looking to join the alternate col_types rows to the standard ones based on matches on year, type, gender, and name.
Then I want to bind all of the rows into one df.
Sample Data:
library(dplyr)
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
# Sample DF
a <- tibble(year = 2019, type = "full_year", col_types = "standard", gender = "M", name = c("a","b","c"), variable_1 = 1:3)
b <- tibble(year = 2019, type = "full_year", col_types = "alternate", gender = "M", name = c("a","b","c"), variable_2 = 1:3, variable_3 = 8:10)
c <- tibble(year = 2019, type = "full_year", col_types = "standard", gender = "F", name = c("ab","ba","ca"), variable_1 = 4:6)
d <- tibble(year = 2019, type = "full_year", col_types = "alternate", gender = "F", name = c("ab","ba","ca"), variable_2 = 1:3, variable_3 = 16:18)
e <- tibble(year = 2019, type = "last_month", col_types = "standard", gender = "M", name = c("a","b","c"), variable_1 = 1:3)
f <- tibble(year = 2019, type = "last_month", col_types = "alternate", gender = "M", name = c("a","b","c"), variable_2 = 1:3, variable_3 = 8:10)
g <- tibble(year = 2019, type = "last_month", col_types = "standard", gender = "F", name = c("ab","ba","ca"), variable_1 = 4:6)
h <- tibble(year = 2019, type = "last_month", col_types = "alternate", gender = "F", name = c("ab","ba","ca"), variable_2 = 1:3, variable_3 = 16:18)
# I know this is not going to work as it presents me with NA where I want there to be joins
df <- bind_rows(a, b, c, d, e, f, g, h)
# Adding desired output
m_fy_join <-
  a %>%
  left_join(b %>% select(-matches("col_types")))
f_fy_join <-
  c %>%
  left_join(d %>% select(-matches("col_types")))
m_lm_join <-
  e %>%
  left_join(f %>% select(-matches("col_types")))
f_lm_join <-
  g %>%
  left_join(h %>% select(-matches("col_types")))
# Desired Output
desired_output <- bind_rows(m_fy_join, f_fy_join, m_lm_join, f_lm_join)
What purrr function can I use to do a left_join, and then bind rows?

I don't think you necessarily need to do a join. You can bind all the tibbles together and use coalesce() to get rid of the NAs (which arise because the "standard" tibbles don't have variable_2/variable_3 and the "alternate" tibbles don't have variable_1).
I think this may be the easiest approach given the way your data is currently arranged. But you might consider re-engineering the process (if possible) so that all the "alternate" tibbles are added to one list when created and all the "standard" tibbles to another; then you could simply bind_rows() each list and join the two results, rather than devising a way to manage a bunch of tibbles that are all mixed together (a sketch of that idea follows after the output below).
library(tidyverse)
bind_rows(a, b, c, d, e, f, g, h) %>%
  group_by(year, type, gender, name) %>%
  summarise_at(vars(contains('variable')), reduce, coalesce)
# # A tibble: 12 x 7
# # Groups: year, type, gender [4]
# year type gender name variable_1 variable_2 variable_3
# <dbl> <chr> <chr> <chr> <int> <int> <int>
# 1 2019 full_year F ab 4 1 16
# 2 2019 full_year F ba 5 2 17
# 3 2019 full_year F ca 6 3 18
# 4 2019 full_year M a 1 1 8
# 5 2019 full_year M b 2 2 9
# 6 2019 full_year M c 3 3 10
# 7 2019 last_month F ab 4 1 16
# 8 2019 last_month F ba 5 2 17
# 9 2019 last_month F ca 6 3 18
# 10 2019 last_month M a 1 1 8
# 11 2019 last_month M b 2 2 9
# 12 2019 last_month M c 3 3 10
Edit: Thanks for showing the desired output. I've checked, and this output is equivalent except for ordering and the fact that it doesn't have a col_types column.
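Here is a minimal sketch of that re-engineered, two-list arrangement, assuming the scraper can be made to append its results to two separate lists as it runs (the list names below are hypothetical):

library(dplyr)

# Hypothetical arrangement: each "standard" result lands in one list and each
# "alternate" result in another as the scraper runs.
standard_list  <- list(a, c, e, g)
alternate_list <- list(b, d, f, h)

standard_all  <- bind_rows(standard_list)
alternate_all <- bind_rows(alternate_list) %>% select(-col_types)

# A single join on the shared keys then replaces the four separate left_joins
joined <- left_join(standard_all, alternate_all,
                    by = c("year", "type", "gender", "name"))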

library(dplyr)
library(purrr)
my_join_function <- function(df1, df2) {
  x <- get(df1)
  y <- get(df2)
  left_join(x, select(y, -matches("col_types")))
}
desired_output2 <- map2_df(
  .x = c("a", "c", "e", "g"),
  .y = c("b", "d", "f", "h"),
  .f = my_join_function
)
# The two outputs are identical: expect_identical() succeeds, so wrapping it
# in expect_error() fails because no error was thrown
testthat::expect_error(testthat::expect_identical(desired_output, desired_output2))
Error: testthat::expect_identical(desired_output, desired_output2) did not throw an error.
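A small variation on the same idea (my sketch, not part of the answer above): map over the tibbles themselves rather than their names, which avoids get():

library(dplyr)
library(purrr)

desired_output3 <- map2_dfr(
  .x = list(a, c, e, g),
  .y = list(b, d, f, h),
  .f = ~ left_join(.x, select(.y, -matches("col_types")))
)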

Related

How to choose the most common value in a group related to other group in R?

I have in R the following data frame:
ID = c(rep(1,5), rep(2,3), rep(3,2), rep(4,6)); ID
VAR = c("A","A","A","A","B","C","C","D",
        "E","E","F","A","B","F","C","F"); VAR
CATEGORY = c("ANE","ANE","ANA","ANB","ANE","BOO","BOA","BOO",
             "CAT","CAT","DOG","ANE","ANE","DOG","FUT","DOG"); CATEGORY
DATA = data.frame(ID, VAR, CATEGORY); DATA
It looks like the table below:
ID VAR CATEGORY
1  A   ANE
1  A   ANE
1  A   ANA
1  A   ANB
1  B   ANE
2  C   BOO
2  C   BOA
2  D   BOO
3  E   CAT
3  E   CAT
4  F   DOG
4  A   ANE
4  B   ANE
4  F   DOG
4  C   FUT
4  F   DOG
The ideal output, given the above data frame, would look like this:
ID TEXTS category
1  A     ANE
2  C     BOO
3  E     CAT
4  F     DOG
More specifically: for ID 1, say, I want to find the most common value in the column VAR (which is A), and then the most common value in the column CATEGORY among the rows with that VAR value (which is ANE), and so forth.
How can I do this in R?
Note that this is just a sample; my real data frame contains 850,000 rows and 14,000 unique IDs.
Another dplyr strategy using count and slice:
library(dplyr)
DATA %>%
  group_by(ID) %>%
  count(VAR, CATEGORY) %>%
  slice(which.max(n)) %>%
  select(-n)
ID VAR CATEGORY
<dbl> <chr> <chr>
1 1 A ANE
2 2 C BOA
3 3 E CAT
4 4 F DOG
dplyr
library(dplyr)
DATA %>%
  group_by(ID) %>%
  filter(VAR == names(sort(table(VAR), decreasing = TRUE))[1]) %>%
  group_by(ID, VAR) %>%
  summarize(CATEGORY = names(sort(table(CATEGORY), decreasing = TRUE))[1]) %>%
  ungroup()
# # A tibble: 4 x 3
# ID VAR CATEGORY
# <dbl> <chr> <chr>
# 1 1 A ANE
# 2 2 C BOA
# 3 3 E CAT
# 4 4 F DOG
Data
DATA <- structure(list(ID = c(1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 4, 4, 4), VAR = c("A", "A", "A", "A", "B", "C", "C", "D", "E", "E", "F", "A", "B", "F", "C", "F"), CATEGORY = c("ANE", "ANE", "ANA", "ANB", "ANE", "BOO", "BOA", "BOO", "CAT", "CAT", "DOG", "ANE", "ANE", "DOG", "FUT", "DOG")), class = "data.frame", row.names = c(NA, -16L))
We could modify the Mode function to return the index and use that in slice() after grouping by 'ID':
Modeind <- function(x) {
  ux <- unique(x)
  which.max(tabulate(match(x, ux)))
}
library(dplyr)
DATA %>%
  group_by(ID) %>%
  slice(Modeind(VAR)) %>%
  ungroup
-output
# A tibble: 4 x 3
ID VAR CATEGORY
<dbl> <chr> <chr>
1 1 A ANE
2 2 C BOO
3 3 E CAT
4 4 F DOG
A base R option with nested subset + ave
subset(
  subset(
    DATA,
    !!ave(ave(ID, ID, VAR, FUN = length), ID,
          FUN = function(x) x == max(x))
  ),
  !!ave(ave(ID, ID, VAR, CATEGORY, FUN = length), ID, VAR,
        FUN = function(x) seq_along(x) == which.max(x))
)
gives
ID VAR CATEGORY
1 1 A ANE
6 2 C BOO
9 3 E CAT
11 4 F DOG
Explanation
The inner subset + ave keeps the rows with the most common VAR value within each ID.
Based on the trimmed data frame from the previous step, the outer subset + ave keeps the rows with the most common CATEGORY value within each ID + VAR group.

Pivot_wider using multiple variables for publication table

I'm working on creating a table for publication and am having trouble creating the structure I need.
The "data":
a <- data.frame(Year = c(2018, 2019, 2020), a = 1:3,
                b = c("a", "b", "c"),
                c = c("d", "e", "f"),
                fac = rep("this", 3))
The product would look like this ideally.
fac 2018_a 2018_b 2018_c 2019_a 2019_b 2019_c 2020_a 2020_b 2020_c
this 1 a d 2 b e 3 c f
I know that this should be possible with the pivot functions, but I'm not sure whether I need to pivot longer before I go wider, and in all the experiments I've tried I cannot get the names or the data order correct. I'd very much appreciate any help!
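For reference, here is a sketch of the longer-then-wider route the question wonders about (my own sketch, not one of the answers below); note that pivoting the mixed numeric/character columns into a single value column coerces everything to character:

library(dplyr)
library(tidyr)

a %>%
  pivot_longer(cols = c(a, b, c), names_to = "var", values_to = "value",
               values_transform = list(value = as.character)) %>%
  pivot_wider(names_from = c(Year, var), values_from = value,
              names_glue = "{Year}_{var}")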
We can also use the following solution:
library(tidyr)
a %>%
  pivot_wider(names_from = Year, values_from = c(a, b, c),
              names_glue = "{Year}_{.value}") %>%
  select(fac, sort(names(.)[-1]))
# A tibble: 1 x 10
fac `2018_a` `2018_b` `2018_c` `2019_a` `2019_b` `2019_c` `2020_a` `2020_b` `2020_c`
<chr> <int> <chr> <chr> <int> <chr> <chr> <int> <chr> <chr>
1 this 1 a d 2 b e 3 c f
You could use recast() from the reshape2 package:
reshape2::recast(a, fac~Year+variable, id.var = c('Year', 'fac'))
fac 2018_a 2018_b 2018_c 2019_a 2019_b 2019_c 2020_a 2020_b 2020_c
1 this 1 a d 2 b e 3 c f
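For what it is worth, recast() is essentially melt() followed by dcast(), so the equivalent two-step version (a sketch) makes the reshaping explicit (melt() coerces the mixed value columns to character):

library(reshape2)

molten <- melt(a, id.vars = c("Year", "fac"))
dcast(molten, fac ~ Year + variable)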

R merge rows with same row names and column name [duplicate]

This question already has answers here: Transpose / reshape dataframe without "timevar" from long to wide format (9 answers). Closed 2 years ago.
I have data in which each individual appears on multiple rows with the same column names (see the Pedigree/SNP example reproduced in the answers below). How can I reshape the data by merging the rows that share the same row name and column name into one row per individual?
I trust that the allele information is simply missing. If it is added to the data as follows:
data['allele'] = c('a1','a2','a1','a2')
then the following will solve the problem easily. Basically wide to long, followed by uniting the SNP and allele columns, and then wide again.
library(tidyr)
long=data %>% gather(snp, value, -c(Pedigree,allele))
long_joined=unite(long, snp, c(snp, allele), remove=TRUE)
spread(long_joined, key = snp, value = value)
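An equivalent sketch using the newer pivot verbs, assuming the same data object with its Pedigree, allele, and SNP columns:

library(dplyr)
library(tidyr)

data %>%
  pivot_longer(-c(Pedigree, allele), names_to = "snp", values_to = "value") %>%
  unite(snp, snp, allele) %>%
  pivot_wider(names_from = snp, values_from = value)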
Maybe you can try aggregate with unlist:
> aggregate(.~P,df,unlist)
P S1.1 S1.2 S2.1 S2.2
1 a C C G G
2 b C C T T
Data
> dput(df)
structure(list(P = c("a", "a", "b", "b"), S1 = c("C", "C", "C",
"C"), S2 = c("G", "G", "T", "T")), class = "data.frame", row.names = c(NA,
-4L))
A solution using dplyr and tidyr, both part of the tidyverse collection of R packages.
library(dplyr)
Data:
bar <- "Pedigree SNP1 SNP2
'Individual 1' C G
'Individual 1' C G
'Individual 2' C T
'Individual 2' C T"
foo <- read.table(text=bar, header = TRUE)
Code:
library(tidyr)

foo %>%
  group_by(Pedigree) %>%
  mutate(id = row_number()) %>%
  pivot_wider(names_from = id, values_from = SNP1:SNP2, names_prefix = ".a")
Output:
#> # A tibble: 2 x 5
#> # Groups: Pedigree [2]
#> Pedigree SNP1_.a1 SNP1_.a2 SNP2_.a1 SNP2_.a2
#> <fct> <fct> <fct> <fct> <fct>
#> 1 Individual 1 C C G G
#> 2 Individual 2 C C T T
Created on 2020-07-26 by the reprex package (v0.3.0)

Count occurrence of a categorical variable, when grouping and summarising by a different variable in R

I have a table df that looks like this:
a <- c(10,20, 20, 20, 30)
b <- c("u", "u", "u", "r", "r")
c <- c("a", "a", "b", "b", "b")
df <- data.frame(a,b,c)
I would like to create a new table that contains the mean of column a, grouped by variable c, and I would like a column with the counts of the b types occurring within each group of c.
I would therefore like the result table to look like df2:
a_m <- c(15, 23.3)
c <- c("a", "b")
counts_b <-c("2 u", "1 u, 2 r")
df2 <- data.frame(a_m, c, counts_b)
What I have so far is:
df2 <- df %>% group_by(c) %>% summarise(a_m = mean(a, na.rm = TRUE))
I do not know how to add the column counts_b in the example df2.
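For the record, here is a sketch that builds counts_b in the "count value" style requested (my own variation; the answers below use slightly different output formats, and the ordering here follows the sorted b values, e.g. "2 r, 1 u"):

library(dplyr)

df %>%
  group_by(c) %>%
  summarise(
    a_m = mean(a, na.rm = TRUE),
    counts_b = {
      tb <- table(b)    # counts of each b value within the group
      tb <- tb[tb > 0]  # drop unused factor levels, if any
      paste(tb, names(tb), collapse = ", ")
    }
  )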
Here's a way using a little table magic:
df %>%
  group_by(c) %>%
  summarise(a_mean = mean(a),
            b_list = paste(names(table(b)), table(b), collapse = ', '))
# A tibble: 2 x 3
c a_mean b_list
<fct> <dbl> <chr>
1 a 15.0 r 0, u 2
2 b 23.3 r 2, u 1
Here is another solution using reshape2. The output format may be more convenient to work with: each value of b has its own column containing the number of occurrences.
library(reshape2)
out1 <- dcast(df, c ~ b, value.var = "c", fun.aggregate = length)
c r u
1 a 0 2
2 b 2 1
out2 <- df %>% group_by(c) %>% summarise(a_m = mean(a))
# A tibble: 2 x 2
c a_m
<fctr> <dbl>
1 a 15.00000
2 b 23.33333
df2 <- merge(out1, out2, by = "c")
c r u a_m
1 a 0 2 15.00000
2 b 2 1 23.33333

dplyr concat columns stored in variable (mutate and non standard evaluation)

I would like to concatenate an arbitrary number of columns in a dataframe, based on a variable cols_to_concat.
df <- dplyr::data_frame(a = letters[1:3], b = letters[4:6], c = letters[7:9])
cols_to_concat = c("a", "b", "c")
To achieve the desired result with this specific value of cols_to_concat I could do this:
df %>%
  dplyr::mutate(concat = paste0(a, b, c))
But I need to generalise this, using syntax a bit like this
# (DOES NOT WORK)
df %>%
  dplyr::mutate(concat = paste0(cols))
I'd like to use the new NSE approach of dplyr 0.7.0, if this is appropriate, but can't figure out the correct syntax.
You can perform this operation using only the tidyverse if you'd like to stick to those packages and principles. You can do it by using either mutate() or unite_(), which comes from the tidyr package.
Using mutate()
library(dplyr)
df <- tibble(a = letters[1:3], b = letters[4:6], c = letters[7:9])
cols_to_concat <- c("a", "b", "c")
df %>% mutate(new_col = do.call(paste0, .[cols_to_concat]))
# A tibble: 3 × 4
a b c new_col
<chr> <chr> <chr> <chr>
1 a d g adg
2 b e h beh
3 c f i cfi
Using unite_()
library(tidyr)
df %>% unite_(col='new_col', cols_to_concat, sep="", remove=FALSE)
# A tibble: 3 × 4
new_col a b c
* <chr> <chr> <chr> <chr>
1 adg a d g
2 beh b e h
3 cfi c f i
EDITED July 2020
As of dplyr 1.0.0, it appears that across() and c_across() are replacing the underscore verbs (e.g. unite_) and scoped variants like mutate_if(), mutate_at() and mutate_all(). Below is an example using that convention. Not the most concise, but still an option that promises to be more extensible.
Using c_across()
library(dplyr)
df <- tibble(a = letters[1:3], b = letters[4:6], c = letters[7:9])
cols_to_concat <- c("a", "b", "c")
df %>%
  rowwise() %>%
  mutate(new_col = paste0(c_across(all_of(cols_to_concat)), collapse = ""))
#> # A tibble: 3 x 4
#> # Rowwise:
#> a b c new_col
#> <chr> <chr> <chr> <chr>
#> 1 a d g adg
#> 2 b e h beh
#> 3 c f i cfi
Created on 2020-07-08 by the reprex package (v0.3.0)
You can try syms from rlang:
library(dplyr)
packageVersion('dplyr')
#[1] ‘0.7.0’
df <- dplyr::data_frame(a = letters[1:3], b = letters[4:6], c = letters[7:9])
cols_to_concat = c("a", "b", "c")
library(rlang)
cols_quo <- syms(cols_to_concat)
df %>% mutate(concat = paste0(!!!cols_quo))
# or
df %>% mutate(concat = paste0(!!!syms(cols_to_concat)))
# # A tibble: 3 x 4
# a b c concat
# <chr> <chr> <chr> <chr>
# 1 a d g adg
# 2 b e h beh
# 3 c f i cfi
You can do the following:
library(dplyr)
df <- dplyr::data_frame(a = letters[1:3], b = letters[4:6], c = letters[7:9])
cols_to_concat = lapply(list("a", "b", "c"), as.name)
q <- quos(paste0(!!! cols_to_concat))
df %>%
  dplyr::mutate(concat = !!! q)
