Subsetting a column into multiple values in R

I have the following data,
col <- c('Data1,Data2','a,b,c','d')
df <- data.frame(col)
I want to split the data wherever a cell holds more than two elements, producing every pairwise combination. So "a,b,c" should be split into "a,b", "a,c" and "b,c".

We create a row identifier with row_number(), split 'col' on the delimiter with separate_rows(), group by 'rn', and summarise: for groups with more than one row, generate the pairwise combinations of 'col' with combn() and paste each pair together; otherwise keep the value as is.
library(stringr)
library(dplyr)
library(tidyr)
df %>%
  mutate(rn = row_number()) %>%
  separate_rows(col) %>%
  group_by(rn) %>%
  summarise(col = if (n() > 1) combn(col, 2, FUN = str_c, collapse = ",") else col,
            .groups = 'drop') %>%
  select(-rn)
-output
# A tibble: 5 x 1
# col
# <chr>
#1 Data1,Data2
#2 a,b
#3 a,c
#4 b,c
#5 d
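To see what combn() contributes here, a standalone call on the example values shows the pairwise combinations it generates:
combn(c("a", "b", "c"), 2, FUN = paste, collapse = ",")
# [1] "a,b" "a,c" "b,c"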

Here is a base R option using combn
data.frame(col = unlist(sapply(
  strsplit(df$col, ","),
  function(x) {
    if (length(x) == 1) {
      x
    } else {
      combn(x, 2, paste0, collapse = ",")
    }
  }
)))
which gives
col
1 Data1,Data2
2 a,b
3 a,c
4 b,c
5 d

library(tidyverse)
df %>%
  rowwise() %>%
  mutate(col = list(if (str_count(col, ",") > 1)
    combn(strsplit(col, ",")[[1]], 2, toString) else col)) %>%
  unnest(col)
# A tibble: 5 x 1
col
<chr>
1 Data1,Data2
2 a, b
3 a, c
4 b, c
5 d


paste() in dplyr mutate does not compute rowwise?

This is my first post here :)
So I encountered some weird behavior today: when using dplyr's mutate() together with paste(), the outcome is the same for every row.
Here is an example:
vec1 <- c(2, 5)
vec2 <- c(4, 6)
test_df <- data.frame(vec1, vec2)
test_df %>% mutate(new_col = paste(vec1:vec2, collapse = ","))
with the output
vec1 vec2 new_col
1 2 4 2,3,4
2 5 6 2,3,4
but that's not what I wanted or expected.
Here is what I wanted, achieved with a loop:
df <- test_df %>% mutate(new_col = 1)
for (i in 1:nrow(test_df)) {
  df$new_col[i] <- paste(df$vec1[i]:df$vec2[i], collapse = ",")
}
With the output:
vec1 vec2 new_col
1 2 4 2,3,4
2 5 6 5,6
What's going on, and how can I achieve the same with mutate and paste?
mutate() operates on whole columns, so vec1:vec2 is evaluated once using only the first element of each vector (giving 2:4) and that single result is recycled to every row. We can instead loop over the vec1, vec2 pairs with map2, and paste (str_c) each sequence into its own string
library(dplyr)
library(purrr)
library(stringr)
test_df %>%
  mutate(new_col = map2_chr(vec1, vec2, ~ str_c(.x:.y, collapse = ",")))
-output
vec1 vec2 new_col
1 2 4 2,3,4
2 5 6 5,6
Or with rowwise
test_df %>%
  rowwise() %>%
  mutate(new_col = str_c(vec1:vec2, collapse = ",")) %>%
  ungroup()
# A tibble: 2 × 3
vec1 vec2 new_col
<dbl> <dbl> <chr>
1 2 4 2,3,4
2 5 6 5,6
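For comparison, a base R sketch with mapply() does the same per-row pasting (assuming test_df as defined above):
# mapply iterates over the vec1/vec2 pairs, so each row gets its own sequence
test_df$new_col <- mapply(function(from, to) paste(seq(from, to), collapse = ","),
                          test_df$vec1, test_df$vec2)
test_df
#   vec1 vec2 new_col
# 1    2    4   2,3,4
# 2    5    6     5,6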

`r`/`rlang`/`dplyr`: How to render `sym` resilient to `NULL`?

Edited in response to @akrun's insight:
This works:
require("magrittr")
requireNamespace("dplyr")
df <- data.frame(a = 1:5)
b_column <- c_column <- "a"
df %>% dplyr::mutate(
  b = !!dplyr::sym(b_column),
  c = !!dplyr::sym(c_column))
But when any one of the *_columns is NULL it doesn't:
c_column <- NULL
df %>% dplyr::mutate(
  b = !!dplyr::sym(b_column),
  c = !!dplyr::sym(c_column))
The resulting error is:
Error: Only strings can be converted to symbols
Run `rlang::last_error()` to see where the error occurred.
How would I make the call to ANY of the ensymboled *_column variables resilient to it being NULL?
If we need to check for the NULL case, use an if condition
df1 <- if (!is.null(c_column)) {
  df %>%
    dplyr::mutate(c = !!dplyr::sym(c_column))
} else df
With multiple columns, an option is purrr's map2, dropping the NULL inputs as we go
library(purrr)
library(dplyr)
b_column <- c_column <- "a"
map2_dfc(list(b_column, c_column), c("b", "c"), ~
  if (!is.null(.x)) df %>%
    transmute(!!.y := !!sym(.x))) %>%
  bind_cols(df, .)
-output
# a b c
#1 1 1 1
#2 2 2 2
#3 3 3 3
#4 4 4 4
#5 5 5 5
If one of them is NULL
c_column <- NULL
map2_dfc(list(b_column, c_column), c("b", "c"), ~
  if (!is.null(.x)) df %>%
    transmute(!!.y := !!sym(.x))) %>%
  bind_cols(df, .)
# a b
#1 1 1
#2 2 2
#3 3 3
#4 4 4
#5 5 5
Another option is a single mutate, splicing in (!!!) a named list of symbols built from only the inputs that are not NULL
nm1 <- c("b", "c")
cols <- list(b_column, c_column)
i1 <- !map_lgl(cols, is.null)
df %>%
  mutate(!!!set_names(map(cols[i1], sym), nm1[i1]))
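If this comes up often, the NULL check can live in a small reusable helper. A minimal sketch, with syms_drop_null as a hypothetical name (not an rlang function):
# Hypothetical helper: build a named list of symbols, silently dropping
# NULL inputs, so mutate(!!!) simply skips the missing columns
syms_drop_null <- function(...) {
  cols <- list(...)
  lapply(cols[!vapply(cols, is.null, logical(1))], rlang::sym)
}
df %>% dplyr::mutate(!!!syms_drop_null(b = b_column, c = c_column))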

Replacing value depending on paired column

I have a dataframe with two columns per sample (n > 1000 samples):
df <- data.frame(
  "sample1.a" = 1:5, "sample1.b" = 2,
  "sample2.a" = 2:6, "sample2.b" = c(1, 3, 3, 3, 3),
  "sample3.a" = 3:7, "sample3.b" = 2)
If there is a zero in column .b, the corresponding value in column .a should be set to NA.
I thought of writing a function over the column names (without the suffix) to select each pair of columns and conditionally exchange values. Is there a simpler approach based on the tidyverse?
We can split the data.frame into a list of data.frames and do the replacement in base R
# split the columns into one data.frame per sample, blank out .a where .b is 0
# (unname prevents the list names from being prefixed onto the column names)
df1 <- do.call(cbind, unname(lapply(
  split.default(df, sub("\\..*", "", names(df))),
  function(x) {
    x[, 1][x[2] == 0] <- NA
    x
  })))
Or another option is Map
acols <- endsWith(names(df), "a")
bcols <- endsWith(names(df), "b")
df[acols] <- Map(function(x, y) replace(x, y == 0, NA), df[acols], df[bcols])
Or, if the columns alternate between 'a' and 'b', use a recycled logical index: build a logical matrix from the 'b' columns and assign NA to the corresponding positions in the 'a' columns
df[c(TRUE, FALSE)][df[c(FALSE, TRUE)] == 0] <- NA
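Since the sample data contains no zeros in the .b columns, planting one (a hypothetical tweak) confirms the recycling trick behaves as intended:
df2 <- df
df2$sample2.b[1] <- 0    # plant a zero to trigger the rule
df2[c(TRUE, FALSE)][df2[c(FALSE, TRUE)] == 0] <- NA
df2$sample2.a
# [1] NA  3  4  5  6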
Or a tidyverse option: reshape to 'long' format (pivot_longer), set 'a' to NA wherever the corresponding 'b' is 0, and reshape back to 'wide' format with pivot_wider
library(dplyr)
library(tidyr)
df %>%
  mutate(rn = row_number()) %>%
  pivot_longer(cols = -rn, names_sep = "\\.",
               names_to = c('group', '.value')) %>%
  mutate(a = replace(a, b == 0, NA)) %>%
  pivot_wider(names_from = group, values_from = c(a, b)) %>%
  select(-rn)
# A tibble: 5 x 6
#   a_sample1 a_sample2 a_sample3 b_sample1 b_sample2 b_sample3
#       <int>     <int>     <int>     <dbl>     <dbl>     <dbl>
#1          1         2         3         2         1         2
#2          2         3         4         2         3         2
#3          3         4         5         2         3         2
#4          4         5         6         2         3         2
#5          5         6         7         2         3         2
Note that the sample data contains no zeros, so the 'a' columns come through unchanged.

Remove exact rows, respecting frequency, of a data.frame that are in another data.frame in R

Consider the following two data.frames:
a1 <- data.frame(A = c(1:5, 2, 4, 2), B = letters[c(1:5, 2, 4, 2)])
a2 <- data.frame(A = c(1:3,2), B = letters[c(1:3,2)])
I would like to remove the exact rows of a1 that are in a2 so that the result should be:
A B
4 d
5 e
4 d
2 b
Note that one row with 2 b in a1 is retained in the final result. Currently, I use a looping statement, which becomes extremely slow as I have many variables and thousands of rows in my data.frames. Is there any built-in function to get this result?
The idea is to add a counter for duplicates to each table, so that each occurrence of a row gets a unique match. data.table is nice here because it makes counting the duplicates easy (with .N) and provides the necessary set operation (fsetdiff).
library(data.table)
a1 <- data.table(A = c(1:5, 2, 4, 2), B = letters[c(1:5, 2, 4, 2)])
a2 <- data.table(A = c(1:3,2), B = letters[c(1:3,2)])
# add counter for duplicates
a1[, i := 1:.N, .(A,B)]
a2[, i := 1:.N, .(A,B)]
# fsetdiff removes a2's rows from a1;
# "all = TRUE" keeps duplicate rows in the result
fsetdiff(a1, a2, all = TRUE)
# A B i
# 1: 4 d 1
# 2: 5 e 1
# 3: 4 d 2
# 4: 2 b 3
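The counter column can then be dropped from the result:
fsetdiff(a1, a2, all = TRUE)[, i := NULL][]
#    A B
# 1: 4 d
# 2: 5 e
# 3: 4 d
# 4: 2 b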
You could use dplyr to do this. I set stringsAsFactors = FALSE to get rid of warnings about factor mismatches.
library(dplyr)
a1 <- data.frame(A = c(1:5, 2, 4, 2), B = letters[c(1:5, 2, 4, 2)], stringsAsFactors = FALSE)
a2 <- data.frame(A = c(1:3,2), B = letters[c(1:3,2)], stringsAsFactors = FALSE)
## Make temp variables to join on, then delete later.
# Create a row number within each (A, B) group
a1_tmp <-
  a1 %>%
  group_by(A, B) %>%
  mutate(tmp_id = row_number()) %>%
  ungroup()
# Create a count per (A, B) group
a2_tmp <-
  a2 %>%
  group_by(A, B) %>%
  summarise(count = n()) %>%
  ungroup()
## Keep all rows that have no entry in a2 or whose id exceeds the count (i.e. the a2 entries are used up).
left_join(a1_tmp, a2_tmp, by = c('A', 'B')) %>%
  ungroup() %>%
  filter(is.na(count) | tmp_id > count) %>%
  select(-tmp_id, -count)
## # A tibble: 4 x 2
## A B
## <dbl> <chr>
## 1 4 d
## 2 5 e
## 3 4 d
## 4 2 b
EDIT
Here is a similar solution that is a little shorter. It does the following: (1) adds a row-number column so both data.frames can be joined on it; (2) adds a temporary column to a2 that will be NA after the left join wherever a row is unique to a1.
library(dplyr)
left_join(a1 %>% group_by(A, B) %>% mutate(rn = row_number()) %>% ungroup(),
          a2 %>% group_by(A, B) %>% mutate(rn = row_number(), tmpcol = 0) %>% ungroup(),
          by = c('A', 'B', 'rn')) %>%
  filter(is.na(tmpcol)) %>%
  select(-tmpcol, -rn)
## # A tibble: 4 x 2
## A B
## <dbl> <chr>
## 1 4 d
## 2 5 e
## 3 4 d
## 4 2 b
I think this solution is a little simpler (perhaps very little) than the first.
I guess this is similar to DWal's solution but in base R
# paste all columns together to form a row key
a1_temp = Reduce(paste, a1)
# append a within-duplicate counter so repeated rows get distinct keys
a1_temp = paste(a1_temp, ave(seq_along(a1_temp), a1_temp, FUN = seq_along))
a2_temp = Reduce(paste, a2)
a2_temp = paste(a2_temp, ave(seq_along(a2_temp), a2_temp, FUN = seq_along))
# keep the a1 rows whose key has no match in a2
a1[!a1_temp %in% a2_temp, ]
# A B
#4 4 d
#5 5 e
#7 4 d
#8 2 b
Here's another solution with dplyr:
library(dplyr)
a1 %>%
  arrange(A) %>%
  group_by(A) %>%
  filter(!(paste0(1:n(), A, B) %in% with(arrange(a2, A), paste0(1:n(), A, B))))
Result:
# A tibble: 4 x 2
# Groups: A [3]
A B
<dbl> <fctr>
1 2 b
2 4 d
3 4 d
4 5 e
This way of filtering avoids creating extra unwanted columns that you have to later remove in the final output. This method also sorts the output. Not sure if it's what you want.
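The duplicate-counter idea also pairs naturally with anti_join, which drops exactly the matched occurrences (a sketch using the same data):
library(dplyr)
a1 %>%
  group_by(A, B) %>% mutate(rn = row_number()) %>% ungroup() %>%
  anti_join(a2 %>% group_by(A, B) %>% mutate(rn = row_number()) %>% ungroup(),
            by = c("A", "B", "rn")) %>%
  select(-rn)
# leaves the rows 4 d, 5 e, 4 d, 2 b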

Summarize all group values and a conditional subset in the same call

I'll illustrate my question with an example.
Sample data:
df <- data.frame(ID = c(1, 1, 2, 2, 3, 5), A = c("foo", "bar", "foo", "foo", "bar", "bar"), B = c(1, 5, 7, 23, 54, 202))
df
ID A B
1 1 foo 1
2 1 bar 5
3 2 foo 7
4 2 foo 23
5 3 bar 54
6 5 bar 202
What I want to do is to summarize, by ID, the sum of B and the sum of B when A is "foo". I can do this in a couple steps like:
require(magrittr)
require(dplyr)
df1 <- df %>%
  group_by(ID) %>%
  summarize(sumB = sum(B))
df2 <- df %>%
  filter(A == "foo") %>%
  group_by(ID) %>%
  summarize(sumBfoo = sum(B))
left_join(df1, df2)
ID sumB sumBfoo
1 1 6 1
2 2 30 30
3 3 54 NA
4 5 202 NA
However, I'm looking for a more elegant/faster way, as I'm dealing with 10 GB+ of out-of-memory data in SQLite.
require(sqldf)
my_db <- src_sqlite("my_db.sqlite3", create = T)
df_sqlite <- copy_to(my_db, df)
I thought of using mutate to define a new Bfoo column:
df_sqlite %>%
  mutate(Bfoo = ifelse(A == "foo", B, 0))
Unfortunately, this doesn't work on the database end of things.
Error in sqliteExecStatement(conn, statement, ...) :
RS-DBI driver: (error in statement: no such function: IFELSE)
You can do both sums in a single dplyr statement:
df1 <- df %>%
  group_by(ID) %>%
  summarize(sumB = sum(B),
            sumBfoo = sum(B[A == "foo"]))
And here is a data.table version:
library(data.table)
dt <- setDT(df)
dt1 <- dt[, .(sumB = sum(B),
              sumBfoo = sum(B[A == "foo"])),
          by = ID]
dt1
ID sumB sumBfoo
1: 1 6 1
2: 2 30 30
3: 3 54 0
4: 5 202 0
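A base R sketch with aggregate() produces the same two sums (zeros rather than NA for groups with no "foo" rows, assuming df as above):
aggregate(cbind(sumB = B, sumBfoo = B * (A == "foo")) ~ ID, data = df, FUN = sum)
#   ID sumB sumBfoo
# 1  1    6       1
# 2  2   30      30
# 3  3   54       0
# 4  5  202       0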
Writing up @hadley's comment as an answer; the if/else form works on the database backend, presumably because it translates to a SQL CASE WHEN:
df_sqlite %>%
  group_by(ID) %>%
  mutate(Bfoo = if (A == "foo") B else 0) %>%
  summarize(sumB = sum(B),
            sumBfoo = sum(Bfoo)) %>%
  collect()
If you want to count instead of summarize, the answer is only slightly different: sum a logical condition to count the rows that satisfy it.
df1 <- df %>%
  group_by(ID) %>%
  summarize(countB = n(),
            countBfoo = sum(A == "foo"))
df1
Source: local data frame [4 x 3]
ID countB countBfoo
1 1 2 1
2 2 2 2
3 3 1 0
4 5 1 0
If you wanted to count the rows instead of summing them, can you pass a condition to the function?
df1 <- df %>%
  group_by(ID) %>%
  summarize(RowCountB = n(),
            RowCountBfoo = n(A == "foo"))
I get an error both with n() and nrow().
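That error is expected: n() takes no arguments and nrow() expects a data.frame, so neither accepts a condition. Counting the matching rows is done by summing the logical, as in the answer above:
df %>%
  group_by(ID) %>%
  summarize(RowCountB = n(),
            RowCountBfoo = sum(A == "foo"))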
