Tidyverse Solution for Using Tibble Columns as Input to a Function - r

I am trying to run a function on all combinations of two column vectors in a tibble.
library(tidyverse)
combination <- tibble(x = c(1, 2), y = c(3, 4))
sum_square <- function(x, y) {
x^2+y^2
}
I would like to run this function on all combinations of column x and column y:
sum_square(1, 3)
sum_square(1, 4)
sum_square(2, 3)
sum_square(2, 4)
Ideally I would like a tidyverse solution.

We can first expand and then apply sum_square on the expanded dataset:
library(tidyverse)
expand(combination, x, y) %>%
  mutate(new = sum_square(x, y))
# A tibble: 4 x 3
# x y new
# <dbl> <dbl> <dbl>
#1 1 3 10
#2 1 4 17
#3 2 3 13
#4 2 4 20
Another option is outer:
combination %>%
  reduce(outer, FUN = sum_square) %>%
  c() %>%
  tibble(new = .)
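Alternatively, tidyr's expand_grid (tidyr >= 1.0.0) builds the crossing directly from the vectors; a minimal sketch equivalent to the expand approach above:
expand_grid(x = combination$x, y = combination$y) %>%
  mutate(new = sum_square(x, y))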

R: Using dplyr to Mutate Multiple Columns

There are various questions on Stack Overflow regarding this, but I have been unable to find a solution to my question, which follows.
Suppose I have a data frame (or tibble) df with two columns, say X1 and X2. I have a function, say f, which takes inputs X1 and X2 and outputs a vector, say [V1, V2].
Now, if the output were a singleton, then I would be able to write
df %>% mutate(V = f(X1,X2))
to add a column labelled V to my df, and the entry would be f(X1,X2). However, I want to add two columns, V1 and V2. I do not know how to do this.
Of course, I could do something like
df %>% mutate(V1 = f(X1,X2)[1], V2 = f(X1,X2)[2]),
but this (I assume) involves calling the function f twice; I have a large data set, and would rather not call it twice.
Alternatively, I could do
df %>% mutate(V_list = as.list(f(X1,X2)), V1 = V_list[[1]], V2 = V_list[[2]]) %>% select(-V_list),
but this seems like a rather clunky way, and I'd rather not.
Further, I would like eventually to apply this to a grouped tibble, and so then the naive way of writing this would duplicate V_list for each entry in the group. As such, ideally any answer would be 'vectorisable', in the following sense.
Suppose I have done df %>% group_by(var1) and have a function f which takes a data frame with two columns as its input -- this should be thought of as 'a vector of pairs' -- and then outputs a new data frame with two columns.
Here is some code to set up the example.
library(dplyr)
df = tibble(var1 = c(1,1,2,2), X1 = c(1,2,3,4), X2 = c(5,6,7,8))
f = function(sub_df, var){ return( data.frame(x1 = (sub_df$X1 + sub_df$X2)^var, x2 = (sub_df$X1 - sub_df$X2)^var) ) }
If your function outputs a data.frame, it will be auto-spliced into new columns by mutate:
library(dplyr, warn.conflicts = FALSE)
df = tibble(var1 = c(1,1,2,2), X1 = c(1,2,3,4), X2 = c(5,6,7,8))
f = function(x1,x2) tibble(a = x1 + x2, b = x1 - x2)
df %>%
  mutate(f(X1, X2))
#> # A tibble: 4 × 5
#> var1 X1 X2 a b
#> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 1 1 5 6 -4
#> 2 1 2 6 8 -4
#> 3 2 3 7 10 -4
#> 4 2 4 8 12 -4
Created on 2021-09-16 by the reprex package (v2.0.1)
Or if your function outputs a vector you can use purrr::map2 with tidyr::unnest_wider.
Modify the function so its output is named:
f = function(x1,x2) c(a = x1 + x2, b = x1 - x2)
Create a new column which is a list containing a vector for each row, then apply unnest_wider to this column to split the vector elements into their own columns.
df %>%
  mutate(new = map2(X1, X2, f)) %>%
  unnest_wider(new)
# # A tibble: 4 x 5
# var1 X1 X2 a b
# <dbl> <dbl> <dbl> <dbl> <dbl>
# 1 1 1 5 6 -4
# 2 1 2 6 8 -4
# 3 2 3 7 10 -4
# 4 2 4 8 12 -4
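As an aside, the grouped variant sketched at the end of the question (f consuming each group's two-column sub-data-frame) can be handled with group_modify (dplyr >= 0.8.1). A sketch, assuming the two-argument f(sub_df, var) from the question's setup: .x is the group's data without the grouping column, .y a one-row tibble of group keys.
df %>%
  group_by(var1) %>%
  group_modify(~ f(.x, .y$var1))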
This may not be an ideal solution, but I have faced this situation and this is what I usually do: return a delimiter-separated string from the function and separate the column based on that delimiter.
f = function(x1,x2){ return( toString(c(x1+x2, x1-x2))) }
library(tidyverse)
df %>%
  mutate(new = map2_chr(X1, X2, f)) %>%
  separate(new, c("col1", "col2"), sep = ",", convert = TRUE)
# A tibble: 4 x 5
# var1 X1 X2 col1 col2
# <dbl> <dbl> <dbl> <int> <int>
#1 1 1 5 6 -4
#2 1 2 6 8 -4
#3 2 3 7 10 -4
#4 2 4 8 12 -4

dplyr rowwise with lag variables

I am trying to fill NAs in a variable using another correlated variable as per the code below.
test <- tibble(x = c(1,4,3,2,5,6), y = c(2,NA,6,NA,NA,5))
test <- test %>% mutate(chng = x/lag(x,1))
for(i in 1:nrow(test)){
  if(is.na(test$y[i])) test$y[i] <- test$y[i - 1] * test$chng[i]
}
Can I do the same operation in dplyr? I've tried rowwise but it seems that it doesn't recognize the lag function.
test %>% rowwise() %>% mutate(y = ifelse(is.na(y), lag(y,1) * chng, y))
Multiple NAs in a row also prevent me from creating a new column consisting of the lagged variable.
You could just repeat the dplyr operation until all NAs have been filled:
while(sum(is.na(test$y)) > 0){
  test <- test %>%
    mutate(y = ifelse(is.na(y), lag(y, 1) * chng, y))
}
# A tibble: 6 x 3
x y chng
<dbl> <dbl> <dbl>
1 1 2 NA
2 4 8 4
3 3 6 0.75
4 2 4 0.667
5 5 10 2.5
6 6 5 1.2
I'm pretty sure this won't gain you any efficiency for computing time, though.
It's not working because in rowwise mode you are using lag on a subset of one row. Creating a new column y.lag before you enter rowwise mode will work:
test %>%
  mutate(y.lag = lag(y, 1)) %>%
  rowwise() %>%
  mutate(y = ifelse(is.na(y), y.lag * chng, y)) %>%
  select(-y.lag)
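Note that neither answer resolves a run of consecutive NAs in a single pass (the while loop gets there by iterating). A single-pass alternative, not from the original answers, is purrr::accumulate2, which carries the last filled value forward; a sketch under that assumption:
library(tidyverse)
test <- tibble(x = c(1, 4, 3, 2, 5, 6), y = c(2, NA, 6, NA, NA, 5)) %>%
  mutate(chng = x / lag(x, 1))
test %>%
  mutate(y = unlist(accumulate2(
    y[-1], chng[-1],
    # prev is the most recent filled value; scale it by chng wherever y is NA
    function(prev, y_i, chng_i) if (is.na(y_i)) prev * chng_i else y_i,
    .init = y[[1]]
  )))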

Include empty factor levels in tally with tidyr and dplyr

A question as I learn dplyr and its ilk.
I am calculating a tally and a relative frequency of a factor conditioned on two other variables in a df. For instance:
library(dplyr)
library(tidyr)
set.seed(3457)
pct <- function(x) {x/sum(x)}
foo <- data.frame(x = rep(seq(1:3), 20),
                  y = rep(rep(c("a", "b"), each = 3), 10),
                  z = LETTERS[floor(runif(60, 1, 5))])
bar <- foo %>%
  group_by(x, y, z) %>%
  tally %>%
  mutate(freq = (n / sum(n)) * 100)
head(bar)
I'd like the output, bar, to include all the levels of foo$z. I.e., there are no cases of C here:
subset(bar, x==2 & y=="a")
How can I have bar tally the missing levels so I get:
subset(bar, x==2 & y=="a",select = n)
to return 4, 5, 0, 1 (and select = freq to give 40, 50, 0, 10)?
Many thanks.
Edit: Ran with the seed set!
We can use complete from tidyr:
bar1 <- bar %>%
  complete(z, nesting(x, y), fill = list(n = 0, freq = 0)) %>%
  select(all_of(names(bar)))  # select_() is deprecated; all_of() restores the original column order
filter(bar1, x==2 & y=="a")
# x y z n freq
# <int> <fctr> <fctr> <dbl> <dbl>
#1 2 a A 4 40
#2 2 a B 5 50
#3 2 a C 0 0
#4 2 a D 1 10
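Since dplyr 0.8 there is also a more direct route that skips complete altogether: group_by(..., .drop = FALSE) keeps empty factor groups. A sketch, assuming all three columns are converted to factors first (x is numeric, and on R >= 4.0 y and z arrive as character, so .drop = FALSE would preserve nothing without the conversion; across needs dplyr >= 1.0):
foo %>%
  mutate(across(c(x, y, z), factor)) %>%
  group_by(x, y, z, .drop = FALSE) %>%
  tally() %>%  # tally() peels off the z grouping, leaving groups (x, y)
  mutate(freq = (n / sum(n)) * 100)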

Proper idiom for adding zero count rows in tidyr/dplyr

Suppose I have some count data that looks like this:
library(tidyr)
library(dplyr)
X.raw <- data.frame(
  x = as.factor(c("A", "A", "A", "B", "B", "B")),
  y = as.factor(c("i", "ii", "ii", "i", "i", "i")),
  z = 1:6
)
X.raw
# x y z
# 1 A i 1
# 2 A ii 2
# 3 A ii 3
# 4 B i 4
# 5 B i 5
# 6 B i 6
I'd like to tidy and summarise like this:
X.tidy <- X.raw %>% group_by(x, y) %>% summarise(count = sum(z))
X.tidy
# Source: local data frame [3 x 3]
# Groups: x
#
# x y count
# 1 A i 1
# 2 A ii 5
# 3 B i 15
I know that for x=="B" and y=="ii" we have observed count of zero, rather than a missing value. i.e. the field worker was actually there, but because there wasn't a positive count no row was entered into the raw data. I can add the zero count explicitly by doing this:
X.fill <- X.tidy %>% spread(y, count, fill = 0) %>% gather(y, count, -x)
X.fill
# Source: local data frame [4 x 3]
#
# x y count
# 1 A i 1
# 2 B i 15
# 3 A ii 5
# 4 B ii 0
But that seems a little bit of a roundabout way of doing things. Is there a cleaner idiom for this?
Just to clarify: My code already does what I need it to do, using spread then gather, so what I'm interested in is finding a more direct route within tidyr and dplyr.
Since dplyr 0.8 you can do it by setting the parameter .drop = FALSE in group_by:
X.tidy <- X.raw %>% group_by(x, y, .drop = FALSE) %>% summarise(count=sum(z))
X.tidy
# # A tibble: 4 x 3
# # Groups: x [2]
# x y count
# <fct> <fct> <int>
# 1 A i 1
# 2 A ii 5
# 3 B i 15
# 4 B ii 0
This will keep groups made of all the levels of factor columns, so if you have character columns you might want to convert them to factors first (thanks to Pake for the note).
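For example, with across (dplyr >= 1.0.0); a sketch, and a no-op here since X.raw's columns are already factors:
X.raw %>%
  mutate(across(where(is.character), as.factor)) %>%
  group_by(x, y, .drop = FALSE) %>%
  summarise(count = sum(z))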
The complete function from tidyr is made for just this situation.
From the docs:
This is a wrapper around expand(), left_join() and replace_na() that's useful for completing missing combinations of data.
You could use it in two ways. First, you could use it on the original dataset before summarizing, "completing" the dataset with all combinations of x and y, and filling z with 0 (you could use the default NA fill and use na.rm = TRUE in sum).
X.raw %>%
  complete(x, y, fill = list(z = 0)) %>%
  group_by(x, y) %>%
  summarise(count = sum(z))
Source: local data frame [4 x 3]
Groups: x [?]
x y count
<fctr> <fctr> <dbl>
1 A i 1
2 A ii 5
3 B i 15
4 B ii 0
You can also use complete on your pre-summarized dataset. Note that complete respects grouping. X.tidy is grouped, so you can either ungroup and complete the dataset by x and y or just list the variable you want completed within each group - in this case, y.
# Complete after ungrouping
X.tidy %>%
  ungroup %>%
  complete(x, y, fill = list(count = 0))
# Complete within grouping
X.tidy %>%
  complete(y, fill = list(count = 0))
The result is the same for each option:
Source: local data frame [4 x 3]
x y count
<fctr> <fctr> <dbl>
1 A i 1
2 A ii 5
3 B i 15
4 B ii 0
You can use tidyr's expand to make all combinations of levels of factors, and then left_join:
X.tidy %>% expand(x, y) %>% left_join(X.tidy)
# Joining by: c("x", "y")
# Source: local data frame [4 x 3]
#
# x y count
# 1 A i 1
# 2 A ii 5
# 3 B i 15
# 4 B ii NA
Then you may keep values as NAs or replace them with 0 or any other value.
This isn't a complete solution to the problem either, since the NAs still need replacing, but it's faster and more RAM-friendly than spread & gather.
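For that replacement step, one option is tidyr's replace_na; a sketch with the join keys made explicit:
X.tidy %>%
  expand(x, y) %>%
  left_join(X.tidy, by = c("x", "y")) %>%
  replace_na(list(count = 0))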
plyr has the functionality you're looking for, but dplyr doesn't (yet), so you need some extra code to include the zero-count groups, as shown by @momeara. Also see this question. In plyr::ddply you just add .drop = FALSE to keep zero-count groups in the final result. For example:
library(plyr)
X.tidy = ddply(X.raw, .(x,y), summarise, count=sum(z), .drop=FALSE)
X.tidy
x y count
1 A i 1
2 A ii 5
3 B i 15
4 B ii 0
You could explicitly make all possible combinations and then join them with the tidy summary:
X.fill <- expand.grid(x = unique(X.tidy$x), y = unique(X.tidy$y)) %>%
  left_join(X.tidy, by = c("x", "y")) %>%
  mutate(count = ifelse(is.na(count), 0, count)) # replace NA values with 0s
You can also use the data.table package and its Cross Join CJ() function for that.
require(data.table)
X = data.table(X.raw)[
  CJ(y = y, x = x, unique = TRUE),
  on = .(x, y)
][, .(z = sum(z)), .(x, y)][order(x, y)]
X
# filling the NAs with 0s
setnafill(X, fill = 0, cols = 'z')
X
# x y z
# 1: A i 1
# 2: A ii 5
# 3: B i 15
# 4: B ii 0
Though it's not initially asked for, I'm adding a data.table solution here for the sake of completeness and to also link to the related data.table question.

Remove duplicated rows using dplyr

I have a data.frame like this -
set.seed(123)
df = data.frame(x=sample(0:1,10,replace=T),y=sample(0:1,10,replace=T),z=1:10)
> df
x y z
1 0 1 1
2 1 0 2
3 0 1 3
4 1 1 4
5 1 0 5
6 0 1 6
7 1 0 7
8 1 0 8
9 1 0 9
10 0 1 10
I would like to remove duplicate rows based on first two columns. Expected output -
df[!duplicated(df[,1:2]),]
x y z
1 0 1 1
2 1 0 2
4 1 1 4
I am specifically looking for a solution using dplyr package.
Here is a solution using dplyr >= 0.5.
library(dplyr)
set.seed(123)
df <- data.frame(
  x = sample(0:1, 10, replace = T),
  y = sample(0:1, 10, replace = T),
  z = 1:10
)
> df %>% distinct(x, y, .keep_all = TRUE)
x y z
1 0 1 1
2 1 0 2
3 1 1 4
Note: dplyr now contains the distinct function for this purpose.
Original answer below:
library(dplyr)
set.seed(123)
df <- data.frame(
  x = sample(0:1, 10, replace = T),
  y = sample(0:1, 10, replace = T),
  z = 1:10
)
One approach would be to group, and then only keep the first row:
df %>% group_by(x, y) %>% filter(row_number(z) == 1)
## Source: local data frame [3 x 3]
## Groups: x, y
##
## x y z
## 1 0 1 1
## 2 1 0 2
## 3 1 1 4
(In dplyr 0.2 you won't need the dummy z variable and will just be
able to write row_number() == 1)
I've also been thinking about adding a slice() function that would
work like:
df %>% group_by(x, y) %>% slice(from = 1, to = 1)
Or maybe a variation of unique() that would let you select which
variables to use:
df %>% unique(x, y)
For completeness’ sake, the following also works:
df %>% group_by(x) %>% filter(!duplicated(y))
However, I prefer the solution using distinct, and I suspect it’s faster, too.
Most of the time, the best solution is using distinct() from dplyr, as has already been suggested.
However, here's another approach that uses the slice() function from dplyr.
# Generate fake data for the example
library(dplyr)
set.seed(123)
df <- data.frame(
  x = sample(0:1, 10, replace = T),
  y = sample(0:1, 10, replace = T),
  z = 1:10
)
# In each group of rows formed by combinations of x and y
# retain only the first row
df %>%
  group_by(x, y) %>%
  slice(1)
Difference from using the distinct() function
The advantage of this solution is that it makes it explicit which rows are retained from the original dataframe, and it can pair nicely with the arrange() function.
Let's say you had customer sales data and you wanted to retain one record per customer, and you want that record to be the one from their latest purchase. Then you could write:
customer_purchase_data %>%
  arrange(desc(Purchase_Date)) %>%
  group_by(Customer_ID) %>%
  slice(1)
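Since dplyr 1.0.0 the same intent can be spelled with slice_head(), which reads a little more explicitly; a sketch on the same hypothetical data:
customer_purchase_data %>%
  arrange(desc(Purchase_Date)) %>%
  group_by(Customer_ID) %>%
  slice_head(n = 1)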
When selecting columns in R for a reduced data set you can often end up with duplicate rows.
These two lines give the same rows, each outputting the unique combinations of the two selected columns (distinct() preserves row order, while the grouped summarise() sorts by the grouping variables):
distinct(mtcars, cyl, hp)
summarise(group_by(mtcars, cyl, hp))
If you want to find the rows that are duplicated you can use find_duplicates from hablar:
library(dplyr)
library(hablar)
df <- tibble(a = c(1, 2, 2, 4),
             b = c(5, 2, 2, 8))
df %>% find_duplicates()
