Is it possible to use rbind within a pipe so that I don't have to define and store a variable to use it?
library(tidyverse)
## works fine
df <- iris %>%
  group_by(Species) %>%
  summarise(Avg.Sepal.Length = mean(Sepal.Length)) %>%
  ungroup()

df %>%
  rbind(df)
## any way to make this work?
iris %>%
  group_by(Species) %>%
  summarise(Avg.Sepal.Length = mean(Sepal.Length)) %>%
  ungroup() %>%
  rbind(.)
Just to elaborate on @MichaelDewar's answer, note the following section of ?magrittr::`%>%`:
Placing lhs elsewhere in rhs call
Often you will want lhs to the rhs call at another position than the first. For this purpose you can use the dot (.) as placeholder. For example, y %>% f(x, .) is equivalent to f(x, y) and z %>% f(x, y, arg = .) is equivalent to f(x, y, arg = z).
My understanding is that when . appears as an argument in the right hand side call, the left hand side is not inserted in the first position. The call is evaluated "as is", with . evaluating to the left hand side. Hence:
library("dplyr")
x <- data.frame(a = 1:2, b = 3:4)
x %>% rbind() # rbind(x)
## a b
## 1 1 3
## 2 2 4
x %>% rbind(.) # same
## a b
## 1 1 3
## 2 2 4
x %>% rbind(x) # rbind(x, x)
## a b
## 1 1 3
## 2 2 4
## 3 1 3
## 4 2 4
x %>% rbind(x, .) # same
x %>% rbind(., x) # same
x %>% rbind(., .) # same
## a b
## 1 1 3
## 2 2 4
## 3 1 3
## 4 2 4
You can devise clever tricks if you know the rules:
x %>% rbind((.)) # rbind(x, (x))
## a b
## 1 1 3
## 2 2 4
## 3 1 3
## 4 2 4
(.) isn't parsed like ., so the left-hand side is inserted in the first position of the right-hand side call. Compare:
as.list(quote(.))
## [[1]]
## .
as.list(quote((.)))
## [[1]]
## `(`
##
## [[2]]
## .
I don't know why you would want to rbind something with itself, but here you go:
iris %>%
  group_by(Species) %>%
  summarise(Avg.Sepal.Length = mean(Sepal.Length)) %>%
  ungroup() %>%
  rbind(., .)
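A related point from the same help page: if the dot only appears inside a nested call, the left-hand side is still inserted as the first argument; wrapping the right-hand side in braces suppresses that insertion entirely. A small sketch (not part of the original answer):
library(magrittr)
x <- data.frame(a = 1:2, b = 3:4)

x %>% rbind(head(., 1))      # dot is nested, so this is rbind(x, head(x, 1)) -- 3 rows
x %>% { rbind(head(., 1)) }  # braces suppress insertion: rbind(head(x, 1))   -- 1 row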
Related
My question is similar to this question but I need to apply a more complex function across columns and I can't figure out how to apply Lionel's suggested solution to a custom function with a scoped verb like filter_at() or a filter()+across() equivalent. It doesn't look like a "superstache"/{{{}}} operator has been introduced.
Here is a non-programmed example of what I want to do (doesn't use NSE):
library(dplyr)
library(magrittr)
foo <- tibble(group = c(1, 1, 2, 2, 3, 3),
              a = c(1, 1, 0, 1, 2, 2),
              b = c(1, 1, 2, 2, 0, 1))

foo %>%
  group_by(group) %>%
  filter_at(vars(a, b), any_vars(n_distinct(.) != 1)) %>%
  ungroup()
#> # A tibble: 4 x 3
#> group a b
#> <dbl> <dbl> <dbl>
#> 1 2 0 2
#> 2 2 1 2
#> 3 3 2 0
#> 4 3 2 1
I haven't found an equivalent of this filter_at line with filter+across() yet, but since the new(ish) tidyeval functions predate dplyr 1.0 I assume that issue can be set aside. Here is my attempt to make a programmed version where the filtering variables are user-supplied with dots:
my_function <- function(data, ..., by) {
  dots <- enquos(..., .named = TRUE)
  helperfunc <- function(arg) {
    return(any_vars(n_distinct(arg) != length(arg)))
  }
  dots <- lapply(dots, function(dot) call("helperfunc", dot))
  data %>%
    group_by({{ by }}) %>%
    filter(!!!dots) %>%
    ungroup()
}
foo %>%
my_function(a, b, group)
#> Error: Problem with `filter()` input `..1`.
#> x Input `..1` is named.
#> i This usually means that you've used `=` instead of `==`.
#> i Did you mean `a == helperfunc(a)`?
I'd love it if there were a way to just plug an NSE operator into the vars() argument of filter_at() and not have to make all these extra calls (I assume this is what a {{{}}} function would do?)
Maybe I'm misunderstanding what the issue is, but the standard pattern of forwarding the dots seems to work fine here:
my_function <- function(data, ..., by) {
  data %>%
    group_by({{ by }}) %>%
    filter_at(vars(...), any_vars(n_distinct(.) != 1)) %>%
    ungroup()
}

foo %>%
  my_function(a, b, by = group)  # works
Here is a way to use across() to achieve this that is covered in vignette("colwise").
my_function <- function(data, vars, by) {
  data %>%
    group_by({{ by }}) %>%
    filter(n_distinct(across({{ vars }}, ~ .x)) != 1) %>%
    ungroup()
}

foo %>%
  my_function(c(a, b), by = group)
# A tibble: 4 x 3
group a b
<dbl> <dbl> <dbl>
1 2 0 2
2 2 1 2
3 3 2 0
4 3 2 1
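If you are on dplyr 1.0.4 or later, if_any() is arguably the most direct replacement for the any_vars() idiom. A sketch under that assumption (not from the original answer):
my_function <- function(data, vars, by) {
  data %>%
    group_by({{ by }}) %>%
    # keep groups where at least one of the selected columns varies
    filter(if_any({{ vars }}, ~ n_distinct(.x) != 1)) %>%
    ungroup()
}

foo %>%
  my_function(c(a, b), by = group)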
An option with across
my_function <- function(data, by, ...) {
  dots <- enquos(..., .named = TRUE)
  nm1 <- purrr::map_chr(dots, rlang::as_label)
  data %>%
    dplyr::group_by({{ by }}) %>%
    dplyr::mutate(across(nm1, ~ n_distinct(.) != 1, .names = "{col}_ind")) %>%
    dplyr::ungroup() %>%
    dplyr::filter(dplyr::select(., ends_with('ind')) %>% purrr::reduce(`|`)) %>%
    dplyr::select(-ends_with('ind'))
}

my_function(foo, group, a, b)
# A tibble: 4 x 3
# group a b
# <dbl> <dbl> <dbl>
#1 2 0 2
#2 2 1 2
#3 3 2 0
#4 3 2 1
Or with filter/across
foo %>%
  group_by(group) %>%
  filter(any(!across(c(a, b), ~ n_distinct(.) == 1)))
# A tibble: 4 x 3
# Groups: group [2]
# group a b
# <dbl> <dbl> <dbl>
#1 2 0 2
#2 2 1 2
#3 3 2 0
#4 3 2 1
I want to apply different functions to the same column in a tibble. These functions are stored as character strings. I used to do this with mutate_ and the .dots argument like this:
library(dplyr)
myfuns <- c(f1 = "a^2", f2 = "exp(a)", f3 = "sqrt(a)")

tibble(a = 1:3) %>%
  mutate_(.dots = myfuns)
This approach still works fine but mutate_ is deprecated. I tried to achieve the same result with mutate and the rlang package but did not get very far.
In my real example myfuns contains about 200 functions so typing them one by one is not an option.
Thanks in advance.
For simple equations that take a single input, it’s sufficient to supply the function itself, e.g.
iris %>% mutate_at(vars(-Species), sqrt)
Or, when using an equation rather than a simple function, via a formula:
iris %>% mutate_at(vars(-Species), ~ . ^ 2)
When using equations that access more than a single variable, you need to use rlang quosures instead:
area = quo(Sepal.Length * Sepal.Width)
iris %>% mutate(Sepal.Area = !! area)
Here, quo creates a “quosure” — i.e. a quoted representation of your equation, same as your use of strings, except, unlike strings, this one is properly scoped, is directly usable by dplyr, and is conceptually cleaner: It is like any other R expression, except not yet evaluated. The difference is as follows:
1 + 2 is an expression with value 3.
quo(1 + 2) is an unevaluated expression with value 1 + 2 that evaluates to 3, but it needs to be explicitly evaluated. So how do we evaluate an unevaluated expression? Well …:
Then !! (pronounced “bang bang”) unquotes the previously-quoted expression, i.e. evaluates it — inside the context of mutate. This is important, because Sepal.Length and Sepal.Width are only known inside the mutate call, not outside of it.
In all the cases above, the expressions can be inside a list, too. The only difference is that for lists you need to use !!! instead of !!:
funs = list(
  Sepal.Area = quo(Sepal.Length * Sepal.Width),
  Sepal.Ratio = quo(Sepal.Length / Sepal.Width)
)
iris %>% mutate(!!! funs)
The !!! operation is known as “unquote-splice”. The idea is that it “splices” the list elements of its arguments into the parent call. That is, it seems to modify the call as if it contained the list elements verbatim as arguments (this only works in functions, such as mutate, that support it, though).
Convert your strings to expressions
myexprs <- purrr::map( myfuns, rlang::parse_expr )
then pass those expressions to regular mutate using quasiquotation:
tibble(a = 1:3) %>% mutate( !!!myexprs )
# # A tibble: 3 x 4
# a f1 f2 f3
# <int> <dbl> <dbl> <dbl>
# 1 1 1 2.72 1
# 2 2 4 7.39 1.41
# 3 3 9 20.1 1.73
Note that this will also work with strings / expressions involving multiple columns.
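For instance (a small added illustration, not part of the original answer), strings that reference several columns splice in exactly the same way:
library(dplyr)
library(rlang)

multi <- c(s = "a + b", p = "a * b")
myexprs <- purrr::map(multi, parse_expr)

tibble(a = 1:3, b = 4:6) %>%
  mutate(!!!myexprs)
# a b s  p
# 1 4 5  4
# 2 5 7 10
# 3 6 9 18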
You have only one column, so both approaches below give you the same result.
You only need to modify your list of functions.
library(dplyr)
myfuns <- c(f1 = ~.^2, f2 = ~exp(.), f3 = ~sqrt(.))
tibble(a = 1:3) %>% mutate_at(vars(a), myfuns)
tibble(a = 1:3) %>% mutate_all(myfuns)
# # A tibble: 3 x 4
# a f1 f2 f3
# <int> <dbl> <dbl> <dbl>
# 1 1 1 2.72 1
# 2 2 4 7.39 1.41
# 3 3 9 20.1 1.73
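mutate_at()/mutate_all() have since been superseded; the same named list of lambdas can be handed to across() instead. A sketch, assuming dplyr >= 1.0:
library(dplyr)

myfuns <- list(f1 = ~ .x^2, f2 = ~ exp(.x), f3 = ~ sqrt(.x))

# .names controls the output column names; the default here would be a_f1, a_f2, a_f3
tibble(a = 1:3) %>%
  mutate(across(a, myfuns, .names = "{fn}"))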
A base alternative:
myfuns <- c(f1 = "a^2", f2 = "exp(a)", f3 = "sqrt(a)")
df <- data.frame(a = 1:3)
df[names(myfuns)] <- lapply(myfuns, function(x) eval(parse(text = x), envir = df))
df
#> a f1 f2 f3
#> 1 1 1 2.718282 1.000000
#> 2 2 4 7.389056 1.414214
#> 3 3 9 20.085537 1.732051
Created on 2019-07-08 by the reprex package (v0.3.0)
One way using parse_expr from rlang
library(tidyverse)
library(rlang)
tibble(a = 1:3) %>%
  mutate(ans = map(myfuns, ~ eval(parse_expr(.)))) %>%
  # OR mutate(ans = map(myfuns, ~ eval(parse(text = .)))) %>%
  unnest() %>%
  group_by(a) %>%
  mutate(temp = row_number()) %>%
  spread(a, ans) %>%
  select(-temp) %>%
  rename_all(~ names(myfuns))
# A tibble: 3 x 3
# f1 f2 f3
# <dbl> <dbl> <dbl>
#1 1 2.72 1
#2 4 7.39 1.41
#3 9 20.1 1.73
You can also try a purrr approach:
# define the functions
f1 <- function(a) a^2
f2 <- function(a, b) a + b
f3 <- function(b) sqrt(b)

# put all functions in one list
tibble(funs = list(f1, f2, f3)) %>%
  # give each function a name
  mutate(fun_id = paste0("f", row_number())) %>%
  # add to each row/function the matching column profile
  # first extract the column names you specified in each function
  # mutate(columns = funs %>%
  #   toString() %>%
  #   str_extract_all(., "function \\(.*?\\)", simplify = T) %>%
  #   str_extract_all(., "(?<=\\().+?(?=\\))", simplify = T) %>%
  #   gsub(" ", "", .) %>%
  #   str_split(., ",")) %>%
  # with the help of Konrad we can use fn_fmls_names
  mutate(columns = map(funs, ~ rlang::fn_fmls_names(.))) %>%
  # select the columns and add to our tibble/data.frame
  mutate(params = map(columns, ~ select(df, .))) %>%
  # invoke the functions
  mutate(results = invoke_map(.f = funs, .x = params)) %>%
  # transform to desired output
  unnest(results) %>%
  group_by(fun_id) %>%
  mutate(n = row_number()) %>%
  spread(fun_id, results) %>%
  left_join(mutate(df, n = row_number()), .) %>%
  select(-n)
Joining, by = "n"
# A tibble: 5 x 5
a b f1 f2 f3
<dbl> <dbl> <dbl> <dbl> <dbl>
1 2 1 4 3 1
2 4 1 16 5 1
3 5 2 25 7 1.41
4 7 2 49 9 1.41
5 8 2 64 10 1.41
Some data:
df <- data_frame(
  a = c(2, 4, 5, 7, 8),
  b = c(1, 1, 2, 2, 2))
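Note that invoke_map() has since been deprecated in purrr; under that assumption, the same step could be written with map2() plus rlang::exec(), splicing each parameter data frame into its function. A sketch, not part of the original answer:
library(dplyr)
library(purrr)

f1 <- function(a) a^2
f2 <- function(a, b) a + b
df <- tibble(a = c(2, 4), b = c(1, 1))

tibble(funs = list(f1, f2)) %>%
  mutate(columns = map(funs, rlang::fn_fmls_names),
         params  = map(columns, ~ select(df, all_of(.x))),
         # exec() splices the parameter columns into each call,
         # e.g. exec(f2, a = df$a, b = df$b)
         results = map2(funs, params, ~ rlang::exec(.x, !!!.y)))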
I have the following data.frame:
df <- data.frame(X1 = c(1,2,2))
df$X2 <- list(list(1, 2), list(0, 1), list(1,0))
df
X1 X2
1 1 1, 2
2 2 0, 1
3 2 1, 0
Now, I would like to add a new column that is the element-wise mean of all the lists in X2 that share the same X1 value, e.g.:
X1 mean
1 1 1, 2
2 2 0.5, 0.5
I tried with the following instructions:
df %>% group_by(X1) %>% summarise(mean = mean(X2))
But all I get is
X1 mean
<dbl> <dbl>
1 1.00 NA
2 2.00 NA
Warning messages:
1: In mean.default(X2) : argument is not numeric or logical: returning NA
How can I build this new column?
We may use
df <- df %>%
  group_by(X1) %>%
  summarise(mean = list(map(reduce(X2, `map2`, `+`), `/`, n())))
df$mean
# [[1]]
# [[1]][[1]]
# [1] 1
#
# [[1]][[2]]
# [1] 2
#
#
# [[2]]
# [[2]][[1]]
# [1] 0.5
#
# [[2]][[2]]
# [1] 0.5
Explanation: first, after grouping, with
reduce(X2, `map2`, `+`)
we add all the lists element-wise. Then, to get the mean, we use another map with `/` and n(). Lastly, list() wraps the result so summarise() can store it in a list column.
Update: you may also use
df %>%
  group_by(X1) %>%
  summarise(mean = list(pmap(X2, ~ sum(...) / n())))
or
df %>%
  group_by(X1) %>%
  summarise(mean = list(pmap(X2, ~ mean(c(...)))))
Unfortunately list(pmap(X2, mean)) doesn't work, because mean() does not combine its arguments:
mean(1, 2)
# [1] 1
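An alternative sketch (added here, not from the original answer): flatten each inner list to a numeric vector first, then average position-wise with colMeans():
library(dplyr)
library(purrr)

df %>%
  group_by(X1) %>%
  summarise(mean = list(as.list(colMeans(do.call(rbind, map(X2, unlist))))))
# X1 = 1 -> list(1, 2); X1 = 2 -> list(0.5, 0.5)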
I have a dataset like so:
df <- data.frame(x = c("A","A","A","A", "B","B","B","B","B",
                       "C","C","C","C","C", "D","D","D","D","D"),
                 y = as.factor(c(rep("Eoissp2", 4), rep("Eoissp1", 5),
                                 "Eoissp1", "Eoisp4", "Automerissp1", "Automerissp2",
                                 "Acharias", rep("Eoissp2", 3), rep("Eoissp1", 2))))
I want to identify, for each subset of x, the corresponding levels in y that are entirely duplicates containing the expression Eois. Therefore, A, B, and D will be returned in a vector, because every level of A, B, and D contains the expression Eois, while level C consists of various unique levels (e.g. Eois, Automeris and Acharias). For this example the output would be:
output<- c("A", "B", "D")
Using new df:
> df %>% filter(str_detect(y,"Eois")) %>% group_by(x) %>% distinct(y) %>%
count() %>% filter(n==1) %>% select(x)
# A tibble: 2 x 1
# Groups: x [2]
x
<fct>
1 A
2 B
(Answer below uses the original df posted by the question author.)
Using the pipe function in magrittr & functions from dplyr:
> df %>% group_by(x) %>% distinct(y)
# A tibble: 7 x 2
# Groups: x [3]
x y
<fct> <fct>
1 A plant1a
2 B plant1b
3 C plant1a
4 C plant2a
5 C plant3a
6 C plant4a
7 C plant5a
Then you can roll up the results like this:
> results <- df %>% group_by(x) %>% distinct(y) %>%
count() %>% filter(n==1) %>% select(x)
> results
# A tibble: 2 x 1
# Groups: x [2]
x
<fct>
1 A
2 B
If you know your original data frame is always going to come with the x's in order, you can drop the group_by part.
A dplyr-based solution could be:
library(dplyr)
df %>%
  group_by(x) %>%
  filter(grepl("Eoiss", y)) %>%
  mutate(y = sub("\\d+", "", y)) %>%
  filter(n() > 1 & length(unique(y)) == 1) %>%
  select(x) %>%
  unique(.)
# A tibble: 3 x 1
# Groups: x [3]
# x
# <fctr>
#1 A
#2 B
#3 D
Data
df <- data.frame(x = c("A","A","A","A", "B","B","B","B","B",
                       "C","C","C","C","C", "D","D","D","D","D"),
                 y = as.factor(c(rep("Eoissp2", 4), rep("Eoissp1", 5),
                                 "Eoissp1", "Eoisp4", "Automerissp1", "Automerissp2",
                                 "Acharias", rep("Eoissp2", 3), rep("Eoissp1", 2))))
It seems the number of resulting rows is different when using distinct() vs unique(). The data set I am working with is huge. I hope the code is OK to understand.
dt2a <- select(dt, mutation.genome.position, mutation.cds,
               primary.site, sample.name, mutation.id) %>%
  group_by(mutation.genome.position, mutation.cds, primary.site) %>%
  mutate(occ = nrow(.)) %>%
  select(-sample.name) %>%
  distinct()
dim(dt2a)
[1] 2316382 5
## Using unique instead
dt2b <- select(dt, mutation.genome.position, mutation.cds,
               primary.site, sample.name, mutation.id) %>%
  group_by(mutation.genome.position, mutation.cds, primary.site) %>%
  mutate(occ = nrow(.)) %>%
  select(-sample.name) %>%
  unique()
dim(dt2b)
[1] 2837982 5
This is the file I am working with:
sftp://sftp-cancer.sanger.ac.uk/files/grch38/cosmic/v72/CosmicMutantExport.tsv.gz
dt = fread(fl)
This appears to be a result of the group_by. Consider this case:
dt <- data.frame(g = rep(c("a", "b"), each = 3),
                 v = c(2, 2, 5, 2, 7, 7))

dt %>% group_by(g) %>% unique()
# Source: local data frame [4 x 2]
# Groups: g
#
# g v
# 1 a 2
# 2 a 5
# 3 b 2
# 4 b 7
dt %>% group_by(g) %>% distinct()
# Source: local data frame [2 x 2]
# Groups: g
#
# g v
# 1 a 2
# 2 b 2
dt %>% group_by(g) %>% distinct(v)
# Source: local data frame [4 x 2]
# Groups: g
#
# g v
# 1 a 2
# 2 a 5
# 3 b 2
# 4 b 7
When you use distinct() without indicating which variables to make distinct, it appears to use the grouping variable.
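If what you actually want is "unique rows" regardless of the grouping, one way to side-step this (a sketch, not from the original answer) is to list the columns explicitly or drop the grouping first:
dt %>% group_by(g) %>% distinct(g, v)  # 4 rows, same as unique()
dt %>% ungroup() %>% distinct()        # also 4 rows
In more recent dplyr versions, distinct() with no variables considers all columns, so the discrepancy shown above may no longer appear.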