Using R 3.2.2 and dplyr 0.7.2, I'm trying to figure out how to use group_by() effectively with fields supplied as character vectors.
Selecting is easy: I can select a field via a string like this:
(function(field) {
mpg %>% dplyr::select(field)
})("cyl")
I can select multiple fields via multiple strings like this:
(function(...) {
mpg %>% dplyr::select(!!!quos(...))
})("cyl", "hwy")
and multiple fields via one character vector of length > 1 like this:
(function(fields) {
mpg %>% dplyr::select(fields)
})(c("cyl", "hwy"))
With group_by() I cannot really find a way to do this for more than one string: when I do manage to get output, it ends up grouped by the literal string I supply.
I managed to group by one string like this:
(function(field) {
mpg %>% group_by(!!field := .data[[field]]) %>% tally()
})("cyl")
Which is already quite ugly.
Does anyone know what I have to write so I can run
(function(...) {...})("cyl", "hwy")
and
(function(fields) {...})(c("cyl", "hwy"))
respectively? I tried all sorts of combinations of !!, !!!, UQ, enquo, quos, unlist, etc., and saving them in intermediate variables (because that sometimes seems to make a difference), but I cannot get it to work.
select() is very special in dplyr: it doesn't accept columns, but column names or positions, so it's about the only main verb that accepts strings. (Technically, when you supply a bare name like cyl to select(), it gets evaluated as its own name, not as the vector inside the data frame.)
If you want your function to take simple strings, as opposed to bare expressions or symbols, you don't need quosures. Just create symbols from the strings and unquote them:
myselect <- function(...) {
syms <- syms(list(...))
select(mtcars, !!! syms)
}
mygroup <- function(...) {
syms <- syms(list(...))
group_by(mtcars, !!! syms)
}
myselect("cyl", "disp")
mygroup("cyl", "disp")
To debug the unquoting, wrap with expr() and check that the expression looks right:
syms <- syms(list("cyl", "disp"))
expr(group_by(mtcars, !!! syms))
#> group_by(mtcars, cyl, disp) # yup, looks right!
See this talk for more on this (we'll update the programming vignette to make the concepts clearer): https://schd.ws/hosted_files/user2017/43/tidyeval-user.pdf.
Finally, note that many verbs have a _at suffix variant that accepts strings and character vectors without fuss:
group_by_at(mtcars, c("cyl", "disp"))
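Applied to the original example (a sketch; mpg ships with ggplot2), the asker's whole pipeline becomes a one-liner:
library(ggplot2)
mpg %>% group_by_at(c("cyl", "hwy")) %>% tally()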
I tried to search for duplicates but could not find a similar question (one comes close, but it is somewhat different from my requirement).
My question is whether we can use a string manipulation function such as substr or stringr::str_remove inside the .names argument of dplyr::across. As a reproducible example, consider this:
library(dplyr)
iris %>%
summarise(across(starts_with('Sepal'), mean, .names = '{.col}_mean'))
Sepal.Length_mean Sepal.Width_mean
1 5.843333 3.057333
Now my problem is that I want to modify the output column names, say with str_remove(.col, 'Sepal'), so that they are just Length_mean and Width_mean. I am asking because the description of this argument states:
.names
A glue specification that describes how to name the output columns. This can use {.col} to stand for the selected column name, and {.fn} to stand for the name of the function being applied. The default (NULL) is equivalent to "{.col}" for the single function case and "{.col}_{.fn}" for the case where a list is used for .fns.
I have tried many possibilities, including the following, but none of them work:
library(tidyverse)
library(glue)
iris %>%
summarise(across(starts_with('Sepal'), mean,
.names = glue('{xx}_mean', xx = str_remove(.col, 'Sepal'))))
Error: Problem with `summarise()` input `..1`.
x argument `str` should be a character vector (or an object coercible to)
i Input `..1` is `(function (.cols = everything(), .fns = NULL, ..., .names = NULL) ...`.
Run `rlang::last_error()` to see where the error occurred.
#OR
iris %>%
summarise(across(starts_with('Sepal'), mean,
.names = glue('{xx}_mean', xx = str_remove(glue('{.col}'), 'Sepal'))))
I know this can be solved by adding another step using rename_with, so I am not looking for that answer.
This works, but probably with a few caveats. You can use functions inside a glue specification, so you could clean up the strings that way. However, when I tried escaping the ".", I got an error, which I assume has something to do with how across parses the string. If you need something more dynamic, you might want to dig into the source code at that point.
In order to use the {.fn} helper, at least in conjunction with creating the glue string on the fly like this, the function needs a name; otherwise {.fn} resolves to the function's index in the .fns argument (see the comparison after the example below). I tested this out with a second function, using lst for automatic naming.
library(dplyr)
iris %>%
summarise(across(starts_with('Sepal'), .fns = lst(mean, max),
.names = '{stringr::str_remove(.col, "^[A-Za-z]+.")}_{.fn}'))
#> Length_mean Length_max Width_mean Width_max
#> 1 5.843333 7.9 3.057333 4.4
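For comparison, here is the same call with an unnamed list of functions, illustrating the fallback naming described above: {.fn} resolves to each function's position in .fns (output shown as I would expect it):
iris %>%
  summarise(across(starts_with('Sepal'), .fns = list(mean, max),
                   .names = '{stringr::str_remove(.col, "^[A-Za-z]+.")}_{.fn}'))
#>   Length_1 Length_2  Width_1 Width_2
#> 1 5.843333      7.9 3.057333     4.4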
I know that the %>% operator passes the LHS to the first argument of the RHS (so that xxx %>% fun() is equivalent to fun(xxx)), which allows us to "chain" functions together. But is there a way to generalize this operation so that I can pass the LHS to the nth argument of the RHS? I am using the R programming language.
You use the . to pass the LHS into the desired named argument on the right. If you want to replace 'hey' with 'ho' in 'hey ho' using gsub(pattern, replacement, text), you could do any of the following. Note that %>% does not pass the LHS into the first argument of the function, but into the first unnamed argument (see the third example below).
'hey ho' %>% gsub('hey','ho',.)
'hey ho' %>% gsub('hey','ho',text=.)
'hey ho' %>% gsub(pattern='hey',replacement='ho')
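One related magrittr detail, noted as a side point: when the dot appears only nested inside another call, the LHS is still inserted as the first argument. Wrapping the RHS in braces suppresses that insertion:
'hey ho' %>% paste(toupper(.))     # dot only nested: LHS still inserted first -> "hey ho HEY HO"
'hey ho' %>% { paste(toupper(.)) } # braces suppress the insertion -> "HEY HO"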
I am trying to write a dplyr filter where a column is LIKE certain patterns. I can do this with sqldf:
Test <- sqldf("select * from database
Where SOURCE LIKE '%ALPHA%'
OR SOURCE LIKE '%BETA%'
OR SOURCE LIKE '%GAMMA%'")
I tried to use the following, which doesn't return any results:
database %>% dplyr::filter(SOURCE %like% c('%ALPHA%', '%BETA%', '%GAMMA%'))
Thanks
You can use grepl with the pattern ALPHA|BETA|GAMMA, which will match if any of the three strings is contained in the SOURCE column:
database %>% filter(grepl('ALPHA|BETA|GAMMA', SOURCE))
If you want it to be case-insensitive, add ignore.case = TRUE to grepl, as shown below.
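A sketch against the same hypothetical database table:
database %>% filter(grepl('ALPHA|BETA|GAMMA', SOURCE, ignore.case = TRUE))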
%like% is from the data.table package. You're probably also seeing this warning message:
Warning message:
In grepl(pattern, vector) :
argument 'pattern' has length > 1 and only the first element will be used
The %like% operator is just a wrapper around the grepl function, which does string matching using regular expressions. So the % wildcards aren't necessary; in fact, they are treated as literal percent signs.
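A quick illustration of the literal-% behaviour (assuming data.table is attached for %like%):
library(data.table)
"ALPHA news" %like% "%ALPHA%"  # FALSE: the % signs are matched literally
"ALPHA news" %like% "ALPHA"    # TRUE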
You can only supply one pattern to match at a time, so either combine them using the regex 'ALPHA|BETA|GAMMA' (as Psidom suggests) or break the tests into three statements:
database %>%
dplyr::filter(
SOURCE %like% 'ALPHA' |
SOURCE %like% 'BETA' |
SOURCE %like% 'GAMMA'
)
Building on Psidom's and Nathan Werth's responses, for a tidyverse-friendly and concise method, we can do:
library(data.table); library(tidyverse)
database %>%
  dplyr::filter(SOURCE %ilike% "ALPHA|BETA|GAMMA")  # %ilike% = case-insensitive matching
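Since database isn't shown in the question, here is a small self-contained check, with toy data standing in for it and the output as I would expect it:
database <- tibble(SOURCE = c("alpha news", "Beta Corp", "delta", "GAMMA ray"))
database %>% dplyr::filter(SOURCE %ilike% "ALPHA|BETA|GAMMA")
#> # A tibble: 3 x 1
#>   SOURCE
#>   <chr>
#> 1 alpha news
#> 2 Beta Corp
#> 3 GAMMA ray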
I am quite new to R.
Using the table SE_CSVLinelist_clean, I want to extract the rows where the variable where_case_travelled_1 DOES NOT contain the strings "Outside Canada" OR "Outside province/territory of residence but within Canada", then create a new table called SE_CSVLinelist_filtered.
SE_CSVLinelist_filtered <- filter(SE_CSVLinelist_clean,
where_case_travelled_1 %in% -c('Outside Canada','Outside province/territory of residence but within Canada'))
The code above works when I just use "c" and not "-c".
So, how do I specify the above when I really want to exclude the rows that contain those 'outside' values?
Note that %in% returns a logical vector of TRUE and FALSE. To negate it, you can use ! in front of the logical statement:
SE_CSVLinelist_filtered <- filter(SE_CSVLinelist_clean,
!where_case_travelled_1 %in%
c('Outside Canada','Outside province/territory of residence but within Canada'))
Regarding your original approach with -c(...), - is a unary operator that "performs arithmetic on numeric or complex vectors (or objects which can be coerced to them)" (from help("-")). Since you are dealing with a character vector that cannot be coerced to numeric or complex, you cannot use -.
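You can see this directly by applying the unary minus to the character vector on its own; it fails with an "invalid argument to unary operator" error:
-c('Outside Canada', 'Outside province/territory of residence but within Canada')
#> Error: invalid argument to unary operator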
Try wrapping the search condition in parentheses, as shown below. The parenthesized expression returns the logical result of the %in% query; comparing that result to FALSE then keeps only the rows whose value does not belong to any of the options in the vector.
SE_CSVLinelist_filtered <- filter(SE_CSVLinelist_clean,
(where_case_travelled_1 %in% c('Outside Canada','Outside province/territory of residence but within Canada')) == FALSE)
Just be careful with the previous solutions, since they require you to type out EXACTLY the string you are trying to detect.
Ask yourself if the word "Outside", for example, is sufficient. If so, then:
data_filtered <- data %>%
  filter(!str_detect(where_case_travelled_1, "Outside"))  # str_detect() is from stringr
A reprex version with iris:
iris %>%
  filter(!str_detect(Species, "versicolor"))
Quick fix. First define the opposite of %in%:
'%ni%' <- Negate("%in%")
Then apply:
SE_CSVLinelist_filtered <- filter(
SE_CSVLinelist_clean,
where_case_travelled_1 %ni% c('Outside Canada',
'Outside province/territory of residence but within Canada'))
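A quick sanity check of the new operator on a small vector:
c("a", "b", "c") %ni% c("a", "c")
#> [1] FALSE  TRUE FALSE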
I can't figure out how to use the SE dplyr functions with syntactically invalid variable names, for example selecting a variable with a space in it.
Example:
df <- dplyr::data_frame(`a b` = 1)
myvar <- "a b"
If I want to select the a b variable, I can do it with dplyr::select(df, `a b`), but how do I do that with select_?
I suppose I just need to find a function that "wraps" a string in backticks, so that I can call dplyr::select_(df, backtick(myvar))
As MyFlick said in the comments, this behaviour should generally be avoided, but if you want to make it work, you can write your own backtick wrapper:
backtick <- function(x) paste0("`", x, "`")
dplyr::select_(df, backtick(myvar))
EDIT: Hadley replied to my tweets about this and showed me that simply using as.name will work for this instead of using backticks:
df <- dplyr::data_frame(`a b` = 1)
myvar <- "a b"
dplyr::select_(df, as.name(myvar))
My solution was to exploit the ability of select to use column positions. The as.name solution did not appear to work for some of my columns.
select(df, which(names(df) %in% myvar))
or even more succinctly if already in a pipe:
df %>%
select(which(names(.) %in% myvar))
Although this uses select, in my view it does not rely on NSE.
Note that if there are no matches, all columns will be dropped with no error or warning.
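A minimal sketch of that edge case, reusing the df from above (the column name here is made up): which() returns integer(0) when nothing matches, and select() then silently keeps zero columns.
df %>% select(which(names(.) %in% "no_such_column"))
#> # A tibble: 1 x 0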