Is there a way to output the result of a pipeline at each step without doing it manually (e.g. without selecting and running only the selected chunks)?
I often find myself running a pipeline line by line to remember what it was doing, or when I am developing some analysis.
For example:
library(dplyr)
mtcars %>%
group_by(cyl) %>%
sample_frac(0.1) %>%
summarise(res = mean(mpg))
# Source: local data frame [3 x 2]
#
# cyl res
# 1 4 33.9
# 2 6 18.1
# 3 8 18.7
I'd like to select and run:
mtcars %>% group_by(cyl)
and then...
mtcars %>% group_by(cyl) %>% sample_frac(0.1)
and so on...
But selecting chunks and pressing CMD/CTRL+ENTER in RStudio leaves something to be desired in terms of efficiency.
Can this be done in code?
Is there a function that takes a pipeline, runs/digests it step by step, shows the output of each step in the console, and lets you continue by pressing Enter, like demo(...) or example(...) do for package guides?
You can select which results to print by using the tee operator (%T>%) together with print(). The tee operator forwards its left-hand side unchanged, so it is used exclusively for side effects like printing. Note that %T>% comes from magrittr itself and is not re-exported by dplyr, so attach magrittr explicitly:
library(magrittr)
# e.g.
mtcars %>%
group_by(cyl) %T>% print() %>%
sample_frac(0.1) %T>% print() %>%
summarise(res = mean(mpg))
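The same tee pattern works with other inspection functions. For instance, a sketch (with dplyr and magrittr attached as above) that uses glimpse() instead of print() to show column types and a compact preview at each step:
mtcars %>%
  group_by(cyl) %T>% glimpse() %>%
  sample_frac(0.1) %T>% glimpse() %>%
  summarise(res = mean(mpg))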
This is easy with a magrittr functional sequence. For example, define a function my_chain with:
foo <- function(x) x + 1
bar <- function(x) x + 1
baz <- function(x) x + 1
my_chain <- . %>% foo %>% bar %>% baz
and get the final result of a chain as:
> my_chain(0)
[1] 3
You can get the list of component functions with magrittr::functions(my_chain)
and define a "stepper" function like this:
stepper <- function(fun_chain, x, FUN = print) {
f_list <- functions(fun_chain)
for(i in seq_along(f_list)) {
x <- f_list[[i]](x)
FUN(x)
}
invisible(x)
}
And run the chain with print() interposed after each step:
stepper(my_chain, 0, print)
# [1] 1
# [1] 2
# [1] 3
Or pause for user input after each step:
stepper(my_chain, 0, function(x) {print(x); readline()})
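The same trick works on the original dplyr pipeline, because a pipe that starts with . is itself a magrittr functional sequence. A sketch, assuming dplyr and magrittr are attached and stepper() is defined as above:
library(dplyr)
library(magrittr)

# define the pipeline as a functional sequence instead of running it directly
mtcars_chain <- . %>%
  group_by(cyl) %>%
  sample_frac(0.1) %>%
  summarise(res = mean(mpg))

# step through it, printing each intermediate result
stepper(mtcars_chain, mtcars, print)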
Add print:
mtcars %>%
group_by(cyl) %>%
print %>%
sample_frac(0.1) %>%
print %>%
summarise(res = mean(mpg))
IMHO magrittr is mostly useful interactively, that is, when I am exploring data or building a new formula/model.
In these cases, storing intermediate results in distinct variables is very time-consuming and distracting, while pipes let me focus on the data rather than on typing:
x %>% foo
## reason on results and
x %>% foo %>% bar
## reason on results and
x %>% foo %>% bar %>% baz
## etc.
The problem here is that I don't know in advance what the final pipe will be, as in @bergant's answer.
Typing, as in @zx8754's answer,
x %>% print %>% foo %>% print %>% bar %>% print %>% baz
adds too much overhead and, to me, defeats the whole purpose of magrittr.
Essentially magrittr lacks a simple operator that both prints and pipes results.
The good news is that it seems quite easy to craft one:
`%P>%`=function(lhs, rhs){ print(lhs); lhs %>% rhs }
Now you can print and pipe:
1:4 %P>% sqrt %P>% sum
## [1] 1 2 3 4
## [1] 1.000000 1.414214 1.732051 2.000000
## [1] 6.146264
I found that if one defines key bindings for %P>% and %>%, the prototyping workflow becomes very streamlined (see Emacs ESS or RStudio).
I wrote the package pipes, which can do several things that might help:
use %P>% to print the output.
use %ae>% to use all.equal on input and output.
use %V>% to use View on the output, it will open a viewer for each relevant step.
If you want to see some aggregated info you can try %summary>%, %glimpse>% or %skim>%, which use summary, tibble::glimpse or skimr::skim respectively, or you can define your own pipe to show specific changes using new_pipe.
# devtools::install_github("moodymudskipper/pipes")
library(dplyr)
library(pipes)
res <- mtcars %P>%
group_by(cyl) %P>%
sample_frac(0.1) %P>%
summarise(res = mean(mpg))
#> group_by(., cyl)
#> # A tibble: 32 x 11
#> # Groups: cyl [3]
#> mpg cyl disp hp drat wt qsec vs am gear carb
#> * <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 21 6 160 110 3.9 2.62 16.5 0 1 4 4
#> 2 21 6 160 110 3.9 2.88 17.0 0 1 4 4
#> 3 22.8 4 108 93 3.85 2.32 18.6 1 1 4 1
#> 4 21.4 6 258 110 3.08 3.22 19.4 1 0 3 1
#> 5 18.7 8 360 175 3.15 3.44 17.0 0 0 3 2
#> 6 18.1 6 225 105 2.76 3.46 20.2 1 0 3 1
#> 7 14.3 8 360 245 3.21 3.57 15.8 0 0 3 4
#> 8 24.4 4 147. 62 3.69 3.19 20 1 0 4 2
#> 9 22.8 4 141. 95 3.92 3.15 22.9 1 0 4 2
#> 10 19.2 6 168. 123 3.92 3.44 18.3 1 0 4 4
#> # ... with 22 more rows
#> sample_frac(., 0.1)
#> # A tibble: 3 x 11
#> # Groups: cyl [3]
#> mpg cyl disp hp drat wt qsec vs am gear carb
#> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 26 4 120. 91 4.43 2.14 16.7 0 1 5 2
#> 2 17.8 6 168. 123 3.92 3.44 18.9 1 0 4 4
#> 3 18.7 8 360 175 3.15 3.44 17.0 0 0 3 2
#> summarise(., res = mean(mpg))
#> # A tibble: 3 x 2
#> cyl res
#> <dbl> <dbl>
#> 1 4 26
#> 2 6 17.8
#> 3 8 18.7
res <- mtcars %ae>%
group_by(cyl) %ae>%
sample_frac(0.1) %ae>%
summarise(res = mean(mpg))
#> group_by(., cyl)
#> [1] "Attributes: < Names: 1 string mismatch >"
#> [2] "Attributes: < Length mismatch: comparison on first 2 components >"
#> [3] "Attributes: < Component \"class\": Lengths (1, 4) differ (string compare on first 1) >"
#> [4] "Attributes: < Component \"class\": 1 string mismatch >"
#> [5] "Attributes: < Component 2: Modes: character, list >"
#> [6] "Attributes: < Component 2: Lengths: 32, 2 >"
#> [7] "Attributes: < Component 2: names for current but not for target >"
#> [8] "Attributes: < Component 2: Attributes: < target is NULL, current is list > >"
#> [9] "Attributes: < Component 2: target is character, current is tbl_df >"
#> sample_frac(., 0.1)
#> [1] "Different number of rows"
#> summarise(., res = mean(mpg))
#> [1] "Cols in y but not x: `res`. "
#> [2] "Cols in x but not y: `qsec`, `wt`, `drat`, `hp`, `disp`, `mpg`, `carb`, `gear`, `am`, `vs`. "
res <- mtcars %V>%
group_by(cyl) %V>%
sample_frac(0.1) %V>%
summarise(res = mean(mpg))
# you'll have to test this one by yourself
I am trying to calculate an indicator value per group in a data frame, where the indicator for each group is the sum of one column divided by the sum of another column within that group. I want to pass the column names as the numerator and denominator arguments. I have tried the following code to no avail.
library(tidyverse)
a = c(1,1,1,2,2)
b = 1:5
c = 6:10
d = 9:13
dummy_data = tibble(
a,b,c,d
)
calc_indicator = function(numerator,denominator){
data = dummy_data %>%
group_by(a) %>%
mutate(
indicator_value = sum({{numerator}})/sum({{denominator}})
)
data
}
calc_indicator("b","d")
#> Error in `mutate()`:
#> ! Problem while computing `indicator_value = sum("b")/sum("d")`.
#> ℹ The error occurred in group 1: a = 1.
#> Caused by error in `sum()`:
#> ! invalid 'type' (character) of argument
Created on 2022-10-17 by the reprex package (v2.0.1)
I realize that this code runs if I do not quote the arguments submitted to the function (i.e. calc_indicator(b, d) rather than calc_indicator("b", "d")). However, the numerators and denominators for different indicators are defined in an Excel file, so they arrive in the R environment as strings.
Any suggestions?
As per the Programming with dplyr vignette, {{ is used for unquoted column names; for a string or a character vector of column names stored in an object you should use .data[[col]] instead, e.g.:
calc_indicator = function(numerator,denominator){
data = dummy_data %>%
group_by(a) %>%
mutate(
indicator_value = sum(.data[[numerator]])/sum(.data[[denominator]])
)
data
}
calc_indicator("b","d")
I'd also recommend passing the data frame to the function as an argument. Functions that rely on having (in this case) a data frame named dummy_data in your global environment are much less flexible.
Right now, your function will only work if you have a data frame named dummy_data, and it will only work on a data frame with that name. If you rewrite the function to take a data argument, then you can use it on any data frame:
calc_indicator = function(data, group, numerator, denominator){
data %>%
group_by(.data[[group]]) %>%
mutate(
indicator_value = sum(.data[[numerator]])/sum(.data[[denominator]])
)
}
## you can still use it on your dummy data
calc_indicator(dummy_data, "a", "b", "c")
## you can use it on other data too
calc_indicator(mtcars, "cyl", "hp", "wt")
# # A tibble: 32 × 12
# # Groups: cyl [3]
# mpg cyl disp hp drat wt qsec vs am gear carb indicator_value
# <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
# 1 21 6 160 110 3.9 2.62 16.5 0 1 4 4 39.2
# 2 21 6 160 110 3.9 2.88 17.0 0 1 4 4 39.2
# 3 22.8 4 108 93 3.85 2.32 18.6 1 1 4 1 36.2
# 4 21.4 6 258 110 3.08 3.22 19.4 1 0 3 1 39.2
# 5 18.7 8 360 175 3.15 3.44 17.0 0 0 3 2 52.3
# ...
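Since the column names arrive as strings from a spreadsheet, the string-based function can then be mapped over a table of indicator definitions. This is only a sketch: indicator_defs is a hypothetical data frame standing in for whatever readxl::read_excel() returns, with one row per indicator.
library(purrr)

# hypothetical table of indicator definitions (in practice read from the Excel file)
indicator_defs <- tibble::tibble(
  numerator   = c("b", "c"),
  denominator = c("d", "d")
)

# one result per indicator definition
results <- map2(
  indicator_defs$numerator,
  indicator_defs$denominator,
  function(num, den) calc_indicator(dummy_data, "a", num, den)
)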
I want to create a function that can pass multiple different arguments to sets of parameters in R user-defined functions.
I am using dplyr to create functions that work with the tidyverse ecosystem.
For example:
library(dplyr)
# Create the function
myfunction <- function(.data, ..., ...) {
.action_vars <- enquos(...)
.group_vars <- enquos(...)
.data %>%
group_by(!!!.group_vars) %>%
other_function(!!!.action_vars, parameter_x = "other_argument")
}
# Apply the function
result <- myfunction(MyData, Var1, Var2, Var3, Var4)
Following the example, let's say I want .action_vars to be Var1 and Var2, and .group_vars to be Var3 and Var4.
I know I cannot use the three-dot ellipsis twice in my function definition. I'd love to hear how you would solve this problem. I have looked everywhere but can't seem to find the answer.
Use across() as a selection bridge to take a selection of multiple variables in a single argument:
my_function <- function(data, group_vars, action_vars) {
data |>
# Use `across()` as a selection bridge within `group_by()`
group_by(across({{ group_vars }})) |>
# Pass selection directly to `select()`
select({{ action_vars }})
}
mtcars |>
my_function(c(cyl, am), disp:drat)
#> Adding missing grouping variables: `cyl`, `am`
#> # A tibble: 32 × 5
#> # Groups: cyl, am [6]
#> cyl am disp hp drat
#> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 6 1 160 110 3.9
#> 2 6 1 160 110 3.9
#> 3 4 1 108 93 3.85
#> 4 6 0 258 110 3.08
#> # … with 28 more rows
across() is also convenient for complex operations since you can pass a function to map over the selection:
my_function <- function(data, group_vars, action_vars) {
data |>
group_by(across({{ group_vars }})) |>
summarise(across({{ action_vars }}, \(x) mean(x, na.rm = TRUE)))
}
mtcars |>
my_function(c(cyl, am), disp:drat)
#> `summarise()` has grouped output by 'cyl'. You can override using the `.groups`
#> argument.
#> # A tibble: 6 × 5
#> # Groups: cyl [3]
#> cyl am disp hp drat
#> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 4 0 136. 84.7 3.77
#> 2 4 1 93.6 81.9 4.18
#> 3 6 0 205. 115. 3.42
#> 4 6 1 155 132. 3.81
#> 5 8 0 358. 194. 3.12
#> 6 8 1 326 300. 3.88
Learn more about this pattern in https://rlang.r-lib.org/reference/topic-data-mask-programming.html#bridge-patterns
I would do this using character vectors of column names:
# Create the function
myfunction <- function(.data, action_vars, group_vars) {
  .action_vars <- rlang::syms(action_vars)
  .group_vars <- rlang::syms(group_vars)
  .data %>%
    group_by(!!!.group_vars) %>%
    other_function(!!!.action_vars, parameter_x = "other_argument")
}
Then you can provide a character vector of column names using the standard c(), like so:
# Apply the function
result <- myfunction(MyData, c("Var1", "Var2"), c("Var3", "Var4"))
Note the use of rlang::syms() rather than enquos(): because the inputs are now text strings, they have to be converted to symbols before being spliced with !!!.
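To make this concrete with real verbs (other_function above is just a placeholder from the question), here is a sketch of the same pattern: the grouping strings are converted with syms() and spliced with !!!, while the action strings are passed to across() via all_of():
library(dplyr)
library(rlang)

summarise_by <- function(.data, action_vars, group_vars) {
  .group_vars <- syms(group_vars)

  .data %>%
    group_by(!!!.group_vars) %>%   # splice the grouping symbols
    summarise(across(all_of(action_vars), mean), .groups = "drop")
}

summarise_by(mtcars, c("disp", "hp"), c("cyl", "am"))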
The following question seems very basic in programming with data.table, so my apologies if it's a duplicate. I spent time researching but could not find an answer.
I want to create a "user-defined function" that wraps around a data.table wrangling procedure. In this procedure, a new column is created, and I want to let the user set the name of that new column.
Example
Consider the following code that works as-is. I want to wrap it inside a function.
library(data.table)
library(magrittr)
library(tibble)
mtcars %>%
as.data.table() %>%
.[, .(max_mpg = max(mpg)), by = cyl] %>%
as_tibble()
#> # A tibble: 3 x 2
#> cyl max_mpg
#> <dbl> <dbl>
#> 1 6 21.4
#> 2 4 33.9
#> 3 8 19.2
Created on 2021-10-13 by the reprex package (v0.3.0)
All I want my function to do is let the user set the name of new_colname_of_choice:
my_wrapper <- function(new_colname_of_choice) {
mtcars %>%
as.data.table() %>%
.[, .(new_colname_of_choice = max(mpg)), by = cyl] %>%
as_tibble()
}
my_wrapper(new_colname_of_choice = "my_lovely_colname")
#> # A tibble: 3 x 2
#> cyl new_colname_of_choice <---------- why this isn't called "my_lovely_colname"?
#> <dbl> <dbl>
#> 1 6 21.4
#> 2 4 33.9
#> 3 8 19.2
I've tried using curly braces which didn't work either (actually threw an error):
my_wrapper_2 <- function(new_colname_of_choice) {
mtcars %>%
as.data.table() %>%
.[, .({new_colname_of_choice} = max(mpg)), by = cyl] %>%
as_tibble()
}
Error: unexpected '=' in:
" as.data.table() %>%
.[, .({new_colname_of_choice} ="
This is surprising, because curly braces do provide the desired naming ability in a different (yet similar) kind of code:
my_wrapper_3 <- function(new_colname_of_choice) {
mtcars %>%
as.data.table() %>%
.[, {new_colname_of_choice} := max(mpg), by = cyl] %>%
as_tibble()
}
my_wrapper_3(new_colname_of_choice = "my_lovely_colname")
## # A tibble: 32 x 12
## mpg cyl disp hp drat wt qsec vs am gear carb my_lovely_colname <---- SUCCESS!
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 21 6 160 110 3.9 2.62 16.5 0 1 4 4 21.4
## 2 21 6 160 110 3.9 2.88 17.0 0 1 4 4 21.4
## 3 22.8 4 108 93 3.85 2.32 18.6 1 1 4 1 33.9
## 4 21.4 6 258 110 3.08 3.22 19.4 1 0 3 1 21.4
## 5 18.7 8 360 175 3.15 3.44 17.0 0 0 3 2 19.2
## 6 18.1 6 225 105 2.76 3.46 20.2 1 0 3 1 21.4
## 7 14.3 8 360 245 3.21 3.57 15.8 0 0 3 4 19.2
## 8 24.4 4 147. 62 3.69 3.19 20 1 0 4 2 33.9
## 9 22.8 4 141. 95 3.92 3.15 22.9 1 0 4 2 33.9
## 10 19.2 6 168. 123 3.92 3.44 18.3 1 0 4 4 21.4
## # ... with 22 more rows
Bottom line
My conclusion is that the = operator does not accept {...} on its LHS. How else can I pass a name (from an argument) to the LHS in the initial my_wrapper() example?
EDIT
I'd like to add the dplyr solution for the same problem, taken from the programming with dplyr vignette:
library(dplyr)
my_wrapper_dplyr <- function(new_colname_of_choice) {
mtcars %>%
group_by(cyl) %>%
summarise("{new_colname_of_choice}" := max(mpg))
}
my_wrapper_dplyr("another_lovely_colname")
Which is pretty robust and works in all naming situations I've encountered. Is there a built-in/canonical practice in data.table similar to {dplyr}'s?
With the upcoming data.table version 1.14.3, you'll be able to use the new env parameter:
A new interface for programming on data.table has been added, closing #2655 and many other linked issues. It is built using base R's substitute-like interface via a new env argument to [.data.table. For details see the new vignette programming on data.table, and the new ?substitute2 manual page. Thanks to numerous users for filing requests, and Jan Gorecki for implementing.
# install dev version
install.packages("https://github.com/Rdatatable/data.table/archive/master.tar.gz", repo = NULL, type = "source")
library(tibble)
library(data.table)
my_wrapper_new <- function(new_colname_of_choice) {
mtcars %>%
as.data.table() %>%
.[, .(new_colname_of_choice = max(mpg)), by = cyl,
env=list(new_colname_of_choice = new_colname_of_choice)] %>%
as_tibble()
}
my_wrapper_new('test')
# A tibble: 3 x 2
cyl test
<dbl> <dbl>
1 6 21.4
2 4 33.9
3 8 19.2
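The env list is not limited to a single substitution. A sketch (with the same development-version caveat) that parameterises both the new column name and the column being aggregated; new_col and agg_col are just placeholders that env replaces:
my_wrapper_new2 <- function(new_colname, colname_to_aggregate) {
  mtcars %>%
    as.data.table() %>%
    .[, .(new_col = max(agg_col)), by = cyl,
      env = list(new_col = new_colname, agg_col = colname_to_aggregate)] %>%
    as_tibble()
}

my_wrapper_new2("max_hp", "hp")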
One thing you can do is separate the creation of the column and the naming of the column like so:
my_wrapper <- function(new_colname_of_choice) {
mtcars %>%
as.data.table() %>%
.[, .(tempcol = max(mpg)), by = cyl] %>%
setnames(., "tempcol", new_colname_of_choice) %>%
as_tibble()
}
my_wrapper("my_lovely_colname")
Using this method you can use either .(tempcol = max(mpg)) or tempcol := max(mpg)
Using setNames from stats:
my_wrapper <- function(new_colname_of_choice) {
mtcars %>%
as.data.table() %>%
.[, setNames(list(max(mpg)), new_colname_of_choice), by = cyl] %>%
as_tibble()
}
my_wrapper(new_colname_of_choice = "my_lovely_colname")
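The same j trick extends to several computed columns at once, since j only needs to return a named list. A sketch:
my_wrapper_multi <- function(colnames_of_choice) {
  mtcars %>%
    as.data.table() %>%
    .[, setNames(list(max(mpg), min(mpg)), colnames_of_choice), by = cyl] %>%
    as_tibble()
}

my_wrapper_multi(c("max_mpg", "min_mpg"))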
I'm looking to create a function that accepts a list of (data frame) variables as one of its parameters. I've managed to get it working partially, but when I get to the group_by/count step, things fall apart. How can I do this?
## Works
f1 <- function(dfr, ..., split = NULL) {
dots <- rlang::enquos(...)
split <- rlang::enquos(split)
dfr %>%
select(!!!dots, !!!split) %>%
gather('type', 'score', -c(!!!split))
}
## does not work
f2 <- function(dfr, ..., split = NULL) {
dots <- rlang::enquos(...)
split <- rlang::enquos(split)
dfr %>%
select(!!!dots, !!!split) %>%
gather('type', 'score', -c(!!!split))
count(!!!split, type, score)
}
I would want to do things like
mtcars %>% f2(drat:qsec)
mtcars %>% f2(drat:qsec, split = gear)
mtcars %>% f2(drat:qsec, split = c(gear, carb)) ## ??
These calls all work with f1(), but with f2() none of them work; they all end up with Error in !split : invalid argument type. I'm not too surprised that f2(drat:qsec) doesn't (immediately) work without the split argument, but how do I get the second and third calls working?
The issue with the second function (the missing pipe notwithstanding) is that count() (or rather group_by(), which count() calls) doesn't support tidyselect syntax, so you can't pass it a list to be spliced the way you can with select(), gather(), etc. Instead, one option is to use group_by_at() and add_tally(). Here's a slightly modified version of the function:
library(dplyr)
f2 <- function(dfr, ..., split = NULL) {
dfr %>%
select(..., {{split}}) %>%
gather('type', 'score', -{{split}}) %>%
group_by_at(vars({{split}}, type, score)) %>% # could use `group_by_all()`
add_tally()
}
mtcars %>% f2(drat:qsec)
# A tibble: 96 x 3
# Groups: type, score [81]
type score n
<chr> <dbl> <int>
1 drat 3.9 2
2 drat 3.9 2
3 drat 3.85 1
4 drat 3.08 2
5 drat 3.15 2
6 drat 2.76 2
7 drat 3.21 1
8 drat 3.69 1
9 drat 3.92 3
10 drat 3.92 3
# ... with 86 more rows
mtcars %>% f2(drat:qsec, split = c(gear, carb))
# A tibble: 96 x 5
# Groups: gear, carb, type, score [89]
gear carb type score n
<dbl> <dbl> <chr> <dbl> <int>
1 4 4 drat 3.9 2
2 4 4 drat 3.9 2
3 4 1 drat 3.85 1
4 3 1 drat 3.08 1
5 3 2 drat 3.15 2
6 3 1 drat 2.76 1
7 3 4 drat 3.21 1
8 4 2 drat 3.69 1
9 4 2 drat 3.92 1
10 4 4 drat 3.92 2
# ... with 86 more rows
I'm trying to group_by multiple columns in my data frame. I can't write out every single column name in the group_by call, so I want to pass the column names as a vector, like so:
cols <- grep("[a-z]{3,}$", colnames(mtcars), value = TRUE)
mtcars %>% filter(disp < 160) %>% group_by(cols) %>% summarise(n = n())
This returns the error:
Error in mutate_impl(.data, dots) :
Column `mtcars[colnames(mtcars)[grep("[a-z]{3,}$", colnames(mtcars))]]` must be length 12 (the number of rows) or one, not 7
I definitely want to use a dplyr function to do this, but can't figure this one out.
Update
group_by_at() has been superseded; see https://dplyr.tidyverse.org/reference/group_by_all.html. Refer to Harrison Jones' answer for the current recommended approach.
Retaining the below approach for posterity
You can use group_by_at, where you can pass a character vector of column names as group variables:
mtcars %>%
filter(disp < 160) %>%
group_by_at(cols) %>%
summarise(n = n())
# A tibble: 12 x 8
# Groups: mpg, cyl, disp, drat, qsec, gear [?]
# mpg cyl disp drat qsec gear carb n
# <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <int>
# 1 19.7 6 145.0 3.62 15.50 5 6 1
# 2 21.4 4 121.0 4.11 18.60 4 2 1
# 3 21.5 4 120.1 3.70 20.01 3 1 1
# 4 22.8 4 108.0 3.85 18.61 4 1 1
# ...
Or you can move the column selection inside group_by_at using vars and column select helper functions:
mtcars %>%
filter(disp < 160) %>%
group_by_at(vars(matches('[a-z]{3,}$'))) %>%
summarise(n = n())
# A tibble: 12 x 8
# Groups: mpg, cyl, disp, drat, qsec, gear [?]
# mpg cyl disp drat qsec gear carb n
# <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <int>
# 1 19.7 6 145.0 3.62 15.50 5 6 1
# 2 21.4 4 121.0 4.11 18.60 4 2 1
# 3 21.5 4 120.1 3.70 20.01 3 1 1
# 4 22.8 4 108.0 3.85 18.61 4 1 1
# ...
I believe group_by_at() has now been superseded by a combination of group_by() and across(). summarise() also has an experimental .groups argument that lets you choose how to handle the grouping after you create a summarised object. Here is an alternative to consider:
cols <- grep("[a-z]{3,}$", colnames(mtcars), value = TRUE)
original <- mtcars %>%
filter(disp < 160) %>%
group_by_at(cols) %>%
summarise(n = n())
superseded <- mtcars %>%
filter(disp < 160) %>%
group_by(across(all_of(cols))) %>%
summarise(n = n(), .groups = 'drop_last')
all.equal(original, superseded)
Here is a blog post that goes into more detail about using the across function:
https://www.tidyverse.org/blog/2020/04/dplyr-1-0-0-colwise/
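Since the summary here is just a row count per group, count() with the same across(all_of(cols)) bridge gives an equivalent table more compactly (a sketch; the grouping of the result differs from the summarise() versions):
mtcars %>%
  filter(disp < 160) %>%
  count(across(all_of(cols)))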