I have seen the use of %>% (percent greater than percent) function in some packages like dplyr and rvest. What does it mean? Is it a way to write closure blocks in R?
%...% operators
%>% has no built-in meaning but the user (or a package) is free to define operators of the form %whatever% in any way they like. For example, this function will return a string consisting of its left argument followed by a comma and space and then its right argument.
"%,%" <- function(x, y) paste0(x, ", ", y)
# test run
"Hello" %,% "World"
## [1] "Hello, World"
The base of R provides %*% (matrix multiplication), %/% (integer division), %in% (is the left operand a component of the right one?), %o% (outer product) and %x% (Kronecker product). The modulo operator %% looks similar but is a built-in operator in its own right rather than a user-defined %...% operator.
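These base operators can be tried directly at the prompt:

```r
# Base R's built-in %...% operators in action
5 %% 3             # modulo: 2
5 %/% 3            # integer division: 1
2 %in% c(1, 2, 3)  # membership test: TRUE
1:2 %o% 1:3        # outer product: a 2 x 3 matrix
```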
expm The R package, expm, defines a matrix power operator %^%. For an example see Matrix power in R .
operators The operators R package defines a large number of such operators, such as %!in% (for not %in%). See http://cran.r-project.org/web/packages/operators/operators.pdf
igraph This package defines %--%, %->% and %<-% to select edges.
lubridate This package defines %m+% and %m-% to add and subtract months and %--% to define an interval. igraph also defines %--%.
Pipes
magrittr In the case of %>%, the magrittr R package defines it as discussed in the magrittr vignette. See http://cran.r-project.org/web/packages/magrittr/vignettes/magrittr.html
magrittr also defines a number of other such operators. See the Additional Pipe Operators section of the prior link, which discusses %T>%, %<>% and %$%, and http://cran.r-project.org/web/packages/magrittr/magrittr.pdf for even more details.
dplyr The dplyr R package used to define a %.% operator which is similar; however, it has been deprecated and dplyr now recommends that users use %>%, which dplyr imports from magrittr and makes available to the dplyr user. As David Arenburg has mentioned in the comments, this SO question discusses the differences between it and magrittr's %>%: Differences between %.% (dplyr) and %>% (magrittr)
pipeR The R package, pipeR, defines a %>>% operator that is similar to magrittr's %>% and can be used as an alternative to it. See http://renkun.me/pipeR-tutorial/
The pipeR package also has defined a number of other such operators too. See: http://cran.r-project.org/web/packages/pipeR/pipeR.pdf
postlogic The postlogic package defined %if% and %unless% operators.
wrapr The R package, wrapr, defines a dot pipe %.>% that is an explicit version of %>% in that it does not do implicit insertion of arguments but only substitutes explicit uses of dot on the right hand side. This can be considered as another alternative to %>%. See https://winvector.github.io/wrapr/articles/dot_pipe.html
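For intuition, here is a toy explicit dot-pipe in base R (a sketch of the idea only, not wrapr's actual implementation): it evaluates the right-hand expression with . bound to the left-hand value.

```r
# Toy explicit dot-pipe: the right-hand expression is captured unevaluated
# and then evaluated with `.` bound to the left-hand value.
`%.>%` <- function(lhs, rhs) {
  eval(substitute(rhs), envir = list(. = lhs), enclos = parent.frame())
}

4 %.>% sqrt(.)                # 2
1:8 %.>% sum(.) %.>% sqrt(.)  # 6
```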
Bizarro pipe. This is not really a pipe but rather some clever base syntax to work in a way similar to pipes without actually using pipes. It is discussed in http://www.win-vector.com/blog/2017/01/using-the-bizarro-pipe-to-debug-magrittr-pipelines-in-r/ The idea is that instead of writing:
1:8 %>% sum %>% sqrt
## [1] 6
one writes the following. In this case we explicitly use dot rather than eliding the dot argument, and end each component of the pipeline with an assignment to the variable whose name is dot (.), followed by a semicolon.
1:8 ->.; sum(.) ->.; sqrt(.)
## [1] 6
Update Added info on expm package and simplified example at top. Added postlogic package.
Update 2 R 4.1.0 introduced a native |> pipe. Unlike magrittr's %>%, it can only substitute into the first argument of the right-hand side. Although limited, it works via syntax transformation so it has no performance impact.
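On R 4.1.0 or later the native pipe can be tried directly (note that it requires an explicit call, e.g. sum(), on the right-hand side):

```r
# Native pipe (R >= 4.1): the left-hand value becomes the first argument
sqrt(sum(1:8))          # nested form: 6
1:8 |> sum() |> sqrt()  # piped form, same result
```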
%>% is similar to a pipe in Unix. For example, in
a <- combined_data_set %>% group_by(Outlet_Identifier) %>% tally()
the output of combined_data_set will go into group_by and its output will go into tally, then the final output is assigned to a.
This gives you handy and easy way to use functions in series without creating variables and storing intermediate values.
My understanding after reading the link offered by G.Grothendieck is that %>% is an operator that pipes functions. This helps readability and productivity, as it's easier to follow the flow through these pipes than to read backwards when multiple functions are nested.
The R packages dplyr and sf import the operator %>% from the R package magrittr.
Help is available by using the following command:
?'%>%'
Of course the package must be loaded before by using e.g.
library(sf)
The documentation of the magrittr forward-pipe operator gives a good example:
When functions require only one argument, x %>% f is equivalent to f(x)
Another %...% operator is %<-%, a multiple-assignment operator (provided by the zeallot package). For example:
library(zeallot)  # provides %<-%
session <- function(){
  x <- 1
  y <- 2
  z <- y + x
  list(x, y, z)
}
c(var1, var2, result) %<-% session()
I don't know much about it, but I saw it used in a case study on the multivariate normal distribution in R during my college coursework.
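For intuition, a toy multi-assignment operator can be sketched in base R (this is illustrative only, not zeallot's actual implementation):

```r
# Toy multi-assignment: capture the unevaluated left-hand side c(a, b, ...),
# extract the names, and assign each element of the right-hand list to the
# corresponding name in the caller's environment.
`%<-%` <- function(lhs, rhs) {
  nms <- as.character(substitute(lhs))[-1]  # drop the leading `c`
  for (i in seq_along(nms)) {
    assign(nms[i], rhs[[i]], envir = parent.frame())
  }
  invisible(rhs)
}

c(x, y) %<-% list(10, 20)
# x is now 10, y is now 20
```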
Suppose you have a data frame in a variable called "df_gather" and you want to pipe it into a ggplot; you can use %>% for that.
E.g.:
df_gather %>% ggplot(aes(x = Value, fill = Variable, color = Variable)) +
  geom_density(alpha = 0.3) + ggtitle('Distribution of X')
Related
My dataframe includes options data. I want to find the closest to the money option for every trading date. Unfortunately
ir_OEX_data %>% group_by(quotedate) %>% which.min(abs(moneyness_call - 1))
leads to the following error:
Error in which.min(., abs(ir_OEX_data$moneyness_call - 1)) :
unused argument (abs(ir_OEX_data$moneyness_call - 1))
But when I run solely:
which.min(abs(ir_OEX_data$moneyness_call - 1))
The command works perfectly fine.
What is my mistake here?
Problem: not all functions are pipe-friendly
{magrittr} pipes work best with functions written to be "pipe-friendly." These generally take a dataframe as a first argument, and may use data masking to let you refer to columns within that dataframe without prefixing them. e.g., many {dplyr} verbs are pipe-friendly.
which.min isn't pipe-friendly. Your code,
ir_OEX_data %>% group_by(quotedate) %>% which.min(abs(moneyness_call - 1))
is actually equivalent to
which.min(
group_by(ir_OEX_data, quotedate),
abs(moneyness_call - 1)
)
but which.min expects only one argument, so throws an error.
Solution 1: the exposition pipe (%$%)
There are a few ways to deal with this. One is the {magrittr} exposition pipe, %$%, which makes your column names available to the next function without passing the data:
library(magrittr)
library(dplyr)
ir_OEX_data %>%
group_by(quotedate) %$%
which.min(abs(moneyness_call - 1))
Solution 2: use inside a pipe-friendly function
If you wanted to add the result of which.min to your dataset, you'd just need to use it inside summarize or mutate:
ir_OEX_data %>%
group_by(quotedate) %>%
summarize(call_which_min = which.min(abs(moneyness_call - 1)))
Solution 3: write a pipe-friendly wrapper
You can also put a non-friendly function in a pipe-friendly wrapper. This would probably be overkill here, but can be useful in more complex cases.
which_min_pipe <- function(.data, x) {
.data %>% summarize(out = which.min({{ x }})) %>% pull(out)
}
ir_OEX_data %>%
group_by(quotedate) %>%
which_min_pipe(abs(moneyness_call - 1))
I recently posted two questions (1, 2) related to functions I was trying to write. I received useful answers to each, which resulted in the following two functions:
second_table <- function(dat, variable1, variable2){
dat %>%
tabyl({{variable1}}, {{variable2}}, show_na = FALSE) %>%
adorn_percentages("row") %>%
adorn_pct_formatting(digits = 1) %>%
adorn_ns()
}
And
second_table2 = function(dat, variable1, variable2){
variable1 <- sym(variable1)
dat %>%
tabyl(!!variable1, {{variable2}}, show_na = FALSE) %>%
adorn_percentages("row") %>%
adorn_pct_formatting(digits = 1) %>%
adorn_ns()
}
These functions work as intended, but I had never used the rlang package before and am still confused about the difference between the {{}} operator and !! + sym(), even after looking through the available documentation and writing some additional functions. I don't like to use code that I don't fully understand, and I'm sure I will have further use for these rlang operators in the future, so I would greatly appreciate a plain-language explanation of the difference between these operators.
R has a particular feature called non-standard evaluation (NSE), where expressions are used as-is instead of being evaluated. Most people first encounter NSE when they load packages:
a <- "rlang"
print(a) # Standard evaluation - the expression a is evaluated to its value
# [1] "rlang"
library(a) # Non-standard evaluation - the expression a is used as-is
# Error in library(a) : there is no package called ‘a’
rlang enables sophisticated NSE by providing three main functions to capture unevaluated symbols and expressions:
sym("x") captures a symbol (i.e., variable name, column name, etc.). Older versions allowed for sym(x), but I think the latest version of rlang forces the input to be a string.
expr(a + b) captures arbitrary expressions
quo(a + b) captures arbitrary expressions AND the environment where these expressions were defined.
The difference between expressions and quosures is that evaluating the former will be done in the immediate environment, while the latter is always evaluated in the environment where the expression was captured:
f <- function(e) {a <- 2; b <- 3; eval_tidy(e)}
a <- 5; b <- 10
f(expr(a+b)) # Evaluated inside f
# [1] 5
f(quo(a+b)) # Evaluated in the environment where it is captured
# [1] 15
All three verbs have en-equivalents: ensym, enexpr and enquo. These are used to capture symbols and expressions provided to a function from within that function. This is useful when you want to remove the need for a user of the function to use sym, etc. themselves:
f <- function(x) {enexpr(x)} # Expression captured within a function
f(a+b)
# This has exact equivalence to
f <- function(x) {x}
f(expr(a+b)) # The user has to do the capture themselves
In all cases, the operator !! evaluates symbols and expressions. Think of it as eval() on steroids, because !! forces immediate evaluation that takes precedence over everything else. Among other things, this can be useful for iterative construction of more complicated expressions:
a <- expr(b + 2)
expr(d * !!a) # a is evaluated immediately
# d * (b + 2)
expr(d * eval(a)) # evaluation of a is delayed
# d * eval(a)
With all that said, {{x}} is shorthand notation for !!enquo(x)
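For readers who want the base-R intuition behind this capture-and-evaluate machinery, substitute() and eval() provide a rough analogue (a sketch only; unlike rlang's quosures, this does not track the capture environment):

```r
# Capture the caller's unevaluated argument with substitute(), then
# evaluate it with the data frame's columns in scope -- a crude version
# of what enquo() + !! (i.e. {{ }}) enable.
pick <- function(data, col) {
  e <- substitute(col)  # capture the expression, e.g. a + b
  eval(e, envir = data) # evaluate it against the data's columns
}

df <- data.frame(a = 1:3, b = 4:6)
pick(df, a + b)  # 5 7 9
```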
Let's say I want to add 1 to every value of a column using dplyr and standard evaluation.
I can do :
library(dplyr)
data <- head(iris)
var <- "Sepal.Length"
mutate(data, !!rlang::sym(var) := !!quo(`+`(!!rlang::sym(var), 1)))
But what if I would like to use + as binary operator and not as function ?
I can't figure out how to write the + with a symbol in a quosure.
In most of my attempts I got an error for trying to use a non-numeric argument (the symbol for example) with the binary operator +.
With the deprecated mutate_ you can use lazyeval::interp which allowed you to do it easily :
mutate_(data, .dots = setNames(list(lazyeval::interp(~var + 1, var = as.symbol(var))), var))
Any help would be appreciated. Thanks.
You can just use
mutate(data, !!rlang::sym(var) := (!!rlang::sym(var)) + 1)
Note the parenthesis around the bang-bang part. This is only necessary because you are probably using an older version of rlang. In older versions (<0.2) the !! has a very low precedence so the addition happens before the expansion. Starting with rlang 0.2 the !! has been given a different operator precedence and works more how you might expect.
Of course if you are applying the same transformation to a bunch of columns, you might want to use the mutate_at, mutate_all, or mutate_if variants, which also allow the transformations to be specified with the formula syntax.
mutate_if(data, is.numeric, ~.x+1)
mutate_all(data, ~.x+1)
mutate_at(data, var, ~.x+1)
I'm trying to make a function that subsets and mutates data with dplyr commands. My fake data is like this:
newTest_rv <- data.frame(is_op=c(rep(0,6),rep(1,4)),
has_click=c(0,0,1,1,1,1,0,0,1,1),
num_pimp=c(3,5,1,2,3,5,2,5,3,5),
freq = c(rep(1,5),5,1,2,1,2))
And my function is like this:
reweight <- function(data, conds){
require(dplyr)
require(lazyeval)
data %>%
filter_(lazy(conds)) %>%
group_by(num_pimp) %>%
mutate_(lazy(new_num) = lazy(num_pimp) - lazy(sum(freq[lazy(!conds)]))) %>%
mutate(new_weight=freq*(1/new_num)) %>%
ungroup()
}
> reweight(newTest_rv, is_op==0)
The non-standard evaluation with the conditional statement "is_op==0" seems to work in other places but not in the subset within a group "lazy(sum(freq[lazy(!conds)]))". Is there any way I can circumvent this problem?
Thank you!
It looks like you went a bit overboard with the lazy() calls. The lazy() function creates a lazy object, which basically delays evaluation of an expression. You can't just compose standard expressions and lazy expressions; generally you combine them via lazyeval's interp() function. I think what you want is
mutate_(new_num = interp(~num_pimp - sum(freq[!(x)]), x=lazy(conds)))
Here we use interp() to take a standard expression (in this case one that uses the formula syntax) and insert the lazy expression as a subsetting vector.
This question already has answers here:
Is it possible to get F#'s function application "|>" operator in R? [duplicate]
(2 answers)
Closed 7 years ago.
How can you implement F#'s forward pipe operator in R? The operator makes it possible to easily chain a sequence of calculations. For example, when you have an input data and want to call functions foo and bar in sequence, you can write:
data |> foo |> bar
Instead of writing bar(foo(data)). The benefits are that you avoid some parentheses and the computations are written in the same order in which they are executed (left-to-right). In F#, the operator is defined as follows:
let (|>) a f = f a
It would appear that %...% can be used for binary operators, but how would this work?
I don't know how well it would hold up to any real use, but this seems (?) to do what you want, at least for single-argument functions ...
> "%>%" <- function(x,f) do.call(f,list(x))
> pi %>% sin
[1] 1.224606e-16
> pi %>% sin %>% cos
[1] 1
> cos(sin(pi))
[1] 1
For what it's worth, as of now (3 December 2021), in addition to the magrittr/tidyverse pipe (%>%), there is also a native pipe |> in R (and an experimental => operator that can be enabled in the development version): see here, for example.
Edit: package now on CRAN. Example included.
The magrittr package is made for this.
install.packages("magrittr")
Example:
iris %>%
subset(Sepal.Length > 5) %>%
aggregate(. ~ Species, ., mean)
Also, see the vignette: http://cran.r-project.org/web/packages/magrittr/vignettes/magrittr.html
It has quite a few useful features if you like the F# pipe, and who doesn't?!
The problem is that you are talking about entirely different paradigms of calling functions so it's not really clear what you want. R only uses what in F# would be tuple arguments (named in R), so one way to think of it is trivially
fp = function(x, f) f(x)
which will perform the call so for example
> fp(4, print)
[1] 4
This is equivalent, but won't work in a non-tuple case like 4 |> f x y because there is no such thing in R. You could try to emulate F#'s functional behavior, but it would be awkward:
fp = function(x, f, ...) function(...) f(x, ...)
That will be always functional and thus chaining will work so for example
> tri = function(x, y, z) paste(x,y,z)
> fp("foo", fp("mar", tri))("bar")
[1] "mar foo bar"
but since R doesn't convert incomplete calls into functions it's not really useful. Instead, R has much more flexible calling based on the tuple concept. Note that R uses a mixture of functional and imperative paradigms; it is not purely functional, so it doesn't perform argument value matching etc.
Edit: since you changed the question in that you are interested in syntax and only a special case, just replace fp above with the infix notation:
`%>%` = function(x, f) f(x)
> 1:10 %>% range %>% mean
[1] 5.5
(Using Ben's operator ;))