crayon concat gives NULL - R

I am using
R version 3.4.4 (2018-03-15)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Linux Mint 18.3
and tidyverse_1.2.1. Using the %+% operator provided by the crayon package (loaded by tidyverse) gives NULL. Why? Is this a bug?
E.g. reproducing the example from the manual gives:
> "foo" %+% "bar" %>% print
NULL
instead of "foobar".

ggplot2 has its own version of %+%, which can mask the one from crayon. If I load ggplot2/tidyverse first and crayon afterwards, so that crayon's operator masks ggplot2's, I get the expected result:
> library(tidyverse)
-- Attaching packages ---------------------- tidyverse 1.2.1 --
v ggplot2 3.1.0 v purrr 0.2.5
v tibble 1.4.2 v dplyr 0.7.8
v tidyr 0.8.2 v stringr 1.3.1
v readr 1.2.1 v forcats 0.3.0
-- Conflicts ------------------------- tidyverse_conflicts() --
x dplyr::filter() masks stats::filter()
x dplyr::lag() masks stats::lag()
> library(crayon)
Attaching package: ‘crayon’
The following object is masked from ‘package:ggplot2’:
%+%
> "foo" %+% "bar" %>% print
[1] "foobar"

This is indeed just because both ggplot2 and crayon define a %+% operator. Which one is used then depends on the order in which the packages were loaded, making your code fragile.
To avoid the conflict altogether, you can give one of the operators an alias, for example (see this Stack Overflow post):
library(tidyverse)
`%+c%` <- crayon::`%+%`
"foo" %+% "bar" %>% print
#> NULL
"foo" %+c% "bar" %>% print
#> [1] "foobar"
Created on 2021-08-13 by the reprex package (v0.3.0)
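As an aside beyond the original answer, the conflicted package (assuming it fits your workflow) turns this kind of silent masking into an explicit, declared choice:
library(conflicted)
library(tidyverse)
library(crayon)
# Declare once which package should win for %+%:
conflict_prefer("%+%", "crayon")
"foo" %+% "bar"  # expected: "foobar"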

Related

Why does the forecast function from package fabletools (R) back-transform log(.) but not box_cox(., lambda)?

It was my impression that the forecast function from the R package fabletools automatically back-transforms forecasts: "If the response variable has been transformed in the model formula, the transformation will be automatically back-transformed". It does so for the log transform, but not for box_cox. Am I missing something obvious?
library(fpp3)
#> ── Attaching packages ──────────────────────────────────────────── fpp3 0.4.0 ──
#> ✓ tibble 3.1.6 ✓ tsibble 1.1.0
#> ✓ dplyr 1.0.7 ✓ tsibbledata 0.3.0
#> ✓ tidyr 1.1.4 ✓ feasts 0.2.2
#> ✓ lubridate 1.8.0 ✓ fable 0.3.1
#> ✓ ggplot2 3.3.5
#> ── Conflicts ───────────────────────────────────────────────── fpp3_conflicts ──
#> x lubridate::date() masks base::date()
#> x dplyr::filter() masks stats::filter()
#> x tsibble::intersect() masks base::intersect()
#> x tsibble::interval() masks lubridate::interval()
#> x dplyr::lag() masks stats::lag()
#> x tsibble::setdiff() masks base::setdiff()
#> x tsibble::union() masks base::union()
library(dplyr)
lambda <- us_employment %>%
  features(Employed, features = guerrero) %>%
  filter(!is.na(lambda_guerrero)) %>%
  head(n = 2)
#> Warning: 126 errors (1 unique) encountered for feature 1
#> [126] missing value where TRUE/FALSE needed
with_lambda <- left_join(lambda, us_employment) %>%
  as_tsibble(key = Series_ID, index = Month)
#> Joining, by = "Series_ID"
box_fit <- with_lambda %>%
  model(box_model = ARIMA(box_cox(Employed, lambda_guerrero)))
box_fcast <- box_fit %>%
  forecast(h = "3 years")
log_fit <- with_lambda %>%
  model(log_model = ARIMA(log(Employed)))
log_fcast <- log_fit %>%
  forecast(h = "3 years")
autoplot(filter(us_employment, Series_ID == "CEU0500000001")) +
  autolayer(filter(box_fcast, Series_ID == "CEU0500000001")) +
  autolayer(filter(log_fcast, Series_ID == "CEU0500000001"))
#> Plot variable not specified, automatically selected `.vars = Employed`
Created on 2022-01-03 by the reprex package (v2.0.1)
Found the solution here: https://github.com/tidyverts/fabletools/issues/103. Hope this helps someone else. The crux of the issue is that lambda_guerrero lives in the data, so you also need to supply its value for the forecast period; with h = "3 years" alone, forecast() has no lambda to back-transform with.
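A minimal sketch of that fix (my reading of the linked issue, not verified here): build the future data explicitly with tsibble's new_data() so the per-series lambda_guerrero travels along, then call forecast() with new_data instead of h.
# 3 years of monthly future observations per Series_ID:
future_data <- new_data(with_lambda, n = 36) %>%
  left_join(lambda, by = "Series_ID")  # carry lambda_guerrero along
box_fcast <- box_fit %>%
  forecast(new_data = future_data)     # box_cox() can now back-transform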

Wrap function passing NULL to lower-level haven::read_dta function in R

I am trying to build a function wrapping haven::read_dta(), similar to the wrap_function() defined in the code below.
My wrap_function() has the default variables = NULL, which should pass NULL on to haven::read_dta()'s col_select argument when no columns are specified. However, passing NULL from variables to col_select throws an error ('Error: Can't find any columns matching col_select in data.').
Can someone help me understand why this happens and how I could build a wrap_function capable of passing a NULL default value on to the lower-level function?
Thanks!
library(reprex)
library(haven)
df_ <- data.frame(a = 1:5,
                  b = letters[1:5])
haven::write_dta(df_,
                 path = "file.dta")
# works well:
haven::read_dta(file = "file.dta",
                col_select = NULL)
#> # A tibble: 5 x 2
#> a b
#> <dbl> <chr>
#> 1 1 a
#> 2 2 b
#> 3 3 c
#> 4 4 d
#> 5 5 e
# does not work:
wrap_function <- function(file, variables = NULL){
  haven::read_dta(file = file,
                  col_select = variables)
}
wrap_function("file.dta")
#> Note: Using an external vector in selections is ambiguous.
#> ℹ Use `all_of(variables)` instead of `variables` to silence this message.
#> ℹ See <https://tidyselect.r-lib.org/reference/faq-external-vector.html>.
#> This message is displayed once per session.
#> Error: Can't find any columns matching `col_select` in data.
sessioninfo::session_info()
#> ─ Session info ───────────────────────────────────────────────────────────────
#> setting value
#> version R version 4.0.3 (2020-10-10)
#> os CentOS Linux 8
#> system x86_64, linux-gnu
#> ui X11
#> language (EN)
#> collate en_US.UTF-8
#> ctype en_US.UTF-8
#> date 2021-05-14
#>
#> ─ Packages ───────────────────────────────────────────────────────────────────
#> package * version date lib source
#> cli 2.4.0 2021-04-05 [1] CRAN (R 4.0.3)
#> crayon 1.4.1.9000 2021-04-16 [1] Github (r-lib/crayon#965d1dc)
#> digest 0.6.27 2020-10-24 [1] CRAN (R 4.0.3)
#> ellipsis 0.3.1 2020-05-15 [1] CRAN (R 4.0.2)
#> evaluate 0.14 2019-05-28 [1] CRAN (R 4.0.2)
#> fansi 0.4.2 2021-01-15 [1] CRAN (R 4.0.3)
#> forcats 0.5.1 2021-01-27 [1] CRAN (R 4.0.3)
#> fs 1.5.0 2020-07-31 [1] CRAN (R 4.0.2)
#> glue 1.4.2 2020-08-27 [1] CRAN (R 4.0.2)
#> haven * 2.3.1 2020-06-01 [1] CRAN (R 4.0.2)
TL;DR: You just need to embrace the argument by wrapping it in double curly braces ({{ }}), also known as "curly-curly". This passes the variable through properly. See the programming with dplyr vignette for more info.
wrap_function <- function(file, variables = NULL){
  haven::read_dta(file = file,
                  col_select = {{ variables }})
}
wrap_function("file.dta")
#> # A tibble: 5 x 2
#> a b
#> <dbl> <chr>
#> 1 1 a
#> 2 2 b
#> 3 3 c
#> 4 4 d
#> 5 5 e
Unfortunately it's a little hard to see that this is necessary without looking at the source. If you look at the haven repository, you can see that read_dta itself uses the double-curly around col_select. This is a pretty good indication that you need to do the same in your wrapper function.
If you look further, read_dta uses the braces to pass the argument down to a function skip_cols, which evaluates it inside tidyselect::vars_select. Embracing is needed to delay evaluation of the argument until the point where it is actually used. In other words, it lets you call the function like this:
wrap_function("file.dta", variables = a)
instead of forcing you to do something like
wrap_function("file.dta", variables = "a")
and saves you a lot of typed quotes, especially with a lot of columns. You see this pattern in dplyr and other tidyverse functions a lot, especially any time an argument refers to a dataframe column rather than a variable.
In other words, you don't want R to work out what a is until execution reaches skip_cols, which knows that a refers to a column inside the file you're reading. Without the curly braces, R would instead look for an object named a in your working environment.
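For instance, with the embraced wrapper above, tidyselect semantics carry through to your function (illustrative calls of my own, reusing the file.dta written earlier):
wrap_function("file.dta", variables = a)                 # bare column name
wrap_function("file.dta", variables = c(a, b))           # several columns
wrap_function("file.dta", variables = starts_with("a"))  # selection helper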

\xe8 matching \xf1 in str_detect() and str_replace_all()

I want to process text files that include some characters shown as hexadecimal escapes in R. When I tried to convert those back into more readable characters, I encountered some unexpected (to me) behaviour from stringr functions. Specifically, \xe8 apparently matches \xf1:
> library("tidyverse")
> str <- "ni\xf1a"
> str_detect(str, "\xe8")
[1] TRUE
This is inconvenient when I want to convert \xe8 into è and \xf1 into ñ in the same files:
> str %>%
+ str_replace_all("\xe8", "è") %>%
+ str_replace_all("\xf1", "ñ")
[1] "nièa" # I expect niña
Interestingly, gsub() works as I expect:
> str %>%
+ gsub("\xe8", "è", .) %>%
+ gsub("\xf1", "ñ", .)
[1] "niña"
Why does \xe8 match \xf1 in str_detect() and str_replace_all()? Is there a way to avoid it?
Why is the behaviour different between stringr functions and gsub()?
Update
Here is part of the output of devtools::session_info():
> devtools::session_info()
─ Session info ──────────────────────────────────────────────────────────────────
setting value
version R version 4.0.2 (2020-06-22)
os macOS Catalina 10.15.7
system x86_64, darwin17.0
ui RStudio
language (EN)
collate en_GB.UTF-8
ctype en_GB.UTF-8
tz Europe/London
date 2020-09-30
─ Packages ──────────────────────────────────────────────────────────────────────
package * version date lib source
...
stringr * 1.4.0 2019-02-10 [1] CRAN (R 4.0.2)
...
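One likely explanation and workaround (my addition; the original thread stops at the question): "ni\xf1a" is a latin1 byte sequence, and in a UTF-8 locale R marks such a literal as having unknown encoding. stringr is built on stringi/ICU, which expects valid UTF-8, so matching stray latin1 bytes gives unreliable results, while gsub() in this case effectively compares the same raw bytes. Assuming the files really are latin1-encoded, converting them to UTF-8 up front both avoids the mismatch and performs the desired è/ñ conversion in one step:
str <- "ni\xf1a"
# Convert from latin1 to UTF-8 instead of patching characters one by one:
iconv(str, from = "latin1", to = "UTF-8")
#> [1] "niña"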

Polynomial Function Expansion in R

I am currently reviewing this question on SO and see that the OP stated that by adding more for loops you can expand the polynomials further. How exactly would you do so? I am trying to expand to polynomial order 5.
Polynomial feature expansion in R
Here is the code below:
polyexp <- function(df){
  df.polyexp <- df
  colnames <- colnames(df)
  for (i in 1:ncol(df)){
    for (j in i:ncol(df)){
      colnames <- c(colnames, paste0(names(df)[i], '.', names(df)[j]))
      df.polyexp <- cbind(df.polyexp, df[, i] * df[, j])
    }
  }
  names(df.polyexp) <- colnames
  return(df.polyexp)
}
Ultimately, I'd like to order the matrix by degree. I tried the poly function, but I'm not sure whether you can order its result so that the matrix starts with the degree-1 terms, then moves on to degree 2, then 3, 4, and 5.
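First, the "more for loops" part: here is a minimal sketch of that idea for degree 3 (a hypothetical polyexp3 of my own, not from the linked post; each extra degree needs one more nested loop):
polyexp3 <- function(df){
  out <- df
  nm <- colnames(df)
  for (i in 1:ncol(df)){
    for (j in i:ncol(df)){
      # degree-2 products, as in the original polyexp:
      nm  <- c(nm, paste0(names(df)[i], '.', names(df)[j]))
      out <- cbind(out, df[, i] * df[, j])
      # one extra loop adds the degree-3 products:
      for (k in j:ncol(df)){
        nm  <- c(nm, paste0(names(df)[i], '.', names(df)[j], '.', names(df)[k]))
        out <- cbind(out, df[, i] * df[, j] * df[, k])
      }
    }
  }
  names(out) <- nm
  out
}
Going to degree 5 this way would need five nested loops, so polym is the better tool, as follows.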
To "sort by degree" is a little ambiguous. x^2 and x*y both have degree 2. I'll assume you want to sort by total degree, and then within each of those, by degree of the 1st column; within that, by degree of the second column, etc. (I believe the default is to ignore total degree and sort by degree of the last column, within that the second last, and so on, but this is not documented so I won't count on it.)
Here's how to use polym to do this. The columns are named things like "2.0" or "1.1". You could sort these alphabetically and it would be fine up to degree 9, but if you convert those names using as.numeric_version, there's no limit. So convert the column names to version names, get the sort order, and use that plus degree to re-order the columns of the result. For example,
df <- data.frame(x = 1:6, y = 0:5, z = -(1:6))
expanded <- polym(as.matrix(df), degree = 5)
o <- order(attr(expanded, "degree"),
           as.numeric_version(colnames(expanded)))
sorted <- expanded[, o]
# That lost the attributes, so put them back
attr(sorted, "degree") <- attr(expanded, "degree")[o]
attr(sorted, "coefs") <- attr(expanded, "coefs")
class(sorted) <- class(expanded)
# If you call predict(), it comes out in the default order,
# so it will need sorting too:
predict(sorted, newdata = as.matrix(df[1, ]))[, o]
#> 0.0.1 0.1.0 1.0.0 0.0.2 0.1.1 0.2.0
#> 0.59761430 -0.59761430 -0.59761430 0.54554473 -0.35714286 0.54554473
#> 1.0.1 1.1.0 2.0.0 0.0.3 0.1.2 0.2.1
#> -0.35714286 0.35714286 0.54554473 0.37267800 -0.32602533 0.32602533
#> 0.3.0 1.0.2 1.1.1 1.2.0 2.0.1 2.1.0
#> -0.37267800 -0.32602533 0.21343368 -0.32602533 0.32602533 -0.32602533
#> 3.0.0 0.0.4 0.1.3 0.2.2 0.3.1 0.4.0
#> -0.37267800 0.18898224 -0.22271770 0.29761905 -0.22271770 0.18898224
#> 1.0.3 1.1.2 1.2.1 1.3.0 2.0.2 2.1.1
#> -0.22271770 0.19483740 -0.19483740 0.22271770 0.29761905 -0.19483740
#> 2.2.0 3.0.1 3.1.0 4.0.0 0.0.5 0.1.4
#> 0.29761905 -0.22271770 0.22271770 0.18898224 0.06299408 -0.11293849
#> 0.2.3 0.3.2 0.4.1 0.5.0 1.0.4 1.1.3
#> 0.20331252 -0.20331252 0.11293849 -0.06299408 -0.11293849 0.13309928
#> 1.2.2 1.3.1 1.4.0 2.0.3 2.1.2 2.2.1
#> -0.17786140 0.13309928 -0.11293849 0.20331252 -0.17786140 0.17786140
#> 2.3.0 3.0.2 3.1.1 3.2.0 4.0.1 4.1.0
#> -0.20331252 -0.20331252 0.13309928 -0.20331252 0.11293849 -0.11293849
#> 5.0.0
#> -0.06299408
Created on 2020-03-21 by the reprex package (v0.3.0)

How should I use the uq() function inside a package?

I'm puzzled by the behaviour of the uq() function. The behaviour is not the same when I use uq() as when I use lazyeval::uq().
Here is my reproducible example:
First, I generate a fake dataset
library(tibble)
library(lazyeval)
fruits <- c("apple", "banana", "peanut")
price <- c(5, 6, 4)
table_fruits <- tibble(fruits, price)
Then I write a toy function, toy_function_v1, using only uq():
toy_function_v1 <- function(data, var) {
  lazyeval::f_eval(f = ~ uq(var), data = data)
}
and a second function using lazyeval::uq():
toy_function_v2 <- function(data, var) {
  lazyeval::f_eval(f = ~ lazyeval::uq(var), data = data)
}
Surprisingly, the outputs of v1 and v2 are not the same:
> toy_function_v1(data = table_fruits, var = ~ price)
[1] 5 6 4
> toy_function_v2(data = table_fruits, var = ~ price)
price
Is there any explanation?
I know it's good practice to use the package::function() syntax when calling functions inside a new package. So what's the best solution in this case?
Here is my session_info:
> devtools::session_info()
Session info ----------------------------------------------------------------------------------------------------------------------------------------------------
setting value
version R version 3.3.1 (2016-06-21)
system x86_64, linux-gnu
ui RStudio (1.0.35)
language (EN)
collate C
tz <NA>
date 2016-11-07
Packages --------------------------------------------------------------------------------------------------------------------------------------------------------
package * version date source
Rcpp 0.12.7 2016-09-05 CRAN (R 3.2.3)
assertthat 0.1 2013-12-06 CRAN (R 3.2.2)
devtools 1.12.0 2016-06-24 CRAN (R 3.2.3)
digest 0.6.10 2016-08-02 CRAN (R 3.2.3)
lazyeval * 0.2.0.9000 2016-10-14 Github (hadley/lazyeval#c155c3d)
memoise 1.0.0 2016-01-29 CRAN (R 3.2.3)
tibble * 1.2 2016-08-26 CRAN (R 3.2.3)
withr 1.0.2 2016-06-20 CRAN (R 3.2.3)
It's just a bug in the uq() function: f_eval() apparently recognises unquoting syntactically, by looking for calls named uq, so the namespace-qualified lazyeval::uq() is not detected and the formula is evaluated without unquoting. The issue is open on GitHub: https://github.com/hadley/lazyeval/issues/78.
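A side note beyond the original answer: lazyeval has since been superseded by rlang, where the equivalent pattern uses quosures and where namespacing works as expected (a minimal sketch with a hypothetical toy_function_v3):
library(rlang)
toy_function_v3 <- function(data, var) {
  # eval_tidy() evaluates the quosure with `data` as a data mask
  eval_tidy(var, data = data)
}
toy_function_v3(data = table_fruits, var = quo(price))
#> [1] 5 6 4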
