I am trying to dynamically insert variables into a fable model.
Data
library(dplyr)
library(fable)
library(stringr)
df <- tsibbledata::aus_retail %>%
filter(State == "Victoria", Industry == "Food retailing") %>%
mutate(reg_test = rnorm(441, 5, 2),
reg_test2 = rnorm(441, 5, 2))
Note that there can be an undetermined number of regressors included in the tsibble, but in this example I have only two (reg_test and reg_test2). All regressor columns will start with reg_.
Problem Function
I have a function where I want to dynamically put the regressor columns into an ARIMA model using the fable package.
test_f <- function(df) {
var_names <- str_subset(names(df), "reg_") %>%
paste0(collapse = "+")
test <- enquo(var_names)
df %>%
model(ARIMA(Turnover ~ !!test))
}
test_f(df)
# A mable: 1 x 3
# Key: State, Industry [1]
State Industry `ARIMA(Turnover ~ ~"reg_test+reg_tes~
<chr> <chr> <model>
1 Victoria Food retaili~ <NULL model>
Warning message:
1 error encountered for ARIMA(Turnover ~ ~"reg_test+reg_test2")
[1] invalid model formula in ExtractVars
I know that it is just putting the string var_names into the formula, which does not work, but I can't figure out how to create var_names in such a way that I can enquo() it correctly.
I read through the Quasiquotation section here.
I searched SO but have not found the answer yet.
This question using parse_expr() seemed to get closer, but it is still not what I wanted.
I know that I can use sym() if I have one variable, but I don't know how many reg_ variables there will be, and I want to include them all.
Expected Output
By putting in the variables manually, I can show the output that I expect.
test <- df %>%
model(ARIMA(Turnover ~ reg_test + reg_test2))
test$`ARIMA(Turnover ~ reg_test + reg_test2)`[[1]]
Series: Turnover
Model: LM w/ ARIMA(2,1,0)(0,1,2)[12] errors
Coefficients:
ar1 ar2 sma1 sma2 reg_test reg_test2
-0.6472 -0.3541 -0.4115 -0.0793 -0.0296 -0.6143
s.e. 0.0473 0.0479 0.0520 0.0446 0.5045 0.5273
sigma^2 estimated as 884.9: log likelihood=-2058.04
AIC=4130.08 AICc=4130.35 BIC=4158.5
I also imagine that there is a better way for me to make the formula in the ARIMA function. If this can fix my problem as well, that will work too.
I appreciate any help!
You're possibly making this a bit more complicated than it needs to be. You can convert a string to a formula by doing as.formula(string), so simply build your formula as a string, convert it to a formula, then feed it to ARIMA. Here's a reprex:
library(dplyr)
library(fable)
library(stringr)
df <- tsibbledata::aus_retail %>%
filter(State == "Victoria", Industry == "Food retailing") %>%
mutate(reg_test = rnorm(441, 5, 2),
reg_test2 = rnorm(441, 5, 2))
test_f <- function(df) {
var_names <- paste0(str_subset(names(df), "reg_"), collapse = " + ")
mod <- model(df, ARIMA(as.formula(paste("Turnover ~", var_names))))
unclass(mod[1, 3][[1]])[[1]]  # extract the single fitted model from the mable for printing
}
test_f(df)
#> Series: Turnover
#> Model: LM w/ ARIMA(2,1,0)(0,1,1)[12] errors
#>
#> Coefficients:
#> ar1 ar2 sma1 reg_test reg_test2
#> -0.6689 -0.376 -0.4765 0.3363 1.0194
#> s.e. 0.0448 0.045 0.0426 0.4978 0.5436
#>
#> sigma^2 estimated as 883.1: log likelihood=-2058.28
#> AIC=4128.56 AICc=4128.76 BIC=4152.91
Created on 2020-04-23 by the reprex package (v0.3.0)
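As an aside, base R's reformulate() builds the same formula without the manual paste(); here is a minimal sketch of the same function using it (test_f2 is just an illustrative name):
test_f2 <- function(df) {
  # build Turnover ~ reg_test + reg_test2 + ... from the matching column names
  fml <- reformulate(str_subset(names(df), "reg_"), response = "Turnover")
  model(df, ARIMA(fml))
}
test_f2(df)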
Related
I am working on exercise #4 from this book.
https://otexts.com/fpp3/dynamic-exercises.html
I am trying to create a transformation to adjust for CPI. However, it appears that, because I am passing two variables from the tsibble object into my transformation function, fable does not recognize it as a transformation of the data, but rather as a new data series. Any idea how to correct this?
library(fpp3)
cpi_adj <- function(data, cpi) {
return(data/cpi*100)
}
cpi_unadj <- function(data, cpi) {
return(data*cpi/100)
}
cpi_trans <- new_transformation(cpi_adj, cpi_unadj)
aus_acc_fit <- aus_accommodation %>%
model(ARIMA(cpi_trans(Takings, CPI) ~ trend(knots = yearquarter("2008 Q1")) + season()))
aus_acc_fit %>%
filter(State == "Victoria") %>%
report()
This is the output I get. You can see in the header of the model summary that, rather than recognizing it as a transformation, it recognizes it as a new data series instead.
Series: cpi_trans(Takings, CPI)
Model: LM w/ ARIMA(1,0,0)(0,0,1)[4] errors
Coefficients:
ar1 sma1 trend(knots = yearquarter("2008 Q1"))trend trend(knots = yearquarter("2008 Q1"))trend_41 season()year2 season()year3 season()year4 intercept
0.6596 0.3756 3.1995 0.3892 -57.8760 -36.0249 -12.9360 264.4398
s.e. 0.0866 0.1242 0.4871 0.9790 3.9693 4.6151 4.0447 13.3769
sigma^2 estimated as 170.8: log likelihood=-291.63
AIC=601.26 AICc=604.07 BIC=622
{fable} will attempt to identify the response variable on your left-hand side based on the length of the variables. Since Takings and CPI have the same length (they are both columns of the same tsibble), it cannot distinguish whether you were trying to forecast Takings or CPI, so it instead forecasts cpi_trans(Takings, CPI) as a whole.
To explicitly define which variable is the response, you can use the resp() function.
Also note that for simpler transformations like yours, it is possible to write the transformation directly on the left-hand side of the formula. For example:
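aus_accommodation %>%
  model(ARIMA(resp(Takings)/CPI*100 ~ trend(knots = yearquarter("2008 Q1")) + season()))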
library(fpp3)
cpi_adj <- function(data, cpi) {
return(data/cpi*100)
}
cpi_unadj <- function(data, cpi) {
return(data*cpi/100)
}
cpi_trans <- new_transformation(cpi_adj, cpi_unadj)
aus_acc_fit <- aus_accommodation %>%
model(ARIMA(cpi_trans(resp(Takings), CPI) ~ trend(knots = yearquarter("2008 Q1")) + season()))
aus_acc_fit %>%
filter(State == "Victoria") %>%
report()
#> Series: Takings
#> Model: LM w/ ARIMA(1,0,0)(0,0,1)[4] errors
#> Transformation: cpi_trans(Takings, CPI)
#>
#> Coefficients:
#> ar1 sma1 trend(knots = yearquarter("2008 Q1"))trend
#> 0.6596 0.3756 3.1995
#> s.e. 0.0866 0.1242 0.4871
#> trend(knots = yearquarter("2008 Q1"))trend_41 season()year2
#> 0.3892 -57.8760
#> s.e. 0.9790 3.9693
#> season()year3 season()year4 intercept
#> -36.0249 -12.9360 264.4398
#> s.e. 4.6151 4.0447 13.3769
#>
#> sigma^2 estimated as 170.8: log likelihood=-291.63
#> AIC=601.26 AICc=604.07 BIC=622
Created on 2022-07-28 by the reprex package (v2.0.1)
I want to run an OLS regression for every possible combination of 2 independent variables. I have a csv with my data (just one dependent variable and 23 independent variables), and I've tried renaming the independent variables inside my database from a to z and calling my dependent variable 'y' (a column named "y") so that it is recognized by the following code:
#all the combinations
all_comb <- combn(letters, 2)
#create the formulas from the combinations above and paste
text_form <- apply(all_comb, 2, function(x) paste('Y ~', paste0(x, collapse = '+')))
lapply(text_form, function(i) lm(i, data= KOFS05.12))
but this error is shown:
Error in eval(predvars, data, env) : object 'y' not found
I need to know the R squared.
Any idea how to make it work and run every possible regression?
As mentioned in the comments under the question, check whether you need y or Y. Having addressed that, we can use any of these. There is no need to rename the columns. We use the built-in mtcars data set as an example since no test data was provided in the question. (Please always provide that in the future.)
1) ExhaustiveSearch This runs quite fast, so you might be able to try combinations of more than 2 variables as well.
library(ExhaustiveSearch)
ExhaustiveSearch(mpg ~., mtcars, combsUpTo = 2)
2) combn Use the lmfun function defined below with combn.
dep <- "mpg" # name of dependent variable
nms <- setdiff(names(mtcars), dep) # names of indep variables
lmfun <- function(x, dep) do.call("lm", list(reformulate(x, dep), quote(mtcars)))
lms <- combn(nms, 2, lmfun, dep = dep, simplify = FALSE)
names(lms) <- lapply(lms, formula)
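Since the question asks for the R squared, it can be pulled from each fit in lms (a small addition to the above, not part of the original three options):
r2 <- sapply(lms, function(m) summary(m)$r.squared)
head(sort(r2, decreasing = TRUE))  # best-fitting pairs first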
3) listcompr Using lmfun from above and listcompr, we can use the following. Note that we need version 0.1.1 or later of listcompr, which is not yet on CRAN, so we get it from GitHub.
# remotes::install_github("patrickroocks/listcompr")
library(listcompr)
packageVersion("listcompr") # need version 0.1.1 or later
dep <- "mpg" # name of dependent variable
nms <- setdiff(names(mtcars), dep) # names of indep variables
lms2 <- gen.named.list("{nm1}.{nm2}", lmfun(c(nm1, nm2), dep),
nm1 = nms, nm2 = nms, nm1 < nm2)
You should specify your text_form as formulas:
KOFS05.12 <- data.frame(y = rnorm(10),
a = rnorm(10),
b = rnorm(10),
c = rnorm(10))
all_comb <- combn(letters[1:3], 2)
fmla_form <- apply(all_comb, 2, function(x) as.formula(sprintf("y ~ %s", paste(x, collapse = "+"))))
lapply(fmla_form, function(i) lm(i, KOFS05.12))
#> [[1]]
#>
#> Call:
#> lm(formula = i, data = KOFS05.12)
#>
#> Coefficients:
#> (Intercept) a b
#> 0.19763 -0.15873 0.02854
#>
#>
#> [[2]]
#>
#> Call:
#> lm(formula = i, data = KOFS05.12)
#>
#> Coefficients:
#> (Intercept) a c
#> 0.21395 -0.15967 0.05737
#>
#>
#> [[3]]
#>
#> Call:
#> lm(formula = i, data = KOFS05.12)
#>
#> Coefficients:
#> (Intercept) b c
#> 0.157140 0.002523 0.028088
Created on 2021-02-17 by the reprex package (v1.0.0)
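Since you also need the R squared, you can map summary() over the same list of formulas (a small sketch reusing fmla_form and KOFS05.12 from above):
sapply(fmla_form, function(i) summary(lm(i, KOFS05.12))$r.squared)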
I would like to make a regression loop lm(y~x) with a dataset with one y and several x, run the regression for each x, and then also store the results (estimate, p-values) in a data.frame() so I don't have to copy them manually (especially as my real data set is much bigger).
I think this should not be too difficult, but I am struggling a lot to make it work, and I appreciate your help:
Here is my sample data set:
sample_data <- data.frame(
fit = c(0.8971963, 1.4205607, 1.4953271, 0.8971963, 1.1588785, 0.1869159, 1.1588785, 1.142857143, 0.523809524),
Xbeta = c(2.8907744, -0.7680777, -0.7278847, -0.06293916, -0.04047017, 2.3755812, 1.3043990, -0.5698354, -0.5698354),
Xgamma = c( 0.1180758, -0.6275700, 0.3731964, -0.2353454,-0.5761923, -0.5186803, 0.43041835, 3.9111749, -0.5030638),
Xalpha = c(0.2643091, 1.6663923, 0.4041057, -0.2100472, -0.2100472, 7.4874195, -0.2385278, 0.3183102, -0.2385278),
Xdelta = c(0.1498646, -0.6325119, -0.5947564, -0.2530748, 3.8413339, 0.6839322, 0.7401834, 3.8966404, 1.2028175)
)
#yname <- ("fit")
#xnames <- c("Xbeta ","Xgamma", "Xalpha", "Xdelta")
The simple regression with the first independent variable Xbeta would look like this: lm(fit ~ Xbeta, data = sample_data). I would like to run the regression for each variable starting with an "X" and then store the results (estimate, p-value).
I have found a code that allows me to select variables that start with "X" and then use it for the model, but the code gives me an error from mutate() onwards (indicated by #).
library(tidyverse)
library(tsibble)
sample_data %>%
gather(stock, return, starts_with("X")) %>%
group_nest(stock)
# %>%
# mutate(model = map(data,
# ~lm(formula = "fit~ return",
# data = .x))
# ),
# resid = map(model, residuals)
# ) %>%
# unnest(c(data,resid)) %>%
# summarise(sd_residual = sd(resid))
For then storing the regression results, I have also found the following approach using the R package broom: r for loop for regression lm(y~x)
sample_data%>%
group_by(y,x)%>% # get combinations of y and x to regress
do(tidy(lm(fRS_relative~xvalue, data=.)))
But I always get an error for group_by() and do().
I really appreciate your help!
One option would be to use lapply to perform a regression with each of the independent variables. Use tidy from the broom library to store the results in a tidy format.
# xnames as defined (commented out) in the question
xnames <- c("Xbeta", "Xgamma", "Xalpha", "Xdelta")
lapply(seq_along(xnames),
function(i) broom::tidy(lm(as.formula(paste0('fit ~ ', xnames[i])), data = sample_data))) -> test
and then combine all the results into a single dataframe:
do.call('rbind', test)
# term estimate std.error statistic p.value
# <chr> <dbl> <dbl> <dbl> <dbl>
# 1 (Intercept) 1.05 0.133 7.89 0.0000995
# 2 Xbeta -0.156 0.0958 -1.62 0.148
# 3 (Intercept) 0.968 0.147 6.57 0.000313
# 4 Xgamma 0.0712 0.107 0.662 0.529
# 5 (Intercept) 1.09 0.131 8.34 0.0000697
# 6 Xalpha -0.0999 0.0508 -1.96 0.0902
# 7 (Intercept) 0.998 0.175 5.72 0.000723
# 8 Xdelta -0.0114 0.0909 -0.125 0.904
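If you only want the per-variable estimates and p-values, a small filter drops the intercept rows (an optional addition, assuming dplyr is loaded):
do.call('rbind', test) %>% dplyr::filter(term != "(Intercept)")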
Step one
Your data is messy, let us tidy it up.
sample_data <- data.frame(
fit = c(0.8971963, 1.4205607, 1.4953271, 0.8971963, 1.1588785, 0.1869159, 1.1588785, 1.142857143, 0.523809524),
Xbeta = c(2.8907744, -0.7680777, -0.7278847, -0.06293916, -0.04047017, 2.3755812, 1.3043990, -0.5698354, -0.5698354),
Xgamma = c( 0.1180758, -0.6275700, 0.3731964, -0.2353454,-0.5761923, -0.5186803, 0.43041835, 3.9111749, -0.5030638),
Xalpha = c(0.2643091, 1.6663923, 0.4041057, -0.2100472, -0.2100472, 7.4874195, -0.2385278, 0.3183102, -0.2385278),
Xdelta = c(0.1498646, -0.6325119, -0.5947564, -0.2530748, 3.8413339, 0.6839322, 0.7401834, 3.8966404, 1.2028175)
)
tidyframe = data.frame(fit = sample_data$fit,
X = c(sample_data$Xbeta,sample_data$Xgamma,sample_data$Xalpha,sample_data$Xdelta),
type = c(rep("beta",9),rep("gamma",9),rep("alpha",9),rep("delta",9)))
Created on 2020-07-13 by the reprex package (v0.3.0)
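Equivalently, and less error-prone than writing the rep() counts by hand, tidyr::pivot_longer() can build the same long frame (a sketch, not part of the original answer):
library(tidyr)
tidyframe <- sample_data %>%
  pivot_longer(starts_with("X"), names_to = "type", names_prefix = "X", values_to = "X")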
Step two
Iterate over each type and get the p-value, using this nifty function:
# From https://stackoverflow.com/a/5587781/3212698
lmp <- function (modelobject) {
if (!inherits(modelobject, "lm")) stop("Not an object of class 'lm'")
f <- summary(modelobject)$fstatistic
p <- pf(f[1],f[2],f[3],lower.tail=F)
attributes(p) <- NULL
return(p)
}
Then do some clever piping:
tidyframe %>% group_by(type) %>%
summarise(type = type, p = lmp(lm(formula = fit ~ X))) %>%
unique()
#> `summarise()` regrouping output by 'type' (override with `.groups` argument)
#> # A tibble: 4 x 2
#> # Groups: type [4]
#> type p
#> <fct> <dbl>
#> 1 alpha 0.0902
#> 2 beta 0.148
#> 3 delta 0.904
#> 4 gamma 0.529
Created on 2020-07-13 by the reprex package (v0.3.0)
It seems like predict() is producing a standard error that is too large. I get 0.820 with a parsnip model but 0.194 with a base R model. 0.194 seems more reasonable for a standard error, since about 2*0.194 above and below my prediction gives the ends of the confidence interval. What is my problem/misunderstanding?
library(parsnip)
library(dplyr)
# example data
mod_dat <- mtcars %>%
as_tibble() %>%
mutate(cyl_8 = as.numeric(cyl == 8)) %>%
select(mpg, cyl_8)
parsnip_mod <- logistic_reg() %>%
set_engine("glm") %>%
fit(as.factor(cyl_8) ~ mpg, data = mod_dat)
base_mod <- glm(as.factor(cyl_8) ~ mpg, data = mod_dat, family = "binomial")
parsnip_pred <- tibble(mpg = 18) %>%
bind_cols(predict(parsnip_mod, new_data = ., type = 'prob'),
predict(parsnip_mod, new_data = ., type = 'conf_int', std_error = T)) %>%
select(!ends_with("_0"))
base_pred <- predict(base_mod, tibble(mpg = 18), se.fit = T, type = "response") %>%
unlist()
# these give the same prediction but different SE
parsnip_pred
#> # A tibble: 1 x 5
#> mpg .pred_1 .pred_lower_1 .pred_upper_1 .std_error
#> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 18 0.614 0.230 0.895 0.820
base_pred
#> fit.1 se.fit.1 residual.scale
#> 0.6140551 0.1942435 1.0000000
Created on 2020-06-04 by the reprex package (v0.3.0)
--EDIT--
As #thelatemail and #Limey said, using type="link" for the base model will give the standard error on the logit scale (0.820). However, I want the standard error on the probability scale.
Is there an option in the parsnip documentation that I'm missing? I would like to use parsnip.
#thelatemail is correct. From the online doc for predict.glm:
type
the type of prediction required. The default is on the scale of the linear predictors; the alternative "response" is on the scale of the response variable. Thus for a default binomial model the default predictions are of log-odds (probabilities on logit scale) and type = "response" gives the predicted probabilities.
The default is to report on the logit scale; 'response' requests results on the raw probability scale. It's not obvious from the parsnip predict() documentation that I found how it chooses the scale on which to return its results, but it's clear it's using the raw probability scale.
So both methods are returning correct answers, they're just using different scales.
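For instance, reusing base_mod and the data from the question, the two scales can be requested explicitly from predict.glm(); the numbers match the 0.820 and 0.194 reported above:
predict(base_mod, tibble(mpg = 18), se.fit = TRUE, type = "link")$se.fit      # logit scale, ~0.820
predict(base_mod, tibble(mpg = 18), se.fit = TRUE, type = "response")$se.fit  # probability scale, ~0.194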
I don't want to steal an accepted solution from #thelatemail, so I invite them to post a similar answer to this.
As #thelatemail said, you can get the standard error on the probability scale with parsnip using the arguments: type="raw", opts=list(se.fit=TRUE, type="response"). But at that point, you might as well use a base model since the output is exactly the same. However, this is still useful if you are already using a parsnip model and you want the standard error output of a base model.
library(parsnip)
library(dplyr)
mod_dat <- mtcars %>%
as_tibble() %>%
mutate(cyl_8 = as.numeric(cyl == 8)) %>%
select(mpg, cyl_8)
parsnip_mod <- logistic_reg() %>%
set_engine("glm") %>%
fit(as.factor(cyl_8) ~ mpg, data = mod_dat)
base_mod <- glm(as.factor(cyl_8) ~ mpg, data = mod_dat, family = "binomial")
predict(parsnip_mod, tibble(mpg = 18), type="raw",
opts=list(se.fit=TRUE, type="response")) %>%
as_tibble()
#> # A tibble: 1 x 3
#> fit se.fit residual.scale
#> <dbl> <dbl> <dbl>
#> 1 0.614 0.194 1
predict.glm(base_mod, tibble(mpg = 18), se.fit = T, type="response") %>%
as_tibble()
#> # A tibble: 1 x 3
#> fit se.fit residual.scale
#> <dbl> <dbl> <dbl>
#> 1 0.614 0.194 1
Created on 2020-06-11 by the reprex package (v0.3.0)
I'm writing a function for my (working) R script in order to clean up my code. I do not have experience with writing functions, but decided I should invest some time into this. The goal of my function is to perform multiple statistical tests while only passing the required dataframe, quantitative variable and grouping variable once. However, I cannot get this to work. For your reference, I'll use the ToothGrowth data frame to illustrate my problem.
Say I want to run a Kruskal-Wallis test and one-way ANOVA on len, to compare different groups named supp, for whatever reason. I can do this separately with
kruskal.test(len ~ supp, data = ToothGrowth)
aov(len ~ supp, data = ToothGrowth)
Now I want to write a function that performs both tests. This is what I had thought should work:
stat_test <- function(mydata, quantvar, groupvar) {
kruskal.test(quantvar ~ groupvar, data = mydata)
aov(quantvar ~ groupvar, data = mydata)
}
But if I then run stat_test(ToothGrowth, "len", "supp"), I get the error
Error in kruskal.test.default("len", "supp") :
all observations are in the same group
What am I doing wrong? Any help would be much appreciated!
You can use deparse(substitute(quantvar)) to get the quoted name of the column you are passing to the function, and this will allow you to build a formula using paste. This is a more idiomatic way of operating in R.
Here's a reproducible example:
stat_test <- function(mydata, quantvar, groupvar) {
A <- as.formula(paste(deparse(substitute(quantvar)), "~",
deparse(substitute(groupvar))))
print(kruskal.test(A, data = mydata))
cat("\n--------------------------------------\n\n")
aov(A, data = mydata)
}
stat_test(ToothGrowth, len, supp)
#>
#> Kruskal-Wallis rank sum test
#>
#> data: len by supp
#> Kruskal-Wallis chi-squared = 3.4454, df = 1, p-value = 0.06343
#>
#>
#> --------------------------------------
#> Call:
#> aov(formula = A, data = mydata)
#>
#> Terms:
#> supp Residuals
#> Sum of Squares 205.350 3246.859
#> Deg. of Freedom 1 58
#>
#> Residual standard error: 7.482001
#> Estimated effects may be unbalanced
Created on 2020-03-30 by the reprex package (v0.3.0)
It looks like you need to convert your variable arguments, given as text strings, into a formula. You can do this by concatenating the strings with paste(). Also, you will need to wrap print() around both of your statistical tests within the function, otherwise only the last one will display.
Here is the modified function:
stat_test <- function(mydata, quantvar, groupvar) {
model_formula <- formula(paste(quantvar, '~', groupvar))
print(kruskal.test(model_formula, data = mydata))
print(aov(model_formula, data = mydata))
}
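Calling it with quoted names, as in the question, now works (a quick check):
stat_test(ToothGrowth, "len", "supp")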
For reference, if using rstatix (a tidy version of the statistical test functions), you need to use sym() and !!, together with formula() where needed.
make_kruskal_test <- function(data, quantvar, groupvar) {
library(rstatix, quietly = TRUE)
library(rlang, quietly = TRUE)
formula_expression <- formula(paste(quantvar, "~", groupvar))
quantvar_sym <- sym(quantvar)
shapiro <- shapiro_test(data, !!quantvar_sym) %>% print()
}
sample_data <- tibble::tibble(sample = letters[1:5], mean = 1:5)
make_kruskal_test(sample_data, "mean", "sample")
#> # A tibble: 1 x 3
#> variable statistic p
#> <chr> <dbl> <dbl>
#> 1 mean 0.987 0.967
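Note that despite its name, the function above only runs the Shapiro normality check. Here is a sketch of how the Kruskal-Wallis step itself could be added, using rstatix::kruskal_test() with the same formula (an assumed extension, not part of the original answer):
make_kruskal_test <- function(data, quantvar, groupvar) {
  library(rstatix, quietly = TRUE)
  library(rlang, quietly = TRUE)
  formula_expression <- formula(paste(quantvar, "~", groupvar))
  quantvar_sym <- sym(quantvar)
  shapiro_test(data, !!quantvar_sym) %>% print()  # normality check
  kruskal_test(data, formula_expression)          # the Kruskal-Wallis test itself
}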