I am running a simulation of multiple experiments using random data to fit GLMs. In each individual experiment I need to select different covariates to build the GLM. Is there a way to use variable names to specify which covariates to use in the formula? For example, for a data frame called data that will contain the column y plus a set of other columns that changes with each iteration, something like:
data <- data.frame(x1 = c(1:100),x2 = c(2:101),x3 = c(3:102),x4 = c(4:103),x5 = c(5:104),y = c(6:105))
#Experiment #1:
covars = c(x1,x2,x4)
glm(y ~ sum(covars),data=data)
#Experiment #2:
covars = c(x1,x3,x4,x5)
glm(y ~ sum(covars),data=data)
#Experiment #3:
covars = c(x2,x4,x5)
glm(y ~ sum(covars),data=data)
#etc...
So far, I have tried using this approach with the sum and colnames functions, but I get the following error: "invalid 'type' (character) of argument"
Thank you!
We can use . to represent all the columns except the dependent column 'y':
glm(y ~ ., data = data)
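If you want a different subset of covariates in each experiment, one approach (a sketch, assuming you store the covariate names as character strings) is to build the formula from that character vector with reformulate():
covars <- c("x1", "x2", "x4")           # Experiment #1
glm(reformulate(covars, response = "y"), data = data)
covars <- c("x1", "x3", "x4", "x5")     # Experiment #2
glm(reformulate(covars, response = "y"), data = data)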
I'm trying to assess the effect of variables (e.g. presence of severe trauma) on a continuous variable (here energy expenditure (REE) in calories) over time (Day). The data frame is called my_data, and it contains these variables among its columns.
I would then like to fit the mixed linear model for each assessed variable and display the results in one large file.
General concept:
REE ~ Time*predictor + (1 + Time | Case identifier)
(1) Start by creating the lmer model:
library(tidyverse)
library(ggpmisc)
library(sjPlot)
library(lme4)
mixed.modelloop <- function(x) {
  lmer(REE ~ Day*(x) + (1 + Day | Studynumber),
       data = my_data,
       REML = FALSE,
       na.action = na.omit,
       control = lmerControl(check.nobs.vs.nRE = "ignore"))
}
(2) Then create the predictors (x):
cols <- c(colnames(my_data))
(3) And then run the overall purrr call:
output <- purrr::map(cols, ~ mixed.modelloop(.x) %>% tab_model)
(4) Generate the file, which should include all the separate univariate mixed-model analyses:
pdf(file="mixed linear models.pdf" )
output
dev.off()
Unfortunately, after step (3) I currently get the following error message:
Error in model.frame.default(data = my_data, na.action = na.omit, drop.unused.levels = TRUE, :
variable lengths differ (found for 'x')
Any idea on how to adapt the function to resolve this issue?
Thanks!
Formulas have special rules; you can't just insert a string into them and expect it to work.
This should work, although you haven't given a reproducible example to test with ...
mixed.modelloop <- function(x) {
  form <- reformulate(c(sprintf("Day*%s", x), "(1 + Day | Studynumber)"),
                      response = "REE")
  lmer(form,
       data = my_data,
       REML = FALSE,
       na.action = na.omit,
       control = lmerControl(check.nobs.vs.nRE = "ignore"))
}
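One extra caveat (my assumption, not part of the original answer): cols <- c(colnames(my_data)) also contains REE, Day and Studynumber themselves, so you probably want to drop those before step (3), for example:
pred_cols <- setdiff(colnames(my_data), c("REE", "Day", "Studynumber"))
output <- purrr::map(pred_cols, ~ tab_model(mixed.modelloop(.x)))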
I have randomly generated a dataset that has been split in two (L and I).
First I run the regression on L using all the covariates.
After defining the set of variables that are significantly different from zero, I want to run the regression on I using this set of variables.
reg_L = lm(y ~ ., data = data)
S_hat = as.data.frame(round(summary(reg_L)$coefficients[,"Pr(>|t|)"], 3)<0.05)
S_hat_L = rownames(which(S_hat==TRUE, arr.ind = TRUE))
Here I want to run the new model, which doesn't work, only because of a problem in how the variable x is specified.
What am I doing wrong?
# Using the I proportion to construct the p-values
x = noquote(paste(S_hat_L, collapse = " + "))
reg_I = lm(y ~ x, data = data)
summary(reg_I)
A simpler way than trying to manipulate a formula programmatically would be to remove the unwanted predictors from the data:
pvals <- summary(reg_L)$coefficients[, "Pr(>|t|)"]
wanted <- setdiff(names(pvals)[pvals < 0.05], "(Intercept)")
reduced.data <- data[, c("y", wanted)]
reg_S <- lm(y ~ ., data = reduced.data)
Note, however, that reducing variables with the LASSO is more robust with respect to out-of-sample performance. This yields a model that has some coefficients set to zero, but the other coefficients are adjusted in such a way that the out-of-sample performance will be better.
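A minimal sketch of that idea using the glmnet package (an assumption on my part; it is not part of the question), with the same data frame data and response y:
library(glmnet)
X <- model.matrix(y ~ ., data = data)[, -1]  # predictor matrix, intercept column dropped
cv <- cv.glmnet(X, data$y, alpha = 1)        # alpha = 1 is the LASSO penalty
coef(cv, s = "lambda.min")                   # some coefficients are exactly zero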
I am currently learning R and am playing around with a dataset that has four nominal variables (Hour.Of.Arrival, Mode, Unit, Weekday) and a continuous dependent variable (Overall). This is all imported from a .csv into a data frame named basic. What I am trying to do is run an ANOVA using just this data frame, without creating separate vectors (e.g. Mode <- basic$Mode). "Fit" holds the results of the ANOVA. Here is the code that I wrote:
Fit<-aov(basic["Overall"],basic["Unit"],data=basic)
However, I keep getting the error:
Error in terms.default(formula, "Error", data = data) : no terms component nor attribute
I hope this question isn't too basic!!
Thanks :)
I think you want something more like Fit <- aov(Overall ~ Unit, data = basic). The Overall ~ Unit tells R to treat Overall as an outcome being predicted by Unit; the data argument already specifies that basic is the data frame in which to find these variables.
Here's an example to show you how it works:
> y <- rnorm(100)
> x <- factor(rep(c('A', 'B', 'C', 'D'), each = 25))
> dat <- data.frame(x, y)
> aov(y ~ x, data = dat)
Call:
aov(formula = y ~ x, data = dat)
Terms:
x Residuals
Sum of Squares 2.72218 114.54631
Deg. of Freedom 3 96
Residual standard error: 1.092333
Estimated effects may be unbalanced
Note that you don't need to use the data argument; you could also use aov(dat$y ~ dat$x), but the first argument to the function should be a formula.
I want to perform a multiple regression in R and make predictions based on the trained model. Below is an example code I am using:
price = c(10,18,18,11,17)
predictors = cbind(c(5,6,3,4,5),c(2,1,8,5,6))
predict(lm(price ~ predictors), data.frame(predictors=matrix(c(3,5),nrow=1)))
So, based on the 2-variate regression model trained on 5 samples, I want to make a prediction for a test data point where the first variate is 3 and the second variate is 5. But the above code gives a warning saying that 'newdata' had 1 rows but variable(s) found have 5 rows. How can I correct the above code? The code below works fine, where I give the variables separately to the model formula. But since I will have hundreds of variates, I have to supply them as a matrix; it would be infeasible to append hundreds of columns using the + sign.
price = c(10,18,18,11,17)
predictor1 = c(5,6,3,4,5)
predictor2 = c(2,1,8,5,6)
predict(lm(price ~ predictor1 + predictor2), data.frame(predictor1=3,predictor2=5))
Thanks in advance!
The easiest way to get past the issue of matching up variable names from a matrix of covariates to the newdata data.frame column names is to put your input data into a data.frame as well. Try this:
price = c(10,18,18,11,17)
predictors = cbind(c(5,6,3,4,5),c(2,1,8,5,6))
indata<-data.frame(price,predictors=predictors)
predict(lm(price ~ ., indata), data.frame(predictors=matrix(c(3,5),nrow=1)))
Here we combine price and predictors into a data.frame so that it will be named the same way as the newdata data.frame. We use the . in the formula to mean "all other columns" so we don't have to specify them explicitly.
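To see the matching names, here is a quick (illustrative) check; both the training data and the newdata end up with columns predictors.1 and predictors.2:
names(indata)
# [1] "price"        "predictors.1" "predictors.2"
names(data.frame(predictors = matrix(c(3, 5), nrow = 1)))
# [1] "predictors.1" "predictors.2"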
You need to build the model first, then predict from it:
mod1 <- lm(price ~ predictor1 + predictor2)
predict( mod1 , data.frame(predictor1=3,predictor2=5))
So I have a bunch of variables sitting in a data frame and I want to use the step function to select a model.
Right now I'm doing something like this:
step(lm(SalePrice ~ Gr.Liv.Area + Total.Bsmt.SF + Garage.Area + Lot.Area, list= ~upper(Neighborhood + Neighborhood:Bedroom.AbvGr) ....
How do I add multiple interaction terms without having to manually input them with the : notation?
Here is one way of adding interactions: Assume that all your data of interest is in dat and your dependent variable is named y. The code
init_mod <- lm(y ~ ., data = dat)
step(init_mod, scope = . ~ .^2, direction = 'forward')
will add interaction terms to your model stepwise, using AIC. If you want k-th order interactions, you can replace .^2 with .^k.
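For example, a minimal runnable sketch with simulated data (the data frame dat and the variable names here are made up for illustration):
set.seed(1)
dat <- data.frame(y = rnorm(100), x1 = rnorm(100), x2 = rnorm(100), x3 = rnorm(100))
init_mod <- lm(y ~ ., data = dat)
step(init_mod, scope = . ~ .^2, direction = 'forward')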