Function for Logistic Regression Training Set in R

I am trying to create a function to test a logistic regression model developed on a training set.
For example:
train <- filter(y, folds != i)
test <- filter(y, folds == i)
I want to be able to use the formula for different data sets.
For example, if I were to take y to be a response variable such as "low" in the birthwt data set, and x to be the explanatory variables, e.g. "age" and "race", how would I pass these arguments to the glm.train formula without having to type the function separately for each data set?
glm.train <- glm(y ~ x, family = binomial, data = train)

You can use reformulate to create a formula based on strings:
x <- c("age", "race")
y <- "low"
form <- reformulate(x, response = y)
# low ~ age + race
Use this formula for glm:
glm.train <- glm(form, family = binomial, data = train)
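To avoid retyping this for every data set, the two steps can be wrapped in a small helper. A minimal sketch (fit_glm and its argument names are mine, not from the question):
fit_glm <- function(x, y, data) {
  # builds e.g. low ~ age + race from strings, then fits the logistic model
  glm(reformulate(x, response = y), family = binomial, data = data)
}
# usage with the birthwt example:
# glm.train <- fit_glm(c("age", "race"), "low", data = train)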

Related

How to loop over columns to evaluate different fixed effects in consecutive lme4 mixed models and extract the coefficients and P values?

I am new to R and am trying to loop a mixed model across 90 columns in a dataset.
My dataset looks like the following one but has 90 predictors instead of 7 that I need to evaluate as fixed effects in consecutive models.
I then need to store the model output (coefficients and p-values) to finally construct a figure summarizing the effect sizes of each predictor. I am aware of the debate around p-value estimates from lme4 mixed models.
For example:
library(dplyr)  # for tibble(), %>% and arrange()
set.seed(101)
mydata <- tibble(id = rep(1:32, times = 25),
                 time = sample(1:800),
                 experiment = rep(1:4, times = 200),
                 Y = sample(1:800),
                 predictor_1 = runif(800),
                 predictor_2 = rnorm(800),
                 predictor_3 = sample(1:800),
                 predictor_4 = sample(1:800),
                 predictor_5 = seq(1:800),
                 predictor_6 = sample(1:800),
                 predictor_7 = runif(800)) %>%
  arrange(id, time)
The model to iterate across the N predictors is:
library(lme4)
library(lmerTest) # to obtain p-values for lmer models
mixed.model <- lmer(Y ~ predictor_1 + time + (1|id) + (1|experiment), data = mydata)
summary(mixed.model)
My coding skills are far from being able to set up a loop to repeat the model across the N predictors in my dataset and store the coefficients and p-values in a dataframe.
I have been able to iterate across all the predictors fitting linear models instead of mixed models using lapply, but I have failed to apply this strategy to mixed models.
varlist <- names(mydata)[5:11]
lm_models <- lapply(varlist, function(x) {
  lm(substitute(Y ~ i, list(i = as.name(x))), data = mydata)
})
One option is to update the formula of a restricted model (without the predictor) in an lapply loop over the predictors. Then summarize the resulting list and subset the coefficient matrix using a Vectorize()d function.
library(lmerTest)
## restricted model without any predictor
mixed.model <- lmer(Y ~ time + (1|id) + (1|experiment), data = mydata)
preds <- grep('pred', names(mydata), value = TRUE)
## refit the restricted model, adding one predictor at a time
fits <- lapply(preds, \(x) update(mixed.model, paste('. ~ . + ', x)))
## row 3 of the coefficient matrix is the added predictor; keep Estimate and Pr(>|t|)
extract_coef_p <- Vectorize(\(x) x |> summary() |> coef() |> {\(.) .[3, c(1, 5)]}())
res <- `rownames<-`(t(extract_coef_p(fits)), preds)
res
# Estimate Pr(>|t|)
# predictor_1 -7.177579138 0.8002737
# predictor_2 -5.010342111 0.5377551
# predictor_3 -0.013030513 0.7126500
# predictor_4 -0.041702039 0.2383835
# predictor_5 -0.001437124 0.9676346
# predictor_6 0.005259293 0.8818644
# predictor_7 31.304496255 0.2511275
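As a side note, a name-based variant of the extraction (my sketch, not part of the answer above) indexes each coefficient row by the predictor's name instead of its position, which is more robust if the order of terms in the formula ever changes:
## look up each predictor's row in its model's coefficient matrix by name
res2 <- t(mapply(function(f, p) coef(summary(f))[p, c("Estimate", "Pr(>|t|)")],
                 fits, preds))
rownames(res2) <- preds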

Univariate logistic regression analysis with glm on multiple predictors

So I am trying to run a univariate logistic regression analysis on some data I have.
Basically I have a data frame with 1 response variable and 50 predictors.
In order to analyse it I just use the glm function as:
glm(response_var ~ predictor_var1, data = mydata, family = binomial(link = logit))
However, I don't want to do that manually for all 50 predictors, and it doesn't seem like looping works here. I have tried something like this:
predictors <- colnames(mydata)[-c(1)]
glm_list <- list()
i <- 1
for (predictor in predictors) {
  model <- glm(response_var ~ predictor, data = mydata, family = binomial(link = logit))
  glm_list[[i]] <- model
  i <- i + 1
}
So here I just create a character vector with the names of the predictors in the data frame through colnames.
But when doing this I just get the error:
variable lengths differ (found for 'predictors')
What am I doing wrong here?
Try with lapply and as.formula():
"%+%" <- function(x,y) paste(x, y, sep = "")
lapply(predictors, function(x){
glm(as.formula("response_var ~ " %+% x), data = mydata, family = binomial(link = logit))
})
You are passing a character string where a formula is expected, so you must first coerce it to a formula.
Hope it helps.
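For completeness, a sketch of how the original for loop could be fixed the same way, building the formula from the string instead of using the bare name predictor (this is my adaptation, not part of the answer above):
glm_list <- list()
for (predictor in predictors) {
  f <- as.formula(paste("response_var ~", predictor))
  glm_list[[predictor]] <- glm(f, data = mydata, family = binomial(link = logit))
}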

Pass model formula as argument in R

I need to cross-validate several glmer models on the same data so I've made a function to do this (I'm not interested in preexisting functions for doing this). I want to pass an arbitrary glmer model to my function as the only argument. Sadly, I can't figure out how to do this, and the interwebz won't tell me.
Ideally, I would like to do something like:
model = glmer(y ~ x + (1|z), data = train_folds, family = "binomial")
model2 = glmer(y ~ x2 + (1|z), data = train_folds, family = "binomial")
And then call cross_validation_function(model) and cross_validation_function(model2). The training data within the function is called train_folds.
However, I suspect I need to pass the model formula in a different way, e.g. using reformulate.
Here is an example of my function. The project is about predicting autism (ASD) from behavioral features. The data variable is da.
library(pacman)
p_load(tidyverse, stringr, lmerTest, MuMIn, psych, corrgram, ModelMetrics,
       caret, boot)
cross_validation_function <- function(model){
  # creating folds
  participants = unique(da$participant)
  folds <- createFolds(participants, 10)
  cross_val <- sapply(seq_along(folds), function(x) {
    train_folds = filter(da, !(as.numeric(participant) %in% folds[[x]]))
    predict_fold = filter(da, as.numeric(participant) %in% folds[[x]])
    # model to be tested should be passed as an argument here
    train_model <- model
    predict_fold <- predict_fold %>%
      mutate(predictions_perc = predict(train_model, predict_fold, allow.new.levels = T),
             predictions_perc = inv.logit(predictions_perc),
             predictions = ifelse(predictions_perc > 0.5, "ASD", "control"))
    conf_mat <- caret::confusionMatrix(data = predict_fold$predictions,
                                       reference = predict_fold$diagnosis, positive = "ASD")
    accuracy <- conf_mat$overall[1]
    sensitivity <- conf_mat$byClass[1]
    specificity <- conf_mat$byClass[2]
    fixed_ef <- fixef(train_model)
    output <- c(accuracy, sensitivity, specificity, fixed_ef)
  })
  cross_df <- t(cross_val)
  return(cross_df)
}
Solution developed from the comment: with as.formula, strings can be converted into a formula, which can be passed as an argument to my function in the following way:
cross_validation_function <- function(model_formula){
  ...
  train_model <- glmer(model_formula, data = train_folds, family = "binomial")
  ...
}
formula <- as.formula("y ~ x + (1|z)")
cross_validation_function(formula)
If your aim is to extract the model formula from a fitted model, then you can use attributes(model)$call[[2]]. You can then use this formula when fitting the model on the CV folds.
mod_formula <- attributes(model)$call[[2]]
train_model <- glmer(mod_formula, data = train_data,
                     family = "binomial")
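For what it's worth, base R's formula() accessor should retrieve the same thing more directly (a small sketch; lme4 provides a formula() method for fitted merMod objects):
mod_formula <- formula(model)  # should be equivalent to attributes(model)$call[[2]] here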

Simple Way to Combine Predictions from Multiple Models for Subset Data in R

I would like to build separate models for the different segments of my data. I have built the models like so:
log1 <- glm(y ~ ., family = "binomial", data = train, subset = x1==0)
log2 <- glm(y ~ ., family = "binomial", data = train, subset = x1==1 & x2<10)
log3 <- glm(y ~ ., family = "binomial", data = train, subset = x1==1 & x2>=10)
If I run the predictions on the training data, R remembers the subsets and the prediction vectors have the length of the respective subsets.
However, if I run the predictions on the testing data, the prediction vectors have the length of the whole dataset, not that of the subsets.
My question is whether there is a simpler way to achieve this than what I do below: first subsetting the testing data, then running the predictions on each subset, concatenating the predictions, rbinding the subset data, and appending the concatenated predictions:
T1 <- subset(Test, x1==0)
T2 <- subset(Test, x1==1 & x2<10)
T3 <- subset(Test, x1==1 & x2>=10)
log1pred <- predict(log1, newdata = T1, type = "response")
log2pred <- predict(log2, newdata = T2, type = "response")
log3pred <- predict(log3, newdata = T3, type = "response")
allpred <- c(log1pred, log2pred, log3pred)
TAll <- rbind(T1, T2, T3)
TAll$allpred <- as.data.frame(allpred)
I'd like to think I am being stupid and there is an easier way to accomplish this: many models on small subsets of the data. How do I combine them to get the predictions on the full testing data?
First, here's some sample data:
set.seed(15)
train <- data.frame(x1 = sample(0:1, 100, replace = T),
                    x2 = rpois(100, 10),
                    y = sample(0:1, 100, replace = T))
test <- data.frame(x1 = sample(0:1, 10, replace = T),
                   x2 = rpois(10, 10))
Now we can fit the models. Here I place them in a list to make it easier to keep them together, and I also remove x1 from the model since it will be fixed for each subset:
fits<-list(
glm(y ~ .-x1, family = "binomial", data = train, subset = x1==0),
glm(y ~ .-x1, family = "binomial", data = train, subset = x1==1 & x2<10),
glm(y ~ .-x1, family = "binomial", data = train, subset = x1==1 & x2>=10)
)
Now, for the testing data, I create an indicator which specifies which group each observation falls into. I do this by looking at the subset= parameter of each of the calls and evaluating those conditions in the test data:
whichsubset <- as.vector(sapply(fits, function(x) {
  subsetparam <- x$call$subset
  eval(subsetparam, test)
}) %*% matrix(1:length(fits), ncol = 1))
You'll want to make sure your groups are mutually exclusive because this code does not check. Then you can use this indicator with a split/unsplit strategy for making your predictions:
unsplit(
  Map(function(a, b) predict(a, b),
      fits, split(test, whichsubset)),
  whichsubset
)
An even easier strategy would have been just to create the segregating factor in the first place. This would make the model fitting easier as well.
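A sketch of that idea using the sample data above (the seg() helper, the grp column, and the group labels are mine, for illustration only):
## define the segregating factor once, with fixed levels, for train and test alike
seg <- function(d) with(d, factor(ifelse(x1 == 0, "A", ifelse(x2 < 10, "B", "C")),
                                  levels = c("A", "B", "C")))
train$grp <- seg(train)
test$grp <- seg(test)
## one model per group; x1 (and grp) stay out of the right-hand side
fits2 <- lapply(split(train, train$grp),
                function(d) glm(y ~ x2, family = "binomial", data = d))
## predict within each group, then reassemble in the original row order
test$pred <- unsplit(Map(function(f, d) predict(f, d, type = "response"),
                         fits2, split(test, test$grp)),
                     test$grp)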

ANOVA after using glm.fit

I would like to perform a likelihood ratio test to determine the power of a model term in a DOE. Till now I have been using the p-value from the glm fit to do this and things have been fine. As I started to use the anova function, I realized that there does not seem to be an anova function designed to accept the input from a glm.fit function, only a glm function. Here is an example of what I would like to do:
X # This is a model matrix from model.matrix
y # These are the y values for the fit
tfit = glm.fit(X, y, family = poisson())
anova(tfit, test = 'LRT')
Typically I would assume that the anova function call would just need to be altered to anova.glm, but that is not the case. How can I get the glm.fit function output to be compatible with an anova function input?
The problem is that glm.fit does not return an object of class glm, but a raw list with all kinds of data about the model. This cannot be fed to anova.glm, since that function expects an object of class glm as produced by the glm function. If you have the raw data available (thus not already turned into a model matrix), you can apply the glm function to it to produce the desired outcome.
X <- matrix(c(runif(10), rnorm(10)), ncol = 2)
y <- round(runif(10, 1, 5))
X.mm <- model.matrix(y ~ X)
model.fit.1 <- glm.fit(X.mm, y, family = poisson())
class(model.fit.1)  # "list"
model.fit.2 <- glm(y ~ X, family = "poisson")
class(model.fit.2)  # "glm" "lm"
anova(model.fit.2, test = "LRT")
If you can't use the glm function and must use glm.fit, then you can construct the LRT yourself from the glm.fit output. As a start, take the following function:
LRT.glm.fit <- function(glm.fit.mod){
  df.null <- glm.fit.mod$df.null
  df.mod <- glm.fit.mod$df.residual
  dev.null <- glm.fit.mod$null.deviance
  dev.mod <- glm.fit.mod$deviance
  dev.diff <- dev.null - dev.mod
  p.value <- 1 - pchisq(dev.diff, df.null - df.mod)
  output <- c(round(df.null), round(df.mod), dev.null, dev.mod, p.value)
  names(output) <- c("df.null", "df.mod", "dev.null", "dev.mod", "p.value")
  output
}
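Applied to the glm.fit object from the earlier snippet, usage would look like this:
LRT.glm.fit(model.fit.1)
# named vector with df.null, df.mod, dev.null, dev.mod and the chi-squared p.value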
