Plot the results of a multivariate logistic regression model in R

I would like to plot the results of a multivariate logistic regression analysis (GLM), specifically a given independent variable's adjusted relationship with the binary outcome (i.e. independent of the confounders included in the model).
I have seen posts that recommend the following method using the predict command followed by curve; here's an example:
x <- data.frame(binary.outcome, cont.exposure)
model <- glm(binary.outcome ~ cont.exposure, family=binomial, data=x)
plot(cont.exposure, binary.outcome, xlab="Temperature",ylab="Probability of Response")
curve(predict(model, data.frame(cont.exposure=x), type="resp"), add=TRUE, col="red")
However, this does not seem to work for multivariate regression models. I get the following error when I add 'age' (arbitrary; it could be any variable of the same length) as a confounding variable:
> x <- data.frame(binary.outcome, cont.exposure, age)
> model <- glm(binary.outcome ~ cont.exposure + age, family=binomial, data=x)
> plot(cont.exposure, binary.outcome, xlab="Temperature",ylab="Probability of Response")
> curve(predict(model, data.frame(cont.exposure=x), type="resp"), add=TRUE, col="red")
Error in model.frame.default(Terms, newdata, na.action = na.action, xlev = object$xlevels) :
variable lengths differ (found for 'age')
In addition: Warning message:
'newdata' had 101 rows but variable(s) found have 698 rows
The above model is a simplified version of the models I'd like to run, but the principle is the same: I would like to plot the relationship between a binary outcome variable and a continuous exposure, independent of confounding factors.
It would be great to get either a workaround for the above or an alternative way to view the relationship I am interested in. Many thanks.

The error occurs because the newdata passed to predict() must contain every variable in the model, including age; curve() only supplies the x variable. Instead, build a prediction grid over both the exposure and the confounder. A simulated example:
set.seed(12345)
# simulate a binary outcome driven by a continuous exposure and a confounder
dataset <- expand.grid(Temp = rnorm(30), Age = runif(10))
dataset$Truth <- with(dataset, plogis(2 * Temp - 3 * Age))
dataset$Sample <- rbinom(nrow(dataset), size = 1, prob = dataset$Truth)
model <- glm(Sample ~ Temp + Age, data = dataset, family = binomial)
# prediction grid covering both predictors
newdata <- expand.grid(
  Temp = pretty(dataset$Temp, 20),
  Age = pretty(dataset$Age, 5))
newdata$Sample <- predict(model, newdata = newdata, type = "response")
library(ggplot2)
# one panel per Age value
ggplot(newdata, aes(x = Temp, y = Sample)) + geom_line() + facet_wrap(~ Age)
# or all Age values on one panel, coloured by Age
ggplot(newdata, aes(x = Temp, y = Sample, colour = Age, group = Age)) +
  geom_line()
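Applied back to the original question's variables, the same idea yields a single adjusted curve: hold the confounder at its mean and predict over a grid of the exposure. A minimal sketch, assuming the data frame x and model from the question:
# grid over the exposure, with the confounder fixed at its mean
nd <- data.frame(cont.exposure = seq(min(x$cont.exposure), max(x$cont.exposure),
                                     length.out = 100),
                 age = mean(x$age))
plot(x$cont.exposure, x$binary.outcome,
     xlab = "Temperature", ylab = "Probability of Response")
lines(nd$cont.exposure, predict(model, newdata = nd, type = "response"),
      col = "red")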

Related

How to plot predicted data of a GLMM from the package glmmTMB?

I have the following data and created a model with the package glmmTMB in R for plant diameter ~ plant density (number of plants), with a random plot effect:
d <- data.frame(diameter = c(17,16,15,13,11, 19,17,15,11,11, 19,15,14,11,8),
                plant_density = c(1000,2000,3000,4000,5000, 1000,2000,3000,4000,5000, 1000,2000,3000,4000,5000),
                plot = c(1,1,1,1,1, 2,2,2,2,2, 3,3,3,3,3))
glmm.model <- glmmTMB(diameter ~ plant_density + (1|plot),
                      data = d,
                      na.action = na.omit,
                      family = "gaussian",
                      ziformula = ~ 0)
My intention was to create a plot of predicted diameters for different plant densities, with the random plot effect included. So I tried to predict the data:
new.dat <- data.frame(diameter = d$diameter,
                      plant_density = d$plant_density,
                      plot = d$plot)
new.dat$prediction <- predict(glmm.model, new.data = new.dat,
                              type = "response", re.form = NA)
Unfortunately, I get a prediction for every plot, but I wanted a generalized prediction for diameter ~ plant density.
My goal is to create a plot like the one here, but based on a regression model from glmmTMB that considers the random effect.
Thanks for your help!
The ggeffects package makes this type of thing very easy to implement and customize. For example:
library('ggplot2')
library('glmmTMB')
library('ggeffects')
d <- data.frame(diameter = c(17,16,15,13,11, 19,17,15,11,11, 19,15,14,11,8),
                plant_density = c(1000,2000,3000,4000,5000, 1000,2000,3000,4000,5000, 1000,2000,3000,4000,5000),
                plotx = as.factor(c(1,1,1,1,1, 2,2,2,2,2, 3,3,3,3,3)))
glmm.model <- glmmTMB(diameter ~ plant_density + (1|plotx),
                      data = d,
                      family = "gaussian")
# basically what you're looking for
plot(ggpredict(glmm.model, terms = "plant_density"))
# with an additional change of limits on the y-axis
plot(ggpredict(glmm.model, terms = "plant_density")) +
  scale_y_continuous(limits = c(0, 20))
From there you can do whatever you'd like with it: change colours, themes, scales. The package has some nice vignettes as well.
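If you'd rather build the plot by hand with predict(), the same population-level curve can be drawn directly; a sketch, assuming a glmmTMB version whose predict() supports re.form (and note the argument is newdata, not new.data):
# population-level predictions: re.form = NA drops the random plot effect;
# plotx = NA with allow.new.levels = TRUE satisfies the grouping variable
new.dat <- data.frame(plant_density = seq(1000, 5000, by = 100), plotx = NA)
new.dat$pred <- predict(glmm.model, newdata = new.dat, type = "response",
                        re.form = NA, allow.new.levels = TRUE)
ggplot(d, aes(x = plant_density, y = diameter)) +
  geom_point() +
  geom_line(data = new.dat, aes(y = pred))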

Plotting multiple logistic regression in R

I've built a logistic regression model that includes four predictors, selected from a data frame containing ten predictors (I've uploaded the data here: http://www.filedropper.com/df). This is my first time trying to plot a logistic model in R, and I'm not sure how to go about visualizing the results. Here's my code:
df <- read.csv("df.csv")
model <- glm(y ~ v1 + v6 + v9 + v3, data = df,
family = binomial, maxit = 100)
summary(model)
df$pred_res <- predict(model, df, type = "response")
ggplot(data = df, aes(x = v3, y = pred_res)) +
geom_line(mapping = aes(colour = "blue"), size = 1)
Can anyone suggest a better way to visualize/plot the predicted values of the model? In general, does anyone have any tips for visualizing a logit model with multiple predictors? Thank you!
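One common approach, along the lines of the first answer above: vary the predictor of interest over a grid while holding the other predictors at typical values, then plot the predicted probability. A sketch, assuming v1, v6 and v9 are numeric (use reference levels instead if they are factors):
library(ggplot2)
# grid over v3, with the other predictors fixed at their medians
nd <- data.frame(v3 = seq(min(df$v3), max(df$v3), length.out = 100),
                 v1 = median(df$v1),
                 v6 = median(df$v6),
                 v9 = median(df$v9))
nd$prob <- predict(model, newdata = nd, type = "response")
ggplot(nd, aes(x = v3, y = prob)) +
  geom_line(colour = "blue", size = 1) +
  labs(y = "Predicted probability")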

In R, how can I use sjPlot with cbind() outcome of glm model?

I'm trying to use the sjp.glm command from the sjPlot package to plot the results of a glm model. All works fine until I specify the outcome as a two-column matrix of successes and failures, which is one of the ways glm can be called.
To illustrate the problem:
library(sjPlot)
library(sjmisc)
library(dplyr)  # for the pipe used below
data(efc)
# example from docs
y <- ifelse(efc$neg_c_7 < median(na.omit(efc$neg_c_7)), 0, 1)
# create data frame for fitting model
df <- data.frame(y = to_factor(y),
                 sex = to_factor(efc$c161sex),
                 dep = to_factor(efc$e42dep),
                 barthel = efc$barthtot,
                 education = to_factor(efc$c172code))
fit1 <- glm(y ~., data = df, family = binomial(link = "logit"))
# plot works
sjp.glm(fit1)
# similar to the above, but with aggregated counts
y0 <- ifelse(efc$neg_c_7 < median(na.omit(efc$neg_c_7)), 0, 1)
y1 <- ifelse(efc$neg_c_7 >= median(na.omit(efc$neg_c_7)), 0, 1)
df <- data.frame(y0 = y0,
                 y1 = y1,
                 sex = to_factor(efc$c161sex),
                 dep = to_factor(efc$e42dep),
                 barthel = efc$barthtot,
                 education = to_factor(efc$c172code))
df_agg <- df %>%
  dplyr::group_by(sex, dep, barthel, education) %>%
  dplyr::summarise(c0 = sum(y0),
                   c1 = sum(y1))
fit2 <- glm(cbind(c0, c1) ~ ., data = df_agg, family = binomial(link = "logit"))
# plot does not work
sjp.glm(fit2)
Throws an error message:
Adding missing grouping variables: `sex`, `dep`, `barthel`
Error in cbind_all(x) : Argument 2 must be length 275, not 213
In addition: Warning message:
Unknown columns: `cbind(c0, c1)`
Any solutions for that?
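One possible workaround (a sketch, not tested against sjPlot): refit the model in the equivalent proportion-plus-weights form, which keeps the response a single column rather than a cbind() matrix:
# for glm, successes/total with weights = total is equivalent to cbind(c0, c1)
df_agg$n <- df_agg$c0 + df_agg$c1
fit3 <- glm(I(c0 / n) ~ sex + dep + barthel + education,
            weights = n, data = df_agg,
            family = binomial(link = "logit"))
sjp.glm(fit3)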

Visualize Multilevel Growth Model with nlme/ggplot2 vs lme4/ggplot2

I am trying to visualize the results of an nlme object without success. When I do so with an lmer object, the correct plot is created. My goal is to use nlme and visualize a fitted growth curve for each individual with ggplot2. The predict() function seems to work differently with nlme and lmer objects.
Model:
# AR(1) with REML
autoregressive <- lme(NPI ~ time,
                      data = data,
                      random = ~ time | patient,
                      method = "REML",
                      na.action = "na.omit",
                      control = list(maxIter = 5000, opt = "optim"),
                      correlation = corAR1())
nlme visualization attempt:
data <- na.omit(data)
data$patient <- factor(data$patient, levels = 1:23)
ggplot(data, aes(x = time, y = NPI, colour = factor(patient))) +
  geom_point(size = 1) +
  #facet_wrap(~ patient) +
  geom_line(aes(y = predict(autoregressive, level = 1)), size = 1)
When I use:
data$fit <- fitted(autoregressive, level = 1)
geom_line(aes(y = fitted(autoregressive), group = patient))
it returns the same fitted values for each individual, so ggplot produces the same growth curve for each. Running test <- data.frame(ranef(autoregressive, level = 1)) returns varying intercepts and slopes by patient id. Interestingly, when I fit the model with lmer and run the code below, it returns the correct plot. Why does predict() work differently with nlme and lmer objects?
timeREML <- lmer(NPI ~ time + (time | patient),
                 data = data,
                 REML = TRUE, na.action = na.omit)
ggplot(data, aes(x = time, y = NPI, colour = factor(patient))) +
  geom_point(size = 3) +
  #facet_wrap(~ patient) +
  geom_line(aes(y = predict(timeREML)))
In creating a reproducible example, I found that the error was occurring neither in predict() nor in ggplot(), but in the lme model itself.
Data:
###libraries
library(nlme)
library(tidyr)
library(ggplot2)
###example data
df <- data.frame(replicate(78, sample(seq(from = 0, to = 100, by = 2),
                                      size = 25, replace = FALSE)))
##add id
df$id <- 1:nrow(df)
##rearrange cols
df <- df[c(79, 1:78)]
##sort columns
df[,2:79] <- lapply(df[,2:79], sort)
##long format
df <- gather(df, time, value, 2:79)
##convert time to numeric
df$time <- factor(df$time)
df$time <- as.numeric(df$time)
##order by id, time, value
df <- df[order(df$id, df$time),]
##order value
df$value <- sort(df$value)
Model 1 with no NA values fits successfully.
###model1
model1 <- lme(value ~ time,
              data = df,
              random = ~ time | id,
              method = "ML",
              na.action = "na.omit",
              control = list(maxIter = 5000, opt = "optim"),
              correlation = corAR1(0, form = ~ time | id, fixed = FALSE))
Introducing NAs causes a non-invertible coefficient matrix error in model 1.
###model 1 with one NA value
df[3, 3] <- NA
model1 <- lme(value ~ time,
              data = df,
              random = ~ time | id,
              method = "ML",
              na.action = "na.omit",
              control = list(maxIter = 2000, opt = "optim"),
              correlation = corAR1(0, form = ~ time | id, fixed = FALSE))
But not in model 2, which has a simpler within-group AR(1) correlation structure.
###but not in model2
model2 <- lme(value ~ time,
              data = df,
              random = ~ time | id,
              method = "ML",
              na.action = "na.omit",
              control = list(maxIter = 2000, opt = "optim"),
              correlation = corAR1(0, form = ~ 1 | id))
However, changing opt="optim" to opt="nlminb" fits model 1 successfully.
###however, changing opt to "nlminb", model 1 runs
model3 <- lme(value ~ time,
              data = df,
              random = ~ time | id,
              method = "ML",
              na.action = "na.omit",
              control = list(maxIter = 2000, opt = "nlminb"),
              correlation = corAR1(0, form = ~ time | id, fixed = FALSE))
The code below visualizes model 3 (formerly model 1) successfully.
df <- na.omit(df)
ggplot(df, aes(x = time, y = value)) +
  geom_point(aes(colour = factor(id))) +
  #facet_wrap(~ id) +
  geom_line(aes(y = predict(model3, level = 0)), size = 1.3, colour = "black") +
  geom_line(aes(y = predict(model3, level = 1), colour = factor(id)), size = 1)
Note that I am not exactly sure what changing the optimizer from "optim" to "nlminb" does and why it works.
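As for the original question of why predict() behaves differently: nlme and lme4 select random effects through different arguments. A short sketch of the correspondence, using the models fitted above:
# nlme: 'level' picks the grouping level
predict(autoregressive, level = 0)  # fixed effects only (population curve)
predict(autoregressive, level = 1)  # includes the patient random effects
# lme4: 're.form' picks which random effects to include
predict(timeREML, re.form = NA)     # fixed effects only (population curve)
predict(timeREML, re.form = NULL)   # all random effects (the default)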

estimate h2o glm coefficients by a categorical variable level

I would like to estimate a coefficient for a predictor at each level of a categorical variable in an h2o GLM. For example, if my data frame has product price (continuous) and product type (categorical), then I want to estimate a coefficient for price by product type. In SAS, you can easily accomplish this by specifying the model effect price*type. How can I do the same in h2o or R?
There is an interactions() function, but it cannot handle interactions between continuous and categorical variables. Any tips to get around this problem?
Many thanks,
set.seed(1234)
x1 = rnorm(100,0,1)
x2 = as.factor(rep(c("A","B","C","D"), each = 25))
y = as.factor(rep(0:1, each = 50))
data = data.frame(x1 = x1, x2 = x2, y = y)
Interactions can be specified using a ":" in the formula argument:
# glm base example
fit <- glm(data = data, y ~ x1 + x2 + x1:x2, family = "binomial")
print(fit)
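If the goal is the per-level slope itself, it can be read off the fitted coefficients. A sketch using the simulated data above, assuming R's default treatment contrasts (level "A" is the reference):
cf <- coef(fit)
slope_A <- cf["x1"]                 # slope of x1 for the reference level "A"
slope_B <- cf["x1"] + cf["x1:x2B"]  # slope of x1 for level "B", and so on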
Using h2o.glm, pairwise interactions can be specified by passing column indices to the interactions argument:
# h2o.glm example
library("h2o")
h2o.init(nthreads = -1)
data.hex = as.h2o(data)
h2o_fit <- h2o.glm(x = 1:2, y = 3, training_frame = data.hex, family = "binomial", interactions = 1:2)
h2o_fit@model$coefficients_table
h2o.shutdown(prompt = F)
