Calculating working residuals of a Gamma GLM in R

I am trying to calculate the working residuals of a Gamma GLM model. I'm doing this manually because I want to calculate the partial residuals step-by-step. My model and its coefficients and predictions are described below:
library(datasets)
data(mtcars)
model <- glm(mpg ~ cyl + disp + hp, data=mtcars, family=Gamma)
coefs <- coef(model)
pred <- coefs[1] + coefs[2]*mtcars$cyl + coefs[3]*mtcars$disp + coefs[4]*mtcars$hp
I tried to calculate the working residuals by applying the formula (value - fitted.value) / fitted.value, which works fine for a Poisson GLM. However, it didn't work for Gamma, since the values differ from those returned by resid():
(mtcars$mpg - (-pred^(-1))) / (-pred^(-1))
resid(model, type="working")
Does anybody know how to estimate such working residuals to then calculate the partial residuals?

The working residuals are just model$residuals. See ?glm
## setup
library(datasets)
data(mtcars)
model <- glm(mpg ~ cyl + disp + hp, data = mtcars, family = Gamma)
## family info
oo <- Gamma(link = "inverse")
## compute linear predictor manually (assuming no model offset)
coefs <- coef(model)
eta <- coefs[1] + coefs[2] * mtcars$cyl + coefs[3] * mtcars$disp +
  coefs[4] * mtcars$hp
## compute working residuals
resi_working <- (mtcars$mpg - oo$linkinv(eta)) / oo$mu.eta(eta)
## validation
range(resi_working - model$residuals)
#[1] 0 0
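Since the end goal is partial residuals, here is a sketch of that next step (not part of the original answer; it assumes R's usual definition, where each term's contribution is centred). The working residual is (y - mu) * d(eta)/d(mu), which is what the code above computes; the partial residual for a predictor adds that predictor's centred contribution to the linear predictor:
## sketch: partial residuals for one predictor, e.g. hp
partial_hp <- resi_working + coefs["hp"] * (mtcars$hp - mean(mtcars$hp))
## cross-check against R's built-in partial residuals
## (differences should be at floating-point level)
range(partial_hp - residuals(model, type = "partial")[, "hp"])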

Related

Converting logistic regression output from log odds to probability

I initially made this model for a class. Looking back at it, I found that when I tried to convert my logistic regression output to probabilities, I got values greater than 1. I am using the following dataset: https://stats.idre.ucla.edu/stat/data/binary.csv
My code to set up the model:
mydata <- read.csv("https://stats.idre.ucla.edu/stat/data/binary.csv")
mydata$rank<- factor(mydata$rank)
mylogit <- glm(admit ~ gre + gpa + rank, data=mydata, family="binomial")
summary(mylogit)
Now I exponentiate these coefficients to get the odds (stored in odds):
odds <- exp(coef(mylogit))
and convert the odds to probability:
odds/(1 + odds)
# (Intercept)        gre        gpa      rank2      rank3      rank4
#  0.01816406 0.50056611 0.69083749 0.33727915 0.20747653 0.17487497
This output does not make sense: a probability must be less than 1, yet if GRE is 300, GPA is 3, and rank2 is true (all reasonable values), combining these per-coefficient "probabilities" would give a result much greater than 1.
What is my mistake here? What would be the correct way to convert this to probability?
Your formula p/(1 + p) is being applied to individual odds ratios; you need the sigmoid (inverse logit) function instead.
You need to sum all the variable terms (the linear predictor) before applying the sigmoid function.
You need to multiply the model coefficients by actual covariate values; otherwise you are implicitly assuming all the x's are equal to 1.
Here is an example using mtcars data set
mod <- glm(vs ~ mpg + cyl + disp, mtcars, family="binomial")
z <- coef(mod)[1] + sum(coef(mod)[-1]*mtcars[1, c("mpg", "cyl", "disp")])
1/(1 + exp(-z))
# 0.3810432
which we can verify using
predict(mod, mtcars[1, c("mpg", "cyl", "disp")], type="response")
# 0.3810432
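Applying the same idea to the admissions model in the question (a sketch; the covariate values GRE = 300, GPA = 3, rank = 2 are the ones the asker mentions):
## linear predictor for GRE = 300, GPA = 3, rank = 2
z <- coef(mylogit)["(Intercept)"] +
  coef(mylogit)["gre"] * 300 +
  coef(mylogit)["gpa"] * 3 +
  coef(mylogit)["rank2"] * 1
## convert to a probability with the sigmoid (inverse logit)
plogis(z)  # same as 1/(1 + exp(-z))
## cross-check with predict()
predict(mylogit,
        newdata = data.frame(gre = 300, gpa = 3, rank = factor(2, levels = 1:4)),
        type = "response")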

Getting the same results for two different models in glm() in RStudio

I'm new to R and am having some trouble with the glm() function.
I have some data in a data frame called data1. When the linear predictor is just x, glm() works fine, but as soon as I change the linear predictor to x + x^2, it starts giving me the same results that I got for the first model.
The code is as follows:
model1 <- glm(y ~ x, data=data1, family=poisson (link="log"))
coef(model1)
(Intercept)         x
  0.3396339 0.2565236
model2 <- glm(y ~ x + x^2, data=data1, family=poisson (link="log"))
coef(model2)
(Intercept)         x
  0.3396339 0.2565236
As you can see there's no coefficient for x^2 as if it's not even in the model.
The lm and glm functions have a special interpretation of the model formula (see ?formula), which can be confusing if you are not expecting it: in a formula, (w + x)^2 means a*w + b*x + c*w*x + d, i.e. main effects plus their interaction, not literal squares. For a single variable, x^2 therefore collapses to just x, which is why your second model is identical to the first. If you want a literal square, you need to wrap the term in the as-is function, I().
model2 <- glm(gear ~ disp + I(disp^2),
              data = mtcars, family = poisson(link = "log"))
coef(model2)
#  (Intercept)          disp     I(disp^2)
#  1.542059e+00 -1.248689e-03  6.578518e-07
Put another way, I() allows you to perform transformations inside the call to glm. The following is equivalent.
mtcars1 <- mtcars
mtcars1$disp_sq <- mtcars1$disp^2
model2a <- glm(gear ~ disp + disp_sq,
               data = mtcars1, family = poisson(link = "log"))
coef(model2a)
#  (Intercept)          disp       disp_sq
#  1.542059e+00 -1.248689e-03  6.578518e-07
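Applied to the model in the question, the fix would look like this (a sketch, since the asker's data1 is not reproduced here):
model2 <- glm(y ~ x + I(x^2), data = data1, family = poisson(link = "log"))
coef(model2)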

Calculating confusion matrix for fixed effect logit

I would like to ask how to calculate a confusion matrix for a fixed-effects logit model (bife package).
With the basic logit model (glm) there is no problem, but with fixed effect logit there is.
For some reason the number of predictions is different for logit and fixed effect logit.
Example:
library(bife)
library(tidyverse)
library(caret)
dataset <- psid
logit <- glm(LFP ~ AGE + I(AGE^2) + log(INCH) + KID1 + KID2 + KID3, data = dataset, family = "binomial")
mod <- bife(LFP ~ AGE + I(AGE^2) + log(INCH) + KID1 + KID2 + KID3 | ID, dataset)
summary(mod)
summary(logit)
predict(logit)
predict(mod)
Y <- factor(dataset$LFP)
PRE <- factor(round(predict(logit, type = "response")))
PRE_FIX <- factor(round(predict(mod, type = "response")))
confusionMatrix(Y, PRE)
# Not working:
confusionMatrix(Y, PRE_FIX)
It is possible to compute the confusion matrix with table() (named conf_mat here to avoid masking caret's confusionMatrix function):
conf_mat <- table(true = Y, pred = round(predict(mod, type = "response")))
The result is already a 2x2 table; with 1 treated as the positive class, its cells read as:
        pred
true               0               1
   0    TrueNegative   FalsePositive
   1   FalseNegative    TruePositive
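If you still want caret's summary statistics, confusionMatrix() also accepts a pre-computed table (a sketch, assuming the predictions from mod line up with Y; caret expects predictions in rows and the truth in columns, hence the transpose):
library(caret)
cm <- confusionMatrix(t(conf_mat), positive = "1")
cm$table
cm$overall["Accuracy"]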

How to compare a model with no random effects to a model with a random effect using lme4?

I can use gls() from the nlme package to build mod1 with no random effects.
I can then compare mod1 using AIC to mod2 built using lme() which does include a random effect.
mod1 = gls(response ~ fixed1 + fixed2, method="REML", data)
mod2 = lme(response ~ fixed1 + fixed2, random = ~1 | random1, method="REML",data)
AIC(mod1,mod2)
Is there something similar to gls() for the lme4 package which would allow me to build mod3 with no random effects and compare it to mod4 built using lmer() which does include a random effect?
mod3 = ???(response ~ fixed1 + fixed2, REML=T, data)
mod4 = lmer(response ~ fixed1 + fixed2 + (1|random1), REML=T, data)
AIC(mod3,mod4)
With modern (>1.0) versions of lme4 you can make a direct comparison between lmer fits and the corresponding lm model, but you have to use ML; it's hard to come up with a sensible analogue of the "REML criterion" for a model without random effects (because it would involve a linear transformation of the data that sets all of the fixed effects to zero ...).
You should be aware that there are theoretical issues with information-theoretic comparisons between models with and without variance components: see the GLMM FAQ for more information.
library(lme4)
fm1 <- lmer(Reaction~Days+(1|Subject),sleepstudy, REML=FALSE)
fm0 <- lm(Reaction~Days,sleepstudy)
AIC(fm1,fm0)
##     df      AIC
## fm1  4 1802.079
## fm0  3 1906.293
I prefer output in this format (delta-AIC rather than raw AIC values):
bbmle::AICtab(fm1,fm0)
##      dAIC df
## fm1   0.0 4
## fm0 104.2 3
To test, let's simulate data with no random effect (I had to try a couple of random-number seeds to get an example where the among-subject std dev was actually estimated as zero):
rr <- simulate(~ Days + (1|Subject),
               newparams = list(theta = 0, beta = fixef(fm1),
                                sigma = sigma(fm1)),
               newdata = sleepstudy,
               family = "gaussian",
               seed = 103)[[1]]
ss <- transform(sleepstudy,Reaction=rr)
fm1Z <- update(fm1,data=ss)
VarCorr(fm1Z)
##  Groups   Name        Std.Dev.
##  Subject  (Intercept)  0.000
##  Residual              29.241
fm0Z <- update(fm0,data=ss)
all.equal(c(logLik(fm0Z)),c(logLik(fm1Z))) ## TRUE
While I agree with Ben that the simplest solution is to set REML=FALSE, the maximum REML likelihood for a model without random effects is well defined and fairly straightforward to compute via the well-known relation between the ordinary likelihood and the restricted likelihood: for a linear model with p fixed-effect parameters, design matrix X, and REML variance estimate sigma^2, the restricted log-likelihood is
logLik_REML = logLik(beta.hat, sigma^2) + (p/2)*log(2*pi*sigma^2) - (1/2)*log det(X'X),
where logLik(beta.hat, sigma^2) is the ordinary log-likelihood evaluated at the REML estimates.
The following code simulates data for which the estimated variance of the random intercept of an LMM ends up at 0, so that the maximum restricted log-likelihood of the LMM should equal the restricted likelihood of the model without any random effects.
The restricted likelihood of the LM is computed via the above formula and evaluates to the same value as that of the LMM.
An even simpler alternative is to use glmmTMB, which accepts REML = TRUE even without random effects (shown at the end of the code below):
library(lme4)
#> Loading required package: Matrix
# simulate some toy data for which the LMM ends up at the boundary
set.seed(5)
n <- 100 # the sample size
x <- rnorm(n)
y <- rnorm(n)
group <- factor(rep(1:10,10))
# fit the LMM via REML
mod1 <- lmer(y ~ x + (1|group), REML=TRUE, control=lmerControl(boundary.tol=1e-8))
#> boundary (singular) fit: see ?isSingular
logLik(mod1)
#> 'log Lik.' -147.8086 (df=4)
# fit a model without random effects and compute its maximum REML log likelihood
mod0 <- lm(y ~ x)
p <- length(coef(mod0)) # number of fixed effect parameters
X <- model.matrix(mod0) # the fixed effect design matrix
sigma.REML <- summary(mod0)$sigma # REMLE of sigma
# the maximum ordinary log likelihood evaluated at the REML estimates
logLik.lm.at.REML <- sum(dnorm(residuals(mod0), 0, sigma.REML, log=TRUE))
# the restricted log likelihood of the model without random effects (via above formula)
logLik.lm.at.REML + p/2*log(2*pi) - 1/2*(- p*log(sigma.REML^2) + determinant(crossprod(X))$modulus)
#> [1] -147.8086
#> attr(,"logarithm")
#> [1] TRUE
library(glmmTMB)
data <- data.frame(y,x,group)
logLik(glmmTMB(y~x, family = gaussian(), data=data, REML=TRUE))
#> 'log Lik.' -147.8086 (df=3)
logLik(glmmTMB(y~x+(1|group), family = gaussian(), data=data, REML=TRUE))
#> 'log Lik.' -147.8086 (df=4)

How to predict and graph non-linear varying slopes in lmer or glmer?

My goal is to calculate predicted values from a varying-intercept, varying-slope multilevel model using the lmer and glmer functions of the lme4 package in R. To make this concrete and clear, I present here a toy example with the "mtcars" data set:
Here's how I usually create predicted values from a varying-intercept, varying-slope multilevel model (this code should work just fine):
# loading the required packages and the in-built cars dataset
library(lme4)
library(ggplot2)
data(mtcars)
# the "gear" column will be the group-level factor, so we'll have cars nested
# within "gear" type
mtcars$gear <- as.factor(mtcars$gear)
# fitting varying-slope, varying-intercept model
m <- lmer(mpg ~ 1 + wt + hp + (1 + wt|gear), data=mtcars)
# creating the prediction frame
newdata <- with(mtcars, expand.grid(wt=unique(wt),
                                    gear=unique(gear),
                                    hp=mean(hp)))
# calculating predictions
newdata$pred <- predict(m, newdata, re.form=~(1 + wt|gear))
# quick ggplot2 graph
p <- ggplot(newdata, aes(x=wt, y=pred, colour=gear))
p + geom_line() + ggtitle("Varying Slopes")
The above R code should work, but if I want to create and graph predictions from a non-linear varying-intercept, varying-slope model, then it clearly fails. For simplicity and reproducibility, here's the stumbling block using the "mtcars" data set:
# key question: how to create predictions if I want to examine a non-linear
# varying slope?
# creating a squared term for a non-linear relationship
# NB: usually I use the `poly` function
mtcars$wtsq <- (mtcars$wt)^2
# fitting varying-slope, varying-intercept model with a non-linear trend
m <- lmer(mpg ~ 1 + wt + wtsq + hp + (1 + wt + wtsq|gear), data=mtcars)
# creating the prediction frame
newdata <- with(mtcars, expand.grid(wt=unique(wt),
                                    wtsq=unique(wtsq),
                                    gear=unique(gear),
                                    hp=mean(hp)))
# calculating predictions
newdata$pred <- predict(m, newdata, re.form=~(1 + wt + wtsq|gear))
# quick ggplot2 graph
# clearly not correct
p <- ggplot(newdata, aes(x=wt, y=pred, colour=gear))
p + geom_line() + ggtitle("Varying Slopes")
Clearly the prediction frame is not set up correctly. Any ideas on how to create and graph predicted values when fitting a non-linear varying-intercept, varying-slope multilevel model in R? Thanks!
The issue is that when you use expand.grid with both wt and wt^2, you create all possible combinations of wt and wt^2. This modification of your code works:
newdata <- with(mtcars, expand.grid(wt=unique(wt),
                                    gear=unique(gear),
                                    hp=mean(hp)))
newdata$wtsq <- newdata$wt^2
newdata$pred <- predict(m, newdata)
p <- ggplot(newdata, aes(x=wt, y=pred, colour=gear, group=gear))
p + geom_line() + ggtitle("Varying Slopes")
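An alternative sketch that avoids the separate wtsq column entirely: build the quadratic term inside the formula (with I() or poly()), so that predict() derives it from wt automatically and the prediction frame only needs wt, gear, and hp. The model below is purely illustrative; with only three gear groups it will likely give a singular-fit warning.
m2 <- lmer(mpg ~ wt + I(wt^2) + hp + (1 + wt + I(wt^2) | gear), data = mtcars)
newdata2 <- with(mtcars, expand.grid(wt = unique(wt),
                                     gear = unique(gear),
                                     hp = mean(hp)))
newdata2$pred <- predict(m2, newdata2)
ggplot(newdata2, aes(x = wt, y = pred, colour = gear, group = gear)) +
  geom_line() + ggtitle("Varying Slopes (quadratic term inside the formula)")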
