A GLM with interactions for overdispersed rates - r

I have measurements obtained from 2 groups (a and b) where each group has the same 3 levels (x, y, z). The measurements are counts out of totals (i.e., rates), but in group a there cannot be zeros whereas in group b there can (hard coded in the example below).
Here's my example data.frame:
set.seed(3)
df <- data.frame(count = c(rpois(15, 5), rpois(15, 5), rpois(15, 3),
                           rpois(15, 7.5), rpois(15, 2.5), rep(0, 15)),
                 group = as.factor(c(rep("a", 45), rep("b", 45))),
                 level = as.factor(rep(c(rep("x", 15), rep("y", 15), rep("z", 15)), 2)))
#add total - fixed for all
df$total <- rep(max(df$count) * 2, nrow(df))
For each level (x, y, z), I'm interested in quantifying whether there is any difference between the (average) measurements of a and b, and if so, whether that difference is statistically significant.
From what I understand, a Poisson GLM for rates seems appropriate for this type of data. In my case a negative binomial GLM may be even more appropriate, since my data are overdispersed (I tried to create that in the example data to some extent, but in my real data it is definitely the case).
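As a quick way to gauge the overdispersion (a sketch, not part of the original question): fit a plain Poisson model and compare the sum of squared Pearson residuals with the residual degrees of freedom; a ratio well above 1 points to overdispersion.
#Rough overdispersion check: Pearson chi-square divided by residual df
pois_fit <- glm(count ~ group + level + offset(log(total)),
                family = poisson, data = df)
sum(residuals(pois_fit, type = "pearson")^2) / df.residual(pois_fit)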
Following the answer I got to a previous post, I went with:
library(dplyr)
library(MASS)
df %>%
  mutate(interactions = paste0(group, ":", level),
         interactions = ifelse(group == "a", "a", interactions)) -> df2
df2$interactions = as.factor(df2$interactions)
fit <- glm.nb(count ~ interactions + offset(log(total)), data = df2)
> summary(fit)
Call:
glm.nb(formula = count ~ interactions + offset(log(total)), data = df2,
init.theta = 41.48656798, link = log)
Deviance Residuals:
Min 1Q Median 3Q Max
-2.40686 -0.75495 -0.00009 0.46892 2.28720
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.02047 0.07824 -25.822 < 2e-16 ***
interactionsb:x 0.59336 0.13034 4.552 5.3e-06 ***
interactionsb:y -0.28211 0.17306 -1.630 0.103
interactionsb:z -20.68331 2433.94201 -0.008 0.993
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for Negative Binomial(41.4866) family taken to be 1)
Null deviance: 218.340 on 89 degrees of freedom
Residual deviance: 74.379 on 86 degrees of freedom
AIC: 330.23
Number of Fisher Scoring iterations: 1
Theta: 41.5
Std. Err.: 64.6
2 x log-likelihood: -320.233
I'd expect the difference between a and b for level z to be significant. However, the Std. Error for level z seems enormous and hence the p-value is nearly 1.
My question is whether the model I'm using is set up correctly to answer my question (mainly regarding the use of the interactions factor).
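A note on the enormous standard error: every count in the b/z cell is zero, so on the log-link scale the estimate for that comparison is pushed towards minus infinity and its Wald standard error and p-value are not informative; a likelihood-ratio test is usually more useful for that particular contrast. For comparison, a more conventional way to encode per-level comparisons of b versus a is a full group-by-level interaction, with the contrasts extracted afterwards, for example via emmeans (a sketch; emmeans is not used in the original post):
library(emmeans)
#Alternative specification with an explicit group:level interaction;
#per-level rate ratios of b vs a are then obtained as contrasts
fit2 <- glm.nb(count ~ group * level + offset(log(total)), data = df)
emmeans(fit2, pairwise ~ group | level, type = "response")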

Related

How to use lapply or sapply for a GLM on multiple species separately?

I am trying to run a GLM on multiple different species in my data set. Currently I have been subsetting my data for each species and copying this code, and it has turned into quite a mess. I know there has to be a better way to do this (maybe with the lapply function?), but I'm not sure how to begin.
I'm running the model on the CPUE (catch per unit effort) for a species and using Year, Salinity, Discharge, and Rainfall as my explanatory variables.
My data is here: https://drive.google.com/file/d/1_ylbMoqevvsuucwZn2VMA_KMNaykDItk/view?usp=sharing
This is the code that I have tried. It gets the job done, but I have just been copying this code and changing the species each time. I'm hoping to find a way to simplify this process and clean up my code a bit.
library(emmeans)  # for emmeans() below

fish_df$pinfishCPUE <- ifelse(fish_df$Commonname == "Pinfish", fish_df$CPUE, 0)
#create binomial column
fish_df$binom <- ifelse(fish_df$pinfishCPUE > 0, 1, 0)
glm.full.bin = glm(binom ~ Year + Salinity + Discharge + Rainfall, data = fish_df, family = binomial)
glm.base.bin = glm(binom ~ Year, data = fish_df, family = binomial)
#step to simplify model and get appropriate order
glm.step.bin = step(glm.base.bin, scope = list(upper = glm.full.bin, lower = ~Year),
                    direction = 'forward', trace = 1, k = log(nrow(fish_df)))
#final model - may choose to reduce based on deviance and cutoff in above step
glm.final.bin = glm.step.bin
print(summary(glm.final.bin))
#calculate the LSMeans for the proportion of positive trips
lsm.b.glm = emmeans(glm.final.bin, "Year", data = fish_df)
LSMeansProp = summary(lsm.b.glm)
Output:
Call:
glm(formula = log.CPUE ~ Month + Salinity + Temperature, family = gaussian,
data = fish_B_pos)
Deviance Residuals:
Min 1Q Median 3Q Max
-3.8927 -0.7852 0.1038 0.8974 3.5887
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.38530 0.72009 3.313 0.00098 ***
Month 0.10333 0.03433 3.010 0.00272 **
Salinity -0.13530 0.01241 -10.900 < 2e-16 ***
Temperature 0.06901 0.01434 4.811 1.9e-06 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for gaussian family taken to be 1.679401)
Null deviance: 1286.4 on 603 degrees of freedom
Residual deviance: 1007.6 on 600 degrees of freedom
AIC: 2033.2
Number of Fisher Scoring iterations: 2
I would suggest the following approach: create a function for the models and then use lapply over the list that results from applying split() to the data frame by the variable Commonname:
library(emmeans)
#Load data
fish_df <- read.csv('fish_df.csv',stringsAsFactors = F)
#Code
List <- split(fish_df,fish_df$Commonname)
#Function for models
mymodelfun <- function(x) {
  #Create binomial column
  x$binom <- ifelse(x$pinfishCPUE > 0, 1, 0)
  glm.full.bin = glm(binom ~ Year + Salinity + Discharge + Rainfall, data = x, family = binomial)
  glm.base.bin = glm(binom ~ Year, data = x, family = binomial)
  #step to simplify model and get appropriate order
  glm.step.bin = step(glm.base.bin, scope = list(upper = glm.full.bin, lower = ~Year),
                      direction = 'forward', trace = 1, k = log(nrow(x)))
  #final model - may choose to reduce based on deviance and cutoff in above step
  glm.final.bin = glm.step.bin
  print(summary(glm.final.bin))
  #calculate the LSMeans for the proportion of positive trips
  lsm.b.glm = emmeans(glm.final.bin, "Year", data = x)
  LSMeansProp = summary(lsm.b.glm)
  return(LSMeansProp)
}
#Apply function
Lmods <- lapply(List,mymodelfun)
Lmods will contain the results of the models; here is an example:
Lmods$`Atlantic Stingray`
Output:
Year emmean SE df asymp.LCL asymp.UCL
2009 -22.6 48196 Inf -94485 94440
Results are given on the logit (not the response) scale.
Confidence level used: 0.95
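If a single table across species is wanted afterwards, the per-species summaries (which are data frames) can be stacked into one result; a small sketch, assuming the objects above and the dplyr package:
#Stack the per-species emmeans summaries, keeping the species name as a column
all_props <- dplyr::bind_rows(lapply(Lmods, as.data.frame), .id = "Commonname")
head(all_props)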

How to predict a response for a GLM using my values?

Apologies for any bad English, it is not my first language :)
So I have a dataset of the passengers of the Titanic, and produced the following fit summary:
glm(formula = Survived ~ factor(Pclass) + Age + I(Age^2) + Sex +
Fare + I(Fare^2), family = binomial(), data = titan)
Deviance Residuals:
Min 1Q Median 3Q Max
-2.7298 -0.6738 -0.3769 0.6291 2.4821
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 4.678e+00 6.321e-01 7.401 1.35e-13 ***
factor(Pclass)2 -1.543e+00 3.525e-01 -4.377 1.20e-05 ***
factor(Pclass)3 -2.909e+00 3.882e-01 -7.494 6.69e-14 ***
Age -6.813e-02 2.196e-02 -3.102 0.00192 **
I(Age^2) 4.620e-04 3.193e-04 1.447 0.14792
Sexmale -2.595e+00 2.131e-01 -12.177 < 2e-16 ***
Fare -9.800e-03 5.925e-03 -1.654 0.09815 .
I(Fare^2) 2.798e-05 1.720e-05 1.627 0.10373
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 964.52 on 713 degrees of freedom
Residual deviance: 641.74 on 706 degrees of freedom
(177 observations deleted due to missingness)
AIC: 657.74
Number of Fisher Scoring iterations: 5
Now I'm trying to predict the survival probability of a female aged 21 who paid 35 for her ticket fare.
I'm unable to use predict or predict.glm and am unsure why. I run the following and produce this error:
predict(glmfit, data.frame(PClass=2, Sex="female", Age=20), type="response")
Error in factor(Pclass) : object 'Pclass' not found
I then tried to calculate it the long way, that is, by multiplying my coefficients by the desired values, but the answer I get there is not right either.
(4.678e+00)+(1*-1.543e+00)+(21*-6.813e-02)+((21^2)*4.620e-04)+(35*-9.800e-03)+((35^2)*2.798e-05)
[1] 1.599287
Not sure how I could get a probability greater than 1, especially when my response is a binomial factor of 0 or 1.
Could someone please shed some light on my mistakes? Thanks in advance.
If you want to calculate the probability by hand, follow these steps:
1. Multiply the coefficients by the desired values and sum them (including the intercept).
2. Take the exponential of the output from step 1.
3. Probability = output of step 2 / (1 + output of step 2)
In your case, the output of step 1 is 1.599287. The output of step 2 will be exp(1.599287) = 4.949502. Then probability = 4.949502/(1 + 4.949502) = 0.8319187.
So, in R you can create your own function like
logit2prob <- function(logit) {
  odds <- exp(logit)
  prob <- odds / (1 + odds)
  return(prob)
}
For more details, you can visit this.
Otherwise, the suggestion by @Roland should work fine.
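Regarding the predict() error: the column names in newdata must match the variables used in the model formula exactly (Pclass, not PClass), and every variable in the formula has to be supplied, including Fare. A sketch, assuming the fitted model object is called glmfit as in the question:
#newdata column names must match the model variables exactly
newpass <- data.frame(Pclass = 2, Sex = "female", Age = 21, Fare = 35)
predict(glmfit, newdata = newpass, type = "response")
#The hand calculation can also use the built-in inverse logit:
plogis(1.599287)
#[1] 0.8319187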

Quirk in producing fitted values using an lm model in R

A colleague and I noticed this interesting quirk in the lm function in R.
Say, I am regressing y variable on an x variable and x variable is a factor level variable with two categories (0/1).
When I run the regression and examine the fitted values, there should be two distinct fitted values: one for x = 0 (the intercept) and one for x = 1 (intercept plus slope).
Instead, there are more than two: three nearly identical fitted values for x = 0 and one fitted value for x = 1.
Among the values that differ, the difference occurs at the last decimal place.
What might be occurring within R that produces this quirk? Why are the fitted values for the intercept group nearly identical but not perfectly identical?
set.seed(1995)
x <- sample(c(0,1), 100, replace = T, prob = c(.5,.5))
y <- runif(100, min = 1, max = 100)
df <- data.frame(x, y)
OLS <- lm(y ~ as.factor(x), data = df)
summary(OLS)
Call:
lm(formula = y ~ as.factor(x), data = df)
Residuals:
Min 1Q Median 3Q Max
-52.374 -25.163 1.776 25.521 46.571
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 54.503 4.176 13.05 <0.0000000000000002 ***
as.factor(x)1 -5.117 5.683 -0.90 0.37
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 28.33 on 98 degrees of freedom
Multiple R-squared: 0.008205, Adjusted R-squared: -0.001916
F-statistic: 0.8107 on 1 and 98 DF, p-value: 0.3701
table(OLS$fitted.values)
49.385426930928 54.5027935733593 54.5027935733594 54.5027935733595
54 32 13 1
My hunch is that this is the product of floating-point numerical error, as outlined in the first circle of Burns's (2011) The R Inferno?
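That hunch can be checked with the objects above (a small sketch): the spread among the x = 0 fitted values is at the level of floating-point rounding, so they collapse once rounded.
fv <- OLS$fitted.values
table(round(fv, 10))                   #the three x = 0 values collapse into one
max(abs(fv[df$x == 0] - coef(OLS)[1])) #on the order of 1e-13, i.e. rounding noise
The fitted values come out of the QR decomposition used by lm rather than from literally multiplying the printed coefficients, so observations in the same group can differ in the last couple of stored digits, which is exactly the kind of behaviour described in the first circle of The R Inferno.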

How does the predict function handle continuous values with a 0 in R for a Poisson log-link model?

I am using a Poisson GLM on some dummy data to predict ClaimCount based on two variables, Frequency and JudicialOrientation.
Dummy Data Frame:
data5 <- data.frame(
  Year = c("2006","2006","2006","2007","2007","2007","2008","2009","2010","2010","2009","2009"),
  JudicialOrientation = c("Defense","Plaintiff","Plaintiff","Neutral","Defense","Plaintiff",
                          "Defense","Plaintiff","Neutral","Neutral","Plaintiff","Defense"),
  Frequency = c(0.0, 0.06, 0.07, 0.04, 0.03, 0.02, 0, 0.1, 0.09, 0.08, 0.11, 0),
  ClaimCount = c(0, 5, 10, 3, 4, 0, 7, 8, 15, 16, 17, 12),
  Loss = c(100000, 100, 2500, 100000, 25000, 0, 7500, 5200, 900, 100, 0, 50),
  Exposure = c(10, 20, 30, 1, 2, 4, 3, 2, 1, 54, 12, 13)
)
Model GLM:
ClaimModel <- glm(ClaimCount ~ JudicialOrientation + Frequency,
                  family = poisson(link = "log"), offset = log(Exposure),
                  data = data5, na.action = na.pass)
Call:
glm(formula = ClaimCount ~ JudicialOrientation + Frequency, family = poisson(link = "log"),
data = data5, na.action = na.pass, offset = log(Exposure))
Deviance Residuals:
Min 1Q Median 3Q Max
-3.7555 -0.7277 -0.1196 2.6895 7.4768
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.3493 0.2125 -1.644 0.1
JudicialOrientationNeutral -3.3343 0.5664 -5.887 3.94e-09 ***
JudicialOrientationPlaintiff -3.4512 0.6337 -5.446 5.15e-08 ***
Frequency 39.8765 6.7255 5.929 3.04e-09 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 149.72 on 11 degrees of freedom
Residual deviance: 111.59 on 8 degrees of freedom
AIC: 159.43
Number of Fisher Scoring iterations: 6
I am using log(Exposure) as an offset as well.
I then want to use this GLM to predict claim counts for the same observations:
data5$ExpClaimCount <- predict(ClaimModel, newdata=data5, type="response")
If I understand correctly, the Poisson GLM prediction should then be:
ClaimCount = exp(-0.3493 + -3.3343*JudicialOrientationNeutral +
                 -3.4512*JudicialOrientationPlaintiff + 39.8765*Frequency + log(Exposure))
However, I tried this manually for some of the observations (in Excel, =EXP(-0.3493+0+0+LOG(10)) for observation 1, for example) but did not get the correct answer.
Is my understanding of the GLM equation incorrect?
You are right about how predict() works for a Poisson GLM. This can be verified in R:
co <- coef(ClaimModel)
p1 <- with(data5,
           exp(log(Exposure) +                              # offset
               co[1] +                                      # intercept
               ifelse(as.numeric(JudicialOrientation) > 1,  # factor term
                      co[as.numeric(JudicialOrientation)], 0) +
               Frequency * co[4]))                          # linear term
all.equal(p1, predict(ClaimModel, type = "response"), check.names = FALSE)
[1] TRUE
As indicated in the comments, you probably get the wrong results in Excel because of the different base of the logarithm: Excel's LOG() defaults to base 10, whereas R's log() (which the offset and predict() use) is the natural logarithm.
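For observation 1 (Defense, Frequency = 0, Exposure = 10), using the rounded coefficients from the printed summary, the difference looks like this (a small sketch, not from the original answer):
exp(-0.3493 + 0 + 0 + log(10))    #natural log, as predict() uses: about 7.05
exp(-0.3493 + 0 + 0 + log10(10))  #what Excel's =LOG(10) computes (base 10): about 1.92
In Excel, replacing =LOG(10) with =LN(10) for the offset should reproduce the R prediction up to the rounding of the coefficients.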

Likelihood ratio test: 'models were not all fitted to the same size of dataset'

I'm an absolute R beginner and need some help with my likelihood ratio tests for my univariate analyses. Here's the code:
library(lmtest)  # for lrtest()

#Univariate analysis for conscientiousness (categorical)
fit <- glm(BCS_Bin ~ Conscientiousness_cat, data = dat, family = binomial)
summary(fit)
#Likelihood ratio test
fit0 <- glm(BCS_Bin ~ 1, data = dat, family = binomial)
summary(fit0)
lrtest(fit, fit0)
The results are:
Call:
glm(formula = BCS_Bin ~ Conscientiousness_cat, family = binomial,
data = dat)
Deviance Residuals:
Min 1Q Median 3Q Max
-0.8847 -0.8847 -0.8439 1.5016 1.5527
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.84933 0.03461 -24.541 <2e-16 ***
Conscientiousness_catLow 0.11321 0.05526 2.049 0.0405 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 7962.1 on 6439 degrees of freedom
Residual deviance: 7957.9 on 6438 degrees of freedom
(1963 observations deleted due to missingness)
AIC: 7961.9
Number of Fisher Scoring iterations: 4
And:
Call:
glm(formula = BCS_Bin ~ 1, family = binomial, data = dat)
Deviance Residuals:
Min 1Q Median 3Q Max
-0.8524 -0.8524 -0.8524 1.5419 1.5419
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.82535 0.02379 -34.69 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 10251 on 8337 degrees of freedom
Residual deviance: 10251 on 8337 degrees of freedom
(65 observations deleted due to missingness)
AIC: 10253
Number of Fisher Scoring iterations: 4
For my LRT:
Error in lrtest.default(fit, fit0) :
models were not all fitted to the same size of dataset
I understand that this is happening because there are different numbers of missing observations? That's because the data come from a large questionnaire, and many more dropouts had occurred by the question assessing my predictor variable (conscientiousness) than by the outcome variable (body condition score/BCS). So I just have more data for BCS than for conscientiousness (the same error occurs for many of my other variables too).
In order to run the likelihood ratio test, the model with just the intercept has to be fit to the same observations as the model that includes Conscientiousness_cat. So, you need the subset of the data that has no missing values for Conscientiousness_cat:
dat_subset = dat[complete.cases(dat[, "Conscientiousness_cat"]), ]
You can run both models on this subset of the data and the likelihood ratio test should run without error.
In your case, you could also do:
dat_subset = dat[!is.na(dat$Conscientiousness_cat), ]
However, it's nice to have complete.cases handy when you want a subset of a data frame with no missing values across multiple variables.
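Refitting both models on that subset and rerunning the test would then look like this (a sketch using the object names from the question):
fit  <- glm(BCS_Bin ~ Conscientiousness_cat, data = dat_subset, family = binomial)
fit0 <- glm(BCS_Bin ~ 1, data = dat_subset, family = binomial)
lrtest(fit, fit0)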
Another option, which is more convenient if you're going to run multiple models but also more complex, is to first fit whatever model uses the largest number of variables from dat (since that model will exclude the largest number of observations due to missingness) and then use the update function to update that model to models with fewer variables. We just need to make sure that update uses the same observations each time, which we do with a wrapper function defined below. Here's an example using the built-in mtcars data frame:
library(lmtest)
dat = mtcars
# Create some missing values in mtcars
dat[1, "wt"] = NA
dat[5, "cyl"] = NA
dat[7, "hp"] = NA
# Wrapper function to ensure the same observations are used for each
# updated model as were used in the first model
# From https://stackoverflow.com/a/37341927/496488
update_nested <- function(object, formula., ..., evaluate = TRUE) {
  update(object = object, formula. = formula., data = object$model, ..., evaluate = evaluate)
}
m1 = lm(mpg ~ wt + cyl + hp, data=dat)
m2 = update_nested(m1, . ~ . - wt) # Remove wt
m3 = update_nested(m1, . ~ . - cyl) # Remove cyl
m4 = update_nested(m1, . ~ . - wt - cyl) # Remove wt and cyl
m5 = update_nested(m1, . ~ . - wt - cyl - hp) # Remove all three variables (i.e., model with intercept only)
lrtest(m5,m4,m3,m2,m1)
