We're trying to model a count variable with excessive zeros using a zero-inflated Poisson model (as implemented in the pscl package). Here is a (simplified) output with both categorical and continuous explanatory variables:
library(pscl)
> m1 <- zeroinfl(y ~ treatment + some_covar, data = d, dist = "poisson")
> summary(m1)
Count model coefficients (poisson with log link):
Estimate Std. Error z value Pr(>|z|)
(Intercept) 3.189253 0.102256 31.189 < 2e-16 ***
treatmentB -0.282478 0.107965 -2.616 0.00889 **
treatmentC 0.227633 0.103605 2.197 0.02801 *
some_covar 0.002190 0.002329 0.940 0.34706
Zero-inflation model coefficients (binomial with logit link):
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.67251 0.74961 0.897 0.3696
treatmentB -1.72728 0.89931 -1.921 0.0548 .
treatmentC -0.31761 0.77668 -0.409 0.6826
some_covar -0.03736 0.02684 -1.392 0.1640
summary() gave us some good answers, but we are looking for an ANOVA-like table. So the question is: is it OK to use car::Anova to obtain such a table?
> Anova(m1)
Analysis of Deviance Table (Type II tests)
Response: y
Df Chisq Pr(>Chisq)
treatment 2 30.7830 2.068e-07 ***
some_covar 1 0.8842 0.3471
It seems to work fine, but I'm not really sure whether this is a valid approach, since the documentation doesn't mention zeroinfl models (it looks like only the 'count model' part is being tested?). Do you recommend following this approach, or is there a better way?
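For comparison, one way to test a term in both components jointly is a likelihood-ratio test on nested zeroinfl models; a minimal sketch, assuming the same d and m1 as above:
library(lmtest)
# Reduced model: treatment dropped from both the count and the zero parts
m0 <- zeroinfl(y ~ some_covar, data = d, dist = "poisson")
# LR test for treatment across both components (4 df here:
# two dummy coefficients in each part)
lrtest(m1, m0)
If this chi-square differs noticeably from the Anova() value, that would support the suspicion that only the count component is being tested.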
I have data from a longitudinal study and fitted a regression using lme4::lmer. After that I calculated contrasts for these data, but I am having difficulty interpreting the results, as they were unexpected. I think I might have made a mistake in the code. Unfortunately I couldn't replicate my results with a reproducible example, but I will post both the failed example and my actual results below.
My results:
library(lme4)
library(lmerTest)
library(emmeans)
#regression
regmemory <- lmer(memory ~ as.factor(QuartileConsumption)*Age+
(1 + Age | ID) + sex + education +
HealthScore, CognitionData)
#results
summary(regmemory)
#Fixed effects:
# Estimate Std. Error df t value Pr(>|t|)
#(Intercept) -7.981e-01 9.803e-02 1.785e+04 -8.142 4.15e-16 ***
#as.factor(QuartileConsumption)2 -8.723e-02 1.045e-01 2.217e+04 -0.835 0.40376
#as.factor(QuartileConsumption)3 5.069e-03 1.036e-01 2.226e+04 0.049 0.96097
#as.factor(QuartileConsumption)4 -2.431e-02 1.030e-01 2.213e+04 -0.236 0.81337
#Age -1.709e-02 1.343e-03 1.989e+04 -12.721 < 2e-16 ***
#sex 3.247e-01 1.520e-02 1.023e+04 21.355 < 2e-16 ***
#education 2.979e-01 1.093e-02 1.061e+04 27.266 < 2e-16 ***
#HealthScore -1.098e-06 5.687e-07 1.021e+04 -1.931 0.05352 .
#as.factor(QuartileConsumption)2:Age 1.101e-03 1.842e-03 1.951e+04 0.598 0.55006
#as.factor(QuartileConsumption)3:Age 4.113e-05 1.845e-03 1.935e+04 0.022 0.98221
#as.factor(QuartileConsumption)4:Age 1.519e-03 1.851e-03 1.989e+04 0.821 0.41174
#contrasts
emmeans(regmemory, poly ~ QuartileConsumption * Age)$contrast
#$contrasts
# contrast estimate SE df z.ratio p.value
# linear 0.2165 0.0660 Inf 3.280 0.0010
# quadratic 0.0791 0.0289 Inf 2.733 0.0063
# cubic -0.0364 0.0642 Inf -0.567 0.5709
The interaction terms in the regression results are not significant, but the linear contrast is. Shouldn't the p-value for the contrast be non-significant?
Below is the code I wrote to try to recreate these results, which failed:
library(dplyr)
library(lme4)
library(lmerTest)
library(emmeans)
data("sleepstudy")
#create quartile column
sleepstudy$Quartile <- sample(1:4, size = nrow(sleepstudy), replace = TRUE)
#regression
model1 <- lmer(Reaction ~ Days * as.factor(Quartile) + (1 + Days | Subject), data = sleepstudy)
#results
summary(model1)
#Fixed effects:
# Estimate Std. Error df t value Pr(>|t|)
#(Intercept) 258.1519 9.6513 54.5194 26.748 < 2e-16 ***
#Days 9.8606 2.0019 43.8516 4.926 1.24e-05 ***
#as.factor(Quartile)2 -11.5897 11.3420 154.1400 -1.022 0.308
#as.factor(Quartile)3 -5.0381 11.2064 155.3822 -0.450 0.654
#as.factor(Quartile)4 -10.7821 10.8798 154.0820 -0.991 0.323
#Days:as.factor(Quartile)2 0.5676 2.1010 152.1491 0.270 0.787
#Days:as.factor(Quartile)3 0.2833 2.0660 155.5669 0.137 0.891
#Days:as.factor(Quartile)4 1.8639 2.1293 153.1315 0.875 0.383
#contrast
emmeans(model1, poly ~ Quartile*Days)$contrast
#contrast estimate SE df t.ratio p.value
# linear -1.91 18.78 149 -0.102 0.9191
# quadratic 10.40 8.48 152 1.227 0.2215
# cubic -18.21 18.94 150 -0.961 0.3379
In this example, the p-value for the linear contrast is non-significant, just like the interactions from the regression. Did I do something wrong, or are these results to be expected?
Look at the emmeans() call for the original model:
emmeans(regmemory, poly ~ QuartileConsumption * Age)
This requests that we obtain marginal means for combinations of QuartileConsumption and Age, and obtain polynomial contrasts from those results. It appears that Age is a quantitative variable, so in computing the marginal means, we just use the mean value of Age (see documentation for ref_grid() and vignette("basics", "emmeans")). So the marginal means display, which wasn't shown in the OP, will be in this general form:
QuartileConsumption Age emmean
------------------------------------
1 <mean> <est1>
2 <mean> <est2>
3 <mean> <est3>
4 <mean> <est4>
... and the contrasts shown will be the linear, quadratic, and cubic trends of those four estimates, in the order shown.
Note that these marginal means have nothing to do with the interaction effect; they are just predictions from the model for the four levels of QuartileConsumption at the mean Age (and mean education, mean health score), averaged over the two sexes, if I understand the data structure correctly. So essentially the polynomial contrasts estimate polynomial trends of the 4-level factor at the mean age. And note in particular that age is held constant, so we certainly are not looking at any effects of Age.
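You can confirm this by printing the reference grid (assuming the regmemory fit from above); quantitative covariates such as Age are shown at a single value, their mean:
# Show the grid emmeans averages over: covariates at their means,
# factors at their levels
ref_grid(regmemory)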
I am guessing that what you want, in order to examine the interaction, is to assess how the age trend varies over the four levels of that factor. If that is the case, one useful thing to do would be something like
slopes <- emtrends(regmemory, ~ QuartileConsumption, var = "Age")
slopes # display the estimated slope at each level
pairs(slopes) # pairwise comparisons of these slopes
See vignette("interactions", "emmeans") and the section on interactions with covariates.
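Applied to your reproducible sleepstudy example (the model1 fit from above), the same idea would look like:
day_trends <- emtrends(model1, ~ Quartile, var = "Days")
day_trends          # estimated slope of Reaction over Days in each quartile
pairs(day_trends)   # pairwise comparisons of those slopes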
Sorry that this error has been discussed before; each answer on Stack Overflow seems specific to the poster's data.
I'm attempting to run the following negative binomial model in lme4:
Model5.binomial<-glmer.nb(countvariable ~ waves + var1 + dummycodedvar2 + dummycodedvar3 + (1|record_id), data=datadfomit)
However, I receive the following error when attempting to run the model:
Error in f_refitNB(lastfit, theta = exp(t), control = control) : pwrssUpdate did not converge in (maxit) iterations
I first ran the model with only 3 predictor variables (waves, var1, dummycodedvar2) and got the same error. But centering the predictors fixed this problem and the model ran fine.
Now with 4 variables (all centered) I expected the model to run smoothly, but receive the error again.
Since every answer on this site seems to point towards a problem in the data, here is a data set that replicates the problem:
https://file.io/3vtX9RwMJ6LF
Your response variable has a lot of zeros.
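You can see this quickly (assuming the linked file has been read in as data, matching the model call below):
# Share of exact zeros, and the overall count distribution
mean(data$countvariable == 0)
hist(data$countvariable)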
I would suggest fitting a model that takes account of this, such as a zero-inflated model. The GLMMadaptive package can fit zero-inflated negative binomial mixed effects models:
library(GLMMadaptive)
# Zero-inflated negative binomial mixed model: all four predictors in the
# count part, var1 in the zero part, random intercepts in both parts
fm <- mixed_model(countvariable ~ waves + var1 + dummycodedvar2 + dummycodedvar3,
                  random = ~ 1 | record_id, data = data,
                  family = zi.negative.binomial(),
                  zi_fixed = ~ var1,
                  zi_random = ~ 1 | record_id)
summary(fm)
Random effects covariance matrix:
StdDev Corr
(Intercept) 0.8029
zi_(Intercept) 1.0607 -0.7287
Fixed effects:
Estimate Std.Err z-value p-value
(Intercept) 1.4923 0.1892 7.8870 < 1e-04
waves -0.0091 0.0366 -0.2492 0.803222
var1 0.2102 0.0950 2.2130 0.026898
dummycodedvar2 -0.6956 0.1702 -4.0870 < 1e-04
dummycodedvar3 -0.1746 0.1523 -1.1468 0.251451
Zero-part coefficients:
Estimate Std.Err z-value p-value
(Intercept) 1.8726 0.1284 14.5856 < 1e-04
var1 -0.3451 0.1041 -3.3139 0.00091993
log(dispersion) parameter:
Estimate Std.Err
0.4942 0.2859
Integration:
method: adaptive Gauss-Hermite quadrature rule
quadrature points: 11
Optimization:
method: hybrid EM and quasi-Newton
converged: TRUE
I have the following model:
ModelPower <- lmer(DV ~ GroupAbstract * Condition_Cat_Abs + (1|Participant) + (1 + GroupAbstract|Stimulus), data = Dataset)
This model gives the following output:
Random effects:
Groups Name Variance Std.Dev. Corr
Participant (Intercept) 377.401 19.427
Stimulus (Intercept) 91.902 9.587
GroupAbstractOutgroup 2.003 1.415 -0.40
Residual 338.927 18.410
Number of obs: 16512, groups: Participant, 344; Stimulus, 32
Fixed effects:
Estimate Std. Error df t value Pr(>|t|)
(Intercept) 65.8962 2.0239 59.6906 32.559 < 0.0000000000000002 ***
GroupAbstractOutgroup -0.9287 0.5561 129.9242 -1.670 0.0973 .
Condition_Cat_AbsSecondOrderIn -2.2584 0.4963 16103.9277 -4.550 0.00000539 ***
Condition_Cat_AbsSecondOrderOut -7.0821 0.4963 16103.9277 -14.270 < 0.0000000000000002 ***
GroupAbstractOutgroup:Condition_Cat_AbsSecondOrderIn -3.0229 0.7019 16103.9277 -4.307 0.00001665 ***
GroupAbstractOutgroup:Condition_Cat_AbsSecondOrderOut 7.8765 0.7019 16103.9277 11.222 < 0.0000000000000002 ***
I am interested in the interaction "GroupAbstractOutgroup:Condition_Cat_AbsSecondOrderIn" and I am trying to estimate the sample size needed to detect an effect size of at least -2 using the R package simr. The original slope is -3.02, so I specify the new one:
ModelPower@beta[names(fixef(ModelPower)) %in% "GroupAbstractOutgroup:Condition_Cat_AbsSecondOrderIn"] <- -2
However, regardless of how I specify the powerSim function, both for the main effects and for the interactions (see some examples below), I get power of 0%, and lastResult()$errors shows 'object is not a matrix'. I know what the error should mean, but even after converting the original data frame and the table of fixed effects to a matrix, the error is still there, and I am not sure what it refers to or how to get the actual output. Any help would be much appreciated!
Examples of the powerSim function:
powerSim(ModelPower, test=fixed("GroupAbstract", "anova"), nsim=10, seed=1)
powerSim(ModelPower, test=fixed("GroupAbstractOutgroup:Condition_Cat_AbsSecondOrderIn", "anova"), nsim=10, seed=1)
I am trying to understand the difference between two fitting methods for a data set with a bounded response variable. The response variable is a fraction and therefore lies in the range [0, 1]. My searching has uncovered a lot of different methods, as this is a common problem. I am currently interested in the difference between the stock R GLM fit and the beta regression offered in the betareg package. I am using the GasolineYield data set from the betareg package as my sample data set. Before I post the code and the results, my two questions are the following:
Am I performing the logistic regression fit in R using the built-in R GLM correctly?
Why are the standard errors reported in the Beta regression so much smaller than the standard errors for the R logistic regression?
R Setup Code
library(betareg)
data("GasolineYield", package = "betareg")
Beta Regression code from the "betareg" package
gy = betareg(yield ~ batch + temp, data = GasolineYield)
summary(gy)
Beta Regression summary output
Call:
betareg(formula = yield ~ batch + temp, data = GasolineYield)
Standardized weighted residuals 2:
Min 1Q Median 3Q Max
-2.8750 -0.8149 0.1601 0.8384 2.0483
Coefficients (mean model with logit link):
Estimate Std. Error z value Pr(>|z|)
(Intercept) -6.1595710 0.1823247 -33.784 < 2e-16 ***
batch1 1.7277289 0.1012294 17.067 < 2e-16 ***
batch2 1.3225969 0.1179020 11.218 < 2e-16 ***
batch3 1.5723099 0.1161045 13.542 < 2e-16 ***
batch4 1.0597141 0.1023598 10.353 < 2e-16 ***
batch5 1.1337518 0.1035232 10.952 < 2e-16 ***
batch6 1.0401618 0.1060365 9.809 < 2e-16 ***
batch7 0.5436922 0.1091275 4.982 6.29e-07 ***
batch8 0.4959007 0.1089257 4.553 5.30e-06 ***
batch9 0.3857930 0.1185933 3.253 0.00114 **
temp 0.0109669 0.0004126 26.577 < 2e-16 ***
Phi coefficients (precision model with identity link):
Estimate Std. Error z value Pr(>|z|)
(phi) 440.3 110.0 4.002 6.29e-05 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Type of estimator: ML (maximum likelihood)
Log-likelihood: 84.8 on 12 Df
Pseudo R-squared: 0.9617
Number of iterations: 51 (BFGS) + 3 (Fisher scoring)
R GLM Logistic Regression code from stock R
glmfit = glm(yield ~ batch + temp, data = GasolineYield, family = "binomial")
summary(glmfit)
R GLM Logistic Regression summary output
Call:
glm(formula = yield ~ batch + temp, family = "binomial", data = GasolineYield)
Deviance Residuals:
Min 1Q Median 3Q Max
-0.100459 -0.025272 0.004217 0.032879 0.082113
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -6.130227 3.831798 -1.600 0.110
batch1 1.720311 2.127205 0.809 0.419
batch2 1.305746 2.481266 0.526 0.599
batch3 1.562343 2.440712 0.640 0.522
batch4 1.048928 2.152385 0.487 0.626
batch5 1.125075 2.176242 0.517 0.605
batch6 1.029601 2.229773 0.462 0.644
batch7 0.540401 2.294474 0.236 0.814
batch8 0.497355 2.288564 0.217 0.828
batch9 0.378315 2.494881 0.152 0.879
temp 0.010906 0.008676 1.257 0.209
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 2.34184 on 31 degrees of freedom
Residual deviance: 0.07046 on 21 degrees of freedom
AIC: 36.631
Number of Fisher Scoring iterations: 5
The standard errors are different because the variance assumptions in the two models are different.
Logistic regression assumes the response has a binomial distribution, while beta regression assumes it has a beta distribution.
The variance functions of the two are different. For the binomial, if you specify the mean (and $n$ is given), the variance is determined. For the beta there is another free parameter, so the variance isn't determined by the mean and would presumably be estimated from the data.
This suggests that if you fit a quasibinomial GLM (adding a variance parameter) you might get closer to the same standard errors, but they still won't be the same, since they would weight the observations differently.
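A minimal sketch of that check, using the same GasolineYield data as in the question:
# Same mean model as the binomial GLM above, but with a freely
# estimated dispersion parameter
qfit <- glm(yield ~ batch + temp, data = GasolineYield,
            family = quasibinomial(link = "logit"))
summary(qfit)$coefficients[, "Std. Error"]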
What you should actually do:
If your proportions are originally counts divided by some total count, then a binomial GLM is an appropriate model to consider; you would need the total counts, though (see the sketch below).
If your proportions are continuous fractions (the proportion of milk that's cream, for example), then beta regression is an appropriate model to consider.
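For the count-based case, a sketch of how the totals would enter the fit (successes, total, x, and mydata are hypothetical names):
# Binomial GLM for a proportion backed by counts: supply the
# total count for each observation as prior weights
fit <- glm(successes / total ~ x, family = binomial,
           weights = total, data = mydata)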
I want to grab the Standard Error column when I do summary on a linear regression model. The output is below:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -8.436954 0.616937 -13.676 < 2e-16 ***
x1 -0.138902 0.024247 -5.729 1.01e-08 ***
x2 0.005978 0.009142 0.654 0.51316
...
I just want the Std. Error column values stored in a vector. How would I go about doing so? I tried model$coefficients[,2], but that keeps giving me extra values. If anyone could help, that would be great.
Say fit is the linear model, then summary(fit)$coefficients[,2] has the standard errors. Type ?summary.lm.
fit <- lm(y~x, myData)
summary(fit)$coefficients[,1] # the coefficients
summary(fit)$coefficients[,2] # the std. error in the coefficients
summary(fit)$coefficients[,3] # the t-values
summary(fit)$coefficients[,4] # the p-values
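Indexing by column name also works and is a bit more readable (same fit as above):
se <- summary(fit)$coefficients[, "Std. Error"]   # named numeric vector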