Folks,
I'm trying to print a {texreg} table of lmer() {lme4} and lme() {nlme} models including variances. The reported variances, however, differ between the two models by orders of magnitude. It seems that the lme() values are the square roots of the lmer() ones. Which ones are correct?
library(plm)
library(lme4)
library(nlme)
library(texreg)
data("Grunfeld", package="plm")
reML0 <- lmer(inv ~ value + capital + (1|firm), data=Grunfeld)
reML1 <- lme(inv ~ value + capital, data=Grunfeld, random=~1|firm)
screenreg(list(reML0, reML1), digits=3, include.variance=TRUE)
========================================================
                            Model 1       Model 2
--------------------------------------------------------
(Intercept)                  -57.864 *     -57.864
                             (29.378)      (29.378)
value                          0.110 ***     0.110 ***
                              (0.011)       (0.011)
capital                        0.308 ***     0.308 ***
                              (0.017)       (0.017)
--------------------------------------------------------
AIC                         2205.851      2205.851
Num. obs.                    200           200
Num. groups: firm             10
Variance: firm.(Intercept)  7366.992
Variance: Residual          2781.426
Num. groups                                 10
sigma                                       52.739
sigma. RE                                   85.831
========================================================
*** p < 0.001, ** p < 0.01, * p < 0.05
It seems pretty clear that it's reporting variances for Model 1 and standard deviations (sigma) for Model 2. Variance is sigma squared, so the answer to your question is "both".
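A quick numerical check (a sketch, reusing the two fits above): squaring the Model 2 standard deviations recovers the Model 1 variances.
lme4::VarCorr(reML0)   # lme4: prints variances and standard deviations
nlme::VarCorr(reML1)   # nlme: prints variances and standard deviations
sqrt(7366.992)         # 85.831 -> the "sigma. RE" row for Model 2
sqrt(2781.426)         # 52.739 -> the "sigma" row for Model 2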
I have data from a longitudinal study and fitted a regression using the lme4::lmer function. After that I calculated contrasts for these data, but I am having difficulty interpreting my results, as they were unexpected. I think I might have made a mistake in the code. Unfortunately I couldn't replicate my results with a reproducible example, but I will post both the failed example and my actual results below.
My results:
library(lme4)
library(lmerTest)
library(emmeans)
#regression
regmemory <- lmer(memory ~ as.factor(QuartileConsumption)*Age +
                    (1 + Age | ID) + sex + education + HealthScore,
                  data = CognitionData)
#results
summary(regmemory)
#Fixed effects:
# Estimate Std. Error df t value Pr(>|t|)
#(Intercept) -7.981e-01 9.803e-02 1.785e+04 -8.142 4.15e-16 ***
#as.factor(QuartileConsumption)2 -8.723e-02 1.045e-01 2.217e+04 -0.835 0.40376
#as.factor(QuartileConsumption)3 5.069e-03 1.036e-01 2.226e+04 0.049 0.96097
#as.factor(QuartileConsumption)4 -2.431e-02 1.030e-01 2.213e+04 -0.236 0.81337
#Age -1.709e-02 1.343e-03 1.989e+04 -12.721 < 2e-16 ***
#sex 3.247e-01 1.520e-02 1.023e+04 21.355 < 2e-16 ***
#education 2.979e-01 1.093e-02 1.061e+04 27.266 < 2e-16 ***
#HealthScore -1.098e-06 5.687e-07 1.021e+04 -1.931 0.05352 .
#as.factor(QuartileConsumption)2:Age 1.101e-03 1.842e-03 1.951e+04 0.598 0.55006
#as.factor(QuartileConsumption)3:Age 4.113e-05 1.845e-03 1.935e+04 0.022 0.98221
#as.factor(QuartileConsumption)4:Age 1.519e-03 1.851e-03 1.989e+04 0.821 0.41174
#contrasts
emmeans(regmemory, poly ~ QuartileConsumption * Age)$contrast
#$contrasts
# contrast estimate SE df z.ratio p.value
# linear 0.2165 0.0660 Inf 3.280 0.0010
# quadratic 0.0791 0.0289 Inf 2.733 0.0063
# cubic -0.0364 0.0642 Inf -0.567 0.5709
The interaction terms in the regression results are not significant, but the linear contrast is. Shouldn't the p-value for the contrast be non-significant?
Below is the code I wrote to try to recreate these results, but it failed to reproduce the pattern:
library(dplyr)
library(lme4)
library(lmerTest)
library(emmeans)
data("sleepstudy")
#create quartile column (randomly assigned, for illustration)
sleepstudy$Quartile <- sample(1:4, size = nrow(sleepstudy), replace = TRUE)
#regression
model1 <- lmer(Reaction ~ Days * as.factor(Quartile) + (1 + Days | Subject), data = sleepstudy)
#results
summary(model1)
#Fixed effects:
# Estimate Std. Error df t value Pr(>|t|)
#(Intercept) 258.1519 9.6513 54.5194 26.748 < 2e-16 ***
#Days 9.8606 2.0019 43.8516 4.926 1.24e-05 ***
#as.factor(Quartile)2 -11.5897 11.3420 154.1400 -1.022 0.308
#as.factor(Quartile)3 -5.0381 11.2064 155.3822 -0.450 0.654
#as.factor(Quartile)4 -10.7821 10.8798 154.0820 -0.991 0.323
#Days:as.factor(Quartile)2 0.5676 2.1010 152.1491 0.270 0.787
#Days:as.factor(Quartile)3 0.2833 2.0660 155.5669 0.137 0.891
#Days:as.factor(Quartile)4 1.8639 2.1293 153.1315 0.875 0.383
#contrast
emmeans(model1, poly ~ Quartile*Days)$contrast
#contrast estimate SE df t.ratio p.value
# linear -1.91 18.78 149 -0.102 0.9191
# quadratic 10.40 8.48 152 1.227 0.2215
# cubic -18.21 18.94 150 -0.961 0.3379
In this example, the p-value for the linear contrast is non-significant, just like the interactions from the regression. Did I do something wrong, or are these results to be expected?
Look at the emmeans() call for the original model:
emmeans(regmemory, poly ~ QuartileConsumption * Age)
This requests that we obtain marginal means for combinations of QuartileConsumption and Age, and obtain polynomial contrasts from those results. It appears that Age is a quantitative variable, so in computing the marginal means, we just use the mean value of Age (see documentation for ref_grid() and vignette("basics", "emmeans")). So the marginal means display, which wasn't shown in the OP, will be in this general form:
QuartileConsumption    Age    emmean
------------------------------------
1                    <mean>   <est1>
2                    <mean>   <est2>
3                    <mean>   <est3>
4                    <mean>   <est4>
... and the contrasts shown will be the linear, quadratic, and cubic trends of those four estimates, in the order shown.
Note that these marginal means have nothing to do with the interaction effect; they are just predictions from the model for the four levels of QuartileConsumption at the mean Age (and mean education, mean health score), averaged over the two sexes, if I understand the data structure correctly. So essentially the polynomial contrasts estimate polynomial trends of the 4-level factor at the mean age. And note in particular that age is held constant, so we certainly are not looking at any effects of Age.
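You can see exactly what is being averaged over by inspecting the reference grid; a sketch using the reproducible sleepstudy model1 from the question:
ref_grid(model1)   # 'Days' appears only at its mean value; 'Quartile' at its 4 levels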
I am guessing that what you want to do to examine the interaction is to assess how the Age trend varies over the four levels of that factor. If that is the case, one useful thing to do would be something like
slopes <- emtrends(regmemory, ~ QuartileConsumption, var = "Age")
slopes          # display the estimated Age slope at each level
pairs(slopes)   # pairwise comparisons of these slopes
See vignette("interactions", "emmeans") and the section on interactions with covariates.
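Applied to the reproducible sleepstudy example from the question (a sketch; there Days is the covariate and Quartile the factor):
slopes1 <- emtrends(model1, ~ Quartile, var = "Days")
slopes1          # estimated Days slope within each quartile
pairs(slopes1)   # pairwise comparisons of those slopes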
We're trying to model a count variable with excessive zeros using a zero-inflated Poisson model (as implemented in the pscl package). Here is a (simplified) output showing both categorical and continuous explanatory variables:
library(pscl)
m1 <- zeroinfl(y ~ treatment + some_covar, data = d, dist = "poisson")
summary(m1)
Count model coefficients (poisson with log link):
Estimate Std. Error z value Pr(>|z|)
(Intercept) 3.189253 0.102256 31.189 < 2e-16 ***
treatmentB -0.282478 0.107965 -2.616 0.00889 **
treatmentC 0.227633 0.103605 2.197 0.02801 *
some_covar 0.002190 0.002329 0.940 0.34706
Zero-inflation model coefficients (binomial with logit link):
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.67251 0.74961 0.897 0.3696
treatmentB -1.72728 0.89931 -1.921 0.0548 .
treatmentC -0.31761 0.77668 -0.409 0.6826
some_covar -0.03736 0.02684 -1.392 0.1640
summary() gave us some good answers, but we are looking for an ANOVA-like table. So the question is: is it OK to use car::Anova to obtain such a table?
Anova(m1)
Analysis of Deviance Table (Type II tests)
Response: y
Df Chisq Pr(>Chisq)
treatment 2 30.7830 2.068e-07 ***
some_covar 1 0.8842 0.3471
It seems to work fine, but I'm not really sure whether this is a valid approach, since documentation is missing (it seems to consider only the 'count model' part?). Do you recommend following this approach, or is there a better way?
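One hedged cross-check (a sketch, not documented car::Anova behaviour): refit without the term and run a likelihood-ratio test. lmtest::lrtest() works on zeroinfl fits because they have a logLik() method; note this drops the term from both the count and the zero-inflation parts (4 df for treatment here), so it answers a broader question than a count-part-only Wald test.
library(lmtest)
m0 <- zeroinfl(y ~ some_covar, data = d, dist = "poisson")   # drops treatment from both parts
lrtest(m0, m1)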
This may be an obvious question that I have not been able to find an answer for (R newbie), but when fitting a mixed-effects model using the lmer function and then displaying the results using:
screenreg(list(model4re), single.row = TRUE)
we get a list of the beta estimates, standard errors, and significance levels in the form of stars.
What test is used to determine the p-values behind these stars (important, as I recognize there is some contention around how to appropriately determine a significant influence in these models), and how can we extract the p-values used for the stars?
A detailed description of the methods available in R to calculate p-values for the parameters estimated by lmer can be found by typing ?lme4::pvalues.
Below I show the code for calculating p-values with lmerTest; note that its anova() method uses the Satterthwaite approximation by default (a Kenward-Roger version is shown after the output):
library(lmerTest)
fm1 <- lmerTest::lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
lmerTest::anova(fm1)
#############
Analysis of Variance Table of type III with Satterthwaite
approximation for degrees of freedom
Sum Sq Mean Sq NumDF DenDF F.value Pr(>F)
Days 30031 30031 1 17 45.853 3.264e-06 ***
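If you specifically want Kenward-Roger-corrected tests (this requires the pbkrtest package to be installed), lmerTest's anova() accepts a ddf argument; a sketch:
anova(fm1, ddf = "Kenward-Roger")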
The stargazer() function in the stargazer package prints p-values for estimated parameters:
library(stargazer)
fm2 <- lme4::lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
stargazer(fm2, type="text", report="vcp")
===============================================
Dependent variable:
---------------------------
Reaction
-----------------------------------------------
Days 10.467
p = 0.000
Constant 251.405
p = 0.000
-----------------------------------------------
Observations 180
Log Likelihood -871.814
Akaike Inf. Crit. 1,755.628
Bayesian Inf. Crit. 1,774.786
===============================================
Note: *p<0.1; **p<0.05; ***p<0.01
In texreg, p-values for lme4 objects are calculated by the extract.lmerMod function. See the following example:
library(lme4)
data(oats, package="MASS")
(fm1 <- lmer(Y ~ V*N + (1| B/V), data = oats))
##############
Linear mixed model fit by REML ['merModLmerTest']
Formula: Y ~ V * N + (1 | B/V)
Data: oats
REML criterion at convergence: 529.0285
Random effects:
Groups Name Std.Dev.
V:B (Intercept) 10.30
B (Intercept) 14.65
Residual 13.31
Number of obs: 72, groups: V:B, 18; B, 6
Fixed Effects:
(Intercept) VMarvellous VVictory N0.2cwt N0.4cwt N0.6cwt
80.0000 6.6667 -8.5000 18.5000 34.6667 44.8333
VMarvellous:N0.2cwt VVictory:N0.2cwt VMarvellous:N0.4cwt VVictory:N0.4cwt VMarvellous:N0.6cwt VVictory:N0.6cwt
3.3333 -0.3333 -4.1667 4.6667 -4.6667 2.1667
###############
Using extract.lmerMod we get:
extract.lmerMod(fm1)
###############
coef. s.e. p
(Intercept) 80.0000000 9.106977 1.570989e-18
VMarvellous 6.6666667 9.715025 4.925730e-01
VVictory -8.5000000 9.715025 3.816101e-01
N0.2cwt 18.5000000 7.682954 1.604334e-02
N0.4cwt 34.6666667 7.682954 6.417271e-06
N0.6cwt 44.8333333 7.682954 5.365224e-09
VMarvellous:N0.2cwt 3.3333333 10.865337 7.590063e-01
VVictory:N0.2cwt -0.3333333 10.865337 9.755259e-01
VMarvellous:N0.4cwt -4.1666667 10.865337 7.013620e-01
VVictory:N0.4cwt 4.6666667 10.865337 6.675591e-01
VMarvellous:N0.6cwt -4.6666667 10.865337 6.675591e-01
VVictory:N0.6cwt 2.1666667 10.865337 8.419413e-01
GOF dec. places
AIC 559.0285 TRUE
BIC 593.1785 TRUE
Log Likelihood -264.5143 TRUE
Num. obs. 72.0000 FALSE
Num. groups: V:B 18.0000 FALSE
Num. groups: B 6.0000 FALSE
Var: V:B (Intercept) 106.0618 TRUE
Var: B (Intercept) 214.4771 TRUE
Var: Residual 177.0833 TRUE
Looking inside the extract.lmerMod function, the p-values are calculated as follows:
betas <- lme4::fixef(fm1)        # fixed-effect estimates
Vcov <- vcov(fm1)
Vcov <- as.matrix(Vcov)          # covariance matrix of the estimates
se <- sqrt(diag(Vcov))           # standard errors
zval <- betas/se                 # Wald z statistics
(pval <- 2 * pnorm(abs(zval), lower.tail = FALSE))   # two-sided normal p-values
##################
(Intercept) VMarvellous VVictory N0.2cwt N0.4cwt N0.6cwt VMarvellous:N0.2cwt VVictory:N0.2cwt
1.570989e-18 4.925730e-01 3.816101e-01 1.604334e-02 6.417271e-06 5.365224e-09 7.590063e-01 9.755259e-01
VMarvellous:N0.4cwt VVictory:N0.4cwt VMarvellous:N0.6cwt VVictory:N0.6cwt
7.013620e-01 6.675591e-01 6.675591e-01 8.419413e-01
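If you just want those numbers programmatically, the object returned by extract() is an S4 "texreg" object that carries them in its pvalues slot (a sketch, reusing fm1 from above):
tr <- texreg::extract(fm1)   # a "texreg" S4 object
tr@pvalues                   # the Wald z-test p-values behind the stars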
I am testing differences in the number of pollen grains loaded on plant stigmas in different habitats and stigma types.
My sample design comprises two habitats, with 10 sites per habitat.
In each site, I have up to 3 stigma types (wet, dry and semidry), and for each stigma type, I have a different number of plant species, with a different number of individuals per plant species (code).
So I ended up with a nested design as follows: habitat/site/stigmatype/stigmaspecies/code.
As it is a descriptive study, stigmatype, stigmaspecies and code vary between sites.
My response variable (n) is the number of pollen grains (log10(x+1)-transformed) per stigma per plant, averaged because I collected 3 stigmas per plant.
The data don't fit a Poisson distribution because (i) they are not integers, and (ii) the variance is much higher than the mean (ratio = 911.0756). So I fitted a negative binomial.
After model selection, I have:
library(MASS)   # provides negative.binomial()
m4a <- glmer(n ~ habitat*stigmatype + (1|stigmaspecies/code),
             family=negative.binomial(2))
> summary(m4a)
Generalized linear mixed model fit by maximum likelihood ['glmerMod']
Family: Negative Binomial(2) ( log )
Formula: n ~ habitat * stigmatype + (1 | stigmaspecies/code)
AIC BIC logLik deviance
993.9713 1030.6079 -487.9856 975.9713
Random effects:
Groups Name Variance Std.Dev.
code:stigmaspecies (Intercept) 1.034e-12 1.017e-06
stigmaspecies (Intercept) 4.144e-02 2.036e-01
Residual 2.515e-01 5.015e-01
Number of obs: 433, groups: code:stigmaspecies, 433; stigmaspecies, 41
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.31641 0.08896 -3.557 0.000375 ***
habitatnon-invaded -0.67714 0.10060 -6.731 1.68e-11 ***
stigmatypesemidry -0.24193 0.15975 -1.514 0.129905
stigmatypewet -0.07195 0.18665 -0.385 0.699885
habitatnon-invaded:stigmatypesemidry 0.60479 0.22310 2.711 0.006712 **
habitatnon-invaded:stigmatypewet 0.16653 0.34119 0.488 0.625491
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Correlation of Fixed Effects:
(Intr) hbttn- stgmtyps stgmtypw hbttnn-nvdd:stgmtyps
hbttnn-nvdd -0.335
stgmtypsmdr -0.557 0.186
stigmatypwt -0.477 0.160 0.265
hbttnn-nvdd:stgmtyps 0.151 -0.451 -0.458 -0.072
hbttnn-nvdd:stgmtypw 0.099 -0.295 -0.055 -0.403 0.133
Two questions:
How do I check for overdispersion from this output?
What is the best way to go through model validation here?
I have been using:
qqnorm(resid(m4a))
hist(resid(m4a))
plot(fitted(m4a),resid(m4a))
While qqnorm() and hist() seem OK, there is a tendency toward heteroscedasticity in the third plot. And here is my final question:
Can I do model validation with these plots for a glmer model, or is there a better way? If not, how much should I worry about the third plot?
A simple way to check for overdispersion in glmer is:
library("blmeco")
dispersion_glmer(your_model)   # it shouldn't be over 1.4
To deal with overdispersion I usually add an observation-level random effect, for example:
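A sketch of that observation-level random effect (OLRE), assuming your data frame is called your_data and your_model was fitted from it:
your_data$obs <- factor(seq_len(nrow(your_data)))                  # one level per row
m_olre <- update(your_model, . ~ . + (1 | obs), data = your_data)  # absorbs extra variation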
For model validation I usually start from these plots... but then it depends on your specific model:
par(mfrow=c(2,2))
qqnorm(resid(your_model), main="normal qq-plot, residuals")
qqline(resid(your_model))
qqnorm(ranef(your_model)$id[,1])   # qq-plot of the random effects ("id" = your grouping factor)
qqline(ranef(your_model)$id[,1])
plot(fitted(your_model), resid(your_model))   # residuals vs fitted
abline(h=0)
your_data$fitted <- fitted(your_model)        # fitted vs observed
plot(your_data$fitted, jitter(your_data$total, 0.1))
abline(0, 1)
hope this helps a little....
cheers
Just an addition to Q1 for those who might find this by googling: the blmeco dispersion_glmer function appears to be outdated. It is better to use @Ben_Bolker's function for this purpose:
overdisp_fun <- function(model) {
  rdf <- df.residual(model)                # residual degrees of freedom
  rp <- residuals(model, type="pearson")   # Pearson residuals
  Pearson.chisq <- sum(rp^2)
  prat <- Pearson.chisq/rdf                # dispersion ratio (~1 if no overdispersion)
  pval <- pchisq(Pearson.chisq, df=rdf, lower.tail=FALSE)
  c(chisq=Pearson.chisq, ratio=prat, rdf=rdf, p=pval)
}
Source: https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#overdispersion.
With the highlighted caveat:
Do PLEASE note the usual, and extra, caveats noted here: this is an APPROXIMATE estimate of an overdispersion parameter.
PS. Why outdated?
The lme4 package includes a residuals() method these days, and Pearson residuals are supposedly more robust for this type of calculation than deviance residuals. blmeco::dispersion_glmer instead sums the squared deviance residuals together with the squared spherical random effects (the model's @u slot), divides by the number of observations, and takes the square root (the function):
dispersion_glmer <- function (modelglmer)
{
    n <- length(resid(modelglmer))
    return(sqrt(sum(c(resid(modelglmer), modelglmer@u)^2)/n))
}
The blmeco solution gives considerably higher deviance/df ratios than Bolker's function. Since Ben is one of the authors of the lme4 package, I would trust his solution more although I am not qualified to rationalize the statistical reason.
x <- InsectSprays
x$id <- rownames(x)
mod <- lme4::glmer(count ~ spray + (1|id), data = x, family = poisson)
blmeco::dispersion_glmer(mod)
# [1] 1.012649
overdisp_fun(mod)
# chisq ratio rdf p
# 55.7160734 0.8571704 65.0000000 0.7873823
I am trying to understand the difference between two different fitting methods for a data set with a bounded response variable. The response variable is a fraction and therefore has a range of [0,1]. I have uncovered through my Google searching that there are a lot of different methods out there as this is a common operation. I am currently interested in the difference between the stock R GLM fit and the Beta regression offered in the betareg package. I am using the GasolineYield data set from the "betareg" package as my sample data set. Before I post the code and the results my two questions are the following:
Am I performing the Logistic Regression fit in R using the builtin R GLM correctly?
Why are the standard errors reported in the Beta regression so much smaller than the standard errors for the R logistic regression?
R Setup Code
library(betareg)
data("GasolineYield", package = "betareg")
Beta Regression code from the "betareg" package
gy = betareg(yield ~ batch + temp, data = GasolineYield)
summary(gy)
Beta Regression summary output
Call:
betareg(formula = yield ~ batch + temp, data = GasolineYield)
Standardized weighted residuals 2:
Min 1Q Median 3Q Max
-2.8750 -0.8149 0.1601 0.8384 2.0483
Coefficients (mean model with logit link):
Estimate Std. Error z value Pr(>|z|)
(Intercept) -6.1595710 0.1823247 -33.784 < 2e-16 ***
batch1 1.7277289 0.1012294 17.067 < 2e-16 ***
batch2 1.3225969 0.1179020 11.218 < 2e-16 ***
batch3 1.5723099 0.1161045 13.542 < 2e-16 ***
batch4 1.0597141 0.1023598 10.353 < 2e-16 ***
batch5 1.1337518 0.1035232 10.952 < 2e-16 ***
batch6 1.0401618 0.1060365 9.809 < 2e-16 ***
batch7 0.5436922 0.1091275 4.982 6.29e-07 ***
batch8 0.4959007 0.1089257 4.553 5.30e-06 ***
batch9 0.3857930 0.1185933 3.253 0.00114 **
temp 0.0109669 0.0004126 26.577 < 2e-16 ***
Phi coefficients (precision model with identity link):
Estimate Std. Error z value Pr(>|z|)
(phi) 440.3 110.0 4.002 6.29e-05 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Type of estimator: ML (maximum likelihood)
Log-likelihood: 84.8 on 12 Df
Pseudo R-squared: 0.9617
Number of iterations: 51 (BFGS) + 3 (Fisher scoring)
R GLM Logistic Regression code from stock R
glmfit = glm(yield ~ batch + temp, data = GasolineYield, family = "binomial")
summary(glmfit)
R GLM Logistic Regression summary output
Call:
glm(formula = yield ~ batch + temp, family = "binomial", data = GasolineYield)
Deviance Residuals:
Min 1Q Median 3Q Max
-0.100459 -0.025272 0.004217 0.032879 0.082113
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -6.130227 3.831798 -1.600 0.110
batch1 1.720311 2.127205 0.809 0.419
batch2 1.305746 2.481266 0.526 0.599
batch3 1.562343 2.440712 0.640 0.522
batch4 1.048928 2.152385 0.487 0.626
batch5 1.125075 2.176242 0.517 0.605
batch6 1.029601 2.229773 0.462 0.644
batch7 0.540401 2.294474 0.236 0.814
batch8 0.497355 2.288564 0.217 0.828
batch9 0.378315 2.494881 0.152 0.879
temp 0.010906 0.008676 1.257 0.209
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 2.34184 on 31 degrees of freedom
Residual deviance: 0.07046 on 21 degrees of freedom
AIC: 36.631
Number of Fisher Scoring iterations: 5
The standard errors are different because the variance assumptions in the two models are different.
Logistic regression assumes the response has a binomial distribution, while beta regression assumes it has a beta distribution.
The variance functions of the two are different. For the binomial, if you specify the mean (and $n$ is a given) the variance is determined. For the beta there's another free parameter, so it isn't determined by the mean and would presumably be estimated from the data.
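Concretely (a sketch of the standard parameterizations): a binomial proportion has variance $\mu(1-\mu)/n$, fixed once $\mu$ and $n$ are known, whereas a beta response with precision $\phi$ has variance $\mu(1-\mu)/(1+\phi)$, with $\phi$ estimated from the data (here $\hat\phi \approx 440.3$).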
This suggests that if you fit a quasibinomial GLM (adding a variance parameter) you might get closer to the same standard errors, but they still won't be the same, since they would weight the observations differently.
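A sketch of that quasibinomial check, using the same data as above (the point estimates match the binomial fit, but the dispersion is estimated rather than fixed at 1, which rescales the standard errors):
glmfit_q <- glm(yield ~ batch + temp, data = GasolineYield,
                family = quasibinomial)
summary(glmfit_q)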
What you should actually do:
if your proportions are originally counts divided by some total count, then a binomial GLM would be an appropriate model to consider (you would need the total counts, though; see the sketch below);
if your proportions are continuous fractions (the proportion of milk that's cream, for example), then beta regression is an appropriate model to consider.
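For the counts case, a sketch with hypothetical totals (GasolineYield itself has no counts; the successes column below is simulated purely for illustration):
set.seed(1)
d2 <- transform(GasolineYield,
                total = 100L,
                successes = rbinom(nrow(GasolineYield), 100L, yield))
glm(cbind(successes, total - successes) ~ batch + temp,
    data = d2, family = binomial)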