Error with nlme - r

For the IGF data from the nlme library, I'm getting this error message:
lme(conc ~ 1, data=IGF, random=~age|Lot)
Error in lme.formula(conc ~ 1, data = IGF, random = ~age | Lot) :
nlminb problem, convergence error code = 1
message = iteration limit reached without convergence (10)
But everything is fine with this code
lme(conc ~ age, data=IGF)
Linear mixed-effects model fit by REML
Data: IGF
Log-restricted-likelihood: -297.1831
Fixed: conc ~ age
(Intercept) age
5.374974367 -0.002535021
Random effects:
Formula: ~age | Lot
Structure: General positive-definite
StdDev Corr
(Intercept) 0.082512196 (Intr)
age 0.008092173 -1
Residual 0.820627711
Number of Observations: 237
Number of Groups: 10
Since IGF is a groupedData object, both calls should be identical. I'm confused why the first one produces an error. Thanks for your time and help.

I find the other, older answer here unsatisfactory. There is an important distinction between the case where age statistically has no impact and the case where we encounter a computational error. Personally, I have made career mistakes by conflating these two cases. R has signaled the latter, and I would like to dig into why.
The model the OP has specified is a growth model, with random slopes and intercepts. A grand intercept is included, but not a grand age slope. One unsavory constraint imposed by fitting a random slope without its corresponding "grand" (fixed) term is that you force the random slopes to have mean 0, which is very difficult to optimize. Marginal models indicate that the age slope is not statistically significantly different from 0. Furthermore, adding age as a fixed effect does not remedy the problem:
> lme(conc~ age, random=~age|Lot, data=IGF)
Error in lme.formula(conc ~ age, random = ~age | Lot, data = IGF) :
nlminb problem, convergence error code = 1
message = iteration limit reached without convergence (10)
Here the error is obvious. It might be tempting to crank up the number of iterations; lmeControl exposes many iteration settings. But even that doesn't work:
> fit <- lme(conc~ 1, random=~age|Lot, data=IGF,
control = lmeControl(maxIter = 1e8, msMaxIter = 1e8))
Error in lme.formula(conc ~ 1, random = ~age | Lot,
data = IGF, control = lmeControl(maxIter = 1e+08, :
nlminb problem, convergence error code = 1
message = singular convergence (7)
So it's not a precision issue; the optimizer is running out of bounds.
There must be key differences between the two models you have proposed, and a way to diagnose the error you have found. One simple approach is to request a verbose fit for the problematic model:
> lme(conc~ 1, random=~age|Lot, data=IGF, control = lmeControl(msVerbose = TRUE))
0: 602.96050: 2.63471 4.78706 141.598
1: 602.85855: 3.09182 4.81754 141.597
2: 602.85312: 3.12199 4.97587 141.598
3: 602.83803: 3.23502 4.93514 141.598
(truncated)
48: 602.76219: 6.22172 4.81029 4211.89
49: 602.76217: 6.26814 4.81000 4425.23
50: 602.76216: 6.31630 4.80997 4638.57
50: 602.76216: 6.31630 4.80997 4638.57
The first column is the objective (the REML criterion, I believe); the second through fourth columns are the parameters of an object called lmeSt, of class lmeStructInt, lmeStruct, and modelStruct. If you use RStudio's debugger to inspect the attributes of this object (the linchpin of the problem), you'll see that it is the random-effects component that explodes here. coef(lmeSt) after 50 iterations produces
reStruct.Lot1 reStruct.Lot2 reStruct.Lot3
6.316295 4.809975 4638.570586
as seen above and produces
> coef(lmeSt, unconstrained = FALSE)
reStruct.Lot.var((Intercept)) reStruct.Lot.cov(age,(Intercept))
306382.7 2567534.6
reStruct.Lot.var(age)
21531399.4
which is the same as the
Browse[1]> lmeSt$reStruct$Lot
Positive definite matrix structure of class pdLogChol representing
(Intercept) age
(Intercept) 306382.7 2567535
age 2567534.6 21531399
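If you want to reproduce this inspection outside of RStudio's debugger, a minimal route (my suggestion, not part of the original diagnosis) is to break into the internal fitting method directly:

debugonce(nlme:::lme.formula)
lme(conc ~ 1, random = ~ age | Lot, data = IGF,
    control = lmeControl(msVerbose = TRUE))
## step with 'n' until lmeSt has been built, then inspect it:
## coef(lmeSt); coef(lmeSt, unconstrained = FALSE); lmeSt$reStruct$Lot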
So it's clear that the covariance of the random effects is what explodes for this particular optimizer. The PORT routines used by nlminb have been criticized for their uninformative errors; the write-up by David Gay (Bell Labs) is here: http://ms.mcmaster.ca/~bolker/misc/port.pdf. For our error 7, obtained with the enormous iteration cap, the PORT documentation suggests "x may have too many free components. See §5." Rather than fix the algorithm, it behooves us to ask whether there are approximate methods that should generate similar results. It is, for instance, easy to fit an lmList object to come up with the random-intercept and random-slope variances:
> fit <- lmList(conc ~ age | Lot, data=IGF)
> cov(coef(fit))
(Intercept) age
(Intercept) 0.13763699 -0.018609973
age -0.01860997 0.003435819
although ideally these would be weighted by their respective precisions.
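A rough sketch of such a weighting (my own illustration, not from the original answer) would weight each Lot's slope by its inverse squared standard error from the per-group fits:

fits <- lmList(conc ~ age | Lot, data = IGF)
slopes <- coef(fits)$age
se2 <- sapply(fits, function(f) coef(summary(f))["age", "Std. Error"]^2)
w <- 1 / se2                        # precision weights
wm <- weighted.mean(slopes, w)      # precision-weighted mean slope
sum(w * (slopes - wm)^2) / sum(w)   # crude weighted slope variance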
Sticking with the nlme package, I note that unconstrained optimization with optim (BFGS) does not produce such an error and gives similar results:
> lme(conc ~ 1, data=IGF, random=~age|Lot, control = lmeControl(opt = 'optim'))
Linear mixed-effects model fit by REML
Data: IGF
Log-restricted-likelihood: -292.9675
Fixed: conc ~ 1
(Intercept)
5.333577
Random effects:
Formula: ~age | Lot
Structure: General positive-definite, Log-Cholesky parametrization
StdDev Corr
(Intercept) 0.032109976 (Intr)
age 0.005647296 -0.698
Residual 0.820819785
Number of Observations: 237
Number of Groups: 10
The same model can be declared with an alternative syntax in the MUCH easier lme4 package:
library(lme4)
lmer(conc ~ 1 + (age | Lot), data=IGF)
which yields:
> lmer(conc ~ 1 + (age | Lot), data=IGF)
Linear mixed model fit by REML ['lmerMod']
Formula: conc ~ 1 + (age | Lot)
Data: IGF
REML criterion at convergence: 585.7987
Random effects:
Groups Name Std.Dev. Corr
Lot (Intercept) 0.056254
age 0.006687 -1.00
Residual 0.820609
Number of obs: 237, groups: Lot, 10
Fixed Effects:
(Intercept)
5.331
An attribute of lmer and its optimizer is that random-effect correlations which come out very close to 1, 0, or -1 are simply set to those boundary values, since this simplifies the optimization (and the statistical efficiency of the estimation) substantially.
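Incidentally, this boundary case can be detected programmatically; lme4 provides isSingular() for exactly this purpose (a small check of my own, not part of the original answer):

fit4 <- lmer(conc ~ 1 + (age | Lot), data = IGF)
isSingular(fit4)   # TRUE: the estimated correlation sits on the boundary (-1)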
Taken together, this does not suggest that age has no effect, as the earlier answer claimed, and the numeric results support that argument.

If you plot the data, you can see that there is no effect of age, so it seems strange to try to fit a random effect of age in spite of this. No wonder it is not converging.
library(nlme)
library(ggplot2)
dev.new(width=6, height=3)
qplot(age, conc, data=IGF) + facet_wrap(~Lot, nrow=2) + geom_smooth(method='lm')
I think what you want to do is model a random effect of Lot on the intercept only. We can try including age as a fixed effect, but we'll see that it is not significant and can be dropped:
> summary(lme(conc ~ 1 + age, data=IGF, random=~1|Lot))
Linear mixed-effects model fit by REML
Data: IGF
AIC BIC logLik
604.8711 618.7094 -298.4355
Random effects:
Formula: ~1 | Lot
(Intercept) Residual
StdDev: 0.07153912 0.829998
Fixed effects: conc ~ 1 + age
Value Std.Error DF t-value p-value
(Intercept) 5.354435 0.10619982 226 50.41849 0.0000
age -0.000817 0.00396984 226 -0.20587 0.8371
Correlation:
(Intr)
age -0.828
Standardized Within-Group Residuals:
Min Q1 Med Q3 Max
-5.46774548 -0.43073893 -0.01519143 0.30336310 5.28952876
Number of Observations: 237
Number of Groups: 10
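Dropping the non-significant age term then leaves a plain random-intercept model; the refit (not shown in the original answer) is simply:

summary(lme(conc ~ 1, data = IGF, random = ~ 1 | Lot))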

Related

Calculate proportion of random effect variance from zero-inflation component of glmmTMB model

I fitted a zero-inflated model in glmmTMB to my data. It is defined in the following way:
MT.total.glmm.zi <- glmmTMB(MT_total ~ Factor1 * Factor2 + Factor3
+ (1|RANEF1)
+ (1|RANEF2)
+ (1|RANEF3)
+ (1|RANEF4)
+ (1|RANEF5),
family = "poisson",
zi=~Factor1 * Factor2 + Factor3
+ (1|RANEF1)
+ (1|RANEF2)
+ (1|RANEF3)
+ (1|RANEF4)
+ (1|RANEF5),
data=df.MT.total)
Now, for the random effects (RANEF1-5), fitted as random intercepts, I would like to report the proportion of variance they explain. For the conditional model there is the function get_variance from the insight package, which provides me with the much-needed information:
get_variance(MT.total.glmm.zi, component = "all")
$var.fixed
[1] 0.02833294
$var.random
[1] 1.029546
$var.residual
[1] 0.4704095
$var.distribution
[1] 0.4704095
$var.dispersion
[1] 0
$var.intercept
RANEF1 RANEF2 RANEF3 RANEF4 RANEF5
0.2862753 0.1710377 0.0486532 0.1655541 0.3580260
Unfortunately, I could not find a downstream wrapper for get_variance that yields the same information for the zero-inflation component of my model. I "only" found downstream wrappers for the Anova, emmeans and effects packages in Ben Bolker's glmmTMB documentation. Unfortunately, due to the nature of my data, the models are zero-inflated models and not hurdle/zero-truncated Poisson models; otherwise I could have just fitted a separate binomial model on a binary version of MT_total.
There is the VarCorr function, which allows printing the variances of the individual random effects:
print(VarCorr(MT.total.glmm.zi), comp = "Variance")
Conditional model:
Groups Name Std.Dev.
RANEF1 (Intercept) 0.286275
RANEF2 (Intercept) 0.171038
RANEF3 (Intercept) 0.048653
RANEF4 (Intercept) 0.165554
RANEF5 (Intercept) 0.358026
Zero-inflation model:
Groups Name Std.Dev.
RANEF1 (Intercept) 1.14835
RANEF2 (Intercept) 0.85102
RANEF3 (Intercept) 0.11784
RANEF4 (Intercept) 0.14599
RANEF5 (Intercept) 0.85835
But I don't really know how to calculate the remaining variance components of the zero-inflation part of my model (var.fixed, var.residual, var.distribution).
So my two questions are:
Is there a function or downstream wrapper I overlooked that would allow me to apply get_variance to the zero-inflation component of my model?
Or
Could someone give me a hint or point me in the direction of how to calculate the remaining variance components of my model, so that I can compute the proportion of variance explained by my random effects manually?
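For the second question, a heavily hedged sketch of a manual calculation (my own illustration, following the usual latent-scale logic for logit models as in the Nakagawa R² literature; none of this is a documented glmmTMB API): the zi random-effect variances come from VarCorr, the fixed-effect variance is the variance of the zi linear predictor, and the distribution-specific variance on the logit scale is conventionally pi^2/3.

library(glmmTMB)
## zero-inflation random-intercept variances (diagonal of each VarCorr block)
vc_zi <- VarCorr(MT.total.glmm.zi)$zi
var_random_zi <- sum(sapply(vc_zi, function(m) m[1, 1]))
## variance of the zero-inflation fixed-effect linear predictor
## (design matrix rebuilt from the zi fixed formula)
X_zi <- model.matrix(~ Factor1 * Factor2 + Factor3, data = df.MT.total)
var_fixed_zi <- var(as.numeric(X_zi %*% fixef(MT.total.glmm.zi)$zi))
## distribution-specific variance on the logit scale (standard logistic)
var_dist_zi <- pi^2 / 3
## proportion of latent-scale variance attributable to each random intercept
sapply(vc_zi, function(m) m[1, 1]) / (var_random_zi + var_fixed_zi + var_dist_zi)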

Equivalence of a mixed model fitted by lme and lmer

I have fitted a mixed-effects model using both functions widely used in R, namely the lme function from the nlme package and the lmer function from the lme4 package.
To refit the lme model in lme4 with the same reparametrization, I used the information from this topic (this is only possible in lme4 in a hacky way): Heteroscedastic model of mixed effects via lmer function
I apologize for hosting the data via a link; however, I couldn't find an internal R dataset with variables that match my problem.
Data: https://drive.google.com/file/d/1jKFhs4MGaVxh-OPErvLDfMNmQBouywoM/view?usp=sharing
The fitted models were:
library(nlme)
library(lme4)
ModLME = lme(Var1~I(Var2)+I(Var2^2),
random = ~1|Var3,
weights = varIdent(form=~1|Var4),
Dataone, method="REML")
ModLMER = lmer(Var1~I(Var2)+I(Var2^2)+(1|Var3)+(0+dummy(Var4,"1")|Var5),
Dataone, REML = TRUE,
control=lmerControl(check.nobs.vs.nlev="ignore",
check.nobs.vs.nRE="ignore"))
Which are equivalent, see:
all.equal(REMLcrit(ModLMER), c(-2*logLik(ModLME)))
[1] TRUE
all.equal(fixef(ModLME), fixef(ModLMER), tolerance=1e-7)
[1] TRUE
> summary(ModLME)
Linear mixed-effects model fit by REML
Data: Dataone
AIC BIC logLik
-209.1431 -193.6948 110.5715
Random effects:
Formula: ~1 | Var3
(Intercept) Residual
StdDev: 0.05789852 0.03636468
Variance function:
Structure: Different standard deviations per stratum
Formula: ~1 | Var4
Parameter estimates:
0 1
1.000000 5.641709
Fixed effects: Var1 ~ I(Var2) + I(Var2^2)
Value Std.Error DF t-value p-value
(Intercept) 0.9538547 0.01699642 97 56.12093 0
I(Var2) -0.5009804 0.09336479 97 -5.36584 0
I(Var2^2) -0.4280151 0.10038257 97 -4.26384 0
summary(ModLMER)
Linear mixed model fit by REML. t-tests use Satterthwaite's method [lmerModLmerTest]
Formula: Var1 ~ I(Var2) + I(Var2^2) + (1 | Var3) + (0 + dummy(Var4, "1") |
Var5)
Data: Dataone
Control: lmerControl(check.nobs.vs.nlev = "ignore", check.nobs.vs.nRE = "ignore")
REML criterion at convergence: -221.1
Scaled residuals:
Min 1Q Median 3Q Max
-4.1151 -0.5891 0.0374 0.5229 2.1880
Random effects:
Groups Name Variance Std.Dev.
Var3 (Intercept) 6.466e-12 2.543e-06
Var5 dummy(Var4, "1") 4.077e-02 2.019e-01
Residual 4.675e-03 6.837e-02
Number of obs: 100, groups: Var3, 100; Var5, 100
Fixed effects:
Estimate Std. Error df t value Pr(>|t|)
(Intercept) 0.95385 0.01700 95.02863 56.121 < 2e-16 ***
I(Var2) -0.50098 0.09336 92.94048 -5.366 5.88e-07 ***
I(Var2^2) -0.42802 0.10038 91.64017 -4.264 4.88e-05 ***
However, when inspecting the residuals of these models, note that they are not similar: in the model fitted by lmer, a set of residuals mysteriously appears lying close to a straight line. How could I solve this problem so that the two fits are identical? I believe the problem is in the lme4 model.
aa=plot(ModLME, main="LME")
bb=plot(ModLMER, main="LMER")
gridExtra::grid.arrange(aa,bb,ncol=2)
I can tell you what's going on and what should in principle fix it, but at the moment the fix doesn't work ...
The residuals being plotted take all of the random effects into account, which in the case of the lmer fit includes the individual-level random effects (the (0+dummy(Var4,"1")|Var5) term), which leads to weird residuals for the Var4==1 group. To illustrate this:
plot(ModLMER, col = Dataone$Var4+1)
i.e., you can see that the weird residuals are exactly the ones in red == those for which Var4==1.
In theory we should be able to get the same residuals via:
res <- Dataone$Var1 - predict(ModLMER, re.form = ~(1|Var3))
i.e., ignore the group-specific observation-level random effect term. However, it looks like there is a bug at the moment ("contrasts can be applied only to factors with 2 or more levels").
An extremely hacky solution is to construct the random-effect predictions without the observation-level term yourself:
## fixed-effect predictions
p0 <- predict(ModLMER, re.form = NA)
## construct RE prediction, Var3 term only:
Z <- getME(ModLMER, "Z")
b <- drop(getME(ModLMER, "b"))
## zero out observation-level components
b[101:200] <- 0
## add RE predictions to fixed predictions
p1 <- drop(p0 + Z %*% b)
## plot fitted vs residual
plot(p1, Dataone$Var1 - p1)
For what it's worth, this also works:
library(glmmTMB)
ModGLMMTMB <- glmmTMB(Var1~I(Var2)+I(Var2^2)+(1|Var3),
dispformula = ~factor(Var4),
REML = TRUE,
data = Dataone)
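A quick consistency check along the same lines as before (my addition; agreement is only up to parametrization and optimizer tolerance):

all.equal(c(-2 * logLik(ModGLMMTMB)), REMLcrit(ModLMER), tolerance = 1e-6)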

lme4 1.1-27.1 error: pwrssUpdate did not converge in (maxit) iterations

Sorry that this error has been discussed before; each answer on Stack Overflow seems specific to the data.
I'm attempting to run the following negative binomial model in lme4:
Model5.binomial<-glmer.nb(countvariable ~ waves + var1 + dummycodedvar2 + dummycodedvar3 + (1|record_id), data=datadfomit)
However, I receive the following error when attempting to run the model:
Error in f_refitNB(lastfit, theta = exp(t), control = control) :
pwrssUpdate did not converge in (maxit) iterations
I first ran the model with only 3 predictor variables (waves, var1, dummycodedvar2) and got the same error, but centering the predictors fixed the problem and the model ran fine.
Now, with 4 variables (all centered), I expected the model to run smoothly, but I receive the error again.
Since every answer on this site seems to point towards a problem in the data, data that replicates the problem can be found here:
https://file.io/3vtX9RwMJ6LF
Your response variable has a lot of zeros:
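A quick way to see this (my own check, assuming the posted data frame is called datadfomit):

table(datadfomit$countvariable == 0)   # counts of zero vs non-zero responses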
I would suggest fitting a model that takes account of this, such as a zero-inflated model. The GLMMadaptive package can fit zero-inflated negative binomial mixed effects models:
library(GLMMadaptive)
library(magrittr)  # for the %>% pipe
mixed_model(countvariable ~ waves + var1 + dummycodedvar2 + dummycodedvar3,
            random = ~ 1 | record_id, data = data,
            family = zi.negative.binomial(),
            zi_fixed = ~ var1,
            zi_random = ~ 1 | record_id) %>% summary()
Random effects covariance matrix:
StdDev Corr
(Intercept) 0.8029
zi_(Intercept) 1.0607 -0.7287
Fixed effects:
Estimate Std.Err z-value p-value
(Intercept) 1.4923 0.1892 7.8870 < 1e-04
waves -0.0091 0.0366 -0.2492 0.803222
var1 0.2102 0.0950 2.2130 0.026898
dummycodedvar2 -0.6956 0.1702 -4.0870 < 1e-04
dummycodedvar3 -0.1746 0.1523 -1.1468 0.251451
Zero-part coefficients:
Estimate Std.Err z-value p-value
(Intercept) 1.8726 0.1284 14.5856 < 1e-04
var1 -0.3451 0.1041 -3.3139 0.00091993
log(dispersion) parameter:
Estimate Std.Err
0.4942 0.2859
Integration:
method: adaptive Gauss-Hermite quadrature rule
quadrature points: 11
Optimization:
method: hybrid EM and quasi-Newton
converged: TRUE

Two-level modelling with lme in R

I am interested in estimating a mixed-effects model with two random components (I am sorry for the somewhat imprecise notation; I am somewhat new to these kinds of models). I also want the standard errors of the variances of the random components. That is why I am somewhat bound to the lme function from the nlme package: I found this description of how to calculate those standard errors and, also interesting, the standard error for a function of these variances (link).
I believe I know how to use lmer. I am ultimately interested in model2. For model1, both commands yield the same estimates, but model2 with lme yields different results than model2 with lmer from the lme4 package. Could you help me figure out how to set up the random components for lme? This would be much appreciated. Thanks. Please find my MWE below.
Best
Daniel
#### load all packages #####
loadpackage <- function(x){
for( i in x ){
# require returns TRUE invisibly if it was able to load package
if( ! require( i , character.only = TRUE ) ){
# If package was not able to be loaded then re-install
install.packages( i , dependencies = TRUE )
}
# Load package (after installing)
library( i , character.only = TRUE )
}
}
# Then try/install packages...
loadpackage( c("nlme", "msm", "lmeInfo", "lme4"))
alcohol1 <- read.table("https://stats.idre.ucla.edu/stat/r/examples/alda/data/alcohol1_pp.txt", header=T, sep=",")
attach(alcohol1)
id <- as.factor(id)
age <- as.factor(age)
model1.lmer <-lmer(alcuse ~ 1 + peer + (1|id))
summary(model1.lmer)
model2.lmer <-lmer(alcuse ~ 1 + peer + (1|id) + (1|age))
summary(model2.lmer)
model1.lme <- lme(alcuse ~ 1+ peer, data = alcohol1, random = ~ 1 |id, method ="REML")
summary(model1.lme)
model2.lme <- lme(alcuse ~ 1+ peer, data = alcohol1, random = ~ 1 |id + 1|age, method ="REML")
Edit (15/09/2021):
Estimating the model as follows and then returning the estimates via nlme::VarCorr gives me different results. While the estimates seem to be in the ballpark, it is as if they are switched across components.
model2a.lme <- lme(alcuse ~ 1+ peer, data = alcohol1, random = ~ 1 |id/age, method ="REML")
summary(model2a.lme)
nlme::VarCorr(model2a.lme)
Variance StdDev
id = pdLogChol(1)
(Intercept) 0.38390274 0.6195989
age = pdLogChol(1)
(Intercept) 0.47892113 0.6920413
Residual 0.08282585 0.2877948
EDIT (16/09/2021):
Since Bob pushed me to think more about my model, I want to give some additional information. Please note that the data I use in the MWE do not match my true data; I only use them for illustrative purposes, since I cannot upload my true data. I have a household panel with income, demographic information and parent indicators.
I am interested in intergenerational mobility. Sibling correlations of permanent income are one industry standard. At the very least, contemporaneous observations are very bad proxies of permanent income: due to transitory shocks, i.e., classical measurement error, those estimates are almost certainly attenuated. For this reason, we exploit the longitudinal dimension of our data.
For sibling correlations, this amounts to hypothesising that the income process is as follows:
$$Y_{ijt} = \beta X_{ijt} + \epsilon_{ijt}.$$
With Y being income of individual i from family j in year t. X comprises age and survey-year indicators to account for life-cycle effects and macroeconomic conditions in survey years. Epsilon is a compound term comprising a random individual component, a random family component, and a transitory component (measurement error and short-lived shocks). It looks as follows:
$$\epsilon_{ijt} = \alpha_i + \gamma_j + \eta_{ijt}.$$
The variance of income is then:
$$\sigma^2_\epsilon = \sigma^2_\alpha + \sigma^2_\gamma + \sigma^2_\eta.$$
The quantity we are interested in is
$$\rho = \frac{\sigma^2_\gamma}{\sigma^2_\alpha + \sigma^2_\gamma},$$
which reflects the share of shared family (and other characteristics) among siblings of the variation in permanent income.
By the way: the struggle is simply because I want standard errors for all estimates, including $\rho$.
This is an example of crossed vs nested random effects. (Note that the example you refer to is fitting a different kind of model, a random-slopes model rather than a model with two different grouping variables ...)
If you try with(alcohol1, table(age, id)) you can see that every id is associated with every possible age (14, 15, 16). Or look at subset(alcohol1, id==1), for example:
id age coa male age_14 alcuse peer cpeer ccoa
1 1 14 1 0 0 1.732051 1.264911 0.2469111 0.549
2 1 15 1 0 1 2.000000 1.264911 0.2469111 0.549
3 1 16 1 0 2 2.000000 1.264911 0.2469111 0.549
There are three possible models you could fit for a model with random effects of age (indexed by i) and id (indexed by j):
crossed ((1|age) + (1|id)): Y_{ij} = beta0 + beta1*peer + eps1_i + eps2_j + epsr_{ij}; alcohol use varies among individuals and, independently, across ages (this model won't work very well because there are only three distinct ages in the data set; more levels are usually needed)
id nested within age ((1|age/id) = (1|age) + (1|age:id)): Y_{ij} = beta0 + beta1*peer + eps1_i + eps2_{ij} + epsr_{ij}; alcohol use varies across ages, and varies across individuals within ages (see the note above about the number of levels)
age nested within id ((1|id/age) = (1|id) + (1|age:id)): Y_{ij} = beta0 + beta1*peer + eps1_j + eps2_{ij} + epsr_{ij}; alcohol use varies across individuals, and varies across ages within individuals
Here eps1_i, eps2_{ij}, and epsr_{ij} are normal deviates; epsr is the residual error term.
The latter two models actually don't make sense in this case: because there is only one observation per age/id combination, the nested variance (eps2) is completely confounded with the residual variance (epsr). lme doesn't notice this; if you try to fit one of the nested models in lmer, it will give the error
number of levels of each grouping factor must be < number of observations (problems: id:age)
(Although if you try to compute confidence intervals based on model1.lme you'll get an error "cannot get confidence intervals on var-cov components: Non-positive definite approximate variance-covariance", which is a hint that something is wrong.)
You could restate this problem as saying that the residual variation, and the variation among ages within individuals, are jointly unidentifiable (can't be separated from each other, statistically).
The updated answer here shows how to get the standard errors of the variance components from an lmer model, so you shouldn't be stuck with lme (but you should think carefully about which model you're really trying to fit ...)
The GLMM FAQ might also be useful.
More generally, the standard error of
rho = (V_gamma)/(V_alpha + V_gamma)
will be hard to compute accurately, because this is a nonlinear function of the model parameters. You can apply the delta method, but the most reliable approach would be to use parametric bootstrapping: if you have a fitted model m, then something like this should work:
var_ratio <- function(m) {
    ## named vector of random-intercept variances, one per grouping factor
    ## (assumes the grouping factors are called "family" and "id")
    v <- sapply(VarCorr(m), function(x) as.numeric(x[1, 1]))
    unname(v["family"] / (v["family"] + v["id"]))
}
confint(m, method = "boot", FUN = var_ratio)
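As an aside, if you really must stay in nlme, crossed random effects can be forced into lme with the pdBlocked/pdIdent device (the standard trick from Pinheiro & Bates and the GLMM FAQ; a sketch assuming id and age are coded as factors):

alcohol1$dummy <- factor(1)          # constant grouping variable
alcohol1$id <- factor(alcohol1$id)
alcohol1$age <- factor(alcohol1$age)
m_crossed <- lme(alcuse ~ 1 + peer, data = alcohol1,
                 random = list(dummy = pdBlocked(list(pdIdent(~ id - 1),
                                                      pdIdent(~ age - 1)))),
                 method = "REML")
nlme::VarCorr(m_crossed)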
You should specify the random effects in lme using /, not +.
By lmer
model2.lmer <-lmer(alcuse ~ 1 + peer + (1|id) + (1|age), data = alcohol1)
summary(model2.lmer)
Linear mixed model fit by REML ['lmerMod']
Formula: alcuse ~ 1 + peer + (1 | id) + (1 | age)
Data: alcohol1
REML criterion at convergence: 651.3
Scaled residuals:
Min 1Q Median 3Q Max
-2.0228 -0.5310 -0.1329 0.5854 3.1545
Random effects:
Groups Name Variance Std.Dev.
id (Intercept) 0.08078 0.2842
age (Intercept) 0.30313 0.5506
Residual 0.56175 0.7495
Number of obs: 246, groups: id, 82; age, 82
Fixed effects:
Estimate Std. Error t value
(Intercept) 0.3039 0.1438 2.113
peer 0.6074 0.1151 5.276
Correlation of Fixed Effects:
(Intr)
peer -0.814
By lme
model2.lme <- lme(alcuse ~ 1+ peer, data = alcohol1, random = ~ 1 |id/age, method ="REML")
summary(model2.lme)
Linear mixed-effects model fit by REML
Data: alcohol1
AIC BIC logLik
661.3109 678.7967 -325.6554
Random effects:
Formula: ~1 | id
(Intercept)
StdDev: 0.4381206
Formula: ~1 | age %in% id
(Intercept) Residual
StdDev: 0.4381203 0.7494988
Fixed effects: alcuse ~ 1 + peer
Value Std.Error DF t-value p-value
(Intercept) 0.3038946 0.1438333 164 2.112825 0.0361
peer 0.6073948 0.1151228 80 5.276060 0.0000
Correlation:
(Intr)
peer -0.814
Standardized Within-Group Residuals:
Min Q1 Med Q3 Max
-2.0227793 -0.5309669 -0.1329302 0.5853768 3.1544873
Number of Observations: 246
Number of Groups:
id age %in% id
82 82
Okay, finally. Just to sketch my confidential data: I have a panel of individuals. The data include siblings, identified via mnr. income is earnings, wavey is the survey year, age contains the age factors, female is a factor for gender, and pid is the factor identifying the individual.
m1 <- lmer(income ~ age + wavey + female + (1|pid) + (1 | mnr),
data = panel)
vv <- vcov(m1, full = TRUE)   # full covariance matrix, including the variance parameters
covvar <- vv[58:60, 58:60]    # block for the pid and mnr variances and the residual variance
covvar
3 x 3 Matrix of class "dgeMatrix"
cov_pid.(Intercept) cov_mnr.(Intercept) residual
[1,] 2.6528679 -1.4624588 -0.4077576
[2,] -1.4624588 3.1015001 -0.0597926
[3,] -0.4077576 -0.0597926 1.1634680
mean <- as.data.frame(VarCorr(m1))$vcov
mean
[1] 17.92341 16.86084 56.77185
deltamethod(~ x2/(x1+x2), mean, covvar, ses =TRUE)
[1] 0.04242089
The last scalar should be what I interpret as the shared background of the siblings in permanent income.
Thanks to @Ben Bolker, who pointed me in this direction.

Getting Generalized Least Squares Means for fixed effects in nlme or lme4

Least-squares means with their standard errors for an aov object can be obtained with the model.tables function:
npk.aov <- aov(yield ~ block + N*P*K, npk)
model.tables(npk.aov, "means", se = TRUE)
I wonder how to get the generalized least squares means with their standard errors from nlme or lme4 objects:
library(nlme)
data(Machines)
fm1Machine <- lme(score ~ Machine, data = Machines, random = ~ 1 | Worker )
Any comments and hints will be highly appreciated. Thanks.
lme and nlme fit through maximum likelihood or restricted maximum likelihood (the latter is the default), so your results will be based on one of those methods.
summary(fm1Machine) will provide you with the output that includes the means and standard errors:
....irrelevant output deleted
Fixed effects: score ~ Machine
Value Std.Error DF t-value p-value
(Intercept) 52.35556 2.229312 46 23.48507 0
MachineB 7.96667 1.053883 46 7.55935 0
MachineC 13.91667 1.053883 46 13.20514 0
Correlation:
....irrelevant output deleted
Because you have fitted the fixed effects with an intercept, you get an intercept term in the fixed-effects results instead of a result for MachineA. The results for MachineB and MachineC are contrasts with the intercept, so to get the means for MachineB and MachineC, add the value of each to the intercept mean. But the standard errors are not the ones you would like.
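For example, the mean for MachineB is the intercept plus its contrast: 52.35556 + 7.96667 ≈ 60.32222, matching the refit below.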
To get the information you are after, fit the model so that it doesn't have an intercept term in the fixed effects (note the -1 at the end of the fixed-effects formula):
fm1Machine <- lme(score ~ Machine-1, data = Machines, random = ~ 1 | Worker )
This will then give you the means and standard error output you want:
....irrelevant output deleted
Fixed effects: score ~ Machine - 1
Value Std.Error DF t-value p-value
MachineA 52.35556 2.229312 46 23.48507 0
MachineB 60.32222 2.229312 46 27.05867 0
MachineC 66.27222 2.229312 46 29.72765 0
....irrelevant output deleted
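For what it's worth, a more recent alternative (my addition, not part of the original answer) is the emmeans package, which computes these means and standard errors directly from the intercept parametrization:

library(emmeans)
emmeans(fm1Machine, ~ Machine)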
To quote Douglas Bates from
http://markmail.org/message/dqpk6ftztpbzgekm
"I have a strong suspicion that, for most users, the definition of lsmeans is "the numbers that I get from SAS when I use an lsmeans statement". My suggestion for obtaining such numbers is to buy a SAS license and use SAS to fit your models."
