Two random terms with nlme - r

I am fitting a mixed model with the nlme package in R. My situation is:
The mixed model is:
lme(MY ~ DFC + DFC2, random = ~ DFC | Animal, data = my_data)
where Animal is the random effect.
However, if I write the model like this, I only obtain a random intercept and a random slope for DFC (by Animal), but not for DFC2.
I would also like a random slope (by Animal) for DFC2!
Could you please help me?
Thank you very much,

If you use the lme4 package:
library(lme4)
fit <- lmer(y ~ x1 + x2 + (1 + x1 + x2 | group), data = test.df)
coef(fit)
Use coef() on the fitted object to see the per-group intercepts and slopes.
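For completeness, since the question asks about nlme, the same structure can be requested there by listing both slopes in the random formula. A minimal sketch, assuming DFC2 is a column of my_data (e.g. DFC squared):
library(nlme)
fit_nlme <- lme(MY ~ DFC + DFC2,
                random = ~ DFC + DFC2 | Animal,  # random intercept plus slopes for DFC and DFC2, by Animal
                data = my_data)
ranef(fit_nlme)  # per-Animal deviations for the intercept and both slopes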

Related

Residual modeling for mixed models: Any other package than nlme?

Aside from R function nlme::lme(), I'm wondering how else I can model the Level-1 residual variance-covariance structure?
P.S. My search showed that I could possibly use the glmmTMB package, but it seems to model the random effects themselves rather than the Level-1 residuals (see the code below).
glmmTMB::glmmTMB(y ~ times + ar1(times | subjects), data = data) ## DON'T RUN
nlme::lme(y ~ times, random = ~ times | subjects,
          correlation = corAR1(), data = data) ## DON'T RUN
glmmTMB can effectively be used to model level-1 residuals, by adding an observation-level random effect to the model (and, if necessary, suppressing the level-1 variance via dispformula = ~0). For example, comparing the same fit in lme and glmmTMB:
library(glmmTMB)
library(nlme)
data("sleepstudy" ,package="lme4")
ss <- sleepstudy
ss$times <- factor(ss$Days) ## needed for glmmTMB
I initially tried random = ~Days|Subject, but neither lme nor glmmTMB was happy (overfitted):
lme1 <- lme(Reaction ~ Days, random = ~ 1 | Subject,
            correlation = corAR1(form = ~ Days | Subject), data = ss)
m1 <- glmmTMB(Reaction ~ Days + (1 | Subject) +
                ar1(times + 0 | Subject),
              dispformula = ~0,
              data = ss,
              REML = TRUE,
              start = list(theta = c(4, 4, 1)))
Unfortunately, in order to get a good answer with glmmTMB I did have to tweak the starting values ...
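To check that the two parameterizations end up in (roughly) the same place, one hedged comparison is to look at the log-likelihoods and variance components of the two fits; these should be close if the models really encode the same structure:
logLik(lme1)
logLik(m1)
VarCorr(m1)                  # glmmTMB variance components, including the AR1 block
lme1$modelStruct$corStruct   # estimated AR1 correlation (Phi) from lme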

Is there a way to include both PCSE and a Prais-Winsten correction in a fixed effects model in R (similar to the xtpcse command in Stata)?

I want to estimate a fixed effects model while using panel-corrected standard errors as well as a Prais-Winsten (AR1) transformation in order to address panel heteroscedasticity, contemporaneous spatial correlation and autocorrelation.
I have time-series cross-section data and want to perform regression analysis. I was able to estimate a fixed effects model, panel-corrected standard errors and Prais-Winsten estimates individually, and I was able to include panel-corrected standard errors in a fixed effects model. But I want all of them at once.
# Basic ols model
ols1 <- lm(y ~ x1 + x2, data = data)
summary(ols1)
# Fixed effects model
library('plm')
plm1 <- plm(y ~ x1 + x2, data = data, model = 'within')
summary(plm1)
# Panel Corrected Standard Errors
library(pcse)
lm.pcse1 <- pcse(ols1, groupN = Country, groupT = Time)
summary(lm.pcse1)
# Prais-Winsten estimates
library(prais)
prais1 <- prais_winsten(y ~ x1 + x2, data = data)
summary(prais1)
# Combination of Fixed effects and Panel Corrected Standard Errors
ols.fe <- lm(y ~ x1 + x2 + factor(Country) - 1, data = data)
pcse.fe <- pcse(ols.fe, groupN = Country, groupT = Time)
summary(pcse.fe)
With the Stata command xtpcse it is possible to include both panel-corrected standard errors and Prais-Winsten corrected estimates, with something along the lines of the following code:
xtpcse y x x x i.cc, c(ar1)
I would like to achieve this in R as well.
I am not sure that my answer will completely address your concern; these days I've been trying to deal with the same problem you mention.
In my case, I ran the prais_winsten function from the prais package, including the fixed effects in my model. Afterwards, I corrected for heteroskedasticity using the function vcovHC.prais, which is analogous to the vcovHC function from the sandwich package.
This basically gives you White's/sandwich heteroskedasticity-consistent covariance matrix which, if you then pass it to the coeftest function from the lmtest package, yields the table output with the corrected standard errors. Taking your posted example, see below the code that I have used:
# Prais-Winsten estimates with Fixed Effects
library(prais)
prais.fe <- prais_winsten(y ~ x1 + x2 + factor(Country), data = data)
library(lmtest)
prais.fe.w <- coeftest(prais.fe, vcov = vcovHC.prais(prais.fe, "HC1"))
prais.fe.w # print the object to see the output with the corrected standard errors.
Alas, I am aware that the sandwich heteroskedasticity-consistent standard errors are not exactly the same as Beck and Katz's PCSEs, because PCSE deals with panel heteroskedasticity while sandwich SEs address overall heteroskedasticity. I am not totally sure how much these two differ in practice, but something is something.
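If you specifically want Beck-Katz panel-corrected standard errors, note that the sandwich package also provides vcovPC(), which can be combined with the dummy-variable fixed-effects fit from your question. A hedged sketch (this yields PCSE for the fixed effects model, but without the Prais-Winsten transformation):
library(sandwich)
library(lmtest)
ols.fe <- lm(y ~ x1 + x2 + factor(Country) - 1, data = data)
coeftest(ols.fe, vcov = vcovPC(ols.fe, cluster = data$Country, order.by = data$Time))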
I hope my answer was somewhat helpful; this is actually my very first answer :D

Comparing script for random intercept and slope independent between nlme and lme4

For random (mixed) effects, I am making a comparison list of scripts between the two packages.
For independent random intercept and slope, if I am using the following code in lme4 package, what is the corresponding script in nlme?
model1 <- lmer(y ~ A + (1 | site) + (0 + A | site), data, REML = FALSE)
Also, for nested mixed effects, which calculate the random effects in a different way from the above, are my scripts correct?
model2 <- lme(y ~ A, random = ~ 1 | site/A, data, method = "REML")
and
model3 <- lmer(y ~ A + (1 | site) + (1 | site:A), data, REML = FALSE)
Thank you so much!
I hope this answer is not too late!
For your first model the version in nlme would be:
model1 <- lme(y ~ A,
              random = list(site = pdDiag(~ A)),
              data = data)
Your second and third models are equivalent. Model 3 in the lme4 package can also be written as:
model3 <- lmer(y ~ A + (1 | site/A), data, REML = FALSE)
I found this link that might help you a lot in comparing the nlme and lme4 packages:
https://rpsychologist.com/r-guide-longitudinal-lme-lmer#conditional-growth-model-dropping-intercept-slope-covariance
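As a quick, hedged check that the two specifications for model1 agree (assuming a data frame named data with a numeric predictor A and a grouping factor site), you can compare the variance components directly:
library(nlme)
library(lme4)
m_nlme <- lme(y ~ A, random = list(site = pdDiag(~ A)), data = data)
m_lme4 <- lmer(y ~ A + (1 | site) + (0 + A | site), data = data, REML = TRUE)
nlme::VarCorr(m_nlme)  # independent intercept and slope variances (no correlation)
lme4::VarCorr(m_lme4)  # same structure: two uncorrelated random-effect terms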

Variance-covariance matrix for random effect from GAM with mgcv package

The random effects and the variance-covariance matrix of the random effects are extracted with the lme4 package as follows:
library(lme4)
fm1 <- lmer(Reaction ~ Days + (1|Subject), sleepstudy)
fm1.rr <- ranef(fm1,condVar=TRUE)
fm1.pv <- attr(fm1.rr[[1]], "postVar")
I wonder how I can do this with mgcv?
The gam.vcomp function does extract the estimated variance components, but not for each level of the random effect.
library(mgcv)
fm2 <- gam(Reaction ~ Days + s(Subject, bs = "re"), data = sleepstudy, method = "REML")
gam.vcomp(fm2)
library(lme4)
data(sleepstudy)
fm1 <- lmer(Reaction ~ Days + (1|Subject), sleepstudy)
fm1.rr <- ranef(fm1, condVar = TRUE)$Subject[, 1]
fm1.pv <- sqrt(attr(ranef(fm1, condVar = TRUE)[["Subject"]], "postVar")[1, 1, ])
library(mgcv)
fm2 <- gam(Reaction ~ Days + s(Subject, bs = "re"),
           data = sleepstudy, method = "REML")
To extract the random effect for each Subject:
idx <- grep("Subject", names(coef(fm2)))
fm2.rr <- coef(fm2)[idx]
attributes(fm2.rr) <- NULL
We can see that the random effects in both models are identical, as expected.
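A quick way to eyeball this, reusing the objects defined above:
head(cbind(lme4 = fm1.rr, mgcv = fm2.rr))  # per-Subject random effects from the two fits
all.equal(fm1.rr, fm2.rr)                  # should be TRUE, or differ only within numerical tolerance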
To extract the variance-covariance matrix of the random effect and calculate the errors, we use the Vp component, which is a Bayesian posterior covariance matrix:
fm2.pv <- sqrt(diag(fm2$Vp))[idx]
or the frequentist estimated covariance matrix Ve:
fm2.pv <- sqrt(diag(fm2$Ve))[idx]
We can see that the random-effect errors estimated with mgcv differ slightly from those estimated with the lme4 model: errors based on the Bayesian posterior covariance matrix are larger, whereas those based on the frequentist matrix are smaller.
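A hedged side-by-side of the three flavours, reusing idx and fm1.pv from above:
fm2.pv.bayes <- sqrt(diag(fm2$Vp))[idx]  # Bayesian posterior standard errors
fm2.pv.freq  <- sqrt(diag(fm2$Ve))[idx]  # frequentist standard errors
head(cbind(lme4 = fm1.pv, mgcv_Vp = fm2.pv.bayes, mgcv_Ve = fm2.pv.freq))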
You can also use the gamm4 package, which is modelled on mgcv's gamm but uses lme4 underneath. The model would be fitted as:
library(gamm4)
fm3 <- gamm4(Reaction ~ Days, random = ~ (1 | Subject), data = sleepstudy)
Random effects and the variance-covariance matrix of the random effects can then be obtained from the $mer component, following the normal lme4 procedure.
fm3.rr <- ranef(fm3$mer, condVar = TRUE)
fm3.pv <- attr(fm3.rr[[1]], "postVar")[1, 1, ]
However, gamm4 can be much slower than gam, so read the help file to see when it best suits your needs.

R-package lavaan (structural equation modeling) - How to access estimates for latent variables?

Suppose I fit a model in lavaan like this:
# model
model <- '
L1 =~ x1 + x2
'
# fit & summary
fit <- lavaan(model = model, model.type = "sem", data = PE_test)
summary(fit)
How can I access the estimates for the latent variable (L1) for every case?
I was looking for some kind of fit$coeff using str(fit), but I wasn't successful so far.
Any suggestions?
The predict function will return latent variable score estimates. For example,
LatentScoreEstimates <- predict(fit)
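For lavaan fits, predict() dispatches to lavPredict(), so the following is equivalent and makes the requested quantity explicit (a brief sketch using the fit object from above):
LatentScoreEstimates <- lavPredict(fit, type = "lv")  # one row per case, one column per latent variable (here L1)
head(LatentScoreEstimates)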
