In Stata, I know that if I use the following command, I can get the logits for my dependent variable (thkbins) at each possible combination of my two predictor variables (cc & tv):
melogit thkbins cc#tv || school:,
Is there a way to produce similar output in R? I have been using the glmer function from the lme4 package, and while I can get the output with the interaction term, it isn't exactly what I can produce in Stata.
model1 <- glmer(thkbins ~ cc * tv + (1 | school),
                data = thkdata, family = binomial, nAGQ = 7)
summary(model1)
I would use clmm from the package ordinal (tutorial here):
model <- clmm(DepVar ~ IndVar + (1 | WithinVar), data = df)
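Applied to the variables from your question, that would look something like this (a sketch, assuming thkbins is an ordered factor and thkdata is your data frame):
library(ordinal)
# random intercept for school, full cc-by-tv interaction (cc * tv expands to cc + tv + cc:tv)
model <- clmm(thkbins ~ cc * tv + (1 | school), data = thkdata)
summary(model)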
I hope this helps.
I am running a linear mixed-effects model using the "nlme" package, looking at stress and lifestyle as predictors of change in cognition over 4 years in a longitudinal dataset. All variables in the model are continuous.
I am able to create the model and get the summary statistics using this code:
mod1 <- lme(MS ~ age + sex + edu + GDST1*Time + HLI*Time + GDST1*HLI*Time,
            random = ~ 1 | ID, data = NuAge_long, na.action = na.omit)
summary(mod1)
I am trying to use the "interactions" package to probe the 3-way interaction:
sim_slopes(model = mod1, pred = Time, modx = GDST1, mod2 = HLI, data = NuAge_long)
but am receiving this error:
Error in if (tcol == "df") tcol <- "t val." : argument is of length zero
I am also trying to plot the interaction using the same "interactions" package:
interact_plot(model = mod1, pred = Time, modx = GDST1, mod2 = HLI, data = NuAge_long)
and am receiving this error:
Error in UseMethod("family") : no applicable method for 'family' applied to an object of class "lme"
I can't seem to find out what these errors mean or why I'm getting them. Any help would be appreciated!
From ?interactions::sim_slopes:
The function is tested with ‘lm’, ‘glm’,
‘svyglm’, ‘merMod’, ‘rq’, ‘brmsfit’, ‘stanreg’ models. Models
from other classes may work as well but are not officially
supported. The model should include the interaction of
interest.
Note this does not include lme models. On the other hand, merMod models are those generated by lme4::[g]lmer(), and as far as I can tell you should be able to fit this model equally well with lmer():
library(lme4)
mod1 <- lmer(MS ~ age + sex + edu + GDST1*Time + HLI*Time + GDST1*HLI*Time
             + (1 | ID), data = NuAge_long)
(things will get harder if you want to specify correlation structures, e.g. correlation = corAR1(), which works for lme() but not lmer() ...)
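Once mod1 is refit with lmer(), the sim_slopes() and interact_plot() calls from your question should then run unchanged (a sketch, not tested against your data):
library(interactions)
# probe and plot the 3-way interaction on the merMod fit
sim_slopes(model = mod1, pred = Time, modx = GDST1, mod2 = HLI, data = NuAge_long)
interact_plot(model = mod1, pred = Time, modx = GDST1, mod2 = HLI, data = NuAge_long)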
I have estimated a Tobit model using the censReg function from the censReg package. Alternatively, the same Tobit model can be estimated using the tobit function in the AER package.
Now, I would really like to have some goodness-of-fit statistic, such as a pseudo-R2. However, whenever I try to estimate it, the output comes back as NA. For example:
Tobit <- censReg(occupancy_rate ~ ., left = -Inf, right = 1, data = Listing)
PseudoR2(Tobit, which = "McFadden")
[1] NA
So far, I have only seen pseudo-R2s reported when people use Stata. Does anyone know how to estimate one in R?
Alternatively, the Tobit model estimates (log)Sigma, which is basically the standard deviation of the residuals. Could I use this to calculate the R2?
All help is really appreciated.
You can use the DescTools package to calculate the pseudo-R2. You have not provided any sample data, so it is hard for me to run your model; I am using a default dataset instead:
library(DescTools)
r.glm <- glm(Survived ~ ., data=Untable(Titanic), family=binomial)
PseudoR2(r.glm, c("McFadden"))
For your model, you can use something like
library(AER)
data("Affairs", package = "AER")
fm.tobit <- tobit(affairs ~ age + yearsmarried + religiousness + occupation + rating,
data = Affairs)
# Helper for McFadden's pseudo-R2: 1 - logLik(full model) / logLik(intercept-only model)
pseudoR2 <- function(obj) 1 - as.vector(logLik(obj)/logLik(update(obj, . ~ 1)))
pseudoR2(fm.tobit)
#>[1] 0.05258401
Or, using censReg as you did:
library(censReg)
data("Affairs", package = "AER")
estResult <- censReg(affairs ~ age + yearsmarried + religiousness +
occupation + rating, data = Affairs)
summary(estResult)
pseudoR2(estResult)
#>[1] 0.05258401
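Applied back to the model in your question, the same helper should work (a sketch reusing the pseudoR2() function defined above; Listing, occupancy_rate, and the censoring limits are taken from your post):
Tobit <- censReg(occupancy_rate ~ ., left = -Inf, right = 1, data = Listing)
pseudoR2(Tobit)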
You can find more details about the pseudo-R2 at the following link:
R squared in logistic regression
I'm working on a project where we'd like to run a follow-up linear regression model on treatment-control data in which the treated units have been matched to controls using the cem package for coarsened exact matching:
match <- cem(treatment="cohort", data=df, drop=c("member_id","period","cohort_period"))
est <- att(match, total_cost ~ cohort + period + cohort_period, data = df)
where I'd like to estimate the coefficient and 95% CI for the "cohort_period" interaction term. It seems the att function in the cem package only estimates the coefficient for the specified treatment variable (in this case, "cohort") while adjusting for the other variables in the regression.
Is there a way to return the coefficients and 95% CIs for the other regression terms?
Figured it out! I was using the wrong package: instead of cem, I discovered that the MatchIt and Zelig packages allow me to perform both exact matching and parametric regression on the matched data:
library(MatchIt)
library(Zelig)
matched_df <- matchit(cohort ~ age_catg + sex + market_code + risk_score_catg,
                      method = "exact", data = df)
matched_df_reg <- zelig(total_cost ~ cohort + period + cohort_period,
                        data = match.data(matched_df), model = "ls")
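If all you need are the coefficients and 95% CIs, a plain lm() on the matched data is a lighter-weight alternative to Zelig (a sketch; match.data() adds a "weights" column that should be passed to lm()):
m_data <- match.data(matched_df)
fit <- lm(total_cost ~ cohort + period + cohort_period, data = m_data, weights = weights)
summary(fit)
confint(fit, level = 0.95)  # 95% CIs for every term, including cohort_period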
For random/mixed effects, I am putting together a list comparing model scripts between the two packages (nlme and lme4).
For an independent random intercept and slope, if I use the following code with the lme4 package, what is the corresponding script in nlme?
model1 <- lmer(y ~ A + (1 | site) + (0 + A | site), data, REML = FALSE)
Also, for nested mixed effects, which compute the random effects differently from the above, are my scripts correct?
model2 <- lme(y ~ A, random = ~ 1 | site/A, data, method = "REML")
and
model3 <- lmer(y ~ A + (1 | site) + (1 | site:A), data, REML = FALSE)
Thank you so much!
I hope this answer is not too late!
For your first model, the version in nlme would be:
model1 <- lme(y ~ A,
              random = list(site = pdDiag(~ A)),  # diagonal covariance: independent intercept and slope
              data = data, method = "ML")         # ML to match REML = FALSE above
Your second and third models are equivalent. Model 3 in the lme4 package can also be written as:
model3 <- lmer(y ~ A + (1 | site/A), data, REML = FALSE)
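As a quick sanity check that the two nested specifications agree, here is a sketch on simulated data (all values are made up; y, A, and site mirror the names in your question):
library(nlme)
library(lme4)
set.seed(101)
sim <- expand.grid(site = factor(1:10), A = factor(c("a", "b")), rep = 1:10)
site_eff  <- rnorm(10, sd = 2)   # random intercepts for site
siteA_eff <- rnorm(20, sd = 1)   # random effects for site:A
sim$y <- site_eff[sim$site] + siteA_eff[interaction(sim$site, sim$A)] + rnorm(nrow(sim))
m_nlme <- lme(y ~ A, random = ~ 1 | site/A, data = sim, method = "REML")
m_lme4 <- lmer(y ~ A + (1 | site) + (1 | site:A), data = sim)  # REML by default
VarCorr(m_nlme)  # the site and site:A variance components should match
VarCorr(m_lme4)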
I found this link, which might help you a lot in comparing the nlme and lme4 packages:
https://rpsychologist.com/r-guide-longitudinal-lme-lmer#conditional-growth-model-dropping-intercept-slope-covariance
I'm doing a replication of an estimation done with Stata's xtregar command, but I'm using R instead.
The xtregar command implements the method from the Baltagi and Wu (1999) paper "Unequally spaced panel data regressions with AR(1) disturbances". As Stata describes it:
xtregar fits cross-sectional time-series regression models when the disturbance term is first-order autoregressive. xtregar offers a within estimator for fixed-effects models and a GLS estimator for random-effects models. xtregar can accommodate unbalanced panels whose observations are unequally spaced over time.
So far, for the fixed-effects model, I used the plm package for R. The attempt looks like this:
plm(y ~ x1 + x2, data = A, effect = "twoways", model = "within")
Nevertheless, it is not complete (compared to the xtregar description) and the results are not quite like the ones Stata provides. Furthermore, Stata's command requires setting a panel variable and a time variable, a feature that is (as far as I can tell) absent from the plm environment.
Should I settle for plm, or is there another way of doing this?
PS: I searched different websites thoroughly but failed to find an equivalent to Stata's xtregar.
Update
After reading Croissant and Millo (2008), "Panel Data Econometrics in R: The plm Package", specifically section 7.4, "Some useful 'econometric' models in nlme", I used something like this for the random-effects part of the estimation:
gls(y ~ x1 + x2, data = A, correlation = corAR1(0, form = ~ year | pays), na.action = na.exclude)
Nevertheless, the following gives results closer to those of Stata:
lme(y ~ x1 + x2, data = A, random = ~ 1 | pays, correlation = corAR1(0, form = ~ year | pays), na.action = na.exclude)
Try {panelAR}. It is a package for panel-data regressions that addresses AR(1)-type autocorrelation.
Unfortunately, I do not own Stata, so I cannot test which correlation method to replicate in panelCorrMethod.
library(panelAR)
model <-
panelAR(formula = y ~ x1 + x2,
data = A,
panelVar = 'pays',
timeVar = 'year',
autoCorr = 'ar1',
rho.na.rm = TRUE,
bound.rho = TRUE,
panelCorrMethod = 'phet' # You might need to change this parameter: 'phet' uses the Huber-White sandwich estimator for heteroskedasticity, but others are available.
)
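The fitted object can then be inspected as usual (the exact output layout depends on the panelAR version):
summary(model)  # coefficients and standard errors for the AR(1) panel model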