Simulation "zelig style" for GLMER multilevel in r - r

I run a logistic mixed-effects regression in R. The model looks something like this:
glmer(Y ~ X1 + X2 + X1:X2 + (1 | country), data = hdp, family = binomial)
Now I would like to plot predicted probabilities of Y based on the fixed effects. I tried Zelig, as that is what I learned as the easiest way to run simulations and get predicted probabilities, but the new version does not include multilevel models and the former Zelig Multilevel package is very "unstable". Is there an easy alternative? How can I run simulations whose results I can plot?
Thanks in advance!

You can use the merTools package.
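For example, merTools::predictInterval() simulates from the fitted model and can return predicted probabilities directly. Below is a minimal sketch using the names from the question (hdp, X1, X2, country); treat the exact arguments as version-dependent and adjust to your data:

library(lme4)
library(merTools)

# Fit the model from the question
m <- glmer(Y ~ X1 + X2 + X1:X2 + (1 | country), data = hdp, family = binomial)

# Grid of covariate values to predict over (X2 held at its mean,
# country set to one observed level so the grouping factor is present)
newdat <- expand.grid(
  X1      = seq(min(hdp$X1), max(hdp$X1), length.out = 100),
  X2      = mean(hdp$X2),
  country = unique(hdp$country)[1]
)

# Simulate from the estimated parameter distributions and return
# predicted probabilities with an uncertainty interval; recent merTools
# versions also accept which = "fixed" to use the fixed effects only
preds <- predictInterval(m, newdata = newdat, n.sims = 1000,
                         type = "probability", include.resid.var = FALSE)

# Plot the simulated predicted probabilities against X1
plot(newdat$X1, preds$fit, type = "l", ylim = c(0, 1),
     xlab = "X1", ylab = "Predicted probability of Y")
lines(newdat$X1, preds$upr, lty = 2)
lines(newdat$X1, preds$lwr, lty = 2)

If you prefer to do the "Zelig style" simulation yourself, you can also draw fixed-effect coefficients from MASS::mvrnorm() using fixef(m) and vcov(m), and apply plogis() to the simulated linear predictors by hand.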

Related

What is the survest equivalent when running a mixed effects cox regression using coxme in R?

I ran this mixed-effects model:
survival <- coxme(Surv(tr_dur_a, ftr_2) ~ mod_muac + drev_oedema_now_day0 + danger_adm +
                    tr_diet2 + danger_adm + tr_daysec + hiv_stat + breast_feeding_adm +
                    sex_adm + mod_diar + diag_sepsis_adm + diag_lrti_pneumonia_adm +
                    other_dx + (1 | site), data = survival1b)
I would like the coxme equivalent of survest. My aim is to get survival probabilities, so I am aiming for something like:
Csur <- survest(survival,newdata = Cnew, times=time)

how to run a GLMM on R

I am trying to run a generalized linear mixed model (GLMM) in R. I have two fixed factors and two random factors.
However, there are a lot of holes (missing values) in my data set, and I am struggling to find code to run the GLMM; all I have found is glm().
Can someone please walk me through this? I know very little about R and coding.
You can use the lme4 package as well. The function for fitting a generalized linear mixed model is glmer().
Example:
install.packages("lme4") #If you still haven't done it.
library(lme4)
myfirstmodel <- glmer(variable_to_study ~ fixed_variable + (1|random_effect_varible), data = mydataset, family = poisson)
family = poisson was just an example. Choose the family according to the nature of variable_to_study (e.g., poisson for count data, binomial for binary data).
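Since the question mentions two fixed factors, two random factors, and missing values, here is a minimal sketch of how that might look; all variable names are hypothetical placeholders:

library(lme4)

# Replace response, factor1, factor2, random1, random2 with your own columns.
# Rows with missing values in the model variables are dropped automatically
# (via na.action = na.omit), so "holes" in the data are simply excluded.
m <- glmer(response ~ factor1 + factor2 +
             (1 | random1) + (1 | random2),
           data = mydata, family = poisson,
           na.action = na.omit)
summary(m)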

Hierarchical logistic regression

I am trying to predict depression from two quantitative variables and their interaction. However, before seeing how much variance they explain, I want to control for a few variables.
My plan was to build a logistic regression model:
Depression = Covariates + IV1 + IV2 + IV1:IV2
Unfortunately, R doesn't seem to care about the order in which you add the variables to the model (Type III sum of squares?). Is there a way to build a logistic regression model in which the order does matter?
Thanks in advance!
-Lukas
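One way to make the order matter is to use sequential (Type I) tests or nested model comparisons rather than Type III tests. Here is a sketch under hypothetical variable names (depression, cov1, cov2, IV1, IV2):

# anova() on a glm performs sequential (Type I) tests, so each term is
# assessed after the terms listed before it: covariates first, then the IVs.
fit <- glm(depression ~ cov1 + cov2 + IV1 + IV2 + IV1:IV2,
           data = mydata, family = binomial)
anova(fit, test = "Chisq")

# Equivalently, compare nested models directly (likelihood-ratio test of
# the block IV1 + IV2 + IV1:IV2 after the covariates):
fit0 <- glm(depression ~ cov1 + cov2, data = mydata, family = binomial)
anova(fit0, fit, test = "Chisq")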

Logistic regression with robust clustered standard errors in R

A newbie question: does anyone know how to run a logistic regression with clustered standard errors in R? In Stata it's just logit Y X1 X2 X3, vce(cluster Z), but unfortunately I haven't figured out how to do the same analysis in R. Thanks in advance!
You might want to look at the rms (Regression Modeling Strategies) package: lrm() fits a logistic regression model, and if fit is the name of your fitted object, you'd have something like this:
fit <- lrm(disease ~ age + study + rcs(bmi, 3), x = T, y = T, data = dataf)
fit
robcov(fit, cluster = dataf$id)   # cluster-robust (Huber sandwich) covariance
bootcov(fit, cluster = dataf$id)  # cluster bootstrap covariance
You have to specify x=T, y=T in the model statement. rcs indicates restricted cubic splines with 3 knots.
Another alternative would be to use the sandwich and lmtest packages as follows. Suppose that z is a column with the cluster indicators in your data set dat. Then
# load libraries
library("sandwich")
library("lmtest")
# fit the logistic regression
fit = glm(y ~ x, data = dat, family = binomial)
# get results with clustered standard errors (of type HC0)
coeftest(fit, vcov. = vcovCL(fit, cluster = dat$z, type = "HC0"))
will do the job.
I have been banging my head against this problem for the past two days; I magically found what appears to be a new package which seems destined for great things. For example, I am also running some cluster-robust Tobit models in my analysis, and this package has that functionality built in as well. Not to mention that the syntax is much cleaner than in all the other solutions I've seen (we're talking near-Stata levels of clean).
So for your toy example, I'd run:
library(Zelig)
logit <- zelig(Y ~ X1 + X2 + X3, data = data, model = "logit", robust = TRUE, cluster = "Z")
Et voilà!
There is a command glm.cluster in the R package miceadds which seems to give the same results for logistic regression as Stata does with the option vce(cluster). See the documentation here.
In one of the examples on this page, the commands
mod2 <- miceadds::glm.cluster(data=dat, formula=highmath ~ hisei + female,
cluster="idschool", family="binomial")
summary(mod2)
give the same robust standard errors as the Stata command
logit highmath hisei female, vce(cluster idschool)
e.g. a standard error of 0.004038 for the variable hisei.

How do you fit a linear mixed model with an AR(1) random effects correlation structure in R?

I am trying to use R to rerun someone else's project, so we need to use some macros in R.
Here comes a very basic question:
m1.nlme <- lme(log.bp.dia ~ M25.9to9.ma5iqr + temp.c.9to9.ma4iqr + o3.ma5iqr +
                 sea_spring + sea_summer + sea_fall + BMI + male + age_ini,
               data = barbara.1.clean, random = ~ 1 | study_id)
Since the SAS model uses an AR(1) [first-order autoregressive] covariance structure for the within-person variance, I am not sure how to do this in R.
And where can I see the index of the different covariance structures that are available, like unstructured?
Thanks
I don't know what you mean by the "index" for different models, but to specify an AR(1) correlation structure for the within-group residuals, you can add correlation = corAR1() to your lme call.
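A minimal sketch of how that could look for the model above; time_var is a hypothetical within-person time index, so replace it with whatever variable orders the repeated measurements in your data:

library(nlme)

# Same fixed and random effects as before, now with an AR(1) correlation
# structure for the residuals within each study_id.
m1.ar1 <- lme(log.bp.dia ~ M25.9to9.ma5iqr + temp.c.9to9.ma4iqr + o3.ma5iqr +
                sea_spring + sea_summer + sea_fall + BMI + male + age_ini,
              data = barbara.1.clean,
              random = ~ 1 | study_id,
              correlation = corAR1(form = ~ time_var | study_id))

summary(m1.ar1)  # the estimated Phi is the lag-1 autocorrelation r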
The correlation at lag $1$ is, say, $r$, where $-1 < r < 1$ for a stationary $AR(1)$ model. The correlation at lag $k \geq 1$ is $r^k$. This gives you the autocovariance matrix: just multiply by the variance of $X_t$.