How to get a confidence interval around the ICC from an lmer model in R

I estimate interrater reliability using the ICC from a random-intercept mixed-effects linear regression model. But how do I get a confidence interval around the ICC?
elmer <- lmer(A_total ~ (1|Subject), data = data)
summary(elmer)
Linear mixed model fit by REML ['lmerMod']
Formula: A_total ~ 1 + (1 | Subject)
Data: data
REML criterion at convergence: 184.7
Scaled residuals:
Min 1Q Median 3Q Max
-1.41507 -0.29387 0.02918 0.45656 1.27346
Random effects:
Groups Name Variance Std.Dev.
Subject (Intercept) 3663.0 60.52
Residual 116.3 10.79
Number of obs: 20, groups: Subject, 10
Fixed effects:
Estimate Std. Error t value
(Intercept) 337.35 19.29 17.49
I calculate the ICC to be 3663.0 / (3663.0 + 116.3) ≈ 0.969. But how can I get a confidence interval around this ICC?
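A parametric bootstrap on the fitted model is one common way to attach a confidence interval to the ICC. The point estimate below is plain arithmetic on the variance components in the summary; the bootstrap part is a hedged sketch that assumes the lme4 package is installed, and `boot_icc` is a hypothetical helper written for this example:

```r
# Point estimate of the ICC from the variance components printed above
var_subject  <- 3663.0   # Subject (Intercept) variance
var_residual <- 116.3    # Residual variance
icc <- var_subject / (var_subject + var_residual)
round(icc, 3)  # 0.969

# Sketch of a parametric bootstrap on the fitted model `elmer`
# (assumes lme4 is installed; boot_icc is a hypothetical helper):
# boot_icc <- function(fit) {
#   vc <- as.data.frame(lme4::VarCorr(fit))
#   vc$vcov[1] / sum(vc$vcov)   # Subject variance / total variance
# }
# b <- lme4::bootMer(elmer, boot_icc, nsim = 1000, seed = 1)
# quantile(b$t, c(0.025, 0.975))  # 95% percentile interval for the ICC
```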

Related

Interpretation of an lmer output

I'm new here. I've tried to fit an lmer model:
fit <- lmer(RI ~ SET + LOG_VP + (1|API) + (1|ODOUR), data = a)  # avoid naming the object `lmer`, which masks the function
Could someone help me interpret the output?
Linear mixed model fit by REML ['lmerMod']
Formula: RI ~ SET + LOG_VP + (1 | API) + (1 | ODOUR)
Data: a
REML criterion at convergence: -349.9
Scaled residuals:
Min 1Q Median 3Q Max
-2.6167 -0.4719 -0.0357 0.5053 8.4850
Random effects:
Groups Name Variance Std.Dev.
API (Intercept) 0.01431 0.11964
ODOUR (Intercept) 0.00415 0.06442
Residual 0.00778 0.08820
Number of obs: 238, groups: API, 34; ODOUR, 14
Fixed effects:
Estimate Std. Error t value
(Intercept) 0.15716 0.08792 1.787
SET 0.08180 0.05490 1.490
LOG_VP 0.03527 0.01968 1.792
Correlation of Fixed Effects:
(Intr) SET
SET -0.950
LOG_VP 0.083 -0.049
Thank you!
It depends on what your research question is, but:
- the expected response when both fixed effects are zero is 0.15716
- a 1-unit change in SET is associated with a 0.08180 change in RI
- a 1-unit change in LOG_VP is associated with a 0.03527 change in RI
- the variance at the API level is 0.01431
- the variance at the ODOUR level is 0.00415
- the residual (unit-level) variance is 0.00778
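To make those points concrete, the predictions and variance shares can be computed directly from the printed estimates. A sketch in base R; the SET and LOG_VP values below are made up purely for illustration:

```r
# Predicted RI at hypothetical values SET = 1, LOG_VP = 2, ignoring the
# random effects (i.e. for a "typical" API and ODOUR):
b <- c(intercept = 0.15716, SET = 0.08180, LOG_VP = 0.03527)
pred <- b["intercept"] + b["SET"] * 1 + b["LOG_VP"] * 2
unname(pred)  # 0.3095

# The three variance components as shares of the total variance:
vc <- c(API = 0.01431, ODOUR = 0.00415, Residual = 0.00778)
round(vc / sum(vc), 2)  # API ~0.55, ODOUR ~0.16, Residual ~0.30
```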

Why do I get two random slope terms when forcing no correlation between random slope and intercept in lme4?

I am running a mixed effects logistic regression using lme4 in R.
I have one predictor that is a dichotomous categorical variable. It is coded 1/0 and is defined as a factor.
I find that the random item intercept is perfectly correlated with the random item slope for my predictor. So, I run a new model in which they are uncorrelated using the following code:
m1 <- glmer(DV ~ 1 + PPTGender + (1|Subject) + (1 + PPTGender||Item),
            data = data, family = "binomial")
However, the output gives me two terms for the random slope:
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
Family: binomial ( logit )
Formula: DV ~ 1 + PPTGender + (1 | Subject) + (1 + PPTGender || Item)
Data: data
AIC BIC logLik deviance df.resid
499.7 526.9 -242.9 485.7 353
Scaled residuals:
Min 1Q Median 3Q Max
-1.7334 -1.0057 0.6312 0.8807 1.3858
Random effects:
Groups Name Variance Std.Dev. Corr
Subject (Intercept) 6.323e-10 2.514e-05
Item (Intercept) 2.785e-09 5.278e-05
Item.1 PPTGender0 5.229e-01 7.231e-01
PPTGender1 6.889e-03 8.300e-02 -1.00
Number of obs: 360, groups: Subject, 60; Item, 36
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.28229 0.17833 1.583 0.113
PPTGender -0.07718 0.29534 -0.261 0.794
Correlation of Fixed Effects:
(Intr)
PPTGndr -0.635
Can anyone explain why this happens?
If I redefine the PPTGender variable as a numeric variable like so:
data$PPTGender<-as.numeric(as.character(data$PPTGender))
It goes away:
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
Family: binomial ( logit )
Formula: DV ~ 1 + PPTGender + (1 | Subject) + (1 + PPTGender || Item)
Data: data
AIC BIC logLik deviance df.resid
500.8 520.2 -245.4 490.8 355
Scaled residuals:
Min 1Q Median 3Q Max
-1.4075 -1.0489 0.7410 0.8472 1.1603
Random effects:
Groups Name Variance Std.Dev.
Subject (Intercept) 3.638e-10 1.907e-05
Item (Intercept) 2.081e-01 4.562e-01
Item.1 PPTGender 1.091e-08 1.044e-04
Number of obs: 360, groups: Subject, 60; Item, 36
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.26056 0.14625 1.782 0.0748 .
PPTGender -0.03009 0.26720 -0.113 0.9103
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Correlation of Fixed Effects:
(Intr)
PPTGendr -0.397
Is this just a quirk in R? Is there anything wrong with this latter approach?
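One way to see what is going on (a sketch): the double-bar syntax expands (1 + PPTGender || Item) into (1 | Item) + (0 + PPTGender | Item), and a `0 + factor` term generates one indicator column per level of the factor, so lme4 estimates a separate variance for each level (PPTGender0, PPTGender1). A numeric predictor generates a single slope column, which is why the recoding makes the extra term disappear:

```r
g <- factor(c(0, 1, 0, 1))          # two-level factor, as in the question
x <- as.numeric(as.character(g))    # numeric recoding, as in the question

ncol(model.matrix(~ 0 + g))  # 2: one indicator column per factor level
ncol(model.matrix(~ 0 + x))  # 1: a single slope column
```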

How do I grab the AR1 estimate and its SE from the gls function in R?

I am attempting to get the lag one autocorrelation estimates from the gls function (package {nlme}) with its SE. This is being done on a non-stationary univariate time series. Here is the output:
Generalized least squares fit by REML
Model: y ~ year
Data: tempdata
AIC BIC logLik
51.28921 54.37957 -21.64461
Correlation Structure: AR(1)
Formula: ~1
Parameter estimate(s):
Phi
0.9699799
Coefficients:
Value Std.Error t-value p-value
(Intercept) -1.1952639 3.318268 -0.3602072 0.7234
year -0.2055264 0.183759 -1.1184567 0.2799
Correlation:
(Intr)
year -0.36
Standardized residuals:
Min Q1 Med Q3 Max
-0.12504485 -0.06476076 0.13948378 0.51581993 0.66030397
Residual standard error: 3.473776
Degrees of freedom: 18 total; 16 residual
The phi coefficient seemed promising since it was under the correlation structure in the output
Correlation Structure: AR(1)
Formula: ~1
Parameter estimate(s):
Phi
0.9699799
but it regularly goes over one, which is not possible for a correlation. Then there is the
Correlation:
(Intr)
Yearctr -0.36
but I was advised that this was likely not a correct estimate for the data (there were multiple test sites so this is just one of the unexpected estimates). Is there a function that outputs an AR1 estimate and its SE (other than arima)?
sample of autocorrelated data:
library(nlme)
set.seed(29)
y <- diffinv(rnorm(500))  # random walk, so the series is non-stationary
x <- seq_along(y)
gls(y ~ x, correlation = corAR1(form = ~ 1))
Note: at my adviser's request, I am comparing arima() to gls() (or another method) on the AR1 estimates and their SEs.
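Phi and an approximate interval for it can be pulled out with coef() on the corStruct and with intervals(). A sketch on a stationary simulated AR(1) series; note that on the random-walk data in the question, Phi sits near the boundary at 1 and intervals() can fail with a non-positive-definite variance-covariance message:

```r
library(nlme)
set.seed(1)
y <- as.numeric(arima.sim(n = 200, list(ar = 0.7)))  # stationary AR(1), phi = 0.7
x <- seq_along(y)
fit <- gls(y ~ x, correlation = corAR1(form = ~ 1))

# Point estimate of the AR(1) parameter on the correlation scale:
phi <- coef(fit$modelStruct$corStruct, unconstrained = FALSE)

# 95% CI for Phi (rows of the corStruct element of intervals()):
ci <- intervals(fit, which = "var-cov")$corStruct
ci
# A rough SE backed out of the normal-approximation interval:
(ci[, "upper"] - ci[, "lower"]) / (2 * 1.96)
```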

Interpreting the output of summary(glmer(...)) in R

I'm an R noob; I hope you can help me.
I'm trying to analyse a dataset in R, but I'm not sure how to interpret the output of summary(glmer(...)), and the documentation isn't much help:
> data_chosen_stim <- glmer(open_chosen_stim ~ closed_chosen_stim + day + (1|ID), family = binomial, data = chosenMovement)
> summary(data_chosen_stim)
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
Family: binomial ( logit )
Formula: open_chosen_stim ~ closed_chosen_stim + day + (1 | ID)
Data: chosenMovement
AIC BIC logLik deviance df.resid
96.7 105.5 -44.4 88.7 62
Scaled residuals:
Min 1Q Median 3Q Max
-1.4062 -1.0749 0.7111 0.8787 1.0223
Random effects:
Groups Name Variance Std.Dev.
ID (Intercept) 0 0
Number of obs: 66, groups: ID, 35
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.4511 0.8715 0.518 0.605
closed_chosen_stim2 0.4783 0.5047 0.948 0.343
day -0.2476 0.5060 -0.489 0.625
Correlation of Fixed Effects:
(Intr) cls__2
clsd_chsn_2 -0.347
day -0.916 0.077
I understand the GLM behind it, but I can't see the weights of the independent variables and their error bounds.
Update: weights.merMod already has a type argument ...
I think what you're looking for is weights(object, type = "working"). I believe these are the diagonal elements of W in your notation?
Here's a trivial example that matches up the results of glm and glmer (since the random effect is bogus and gets an estimated variance of zero, the fixed effects, weights, etc. converge to the same values).
Note that the weights() accessor returns the prior weights by default (these are all equal to 1 for the example below).
Example (from ?glm):
d.AD <- data.frame(treatment = gl(3, 3),
                   outcome = gl(3, 1, 9),
                   counts = c(18, 17, 15, 20, 10, 20, 25, 13, 12))
glm.D93 <- glm(counts ~ outcome + treatment, family = poisson(),
               data = d.AD)
library(lme4)
d.AD$f <- 1  ## dummy grouping variable
glmer.D93 <- glmer(counts ~ outcome + treatment + (1|f),
                   family = poisson(),
                   data = d.AD,
                   control = glmerControl(check.nlev.gtr.1 = "ignore"))
Fixed effects and weights are the same:
all.equal(fixef(glmer.D93), coef(glm.D93))  ## TRUE
all.equal(unname(weights(glm.D93, type = "working")),
          weights(glmer.D93, type = "working"),
          tol = 1e-7)  ## TRUE

Extracting Random effects from nlme summary

I can extract the fixed effects from the nlme summary using summary(fm1), but I'm struggling to get the Random effects: portion.
fm1 <- lme(distance ~ age, Orthodont, random = ~ age | Subject)
summary(fm1)
Linear mixed-effects model fit by REML
Data: Orthodont
AIC BIC logLik
454.6367 470.6173 -221.3183
Random effects:
Formula: ~age | Subject
Structure: General positive-definite, Log-Cholesky parametrization
StdDev Corr
(Intercept) 2.3270340 (Intr)
age 0.2264278 -0.609
Residual 1.3100397
Fixed effects: distance ~ age
Value Std.Error DF t-value p-value
(Intercept) 16.761111 0.7752460 80 21.620377 0
age 0.660185 0.0712533 80 9.265333 0
Correlation:
(Intr)
age -0.848
Standardized Within-Group Residuals:
Min Q1 Med Q3 Max
-3.223106086 -0.493761144 0.007316631 0.472151121 3.916033210
Number of Observations: 108
Number of Groups: 27
Any help will be highly appreciated. Thanks.
Use ranef(fm1) to extract the random effects for each subject.
Updated to give code for extraction from the summary table:
> VarCorr(fm1)
Subject = pdLogChol(age)
Variance StdDev Corr
(Intercept) 5.41508758 2.3270341 (Intr)
age 0.05126955 0.2264278 -0.609
Residual 1.71620400 1.3100397
> temp <- VarCorr(fm1)
> temp[,2]
(Intercept) age Residual
"2.3270341" "0.2264278" "1.3100397"
> temp[1,2]
[1] "2.3270341"
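Note that the entries returned by VarCorr() on an lme fit are character strings (as the quotes in the output above show), so convert them before doing arithmetic. A sketch using the same Orthodont fit:

```r
library(nlme)
fm1 <- lme(distance ~ age, Orthodont, random = ~ age | Subject)
temp <- VarCorr(fm1)

# temp[1, 2] is the "(Intercept)" StdDev, stored as a character string:
sd_intercept <- as.numeric(temp[1, 2])
sd_intercept^2  # back on the variance scale, ~5.415
```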
