Running a CFA in lavaan: displaying the correlation between latent variables (R)

I have run a confirmatory factor analysis (CFA) and would now like to apply the Fornell/Larcker criterion. For that, I need the correlations between the latent variables. How can I display/retrieve them?
I have tried the following commands, each of which generates output:
standardizedSolution(fit)
summary(fit, fit.measures=TRUE)
lavInspect(fit,"standardized")
But none of these commands produces a "phi" (the covariance matrix of the latent variables). Thus, I have two questions:
1) Does anyone know how to display the correlations between the latent variables of a confirmatory factor analysis in R?
2) Take a look at the output of lavInspect(fit,"standardized") (see the link at the bottom of the text). Instead of a "phi", it generates a "$psi". Could that "psi" be the "phi"? The matrix it generates looks like a correlation matrix.
Here is the code:
#packages
library(lavaan)
library(readr)
CNCS<- read_delim("Desktop/20190703 Full Launch/Regressionen/Factor analysis/CNCS -47 Reversed.csv",
";", escape_double = FALSE, trim_ws = TRUE)
View(CNCS)
library(carData)
library(car)
CNCS.model <-
'AttitudeTowardsTheDeal =~ Q42_1 + Q42_2 + Q42_3
SubjectiveNormsImportance =~ Q43_r1 + Q43_r2 + Q43_r3 + Q43_r4
SubjectiveNormsFavour =~ Q44_r1 + Q44_r2 + Q44_r3 + Q44_r4
EaseOfPurchasing =~ Q45_r1 + Q45_r2 + Q45_r3 + Q45_r4 + Q45_r5 + Q45_r6
SE =~ Q3_r1 + Q3_r2 + Q3_r3 + Q4_r4
ConsumerInnovativeness =~ Q4_r1 + Q4_r2 + Q4_r3 + Q4_r4 + Q4_r5
PurchaseIntention =~ Q41moeglich_1 + Q41gewiss_1 + Q1wahrscheinlich_1 + Q41vorauss_1'
fit <- cfa(CNCS.model, data=CNCS)
summary(fit, fit.measures=TRUE)
lavInspect(fit,"standardized")
standardizedSolution(fit)
Partial output of lavInspect(fit,"standardized"):
Please follow the link to the screenshot of the partial output of lavInspect().

Take the cfa example given in the manual as
library(lavaan)
## The famous Holzinger and Swineford (1939) example
HS.model <- ' visual =~ x1 + x2 + x3
textual =~ x4 + x5 + x6
speed =~ x7 + x8 + x9 '
fit <- cfa(HS.model, data=HolzingerSwineford1939)
and include the standardized estimates in the summary with
summary(fit, standardized = TRUE)
obtaining
...
Latent Variables:
Estimate Std.Err z-value P(>|z|) Std.lv Std.all
visual =~
x1 1.000 0.900 0.772
x2 0.554 0.100 5.554 0.000 0.498 0.424
x3 0.729 0.109 6.685 0.000 0.656 0.581
textual =~
x4 1.000 0.990 0.852
x5 1.113 0.065 17.014 0.000 1.102 0.855
x6 0.926 0.055 16.703 0.000 0.917 0.838
speed =~
x7 1.000 0.619 0.570
x8 1.180 0.165 7.152 0.000 0.731 0.723
x9 1.082 0.151 7.155 0.000 0.670 0.665
Covariances:
Estimate Std.Err z-value P(>|z|) Std.lv Std.all
visual ~~
textual 0.408 0.074 5.552 0.000 0.459 0.459
speed 0.262 0.056 4.660 0.000 0.471 0.471
textual ~~
speed 0.173 0.049 3.518 0.000 0.283 0.283
Variances:
Estimate Std.Err z-value P(>|z|) Std.lv Std.all
.x1 0.549 0.114 4.833 0.000 0.549 0.404
.x2 1.134 0.102 11.146 0.000 1.134 0.821
.x3 0.844 0.091 9.317 0.000 0.844 0.662
.x4 0.371 0.048 7.779 0.000 0.371 0.275
.x5 0.446 0.058 7.642 0.000 0.446 0.269
.x6 0.356 0.043 8.277 0.000 0.356 0.298
.x7 0.799 0.081 9.823 0.000 0.799 0.676
.x8 0.488 0.074 6.573 0.000 0.488 0.477
.x9 0.566 0.071 8.003 0.000 0.566 0.558
visual 0.809 0.145 5.564 0.000 1.000 1.000
textual 0.979 0.112 8.737 0.000 1.000 1.000
speed 0.384 0.086 4.451 0.000 1.000 1.000
You find the entries of the covariance matrix in the Covariances: and Variances: sections, in column Estimate, and the entries of the correlation matrix in column Std.lv (with the latent variances set to unity, the latent covariances become correlations).
Note that inspect, or rather lavInspect, provides the argument what, which defaults to "free". Taken from the manual, the three other relevant options are:
"est": A list of model matrices. The values represent the estimated model parameters. Aliases: "estimates", and "x".
"std": A list of model matrices. The values represent the (completely) standardized model parameters (the variances of both the observed and the latent variables are set to unity). Aliases: "std.all", "standardized".
"std.lv": A list of model matrices. The values represent the standardized model parameters (only the variances of the latent variables are set to unity).
These correspond to the summary columns Estimate, Std.all and Std.lv, respectively. Further, try the following line:
cov2cor(lavInspect(fit, what = "est")$psi)
In case of any remaining doubt, I recommend consulting the tutorial, the package's support infrastructure, or the homepage.
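To answer the direct question: the $psi matrix is the latent (co)variance matrix, i.e. what the question calls "phi". A minimal sketch on the Holzinger-Swineford example from above, showing two equivalent ways to get the latent correlation matrix (lavInspect also accepts what = "cor.lv" for this):

```r
library(lavaan)

## Holzinger & Swineford (1939) example from above
HS.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9 '
fit <- cfa(HS.model, data = HolzingerSwineford1939)

## 1) convert the estimated latent covariance matrix ($psi) to correlations
phi <- cov2cor(lavInspect(fit, what = "est")$psi)

## 2) ask lavaan for the latent correlation matrix directly
phi2 <- lavInspect(fit, what = "cor.lv")

phi   # e.g. the visual-textual entry matches the 0.459 in column Std.all above
```

Either matrix can then be fed into the Fornell/Larcker comparison against the average variance extracted.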

Related

How to not allow error terms to correlate in SEM with lavaan

I am currently working on a structural equation modelling analysis with a dataset and I am running into a few problems. Before running the full SEM, I intended to run a CFA to replicate the psychometric testing done with the measure I am using. This measure has 24 items, which make up 5 subscales (latent variables), which in turn load onto a "total" higher-order factor. In the literature they describe that "In all models, the items were constrained to load on one factor only, error terms were not allowed to correlate, and the variance of the factors was fixed to 1".
I've constrained items to load onto one factor and set the variance of those factors to 1, but I am having trouble specifying in my model that the error terms are not allowed to correlate. Do they mean that the error terms of the items are not allowed to correlate? Is there an easy way to do this in lavaan, or do I literally have to write "y1 ~~ 0*y2", "y1 ~~ 0*y3", and so on for every item?
Thank you in advance for the help.
By default the error terms do not correlate; the authors presumably meant that they did not add any such residual correlations (e.g., ones suggested by modification indices). Correlating items' residuals within the same factor is a common modification, but it is never the default. Here is an example of a hierarchical model with three first-order factors, with the factor variances fixed to one, and with no error terms correlated:
library(lavaan)
#> This is lavaan 0.6-7
#> lavaan is BETA software! Please report any bugs.
#>
HS.model3 <- ' visual =~ x1 + x2 + x3
textual =~ x4 + x5 + x6
speed =~ x7 + x8 + x9
higher =~ visual + textual + speed'
fit6 <- cfa(HS.model3, data = HolzingerSwineford1939, std.lv = TRUE)
summary(fit6)
#> lavaan 0.6-7 ended normally after 36 iterations
#>
#> Estimator ML
#> Optimization method NLMINB
#> Number of free parameters 21
#>
#> Number of observations 301
#>
#> Model Test User Model:
#>
#> Test statistic 85.306
#> Degrees of freedom 24
#> P-value (Chi-square) 0.000
#>
#> Parameter Estimates:
#>
#> Standard errors Standard
#> Information Expected
#> Information saturated (h1) model Structured
#>
#> Latent Variables:
#> Estimate Std.Err z-value P(>|z|)
#> visual =~
#> x1 0.439 0.194 2.257 0.024
#> x2 0.243 0.108 2.253 0.024
#> x3 0.320 0.138 2.326 0.020
#> textual =~
#> x4 0.842 0.064 13.251 0.000
#> x5 0.937 0.071 13.293 0.000
#> x6 0.780 0.060 13.084 0.000
#> speed =~
#> x7 0.522 0.066 7.908 0.000
#> x8 0.616 0.067 9.129 0.000
#> x9 0.564 0.064 8.808 0.000
#> higher =~
#> visual 1.791 0.990 1.809 0.070
#> textual 0.617 0.129 4.798 0.000
#> speed 0.640 0.143 4.489 0.000
#>
#> Variances:
#> Estimate Std.Err z-value P(>|z|)
#> .x1 0.549 0.114 4.833 0.000
#> .x2 1.134 0.102 11.146 0.000
#> .x3 0.844 0.091 9.317 0.000
#> .x4 0.371 0.048 7.779 0.000
#> .x5 0.446 0.058 7.642 0.000
#> .x6 0.356 0.043 8.277 0.000
#> .x7 0.799 0.081 9.823 0.000
#> .x8 0.488 0.074 6.573 0.000
#> .x9 0.566 0.071 8.003 0.000
#> .visual 1.000 #fixed...
#> .textual 1.000 #fixed...
#> .speed 1.000 #fixed...
#> higher 1.000
Created on 2021-03-08 by the reprex package (v0.3.0)
As you can observe, there are no residual correlations, and both the first-order and second-order factors have their variances fixed to 1 (i.e. std.lv = TRUE).
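If you ever do want to fix a particular residual covariance to zero explicitly (rather than relying on the default), the lavaan syntax uses ~~ with a 0* premultiplier. A minimal sketch on the same Holzinger-Swineford data; the constraint is redundant here, but it shows the syntax:

```r
library(lavaan)

## explicitly fix the residual covariance between x1 and x2 to zero;
## redundant (it is already zero by default), but shows the "0*" syntax
HS.model4 <- ' visual  =~ x1 + x2 + x3
               textual =~ x4 + x5 + x6
               speed   =~ x7 + x8 + x9
               x1 ~~ 0*x2 '
fit7 <- cfa(HS.model4, data = HolzingerSwineford1939, std.lv = TRUE)

pe <- parameterEstimates(fit7)
subset(pe, lhs == "x1" & op == "~~" & rhs == "x2")   # est = 0, fixed, not estimated
```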

Different outputs using ggpredict for glmer and glmmTMB model

I am trying to predict and graph models with species presence as the response. However, I've run into the following problem: the ggpredict outputs are wildly different for the same data in glmer and glmmTMB, even though the estimates and AIC are very similar. These are simplified models including only date (which has been centered and scaled), which seems to be the most problematic term to predict.
yntest<- glmer(MYOSOD.P~ jdate.z + I(jdate.z^2) + I(jdate.z^3) +
(1|area/SiteID), family = binomial, data = sodpYN)
> summary(yntest)
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
Family: binomial ( logit )
Formula: MYOSOD.P ~ jdate.z + I(jdate.z^2) + I(jdate.z^3) + (1 | area/SiteID)
Data: sodpYN
AIC BIC logLik deviance df.resid
1260.8 1295.1 -624.4 1248.8 2246
Scaled residuals:
Min 1Q Median 3Q Max
-2.0997 -0.3218 -0.2013 -0.1238 9.4445
Random effects:
Groups Name Variance Std.Dev.
SiteID:area (Intercept) 1.6452 1.2827
area (Intercept) 0.6242 0.7901
Number of obs: 2252, groups: SiteID:area, 27; area, 9
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.96778 0.39190 -7.573 3.65e-14 ***
jdate.z -0.72258 0.17915 -4.033 5.50e-05 ***
I(jdate.z^2) 0.10091 0.08068 1.251 0.21102
I(jdate.z^3) 0.25025 0.08506 2.942 0.00326 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Correlation of Fixed Effects:
(Intr) jdat.z I(.^2)
jdate.z 0.078
I(jdat.z^2) -0.222 -0.154
I(jdat.z^3) -0.071 -0.910 0.199
The glmmTMB model + summary:
Tyntest<- glmmTMB(MYOSOD.P ~ jdate.z + I(jdate.z^2) + I(jdate.z^3) +
(1|area/SiteID), family = binomial("logit"), data = sodpYN)
> summary(Tyntest)
Family: binomial ( logit )
Formula: MYOSOD.P ~ jdate.z + I(jdate.z^2) + I(jdate.z^3) + (1 | area/SiteID)
Data: sodpYN
AIC BIC logLik deviance df.resid
1260.8 1295.1 -624.4 1248.8 2246
Random effects:
Conditional model:
Groups Name Variance Std.Dev.
SiteID:area (Intercept) 1.6490 1.2841
area (Intercept) 0.6253 0.7908
Number of obs: 2252, groups: SiteID:area, 27; area, 9
Conditional model:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.96965 0.39638 -7.492 6.78e-14 ***
jdate.z -0.72285 0.18250 -3.961 7.47e-05 ***
I(jdate.z^2) 0.10096 0.08221 1.228 0.21941
I(jdate.z^3) 0.25034 0.08662 2.890 0.00385 **
---
ggpredict outputs
testg<-ggpredict(yntest, terms ="jdate.z[all]")
> testg
# Predicted probabilities of MYOSOD.P
# x = jdate.z
x predicted std.error conf.low conf.high
-1.95 0.046 0.532 0.017 0.120
-1.51 0.075 0.405 0.036 0.153
-1.03 0.084 0.391 0.041 0.165
-0.58 0.072 0.391 0.035 0.142
-0.14 0.054 0.390 0.026 0.109
0.35 0.039 0.399 0.018 0.082
0.79 0.034 0.404 0.016 0.072
1.72 0.067 0.471 0.028 0.152
Adjusted for:
* SiteID = 0 (population-level)
* area = 0 (population-level)
Standard errors are on link-scale (untransformed).
testgTMB<- ggpredict(Tyntest, "jdate.z[all]")
> testgTMB
# Predicted probabilities of MYOSOD.P
# x = jdate.z
x predicted std.error conf.low conf.high
-1.95 0.444 0.826 0.137 0.801
-1.51 0.254 0.612 0.093 0.531
-1.03 0.136 0.464 0.059 0.280
-0.58 0.081 0.404 0.038 0.163
-0.14 0.054 0.395 0.026 0.110
0.35 0.040 0.402 0.019 0.084
0.79 0.035 0.406 0.016 0.074
1.72 0.040 0.444 0.017 0.091
Adjusted for:
* SiteID = NA (population-level)
* area = NA (population-level)
Standard errors are on link-scale (untransformed).
The estimates are completely different and I have no idea why.
I did try to use both the ggeffects package from CRAN and the developer version in case that changed anything. It did not. I am using the most up to date version of glmmTMB.
This is my first time asking a question here so please let me know if I should provide more information to help explain the problem.
I checked and the issue is the same when using predict instead of ggpredict, which would imply that it is a glmmTMB issue?
GLMER:
dayplotg<-expand.grid(jdate.z=seq(min(sodp$jdate.z), max(sodp$jdate.z), length=92))
Dfitg<-predict(yntest, re.form=NA, newdata=dayplotg, type='response')
dayplotg<-data.frame(dayplotg, Dfitg)
head(dayplotg)
> head(dayplotg)
jdate.z Dfitg
1 -1.953206 0.04581691
2 -1.912873 0.04889584
3 -1.872540 0.05195598
4 -1.832207 0.05497553
5 -1.791875 0.05793307
6 -1.751542 0.06080781
glmmTMB:
dayplot<-expand.grid(jdate.z=seq(min(sodp$jdate.z), max(sodp$jdate.z), length=92),
SiteID=NA,
area=NA)
Dfit<-predict(Tyntest, newdata=dayplot, type='response')
head(Dfit)
dayplot<-data.frame(dayplot, Dfit)
head(dayplot)
> head(dayplot)
jdate.z SiteID area Dfit
1 -1.953206 NA NA 0.4458236
2 -1.912873 NA NA 0.4251926
3 -1.872540 NA NA 0.4050944
4 -1.832207 NA NA 0.3855801
5 -1.791875 NA NA 0.3666922
6 -1.751542 NA NA 0.3484646
I contacted the ggpredict developer and figured out that if I used poly(jdate.z, 3) rather than jdate.z + I(jdate.z^2) + I(jdate.z^3) in the glmmTMB model, the glmer and glmmTMB predictions were the same.
I'll leave this post up, even though I was able to answer my own question, in case someone else runs into this later.
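The underlying point is that poly(x, 3) and the hand-written x + I(x^2) + I(x^3) terms parameterize the same cubic curve, so any difference must come from how a predict method handles the terms, not from the model itself. A self-contained sketch with simulated data (plain glm, so it runs without the sodpYN data from the question) showing the two bases give identical fitted values:

```r
set.seed(1)
x <- scale(seq(1, 92))[, 1]                 # centered/scaled "date", like jdate.z
p <- plogis(-2 + 0.5 * x - 0.3 * x^2 + 0.2 * x^3)
y <- rbinom(length(x), 1, p)

fit_raw  <- glm(y ~ x + I(x^2) + I(x^3), family = binomial)
fit_poly <- glm(y ~ poly(x, 3), family = binomial)

## same fitted probabilities, just different (equivalent) coefficient bases
max(abs(fitted(fit_raw) - fitted(fit_poly)))
```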

R glmer warnings: model fails to converge / model is nearly unidentifiable

I have seen questions about this on this forum, and I have also asked it myself in a previous post but I still haven't been able to solve my problem. Therefore I am trying again, formulating the question as clearly as I can this time, with as much detailed information as possible.
My data set has a binomial dependent variable, 3 categorical fixed effects and 2 categorical random effects (item and subject). I am using a mixed effects model using glmer. Here is what I entered in R:
modelall <- glmer(moodR ~ group*context*condition + (1|subject) + (1|item), data = RprodHSNS, family = "binomial")
I get 2 warnings:
Warning messages:
1: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
Model failed to converge with max|grad| = 0.02081 (tol = 0.001, component 11)
2: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
Model is nearly unidentifiable: large eigenvalue ratio
- Rescale variables?
My summary looks like this:
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
Family: binomial ( logit )
Formula: moodR ~ group * context * condition + (1 | subject) + (1 | item)
Data: RprodHSNS
AIC BIC logLik deviance df.resid
1400.0 1479.8 -686.0 1372.0 2195
Scaled residuals:
Min 1Q Median 3Q Max
-8.0346 -0.2827 -0.0152 0.2038 20.6578
Random effects:
Groups Name Variance Std.Dev.
item (Intercept) 1.475 1.215
subject (Intercept) 1.900 1.378
Number of obs: 2209, groups: item, 54; subject, 45
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.61448 42.93639 -0.014 0.988582
group1 -1.29254 42.93612 -0.030 0.975984
context1 0.09359 42.93587 0.002 0.998261
context2 -0.77262 0.22894 -3.375 0.000739***
condition1 4.99219 46.32672 0.108 0.914186
group1:context1 -0.17781 42.93585 -0.004 0.996696
group1:context2 -0.10551 0.09925 -1.063 0.287741
group1:condition1 -3.07516 46.32653 -0.066 0.947075
context1:condition1 -3.47541 46.32648 -0.075 0.940199
context2:condition1 -0.07293 0.22802 -0.320 0.749087
group1:context1:condition1 2.47882 46.32656 0.054 0.957328
group1:context2:condition1 0.30360 0.09900 3.067 0.002165 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Correlation of Fixed Effects:
(Intr) group1 cntxt1 cntxt2 cndtn1 grp1:cnt1 grp1:2 grp1:cnd1 cnt1:1 cnt2:1 g1:1:1
group1 -1.000
context1 -1.000 1.000
context2 0.001 0.000 -0.001
condition1 -0.297 0.297 0.297 0.000
grp1:cntxt1 1.000 -1.000 -1.000 0.001 -0.297
grp1:cntxt2 0.001 0.000 0.000 -0.123 0.000 0.000
grp1:cndtn1 0.297 -0.297 -0.297 -0.001 -1.000 0.297 0.000
cntxt1:cnd1 0.297 -0.297 -0.297 -0.001 -1.000 0.297 0.001 1.000
cntxt2:cnd1 0.000 0.000 -0.001 0.011 0.001 0.000 -0.197 -0.001 -0.001
grp1:cnt1:1 -0.297 0.297 0.297 0.001 1.000 -0.297 -0.001 -1.000 -1.000 0.001
grp1:cnt2:1 0.000 0.000 0.001 -0.198 0.000 -0.001 0.252 0.000 0.001 -0.136 0.000
Extremely high p-values, which do not seem plausible.
In a previous post I read that one of the problems could be fixed by increasing the number of iterations, by inserting the following in the command: glmerControl(optimizer="bobyqa", optCtrl = list(maxfun = 100000))
So that's what I did:
modelall<- glmer(moodR ~ group*context*condition + (1|subject) + (1|item), data=RprodHSNS, family="binomial", glmerControl(optimizer="bobyqa", optCtrl = list(maxfun = 100000)))
Now, the second warning is gone, but the first one is still there:
> Warning message:
In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
Model failed to converge with max|grad| = 0.005384 (tol = 0.001, component 7)
The summary also still looks odd:
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
Family: binomial ( logit )
Formula: moodR ~ group * context * condition + (1 | subject) + (1 | item)
Data: RprodHSNS
Control: glmerControl(optimizer = "bobyqa", optCtrl = list(maxfun = 1e+05))
AIC BIC logLik deviance df.resid
1400.0 1479.8 -686.0 1372.0 2195
Scaled residuals:
Min 1Q Median 3Q Max
-8.0334 -0.2827 -0.0152 0.2038 20.6610
Random effects:
Groups Name Variance Std.Dev.
item (Intercept) 1.474 1.214
subject (Intercept) 1.901 1.379
Number of obs: 2209, groups: item, 54; subject, 45
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.64869 26.29368 -0.025 0.980317
group1 -1.25835 26.29352 -0.048 0.961830
context1 0.12772 26.29316 0.005 0.996124
context2 -0.77265 0.22886 -3.376 0.000735 ***
condition1 4.97325 22.80050 0.218 0.827335
group1:context1 -0.21198 26.29303 -0.008 0.993567
group1:context2 -0.10552 0.09924 -1.063 0.287681
group1:condition1 -3.05629 22.80004 -0.134 0.893365
context1:condition1 -3.45656 22.80017 -0.152 0.879500
context2:condition1 -0.07305 0.22794 -0.320 0.748612
group1:context1:condition1 2.45996 22.80001 0.108 0.914081
group1:context2:condition1 0.30347 0.09899 3.066 0.002172 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Correlation of Fixed Effects:
(Intr) group1 cntxt1 cntxt2 cndtn1 grp1:cnt1 grp1:2 grp1:cnd1 cnt1:1 cnt2:1 g1:1:1
group1 -1.000
context1 -1.000 1.000
context2 0.000 0.000 0.000
condition1 0.123 -0.123 -0.123 -0.001
grp1:cntxt1 1.000 -1.000 -1.000 0.001 0.123
grp1:cntxt2 0.001 0.000 0.000 -0.123 0.001 0.000
grp1:cndtn1 -0.123 0.123 0.123 0.000 -1.000 -0.123 -0.001
cntxt1:cnd1 -0.123 0.123 0.123 0.000 -1.000 -0.123 0.000 1.000
cntxt2:cnd1 0.000 0.000 0.000 0.011 -0.001 0.000 -0.197 0.001 0.001
grp1:cnt1:1 0.123 -0.123 -0.123 0.000 1.000 0.123 0.000 -1.000 -1.000 -0.001
grp1:cnt2:1 0.000 -0.001 0.001 -0.198 0.001 -0.001 0.252 -0.001 0.000 -0.136 0.000
What can I do to solve this? Or can anyone tell me what this warning even means (in a way that an R newbie like myself can understand)? Any help is much appreciated!

How to set the level above which to display factor loadings from factanal() in R?

I was performing factor analysis with data state.x77, which is in R by default. After running the analysis, I inspected the factor loadings.
> output = factanal(state.x77, factors=3, rotation="promax")
> ld = output$loadings
> ld
Loadings:
Factor1 Factor2 Factor3
Population 0.161 0.239 -0.316
Income -0.149 0.681
Illiteracy 0.446 -0.284 -0.393
Life Exp -0.924 0.172 -0.221
Murder 0.917 0.103 -0.129
HS Grad -0.414 0.731
Frost 0.107 1.046
Area 0.387 0.585 0.101
Factor1 Factor2 Factor3
SS loadings 2.274 1.519 1.424
Proportion Var 0.284 0.190 0.178
Cumulative Var 0.284 0.474 0.652
It looks like by default R is suppressing all values less than 0.1. I was wondering if there is a way to set this cutoff by hand, say to 0.3 instead of 0.1?
Try this:
print(output$loadings, cutoff = 0.3)
See ?print.loadings for the details.
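print.loadings also accepts a sort argument (documented on the same help page), so you can combine the higher cutoff with grouping variables by the factor they load on:

```r
## state.x77 ships with base R
output <- factanal(state.x77, factors = 3, rotation = "promax")

## hide loadings below 0.3 and sort variables by factor
print(output$loadings, cutoff = 0.3, sort = TRUE)
```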

R: how to extract list of covariate p-values from a regression results of an lmer() model?

I have an example mixed lmer model with 8 predictors and I want to extract the names of the covariates, their coefficients, their standard errors and their p-values and place them into a matrix so I can write them out to a .csv.
I've extracted the first 3 into columns fine, but I can't figure out how to extract the p values. How do you do this? Is it a variation of vcov or getME()?
Here is what the model and summary look like:
mod <- lmer(outcome ~ predictor1 + etc...
summary(mod)
Generalized linear mixed model fit by the Laplace approximation
Formula: Freq ~ pm.lag0 + pm.lag1 + pm.lag2 + pm.lag3 + pm.lag4 + pm.lag5
+ temp13 + temp013 + rh13 + rh013 + (1 | county)
Data: dt
AIC BIC logLik deviance
3574 3636 -1775 3550
Random effects:
Groups Name Variance Std.Dev.
county (Intercept) 1.6131 1.2701
Number of obs: 1260, groups: county, 28
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 2.9356504 0.2614892 11.227 < 2e-16 ***
pm.lag0 0.0012996 0.0005469 2.376 0.017494 *
pm.lag1 0.0005021 0.0005631 0.892 0.372568
pm.lag2 0.0009126 0.0005596 1.631 0.102893
pm.lag3 -0.0007073 0.0005678 -1.246 0.212896
pm.lag4 0.0031566 0.0005316 5.939 2.88e-09 ***
pm.lag5 0.0019598 0.0005359 3.657 0.000255 ***
temp13 -0.0028040 0.0007315 -3.833 0.000126 ***
temp013 -0.0023532 0.0009683 -2.430 0.015087 *
rh13 0.0058769 0.0009909 5.931 3.01e-09 ***
rh013 -0.0028568 0.0006070 -4.706 2.52e-06 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Correlation of Fixed Effects:
(Intr) pm.lg0 pm.lg1 pm.lg2 pm.lg3 pm.lg4 pm.lg5 temp13 tmp013 rh13
pm.lag0 -0.025
pm.lag1 -0.032 -0.154
pm.lag2 -0.021 0.044 -0.179
pm.lag3 0.002 0.003 0.033 -0.176
pm.lag4 0.016 0.102 -0.016 0.041 -0.176
pm.lag5 0.008 0.027 0.090 -0.002 0.040 -0.186
temp13 -0.316 0.026 0.027 0.004 -0.019 -0.055 -0.035
temp013 0.030 -0.015 0.051 0.015 -0.015 0.002 -0.069 -0.205
rh13 -0.350 0.043 0.078 0.056 -0.012 -0.042 -0.030 0.430 0.055
rh013 0.193 -0.008 -0.021 0.011 0.030 0.101 -0.028 -0.278 0.025 -0.524
I've gone ahead here and left a space for the p-value column and entered a colname for it, so this sample of code isn't operational:
mixed.results <- mod
cbind(names(fixef(mod)),as.numeric(fixef(mod)),sqrt(diag(vcov(mod))), ???? )
mixed.results
colnames(mixed.results) <- c("Pred", "Coef", "St. Error", "Pr(>|z|)")
mixed.results
write.csv(mixed.results, file="mixedmod1.csv")
Thank you!
This is just coef(summary(model)), I believe:
gm1 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
data = cbpp, family = binomial)
cc <- coef(summary(gm1))
str(cc)
# num [1:4, 1:4] -1.376 -1.058 -1.196 -1.638 0.205 ...
# - attr(*, "dimnames")=List of 2
# ..$ : chr [1:4] "(Intercept)" "period2" "period3" "period4"
# ..$ : chr [1:4] "Estimate" "Std. Error" "z value" "Pr(>|z|)"
cc[,4] ## or cc[,"Pr(>|z|)"] to be more explicit
# (Intercept) period2 period3 period4
#1.907080e-11 1.996120e-41 4.634385e-43 4.657952e-47
I used the development version of lme4, but I think this has worked for a while.
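Putting it back in the context of the question, a minimal sketch of writing the whole coefficient table (names, estimates, standard errors, p-values) to a .csv. Note the "Pr(>|z|)" column exists for glmer fits; a plain lmer fit's coef(summary(...)) has no p-value column:

```r
library(lme4)

gm1 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
             data = cbpp, family = binomial)

## matrix with columns Estimate, Std. Error, z value, Pr(>|z|)
cc <- coef(summary(gm1))
mixed.results <- data.frame(Pred = rownames(cc), cc, check.names = FALSE)
write.csv(mixed.results, file = "mixedmod1.csv", row.names = FALSE)
```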
