Is there a way to obtain an effect size (such as Cohen's d, or whatever is most appropriate) directly using emmeans()?
I cannot find anything about obtaining effect sizes with emmeans().
post <- emmeans(fit, pairwise~ favorite.pirate | sex)
emmip(fit, ~ favorite.pirate | sex)
There is not a built-in provision for effect-size calculations, but you can cobble one together by defining a custom contrast function that divides each pairwise comparison by a value of sigma:
mypw.emmc = function(..., sigma = 1) {
    # start with the ordinary pairwise contrast coefficients ...
    result = emmeans:::pairwise.emmc(...)
    # ... then scale each contrast by 1/sigma
    for (i in seq_along(result[1, ]))
        result[[i]] = result[[i]] / sigma
    result
}
Here's a test run:
> mypw.emmc(1:3, sigma = 4)
1 - 2 1 - 3 2 - 3
1 0.25 0.25 0.00
2 -0.25 0.00 0.25
3 0.00 -0.25 -0.25
With your model, the error SD is 9.246 (see summary(fit)); so ...
> emmeans(fit, mypw ~ sex, sigma = 9.246, name = "effect.size")
NOTE: Results may be misleading due to involvement in interactions
$emmeans
sex emmean SE df lower.CL upper.CL
female 63.8 0.434 3.03 62.4 65.2
male 74.5 0.809 15.82 72.8 76.2
other 68.8 1.439 187.08 65.9 71.6
Results are averaged over the levels of: favorite.pirate
Degrees-of-freedom method: kenward-roger
Confidence level used: 0.95
$contrasts
effect.size estimate SE df t.ratio p.value
female - male -1.158 0.0996 399 -11.624 <.0001
female - other -0.537 0.1627 888 -3.299 0.0029
male - other 0.621 0.1717 981 3.617 0.0009
Results are averaged over the levels of: favorite.pirate
Degrees-of-freedom method: kenward-roger
P value adjustment: tukey method for comparing a family of 3 estimates
Some words of caution, though:
The SEs of the effect sizes are misleading because they don't account for the variation in sigma (but see the sketch below).
Also, this is not a very good example, because:
a. the factors interact (Edward Low is different in his profile); see also the warning message above.
b. the model is singular (as warned when the model was fitted), yielding an estimated variance of zero for college.
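On that first caution: recent versions of emmeans (roughly 1.4 onward) provide eff_size(), which computes Cohen's-d-style effect sizes and does account for the uncertainty in sigma. A minimal sketch, assuming a recent emmeans; the edf choice below is a crude one for a mixed model and deserves thought (see ?eff_size):
EMM <- emmeans(fit, ~ sex)        # means first
eff_size(EMM,
         sigma = sigma(fit),      # residual SD, extracted rather than hard-coded
         edf = df.residual(fit))  # rough df choice; an assumption, not gospel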
library(yarrr)
View(pirates)
library(lme4)
library(lmerTest)
fit <- lmer(weight ~ favorite.pirate * sex + (1|college), data = pirates)
anova(fit, ddf = "Kenward-Roger")
post <- emmeans(fit, pairwise~ sex)
post
I am new to glmmTMB models, so I have run into a problem.
I built a model, and based on the AICtab and DHARMa diagnostics this one was the best:
Insecticide_2 <- glmmTMB(Insect_abundace ~ field_element + land_distance +
                           sampling_time + year + treatment_day + (1 | field_id),
                         data = Insect_002,
                         family = nbinom2)
After fitting the glmmTMB model I ran Anova() (from car) and then emmeans(), but the emmeans results show the same p-values in both years (and no lower.CL or upper.CL). What may be the problem? Is the model overfitted? Am I using emmeans the wrong way?
Anova() also showed that land_distance, sampling_time, and treatment_day were significant; year was marginally significant (p = 0.07).
comp_emmeans1 <- emmeans(Insecticide_2, pairwise ~ land_distance | year, type = "response")
> comp_emmeans1
$emmeans
year = 2018:
land_distance response SE df lower.CL upper.CL
30m 2.46 0.492 474 1.658 3.64
50m 1.84 0.369 474 1.241 2.73
80m 1.36 0.283 474 0.906 2.05
110m 1.25 0.259 474 0.836 1.88
year = 2019:
land_distance response SE df lower.CL upper.CL
30m 3.42 0.593 474 2.434 4.81
50m 2.56 0.461 474 1.799 3.65
80m 1.90 0.335 474 1.343 2.68
110m 1.75 0.317 474 1.222 2.49
Results are averaged over the levels of: field_element, sampling_time, treatment_day
Confidence level used: 0.95
Intervals are back-transformed from the log scale
$contrasts
year = 2018:
contrast ratio SE df null t.ratio p.value
30m / 50m 1.34 0.203 474 1 1.906 0.2268
30m / 80m 1.80 0.279 474 1 3.798 0.0009
30m / 110m 1.96 0.311 474 1 4.239 0.0002
50m / 80m 1.35 0.213 474 1 1.896 0.2311
50m / 110m 1.47 0.234 474 1 2.405 0.0776
80m / 110m 1.09 0.176 474 1 0.516 0.9552
year = 2019:
contrast ratio SE df null t.ratio p.value
30m / 50m 1.34 0.203 474 1 1.906 0.2268
30m / 80m 1.80 0.279 474 1 3.798 0.0009
30m / 110m 1.96 0.311 474 1 4.239 0.0002
50m / 80m 1.35 0.213 474 1 1.896 0.2311
50m / 110m 1.47 0.234 474 1 2.405 0.0776
80m / 110m 1.09 0.176 474 1 0.516 0.9552
Results are averaged over the levels of: field_element, sampling_time, treatment_day
P value adjustment: tukey method for comparing a family of 4 estimates
Tests are performed on the log scale
Should I use a different way of making comparisons? I saw that some people use poly ~; I tried that, and the overall picture of the results is the same. Also, am I comparing the right things?
A last, and also important, question: how should I report the glmmTMB, Anova, and emmeans results?
I don't recall seeing this question before, but it's been 8 months, and maybe I just forgot.
Anyway, I am not sure exactly what the question is, but there are three things going on that might possibly have caused some confusion:
The emmeans() call has the specification pairwise ~ land_distance|year, which causes it to compute both means and pairwise comparisons thereof. I think users are almost always better served by separating those steps, because estimating means and estimating contrasts are two different things.
The default way in which means are summarized (estimates, SEs, and confidence intervals) is different from the default for comparisons or other contrasts (estimates, SEs, t ratios, and adjusted P values). That's because, as I said before, these are two different things, and usually people want CIs for means and P values for contrasts. See below.
There is a log link in this model, and that has special properties when it comes to contrasts, because the difference on a log scale is the log of the ratio. So we display a ratio when we have type = "response". (With most other link functions, there is no way to back-transform the differences of transformed values.)
What I suggest, per (1), is to get the means (and not comparisons) first:
EMM <- emmeans(Insecticide_2, ~ land_distance | year, type = "response")
EMM # see the estimates
You can get pairwise comparisons next:
CON <- pairs(EMM) # or contrast(EMM, "pairwise")
CON # see the ratios as shown in the OP
confint(CON) # see confidence intervals instead of tests
confint(CON, type = "link") # See the pairwise differences on the log scale
If you actually want differences on the response scale rather than ratios, that's possible too:
pairs(regrid(EMM)) # tests
confint(pairs(regrid(EMM))) # CIs
I am running a multinomial analysis with vglm(). It all works, but then I try to follow the instructions from this website (https://rcompanion.org/handbook/H_08.html) to do a pairwise test, because emmeans cannot handle pairwise comparisons for vglm models. The lrtest() part gives me the following error:
Error in lrtest.default(model) :
'list' object cannot be coerced to type 'double'
I cannot figure out what is wrong. I even copied and pasted the exact code that the website used (see below) and get the same error with their own code and dataset. Any ideas?
Their code is the only suggestion for doing pairwise testing with vglm() that I found anywhere on the web.
Here is their code, along with the expected output and extra details from the website (it is simpler than mine but gives the same error anyway).
Input = ("
County Sex Result Count
Bloom Female Pass 9
Bloom Female Fail 5
Bloom Male Pass 7
Bloom Male Fail 17
Cobblestone Female Pass 11
Cobblestone Female Fail 4
Cobblestone Male Pass 9
Cobblestone Male Fail 21
Dougal Female Pass 9
Dougal Female Fail 7
Dougal Male Pass 19
Dougal Male Fail 9
Heimlich Female Pass 15
Heimlich Female Fail 8
Heimlich Male Pass 14
Heimlich Male Fail 17
")
Data = read.table(textConnection(Input),header=TRUE)
### Order factors otherwise R will alphabetize them
Data$County = factor(Data$County, levels = unique(Data$County))
Data$Sex    = factor(Data$Sex,    levels = unique(Data$Sex))
Data$Result = factor(Data$Result, levels = unique(Data$Result))
### Check the data frame
library(psych)
headTail(Data)
str(Data)
summary(Data)
### Remove unnecessary objects
rm(Input)
Multinomial regression
library(VGAM)
model = vglm(Result ~ Sex + County + Sex:County,
             family = multinomial(refLevel = 1),
             weights = Count,
             data = Data)
summary(model)
library(car)
Anova(model,
      type = "II",
      test = "Chisq")
Analysis of Deviance Table (Type II tests)
Response: Result
Df Chisq Pr(>Chisq)
Sex 1 6.7132 0.00957 **
County 3 4.1947 0.24120
Sex:County 3 7.1376 0.06764 .
library(rcompanion)
nagelkerke(model)
$Pseudo.R.squared.for.model.vs.null
Pseudo.R.squared
McFadden 0.0797857
Cox and Snell (ML) 0.7136520
Nagelkerke (Cragg and Uhler) 0.7136520
$Likelihood.ratio.test
Df.diff LogLik.diff Chisq p.value
7 -10.004 20.009 0.0055508
library(lmtest)
lrtest(model)
Likelihood ratio test
Model 1: Result ~ Sex + County + Sex:County
Model 2: Result ~ 1
#Df LogLik Df Chisq Pr(>Chisq)
1 8 -115.39
2 15 -125.39 7 20.009 0.005551 **
Post-hoc analysis
At the time of writing, the lsmeans package cannot be used with vglm models.
One option for post-hoc analysis would be to conduct analyses on reduced models, including only two levels of a factor. For example, if the County x Sex interaction had been significant, the following code could be used to create a reduced dataset with only Bloom–Female and Bloom–Male, and to analyze these data with vglm().
Data.b = Data[Data$County == "Bloom" &
              (Data$Sex == "Female" | Data$Sex == "Male"), ]
Data.b$County = factor(Data.b$County)
Data.b$Sex = factor(Data.b$Sex)
summary(Data.b)
    County       Sex     Result      Count
 Bloom:4   Female:2    Pass:2    Min.   : 5.0
           Male  :2    Fail:2    1st Qu.: 6.5
                                 Median : 8.0
                                 Mean   : 9.5
                                 3rd Qu.:11.0
                                 Max.   :17.0
library(VGAM)
model.b = vglm(Result ~ Sex,
               family = multinomial(refLevel = 1),
               weights = Count,
               data = Data.b)
lrtest(model.b)
Likelihood ratio test
#Df LogLik Df Chisq Pr(>Chisq)
1 2 -23.612
2 3 -25.864 1 4.5041 0.03381 *
Summary table of results
Comparison p-value
Bloom–Female - Bloom–Male 0.034
Cobblestone–Female - Cobblestone–Male 0.0052
Dougal–Female - Dougal–Male 0.44
Heimlich–Female - Heimlich–Male 0.14
p.value = c(0.034, 0.0052, 0.44, 0.14)
p.adj = p.adjust(p.value, method = "fdr")
p.adj = signif(p.adj, 2)
p.adj
[1] 0.068 0.021 0.440 0.190
Comparison p-value p.adj
Bloom–Female - Bloom–Male 0.034 0.068
Cobblestone–Female - Cobblestone–Male 0.0052 0.021
Dougal–Female - Dougal–Male 0.44 0.44
Heimlich–Female - Heimlich–Male 0.14 0.19
It looks to me like qdrg() can be used. As I commented, you can't use the lazy interface; you have to supply all the specific pieces it needs:
> library(emmeans)
> RG = qdrg(formula(model), Data, coef(model), vcov(model), link = "log")
> RG
'emmGrid' object with variables:
Sex = Female, Male
County = Bloom, Cobblestone, Dougal, Heimlich
Transformation: “log”
> emmeans(RG, consec ~ Sex | County)
$emmeans
County = Bloom:
Sex emmean SE df asymp.LCL asymp.UCL
Female -0.588 0.558 Inf -1.68100 0.5054
Male 0.887 0.449 Inf 0.00711 1.7675
County = Cobblestone:
Sex emmean SE df asymp.LCL asymp.UCL
Female -1.012 0.584 Inf -2.15597 0.1328
Male 0.847 0.398 Inf 0.06643 1.6282
County = Dougal:
Sex emmean SE df asymp.LCL asymp.UCL
Female -0.251 0.504 Inf -1.23904 0.7364
Male -0.747 0.405 Inf -1.54032 0.0459
County = Heimlich:
Sex emmean SE df asymp.LCL asymp.UCL
Female -0.629 0.438 Inf -1.48668 0.2295
Male 0.194 0.361 Inf -0.51320 0.9015
Results are given on the log (not the response) scale.
Confidence level used: 0.95
$contrasts
County = Bloom:
contrast estimate SE df z.ratio p.value
Male - Female 1.475 0.716 Inf 2.060 0.0394
County = Cobblestone:
contrast estimate SE df z.ratio p.value
Male - Female 1.859 0.707 Inf 2.630 0.0085
County = Dougal:
contrast estimate SE df z.ratio p.value
Male - Female -0.496 0.646 Inf -0.767 0.4429
County = Heimlich:
contrast estimate SE df z.ratio p.value
Male - Female 0.823 0.567 Inf 1.450 0.1470
Results are given on the log (not the response) scale.
If I understand this model correctly, the response is the log of the ratio of the 2nd multinomial response to the 1st. So what we see above are estimated differences of logs and estimated differences of those differences. If run with type = "response", you would get estimated ratios, and ratios of those ratios.
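For instance, a minimal sketch of the response-scale version, reusing the RG object constructed above:
# back-transformed: estimated ratios, and ratios of those ratios
emmeans(RG, consec ~ Sex | County, type = "response")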
Probably something changed in either the VGAM package or the lmtest package since that page was written. But the following will work for a likelihood ratio test with vglm models:
VGAM::lrtest(model)
VGAM::lrtest(model, model2)
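Here, model2 is a second model you fit yourself. A sketch mirroring the null-model comparison in the lrtest output above (its Model 2 is Result ~ 1); the name model2 is hypothetical:
model2 <- vglm(Result ~ 1,
               family = multinomial(refLevel = 1),
               weights = Count, data = Data)  # null model
VGAM::lrtest(model, model2)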
I made a glmer model to predict correct responses as a function of two independent variables (a 2x2 within-subjects design).
Here is the head of the data frame, with ID, stimulus, the two within-subject conditions, the dependent variable correct, and the predicted probability from the glmer fit (added after model computation).
Note: the correct variable was computed from the rate column (if a participant's rate < 49 and cond_A = near, the response is correct; if rate > 51 and cond_A = far, it is correct; and the opposite for incorrect responses).
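A sketch of how that recoding might look in code; the data frame and column names are taken from the question, and rates in the ambiguous 49-51 band are set to NA here, which is an assumption:
correct$correct <- with(correct, ifelse(
  (rate < 49 & cond_A == "near") | (rate > 51 & cond_A == "far"), 1,
  ifelse((rate > 51 & cond_A == "near") | (rate < 49 & cond_A == "far"), 0, NA)))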
ID stimulus cond_A cond_B rate correct prob
1 5ef197dadda04d0008ba9dce FIORE near noncongr 0 1 0.5239730
2 5ef197dadda04d0008ba9dce TRENO near noncongr 19 1 0.5443476
3 5ef197dadda04d0008ba9dce MESE far noncongr 9 0 0.6530908
4 5ef197dadda04d0008ba9dce MONDO far noncongr 28 0 0.7131941
5 5ef197dadda04d0008ba9dce VASO near noncongr 63 0 0.4607776
6 5ef197dadda04d0008ba9dce SEGNO far congr 7 0 0.6626701
The results of the ANOVA are below:
fit = glmer(correct ~ spazio*linguag + (1|ID) + (1|stimulus), family = "binomial", data = correct)
car::Anova(fit, type = 3)
Analysis of Deviance Table (Type III Wald chisquare tests)
Response: correct
Chisq Df Pr(>Chisq)
(Intercept) 24.6270 1 6.957e-07 ***
spazio 0.2052 1 0.6505643
linguag 1.2483 1 0.2638814
spazio:linguag 14.1912 1 0.0001651 ***
Now I need to explore between which levels the interaction is significant.
Is the emmeans code correct? (Why is df Inf?)
Any suggestions for improving and/or plotting the post hoc analysis?
Thanks!
post = emmeans(fit, pairwise~linguag|spazio)
$emmeans
spazio = far:
linguag emmean SE df asymp.LCL asymp.UCL
congr 0.6050 0.122 Inf 0.332 0.878
noncongr 0.8009 0.129 Inf 0.512 1.090
spazio = near:
linguag emmean SE df asymp.LCL asymp.UCL
congr 0.5396 0.121 Inf 0.268 0.811
noncongr -0.0529 0.123 Inf -0.328 0.222
Results are given on the logit (not the response) scale.
Confidence level used: 0.95
Conf-level adjustment: bonferroni method for 2 estimates
$contrasts
spazio = far:
contrast estimate SE df z.ratio p.value
congr - noncongr -0.196 0.175 Inf -1.117 0.2639
spazio = near:
contrast estimate SE df z.ratio p.value
congr - noncongr 0.593 0.170 Inf 3.478 0.0005
Results are given on the log odds ratio (not the response) scale.
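On the plotting question asked above, a minimal sketch using tools emmeans already provides (untested against these data):
emmip(fit, linguag ~ spazio, type = "response", CIs = TRUE)  # interaction plot of the EMMs
plot(post$emmeans, type = "response")                        # EMMs with intervals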
Hoping that you can clear up some confusion in my head.
A linear mixed model is constructed with lmerTest:
MODEL <- lmer(Ca_content ~ SYSTEM + (1 | YEAR/replicate) +
              (1 | YEAR:SYSTEM), data = IOSDV1)
Fun starts happening when I try to get confidence intervals for the specific levels of the main effect.
The emmeans and lsmeans commands produce the same intervals (example: SYSTEM A3: 23.9-128.9, mean 76.4, SE 8.96).
However, as.data.frame(effect("SYSTEM", MODEL)) produces different, narrower confidence intervals (example: SYSTEM A3: 58.0-94.9, mean 76.4, SE 8.96).
What am I missing, and which numbers should I report?
To summarize: for the Ca content, I have 6 measurements in total per treatment (three per year, each from a different replicate). I will leave the names in the code in my language, as used. The idea is to test whether certain production practices affect the content of specific minerals in the grains. Random effects whose estimated variance is zero were left in the model for this example.
Linear mixed model fit by REML. t-tests use Satterthwaite's method ['lmerModLmerTest']
Formula: CA ~ SISTEM + (1 | LETO/ponovitev) + (1 | LETO:SISTEM)
Data: IOSDV1
REML criterion at convergence: 202.1
Scaled residuals:
Min 1Q Median 3Q Max
-1.60767 -0.74339 0.04665 0.73152 1.50519
Random effects:
Groups Name Variance Std.Dev.
LETO:SISTEM (Intercept) 0.0 0.0
ponovitev:LETO (Intercept) 0.0 0.0
LETO (Intercept) 120.9 11.0
Residual 118.7 10.9
Number of obs: 30, groups: LETO:SISTEM, 10; ponovitev:LETO, 8; LETO, 2
Fixed effects:
Estimate Std. Error df t value Pr(>|t|)
(Intercept) 76.417 8.959 1.548 8.530 0.0276 *
SISTEM[T.C0] -5.183 6.291 24.000 -0.824 0.4181
SISTEM[T.C110] -13.433 6.291 24.000 -2.135 0.0431 *
SISTEM[T.C165] -7.617 6.291 24.000 -1.211 0.2378
SISTEM[T.C55] -10.883 6.291 24.000 -1.730 0.0965 .
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Correlation of Fixed Effects:
(Intr) SISTEM[T.C0 SISTEM[T.C11 SISTEM[T.C16
SISTEM[T.C0 -0.351
SISTEM[T.C11 -0.351 0.500
SISTEM[T.C16 -0.351 0.500 0.500
SISTEM[T.C5 -0.351 0.500 0.500 0.500
optimizer (nloptwrap) convergence code: 0 (OK)
boundary (singular) fit: see ?isSingular
> ls_means(MODEL, ddf="Kenward-Roger")
Least Squares Means table:
Estimate Std. Error df t value lower upper Pr(>|t|)
SISTEMA3 76.4167 8.9586 1.5 8.5299 23.9091 128.9243 0.02853 *
SISTEMC0 71.2333 8.9586 1.5 7.9514 18.7257 123.7409 0.03171 *
SISTEMC110 62.9833 8.9586 1.5 7.0305 10.4757 115.4909 0.03813 *
SISTEMC165 68.8000 8.9586 1.5 7.6797 16.2924 121.3076 0.03341 *
SISTEMC55 65.5333 8.9586 1.5 7.3151 13.0257 118.0409 0.03594 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Confidence level: 95%
Degrees of freedom method: Kenward-Roger
> emmeans(MODEL, spec = c("SISTEM"))
SISTEM emmean SE df lower.CL upper.CL
A3 76.4 8.96 1.53 23.9 129
C0 71.2 8.96 1.53 18.7 124
C110 63.0 8.96 1.53 10.5 115
C165 68.8 8.96 1.53 16.3 121
C55 65.5 8.96 1.53 13.0 118
Degrees-of-freedom method: kenward-roger
Confidence level used: 0.95
> as.data.frame(effect("SISTEM", MODEL))
SISTEM fit se lower upper
1 A3 76.41667 8.958643 57.96600 94.86734
2 C0 71.23333 8.958643 52.78266 89.68400
3 C110 62.98333 8.958643 44.53266 81.43400
4 C165 68.80000 8.958643 50.34933 87.25067
5 C55 65.53333 8.958643 47.08266 83.98400
Many thanks.
I'm pretty sure this has to do with the dreaded "denominator degrees of freedom" question, i.e. what kind (if any) of finite-sample correction is being employed. tl;dr emmeans is using a Kenward-Roger correction, which is more or less the most accurate available option — the only reason not to use K-R is if you have a large data set for which it becomes unbearably slow.
load packages, simulate data, fit model
library(lmerTest)
library(emmeans)
library(effects)
dd <- expand.grid(f=factor(letters[1:3]),g=factor(1:20),rep=1:10)
set.seed(101)
dd$y <- simulate(~f+(1|g), newdata=dd, newparams=list(beta=rep(1,3),theta=1,sigma=1))[[1]]
m <- lmer(y~f+(1|g), data=dd)
compare default emmeans with effects
emmeans(m, ~f)
## f emmean SE df lower.CL upper.CL
## a 0.848 0.212 21.9 0.409 1.29
## b 1.853 0.212 21.9 1.414 2.29
## c 1.863 0.212 21.9 1.424 2.30
## Degrees-of-freedom method: kenward-roger
## Confidence level used: 0.95
as.data.frame(effect("f",m))
## f fit se lower upper
## 1 a 0.8480161 0.2117093 0.4322306 1.263802
## 2 b 1.8531805 0.2117093 1.4373950 2.268966
## 3 c 1.8632228 0.2117093 1.4474373 2.279008
effects doesn't explicitly tell us what/whether it's using a finite-sample correction: we could dig around in the documentation or the code to try to find out. Alternatively, we can tell emmeans not to use finite-sample correction:
emmeans(m, ~f, lmer.df="asymptotic")
## f emmean SE df asymp.LCL asymp.UCL
## a 0.848 0.212 Inf 0.433 1.26
## b 1.853 0.212 Inf 1.438 2.27
## c 1.863 0.212 Inf 1.448 2.28
## Degrees-of-freedom method: asymptotic
## Confidence level used: 0.95
Testing shows that these agree to within a tolerance of about 0.001 (probably close enough). In principle we should be able to specify KR=TRUE to get effects to use the Kenward-Roger correction, but I haven't been able to get that to work yet.
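For the record, the in-principle call would look like the following sketch; per the caveat above, it may error or be ignored depending on the installed effects/pbkrtest versions:
# ask effects for Kenward-Roger standard errors (untested, per above)
as.data.frame(effect("f", m, KR = TRUE))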
However, I will also say that there's something a little bit funky about your example. If we compute the distance between the mean and the lower CI in units of standard error, for emmeans we get (76.4-23.9)/8.96 = 5.86, which implies a very small effective degrees of freedom (about 1.55). That seems questionable to me unless your data set is extremely small ...
From your updated post, it appears that Kenward-Roger is indeed estimating only 1.5 denominator df.
In general it is dicey/not recommended to try fitting random effects where the grouping variable has a small number of levels (although see here for a counterargument). I would try treating LETO (which has only two levels) as a fixed effect, i.e.
CA ~ SISTEM + LETO + (1 | LETO:ponovitev) + (1 | LETO:SISTEM)
and see if that helps. (I would expect you would then get on the order of 7 df, which would make your CIs ± 2.4 SE instead of ± 6 SE ...)
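A sketch of that refit, assuming the objects from the question:
# LETO (year) as a fixed effect; random terms as suggested above
MODEL2 <- lmer(CA ~ SISTEM + LETO + (1 | LETO:ponovitev) + (1 | LETO:SISTEM),
               data = IOSDV1)
emmeans(MODEL2, ~ SISTEM)  # the df should now be considerably larger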
I have a mixed-effects model with a log(x+1)-transformed response variable. The output from emmeans with type = "response" provides the mean and confidence intervals for both groups that I am comparing. However, what I want is the mean and CI of the difference between the groups (i.e., the estimate). emmeans only provides the ratio (with type = "response") or the log ratio (with type = "link"), and I am unsure how to change this into absolute values. If I run the model without the log(x+1) transformation, then emmeans provides the estimated difference and the CI around this difference, not ratios. How can I do this when my response variable is log(x+1) transformed?
bmnameF.lme2 = lme(log(bm+1) ~ TorC*name, random = ~TorC|site,
                   data = matched.cases3F, method = 'REML')
emmeans(bmnameF.lme2, pairwise ~ TorC, type = 'response') %>%
  confint() %>% as.data.frame
emmeans.TorC emmeans.emmean emmeans.SE emmeans.df emmeans.lower.CL emmeans.upper.CL contrasts.contrast contrasts.estimate contrasts.SE contrasts.df contrasts.lower.CL contrasts.upper.CL
Managed 376.5484 98.66305 25 219.5120 645.9267 Managed - Open 3.390123 1.068689 217 1.821298 6.310297
Open 111.0722 43.15374 25 49.8994 247.2381 Managed - Open 3.390123 1.068689 217 1.821298 6.310297
Let me show a different example so the results are reproducible to all viewers:
mod = lm(log(breaks+1) ~ wool*tension, data = warpbreaks)
As you see, with a log transformation, comparisons/contrasts are expressed as ratios by default. But this can be changed by specifying transform instead of type in the emmeans() call:
> emmeans(mod, pairwise ~ tension|wool, transform = "response")
$emmeans
wool = A:
tension response SE df lower.CL upper.CL
L 42.3 5.06 48 32.1 52.4
M 23.6 2.83 48 17.9 29.3
H 23.7 2.83 48 18.0 29.4
wool = B:
tension response SE df lower.CL upper.CL
L 27.7 3.32 48 21.0 34.4
M 28.4 3.40 48 21.6 35.3
H 19.3 2.31 48 14.6 23.9
Confidence level used: 0.95
$contrasts
wool = A:
contrast estimate SE df t.ratio p.value
L - M 18.6253 5.80 48 3.213 0.0065
L - H 18.5775 5.80 48 3.204 0.0067
M - H -0.0479 4.01 48 -0.012 0.9999
wool = B:
contrast estimate SE df t.ratio p.value
L - M -0.7180 4.75 48 -0.151 0.9875
L - H 8.4247 4.04 48 2.086 0.1035
M - H 9.1426 4.11 48 2.224 0.0772
P value adjustment: tukey method for comparing a family of 3 estimates
Or, you can do this later via the regrid() function:
emm1 = emmeans(mod, ~ tension | wool)
emm2 = regrid(emm1)
emm2 # estimates
pairs(emm2) # comparisons
regrid() creates a new emmGrid object where everything is already back-transformed, thus side-stepping the behavior that happens with contrasts of log-transformed results. (In the previous illustration, the transform argument just calls regrid after it constructs the reference grid.)
But there is another subtle thing going on: The transformation is auto-detected as log; the +1 part is ignored. Thus, the back-transformed estimates are all too large by 1. To get this right, you need to use the make.tran() function to create this generalization of the log transformation:
> emm3 = update(emmeans(mod, ~ tension | wool), tran = make.tran("genlog", 1))
> str(emm3)
'emmGrid' object with variables:
tension = L, M, H
wool = A, B
Transformation: “log(mu + 1)”
> regrid(emm3)
wool = A:
tension response SE df lower.CL upper.CL
L 41.3 5.06 48 31.1 51.4
M 22.6 2.83 48 16.9 28.3
H 22.7 2.83 48 17.0 28.4
wool = B:
tension response SE df lower.CL upper.CL
L 26.7 3.32 48 20.0 33.4
M 27.4 3.40 48 20.6 34.3
H 18.3 2.31 48 13.6 22.9
Confidence level used: 0.95
The comparisons will come out the same as shown earlier, because offsetting all the means by 1 doesn't affect the pairwise differences.
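A quick check of that claim, reusing emm3 from above:
pairs(regrid(emm3))  # should reproduce the $contrasts table shown earlier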
See vignette("transformations", "emmeans") or https://cran.r-project.org/web/packages/emmeans/vignettes/transformations.html for more details.