Interpreting results from emmeans comparison - r

I have a glm model with two fixed effects, Treatment and Date, used to estimate Temperature from data collected in a time series. Treatment has three levels: Fucus, Terrycloth, or Control, and temperature is measured beneath those canopies. The model is created like so: mod1 <- glm(Temp ~ Treatment * Date, data = aveTerry.df)
I am trying to tell if Terrycloth has a similar effect as Fucus canopy (i.e. replicates it).
I found the emmeans package and believe it could help me compare these levels of Treatment using my model. I used it like so to find the estimated marginal means: terry.emmeans <- emmeans(modAllTerry, poly ~ Treatment | Date), and plotted the comparisons via plot(terry.emmeans.average, comparison = TRUE) + theme_bw()
This gives me the output linked here.
I am looking for some help understanding this graphical output, especially what exactly the comparisons (shown by the red arrows) are. I somewhat understand that the blue boxes are the confidence intervals for the mean value of temperature for each treatment on one day (based on the model), but I am wondering how the comparison is made. And why do some days only have a one-sided arrow?

As described in the documentation for plot.emmGrid, the comparison arrows are created in such a way that two arrows are disjoint if and only if their respective means are significantly different at the stated level.
The lowest mean in the set has only a right-pointing arrow because that mean will not be compared with anything smaller, obviating the need for a left-pointing arrow. For similar reasons, the highest mean has only a left-pointing arrow. These arrows do not define intervals; their only purpose is depicting comparisons.
In situations where the SEs of pairwise comparisons vary widely, it may not be possible to construct comparison arrows. If that happens, an error message is displayed.
Confidence intervals are available as well, but those CIs should not be used for comparing means.
More information and examples may be found via vignette("comparisons", "emmeans"). Also, details of how the arrows are actually constructed are given in vignette("xplanations", "emmeans").
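For a self-contained illustration (using the pigs dataset that ships with emmeans rather than the Temp/Treatment data from the question), the arrows are requested with the comparisons argument of plot():

library(emmeans)
# minimal sketch on the built-in pigs data, not the questioner's model
pigs.lm  <- lm(log(conc) ~ source + factor(percent), data = pigs)
pigs.emm <- emmeans(pigs.lm, ~ source)
plot(pigs.emm, comparisons = TRUE)  # blue bars: CIs; red arrows: comparison arrows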

Related

interpretation of a GAM plot with a square rooted response variable

I have a simple GAM where my goal is to understand the variation of the distance to a feature along the year, and I originally ran it with the following formula:
m1 <- gam(dist ~ s(month, bs="cc",k = 12) + s(id, bs="re"), data=db1, method = "REML")
Where "dist" is the distance in meters to a feature, and "id" is the animal id. When plotting the GAM I obtain the following plot:
First question: if I were interpreting the plot/writing a figure caption, is it correct to say something like:
"GAM plot showing the partial effects of month (x-axis) on the distance to a feature (y-axis). GAM smooths are centered at zero, therefore the zero line reflects the overall mean of the distance to the feature. Thus, values below zero on the y-axis reflect closer proximity to the feature, while values above zero reflect longer distances to the feature."
I say that a negative value (below zero) would mean proximity to the feature because that is also the way to interpret distance coefficients in a GLM, but I would like to make sure that this is correct and that I'm not misinterpreting the plot.
Second question, are the values on the y-axis directly interpretable? If so, what is the scale? Is it a % of change? (Based on a comment here, but I'm not sure if I understood it properly)
Then I transform the response variable to achieve normality (the original scale was a bit left skewed), and I run this model (residuals look better with the transformation):
m2 <- gam(sqrt(dist) ~ s(month, bs="cc",k = 12) + s(id, bs="re"), data=db1, method = "REML")
And I obtain this plot:
Pretty similar to the previous one, and I believe I can interpret it in the same way as described above. But, third question: if I wanted to say exactly what the y-axis means, what would be the most correct way to describe it with the transformation?
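For reference, a minimal sketch of how the numbers behind such a plot can be inspected, assuming the m1 and m2 fits above: the y-axis of the default plot is the centred partial effect of each smooth on the scale of the linear predictor, i.e. metres for m1 and sqrt(metres) for m2.

library(mgcv)
pe <- predict(m2, type = "terms", se.fit = TRUE)  # per-term contributions
head(pe$fit[, "s(month)"])  # centred partial effect of month, on the sqrt(dist) scale
# a value of -1 for a month means the expected sqrt(dist) is 1 unit below the
# model's average level for that month, all else equal (not a percentage change)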
Any help with this is very appreciated! Many thanks in advance!

GAM smooths interaction differences - calculate p value using mgcv and gratia 0.6

I am using the useful gratia package by Gavin Simpson to extract the difference between two smooths for two different levels of a factor variable. The smooths are generated by the wonderful mgcv package. For example:
library(mgcv)
library(gratia)
m1 <- gam(outcome ~ s(dep_var, by = fact_var) + fact_var, data = my.data)
diff1 <- difference_smooths(m1, smooth = "s(dep_var)")
draw(diff1)
This gives me a graph of the difference between the smooths for each pair of levels of the "by" variable in the gam() call. The graph has a shaded 95% credible interval (CI) for the difference.
Statistical significance, or areas of statistical significance at the 0.05 level, is assessed by whether, and where, the CI excludes the y = 0 line, where the y-axis represents the difference between the smooths.
Here is an example from Gavin's site where the "by" factor variable had 3 levels.
The differences are clearly statistically significant (at 0.05) over nearly all of the graphs.
Here is another example I have generated using a "by" variable with 2 levels.
The difference in my example is clearly not statistically significant anywhere.
In the mgcv package, an approximate p value is reported for each smooth, testing the null hypothesis that all of its coefficients are equal to 0, based on a chi-square test.
My question is, can anyone suggest a way of calculating a p value that similarly assesses the difference between the two smooths instead of solely relying on graphical evidence?
The output from difference_smooths() is a data frame with differences between the smooth functions at 100 points in the range of the smoothed variable, the standard error for the difference and the upper and lower limits of the CI.
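As a small sketch of what can be done with that data frame (column names diff, lower and upper as in gratia 0.6; newer versions prefix them with a dot), the evaluation points where the band excludes zero can be pulled out directly:

subset(diff1, lower > 0 | upper < 0)  # rows where the 95% band excludes zero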
The release notes for gratia 0.4 explain the difference_smooths() function, but gratia is now at version 0.6.
Thanks in advance for taking the time to consider this.
Don
One way of getting a p value for the difference between the levels of the by factor is to use the ci_level argument of difference_smooths(); the default is 0.95. The ci_level can be varied to find the level at which y = 0 is no longer within the CI band anywhere. If, for example, this occurred when ci_level = my_level, the p value for testing the hypothesis that the difference is zero everywhere would be 1 - my_level.
This is not totally satisfactory. It would take a little manual experimentation, and it may be difficult to discern accurately when zero drops out of the CI. A function could be written to search the data frame returned by difference_smooths() as ci_level is varied (a sketch of such a function is given below), but the detection of a non-zero CI would still depend on the 100 points chosen by difference_smooths() to assess the difference between the two curves. Then again, the standard errors are approximate for a GAM fitted with mgcv, so that shouldn't be too much of a problem.
Here is a graph where the zero first drops out of the CI.
Zero dropped out at ci_level = 0.88 and was still in the interval at ci_level = 0.89. So an approximate p value would be 1 - 0.88 = 0.12.
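Here is a rough sketch of that search written as a function, under the assumptions already noted (gratia 0.6 column names lower and upper; the answer is still tied to the 100 evaluation points):

library(gratia)
approx_p <- function(model, smooth, levels = seq(0.99, 0.50, by = -0.01)) {
  for (lev in levels) {
    d <- difference_smooths(model, smooth = smooth, ci_level = lev)
    if (any(d$lower > 0 | d$upper < 0)) {  # zero outside the band somewhere
      return(1 - lev)
    }
  }
  NA_real_  # zero never left the band at the levels tried
}
approx_p(m1, "s(dep_var)")  # e.g. ~0.12 in the example described above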
Can anyone think of a better way?
Reply to Gavin Simpson's comments Feb 19
Thanks very much Gavin for taking the time to make your comments.
I am not sure if using the criterion >= 0 (for negative diffs) is a good way to go. Because of the draws from the posterior, there are likely to be many diffs that meet this criterion. I am interpreting your criterion as: sample the posterior distribution, count how many differences meet the criterion, and calculate the percentage, and that is the p value. Correct me if I have misunderstood. Using this approach, I consistently got p values of around 0.45 - 0.5 for different gam models, even when it was clear the difference in the smooths should be statistically significant, at least at p = 0.05, because the confidence band around the smooth did not contain zero at a number of points.
Instead, I was thinking perhaps it would be better to compare the means of the posterior distribution of each of the diffs. For example
# get coefficients for the by smooths
coeff.level1 <- coef(gam.model1)[31:38]
coeff.level0 <- coef(gam.model1)[23:30]
# these indices are specific to my multi-variable gam.model1
# in my case 8 coefficients per smooth
# get posterior coefficients variances for the by smooths' coefficients
vp_level1 <- gam.model1$Vp[31:38, 31:38]
vp_level0 <- gam.model1$Vp[23:30, 23:30]
#run the simulation to get the distribution of each
#difference coefficient using the joint variance
library(MASS)
no.draws = 1000
sim <- mvrnorm(n = no.draws, (coeff.level1 - coeff.level0),
               (vp_level1 + vp_level0))
# sim is a no.draws X no. of coefficients (8 in my case) matrix
# put the results into a data.frame.
y.group <- data.frame(y = as.vector(sim),
                      group = rep(1:8, each = no.draws))
# y has the differences sampled from their posterior distributions.
# group is just a grouping name for the 8 sets of differences,
# (one set for each difference in coefficients)
# compare means with a linear regression
lm.test <- lm(y ~ as.factor(group), data = y.group)
summary(lm.test)
# The p value for the F statistic tells you how
# compatible the data are with the null hypothesis that
# all the group means are equal to each other.
# Same F statistic and p value from
anova(lm.test)
One could argue that if all the coefficients are not equal to each other then they can't all be equal to zero, but that isn't what we want here.
The basis of the smooth tests of fit given by summary() on the mgcv model (e.g. summary(gam.model1)) is a joint test that all of a smooth's coefficients are == 0. In mgcv this is a Wald-type test on the estimated coefficients rather than a likelihood ratio test in which the model fit with and without the term is compared.
I would appreciate some ideas how to do this with the difference between two smooths.
Now that I have got this far, I had a rethink of your original suggestion of using the criterion >= 0 (for negative diffs). I reinterpreted this as meaning: for each simulated coefficient-difference distribution (8 in my case), count how often this occurs, and make a table where each row (8 in my case) corresponds to one of these distributions, with two columns holding this count and (number of simulation draws minus count). Then run a chi-square test on this table. When I did this, I got a very low p value when I believe I shouldn't have, as 0 was well within the smooth difference CI across almost all levels of the exposure. Maybe I am still misunderstanding your suggestion.
Follow up thought Feb 24
In a follow-up thought, we could create a variable that represents the interaction between the by factor and the continuous variable:
library(dplyr)
my.dat <- my.dat %>%
  mutate(interact.var = ifelse(factor.2levels == "yes", 1, 0) * cont.var)
Here I am assuming that factor.2levels has the levels ("no", "yes"), and "no" is the reference level. The ifelse() function creates a dummy variable which is multiplied by the continuous variable to generate the interaction variable.
Then we place this interaction variable in the GAM and get the usual statistical test of fit, that is, testing that all of its coefficients == 0.
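A minimal sketch of that step, assuming the interaction variable enters as a smooth (the names outcome, cont.var, factor.2levels and interact.var are reused from above):

library(mgcv)
m.int <- gam(outcome ~ factor.2levels + s(cont.var) + s(interact.var),
             data = my.dat, method = "REML")
summary(m.int)  # the p value for s(interact.var) tests the departure from the reference level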
Gavin Simpson actually posted a method of how to get the difference between two smooths and assess its statistical significance here in 2017. Thanks to Matteo Fasiolo for pointing me in that direction.
In that approach, the by variable is converted to an ordered categorical variable which causes mgcv::gam to produce difference smooths in comparison to the reference level. Statistical significance for the difference smooths is then tested in the usual way with the summary command for the gam model.
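A minimal sketch of that ordered-factor approach, reusing the variable names from the earlier gam() example (outcome, dep_var, fact_var, my.data) rather than the actual data:

library(mgcv)
my.data$fact_ord <- ordered(my.data$fact_var)
m.ord <- gam(outcome ~ fact_ord + s(dep_var) + s(dep_var, by = fact_ord),
             data = my.data, method = "REML")
summary(m.ord)  # rows labelled s(dep_var):fact_ord<level> test the difference smooths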
However, and correct me if I have misunderstood, the ordered factor approach causes the smooth for the main effect to now be the smooth for the reference level of the ordered factor.
The approach I suggested (see the main post under the heading "Follow up thought Feb 24", where the interaction variable is created) gives an almost identical result for the p value for the difference smooth but does not change the smooth for the main effect. It also does not change the intercept or the linear term for the by categorical variable, both of which changed with the ordered-factor approach.

Correlation coefficient between nominal and cardinal scale variables

I have to describe the correlation between a variable "Average passes completed per game" (cardinal scale) and a variable "Position" (nominal scale) and measure the strength of the correlation. For that I have to choose the correlation coefficient appropriately, considering the scales. Does anyone know what the best way to do that would be? I am not sure what to use since these are two different scales. The full dataset consists of the following variables:
PLAYER: Name of the player
COUNTRY: Country of origin
BIRTHDATE: Birthday Date
HEIGHT_IN_CM: Height of the player
POSITION: Position of the player
PASSES_COMPLETED: Passes completed by the player
DISTANCE_COVERED: Distance covered by the player in km
MINUTES_PLAYED: Minutes played
AVG_PASSES_COMPLETED: Average passes completed by the player
I would very much appreciate if someone could give me some advice on this.
Thank you!
OK, so you need to redefine your question somewhat. Without two continuous variables, correlations cannot be used to "describe" a relationship in the way I guess you are asking. You can, however, see if there are statistically significant differences in pass rates between different positions. As for the questions on the statistics, I agree with Maurtis: CV (Cross Validated) is the best place. As for the code to do the tests, try this:
Firstly you need to make sure you have the right packages installed. You will definitely need ggplot2 and ggfortify (plus dplyr for glimpse() below), and maybe others if you have to manipulate data. Then load the libraries:
library(ggplot2)
library(ggfortify)
library(dplyr)   # for glimpse()
Next, make sure that your data is tidy: ie, variables in columns.
Then import your data into R:
#find file
data.location = file.choose()
#Import data
curr.data <- read.csv(data.location)
#Check data import
glimpse(curr.data)
Then plot using ggplot:
ggplot(curr.data, aes(x = POSITION, y = AVG_PASSES_COMPLETED)) +
  geom_boxplot() +
  theme_bw()
Then model using the linear model function (lm()) to see if there is a significant difference in pass rates with regards to position.
passrate_model <- lm(AVG_PASSES_COMPLETED ~ POSITION, data = curr.data)
Before you test your hypothesis, you need to check the appropriateness of the model
autoplot(passrate_model, smooth.colour = NA)
If the residual plots look fine, then we are ready to test. If not then you will have to use another type of model (and I'm not going into that here now....).
The appropriate test for this (I think) would be a Tukey test, which requires an ANOVA. This will give a summary, and should show you if there is variance due to position:
passrate_av <- aov(passrate_model)
summary(passrate_av)
This will perform the Tukey test and give pair-wise comparisons including difference in means, 95% confidence intervals, and adjusted p-values:
tukey.test <- TukeyHSD(passrate_av)
tukey.test
And it can even do a nice plot for you too:
plot(tukey.test)

Interpreting a pattern in a residual plot produced by gam.check()

I'm working on creating a model that examines the effect of ocean characteristics on fishing outcomes. I have spatial data on a 0.5 degree grid and I created the following model:
# asinh() is the inverse hyperbolic sine transform
gam(asinh(yvar) ~ s(lat, lon, bs = "sos") + s(xvar1) +
      s(xvar2) + s(xvar3), data = dat, method = "REML")
The QQ plot and histogram of residuals look okay. However, gam.check() produces a residuals plot with a very odd pattern: I know that the points should be scattered around 0, but that is not what I see. Can anyone provide some insight on the interpretation of this plot?
Those will be either all the 0s (most likely) or the 1s/smallest value in your original data. You don't say what these data are, but as you mention fishing outcomes, it is highly likely that they have some natural lower bound, and this line in the residuals is all the observations that take this lower bound (before transformation).
As you don't say exactly what your data are, it is difficult to comment further as to how to proceed (this may not be an issue, or you may need to drop the transform you used and instead fit a GLM or other non-Gaussian response), but such patterns are common in ecological/biological data, and transforming your response invariably doesn't work for ecological data.
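For example, a hedged sketch of the non-Gaussian route (not the poster's actual model): for a non-negative response with many exact zeros, a Tweedie family in mgcv can be fitted on the raw scale instead of transforming yvar.

library(mgcv)
m_tw <- gam(yvar ~ s(lat, lon, bs = "sos") + s(xvar1) + s(xvar2) + s(xvar3),
            family = tw(), data = dat, method = "REML")
gam.check(m_tw)  # re-check the residual plots on the new scale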

R ergm - specifying nodematch effect on two attributes

I am currently working on social network data with the R ergm package. I want to estimate the conditional probability of a tie that is homophilic on two different variables, but depending on how I specify the model the results are slightly different.
In the first case, I put two nodematch terms in my model, one for each variable of interest, and I find the conditional log-odds of a doubly homophilic tie by summing the 3 coefficients of my model (the edges term and the two nodematch terms).
In the second case, I directly specify only one nodematch term, for ties homophilic on both variables.
And the results I get, though close, are still different, whereas in both cases I should get the log-odds of a tie occurring between individuals sharing both these attributes.
Here is an example from the Sampson data:
# Load the data :
library(statnet)
data(sampson)
#First model: I specify two nodematch terms, one for 'cloisterville' and one for 'group'.
m1 <- ergm(samplike ~ edges + nodematch('cloisterville') + nodematch('group'))
#Second model: this time, I have only one term asking for a `nodematch` on both terms at the same time.
m2 <- ergm(samplike ~ edges + nodematch(c('cloisterville','group')))
#Here is the output of both models:
summary(m1)
summary(m2)
So according to the first model, the conditional log-odds of a tie homophilic on both variables should be:
-2.250 + 0.586 + 2.389
That is, 0.725
However, according to the second model, the log-odds of this same doubly homophilic tie should be:
-1.856 + 2.659
That is, 0.803
The corresponding probabilities are 0.6737071 and 0.6906158.
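Those probabilities are just the inverse logit of the summed coefficients, e.g.:

plogis(-2.250 + 0.586 + 2.389)  # model 1: ~0.674
plogis(-1.856 + 2.659)          # model 2: ~0.691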
Do you know why the results differ between the two specifications, when in both cases it should give the conditional probability of the same kind of tie?
Thank you so much for your help,
Kind regards
Timothée
We should not expect the same results, since the models are evaluating two different things. In essence, model 1 is evaluating homophily on cloisterville or on group, while model 2 is evaluating homophily on both cloisterville and group.
To be more precise, the first model tests homophily on group, net the tendency toward homophily on cloisterville, and vice versa. The second model looks at whether there is a tendency toward homophily on both attributes at the same time. Do monks form ties within groups and based on their location in the cloisters?
See the note in ?ergm.terms for nodematch:
(When multiple names are given, the statistic counts only those on which all the named attributes match.)
This is easy to see visually:
The colors are groups. Squares means cloisterville==TRUE and triangles means cloisterville==FALSE. The term nodematch(c('cloisterville','group')) counts only those edges where colors and shapes match!
