I have a problem when performing a two-way rm ANOVA in R on the following data (link: https://drive.google.com/open?id=1nIlFfijUm4Ib6TJoHUUNeEJnZnnNzO29):
subjectnbr is the id of the subject, blockType and linesTTL are the independent variables, and RT2 is the dependent variable.
I first performed the rm ANOVA using ezANOVA with the following code:
ANOVA_RTS <- ezANOVA(
data=castRTs
, dv=RT2
, wid=subjectnbr
, within = .(blockType,linesTTL)
, type = 2
, detailed = TRUE
, return_aov = FALSE
)
ANOVA_RTS
The result is correct (I double-checked it using Statistica).
However, when I perform the rm ANOVA using the lme function, I do not get the same answer, and I have no clue why.
Here is my code:
lmeRTs <- lme(
RT2 ~ blockType*linesTTL,
random = ~1|subjectnbr/blockType/linesTTL,
data=castRTs)
anova(lmeRTs)
Here are the outputs of both ezANOVA and lme.
I hope I have been clear enough and have given you all the information needed.
I'm looking forward to your help, as I have been trying to figure this out for at least 4 hours!
Thanks in advance.
Here is a step-by-step example of how to reproduce the ezANOVA results with nlme::lme.
The data
We read in the data and ensure that all categorical variables are factors.
# Read in data
library(tidyverse);
df <- read.csv("castRTs.csv");
df <- df %>%
mutate(
blockType = factor(blockType),
linesTTL = factor(linesTTL));
Results from ezANOVA
As a check, we reproduce the ez::ezANOVA results.
## ANOVA using ez::ezANOVA
library(ez);
model1 <- ezANOVA(
data = df,
dv = RT2,
wid = subjectnbr,
within = .(blockType, linesTTL),
type = 2,
detailed = TRUE,
return_aov = FALSE);
model1;
# $ANOVA
# Effect DFn DFd SSn SSd F p
#1 (Intercept) 1 13 2047405.6654 34886.767 762.9332235 6.260010e-13
#2 blockType 1 13 236.5412 5011.442 0.6136028 4.474711e-01
#3 linesTTL 1 13 6584.7222 7294.620 11.7348665 4.514589e-03
#4 blockType:linesTTL 1 13 1019.1854 2521.860 5.2538251 3.922784e-02
# p<.05 ges
#1 * 0.976293831
#2 0.004735442
#3 * 0.116958989
#4 * 0.020088855
Results from nlme::lme
We now run nlme::lme:
## ANOVA using nlme::lme
library(nlme);
model2 <- anova(lme(
RT2 ~ blockType * linesTTL,
random = list(subjectnbr = pdBlocked(list(~1, pdIdent(~blockType - 1), pdIdent(~linesTTL - 1)))),
data = df))
model2;
# numDF denDF F-value p-value
#(Intercept) 1 39 762.9332 <.0001
#blockType 1 39 0.6136 0.4382
#linesTTL 1 39 11.7349 0.0015
#blockType:linesTTL 1 39 5.2538 0.0274
Results/conclusion
We can see that the F test results from both methods are identical. The somewhat complicated structure of the random effect definition in lme arises from the fact that you have two crossed random effects. Here "crossed" means that for every combination of blockType and linesTTL there exists an observation for every subjectnbr.
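As an aside (my own addition, not part of the original solution), the same crossed random-effects structure can be written more compactly with lme4; with balanced data this sketch should reproduce the same F tests, using lmerTest for the denominator dfs:
## Equivalent crossed random effects with lme4 (sketch)
library(lmerTest);
model3 <- lmer(
  RT2 ~ blockType * linesTTL +
    (1 | subjectnbr) +             # subject offset
    (1 | subjectnbr:blockType) +   # subject-by-blockType stratum
    (1 | subjectnbr:linesTTL),     # subject-by-linesTTL stratum
  data = df);
anova(model3);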
Some additional (optional) details
To understand the role of pdBlocked and pdIdent, we need to take a look at the corresponding two-level mixed-effects model.
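In Laird-Ware form (my notation; the original post did not spell this out), the model for subject $i$ can be sketched as
$$y_i = X_i \beta + Z_i b_i + \varepsilon_i, \qquad b_i \sim N(0, \Psi), \qquad \varepsilon_i \sim N(0, \sigma^2 I),$$
where $\beta$ contains the fixed effects and $\Psi$ is the variance-covariance matrix of the random-effect coefficients $b_i$.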
The predictor variables are your categorical variables blockType and linesTTL, which are generally encoded using dummy variables.
The variance-covariance matrix for the random effects can take different forms, depending on the underlying correlation structure of your random effect coefficients. To be consistent with the assumptions of a two-level repeated measure ANOVA, we must specify a block-diagonal variance-covariance matrix pdBlocked, where we create diagonal blocks for the offset ~1, and for the categorical predictor variables blockType pdIdent(~blockType - 1) and linesTTL pdIdent(~linesTTL - 1), respectively. Note that we need to subtract the offset from the last two blocks (since we've already accounted for the offset).
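Concretely, the specification above aims at a block-diagonal structure of the form (a sketch; the variance labels $\sigma_0^2$, $\sigma_1^2$, $\sigma_2^2$ are my own)
$$\Psi = \begin{pmatrix} \sigma_0^2 & 0 & 0 \\ 0 & \sigma_1^2 I_2 & 0 \\ 0 & 0 & \sigma_2^2 I_2 \end{pmatrix},$$
with one $1 \times 1$ block for the offset and one $2 \times 2$ identity block ($I_2$) for the dummy-coded levels of blockType and of linesTTL, respectively.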
Some relevant/interesting resources
Pinheiro and Bates, Mixed-Effects Models in S and S-PLUS, Springer (2000)
Potvin and Schutz, Statistical power for the two-factor repeated measures ANOVA, Behavior Research Methods, Instruments & Computers, 32, 347-356 (2000)
Deming Mi, How to understand and apply mixed-effect models, Department of Biostatistics, Vanderbilt University
Related
I was wondering if it is correct to say that a model-based recursive partitioning model (mob, package partykit) belongs to the family of mixed-effects models.
My point is that a mixed-effects model provides different parameters for each random effect, and this is also what a mob model does. The main difference I see is that a mob partitions the random effects itself.
Here is an example:
library(partykit); library(lme4)
set.seed(321)
##### Random data
V1 <- runif(100); V2 <- sample(1:3, 100, replace=TRUE)
V3 <- jitter(ifelse(V2 == 1, 2*V1+3, ifelse(V2==2, -1*V1+2, V1)), amount=.2)
##### Mixed-effect model
me <- lmer(V3 ~ V1 + (1 + V1|V2))
coef(me) # linear model coefficients from the mixed-effect model
#$V2
# (Intercept) V1
#1 2.99960082 1.9794378
#2 1.96874586 -0.8992926
#3 0.01520725 1.0255424
##### MOB
fit <- function(y, x, start = NULL, weights = NULL, offset = NULL) lm(y ~ x)
mo <- mob(V3 ~ V1|V2, fit=fit) #equivalent to lmtree
coef(mo) #linear model (same) coefficients from the mob
# (Intercept) x(Intercept) xV1
#2 2.99928854 NA 1.9804084
#4 1.97185661 NA -0.9047805
#5 0.01333292 NA 1.0288309
No, the kind of linear regression-based MOB (lmtree) is not a mixed-effects type of model. However, you used the MOB tree to estimate an interaction model (or nested effect) and indeed mixed-effects models can also be used to do so.
Your data-generating process implements a different intercept and V1 slope for every level of V2. If this interaction is known it can be easily recovered by a suitable linear regression with interaction effect (but V2 should be a categorical factor variable for this).
V2 <- factor(V2)
mi <- lm(V3 ~ 0 + V2 / V1)
matrix(coef(mi), ncol = 2)
## [,1] [,2]
## [1,] 2.99928854 1.9804084
## [2,] 1.97185661 -0.9047805
## [3,] 0.01333292 1.0288309
Note that the model fit is equivalent to lm(V3 ~ V1 * V2) but uses a different contrast coding for the coefficients.
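To verify the claimed equivalence (a quick sketch; mi2 is my own name for the refit):
mi2 <- lm(V3 ~ V1 * V2)             # same model, standard contrast coding
all.equal(fitted(mi), fitted(mi2))  # should return TRUE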
The estimates obtained above are exactly identical to the lmtree() output (or manually using mob() + lm() as you did in your post):
coef(lmtree(V3 ~ V1 | V2))
## (Intercept) V1
## 2 2.99928854 1.9804084
## 4 1.97185661 -0.9047805
## 5 0.01333292 1.0288309
The main difference is that you had to tell lm() exactly which interaction to consider. lmtree(), on the other hand, "learned" the interaction in a data-driven way. Admittedly, in this case there is not so much to learn...but lmtree() could have decided without any split or with two splits instead of performing all possible splits.
Finally, your lmer(V3 ~ V1 + (1 + V1 | V2)) specification also estimates a nested (or interaction) effect. However, it uses a different estimation technology with random effects instead of full fixed effects. Also, here you have to prespecify the interaction.
In short: lmtree() can be considered as a way to find interaction effects in a data-driven way. But these interactions are not estimated with random effects, hence not a mixed-effects model.
P.S.: It is possible to combine lmtree() and lmer(), but that's a different story. If you are interested, see the package https://CRAN.R-project.org/package=glmertree and the accompanying paper.
I would like to ask how to correctly perform a simple main effects analysis in R when there is an interaction effect between the Group and Stage variables.
One of my friends did the same analysis in SPSS (using a Bonferroni correction), and I am trying to reproduce his results in R.
I have data set of following structure:
ID Group Stage y
1 I pre 0.123
1 I post 0.453
2 II pre 0.676
2 II post 0.867
3 I pre 0.324
3 I post 0.786
4 II pre 0.986
4 II post 0.112
... ... ... ...
This is a 2x2 mixed ANOVA design (1 between-subjects variable, Group; 1 within-subjects variable, Stage; which constitutes a repeated measure of the dependent variable y).
I analysed it using the ezANOVA function:
ezANOVA(data = dat, dv = y, wid = ID, between = Group, within = Stage, detailed = TRUE, type = "III")
I found a significant Stage*Group interaction, so I have to determine the simple effects using a Bonferroni correction. I tried to do that with several methods. For example, to test the simple effect of Stage within group I, I tried:
dataControl <- subset(dat, Group == "I")
ezANOVA(data = dataControl, dv = y, wid = ID, within = Stage, detailed = TRUE, type = "III") # method 1
aov(y ~ Stage + Error(ID/Stage), data = dataControl) # method 2
t.test(y ~ Stage, data = dataControl, paired = TRUE) # method 3
But every method gave me a different p-value, and none of them matched those calculated with SPSS. Interestingly, the main-effect p-values and the other calculations gave the same results in SPSS and R, so I conclude that I am using the wrong method for the simple main effects analysis.
I would be very thankful if you could help me.
If you want R to give you the same numbers as SPSS, do this:
#pairwise comparisons
library(asbio)
bonf <- pairw.anova(data$dv, data$group, method="bonf") #also try "tukey" or "lsd"
print(bonf)
#plot(bonf) # can also plot the confidence intervals
This will give you, for each pairwise comparison, the mean difference (Diff), the lower and upper confidence bounds, the test decision, and the adjusted p-value.
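For completeness, here is a sketch of an alternative route via the emmeans package (my own addition, assuming the model and variable names from the question):
library(emmeans)
dat$ID <- factor(dat$ID)                    # subject id must be a factor
fit <- aov(y ~ Group * Stage + Error(ID/Stage), data = dat)
emm <- emmeans(fit, ~ Stage | Group)        # cell means of Stage within each Group
pairs(emm, adjust = "bonferroni")           # Bonferroni-adjusted simple effects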
R's mice package contains a function, pool.compare, to compare nested models fitted to imputed data. If I try to include an interaction term:
library(mice)
imput = mice(nhanes2)
mi1 <- with(data=imput, expr=lm(bmi~age*hyp))
mi0 <- with(data=imput, expr=lm(bmi~age+hyp))
pc <- pool.compare(mi1, mi0, method="Wald")
then it returns the following error:
Error in pool(fit1) :
Different number of parameters: coef(fit): 6, vcov(fit): 5
It sounds like the variance-covariance matrix doesn't include the interaction term as its own variable. What's the best way around this?
The problem appears to be that some of your parameters are not estimable in some of your imputed data sets. When I run the code, I see:
( fit1<-mi1$analyses[[1]] )
# lm(formula = bmi ~ age * hyp)
#
# Coefficients:
# (Intercept) age2 age3 hyp2 age2:hyp2
# 28.425 -5.425 -3.758 1.200 3.300
# age3:hyp2
# NA
In this imputed data set, it was not possible to estimate the age3:hyp2 coefficient (presumably because there were no observations in that cell).
This causes the discrepancy between coef(fit1) and vcov(fit1), since the covariance cannot be estimated for that term.
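A quick diagnostic (a sketch using the mi1 object from the question) is to count the inestimable coefficients in each imputed-data fit:
# number of NA coefficients per imputed-data analysis
sapply(mi1$analyses, function(fit) sum(is.na(coef(fit))))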
What to do in this case is more of a statistical problem than a programming problem. If you are unsure of what would be appropriate for your data, I suggest you consult with the statisticians over at Cross Validated.
The study design of the data I have to analyse is simple: there is 1 control group (CTRL) and 2 different treatment groups (TREAT_1 and TREAT_2). The data also include 2 covariates, COV1 and COV2. I have been asked to check whether there is a linear or quadratic treatment effect in the data.
I created a dummy data set to explain my situation:
df1 <- data.frame(
Observation = c(rep("CTRL",15), rep("TREAT_1",13), rep("TREAT_2", 12)),
COV1 = c(rep("A1", 30), rep("A2", 10)),
COV2 = c(rep("B1", 5), rep("B2", 5), rep("B3", 10), rep("B1", 5), rep("B2", 5), rep("B3", 10)),
Variable = c(3944133, 3632461, 3351754, 3655975, 3487722, 3644783, 3491138, 3328894,
3654507, 3465627, 3511446, 3507249, 3373233, 3432867, 3640888,
3677593, 3585096, 3441775, 3608574, 3669114, 4000812, 3503511, 3423968,
3647391, 3584604, 3548256, 3505411, 3665138,
4049955, 3425512, 3834061, 3639699, 3522208, 3711928, 3576597, 3786781,
3591042, 3995802, 3493091, 3674475)
)
plot(Variable ~ Observation, data = df1)
As you can see from the plot, there is a linear relationship between the control and the treatment groups. To check whether this linear effect is statistically significant, I changed the contrasts using the contr.poly() function and fitted a linear model:
contrasts(df1$Observation) <- contr.poly(levels(df1$Observation))
lm1 <- lm(log(Variable) ~ Observation, data = df1)
summary.lm(lm1)
From the summary we can see that the linear effect is statistically significant:
Observation.L 0.029141 0.012377 2.355 0.024 *
Observation.Q 0.002233 0.012482 0.179 0.859
However, this first model does not include any of the two covariates. Including them results in a non-significant p-value for the linear relationship:
lm2 <- lm(log(Variable) ~ Observation + COV1 + COV2, data = df1)
summary.lm(lm2)
Observation.L 0.04116 0.02624 1.568 0.126
Observation.Q 0.01003 0.01894 0.530 0.600
COV1A2 -0.01203 0.04202 -0.286 0.776
COV2B2 -0.02071 0.02202 -0.941 0.354
COV2B3 -0.02083 0.02066 -1.008 0.320
So far so good. However, I have been told to conduct a Type II Anova rather than Type I. To conduct a Type II Anova I used the Anova() function from the car package.
Anova(lm2, type="II")
Anova Table (Type II tests)
Response: log(Variable)
Sum Sq Df F value Pr(>F)
Observation 0.006253 2 1.4651 0.2453
COV1 0.000175 1 0.0820 0.7763
COV2 0.002768 2 0.6485 0.5292
Residuals 0.072555 34
The problem with using Type II here is that you do not get separate p-values for the linear and quadratic effects, so I do not know whether the effect is significantly linear and/or quadratic.
I found out that the following code produces the same p-value for Observation as the Anova() function, but the result likewise does not include p-values for the linear or quadratic effects:
lm2 <- lm(log(Variable) ~ Observation + COV1 + COV2, data = df1)
lm3 <- lm(log(Variable) ~ COV1 + COV2, data = df1)
anova(lm2, lm3)
Does anybody know how to combine a Type II ANOVA with the contrasts function to obtain p-values for the linear and quadratic effects?
Help would be very much appreciated.
Best
Peter
I found one partial workaround for this, but it may require further correction. The documentation for drop1() from the stats package indicates that this function produces Type II sums of squares (although this page, http://www.statmethods.net/stats/anova.html, declares that drop1() produces Type III sums of squares; I didn't spend too much time poring over http://afni.nimh.nih.gov/sscc/gangc/SS.html to cross-check the sums-of-squares calculations). You could use it to calculate everything manually, but I suspect you're asking this question because it would be nice if someone had already worked through it.
Anyway, I added a second vector to the dummy data called Observation2, and set it up with just the linear contrasts (you can only specify one set of contrasts for a given vector at a given time):
df1[,"Observation2"]<-df1$Observation
contrasts(df1$Observation2, how.many=1)<-contr.poly
Then I created a third linear model:
lm3 <- lm(log(Variable) ~ Observation2 + COV1 + COV2, data = df1)
I then conducted F tests with drop1() to compare the Type II F statistics between the two models:
lm2, which contains both the linear and quadratic terms:
drop1(lm2, test="F")
lm3, which contains just the linear contrasts:
drop1(lm3, test="F")
This doesn't include a direct comparison of the models against each other, although the F statistic is higher (and the p-value accordingly lower) for the linear model, which would lead one to rely on it instead of the quadratic model.
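If a direct comparison is wanted, the two fits are nested (lm3 contains only the linear contrast, lm2 the linear and quadratic ones), so a standard F test is one option (a sketch):
# F test for dropping the quadratic contrast from the model
anova(lm3, lm2)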
I'm attempting to run a repeated-measures ANOVA using R. I've gone through various examples on various websites, but they never seem to address the error that I'm encountering. I assume I'm misunderstanding something important.
The ANOVA I'm trying to run is on some data from an experiment using human participants. It has one DV and three IVs. All of the levels of all of the IVs are run on all participants, making it a three-way repeated-measures / within-subjects ANOVA.
The code I'm running in R is as follows:
aov.output = aov(DV~ IV1 * IV2 * IV3 + Error(PARTICIPANT_ID / (IV1 * IV2 * IV3)),
data=fulldata)
When I run this, I get the following warning:
Error() model is singular
Any ideas what I might be doing wrong?
Try using the lmer function in the lme4 package. The aov function is probably not appropriate here. Look for references from Douglas Bates, e.g. http://lme4.r-forge.r-project.org/book/Ch4.pdf (the other chapters are great too, but that is the repeated-measures chapter; this is the intro: http://lme4.r-forge.r-project.org/book/Ch1.pdf). The R code is available at the same place. For longitudinal data, it seems to be generally considered wrong these days to just fit OLS instead of a components-of-variance model as in the lme4 package, or in nlme, which seems to me to have been largely overtaken by lme4 in popularity recently. You may note that Brian Ripley's post, referenced in the comments above, also just recommends switching to lme.
By the way, a huge advantage right away is that you will be able to get estimates for the level of each effect as adjustments to the grand mean with the typical syntax:
lmer(DV ~ 1 + IV1*IV2*IV3 + (IV1*IV2*IV3 | Subject), data = dataset)
Note that your random effects will be vector valued.
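For instance (a sketch, with m as my own name for the fitted model):
m <- lmer(DV ~ 1 + IV1*IV2*IV3 + (IV1*IV2*IV3 | Subject), data = dataset)
fixef(m)          # estimated adjustments to the grand mean
ranef(m)$Subject  # vector-valued random effects, one column per term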
I know an answer has already been accepted for this post. I still wish to point out how to specify a correct error term/random effect when fitting an aov or lmer model to multi-way repeated-measures data. I assume that both independent variables (IVs) are fixed and are crossed with each other and with subjects, meaning all subjects are exposed to all combinations of the IVs. I am going to use data taken from Kirk's Experimental Design: Procedures for the Behavioral Sciences (2013).
library(lme4)
library(foreign)
library(lmerTest)
library(dplyr)
file_name <- "http://www.ats.ucla.edu/stat/stata/examples/kirk/rbf33.dta" #1
d <- read.dta(file_name) %>% #2
mutate(a_f = factor(a), b_f = factor(b), s_f = factor(s)) #3
head(d)
## a b s y a_f b_f s_f
## 1 1 1 1 37 1 1 1
## 2 1 2 1 43 1 2 1
## 3 1 3 1 48 1 3 1
## 4 2 1 1 39 2 1 1
## 5 2 2 1 35 2 2 1
In this study, 5 subjects (s) are exposed to 2 treatments, type of beat (a) and training duration (b), with 3 levels each. The outcome variable y is the attitude toward minorities. In #3 I made a, b, and s into factor variables: a_f, b_f, and s_f. Let p and q be the numbers of levels of a_f and b_f (3 each), and let n be the number of subjects (5).
In this example the degrees of freedom (dfs) for the tests of a_f, b_f, and their interaction should be p-1=2, q-1=2, and (p-1)*(q-1)=4, respectively. The df for the s_f error term is (n-1) = 4, and the df for the Within (s_f:a_f:b_f) error term is (n-1)(pq-1)=32. So the correct model(s) should give you these dfs.
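As a quick arithmetic check of these dfs (using only the values defined above):
p <- 3; q <- 3; n <- 5
c(a = p - 1, b = q - 1, ab = (p - 1) * (q - 1),
  subjects = n - 1, within = (n - 1) * (p*q - 1))
##        a        b       ab subjects   within
##        2        2        4        4       32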
Using aov
Now let’s try different model specifications using aov:
aov(y ~ a_f*b_f + Error(s_f), data=d) %>% summary() # m1
aov(y ~ a_f*b_f + Error(s_f/a_f:b_f), data=d) %>% summary() # m2
aov(y ~ a_f*b_f + Error(s_f/a_f*b_f), data=d) %>% summary() # m3
Simply specifying the error as Error(s_f) in m1 gives you the correct dfs and F-ratios, matching the values in the book. m2 gives the same values as m1, but also the infamous warning "Error() model is singular". m3 is doing something strange: it further partitions the Within residuals from m1 (634.9) into residuals for three error terms: s_f:a_f (174.2), s_f:b_f (173.6), and s_f:a_f:b_f (287.1). This is wrong, since we would not get three error terms if we ran a 2-way between-subjects ANOVA! Multiple error terms are also contrary to the point of using block factorial designs, which allow us to use the same error term for the tests of A, B, and AB, unlike split-plot designs, which require 2 error terms.
Using lmer
How can we get the same dfs and F-values using lmer? If your data are balanced, the Satterthwaite (default) or Kenward-Roger approximation used by lmerTest will give you exact dfs and F-ratios.
lmer(y ~ a_f*b_f + (1|s_f), data=d) %>% anova() # mem1
lmer(y ~ a_f*b_f + (1|s_f/a_f:b_f), data=d) %>% anova() # mem2
lmer(y ~ a_f*b_f + (1|s_f/a_f*b_f), data=d) %>% anova() # mem3
lmer(y ~ a_f*b_f + (1|s_f:a_f:b_f), data=d) %>% anova() # mem4
lmer(y ~ a_f*b_f + (a_f*b_f|s_f), data=d) %>% anova() # mem5
Again, simply specifying the random effect as (1|s_f) gives you the correct dfs and F-ratios (mem1). mem2-mem5 did not even produce results, presumably because the number of random effects they needed to estimate was greater than the sample size.
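Finally, if you want the Kenward-Roger dfs explicitly rather than lmerTest's default Satterthwaite approximation, the anova() method accepts a ddf argument (a sketch; this requires the pbkrtest package):
lmer(y ~ a_f*b_f + (1|s_f), data=d) %>% anova(ddf = "Kenward-Roger") # mem1 with KR dfs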