Comparing two curves for difference in trend in R

I have some data about trends over time in drug use across the state. I want to know whether there have been changes in the gender difference in intravenous drug use versus gender differences in all recreational drug use over time.
My data is below. I think I might need to use time-series analysis, but I'm not sure. Any help would be much appreciated.
[image: the data table; the iv, illicit and time values are reproduced in the answer below]

Since the description in the question does not match the data (there is no information on gender), we will assume from the title that we want to determine whether the trends of illicit and iv are the same.
Comparing Trends
Note that there is no autocorrelation in the detrended values of iv or illicit (the ar() fits below each select order 0), so we will use ordinary linear models.
iv <- c(0.4, 0.3, 0.4, 0.3, 0.2, 0.2)
illicit <- c(5.5, 5.7, 4.8, 4.7, 6.1, 5.3)
time <- 2011:2016
ar(resid(lm(iv ~ time)))
## Call:
## ar(x = resid(lm(iv ~ time)))
##
## Order selected 0 sigma^2 estimated as 0.0024
ar(resid(lm(illicit ~ time)))
## Call:
## ar(x = resid(lm(illicit ~ time)))
##
## Order selected 0 sigma^2 estimated as 0.287
Create a 12x3 data frame, long, with columns time, values and ind (iv or illicit). Then run a linear model with two slopes and another with one slope; both have two intercepts. Then compare them using anova. Evidently they are not significantly different, so we cannot reject the hypothesis that the slopes are the same.
wide <- data.frame(iv, illicit)
long <- cbind(time, stack(wide))
fm2 <- lm(values ~ ind/(time + 1) + 0, long)
fm1 <- lm(values ~ ind + time + 0, long)
anova(fm1, fm2)
giving:
Analysis of Variance Table
Model 1: values ~ ind + time + 0
Model 2: values ~ ind/(time + 1) + 0
  Res.Df    RSS Df Sum of Sq      F Pr(>F)
1      9 1.4629                           
2      8 1.4469  1  0.016071 0.0889 0.7732
Comparing model with slopes to one without slopes
Actually, the slopes are not significant in the first place, and we cannot reject the hypothesis that both slopes are zero. Compare to a two-intercept model with no slopes.
fm0 <- lm(values ~ ind + 0, long)
anova(fm0, fm2)
giving:
Analysis of Variance Table
Model 1: values ~ ind + 0
Model 2: values ~ ind/(time + 1) + 0
  Res.Df    RSS Df Sum of Sq      F Pr(>F)
1     10 1.4750                           
2      8 1.4469  2  0.028143 0.0778 0.9258
Or, running a stepwise regression, we find that the favored model is one with two intercepts and no slopes:
step(fm2)
giving:
Start: AIC=-17.39
values ~ ind/(time + 1) + 0
           Df Sum of Sq    RSS     AIC
- ind:time  2  0.028143 1.4750 -21.155
<none>                  1.4469 -17.386
Step: AIC=-21.15
values ~ ind - 1
       Df Sum of Sq     RSS     AIC
<none>                 1.475 -21.155
- ind   2    172.28 173.750  32.073
Call:
lm(formula = values ~ ind - 1, data = long)
Coefficients:
     indiv indillicit 
      0.30       5.35 
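Since the selected model has no slopes, its two intercepts are just the group means, which we can confirm directly:
colMeans(wide)
##      iv illicit 
##    0.30    5.35 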
Log transformed values
If we use log(values), then we similarly find no autocorrelation (not shown), but we do find that the slopes of the log transformed values are significantly different.
fm2log <- lm(log(values) ~ ind/(time + 1) + 0, long)
fm1log <- lm(log(values) ~ ind + time + 0, long)
anova(fm1log, fm2log)
giving:
Analysis of Variance Table
Model 1: log(values) ~ ind + time + 0
Model 2: log(values) ~ ind/(time + 1) + 0
  Res.Df     RSS Df Sum of Sq      F  Pr(>F)  
1      9 0.35898                              
2      8 0.18275  1   0.17622 7.7141 0.02402 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
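As a quick visual check (a minimal sketch, not part of the original analysis), we can plot both series on the log scale together with their fitted lines; the iv series declines proportionally much faster than illicit:
matplot(time, log(wide), pch = 1:2, ylab = "log(value)")  # points for both series
abline(lm(log(iv) ~ time), lty = 1)       # fitted log trend for iv
abline(lm(log(illicit) ~ time), lty = 2)  # fitted log trend for illicit
legend("right", c("iv", "illicit"), pch = 1:2, lty = 1:2)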


R: testing whether a coefficient is equal across the different equations in a multivariate regression (using linearHypothesis())?

I have a question about how to compare coefficients in a multivariate regression in R.
I conducted a survey in which I measured three different attitudes (scale variables). My goal is to estimate whether some characteristics of the respondents (age, gender, education and ideological position) can explain their (positive/negative) attitudes.
I was advised to conduct a multivariate multiple regression instead of three separate univariate multiple regressions. The code of my multivariate model is:
MMR <- lm(cbind(Attitude_1, Attitude_2, Attitude_3) ~
Age + Gender + Education + Ideological_position,
data = survey)
summary(MMR)
What I am trying to do next is to estimate whether the coefficients of, let's say, 'Gender' differ significantly across the three individual models.
I found a very clear instruction for how to do this in Stata (https://stats.idre.ucla.edu/stata/dae/multivariate-regression-analysis/), but I don't have a license, so I have to find an alternative in R. I know a similar question has been asked here before (R - Testing equivalence of coefficients in multivariate multiple regression), but the answer was that there is no package (or function) in R which can be used for this purpose. Because that answer was provided a few years back, I was wondering whether in the meantime any new packages or functions have been implemented.
More precisely, I was wondering whether I can use the linearHypothesis() function (https://www.rdocumentation.org/packages/car/versions/3.0-11/topics/linearHypothesis)? I already know that this function allows me to test, for instance, whether the coefficient of Gender equals the coefficient of Education:
linearHypothesis(MMR, "GenderFemale = EducationHigh-educated")
Can I also use this function to test whether the coefficient of Gender in the equation modelling Attitude_1 equals the coefficient of Gender in the equation modelling Attitude_2 or Attitude_3?
Any help would be greatly appreciated!
Since the model presented in the question is not reproducible (the input is missing), let us use this model instead.
fm0 <- lm(cbind(cyl, mpg) ~ wt + hp, mtcars)
We will discuss two approaches. In both, our linear hypothesis is that the intercepts of the cyl and mpg groups are the same, that the wt slopes are the same and that the hp slopes are the same.
1) Mean/Variance
In this approach we base the entire comparison only on the coefficients and their variance-covariance matrix.
library(car)
v <- vcov(fm0)
co <- setNames(c(coef(fm0)), rownames(v))
h1 <- c("cyl:(Intercept) = mpg:(Intercept)", "cyl:wt = mpg:wt", "cyl:hp = mpg:hp")
linearHypothesis(NULL, h1, coef. = co, vcov. = v)
giving:
Linear hypothesis test
Hypothesis:
cyl:(Intercept) - mpg:(Intercept) = 0
cyl:wt - mpg:wt = 0
cyl:hp - mpg:hp = 0
Model 1: restricted model
Model 2: structure(list(), class = "formula", .Environment = <environment>)
Note: Coefficient covariance matrix supplied.
  Df  Chisq Pr(>Chisq)    
1                         
2  3 878.53  < 2.2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
To explain what linearHypothesis is doing, note that in this case the hypothesis matrix is L <- t(c(1, -1)) %x% diag(3) and, given v, as a large-sample approximation L %*% co is distributed as N(0, L %*% v %*% t(L)) under the null hypothesis; hence t(L %*% co) %*% solve(L %*% v %*% t(L)) %*% L %*% co is distributed as chi-squared with nrow(L) degrees of freedom.
L <- t(c(1, -1)) %x% diag(3)  # Kronecker product: the 3x6 hypothesis matrix
nrow(L) # degrees of freedom
SSH <- t(L %*% co) %*% solve(L %*% v %*% t(L)) %*% L %*% co # chisq
p <- pchisq(SSH, nrow(L), lower.tail = FALSE) # p value
2) Long form model
With this approach (which is not equivalent to the first one shown above) convert mtcars from wide to long form, mt2. We show how to do that using reshape or pivot_longer at the end, but for now we will just form it explicitly. Define lhs as the 32x2 matrix on the left hand side of the fm0 formula, i.e. cbind(cyl, mpg); note that its column names are c("cyl", "mpg"). Stringing out lhs column by column into a vector of length 64 (the cyl column followed by the mpg column) gives us our new dependent variable y. We also form a grouping variable g., the same length as y, which indicates which column of lhs the corresponding element of y comes from.
With mt2 defined we can form fm1. In forming fm1 we will use a weight vector w based on the fm0 sigma values to reflect the fact that the two groups, cyl and mpg, have different values of sigma, given by the vector sigma(fm0).
We show below that the fm0 and fm1 models have the same coefficients and then run linearHypothesis.
library(car)
lhs <- fm0$model[[1]]
g. <- colnames(lhs)[col(lhs)]
y <- c(lhs)
mt2 <- with(mtcars, data.frame(wt, hp, g., y))
w <- 1 / sigma(fm0)[g.]^2
fm1 <- lm(y ~ g./(wt + hp) + 0, mt2, weights = w)
# note coefficient names
variable.names(fm1)
## [1] "g.cyl" "g.mpg" "g.cyl:wt" "g.mpg:wt" "g.cyl:hp" "g.mpg:hp"
# check that fm0 and fm1 have same coefs
all.equal(c(t(coef(fm0))), coef(fm1), check.attributes = FALSE)
## [1] TRUE
h2 <- c("g.mpg = g.cyl", "g.mpg:wt = g.cyl:wt", "g.mpg:hp = g.cyl:hp")
linearHypothesis(fm1, h2)
giving:
Linear hypothesis test
Hypothesis:
- g.cyl + g.mpg = 0
- g.cyl:wt + g.mpg:wt = 0
- g.cyl:hp + g.mpg:hp = 0
Model 1: restricted model
Model 2: y ~ g./(wt + hp) + 0
  Res.Df    RSS Df Sum of Sq      F    Pr(>F)    
1     61 1095.8                                  
2     58   58.0  3    1037.8 345.95 < 2.2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
If L is the hypothesis matrix (the same as L in (1) except that the columns are reordered), q is its number of rows and n is the number of rows of mt2, then SSH/q is distributed F(q, n - 6), where 6 is the number of estimated coefficients (n - 6 = 58, matching the Res.Df above), so we have:
n <- nrow(mt2)
L <- diag(3) %x% t(c(1, -1)) # note difference from (1)
q <- nrow(L)
SSH <- t(L %*% coef(fm1)) %*% solve(L %*% vcov(fm1) %*% t(L)) %*% L %*% coef(fm1)
SSH/q # F value
pf(SSH/q, q, n - 6, lower.tail = FALSE) # p value; n - 6 = df.residual(fm1)
anova
An alternative to linearHypothesis is to define the reduced model and then compare the two models using anova. mt2 and w are from above. No packages are used.
fm2 <- lm(y ~ hp + wt, mt2, weights = w)
anova(fm2, fm1)
giving:
Analysis of Variance Table
Model 1: y ~ hp + wt
Model 2: y ~ g./(wt + hp) + 0
  Res.Df    RSS Df Sum of Sq      F    Pr(>F)    
1     61 1095.8                                  
2     58   58.0  3    1037.8 345.95 < 2.2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Alternate wide to long calculation
An alternate way to form mt2 is by reshaping mtcars from wide form to long form using reshape.
mt2a <- mtcars |>
reshape(dir = "long", varying = list(colnames(lhs)), v.names = "y",
timevar = "g.", times = colnames(lhs)) |>
subset(select = c("wt", "hp", "g.", "y"))
or using the tidyverse (which gives the rows in a different order, but that should not matter as long as mt2b is used consistently in forming fm1 and w):
library(dplyr)
library(tidyr)
mt2b <- mtcars %>%
select(mpg, cyl, wt, hp) %>%
pivot_longer(all_of(colnames(lhs)), names_to = "g.", values_to = "y")
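As a sanity check (not in the original answer), all three long-form constructions should contain the same rows once sorted consistently:
# compare the three long-form data frames after sorting their rows
chk <- function(d) {
  d <- as.data.frame(d)[c("wt", "hp", "g.", "y")]
  d[order(d$g., d$wt, d$hp, d$y), ]
}
all.equal(chk(mt2), chk(mt2a), check.attributes = FALSE)  # expect TRUE
all.equal(chk(mt2), chk(mt2b), check.attributes = FALSE)  # expect TRUE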

How to parametrize piecewise regression coefficient to represent the slope for the following interval (instead of the change in the slope)

Consider the following dataset
Quantity <- c(25,39,45,57,70,85,89,100,110,124,137,150,177)
Sales <- c(1000,1250,2600,3000,3500,4500,5000,4700,4405,4000,3730,3400,3300)
df <- data.frame(Quantity,Sales)
df
Plotting the data, the distribution of observations is clearly non-linear, but presents a likely breakpoint around Quantity = 89 (I skip the plot here). Therefore, I built a joint piecewise linear model as follows:
df$Xbar <- ifelse(df$Quantity>89,1,0)
df$diff <- df$Quantity - 89
reg <- lm(Sales ~ Quantity + I(Xbar * (Quantity - 89)), data = df)
summary(reg)
or simply
df$X <- df$diff*df$Xbar
reg <- lm(Sales ~ Quantity + X, data = df)
summary(reg)
However, according to this parametrization, the coefficient of X represents the change in the slope from the preceding interval.
How can I parametrize the relevant coefficient to rather represent the slope for the second interval?
I did some research but I was unable to find the desired specification, apart from some automation in Stata (see the 'marginal' entry here https://www.stata.com/manuals13/rmkspline.pdf).
Any help is much appreciated. Thank you!
Acknowledgement:
the workable example is retrieved from
https://towardsdatascience.com/unraveling-spline-regression-in-r-937626bc3d96
The key here is to use a logical variable is.right, which is TRUE for the points to the right of 89 and FALSE otherwise.
From the output shown below, 60.88 is the slope to the left of 89 and -19.97 is the slope to the right. The lines intersect at Quantity = 89, Sales = 4817.30.
is.right <- df$Quantity > 89
fm <- lm(Sales ~ diff : is.right, df)
fm
## Call:
## lm(formula = Sales ~ diff:is.right, data = df)
##
## Coefficients:
## (Intercept) diff:is.rightFALSE diff:is.rightTRUE
## 4817.30 60.88 -19.97
Alternatives
Alternately, if you want to use Xbar from the question, do it this way; it gives the same coefficients as fm.
fm2 <- lm(Sales ~ diff : factor(Xbar), df)
or
fm3 <- lm(Sales ~ I(Xbar * diff) + I((1 - Xbar) * diff), df)
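A quick check (assuming the fits above) that the three parametrizations agree, apart from coefficient names and order:
sort(unname(coef(fm)))   # -19.97 60.88 4817.30; fm2 and fm3 give the same values
sort(unname(coef(fm2)))
sort(unname(coef(fm3)))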
Double check with nls
We can double check these using nls with the following formulation, which makes use of the fact that if we extend both lines then the one that applies at any Quantity is the lower of the two.
st <- list(a = 0, b1 = 1, b2 = -1)
fm4 <- nls(Sales ~ a + pmin(b1 * (Quantity - 89), b2 * (Quantity - 89)), start = st)
fm4
## Nonlinear regression model
## model: Sales ~ a + pmin(b1 * (Quantity - 89), b2 * (Quantity - 89))
## data: parent.frame()
## a b1 b2
## 4817.30 60.88 -19.97
## residual sum-of-squares: 713120
##
## Number of iterations to convergence: 1
## Achieved convergence tolerance: 2.285e-09
This would also work:
fm5 <- nls(Sales ~ a + ifelse(Quantity > 89, b2, b1) * diff, df, start = st)
Plot
Here is a plot:
plot(Sales ~ Quantity, df)
lines(fitted(fm) ~ Quantity, df)
Model matrix
And here is the model matrix for the linear regression:
> model.matrix(fm)
   (Intercept) diff:is.rightFALSE diff:is.rightTRUE
1            1                -64                 0
2            1                -50                 0
3            1                -44                 0
4            1                -32                 0
5            1                -19                 0
6            1                 -4                 0
7            1                  0                 0
8            1                  0                11
9            1                  0                21
10           1                  0                35
11           1                  0                48
12           1                  0                61
13           1                  0                88
If you know the breakpoint, then you almost have the model; it should be:
fit <- lm(Sales ~ Quantity + Xbar + Quantity:Xbar, data = df)
If you don't introduce a new intercept term (Xbar), the second segment is forced through the intercept already in the model, which will not work. We can plot it:
plot(df$Quantity,df$Sales)
newdata = data.frame(Quantity=seq(40,200,by=5))
newdata$Xbar= ifelse(newdata$Quantity>89,1,0)
lines(newdata$Quantity,predict(fit,newdata))
The coefficients are:
summary(fit)
Call:
lm(formula = Sales ~ Quantity * Xbar, data = df)
Residuals:
   Min     1Q Median     3Q    Max 
-527.9 -132.2  -15.1  148.1  464.7 
Coefficients:
               Estimate Std. Error t value Pr(>|t|)    
(Intercept)    -545.435    327.977  -1.663    0.131    
Quantity         59.572      5.746  10.367 2.65e-06 ***
Xbar           7227.288    585.933  12.335 6.09e-07 ***
Quantity:Xbar   -80.133      6.856 -11.688 9.64e-07 ***
And the slope of the second interval is 59.572 + (-80.133) = -20.561.
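The same number can be read off the fit directly:
co <- coef(fit)
unname(co["Quantity"] + co["Quantity:Xbar"])  # second-interval slope, approx -20.561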

Passing predictors to a cox model from a list

I'm running a pretty simple Cox model using coxph() from the survival package (loaded via survminer).
surv_object <- Surv(time, event)
model <- coxph(surv_object ~ female + age + ethnicity + imd, data = df)
I need to run multiple Cox models, and for each model, my predictors change. I have all my predictors stored in a separate data frame such as this (we'll call it pred_df):
> pred_df
# A tibble: 4 x 2
predictor endpoint
<chr> <chr>
1 female Mortality
2 age Mortality
3 ethnicity Mortality
4 imd Mortality
Is there an easy way to pass the items from the predictor column to coxph()? Something like this:
coxph(surv_object ~ predictors, data = df)
What I've tried already:
I've tried a rather clumsy hack along these lines:
pred_vars <- pred_df %>%
pull(predictor) %>% # extract column values as a vector
paste(collapse = " + ") %>% # combine values in a string
parse(text = . ) # parse the string as an expression
model <- coxph(surv_object ~ eval(pred_vars), data = df)
R actually understands this and runs the model, but the output is uninterpretable: the model does not report the individual predictors (female, age, ethnicity and imd); instead it reports a single term, eval(pred_vars).
Call:
coxph(formula = Surv(time, event) ~ eval(pred_vars), data = df)
n= 62976, number of events= 12882
(3287 observations deleted due to missingness)
                     coef exp(coef)  se(coef)     z Pr(>|z|)    
eval(pred_vars) 3.336e-05 1.000e+00 5.339e-06 6.249 4.14e-10 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
                exp(coef) exp(-coef) lower .95 upper .95
eval(pred_vars)         1          1         1         1
Concordance= 0.515 (se = 0.003 )
Rsquare= 0.001 (max possible= 0.989 )
Likelihood ratio test= 38.28 on 1 df, p=6e-10
Wald test = 39.04 on 1 df, p=4e-10
Score (logrank) test = 39.07 on 1 df, p=4e-10
There must be a simpler way of doing this?
Try reformulate. (The eval() hack fails because the parsed expression female + age + ethnicity + imd is evaluated arithmetically on the data, producing a single numeric predictor.)
formula <- reformulate(
  termlabels = pred_df$predictor,
  response = pred_df$endpoint[[1]]
)
coxph(formula = formula, data = df)
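For a self-contained illustration, here is the same idea with the lung data from the survival package (a sketch: the predictor names come from lung rather than your df, and a reasonably recent R is assumed so that reformulate() accepts a Surv() call given as the response string):
library(survival)
preds <- c("age", "sex", "ph.ecog")  # stand-ins for pred_df$predictor
f <- reformulate(termlabels = preds, response = "Surv(time, status)")
f
## Surv(time, status) ~ age + sex + ph.ecog
coxph(f, data = lung)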
You can do this in base R with as.formula and paste(..., collapse = " + "), like...
foo <- as.formula(paste0("Surv(time, event) ~ ", paste(pred_df$predictor, collapse = " + ")))
Result of that line:
> foo
Surv(time, event) ~ female + age + ethnicity + imd
And then you just pass foo to your call to coxph.
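That is, assuming df contains time, event and the predictor columns:
model <- coxph(foo, data = df)
summary(model)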

How to change unit increment in hazard ratio from coxph and frailty model in R?

I ran a coxph model and a frailty model, but now I would like to change the hazard ratio for a continuous variable (age) to show the effect of a 5-unit increment instead of a 1-unit increment. Is there a function in R that can perform such a task? If so, does the function also work for frailty models? I used the package frailtypack.
library('survival')
data(veteran)
cox <- coxph(Surv(time, status) ~ age, data = veteran)
summary(cox)
# Call:
# coxph(formula = Surv(time, status) ~ age, data = veteran)
#
# n= 137, number of events= 128
#
# coef exp(coef) se(coef) z Pr(>|z|)
# age 0.007500 1.007528 0.009565 0.784 0.433
#
# exp(coef) exp(-coef) lower .95 upper .95
# age 1.008 0.9925 0.9888 1.027
#
# Concordance= 0.515 (se = 0.029 )
# Likelihood ratio test= 0.63 on 1 df, p=0.4
# Wald test = 0.61 on 1 df, p=0.4
# Score (logrank) test = 0.62 on 1 df, p=0.4
Just add a new variable that represents the age group each subject belongs to; for example 1: 0-4, 2: 5-9, 3: 10-14, etc.
This is an example using the veteran dataset in the survival package. The data has a continuous variable age. Adding this as a predictor to the model will give you the relative risk (hazard ratio) for a one-year increase in age. If you are interested in an x-year increment, you should generate a new variable which groups subjects accordingly. For these data, I applied the following grouping: group 1: younger than 40, group 2: 40 to <50, group 3: 50 to <60, group 4: 60 to <70, and group 5: 70 or older. With these roughly ten-year groups, the HR for a one-group (about 10-year) increment is 1.049; in other words, the risk increases by about 5% for every 10-year increase in age. Note that the association is not statistically significant.
library(survival)
data(veteran)
veteran$ageCat <- 5
veteran$ageCat[veteran$age < 70] <- 4
veteran$ageCat[veteran$age < 60] <- 3
veteran$ageCat[veteran$age < 50] <- 2
veteran$ageCat[veteran$age < 40] <- 1
table(veteran$ageCat)
 1  2  3  4  5 
11 20 22 72 12 
cox <- coxph(Surv(time, status) ~ ageCat, data = veteran)
summary(cox)
Call:
coxph(formula = Surv(time, status) ~ ageCat, data = veteran)
n= 137, number of events= 128
          coef exp(coef) se(coef)     z Pr(>|z|)
ageCat 0.04793   1.04910  0.09265 0.517    0.605

       exp(coef) exp(-coef) lower .95 upper .95
ageCat     1.049     0.9532    0.8749     1.258
Concordance= 0.509 (se = 0.028 )
Rsquare= 0.002 (max possible= 0.999 )
Likelihood ratio test= 0.27 on 1 df, p=0.6024
Wald test = 0.27 on 1 df, p=0.6049
Score (logrank) test = 0.27 on 1 df, p=0.6048
@milan's post answers a similar question but not the one asked here. Since age was split into decades and modeled as a continuous variable, the hazard ratio compares a subject's age decade with the next youngest decade. That is, the HR for subjects aged 51 vs 49, or 59 vs 41, would be the same despite there being 2 or 18 years between them.
Anyway, the default, as you suggest, is a 1-unit increment in the continuous variable, age in this case. It's not always useful to compare subjects by a 1-unit change, especially when the range of the variable is much larger.
You can do the following, which is agnostic to the model, so it should work for lm, glm, survival::coxph, frailtypack::frailtyPenal, etc.
library('survival')
data(veteran)
## 1-year increase in age
cox <- coxph(Surv(time, status) ~ age, data = veteran)
exp(coef(cox))
# age
# 1.007528
For a multiplicative model like Cox regressions, you can get the x-unit change after the model is fit:
## 5-year increase in age
exp(coef(cox)) ^ 5
# age
# 1.038211
## or equivalently
exp(coef(cox) * 5)
# age
# 1.038211
However, it's easier to create a variable for the age transformation then fit the model:
## or you can create a variable to model
veteran <- within(veteran, {
age5 <- age / 5
})
cox5_1 <- coxph(Surv(time, status) ~ age5, data = veteran)
exp(coef(cox5_1))
# age5
# 1.038211
cox5_2 <- coxph(Surv(time, status) ~ I(age / 5), data = veteran)
exp(coef(cox5_2))
# I(age/5)
# 1.038211
Note you need to use I here in the formula interface since some operators have special meanings in formulae. For example, lm(mpg ~ wt - 1, mtcars) and lm(mpg ~ I(wt - 1), mtcars) are two different models.
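A quick demonstration of that difference with the built-in mtcars data:
coef(lm(mpg ~ wt - 1, mtcars))     # "- 1" removes the intercept: a single slope coefficient
coef(lm(mpg ~ I(wt - 1), mtcars))  # I() shifts wt by 1: an intercept plus a slope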
You can use these methods in other models, for example frailtyPenal if that is indeed the one you are using:
library('frailtypack')
fp <- frailtyPenal(Surv(time, status) ~ age, data = veteran, n.knots = 12, kappa = 1e5)
exp(fp$coef)
exp(fp$coef) ^ 5
fp5_1 <- frailtyPenal(Surv(time, status) ~ age5, data = veteran, n.knots = 12, kappa = 1e5)
fp5_2 <- frailtyPenal(Surv(time, status) ~ I(age / 5), data = veteran, n.knots = 12, kappa = 1e5)
exp(fp5_1$coef)
exp(fp5_2$coef)

How to get residuals from Repeated measures ANOVA model in R

Normally from aov() you can get residuals after using summary() function on it.
But how can I get the residuals when I use repeated measures ANOVA and the formula is different?
## as a test, not particularly sensible statistically
npk.aovE <- aov(yield ~ N*P*K + Error(block), npk)
npk.aovE
summary(npk.aovE)
Error: block
          Df Sum Sq Mean Sq F value Pr(>F)
N:P:K      1   37.0   37.00   0.483  0.525
Residuals  4  306.3   76.57               
Error: Within
          Df Sum Sq Mean Sq F value  Pr(>F)   
N          1 189.28  189.28  12.259 0.00437 **
P          1   8.40    8.40   0.544 0.47490   
K          1  95.20   95.20   6.166 0.02880 * 
N:P        1  21.28   21.28   1.378 0.26317   
N:K        1  33.14   33.14   2.146 0.16865   
P:K        1   0.48    0.48   0.031 0.86275   
Residuals 12 185.29   15.44                   
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Intuitively, summary(npk.aovE)$residuals returns NULL.
Can anyone help me with this?
Look at the output of
> names(npk.aovE)
and try
> npk.aovE$residuals
EDIT: I apologize, I read your example way too quickly. What I suggested is not possible for multilevel models fit with aov(). Try the following:
> npk.pr <- proj(npk.aovE)
> npk.pr[[3]][, "Residuals"]
Here's a simple reproducible example anyone can mess around with if they run into the same issue:
x1 <- gl(8, 4)
block <- gl(2, 16)
y <- as.numeric(x1) + rnorm(length(x1))
d <- data.frame(block, x1, y)
m <- aov(y ~ x1 + Error(block), d)
m.pr <- proj(m)
m.pr[[3]][, "Residuals"]
The other option is with lme:
require(MASS) ## for oats data set
require(nlme) ## for lme()
require(multcomp) ## for multiple comparison stuff
Aov.mod <- aov(Y ~ N * V + Error(B/V), data = oats)
aov.out.pr <- proj(Aov.mod)  # projections by error stratum
the_residuals <- aov.out.pr[["Within"]][, "Residuals"]
Lme.mod <- lme(Y ~ N * V, random = ~1 | B/V, data = oats)
the_residuals <- residuals(Lme.mod)
The original example came without the interaction (Lme.mod <- lme(Y ~ N + V, random = ~1 | B/V, data = oats)), but it seems to be working with the interaction included (and producing different results, so it is doing something).
And that's it...
...but for completeness:
1 - The summaries of the model
summary(Aov.mod)
anova(Lme.mod)
2 - The Tukey test with repeated measures ANOVA (3 hours looking for this!!). It does raise a warning when there is an interaction (* instead of +), but it seems to be safe to ignore it. Notice that V and N are factors inside the formula.
summary(Lme.mod)
summary(glht(Lme.mod, linfct=mcp(V="Tukey")))
summary(glht(Lme.mod, linfct=mcp(N="Tukey")))
3 - The normality and homoscedasticity plots
par(mfrow = c(1, 2))  # two diagnostic panels side by side
aov.out.pr <- proj(Aov.mod)
# oats$resi <- aov.out.pr[["Within"]][, "Residuals"]  # aov()-based alternative
oats$resi <- residuals(Lme.mod)
qqnorm(oats$resi, main="Normal Q-Q") # A quantile normal plot - good for checking normality
qqline(oats$resi)
boxplot(resi ~ interaction(N, V), main = "Homoscedasticity",
        xlab = "Code Categories", ylab = "Residuals", border = "white",
        data = oats)
points(resi ~ interaction(N, V), pch = 1, data = oats)
