Get p-values from a post-hoc Duncan test in R

I want to perform a post-hoc Duncan test (using the "agricolae" package in R) after running a one-way ANOVA comparing the means of three groups.
## run one-way anova
> t1 <- aov(q3a ~ pgy,data = pgy)
> summary(t1)
Df Sum Sq Mean Sq F value Pr(>F)
pgy 2 13 6.602 5.613 0.00367 **
Residuals 6305 7416 1.176
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
1541 observations deleted due to missingness
## run post-hoc duncan test
> duncan.test(t1,"pgy",group = T, console = T)
Study: t1 ~ "pgy"
Duncan's new multiple range test
for q3a
Mean Square Error: 1.176209
pgy, means
q3a std r Min Max
PGY1 1.604292 1.068133 2656 1 5
PGY2 1.711453 1.126446 2017 1 5
PGY3 1.656269 1.057937 1635 1 5
Groups according to probability of means differences and alpha level( 0.05 )
Means with the same letter are not significantly different.
q3a groups
PGY2 1.711453 a
PGY3 1.656269 ab
PGY1 1.604292 b
However, the output only tells me that the means of PGY1 and PGY2 are different; it gives no p-value for each group comparison (post-hoc pairwise t-tests would generate a p-value for each comparison).
How can I get p-values from a Duncan test?
Thanks!!

One solution would be to use PostHocTest from the DescTools package.
Here is an example using the warpbreaks sample data.
require(DescTools)
res <- aov(breaks ~ tension, data = warpbreaks)
PostHocTest(res, method = "duncan")
#
# Posthoc multiple comparisons of means : Duncan's new multiple range test
# 95% family-wise confidence level
#
#$tension
# diff lwr.ci upr.ci pval
#M-L -10.000000 -17.95042 -2.049581 0.01472 *
#H-L -14.722222 -23.08443 -6.360012 0.00072 ***
#H-M -4.722222 -12.67264 3.228197 0.23861
The pairwise differences between the group means are given in the first column (M-L, and so on), along with confidence intervals and p-values.
For example, the difference in mean breaks between H and M is not statistically significant.
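If you need the p-values programmatically, they can be extracted from the returned object; a minimal sketch (the list component is named after the model term, tension here):
res.duncan <- PostHocTest(res, method = "duncan")
res.duncan$tension[, "pval"]   # named vector of pairwise p-values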
If performing Duncan's test is not a critical requirement, you can also run pairwise.t.test with various other multiple-comparison corrections, for example Bonferroni's method:
with(warpbreaks, pairwise.t.test(breaks, tension, p.adj = "bonferroni"))
#
# Pairwise comparisons using t tests with pooled SD
#
#data: breaks and tension
#
# L M
#M 0.0442 -
#H 0.0015 0.7158
#
#P value adjustment method: bonferroni
Results are consistent with those from the post-hoc Duncan's test.
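Here too the p-values can be extracted from the returned object, which stores them as a lower-triangular matrix; a minimal sketch:
ptt <- with(warpbreaks, pairwise.t.test(breaks, tension, p.adj = "bonferroni"))
ptt$p.value   # matrix of adjusted pairwise p-values (upper triangle is NA)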

Related

R code to test the difference between coefficients of regressors from one panel regression

I am trying to compare two regression coefficients from the same panel regression estimated over two different time periods, in order to confirm the statistical significance of the difference. Running my panel regression first with observations over 2007-2009, I get an estimate of the coefficient I am interested in, which I want to compare with the estimate of the same coefficient obtained from the same panel model applied over the period 2010-2017.
Based on R code to test the difference between coefficients of regressors from one regression, I tried to compute a likelihood ratio test. The linked discussion uses a simple linear equation. If I use the same commands in R as described in the answer, I get results based on a chi-squared distribution, and I don't understand whether and how I can interpret them.
In R, I did the following:
linearHypothesis(reg.pannel.recession.fe, "Exp_Fri=0.311576")
where reg.pannel.recession.fe is the panel regression over the period 2007-2009, Exp_Fri is the coefficient of this regression I want to compare, 0.311576 is the estimated coefficient over the period 2010-2017.
I get the following results using linearHypothesis():
How can I interpret that? Should I use another function, since these are plm objects?
Thank you very much for your help.
You get an F test in that example because, as stated in the vignette:
The method for "lm" objects calls the default method, but it changes the default test to "F" [...]
You can also set the test to F, but basically linearHypothesis works whenever the standard error of the coefficient can be estimated from the variance-covariance matrix, as also said in the vignette:
The default method will work with any model object for which the coefficient vector can be retrieved by ‘coef’ and the coefficient-covariance matrix by ‘vcov’ (otherwise the argument ‘vcov.’ has to be set explicitly)
So using an example from the package:
library(plm)
data(Grunfeld)
wi <- plm(inv ~ value + capital,
data = Grunfeld, model = "within", effect = "twoways")
linearHypothesis(wi,"capital=0.3",test="F")
Linear hypothesis test
Hypothesis:
capital = 0.3
Model 1: restricted model
Model 2: inv ~ value + capital
Res.Df Df F Pr(>F)
1 170
2 169 1 6.4986 0.01169 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
linearHypothesis(wi,"capital=0.3")
Linear hypothesis test
Hypothesis:
capital = 0.3
Model 1: restricted model
Model 2: inv ~ value + capital
Res.Df Df Chisq Pr(>Chisq)
1 170
2 169 1 6.4986 0.0108 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
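Since the default method only needs coef() and a covariance matrix, you can also pass a different covariance estimate through the vcov. argument. A sketch, assuming plm's vcovHC (the HC3 type is an illustrative choice):
# Hypothetical variant: same hypothesis with a heteroskedasticity-robust vcov
linearHypothesis(wi, "capital=0.3", vcov. = vcovHC(wi, type = "HC3"))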
And you can also use a t test:
tested_value = 0.3
BETA = coefficients(wi)["capital"]
SE = coefficients(summary(wi))["capital", 2]
tstat = (BETA - tested_value) / SE
pvalue = as.numeric(2 * pt(-abs(tstat), wi$df.residual))  # abs() keeps the two-sided p-value correct for either sign
pvalue
[1] 0.01168515
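For a single linear restriction, the squared t statistic equals the F statistic, which is why the two approaches agree; a quick check:
as.numeric(tstat^2)   # ~6.4986, the F value reported by linearHypothesis()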

One-way ANOVA for stratified samples in R

I have a stratified sample with three groups ("a", "b", "c") that were drawn from a larger population N. All groups have 30 observations, but their proportions in N are unequal, hence their sampling weights differ.
I use the survey package in R to calculate summary statistics and linear regression models, and would like to know how to calculate a one-way ANOVA correcting for the survey design (if necessary).
My assumption is, and please correct me if I'm wrong, that the standard error of the variance should normally be higher for a population where the weight is smaller, hence a simple ANOVA that does not account for the survey design should not be reliable.
Here is an example. Any help would be appreciated.
## One-way ANOVA tests in R for surveys with a stratified sampling design
library("survey")
# create test data
test.df <- data.frame(
  id = 1:90,
  variable = c(rnorm(n = 30, mean = 150, sd = 10),
               rnorm(n = 30, mean = 150, sd = 10),
               rnorm(n = 30, mean = 140, sd = 10)),
  groups = c(rep("a", 30),
             rep("b", 30),
             rep("c", 30)),
  weights = c(rep(1, 30),     # undersampled
              rep(1, 30),
              rep(100, 30)))  # oversampled
# correct for survey design
test.df.survey <- svydesign(id = ~id,
                            strata = ~groups,
                            weights = ~weights,
                            data = test.df)
## descriptive statistics
# boxplot
svyboxplot(~variable~groups,test.df.survey)
# means
svyby(~variable,~groups,test.df.survey,svymean)
# variances
svyby(~variable,~groups,test.df.survey,svyvar)
### ANOVA ###
## One-way ANOVA without correcting for survey design
summary(aov(formula = variable~groups,data = test.df))
Hmm, this is an interesting question; as far as I know, it'd be difficult to account for weights in a one-way ANOVA. So I decided to show you the way I'd approach this problem.
I'm going to use a two-way ANOVA and then some post hoc tests.
First of all, let's build a linear model based on your data and check what it looks like.
library(car)
library(agricolae)
model.lm = lm(variable ~ groups * weights, data = test.df)
shapiro.test(resid(model.lm))
Shapiro-Wilk normality test
data: resid(model.lm)
W = 0.98238, p-value = 0.263
leveneTest(variable ~ groups * factor(weights), data = test.df)
Levene's Test for Homogeneity of Variance (center = median)
Df F value Pr(>F)
group 2 2.6422 0.07692 .
87
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
The distribution is close to normal, but the variances differ between groups, so variance is not homogeneous, although it should be for a parametric test such as ANOVA. However, let's perform the test anyway.
Several plots to check that our data fits to this test:
hist(resid(model.lm))
plot(model.lm)
Here is an interpretation of the plots: they actually don't look bad.
Let's run two-way anova:
anova(model.lm)
Analysis of Variance Table
Response: variable
Df Sum Sq Mean Sq F value Pr(>F)
groups 2 2267.8 1133.88 9.9566 0.0001277 ***
Residuals 87 9907.8 113.88
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
As you see, the results are very close to yours. Some post hoc test:
(result.hsd = HSD.test(model.lm, list('groups', 'weights')))
$statistics
MSerror Df Mean CV MSD
113.8831 87 147.8164 7.2195 6.570186
$parameters
test name.t ntr StudentizedRange alpha
Tukey groups:weights 3 3.372163 0.05
$means
variable std r Min Max Q25 Q50 Q75
a:1 150.8601 11.571185 30 113.3240 173.0429 145.2710 151.9689 157.8051
b:1 151.8486 8.330029 30 137.1907 176.9833 147.8404 150.3161 154.7321
c:100 140.7404 11.762979 30 118.0823 163.9753 131.6112 141.1810 147.8231
$comparison
NULL
$groups
variable groups
b:1 151.8486 a
a:1 150.8601 a
c:100 140.7404 b
attr(,"class")
[1] "group"
And maybe some different way:
aov_cont<- aov(test.df$variable ~ test.df$groups * test.df$weights)
summary(aov_cont)
Df Sum Sq Mean Sq F value Pr(>F)
test.df$groups 2 2268 1133.9 9.957 0.000128 ***
Residuals 87 9908 113.9
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(TukeyHSD(aov_cont))
Tukey multiple comparisons of means
95% family-wise confidence level
Fit: aov(formula = test.df$variable ~ test.df$groups * test.df$weights)
$`test.df$groups`
diff lwr upr p adj
b-a 0.9884608 -5.581725 7.558647 0.9315792
c-a -10.1197048 -16.689891 -3.549519 0.0011934
c-b -11.1081657 -17.678352 -4.537980 0.0003461
Summarizing, the results are very close to yours. Personally, I'd run a two-way ANOVA with the (*) symbol, or with (+) when you are sure that your variables are independent (an additive model).
Group c, with the bigger weight, differs substantially from groups a and b.
According to the main statistician of our institute, there is no easy implementation of this kind of analysis in any common modeling environment. The reason is that ANOVA and ANCOVA are linear models that were not developed further after the emergence of general linear models (later generalized linear models, GLMs) in the 70s.
A normal linear regression model yields practically the same results as an ANOVA, but is much more flexible regarding variable choice. Since weighting methods exist for GLMs (see the survey package in R), there is no real need to develop methods to weight for a stratified sampling design in ANOVA... simply use a GLM instead.
summary(svyglm(variable~groups,test.df.survey))
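If you also want an ANOVA-style overall test of the groups term from the design-corrected model, the survey package provides regTermTest(); a minimal sketch (model.svy is just an illustrative name):
model.svy <- svyglm(variable ~ groups, test.df.survey)
regTermTest(model.svy, ~groups)  # design-based Wald test of the groups effect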

PERMANOVA multivariate spread among groups is not similar to variance homogeneity ANOVA

I am trying to understand the PERMANOVA assumption that multivariate spread among groups is analogous to variance homogeneity in univariate ANOVA. I wrote the R code below to check this, but I don't find that result. Why?
My code:
library(vegan)
# Four similar populations:
spdf <- matrix(NA, 60, 4, dimnames =
list(1:60, c("sp1", "sp2", "sp3", "sp4")))
spdf <- as.data.frame(spdf)
eff <- sort(rep(1:6, 10))
spdf$sp1 = eff + rnorm(60, 0, 0.25)
spdf$sp2 = eff + rnorm(60, 0, 0.25)
spdf$sp3 = eff + rnorm(60, 0, 0.25)
spdf$sp4 = eff + rnorm(60, 0, 0.25)
#3 Treatment
treat <- gl(3, 20, labels = paste("t", 1:3, sep=""))
# distance matrix
envdist <- vegdist(spdf, method="euclidean")
# when computing beta-dispersion in anova we have no group differences
# but in adonis is different
anova(betadisper(envdist, treat))
adonis(spdf~treat)
You seem to be confusing a lot of things here. PERMANOVA is a multivariate ANOVA with permutation-based testing. PERMANOVA tests for differences between group centroids --- in other words it compares the multivariate means. It does assume homogeneity of variances. To check that any difference between groups in terms of their centroids is not being induced by differences in variances, we might use the multivariate dispersion method implemented in betadisper() in R. adonis() and betadisper() are doing very different things:
adonis() gives an analysis like PERMANOVA,
betadisper() gives an analysis of multivariate spread.
What we can conclude therefore is that the two methods correctly detect a difference in means (adonis() shows a significant treat effect)
> adonis(spdf~treat)
Call:
adonis(formula = spdf ~ treat)
Permutation: free
Number of permutations: 999
Terms added sequentially (first to last)
Df SumsOfSqs MeanSqs F.Model R2 Pr(>F)
treat 2 3.5326 1.76628 113.66 0.79952 0.001 ***
Residuals 57 0.8858 0.01554 0.20048
Total 59 4.4184 1.00000
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
and that betadisper() correctly (all your groups had the same dispersion) fails to reject the null hypothesis of homogeneous multivariate dispersions
> anova(betadisper(envdist, treat))
Analysis of Variance Table
Response: Distances
Df Sum Sq Mean Sq F value Pr(>F)
Groups 2 0.1521 0.076041 1.1099 0.3366
Residuals 57 3.9050 0.068509
This is all in accord with the way you simulated the data.
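If you prefer a permutation-based test of the dispersions over the parametric anova() table, vegan also provides permutest() for betadisper objects; a minimal sketch:
permutest(betadisper(envdist, treat), permutations = 999)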

anova.rq() in quantreg package in R

I'm interested in comparing estimates from different quantiles (same outcome, same covariates) using the anova.rqlist function, which is called by anova() in the quantreg package in R. However, the math in the function is beyond my rudimentary expertise. Let's say I fit three models at different quantiles:
library(quantreg)
data(Mammals) # data in quantreg to be used as a useful example
fit1 <- rq(weight ~ speed + hoppers + specials, tau = .25, data = Mammals)
fit2 <- rq(weight ~ speed + hoppers + specials, tau = .5, data = Mammals)
fit3 <- rq(weight ~ speed + hoppers + specials, tau = .75, data = Mammals)
Then I compare them using:
anova(fit1, fit2, fit3, test="Wald", joint=FALSE)
My question is: which of these models is being used as the basis of the comparison?
My understanding of the Wald test (wiki entry) is that the statistic is W = (θ^ - θ0)^2 / Var(θ^), where θ^ is the estimate of the parameter(s) of interest θ that is compared with the proposed value θ0.
So my question is: what is the anova function in quantreg choosing as θ0?
Based on the p-value returned from the anova, my best guess is that it is choosing the lowest quantile specified (i.e. tau = 0.25). Is there a way to specify the median (tau = 0.5), or better yet the mean estimate obtained using lm(y ~ x1 + x2 + x3, data)?
anova(fit1, fit2, fit3, joint=FALSE)
actually produces
Quantile Regression Analysis of Deviance Table
Model: weight ~ speed + hoppers + specials
Tests of Equality of Distinct Slopes: tau in { 0.25 0.5 0.75 }
Df Resid Df F value Pr(>F)
speed 2 319 1.0379 0.35539
hoppersTRUE 2 319 4.4161 0.01283 *
specialsTRUE 2 319 1.7290 0.17911
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
while
anova(fit3, fit1, fit2, joint=FALSE)
produces the exact same result
Quantile Regression Analysis of Deviance Table
Model: weight ~ speed + hoppers + specials
Tests of Equality of Distinct Slopes: tau in { 0.5 0.25 0.75 }
Df Resid Df F value Pr(>F)
speed 2 319 1.0379 0.35539
hoppersTRUE 2 319 4.4161 0.01283 *
specialsTRUE 2 319 1.7290 0.17911
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
The order of the models is clearly being changed in the anova, but how is it that the F value and Pr(>F) are identical in both tests?
All the quantiles you input are used and there is not one model used as a reference.
I suggest you read this post and the related answer to understand what your "theta.0" is.
I believe what you are trying to do is test whether the regression lines are parallel, in other words whether the effects of the predictor variables (speed, hoppers, and specials here) are uniform across quantiles.
You can use the anova() from the quantreg package to answer this question. You should indeed use several fits for each quantile.
When you use joint=FALSE, as you did, you get coefficient-wise comparisons: one line per coefficient, so three lines here. Your results tell you that the effect of hoppers is not uniform across quantiles in your example, while the tests for speed and specials do not reject uniformity.
You can do an overall test of equality of the entire sets of coefficients if you do not use joint=FALSE; that gives you a "Joint Test of Equality of Slopes" and therefore only one p-value.
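For completeness, a sketch of that joint test using the fits from the question:
anova(fit1, fit2, fit3)  # joint test of equality of slopes: a single p-value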
EDIT:
I think theta.0 is the average slope over all tau values, or the actual estimate from lm(), rather than the slope of any specific model. My reasoning is that anova.rq() does not require any specific low value of tau, or even the median tau.
There are several ways to test this. Either do the calculations by hand with theta.0 equal to the average value, or compare many combinations, because then you could have a situation where some of your models are close to the model with a low tau value but not to the lm() value. If theta.0 were the slope of the model with the lowest tau, your Pr(>F) would be high, whereas in the other case it would be low.
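A sketch of the "compare many combinations" idea, reusing the fits from the question: if one particular fit served as the reference, the pairwise results should depend on whether it is included.
anova(fit1, fit2, joint = FALSE)  # tau 0.25 vs 0.50
anova(fit2, fit3, joint = FALSE)  # tau 0.50 vs 0.75
anova(fit1, fit3, joint = FALSE)  # tau 0.25 vs 0.75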
This question should maybe have been asked on Cross Validated.

How can I classify post-hoc test results in R?

I am trying to understand how to work with ANOVAs and post-hoc tests in R.
So far, I have used aov() and TukeyHSD() to analyse my data. Example:
uni2.anova <- aov(Sum_Uni ~ Micro, data= uni2)
uni2.anova
Call:
aov(formula = Sum_Uni ~ Micro, data = uni2)
Terms:
Micro Residuals
Sum of Squares 0.04917262 0.00602925
Deg. of Freedom 15 48
Residual standard error: 0.01120756
Estimated effects may be unbalanced
My problem is, now I have a huge list of pairwise comparisons but cannot do anything with it:
TukeyHSD(uni2.anova)
Tukey multiple comparisons of means
95% family-wise confidence level
Fit: aov(formula = Sum_Uni ~ Micro, data = uni2)
$Micro
diff lwr upr p adj
Act_Glu2-Act_Ala2 -0.0180017863 -0.046632157 0.0106285840 0.6448524
Ana_Ala2-Act_Ala2 -0.0250134285 -0.053643799 0.0036169417 0.1493629
NegI_Ala2-Act_Ala2 0.0702274527 0.041597082 0.0988578230 0.0000000
This dataset has 40 rows...
Ideally, I would like to get a dataset that looks something like this:
Act_Glu2 : a
Act_Ala2 : a
NegI_Ala2: b...
I hope you get the point. So far, I have found nothing comparable online... I also tried to select only the significant pairs in the object returned by TukeyHSD, but the object does not "acknowledge" that it is made up of rows & columns, making selection impossible...
Maybe there is something fundamentally wrong with my approach?
I think the OP wants the letters to get a view of the comparisons.
library(multcompView)
multcompLetters(extract_p(TukeyHSD(uni2.anova)))
That will get you the letters.
You can also use the multcomp package
library(multcomp)
cld(glht(uni2.anova, linfct = mcp(Micro = "Tukey")))
I hope this is what you need.
The results from TukeyHSD are a list. Use str() to look at the structure. In your case you'll see that it's a list of one item, and that item is basically a matrix. So, to extract the first column, you'll want to save the TukeyHSD result:
hsd <- TukeyHSD(uni2.anova)
If you look at str(hsd) you can see how to get at the bits...
hsd$Micro[,1]
That will give you the column of your differences. You should be able to extract what you want now.
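Since hsd$Micro is a matrix with a "p adj" column, you can also filter it down to just the significant pairs; a minimal sketch:
hsd$Micro[hsd$Micro[, "p adj"] < 0.05, ]  # keep only the significant comparisons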
Hard to tell without example data, but assuming Micro is just a factor with 4 levels and uni2 looks something like
n = 40
Micro = c('Act_Glu2', 'Act_Ala2', 'Ana_Ala2', 'NegI_Ala2')[sample(4, n, replace = TRUE)]
Sum_Uni = rnorm(n, 5, 0.5)
Sum_Uni[Micro == 'Act_Glu2'] = Sum_Uni[Micro == 'Act_Glu2'] + 0.5
uni2 = data.frame(Sum_Uni, Micro)
> uni2
Sum_Uni Micro
1 4.964061 Ana_Ala2
2 4.807680 Ana_Ala2
3 4.643279 NegI_Ala2
4 4.793383 Act_Ala2
5 5.307951 NegI_Ala2
6 5.171687 Act_Glu2
...
then I think what you're actually trying to get at is the basic multiple regression output:
fit = lm(Sum_Uni ~ Micro, data = uni2)
summary(fit)
anova(fit)
> summary(fit)
Call:
lm(formula = Sum_Uni ~ Micro, data = uni2)
Residuals:
Min 1Q Median 3Q Max
-1.26301 -0.35337 -0.04991 0.29544 1.07887
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 4.8364 0.1659 29.157 < 2e-16 ***
MicroAct_Glu2 0.9542 0.2623 3.638 0.000854 ***
MicroAna_Ala2 0.1844 0.2194 0.841 0.406143
MicroNegI_Ala2 0.1937 0.2158 0.898 0.375239
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.4976 on 36 degrees of freedom
Multiple R-squared: 0.2891, Adjusted R-squared: 0.2299
F-statistic: 4.88 on 3 and 36 DF, p-value: 0.005996
> anova(fit)
Analysis of Variance Table
Response: Sum_Uni
Df Sum Sq Mean Sq F value Pr(>F)
Micro 3 3.6254 1.20847 4.8801 0.005996 **
Residuals 36 8.9148 0.24763
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
You can access the numbers in any of these tables like, for example,
> summary(fit)$coef[2,4]
[1] 0.0008536287
To see the list of what is stored in each object, use names():
> names(summary(fit))
[1] "call" "terms" "residuals" "coefficients"
[5] "aliased" "sigma" "df" "r.squared"
[9] "adj.r.squared" "fstatistic" "cov.unscaled"
In addition to the TukeyHSD() function you found, there are many other options for looking at the pairwise tests further, and correcting the p-values if desired. These include pairwise.table(), estimable() in gmodels, the resampling and boot packages, and others...
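If you extract raw p-values yourself, you can also correct them afterwards with p.adjust(); a tiny sketch with made-up values:
p.adjust(c(0.01, 0.04, 0.30), method = "BH")  # Benjamini-Hochberg adjusted p-values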
