I find myself writing this bit of code all the time to produce standard errors for group means (to then use for plotting confidence intervals).
It would be nice to write my own function to do this in one line of code, though. I have read the NSE vignette in dplyr on non-standard evaluation and this blog post as well. I get it somewhat, but I'm too much of a noob to figure this out on my own. Can anyone help out? Thanks.
var1 <- sample(c('red', 'green'), size = 10, replace = TRUE)
var2 <- rnorm(10, mean = 5, sd = 1)
df <- data.frame(var1, var2)

df %>%
  group_by(var1) %>%
  summarize(avg = mean(var2), n = n(), sd = sd(var2), se = sd/sqrt(n))
You can use the function enquo() to capture the variables passed to your function and !! to unquote them inside the dplyr verbs:
my_fun <- function(x, cat_var, num_var){
  cat_var <- enquo(cat_var)
  num_var <- enquo(num_var)
  x %>%
    group_by(!!cat_var) %>%
    summarize(avg = mean(!!num_var), n = n(),
              sd = sd(!!num_var), se = sd/sqrt(n))
}
which gives you:
> my_fun(df, var1, var2)
# A tibble: 2 x 5
var1 avg n sd se
<fctr> <dbl> <int> <dbl> <dbl>
1 green 4.873617 7 0.7515280 0.2840509
2 red 5.337151 3 0.1383129 0.0798550
and that matches the output of your example:
> df %>%
+ group_by(var1) %>%
+ summarize(avg=mean(var2), n=n(), sd=sd(var2), se=sd/sqrt(n))
# A tibble: 2 x 5
var1 avg n sd se
<fctr> <dbl> <int> <dbl> <dbl>
1 green 4.873617 7 0.7515280 0.2840509
2 red 5.337151 3 0.1383129 0.0798550
EDIT:
The OP has asked to remove the group_by statement from the function to add the ability to group_by more than one variable. There are two ways to go about this IMO. First, you could simply remove the group_by statement and pipe a grouped data frame into the function. That method would look like this:
my_fun <- function(x, num_var){
  num_var <- enquo(num_var)
  x %>%
    summarize(avg = mean(!!num_var), n = n(),
              sd = sd(!!num_var), se = sd/sqrt(n))
}

df %>%
  group_by(var1) %>%
  my_fun(var2)
Another way to go about this is to use ... and quos to allow the function to capture multiple grouping arguments for the group_by statement. That would look like this:
#first, build the new dataframe
var1 <- sample(c('red', 'green'), size = 10, replace = TRUE)
var2 <- rnorm(10, mean = 5, sd = 1)
var3 <- sample(c("A", "B"), size = 10, replace = TRUE)
df <- data.frame(var1, var2, var3)

# using the first version `my_fun`, it would look like this
df %>%
  group_by(var1, var3) %>%
  my_fun(var2)
# A tibble: 4 x 6
# Groups: var1 [?]
var1 var3 avg n sd se
<fctr> <fctr> <dbl> <int> <dbl> <dbl>
1 green A 5.248095 1 NaN NaN
2 green B 5.589881 2 0.7252621 0.5128378
3 red A 5.364265 2 0.5748759 0.4064986
4 red B 4.908226 5 1.1437186 0.5114865
# Now doing it with a new function `my_fun2`
my_fun2 <- function(x, num_var, ...){
  group_var <- quos(...)
  num_var <- enquo(num_var)
  x %>%
    group_by(!!!group_var) %>%
    summarize(avg = mean(!!num_var), n = n(),
              sd = sd(!!num_var), se = sd/sqrt(n))
}

df %>%
  my_fun2(var2, var1, var3)
# A tibble: 4 x 6
# Groups: var1 [?]
var1 var3 avg n sd se
<fctr> <fctr> <dbl> <int> <dbl> <dbl>
1 green A 5.248095 1 NaN NaN
2 green B 5.589881 2 0.7252621 0.5128378
3 red A 5.364265 2 0.5748759 0.4064986
4 red B 4.908226 5 1.1437186 0.5114865
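If you are on rlang 0.4.0 or later, the same pattern can be written more compactly with the curly-curly operator; here is a minimal sketch under that assumption (my_fun3 is just an illustrative name):
# Sketch assuming rlang >= 0.4.0: {{ }} replaces enquo()/!!, and the dots
# are forwarded directly to group_by() instead of being captured with quos().
my_fun3 <- function(x, num_var, ...) {
  x %>%
    group_by(...) %>%
    summarize(avg = mean({{ num_var }}), n = n(),
              sd = sd({{ num_var }}), se = sd/sqrt(n))
}

df %>%
  my_fun3(var2, var1, var3)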
Related
I want to create a re-usable function for a repeated t-test such that the column names can be passed into a formula. However, I cannot find a way to make it work. The following code shows the idea:
library(dplyr)
library(rstatix)
do.function <- function(table, column, category) {
  column = sym(column)
  category = sym(category)
  stat.test <- table %>%
    group_by(subset) %>%
    t_test(column ~ category)
  return(stat.test)
}

tmp = data.frame(id = seq(1:100), value = rnorm(100),
                 subset = rep(c("Set1", "Set2"), each = 50, 2),
                 categorical_value = rep(c("A", "B"), each = 25, 4))

do.function(table = tmp, column = "value", category = "categorical_value")
The current error that I get is the following:
Error: Can't extract columns that don't exist.
x Column `category` doesn't exist.
Run `rlang::last_error()` to see where the error occurred.
The question is: does anybody know how to solve this?
Just build a formula instead of wrapping the names in sym():
library(dplyr)
library(rstatix)
do.function <- function(table, column, category) {
  formula <- paste0(column, '~', category) %>%
    as.formula()
  table %>%
    group_by(subset) %>%
    t_test(formula)
}

tmp = data.frame(id = seq(1:100), value = rnorm(100),
                 subset = rep(c("Set1", "Set2"), each = 50, 2),
                 categorical_value = rep(c("A", "B"), each = 25, 4))

do.function(table = tmp, column = "value", category = "categorical_value")
# A tibble: 2 x 9
subset .y. group1 group2 n1 n2 statistic df p
* <chr> <chr> <chr> <chr> <int> <int> <dbl> <dbl> <dbl>
1 Set1 value A B 50 50 0.484 94.3 0.63
2 Set2 value A B 50 50 -2.15 97.1 0.034
As we are passing string values, we may just use reformulate to create the formula:
do.function <- function(table, column, category) {
  stat.test <- table %>%
    group_by(subset) %>%
    t_test(reformulate(category, response = column))
  return(stat.test)
}
-testing
> do.function(table= tmp, column = "value", category = "categorical_value")
# A tibble: 2 × 9
subset .y. group1 group2 n1 n2 statistic df p
* <chr> <chr> <chr> <chr> <int> <int> <dbl> <dbl> <dbl>
1 Set1 value A B 50 50 1.66 97.5 0.0993
2 Set2 value A B 50 50 0.448 92.0 0.655
A formula is already used inside rstatix::t_test, and we need to get the variables by their names:
do.function <- function(table, column, category) {
  stat.test <- table %>%
    mutate(column = get(column),
           category = get(category)) %>%
    rstatix::t_test(column ~ category)
  return(stat.test)
}

do.function(table = tmp, column = "value", category = "categorical_value")
# # A tibble: 1 × 8
# .y. group1 group2 n1 n2 statistic df p
# * <chr> <chr> <chr> <int> <int> <dbl> <dbl> <dbl>
# 1 column A B 100 100 0.996 197. 0.32
I would like to perform multiple pairwise t-tests on a dataset containing about 400 different column variables and 3 subject groups, and extract p-values for every comparison. A shorter representative example of the data, using only 2 variables, could be the following:
df <- tibble(var1 = rnorm(90, 1, 1), var2 = rnorm(90, 1.5, 1), group = rep(1:3, each = 30))
Ideally the end result will be a summarised data frame containing four columns: one for the variable being tested (var1, var2, etc.), two for the groups being compared in each test, and a final one for the p-value.
I've tried duplicating the group column in long form and doing a double group_by in order to do the comparisons, but with no result:
result <- df %>%
  pivot_longer(var1:var2, "var", "value") %>%
  rename(group_a = group) %>%
  mutate(group_b = group_a) %>%
  group_by(group_a, group_b) %>%
  summarise(n = n())
We can reshape the data into 'long' format with pivot_longer, then, grouped by 'group', apply pairwise.t.test, transform the list output into a tibble with tidy (from broom), and unnest the list column:
library(dplyr)
library(tidyr)
library(broom)
df %>%
  pivot_longer(cols = -group, names_to = 'grp') %>%
  group_by(group) %>%
  summarise(out = list(pairwise.t.test(value, grp) %>%
                         tidy)) %>%
  unnest(c(out))
-output
# A tibble: 3 x 4
group group1 group2 p.value
<int> <chr> <chr> <dbl>
1 1 var2 var1 0.0760
2 2 var2 var1 0.0233
3 3 var2 var1 0.000244
In case you end up wanting more information about the t-tests, here is an approach that also extracts the degrees of freedom and the value of the test statistic:
library(dplyr)
library(tidyr)
library(purrr)
library(broom)
df <- tibble(
  var1 = rnorm(90, 1, 1),
  var2 = rnorm(90, 1.5, 1),
  group = rep(1:3, each = 30)
)

df %>%
  select(-group) %>%
  names() %>%
  map_dfr(~ {
    y <- .
    combn(3, 2) %>%
      t() %>%
      as.data.frame() %>%
      pmap_dfr(function(V1, V2) {
        df %>%
          select(group, all_of(y)) %>%
          filter(group %in% c(V1, V2)) %>%
          t.test(as.formula(sprintf("%s ~ group", y)), ., var.equal = TRUE) %>%
          tidy() %>%
          transmute(y = y,
                    group_1 = V1,
                    group_2 = V2,
                    df = parameter,
                    t_value = statistic,
                    p_value = p.value
          )
      })
  })
#> # A tibble: 6 x 6
#> y group_1 group_2 df t_value p_value
#> <chr> <int> <int> <dbl> <dbl> <dbl>
#> 1 var1 1 2 58 -0.337 0.737
#> 2 var1 1 3 58 -1.35 0.183
#> 3 var1 2 3 58 -1.06 0.295
#> 4 var2 1 2 58 -0.152 0.879
#> 5 var2 1 3 58 1.72 0.0908
#> 6 var2 2 3 58 1.67 0.100
And here is @akrun's answer tweaked to give the same p-values as the above approach. Note the p.adjust.method = "none", which gives independent t-tests and will inflate your Type I error rate.
df %>%
  pivot_longer(
    cols = -group,
    names_to = "y"
  ) %>%
  group_by(y) %>%
  summarise(
    out = list(
      tidy(
        pairwise.t.test(
          value,
          group,
          p.adjust.method = "none",
          pool.sd = FALSE
        )
      )
    )
  ) %>%
  unnest(c(out))
#> # A tibble: 6 x 4
#> y group1 group2 p.value
#> <chr> <chr> <chr> <dbl>
#> 1 var1 2 1 0.737
#> 2 var1 3 1 0.183
#> 3 var1 3 2 0.295
#> 4 var2 2 1 0.879
#> 5 var2 3 1 0.0909
#> 6 var2 3 2 0.100
Created on 2021-07-30 by the reprex package (v1.0.0)
I wrote a simple function to create tables of percentages in dplyr:
library(dplyr)
df = tibble(
  Gender = sample(c("Male", "Female"), 100, replace = TRUE),
  FavColour = sample(c("Red", "Blue"), 100, replace = TRUE)
)

quick_pct_tab = function(df, col) {
  col_quo = enquo(col)
  df %>%
    count(!! col_quo) %>%
    mutate(Percent = (100 * n / sum(n)))
}

df %>% quick_pct_tab(FavColour)
# Output:
# A tibble: 2 x 3
FavColour n Percent
<chr> <int> <dbl>
1 Blue 58 58
2 Red 42 42
This works great. However, when I tried to build on top of it, writing a new function that calculates the same percentages with grouping, I could not figure out how to use quick_pct_tab within the new function, despite trying multiple combinations of quo(col), !! quo(col), enquo(col), etc.
library(tidyr)  # needed for spread()

bygender_tab = function(df, col) {
  col_enquo = enquo(col)
  # Want to replace this with
  # df %>% quick_pct_tab(col)
  gender_tab = df %>%
    group_by(Gender) %>%
    count(!! col_enquo) %>%
    mutate(Percent = (100 * n / sum(n)))
  gender_tab %>%
    select(!! col_enquo, Gender, Percent) %>%
    spread(Gender, Percent)
}
> df %>% bygender_tab(FavColour)
# A tibble: 2 x 3
FavColour Female Male
* <chr> <dbl> <dbl>
1 Blue 52.08333 63.46154
2 Red 47.91667 36.53846
From what I understand, non-standard evaluation in dplyr is deprecated, so it would be great to learn how to achieve this using dplyr > 0.7. How do I have to quote the col argument to pass it through to a further dplyr function?
We need !! to trigger the evaluation of 'col_enquo':
bygender_tab = function(df, col) {
  col_enquo = enquo(col)
  df %>%
    group_by(Gender) %>%
    quick_pct_tab(!!col_enquo) %>% ## change
    select(!! col_enquo, Gender, Percent) %>%
    spread(Gender, Percent)
}

df %>%
  bygender_tab(FavColour)
# A tibble: 2 x 3
# FavColour Female Male
#* <chr> <dbl> <dbl>
#1 Blue 54.54545 41.07143
#2 Red 45.45455 58.92857
Using the OP's function, the output is
# A tibble: 2 x 3
# FavColour Female Male
#* <chr> <dbl> <dbl>
#1 Blue 54.54545 41.07143
#2 Red 45.45455 58.92857
Note that the seed was not set while creating the dataset, so the counts differ between runs.
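For reproducible counts, a seed could be set before building the example data; a minimal sketch (the seed value 42 is arbitrary):
# setting a seed makes sample() reproducible across runs
set.seed(42)
df = tibble(
  Gender = sample(c("Male", "Female"), 100, replace = TRUE),
  FavColour = sample(c("Red", "Blue"), 100, replace = TRUE)
)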
Update
With rlang version 0.4.0 (run here with dplyr 0.8.2), we can also use {{...}} (curly-curly) to do the quote, unquote, and substitution in one step:
bygender_tabN = function(df, col) {
  df %>%
    group_by(Gender) %>%
    quick_pct_tab({{col}}) %>% ## change
    select({{col}}, Gender, Percent) %>%
    spread(Gender, Percent)
}

df %>%
  bygender_tabN(FavColour)
# A tibble: 2 x 3
# FavColour Female Male
# <chr> <dbl> <dbl>
#1 Blue 50 46.3
#2 Red 50 53.7
-checking output with previous function (set.seed was not provided)
df %>%
  bygender_tab(FavColour)
# A tibble: 2 x 3
# FavColour Female Male
# <chr> <dbl> <dbl>
#1 Blue 50 46.3
#2 Red 50 53.7
I have the following function to describe a variable:
library(dplyr)
describe = function(.data, variable){
  args <- as.list(match.call())
  evalue = eval(args$variable, .data)
  summarise(.data,
            'n' = length(evalue),
            'mean' = mean(evalue),
            'sd' = sd(evalue))
}
I want to use dplyr for describing the variable.
set.seed(1)
df = data.frame(
  'g' = sample(1:3, 100, replace = T),
  'x1' = rnorm(100),
  'x2' = rnorm(100)
)
df %>% describe(x1)
# n mean sd
# 1 100 -0.01757949 0.9400179
The problem is that when I try to apply the same description using group_by, the describe function is not applied within each group:
df %>% group_by(g) %>% describe(x1)
# # A tibble: 3 x 4
# g n mean sd
# <int> <int> <dbl> <dbl>
# 1 1 100 -0.01757949 0.9400179
# 2 2 100 -0.01757949 0.9400179
# 3 3 100 -0.01757949 0.9400179
How would you change the function to obtain what is desired, using a small number of modifications?
You need tidyeval:
describe = function(.data, variable){
  evalue = enquo(variable)
  summarise(.data,
            'n' = length(!!evalue),
            'mean' = mean(!!evalue),
            'sd' = sd(!!evalue))
}

df %>% group_by(g) %>% describe(x1)
# A tibble: 3 x 4
g n mean sd
<int> <int> <dbl> <dbl>
1 1 27 -0.23852862 1.0597510
2 2 38 0.11327236 0.8470885
3 3 35 0.01079926 0.9351509
The dplyr vignette 'Programming with dplyr' has a thorough description of using enquo and !!
Edit:
In response to Axeman's comment, I'm not 100% sure why the combination of group_by and describe does not work here.
However, using debugonce with the function in its original form
debugonce(describe)
df %>% group_by(g) %>% describe(x1)
one can see that evalue is not grouped and is just a numeric vector of length 100.
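A quick way to see the same thing outside the debugger (a minimal check, reusing the df built above): eval() simply extracts the column from the whole data frame, so every group ends up summarising the same full-length vector.
# evalue is the entire x1 column, not a per-group slice
evalue <- eval(quote(x1), df)
length(evalue)
# [1] 100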
Base NSE appears to work, too:
describe <- function(data, var){
  var_q <- substitute(var)
  data %>%
    summarise(n = n(),
              mean = mean(eval(var_q)),
              sd = sd(eval(var_q)))
}
df %>% describe(x1)
n mean sd
1 100 -0.1266289 1.006795
df %>% group_by(g) %>% describe(x1)
# A tibble: 3 x 4
g n mean sd
<int> <int> <dbl> <dbl>
1 1 33 -0.1379206 1.107412
2 2 29 -0.4869704 0.748735
3 3 38 0.1581745 1.020831
I want to run a Mann-Whitney U test. But R's wilcox.test(x ~ y, conf.int = TRUE) does not give statistics such as N, mean rank, sum of ranks, and the Z-value for both factor levels. I need R to give as much information as SPSS does (see here).
I'm wondering whether I missed some options, or if there is a good package I could install.
Thanks!
In R, you need to calculate the various outputs of SPSS separately. For example, using dplyr::summarise:
library(dplyr)
mt_filt <- mtcars %>%
  filter(cyl > 4) %>%
  mutate(rank_mpg = rank(mpg))

mt_filt %>%
  group_by(cyl) %>%
  summarise(n = n(),
            mean_rank_mpg = mean(rank_mpg),
            sum_rank_mpg = sum(rank_mpg))
# # A tibble: 2 × 4
# cyl n mean_rank_mpg sum_rank_mpg
# <dbl> <int> <dbl> <dbl>
# 1 6 7 17.4 122
# 2 8 14 7.82 110
# Number in first group
n1 <- sum(as.integer(factor(mt_filt$cyl)) == 1)

wilcox.test(mpg ~ cyl, mt_filt) %>%
  with(data_frame(U = statistic,
                  W = statistic + n1 * (n1 + 1) / 2,
                  Z = qnorm(p.value / 2),
                  p = p.value))
# # A tibble: 1 × 4
# U W Z p
# <dbl> <dbl> <dbl> <dbl>
# 1 93.5 121.5 -3.286879 0.001013045
Edit 2020-07-15
Thanks to @Paul for pointing out that the ranks need to be generated prior to grouping.
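To see why the order matters, here is a small illustrative check: if rank() is applied after group_by(), the ranks are computed within each cyl group instead of across the pooled sample, so the mean ranks no longer reflect the Mann-Whitney comparison above.
mtcars %>%
  filter(cyl > 4) %>%
  group_by(cyl) %>%
  mutate(rank_mpg = rank(mpg)) %>%   # within-group ranks: not what we want here
  summarise(mean_rank_mpg = mean(rank_mpg))
# within each group the mean rank collapses to (n + 1) / 2, losing the comparison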