I want to run a Mann-Whitney U test, but R's wilcox.test(x ~ y, conf.int = TRUE) does not report statistics such as N, mean rank, sum of ranks, or the Z value for both groups. I need R to give as much information as SPSS does (see here).
I'm wondering whether I didn't select some options, or if there is a good package I could install?
Thanks!
In R, you need to calculate the various outputs of SPSS separately. For example, using dplyr::summarise:
library(dplyr)

mt_filt <- mtcars %>%
  filter(cyl > 4) %>%
  mutate(rank_mpg = rank(mpg))   # rank over the whole filtered data, before grouping

mt_filt %>%
  group_by(cyl) %>%
  summarise(n = n(),
            mean_rank_mpg = mean(rank_mpg),
            sum_rank_mpg = sum(rank_mpg))
# # A tibble: 2 × 4
# cyl n mean_rank_mpg sum_rank_mpg
# <dbl> <int> <dbl> <dbl>
# 1 6 7 17.4 122
# 2 8 14 7.82 110
# Number of observations in the first group
n1 <- sum(as.integer(factor(mt_filt$cyl)) == 1)

wilcox.test(mpg ~ cyl, mt_filt) %>%
  with(tibble(U = statistic,
              W = statistic + n1 * (n1 + 1) / 2,
              Z = qnorm(p.value / 2),
              p = p.value))
# # A tibble: 1 × 4
# U W Z p
# <dbl> <dbl> <dbl> <dbl>
# 1 93.5 121.5 -3.286879 0.001013045
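If you need this repeatedly, the steps above can be wrapped into a single helper. This is only a sketch (the name spss_style_mwu is made up, not from any package); it expects a numeric vector x and a two-level grouping factor g, and it recovers Z from the two-sided p-value, so it assumes wilcox.test() used the normal approximation (its default when there are ties or the samples are large).

spss_style_mwu <- function(x, g) {
  g  <- factor(g)
  rk <- rank(x)                      # rank across all observations, before grouping
  ranks <- tibble(g, rk) %>%
    group_by(g) %>%
    summarise(n = n(),
              mean_rank = mean(rk),
              sum_rank = sum(rk))
  n1 <- ranks$n[1]                   # size of the first group
  wt <- wilcox.test(x ~ g)
  list(ranks = ranks,
       test = tibble(U = unname(wt$statistic),
                     W = unname(wt$statistic) + n1 * (n1 + 1) / 2,
                     Z = qnorm(wt$p.value / 2),
                     p = wt$p.value))
}

spss_style_mwu(mt_filt$mpg, mt_filt$cyl)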
Edit 2020-07-15
Thanks to @Paul for pointing out that the ranks need to be generated prior to grouping.
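To see why the order matters: if rank() is applied after group_by(), the ranks are computed within each cyl group and the rank sums no longer refer to the combined sample. A quick illustration, not part of the original answer:

mtcars %>%
  filter(cyl > 4) %>%
  group_by(cyl) %>%
  mutate(rank_mpg = rank(mpg)) %>%   # ranks are now within each group
  summarise(sum_rank_mpg = sum(rank_mpg))
# gives 28 and 105 instead of the 122 and 110 above
# (21 observations in total, so the correct rank sums add up to 21 * 22 / 2 = 231)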
Related
I have a tibble and I want to create several summaries of the same column, specifically the first, second and third quartiles.
To do it, I create a named list of functions and that works fine.
library("tidyverse")
set.seed(1234)
df <- tibble(x = rnorm(100))
df %>%
  summarise(
    across(x,
           list(
             Q1 = ~ quantile(., 1 / 4),
             Q2 = ~ quantile(., 2 / 4),
             Q3 = ~ quantile(., 3 / 4)
           ),
           .names = "{.fn}"
    )
  )
#> # A tibble: 1 × 3
#> Q1 Q2 Q3
#> <dbl> <dbl> <dbl>
#> 1 -0.895 -0.385 0.471
Can I achieve this by specifying the list of probabilities to pass to quantile()? That would save me typing and, more importantly, avoid hard-coding the arguments passed to the aggregating function.
The following doesn't work because it creates one row per probability rather than one column.
df %>%
  summarise(
    across(x, quantile, 1:3 / 4)
  )
#> # A tibble: 3 × 1
#> x
#> <dbl>
#> 1 -0.895
#> 2 -0.385
#> 3 0.471
You're almost there:
# note: df is regenerated here without re-setting the seed,
# so the values below differ from the question's output
df <- tibble(x = rnorm(100))

df %>%
  summarise(
    across(x,
           map(1:3, ~ partial(quantile, probs = . / 4)),
           .names = "Q{.fn}"
    )
  )
# A tibble: 1 x 3
Q1 Q2 Q3
<dbl> <dbl> <dbl>
1 -0.579 0.0815 0.475
If you define the quantiles like this:
Q <- c(0.25, 0.5, 0.75)
Then the following code will produce columns of the appropriate quantiles with sensible labels:
df %>%
  summarise(
    across(x,
           setNames(lapply(Q,
                           # build the skeleton formula ~quantile(., b), then
                           # substitute the actual probability for the placeholder b
                           function(x) { f <- ~quantile(., b); f[2][[1]][[3]] <- x; f }),
                    paste("Q", round(100 * Q), sep = "_")),
           .names = "{.fn}"
    )
  )
#> # A tibble: 1 x 3
#> Q_25 Q_50 Q_75
#> <dbl> <dbl> <dbl>
#> 1 -0.895 -0.385 0.471
Created on 2022-06-29 by the reprex package (v2.0.1)
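Another option, not from the answers above and assuming dplyr >= 1.0 and tibble >= 3.0: since quantile() already returns a named vector, you can let summarise() unpack a one-row tibble into columns. The resulting columns keep quantile()'s names (e.g. `25%`), so you may still want to rename them.

Q <- c(0.25, 0.5, 0.75)

df %>%
  summarise(as_tibble_row(quantile(x, probs = Q)))
# one row with columns named "25%", "50%", "75%"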
I am using a self-declared function that runs a regression analysis. I want to run this for thousands of companies over multiple years, so speed is essential. My function creates three outputs (a coefficient, the p value and the r-squared). The function runs fine individually, but when I use mutate() to run it over the whole dataset, it gives the same values for all rows. The weirdest thing is that I can't reproduce those particular values by running the function individually. I made a reproducible example below. I have used lapply successfully before with this data, but I would like to keep it in mutate and, above all, I would like to know what exactly is happening here.
So my question is: how can I make this function work for each individual row for the companies dataset using mutate?
library(tidyverse)

companies <- data.frame(comp_id = 1:5)

individuals <- data.frame(id = 1:100,
                          comp_id = sample(1:5, 100, replace = T),
                          age = sample(18:67, 100, replace = T),
                          wage = sample(1700:10000, 100, replace = T))

regger <- function(x){
  df <- individuals %>% filter(comp_id == x)
  formula <- wage ~ age
  regression <- lm(formula, df)
  res <- list(coeff = summary(regression)$coefficients[2, 1],
              p = summary(regression)$coefficients[2, 4],
              r2 = summary(regression)$r.squared)
  return(res)
}

companies %>%
  mutate(data = list(regger(comp_id))) %>%
  unnest_wider(data)
output:
# A tibble: 5 x 4
comp_id coeff p r2
<int> <dbl> <dbl> <dbl>
1 1 -4.92 0.916 0.000666
2 2 -4.92 0.916 0.000666
3 3 -4.92 0.916 0.000666
4 4 -4.92 0.916 0.000666
5 5 -4.92 0.916 0.000666
Use map from the purrr package if a function is not vectorized:
library(tidyverse)
set.seed(1337)

companies <- data.frame(comp_id = 1:5)

individuals <- data.frame(
  id = 1:100,
  comp_id = sample(1:5, 100, replace = T),
  age = sample(18:67, 100, replace = T),
  wage = sample(1700:10000, 100, replace = T)
)

regger <- function(x) {
  df <- individuals %>% filter(comp_id == x)
  formula <- wage ~ age
  regression <- lm(formula, df)
  res <- list(
    coeff = summary(regression)$coefficients[2, 1],
    p = summary(regression)$coefficients[2, 4],
    r2 = summary(regression)$r.squared
  )
  return(res)
}

companies %>%
  mutate(data = comp_id %>% map(regger)) %>%
  unnest_wider(data)
#> # A tibble: 5 x 4
#> comp_id coeff p r2
#> <int> <dbl> <dbl> <dbl>
#> 1 1 67.1 0.108 0.218
#> 2 2 23.7 0.466 0.0337
#> 3 3 31.2 0.292 0.0462
#> 4 4 18.4 0.582 0.0134
#> 5 5 0.407 0.994 0.00000371
Created on 2021-09-09 by the reprex package (v2.0.1)
I'm not sure what the output should look like, but could it be that you need to work on a row-by-row basis?
companies %>%
  rowwise() %>%
  mutate(data = list(regger(comp_id))) %>%
  unnest_wider(data)
comp_id coeff p r2
<int> <dbl> <dbl> <dbl>
1 1 21.6 0.470 0.0264
2 2 13.5 0.782 0.00390
3 3 0.593 0.984 0.0000175
4 4 -9.33 0.824 0.00394
5 5 64.9 0.145 0.156
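For what it's worth, the identical rows in the question come from mutate() evaluating the expression once with the whole comp_id column rather than once per row; list(regger(comp_id)) therefore produces a single result that gets recycled. A quick diagnostic sketch:

# comp_id is the full vector 1:5 here, so filter(comp_id == x) recycles the
# comparison against individuals$comp_id and keeps an essentially arbitrary
# subset; one regression is fit on it and then repeated for every company.
regger(companies$comp_id)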
I wish to estimate models in one dataframe, but the formula for each model has some "moving parts" which come from another dataframe. For example, say I wish to estimate the following model (I can't post a picture and found no way to type LaTeX equations):
mpg = a + b*log(w_1 * drat + w_2 * hp)
where w_1 and w_2 are weights, which for example are either 0.5 or 1. I use expand.grid() to create a dataframe of weights, then mutate() a formula using paste() or paste0() with the variable names and the value of the weights, and then pass it to the lm() function.
However, the model estimated is just using the formula found in the first row of the weights dataframe. This is solved if I use group_by() before estimating the models.
The question is - why? why doesn't the first code work? what does group_by() achieve here that makes it possible?
library(tidyverse)

cars <- mtcars
w <- seq(from = 0.5, to = 1, by = 0.5)
weights <- as_tibble(expand.grid(w1 = w, w2 = w))

# Doesn't work - the lm model is fit using the formula from the first row only
weights %>%
  mutate(formula_weights = paste0("mpg~log(", w1, "*drat+", w2, "*hp)")) %>%
  mutate(r2 = summary(lm(data = cars, formula = formula_weights))$r.squared)

# Does work - model is fit using the w1 and w2 values from each row (formula_weights)
weights %>%
  mutate(formula_weights = paste0("mpg~log(", w1, "*drat+", w2, "*hp)")) %>%
  group_by(formula_weights) %>%
  mutate(r2 = summary(lm(data = cars, formula = formula_weights))$r.squared)
The output without group_by():
# A tibble: 4 x 4
w1 w2 formula_weights r2
<dbl> <dbl> <chr> <dbl>
1 0.5 0.5 mpg~log(0.5*drat+0.5*hp) 0.715
2 1 0.5 mpg~log(1*drat+0.5*hp) 0.715
3 0.5 1 mpg~log(0.5*drat+1*hp) 0.715
4 1 1 mpg~log(1*drat+1*hp) 0.715
The output with group_by():
# A tibble: 4 x 4
# Groups: formula_weights [4]
w1 w2 formula_weights r2
<dbl> <dbl> <chr> <dbl>
1 0.5 0.5 mpg~log(0.5*drat+0.5*hp) 0.715
2 1 0.5 mpg~log(1*drat+0.5*hp) 0.709
3 0.5 1 mpg~log(0.5*drat+1*hp) 0.718
4 1 1 mpg~log(1*drat+1*hp) 0.715
We can add rowwise():
library(dplyr)
weights %>%
  mutate(formula_weights = paste0("mpg~log(", w1, "*drat+", w2, "*hp)")) %>%
  rowwise() %>%
  mutate(r2 = summary(lm(data = cars, formula = formula_weights))$r.squared)
#Source: local data frame [4 x 4]
#Groups: <by row>
# A tibble: 4 x 4
# w1 w2 formula_weights r2
# <dbl> <dbl> <chr> <dbl>
#1 0.5 0.5 mpg~log(0.5*drat+0.5*hp) 0.715
#2 1 0.5 mpg~log(1*drat+0.5*hp) 0.709
#3 0.5 1 mpg~log(0.5*drat+1*hp) 0.718
#4 1 1 mpg~log(1*drat+1*hp) 0.715
Or use map:
library(purrr)

weights %>%
  mutate(r2 = map_dbl(paste0("mpg~log(", w1, "*drat+", w2, "*hp)"), ~
    summary(lm(data = cars, formula = .x))$r.squared))
# A tibble: 4 x 3
# w1 w2 r2
# <dbl> <dbl> <dbl>
#1 0.5 0.5 0.715
#2 1 0.5 0.709
#3 0.5 1 0.718
#4 1 1 0.715
Use sapply inside your mutate(); summary()/lm() are not vectorized:
weights %>%
  mutate(formula_weights = paste0("mpg~log(", w1, "*drat+", w2, "*hp)")) %>%
  mutate(r2 = sapply(formula_weights,
                     function(fw) summary(lm(data = cars, formula = fw))$r.squared))
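As to the why: without group_by() or rowwise(), mutate() evaluates the expression once with the whole formula_weights column, so lm() receives a character vector of length 4. When that vector is coerced to a formula, only its first element ends up being used (older R versions do this silently, newer ones may warn or error), which is why every row repeats the first formula's r2. A quick way to see what lm() is being handed:

weights %>%
  mutate(formula_weights = paste0("mpg~log(", w1, "*drat+", w2, "*hp)")) %>%
  mutate(len = length(formula_weights))
# len is 4 in every row; after group_by(formula_weights) or rowwise() it would be 1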
I want to use filter() or a similar function inside summarise() from the dplyr package. I've got a dataframe (e.g. mtcars) where I need to group by a factor (e.g. cyl) and then calculate some statistics and a percentage of total wt for every cyl type -> wt.pc.
The question is: how can I subset/filter the wt column inside summarise() to get that percentage, but without the last 10 rows?
I've tried this code but it returns NA :(
mtcars %>%
  group_by(cyl) %>%
  summarise(wt = round(sum(wt)),
            wt.pc = sum(wt) * 100 / sum(mtcars[, 6]),
            wt.pc.short = sum(wt[1:22]) * 100 / sum(mtcars[1:22, 6]),
            drat.max = round(max(drat)))
# A tibble: 3 x 5
cyl wt wt.pc wt.pc.short drat.max
<dbl> <dbl> <dbl> <dbl> <dbl>
1 4 25 24.3 NA 5
2 6 22 21.4 NA 4
3 8 56 54.4 NA 4
wt.pc.short is the % of sum(wt) for every cyl, computed for the shorter dataframe mtcars[1:22, ].
Something like this?
mtcars %>%
  mutate(id = row_number()) %>%
  group_by(cyl) %>%
  summarise(wt_new = round(sum(wt)),   # note the change in name here!
            wt.pc = sum(wt) * 100 / sum(mtcars[, 6]),
            wt.pc.short = sum(wt[id < 23]) * 100 / sum(mtcars[1:22, 6]),
            drat.max = round(max(drat)))
# A tibble: 3 x 5
cyl wt_new wt.pc wt.pc.short drat.max
<dbl> <dbl> <dbl> <dbl> <dbl>
1 4 25 24.3 22.7 5
2 6 22 21.4 25.8 4
3 8 56 54.4 51.6 4
The important part here is that when you assign wt in the call to summarize, all subsequent references to wt will take the previously assigned wt, not the original wt. A statement such as wt[1:22] is thus somewhat problematic. You can see this here:
mean(mtcars[,"mpg"])
# [1] 20.09062
var(mtcars[,"mpg"])
# [1] 36.3241
mtcars %>% summarise(var_before = var(mpg),
                     mpg = mean(mpg),
                     var_after = var(mpg))
# var_before mpg var_after
# 1 36.3241 20.09062 NA
I think you can do it like this. First we calculate the row number within each group; if max(ID) > 10, we have enough observations to remove the last 10 rows, so we filter to ID < (max(ID) - 9) (i.e. drop the last 10 rows); otherwise ID == ID is TRUE for every row and nothing is removed.
mtcars %>%
  group_by(cyl) %>%
  mutate(ID = row_number()) %>%
  filter(if (max(ID) > 10) ID < (max(ID) - 9) else ID == ID)
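A slightly more compact version of the same idea (a sketch with the same behaviour: groups with 10 or fewer rows are kept whole, otherwise the last 10 rows are dropped):

mtcars %>%
  group_by(cyl) %>%
  filter(n() <= 10 | row_number() <= n() - 10)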
Currently, I am trying to find the centers of clusters in grouped data. Using the sample data set and problem definition below, I am able to create a kmeans clustering within each group. However, when it comes to extracting the center of each cluster for a given group, I don't know how to get them. https://rdrr.io/cran/broom/man/kmeans_tidiers.html
The sample data is taken from the broom documentation linked above (with small modifications to add the gr column).
Sample data
library(dplyr)
library(broom)
library(ggplot2)

set.seed(2015)

sizes_1 <- c(20, 100, 500)
sizes_2 <- c(10, 50, 100)

centers_1 <- data_frame(x = c(1, 4, 6),
                        y = c(5, 0, 6),
                        n = sizes_1,
                        cluster = factor(1:3))

centers_2 <- data_frame(x = c(1, 4, 6),
                        y = c(5, 0, 6),
                        n = sizes_2,
                        cluster = factor(1:3))

points1 <- centers_1 %>%
  group_by(cluster) %>%
  do(data_frame(x = rnorm(.$n, .$x),
                y = rnorm(.$n, .$y),
                gr = "1"))

points2 <- centers_2 %>%
  group_by(cluster) %>%
  do(data_frame(x = rnorm(.$n, .$x),
                y = rnorm(.$n, .$y),
                gr = "2"))

combined_points <- rbind(points1, points2)
> combined_points
# A tibble: 780 x 4
# Groups: cluster [3]
cluster x y gr
<fctr> <dbl> <dbl> <chr>
1 1 3.66473833 4.285771 1
2 1 0.51540619 5.565826 1
3 1 0.11556319 5.592178 1
4 1 1.60513712 5.360013 1
5 1 2.18001557 4.955883 1
6 1 1.53998887 4.530316 1
7 1 -1.44165622 4.561338 1
8 1 2.35076259 5.408538 1
9 1 -0.03060973 4.980363 1
10 1 2.22165205 5.125556 1
# ... with 770 more rows
ggplot(combined_points, aes(x, y)) +
  facet_wrap(~gr) +
  geom_point(aes(color = cluster))
OK, everything is great up to here. But then I want to extract each cluster center for each group:
clust <- combined_points %>%
  group_by(gr) %>%
  dplyr::select(x, y) %>%
  kmeans(3)
> clust
K-means clustering with 3 clusters of sizes 594, 150, 36
Cluster means:
gr x y
1 1.166667 6.080832 6.0074885
2 1.333333 4.055645 0.0654158
3 1.305556 1.507862 5.2417670
As we can see, gr is treated as just another numeric column, and I can't tell which group these centers belong to.
Going one step further to look at the tidy format of clust:
> tidy(clust)
x1 x2 x3 size withinss cluster
1 1.166667 6.080832 6.0074885 594 1095.3047 1
2 1.333333 4.055645 0.0654158 150 312.4182 2
3 1.305556 1.507862 5.2417670 36 115.2484 3
Still, I can't see the center information for gr 2.
I hope the problem is explained clearly. Let me know if anything is missing! Thanks in advance!
kmeans doesn't understand dplyr grouping, so it's just finding three overall centers instead of three within each group. The preferred idiom at this point is to use list columns of the input data, e.g.
library(tidyverse)

points_and_models <- combined_points %>%
  ungroup() %>% select(-cluster) %>%        # cleanup, remove cluster name so data will collapse
  nest(x, y) %>%                            # collapse input data into a list column (tidyr >= 1.0 spells this nest(data = c(x, y)))
  mutate(model = map(data, kmeans, 3),      # iterate the model over the list column of input data
         centers = map(model, broom::tidy)) # extract data from the models
points_and_models
#> # A tibble: 2 x 4
#> gr data model centers
#> <chr> <list> <list> <list>
#> 1 1 <tibble [620 × 2]> <S3: kmeans> <data.frame [3 × 5]>
#> 2 2 <tibble [160 × 2]> <S3: kmeans> <data.frame [3 × 5]>
points_and_models %>% unnest(centers)
#> # A tibble: 6 x 6
#> gr x1 x2 size withinss cluster
#> <chr> <dbl> <dbl> <int> <dbl> <fct>
#> 1 1 4.29 5.71 158 441. 1
#> 2 1 3.79 0.121 102 213. 2
#> 3 1 6.39 6.06 360 534. 3
#> 4 2 5.94 5.88 100 194. 1
#> 5 2 4.01 -0.127 50 97.4 2
#> 6 2 1.07 4.57 10 15.7 3
Note that the cluster column is from the model results, not the input data.
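As a side note, on tidyr >= 1.0 the nesting call is spelled nest(data = c(x, y)), and the column names of the tidied centers may differ between broom versions (x1/x2 in older versions, the original variable names in newer ones). A sketch of the same pipeline with that syntax:

combined_points %>%
  ungroup() %>%
  select(-cluster) %>%
  nest(data = c(x, y)) %>%                  # one list-column row per gr
  mutate(model = map(data, ~ kmeans(.x, centers = 3)),
         centers = map(model, broom::tidy)) %>%
  unnest(centers)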
You can also do the same thing with do, e.g.
combined_points %>%
  group_by(gr) %>%
  do(model = kmeans(.[c('x', 'y')], 3)) %>%
  ungroup() %>% group_by(gr) %>%
  do(map_df(.$model, broom::tidy)) %>%
  ungroup()
but do and grouping rowwise are sort of soft-deprecated at this point, and the code gets a little janky, as you can see by the need to explicitly ungroup so much.