There must be an R-ly way to call wilcox.test over multiple observations in parallel using group_by. I've spent a good deal of time reading up on this but still can't figure out a call to wilcox.test that does the job. Example data and code below, using magrittr pipes and summarize().
library(dplyr)
library(magrittr)
# create a data frame where x is the dependent variable, id1 is a category variable (here with five levels), and id2 is a binary category variable used for the two-sample Wilcoxon test
df <- data.frame(x=abs(rnorm(50)),id1=rep(1:5,10), id2=rep(1:2,25))
# make sure piping and grouping are called correctly, with "sum" function as a well-behaving example function
df %>% group_by(id1) %>% summarise(s=sum(x))
df %>% group_by(id1,id2) %>% summarise(s=sum(x))
# make sure wilcox.test is called correctly
wilcox.test(x~id2, data=df, paired=FALSE)$p.value
# yet, cannot call wilcox.test within pipe with summarise (regardless of group_by). Expected output is five p-values (one for each level of id1)
df %>% group_by(id1) %>% summarise(w=wilcox.test(x~id2, data=., paired=FALSE)$p.value)
df %>% summarise(wilcox.test(x~id2, data=., paired=FALSE))
# even specifying formula argument by name doesn't help
df %>% group_by(id1) %>% summarise(w=wilcox.test(formula=x~id2, data=., paired=FALSE)$p.value)
The buggy calls yield this error:
Error in wilcox.test.formula(c(1.09057358373486, 2.28465932554436, 0.885617572657959, :
  'formula' missing or incorrect
Thanks for your help; I hope it will be helpful to others with similar questions as well.
Your task will be easily accomplished using the do function (call ?do after loading the dplyr library). Using your data, the chain will look like this:
df <- data.frame(x=abs(rnorm(50)),id1=rep(1:5,10), id2=rep(1:2,25))
df <- tbl_df(df)
res <- df %>% group_by(id1) %>%
  do(w = wilcox.test(x ~ id2, data = ., paired = FALSE)) %>%
  summarise(id1, Wilcox = w$p.value)
Output:
res
Source: local data frame [5 x 2]
id1 Wilcox
(int) (dbl)
1 1 0.6904762
2 2 0.4206349
3 3 1.0000000
4 4 0.6904762
5 5 1.0000000
Note that I added the do() call between group_by() and summarise().
I hope it helps.
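As an aside, do() has since been superseded in dplyr; a minimal sketch of the same computation using group_modify() (assuming dplyr >= 0.8.1, where it was introduced):
df %>%
  group_by(id1) %>%
  # .x is the current group's data; the grouping column id1 is excluded by default
  group_modify(~ data.frame(p.value = wilcox.test(x ~ id2, data = .x, paired = FALSE)$p.value))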
You can do this with base R (although the result is a cumbersome list):
by(df, df$id1, function(d) { wilcox.test(x ~ id2, data = d, paired = FALSE)$p.value })
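If the list that by() returns is too cumbersome, the same idea with split() and sapply() gives a plain named vector, one p-value per level of id1:
# split df into one data frame per id1 level, then run the test on each piece
sapply(split(df, df$id1),
       function(d) wilcox.test(x ~ id2, data = d, paired = FALSE)$p.value)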
or with plyr (note that ddply() comes from the plyr package, not dplyr):
library(plyr)
ddply(df, .(id1), function(d) { wilcox.test(x ~ id2, data = d, paired = FALSE)$p.value })
id1 V1
1 1 0.3095238
2 2 1.0000000
3 3 0.8412698
4 4 0.6904762
5 5 0.3095238
Sample data frame
Guest <- c("ann","ann","beth","beth","bill","bill","bob","bob","bob","fred","fred","ginger","ginger")
State <- c("TX","IA","IA","MA","AL","TX","TX","AL","MA","MA","IA","TX","AL")
df <- data.frame(Guest,State)
Desired output: one row per distinct chain of visited states (sorted alphabetically within each guest), with a count of how many guests share that exact chain.
I have tried about a dozen different ideas but am not getting close. The closest was setting up a crosstab, but I didn't know how to get counts out of it. Reshaping long/wide got me nowhere either. I'm still too new to think outside the box, I guess.
Try this approach. You can arrange your values and then use group_by() and summarise() to reach the expected structure:
library(dplyr)
library(tidyr)
# Code
new <- df %>%
  arrange(Guest, State) %>%
  group_by(Guest) %>%
  summarise(Chain = paste0(State, collapse = '-')) %>%
  group_by(Chain, .drop = TRUE) %>%
  summarise(N = n())
Output:
# A tibble: 4 x 2
Chain N
<chr> <int>
1 AL-MA-TX 1
2 AL-TX 2
3 IA-MA 2
4 IA-TX 1
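As a side note, the second group_by()/summarise() pair can be collapsed into a single count() call (dplyr >= 0.8 is assumed here for count()'s name argument):
df %>%
  arrange(Guest, State) %>%
  group_by(Guest) %>%
  summarise(Chain = paste0(State, collapse = '-')) %>%
  count(Chain, name = 'N')   # one row per distinct Chain, with its frequency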
We can use base R with aggregate() and table():
table(aggregate(State ~ Guest, df[do.call(order, df), ], paste, collapse = '-')$State)
Output:
# AL-MA-TX AL-TX IA-MA IA-TX
# 1 2 2 1
I'm cleaning a dataset that doesn't yet have column names (so I'm working with indexes) and I'm trying to filter two columns of a df by piping the results of the first filter into the second and don't understand why the below doesn't work:
stripcols <- c("","Total+")
df <- df %>%
filter(!df[,1] %in% stripcols) %>%
filter(!df[,2] %in% stripcols)
Running this results in:
Error in filter_impl(.data, quo) : Result must have length 46, not 58
This is easily worked around by running the filter twice, but I don't understand why this didn't work.
I'm also curious as to whether there is a way to do this with one filter command that is applied on both columns rather than two.
The source of the error is that you are always comparing against nrow(df) rows regardless of how many rows hit the second filter. For instance:
dat <- data.frame(a=1:10)
dat %>% filter(a > 5)
# a
# 1 6
# 2 7
# 3 8
# 4 9
# 5 10
The way you're writing it, you're doing
dat %>% filter(dat[,1] > 5)
# a
# 1 6
# 2 7
# 3 8
# 4 9
# 5 10
For this first call, the number of rows that go into filter is 10, and the number of rows being compared inside filter is also 10. However, if you were to do:
dat %>% filter(dat[,1] > 5) %>% filter(dat[,1] > 7)
# Error in filter_impl(.data, quo) : Result must have length 5, not 10
this fails because the number of rows going into the second filter is only 5, not 10, even though we are giving the filter command 10 comparisons by using dat[,1].
(N.B.: many comments about names are perfectly appropriate, but let's continue with the theme of using column indices.)
The first trick is to give each filter only as many comparisons as the data coming in; another way to say this is to do the comparisons on the state of the data at that point in the pipe. magrittr (and therefore dplyr) does this with the . placeholder. The dot can always be inferred (it defaults to the first argument of the function on the right-hand side of %>%), but some feel that being explicit is better. For instance, this is legal:
mtcars %>%
group_by(cyl) %>%
tally()
# # A tibble: 3 x 2
# cyl n
# <dbl> <int>
# 1 4 11
# 2 6 7
# 3 8 14
but an explicit equivalent pipe is this:
mtcars %>%
group_by(., cyl) %>%
tally(.)
If the first argument to the function is not the frame itself, then the inferred-dot behavior of %>% will fail:
mtcars %>%
xtabs(~ cyl + vs)
# Error in as.data.frame.default(data, optional = TRUE) :
# cannot coerce class '"formula"' to a data.frame
(Because it is effectively calling xtabs(., ~ cyl + vs), and without named arguments xtabs assumes its first argument is the formula.)
so we must be explicit in these situations:
mtcars %>%
xtabs(~ cyl + vs, data = .)
# vs
# cyl 0 1
# 4 1 10
# 6 3 4
# 8 14 0
(contrived example, granted). One could also do mtcars %>% xtabs(formula = ~ cyl + vs), but my point stands.
So to adapt your code, I would expect this to work:
df %>%
filter(!.[,1] %in% stripcols) %>%
filter(!.[,2] %in% stripcols)
I think I'd prefer the [[ approach (partly because I know that tbl_df and data.frame deal with [,1] slightly differently ... and though [,1] works here, I still prefer the explicitness of [[):
df %>%
filter(!.[[1]] %in% stripcols) %>%
filter(!.[[2]] %in% stripcols)
which should work. Of course, combining works just fine, too:
df %>%
filter(!.[[1]] %in% stripcols, !.[[2]] %in% stripcols)
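As a postscript, newer dplyr (if_all() is assumed here, introduced in dplyr 1.0.4) can express the same condition over both columns in one call:
# keep rows where neither of the first two columns matches stripcols
df %>%
  filter(if_all(1:2, ~ !.x %in% stripcols))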
Well, I know that there are already tons of related questions, but none gave an answer to my particular need.
I want to use dplyr "summarize" on a table with 50 columns, and I need to apply different summary functions to these.
"Summarize_all" and "summarize_at" both seem to have the disadvantage that it's not possible to apply different functions to different subgroups of variables.
As an example, let's assume the iris dataset would have 50 columns, so we do not want to address columns by names. I want the sum over the first two columns, the mean over the third and the first value for all remaining columns (after a group_by(Species)). How could I do this?
Fortunately, there is a much simpler way available now.
With the new dplyr 1.0.0 coming out soon, you can leverage the across function for this purpose.
All you need to type is:
iris %>%
  group_by(Species) %>%
  summarize(
    # I want the sum over the first two columns,
    across(c(1, 2), sum),
    # the mean over the third
    across(3, mean),
    # the first value for all remaining columns (after a group_by(Species))
    across(-c(1:3), first)
  )
Great, isn't it?
I first thought across() was not necessary, as the scoped variants worked just fine, but this use case is exactly where across() proves very beneficial.
You can get the development version of dplyr with devtools::install_github("tidyverse/dplyr").
As other people have mentioned, this is normally done by calling summarize_each / summarize_at / summarize_if for every group of columns that you want to apply a summarizing function to. As far as I know, you would have to create a custom function that applies each summarizing function to its own subset of columns. You can, for example, set the colnames in such a way that you can use the select helpers (e.g. contains()) to pick just the columns that you want to apply the function to. If not, then you can specify the column numbers that you want to summarize.
For the example you mentioned, you could try the following:
summarizer <- function(tb, colsone, colstwo, colsthree,
                       funsone, funstwo, funsthree, group_name) {
  return(bind_cols(
    summarize_all(select(tb, colsone), .funs = funsone),
    summarize_all(select(tb, colstwo), .funs = funstwo) %>%
      ungroup() %>% select(-matches(group_name)),
    summarize_all(select(tb, colsthree), .funs = funsthree) %>%
      ungroup() %>% select(-matches(group_name))
  ))
}
# With colnames
iris %>% as_tibble() %>%
  group_by(Species) %>%
  summarizer(colsone = contains("Sepal"),
             colstwo = matches("Petal.Length"),
             colsthree = c(-contains("Sepal"), -matches("Petal.Length")),
             funsone = "sum",
             funstwo = "mean",
             funsthree = "first",
             group_name = "Species")
# With indexes
iris %>% as_tibble() %>%
  group_by(Species) %>%
  summarizer(colsone = 1:2,
             colstwo = 3,
             colsthree = 4,
             funsone = "sum",
             funstwo = "mean",
             funsthree = "first",
             group_name = "Species")
You could summarise the data with each function separately and then join the data later if needed.
So something like this for the iris example:
sums <- iris %>% group_by(Species) %>% summarise_at(1:2, sum)
means <- iris %>% group_by(Species) %>% summarise_at(3, mean)
firsts <- iris %>% group_by(Species) %>% summarise_at(4, first)
full_join(sums, means) %>% full_join(firsts)
Though I would try to think of something else if there are more than a handful of summarising functions you need to use.
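If the list of summaries does grow, one compact option, sketched here under the assumption that every summary table shares the Species key, is to Reduce() over full_join():
# join an arbitrary list of per-group summaries on the common key
summaries <- list(sums, means, firsts)
Reduce(function(a, b) full_join(a, b, by = "Species"), summaries)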
Try this:
library(plyr)
library(dplyr)
dataframe <- data.frame(var = c(1,1,1,2,2,2),var2 = c(10,9,8,7,6,5),var3=c(2,3,4,5,6,7),var4=c(5,5,3,2,4,2))
dataframe
# var var2 var3 var4
#1 1 10 2 5
#2 1 9 3 5
#3 1 8 4 3
#4 2 7 5 2
#5 2 6 6 4
#6 2 5 7 2
funnames <- c(sum, mean, first)
colnums <- c(2, 3, 4)
ddply(.data = dataframe, .variables = "var",
      function(x, funcs, inds) {
        mapply(function(func, ind) {
          func(x[, ind])
        }, funcs, inds)
      }, funnames, colnums)
# var V1 V2 V3
#1 1 27 3 5
#2 2 18 6 2
See this - feature coming soon
I'm trying to use dplyr with my own function which summarises a data frame to a single value. In the example below, my_func counts the number of missing values. I could do this specific case another way, but I'm interested in knowing how to do this generally. I need this to work with grouped data. I thought something like this might work:
my_func <- function(df) {
return(sum(is.na(df)))
}
data("airquality")
airquality %>% group_by(Month) %>% summarise(my_func(.))
## # A tibble: 5 × 2
## Month `my_func(.)`
## <int> <int>
## 1 5 44
## 2 6 44
## 3 7 44
## 4 8 44
## 5 9 44
But it seems . is the whole data frame, not the individual groups.
dplyr::do can get the correct data frame:
airquality %>% group_by(Month) %>% do(data.frame(m = my_func(.)))
## Source: local data frame [5 x 2]
## Groups: Month [5]
##
## Month m
## <int> <int>
## 1 5 9
## 2 6 21
## 3 7 5
## 4 8 8
## 5 9 1
But this seems like a hack. It's also not consistent with summarise, because the output from do is still a grouped data frame.
Essentially, my question is: can I pass the correct data frame (respecting groups) to my function from within summarise?
After some further checks, it seems that the problem lies with the use of . in summarise. For example, the following works for a single variable:
airquality %>% group_by(Month) %>% summarize(my_func(Ozone))
yet this one doesn't:
airquality %>% group_by(Month) %>% summarize(my_func(.$Ozone))
Similarly, explicitly creating a data.frame with all the variables gives the desired output:
airquality %>%
  group_by(Month) %>%
  summarize(NAs = my_func(data.frame(Ozone, Solar.R, Wind, Temp, Month, Day)))
so if you insist on using dplyr, you'll need a workaround like that one (or use do as you already mentioned). I believe it's the same bug that has been reported here: dplyr Issue #2752.
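For readers on current dplyr, this now has direct support: pick() (assuming dplyr >= 1.1.0, where it was introduced) hands your function the current group's columns as a data frame inside summarise():
airquality %>%
  group_by(Month) %>%
  # pick(everything()) returns the current group's non-grouping columns
  summarise(NAs = my_func(pick(everything())))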
So, I think you can use the following structure, splitting the data by Month and applying my_func to each piece:
num_missing <- sapply(split(airquality, airquality$Month), my_func)
You can also use the (now superseded) summarise_each() to apply my_func to every column within each group:
object <- airquality %>% group_by(Month) %>% summarise_each(funs(my_func))
I hope this helps you!
If you don't mind using the plyr package, that seems to produce the desired output:
plyr::ddply(.data = airquality, .variables = ~ Month, .fun = my_func)
I want to find the rank correlation of various columns in a data.frame using dplyr.
I am sure there is a simple solution to this problem, but I think the problem lies in my not being able to pass two inputs to summarize_each_ in dplyr when using the cor function.
For the following df:
df <- data.frame(Universe=c(rep("A",5),rep("B",5)),AA.x=rnorm(10),BB.x=rnorm(10),CC.x=rnorm(10),AA.y=rnorm(10),BB.y=rnorm(10),CC.y=rnorm(10))
I want to get the rank correlations between all the .x and the .y combinations. My problem in the function below where you see ????
cor <- df %>% group_by(Universe) %>%
summarize_each_(funs(cor(.,method = 'spearman',use = "pairwise.complete.obs")),????)
I want cor to just include the correlation pairs: AA.x.AA.y , AA.x,BB.y, ... for each Universe.
Please help!
An alternative approach is to just call the cor function once since this will calculate all required correlations. Repeated calls to cor might be a performance issue for a large data set. Code to do this and extract the correlation pairs with labels could look like:
#
# calculate correlations and display in matrix format
#
cor_matrix <- df %>% group_by(Universe) %>%
  do(as.data.frame(cor(.[,-1], method = "spearman", use = "pairwise.complete.obs")))
#
# to add row names
#
cor_matrix1 <- cor_matrix %>%
  data.frame(row = rep(colnames(.)[-1], n_groups(.)))
#
# calculate correlations and display in column format
#
library(reshape2)   # melt() is assumed to come from reshape2 here
num_col <- ncol(df[,-1])
out_indx <- which(upper.tri(diag(num_col)))
cor_cols <- df %>% group_by(Universe) %>%
  do(melt(cor(.[,-1], method = "spearman", use = "pairwise.complete.obs"), value.name = "cor")[out_indx,])
So here follows the winning (time-wise) solution to my problem:
library(tidyr)   # gather() and unite()
d <- df %>%
  gather(R1, R1v, contains(".x")) %>%
  gather(R2, R2v, contains(".y"), -Universe) %>%
  group_by(Universe, R1, R2) %>%
  summarize(ICAC = cor(x = R1v, y = R2v, method = "spearman", use = "pairwise.complete.obs")) %>%
  unite(Pair, R1, R2, sep = "_")
Albeit only about 0.005 milliseconds on this example, the timing grows as data is added.
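For completeness, the same chain translates to current tidyr/dplyr (pivot_longer() is assumed, tidyr >= 1.0.0) like this:
df %>%
  pivot_longer(ends_with(".x"), names_to = "R1", values_to = "R1v") %>%
  pivot_longer(ends_with(".y"), names_to = "R2", values_to = "R2v") %>%
  group_by(Universe, R1, R2) %>%
  summarise(ICAC = cor(R1v, R2v, method = "spearman", use = "pairwise.complete.obs"),
            .groups = "drop") %>%
  unite(Pair, R1, R2, sep = "_")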
Try this:
library(data.table) # needed for fast melt
setDT(df) # sets by reference, fast
mdf <- melt(df[, id := 1:.N], id.vars = c('Universe','id'))
mdf %>%
  mutate(obs_set = substr(variable, 4, 4)) %>%               # ".x" or ".y" subgroup
  full_join(., ., by = c('Universe', 'obs_set', 'id')) %>%   # see notes
  group_by(Universe, variable.x, variable.y) %>%
  filter(variable.x != variable.y) %>%
  dplyr::summarise(rank_corr = cor(value.x, value.y,
                                   method = 'spearman', use = 'pairwise.complete.obs'))
Produces:
Universe variable.x variable.y rank_corr
(fctr) (fctr) (fctr) (dbl)
1 A AA.x BB.x -0.9
2 A AA.x CC.x -0.9
3 A BB.x AA.x -0.9
4 A BB.x CC.x 0.8
5 A CC.x AA.x -0.9
6 A CC.x BB.x 0.8
7 A AA.y BB.y -0.3
8 A AA.y CC.y 0.2
9 A BB.y AA.y -0.3
10 A BB.y CC.y -0.3
.. ... ... ... ...
Explanation:
Melt: converts the table to long form, one row per observation. To do the melt in a dplyr chain, you would have to use tidyr::gather, I believe, so pick your dependency. Using data.table for it is faster and not hard to understand. The step also creates an id for each observation, 1 to nrow(df). The rest is in dplyr, like you wanted.
Full join: joins the melted table to itself to create paired observations from all variable pairings based on common Universe and observation id (edit: and now '.x' or '.y' subgroup).
Filter: we don't need to correlate observations paired to themselves, we know those correlations = 1. If you wanted to include them for a correlation matrix or something, comment out this step.
Summarize using Spearman correlation. Note that you should write dplyr::summarise explicitly, since if you also have plyr loaded you might accidentally call plyr::summarise.