I have a dataset that I want to group_by() and generate a histogram for each group. My current code is as follows:
df %>%
group_by(x2) %>%
with(hist(x3,breaks = 50))
This, however, generates a single histogram of the entirety of x3 rather than one histogram per group of x2. Here is some example data:
df = data.frame(x1 = rep(c(1998,1999,2000),9),
x2 = rep(c(1,1,1,2,2,2,3,3,3),3),
x3 = rnorm(27,.5))
Desired output: one histogram of x3 per group of x2. Actual output: a single histogram of all of x3.
My comment about do is dated, I guess. ?do points us to the current ?group_walk:
df %>%
group_by(x2) %>%
group_walk(~ hist(.x$x3))
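If you also want to keep the breaks = 50 from your original call and label each plot with its group, a small sketch along these lines should work (inside group_walk, .x is the group's data and .y is its key):
df %>%
  group_by(x2) %>%
  group_walk(~ hist(.x$x3, breaks = 50, main = paste("x2 =", .y$x2)))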
In versions of dplyr < 0.8.0, there is no group_walk, so you can use do:
df %>%
group_by(x2) %>%
do(h = hist(.$x3))
Assuming you only want the side effects of hist (the plotted histogram), not the returned values, you can add %>% invisible() to the end of the chain so the resulting tibble is not printed.
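For example, a minimal sketch of that chain with the printout suppressed:
df %>%
  group_by(x2) %>%
  do(h = hist(.$x3)) %>%
  invisible()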
I think it's time to advance to ggplot, for instance:
library(tidyverse)
df %>%
  ggplot(aes(x = x3)) +
  geom_histogram(bins = 50) +
  facet_wrap(~x2) # optional: use argument ncol = 1
You can use the split.data.frame command to split the data by category; after that, you run hist on each element of the resulting list of data frames.
list_df <- split.data.frame(df, f = df$x2)  # one data frame per level of x2
par(mfrow = c(length(list_df), 1))          # stack the plots vertically
for (lnam in names(list_df)) {
  hist(list_df[[lnam]][, "x3"])
}
I really like @Gregor's answer with group_walk, but it's still listed as experimental in dplyr v0.8.0.1. If you want to avoid working with functions that may break later, I'd use base split, then purrr::walk. I'm using walk and plot to avoid all the text printout that hist gives.
library(dplyr)
library(purrr)
df %>%
  split(.$x2) %>%
  walk(~ hist(.$x3, plot = FALSE) %>% plot())
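If you'd rather see all the histograms on one page instead of cycling through them, you could set up a base-graphics grid first, similar to the par() call in the split.data.frame answer above (a sketch, assuming x2 has the three levels from the example data):
par(mfrow = c(1, length(unique(df$x2))))
df %>%
  split(.$x2) %>%
  walk(~ hist(.$x3, plot = FALSE) %>% plot())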
I have a dataframe ("df") with a number of columns that I would like to estimate the weighted means of, weighting by population (df$Population), and grouping by commuting zone (df$cz).
This is the list of columns I would like to estimate the weighted means of:
vlist = c("Public_Welf_Total_Exp", "Welf_Cash_Total_Exp", "Welf_Cash_Cash_Assist", "Welf_Ins_Total_Exp","Total_Educ_Direct_Exp", "Higher_Ed_Total_Exp", "Welf_NEC_Cap_Outlay","Welf_NEC_Direct_Expend", "Welf_NEC_Total_Expend", "Total_Educ_Assist___Sub", "Health_Total_Expend", "Total_Hospital_Total_Exp", "Welf_Vend_Pmts_Medical","Hosp_Other_Total_Exp","Unemp_Comp_Total_Exp", "Unemp_Comp_Cash___Sec", "Total_Unemp_Rev", "Hous___Com_Total_Exp", "Hous___Com_Construct")
This is the code I have been using:
df = df %>% group_by(cz) %>% mutate_at(vlist, weighted.mean(., df$Population))
I have also tried:
df = df %>% group_by(cz) %>% mutate_at(vlist, function(x) weighted.mean(x, df$Population))
As well as tested the following code on only 2 columns:
df = df %>% group_by(cz) %>% mutate_at(vars(Public_Welf_Total_Exp, Welf_Cash_Total_Exp), weighted.mean(., df$Population))
However, everything I have tried gives me the following error, even though there are no NAs in any of my variables:
Error in weighted.mean.default(., df$Population) :
'x' and 'w' must have the same length
I understand that I could do the following estimation using lapply, but I don't know how to group by another variable using lapply. I would appreciate any suggestions!
There is a lot to unpack here...
Probably you mean summarise instead of mutate, because with mutate you would just replicate your result for each row.
mutate_at and summarise_at are superseded; you should use across instead.
The reason your code wasn't working is that you did not write your function as a formula (you did not add ~ at the beginning), and you used df$Population instead of Population. When you write Population, summarise knows you are talking about the column Population, which at that point is grouped like the rest of the data frame. When you use df$Population, you are calling the column of the original data frame without any grouping. Not only is that conceptually wrong, it also throws an error, because the length of the variable you are trying to average and the length of the weights provided by df$Population do not match.
Here is how you could do it:
library(dplyr)
df %>%
  group_by(cz) %>%
  summarise(across(all_of(vlist), ~ weighted.mean(.x, Population)),
            .groups = "drop")
If you really need to use summarise_at (and probably you are using an old version of dplyr [lower than 1.0.0]), then you could do:
df %>%
group_by(cz) %>%
summarise_at(vlist, ~weighted.mean(., Population)) %>%
ungroup()
For df and vlist, I assumed the following:
vlist <- c("Public_Welf_Total_Exp", "Welf_Cash_Total_Exp", "Welf_Cash_Cash_Assist", "Welf_Ins_Total_Exp","Total_Educ_Direct_Exp", "Higher_Ed_Total_Exp", "Welf_NEC_Cap_Outlay","Welf_NEC_Direct_Expend", "Welf_NEC_Total_Expend", "Total_Educ_Assist___Sub", "Health_Total_Expend", "Total_Hospital_Total_Exp", "Welf_Vend_Pmts_Medical","Hosp_Other_Total_Exp","Unemp_Comp_Total_Exp", "Unemp_Comp_Cash___Sec", "Total_Unemp_Rev", "Hous___Com_Total_Exp", "Hous___Com_Construct")
df <- as.data.frame(matrix(rnorm(length(vlist) * 100), ncol = length(vlist)))
names(df) <- vlist
df$cz <- rep(letters[1:10], each = 10)
df$Population <- runif(100)
I am looking for a way to bin the values of a column into percentiles. The data looks similar to this, but with more complex values in column E:
data.frame(Date=c(rep("2010-01-31", 60), rep("2010-02-28", 60)), E=c(rep(1:20, 6)))
The data should be grouped by the Date variable. The percentile brackets are then to be used to create a histogram like the one attached below. If you could kindly also help me with code that does that, it would be great.
Do you mean something along the lines of:
df <- df %>%
group_by(Date) %>%
mutate(first = quantile(E,0.5),
second = quantile(E,0.95))
With data.table:
setDT(df)
df[,c("first","second") := list(quantile(E,0.5),quantile(E,0.95)), by = "Date"]
I probably need an ifelse statement similar to this expanded to include all percentiles.
CombData <- CombData %>%
group_by(Date) %>%
mutate(E_P = ifelse(E
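If the goal is to assign every value of E to its percentile bucket within each Date (rather than spelling out one ifelse per bracket), a sketch using dplyr's ntile() may be closer to what you need; ntile(E, 100) splits each group into 100 roughly equal-sized buckets, and percent_rank(E) would give the rank as a proportion instead:
CombData <- CombData %>%
  group_by(Date) %>%
  mutate(E_P = ntile(E, 100)) %>%
  ungroup()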
I am pretty sure something like this has already been asked, but I don't know how to search for it.
I often get data in a wide format like in my little example with 3 experiments (a-c). I normally reshape to long format and transform the values with some function (here log2 as an example).
What I often want to do is to plot all experiments against each other and here I am looking for a handy solution. How can I convert my data frame to get facets for example with a~b, a~c and b~c...
So far I tidyr::spread() the data again and run the ggplot command three times with the individual column names as x and y. Later I merge the individual graphs together.
Is there a more convenient way?
library(dplyr)
library(tidyr)
library(ggplot2)
df <- data.frame(
names=letters,
a=1:26,
b=1:13,
c=11:36
)
df %>%
tidyr::gather(experiment, value, -names) %>%
mutate(log2.value=log2(value))
EDIT
Since I got a very useful answer from @hdkrgr, I adapted my code a bit. The inner_join was a great trick which I can use to automate my idea; what I still miss is a clever filter to get rid of the redundant data, since I don't want to plot c~c or b~a if I already plot a~b.
I solved this for now by providing the pairings I want, but can anyone think of a straightforward solution? I couldn't come up with something that gives me the unique pairings.
my_pairs <- c('a vs. b', 'a vs. c', 'b vs. c')
df %>%
as_tibble() %>%
tidyr::gather(experiment, value, -names) %>%
mutate(log2.value=log2(value)) %>%
inner_join(., ., by=c("names")) %>%
mutate(pairing=sprintf('%s vs. %s', experiment.x, experiment.y)) %>%
filter(pairing %in% my_pairs) %>%
ggplot(aes(log2.value.x, log2.value.y)) +
geom_point() +
facet_wrap( ~ pairing, labeller=label_both)
One way starting from long format would be to do a self-join on the long-data in order to get all combinations of two experiments in each row:
df %>%
tidyr::gather(experiment, value, -names) %>%
mutate(log2.value=log2(value)) %>%
inner_join(., ., by=c("names")) %>%
ggplot(aes(log2.value.x, log2.value.y)) + geom_point() + facet_grid(experiment.y ~ experiment.x)
Edit: To avoid plotting redundant experiment-pairs, you can do:
df %>%
tidyr::gather(experiment, value, -names) %>%
mutate(log2.value=log2(value)) %>% inner_join(., ., by=c("names")) %>%
filter(experiment.x < experiment.y) %>%
ggplot(aes(log2.value.x, log2.value.y)) + geom_point() + facet_wrap(~experiment.y + experiment.x)
This is really interesting because it's actually more complex than it first seems. One thing that sticks out is getting unique pairs of experiments—it seems like you'd want a vs b but not necessarily b vs a as well. To do that, you need the unique set of experiment pairs.
Initially, I tried to work from your gathered data, but realized it might be simpler to start from the wide version. Take the names of the experiments from the column names—you can do this multiple ways, but I just took the strings that aren't "names"—and get the combinations of them. I pasted them together to make them a little easier to work with.
library(dplyr)
library(tidyr)
library(ggplot2)
df <- data.frame(
names=letters,
a=1:26,
b=1:13,
c=11:36
) %>%
as_tibble()
exp <- stringr::str_subset(names(df), "names", negate = T)
pairs <- combn(exp, 2, paste, simplify = F, collapse = ",") %>%
unlist()
pairs
#> [1] "a,b" "a,c" "b,c"
Then, for each pair, extract the associated column names, do a little tidyeval to select those columns, do the log2 transform that you had. I had to detour here to rename the columns with something I could refer back to—I think this isn't necessary, but I couldn't get my tidyeval working inside the ggplot aes. Someone else might have an idea on that. Then make your plot, and label the axes and title accordingly. That leaves you with a list of 3 plots.
plots <- purrr::map(pairs, function(pair) {
cols <- strsplit(pair, split = ",", fixed = T)[[1]]
df %>%
select(names, !!cols[1], !!cols[2]) %>%
mutate_at(vars(-names), log2) %>%
rename(exp1 = !!cols[1], exp2 = !!cols[2]) %>%
ggplot(aes(x = exp1, y = exp2)) +
geom_point() +
labs(x = cols[1], y = cols[2], title = pair)
})
Use your method of choice to put the plots together however you want. I went with cowplot, but I also like the patchwork package.
cowplot::plot_grid(plotlist = plots, nrow = 1)
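If you prefer patchwork, the equivalent should be something along these lines:
patchwork::wrap_plots(plots, nrow = 1)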
This is probably not what you want, but if the purpose is to explore the correlation pattern between the variables, you may want to consider ggpairs from the GGally package. It provides not only scatter plots, but also correlation coefficients and distributions.
library(GGally)
ggpairs(df[, c("a", "b", "c")])
You could start by creating all combinations via combn and then work your way through:
library(purrr)
t(combn(names(df)[-1], 2)) %>% ## get all combinations
as.data.frame(stringsAsFactors = FALSE) %>%
mutate(l = paste(V1, V2, sep = " vs. ")) %>%
pmap_dfr(function(V1, V2, l)
df %>%
select(one_of(c(V1, V2))) %>% ## select the elements given by the combination
mutate_all(log2) %>%
setNames(c("x", "y")) %>%
mutate(experiment = l)) %>%
ggplot(aes(x, y)) + geom_point() + facet_wrap(~experiment)
I have a large data frame in which every row contains enough data to calculate a correlation using specific columns of that row, and I want to add a new column containing the calculated correlations.
Here is a summary of what I would like to do (this one using dplyr):
example_data %>%
mutate(pearsoncor = cor(x = X001_F5_000_A:X030_F5_480_C, y = X031_H5_000_A:X060_H5_480_C))
Obviously it is not working this way, as I get only NAs in the pearsoncor column. Does anyone have a suggestion? Is there an easy way to do this?
Example data frame
With tidyr, you can separately gather all the x- and y-variables you'd like to compare. You get a tibble containing the correlation coefficients and their p-values for every combination you provided.
library(dplyr)
library(tidyr)
example_data %>%
gather(x_var, x_val, X001_F5_000_A:X030_F5_480_C) %>%
gather(y_var, y_val, X031_H5_000_A:X060_H5_480_C) %>%
group_by(x_var, y_var) %>%
summarise(cor_coef = cor.test(x_val, y_val)$estimate,
p_val = cor.test(x_val, y_val)$p.value)
edit, update some years later:
library(tidyr)
library(purrr)
library(broom)
library(dplyr)
longley %>%
pivot_longer(GNP.deflator:Armed.Forces, names_to="x_var", values_to="x_val") %>%
pivot_longer(Population:Employed, names_to="y_var", values_to="y_val") %>%
nest(data=c(x_val, y_val)) %>%
mutate(cor_test = map(data, ~cor.test(.x$x_val, .x$y_val)),
tidied = map(cor_test, tidy)) %>%
unnest(tidied)
Here is a solution using the reshape2 package to melt() the data frame into long form so that each value has its own row. The original wide-form data has 60 values per row for each of the 6 genes, while the melted long-form data frame has 360 rows, one for each value. Then we can easily use summarize() from dplyr to calculate the correlations without loops.
library(reshape2)
library(dplyr)
names1 <- names(example_data)[4:33]
names2 <- names(example_data)[34:63]
example_data_longform <- melt(example_data, id.vars = c('Gene','clusterFR','clusterHR'))
example_data_longform %>%
group_by(Gene, clusterFR, clusterHR) %>%
summarize(pearsoncor = cor(x = value[variable %in% names1],
y = value[variable %in% names2]))
You could also generate more detailed results, as in Eudald's answer, using do():
detailed_r <- example_data_longform %>%
group_by(Gene, clusterFR, clusterHR) %>%
do(cor = cor.test(x = .$value[.$variable %in% names1],
y = .$value[.$variable %in% names2]))
This outputs a tibble with the cor column being a list with the results of cor.test() for each gene. We can use lapply() to extract output from the list.
lapply(detailed_r$cor, function(x) c(x$estimate, x$p.value))
I had the same problem a few days back, and I know loops are not optimal in R but that's the only thing I could think of:
df$r = rep(0,nrow(df))
df$cor_p = rep(0,nrow(df))
for (i in 1:nrow(df)) {
  ct = cor.test(as.numeric(df[i, cols_A]), as.numeric(df[i, cols_B]))
  df$r[i] = ct$estimate
  df$cor_p[i] = ct$p.value
}
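A loop-free alternative is sketched below; it assumes, as in the loop above, that cols_A and cols_B name all-numeric columns of df. Building a numeric matrix first keeps apply() from coercing a mixed data frame to character.
num <- as.matrix(df[, c(cols_A, cols_B)])  # all-numeric matrix, keeps column names
res <- apply(num, 1, function(row) {
  ct <- cor.test(row[cols_A], row[cols_B])
  c(r = unname(ct$estimate), cor_p = ct$p.value)
})
df$r     <- res["r", ]
df$cor_p <- res["cor_p", ]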
I am attempting to rewrite the following code using only piping (with the babynames data from the babynames package):
library(babynames)
library(dplyr)
myDF <- babynames %>%
group_by(year) %>%
summarise(totalBirthsPerYear = sum(n))
slice(myDF, seq(1, nrow(myDF), by = 20))
The closest I have gotten is this code (not working):
myDF <- babyNames %>%
group_by(year) %>%
summarise(totalBirthsPerYear = sum(n)) %>%
slice( XXX, seq(1, nrow(XXX), by = 20))
where XXX is meant to be passed via pipes to slice, but I'm stuck. Any help is appreciated.
You can reference the piped data at a different position in the function by using the dot (.). In your case:
myDF2 <- babynames %>%
group_by(year) %>%
summarize(totalBirthsPerYear = sum(n)) %>%
slice(seq(1, nrow(.), by = 20))
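Alternatively, since slice() is a dplyr verb, n() gives the number of rows at that point in the chain, which avoids the dot entirely:
myDF2 <- babynames %>%
  group_by(year) %>%
  summarize(totalBirthsPerYear = sum(n)) %>%
  slice(seq(1, n(), by = 20))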
Not sure if this should be opened as a separate question and answer, but in case anybody arrives here as I did, looking for the answer to the MULTIPLE in the title:
R: Using piping to pass a single argument to multiple locations in a function
Using the . from Andrew's answer in multiple places also achieves this.
Example: to get the last element of the vector vec <- c("first", "middle", "last"), we could use this code:
vec[length(vec)]
Using piping, the following code achieves the same thing:
vec %>% .[length(.)]
Hopefully this is helpful to others as it would have helped me (I knew about the . but couldn't get it working in multiple locations).
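One caveat worth noting: if the dot only appears inside nested function calls, magrittr will still insert the left-hand side as the first argument of the outer call. Wrapping the expression in braces suppresses that, as in this small sketch:
vec %>% {paste(.[1], .[length(.)])}  # "first last"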