I know how to take a random sample from each group of a data frame using sample_n or sample_frac in dplyr, which can go like this:
dataset %>%
group_by(user_id) %>%
sample_n(10)
However, I have a slightly different question. I want to take a random sample from the whole dataset. It should be as simple as this:
sample_n(dataset,10)
But because I used group_by on the dataset in a previous step, the grouping still seems to take effect here: the second command behaves exactly like the first. How can I remove the effect of group_by and get a random sample from the whole dataset?
We can use ungroup() to remove any grouping variables and then apply sample_n:
dataset %>%
group_by(user_id) %>%
ungroup() %>%
sample_n(10)
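To see the difference, here is a minimal sketch using a hypothetical stand-in for `dataset` (the `user_id` and `value` columns are made up for illustration); group_vars() shows whether a grouping is still active:

```r
library(dplyr)

# Hypothetical stand-in for `dataset`: 5 users, 20 rows each
dataset <- tibble(user_id = rep(1:5, each = 20), value = rnorm(100))

grouped <- dataset %>% group_by(user_id)
group_vars(grouped)   # "user_id": sample_n() here would draw 10 rows per user

sampled <- grouped %>% ungroup() %>% sample_n(10)
nrow(sampled)         # 10 rows total, drawn from the whole data set
```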
Related
I have 500 columns. One is a categorical variable with 3 categories and the rest are continuous variables. There are 50 rows. How do I group the data frame by the categorical variable and take the mean, for every continuous column, of the observations that fall within each category? Also, remove all NA values. I want to create a new CD from this info.
Best,
Henry
When posting to SO, please include a reproducible example of your data (dput is helpful for this). As it is, I can only guess at the structure of your data.
I like doing general grouping/summarising operations with dplyr. Using iris as an example, you might be able to do something like this:
library(dplyr)
library(tidyr)
data(iris)
iris %>%
drop_na() %>%
group_by(Species) %>%
summarise_all(mean)
summarise_all automatically uses all non-grouping columns and applies the function you supply.
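Since the question also asks to remove NAs, note that summarise_all forwards extra arguments to the function, so passing na.rm = TRUE is an alternative to calling drop_na() first. A minimal sketch with iris:

```r
library(dplyr)
data(iris)

# Alternative to drop_na(): let mean() itself skip any NA values
means_by_species <- iris %>%
  group_by(Species) %>%
  summarise_all(mean, na.rm = TRUE)
means_by_species
```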
Note, if you use the development version of dplyr, you could also do something like
iris %>%
group_by(Species) %>%
summarise(across(where(is.numeric), mean))
since summarise_all is being superseded in favor of across().
I am trying to understand the way the group_by function works in dplyr. I am using the airquality dataset, which comes with the datasets package.
I understand that if I do the following, it should arrange the records in increasing order of the Temp variable:
airquality_max1 <- airquality %>% arrange(Temp)
I see that is the case in airquality_max1. I now want to arrange the records by increasing order of Temp but grouped by Month. So the end result should first have all the records for Month == 5 in increasing order of Temp. Then it should have all records of Month == 6 in increasing order of Temp and so on, so I use the following command
airquality_max2 <- airquality %>% group_by(Month) %>% arrange(Temp)
However, what I find is that the results are still in increasing order of Temp only, not grouped by Month, i.e., airquality_max1 and airquality_max2 are equal.
I am not sure why the grouping by Month does not happen before the arrange function. Can anyone help me understand what I am doing wrong here?
More than the problem of trying to sort the data frame by columns, I am trying to understand the behavior of group_by as I am trying to use this to explain the application of group_by to someone.
arrange() ignores group_by(); see the breaking changes in dplyr 0.5.0. If you need to order by two columns, you can do:
airquality %>% arrange(Month, Temp)
For a grouped data frame, you can also set .by_group = TRUE to sort by the grouping variables first:
airquality %>% group_by(Month) %>% arrange(Temp, .by_group = TRUE)
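As a quick sketch of why the two forms are interchangeable here, both sort by Month first and then by Temp, so they produce the same row order:

```r
library(dplyr)
data(airquality)

by_both  <- airquality %>% arrange(Month, Temp)
by_group <- airquality %>% group_by(Month) %>% arrange(Temp, .by_group = TRUE)

# Both approaches should yield the same row order
identical(by_both$Temp, ungroup(by_group)$Temp)
```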
library(tidyverse)
library(ggmosaic) # for the "happy" dataset
I feel like this should be a somewhat simple thing to achieve, but I'm having difficulty with percentages when using purrr::map together with table(). Using the "happy" dataset, I want to create a list of frequency tables for each factor variable. I would also like to have rounded percentages instead of counts, or both if possible.
I can create frequency percentages for each factor variable separately with the code below.
with(happy,round(prop.table(table(marital)),2))
However I can't seem to get the percentages to work correctly when using table() with purrr::map. The code below doesn't work...
happy %>% select_if(is.factor) %>% map(round(prop.table(table)), 2)
The second method I tried was using tidyr::gather, and calculating the percentage with dplyr::mutate and then splitting the data and spreading with tidyr::spread.
TABLE <- happy %>% select_if(is.factor) %>% gather() %>% group_by(key, value) %>% summarise(count = n()) %>% mutate(perc = count/sum(count))
However, since there are different factor variables, I would have to split the data by "key" before spreading using purrr::map and tidyr::spread, which came close to producing some useful output except for the repeating "key" values in the rows and the NA's.
TABLE %>% split(TABLE$key) %>% map(~spread(.x, value, perc))
So any help on how to make both of the above methods work would be greatly appreciated...
You can use an anonymous function or a formula to get your first option to work. Here's the formula option.
happy %>%
select_if(is.factor) %>%
map(~round(prop.table(table(.x)), 2))
In your second option, removing the NA values and then removing the count variable prior to spreading helps. The order in the result has changed, however.
TABLE = happy %>%
select_if(is.factor) %>%
gather() %>%
filter(!is.na(value)) %>%
group_by(key, value) %>%
summarise(count = n()) %>%
mutate(perc = round(count/sum(count), 2), count = NULL)
TABLE %>%
split(.$key) %>%
map(~spread(.x, value, perc))
I've got a lot of code written in dplyr 0.4.3, that relied on the grouped arrange() function. As of the 0.5 release, arrange no longer applies grouping.
This decision baffles me, as this makes arrange() inconsistent with other dplyr verbs, and surely a user could just ungroup() before arrange() if ungrouped is required. I would have hoped for perhaps a parameter in arrange() to retain grouped_by behavior, but alas!
I therefore have to rewrite my grouped arrange. At this point, my only option seems to be to break up the pipe at the arrange call, loop through the groups and arrange group by group, and then bind the results together again. I'm hoping there might be a more elegant solution?
Below is an MRE; I'd like to run a cumsum on wt per group_by(cyl). Many thanks for ideas/suggestions.
library(dplyr)
mtcars %>%
group_by(cyl) %>%
arrange(desc(mpg)) %>%
mutate(WtCum = cumsum(wt))
To order within groups in dplyr 0.5, add the grouping variable before the other ordering variables within arrange.
mtcars %>%
group_by(cyl) %>%
arrange(cyl, desc(mpg)) %>%
mutate(WtCum = cumsum(wt))
If you want to keep around an “old arrange”, you may use this snippet:
arrange_old <- function(.data, ...) {
dplyr::arrange_(.data, .dots = c(groups(.data), lazyeval::lazy_dots(...)))
}
This will respect grouping by basically prepending the group variables to the new arrange call.
Then you can do:
mtcars %>%
group_by(cyl) %>%
arrange_old(desc(mpg))
For what it's worth, I've also found this change confusing and unintuitive, and I keep making the mistake of forgetting to explicitly specify the grouping.
I am trying to summarise the value of one variable after splitting the data with group_by from the dplyr package. The following code works fine and the output is listed below, but I cannot substitute summarise_each with summarise even though only one column needs to be calculated. I wonder why?
iris %>% group_by(Species) %>% select(one_of('Sepal.Length')) %>%
summarise_each(funs(mean(.)))
If I do, I get output like "S3:lazy".
summarize and summarize_each work quite differently. summarize is in fact simpler: just specify the expression directly.
iris %>%
group_by(Species) %>%
select(Sepal.Length) %>%
summarize(Sepal.Length = mean(Sepal.Length))
You can choose any name for the output column, it doesn’t need to be the same as the input.
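For instance, here is a small sketch using a different (made-up) output name, mean_length, for the same summary:

```r
library(dplyr)
data(iris)

# The output column name is arbitrary; mean_length is just an illustrative choice
sepal_means <- iris %>%
  group_by(Species) %>%
  summarize(mean_length = mean(Sepal.Length))
sepal_means
```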