For example, I want to extract and add all variables based on the minimal value of one variable (i.e. year in gapminder nested by country).
library(tidyverse)
library(gapminder)   # provides the gapminder dataset
data("gapminder")
gap_nested <- gapminder %>%
  nest(data = -country) %>%
  mutate(year = map(data, ~ min(.x$year)))
How do I do this?
You can use the filter function from the dplyr package (included in tidyverse), as in this example:
gap_nested <- gapminder %>%
  nest(data = -country) %>%
  mutate(year = map_int(data, ~ min(.x$year))) %>%
  filter(year == 1960)
This will return only the countries whose minimum year equals 1960; map_int() is used here so that year is a plain integer column that filter() can compare directly.
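If the goal is instead to pull every variable from the row where year is at its minimum for each country, here is a minimal sketch along the same nested lines (the name gap_min_year is just an illustration, not from the question):

library(tidyverse)
library(gapminder)

# For each country, keep the full row(s) with the smallest year,
# then unnest so all original variables come back as ordinary columns.
gap_min_year <- gapminder %>%
  nest(data = -country) %>%
  mutate(min_row = map(data, ~ slice_min(.x, year, n = 1))) %>%
  select(-data) %>%
  unnest(min_row)

Without nesting, gapminder %>% group_by(country) %>% slice_min(year, n = 1) gives the same rows more directly.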
Hope this helps.
I have the following question that I am trying to solve with R:
"For each year, first calculate the mean observed value for each country (to allow for settings where countries may have more than 1 value per year, note that this is true in this data set). Then rank countries by increasing MMR for each year.
Calculate the mean ranking across all years, extract the mean ranking for 10 countries with the lowest ranking across all years, and print the resulting table."
This is what I have so far:
dput(mmr)
tib2 <- mmr %>%
  group_by(country, year) %>%
  summarise(mean = mean(mmr)) %>%
  arrange(mean) %>%
  group_by(country)
tib2
My output is close to where I need it to be; I just need each country to have only one row (containing that country's mean ranking).
Here is the result (screenshot of the output omitted):
Thank you!
Just repeat the same analysis, but after summarising by (country, year), group by country alone and summarise again so you get one row per country:
tib2 <- mmr %>%
  group_by(country, year) %>%
  summarise(mean_mmr = mean(mmr)) %>%
  arrange(mean_mmr) %>%
  group_by(country) %>%
  summarise(mean_mmr = mean(mean_mmr)) %>%
  arrange(mean_mmr) %>%
  ungroup() %>%
  slice_min(mean_mmr, n = 10)
tib2
Not sure without the data, but does this work?
tib2 <- mmr %>%
  group_by(country, year) %>%
  summarise(mean1 = mean(mmr)) %>%
  ungroup() %>%
  group_by(year) %>%
  mutate(rank1 = rank(mean1)) %>%
  ungroup() %>%
  group_by(country) %>%
  summarise(rank = mean(rank1)) %>%
  ungroup() %>%
  arrange(rank) %>%
  slice_head(n = 10)
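As a side note, if tied MMR values can occur within a year, it matters that rank() gives ties their average rank by default; a small illustration (toy numbers, not from the question's data):

rank(c(10, 20, 20, 30))              # 1.0 2.5 2.5 4.0 (ties get the average rank)
dplyr::min_rank(c(10, 20, 20, 30))   # 1 2 2 4
dplyr::dense_rank(c(10, 20, 20, 30)) # 1 2 2 3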
I'm trying to calculate cumulative sums over a time span. Is there a way to calculate this within a step? Any package recommendations?
activate_2019 <- activate_rate %>%
  filter(grepl("2019", join_day)) %>%
  summarize(proportion = sum(if_activate) / n())

activate_2020 <- activate_rate %>%
  filter(grepl("2019|2020", join_day)) %>%
  summarize(proportion = sum(if_activate) / n())

activate_2021 <- activate_rate %>%
  filter(grepl("2019|2020|2021", join_day)) %>%
  summarize(proportion = sum(if_activate) / n())
Here is one method with tidyverse:
Extract the unique years from the 'join_day' column.
Loop over those and slice the rows of 'activate_rate' up to the last row whose 'join_day' matches the looped year.
Summarise by taking the mean of 'if_activate'.
Bind the output column-wise with map_dfc.
library(stringr)
library(dplyr)
library(purrr)
un1 <- str_extract_all(activate_rate$join_day, "\\d{4}") %>%
  unlist %>%
  unique %>%
  as.integer %>%
  sort
map_dfc(un1, ~ activate_rate %>%
  arrange(as.Date(join_day)) %>%
  slice(seq(max(grep(as.character(.x), join_day)))) %>%
  summarise(!!str_c("proportion", .x) := mean(if_activate)))
If I understand correctly, this should do the trick:
library(lubridate)   # for floor_date()

activate_rate %>%
  mutate(year = floor_date(join_day, unit = "year")) %>%
  group_by(year) %>%
  summarise(proportion = sum(if_activate) / n())
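Since the original code computes running ("cumulative") proportions (2019, then 2019-2020, then 2019-2021), another possibility is to do the running calculation directly with cumsum(). A minimal sketch, assuming join_day can be parsed as a date and if_activate is 0/1 or logical (column names are taken from the question; everything else is an assumption):

library(dplyr)
library(lubridate)

activate_rate %>%
  mutate(year = year(as.Date(join_day))) %>%
  group_by(year) %>%
  summarise(n = n(), activated = sum(if_activate)) %>%
  # running proportion over all rows up to and including each year
  mutate(proportion = cumsum(activated) / cumsum(n)) %>%
  select(year, proportion)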
Below are my data and code:
library(gapminder)
library(tidyverse)
lst <- unique(gapminder$continent)
ylst <- c(2007, 1952)
map2_dfr(lst, ylst, ~ gapminder %>%
  filter(continent == .x & year == .y) %>%
  arrange(desc(gdpPercap)) %>%
  slice(1) %>%
  select(continent, country, gdpPercap, year))
The data is the gapminder data from the R library 'gapminder'.
I want to find the country with the highest gdpPercap for each year for each continent using purrr.
However, this code gives me an error saying that the lengths of my two lists are not the same.
What is the map syntax to iterate over two lists, when the lengths are not the same? And how should I use that to fix the code and achieve my objective?
I would do this by grouping and nesting:
gapminder %>%
  filter(year %in% ylst) %>%
  group_by(continent, year) %>%
  nest() %>%
  mutate(data = map(data, ~ top_n(., 1, gdpPercap))) %>%
  unnest(c(data)) %>%
  select(continent, country, gdpPercap, year)
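If you do want to stay with purrr, one option is to first build every continent/year combination with tidyr::crossing(), so the two vectors you iterate over end up the same length, and then map over the pairs. A minimal sketch (the combos name and the use of slice_max() are my additions, not from the answer above):

library(tidyverse)
library(gapminder)

lst  <- unique(gapminder$continent)
ylst <- c(2007, 1952)

# Expand to all continent/year pairs so map2 gets equal-length inputs.
combos <- crossing(continent = lst, year = ylst)

map2_dfr(combos$continent, combos$year, ~ gapminder %>%
  filter(continent == .x, year == .y) %>%
  slice_max(gdpPercap, n = 1) %>%
  select(continent, country, gdpPercap, year))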
I'm trying to do a Wilcoxon test on long-formatted data. I want to use dplyr::group_by() to specify the subsets I'd like to do the test on.
The final result would be a new column with the p-value of the Wilcoxon test appended to the original data frame. All of the techniques I have seen require summarizing the data frame. I DO NOT want to summarize the data frame.
Please see an example reformatting the iris dataset to mimic my data, and finally my attempts to perform the task.
I am getting close, but I want to preserve all of my original data from before the Wilcoxon test.
# Reformatting iris to mimic my data.
library(tidyverse)   # for gather(), mutate(), str_extract(), etc.

long_format <- iris %>%
  gather(key = "attribute", value = "measurement", -Species) %>%
  mutate(descriptor =
           case_when(
             str_extract(attribute, pattern = "\\.(.*)") == ".Width" ~ "Width",
             str_extract(attribute, pattern = "\\.(.*)") == ".Length" ~ "Length")) %>%
  mutate(Feature =
           case_when(
             str_extract(attribute, pattern = "^(.*?)\\.") == "Sepal." ~ "Sepal",
             str_extract(attribute, pattern = "^(.*?)\\.") == "Petal." ~ "Petal"))
# Removing no longer necessary column.
cleaned_up <- long_format %>% select(-attribute)
# Attempt using do(), but I lose important info like "measurement".
cleaned_up %>%
  group_by(Species, Feature) %>%
  do(w = wilcox.test(measurement ~ descriptor, data = ., paired = FALSE)) %>%
  mutate(Wilcox = w$p.value)

# This is an attempt with the experimental dplyr group_map() function. If only I
# could make this a new column appended to the original df in one step.
cleaned_up %>%
  group_by(Species, Feature) %>%
  group_map(~ wilcox.test(measurement ~ descriptor, data = ., paired = FALSE)$p.value)
Thanks for your help.
The model object can be wrapped in a list:
library(tidyverse)
cleaned_up %>%
  group_by(Species, Feature) %>%
  nest() %>%
  mutate(model = map(data, ~ .x %>%
    transmute(w = list(wilcox.test(measurement ~ descriptor,
                                   data = ., paired = FALSE)))))
Or another option is to group_split() into a list, then map over the list elements and create the 'pval' column after applying the model:
cleaned_up %>%
  group_split(Species, Feature) %>%
  map_dfr(~ .x %>%
    mutate(pval = wilcox.test(measurement ~ descriptor,
                              data = ., paired = FALSE)$p.value))
Another option is to avoid the data argument entirely. The wilcox.test function only requires a data argument when the variables being tested aren't in the calling scope, but functions called within mutate have all the columns from the data frame in scope.
cleaned_up %>%
  group_by(Species, Feature) %>%
  mutate(pval = wilcox.test(measurement ~ descriptor, paired = FALSE)$p.value)
Same as akrun's output (thanks to his correction in the comments above)
akrun <- cleaned_up %>%
  group_split(Species, Feature) %>%
  map_dfr(~ .x %>%
    mutate(pval = wilcox.test(measurement ~ descriptor,
                              data = ., paired = FALSE)$p.value))

me <- cleaned_up %>%
  group_by(Species, Feature) %>%
  mutate(pval = wilcox.test(measurement ~ descriptor, paired = FALSE)$p.value)

all.equal(akrun, me)
# [1] TRUE
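For completeness, a similar result can also be reached with group_modify(), which keeps every row because the helper has to return a data frame for each group. A minimal sketch (this variant is my addition, not from the answers above):

library(tidyverse)

cleaned_up %>%
  group_by(Species, Feature) %>%
  # .x is one group's data (without the grouping columns); mutate keeps all its rows
  group_modify(~ mutate(.x, pval = wilcox.test(measurement ~ descriptor,
                                               data = .x, paired = FALSE)$p.value)) %>%
  ungroup()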
I would like to create a set of columns based on the paper count for each year, i.e. filtering on multiple conditions in dplyr through summarise.
This is my code:
library(dplyr)
library(tidytext)   # for unnest_tokens()

words_list <- data %>%
  select(Keywords, year) %>%
  unnest_tokens(word, Keywords) %>%
  filter(between(year, 1990, 2017)) %>%
  group_by(word) %>%
  summarise(papers_count = n()) %>%
  arrange(desc(papers_count))
The code above gives me two columns, 'word' and 'papers_count'. I would like to create more columns like papers_count (papers_count1990, papers_count1991, etc.) based on each year between 1990 and 2017.
I am looking for something like this:
words_list <- data %>%
  select(Keywords, year) %>%
  unnest_tokens(word, Keywords) %>%
  filter(between(year, 1990, 2017)) %>%
  group_by(word) %>%
  summarise(tot_papers_count = n(),
            papers_count_1991 = sum(year == 1991),
            ...) %>%
  arrange(desc(tot_papers_count))
Please, does anybody have any suggestions?
I would suggest adding year to the group_by, and then using spread to create multiple summary columns.
library(tidyr)
words_list_by_year <- data %>%
  select(Keywords, year) %>%
  unnest_tokens(word, Keywords) %>%
  filter(between(year, 1990, 2017)) %>%
  group_by(year, word) %>%
  summarise(papers_count = n()) %>%
  spread(year, papers_count, fill = 0)
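As a side note, spread() has been superseded by pivot_wider() in tidyr 1.0.0. A roughly equivalent sketch, where names_prefix also produces the papers_count_1990-style column names asked for (the count() shortcut replacing group_by() + summarise() is my substitution):

library(dplyr)
library(tidyr)
library(tidytext)   # for unnest_tokens()

words_list_by_year <- data %>%
  select(Keywords, year) %>%
  unnest_tokens(word, Keywords) %>%
  filter(between(year, 1990, 2017)) %>%
  count(word, year, name = "papers_count") %>%   # one row per word/year pair
  pivot_wider(names_from = year,
              values_from = papers_count,
              values_fill = 0,
              names_prefix = "papers_count_")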