Setup:
I have a tibble (named data) with an embedded list of data.frames.
df1 <- data.frame(name = c("columnName1", "columnName2", "columnName3"),
                  value = c("yes", 1L, 0L),
                  stringsAsFactors = FALSE)
df2 <- data.frame(name = c("columnName1", "columnName2", "columnName3"),
                  value = c("no", 1L, 1L),
                  stringsAsFactors = FALSE)
df3 <- data.frame(name = c("columnName1", "columnName2", "columnName3"),
                  value = c("yes", 0L, 0L),
                  stringsAsFactors = FALSE)
responses <- list(df1, df2, df3)
data <- tibble(ids = c(23L, 42L, 84L),
               responses = responses)
Note: this is a simplified example of the data. The original data comes from a flat JSON file and is loaded with the jsonlite::stream_in() function.
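For context, a minimal sketch of how such a nested structure can be read in with stream_in(); the NDJSON lines and field values here are made-up stand-ins, not the actual file:
library(jsonlite)
library(tibble)

# Hypothetical NDJSON records (one JSON object per line); only illustrative,
# the real file contents are not shown in the question.
json_lines <- c(
  '{"ids": 23, "responses": [{"name": "columnName1", "value": "yes"}, {"name": "columnName2", "value": "1"}]}',
  '{"ids": 42, "responses": [{"name": "columnName1", "value": "no"}, {"name": "columnName2", "value": "1"}]}'
)
raw <- stream_in(textConnection(json_lines), verbose = FALSE)
# 'responses' comes back as a list column of data.frames, as in 'data' above
data_sketch <- as_tibble(raw)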
Objective:
My goal is to convert this tibble to another tibble where the embedded data.frames are spread (transposed) into columns; for example, my goal tibble is:
goal <- tibble(ids = c(23L, 42L, 84L),
               columnName1 = c("yes", "no", "yes"),
               columnName2 = c(1L, 1L, 0L),
               columnName3 = c(0L, 1L, 0L))
# goal tibble
> goal
# A tibble: 3 x 4
ids columnName1 columnName2 columnName3
<int> <chr> <int> <int>
1 23 yes 1 0
2 42 no 1 1
3 84 yes 0 0
My inelegant solution:
Use dplyr::bind_rows() and tidyr::spread():
rdf <- dplyr::bind_rows(data$responses, .id = "id") %>%
  tidyr::spread(key = "name", -id)
goal2 <- cbind(ids = data$ids, rdf[, -1]) %>%
  as_tibble()
Comparing my solution to the goal:
# produced tibble
> goal2
# A tibble: 3 x 4
ids columnName1 columnName2 columnName3
* <int> <chr> <chr> <chr>
1 23 yes 1 0
2 42 no 1 1
3 84 yes 0 0
Overall, my solution works but has a few problems:
I don't know how to pass the unique ids through bind_rows(), which forces me to create a dummy id ("id") that can't be matched to the original id ("ids"). That, in turn, forces me to use cbind() (which I don't like) and to manually remove the dummy id (using -1 slicing on rdf).
The column formats are lost, as my approach converts the integer columns to character.
Any suggestions on how to improve my solution (especially using tidyverse based packages like tidyjson or tidyr)?
We can loop over the 'responses' column with map, spread each data.frame to 'wide' format with convert = TRUE so that the column types are converted, create that as a list column with transmute, and then unnest.
library(tidyverse)
data %>%
  transmute(ids, ind = map(responses, ~ .x %>%
                             spread(name, value, convert = TRUE))) %>%
  unnest(ind)
# A tibble: 3 x 4
# ids columnName1 columnName2 columnName3
# <int> <chr> <int> <int>
#1 23 yes 1 0
#2 42 no 1 1
#3 84 yes 0 0
Or, using the OP's code, we set the names of the list with the 'ids' column, do the bind_rows and then spread:
bind_rows(setNames(data$responses, data$ids), .id = 'ids') %>%
  spread(name, value, convert = TRUE)
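If you are on a more recent tidyr (>= 1.0.0), where spread() is superseded, a possible sketch uses unnest() and pivot_wider(); the column types are converted afterwards because 'value' is stored as character in each data.frame:
library(tidyverse)

# Sketch with the newer tidyr verbs: unnest the list column, pivot to wide,
# then convert the character columns back to their natural types.
data %>%
  unnest(responses) %>%
  pivot_wider(names_from = name, values_from = value) %>%
  mutate(across(-ids, ~ type.convert(.x, as.is = TRUE)))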
I was trying something relatively simple, but I'm having some struggles. Let's say I have two dataframes, df1 and df2:
df1:
id expenditure
1 10
2 20
1 30
2 50
df2:
id expenditure
1 30
2 50
1 60
2 10
I also added them to a named list:
table_list = list()
table_list[['a']] = df1
table_list[['b']] = df2
And now I want to perform some summary operation through a function and then bind those rows:
get_summary = function(table){
  final_table = table %>%
    group_by(id) %>%
    summarise(total_expenditure = sum(expenditure))
}
And then apply this through map_dfr:
summary = table_list %>% map_dfr(get_summary, .id = 'origin_table')
So, this will create almost what I'm looking for:
origin_table id total_expenditure
a 1 40
a 2 70
b 1 90
b 2 60
But, what if I would like to do something specific depending on the element of the list that is being passed, something like this:
get_summary = function(table, name){
  dummy_list = c(TRUE, FALSE)
  names(dummy_list) = c('a', 'b')
  final_table = table %>%
    group_by(id) %>%
    summarise(total_expenditure = sum(expenditure))
  is_true = dummy_list[[name]] # Want to use the original name to call another list
  if(is_true) final_table = final_table %>% mutate(total_expenditure = total_expenditure + 1)
  return(final_table)
}
This would produce something like this:
origin_table id total_expenditure
a 1 41
a 2 71
b 1 90
b 2 60
So is there any way to use the list names inside my function? Or any way to identify which element of the list I'm working with? Maybe map_dfr is too restrictive and I have to use something else?
Edit: changed example so it is more grounded in reality
Instead of using map, use imap, which also passes the name of each list element as the second argument (.y in a lambda).
library(purrr)
library(dplyr)
get_summary = function(dat, name){
  dat %>%
    group_by(id) %>%
    summarise(total_expenditure = sum(expenditure, na.rm = TRUE),
              .groups = "drop") %>%
    mutate(total_expenditure = if(name == 'a')
      total_expenditure + 1 else total_expenditure)
}
-testing
> table_list %>%
imap_dfr(~ get_summary(.x, name = .y), .id = 'origin_table')
# A tibble: 4 × 3
origin_table id total_expenditure
<chr> <int> <dbl>
1 a 1 41
2 a 2 71
3 b 1 90
4 b 2 60
data
table_list <- list(a = structure(list(id = c(1L, 2L, 1L, 2L),
expenditure = c(10L,
20L, 30L, 50L)), class = "data.frame", row.names = c(NA, -4L)),
b = structure(list(id = c(1L, 2L, 1L, 2L), expenditure = c(30L,
50L, 60L, 10L)), class = "data.frame", row.names = c(NA,
-4L)))
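A small side note on the imap_dfr call above: since get_summary() already takes the element first and the name second, the anonymous function could arguably be dropped; a sketch:
# imap_dfr() passes each element as the first argument and its name as the
# second, which matches get_summary(dat, name) directly.
table_list %>%
  imap_dfr(get_summary, .id = 'origin_table')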
Managed to do it by adding origin_table as a pre-existing column in the dataframes:
df1 = df1 %>% mutate(origin_table = 'a')
df2 = df2 %>% mutate(origin_table = 'b')
Then I can extract the origin by doing the following:
get_summary = function(table){
  dummy_list = c(TRUE, FALSE)
  names(dummy_list) = c('a', 'b')
  origin = table %>% distinct(origin_table) %>% pull()
  final_table = table %>%
    group_by(id) %>%
    summarise(total_expenditure = sum(expenditure))
  is_true = dummy_list[[origin]] # Want to use the original name to call another list
  if(is_true) final_table = final_table %>% mutate(total_expenditure = total_expenditure + 1)
  return(final_table)
}
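For completeness, a sketch of how the summary is then built, assuming df1/df2 now carry the origin_table column and get_summary is the version above; it mirrors the original map_dfr call:
library(purrr)
library(dplyr)

# Rebuild the named list from the modified data frames and combine the
# per-table summaries; .id supplies the origin_table column in the output.
table_list <- list(a = df1, b = df2)
summary_tbl <- table_list %>%
  map_dfr(get_summary, .id = 'origin_table')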
I'm working with a dataframe with the following structure:
ID origin value1 value2
1 A 100 50
1 A 200 100
2 B 10 2
2 B 150 30
So each row can have different origins and I need to make some calculations by ID, but the value variable I'm using depends on the origin variable. So if origin == 'A' I should use value1 and if it's B I should use value2. My code without taking this last condition into account looks like this:
df2 <- df %>%
  group_by(ID) %>%
  mutate(mean_value = mean(value1, na.rm = TRUE),
         sd_value = sd(value1, na.rm = TRUE),
         median_value = median(value1, na.rm = TRUE),
         cv_value = sd_value / mean_value,
         p25_value = quantile(value1, 0.25, na.rm = TRUE),
         p75_value = quantile(value1, 0.75, na.rm = TRUE))
I know I could add an if_else statement to each line, but I think my code will lose some readability (in my actual data there are multiple origins, which makes this a bit more cumbersome). So I was thinking of creating a custom function, maybe using map, or maybe something using group_by on origin, but I'm not finding a good way to implement these options. Any ideas? My desired dataframe would look like this (I'll add only the first mutate column for simplicity):
ID origin value1 value2 mean_value
1 A 100 50 150
1 A 200 100 150
2 B 10 2 16
2 B 150 30 16
So the first mean value is (100 + 200) / 2 (from value1) and the second is (30 + 2) / 2 (from value2).
Thanks!
We could create a temporary column first and then compute the mean from it afterwards. That way, we need to use ifelse/case_when only once.
library(dplyr)
df %>%
  mutate(valuenew = case_when(origin == 'A' ~ value1,
                              TRUE ~ value2)) %>%
  group_by(ID) %>%
  mutate(mean_value = mean(valuenew, na.rm = TRUE), .keep = "unused") %>%
  ungroup()
-output
# A tibble: 4 × 5
ID origin value1 value2 mean_value
<int> <chr> <int> <int> <dbl>
1 1 A 100 50 150
2 1 A 200 100 150
3 2 B 10 2 16
4 2 B 150 30 16
data
df <- structure(list(ID = c(1L, 1L, 2L, 2L), origin = c("A", "A", "B",
"B"), value1 = c(100L, 200L, 10L, 150L), value2 = c(50L, 100L,
2L, 30L)), class = "data.frame", row.names = c(NA, -4L))
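If there are more than two origins (as the question mentions), the same idea extends with one case_when branch per origin; a hypothetical sketch, where the commented 'C'/value3 branch is a made-up example:
library(dplyr)

# One branch per origin, still a single case_when; unmatched origins fall
# through to NA (type-matched to the integer value columns).
df %>%
  mutate(valuenew = case_when(origin == 'A' ~ value1,
                              origin == 'B' ~ value2,
                              # origin == 'C' ~ value3,   # add further branches as needed
                              TRUE ~ NA_integer_)) %>%
  group_by(ID) %>%
  mutate(mean_value = mean(valuenew, na.rm = TRUE), .keep = "unused") %>%
  ungroup()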
I understand we can use the dplyr function coalesce() to unite different columns, but is there such a function to unite rows?
I am struggling with a confusing incomplete/doubled dataframe with duplicate rows for the same id, but with different columns filled. E.g.
id sex age source
12 M NA 1
12 NA 3 1
13 NA 2 2
13 NA NA NA
13 F 2 NA
and I am trying to achieve:
id sex age source
12 M 3 1
13 F 2 2
You can try:
library(dplyr)
library(tidyr)
#Data
df <- structure(list(id = c(12L, 12L, 13L, 13L, 13L), sex = structure(c(2L,
NA, NA, NA, 1L), .Label = c("F", "M"), class = "factor"), age = c(NA,
3L, 2L, NA, 2L), source = c(1L, 1L, 2L, NA, NA)), class = "data.frame", row.names = c(NA,
-5L))
df %>%
  group_by(id) %>%
  fill(everything(), .direction = "down") %>%
  fill(everything(), .direction = "up") %>%
  slice(1)
# A tibble: 2 x 4
# Groups: id [2]
id sex age source
<int> <fct> <int> <int>
1 12 M 3 1
2 13 F 2 2
As mentioned by @A5C1D2H2I1M1N2O1R2T1, you can select the first non-NA value in each group. This can be done using dplyr:
library(dplyr)
df %>% group_by(id) %>% summarise(across(.fns = ~na.omit(.)[1]))
# A tibble: 2 x 4
# id sex age source
# <int> <fct> <int> <int>
#1 12 M 3 1
#2 13 F 2 2
Base R :
aggregate(.~id, df, function(x) na.omit(x)[1], na.action = 'na.pass')
Or data.table :
library(data.table)
setDT(df)[, lapply(.SD, function(x) na.omit(x)[1]), id]
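Since the question mentions coalesce(): the same row-collapse can also be sketched by splicing each column's values into coalesce(), which then returns the first non-NA value per group:
library(dplyr)

# coalesce() supports dynamic dots, so splicing the group's values with !!!
# picks the first non-missing entry of each column within each id.
df %>%
  group_by(id) %>%
  summarise(across(everything(), ~ coalesce(!!!as.list(.x))), .groups = "drop")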
I am trying to iterate through columns; if a column is a whole year, it should be duplicated four times and renamed to quarters.
So this
2000 Q1-01 Q2-01 Q3-01
1 2 3 3
Should become this:
Q1-00 Q2-00 Q3-00 Q4-00 Q1-01 Q2-01 Q3-01
1 1 1 1 2 3 3
Any ideas?
We can use stringr::str_detect to look for column names with 4 digits, then take the last two digits from those columns:
library(dplyr)
library(tidyr)
library(stringr)
df %>%
  gather(key, value) %>%
  group_by(key) %>%
  mutate(key_new = ifelse(str_detect(key, '\\d{4}'),
                          paste0('Q', 1:4, '-', str_extract(key, '\\d{2}$'), collapse = ','),
                          key)) %>%
  ungroup() %>%
  select(-key) %>%
  separate_rows(key_new, sep = ',') %>%
  spread(key_new, value)
PS: I hope you don't have a large dataset
Since you want repeated columns, you can just re-index your data frame and then update the column names
df <- structure(list(`2000` = 1L, Q1.01 = 2L, Q2.01 = 3L, Q3.01 = 3L,
`2002` = 1L, Q1.03 = 2L, Q2.03 = 3L, Q3.03 = 3L), row.names = c(NA,
-1L), class = "data.frame")
#> df
#2000 Q1.01 Q2.01 Q3.01 2002 Q1.03 Q2.03 Q3.03
#1 1 2 3 3 1 2 3 3
# Get indices of columns that consist of 4 numbers
col.ids <- grep('^[0-9]{4}$', names(df))
# For each of those, create new names, and for the rest preserve the old names
new.names <- lapply(seq_along(df), function(i) {
  if (i %in% col.ids)
    return(paste(substr(names(df)[i], 3, 4), c('Q1', 'Q2', 'Q3', 'Q4'), sep = '.'))
  return(names(df)[i])
})
# Now repeat each of those columns 4 times
df <- df[rep(seq_along(df), ifelse(seq_along(df) %in% col.ids, 4, 1))]
# ...and finally set the column names to the desired new names
names(df) <- unlist(new.names)
#> df
#00.Q1 00.Q2 00.Q3 00.Q4 Q1.01 Q2.01 Q3.01 02.Q1 02.Q2 02.Q3 02.Q4 Q1.03 Q2.03 Q3.03
#1 1 1 1 1 2 3 3 1 1 1 1 2 3 3
For example, suppose that you had a function that applied some dplyr functions, but you couldn't expect datasets passed to this function to have the same column names.
For a simplified example of what I mean, say you had a data frame, arizona.trees:
arizona.trees
group arizona.redwoods arizona.oaks
A 23 11
A 24 12
B 9 8
B 10 7
C 88 22
and another very similar data frame, california.trees:
california.trees
group california.redwoods california.oaks
A 25 50
A 11 33
B 90 5
B 77 3
C 90 35
And you wanted to implement a function that returns the mean for the given groups (A, B, ... Z) for a given type of tree that would work for both of these data frames.
foo <- function(dataset, group1, group2, tree.type) {
  column.name <- colnames(dataset[2])
  result <- filter(dataset, group %in% c(group1, group2)) %>%
    select(group, contains(tree.type)) %>%
    group_by(group) %>%
    summarize("mean" = mean(column.name))
  return(result)
}
A desired output for a call of foo(california.trees, A, B, redwoods) would be:
result
group  mean
A      18
B      83.5
For some reason, an implementation like foo() just doesn't seem to work. This is likely due to some error with the data frame indexing: the function seems to think I am attempting to take the mean of the column.name string itself, rather than retrieving the corresponding column and passing that column to mean(). I'm not sure how to avoid this. There is also the issue that the data frame passed implicitly through the pipe operator can't be referenced directly, which may be part of the problem.
Why is this? Is there some alternative implementation that would work?
We can use the quosure-based solution from the devel version of dplyr (soon to be released as 0.6.0):
foo <- function(dataset, group1, group2, tree.type){
  group1 <- quo_name(enquo(group1))
  group2 <- quo_name(enquo(group2))
  colN <- rlang::parse_quosure(names(dataset)[2])
  tree.type <- quo_name(enquo(tree.type))
  dataset %>%
    filter(group %in% c(group1, group2)) %>%
    select(group, contains(tree.type)) %>%
    group_by(group) %>%
    summarise(mean = mean(UQ(colN)))
}
foo(california.trees, A, B, redwoods)
# A tibble: 2 × 2
# group mean
# <chr> <dbl>
#1 A 18.0
#2 B 83.5
foo(arizona.trees, A, B, redwoods)
# A tibble: 2 × 2
# group mean
# <chr> <dbl>
#1 A 23.5
#2 B 9.5
enquo takes the input arguments and converts them to quosures; with quo_name, these are converted to strings for use with %in%. The second column name is converted from a string to a quosure using parse_quosure and then unquoted (UQ or !!) for evaluation within summarise.
NOTE: This is based on the OP's function, which selects the second column.
The above solution selects the column by position (as per the OP's code) and may not work for other columns. Instead, we can match the 'tree.type' and get the 'mean' of the columns based on that:
foo1 <- function(dataset, group1, group2, tree.type){
  group1 <- quo_name(enquo(group1))
  group2 <- quo_name(enquo(group2))
  tree.type <- quo_name(enquo(tree.type))
  dataset %>%
    filter(group %in% c(group1, group2)) %>%
    select(group, contains(tree.type)) %>%
    group_by(group) %>%
    summarise_at(vars(contains(tree.type)), funs(mean = mean(.)))
}
The function can be tested for different columns in the two datasets
foo1(arizona.trees, A, B, oaks)
# A tibble: 2 × 2
# group mean
# <chr> <dbl>
#1 A 11.5
#2 B 7.5
foo1(arizona.trees, A, B, redwood)
# A tibble: 2 × 2
# group mean
# <chr> <dbl>
#1 A 23.5
#2 B 9.5
foo1(california.trees, A, B, redwood)
# A tibble: 2 × 2
# group mean
# <chr> <dbl>
#1 A 18.0
#2 B 83.5
foo1(california.trees, A, B, oaks)
# A tibble: 2 × 2
# group mean
# <chr> <dbl>
#1 A 41.5
#2 B 4.0
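For completeness, rlang::parse_quosure() and funs() are deprecated in later releases; a rough sketch of the same idea with current tidy evaluation and across() (dplyr >= 1.0.0), where the helper name foo2 is just for illustration:
library(dplyr)

# Sketch only: quo_name/enquo still work as above, and across() replaces
# summarise_at/funs(); matching the column via 'tree.type' is unchanged.
foo2 <- function(dataset, group1, group2, tree.type) {
  group1 <- quo_name(enquo(group1))
  group2 <- quo_name(enquo(group2))
  tree.type <- quo_name(enquo(tree.type))
  dataset %>%
    filter(group %in% c(group1, group2)) %>%
    group_by(group) %>%
    summarise(across(contains(tree.type), mean, .names = "mean"),
              .groups = "drop")
}

foo2(california.trees, A, B, redwoods)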
data
arizona.trees <- structure(list(group = c("A", "A", "B", "B", "C"),
arizona.redwoods = c(23L,
24L, 9L, 10L, 88L), arizona.oaks = c(11L, 12L, 8L, 7L, 22L)),
.Names = c("group",
"arizona.redwoods", "arizona.oaks"), class = "data.frame",
row.names = c(NA, -5L))
california.trees <- structure(list(group = c("A", "A", "B", "B", "C"),
california.redwoods = c(25L,
11L, 90L, 77L, 90L), california.oaks = c(50L, 33L, 5L, 3L, 35L
)), .Names = c("group", "california.redwoods", "california.oaks"
), class = "data.frame", row.names = c(NA, -5L))