R Group By and Sum to Ignore NA

I have a data frame like this:
library(dplyr)
name <- c("Bob", "Bob", "Bob", "Bob", "John", "John", "John")
count <- c(2, 3, 4, 5, 2, 3, 4)
score <- c(5, NA, NA, NA, 3, 4, 2)
my_df <- data.frame(cbind(name, count, score)) %>%
  mutate(count = as.numeric(count),
         score = as.numeric(score))
my_df
  name count score
1  Bob     2     5
2  Bob     3    NA
3  Bob     4    NA
4  Bob     5    NA
5 John     2     3
6 John     3     4
7 John     4     2
Then I create another column by taking the product of count and score:
my_df %>%
  mutate(product = count*score)
  name count score product
1  Bob     2     5      10
2  Bob     3    NA      NA
3  Bob     4    NA      NA
4  Bob     5    NA      NA
5 John     2     3       6
6 John     3     4      12
7 John     4     2       8
I want to group by name and compute sum(product)/sum(count), where the sum of product ignores any NA values (I did this below with na.rm = TRUE) AND the count values paired with those NAs are also excluded from the denominator. This is my current solution, but it is not right: Bob's result is calculated as 10/(2+3+4+5) = 0.71, whereas I want it to be 10/2 = 5.
my_df %>%
  mutate(product = count*score) %>%
  group_by(name) %>%
  summarize(result = sum(product, na.rm = TRUE)/sum(count))
  name  result
  <chr>  <dbl>
1 Bob    0.714
2 John   2.89

We may need to subset count by the non-NA values in product:
library(dplyr)
my_df %>%
  mutate(product = count*score) %>%
  group_by(name) %>%
  summarise(result = sum(product, na.rm = TRUE)/sum(count[!is.na(product)]))
Output:
# A tibble: 2 × 2
  name  result
  <chr>  <dbl>
1 Bob     5
2 John    2.89
Or do a filter before the grouping:
my_df %>%
  filter(complete.cases(score)) %>%
  group_by(name) %>%
  summarise(result = sum(score * count)/sum(count))
# A tibble: 2 × 2
  name  result
  <chr>  <dbl>
1 Bob     5
2 John    2.89
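Since result is just a count-weighted mean of score, the same thing can be written with base R's weighted.mean(), whose na.rm = TRUE drops NA values together with their paired weights; a minimal equivalent sketch:
library(dplyr)
# weighted.mean() removes the weights belonging to NA values when na.rm = TRUE,
# so the denominator only counts rows with a non-NA score
my_df %>%
  group_by(name) %>%
  summarise(result = weighted.mean(score, count, na.rm = TRUE))
This should give the same results as above (Bob 5, John 2.89).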

Related

summarise by group returns 0 instead of NA if all values are NA

library(dplyr)
dat <- data.frame(id = rep(c(1, 2, 3, 4), each = 3),
                  value = c(NA, NA, NA, 0, 1, 2, 0, 1, NA, 1, 2, 3))
dat %>%
  dplyr::group_by(id) %>%
  dplyr::summarise(value_sum = sum(value, na.rm = TRUE))
# A tibble: 4 x 2
     id value_sum
  <dbl>     <dbl>
1     1         0
2     2         3
3     3         1
4     4         6
Is there any way I can return NA if all the entries in a group are NA? For example, id 1 has all entries NA, so I want its value_sum to be NA as well.
# A tibble: 4 x 2
     id value_sum
  <dbl>     <dbl>
1     1        NA
2     2         3
3     3         1
4     4         6
One way is to use an if/else statement: if all values are NA, return NA, else return the sum:
dat %>%
  dplyr::group_by(id) %>%
  dplyr::summarise(number = if (all(is.na(value))) NA_real_ else sum(value, na.rm = TRUE))
     id number
  <dbl>  <dbl>
1     1     NA
2     2      3
3     3      1
4     4      6
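If the same rule is needed across many columns, the if/else can be wrapped in a small helper and applied with across(); the helper name sum_or_na() here is just illustrative:
library(dplyr)
# sum that returns NA when an entire group is NA, instead of 0
sum_or_na <- function(x) {
  if (all(is.na(x))) NA_real_ else sum(x, na.rm = TRUE)
}
dat %>%
  group_by(id) %>%
  summarise(across(everything(), sum_or_na))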
We could use fsum from the collapse package:
library(collapse)
fsum(dat$value, g = dat$id)
 1  2  3  4
NA  3  1  6
Or with dplyr:
library(dplyr)
dat %>%
  group_by(id) %>%
  summarise(number = fsum(value))
# A tibble: 4 × 2
     id number
  <dbl>  <dbl>
1     1     NA
2     2      3
3     3      1
4     4      6
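For completeness, a base R sketch of the same idea with aggregate(); na.action = na.pass is needed because the default (na.omit) would drop the NA rows before the summary function ever sees them:
# keep NA rows so the anonymous function can decide what to return
aggregate(value ~ id, data = dat,
          FUN = function(x) if (all(is.na(x))) NA_real_ else sum(x, na.rm = TRUE),
          na.action = na.pass)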

fill NA values per group based on first value of a group

I am trying to fill NA values of my dataframe. However, I would like to fill them based on the first value of each group.
df <- data.frame(
  group = c(rep("A", 4), rep("B", 4)),
  val = c(1, 2, NA, NA, 4, 3, NA, NA)
)
df
  group val
1     A   1
2     A   2
3     A  NA
4     A  NA
5     B   4
6     B   3
7     B  NA
8     B  NA
fill(df, val, .direction = "down")
  group val
1     A   1
2     A   2
3     A   2   # -> should be 1
4     A   2   # -> should be 1
5     B   4
6     B   3
7     B   3   # -> should be 4
8     B   3   # -> should be 4
Can I do this with tidyr::fill()? Or is there another (more or less elegant) way to do this? I need to use it in a longer chain of %>% operations.
Thank you very much!
Use tidyr::replace_na() and dplyr::first() (or val[[1]]) inside a grouped mutate():
library(dplyr)
library(tidyr)
df %>%
  group_by(group) %>%
  mutate(val = replace_na(val, first(val))) %>%
  ungroup()
#> # A tibble: 8 × 2
#>   group   val
#>   <chr> <dbl>
#> 1 A         1
#> 2 A         2
#> 3 A         1
#> 4 A         1
#> 5 B         4
#> 6 B         3
#> 7 B         4
#> 8 B         4
PS - @richarddmorey points out the case where the first value for a group is NA. The above code would keep all NA values as NA. If you'd like to instead replace with the first non-missing value per group, you could subset the vector using !is.na():
df %>%
  group_by(group) %>%
  mutate(val = replace_na(val, first(val[!is.na(val)]))) %>%
  ungroup()
Created on 2022-11-17 with reprex v2.0.2
This should work, using dplyr's case_when():
library(dplyr)
df %>%
  group_by(group) %>%
  mutate(val = case_when(
    is.na(val) ~ val[1],
    TRUE ~ val
  ))
Output:
  group   val
  <chr> <dbl>
1 A         1
2 A         2
3 A         1
4 A         1
5 B         4
6 B         3
7 B         4
8 B         4
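If the longer pipeline ever needs a dependency-free step, a base R sketch with ave() performs the same per-group replacement (again assuming the first value of each group is non-missing):
# per group, overwrite NA entries with the group's first value
df$val <- ave(df$val, df$group,
              FUN = function(x) replace(x, is.na(x), x[1]))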

Find rows that are identical in one column but not another

There should be a fairly simple solution to this but it's giving me trouble. I have a DF similar to this:
df <- data.frame(name = c("george", "george", "george", "sara", "sara", "sam", "bill", "bill"),
                 id_num = c(1, 1, 2, 3, 3, 4, 5, 5))
df
    name id_num
1 george      1
2 george      1
3 george      2
4   sara      3
5   sara      3
6    sam      4
7   bill      5
8   bill      5
I'm looking for a way to find rows where the name and ID numbers are inconsistent in a very large dataset. I.e., George should always be "1" but in row three there is a mistake and he has also been assigned ID number "2".
I think the easiest way is to use dplyr::count() twice; for your example:
df %>%
  count(name, id_num) %>%
  count(name)
The first count will give:
name   id_num n
george      1 2
george      2 1
sara        3 2
sam         4 1
bill        5 2
Then the second count will give:
name   n
george 2
sara   1
sam    1
bill   1
Of course, you could also add filter(n > 1) to the end of your pipe, or arrange(desc(n)):
df %>%
  count(name, id_num) %>%
  count(name) %>%
  arrange(desc(n)) %>%
  filter(n > 1)
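If you want the offending rows themselves rather than a summary, a grouped filter on dplyr::n_distinct() is a compact sketch of the same check:
library(dplyr)
df %>%
  group_by(name) %>%
  filter(n_distinct(id_num) > 1) %>%  # keep names mapped to more than one id
  ungroup()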
Using tapply() to calculate the number of IDs per name, then subsetting for counts greater than 1:
res <- with(df, tapply(id_num, list(name), \(x) length(unique(x))))
res[res > 1]
# george
# 2
You probably want to correct this. A safe way is to rebuild the numeric IDs using as.factor(),
df$id_new <- as.integer(as.factor(df$name))
df
#     name id_num id_new
# 1 george      1      2
# 2 george      1      2
# 3 george      2      2
# 4   sara      3      4
# 5   sara      3      4
# 6    sam      4      3
# 7   bill      5      1
# 8   bill      5      1
where numbers are assigned according to the names in alphabetical order, or factor(), setting the levels in order of appearance:
df$id_new2 <- as.integer(factor(df$name, levels = unique(df$name)))
df
#     name id_num id_new id_new2
# 1 george      1      2       1
# 2 george      1      2       1
# 3 george      2      2       1
# 4   sara      3      4       2
# 5   sara      3      4       2
# 6    sam      4      3       3
# 7   bill      5      1       4
# 8   bill      5      1       4
Note: R >= 4.1 used.
Data:
df <- structure(list(name = c("george", "george", "george", "sara",
"sara", "sam", "bill", "bill"), id_num = c(1, 1, 2, 3, 3, 4,
5, 5)), class = "data.frame", row.names = c(NA, -8L))

Replace NA values per group with concatenated values from the same column

I wish to achieve the following:
For each Group, when the ID column is NA, fill the corresponding NA value in Name with the concatenation of the group's other Name values, ignoring any other NAs in Name.
My data frame looks as follows:
x <- data.frame(Group = c("A", "A", "A", "A", "B", "B"),
                ID = c(1, 2, 3, NA, NA, 5),
                Name = c("Bob", "Jane", NA, NA, NA, "Tim"))
This is what I wish to achieve:
y <- data.frame(Group = c("A", "A", "A", "A", "B", "B"),
                ID = c(1, 2, 3, NA, NA, 5),
                Name = c("Bob", "Jane", NA, "Bob Jane", "Tim", "Tim"))
If there's a way to achieve this in the tidyverse I would be very grateful for any pointers.
I've tried the following, but it doesn't find the object 'Name':
x %>% group_by(Group) %>% replace_na(list(Name = paste(unique(.Name))))
We may use a conditional expression with replace():
library(dplyr)
library(stringr)
x %>%
  group_by(Group) %>%
  mutate(Name = replace(Name, is.na(ID),
                        str_c(Name[!is.na(Name)], collapse = ' '))) %>%
  ungroup()
Output:
# A tibble: 6 × 3
  Group    ID Name
  <chr> <dbl> <chr>
1 A         1 Bob
2 A         2 Jane
3 A         3 <NA>
4 A        NA Bob Jane
5 B        NA Tim
6 B         5 Tim
Does this work:
library(dplyr)
x %>%
  group_by(Group) %>%
  mutate(Name = case_when(is.na(ID) ~ paste(Name[!is.na(Name)], collapse = ' '),
                          TRUE ~ Name))
# A tibble: 6 x 3
# Groups:   Group [2]
  Group    ID Name
  <chr> <dbl> <chr>
1 A         1 Bob
2 A         2 Jane
3 A         3 NA
4 A        NA Bob Jane
5 B        NA Tim
6 B         5 Tim
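As a side note, stringr 1.5.0 added an na.rm argument to str_flatten(), which removes the need for the !is.na() subsetting; a sketch assuming that version is available:
library(dplyr)
library(stringr)  # str_flatten(na.rm =) requires stringr >= 1.5.0
x %>%
  group_by(Group) %>%
  mutate(Name = replace(Name, is.na(ID),
                        str_flatten(Name, collapse = ' ', na.rm = TRUE))) %>%
  ungroup()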

R: Calculate distance between first and current row of grouped dataframe

I need to calculate the Euclidean distance between the first and current row in a dataframe. Each row is keyed by (group, month) and has a list of values. In the toy example below the key is c(month, student) and the values are in c(A, B). I want to create a distance column C that's equal to sqrt((A_i - A_1)^2 + (B_i - B_1)^2).
So far I managed to spread my data and pull each group's first values into new columns. While I could create the formula by hand in the toy example, in my actual data I have very many columns instead of just 2. I believe I could create the squared differences within the mutate_all, and then do a row sum and take the square root of that, but no luck so far.
df <- data.frame(month = rep(1:3, 2),
                 student = rep(c("Amy", "Bob"), each = 3),
                 A = c(9, 6, 6, 8, 6, 9),
                 B = c(6, 2, 8, 5, 6, 7))
# Pull in each column's first values for each group
df %>%
  group_by(student) %>%
  mutate_all(list(first = first))
  # TODO: calculate the distance, i.e. sqrt(sum_i (x_i - x_1)^2)
# Output:
  month student     A     B month_first A_first B_first
1     1 Amy         9     6           1       9       6
2     2 Amy         6     2           1       9       6
...
Desired output:
# Output:
  month student     A     B month_first A_first B_first dist_from_first
1     1 Amy         9     6           1       9       6               0
2     2 Amy         6     2           1       9       6               5
...
Here is another way using compact dplyr code; it works for any number of columns:
df %>%
  select(-month) %>%
  group_by(student) %>%
  mutate(across(everything(), ~ (first(.x) - .x)^2)) %>%
  ungroup() %>%
  mutate(euc.dist = sqrt(rowSums(select(., -1))))
# A tibble: 6 x 4
  student     A     B euc.dist
  <chr>   <dbl> <dbl>    <dbl>
1 Amy         0     0     0
2 Amy         9    16     5
3 Amy         9     4     3.61
4 Bob         0     0     0
5 Bob         4     1     2.24
6 Bob         1     4     2.24
Edit: added alternative formulation using a join. I expect that approach will be much faster for a very wide data frame with many columns to compare.
Approach 1: To get the Euclidean distance for a large number of columns, one way is to rearrange the data so each row shows one month, one student, and one original column (e.g. A or B in the OP), with two value columns holding the current-month value and the first value. Then we can square the differences and, grouping by student and month, take the square root of their sum to get the Euclidean distance for each student-month (labelled RMS in the output below).
library(tidyverse)
df %>%
  group_by(student) %>%
  mutate_all(list(first = first)) %>%
  ungroup() %>%
  # gather into long form; col shows the variant, col2 the original column
  gather(col, val, -c(student, month, month_first)) %>%
  mutate(col2 = col %>% str_remove("_first")) %>%
  mutate(col = if_else(col %>% str_ends("_first"),
                       "first",
                       "comparison")) %>%
  spread(col, val) %>%
  mutate(square_dif = (comparison - first)^2) %>%
  group_by(student, month) %>%
  summarize(RMS = sqrt(sum(square_dif)))
# A tibble: 6 x 3
# Groups:   student [2]
  student month   RMS
  <fct>   <int> <dbl>
1 Amy         1  0
2 Amy         2  5
3 Amy         3  3.61
4 Bob         1  0
5 Bob         2  2.24
6 Bob         3  2.24
Approach 2. Here, a long version of the data is joined to a version that is just the earliest month for each student.
library(tidyverse)
df_long <- gather(df, col, val, -c(month, student))
df_long %>%
  left_join(df_long %>%
              group_by(student) %>%
              top_n(-1, wt = month) %>%
              rename(first_val = val) %>%
              select(-month),
            by = c("student", "col")) %>%
  mutate(square_dif = (val - first_val)^2) %>%
  group_by(student, month) %>%
  summarize(RMS = sqrt(sum(square_dif)))
# A tibble: 6 x 3
# Groups:   student [2]
  student month   RMS
  <fct>   <int> <dbl>
1 Amy         1  0
2 Amy         2  5
3 Amy         3  3.61
4 Bob         1  0
5 Bob         2  2.24
6 Bob         3  2.24
Instead of the mutate_all call, it'd be easier to directly calculate the dist_from_first. The only thing I'm unclear about is whether month should be included in the group_by() statement.
library(tidyverse)
df <- tibble(month = rep(1:3, 2),
             student = rep(c("Amy", "Bob"), each = 3),
             A = c(9, 6, 6, 8, 6, 9),
             B = c(6, 2, 8, 5, 6, 7))
df %>%
  group_by(student) %>%
  mutate(dist_from_first = sqrt((A - first(A))^2 + (B - first(B))^2)) %>%
  ungroup()
# A tibble: 6 x 5
#   month student     A     B dist_from_first
#   <int> <chr>   <dbl> <dbl>           <dbl>
# 1     1 Amy         9     6            0
# 2     2 Amy         6     2            5
# 3     3 Amy         6     8            3.61
# 4     1 Bob         8     5            0
# 5     2 Bob         6     6            2.24
# 6     3 Bob         9     7            2.24
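For many columns, the same per-row calculation can be done without reshaping by combining across() with rowSums(); a minimal sketch (here hard-coding A and B; with many columns you could select them with a tidyselect helper, taking care to exclude month):
library(dplyr)
df %>%
  group_by(student) %>%
  # across() builds a data frame of squared differences from each group's
  # first row; rowSums() adds them per row before the square root
  mutate(dist_from_first = sqrt(rowSums(across(c(A, B),
                                               \(x) (x - first(x))^2)))) %>%
  ungroup()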
