R - Count unique/distinct values in two columns together
Hi everyone. I have a panel of electoral behaviour, but I am having problems computing a new variable that captures the number of unique values (parties) across my two columns Party and Party2013 per group. The column Party2013 measures the vote in the 2013 election and Party measures voting intentions after 2013. Every time I try n_distinct or length I get the count of unique values in the two columns separately, not combined.
ID Wave Party Party2013
1 1 A A
1 2 A NA
1 3 B NA
1 4 B NA
Based on the example above, I get a count of 3 instead of the desired 2.
I've tried the following commands, but got only the number of separate unique values:
data %>%
  group_by(ID) %>%
  distinct(Party, Party2013, .keep_all = TRUE) %>%
  dplyr::summarise(Party_Party2013 = n())
or
ddply(data, .(ID), mutate, count = length(unique(Party, Party2013)))
The expected outcome would be as follows:
ID Wave Party Party2013 Count
1 1 A A 2
1 2 A NA 2
1 3 B NA 2
1 4 B NA 2
2 1 A C 3
2 2 B NA 3
2 3 B NA 3
2 4 B NA 3
I would very much appreciate any advice on how to count the overall number of unique parties across the two columns per group, rather than the number of distinct values in each one. Thanks.
You can subset the data with cur_data() and unlist it to get a single vector, then use n_distinct to count the number of unique values.
library(dplyr)
df %>%
  group_by(ID) %>%
  mutate(Count = n_distinct(unlist(select(cur_data(), Party, Party2013)),
                            na.rm = TRUE)) %>%
  ungroup()
# ID Wave Party Party2013 Count
# <int> <int> <chr> <chr> <int>
#1 1 1 A A 2
#2 1 2 A NA 2
#3 1 3 B NA 2
#4 1 4 B NA 2
#5 2 1 A C 3
#6 2 2 B NA 3
#7 2 3 B NA 3
#8 2 4 B NA 3
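Note that cur_data() was deprecated in dplyr 1.1.0 in favour of pick(). A sketch of the same idea for recent versions (assuming dplyr >= 1.1.0):
library(dplyr)
df %>%
  group_by(ID) %>%
  # pick() returns the selected columns for the current group as a tibble
  mutate(Count = n_distinct(unlist(pick(Party, Party2013)), na.rm = TRUE)) %>%
  ungroup()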
data
It is easier to help if you provide data in a reproducible format
df <- structure(list(ID = c(1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L), Wave = c(1L,
2L, 3L, 4L, 1L, 2L, 3L, 4L), Party = c("A", "A", "B", "B", "A",
"B", "B", "B"), Party2013 = c("A", NA, NA, NA, "C", NA, NA, NA
)), class = "data.frame", row.names = c(NA, -8L))
In situations like this I always like to simplify the problem by reshaping the data into long format, since problems like this are easier to solve when all of your values are in one column. With pivot_longer() you can also use the argument values_drop_na = TRUE to drop the NAs, which were being counted in your example:
library(tidyr)
library(dplyr)
data <- read.table(text =
"ID Wave Party Party2013
1 1 A A
1 2 A NA
1 3 B NA
1 4 B NA
2 1 A C
2 2 B NA
2 3 B NA
2 4 B NA", header = TRUE)
data %>%
  pivot_longer(cols = starts_with("Party"), values_drop_na = TRUE) %>%
  group_by(ID) %>%
  summarise(Count = n_distinct(value)) %>%
  merge(data, .)
#> ID Wave Party Party2013 Count
#> 1 1 1 A A 2
#> 2 1 2 A <NA> 2
#> 3 1 3 B <NA> 2
#> 4 1 4 B <NA> 2
#> 5 2 1 A C 3
#> 6 2 2 B <NA> 3
#> 7 2 3 B <NA> 3
#> 8 2 4 B <NA> 3
Created on 2021-08-30 by the reprex package (v2.0.1)
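If you would rather stay in the tidyverse instead of dropping to merge(), a left_join() sketch of the same pipeline (the dot placeholder passes the summary as the second argument):
data %>%
  pivot_longer(cols = starts_with("Party"), values_drop_na = TRUE) %>%
  group_by(ID) %>%
  summarise(Count = n_distinct(value)) %>%
  left_join(data, ., by = "ID")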
You can also do it this way, combining the two columns into one vector per group and dropping the NAs before counting:
library(dplyr)
data <- read.table(text =
"ID Wave Party Party2013
1 1 A A
1 2 A NA
1 3 B NA
1 4 B NA
2 1 A C
2 2 B NA
2 3 B NA
2 4 B NA", header = TRUE)
data %>%
  group_by(ID) %>%
  mutate(Count = c(Party, Party2013) %>%
           na.omit() %>% unique() %>% length())
output
# A tibble: 8 x 5
# Groups: ID [2]
ID Wave Party Party2013 Count
<int> <int> <chr> <chr> <int>
1 1 1 A A 2
2 1 2 A NA 2
3 1 3 B NA 2
4 1 4 B NA 2
5 2 1 A C 3
6 2 2 B NA 3
7 2 3 B NA 3
8 2 4 B NA 3
I have the following data
df <- tibble(Type=c(1,2,2,1,1,2),ID=c(6,4,3,2,1,5))
Type ID
1 6
2 4
2 3
1 2
1 1
2 5
For each of the type 2 rows, I want to find the IDs of the type 1 rows just below and above them. For the above dataset, the output will be:
Type ID IDabove IDbelow
1 6 NA NA
2 4 6 2
2 3 6 2
1 2 NA NA
1 1 NA NA
2 5 1 NA
Naively, I can write a for loop to achieve this, but that would be too time consuming for the dataset I am dealing with.
One approach uses dplyr's lag and lead to get the previous and next values respectively, and data.table's rleid to create groups of consecutive Type values.
library(dplyr)
library(data.table)
df %>%
  mutate(IDabove = ifelse(Type == 2, lag(ID), NA),
         IDbelow = ifelse(Type == 2, lead(ID), NA),
         grp = rleid(Type)) %>%
  group_by(grp) %>%
  mutate(IDabove = first(IDabove),
         IDbelow = last(IDbelow)) %>%
  ungroup() %>%
  select(-grp)
# Type ID IDabove IDbelow
# <dbl> <dbl> <dbl> <dbl>
#1 1 6 NA NA
#2 2 4 6 2
#3 2 3 6 2
#4 1 2 NA NA
#5 1 1 NA NA
#6 2 5 1 NA
A dplyr-only solution:
You could create your own rleid-style function, then apply the logic provided by Ronak (many thanks, upvoted).
library(dplyr)
my_func <- function(x) {
  # base-R equivalent of data.table::rleid(): run-length encode, then
  # repeat each run's index once for every element of the run
  x <- rle(x)$lengths
  rep(seq_along(x), times = x)
}
# this part is the same as provided by Ronak.
df %>%
  mutate(IDabove = ifelse(Type == 2, lag(ID), NA),
         IDbelow = ifelse(Type == 2, lead(ID), NA),
         grp = my_func(Type)) %>%
  group_by(grp) %>%
  mutate(IDabove = first(IDabove),
         IDbelow = last(IDbelow)) %>%
  ungroup() %>%
  select(-grp)
Output:
Type ID IDabove IDbelow
<dbl> <dbl> <dbl> <dbl>
1 1 6 NA NA
2 2 4 6 2
3 2 3 6 2
4 1 2 NA NA
5 1 1 NA NA
6 2 5 1 NA
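For completeness: since dplyr 1.1.0 you no longer need a hand-rolled rleid, because consecutive_id() is built in. A sketch assuming that version or later:
library(dplyr)
df %>%
  mutate(IDabove = ifelse(Type == 2, lag(ID), NA),
         IDbelow = ifelse(Type == 2, lead(ID), NA),
         # consecutive_id() is dplyr's native equivalent of data.table::rleid()
         grp = consecutive_id(Type)) %>%
  group_by(grp) %>%
  mutate(IDabove = first(IDabove), IDbelow = last(IDbelow)) %>%
  ungroup() %>%
  select(-grp)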
I have a data frame like this:
name count
a 3
a 5
a 8
b 2
a 9
b 7
I want to calculate the row differences within each name group, but only between consecutive rows. My code is:
data %>%
  group_by(name) %>%
  mutate(last_count = lag(count), diff = count - last_count)
However, I get a result like the table below:
name count last_count diff
a 3 NA NA
a 5 3 2
a 8 5 3
b 2 NA NA
a 9 8 1
b 7 2 5
But what I want should look like this:
name count last_count diff
a 3 NA NA
a 5 3 2
a 8 5 3
b 2 NA NA
a 9 NA NA
b 7 NA NA
Thanks in advance to whoever can help me fix it!
Does this work:
library(dplyr)
df %>%
  mutate(last_count = case_when(name == lag(name) ~ lag(count), TRUE ~ NA_real_),
         diff = case_when(name == lag(name) ~ count - lag(count), TRUE ~ NA_real_))
# A tibble: 6 x 4
name count last_count diff
<chr> <dbl> <dbl> <dbl>
1 a 3 NA NA
2 a 5 3 2
3 a 8 5 3
4 b 2 NA NA
5 a 9 NA NA
6 b 7 NA NA
We could use rleid to create a grouping column based on adjacent matching values in the 'name' column, and then apply the diff within each group.
library(dplyr)
library(data.table)
data %>%
  group_by(grp = rleid(name)) %>%
  mutate(last_count = lag(count), diff = count - last_count) %>%
  ungroup() %>%
  select(-grp)
-output
# A tibble: 6 x 4
# name count last_count diff
# <chr> <int> <int> <int>
#1 a 3 NA NA
#2 a 5 3 2
#3 a 8 5 3
#4 b 2 NA NA
#5 a 9 NA NA
#6 b 7 NA NA
Or using base R with ave and rle
data$diff <- with(data, ave(count, with(rle(name),
  rep(seq_along(values), lengths)), FUN = function(x) c(NA, diff(x))))
data
data <- structure(list(name = c("a", "a", "a", "b", "a", "b"), count = c(3L,
5L, 8L, 2L, 9L, 7L)), class = "data.frame", row.names = c(NA,
-6L))
I have the following data frame in R
df1 <- data.frame(
"ID" = c("A", "B", "A", "B"),
"Value" = c(1, 2, 5, 5),
"freq" = c(1, 3, 5, 3)
)
I wish to obtain the following data frame
Value freq ID
1 1 A
2 NA A
3 NA A
4 NA A
5 1 A
1 NA B
2 2 B
3 NA B
4 NA B
5 5 B
I have tried the following code
library(tidyverse)
df_new <- bind_cols(df1 %>%
  select(Value, freq, ID) %>%
  complete(., expand(., Value = min(df1$Value):max(df1$Value))))
I am getting the following output
Value freq ID
<dbl> <dbl> <fct>
1 1 A
2 3 B
3 NA NA
4 NA NA
5 5 A
5 3 B
Could someone please help me?
Using tidyr::full_seq we can generate the full sequence of Value, but nesting(full_seq(Value, 1)) will return an error:
Error: by can't contain join column full_seq(Value, 1) which is missing from RHS
so we need to add a name, hence nesting(Value = full_seq(Value, 1)):
library(tidyr)
df1 %>% complete(ID, nesting(Value=full_seq(Value,1)))
# A tibble: 10 x 3
ID Value freq
<fct> <dbl> <dbl>
1 A 1. 1.
2 A 2. NA
3 A 3. NA
4 A 4. NA
5 A 5. 5.
6 B 1. NA
7 B 2. 3.
8 B 3. NA
9 B 4. NA
10 B 5. 3.
Using data.table:
library(data.table)
setDT(df1)
setkey(df1, ID, Value)
df1[CJ(ID = c("A", "B"), Value = 1:5)]
ID Value freq
1: A 1 1
2: A 2 NA
3: A 3 NA
4: A 4 NA
5: A 5 5
6: B 1 NA
7: B 2 3
8: B 3 NA
9: B 4 NA
10: B 5 3
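If you prefer not to hard-code the levels, the same keyed join can be built from the data itself (a sketch; it assumes Value should span its observed minimum to maximum):
df1[CJ(ID = unique(df1$ID), Value = min(df1$Value):max(df1$Value))]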
Would the following approach work for you?
library(dplyr)
with(data = df1,
     expr = {
       data.frame(Value = rep(wrapr::seqi(min(Value), max(Value)), length(unique(ID))),
                  ID = unique(ID))
     }) %>%
  left_join(y = df1,
            by = c("ID" = "ID", "Value" = "Value")) %>%
  arrange(ID, Value)
Results
Value ID freq
1 1 A 1
2 2 A NA
3 3 A NA
4 4 A NA
5 5 A 5
6 1 B NA
7 2 B 3
8 3 B NA
9 4 B NA
10 5 B 3
Comments
If I'm following your example correctly, Value runs from 1 to 5 within each ID group. If that is the case, my approach is to generate that grid by reading the unique values of both columns from the original data frame.
The only variable carried over from the original data frame is freq, which may or may not be available for a given ID-Value pair. I would join that variable via left_join (as you seem to like the tidyverse).
In your example data the freq variable takes the values 1, 3, 5, but in the desired output you list 1, 2, 5? In my solution I took the original freq and left-joined it. You can modify it further using a normal dplyr pipeline, if that is something you intended to do.
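For comparison, a dependency-free base-R sketch of the same grid-then-join idea, with expand.grid() in place of wrapr::seqi():
grid <- expand.grid(Value = min(df1$Value):max(df1$Value), ID = unique(df1$ID))
merge(grid, df1, by = c("ID", "Value"), all.x = TRUE)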
I have a messy dataset (from a CATI survey). I am struggling to prepare and tidy it because of the interviewee/partner/child fields and the doublets (pairs of similar questions) in each column.
For example, a chunk of the data for gender looks like this (1 = male, 2 = female):
# A tibble: 7 x 7
Household_size q_1 q_2 q_3 q_4 q_5 q_6
<int> <int> <int> <int> <int> <int> <int>
1 3 1 2 1 NA NA NA
2 2 2 1 NA NA NA NA
3 5 1 2 1 1 2 NA
4 3 2 2 1 NA NA NA
5 6 2 1 1 1 1 1
6 5 1 2 1 2 2 NA
7 3 1 2 2 NA NA NA
Metadata says :
q_1 is interviewee gender
q_2 is the interviewee's partner's gender (if there is any)
q_3:q_6 are the interviewee's kids' genders (if there are any)
The data has the same format for education, occupation, etc. (pairs of identical questions for interviewee/partner/kid).
How can I tidy up this dataset to be able to easily calculate statistical summaries or visualizations? I would like to have something like this (total number of males and females in the survey, regardless of age):
Male 15
Female 12
The table function in base R might be what you are looking for; it gives you a versatile option which counts all the levels:
table(unlist(df1[,c(2:7)]))
Alter this to make the data frame name (df1) and column numbers c(2:7) suit your needs.
This replicates your example too:
df1 <- data.frame("v" = LETTERS[1:7], "q1" = c(1,2,1,2,2,1,1), "q2" = c(2,1,2,2,1,2,2), "q3" = c(1,NA,1,1,1,1,2), "q4" = c(NA, NA,1,NA,1,2,NA), "q5" = c(NA, NA,2,NA,1,2,NA), "q6" = c(NA, NA,NA,NA,1,NA,NA))
> table(unlist(df1[,c(2:7)]))
1 2
15 12
Some more examples:
df1 <- data.frame("v" = LETTERS[1:5], "q1" = c(1,2,6,1,1), "q2" = c("k","k","f","h","p"), "q3" = c(1,2,NA,1,NA))
> df1
v q1 q2 q3
1 A 1 k 1
2 B 2 k 2
3 C 6 f NA
4 D 1 h 1
5 E 1 p NA
table(unlist(df1[,c(2,4)]))
table(unlist(df1[,3]))
> table(unlist(df1[,c(2,4)]))
1 2 6
5 2 1
> table(unlist(df1[,3]))
f h k p
1 1 2 1
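To turn the coded counts into the Male/Female summary the question asks for, a small post-processing sketch (this assumes the first, 7-column df1 above, where 1 = male and 2 = female):
counts <- table(unlist(df1[, 2:7]))
data.frame(gender = c("Male", "Female")[as.integer(names(counts))],
           n = as.vector(counts))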
It's straightforward if you put the data into a long format, filter out the NAs, make gender into a factor, and tally up the counts. I'm using fct_recode from forcats (ships with tidyverse), but you can also change the labels of factor levels in base R.
library(tidyverse)
df %>%
  gather(key = person, value = gender, -Household_size) %>%
  filter(!is.na(gender)) %>%
  mutate(gender_fct = as.factor(gender) %>%
           forcats::fct_recode("Male" = "1", "Female" = "2")) %>%
  count(gender_fct)
#> # A tibble: 2 x 2
#> gender_fct n
#> <fct> <int>
#> 1 Male 15
#> 2 Female 12
Created on 2018-05-05 by the reprex package (v0.2.0).
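Since this answer was written, gather() has been superseded by pivot_longer(); a sketch of the same pipeline in current tidyr:
library(tidyverse)
df %>%
  # values_drop_na = TRUE replaces the separate filter(!is.na(gender)) step
  pivot_longer(cols = -Household_size, names_to = "person",
               values_to = "gender", values_drop_na = TRUE) %>%
  mutate(gender_fct = factor(gender, levels = 1:2, labels = c("Male", "Female"))) %>%
  count(gender_fct)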
Currently I am trying to count the frequency of combinations of rows from one data frame in another.
A B
1 a
1 b
1 c
2 a
2 b
2 c
I have this data frame, and I would like to count the frequency of each of its (A, B) combinations in another data frame, which looks like this:
C D
1 a
1 a
1 b
1 b
2 b
2 c
2 c
As you can see, the number of rows is different, so a simple table of counts does not work. I would like it to look like this after the frequency count is done:
A B freq
1 a 2
1 b 2
1 c 0
2 a 0
2 b 1
2 c 2
As you can see, it counts the frequency of every combination, including 0 for the groups with no data.
Thanks to anyone who helps!
By using merge and aggregate
df2$freq = 1
df = merge(df1, aggregate(freq ~ ., df2, length),
           by.x = c('A', 'B'), by.y = c('C', 'D'), all.x = TRUE)
df[is.na(df)] = 0
df
A B freq
1 1 a 2
2 1 b 2
3 1 c 0
4 2 a 0
5 2 b 1
6 2 c 2
More Info
aggregate(freq ~ ., df2, length)
C D freq
1 1 a 2
2 1 b 2
3 2 b 1
4 2 c 2
Data Input
df1
A B
1 1 a
2 1 b
3 1 c
4 2 a
5 2 b
6 2 c
df2
C D
1 1 a
2 1 a
3 1 b
4 1 b
5 2 b
6 2 c
7 2 c
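A compact base-R alternative worth knowing: xtabs() cross-tabulates and keeps the zero cells, so it reproduces the desired output directly, provided every level of C and D occurs somewhere in df2 (if a level is entirely absent from df2, you still need the merge against df1 above):
as.data.frame(xtabs(~ C + D, df2), responseName = "freq")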
This looks to be a question of how to tabulate frequencies across two factors without dropping missing levels.
Here's the dplyr solution. This assumes that dfAB, as in your example data, contains no duplicates (dfAB is interchangeable with the output of expand.grid if you don't already have the level combinations in a data frame)
library(dplyr)
dfAB %>%
  # need at least one non-joining variable to tell matches from non-matches
  left_join(mutate(dfCD, dummy = 1), by = c("A" = "C", "B" = "D")) %>%
  group_by(A, B) %>%
  summarize(freq = sum(dummy, na.rm = TRUE))
Output:
# A tibble: 6 x 3
# Groups: A [?]
A B freq
<dbl> <chr> <dbl>
1 1 a 2
2 1 b 2
3 1 c 0
4 2 a 0
5 2 b 1
6 2 c 2
(if there are duplicates in dfAB, add a distinct call to the chain before the join)
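An equivalent sketch that avoids the dummy column by tallying first and then joining the counts back onto the full grid (assumes dplyr and tidyr are available):
library(dplyr)
library(tidyr)
dfCD %>%
  count(A = C, B = D, name = "freq") %>%   # tally observed combinations
  right_join(dfAB, by = c("A", "B")) %>%   # reattach the full level grid
  replace_na(list(freq = 0)) %>%
  arrange(A, B)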
# paste the columns of each data frame together row-wise to build keys
df1_rows = Reduce(paste, df1)
df2_rows = Reduce(paste, df2)
# for each df1 key, count how many df2 keys match it
data.frame(df1, freq = sapply(df1_rows, function(x) sum(df2_rows %in% x)),
           row.names = NULL)
# A B freq
#1 1 a 2
#2 1 b 2
#3 1 c 0
#4 2 a 0
#5 2 b 1
#6 2 c 2
DATA
df1 = data.frame(A = c(1L, 1L, 1L, 2L, 2L, 2L),
B = c("a", "b", "c", "a", "b", "c"))
df2 = data.frame(C = c(1L, 1L, 1L, 1L, 2L, 2L, 2L),
D = c("a", "a", "b", "b", "b", "c", "c"))