I am trying to calculate the Jaccard similarity between a source vector and comparison vectors in a tibble.
First, I create a tibble with a names_ field (a character vector) and, using strsplit(), a list-column names_vec in which each row is a character vector (one letter per element).
Then, I create a new tibble with a column jaccard_sim that is supposed to hold the Jaccard similarity.
source_vec <- c('a', 'b', 'c')
df_comp <- tibble(names_ = c("b d f", "u k g", "m o c"),
                  names_vec = strsplit(names_, ' '))
df_comp_jaccard <- df_comp %>%
  dplyr::mutate(jaccard_sim = length(intersect(names_vec, source_vec)) /
                  length(union(names_vec, source_vec)))
All the values in jaccard_sim are zero. However, if we run something like this, we get the correct Jaccard similarity of 0.2 for the first entry:
a <- length(intersect(source_vec, df_comp[[1,2]]))
b <- length(union(source_vec, df_comp[[1,2]]))
a/b
You could simply add rowwise():
df_comp_jaccard <- df_comp %>%
  rowwise() %>%
  dplyr::mutate(jaccard_sim = length(intersect(names_vec, source_vec)) /
                  length(union(names_vec, source_vec)))
# A tibble: 3 x 3
names_ names_vec jaccard_sim
<chr> <list> <dbl>
1 b d f <chr [3]> 0.2
2 u k g <chr [3]> 0.0
3 m o c <chr [3]> 0.2
Using rowwise you get the intuitive behavior some would expect from mutate: "do this operation for every row".
Not using rowwise means you take advantage of vectorized functions, which is much faster (that's why it's the default), but it may yield unexpected results if you're not careful.
The impression that mutate (or other dplyr verbs) works row by row is an illusion created by vectorized functions; in fact you are always working with full columns.
I'll illustrate with a couple of examples:
Sometimes the result is the same, with a vectorized function such as paste:
tibble(a=1:5,b=5:1) %>% mutate(X = paste(a,b,sep="_"))
tibble(a=1:5,b=5:1) %>% rowwise %>% mutate(X = paste(a,b,sep="_"))
# # A tibble: 5 x 3
# a b X
# <int> <int> <chr>
# 1 1 5 1_5
# 2 2 4 2_4
# 3 3 3 3_3
# 4 4 2 4_2
# 5 5 1 5_1
And sometimes it's different, with a function that is not vectorized, such as max:
tibble(a=1:5,b=5:1) %>% mutate(max(a,b))
# # A tibble: 5 x 3
# a b `max(a, b)`
# <int> <int> <int>
# 1 1 5 5
# 2 2 4 5
# 3 3 3 5
# 4 4 2 5
# 5 5 1 5
tibble(a=1:5,b=5:1) %>% rowwise %>% mutate(max(a,b))
# # A tibble: 5 x 3
# a b `max(a, b)`
# <int> <int> <int>
# 1 1 5 5
# 2 2 4 4
# 3 3 3 3
# 4 4 2 4
# 5 5 1 5
Note that in a real-life situation you shouldn't use rowwise here, but pmax, which is vectorized for exactly this purpose:
tibble(a=1:5,b=5:1) %>% mutate(pmax(a,b))
# # A tibble: 5 x 3
# a b `pmax(a, b)`
# <int> <int> <int>
# 1 1 5 5
# 2 2 4 4
# 3 3 3 3
# 4 4 2 4
# 5 5 1 5
intersect is such a function: you fed it one list-column (a list of vectors) and one plain vector, and these two objects have no elements in common, so the intersection is empty.
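To see this concretely, run the two set operations on the full columns, using the df_comp and source_vec defined in the question (a quick check):
length(intersect(df_comp$names_vec, source_vec))
# [1] 0  (the list-column as a whole shares no elements with source_vec)
length(union(df_comp$names_vec, source_vec))
# [1] 6  (the 3 list elements plus the 3 letters of source_vec)
Because these lengths are computed once for the whole column, mutate() recycles 0/6 = 0 to every row.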
We can use map to loop through the list-column:
library(tidyverse)
df_comp %>%
  mutate(jaccard_sim = map_dbl(names_vec,
                               ~ length(intersect(.x, source_vec)) /
                                 length(union(.x, source_vec))))
# A tibble: 3 x 3
# names_ names_vec jaccard_sim
# <chr> <list> <dbl>
#1 b d f <chr [3]> 0.2
#2 u k g <chr [3]> 0.0
#3 m o c <chr [3]> 0.2
The map functions are optimized. Below are the system.time() results for a slightly bigger dataset:
df_comp1 <- df_comp[rep(1:nrow(df_comp), 1e5),]
system.time({
  df_comp1 %>%
    rowwise() %>%
    dplyr::mutate(jaccard_sim = length(intersect(names_vec, source_vec)) /
                    length(union(names_vec, source_vec)))
})
#   user  system elapsed
#  25.59    0.05   25.96

system.time({
  df_comp1 %>%
    mutate(jaccard_sim = map_dbl(names_vec,
                                 ~ length(intersect(.x, source_vec)) /
                                   length(union(.x, source_vec))))
})
#   user  system elapsed
#  13.22    0.00   13.22
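As an aside, length(union(A, B)) equals length(A) + length(B) - length(intersect(A, B)) whenever the elements within each vector are distinct (as they are here), so one set operation per row is enough; a small sketch along the same lines:
df_comp %>%
  mutate(jaccard_sim = map_dbl(names_vec, function(v) {
    n_common <- length(intersect(v, source_vec))
    # |A union B| = |A| + |B| - |A intersect B| for vectors of distinct elements
    n_common / (length(v) + length(source_vec) - n_common)
  }))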
I am trying to create a list column within a data frame, specifying the range using existing columns, something like:
# A tibble: 3 x 3
A B C
<dbl> <dbl> <list>
1 1 6 c(1, 2, 3, 4, 5, 6)
2 2 5 c(2, 3, 4, 5)
3 3 4 c(3, 4)
The catch is that it would need to be created as follows:
df %>% mutate(C = c(A:B))
I have a dataset containing integers entered as ranges, i.e. someone has entered "7 to 26". I've separated the ranges into two columns A & B, or "start" and "end", and was hoping to use c(A:B) to create a list, but using dplyr I keep getting:
Warning messages:
1: In A:B : numerical expression has 3 elements: only the first used
2: In A:B : numerical expression has 3 elements: only the first used
Which gives:
# A tibble: 3 x 3
A B C
<dbl> <dbl> <list>
1 1 6 list(1:6)
2 2 5 list(1:6)
3 3 4 list(1:6)
Has anyone had a similar issue and found a workaround?
You can use map2() in purrr
library(dplyr)
df %>%
  mutate(C = purrr::map2(A, B, seq))
or do rowwise() before mutate()
df %>%
  rowwise() %>%
  mutate(C = list(A:B)) %>%
  ungroup()
Both methods give
# # A tibble: 3 x 3
# A B C
# <int> <int> <list>
# 1 1 6 <int [6]>
# 2 2 5 <int [4]>
# 3 3 4 <int [2]>
Data
df <- tibble::tibble(A = 1:3, B = 6:4)
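For reference, the warnings in the question come from the : operator itself, which only ever looks at the first element of each argument; without rowwise() the whole columns are passed in, so the same 1:6 is computed once and recycled to every row. A quick base R illustration:
A <- c(1, 2, 3)
B <- c(6, 5, 4)
A:B
# [1] 1 2 3 4 5 6
# Warning messages:
# 1: In A:B : numerical expression has 3 elements: only the first used
# 2: In A:B : numerical expression has 3 elements: only the first used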
I am trying to calculate Cohen's kappa values for multiple teacher-segment permutations. In this example, there are six unique teacher-segment combinations. For example, teacher1-segment1 has two different raters, and I would like to see the ICC of these two raters for teacher1-segment1 (and for all the other teacher-segment permutations).
I have a data set such as this.
full.data <- read_table2('Rater teacher segment subject1 subject2 subject3
A 1 1 1 4 1
B 1 1 3 4 3
B 2 2 2 3 2
C 2 2 1 4 1
D 3 1 4 4 4
E 3 1 4 3 4
D 4 2 3 3 3
A 4 2 4 3 4
B 5 2 4 3 4
A 5 2 5 3 5
D 6 1 5 3 5
E 6 1 5 3 5')
I know that if I wanted to get Cohen's kappa for just one teacher-segment group, I would transform the data like this,
one.permuation <- read_table2('Rater RaterA-teacher1-segment1 RaterB-teacher1-segment1
subject1 1 3
subject2 4 4
subject3 1 3')
and then run,
library(irr)
print(icc(myRatings, model = "twoway", type = "consistency", unit = "average"))
Which would give me just ONE kappa value for that particular teacher-segment.
How would I get the values for all the teacher-segment permutations at once (each teacher-segment group has a different pair of raters)?
How do I present these 6 different kappa values in a way that makes sense? I've never done something like this before; I'm hoping to get some insight from experienced stats folks.
Although not shown here, raters give both an ordinal response (a 1-4 score) and a nominal one (yes/no). Should I be using a different kappa function for these different kinds of scales? From the psych package documentation: "Cohen's kappa (Cohen, 1960) and weighted kappa (Cohen, 1968) may be used to find the agreement of two raters when using nominal scores."
Here is what I tried for you. You said that you want to calculate Cohen's kappa values, so I decided to use cohen.kappa() from the psych package rather than icc(), which I am not familiar with; I hope you do not mind. The key thing was to transform your data in a way that lets you run cohen.kappa() on everything at once. Following your one.permuation example, I created a data frame that has teacher, segment, subject, and the raters (A, B, C, D, and E) as columns; pivot_longer() and pivot_wider() handled this. Then I needed to move the numeric values into two columns (row-wise value sorting), for which I used Ananda Mahto's SOfun package (Ananda is the author of the splitstackshape package). Then I grouped the data by teacher and segment and nested it into lists. For each list element containing a data frame, I converted the data frame to a matrix, applied cohen.kappa(), and collected the results with tidy(). Finally, I used unnest() to see the results.
library(tidyverse)
library(psych)
library(devtools)
install_github("mrdwab/SOfun")
library(SOfun)
library(broom)
pivot_longer(full.data, cols = subject1:subject3,
             names_to = "subject", values_to = "rating_score") %>%
  pivot_wider(id_cols = c("teacher", "segment", "subject"),
              names_from = "Rater", values_from = "rating_score") %>%
  as.matrix %>%
  naLast(by = "row") %>%
  as_tibble %>%
  select(-c(subject, C:E)) %>%
  type_convert() %>%
  group_by(teacher, segment) %>%
  nest() %>%
  mutate(result = map(.x = data,
                      .f = function(x) cohen.kappa(as.matrix(x)) %>% tidy())) %>%
  unnest(result)
# teacher segment data type estimate conf.low conf.high
# <dbl> <dbl> <list<df[,2]>> <chr> <dbl> <dbl> <dbl>
# 1 1 1 [3 x 2] unweighted 0.25 -0.0501 0.550
# 2 1 1 [3 x 2] weighted 0.571 -0.544 1
# 3 2 2 [3 x 2] unweighted 0 0 0
# 4 2 2 [3 x 2] weighted 0.571 -1 1
# 5 3 1 [3 x 2] unweighted 0 0 0
# 6 3 1 [3 x 2] weighted 0 0 0
# 7 4 2 [3 x 2] unweighted 0 0 0
# 8 4 2 [3 x 2] weighted 0 0 0
# 9 5 2 [3 x 2] unweighted 0.25 -0.0501 0.550
#10 5 2 [3 x 2] weighted 0.571 -0.544 1
#11 6 1 [3 x 2] unweighted 1 1 1
#12 6 1 [3 x 2] weighted 1 1 1
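On the presentation side of the question: once the kappas are in a tidy data frame like this, you can cut it down to whatever you want to report. For example, assuming you assign the result of the pipeline above to a name such as kappa_results (my name, not anything created above), keeping only the unweighted estimates looks like this:
kappa_results %>%
  filter(type == "unweighted") %>%
  select(teacher, segment, estimate, conf.low, conf.high)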
icc version
The data transformation is basically the same, but you need to do a bit more work to collect the multiple statistics: icc() returns an icclist object, and you want to turn that object into a data frame.
library(irr)
pivot_longer(full.data, cols = subject1:subject3,
             names_to = "subject", values_to = "rating_score") %>%
  pivot_wider(id_cols = c("teacher", "segment", "subject"),
              names_from = "Rater", values_from = "rating_score") %>%
  as.matrix %>%
  naLast(by = "row") %>%
  as_tibble %>%
  select(-c(subject, C:E)) %>%
  mutate_at(vars(A:B), .funs = list(~as.numeric(.))) %>%
  group_by(teacher, segment) %>%
  nest() %>%
  mutate(result = map(.x = data,
                      .f = function(x) enframe(unlist(icc(x,
                                                          model = "twoway",
                                                          type = "consistency",
                                                          unit = "average"))) %>%
                        pivot_wider(names_from = "name",
                                    values_from = "value"))) %>%
  unnest(result)
teacher segment data subjects raters model type unit icc.name value r0 Fvalue df1 df2 p.value conf.level lbound ubound
<chr> <chr> <list<d> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr>
1 1 1 [3 x 2] 3 2 twow~ cons~ aver~ ICC(C,2) 0.75 0 4 2 2 0.2 0.95 -8.74~ 0.993~
2 2 2 [3 x 2] 3 2 twow~ cons~ aver~ ICC(C,2) 0.75 0 4 2 2 0.2 0.95 -8.75 0.993~
3 3 1 [3 x 2] 3 2 twow~ cons~ aver~ ICC(C,2) 4.99~ 0 1 2 2 0.5 0.95 -38 0.974~
4 4 2 [3 x 2] 3 2 twow~ cons~ aver~ ICC(C,2) -8.3~ 0 0.999~ 2 2 0.5 0.95 -38 0.974~
5 5 2 [3 x 2] 3 2 twow~ cons~ aver~ ICC(C,2) 0.88~ 0 8.999~ 2 2 0.1 0.95 -3.33~ 0.997~
6 6 1 [3 x 2] 3 2 twow~ cons~ aver~ ICC(C,2) 1 0 Inf 2 2 0 0.95 1 1
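On your ordinal vs. nominal question: cohen.kappa() already reports both an unweighted and a weighted estimate (the type column above). The unweighted kappa is the natural one to report for the nominal yes/no responses, while the weighted kappa, which takes the size of each disagreement into account, is the usual choice for the ordinal 1-4 scores. If the yes/no ratings live in their own columns, you can run them through the same pipeline, or check a single rater pair directly; a minimal sketch with made-up data:
library(psych)
yn <- cbind(raterA = c(1, 0, 1),   # hypothetical yes/no ratings coded 1/0
            raterB = c(1, 0, 0))
cohen.kappa(yn)   # report the unweighted estimate for nominal data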
I am exploring the tidyverse packages, so I am interested in how to get the following task done in the tidy way. (One can easily circumvent the problem using the *apply functions.)
Consider the following data
tb <-
  lapply(matrix(c("a", "b", "c")), function(x) rep(x, 3)) %>%
  unlist %>%
  c(rep(c(1, 2, 3), 6)) %>%
  matrix(ncol = 3) %>%
  as_tibble(.name_repair = ~ c("tag", "x1", "x2")) %>%
  type.convert()
# A tibble: 9 x 3
tag x1 x2
<fct> <int> <int>
1 a 1 1
2 a 2 2
3 a 3 3
4 b 1 1
5 b 2 2
6 b 3 3
7 c 1 1
8 c 2 2
9 c 3 3
I group them using the nest() function, and for each group I want to apply a different function from a list of functions f_1, f_2, f_3:
f_1 <- function(x)
  x[, 1] + x[, 2]
f_2 <- function(x)
  x[, 1] - x[, 2]
f_3 <- function(x)
  x[, 1] * x[, 2]
tb_func_attached <-
  tb %>% group_by(tag) %>% nest() %>% mutate(func = c(f_1, f_2, f_3))
# A tibble: 3 x 3
tag data func
<fct> <list> <list>
1 a <tibble [3 x 2]> <fn>
2 b <tibble [3 x 2]> <fn>
3 c <tibble [3 x 2]> <fn>
I try to use invoke_map to apply the functions
tb_func_attached %>% {invoke_map(.$func, .$data)}
invoke_map(tb_func_attached$func, tb_func_attached$data)
But I get the error "Error in (function (x) : unused arguments (x1 = 1:3, x2 = 1:3)", while the following code runs fine:
> tb_func_attached$func[[1]](tb_func_attached$data[[1]])
x1
1 2
2 4
3 6
> tb_func_attached$func[[2]](tb_func_attached$data[[2]])
x1
1 0
2 0
3 0
> tb_func_attached$func[[3]](tb_func_attached$data[[3]])
x1
1 1
2 4
3 9
But invoke_map still does not work.
So the question is: given the nested data tb_func_attached, how do I apply the functions in tb_func_attached$func 'row-wise' to tb_func_attached$data?
And a side question: what is the reason for the retirement of invoke_map? It fits quite well into the concept of vectorisation, IMHO.
Update:
The previous version dealt with single-column data (tb had only the tag and x1 columns) and @A. Suliman's comment provides a solution.
However, when the data column in the nested tibble has a multi-column (matrix-like) structure, the code stops running again.
Use map2 to iterate over the list of functions first, and over the data column second. Like this:
tb_func_attached %>%
  mutate(output = map2(func, data, ~ .x(.y))) %>%
  unnest(data, output)
The output looks like this:
# A tibble: 9 x 4
tag x1 x2 x11
<fct> <int> <int> <int>
1 a 1 1 2
2 a 2 2 4
3 a 3 3 6
4 b 1 1 0
5 b 2 2 0
6 b 3 3 0
7 c 1 1 1
8 c 2 2 4
9 c 3 3 9
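As for the side question: invoke_map() was retired because the purrr authors consider the same operation clearer when written with the core map functions plus exec(). A sketch of the map2() call above rewritten with exec(): exec(f, df) calls f(df) with df as a single argument, whereas invoke_map() spliced the data frame into x1 = ..., x2 = ..., which is exactly the "unused arguments" error in the question.
tb_func_attached %>%
  mutate(output = map2(func, data, exec)) %>%  # exec(.x, .y) is .x(.y)
  unnest(data, output)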
I want to get all unique pairwise combinations of the unique values of a string column in a data frame, using the tidyverse (ideally).
Here is a dummy example:
library(tidyverse)
a <- letters[1:3] %>%
  tibble::as_tibble()
a
#> # A tibble: 3 x 1
#> value
#> <chr>
#> 1 a
#> 2 b
#> 3 c
tidyr::crossing(a, a) %>%
  magrittr::set_colnames(c("words1", "words2"))
#> # A tibble: 9 x 2
#> words1 words2
#> <chr> <chr>
#> 1 a a
#> 2 a b
#> 3 a c
#> 4 b a
#> 5 b b
#> 6 b c
#> 7 c a
#> 8 c b
#> 9 c c
Is there a way to remove 'duplicate' combinations here? That is, have the output be the following in this example:
# A tibble: 3 x 2
#> words1 words2
#> <chr> <chr>
#> 1 a b
#> 2 a c
#> 3 b c
I was hoping there would be a nice purrr::map or filter approach to pipe into in order to complete the above.
EDIT: There are similar questions to this one, e.g. here, as marked by @Sotos. Here I am specifically looking for tidyverse (purrr, dplyr) ways to complete the pipeline I have set up. The other answers use various other packages that I do not want to include as dependencies.
I wish there were a better way, but I usually use this...
library(tidyverse)
df <- tibble(value = letters[1:3])
df %>%
  expand(value, value1 = value) %>%
  filter(value < value1)  # keeps one ordering of each pair and drops self-pairs
# # A tibble: 3 x 2
# value value1
# <chr> <chr>
# 1 a b
# 2 a c
# 3 b c
Something like this?
tidyr::crossing(a, a) %>%
  magrittr::set_colnames(c("words1", "words2")) %>%
  rowwise() %>%
  mutate(words1 = sort(c(words1, words2))[1],  # sort order of words for each row
         words2 = sort(c(words1, words2))[2]) %>%
  filter(words1 != words2) %>%  # remove word combinations with itself
  unique()                      # remove duplicates
# A tibble: 3 x 2
words1 words2
<chr> <chr>
1 a b
2 a c
3 b c
I have a tibble with one column being a list-column, always holding two numeric values named a and b (e.g. as the result of calling purrr::map with a function that returns a list), say:
df <- tibble(x = 1:3, y = list(list(a = 1, b = 2), list(a = 3, b = 4), list(a = 5, b = 6)))
df
# A tibble: 3 × 2
x y
<int> <list>
1 1 <list [2]>
2 2 <list [2]>
3 3 <list [2]>
How do I separate the list column y into two columns a and b, and get:
df_res <- tibble(x = 1:3, a = c(1,3,5), b = c(2,4,6))
df_res
# A tibble: 3 × 3
x a b
<int> <dbl> <dbl>
1 1 1 2
2 2 3 4
3 3 5 6
I am looking for something like tidyr::separate that deals with a list instead of a string.
Using dplyr (current release: 0.7.0):
bind_cols(df[1], bind_rows(df$y))
# # A tibble: 3 x 3
# x a b
# <int> <dbl> <dbl>
# 1 1 1 2
# 2 2 3 4
# 3 3 5 6
edit based on OP's comment:
To embed this in a pipe and in case you have many non-list columns, we can try:
df %>% select(-y) %>% bind_cols(bind_rows(df$y))
We could also make use of map_df from purrr:
library(tidyverse)
df %>%
  summarise(x = list(x), new = list(map_df(.$y, bind_rows))) %>%
  unnest
# A tibble: 3 x 3
# x a b
# <int> <dbl> <dbl>
#1 1 1 2
#2 2 3 4
#3 3 5 6
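For what it's worth, if you are on tidyr 1.0.0 or later, unnest_wider() was added for exactly this kind of list-column; a minimal sketch with the df from the question:
library(tidyr)
df %>%
  unnest_wider(y)
# expected result: a 3 x 3 tibble with columns x, a and b, the same as df_res above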