I have a data frame, and my goal is to find the pattern of var1 combinations by ID: if an ID shares at least 3 of the same categories with another ID, we set "Yes", and we also record which IDs share the combination.
ID1: I have 4 unique categories (A,B,C,D)
ID2: I have 4 unique categories (B,C,D,F)
ID3: I have 3 unique categories (A,B,C)
ID4: I have 2 unique categories (A,B)
ID5: I have 3 unique categories (C,D,F)
We can see that ID1 and ID2 have at least 3 categories in common (B,C,D), ID1 and ID3 have (A,B,C), and ID2 and ID5 have at least 3 in common (C,D,F). So four IDs get "Yes" and only ID4 gets "No".
ID <- c(1,1,1,1,1,2,2,2,2,2,3,3,3,3,4,4,5,5,5,5,5)
var1 <- c("A","B","C","A","D","D","C","D","B","F","A","B","C","C",
"A","B","D","D","C","C","F")
df <- data.frame(ID,var1)
ID var1
1 1 A
2 1 B
3 1 C
4 1 A
5 1 D
6 2 D
7 2 C
8 2 D
9 2 B
10 2 F
11 3 A
12 3 B
13 3 C
14 3 C
15 4 A
16 4 B
17 5 D
18 5 D
19 5 C
20 5 C
21 5 F
My output will be
ID var1 var2 var3
1 1 A Yes 1-2
2 1 B Yes 1-2
3 1 C Yes 1-2
4 1 A Yes 1-2
5 1 D Yes 1-2
6 2 D Yes 1-2
7 2 C Yes 1-2
8 2 D Yes 1-2
9 2 B Yes 1-2
10 2 F Yes 1-2
11 3 A Yes 1-3
12 3 B Yes 1-3
13 3 C Yes 1-3
14 3 C Yes 1-3
15 4 A No 4
16 4 B No 4
17 5 D Yes 2-5
18 5 D Yes 2-5
19 5 C Yes 2-5
20 5 C Yes 2-5
21 5 F Yes 2-5
Thanks in advance.
The problem is essentially one of constructing an adjacency table based on common memberships (see, e.g., Working with Bipartite/Affiliation Network Data in R). To do that, we make a table out of the data (after eliminating duplicates) and then take the cross-product.
dd <- unique(df)
tab <- table(dd)
dd <- crossprod(t(tab))
diag(dd) <- 0
# ID
# ID 1 2 3 4 5
# 1 0 3 3 2 2
# 2 3 0 2 1 3
# 3 3 2 0 2 1
# 4 2 1 2 0 0
# 5 2 3 1 0 0
The table above allows us to see the number of categories that IDs share. Now we just have to go through the rows; for each row, I select the first ID that has a value of at least 3 (matched).
matched <- apply(dd >= 3, MAR = 1, function(x) which(x == TRUE)[1])
# 1 2 3 4 5
# 2 1 1 NA 2
So "1" matched with "2", "2" matched with "1", "3" matched with "1", "4" has no matches, "5" matched with "2". Finish off by manipulating this output to get the desired final product:
out <- apply(cbind(as.numeric(names(matched)), matched), MAR = 1, function(x) {
if (any(is.na(x))) {
data.frame(var2 = "No", var3 = x[1])
} else {
data.frame(var2 = "Yes", var3 = paste(sort(x), collapse = "-"))
}
})
out <- plyr::ldply(out, .id = "ID")
merge(df, out, all.x = TRUE)
# ID var1 var2 var3
# 1 1 A Yes 1-2
# 2 1 B Yes 1-2
# 3 1 C Yes 1-2
# 4 1 A Yes 1-2
# 5 1 D Yes 1-2
# 6 2 D Yes 1-2
# 7 2 C Yes 1-2
# 8 2 D Yes 1-2
# 9 2 B Yes 1-2
# 10 2 F Yes 1-2
# 11 3 A Yes 1-3
# 12 3 B Yes 1-3
# 13 3 C Yes 1-3
# 14 3 C Yes 1-3
# 15 4 A No 4
# 16 4 B No 4
# 17 5 D Yes 2-5
# 18 5 D Yes 2-5
# 19 5 C Yes 2-5
# 20 5 C Yes 2-5
# 21 5 F Yes 2-5
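If, instead of only the first match, you want to list every ID that shares at least 3 categories, a small variation on the same adjacency matrix works (a sketch, not part of the original answer; all_matches is just an illustrative name):
all_matches <- apply(dd >= 3, MARGIN = 1, function(x) paste(which(x), collapse = ","))
all_matches
# ID 1 matches 2 and 3; ID 2 matches 1 and 5; ID 3 matches 1; ID 4 matches none; ID 5 matches 2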
I have a dataset that contains data about the diagnosis of a number of medical conditions in patients. The values in these dimensions/columns are NA (patient was not in the diagnosis process), 1 = yes (diagnosed with the condition), and 2 = no (not diagnosed with the condition). I want to create a separate column that counts the number of columns with a 1 for each patient ID. My data frame has 1000 separate dimensions, and I need to select columns specifically by index or by column name. My dataset looks like this:
ID A B C D
1 NA 1 2 1
2 1 1 2 1
3 NA 2 2 1
4 1 1 2 1
5 2 1 1 1
6 2 1 NA 1
I obviously have far more columns and rows than this, so any solution will need to handle the indexing. The reason I need to do this is to know the number of medical conditions a patient is suffering from. I was able to solve this problem in SPSS using the 'transform' option, as SPSS is my application of choice for data analysis, but I have been enjoying R and am intrigued by the number of statistical learning algorithms available in it.
I have tried the apply function and colSums, but I am struggling with the indexing as well as with incrementing the count. I have looked into for loops as well, with no luck for my specific problem.
mydata$count <- apply(X=mydata[,-1],1,FUN=function(x) length(which(x=='1')))
Using [,-1] excludes the ID variable from the count.
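For reference, rowSums gives the same count and makes the column selection explicit, whether by position or by name (a sketch; the by-name version assumes the condition columns are literally named A to D as in the example):
mydata$count <- rowSums(mydata[, -1] == 1, na.rm = TRUE)                      # all columns except ID
mydata$count <- rowSums(mydata[, c("A", "B", "C", "D")] == 1, na.rm = TRUE)   # selecting by name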
Welcome to SO. From my perspective, what you need to do here is to convert your data format from "wide" to "long" (check out tidy data: https://vita.had.co.nz/papers/tidy-data.pdf).
library(tidyverse)
df <- read.table(header = TRUE,
text = "
ID A B C D
1 NA 1 2 1
2 1 1 2 1
3 NA 2 2 1
4 1 1 2 1
5 2 1 1 1
6 2 1 NA 1")
df %>%
gather(key = "patient", value = "type", A:D) %>%
mutate(contains_1 = ifelse(type == 1, "yes", "no"))
ID patient type contains_1
1 1 A NA <NA>
2 2 A 1 yes
3 3 A NA <NA>
4 4 A 1 yes
5 5 A 2 no
6 6 A 2 no
7 1 B 1 yes
8 2 B 1 yes
9 3 B 2 no
10 4 B 1 yes
11 5 B 1 yes
12 6 B 1 yes
13 1 C 2 no
14 2 C 2 no
15 3 C 2 no
16 4 C 2 no
17 5 C 1 yes
18 6 C NA <NA>
19 1 D 1 yes
20 2 D 1 yes
21 3 D 1 yes
22 4 D 1 yes
23 5 D 1 yes
24 6 D 1 yes
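If the end goal is the per-patient count of 1s, one possible continuation of this pipeline (a sketch, not part of the original answer) is to group the long data by ID and sum:
df %>%
  gather(key = "patient", value = "type", A:D) %>%
  group_by(ID) %>%
  summarise(count = sum(type == 1, na.rm = TRUE))
# IDs 1 to 6 give counts 2, 3, 1, 3, 3, 2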
This is my data frame
df <- data.frame(
id = 1:14,
group_id = c(rep(1:2, each = 3), rep(3:4, each = 4)),
type = rep("A", 14), stringsAsFactors = FALSE)
df[c(2,4,8,12),"type"] <- "B"
id group_id type
1 1 1 A
2 2 1 B
3 3 1 A
4 4 2 B
5 5 2 A
6 6 2 A
7 7 3 A
8 8 3 B
9 9 3 A
10 10 3 A
11 11 4 A
12 12 4 B
13 13 4 A
14 14 4 A
I'd like to keep all rows with type B as well as the following row.
I could do...
B <- which(df$type=="B")
afterB <- B+1
df_sel <- df[c(B, afterB), ]
df_sel <- df_sel[order(df_sel$id),]
df_sel
...to get what I want.
id group_id type
2 2 1 B
3 3 1 A
4 4 2 B
5 5 2 A
8 8 3 B
9 9 3 A
12 12 4 B
13 13 4 A
How can this be done in a more generic way?
Another way, very similar to what you do but in one step and without the need to reorder:
df_sel <- df[rep(which(df$type=="B"), e=2)+c(0, 1), ]
df_sel
# id group_id type
# 2 2 1 B
# 3 3 1 A
# 4 4 2 B
# 5 5 2 A
# 8 8 3 B
# 9 9 3 A
# 12 12 4 B
# 13 13 4 A
Using lag from dplyr
library(dplyr)
df[df$type == "B" | lag(df$type == "B", default = FALSE), ]
# id group_id type
#2 2 1 B
#3 3 1 A
#4 4 2 B
#5 5 2 A
#8 8 3 B
#9 9 3 A
#12 12 4 B
#13 13 4 A
Using grep provides the row indices of all instances of B (rows); concatenating (c()) these with rows + 1 and using the result to subset df will work.
rows <- grep("B", df[, "type"])
df[sort(c(rows, rows + 1)), ]
gives:
id group_id type
2 2 1 B
3 3 1 A
4 4 2 B
5 5 2 A
8 8 3 B
9 9 3 A
12 12 4 B
13 13 4 A
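One caveat (an addition, not in the original answers): if "B" could ever appear in the last row, rows + 1 would index past the end of df and produce an NA row; a hedged guard is to drop any index beyond nrow(df):
rows <- grep("B", df[, "type"])
idx <- sort(c(rows, rows + 1))
df[idx[idx <= nrow(df)], ]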
Suppose I have a data frame:
df <- data.frame(SID=sample(1:4,15,replace=T), Var1=c(rep("A",5),rep("B",5),rep("C",5)), Var2=sample(2:4,15,replace=T))
which comes out to something like this:
SID Var1 Var2
1 4 A 2
2 3 A 2
3 4 A 3
4 3 A 3
5 1 A 4
6 1 B 2
7 3 B 2
8 4 B 4
9 4 B 4
10 3 B 2
11 2 C 2
12 2 C 2
13 4 C 4
14 2 C 4
15 3 C 3
What I hope to accomplish is to find the count of unique SIDs (see the update below; this should have said the count of unique (SID, Var1) combinations) where the given row's Var1 is excluded from the count, with the count grouped by Var2. So for the example above, I would like this output:
SID Var1 Var2 Count.Excluding.Var1
1 4 A 2 3
2 3 A 2 3
3 4 A 3 1
4 3 A 3 1
5 1 A 4 3
6 1 B 2 3
7 3 B 2 3
8 4 B 4 3
9 4 B 4 3
10 3 B 2 3
11 2 C 2 4
12 2 C 2 4
13 4 C 4 2
14 2 C 4 2
15 3 C 3 2
For the 1st observation, we have a count of 3 because there are 3 unique (SID, Var1) combinations for the given Var2 value (2, in this case) where Var1 != A (the Var1 value of the 1st observation). Specifically, the count includes observations 6, 7 and 11, but not 12 (because we already accounted for the (SID, Var1) = (2,C) combination) and not row 2 (because we do not want Var1 to be "A"). All of these rows have the same Var2 value.
I'd preferably like to use dplyr functions and the %>% operator.
UPDATE
I apologize for the confusion and my incorrect explanation above. I have corrected what I intended to ask in the parentheses, but I am leaving my original phrasing as well because the majority of answers interpret it that way.
As for the example, I apologize for not setting the seed. There seems to have been some confusion regarding Count.Excluding.Var1 for rows 11 and 12. With unique (SID, Var1) combinations, rows 11 and 12 should make sense, as these count rows 1, 2, 6, and 7 xor 8.
A simple mapply can do the trick. But as the OP requested a %>%-based solution, an option could be:
df %>% mutate(Count.Excluding.Var1 =
mapply(function(x,y)nrow(unique(df[df$Var1 != x & df$Var2 == y,1:2])),.$Var1,.$Var2))
# SID Var1 Var2 Count.Excluding.Var1
# 1 4 A 2 3
# 2 2 A 3 3
# 3 4 A 4 3
# 4 4 A 4 3
# 5 3 A 4 3
# 6 4 B 3 1
# 7 3 B 3 1
# 8 3 B 3 1
# 9 4 B 2 3
# 10 2 B 3 1
# 11 2 C 2 2
# 12 4 C 4 2
# 13 1 C 4 2
# 14 1 C 2 2
# 15 3 C 4 2
Data:
The above results are based on the original data provided by the OP.
df <- data.frame(SID=sample(1:4,15,replace=T), Var1=c(rep("A",5),rep("B",5),rep("C",5)), Var2=sample(2:4,15,replace=T))
I could not think of a dplyr solution, but here's one with apply:
df$Count <- apply(df, 1, function(x) length(unique(df$SID[(df$Var1 != x['Var1']) & (df$Var2 == x['Var2'])])))
# SID Var1 Var2 Count
# 1 4 A 2 3
# 2 3 A 2 3
# 3 4 A 3 1
# 4 3 A 3 1
# 5 1 A 4 2
# 6 1 B 2 3
# 7 3 B 2 3
# 8 4 B 4 3
# 9 4 B 4 3
# 10 3 B 2 3
# 11 2 C 2 3
# 12 2 C 2 3
# 13 4 C 4 2
# 14 2 C 4 2
# 15 3 C 3 2
Here is a dplyr solution, as requested. For future reference, please use set.seed so we can reproduce your desired output with sample; otherwise I have to enter the data by hand...
I think this is your logic: you want n_distinct(SID) for each Var2, but for each row you want to exclude rows that have the same Var1 as the current row. A key observation here is row 3, where a simple grouped summarise would yield a count of 2. Of the rows with Var2 = 3, row 3 has SID = 4, row 4 has SID = 3, and row 15 has SID = 3, but we don't count row 3 or row 4, so the final count is one unique SID.
Here we first get the count of unique SID for each Var2, then the count of unique SID for each (Var1, Var2) combination. The first count is too large by the number of additional unique SID for each combination, so we subtract the second and add one. There is an edge case where a Var1 has only one corresponding Var2; this should return 0, since you would exclude all the possible values of SID. I added two rows to illustrate this.
library(tidyverse)
df <- read_table2(
"SID Var1 Var2
4 A 2
3 A 2
4 A 3
3 A 3
1 A 4
1 B 2
3 B 2
4 B 4
4 B 4
3 B 2
2 C 2
2 C 2
4 C 4
2 C 4
3 C 3
1 D 5
2 D 5"
)
df %>%
group_by(Var2) %>%
mutate(SID_per_Var2 = n_distinct(SID)) %>%
group_by(Var1, Var2) %>%
mutate(SID_per_Var1Var2 = n_distinct(SID)) %>%
ungroup() %>%
add_count(Var1) %>%
add_count(Var1, Var2) %>%
mutate(
Count.Excluding.Var1 = if_else(
n > nn,
SID_per_Var2 - SID_per_Var1Var2 + 1,
0
)
) %>%
select(SID, Var1, Var2, Count.Excluding.Var1)
#> # A tibble: 17 x 4
#> SID Var1 Var2 Count.Excluding.Var1
#> <int> <chr> <int> <dbl>
#> 1 4 A 2 3.
#> 2 3 A 2 3.
#> 3 4 A 3 1.
#> 4 3 A 3 1.
#> 5 1 A 4 3.
#> 6 1 B 2 3.
#> 7 3 B 2 3.
#> 8 4 B 4 3.
#> 9 4 B 4 3.
#> 10 3 B 2 3.
#> 11 2 C 2 4.
#> 12 2 C 2 4.
#> 13 4 C 4 2.
#> 14 2 C 4 2.
#> 15 3 C 3 2.
#> 16 1 D 5 0.
#> 17 2 D 5 0.
Created on 2018-04-12 by the reprex package (v0.2.0).
Here's a solution using purrr - you can wrap this in a mutate statement if you want, but I don't know that it adds much in this particular case.
library(purrr)
df$Count.Excluding.Var1 = map_int(1:nrow(df), function(n) {
df %>% filter(Var2 == Var2[n], Var1 != Var1[n]) %>% distinct() %>% nrow()
})
(Updated with input from comments by Calum You. Thanks!)
A 100% tidyverse solution:
library(tidyverse) # dplyr + purrr
df %>%
group_by(Var2) %>%
mutate(count = map_int(Var1,~n_distinct(SID[.x!=Var1],Var1[.x!=Var1])))
# # A tibble: 15 x 4
# # Groups: Var2 [3]
# SID Var1 Var2 count
# <int> <chr> <int> <int>
# 1 4 A 2 3
# 2 3 A 2 3
# 3 4 A 3 1
# 4 3 A 3 1
# 5 1 A 4 3
# 6 1 B 2 3
# 7 3 B 2 3
# 8 4 B 4 3
# 9 4 B 4 3
# 10 3 B 2 3
# 11 2 C 2 4
# 12 2 C 2 4
# 13 4 C 4 2
# 14 2 C 4 2
# 15 3 C 3 2
How do I extract specific rows of data when a column has repeating values? My data looks like this. I want to extract the row at the end of each run of X (e.g. A 3 10, A 2 3), or the index of that last value.
Name X M
A 1 1
A 2 9
A 3 10
A 1 1
A 2 3
A 1 5
A 2 6
A 3 4
A 4 5
A 5 3
B 1 1
B 2 9
B 3 10
B 1 1
B 2 3
Expected output
Index Name X M
3 A 3 10
5 A 2 3
10 A 5 3
13 B 3 10
15 B 2 3
Using base R duplicated and cumsum:
dups <- !duplicated(cumsum(df$X == 1), fromLast=TRUE)
cbind(df[dups,], Index=which(dups))
# Name X M Index
#3 A 3 10 3
#5 A 2 3 5
#10 A 5 3 10
#13 B 3 10 13
#15 B 2 3 15
A solution using dplyr.
library(dplyr)
df2 <- df %>%
mutate(Flag = ifelse(lead(X) < X, 1, 0)) %>%
mutate(Index = 1:n()) %>%
filter(Flag == 1 | is.na(Flag)) %>%
select(Index, X, M)
df2
# Index X M
# 1 3 3 10
# 2 5 2 3
# 3 10 5 3
# 4 13 3 10
# 5 15 2 3
Flag is a column showing whether the next value of X is smaller than the current value: if TRUE, Flag is 1, otherwise 0. We can then filter for Flag == 1 or for rows where Flag is NA, which covers the last row. df2 is the final filtered data frame.
DATA
df <- read.table(text = "Name X M
A 1 1
A 2 9
A 3 10
A 1 1
A 2 3
A 1 5
A 2 6
A 3 4
A 4 5
A 5 3
B 1 1
B 2 9
B 3 10
B 1 1
B 2 3",
header = TRUE, stringsAsFactors = FALSE)
Given the first two columns below (id and time_diff), I want to generate the 'block' column.
test
id time_diff block
1 a NA 1
2 a 1 1
3 a 1 1
4 a 1 1
5 a 3 1
6 a 3 1
7 b NA 2
8 b 11 3
9 b 1 3
10 b 1 3
11 b 1 3
12 b 12 4
13 b 1 4
14 c NA 5
15 c 4 5
16 c 7 5
The data is already sorted by id and time. time_diff was computed as the difference between the previous time and the time value of the current row within the same id. I want to create a block id, an auto-incrementing value that increases whenever a new id, or a time_diff > 10 within the same id, is encountered.
How can I achieve this in R?
Importing your data as a data frame with something like:
df = read.table(text='
id time_diff block
1 a NA 1
2 a 1 1
3 a 1 1
4 a 1 1
5 a 3 1
6 a 3 1
7 b NA 2
8 b 11 3
9 b 1 3
10 b 1 3
11 b 1 3
12 b 12 4
13 b 1 4
14 c NA 5
15 c 4 5
16 c 7 5')
You can do a one-liner like this to get occurrences satisfying your two conditions:
> new_col = as.vector(cumsum(
na.exclude(
c(F,diff(as.numeric(as.factor(df$id)))) | # change of id OR
df$time_diff > 10 # time_diff greater than 10
)
))
> new_col
[1] 0 0 0 0 0 1 2 2 2 2 3 3 4 4 4
And finally append this new column to your dataframe with cbind:
> cbind(df, block = c(0,new_col))
id time_diff block block
1 a NA 1 0
2 a 1 1 0
3 a 1 1 0
4 a 1 1 0
5 a 3 1 0
6 a 3 1 0
7 b NA 2 1
8 b 11 3 2
9 b 1 3 2
10 b 1 3 2
11 b 1 3 2
12 b 12 4 3
13 b 1 4 3
14 c NA 5 4
15 c 4 5 4
16 c 7 5 4
You will notice an offset between your desired block variable and mine: correcting it is easy and can be done at several different steps; I will leave it to you :)
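For completeness, one minimal way to remove that offset (an addition, not part of the original answer) is to add 1 to the 0-based counter:
block_fixed <- c(0, new_col) + 1   # block_fixed is just an illustrative name
block_fixed
# [1] 1 1 1 1 1 1 2 3 3 3 3 4 4 5 5 5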
Another variation of #Jealie's method would be:
with(test, cumsum(c(TRUE,id[-1]!=id[-nrow(test)])|time_diff>10))
#[1] 1 1 1 1 1 1 2 3 3 3 3 4 4 5 5 5
After learning from Jealie and akrun, I came up with this idea.
mydf %>%
mutate(group = cumsum(time_diff > 10 |!duplicated(id)))
# id time_diff block group
#1 a NA 1 1
#2 a 1 1 1
#3 a 1 1 1
#4 a 1 1 1
#5 a 3 1 1
#6 a 3 1 1
#7 b NA 2 2
#8 b 11 3 3
#9 b 1 3 3
#10 b 1 3 3
#11 b 1 3 3
#12 b 12 4 4
#13 b 1 4 4
#14 c NA 5 5
#15 c 4 5 5
#16 c 7 5 5
Here is an approach using dplyr:
require(dplyr)
set.seed(999)
test <- data.frame(
id = rep(letters[1:4], each = 3),
time_diff = sample(4:15)
)
test %>%
mutate(
b = as.integer(factor(id)) - lag(as.integer(factor(id))), # factor() so the id change is numeric whether id is character or factor
more10 = time_diff > 10,
increment = pmax(b, more10, na.rm = TRUE),
increment = ifelse(row_number() == 1, 1, increment),
block = cumsum(increment)
) %>%
select(id, time_diff, block)
Try:
> df
id time_diff
1 a NA
2 a 1
3 a 1
4 a 1
5 a 3
6 a 3
7 b NA
8 b 11
9 b 1
10 b 1
11 b 1
12 b 12
13 b 1
14 c NA
15 c 4
16 c 7
block <- 1
for (i in 2:nrow(df)) {
  block[i] <- ifelse(df$time_diff[i] > 10 || df$id[i] != df$id[i-1],
                     block[i-1] + 1,
                     block[i-1])
}
df$block <- block
df
id time_diff block
1 a NA 1
2 a 1 1
3 a 1 1
4 a 1 1
5 a 3 1
6 a 3 1
7 b NA 2
8 b 11 3
9 b 1 3
10 b 1 3
11 b 1 3
12 b 12 4
13 b 1 4
14 c NA 5
15 c 4 5
16 c 7 5