How to add a new column based on conditional difference between rows - r

I have a large dataset of patient IDs with the delays in days between surgery and radiotherapy (RT) sessions. Some patients may have had two or three RT treatments. To identify those patients, I consider a delay greater than 91 days (3 months).
This 91-day delay corresponds to the end of one RT treatment and the start of another. For analysis purposes it may instead be set at 61 days (2 months).
How can I map a gap of more than 91 days between two consecutive values to a new RT treatment and record the corresponding treatment order in a new column?
My database looks like this:
df1 <- data.frame(
  id = c("a","a","a","a", "b","b","b","b","b","b","b","b","b","b","b","b","b", "c","c","c","c"),
  delay = c(2,3,5,6, 3,5,7,9, 190,195,201,203,205, 1299,1303,1306,1307, 200,202,204,205))
> df1
id delay
1 a 2
2 a 3
3 a 5
4 a 6
5 b 3
6 b 5
7 b 7
8 b 9
9 b 190
10 b 195
11 b 201
12 b 203
13 b 205
14 b 1299
15 b 1303
16 b 1306
17 b 1307
18 c 200
19 c 202
20 c 204
21 c 205
I have failed to produce something like this, where a gap of more than the 91-day threshold between consecutive delays marks a new treatment:
df2 <- data.frame(
  id = c("a","a","a","a", "b","b","b","b","b","b","b","b","b","b","b","b","b", "c","c","c","c"),
  delay = c(2,3,5,6, 3,5,7,9, 190,195,201,203,205, 1299,1303,1306,1307, 200,202,204,205),
  tt_order = c("1st","1st","1st","1st", "1st","1st","1st","1st", "2nd","2nd","2nd","2nd","2nd", "3rd","3rd","3rd","3rd", "1st","1st","1st","1st"))
> df2
id delay tt_order
1 a 2 1st
2 a 3 1st
3 a 5 1st
4 a 6 1st
5 b 3 1st
6 b 5 1st
7 b 7 1st
8 b 9 1st
9 b 190 2nd
10 b 195 2nd
11 b 201 2nd
12 b 203 2nd
13 b 205 2nd
14 b 1299 3rd
15 b 1303 3rd
16 b 1306 3rd
17 b 1307 3rd
18 c 200 1st
19 c 202 1st
20 c 204 1st
21 c 205 1st
I will be grateful for any help you can provide.

One way would be to divide delay by 100 and then use match and unique to get a sequential group index within each id. Note that this assumes each treatment's delays fall into distinct 100-day bins, so it can split a treatment whose delays straddle a bin boundary (e.g. 195 vs. 201); the gap-based approach in the next answer is more robust.
library(dplyr)
df2 %>%
  group_by(id) %>%
  mutate(n_tt = floor(delay / 100),
         n_tt = match(n_tt, unique(n_tt)))
# id delay tt_order n_tt
# <chr> <dbl> <dbl> <int>
# 1 a 2 1 1
# 2 a 3 1 1
# 3 a 5 1 1
# 4 a 6 1 1
# 5 b 3 1 1
# 6 b 5 1 1
# 7 b 7 1 1
# 8 b 9 1 1
# 9 b 150 2 2
#10 b 152 2 2
#11 b 155 2 2
#12 b 159 2 2
#13 b 1301 3 3
#14 b 1303 3 3
#15 b 1306 3 3
#16 b 1307 3 3
#17 c 200 1 1
#18 c 202 1 1
#19 c 204 1 1
#20 c 205 1 1
This creates a new column, n_tt, which can be compared against tt_order in df2.

@CharlesLDN - perhaps this might be what you are looking for. It looks at the differences in delay within each id, and gaps of more than 90 days are treated as the start of a new treatment.
library(tidyverse)
df1 %>%
  group_by(id) %>%
  mutate(tt_order = cumsum(c(0, diff(delay)) > 90) + 1)
Output
id delay tt_order
<chr> <dbl> <dbl>
1 a 2 1
2 a 3 1
3 a 5 1
4 a 6 1
5 b 3 1
6 b 5 1
7 b 7 1
8 b 9 1
9 b 190 2
10 b 195 2
11 b 201 2
12 b 203 2
13 b 205 2
14 b 1299 3
15 b 1303 3
16 b 1306 3
17 b 1307 3
18 c 200 1
19 c 202 1
20 c 204 1
21 c 205 1
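If you also want tt_order formatted as ordinal labels ("1st", "2nd", "3rd") as in the desired df2, and the gap threshold kept adjustable (91 vs. 61 days, as mentioned in the question), here is a hedged sketch building on the answer above. The ordinal_label helper is illustrative, not from any package, and only needs to handle small counts:
library(dplyr)

gap_threshold <- 91   # switch to 61 for the 2-month definition

# illustrative helper: 1 -> "1st", 2 -> "2nd", 3 -> "3rd", 4+ -> "th"
ordinal_label <- function(n) {
  suffix <- c("st", "nd", "rd")[n]
  suffix[is.na(suffix)] <- "th"
  paste0(n, suffix)
}

df1 %>%
  group_by(id) %>%
  mutate(tt_order = ordinal_label(cumsum(c(0, diff(delay)) > gap_threshold) + 1)) %>%
  ungroup()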

Related

Add values of one group into another group in R

I have a question on how to add the value from one row of a group to the rest of the rows in that group and then delete that row. For example:
df <- data.frame(
  Year = c(1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2),
  Cluster = c("a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","c",
              "b","b","b","b","b","b","b","b","b","b","b","b","b","b","b","b","b","b","b","b","d"),
  Seed = c(1,1,1,1,1,2,2,2,2,2,3,3,3,3,3,99,99,99,99,99,99),  # length 21, recycled across the 42 rows
  Day = c(1,2,3,4,5,1,2,3,4,5,1,2,3,4,5,1,2,3,4,5,1),         # length 21, recycled
  value = c(5,2,1,2,8,6,7,9,3,5,2,1,2,8,6,55,66,77,88,99,10)) # length 21, recycled
In the example above, my data is grouped by Year, Cluster, Seed and Day. The Seed == 99 values need to be added to the other rows that share the same (Year, Cluster, Day), and the 99 row then deleted. For example, row 16 belongs to the (Year = 1, Cluster = a, Day = 1, Seed = 99) group, so its value of 55 should be added to row 1 (5 + 55), row 6 (6 + 55) and row 11 (2 + 55), and row 16 should then be deleted. Row 21, however, is in Cluster c with Seed 99 and has no matching Year + Cluster + Day combination, so it should remain in the data as is.
My actual data has about 1 million records with 10 years, 80 clusters, 500 days and 10+1 seeds (1 to 10 plus 99), so I am looking for an efficient solution. The expected output is:
Year Cluster Seed Day value
1 1 a 1 1 60
2 1 a 1 2 68
3 1 a 1 3 78
4 1 a 1 4 90
5 1 a 1 5 107
6 1 a 2 1 61
7 1 a 2 2 73
8 1 a 2 3 86
9 1 a 2 4 91
10 1 a 2 5 104
11 1 a 3 1 57
12 1 a 3 2 67
13 1 a 3 3 79
14 1 a 3 4 96
15 1 a 3 5 105
16 1 c 99 1 10
17 2 b 1 1 60
18 2 b 1 2 68
19 2 b 1 3 78
20 2 b 1 4 90
21 2 b 1 5 107
22 2 b 2 1 61
23 2 b 2 2 73
24 2 b 2 3 86
25 2 b 2 4 91
26 2 b 2 5 104
27 2 b 3 1 57
28 2 b 3 2 67
29 2 b 3 3 79
30 2 b 3 4 96
31 2 b 3 5 105
32 2 d 99 1 10
A data.table approach:
library(data.table)
df <- setDT(df)[, `:=`(value = ifelse(Seed != 99, value + value[Seed == 99], value),
                       flag = Seed == 99 & .N == 1),
                by = .(Year, Cluster, Day)][
                  !(Seed == 99 & flag == FALSE)][, flag := NULL]
Output:
df[]
Year Cluster Seed Day value
1: 1 a 1 1 60
2: 1 a 1 2 68
3: 1 a 1 3 78
4: 1 a 1 4 90
5: 1 a 1 5 107
6: 1 a 2 1 61
7: 1 a 2 2 73
8: 1 a 2 3 86
9: 1 a 2 4 91
10: 1 a 2 5 104
11: 1 a 3 1 57
12: 1 a 3 2 67
13: 1 a 3 3 79
14: 1 a 3 4 96
15: 1 a 3 5 105
16: 1 c 99 1 10
17: 2 b 1 1 60
18: 2 b 1 2 68
19: 2 b 1 3 78
20: 2 b 1 4 90
21: 2 b 1 5 107
22: 2 b 2 1 61
23: 2 b 2 2 73
24: 2 b 2 3 86
25: 2 b 2 4 91
26: 2 b 2 5 104
27: 2 b 3 1 57
28: 2 b 3 2 67
29: 2 b 3 3 79
30: 2 b 3 4 96
31: 2 b 3 5 105
32: 2 d 99 1 10
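Since the question mentions a million rows, a hedged alternative sketch using a data.table update join may also be worth trying (fifelse() needs a recent data.table). It adds each group's seed-99 value in place and assumes at most one Seed == 99 row per (Year, Cluster, Day):
library(data.table)
dt <- as.data.table(df)

# the seed-99 value for each (Year, Cluster, Day) group
seeds <- dt[Seed == 99, .(Year, Cluster, Day, seed_value = value)]

# update join: add the group's seed-99 value to every non-99 row in place
dt[seeds, on = .(Year, Cluster, Day),
   value := fifelse(Seed == 99, value, value + i.seed_value)]

# drop the 99 rows, except where they are the only row in their group
dt[, n_in_grp := .N, by = .(Year, Cluster, Day)]
result <- dt[!(Seed == 99 & n_in_grp > 1)][, n_in_grp := NULL]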
Here's an approach using the tidyverse. If you're looking for speed with a million rows, a data.table solution will probably perform better.
library(tidyverse)
df <- data.frame(
  Year = c(1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2),
  Cluster = c("a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","c",
              "b","b","b","b","b","b","b","b","b","b","b","b","b","b","b","b","b","b","b","b","d"),
  Seed = c(1,1,1,1,1,2,2,2,2,2,3,3,3,3,3,99,99,99,99,99,99),
  Day = c(1,2,3,4,5,1,2,3,4,5,1,2,3,4,5,1,2,3,4,5,1),
  value = c(5,2,1,2,8,6,7,9,3,5,2,1,2,8,6,55,66,77,88,99,10))
seeds <- df %>%
  filter(Seed == 99)

matches <- df %>%
  filter(Seed != 99) %>%
  inner_join(select(seeds, -Seed), by = c("Year", "Cluster", "Day")) %>%
  mutate(value = value.x + value.y) %>%
  select(Year, Cluster, Seed, Day, value)

no_matches <- anti_join(seeds, matches, by = c("Year", "Cluster", "Day"))

bind_rows(matches, no_matches) %>%
  arrange(Year, Cluster, Seed, Day)
#> Year Cluster Seed Day value
#> 1 1 a 1 1 60
#> 2 1 a 1 2 68
#> 3 1 a 1 3 78
#> 4 1 a 1 4 90
#> 5 1 a 1 5 107
#> 6 1 a 2 1 61
#> 7 1 a 2 2 73
#> 8 1 a 2 3 86
#> 9 1 a 2 4 91
#> 10 1 a 2 5 104
#> 11 1 a 3 1 57
#> 12 1 a 3 2 67
#> 13 1 a 3 3 79
#> 14 1 a 3 4 96
#> 15 1 a 3 5 105
#> 16 1 c 99 1 10
#> 17 2 b 1 1 60
#> 18 2 b 1 2 68
#> 19 2 b 1 3 78
#> 20 2 b 1 4 90
#> 21 2 b 1 5 107
#> 22 2 b 2 1 61
#> 23 2 b 2 2 73
#> 24 2 b 2 3 86
#> 25 2 b 2 4 91
#> 26 2 b 2 5 104
#> 27 2 b 3 1 57
#> 28 2 b 3 2 67
#> 29 2 b 3 3 79
#> 30 2 b 3 4 96
#> 31 2 b 3 5 105
#> 32 2 d 99 1 10
Created on 2018-11-23 by the reprex package (v0.2.1)

Unnest (separate) multiple column values into new rows using sparklyr

I am trying to split comma-separated column values into new rows based on the id column. I know how to do this in R using dplyr and tidyr, but I am looking to solve the same problem in sparklyr.
id <- c(1,1,1,1,1,2,2,2,3,3,3)
name <- c("A,B,C","B,F","C","D,R,P","E","A,Q,W","B,J","C","D,M","E,X","F,E")
value <- c("1,2,3","2,4,43,2","3,1,2,3","1","1,2","26,6,7","3,3,4","1","1,12","2,3,3","3")
dt <- data.frame(id,name,value)
R solution:
separate_rows(dt, name, sep = ",") %>%
  separate_rows(value, sep = ",")
Desired output from the Spark data frame (sparklyr package):
> final_result
id name value
1 1 A 1
2 1 A 2
3 1 A 3
4 1 B 1
5 1 B 2
6 1 B 3
7 1 C 1
8 1 C 2
9 1 C 3
10 1 B 2
11 1 B 4
12 1 B 43
13 1 B 2
14 1 F 2
15 1 F 4
16 1 F 43
17 1 F 2
18 1 C 3
19 1 C 1
20 1 C 2
21 1 C 3
22 1 D 1
23 1 R 1
24 1 P 1
25 1 E 1
26 1 E 2
27 2 A 26
28 2 A 6
29 2 A 7
30 2 Q 26
31 2 Q 6
32 2 Q 7
33 2 W 26
34 2 W 6
35 2 W 7
36 2 B 3
37 2 B 3
38 2 B 4
39 2 J 3
40 2 J 3
41 2 J 4
42 2 C 1
43 3 D 1
44 3 D 12
45 3 M 1
46 3 M 12
47 3 E 2
48 3 E 3
49 3 E 3
50 3 X 2
51 3 X 3
52 3 X 3
53 3 F 3
54 3 E 3
Note:
I have approximately 1000 columns with nested values, so I need a function that can loop over each column.
I know there is an sdf_unnest() function in the sparklyr.nested package, but I am not sure how to split the strings in multiple columns and apply it. I am quite new to sparklyr.
Any help would be much appreciated.
You have to combine explode and split:
sdt %>%
  mutate(name = explode(split(name, ","))) %>%
  mutate(value = explode(split(value, ",")))
# Source: lazy query [?? x 3]
# Database: spark_connection
id name value
<dbl> <chr> <chr>
1 1.00 A 1
2 1.00 A 2
3 1.00 A 3
4 1.00 B 1
5 1.00 B 2
6 1.00 B 3
7 1.00 C 1
8 1.00 C 2
9 1.00 C 3
10 1.00 B 2
# ... with more rows
Please note that lateral views have to be expressed as separate subqueries, so this:
sdt %>%
  mutate(
    name = explode(split(name, ",")),
    value = explode(split(value, ",")))
won't work
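For the "approximately 1000 columns" note in the question, here is a hedged sketch of a loop; explode_columns is an illustrative helper, not a sparklyr function. One mutate per column keeps each lateral view in its own subquery, per the constraint above, though the nested query plan may grow heavy with very many columns:
library(sparklyr)
library(dplyr)

explode_columns <- function(sdf, cols, sep = ",") {
  for (col in cols) {
    # one mutate per column so each explode() gets its own subquery
    sdf <- sdf %>% mutate(!!col := explode(split(!!sym(col), !!sep)))
  }
  sdf
}

# e.g. for the two columns above; extend cols to all delimited columns
result <- explode_columns(sdt, cols = c("name", "value"))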

How to keep initial row order

I have run this SQL statement through the sqldf package:
SELECT A,B, COUNT(*) AS NUM
FROM DF
GROUP BY A,B
I have got the output I wanted, but I would like to keep the initial row order. Unfortunately, the output has a different order.
For example:
> DF
A B C D
1 11 2 432 4
2 11 3 432 4
3 13 4 241 5
4 42 5 2 3
5 51 5 332 2
6 51 5 332 1
7 51 5 332 1
> sqldf("SELECT A,B,C,D, COUNT (*) AS NUM
+ FROM DF
+ GROUP BY A,B,C,D")
A B C D NUM
1 11 2 432 4 1
2 11 3 432 4 1
3 13 4 241 5 1
4 42 5 2 3 1
5 51 5 332 1 2
6 51 5 332 2 1
As you can see, the row order changes (rows 5 and 6). It would be great if someone could help me with this issue.
If we need to do this with sqldf, add a row-number column and use ORDER BY with the names pasted together:
library(sqldf)
nm <- toString(names(DF))
DF1 <- cbind(rn = seq_len(nrow(DF)), DF)
nm1 <- toString(names(DF1))
fn$sqldf("SELECT $nm, COUNT(*) AS NUM
          FROM DF1
          GROUP BY $nm
          ORDER BY $nm1")
# A B C D NUM
#1 11 2 432 4 1
#2 11 3 432 4 1
#3 13 4 241 5 1
#4 42 5 2 3 1
#5 51 5 332 2 1
#6 51 5 332 1 2
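If sqldf is not a hard requirement, a hedged dplyr sketch (with a reasonably recent dplyr for the name argument) preserves first-appearance order naturally: add_count() attaches the group size to each original row, and distinct() then keeps the first occurrence of each combination in the original order.
library(dplyr)
DF %>%
  add_count(A, B, C, D, name = "NUM") %>%
  distinct(A, B, C, D, NUM)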

Give unique identifier to consecutive groupings

I'm trying to identify groups based on sequential numbers. For example, I have a dataframe that looks like this (simplified):
UID
1
2
3
4
5
6
7
11
12
13
15
17
20
21
22
And I would like to add a column that identifies groupings of consecutive numbers: for example, 1 to 7 form the first consecutive run, so they get 1; the second consecutive run gets 2, and so on.
UID Group
1 1
2 1
3 1
4 1
5 1
6 1
7 1
11 2
12 2
13 2
15 3
17 4
20 5
21 5
22 5
None of the existing code I have found has solved this issue.
Here is one base R method that uses diff, a logical check, and cumsum:
cumsum(c(1, diff(df$UID) > 1))
[1] 1 1 1 1 1 1 1 2 2 2 3 4 5 5 5
Adding this onto the data.frame, we get:
df$id <- cumsum(c(1, diff(df$UID) > 1))
df
UID id
1 1 1
2 2 1
3 3 1
4 4 1
5 5 1
6 6 1
7 7 1
8 11 2
9 12 2
10 13 2
11 15 3
12 17 4
13 20 5
14 21 5
15 22 5
Or you can use dplyr as follows:
library(dplyr)
df %>% mutate(ID = cumsum(c(1, diff(UID) > 1)))
# UID ID
#1 1 1
#2 2 1
#3 3 1
#4 4 1
#5 5 1
#6 6 1
#7 7 1
#8 11 2
#9 12 2
#10 13 2
#11 15 3
#12 17 4
#13 20 5
#14 21 5
#15 22 5
We can also get the difference between the current row and the previous row using the shift function from data.table, get the cumulative sum of the logical vector and assign it to create the 'Group' column. This will be faster.
library(data.table)
setDT(df)[, Group := cumsum((UID - shift(UID, fill = UID[1])) > 1) + 1]
df
# UID Group
# 1: 1 1
# 2: 2 1
# 3: 3 1
# 4: 4 1
# 5: 5 1
# 6: 6 1
# 7: 7 1
# 8: 11 2
# 9: 12 2
#10: 13 2
#11: 15 3
#12: 17 4
#13: 20 5
#14: 21 5
#15: 22 5

Removing duplicates for each ID

Suppose that there are three variables in my data frame (mydata): 1) id, 2) case, and 3) value.
mydata <- data.frame(
  id = c(1,1,1,1,1, 2,2,2,2, 3,3,3,3, 4,4,4,4),
  case = c("a","b","c","c","b", "a","b","c","c", "a","b","c","c", "a","b","c","a"),
  value = c(1,34,56,23,34, 546,34,67,23, 65,23,65,23, 87,34,321,87))
mydata
id case value
1 1 a 1
2 1 b 34
3 1 c 56
4 1 c 23
5 1 b 34
6 2 a 546
7 2 b 34
8 2 c 67
9 2 c 23
10 3 a 65
11 3 b 23
12 3 c 65
13 3 c 23
14 4 a 87
15 4 b 34
16 4 c 321
17 4 a 87
For each id, the same 'case' character can occur more than once, and the corresponding values may be the same or different. If the values are the same, I only need to keep one row and remove the duplicate.
My final data then would be
id case value
1 1 a 1
2 1 b 34
3 1 c 56
4 1 c 23
5 2 a 546
6 2 b 34
7 2 c 67
8 2 c 23
9 3 a 65
10 3 b 23
11 3 c 65
12 3 c 23
13 4 a 87
14 4 b 34
15 4 c 321
To add to the other answers, here's a dplyr approach:
library(dplyr)
mydata %>% group_by(id, case, value) %>% distinct()
Or
mydata %>% distinct(id, case, value)
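Note that distinct(id, case, value) returns only those three columns. If the data frame had additional columns you wanted to keep, .keep_all = TRUE retains them:
# keep every column while deduplicating on id, case and value
mydata %>% distinct(id, case, value, .keep_all = TRUE)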
You could try duplicated:
mydata[!duplicated(mydata[,c('id', 'case', 'value')]),]
# id case value
#1 1 a 1
#2 1 b 34
#3 1 c 56
#4 1 c 23
#6 2 a 546
#7 2 b 34
#8 2 c 67
#9 2 c 23
#10 3 a 65
#11 3 b 23
#12 3 c 65
#13 3 c 23
#14 4 a 87
#15 4 b 34
#16 4 c 321
Or use unique with the by option from data.table:
library(data.table)
set.seed(25)
mydata1 <- cbind(mydata, value1=rnorm(17))
DT <- as.data.table(mydata1)
unique(DT, by=c('id', 'case', 'value'))
# id case value value1
#1: 1 a 1 -0.21183360
#2: 1 b 34 -1.04159113
#3: 1 c 56 -1.15330756
#4: 1 c 23 0.32153150
#5: 2 a 546 -0.44553326
#6: 2 b 34 1.73404543
#7: 2 c 67 0.51129562
#8: 2 c 23 0.09964504
#9: 3 a 65 -0.05789111
#10: 3 b 23 -1.74278763
#11: 3 c 65 -1.32495298
#12: 3 c 23 -0.54793388
#13: 4 a 87 -1.45638428
#14: 4 b 34 0.08268682
#15: 4 c 321 0.92757895
Want to deduplicate on just id, case and value even when there are other columns? Easy:
> mydata[!duplicated(mydata[,c("id","case","value")]),]
Even if you have a ton more variables in the dataset, they won't be considered by the duplicated() call.
