Calculate overlapping date ranges from two independent databases in R

I have two independent databases; one contains follow-up data (a start date and an end date), as follows:
> data1 <- data.frame("ID" = c(1,1,1,1,2,2,2), "FUstart" = c("2019-01-01", "2019-04-01", "2019-07-01", "2019-10-01", "2019-04-01", "2019-07-01", "2019-10-01"), "FUend" = c("2019-03-31", "2019-06-30", "2019-09-30", "2019-12-31", "2019-06-30", "2019-09-30", "2019-12-31"))
> data1
ID FUstart FUend
1 1 2019-01-01 2019-03-31
2 1 2019-04-01 2019-06-30
3 1 2019-07-01 2019-09-30
4 1 2019-10-01 2019-12-31
5 2 2019-04-01 2019-06-30
6 2 2019-07-01 2019-09-30
7 2 2019-10-01 2019-12-31
The other contains drug-use data (also with a start date and an end date), as follows:
> data2 <- data.frame("ID" = c(1,1,1,2), "Drugstart" = c("2019-01-11", "2019-03-26", "2019-06-26", "2019-03-20"), "Drugend" = c("2019-01-20", "2019-04-05", "2019-10-05", "2019-10-10"))
> data2
ID Drugstart Drugend
1 1 2019-01-11 2019-01-20
2 1 2019-03-26 2019-04-05
3 1 2019-06-26 2019-10-05
4 2 2019-03-20 2019-10-10
The two databases are linked by "ID". The problem is that the number of rows per ID differs between the two databases. I would like to calculate the number of overlapping days for each follow-up period and add it to data1. I would expect the following result:
> data1
ID FUstart FUend Overlapping.Days
1 1 2019-01-01 2019-03-31 16
2 1 2019-04-01 2019-06-30 10
3 1 2019-07-01 2019-09-30 92
4 1 2019-10-01 2019-12-31 5
5 2 2019-04-01 2019-06-30 91
6 2 2019-07-01 2019-09-30 92
7 2 2019-10-01 2019-12-31 10
Note that data1 is the base database, and data2's overlapping days should be added to it. Many thanks for any help!

An option using data.table::foverlaps:
foverlaps(data1, data2)[,
  sum(1L + pmin(Drugend, FUend) - pmax(Drugstart, FUstart)),
  .(ID, FUstart, FUend)]
output (which matches the OP's expected output):
ID FUstart FUend V1
1: 1 2019-01-01 2019-03-31 16
2: 1 2019-04-01 2019-06-30 10
3: 1 2019-07-01 2019-09-30 92
4: 1 2019-10-01 2019-12-31 5
5: 2 2019-04-01 2019-06-30 91
6: 2 2019-07-01 2019-09-30 92
7: 2 2019-10-01 2019-12-31 10
data:
library(data.table)
setDT(data1)
cols <- paste0("FU", c("start","end"))
data1[, (cols) := lapply(.SD, as.IDate, format="%Y-%m-%d"), .SDcols=cols]
setkeyv(data1, c("ID", cols))
#too lazy to generalize and hence copy paste
setDT(data2)
cols <- paste0("Drug", c("start","end"))
data2[, (cols) := lapply(.SD, as.IDate, format="%Y-%m-%d"), .SDcols=cols]
setkeyv(data2, c("ID", cols))
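For comparison, here is a dplyr sketch of the same calculation (my own addition, not part of the answer above). It starts from the plain data frames in the question and counts both endpoints, so a one-day overlap counts as 1:
library(dplyr)
d1 <- data1 %>% mutate(across(c(FUstart, FUend), as.Date))
d2 <- data2 %>% mutate(across(c(Drugstart, Drugend), as.Date))
d1 %>%
  left_join(d2, by = "ID") %>%
  # drug periods that do not overlap a follow-up period are clamped to 0 days
  mutate(days = pmax(0, as.numeric(pmin(Drugend, FUend) - pmax(Drugstart, FUstart)) + 1)) %>%
  group_by(ID, FUstart, FUend) %>%
  summarise(Overlapping.Days = sum(days, na.rm = TRUE), .groups = "drop")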

Related

How to determine difference in days between two dates across two columns and two rows by group?

I am looking to determine the difference in days, by group, across two columns and two rows. Essentially, subtract each row's End_Date from the Start_Date of the subsequent row, record the difference as a new column in the data frame, and start over when a new group (ID) begins.
Start_Date End_Date ID
2014-05-09 2015-05-08 01
2015-05-09 2016-05-08 01
2016-05-11 2017-05-10 01
2017-05-11 2018-05-10 01
2016-08-29 2017-08-28 02
2017-08-29 2018-08-28 02
The result should be something like table below.
Start_Date End_Date ID Days_Difference
2014-05-09 2015-05-08 01 NA
2015-05-09 2016-05-08 01 01
2016-05-11 2017-05-10 01 03
2017-05-11 2018-05-10 01 01
2016-08-29 2017-08-28 02 NA
2017-08-29 2018-08-28 02 01
Essentially, I want the difference between each End_Date and the Start_Date diagonally below it (next row, left column), within each group (ID). I am having a really hard time with this one, and I don't think my code attempts would be helpful. Any solution using tidyverse, data.table, or base R would be greatly appreciated!
We can get the difference between the lead (next element) of 'Start_Date' and the 'End_Date' after grouping by 'ID':
library(dplyr)
df1 <- df1 %>%
  mutate(across(ends_with("Date"), as.Date)) %>%
  group_by(ID) %>%
  mutate(Days_Difference = as.numeric(lag(lead(Start_Date) - End_Date))) %>%
  ungroup
-output
df1
# A tibble: 6 × 4
Start_Date End_Date ID Days_Difference
<date> <date> <int> <dbl>
1 2014-05-09 2015-05-08 1 NA
2 2015-05-09 2016-05-08 1 1
3 2016-05-11 2017-05-10 1 3
4 2017-05-11 2018-05-10 1 1
5 2016-08-29 2017-08-28 2 NA
6 2017-08-29 2018-08-28 2 1
Or a similar logic with data.table
library(data.table)
setDT(df1)[, Days_Difference := as.numeric(
  shift(shift(as.IDate(Start_Date), type = "lead") - as.IDate(End_Date))), ID]
-output
> df1
Start_Date End_Date ID Days_Difference
<char> <char> <int> <num>
1: 2014-05-09 2015-05-08 1 NA
2: 2015-05-09 2016-05-08 1 1
3: 2016-05-11 2017-05-10 1 3
4: 2017-05-11 2018-05-10 1 1
5: 2016-08-29 2017-08-28 2 NA
6: 2017-08-29 2018-08-28 2 1
data:
df1 <- structure(list(
  Start_Date = c("2014-05-09", "2015-05-09", "2016-05-11", "2017-05-11", "2016-08-29", "2017-08-29"),
  End_Date = c("2015-05-08", "2016-05-08", "2017-05-10", "2018-05-10", "2017-08-28", "2018-08-28"),
  ID = c(1L, 1L, 1L, 1L, 2L, 2L)),
  class = "data.frame", row.names = c(NA, -6L))
Another data.table option
setDT(df1)[
  , c(lapply(.SD, as.IDate), .(ID = ID)), .SDcols = patterns("Date$")
][
  , DayspDiff := Start_Date - shift(End_Date), ID
][]
yields
Start_Date End_Date ID DayspDiff
1: 2014-05-09 2015-05-08 1 NA
2: 2015-05-09 2016-05-08 1 1
3: 2016-05-11 2017-05-10 1 3
4: 2017-05-11 2018-05-10 1 1
5: 2016-08-29 2017-08-28 2 NA
6: 2017-08-29 2018-08-28 2 1
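For completeness, here is a base R sketch of the same lagged difference (my own addition, not one of the answers above; it assumes rows are already ordered by date within each ID, as in the example data):
# lag End_Date within each ID, then subtract it from the current Start_Date
prev_end <- ave(as.numeric(as.Date(df1$End_Date)), df1$ID,
                FUN = function(x) c(NA, head(x, -1)))
df1$Days_Difference <- as.numeric(as.Date(df1$Start_Date)) - prev_end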

Select rows based on multiple conditions from two independent databases

I have two independent datasets; one contains an event date, and each ID has only one "Eventdate". As follows:
data1 <- data.frame("ID" = c(1,2,3,4,5,6), "Eventdate" = c("2019-01-01", "2019-02-01", "2019-03-01", "2019-04-01", "2019-05-01", "2019-06-01"))
data1
ID Eventdate
1 1 2019-01-01
2 2 2019-02-01
3 3 2019-03-01
4 4 2019-04-01
5 5 2019-05-01
6 6 2019-06-01
In the other dataset, one ID can have multiple event names (Eventcode), each with its own event date (Eventdate). As follows:
data2 <- data.frame("ID" = c(1,1,2,3,3,3,4,4,7), "Eventcode"=c(201,202,201,204,205,206,209,208,203),"Eventdate" = c("2019-01-01", "2019-01-01", "2019-02-11", "2019-02-15", "2019-03-01", "2019-03-15", "2019-03-10", "2019-03-20", "2019-06-02"))
data2
ID Eventcode Eventdate
1 1 201 2019-01-01
2 1 202 2019-01-01
3 2 201 2019-02-11
4 3 204 2019-02-15
5 3 205 2019-03-01
6 3 206 2019-03-15
7 4 209 2019-03-10
8 4 208 2019-03-20
9 7 203 2019-06-02
The two datasets are linked by ID, but the IDs in the two datasets do not fully overlap.
I would like to select cases from data2 with the following conditions:
Match by ID
Eventdate in data2 >= Eventdate in data1.
If one ID has multiple Eventdates in data2, select the earliest one.
If one ID has multiple Eventcodes at one Eventdate in data2, just randomly select one.
Then merge the selected data2 into data1.
Expected results as follows:
data1
ID Eventdate Eventdate.data2 Eventcode
1 1 2019-01-01 2019-01-01 201
2 2 2019-02-01 2019-02-11 201
3 3 2019-03-01 2019-03-01 205
4 4 2019-04-01
5 5 2019-05-01
6 6 2019-06-01
or
data1
ID Eventdate Eventdate.data2 Eventcode
1 1 2019-01-01 2019-01-01 202
2 2 2019-02-01 2019-02-11 201
3 3 2019-03-01 2019-03-01 205
4 4 2019-04-01
5 5 2019-05-01
6 6 2019-06-01
Thank you very very much!
You can try this approach:
library(dplyr)
left_join(data1, data2, by = 'ID') %>%
  group_by(ID, Eventdate.x) %>%
  summarise(Eventdate = Eventdate.y[Eventdate.y >= Eventdate.x][1],
            Eventcode = {
              inds <- Eventdate.y >= Eventdate.x
              val <- sum(inds, na.rm = TRUE)
              if (val == 1) Eventcode[inds]
              else if (val > 1) sample(Eventcode[inds], 1)
              else NA_real_
            })
# ID Eventdate.x Eventdate Eventcode
# <dbl> <chr> <chr> <dbl>
#1 1 2019-01-01 2019-01-01 201
#2 2 2019-02-01 2019-02-11 201
#3 3 2019-03-01 2019-03-01 205
#4 4 2019-04-01 NA NA
#5 5 2019-05-01 NA NA
#6 6 2019-06-01 NA NA
The complicated logic for Eventcode is there for randomness; if you are OK with selecting the 1st value (as is done for Eventdate), you can simplify it to:
left_join(data1, data2, by = 'ID') %>%
  group_by(ID, Eventdate.x) %>%
  summarise(Eventdate = Eventdate.y[Eventdate.y >= Eventdate.x][1],
            Eventcode = Eventcode[Eventdate.y >= Eventdate.x][1])
Does this work:
library(dplyr)
data1 %>%
  rename(Eventdate_dat1 = Eventdate) %>%
  left_join(data2, by = 'ID') %>%
  group_by(ID) %>%
  filter(Eventdate >= Eventdate_dat1) %>%
  mutate(Eventdate = case_when(length(unique(Eventdate)) > 1 ~ min(Eventdate), TRUE ~ Eventdate),
         Eventcode = case_when(length(unique(Eventcode)) > 1 ~ min(Eventcode), TRUE ~ Eventcode)) %>%
  distinct() %>%
  right_join(data1, by = 'ID') %>%
  select(ID, 'Eventdate' = Eventdate.y, 'Eventdate.data2' = Eventdate.x, Eventcode)
# A tibble: 6 x 4
# Groups: ID [6]
ID Eventdate Eventdate.data2 Eventcode
<dbl> <chr> <chr> <dbl>
1 1 2019-01-01 2019-01-01 201
2 2 2019-02-01 2019-02-11 201
3 3 2019-03-01 2019-03-01 205
4 4 2019-04-01 NA NA
5 5 2019-05-01 NA NA
6 6 2019-06-01 NA NA
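A data.table sketch is also possible (my own addition, not one of the answers above): a forward rolling join picks, for each ID, the earliest data2 Eventdate on or after data1's Eventdate, and mult = "first" settles ties when several Eventcodes share that date.
library(data.table)
d1 <- as.data.table(data1)[, Eventdate := as.IDate(Eventdate)]
d2 <- as.data.table(data2)[, Eventdate := as.IDate(Eventdate)]
d2[, Eventdate2 := Eventdate]  # keep a copy; the join column is returned with d1's values
d2[d1, on = .(ID, Eventdate), roll = -Inf, mult = "first",
   .(ID, Eventdate = i.Eventdate, Eventdate.data2 = Eventdate2, Eventcode)]
IDs with no qualifying data2 row (4, 5, 6) come back with NA, as in the expected output.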

Match rows with the same or close start and end dates in data.table (R)

Given the following data.table:
df <- data.table(id=c(1,2,2,2,3,3,4,4,4),
start_date=c("2019-05-08","2019-08-01","2019-07-12","2017-05-24","2016-05-08","2017-08-01","2019-06-12","2017-02-24","2017-08-24"),
end_date=c("2019-09-08","2019-12-01","2019-07-30","2017-11-24","2017-07-25","2018-08-01","2019-12-12","2017-08-24","2018-08-24"),
variable1=c("a","c","c","d","a",NA,"a","a","b"))
df
id start_date end_date variable1
1: 1 2019-05-08 2019-09-08 a
2: 2 2019-08-01 2019-12-01 c
3: 2 2019-07-12 2019-07-30 c
4: 2 2017-05-24 2017-11-24 d
5: 3 2016-05-08 2017-07-25 a
6: 3 2017-08-01 2018-08-01 <NA>
7: 4 2019-06-12 2019-12-12 a
8: 4 2017-02-24 2017-08-24 a
9: 4 2017-08-24 2018-08-24 b
Within the same ID, I want to compare the start_date and end_date. If the end_date of one row is within 30 days of the start_date of another row, I want to combine the rows. So that it looks like this:
id start_date end_date variable1
1: 1 2019-05-08 2019-09-08 a
2: 2 2019-07-12 2019-12-01 c
3: 2 2017-05-24 2017-11-24 d
4: 3 2016-05-08 2018-08-01 a
5: 4 2019-06-12 2019-12-12 a
6: 4 2017-02-24 2017-08-24 a
7: 4 2017-08-24 2018-08-24 b
If the other variables of the rows are the same, the rows should be combined using the earliest start_date and the latest end_date, as for id 2. If variable1 is NA, it should be replaced with the value from the matching row, as for id 3. If variable1 has different values, the rows should remain separate, as for id 4.
The real data.table contains more variables and rows than displayed here. Preferably a data.table solution.
It is not clear what should happen if an id has 3 overlapping rows with variable1 = c('a', NA, 'b'): should the NA become a or b?
If we just choose the first variable1 when there are multiple matches, here is an option that first fills the NA and then borrows the idea from David Arenburg's solution here:
setorder(df, id, start_date, end_date)
df[, end_d := end_date + 30L]
# fill NA variable1 from the first non-NA row whose range covers this start_date
df[is.na(variable1), variable1 :=
     df[!is.na(variable1)][.SD, on = .(id, start_date <= start_date, end_d >= start_date),
                           mult = "first", x.variable1]]
# index chained ranges (g) and collapse each chain to its min start / max end
df[, g := c(0L, cumsum(shift(start_date, -1L) > cummax(as.integer(end_d)))[-.N]), id][,
   .(start_date = min(start_date), end_date = max(end_date)), .(id, variable1, g)]
output:
id variable1 g start_date end_date
1: 1 a 0 2019-05-08 2019-09-08
2: 2 d 0 2017-05-24 2017-11-24
3: 2 c 1 2019-07-12 2019-12-01
4: 3 a 0 2016-05-08 2018-08-01
5: 4 a 0 2017-02-24 2017-08-24
6: 4 b 0 2017-08-24 2018-08-24
7: 4 a 1 2019-06-12 2019-12-12
data:
library(data.table)
df <- data.table(id=c(1,2,2,2,3,3,4,4,4),
start_date=as.IDate(c("2019-05-08","2019-08-01","2019-07-12","2017-05-24","2016-05-08","2017-08-01","2019-06-12","2017-02-24","2017-08-24")),
end_date=as.IDate(c("2019-09-08","2019-12-01","2019-07-30","2017-11-24","2017-07-25","2018-08-01","2019-12-12","2017-08-24","2018-08-24")),
variable1=c("a","c","c","d","a",NA,"a","a","b"))

How to iterate rows between start_date and end_date in R

I have a dataframe with one row per person, containing a start date and an end date (shown as an image in the original post; the answers below reconstruct similar data).
And here is the output I'm hoping for (also shown as an image): one row per person for every time step between start_date and end_date.
This should work. The key is to use uncount() from the tidyr package. Then you need to do some date arithmetic; there are some tricky issues in calculating the difference in months. What I propose here may not be the best way to do it, but you get the idea.
library(tidyverse)
library(lubridate)
df = tibble(name = c('Alice', 'Bob', 'Caroline'),
            start_date = as.Date(c('2019-01-01', '2018-03-01', '2019-06-01')),
            end_date = as.Date(c('2019-07-01', '2019-05-01', '2019-09-01')))
# # A tibble: 3 x 3
# name start_date end_date
# <chr> <date> <date>
# 1 Alice 2019-01-01 2019-07-01
# 2 Bob 2018-03-01 2019-05-01
# 3 Caroline 2019-06-01 2019-09-01
df %>%
  mutate(tenure_in_month = as.integer(difftime(end_date, start_date, units = "days") / 365 * 12 + 2)) %>%
  uncount(tenure_in_month) %>%
  group_by(name) %>%
  mutate(iteratedDate = start_date %m+% months(row_number() - 1)) %>%
  select(name, iteratedDate)
# A tibble: 28 x 2
# Groups: name [3]
name iteratedDate
<chr> <date>
1 Alice 2019-01-01
2 Alice 2019-02-01
3 Alice 2019-03-01
4 Alice 2019-04-01
5 Alice 2019-05-01
6 Alice 2019-06-01
7 Alice 2019-07-01
8 Bob 2018-03-01
9 Bob 2018-04-01
10 Bob 2018-05-01
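A possible alternative to the tenure arithmetic above (my own sketch, not part of the answer): build the monthly sequence directly with seq() and unnest it, which stops at end_date since the end dates here fall exactly on the monthly grid.
library(tidyverse)
df %>%
  mutate(iteratedDate = map2(start_date, end_date, seq, by = "month")) %>%  # list-column of monthly dates
  unnest(cols = iteratedDate) %>%
  select(name, iteratedDate)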
I use the seq() function to solve this problem:
library(data.table)
library(lubridate)
library(magrittr)  # provides the %>% pipe used below
# data
original_data <- data.table(
CustomerName = c('Ben','Julie','Angelo','Carlo'),
StartDate = c(ymd(20190101),ymd(20180103),ymd(20190106),ymd(20170108)),
EndDate = c(ymd(20190107),ymd(20190105),ymd(20190109),ymd(20180112))
)
# CustomerName StartDate EndDate
#1: Ben 2019-01-01 2019-01-07
#2: Julie 2018-01-03 2019-01-05
#3: Angelo 2019-01-06 2019-01-09
#4: Carlo 2017-01-08 2018-01-12
finish_data <- original_data %>%
  .[, .(IteratedDate = seq(from = StartDate, to = EndDate, by = 'day')),
    by = .(CustomerName)]
# CustomerName IteratedDate
#1: Ben 2019-01-01
#2: Ben 2019-01-02
#3: Ben 2019-01-03
#4: Ben 2019-01-04
#5: Ben 2019-01-05
#6: Ben 2019-01-06
#7: Ben 2019-01-07
#8: Julie 2018-01-03
#9: Julie 2018-01-04

Aggregate one data frame by time intervals from another data frame

I'm trying to aggregate two data frames (df1 and df2).
The first contains 3 variables: ID, Date1 and Date2.
df1
ID Date1 Date2
1 2016-03-01 2016-04-01
1 2016-04-01 2016-05-01
2 2016-03-14 2016-04-15
2 2016-04-15 2016-05-17
3 2016-05-01 2016-06-10
3 2016-06-10 2016-07-15
The second also contains 3 variables: ID, Date3 and Value.
df2
ID Date3 Value
1 2016-03-15 5
1 2016-04-04 7
1 2016-04-28 7
2 2016-03-18 3
2 2016-03-27 5
2 2016-04-08 9
2 2016-04-20 2
3 2016-05-05 6
3 2016-05-25 8
3 2016-06-13 3
The idea is to get, for each df1 row, the sum of the df2$Value entries that have the same ID and whose Date3 is between Date1 and Date2:
ID Date1 Date2 SumValue
1 2016-03-01 2016-04-01 5
1 2016-04-01 2016-05-01 14
2 2016-03-14 2016-04-15 17
2 2016-04-15 2016-05-17 2
3 2016-05-01 2016-06-10 14
3 2016-06-10 2016-07-15 3
I know how to do this with a loop, but the data frames are huge! Does anyone have an efficient solution? I have been exploring data.table, plyr and dplyr but could not find one.
A couple of data.table solutions that should scale well (and a good stop-gap until non-equi joins are implemented):
Do the comparison in j using by = .EACHI:
library(data.table)
setDT(df1)
setDT(df2)
df1[, `:=`(Date1 = as.Date(Date1), Date2 = as.Date(Date2))]
df2[, Date3 := as.Date(Date3)]
df1[df2,
    {
      idx = Date1 <= i.Date3 & i.Date3 <= Date2
      .(Date1 = Date1[idx],
        Date2 = Date2[idx],
        Date3 = i.Date3,
        Value = i.Value)
    },
    on = c("ID"),
    by = .EACHI][, .(sumValue = sum(Value)), by = .(ID, Date1, Date2)]
# ID Date1 Date2 sumValue
# 1: 1 2016-03-01 2016-04-01 5
# 2: 1 2016-04-01 2016-05-01 14
# 3: 2 2016-03-14 2016-04-15 17
# 4: 2 2016-04-15 2016-05-17 2
# 5: 3 2016-05-01 2016-06-10 14
# 6: 3 2016-06-10 2016-07-15 3
foverlaps join (as suggested in the comments):
library(data.table)
setDT(df1)
setDT(df2)
df1[, `:=`(Date1 = as.Date(Date1), Date2 = as.Date(Date2))]
df2[, Date3 := as.Date(Date3)]
df2[, Date4 := Date3]
setkey(df1, ID, Date1, Date2)
foverlaps(df2,
          df1,
          by.x = c("ID", "Date3", "Date4"),
          type = "within")[, .(sumValue = sum(Value)), by = .(ID, Date1, Date2)]
# ID Date1 Date2 sumValue
# 1: 1 2016-03-01 2016-04-01 5
# 2: 1 2016-04-01 2016-05-01 14
# 3: 2 2016-03-14 2016-04-15 17
# 4: 2 2016-04-15 2016-05-17 2
# 5: 3 2016-05-01 2016-06-10 14
# 6: 3 2016-06-10 2016-07-15 3
Further reading
Rolling join on data.table with duplicate keys
foverlap joins in data.table
With the non-equi joins feature (implemented in the then-development version of data.table, v1.9.7, and available in releases since v1.9.8), this can be done as follows:
df2[df1, .(sum = sum(Value)), on = .(ID, Date3 >= Date1, Date3 <= Date2), by = .EACHI]
# ID Date3 Date3 sum
# 1: 1 2016-03-01 2016-04-01 5
# 2: 1 2016-04-01 2016-05-01 14
# 3: 2 2016-03-14 2016-04-15 17
# 4: 2 2016-04-15 2016-05-17 2
# 5: 3 2016-05-01 2016-06-10 14
# 6: 3 2016-06-10 2016-07-15 3
The column names need some fixing (the join columns in the output take their names from df2 but their values from df1); I will work on it later.
Here's a base R solution using sapply():
df1 <- data.frame(ID = c(1L,1L,2L,2L,3L,3L), Date1 = as.Date(c('2016-03-01','2016-04-01','2016-03-14','2016-04-15','2016-05-01','2016-06-01')),
                  Date2 = as.Date(c('2016-04-01','2016-05-01','2016-04-15','2016-05-17','2016-06-15','2016-07-15')))
df2 <- data.frame(ID = c(1L,1L,1L,2L,2L,2L,2L,3L,3L,3L), Date3 = as.Date(c('2016-03-15','2016-04-04','2016-04-28','2016-03-18','2016-03-27','2016-04-08','2016-04-20','2016-05-05','2016-05-25','2016-06-13')),
                  Value = c(5L,7L,7L,3L,5L,9L,2L,6L,8L,3L))
cbind(df1, SumValue = sapply(seq_len(nrow(df1)), function(ri)
  sum(df2$Value[df1$ID[ri] == df2$ID & df1$Date1[ri] <= df2$Date3 & df1$Date2[ri] > df2$Date3])))
## ID Date1 Date2 SumValue
## 1 1 2016-03-01 2016-04-01 5
## 2 1 2016-04-01 2016-05-01 14
## 3 2 2016-03-14 2016-04-15 17
## 4 2 2016-04-15 2016-05-17 2
## 5 3 2016-05-01 2016-06-15 17
## 6 3 2016-06-01 2016-07-15 3
Note that your df1 and expected output have slightly different dates in some cases; I used the df1 dates.
Here's another approach that attempts to be more vectorized: Precompute a cartesian product of indexes into the two frames, then perform a single vectorized conditional expression using the index vectors to get matching pairs of indexes, and finally use the matching indexes to aggregate the desired result:
cbind(df1, SumValue = with(expand.grid(i1 = seq_len(nrow(df1)), i2 = seq_len(nrow(df2))), {
  x <- df1$ID[i1] == df2$ID[i2] & df1$Date1[i1] <= df2$Date3[i2] & df1$Date2[i1] > df2$Date3[i2]
  tapply(df2$Value[i2[x]], i1[x], sum)
}))
## ID Date1 Date2 SumValue
## 1 1 2016-03-01 2016-04-01 5
## 2 1 2016-04-01 2016-05-01 14
## 3 2 2016-03-14 2016-04-15 17
## 4 2 2016-04-15 2016-05-17 2
## 5 3 2016-05-01 2016-06-15 17
## 6 3 2016-06-01 2016-07-15 3
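Since the question also mentions dplyr: with dplyr >= 1.1.0 the same aggregation can be written with an inequality join via join_by() (my own sketch, not one of the answers above; it uses the question's df1/df2 and closed intervals on both ends):
library(dplyr)
df1 %>%
  mutate(across(c(Date1, Date2), as.Date)) %>%
  left_join(df2 %>% mutate(Date3 = as.Date(Date3)),
            by = join_by(ID, Date1 <= Date3, Date2 >= Date3)) %>%
  group_by(ID, Date1, Date2) %>%
  summarise(SumValue = sum(Value, na.rm = TRUE), .groups = "drop")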
