Average days between 2 date columns in R

I have 2 date columns in a dataframe with more than 100k observations:
date1                 startdate
2020-07-30 23:00:00   NA
2020-12-10 04:00:00   2021-06-30 20:00:00
2020-10-26 21:00:00   NA
2019-12-03 03:01:00   2020-02-01 01:00:00
NA                    2020-06-28 07:30:00
I have to fill the missing values in the startdate column. My idea is to compute the average number of days between date1 and startdate, and then replace each NA in startdate with the corresponding date1 plus this average.
DESIRED OUTPUT
For instance, if the average is 70 days, then:
date1                 startdate
2020-07-30 23:00:00   2020-10-08 23:00:00
2020-12-10 04:00:00   2021-06-30 20:00:00
2020-10-26 21:00:00   2021-01-04 21:00:00
2019-12-03 03:01:00   2020-02-01 01:00:00
NA                    2020-06-28 07:30:00
Reproducible example:
structure(list(date1 = structure(c(1594069500,
1575320400, 1603742400, NA, 1574975100, 1570845660, 1575061500,
1564714860, 1576544400, 1574802300, 1576198800, 1575338460, 1575666180,
NA, 1594327800, 1595365200, 1594069800, 1591905600, 1594414800,
NA), class = c("POSIXct", "POSIXt"), tzone = ""), startdate = structure(c(1599242400,
1577127600, NA, 1603396800, 1577516400, 1573714800, 1577689200,
1566374400, 1577343600, 1577516400, 1577343600, NA, 1577257200,
NA, 1605193200, 1605106800, 1600358400, 1600358400, 1600272000,
NA), class = c("POSIXct", "POSIXt"), tzone = "UTC")), row.names = c(1L,
2L, 7591L, 8301L, 8692L, 8694L, 8699L, 8703L, 8706L, 8709L, 8710L,
8714L, 8715L, 8730L, 8732L, 8733L, 8736L, 8740L, 8745L, 8749L
), class = "data.frame")

You can use difftime to calculate the average duration between startdate and date1, then replace the NA values in startdate by adding that average to date1.
# average gap in seconds between the two columns, ignoring incomplete pairs
avg <- as.numeric(mean(difftime(df$startdate, df$date1, units = 'secs'), na.rm = TRUE))
# fill the missing startdate values by shifting date1 forward by that average
df$startdate[is.na(df$startdate)] <- df$date1[is.na(df$startdate)] + avg
df
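If you want to sanity-check the magnitude of the shift first, the same difftime call with units = 'days' reports the average gap in days (a quick check on the dput data above, read into df):

mean(difftime(df$startdate, df$date1, units = 'days'), na.rm = TRUE)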

Something like this? I cannot verify, since your desired output does not match your sample data.
library(data.table)
setDT(mydata)
# mean absolute gap between the two columns, computed over the whole table
mydata[is.na(startdate) & !is.na(date1),
       startdate := date1 + round(mean(abs(mydata$date1 - mydata$startdate), na.rm = TRUE))]

Related

if_else statement returning wrong values

I have a dataframe with this data:
# A tibble: 6 × 3
rowid Arrival Depart
<int> <dttm> <dttm>
1 1 2023-02-11 07:00:00 2023-02-11 17:30:00
2 2 2023-02-13 10:00:00 2023-02-13 18:00:00
3 3 2023-02-14 08:00:00 2023-02-14 17:00:00
4 4 2023-02-15 08:00:00 2023-02-15 17:00:00
5 5 2023-02-16 08:00:00 2023-02-16 18:00:00
6 6 2023-02-18 07:00:00 2023-02-18 17:30:00
structure(list(rowid = 1:6, Arrival = structure(c(1676098800,
1676282400, 1676361600, 1676448000, 1676534400, 1676703600), tzone = "UTC", class = c("POSIXct",
"POSIXt")), Depart = structure(c(1676136600, 1676311200, 1676394000,
1676480400, 1676570400, 1676741400), tzone = "UTC", class = c("POSIXct",
"POSIXt"))), row.names = c(NA, -6L), class = c("tbl_df", "tbl",
"data.frame"))
I set the following objects:
ri <- 2
int <- test_int[2]
int (an interval) becomes:
> int
[1] 2023-02-13 06:00:00 EST--2023-02-13 13:00:00 EST
and then run this code:
test <- test %>%
  mutate(
    interval_start = if_else(rowid == ri, int_start(int), Arrival),
    interval_end = if_else(rowid == ri, int_end(int), Depart)
  ) %>%
  select(Arrival, interval_start, Depart, interval_end)
The result is this:
# A tibble: 6 × 4
Arrival interval_start Depart interval_end
<dttm> <dttm> <dttm> <dttm>
1 2023-02-11 07:00:00 2023-02-11 02:00:00 2023-02-11 17:30:00 2023-02-11 12:30:00
2 2023-02-13 10:00:00 2023-02-13 06:00:00 2023-02-13 18:00:00 2023-02-13 13:00:00
3 2023-02-14 08:00:00 2023-02-14 03:00:00 2023-02-14 17:00:00 2023-02-14 12:00:00
4 2023-02-15 08:00:00 2023-02-15 03:00:00 2023-02-15 17:00:00 2023-02-15 12:00:00
5 2023-02-16 08:00:00 2023-02-16 03:00:00 2023-02-16 18:00:00 2023-02-16 13:00:00
6 2023-02-18 07:00:00 2023-02-18 02:00:00 2023-02-18 17:30:00 2023-02-18 12:30:00
structure(list(Arrival = structure(c(1676098800, 1676282400,
1676361600, 1676448000, 1676534400, 1676703600), tzone = "UTC", class = c("POSIXct",
"POSIXt")), interval_start = structure(c(1676098800, 1676286000,
1676361600, 1676448000, 1676534400, 1676703600), class = c("POSIXct",
"POSIXt")), Depart = structure(c(1676136600, 1676311200, 1676394000,
1676480400, 1676570400, 1676741400), tzone = "UTC", class = c("POSIXct",
"POSIXt")), interval_end = structure(c(1676136600, 1676311200,
1676394000, 1676480400, 1676570400, 1676741400), class = c("POSIXct",
"POSIXt"))), row.names = c(NA, -6L), class = c("tbl_df", "tbl",
"data.frame"))
For some reason the if_else statement is returning some weird times instead of the values for Arrival/Depart, but it is correctly mutating rowid 2 to the correct time.
Does anyone know why this could be and how I can fix it?
Thanks to @George Savva, I was able to see that the erroneous values I was getting were because my int variable was in the EST timezone instead of UTC, which is why there was a 5-hour difference.
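A minimal sketch of that fix (assuming test_int is the interval vector from the original code): rebuild int with UTC endpoints before running the mutate, so both if_else branches carry the same tzone:

library(lubridate)

int <- test_int[2]
# with_tz() keeps the underlying instant and only changes the tzone
# attribute, so the interval endpoints now agree with the UTC columns
int <- interval(with_tz(int_start(int), "UTC"),
                with_tz(int_end(int), "UTC"))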

R: Loop through a set of values in one dataframe to update a second dataframe

Updated to a more realistic example; this time I added duplicates in interp_b.
I am trying to populate a field in one dataframe (interp_b) using the values from a second dataframe (bait). I want to look at each row's obs_datetime in interp_b, and determine when that plot-station-year was last baited, prior to the obs_datetime. This will later be used to calculate a time-since-bait for each obs_datetime. Bait times are in the bait dataframe in column bait_datetime. The results should go in a field called latestbait_datetime in the interp_b dataframe.
I was visualizing an iterative process where interp_b's latestbait_datetime keeps getting recalculated until the last row of the bait dataframe is reached. The for loop I tried clearly runs through the rows and does the specified calculations, but I can't get the output in the format I want; it produces output for each loop iteration rather than rewriting and updating the interp_b dataframe.
Here is some code to build the two dataframes, interp_b and bait (please excuse the inelegance):
# interp_b dataframe----
structure(list(plot_station_year = c("Cow_C2_2019", "RidingStable_C3_2018",
"RidingStable_C3_2018", "Raf_C1_2018", "Metcalfe_C2_2019"), obs_datetime = structure(c(1559487600,
1544954400, 1541084400, 1515160800, 1567756800), class = c("POSIXct",
"POSIXt"), tzone = "UTC"), latestbait_datetime = structure(c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_), class = c("POSIXct",
"POSIXt"))), class = c("spec_tbl_df", "tbl_df", "tbl", "data.frame"
), row.names = c(NA, -5L))
# bait dataframe----
structure(list(plot_station_year = c("Cow_C2_2019", "Cow_C2_2019",
"RidingStable_C3_2018", "Raf_C1_2018"), bait_datetime = structure(c(1557500400,
1559746800, 1543676400, 1491318000), class = c("POSIXct", "POSIXt"
), tzone = "UTC")), class = c("spec_tbl_df", "tbl_df", "tbl",
"data.frame"), row.names = c(NA, -4L), spec = structure(list(
cols = list(plot_station_year = structure(list(), class = c("collector_character",
"collector")), bait_datetime = structure(list(format = "%d-%m-%Y %H:%M"), class = c("collector_datetime",
"collector"))), default = structure(list(), class = c("collector_guess",
"collector")), skip = 1), class = "col_spec"))
and the desired result would look like this:

plot_station_year     obs_datetime         latestbait_datetime
Cow_C2_2019           2019-06-02 15:00:00  2019-05-10 11:00:00
RidingStable_C3_2018  2018-12-16 10:00:00  2018-12-01 10:00:00
RidingStable_C3_2018  2018-11-01 15:00:00  NA
Raf_C1_2018           2018-01-05 14:00:00  2017-04-04 11:00:00
Metcalfe_C2_2019      2019-09-06 08:00:00  NA
Below are two of my attempts. The first resulted in a dataframe that only contained the final run of the loop and the second attempt resulted in a dataframe containing all of the run results (as expected with the bind).
library(tidyverse)
#attempt #1----
for (i in 1:nrow(bait)) {
  print(paste("row =", i))
  interpbait <- interp_b %>%
    mutate(latestbait_datetime = if_else(
      plot_station_year == bait$plot_station_year[i] &
        obs_datetime >= bait$bait_datetime[i] &
        (is.na(latestbait_datetime) | latestbait_datetime < bait$bait_datetime[i]),
      bait$bait_datetime[i],
      latestbait_datetime
    ))
}
#attempt #2----
resultb <- data.frame()
for (i in 1:nrow(bait)) {
  print(paste("row =", i))
  interpbait2 <- interp_b %>%
    mutate(latestbait_datetime = if_else(
      plot_station_year == bait$plot_station_year[i] &
        obs_datetime >= bait$bait_datetime[i] &
        (is.na(latestbait_datetime) | latestbait_datetime < bait$bait_datetime[i]),
      bait$bait_datetime[i],
      latestbait_datetime
    ))
  resultb <- bind_rows(resultb, interpbait2)
  print(resultb)
}
Any help would be greatly appreciated.
I'm not sure how long this will take, but here is a tidyverse solution. For each row in interp_b, we filter the bait dataframe to the matching plot_station_year and keep only the date-times earlier than that row's obs_datetime. Then we arrange the filtered bait data by descending datetime (so the most recent dates are on top) and slice off the first row, so that we only get the most recent date. Finally we "pull" the date-time out of the dataframe and write it onto the appropriate row of interp_b.
library(tidyverse)
library(progress) # for progress bar
# create progress bar to update, so that you can estimate the amount of time it will take to finish the entire loop
pb <- progress_bar$new(total = nrow(interp_b))
for (i in 1:nrow(interp_b)) {
  last_time_baited <- bait %>%
    # filter bait dataframe to the appropriate plot, station, year based on
    # the row in interp_b
    filter(plot_station_year == interp_b$plot_station_year[i],
           # keep only datetimes earlier than that row's obs_datetime
           bait_datetime < interp_b$obs_datetime[i]) %>%
    # arrange by datetime (most recent datetimes first)
    arrange(desc(bait_datetime)) %>%
    # take the top row - this will be the most recent date-time that
    # the plot-station was baited
    slice(1) %>%
    # "pull" that value out of the dataframe so you have a value,
    # not a tibble
    pull(bait_datetime)
  # update the row in interp_b with the date_time baited; skip rows that
  # were never baited, where the pipeline above returns a zero-length value
  if (length(last_time_baited) == 1) {
    interp_b$latestbait_datetime[i] <- last_time_baited
  }
  pb$tick() # print progress
}
The resulting table matches your expected output (interp_b):
# A tibble: 5 x 3
plot_station_year obs_datetime latestbait_datetime
<chr> <dttm> <dttm>
1 Cow_C2_2019 2019-06-02 15:00:00 2019-05-10 11:00:00
2 RidingStable_C3_2018 2018-12-16 10:00:00 2018-12-01 10:00:00
3 RidingStable_C3_2018 2018-11-01 15:00:00 NA
4 Raf_C1_2018 2018-01-05 14:00:00 2017-04-04 11:00:00
5 Metcalfe_C2_2019 2019-09-06 08:00:00 NA
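With latestbait_datetime filled in, the time-since-bait mentioned in the question is one more line (a sketch of that follow-up step, not part of the original code; rows never baited stay NA):

interp_b$time_since_bait <- difftime(interp_b$obs_datetime,
                                     interp_b$latestbait_datetime,
                                     units = "days")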
You could perform an outer join with data.table, and then select the highest bait_datetime for each plot_station_year.
Edit: I edited my answer to reflect the possibility that there could be multiple obs_datetime for a given unique plot_station_year in interp2. To preserve these, we index them and include the index in the filtering step.
One potential improvement with large files (not tested) could be to merge using roll, instead of performing an outer merge and then filtering.
That version is shown at the end of the reproducible example:
library(data.table)
interp2 <- structure(list(plot_station_year = c("Cow_C2_2019", "Cow_C2_2019", "RidingStable_C3_2018",
"Raf_C1_2018", "Metcalfe_C2_2019"), obs_datetime = structure(c(1559487600, 1559487300,
1544954400, 1515160800, 1567756800), class = c("POSIXct", "POSIXt"
), tzone = "UTC"), latestbait_datetime = structure(c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_), class = c("POSIXct", "POSIXt"))), class = c("spec_tbl_df",
"tbl_df", "tbl", "data.frame"), row.names = c(NA, -5L))
bait2 <- structure(list(plot_station_year = c("Cow_C2_2019", "Cow_C2_2019", "Cow_C2_2019",
"RidingStable_C3_2018", "Raf_C1_2018"), bait_datetime = structure(c(1557500400,
1496674800, 1576674800, 1543676400, 1491318000), class = c("POSIXct", "POSIXt"
), tzone = "UTC")), class = c("spec_tbl_df", "tbl_df", "tbl",
"data.frame"), row.names = c(NA, -5L), spec = structure(list(
cols = list(plot_station_year = structure(list(), class = c("collector_character",
"collector")), bait_datetime = structure(list(format = "%d-%m-%Y %H:%M"), class = c("collector_datetime",
"collector"))), default = structure(list(), class = c("collector_guess",
"collector")), skip = 1), class = "col_spec"))
# add index idx by plot_station_year, remove empty column, set keys
setDT(interp2)[, "latestbait_datetime" := NULL][, idx := 1:.N, by=plot_station_year]
setkeyv(interp2, c("plot_station_year", "idx", "obs_datetime"))
# same for bait2: set as data.table, set keys
setDT(bait2, key=c("plot_station_year", "bait_datetime"))
## option 1: merge files, then filter
# outer join on interp2 and bait2 on first column (and order by bait_datetime)
expected_out <- merge(interp2, bait2, by="plot_station_year", all=TRUE)
# set keys for sorting
setkey(expected_out, plot_station_year, idx, bait_datetime)
# select highest bait_datetime below obs_datetime by plot_station_year and idx
expected_out <- expected_out[is.na(bait_datetime) | bait_datetime < obs_datetime][,
tail(.SD, 1), by=.(plot_station_year, idx)]
# rename and sort columns
setnames(expected_out, old="bait_datetime", new="latestbait_datetime")
setorder(expected_out, -latestbait_datetime, idx, na.last = TRUE)[]
#> plot_station_year idx obs_datetime latestbait_datetime
#> 1: Cow_C2_2019 1 2019-06-02 15:00:00 2019-05-10 15:00:00
#> 2: Cow_C2_2019 2 2019-06-02 14:55:00 2019-05-10 15:00:00
#> 3: RidingStable_C3_2018 1 2018-12-16 10:00:00 2018-12-01 15:00:00
#> 4: Raf_C1_2018 1 2018-01-05 14:00:00 2017-04-04 15:00:00
#> 5: Metcalfe_C2_2019 1 2019-09-06 08:00:00 <NA>
## option 2 (might use less memory): rolling join
bait2[, latestbait_datetime := bait_datetime]
out_alt <- bait2[interp2, .(plot_station_year, obs_datetime, idx, latestbait_datetime),
on=c("plot_station_year", "bait_datetime==obs_datetime"), roll=Inf]
# order
setorder(out_alt, -latestbait_datetime, idx, na.last = TRUE)[]
#> plot_station_year obs_datetime idx latestbait_datetime
#> 1: Cow_C2_2019 2019-06-02 15:00:00 1 2019-05-10 15:00:00
#> 2: Cow_C2_2019 2019-06-02 14:55:00 2 2019-05-10 15:00:00
#> 3: RidingStable_C3_2018 2018-12-16 10:00:00 1 2018-12-01 15:00:00
#> 4: Raf_C1_2018 2018-01-05 14:00:00 1 2017-04-04 15:00:00
#> 5: Metcalfe_C2_2019 2019-09-06 08:00:00 1 <NA>
setcolorder(out_alt, c(1,3,2,4))[]
#> plot_station_year idx obs_datetime latestbait_datetime
#> 1: Cow_C2_2019 1 2019-06-02 15:00:00 2019-05-10 15:00:00
#> 2: Cow_C2_2019 2 2019-06-02 14:55:00 2019-05-10 15:00:00
#> 3: RidingStable_C3_2018 1 2018-12-16 10:00:00 2018-12-01 15:00:00
#> 4: Raf_C1_2018 1 2018-01-05 14:00:00 2017-04-04 15:00:00
#> 5: Metcalfe_C2_2019 1 2019-09-06 08:00:00 <NA>
## test that both options give the same result:
identical(expected_out, out_alt)
#> [1] TRUE

Dealing with time in R: difference between two times

I have two columns:

starttime             endtime
2019-11-05 18:02:04   2019-11-05 00:02:04
2019-08-02 20:18:00   2019-08-02 01:10:00
2019-12-07 17:28:00   2019-12-07 18:00:00
I am trying to find the difference in time between starttime and endtime:

mutate(col = difftime(endtime, starttime, units = "hours"))

but I am getting negative hours, which makes no sense, and I need it to stay endtime - starttime because reversing the order would mess up other things in my dataframe. I believe the 0.533 is right. I got:

col
-18
-19
0.533
We can increment the endtime by 1 day if starttime > endtime and then use difftime:
library(dplyr)
df %>%
  mutate(endtime = if_else(starttime > endtime, endtime + 86400, endtime), # 86400 s = 1 day
         col = difftime(endtime, starttime, units = "hours"))
# starttime endtime col
#1 2019-11-05 18:02:04 2019-11-06 00:02:04 6.0000000 hours
#2 2019-08-02 20:18:00 2019-08-03 01:10:00 4.8666667 hours
#3 2019-12-07 17:28:00 2019-12-07 18:00:00 0.5333333 hours
data
df <- structure(list(starttime = structure(c(1572976924, 1564777080,
1575739680), class = c("POSIXct", "POSIXt"), tzone = "UTC"),
endtime = structure(c(1572912124, 1564708200, 1575741600), class = c("POSIXct",
"POSIXt"), tzone = "UTC")), row.names = c(NA, -3L), class = "data.frame")

Compare timestamps based on multiple criteria from multiple rows and columns

I have two data frames with timestamps (in as.POSIXct, format="%Y-%m-%d %H:%M:%S") as below.
df_ID1
ID DATETIME TIMEDIFF EV
A 2019-03-26 06:13:00 2019-03-26 00:13:00 1
B 2019-04-03 08:00:00 2019-04-03 02:00:00 1
B 2019-04-04 12:35:00 2019-04-04 06:35:00 1
df_ID0
ID DATETIME
A 2019-03-26 00:02:00
A 2019-03-26 04:55:00
A 2019-03-26 11:22:00
B 2019-04-02 20:43:00
B 2019-04-04 11:03:00
B 2019-04-06 03:12:00
I want to compare the DATETIME in df_ID1 with the DATETIME in df_ID0 that has the same ID and is "smaller than but closest to" the one in df_ID1.
For each matched pair across the two data frames, I then want to compare the TIMEDIFF in df_ID1 with the matched DATETIME in df_ID0: if TIMEDIFF is greater than the matched DATETIME, change EV from 1 to 4 in df_ID1.
My desired result is
df_ID1
ID DATETIME TIMEDIFF EV
A 2019-03-26 06:13:00 2019-03-26 00:13:00 1
B 2019-04-03 08:00:00 2019-04-03 02:00:00 4
B 2019-04-04 12:35:00 2019-04-04 06:35:00 1
I've checked how to compare timestamps and calculate time differences, and how to change values based on criteria, but I cannot find anything that selects the "smaller than but closest to" timestamp, and I cannot figure out how to combine all of this logic. Any help would be appreciated!
You can do this with a for loop, keeping in mind that if your actual dataset is very big, the loop overhead will hurt performance.
for (i in 1:nrow(df_ID1)) {
  sub <- subset(df_ID0, ID == df_ID1$ID[i]) # filter on ID
  # take the max of the DATETIMEs that are earlier than df_ID1's
  # (i.e. less than but closest to)
  df_0_dt <- max(sub[sub$DATETIME < df_ID1$DATETIME[i], ]$DATETIME)
  if (df_0_dt < df_ID1$TIMEDIFF[i]) { # final condition
    df_ID1[i, "EV"] <- 4
  }
}
df_ID1
# A tibble: 3 x 4
ID DATETIME TIMEDIFF EV
<chr> <dttm> <dttm> <dbl>
1 A 2019-03-26 06:13:00 2019-03-26 00:13:00 1
2 B 2019-04-03 08:00:00 2019-04-03 02:00:00 4
3 B 2019-04-04 12:35:00 2019-04-04 06:35:00 1
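If the loop is too slow on a big dataset, a data.table rolling join is a possible alternative (a sketch, untested at scale; note that roll = Inf matches on <=, whereas the loop above uses a strict <):

library(data.table)

d0 <- as.data.table(df_ID0)
d1 <- as.data.table(df_ID1)
# carry a copy of df_ID0's DATETIME through the join so it can be compared
d0[, matched := DATETIME]
# for each df_ID1 row, roll = Inf picks the last df_ID0 DATETIME at or
# before df_ID1's DATETIME ("smaller than but closest to")
res <- d0[d1, on = .(ID, DATETIME), roll = Inf]
res[!is.na(matched) & matched < TIMEDIFF, EV := 4]
res[, matched := NULL][] # drop the helper column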
One option, using nested mapply, is to first split df_ID1 and df_ID0 by ID. For each value in df_ID1, calculate the time difference against every DATETIME of df_ID0 with the same ID, get the index of the "smaller than but closest to" match (stored in inds), and change EV to 4 where the corresponding TIMEDIFF value is greater than the matched DATETIME value.
df_ID1$EV[unlist(mapply(function(x, y) {
mapply(function(p, q) {
vals = as.numeric(difftime(p, y$DATETIME))
inds = which(vals == min(vals[vals > 0]))
q > y$DATETIME[inds]
}, x$DATETIME, x$TIMEDIFF)
}, split(df_ID1, df_ID1$ID), split(df_ID0, df_ID0$ID)))] <- 4
df_ID1
# ID DATETIME TIMEDIFF EV
#1 A 2019-03-26 06:13:00 2019-03-26 00:13:00 1
#2 B 2019-04-03 08:00:00 2019-04-03 02:00:00 4
#3 B 2019-04-04 12:35:00 2019-04-04 06:35:00 1
data
df_ID0 <- structure(list(ID = structure(c(1L, 1L, 1L, 2L, 2L, 2L),
.Label = c("A",
"B"), class = "factor"), DATETIME = structure(c(1553529720, 1553547300,
1553570520, 1554208980, 1554346980, 1554491520), class = c("POSIXct",
"POSIXt"), tzone = "")), row.names = c(NA, -6L), class = "data.frame")
df_ID1 <- structure(list(ID = structure(c(1L, 2L, 2L), .Label = c("A",
"B"), class = "factor"), DATETIME = structure(c(1553551980, 1554249600,
1554352500), class = c("POSIXct", "POSIXt"), tzone = ""), TIMEDIFF =
structure(c(1553530380,
1554228000, 1554330900), class = c("POSIXct", "POSIXt"), tzone = ""),
EV = c(1, 1, 1)), row.names = c(NA, -3L), class = "data.frame")

union/merge overlapping time-ranges [duplicate]

This question already has answers here:
How to flatten / merge overlapping time periods
(5 answers)
Closed 4 years ago.
I know the following problem can be solved using Bioconductor's IRanges package, using reduce.
But since that function only accepts numeric input, and I am working with data.table anyway, I am wondering if the following can be achieved using data.table's foverlaps().
Sample data
structure(list(group = c("A", "A", "A", "A", "B", "B", "B", "B"
), subgroup = c(1, 1, 2, 2, 1, 1, 2, 2), start = structure(c(1514793600,
1514795400, 1514794200, 1514798100, 1514815200, 1514817000, 1514815800,
1514818800), class = c("POSIXct", "POSIXt"), tzone = "UTC"),
end = structure(c(1514794500, 1514797200, 1514794800, 1514799000,
1514816100, 1514818800, 1514817600, 1514820600), class = c("POSIXct",
"POSIXt"), tzone = "UTC")), row.names = c(NA, -8L), class = c("tbl_df",
"tbl", "data.frame"))
# group subgroup start end
# 1: A 1 2018-01-01 08:00:00 2018-01-01 08:15:00
# 2: A 1 2018-01-01 08:30:00 2018-01-01 09:00:00
# 3: A 2 2018-01-01 08:10:00 2018-01-01 08:20:00
# 4: A 2 2018-01-01 09:15:00 2018-01-01 09:30:00
# 5: B 1 2018-01-01 14:00:00 2018-01-01 14:15:00
# 6: B 1 2018-01-01 14:30:00 2018-01-01 15:00:00
# 7: B 2 2018-01-01 14:10:00 2018-01-01 14:40:00
# 8: B 2 2018-01-01 15:00:00 2018-01-01 15:30:00
Question
What I would like to achieve is to join/merge events (by group) when:
a range (start - end) overlaps (or partially overlaps) another range
the start of a range is the end of another range
Subgroups can be ignored
As mentioned above, I know this can be done using Bioconductor's IRanges reduce, but I wonder if the same can be achieved using data.table. I can't shake the feeling that foverlaps should be able to tackle my problem, but I cannot figure out how...
Since I'm an intermediate R user but pretty much a novice in data.table, it's hard for me to 'read' some of the solutions already provided on Stack Overflow, so I'm not sure if a similar question has already been asked and answered (if so, please be gentle ;-) ).
Desired output
structure(list(group = c("A", "A", "A", "B"), start = structure(c(1514793600,
1514795400, 1514798100, 1514815200), class = c("POSIXct", "POSIXt"
), tzone = "UTC"), end = structure(c(1514794800, 1514797200,
1514799000, 1514820600), class = c("POSIXct", "POSIXt"), tzone = "UTC")), row.names = c(NA,
-4L), class = c("tbl_df", "tbl", "data.frame"))
# group start end
# 1: A 2018-01-01 08:00:00 2018-01-01 08:20:00
# 2: A 2018-01-01 08:30:00 2018-01-01 09:00:00
# 3: A 2018-01-01 09:15:00 2018-01-01 09:30:00
# 4: B 2018-01-01 14:00:00 2018-01-01 15:30:00
If you arrange on group and start (in that order) and unselect the indx column, this solution posted by David Arenburg works perfectly: How to flatten/merge overlapping time periods in R
library(dplyr)
df1 %>%
  group_by(group) %>%
  arrange(group, start) %>%
  mutate(indx = c(0, cumsum(as.numeric(lead(start)) >
                            cummax(as.numeric(end)))[-n()])) %>%
  group_by(group, indx) %>%
  summarise(start = first(start), end = last(end)) %>%
  select(-indx)
group start end
<chr> <dttm> <dttm>
1 A 2018-01-01 08:00:00 2018-01-01 08:20:00
2 A 2018-01-01 08:30:00 2018-01-01 09:00:00
3 A 2018-01-01 09:15:00 2018-01-01 09:30:00
4 B 2018-01-01 14:00:00 2018-01-01 15:30:00
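Since the question specifically asked about data.table: the same cummax trick translates directly without foverlaps (a sketch; it uses max(end) in the summary step as a small safeguard in case an earlier interval in a run ends latest):

library(data.table)

df2 <- as.data.table(df1)
setorder(df2, group, start)
# a new run starts whenever the next start lies beyond everything seen so far
df2[, indx := c(0, cumsum(as.numeric(shift(start, type = "lead")) >
                          cummax(as.numeric(end)))[-.N]), by = group]
df2[, .(start = min(start), end = max(end)), by = .(group, indx)][, !"indx"]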
