I have a data.frame with patient.time.in and patient.time.out, which indicate when each patient is due to see a clinician and how long they will take. There are 3 clinicians available: c("Ian", "Dan", "Anita")
patient.time.in <-
  c("09:00:00", "09:03:00",
    "09:30:00", "09:38:00",
    "10:00:00", "10:30:00",
    "11:00:00", "11:05:00",
    "12:00:00", "12:30:00",
    "14:30:00", "15:30:00")
patient.date.in <- "2022/03/29"
appointment.length <- runif(n = NROW(patient.time.in), min = 10, max = 90)
# e.g. c("P","P","NA","P","C","NA","C","P","NA","NA","C","P")
patient.infection <- sample(c("C", "P", "NA"), replace = TRUE,
                            prob = c(1/2, 1/3, 1 - (1/2 + 1/3)),
                            size = NROW(patient.time.in))
patient.roster <- data.frame(
  ID = 1:12,
  patient.time.in = lubridate::ymd_hms(paste(patient.date.in, patient.time.in)),
  patient.time.out = lubridate::ymd_hms(paste(patient.date.in, patient.time.in)) +
    lubridate::minutes(round(appointment.length)),
  patient.infection = patient.infection,
  seen.yet = "No",
  binary.seen.yet = 0,
  room = 0)
How can I allocate the clinicians based on whether they are free?
So far I have:
patient.roster %>%
  mutate(clinician = case_when(patient.time.in > lag(patient.time.out, 1) ~ "Ian",
                               TRUE ~ "Dan")) %>%
  mutate(clinician = case_when(row_number() != 1 & clinician == "Ian" &
                                 patient.time.in > lag(patient.time.out, 1) ~ "Dan",
                               TRUE ~ "Anita")) %>%
  select(patient.time.in, patient.time.out, clinician)
Expected output (coincidentally the clinicians repeat in the order Ian, Dan, Anita):
patient.time.in patient.time.out clinician
<dttm> <dttm> <chr>
1 2022-03-29 09:00:00 2022-03-29 10:02:00 Ian
2 2022-03-29 09:03:00 2022-03-29 10:21:00 Dan
3 2022-03-29 09:30:00 2022-03-29 10:53:00 Anita
4 2022-03-29 09:38:00 2022-03-29 10:45:00 Ian
5 2022-03-29 10:00:00 2022-03-29 11:06:00 Dan
6 2022-03-29 10:30:00 2022-03-29 11:34:00 Anita
7 2022-03-29 11:00:00 2022-03-29 12:27:00 Ian
8 2022-03-29 11:05:00 2022-03-29 12:21:00 Dan
9 2022-03-29 12:00:00 2022-03-29 12:18:00 Anita
10 2022-03-29 12:30:00 2022-03-29 13:28:00 Ian
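For reference, a minimal greedy sketch (my own, not from the post): walk the roster in time order and give each patient the first clinician who is free; when no one is free, it falls back to whoever frees up first, which is an assumption on my part.
clinicians <- c("Ian", "Dan", "Anita")
free.at <- rep(min(patient.roster$patient.time.in), length(clinicians))
patient.roster$clinician <- NA_character_
for (i in seq_len(nrow(patient.roster))) {
  free <- which(free.at <= patient.roster$patient.time.in[i])
  # first free clinician, else the one who frees up soonest (my assumption)
  j <- if (length(free) > 0) free[1] else which.min(free.at)
  patient.roster$clinician[i] <- clinicians[j]
  free.at[j] <- patient.roster$patient.time.out[i]
}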
I want to know the maximum number of flights on the ground at each station. I have the time each flight arrives at the station and departs from the station.
The problem is that my data frame is in this format:
REG DEP ARV STD STA
XYZ ZRH GVA 2021-08-01 07:20:00 2021-08-01 08:35:00
XYZ GVA ZRH 2021-08-01 09:20:00 2021-08-01 10:35:00
KLN MUC GVA 2021-08-01 06:00:00 2021-08-01 07:10:00
KLN GVA CGD 2021-08-01 08:45:00 2021-08-01 10:10:00
So in this example,
flight XYZ arrives in GVA at 08:35 (line 1, STA) and then departs from GVA at 09:20 (line 2, STD), and
flight KLN arrives in GVA at 07:10 and departs at 08:45.
So from 08:35 to 08:45 there are 2 flights in GVA.
The output should be 2 if these are the only two flights that meet in GVA that day.
If other flights meet at another time of day, say 5 flights meet in GVA in the afternoon, then the output should be the maximum, i.e. 5.
So I was thinking of building an interval per flight, [STA, STD] or [STD, STA], and then finding the maximal overlapping intervals...
I tried this code to build the intervals, but it is not working:
interval_sta_std <- function(df) {
  # for each departure, look for arrivals at the same station
  # (note: length(df) counts columns, so the loops use nrow(df))
  for (i in 1:nrow(df)) {
    key <- df$DEP[i]
    min_key <- min(df$STD[i])
    max_key <- max(df$STD[i])
    for (j in 1:nrow(df)) {
      value <- df$ARV[j]
      min_value <- min(df$STA[j])
      max_value <- max(df$STA[j])
      if (value == key) {
        test_inter <- lubridate::interval(min(min_value, min_key),
                                          max(max_key, max_value))
      }
    }
  }
  return(test_inter)
}
Perhaps one way is to look at each minute spanned by your data and count how many flights are on the ground for that minute. This doesn't always scale well depending on the breadth of your data, but if you limit the minutes to a reasonable scope, it should be fine.
Sample data
quux <- structure(list(REG = c("XYZ", "XYZ", "KLN", "KLN"), DEP = c("ZRH", "GVA", "MUC", "GVA"), ARV = c("GVA", "ZRH", "GVA", "CGD"), STD = structure(c(1627816800, 1627824000, 1627812000, 1627821900), class = c("POSIXct", "POSIXt"), tzone = ""), STA = structure(c(1627821300, 1627828500, 1627816200, 1627827000), class = c("POSIXct", "POSIXt"), tzone = "")), row.names = c(NA, -4L), class = "data.frame")
quux[,c("STD","STA")] <- lapply(quux[,c("STD","STA")], as.POSIXct)
(Converting your STD and STA to POSIXt objects.)
base R with fuzzyjoin
minutes <- seq(min(quux$STD), max(quux$STA), by = "mins")
head(minutes)
# [1] "2021-08-01 06:00:00 EDT" "2021-08-01 06:01:00 EDT" "2021-08-01 06:02:00 EDT" "2021-08-01 06:03:00 EDT"
# [5] "2021-08-01 06:04:00 EDT" "2021-08-01 06:05:00 EDT"
length(minutes)
# [1] 276
range(minutes)
# [1] "2021-08-01 06:00:00 EDT" "2021-08-01 10:35:00 EDT"
Now the join and aggregation.
joined <- fuzzyjoin::fuzzy_left_join(data.frame(M = minutes), quux, by = c("M" = "STD", "M" = "STA"), match_fun = list(`>=`, `<=`))
head(joined)
# M REG DEP ARV STD STA
# 1 2021-08-01 06:00:00 KLN MUC GVA 2021-08-01 06:00:00 2021-08-01 07:10:00
# 2 2021-08-01 06:01:00 KLN MUC GVA 2021-08-01 06:00:00 2021-08-01 07:10:00
# 3 2021-08-01 06:02:00 KLN MUC GVA 2021-08-01 06:00:00 2021-08-01 07:10:00
# 4 2021-08-01 06:03:00 KLN MUC GVA 2021-08-01 06:00:00 2021-08-01 07:10:00
# 5 2021-08-01 06:04:00 KLN MUC GVA 2021-08-01 06:00:00 2021-08-01 07:10:00
# 6 2021-08-01 06:05:00 KLN MUC GVA 2021-08-01 06:00:00 2021-08-01 07:10:00
nrow(joined)
# [1] 327
Recall that we had 276 rows in minutes. Now we have 327, meaning 51 of those minutes were matched by more than one flight on the ground at a time.
joined2 <- aggregate(REG ~ M, data = joined[complete.cases(joined),], FUN = length)
nrow(joined2)
# [1] 258
head(joined2)
# M REG
# 1 2021-08-01 06:00:00 1
# 2 2021-08-01 06:01:00 1
# 3 2021-08-01 06:02:00 1
# 4 2021-08-01 06:03:00 1
# 5 2021-08-01 06:04:00 1
# 6 2021-08-01 06:05:00 1
We've reduced a bit: 258 minutes during the day(s) in the data had at least one plane on the ground; look at the rows where REG > 1 to find the minutes with two or more.
The final piece:
joined2$Date <- as.Date(joined2$M)
aggregate(REG ~ Date, data = joined2, FUN = max)
# Date REG
# 1 2021-08-01 2
Note: this might be subject to time zone issues; make sure you're confident the time zones are all correct.
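An alternative closer to the interval idea in the question is a sweep-line count. Here is a hedged sketch (my own illustration, not part of the original answer) using the quux sample data above: each arrival at a station adds one plane, each departure removes one, and the running sum is the number on the ground. One caveat: a first leg of the day (a departure with no recorded arrival) will drive its origin station's count negative, so filter to the station of interest if that matters.
library(dplyr)
events <- bind_rows(
  quux %>% transmute(station = ARV, time = STA, delta = +1),  # +1 on arrival
  quux %>% transmute(station = DEP, time = STD, delta = -1)   # -1 on departure
)
events %>%
  arrange(station, time, delta) %>%   # on a tie, process the departure first
  group_by(station) %>%
  mutate(on_ground = cumsum(delta)) %>%
  summarise(max_on_ground = max(on_ground))
# GVA comes out as 2, matching the expected answer.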
I love the gt package in R, but I'm having trouble coming up with crisp code for row grouping that suits large tables where the row-group labels are not known in advance.
Consider this toy example; since this is a small data.table, it looks OK.
library(data.table)
library(magrittr)
library(lubridate)
#>
#> Attaching package: 'lubridate'
#> The following objects are masked from 'package:data.table':
#>
#> hour, isoweek, mday, minute, month, quarter, second, wday, week,
#> yday, year
#> The following objects are masked from 'package:base':
#>
#> date, intersect, setdiff, union
library(gt)
# create a toy data.table
dt <- data.table(datetime = seq(ymd_hm(202205100800),by = "5 hours",length.out = 15))[order(datetime)]
dt[,date:=as_date(datetime)]
dt[,time:=format(datetime,"%H:%M")]
dt[,values:=seq(1000,by = 10,length.out=15)]
# Here's what my toy data.table looks like:
print(dt)
#> datetime date time values
#> 1: 2022-05-10 08:00:00 2022-05-10 08:00 1000
#> 2: 2022-05-10 13:00:00 2022-05-10 13:00 1010
#> 3: 2022-05-10 18:00:00 2022-05-10 18:00 1020
#> 4: 2022-05-10 23:00:00 2022-05-10 23:00 1030
#> 5: 2022-05-11 04:00:00 2022-05-11 04:00 1040
#> 6: 2022-05-11 09:00:00 2022-05-11 09:00 1050
#> 7: 2022-05-11 14:00:00 2022-05-11 14:00 1060
#> 8: 2022-05-11 19:00:00 2022-05-11 19:00 1070
#> 9: 2022-05-12 00:00:00 2022-05-12 00:00 1080
#> 10: 2022-05-12 05:00:00 2022-05-12 05:00 1090
#> 11: 2022-05-12 10:00:00 2022-05-12 10:00 1100
#> 12: 2022-05-12 15:00:00 2022-05-12 15:00 1110
#> 13: 2022-05-12 20:00:00 2022-05-12 20:00 1120
#> 14: 2022-05-13 01:00:00 2022-05-13 01:00 1130
#> 15: 2022-05-13 06:00:00 2022-05-13 06:00 1140
# Now let's create a table using the gt package and add row groups.
# We will group on date.
dt %>%
gt %>%
tab_row_group(label = "May 10",id = "may10",rows = date==ymd(20220510)) %>%
tab_row_group(label = "May 11",id = "may11",rows = date==ymd(20220511)) %>%
tab_row_group(label = "May 12",id = "may12",rows = date==ymd(20220512)) %>%
row_group_order(groups = c("may10","may11","may12")) %>%
cols_hide(columns = c(datetime,date))
But in real life there may be hundreds of dates, and the dates are not known in advance. With the current tab_row_group() approach in gt, the code becomes unwieldy.
Is there a way to shorten the code and automate the row groupings?
You can use the groupname_col argument inside the gt() function:
dt %>%
gt(groupname_col = c("date")) %>%
cols_hide(columns = c(datetime,date))
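If you also want friendlier group labels than the raw dates (like the "May 10" labels in the original code), one option, sketched here with a date_label column of my own naming, is to derive the labels first and group on those (format() output is locale-dependent):
dt[, date_label := format(date, "%b %d")]  # e.g. "May 10"
dt %>%
  gt(groupname_col = "date_label") %>%
  cols_hide(columns = c(datetime, date))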
I have a dataset where new data are recorded at a fixed interval (3-4 minutes). Each set of 8 records (rows) corresponds to one set of measurements (CC_01->04 and DC_01->04) that I want to stamp with the previous half-hour.
For this I use the floor_date function of lubridate, which works perfectly:
lubridate::floor_date(data$Date_IV, "30 minutes")
However, sometimes the eighth record starts after the beginning of the next half-hour, so floor_date stamps it with this new half-hour. But I would like it to be stamped with the previous one (as part of the subset).
Therefore I'm looking for a way to check when this eighth value differs from the previous 7, and correct it if needed.
An example:
Label Date_IV Obs. Exp_Flux Floor_date
1 CC_01 2021-07-08 12:38:00 1 -0.290000 2021-07-08 12:30:00
2 DC_01 2021-07-08 12:42:00 2 3.830000 2021-07-08 12:30:00
3 CC_02 2021-07-08 12:45:00 3 -0.527937 2021-07-08 12:30:00
4 DC_02 2021-07-08 12:49:00 4 2.260000 2021-07-08 12:30:00
5 CC_03 2021-07-08 12:52:00 5 -0.743471 2021-07-08 12:30:00
6 DC_03 2021-07-08 12:55:00 6 2.230000 2021-07-08 12:30:00
7 CC_04 2021-07-08 12:59:00 7 -1.510000 2021-07-08 12:30:00
8 DC_04 2021-07-08 13:02:00 8 1.820000 2021-07-08 13:00:00
9 CC_01 2021-07-08 13:05:00 9 -0.190000 2021-07-08 13:00:00
10 DC_01 2021-07-08 13:08:00 10 3.750000 2021-07-08 13:00:00
11 CC_02 2021-07-08 13:11:00 11 -0.423572 2021-07-08 13:00:00
12 DC_02 2021-07-08 13:14:00 12 2.230000 2021-07-08 13:00:00
13 CC_03 2021-07-08 13:18:00 13 -0.635882 2021-07-08 13:00:00
14 DC_03 2021-07-08 13:22:00 14 2.670000 2021-07-08 13:00:00
15 CC_04 2021-07-08 13:25:00 15 -1.440000 2021-07-08 13:00:00
16 DC_04 2021-07-08 13:29:00 16 1.860000 2021-07-08 13:00:00
In my example, the first 8 lines should be stamped 12:30:00. The function works for the first 7, but the eighth is stamped 13:00 because the record was made at 13:02.
This situation doesn't arise for the second measurement set (lines 9->16): there the last measurement started before the next half-hour, so all eight are stamped 13:00, which is correct. Nothing to fix there.
These measurements are repeated many times, so I cannot correct them by hand.
I hope it makes sense.
Thanks in advance for your help,
Adrien
You can create a group for every 8 rows, or start a new group every time CC_01 occurs (whichever is more appropriate for your data), and take the floor_date value of the first value in each group.
library(dplyr)
library(lubridate)
data %>%
group_by(grp = ceiling(Obs/8)) %>%
#Or increment the group value at every occurrence of CC_01
#group_by(grp = cumsum(Label == 'CC_01')) %>%
mutate(Floor_date = floor_date(first(Date_IV), '30 minutes')) %>%
ungroup
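Between the two grouping strategies shown, the cumsum(Label == 'CC_01') variant is arguably the more robust choice if a set is ever incomplete, since it keys off the data itself rather than assuming exactly eight rows per set.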
I have a data frame, df, with a date and two variables in it. I would like to either extract all of the Oct-Dec data or delete the other months' data from the data frame.
I have put the data into a data frame, but at the moment it covers the whole year; I just want to extract the wanted months. In future I will also be extracting just winter data. I have attached a chunk of my data frame. I tried using format() with just %m but couldn't get it to work.
14138 2017-09-15 4.655946e-01 0.0603515884
14139 2017-09-16 7.881137e-01 0.0479933304
14140 2017-09-17 5.018990e-01 0.0256871025
14141 2017-09-18 -1.583625e-01 -0.0040893990
14142 2017-09-19 -6.733220e-01 -0.0313100989
14143 2017-09-20 -1.225730e+00 -0.0587706331
14144 2017-09-21 -1.419133e+00 -0.0958125544
14145 2017-09-22 -1.338630e+00 -0.0902803173
14146 2017-09-23 -1.272554e+00 -0.0659170673
14147 2017-09-24 -1.132318e+00 -0.0387240370
14148 2017-09-25 -1.255414e+00 -0.0392615823
14149 2017-09-26 -1.497188e+00 -0.0438491356
14150 2017-09-27 -1.427622e+00 -0.0633879185
14151 2017-09-28 -1.051756e+00 -0.0992427127
14152 2017-09-29 -4.876309e-01 -0.1448044528
14153 2017-09-30 -6.829681e-02 -0.1749463647
14154 2017-10-01 -1.413768e-01 -0.2009916094
14155 2017-10-02 6.359742e-02 -0.1975848313
14156 2017-10-03 9.103277e-01 -0.1828581805
14157 2017-10-04 1.695776e+00 -0.1589352546
14158 2017-10-05 1.913918e+00 -0.1538234614
14159 2017-10-06 1.479714e+00 -0.1937094170
14160 2017-10-07 8.783669e-01 -0.1703790211
14161 2017-10-08 5.706581e-01 -0.1294144428
14162 2017-10-09 4.979405e-01 -0.0666569815
14163 2017-10-10 3.233477e-01 0.0072006102
14164 2017-10-11 3.057630e-01 0.0863445067
14165 2017-10-12 5.877673e-01 0.1097707831
14166 2017-10-13 1.208526e+00 0.1301967193
14167 2017-10-14 1.671705e+00 0.1728109268
14168 2017-10-15 1.810979e+00 0.2264911145
14169 2017-10-16 1.426651e+00 0.2702958315
14170 2017-10-17 1.241140e+00 0.3242637704
14171 2017-10-18 8.997498e-01 0.3879727861
14172 2017-10-19 5.594161e-01 0.4172990825
14173 2017-10-20 3.980254e-01 0.3915170864
14174 2017-10-21 2.138538e-01 0.3249736995
14175 2017-10-22 3.926440e-01 0.2224834840
14176 2017-10-23 2.268644e-01 0.0529143372
14177 2017-10-24 5.664923e-01 -0.0081443464
14178 2017-10-25 6.167520e-01 0.0312073984
14179 2017-10-26 7.751882e-02 0.0043897693
14180 2017-10-27 -5.634851e-02 -0.0726825266
14181 2017-10-28 -2.122061e-01 -0.1711305549
14182 2017-10-29 -8.500991e-01 -0.2068581639
14183 2017-10-30 -1.039685e+00 -0.2909120824
14184 2017-10-31 -3.057745e-01 -0.3933633317
14185 2017-11-01 -1.288774e-01 -0.3726346136
14186 2017-11-02 -5.608007e-03 -0.2425754386
14187 2017-11-03 4.853990e-01 -0.0503543980
14188 2017-11-04 5.822672e-01 0.0896130098
14189 2017-11-05 8.491505e-01 0.1299151006
14190 2017-11-06 1.052999e+00 0.0749888307
14191 2017-11-07 1.170470e+00 0.0287317882
14192 2017-11-08 7.919862e-01 0.0788187381
14193 2017-11-09 4.574565e-01 0.1539981316
14194 2017-11-10 4.552032e-01 0.2034393145
14195 2017-11-11 -3.621350e-01 0.2077476707
14196 2017-11-12 -8.053965e-01 0.1759558604
14197 2017-11-13 -8.307459e-01 0.1802858410
14198 2017-11-14 -9.421325e-01 0.2175529008
14199 2017-11-15 -9.880204e-01 0.2392924580
14200 2017-11-16 -7.448127e-01 0.2519253751
14201 2017-11-17 -8.081435e-01 0.2614254732
14202 2017-11-18 -1.216806e+00 0.2629971336
14203 2017-11-19 -1.122674e+00 0.3469995055
14204 2017-11-20 -1.242597e+00 0.4553094014
14205 2017-11-21 -1.294885e+00 0.5049438231
14206 2017-11-22 -9.325514e-01 0.4684133163
14207 2017-11-23 -4.632281e-01 0.4071673624
14208 2017-11-24 -9.689322e-02 0.3710270269
14209 2017-11-25 4.704467e-01 0.4126721465
14210 2017-11-26 8.682453e-01 0.3745057653
14211 2017-11-27 5.105564e-01 0.2373454931
14212 2017-11-28 4.747265e-01 0.1650783370
14213 2017-11-29 5.905379e-01 0.2632154120
14214 2017-11-30 4.083787e-01 0.3888834762
14215 2017-12-01 3.451736e-01 0.5008047592
14216 2017-12-02 5.161312e-01 0.5388177242
14217 2017-12-03 7.109279e-01 0.5515360710
14218 2017-12-04 4.458635e-01 0.5127537202
14219 2017-12-05 -3.986610e-01 0.3896493238
14220 2017-12-06 -5.968253e-01 0.1095843268
14221 2017-12-07 -1.604398e-01 -0.2455506506
14222 2017-12-08 -4.384744e-01 -0.5801038215
14223 2017-12-09 -7.255016e-01 -0.8384627087
14224 2017-12-10 -9.691828e-01 -0.9223171538
14225 2017-12-11 -1.140588e+00 -0.8177806761
14226 2017-12-12 -1.956622e-01 -0.5250998474
14227 2017-12-13 -1.083792e-01 -0.3430768534
14228 2017-12-14 -8.016345e-02 -0.3163476104
14229 2017-12-15 8.899266e-01 -0.2813253830
14230 2017-12-16 1.322833e+00 -0.2545953062
14231 2017-12-17 1.547972e+00 -0.2275373110
14232 2017-12-18 2.164907e+00 -0.3217205817
14233 2017-12-19 2.276258e+00 -0.5773412429
14234 2017-12-20 1.862291e+00 -0.7728091393
14235 2017-12-21 1.125083e+00 -0.9099696881
14236 2017-12-22 7.737118e-01 -1.2441963604
14237 2017-12-23 7.863508e-01 -1.4802661587
14238 2017-12-24 4.313111e-01 -1.4111320559
14239 2017-12-25 -8.814799e-02 -1.0024805520
14240 2017-12-26 -3.615127e-01 -0.4943077147
14241 2017-12-27 -5.011363e-01 -0.0308588186
14242 2017-12-28 -8.474088e-01 0.3717555895
14243 2017-12-29 -7.283247e-01 0.8230450219
14244 2017-12-30 -4.566981e-01 1.2495961116
14245 2017-12-31 -4.577034e-01 1.4805369230
14246 2018-01-01 1.946166e-01 1.5310004017
14247 2018-01-02 5.203149e-01 1.5384595802
14248 2018-01-03 5.024570e-02 1.4036679018
14249 2018-01-04 -7.065297e-01 1.0749574137
14250 2018-01-05 -8.741815e-01 0.7608524752
14251 2018-01-06 1.589530e-01 0.7891084646
14252 2018-01-07 8.632378e-01 1.1230358751
As requested, the class is "Date".
You can use lubridate and base R:
library(lubridate)
dats[month(ymd(dats$V2)) >= 10,]
# EDIT: if the class of the date variable is already Date, it should be just
dats[month(dats$V2) >= 10,]
Or fully base without any date work:
dats[substr(dats$V2,6,7) %in% c("10","11","12"),]
With data:
V1 V2 V3 V4
1 14138 2017-09-15 0.4655946 0.06035159
2 14139 2017-09-16 0.7881137 0.04799333
...
From your question, it is unclear what format the date variable is in. Maybe add the output of class(your_date_variable) to the question. As a general rule, though, you'll want to use filter from the dplyr package. Something like this:
new_data <- data %>% filter(format(date_variable, "%m") >= 10)
This might change slightly depending on the class of your date variable.
Assuming 'date_variable' is Date class, extract the month and do a comparison in filter (an action verb from dplyr):
library(dplyr)
library(lubridate)
data %>%
filter(month(date_variable) >= 10)
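For the winter extraction mentioned in the question, the same filter pattern works with %in%; a sketch, assuming winter means December-February:
library(dplyr)
library(lubridate)
winter <- data %>% filter(month(date_variable) %in% c(12, 1, 2))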
I want to subtract timearriving from timeA and timeleaving from timeL, but I get this error:
"Error in unclass(e1) - e2 : non-numeric argument to binary operator"
When you see that error message, it means you're trying to perform a binary operation with something that isn't a number. I understand the error, but I wanted to ask whether there is a way I can perform these calculations.
Here is a sample of my dataset:
number id location timearriving timeleaving timeA timeL person late
1 214980 900264 1001.18 NULL NULL 2016-09-15 10:00:00 2016-09-15 12:00:00 Teacher
2 215708 900264 1001.18 07:55:06 09:59:58 2016-09-22 10:00:00 2016-09-22 12:00:00 Teacher
3 216388 900264 1001.18 08:00:22 09:54:06 2016-09-29 10:00:00 2016-09-29 12:00:00 Teacher
4 217106 900264 1001.18 08:40:15 09:53:07 2016-10-05 10:00:00 2016-10-05 12:00:00 Teacher
5 217250 900264 1001.18 08:03:47 09:52:59 2016-10-06 10:00:00 2016-10-06 12:00:00 Teacher
6 217808 900264 1001.18 NULL NULL 2016-10-12 10:00:00 2016-10-12 12:00:00 Teacher
7 217952 900264 1001.18 08:01:44 09:51:45 2016-10-13 10:00:00 2016-10-13 12:00:00 Teacher
8 218640 900264 1001.18 08:04:04 09:57:24 2016-10-19 10:00:00 2016-10-19 12:00:00 Teacher
9 218788 900264 1001.18 07:59:52 09:50:17 2016-10-20 10:00:00 2016-10-20 12:00:00 Teacher
10 219397 900264 1001.18 08:01:06 09:51:05 2016-10-26 10:00:00 2016-10-26 12:00:00 Teacher
11 219541 900264 1001.18 08:05:29 09:56:04 2016-10-27 10:00:00 2016-10-27 12:00:00 Teacher
12 220273 900264 1001.18 08:09:20 09:57:46 2016-11-02 09:00:00 2016-11-02 11:00:00 Teacher
13 220419 900264 1001.18 08:09:05 09:59:53 2016-11-03 09:00:00 2016-11-03 11:00:00 Teacher
Here I added a new column with the name "late".
I want to compute timeA - timearriving.
I did this using this code:
dataset["late"] <- NA
dataset$late <- dataset$timeA - dataset$timearriving
Then the error was:
Error in unclass(e1) - e2 : non-numeric argument to binary operator
Now I tried to convert them like you said:
timeA <- ymd_hms(timeA)
timearriving <- hms(timearriving)
Warning message:
In .parse_hms(..., order = "HMS", quiet = quiet) :
Some strings failed to parse
Since you don't provide a reproducible example, I will illustrate using one value for each variable, e.g.:
library(lubridate)
timeleaving <- hms("09:59:33")
timeA <- ymd_hms("2017-02-16 10:00:00")
You could use:
timeleaving <- ymd_hms(paste(floor_date(timeA, "days"), timeleaving))
dif <- timeA - timeleaving
# Time difference of 27 secs
Edited since the data was added to the original question:
data$timeleaving <- hms(data$timeleaving)
data$timearriving <- hms(data$timearriving)
data$timeA <- ymd_hms(data$timeA)
data$timeL <- ymd_hms(data$timeL)
data$timeleaving <- ymd_hms(paste(floor_date(data$timeL, "days"), data$timeleaving))
data$timearriving <- ymd_hms(paste(floor_date(data$timeA, "days"), data$timearriving))
data$late <- data$timeA - data$timearriving
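As a follow-up note (my addition, not part of the original answer): subtracting POSIXct values with - returns a difftime whose units are chosen automatically; base R's difftime() lets you pin the units explicitly:
data$late <- as.numeric(difftime(data$timeA, data$timearriving, units = "mins"))  # minutes as a plain number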