R - exclude weekends from time interval calculations with lubridate

I wish to calculate the intervals between dates. The differences in days should take weekends into account. I have over 200 date stamps.
For example, the currently displayed time difference between the 6th (Wednesday) and the 11th (Monday) of January is 5 days. I would like to obtain 3 days instead.
I managed to get to a solution that does not yet exclude Saturday and Sunday with the following code and the packages lubridate and dplyr.
Could you please guide me on how to exclude weekends from the calculation?
Thank you.
library(lubridate)
library(dplyr)
dates <- c("2021-01-01", "2021-01-04", "2021-01-05", "2021-01-06", "2021-01-11", "2021-01-13", "2021-01-14", "2021-01-18", "2021-01-25", "2021-01-29")
d <- do.call(rbind, lapply(dates, as.data.frame))
dateoverview <- rename(d, Dates = 1)
dateoverview$Dates <- lubridate::ymd(dateoverview$Dates)
datecalculation <- dateoverview %>%
  mutate(Days = Dates - lag(Dates)) %>%
  mutate(Weekday = wday(Dates, label = FALSE))
datecalculation
## Dates Days Weekday
## 1 2021-01-01 NA days 6
## 2 2021-01-04 3 days 2
## 3 2021-01-05 1 days 3
## 4 2021-01-06 1 days 4
## 5 2021-01-11 5 days 2
## 6 2021-01-13 2 days 4
## 7 2021-01-14 1 days 5
## 8 2021-01-18 4 days 2
## 9 2021-01-25 7 days 2
## 10 2021-01-29 4 days 6

There is probably a function somewhere that already does this, but here is a custom one which can help you calculate date differences excluding weekends.
library(dplyr)
library(purrr)
date_diff_excluding_weekends <- function(x, y) {
  if (is.na(x) || is.na(y)) return(NA)
  sum(!format(seq(x, y - 1, by = '1 day'), '%u') %in% 6:7)
}

datecalculation %>%
  mutate(Days = map2_dbl(lag(Dates), Dates, date_diff_excluding_weekends))
# Dates Days Weekday
#1 2021-01-01 NA 6
#2 2021-01-04 1 2
#3 2021-01-05 1 3
#4 2021-01-06 1 4
#5 2021-01-11 3 2
#6 2021-01-13 2 4
#7 2021-01-14 1 5
#8 2021-01-18 2 2
#9 2021-01-25 5 2
#10 2021-01-29 4 6
seq(x, y - 1, by = '1 day') creates a sequence of dates between the previous date and the current date - 1.
format(..., "%u") returns the day of the week as a number: 1 is Monday, 7 is Sunday.
With sum(!format(...) %in% 6:7) we count the number of those days that fall on weekdays.
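As a side note (my addition, not part of the answer), the same helper can be written with lubridate::wday() instead of format(); week_start = 1 makes Monday 1 and Sunday 7:
library(lubridate)
date_diff_excluding_weekends2 <- function(x, y) {
  if (is.na(x) || is.na(y)) return(NA)
  days <- seq(x, y - 1, by = '1 day')
  sum(!wday(days, week_start = 1) %in% 6:7)  # drop Saturday (6) and Sunday (7)
}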

Another possible solution:
library(lubridate)
library(dplyr)
# sample data
df = data.frame(Dates = seq(ymd('2021-01-01'), ymd('2021-12-31'), by = 'days'))
df_weekdays = df %>%
  filter(!(weekdays(as.Date(Dates)) %in% c('Saturday', 'Sunday')))
# Application to your data
# note: weekdays() returns locale-dependent day names ('Saturday'/'Sunday' assumes an English locale)
datecalculation = datecalculation %>%
  filter(!(weekdays(as.Date(Dates)) %in% c('Saturday', 'Sunday')))
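As hinted above, there are packages that already implement business-day arithmetic. Here is a sketch using the bizdays package (my addition, not part of either answer; double-check its endpoint convention against your expected values):
library(bizdays)
# calendar that treats Saturday and Sunday as non-business days
cal <- create.calendar('weekdays_only', weekdays = c('saturday', 'sunday'))
bizdays(as.Date('2021-01-06'), as.Date('2021-01-11'), cal)
# expected to give 3, matching the custom helper above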

Related

if_else with sequence of conditions

I have the following data:
library(tidyverse)
library(lubridate)
df <- tibble(date = as_date(c("2019-11-20", "2019-11-27", "2020-04-01", "2020-04-15", "2020-09-23", "2020-11-25", "2021-03-03")))
# A tibble: 7 x 1
date
<date>
1 2019-11-20
2 2019-11-27
3 2020-04-01
4 2020-04-15
5 2020-09-23
6 2020-11-25
7 2021-03-03
I also have an ordered comparison vector of dates:
comparison <- seq(as_date("2019-12-01"), today(), by = "months") - 1
I now want to compare my dates in df to those comparison dates and do something like:
if date in df is < comparison[1], then assign a 1
if date in df is < comparison[2], then assign a 2
and so on.
I know I could do it with a case_when, e.g.
df %>%
  mutate(new_var = case_when(date < comparison[1] ~ 1,
                             date < comparison[2] ~ 2))
(of course filling this up with all comparisons).
However, this would require manually writing out all sequential conditions, and I'm wondering if I couldn't just automate it. I thought about creating a match lookup first (i.e. take the comparison vector, then add the respective new_var number (i.e. 1, 2, and so on)) and then match it against my data, but I only know how to do that for exact matches and don't know how I can add the "smaller than" condition.
Expected result:
# A tibble: 7 x 2
date new_var
<date> <dbl>
1 2019-11-20 1
2 2019-11-27 1
3 2020-04-01 6
4 2020-04-15 6
5 2020-09-23 11
6 2020-11-25 13
7 2021-03-03 17
You can use findInterval as follows:
df %>% mutate(new_var = df$date %>% findInterval(comparison) + 1)
# A tibble: 7 x 2
date new_var
<date> <dbl>
1 2019-11-20 1
2 2019-11-27 1
3 2020-04-01 6
4 2020-04-15 6
5 2020-09-23 11
6 2020-11-25 13
7 2021-03-03 17
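A short note on why this works (my addition): findInterval() returns, for each date, how many values of the sorted comparison vector are less than or equal to it, so adding 1 yields the index of the first month-end the date falls before.
# e.g. five of the month-end dates in `comparison` are <= 2020-04-01, hence new_var = 6
findInterval(as_date("2020-04-01"), comparison) + 1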

How to get the last three months data using dplyr in R

I have data like this:
library(lubridate)
library(dplyr)
set.seed(2021)
gen_date <- seq(ymd_h("2021-01-01-00"), ymd_h("2021-09-30-23"), by = "hours")
hourx <- hour(gen_date)
datex <- date(gen_date)
sales <- round(runif(length(datex), 10, 50), 0)*100
mydata <- data.frame(datex, hourx, sales)
How do I get the last three months of data using dplyr? Or how do I get the last six months of data using dplyr? What I want is the full data from "2021-06-01" to "2021-09-30". Thank you.
We may get the max value of 'datex', create a sequence going back 6 or 3 months with seq, and use its earliest date to build a logical vector on 'datex' to filter.
library(dplyr)
library(lubridate)
n <- 6
out <- mydata %>%
  filter(datex >= min(seq(floor_date(max(datex), 'month'),
                          length.out = n + 1, by = '-1 month')))
-checking
> head(out)
datex hourx sales
1 2021-03-01 4 5000
2 2021-03-01 11 3200
3 2021-03-01 18 1500
4 2021-03-02 1 4400
5 2021-03-02 8 4400
6 2021-03-02 15 4400
> max(mydata$datex)
[1] "2021-09-30"
For 3 months
n <- 3
out2 <- mydata %>%
  filter(datex >= min(seq(floor_date(max(datex), 'month'),
                          length.out = n + 1, by = '-1 month')))
> head(out2)
datex hourx sales
1 2021-06-01 3 2100
2 2021-06-01 7 1300
3 2021-06-01 11 4800
4 2021-06-01 15 1500
5 2021-06-01 19 3200
6 2021-06-01 23 3400
You may try
library(xts)
x <- mydata %>%
  mutate(month = month(datex)) %>%
  filter(month %in% last(unique(month), 3))  # xts::last() accepts an n argument, unlike dplyr::last()
unique(x$month)
[1] 7 8 9
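A lubridate-only variant of the same idea (a sketch I am adding, not one of the posted answers): step back n months from the first day of the last month with %m-% and filter against that single cutoff date.
library(dplyr)
library(lubridate)
n <- 3
cutoff <- floor_date(max(mydata$datex), 'month') %m-% months(n)  # 2021-06-01
out3 <- mydata %>% filter(datex >= cutoff)                       # 2021-06-01 through 2021-09-30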

Creating a new date variable that is on the same day of the week, within the same month, and year as original date variable in r

I need to create a new variable "controldates" from a date variable "casedates". This new variable will consist of dates that fall on the same day of the week as the case date, within the same month and year as the case date. For example, if I have a case date on the 3rd Wednesday of July, my control dates will be the 1st Wednesday of July, the 2nd Wednesday of July, and the 4th Wednesday of July. Additionally, I would like to create an indicator variable for each group of dates that is created. I would like to do this using dplyr in r.
Starting data:
Casedate
"01-03-2015"
"08-27-2017"
"10-23-2019"
This is how I would like it to look
Casedate Controldate Index
"01-03-2015" "01-03-2015" 1
"01-03-2015" "01-10-2015" 1
"01-03-2015" "01-17-2015" 1
"01-03-2015" "01-24-2015" 1
"01-03-2015" "01-31-2015" 1
"08-12-2017" "08-05-2017" 2
"08-12-2017" "08-12-2017" 2
"08-12-2017" "08-19-2017" 2
"08-12-2017" "08-26-2017" 2
"10-23-2019" "10-02-2019" 3
"10-23-2019" "10-09-2019" 3
"10-23-2019" "10-16-2019" 3
"10-23-2019" "10-23-2019" 3
"10-23-2019" "10-30-2019" 3
Here is an option with tidyverse. Convert 'Casedate' to Date class with lubridate, then loop over the elements with map2, create a sequence of dates in a list, and unnest the list column.
library(dplyr)
library(purrr)
library(tidyr)
library(lubridate)
df1 %>%
  mutate(Index = row_number(),
         Casedate = mdy(Casedate),
         wd = wday(Casedate, label = TRUE),
         Controldate = map2(floor_date(Casedate, 'month'), wd, ~ {
           x1 <- seq(.x, length.out = 7, by = '1 day')
           seq(x1[wday(x1, label = TRUE) == .y],
               ceiling_date(.x, 'month'), by = '7 day')})) %>%
  unnest(c(Controldate)) %>%
  select(Casedate, Controldate, Index)
-output
# A tibble: 14 x 3
# Casedate Controldate Index
# <date> <date> <int>
# 1 2015-01-03 2015-01-03 1
# 2 2015-01-03 2015-01-10 1
# 3 2015-01-03 2015-01-17 1
# 4 2015-01-03 2015-01-24 1
# 5 2015-01-03 2015-01-31 1
# 6 2017-08-27 2017-08-06 2
# 7 2017-08-27 2017-08-13 2
# 8 2017-08-27 2017-08-20 2
# 9 2017-08-27 2017-08-27 2
#10 2019-10-23 2019-10-02 3
#11 2019-10-23 2019-10-09 3
#12 2019-10-23 2019-10-16 3
#13 2019-10-23 2019-10-23 3
#14 2019-10-23 2019-10-30 3
data
df1 <- structure(list(Casedate = c("01-03-2015", "08-27-2017", "10-23-2019"
)), class = "data.frame", row.names = c(NA, -3L))
Since there can be at most 4 weeks before or 4 weeks after a date within the same month (9 candidate values in total), you can get away with calculating that whole range in one go with some sequences. That avoids the need to loop over every value explicitly.
After calculating the values, subset to those in the same month as the original value in a single sweep. Using @akrun's df1 example data from above:
d <- as.Date(df1$Casedate, format = "%m-%d-%Y")
r <- rep(d, each = 9)                    # each case date repeated 9 times
o <- r + (7 * -4:4)                      # candidate control dates: -4 to +4 weeks
i <- rep(seq_along(d), each = 9)         # group index
s <- format(o, "%m") == format(r, "%m")  # keep only candidates in the same month
data.frame(
  Casedate = r,
  Controldate = o,
  Index = i
)[s,]
# Casedate Controldate Index
#5 2015-01-03 2015-01-03 1
#6 2015-01-03 2015-01-10 1
#7 2015-01-03 2015-01-17 1
#8 2015-01-03 2015-01-24 1
#9 2015-01-03 2015-01-31 1
#11 2017-08-27 2017-08-06 2
#12 2017-08-27 2017-08-13 2
#13 2017-08-27 2017-08-20 2
#14 2017-08-27 2017-08-27 2
#20 2019-10-23 2019-10-02 3
#21 2019-10-23 2019-10-09 3
#22 2019-10-23 2019-10-16 3
#23 2019-10-23 2019-10-23 3
#24 2019-10-23 2019-10-30 3
If you want to keep all of the original variables in the dataset, it is a simple fix:
cbind(
df1[i,],
data.frame(Controldate = o, Index = i)
)[s,]
E.g.:
# Casedate othvar1 othvar2 Controldate Index
#1.4 01-03-2015 a B 2015-01-03 1
#1.5 01-03-2015 a B 2015-01-10 1
#1.6 01-03-2015 a B 2015-01-17 1
#1.7 01-03-2015 a B 2015-01-24 1
#...
Even on a moderately large dataset (300K rows), there is a meaningful difference in timing between generating sequence runs (2 seconds) and looping over each value (2 minutes):
Sequence:
df1 <- df1[rep(1:3,each=1e5),,drop=FALSE]
system.time({
  d <- as.Date(df1$Casedate, format = "%m-%d-%Y")
  r <- rep(d, each = 9)
  o <- r + (7 * -4:4)
  i <- rep(seq_along(d), each = 9)
  s <- format(o, "%m") == format(r, "%m")
  data.frame(
    Casedate = r,
    Controldate = o,
    Index = i
  )[s,]
})
# user system elapsed
# 1.909 0.128 2.038
Looping:
library(dplyr)
library(purrr)
library(tidyr)
library(lubridate)
system.time({
  df1 %>%
    mutate(Index = row_number(),
           Casedate = mdy(Casedate),
           wd = wday(Casedate, label = TRUE),
           Controldate = map2(floor_date(Casedate, 'month'), wd, ~ {
             x1 <- seq(.x, length.out = 7, by = '1 day')
             seq(x1[wday(x1, label = TRUE) == .y],
                 ceiling_date(.x, 'month'), by = '7 day')})) %>%
    unnest(Controldate) %>%
    select(Casedate, Controldate, Index)
})
# user system elapsed
# 131.466 1.143 132.623

How to convert a single date column into three individual columns (y, m, d)?

I have a large dataset with thousands of dates in the ymd format. I want to convert this column into three individual columns for year, month, and day. There are literally thousands of dates, so I am trying to do this with a single piece of code for the entire dataset.
You can use the year(), month(), and day() extractors in lubridate for this. Here's an example:
library('dplyr')
library('tibble')
library('lubridate')
## create some data
df <- tibble(date = seq(ymd(20190101), ymd(20191231), by = '7 days'))
which yields
> df
# A tibble: 53 x 1
date
<date>
1 2019-01-01
2 2019-01-08
3 2019-01-15
4 2019-01-22
5 2019-01-29
6 2019-02-05
7 2019-02-12
8 2019-02-19
9 2019-02-26
10 2019-03-05
# … with 43 more rows
Then mutate df using the relevant extractor function:
df <- mutate(df,
             year = year(date),
             month = month(date),
             day = day(date))
This results in:
> df
# A tibble: 53 x 4
date year month day
<date> <dbl> <dbl> <int>
1 2019-01-01 2019 1 1
2 2019-01-08 2019 1 8
3 2019-01-15 2019 1 15
4 2019-01-22 2019 1 22
5 2019-01-29 2019 1 29
6 2019-02-05 2019 2 5
7 2019-02-12 2019 2 12
8 2019-02-19 2019 2 19
9 2019-02-26 2019 2 26
10 2019-03-05 2019 3 5
# … with 43 more rows
If you only want the new three columns, use transmute() instead of mutate().
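For example, a minimal sketch of that transmute() variant (assuming the same df as above), which keeps only the newly created columns and drops date:
df_ymd <- transmute(df,
                    year = year(date),
                    month = month(date),
                    day = day(date))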
Using lubridate but without having to specify a separator:
library(tidyverse)
df <- tibble(d = c('2019/3/18','2018/10/29'))
df %>%
  mutate(
    date = lubridate::ymd(d),
    year = lubridate::year(date),
    month = lubridate::month(date),
    day = lubridate::day(date)
  )
Note that you can change the first entry from ymd to fit other formats.
A slightly different tidyverse solution that requires less code could be:
Code
tibble(date = "2018-05-01") %>%
  mutate_at(vars(date), lst(year, month, day))
Result
# A tibble: 1 x 4
date year month day
<chr> <dbl> <dbl> <int>
1 2018-05-01 2018 5 1
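As a side note I am adding: mutate_at() is superseded in dplyr 1.0+, and the same result can be written with across(); the .names spec below keeps the plain column names year, month and day:
tibble(date = "2018-05-01") %>%
  mutate(across(date, lst(year, month, day), .names = "{.fn}"))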
#Data
d = data.frame(date = c("2019-01-01", "2019-02-01", "2012/03/04"))
library(lubridate)
cbind(d,
      read.table(header = FALSE,
                 sep = "-",
                 text = as.character(ymd(d$date))))
# date V1 V2 V3
#1 2019-01-01 2019 1 1
#2 2019-02-01 2019 2 1
#3 2012/03/04 2012 3 4
OR
library(dplyr)
library(tidyr)
library(lubridate)
d %>%
  mutate(date2 = as.character(ymd(date))) %>%
  separate(date2, c("year", "month", "day"), "-")
# date year month day
#1 2019-01-01 2019 01 01
#2 2019-02-01 2019 02 01
#3 2012/03/04 2012 03 04

dplyr: grouping and summarizing/mutating data with rolling time windows

I have irregular time-series data representing a certain type of transaction for users. Each line of data is timestamped and represents a transaction at that time. Because of the irregular nature of the data, some users might have 100 rows in a day and other users might have 0 or 1 transactions in a day.
The data might look something like this:
data.frame(
id = c(1, 1, 1, 1, 1, 2, 2, 3, 4),
date = c("2015-01-01",
"2015-01-01",
"2015-01-05",
"2015-01-25",
"2015-02-15",
"2015-05-05",
"2015-01-01",
"2015-08-01",
"2015-01-01"),
n_widgets = c(1,2,3,4,4,5,2,4,5)
)
id date n_widgets
1 1 2015-01-01 1
2 1 2015-01-01 2
3 1 2015-01-05 3
4 1 2015-01-25 4
5 1 2015-02-15 4
6 2 2015-05-05 5
7 2 2015-01-01 2
8 3 2015-08-01 4
9 4 2015-01-01 5
Often I'd like to know some rolling statistics about users. For example: for this user on a certain day, how many transactions occurred in the previous 30 days, how many widgets were sold in the previous 30 days etc.
Corresponding to the above example, the data should look like:
id date n_widgets n_trans_30 total_widgets_30
1 1 2015-01-01 1 1 1
2 1 2015-01-01 2 2 3
3 1 2015-01-05 3 3 6
4 1 2015-01-25 4 4 10
5 1 2015-02-15 4 2 8
6 2 2015-05-05 5 1 5
7 2 2015-01-01 2 1 2
8 3 2015-08-01 4 1 4
9 4 2015-01-01 5 1 5
If the time window is daily then the solution is simple: data %>% group_by(id, date) %>% summarize(...)
Similarly if the time window is monthly this is also relatively simple with lubridate: data %>% group_by(id, year(date), month(date)) %>% summarize(...)
However the challenge I'm having is how to setup a time window for an arbitrary period: 5-days, 10-days etc.
There's also the RcppRoll library, but both RcppRoll and the rolling functions in zoo seem more set up for regular time series. As far as I can tell, these window functions work based on the number of rows instead of a specified time period -- the key difference is that a certain time period might have a differing number of rows depending on date and user.
For example, it's possible for user 1, that the number of transactions in the 5 days previous of 2015-01-01 is equal to 100 transactions and for the same user the number of transactions in the 5 days previous of 2015-02-01 is equal to 5 transactions. Thus looking back a set number of rows will simply not work.
Additionally, there is another SO thread discussing rolling dates for irregular time series type data (Create new column based on condition that exists within a rolling date) however the accepted solution was using data.table and I'm specifically looking for a dplyr way of achieving this.
I suppose at the heart of this issue, this problem can be solved by answering this question: how can I group_by arbitrary time periods in dplyr. Alternatively, if there's a different dplyr way to achieve above without a complicated group_by, how can I do it?
EDIT: updated example to make nature of the rolling window more clear.
This can be done using SQL:
library(sqldf)
dd <- transform(data, date = as.Date(date))
sqldf("select a.*, count(*) n_trans30, sum(b.n_widgets) 'total_widgets30'
from dd a
left join dd b on b.date between a.date - 30 and a.date
and b.id = a.id
and b.rowid <= a.rowid
group by a.rowid")
giving:
id date n_widgets n_trans30 total_widgets30
1 1 2015-01-01 1 1 1
2 1 2015-01-01 2 2 3
3 1 2015-01-05 3 3 6
4 1 2015-01-25 4 4 10
5 1 2015-02-15 4 2 8
6 2 2015-05-05 5 1 5
7 2 2015-01-01 2 1 2
8 3 2015-08-01 4 1 4
9 4 2015-01-01 5 1 5
Another approach is to expand your dataset to contain all possible days (using tidyr::complete), then use a rolling function (RcppRoll::roll_sum).
The fact that you have multiple observations per day is probably creating an issue though...
library(dplyr)
library(tidyr)
library(RcppRoll)
df2 <- df %>%
  mutate(date = as.Date(date))
## create full dataset with all possible dates (go even 30 days back for first observation)
df_full <- df2 %>%
  complete(id,
           date = seq(from = min(.$date) - 30, to = max(.$date), by = 1),
           fill = list(n_widgets = 0))
## now use rolling function, and keep only original rows (right join back onto df2)
df_roll <- df_full %>%
  group_by(id) %>%
  mutate(n_trans_30 = roll_sum(x = n_widgets != 0, n = 30, fill = 0, align = "right"),
         total_widgets_30 = roll_sum(x = n_widgets, n = 30, fill = 0, align = "right")) %>%
  ungroup() %>%
  right_join(df2, by = c("date", "id", "n_widgets"))
The result is the same as yours (by chance)
id date n_widgets n_trans_30 total_widgets_30
<dbl> <date> <dbl> <dbl> <dbl>
1 1 2015-01-01 1 1 1
2 1 2015-01-01 2 2 3
3 1 2015-01-05 3 3 6
4 1 2015-01-25 4 4 10
5 1 2015-02-15 4 2 8
6 2 2015-05-05 5 1 5
7 2 2015-01-01 2 1 2
8 3 2015-08-01 4 1 4
9 4 2015-01-01 5 1 5
But as said, it will fail for some days since it counts the last 30 observations, not the last 30 days. So you might want to first summarise the information by day, and then apply this.
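A minimal sketch of that pre-aggregation (my addition, reusing the df2 defined above): collapse to one row per id and day before applying the rolling sums.
daily <- df2 %>%
  group_by(id, date) %>%
  summarise(n_trans = n(), n_widgets = sum(n_widgets), .groups = 'drop')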
EDITED based on comment below.
You can try something like this for up to 5 days:
df %>%
  arrange(id, date) %>%
  group_by(id) %>%
  filter(as.numeric(difftime(Sys.Date(), date, units = 'days')) <= 5) %>%
  summarise(n_total_widgets = sum(n_widgets))
In this case, there are no dates within five days of the current date, so it won't produce any output.
To get last five days for each ID, you can do something like this:
df %>%
  arrange(id, date) %>%
  group_by(id) %>%
  filter(as.numeric(difftime(max(date), date, units = 'days')) <= 5) %>%
  summarise(n_total_widgets = sum(n_widgets))
Resulting output will be:
Source: local data frame [4 x 2]
id n_total_widgets
(dbl) (dbl)
1 1 4
2 2 5
3 3 4
4 4 5
I found a way to do this while working on this question
df <- data.frame(
id = c(1, 1, 1, 1, 1, 2, 2, 3, 4),
date = c("2015-01-01",
"2015-01-01",
"2015-01-05",
"2015-01-25",
"2015-02-15",
"2015-05-05",
"2015-01-01",
"2015-08-01",
"2015-01-01"),
n_widgets = c(1,2,3,4,4,5,2,4,5)
)
count_window <- function(df, date2, w, id2) {
  min_date <- date2 - w
  df2 <- df %>% filter(id == id2, date >= min_date, date <= date2)
  out <- length(df2$date)
  return(out)
}
v_count_window <- Vectorize(count_window, vectorize.args = c("date2", "id2"))
sum_window <- function(df, date2, w, id2) {
  min_date <- date2 - w
  df2 <- df %>% filter(id == id2, date >= min_date, date <= date2)
  out <- sum(df2$n_widgets)
  return(out)
}
v_sum_window <- Vectorize(sum_window, vectorize.args = c("date2", "id2"))
res <- df %>% mutate(date = ymd(date)) %>%
  mutate(min_date = date - 30,
         n_trans = v_count_window(., date, 30, id),
         total_widgets = v_sum_window(., date, 30, id)) %>%
  select(id, date, n_widgets, n_trans, total_widgets)
res
res
id date n_widgets n_trans total_widgets
1 1 2015-01-01 1 2 3
2 1 2015-01-01 2 2 3
3 1 2015-01-05 3 3 6
4 1 2015-01-25 4 4 10
5 1 2015-02-15 4 2 8
6 2 2015-05-05 5 1 5
7 2 2015-01-01 2 1 2
8 3 2015-08-01 4 1 4
9 4 2015-01-01 5 1 5
This version is fairly case specific but you could probably make a version of the functions that is more general.
For simplicity I recommend the runner package, which handles sliding-window operations. In the OP's request the window size is k = 30 and the windows depend on the date (idx = date). You can use the runner() function, which applies any R function on a given window, or sum_run().
library(runner)
library(dplyr)
df %>%
  mutate(date = as.Date(date)) %>%   # idx must be a date, not a character string
  group_by(id) %>%
  arrange(date, .by_group = TRUE) %>%
  mutate(
    n_trans30 = runner(n_widgets, k = 30, idx = date, function(x) length(x)),
    n_widgets30 = sum_run(n_widgets, k = 30, idx = date)
  )
# id date n_widgets n_trans30 n_widgets30
#<dbl> <date> <dbl> <dbl> <dbl>
# 1 2015-01-01 1 1 1
# 1 2015-01-01 2 2 3
# 1 2015-01-05 3 3 6
# 1 2015-01-25 4 4 10
# 1 2015-02-15 4 2 8
# 2 2015-01-01 2 1 2
# 2 2015-05-05 5 1 5
# 3 2015-08-01 4 1 4
# 4 2015-01-01 5 1 5
Important: idx = date should be in ascending order.
For more, see the runner documentation and vignettes.
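Finally, if you are on a recent tidyverse stack, the slider package offers another way to express date-indexed windows (a sketch I am adding, not one of the posted answers; like the runner approach, it counts same-day rows together):
library(dplyr)
library(lubridate)
library(slider)

df %>%
  mutate(date = ymd(date)) %>%
  group_by(id) %>%
  arrange(date, .by_group = TRUE) %>%
  mutate(
    # window = the current date and the 30 days before it
    n_trans_30 = slide_index_dbl(n_widgets, date, ~ length(.x), .before = days(30)),
    total_widgets_30 = slide_index_dbl(n_widgets, date, sum, .before = days(30))
  ) %>%
  ungroup()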
