I have a log of times for 2 periods (1 & 2) in a data frame. I need to account for the time accumulated for each person based on a third column 'in' vs 'out'. I then need to create an additional column to track the sum of accumulated time for both periods.
Period Time Subs
1 10:00 'Peter in'
1 .
1 .
1 8:00 'Peter out' #In this period he has accumulated 2 minutes
2 10:00 'Peter in'
2 .
2 2:00 'Peter out' #In this period he has accumulated 8 minutes
I know I need to use an if and ifelse statement but I'm not sure how to start. I started and stopped learning R and now I'm trying to pick back up where I left off.
It depends a lot on how your data is formatted, of course. If you have something like
df <- data.frame(Period=c(1,1,1,1,2,2,2), Time=c("10:00",NA,NA,"8:00","10:00",NA,"2:00"))
> df
Period Time
1 1 10:00
2 1 <NA>
3 1 <NA>
4 1 8:00
5 2 10:00
6 2 <NA>
7 2 2:00
If the Time variable is formatted as character, you can strip out the minutes column like so:
df$Min <- as.numeric(sapply(strsplit(as.character(df$Time), ":"), "[[", 1))
> df
Period Time Min
1 1 10:00 10
2 1 <NA> NA
3 1 <NA> NA
4 1 8:00 8
5 2 10:00 10
6 2 <NA> NA
7 2 2:00 2
This is much easier if you can have the Min column already as numeric!
Then, an easy way to return the total time accumulated for each period is the diff of the range for each period (the clock counts down, so elapsed time is max minus min), within a tapply() call.
tapply(df$Min, df$Period, function(x) diff(range(x, na.rm=TRUE)))
1 2
2 8
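The original question also tracks who is subbed via the Subs column and asks for a running total across both periods. Here is a minimal sketch of one way to extend the same diff(range()) idea per person, assuming the Subs column always has the form "Name in" / "Name out" (the data below is a hypothetical reconstruction):
df <- data.frame(Period = c(1, 1, 2, 2),
                 Min    = c(10, 8, 10, 2),
                 Subs   = c("Peter in", "Peter out", "Peter in", "Peter out"))
df$Name <- sapply(strsplit(df$Subs, " "), "[[", 1)  # extract the person's name
# accumulated time per person and period, then a running total per person
acc <- aggregate(Min ~ Name + Period, data = df, FUN = function(x) diff(range(x)))
acc$Total <- ave(acc$Min, acc$Name, FUN = cumsum)
acc
#>    Name Period Min Total
#> 1 Peter      1   2     2
#> 2 Peter      2   8    10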
Given a date and the day of the week it is, I want to know if there is code that tells me which of those days of the month it is. For example, given 2/12/2020 and "Wednesday", I want to be given the output "2" for it being the second Wednesday of the month.
You can do that in base R in essentially one operation. You also do not need the second input column.
Here is a slower walkthrough:
Code
dates <- c("2/12/2020","2/11/2020","2/10/2020","2/7/2020","2/6/2020", "2/5/2020")
Dates <- anytime::anydate(dates) ## one of several parsers
dow <- weekdays(Dates) ## for illustration, base R function
cnt <- (as.integer(format(Dates, "%d")) - 1) %/% 7 + 1
res <- data.frame(dt=Dates, dow=dow, cnt=cnt)
res
(Final) Output
R> res
dt dow cnt
1 2020-02-12 Wednesday 2
2 2020-02-11 Tuesday 2
3 2020-02-10 Monday 2
4 2020-02-07 Friday 1
5 2020-02-06 Thursday 1
6 2020-02-05 Wednesday 1
R>
Functionality like this is often found in dedicated date/time libraries. I wrapped some code from the (C++) Boost date_time library in package RcppBDT -- that made it easy to find 'the third Wednesday in the last month of each quarter' and the like.
(lubridate::day(your_date) - 1) %/% 7 + 1
The idea here is that the first 7 days of the month are each the first occurrence of their weekday; the next 7 are the 2nd, etc.
> (1:30 - 1) %/% 7 + 1
# [1] 1 1 1 1 1 1 1 2 2 2 2 2 2 2 3 3 3 3 3 3 3 4 4 4 4 4 4 4 5 5
Just to offer an alternative calculation for the nth-weekday of the month, you can just divide the day by 7 and always round up:
date <- lubridate::mdy("02/12/2020")
ceiling(lubridate::day(date)/7)
I've seen this problem answered here in other languages, but can't find the solution in R:
I have a dataset where order of interactions is crucial, and depending on how the experiment has progressed, the apparatus can have one of two states. The hardware doesn't note the current state though, so the only way to separate the states is to filter the data between 'start' and 'stop' interactions. State 1 is outside of the 'start'-'stop' and state 2 is everything between a 'start' and a 'stop'.
My data is in the following format:
Time Individual Interaction
11:57:31 XX002 2
12:00:00 XX123 Start
12:00:03 XX123 1
12:00:37 XX334 2
12:01:00 NA Stop
12:04:12 XX441 2
How can I filter the data to get two separate dfs, one for all occurrences outside of 'start'-'stop', and another for everything between 'start' and 'stop'? Ideally it would result in the data being searched chronologically for a 'start' interaction, then filtering out all the data between that and the next 'stop', and repeating (as there can sometimes be multiple 'start' interactions before the next stop).
In this example it would result in:
Time Individual Interaction
11:57:31 XX002 2
12:04:12 XX441 2
and
Time Individual Interaction
12:00:00 XX123 Start
12:00:03 XX123 1
12:00:37 XX334 2
12:01:00 NA Stop
Thanks in advance
Using cumsum() we accumulate the 'Start' and 'Stop' events seen so far. Subtracting the two gives 1 between a Start and a Stop, and 0 outside. Sadly, we need lag() to keep the 'Stop' row in dfin, since the counter is already back to 0 on that row.
library(dplyr) # for lag()
z <- cumsum(df$Interaction=="Start") - cumsum(df$Interaction=="Stop")
sep <- ifelse(z==0 & lag(z, default=z[1])==1, 1, z)
dfin <- df[sep==1,]
dfout <- df[sep==0,]
> dfin
      Time Individual Interaction
2 12:00:00      XX123       Start
3 12:00:03      XX123           1
4 12:00:37      XX334           2
5 12:01:00       <NA>        Stop
> dfout
      Time Individual Interaction
1 11:57:31      XX002           2
6 12:04:12      XX441           2
Using dplyr piping:
df2 = df %>%
  mutate(n = cumsum(Interaction=="Start") - cumsum(Interaction=="Stop")) %>%
  mutate(n = ifelse(n==0 & lag(n, default=first(n))==1, 1, n)) %>%
  split(.$n)
> df2
$`0`
Time Individual Interaction n
1 11:57:31 XX002 2 0
6 12:04:12 XX441 2 0
$`1`
Time Individual Interaction n
2 12:00:00 XX123 Start 1
3 12:00:03 XX123 1 1
4 12:00:37 XX334 2 1
5 12:01:00 <NA> Stop 1
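Note that the question mentions there can be multiple 'Start' interactions before the next 'Stop'; in that case the counter above can reach 2 and the subtraction trick breaks down. Here is a minimal base-R sketch (my adaptation, not part of the original answer) that instead carries a 0/1 state forward:
# any 'Start' switches the state on; a 'Stop' switches it off after its own row
state <- Reduce(function(s, i) if (i == "Start") 1 else if (i == "Stop") 0 else s,
                as.character(df$Interaction), init = 0, accumulate = TRUE)[-1]
in_grp <- state == 1 | df$Interaction == "Stop"
dfin  <- df[in_grp, ]
dfout <- df[!in_grp, ]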
You may try finding the times of the start and stop interactions and then subsetting the data frame based on that (note this assumes a single Start/Stop pair, and times that compare correctly as strings within one day):
time_start <- df$Time[df$Interaction == "Start"]
time_stop <- df$Time[df$Interaction == "Stop"]
df_in <- df[df$Time >= time_start & df$Time <= time_stop,]
df_out <- df[df$Time < time_start | df$Time > time_stop,]
df_in
Time Individual Interaction
2 12:00:00 XX123 Start
3 12:00:03 XX123 1
4 12:00:37 XX334 2
5 12:01:00 <NA> Stop
df_out
Time Individual Interaction
1 11:57:31 XX002 2
6 12:04:12 XX441 2
So I have some data with a time stamp, and for each row, I want to count the number of rows that fall within a certain time window. For example, if I have the data below with a time stamp in h:mm (column ts), I want to count the number of rows that occur from that time stamp to five minutes in the past (column count). The first n rows that are less than five minutes from the first data point should be NAs.
ts data count
1:01 123 NA
1:02 123 NA
1:03 123 NA
1:04 123 NA
1:06 123 5
1:07 123 5
1:10 123 3
1:11 123 4
1:12 123 4
This is straightforward to do with a for loop, but I've been trying to implement with the apply() family and have not yet found any success. Any suggestions?
EDIT: modified to account for the potential for multiple readings per minute, raised in a comment.
Data with new mid-minute reading:
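(The data itself was not reproduced here; a plausible reconstruction, assuming the question's readings plus the extra 1:06:30 reading mentioned below, would be:)
df <- data.frame(
  ts   = c("1:01:00", "1:02:00", "1:03:00", "1:04:00", "1:06:00",
           "1:06:30", "1:07:00", "1:10:00", "1:11:00", "1:12:00"),
  data = 123
)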
library(dplyr)
df %>%
# Take the text above and convert to datetime
mutate(ts = lubridate::ymd_hms(paste(Sys.Date(), ts))) %>%
# Count how many observations per minute
group_by(ts_min = lubridate::floor_date(ts, "1 minute")) %>%
summarize(obs_per_min = sum(!is.na(data))) %>%
# Add rows for any missing minutes, count as zero observations
padr::pad(interval = "1 min") %>%
  tidyr::replace_na(list(obs_per_min = 0)) %>%
# Count cumulative observations, and calc how many in window that
# begins 5 minutes ago and ends at end of current minute
mutate(cuml_count = cumsum(obs_per_min),
prior_cuml = lag(cuml_count) %>% tidyr::replace_na(0),
in_window = cuml_count - lag(prior_cuml, 5)) %>%
# Exclude unneeded columns and rows
select(-cuml_count, -prior_cuml) %>%
filter(obs_per_min > 0)
Output (now reflects add'l reading at 1:06:30)
# A tibble: 12 x 3
ts_min obs_per_min in_window
<dttm> <dbl> <dbl>
1 2018-09-26 01:01:00 1 NA
2 2018-09-26 01:02:00 1 NA
3 2018-09-26 01:03:00 1 NA
4 2018-09-26 01:04:00 1 NA
5 2018-09-26 01:06:00 2 6
6 2018-09-26 01:07:00 1 6
7 2018-09-26 01:10:00 1 4
8 2018-09-26 01:11:00 1 5
9 2018-09-26 01:12:00 1 4
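Since the question asked about the apply() family, here is a minimal sapply() sketch as an alternative (assumptions: df$ts has already been parsed to POSIXct and is sorted ascending; the window is inclusive, [ts - 5 minutes, ts], which matches the expected counts):
df$count <- sapply(seq_len(nrow(df)), function(i) {
  # NA until at least 5 minutes have elapsed since the first reading
  if (as.numeric(df$ts[i] - df$ts[1], units = "secs") < 300) return(NA)
  sum(df$ts >= df$ts[i] - 300 & df$ts <= df$ts[i])  # rows inside the window
})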
I have a set of data regarding amounts of something that users with unique IDs used, in a data.frame in R.
ID start date end date amount
1 1-15-2012 2-15-2012 6000
1 2-15-2012 3-25-2012 4000
1 3-25-2012 5-26-2012 3000
1 5-26-2012 6-13-2012 1000
2 1-16-2012 2-27-2012 7000
2 2-27-2012 3-18-2012 2000
2 3-18-2012 5-23-2012 3000
....
10000 1-12-2012 2-24-2012 12000
10000 2-24-2012 3-11-2012 22000
10000 3-11-2012 5-27-2012 33000
10000 5-27-2012 6-10-2012 5000
The time series for each ID starts and ends at inconsistent times and contains an inconsistent number of observations. However, they are all formatted in the above manner; the start and end dates are Date objects.
I would like to standardize the breakdowns for each ID to a monthly time series, with data points at the start of each month, weighting the observed amounts that straddle two or more months accordingly.
In other words, I would like to turn this series into something like
ID start date end date amount
1 1-1-2012 2-1-2012 3096 = 6000 * 16/31
1 2-1-2012 3-1-2012 4339 = 6000*15/31+4000*14/39
1 3-1-2012 4-1-2012 etc
....
1 6-1-2012 7-1-2012 etc
2 1-1-2012 2-1-2012 etc
2 2-1-2012 3-1-2012 etc
2 3-1-2012 4-1-2012 etc
2 4-1-2012 5-1-2012 etc
2 5-1-2012 6-1-2012 etc
....
10000 1-1-2012 2-1-2012 etc
....
10000 6-1-2012 7-1-2012 etc
Where the value for ID 1 between 2/1/12 and 3/1/12 is calculated by weighting the amount from the 1-15-2012 to 2-15-2012 observation (6000) by the fraction of its days that land in February (15 days / 31 days), plus the amount from the 2-15 to 3-25 observation (4000) weighted by the fraction of its days that fall in February (14 days / 39 days, as 2012 was a leap year), yielding 6000*15/31 + 4000*14/39 = 4339. This should be done for each ID's time series. We do not consider the case where the observation periods all fit into one month; but if they are spread out over more than two months, they should be split over that number of months with the appropriate weightings.
I'm rather new to R and could certainly use some help on this!
Here is an approach using base R:
#The data
df=read.table(text='ID start_date end_date amount
1 1-15-2012 2-15-2012 6000
1 2-15-2012 3-25-2012 4000
1 3-25-2012 5-26-2012 3000
1 5-26-2012 6-13-2012 1000
2 1-16-2012 2-27-2012 7000
2 2-27-2012 3-18-2012 2000
2 3-18-2012 5-23-2012 3000
10000 1-12-2012 2-24-2012 12000
10000 2-24-2012 3-11-2012 22000
10000 3-11-2012 5-27-2012 33000
10000 5-27-2012 6-10-2012 5000',
header=T,row.names = NULL,stringsAsFactors =FALSE)
df[,2]=as.Date(df[,2],"%m-%d-%Y")
df[,3]=as.Date(df[,3],"%m-%d-%Y")
df1=data.frame(n=1:length(df$ID),ID=df$ID)
df1$startm=as.Date(levels(cut(df[,2],"month"))[cut(df[,2],"month")],"%Y-%m-%d")
df1$endm=as.Date(levels(cut(df[,3],"month"))[cut(df[,3],"month")],"%Y-%m-%d")
df1=df1[,-1]
#compute days in month and total days
df$dayin=as.numeric((df1$endm-1)-df$start_date)
df$daytot=as.numeric(df$end_date-df$start_date)
#separate amount this month and next month
df$ammt=df$amount*df$dayin/df$daytot
df$ammt.1=df$amount*(df$daytot-df$dayin)/df$daytot
#using by compute new amount
df1$amount=do.call(c,
by(df[,c("ammt","ammt.1")],df$ID,function(d)d[,1]+c(0,d[-nrow(d),2]))
)
df1
> df1
ID startm endm amount
1 1 2012-01-01 2012-02-01 3096.774
2 1 2012-02-01 2012-03-01 4339.123
3 1 2012-03-01 2012-05-01 4306.038
4 1 2012-05-01 2012-06-01 1535.842
5 2 2012-01-01 2012-02-01 2500.000
6 2 2012-02-01 2012-03-01 4700.000
7 2 2012-03-01 2012-05-01 3754.545
8 10000 2012-01-01 2012-02-01 5302.326
9 10000 2012-02-01 2012-03-01 13572.674
10 10000 2012-03-01 2012-05-01 36553.571
11 10000 2012-05-01 2012-06-01 13000.000
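As a quick sanity check, the February figure for ID 1 matches the hand calculation in the question (2012 was a leap year, so February covers 14 of the 39 days of the second observation):
6000 * 15/31 + 4000 * 14/39
#> [1] 4339.123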
To solve this I think the easiest way is to break it down into two problems:
1. How can I get a daily breakdown of the figures I'm interested in? (This is my assumption based on the information you provided above.)
2. How do I group by a date range and summarise to what I'm interested in?
For the following example, I will use the data set which I created using the code below:
df <- data.frame(
id=c(1,1,1,1,2,2,2),
start_date=as.Date(c("1-15-2012",
"2-15-2012",
"3-25-2012",
"5-26-2012",
"1-16-2012",
"2-27-2012",
"3-18-2012"), "%m-%d-%Y"),
end_date=as.Date(c("2-15-2012",
"3-25-2012",
"5-26-2012",
"6-13-2012",
"2-27-2012",
"3-18-2012",
"5-23-2012"), "%m-%d-%Y"),
amount=c(6000,
4000,
3000,
1000,
7000,
2000,
3000)
)
1. Provide daily figures
To provide the daily figures, firstly we get the daily contribution:
df$daily_contribution = df$amount/as.numeric(df$end_date - df$start_date)
Then, we will expand the date range using the start and end dates. There are a couple of ways to do it, but seeing that you applied the dplyr tag, the dplyr way is:
library(dplyr)
df <- df %>%
rowwise() %>%
do(data.frame(id=.$id,
date=as.Date(seq(from=.$start_date, to=(.$end_date), by="day")),
daily_contribution=.$daily_contribution))
which has some output which looks like this:
Source: local data frame [285 x 3]
Groups: <by row>
id date daily_contribution
1 1 2012-01-15 193.5484
2 1 2012-01-16 193.5484
3 1 2012-01-17 193.5484
4 1 2012-01-18 193.5484
5 1 2012-01-19 193.5484
6 1 2012-01-20 193.5484
7 1 2012-01-21 193.5484
8 1 2012-01-22 193.5484
9 1 2012-01-23 193.5484
10 1 2012-01-24 193.5484
.. .. ... ...
2. Create a grouping variable
Next we create some kind of grouping variable that we're interested in. I've used lubridate to easily get the month and year of the dates:
library(lubridate)
df$mnth=month(df$date)
df$yr=year(df$date)
Now with all of this we can easily use dplyr to summarise our information by the dates as required.
df %>%
group_by(id, mnth, yr) %>%
summarise(amount=sum(daily_contribution))
with output:
Source: local data frame [11 x 4]
Groups: id, mnth
id mnth yr amount
1 1 1 2012 3290.3226
2 1 2 2012 4441.6873
3 1 3 2012 2902.8122
4 1 4 2012 1451.6129
5 1 5 2012 1591.3978
6 1 6 2012 722.2222
7 2 1 2012 2666.6667
8 2 2 2012 4800.0000
9 2 3 2012 2436.3636
10 2 4 2012 1363.6364
11 2 5 2012 1045.4545
To get it precisely in the format you specified:
df %>% rowwise() %>%
mutate(start_date=as.Date(ISOdate(yr, mnth, 1)),
end_date=as.Date(ISOdate(yr, mnth+1, 1))) %>%
select(id, start_date, end_date, amount)
with output:
Source: local data frame [11 x 4]
Groups: <by row>
id start_date end_date amount
1 1 2012-01-01 2012-02-01 3290.3226
2 1 2012-02-01 2012-03-01 4441.6873
3 1 2012-03-01 2012-04-01 2902.8122
4 1 2012-04-01 2012-05-01 1451.6129
5 1 2012-05-01 2012-06-01 1591.3978
6 1 2012-06-01 2012-07-01 722.2222
7 2 2012-01-01 2012-02-01 2666.6667
8 2 2012-02-01 2012-03-01 4800.0000
9 2 2012-03-01 2012-04-01 2436.3636
10 2 2012-04-01 2012-05-01 1363.6364
11 2 2012-05-01 2012-06-01 1045.4545
as needed.
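One caveat worth noting (my observation, not part of the original answer): ISOdate(yr, mnth + 1, 1) returns NA when mnth is 12, since there is no month 13. That cannot happen with this sample data, but a December-safe variant could roll the month forward with seq() instead:
month_start_after <- function(yr, mnth) {
  # first day of the following month, valid for December as well
  seq(as.Date(ISOdate(yr, mnth, 1)), by = "1 month", length.out = 2)[2]
}
month_start_after(2012, 12)
#> [1] "2013-01-01"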
Note: I can see from your example that you have 3096 = 6000 * 16/31 and 4339 = 6000*15/31+4000*14/39, but for the first one, as an example, 15 Jan to 31 Jan is 17 days if the date range is inclusive. You can trivially alter this if required.
Here's a solution using plyr and reshape. The numbers aren't the same as what you provided, so I may have misunderstood your intent though this seems to meet your stated goal (weighted average of amount by month).
df$index <- 1:nrow(df) #Create a unique index number
#Format the dates from factors to dates
df$start.date <- as.Date(df$start.date, format="%m/%d/%Y")
df$end.date <- as.Date(df$end.date, format="%m/%d/%Y")
library(plyr); library(reshape) #Load the libraries
#dlaply = (d)ataframe to (l)ist using (ply)r
#Subset on dataframe by "index" and perform a function on each subset called "X"
#Create a list containing:
# ID, each day from start to end date, amount recorded over that day
df2 <- dlply(df, .(index), function(X) {
ID <- X$ID #Keep the ID value
n.days <- as.numeric(difftime( X$end.date, X$start.date )) #Calculate time difference in days, report the result as a number
day <- seq(X$start.date, X$end.date, by="days") #Sequence of days
amount.per.day <- X$amount/n.days #Amount for that day
data.frame(ID, day, amount.per.day) #Last line is the output
})
#Change list back into data.frame
df3 <- ldply(df2, data.frame) #ldply = (l)ist to (d)ataframe using (ply)r
df3$mon <- as.numeric(format(df3$day, "%m")) #Assign a month to all dates
#Summarize by each ID and month: add up the daily amounts
ddply(df3, .(ID, mon), summarise, amount = sum(amount.per.day))
# ID mon amount
# 1 1 1 3290.3226
# 2 1 2 4441.6873
# 3 1 3 2902.8122
# 4 1 4 1451.6129
# 5 1 5 1591.3978
# 6 1 6 722.2222
# 7 2 1 2666.6667
# 8 2 2 4800.0000
# 9 2 3 2436.3636
# 10 2 4 1363.6364
# 11 2 5 1045.4545
Incidentally, for future posts, you can get faster answers if you provide the code to replicate your data. If your data is somewhat complicated, you can use dput(yourdata).
HTH!
I am new in R.
I want the week number of the month, which the date belongs to.
By using the following code:
> CurrentDate <- Sys.Date()
> WeekNumber <- format(CurrentDate, format="%U")
> WeekNumber
[1] "31"
%U will return the week number of the year.
But I want the week number of the month.
If the date is 2014-08-01 then I want to get 1 (the date belongs to the 1st week of the month).
Eg:
2014-09-04 -> 1 (The Date belongs to the 1st week of the month).
2014-09-10 -> 2 (The Date belongs to the 2nd week of the month).
and so on...
How can I get this?
Reference:
http://astrostatistics.psu.edu/su07/R/html/base/html/strptime.html
By analogy with the weekdays function:
monthweeks <- function(x) {
UseMethod("monthweeks")
}
monthweeks.Date <- function(x) {
ceiling(as.numeric(format(x, "%d")) / 7)
}
monthweeks.POSIXlt <- function(x) {
ceiling(as.numeric(format(x, "%d")) / 7)
}
monthweeks.character <- function(x) {
ceiling(as.numeric(format(as.Date(x), "%d")) / 7)
}
dates <- sample(seq(as.Date("2000-01-01"), as.Date("2015-01-01"), "days"), 7)
dates
#> [1] "2004-09-24" "2002-11-21" "2011-08-13" "2008-09-23" "2000-08-10" "2007-09-10" "2013-04-16"
monthweeks(dates)
#> [1] 4 3 2 4 2 2 3
Another solution is to use stri_datetime_fields() from the stringi package:
stringi::stri_datetime_fields(dates)$WeekOfMonth
#> [1] 4 4 2 4 2 3 3
You can use day from the lubridate package. I'm not sure if there's a week-of-month type function in the package, but we can do the math.
library(lubridate)
curr <- Sys.Date()
# [1] "2014-08-08"
day(curr) ## 8th day of the current month
# [1] 8
day(curr) / 7 ## Technically, it's the 1.14th week
# [1] 1.142857
ceiling(day(curr) / 7) ## but ceiling() will take it up to the 2nd week.
# [1] 2
Issue Overview
It was difficult to tell which answers worked, so I built my own function nth_week and tested it against the others.
The issue that's leading to most of the answers being incorrect is this:
The first week of a month is often a short-week
Same with the last week of the month
For example, October 1st 2019 is a Tuesday, so 6 days into October (which is a Sunday) is already the second week. Also, contiguous months often share the same week in their respective counts, meaning that the last week of the prior month is commonly also the first week of the current month. So, we should expect a week count higher than 52 per year and some months that contain a span of 6 weeks.
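A quick check of that October 2019 example, using the same formula implemented later in this answer:
library(lubridate)
d <- ymd(c(20191001, 20191006))
(5 + day(d) + wday(floor_date(d, 'month'))) %/% 7
#> [1] 1 2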
Results Comparison
Here's a table showing examples where some of the above suggested algorithms go awry:
DATE Tori user206 Scri Klev Stringi Grot Frei Vale epi iso coni
Fri-2016-01-01 1 1 1 1 5 1 1 1 1 1 1
Sat-2016-01-02 1 1 1 1 1 1 1 1 1 1 1
Sun-2016-01-03 2 1 1 1 1 2 2 1 -50 1 2
Mon-2016-01-04 2 1 1 1 2 2 2 1 -50 -51 2
----
Sat-2018-12-29 5 5 5 5 5 5 5 4 5 5 5
Sun-2018-12-30 6 5 5 5 5 6 6 4 -46 5 6
Mon-2018-12-31 6 5 5 5 6 6 6 4 -46 -46 6
Tue-2019-01-01 1 1 1 1 6 1 1 1 1 1 1
You can see that only Grothendieck, conighion, Freitas, and Tori are correct due to their treatment of partial week periods. I compared all days from year 100 to year 3000; there are no differences among those 4. (Stringi is probably correct for noting weekends as separate, incremented periods, but I didn't check to be sure; epiweek() and isoweek(), because of their intended uses, show some odd behavior near year-ends when using them for week incrementation.)
Speed Comparison
Below are the tests for efficiency between the implementations of: Tori, Grothendieck, Conighion, and Freitas
# prep
library(lubridate)
library(tictoc)
kepler<- ymd(15711227) # Kepler's birthday since it's a nice day and gives a long vector of dates
some_dates<- seq(kepler, today(), by='day')
# test speed of Tori algorithm
tic(msg = 'Tori')
Tori<- (5 + day(some_dates) + wday(floor_date(some_dates, 'month'))) %/% 7
toc()
Tori: 0.19 sec elapsed
# test speed of Grothendieck algorithm
wk <- function(x) as.numeric(format(x, "%U"))
tic(msg = 'Grothendieck')
Grothendieck<- (wk(some_dates) - wk(as.Date(cut(some_dates, "month"))) + 1)
toc()
Grothendieck: 1.99 sec elapsed
# test speed of conighion algorithm
tic(msg = 'conighion')
weeknum <- as.integer( format(some_dates, format="%U") )
mindatemonth <- as.Date( paste0(format(some_dates, "%Y-%m"), "-01") )
weeknummin <- as.integer( format(mindatemonth, format="%U") ) # the number of the week of the first week within the month
conighion <- weeknum - (weeknummin - 1) # this is as an integer
toc()
conighion: 2.42 sec elapsed
# test speed of Freitas algorithm
first_day_of_month_wday <- function(dx) {
day(dx) <- 1
wday(dx)
}
tic(msg = 'Freitas')
Freitas<- ceiling((day(some_dates) + first_day_of_month_wday(some_dates) - 1) / 7)
toc()
Freitas: 0.97 sec elapsed
Fastest correct algorithm, by at least about 5X
require(lubridate)
(5 + day(some_dates) + wday(floor_date(some_dates, 'month'))) %/% 7
# some_dates above is any vector of dates, like:
some_dates<- seq(ymd(20190101), today(), 'day')
Function Implementation
I also wrote a generalized function for it that performs either month or year week counts, begins on a day you choose (e.g. if you want to start your week on Monday), labels output for easy checking, and is still extremely fast thanks to lubridate.
nth_week<- function(dates = NULL,
count_weeks_in = c("month","year"),
begin_week_on = "Sunday"){
require(lubridate)
count_weeks_in<- tolower(count_weeks_in[1])
# day_names and day_index are for beginning the week on a day other than Sunday
# (this vector ordering matters, so careful about changing it)
day_names<- c("Monday","Tuesday","Wednesday","Thursday","Friday","Saturday","Sunday")
# index integer of first match
day_index<- pmatch(tolower(begin_week_on),
tolower(day_names))[1]
### Calculate week index of each day
if (!is.na(pmatch(count_weeks_in, "year"))) {
# For year:
# sum the day of year, index for day of week at start of year, and constant 5
# then integer divide quantity by 7
# (explicit on package so lubridate and data.table don't fight)
n_week<- (5 +
lubridate::yday(dates) +
lubridate::wday(floor_date(dates, 'year'),
week_start = day_index)
) %/% 7
} else {
# For month:
# same algorithm as above, but for month rather than year
n_week<- (5 +
lubridate::day(dates) +
lubridate::wday(floor_date(dates, 'month'),
week_start = day_index)
) %/% 7
}
# naming very helpful for review
names(n_week)<- paste0(lubridate::wday(dates,T), '-', dates)
n_week
}
Function Output
# Example raw vector output:
some_dates<- seq(ymd(20190930), today(), by='day')
nth_week(some_dates)
Mon-2019-09-30 Tue-2019-10-01 Wed-2019-10-02
5 1 1
Thu-2019-10-03 Fri-2019-10-04 Sat-2019-10-05
1 1 1
Sun-2019-10-06 Mon-2019-10-07 Tue-2019-10-08
2 2 2
Wed-2019-10-09 Thu-2019-10-10 Fri-2019-10-11
2 2 2
Sat-2019-10-12 Sun-2019-10-13
2 3
# Example tabled output:
library(tidyverse)
nth_week(some_dates) %>%
enframe('DATE','nth_week_default') %>%
cbind(some_year_day_options = as.vector(nth_week(some_dates, count_weeks_in = 'year', begin_week_on = 'Mon')))
DATE nth_week_default some_year_day_options
1 Mon-2019-09-30 5 40
2 Tue-2019-10-01 1 40
3 Wed-2019-10-02 1 40
4 Thu-2019-10-03 1 40
5 Fri-2019-10-04 1 40
6 Sat-2019-10-05 1 40
7 Sun-2019-10-06 2 40
8 Mon-2019-10-07 2 41
9 Tue-2019-10-08 2 41
10 Wed-2019-10-09 2 41
11 Thu-2019-10-10 2 41
12 Fri-2019-10-11 2 41
13 Sat-2019-10-12 2 41
14 Sun-2019-10-13 3 41
Hope this work saves people the time of having to weed through all the responses to figure out which are correct.
I don't know R, but if you take the week of the first day of the month, you can use it to get the week within the month:
2014-09-18
First day of month = 2014-09-01
Week of first day on month = 36
Week of 2014-09-18 = 38
Week in the month = 1 + (38 - 36) = 3
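A hedged R translation of this idea, using base R's "%U" week-of-year (the absolute week numbers differ from those above by convention, but the difference comes out the same):
d     <- as.Date("2014-09-18")
first <- as.Date(format(d, "%Y-%m-01"))
1 + as.integer(format(d, "%U")) - as.integer(format(first, "%U"))
#> [1] 3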
Using lubridate you can do
ceiling((day(date) + first_day_of_month_wday(date) - 1) / 7)
Where the function first_day_of_month_wday returns the weekday of the first day of the month.
first_day_of_month_wday <- function(dx) {
day(dx) <- 1
wday(dx)
}
This adjustment must be done in order to get the correct week number; otherwise, if the 7th day of the month falls on a Monday, you would get 1 instead of 2, for example.
This is only a shift in the day of the month.
The minus 1 is necessary because when the first day of the month is a Sunday no adjustment is needed, and the other weekdays follow from this rule.
I came across the same issue and I solved it with mday from the data.table package. Also, I realized that when using the ceiling() function, one also needs to account for the '5th week' situation. For example, the ceiling of the 30th day of a month, ceiling(30/7), gives 5! Therefore, the ifelse statement below.
library(data.table) # provides data.table() and mday()
# Create a sample data table with days from year 0 until present
DT <- data.table(days = seq(as.Date("0-01-01"), Sys.Date(), "days"))
# compute the week of the month and account for the '5th week' case
DT[, week := ifelse( ceiling(mday(days)/7)==5, 4, ceiling(mday(days)/7) )]
> DT
days week
1: 0000-01-01 1
2: 0000-01-02 1
3: 0000-01-03 1
4: 0000-01-04 1
5: 0000-01-05 1
---
736617: 2016-10-14 2
736618: 2016-10-15 3
736619: 2016-10-16 3
736620: 2016-10-17 3
736621: 2016-10-18 3
To get an idea of the speed, run:
system.time( DT[, week := ifelse( ceiling(mday(days)/7)==5, 4, ceiling(mday(days)/7) )] )
# user system elapsed
# 3.23 0.05 3.27
It took approx. 3 seconds to compute the weeks for more than 700 000 days.
However, the ceiling approach above always makes the last week longer than all the other weeks (the four weeks have 7, 7, 7, and 9 or 10 days). Another way would be to use something like
ceiling(1:31/31*4)
[1] 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 4 4 4 4 4 4 4 4
where you get 7, 8 , 8 and 8 days per respective week in a 31 days month.
DT[, week2 := ceiling(mday(days)/31*4)]
There is a simple way to do it with the lubridate package:
isoweek() returns the week as it would appear in the ISO 8601 system, which uses a reoccurring leap week.
epiweek() is the US CDC version of the epidemiological week. It follows the same rules as isoweek() but starts on Sunday. In other parts of the world the convention is to start epidemiological weeks on Monday, which is the same as isoweek().
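A minimal sketch (my adaptation, not spelled out in the original answer) of turning epiweek() into a week-of-month count, analogous to the "%U" workaround elsewhere in this thread; note the year-end quirks of epiweek()/isoweek() flagged in the comparison above:
library(lubridate)
x <- ymd("2014-09-10")
epiweek(x) - epiweek(floor_date(x, "month")) + 1
#> [1] 2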
I am late to the party and maybe no one is going to read this answer...
Anyway, why not keep it simple and do it like this:
library(lubridate)
x <- ymd(20200311, 20200308)
week(x) - week(floor_date(x, unit = "months")) + 1
[1] 3 2
I don't know any built-in functions, but a workaround would be:
CurrentDate <- Sys.Date()
# The number of the week relative to the year
weeknum <- as.integer( format(CurrentDate, format="%U") )
# Find the minimum week of the month relative to the year
mindatemonth <- as.Date( paste0(format(CurrentDate, "%Y-%m"), "-01") )
weeknummin <- as.integer( format(mindatemonth, format="%U") ) # the number of the week of the first week within the month
# Calculate the number of the week relative to the month
weeknum <- weeknum - (weeknummin - 1) # this is as an integer
# With the following you can convert the integer to the same format of
# format(CurrentDate, format="%U")
formatC(weeknum, width = 2, flag = "0")
Simply do this:
library(lubridate)
ds1$Week <- week(ds1$Sale_Date)
This is high performance! It works instantly on my 12 million row dataset.
In the example above, ds1 is the dataset and Sale_Date is a date column (like "2015-11-23").
The other approach, using "as.integer( format..." might work on small datasets, but on 12 million rows it would keep running forever...