"argument "x" is missing, with no default" in openair package - r

I want to calculate clusters with the openair package. I read the trajectory data with:
traj_2010 <- read.csv("C:/Users/stieger/Trajektorien/traj_10.csv", header=TRUE)
traj_2010$date <- as.POSIXct(strptime(traj_2010$date, format="%Y-%m-%d %H:%M", "GMT"))
traj_2010$date2 <- as.POSIXct(strptime(traj_2010$date2, format="%Y-%m-%d %H:%M", "GMT"))
Thus, I have trajectory data of the form:
head(traj_2010)
receptor year month day hour hour.inc lat lon height pressure date2 date
1 1 10 1 1 0 0 51.330 12.420 500.0 912.2 2010-01-01 00:00:00 2010-01-01
2 1 9 12 31 23 -1 51.350 12.523 464.1 915.2 2009-12-31 23:00:00 2010-01-01
3 1 9 12 31 22 -2 51.353 12.668 422.2 919.4 2009-12-31 22:00:00 2010-01-01
4 1 9 12 31 21 -3 51.329 12.840 380.9 922.0 2009-12-31 21:00:00 2010-01-01
5 1 9 12 31 20 -4 51.279 13.029 351.5 923.6 2009-12-31 20:00:00 2010-01-01
6 1 9 12 31 19 -5 51.218 13.240 335.1 924.2 2009-12-31 19:00:00 2010-01-01
I now use the command trajCluster:
cluster_2010 <- trajCluster(traj_2010, method = "Angle", n.cluster = 5,
                            col = c("red2", "blue", "green3", "purple", "black"),
                            map.fill = FALSE, key = FALSE, main = "2010",
                            xlab = "latitude", ylab = "longitude",
                            xlim = range(-35:35), ylim = range(35:70),
                            par.settings = list(axis.line = list(lwd = 1.5),
                                                strip.border = list(lwd = 2),
                                                fontsize = list(text = 32)))
It runs for some minutes but ends with:
|==============================================================================|100% ~0 s remaining
Error in summarise_impl(.data, dots) :
  Evaluation error: argument "x" is missing, with no default.
I am a little confused because this worked without problems in the past. Could somebody help me?
I am using the following packages:
library(openair)
library(plyr)
library(reshape2)
library(mapdata)
library(rworldmap)
library(dplyr)
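An editorial note, not part of the original question: summarise_impl was an internal of older dplyr releases, so an openair build that no longer matches the installed dplyr could plausibly fail this way. Under that assumption, checking and updating both packages is a harmless first step:
# Assumption: the error stems from an openair/dplyr version mismatch.
packageVersion("openair")
packageVersion("dplyr")

# Re-installing current releases of both packages is a reasonable first step.
install.packages(c("openair", "dplyr"))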

Related

Finding each time of daily max variable in climate data

I have a large dataset over many years which has several variables, but the ones I am interested in are wind speed (WS) and dateTime. I want to find the time of the max wind speed for every day in the data set. I have hourly data in POSIXct format, with WS as a numeric with occasional NAs. Below is a short data set that should hopefully illustrate my point; my dateTime didn't work out to be hourly data, but it provides enough for a sample.
dateTime <- seq(as.POSIXct("2011-01-01 00:00:00", tz = "GMT"),
                as.POSIXct("2011-01-29 23:00:00", tz = "GMT"),
                by = 60*24)  # 24-minute steps (1440 s), not hourly, as noted above
WS <- sample(0:20, 1738, rep = TRUE)    # 1738 matches length(dateTime)
WD <- sample(0:390, 1738, rep = TRUE)
Temp <- sample(0:40, 1738, rep = TRUE)
df <- data.frame(dateTime, WS, WD, Temp)
df$WS[WS > 15] <- NA
I have previously tried creating a new column with just a POSIX date (minus time) to allow for day isolation; however, everything I tried (aggregate, splitting, xts) only returned a shortened data frame with date and WS. Aggregate was the only one that didn't do this; however, it gave me 23:00:00 as a constant time, which isn't correct.
I have looked at How to calculate daily means, medians, from weather variables data collected hourly in R?, https://stats.stackexchange.com/questions/7268/how-to-aggregate-by-minute-data-for-a-week-into-hourly-means and others but none have answered this question, or the solutions have not returned an ideal result.
I need to compare the results of this analysis with another data frame, hence I need the actual time when the max wind speed occurred for each day in the dataset. I have a feeling there is a simple solution; however, this has me frustrated.
A dplyr solution may be:
library(dplyr)
df %>%
  mutate(date = as.Date(dateTime)) %>%
  left_join(
    df %>%
      mutate(date = as.Date(dateTime)) %>%
      group_by(date) %>%
      summarise(max_ws = max(WS, na.rm = TRUE)) %>%
      ungroup(),
    by = "date"
  ) %>%
  select(-date)
# dateTime WS WD Temp max_ws
# 1 2011-01-01 00:00:00 NA 313 2 15
# 2 2011-01-01 00:24:00 7 376 1 15
# 3 2011-01-01 00:48:00 3 28 28 15
# 4 2011-01-01 01:12:00 15 262 24 15
# 5 2011-01-01 01:36:00 1 149 34 15
# 6 2011-01-01 02:00:00 4 319 33 15
# 7 2011-01-01 02:24:00 15 280 22 15
# 8 2011-01-01 02:48:00 NA 110 23 15
# 9 2011-01-01 03:12:00 12 93 15 15
# 10 2011-01-01 03:36:00 3 5 0 15
Dee asked for: "I want to find the time of the max wind speed for every day in the data set." Other answers have calculated the max(WS) for every day, but not at which hour that occurred.
So I propose the following solution with dplyr:
library(dplyr)
library(lubridate)  # hour() comes from lubridate

set.seed(12345)
dateTime <- seq(as.POSIXct("2011-01-01 00:00:00", tz = "GMT"),
                as.POSIXct("2011-01-29 23:00:00", tz = "GMT"),
                by = 60*24)
WS <- sample(0:20, 1738, rep = TRUE)
WD <- sample(0:390, 1738, rep = TRUE)
Temp <- sample(0:40, 1738, rep = TRUE)
df <- data.frame(dateTime, WS, WD, Temp)
df$WS[WS > 15] <- NA

df %>%
  group_by(Date = as.Date(dateTime)) %>%
  mutate(Hour = hour(dateTime),
         Hour_with_max_ws = Hour[which.max(WS)])
I want to point out that if there are several hours with the same maximal wind speed (in the example below: 15), only the first hour with max(WS) will be shown as the result, though the wind speed 15 was reached on that date at hours 0, 3, 4, 21 and 22! So you might need more specific logic.
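A minimal sketch of such logic (my addition, not part of the original answer; the column name Hours_with_max_ws is mine): collect every hour that attains the daily maximum, e.g. as a comma-separated string:
library(dplyr)
library(lubridate)

df %>%
  group_by(Date = as.Date(dateTime)) %>%
  mutate(Hour = hour(dateTime),
         # every hour that attains the daily maximum, not just the first one
         Hours_with_max_ws = paste(sort(unique(Hour[which(WS == max(WS, na.rm = TRUE))])),
                                   collapse = ", "))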
For the sake of completeness (and because I like the concise code) here is a "one-liner" using data.table:
library(data.table)
setDT(df)[, max.ws := max(WS, na.rm = TRUE), by = as.IDate(dateTime)][]
dateTime WS WD Temp max.ws
1: 2011-01-01 00:00:00 NA 293 22 15
2: 2011-01-01 00:24:00 15 55 14 15
3: 2011-01-01 00:48:00 NA 186 24 15
4: 2011-01-01 01:12:00 4 300 22 15
5: 2011-01-01 01:36:00 0 120 36 15
---
1734: 2011-01-29 21:12:00 12 249 5 15
1735: 2011-01-29 21:36:00 9 282 21 15
1736: 2011-01-29 22:00:00 12 238 6 15
1737: 2011-01-29 22:24:00 10 127 21 15
1738: 2011-01-29 22:48:00 13 297 0 15
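If, as in the question, the actual timestamp of the daily maximum is wanted in data.table as well, a small extension (my addition; df is already a data.table after the setDT(df) above) is to keep the row where WS peaks:
# one row per day: the (first) observation at which WS is maximal
df[!is.na(WS), .SD[which.max(WS)], by = .(date = as.IDate(dateTime))]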

R : how to get the rolling mean of a variable over the last few days but only at a given hour?

Consider this (lubridate provides ymd_hms()):
library(lubridate)
time <- seq(ymd_hms("2014-02-24 23:00:00"), ymd_hms("2014-06-25 08:32:00"), by="hour")
group <- rep(LETTERS[1:20], each = length(time))
value <- sample(-10^3:10^3,length(time), replace=TRUE)
df2 <- data.frame(time,group,value)
str(df2)
> head(df2)
time group value
1 2014-02-24 23:00:00 A 246
2 2014-02-25 00:00:00 A -261
3 2014-02-25 01:00:00 A 628
4 2014-02-25 02:00:00 A 429
5 2014-02-25 03:00:00 A -49
6 2014-02-25 04:00:00 A -749
I would like to create a variable that contains, for each group, the rolling mean of value over the last 5 days (not including the current observation), only considering observations that fall at the exact same hour as the current observation. In other words:
At time 2014-02-24 23:00:00, df2['rolling_mean_same_hour'] contains the mean of the values of value observed at 23:00:00 during the last 5 days in the data (not including 2014-02-24 of course).
I would like to do that in either dplyr or data.table. I confess having no ideas how to do that.
Any ideas?
Many thanks!
You can calculate rollmean() with your data grouped by the group variable and the hour of the time variable. Normally rollmean() includes the current observation, but you can use the shift() function to exclude it:
library(data.table)
library(zoo)

setDT(df2)
df2[, .(rolling_mean_same_hour = shift(
          rollmean(value, 5, na.pad = TRUE, align = 'right'),
          n = 1, type = 'lag'),
        time),
    by = .(hour(time), group)]
# hour group rolling_mean_same_hour time
# 1: 23 A NA 2014-02-24 23:00:00
# 2: 23 A NA 2014-02-25 23:00:00
# 3: 23 A NA 2014-02-26 23:00:00
# 4: 23 A NA 2014-02-27 23:00:00
# 5: 23 A NA 2014-02-28 23:00:00
# ---
#57796: 22 T -267.0 2014-06-20 22:00:00
#57797: 22 T -389.6 2014-06-21 22:00:00
#57798: 22 T -311.6 2014-06-22 22:00:00
#57799: 22 T -260.0 2014-06-23 22:00:00
#57800: 22 T -26.8 2014-06-24 22:00:00
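Since the question asked for either dplyr or data.table, here is a dplyr sketch of the same idea (my addition, not from the original answer; it assumes zoo for the rolling mean and lubridate for hour()):
library(dplyr)
library(lubridate)
library(zoo)

df2 %>%
  group_by(group, hour = hour(time)) %>%
  arrange(time, .by_group = TRUE) %>%
  mutate(rolling_mean_same_hour =
           lag(rollmeanr(value, k = 5, fill = NA))) %>%  # lag() drops the current observation
  ungroup()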

summarize by time interval not working

I have the following data as a list of POSIXct times that span one month. Each of them represents a bike delivery. My aim is to find the average number of bike deliveries per ten-minute interval over a 24-hour period (producing a total of 144 rows). First all of the trips need to be summed and binned into intervals, then divided by the number of days. So far, I've managed to write code that sums trips per 10-minute interval, but it produces incorrect values, and I am not sure where it went wrong.
The data looks like this:
head(start_times)
[1] "2014-10-21 16:58:13 EST" "2014-10-07 10:14:22 EST" "2014-10-20 01:45:11 EST"
[4] "2014-10-17 08:16:17 EST" "2014-10-07 17:46:36 EST" "2014-10-28 17:32:34 EST"
length(start_times)
[1] 1747
The code looks like this:
library(lubridate)
library(dplyr)
tripduration <- floor(runif(1747) * 1000)
time_bucket <- start_times - minutes(minute(start_times) %% 10) - seconds(second(start_times))
df <- data.frame(tripduration, start_times, time_bucket)
summarized <- df %>%
  group_by(time_bucket) %>%
  summarize(trip_count = n())
summarized <- as.data.frame(summarized)
out_buckets <- data.frame(out_buckets = seq(as.POSIXct("2014-10-01 00:00:00"),
                                            as.POSIXct("2014-10-31 23:00:00"), by = 600))
out <- left_join(out_buckets, summarized, by = c("out_buckets" = "time_bucket"))
out$trip_count[is.na(out$trip_count)] <- 0
head(out)
out_buckets trip_count
1 2014-10-01 00:00:00 0
2 2014-10-01 00:10:00 0
3 2014-10-01 00:20:00 0
4 2014-10-01 00:30:00 0
5 2014-10-01 00:40:00 0
6 2014-10-01 00:50:00 0
dim(out)
[1] 4459 2
test <- format(out$out_buckets,"%H:%M:%S")
test2 <- out$trip_count
test <- cbind(test, test2)
colnames(test)[1] <- "interval"
colnames(test)[2] <- "count"
test <- as.data.frame(test)
test$count <- as.numeric(test$count)
test <- aggregate(count~interval, test, sum)
head(test, n = 20)
interval count
1 00:00:00 32
2 00:10:00 33
3 00:20:00 32
4 00:30:00 31
5 00:40:00 34
6 00:50:00 34
7 01:00:00 31
8 01:10:00 33
9 01:20:00 39
10 01:30:00 41
11 01:40:00 36
12 01:50:00 31
13 02:00:00 33
14 02:10:00 34
15 02:20:00 32
16 02:30:00 32
17 02:40:00 36
18 02:50:00 32
19 03:00:00 34
20 03:10:00 39
but this is impossible because when I sum the counts
sum(test$count)
[1] 7494
I get 7494, whereas the number should be 1747.
I'm not sure where I went wrong and how to simplify this code to get the same result.
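An editorial note on the likely culprit (my diagnosis, not from the original thread): cbind(test, test2) builds a character matrix, as.data.frame() then turns count into a factor (under the pre-R-4.0 stringsAsFactors default), and as.numeric() on a factor returns the internal level codes rather than the original values, which is exactly the kind of inflation seen here. Building the data frame directly avoids the round-trip:
# build the data frame directly instead of cbind() + as.data.frame()
test <- data.frame(interval = format(out$out_buckets, "%H:%M:%S"),
                   count = out$trip_count)
test <- aggregate(count ~ interval, test, sum)
sum(test$count)  # should now equal length(start_times), i.e. 1747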
I've done what I can, but I can't reproduce your issue without your data.
library(dplyr)
I created the full sequence of 10 minute blocks:
blocks.of.10mins <- data.frame(out_buckets = seq(as.POSIXct("2014/10/01 00:00"),
                                                 by = "10 mins", length.out = 30*24*6))
Then split the start_times into the same bins. Note: I created a baseline time of midnight to force the blocks to align to 10 minute intervals. Removing this later is an exercise for the reader. I also changed one of your data points so that there was at least one example of multiple records in the same bin.
start_times <- as.POSIXct(c("2014-10-01 00:00:00", ## added
"2014-10-21 16:58:13",
"2014-10-07 10:14:22",
"2014-10-20 01:45:11",
"2014-10-17 08:16:17",
"2014-10-07 10:16:36", ## modified
"2014-10-28 17:32:34"))
trip_times <- data.frame(start_times) %>%
  mutate(out_buckets = as.POSIXct(cut(start_times, breaks = "10 mins")))
The start_times and all the 10 minute intervals can then be merged
trips_merged <- merge(trip_times, blocks.of.10mins, by="out_buckets", all=TRUE)
These can then be grouped by 10 minute block and counted
trips_merged %>%
  filter(!is.na(start_times)) %>%
  group_by(out_buckets) %>%
  summarise(trip_count = n())
Source: local data frame [6 x 2]
out_buckets trip_count
(time) (int)
1 2014-10-01 00:00:00 1
2 2014-10-07 10:10:00 2
3 2014-10-17 08:10:00 1
4 2014-10-20 01:40:00 1
5 2014-10-21 16:50:00 1
6 2014-10-28 17:30:00 1
Instead, if we only consider time, not date:
trips_merged2 <- trips_merged
trips_merged2$out_buckets <- format(trips_merged2$out_buckets, "%H:%M:%S")
trips_merged2 %>%
  filter(!is.na(start_times)) %>%
  group_by(out_buckets) %>%
  summarise(trip_count = n())
Source: local data frame [6 x 2]
out_buckets trip_count
(chr) (int)
1 00:00:00 1
2 01:40:00 1
3 08:10:00 1
4 10:10:00 2
5 16:50:00 1
6 17:30:00 1
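To reach the asker's stated goal (average deliveries per 10-minute slot over a 24-hour cycle, i.e. 144 rows), one sketch building on trips_merged from above (my addition, not from the original answer) is to count per calendar bucket, keeping the zero buckets, and then average by time of day:
library(dplyr)

trips_merged %>%
  group_by(out_buckets) %>%
  summarise(trip_count = sum(!is.na(start_times))) %>%   # zero-filled counts per bucket
  group_by(slot = format(out_buckets, "%H:%M")) %>%
  summarise(avg_trips = mean(trip_count))                # one row per 10-minute slot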

Given start and end times, create hourly labels to indicate whether an hour is in the duration or not

I have start and end times of some commercial event for a couple of locations. The event may or may not take place on each day, and event durations do not overlap. For example, run this:
inputdata = data.frame(
  location = c('x','x','y','z','z'),
  start = c(as.POSIXct("2010/1/1 8:28:00"), as.POSIXct("2010/1/2 7:20:00"),
            as.POSIXct("2010/1/1 10:22:00"),
            as.POSIXct("2010/1/5 13:28:00"), as.POSIXct("2010/1/7 15:39:00")),
  end = c(as.POSIXct("2010/1/1 13:25:00"), as.POSIXct("2010/1/2 10:09:00"),
          as.POSIXct("2010/1/1 15:24:00"),
          as.POSIXct("2010/1/6 00:28:00"), as.POSIXct("2010/1/7 19:34:00"))
)
The input data looks like:
location start end
1 x 2010-01-01 08:28:00 2010-01-01 13:25:00
2 x 2010-01-02 07:20:00 2010-01-02 10:09:00
3 y 2010-01-01 10:22:00 2010-01-01 15:24:00
4 z 2010-01-05 13:28:00 2010-01-06 00:28:00
5 z 2010-01-07 15:39:00 2010-01-07 19:34:00
I want to construct an hourly dataset with three columns, 1. location, 2. hour, and 3. indicator, where each row is a pair of location and sharp hour (for instance, as.POSIXct("2010/1/1 13:00:00")) and indicator is a dummy that equals 1 if this hour falls between some event's start and end times for that location.
For instance, let's say the output hourly data are for 2010-01-01 to 2010-01-07. Run this:
output = data.frame(
  location = rep(c('x','y','z'),
                 each = length(seq(as.POSIXct("2010/1/1"), as.POSIXct("2010/1/7 23:00:00"), "hours"))),
  hour = rep(seq(as.POSIXct("2010/1/1"), as.POSIXct("2010/1/7 23:00:00"), "hours"), 3),
  indicator = rep(0, 3*length(seq(as.POSIXct("2010/1/1"), as.POSIXct("2010/1/7 23:00:00"), "hours"))))
So the first six rows look like this:
location hour indicator
1 x 2010-01-01 00:00:00 0
2 x 2010-01-01 01:00:00 0
3 x 2010-01-01 02:00:00 0
4 x 2010-01-01 03:00:00 0
5 x 2010-01-01 04:00:00 0
6 x 2010-01-01 05:00:00 0
Now, we need to change the value of indicator to 1 if the hour in the same row has an event in effect for the location in the same row.
For instance, location x has an event between 8:28 and 13:25 on 2010/1/1, so the rows for 7:00 to 14:00 should look like this:
location hour indicator
8 x 2010-01-01 07:00:00 0
9 x 2010-01-01 08:00:00 1
10 x 2010-01-01 09:00:00 1
11 x 2010-01-01 10:00:00 1
12 x 2010-01-01 11:00:00 1
13 x 2010-01-01 12:00:00 1
14 x 2010-01-01 13:00:00 1
15 x 2010-01-01 14:00:00 0
It seems that I could exhaustively search every pair of location and hour and update the value of indicator if the hour is between the start and end times of some event at that location, but I doubt this is the best way.
Alternatively, I am thinking that I could first convert the input data to hourly data, where an hour appears only if it falls between a start and end time. In other words, the converted data should look like:
location hour indicator
1 x 2010-01-01 08:00:00 1
2 x 2010-01-01 09:00:00 1
3 x 2010-01-01 10:00:00 1
4 x 2010-01-01 11:00:00 1
5 x 2010-01-01 12:00:00 1
6 x 2010-01-01 13:00:00 1
7 x 2010-01-02 07:00:00 1
8 x 2010-01-02 08:00:00 1
9 x 2010-01-02 09:00:00 1
10 x 2010-01-02 10:00:00 1
11 y 2010-01-01 10:00:00 1
12 y 2010-01-01 11:00:00 1
and then go from there to get the correct indicators for each hour and location. However, I don't know how to convert the start/end times to hourly observations.
This is as far as I have got with this problem. With that said, I do not have a solution and would like to ask for help.
Also, all I want is that output with three columns. When contributing, please do not feel constrained by my thoughts, which may not be efficient.
It is worth mentioning that the actual problem covers 5 years and there are 30 locations. So the algorithm needs to be efficient.
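For the intermediate expansion step the question describes (one row per covered sharp hour), a minimal base-R sketch (my addition, assuming start times are floored to the hour) could be:
# expand each event into the sharp hours it covers
hourly <- do.call(rbind, lapply(seq_len(nrow(inputdata)), function(i) {
  data.frame(location = inputdata$location[i],
             hour = seq(as.POSIXct(trunc(inputdata$start[i], units = "hours")),
                        inputdata$end[i], by = "hour"),
             indicator = 1)
}))
head(hourly)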
Here is a way to do this with a cross join.
library(dplyr)
hours =
  data_frame(hour = seq(as.POSIXct("2010/1/1"),
                        as.POSIXct("2010/1/7 23:00:00"),
                        "hours")) %>%
  merge(inputdata %>% select(location) %>% distinct)

hours %>%
  left_join(inputdata) %>%
  filter(start <= hour & hour <= end) %>%
  right_join(hours) %>%
  mutate(indicator = +!is.na(start))
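Given the efficiency concern (5 years, 30 locations), a data.table non-equi join is another option; a sketch under the assumption of data.table >= 1.9.8 (my addition, not from the original answer):
library(data.table)

setDT(inputdata)
grid <- CJ(location = unique(inputdata$location),
           hour = seq(as.POSIXct("2010/1/1"), as.POSIXct("2010/1/7 23:00:00"), "hours"))

# left join the full grid against the event intervals; unmatched hours get NA
res <- inputdata[grid, on = .(location, start <= hour, end >= hour),
                 .(location, hour = i.hour, indicator = +!is.na(x.start))]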

How to get week numbers from dates?

Looking for a function in R to convert dates into week numbers (of the year), I went for week from package data.table.
However, I observed some strange behaviour:
> week("2014-03-16") # Sun, expecting 11
[1] 11
> week("2014-03-17") # Mon, expecting 12
[1] 11
> week("2014-03-18") # Tue, expecting 12
[1] 12
Why does the week number switch to 12 on Tuesday instead of Monday? What am I missing? (The timezone should be irrelevant, as these are just dates?!)
Other suggestions for (base) R functions are appreciated as well.
Base package: use the function strftime, passing the argument %V to obtain the week of the year as a decimal number (01–53) as defined in ISO 8601 (more details in the documentation: ?strftime).
strftime(c("2014-03-16", "2014-03-17","2014-03-18", "2014-01-01"), format = "%V")
Output:
[1] "11" "12" "12" "01"
If you try lubridate:
library(lubridate)
lubridate::week(ymd("2014-03-16", "2014-03-17","2014-03-18", '2014-01-01'))
[1] 11 11 12 1
The pattern is the same. Try isoweek:
lubridate::isoweek(ymd("2014-03-16", "2014-03-17","2014-03-18", '2014-01-01'))
[1] 11 12 12 1
I understand the need for packages in certain situations, but the base language is so elegant and so proven (and debugged and optimized).
Why not:
dt <- as.Date("2014-03-16")
dt2 <- as.POSIXlt(dt)
dt2$yday
[1] 74
And then your choice whether the first week of the year is zero (as in indexing in C) or 1 (as in indexing in R).
No packages to learn, update, worry about bugs in.
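Completing that thought, the week number is then just integer division (a small sketch of the zero- vs one-based choice the answer mentions):
dt2$yday %/% 7        # zero-based week number: 10
dt2$yday %/% 7 + 1    # one-based week number: 11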
Actually, I think you may have discovered a bug in the week(...) function, or at least an error in the documentation. Hopefully someone will jump in and explain why I am wrong.
Looking at the code:
library(lubridate)
> week
function (x)
yday(x)%/%7 + 1
<environment: namespace:lubridate>
The documentation states:
Weeks is the number of complete seven day periods that have occurred between the date and January 1st, plus one.
But since Jan 1 is the first day of the year (not the zeroth), the first "week" will be a six day period. The code should (??) be
(yday(x)-1)%/%7 + 1
NB: You are using week(...) from the data.table package, which uses the same code as lubridate::week except that it coerces everything to integer rather than numeric for efficiency. So this function has the same problem (??).
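A quick check of that claim (my addition), computed directly from yday() under the quoted definition:
library(lubridate)
d <- ymd("2014-01-07")        # the 7th day of the year
yday(d) %/% 7 + 1             # quoted definition: 2 (Jan 1-6 already counted as a full week)
(yday(d) - 1) %/% 7 + 1       # corrected version: 1 (days 1-7 form week 1)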
If you want to get the week number with the year, use "%Y-W%V":
e.g. yearAndweeks <- strftime(dates, format = "%Y-W%V")
so
> strftime(c("2014-03-16", "2014-03-17","2014-03-18", "2014-01-01"), format = "%Y-W%V")
becomes:
[1] "2014-W11" "2014-W12" "2014-W12" "2014-W01"
If you want to get the week number with the year, Grant Shannon's solution using strftime works, but you need to make some corrections for the dates around January 1st. For instance, 2016-01-03 (yyyy-mm-dd) is in week 53 of year 2015, not 2016, and 2018-12-31 is in week 1 of 2019, not of 2018. This code provides some examples and a solution. In column "yearweek" the years are sometimes wrong; in "yearweek2" they are corrected (rows 2 and 5).
library(dplyr)
library(lubridate)
# create a test set
test <- data.frame(matrix(data = c("2015-12-31",
                                   "2016-01-03",
                                   "2016-01-04",
                                   "2018-12-30",
                                   "2018-12-31",
                                   "2019-01-01"), ncol = 1, nrow = 6))
# add a colname
colnames(test) <- "date_txt"
# this code provides correct year-week numbers
test <- test %>%
  mutate(date = as.Date(date_txt, format = "%Y-%m-%d")) %>%
  mutate(yearweek = as.integer(strftime(date, format = "%Y%V"))) %>%
  mutate(yearweek2 = ifelse(test = day(date) > 7 & substr(yearweek, 5, 6) == '01',
                            yes = yearweek + 100,
                            no = ifelse(test = month(date) == 1 & as.integer(substr(yearweek, 5, 6)) > 51,
                                        yes = yearweek - 100,
                                        no = yearweek)))
# print the result
print(test)
date_txt date yearweek yearweek2
1 2015-12-31 2015-12-31 201553 201553
2 2016-01-03 2016-01-03 201653 201553
3 2016-01-04 2016-01-04 201601 201601
4 2018-12-30 2018-12-30 201852 201852
5 2018-12-31 2018-12-31 201801 201901
6 2019-01-01 2019-01-01 201901 201901
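A simpler route (my addition, not from the original answer): strftime also knows "%G", the ISO 8601 week-based year that pairs with "%V", which handles these edge cases directly on platforms whose C library supports it; lubridate::isoyear() is an equivalent:
dates <- as.Date(c("2015-12-31", "2016-01-03", "2016-01-04",
                   "2018-12-30", "2018-12-31", "2019-01-01"))

strftime(dates, format = "%G-W%V")
# e.g. "2015-W53" "2015-W53" "2016-W01" "2018-W52" "2019-W01" "2019-W01"

library(lubridate)
sprintf("%d-W%02d", isoyear(dates), isoweek(dates))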
I think the problem is that the week calculation somehow uses the first day of the year. I don't understand the internal mechanics, but you can see what I mean with this example:
library(data.table)
dd <- seq(as.IDate("2013-12-20"), as.IDate("2014-01-20"), 1)
# dd <- seq(as.IDate("2013-12-01"), as.IDate("2014-03-31"), 1)
dt <- data.table(i = 1:length(dd),
                 day = dd,
                 weekday = weekdays(dd),
                 day_rounded = round(dd, "weeks"))
## Now let's add the weekdays for the "rounded" date
dt[ , weekday_rounded := weekdays(day_rounded)]
## This seems to make internal sense with the "week" calculation
dt[ , weeknumber := week(day)]
dt
i day weekday day_rounded weekday_rounded weeknumber
1: 1 2013-12-20 Friday 2013-12-17 Tuesday 51
2: 2 2013-12-21 Saturday 2013-12-17 Tuesday 51
3: 3 2013-12-22 Sunday 2013-12-17 Tuesday 51
4: 4 2013-12-23 Monday 2013-12-24 Tuesday 52
5: 5 2013-12-24 Tuesday 2013-12-24 Tuesday 52
6: 6 2013-12-25 Wednesday 2013-12-24 Tuesday 52
7: 7 2013-12-26 Thursday 2013-12-24 Tuesday 52
8: 8 2013-12-27 Friday 2013-12-24 Tuesday 52
9: 9 2013-12-28 Saturday 2013-12-24 Tuesday 52
10: 10 2013-12-29 Sunday 2013-12-24 Tuesday 52
11: 11 2013-12-30 Monday 2013-12-31 Tuesday 53
12: 12 2013-12-31 Tuesday 2013-12-31 Tuesday 53
13: 13 2014-01-01 Wednesday 2014-01-01 Wednesday 1
14: 14 2014-01-02 Thursday 2014-01-01 Wednesday 1
15: 15 2014-01-03 Friday 2014-01-01 Wednesday 1
16: 16 2014-01-04 Saturday 2014-01-01 Wednesday 1
17: 17 2014-01-05 Sunday 2014-01-01 Wednesday 1
18: 18 2014-01-06 Monday 2014-01-01 Wednesday 1
19: 19 2014-01-07 Tuesday 2014-01-08 Wednesday 2
20: 20 2014-01-08 Wednesday 2014-01-08 Wednesday 2
21: 21 2014-01-09 Thursday 2014-01-08 Wednesday 2
22: 22 2014-01-10 Friday 2014-01-08 Wednesday 2
23: 23 2014-01-11 Saturday 2014-01-08 Wednesday 2
24: 24 2014-01-12 Sunday 2014-01-08 Wednesday 2
25: 25 2014-01-13 Monday 2014-01-08 Wednesday 2
26: 26 2014-01-14 Tuesday 2014-01-15 Wednesday 3
27: 27 2014-01-15 Wednesday 2014-01-15 Wednesday 3
28: 28 2014-01-16 Thursday 2014-01-15 Wednesday 3
29: 29 2014-01-17 Friday 2014-01-15 Wednesday 3
30: 30 2014-01-18 Saturday 2014-01-15 Wednesday 3
31: 31 2014-01-19 Sunday 2014-01-15 Wednesday 3
32: 32 2014-01-20 Monday 2014-01-15 Wednesday 3
i day weekday day_rounded weekday_rounded weeknumber
My workaround is this function:
https://github.com/geneorama/geneorama/blob/master/R/round_weeks.R
round_weeks <- function(x){
  require(data.table)
  dt <- data.table(i = 1:length(x),
                   day = x,
                   weekday = weekdays(x))
  offset <- data.table(weekday = c('Sunday', 'Monday', 'Tuesday', 'Wednesday',
                                   'Thursday', 'Friday', 'Saturday'),
                       offset = -(0:6))
  dt <- merge(dt, offset, by = "weekday")
  dt[ , day_adj := day + offset]
  setkey(dt, i)
  return(dt[ , day_adj])
}
Of course, you can easily change the offset to make Monday first or whatever. The best way to do this would be to add an offset to the offset... but I haven't done that yet.
I provided a link to my simple geneorama package, but please don't rely on it too much because it's likely to change and is not very well documented.
Using only base, I wrote the following function.
Note:
Assumes Mon is day number 1 in the week
First week is week 1
Returns 0 if week is 52 from last year
Fine-tune to suit your needs.
findWeekNo <- function(myDate){
  # Find the start day of week 1, i.e. the date of the first Monday of the year
  weekday <- switch(weekdays(as.Date(paste(format(as.Date(myDate), "%Y"), "01-01", sep = "-"))),
                    "Monday"    = 1,
                    "Tuesday"   = 2,
                    "Wednesday" = 3,
                    "Thursday"  = 4,
                    "Friday"    = 5,
                    "Saturday"  = 6,
                    "Sunday"    = 7)
  firstMon <- ifelse(weekday == 1, 1, 9 - weekday)
  weekNo <- floor((as.POSIXlt(myDate)$yday - (firstMon - 1))/7) + 1
  return(weekNo)
}
findWeekNo("2017-01-15") # 2
