Rolling computations in xts by month - R

I am familiar with the zoo function rollapply, which lets you do rolling computations on zoo or xts objects and specify the rolling increment via the by parameter. I am specifically interested in applying a function every month but using all of the past daily data in the computation. For example, say my data set looks like this:
dte, val
1/01/2001, 10
1/02/2001, 11
...
1/31/2001, 2
2/01/2001, 54
2/02/2001, 34
...
2/28/2001, 29
I would like to select the end of each month and apply a function that uses all the daily data up to that point. This doesn't seem like it would work with rollapply, since the by increment would need to be 31 for some months, 30 or 28 for others, etc. My current idea is:
f <- function(xts_obj) { coef(lm(a ~ b, data = as.data.frame(xts_obj)))[1] }
month_end <- index(my_xts)[endpoints(my_xts, on = "months", k = 1)]
rslt <- lapply(month_end, function(d) { f(my_xts[paste0("/", d)]) })
Surely there is a better way to do this that would be quicker, no?
To clarify: I would like to use overlapping periods; it is just the rolling that should be done monthly.

If I understand correctly, you can get the dates of your endpoints, then for each endpoint (i.e. using lapply or a for loop), call rollapply using data up to that point.
library(quantmod)
getSymbols("SPY", src = 'yahoo', from = '2012-01-01', to = '2012-08-01')
idx <- index(SPY)[endpoints(SPY, 'months')]
out <- lapply(idx, function(i) {
  as.xts(rollapplyr(as.zoo(SPY[paste0("/", i)]), 5,
                    function(x) coef(lm(x[, 4] ~ x[, 1]))[2], by.column = FALSE))
})
sapply(out, NROW)
#[1] 16 36 58 78 100 121 142 143
I temporarily coerce to zoo for rollapplyr to make sure the rollapply.zoo method is used (as opposed to the unexported rollapply.xts method), then coerce back to xts.
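If the goal is just one coefficient per month end, computed on everything up to that point rather than a nested rolling window, a plain loop over the endpoints is enough. A minimal sketch reusing the SPY object loaded above (column 4 is the close, column 1 the open):
ends <- endpoints(SPY, "months")[-1]  # drop the leading 0 that endpoints() returns
betas <- sapply(ends, function(i) {
  d <- as.data.frame(SPY[1:i, ])
  coef(lm(d[, 4] ~ d[, 1]))[2]  # slope of close on open, as in the code above
})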

As an answer to "Is the zoo/xts conversion needed?":
It isn't needed in this case, but rollapply won't work if you pass it a data.frame, as I recently discovered from this StackOverflow answer.
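A toy sketch of my own (not from the linked answer) illustrating the point; coercing to zoo first is what keeps rollapply happy:
library(zoo)
df <- data.frame(a = 1:10, b = (1:10)^2)
# rollapply(df, 3, colMeans)  # may fail on a plain data.frame, depending on the zoo version
rollapply(zoo(df), 3, colMeans, by.column = FALSE)  # works once coerced to zoo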

You want period.apply(), or its convenience helper apply.monthly(), both in xts.
Example:
R> library(xts)
R> foo <- xts(1:100, order.by=Sys.Date()+0:99)
R> apply.monthly(foo, sum)
           [,1]
2012-08-31  105
2012-09-30  885
2012-10-31 1860
2012-11-25 2200
R>
or equally
R> apply.monthly(foo, quantile)
           0%   25%  50%   75% 100%
2012-08-31  1  4.25  7.5 10.75   14
2012-09-30 15 22.25 29.5 36.75   44
2012-10-31 45 52.50 60.0 67.50   75
2012-11-25 76 82.00 88.0 94.00  100
R>
just to prove that functions returning more than one value can be used too.
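apply.monthly() is itself a thin convenience wrapper; the same result comes from period.apply() plus endpoints(), which also generalizes to other period types (a small sketch reusing foo from above):
period.apply(foo, endpoints(foo, "months"), sum)  # identical to apply.monthly(foo, sum)
period.apply(foo, endpoints(foo, "weeks"), sum)   # the weekly analogue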

Related

How do I call a function using a specific time window?

Suppose I have a zoo object (or it could be a data.frame) that has an index on "time of day" and has some value (see sample data below):
val
...
2006-08-01 12:00 23
2006-08-01 12:01 24
2006-08-01 12:02 25
2006-08-01 12:03 26
2006-08-01 12:04 27
2006-08-01 12:05 28
2006-08-01 12:06 29
...
2006-08-02 12:00 123
2006-08-02 12:01 124
2006-08-02 12:02 125
2006-08-02 12:03 126
2006-08-02 12:04 127
...
I would like to call a custom function (call it custom.func(vals)) on the 12:01 - 12:03 interval (i.e. something similar to zoo::rollapply) every time that interval occurs, so in this example, daily. How would I do that?
NOTES (for robustness, it would also be great to take into account the following edge cases but not necessary):
Don't assume that I have values for 12:01 - 12:03 every day
Don't assume that the entire range 12:01 - 12:03 is present every day. Some days I might only have 12:01 and 12:02 but might be missing 12:03
What if I wanted my custom.func(vals) to be called on day boundaries like using val from 23:58 - 00:12?
Suppose our input is the POSIXct zoo object z given in the Note at the end.
Create a character vector times which has one element per element of z and is in the form HH:MM. Then create a logical vector ok which indicates which times are between the indicated boundary values; z[ok] is then z reduced to those values. Finally, for each day, apply sum (or some other function if desired) using aggregate.zoo:
times <- format(time(z), "%H:%M")
ok <- times >= "12:01" & times <= "12:03"
aggregate(z[ok], as.Date, sum)
## 2006-08-01 2006-08-02
##         75        375
times straddle midnight
This version is for the case where the times straddle midnight. Note that the order of values sent to the function is not the original order, but if the function is symmetric that does not matter.
times <- format(time(z), "%H:%M")
ok <- times >= "23:58" | times <= "00:12"
aggregate(z[ok], (as.Date(format(time(z))) + (times >= "23:58"))[ok], sum)
## 2006-08-02
##         41
Variation
The prior code chunk works if the function is symmetric in the components of its argument (which is the case for many functions such as mean and sum), but if the function were not symmetric we would need a slightly different approach. We define to.sec, which translates an HH:MM string to numeric seconds, and subtract to.sec("23:58") from each POSIXct datetime. The components of z to keep are then those whose shifted times, converted back to HH:MM character strings, are at most "00:14".
to.sec <- function(x) with(read.table(text = x, sep = ":"), 3600 * V1 + 60 * V2)
times <- format(time(z) - to.sec("23:58"), "%H:%M")
ok <- times <= "00:14"
aggregate(z[ok], as.Date(time(z)[ok] - to.sec("23:58")), sum)
## 2006-08-01
##         41
Note
Lines <- "datetime val
2006-08-01T12:00 23
2006-08-01T12:01 24
2006-08-01T12:02 25
2006-08-01T12:03 26
2006-08-01T12:04 27
2006-08-01T12:05 28
2006-08-01T12:06 29
2006-08-01T23:58 20
2006-08-02T00:01 21
2006-08-02T12:00 123
2006-08-02T12:01 124
2006-08-02T12:02 125
2006-08-02T12:03 126
2006-08-02T12:04 127"
library(zoo)
z <- read.zoo(text = Lines, tz = "", header = TRUE, format = "%Y-%m-%dT%H:%M")
EDIT
Have revised the non-symmetric code and simplified all code chunks.
I recommend the runner package, which allows you to compute any rolling function on irregular time series. The runner function is the equivalent of rollapply, with the distinction that the window can depend on dates. runner applies any R function over windows of length k positioned by the date index idx (or any integer index). The example below calculates a regression over a 5-minute (5*60 sec) window span. The algorithm doesn't care whether a day boundary is crossed; it just computes over 5 minutes each time (for example 23:56-00:01).
Create data:
set.seed(1)
x <- cumsum(rnorm(1000))
y <- 3 * x + rnorm(1000)
time <- as.POSIXct(cumsum(sample(60:120, 1000, replace = TRUE)),
                   origin = Sys.Date())  # unequally spaced time series
data <- data.frame(time, y, x)
Custom function to be called on sliding windows:
library(runner)
running_regression <- function(idx) {
  fit <- lm(y ~ x, data = data[idx, ])  # fit only on the rows inside the window
  tail(predict(fit), 1)                 # prediction at the window's last observation
}
data$pred <- runner(seq_along(x),
                    k = 60 * 5,
                    idx = time,
                    f = running_regression)
Once we have created the dataset with the rolling 5-minute prediction, we can filter only particular windows; here, only the first minute of each hour. That means we always keep {hh}:56 - {hh+1}:01.
library(dplyr)
library(lubridate)
filtered <- data %>%
  filter(minute(time) == 1)
plot(data$time, data$y, type = "l", col = "red")
points(filtered$time, filtered$pred, col = "blue")
There are some other examples in the runner vignette showing how to do this.
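As a minimal sketch of my own (toy values, not from the answer) showing how k interacts with a date idx: with a Date index, an integer k is interpreted in days, so k = 3 is a trailing 3-day window ending at each observation.
library(runner)
x <- c(1, 2, 5, 7, 8)
idx <- as.Date(c("2020-01-01", "2020-01-02", "2020-01-05",
                 "2020-01-06", "2020-01-10"))
runner(x, k = 3, idx = idx, f = mean)  # mean over the trailing 3-day window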

How to subset times in R

In R I have data
USER BIRTH
11 "2013-01-11 22:31:11"
121 "2014-12-26 04:07:35"
...
I want to create a new data set data_new that contains all USER entries whose BIRTH time falls between 10 o'clock and 11 o'clock.
The types of USER and BIRTH are strings/characters. I tried this:
data_new = data$BIRTH > as.POSIXct("10:00:00", format = "%H:%M:%S") &
           data$BIRTH < as.POSIXct("11:00:00", format = "%H:%M:%S")
but here R gives me FALSE for all entries, so this doesn't work.
How can I solve this?
Update
Say I want to find the number of users for all hours. I use the answer and try this
u <- c()
for (j in 1:24) {
  data_new = data[times > "00:00:00" + (j - 1) & times < "01:00:00" + j, ]
  # saving the number of users in vector u
  u[j] = dim(data_new)[1]
}
but R can't evaluate the term "00:00:00" + (j - 1): you cannot add a number to a character string.
If df is your data frame:
df <- read.table(text = 'USER BIRTH
11 "2013-01-11 22:31:11"
121 "2014-12-26 04:07:35"
121 "2014-12-26 10:07:35"
121 "2014-12-26 11:07:35"
121 "2014-12-26 10:38:35"', header = T)
library(lubridate)
df$BIRTH <- ymd_hms(df$BIRTH)
times <- strftime(df$BIRTH, format = "%H:%M:%S")
df[times > "10:00:00" & times < "11:00:00",]
Output:
  USER               BIRTH
3  121 2014-12-26 10:07:35
5  121 2014-12-26 10:38:35
One way to do something to each subset of your data is to use the split-lapply paradigm. In this case, you would convert data$BIRTH to POSIXlt and split by the hour component of the POSIXlt object. That will give you a list where each list element contains all the data for a specific hour.
data <- read.csv(text = "USER,BIRTH
11,2013-01-11 22:31:11
12,2014-12-26 04:07:35
21,2014-12-26 10:07:35
121,2014-12-26 11:07:35
112,2014-12-26 10:38:35")
data_by_hour <- split(data, as.POSIXlt(data$BIRTH)$hour)
Then you can use lapply (or sapply) to do whatever you want to each of those subsets. To count the number of observations per hour:
# number of observations for each hour
sapply(data_by_hour, nrow)
 4 10 11 22
 1  2  1  1
You can also do this with xts.
library(xts)
# Create xts object from 'data' data.frame
# Note: xts objects are based on a matrix, so you cannot have columns with
# mixed types like you can with a data.frame.
x <- xts(data["USER"], as.POSIXct(data$BIRTH))
period.apply(x, endpoints(x, "hours"), nrow)
# USER
# 2013-01-11 22:31:11 1
# 2014-12-26 04:07:35 1
# 2014-12-26 10:38:35 2
# 2014-12-26 11:07:35 1
Note that you can do time-of-day subsetting with xts. It avoids potential locale-related collation order issues caused by using logical operators on character strings.
x["T10:00/T11:00"]
# USER
# 2014-12-26 10:07:35 21
# 2014-12-26 10:38:35 112

How do I create a string containing quotes and then parse and evaluate it?

I have a data frame with 3 years worth of sales data that I'm trying to convert to a time series. Manually creating subsets for each of the 36 months:
mydfJan2011 <- subset(myDataFrame,
as.Date("2011-01-01") <= myDataFrame$Dates &
myDataFrame$Dates <= as.Date("2011-01-31"))
...
mydfDec2013 <- subset(myDataFrame,
as.Date("2013-12-01") <= myDataFrame$Dates &
myDataFrame$Dates <= as.Date("2013-12-31"))
and then summing them up and putting them into a vector
counts[1] <- sum(mydfJan2011$itemsSold)
...
counts[36] <- sum(mydfDec2013$itemsSold)
to get the values for the time series works fine, but I'd like to make it a little more automatic as I have to create more than one time series, so I'm trying to turn it into a loop.
In order to do that, I need to create a string with a subset command like this:
"subset(myDataFrame,
as.Date("2011-01-01") <= myDataFrame$Dates &
myDataFrame$Dates <= as.Date("2011-01-31"))"
But when I use paste, the result is this:
myString
>"subset(myDataFrame, as.Date(\"2011-02-01\") <= myDataFrame$Dates & myDataFrame$Dates <= as.Date(\"2011-02-28\"))"
and
eval(parse(text = myString))
results in the following error message:
Error in charToDate(x) :
character string is not in a standard unambiguous format
whereas just typing in the command (without escapes) results in the subset I'm trying to create.
I've tried playing around with single and double quotes, substitute and deparse, but none of it results in any kind of subset of my data frame.
Any suggestions?
Even another way of splitting up the data by month and summing it up would be welcome.
Thanks,
Signe
Here is a solution using tapply:
with(sales, tapply(itemsSold, substr(Dates, 1, 7), sum))
Produces monthly sums (I limited my data to 9 months for illustrative purposes, but this extends to longer periods):
2011-01 2011-02 2011-03 2011-04 2011-05 2011-06 2011-07 2011-08 2011-09
1592.097 1468.427 1594.386 1563.014 1595.489 1560.361 1553.128 1663.705 1325.519
tapply computes the sum of values in a vector (sales$itemsSold) grouped by the values of another vector (substr(sales$Dates, 1, 7), which is basically "yyyy-mm"). with allows me to avoid typing sales$ repeatedly. You should almost never have to use eval(parse(...)); there is almost always a better, faster way to do it without resorting to that.
And here is the data I used:
set.seed(1)
sales <- data.frame(Dates=seq(as.Date("2011-01-01"), as.Date("2011-09-30"), by="+1 day"))
sales$itemsSold <- runif(nrow(sales), 1, 100)
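Since the end goal is a time series, the monthly sums can be fed straight into ts(). A small sketch, assuming the series starts in January 2011 as in the question:
counts <- with(sales, tapply(itemsSold, substr(Dates, 1, 7), sum))
sales_ts <- ts(as.vector(counts), start = c(2011, 1), frequency = 12)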
For reference, there are also several 3rd party packages that simplify this type of computation (see data.table, dplyr).
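For instance, a dplyr sketch of the same monthly sum (my addition; dplyr is only name-dropped above):
library(dplyr)
sales %>%
  group_by(month = substr(Dates, 1, 7)) %>%
  summarise(monthSold = sum(itemsSold))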
Here's a data.table approach that aggregates by year and month, using the first of the month as the respective group label:
library(data.table)
##
mDt <- Dt[
  , list(monthSold = sum(itemsSold)),
  keyby = list(mDay = as.Date(paste0(
    year(Dates), "-", month(Dates), "-01")))]
##
R> head(mDt)
         mDay monthSold
1: 2012-01-01       179
2: 2012-02-01       128
3: 2012-03-01       152
4: 2012-04-01       160
5: 2012-05-01       152
6: 2012-06-01       141
Data:
set.seed(123)
Dt <- data.table(
  Dates = seq.Date(
    from = as.Date("2012-01-01"),
    to = as.Date("2014-12-31"),
    by = "day"),
  itemsSold = rpois(1096, 5))

Aggregation by time period in lubridate

This question asks about aggregation by time period in R, what pandas calls resampling. The most useful answer uses the xts package to group by a given time period, applying some function such as sum() or mean().
One of the comments suggested there was something similar in lubridate, but didn't elaborate. Can someone provide an idiomatic example using lubridate? I've read through the lubridate vignette a couple times and can imagine some combination of lubridate and plyr, however I want to make sure there isn't an easier way that I'm missing.
To make the example more real, let's say I want the daily sum of bicycles traveling northbound from this dataset:
library(lubridate)
library(reshape2)
bikecounts <- read.csv(url("http://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD"), header=TRUE, stringsAsFactors=FALSE)
names(bikecounts) <- c("Date", "Northbound", "Southbound")
Data looks like this:
> head(bikecounts)
                    Date Northbound Southbound
1 10/02/2012 12:00:00 AM          0          0
2 10/02/2012 01:00:00 AM          0          0
3 10/02/2012 02:00:00 AM          0          0
4 10/02/2012 03:00:00 AM          0          0
5 10/02/2012 04:00:00 AM          0          0
6 10/02/2012 05:00:00 AM          0          0
I don't know why you'd use lubridate for this. If you're just looking for something less awesome than xts, you could try this:
tapply(bikecounts$Northbound, as.Date(bikecounts$Date, format="%m/%d/%Y"), sum)
Basically, you just need to split by Date, then apply a function.
lubridate could be used for creating a grouping factor for split-apply problems. So, for example, if you want the sum for each month (ignoring year)
tapply(bikecounts$Northbound, month(mdy_hms(bikecounts$Date)), sum)
But, it's just using wrappers for base R functions, and in the case of the OP, I think the base R function as.Date is the easiest (as evidenced by the fact that the other Answers also ignored your request to use lubridate ;-) ).
Something that wasn't covered by the Answer to the other Question linked to in the OP is split.xts. period.apply splits an xts at endpoints and applies a function to each group. You can find endpoints that are useful for a given task with the endpoints function. For example, if you have an xts object, x, then endpoints(x, "months") would give you the row numbers that are the last row of each month. split.xts leverages that to split an xts object -- split(x, "months") would return a list of xts objects where each component was for a different month.
Although split.xts() and endpoints() are primarily intended for xts objects, they also work on some other objects, including plain time-based vectors. Even if you don't want to use xts objects, you may still find uses for endpoints() because of its convenience or its speed (it is implemented in C).
> split.xts(as.Date("1970-01-01") + 1:10, "weeks")
[[1]]
[1] "1970-01-02" "1970-01-03" "1970-01-04"
[[2]]
[1] "1970-01-05" "1970-01-06" "1970-01-07" "1970-01-08" "1970-01-09"
[6] "1970-01-10" "1970-01-11"
> endpoints(as.Date("1970-01-01") + 1:10, "weeks")
[1] 0 3 10
I think lubridate's best use in this problem is for parsing the "Date" strings into POSIXct objects. i.e. the mdy_hms function in this case.
Here's an xts solution that uses lubridate to parse the "Date" strings.
library(xts)
x <- xts(bikecounts[, -1], mdy_hms(bikecounts$Date))
period.apply(x, endpoints(x, "days"), sum)
apply.daily(x, sum) # identical to above
For this specific task, xts also has an optimized period.sum function (written in Fortran) that is very fast
period.sum(x, endpoints(x, "days"))
Using ddply from plyr package:
library(plyr)
bikecounts$Date<-with(bikecounts,as.Date(Date, format = "%m/%d/%Y"))
x<-ddply(bikecounts,.(Date),summarise, sumnorth=sum(Northbound),sumsouth=sum(Southbound))
> head(x)
        Date sumnorth sumsouth
1 2012-10-02     1165      773
2 2012-10-03     1761     1760
3 2012-10-04     1767     1708
4 2012-10-05     1590     1558
5 2012-10-06      926     1080
6 2012-10-07      951     1191
> tail(x)
          Date sumnorth sumsouth
298 2013-07-26     1964     1999
299 2013-07-27     1212     1289
300 2013-07-28      902     1078
301 2013-07-29     2040     2048
302 2013-07-30     2314     2226
303 2013-07-31     2008     2076
Here is an option using data.table
after importing the csv:
library(data.table)
# convert the data.frame to data.table
bikecounts <- data.table(bikecounts)
# Calculate
bikecounts[, list(NB=sum(Northbound), SB=sum(Southbound)), by=as.Date(Date, format="%m/%d/%Y")]
        as.Date   NB   SB
  1: 2012-10-02 1165  773
  2: 2012-10-03 1761 1760
  3: 2012-10-04 1767 1708
  4: 2012-10-05 1590 1558
  5: 2012-10-06  926 1080
 ---
299: 2013-07-27 1212 1289
300: 2013-07-28  902 1078
301: 2013-07-29 2040 2048
302: 2013-07-30 2314 2226
303: 2013-07-31 2008 2076
Note, you can also use fread() ("fast read") from the data.table package to read the CSV into a data.table in one step.
The only drawback is that you then have to convert the date/time from a string manually.
eg:
bikecounts <- fread("http://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD", header=TRUE, stringsAsFactors=FALSE)
setnames(bikecounts, c("Date", "Northbound", "Southbound"))
bikecounts[, Date := as.POSIXct(Date, format="%m/%d/%Y %I:%M:%S %p")]
Here is the requested lubridate solution, which I also added to the linked question. It uses a combination of lubridate and zoo aggregate() for these operations:
ts.month.sum <- aggregate(zoo.ts, month, sum)
ts.daily.mean <- aggregate(zoo.ts, day, mean)
ts.mins.mean <- aggregate(zoo.ts, minutes, mean)
Obviously, you need to first convert your data to a zoo() object, which is easy enough. You can also use yearmon() or yearqtr(), or custom functions for both split and apply. This method is as syntactically sweet as that of pandas.
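For the bikecounts data above, that conversion might look like this (a sketch reusing the mdy_hms parsing from the xts answer; the daily northbound sum mirrors the original question):
library(zoo)
zoo.ts <- zoo(bikecounts$Northbound, mdy_hms(bikecounts$Date))
ts.daily.sum <- aggregate(zoo.ts, as.Date, sum)  # daily sums of northbound trips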

Aggregating in R conditional on the date

I have some data, where I aggregate the information on a unique minute basis with the below code based on a dataset for 1 day.
I would, however, like to be able to run this code on a data file that combines multiple days. I have a date column in the dataset, so I can use that as a unique identifier for each day. Is there a way to aggregate the data on a 1-minute basis, given that the dates aren't the same?
The problem is that the unique function extracts the unique minutes that occur on the first day, and then all the same minutes on later days are added into the same groups. If I base it on the date too, I believe I can create unique 1-minute entries for each day in one long dataset.
Below is the code that works for a single day's data.
novo <- read.csv("C:/Users/Morten/Desktop/data.csv", header = TRUE, stringsAsFactors=FALSE )
TimeStamp <- novo[,1]
price <- novo[, 2]
volume <- novo[,3]
nV <- sum(volume)
MinutesFloor <- unique(floor(TimeStamp))
nTradingMinutes <- length(MinutesFloor)
PriceMin <- rep(0, nTradingMinutes)
VolumeMin <- rep(0, nTradingMinutes)
for (j in 1:nTradingMinutes) {
  ThisMinutes <- (floor(TimeStamp) == MinutesFloor[j])
  PriceMin[j] <- mean(price[ThisMinutes])
  VolumeMin[j] <- sum(volume[ThisMinutes])
}
Thanks in advance
data format:
date,"ord","shares","finalprice","time","stock"
20100301,C,80,389,540.004,1158
20100301,C,77,389,540.004,1158
20100301,C,60,389,540.004,1158
20100301,C,28,389,540.004,1158
20100301,C,7,389,540.004,1158
20100302,C,25,394.7,540.00293333,1158
20100302,C,170,394.7,540.00293333,1158
20100302,C,40,394.7,540.00293333,1158
20100302,C,75,394.7,540.00293333,1158
20100302,C,100,394.7,540.00293333,1158
20100302,C,1,394.7,540.00293333,1158
I would like to suggest a radically simplified version of your code. You are doing quite a few things rather inefficiently. R is made to compute summary statistics clustered by different data values, and we will use these methods heavily.
I assume your data to be of the form you provided. At my system, this looks like
novo <- read.csv("test.csv", header = TRUE, stringsAsFactors=FALSE )
This gives us:
> str(novo)
'data.frame':   11 obs. of  6 variables:
 $ date      : int  20100301 20100301 20100301 20100301 20100301 20100302 20100302 20100302 20100302 20100302 ...
 $ ord       : chr  "C" "C" "C" "C" ...
 $ shares    : int  80 77 60 28 7 25 170 40 75 100 ...
 $ finalprice: num  389 389 389 389 389 ...
 $ time      : num  540 540 540 540 540 ...
 $ stock     : int  1158 1158 1158 1158 1158 1158 1158 1158 1158 1158 ...
Now, I assume that your date is ordered YearMonthDay. If you have a different ordering, you would have to alter the format command below. Furthermore, your time is probably in minutes.
Then we can create timestamps containing both the date and the time using the POSIXct datatype:
timestamps <- as.POSIXct(as.character(novo$date), format='%Y%m%d') + novo$time*60
Now, we do the rounding up minutes by creating a factor variable and using the cut function:
timestampsByMinute <- droplevels(cut(timestamps, 'min'))
Note that the additional droplevels function just removes the minutes for which no data items are available.
Finally, we may compute the summary statistics you computed in the for-loop.
tapply takes its first argument, divides it into groups defined by the second argument, and applies the function given as the third argument to each group. Thus we may just throw tapply at your data. (I have the feeling that the column numbers you used in your code do not match the column names in your example data; feel free to adapt to different columns if I interpreted your meaning the wrong way.)
PriceMin <- tapply(novo$finalprice, timestampsByMinute, mean)
VolumeMin <- tapply(novo$shares, timestampsByMinute, sum)
This gives us
> PriceMin
2010-03-01 09:00:00 2010-03-02 09:00:00
              389.0               394.7
> VolumeMin
2010-03-01 09:00:00 2010-03-02 09:00:00
                252                 411
which is probably what you want.
Note that tapply is much faster than the loop you used. If you have huge data files, this may be important.
I hope there are no errors left in my code - testing was not easy given the fact that you provided only data for one minute per day.
Edit:
As per request, here a small modification that removes the time information from the data:
> unname(VolumeMin)
[1] 252 411
> unname(PriceMin)
[1] 389.0 394.7
