Efficient time-weighted averages in R

Given a dataset containing short intervals, and a value representing an average measure of something over each interval, I would like to average those values up to the calendar year, separately for each individual ("id").
The issue is that these intervals are not aligned with the calendar year, so time-weighting of those values is necessary to get the best estimate of the annual average from the shorter interval averages.
Note that the intervals are inclusive for the start date and exclusive for the end date.
Example data
start_date and end_date define intervals that are unique and non-overlapping within levels of id:
set.seed(30)
library(lubridate)
library(data.table)
x <- CJ(id=1:5, start_date=seq(from=as.Date("2005-01-12"),by=14,length=100))
#add noise so intervals don't all start on 2005-01-12
x[,start_date:=start_date + rbinom(1,size=20,prob=.15)*15L,by=id]
#all intervals are two weeks:
x[,end_date:=start_date+14]
x[,value:=rnorm(nrow(x))]
#for each id, calculate the mean value over each calendar year.
years <- c(year(min(x$start_date)), year(max(x$start_date)))
Additional constraints:
- works for intervals that aren't exactly two weeks long
- works even if the intervals aren't all the same length (as long as they're non-overlapping)
- works even if the earliest start_date isn't the same for each participant
- averages for calendar years that don't have enough periods for that id to cover the full year should be NA
Potential solution that is too slow for my purposes.
complete_date_seq <- seq(as.Date(ymd(paste0(years[1],"-01-01"))),
                         as.Date(ymd(paste0(years[2],"-12-31"))), by=1)
m <- matrix(NA, nrow=length(unique(x$id)), ncol=length(complete_date_seq))
rownames(m) <- unique(x$id)
colnames(m) <- as.character(complete_date_seq)
for(i in 1:nrow(m)){
  temp <- x[id==rownames(m)[i]]
  for(j in 1:nrow(temp)){
    m[i, as.Date(complete_date_seq) %within% temp[j, interval(start_date, end_date-1)]] <- temp[j, value]
  }
}
out <- CJ(id=unique(x$id),year=years[1]:years[2])
intervalfromyear <- function(y) interval(as.Date(ymd(paste0(y,"-01-01"))), as.Date(ymd(paste0(y,"-12-31"))))
out[, annual_avg:=mean(m[rownames(m)==.BY$id,complete_date_seq %within% intervalfromyear(.BY$year)]) ,by=c("id","year")]
I'm guessing there's some package for doing time-weighting that I'm not aware of. Is this true? Ideally there's a native data.table solution that's fast.

This is basically the same approach I proposed in the question, but a lot more efficient since it creates a long data.table rather than a matrix. I spent some time looking for a different solution using foverlaps (one that doesn't create a cell for each date and instead uses a weighted-average formula), but it was more work, less easily extensible, and more error-prone.
#switch from exclusive to inclusive end_date
x[, actual_end_date:=as.Date(as.numeric(end_date)-1,origin="1970-01-01")]
z <- x[, list(date=seq(start_date,actual_end_date,by=1),value),by=c("id","start_date")]
complete_date_seq <- seq(from=as.Date(paste0(years[1],"-01-01")),
                         to=as.Date(paste0(years[2],"-12-31")), by=1)
missing_dates <- z[,list(date=as.Date(setdiff( complete_date_seq,date ),origin="1970-01-01"),value=NA),by=id]
result <- rbind(z,missing_dates,fill=TRUE)[order(id,date)]
result[, year:=substr(date,1,4)]
result[, mean(value),by=c("id","year")]
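For reference, here is a rough sketch of the foverlaps alternative mentioned above (my own code, not what I settled on; the names yrs, ystart, yend, weight, and annual are mine): join each interval to the calendar year(s) it overlaps, weight each value by the days of overlap, and blank out years that aren't fully covered. Years with no intervals at all simply don't appear in the result (a CJ on id and year could fill them in).
yrs <- data.table(year = years[1]:years[2])
yrs[, `:=`(ystart = as.Date(paste0(year, "-01-01")),
           yend   = as.Date(paste0(year + 1, "-01-01")))]
setkey(yrs, ystart, yend)
ov <- foverlaps(x, yrs, by.x = c("start_date", "end_date"), type = "any")
# days of overlap between each observation interval [start_date, end_date)
# and each calendar year [ystart, yend)
ov[, weight := as.numeric(pmin(end_date, yend) - pmax(start_date, ystart))]
annual <- ov[, .(annual_avg   = weighted.mean(value, weight),
                 days_covered = sum(weight),
                 days_in_year = as.numeric(yend[1] - ystart[1])),
             by = .(id, year)]
# per the constraints above, years not fully covered by intervals get NA
annual[days_covered < days_in_year, annual_avg := NA]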

Related

Create efficient week over week calculation with subsetting

In my working dataset, I'm trying to calculate week-over-week values for the changes in wholesale and revenue. The code seems to work, but my estimates suggest it would take about 75 hours to run what is a seemingly simple calculation. Below is a generic reproducible version, which takes about 2 minutes to run on this smaller dataset:
########################################################################################################################
# MAKE A GENERIC REPRODUCIBLE STACK OVERFLOW QUESTION
########################################################################################################################
# Create empty data frame of 26,000 observations similar to my data, but populated with noise
exampleData <- data.frame(product = rep(LETTERS, 1000),
                          wholesale = rnorm(1000*26),
                          revenue = rnorm(1000*26))
# create a week_ending column which increases by one week with every set of 26 "products"
for(i in 1:nrow(exampleData)){
  exampleData$week_ending[i] <- as.Date("2016-09-04") + 7*floor((i-1)/26)
}
exampleData$week_ending <- as.Date(exampleData$week_ending, origin = "1970-01-01")
# create empty columns to fill
exampleData$wholesale_wow <- NA
exampleData$revenue_wow <- NA
# loop through the wholesale and revenue numbers and append the week-over-week changes
for(i in 1:nrow(exampleData)){
  # set a condition where the loop only appends the week-over-week values if it's not the first week
  if(exampleData$week_ending[i] != "2016-09-04"){
    # set temporary values for the current and past week's wholesale value
    currentWholesale <- exampleData$wholesale[i]
    lastWeekWholesale <- exampleData$wholesale[which(exampleData$product == exampleData$product[i] &
                                                       exampleData$week_ending == exampleData$week_ending[i] - 7)]
    exampleData$wholesale_wow[i] <- currentWholesale/lastWeekWholesale - 1
    # set temporary values for the current and past week's revenue
    currentRevenue <- exampleData$revenue[i]
    lastWeekRevenue <- exampleData$revenue[which(exampleData$product == exampleData$product[i] &
                                                   exampleData$week_ending == exampleData$week_ending[i] - 7)]
    exampleData$revenue_wow[i] <- currentRevenue/lastWeekRevenue - 1
  }
}
Any help understanding why this takes so long or how to cut down the time would be much appreciated!
The first for loop can be replaced with the following vectorized expression:
exampleData$week_ending2 <- as.Date("2016-09-04") + 7 * floor((seq_len(nrow(exampleData)) - 1) / 26)
setequal(exampleData$week_ending, exampleData$week_ending2)
[1] TRUE
Replacing the second for loop:
library(data.table)
dt1 <- as.data.table(exampleData)
dt1[, wholesale_wow := wholesale / shift(wholesale) - 1 , by = product]
dt1[, revenue_wow := revenue / shift(revenue) - 1 , by = product]
setequal(exampleData, dt1)
[1] TRUE
This takes about 4 milliseconds to run on my laptop.
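If you want to verify that timing on your own machine, one option (the microbenchmark package is my assumption here, not part of the answer) is:
# time the grouped shift() update; times = 10 repeats the expression ten times
library(microbenchmark)
microbenchmark(
  wow = dt1[, `:=`(wholesale_wow = wholesale / shift(wholesale) - 1,
                   revenue_wow   = revenue / shift(revenue) - 1), by = product],
  times = 10
)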
Here is a vectorized solution using the tidyr package.
set.seed(123)
# Create empty data frame of 26,000 observations similar to my data, but populated with noise
exampleData <- data.frame(product = rep(LETTERS, 1000),
                          wholesale = rnorm(1000*26),
                          revenue = rnorm(1000*26))
# create a week_ending column which increases by one week with every set of 26 "products"
# vectorize the creation of the week_ending column
i <- 1:nrow(exampleData)
exampleData$week_ending <- as.Date("2016-09-04") + 7*floor((i-1)/26)
exampleData$week_ending <- as.Date(exampleData$week_ending, origin = "1970-01-01")
# create empty columns to fill
exampleData$wholesale_wow <- NA
exampleData$revenue_wow <- NA
#find the index of rows of interest (ie removing the first week)
i<-i[exampleData$week_ending!="2016-09-04"]
library(tidyr)
#create temp variables and convert into wide format
# the rows are product and the columns are the ending weeks
Wholesale<-exampleData[ ,c(1,2,4)]
Wholesale<-spread(Wholesale, week_ending, wholesale)
Revenue<-exampleData[ ,c(1,3,4)]
Revenue<-spread(Revenue, week_ending, revenue)
#number of columns
numCol<-ncol(Wholesale)
#remove the first two columns for current wholesale
#remove the first and last column for last week's wholesale
#perform the calculation on every element of the data frame (divide this week by last week)
Wholesale_wow <- Wholesale[ ,-c(1, 2)]/Wholesale[ ,-c(1, numCol)] - 1
#convert back to long format
Wholesale_wow<-gather(Wholesale_wow)
#repeat for revenue
Revenue_wow<- Revenue[ ,-c(1, 2)]/Revenue[ ,-c(1, numCol)] - 1
#convert back to long format
Revenue_wow<-gather(Revenue_wow)
#assemble calculated values back into the original dataframe
exampleData$wholesale_wow[i]<-Wholesale_wow$value
exampleData$revenue_wow[i]<-Revenue_wow$value
The strategy was to convert the original data into a wide format where the rows are the product IDs and the columns are the weeks, divide the data frames by each other, convert back into a long format, and add the newly calculated values to the exampleData data frame. It's not very clean, but it is much faster than the loop. The dplyr package is another tool for this type of work.
To compare the results of this code with your test case, use:
print(identical(goldendata, exampleData))
where goldendata is your known-good result. Be sure to use the same random numbers via the set.seed() function.
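Since dplyr is mentioned above as another tool for this kind of work, here is a minimal sketch of the same week-over-week calculation using a grouped lag() (assuming the exampleData frame built above; the wow name is mine):
library(dplyr)
wow <- exampleData %>%
  arrange(product, week_ending) %>%   # lag() needs rows ordered within each product
  group_by(product) %>%
  mutate(wholesale_wow = wholesale / lag(wholesale) - 1,
         revenue_wow   = revenue / lag(revenue) - 1) %>%
  ungroup()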

Finding a more elegant way to aggregate hourly data to mean hourly data using zoo

I have a chunk of data logging temperatures from a few dozen devices every hour for over a year. The data are stored as a zoo object. I'd very much like to summarize those data by looking at the average values for each of the 24 hours in a day (1am, 2am, 3am, etc.), so that for each device I can see its average value across all the 1am times, all the 2am times, and so on. I can do this with a loop, but I sense that there must be a way to do this in zoo with an artful use of aggregate.zoo. Any help?
require(zoo)
# random hourly data over 30 days for five series
x <- matrix(rnorm(24 * 30 * 5),ncol=5)
# Assign hourly data with a real time and date
x.DateTime <- as.POSIXct("2014-01-01 0100", format = "%Y-%m-%d %H") +
  seq(0, 24 * 30 * 60 * 60, by = 3600)
# make a zoo object
x.zoo <- zoo(x, x.DateTime)
#plot(x.zoo)
# what I want:
# the average value for each series at 1am, 2am, 3am, etc. so that
# the dimensions of the output are 24 (hours) by 5 (series)
# If I were just working on x I might do something like:
res <- matrix(NA,ncol=5,nrow=24)
for(i in 1:nrow(res)){
  res[i,] <- apply(x[seq(i, nrow(x), by = 24),], 2, mean)
}
res
# how can I avoid the loop and write an aggregate statement in zoo that
# will get me what I want?
Calculate the hour for each time point and then aggregate by that:
hr <- as.numeric(format(time(x.zoo), "%H"))
ag <- aggregate(x.zoo, hr, mean)
dim(ag)
## [1] 24 5
ADDED
Alternatively, use hours from chron or hour from data.table:
library(chron)
ag <- aggregate(x.zoo, hours, mean)
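The data.table variant of the same idea would look like this (a sketch; hour() extracts the hour of day, 0-23, and aggregate.zoo applies it to time(x.zoo)):
library(data.table)
ag <- aggregate(x.zoo, hour, mean)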
This is quite similar to the other answer but takes advantage of the fact that the by=... argument to aggregate.zoo(...) can be a function, which will be applied to time(x.zoo):
as.hour <- function(t) as.numeric(format(t,"%H"))
result <- aggregate(x.zoo,as.hour,mean)
identical(result,ag) # ag from G. Grothendieck answer
# [1] TRUE
Note that this produces a result identical to the other answer, but not the same as yours. This is because your dataset starts at 1:00am, not midnight, so your loop produces a matrix in which the first row corresponds to 1:00am and the last row corresponds to midnight. These solutions produce zoo objects in which the first row corresponds to midnight.
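If you want the rows in the same order as the loop output res (1:00am first, midnight last), one option (my own sketch, not part of the answers) is to drop to the plain matrix and permute the rows:
# move the midnight row (hour 0) to the end so rows run 1am, 2am, ..., midnight
res2 <- coredata(result)[c(2:24, 1), ]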

How do I subset every day except the last five days of zoo data?

I am trying to extract, into a single object, all dates except for the last five days of each month from a zoo dataset.
This question is somewhat related to How do I subset the last week for every month of a zoo object in R?
You can reproduce the dataset with this code:
library(zoo)
set.seed(123)
price <- rnorm(365)
data <- cbind(seq(as.Date("2013-01-01"), by = "day", length.out = 365), price)
zoodata <- zoo(data[,2], as.Date(data[,1]))
For my output, I'm hoping to get a combined dataset of everything except the last five days of each month. For example, if there are 20 days in the first month's data and 19 days in the second month's, I only want to subset the first 15 and 14 days of data respectively.
I tried using the head() function and the first() function to extract the first three weeks, but since each month has a different number of days (and leap years complicate things further), it's not ideal.
Thank you.
Here are a few approaches:
1) as.Date Let tt be the dates. Then we compute a Date vector the same length as tt which has the corresponding last date of the month. We then pick out those dates which are at least 5 days away from that:
tt <- time(zoodata)
last.date.of.month <- as.Date(as.yearmon(tt), frac = 1)
zoodata[ last.date.of.month - tt >= 5 ]
2) tapply/head For each month, use tapply to apply head(x, -5) to the data, then concatenate the reduced months back together:
do.call("c", tapply(zoodata, as.yearmon(time(zoodata)), head, -5))
3) ave Define revseq which given a vector or zoo object returns sequence numbers in reverse order so that the last element corresponds to 1. Then use ave to create a vector ix the same length as zoodata which assigns such reverse sequence numbers to the days of each month. Thus the ix value for the last day of the month will be 1, for the second last day 2, etc. Finally subset zoodata to those elements corresponding to sequence numbers greater than 5:
revseq <- function(x) rev(seq_along(x))
ix <- ave(seq_along(zoodata), as.yearmon(time(zoodata)), FUN = revseq)
z <- zoodata[ ix > 5 ]
ADDED Solutions (1) and (2).
Exactly the same way as in the answer to your other question:
Split the dataset by month and remove the last 5 days of each; just add a "-":
library(xts)
xts.data <- as.xts(zoodata)
lapply(split(xts.data, "months"), last, "-5 days")
And the same way, if you want it on one single object:
do.call(rbind, lapply(split(xts.data, "months"), last, "-5 days"))

Compute average over sliding time interval (7 days ago/later) in R

I've seen a lot of solutions for working with groups of times or dates, like aggregate to sum daily observations into weekly observations, or other solutions to compute a moving average, but I haven't found a way to do what I want, which is to pluck relative dates out of data keyed by an additional variable.
I have daily sales data for a bunch of stores. So that is a data.frame with columns
store_id date sales
It's nearly complete, but there are some missing data points, and those missing data points are having a strong effect on our models (I suspect). So I used expand.grid to make sure we have a row for every store and every date, but at this point the sales data for those missing data points are NAs. I've found solutions like
dframe[is.na(dframe)] <- 0
or
dframe$sales[is.na(dframe$sales)] <- mean(dframe$sales, na.rm = TRUE)
but I'm not happy with the RHS of either of those. I want to replace missing sales data with our best estimate, and the best estimate of sales for a given store on a given date is the average of the sales 7 days prior and 7 days later. E.g. for Sunday the 8th, the average of Sunday the 1st and Sunday the 15th, because sales is significantly dependent on day of the week.
So I guess I can use
dframe$sales[is.na(dframe$sales)] <- my_func(dframe)
where my_func(dframe) replaces every store's missing sales data with the average of that store's sales 7 days prior and 7 days later (ignoring, for the first go-around, the situation where one of those data points is also missing), but I have no idea how to write my_func in an efficient way.
How do I match up the store_id and the dates 7 days prior and future without using a terribly inefficient for loop? Preferably using only base R packages.
Something like:
with(
  dframe,
  ave(sales, store_id, FUN = function(x) {
    naw <- which(is.na(x))
    x[naw] <- rowMeans(cbind(x[naw+7], x[naw-7]))
    x
  })
)
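As a minimal check of the idea (my own toy data, not from the question; it assumes one row per store and date, sorted by date within store, as the expand.grid step guarantees):
# toy data: one store, 21 consecutive days, sales missing on day 8
dframe <- data.frame(store_id = 1L,
                     date = seq(as.Date("2020-01-01"), by = "day", length.out = 21),
                     sales = rnorm(21))
dframe$sales[8] <- NA
dframe$sales <- with(
  dframe,
  ave(sales, store_id, FUN = function(x) {
    naw <- which(is.na(x))
    x[naw] <- rowMeans(cbind(x[naw + 7], x[naw - 7]))
    x
  })
)
dframe$sales[8]   # now the average of the sales on day 1 and day 15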

Given a list of events with timestamps, how do I use R to calculate the average rate at which such event is happening?

I have a CSV file with a list of posts from an online discussion forum. I have the timestamp for each post in this format: YYYY-MM-DD hh:mm:ss.
I want to calculate how often a new post is submitted, as in "X posts per second". I think what I need is just the mean, median and sd for the rate of posting (posts per second). I just loaded the CSV:
d <- read.csv("posts.csv")
colnames(d) <- c("post.id", "timestamp")
The average number of posts per second is just 1 divided by the interval since the last post, so make a vector of intervals with diff(times) and then take mean(1/as.numeric(diff(times))).
> posts <- data.frame(ids = paste(letters[sample(1:26, 100, replace=TRUE)], sample(1:100)),
                      time = Sys.time() + cumsum(abs(rnorm(100))*100))
> mean( 1/as.numeric(diff(posts$time)) )
[1] 0.03545346
Edit: I thought that by using cumsum I would get the time series ordered, but that was not the case, so it's amended to take abs(rnorm(100)).
Something like:
tt <- table(cut(as.POSIXlt(d$timestamp),"1 sec"))
c(mean(tt),median(tt),sd(tt))
You didn't provide a reproducible example, so I'm not 100% sure this works, but it should be something like that. I also don't know how well it will scale to giant data sets.
More detail (with example):
set.seed(1001)
n <- 1e5
nt <- 1e5
z <- seq(as.POSIXct("2010-09-01"),length=nt,by="1 sec")
length(z)
z2 <- sample(z,size=n,replace=TRUE)
tt <- table(cut(z2,"1 sec"))
c(mean(tt),median(tt),sd(tt))
This tiny example suggests that the cut() command might be slow.
Play with the 'nt' (number of seconds in the time interval from beginning to end) and 'n' (number of samples) parameters to get a sense of how long your problem will take.
I don't know your programming language, but if you can convert the timestamps to milliseconds, just subtract the lowest from the highest timestamp and divide by 1000 to get the total span in seconds, then divide the number of posts (rows in posts.csv) by that span to get posts per second. If you can get the timestamps in seconds, it's the same except you skip the division by 1000.
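In R terms (assuming the d loaded above and that the timestamps parse with the format given in the question), that overall rate is:
# overall posts per second = number of posts / total span in seconds
t <- as.POSIXct(d$timestamp, format = "%Y-%m-%d %H:%M:%S")
nrow(d) / as.numeric(difftime(max(t), min(t), units = "secs"))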
