Finding gaps between intervals using data.table - r

I have the following problem: given a set of non-overlapping intervals in a data.table, report the gaps between the intervals.
I have implemented this once in SQL; however, I am struggling with data.table due to the lack of a lead or lag function. For completeness, I have here the SQL code. I know this functionality has been implemented in data.table version 1.9.5, according to the changelog. So is this possible with data.table without doing a lot of merges and without a lag or lead function?
In principle, I am not fully against using merges (aka joins) as long as performance does not suffer. I think this has an easy implementation, but I can't figure out how to "get" the previous end time to be the starting time of my gap table.
For example:
# The timestamps are POSIXct (internally, seconds since 1970-01-01)
dat <- structure(
  list(ID = c(1L, 1L, 1L, 2L, 2L, 2L),
       stime = structure(c(as.POSIXct("2014-01-15 08:00:00"),
                           as.POSIXct("2014-01-15 11:00:00"),
                           as.POSIXct("2014-01-16 11:30:00"),
                           as.POSIXct("2014-01-15 09:30:00"),
                           as.POSIXct("2014-01-15 12:30:00"),
                           as.POSIXct("2014-01-15 13:30:00")),
                         class = c("POSIXct", "POSIXt"), tzone = ""),
       etime = structure(c(as.POSIXct("2014-01-15 10:30:00"),
                           as.POSIXct("2014-01-15 12:00:00"),
                           as.POSIXct("2014-01-16 13:00:00"),
                           as.POSIXct("2014-01-15 11:00:00"),
                           as.POSIXct("2014-01-15 12:45:00"),
                           as.POSIXct("2014-01-15 14:30:00")),
                         class = c("POSIXct", "POSIXt"), tzone = "")),
  .Names = c("ID", "stime", "etime"),
  sorted = c("ID", "stime", "etime"),
  class = c("data.table", "data.frame"),
  row.names = c(NA, -6L)
)
dat <- data.table(dat)
The gaps should be reported as:
ID stime etime
1 2014-01-15 10:30:00 2014-01-15 11:00:00
1 2014-01-15 12:00:00 2014-01-16 11:30:00
2 2014-01-15 11:00:00 2014-01-15 12:30:00
2 2014-01-15 12:45:00 2014-01-15 13:30:00
Note: gaps are reported even across day boundaries (the second gap runs from 2014-01-15 12:00 to 2014-01-16 11:30).

A variation on David's answer, likely a little less efficient, but simpler to type out:
setkey(dat, stime)[, .(stime=etime[-.N], etime=stime[-1]), by=ID]
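# etime[-.N] drops each ID's last end time and stime[-1] drops its first
# start time, pairing every end with the next interval's start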
Produces:
ID stime etime
1: 1 2014-01-15 10:30:00 2014-01-15 11:00:00
2: 1 2014-01-15 12:00:00 2014-01-16 11:30:00
3: 2 2014-01-15 11:00:00 2014-01-15 12:30:00
4: 2 2014-01-15 12:45:00 2014-01-15 13:30:00
setkey is just there to make sure the table is sorted by time.

If I'm not missing something, you are missing a row in your desired output, so here's my attempt using shift from the devel version as you mentioned.
library(data.table) ## v >= 1.9.5
indx <- dat[, .I[-.N], by = ID]$V1 # row numbers, excluding each ID's last row
res <- dat[, .(ID, stime = etime, etime = shift(stime, type = "lead"))][indx]
res
# ID stime etime
# 1: 1 2014-01-15 10:30:00 2014-01-15 11:00:00
# 2: 1 2014-01-15 12:00:00 2014-01-16 11:30:00
# 3: 2 2014-01-15 11:00:00 2014-01-15 12:30:00
# 4: 2 2014-01-15 12:45:00 2014-01-15 13:30:00
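With shift() available inside a grouped j in released data.table, the helper index can also be folded into a single grouped call; a minimal sketch on the same dat:
dat[order(stime), .(stime = etime[-.N], etime = shift(stime, type = "lead")[-.N]), by = ID]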

Related

Updating Dates and Date Intervals in R

Not even sure if I've described the problem accurately in the title, but here goes.
Suppose I have the following data.table/data.frame:
library(data.table)
library(lubridate)
DT <- data.table(begin = c("2019-06-01 09:00:00","2019-06-01 09:00:00", "2019-06-01 09:00:00",
"2019-06-01 09:00:00", "2016-06-01 09:00:00","2016-06-01 09:00:00"),
end = c("2019-06-03 14:00:00", "2019-06-03 14:00:00", "2019-06-03 14:00:00",
"2019-06-02 05:00:00", "2019-06-02 05:00:00", "2016-06-01 23:15:00"),
person = c("A", "A","A", "B", "B", "C"))
begin end person
1: 2019-06-01 09:00:00 2019-06-03 14:00:00 A
2: 2019-06-01 09:00:00 2019-06-03 14:00:00 A
3: 2019-06-01 09:00:00 2019-06-03 14:00:00 A
4: 2019-06-01 09:00:00 2019-06-02 05:00:00 B
5: 2016-06-01 09:00:00 2019-06-02 05:00:00 B
6: 2016-06-01 09:00:00 2016-06-01 23:15:00 C
This is essentially a dataset summarizing time stamps of when a period began and ended for each person. Rows are repeated for each person according to the number of days their time period spans. For example, person A has three entries for the same "shift" because their shift spans three distinct dates: 06-01, 06-02, and 06-03. Entries are repeated once per date the shift spans, though some shifts begin and end within the same day.
What I want is to update the begin and end dates of the above dataset, so that I can see what time each shift began and ended at the day level. So the dataset should look like:
begin end person
1: 2019-06-01 09:00:00 2019-06-02 00:00:00 A
2: 2019-06-02 00:00:00 2019-06-03 00:00:00 A
3: 2019-06-03 00:00:00 2019-06-03 14:00:00 A
4: 2019-06-01 09:00:00 2019-06-02 00:00:00 B
5: 2016-06-02 00:00:00 2019-06-02 05:00:00 B
6: 2016-06-01 09:00:00 2016-06-01 23:15:00 C
Any help would be greatly appreciated!
First, fix the dates (I also corrected row 5, which began in 2016 and ran through to 2019; that seems unlikely):
DT[, c("begin", "end"):=lapply(.SD, as.POSIXct), .SDcols=c("begin", "end")]
## we get this
DT <- as.data.table(structure(
  list(begin = structure(c(1559394000, 1559394000, 1559394000, 1559394000,
                           1559394000, 1464786000),
                         class = c("POSIXct", "POSIXt"), tzone = ""),
       end = structure(c(1559584800, 1559584800, 1559584800, 1559466000,
                         1559466000, 1464837300),
                       class = c("POSIXct", "POSIXt"), tzone = ""),
       person = c("A", "A", "A", "B", "B", "C")),
  row.names = c(NA, -6L), class = c("data.table", "data.frame")))
Second, we then create this function
func <- function(st, en) {
  # midnights strictly between st and en, plus the two endpoints themselves
  midns <- lubridate::ceiling_date(seq(st, en, by = "day"), unit = "day")
  times <- unique(sort(c(midns[st < midns & midns < en], st, en)))
  data.table(begin = times[-length(times)], end = times[-1])
}
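For a single shift, func splits at each midnight crossed; for example (a standalone call, with the times parsed via as.POSIXct):
func(as.POSIXct("2019-06-01 09:00:00"), as.POSIXct("2019-06-03 14:00:00"))
#                  begin                 end
# 1: 2019-06-01 09:00:00 2019-06-02 00:00:00
# 2: 2019-06-02 00:00:00 2019-06-03 00:00:00
# 3: 2019-06-03 00:00:00 2019-06-03 14:00:00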
Lastly, we use it, with by = .(person) to preserve that column in the output. I use unique(DT) since we do not need (or even want) the duplicated rows for each shift/day:
unique(DT)[, rbindlist(Map(func, begin, end)), by = .(person)]
# person begin end
# <char> <POSc> <POSc>
# 1: A 2019-06-01 09:00:00 2019-06-02 00:00:00
# 2: A 2019-06-02 00:00:00 2019-06-03 00:00:00
# 3: A 2019-06-03 00:00:00 2019-06-03 14:00:00
# 4: B 2019-06-01 09:00:00 2019-06-02 00:00:00
# 5: B 2019-06-02 00:00:00 2019-06-02 05:00:00
# 6: C 2016-06-01 09:00:00 2016-06-01 23:15:00
Assuming you had a typo for row 5 person B (begin 2019 not 2016):
library(data.table)
library(lubridate)
DT <- data.table(begin = c("2019-06-01 09:00:00", "2019-06-01 09:00:00", "2019-06-01 09:00:00",
                           "2019-06-01 09:00:00", "2019-06-01 09:00:00", "2016-06-01 09:00:00"),
                 end = c("2019-06-03 14:00:00", "2019-06-03 14:00:00", "2019-06-03 14:00:00",
                         "2019-06-02 05:00:00", "2019-06-02 05:00:00", "2016-06-01 23:15:00"),
                 person = c("A", "A", "A", "B", "B", "C"))

DT[, `:=`(min = as.numeric(difftime(end, begin, units = "mins")),
          days = as.numeric(as_date(end) - as_date(begin) + 1))][, min_day := min/days]

unique(DT)
begin end person min days min_day
1: 2019-06-01 09:00:00 2019-06-03 14:00:00 A 3180 3 1060
2: 2019-06-01 09:00:00 2019-06-02 05:00:00 B 1200 2 600
3: 2016-06-01 09:00:00 2016-06-01 23:15:00 C 855 1 855

Summarize values for overlapping time periods

I'm trying to summarize values for overlapping time periods.
I can use only tidyr, ggplot2 and dplyr libraries. Base R is preferred though.
My data looks like this, but usually it has around 100 records:
df <- structure(list(
  Start = structure(c(1546531200, 1546531200, 546531200, 1546638252.6316,
                      1546549800, 1546534800, 1546545600, 1546531200,
                      1546633120, 1547065942.1053),
                    class = c("POSIXct", "POSIXt"), tzone = "UTC"),
  Stop = structure(c(1546770243.1579, 1546607400, 1547110800, 1546670652.6316,
                     1547122863.1579, 1546638252.6316, 1546878293.5579,
                     1546416000, 1546849694.4, 1547186400),
                   class = c("POSIXct", "POSIXt"), tzone = "UTC"),
  Value = c(12610, 520, 1500, 90, 331380, 27300, 6072, 4200, 61488, 64372)),
  .Names = c("Start", "Stop", "Value"),
  row.names = c(41L, 55L, 25L, 29L, 38L, 28L, 1L, 20L, 14L, 31L),
  class = c("tbl_df", "tbl", "data.frame"))
head(df) and str(df) gives:
Start Stop Value
2019-01-03 16:00:00 2019-01-06 10:24:03 12610
2019-01-03 16:00:00 2019-01-04 13:10:00 520
2019-01-03 16:00:00 2019-01-10 09:00:00 1500
2019-01-04 21:44:12 2019-01-05 06:44:12 90
2019-01-03 21:10:00 2019-01-10 12:21:03 331380
2019-01-03 17:00:00 2019-01-04 21:44:12 27300
Classes ‘tbl_df’, ‘tbl’ and 'data.frame': 10 obs. of 3 variables:
$ Start: POSIXct, format: "2019-01-03 16:00:00" "2019-01-03 16:00:00" ...
$ Stop : POSIXct, format: "2019-01-06 10:24:03" "2019-01-04 13:10:00" ...
$ Value: num 12610 520 1500 90 331380 ...
So there are overlapping time periods, each with "Start" and "Stop" dates and an assigned value. Within a given record the value applies between df$Start and df$Stop; outside that window it is 0.
I want to create another dataframe based on which I could show how these values sum up and change over time. The desired output would look like this (the "sum" column is made up):
> head(df2)
timestamp sum
"2019-01-02 09:00:00 CET" 14352
"2019-01-03 17:00:00 CET" 6253
"2019-01-03 18:00:00 CET" 23465
"2019-01-03 21:00:00 CET" 3241
"2019-01-03 22:10:00 CET" 23235
"2019-01-04 14:10:00 CET" 123321
To get unique timestamps:
timestamps <- sort(unique(c(df$`Start`, df$`Stop`)))
With the df2 data frame I could easily draw a graph with ggplot, but how do I get these sums?
I think I should iterate over the df data frame with either a custom function or a built-in summarize function that would work like this:
fnct <- function(date, min, max, value) {
  if (date >= min && date <= max) {
    a <- value
  } else {
    a <- 0
  }
  return(a)
}
...for every given date from timestamps iterate through df and give me a sum of values for the timestamp.
It looks really simple and I'm missing something very basic.
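(That looping idea can be written directly in base R, which I would prefer; a minimal sketch, assuming df and timestamps as defined above:)
df2 <- data.frame(
  timestamp = timestamps,
  # for each timestamp, sum the values of all intervals that contain it
  sum = sapply(timestamps, function(ts) sum(df$Value[df$Start <= ts & ts <= df$Stop]))
)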
Here's a tidyverse solution similar to my response to this recent question. I gather to bring the timestamps (Starts and Stops) into one column, with another column specifying which. The Starts add the value and the Stops subtract it, and then we just take the cumulative sum to get values at all the instants when the sum changes.
For 100 records, there won't be any perceivable speed improvement from using data.table; in my experience it starts to make more of a difference around 1M records, especially when grouping is involved.
library(dplyr); library(tidyr)
df2 <- df %>%
  gather(type, time, Start:Stop) %>%
  mutate(chg = if_else(type == "Start", Value, -Value)) %>%
  arrange(time) %>%
  mutate(sum = cumsum(chg)) # EDIT: corrected per OP comment
> head(df2)
## A tibble: 6 x 5
# Value type time chg sum
# <dbl> <chr> <dttm> <dbl> <dbl>
#1 1500 Start 1987-04-27 14:13:20 1500 1500
#2 4200 Stop 2019-01-02 08:00:00 -4200 -2700
#3 12610 Start 2019-01-03 16:00:00 12610 9910
#4 520 Start 2019-01-03 16:00:00 520 10430
#5 4200 Start 2019-01-03 16:00:00 4200 14630
#6 27300 Start 2019-01-03 17:00:00 27300 41930
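Since the sum changes only at these instants, a step plot displays df2 naturally; a quick sketch (assuming the df2 built above):
library(ggplot2)
ggplot(df2, aes(time, sum)) + geom_step()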
In the past I have tried to solve similar problems using the tidyverse/base R... But nothing comes even remotely close to the speeds that data.table provides for these kinds of operations, so I encourage you to give it a try...
For questions like this, my favourite function is foverlaps() from the data.table package. With this function you can (fast!) perform an overlap join. If you want more flexibility in your joining than foverlaps() provides, a non-equi join (again using data.table) is probably the best (and fastest!) option; see the sketch at the end of this answer. But foverlaps() will do here (I guess).
I used the sample data you provided, but filtered out rows where Stop <= Start (probably a typo in your sample data). When df$Start is not before df$Stop, foverlaps gives a warning and won't execute.
library( data.table )
#create data.table with periods you wish to summarise on
#NB: UTC is used as timezone, since this is also the case in the sample data provided!!
dt.dates <- data.table( id = paste0( "Day", 1:31 ),
                        Start = seq( as.POSIXct( "2019-01-01 00:00:00", format = "%Y-%m-%d %H:%M:%S", tz = "UTC" ),
                                     as.POSIXct( "2019-01-31 00:00:00", format = "%Y-%m-%d %H:%M:%S", tz = "UTC" ),
                                     by = "1 days" ),
                        Stop = seq( as.POSIXct( "2019-01-02 00:00:00", format = "%Y-%m-%d %H:%M:%S", tz = "UTC" ) - 1,
                                    as.POSIXct( "2019-02-01 00:00:00", format = "%Y-%m-%d %H:%M:%S", tz = "UTC" ) - 1,
                                    by = "1 days" ) )
If you do not want to summarise on a daily basis but by hour, minute, second, or year, just change the values (and step size) in the dt.dates data.table so that they match your periods.
#set df as data.table
dt <- as.data.table( df )
#filter out any row where Stop is smaller than Start
dt <- dt[ Start < Stop, ]
#perform overlap join
#first set keys
setkey(dt, Start, Stop)
#then perform join
result <- foverlaps( dt.dates, dt, type = "within" )
#summarise
result[, .( Value = sum( Value , na.rm = TRUE ) ), by = .(Day = i.Start) ]
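#NB: i.Start is dt.dates' Start column; foverlaps() prefixes clashing
#column names from its first argument with "i."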
output
# Day Value
# 1: 2019-01-01 1500
# 2: 2019-01-02 1500
# 3: 2019-01-03 1500
# 4: 2019-01-04 351562
# 5: 2019-01-05 413050
# 6: 2019-01-06 400440
# 7: 2019-01-07 332880
# 8: 2019-01-08 332880
# 9: 2019-01-09 332880
# 10: 2019-01-10 64372
# 11: 2019-01-11 0
# 12: 2019-01-12 0
# 13: 2019-01-13 0
# 14: 2019-01-14 0
# 15: 2019-01-15 0
# 16: 2019-01-16 0
# 17: 2019-01-17 0
# 18: 2019-01-18 0
# 19: 2019-01-19 0
# 20: 2019-01-20 0
# 21: 2019-01-21 0
# 22: 2019-01-22 0
# 23: 2019-01-23 0
# 24: 2019-01-24 0
# 25: 2019-01-25 0
# 26: 2019-01-26 0
# 27: 2019-01-27 0
# 28: 2019-01-28 0
# 29: 2019-01-29 0
# 30: 2019-01-30 0
# 31: 2019-01-31 0
# Day Value
plot
#summarise for plot
result.plot <- result[, .( Value = sum( Value , na.rm = TRUE ) ), by = .(Day = i.Start) ]
library( ggplot2 )
ggplot( data = result.plot, aes( x = Day, y = Value ) ) + geom_col()
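As mentioned at the top, the same summary can be produced with a non-equi join instead of foverlaps(); a sketch reproducing the type = "within" logic (assuming dt and dt.dates as built above):
res <- dt[ dt.dates,
           .( Value = sum( Value, na.rm = TRUE ) ),  # non-matching days sum to 0
           on = .( Start <= Start, Stop >= Stop ),   # event interval contains the day
           by = .EACHI ]
res[ , .( Day = Start, Value ) ]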

R add specific (different) amounts of times to entire column

I have a table in R like:
start duration
02/01/2012 20:00:00 5
05/01/2012 07:00:00 6
etc... etc...
I got to this by importing a table from Microsoft Excel that looked like this:
date time duration
2012/02/01 20:00:00 5
etc...
I then merged the date and time columns by running the following code:
d.f <- within(d.f, { start=format(as.POSIXct(paste(date, time)), "%m/%d/%Y %H:%M:%S") })
I want to create a third column called 'end', which will be calculated as the number of hours after the start time. I am pretty sure that my time is a POSIXct vector. I have seen how to manipulate one datetime object, but how can I do that for the entire column?
The expected result should look like:
start duration end
02/01/2012 20:00:00 5 02/02/2012 01:00:00
05/01/2012 07:00:00 6 05/01/2012 13:00:00
etc... etc... etc...
Using lubridate
> library(lubridate)
> df$start <- mdy_hms(df$start)
> df$end <- df$start + hours(df$duration)
> df
# start duration end
#1 2012-02-01 20:00:00 5 2012-02-02 01:00:00
#2 2012-05-01 07:00:00 6 2012-05-01 13:00:00
data
df <- structure(list(start = c("02/01/2012 20:00:00", "05/01/2012 07:00:00"
), duration = 5:6), .Names = c("start", "duration"), class = "data.frame", row.names = c(NA,
-2L))
You can simply add duration*3600 to the start column of the data frame, since POSIXct arithmetic works in seconds. E.g. with one date:
start = as.POSIXct("02/01/2012 20:00:00",format="%m/%d/%Y %H:%M:%S")
start
[1] "2012-02-01 20:00:00 CST"
start + 5*3600
[1] "2012-02-02 01:00:00 CST"

Combining time series data with different resolution in R

I have read in and formatted my data set as shown below.
library(xts)
#Read data from file
x <- read.csv("data.dat", header=F)
x[is.na(x)] <- c(0) #If empty fill in zero
#Construct data frames
rawdata.h <- data.frame(x[,2],x[,3],x[,4],x[,5],x[,6],x[,7],x[,8]) #Hourly data
rawdata.15min <- data.frame(x[,10]) #15 min data
#Convert time index to proper format
index.h <- as.POSIXct(strptime(x[,1], "%d.%m.%Y %H:%M"))
index.15min <- as.POSIXct(strptime(x[,9], "%d.%m.%Y %H:%M"))
#Set column names
names(rawdata.h) <- c("spot","RKup", "RKdown","RKcon","anm", "pp.stat","prod.h")
names(rawdata.15min) <- c("prod.15min")
#Convert data frames to time series objects
data.htemp <- xts(rawdata.h,order.by=index.h)
data.15mintemp <- xts(rawdata.15min,order.by=index.15min)
#Select desired subset period
data.h <- data.htemp["2013"]
data.15min <- data.15mintemp["2013"]
I want to combine the hourly data in data.h$prod.h with the corresponding 15-minute-resolution data in data.15min$prod.15min from the same hour.
An example would be to take the average of the hourly value at time 2013-12-01 00:00-01:00 with the last 15 minute value in that same hour, i.e. the 15 minute value from time 2013-12-01 00:45-01:00. I'm looking for a flexible way to do this with an arbitrary hour.
Any suggestions?
Edit: Just to clarify further, I want to do something like this:
N <- NROW(data.h$prod.h)
for (i in 1:N){
  prod.average[i] <- mean(data.h$prod.h[i] + #INSERT CODE THAT FINDS LAST 15 MIN IN HOUR i )
}
I found a solution to my problem by converting the 15-minute data into hourly data using the very useful .index* functions from the xts package, as shown below.
prod.new <- data.15min$prod.15min[.indexmin(data.15min$prod.15min) %in% c(45:59)]
This creates a new time series containing only the values occurring in the 45-59 minute interval of each hour.
For those curious my data looked like this:
Original hourly series:
> data.h$prod.h[1:4]
2013-01-01 00:00:00 19.744
2013-01-01 01:00:00 27.866
2013-01-01 02:00:00 26.227
2013-01-01 03:00:00 16.013
Original 15 minute series:
> data.15min$prod.15min[1:4]
2013-09-30 00:00:00 16.4251
2013-09-30 00:15:00 18.4495
2013-09-30 00:30:00 7.2125
2013-09-30 00:45:00 12.1913
2013-09-30 01:00:00 12.4606
2013-09-30 01:15:00 12.7299
2013-09-30 01:30:00 12.9992
2013-09-30 01:45:00 26.7522
New series with only the last 15 minutes in each hour:
> prod.new[1:4]
2013-09-30 00:45:00 12.1913
2013-09-30 01:45:00 26.7522
2013-09-30 02:45:00 5.0332
2013-09-30 03:45:00 2.6974
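From here, aligning and averaging the two series is one more step; a sketch (assuming data.h and prod.new as above): shift prod.new's :45 stamps back to the top of the hour so they line up with data.h's hourly index, then merge and average:
prod.shifted <- prod.new
index(prod.shifted) <- index(prod.shifted) - 45 * 60 # :45 stamp -> start of hour
both <- merge(data.h$prod.h, prod.shifted, join = "inner")
prod.average <- xts(rowMeans(both), order.by = index(both))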
Short answer
df %>%
  group_by(t = cut(time, "30 min")) %>%
  summarise(v = mean(value))
Long answer
Since you want to compress the 15-minute time series to a coarser resolution (30 minutes), you should use the dplyr package or any other package that implements the "group by" concept.
For instance:
s = seq(as.POSIXct("2017-01-01"), as.POSIXct("2017-01-02"), "15 min")
df = data.frame(time = s, value=1:97)
df is a time series with 97 rows and two columns.
head(df)
time value
1 2017-01-01 00:00:00 1
2 2017-01-01 00:15:00 2
3 2017-01-01 00:30:00 3
4 2017-01-01 00:45:00 4
5 2017-01-01 01:00:00 5
6 2017-01-01 01:15:00 6
The cut.POSIXt, group_by and summarise functions do the work:
df %>%
  group_by(t = cut(time, "30 min")) %>%
  summarise(v = mean(value))
t v
1 2017-01-01 00:00:00 1.5
2 2017-01-01 00:30:00 3.5
3 2017-01-01 01:00:00 5.5
4 2017-01-01 01:30:00 7.5
5 2017-01-01 02:00:00 9.5
6 2017-01-01 02:30:00 11.5
A more robust way is to convert the 15-minute values into hourly values by taking the average of the four values in each hour, then do whatever operation you want.
### 15 Minutes Data
min15 <- structure(list(
  V1 = structure(1:8, .Label = c("2013-01-01 00:00:00", "2013-01-01 00:15:00",
                                 "2013-01-01 00:30:00", "2013-01-01 00:45:00",
                                 "2013-01-01 01:00:00", "2013-01-01 01:15:00",
                                 "2013-01-01 01:30:00", "2013-01-01 01:45:00"),
                 class = "factor"),
  V2 = c(16.4251, 18.4495, 7.2125, 12.1913, 12.4606, 12.7299, 12.9992, 26.7522)),
  .Names = c("V1", "V2"), class = "data.frame", row.names = c(NA, -8L))
min15
### Hourly Data
hourly <- structure(list(
  V1 = structure(1:4, .Label = c("2013-01-01 00:00:00", "2013-01-01 01:00:00",
                                 "2013-01-01 02:00:00", "2013-01-01 03:00:00"),
                 class = "factor"),
  V2 = c(19.744, 27.866, 26.227, 16.013)),
  .Names = c("V1", "V2"), class = "data.frame", row.names = c(NA, -4L))
hourly
### Convert 15min data into hourly data by taking average of 4 values
min15$V1 <- as.POSIXct(min15$V1,origin="1970-01-01 0:0:0")
min15 <- aggregate(. ~ cut(min15$V1,"60 min"),min15[setdiff(names(min15), "V1")],mean)
min15
names(min15) <- c("time","min15")
names(hourly) <- c("time","hourly")
### merge the corresponding values
combined <- merge(hourly,min15)
### average of hourly and 15min values
rowMeans(combined[,2:3])

in R sum the number of rows per factor within another factor, and return 0 when 'specific' data is missing

I'm stuck with a potential issue and I hope you could help me out:)
For example, I have the following data table that displays multiple stores; every time a visitor enters a store, the time and date are recorded. This implies that every row/line is one visitor entering one of the stores.
data <- structure(list(
  storeID = c("1", "1", "1", "1", "1", "2", "2", "2", "2", "2",
              "3", "3", "3", "3", "3", "4", "4", "4", "4", "4"),
  Time = structure(c(6L, 7L, 8L, 9L, 10L, 11L, 12L, 13L, 14L, 15L,
                     1L, 2L, 3L, 4L, 5L, 16L, 17L, 18L, 19L, 20L),
                   .Label = c(" 12:09:19", " 12:09:25", " 13:09:30", " 13:09:35",
                              " 14:09:40", " 12:00:03", " 12:00:09", " 12:00:14",
                              " 14:00:25", " 16:00:32", " 12:27:19", " 13:27:25",
                              " 14:27:41", " 14:27:46", " 17:27:59", " 12:46:10",
                              " 12:46:19", " 13:46:29", " 14:46:39", " 15:46:50"),
                   class = "factor"),
  Date = structure(c(1351728000, 1351728000, 1351728000, 1351728000, 1351728000,
                     1351814400, 1351814400, 1351814400, 1351814400, 1351814400,
                     1351814400, 1351814400, 1351814400, 1351814400, 1351814400,
                     1351814400, 1351814400, 1351814400, 1351814400, 1351814400),
                   class = c("POSIXct", "POSIXt"), tzone = "UTC")),
  .Names = c("storeID", "Time", "Date"), class = "data.frame",
  row.names = c(NA, -20L))
[EDIT] The stores are open 24/7. I would like a solution that assigns each visit/row to one of the 24 one-hour periods in a day (i.e., 09:00-10:00 being 1, 10:00-11:00 being 2, etc.). Then I would like the number of visitors per hour period over two consecutive days, separated by certain fixed factors, e.g., storeID and City (not shown in this example).
Also, if no visitors enter the store within a time interval, I would like the data file to show that, which should in this case return 0.
[EDIT]
Note that my data file is huge, having over 700k rows.
I hope I made my issue clear.
MvZB
First method: Using the zoo package as illustrated here very nicely by Dirk. I've explained the code inline. Something like this should do it:
df <- data # I just prefer `df` to `data`
df$storeID <- as.numeric(as.character(df$storeID)) # make sure it's numeric
# instantiate the zoo object by providing values corresponding to time
require(zoo)
z <- zoo(as.numeric(as.character(df$storeID)),
         as.POSIXct(paste(df$Date, df$Time)))
# create output data.frame with all possible timings
open_time <- paste(9:18, "00", "00", sep = ":")
open_date <- as.character(unique(df$Date))
out.df <- data.frame(Date = rep(open_date, each = length(open_time) - 1),
                     Start = rep(head(open_time, -1), length(open_date)),
                     End = rep(tail(open_time, -1), length(open_date)))
# pointer for matching later
out.df$Pointer <- as.POSIXct(paste(out.df$Date, out.df$Start))
# initialise count to 0
out.df$count <- 0
# aggregate using zoo's magic function!
# the first argument holds the values; the second floors each time in z
# (your data) down to its hourly interval; `length` then counts the
# visits that fall in each hourly interval
agg.out <- aggregate(z, time(z) - as.numeric(time(z)) %% 3600, length)
# once that is done, just match the corresponding times and place them rightly
m.idx <- match(out.df$Pointer, index(agg.out))
out.df$count[!is.na(m.idx)] <- agg.out[m.idx[!is.na(m.idx)]]
out.df <- subset(out.df, select = -c(Pointer))
# and you're done
# and you're done
> out.df
# Date Start End count
# 1 2012-11-01 9:00:00 10:00:00 0
# 2 2012-11-01 10:00:00 11:00:00 0
# 3 2012-11-01 11:00:00 12:00:00 0
# 4 2012-11-01 12:00:00 13:00:00 3
# 5 2012-11-01 13:00:00 14:00:00 0
# 6 2012-11-01 14:00:00 15:00:00 1
# 7 2012-11-01 15:00:00 16:00:00 0
# 8 2012-11-01 16:00:00 17:00:00 1
# 9 2012-11-01 17:00:00 18:00:00 0
# 10 2012-11-02 9:00:00 10:00:00 0
# 11 2012-11-02 10:00:00 11:00:00 0
# 12 2012-11-02 11:00:00 12:00:00 0
# 13 2012-11-02 12:00:00 13:00:00 5
# 14 2012-11-02 13:00:00 14:00:00 4
# 15 2012-11-02 14:00:00 15:00:00 4
# 16 2012-11-02 15:00:00 16:00:00 1
# 17 2012-11-02 16:00:00 17:00:00 0
# 18 2012-11-02 17:00:00 18:00:00 1
Second method: without the zoo package, drawing on an idea from Dirk again here, but using data.table for fast access. Again, look to the inline comments for explanation.
require(data.table)
df <- data # I prefer `df` to `data`
# create an id column containing only the hours
df$id <- as.numeric(as.POSIXlt(paste(df$Date, df$Time))$hour)
# convert Date to character
df$Date <- as.character(df$Date)
# create input data.table with Date and id as keys
dt.in <- data.table(df)
setkey(dt.in, "Date", "id")
# get the count of customers / hour / date
dt.tmp <- dt.in[, .N, by = c("Date", "id")]
# create the output template data.table with Date, Start and End
open_time <- paste(9:18, "00", "00", sep = ":")
open_date <- as.character(unique(df$Date))
dt.out <- data.table(Date = rep(open_date, each = length(open_time) - 1),
                     Start = rep(head(open_time, -1), length(open_date)),
                     End = rep(tail(open_time, -1), length(open_date)))
# create the id again by extracting the hour
dt.out[, id := as.numeric(as.POSIXlt(paste(Date, Start))$hour)]
setkey(dt.out, "Date", "id")
# merge the two data.tables to get your output
dt.out <- dt.tmp[dt.out, list(Start, End, N)]
dt.out[, id := NULL]
> dt.out
# Date Start End N
# 1: 2012-11-01 9:00:00 10:00:00 NA
# 2: 2012-11-01 10:00:00 11:00:00 NA
# 3: 2012-11-01 11:00:00 12:00:00 NA
# 4: 2012-11-01 12:00:00 13:00:00 3
# 5: 2012-11-01 13:00:00 14:00:00 NA
# 6: 2012-11-01 14:00:00 15:00:00 1
# 7: 2012-11-01 15:00:00 16:00:00 NA
# 8: 2012-11-01 16:00:00 17:00:00 1
# 9: 2012-11-01 17:00:00 18:00:00 NA
# 10: 2012-11-02 9:00:00 10:00:00 NA
# 11: 2012-11-02 10:00:00 11:00:00 NA
# 12: 2012-11-02 11:00:00 12:00:00 NA
# 13: 2012-11-02 12:00:00 13:00:00 5
# 14: 2012-11-02 13:00:00 14:00:00 4
# 15: 2012-11-02 14:00:00 15:00:00 4
# 16: 2012-11-02 15:00:00 16:00:00 1
# 17: 2012-11-02 16:00:00 17:00:00 NA
# 18: 2012-11-02 17:00:00 18:00:00 1
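Since the question asks for 0 rather than NA in the empty hours, one more line fixes that:
dt.out[is.na(N), N := 0L] # replace NA counts with 0, as requested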
Here's a simple solution using lubridate and factors:
library(lubridate)
# Create a single date time variable
dt <- ymd_hms(paste(data$Date, data$Time))
# Extract the day
data$day <- floor_date(dt, "day")
# Extract the hour, converting it into a factor, so we
# get all hours shown
data$hour <- factor(hour(dt), 9:18)
# Count up with table
as.data.frame(table(data[c("day", "hour")]))
