Exclude rows with certain time of day - r

I have a time series of continuous data measured at 10 minute intervals for a period of five months. For simplicity's sake, the data is available in two columns as follows:
Timestamp Temp.Diff
2/14/2011 19:00 -0.385
2/14/2011 19:10 -0.535
2/14/2011 19:20 -0.484
2/14/2011 19:30 -0.409
2/14/2011 19:40 -0.385
2/14/2011 19:50 -0.215
... And it goes on for the next five months. I have parsed the Timestamp column using as.POSIXct.
I want to select rows for certain times of the day (e.g. from 12 noon to 3 PM). I would like either to exclude the other hours of the day, or to extract just those 3 hours while keeping the data flowing sequentially (i.e. as a time series).

You seem to know the basic idea and are just missing the details. As you mentioned, we just transform the Timestamps into date-time (POSIX) objects and then subset.
lubridate Solution
The easiest way is probably with lubridate. First load the package:
library(lubridate)
Next convert the timestamp:
## mdy_hm: month, day, year _ hour, minute
d = mdy_hm(dd$Timestamp)
Then we select what we want. In this case, I want any times after 7:30 PM (regardless of the day):
dd[(hour(d) == 19 & minute(d) > 30) | hour(d) >= 20, ]
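The same pattern covers the noon-to-3 PM window from the question; a quick sketch reusing d from above (taking the window as 12:00 up to, but not including, 15:00):
dd[hour(d) >= 12 & hour(d) < 15, ]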
Base R solution
First create a cutoff time (we keep everything later in the day than this):
lower = strptime("2/14/2011 19:30","%m/%d/%Y %H:%M")
Next, transform the Timestamps into POSIXlt objects:
d = strptime(dd$Timestamp, "%m/%d/%Y %H:%M")
Finally, a bit of dataframe subsetting:
dd[format(d,"%H:%M") > format(lower,"%H:%M"),]
Thanks to plannapus for this last part
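For the noon-to-3 PM window from the question, the same string comparison works; a sketch (zero-padded HH:MM strings compare correctly in lexicographic order, and this version includes 15:00 itself):
hm = format(d, "%H:%M")
dd[hm >= "12:00" & hm <= "15:00", ]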
Data for the above example:
dd = read.table(textConnection('Timestamp Temp.Diff
"2/14/2011 19:00" -0.385
"2/14/2011 19:10" -0.535
"2/14/2011 19:20" -0.484
"2/14/2011 19:30" -0.409
"2/14/2011 19:40" -0.385
"2/14/2011 19:50" -0.215'), header=TRUE)

You can do this easily with the time-based subsetting in the xts package. Assuming your data.frame is named Data:
library(xts)
x <- xts(Data$Temp.Diff, Data$Timestamp)
y <- x["T12:00/T15:00"]
# you need the leading zero if the hour is a single digit
z <- x["T09:00/T12:00"]

Related

computing and formatting averages and squares of time intervals

I have a model which predicts the duration of certain events, and measures of durations for those events. I then want to compute the difference between Predicted and Measured, the mean difference and the RMSE. I'm able to do it, but the formatting is really awkward and not what I expected:
database <- data.frame(Predicted = c(strptime(c("4:00", "3:35", "3:38"), format = "%H:%M")),
                       Measured = c(strptime(c("3:39", "3:40", "3:53"), format = "%H:%M")))
> database
            Predicted            Measured
1 2016-11-28 04:00:00 2016-11-28 03:39:00
2 2016-11-28 03:35:00 2016-11-28 03:40:00
3 2016-11-28 03:38:00 2016-11-28 03:53:00
This is the first weirdness: why does R show me a time and a date, even though I clearly specified a time-only format (%H:%M) and there was no date in my data to start with? It gets weirder:
database$Error <- with(database, Predicted-Measured)
database$Mean_Error <- with(database, mean(Predicted-Measured))
database$RMSE <- with(database, sqrt(mean(as.numeric(Predicted-Measured)^2)))
> database
Predicted Measured Error Mean_Error RMSE
1 2016-11-28 04:00:00 2016-11-28 03:39:00 21 mins 0.3333333 15.17674
2 2016-11-28 03:35:00 2016-11-28 03:40:00 -5 mins 0.3333333 15.17674
3 2016-11-28 03:38:00 2016-11-28 03:53:00 -15 mins 0.3333333 15.17674
Why is the variable Error expressed in minutes? For Error it's not a bad choice, but it becomes quite hard to read for Mean_Error. For RMSE it's even worse, but this could be due to the as.numeric function: if I remove it, R complains that '^' not defined for "difftime" objects. My questions are:
Is it possible to show the first 2 columns (Predicted and Measured) in the %H:%M format?
For the other 3 columns (Error, Mean_Error and RMSE) I would like to compare a %M:%S format with a format in seconds only, and choose between the two. Is that possible?
EDIT: just to be more clear, my goal is to insert observations of time intervals into a dataframe and compute a vector of time interval differences. Then, compute some statistics for that vector: mean, RMSE, etc.. I know I could just enter the time observations in seconds, but that doesn't look very good: it's difficult to tell that 13200 seconds are 3 hours and 40 minutes. Thus I would like to be able to store the time intervals in the %H:%M, but then be able to manipulate them algebraically and show the results in a format of my choosing. Is that possible?
We can use difftime to specify the units for the difference in time. The output of difftime is an object of class difftime. When this difftime object is coerced to numeric using as.numeric, we can change these units (see the examples in ?difftime):
## Note we don't convert to date-time because we just want %H:%M
database <- data.frame(Predicted = c("4:00", "3:35", "3:38"),
Measured = c("3:39", "3:40", "3:53"))
## We now convert to date-time and use difftime to compute difference in minutes
database$Error <- with(database, difftime(strptime(Predicted,format="%H:%M"),strptime(Measured,format="%H:%M"), units="mins"))
## Use as.numeric to change units to seconds
database$Mean_Error <- with(database, mean(as.numeric(Error,units="secs")))
database$RMSE <- with(database, sqrt(mean(as.numeric(Error,units="secs")^2)))
## Predicted Measured Error Mean_Error RMSE
##1 4:00 3:39 21 mins 20 910.6042
##2 3:35 3:40 -5 mins 20 910.6042
##3 3:38 3:53 -15 mins 20 910.6042
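As for displaying the differences in a %M:%S style (part of the question, not covered by the output above), one option is a small sprintf() helper; a sketch, assuming you want a minus sign in front of negative intervals:
secs <- as.numeric(database$Error, units = "secs")
mins <- as.integer(abs(secs) %/% 60)
ss   <- as.integer(abs(secs) %% 60)
sprintf("%s%02d:%02d", ifelse(secs < 0, "-", ""), mins, ss)
## e.g. "21:00" "-05:00" "-15:00"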

Removing rows of data in R below a specified value

I was wondering if anybody could help...
I have a data frame which includes a continuous time column and I am trying to remove all rows below a specified time.
The data starts from approx. 11:29:00, but I want to remove all rows before the time 12:30:00 and after the time 14:20:00.
Since the data is recorded every second, deleting unnecessary rows will be a great help and make managing this data a whole lot easier for me so any help would be greatly appreciated.
This is the head of the data frame, as you can see the time is continuous in seconds. I would like to remove all these rows up to 12:30:00 within the GPS.Time column. Hope that makes sense.
Raw.Vel. Smooth.Vel. GPS.Time
1.486 0.755 11:39:39
1.425 1.167 11:39:40
1.466 1.398 11:39:41
1.533 1.552 11:39:42
1.517 1.594 11:39:43
1.918 1.556 11:39:44
Creating above data frame:
Raw.Vel. <- c(1.486,1.425, 1.466, 1.533, 1.517, 1.918)
Smooth.Vel. <- c(0.755, 1.167, 1.398, 1.552, 1.594, 1.556)
GPS.Time <- c("11:39:39", "11:39:40", "11:39:41", "11:39:42", "11:39:43", "11:39:44")
sample <- data.frame(Raw.Vel., Smooth.Vel., GPS.Time)
Thanks in advance.
Use the lubridate package to transform your string time column into some kind of time class:
library(lubridate)
sample$GPS.Time <- hms(sample$GPS.Time)
To achieve the required output, just use subsetting with brackets ([), with the condition you want. In your example, this removes all rows before 11:39:42 and keeps everything from that time onward:
output <- sample[sample$GPS.Time >= hms("11:39:42"),]
Turn the GPS.Time into a "POSIXct" object:
df$time <- as.POSIXct(df$GPS.Time, format="%H:%M:%S")
Then you can filter using logic:
filtered_df <- df[df$time >= as.POSIXct("12:30:00", format="%H:%M:%S") &
                  df$time <= as.POSIXct("14:20:00", format="%H:%M:%S"), ]
You can convert the entries in the GPS.Time column to character (it is originally a factor variable). After that you can split the data by comparing the times against a specified cutoff time, stored as a character string in the same format (HH:MM:SS):
sample$GPS.Time <- as.character(sample$GPS.Time)
cutoff_time <- "11:39:42" # modify as necessary
sample <- sample[sample$GPS.Time >= cutoff_time,] # keep rows at or after the cutoff (safer than -which(), which drops everything when no row is below the cutoff)
#> sample
# Raw.Vel. Smooth.Vel. GPS.Time
#4 1.533 1.552 11:39:42
#5 1.517 1.594 11:39:43
#6 1.918 1.556 11:39:44
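Applied to the full data set from the question (rather than the six-row sample), the same character comparison extracts the 12:30:00 to 14:20:00 window; a sketch:
window <- sample[sample$GPS.Time >= "12:30:00" & sample$GPS.Time <= "14:20:00", ]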

create an index for aggregating daily data to match periodic data

I have daily measurements prec.d and periodic measurements prec.p. The periodic measurements (3-12 days apart) are roughly the sum of the daily measurements between the start and end dates, and I need to compare prec in the two data frames. I have so far manually created an index week that represents the time span of each periodic measurement, but it would be great to make week in a reproducible fashion.
data.frame prec.d
day week prec
6/20/2013 1 0
6/21/2013 1 0
6/22/2013 1 0
6/23/2013 1 0
6/24/2013 1 41.402
6/25/2013 1 2.794
6/26/2013 1 6.096
6/27/2013 2 0.508
6/28/2013 2 0
6/29/2013 2 0
6/30/2013 2 2.54
7/1/2013 2 18.034
7/2/2013 2 4.064
And data.frame prec.p
start end week prec1 prec2 prec3
6/20/2013 6/26/2013 1 50.28 31.78042615 42.76461716
6/27/2013 7/2/2013 2 25.1 15.70964247 20.49507586
I would like to create the week field automatically, so that it spans from start to end in prec.p. Then I can aggregate by week so that prec in the two data frames can be compared.
Introduce a YYYYWW field in both the daily and the periodic data, where WW stands for the week number; that will give you a common index. For example:
x <- as.Date(runif(100)*100, origin = "2013-01-01") # an origin is required when converting numbers to dates
yyyyww <- strftime(x, format="%Y%U")
yyyyww
Or take a look at the quantmod package; if I remember correctly, it has functions for time-frame conversion.
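Since the periodic measurements are 3-12 days apart rather than calendar weeks, a hedged alternative is to build the index directly from the start dates in prec.p with base R's findInterval(); a sketch using the column names as posted, assuming the dates are in %m/%d/%Y format:
prec.p$start <- as.Date(prec.p$start, format = "%m/%d/%Y")
prec.d$day <- as.Date(prec.d$day, format = "%m/%d/%Y")
## each day is assigned to the interval whose start date is the latest one not after it
prec.d$week <- findInterval(as.numeric(prec.d$day), as.numeric(prec.p$start))
## daily sums per interval, to compare against the periodic measurements
aggregate(prec ~ week, data = prec.d, FUN = sum)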

Aggregating, restructuring hourly time series data in R

I have a year's worth of hourly data in a data frame in R:
> str(df.MHwind_load) # compactly displays structure of data frame
'data.frame': 8760 obs. of 6 variables:
$ Date : Factor w/ 365 levels "2010-04-01","2010-04-02",..: 1 1 1 1 1 1 1 1 1 1 ...
$ Time..HRs. : int 1 2 3 4 5 6 7 8 9 10 ...
$ Hour.of.Year : int 1 2 3 4 5 6 7 8 9 10 ...
$ Wind.MW : int 375 492 483 476 486 512 421 396 456 453 ...
$ MSEDCL.Demand: int 13293 13140 12806 12891 13113 13802 14186 14104 14117 14462 ...
$ Net.Load : int 12918 12648 12323 12415 12627 13290 13765 13708 13661 14009 ...
While preserving the hourly structure, I would like to know how to extract
a particular month/group of months
the first day/first week etc of each month
all mondays, all tuesdays etc of the year
I have tried using "cut" without result and after looking online think that "lubridate" might be able to do so but haven't found suitable examples. I'd greatly appreciate help on this issue.
Edit: a sample of data in the data frame is below:
Date Hour.of.Year Wind.MW datetime
1 2010-04-01 1 375 2010-04-01 00:00:00
2 2010-04-01 2 492 2010-04-01 01:00:00
3 2010-04-01 3 483 2010-04-01 02:00:00
4 2010-04-01 4 476 2010-04-01 03:00:00
5 2010-04-01 5 486 2010-04-01 04:00:00
6 2010-04-01 6 512 2010-04-01 05:00:00
7 2010-04-01 7 421 2010-04-01 06:00:00
8 2010-04-01 8 396 2010-04-01 07:00:00
9 2010-04-01 9 456 2010-04-01 08:00:00
10 2010-04-01 10 453 2010-04-01 09:00:00
.. .. ... .......... ........
8758 2011-03-31 8758 302 2011-03-31 21:00:00
8759 2011-03-31 8759 378 2011-03-31 22:00:00
8760 2011-03-31 8760 356 2011-03-31 23:00:00
EDIT: Additional time-based operations I would like to perform on the same dataset
1. Perform hour-by-hour averaging for all data points, i.e. the average of all values in the first hour of each day in the year, and so on. The output will be an "hourly profile" of the entire year (24 time points).
2. Perform the same for each week and each month, i.e. obtain 52 and 12 hourly profiles respectively.
3. Do seasonal averages, for example for June to September
Convert the date to the format which lubridate understands and then use the functions month, mday, wday respectively.
Suppose you have a data.frame with the time stored in column Date, then the answer for your questions would be:
###dummy data.frame
df <- data.frame(Date=c("2012-01-01","2012-02-15","2012-03-01","2012-04-01"),a=1:4)
##1. Select rows for particular month
subset(df,month(Date)==1)
##2a. Select the first day of each month
subset(df,mday(Date)==1)
##2b. Select the first week of each month
##get the week numbers which have the first day of the month
wkd <- subset(week(df$Date),mday(df$Date)==1)
##select the weeks with particular numbers
subset(df,week(Date) %in% wkd)
##3. Select all mondays
subset(df,wday(Date)==2) ## lubridate's wday() numbers Sunday as 1 by default, so Monday is 2
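If remembering that Monday is day 2 feels fragile, wday() can also return day labels; a small sketch (abbr = TRUE gives abbreviated names such as "Mon"):
subset(df, wday(Date, label = TRUE, abbr = TRUE) == "Mon")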
First switch to a Date representation: as.Date(df.MHwind_load$Date)
Then call weekdays on the date vector to get the day-of-week names.
Then call months on the date vector to get the month names.
Optionally create a years variable (see below).
Now subset the data frame using the relevant combination of these.
Step 2 gets an answer to your task 3. Steps 3 and 4 get you to task 1. Task 2 might require a line or two of R, or you can just select rows corresponding to, say, all the Mondays in a month and call unique (or its alter ego duplicated) on the results.
To get you going...
newdf <- df.MHwind_load ## build an augmented data set
newdf$d <- as.Date(newdf$Date)
newdf$month <- months(newdf$d)
newdf$day <- weekdays(newdf$d)
## for some reason R has no years function. Here's one
years <- function(x){ format(as.Date(x), format = "%Y") }
newdf$year <- years(newdf$d)
# get observations from January to March of every year
subset(newdf, month %in% c('January', 'February', 'March'))
# get all Monday observations
subset(newdf, day == 'Monday')
# get all Mondays in 1999
subset(newdf, day == 'Monday' & year == '1999')
# slightly fancier: _first_ Monday of each month
# flag the first occurrence of each (month, weekday) combination
first.week.of.month <- !duplicated(cbind(newdf$month, newdf$day))
# now pull out the Mondays
subset(newdf, first.week.of.month & day=='Monday')
Since you're not asking about the time (hourly) part of your data, it is best to store your data as a Date object. Otherwise, you might be interested in chron, which also has convenience functions like the ones you'll see below.
With respect to Conjugate Prior's answer, you should store your date data as a Date object. Since your data already follows the default format ('yyyy-mm-dd') you can just call as.Date on it. Otherwise, you would have to specify your string format. I would also use as.character on your factor to make sure you don't get errors down the line. I know I've run into problems converting factors to Dates for that reason (possibly corrected in the current version).
df.MHwind_load <- transform(df.MHwind_load, Date = as.Date(as.character(Date)))
Now you would do well to create wrapper functions that extract the information you desire. You could use transform like I did above to simply add those columns that represent months, days, years, etc, and then subset on them logically. Alternatively, you might do something like this:
getMonth <- function(x, mo) {    # This function assumes a vector within a single year
  isMonth <- month(x) %in% mo    # Boolean of matching months
  return(x[which(isMonth)])      # Return vector of matching months
}                                # end function
Or, in short form
getMonth <- function(x, mo) x[month(x) %in% mo]
This is just a tradeoff between storing that information (transform frame) or having it processed when desired (use accessor methods).
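A usage sketch of the accessor above (month() comes from lubridate, so its output is numeric; 6:9 corresponds to the June-September season mentioned in the EDIT):
getMonth(df.MHwind_load$Date, 6:9)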
A more complicated process is your need for, say, the first day of a month. This is not entirely difficult, though. Below is a function that will return all of those values, but it is rather simple to just subset a sorted vector of values for a given month and take their first one.
getFirstDay <- function(x, mo) {
  isMonth <- months(x) %in% mo
  x <- sort(x[isMonth])                  # Look at only those in the desired month.
                                         # Sort them by date. We only want the first day.
  nFirsts <- rle(as.numeric(x))$len[1]   # Returns length of 1st days
  return(x[seq(nFirsts)])
}                                        # end function
The easier alternative would be
getFirstDayOnly <- function(x, mo) {sort(x[months(x) %in% mo])[1]}
I haven't prototyped these, as you didn't provide any data samples, but this is the sort of approach that can help you get the information you desire. It is up to you to figure out how to put these into your work flow. For instance, say you want to get the first day for each month of a given year (assuming we're only looking at one year; you can create wrappers or pre-process your vector to a single year beforehand).
# Return a vector of first days for each month
df <- transform(df, date = as.Date(as.character(date)))
sapply(unique(months(df$date)),          # Iterate through months in Dates
       function(month) { getFirstDayOnly(df$date, month) })
The above could also be designed as a separate convenience function that uses the other accessor function. In this way, you create a series of direct but concise methods for getting the pieces of information you want. Then you simply pull them together into simple, easy-to-interpret functions that you can use in your scripts to get precisely what you need in an efficient manner.
You should be able to use the above examples to figure out how to prototype other wrappers for accessing the date information you require. If you need help on those, feel free to ask in a comment.
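For the hour-by-hour averaging added in the question's EDIT (a 24-point hourly profile of the whole year), a minimal sketch along the same lines, assuming the datetime column shown in the question's sample is of class POSIXct:
newdf$hour <- as.POSIXlt(newdf$datetime)$hour   # 0 to 23
hourly.profile <- aggregate(Wind.MW ~ hour, data = newdf, FUN = mean)
## 24 rows: the average Wind.MW for each hour of the day across the year; the same
## aggregate() call grouped by newdf$month instead gives the monthly profiles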

Assigning week numbers in a time series to obtain weekly average price

Let's say I have a time series of daily data (business days), and I would like to organize the data by business weeks (Monday-Friday), in a similar fashion to this webpage from the EIA on crude oil futures prices:
http://www.eia.gov/dnav/pet/hist/LeafHandler.ashx?n=PET&s=RCLC1&f=D
As you can see the prices are nicely organized by weeks in this webpage.
Is there any function in R that could organize the data in a similar fashion?
You can obtain the data in .xls format at:
http://www.eia.gov/dnav/pet/hist_xls/RCLC1d.xls
What I would like to do is assign a week number to each daily observation, something like this (look at the weeks column):
Date Price weeks day
1983-04-04 29.44 1 Monday
1983-04-05 29.71 1 Tuesday
1983-04-06 29.92 1 Wednesday
1983-04-07 30.17 1 Thursday
1983-04-08 30.38 1 Friday
1983-04-11 30.26 2 Monday
...
...
So far I have used the week function of the lubridate package, but it is not working well. It seems that once a year hits its 53rd week, the function fails to start the week numbering of the following year properly.
I have been trying to stay away from rep- or seq-divided-by-5-or-7 kinds of solutions, since there may be observations that I need to filter from the data later on. I would prefer a solution that doesn't depend on the particular vector of my data but is more general, i.e. one that relies on a date class such as POSIXct, xts or zoo.
Any hints would be greatly appreciated.
Wouldn't this work?:
as.POSIXlt()$yday %/% 7
I realize that it does have part of what you wanted to avoid, but it does draw its starting point from a recognized class. For your data (noting that I read it in with colClasses=c("Date", "numeric", "numeric", "character")):
> 1 + as.POSIXlt(dat$Date)$yday %/% 7
[1] 14 14 14 14 14 15
If you want to replicate those interval labels, try adding multiples of 7 to any Monday and Friday:
paste(as.Date(strptime("1983 Apr- 4",format="%Y %b- %d"))+(39)*7,
" to ",
as.Date(strptime("1983 Apr- 8",format="%Y %b- %d"))+(39)*7,
sep="")
#[1] "1984-01-02 to 1984-01-06" # The first new year change
paste(as.Date(strptime("1983 Apr- 4",format="%Y %b- %d"))+(39+52)*7,
" to ",
as.Date(strptime("1983 Apr- 8",format="%Y %b- %d"))+(39+52)*7,
sep="")
#[1] "1984-12-31 to 1985-01-04" # The second new year change
Here's a function that will accept an integer vector:
from8Apr83dts <- function(numwks) {
  paste(as.Date(strptime("1983 Apr- 4", format="%Y %b- %d")) + (numwks)*7,
        " to ",
        as.Date(strptime("1983 Apr- 8", format="%Y %b- %d")) + (numwks)*7,
        sep="")
}
# Usage
from8Apr83dts(39:40)
#[1] "1984-01-02 to 1984-01-06" "1984-01-09 to 1984-01-13"
