Resampling time series with xts and zoo packages in R

I am trying to resample a dataset that has a 5-minute temporal resolution (source). To get a 30-minute resampled temporal resolution, I've tried:
# load xts (which also loads zoo) for endpoints() and period.apply()
library(xts)
# Date and Time together as one POSIXct column
SRI_2010$Date_Time <- paste(SRI_2010$Date, SRI_2010$Time, sep = " ")
SRI_2010$Date_Time <- as.POSIXct(SRI_2010$Date_Time, format = "%d/%m/%Y %H:%M")
# Creating the zoo object
SRI_2010.zoo <- zoo(SRI_2010, as.POSIXct(SRI_2010$Date_Time))
# Criteria for the resampling: 30-minute endpoints
ends2010 <- endpoints(SRI_2010.zoo, 'minutes', 30)
SRI_30m_2010 <- period.apply(SRI_2010.zoo$SRI..W.m2., ends2010, mean)
At first I was quite satisfied because the code ran, but after double-checking I realised it calculates the mean values at minutes 25 and 55 instead of at minutes 00 and 30, which are the ones I am interested in.
Example:
> SRI_30m_2010
2010-07-28 04:55:00 2010-07-28 05:25:00
3.80000000 12.06666667
2010-07-28 05:55:00 2010-07-28 06:25:00
19.73333333 28.46666667
2010-07-28 06:55:00 2010-07-28 07:25:00
40.30000000 61.60000000
This small issue is super annoying when I aim to combine different datasets with different temporal resolutions into a common one. Does anyone know how I could sort this out?

The "issue" is that endpoints is doing what it was designed to do. It's returning the last timestamp of each period. I recommend you use align.time to move the index timestamp forward to the minutes you're interested in.
s <- align.time(as.xts(SRI_30m_2010), 60 * 30)  # round each index up to the next 30-minute mark
It's also not much of an issue if you're trying to combine multiple series with different resolutions into a single xts object. You can simply merge them all, use na.locf or similar to fill in missing values, then extract the resolution you're interested in. I believe the xts FAQ shows how to do this, and I know I've demonstrated it more than a couple of times in my other answers on Stack Overflow.
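For example, a minimal sketch of that merge-and-fill approach (the series names, index values, and random data below are purely illustrative):
library(xts)
# two toy series on different time grids
idx5 <- seq(as.POSIXct("2010-07-28 00:00:00"), by = "5 min", length.out = 12)
idx30 <- seq(as.POSIXct("2010-07-28 00:00:00"), by = "30 min", length.out = 2)
x_5min <- xts(rnorm(12), idx5)
x_30min <- xts(rnorm(2), idx30)
m <- merge(x_5min, x_30min)        # union of both indexes
m <- na.locf(m)                    # carry the coarser series forward
m[endpoints(m, "minutes", 30)]     # keep only the 30-minute resolution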

Related

Correct imputation for a zooreg object?

My objective is to impute NAs in a zooreg time series object. The pattern of the time series is cyclic. My code is:
# load required libraries
library(zoo)
# create a sequence every 15 minutes from 1st Jan to 20th Jan, 2018
timeStamp <- seq.POSIXt(from = as.POSIXct('2018-01-01 00:00:00', tz = "UTC"), to = as.POSIXct('2018-01-20 23:45:00', tz = "UTC"), by = "15 min")
# data which increases from 12 am to 12 pm, then decreases till 12 am of the next day, for 20 days
readings <- rep(c(seq(1, 48, 1), seq(48, 1, -1)), 20)
dF <- data.frame(timeStamp = timeStamp, readings = readings)
# create a regular zooreg object; the frequency is 1 day (4 readings/hour * 24 hours = 96)
readingsZooReg <- zooreg(dF$readings, order.by = dF$timeStamp, frequency = 4 * 24)
plot(readingsZooReg)
# force some data to be NAs
window(readingsZooReg, start = as.POSIXct("2018-01-14 00:00:00", tz="UTC"), end = as.POSIXct("2018-01-16 23:45:00", tz="UTC")) <- NA
plot(readingsZooReg)
# plot imputed values
plot(na.approx(readingsZooReg))
The plots show the full time series, the series with NAs added, and the imputed time series.
I'm purposely using zoo here, since the time series I work on are irregular (e.g. solar, oil wells, etc.).
1) Is my usage of "zooreg" correct, or would a "zoo" object suffice?
2) Is my frequency variable right?
3) Why doesn't na.approx work as expected? I've also tried na.StructTS, but the R script hangs.
4) Is there a solution using any other package (xts, ts, etc.)?
Your current example time series is a regular time series.
(An irregular time series would have varying time distances between observations.)
E.g.:
10:00:10, 10:00:20, 10:00:30, 10:00:40, 10:00:50 (regularly spaced)
10:00:10, 10:00:17, 10:00:33, 10:00:37, 10:00:50 (irregularly spaced)
If you really need to handle irregularly spaced time series, zoo is your go-to package. Otherwise you can also use other time series classes such as xts and ts.
About the frequency:
You usually set the frequency of a time series to the number of observations after which you expect the pattern to repeat (in your example this would be 96). In real life this is often 1 day, 1 week, 1 month, ... but it can also be something different, like 1.5 days (e.g. if you have daily recurring patterns and 1-minute observations, you would set the frequency to 1440).
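To make that concrete, here are a few frequency values spelled out as simple arithmetic (the variable names are only illustrative):
freq_15min_daily <- 4 * 24     # 15-minute data with a daily pattern -> 96
freq_1min_daily <- 60 * 24     # 1-minute data with a daily pattern -> 1440
freq_daily_weekly <- 7         # daily data with a weekly pattern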
na.approx from zoo works perfectly; it is doing exactly what it is supposed to. A linear interpolation between a point of 0 before the gap and a point of 0 after the gap gives a straight line at 0. Of course that is probably not the result you expected, because it does not account for seasonality. That is why G. Grothendieck suggested na.StructTS as the method to choose (it is usually better at accounting for seasonality).
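A tiny illustration of that behaviour (toy values, not the poster's data):
library(zoo)
x <- zoo(c(5, 3, 0, NA, NA, NA, 0, 3, 5))
na.approx(x)
# the gap sits between two zeros, so linear interpolation fills it with 0, 0, 0
# -- a flat line that ignores the cyclic pattern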
The best choice, if you are not bound to zoo, would in this specific case be na_seadec from the imputeTS package (a package solely dedicated to time series imputation).
I have added an example, with nice plots, from the imputeTS package:
library(imputeTS)
# convert to a plain "ts" object with the daily frequency of 96
yourTS <- ts(coredata(readingsZooReg), frequency = 96)
# show where the NAs are, impute with seasonal decomposition, then plot the imputations
ggplot_na_distribution(yourTS)
imputedTS <- na_seadec(yourTS)
ggplot_na_imputations(yourTS, imputedTS)
Usually imputeTS also works perfectly with zoo time series as input. I only changed it to ts here because something about your zoo object seems odd... that is also why na.StructTS from zoo itself breaks. Maybe somebody with better knowledge can help out here.
Beware: if you really do have an irregularly spaced time series, do not use imputation functions from packages other than zoo, because they all assume the data to be regularly spaced and will give results accordingly.

R - Datetimes with ggplot

What is the correct way to deal with datetimes in ggplot?
I have data on several different dates, and I would like to facet each date by the same time of day, e.g. between 1:30 PM and 1:35 PM, and plot the points that fall within this time frame. How can I achieve this?
My data looks like:
datetime col1
2015-01-02 00:00:01 20
... ...
2015-01-02 11:59:59 34
2015-02-19 00:00:03 12
... ...
2015-02-19 11:59:58 27
I often find myself wanting to plot time series with ggplot using datetime objects on the x-axis, but I don't know how to work with just the times when the dates aren't of interest.
The lubridate package will do the trick. There are commands you could use, specifically floor_date or ceiling_date, to transform your datetime column.
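A rough sketch of that idea, assuming a data frame df with the datetime and col1 columns from the sample above (the 13:30-13:35 window is just for illustration):
library(ggplot2)
library(lubridate)
# split the timestamp into a date (for faceting) and seconds since midnight
df$date <- as.Date(df$datetime)
df$secs <- as.numeric(difftime(df$datetime, floor_date(df$datetime, "day"), units = "secs"))
# keep only 13:30-13:35 and draw one panel per calendar date
slice_1330 <- df[df$secs >= 13.5 * 3600 & df$secs <= 13.5 * 3600 + 300, ]
ggplot(slice_1330, aes(x = secs, y = col1)) +
  geom_point() +
  facet_wrap(~ date)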
I always use the chron package for times. It completely disregards dates and stores your time numerically as a fraction of the day (e.g. 1:30 PM is stored as 0.5625, because it is 13.5 hours into a 24-hour day). That allows you to perform math on times, which is great for a lot of reasons, including calculating the average time, the time between two points, etc.
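For instance, a quick illustration of how chron times behave (the values here are invented):
library(chron)
t1 <- times("13:30:00")
as.numeric(t1)           # 0.5625 -- times are stored as a fraction of the day
t1 - times("09:15:00")   # 04:15:00 -- arithmetic on times of day works directly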
For specific help with your plot you'll need to share a sample data frame in an easily copy-able format, and show the code you've tried so far.
This is a question I'd asked previously regarding the chron package, and it also gives an idea of how to share your data/ask a question that's easier for folks to reproduce and therefore answer:
Clear labeling of times class data on horizontal barplot/geom_segment

Time series analysis applicability?

I have a sample data frame like this (date column format is mm-dd-YYYY):
date count grp
01-09-2009 54 1
01-09-2009 100 2
01-09-2009 546 3
01-10-2009 67 4
01-11-2009 80 5
01-11-2009 45 6
I want to convert this data frame into a time series using ts(), but the problem is that the current data frame has multiple values for the same date. Can we apply a time series in this case?
Can I convert the data frame into a time series and build a model (ARIMA) that can forecast the count value on a daily basis?
Or should I forecast the count value based on grp? In that case I would have to select only the grp and count columns, skipping the date column, so a daily forecast of the count value would not be possible.
Suppose I want to aggregate the count value on a per-day basis. I tried the aggregate function, but there I have to specify the date values, and I have a very large data set. Is there any other option available in R?
Can somebody please suggest whether there is a better approach to follow? My assumption is that time series forecasting works only for bivariate data. Is this assumption right?
It seems like there are two aspects of your problem:
I want to convert this data frame into a time series using ts(), but the problem is that the current data frame has multiple values for the same date. Can we apply a time series in this case?
If you are happy making use of the xts package you could attempt:
dta2$date <- as.Date(dta2$date, "%m-%d-%Y")  # mm-dd-YYYY, as stated above
dtaXTS <- xts::as.xts(dta2[, 2:3], dta2$date)
which would result in:
>> head(dtaXTS)
           count grp
2009-01-09    54   1
2009-01-09   100   2
2009-01-09   546   3
2009-01-10    67   4
2009-01-11    80   5
2009-01-11    45   6
of the following classes:
>> class(dtaXTS)
[1] "xts" "zoo"
You could then use your time series object as a univariate time series, by referring to the selected variable, or as a multivariate time series; for example, using the PerformanceAnalytics package:
PerformanceAnalytics::chart.TimeSeries(dtaXTS)
Side points
Concerning your second question:
Can somebody please suggest whether there is a better approach to follow? My assumption is that time series forecasting works only for bivariate data. Is this assumption right?
IMHO, this is rather broad. I would suggest that you use the created xts object and elaborate on the model you want to use and why; if it's a conceptual question about the nature of time series analysis, you may prefer to post your follow-up question on Cross Validated.
Data sourced via: dta2 <- read.delim(pipe("pbpaste"), sep = "") using the provided example.
Since daily forecasts are wanted, we need to aggregate to daily data. Using DF from the Note at the end, read the first two columns of data into a zoo series z using read.zoo with the argument aggregate = sum. We could optionally convert that to a "ts" series (tser <- as.ts(z)), although this is unnecessary for many forecasting functions; in particular, the source code of auto.arima shows that it runs x <- as.ts(x) on its input before further processing. Finally, run auto.arima, forecast, or another forecasting function.
library(forecast)
library(zoo)
z <- read.zoo(DF[1:2], format = "%m-%d-%Y", aggregate = sum)
auto.arima(z)
forecast(z)
Note: DF is given reproducibly here:
Lines <- "date count grp
01-09-2009 54 1
01-09-2009 100 2
01-09-2009 546 3
01-10-2009 67 4
01-11-2009 80 5
01-11-2009 45 6"
DF <- read.table(text = Lines, header = TRUE)
Updated: Revised after re-reading question.

Minute-wise time series forecasting?

I've been working with R for the last week or so; this website has helped a lot in understanding the basics.
I am doing a minute-wise forecast for my company.
The data looks something like this:
REFEE ENTRY_DATE
1.00 01-01-2011 00:00:00
2.00 01-01-2011 00:01:00
3.00 01-01-2011 00:02:00
4.00 01-01-2011 00:03:00
5.00 01-01-2011 00:04:00
6.00 01-01-2011 00:05:00
7.00 01-01-2011 00:06:00
8.00 01-01-2011 00:07:00
9.00 01-01-2011 00:08:00
10.00 01-01-2011 00:09:00
...... and so on for four years, till 2014.
That's roughly more than 133921*12 samples. I have tried all the forecasting functions I could find: HoltWinters(), forecast(), and the other forecasting methods...
The problem is that the application hangs every time I try these functions; doesn't R support this much data for forecasting?
Is there any other package that can help me get a forecast for such an enormous amount of data?
This actually is quite a lot of data, at least for R. You could look at ets() in the forecast package. I like recommending this free online forecasting textbook from the same authors.
You could of course think about your data. Do you actually expect dynamics that can only be seen at this level, e.g., sub-hourly patterns? Do you actually need your forecasts on a minute-by-minute basis, e.g., for operational decisions? (From what I know, even short-term electricity forecasting is done in 15-minute buckets, and if you are actually into high-frequency trading, you'd likely have even shorter time periods.)
If yes, you should probably look into specific methods that can actually model multiple types of seasonality. Electricity load forecasting may be a good point to start, since these people do deal with multiple overlaid seasonal patterns.
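If you do go that route, one option is msts() plus tbats() from the forecast package, which handle several seasonal periods at once. A sketch (y stands in for your minute-level numeric vector; fitting this to millions of points will still be slow):
library(forecast)
# daily and weekly cycles for 1-minute data: 1440 and 10080 observations
y_msts <- msts(y, seasonal.periods = c(60 * 24, 60 * 24 * 7))
fit <- tbats(y_msts)
fc <- forecast(fit, h = 60)   # forecast the next hour, minute by minute
plot(fc)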
If no, you could think about aggregating your data, say to days, then forecasting the aggregates and disaggregating afterwards, e.g., using historical proportions of minutes within days. This would at least make forecasting less of a data problem.
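A rough sketch of that aggregate-then-forecast route (the object name minute_data, the dd-mm-yyyy timestamp format, and summing to daily totals are all assumptions based on the sample above):
library(forecast)
library(zoo)
# parse the timestamps and sum the minute values up to daily totals
minute_data$ENTRY_DATE <- as.POSIXct(minute_data$ENTRY_DATE, format = "%d-%m-%Y %H:%M:%S")
daily <- aggregate(REFEE ~ as.Date(ENTRY_DATE), data = minute_data, FUN = sum)
names(daily) <- c("date", "REFEE")
# forecast at the daily level; disaggregate afterwards if minute values are needed
z <- zoo(daily$REFEE, order.by = daily$date)
fit <- auto.arima(z)
forecast(fit, h = 14)   # two weeks of daily forecasts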
For large data sets I would recommend using predict() from base R as opposed to forecast(). While forecast() provides more information (predict() only provides the forecast and standard errors), benchmarking the two functions with rbenchmark suggests predict() is much faster.
Additionally, forecast() drops the century from the dates of its forecasted ts object, which is annoying...
As Stephan Kosla stated, having such granular data may be an issue. A speed-up could be found by taking a daily/weekly/monthly average of your data before performing the forecast. You can do this using one of the apply functions, lubridate, and a bit of ingenuity. I've shown an example below of how I would do this:
library(lubridate)
# Create a data frame from the AirPassengers dataset (from base R)
df <- data.frame(data = as.vector(AirPassengers),
                 date = as.Date(time(AirPassengers)),
                 year = year(as.Date(time(AirPassengers))))
# Split by year, take the mean of each year, then map the means back onto the rows
average.by.year <- unsplit(lapply(split(df$data, df$year), mean),
                           df$year)

Data aggregation loop in R

I am facing a problem with aggregating my data to daily values.
I have a data frame from which NAs have been removed (a link to a picture of the data is given below). Data has been collected 3 times a day, but sometimes, due to the removed NAs, there are just 1 or 2 entries per day; on some days the data is missing completely.
I am now interested in calculating the daily mean of "dist": that means summing up the "dist" values of one day and dividing by the number of entries for that day (3 if no data is missing). I would like to do this via a loop.
How can I do this with a loop? The problem is that sometimes I have 3 entries per day and sometimes just 2 or even 1. I would like to tell R that, for every day, it should sum up "dist" and divide it by the number of entries available for that day.
I just have no idea how to formulate a for loop for this purpose. I would really appreciate any advice on this problem. Thanks for your efforts and kind regards,
Jan
Data frame: http://www.pic-upload.de/view-11435581/Data_loop.jpg.html
Edit: I used aggregate and tapply as suggested; however, the mean was calculated per timestamp rather than per day:
Group.1 x
1 2006-10-06 12:00:00 636.5395
2 2006-10-06 20:00:00 859.0109
3 2006-10-07 04:00:00 301.8548
4 2006-10-07 12:00:00 649.3357
5 2006-10-07 20:00:00 944.8272
6 2006-10-08 04:00:00 136.7393
7 2006-10-08 12:00:00 360.9560
8 2006-10-08 20:00:00 NaN
The code used was:
dates <- Dis_sub$date
distance <- Dis_sub$dist
aggregate(distance, list(dates), mean, na.rm = TRUE)
tapply(distance, dates, mean, na.rm = TRUE)
Don't use a loop. Use R. Some example data:
dates <- rep(seq(as.Date("2001-01-05"), as.Date("2001-01-20"), by = "day"),
             each = 3)
values <- rep(1:16, each = 3)
values[c(4, 5, 6, 10, 14, 15, 30)] <- NA
and any of:
aggregate(values, list(dates), mean, na.rm = TRUE)
tapply(values, dates, mean, na.rm = TRUE)
gives you what you want. See also ?aggregate and ?tapply.
If you want a data frame back, you can look at the plyr package:
Data <- data.frame(dates, values)
require(plyr)
ddply(Data, "dates", summarise, avg = mean(values, na.rm = TRUE))
Keep in mind that ddply is not fully supporting the date format (yet).
Look at the data.table package, especially if your data is huge. Here is some code that calculates the mean of dist by day:
library(data.table)
dt <- data.table(Data)
dt[, list(avg_dist = mean(dist, na.rm = TRUE)), by = 'date']
It looks like your main problem is that your date field has times attached. The first thing you need to do is create a column that has just the date, using something like:
Dis_sub$date_only <- as.Date(Dis_sub$date)
Then using Joris Meys' solution (which is the right way to do it) should work.
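Putting the two together, a minimal sketch using the column names from the question's code:
Dis_sub$date_only <- as.Date(Dis_sub$date)
aggregate(dist ~ date_only, data = Dis_sub, FUN = mean, na.rm = TRUE)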
However, if for some reason you really want to use a loop, you could try something like:
newFrame <- data.frame()
for (d in unique(Dis_sub$date)) {
  meanDist <- mean(Dis_sub$dist[Dis_sub$date == d], na.rm = TRUE)
  newFrame <- rbind(newFrame, data.frame(date = d, meanDist = meanDist))
}
But keep in mind that this will be slow and memory-inefficient.
