write_csv is changing all times/dates to UTC

I have found a very annoying problem that I want to share with the community. This is a question that I have found an acceptable solution for (detailed below), but I now have several follow-up questions. My knowledge of time stamps and POSIX variables is limited, particularly how plyr, dplyr, and readr handle these.
When working with POSIX variables (i.e., date and time stamps), I found that write_csv from readr changed these variables into UTC time.
I am downloading data from an API and preserving the time stamp. Each time I grab data, I bind it to an existing file and save the file. My timezone is MDT, and I am requesting data using MDT time, which I am then trying to bind to a file in UTC time, and the times don't match... it gets messy and frustrating. In essence, the beautiful time stamp database I am trying to create is turning into a pile of garbage.
To remedy this problem, I converted the POSIX time column to a character column using:
df.time <- as.character(df.time)
This allowed me to save the files in a time zone consistent with the time stamps being returned to me by the API.
This leads me to the following series of questions:
Is there a program that can join POSIX variables across time zones? For instance, if it's noon MDT, it's 6 pm UTC. Could I join two data frames based on these time stamps without having to convert them to the same time zone first?
Is it possible to prevent write_csv from changing POSIX variables to UTC?
Is there a csv write function that doesn't change POSIX variables?
EDIT: I have included some example data of what I am talking about:
> library(jsonlite)   # provides fromJSON()
> library(anytime)    # provides anytime()
> df1 <- as.data.frame(fromJSON("https://api.pro.coinbase.com/products/BTC-USD/candles?start=2018-07-23&12:57:00?stop=2018-07-23&19:34:58granularity=300"))
> colnames(df1) <- c("time", "low", "high", "open", "close", "volume")
> df1$time <- anytime(df1$time)
> df1Sort <- df1[order(df1$time),]
> head(df1Sort, 5)
                   time     low    high    open   close    volume
299 2018-07-23 16:13:00 7747.00 7747.01 7747.01 7747.01 9.2029168
298 2018-07-23 16:14:00 7743.17 7747.01 7747.00 7747.01 7.0205668
297 2018-07-23 16:15:00 7745.47 7745.73 7745.67 7745.73 0.9075707
296 2018-07-23 16:16:00 7745.72 7745.73 7745.72 7745.73 4.6715157
295 2018-07-23 16:17:00 7745.72 7745.73 7745.72 7745.72 2.4921921
> write_csv(df1Sort, "df1Sort.csv", col_names = TRUE)
> df2 <- read_csv("df1Sort.csv", col_names = TRUE)
Parsed with column specification:
cols(
time = col_datetime(format = ""),
low = col_double(),
high = col_double(),
open = col_double(),
close = col_double(),
volume = col_double()
)
> head(df2, 5)
# A tibble: 5 x 6
  time                  low  high  open close volume
  <dttm>              <dbl> <dbl> <dbl> <dbl>  <dbl>
1 2018-07-23 22:13:00  7747  7747  7747  7747  9.20
2 2018-07-23 22:14:00  7743  7747  7747  7747  7.02
3 2018-07-23 22:15:00  7745  7746  7746  7746  0.908
4 2018-07-23 22:16:00  7746  7746  7746  7746  4.67
5 2018-07-23 22:17:00  7746  7746  7746  7746  2.49

"Is there a program that can join POSIX variables across time zones... without having to convert them to the same time zone first?"
Maybe? But if so, it's almost certainly just converting to UTC under the hood and hiding it from you. I'm unaware of anything like this in R. (data.table is the only package I'm aware of that can join on anything other than exact equality, and it doesn't have this feature.) If I were you, I'd just convert everything to one timezone - probably UTC.
For more reading on best practices, this SQL-focused answer seems very good.
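That said, POSIXct values are stored as seconds since the UNIX epoch, so equality comparisons and joins on them already match the same instant regardless of the tzone attribute; a minimal sketch (the data frames and the dplyr join here are just illustrative):
library(dplyr)
# the same instant, carried with two different display time zones
mdt <- data.frame(time = as.POSIXct("2018-07-23 12:00:00", tz = "America/Denver"), x = 1)
utc <- data.frame(time = as.POSIXct("2018-07-23 18:00:00", tz = "UTC"), y = 2)
mdt$time == utc$time
# [1] TRUE
inner_join(mdt, utc, by = "time")   # one matched row; the result's display tz may differ from the inputs'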
"Is it possible to prevent write_csv from changing POSIX variables to UTC?"
Not built-in. The ?write_csv documentation is pretty clear: It doesn't list any options for this and does say "POSIXct's are formatted as ISO8601."
"Is there a csv write function that doesn't change POSIX variables?"
Sure, the built-in write.csv doesn't change to UTC (I think it uses system settings), and data.table::fwrite offers quite a few options. If you want to control how your dates are saved, I think your best bet is to convert them to character in whatever format you want, and then any of the writing functions should handle them just fine. You should check out the ?data.table::fwrite documentation; it's got good info. It warns that the dateTimeAs = "write.csv" option can be quite slow.
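For example, to freeze the local clock times before writing with write_csv (a sketch assuming the df1Sort data above and "America/Denver" as the Olson name for your MDT zone):
df1Sort$time <- format(df1Sort$time, "%Y-%m-%d %H:%M:%S", tz = "America/Denver")
write_csv(df1Sort, "df1Sort.csv", col_names = TRUE)
The character column is then written exactly as formatted, with no UTC conversion.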
You should include reproducible examples with your questions. Here's one for this:
t = as.POSIXct("2018-01-01 01:30:00", tz = "Africa/Addis_Ababa")
t
# [1] "2018-01-01 01:30:00 EAT"
d = data.frame(t)
library(readr)
write_csv(d, "tz_test.csv")
system("head tz_test.csv")
# t
# 2017-12-31T22:30:00Z
library(data.table)
fwrite(d, "tz_test_dt.csv", dateTimeAs = "write.csv")
system("head tz_test_dt.csv")
# t
# 2018-01-01 01:30:00
write.csv(d, "tz_test_base.csv")
system("head tz_test_base.csv")
# "","t"
# "1",2018-01-01 01:30:00

It looks like you're using libraries from the tidyverse; have you had a look at the lubridate library?
The help file for as_date() may help you here convert a date-time variable to your desired timezone before you append/join your data.
For example:
> dt_utc <- ymd_hms("2010-08-03 00:50:50")
> dt_utc
[1] "2010-08-03 00:50:50 UTC"
> as_datetime(dt_utc, tz = "Australia/Melbourne")
[1] "2010-08-03 10:50:50 AEST"

Related

best practices for avoiding roundoff gotchas in date manipulation

I am doing some date/time manipulation and experiencing explicable, but unpleasant, round-tripping problems when converting date -> time -> date. I have temporarily overcome this problem by rounding at appropriate points, but I wonder if there are best practices for date handling that would be cleaner. I'm using a mix of base-R and lubridate functions.
tl;dr: is there a good, simple way to convert from decimal date (YYYY.fff) to the Date class (and back) without going through POSIXt and incurring round-off (and potentially time-zone) complications?
Start with a few days from 1918, as separate year/month/day columns (not a critical part of my problem, but it's where my pipeline happens to start):
library(lubridate)
dd <- data.frame(year=1918,month=9,day=1:12)
Convert year/month/day -> date -> time:
dd <- transform(dd,
                time = decimal_date(make_date(year, month, day)))
The successive differences in the resulting time vector are not exactly 1 because of roundoff: this is understandable but leads to problems down the road.
table(diff(dd$time)*365)
## 0.999999999985448 1.00000000006844
## 9 2
Now suppose I convert back to a date: the dates are slightly before or after midnight (off by <1 second in either direction):
d2 <- lubridate::date_decimal(dd$time)
# [1] "1918-09-01 00:00:00 UTC" "1918-09-02 00:00:00 UTC"
# [3] "1918-09-03 00:00:00 UTC" "1918-09-03 23:59:59 UTC"
# [5] "1918-09-04 23:59:59 UTC" "1918-09-05 23:59:59 UTC"
# [7] "1918-09-07 00:00:00 UTC" "1918-09-08 00:00:00 UTC"
# [9] "1918-09-09 00:00:00 UTC" "1918-09-09 23:59:59 UTC"
# [11] "1918-09-10 23:59:59 UTC" "1918-09-12 00:00:00 UTC"
If I now want dates (rather than POSIXct objects) I can use as.Date(), but to my dismay as.Date() truncates rather than rounding ...
tt <- as.Date(d2)
## [1] "1918-09-01" "1918-09-02" "1918-09-03" "1918-09-03" "1918-09-04"
## [6] "1918-09-05" "1918-09-07" "1918-09-08" "1918-09-09" "1918-09-09"
##[11] "1918-09-10" "1918-09-12"
So the differences are now 0/1/2 days:
table(diff(tt))
# 0 1 2
# 2 7 2
I can fix this by rounding first:
table(diff(as.Date(round(d2))))
## 1
## 11
but I wonder if there is a better way (e.g. keeping POSIXct out of my pipeline and staying with dates ...)
As suggested by this R-help desk article from 2004 by Grothendieck and Petzoldt:
When considering which class to use, always choose the least complex class that will support the application. That is, use Date if possible, otherwise use chron and otherwise use the POSIX classes. Such a strategy will greatly reduce the potential for error and increase the reliability of your application.
The extensive table in this article shows how to translate among Date, chron, and POSIXct, but doesn't include decimal time as one of the candidates ...
It seems like it would be best to avoid converting back from decimal time if at all possible.
When converting from date to decimal date, one also needs to account for time. Since Date does not have a specific time associated with it, decimal_date inherently assumes it to be 00:00:00.
However, if we are concerned only with the date (and not the time), we could assume the time to be anything. Arguably, middle of the day (12:00:00) is as good as the beginning of the day (00:00:00). This would make the conversion back to Date more reliable as we are not at the midnight mark and a few seconds off does not affect the output. One of the ways to do this would be to add 12*60*60/(365*24*60*60) to dd$time
dd$time2 = dd$time + 12*60*60/(365*24*60*60)
data.frame(dd[1:3],
           "00:00:00" = as.Date(date_decimal(dd$time)),
           "12:00:00" = as.Date(date_decimal(dd$time2)),
           check.names = FALSE)
# year month day 00:00:00 12:00:00
#1 1918 9 1 1918-09-01 1918-09-01
#2 1918 9 2 1918-09-02 1918-09-02
#3 1918 9 3 1918-09-03 1918-09-03
#4 1918 9 4 1918-09-03 1918-09-04
#5 1918 9 5 1918-09-04 1918-09-05
#6 1918 9 6 1918-09-05 1918-09-06
#7 1918 9 7 1918-09-07 1918-09-07
#8 1918 9 8 1918-09-08 1918-09-08
#9 1918 9 9 1918-09-09 1918-09-09
#10 1918 9 10 1918-09-09 1918-09-10
#11 1918 9 11 1918-09-10 1918-09-11
#12 1918 9 12 1918-09-12 1918-09-12
It should be noted, however, that the value of decimal time obtained in this way will be different.
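The size of that offset is easy to check: it is half a day expressed as a fraction of a (non-leap) year.
12*60*60/(365*24*60*60)
# [1] 0.001369863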
lubridate::decimal_date() is returning a numeric. If I understand you correctly, the question is how to convert that numeric into Date and have it round appropriately without bouncing through POSIXct.
as.Date(1L, origin = '1970-01-01') shows us that we can provide as.Date with days since some specified origin and convert immediately to the Date type. Knowing this, we can skip the year part entirely and set it as origin. Then we can convert our decimal dates to days:
as.Date((dd$time-trunc(dd$time)) * 365, origin = "1918-01-01").
So, a function like this might do the trick (at least for years without leap days):
date_decimal2 <- function(decimal_date) {
  years <- trunc(decimal_date)
  origins <- paste0(years, "-01-01")
  # c.f. https://stackoverflow.com/questions/14449166/dates-with-lapply-and-sapply
  do.call(c, mapply(as.Date.numeric, x = (decimal_date - years) * 365,
                    origin = origins, SIMPLIFY = FALSE))
}
Side note: I admit I went down a bit of a rabbit hole trying to move origin around to deal with the pre-1970 dates. I found that the further origin shifted from the target date, the weirder the results got (and not in ways that seemed to be easily explained by leap days). Since origin is flexible, I decided to target it right on top of the target values. For leap days, seconds, and whatever other weirdness time has in store for us, on your own head be it. =)

Remove time from XTS

I am trying to compare different timeseries, by day.
Currently a typical XTS object looks like:
> vwap.crs
QUANTITY QUANTITY.1
2014-03-03 13:00:00 3423.500 200000
2014-03-04 17:00:00 3459.941 4010106
2014-03-05 16:00:00 3510.794 1971234
2014-03-06 17:00:00 3510.582 185822
Now, I can strip the time out of the index as follows:
> round(index(vwap.crs),"day")
[1] "2014-03-04" "2014-03-05" "2014-03-06" "2014-03-07"
My question is, how do I replace the existing index in variable vwap.crs, with the rounded output above?
EDIT: to.daily fixed it
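That call would presumably be something along the lines of the following; OHLC = FALSE keeps the existing columns rather than building open/high/low/close bars:
vwap.daily <- to.daily(vwap.crs, OHLC = FALSE)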
This should do it
indexClass(vwap.crs) <- "Date"
Also, take a look at the code in xts:::.drop.time
You could also do it the way you're trying to do it if you use index<-
index(vwap.crs) <- round(index(vwap.crs),"day")

Deseasonalize a "zoo" object containing intraday data

Using the zoo package (and help from SO) I have created a time series from the following:
z <- read.zoo("D:\\Futures Data\\BNVol3.csv", sep = ",", header = TRUE, index = 1:2,
tz="", format = "%d/%m/%Y %H:%M")
This holds data in the following format (intra-day from 07:00 to 20:50):
2012-10-01 14:50:00 2012-10-01 15:00:00 2012-10-01 15:10:00 2012-10-01 15:20:00
8638 9014 9402 9505
I want to "deseasonalize" the intra-day component of this data so that 1 day is considered a complete seasonal cycle. (I am using the day component because not all days will run from 07.00 to 20.50 due to bank holidays etc, but running from 07.00 to 20.50 is usually the standard. I assume that if i used the 84 intra-day points as 1 seasonal cycle then as some point the deseasonalizing will begin to get thrown off track)
I have tried to use the decompose method but this has not worked.
x <- Decompose(z)
Not sure "zoo" and decompose method are compatible but I thought "zoo" and "ts" were designed to be. Is there another way to do this?
Thanks in advance for any help.
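One possible direction, sketched under the assumption that the series is univariate and strictly regular (84 ten-minute observations per day): rebuild the values as a ts with frequency 84, let decompose() (or stl() with s.window = "periodic") estimate the intraday component, and subtract it.
library(zoo)
z_ts <- ts(coredata(z), frequency = 84)   # 84 ten-minute bars = one seasonal cycle
dec <- decompose(z_ts)
z_deseason <- zoo(as.numeric(z_ts) - as.numeric(dec$seasonal), order.by = index(z))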

Exclude rows with certain time of day

I have a time series of continuous data measured at 10 minute intervals for a period of five months. For simplicity's sake, the data is available in two columns as follows:
Timestamp Temp.Diff
2/14/2011 19:00 -0.385
2/14/2011 19:10 -0.535
2/14/2011 19:20 -0.484
2/14/2011 19:30 -0.409
2/14/2011 19:40 -0.385
2/14/2011 19:50 -0.215
... And it goes on for the next five months. I have parsed the Timestamp column using as.POSIXct.
I want to select rows with certain times of the day (e.g. from 12 noon to 3 PM). I would either like to exclude the other hours of the day, OR just extract those 3 hours but still have the data flow sequentially (i.e. in a time series).
You seem to know the basic idea, but are just missing the details. As you mentioned, we just transform the Timestamps into POSIX objects then subset.
lubridate Solution
The easiest way is probably with lubridate. First load the package:
library(lubridate)
Next convert the timestamp:
## mdy_hm = month day year _ hour minute
d = mdy_hm(dd$Timestamp)
Then we select what we want. In this case, I want any dates after 7:30pm (regardless of day):
dd[hour(d) == 19 & minute(d) > 30 | hour(d) >= 20,]
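For the noon-to-3 PM window asked about, the same idea would be something like:
dd[hour(d) >= 12 & hour(d) < 15, ]   # 12:00 up to (but not including) 15:00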
Base R solution
First create a lower limit:
lower = strptime("2/14/2011 19:30","%m/%d/%Y %H:%M")
Next transform the Timestamps in POSIX objects:
d = strptime(dd$Timestamp, "%m/%d/%Y %H:%M")
Finally, a bit of dataframe subsetting:
dd[format(d,"%H:%M") > format(lower,"%H:%M"),]
Thanks to plannapus for this last part
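For the noon-to-3 PM window, the same comparison works with two bounds, since zero-padded "%H:%M" strings sort chronologically:
dd[format(d, "%H:%M") >= "12:00" & format(d, "%H:%M") < "15:00", ]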
Data for the above example:
dd = read.table(textConnection('Timestamp Temp.Diff
"2/14/2011 19:00" -0.385
"2/14/2011 19:10" -0.535
"2/14/2011 19:20" -0.484
"2/14/2011 19:30" -0.409
"2/14/2011 19:40" -0.385
"2/14/2011 19:50" -0.215'), header=TRUE)
You can do this easily with the time-based subsetting in the xts package. Assuming your data.frame is named Data:
library(xts)
x <- xts(Data$Temp.Diff, Data$Timestamp)
y <- x["T12:00/T15:00"]
# you need the leading zero if the hour is a single digit
z <- x["T09:00/T12:00"]

Importing timeSeries from csv file into R with correct dates

I have looked all over the internet to find an answer to my problem and failed.
I am using R with the Rmetrics package.
I tried reading my own dataset.csv via the readSeries function, but sadly the dates I entered are not imported correctly; now every row has the current date.
I tried using their sample data sets, exported them to csv and re-imported them, and it creates the same problem.
You can test it using this code:
data <- head(SWX.RET[,1:3])
write.csv(data, file="myData.csv")
data2 <- readSeries(file="myData.csv",header=T,sep=",")
If you now check the data2 time series you will notice that the row date is the current date.
I am confused why this is and what to do to fix it.
Your help is much appreciated!
This can be achieved with the extra option row.names=FALSE to the write.csv() function; see its help page for details. Here is a worked example:
R> fakeData <- data.frame(date=Sys.Date()+seq(-7,-1), value=runif(7))
R> fakeData
date value
1 2011-02-14 0.261088
2 2011-02-15 0.514413
3 2011-02-16 0.675607
4 2011-02-17 0.982817
5 2011-02-18 0.759544
6 2011-02-19 0.566488
7 2011-02-20 0.849690
R> write.csv(fakeData, "/tmp/fakeDate.csv", row.names=FALSE, quote=FALSE)
R> readSeries("/tmp/fakeDate.csv", header=TRUE, sep=",")
GMT
value
2011-02-14 0.261088
2011-02-15 0.514413
2011-02-16 0.675607
2011-02-17 0.982817
2011-02-18 0.759544
2011-02-19 0.566488
2011-02-20 0.849690
R>
