I'm still trying to better understand how mlply works. Here is a simplified version of my dataset:
days <- list(c(as.POSIXct("2010-08-29 00:00:00 EDT"), as.POSIXct("2010-08-30 00:00:00 EDT")))
day2 <- list(c(as.POSIXct("2010-07-22 00:00:00 EDT"), as.POSIXct("2010-07-23 00:00:00 EDT"), as.POSIXct("2010-07-24 00:00:00 EDT")))
days <- append(day2, days)
arrivals <- data.frame(date=as.POSIXct("2010-08-29 21:00:00 EDT"), size=72)
arrivals <- rbind(arrivals, c("2010-07-22 17:30:00 EDT",84))
Using mapply with pmax to pick the pairwise maximum between days and arrivals$date, I get the following:
starting <- mapply(function(x,y){pmax(x,y)},days,arrivals$date)
starting[[1]]
"2010-08-29 21:00:00 EDT" "2010-08-29 21:00:00 EDT" "2010-08-29 21:00:00 EDT"
I'm sure the next version using mlply is not equivalent, and the error is obviously mine, but I'm not quite sure why the output differs.
starts <- mlply( cbind(arrivals$date,days), function(date,days){pmax(date,days)})
as.POSIXct(starts[[1]], origin='1970-1-1')
[1] "2010-08-30 02:00:00 EDT" "2010-08-30 02:00:00 EDT" "2010-08-30 02:00:00 EDT"
Ideally, I'm looking for how to rewrite the mapply statement using mlply.
Thanks in advance,
--JT
Compare
> starts[[1]]
[1] 1283112000 1283112000 1283112000
> as.numeric(starting[[1]])
[1] 1283112000 1283112000 1283112000
>
POSIXct times are stored relative to UTC/GMT; you appear to be 5 hours ahead in your example. This is an output issue: internally the two results are the same times. Further comment is difficult, as it would depend on what OS you are running, and it could be affected by your locale settings, etc.
Also, "as.POSIXct gives an unexpected timezone" suggests there may be an issue with as.POSIXct.Date, but I'm not sure if this is still an issue.
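A sketch of a workaround (untested against your exact data): the cbind() step strips the POSIXct class, so when converting the numeric result back, pin the origin to GMT explicitly; that way the epoch is not shifted by your local offset:
as.POSIXct(starts[[1]], origin = as.POSIXct("1970-01-01", tz = "GMT"))
# the same instants as starting[[1]], displayed in your local time zone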
Related
I am trying to parse strings of the form 25.10.2013 17:30 (the timezone is CET/CEST but this is not specified in the strings themselves) as POSIXct using lubridate's dmy_hm(..., tz = 'Europe/Brussels') function.
I have the problem that after parsing, there are duplicate values on the day CEST switches to CET (the clock jumps back one hour). The cause seems to be the way this shift is indicated in my data: 02A:00 for 2 o'clock CEST and 02B:00 for 2 o'clock CET, which is one hour later. dmy_hm(..., tz = 'Europe/Brussels') interprets both as CET.
Minimal working example:
> library(lubridate)
> times = c("27.10.2013 01:00", "27.10.2013 02A:00",
"27.10.2013 02B:00", "27.10.2013 03:00")
> times = dmy_hm(times, tz = "Europe/Brussels")
> times
[1] "2013-10-27 01:00:00 CEST" "2013-10-27 02:00:00 CET"
[3] "2013-10-27 02:00:00 CET" "2013-10-27 03:00:00 CET"
My question is: What would be the best way to fix the "wrong" dates?
I tried to use which(duplicated(times)) to find the indices of the duplicate values and remove one hour from the "wrong" values, however there seems to be another problem:
> times[2] - hours(1)
[1] "2013-10-27 01:00:00 CEST"
Why does removing one hour from "2013-10-27 02:00:00 CET" bring me to "2013-10-27 01:00:00 CEST"? Isn't that a jump of two hours? I would expect to land at "2013-10-27 02:00:00 CEST".
EDIT: The last part is a known issue (see https://github.com/tidyverse/lubridate/issues/498). The solution is to use dhours() instead of hours():
> times[2] - dhours(1)
[1] "2013-10-27 02:00:00 CEST"
My day starts at 2016-03-02 00:00:00. Not 2016-03-02 00:00:01.
How do I get the beginning of the day in POSIXct in local time?
My confusion probably comes from the fact that R sees this as the end date of 2016-03-01, given that R uses ISO 8601?
For example if I try to find the beginning of the day using Sys.Date():
as.POSIXct(Sys.Date(), tz = "CET")
"2016-03-01 01:00:00 CET"
Which is not correct - but are there other ways?
I know I can hack my way out using a simple
as.POSIXct(paste(Sys.Date(), "00:00:00", sep = " "), tz = "CET")
But there has to be a more correct way to do this? Base R preferred.
It's a single command, but you want as.POSIXlt():
R> as.POSIXlt(Sys.Date())
[1] "2016-03-02 UTC"
R> format(as.POSIXlt(Sys.Date()), "%Y-%m-%d %H:%M:%S")
[1] "2016-03-02 00:00:00"
R>
It is only when the conversion to POSIXct happens that the timezone offset to UTC (six hours for me) enters:
R> as.POSIXct(Sys.Date())
[1] "2016-03-01 18:00:00 CST"
R>
Needless to say by wrapping both you get the desired type and value:
R> as.POSIXct(as.POSIXlt(Sys.Date()))
[1] "2016-03-02 UTC"
R>
Filed under: once again, no need for lubridate or other non-Base R packages.
Notwithstanding that you understandably prefer base R, a "smart way," for a certain meaning of "smart," would be:
library(lubridate)
x <- floor_date(Sys.Date(),"day")
> format(x,"%Y-%m-%d-%H-%M-%S")
[1] "2016-03-02-00-00-00"
From ?floor_date:
floor_date takes a date-time object and rounds it down to the nearest
integer value of the specified time unit.
Pretty handy.
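Note that floor_date(Sys.Date(), "day") returns a Date; if you want POSIXct local midnight in one step, floor_date() also accepts a POSIXct input. A small sketch (output assumes a CET machine):
floor_date(Sys.time(), "day")
# [1] "2016-03-02 CET"    (local midnight; the 00:00:00 is not printed)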
Your example is a bit unclear.
You are talking about a 1 second difference for the day start, but your example shows a 1 hour difference due to the timezone.
You can try
?POSIXct
to get the functionality explained.
Using Sys.Date() within as.POSIXct somehow overrides the timezone you pass:
as.POSIXct(Sys.Date(), tz="EET")
"2016-03-01 01:00:00 CET"
While entering a string gives you
as.POSIXct("2016-03-01 00:00:00", tz="EET")
"2016-03-01 EET"
It looks like 00:00:00 is actually the beginning of the day. You can conclude this from the results of the following two inequalities:
as.POSIXct("2016-03-02 00:00:02 CET")>as.POSIXct("2016-03-02 00:00:01 CET")
TRUE
as.POSIXct("2016-03-02 00:00:01 CET")>as.POSIXct("2016-03-02 00:00:00 CET")
TRUE
So somehow this is a timezone issue. Notice that 00:00:00 is automatically dropped from the printed as.POSIXct result.
as.POSIXct("2016-03-02 00:00:00 CET")
"2016-03-02 CET"
Let's say we have the date "2014-05-11 14:45:00 UTC". I would like to get the exact POSIXct object for one year before, so "2013-05-11 14:45:00 UTC".
My first thought is to create a whole new POSIXct object by subtracting one from the year part, pasting it together with the remainder of the string, and then creating a new POSIXct object from that string, like so:
time <- as.POSIXct("2014-05-11 14:45:00 UTC",tz="UTC",origin="1970-01-01")
newTime <- as.POSIXct(paste(as.character(as.numeric(substr(time,1,4)) - 1),substr(time,5,19),sep=""),tz="UTC",origin="1970-01-01")
This works fine (except in the case of leap years!), but the thing is I need to do this for each row of a large data.table, and preferably put the results right back into the data.table.
Is there any other way of subtracting a year off an object like this?
Some extra detail: I need to apply this to a data.table like this one:
Time
1: 1349206200
2: 1349207100
3: 1349208000
4: 1349208900
5: 1349209800
6: 1349210700
7: 1349211600
8: 1349212500
9: 1349213400
10: 1349214300
11: 1349215200
but this happens when I do:
SOdata[,Time:=as.numeric(as.POSIXct(paste(as.character(as.numeric(substr(Time,1,4)) - 1),substr(Time,5,19),sep=""),tz="UTC",origin="1970-01-01"))]
Error in as.POSIXlt.character(x, tz, ...) :
character string is not in a standard unambiguous format
I am guessing I need to use something like lapply, but I always mess up the syntax when using that function. So does anyone know how?
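Side note on the error: the Time column holds numeric epoch seconds, so substr() operates on the digits of the number rather than on a formatted date string.
substr(1349206200, 1, 4)
# [1] "1349"   # not a year, hence "not in a standard unambiguous format"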
lubridate is your friend.
library(lubridate)
time <- as.POSIXct("2014-05-11 14:45:00 UTC",tz="UTC",origin="1970-01-01")
time-dyears(1)
#[1] "2013-05-11 14:45:00 UTC"
time+dyears(1)
#[1] "2015-05-11 14:45:00 UTC"
For leap years
> x <- as.POSIXct(c("2012-02-28", "2012-02-29"), tz="UTC",origin="1970-01-01")
> x - dyears(1)
[1] "2011-02-28 UTC" "2011-03-01 UTC"
I haven't tested the other answers, but the following should work as required regardless of leap years:
time <- as.POSIXct("2014-05-11 14:45:00 UTC",tz="UTC",origin="1970-01-01")
time <- as.POSIXlt(time)
time$year <- time$year - 1
time <- as.POSIXct(time)
#[1] "2013-05-11 14:45:00 UTC"
With Gabor's leap year example:
time <- as.POSIXct("2012-02-29 14:45:00 UTC",tz="UTC",origin="1970-01-01")
time <- as.POSIXlt(time)
time$year <- time$year - 1
time <- as.POSIXct(time)
#[1] "2011-03-01 14:45:00 UTC"
seq in base can be used:
LastYr <- function(x) seq(x, length = 2, by = "-1 year")[2]
toPOSIXct <- function(x) as.POSIXct(x, origin = "1970-01-01")
# example 1
LastYr(as.POSIXct("2012-02-28"))
## [1] "2011-02-28 EST"
# example 2 - leap year
LastYr(as.POSIXct("2012-02-29"))
## [1] "2011-03-01 EST"
# example 3 - vector case
x <- as.POSIXct(c("2012-02-28", "2012-02-29")) # test data
toPOSIXct(sapply(x, LastYr))
## [1] "2011-02-28 EST" "2011-03-01 EST"
# example 4 - data.table shown in question
DT[, Time := sapply(toPOSIXct(Time), LastYr)]
Revised: simplified using the functions LastYr and toPOSIXct.
Or you can try, in base R:
> time + as.difftime(52*7+1,units="days")
[1] "2015-05-11 14:45:00 UTC"
> time - as.difftime(52*7+1,units="days")
[1] "2013-05-11 14:45:00 UTC"
Of course, it would be easier if units could be years... (52*7+1 = 365 days, so this is only exact when the span does not contain a leap day).
When I pass a single date to be parsed, it parses accurately:
> ymd("20011001")
[1] "2001-10-01 UTC"
But when I try to create a vector of dates they all come out one day off:
> b=c(ymd("20111001"),ymd("20101001"),ymd("20091001"),ymd("20081001"),ymd("20071001"),ymd("20061001"),ymd("20051001"),ymd("20041001"),ymd("20031001"),ymd("20021001"),ymd("20011001"))
> b
[1] "2011-09-30 19:00:00 CDT" "2010-09-30 19:00:00 CDT" "2009-09-30 19:00:00 CDT"
[4] "2008-09-30 19:00:00 CDT" "2007-09-30 19:00:00 CDT" "2006-09-30 19:00:00 CDT"
[7] "2005-09-30 19:00:00 CDT" "2004-09-30 19:00:00 CDT" "2003-09-30 19:00:00 CDT"
[10] "2002-09-30 19:00:00 CDT" "2001-09-30 19:00:00 CDT"
How can I fix this? Many thanks.
I don't claim to understand exactly what's going on here, but the proximal problem is that c() strips attributes: using c() on a POSIX[c?]t vector drops the time zone attribute, so the result is displayed in the time zone specified by your locale, messing it up (even if you set the time zone to agree with the one specified by your locale). On my system:
library(lubridate)
(y1 <- ymd("20011001"))
## [1] "2001-10-01 UTC"
(y2 <- ymd("20011002"))
c(y1,y2)
## now in EDT (and a day earlier/4 hours before UTC):
## [1] "2001-09-30 20:00:00 EDT" "2001-10-01 20:00:00 EDT"
(y12 <- ymd(c("20011001","20011002")))
## [1] "2001-10-01 UTC" "2001-10-02 UTC"
c(y12)
## back in EDT
## [1] "2001-09-30 20:00:00 EDT" "2001-10-01 20:00:00 EDT"
You can set the time zone explicitly ...
y3 <- ymd("20011001",tz="EDT")
## [1] "2001-10-01 EDT"
But c() is still problematic.
(y3c <- c(y3))
## [1] "2001-09-30 20:00:00 EDT"
So two solutions are
convert a character vector rather than combining the objects after converting them one by one or
restore the tzone attribute after combining.
For example:
attr(y3c,"tzone") <- attr(y3,"tzone")
@Joran points out that this is almost certainly a general property of applying c() to POSIX[c?]t objects, not specifically lubridate-related. I hope someone will chime in and explain whether this is a well-known design decision/infelicity/misfeature.
Update: there is some discussion of this on R-help in 2012, and Brian Ripley comments:
But in any case, the documentation (?c.POSIXct) is clear:
Using ‘c’ on ‘"POSIXlt"’ objects converts them to the current time
zone, and on ‘"POSIXct"’ objects drops any ‘"tzone"’ attributes
(even if they are all marked with the same time zone).
So the recommended way is to add a "tzone" attribute if you know what
you want it to be. POSIXct objects are absolute times: the timezone
merely affects how they are converted (including to character for
printing).
It might be nice if lubridate added a method to do this ...
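In the meantime, a sketch of a small helper (hypothetical, not part of lubridate or base R) that combines POSIXct values and then restores the tzone attribute of the first argument, for R versions where c() drops it:
c_tz <- function(...) {
  xs <- list(...)
  out <- do.call(c, xs)                     # c() may drop the tzone attribute
  attr(out, "tzone") <- attr(xs[[1]], "tzone")
  out
}
c_tz(ymd("20011001"), ymd("20011002"))
# [1] "2001-10-01 UTC" "2001-10-02 UTC"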
From a data frame with timestamped rows (strptime results), what is the best method for aggregating statistics for intervals?
Intervals could be an hour, a day, etc.
There's the aggregate function, but that doesn't help with assigning each row to an interval. I'm planning on adding a column to the data frame that denotes interval and using that with aggregate, but if there's a better solution it'd be great to hear it.
Thanks for any pointers!
Example Data
Five rows with timestamps divided into 15-minute intervals starting at 03:00.
Interval 1
"2010-01-13 03:02:38 UTC"
"2010-01-13 03:08:14 UTC"
"2010-01-13 03:14:52 UTC"
Interval 2
"2010-01-13 03:20:42 UTC"
"2010-01-13 03:22:19 UTC"
Conclusion
Using a time series package such as xts should be the solution; however, I had no success with them and wound up using cut. As I presently only need to plot histograms, with rows grouped by interval, this was enough.
cut is used like so:
interv <- function(x, start, period, num.intervals) {
return(cut(x, as.POSIXlt(start)+0:num.intervals*period))
}
Standard functions to split vectors are cut and findInterval:
v <- as.POSIXct(c(
"2010-01-13 03:02:38 UTC",
"2010-01-13 03:08:14 UTC",
"2010-01-13 03:14:52 UTC",
"2010-01-13 03:20:42 UTC",
"2010-01-13 03:22:19 UTC"
))
# Your function returns a list:
interv(v, as.POSIXlt("2010-01-13 03:00:00 UTC"), 900, 2)
# [[1]]
# [1] "2010-01-13 03:00:00"
# [[2]]
# [1] "2010-01-13 03:00:00"
# [[3]]
# [1] "2010-01-13 03:00:00"
# [[4]]
# [1] "2010-01-13 03:15:00 CET"
# [[5]]
# [1] "2010-01-13 03:15:00 CET"
# cut returns a factor; you must provide the proper breaks:
cut(v, as.POSIXlt("2010-01-13 03:00:00 UTC")+0:2*900)
# [1] 2010-01-13 03:00:00 2010-01-13 03:00:00 2010-01-13 03:00:00
# [4] 2010-01-13 03:15:00 2010-01-13 03:15:00
# Levels: 2010-01-13 03:00:00 2010-01-13 03:15:00
# findInterval returns a vector of interval ids (breaks as in cut)
findInterval(v, as.POSIXlt("2010-01-13 03:00:00 UTC")+0:2*900)
# [1] 1 1 1 2 2
For the record: cut has a method for the POSIXt type, but unfortunately there is no way to provide a start argument; the effect is:
cut(v,"15 min")
# [1] 2010-01-13 03:02:00 2010-01-13 03:02:00 2010-01-13 03:02:00
# [4] 2010-01-13 03:17:00 2010-01-13 03:17:00
# Levels: 2010-01-13 03:02:00 2010-01-13 03:17:00
As you can see, it starts at 03:02:00. You could mess with the labels of the output factor (convert the labels to times, round them somehow, and convert back to character).
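Once each timestamp has an interval label, aggregate() can compute per-interval statistics. A small sketch using the cut() breaks from above (here just counting rows per interval):
grp <- cut(v, as.POSIXlt("2010-01-13 03:00:00 UTC") + 0:2 * 900)
aggregate(seq_along(v), by = list(interval = grp), FUN = length)
#              interval x
# 1 2010-01-13 03:00:00 3
# 2 2010-01-13 03:15:00 2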
Use a time series package. The xts package has functions designed specifically to do that. Or look at the aggregate and rollapply functions in the zoo package.
The rmetrics ebook has a useful discussion, including a performance comparison of the various packages: https://www.rmetrics.org/files/freepdf/TimeSeriesFAQ.pdf
Edit: Look at my answer to this question. Basically you need to truncate every timestamp into a specific interval and then do the aggregation using those new truncated timestamps as your grouping vector.
This is an interesting question; with the proliferation of the various time series packages and methods, there ought to be an approach for binning irregular time series other than the brute force the OP suggests. Here is one "high-level" way to get the intervals that you can then use for aggregate et al., using a version of cut defined for chron objects.
require(chron)
require(timeSeries)
my.times <- "
2010-01-13 03:02:38 UTC
2010-01-13 03:08:14 UTC
2010-01-13 03:14:52 UTC
2010-01-13 03:20:42 UTC
2010-01-13 03:22:19 UTC
"
time.df <- read.delim(textConnection(my.times),header=FALSE,sep="\n",strip.white=FALSE)
time.seq <- seq(trunc(timeDate(time.df[1,1]),units="hours"),by=15*60,length=nrow(time.df))
intervals <- as.numeric(cut(as.chron(as.character(time.df$V1)),breaks=as.chron(as.character(time.seq))))
You get
intervals
[1] 1 1 1 2 2
which you can now append to the data frame and aggregate.
The coercion acrobatics above (from character to timeDate to character to chron) are a little unfortunate, so if there are cleaner solutions for binning irregular time data using xts or any of the other time series packages, I'd love to hear about them as well!
I am also curious to know what would be the most efficient approach for binning large high-frequency irregular time series, e.g. creating 1-minute volume bars on tick data for a very liquid stock.
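In that vein, a sketch of how 1-minute volume bars might be built with xts, on made-up tick data (endpoints() finds the last observation in each minute, and period.apply() sums the volume between them):
library(xts)
set.seed(1)
# hypothetical ticks: 20 trades with random volumes over a 5-minute window
ticks <- xts(sample(1:100, 20, replace = TRUE),
             order.by = as.POSIXct("2010-01-13 03:00:00", tz = "UTC") +
                        sort(runif(20, 0, 300)))
period.apply(ticks, endpoints(ticks, "minutes"), sum)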