I have the following data:
library(data.table); library(lubridate)
dat <- data.table(t=as.POSIXct(c("2014-09-26 01:01:00","2014-09-26 02:01:00","2014-09-26 03:01:00"), tz="CET"))
> dat
t
1: 2014-09-26 01:01:00
2: 2014-09-26 02:01:00
3: 2014-09-26 03:01:00
and I would like to subtract 180 days. Because of the DST change, the result using lubridate's days() is
> dat$t - days(180)
[1] "2014-03-30 01:01:00 CET" NA "2014-03-30 03:01:00 CEST"
and I wonder whether there is a way of subtracting days that accounts for DST changes.
Subtract the number of seconds in 180 days:
dat$t - 180*60*24*60
[1] "2014-03-30 00:01:00 CET" "2014-03-30 01:01:00 CET" "2014-03-30 03:01:00 CEST"
Related
Say I have a POSIXct vector like
timestamps = seq(as.POSIXct("2021-01-23"), as.POSIXct("2021-01-24"), length.out = 6)
I would like to round these times up to the nearest hour of the day in a vector:
hours_of_day = c(6, 14, 20)
i.e., the following result:
timestamps result
1 2021-01-23 00:00:00 2021-01-23 02:00:00
2 2021-01-23 04:48:00 2021-01-23 14:00:00
3 2021-01-23 09:36:00 2021-01-23 14:00:00
4 2021-01-23 14:24:00 2021-01-23 20:00:00
5 2021-01-23 19:12:00 2021-01-23 20:00:00
6 2021-01-24 00:00:00 2021-01-24 02:00:00
Is there a vectorized solution to this (or otherwise fast)? I have a few million timestamps and need to apply it for several hours_of_day.
One way to simplify this problem is to (1) find the next hours_of_day for each lubridate::hour(timestamps) and then (2) result = lubridate::floor_date(timestamps, "day") + next_hour_of_day * 3600. But how can step 1 be done in a vectorized way?
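For step 1, one vectorized possibility is findInterval() (a minimal sketch, not from the original post; it assumes hours_of_day is sorted increasing and ignores DST shifts):
# index of the next hour_of_day for each timestamp: findInterval() with
# left.open = TRUE counts the slots strictly below the decimal hour, so +1
# points at the first slot >= it; an index past the end means "first slot of the next day"
dec_hour <- lubridate::hour(timestamps) + lubridate::minute(timestamps) / 60
idx <- findInterval(dec_hour, hours_of_day, left.open = TRUE) + 1
wrap <- idx > length(hours_of_day)
next_hour_of_day <- hours_of_day[ifelse(wrap, 1, idx)]
result <- lubridate::floor_date(timestamps, "day") + next_hour_of_day * 3600 + wrap * 86400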
Convert to POSIXlt with as.POSIXlt(), which lets you extract hours, minutes and seconds and compute decimal hours. In an lapply/sapply combination, first check where these are less than or equal to the hours_of_day vector and pick the first matching hour with which.max. Then build a new date-time with ISOdate and add one day wherever that date-time is smaller than the original time.
timestamps <- as.POSIXlt(timestamps)
# decimal hour of each timestamp, compared against hours_of_day;
# which.max picks the first hour_of_day that is >= that decimal hour
h <- hours_of_day[sapply(lapply(with(timestamps, hour + min/60 + sec/3600),
                                `<=`, hours_of_day), which.max)]
# rebuild the date-time at that hour, then push to the next day where needed
r <- with(timestamps, ISOdate(1900 + year, mon + 1, mday, h,
                              tz = attr(timestamps, "tzone")[[1]]))
r[r < timestamps] <- r[r < timestamps] + 86400
Result
r
# [1] "2021-01-23 06:00:00 CET" "2021-01-23 06:00:00 CET"
# [3] "2021-01-23 14:00:00 CET" "2021-01-23 20:00:00 CET"
# [5] "2021-01-23 20:00:00 CET" "2021-01-24 06:00:00 CET"
# [7] "2021-01-25 06:00:00 CET" "2021-01-27 20:00:00 CET"
data.frame(timestamps, r)
# timestamps r
# 1 2021-01-23 00:00:00 2021-01-23 06:00:00
# 2 2021-01-23 04:48:00 2021-01-23 06:00:00
# 3 2021-01-23 09:36:00 2021-01-23 14:00:00
# 4 2021-01-23 14:24:00 2021-01-23 20:00:00
# 5 2021-01-23 19:12:00 2021-01-23 20:00:00
# 6 2021-01-24 00:00:00 2021-01-24 06:00:00
# 7 2021-01-24 23:59:00 2021-01-25 06:00:00
# 8 2021-01-27 20:00:00 2021-01-27 20:00:00
Note: I've added "2021-01-24 23:59:00 CET" and "2021-01-27 20:00:00 CET" to timestamps to demonstrate the date change.
Benchmark
Tested on a length 1.4e6 vector.
# Unit: seconds
# expr min lq mean median uq max neval cld
# POSIX() 32.96197 33.06495 33.32104 33.16793 33.50057 33.83321 3 a
# lubridate() 47.36412 47.57762 47.75280 47.79113 47.94715 48.10316 3 b
Data:
timestamps <- structure(c(1611356400, 1611373680, 1611390960, 1611408240, 1611425520,
1611442800, 1611529140, 1611774000), class = c("POSIXct", "POSIXt"
))
hours_of_day <- c(6, 14, 20)
I would extract the hour component, use cut to bin it, and assign the binned hours back to the original:
hours_of_day = c(2, 14, 20)
library(lubridate)
library(magrittr) ## just for the pipe
new_hours = timestamps %>%
  hour %>%
  cut(breaks = c(0, hours_of_day), labels = hours_of_day, include.lowest = TRUE) %>%
  as.character() %>%
  as.integer()
result = floor_date(timestamps, "hour")
hour(result) = new_hours
result
# [1] "2021-01-23 02:00:00 EST" "2021-01-23 14:00:00 EST" "2021-01-23 14:00:00 EST"
# [4] "2021-01-23 14:00:00 EST" "2021-01-23 20:00:00 EST" "2021-01-24 02:00:00 EST"
Building on the approach by @jay.sf, I made a floor version of the function as well, while adding support for NA values.
floor_date_to = function(timestamps, hours_of_day) {
  # Handle NA with a temporary filler so the code below doesn't break
  na_timestamps = is.na(timestamps)
  timestamps[na_timestamps] = as.POSIXct("9999-12-31")
  # Proceed as usual
  timestamps = as.POSIXlt(timestamps)
  hours_of_day = rev(hours_of_day) # floor-specific: because which.max returns the first index by default
  nearest_hour = hours_of_day[sapply(lapply(with(timestamps, hour + min/60 + sec/3600),
                                            `<`, hours_of_day),
                                     function(x) which.max(-x))] # floor-specific: negative which.max()
  rounded = with(timestamps, ISOdate(1900 + year, mon + 1, mday, nearest_hour,
                                     tz = attr(timestamps, "tzone")[1]))
  rounded[rounded > timestamps] = rounded[rounded > timestamps] - 86400 # floor: use minus
  rounded[na_timestamps] = NA # Restore NA for the originally missing values
  return(rounded)
}
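A hypothetical call (not from the original answer), reusing the hours_of_day from the question:
floor_date_to(as.POSIXct(c("2021-01-23 09:36:00", NA), tz = "CET"), c(6, 14, 20))
# should give: "2021-01-23 06:00:00 CET" NA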
I have time-series data sampled at a 10-minute rate. I want to split it hour-wise, but to my surprise split.xts is not producing the intended results. Steps used are:
library(xts)
set.seed(123)
Sys.setenv(TZ="Asia/Kolkata")
timeind <- seq(as.POSIXct("2017-01-20 00:00:00 IST"),
as.POSIXct("2017-01-20 23:59:59 IST"),by="10 min") #for indexing
df <- xts(runif(length(timeind),30,50),timeind) # xts object
split(df,"hours",k=1)
OUTPUT IS:
[[1]]
[,1]
2017-01-20 00:00:00 31.24343
2017-01-20 00:10:00 32.57921
2017-01-20 00:20:00 40.17684
[[2]]
[,1]
2017-01-20 00:30:00 41.89185
2017-01-20 00:40:00 30.93997
2017-01-20 00:50:00 31.76651
2017-01-20 01:00:00 49.07364
2017-01-20 01:10:00 34.79113
2017-01-20 01:20:00 48.13881
Expected output is:
[[1]]
[,1]
2017-01-20 00:00:00 31.24343
2017-01-20 00:10:00 32.57921
2017-01-20 00:20:00 40.17684
2017-01-20 00:30:00 41.89185
2017-01-20 00:40:00 30.93997
2017-01-20 00:50:00 31.76651
[[2]]
2017-01-20 01:00:00 49.07364
2017-01-20 01:10:00 34.79113
2017-01-20 01:20:00 48.13881
...
Why is split.xts not working as expected?
It's a known bug. If the index timezone happens to be one that is not a round hour offset from UTC, endpoints does not work correctly (because its calculations are based on UTC).
For example, Asia/Kolkata is UTC+0530, so endpoints aligns on half-hours.
A possible work-around would be to subtract 30 minutes from the index before calling split, then add 30 minutes back to each element of the result. That might cause issues around daylight saving time, though, if the timezone observes it.
df_adjusted <- df
.index(df_adjusted) <- .index(df_adjusted) - 60 * 30
by_hour <- lapply(split(df_adjusted, "hours"),
function(x) { .index(x) <- .index(x) + 60 * 30; x })
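As a quick sanity check (my addition, not part of the original answer), the first index of each group should now fall on a local hour boundary:
sapply(by_hour[1:3], function(x) format(index(x)[1], "%H:%M:%S"))
# should give "00:00:00" "01:00:00" "02:00:00"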
I have one column with two different formats but the same class 'factor'.
D$date
2009-05-12 11:30:00
2009-05-13 11:30:00
2009-05-14 11:30:00
2009-05-15 11:30:00
42115.652
2876
8765
class(D$date)
factor
What I need is to convert the numbers to dates.
D$date <- as.character(D$date)
D$date = ifelse(!is.na(as.numeric(D$date)),
                as.POSIXct(as.numeric(D$date) * (60*60*24), origin="1899-12-30", tz="UTC"),
                D$date)
Now the numbers are converted, but to strange values like "1429630800".
I tried without ifelse:
as.POSIXct(as.numeric(42115.652) * (60*60*24), origin="1899-12-30", tz="UTC")
[1] "2015-04-21 15:38:52 UTC"
It was converted nicely.
The problem is that you are mixing up classes in the true/false halves of your ifelse. ifelse() strips the POSIXct class from its result, so the converted dates are shown as their underlying number of seconds since 1970-01-01 ("1429630800" is just 2015-04-21 15:38:52 UTC in that form). You can fix this by adding as.character, like this:
D$date = ifelse(!is.na(as.numeric(D$date)),
                as.character(as.POSIXct(as.numeric(D$date) * (60*60*24), origin="1899-12-30", tz="UTC")),
                D$date)
#D
# date
#1 2009-05-12 11:30:00
#2 2009-05-13 11:30:00
#3 2009-05-14 11:30:00
#4 2009-05-15 11:30:00
#5 2015-04-21 15:38:52
#6 1907-11-15 00:00:00
#7 1923-12-30 00:00:00
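If you then need the column as actual date-times rather than character, you could convert it in a second step (assuming the times are meant to be UTC):
D$date <- as.POSIXct(D$date, tz="UTC")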
You can also create a function which transforms each value into POSIXct, then apply it with lapply and combine the results with do.call:
b <- c("2009-05-12 11:30:00", "2009-05-13 11:30:00", "2009-05-14 11:30:00",
"2009-05-15 11:30:00", "42115.652", "2876", "8765")
foo <- function(x){
  if(!is.na(as.numeric(x))){
    as.POSIXct(as.numeric(x) * (60*60*24), origin="1899-12-30", tz="UTC")
  }else{
    as.POSIXct(x, origin="1899-12-30", tz="UTC")
  }
}
do.call("c", lapply(b, foo))
[1] "2009-05-12 13:30:00 CEST" "2009-05-13 13:30:00 CEST" "2009-05-14 13:30:00 CEST" "2009-05-15 13:30:00 CEST"
[5] "2015-04-21 17:38:52 CEST" "1907-11-15 01:00:00 CET" "1923-12-30 01:00:00 CET"
As I failed to solve my problem with PHP/MySQL or Excel due to the data size, I'm now taking my very first steps with R and struggling a bit. The problem is this: I have a second-by-second CSV file with half a year of data that looks like this:
metering,timestamp
123,2016-01-01 00:00:00
345,2016-01-01 00:00:01
243,2016-01-01 00:00:02
101,2016-01-01 00:00:04
134,2016-01-01 00:00:06
As you can see, some seconds are missing every once in a while (don't ask me why the values come before the timestamp, but that's how I received the data…). Now I'm trying to calculate the number of values (= seconds) that are missing.
So my idea was
to create a vector that is correct (includes all sec-by-sec timestamps),
match the given CSV file with that new vector, and
sum up all the timestamps with no value.
I managed to make step 1 happen with the following code:
RegularTimeSeries <- seq(as.POSIXct("2016-01-01 00:00:00", tz = "UTC"), as.POSIXct("2016-01-01 00:00:30", tz = "UTC"), by = "1 sec")
write.csv(RegularTimeSeries, file = "RegularTimeSeries.csv")
To get an idea of what I did, I also exported the vector to a CSV, which looks like this:
"1",2016-01-01 00:00:00
"2",2016-01-01 00:00:01
"3",2016-01-01 00:00:02
"4",2016-01-01 00:00:03
"5",2016-01-01 00:00:04
"6",2016-01-01 00:00:05
"7",2016-01-01 00:00:06
Unfortunately I have no idea how to go on with steps 2 and 3. I found some very similar examples (http://www.r-bloggers.com/fix-missing-dates-with-r/, R: Insert rows for missing dates/times), but as a total R noob I struggled to translate them to my sec-by-sec data.
Some hints for the greenhorn would be very very helpful – thank you very much in advance :)
In the tidyverse,
library(dplyr)
library(tidyr)
# parse datetimes
df %>% mutate(timestamp = as.POSIXct(timestamp)) %>%
# complete sequence to full sequence from min to max by second
complete(timestamp = seq.POSIXt(min(timestamp), max(timestamp), by = 'sec'))
## # A tibble: 7 x 2
## timestamp metering
## <time> <int>
## 1 2016-01-01 00:00:00 123
## 2 2016-01-01 00:00:01 345
## 3 2016-01-01 00:00:02 243
## 4 2016-01-01 00:00:03 NA
## 5 2016-01-01 00:00:04 101
## 6 2016-01-01 00:00:05 NA
## 7 2016-01-01 00:00:06 134
If you want the number of NAs (i.e. the number of seconds with no data), add on
%>% tally(is.na(metering))
## # A tibble: 1 x 1
## n
## <int>
## 1 2
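tally() with an expression sums it as a weight; the same count can be spelled out with summarise() (an equivalent variant, not in the original answer):
df %>% mutate(timestamp = as.POSIXct(timestamp)) %>%
  complete(timestamp = seq.POSIXt(min(timestamp), max(timestamp), by = 'sec')) %>%
  summarise(missing_seconds = sum(is.na(metering)))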
You can check which values of your RegularTimeSeries are in your broken time series using which and %in%. First create BrokenTimeSeries from your example:
RegularTimeSeries <- seq(as.POSIXct("2016-01-01 00:00:00", tz = "UTC"), as.POSIXct("2016-01-01 00:00:30", tz = "UTC"), by = "1 sec")
BrokenTimeSeries <- RegularTimeSeries[-c(3,6,9)] # remove some seconds
This will give you the indices of values within RegularTimeSeries that are not in BrokenTimeSeries:
> which(!(RegularTimeSeries %in% BrokenTimeSeries))
[1] 3 6 9
This will return the actual values:
> RegularTimeSeries[which(!(RegularTimeSeries %in% BrokenTimeSeries))]
[1] "2016-01-01 00:00:02 UTC" "2016-01-01 00:00:05 UTC" "2016-01-01 00:00:08 UTC"
Maybe I'm misunderstanding your problem, but you can count the number of missing seconds simply by subtracting the length of your broken time series from that of RegularTimeSeries, or by taking the length of either of the two resulting vectors above.
> length(RegularTimeSeries) - length(BrokenTimeSeries)
[1] 3
> length(which(!(RegularTimeSeries %in% BrokenTimeSeries)))
[1] 3
> length(RegularTimeSeries[which(!(RegularTimeSeries %in% BrokenTimeSeries))])
[1] 3
If you want to merge the files together to see the missing values you can do something like this:
# data frame with the regular time series and the broken series aligned against it
df <- data.frame(
  RegularTimeSeries
)
df$BrokenTimeSeries[RegularTimeSeries %in% BrokenTimeSeries] <- BrokenTimeSeries
df$BrokenTimeSeries <- as.POSIXct(df$BrokenTimeSeries, origin="1970-01-01", tz="UTC")
resulting in:
> df[1:12,]
     RegularTimeSeries    BrokenTimeSeries
1  2016-01-01 00:00:00 2016-01-01 00:00:00
2  2016-01-01 00:00:01 2016-01-01 00:00:01
3  2016-01-01 00:00:02                <NA>
4  2016-01-01 00:00:03 2016-01-01 00:00:03
5  2016-01-01 00:00:04 2016-01-01 00:00:04
6  2016-01-01 00:00:05                <NA>
7  2016-01-01 00:00:06 2016-01-01 00:00:06
8  2016-01-01 00:00:07 2016-01-01 00:00:07
9  2016-01-01 00:00:08                <NA>
10 2016-01-01 00:00:09 2016-01-01 00:00:09
11 2016-01-01 00:00:10 2016-01-01 00:00:10
12 2016-01-01 00:00:11 2016-01-01 00:00:11
If all you want is the number of missing seconds, it can be done much more simply. First find the number of seconds in your time range, then subtract the number of rows in your dataset. This could be done in R along these lines:
n.seconds <- difftime("2016-06-01 00:00:00", "2016-01-01 00:00:00", units="secs")
n.rows <- nrow(my.data.frame)
n.missing.values <- n.seconds - n.rows
You will of course need to adapt the time range and the name of the data frame to your data.
Hope it helps
d <- (c("2016-01-01 00:00:01",
"2016-01-01 00:00:02",
"2016-01-01 00:00:03",
"2016-01-01 00:00:04",
"2016-01-01 00:00:05",
"2016-01-01 00:00:06",
"2016-01-01 00:00:10",
"2016-01-01 00:00:12",
"2016-01-01 00:00:14",
"2016-01-01 00:00:16",
"2016-01-01 00:00:18",
"2016-01-01 00:00:20",
"2016-01-01 00:00:22"))
d <- as.POSIXct(d)
# pre-allocate the result vector (note: the name "c" masks base::c(), so a different name would be safer)
c <- rep(NA_real_, length(d))
for (i in 2:length(d)){
  if(difftime(d[i-1], d[i], units = "secs") < -1 ){
    c[i] <- d[i]
  }
}
class(c) <- c('POSIXct', 'POSIXt')
c
 [1] NA                        NA                        NA                        NA                        NA
 [6] NA                        "2016-01-01 00:00:10 EST" "2016-01-01 00:00:12 EST" "2016-01-01 00:00:14 EST" "2016-01-01 00:00:16 EST"
[11] "2016-01-01 00:00:18 EST" "2016-01-01 00:00:20 EST" "2016-01-01 00:00:22 EST"
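The loop can also be avoided entirely with diff() (a vectorized sketch, not from the original answer):
gaps <- as.numeric(diff(d), units = "secs")  # seconds between consecutive timestamps
d[which(gaps > 1) + 1]                       # timestamps that come right after a gap
sum(gaps - 1)                                # total number of missing seconds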
I have several files:
dir<- list.files("/data/test", "*.img$", full.names = TRUE)
dir:
/data/test/data.df_df_fg.20141231.jh.ds.0930.edfr.img
/data/test/data.df_df_fg.20141231.jh.ds.1030.edfr.img
/data/test/data.df_df_fg.20141231.jh.ds.1130.edfr.img
I want to extract the date from the file names:
dt <- as.POSIXct(strptime(basename(dir),"data.df_df_fg.%Y%m%d.jh.ds.%H%M.edfr", tz = "GMT"))
dt:
[1] "2014-12-31 09:30:00 GMT"
[2]"2014-12-31 10:30:00 GMT"
[3] "2014-12-31 11:30:00 GMT"
What I need is to subtract 1 hour from dt so I get:
[1] "2014-12-31 08:30:00 GMT"
[2]"2014-12-31 09:30:00 GMT"
[3] "2014-12-31 10:30:00 GMT"
and if the time is 2014-12-31 24:30:00 GMT, make it 23:30:00 GMT but also reduce the date to 2014-12-30, because we are then already in the previous day.
Try:
dt - as.difftime(1, units = "hours")
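Since POSIXct is stored as seconds since the epoch, this subtraction rolls the date back across midnight automatically; subtracting 3600 seconds directly is equivalent:
dt - 3600
[1] "2014-12-31 08:30:00 GMT" "2014-12-31 09:30:00 GMT" "2014-12-31 10:30:00 GMT"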