This question already has answers here:
Insert rows for missing dates/times
(9 answers)
Closed 5 years ago.
I have a dataframe that contains hourly weather information. I would like to increase the granularity of the time measurements (5-minute intervals instead of 60-minute intervals) while copying the other columns' data into the new rows created:
Current Dataframe Structure:
Date Temperature Humidity
2015-01-01 00:00:00 25 0.67
2015-01-01 01:00:00 26 0.69
Target Dataframe Structure:
Date Temperature Humidity
2015-01-01 00:00:00 25 0.67
2015-01-01 00:05:00 25 0.67
2015-01-01 00:10:00 25 0.67
.
.
.
2015-01-01 00:55:00 25 0.67
2015-01-01 01:00:00 26 0.69
2015-01-01 01:05:00 26 0.69
2015-01-01 01:10:00 26 0.69
.
.
.
What I've Tried:
for (i in 1:nrow(df)) {
  five.minutes <- seq(df$date[i], length = 12, by = "5 mins")
  for (j in 1:length(five.minutes)) {
    df$date[i] <- rbind(five.minutes[j])
  }
}
Error I'm getting:
Error in as.POSIXct.numeric(value) : 'origin' must be supplied
One possible solution uses fill from tidyr and right_join from dplyr.
The approach is to create a date/time sequence at 5-minute intervals between the min and max+55min times of the dataframe, then right-join the dataframe with that sequence. This provides all the desired rows, but with NA for Temperature and Humidity; fill then populates the NA values with the previous valid values.
# Data
df <- read.table(text = "Date Temperature Humidity
'2015-01-01 00:00:00' 25 0.67
'2015-01-01 01:00:00' 26 0.69
'2015-01-01 02:00:00' 28 0.69
'2015-01-01 03:00:00' 25 0.69", header = T, stringsAsFactors = F)
df$Date <- as.POSIXct(df$Date, format = "%Y-%m-%d %H:%M:%S")
# Create a dataframe with all possible date/times at intervals of 5 minutes
Dates <- data.frame(Date = seq(min(df$Date), max(df$Date) + 55*60, by = 5*60))
library(dplyr)
library(tidyr)

result <- df %>%
right_join(Dates, by="Date") %>%
fill(Temperature, Humidity)
result
# Date Temperature Humidity
#1 2015-01-01 00:00:00 25 0.67
#2 2015-01-01 00:05:00 25 0.67
#3 2015-01-01 00:10:00 25 0.67
#4 2015-01-01 00:15:00 25 0.67
#5 2015-01-01 00:20:00 25 0.67
#6 2015-01-01 00:25:00 25 0.67
#7 2015-01-01 00:30:00 25 0.67
#8 2015-01-01 00:35:00 25 0.67
#9 2015-01-01 00:40:00 25 0.67
#10 2015-01-01 00:45:00 25 0.67
#11 2015-01-01 00:50:00 25 0.67
#12 2015-01-01 00:55:00 25 0.67
#13 2015-01-01 01:00:00 26 0.69
#14 2015-01-01 01:05:00 26 0.69
#.....
#.....
#44 2015-01-01 03:35:00 25 0.69
#45 2015-01-01 03:40:00 25 0.69
#46 2015-01-01 03:45:00 25 0.69
#47 2015-01-01 03:50:00 25 0.69
#48 2015-01-01 03:55:00 25 0.69
I think this might do:
library(tibble)
library(lubridate)

df <- tibble(DateTime = c("2015-01-01 00:00:00", "2015-01-01 01:00:00"),
             Temperature = c(25, 26), Humidity = c(.67, .69))
df$DateTime <- ymd_hms(df$DateTime)

DateTime <- as.POSIXct(sapply(1:(nrow(df) - 1), function(x)
                         seq(from = df$DateTime[x], to = df$DateTime[x + 1], by = "5 min")),
                       origin = "1970-01-01", tz = "UTC")
Temperature <- c(sapply(1:(nrow(df) - 1), function(x) rep(df$Temperature[x], 12)),
                 df$Temperature[nrow(df)])
Humidity <- c(sapply(1:(nrow(df) - 1), function(x) rep(df$Humidity[x], 12)),
              df$Humidity[nrow(df)])
tibble(DateTime = as.character(DateTime), Temperature, Humidity)
<chr> <dbl> <dbl>
1 2015-01-01 00:00:00 25.0 0.670
2 2015-01-01 00:05:00 25.0 0.670
3 2015-01-01 00:10:00 25.0 0.670
4 2015-01-01 00:15:00 25.0 0.670
5 2015-01-01 00:20:00 25.0 0.670
6 2015-01-01 00:25:00 25.0 0.670
7 2015-01-01 00:30:00 25.0 0.670
8 2015-01-01 00:35:00 25.0 0.670
9 2015-01-01 00:40:00 25.0 0.670
10 2015-01-01 00:45:00 25.0 0.670
11 2015-01-01 00:50:00 25.0 0.670
12 2015-01-01 00:55:00 25.0 0.670
13 2015-01-01 01:00:00 26.0 0.690
Related
I have a dataset of hourly observations with the format %Y-%m-%d %H:%M:%S, e.g. 2020-03-01 01:00:00, for various days. How can I filter out a certain time interval? My goal is to keep the observations between 08:00 and 20:00.
You can extract the hour value from the column and keep the rows between 8 and 20 hours.
df$hour <- as.integer(format(df$datetime, '%H'))
result <- subset(df, hour >= 8 & hour <= 20)
result
# datetime hour
#9 2020-01-01 08:00:00 8
#10 2020-01-01 09:00:00 9
#11 2020-01-01 10:00:00 10
#12 2020-01-01 11:00:00 11
#13 2020-01-01 12:00:00 12
#14 2020-01-01 13:00:00 13
#15 2020-01-01 14:00:00 14
#16 2020-01-01 15:00:00 15
#17 2020-01-01 16:00:00 16
#18 2020-01-01 17:00:00 17
#19 2020-01-01 18:00:00 18
#20 2020-01-01 19:00:00 19
#21 2020-01-01 20:00:00 20
#33 2020-01-02 08:00:00 8
#34 2020-01-02 09:00:00 9
#35 2020-01-02 10:00:00 10
#...
#...
data
df <- data.frame(datetime = seq(as.POSIXct('2020-01-01 00:00:00', tz = 'UTC'),
as.POSIXct('2020-01-10 00:00:00', tz = 'UTC'), 'hour'))
An alternative, using lubridate::hour with dplyr::between:
between(hour(your_date_value), 8, 19)
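As a sketch of how that one-liner might slot into a pipeline (using the same hourly sample data as the answer above; note that the 8-19 bounds keep observations from 08:00 up to, but not including, 20:00):

```r
library(dplyr)
library(lubridate)

# hourly sample data, as in the answer above
df <- data.frame(datetime = seq(as.POSIXct('2020-01-01 00:00:00', tz = 'UTC'),
                                as.POSIXct('2020-01-10 00:00:00', tz = 'UTC'),
                                by = 'hour'))

# keep rows whose hour is between 8 and 19 (inclusive on both ends)
result <- df %>% filter(between(hour(datetime), 8, 19))
```

Using between(hour(datetime), 8, 20) instead would also keep the 20:00 observations, matching the subset answer above.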
I have two long time series to compare; however, their sampling is completely different. The first is sampled hourly, the second irregularly.
I would like to compare Value1 and Value2, so I would like to select the Value1 records from df1 at 02:00 on the dates present in df2. How can I solve this in R?
df1:
Date1                Value1
2014-01-01 01:00:00  0.16
2014-01-01 02:00:00  0.13
2014-01-01 03:00:00  0.6
2014-01-02 01:00:00  0.5
2014-01-02 02:00:00  0.22
2014-01-02 03:00:00  0.17
2014-01-19 01:00:00  0.2
2014-01-19 02:00:00  0.11
2014-01-19 03:00:00  0.15
2014-01-21 01:00:00  0.13
2014-01-21 02:00:00  0.33
2014-01-21 03:00:00  0.1
2014-01-23 01:00:00  0.09
2014-01-23 02:00:00  0.02
2014-01-23 03:00:00  0.16
df2:
Date2       Value2
2014-01-01  13
2014-01-19  76
2014-01-23  8
desired output:
df_fused:
Date1                Value1  Value2
2014-01-01 02:00:00  0.13    13
2014-01-19 02:00:00  0.11    76
2014-01-23 02:00:00  0.02    8
here is a data.table approach
library( data.table )
#sample data can also be setDT(df1);setDT(df2)
df1 <- fread("Date1 Value1
2014-01-01 01:00:00 0.16
2014-01-01 02:00:00 0.13
2014-01-01 03:00:00 0.6
2014-01-02 01:00:00 0.5
2014-01-02 02:00:00 0.22
2014-01-02 03:00:00 0.17
2014-01-19 01:00:00 0.2
2014-01-19 02:00:00 0.11
2014-01-19 03:00:00 0.15
2014-01-21 01:00:00 0.13
2014-01-21 02:00:00 0.33
2014-01-21 03:00:00 0.1
2014-01-23 01:00:00 0.09
2014-01-23 02:00:00 0.02
2014-01-23 03:00:00 0.16")
df2 <- fread("Date2 Value2
2014-01-01 13
2014-01-19 76
2014-01-23 8")
#set dates to posix
df1[, Date1 := as.POSIXct( Date1, format = "%Y-%m-%d %H:%M:%S", tz = "UTC" )]
#set df2 dates to 02:00:00 time (paste adds the space needed for the format to match)
df2[, Date2 := as.POSIXct( paste( Date2, "02:00:00" ), format = "%Y-%m-%d %H:%M:%S", tz = "UTC" )]
#join
df2[ df1, Value1 := i.Value1, on = .(Date2 = Date1)][]
# Date2 Value2 Value1
# 1: 2014-01-01 02:00:00 13 0.13
# 2: 2014-01-19 02:00:00 76 0.11
# 3: 2014-01-23 02:00:00 8 0.02
I am a big fan of Hyndman's packages, but I have stumbled over the Box-Cox transformation.
I have a dataframe
class(chicago_sales)
[1] "tbl_ts" "tbl_df" "tbl" "data.frame"
I am trying to mutate an extra column in which the Median_price variable is transformed.
foo <- chicago_sales %>%
  mutate(bc = BoxCox(x = chicago_sales$Median_price,
                     lambda = BoxCox.lambda(chicago_sales$Median_price)))
gives me some result (probably wrong, too), and I cannot apply autoplot to it.
I also tried to apply the code from Hyndman's book, but failed.
What am I doing wrong? Thanks!
UPDATED:
The issue: inside a tsibble, when using dplyr verbs you do not call chicago_sales$Median_price, just Median_price. With tsibbles I would advise using fable and fabletools, but if you are using forecast, it should work like this:
library(tsibble)
library(dplyr)
library(forecast)
pedestrian %>%
mutate(bc = BoxCox(Count, BoxCox.lambda(Count)))
# A tsibble: 66,037 x 6 [1h] <Australia/Melbourne>
# Key: Sensor [4]
Sensor Date_Time Date Time Count bc
<chr> <dttm> <date> <int> <int> <dbl>
1 Birrarung Marr 2015-01-01 00:00:00 2015-01-01 0 1630 11.3
2 Birrarung Marr 2015-01-01 01:00:00 2015-01-01 1 826 9.87
3 Birrarung Marr 2015-01-01 02:00:00 2015-01-01 2 567 9.10
4 Birrarung Marr 2015-01-01 03:00:00 2015-01-01 3 264 7.65
5 Birrarung Marr 2015-01-01 04:00:00 2015-01-01 4 139 6.52
6 Birrarung Marr 2015-01-01 05:00:00 2015-01-01 5 77 5.54
7 Birrarung Marr 2015-01-01 06:00:00 2015-01-01 6 44 4.67
8 Birrarung Marr 2015-01-01 07:00:00 2015-01-01 7 56 5.04
9 Birrarung Marr 2015-01-01 08:00:00 2015-01-01 8 113 6.17
10 Birrarung Marr 2015-01-01 09:00:00 2015-01-01 9 166 6.82
# ... with 66,027 more rows
I used a built in dataset from the tsibble package as you did not provide a dput of chicago_sales.
I have hourly data of CO2 values and I would like to know what is the CO2 concentration during the night (e.g. 9pm-7am). A reproducible example:
library(tidyverse); library(lubridate)
times <- seq(ymd_hms("2020-01-01 08:00:00"),
ymd_hms("2020-01-04 08:00:00"), by = "1 hours")
values <- runif(length(times), 1, 15)
df <- tibble(times, values)
How to get mean nightime values (e.g. between 9pm and 7am)? Of course I can filter like this:
df <- df %>%
filter(!hour(times) %in% c(8:20))
And then give id to each observation during the night
df$ID <- rep(LETTERS[1:round(nrow(df)/11)],
times = 1, each = 11)
And finally group and summarise
df_grouped <- df %>%
group_by(., ID) %>%
summarise(value_mean =mean(values))
But I am sure this is not a good way. How can I do this better, especially the part where we give an ID to the nighttime values?
You can use data.table::frollmean to get the mean over a certain time window. In your case you want the mean of the last 10 hours, so we set the n argument of the function to 10:
> df$means <- data.table::frollmean(df$values, 10)
> head(df, 20)
# A tibble: 20 x 3
times values means
<dttm> <dbl> <dbl>
1 2020-01-01 08:00:00 4.15 NA
2 2020-01-01 09:00:00 6.24 NA
3 2020-01-01 10:00:00 5.17 NA
4 2020-01-01 11:00:00 9.20 NA
5 2020-01-01 12:00:00 12.3 NA
6 2020-01-01 13:00:00 2.93 NA
7 2020-01-01 14:00:00 9.12 NA
8 2020-01-01 15:00:00 9.72 NA
9 2020-01-01 16:00:00 12.0 NA
10 2020-01-01 17:00:00 13.4 8.41
11 2020-01-01 18:00:00 10.2 9.01
12 2020-01-01 19:00:00 1.97 8.59
13 2020-01-01 20:00:00 11.9 9.26
14 2020-01-01 21:00:00 8.84 9.23
15 2020-01-01 22:00:00 10.1 9.01
16 2020-01-01 23:00:00 3.76 9.09
17 2020-01-02 00:00:00 9.98 9.18
18 2020-01-02 01:00:00 5.56 8.76
19 2020-01-02 02:00:00 5.22 8.09
20 2020-01-02 03:00:00 6.36 7.39
Each row in the means column is the mean of that row's value together with the previous 9 values. The first 9 rows are NA, of course, because the window is not yet complete there.
You might also take a look at the tsibble package, which is built for manipulating time series.
You can derive the window length from the times you want, but the data need to be evenly spaced to use this solution:
n <- diff(which(grepl('20:00:00|08:00:00', df$times))) + 1
n <- unique(n)
df$means <- data.table::frollmean(df$values, n)
> head(df, 20)
# A tibble: 20 x 3
times values means
<dttm> <dbl> <dbl>
1 2020-01-01 08:00:00 11.4 NA
2 2020-01-01 09:00:00 7.03 NA
3 2020-01-01 10:00:00 7.15 NA
4 2020-01-01 11:00:00 6.91 NA
5 2020-01-01 12:00:00 8.18 NA
6 2020-01-01 13:00:00 4.70 NA
7 2020-01-01 14:00:00 13.8 NA
8 2020-01-01 15:00:00 5.16 NA
9 2020-01-01 16:00:00 12.3 NA
10 2020-01-01 17:00:00 3.81 NA
11 2020-01-01 18:00:00 3.09 NA
12 2020-01-01 19:00:00 9.89 NA
13 2020-01-01 20:00:00 1.24 7.28
14 2020-01-01 21:00:00 8.07 7.02
15 2020-01-01 22:00:00 5.59 6.91
16 2020-01-01 23:00:00 5.77 6.81
17 2020-01-02 00:00:00 10.7 7.10
18 2020-01-02 01:00:00 3.44 6.73
19 2020-01-02 02:00:00 10.3 7.16
20 2020-01-02 03:00:00 4.61 6.45
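If you do want explicit per-night groups rather than a rolling mean, one possible sketch (using the df from the question; the night_id construction via cumsum is my own suggestion, and it assumes each night begins with a 21:00 observation): start a new group every time a 21:00 row is reached.

```r
library(dplyr)
library(lubridate)

# sample data from the question
times  <- seq(ymd_hms("2020-01-01 08:00:00"),
              ymd_hms("2020-01-04 08:00:00"), by = "1 hours")
values <- runif(length(times), 1, 15)
df <- tibble::tibble(times, values)

night_means <- df %>%
  filter(!hour(times) %in% 8:20) %>%               # keep 21:00-07:00 only
  mutate(night_id = cumsum(hour(times) == 21)) %>% # new group at each 21:00
  group_by(night_id) %>%
  summarise(value_mean = mean(values))
```

This avoids the hand-built rep(LETTERS[...]) IDs and keeps working even when nights have unequal numbers of observations.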
How can one split the following datetime into year-month-day-hour-minute-second? The date was created using:
datetime = seq.POSIXt(as.POSIXct("2015-04-01 0:00:00", tz = "GMT"),
                      as.POSIXct("2015-11-30 23:59:59", tz = "GMT"),
                      by = "hour")
The ultimate goal is to aggregate x which is at hourly resolution into 6-hourly resolution. Probably it is possible to aggregate datetime without needing to split it?
datetime x
1 2015-04-01 00:00:00 0.0
2 2015-04-01 01:00:00 0.0
3 2015-04-01 02:00:00 0.0
4 2015-04-01 03:00:00 0.0
5 2015-04-01 04:00:00 0.0
6 2015-04-01 05:00:00 0.0
7 2015-04-01 06:00:00 0.0
8 2015-04-01 07:00:00 0.0
9 2015-04-01 08:00:00 0.0
10 2015-04-01 09:00:00 0.0
11 2015-04-01 10:00:00 0.0
12 2015-04-01 11:00:00 0.0
13 2015-04-01 12:00:00 0.0
14 2015-04-01 13:00:00 0.0
15 2015-04-01 14:00:00 0.0
16 2015-04-01 15:00:00 0.0
17 2015-04-01 16:00:00 0.0
18 2015-04-01 17:00:00 0.0
19 2015-04-01 18:00:00 0.0
20 2015-04-01 19:00:00 0.0
21 2015-04-01 20:00:00 0.0
22 2015-04-01 21:00:00 0.0
23 2015-04-01 22:00:00 1.6
24 2015-04-01 23:00:00 0.2
25 2015-04-02 00:00:00 1.5
26 2015-04-02 01:00:00 1.5
27 2015-04-02 02:00:00 0.5
28 2015-04-02 03:00:00 0.0
29 2015-04-02 04:00:00 0.0
30 2015-04-02 05:00:00 0.0
31 2015-04-02 06:00:00 0.0
32 2015-04-02 07:00:00 0.5
33 2015-04-02 08:00:00 0.3
34 2015-04-02 09:00:00 0.0
35 2015-04-02 10:00:00 0.0
36 2015-04-02 11:00:00 0.0
37 2015-04-02 12:00:00 0.0
38 2015-04-02 13:00:00 0.0
39 2015-04-02 14:00:00 0.0
40 2015-04-02 15:00:00 0.0
41 2015-04-02 16:00:00 0.0
42 2015-04-02 17:00:00 0.0
43 2015-04-02 18:00:00 0.0
44 2015-04-02 19:00:00 0.0
45 2015-04-02 20:00:00 0.0
46 2015-04-02 21:00:00 0.0
47 2015-04-02 22:00:00 0.0
48 2015-04-02 23:00:00 0.0
....
The output should be very close to:
YYYY-MM-DD hh:mm:ss YYYY-MM-DD hh:mm:ss YYYY-MM-DD hh:mm:ss YYYY-MM-DD hh:mm:ss
2015-04-01 00:00:00 2015-04-01 06:00:00 2015-04-01 12:00:00 2015-04-01 18:00:00
2015-04-02 00:00:00 2015-04-02 06:00:00 2015-04-02 12:00:00 2015-04-02 18:00:00
.....
I appreciate your thoughts on this.
EDIT
How to implement #r2evans answer on a list object such as:
x <- data.frame(datetime = seq.POSIXt(as.POSIXct("2015-04-01 0:00:00", tz = "GMT"),
                                      as.POSIXct("2015-11-30 23:59:59", tz = "GMT"),
                                      by = "hour"),
                precip = runif(5856))
flst1 <- list(x, x, x, x)
flst1=lapply(flst1, function(x){x$datetime <- as.POSIXct(x$datetime, tz = "GMT"); x})
sixhours1=lapply(flst1, function(x) {x$bin <- cut(x$datetime,sixhours);x})
head(sixhours1[[1]],n=7)
ret=lapply(sixhours1, function(x) aggregate(x$precip, list(x$bin), sum,na.rm=T))
head(ret[[1]],n=20)
Your minimal data is incomplete, so I'll generate something random:
dat <- data.frame(datetime = seq.POSIXt(as.POSIXct("2015-04-01 0:00:00", tz = "GMT"),
as.POSIXct("2015-11-30 23:59:59", tz = "GMT"),
by = "hour",tz = "GMT"),
x = runif(5856))
# the "1+" ensures we extend at least to the end of the datetimes;
# without it, the last several rows in "bin" would be NA
sixhours <- seq.POSIXt(as.POSIXct("2015-04-01 0:00:00", tz = "GMT"),
1 + as.POSIXct("2015-11-30 23:59:59", tz = "GMT"),
by = "6 hours",tz = "GMT")
# this doesn't have to go into the data.frame (could be a separate
# vector), but I'm including it for easy row-wise comparison
dat$bin <- cut(dat$datetime, sixhours)
head(dat, n=7)
# datetime x bin
# 1 2015-04-01 00:00:00 0.91022534 2015-04-01 00:00:00
# 2 2015-04-01 01:00:00 0.02638850 2015-04-01 00:00:00
# 3 2015-04-01 02:00:00 0.42486354 2015-04-01 00:00:00
# 4 2015-04-01 03:00:00 0.90722845 2015-04-01 00:00:00
# 5 2015-04-01 04:00:00 0.24540085 2015-04-01 00:00:00
# 6 2015-04-01 05:00:00 0.60360906 2015-04-01 00:00:00
# 7 2015-04-01 06:00:00 0.01843313 2015-04-01 06:00:00
tail(dat)
# datetime x bin
# 5851 2015-11-30 18:00:00 0.5963204 2015-11-30 18:00:00
# 5852 2015-11-30 19:00:00 0.2503440 2015-11-30 18:00:00
# 5853 2015-11-30 20:00:00 0.9600476 2015-11-30 18:00:00
# 5854 2015-11-30 21:00:00 0.6837394 2015-11-30 18:00:00
# 5855 2015-11-30 22:00:00 0.9093506 2015-11-30 18:00:00
# 5856 2015-11-30 23:00:00 0.9197769 2015-11-30 18:00:00
nrow(dat)
# [1] 5856
The work:
ret <- aggregate(dat$x, list(dat$bin), mean)
nrow(ret)
# [1] 976
head(ret)
# Group.1 x
# 1 2015-04-01 00:00:00 0.5196193
# 2 2015-04-01 06:00:00 0.4770019
# 3 2015-04-01 12:00:00 0.5359483
# 4 2015-04-01 18:00:00 0.8140603
# 5 2015-04-02 00:00:00 0.4874332
# 6 2015-04-02 06:00:00 0.6139554
tail(ret)
# Group.1 x
# 971 2015-11-29 12:00:00 0.6881228
# 972 2015-11-29 18:00:00 0.4791925
# 973 2015-11-30 00:00:00 0.5793872
# 974 2015-11-30 06:00:00 0.4809868
# 975 2015-11-30 12:00:00 0.5157432
# 976 2015-11-30 18:00:00 0.7199298
I got a solution using:
library(xts)
flst<- list.files(pattern=".csv")
flst1 <- lapply(flst, function(x) read.csv(x, header = TRUE, stringsAsFactors = FALSE, sep = ",", fill = TRUE,
                dec = ".", quote = "\"", colClasses = c('factor', 'numeric', 'NULL'))) # read files, ignoring the 3rd column
head(flst1[[1]])
dat.xts=lapply(flst1, function(x) xts(x$precip,as.POSIXct(x$datetime)))
head(dat.xts[[1]])
ep.xts <- lapply(dat.xts, function(x) endpoints(x, on = "hours", k = 6)) # k = by; see ?endpoints for "on"
head(ep.xts[[1]])
# apply each series' own endpoints (INDEX must match the series it is applied to)
stations6hrly <- Map(function(x, ep) period.apply(x, FUN = sum, INDEX = ep),
                     dat.xts, ep.xts)
head(stations6hrly[[703]])
[,1]
2015-04-01 05:00:00 0.3
2015-04-01 11:00:00 1.2
2015-04-01 17:00:00 0.0
2015-04-01 23:00:00 0.2
2015-04-02 05:00:00 0.0
2015-04-02 11:00:00 1.4
The dates are not labelled as I wanted them to be, but the values are correct. I doubt there is an equivalent of CDO's shifttime operator in R.
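There is no built-in shifttime in xts as far as I know, but shifting the labels is just arithmetic on the index. A minimal sketch (the 5-hour offset is an assumption matching the output above, where each 6-hour sum is labelled at the last hour of its window):

```r
library(xts)

# hypothetical 6-hourly sums labelled at the *last* hour of each window,
# as in the period.apply output above
x <- xts(c(0.3, 1.2, 0.0, 0.2),
         order.by = as.POSIXct(c("2015-04-01 05:00:00", "2015-04-01 11:00:00",
                                 "2015-04-01 17:00:00", "2015-04-01 23:00:00"),
                               tz = "GMT"))

# shift every timestamp back 5 hours so labels fall on 00/06/12/18
index(x) <- index(x) - 5 * 3600
```

For a list of series, the same shift can be wrapped in lapply.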