Get mean every n days of a month - r

Let's say I have this dataframe df
day         time      temperature
2022/01/01  00:00:00  23
2022/01/01  06:00:00  14
2022/01/01  12:00:00  21
2022/01/01  18:00:00  13
2022/02/01  00:00:00  25
2022/02/01  06:00:00  23
2022/02/01  12:00:00  15
2022/02/01  18:00:00  17
and so on until August 31st. I would like the daily mean temperature, but in steps of two measurements: the mean of timepoints 1 and 2 of the same day, then the mean of timepoints 3 and 4. In this case that gives two means per day, one covering 00:00:00 to 06:00:00 and one covering 12:00:00 to 18:00:00.
Actually my df is not that clean and the timestamps aren't exactly every 6 hours; that's why I need the most general code possible.
What can I do?

This should ignore the irregularities in your data and take the average of the temperature recordings.
library(tidyverse)
library(lubridate)
df %>%
  group_by(day,
           group = case_when(
             hms(time) >= hms("00:00:00") &
               hms(time) <= hms("06:00:00") ~ "early",
             TRUE ~ "late"
           )) %>%
  summarise(avg_temperature = mean(temperature, na.rm = TRUE),
            .groups = "drop") %>%
  pivot_wider(names_from = group, values_from = avg_temperature)
# A tibble: 2 × 3
  day        early  late
  <date>     <dbl> <dbl>
1 2022-01-01  18.5    17
2 2022-02-01  24      16
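
If the timestamps are too irregular for the fixed 06:00:00 cutoff, a slightly more general variant (only a sketch, not part of the answer above) is to build a full timestamp and split each day at noon, so every reading before 12:00 goes into the first mean and everything later into the second:
library(tidyverse)
library(lubridate)
df %>%
  mutate(datetime = ymd_hms(paste(day, time)),                      # combine the date and time columns
         group = if_else(hour(datetime) < 12, "early", "late")) %>% # split each day at noon
  group_by(day, group) %>%
  summarise(avg_temperature = mean(temperature, na.rm = TRUE),
            .groups = "drop") %>%
  pivot_wider(names_from = group, values_from = avg_temperature)
The noon cutoff is the only assumption here; move it if your two measurement windows sit elsewhere in the day.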

Related

How to group by a time window in R?

I want to find the highest average departure delay in time windows of length one week in the flights dataset of the nycflights13 package.
I've used
seq(min(flights$time_hour), max(flights$time_hour), by = "week")
to find the dates with a difference of one week, but I don't know how to group by these dates to find the average departure delay of each period. How can I do this using the tidyverse package?
Thank you for your help in advance.
We can use {lubridate} to round each date to the nearest week. Two wrinkles to think about:
To count weeks beginning with Jan 1, you'll need to specify the week_start arg. Otherwise lubridate will count from the previous Sunday, which in this case is 12/30/2012.
You also need to deal with incomplete weeks. In this case, the last week of the year only contains one day. I chose to drop weeks with < 7 days for this demo.
library(tidyverse)
library(lubridate)
library(nycflights13)
data(flights)
# what weekday was the first of the year?
weekdays(min(flights$time_hour))
#> [1] "Tuesday"
# Tuesday = day #2 so we'll pass `2` to `week_start`
flights %>%
  group_by(week = floor_date(time_hour, unit = "week", week_start = 2)) %>%
  filter(n_distinct(day) == 7) %>% # drop incomplete weeks
  summarize(dep_delay_avg = mean(dep_delay, na.rm = TRUE)) %>%
  arrange(desc(dep_delay_avg))
#> # A tibble: 52 x 2
#>    week                dep_delay_avg
#>    <dttm>                      <dbl>
#>  1 2013-06-25 00:00:00          40.6  # week of June 25 had longest delays
#>  2 2013-07-09 00:00:00          24.4
#>  3 2013-12-17 00:00:00          24.0
#>  4 2013-07-23 00:00:00          21.8
#>  5 2013-03-05 00:00:00          21.7
#>  6 2013-04-16 00:00:00          21.6
#>  7 2013-07-16 00:00:00          20.4
#>  8 2013-07-02 00:00:00          20.1
#>  9 2013-12-03 00:00:00          19.9
#> 10 2013-05-21 00:00:00          19.2
#> # ... with 42 more rows
Created on 2022-03-06 by the reprex package (v2.0.1)
Edit: as requested by OP, here is a solution using only core {tidyverse} packages, without {lubridate}:
library(tidyverse)
library(nycflights13)
data(flights)
flights %>%
  group_by(week = as.POSIXlt(time_hour)$yday %/% 7) %>%
  filter(n_distinct(day) == 7) %>%
  summarize(
    week = as.Date(min(time_hour)),
    dep_delay_avg = mean(dep_delay, na.rm = TRUE)
  ) %>%
  arrange(desc(dep_delay_avg))

R Summarise Data based on logical index for multiple criteria including date difference

I have a dataset with the date and time a medication was ordered (RequestedDtm), and I want to get a specific data point (pc_before_rx) that is recorded at an observation time (ObsDtm).
RxUnique %>%
  inner_join(PatientVS) %>%
  group_by(UniqueID, RequestedDtm) %>%
  arrange(UniqueID, ObsDtm) %>%
  summarise(
    pc_before_rx = last(ObsValue[ObsCatalogName == 'pc_recording' &
                                   difftime(RequestedDtm, ObsDtm, units = 'hours') < 24]),
    dt_settings_before_rx = last(ObsDtm[ObsCatalogName == 'pc_recording' &
                                          difftime(RequestedDtm, ObsDtm, units = 'hours') < 24])) %>%
  mutate(time = difftime(dt_settings_before_rx, RequestedDtm, units = 'hours'))
Produces:
  UniqueID RequestedDtm        pc_before_rx dt_before_rx        time
  <chr>    <dttm>                     <dbl> <dttm>              <drtn>
1 5936655  2020-05-02 11:03:38           NA NA                  NA hours
2 5925423  2020-04-23 06:01:43           14 2020-05-01 23:26:00 209.404508 hours
3 5917885  2020-04-12 16:35:53           12 2020-05-08 23:55:00 631.318448 hours
4 5930494  2020-05-01 10:36:54           15 2020-05-05 00:00:00  85.384895 hours
Clearly, the logical index for getting the last result of that day prior to the RequestedDtm is not working. Is there any way to get this filter/logical index to work?
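One likely cause, offered only as a sketch since no fix is shown here: difftime(RequestedDtm, ObsDtm, units = 'hours') < 24 is also TRUE when ObsDtm falls after RequestedDtm, because the difference is then negative; that is why dt_settings_before_rx ends up hundreds of hours after the order. Bounding the window on both sides keeps only observations taken in the 24 hours before the order (same table and column names as in the question):
RxUnique %>%
  inner_join(PatientVS) %>%
  group_by(UniqueID, RequestedDtm) %>%
  arrange(UniqueID, ObsDtm) %>%
  summarise(
    # keep only 'pc_recording' observations at or before the order,
    # and no more than 24 hours before it
    pc_before_rx = last(ObsValue[ObsCatalogName == 'pc_recording' &
                                   ObsDtm <= RequestedDtm &
                                   difftime(RequestedDtm, ObsDtm, units = 'hours') < 24]),
    dt_settings_before_rx = last(ObsDtm[ObsCatalogName == 'pc_recording' &
                                          ObsDtm <= RequestedDtm &
                                          difftime(RequestedDtm, ObsDtm, units = 'hours') < 24]),
    .groups = "drop") %>%
  mutate(time = difftime(dt_settings_before_rx, RequestedDtm, units = 'hours'))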

How to select the earliest date in a month from a Date series in R?

I have a database containing the values of different indices at different frequencies (weekly, monthly, daily). I want to calculate monthly returns by extracting the beginning-of-month value from the time series.
I have tried using a loop to partition the time series month by month and then use min() to get the earliest date in each month, but I am wondering whether there is a more efficient way to speed up the calculation.
library(data.table)
df<-fread("statistic_date index_value funds_number
2013-1-1 1000.000 0
2013-1-4 996.096 21
2013-1-11 1011.141 21
2013-1-18 1057.344 21
2013-1-25 1073.376 21
2013-2-1 1150.479 22
2013-2-8 1150.288 19
2013-2-22 1112.993 18
2013-3-1 1148.826 20
2013-3-8 1093.515 18
2013-3-15 1092.352 17
2013-3-22 1138.346 18
2013-3-29 1107.440 17
2013-4-3 1101.897 17
2013-4-12 1093.344 17")
I expect to filter to get the rows of the earliest date of each month, such as:
2013-1-1 1000.000 0
2013-2-1 1150.479 22
2013-3-1 1148.826 20
2013-4-3 1101.897 17
Your help will be much appreciated!
Using the tidyverse and lubridate packages,
library(lubridate)
library(tidyverse)
df %>%
  mutate(statistic_date = ymd(statistic_date), # convert statistic_date to date format
         month = month(statistic_date),        # create month and year columns
         year = year(statistic_date)) %>%
  group_by(month, year) %>%       # group by month and year
  arrange(statistic_date) %>%     # make sure the df is sorted by date
  filter(row_number() == 1)       # select first row within each group
# A tibble: 4 x 5
# Groups: month, year [4]
# statistic_date index_value funds_number month year
# <date> <dbl> <int> <dbl> <dbl>
#1 2013-01-01 1000 0 1 2013
#2 2013-02-01 1150. 22 2 2013
#3 2013-03-01 1149. 20 3 2013
#4 2013-04-03 1102. 17 4 2013
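On dplyr 1.0 or later, the same idea can be written a bit more compactly with slice_min() (a sketch along the same lines, not part of the answer above):
library(tidyverse)
library(lubridate)
df %>%
  mutate(statistic_date = ymd(statistic_date)) %>%
  group_by(year = year(statistic_date), month = month(statistic_date)) %>%
  slice_min(statistic_date, n = 1) %>%  # keep the row with the earliest date per year/month
  ungroup()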
First make statistic_date a Date:
df$statistic_date <- as.Date(df$statistic_date)
Then you can use nth_day to find the first observed day of every month in statistic_date.
library("datetimeutils")
dates <- nth_day(df$statistic_date, period = "month", n = "first")
## [1] "2013-01-01" "2013-02-01" "2013-03-01" "2013-04-03"
df[statistic_date %in% dates]
## statistic_date index_value funds_number
## 1: 2013-01-01 1000.000 0
## 2: 2013-02-01 1150.479 22
## 3: 2013-03-01 1148.826 20
## 4: 2013-04-03 1101.897 17
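
Since df comes from fread() it is already a data.table, so a data.table-only variant is possible too; a sketch, assuming grouping by calendar year and month is what you want:
library(data.table)
df[, statistic_date := as.Date(statistic_date)]
# keep the row with the earliest date within each year/month group
df[, .SD[which.min(statistic_date)], by = .(year(statistic_date), month(statistic_date))]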

Missing data in R - How to skip grouping of days with missing information?

I have hourly temperature measurements and I wish to calculate the daily average only for complete days (i.e. days with 24 measurements). Incomplete days should then be summarized as NA.
I have grouped the values per year, month and day and called summarize().
Three months of data are missing entirely, which appears as a gap in my ggplot output, and that is the behaviour I want for the rest of the data. The problem is that when I call summarize() to calculate the mean, days with only 1 or 2 measurements also get a mean; only days where all 24 values are missing come out as NA.
Date TempUrb TempRur UHI
1 2011-03-21 22:00:00 10.1 11.67000 -1.570000
2 2011-03-21 23:00:00 9.9 11.67000 -1.770000
3 2011-03-22 00:00:00 10.9 11.11000 -0.210000
4 2011-03-22 01:00:00 10.7 10.56000 0.140000
5 2011-03-22 02:00:00 9.7 10.00000 -0.300000
6 2011-03-22 03:00:00 9.5 10.00000 -0.500000
7 2011-03-22 04:00:00 9.4 8.89000 0.510000
8 2011-03-22 05:00:00 8.4 8.33500 0.065000
9 2011-03-22 06:00:00 8.2 7.50000 0.700000
AvgUHI <- UHI %>%
  group_by(year(Date), add = TRUE) %>%
  group_by(month(Date), add = TRUE) %>%
  group_by(day(Date), add = TRUE, .drop = TRUE) %>%
  summarize(AvgUHI = mean(UHI, na.rm = TRUE))
# A tibble: 2,844 x 4
# Groups: year(Date), month(Date) [95]
`year(Date)` `month(Date)` `day(Date)` AvgUHI
<int> <int> <int> <dbl>
1476 2015 4 4 0.96625000
1477 2015 4 5 -0.11909722
1478 2015 4 6 -0.60416667
1479 2015 4 7 -0.92916667
1480 2015 4 8 NA
1481 2015 4 9 NA
AvgUHI <- AvgUHI %>%
  group_by(`year(Date)`, add = TRUE) %>%
  group_by(`month(Date)`, add = TRUE) %>%
  summarize(AvgUHI = mean(AvgUHI, na.rm = TRUE))
# A tibble: 95 x 3
# Groups: year(Date) [9]
`year(Date)` `month(Date)` AvgUHI
<int> <int> <dbl>
50 2015 4 0.580887346
51 2015 5 0.453815051
52 2015 6 0.008479618
As you can see in the final table, I get an average for 2015-04 even though data are missing in that month (2015-04-08 and 2015-04-09 in this example, shown in the second table).
The same happens when I calculate AvgUHI and hourly data are missing.
I would simply like the AvgUHI for 2015-04 in the last table to be NA.
The following will give a dataframe aggregated by day, where only the complete days, with 24 observations, are not NA. Then you can group by month to have the final dataframe.
UHI %>%
  mutate(Day = as.Date(Date)) %>%
  group_by(Day) %>%
  mutate(n = n(), tmpUHI = if_else(n == 24, UHI, NA_real_)) %>%
  summarize(AvgUHI = mean(tmpUHI)) %>%
  full_join(data.frame(Day = seq(min(.$Day), max(.$Day), by = "day"))) %>%
  arrange(Day) -> AvgUHI
For hours look at Rui Barradas' answer. For months the following code worked:
AvgUHI %>%
  group_by(year(Day), add = TRUE) %>%
  group_by(month(Day), add = TRUE) %>%
  mutate(sum = sum(is.na(AvgUHI)), tmpUHI = if_else(sum <= 10, AvgUHI, NA_real_)) %>%
  summarise(AvgUHI = mean(tmpUHI, na.rm = TRUE)) -> AvgUHI
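A more compact way to write the monthly step (only a sketch, assuming AvgUHI holds the Day and AvgUHI columns produced above, and keeping the same threshold of at most 10 missing days per month):
library(tidyverse)
library(lubridate)
AvgUHI %>%
  group_by(Month = floor_date(Day, "month")) %>%   # one group per calendar month
  summarise(AvgUHI = ifelse(sum(is.na(AvgUHI)) <= 10,
                            mean(AvgUHI, na.rm = TRUE),
                            NA_real_))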

Sum between two weeks interval

Suppose I have a daily rain data.frame like this:
df.meteoro = data.frame(Dates = seq(as.Date("2017/1/19"), as.Date("2018/1/18"), "days"),
rain = rnorm(length(seq(as.Date("2017/1/19"), as.Date("2018/1/18"), "days"))))
I'm trying to sum the accumulated rain over 14-day intervals with this code:
library(tidyverse)
library(lubridate)
df.rain <- df.meteoro %>%
  mutate(TwoWeeks = round_date(Dates, "14 days")) %>%
  group_by(TwoWeeks) %>%
  summarise(sum_rain = sum(rain))
The problem is that it isn't starting on 2017-01-19 but on 2017-01-15 and I was expecting my output dates to be:
"2017-02-02" "2017-02-16" "2017-03-02" "2017-03-16" "2017-03-30" "2017-04-13"
"2017-04-27" "2017-05-11" "2017-05-25" "2017-06-08" "2017-06-22" "2017-07-06" "2017-07-20"
"2017-08-03" "2017-08-17" "2017-08-31" "2017-09-14" "2017-09-28" "2017-10-12" "2017-10-26"
"2017-11-09" "2017-11-23" "2017-12-07" "2017-12-21" "2018-01-04" "2018-01-18"
TL;DR I have a year long daily rain data.frame and want to sum the accumulate rain for the dates above.
Please help.
Use of round_date in the way you have shown it will not give you 14-day periods as you might expect. I have taken a different approach in this solution and generated a sequence of dates between your first and last dates and grouped these into 14-day periods then joined the dates to your observations.
startdate = min(df.meteoro$Dates)
enddate = max(df.meteoro$Dates)

dateseq = data.frame(Dates = seq.Date(startdate, enddate, by = 1)) %>%
  mutate(group = as.numeric(Dates - startdate) %/% 14) %>%
  group_by(group) %>%
  mutate(starts = min(Dates))

df.rain <- df.meteoro %>%
  right_join(dateseq) %>%
  group_by(starts) %>%
  summarise(sum_rain = sum(rain))

head(df.rain)
> head(df.rain)
# A tibble: 6 x 2
  starts     sum_rain
  <date>        <dbl>
1 2017-01-19     6.09
2 2017-02-02     5.55
3 2017-02-16    -3.40
4 2017-03-02     2.55
5 2017-03-16    -0.12
6 2017-03-30     8.95
Using a right-join to the date sequence is to ensure that if there are missing observation days that spanned a complete time period you'd still get that period listed in the result (though in your case you have a complete year of dates anyway).
round_date rounds to the nearest multiple of unit (here, 14 days) since some epoch (probably the Unix epoch of 1970-01-01 00:00:00), which doesn't line up with your purpose.
To get what you want, you can do the following:
df.rain = df.meteoro %>%
  mutate(days_since_start = as.numeric(Dates - as.Date("2017/1/18")),
         TwoWeeks = as.Date("2017/1/18") + 14 * ceiling(days_since_start / 14)) %>%
  group_by(TwoWeeks) %>%
  summarise(sum_rain = sum(rain))
This computes days_since_start as the days since 2017/1/18 and then manually rounds to the next multiple of two weeks.
Assuming you want to round to the closest date among the ones you have specified, I guess the following will work:
library(lubridate) # for ymd() and interval()
library(plyr)      # for ddply()

targetDates <- seq(ymd("2017-02-02"), ymd("2018-01-18"), by = '14 days')
df.meteoro$Dates <- targetDates[sapply(df.meteoro$Dates,
                                       function(x) which.min(abs(interval(targetDates, x))))]
sum_rain <- ddply(df.meteoro, .(Dates), summarize, sum_rain = sum(rain, na.rm = TRUE))
As you can see, not all dates end up with the same number of observations. Date "2017-02-02", for instance, gets all the records from "2017-01-19" to "2017-02-09", which is 22 records; from "2017-02-10" onwards dates are rounded to "2017-02-16", and so on.
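A quick way to see those uneven bucket sizes (just a sketch, run after the snippet above has overwritten Dates):
table(df.meteoro$Dates) # number of daily observations assigned to each target date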
This may be a cheat, but assuming each row/observation is a separate day, why not just group by every 14 rows and sum?
# Assign interval groups, each 14 rows
df.meteoro$my_group <- rep(1:100, each = 14, length.out = nrow(df.meteoro))

# Grab interval names
my_interval_names <- df.meteoro %>%
  select(-rain) %>%
  group_by(my_group) %>%
  slice(1)

# Summarise
df.meteoro %>%
  group_by(my_group) %>%
  summarise(rain = sum(rain)) %>%
  left_join(., my_interval_names)
#> Joining, by = "my_group"
#> # A tibble: 27 x 3
#> my_group rain Dates
#> <int> <dbl> <date>
#> 1 1 3.86 2017-01-19
#> 2 2 -0.581 2017-02-02
#> 3 3 -0.876 2017-02-16
#> 4 4 1.80 2017-03-02
#> 5 5 3.79 2017-03-16
#> 6 6 -3.50 2017-03-30
#> 7 7 5.31 2017-04-13
#> 8 8 2.57 2017-04-27
#> 9 9 -1.33 2017-05-11
#> 10 10 5.41 2017-05-25
#> # ... with 17 more rows
Created on 2018-03-01 by the reprex package (v0.2.0).
