Last 3 month lags in R

The data is:
library(dplyr)
library(zoo)  # for rollapply / rollapplyr

Category <- c(rep("A", 4))
Month <- c(1, 2, 3, 4)
Sales <- c(10, 15, 20, 25)
df <- data.frame(Category, Month, Sales)

df <- df %>%
  filter(Category == 'A') %>%
  group_by(Month) %>%
  summarise(Sales = sum(Sales, na.rm = TRUE)) %>%
  mutate(lag_1 = dplyr::lag(Sales, 1),
         lag_2 = dplyr::lag(Sales, 2),
         lag_3 = dplyr::lag(Sales, 3),
         lag_3_mean = rollapply(Sales, 3, mean, align = 'right', fill = NA))
Present output:

  Month Sales lag_1 lag_2 lag_3 lag_3_mean
1     1    10    NA    NA    NA         NA
2     2    15    10    NA    NA         NA
3     3    20    15    10    NA         15
4     4    25    20    15    10         20
I want lag_3_mean to be the mean of the last 3 months, not including the present month. For example, in Month 4, lag_3_mean = mean(Sales in months 3, 2, 1).
The expected output should be:

  Month Sales lag_1 lag_2 lag_3 lag_3_mean
1     1    10    NA    NA    NA         NA
2     2    15    10    NA    NA         NA
3     3    20    15    10    NA         NA
4     4    25    20    15    10         15
Use a width of list(-(1:3)) to get offsets of -1, -2, -3.
rollapplyr(Sales, list(-(1:3)), mean, fill = NA)
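For reference, here is a sketch of the full pipeline with that change applied (assuming dplyr and zoo are loaded as above):

df <- data.frame(Category = rep("A", 4), Month = 1:4, Sales = c(10, 15, 20, 25))

df %>%
  filter(Category == 'A') %>%
  group_by(Month) %>%
  summarise(Sales = sum(Sales, na.rm = TRUE)) %>%
  mutate(lag_1 = dplyr::lag(Sales, 1),
         lag_2 = dplyr::lag(Sales, 2),
         lag_3 = dplyr::lag(Sales, 3),
         # offsets -1, -2, -3 average the three months before the current one
         lag_3_mean = rollapplyr(Sales, list(-(1:3)), mean, fill = NA))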
Note that this recent question is very similar: Variable frameshift rolling average for multiple variables.

Related

Using dplyr to remove duplicates conditionally

I have a dataset in long format that contains both visit and measure dates for each ID. What I want is to remove the duplicate visit dates for each ID conditionally, namely:
IF visit date - measure date does not equal 0, then I want to include the first visit date.
IF visit date - measure date is a draw, however, then I want to include the latest visit date.
I already wrote part of the code using dplyr. However, I cannot seem to figure out how to code the second part of the condition.
Any help would be very much appreciated.
library(dplyr)

df <- data.frame(ID = c(1, 1),
                 VISIT = c(as.Date("2020-01-01"), as.Date("2020-01-01")),
                 MEASURE = c(as.Date("2020-01-01"), as.Date("2020-01-01")),
                 VALUE = c(5, 10))

df2 <- df %>%
  mutate(DIFF = abs(VISIT - MEASURE)) %>%
  arrange(DIFF) %>%
  group_by(ID, VISIT) %>%  # group by both at once; a second group_by() would drop the first
  # If DIFF is != 0, I want the first value
  # If DIFF is a draw, I want the latest value
  slice(1) %>%
  ungroup()
I am not sure exactly what you are trying to achieve, but maybe this could help. I adjusted the example data frame a bit; you may need to edit the one in your question so that it makes sense. In your example data, DIFF is never unequal to 0.
library(dplyr)

df <- data.frame(
  ID = c(1, 1, 2, 2),
  VISIT = c(
    as.Date("2020-01-01"),
    as.Date("2020-01-01"),
    as.Date("2020-01-01"),
    as.Date("2020-01-02")
  ),
  MEASURE = c(
    as.Date("2020-01-01"),
    as.Date("2020-01-01"),
    as.Date("2020-01-01"),
    as.Date("2020-01-03")
  ),
  VALUE = c(5, 10, 15, 20)
)

df2 <- df %>%
  group_by(ID) %>%
  mutate(
    DIFF = abs(VISIT - MEASURE),
    # get the number of days as a numeric value
    DIFF = stringr::str_extract(DIFF, "\\d+") %>% as.numeric(),
    # your if conditions
    DIFF_filter = case_when(
      DIFF != 0 ~ min(VISIT),
      DIFF == 0 ~ max(VISIT)
    )
  )
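The answer above stops at constructing DIFF_filter. As a rough sketch of the actual row selection, assuming the rule means "within each ID, prefer the row with the smallest date difference and break ties by taking the latest visit" (this interpretation is an assumption, not part of the original answer):

library(dplyr)

df3 <- df %>%
  mutate(DIFF = as.numeric(abs(VISIT - MEASURE))) %>%  # days as a number
  group_by(ID) %>%
  arrange(DIFF, desc(VISIT), .by_group = TRUE) %>%  # smallest DIFF first; ties -> latest VISIT first
  slice(1) %>%  # keep one row per ID
  ungroup()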

sum up values of compressed time series over time

I'll try to describe my problem via the code below. I have a 'compressed' time series in the data frame have: it contains the start and end date of a period, with a value over that time. I want to expand the data as in the data frame want, to ultimately get to the data frame ultimately_want, which sums up the value over time. Maybe I do not need want and can get straight to ultimately_want somehow? Thanks.
library(dplyr)
start_date <- as.Date(c("2004-08-02", "2004-08-03"))
end_date <- as.Date(c("2004-08-04", "2004-08-05"))
value <- c(5, 6)
have <- data.frame(start_date, end_date, value)
have
date <- as.Date(c("2004-08-02", "2004-08-03", "2004-08-04", "2004-08-03", "2004-08-04", "2004-08-05"))
value <- c(5, 5, 5, 6, 6, 6)
want <- data.frame(date, value)
want
ultimately_want <- want %>%
  group_by(date) %>%
  summarise(total = sum(value))
ultimately_want
Here is a data.table approach:
library(data.table)

setDT(have)[, .(value = value, date = seq(start_date, end_date, by = "day")),
            by = 1:nrow(have)][, .(total = sum(value)), by = date][]
# date total
#1: 2004-08-02 5
#2: 2004-08-03 11
#3: 2004-08-04 11
#4: 2004-08-05 6
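Since the question itself is phrased in dplyr, here is an equivalent tidyverse sketch that goes straight from have to ultimately_want (assuming tidyr is available for unnest):

library(dplyr)
library(tidyr)

ultimately_want <- have %>%
  rowwise() %>%
  mutate(date = list(seq(start_date, end_date, by = "day"))) %>%  # expand each period into daily dates
  ungroup() %>%
  unnest(date) %>%
  group_by(date) %>%
  summarise(total = sum(value))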

Time series function in dplyr

I am working with data that stops in a specific year and is NA afterwards, and I need to calculate a lot of variables based on lagged values of other variables. I would like to find a way to calculate the whole series at once, instead of one year at a time whenever one of the variables is NA. I was looking at dplyr, given that I am working with panel data and thus need to group by ID.
I provide the example below:
library(dplyr)

set.seed(1)
df <- data.frame(year = c(seq(2000, 2018), seq(2000, 2018)),
                 id = c(rep(1, 19), rep(2, 19)),
                 varA = floor(rnorm(38) * 100),
                 varB = floor(rnorm(38) * 100),
                 varC = floor(rnorm(38) * 100))

df <- df %>%
  mutate(varA = if_else(year > 2010, as.double(NA), varA),
         varB = if_else(year > 2010, as.double(NA), varB),
         varC = if_else(year > 2010, as.double(NA), varC)) %>%
  group_by(id) %>%
  arrange(year)
What I would like is to find a way to calculate a variable that is equal to varC when it is available, but afterwards equals a formula based on lagged values of varC, varB, and varA. When executing the code below, varRESULT and varD are only calculated for one year, given that the lags are only available for one year:
df <- df %>%
  mutate(varD = lag(varA) * lag(varB),
         varRESULT = if_else(is.na(varC), lag(varC, 1) / lag(varD, 2) * lag(varD, 1), varC))
But I would like to find a way to calculate the whole series immediately (taking into account the panel dimension of the data), instead of having to repeat the code 7 times. Preferably a solution where you can calculate varD separately from varRESULT, given that in the final application I have multiple variables that are linked to each other.
Proposed solution:
Starting with the first NA, the "recursive" lags of varA, varB, and varC are equal to the last observed value of these variables.
Thus, starting from these initial variables, we can create new variables: varA1, varB1, and varC1 where we fill the NAs with the last value, by id:
library(dplyr)
library(tidyr) # for the function `fill`

df <- df %>%
  mutate(varA1 = varA, varB1 = varB, varC1 = varC) %>%
  group_by(id) %>%
  arrange(year) %>%
  fill(varA1, varB1, varC1) # fills NAs downward with the last value
Then, we apply the formula:
df <- df %>%
  mutate(varD = lag(varA1) * lag(varB1),
         varRESULT = if_else(is.na(varC), lag(varC1, 1) / lag(varD, 2) * lag(varD, 1), varC)) %>%
  select(-varA1, -varB1, -varC1)
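As a quick sanity check (a sketch; the chosen id and year cutoff are just illustrative), you can confirm that the whole post-2010 series is now populated in a single pass:

df %>%
  ungroup() %>%
  filter(id == 1, year >= 2009) %>%
  select(year, varC, varD, varRESULT)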

summarize weekly average using daily data in R

How do I add a column price.wk.average to the data such that it equals the average price of the last week, and a column price.mo.average such that it equals the average price of the last month? The price.wk.average should be the same for the entire week.
Dates      Price  Demand  Price.wk.average  Price.mo.average
2010-1-1   x      x
2010-1-2   x      x
......
2015-1-1   x      x
jkl,
try to post reproducible examples; it will make it easier to help you. You can use dplyr:
library(dplyr)
library(lubridate) # for week() and month()

df <- data.frame(date = seq(as.Date("2017-1-1"), by = "day", length.out = 100),
                 price = round(runif(100) * 100 + 50, 0))

df <- df %>%
  group_by(week = week(date)) %>%
  mutate(Price.wk.average = mean(price)) %>%
  ungroup() %>%
  group_by(month = month(date)) %>%
  mutate(Price.mo.average = mean(price))
(Since I don't have enough points to comment.)
I wanted to point out that Eric's answer will not distinguish average weekly price by year. Therefore, if you are interested in unique weeks (week 1 of 2012 != week 1 of 2015), you will need to do extra work to group by unique weeks.
df <- data.frame(Dates = c("2010-1-1", "2010-1-2", "2015-01-3"),
                 Price = c(50, 20, 40))

      Dates Price
1  2010-1-1    50
2  2010-1-2    20
3 2015-01-3    40
Just to keep your data frame tidy, I suggest converting the dates to POSIXct format and then sorting the data frame:
library(lubridate)

df <- df %>%
  mutate(Dates = lubridate::parse_date_time(Dates, "ymd")) %>%
  arrange(Dates)
To group by unique weeks:
df <- df %>%
  group_by(yw = paste(year(Dates), week(Dates)))
Then mutate and ungroup.
To group by unique months:
df <- df %>%
  group_by(ym = paste(year(Dates), month(Dates)))
and mutate and ungroup, as in the sketch below.
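Putting the pieces together, a sketch of the full pipeline (yw and ym are just the helper grouping columns introduced above):

library(dplyr)
library(lubridate)

df <- df %>%
  group_by(yw = paste(year(Dates), week(Dates))) %>%
  mutate(Price.wk.average = mean(Price)) %>%
  ungroup() %>%
  group_by(ym = paste(year(Dates), month(Dates))) %>%
  mutate(Price.mo.average = mean(Price)) %>%
  ungroup()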

In R, is it possible to include the same row in multiple groups, or is there another workaround?

I've measured N2O flux from soil at multiple timepoints in the day (not equally spaced). I'm trying to calculate the total N2O flux from soil for a subset of days by finding the area under the curve for the given day. I know how to do this when using only measures from the given day; however, I'd like to include the last measure of the previous day and the first measure of the following day to improve the estimation of the curve.
Here's an example to give a more concrete idea:
library(MESS)
library(lubridate)
library(dplyr)
# Generate a reproducible example
datetime <- seq(ymd_hm('2015-04-07 11:20'), ymd('2015-04-13'), by = 'hours')
dat <- data.frame(datetime, day = day(datetime),
                  Flux = rnorm(n = length(datetime), mean = 400, sd = 20))
useDate <- data.frame(day = c(7:12), DateGood = c("No", "Yes", "Yes", "No", "Yes", "No"))
dat <- left_join(dat, useDate)
Some days are "bad" (too many missing measures) and some are "Good" (usable). The goal is to filter all measurements (rows) that occurred on a "Good" day as well as the last measurement from the day before and the first measurement on the next day.
out <- dat %>%
  mutate(lagDateGood = lag(DateGood),
         leadDateGood = lead(DateGood)) %>%
  filter(lagDateGood != "No" | leadDateGood != "No")
Now I need to calculate the area under the curve (this is not correct):
out2 <- out %>%
  group_by(day) %>%
  mutate(hourOfday = hour(datetime) + minute(datetime) / 60) %>%
  summarize(auc = auc(x = hourOfday, y = Flux, from = 0, to = 24, type = "spline"))
The trouble is that I don't include the measurements at the end of the previous day and the start of the following day when calculating the AUC. Also, I get an estimate of flux for day 10, which is a "bad" day.
I think the crux of my question has to do with groups. Some measurements need to be in multiple groups (for example the last measurement on day 8 would be used in estimating AUC for day 8 and day 9). Do you have suggestions for how I could form new groups? Or might there be a completely different way to achieve the goal?
For what it's worth, this is what I did. The answer really lies in the question I linked to in the comments. Starting with the data frame out from the question:
# Now I need to calculate the area under the curve for each day
n <- nrow(out)

# For each day, take that day's rows plus one row before and one row after
extract <- function(ix) out[seq(max(1, min(ix) - 1), min(n, max(ix) + 1)), ]
res <- lapply(split(1:n, out$day), extract)

calcTotalFlux <- function(df) {
  if (nrow(df) < 10) { # make sure the day has at least 10 measures
    NA
  } else {
    # row 2 is (typically) the first measurement of the day itself
    day_midnight <- floor_date(df$datetime[2], "day")
    df %>%
      mutate(time = datetime - day_midnight) %>%
      summarize(TotalFlux = auc(x = time, y = Flux, from = 0, to = 1440, type = "spline"))
  }
}

do.call("rbind", lapply(res, calcTotalFlux))
   TotalFlux
7         NA
8   585230.2
9   579017.3
10        NA
11  563689.7
12        NA
Here's another way, more in line with the suggestions of @Alex Brown.
# Another way: duplicate each day's boundary measurements into the
# neighbouring day's group before grouping by day
last <- out %>%
  group_by(day) %>%
  filter(datetime == max(datetime)) %>%
  ungroup() %>%
  mutate(day = day + 1)

first <- out %>%
  group_by(day) %>%
  filter(datetime == min(datetime)) %>%
  ungroup() %>%
  mutate(day = day - 1)

d <- rbind(out, last, first) %>%
  group_by(day) %>%
  arrange(datetime)

n_measures_per_day <- d %>%
  summarize(n = n())

# drop days with too few measurements
d <- left_join(d, n_measures_per_day) %>%
  filter(n > 4)

TotalFluxDF <- d %>%
  mutate(timeAtMidnight = floor_date(datetime[3], "day"),
         time = datetime - timeAtMidnight) %>%
  summarize(auc = auc(x = time, y = Flux, from = 0, to = 1440, type = "spline"))

TotalFluxDF
TotalFluxDF
Source: local data frame [3 x 2]

    day      auc
  (dbl)    (dbl)
1     8 585230.2
2     9 579017.3
3    11 563689.7
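Worth noting: this second approach is also the direct answer to the title question. A row cannot literally belong to two dplyr groups at once, so the workaround is to duplicate the boundary rows (the rbind of out, last, and first) with an adjusted day value, so that each copy lands in the group that needs it.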
