Rolling Regression by Group in R

Hi, I have a panel data set. I'd like to run a rolling-window regression for each firm and extract the coefficient of the independent variable. y is the dependent variable and x is the independent variable. The rolling window is 12 observations. That is,
the first regression uses row 1 to row 12 data,
the second regression uses row 2 to row 13 data, etc.
I'm using rollapply from the zoo package.
Here is a question with the exact same error I encountered:
Rolling by group in data.table R
Unfortunately, that question regresses on only one column, while my regression uses two, so I couldn't adapt the recommended answer from that post.
Here is another post that uses a for loop, but my real data has more than 2 million observations, so a loop is too slow:
rolling regression with dplyr
Can anyone help?
My fake data set is as follows:
library(dplyr)
library(data.table)
library(zoo)
# fake panel: 3 firms x 24 monthly observations
dt <- data.frame(firm = rep(c("AAA", "BBB", "CCC"), each = 24))
a <- c(20100131,20100228,20100331,20100430,20100531,20100630,20100731,20100831,20100930,20101031,20101130,20101231,20110131,20110228,20110331,20110430,20110531,20110630,20110731,20110831,20110930,20111031,20111130,20111231)
dt$time <- rep(a, 3)
dt <- dt %>% group_by(firm) %>% mutate(y = rnorm(24, 10, 5))
dt <- dt %>% group_by(firm) %>% mutate(x = rnorm(24, 5, 2))
dt <- as.data.table(dt)
I tried this code:
# create rolling regression function
roll <- function(Z) {
  t <- lm(y ~ x, data = as.data.frame(Z))
  t$coef[2]
}
dt[, beta := rollapply(dt, width = 12, roll, fill = NA, by.column = FALSE, align = "right"), by = firm]
I am trying to create a column called "beta" that holds the coefficient on x. For each firm, the first beta should appear at the 12th observation.
However, it looks like the regression takes x and y starting from the 1st row of the whole table rather than of each group, and the coefficients seem a bit off compared to the results I got from Excel.
The second method I tried is the dplyr version:
dt %>%
  group_by(firm) %>%
  mutate(beta = rollapply(dt, 12, function(x) coef(lm(y ~ x, data = as.data.frame(x)))[2],
                          by.column = FALSE, fill = NA, align = "right"))
It gives me the same issue: each group gets the same numbers. Again, it looks like the regression takes y and x starting from the 1st row of the whole table for every firm.
Any thoughts? Thank you so much.

Here is a solution that uses the rollRegres package along with data.table. I have also added a modified version of the OP's solution which works (see eddi's comment), using an example with roughly 2 million observations, as the OP mentions.
#####
# setup data
library(rollRegres)
library(data.table)
library(dplyr)
set.seed(33700919)
n_firms <- 83334 # yields roughly the 2M rows the OP mentions
dt <- rep(1:n_firms, each = 24)
dt <- data.frame(firm = dt)
a <-c(20100131,20100228,20100331,20100430,20100531,20100630,20100731,20100831,20100930,20101031,20101130,20101231,20110131,20110228,20110331,20110430,20110531,20110630,20110731,20110831,20110930,20111031,20111130,20111231)
dt$time <- rep(a, n_firms)
dt <- dt %>% group_by(firm) %>% mutate(y=rnorm(24,10,5))
dt <- dt %>% group_by(firm) %>% mutate(x=rnorm(24,5,2))
dt <- as.data.table(dt)
nrow(dt) # roughly the 2M rows that the OP mentions
#R [1] 2000016
#####
# fit models
setkey(dt, firm, time) # make sure data is sorted correctly
start_time <- Sys.time() # to show computation time
dt[, beta := roll_regres.fit(x = cbind(1, .SD[["x"]]), y = .SD[["y"]],
                             width = 12L)$coefs[, 2],
   by = firm]
Sys.time() - start_time
#R Time difference of 6.526595 secs
# gives the same as OP's solution with minor corrections
library(zoo)
start_time <- Sys.time()
roll <- function(Z)
  lm.fit(x = cbind(1, Z[, "x"]), y = Z[, "y"])$coef[2]
dt[, beta_zoo := rollapply(.SD, width = 12, roll, fill = NA,
                           by.column = FALSE, align = "right"),
   by = firm]
Sys.time() - start_time # much slower
#R Time difference of 1.87341 mins
# gives the same
all.equal(dt$beta, dt$beta_zoo)
#R [1] TRUE

Maybe you can try to change the first argument in rollapply: instead of dt, pass just the columns you need, dt[, c("y", "x")]. See if it works.
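For reference, a minimal sketch of what that change could look like in the data.table version, assuming the same dt as above; .SD (restricted via .SDcols) stands in for dt, so rollapply only sees the current firm's two columns:
library(zoo)
library(data.table)
roll <- function(Z) {
  Z <- as.data.frame(Z)  # each window arrives as a matrix with columns y and x
  coef(lm(y ~ x, data = Z))[2]
}
dt[, beta := rollapply(.SD, width = 12, roll, fill = NA,
                       by.column = FALSE, align = "right"),
   by = firm, .SDcols = c("y", "x")]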

Related

How can I rewrite code that applies a function to a subset of rows, based on another vector, in different R ecosystems?

In my problem, I have to apply a function to a subset of individual time series, based on a set of dates extracted from the original data.
So, I have a data.frame with a time series for each individual between 2005-01-01 and 2010-12-31 (test_final_ind_series) and a sample of individual-date pairs (sample_events), ideally extracted from the same data.
With these, in my example I attempt to calculate an average over a subset of the time-series values exp, conditional on the individual and date in sample_events.
I did this in 2 different ways:
1: a simple but effective approach that gets the job done quickly
I simply ask the user to input the data for a specific individual and to define a time lag and a window width (like a rolling average). The function exp_summary then outputs the requested average.
To repeat the operation for each row in sample_events, I decided to nest the individual series by individual ID and then attach the sample of dates. Finally, I run a loop that applies the function to each individual's nested dataframe.
#Sample data
library(dplyr)
library(tidyr)
library(data.table)
set.seed(111)
exp_series <- data.frame(
  id = as.character(rep(1:10000, each = 2191)),
  date = rep(seq(as.Date('2005-01-01'),
                 as.Date('2010-12-31'), by = 'day'), times = 10000),
  exp = rep(rnorm(n = 10000, mean = 10, sd = 5), times = 2191)
)
sample_dates <- data.frame(
Event_id = as.character(replicate(10000,sample(1:10000,size = 1,replace = TRUE))),
Event_date = sample(
seq(as.Date('2005-01-01'),
as.Date('2010-12-31'), by = 'day'),
size =10000,replace = TRUE)
)
#This function, given a dataframe with dates and exposure series (df)
#an event_date
#a lag value
#a width of the window
#Outputs the average for a user-defined time window
exp_summary <- function(df, event_date, lag = 0, width = 0){
  df <- as.data.table(df)
  end <- as.character(as.Date(event_date) - lag)
  # needed in case the time window goes beyond the time limits (earliest date)
  start <- as.character(max(as.Date(end) - width, min(df$date)))
  mean(df[date %between% c(start, end)]$exp)
}
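For instance, a quick sanity check of the function on a single individual's series (the id and event date here are arbitrary):
exp_summary(exp_series[exp_series$id == "1", c("date", "exp")],
            event_date = "2006-06-30", lag = 1, width = 365)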
#Nest dataframes
exp_series_nest <- exp_series %>%
group_by(id) %>%
nest()
#Merge with sample events, including only the necessary dates
full_data<-merge(exp_series_nest,sample_dates, by.x="id", by.y="Event_id",all.x = FALSE, all.y=TRUE)
#Initialize dataframe in advance
summaries1 <- setNames(data.frame(matrix(ncol = 2, nrow = nrow(full_data))), c("id", "mean"))
summaries1$id <- full_data$id
#Loop over each id, each of which holds a nested data.frame
system.time(for (i in 1:nrow(full_data)){
  summaries1$mean[i] <- exp_summary(full_data$data[[i]], full_data$Event_date[i], lag = 1, width = 365)
})
2: using the highly flexible package runner
With the same data, I just need to specify the arguments properly. I have also opened an issue on the GitHub repository about speeding up this code with parallelization.
system.time(summaries2 <- sample_dates %>%
group_by(Event_id) %>%
mutate(
mean = runner(
x = exp_series[exp_series$id == Event_id[1],],
k = "365 days",
lag = "1 days",
idx =exp_series$date[exp_series$id == Event_id[1]],
at = Event_date,
f = function(x) {mean(x$exp)},
na_pad=FALSE
)
)
)
They give the same results up to the second decimal, but method 1 is much faster than method 2, and you can see the difference with very large datasets.
My question is: for method 1, how can I write the last loop in a more concise way within the data.table and/or tidyverse ecosystems? I really struggle to make nested lists and "normal" columns work together in the same dataframe.
Also, if you have any other recommendations, I am open to hearing them! I am here more out of curiosity than need, as method 1 already solves my problem acceptably.
With data.table, you could join exp_series with the date range you want in sample_dates and calculate the mean with by = .EACHI:
library(data.table)
setDT(exp_series)
setDT(sample_dates)
lag <- 1
width <- 365
# Define range
sample_dates[, ':='(begin = Event_date - width - lag, end = Event_date - lag)]
# Calculate mean by .EACHI
summariesDT <- exp_series[sample_dates, .(id, mean = mean(exp)),
                          on = .(id = Event_id, date >= begin, date <= end),
                          by = .EACHI][, .(id, mean)]
Note that this returns the same results as summaries1 only for Event_ids without duplicates in sample_dates.
The results differ in the case of duplicates, for instance Event_id == 1002:
sample_dates[Event_id==1002]
Event_id Event_date begin end
<char> <Date> <Date> <Date>
1: 1002 2010-08-17 2009-08-16 2010-08-16
2: 1002 2010-06-23 2009-06-22 2010-06-22
If you don't have duplicates in your real data, this shouldn't be a problem.
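For the tidyverse half of the question, the loop in method 1 maps directly onto purrr::map2_dbl over the nested data column and the event dates. A minimal sketch, reusing exp_summary and full_data from the question:
library(dplyr)
library(purrr)
summaries1_tidy <- full_data %>%
  mutate(mean = map2_dbl(data, Event_date, exp_summary, lag = 1, width = 365)) %>%
  select(id, mean)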

Simulating samples from Gamma distribution in R

I'm having trouble with a programming assignment.
From the previous questions, I have a list of 49 elements.
Each element is sample data of size 10000. For the last question, I have to calculate the mean of the first n sample values, with n ranging from one to ten thousand, within each dataset.
I then have to plot these running averages for each dataset.
I've been trying to create lists/vectors of the running averages but it's not working out.
Is there anything I can do?
Function for running average:
run_avg <- function(x, n_max){
  a <- c(1:n_max)
  r_avg <- sapply(a, FUN = function(y) mean(x[1:y]))
  return(r_avg)
}
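A quick illustration on a toy vector:
run_avg(c(2, 4, 6, 8, 10), n_max = 5)
# [1] 2 3 4 5 6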
In your case, n_max should equal 10000.
This function creates the running averages for one dataset.
It then has to be applied to all datasets; you could use lapply for this if your datasets are stored in a list. Another approach would be a loop.
Edit: I see that your datasets are in a list, so simply use:
lapply(my_list, run_avg, n_max = 10000)
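As a side note, if speed matters at size 10000, the same prefix means can be computed in one vectorized pass with cumsum; a small sketch, assuming the samples contain no NAs:
run_avg_fast <- function(x, n_max){
  # the k-th running average is the k-th cumulative sum divided by k
  cumsum(x[1:n_max]) / seq_len(n_max)
}
lapply(my_list, run_avg_fast, n_max = 10000)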
The running averages can be computed with the following.
res <- lapply(x, function(y){
  sapply(seq_along(y), function(k) mean(y[1:k]))
})
Then, to get the resulting list into a format better suited for plotting with ggplot2, convert it to a data frame first, with the row names as a column.
df_res <- do.call(cbind.data.frame, res)
names(df_res) <- paste("Mean", seq_len(ncol(df_res)), sep = ".")
df_res <- cbind(df_res, id = as.integer(row.names(df_res)))
Now reshape from wide to long and plot.
library(tidyverse)
df_res %>%
pivot_longer(
cols = starts_with("Mean"),
names_to = "Vector",
values_to = "Mean"
) %>%
ggplot(aes(id, Mean, colour = Vector)) +
geom_point() +
geom_line()
Test data.
set.seed(1234)
list_size <- 4 # 49 in the question
samp_size <- 20 # 10000 in the question
x <- lapply(seq.int(list_size), function(i) rgamma(samp_size, shape = i))

Rolling correlation with 'grouped by' - Error: incorrect number of dimensions

I'm trying to calculate rolling correlations with a five-year window based on daily stock data. My dataframe test consists of 20 columns, with "logRet3" located in column #17 and "logMarRet3" in column #18. I want to calculate the correlation of these two return measures.
What makes it difficult is the fact that I want the rolling correlation to be grouped by my share indicator "PERMNO" in column #1. By that I mean that the rolling correlation "restarts" whenever the time-series data of a particular stock ends.
Through research I came up with the following code, using the dplyr, zoo and magrittr packages:
test <- test %>%
group_by(PERMNO) %>%
mutate(CorSecMar = zoo::rollapply(test, width = 1255, function(x) cor(x[,logRet3], x[,logMarRet3]), fill = NA, align = "right"))
However, when I run this code, I get the following error:
Error in x[,logMarRet3]: Incorrect number of dimensions
Being a newbie, I tried adjusting the code by deleting the comma:
test <- test %>%
group_by(PERMNO) %>%
mutate(CorSecMar = zoo::rollapply(test, width = 1255, function(x) cor(x[logRet3], x[logMarRet3]), fill = NA, align = "right"))
resulting in the following error (translated to English):
Error in x[logMarRet3]: Only zeros are allowed to be mixed with negative indices
Any help on how to fix these errors or alternative ways of calculating the rolling correlation by group would be greatly appreciated.
EDIT: Thanks to G. Grothendieck for pointing out some flaws in my question. I'm referring to his answer for reproducible input and will keep that in mind for further posts.
There are several problems:
rollapply applies to each column separately unless by.column = FALSE is used.
using test within group_by will not cause test to be subsetted. It will refer to the entire dataset. Use individual column names instead.
the column names in the code in the question must have quotes around them; otherwise, R looks for variables of those names whose values are the column names.
when posting to SO you need to reduce your problem to a complete reproducible example and post that. I have done it this time for you in the Note at the end.
With reference to the Note, use this code:
library(dplyr)
library(zoo)
mycor <- function(x) cor(x[, 1], x[, 2])
DF %>%
group_by(stock) %>%
mutate(Cor = rollapplyr(cbind(a, b), 4, mycor, by.column = FALSE, fill = NA)) %>%
ungroup
or this code, which only uses zoo (mycor is from above). Here ave passes the row numbers of each stock to roll, which uses them to subset DF:
library(zoo)
n <- nrow(DF)
roll <- function(i) rollapplyr(DF[i, c("a", "b")], 4, mycor, by.column = FALSE, fill = NA)
transform(DF, Cor = ave(1:n, stock, FUN = roll))
Note
The input in reproducible form is:
DF <- data.frame(stock = rep(LETTERS[1:2], each = 6), a = 1:6, b = (1:6)^3)

Moving averages

I have daily data for over 100 years that looks like
01.01.1856 12
02.01.1956 9
03.01.1956 -12
04.01.1956 7
etc.
I wish to calculate the 30-year running average for this huge dataset. I tried converting the data into a time series but still can't figure out how to go about it. I would prefer a simple method that works with a data.frame.
I guess the preparation is the difficult part, considering leap years.
So I will show one way of preparing the data before using the already mentioned runmean function from the caTools package.
First we create example data (not necessary for you, but helpful for understanding).
Second, I divide the data frame into a list of data frames, one for each year, and take the mean value for each year. These two steps could be done at once (a combined one-liner is shown after the code below), but I think the separated way is easier to understand and to adapt.
#example data
Days <- seq(as.Date("1958-01-01"), as.Date("2015-12-31"), by="days")
Values <- runif(length(Days))
DF <- data.frame(Days = Days, Values = Values)
#start of script
Years <- format(DF$Days, "%Y")
UniqueYears <- unique(format(DF$Days, "%Y"))
#Create subset of years:
#for every unique year, find which days fall within that year
YearlySubset <- lapply(UniqueYears, function(x){
  DF[which(Years == x), ]
})
YearlyMeanValues <- sapply(YearlySubset, function(x){
  mean(x$Values)
})
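As mentioned above, the two steps can also be collapsed into a single call; a base-R equivalent using tapply (with Years as defined above):
# split Values by year and average within each year in one call
YearlyMeanValues <- tapply(DF$Values, Years, mean)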
Now the running mean is applied:
#install.packages("caTools")
require(caTools)
RM <- data.frame(Years = UniqueYears, RunningMean30y = runmean(YearlyMeanValues, 30))
In case I didn't get you right at first and you want a running mean for every day over about 30 years, you could of course simply do:
RM <- cbind(DF, runmean(DF$Values, 365 * 30))
And considering your problems creating a timeseries:
DF[ , 1] <- as.Date(DF[ , 1], format = "%d.%m.%Y")
I would also suggest exploring RcppRoll in combination with dplyr, which provides a fairly convenient solution for calculating rolling averages, sums, etc.
Code
# Libs
library(RcppRoll) # 'roll'-ing functions for R vectors and matrices.
library(dplyr) # data grammar (convenience)
library(zoo) # time series (convenience)
library(magrittr) # compound assignment pipe-operator (convenience)
# Data
data("UKgas")
## Convert to data frame to make example better
UKgas <- data.frame(Y = as.matrix(UKgas), date = time(UKgas))
# Calculations
UKgas %<>%
# To make example more illustrative I converted the data to a quarterly format
mutate(date = as.yearqtr(date)) %>%
arrange(date) %>%
# The window size can be changed to reflect any period
mutate(roll_mean = roll_mean(Y, n = 4, align = "right", fill = NA))
Notes
As the data provided in the example was fairly modest, I used the quarterly UK gas consumption data available via the data function in the utils package.

Dividing columns by group (Grouping in data frame)

I would like to calculate relative response values by dividing each response/column by its group mean.
I have managed to produce an exhaustive (and thus unsatisfying) method. My data set is very large and contains multiple groups and responses.
###############
# example
# used packages
require(plyr)
# sample data
group <- c(rep("alpha", 3), rep("beta", 3), rep("gamma", 3))
a <- rnorm(9, 10,1) #some random data as response
b <- rnorm(9, 10,1)
df <- data.frame(group, a, b)
# my approach
# means for each group and response
df.means <- ddply(df, "group", colwise(mean))
# clunky method
df$rel.a[df$group=="alpha"] <-
df$a[df$group=="alpha"]/df.means$a[df.means$group=="alpha"]
df$rel.a[df$group=="beta"] <-
df$a[df$group=="beta"]/df.means$a[df.means$group=="beta"]
# ... etc
df$rel.b[df$group=="gamma"] <-
df$b[df$group=="gamma"]/df.means$b[df.means$group=="gamma"]
#desired outcome (well, perhaps with no missing values)
df
###############
I have been using R for a while now, but I still struggle with trivial data-handling procedures. I believe I must be missing something. How can I better address these groups?
It's quite easy with the dplyr package, the next iteration of plyr for data frames:
library(dplyr)
df %>% group_by(group) %>% mutate_each(funs(./mean(.)))
The . represents the data in each column (by group). mutate_each modifies each column except the grouping variables; inside the funs argument you specify which functions should be applied to each column.
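Note that mutate_each has since been superseded in dplyr; in current versions the same operation is written with across(), which likewise skips the grouping variables:
library(dplyr)
df %>% group_by(group) %>% mutate(across(everything(), ~ .x / mean(.x)))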
With the data.table package you can do the whole thing quickly and easily in one line (without creating df.means at all):
library(data.table)
setDT(df)[, paste0("rel.", names(df)[-1]) :=
            lapply(.SD, function(x) x / mean(x)),
          by = group]
This runs over all the columns of df (except group), by group, and divides each value by the group mean.
Edit: If you want to overwrite the original columns (as in the dplyr answer), you can do this with a small modification (remove the paste0 part):
setDT(df)[, names(df)[-1] := lapply(.SD, function(x) x/mean(x)), group]
If I understand you correctly, you can also do this easily in dplyr. Given the above data,
library(dplyr)
df %>% group_by(group) %>% mutate(aresp = a/ mean(a), bresp= b/mean(b))
returns:
group a b aresp bresp
1 alpha 10.052847 8.076405 1.0132828 0.8288214
2 alpha 10.002243 11.447665 1.0081822 1.1747888
3 alpha 9.708111 9.709265 0.9785350 0.9963898
4 beta 10.732693 7.483065 0.9751125 0.8202278
5 beta 11.719656 11.270522 1.0647824 1.2353754
6 beta 10.567513 8.615878 0.9601051 0.9443968
7 gamma 10.221040 11.181763 1.0035630 0.9723315
8 gamma 10.302611 11.286443 1.0115721 0.9814341
9 gamma 10.030605 12.031643 0.9848649 1.0462344
