Extracting ARIMA coefficients for use in custom function - r

I want to run a customized function based on an ARIMA model. For each firm, the function takes the ma3 coefficient from an ARIMA(2, 0, 3) model fitted to one year of daily data and subtracts it from 2. I have five years of daily data for five firms, so each firm should get five year-wise values. My code:
Stressy = function(x) 2 - summary(arima(x, order=c(2,0,3)))$coefficients[1, "ma3"]
Funny = aggregate(cbind(QQ) ~ Year + Firm, df, FUN = Stressy)
Running my code gives the following error:
Error in summary(arima(x, order = c(2, 0, 3)))$coefficients : $ operator is invalid for atomic vectors
I know the result can be computed manually, but my data set is large enough to make that impractical. Please suggest an edit to fix this.

There are two ways you could get the ma3 coefficient:
Stressy <- function(x) 2-coef(arima(x, order=c(2,0,3)))["ma3"]
or
Stressy <- function(x) 2-arima(x, order=c(2,0,3))$coef["ma3"]
Your original custom function didn't work because summary(arima_object) gives you a table, to which you cannot apply the $ operator:
x <- arima(df, c(2,0,3))
class(summary(x))
[1] "summaryDefault" "table"

Related

Why does just one (of 8) numeric predictor variables return NA when I run lm()?

I'm trying to build a linear regression model using eight independent variables, but when I run lm() one variable (the one I expect to be my best predictor!) keeps returning NA. I'm still new to R and cannot find a solution.
Here are my independent variables:
TEMPERATURE
HUMIDITY
WIND_SPEED
VISIBILITY
DEW_POINT_TEMPERATURE
SOLAR_RADIATION
RAINFALL
SNOWFALL
My df is training_set. I'm not sure whether this matters, but training_set is 75% of my original df and testing_set is 25%, created as follows:
set.seed(1234)
split_bike_sharing <- sample(c(rep(0, round(0.75 * nrow(bike_sharing_df))), rep(1, round(0.25 * nrow(bike_sharing_df)))))
This gave me table(split_bike_sharing):
split_bike_sharing
   0    1
6349 2116
And then I did:
training_set <- bike_sharing_df[split_bike_sharing == 0, ]
testing_set <- bike_sharing_df[split_bike_sharing == 1, ]
To create the model I run the code:
lm_model_weather=lm(RENTED_BIKE_COUNT ~ TEMPERATURE + HUMIDITY + WIND_SPEED + VISIBILITY + DEW_POINT_TEMPERATURE +
SOLAR_RADIATION + RAINFALL + SNOWFALL, data = training_set)
However, the resulting model returns RAINFALL as NA.
My first thought was to check RAINFALL's datatype, which is numeric with range 0-1 (because at an earlier step I performed min-max normalization). But SNOWFALL is also numeric, and I've done nothing (that I know of!) to the one but not the other. My second thought was to confirm that RAINFALL contains enough values to work, and summary(training_set$RAINFALL) suggests that is not an issue.
So, how do I correct the NAs in RAINFALL? Truly I will be most grateful for your guidance to a solution.
UPDATE 10 MARCH 2022
I've now checked for collinearity:
X <- model.matrix(RENTED_BIKE_COUNT ~ ., data = training_set)
X2 <- caret::findLinearCombos(X)
print(X2)
The output flagged a linear combination involving columns 8, 13, and 38; I believe this means those columns are jointly multicollinear:
[8] is RAINFALL
[13] is SEASONS_WINTER
[38] is HOUR_23
Question: if I want to preserve RAINFALL as a predictor variable (viz., return proper values rather than NAs when I run lm()), what do I do? Remove columns [13] and [38] from the dataset?
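If it helps, here is a hedged sketch of that removal, reusing X and X2 from the update above. caret::findLinearCombos() returns $remove, the minimal set of column indices whose removal restores full rank, and it typically keeps the earliest column of each combination (here, RAINFALL); if $remove ever flags RAINFALL itself, drop the SEASONS_WINTER and HOUR_23 columns by hand instead.
# drop the flagged columns and refit; X already carries the intercept column
X_ok <- X[, -X2$remove]
fit <- lm(training_set$RENTED_BIKE_COUNT ~ X_ok - 1)
summary(fit)  # RAINFALL should now receive an estimate instead of NA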

Fama Macbeth Regression in R pmg

In the past few days I have been trying to work out how to run Fama Macbeth regressions in R. The usual advice is to use pmg() from the plm package, but every attempt returns an error saying I have an insufficient number of time periods.
My Dataset consists of 2828419 observations with 13 columns of variables of which I am looking to do multiple cross-sectional regressions.
My firms are identified by seriesid, I have a variable date, and I want to do the following Fama Macbeth regressions:
totret ~ size
totret ~ momentum
totret ~ reversal
totret ~ volatility
totret ~ value + size
totret ~ value + size + momentum
totret ~ value + size + momentum + reversal + volatility
I have been using this command:
fpmg <- pmg(totret ~ momentum, Data, index = c("date", "seriesid"))
Which returns: Error in pmg(totret ~ mom, Dataset, index = c("seriesid", "datem")) : Insufficient number of time periods
I have tried it with my dataset as a data.table, data.frame and pdata.frame. Switching the index order does not help either.
My data contains NAs as well.
Can anyone fix this, or suggest a different way for me to do Fama Macbeth?
This is almost certainly due to having NAs in the variables in your formula. The error message is not very helpful - it is probably not a case of "too few time periods to estimate" and very likely a case of "there are firm/unit IDs that are not represented across all time periods" due to missing data being dropped.
You have two options: impute the missing data, or drop observations with missing data (the latter is a quick way to confirm the model runs without missing points before deciding on an approach that is valid for estimation).
If the missingness in your data is truly random, you might be okay just dropping the observations with missingness. Otherwise you should probably impute. A common strategy here is to impute multiple times (at least 5), estimate on each of the resulting data sets, and average the estimates together. Amelia and mice are very strong imputation packages. I like Amelia because with one call you can impute n times to get that many completed data sets, and it's easy to pass in a set of variables not to impute (e.g., the id variable or time period) with the idvars parameter.
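A hedged sketch of the Amelia route, assuming your data frame is Data with identifier columns seriesid and date, as in the question:
library(Amelia)
# impute m = 5 complete data sets; idvars keeps the identifiers out of the
# imputation model but carries them through to the output
imp <- amelia(Data, m = 5, idvars = c("seriesid", "date"))
# imp$imputations is a list of 5 completed data frames: estimate on each one
# and average the coefficient estimates across the five fits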
EDIT: I dug into the source code to see where the error was triggered and here is what the issue is - again likely caused by missing data, but it does interact with your degrees of freedom:
...
# part of the code where error is triggered below, here is context:
# X = matrix of the RHS of your model including intercept, so X[,1] is all 1s
# k = number of coefficients used determined by length(coef(plm.model))
# ind = vector of ID values
# so t here is the minimum value from a count of occurrences for each unique ID
t <- min(tapply(X[,1], ind, length))
# then if the minimum number of times a single ID appears across time is
# less than the number of coefficients + 1, you do not have enough time
# points (for that ID/those IDs) to estimate.
if (t < (k + 1))
stop("Insufficient number of time periods")
That is what is triggering your error. So imputation is definitely a solution, but there might be just a single offending ID in your data; importantly, once this condition is satisfied, your model will run just fine even with missing data.
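Given that check, you can hunt for the offending IDs directly before calling pmg(). A hedged sketch for totret ~ momentum, where k = 2 (intercept plus slope), so any ID with fewer than k + 1 = 3 usable observations trips the stop(); names are the ones from the question:
library(plm)
# keep only rows where the model variables are present
cc <- Data[complete.cases(Data[, c("totret", "momentum")]), ]
# count usable observations per ID and drop IDs below the k + 1 threshold
# (depending on which index plm treats as the group, you may need the same
# count per date instead)
n_obs <- table(cc$seriesid)
cc_ok <- cc[cc$seriesid %in% names(n_obs)[n_obs >= 3], ]
fpmg <- pmg(totret ~ momentum, cc_ok, index = c("date", "seriesid"))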
I recently got the Fama Macbeth regression working in R.
Starting from a data.table with all of the characteristics in the rows, the following works and lets you run the regressions either weighted or equally weighted (remove ", weights = marketcap" for equal weights). totret is a total return variable and logmarket is the logarithm of market capitalization.
library(dplyr)
logmarket <- df %>%
  group_by(date) %>%
  summarise(constant = summary(lm(totret ~ logmarket, weights = marketcap))$coefficients[1],
            rsquared = summary(lm(totret ~ logmarket, weights = marketcap))$r.squared,
            beta = summary(lm(totret ~ logmarket, weights = marketcap))$coefficients[2])
You obtain a data frame with monthly alphas (constant), betas (beta), and R-squared values (rsquared).
To retrieve coefficients with t-statistics in a dataframe:
library(lmtest)
Summarystatistics <- as.data.frame(matrix(data = NA, nrow = 6, ncol = 1))
names(Summarystatistics) <- "logmarket"
row.names(Summarystatistics) <- c("constant", "t-stat", "beta", "tstat", "R^2", "observations")
Summarystatistics[1,1] <- mean(logmarket$constant)
Summarystatistics[2,1] <- coeftest(lm(logmarket$constant ~ 1))[1,3]
Summarystatistics[3,1] <- mean(logmarket$beta)
Summarystatistics[4,1] <- coeftest(lm(logmarket$beta ~ 1))[1,3]
Summarystatistics[5,1] <- mean(logmarket$rsquared)
Summarystatistics[6,1] <- nrow(subset(df, !is.na(logmarket)))
Some values of "seriesid" have only one observation, which is why pmg() throws the error. If you do something like this (with your own variable names), the error goes away:
try2 <- try2 %>%
  group_by(cusip) %>%
  mutate(flag = if (length(cusip) == 1) 1 else 0) %>%
  ungroup() %>%
  filter(flag == 0)
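The same filter can be written more compactly with dplyr's n():
library(dplyr)
# equivalent hedged one-liner: keep only cusips observed more than once
try2 <- try2 %>% group_by(cusip) %>% filter(n() > 1) %>% ungroup()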

TS object; rollapply(), lm() regression:

First post here. I'm trying to run a rolling linear regression on a time series; the code is as follows:
MX_data <- merge.zoo(as.zoo(MX_tr),as.zoo(MX_RER_2))
MX_tr and MX_RER_2 are ts objects with 37 rows. The value for width below, w, is a vector defining the lengths of the windows over which the regressions should be calculated:
w<- c(4,4,4,4,4,4,4,4,5)
rollingbeta <- rollapply(MX_data,
                         width = w,
                         FUN = function(Z) coef(lm(formula = MX_tr ~ MX_RER_2,
                                                   data = as.data.frame(MX_data))),
                         by.column = FALSE, align = "right")
The result is a matrix-like object with two columns, one for the intercept and one for the beta term. The problem is that every value in each column is the same: -1.14 for the beta term and 0.0866 for the intercept. These are the values I get from running the regression on the whole series.
Help is much appreciated, thanks.
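For what it's worth, the likely culprit is that the anonymous function regresses on the full MX_data instead of the window Z that rollapply() hands it, which is exactly why every row repeats the whole-sample coefficients. A hedged sketch of the fix (the explicit colnames() call is an assumption, made so the formula can find the columns inside each window):
library(zoo)
colnames(MX_data) <- c("MX_tr", "MX_RER_2")
rollingbeta <- rollapply(MX_data,
                         # a single window length; zoo also accepts a vector of
                         # widths, but it needs one entry per observation, not 9
                         width = 4,
                         # fit the regression on the window Z, not on MX_data
                         FUN = function(Z) coef(lm(MX_tr ~ MX_RER_2, data = as.data.frame(Z))),
                         by.column = FALSE, align = "right")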

R: multicollinearity issues using glib(), Bayesian Model Averaging (BMA-package)

I am experiencing difficulties estimating a BMA-model via glib(), due to multicollinearity issues, even though I have clearly specified which columns to use. Please find the details below.
The data I'll be using for the estimation via Bayesian Model Averaging:
Cij <- c(357848,766940,610542,482940,527326,574398,146342,139950,227229,67948,
352118,884021,933894,1183289,445745,320996,527804,266172,425046,
290507,1001799,926219,1016654,750816,146923,495992,280405,
310608,1108250,776189,1562400,272482,352053,206286,
443160,693190,991983,769488,504851,470639,
396132,937085,847498,805037,705960,
440832,847631,1131398,1063269,
359480,1061648,1443370,
376686,986608,
344014)
n <- length(Cij);
TT <- trunc(sqrt(2*n))
i <- rep(1:TT,TT:1); #row numbers: year of origin
j <- sequence(TT:1) #col numbers: year of development
k <- i+j-1 #diagonal numbers: year of payment
#Since k=i+j-1, we have to leave out another dummy in order to avoid multicollinearity
k <- ifelse(k == 2, 1, k)
I want to evaluate the effect of i and j both as levels and as factors, though of course not in the same model. Since i and j can each enter as a factor, as a level, or not at all, and k can enter as a level or be excluded, there are 18 (3 x 3 x 2) models in total. This brings us to the following data frame:
X <- data.frame(Cij,i.factor=as.factor(i),j.factor=as.factor(j),k,i,j)
X <- model.matrix(Cij ~ -1 + i.factor + j.factor + k + i + j,X)
X <- as.data.frame(X[,-1])
Next, via the following declaration I specify which variables to consider in each of the 18 models. As far as I can tell, no linear dependence exists within any of these specifications.
model.set <- rbind(
c(rep(0,9),rep(0,9),0,0,0),
c(rep(0,9),rep(0,9),0,1,0),
c(rep(0,9),rep(0,9),0,0,1),
c(rep(0,9),rep(0,9),1,0,0),
c(rep(1,9),rep(0,9),0,0,0),
c(rep(0,9),rep(1,9),0,0,0),
c(rep(0,9),rep(0,9),0,1,1),
c(rep(0,9),rep(0,9),1,1,0),
c(rep(0,9),rep(1,9),0,1,0),
c(rep(0,9),rep(0,9),1,0,1),
c(rep(1,9),rep(0,9),0,0,1),
c(rep(1,9),rep(0,9),1,0,0),
c(rep(0,9),rep(1,9),1,0,0),
c(rep(1,9),rep(1,9),0,0,0),
c(rep(0,9),rep(0,9),1,1,1),
c(rep(0,9),rep(1,9),1,1,0),
c(rep(1,9),rep(0,9),1,0,1),
c(rep(1,9),rep(1,9),1,0,0))
Then I call the glib() function, telling it to select the specified columns from X according to model.set.
library(BMA)
model.glib <- glib(X,Cij,error="poisson", link="log",models=model.set)
which results in the error
Error in glim(x, y, n, error = error, link = link, scale = scale) : X matrix is not full rank
The function first checks whether the full matrix has full column rank, before it evaluates which columns to select from X via model.set. How do I circumvent this, or is there another way to include all 18 models in the glib() call?
Thank you in advance.
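One hedged workaround sketch, since glib() rank-checks the full X before it ever consults model.set: fit the 18 models directly with glm() and compare them via the BIC approximation that underlies BMA. This sidesteps the full-rank check while keeping exactly the models you specified:
# fit one row of model.set as a Poisson log-link GLM
fit_one <- function(m) {
  cols <- which(m == 1)
  if (length(cols) == 0)
    return(glm(Cij ~ 1, family = poisson(link = "log")))
  dat <- data.frame(Cij = Cij, X[, cols, drop = FALSE])
  glm(Cij ~ ., data = dat, family = poisson(link = "log"))
}
fits <- lapply(seq_len(nrow(model.set)), function(r) fit_one(model.set[r, ]))
bics <- sapply(fits, BIC)
# turn BICs into approximate posterior model probabilities
post <- exp(-0.5 * (bics - min(bics)))
post <- post / sum(post)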

Calculating correlation between residuals of linear regression with NAs and independent variable in R

I am trying to calculate the correlation coefficient between the residuals of a linear regression and the independent variable p.
Basically, the linear regression estimates the current sales as a function of the current price p and the past price p1.
The vector of current prices mydf$p has length 8, but the vector of residuals has length 7, because one observation was deleted due to the NA value of p1.
# lag vector and pad with NAs
# Source: http://heuristically.wordpress.com/2012/10/29/lag-function-for-data-frames/
lagpad <- function(x, k) {
  if (!is.vector(x))
    stop('x must be a vector')
  if (!is.numeric(x))
    stop('x must be numeric')
  if (!is.numeric(k))
    stop('k must be numeric')
  if (1 != length(k))
    stop('k must be a single number')
  c(rep(NA, k), x)[1:length(x)]
}
mydf <- data.frame(p = c(10, 8, 10, 9, 10, 9, 10, 8))
mydf$p1 <- lagpad(mydf$p,1)
mydf$sales <- with(mydf, 200 - 15 * p + 5 * p1) + rnorm(nrow(mydf), 0,0.13)
model <- lm(data = mydf, formula = 'sales ~ p + p1')
print(summary(model))
print(cor(residuals(model), mydf$p))
# Error in cor(residuals(model), mydf$p) : incompatible dimensions
In this particular case, it is easy to use mydf$p[2:8] instead of mydf$p.
However, in general, there may be multiple rows at random locations that get deleted because of NAs.
How do I access the independent variables that were actually used in the regression after removing the rows containing NA?
One of my attempts was based on the R documentation for lm. I tried to access the "x" matrix through model[['x']] but that did not work.
You can get the actual data used to fit the model from model$model, and from there the p column:
cor(residuals(model), model$model$p)
Alternatively, is.na(mydf$p1) will tell you which rows in mydf have an NA in column p1:
cor(residuals(model), mydf$p[!is.na(mydf$p1)])
In general, is.na(x) tells us whether elements in x are NA or not:
> is.na(c(1,2,NA,4,NA,6))
[1] FALSE FALSE TRUE FALSE TRUE FALSE
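When several columns may contain NAs (the general case you describe), complete.cases() does the same job across all of them at once; a hedged sketch using the model variables from your example:
# keep rows where every model variable is non-missing
keep <- complete.cases(mydf[, c("sales", "p", "p1")])
cor(residuals(model), mydf$p[keep])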
model.matrix(model) seems to be what you are looking for. You can then select the variables you want with [ ] and the column number or name.
The x matrix is only stored if you specify x = TRUE in your call to lm(). Then model$x will give you its value (this is more idiomatic than model[['x']]).
lm handles missing values by completely omitting any observation where a value is missing. Maybe you want to do something like:
cor(residuals(model), mydf$p[!is.na(mydf$p1)])
(subsetting on p1, the column that actually contains the NA)?
