arima estimate validation through arima.sim - r

Out of curiosity, I am trying to compare the time series fed into an ARMA model with a series reconstructed after an ARMA estimate is obtained. These are the steps I have in mind:
Construct a simulated time series:
arma.sim <- arima.sim(model=list(ar=c(0.9),ma=c(0.2)),n = 100)
Estimate the model from arma.sim, assuming we know it is an ARIMA(1,0,1) model:
arma.est1 <- arima(arma.sim, order=c(1,0,1))
Say we get arma.est1 in this form, which is close to the original (0.9, 0, 0.2):
Coefficients:
         ar1     ma1  intercept
      0.9115  0.0104    -0.4486
s.e.  0.0456  0.1270     1.1396
sigma^2 estimated as 1.15:  log likelihood = -149.79,  aic = 307.57
If I try to reconstruct another time series from arma.est1, how do I incorporate the intercept or the s.e. into arima.sim? Something like this doesn't seem to work well, because arma.sim and arma.rec end up far apart:
arma.rec <- arima.sim(n=100, list(ar=c(0.9115),ma=c(0.0104)))
Normally we would use predict() to check the estimate. But is this a legitimate way to look at the estimate?
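A minimal sketch of the kind of reconstruction being asked about, assuming that the "intercept" reported by arima() is the process mean (which is how R labels it when include.mean = TRUE) and that sigma^2 is the innovation variance:

# Reconstruct a series from the fitted coefficients in arma.est1.
phi   <- coef(arma.est1)["ar1"]
theta <- coef(arma.est1)["ma1"]
mu    <- coef(arma.est1)["intercept"]   # actually the mean of the process
arma.rec <- arima.sim(n = 100,
                      model = list(ar = phi, ma = theta),
                      sd = sqrt(arma.est1$sigma2)) + mu   # sd sets the innovation s.d.

The s.e. values do not enter the simulation; they quantify the uncertainty of the coefficient estimates, not the noise of the process. Note also that a single simulated path will still differ from arma.sim point by point, because the innovations are redrawn; only the statistical properties (mean, autocovariances) should be comparable.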

Related

interpreting results from auto.arima R

This is my first time doing any forecasting, and I was looking into using auto.arima, but I'm not really sure what the results mean, basically from 'Coefficients' onwards.
Can someone please be kind enough to give me an explanation?
Series: total[, "total"]
ARIMA(2,0,0) with non-zero mean
Coefficients:
         ar1      ar2        mean
      1.1055  -0.4207  138805.107
s.e.  0.2020   0.2002    4664.756
sigma^2 = 53468931:  log likelihood = -205.36
AIC=418.72   AICc=421.39   BIC=422.7
It means that the model fitted by auto.arima() is an AR(2) with a non-zero mean:
Y_t − 138805.107 = 1.1055·(Y_(t-1) − 138805.107) − 0.4207·(Y_(t-2) − 138805.107) + ε_t
(ar1 multiplies the first lag, ar2 the second, and the reported 138805.107 is the mean of the series, not a regression intercept).
For the rest of the output, the standard error (s.e.) of each coefficient is also shown (needed to judge whether the coefficients are statistically significant). Finally, sigma^2 is the variance of the residuals, the log-likelihood is a measure of how well the model fits the data (the higher, the better) and is needed for comparing this fit against others, and the AIC, AICc and BIC are further quality measures based on the log-likelihood, the sample size and the number of coefficients estimated.
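If it helps to connect the printed output to the fitted object, the same quantities can be pulled out programmatically. A small sketch, assuming fit holds the object returned by auto.arima() on your series:

coef(fit)              # ar1, ar2 and the mean
sqrt(diag(vcov(fit)))  # the s.e. row (square roots of the coefficient variances)
fit$sigma2             # sigma^2, the residual (innovation) variance
logLik(fit)            # log likelihood
AIC(fit)               # and BIC(fit) for the BIC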

In the auto.arima() function in R, how do I find the p,d,q values for that arima

I used R code with the auto.arima function on a time series data set to forecast. From there, I'd like to know how to find the p, d, q values for the ARIMA. Is there a quick way to determine that? Thank you.
The forecast::auto.arima() function was written to pick the optimal p, d, and q with respect to some optimization criterion (e.g. AIC). If you want to see which model was picked, use the summary() function.
For example:
fit <- auto.arima(lynx)
summary(fit)
Series: lynx
ARIMA(2,0,2) with non-zero mean
Coefficients:
         ar1      ar2      ma1      ma2       mean
      1.3421  -0.6738  -0.2027  -0.2564  1544.4039
s.e.  0.0984   0.0801   0.1261   0.1097   131.9242
sigma^2 estimated as 761965:  log likelihood=-932.08
AIC=1876.17   AICc=1876.95   BIC=1892.58
Training set error measures:
                     ME     RMSE      MAE       MPE     MAPE      MASE        ACF1
Training set  -1.608903 853.5488 610.1112 -63.90926 140.7693 0.7343143 -0.01267127
You can see the particular specification in the second row of the output: in this example, auto.arima picks an ARIMA(2,0,2).
Note that I did this naively here for demonstration purposes. I didn't check whether this is an accurate representation of the dependency structure in the lynx data set.
Other than summary(), you could also use arimaorder(fit) to get the vector c(p,d,q) or as.character(fit) to get "ARIMA(p,d,q)".
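For example, continuing with the lynx fit above:

arimaorder(fit)    # named vector of p, d, q; here 2 0 2
as.character(fit)  # "ARIMA(2,0,2) with non-zero mean"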

Lbq test and ARMA results, Should I include lags?

I have a return time series at daily frequency which is stationary (confirmed by an ADF test), has no autocorrelation up to lag 10 (confirmed by an LBQ test with lag 10), and has an ARCH effect (confirmed by an LM test). My initial thought was to apply a GARCH model directly, rather than the usual procedure of first using ARMA(p,q) to get the residuals and then fitting GARCH to those ARMA residuals.
However, out of curiosity, I still looped an ARMA(p,q) model over (p,q) lags in the range [0,1,...,10] to see whether ARMA(0,0) has the smallest AIC of all (see the loop sketch after the output below). After looping through those 121 (p,q) combinations, I find that the smallest AIC does NOT belong to the ARMA(0,0) model, but to ARMA(2,7). I then checked the coefficients of this ARMA(2,7) model and found that many of the included lags are significant; the two AR lags are both significant at the 1% level.
Now I am quite confused. Based on the result of the LBQ(10) test, I should use ARMA(0,0). Based on the smallest AIC among the ARMA models, I should use ARMA(2,7). May I ask, in this case, should I use ARMA(0,0) or ARMA(2,7)? My preference is ARMA(2,7), but how can I explain it to others when they ask why I still use an ARMA model when the LBQ test shows no autocorrelation?
Any thoughts are greatly appreciated!
Please see the code and results below
lbqtest(returns,'Lags',1:10)
I could also use the following code to get only the test at lag 10:
lbqtest(returns,'Lags',10)
The p-values from lbq(1) through lbq(10) are:
p =
0.3425 0.5612 0.4180 0.5356 0.6637 0.7696 0.7770 0.8448 0.8995 0.9198
The AIC results of ARMA(2,7) and ARMA(0,0) are:
         AIC  AR  MA
-1498.252431   2   7
-1494.028      0   0
The estimation result of ARMA(2,7) using R is
arima(x = returns, order = c(2, 0, 7))
Coefficients:
          ar1      ar2     ma1     ma2      ma3      ma4      ma5     ma6     ma7  intercept
      -1.6786  -0.8756  1.6808  0.8128  -0.1691  -0.1736  -0.1065  0.0419  0.0411    -0.0006
s.e.   0.0381   0.0308  0.0660  0.1044   0.1078   0.1097   0.1082  0.1017  0.0642     0.0015
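One way to write the AIC grid search mentioned in the question. This is a sketch only: returns is the asker's series and is not reproduced here, and the asker's actual loop may differ.

# Fit ARMA(p, q) for p, q = 0..10 and keep the fit with the smallest AIC.
best <- NULL
for (p in 0:10) {
  for (q in 0:10) {
    fit <- tryCatch(arima(returns, order = c(p, 0, q)),
                    error = function(e) NULL)   # some orders may fail to converge
    if (!is.null(fit) && (is.null(best) || AIC(fit) < AIC(best))) best <- fit
  }
}
best   # in the question, this turned out to be ARMA(2,7)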

How to interpret the second part of an auto arima result in R?

I am performing a time series analysis on my data and I have run the auto.arima function to determine the best coefficients to use in my ARIMA model.
model1 <- auto.arima(log(mydata_ts))
model1
Series: log(mydata_ts)
ARIMA(2,1,1)(1,0,0)[12]
Coefficients:
          ar1      ar2     ma1    sar1
      -1.1413  -0.3872  0.9453  0.7572
s.e.   0.1432   0.1362  0.0593  0.0830
sigma^2 estimated as 0.006575:  log likelihood=48.35
AIC=-86.69   AICc=-85.23   BIC=-77.44
I understand that (2,1,1) in the result above refers to the values of p, d and q that will be used in the ARIMA model.
But what about (1,0,0)?
ARIMA(2,1,1)(1,0,0)[12] is a seasonal ARIMA model. [12] is the number of periods in a season, i.e. months in a year in this case. (1,0,0) is the seasonal part of the model, i.e. the seasonal (P,D,Q) orders. Take a look at this.
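If it helps to see the notation spelled out in code, the same specification can be fitted explicitly. A sketch, assuming the forecast package and the asker's series mydata_ts:

library(forecast)
# Non-seasonal (p,d,q) = (2,1,1); seasonal (P,D,Q) = (1,0,0) with period 12.
model1b <- Arima(log(mydata_ts),
                 order    = c(2, 1, 1),
                 seasonal = list(order = c(1, 0, 0), period = 12))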

List of Models for auto.arima function

I am using the forecast package in R and I want to know which models the auto.arima function goes through in order to decide which ARIMA model fits best. Is there a way I can extract a list of all the models being tested, to make sure it isn't missing anything and so that it isn't such a black box?
Here is an example:
library(forecast)
fit <- auto.arima(WWWusage)
fit
Series: WWWusage
ARIMA(1,1,1)
Coefficients:
         ar1     ma1
      0.6504  0.5256
s.e.  0.0842  0.0896
sigma^2 estimated as 9.995:  log likelihood=-254.15
AIC=514.3   AICc=514.55   BIC=522.08
plot(forecast(fit,h=20))
Thanks!
See here and here for documentation/explanation on the auto.arima() algorithm.
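One concrete way to see the candidates, if printing them is enough: auto.arima() has a trace argument that prints each model it evaluates together with its information criterion.

library(forecast)
fit <- auto.arima(WWWusage, trace = TRUE)   # prints every model considered
# By default auto.arima() uses a stepwise search, so it does not try every
# (p,d,q) combination; stepwise = FALSE (and approximation = FALSE) widen the
# search over the allowed orders, at a cost in run time.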
