I'm looking for something like arima.sim(), but for SARIMA models. I've looked at simulate.Arima() in the forecast package, but it seems to require a model object fitted with Arima() on an input dataset, which I don't want to do. I also looked at the gsarima package, but it seems to be able to simulate only seasonal AR models. Is there any way to simulate a SARIMA model if you only want to provide the following information (see the sketch after this list):
The values of all pure ARMA and seasonal ARMA terms for different lags.
The number of differences for both the seasonal and non-seasonal integrated terms.
The length of a season.
The number of observations I want to simulate.
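For what it's worth, one way to do this with base R alone is to expand the seasonal polynomials into an ordinary ARMA, simulate that core with arima.sim(), and then undo the differencing with diffinv(). Below is a rough, untested sketch; the helper names poly_lag, poly_mul and sim_sarima are made up for this example and come from no package.

# Sketch: simulate a SARIMA(p,d,q)(P,D,Q)_s process from its coefficients only.

poly_lag <- function(coef, step, sign) {
  # turn ARMA coefficients into a lag-polynomial coefficient vector
  # (constant term first); e.g. coef = 0.5, step = 12, sign = -1 gives 1 - 0.5 B^12
  if (!length(coef)) return(1)
  p <- numeric(length(coef) * step + 1)
  p[1] <- 1
  p[1 + step * seq_along(coef)] <- sign * coef
  p
}

poly_mul <- function(a, b) {
  # multiply two lag polynomials (plain convolution of coefficient vectors)
  out <- numeric(length(a) + length(b) - 1)
  for (i in seq_along(a)) {
    idx <- i:(i + length(b) - 1)
    out[idx] <- out[idx] + a[i] * b
  }
  out
}

sim_sarima <- function(n, ar = numeric(), ma = numeric(),
                       sar = numeric(), sma = numeric(),
                       d = 0, D = 0, s = 12, sd = 1) {
  # combined (non-seasonal x seasonal) AR and MA polynomials
  ar_poly <- poly_mul(poly_lag(ar, 1, -1), poly_lag(sar, s, -1))
  ma_poly <- poly_mul(poly_lag(ma, 1, +1), poly_lag(sma, s, +1))

  mod <- list()
  if (length(ar_poly) > 1) mod$ar <- -ar_poly[-1]
  if (length(ma_poly) > 1) mod$ma <-  ma_poly[-1]

  x <- arima.sim(model = mod, n = n, sd = sd)

  # undo the seasonal and then the non-seasonal differencing
  if (D > 0) x <- diffinv(x, lag = s, differences = D)[-seq_len(s * D)]
  if (d > 0) x <- diffinv(x, differences = d)[-seq_len(d)]
  ts(x, frequency = s)
}

# e.g. 200 observations from a SARIMA(1,1,1)(0,1,1)_12:
y <- sim_sarima(200, ar = 0.5, ma = -0.3, sma = 0.4, d = 1, D = 1, s = 12)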
Suppose I fit an AR(p) model using the arima function from R's stats package, estimating it on a sample x_1, ..., x_n. In theory, when predicting x_{n+1}, this model needs access to x_n, ..., x_{n-p+1}.
How does the model know which observation I want to predict? What if I actually wanted to predict x_n based on x_{n-1}, ..., x_{n-p}, and how would my code differ in that case? Can I make in-sample forecasts, similar to Python's functionality?
If my questions suggest that I'm thinking about forecasting in the wrong way, please kindly correct my understanding of the subject.
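A small sketch of the distinction, using an arbitrary simulated AR(2) series (the order and the object names are placeholders, not anything from the question):

set.seed(1)
x <- arima.sim(model = list(ar = c(0.6, -0.2)), n = 200)

fit0 <- arima(x, order = c(2, 0, 0))

# Out of sample: predict() always forecasts from the END of the fitted sample,
# i.e. x_{n+1} is predicted from x_n, x_{n-1}, ... -- you don't choose the target.
predict(fit0, n.ahead = 1)

# In-sample one-step-ahead predictions: the residuals are one-step prediction
# errors, so x_t minus its residual is the prediction of x_t made from
# x_{t-1}, ..., x_{t-p} with the estimated coefficients.
insample <- x - residuals(fit0)
head(insample)

# The forecast package exposes the same thing as fitted():
# library(forecast); fitted(Arima(x, order = c(2, 0, 0)))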
I am working on an LDA model with textmineR: I have calculated coherence and log-likelihood measures and optimized my model.
As a last step I would like to see how well the model predicts topics on unseen data. I am therefore using the predict() function from the textmineR package, with Gibbs sampling, on my test-set sample.
This results in predicted "theta" values for each document in the test-set sample.
I have read in another post that perplexity calculations are not available in the textmineR package (see this post: How do i measure perplexity scores on a LDA model made with the textmineR package in R?), so I am now wondering what the purpose of the prediction function is. Especially with a large dataset of over 100,000 documents, it is hard to just visually assess whether the prediction has performed well or not.
I do not want to use perplexity for model selection (I am using coherence/log-likelihood instead), but as far as I understand, perplexity would help me to understand how good the prediction is and how "surprised" the model is by new, previously unseen data.
Since this does not seem to be available for textmineR, I am not sure how to assess the model's predictions. Is there anything else I could use to measure the prediction quality of my textmineR model?
Thank you!
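One rough way to put a number on held-out fit, computed by hand from the predicted theta and the model's phi, is sketched below. Here model and heldout_dtm are assumed names for the fitted textmineR model and the test-set document-term matrix; the test dtm must use the same vocabulary and column order as model$phi, and densifying a 100,000-document dtm at once would be memory-hungry, so treat this as a sketch for a manageable test slice.

theta_test <- predict(model, heldout_dtm, method = "gibbs",
                      iterations = 200, burnin = 100)

# p(word | document) implied by the model for every held-out document
pred_word_probs <- theta_test %*% model$phi        # documents x vocabulary

# log-likelihood of the observed held-out tokens (small constant avoids log(0))
ll_tokens <- sum(as.matrix(heldout_dtm) * log(pred_word_probs + 1e-12))
n_tokens  <- sum(heldout_dtm)

ll_tokens / n_tokens          # average log-likelihood per held-out token
exp(-ll_tokens / n_tokens)    # a perplexity-style summary, if you want one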
I have a time series, mydata.ts, with around 200 observations. I ran stationarity tests, took differences, and examined the ACF and PACF. Based on that I decided to try ARIMA(1,1,1)(0,1,1), for instance.
Which R function should I use to get fitted values and forecasts: Arima, arima, or auto.arima?
And can I trust the MAPE, MAD and other error measures from summary(model)? I read an answer saying those results are not the real errors but approximations, or something like that.
auto.arima will find the whole model specification that is 'best' according to AIC or BIC.
If you already know the order, (1,1,1)(0,1,1) in your case, then use Arima from the forecast package (the same as arima, but a little more general).
Arima(your_data, order = c(1,1,1), seasonal = c(0,1,1)) will give the basic answer.
See the documentation for forecast.
Actual out-of-sample forecasts can then be produced with the forecast function.
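Roughly, the whole workflow could look like the sketch below (the horizon h = 12 is arbitrary, and mydata.ts is assumed to have been created with the right seasonal frequency). Note that accuracy(fit) reports errors computed on the training data, so treat MAPE, MAE, etc. from it as in-sample measures only.

library(forecast)

fit <- Arima(mydata.ts, order = c(1, 1, 1), seasonal = c(0, 1, 1))

fitted(fit)            # in-sample one-step-ahead fitted values
accuracy(fit)          # MAPE, MAE, etc. on the training data
fc <- forecast(fit, h = 12)
plot(fc)

# For comparison, let auto.arima pick the specification itself:
auto.arima(mydata.ts)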
I've been using the caret package in R to run some boosted regression tree and random forest models, and I'm hoping to generate prediction intervals for a set of new cases using the built-in cross-validation routine.
The trainControl function allows you to save the hold-out predictions at each of the n folds, but I'm wondering whether unknown cases can also be predicted at each fold using the built-in functions, or whether I need a separate loop to build the models n times.
Any advice is much appreciated.
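As far as I know, caret's trainControl(savePredictions = ...) only stores the hold-out predictions, so predicting new cases at each fold needs a small manual loop over the same folds. A sketch, assuming train_dat (training data with response y) and new_dat (the unknown cases) as placeholder names; note that the across-fold spread only reflects model variability, not a proper prediction interval (see the quantregForest answer below for that).

library(caret)
library(randomForest)

set.seed(42)
folds <- createFolds(train_dat$y, k = 10, returnTrain = TRUE)

# refit on each fold's training portion and predict the new cases every time
fold_preds <- sapply(folds, function(idx) {
  fit <- randomForest(y ~ ., data = train_dat[idx, ])
  predict(fit, newdata = new_dat)
})

# spread of the new-case predictions across the 10 fold models
apply(fold_preds, 1, quantile, probs = c(0.05, 0.5, 0.95))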
Check out the R package quantregForest, available on CRAN. It can easily calculate prediction intervals for random forest models. There's a nice paper by the package's author explaining the background of the method. (Sorry, I can't say anything about prediction intervals for BRT models; I'm looking for those myself...)
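For concreteness, a minimal sketch; X_train, y_train and X_new are placeholder names for your predictor matrix/data frame, numeric response, and new cases.

library(quantregForest)

qrf <- quantregForest(x = X_train, y = y_train, ntree = 500)

# 90% prediction intervals (plus the median) for the new cases
predict(qrf, newdata = X_new, what = c(0.05, 0.5, 0.95))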
Does anyone here know how I can specify additional external variables in an ARIMA model?
In my case I am trying to build a volatility model, and I would like to add the squared returns to model an ARCH effect.
The reason I am not using GARCH models is that I am only interested in the volatility forecasts, while GARCH models focus on the errors of the returns, which is not the subject of my study.
I would like to add an external variable and look at the R^2 and p-values to check whether its coefficient is statistically significant.
I know that this is a very old question, but for people like me who were wondering about this: you need to use cbind with the xreg argument.
For example:
Arima(X, order = c(3,1,3), xreg = cbind(ts1, ts2, ts3))
Each external time series should be the same length as the original series.
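With a single regressor, as in the squared-returns idea from the question, you don't even need cbind. A rough sketch (returns is a placeholder series name and the (1,0,1) order is arbitrary), using lmtest::coeftest for the p-values the question asks about:

library(forecast)
library(lmtest)    # coeftest() gives z-statistics and p-values for the coefficients

x_sq <- head(returns, -1)^2          # squared return at t-1
y    <- tail(returns, -1)            # return at t

fit <- Arima(y, order = c(1, 0, 1), xreg = x_sq)
coeftest(fit)                        # is the xreg coefficient significant?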