How to check accuracy() in a VAR model, and how to determine the right seasonality (is there a function?)

I am trying to create a VAR model. I have monthly data:
Var_model <- VAR(cb, p = 1, type = "both", season = 12, exog = NULL)
I put season = 12 by default since my data is monthly. How do I determine the seasonality?
[mstl decomposition plots of the four series: peca, waln, almo, pean]
Here is the main problem: how do I run accuracy() on a VAR model?
forecast <- predict(Var_model, n.ahead = 24, ci = 0.95)
accuracy(forecast$fcst[[1]][,"fcst"], almo)
Here I think I am following the documented usage, accuracy(forecast, data), but I am still getting an error:
Error in testaccuracy(object, x, test, d, D) : Not enough forecasts.
Check that forecasts and test data match

There is a varresult object inside your model Var_model, so you can see the different accuracy metrics and assess how good your model is on the training set, like this:
accuracy(your_model_name$varresult$col_name)
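To evaluate forecasts against a held-out test set instead, the number of forecast steps must match the length of the test data; that mismatch is what the "Not enough forecasts" error is complaining about. A minimal sketch, assuming cb is the monthly multivariate series from the question, almo is one of its columns, and the split dates are placeholders:

library(vars)
library(forecast)

train <- window(cb, end = c(2019, 12))    # all but the last 24 months (dates assumed)
test  <- window(cb, start = c(2020, 1))   # the held-out 24 months

fit <- VAR(train, p = 1, type = "both", season = 12)
fc  <- predict(fit, n.ahead = nrow(test), ci = 0.95)

# forecast length now equals test length, so accuracy() can pair them up
accuracy(fc$fcst$almo[, "fcst"], test[, "almo"])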
Also, you should check the decomposition for the seasonal period, as well as lag correlation and cross-correlation; since you are using a VAR, all variables will impact each other at the forecast level.
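For the seasonality question, a minimal sketch using the forecast package (the column name almo is an assumption carried over from the question):

library(forecast)

x <- cb[, "almo"]
findfrequency(x)                       # estimates the dominant seasonal period from the data
nsdiffs(ts(x, frequency = 12))         # > 0 suggests a seasonal difference is warranted
autoplot(mstl(ts(x, frequency = 12)))  # visual check: trend, seasonal and remainder panels

If findfrequency() returns 12 and the seasonal panel of the mstl() decomposition shows a stable pattern, season = 12 is a reasonable choice.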

Related

How to decide the frequency while using the forecast function in R?

I have a series of daily data from 01-01-2014 to 31-01-2022. I want to predict the next 30 days. I am using auto.arima and it has some exogenous variables attached.
Here's the code:
datax$NMD1<-(datax$NMD1/1000000000)
#Here to make an Arima series out of NMD 1. Exogenous variables here.
ts1<- ts(datax, frequency = 1)
class(ts1)
colnames(ts1)
autoplot(ts1[,"NMD1"])
#defining the set of exogenous variables
xset <- cbind(ts1[,"1Y TD INTEREST RATE"], ts1[,"BSE"], ts1[,"Repo Rate"], ts1[,"MIBOR Rate"], ts1[,"1Y OIS Rate"], ts1[,"3M CD rate(PSU)"], ts1[,"2 Y GSec Rate"])
#Fitting the model
model1 <- auto.arima(ts1[,'NMD1'], xreg=xset, approximation = FALSE, allowmean = FALSE, allowdrift = FALSE)
summary(model1)
checkresiduals(model1)
fcast <- forecast(model1,xreg=xset, h=1)
print(summary(fcast))
autoplot(fcast)
My problems:
While my model seems to work fine, I am not able to understand what value of h I should use while forecasting. I also don't understand what frequency really means when we define a time series.
Please help.
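A hedged sketch of how frequency and h interact, reusing the object names from the question: frequency is the number of observations per seasonal cycle, and h counts forecast steps in units of observations, so for daily data 30 steps means 30 days.

library(forecast)

# daily data: frequency = 7 captures weekly seasonality
# (365.25 would capture yearly seasonality instead)
ts1 <- ts(datax, frequency = 7)

# h = 30 forecasts 30 days ahead. Note that when xreg is supplied,
# forecast() takes the horizon from nrow(xreg), so you need a matrix of
# *future* regressor values covering those 30 days (future_xset below
# is a hypothetical placeholder for such a matrix):
# fcast <- forecast(model1, xreg = future_xset, h = 30)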

Forecasting of multivariate data through Vector Autoregression model

I am working with functional time series using multivariate time series data (hourly data). I am using a FAR model of order greater than one, for which no statistical package is available in R, so I convert my data into functional form, obtain the functional principal components, and extract the corresponding FPC scores. I then fit a VAR model on those FPC scores to forecast each of the 24 hours. The VAR gives me forecasted values for all 23 hours when I put d.hat = 23, but whenever I put d.hat = 24 (for example, to predict each of the 24 hours) the results come out as NA. The code is given below:
library(vars)
library(fda)
fdata <- function(mat) {
  nb = 27  # number of basis functions for the data
  fbf = create.fourier.basis(rangeval = c(0, 1), nbasis = nb)  # basis for data
  args = seq(0, 1, length = 24)
  fdata1 = Data2fd(args, y = t(mat), fbf)  # functions generated from discretized y
  return(fdata1)
}
prediction.ffpe = function(fdata1) {
  n = ncol(fdata1$coef)
  D = nrow(fdata1$coef)
  # center the data
  # mu = mean.fd(fdata1)
  data = center.fd(fdata1)
  # ffpe = fFPE(fdata1, Pmax = 10)
  # p.hat = ffpe[2]  # order of the model
  d.hat = 23
  p.hat = 6
  # fPCA
  fpca = pca.fd(data, nharm = D, centerfns = TRUE)
  scores = fpca$scores[, 0:d.hat]
  # to avoid warnings from vars predict function below
  colnames(scores) <- as.character(seq(1:d.hat))
  VAR.pre = predict(VAR(scores, p.hat), n.ahead = 1, type = "const")$fcst
}
Kindly guide me on how I can solve this problem, or what error I am making. Thanks.
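Not a definitive fix, but one thing worth checking (an assumption based on the code above, using the variable names inside prediction.ffpe): the curves are discretized at only 24 points, so after centering at most 23 principal components can have nonzero variance, and a degenerate score column will produce NA coefficients in the VAR. Also note the indexing, since in R the range 0:d.hat silently drops the 0:

apply(fpca$scores, 2, var)        # columns with near-zero variance cannot be modelled
scores <- fpca$scores[, 1:d.hat]  # use 1:d.hat rather than 0:d.hat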

Auto.Arima incorrectly predicts first point

I'm trying to complete a time series analysis of some reservoir data and am using auto.arima with a Fourier component to account for seasonality, as described at https://otexts.com/fpp2/dhr.html#dhr. The code I have used is shown below, and the dataset I used can be found here: https://www.dropbox.com/sh/563nu3daeid0agb/AAB6NSddVUKgBCCbQtuqXPsZa?dl=0
Reservoir = read.csv("Reservoir1.csv",TRUE,",")
#impute missing data from data set
Reservoir = imputeTS::na_interpolation(Reservoir)
#Create Time Series
Reservoir = ts(Reservoir[,2],frequency = (365.25),start = c(2013,116))
plots = list()
for (i in seq(10)) {
  fit = auto.arima(Reservoir, xreg = fourier(Reservoir, K = i), seasonal = FALSE)
  plots[[i]] = autoplot(forecast(fit, xreg = fourier(Reservoir, K = i, h = 10))) +
    xlab(paste("K=", i, "AICC=", round(fit[["aicc"]], 2))) + ylab("")
}
gridExtra::grid.arrange(plots[[1]], plots[[2]], plots[[3]], plots[[4]], plots[[5]],
                        plots[[6]], plots[[7]], plots[[8]], plots[[9]], plots[[10]],
                        nrow = 5)
bestfit = auto.arima(Reservoir, xreg=fourier(Reservoir, K=9), seasonal=FALSE)
summary(bestfit)
checkresiduals(bestfit)
plot(Reservoir,col="red")
lines(fitted(bestfit),col="blue")
The model fits well, except for the incorrect first prediction. I'm lost as to why only this value would be so far off. Or, is this an acceptable error?
The residuals are the one-step forecast errors using all previous observations. At time 1, the residual is the forecast error with no previous observations, so it is simply based on the fitted model. In fact, it is an artificially "good" forecast because the differencing means there is no way for the model to know the location of the data until there is an observation. But the way ARIMA models are implemented in R makes the first prediction use a little more information than it should.
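A minimal illustration of that point on a toy series rather than the reservoir data: fitted values from an ARIMA model are one-step-ahead forecasts, so the first one is produced with no observed history.

library(forecast)

set.seed(1)
y <- cumsum(rnorm(100))              # a toy random walk
fit <- Arima(y, order = c(0, 1, 0))
head(cbind(fitted = fitted(fit), residual = residuals(fit)))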

How to simulate the posterior filtered estimates of a Kalman Filter using the DSE package in R

How do I call for the posterior (refined) state estimates from a Kalman Filter simulation in R using the DSE package?
I have added an example below. Assume that I have created a simple random walk state space with the error being a standard normal distribution. The model is created using the SS function, with initialised state and covariance estimates of zero. The theoretical model form is thus:
X(t) = X(t-1) + e(t),  e(t) ~ N(0, 1)   (state evolution)
Y(t) = X(t) + w(t),    w(t) ~ N(0, 1)   (measurement)
We now implement this in R by following the instructions on pages 6 and 7 of the "Kalman Filtering in R" article in the Journal of Statistical Software. First we create the state space model using the SS() function and store it in a variable called kalman.filter:
kalman.filter = dse::SS(F = matrix(1, 1, 1),
                        Q = matrix(1, 1, 1),
                        H = matrix(1, 1, 1),
                        R = matrix(1, 1, 1),
                        z0 = matrix(0, 1, 1),
                        P0 = matrix(0, 1, 1))
Then we simulate 100 observations from the model using simulate() and put them in a variable called simulate.kalman.filter:
simulate.kalman.filter=simulate(kalman.filter, start = 1, freq = 1, sampleT = 100)
Then we run the Kalman filter against the measurements using l() and store the result in a variable called test:
test=l(kalman.filter, simulate.kalman.filter)
From the outputs, which ones are my filtered estimates?
I have found the answer to this question.
Firstly, the filtered estimates of the model are not given by the l() function; it only gives the one-step-ahead predictions. The above framing of my problem was coded as:
kalman.filter = dse::SS(F = matrix(1, 1, 1),
                        Q = matrix(1, 1, 1),
                        H = matrix(1, 1, 1),
                        R = matrix(1, 1, 1),
                        z0 = matrix(0, 1, 1),
                        P0 = matrix(0, 1, 1))
simulate.kalman.filter=simulate(kalman.filter, start = 1, freq = 1, sampleT = 100)
test=l(kalman.filter, simulate.kalman.filter)
The one step ahead predictions are given by:
predictions = test$estimates$pred
A quick way to visualize this is given by:
tfplot(test)
This allows you to quickly plot your one-step-ahead predictions against the actual data. To get your filtered estimates, you need to use the smoother() function from the same dse package. It takes the state model and the data as inputs; in this case these are kalman.filter and simulate.kalman.filter respectively. The output is smoothed estimates for all time points. Note that these are computed after considering the full data set, not sequentially as each observation comes in. See the code below: the first line gives you the smoothed estimates, and the following lines plot the example:
smooth = smoother(test, simulate.kalman.filter)
plot(test$estimates$pred,
     ylim = range(test$estimates$pred, smooth$filter$track,
                  simulate.kalman.filter$output))
points(smooth$smooth$state, col = 3)
points(simulate.kalman.filter$output, col = 4)
This plots your actual data, one-step-ahead predictions, and smoothed estimates against one another.

survfit.coxph ; Predicting Survival using newdata and ID option

I am attempting to use survfit.coxph to predict an estimate of the survival function using the newdata and id options. I am aware of the limitations of this (the baseline hazard is defined at the average of all covariates, and what constitutes a typical patient is questionable), but please can we put this aside for a moment.
I am fitting the model:
Model.Cox <- coxph(Surv(Start,Stop, censor) ~ baseline,data = data)
I then try to use
summary(survfit(Model.Cox, newdata = data,id = Id ))
to predict on new data. However, both
summary(survfit(Model.Cox, newdata = data,id = Id ))$time
summary(survfit(Model.Cox, newdata = data,id = Id ))$surv
give different times than the original data. I would expect predictions at the times in the original dataset; is there a case when this would not happen?
If times is missing (the default) and censored = FALSE (also its default), then you get predictions only at event times. If your expectation is predictions for a limited number of individuals but at all the times in the original dataset, then you need to provide a vector of times to the times parameter.
allT <- data$Stop
summfitID <- summary(survfit(Model.Cox, newdata = data, id = Id), times = allT)
summfitID$time
summfitID$surv
Looking at the code, I wondered if the same effect could be had just by setting censored = TRUE in the summary.survfit arguments.
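A minimal sketch of that idea (untested here; censored is a documented argument of summary.survfit that adds censoring times to the reported time points):

summfitCens <- summary(survfit(Model.Cox, newdata = data, id = Id), censored = TRUE)
summfitCens$time
summfitCens$surv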
