How to get forecast dataset from R language? - r

I am following along with a guide to forecasting data with an ARIMA model.
My question is: how do I extract the data points from the forecasted data?
I would like to have those points so I can reproduce exactly the same graph in Excel. Is this possible?
Thank you.

Suppose you use something like
library(forecast)
m_aa <- auto.arima(AirPassengers)
f_aa <- forecast(m_aa, h=24)
then you can show values for the forecast, for example with
f_aa
which gives
Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
Jan 1961 446.7582 431.7435 461.7729 423.7953 469.7211
Feb 1961 420.7582 402.5878 438.9286 392.9690 448.5474
Mar 1961 448.7582 427.9043 469.6121 416.8649 480.6515
Apr 1961 490.7582 467.5287 513.9877 455.2318 526.2846
May 1961 501.7582 476.3745 527.1419 462.9372 540.5792
Jun 1961 564.7582 537.3894 592.1270 522.9012 606.6152
Jul 1961 651.7582 622.5388 680.9776 607.0709 696.4455
Aug 1961 635.7582 604.7986 666.7178 588.4096 683.1069
Sep 1961 537.7582 505.1511 570.3653 487.8900 587.6264
Oct 1961 490.7582 456.5830 524.9334 438.4918 543.0246
Nov 1961 419.7582 384.0838 455.4326 365.1989 474.3176
Dec 1961 461.7582 424.6450 498.8714 404.9985 518.5179
Jan 1962 476.5164 431.6293 521.4035 407.8675 545.1653
Feb 1962 450.5164 401.1834 499.8494 375.0681 525.9647
Mar 1962 478.5164 425.1064 531.9265 396.8328 560.2000
Apr 1962 520.5164 463.3192 577.7137 433.0408 607.9920
May 1962 531.5164 470.7676 592.2652 438.6092 624.4237
Jun 1962 594.5164 530.4126 658.6203 496.4780 692.5548
Jul 1962 681.5164 614.2245 748.8083 578.6024 784.4304
Aug 1962 665.5164 595.1809 735.8519 557.9475 773.0853
Sep 1962 567.5164 494.2636 640.7692 455.4859 679.5469
Oct 1962 520.5164 444.4581 596.5747 404.1953 636.8376
Nov 1962 449.5164 370.7525 528.2803 329.0574 569.9754
Dec 1962 491.5164 410.1368 572.8961 367.0570 615.9758
and you can save these values with something like
write.csv(f_aa, file="location_and_filename.csv")
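If you only need the numbers themselves rather than a CSV file, the point forecasts and interval bounds can also be pulled directly out of the forecast object (a minimal sketch; mean, lower, and upper are the component names used by the forecast package):

```r
library(forecast)

m_aa <- auto.arima(AirPassengers)
f_aa <- forecast(m_aa, h = 24)

# Point forecasts as a ts object
f_aa$mean

# Lower and upper interval bounds (one column per confidence level)
f_aa$lower
f_aa$upper

# Everything in one data frame, ready for export or plotting
df <- as.data.frame(f_aa)
head(df)
```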

Related

How to change negative values to 0 of forecasts in R?

As the data is rainfall, I want to replace the negative values in both the point forecasts and the intervals with 0. How can this be done in R? I am looking for R code that makes the required changes.
The Forecast values obtained in R using an ARIMA model are given below
> Predictions
Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
Jan 2021 -1.6625108 -165.62072 162.2957 -252.41495 249.0899
Feb 2021 0.8439712 -165.57869 167.2666 -253.67752 255.3655
Mar 2021 35.9618300 -130.53491 202.4586 -218.67297 290.5966
Apr 2021 53.4407679 -113.05822 219.9398 -201.19746 308.0790
May 2021 206.7464927 40.24744 373.2455 -47.89184 461.3848
Jun 2021 436.2547446 269.75569 602.7538 181.61641 690.8931
Jul 2021 408.2814434 241.78239 574.7805 153.64311 662.9198
Aug 2021 431.7649076 265.26585 598.2640 177.12657 686.4032
Sep 2021 243.5520546 77.05300 410.0511 -11.08628 498.1904
Oct 2021 117.4581047 -49.04095 283.9572 -137.18023 372.0964
Nov 2021 25.0773401 -141.42171 191.5764 -229.56098 279.7157
Dec 2021 28.9468415 -137.55188 195.4456 -225.69098 283.5847
Jan 2022 -0.4912674 -171.51955 170.5370 -262.05645 261.0739
Feb 2022 2.2963271 -168.86759 173.4602 -259.47630 264.0690
Mar 2022 43.3561613 -127.81187 214.5242 -218.42275 305.1351
Apr 2022 48.6538398 -122.51431 219.8220 -213.12526 310.4329
May 2022 228.4762035 57.30805 399.6444 -33.30290 490.2553
Jun 2022 445.3540781 274.18592 616.5222 183.57497 707.1332
Jul 2022 441.8287867 270.66063 612.9969 180.04968 703.6079
Aug 2022 592.5766086 421.40845 763.7448 330.79751 854.3557
Sep 2022 220.6996396 49.53148 391.8678 -41.07946 482.4787
Oct 2022 158.7952154 -12.37294 329.9634 -102.98389 420.5743
Nov 2022 29.9052184 -141.26288 201.0733 -231.87380 291.6842
Dec 2022 25.9432583 -145.22303 197.1095 -235.83298 287.7195
In this context, try using:
Predictions[Predictions < 0] <- 0
which replaces all values less than 0 with 0. Vectorized operations like this are preferred over for loops in R wherever vectorization can be applied.
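Note that if Predictions is a forecast object as returned by forecast() (rather than a plain matrix of the printed values), the subset-assignment above will not reach inside the object; in that case the components can be clipped individually. A sketch, assuming the standard mean/lower/upper components of a forecast object (the model below is a stand-in, not the rainfall model from the question):

```r
library(forecast)

fit  <- auto.arima(AirPassengers)  # stand-in model; substitute your rainfall model
pred <- forecast(fit, h = 24)

# pmax() clips each value at zero without an explicit loop
pred$mean  <- pmax(pred$mean, 0)
pred$lower <- pmax(pred$lower, 0)
pred$upper <- pmax(pred$upper, 0)
```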

Can this time series forecasting model (in R) be further improved?

I am trying to build a forecasting model but can't get impressive results. I believe the small number of records available to train the model is one reason for the poor results, so I am seeking help.
Here is the time series matrix of predictor variables. Note that Paidts7 is actually a lagged version of Paidts6.
XREG =
Paidts2 Paidts6 Paidts7 Paidts4 Paidts5 Paidts8
Jan 2014 32932400 29703000 58010000 21833 38820 102000.0
Feb 2014 33332497 35953000 29703000 10284 38930 104550.0
Mar 2014 35811723 40128000 35953000 11132 39840 104550.0
Apr 2014 28387000 29167000 40128000 13171 40010 104550.0
May 2014 27941601 27942000 29167000 9192 39640 104550.0
Jun 2014 34236746 35010000 27942000 8766 39430 104550.0
Jul 2014 22986887 26891000 35010000 11217 39060 104550.0
Aug 2014 31616679 31990000 26891000 8118 38840 104550.0
Sep 2014 41839591 46052000 31990000 10954 38380 104550.0
Oct 2014 36945266 36495000 46052000 14336 37920 104550.0
Nov 2014 44026966 41716000 36495000 12362 36810 104550.0
Dec 2014 57689000 60437000 41716000 14498 36470 104550.0
Jan 2015 35150678 35263000 60437000 22336 34110 104550.0
Feb 2015 33477565 33749000 35263000 12188 29970 107163.8
Mar 2015 41226928 41412000 33749000 11122 28580 107163.8
Apr 2015 31031405 30588000 41412000 12605 28970 107163.8
May 2015 31091543 29327000 30588000 9520 27820 107163.8
Jun 2015 38212015 35818000 29327000 10445 28880 107163.8
Jul 2015 32523660 32102000 35818000 12006 28730 107163.8
Aug 2015 33749299 33482000 32102000 9303 27880 107163.8
Sep 2015 48275932 44432000 33482000 10624 25950 107163.8
Oct 2015 32067045 32542000 44432000 15324 25050 107163.8
Nov 2015 46361434 40862000 32542000 10706 25190 107163.8
Dec 2015 68206802 71005000 40862000 14499 24670 107163.8
Jan 2016 34847451 29226000 71005000 23578 23100 107163.8
Feb 2016 34249625 43835001 29226000 13520 21430 109842.9
Mar 2016 45707923 56087003 43835001 15247 19980 109842.9
Apr 2016 33512366 37116000 56087003 18797 20900 109842.9
May 2016 33844153 42902002 37116000 11870 21520 109842.9
Jun 2016 40251630 53203010 42902002 14374 23150 109842.9
Jul 2016 33947604 38411008 53203010 18436 24230 109842.9
Aug 2016 35391779 38545003 38411008 11654 24050 109842.9
Sep 2016 49399281 55589008 38545003 13448 23510 109842.9
Oct 2016 36463617 45751005 55589008 19871 23940 109842.9
Nov 2016 45182618 51641006 45751005 14998 24540 109842.9
Dec 2016 64894588 79141002 51641006 18143 24390 109842.9
Here is the Y variable (to be predicted)
Jan Feb Mar Apr May Jun
2014 1266757.8 1076023.4 1285495.7 1026840.2 910148.8 1111744.5
2015 1654745.7 1281946.6 1372669.3 1017266.6 841578.4 1353995.5
2016 1062048.8 1860531.1 1684564.3 1261672.0 1249547.7 1829317.9
Jul Aug Sep Oct Nov Dec
2014 799973.1 870778.9 1224827.3 1179754.0 1186726.3 1673259.5
2015 1127006.2 779374.9 1223445.6 925473.6 1460704.8 1632066.2
2016 1410316.4 1276771.1 1668296.7 1477083.3 1466419.2 2265343.3
I tried forecast::Arima and forecast::nnetar models with external regressors but couldn't bring MAPE below 7. I am targeting MAPE below 3 and RMSE under 50000. You are welcome to use any other package or function.
Here is the test data: XREG =
Paidts2test Paidts6test Paidts7test Paidts4test
Jan 2017 31012640 36892000 79141002 27912
Feb 2017 33009746 39020000 36892000 9724
Mar 2017 39296653 52787000 39020000 11335
Apr 2017 36387649 36475000 52787000 17002
May 2017 40269571 41053000 36475000 11436
Paidts5test Paidts8test
Jan 2017 25100 109842.9
Feb 2017 25800 112589.0
Mar 2017 25680 112589.0
Apr 2017 25540 112589.0
May 2017 25830 112589.0
Y =
1627598 1041766 1381536 1346429 1314992
If you find that removing one or more of the predictor variables improves the result significantly, please go ahead. Your help will be greatly appreciated; please suggest solutions in R only, not in some other tool.
Thanks
Try auto.arima; it also allows you to pass external regressors via xreg.
https://www.rdocumentation.org/packages/forecast/versions/8.1/topics/auto.arima
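A minimal, self-contained sketch of that suggestion. The data below are simulated stand-ins for the question's series and regressors (none of these values or names come from the question):

```r
library(forecast)

set.seed(1)
# Toy stand-ins: 36 monthly observations of a target and 2 external regressors
y    <- ts(rnorm(36, mean = 100), start = c(2014, 1), frequency = 12)
xreg <- cbind(x1 = rnorm(36), x2 = rnorm(36))

# Hold out the last 5 months, mirroring the question's 5-month test set
train <- 1:31
fit <- auto.arima(ts(y[train], start = c(2014, 1), frequency = 12),
                  xreg = xreg[train, ])

# Forecast one step per row of future regressors
fc <- forecast(fit, xreg = xreg[-train, ])

# Compare MAPE / RMSE against the held-out values
accuracy(fc, y[-train])
```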

decompose() for yearly time series in R

I'm trying to perform analysis on time series data of inflation rates from the year 1960 to 2015. The dataset is a yearly time series over 56 years, with one real value per year, as follows:
Year Inflation percentage
1960 1.783264746
1961 1.752021563
1962 3.57615894
1963 2.941176471
1964 13.35403727
1965 9.479452055
1966 10.81081081
1967 13.0532972
1968 2.996404315
1969 0.574712644
1970 5.095238095
1971 3.081105573
1972 6.461538462
1973 16.92815855
1974 28.60169492
1975 5.738605162
1976 -7.63438068
1977 8.321619342
1978 2.517518817
1979 6.253164557
1980 11.3652609
1981 13.11510484
1982 7.887270664
1983 11.86886396
1984 8.32157969
1985 5.555555556
1986 8.730811404
1987 8.798689021
1988 9.384775808
1989 3.26256011
1990 8.971233545
1991 13.87024609
1992 11.78781925
1993 6.362038664
1994 10.21150033
1995 10.22488756
1996 8.977149075
1997 7.16425362
1998 13.2308409
1999 4.669821024
2000 4.009433962
2001 3.684807256
2002 4.392199745
2003 3.805865922
2004 3.76723848
2005 4.246353323
2006 6.145522388
2007 6.369996746
2008 8.351816444
2009 10.87739112
2010 11.99229692
2011 8.857845297
2012 9.312445605
2013 10.90764331
2014 6.353194544
2015 5.872426595
'stock1' contains my data where the first column stands for Year, and the second for 'Inflation.percentage', as follows:
stock1<-read.csv("India-Inflation time series.csv", header=TRUE, stringsAsFactors=FALSE, as.is=TRUE)
The following is my code for creating the time series object:
stock <- ts(stock1$Inflation.percentage,start=(1960), end=(2015),frequency=1)
Following this, I am trying to decompose the time series object 'stock' using the following line of code:
decom_add <- (decompose(stock, type ="additive"))
Here I get an error:
Error in decompose(stock, type = "additive") : time series has no
or less than 2 periods
Why is this so? I initially thought it had something to do with frequency, but since the data is annual, the frequency has to be 1, right? And if it is 1, aren't there definitely more than 2 periods in the data?
Why isn't decompose() working? What am I doing wrong?
Thanks a lot in advance!
decompose() requires a seasonal series, i.e. one with frequency greater than 1, because it estimates a seasonal component; with frequency = 1 there is no seasonality to decompose. Simply setting frequency = 2 would silence the error, but it would change the meaning of your model. A better approach is to load data that also contains a month column, so the series can be built with frequency = 12.
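A minimal sketch of the monthly approach, using simulated values as stand-ins (these are not the inflation data from the question):

```r
set.seed(1)
# Simulated monthly series with an annual seasonal pattern, frequency = 12
monthly <- ts(rnorm(120, mean = 5) + sin(2 * pi * (1:120) / 12),
              start = c(1960, 1), frequency = 12)

# decompose() now has a seasonal period to work with
decom_add <- decompose(monthly, type = "additive")
str(decom_add$seasonal)   # seasonal, trend, and random components are available
```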

Confidence Interval for Training data

I am building a time series model in R with training data and predicting the future values.
fit_arima <- auto.arima(train.ts, xreg=xreg.vars.train)
I get the CI for the predicted data using the model that I developed with training data.
fcast_arima <- forecast(fit_arima, xreg = xreg.vars.test, h= nrow(test.data), level=95)
Point Forecast Lo 95 Hi 95
Apr 2015 2.000000 1.396790 2.603210
May 2015 2.000000 1.396790 2.603210
Jun 2015 2.397746 1.794537 3.000956
Jul 2015 2.000000 1.396790 2.603210
Aug 2015 2.397746 1.794537 3.000956
Sep 2015 2.000000 1.396790 2.603210
Oct 2015 2.000000 1.396790 2.603210
Nov 2015 2.397746 1.794537 3.000956
Dec 2015 2.795493 2.192283 3.398702
But I am looking for a way to get a CI for the training data as well. Can someone help me find a way to do this?
Thanks,
Kaly
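No answer is recorded for this question, but one common approximation (not from the thread, and it ignores parameter uncertainty) is to build an in-sample interval around the fitted values using the model's estimated residual variance:

```r
library(forecast)

fit <- auto.arima(AirPassengers)   # stand-in for the model trained on train.ts

# Approximate 95% in-sample interval: fitted values +/- 1.96 * residual sd
sigma <- sqrt(fit$sigma2)
train_ci <- data.frame(
  fitted = as.numeric(fitted(fit)),
  lo95   = as.numeric(fitted(fit)) - 1.96 * sigma,
  hi95   = as.numeric(fitted(fit)) + 1.96 * sigma
)
head(train_ci)
```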

Get average value by month

I've used zoo's aggregate function to get the monthly average from daily data:
monthlyMeanTemp <- aggregate(merged.precip.flow.and.T[,'E'], as.yearmon, mean, na.rm = TRUE) # 'E' is the temperature column
Here is the head and tail of the result:
Jan 1979 Feb 1979 Mar 1979 Apr 1979 May 1979 Jun 1979
-14.05354839 -11.83078929 -7.32150645 -0.03214333 6.16986774 14.00944000
…
Apr 1997 May 1997 Jun 1997 Jul 1997 Aug 1997 Sep 1997
1.438547 7.421910 12.764450 15.086206 17.376026 10.125013
Is it possible to get the mean by month (i.e., the mean of all the January values, the mean of all the February values, etc.) without resorting to padding missing months with NA, forming an n x 12 matrix (where n is the number of years), and then using colMeans?
...just found the answer: From the hydroTSM package: monthlyfunction(merged.precip.flow.and.T[,'E'], FUN=mean, na.rm=TRUE)
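For reference, the same grouping can be done without hydroTSM by splitting on the calendar month of each index. A sketch using zoo, with a toy daily series standing in for merged.precip.flow.and.T[,'E']:

```r
library(zoo)

# Toy daily temperature series indexed by Date
dates <- seq(as.Date("1979-01-01"), as.Date("1980-12-31"), by = "day")
temps <- zoo(rnorm(length(dates)), order.by = dates)

# Group all observations by calendar month (across years) and average
tapply(coredata(temps), format(index(temps), "%m"), mean, na.rm = TRUE)
```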