I am trying to build a forecasting model but can't get satisfactory results. I believe the small number of records available to train the model is one reason for the poor results, so I am seeking help.
Here is the time series matrix of predictor variables. Note that Paidts7 is a lagged version of Paidts6.
XREG =
Paidts2 Paidts6 Paidts7 Paidts4 Paidts5 Paidts8
Jan 2014 32932400 29703000 58010000 21833 38820 102000.0
Feb 2014 33332497 35953000 29703000 10284 38930 104550.0
Mar 2014 35811723 40128000 35953000 11132 39840 104550.0
Apr 2014 28387000 29167000 40128000 13171 40010 104550.0
May 2014 27941601 27942000 29167000 9192 39640 104550.0
Jun 2014 34236746 35010000 27942000 8766 39430 104550.0
Jul 2014 22986887 26891000 35010000 11217 39060 104550.0
Aug 2014 31616679 31990000 26891000 8118 38840 104550.0
Sep 2014 41839591 46052000 31990000 10954 38380 104550.0
Oct 2014 36945266 36495000 46052000 14336 37920 104550.0
Nov 2014 44026966 41716000 36495000 12362 36810 104550.0
Dec 2014 57689000 60437000 41716000 14498 36470 104550.0
Jan 2015 35150678 35263000 60437000 22336 34110 104550.0
Feb 2015 33477565 33749000 35263000 12188 29970 107163.8
Mar 2015 41226928 41412000 33749000 11122 28580 107163.8
Apr 2015 31031405 30588000 41412000 12605 28970 107163.8
May 2015 31091543 29327000 30588000 9520 27820 107163.8
Jun 2015 38212015 35818000 29327000 10445 28880 107163.8
Jul 2015 32523660 32102000 35818000 12006 28730 107163.8
Aug 2015 33749299 33482000 32102000 9303 27880 107163.8
Sep 2015 48275932 44432000 33482000 10624 25950 107163.8
Oct 2015 32067045 32542000 44432000 15324 25050 107163.8
Nov 2015 46361434 40862000 32542000 10706 25190 107163.8
Dec 2015 68206802 71005000 40862000 14499 24670 107163.8
Jan 2016 34847451 29226000 71005000 23578 23100 107163.8
Feb 2016 34249625 43835001 29226000 13520 21430 109842.9
Mar 2016 45707923 56087003 43835001 15247 19980 109842.9
Apr 2016 33512366 37116000 56087003 18797 20900 109842.9
May 2016 33844153 42902002 37116000 11870 21520 109842.9
Jun 2016 40251630 53203010 42902002 14374 23150 109842.9
Jul 2016 33947604 38411008 53203010 18436 24230 109842.9
Aug 2016 35391779 38545003 38411008 11654 24050 109842.9
Sep 2016 49399281 55589008 38545003 13448 23510 109842.9
Oct 2016 36463617 45751005 55589008 19871 23940 109842.9
Nov 2016 45182618 51641006 45751005 14998 24540 109842.9
Dec 2016 64894588 79141002 51641006 18143 24390 109842.9
Here is the Y variable (to be predicted)
Jan Feb Mar Apr May Jun
2014 1266757.8 1076023.4 1285495.7 1026840.2 910148.8 1111744.5
2015 1654745.7 1281946.6 1372669.3 1017266.6 841578.4 1353995.5
2016 1062048.8 1860531.1 1684564.3 1261672.0 1249547.7 1829317.9
Jul Aug Sep Oct Nov Dec
2014 799973.1 870778.9 1224827.3 1179754.0 1186726.3 1673259.5
2015 1127006.2 779374.9 1223445.6 925473.6 1460704.8 1632066.2
2016 1410316.4 1276771.1 1668296.7 1477083.3 1466419.2 2265343.3
I tried forecast::Arima and forecast::nnetar models with external regressors but couldn't bring MAPE below 7. I am targeting a MAPE below 3 and an RMSE under 50,000. You are welcome to use any other package and function.
Here is the test data: XREG =
Paidts2test Paidts6test Paidts7test Paidts4test Paidts5test Paidts8test
Jan 2017 31012640 36892000 79141002 27912 25100 109842.9
Feb 2017 33009746 39020000 36892000 9724 25800 112589.0
Mar 2017 39296653 52787000 39020000 11335 25680 112589.0
Apr 2017 36387649 36475000 52787000 17002 25540 112589.0
May 2017 40269571 41053000 36475000 11436 25830 112589.0
Y =
1627598 1041766 1381536 1346429 1314992
If you find that removing one or more of the predictor variables significantly improves the results, please go ahead. Your help will be greatly appreciated; please suggest solutions in R only, not in another tool.
-Thanks
Try auto.arima(); it also allows you to pass external regressors via the xreg argument.
https://www.rdocumentation.org/packages/forecast/versions/8.1/topics/auto.arima
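A minimal sketch of how that could look with the question's setup. The object names (`y_ts`, `xreg_train`, `xreg_test`) are placeholders and the series is simulated here so the snippet runs on its own; substitute the real Y series and XREG matrices:

```r
library(forecast)

# Placeholder monthly target series and predictor matrix (3 years of data,
# mirroring Jan 2014 - Dec 2016 in the question)
set.seed(1)
y_ts <- ts(rnorm(36, mean = 1.2e6, sd = 2e5), start = c(2014, 1), frequency = 12)
xreg_train <- matrix(rnorm(36 * 3), ncol = 3,
                     dimnames = list(NULL, c("Paidts2", "Paidts6", "Paidts4")))

# auto.arima searches over ARIMA orders while including the regressors
fit <- auto.arima(y_ts, xreg = xreg_train)

# Forecasting requires future values of the same regressors (Jan - May 2017)
xreg_test <- matrix(rnorm(5 * 3), ncol = 3,
                    dimnames = list(NULL, colnames(xreg_train)))
fc <- forecast(fit, xreg = xreg_test)
fc$mean  # point forecasts
```

Accuracy against the held-out Y values can then be checked with accuracy(fc, y_test).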
I am using the sentiment analysis function sentiment_by() from the R package sentimentr (by trinker). I have a data frame containing the following columns:
review comments
month
year
I ran the sentiment_by function on the data frame to find the average polarity score by year and month, and I get the following values:
review_year review_month word_count sd ave_sentiment
2015 March 8722 0.381686065 0.163440921
2015 April 7758 0.387046768 0.158812775
2015 May 7333 0.389256472 0.149220636
2015 November 14020 0.394711478 0.14691745
2016 February 7974 0.400406931 0.142345278
2015 September 8238 0.379989344 0.141740366
2015 February 7642 0.361415304 0.141624745
2015 December 24863 0.387409099 0.141606892
2016 March 8229 0.389033232 0.138552943
2016 January 10472 0.388300946 0.134302612
2015 August 7520 0.3640285 0.127980712
2016 May 3432 0.422246851 0.125041218
2015 June 8678 0.356612924 0.119333949
2015 January 9930 0.351126449 0.119225549
2016 April 9344 0.397066458 0.111879315
2015 July 8450 0.349963536 0.108881821
2015 October 7630 0.38017201 0.1044298
Now I run the sentiment_by function on the comments alone, and then run the following on the resulting data frame to find the average polarity score by year and month:
sentiment_df[, list(avg = mean(ave_sentiment)), by = "month,year"]
I get the following results.
month year avg
January 2015 0.110950199
February 2015 0.126943461
March 2015 0.146546669
April 2015 0.148264268
May 2015 0.143924126
June 2015 0.110691204
July 2015 0.106472437
August 2015 0.118976304
September 2015 0.135362187
October 2015 0.111441484
November 2015 0.137699548
December 2015 0.136786867
January 2016 0.128645808
February 2016 0.129139898
March 2016 0.134595706
April 2016 0.12106743
May 2016 0.142801514
To my understanding, both should return the same results; correct me if I am wrong. The reason I went for the second approach is that I need the average polarity both by month and year and by month alone, and I don't want to run the method twice because of the additional runtime. Could someone let me know what I am doing wrong here?
Here is an idea: maybe the first call is averaging over the individual sentences, while the second is averaging ave_sentiment, which is itself already an average. An average of averages is not, in general, equal to the average of the individual elements, because the groups contain different numbers of sentences.
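A small base-R illustration of why the two approaches disagree when group sizes differ (toy numbers, not the actual sentiment scores):

```r
# Toy polarity scores for two comments in the same month:
# comment A has two sentences, comment B has one.
scores  <- c(1, 3, 10)
comment <- c("A", "A", "B")

# Approach 1: average over all individual sentences
overall <- mean(scores)                       # (1 + 3 + 10) / 3 = 4.67

# Approach 2: average the per-comment averages
per_comment <- tapply(scores, comment, mean)  # A = 2, B = 10
avg_of_avgs <- mean(per_comment)              # (2 + 10) / 2 = 6

overall == avg_of_avgs                        # FALSE: the weights differ
```

Approach 1 weights every sentence equally; approach 2 weights every comment equally, so comments with few sentences count disproportionately.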
I am following along with this guide to forecasting data with an ARIMA model.
The question I have is: how do I extract the data points from the forecast?
I would like to have those points so I can graph the exact same thing in Excel. Is this possible?
Thank you.
Suppose you use something like
library(forecast)
m_aa <- auto.arima(AirPassengers)
f_aa <- forecast(m_aa, h=24)
then you can show values for the forecast, for example with
f_aa
which gives
Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
Jan 1961 446.7582 431.7435 461.7729 423.7953 469.7211
Feb 1961 420.7582 402.5878 438.9286 392.9690 448.5474
Mar 1961 448.7582 427.9043 469.6121 416.8649 480.6515
Apr 1961 490.7582 467.5287 513.9877 455.2318 526.2846
May 1961 501.7582 476.3745 527.1419 462.9372 540.5792
Jun 1961 564.7582 537.3894 592.1270 522.9012 606.6152
Jul 1961 651.7582 622.5388 680.9776 607.0709 696.4455
Aug 1961 635.7582 604.7986 666.7178 588.4096 683.1069
Sep 1961 537.7582 505.1511 570.3653 487.8900 587.6264
Oct 1961 490.7582 456.5830 524.9334 438.4918 543.0246
Nov 1961 419.7582 384.0838 455.4326 365.1989 474.3176
Dec 1961 461.7582 424.6450 498.8714 404.9985 518.5179
Jan 1962 476.5164 431.6293 521.4035 407.8675 545.1653
Feb 1962 450.5164 401.1834 499.8494 375.0681 525.9647
Mar 1962 478.5164 425.1064 531.9265 396.8328 560.2000
Apr 1962 520.5164 463.3192 577.7137 433.0408 607.9920
May 1962 531.5164 470.7676 592.2652 438.6092 624.4237
Jun 1962 594.5164 530.4126 658.6203 496.4780 692.5548
Jul 1962 681.5164 614.2245 748.8083 578.6024 784.4304
Aug 1962 665.5164 595.1809 735.8519 557.9475 773.0853
Sep 1962 567.5164 494.2636 640.7692 455.4859 679.5469
Oct 1962 520.5164 444.4581 596.5747 404.1953 636.8376
Nov 1962 449.5164 370.7525 528.2803 329.0574 569.9754
Dec 1962 491.5164 410.1368 572.8961 367.0570 615.9758
and you can save these values with something like
write.csv(f_aa, file="location_and_filename.csv")
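If you only need the numbers themselves rather than a CSV, the pieces of a forecast object can also be pulled out directly (the model is re-fitted here so the snippet stands on its own):

```r
library(forecast)

# Same model as above: automatic ARIMA on the built-in AirPassengers series
m_aa <- auto.arima(AirPassengers)
f_aa <- forecast(m_aa, h = 24)

f_aa$mean            # point forecasts as a ts object
f_aa$lower           # matrix of lower interval bounds (80% and 95% columns)
f_aa$upper           # matrix of upper interval bounds
as.data.frame(f_aa)  # everything in one data frame, handy before exporting
```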
I am building a time series model in R on training data and predicting future values.
fit_arima <- auto.arima(train.ts, xreg=xreg.vars.train)
I get the confidence intervals for the predictions using the model fitted on the training data:
fcast_arima <- forecast(fit_arima, xreg = xreg.vars.test, h= nrow(test.data), level=95)
Point Forecast Lo 95 Hi 95
Apr 2015 2.000000 1.396790 2.603210
May 2015 2.000000 1.396790 2.603210
Jun 2015 2.397746 1.794537 3.000956
Jul 2015 2.000000 1.396790 2.603210
Aug 2015 2.397746 1.794537 3.000956
Sep 2015 2.000000 1.396790 2.603210
Oct 2015 2.000000 1.396790 2.603210
Nov 2015 2.397746 1.794537 3.000956
Dec 2015 2.795493 2.192283 3.398702
But I am looking for a way to get confidence intervals for the training data (the fitted values) as well. Can someone help me find a way to do this?
Thanks,
Kaly
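For in-sample (training) intervals, one common approximation is to take the one-step-ahead fitted values and add a band based on the innovation variance. A minimal base-R sketch, using the built-in AirPassengers series as a stand-in for the training data; note this treats every in-sample point as a one-step-ahead forecast, which is only an approximation:

```r
# Fit a simple seasonal ARIMA (stand-in for the model on train.ts)
fit <- arima(log(AirPassengers), order = c(1, 1, 0),
             seasonal = list(order = c(0, 1, 0), period = 12))

# In-sample fitted values = observed series minus one-step-ahead residuals
fitted_vals <- log(AirPassengers) - residuals(fit)

# Approximate 95% band from the estimated innovation variance
se <- sqrt(fit$sigma2)
lower_95 <- fitted_vals - 1.96 * se
upper_95 <- fitted_vals + 1.96 * se
```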
I have collected some time series data from the web and the timestamp that I got looks like below.
24 Jun
21 Mar
20 Jan
10 Dec
20 Jun
20 Jan
10 Dec
...
The interesting part is that the year is missing from the data; however, the records are ordered (most recent first), so the year can be inferred from each record and filled in. After imputation the data should look like this:
24 Jun 2014
21 Mar 2014
20 Jan 2014
10 Dec 2013
20 Jun 2013
20 Jan 2013
10 Dec 2012
...
Before rolling up my sleeves and writing a for loop with nested logic: is there an easy way, perhaps something that works out of the box in R, to impute the missing year?
Thanks a lot for any suggestion!
Here's one idea:
## Make data easily reproducible
df <- data.frame(day=c(24, 21, 20, 10, 20, 20, 10),
month = c("Jun", "Mar", "Jan", "Dec", "Jun", "Jan", "Dec"))
## Convert each month-day combo to its corresponding "julian date"
datestring <- paste("2012", match(df[[2]], month.abb), df[[1]], sep = "-")
date <- strptime(datestring, format = "%Y-%m-%d")
julian <- as.integer(strftime(date, format = "%j"))
## Transitions between years occur wherever julian date increases between
## two observations
df$year <- 2014 - cumsum(diff(c(julian[1], julian))>0)
## Check that it worked
df
# day month year
# 1 24 Jun 2014
# 2 21 Mar 2014
# 3 20 Jan 2014
# 4 10 Dec 2013
# 5 20 Jun 2013
# 6 20 Jan 2013
# 7 10 Dec 2012
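Once the year column exists, the three columns can be combined into proper Date values (a follow-up sketch; df is rebuilt here with the imputed years so the snippet runs on its own):

```r
df <- data.frame(day   = c(24, 21, 20, 10, 20, 20, 10),
                 month = c("Jun", "Mar", "Jan", "Dec", "Jun", "Jan", "Dec"),
                 year  = c(2014, 2014, 2014, 2013, 2013, 2013, 2012))

# month.abb maps the three-letter month names back to month numbers
df$date <- as.Date(paste(df$year, match(df$month, month.abb), df$day,
                         sep = "-"))
df$date
```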
The OP has asked to fill in the years in descending order, starting from 2014.
Here is an alternative approach that works without date conversion or fake dates. Furthermore, this approach can be modified to handle fiscal years that start in a month other than January.
# create sample dataset
df <- data.frame(
day = c(24L, 21L, 20L, 10L, 20L, 20L, 21L, 10L, 30L, 10L, 10L, 7L),
month = c("Jun", "Mar", "Jan", "Dec", "Jun", "Jan", "Jan", "Dec", "Jan",
"Jan", "Jan", "Jun"))
df$year <- 2014 - cumsum(c(0L, diff(100L*as.integer(
factor(df$month, levels = month.abb)) + df$day) > 0))
df
day month year
1 24 Jun 2014
2 21 Mar 2014
3 20 Jan 2014
4 10 Dec 2013
5 20 Jun 2013
6 20 Jan 2013
7 21 Jan 2012
8 10 Dec 2011
9 30 Jan 2011
10 10 Jan 2011
11 10 Jan 2011
12 7 Jun 2010
Completion of fiscal years
Let's assume the business has decided to start its fiscal year on February 1. Thus, January lies in a different fiscal year than February or March of the same calendar year.
To handle fiscal years, we only need to shuffle the factor levels accordingly:
df$fy <- 2014 - cumsum(c(0L, diff(100L*as.integer(
factor(df$month, levels = month.abb[c(2:12, 1)])) + df$day) > 0))
df
day month year fy
1 24 Jun 2014 2014
2 21 Mar 2014 2014
3 20 Jan 2014 2013
4 10 Dec 2013 2013
5 20 Jun 2013 2013
6 20 Jan 2013 2012
7 21 Jan 2012 2011
8 10 Dec 2011 2011
9 30 Jan 2011 2010
10 10 Jan 2011 2010
11 10 Jan 2011 2010
12 7 Jun 2010 2010