Negative values in time series when removing seasonal values with HoltWinters (R)
I'm new to R, so I'm having trouble with this time series data.
For example (the real data is much larger):
library(forecast)

data <- c(7, 5, 3, 2, 5, 2, 4, 11, 5, 4, 7, 22, 5, 14, 18, 20, 14, 22, 23, 20, 23, 16, 21, 23, 42, 64, 39, 34, 39, 43, 49, 59, 30, 15, 10, 12, 4, 2, 4, 6, 7)
ts <- ts(data, frequency = 12, start = c(2010, 1))   # note: this masks the ts() function; harmless here, but consider another name
So I try to decompose the data to adjust it:
ts.decompose <- decompose(ts)
ts.adjust <- ts - ts.decompose$seasonal
ts.hw <- HoltWinters(ts.adjust)
ts.forecast <- forecast(ts.hw, h = 10)   # the generic forecast() dispatches to the HoltWinters method
plot(ts.forecast)
But the first forecast values are negative. Why is this happening?
Well, you are forecasting the seasonally adjusted time series, and the deseasonalized series ts.adjust can itself contain negative values; in fact, it does here.
In addition, even if the original series contained only positive values, Holt-Winters can yield negative forecasts; its output is not constrained to be positive.
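You can check this on the example series itself:

min(ts.adjust)   # negative, confirming the deseasonalized series dips below zero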
I would suggest modeling your original (not seasonally adjusted) time series directly using ets() from the forecast package. It usually does a good job of detecting seasonality. (Note that it, too, can yield negative forecasts or prediction intervals.)
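A minimal sketch of that suggestion, using the ts object from the question above (the object names here are illustrative):

library(forecast)

fit.ets <- ets(ts)                  # ets() selects error/trend/seasonal components automatically
fc.ets <- forecast(fit.ets, h = 10)
plot(fc.ets)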
I very much recommend this free online forecasting textbook. Given your specific question, this may also be helpful.
Related
Time Series Forecasting using Support Vector Machine (SVM) in R
I've tried searching but couldn't find a specific answer to this question. So far I've learned that time series forecasting is possible using SVM. I've gone through a few papers/articles that perform it, but they don't include any code; they only explain the algorithm (which I didn't quite understand), and some use Python. My problem is this: I have univariate company sales data from 2010 to 2017, and I need to forecast the sales value for 2018 using SVM in R. Would you be kind enough to present and explain the R code to do this with a small example? I really appreciate your inputs and efforts! Thanks!
Let's assume you have monthly data, for example derived from the AirPassengers data set. You don't need time-series-class data, just a data frame containing time steps and values; let's name them x and y. Next you train an SVM model and specify the time steps you need to forecast, then use the predict function to compute the forecast for those time steps. That's it. However, a support vector machine is not commonly regarded as the best method for time series forecasting, especially for long series. It can perform well a few observations ahead, but I wouldn't expect good results when forecasting, e.g., daily data for the whole next year (though it obviously depends on the data). Simple R code for an SVM-based forecast:

library(e1071)   # provides svm()

# prepare sample data as a data frame with columns of time steps (x) and values (y)
data(AirPassengers)
monthly_data <- unclass(AirPassengers)
months <- 1:144
DF <- data.frame(months, monthly_data)
colnames(DF) <- c("x", "y")

# train an SVM model; consider further tuning the parameters for lower MSE
svmodel <- svm(y ~ x, data = DF, type = "eps-regression", kernel = "radial", cost = 10000, gamma = 10)

# specify time steps for the forecast, e.g. the whole series plus 12 months ahead
nd <- 1:156

# compute the forecast for all 156 months
prognoza <- predict(svmodel, newdata = data.frame(x = nd))

# plot the results
ylim <- c(min(DF$y), max(DF$y))
xlim <- c(min(nd), max(nd))
plot(DF$y, col = "blue", ylim = ylim, xlim = xlim, type = "l")
par(new = TRUE)
plot(prognoza, col = "red", ylim = ylim, xlim = xlim)
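As a follow-up on the tuning comment in that code (a sketch that is not part of the original answer): e1071 provides tune() for a cross-validated grid search over cost and gamma, which is safer than hand-picking values like cost = 10000:

library(e1071)

tuned <- tune(svm, y ~ x, data = DF,
              ranges = list(cost = 10^(1:4), gamma = c(0.1, 1, 10)))
summary(tuned)   # inspect cross-validation error and pick the best parameters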
Result of nnetar is strangely flat
I'm new to R but have some experience with ARIMA models. Now I wanted to learn a bit about neural networks for forecasting. I tried to repeat the procedure from Rob's post. It worked great for the data set he used, and also for imaginary data sets I created. But when I tried real-life data (seven years of monthly revenue), the resulting forecasts are strangely flat. My code:

library(forecast)

x <- read.csv("Revenue.csv", header = TRUE)
y <- ts(x, freq = 12, start = c(2011, 1))
(fit <- nnetar(y))
fcast <- forecast(fit, PI = TRUE, h = 20, bootstrap = TRUE)
autoplot(fcast)

The result is an almost straight line (attached as picture 1). That strikes me as odd, because the trend has been positive so far: there was revenue growth of more than 100% every year. Still, the result of nnetar is that the revenue will stabilise. How is that possible? As a comparison I used auto.arima() on the same data set (picture 2); it shows a clear upward trend.
One suggestion, even if it's hard to help without a data sample: it appears that nnetar is not capturing the trend in your data very well. You could try supplying a trend as an external regressor (the xreg argument), for example a deterministic trend (where start and end are the first and last time indices of your series):

Trend <- seq(from = start, to = end, by = 1)
(fit <- nnetar(y, xreg = Trend))
(f <- forecast(fit, h = h, xreg = seq(from = end + 1, to = end + h, by = 1)))

An alternative would be to use more lags or seasonal lags (the p and P arguments of your nnetar model).
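For concreteness, here is a runnable version of that idea (a sketch with assumed values: y is the monthly revenue series from the question and h = 20 matches the original forecast horizon):

library(forecast)

h <- 20
n <- length(y)
trend <- seq_len(n)                          # deterministic linear trend regressor
fit <- nnetar(y, xreg = trend)
fcast <- forecast(fit, h = h, xreg = seq(n + 1, n + h))
autoplot(fcast)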
Match "next day" using forecast() in R
I am working through the "Forecasting Using R" DataCamp course. I have completed the entire thing except for the last part of one particular exercise (link here, if you have an account), where I'm totally lost; the error help it's giving me isn't helping either. I'll put the various parts of the task down with the code I'm using to solve them.

Produce time plots of only the daily demand and maximum temperatures with facetting.

autoplot(elec[, c("Demand", "Temperature")], facets = TRUE)

Index elec accordingly to set up the matrix of regressors to include MaxTemp for the maximum temperatures, MaxTempSq which represents the squared value of the maximum temperature, and Workday, in that order.

xreg <- cbind(MaxTemp = elec[, "Temperature"],
              MaxTempSq = elec[, "Temperature"]^2,
              Workday = elec[, "Workday"])

Fit a dynamic regression model of the demand column with ARIMA errors and call this fit.

fit <- auto.arima(elec[, "Demand"], xreg = xreg)

If the next day is a working day (indicator is 1) with maximum temperature forecast to be 20°C, what is the forecast demand? Fill out the appropriate values in cbind() for the xreg argument in forecast().

This is where I'm stuck. The sample code they supply looks like this:

forecast(___, xreg = cbind(___, ___, ___))

I have managed to work out that the first blank is fit, so I'm trying code that looks like this:

forecast(fit, xreg = cbind(elec[, "Workday"] == 1, elec[, "Temperature"] == 20, elec[, "Demand"]))

But that is giving me the error hint "Make sure to forecast the next day using the inputs given in the instructions." Which... doesn't tell me anything useful. Any ideas what I should be doing instead?
When you forecast ahead of time, you use new data that was not included in elec (the data set you used to fit your model). The new data is given in the question (temperature 20°C and workday 1). Therefore you do not need elec in your forecast call; just use the new data to forecast ahead:

forecast(fit, xreg = cbind(20, 20^2, 1))
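One caveat worth adding (not part of the original answer): recent versions of the forecast package match xreg columns by name against the regressors used in fitting, so it is safest to name them exactly as in the training matrix:

newx <- cbind(MaxTemp = 20, MaxTempSq = 20^2, Workday = 1)
forecast(fit, xreg = newx)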
Time series forecasting, dealing with known big orders
I have many data sets with known outliers (big orders):

data <- matrix(c("08Q1", "08Q2", "08Q3", "08Q4", "09Q1", "09Q2", "09Q3", "09Q4", "10Q1", "10Q2", "10Q3", "10Q4", "11Q1", "11Q2", "11Q3", "11Q4", "12Q1", "12Q2", "12Q3", "12Q4", "13Q1", "13Q2", "13Q3", "13Q4", "14Q1", "14Q2", "14Q3", "14Q4", "15Q1",
                 155782698, 159463653.4, 172741125.6, 204547180, 126049319.8, 138648461.5, 135678842.1, 242568446.1, 177019289.3, 200397120.6, 182516217.1, 306143365.6, 222890269.2, 239062450.2, 229124263.2, 370575384.7, 257757410.5, 256125841.6, 231879306.6, 419580274, 268211059, 276378232.1, 261739468.7, 429127062.8, 254776725.6, 329429882.8, 264012891.6, 496745973.9, 284484362.55),
               ncol = 2, byrow = FALSE)

The top 11 outliers of this specific series are:

outliers <- matrix(c("14Q4", "14Q2", "12Q1", "13Q1", "14Q2", "11Q1", "11Q4", "14Q2", "13Q4", "14Q4", "13Q1",
                     20193525.68, 18319234.7, 12896323.62, 12718744.01, 12353002.09, 11936190.13, 11356476.28, 11351192.31, 10101527.85, 9723641.25, 9643214.018),
                   ncol = 2, byrow = FALSE)

What methods are there to forecast the time series taking these outliers into consideration? I have already tried replacing each outlier with the next-biggest value (running the data set 10 times, replacing one more outlier each time, until the 10th data set has all the outliers replaced). I have also tried simply removing the outliers (again running the data set 10 times, removing one more outlier each time until all 10 are removed in the 10th data set). I just want to point out that removing these big orders does not delete the data point completely, as there are other deals that happen in that quarter.

My code tests the data through multiple forecasting models (ARIMA weighted on the out-of-sample, ARIMA weighted on the in-sample, ARIMA weighted, ARIMA, additive Holt-Winters weighted, and multiplicative Holt-Winters weighted), so it needs to be something that can be adapted to these multiple models. Here are a couple more data sets that I used; I do not have the outliers for these series yet, though:

data <- matrix(c("08Q1", "08Q2", "08Q3", "08Q4", "09Q1", "09Q2", "09Q3", "09Q4", "10Q1", "10Q2", "10Q3", "10Q4", "11Q1", "11Q2", "11Q3", "11Q4", "12Q1", "12Q2", "12Q3", "12Q4", "13Q1", "13Q2", "13Q3", "13Q4", "14Q1", "14Q2", "14Q3",
                 26393.99306, 13820.5037, 23115.82432, 25894.41036, 14926.12574, 15855.8857, 21565.19002, 49373.89675, 27629.10141, 43248.9778, 34231.73851, 83379.26027, 54883.33752, 62863.47728, 47215.92508, 107819.9903, 53239.10602, 71853.5, 59912.7624, 168416.2995, 64565.6211, 94698.38748, 80229.9716, 169205.0023, 70485.55409, 133196.032, 78106.02227),
               ncol = 2, byrow = FALSE)

data <- matrix(c("08Q1", "08Q2", "08Q3", "08Q4", "09Q1", "09Q2", "09Q3", "09Q4", "10Q1", "10Q2", "10Q3", "10Q4", "11Q1", "11Q2", "11Q3", "11Q4", "12Q1", "12Q2", "12Q3", "12Q4", "13Q1", "13Q2", "13Q3", "13Q4", "14Q1", "14Q2", "14Q3",
                 3311.5124, 3459.15634, 2721.486863, 3286.51708, 3087.234059, 2873.810071, 2803.969394, 4336.4792, 4722.894582, 4382.349583, 3668.105825, 4410.45429, 4249.507839, 3861.148928, 3842.57616, 5223.671347, 5969.066896, 4814.551389, 3907.677816, 4944.283864, 4750.734617, 4440.221993, 3580.866991, 3942.253996, 3409.597269, 3615.729974, 3174.395507),
               ncol = 2, byrow = FALSE)

If this is too complicated, then I'd appreciate an explanation of how, in R, once outliers are detected using certain commands, the data is dealt with before forecasting (e.g. smoothing), and how I can approach writing code for that myself (not using the commands that detect outliers).
Your outliers appear to be seasonal variations, with the largest orders appearing in the 4th quarter. Many of the forecasting models you mentioned include the capability for seasonal adjustments. As an example, the simplest model could have a linear dependence on year with corrections for all seasons. The code would look like:

df <- data.frame(
  period = c("08Q1", "08Q2", "08Q3", "08Q4", "09Q1", "09Q2", "09Q3", "09Q4", "10Q1", "10Q2", "10Q3",
             "10Q4", "11Q1", "11Q2", "11Q3", "11Q4", "12Q1", "12Q2", "12Q3", "12Q4", "13Q1", "13Q2",
             "13Q3", "13Q4", "14Q1", "14Q2", "14Q3", "14Q4", "15Q1"),
  order = c(155782698, 159463653.4, 172741125.6, 204547180, 126049319.8, 138648461.5,
            135678842.1, 242568446.1, 177019289.3, 200397120.6, 182516217.1, 306143365.6,
            222890269.2, 239062450.2, 229124263.2, 370575384.7, 257757410.5, 256125841.6,
            231879306.6, 419580274, 268211059, 276378232.1, 261739468.7, 429127062.8,
            254776725.6, 329429882.8, 264012891.6, 496745973.9, 42748656.73))

seasonal <- data.frame(year = as.numeric(substr(df$period, 1, 2)),
                       qtr = substr(df$period, 3, 4),
                       data = df$order)
ord_model <- lm(data ~ year + qtr, data = seasonal)
seasonal <- cbind(seasonal, fitted = ord_model$fitted)

library(reshape2)
library(ggplot2)
plot_fit <- melt(seasonal, id.vars = c("year", "qtr"),
                 variable.name = "Source", value.name = "Order")
ggplot(plot_fit, aes(x = year, y = Order, colour = qtr, shape = Source)) +
  geom_point(size = 3)

which plots the actual and fitted orders by year, coloured by quarter. Models with a seasonal adjustment but a nonlinear dependence on year may give better fits.
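To try the nonlinear variant suggested in the last sentence (a sketch reusing the seasonal data frame from the code above), you could add a quadratic year term and compare the fits:

ord_model2 <- lm(data ~ poly(year, 2) + qtr, data = seasonal)   # quadratic trend in year
AIC(ord_model, ord_model2)                                      # lower AIC indicates the better fit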
You already said you tried different ARIMA models, but as WaltS mentioned, your series doesn't seem to contain big outliers so much as a seasonal component, which is nicely captured by auto.arima() in the forecast package:

library(forecast)

myTs <- ts(as.numeric(data[, 2]), start = c(2008, 1), frequency = 4)
myArima <- auto.arima(myTs, lambda = 0)
myForecast <- forecast(myArima)
plot(myForecast)

The lambda = 0 argument to auto.arima() applies a Box-Cox transformation (equivalent to taking the log of the data) to account for the increasing amplitude of the seasonal component.
The approach you are using to cleanse your data of outliers is not going to be robust enough to identify them. (There is a free outlier package in R called tsoutliers, but it won't do the things I am about to show you.) You have an interesting time series here. The trend changes over time, with the upward trend weakening a bit. If you bring in two time-trend variables, the first beginning at 1 and the second beginning at period 14 and running forward, you will capture this change. As for seasonality, you can capture the high 4th quarter with a dummy variable. The model is parsimonious, as the other three quarters are not different from the average, and there is no need for an AR(12) term, seasonal differencing, or three seasonal dummies. You can also capture the impact of the last two observations being outliers with two dummy variables. [The original answer included a screenshot of the model output; the "49" above the word "trend" there is just the name of the series being modeled.]
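The answer describes this model verbally without code; the following is one plausible construction (a sketch: the regressor names and the use of forecast::Arima are assumptions, not the answerer's actual setup):

library(forecast)

y <- ts(as.numeric(data[, 2]), start = c(2008, 1), frequency = 4)   # first data matrix from the question
n <- length(y)
trend1 <- seq_len(n)                          # trend beginning at period 1
trend2 <- pmax(0, seq_len(n) - 13)            # second trend beginning at period 14
q4 <- as.numeric(cycle(y) == 4)               # fourth-quarter dummy
out1 <- as.numeric(seq_len(n) == n - 1)       # pulse dummy for the second-to-last observation
out2 <- as.numeric(seq_len(n) == n)           # pulse dummy for the last observation
xreg <- cbind(trend1, trend2, q4, out1, out2)
fit <- Arima(y, order = c(0, 0, 0), xreg = xreg)   # plain regression with white-noise errors
summary(fit)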
Is it possible to arrange a time series so that a specific autocorrelation is created?
I have a file containing 2,500 random numbers. Is it possible to rearrange these saved numbers so that a specific autocorrelation is created? Let's say, an autocorrelation at lag 1 of 0.2, an autocorrelation at lag 2 of 0.4, and so on. Any help is greatly appreciated!

To be more specific: the time series of an asset's daily return (in percent) has the following characteristics that I am trying to recreate:

- a leptokurtic, symmetric distribution, let's say centered at a daily return of zero;
- no significant autocorrelations (because the sign of a daily return is not predictable);
- significant autocorrelations if the time series is squared.

The aim is to produce a random time series which satisfies all three characteristics. The only two inputs should be the leptokurtic distribution (which I have already created) and the specific autocorrelation of the squared resulting time series (e.g. the final squared series should have an autocorrelation at lag 1 of 0.2). I only know how to produce random numbers from my own mixed distribution. Naturally, if I simply square that resulting series, there is no autocorrelation. I would like to find a way that takes this into account.
Generally the most straightforward way to create autocorrelated data is to generate it so that it's autocorrelated. For example, you could create an autocorrelated path by always using the value at time p - 1 as the mean for the random draw at time period p. Rearranging is not only hard but conceptually odd. What are you really trying to achieve in the end? Giving some context might allow better answers.
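A minimal sketch of that generation idea (the scaling coefficient phi is an assumption added here so the path stays stationary with a lag-1 autocorrelation near phi; the answer itself just uses the previous value as the mean):

set.seed(42)
n <- 2500
phi <- 0.2                                    # target lag-1 autocorrelation (assumed)
x <- numeric(n)
x[1] <- rnorm(1)
for (p in 2:n) {
  x[p] <- rnorm(1, mean = phi * x[p - 1])     # each draw's mean depends on the previous value
}
acf(x, lag.max = 5)                           # lag-1 autocorrelation should be close to phi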
There are functions for simulating autocorrelated data: arima.sim() from the stats package and simulate.Arima() from the forecast package. simulate.Arima() has two advantages: (1) it can simulate seasonal ARIMA models (sometimes called "SARIMA"), and (2) it can simulate a continuation of an existing time series to which you have already fitted an ARIMA model. To use simulate.Arima(), you do need an existing Arima object.

UPDATE: type ?arima.sim and scroll down to "Examples". Alternatively:

install.packages("forecast")
library(forecast)

fit <- auto.arima(USAccDeaths)
plot(USAccDeaths, xlim = c(1973, 1982))
lines(simulate(fit, 36), col = "red")