I am currently encountering a problem with the HoltWinters() function, which I use alongside the forecast package. For some of my time series I get a larger error (SSE) when I use Holt-Winters with trend than without trend (simple exponential smoothing).
As I understand it, Holt-Winters with trend should be at least as good as Holt-Winters without trend.
Can anybody explain this? I added an example below.
Best wishes,
Chris
library(forecast)
x = c(50, 50, 70, 50, 90, 70, 90, 80, 70, 40, 60, 20, 60, 60, 40, 40, 40, 50, 50, 30, 60, 40, 40, 40, 50, 10, 20, 60, 70, 60, 60, 80, 70, 80, 90, 80, 70, 30, 30, 80, 100, 80, 80, 20, 40, 30, 40, 50, 60, 30, 80, 100)
mSES = HoltWinters(x, alpha = TRUE, beta = FALSE, gamma = FALSE)
mHW = HoltWinters(x, alpha = TRUE, beta = TRUE, gamma = FALSE)
mSES$SSE
mHW$SSE
Don't set alpha or beta to TRUE. If beta is set to FALSE the function does exponential smoothing without a trend, but otherwise the value you pass is taken as the smoothing parameter itself, so TRUE gets coerced to 1. Instead of letting the function choose the parameters that minimize the SSE, you are explicitly setting them to 1.
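A minimal sketch of the corrected comparison, reusing the x defined above: leave alpha and beta unspecified so that HoltWinters() estimates them by minimizing the SSE.
mSES2 = HoltWinters(x, beta = FALSE, gamma = FALSE)  # simple exponential smoothing, alpha estimated
mHW2 = HoltWinters(x, gamma = FALSE)                 # with trend, alpha and beta both estimated
mSES2$SSE
mHW2$SSE  # with estimated parameters, the trend model should generally not do worse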
I am trying to find the best-fitting model for my data using the lme() function from library(nlme) in R. Here is my model when the slope is fixed:
FixedRopeLength <- lme(EnergyCost~ RopeLength,
data = data,
random=~1|Subject, method = "ML")
summary(FixedRopeLength)
To see whether a random slope provides a better model than a fixed slope, I let the slope vary across Subject as follows:
RandomRopeLength <- lme(EnergyCost~RopeLength,
data = data,
random=~RopeLength|Subject, method = "ML")
summary(RandomRopeLength)
However, I got this error:
Error in lme.formula(EnergyCost ~ RopeLength, data = data, random =
~RopeLength | : nlminb problem, convergence error code = 1
message = iteration limit reached without convergence (10)
Any solution?
Thank you so much for your help. Your code worked; I only needed to adapt it to the lme() function. Here is the code that resolves the error above:
RandomRopeLength<-lme(EnergyCost~RopeLength, data = data, random=~RopeLength|Subject, method = "ML", control =list(msMaxIter = 1000, msMaxEval = 1000))
summary(RandomRopeLength)
Thanks!
?lme shows that there is a control argument, which redirects you to ?lmeControl, which gives you
msMaxIter: maximum number of iterations for the optimization step
inside the ‘lme’ optimization. Default is ‘50’.
and
msMaxEval: maximum number of evaluations of the objective function
permitted for nlminb. Default is ‘200’.
These correspond to eval.max and iter.max from ?nlminb. Since I'm not sure which of these is the problem, I would re-run the model with
control = lmeControl(msMaxIter = 1000, msMaxEval = 1000)
However, I'll warn you that once a model runs into numerical problems with the default parameter settings, adjusting those settings may just lead to other problems farther down the line ...
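For completeness, a sketch of the re-run using lmeControl() rather than a plain list (variable names taken from the question):
RandomRopeLength <- lme(EnergyCost ~ RopeLength,
                        data = data,
                        random = ~RopeLength | Subject, method = "ML",
                        control = lmeControl(msMaxIter = 1000, msMaxEval = 1000))
summary(RandomRopeLength)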
How to check accuracy() for a VAR model, and how to determine the right seasonality (is there a function?)
I am trying to create a VAR model. I have monthly data:
Var_model <- VAR(cb, p = 1, type = "both", season = 12, exog = NULL)
I set season = 12 by default since my data is monthly. How can I determine the seasonality?
[mstl decomposition plots for the four monthly series: peca, waln, almo, pean]
Here is the main problem: how do I run accuracy() on a VAR model?
forecast <- predict(Var_model, n.ahead = 24, ci = 0.95)
accuracy(forecast$fcst[[1]][,"fcst"], almo)
Here I think I am following the procedure accuracy(forecast, data), but I still get an error:
Error in testaccuracy(object, x, test, d, D) : Not enough forecasts.
Check that forecasts and test data match
There is a varresult element inside your model Var_model, so you can get the usual accuracy metrics and judge how good your model is on the training set like this:
accuracy(your_model_name$varresult$col_name)
Also, you should check the decomposition for the seasonal period, as well as the lag correlations and cross-correlations, because in a VAR every variable affects the others' forecasts.
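A minimal sketch under the question's names (Var_model, cb, and the series almo are assumptions taken from the post; almo is assumed to be a monthly ts with frequency = 12):
library(vars)
library(forecast)
# training-set accuracy for the "almo" equation of the fitted VAR
accuracy(Var_model$varresult$almo)
# rough seasonality check: nsdiffs() > 0 suggests the series needs seasonal differencing
nsdiffs(almo)
# out-of-sample check: hold out the last 24 months, refit, and compare against the held-out values
cb_train <- head(cb, nrow(cb) - 24)
fit_train <- VAR(cb_train, p = 1, type = "both", season = 12)
fc <- predict(fit_train, n.ahead = 24)
accuracy(fc$fcst$almo[, "fcst"], tail(cb[, "almo"], 24))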
surv.gbm in the mlr3 framework outputs linear predictors; however, what I'm really interested in are predicted survival times per case, which I want to compare with the actual survival times. Is there a way to obtain predicted survival times?
In the mlr3 book, there is an example of a transformation between linear predictors and a distribution.
pod = po("distrcompose", param_vals = list(form = "ph", overwrite = FALSE))
prediction = pod$predict(list(base = prediction_distr, pred = prediction_lp))$output
Is there a way to change this pipeline so that it converts "lp" to "response"?
Any help would be appreciated.
Yes, this is definitely possible; it just requires another transformation. Your first step is correct: compose a distribution from the linear predictor. As you're using surv.gbm, only Cox PH is possible as the underlying model, so the default for distrcompose works here.
Now you need crankcompose to create a survival time prediction from the distribution. You could use the mean, median, or mode of the distribution; people usually pick the mean or median, but that's your choice! Just make sure to include response = TRUE, overwrite = FALSE. Example code below, including creating predictions and scoring with RMSE (surprisingly quite good!). I think the book may need updating...
Thanks,
Raphael
library(mlr3extralearners)
library(mlr3proba)
library(mlr3pipelines)
library(mlr3)
learn = ppl("crankcompositor", ppl("distrcompositor", lrn("surv.gbm")),
            response = TRUE, overwrite = FALSE, method = "mean",
            graph_learner = TRUE)
set.seed(1)
task = tgen("simsurv")$generate(50)
learn$train(task)
p = learn$predict(task)
p$score(msr("surv.rmse"))
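As a quick follow-up sketch of my own (not from the book): p$response holds the composed survival times and p$truth is the usual Surv object, so the two can be lined up for an eyeball comparison.
head(data.frame(observed_time = p$truth[, 1], status = p$truth[, 2], predicted_time = p$response))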
I am trying to calculate the temporal autocorrelation of a Poisson mixed model and was wondering how to do so. I get an error that says "$ operator not defined for this S4 class". I can successfully run dwtest() on a generalized linear model with a Poisson distribution, but not on the model I really want.
Successful model and code:
library(lmtest)  # dwtest() comes from lmtest
temp.nem.cuc.glm <- glm(AllDat$nem.cuc ~ AllDat$year.collected, family = poisson(link = "log"))
summary(temp.nem.cuc.glm)
time <- AllDat$year.collected
dwnem.cuc <- dwtest(temp.nem.cuc.glm, order.by = time, alternative = "two.sided", iterations = 50, exact = FALSE, tol = 1e-10)
dwnem.cuc
Unsuccessful model and code:
library(lme4)  # glmer() comes from lme4
# the model I am really interested in
nem.cuc.pois <- glmer(nem.cuc ~ I(year.collected - 1930) + I(standard.length..mm./100) + (1|sites1), family = "poisson", data = AllDat)
time <- AllDat$year.collected
dwnemresid.cuc <- dwtest(nem.cuc.pois, order.by = time, alternative = "two.sided", iterations = 50, exact = FALSE, tol = 1e-10)
dwnemresid.cuc
For future reference, I found the "check_autocorrelation" function from the performance package, but it returns only whether it detected autocorrelation or not.
I would also be interested in finding a DW test that returns a statistic, if anyone knows one - or in a way to solve the S4 class error, since I encounter it repeatedly.
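A minimal sketch of a workaround, assuming the fitted nem.cuc.pois from above: pull the residuals out of the merMod object, order them by collection year, and compute the Durbin-Watson statistic by hand (this sidesteps the S4 dispatch problem in dwtest); the check_autocorrelation() call from the performance package is shown alongside for the yes/no answer mentioned above.
library(performance)
r <- residuals(nem.cuc.pois)[order(AllDat$year.collected)]  # deviance residuals, in time order
dw <- sum(diff(r)^2) / sum(r^2)  # Durbin-Watson statistic; values near 2 suggest little autocorrelation
dw
check_autocorrelation(nem.cuc.pois)  # yes/no answer only, as noted above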
I am using the caret package to train an elastic net model on my dataset modDat. I take a grid search approach paired with repeated cross validation to select the optimal values of the lambda and fraction parameters required by the elastic net function. My code is shown below.
library(caret)
library(elasticnet)
grid <- expand.grid(
lambda = seq(0.5, 0.7, by=0.1),
fraction = seq(0, 1, by=0.1)
)
ctrl <- trainControl(
method = 'repeatedcv',
number = 5, #folds
repeats = 10, #repeats
classProbs = FALSE
)
set.seed(1)
enetTune <- train(
y ~ .,
data = modDat,
method = 'enet',
metric = 'RMSE',
tuneGrid = grid,
verbose = FALSE,
trControl = ctrl
)
I can get predictions using y_hat <- predict(enetTune, modDat), but I cannot view the coefficients underlying the predictions.
I have tried coef(enetTune$finalModel), but the only thing returned is NULL. I suspect that I have to give the coef() function more information, but I am not sure how to do this.
In addition, I would like to produce a box plot of the 50 sets of coefficients (10 repeats of 5 folds) associated with the optimal lambda and fraction parameters.
To see the coefficients, use predict:
predict(enetTune$finalModel, type = "coefficients")
See ?predict.enet for more information on how to get specific coefficients.
Following on from the answer by #Weihuang Wong, you can get the coefficients from the final model using the following code:
predict.enet(enetTune$finalModel, s=enetTune$bestTune[1, "fraction"], type="coef", mode="fraction")$coefficients
To me what works best is stats::predict, as in #Weihuang Wong's answer. However, as the OP pointed out in a comment, that provides a list of coefficients for every value of lambda tested.
The important thing to understand here is that when you use predict, your intention is precisely to predict the value of the parameters, and not really to retrieve them. You should be aware of that and explore the options available.
In this case, you can use the same function with the argument s for the penalty parameter lambda. Remember that you are still predicting, but this time you will get the coefficients you are looking for.
stats::predict(enetTune$finalModel, type = "coefficients", s = enetTune$bestTune$lambda)
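For completeness, a sketch that combines the answers above: extract the coefficient vector at the tuned fraction (object names follow the question's enetTune) and keep only the nonzero entries.
cf <- predict(enetTune$finalModel,
              s = enetTune$bestTune$fraction,
              type = "coefficients",
              mode = "fraction")$coefficients
cf[cf != 0]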