Model is not generated from ARFIMA in R

I want to forecast an ARFIMA model with the Kalman filter, but I am not able to pass the fitted arfima model to KalmanForecast.
library(forecast)   # base, stats and parallel do not need to be attached explicitly
sink(file='/home/nero/KF_arfima.log')
f <- COST$COST
x <- logb(f, 10)    # log10 of the series (was logb(p, 10), but p is only defined later)
# Start the clock!
ptm <- proc.time()
p  <- arfima(x[1:50], drange=c(0, 0.5), estim="mle")
pr <- KalmanForecast(2, p$model)
y    <- x[51:52]
yhat <- pr$pred                    # predicted values
mape <- mean(abs((y - yhat)/y))    # MAPE
proc.time() - ptm
print(mape)
I am getting the error:
Error in KalmanForecast(2, p$model) : invalid argument type
I also checked and found that the fitted object has no component called model. I have lost three days trying to solve this and have tried various R packages, but none of them solved it. Please let me know how to fix it.
Data Sample:
Timestamp,COST
2015-09-21T00:00:00+00:00,6
2015-09-21T00:06:00+00:00,7
2015-09-21T00:12:00+00:00,7
2015-09-21T00:18:00+00:00,7
2015-09-21T00:24:00+00:00,7
2015-09-21T00:30:00+00:00,7
2015-09-21T00:36:00+00:00,7
2015-09-21T00:42:00+00:00,6
2015-09-21T00:48:00+00:00,7
2015-09-21T00:54:00+00:00,6
2015-09-21T01:00:00+00:00,6
2015-09-21T01:06:00+00:00,7
2015-09-21T01:12:00+00:00,7
2015-09-21T01:18:00+00:00,7
2015-09-21T01:24:00+00:00,7
2015-09-21T01:30:00+00:00,7
2015-09-21T01:36:00+00:00,7
2015-09-21T01:42:00+00:00,6
2015-09-21T01:48:00+00:00,7
2015-09-21T01:54:00+00:00,6
2015-09-21T02:00:00+00:00,6
2015-09-21T02:06:00+00:00,8
2015-09-21T02:12:00+00:00,8
2015-09-21T02:18:00+00:00,7
2015-09-21T02:24:00+00:00,8
2015-09-21T02:30:00+00:00,7
2015-09-21T02:36:00+00:00,7
2015-09-21T02:42:00+00:00,7
2015-09-21T02:48:00+00:00,8
2015-09-21T02:54:00+00:00,7
2015-09-21T03:00:00+00:00,6
2015-09-21T03:06:00+00:00,7
2015-09-21T03:12:00+00:00,7
2015-09-21T03:18:00+00:00,7
2015-09-21T03:24:00+00:00,7
2015-09-21T03:30:00+00:00,7
2015-09-21T03:36:00+00:00,7
2015-09-21T03:42:00+00:00,7
2015-09-21T03:48:00+00:00,6
2015-09-21T03:54:00+00:00,6
2015-09-21T04:00:00+00:00,6
2015-09-21T04:06:00+00:00,7
2015-09-21T04:12:00+00:00,7
2015-09-21T04:18:00+00:00,7
2015-09-21T04:24:00+00:00,7
2015-09-21T04:30:00+00:00,6
2015-09-21T04:36:00+00:00,6
2015-09-21T04:42:00+00:00,6
2015-09-21T04:48:00+00:00,6
2015-09-21T04:54:00+00:00,7
2015-09-21T05:00:00+00:00,6
2015-09-21T05:06:00+00:00,7
2015-09-21T05:12:00+00:00,7

The help file for KalmanForecast clearly describes what sort of model object is required. The arfima function does not produce output of that kind, which is why p$model does not exist.
Rather than use KalmanForecast, you can use the forecast function from the forecast package to produce the forecasts. It also uses a Kalman filter to compute them.
If you really want KalmanForecast to do the work, you will have to figure out how to construct the mod argument yourself.
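For example, a minimal sketch of the forecast-based approach on the same series x from the question (2-step-ahead forecasts, then the MAPE against the held-out values):
library(forecast)

fit <- arfima(x[1:50], drange = c(0, 0.5), estim = "mle")
fc  <- forecast(fit, h = 2)        # forecasts from the arfima fit (Kalman filter internally, per the answer above)

y    <- x[51:52]
yhat <- as.numeric(fc$mean)        # point forecasts
mape <- mean(abs((y - yhat) / y))  # MAPE
print(mape)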

Related

Avoid failure of confint.merMod on weighted models in lme4 when data object modified in calling frame

I'm facing a problem when using the lme4 glmer function with weights: if the data object passed to glmer is later modified, some functions such as confint no longer work on the model. Here is an example:
library(lme4)
set.seed(1)
n <- 1000
df <- data.frame(
  y = rbinom(n, 1, .5),
  w = runif(n, 0, 1)*.1 + .95,
  g = as.integer(round(runif(n, 0, 4)))
)
m <- glmer(cbind(y,1-y)~(1|g),data=df,weights=w,family=binomial())
confint(m)
df$w <- df$w*2
confint(m)
The 2nd call to confint gives this error:
Computing profile confidence intervals ...
Error in profile.merMod(object, which = parm, signames = oldNames, ...) :
Profiling over both the residual variance and
fixed effects is not numerically consistent with
profiling over the fixed effects only
It seems this has something to do with the profile function, as that function doesn't work after modifying the data frame.
The following seems to remove the dependency on the data object, but I am a bit uneasy, not knowing whether there might ever be bad side effects:
glmer2 <- function(...){
  cl <- match.call()
  df <- eval.parent(cl$data)   # take a snapshot of the data in this function's frame
  cl[1] <- call("glmer")
  cl$data <- as.name("df")     # point the recorded call at the local copy
  eval(cl)
}
m <- glmer2(cbind(y,1-y)~(1|g),data=df,weights=w,family=binomial())
confint(m)
df$w <- df$w*2
confint(m)
(results of confint don't change)
The reason I need something like this is that I am creating a series of models, and need to re-compute the weights between each one, and it would be quite messy to keep all of the data objects.
Why do model functions seem to rely on the data object still being present and unchanged in the calling environment? And is there a better way to solve this issue?
(R version 3.6.3 (2020-02-29), x86_64-redhat-linux-gnu, lme4_1.1-21)
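One possible alternative, sketched here as an untested assumption rather than a confirmed fix: do.call() records the evaluated arguments in the model's call, so later re-fits inside profile()/confint() would use that snapshot of the data and weights instead of looking the objects up again in the calling frame.
library(lme4)

# Hypothetical variant: embed the evaluated data and weights in the call itself
m <- do.call(glmer, list(cbind(y, 1 - y) ~ (1 | g),
                         data = df, weights = df$w, family = binomial()))
confint(m)   # should be unaffected by later changes to df
The drawback is that the stored call then contains the whole data frame, which makes the printed model object verbose.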

How to fit an inverse Gaussian distribution to my data, preferably using fitdist {fitdistrplus}

I am trying to analyze some reaction time data using a GLMM. To find the distribution that fits my data best, I used fitdist() for the gamma and lognormal distributions; the results showed that the lognormal fits my data better.
However, I recently read that the inverse Gaussian distribution might be a better fit for reaction time data.
I used nigFitStart to obtain the start values:
library(GeneralizedHyperbolic)
invstrt <- nigFitStart(RTtotal, startValues = "FN")
which gave me this:
$paramStart
mu delta alpha beta
775.953984862 314.662306398 0.007477984 -0.004930604
So I tried using these start parameters for fitdist:
require(fitdistrplus)
fitinvgauss <- fitdist(RTtotal, "invgauss", start = list(mu=776, delta=314, alpha=0.007, beta=-0.05))
but I get the following error:
Error in checkparamlist(arg_startfix$start.arg, arg_startfix$fix.arg, :
'start' must specify names which are arguments to 'distr'.
I also used ig_fit from the goft package and got the following results:
Inverse Gaussian MLE
mu 775.954
lambda 5279.089
So this time I used these two parameters for the start argument in fitdist and still got the exact same error:
> fitinvgauss <- fitdist(RTtotal, "invgauss", start = list(mu=776, lambda=5279))
Error in checkparamlist(arg_startfix$start.arg, arg_startfix$fix.arg, :
'start' must specify names which are arguments to 'distr'.
Someone mentioned that changing the parameter names from mu and lambda to mean and shape had solved their problem, but I tried it and still got the same error.
Any idea how I can fix this? Or could you suggest an alternative way to fit an inverse Gaussian to my data?
Thank you.
dput(RTtotal)
c(594.96, 659.5, 706.14, 620.92, 811.05, 420.63, 457.08, 585.53,
488.59, 484.87, 496.72, 769.01, 458.92, 521.76, 889.08, 514.11,
553.09, 564.68, 1057.19, 437.79, 660.33, 639.58, 643.45, 419.47,
469.16, 457.78, 530.58, 538.73, 557.17, 1140.09, 560.03, 543.18,
1093.29, 607.59, 430.2, 712.06, 716.6, 566.69, 989.71, 449.96,
653.22, 556.52, 654.8, 472.54, 600.26, 548.36, 597.51, 471.97,
596.72, 600.29, 706.77, 511.6, 475.89, 599.13, 570.12, 767.57,
402.68, 601.56, 610.02, 891.95, 483.22, 588.78, 505.95, 554.15,
445.54, 489.02, 678.13, 532.06, 652.61, 654.79, 535.08, 1215.66,
633.6, 645.92, 454.37, 535.81, 508.97, 690.78, 685.97, 703.04,
731.99, 592.75, 662.03, 1400.33, 599.73, 1021.34, 1232.35, 855.1,
780.32, 554.4, 1965.77, 841.89, 1262.76, 721.62, 788.95, 1104.24,
1237.4, 1193.04, 513.91, 474.74, 380.56, 570.63, 700.96, 380.89,
481.96, 723.63, 835.22, 781.1, 468.76, 555.1, 522.22, 944.29,
541.06, 559.18, 738.68, 880.58, 500.14, 1856.97, 1001.59, 703.7,
1022.35, 1813.35, 1128.73, 864.75, 1166.77, 1220.4, 776.56, 2073.72,
1223.88, 617, 1387.71, 595.57, 1506.13, 678.41, 1797.87, 2111.04,
1116.61, 1038.6, 894.25, 778.51, 908.51, 1346.69, 989.09, 1334.17,
877.31, 649.31, 978.22, 1276.84, 1001.58, 1049.66, 1131.83, 700.8,
1267.21, 693.52, 1182.3)
So I'm guessing that you failed to tell us that you also have the statmod package loaded (or perhaps some other package with an 'invgauss' family that includes a dinvgauss function). You should be able to tell which package dinvgauss comes from by reading the top line of its help page. So, after installing that package and reading the help page (which one should ALWAYS do) for ?dinvgauss:
fitinvgauss <- fitdist(RTtotal, "invgauss",
                       start = list(mean=776, dispersion=314, shape=1))
fitinvgauss
# --------------
Fitting of the distribution ' invgauss ' by maximum likelihood
Parameters:
estimate Std. Error
mean 779.2535 NA
dispersion -1007.5490 NA
shape 4972.5745 NA
All I did was read the error message and then read the help page and use the correct names for that function's parameters. (And then play around a bit to get the parameter starting values into the feasible range of values.)
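A minimal end-to-end sketch, under the assumption that statmod is the package supplying dinvgauss (parameterised by mean, shape and dispersion), would then be:
library(statmod)        # provides dinvgauss/pinvgauss with arguments mean, shape, dispersion
library(fitdistrplus)

# start values only need to name arguments of dinvgauss;
# dispersion defaults to 1/shape if it is not supplied
fitinvgauss <- fitdist(RTtotal, "invgauss",
                       start = list(mean = mean(RTtotal), shape = 1))
summary(fitinvgauss)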

Error in ts(x) : 'ts' object must have one or more observations

When I do a forecast using the forecast library, I noticed that the following code does not run as expected:
library(forecast)
library(dplyr)
df1 <- data.frame(gp=gl(20,5), dt=seq(1:100))
get <- function (df1){
  ts1 <- ts((df1 %>% filter(gp==2))$dt)
  as.numeric(forecast(ar(ts1), 15)$mean)
}
print(get(df1))
The error returned is:
Error in ts(x) : 'ts' object must have one or more observations
Maybe it is caused by the ar or ar.burg function, because if you change ar to ets or something else the code works fine.
What is stranger is that if you change the code to:
library(forecast)
library(dplyr)
df1 <- data.frame(gp=gl(20,5), dt=seq(1:100))
ts1 <- ts((df1%>%filter(gp==2))$dt)
get <- function (ts1){
  as.numeric(forecast(ar(ts1), 15)$mean)
}
print(get(ts1))
then the code runs correctly. I think this may be a bug in the ar function, and that the problem is somehow related to scoping. Any thoughts on this?
The problem is to do with scoping. forecast() tries to find the time series used to fit the model. The functions from the forecast package (such as ets) store this information in the model object, so it is easy for forecast() to find it. But ar() is from the stats package, and it does not store the time series used to fit the model. So forecast() goes looking for it. If you run your code outside of the get() function, it works ok because forecast() manages to find the ts1 object in the local environment. But within the get() function it causes an error.
One simple fix is to add the information to the fitted model before calling forecast:
library(forecast)
library(dplyr)
df1 <- data.frame(gp=gl(20,5), dt=seq(1:100))
ts1 <- ts((df1%>%filter(gp==2))$dt)
get <- function (ts1){
  fit <- ar(ts1)
  fit$x <- ts1
  as.numeric(forecast(fit, 15)$mean)
}
print(get(ts1))
Alternatively, use predict instead of forecast:
library(dplyr)
df1 <- data.frame(gp=gl(20,5), dt=seq(1:100))
ts1 <- ts((df1%>%filter(gp==2))$dt)
get <- function (ts1){
  fit <- ar(ts1)
  as.numeric(predict(fit, n.ahead=15)$pred)
}
print(get(ts1))

How to feed data into ode while doing optimisation

I'm new to R. I found some very useful code, which I've tried to adapt for my purposes; however, I get these errors:
Error in func(time, state, parms, ...) : object 'k4' not found
Error in func(time, state, parms, ...) : object 'E' not found
I don't know where the problem is, as I can see all the parameters, and the data frame is correct as well.
Thank you everyone for taking the time to look at this. I've tried reducing the number of fitted parameters to 3 (k10, k11, k12) and using estimated values for the remaining ones (embedded in the code). However, I still get an error message: the E value from the data frame is not passed into the rxnrate function, and as a result ode can't use it. I've tried using events and forcing functions, but it doesn't seem to work. Thank you for spotting P4; it was a typo, it should be P, and I have already corrected it.
Editor's note: this was cross-posted to R-help, and that message gave the source of this code as the Stack Overflow question "r-parameter and initial conditions fitting ODE models with nls.lm".
#set working directory
setwd("~/R/wkspace")
#load libraries
library(ggplot2)
library(reshape2)
library(deSolve)
library(minpack.lm)
time=c(22,23,24,46,47,48)
cE=c(15.92,24.01,25.29,15.92,24.01,25.29)
cP=c(0.3,0.14,0.29,0.3,0.14,0.29)
cL=c(6.13,3.91,38.4,6.13,3.91,38.4)
df<-data.frame(time,cE,cP,cL)
df
names(df)=c("time","cE","cP","cL")
#rate function
rxnrate=function(t,c,parms){
  #rate constants passed through a list called parms
  k1=parms$k1
  k2=parms$k2
  k3=parms$k3
  k4=parms$k4
  k5=parms$k5
  k6=parms$k6
  k7=parms$k7
  k8=parms$k8
  k9=parms$k9
  k10=parms$k10
  #c is the concentration of species
  #derivatives dc/dt are computed below
  r=rep(0,length(c))
  r[1]=(k1+(k2*E^k10)/(k3^k10+E^k10))/(1+P/k6)-k4*((1+k5*P)/(1+k7*E))*c["pLH"]  #dRP_LH/dt
  r[2]=(1/k8)*k4*((1+k5*P)/(1+k7*E))*c["p"]-k9*c["L"]  #dL/dt
  return(list(r))
}
ssq=function(myparms){
  #initial concentrations
  cinit=c(pLH=unname(myparms[11]),LH=unname(myparms[12]))
  print(cinit)
  #time points for which conc is reported;
  #include the points where data is available
  t=c(seq(0,46,2),df$time)
  t=sort(unique(t))
  #parameters from the parameter estimation
  k1=myparms[1]
  k2=myparms[2]
  k3=myparms[3]
  k4=myparms[4]
  k5=myparms[5]
  k6=myparms[6]
  k7=myparms[7]
  k8=myparms[8]
  k9=myparms[9]
  k10=myparms[10]
  #solve ODE for a given set of parameters
  out=ode(y=cinit,times=t,func=rxnrate,
          parms=list(k1=k1,k2=k2,k3=k3,k4=k4,k5=k5,k6=k6,k7=k7,k8=k8,k9=k9,k10=k10,E=cE,P=cP))
  #keep only the rows at the time points where data is available
  outdf=data.frame(out)
  outdf=outdf[outdf$time %in% df$time,]
  #evaluate predicted vs experimental residuals
  preddf=melt(outdf,id.var="time",variable.name="species",value.name="conc")
  expdf=melt(df,id.var="time",variable.name="species",value.name="conc")
  ssqres=preddf$conc-expdf$conc
  return(ssqres)
}
# parameter fitting using Levenberg-Marquardt
#initial guess for parameters
myparms=c(k1=500, k2=4500, k3=200,k4=2.42, k5=0.26,k6=12.2,k7=0.004,k8=55,k9=24,k10=8,pLH=14.5,LH=3.55)
#fitting
fitval=nls.lm(par=myparms,fn=ssq)
#summary of fit
summary(fitval)
#estimated parameter
parest=as.list(coef(fitval))
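No answer is attached above, but one common way around the "object 'E' not found" problem is to treat cE and cP as time-varying forcing inputs rather than parameters. The sketch below is an assumption about the intended model, not a confirmed fix: it reuses df and the k parameters from the question, and assumes the state names pLH and LH from cinit (the original code indexes c["p"] and c["L"], which do not match those names).
library(deSolve)

# interpolate the observed E and P over time; rule = 2 holds the end values
# constant outside the observed time range
Efun <- approxfun(df$time, df$cE, rule = 2)
Pfun <- approxfun(df$time, df$cP, rule = 2)

rxnrate <- function(t, c, parms){
  with(as.list(parms), {
    E <- Efun(t)                      # forcing values at the current time
    P <- Pfun(t)
    r <- rep(0, length(c))
    r[1] <- (k1 + (k2*E^k10)/(k3^k10 + E^k10))/(1 + P/k6) -
            k4*((1 + k5*P)/(1 + k7*E))*c["pLH"]                      # dRP_LH/dt
    r[2] <- (1/k8)*k4*((1 + k5*P)/(1 + k7*E))*c["pLH"] - k9*c["LH"]  # dL/dt
    list(r)
  })
}
The ode() call inside ssq() would then pass only the rate constants, for example parms = as.list(myparms[1:10]), since E and P are no longer parameters.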

Is there any way to implement Elliott–Rothenberg–Stock (ERS) with the Hannan-Quinn lag selection criterion in R?

Hi time-series and R gurus,
Is there any way to implement the Elliott–Rothenberg–Stock (ERS) test with the Hannan-Quinn lag selection criterion in R?
Packages like urca, fUnitRoots and fSeries have commands for the Elliott–Rothenberg–Stock unit root test. However, I could not find any option that lets me use HQIC for automatic lag length selection.
For example, here is what is available:
gnp <- na.omit(nporg[, "gnp.r"])
ers.gnp <- ur.ers(gnp, type="DF-GLS", model="const", lag.max=4)
summary(ers.gnp)
I am after something like:
gnp <- na.omit(nporg[, "gnp.r"])
ers.gnp <- ur.ers(gnp, type="DF-GLS", model="const", lag.max="HQIC")
summary(ers.gnp)
The aim is to get something like the corresponding output produced in EViews.
Any help in this regard is much appreciated.
Mubashir
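One rough, untested workaround sketch (it assumes urca's ur.ers stores the DF-GLS test regression as a summary.lm in its testreg slot): run the test for each candidate lag, compute the Hannan-Quinn criterion from the test-regression residuals by hand, and keep the lag that minimises it.
library(urca)

hq_ers <- function(y, max.lag = 8, model = "const") {
  hq <- sapply(1:max.lag, function(k) {
    reg <- ur.ers(y, type = "DF-GLS", model = model, lag.max = k)@testreg
    res <- reg$residuals              # residuals of the DF-GLS test regression
    n   <- length(res)
    p   <- nrow(reg$coefficients)     # number of estimated coefficients
    log(sum(res^2) / n) + 2 * p * log(log(n)) / n   # Hannan-Quinn criterion
  })
  best <- which.min(hq)
  ur.ers(y, type = "DF-GLS", model = model, lag.max = best)
}

data(nporg)
gnp <- na.omit(nporg[, "gnp.r"])
summary(hq_ers(gnp))
Note that the effective sample size changes with the lag length, so this is only an approximation to a proper common-sample comparison.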
