Data imputation with the mtsdi package in R

I found the package mtsdi and the corresponding function mnimput (see its R documentation).
This function gives very good results for my data sets. However, I would like to understand the mathematical background of the function. I am familiar with the EM algorithm, but how exactly does the function work? How are splines used here, and is there a connection with the algorithm derived by Schafer?
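For reference, here is a minimal usage sketch based on the example shipped with the package (the miss data set and its c31-c35 columns come from the mnimput help page; check ?mnimput for the exact defaults):

library(mtsdi)

data(miss)                                  # example multivariate series with missing values
f <- ~ c31 + c32 + c33 + c34 + c35          # columns to impute
imp <- mnimput(f, miss, eps = 1e-3, ts = TRUE, method = "spline",
               sp.control = list(df = c(7, 7, 7, 7, 7)))  # spline df per column
predict(imp)                                # returns the completed data set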

Related

Preventing underforecasting of support vector regression in R

I'm currently using the e1071 package in R to forecast product demand with support vector regression via the package's svm function. While support vector regression yields much higher forecast accuracy on my data than other methods (e.g. ARIMA, simple exponential smoothing), my results show that the svm function tends to underforecast. In my particular case, underforecasting is worse and much more expensive than overforecasting. I therefore want something in R that tells support vector regression to penalize underforecasting much more heavily than overforecasting.
Unfortunately, I can't find any way to do this. There seems to be nothing on it in the e1071 package. The kernlab package has a support vector function (ksvm) that implements an 'eps-bsvr bound-constraint svm regression', but I can't find any information on what bound-constraint means or how to define that bound.
Has anyone seen examples of how to do this in R? I'm only finding very mathematical papers on asymmetric loss functions for support vector regression, and I don't have the skills to translate those into R code, so I'm looking for an existing solution in R.
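One partial workaround (a sketch, not a true asymmetric SVR loss): e1071::tune accepts a custom error measure via tune.control(error.fun = ...), so you can at least select hyperparameters that minimize an asymmetric loss. The 5x weight, the demand ~ . formula, and the dat data frame below are placeholder assumptions:

library(e1071)

# Asymmetric squared error: underforecasts (pred < true) cost 5x more (illustrative weight).
asym_loss <- function(true, pred) {
  err <- true - pred                        # err > 0 means we underforecast
  mean(ifelse(err > 0, 5 * err^2, err^2))
}

# Grid-search cost/epsilon, scoring candidates by the asymmetric loss under 5-fold CV.
tuned <- tune(svm, demand ~ ., data = dat,
              ranges = list(cost = 2^(0:6), epsilon = c(0.01, 0.1, 0.5)),
              tunecontrol = tune.control(sampling = "cross", cross = 5,
                                         error.fun = asym_loss))
fit <- tuned$best.model

Note that this does not change the epsilon-insensitive loss inside the SVM itself; it only biases model selection toward settings that underforecast less.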

R simulation periodic ARMA(1,1)

I'd like to simulate a periodic ARMA(1,1) using R. I found the R package perARMA, but I don't understand how to use it.
There is a function makeparma that permits simulating the PARMA(1,1), but I don't understand the input parameters or the model used to simulate the periodic structure.
This is the source code provided by the package, trying to simulate a PARMA(2,1):

library(perARMA)

T = 12                  # period length (number of seasons)
nlen = 480              # length of the simulated series
p = 1
a = matrix(0, T, p)     # coefficient matrix for the periodic AR part
q = 1
b = matrix(0, T, q)     # coefficient matrix for the periodic MA part
a[1,1] = .8
a[2,1] = .3
phia <- ab2phth(a)      # convert to the per-season AR parameter matrix
phi0 = phia$phi
phi0 = as.matrix(phi0)
b[1,1] = -.7
b[2,1] = -.6
thetab <- ab2phth(b)    # same conversion for the MA parameters
theta0 = thetab$phi
theta0 = as.matrix(theta0)
del0 = matrix(1, T, 1)  # innovation scale for each of the T seasons
PARMA21 <- makeparma(nlen, phi0, theta0, del0)  # simulate the series
parma <- PARMA21$y
I don't understand why we should specify two b values, or why del0 is a matrix.
I solved this using the R package sarima; for the simulation I used the function prepareSimSarima.
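For what it's worth, a sketch of that route, if I read the sarima docs correctly (sim_sarima is the one-shot simulator, while prepareSimSarima builds a reusable simulation function; note these simulate seasonal ARMA models with fixed rather than periodically varying coefficients, so check ?sim_sarima before relying on this):

library(sarima)

# Simulate 480 points of a seasonal ARMA with period 12; the ar/ma values below
# are illustrative placeholders, not the PARMA coefficients from the code above.
x <- sim_sarima(n = 480,
                model = list(ar = 0.8, ma = -0.7, nseasons = 12, sigma2 = 1))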

Has anyone used the rnn package in R for recurrent neural networks? How do I use it for prediction?

The rnn() function in R has no return statement. It generates synapses for the input, hidden, and output layers. How can I use these for prediction on a test sample of time-series data?
There was just an update of the rnn package: as of version 0.5.0 it can generalize beyond the toy example of binary addition.
You must use the trainr function to train the model and predictr to predict values on your data.
So far it only supports synchronized many-to-many learning, meaning that each new time-point input produces an output.
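A minimal sketch of that workflow on a toy univariate series (the windowing scheme and all tuning values are assumptions; trainr expects values scaled to [0, 1] and input arrays with dimensions samples x time x variables):

library(rnn)

set.seed(1)
ts_raw <- sin(seq(0, 20, by = 0.1)) + rnorm(201, sd = 0.05)  # toy series
ts_s <- (ts_raw - min(ts_raw)) / (max(ts_raw) - min(ts_raw)) # scale to [0, 1]

# Synchronized many-to-many samples: each X window predicts the series shifted one step.
win <- 10
n <- length(ts_s) - win
X <- t(sapply(seq_len(n), function(i) ts_s[i:(i + win - 1)]))
Y <- t(sapply(seq_len(n), function(i) ts_s[(i + 1):(i + win)]))
X <- array(X, dim = c(dim(X), 1))  # add the 'variables' dimension

model <- trainr(Y = Y, X = X, learningrate = 0.05, hidden_dim = 16, numepochs = 50)
pred <- predictr(model, X)         # one output per time step of each sample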

Using a 'gbm' model created in R package 'dismo' with functions in R package 'gbm'

This is a follow-up to a previous question I asked a while back that was recently answered.
I have built several gbm models with dismo::gbm.step, which relies on the gbm fitting functions found in R package gbm, as well as cross validation tools from R package splines.
As part of my analysis, I would like to use some of the graphical tools available in R (e.g. perspective plots) to visualize pairwise interactions in the data. Both the gbm and dismo packages have functions for detecting and modelling interactions in the data.
The implementation in dismo is explained in Elith et al. (2008) and returns a statistic which indicates departures of the model predictions from a linear combination of the predictors, while holding all other predictors at their means.
The implementation in gbm uses Friedman's H statistic (Friedman & Popescu, 2005); it returns a different metric and does NOT set the other variables at their means.
The interactions modelled and plotted with dismo::gbm.interactions are great and have been very informative. However, I would also like to use gbm::interact.gbm, partly for publication strength and also to compare the results of the two methods.
If I try to run gbm::interact.gbm on a gbm object created with dismo, an error is returned:
"Error in is.factor(data[, x$var.names[j]]) :
argument "data" is missing, with no default"
I understand dismo::gbm.step adds extra data the authors thought would be useful to the gbm model.
I also understand that the answer to my question lies somewhere in the source code.
My question is...
Is it possible to modify a gbm object created in dismo so it can be used with gbm::interact.gbm? If so, would this be accomplished by...
a. Modifying the gbm object created in dismo::gbm.step?
b. Modifying the source code for gbm::interact.gbm?
c. Doing something else?
I will be going through the source code trying to solve this myself, if I come up with a solution before anyone answers I will answer my own question.
The gbm::interact.gbm function requires data as an argument: interact.gbm <- function(x, data, i.var = 1, n.trees = x$n.trees).
The dismo gbm.object is essentially the same as the gbm gbm.object, but with extra information attached, so I don't imagine changing the gbm.object would help.
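Since data has no default in that signature, interact.gbm always needs the data frame supplied explicitly. A first thing to try (a sketch; mod and train_dat are placeholder names for the fitted model and the original training data frame):

library(gbm)
library(dismo)

# Pass the training data explicitly; i.var takes predictor names or column indices.
h <- interact.gbm(mod, data = train_dat, i.var = c(1, 2))
h  # Friedman's H statistic for that predictor pair

Because dismo's object is still a gbm object underneath, supplying the original data frame yourself may be all that is needed.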

GAMM with autocorrelation and binary data

Can someone recommend an approach for a GAMM in R that includes an autocorrelation structure (like gamm(..., correlation = corAR1()) in mgcv) but that is also recommended for handling binary response data? The gamm() help file has an explicit warning about using it for binary data:
"gamm performs poorly with binary data, since it uses PQL. It is better to use gam with s(...,bs="re") terms, or gamm4."
As best I can tell, gamm4 currently doesn't have an implementation for autocorrelation.
Hopefully I'm just missing something obvious.
Thanks!
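For concreteness, a sketch of the two options being weighed, with placeholder names (y is a 0/1 response, x a covariate, time an observation index, and site a grouping factor in data frame dat):

library(mgcv)

# PQL-based fit: accepts a residual correlation structure, but the help file
# warns against it for binary responses.
m1 <- gamm(y ~ s(x), family = binomial,
           correlation = corAR1(form = ~ time | site), data = dat)

# The alternative the help file suggests: random-effect smooths in gam()
# handle binary data well, but offer no corAR1-style residual autocorrelation.
# (site must be a factor for bs = "re".)
m2 <- gam(y ~ s(x) + s(site, bs = "re"), family = binomial, data = dat)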
