R neuralnet does not converge within stepmax for time series

I'm writing a neural network in R to predict elements of the time series x + sin(x^2), using the neuralnet package. This is how the training data is generated, assuming a window of 4 elements, with the last one being the element to predict:
nntr0 <- ((1:25) + sin((1:25)^2))
nntr1 <- ((2:26) + sin((2:26)^2))
nntr2 <- ((3:27) + sin((3:27)^2))
nntr3 <- ((4:28) + sin((4:28)^2))
nntr4 <- ((5:29) + sin((5:29)^2))
Then, I turn these into a data.frame:
nntr <- data.frame(nntr0, nntr1, nntr2, nntr3, nntr4)
Then, I proceed to train the NN:
net.sinp <- neuralnet(nntr4 ~ nntr0 + nntr1 + nntr2 + nntr3, data = nntr,
                      hidden = 10, threshold = 0.04, act.fct = "tanh",
                      linear.output = TRUE, stepmax = 100000)
Which, after a while, gives me the message
Warning message:
algorithm did not converge in 1 of 1 repetition(s) within the stepmax
Call: neuralnet(formula = nntr4 ~ nntr0 + nntr1 + nntr2 + nntr3, data = nntr, hidden = 10, threshold = 0.04, stepmax = 100000, act.fct = "tanh", linear.output = TRUE)
Can anyone help me figure out why it is not converging? Many thanks

With tanh as the activation function (it is bounded), it is very difficult for the network to reproduce the linear trend in your signal. You can either use linear activation functions instead, or detrend the signal first.
# Data
dx <- 1
n <- 25
x <- seq(0, by = dx, length.out = n + 4)
y <- x + sin(x^2)
# Lagged copies of the series: y0..y3 are the inputs, y4 is the target
y0 <- y[1:n]
y1 <- y[1 + 1:n]
y2 <- y[2 + 1:n]
y3 <- y[3 + 1:n]
y4 <- y[4 + 1:n]
d <- data.frame(y0, y1, y2, y3, y4)
library(neuralnet)
# Linear activation functions
r <- neuralnet(y4 ~ y0 + y1 + y2 + y3, data = d, hidden = 10)
plot(y4, compute(r, d[, -5])$net.result)
# No trend: subtract the linear part x before training
d2 <- data.frame(
  y0 = y0 - x[1:n],
  y1 = y1 - x[1 + 1:n],
  y2 = y2 - x[2 + 1:n],
  y3 = y3 - x[3 + 1:n],
  y4 = y4 - x[4 + 1:n]
)
r <- neuralnet(y4 ~ y0 + y1 + y2 + y3, data = d2, hidden = 10, act.fct = "tanh")
plot(d2$y4, compute(r, d2[, -5])$net.result)
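Note that the detrended fit predicts y4 - x[4 + 1:n]; to get predictions back on the original scale, the removed trend just gets added back. A minimal sketch reusing the objects above (my addition, not part of the original answer):
# Predict on the detrended scale, then add the linear trend back
pred <- compute(r, d2[, -5])$net.result + x[4 + 1:n]
plot(y4, pred)
abline(0, 1)  # points near the diagonal indicate a good fit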

Warning message:
algorithm did not converge in 1 of 1 repetition(s) within the stepmax
means your algorithm reached the maximum number of steps before it converged. If you type ?neuralnet and look at the definition of stepmax, it says:
the maximum steps for the training of the neural network. Reaching this maximum leads to a stop of the neural network's training process.
For your problem, I recommend increasing your stepmax value to 1e7 and seeing what happens.
The code will then be:
net.sinp <- neuralnet(nntr4 ~ nntr0 + nntr1 + nntr2 + nntr3, data = nntr,
                      hidden = 10, threshold = 0.04, act.fct = "tanh",
                      linear.output = TRUE, stepmax = 1e7)
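Once a run does converge, the fitted object summarizes the training; a quick check (my addition, not part of the original answer) of the final error, the threshold reached, and the number of steps used:
net.sinp$result.matrix[c("error", "reached.threshold", "steps"), ]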

Related

Issues related to Gamma Regression

I want to derive the coefficients of a Gamma regression by the iteratively reweighted least squares method (manually). When I run this code without the for loop it works properly, but with the loop it produces NaN. My code is:
n <- 10
y <- rgamma(n, 10, 0.1)
x1 <- rnorm(n, -1, 1)
x2 <- rnorm(n, -1, 1)
x3 <- rnorm(n, -1, 1)
x <- as.matrix(cbind(1, x1, x2, x3))
reg <- glm(y ~ x1 + x2 + x3, family = Gamma(link = "inverse"))
### step 1
W <- G <- matrix(0, ncol = length(y), nrow = length(y))
b <- rep(0, 4)
for (i in 1:50) {
  ### step 2
  eta <- x %*% b
  mu <- pnorm(eta)
  diag(G) <- 1/dnorm(eta)
  z <- eta + G %*% (y - mu)
  diag(W) <- (dnorm(eta)^2)/(mu*(1 - mu))
  ### step 3
  b <- solve(t(x) %*% W %*% x) %*% t(x) %*% W %*% z
}
Kindly help. My second question is related to glm(): is there a way to find out how many iterations glm() has used?
Regards.
Update
With the help of this I updated the code, but it is not working.
library(gnlm)
# custom link / inverse
inv <- function(eta) -1/eta
n <- 10
y <- rgamma(n, 10, 0.1)
x1 <- rnorm(n, -1, 1)
x2 <- rnorm(n, -1, 1)
x3 <- rnorm(n, -1, 1)
x <- as.matrix(cbind(1, x1, x2, x3))
reg <- glm(y ~ x1 + x2 + x3, family = Gamma(link = "inverse"))
reg1 <- gnlr(y = y,
             distribution = "gamma",
             mu = ~ inv(beta0 + beta1*x1 + beta2*x2 + beta3*x3),
             pmu = list(beta0 = 1, beta1 = 1, beta2 = 1, beta3 = 1),
             pshape = 0.1)
I want reg and reg1 to give the same results.
Kindly help.
For the first code chunk: the algorithm as written is for probit regression, not Gamma regression. To perform the iterations manually using glm's defaults of no weights and no offset for family = Gamma(link = "inverse"), update the code as follows.
n <- 10
y <- rgamma(n, 10, 0.1)
x1 <- rnorm(n, -1, 1)
x2 <- rnorm(n, -1, 1)
x3 <- rnorm(n, -1, 1)
x <- as.matrix(cbind("(Intercept)" = 1, x1, x2, x3))
reg <- glm(y ~ x1 + x2 + x3, family = Gamma(link = "inverse"))
### step 1
eta <- 1/y
for (i in 1:reg$iter) {
  tX <- t(X <- x/eta)
  b <- drop(solve(tX %*% X) %*% tX %*% (2 - y*eta))
  eta <- drop(x %*% b)
}
reg$iter is the number of iterations performed by the glm function. Check that b is equal to the coefficients given by glm:
all.equal(reg$coefficients, b)
#> [1] TRUE
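For reference, the algebra behind that compact update (my summary, not part of the original answer): for the inverse link eta = 1/mu we have mu = 1/eta, dmu/deta = -1/eta^2, and variance function V(mu) = mu^2, so the IRLS weights and working response are
W = diag((dmu/deta)^2 / V(mu)) = diag(1/eta^2)
z = eta + (y - mu)/(dmu/deta) = eta*(2 - y*eta)
Dividing both the design matrix and z by eta folds W into an ordinary least squares step, which is exactly the line b <- drop(solve(tX %*% X) %*% tX %*% (2 - y*eta)) with X = x/eta.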
Your inverse function is negative. Take away the minus sign.
Also, change pshape to 1.0.
I'm setting a seed for reproducibility.
Initial values for small datasets are key. Setting them from glm results is a common approach if you can get a similar enough link. Another approach would be the one in the answer by @jblood94. Yet another would be to use nls() for (rough) initial estimates.
The argument trace = TRUE in glm() will show how many iterations were used.
set.seed(111)
library(gnlm)
# custom link / inverse
inv <- function(eta) 1/eta
n <- 10
y <- rgamma(n, 10, 0.1)
x1 <- rnorm(n, -1, 1)
x2 <- rnorm(n, -1, 1)
x3 <- rnorm(n, -1, 1)
x <- as.matrix(cbind(1, x1, x2, x3))
reg <- glm(y ~ x1 + x2 + x3, family = Gamma(link = "inverse"), trace = TRUE)
reg1 <- gnlr(y = y,
             distribution = "gamma",
             mu = ~ inv(beta0 + beta1*x1 + beta2*x2 + beta3*x3),
             pmu = c(0.002, -0.002, -0.001, -0.001),  ## or set to reg$coeff
             pshape = 1)
cbind(c(reg$coeff, NA), reg1$coeff)
Which gives:
> cbind(c(reg$coeff, NA), reg1$coeff)
                      [,1]          [,2]
(Intercept)   0.0033899338  0.0033914440
x1           -0.0037481699 -0.0037476263
x2           -0.0007462714 -0.0007463346
x3           -0.0014941431 -0.0014936034
                        NA  2.8592334563
(The extra fifth value in the second column is the shape parameter estimated by gnlr, which glm() does not report among its coefficients.)
An example of a different link, using nls to get starting values:
nls.init3 <- nls(y ~ beta0 + 1/(beta1 + 1)*x1 + sqrt(beta2)*x2 + beta3^2*x3,
                 data = data.frame(y = y, x1 = x1, x2 = x2, x3 = x3),
                 start = list(beta0 = 1, beta1 = .1, beta2 = .1, beta3 = .1))
summary(nls.init3)$coefficients[, 1]
reg3 <- gnlr(y = y,
             distribution = "gamma",
             mu = ~ beta0 + 1/(beta1 + 1)*x1 + sqrt(beta2)*x2 + beta3^2*x3,
             pmu = summary(nls.init3)$coefficients[, 1],
             pshape = 1)
reg3$coeff
And another:
nls.init4 <- nls(y ~ exp(beta0 + 1/(beta1 + 1)*x1),
                 data = data.frame(y = y, x1 = x1),
                 start = list(beta0 = 0, beta1 = 0))
summary(nls.init4)$coefficients[, 1]
reg4 <- gnlr(y = y,
             distribution = "gamma",
             mu = ~ exp(beta0 + 1/(beta1 + 1)*x1),
             pmu = summary(nls.init4)$coefficients[, 1],
             pshape = 1)
reg4$coeff

How to estimate the Kalman Filter with 'KFAS' R package, with an AR(1) transition equation and covariates?

I am using 'KFAS' package from R to estimate a state-space model with the Kalman filter. My measurement and transition equations are:
y_t = b_0 + b_1*xx_t + Z_t * x_t + eps_t    (measurement)
x_t = T_t * x_{t-1} + R_t * eta_t           (transition)
with eps_t ~ N(0, H_t) and eta_t ~ N(0, Q_t),
where xx_t are covariates. I have read this question and wrote the following code:
library(KFAS)
set.seed(100)
xx <- rnorm(200)
beta0 <- 0.1
beta1 <- 0.1
eps <- rt(200, 4, 1)
y <- as.matrix(beta0 + beta1*xx +
                 (arima.sim(n = 200, list(ar = 0.6), innov = rnorm(200)*sqrt(0.5)) + eps),
               ncol = 1)
Zt <- 1
Ht <- matrix(NA)
Tt <- matrix(NA)
Rt <- 1
Qt <- matrix(NA)
ss_model <- SSModel(y ~ xx + SSMcustom(Z = Zt, T = Tt, R = Rt, Q = Qt), H = Ht)
updatefn <- function(pars, model) {
  model$H[1] <- pars[1]
  model$T[1] <- pars[2]
  model$Q[1] <- pars[3]
  model
}
fit <- fitSSM(ss_model, c(1, 0.5, 1), updatefn, method = "L-BFGS-B",
              lower = c(0, -0.99, 0), upper = c(100, 0.99, 100))
I get the error
Error in is.SSModel(do.call(updatefn, args = c(list(inits, model), update_args)), :
System matrices (excluding Z) contain NA or infinite values, covariance matrices contain values larger than 1e+07
I have tried to change the initial vector to c(1, 0.5, 1, 1, 1) but it returns the same message. Does anyone know how can I do this?
Thanks!

Covariate dependent Markov models? Plot state transition probability along gradient of covariate values

The data consist of four variables: id; x1 and x2, continuous variables correlated with y; and y, a binary variable whose values 0 and 1 represent different states. Is it possible to use Markov chain models to calculate and plot the state transition probability along the gradient of covariate values, for each id and subsequently for the pooled data?
set.seed(1)
id <- rep(1, 100)
x1 <- rnorm(100)
x2 <- rnorm(100)
z <- 1 + 2*x1 + 3*x2
pr <- 1/(1 + exp(-z))
y <- rbinom(100, 1, pr)
a <- data.frame(id, x1, x2, y)
set.seed(2)
id <- rep(2, 100)
x1 <- rnorm(100)
x2 <- rnorm(100)
z <- 1 + 2*x1 + 3*x2
pr <- 1/(1 + exp(-z))
y <- rbinom(100, 1, pr)
b <- data.frame(id, x1, x2, y)
set.seed(3)
id <- rep(3, 100)
x1 <- rnorm(100)
x2 <- rnorm(100)
z <- 1 + 2*x1 + 3*x2
pr <- 1/(1 + exp(-z))
y <- rbinom(100, 1, pr)
c <- data.frame(id, x1, x2, y)
d <- rbind(a, b, c)

How to automatically fit data with several normal cumulative distribution functions in R

I have several data sets (hundreds of them, actually) that I know can be fitted with a sum of several normal cumulative distribution functions (see here).
Here is one example of such data set, here with two cumulative distribution functions:
library(pracma)
library(minpack.lm)
library(ggplot2)  # needed for the plots below
x <- seq(1, 1000, length.out = 50)
k1 <- 0.5
mu1 <- 500
sigma1 <- 100
y1 <- k1 * (1 + erf((x - mu1) / (sqrt(2) * sigma1)))
k2 <- 0.5
mu2 <- 300
sigma2 <- 50
y2 <- k2 * (1 + erf((x - mu2) / (sqrt(2) * sigma2)))
my.df <- data.frame(x, y = y1 + y2, type = "data")
ggplot(my.df, aes(x, y)) + geom_line()
Now I want to fit those curves, so I use nls to do so:
model <- nlsLM(y ~ k1 * (1 + erf((x - mu1) / (sqrt(2) * sigma1)))
                 + k2 * (1 + erf((x - mu2) / (sqrt(2) * sigma2))),
               start = c(mu1 = 500, sigma1 = 50, k1 = 0.5,
                         mu2 = 300, sigma2 = 50, k2 = 0.5),
               data = my.df,
               control = nls.lm.control(maxiter = 500))
tmp <- data.frame(x, y = predict(model), type = "fit")
combined <- rbind(my.df, tmp)
ggplot(combined, aes(x, y, colour = type, shape = type)) + geom_line() + geom_point()
Here is what I get:
The fit is great. However, I helped nls a lot:
1. I gave it a perfectly fitting curve as input, not raw data.
2. I told it my curve was the sum of two functions (not one or three).
3. I almost gave it the solution by providing very close starting parameter values.
To fix the first point, I compute 3 models for one, two and three functions and choose the one with the minimum deviance.
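For reference, a minimal sketch of that deviance-based selection, reusing my.df from above (the single-component starting values here are illustrative guesses, not values from the original post):
# Fit candidate models with one and two components and keep the one with
# the smallest deviance (a three-component model would be analogous).
fits <- list(
  nlsLM(y ~ k1 * (1 + erf((x - mu1) / (sqrt(2) * sigma1))),
        start = c(mu1 = 400, sigma1 = 100, k1 = 0.5),
        data = my.df, control = nls.lm.control(maxiter = 500)),
  nlsLM(y ~ k1 * (1 + erf((x - mu1) / (sqrt(2) * sigma1)))
          + k2 * (1 + erf((x - mu2) / (sqrt(2) * sigma2))),
        start = c(mu1 = 500, sigma1 = 50, k1 = 0.5,
                  mu2 = 300, sigma2 = 50, k2 = 0.5),
        data = my.df, control = nls.lm.control(maxiter = 500))
)
best <- fits[[which.min(sapply(fits, deviance))]]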
For the second point, with my hundreds of data sets unfortunately, the parameters change quite a bit and I have disappointing results when I give the same starting parameters for all sets.
Is there a better way to select those starting values?
I have heard of the mixtools package, but I'm not sure it works with CDFs (cumulative distribution functions).

R script - NLS not working

I have 5 (x,y) data points and I'm trying to find a best fit solution consisting of two lines which intersect at a point (x0,y0), and which follow these equations:
y1 = (m1)(x1 - x0) + y0
y2 = (m2)(x2 - x0) + y0
Specifically, I require that the intersection must occur between x=2 and x=3. Have a look at the code:
#Initialize x1, y1, x2, y2
x1 <- c(1,2)
y1 <- c(10,10)
x2 <- c(3,4,5)
y2 <- c(20,30,40)
g <- c(TRUE, TRUE, FALSE, FALSE, FALSE)
q <- nls(c(y1, y2) ~ ifelse(g == TRUE, m1 * (x1 - x0) + y0, m2 * (x2 - x0) + y0),
         start = c(m1 = -1, m2 = 1, y0 = 0, x0 = 2),
         algorithm = "port",
         lower = c(m1 = -Inf, m2 = -Inf, y0 = -Inf, x0 = 2),
         upper = c(m1 = Inf, m2 = Inf, y0 = Inf, x0 = 3))
coef <- coef(q)
m1 <- coef[1]
m2 <- coef[2]
y0 <- coef[3]
x0 <- coef[4]
#Plot the original x1, y1, and x2, y2
plot(x1,y1,xlim=c(1,5),ylim=c(0,50))
points(x2,y2)
#Plot the fits
x1 <- c(1,2,3,4,5)
fit1 <- m1 * (x1 - x0) + y0
lines(x1, fit1, col="red")
x2 <- c(1,2,3,4,5)
fit2 <- m2 * (x2 - x0) + y0
lines(x2, fit2, col="blue")
So, you can see the data points listed there. Then I run them through my nls and get the parameters m1, m2, x0, y0 (the slopes and the intersection point).
But take a look at the solution:
Clearly, the red line (which is supposed to be based only on the first 2 points) is not the best-fit line for those first 2 points. The same goes for the blue line (the 2nd fit), which is supposed to depend on the last 3 points. What is wrong here?
This is segmented regression:
# input data
x1 <- c(1,2); y1 <- c(10,10); x2 <- c(3,4,5); y2 <- c(20,30,40)
x <- c(x1, x2); y <- c(y1, y2)
# segmented regression
library(segmented)
fm <- segmented.lm(lm(y ~ x), ~ x, NA, seg.control(stop.if.error = FALSE, K = 2))
summary(fm)
# plot
plot(fm)
points(y ~ x)
See ?lm, ?segmented.lm and ?seg.control for more info.
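If the estimated breakpoints and per-segment slopes are needed, they can be read off the fitted object; a quick sketch, assuming the fit above succeeded (my addition, not part of the original answer):
fm$psi       # initial and estimated breakpoints with standard errors
slope(fm)    # estimated slope of each segment
confint(fm)  # confidence intervals for the breakpoints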
I'm not exactly sure what's wrong but I can get it to work by rearranging things a bit. Please note the comment in ?nls about "Do not use ‘nls’ on artificial "zero-residual" data."; I added a bit of noise.
## Initialize x1, y1, x2, y2
x1 <- c(1,2)
y1 <- c(10,10)
x2 <- c(3,4,5)
y2 <- c(20,30,40)
## make single x, y vector
x <- c(x1,x2)
set.seed(1001)
## (add a bit of noise to avoid zero-residual artificiality)
y <- c(y1,y2)+rnorm(5,sd=0.01)
g <- c(TRUE,TRUE,FALSE,FALSE,FALSE) ## specify identities of points
## particular changes:
## * you have lower=upper=2 for x0. Did you want 2<x0<3?
## * specified data argument explicitly (allows use of predict() etc.)
## * changed name from 'q' to 'fit1' (avoid R built-in function)
fit1 <- nls(y ~ ifelse(g, m1, m1 + delta_m) * (x - x0) + y0,
            start = c(m1 = -1, delta_m = 2, y0 = 0, x0 = 2),
            algorithm = "port",
            lower = c(m1 = -Inf, delta_m = 0, y0 = -Inf, x0 = 2),
            upper = c(m1 = Inf, delta_m = Inf, y0 = Inf, x0 = 3),
            data = data.frame(x, y))
# Plot the original 'data'
plot(x, y, col = rep(c("red", "blue"), c(2, 3)),
     xlim = c(1, 5), ylim = c(0, 50))
## add predicted values
xvec <- seq(1, 5, length.out = 101)
lines(xvec, predict(fit1, newdata = data.frame(x = xvec)))
edit: based ifelse clause on point identity, not x position
edit: changed to require second slope to be > first slope
On a second look, I think the issue above is probably due to the use of separate vectors for x1 and x2 above, rather than a single x vector: I suspect these got replicated by R to match up with the g vector, which would have messed things up pretty badly. For example, this stripped-down example:
g <- c(TRUE, TRUE, FALSE, FALSE, FALSE)
ifelse(g,x1,x2)
## [1] 1 2 5 3 4
shows that x2 gets extended to (3 4 5 3 4) before being used in the ifelse clause. The scariest part is that normally one gets a warning such as this:
> x2 + 1:5
[1] 4 6 8 7 9
Warning message:
In x2 + 1:5 :
longer object length is not a multiple of shorter object length
but in this case there is no warning ...
