optim in R failing due to NAs

I have been trying to estimate a rather messy nonlinear regression model in R for quite some time now. After countless failed attempts using the nls function, I am now trying my luck with optim, which I have used many times in the past. For this example, I'll use the following data:
x1 <- runif(1000,0,7)
x2 <- runif(1000,0,7)
x3 <- runif(1000,0,7)
y <- log(.5 + .5*x1 + .7*x2 + .4*x3 + .05*x1^2 + .1*x2^2 + .15*x3^2 - .05*x1*x2 - .1*x1*x3 - .07*x2*x3 + .02*x1*x2*x2) + rnorm(1000)
I would like to estimate the parameters in the polynomial expression inside the log() function above, and so I have defined the following function to replicate a nonlinear least squares regression:
g <- function(coefs){
fitted <- coefs[1] + coefs[2]*x1 + coefs[3]*x2 + coefs[4]*x3 + coefs[5]*x1^2 + coefs[6]*x2^2 + coefs[7]*x3^2 + coefs[8]*x1*x2 + coefs[9]*x1*x3 + coefs[10]*x2*x3 + coefs[11]*x1*x2*x3
error <- y - log(fitted)
return(sum(error^2))
}
In order to avoid negative starting values inside the log() expression, I first estimate the linear model below:
lm.1 <- lm(I(exp(y)) ~ x1 + x2 + x3 + I(x1^2) + I(x2^2) + I(x3^2) + I(x1*x2) + I(x1*x3) + I(x2*x3) + I(x1*x2*x3))
intercept.start <- ifelse((min(fitted(lm.1)-lm.1$coefficients[1])) <= 0, -(min(fitted(lm.1)-lm.1$coefficients[1])) + .5, .5)
coefs.start <- c(intercept.start,lm.1$coefficients[-1])
Defining intercept.start above guarantees that the expression inside of log() will be strictly positive at the outset. However, when I run the optim command
nl.model <- optim(coefs.start, g, method="L-BFGS-B")
I get the following error message
Error in optim(coefs.start, g, method = "L-BFGS-B") :
L-BFGS-B needs finite values of 'fn'
In addition: Warning message:
In log(fitted) : NaNs produced
Does anyone know how I can force the optim routine to simply disregard parameter estimates that would produce negative values inside of the log() expression? Thanks in advance.

Here's a slightly different approach.
Aside from the typo mentioned in the comment, if the issue is that the argument to the log(...) is < 0 for certain parameter estimates, you can change the function definition to prevent that.
# just some setup - we'll need this later
set.seed(1)
err <- rnorm(1000, sd=0.1) # note smaller error sd
x1 <- runif(1000,0,7)
x2 <- runif(1000,0,7)
x3 <- runif(1000,0,7)
par <- c(0.5, 0.5, 0.7, 0.4, 0.05, 0.1, 0.15, -0.05, -0.1, -0.07, 0.02)
m <- cbind(1, x1, x2, x3, x1^2, x2^2, x3^2, x1*x2, x1*x3, x2*x3, x1*x2*x3)
y <- as.numeric(log(m %*% par)) + err
# note slight change in the model function definition
g <- function(coefs){
fitted <- coefs[1] + coefs[2]*x1 + coefs[3]*x2 + coefs[4]*x3 + coefs[5]*x1^2 + coefs[6]*x2^2 + coefs[7]*x3^2 + coefs[8]*x1*x2 + coefs[9]*x1*x3 + coefs[10]*x2*x3 + coefs[11]*x1*x2*x3
fitted <- ifelse(fitted<=0, 1, fitted) # ensures fitted > 0
error <- y - log(fitted)
return(sum(error^2))
}
lm.1 <- lm(I(exp(y)) ~ x1 + x2 + x3 + I(x1^2) + I(x2^2) + I(x3^2) + I(x1*x2) + I(x1*x3) + I(x2*x3) + I(x1*x2*x3))
nl.model <- optim(coef(lm.1), g, method="L-BFGS-B", control=list(maxit=1000))
nl.model$par
# (Intercept) x1 x2 x3 I(x1^2) I(x2^2) I(x3^2) I(x1 * x2) I(x1 * x3) I(x2 * x3) I(x1 * x2 * x3)
# 0.40453182 0.50136222 0.71696293 0.45335893 0.05461253 0.10210854 0.14913914 -0.06169715 -0.11195476 -0.08497180 0.02531717
with(nl.model, cat(convergence, message))
# 0 CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH
Note that these estimates are pretty close to the actual values. That's because in the setup I used a smaller error term (sd = 0.1 instead of 1). In your example, the error is large compared to the response (y), so you're basically fitting random error.
If you fit the model using the actual parameter values as starting estimates, you get nearly identical results, no closer to the "true" values.
nl.model <- optim(par, g, method="L-BFGS-B", control=list(maxit=1000))
nl.model$par
# [1] 0.40222956 0.50159930 0.71734810 0.45459606 0.05465654 0.10206887 0.14899640 -0.06177640 -0.11209065 -0.08497423 0.02533085
with(nl.model, cat(convergence, message))
# 0 CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH
Try this with the original error (sd = 1) and see what happens.
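If you want to see that effect directly, here's a minimal sketch reusing m and par from the setup above (err.big, y.big, g.big and nl.big are just names introduced here); with sd = 1 the estimates typically land much further from the true values:
# regenerate the response with the original, larger error (sd = 1)
err.big <- rnorm(1000, sd = 1)
y.big <- as.numeric(log(m %*% par)) + err.big
g.big <- function(coefs){
fitted <- as.numeric(m %*% coefs)        # same design matrix as above
fitted <- ifelse(fitted <= 0, 1, fitted) # keep the log() argument positive
sum((y.big - log(fitted))^2)
}
nl.big <- optim(par, g.big, method="L-BFGS-B", control=list(maxit=1000))
cbind(true = par, estimate = nl.big$par) # estimates scatter far more than before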

Here's a log of my efforts to investigate. I put a maximum on the fitted values and got convergence. I then asked myself whether increasing that max would do anything to the estimated parameters and found that there was no change... AND there was no difference from the starting values, so I think you messed up in building the function. Perhaps you can investigate further:
> gp <- function(coefs){
+
+ fitted <- coefs[1] + coefs[2]*x1 + coefs[3]*x2 + coefs[4]*x3 + coefs[5]*x1^2 + coefs[6]*x2^2 + coefs[7]*x3^2 + coefs[8]*x1*x2 + coefs[9]*x1*x3 + coefs[10]*x2*x3 + coefs[11]*x1*x2*x3 }
> describe( gp( coefs.start) ) #describe is from pkg:Hmisc
gp(coefs.start)
n missing unique Info Mean .05 .10 .25 .50 .75
1000 0 1000 1 13.99 2.953 4.692 8.417 12.475 18.478
.90 .95
25.476 28.183
lowest : 0.5000 0.5228 0.5684 0.9235 1.1487
highest: 41.0125 42.6003 43.1457 43.5950 47.2234
> g <- function(coefs){
+
+ fitted <- max( coefs[1] + coefs[2]*x1 + coefs[3]*x2 + coefs[4]*x3 + coefs[5]*x1^2 + coefs[6]*x2^2 + coefs[7]*x3^2 + coefs[8]*x1*x2 + coefs[9]*x1*x3 + coefs[10]*x2*x3 + coefs[11]*x1*x2*x3 , 1000)
+ error <- y - log(fitted)
+ return(sum(error^2))
+ }
> nl.model <- optim(coefs.start, g, method="L-BFGS-B")
> nl.model
$par
x1 x2 x3 I(x1^2)
0.77811231 -0.94586233 -1.33540959 1.65454871 0.31537594
I(x2^2) I(x3^2) I(x1 * x2) I(x1 * x3) I(x2 * x3)
0.45717138 0.11051418 0.59197115 -0.25800792 0.04931727
I(x1 * x2 * x3)
-0.08124126
$value
[1] 24178.62
$counts
function gradient
1 1
$convergence
[1] 0
$message
[1] "CONVERGENCE: NORM OF PROJECTED GRADIENT <= PGTOL"
> g <- function(coefs){
+
+ fitted <- max( coefs[1] + coefs[2]*x1 + coefs[3]*x2 + coefs[4]*x3 + coefs[5]*x1^2 + coefs[6]*x2^2 + coefs[7]*x3^2 + coefs[8]*x1*x2 + coefs[9]*x1*x3 + coefs[10]*x2*x3 + coefs[11]*x1*x2*x3 , 100000)
+ error <- y - log(fitted)
+ return(sum(error^2))
+ }
> nl.model <- optim(coefs.start, g, method="L-BFGS-B")
> nl.model
$par
x1 x2 x3 I(x1^2)
0.77811231 -0.94586233 -1.33540959 1.65454871 0.31537594
I(x2^2) I(x3^2) I(x1 * x2) I(x1 * x3) I(x2 * x3)
0.45717138 0.11051418 0.59197115 -0.25800792 0.04931727
I(x1 * x2 * x3)
-0.08124126
$value
[1] 89493.99
$counts
function gradient
1 1
$convergence
[1] 0
$message
[1] "CONVERGENCE: NORM OF PROJECTED GRADIENT <= PGTOL"

Related

Shortening the formula syntax of a regression model

I was wondering if the syntax of the regression model below could be made more concise (shorter) than it currently is?
dat <- read.csv('https://raw.githubusercontent.com/rnorouzian/v/main/bv1.csv')
library(nlme)
model <- lme(achieve ~ 0 + D1 + D2 +
               D1:time + D2:time +
               D1:schcontext + D2:schcontext +
               D1:female + D2:female +
               D1:I(female*time) + D2:I(female*time) +
               D1:I(schcontext*time) + D2:I(schcontext*time),
             correlation = corSymm(),
             random = ~ 0 + D1:time | schcode/id, data = dat,
             weights = varIdent(form = ~ 1 | factor(math)),
             na.action = na.omit,
             control = lmeControl(maxIter = 200, msMaxIter = 200, niterEM = 50,
                                  msMaxEval = 400))
coef(summary(model))
Focusing on the fixed-effect component only.
Original formula:
form1 <- ~ 0 + D1 + D2+
D1:time + D2:time+
D1:schcontext + D2:schcontext +
D1:female + D2:female+
D1:I(female*time) + D2:I(female*time)+
D1:I(schcontext*time) + D2:I(schcontext*time)
X1 <- model.matrix(form1, data=dat)
I think this is equivalent
form2 <- ~0 +
D1 + D2 +
(D1+D2):(time + schcontext + female + female:time+schcontext:time)
X2 <- model.matrix(form2, data=dat)
(Unfortunately ~ 0 + (D1 + D2):(1 + time + ...) doesn't work as I would have liked/expected.)
For a start, the model matrix has the right dimensions. Staring at the column names of the model matrices and reordering the columns manually:
X2o <- X2[,c(1:3,6,4,7,5,8,9,11,10,12)]
all.equal(c(X1),c(X2o)) ##TRUE
(For numerical predictors, you don't need I(A*B): A:B is equivalent.)
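A quick way to convince yourself of that, with made-up numeric vectors a and b:
a <- runif(5); b <- runif(5)
all.equal(unname(model.matrix(~ I(a*b))[, 2]),
          unname(model.matrix(~ a:b)[, 2]))
# both columns are just a*b, so this should be TRUE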
Actually you can do a little better using the * operator
form3 <- ~0 +
D1 + D2 +
(D1+D2):(time*(schcontext+female))
X3 <- model.matrix(form3, data=dat)
X3o <- X3[,c(1:3,6,4,7,5,8,10,12,9,11)]
all.equal(c(X1),c(X3o)) ## TRUE
Compare formula length:
sapply(list(form1,form2,form3),
function(x) nchar(as.character(x)[[2]]))
## [1] 183 84 54
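If that works for your data, the shortened fixed-effect formula should drop straight into the lme() call; here's a sketch (model2 is just a new name, and every argument other than the fixed part is copied unchanged from the original call):
model2 <- lme(achieve ~ 0 + D1 + D2 +
                (D1 + D2):(time*(schcontext + female)),
              correlation = corSymm(),
              random = ~ 0 + D1:time | schcode/id, data = dat,
              weights = varIdent(form = ~ 1 | factor(math)),
              na.action = na.omit,
              control = lmeControl(maxIter = 200, msMaxIter = 200, niterEM = 50,
                                   msMaxEval = 400))
# same fixed-effect design as before, just with the columns in a different order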

R: Force regression coefficients to add up to 1

I'm trying to run a simple OLS regression with a restriction that the sum of the coefficients of two variables add up to 1.
I want:
Y = α + β1 * x1 + β2 * x2 + β3 * x3,
where β1 + β2 = 1
I have found how to make a relation between coefficients like:
β1 = 2* β2
But I haven't found how to make restrictions like:
β1 = 1 - β2
How would I do it in this simple example?
data <- data.frame(
A = c(1,2,3,4),
B = c(3,2,2,3),
C = c(3,3,2,3),
D = c(5,3,3,4)
)
lm(formula = 'D ~ A + B + C', data = data)
Thanks!
β1 + β2 = 1
To have β1 + β2 = 1 the model you have to fit is
fit <- lm(Y ~ offset(x1) + I(x2 - x1) + x3, data = df)
That is
Y = α + x1 + β2 * (x2 - x1) + β3 * x3
after substituting β1 = 1 - β2; the transformed regressor is x2 - x1, and the coefficient of x1 is fixed at 1 through offset().
β1 + β2 + β3 = 1
fit <- lm(Y ~ offset(x1) + I(x2 - x1) + I(x3 - x1), data = df)
Y = α + x1 + β2 * (x2 - x1) + β3 * (x3 - x1)
after substituting β1 = 1 - β2 - β3
β1 + β2 + β3 + ... = 1
I think the pattern is clear... you just have to subtract one variable, x1, from the remaining variables (x2, x3, ...) and fix the coefficient of that variable, x1, at 1.
Example β1 + β2 = 1
# Data
df <- iris[, 1:4]
colnames(df) <- c("Y", paste0("x", 1:3))
# β1 + β2 = 1
fit <- lm(Y ~ offset(x1) + I(x2 - x1) + x3, data = df)
coef_2 <- coef(fit)
beta_1 <- 1 - coef_2[2]
beta_2 <- coef_2[2]
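The same recipe extends to the β1 + β2 + β3 = 1 case with this df (a sketch; fit3 and coef_3 are just names used here):
# β1 + β2 + β3 = 1
fit3 <- lm(Y ~ offset(x1) + I(x2 - x1) + I(x3 - x1), data = df)
coef_3 <- coef(fit3)
beta_2 <- coef_3["I(x2 - x1)"]
beta_3 <- coef_3["I(x3 - x1)"]
beta_1 <- 1 - beta_2 - beta_3 # the three coefficients sum to 1 by construction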
1) CVXR We can compute the coefficients using CVXR directly by specifying the objective and constraint. We assume that D is the response, the coefficients of A and B must sum to 1, b[1] is the intercept and b[2], b[3] and b[4] are the coefficients of A, B and C respectively.
library(CVXR)
b <- Variable(4)
X <- cbind(1, as.matrix(data[-4]))
obj <- Minimize(sum((data$D - X %*% b)^2))
constraints <- list(b[2] + b[3] == 1)
problem <- Problem(obj, constraints)
soln <- solve(problem)
bval <- soln$getValue(b)
bval
## [,1]
## [1,] 1.6428605
## [2,] -0.3571428
## [3,] 1.3571428
## [4,] -0.1428588
The objective is the residual sum of squares and it equals:
soln$value
## [1] 0.07142857
2) pracma We can also use the pracma package to compute the coefficients. We specify the X matrix, response vector, the constraint matrix (in this case the vector given as the third argument is regarded as a one row matrix) and the right hand side of the constraint.
library(pracma)
lsqlincon(X, data$D, Aeq = c(0, 1, 1, 0), beq = 1) # X is from above
## [1] 1.6428571 -0.3571429 1.3571429 -0.1428571
3) limSolve This package can also solve for the coefficients of regression problems with constraints. The arguments are the same as in (2).
library(limSolve)
lsei(X, data$D, c(0, 1, 1, 0), 1)
giving:
$X
A B C
1.6428571 -0.3571429 1.3571429 -0.1428571
$residualNorm
[1] 0
$solutionNorm
[1] 0.07142857
$IsError
[1] FALSE
$type
[1] "lsei"
4) nls This can be formulated as a problem for nls with the B coefficient equal to one minus the A coefficient.
nls(D ~ b0 + b1 * A + (1-b1) * B + b2 * C, data,
start = list(b0 = 1, b1 = 1, b2 = 1))
## D ~ b0 + b1 * A + (1 - b1) * B + b2 * C
## data: data
## b0 b1 b2
## 1.6429 -0.3571 -0.1429
## residual sum-of-squares: 0.07143
##
## Number of iterations to convergence: 1
## Achieved convergence tolerance: 2.803e-08
Check
We can double check the above by using the lm approach in the other answer:
lm(D ~ I(A-B) + C + offset(B), data)
giving:
Call:
lm(formula = D ~ I(A - B) + C + offset(B), data = data)
Coefficients:
(Intercept) I(A - B) C
1.6429 -0.3571 -0.1429
The I(A-B) coefficient equals the coefficient of A in the original formulation, and one minus it is the coefficient of B. We see that all approaches lead to the same coefficients.
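To make that comparison explicit, you can recover the implied coefficients of A, B and C from the lm() fit and set them next to the constrained solutions above (a small sketch; fm and cf are just names used here):
fm <- lm(D ~ I(A - B) + C + offset(B), data)
cf <- coef(fm)
c(intercept = unname(cf["(Intercept)"]),
  A = unname(cf["I(A - B)"]),
  B = 1 - unname(cf["I(A - B)"]), # the constraint gives B = 1 - A
  C = unname(cf["C"]))
# matches the CVXR / pracma / limSolve / nls coefficients shown above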

Fitting a multivariate polynomial of generic degree in R without having to write the explicit formula

I would like to fit a multivariate polynomial of arbitrary degree, in an arbitrary number of variables, to some data. The number of variables can be high (for example 40) and the code should work for different numbers of variables (e.g., 10, 20, 40), so it's not possible to write out the formula explicitly. For a degree 1 polynomial (i.e., the classic linear model), the solution is trivial: suppose I have my data in the data frame df, then
mymodel <- lm(y ~ ., data = df)
Unfortunately I don't know of a similar compact formula when the polynomial is of arbitrary degree. Can you help me?
This combines both options from my earlier posting (interactions and polynomial terms) in a hypothetical situation where the column names look like "X1", "X2", ...., "X30". You would take out the terms() call which is just in there to demonstrate that it was successful:
terms( as.formula(
paste(" ~ (", paste0("X", 1:30 , collapse="+"), ")^2", "+",
paste( "poly(", paste0("X", 1:30), ", degree=2)",
collapse="+"),
collapse="")
) )
You could use an expression like names(dfrm)[!names(dfrm) %in% "y"] instead of the inner paste0 calls.
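For example, assuming your data frame is called dfrm and the response column is named y, that substitution would look something like this sketch:
xvars <- names(dfrm)[!names(dfrm) %in% "y"] # all predictor names
form <- as.formula(
  paste("y ~ (", paste(xvars, collapse = "+"), ")^2 +",
        paste("poly(", xvars, ", degree = 2)", collapse = "+")))
# fit <- lm(form, data = dfrm)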
Note that the interaction terms are constructed by way of the R formula process with the (...)^2 mechanism, which does not create squared terms but rather all of the two-way interactions:
as.formula(
paste(" ~ (", paste0("X", 1:30 , collapse="+"), ")^2", "+", paste( "poly(", paste0("X", 1:30), ", degree=2)", collapse="+"), collapse="")
)
#----output----
~(X1 + X2 + X3 + X4 + X5 + X6 + X7 + X8 + X9 + X10 + X11 + X12 +
X13 + X14 + X15 + X16 + X17 + X18 + X19 + X20 + X21 + X22 +
X23 + X24 + X25 + X26 + X27 + X28 + X29 + X30)^2 + poly(X1,
degree = 2) + poly(X2, degree = 2) +
poly(X3, degree = 2) +
poly(X4, degree = 2) + poly(X5, degree = 2) + poly(X6, degree = 2) +
poly(X7, degree = 2) + poly(X8, degree = 2) + poly(X9, degree = 2) +
poly(X10, degree = 2) + poly(X11, degree = 2) + poly(X12,
degree = 2) + poly(X13, degree = 2) + poly(X14, degree = 2) +
poly(X15, degree = 2) + poly(X16, degree = 2) + poly(X17,
degree = 2) + poly(X18, degree = 2) + poly(X19, degree = 2) +
poly(X20, degree = 2) + poly(X21, degree = 2) + poly(X22,
degree = 2) + poly(X23, degree = 2) + poly(X24, degree = 2) +
poly(X25, degree = 2) + poly(X26, degree = 2) + poly(X27,
degree = 2) + poly(X28, degree = 2) + poly(X29, degree = 2) +
poly(X30, degree = 2)
You can use the function makepoly below, which generates a formula with polynomial terms from a formula and a data frame.
makepoly <- function(form, data, degree = 1) {
  # expand a formula such as y ~ . against 'data' to get the predictor names
  mt <- terms(form, data = data)
  tl <- attr(mt, "term.labels")
  # wrap each predictor in poly(..., degree) and rebuild the formula
  reformulate(paste0("poly(", tl, ", ", degree, ")"),
              response = form[[2]])
}
A test data set:
set.seed(1)
df <- data.frame(y = rnorm(10),
x1 = rnorm(10), x2 = rnorm(10), x3 = rnorm(10))
Create the formula and run the regression:
form <- makepoly(y ~ ., df, degree = 2)
# y ~ poly(x1, 2) + poly(x2, 2) + poly(x3, 2)
lm(form, df)
#
# Call:
# lm(formula = form, data = df)
#
# Coefficients:
# (Intercept) poly(x1, 2)1 poly(x1, 2)2 poly(x2, 2)1
# 0.1322 0.1445 -5.5757 -5.2132
# poly(x2, 2)2 poly(x3, 2)1 poly(x3, 2)2
# 4.2297 0.7895 3.9796

What is the difference between x^2 and I(x^2) in R?

What is the difference between these two models in R?
model1 <- glm(y ~ x + x^2, family = binomial(link = logit), weights = numbers)
model2 <- glm(y ~ x + I(x^2), family = binomial(link = logit), weights = numbers)
Also what is the equvalent of I(x^2) in SAS?
The I() function means 'as is' whereas the ^n (to the power of n) operator means 'include these variables and all interactions up to n way'
This means:
I(X^2) is literally regressing Y against X squared and
X^2 means include X and the two-way interactions of X, but since it is only one variable there is no interaction, so it returns only itself, i.e. X. Note that in your formula you write X + X^2, which translates to X + X; in formula syntax a duplicated term is only taken into account once, i.e. one of the two Xs is dropped.
Demonstration:
Y <- runif(100)
X2 <- runif(100)
df <- data.frame(Y, X2)
b <- lm(Y ~ X2 + X2^2 + X2, data = df)
> b
Call:
lm(formula = Y ~ X2 + X2^2 + X2, data = df)
Coefficients:
(Intercept) X2
0.48470 0.05098
a <- lm(Y ~ X2 + I(X2^2), data = df)
> a
Call:
lm(formula = Y ~ X2 + I(X2^2), data = df)
Coefficients:
(Intercept) X2 I(X2^2)
0.47545 0.11339 -0.06682
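As a further illustration, with two variables the ^2 operator really does produce the interaction (x and z below are just made-up numeric vectors):
x <- runif(100); z <- runif(100)
attr(terms(~ (x + z)^2), "term.labels")
# "x" "z" "x:z" -- main effects plus the two-way interaction, no squared terms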
Hope it helps!

R neuralnet does not converge within stepmax for time series

I'm writing a neural network for prediction of elements in a time series x + sin(x^2) in R, using the neuralnet package. This is how training data is being generated, assuming a window of 4 elements, and that the last one is the one that has to be predicted:
nntr0 <- ((1:25) + sin((1:25)^2))
nntr1 <- ((2:26) + sin((2:26)^2))
nntr2 <- ((3:27) + sin((3:27)^2))
nntr3 <- ((4:28) + sin((4:28)^2))
nntr4 <- ((5:29) + sin((5:29)^2))
Then, I turn these into a data.frame:
nntr <- data.frame(nntr0, nntr1, nntr2, nntr3, nntr4)
Then, I proceed to train the NN:
net.sinp <- neuralnet(nntr4 ~ nntr0 + nntr1 + nntr2 + nntr3, data=nntr, hidden=10, threshold=0.04, act.fct="tanh", linear.output=TRUE, stepmax=100000)
Which, after a while, gives me the message
Warning message:
algorithm did not converge in 1 of 1 repetition(s) within the stepmax
Call: neuralnet(formula = nntr4 ~ nntr0 + nntr1 + nntr2 + nntr3, data = nntr, hidden = 10, threshold = 0.04, stepmax = 100000, act.fct = "tanh", linear.output = TRUE)
Can anyone help me figure out why it is not converging? Many thanks
With tanh as the activation function (it is bounded), it is very difficult to reproduce the linear trend in your signal. You can use linear activation functions instead, or try to detrend the signal.
# Data
dx <- 1
n <- 25
x <- seq(0,by=dx,length=n+4)
y <- x + sin(x^2)
y0 <- y[1:n]
y1 <- y[1 + 1:n]
y2 <- y[2 + 1:n]
y3 <- y[3 + 1:n]
y4 <- y[4 + 1:n]
d <- data.frame(y0, y1, y2, y3, y4)
library(neuralnet)
# Linear activation functions
r <- neuralnet(y4 ~ y0 + y1 + y2 + y3, data=d, hidden=10)
plot(y4, compute(r, d[,-5])$net.result)
# No trend
d2 <- data.frame(
y0 = y0 - x[1:n],
y1 = y1 - x[1 + 1:n],
y2 = y2 - x[2 + 1:n],
y3 = y3 - x[3 + 1:n],
y4 = y4 - x[4 + 1:n]
)
r <- neuralnet(y4 ~ y0 + y1 + y2 + y3, data=d2, hidden=10, act.fct="tanh" )
plot(d2$y4, compute(r, d2[,-5])$net.result)
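If you go the detrending route, you would add the trend back afterwards to get predictions on the original scale, along these lines (a sketch using the objects defined above):
pred.detrended <- compute(r, d2[,-5])$net.result # predictions for the detrended series
pred <- pred.detrended + x[4 + 1:n]              # add the linear trend back
plot(y4, pred)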
The warning message
algorithm did not converge in 1 of 1 repetition(s) within the stepmax
means your algorithm reached the step limit before it converged. If you type ?neuralnet and look at the definition of stepmax, it says:
the maximum steps for the training of the neural network. Reaching this maximum leads to a stop of the neural network's training process.
For your problem, I recommend increasing the stepmax value to 1e7 and seeing what happens.
The code will be,
net.sinp <- neuralnet(nntr4 ~ nntr0 + nntr1 + nntr2 + nntr3, data=nntr, hidden=10, threshold=0.04, act.fct="tanh", linear.output=TRUE, stepmax=1e7)
