Constraints for nls() coefficients in R

I'm trying to fit data with the nls() function, where the nature of the data gives me bounds for one coefficient and for the sum of two coefficients. Here is a short example to show where the problem is: I want the parameter b1 to be between 0 and 1, and I want the sum of the parameters b1 and b2 to be between 0 and 1 as well.
set.seed(123)
# example where everything is OK
x <- 1:200
g <- rbinom(200, 1, 0.5)
y <- 3 + (0.7 + 0.2 * g) * x
yeps <- y + rnorm(length(y), sd = 0.1)
# both parameter b1 and sum of parameters b1 and b2 are between 0 and 1
nls(yeps ~ a + (b1 + b2 * g) * x, start = list(a = 0.12345, b1 = 0.54321, b2 = 0.4213))
# using more extreme values
x <- 1:200
g <- rbinom(200, 1, 0.5)
y <- 3 + (0.9 - 0.99 * g) * x
yeps <- y + rnorm(length(y), sd = 15)
# b1 is OK, but b1 + b2 < 0
nls(yeps ~ a + (b1 + b2 * g) * x,
    start = list(a = 0.12345, b1 = 0.54321, b2 = 0.4213))
# trying constraints; not good, the sum is still out of range
nls(yeps ~ a + (b1 + b2 * g) * x,
    start = list(a = 0.12345, b1 = 0.54321, b2 = 0.4213),
    lower = list(a = -Inf, b1 = 0, b2 = -1),
    upper = list(a = Inf, b1 = 1, b2 = 1),
    algorithm = "port")
What I'm looking for is something like this (which does not work):
nls(yeps ~ a + (b1 + b2 * g) * x,
    start = list(a = 0.12345, b1 = 0.54321, b2 = 0.4213),
    lower = list(a = -Inf, b1 = 0, b2 = -b1),
    upper = list(a = Inf, b1 = 1, b2 = 1 - b1),
    algorithm = "port")
Is it possible to set constraints that involve other parameters in the nls() function? Thanks for any suggestions!

Let B2 = b1 + b2, so b2 = B2 - b1. Substituting B2 - b1 for b2 gives a problem in terms of a, b1 and B2, of which the latter two are each between 0 and 1:
fm <- nls(yeps ~ a + (b1 + (B2 - b1) * g) * x,
          lower = c(-Inf, 0, 0), upper = c(Inf, 1, 1),
          start = list(a = 0.1, b1 = 0.5, B2 = 0.1), alg = "port")
giving the following (hence b2 = B2 - b1 = 0 - 0.9788 = -0.9788):
> fm
Nonlinear regression model
model: yeps ~ a + (b1 + (B2 - b1) * g) * x
data: parent.frame()
      a      b1      B2 
-5.3699  0.9788  0.0000 
residual sum-of-squares: 42143
Algorithm "port", convergence message: both X-convergence and relative convergence (5)
and plotting:
plot(yeps ~ x)
points(fitted(fm) ~ x, pch = 20, col = "red")
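To recover the original parameterization, back-transform the fitted B2; a minimal sketch, assuming fm from the fit above:
co <- coef(fm)
b2 <- co[["B2"]] - co[["b1"]]  # back-transform: b2 = (b1 + b2) - b1
c(b1 = co[["b1"]], b2 = b2, b1_plus_b2 = co[["B2"]])
The box constraints on b1 and B2 then guarantee that both b1 and b1 + b2 lie in [0, 1].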

Linear constraints on Z and D in the MARSS package

Recently, I needed to set up a MARSS model such as:
y_t = c + beta * d1_t + alpha * x_t + v_t; x_t = x_(t-1) + w_t
I tried it in R and ran into this problem:
Error in optim(pars, negloglike, method = "BFGS") :
objective function in optim evaluates to length 0 not 1
My benchmark model:
y_t = c + beta * d1_t + alpha * d2_t + alpha * x_t + v_t; x_t = x_(t-1) + w_t
My code:
library(MARSS)
y = rnorm(10)
x = matrix(0, 3, 10)
x[1:2, ] = matrix(rnorm(20), 2, 10)
x[3, ] = 1
x0 = matrix(0.1, 1, 1)
V0 = matrix(0.01, 1, 1)
B = matrix(1, 1, 1)
U = A = 'zero'
# pars to be estimated
pars = c(beta = 0.5, alpha = 0.5, c = 0.5, q = 1)
# calculate the negative log-likelihood
negloglike = function(pars){
  Q = matrix(list('q'), 1, 1)
  R = matrix(1, 1, 1)
  Z = matrix(list(pars['alpha']), 1, 1)
  D = matrix(list(pars['beta'], pars['alpha'], pars['c']), 1, 3)
  model.list = list(B=B, U=U, Q=Q, Z=Z, A=A, D=D, d=x, R=R, x0=x0, V0=V0)
  -1 * MARSS(y, model = model.list,
             control = list(maxit = 200, conv.test.slope.tol = 0.1, abstol = 0.1),
             method = 'kem', silent = TRUE)$loglik
}
optim(pars, negloglike, method = 'BFGS')
but it produced:
Error in optim(pars, negloglike, method = "BFGS") :
objective function in optim evaluates to length 0 not 1
I need help, thanks!
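A useful first check for this particular optim() error is to evaluate the objective once by hand: optim() needs a length-1 numeric, and in R -1 * NULL is numeric(0), which reproduces "length 0 not 1". A diagnostic sketch (not a fix for the model itself), assuming the objects defined above:
val <- negloglike(pars)
length(val)  # optim() requires exactly 1; 0 means MARSS() returned no $loglik
str(val)     # numeric(0) points at a failed MARSS() fit inside negloglike()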

Convergence Failure: Iteration limit reached without convergence (10)

I'm having some difficulty fitting a specific curve to data with an nls model.
This is the formula for the data:
((b1 * ((b2 * x)^b4)) / (1 + ((b2 * x)^b4)))^(b3 / b4)
I use the nls2 package with a random algorithm to find the initial values.
library(nls2)
#FORMULA
eq <- y ~ (b1 * ((b2 * x)^b4)) / (1 + ((b2 * x)^b4))^(b3 / b4)
#LIMITS
values <- data.frame(
  b1 = c(60, 63),
  b2 = c(0, 0.05),
  b3 = c(0, 1),
  b4 = c(0, 0.9))
fit <- nls2(eq,
            data = .data,
            start = values,
            algorithm = "random",
            control = nls.control(maxiter = 1000))
nls(eq, .data, start = coef(fit), alg = "port", lower = 0)
plot(.data)
The values should be:
b1 = 62.2060
b2 = 0.0438
b3 = 0.9692
b4 = 0.8693
However, when I run the code, I always end up with the error message: Convergence Failure: Iteration limit reached without convergence (10)
How can I avoid this convergence failure? Any help is highly appreciated. Thank you.
0. TLDR
You did not set the lower and upper bounds in nls, so you did not get a converging result. If you set them, you will get a result near the boundary. See the code in the last section.
Actually, even with the bounds set, the poor data quality (the sample is small and does not agree with your formula) makes it hard to fit values near your true b1, b2, b3 and b4. See the nontechnical reason below.
1. Nontechnical reason for the convergence failure
I think your code is right, and this convergence failure is due to your data quality or a misspecification of the formula.
In general, it is hard to estimate 4 parameters from only 6 points. If you have good data that actually fit your model well, nls will converge. In your case, either the data are wrong or the formula's specification bias is huge.
Here is a plot to show that:
Code:
# the data from the question
yourdata <- data.frame(x = c(409.56, 195.25, 60.53, 359.56, 188.79, 67.12),
                       y = c(39.76100, 20.11875, 7.23675, 41.01100, 20.28035, 7.07200))
# generate a curve using the true parameters: b1, b2, b3, b4
b1 = 62.2060
b2 = 0.0438
b3 = 0.9692
b4 = 0.8693
x_points = seq(50, 420, length.out = 200)
y_points = (b1 * ((b2 * x_points)^b4)) / (1 + ((b2 * x_points)^b4))^(b3 / b4)
# plot the function
plot(x = x_points, y = y_points, type = 'l', col = 'black', lwd = 5,
     xlim = c(min(yourdata$x) - 5, max(yourdata$x) + 5),
     ylim = c(min(yourdata$y) - 5, max(yourdata$y) + 5))
# plot the data you got
points(yourdata$x, yourdata$y, cex = 2)
Output: (plot omitted; the data points fall well off the true-parameter curve)
If we generate data from your formula, we can fit it quite easily, like this:
## generate data
b1 = 62.2060
b2 = 0.0438
b3 = 0.9692
b4 = 0.8693
x <- runif(6, 60, 450)
y <- (b1 * ((b2 * x)^b4)) / (1 + ((b2 * x)^b4))^(b3 / b4)
data <- data.frame(x, y)
# the data from the question, kept for comparison
yourdata <- data.frame(x = c(409.56, 195.25, 60.53, 359.56, 188.79, 67.12),
                       y = c(39.76100, 20.11875, 7.23675, 41.01100, 20.28035, 7.07200))
#FORMULA
eq <- y ~ (b1 * ((b2 * x)^b4)) / (1 + ((b2 * x)^b4))^(b3 / b4)
#LIMITS
values <- data.frame(
  b1 = c(60, 63),
  b2 = c(0, 0.05),
  b3 = c(0, 1),
  b4 = c(0, 0.9))
fit <- nls2(eq,
            data = data,
            start = values,
            algorithm = "random",
            control = nls.control(maxiter = 1000))
nls(eq, data, start = coef(fit), alg = "port",
    control = nls.control(maxiter = 1000, tol = 1e-05),
    lower = c(60, 0, 0, 0), upper = c(63, 0.05, 1, 0.9), trace = TRUE)
plot(x, y)
Output:
Nonlinear regression model
model: y ~ (b1 * ((b2 * x)^b4))/(1 + ((b2 * x)^b4))^(b3/b4)
data: data
     b1      b2      b3      b4 
62.2060  0.0438  0.9692  0.8693 
residual sum-of-squares: 3.616e-24
Algorithm "port", convergence message: absolute function convergence (6)
Also note that, in the above, I generated only 6 points to fit the model. If you generate more data, for instance 60 points, you will get better convergence!
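For instance, a sketch that changes only the sample size, reusing b1, b2, b3 and b4 from above:
x <- runif(60, 60, 450)
y <- (b1 * ((b2 * x)^b4)) / (1 + ((b2 * x)^b4))^(b3 / b4)
data <- data.frame(x, y)
# rerun the nls2()/nls() calls above on this larger data set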
2. Technical reason
After reading the PORT documentation, I think this error can mean:
- the gradient is calculated incorrectly
- the stopping tolerances are too tight
- the gradient is discontinuous near some iterate
All of these may be related to your data and your setup (the bounds and the formula).
Try the code below and you will get a better result:
Code:
yourdata <- data.frame(x = c(409.56, 195.25, 60.53, 359.56, 188.79, 67.12),
                       y = c(39.76100, 20.11875, 7.23675, 41.01100, 20.28035, 7.07200))
#FORMULA
eq <- y ~ (b1 * ((b2 * x)^b4)) / (1 + ((b2 * x)^b4))^(b3 / b4)
#LIMITS
values <- data.frame(
  b1 = c(60, 63),
  b2 = c(0, 0.05),
  b3 = c(0, 1),
  b4 = c(0, 0.9))
fit <- nls2(eq,
            data = yourdata,
            start = values,
            algorithm = "random",
            control = nls.control(maxiter = 1000))
nls(eq, yourdata, start = coef(fit), alg = "port",
    control = nls.control(maxiter = 1000, tol = 1e-05),
    lower = c(60, 0, 0, 0), upper = c(63, 0.05, 1, 0.9), trace = TRUE)
plot(yourdata$x, yourdata$y)
Outputs:
Nonlinear regression model
model: y ~ (b1 * ((b2 * x)^b4))/(1 + ((b2 * x)^b4))^(b3/b4)
data: yourdata
      b1       b2       b3       b4 
63.00000  0.00155  0.00000  0.90000 
residual sum-of-squares: 22.28
Algorithm "port", convergence message: both X-convergence and relative convergence (5)
As we can see, it converges to the boundary, which means that your data are inconsistent with your settings (the formula or the bounds).

GAM in R: variance explained by each variable

My current problem is to calculate the variance explained by the different variables of a generalized additive model (GAM) with R.
I followed the explanation given by Wood here:
https://stat.ethz.ch/pipermail/r-help/2007-October/142743.html
But I would like to do it with three variables.
I tried this :
library(mgcv)
set.seed(0)
n<-400
x1 <- runif(n, 0, 1)
x2 <- runif(n, 0, 1)
x3 <- runif(n, 0, 1)
f1 <- function(x) exp(2 * x) - 3.75887
f2 <- function(x) 0.2*x^11*(10*(1-x))^6+10*(10*x)^3*(1-x)^10
f3 <- function(x) 0.008*x^2 - 1.8*x + 874
f <- f1(x1) + f2(x2) + f3(x3)
e <- rnorm(n, 0, 2)
y <- f + e
b <- gam(y ~ s(x1, k = 3)+s(x2, k = 3)+ s(x3, k = 3))
b3 <- gam(y ~ s(x1) + s(x2), sp = c(b$sp[1], b$sp[2]))
b2 <- gam(y ~ s(x1) + s(x3), sp = c(b$sp[1], b$sp[3]))
b1 <- gam(y ~ s(x2) + s(x3), sp = c(b$sp[2], b$sp[3]))
b0 <- gam(y~1)
(deviance(b1)-deviance(b))/deviance(b0)
(deviance(b2)-deviance(b))/deviance(b0)
(deviance(b3)-deviance(b))/deviance(b0)
But I don't understand the results. For example, the model with only x1 and x2 has a smaller deviance than the model with all three explanatory variables.
Is the method I used to extract the variance explained by each variable correct with three variables?
Does it mean that there is a confounding effect in the global model? Or is there another explanation?
Thanks a lot.
You did something wrong here:
b <- gam(y ~ s(x1, k = 3) + s(x2, k = 3) + s(x3, k = 3))
b3 <- gam(y ~ s(x1) + s(x2), sp = c(b$sp[1], b$sp[2]))
b2 <- gam(y ~ s(x1) + s(x3), sp = c(b$sp[1], b$sp[3]))
b1 <- gam(y ~ s(x2) + s(x3), sp = c(b$sp[2], b$sp[3]))
Why did you set k = 3 in the first line but not for the rest? Without specifying k, s() takes the default value k = 10. Now you have a problem: b1, b2 and b3 are not nested in b.
In Simon Wood's original example, he left k unspecified, so k = 10 is used for all s() terms. In fact, you can vary the k values, but you must guarantee that you always use the same k for the same covariate (to ensure nesting). For example, you can do:
b <- gam(y ~ s(x1, k = 4) + s(x2, k = 6) + s(x3, k = 3))
b3 <- gam(y ~ s(x1, k = 4) + s(x2, k = 6), sp = c(b$sp[1], b$sp[2])) ## dropping s(x3) from b
b2 <- gam(y ~ s(x1, k = 4) + s(x3, k = 3), sp = c(b$sp[1], b$sp[3])) ## dropping s(x2) from b
b1 <- gam(y ~ s(x2, k = 6) + s(x3, k = 3), sp = c(b$sp[2], b$sp[3])) ## dropping s(x1) from b
Then let's do:
(deviance(b1)-deviance(b))/deviance(b0)
# [1] 0.2073421
(deviance(b2)-deviance(b))/deviance(b0)
# [1] 0.4323154
(deviance(b3)-deviance(b))/deviance(b0)
# [1] 0.02094997
The positive values imply that dropping any model term will inflate the deviance, which is sensible, as our true model has all three terms.
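As a quick sanity check of the nesting (a small sketch using the fits above), each reduced model should fit no better than the full model:
stopifnot(deviance(b1) >= deviance(b),
          deviance(b2) >= deviance(b),
          deviance(b3) >= deviance(b))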

Fitting 1 - exp(x) giving higher weight to the first values

I want to fit a 1 - exp(x) function to a data set, giving higher weight to the first values. However, the following code does not work that way:
x <- sqrt((0.05)^2 + (0.05)^2) * seq(from = 1, to = 20, by = 1)
y <- c(11,20,27,32,35,36,36.5,25,16,9,4,1,7.87e-16,2.07e-15,-9.36e-16,1.61e-15,-3.81e-16,3.92e-16,7.65e-16,-8.26e-16)
temp <- data.frame(x, y)
we <- 1/(log1p(seq_along(x)))
# fit non-linear model
mod <- nls(y ~ (1 - exp(a + b * x)), data = temp, start = list(a = 0, b = 0), weights = we)
# plot data and add fitted curve
plot(temp$x, temp$y)
lines(temp$x, predict(mod, list(x = temp$x)))
Here is the output: (plot omitted; the fitted curve clearly misses the data)
Your specification of weights is correct. The bad fit you obtained is due to your faulty model assumption. You assumed:
y ~ 1 - exp(a + b * x)
Note that exp() gives strictly positive values, so y can never exceed 1. However, the y values in your data range up to 36.5.
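A one-line numeric check of that bound (a sketch; the grid stands in for any value of a + b * x):
max(1 - exp(seq(-10, 10, by = 0.1)))  # never reaches 1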
My idea is not perfect, but it might give you a better starting point. Consider:
y ~ a * x * exp(b * x * x + c * x)
Using your data:
x <- c(0, sqrt((0.05)^2 + (0.05)^2) * seq(from = 1, to = 20, by = 1))
y <- c(0, 11,20,27,32,35,36,36.5,25,16,9,4,1,7.87e-16,2.07e-15,-9.36e-16,1.61e-15,-3.81e-16,3.92e-16,7.65e-16,-8.26e-16)
fit <- nls(y ~ a * x * exp(b * x * x + c * x), start = list(a = 30, b = -1, c = -1))
plot(x, y)
lines(x, predict(fit, list(x = x)))
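If you still want to give higher weight to the first values, the decreasing weights from the question carry over to this model unchanged; a sketch (we is recomputed because x now has 21 points):
we <- 1/(log1p(seq_along(x)))
fitw <- nls(y ~ a * x * exp(b * x * x + c * x),
            start = list(a = 30, b = -1, c = -1), weights = we)
lines(x, predict(fitw, list(x = x)), lty = 2)  # dashed, to distinguish from the unweighted fit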

R script - NLS not working

I have 5 (x,y) data points and I'm trying to find a best fit solution consisting of two lines which intersect at a point (x0,y0), and which follow these equations:
y1 = m1 * (x1 - x0) + y0
y2 = m2 * (x2 - x0) + y0
Specifically, I require that the intersection must occur between x=2 and x=3. Have a look at the code:
#Initialize x1, y1, x2, y2
x1 <- c(1,2)
y1 <- c(10,10)
x2 <- c(3,4,5)
y2 <- c(20,30,40)
g <- c(TRUE, TRUE, FALSE, FALSE, FALSE)
q <- nls(c(y1, y2) ~ ifelse(g == TRUE, m1 * (x1 - x0) + y0, m2 * (x2 - x0) + y0),
         start = c(m1 = -1, m2 = 1, y0 = 0, x0 = 2),
         algorithm = "port",
         lower = c(m1 = -Inf, m2 = -Inf, y0 = -Inf, x0 = 2),
         upper = c(m1 = Inf, m2 = Inf, y0 = Inf, x0 = 3))
coef <- coef(q)
m1 <- coef[1]
m2 <- coef[2]
y0 <- coef[3]
x0 <- coef[4]
#Plot the original x1, y1, and x2, y2
plot(x1,y1,xlim=c(1,5),ylim=c(0,50))
points(x2,y2)
#Plot the fits
x1 <- c(1,2,3,4,5)
fit1 <- m1 * (x1 - x0) + y0
lines(x1, fit1, col="red")
x2 <- c(1,2,3,4,5)
fit2 <- m2 * (x2 - x0) + y0
lines(x2, fit2, col="blue")
So, you can see the data points listed there. I run them through nls and get my parameters m1, m2, x0, y0 (the slopes and the intersection point).
But take a look at the solution:
Clearly, the red line (which is supposed to be based only on the first 2 points) is not the best-fit line for the first 2 points. The same goes for the blue line (the second fit), which is supposed to depend on the last 3 points. What is wrong here?
This is segmented regression:
# input data
x1 <- c(1,2); y1 <- c(10,10); x2 <- c(3,4,5); y2 <- c(20,30,40)
x <- c(x1, x2); y <- c(y1, y2)
# segmented regression
library(segmented)
fm <- segmented.lm(lm(y ~ x), ~ x, NA,
                   seg.control(stop.if.error = FALSE, K = 2))
summary(fm)
# plot
plot(fm)
points(y ~ x)
See ?lm, ?segmented.lm and ?seg.control for more info.
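To pull out the estimated breakpoint and the per-segment slopes, a sketch using segmented's accessors:
fm$psi     # estimated breakpoint(s) with standard error
slope(fm)  # slope of each segment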
I'm not exactly sure what's wrong, but I can get it to work by rearranging things a bit. Please note the warning in ?nls: "Do not use nls on artificial 'zero-residual' data." I added a bit of noise.
## Initialize x1, y1, x2, y2
x1 <- c(1,2)
y1 <- c(10,10)
x2 <- c(3,4,5)
y2 <- c(20,30,40)
## make single x, y vector
x <- c(x1,x2)
set.seed(1001)
## (add a bit of noise to avoid zero-residual artificiality)
y <- c(y1,y2)+rnorm(5,sd=0.01)
g <- c(TRUE,TRUE,FALSE,FALSE,FALSE) ## specify identities of points
## particular changes:
## * you have lower=upper=2 for x0. Did you want 2<x0<3?
## * specified data argument explicitly (allows use of predict() etc.)
## * changed name from 'q' to 'fit1' (avoid R built-in function)
fit1 <- nls(y ~ ifelse(g, m1, m1 + delta_m) * (x - x0) + y0,
            start = c(m1 = -1, delta_m = 2, y0 = 0, x0 = 2),
            algorithm = "port",
            lower = c(m1 = -Inf, delta_m = 0, y0 = -Inf, x0 = 2),
            upper = c(m1 = Inf, delta_m = Inf, y0 = Inf, x0 = 3),
            data = data.frame(x, y))
# plot the original 'data'
plot(x, y, col = rep(c("red", "blue"), c(2, 3)),
     xlim = c(1, 5), ylim = c(0, 50))
## add predicted values
xvec <- seq(1, 5, length.out = 101)
lines(xvec, predict(fit1, newdata = data.frame(x = xvec)))
edit: based ifelse clause on point identity, not x position
edit: changed to require second slope to be > first slope
On a second look, I think the issue above is probably due to the use of separate vectors for x1 and x2, rather than a single x vector: I suspect these got recycled by R to match the length of the g vector, which would have messed things up pretty badly. For example, this stripped-down example:
g <- c(TRUE, TRUE, FALSE, FALSE, FALSE)
ifelse(g,x1,x2)
## [1] 1 2 5 3 4
shows that x2 gets recycled to (3 4 5 3 4) before being used in the ifelse clause. The scary part is that one normally gets a warning such as this:
> x2 + 1:5
[1] 4 6 8 7 9
Warning message:
In x2 + 1:5 :
longer object length is not a multiple of shorter object length
but in this case there is no warning ...
