I've read many similar questions but still couldn't find the answer.
Here is some data that I'm using to calibrate the equation below:
set.seed(100)
i <- sort(rexp(n = 100,rate = 0.01))
Tr <- sort(runif(n = 100,min = 5,max = 100))
k_start <- 3259
u_start <- 0.464
t0_start <- 38
n_start <- -1
i_test <- k_start*Tr^u_start * (5 + t0_start)^n_start
m <- nls(i~(k * Tr^u / (5+t0)^n), start = list(k = k_start, u = u_start,
t0 = t0_start, n = n_start))
When I used nlsLM, the same error came up:
Error in nlsModel(formula, mf, start, wts) : singular gradient matrix at initial parameter estimates
For the start values, I tried using values from a calibration done in Python, and still the same error occurs.
There's also another way to write that equation, but it produces the same error:
d_start <- 43
m <- nls(i ~ (k * Tr^u / d),
start = list(k = k_start, u = u_start,d=d_start))
When I use only the numerator it works, but that's not what I need.
Any help will be very much appreciated.
In the first nls, the right hand side depends on k, t0 and n only through
k / (5+t0)^n, so the model is over-parameterized: a single parameter could represent
their combined effect. In the second nls the right hand side depends on
k and d only through k / d, so again the problem is over-parameterized and
one parameter could represent their combined effect.
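To make the over-parameterization concrete, here is a quick check (my illustration, not part of the original answer, reusing Tr from the question) showing that two different (k, t0, n) triples with the same combined value k / (5+t0)^n produce identical predictions:

## two different (k, t0, n) triples with the same combined value
## C = k / (5 + t0)^n give exactly the same right hand side
C  <- 3259 / (5 + 38)^-1   # combined parameter for k = 3259, t0 = 38, n = -1
k2 <- C * (5 + 10)^2       # a second triple: k = k2, t0 = 10, n = 2
all.equal(3259 * Tr^0.464 / (5 + 38)^-1, k2 * Tr^0.464 / (5 + 10)^2)
## [1] TRUE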
Getting rid of the excess parameters, we can obtain starting values from a linear model: taking logs gives log(i) = log(k) + u * log(Tr), so the intercept and slope of a log-log regression estimate log(k) and u. With those starting values it converges.
fit.lm <- lm(log(i) ~ log(Tr))
co <- coef(fit.lm)
fit <- nls(i ~ k * Tr ^ u, start = list(k = exp(co[[1]]), u = co[[2]]))
fit
## Nonlinear regression model
## model: i ~ k * Tr^u
## data: parent.frame()
## k u
## 0.0002139 3.0941602
## residual sum-of-squares: 79402
##
## Number of iterations to convergence: 43
## Achieved convergence tolerance: 5.354e-06
Reciprocal Model
Below we fit a "reciprocal model", which has the same number of parameters but a better fit as measured by the deviance (the residual sum of squares); a lower value means a better fit.
# reciprocal model
fit.recip <- nls(i ~ 1/(a + b * log(Tr)), start = list(a = 1, b = 1))
deviance(fit)
## [1] 79402.17
deviance(fit.recip)
## [1] 25488.1
Graphics
Below we plot both fit (red) and fit.recip (blue) models.
plot(i ~ Tr)
lines(fitted(fit) ~ Tr, col = "red")
lines(fitted(fit.recip) ~ Tr, col = "blue")
legend("topleft", legend = c("fit", "fit.recip"), lty = 1, col = c("red", "blue"))
plinear
Note that the plinear algorithm could be used as an alternative to fit the fit model above, avoiding the need to supply a starting value for k. It also has the benefit that, in this case, it requires substantially fewer iterations (14 vs. 43). With plinear the formula should omit the linear parameter, k, as it is implied by the algorithm and will be reported as .lin.
nls(i ~ Tr ^ u, start = list(u = co[[2]]), algorithm = "plinear")
## Nonlinear regression model
## model: i ~ Tr^u
## data: parent.frame()
## u .lin
## 3.0941725 0.0002139
## residual sum-of-squares: 79402
##
## Number of iterations to convergence: 14
## Achieved convergence tolerance: 3.848e-06
Related
I am trying to fit my photosynthesis data to an nls function, which is a nonrectangular hyperbola function. So far, I have had issues getting the right starting values for nls, and therefore I am getting a lot of errors such as 'singular gradient', 'NaNs produced', or 'step factor 0.000488281 reduced below 'minFactor' of 0.000976562'. Would you please give some suggestions for finding the best starting values? Thanks in advance!
The code and data are below:
#Dataframe
PPFD <- c(0,0,0,50,50,50,100,100,100,200,200,200,400,400,400,700,700,700,1000,1000,1000,1500,1500,1500)
Cultivar <- c(-0.7,-0.8,-0.6,0.6,0.5,0.8,2.0,2.0,2.3,3.6,3.7,3.7,5.7,5.5,5.8,9.7,9.6,10.0,14.7,14.4,14.9,20.4,20.6,20.9)
NLRC <-data.frame(PPFD,Cultivar)
#nls regression
reg_nrh <- nls(Cultivar ~ (1/(2*Theta))*(AQY*PPFD+Am-sqrt((AQY*PPFD+Am)^2-4*AQY*Theta*Am*PPFD))-Rd, data = NLRC, start=list(Am = max(NLRC$Cultivar)-min(NLRC$Cultivar), AQY = 0.05, Rd=-min(NLRC$Cultivar), Theta = 1))
#estimated parameters for plotting
Amnrh <- coef(reg_nrh)[1]
AQYnrh <- coef(reg_nrh)[2]
Rdnrh <- coef(reg_nrh)[3]
Theta <- coef(reg_nrh)[4]
#plot
plot(NLRC$PPFD, NLRC$Cultivar, main = "Cultivar", xlab = "", ylab = "",
     ylim = c(-2, 40), cex.lab = 1.2, cex.axis = 1.5, cex = 2)
mtext(expression("PPFD ("*mu*"mol photons "*m^-2*s^-1*")"), side = 1, line = 3.3, cex = 1.5)
mtext(expression(P[net]*" ("*mu*"mol "*CO[2]*" "*m^-2*s^-1*")"), side = 2, line = 2.5, cex = 1.5)
#simulated value
ppfd = seq(from = 0, to = 1500)
pnnrh <- (1/(2*Theta))*(AQYnrh*ppfd+Amnrh-sqrt((AQYnrh*ppfd+Amnrh)^2-4*AQYnrh*Theta*Amnrh*ppfd))- Rdnrh
lines(ppfd, pnnrh, col="Green")
If we

- take the maximum of 0 and the expression within the sqrt, to avoid taking square roots of negative numbers,
- fix Theta at 0.8, and
- use lm to get starting values for AQY and Am,

then it converges:
Theta <- 0.8
fm <- lm(Cultivar ~ PPFD, NLRC)
st <- list(AQY = coef(fm)[[2]], Rd = -min(NLRC$Cultivar), Am = coef(fm)[[1]])
fo <- Cultivar ~
(1/(2*Theta))*(AQY*PPFD+Am-sqrt(pmax(0, (AQY*PPFD+Am)^2-4*AQY*Theta*Am*PPFD)))-Rd
reg <- nls(fo, data = NLRC, start = st)
deviance(reg) # residual sum of squares
## [1] 5.607943
plot(Cultivar ~ PPFD, NLRC)
lines(fitted(reg) ~ PPFD, NLRC, col = "red")
Note that the first model below has only two parameters yet has a lower residual sum of squares (lower is better).
reg2 <- nls(Cultivar ~ a * PPFD^b, NLRC, start = list(a = 1, b = 1))
deviance(reg2)
## [1] 5.098796
The linear models below have higher residual sums of squares, but they do have the advantage of being very simple.
deviance(fm) # fm defined above
## [1] 6.938648
fm0 <- lm(Cultivar ~ PPFD + 0, NLRC) # same as fm except no intercept
deviance(fm0)
## [1] 7.381632
I have existing code for fitting a sigmoid curve to data in R. How can I use selfStart (or another method) to automatically find starting values for the regression?
sigmoid = function(params, x) {
params[1] / (1 + exp(-params[2] * (x - params[3])))
}
dataset = data.frame("x" = 1:53, "y" =c(0,0,0,0,0,0,0,0,0,0,0,0,0,0.1,0.18,0.18,0.18,0.33,0.33,0.33,0.33,0.41,0.41,0.41,0.41,0.41,0.41,0.5,0.5,0.5,0.5,0.68,0.58,0.58,0.68,0.83,0.83,0.83,0.74,0.74,0.74,0.83,0.83,0.9,0.9,0.9,1,1,1,1,1,1,1) )
x = dataset$x
y = dataset$y
# fitting code
fitmodel <- nls(y~a/(1 + exp(-b * (x-c))), start=list(a=1,b=.5,c=25))
# visualization code
# get the coefficients using the coef function
params=coef(fitmodel)
y2 <- sigmoid(params,x)
plot(y2,type="l")
points(y)
This is a common (and interesting) problem in non-linear curve fitting.
Background
We can find sensible starting values if we take a closer look at the sigmoid function. We first note that

f(x) = a / (1 + exp(-b * (x - c)))

tends to a as x becomes large, since exp(-b * (x - c)) tends to 0 for b > 0. So for large values of x, the function approaches a. In other words, as a starting value for a we may choose the value of y for the largest value of x.
In R, this translates to y[which.max(x)].
Now that we have a starting value for a, we need to decide on starting values for b and c. To do that, we can make use of the geometric series

1 / (1 + r) = 1 - r + r^2 - ...

and expand f(x) = y by keeping only the first two terms:

y ≈ a * (1 - exp(-b * (x - c)))

We now set a = 1 (our starting value for a), re-arrange the equation and take the logarithm on both sides:

log(1 - y) = -b * x + b * c

We can now fit a linear model of the form log(1 - y) ~ x to obtain estimates for the slope and offset, which in turn provide the starting values for b and c: b is minus the slope, and c is the offset divided by b.
R implementation
Let's define a function that takes the values x and y as arguments and returns a list of starting values for the parameters:
start_val_sigmoid <- function(x, y) {
  ## regress log(a - y) on x; the small 1e-6 offset avoids log(0) where y equals a
  fit <- lm(log(y[which.max(x)] - y + 1e-6) ~ x)
  list(
    a = y[which.max(x)],                       # plateau: y at the largest x
    b = unname(-coef(fit)[2]),                 # b = -slope
    c = unname(-coef(fit)[1] / coef(fit)[2]))  # c = -offset / slope
}
Based on the data for x and y you give, we obtain the following starting values
start_val_sigmoid(x, y)
#$a
#[1] 1
#
#$b
#[1] 0.2027444
#
#$c
#[1] 15.01613
Since start_val_sigmoid returns a list, we can use its output directly as the start argument of nls:
nls(y ~ a / ( 1 + exp(-b * (x - c))), start = start_val_sigmoid(x, y))
#Nonlinear regression model
# model: y ~ a/(1 + exp(-b * (x - c)))
# data: parent.frame()
# a b c
# 1.0395 0.1254 29.1725
# residual sum-of-squares: 0.2119
#
#Number of iterations to convergence: 9
#Achieved convergence tolerance: 9.373e-06
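As an aside (not part of the original answer), R also provides a self-starting logistic model, SSlogis, which computes its own starting values, so no start argument is needed. Its parameterization Asym / (1 + exp((xmid - x) / scal)) corresponds to ours with Asym = a, xmid = c and scal = 1/b:

## self-starting logistic fit; starting values are found automatically
fit_ss <- nls(y ~ SSlogis(x, Asym, xmid, scal))
coef(fit_ss)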
Sample data
dataset = data.frame("x" = 1:53, "y" =c(0,0,0,0,0,0,0,0,0,0,0,0,0,0.1,0.18,0.18,0.18,0.33,0.33,0.33,0.33,0.41,0.41,0.41,0.41,0.41,0.41,0.5,0.5,0.5,0.5,0.68,0.58,0.58,0.68,0.83,0.83,0.83,0.74,0.74,0.74,0.83,0.83,0.9,0.9,0.9,1,1,1,1,1,1,1) )
x = dataset$x
y = dataset$y
I have a series of data to which I have fit a power curve, and I use the predict function in R to predict y values from additional x values.
set.seed(1485)
len <- 24
x <- runif(len)
y <- x^3 + rnorm(len, 0, 0.06)
mydata <- data.frame(x = x, y = y)
z <- nls(y ~ a * x^b, data = mydata, start = list(a = 1, b = 1))
power <- round(summary(z)$coefficients[1], 3)
power.se <- round(summary(z)$coefficients[2], 3)
plot(y ~ x, main = "Fitted power model", sub = "Blue: fit; green: known")
s <- seq(0, 1, length = 100)
lines(s, s^3, lty = 2, col = "green")
lines(s, predict(z, list(x = s)), lty = 1, col = "blue")
text(0, 0.5, paste("y =x^ (", power, " +/- ", power.se,")", sep = ""), pos = 4)
Instead of using the predict function here, how could I manually calculate estimated y values for additional x values from this power function? If this were just a simple linear regression, I would calculate the slope and y-intercept and compute my y values with
y = m*x + b
Is there a similar equation I can use from the output of z that will allow me to estimate y values from additional x values?
> z
Nonlinear regression model
model: y ~ a * x^b
data: mydata
a b
1.026 3.201
residual sum-of-squares: 0.07525
Number of iterations to convergence: 5
Achieved convergence tolerance: 5.162e-06
You would do it the same way, except using the power equation you modeled. You can access the parameters the model calculated using z$m$getPars().
Here is a simple example to illustrate:
predict(z, list(x = 1))
Results in: 1.026125
This equals the result of
z$m$getPars()["a"] * 1 ^ z$m$getPars()["b"]
which is equivalent to y = a * x^b.
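If you need such predictions repeatedly, you could wrap the idea in a small helper; predict_power below is a hypothetical convenience function, not part of nls:

## hypothetical helper: manual prediction from a fitted power model
predict_power <- function(model, x) {
  p <- model$m$getPars()
  p[["a"]] * x^p[["b"]]
}
predict_power(z, c(0.5, 1))  # same as predict(z, list(x = c(0.5, 1)))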
Here are some ways.
1) with. This evaluates the formula with respect to the coefficients:
x <- 1:2 # input
with(as.list(coef(z)), a * x^b)
## [1] 1.026125 9.437504
2) attach. We could also use attach, although it is generally frowned upon:
attach(as.list(coef(z)))
a * x^b
## [1] 1.026125 9.437504
3) explicit. Define the coefficients explicitly:
a <- coef(z)[["a"]]; b <- coef(z)[["b"]]
a * x^b
## [1] 1.026125 9.437504
4) eval. This one extracts the formula from z so that we don't have to specify it again. formula(z)[[3]] is the right-hand side of the formula used to produce z. Use of eval is sometimes frowned upon, but it does avoid the redundant specification of the formula.
eval(formula(z)[[3]], as.list(coef(z)))
## [1] 1.026125 9.437504
I am trying to estimate the constants for Heaps law.
I have the following dataset, novels_collection:
  Number of novels DistinctWords WordOccurrences
1                1         13575          117795
2                1         34224          947652
3                1         40353         1146953
4                1         55392         1661664
5                1         60656         1968274
Then I built the following function:
# Function for Heaps law
heaps <- function(K, n, B){
K*n^B
}
heaps(2,117795,.7) #Just to test it works
So n = WordOccurrences, and K and B are constants to be estimated in order to predict DistinctWords.
I tried this but it gives me an error:
fitHeaps <- nls(DistinctWords ~ heaps(K,WordOccurrences,B),
data = novels_collection[,2:3],
start = list(K = .1, B = .1), trace = T)
Error in numericDeriv(form[[3L]], names(ind), env) :
  Missing value or an infinity produced when evaluating the model
Any idea in how could I fix this or a method to fit the function and get the values for K and B?
If you take log transform on both sides of y = K * n ^ B, you get log(y) = log(K) + B * log(n). This is a linear relationship between log(y) and log(n), hence you can fit a linear regression model to find log(K) and B.
logy <- log(novels_collection$DistinctWords)
logn <- log(novels_collection$WordOccurrences)
fit <- lm(logy ~ logn)
para <- coef(fit)        ## para[1] is log(K), para[2] is B
para[1] <- exp(para[1])  ## now para[1] is K
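If you still want a non-linear fit, a natural follow-up (a sketch, assuming the novels_collection data frame from the question) is to use these estimates as starting values for nls:

## sketch: feed the lm-based estimates into nls as starting values
fitHeaps <- nls(DistinctWords ~ K * WordOccurrences^B,
                data = novels_collection,
                start = list(K = para[[1]], B = para[[2]]))
coef(fitHeaps)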
With minpack.lm we can fit the non-linear model directly. I suspect it will be more prone to overfitting than a linear model on the log-transformed variables (as done by Zheyuan), but we could compare the residuals of the linear and non-linear models on a held-out dataset to get empirical results, which would be interesting to see.
library(minpack.lm)
fitHeaps = nlsLM(DistinctWords ~ heaps(K, WordOccurrences, B),
data = novels_collection[,2:3],
start = list(K = .01, B = .01))
coef(fitHeaps)
# K B
# 5.0452566 0.6472176
plot(novels_collection$WordOccurrences, novels_collection$DistinctWords, pch=19)
lines(novels_collection$WordOccurrences, predict(fitHeaps, newdata = novels_collection[,2:3]), col='red')
I'm trying to fit a bi-exponential function:
t = seq(0, 30, by = 0.1)
A = 20 ; B = 10 ; alpha = 0.25 ; beta = 0.01
y = A*exp(-alpha*t) + B*exp(-beta*(t))
df = as.data.frame(cbind(t,y))
library(ggplot2)
ggplot(df, aes(t, y)) + geom_line() + scale_y_continuous(limits = c(0, 50))
This problem can't be solved by a simple transformation such as taking logs, so I wanted to use the nls2 package:
library(nls2)
fo <- y ~ Ahat*exp(-alphahat*t) + Bhat*exp(-betahat*t)
fit <- nls2(fo,
start = list(Ahat=5, Bhat=5, alphahat=0.5,betahat=0.5),
algorithm = "brute-force",
trace = TRUE,
lower = c(Ahat=0, Bhat=0, alphahat=0, betahat=0),
upper = c(Ahat=50, Bhat=50, alphahat=10,betahat=10))
fit
Here is the result:
Nonlinear regression model
model: y ~ Ahat * exp(-alphahat * t) + Bhat * exp(-betahat * t)
data: NULL
Ahat Bhat alphahat betahat
5.0 5.0 0.5 0.5
residual sum-of-squares: 37910
Number of iterations to convergence: 4
Achieved convergence tolerance: NA
I assume something is wrong in my code because:

- data: NULL?
- Why only 4 iterations?
- It's hard to believe nls2 didn't find a better solution than the starting point.
- The result is far from the true solution.
From the documentation, the start parameter should be a data.frame of two rows that define the grid to search in, or a data.frame with more rows giving the specific parameter combinations to test when using brute-force. Also, nls will have trouble with your fit because it is a perfect curve with no noise. The brute-force method is slow, so here is an example where the search space for nls2 is decreased. The result of the brute-force nls2 is then used as starting values for nls with the default algorithm (or you could use nls2), after adding a tiny bit of noise to the data.
## Data
t = seq(0, 30, by = 0.1)
A = 20 ; B = 10 ; alpha = 0.25 ; beta = 0.01
y = A*exp(-alpha*t) + B*exp(-beta*(t))
df = as.data.frame(cbind(t,y))
library(nls2)
fo <- y ~ Ahat*exp(-alphahat*t) + Bhat*exp(-betahat*t)
## Define the grid to search in,
## Note: decreased the grid size
grd <- data.frame(Ahat=c(10,30),
Bhat=c(10, 30),
alphahat=c(0,2),
betahat=c(0,1))
## Do the brute-force
fit <- nls2(fo,
data=df,
start = grd,
algorithm = "brute-force",
control=list(maxiter=100))
coef(fit)
# Ahat Bhat alphahat betahat
# 10.0000000 23.3333333 0.0000000 0.3333333
## Now, run through nls:
## Fails, because there is no noise
final <- nls(fo, data=df, start=as.list(coef(fit)))
## Add a little bit of noise
df$y <- df$y+rnorm(nrow(df),0,0.001)
coef((final <- nls(fo, data=df, start=as.list(coef(fit)))))
# Ahat Bhat alphahat betahat
# 10.00034000 19.99956016 0.01000137 0.25000966
## Plot
plot(df, col="steelblue", pch=16)
points(df$t, predict(final), col="salmon", type="l")
Your data is NULL because you didn't pass any data into the nls2 call.
This is how nls2 needs to be set up:
nls2(formula, data = parent.frame(), start, control = nls.control(),
algorithm = c("default", "plinear", "port", "brute-force",
"grid-search", "random-search", "plinear-brute", "plinear-random"),
trace = FALSE, weights, ..., all = FALSE)
Take a look at the official documentation for a full example.
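For the call in the question, the minimal change is to supply the data frame explicitly (a sketch keeping the original starting values):

## the question's call with the data argument supplied
fit <- nls2(fo,
            data = df,
            start = list(Ahat = 5, Bhat = 5, alphahat = 0.5, betahat = 0.5),
            algorithm = "brute-force",
            trace = TRUE)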