How to put a complicated equation into an R formula?

We have the diameter of trees as the predictor and tree height as the dependent variable. A number of different equations exist for this kind of data and we try to model some of them and compare the results.
However, we can't figure out how to correctly put one equation into the corresponding R formula format.
The trees data set in R can be used as an example.
data(trees)
df <- trees
df$h <- df$Height * 0.3048 #transform to metric system
df$dbh <- (trees$Girth * 0.3048) / pi #transform tree girth to diameter
First, an example of an equation that works well, H = a + b/D + c*D^2:
form1 <- h ~ I(dbh ^ -1) + I( dbh ^ 2)
m1 <- lm(form1, data = df)
m1
Call:
lm(formula = form1, data = df)
Coefficients:
(Intercept) I(dbh^-1) I(dbh^2)
27.1147 -5.0553 0.1124
Coefficients a, b and c are estimated, which is what we are interested in.
Now the problematic equation, H = 1.3 + D^2 / (a + b*D + c*D^2).
Trying to fit it like this:
form2 <- h ~ I(dbh ^ 2) / dbh + I(dbh ^ 2) + 1.3
gives an error:
m1 <- lm(form2, data = df)
Error in terms.formula(formula, data = data)
invalid model formula in ExtractVars
I guess this is because / is interpreted as a nested model and not an arithmetic operator?
This doesn't give an error:
form2 <- h ~ I(I(dbh ^ 2) / dbh + I(dbh ^ 2) + 1.3)
m1 <- lm(form2, data = df)
But the result is not the one we want:
m1
Call:
lm(formula = form2, data = df)
Coefficients:
(Intercept) I(I(dbh^2)/dbh + I(dbh^2) + 1.3)
19.3883 0.8727
Only one coefficient is given for the whole term within the outer I(), which seems logical.
How can we fit the second equation to our data?

Assuming you are using nls, the R formula can call an ordinary R function, H(a, b, c, D), so the formula can be just h ~ H(a, b, c, dbh). This works:
# use lm to get starting values
lm1 <- lm(1/(h - 1.3) ~ I(1/dbh) + I(1/dbh^2), df)
start <- rev(setNames(coef(lm1), c("c", "b", "a")))
# run nls
H <- function(a, b, c, D) 1.3 + D^2 / (a + b * D + c * D^2)
nls1 <- nls(h ~ H(a, b, c, dbh), df, start = start)
nls1 # display result
Graphing the output:
plot(h ~ dbh, df)
lines(fitted(nls1) ~ dbh, df)

You've got a couple of problems. (1) You're missing parentheses around the denominator of form2 (and R has no way to know that you want a constant a in the denominator, or where to put any of the parameters, really), and, much more problematic, (2) your second model isn't linear, so lm won't work.
Fixing (1) is easy:
form2 <- h ~ 1.3 + I(dbh^2) / (a + b * dbh + c * I(dbh^2))
For (2), though there are many ways to estimate the parameters of a nonlinear model, nls (nonlinear least squares) is a good place to start:
m2 <- nls(form2, data = df, start = list(a = 1, b = 1, c = 1))
You need to provide starting guesses for the parameters in nls. I just picked 1's, but you should use better guesses that ballpark what the parameters might be.
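If rough values are hard to come by, one low-tech option (a sketch, not part of the original answer) is to plot the curve implied by candidate parameter values over the data and adjust by eye before calling nls:
# sketch: eyeball starting values by overlaying the implied curve on the data
a <- 1; b <- 1; c <- 1                        # candidate starting values
plot(h ~ dbh, data = df)
curve(1.3 + x^2 / (a + b * x + c * x^2), add = TRUE, col = "red")
# tweak a, b, c until the red curve roughly tracks the points, then e.g.
# m2 <- nls(form2, data = df, start = list(a = a, b = b, c = c))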

An answer that complements @shujaa's:
You can transform your problem from
H = 1.3 + D^2/(a+b*D+c*D^2)
to
1/(H-1.3) = a/D^2+b/D+c
This would normally mess up the assumptions of the model (i.e., if H were normally distributed with constant variance, then 1/(H-1.3) wouldn't be). However, let's try it anyway:
data(trees)
df <- transform(trees,
    h   = Height * 0.3048,        #transform to metric system
    dbh = Girth * 0.3048 / pi     #transform tree girth to diameter
)
lm(1/(h-1.3) ~ poly(I(1/dbh),2,raw=TRUE),data=df)
## Coefficients:
## (Intercept) poly(I(1/dbh), 2, raw = TRUE)1
## 0.043502 -0.006136
## poly(I(1/dbh), 2, raw = TRUE)2
## 0.010792
These results would normally be good enough to get good starting values for the nls fit. However, you can do better than that via glm, which uses a link function to allow for some forms of non-linearity. Specifically,
(fit2 <- glm(h-1.3 ~ poly(I(1/dbh),2,raw=TRUE),
family=gaussian(link="inverse"),data=df))
## Coefficients:
## (Intercept) poly(I(1/dbh), 2, raw = TRUE)1
## 0.041795 -0.002119
## poly(I(1/dbh), 2, raw = TRUE)2
## 0.008175
##
## Degrees of Freedom: 30 Total (i.e. Null); 28 Residual
## Null Deviance: 113.2
## Residual Deviance: 80.05 AIC: 125.4
##
You can see that the results are approximately the same as the linear fit, but not quite.
pframe <- data.frame(dbh=seq(0.8,2,length=51))
We use predict, but need to correct the prediction to account for the fact that we subtracted a constant from the LHS:
pframe$h <- predict(fit2,newdata=pframe,type="response")+1.3
p2 <- predict(fit2,newdata=pframe,se.fit=TRUE) ## predict on link scale
pframe$h_lwr <- with(p2,1/(fit+1.96*se.fit))+1.3
pframe$h_upr <- with(p2,1/(fit-1.96*se.fit))+1.3
png("dbh_tmp1.png",height=4,width=6,units="in",res=150)
par(las=1,bty="l")
plot(h~dbh,data=df)
with(pframe,lines(dbh,h,col=2))
with(pframe,polygon(c(dbh,rev(dbh)),c(h_lwr,rev(h_upr)),
border=NA,col=adjustcolor("black",alpha=0.3)))
dev.off()
Because we have used the constant on the LHS (this almost, but not quite, fits into the framework of using an offset -- we could only use an offset if our formula were 1/H - 1.3 = a/D^2 + ..., i.e. if the constant adjustment were on the link (inverse) scale rather than the original scale), this doesn't fit perfectly into ggplot's geom_smooth framework:
library("ggplot2")
ggplot(df,aes(dbh,h))+geom_point()+theme_bw()+
geom_line(data=pframe,colour="red")+
geom_ribbon(data=pframe,colour=NA,alpha=0.3,
aes(ymin=h_lwr,ymax=h_upr))
ggsave("dbh_tmp2.png",height=4,width=6)
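As noted above, these coefficients would normally serve as starting values for a full nls fit. A minimal sketch of that step (not part of the original answer), using the fact that in 1/(H - 1.3) = c + b/D + a/D^2 the intercept corresponds to c, the 1/dbh term to b and the (1/dbh)^2 term to a:
# sketch: reuse the glm coefficients as nls starting values
cf <- coef(fit2)
nls_fit <- nls(h ~ 1.3 + dbh^2 / (a + b * dbh + c * dbh^2), data = df,
               start = list(a = unname(cf[3]), b = unname(cf[2]), c = unname(cf[1])))
nls_fit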

Related

Syntax for three-piece segmented regression using NLS in R when concave

My goal is to fit a three-piece (i.e., two break-point) regression model to make predictions using propagate's predictNLS function, making sure to define knots as parameters, but my model formula seems off.
I've used the segmented package to estimate the breakpoint locations (used as starting values in NLS), but would like to keep my models in the NLS format, specifically, nlsLM {minpack.lm} because I am fitting other types of curves to my data using NLS, want to allow NLS to optimize the knot values, am sometimes using variable weights, and need to be able to easily calculate the Monte Carlo confidence intervals from propagate. Though I'm very close to having the right syntax for the formula, I'm not getting the expected/required behaviour near the breakpoint(s). The segments SHOULD meet directly at the breakpoints (without any jumps), but at least on this data, I'm getting a weird local minimum at the breakpoint (see plots below).
Below is an example of my data and general process. I believe my issue to be in the NLS formula.
library(minpack.lm)
library(segmented)
y <- c(-3.99448113, -3.82447011, -3.65447803, -3.48447030, -3.31447855, -3.14448753, -2.97447972, -2.80448401, -2.63448380, -2.46448069, -2.29448796, -2.12448912, -1.95448783, -1.78448797, -1.61448563, -1.44448719, -1.27448469, -1.10448651, -0.93448525, -0.76448637, -0.59448626, -0.42448586, -0.25448588, -0.08448548, 0.08551417, 0.25551393, 0.42551411, 0.59551395, 0.76551389, 0.93551398)
x <- c(61586.1711, 60330.5550, 54219.9925, 50927.5381, 48402.8700, 45661.9175, 37375.6023, 33249.1248, 30808.6131, 28378.6508, 22533.3782, 13901.0882, 11716.5669, 11004.7305, 10340.3429, 9587.7994, 8736.3200, 8372.1482, 8074.3709, 7788.1847, 7499.6721, 7204.3168, 6870.8192, 6413.0828, 5523.8097, 3961.6114, 3460.0913, 2907.8614, 2016.1158, 452.8841)
df<- data.frame(x,y)
#Use Segmented to get estimates for parameters with 2 breakpoints
my.seg2 <- segmented(lm(y ~ x, data = df), seg.Z = ~ x, npsi = 2)
#extract knot, intercept, and coefficient values to use as NLS start points
my.knot1 <- my.seg2$psi[1,2]
my.knot2 <- my.seg2$psi[2,2]
my.m_2 <- slope(my.seg2)$x[1,1]
my.b1 <- my.seg2$coefficients[[1]]
my.b2 <- my.seg2$coefficients[[2]]
my.b3 <- my.seg2$coefficients[[3]]
#Fit a NLS model to ~replicate segmented model. Presumably my model formula is where the problem lies
my.model <- nlsLM(y ~ m*x + b + (b2*(ifelse(x >= knot1 & x <= knot2, 1, 0)*(x - knot1)) +
                                 (b3*ifelse(x > knot2, 1, 0)*(x - knot2 - knot1))),
                  data = df,
                  start = c(m = my.m_2, b = my.b1, b2 = my.b2, b3 = my.b3,
                            knot1 = my.knot1, knot2 = my.knot2))
How it should look
plot(my.seg2)
How it does look
plot(x, y)
lines(x=x, y=predict(my.model), col='black', lty = 1, lwd = 1)
I was pretty sure I had it "right", but when the 95% confidence intervals are plotted with the line and prediction resolution (e.g., the density of x points) is increased, things seem dramatically incorrect.
Thank you all for your help.
Define g to be a grouping vector, of the same length as x, that takes on the values 1, 2, 3 for the three sections of the x axis, and create an nls model from these. The resulting plot looks OK.
my.knots <- c(my.knot1, my.knot2)
g <- cut(x, c(-Inf, my.knots, Inf), label = FALSE)
fm <- nls(y ~ a[g] + b[g] * x, df, start = list(a = c(1, 1, 1), b = c(1, 1, 1)))
plot(y ~ x, df)
lines(fitted(fm) ~ x, df, col = "red")
Constraints
Although the above looks OK and may be sufficient, it does not guarantee that the segments intersect at the knots. To do that we must impose the constraints that both sides are equal at the knots:
a[2] + b[2] * my.knots[1] = a[1] + b[1] * my.knots[1]
a[3] + b[3] * my.knots[2] = a[2] + b[2] * my.knots[2]
so
a[2] = a[1] + (b[1] - b[2]) * my.knots[1]
a[3] = a[2] + (b[2] - b[3]) * my.knots[2]
= a[1] + (b[1] - b[2]) * my.knots[1] + (b[2] - b[3]) * my.knots[2]
giving:
# returns a vector of the three a values
avals <- function(a1, b) unname(cumsum(c(a1, -diff(b) * my.knots)))
fm2 <- nls(y ~ avals(a1, b)[g] + b[g] * x, df, start = list(a1 = 1, b = c(1, 1, 1)))
To get the three a values we can use:
co <- coef(fm2)
avals(co[1], co[-1])
To get the residual sum of squares:
deviance(fm2)
## [1] 0.193077
Polynomial
Although it involves a large number of parameters, a polynomial fit could be used in place of the segmented linear regression. A 12th degree polynomial involves 13 parameters but has a lower residual sum of squares than the segmented linear regression. A lower degree could be used with a corresponding increase in residual sum of squares. A 7th degree polynomial involves 8 parameters and looks not too bad visually, although it has a higher residual sum of squares.
fm12 <- nls(y ~ cbind(1, poly(x, 12)) %*% b, df, start = list(b = rep(1, 13)))
deviance(fm12)
## [1] 0.1899218
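The 7th-degree fit mentioned above is not shown in the original answer; a minimal sketch of it:
# sketch: 7th degree polynomial (8 parameters), mentioned above but not shown
fm7 <- nls(y ~ cbind(1, poly(x, 7)) %*% b, df, start = list(b = rep(1, 8)))
deviance(fm7)                        # compare with deviance(fm2) and deviance(fm12)
lines(fitted(fm7) ~ x, df, col = "blue")   # add to the earlier plot of y ~ x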
It may, in part, reflect a limitation of segmented: segmented returns a single change-point value without quantifying the associated uncertainty. Redoing the analysis using mcp, which returns Bayesian posteriors, we see that the second change point is bimodally distributed:
library(mcp)
model = list(
    y ~ 1 + x,   # Intercept + slope in first segment
    ~ 0 + x,     # Only slope changes in the next segments
    ~ 0 + x
)
# Fit it with a large number of samples and plot the change point posteriors
fit = mcp(model, data = data.frame(x, y), iter = 50000, adapt = 10000)
plot_pars(fit, regex_pars = "^cp*", type = "dens_overlay")
FYI, mcp can plot credible intervals as well (the red dashed lines):
plot(fit, q_fit = TRUE)

Fit 'nls': singular gradient matrix at initial parameter estimates

I'm new to using nls and I'm having problems finding the starting parameters. I've read several posts and tried various parameters and formula constructions but I keep getting errors.
This is a small example of what I'm doing and I'd very much appreciate if anyone could give me some tips!
# Data to which I want to fit a non-linear function
x <- c(0, 4, 13, 30, 63, 92)
y <- c(0.00000000, 0.00508822, 0.01103990, 0.02115466, 0.04036655, 0.05865331)
z <- 0.98
# STEPS:
# 1 pool, z fixed. This works.
fit <- nls(y ~ z * ((1 - exp(-k1*x))),
start=list(k1=0))
# 2 pool model, z fixed
fit2 <- nls(y ~ z * (1 - exp(-k1*x)) + (1 - exp(-k2*x)),
start=list(k1=0, k2=0)) # Error: singular gradient matrix at initial parameter estimates
# My goal: 2 pool model, z free
fit3 <- nls(y ~ z * (1 - exp(-k1*x)) + (1 - exp(-k2*x)),
start=list(z=0.5, k1=0, k2=0))
It has been a while since you asked the question but maybe you are still interested in some comments:
At least your fit2 works fine when one varies the starting parameters (see code and plots below). I guess that fit3 is then just a "too complicated" model for these data, which basically follow a linear trend. That implies that two parameters are usually sufficient to describe the data reasonably well (see second plot).
So as a general hint: When you obtain
singular gradient matrix at initial parameter estimates
you can
1) vary the starting values/your initial parameter estimates
and/or
2) try to simplify your model by looking for redundant parameters which usually cause troubles.
I also highly recommend always plotting the data first together with your initial guesses (see also this question).
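For point 2), the error in fit2 can actually be traced to redundancy at the chosen start. A small sketch (not part of the original answer) showing that the gradient matrix at k1 = 0, k2 = 0 is rank-deficient, which is exactly what nls complains about:
# sketch: why k1 = 0, k2 = 0 fails for fit2; at those values both terms vanish
# and their gradients, z*x*exp(-k1*x) and x*exp(-k2*x), reduce to 0.98*x and x,
# i.e. exactly proportional columns
x <- c(0, 4, 13, 30, 63, 92)
J <- cbind(0.98 * x, x)   # Jacobian (gradient matrix) at the starting values
qr(J)$rank                # 1 rather than 2, hence the "singular gradient" error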
Here is a plot showing the outcome for your fit, fit2 and a third function defined by me which is given in the code below:
As you can see, there is almost no difference between your fit2 and the function which has a variable z and one additional exponential. Two parameters seem pretty much enough to describe the system reasonably well (even one parameter already does quite well, see the black line in the plot above). If you then want to fit a line through a certain data point, you can also check out this answer.
So what does it look like when one uses a linear function with two free parameters, and a function with a variable z, one exponential term and a variable offset? That is shown in the following plot; again there is not much of a difference:
How do the residuals compare?
> fit
Nonlinear regression model
model: y ~ zfix * ((1 - exp(-k1 * x)))
data: parent.frame()
k1
0.0006775
residual sum-of-squares: 1.464e-05
> fit2
Nonlinear regression model
model: y ~ zfix * (1 - exp(-k1 * x)) + (1 - exp(-k2 * x))
data: parent.frame()
k1 k2
-0.0006767 0.0014014
residual sum-of-squares: 9.881e-06
> fit3
Nonlinear regression model
model: y ~ Z * (1 - exp(-k1 * x))
data: parent.frame()
Z k1
0.196195 0.003806
residual sum-of-squares: 9.59e-06
> fit4
Nonlinear regression model
model: y ~ a * x + b
data: parent.frame()
a b
0.0006176 0.0019234
residual sum-of-squares: 6.084e-06
> fit5
Nonlinear regression model
model: y ~ z * (1 - exp(-k1 * x)) + k2
data: parent.frame()
z k1 k2
0.395106 0.001685 0.001519
residual sum-of-squares: 5.143e-06
As one might guess, the fit with only one free parameter gives the worst result, while the one with three free parameters gives the best; however, there is not much of a difference (in my opinion).
Here is the code I used:
x <- c(0, 4, 13, 30, 63, 92)
y <- c(0.00000000, 0.00508822, 0.01103990, 0.02115466, 0.04036655, 0.05865331)
zfix <- 0.98
plot(x,y)
# STEPS:
# 1 pool, z fixed. This works.
fit <- nls(y ~ zfix * ((1 - exp(-k1*x))), start=list(k1=0))
xr = data.frame(x = seq(min(x),max(x),len=200))
lines(xr$x,predict(fit,newdata=xr))
# 2 pool model, z fixed
fit2 <- nls(y ~ zfix * (1 - exp(-k1*x)) + (1 - exp(-k2*x)), start=list(k1=0, k2=0.5))
lines(xr$x,predict(fit2,newdata=xr), col='red')
# 3 z variable
fit3 <- nls(y ~ Z * (1 - exp(-k1*x)), start=list(Z=zfix, k1=0.2))
lines(xr$x,predict(fit3,newdata=xr), col='blue')
legend('topleft',c('fixed z, single exp', 'fixed z, two exp', 'variable z, single exp'),
lty=c(1,1,1),
lwd=c(2.5,2.5,2.5),
col=c('black', 'red','blue'))
#dev.new()
plot(x,y)
# 4 fit linear function a*x + b
fit4 <- nls(y ~ a *x + b, start=list(a=1, b=0.))
lines(xr$x,predict(fit4,newdata=xr), col='blue')
fit5 <- nls(y ~ z * (1 - exp(-k1*x)) + k2, start=list(z=zfix, k1=0.1, k2=0.5))
lines(xr$x,predict(fit5,newdata=xr), col='red')
legend('topleft',c('linear approach', 'variable z, single exp, offset'),
lty=c(1,1),
lwd=c(2.5,2.5),
col=c('blue', 'red'))

R: Dynamic linear regression with dynlm package, how to predict()?

I am trying to build a dynamic regression model and so far I did it with the dynlm package. Basically the model looks like this
y_t = a*x1_t + b*x2_t + ... + c*y_(t-1).
y_t shall be predicted, x1_t and x2_t will be given and so is y_(t-1).
Building the model with the dynlm package worked fine, but when it came to predict y_t I got confused...
I found this, which seems to be a very similar problem, but it did not help me to handle my own problem.
Here is the problem I am facing (basically, what predict() does seems weird; see the comments in the code):
library(dynlm)
# Create Data
set.seed(1)
y <- arima.sim(model = list(ar = c(.9)), n = 11) #Create AR(1) dependant variable
A <- rnorm(11) #Create independent variables
B <- rnorm(11)
y <- y + .5 * A + .2 * B #Add relationship to independent variables
data = cbind(y, A, B)
# subset used for the fitting of the model
reg <- data[1:10, ]
# Fit dynamic linear model
model <- dynlm(y ~ A + B + L(y, k = 1), data = reg) # dynlm
model
# Time series regression with "zooreg" data:
# Start = 2, End = 11
#
# Call:
# dynlm(formula = y ~ A + B + L(y, k = 1), data = reg)
# Coefficients:
# (Intercept) A B L(y, k = 1)
# 0.8930 -0.2175 0.2892 0.5176
# subset last two rows.
# the last row (r11) for which y_t shall be predicted, where from the same time A and B are input for the prediction
# and the second last row (r10), so y_(t-1) can be input for the model as well
pred <- as.data.frame(data[10:11, ])
# prediction using predict()
predict(model, newdata = pred)
# 1 2
# 1.833134 1.483809
# manual calculation of prediction of y in r11 (how I thought it should be...), taking y_(t-1) as input
predicted_value <- model$coefficients[1] + model$coefficients[2] * pred[2, 2] + model$coefficients[3] * pred[2, 3] + model$coefficients[4] * pred[1, 1]
predicted_value
# (Intercept)
# 1.743334
# and then what gives the value from predict() above taking y_t into the model (which is the value that should be predicted and not y_(t-1))
predicted_value <- model$coefficients[1] + model$coefficients[2] * pred[2, 2] + model$coefficients[3] * pred[2, 3] + model$coefficients[4] * pred[2, 1]
predicted_value
# (Intercept)
# 1.483809
Of course I could just use my own prediction function, but the problem is that my real model will have way more variables (which can even vary as I use the step function to optimize the model according to AIC), and that is why I want to use the predict() function.
Any ideas, how to solve this?
Unfortunately, the dynlm package does not provide a predict() method. At the moment the package completely separates the data pre-processing (which knows about functions like d(), L(), trend(), season() etc.) from the model fitting (which itself is not aware of these functions). A predict() method has been on my wishlist, but so far I have not got round to writing one because the flexibility of the interface allows so many models that it is not quite straightforward what to do. In the meantime, I should probably add a method that throws a warning before the lm method is found by inheritance.
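Until such a method exists, one workaround (a sketch, not from the package author) is to build the prediction by hand from the coefficient vector, which generalizes to any number of regressors selected by step():
# sketch: manual one-step-ahead prediction from the fitted coefficients
# (coefficient order as printed above: (Intercept), A, B, L(y, k = 1))
cf <- coef(model)
xnew <- c(1,                      # intercept
          data[11, "A"],          # A at time t = 11
          data[11, "B"],          # B at time t = 11
          data[10, "y"])          # y at time t - 1 = 10
sum(cf * xnew)                    # predicted y_11, same as the manual value above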

Fitting non-linear Langmuir Isotherm in R

I want to fit isotherm models to the following data in R. The simplest isotherm model is the Langmuir model, given here (the model is at the bottom of the page). My MWE is given below, which throws the error shown. I wonder if there is any R package for isotherm models.
X <- c(10, 30, 50, 70, 100, 125)
Y <- c(155, 250, 270, 330, 320, 323)
Data <- data.frame(X, Y)
LangIMfm2 <- nls(formula = Y ~ Q*b*X/(1+b*X), data = Data, start = list(Q = 1, b = 0.5), algorithm = "port")
Error in nls(formula = Y ~ Q * b * X/(1 + b * X), data = Data, start = list(Q = 1, :
Convergence failure: singular convergence (7)
Edited
Some nonlinear models can be transformed to linear models. My understanding is that there might be a one-to-one relationship between the estimates of a nonlinear model and its linear form, but their corresponding standard errors are not related to each other. Is this assertion true? Are there any pitfalls in fitting nonlinear models by transforming to linearity?
I am not aware of such packages, and personally I don't think you need one, as the problem can be solved using base R.
nls is sensitive to the starting parameters, so you should begin with a good starting guess. You can easily evaluate Q because it corresponds to the asymptotic limit of the isotherm at x-->Inf, so it is reasonable to begin with Q=323 (which is the last value of Y in your sample data set).
Next, you could do plot(Data) and add a line with an isotherm that corresponds to your starting parameters Q and b and tweak b to come up with a reasonable guess.
The plot below shows your data set (points) and a probe isotherm with Q = 323 and b = 0.5, generated by with(Data,lines(X,323*0.5*X/(1+0.5*X),col='red')) (red line). It seemed a reasonable starting guess to me, and I gave it a try with nls:
LangIMfm2 <- nls(formula = Y ~ Q*b*X/(1+b*X), data = Data, start = list(Q = 300, b = 1), algorithm = "port")
# Nonlinear regression model
# model: Y ~ Q * b * X/(1 + b * X)
# data: Data
# Q b
# 366.2778 0.0721
# residual sum-of-squares: 920.6
#
# Algorithm "port", convergence message: relative convergence (4)
and plotted predicted line to make sure that nls found the right solution:
lines(Data$X,predict(LangIMfm2),col='green')
Having said that, I would suggest using a more effective strategy, based on linearizing the model by rewriting the isotherm equation in reciprocal coordinates:
z <- 1/Data
plot(Y~X,z)
abline(lm(Y~X,z))
M <- lm(Y~X,z)
Q <- 1/coef(M)[1]
# 363.2488
b <- coef(M)[1]/coef(M)[2]
# 0.0741759
As you can see, both approaches produce essentially the same result, but the linear model is more robust and doesn't require starting parameters (and, as far as I remember, it is the standard way of doing isotherm analysis in experimental physical chemistry).
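If the nonlinear fit is still wanted (for example to get standard errors on the original scale), the linearized estimates can be fed back into nls as starting values; a minimal sketch (not part of the original answer):
# sketch: start nls from the estimates of the reciprocal (linearized) fit
LangIMfm3 <- nls(Y ~ Q*b*X/(1+b*X), data = Data,
                 start = list(Q = unname(Q), b = unname(b)))
LangIMfm3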
You can use the SSmicmen self-starter function (see Ritz and Streibig, 2008, Nonlinear Regression with R) in the nlme package for R, which calculates initial parameters from the fit of the linearized form of the Michaelis-Menten (MM) equation. Fortunately, the MM equation possesses a form that can be adapted for the Langmuir equation, S = Smax*x/(KL + x). I've found the nlshelper and tidyverse packages useful for modeling and exporting the results of the nls command into tables and plots, particularly when modeling sample groups. Here's my code for modeling a single set of sorption data:
library(tidyverse)
library(nlme)
library(nlshelper)
lang.fit <- nls(Y ~ SSmicmen(X,Smax,InvKL), data=Data)
fit.summary <- tidy(lang.fit)
fit.coefs <- coef(lang.fit)
For simplicity, the Langmuir affinity constant is modeled here as 1/KL. Applying this code, I get the same parameter estimates as @Marat obtained above.
The simple code below allows for wrangling the data in order to create a ggplot object, containing the original points and fitted line (i.e., geom_point would represent the original X and Y data, geom_line would represent the original X plus YHat).
FitY <- tibble(predict(lang.fit))
YHat <- FitY[,1]
Data2 <- cbind(Data, YHat)
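For reference, the plot described above could be produced like this (a sketch, not part of the original answer; it assumes the predicted column of Data2 is renamed to YHat, since cbind carries over the tibble's column name predict(lang.fit)):
# sketch: original points plus the fitted Langmuir curve
library(ggplot2)
names(Data2)[3] <- "YHat"                 # clean name for the predicted column
ggplot(Data2, aes(x = X)) +
    geom_point(aes(y = Y)) +
    geom_line(aes(y = YHat), colour = "red")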
If you want to model multiple groups of data (say, based on a "Sample_name" column, then the lang.fit variable would be calculated as below, this time using the nlsList command:
lang.fit <- nlsList(Y ~ SSmicmen(X,Smax,InvKL) | Sample_name, data=Data)
The problem is the starting values. We show two approaches to this as well as an alternative that converges even using the starting values in the question.
1) plinear The right-hand side is linear in Q*b, so it is better to absorb b into Q; then we have a parameter that enters linearly, which is easier to solve for. Also, with the plinear algorithm no starting value is needed for the linear parameter, so only the starting value for b need be specified. With plinear, the right-hand side of the nls formula should be specified as the vector that multiplies the linear parameter. The result of running nls, giving fm0 below, will be coefficients named b and .lin, where Q = .lin / b.
We already have our answer from fm0, but if we want a clean run in terms of b and Q rather than b and .lin, we can re-run the original formula from the question using the starting values implied by the coefficients returned by fm0, as shown.
fm0 <- nls(Y ~ X/(1+b*X), Data, start = list(b = 0.5), alg = "plinear")
st <- with(as.list(coef(fm0)), list(b = b, Q = .lin/b))
fm <- nls(Y ~ Q*b*X/(1+b*X), Data, start = st)
fm
giving
Nonlinear regression model
model: Y ~ Q * b * X/(1 + b * X)
data: Data
b Q
0.0721 366.2778
residual sum-of-squares: 920.6
Number of iterations to convergence: 0
Achieved convergence tolerance: 9.611e-07
We can display the result. The points are the data and the red line is the fitted curve.
plot(Data)
lines(fitted(fm) ~ X, Data, col = "red")
2) mean Alternatively, using a starting value of mean(Data$Y) for Q seems to work well.
nls(Y ~ Q*b*X/(1+b*X), Data, start = list(b = 0.5, Q = mean(Data$Y)))
giving:
Nonlinear regression model
model: Y ~ Q * b * X/(1 + b * X)
data: Data
b Q
0.0721 366.2779
residual sum-of-squares: 920.6
Number of iterations to convergence: 6
Achieved convergence tolerance: 5.818e-06
The question already had a reasonable starting value for b, which we used, but if one were needed, one could set Y to Q*b so that they cancel and X to mean(Data$X), and solve for b, giving b = 1 - 1/mean(Data$X) as a possible starting value. Using this starting value for b with mean(Data$Y) as the starting value for Q also resulted in convergence (a sketch of that run follows).
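A minimal sketch of that run (not shown in the original answer):
# sketch: derived starting value for b, with mean(Data$Y) as the start for Q
b0 <- 1 - 1/mean(Data$X)
nls(Y ~ Q*b*X/(1+b*X), Data, start = list(b = b0, Q = mean(Data$Y)))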
3) optim If we use optim the algorithm converges even with the initial values used in the question. We form the residual sum of squares and minimize that:
rss <- function(p) {
    Q <- p[1]
    b <- p[2]
    with(Data, sum((Y - b*Q*X/(1+b*X))^2))
}
optim(c(1, 0.5), rss)
giving:
$par
[1] 366.27028219 0.07213613
$value
[1] 920.62
$counts
function gradient
249 NA
$convergence
[1] 0
$message
NULL

Regression line and fitted curve for scatter plots in r

I have a set of data of HEIGHT and DIAMETER of trees. I want to find a regression relationship between them and plot it. For example, I want to try a * DIAMETER + b * DIAMETER^2 + c and show its curve on a scatterplot.
With the instructions below I get several lines, but I just want a single trend line for the fitted model. What should I do?
setwd('D:\\PhD\\Data\\Field Measurments\\Data Analysis\\')
dat1 = read.table('Fagus.csv', header = TRUE, sep =',')
# fit a non-linear regression
Height = dat1$Height
Diameter = dat1$Diameter
plot(Diameter, Height, main="Height Curve", xlab="Diameter", ylab="Height", pch=19)
nls1 <- nls(Height ~ a*(Diameter)^2+b*Diameter+c, data = dat1, start = list(a =a, b=b,c=c), algorithm="port")
lines(fitted(nls1) ~ Diameter, lty = 1, col = "red") # solid red line
Is the above approach wrong for my purpose?
As stated above, you should not put the coefficients into your formulas. Try:
nls1 <- nls(Height ~ I(Diameter^2) + Diameter, data = dat1, algorithm="port")
Regarding the I(Diameter ^2):
"To avoid this confusion, the function I() can be used to bracket those portions of a model formula where the operators are used in their arithmetic sense. For example, in the formula y ~ a + I(b+c), the term b+c is to be interpreted as the sum of b and c." ~ formula{stats} documentation
I did not run the rest (on mobile), but your code looks OK at first glance.
There seems to be a misunderstanding here about linear vs. non-linear models. A linear model is linear in the coefficients. A non-linear model is not. Whether the model is linear in the predictor variables (Diameter in your case) is irrelevant. So in your case a model of the form:
Height = a * Diameter + b * Diameter^2 + c
is a linear model. You don't need to use nls(...). You can specify the model formula in either of two ways, both of which lead to identical results:
Height~Diameter + I(Diameter^2)
or
Height~poly(Diameter,2,raw=TRUE)
The second form uses the poly(...) function to create a polynomial of order 2. raw=T tells poly(...) to generate raw polynomials, rather than orthogonal polynomials (the default). The first form is a bit simpler unless you want polynomials of order greater than 2. Here's an example using both forms.
set.seed(1) # for reproducible example
df <- data.frame(Diameter=sample(1:50,50))
df$Height <- with(df,2*Diameter + .5*Diameter^2 + 4 + rnorm(50,sd=30))
fit <- lm(Height~Diameter + I(Diameter^2),df)
summary(fit)
# ...
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) -6.85088 12.26720 -0.558 0.57917
# Diameter 3.31030 1.10964 2.983 0.00451 **
# I(Diameter^2) 0.47717 0.02109 22.622 < 2e-16 ***
fit.poly<- lm(Height~poly(Diameter,2,raw=TRUE),df)
summary(fit.poly)
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) -6.85088 12.26720 -0.558 0.57917
# poly(Diameter, 2, raw = TRUE)1 3.31030 1.10964 2.983 0.00451 **
# poly(Diameter, 2, raw = TRUE)2 0.47717 0.02109 22.622 < 2e-16 ***
To plot the data and the trend curve:
df$pred <- predict(fit)
with(df,plot(Height~Diameter))
with(df[order(df$Diameter),],lines(pred~Diameter,col="red",lty=2))
Your problem is your start= parameter. You need to supply actual values for the a, b, and c parameters. Here's a reproducible example
#sample data
dat<-data.frame(Diameter = runif(50, 2, 6))
dat<-transform(dat,Height=2*Diameter + .75 * Diameter^2 +4 + rnorm(50))
dat<-dat[order(dat$Diameter), ]
#now fit the model
mynls<-nls(Height ~ a*I(Diameter^2) + b*Diameter + c, dat,
start=list(a=1, b=1, c=1), algorithm="port")
Notice how we set default values of 1 for each of the coefficients; you can set whatever you think would be most appropriate. Then we can plot the raw values along with the fitted results:
plot(Height~Diameter,dat, main="Height Curve",
xlab="Diameter", ylab="Height", pch=19)
lines(fitted(mynls)~ dat$Diameter, col="red")
This gives the scatterplot of Height against Diameter with the fitted curve overlaid in red.
