Linear regression with specified slope - r

I want to fit a linear regression line with a specified slope to a data set. I read this thread about doing the same with an explicit intercept.
0+ suppresses the fitting of the intercept; what is the corresponding trick for the slope?
For example, to fit a line with slope 1.5, I tried the following
set.seed(6)
x <- runif(100, -3, 3)
y <- 2 + x + rnorm(100)
model1 <- lm(y ~ x)
plot(x, y)
abline(model1, col = "red")
abline(coef(model1), 1.5, col = "dark green")
but the second abline() call just combines the intercept from model1 with a slope of 1.5. What I want instead is the regression line with slope 1.5 that best fits the data points, with the intercept then computed from that fit.

To find the value of the intercept you don't actually need a regression. Since Y = a + b * X + ε, we have E[Y - b * X] = a + E[ε] = a, because a is a constant and E[ε] = 0 by assumption (E[·] is the expectation operator). Therefore, a = E[Y - b * X].
Translated into R, this means the intercept a is:
b1 <- 1.5
a <- mean(y - b1 * x)
This is inspired by the comments to this question.
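To draw the resulting line on the scatter plot from the question (a small usage sketch reusing a and b1 from above):
plot(x, y)
abline(a, b1, col = "blue")  # best-fitting line with slope fixed at 1.5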

I suppose one approach would be to subtract 1.5*x from y and then fit y using only an intercept term:
mod2 <- lm(I(y-1.5*x)~1)
plot(x, y)
abline(mod2$coefficients, 1.5)
This represents the best linear fit with fixed slope 1.5. Of course, this fit is not very visually appealing because the simulated slope is 1 while the fixed slope is 1.5.
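A sketch of an equivalent approach keeps y on the left-hand side and moves the fixed slope into an offset() term; it should give the same intercept as mod2:
mod3 <- lm(y ~ 1 + offset(1.5 * x))   # intercept-only model, slope fixed via the offset
coef(mod3)                            # should match the intercept of mod2
abline(coef(mod3), 1.5, col = "blue")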

Related

Syntax for three-piece segmented regression using NLS in R when concave

My goal is to fit a three-piece (i.e., two-break-point) regression model and make predictions with propagate's predictNLS function, making sure to define the knots as parameters, but my model formula seems off.
I've used the segmented package to estimate the breakpoint locations (used as starting values in NLS), but I would like to keep my models in the NLS format, specifically nlsLM from minpack.lm, because I am fitting other types of curves to my data using NLS, want to let NLS optimize the knot values, sometimes use variable weights, and need to be able to easily calculate Monte Carlo confidence intervals with propagate. Though I'm very close to having the right syntax for the formula, I'm not getting the expected/required behaviour near the breakpoint(s). The segments SHOULD meet directly at the breakpoints (without any jumps), but at least on this data I'm getting a weird local minimum at the breakpoint (see plots below).
Below is an example of my data and general process. I believe my issue to be in the NLS formula.
library(minpack.lm)
library(segmented)
y <- c(-3.99448113, -3.82447011, -3.65447803, -3.48447030, -3.31447855, -3.14448753, -2.97447972, -2.80448401, -2.63448380, -2.46448069, -2.29448796, -2.12448912, -1.95448783, -1.78448797, -1.61448563, -1.44448719, -1.27448469, -1.10448651, -0.93448525, -0.76448637, -0.59448626, -0.42448586, -0.25448588, -0.08448548, 0.08551417, 0.25551393, 0.42551411, 0.59551395, 0.76551389, 0.93551398)
x <- c(61586.1711, 60330.5550, 54219.9925, 50927.5381, 48402.8700, 45661.9175, 37375.6023, 33249.1248, 30808.6131, 28378.6508, 22533.3782, 13901.0882, 11716.5669, 11004.7305, 10340.3429, 9587.7994, 8736.3200, 8372.1482, 8074.3709, 7788.1847, 7499.6721, 7204.3168, 6870.8192, 6413.0828, 5523.8097, 3961.6114, 3460.0913, 2907.8614, 2016.1158, 452.8841)
df<- data.frame(x,y)
#Use Segmented to get estimates for parameters with 2 breakpoints
my.seg2 <- segmented(lm(y ~ x, data = df), seg.Z = ~ x, npsi = 2)
#extract knot, intercept, and coefficient values to use as NLS start points
my.knot1 <- my.seg2$psi[1,2]
my.knot2 <- my.seg2$psi[2,2]
my.m_2 <- slope(my.seg2)$x[1,1]
my.b1 <- my.seg2$coefficients[[1]]
my.b2 <- my.seg2$coefficients[[2]]
my.b3 <- my.seg2$coefficients[[3]]
#Fit a NLS model to ~replicate segmented model. Presumably my model formula is where the problem lies
my.model <- nlsLM(y ~ m*x + b +
                    b2*(ifelse(x >= knot1 & x <= knot2, 1, 0)*(x - knot1)) +
                    b3*ifelse(x > knot2, 1, 0)*(x - knot2 - knot1),
                  data = df,
                  start = c(m = my.m_2, b = my.b1, b2 = my.b2, b3 = my.b3,
                            knot1 = my.knot1, knot2 = my.knot2))
How it should look
plot(my.seg2)
How it does look
plot(x, y)
lines(x=x, y=predict(my.model), col='black', lty = 1, lwd = 1)
I was pretty sure I had it "right", but when the 95% confidence intervals are plotted with the line and prediction resolution (e.g., the density of x points) is increased, things seem dramatically incorrect.
Thank you all for your help.
Define g to be a grouping vector, of the same length as x, which takes the values 1, 2, 3 for the three sections of the x axis, and fit an nls model using it. The resulting plot looks OK.
my.knots <- c(my.knot1, my.knot2)
g <- cut(x, c(-Inf, my.knots, Inf), label = FALSE)
fm <- nls(y ~ a[g] + b[g] * x, df, start = list(a = c(1, 1, 1), b = c(1, 1, 1)))
plot(y ~ x, df)
lines(fitted(fm) ~ x, df, col = "red")
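Before imposing anything, one can quantify the jumps at the knots from this unconstrained fit (a small sketch reusing fm and my.knots; the differences will generally be non-zero):
co1 <- coef(fm)                        # a1, a2, a3, b1, b2, b3
a_hat <- co1[1:3]; b_hat <- co1[4:6]
(a_hat[1] + b_hat[1] * my.knots[1]) - (a_hat[2] + b_hat[2] * my.knots[1])  # jump at knot 1
(a_hat[2] + b_hat[2] * my.knots[2]) - (a_hat[3] + b_hat[3] * my.knots[2])  # jump at knot 2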
Constraints
Although the above looks ok and may be sufficient it does not guarantee that the segments intersect at the knots. To do that we must impose the constraints that both sides are equal at the knots:
a[2] + b[2] * my.knots[1] = a[1] + b[1] * my.knots[1]
a[3] + b[3] * my.knots[2] = a[2] + b[2] * my.knots[2]
so
a[2] = a[1] + (b[1] - b[2]) * my.knots[1]
a[3] = a[2] + (b[2] - b[3]) * my.knots[2]
= a[1] + (b[1] - b[2]) * my.knots[1] + (b[2] - b[3]) * my.knots[2]
giving:
# returns a vector of the three a values
avals <- function(a1, b) unname(cumsum(c(a1, -diff(b) * my.knots)))
fm2 <- nls(y ~ avals(a1, b)[g] + b[g] * x, df, start = list(a1 = 1, b = c(1, 1, 1)))
To get the three a values we can use:
co <- coef(fm2)
avals(co[1], co[-1])
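As a quick sanity check (a small sketch reusing these objects), the values computed from either side of each knot now agree by construction:
aa <- avals(co[1], co[-1]); bb <- co[-1]
c(aa[1] + bb[1] * my.knots[1], aa[2] + bb[2] * my.knots[1])  # both sides of knot 1
c(aa[2] + bb[2] * my.knots[2], aa[3] + bb[3] * my.knots[2])  # both sides of knot 2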
To get the residual sum of squares:
deviance(fm2)
## [1] 0.193077
Polynomial
Although it involves a large number of parameters, a polynomial fit could be used in place of the segmented linear regression. A 12th degree polynomial involves 13 parameters but has a lower residual sum of squares than the segmented linear regression. A lower degree could be used with corresponding increase in residual sum of squares. A 7th degree polynomial involves 8 parameters and visually looks not too bad although it has a higher residual sum of squares.
fm12 <- nls(y ~ cbind(1, poly(x, 12)) %*% b, df, start = list(b = rep(1, 13)))
deviance(fm12)
## [1] 0.1899218
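For reference, the 7th-degree fit mentioned above can be obtained the same way (a sketch; per the discussion above, its residual sum of squares will be somewhat higher than the 12th-degree fit's):
fm7 <- nls(y ~ cbind(1, poly(x, 7)) %*% b, df, start = list(b = rep(1, 8)))
deviance(fm7)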
The odd behaviour near the breakpoint may, in part, reflect a limitation of segmented: it returns a single change-point estimate without quantifying the associated uncertainty. Redoing the analysis with mcp, which returns Bayesian posteriors, we see that the second change point is bimodally distributed:
library(mcp)
model = list(
  y ~ 1 + x,  # Intercept + slope in first segment
  ~ 0 + x,    # Only slope changes in the next segments
  ~ 0 + x
)
# Fit it with a large number of samples and plot the change point posteriors
fit = mcp(model, data = data.frame(x, y), iter = 50000, adapt = 10000)
plot_pars(fit, regex_pars = "^cp*", type = "dens_overlay")
FYI, mcp can plot credible intervals as well (the red dashed lines):
plot(fit, q_fit = TRUE)

`gam` package: extra shift spotted when sketching data on `plot.gam`

I am trying to fit a GAM using the gam package (I know mgcv is more flexible, but I need to use gam here). The problem is that the model looks good, but compared with the original data it seems to be offset along the y-axis by a constant value, and I cannot figure out where this offset comes from.
This code reproduces the problem:
library(gam)
data(gam.data)
x <- gam.data$x
y <- gam.data$y
fit <- gam(y ~ s(x,6))
fit$coefficients
#(Intercept) s(x, 6)
# 1.921819 -2.318771
plot(fit, ylim = range(y))
points(x, y)
points(x, y -1.921819, col=2)
legend("topright", pch=1, col=1:2, legend=c("Original", "Minus intercept"))
Chambers, J. M. and Hastie, T. J. (1993) Statistical Models in S (Chapman & Hall) indicates that there should not be an offset, and this is also intuitively correct (the smooth should describe the data).
I noticed something comparable in mgcv, which can be solved by passing the model's intercept to the shift parameter (because the smooth is apparently centred). I thought the same might be true here, so I subtracted the intercept from the original data points. However, the plot above shows that this idea is wrong. I don't know where the extra shift comes from, and I hope someone here can help me.
(R version. 3.3.1; gam version 1.12)
I think I should first explain the various outputs in the fitted GAM model:
library(gam)
data(gam.data)
x <- gam.data$x
y <- gam.data$y
fit <-gam(y ~ s(x,6), model = FALSE)
## coefficients for parametric part
## this includes intercept and null space of spline
beta <- coef(fit)
## null space of spline smooth (a linear term, just `x`)
nullspace <- fit$smooth.frame[,1]
nullspace - x ## all 0
## smooth space that are penalized
## note, the backfitting procedure guarantees that this is centred
pensmooth <- fit$smooth[,1]
sum(pensmooth) ## centred
# [1] 5.89806e-17
## estimated smooth function (null space + penalized space)
smooth <- nullspace * beta[2] + pensmooth
## centred smooth function (this is what `plot.gam` is going to plot)
c0 <- mean(smooth)
censmooth <- smooth - c0
## additive predictors (this is just fitted values in Gaussian case)
addpred <- beta[1] + smooth
You can first verify that addpred is what fit$additive.predictors gives; since we are fitting an additive model with a Gaussian response, it is also the same as fit$fitted.values.
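For example, a quick check along these lines (a sketch; both comparisons should come out TRUE):
all.equal(unname(addpred), unname(fit$additive.predictors))
all.equal(unname(addpred), unname(fit$fitted.values))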
What plot.gam does, is to plot censmooth:
plot.gam(fit, col = 4, ylim = c(-1.5,1.5))
points(x, censmooth, col = "gray")
Remember that
addpred = beta[1] + censmooth + c0
so if you want to shift the original data y to match this plot, you need to subtract not only the intercept (beta[1]) but also c0 from y:
points(x, y - beta[1] - c0)

Exponential curve fitting in R

time = 1:100
head(y)
0.07841589 0.07686316 0.07534116 0.07384931 0.07238699 0.07095363
plot(time,y)
This is an exponential curve.
How can I fit a curve to these points without knowing the formula? I can't use nls() since the formula is unknown (only the data points are given).
How can I get the equation for this curve and determine the constants in the equation?
I tried loess, but it doesn't give me an equation with coefficients.
You need a model to fit to the data.
Without knowing the full details of your model, let's say it is an exponential growth model, which one could write as: y = a * e^(r*t)
where y is your measured variable, t is the time at which it was measured, a is the value of y when t = 0, and r is the growth constant. We want to estimate a and r.
This is a non-linear problem, because we want to estimate the exponent r. However, in this case we can use some algebra and transform it into a linear equation by taking the log of both sides (remember the logarithm rules), resulting in:
log(y) = log(a) + r * t
We can visualise this with an example, by generating a curve from our model, assuming some values for a and r:
t <- 1:100 # these are your time points
a <- 10 # assume the size at t = 0 is 10
r <- 0.1 # assume a growth constant
y <- a*exp(r*t) # generate some y observations from our exponential model
# visualise
par(mfrow = c(1, 2))
plot(t, y) # on the original scale
plot(t, log(y)) # taking the log(y)
So, for this case, we could explore two possibilities:
Fit our non-linear model to the original data (for example using nls() function)
Fit our "linearised" model to the log-transformed data (for example using the lm() function)
Which option to choose (and there are more options) depends on what we think (or assume) the data-generating process behind our data is.
Let's illustrate with some simulations that include added noise (sampled from
a normal distribution), to mimic real data. Please look at this
StackExchange post
for the reasoning behind this simulation (pointed out by Alejo Bernardin's comment).
set.seed(12) # for reproducible results
# errors constant across time - additive
y_add <- a*exp(r*t) + rnorm(length(t), sd = 5000) # or: rnorm(length(t), mean = a*exp(r*t), sd = 5000)
# errors grow as y grows - multiplicative (constant on the log-scale)
y_mult <- a*exp(r*t + rnorm(length(t), sd = 1)) # or: rlnorm(length(t), mean = log(a) + r*t, sd = 1)
# visualise
par(mfrow = c(1, 2))
plot(t, y_add, main = "additive error")
lines(t, a*exp(t*r), col = "red")
plot(t, y_mult, main = "multiplicative error")
lines(t, a*exp(t*r), col = "red")
For the additive model, we could use nls(), because the error is constant across
t. When using nls() we need to specify some starting values for the optimization algorithm (try to "guesstimate" what these are, because nls() often struggles to converge on a solution).
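One way to "guesstimate" (an optional sketch; lm_start and start_vals are just illustrative names) is to reuse the log-linear idea from above to generate starting values:
lm_start <- lm(log(y_add) ~ t, subset = y_add > 0)  # crude log-linear fit on the positive values
start_vals <- list(a = exp(coef(lm_start)[[1]]), r = coef(lm_start)[[2]])
start_vals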
add_nls <- nls(y_add ~ a*exp(r*t),
               start = list(a = 0.5, r = 0.2))
coef(add_nls)
# a r
# 11.30876845 0.09867135
Using the coef() function we can get the estimates for the two parameters.
This gives us OK estimates, close to what we simulated (a = 10 and r = 0.1).
You could see that the error variance is reasonably constant across the range of the data, by plotting the residuals of the model:
plot(t, resid(add_nls))
abline(h = 0, lty = 2)
For the multiplicative error case (our y_mult simulated values), we should use lm() on log-transformed data, because
the error is constant on that scale instead.
mult_lm <- lm(log(y_mult) ~ t)
coef(mult_lm)
# (Intercept) t
# 2.39448488 0.09837215
To interpret this output, remember again that our linearised model is log(y) = log(a) + r*t, which is equivalent to a linear model of the form Y = β0 + β1 * X, where β0 is our intercept and β1 our slope.
Therefore, in this output (Intercept) is equivalent to log(a) of our model and t is the coefficient for the time variable, so equivalent to our r.
To meaningfully interpret the (Intercept) we can take its exponential (exp(2.39448488)), giving us ~10.96, which is quite close to our simulated value.
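In code, that back-transformation is simply:
exp(coef(mult_lm)[[1]])  # estimate of a on the original scale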
It's worth noting what would happen if we fit the data where the error is multiplicative using the nls() function instead:
mult_nls <- nls(y_mult ~ a*exp(r*t), start = list(a = 0.5, r = 0.2))
coef(mult_nls)
# a r
# 281.06913343 0.06955642
Now we over-estimate a and under-estimate r
(Mario Reutter
highlighted this in his comment). We can visualise the consequence of using the wrong approach to fit our model:
# get the model's coefficients
lm_coef <- coef(mult_lm)
nls_coef <- coef(mult_nls)
# make the plot
plot(t, y_mult)
lines(t, a*exp(r*t), col = "brown", lwd = 5)
lines(t, exp(lm_coef[1])*exp(lm_coef[2]*t), col = "dodgerblue", lwd = 2)
lines(t, nls_coef[1]*exp(nls_coef[2]*t), col = "orange2", lwd = 2)
legend("topleft", col = c("brown", "dodgerblue", "orange2"),
legend = c("known model", "nls fit", "lm fit"), lwd = 3)
We can see how the lm() fit to log-transformed data was substantially better than the nls() fit on the original data.
You can again plot the residuals of this model, to see that the variance is not constant across the range of the data (we can also see this in the graphs above, where the spread of the data increases for higher values of t):
plot(t, resid(mult_nls))
abline(h = 0, lty = 2)
Unfortunately, taking the logarithm and fitting a linear model is not optimal. The reason is that the errors for large y-values carry much more weight than those for small y-values when the exponential function is applied to go back to the original model.
Here is one example:
f <- function(x){exp(0.3*x+5)}
squaredError <- function(a,b,x,y) {sum((exp(a*x+b)-f(x))^2)}
x <- 0:12
y <- f(x) * ( 1 + sample(-300:300,length(x),replace=TRUE)/10000 )
x
y
#--------------------------------------------------------------------
M <- lm(log(y)~x)
a <- unlist(M[1])[2]
b <- unlist(M[1])[1]
print(c(a,b))
squaredError(a,b,x,y)
approxPartAbl_a <- (squaredError(a+1e-8,b,x,y) - squaredError(a,b,x,y))/1e-8
for ( i in 0:10 )
{
eps <- -i*sign(approxPartAbl_a)*1e-5
print(c(eps,squaredError(a+eps,b,x,y)))
}
Result:
> f <- function(x){exp(0.3*x+5)}
> squaredError <- function(a,b,x,y) {sum((exp(a*x+b)-f(x))^2)}
> x <- 0:12
> y <- f(x) * ( 1 + sample(-300:300,length(x),replace=TRUE)/10000 )
> x
[1] 0 1 2 3 4 5 6 7 8 9 10 11 12
> y
[1] 151.2182 203.4020 278.3769 366.8992 503.5895 682.4353 880.1597 1186.5158 1630.9129 2238.1607 3035.8076 4094.6925 5559.3036
> #--------------------------------------------------------------------
>
> M <- lm(log(y)~x)
> a <- unlist(M[1])[2]
> b <- unlist(M[1])[1]
> print(c(a,b))
coefficients.x coefficients.(Intercept)
0.2995808 5.0135529
> squaredError(a,b,x,y)
[1] 5409.752
> approxPartAbl_a <- (squaredError(a+1e-8,b,x,y) - squaredError(a,b,x,y))/1e-8
> for ( i in 0:10 )
+ {
+ eps <- -i*sign(approxPartAbl_a)*1e-5
+ print(c(eps,squaredError(a+eps,b,x,y)))
+ }
[1] 0.000 5409.752
[1] -0.00001 5282.91927
[1] -0.00002 5157.68422
[1] -0.00003 5034.04589
[1] -0.00004 4912.00375
[1] -0.00005 4791.55728
[1] -0.00006 4672.70592
[1] -0.00007 4555.44917
[1] -0.00008 4439.78647
[1] -0.00009 4325.71730
[1] -0.0001 4213.2411
>
Perhaps one can try a numerical method, e.g. a gradient search, to find the minimum of the squared error function.
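For instance, a minimal sketch along those lines, reusing squaredError, a, b, x and y from above and letting optim() refine the log-linear estimates ("refined" is just an illustrative name):
refined <- optim(c(unname(a), unname(b)),
                 function(p) squaredError(p[1], p[2], x, y))
refined$par    # refined a and b
refined$value  # no larger than squaredError(a, b, x, y)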
If it really is exponential, you can try taking the logarithm of your variable and fitting a linear model to that.

Fit 'nls': singular gradient matrix at initial parameter estimates

I'm new using 'nls' and I'm encountering problems finding the starting parameters. I've read several posts and tried various parameters and formula constructions but I keep getting errors.
This is a small example of what I'm doing and I'd very much appreciate if anyone could give me some tips!
# Data to which I want to fit a non-linear function
x <- c(0, 4, 13, 30, 63, 92)
y <- c(0.00000000, 0.00508822, 0.01103990, 0.02115466, 0.04036655, 0.05865331)
z <- 0.98
# STEPS:
# 1 pool, z fixed. This works.
fit <- nls(y ~ z * ((1 - exp(-k1*x))),
           start = list(k1 = 0))
# 2 pool model, z fixed
fit2 <- nls(y ~ z * (1 - exp(-k1*x)) + (1 - exp(-k2*x)),
            start = list(k1 = 0, k2 = 0)) # Error: singular gradient matrix at initial parameter estimates
# My goal: 2 pool model, z free
fit3 <- nls(y ~ z * (1 - exp(-k1*x)) + (1 - exp(-k2*x)),
            start = list(z = 0.5, k1 = 0, k2 = 0))
It has been a while since you asked the question, but maybe you are still interested in some comments:
At least your fit2 works fine when one varies the starting parameters (see code and plots below). I guess that fit3 is then just a "too complicated" model given these data, which basically follow a linear trend. That implies that two parameters are usually sufficient to describe the data reasonably well (see the second plot).
So as a general hint: When you obtain
singular gradient matrix at initial parameter estimates
you can
1) vary the starting values/your initial parameter estimates
and/or
2) try to simplify your model by looking for redundant parameters, which usually cause trouble.
I also highly recommend always plotting the data first, together with your initial guesses (check also this question).
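For example, a quick sketch of that idea for the one-pool model (k1_guess is just an illustrative value, not a recommendation):
plot(x, y)
k1_guess <- 0.01
curve(0.98 * (1 - exp(-k1_guess * x)),   # curve implied by this guess, z fixed at 0.98
      add = TRUE, col = "grey", lty = 2)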
Here is a plot showing the outcome for your fit, fit2 and a third function defined by me which is given in the code below:
As you can see, there is almost no difference between your fit2 and the function with a variable z and a single exponential term. Two parameters seem pretty much enough to describe the system reasonably well (even one is already represented quite well by the black line in the plot above). If you then want to fit a line through a certain data point, you can also check out this answer.
So how does it look when one uses a linear function with two free parameters, and a function with a variable z, one exponential term and a variable offset? That is shown in the following plot; again, there is not much of a difference:
How do the residuals compare?
> fit
Nonlinear regression model
model: y ~ zfix * ((1 - exp(-k1 * x)))
data: parent.frame()
k1
0.0006775
residual sum-of-squares: 1.464e-05
> fit2
Nonlinear regression model
model: y ~ zfix * (1 - exp(-k1 * x)) + (1 - exp(-k2 * x))
data: parent.frame()
k1 k2
-0.0006767 0.0014014
residual sum-of-squares: 9.881e-06
> fit3
Nonlinear regression model
model: y ~ Z * (1 - exp(-k1 * x))
data: parent.frame()
Z k1
0.196195 0.003806
residual sum-of-squares: 9.59e-06
> fit4
Nonlinear regression model
model: y ~ a * x + b
data: parent.frame()
a b
0.0006176 0.0019234
residual sum-of-squares: 6.084e-06
> fit5
Nonlinear regression model
model: y ~ z * (1 - exp(-k1 * x)) + k2
data: parent.frame()
z k1 k2
0.395106 0.001685 0.001519
residual sum-of-squares: 5.143e-06
As one could guess, the fit with only one free parameter gives the worst result while the one with three free parameters gives the best; however, there is not much of a difference (in my opinion).
Here is the code I used:
x <- c(0, 4, 13, 30, 63, 92)
y <- c(0.00000000, 0.00508822, 0.01103990, 0.02115466, 0.04036655, 0.05865331)
zfix <- 0.98
plot(x,y)
# STEPS:
# 1 pool, z fixed. This works.
fit <- nls(y ~ zfix * ((1 - exp(-k1*x))), start=list(k1=0))
xr = data.frame(x = seq(min(x),max(x),len=200))
lines(xr$x,predict(fit,newdata=xr))
# 2 pool model, z fixed
fit2 <- nls(y ~ zfix * (1 - exp(-k1*x)) + (1 - exp(-k2*x)), start=list(k1=0, k2=0.5))
lines(xr$x,predict(fit2,newdata=xr), col='red')
# 3 z variable
fit3 <- nls(y ~ Z * (1 - exp(-k1*x)), start=list(Z=zfix, k1=0.2))
lines(xr$x,predict(fit3,newdata=xr), col='blue')
legend('topleft', c('fixed z, single exp', 'fixed z, two exp', 'variable z, single exp'),
       lty = c(1, 1, 1), lwd = c(2.5, 2.5, 2.5),
       col = c('black', 'red', 'blue'))
#dev.new()
plot(x,y)
# 4 fit linear function a*x + b
fit4 <- nls(y ~ a *x + b, start=list(a=1, b=0.))
lines(xr$x,predict(fit4,newdata=xr), col='blue')
fit5 <- nls(y ~ z * (1 - exp(-k1*x)) + k2, start=list(z=zfix, k1=0.1, k2=0.5))
lines(xr$x,predict(fit5,newdata=xr), col='red')
legend('topleft', c('linear approach', 'variable z, single exp, offset'),
       lty = c(1, 1), lwd = c(2.5, 2.5),
       col = c('blue', 'red'))

Regression line and fitted curve for scatter plots in r

I have a data set of HEIGHT and DIAMETER of trees. I want to find a regression relationship between them and plot it. For example, I want to try a * DIAMETER + b * DIAMETER^2 + c and show its curve in a scatterplot.
With the instructions below I get several lines, but I want just a single trend line for the fitted model. What should I do?
setwd('D:\\PhD\\Data\\Field Measurments\\Data Analysis\\')
dat1 = read.table('Fagus.csv', header = TRUE, sep =',')
# fit a non-linear regression
Height = dat1$Height
Diameter = dat1$Diameter
plot(Diameter, Height, main="Height Curve", xlab="Diameter", ylab="Height", pch=19)
nls1 <- nls(Height ~ a*(Diameter)^2 + b*Diameter + c, data = dat1,
            start = list(a = a, b = b, c = c), algorithm = "port")
lines(fitted(nls1) ~ Diameter, lty = 1, col = "red") # solid red line
Is above instruction wrong for my purpose?
As stated above, you should not put the coefficients into your formulas. Try:
nls1 <- nls(Height ~ I(Diameter^2) + Diameter, data = dat1, algorithm="port")
Regarding the I(Diameter ^2):
"To avoid this confusion, the function I() can be used to bracket those portions of a model formula where the operators are used in their arithmetic sense. For example, in the formula y ~ a + I(b+c), the term b+c is to be interpreted as the sum of b and c." ~ formula{stats} documentation
I did not run the rest (on mobile), but your code looks OK at first glance.
There seems to be a misunderstanding here about linear vs. non-linear models. A linear model is linear in the coefficients. A non-linear model is not. Whether the model is linear in the predictor variables (Diameter in your case) is irrelevant. So in your case a model of the form:
Height = a * Diameter + b * Diameter^2 + c
is a linear model. You don't need to use nls(...). You can specify the model formula in either of two ways, both of which lead to identical results:
Height~Diameter + I(Diameter^2)
or
Height~poly(Diameter,2,raw=TRUE)
The second form uses the poly(...) function to create a polynomial of order 2. raw=T tells poly(...) to generate raw polynomials, rather than orthogonal polynomials (the default). The first form is a bit simpler unless you want polynomials of order greater than 2. Here's an example using both forms.
set.seed(1) # for reproducible example
df <- data.frame(Diameter=sample(1:50,50))
df$Height <- with(df,2*Diameter + .5*Diameter^2 + 4 + rnorm(50,sd=30))
fit <- lm(Height~Diameter + I(Diameter^2),df)
summary(fit)
# ...
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) -6.85088 12.26720 -0.558 0.57917
# Diameter 3.31030 1.10964 2.983 0.00451 **
# I(Diameter^2) 0.47717 0.02109 22.622 < 2e-16 ***
fit.poly<- lm(Height~poly(Diameter,2,raw=TRUE),df)
summary(fit.poly)
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) -6.85088 12.26720 -0.558 0.57917
# poly(Diameter, 2, raw = TRUE)1 3.31030 1.10964 2.983 0.00451 **
# poly(Diameter, 2, raw = TRUE)2 0.47717 0.02109 22.622 < 2e-16 ***
To plot the data and the trend curve:
df$pred <- predict(fit)
with(df,plot(Height~Diameter))
with(df[order(df$Diameter),],lines(pred~Diameter,col="red",lty=2))
Your problem is your start= parameter. You need to supply actual values for the a, b, and c parameters. Here's a reproducible example
#sample data
dat<-data.frame(Diameter = runif(50, 2, 6))
dat<-transform(dat,Height=2*Diameter + .75 * Diameter^2 +4 + rnorm(50))
dat<-dat[order(dat$Diameter), ]
#now fit the model
mynls <- nls(Height ~ a*I(Diameter^2) + b*Diameter + c, dat,
             start = list(a = 1, b = 1, c = 1), algorithm = "port")
Notice how we set starting values of 1 for each of the coefficients; you can set whatever you think is most appropriate. We can then plot the raw values together with the fitted results:
plot(Height~Diameter,dat, main="Height Curve",
xlab="Diameter", ylab="Height", pch=19)
lines(fitted(mynls)~ dat$Diameter, col="red")
This gives the scatter plot with the fitted curve overlaid in red.

Resources