Regression line and fitted curve for scatter plots in R

I have a set of data of HEIGHT and DIAMETER of trees. I want to find a regression relationship between them and plot it. For example, I want to try a * DIAMETER + b * DIAMETER^2 + c and show its curve in a scatter plot.
With the instructions below I get several lines, but I want just a single trend line for the fitted model. What should I do?
setwd('D:\\PhD\\Data\\Field Measurments\\Data Analysis\\')
dat1 = read.table('Fagus.csv', header = TRUE, sep =',')
# fit a non-linear regression
Height = dat1$Height
Diameter = dat1$Diameter
plot(Diameter, Height, main="Height Curve", xlab="Diameter", ylab="Height", pch=19)
nls1 <- nls(Height ~ a*(Diameter)^2+b*Diameter+c, data = dat1, start = list(a =a, b=b,c=c), algorithm="port")
lines(fitted(nls1) ~ Diameter, lty = 1, col = "red") # solid red line
Is the above code wrong for my purpose?

As stated above, you should not put the coefficients into your formulas. Try:
nls1 <- nls(Height ~ I(Diameter^2) + Diameter, data = dat1, algorithm="port")
Regarding the I(Diameter^2):
"To avoid this confusion, the function I() can be used to bracket those portions of a model formula where the operators are used in their arithmetic sense. For example, in the formula y ~ a + I(b+c), the term b+c is to be interpreted as the sum of b and c." ~ formula{stats} documentation
I did not run the rest (on mobile), but your code looks OK at first glance.
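For illustration, here is a minimal sketch of the difference I() makes (the data frame is made up):
df <- data.frame(y = rnorm(10), b = rnorm(10), c = rnorm(10))
coef(lm(y ~ b + c, df))    # two predictors, b and c, each with its own coefficient
coef(lm(y ~ I(b + c), df)) # one predictor equal to the sum b + c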

There seems to be a misunderstanding here about linear vs. non-linear models. A linear model is linear in the coefficients. A non-linear model is not. Whether the model is linear in the predictor variables (Diameter in your case) is irrelevant. So in your case a model of the form:
Height = a * Diameter + b * Diameter^2 + c
is a linear model. You don't need to use nls(...). You can specify the model formula in either of two ways, both of which lead to identical results:
Height~Diameter + I(Diameter^2)
or
Height~poly(Diameter,2,raw=TRUE)
The second form uses the poly(...) function to create a polynomial of order 2. raw=TRUE tells poly(...) to generate raw polynomials rather than orthogonal polynomials (the default). The first form is a bit simpler unless you want polynomials of order greater than 2. Here's an example using both forms.
set.seed(1) # for reproducible example
df <- data.frame(Diameter=sample(1:50,50))
df$Height <- with(df,2*Diameter + .5*Diameter^2 + 4 + rnorm(50,sd=30))
fit <- lm(Height~Diameter + I(Diameter^2),df)
summary(fit)
# ...
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) -6.85088 12.26720 -0.558 0.57917
# Diameter 3.31030 1.10964 2.983 0.00451 **
# I(Diameter^2) 0.47717 0.02109 22.622 < 2e-16 ***
fit.poly<- lm(Height~poly(Diameter,2,raw=TRUE),df)
summary(fit.poly)
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) -6.85088 12.26720 -0.558 0.57917
# poly(Diameter, 2, raw = TRUE)1 3.31030 1.10964 2.983 0.00451 **
# poly(Diameter, 2, raw = TRUE)2 0.47717 0.02109 22.622 < 2e-16 ***
To plot the data and the trend curve:
df$pred <- predict(fit)
with(df,plot(Height~Diameter))
with(df[order(df$Diameter),],lines(pred~Diameter,col="red",lty=2))
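For a smoother curve when the Diameter values are sparse or unevenly spaced, you could instead predict on a fine grid (a sketch reusing the fit object from above):
grid <- data.frame(Diameter = seq(min(df$Diameter), max(df$Diameter), length.out = 200))
grid$pred <- predict(fit, newdata = grid)
with(grid, lines(pred ~ Diameter, col = "blue"))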

Your problem is your start= parameter. You need to supply actual values for the a, b, and c parameters. Here's a reproducible example:
# sample data
set.seed(1) # added so the example is actually reproducible
dat <- data.frame(Diameter = runif(50, 2, 6))
dat <- transform(dat, Height = 2*Diameter + 0.75*Diameter^2 + 4 + rnorm(50))
dat<-dat[order(dat$Diameter), ]
#now fit the model
mynls <- nls(Height ~ a*I(Diameter^2) + b*Diameter + c, dat,
             start = list(a = 1, b = 1, c = 1), algorithm = "port")
Notice how we set starting values of 1 for each of the coefficients; you can use whatever values you think are most appropriate. We can then plot the raw values together with the fitted curve:
plot(Height~Diameter,dat, main="Height Curve",
xlab="Diameter", ylab="Height", pch=19)
lines(fitted(mynls)~ dat$Diameter, col="red")
This gives the scatter plot with the fitted quadratic drawn as a red curve.
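To pull out the estimated coefficients (the exact values depend on the random sample above):
coef(mynls) # estimates of a, b and c; they should land near 0.75, 2 and 4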

Related

Syntax for three-piece segmented regression using NLS in R when concave

My goal is to fit a three-piece (i.e., two-breakpoint) regression model to make predictions using propagate's predictNLS function, making sure to define the knots as parameters, but my model formula seems off.
I've used the segmented package to estimate the breakpoint locations (used as starting values in NLS), but would like to keep my models in the NLS format, specifically nlsLM {minpack.lm}, because I am fitting other types of curves to my data using NLS, want to allow NLS to optimize the knot values, am sometimes using variable weights, and need to be able to easily calculate the Monte Carlo confidence intervals from propagate. Though I'm very close to having the right syntax for the formula, I'm not getting the expected/required behaviour near the breakpoint(s). The segments SHOULD meet directly at the breakpoints (without any jumps), but at least on this data, I'm getting a weird local minimum at the breakpoint (see plots below).
Below is an example of my data and general process. I believe my issue to be in the NLS formula.
library(minpack.lm)
library(segmented)
y <- c(-3.99448113, -3.82447011, -3.65447803, -3.48447030, -3.31447855, -3.14448753, -2.97447972, -2.80448401, -2.63448380, -2.46448069, -2.29448796, -2.12448912, -1.95448783, -1.78448797, -1.61448563, -1.44448719, -1.27448469, -1.10448651, -0.93448525, -0.76448637, -0.59448626, -0.42448586, -0.25448588, -0.08448548, 0.08551417, 0.25551393, 0.42551411, 0.59551395, 0.76551389, 0.93551398)
x <- c(61586.1711, 60330.5550, 54219.9925, 50927.5381, 48402.8700, 45661.9175, 37375.6023, 33249.1248, 30808.6131, 28378.6508, 22533.3782, 13901.0882, 11716.5669, 11004.7305, 10340.3429, 9587.7994, 8736.3200, 8372.1482, 8074.3709, 7788.1847, 7499.6721, 7204.3168, 6870.8192, 6413.0828, 5523.8097, 3961.6114, 3460.0913, 2907.8614, 2016.1158, 452.8841)
df<- data.frame(x,y)
#Use Segmented to get estimates for parameters with 2 breakpoints
my.seg2 <- segmented(lm(y ~ x, data = df), seg.Z = ~ x, npsi = 2)
#extract knot, intercept, and coefficient values to use as NLS start points
my.knot1 <- my.seg2$psi[1,2]
my.knot2 <- my.seg2$psi[2,2]
my.m_2 <- slope(my.seg2)$x[1,1]
my.b1 <- my.seg2$coefficients[[1]]
my.b2 <- my.seg2$coefficients[[2]]
my.b3 <- my.seg2$coefficients[[3]]
#Fit a NLS model to ~replicate segmented model. Presumably my model formula is where the problem lies
my.model <- nlsLM(y ~ m*x + b + (b2*(ifelse(x >= knot1 & x <= knot2, 1, 0)*(x - knot1)) +
                    (b3*ifelse(x > knot2, 1, 0)*(x - knot2 - knot1))),
                  data = df,
                  start = c(m = my.m_2, b = my.b1, b2 = my.b2, b3 = my.b3,
                            knot1 = my.knot1, knot2 = my.knot2))
How it should look:
plot(my.seg2)
How it does look:
plot(x, y)
lines(x=x, y=predict(my.model), col='black', lty = 1, lwd = 1)
I was pretty sure I had it "right", but when the 95% confidence intervals are plotted with the line and prediction resolution (e.g., the density of x points) is increased, things seem dramatically incorrect.
Thank you all for your help.
Define g to be a grouping vector having the same length as x which takes on values 1, 2, 3 for the 3 sections of the X axis and create an nls model from these. The resulting plot looks ok.
my.knots <- c(my.knot1, my.knot2)
g <- cut(x, c(-Inf, my.knots, Inf), label = FALSE)
fm <- nls(y ~ a[g] + b[g] * x, df, start = list(a = c(1, 1, 1), b = c(1, 1, 1)))
plot(y ~ x, df)
lines(fitted(fm) ~ x, df, col = "red")
Constraints
Although the above looks OK and may be sufficient, it does not guarantee that the segments intersect at the knots. To do that we must impose the constraints that both sides are equal at the knots:
a[2] + b[2] * my.knots[1] = a[1] + b[1] * my.knots[1]
a[3] + b[3] * my.knots[2] = a[2] + b[2] * my.knots[2]
so
a[2] = a[1] + (b[1] - b[2]) * my.knots[1]
a[3] = a[2] + (b[2] - b[3]) * my.knots[2]
= a[1] + (b[1] - b[2]) * my.knots[1] + (b[2] - b[3]) * my.knots[2]
giving:
# returns a vector of the three a values
avals <- function(a1, b) unname(cumsum(c(a1, -diff(b) * my.knots)))
fm2 <- nls(y ~ avals(a1, b)[g] + b[g] * x, df, start = list(a1 = 1, b = c(1, 1, 1)))
To get the three a values we can use:
co <- coef(fm2)
avals(co[1], co[-1])
To get the residual sum of squares:
deviance(fm2)
## [1] 0.193077
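To check visually that the constrained segments now meet at the knots (a quick sketch reusing the objects above):
plot(y ~ x, df)
lines(fitted(fm2) ~ x, df, col = "red")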
Polynomial
Although it involves a large number of parameters, a polynomial fit could be used in place of the segmented linear regression. A 12th-degree polynomial involves 13 parameters but has a lower residual sum of squares than the segmented linear regression. A lower degree could be used, with a corresponding increase in the residual sum of squares. A 7th-degree polynomial involves 8 parameters and visually looks not too bad, although it has a higher residual sum of squares.
fm12 <- nls(y ~ cbind(1, poly(x, 12)) %*% b, df, start = list(b = rep(1, 13)))
deviance(fm12)
## [1] 0.1899218
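A sketch of the corresponding 7th-degree fit, following the same pattern:
fm7 <- nls(y ~ cbind(1, poly(x, 7)) %*% b, df, start = list(b = rep(1, 8)))
deviance(fm7) # higher than fm12's deviance, as noted above
plot(y ~ x, df)
lines(fitted(fm7) ~ x, df, col = "red")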
The odd behaviour near the breakpoint may, in part, reflect a limitation in segmented: segmented returns a single change-point value without quantifying the associated uncertainty. Redoing the analysis using mcp, which returns Bayesian posteriors, we see that the second change point is bimodally distributed:
library(mcp)
model = list(
  y ~ 1 + x, # intercept + slope in the first segment
  ~ 0 + x,   # only the slope changes in the next segments
  ~ 0 + x
)
# Fit it with a large number of samples and plot the change point posteriors
fit = mcp(model, data = data.frame(x, y), iter = 50000, adapt = 10000)
plot_pars(fit, regex_pars = "^cp*", type = "dens_overlay")
FYI, mcp can plot credible intervals as well (the red dashed lines):
plot(fit, q_fit = TRUE)

How to interpret lm() coefficient estimates when using bs() function for splines

I'm using a set of points which goes from (-5,5) through (0,0) to (5,5) in a "symmetric V-shape". I'm fitting a model with lm() and the bs() function to fit a "V-shape" spline:
lm(formula = y ~ bs(x, degree = 1, knots = c(0)))
I get the "V-shape" when I predict outcomes with predict() and draw the prediction line. But when I look at the model estimates from coef(), I see estimates that I don't expect.
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 4.93821 0.16117 30.639 1.40e-09 ***
bs(x, degree = 1, knots = c(0))1 -5.12079 0.24026 -21.313 2.47e-08 ***
bs(x, degree = 1, knots = c(0))2 -0.05545 0.21701 -0.256 0.805
I would expect a -1 coefficient for the first part and a +1 coefficient for the second part. Must I interpret the estimates in a different way?
If I instead code the knot into the lm() model manually, then I get these coefficients:
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.18258 0.13558 -1.347 0.215
x -1.02416 0.04805 -21.313 2.47e-08 ***
z 2.03723 0.08575 23.759 1.05e-08 ***
That's more like it: z's slope (the change beyond the knot) relative to x is ~ +1.
I want to understand how to interpret the bs() result. I've checked: the manual and bs model prediction values are exactly the same.
I would expect a -1 coefficient for the first part and a +1 coefficient for the second part.
I think your question is really about what a B-spline function is. If you want to understand the meaning of the coefficients, you need to know the basis functions of your spline. See the following:
library(splines)
x <- seq(-5, 5, length = 100)
b <- bs(x, degree = 1, knots = 0) ## returns a basis matrix
str(b) ## check structure
b1 <- b[, 1] ## basis 1
b2 <- b[, 2] ## basis 2
par(mfrow = c(1, 2))
plot(x, b1, type = "l", main = "basis 1: b1")
plot(x, b2, type = "l", main = "basis 2: b2")
Note:
B-splines of degree-1 are tent functions, as you can see from b1;
B-splines of degree-1 are scaled, so that their functional value is between (0, 1);
the knot of a B-spline of degree 1 is where it bends;
B-splines of degree-1 are compact, and are only non-zero over (no more than) three adjacent knots.
You can get the (recursive) expression of B-splines from the definition of a B-spline. The B-spline of degree 0 is the most basic class, while
B-spline of degree 1 is a linear combination of B-spline of degree 0
B-spline of degree 2 is a linear combination of B-spline of degree 1
B-spline of degree 3 is a linear combination of B-spline of degree 2
(Sorry, I was getting off-topic...)
Your linear regression using B-splines:
y ~ bs(x, degree = 1, knots = 0)
is just doing:
y ~ b1 + b2
Now you should be able to understand what the coefficients you get mean: they say that the fitted spline function is:
-5.12079 * b1 - 0.05545 * b2
In the summary table:
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 4.93821 0.16117 30.639 1.40e-09 ***
bs(x, degree = 1, knots = c(0))1 -5.12079 0.24026 -21.313 2.47e-08 ***
bs(x, degree = 1, knots = c(0))2 -0.05545 0.21701 -0.256 0.805
You might wonder why the coefficient of b2 is not significant. Well, compare your y and b1: your y is a symmetric V-shape, while b1 is an inverted symmetric V-shape. If you multiply b1 by -1 and rescale it by a factor of 5 (this explains the coefficient of about -5 for b1), what do you get? A good match, right? So there is no need for b2.
However, if your y were asymmetric, running through (-5,5) to (0,0) and then to (5,10), you would notice that the coefficients for b1 and b2 are both significant. I think the other answer already gives you such an example.
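A quick numerical check (a sketch, assuming the noiseless symmetric V-shape y = |x| and reusing x and b1 from above):
y <- abs(x) # (-5,5) -> (0,0) -> (5,5)
fit <- lm(y ~ bs(x, degree = 1, knots = 0))
coef(fit) # intercept ~ 5, b1 ~ -5, b2 ~ 0
all.equal(unname(fitted(fit)), 5 - 5 * b1) # TRUE: the fit is exactly 5 - 5*b1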
Reparametrization of fitted B-spline to piecewise polynomial is demonstrated here: Reparametrize fitted regression spline as piece-wise polynomials and export polynomial coefficients.
A simple example of a first-degree spline with a single knot, and an interpretation of the estimated coefficients to calculate the slopes of the fitted lines:
library(splines)
set.seed(313)
x<-seq(-5,+5,len=1000)
y<-c(seq(5,0,len=500)+rnorm(500,0,0.25),
seq(0,10,len=500)+rnorm(500,0,0.25))
plot(x,y, xlim = c(-6,+6), ylim = c(0,+8))
fit <- lm(formula = y ~ bs(x, degree = 1, knots = c(0)))
x.predict <- seq(-2.5,+2.5,len = 100)
lines(x.predict, predict(fit, data.frame(x = x.predict)), col =2, lwd = 2)
This produces a plot of the points with the fitted line overlaid in red.
Since we are fitting a spline with degree = 1 (i.e. straight lines) and a knot at x = 0, we have two lines: one for x <= 0 and one for x > 0.
The coefficients are
> round(summary(fit)$coefficients,3)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.014 0.021 241.961 0
bs(x, degree = 1, knots = c(0))1 -5.041 0.030 -166.156 0
bs(x, degree = 1, knots = c(0))2 4.964 0.027 182.915 0
These can be translated into the slope of each straight line using the knot (which we specified at x = 0) and the boundary knots (the min/max of the explanatory data):
# two boundary knots and one specified
knot.boundary.left <- min(x)
knot <- 0
knot.boundary.right <- max(x)
slope.1 <- summary(fit)$coefficients[2,1] /(knot - knot.boundary.left)
slope.2 <- (summary(fit)$coefficients[3,1] - summary(fit)$coefficients[2,1]) / (knot.boundary.right - knot)
> slope.1
[1] -1.008238
> slope.2
[1] 2.000988
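As a sanity check (a sketch reusing fit from above), the slopes can also be recovered numerically from unit-spaced predictions on either side of the knot:
p <- predict(fit, data.frame(x = c(-3, -2, 2, 3)))
diff(p[1:2]) # ~ -1, the slope left of the knot
diff(p[3:4]) # ~ +2, the slope right of the knot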

Exponential curve fitting in R

time = 1:100
head(y)
0.07841589 0.07686316 0.07534116 0.07384931 0.07238699 0.07095363
plot(time,y)
This is an exponential curve.
How can I fit a line to this curve without knowing the formula? I can't use nls() since the formula is unknown (only data points are given).
How can I get the equation for this curve and determine the constants in the equation?
I tried loess but it doesn't give the intercepts.
You need a model to fit to the data.
Without knowing the full details of your model, let's say that this is an
exponential growth model,
which one could write as: y = a * e^(r*t)
Where y is your measured variable, t is the time at which it was measured,
a is the value of y when t = 0 and r is the growth constant.
We want to estimate a and r.
This is a non-linear problem because we want to estimate the exponent, r.
However, in this case we can use some algebra and transform it into a linear equation by taking the log of both sides and rearranging (remember the logarithm rules), resulting in:
log(y) = log(a) + r * t
We can visualise this with an example, by generating a curve from our model, assuming some values for a and r:
t <- 1:100 # these are your time points
a <- 10 # assume the size at t = 0 is 10
r <- 0.1 # assume a growth constant
y <- a*exp(r*t) # generate some y observations from our exponential model
# visualise
par(mfrow = c(1, 2))
plot(t, y) # on the original scale
plot(t, log(y)) # taking the log(y)
So, for this case, we could explore two possibilities:
Fit our non-linear model to the original data (for example using nls() function)
Fit our "linearised" model to the log-transformed data (for example using the lm() function)
Which option to choose (and there's more options), depends on what we think
(or assume) is the data-generating process behind our data.
Let's illustrate with some simulations that include added noise (sampled from
a normal distribution), to mimic real data. Please look at this
StackExchange post
for the reasoning behind this simulation (pointed out in Alejo Bernardin's comment).
set.seed(12) # for reproducible results
# errors constant across time - additive
y_add <- a*exp(r*t) + rnorm(length(t), sd = 5000) # or: rnorm(length(t), mean = a*exp(r*t), sd = 5000)
# errors grow as y grows - multiplicative (constant on the log-scale)
y_mult <- a*exp(r*t + rnorm(length(t), sd = 1)) # or: rlnorm(length(t), mean = log(a) + r*t, sd = 1)
# visualise
par(mfrow = c(1, 2))
plot(t, y_add, main = "additive error")
lines(t, a*exp(t*r), col = "red")
plot(t, y_mult, main = "multiplicative error")
lines(t, a*exp(t*r), col = "red")
For the additive model, we could use nls(), because the error is constant across
t. When using nls() we need to specify some starting values for the optimization algorithm (try to "guesstimate" what these are, because nls() often struggles to converge on a solution).
add_nls <- nls(y_add ~ a*exp(r*t),
start = list(a = 0.5, r = 0.2))
coef(add_nls)
# a r
# 11.30876845 0.09867135
Using the coef() function we can get the estimates for the two parameters.
This gives us OK estimates, close to what we simulated (a = 10 and r = 0.1).
You can see that the error variance is reasonably constant across the range of the data by plotting the residuals of the model:
plot(t, resid(add_nls))
abline(h = 0, lty = 2)
For the multiplicative error case (our y_mult simulated values), we should use lm() on log-transformed data, because
the error is constant on that scale instead.
mult_lm <- lm(log(y_mult) ~ t)
coef(mult_lm)
# (Intercept) t
# 2.39448488 0.09837215
To interpret this output, remember again that our linearised model is log(y) = log(a) + r*t, which is equivalent to a linear model of the form Y = β0 + β1 * X, where β0 is our intercept and β1 our slope.
Therefore, in this output (Intercept) is equivalent to log(a) of our model and t is the coefficient for the time variable, so equivalent to our r.
To meaningfully interpret the (Intercept) we can take its exponential (exp(2.39448488)), giving us ~10.96, which is quite close to our simulated value.
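In code, that back-transformation is simply:
exp(coef(mult_lm)[1]) # ~ 10.96, the estimate of a on the original scale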
It's worth noting what would happen if we'd fit data where the error is multiplicative
using the nls function instead:
mult_nls <- nls(y_mult ~ a*exp(r*t), start = list(a = 0.5, r = 0.2))
coef(mult_nls)
# a r
# 281.06913343 0.06955642
Now we over-estimate a and under-estimate r (Mario Reutter highlighted this in his comment). We can visualise the consequence of using the wrong approach to fit our model:
# get the model's coefficients
lm_coef <- coef(mult_lm)
nls_coef <- coef(mult_nls)
# make the plot
plot(t, y_mult)
lines(t, a*exp(r*t), col = "brown", lwd = 5)
lines(t, exp(lm_coef[1])*exp(lm_coef[2]*t), col = "dodgerblue", lwd = 2)
lines(t, nls_coef[1]*exp(nls_coef[2]*t), col = "orange2", lwd = 2)
legend("topleft", col = c("brown", "dodgerblue", "orange2"),
legend = c("known model", "nls fit", "lm fit"), lwd = 3)
We can see how the lm() fit to log-transformed data was substantially better than the nls() fit on the original data.
You can again plot the residuals of this model, to see that the variance is not constant across the range of the data (we can also see this in the graphs above, where the spread of the data increases for higher values of t):
plot(t, resid(mult_nls))
abline(h = 0, lty = 2)
Unfortunately, taking the logarithm and fitting a linear model is not optimal. The reason is that the errors for large y-values carry much more weight than those for small y-values when applying the exponential function to go back to the original model.
Here is one example:
f <- function(x){exp(0.3*x+5)}
squaredError <- function(a,b,x,y) {sum((exp(a*x+b)-f(x))^2)}
x <- 0:12
y <- f(x) * ( 1 + sample(-300:300,length(x),replace=TRUE)/10000 )
x
y
#--------------------------------------------------------------------
M <- lm(log(y)~x)
a <- unlist(M[1])[2]
b <- unlist(M[1])[1]
print(c(a,b))
squaredError(a,b,x,y)
approxPartAbl_a <- (squaredError(a+1e-8,b,x,y) - squaredError(a,b,x,y))/1e-8
for ( i in 0:10 )
{
eps <- -i*sign(approxPartAbl_a)*1e-5
print(c(eps,squaredError(a+eps,b,x,y)))
}
Result:
> f <- function(x){exp(0.3*x+5)}
> squaredError <- function(a,b,x,y) {sum((exp(a*x+b)-f(x))^2)}
> x <- 0:12
> y <- f(x) * ( 1 + sample(-300:300,length(x),replace=TRUE)/10000 )
> x
[1] 0 1 2 3 4 5 6 7 8 9 10 11 12
> y
[1] 151.2182 203.4020 278.3769 366.8992 503.5895 682.4353 880.1597 1186.5158 1630.9129 2238.1607 3035.8076 4094.6925 5559.3036
> #--------------------------------------------------------------------
>
> M <- lm(log(y)~x)
> a <- unlist(M[1])[2]
> b <- unlist(M[1])[1]
> print(c(a,b))
coefficients.x coefficients.(Intercept)
0.2995808 5.0135529
> squaredError(a,b,x,y)
[1] 5409.752
> approxPartAbl_a <- (squaredError(a+1e-8,b,x,y) - squaredError(a,b,x,y))/1e-8
> for ( i in 0:10 )
+ {
+ eps <- -i*sign(approxPartAbl_a)*1e-5
+ print(c(eps,squaredError(a+eps,b,x,y)))
+ }
[1] 0.000 5409.752
[1] -0.00001 5282.91927
[1] -0.00002 5157.68422
[1] -0.00003 5034.04589
[1] -0.00004 4912.00375
[1] -0.00005 4791.55728
[1] -0.00006 4672.70592
[1] -0.00007 4555.44917
[1] -0.00008 4439.78647
[1] -0.00009 4325.71730
[1] -0.0001 4213.2411
>
Perhaps one can try some numerical method, e.g. a gradient search, to find the minimum of the squared error function.
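A minimal sketch of that idea, reusing the squaredError function above and the lm() estimates a and b as starting values for a derivative-free optimizer:
fit <- optim(c(a, b), function(p) squaredError(p[1], p[2], x, y))
fit$par   # refined slope and intercept
fit$value # squared error at the optimum; smaller than squaredError(a, b, x, y)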
If it really is exponential, you can try taking the logarithm of your variable and fitting a linear model to that.

Obtain a derivative by spline interpolation

There is a series of x and y values I have (but not the function itself). I would like to get the derivative of the unknown function by spline interpolation of the x and y values.
My example:
x<-c(1,2,3,4,5,6,7,8,9,10)
y<-c(0.1,0.3,0.8,0.9,0.91,0.93,0.95,0.98,0.99,0.999)
Is it possible in R to interpolate and to get the functional form of the derivative?
My problem is that I have only the x and y values of a cdf but need to obtain the probability density function, so I want to get the derivative by spline interpolation.
The reason for the question is that I need the pdf of that cdf, so I am trying to spline-interpolate the x-y values of the cdf. Please note that this is a simple example and not a real cdf.
I haven't found the functional form of restricted cubic splines to be particularly difficult to grasp after reading the explanation by Frank Harrell in his book: "Regression Modeling Strategies".
require(rms)
df <- data.frame(x = c(1,2,3,4,5,6,7,8,9,10),
                 y = c(12,2,-3,5,6,9,8,10,11,10.5))
ols( y ~ rcs(x, 3), df)
#--------------
Linear Regression Model
ols(formula = y ~ rcs(x, 3), data = df)
Model Likelihood Discrimination
Ratio Test Indexes
Obs 10 LR chi2 3.61 R2 0.303
sigma 4.4318 d.f. 2 R2 adj 0.104
d.f. 7 Pr(> chi2) 0.1646 g 2.811
Residuals
Min 1Q Median 3Q Max
-8.1333 -1.1625 0.5333 0.9833 6.9000
Coef S.E. t Pr(>|t|)
Intercept 5.0833 4.2431 1.20 0.2699
x 0.0167 1.1046 0.02 0.9884
x' 1.0000 1.3213 0.76 0.4738
#----------
The rms package has an odd system for storing summary information that needs to be set up before some of its special functions will work:
dd <- datadist(df)
options(datadist="dd")
mymod <- ols( y ~ rcs(x, 3), df)
# cannot imagine that more than 3 knots would make sense in such a small example
Function(mymod)
# --- reformatted to allow inspection of separate terms
function(x = 5.5) {5.0833333+0.016666667* x +
1*pmax(x-5, 0)^3 -
2*pmax(x-5.5, 0)^3 +
1*pmax(x-6, 0)^3 }
<environment: 0x1304ad940>
The zeros in the pmax functions basically suppress any contribution to the total from a term when the x value is less than the corresponding knot (5, 5.5 and 6 in this case).
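As a sanity check (a sketch), the exported function should reproduce the model's own predictions:
f <- Function(mymod)
all.equal(unname(f(df$x)), unname(predict(mymod))) # TRUE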
Compare three versus four knots (and if you want smooth curves, include a finer-grained ...-data argument to Predict):
png()
plot(df$x,df$y )
mymod <- ols( y ~ rcs(x, 3), df)
lines(df$x, predict(mymod) ,col="blue")
mymod <- ols( y ~ rcs(x, 4), df)
lines(df$x, predict(mymod) ,col="red")
dev.off()
Take a look at monotone cubic splines, which are nondecreasing by construction. A web search for "monotone cubic spline R" turns up some hits. I haven't used any of the packages mentioned.
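A minimal sketch of that idea using base R's stats::splinefun(), which supports the monotone "hyman" method and returns an interpolant whose derivative can be evaluated via its deriv argument (reusing the x and y cdf values from the question):
cdf <- splinefun(x, y, method = "hyman") # monotone cubic interpolant of the cdf
xx <- seq(min(x), max(x), length.out = 200)
plot(xx, cdf(xx, deriv = 1), type = "l",
     xlab = "x", ylab = "density", main = "pdf from the interpolated cdf")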

How to put a complicated equation into an R formula?

We have the diameter of trees as the predictor and tree height as the dependent variable. A number of different equations exist for this kind of data, and we are trying to model some of them and compare the results.
However, we can't figure out how to correctly put one equation into the corresponding R formula format.
The trees data set in R can be used as an example.
data(trees)
df <- trees
df$h <- df$Height * 0.3048 #transform to metric system
df$dbh <- (trees$Girth * 0.3048) / pi #transform tree girth to diameter
First, the example of an equation that seems to work well:
form1 <- h ~ I(dbh ^ -1) + I( dbh ^ 2)
m1 <- lm(form1, data = df)
m1
Call:
lm(formula = form1, data = df)
Coefficients:
(Intercept) I(dbh^-1) I(dbh^2)
27.1147 -5.0553 0.1124
Coefficients a, b and c are estimated, which is what we are interested in.
Now the problematic equation (height as a rational function of diameter, with a, b and c to estimate):
H = 1.3 + D^2 / (a + b*D + c*D^2)
Trying to fit it like this:
form2 <- h ~ I(dbh ^ 2) / dbh + I(dbh ^ 2) + 1.3
gives an error:
m1 <- lm(form2, data = df)
Error in terms.formula(formula, data = data) :
  invalid model formula in ExtractVars
I guess this is because / is interpreted as a nested model and not an arithmetic operator?
This doesn't give an error:
form2 <- h ~ I(I(dbh ^ 2) / dbh + I(dbh ^ 2) + 1.3)
m1 <- lm(form2, data = df)
But the result is not the one we want:
m1
Call:
lm(formula = form2, data = df)
Coefficients:
(Intercept) I(I(dbh^2)/dbh + I(dbh^2) + 1.3)
19.3883 0.8727
Only one coefficient is given for the whole term within the outer I(), which seems logical.
How can we fit the second equation to our data?
Assuming you are using nls, the R formula can use an ordinary R function, H(a, b, c, D), so the formula can be just h ~ H(a, b, c, dbh), and this works:
# use lm on the linearised form to get starting values
lm1 <- lm(1/(h - 1.3) ~ I(1/dbh) + I(1/dbh^2), df)
start <- rev(setNames(coef(lm1), c("c", "b", "a")))
# run nls
H <- function(a, b, c, D) 1.3 + D^2 / (a + b * D + c * D^2)
nls1 <- nls(h ~ H(a, b, c, dbh), df, start = start)
nls1 # display result
Graphing the output:
plot(h ~ dbh, df)
lines(fitted(nls1) ~ dbh, df)
You've got a couple of problems. (1) You're missing parentheses for the denominator of form2 (and R has no way to know that you want to add a constant a in the denominator, or where to put any of the parameters, really); and, much more problematic, (2) your second model isn't linear, so lm won't work.
Fixing (1) is easy:
form2 <- h ~ 1.3 + I(dbh^2) / (a + b * dbh + c * I(dbh^2))
For (2), though there are many ways to estimate the parameters of a nonlinear model, nls (nonlinear least squares) is a good place to start:
m2 <- nls(form2, data = df, start = list(a = 1, b = 1, c = 1))
You need to provide starting guesses for the parameters in nls. I just picked 1's, but you should use better guesses that ballpark what the parameters might be.
edit: fixed, no longer incorrectly using offset ...
An answer that complements #shujaa's:
You can transform your problem from
H = 1.3 + D^2/(a+b*D+c*D^2)
to
1/(H-1.3) = a/D^2+b/D+c
This would normally mess up the assumptions of the model (i.e., if H were normally distributed with constant variance, then 1/(H-1.3) wouldn't be). However, let's try it anyway:
data(trees)
df <- transform(trees,
h=Height * 0.3048, #transform to metric system
dbh=Girth * 0.3048 / pi #transform tree girth to diameter
)
lm(1/(h-1.3) ~ poly(I(1/dbh),2,raw=TRUE),data=df)
## Coefficients:
## (Intercept) poly(I(1/dbh), 2, raw = TRUE)1
## 0.043502 -0.006136
## poly(I(1/dbh), 2, raw = TRUE)2
## 0.010792
These results would normally be good enough to get good starting values for the nls fit. However, you can do better than that via glm, which uses a link function to allow for some forms of non-linearity. Specifically,
(fit2 <- glm(h-1.3 ~ poly(I(1/dbh),2,raw=TRUE),
family=gaussian(link="inverse"),data=df))
## Coefficients:
## (Intercept) poly(I(1/dbh), 2, raw = TRUE)1
## 0.041795 -0.002119
## poly(I(1/dbh), 2, raw = TRUE)2
## 0.008175
##
## Degrees of Freedom: 30 Total (i.e. Null); 28 Residual
## Null Deviance: 113.2
## Residual Deviance: 80.05 AIC: 125.4
##
You can see that the results are approximately the same as the linear fit, but not quite.
pframe <- data.frame(dbh=seq(0.8,2,length=51))
We use predict, but need to correct the prediction to account for the fact that we subtracted a constant from the LHS:
pframe$h <- predict(fit2,newdata=pframe,type="response")+1.3
p2 <- predict(fit2,newdata=pframe,se.fit=TRUE) ## predict on link scale
pframe$h_lwr <- with(p2,1/(fit+1.96*se.fit))+1.3
pframe$h_upr <- with(p2,1/(fit-1.96*se.fit))+1.3
png("dbh_tmp1.png",height=4,width=6,units="in",res=150)
par(las=1,bty="l")
plot(h~dbh,data=df)
with(pframe,lines(dbh,h,col=2))
with(pframe,polygon(c(dbh,rev(dbh)),c(h_lwr,rev(h_upr)),
border=NA,col=adjustcolor("black",alpha=0.3)))
dev.off()
Because we have used the constant on the LHS, this doesn't fit perfectly into ggplot's geom_smooth framework. (It almost, but doesn't quite, fit the framework of using an offset: we could only use an offset if our formula were 1/H - 1.3 = a/D^2 + ..., i.e. if the constant adjustment were on the link (inverse) scale rather than the original scale.)
library("ggplot2")
ggplot(df,aes(dbh,h))+geom_point()+theme_bw()+
geom_line(data=pframe,colour="red")+
geom_ribbon(data=pframe,colour=NA,alpha=0.3,
aes(ymin=h_lwr,ymax=h_upr))
ggsave("dbh_tmp2.png",height=4,width=6)
