ARIMA model with nonlinear exogenous variable in R

I'm doing a non-linear regression in R and want to add one moving-average term to my model to eliminate the autocorrelations in residuals.
Basically, here is the model:
y[n] = a + log((x1[n])^g + (x2[n])^g) + c*e[n-1] + e[n]
where e is the error series, so c*e[n-1] is the moving-average term.
I plan to use ARIMA(0, 0, 1) to model the residuals. However, I do not know which function in R can add a non-linear exogenous part to an ARIMA model.
More information: I know how to use the nls command to estimate a and g, but I do not know how to deal with e[n].
I know that the xreg argument of arima can handle an ARIMA model with linear exogenous variables. Is there a similar function for an ARIMA model with nonlinear exogenous variables?
Thank you for the help in advance!

nlme has this capability, as it fits non-linear mixed models. You can think of it as an extension of nls (a fixed-effects-only non-linear regression) that allows random effects and correlated errors.
nlme can handle ARMA correlation via something like correlation = corARMA(0.2, ~ 1, p = 0, q = 1, fixed = FALSE). This says the residuals follow an MA(1) process, with an initial coefficient guess of 0.2 that is updated during model fitting (setting fixed = TRUE would instead keep it at 0.2). The ~ 1 means the MA(1) structure applies to the series as a whole, with no further grouping structure.
I am not an expert in nlme, but I believe it is what you need. I produced the following example, but I can't get nlme to work at the moment; I post it here as a starting point / to give a flavour.
set.seed(0)
x1 <- runif(100)
x2 <- runif(100)
## MA(1) correlated error, with innovation standard deviation 0.1
e <- arima.sim(model = list(ma = 0.5), n = 100, sd = 0.1)
## a true model, with `a = 0.2, g = 0.5`
y0 <- 0.2 + log(x1 ^ 0.5 + x2 ^ 0.5)
## observations
y <- y0 + e
## no need to install; it comes with R; just `library()` it
library(nlme)
fit <- nlme(y ~ a + log(x1 ^ g + x2 ^ g), fixed = a + g ~ 1,
            start = list(a = 0.5, g = 1),
            correlation = corARMA(0.2, form = ~ 1, p = 0, q = 1, fixed = FALSE))
Similar to nls, we have an overall model formula y ~ a + log(x1 ^ g + x2 ^ g), and starting values are required for the iterative fitting process. I have chosen start = list(a = 0.5, g = 1). The correlation argument was explained at the beginning.
The fixed and random arguments in nlme specify which parameters in the overall formula are fixed effects and which are random effects. Since we have no random effects, we leave random unspecified. We want a and g as fixed effects, so I tried fixed = a + g ~ 1. Unfortunately it does not quite work, for a reason I couldn't pin down: reading ?nlme, I thought this formula means we want a common a and g for all observations, but nlme reports an error saying it is not a valid group formula.
I am still investigating this; as I said, the above gives us a start. We are already fairly close to the final answer.
Thanks to user20650 for pointing out my awkward error: I should use the gnls function rather than nlme. By the design of the nlme package, the functions lme and nlme must take a random argument to work. Luckily, there are several other routines in the nlme package for extending linear and non-linear models.
gls and gnls extend lm and nls by allowing correlated errors and non-constant variance functions (i.e., a non-diagonal error covariance).
So, I should really use gnls instead:
## no `fixed` argument, as `gnls` is fixed-effects only
fit <- gnls(y ~ a + log(x1 ^ g + x2 ^ g), start = list(a = 0.5, g = 1),
            correlation = corARMA(0.2, form = ~ 1, p = 0, q = 1, fixed = FALSE))
#Generalized nonlinear least squares fit
# Model: y ~ a + log(x1^g + x2^g)
# Data: NULL
# Log-likelihood: 92.44078
#
#Coefficients:
# a g
#0.1915396 0.5007640
#
#Correlation Structure: ARMA(0,1)
# Formula: ~1
# Parameter estimate(s):
# Theta1
#0.4184961
#Degrees of freedom: 100 total; 98 residual
#Residual standard error: 0.1050295
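As an optional follow-up (a sketch, assuming the gnls fit above succeeded), nlme's intervals() gives approximate confidence intervals for the fixed effects and the MA(1) coefficient, which can be compared with the simulation truth (a = 0.2, g = 0.5, MA coefficient 0.5):
## approximate 95% confidence intervals for a, g and Theta1
intervals(fit)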


R nls model fails to converge on Linux, not on macOS

I have a grouped/nested data frame on which I want to run an nls model fit for each group. This code used to work fine on one machine. Running it on another machine results in an error.
The minimally reproducible example is this:
third_order_polynomial <- function(video_bitrate, a, b, c, d = 0) {
return(a * log10(video_bitrate)^3 + b * log10(video_bitrate)^2 + c * log10(video_bitrate) + d)
}
problematic_data = read.csv(text="video_name,video_target_bitrate,video_height,video_width,video_frame_rate,video_bitrate,video_duration,video_size,video_bitrate_log,score
water_netflix_200kbps_360p_59.94fps_vp9.mkv,200,360,640,59.94,201.18,9.994,0.2454337060546875,2.3035848038124636,1.31034482758621
water_netflix_750kbps_360p_59.94fps_vp9.mkv,750,360,640,59.94,735.98,9.994,0.8978740380859375,2.866866012696663,1.9655172413793105
water_netflix_2000kbps_720p_59.94fps_vp9.mkv,2000,720,1280,59.94,1737.01,9.994,2.119101311035156,3.239802318695986,2.68965517241379
water_netflix_7500kbps_1080p_59.94fps_vp9.mkv,7500,1080,1920,59.94,7374.88,9.994,8.9971375390625,3.8677549581062247,3.6551724137931
water_netflix_15000kbps_1080p_59.94fps_vp9.mkv,15000,1080,1920,59.94,14738.57,9.994,17.98062360595703,4.168455348432381,4.20689655172414
water_netflix_40000kbps_2160p_59.94fps_vp9.mkv,40000,2160,3840,59.94,37534.27,9.994,45.790709763183585,4.574427973737644,4.48275862068965
")
nls(
  formula = score ~ third_order_polynomial(video_bitrate, a, b, c, d),
  data = problematic_data,
  start = list(a = 1, b = 1, c = 1, d = 0),
  lower = list(a = -1, b = -1, c = -5, d = 0),
  upper = list(a = 5, b = 5, c = 5, d = 5),
  algorithm = "port"
)
#> Error in nls(formula = score ~ third_order_polynomial(video_bitrate, a, : Convergence failure: singular convergence (7)
Created on 2022-05-06 by the reprex package (v2.0.1)
The error appears under Ubuntu 20.04 using R 4.2.0 x86_64-pc-linux-gnu.
The error does not occur under macOS 12.3.1 using R 4.2.0 aarch64-apple-darwin20 (64-bit).
Is this a bug in the stats package? Some numerical instability? I guess I will create a bug report, but I'm wondering how I can prevent this.
Note that the start parameters have been chosen to get a good fit for the entire data set. The whole code is here (link to the particular revision that can also be reproducibly run via simply sourcing the script).
Your data are well-behaved, but nevertheless you're fitting 4 parameters to only 6 data points, so this kind of problem is not surprising. It's extremely common to get differences in convergence between different platforms/versions of R/etc.: tiny changes (compiler versions, compiler optimization flags, etc.) can easily flip a fitting attempt from just-barely-working to just-barely-failing.
Your starting values may be generally appropriate, but they're pretty far off for this data set (see plot below).
Do you need to use nls, or even non-linear fitting?
This model can be fitted easily with a linear model:
lm(score ~ poly(log10(video_bitrate), 3), data = problematic_data)
(this uses an orthogonal polynomial basis, which will be most numerically stable; you can probably get away with poly(log10(video_bitrate), 3, raw = TRUE) which will give you interpretable parameters). This doesn't easily allow you to incorporate constraints on the parameters, but for this particular data set the constraints don't seem to be binding.
The "singular convergence" warning from nls is notoriously opaque: see here:
A return with IV(1) = 7 occurs if a more favorable stopping test is not satisfied and if the algorithm thinks
f(x) − min{f(y): ||D(y−x)|| ≤ V(LMAXS)} < V(SCTOL)*|f(x)|,
where D is given by (4.1). When this test is satisfied, it appears that x has too many
degrees of freedom — and you should ponder whether f was properly formulated.
Default = max{10^(−10), MACHEP^(2/3)}
I don't think it's possible to adjust this tolerance from the R level. If you don't use algorithm = "port" you won't see this error (but you might be discarding a meaningful warning).
You could try nls.lm() from the minpack.lm package.
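For instance, here is a minimal sketch using nlsLM() (the formula interface from the same minpack.lm package, assuming it is installed); it uses the Levenberg-Marquardt algorithm and accepts the same box constraints:
library(minpack.lm)
fit_lm <- nlsLM(
  score ~ third_order_polynomial(video_bitrate, a, b, c, d),
  data = problematic_data,
  start = list(a = 1, b = 1, c = 1, d = 0),
  lower = c(-1, -1, -5, 0),  # same bounds as in the question
  upper = c(5, 5, 5, 5)
)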
You can also cook up your own least-squares minimizers (albeit slightly less efficiently) using optim or nlminb (you have to write an objective function that computes the residual sum of squares yourself).
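A minimal sketch of that approach with nlminb(), which also supports the box constraints from the question; the objective function simply returns the residual sum of squares:
## residual sum of squares for a parameter vector p = c(a, b, c, d)
rss <- function(p) {
  pred <- third_order_polynomial(problematic_data$video_bitrate, p[1], p[2], p[3], p[4])
  sum((problematic_data$score - pred)^2)
}
nlminb(start = c(1, 1, 1, 0), objective = rss,
       lower = c(-1, -1, -5, 0), upper = c(5, 5, 5, 5))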
library(ggplot2)
ggplot(problematic_data, aes(log10(video_bitrate), score)) +
  geom_point() +
  geom_smooth(method = "lm", formula = y ~ poly(x, 3)) +
  ## starting values
  geom_function(fun = function(x) x^3 + x^2 + x, colour = "red") +
  scale_y_log10()
1) This is a linear model, i.e. linear in the coefficients, so we can use lm.
fm3 <- lm(formula = score ~ poly(log10(video_bitrate), 3), problematic_data)
2) or use raw = TRUE so that the coefficients correspond to those in the question; fm3 and fm3r give the same predictions and are only parameterized differently. That is, all.equal(fitted(fm3), fitted(fm3r)) is TRUE.
fm3r <- lm(formula = score ~ poly(log10(video_bitrate), 3, raw = TRUE), problematic_data)
3) Also note that the model is not significantly different than the simpler score ~ poly(log10(video_bitrate), 1) model which is equivalent to score ~ log10(video_bitrate) modulo parametrization.
fm1 <- lm(formula = score ~ poly(log10(video_bitrate), 1), data = problematic_data)
anova(fm1, fm3)
giving:
Analysis of Variance Table

Model 1: score ~ poly(log10(video_bitrate), 1)
Model 2: score ~ poly(log10(video_bitrate), 3)
  Res.Df      RSS Df Sum of Sq      F Pr(>F)
1      4 0.079747
2      2 0.012857  2  0.066891 5.2028 0.1612
Graphics:
plot(score ~ log10(video_bitrate), problematic_data)
lines(fitted(fm3) ~ log10(video_bitrate), problematic_data)
lines(fitted(fm1) ~ log10(video_bitrate), problematic_data, lty = 2)
legend("topleft", c("fm3", "fm1"), lty = 1:2)
Update
Revised to include log10 which had been omitted. Added graphics. Removed some output for brevity.

Bivariate probit/logit in R: how to find ALL coefficients and marginal effects with the "zeligverse" package

I am running a bivariate logit model in R with the zeligverse package. I want to calculate the impact of my independent variables on P(Y1=1), P(Y2=1), P(Y1=1,Y2=0), P(Y1=1,Y2=1), P(Y1=0,Y2=1), P(Y1=0,Y2=0), P(Y1=1|Y2=0) and all the other conditional probabilities (Y1 and Y2 are my dependent variables; they both equal 0 or 1). I also want all the marginal effects associated with these probabilities for each independent variable.
Do you know how to find those in this package (or in another package if it works better)?
Not sure this is what you are looking for (feel free to mark me down if not), but the Zelig packages do seem to be the right choice for your specific question.
library(Zelig)
## Let X_i be independent variable
## Assume you are working with a univariate target variable Y where Y \in {0, 1}
set.seed(123)
m <- 100
df <- data.frame(
  Y = rbinom(m, 1, 0.5),
  X1 = rbinom(m, 1, 0.95),
  X2 = rbinom(m, 1, 0.95)
)
## Fit model once:
fit <- zelig(
  Y ~ .,
  model = "logit",
  data = df,
  cite = FALSE
)
summary(fit)
## Let's focus on the binomial predictor 2
x.out1 <- setx(fit, X2=1)
## Run estimation based on a posterior distribution:
postFit <- Zelig::sim(fit, x=x.out1)
summary(postFit)
# plot(postFit)
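If the goal is something like a marginal effect, one common Zelig-style approximation is a first difference: simulate at two covariate settings and compare the expected values. A sketch, assuming the Zelig 5 interface used above (x.out0 is just a second hypothetical scenario):
## first difference for X2: compare X2 = 0 against X2 = 1
x.out0 <- setx(fit, X2 = 0)
fd <- Zelig::sim(fit, x = x.out0, x1 = x.out1)
summary(fd)  # expected values under both scenarios and their first difference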

mgcv: obtain predictive distribution of response given new data (negative binomial example)

In GAM (and GLM, for that matter), we're fitting a conditional likelihood model. So after fitting the model, for a new input x and response y, I should be able to compute the predictive probability or density of a specific value of y given x. I might want to do this to compare the fit of various models on validation data, for example. Is there a convenient way to do this with a fitted GAM in mgcv? Otherwise, how do I figure out the exact form of the density that is used so I can plug in the parameters appropriately?
As a specific example, consider a negative binomial GAM :
## From ?negbin
library(mgcv)
set.seed(3)
n<-400
dat <- gamSim(1,n=n)
g <- exp(dat$f/5)
## negative binomial data...
dat$y <- rnbinom(g,size=3,mu=g)
## fit with theta estimation...
b <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),family=nb(),data=dat)
And now I want to compute the predictive probability of, say, y=7, given x=(.1,.2,.3,.4).
Yes. mgcv performs (empirical) Bayesian estimation, so you can obtain a predictive distribution. For your example, here is how.
# prediction on the link (with standard error)
l <- predict(b, newdata = data.frame(x0 = 0.1, x1 = 0.2, x2 = 0.3, x3 = 0.4), se.fit = TRUE)
# by the asymptotic normality of GLM/GAM estimators, the link-scale prediction is (approximately) normal
# with the `log` link, the fitted mean `mu` is therefore (approximately) log-normal
p.mu <- function (mu) dlnorm(mu, l[[1]], l[[2]])
# joint density of `y` and `mu`
p.y.mu <- function (y, mu) dnbinom(y, size = 3, mu = mu) * p.mu(mu)
# marginal probability (not density as negative binomial is discrete) of `y` (integrating out `mu`)
# I have carefully written this function so it can take vector input
p.y <- function (y) {
  scalar.p.y <- function (scalar.y) integrate(p.y.mu, lower = 0, upper = Inf, y = scalar.y)[[1]]
  sapply(y, scalar.p.y)
}
Now since you want probability of y = 7, conditional on specified new data, use
p.y(7)
# 0.07810065
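As a quick sanity check of the numerical integration (a usage sketch; widen the grid if the fitted mean is large), the probabilities should sum to approximately 1:
## the predictive pmf should (approximately) sum to 1 over its support
sum(p.y(0:100))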
In general, this numerical-integration approach is not easy. For example, if another link function such as sqrt() is used for the negative binomial, the distribution of the fitted mean is not as straightforward (though also not difficult to derive).
Now I offer a sampling-based, or Monte Carlo, approach. This is most similar to a Bayesian procedure.
N <- 1000 # sample size
set.seed(0)
## draw N samples from posterior of `mu`
sample.mu <- b$family$linkinv(rnorm(N, l[[1]], l[[2]]))
## draw N samples from likelihood `Pr(y|mu)`
sample.y <- rnbinom(N, size = 3, mu = sample.mu)
## Monte Carlo estimation for `Pr(y = 7)`
mean(sample.y == 7)
# 0.076
Remark 1
Note that, being empirical Bayes, all of the above is conditional on the estimated smoothing parameters. If you want something closer to "full Bayes", accounting for smoothing-parameter uncertainty as well, set unconditional = TRUE in predict().
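For example, a sketch of the same link-scale prediction with that correction (assuming the mgcv version in use stores the smoothing-parameter-corrected covariance matrix):
## as before, but accounting for smoothing parameter uncertainty
l2 <- predict(b, newdata = data.frame(x0 = 0.1, x1 = 0.2, x2 = 0.3, x3 = 0.4),
              se.fit = TRUE, unconditional = TRUE)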
Remark 2
Perhaps some people assume the solution is as simple as this:
mu <- predict(b, newdata = data.frame(x0 = 0.1, x1 = 0.2, x2 = 0.3, x3 = 0.4), type = "response")
dnbinom(7, size = 3, mu = mu)
Such a result is conditional on the regression coefficients (treated as fixed, without uncertainty), so mu is fixed rather than random. This is not a predictive distribution; a predictive distribution would integrate out the uncertainty of model estimation.

Fitting non-linear Langmuir Isotherm in R

I want to fit isotherm models to the following data in R. The simplest isotherm model is the Langmuir model, Y = Q*b*X/(1 + b*X). My MWE is given below, and it throws an error. I wonder if there is any R package for isotherm models.
X <- c(10, 30, 50, 70, 100, 125)
Y <- c(155, 250, 270, 330, 320, 323)
Data <- data.frame(X, Y)
LangIMfm2 <- nls(formula = Y ~ Q*b*X/(1+b*X), data = Data, start = list(Q = 1, b = 0.5), algorithm = "port")
Error in nls(formula = Y ~ Q * b * X/(1 + b * X), data = Data, start = list(Q = 1, :
Convergence failure: singular convergence (7)
Edited
Some nonlinear models can be transformed to linear models. My understanding is that there may be a one-to-one relationship between the estimates of a nonlinear model and its linearized form, but that their corresponding standard errors are not related to each other. Is this assertion true? Are there any pitfalls in fitting nonlinear models by transforming to linearity?
I am not aware of such packages, and personally I don't think you need one, as the problem can be solved using base R.
nls is sensitive to the starting parameters, so you should begin with a good starting guess. You can easily evaluate Q because it corresponds to the asymptotic limit of the isotherm at x-->Inf, so it is reasonable to begin with Q=323 (which is the last value of Y in your sample data set).
Next, you could do plot(Data) and add a line with an isotherm that corresponds to your starting parameters Q and b and tweak b to come up with a reasonable guess.
The plot below shows your data set (points) and a probe isotherm with Q = 323 and b = 0.5, generated by with(Data,lines(X,323*0.5*X/(1+0.5*X),col='red')) (red line). It seemed a reasonable starting guess to me, and I gave it a try with nls:
LangIMfm2 <- nls(formula = Y ~ Q*b*X/(1+b*X), data = Data, start = list(Q = 300, b = 1), algorithm = "port")
# Nonlinear regression model
# model: Y ~ Q * b * X/(1 + b * X)
# data: Data
# Q b
# 366.2778 0.0721
# residual sum-of-squares: 920.6
#
# Algorithm "port", convergence message: relative convergence (4)
and plotted predicted line to make sure that nls found the right solution:
lines(Data$X,predict(LangIMfm2),col='green')
Having said that, I would suggest to use a more effective strategy, based on the linearization of the model by rewriting the isotherm equation in reciprocal coordinates:
z <- 1/Data
plot(Y~X,z)
abline(lm(Y~X,z))
M <- lm(Y~X,z)
Q <- 1/coef(M)[1]
# 363.2488
b <- coef(M)[1]/coef(M)[2]
# 0.0741759
As you can see, both approaches produce essentially the same result, but the linear model is more robust and doesn't require starting parameters (and, as far as I remember, it is the standard way of analysing isotherms in experimental physical chemistry).
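If the nonlinear fit is still wanted (for example, for standard errors on the original scale), the linearized estimates make convenient starting values; a short sketch reusing the Q and b computed above:
## feed the linearized estimates back into nls as starting values
nls(Y ~ Q*b*X/(1+b*X), data = Data, start = list(Q = unname(Q), b = unname(b)))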
You can use the SSmicmen self-starter function (see Ritz and Streibig, 2008, Nonlinear Regression with R), which ships with base R's stats package and calculates initial parameters from a fit of the linearized form of the Michaelis-Menten (MM) equation. Fortunately, the MM equation has a form that can be adapted to the Langmuir equation, S = Smax*x/(KL + x). I've found the nlshelper and tidyverse packages useful for modeling and exporting the results of the nls command into tables and plots, particularly when modeling sample groups. Here's my code for modeling a single set of sorption data:
library(tidyverse)
library(nlme)
library(nlshelper)
lang.fit <- nls(Y ~ SSmicmen(X,Smax,InvKL), data=Data)
fit.summary <- tidy(lang.fit)
fit.coefs <- coef(lang.fit)
For simplicity, the Langmuir affinity constant is modeled here as 1/KL. Applying this code, I get the same parameter estimates as @Marat did above.
The simple code below allows for wrangling the data in order to create a ggplot object, containing the original points and fitted line (i.e., geom_point would represent the original X and Y data, geom_line would represent the original X plus YHat).
FitY <- tibble(predict(lang.fit))
YHat <- FitY[,1]
Data2 <- cbind(Data, YHat)
If you want to model multiple groups of data (say, based on a "Sample_name" column), then lang.fit would be calculated as below, this time using the nlsList command:
lang.fit <- nlsList(Y ~ SSmicmen(X,Smax,InvKL) | Sample_name, data=Data)
The problem is the starting values. We show two approaches to this as well as an alternative that converges even using the starting values in the question.
1) plinear The right hand side is linear in Q*b so it would be better to absorb b into Q and then we have a parameter that enters linearly so it is easier to solve. Also with the plinear algorithm no starting values are needed for the linear parameter so only the starting value for b need be specified. With plinear the right hand side of the nls formula should be specified as the vector that multiplies the linear parameter. The result of running nls giving fm0 below will be coefficients named b and .lin where Q = .lin / b.
We already have our answer from fm0 but if we want a clean run in terms of b and Q rather than b and .lin we can run the original formula in the question using the starting values implied by the coefficients returned by fm0 as shown.
fm0 <- nls(Y ~ X/(1+b*X), Data, start = list(b = 0.5), alg = "plinear")
st <- with(as.list(coef(fm0)), list(b = b, Q = .lin/b))
fm <- nls(Y ~ Q*b*X/(1+b*X), Data, start = st)
fm
giving
Nonlinear regression model
model: Y ~ Q * b * X/(1 + b * X)
data: Data
b Q
0.0721 366.2778
residual sum-of-squares: 920.6
Number of iterations to convergence: 0
Achieved convergence tolerance: 9.611e-07
We can display the result. The points are the data and the red line is the fitted curve.
plot(Data)
lines(fitted(fm) ~ X, Data, col = "red")
2) mean Alternatively, using a starting value of mean(Data$Y) for Q seems to work well.
nls(Y ~ Q*b*X/(1+b*X), Data, start = list(b = 0.5, Q = mean(Data$Y)))
giving:
Nonlinear regression model
model: Y ~ Q * b * X/(1 + b * X)
data: Data
b Q
0.0721 366.2779
residual sum-of-squares: 920.6
Number of iterations to convergence: 6
Achieved convergence tolerance: 5.818e-06
The question already had a reasonable starting value for b, which we used, but if one were needed, one could set Y to Q*b (so that they cancel) and X to mean(Data$X), then solve for b to give b = 1 - 1/mean(Data$X) as a possible starting value. Although not shown above, using this starting value for b together with mean(Data$Y) as the starting value for Q also resulted in convergence.
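A quick check of that note (a sketch, reusing Data from above):
## derived starting value for b, with mean(Data$Y) as the start for Q
b0 <- 1 - 1/mean(Data$X)
nls(Y ~ Q*b*X/(1+b*X), data = Data, start = list(b = b0, Q = mean(Data$Y)))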
3) optim If we use optim the algorithm converges even with the initial values used in the question. We form the residual sum of squares and minimize that:
rss <- function(p) {
  Q <- p[1]
  b <- p[2]
  with(Data, sum((Y - b*Q*X/(1+b*X))^2))
}
optim(c(1, 0.5), rss)
giving:
$par
[1] 366.27028219   0.07213613

$value
[1] 920.62

$counts
function gradient 
     249       NA 

$convergence
[1] 0

$message
NULL

How to put a complicated equation into an R formula?

We have the diameter of trees as the predictor and tree height as the dependent variable. A number of different equations exist for this kind of data and we try to model some of them and compare the results.
However, we can't figure out how to correctly put one equation into the corresponding R formula format.
The trees data set in R can be used as an example.
data(trees)
df <- trees
df$h <- df$Height * 0.3048 #transform to metric system
df$dbh <- (trees$Girth * 0.3048) / pi #transform tree girth to diameter
First, an example of an equation that seems to work well, h = a + b/dbh + c*dbh^2:
form1 <- h ~ I(dbh ^ -1) + I( dbh ^ 2)
m1 <- lm(form1, data = df)
m1
Call:
lm(formula = form1, data = df)
Coefficients:
(Intercept) I(dbh^-1) I(dbh^2)
27.1147 -5.0553 0.1124
Coefficients a, b and c are estimated, which is what we are interested in.
Now the problematic equation: h = 1.3 + dbh^2 / (a + b*dbh + c*dbh^2), where a, b and c are again the coefficients to estimate.
Trying to fit it like this:
form2 <- h ~ I(dbh ^ 2) / dbh + I(dbh ^ 2) + 1.3
gives an error:
m1 <- lm(form2, data = df)
Error in terms.formula(formula, data = data)
invalid model formula in ExtractVars
I guess this is because / is interpreted as a nested model and not an arithmetic operator?
This doesn't give an error:
form2 <- h ~ I(I(dbh ^ 2) / dbh + I(dbh ^ 2) + 1.3)
m1 <- lm(form2, data = df)
But the result is not the one we want:
m1
Call:
lm(formula = form2, data = df)
Coefficients:
(Intercept) I(I(dbh^2)/dbh + I(dbh^2) + 1.3)
19.3883 0.8727
Only one coefficient is given for the whole term within the outer I(), which seems to be logic.
How can we fit the second equation to our data?
Assuming you are using nls the R formula can use an ordinary R function, H(a, b, c, D), so the formula can be just h ~ H(a, b, c, dbh) and this works:
# use lm to get starting values
lm1 <- lm(1/(h - 1.3) ~ I(1/dbh) + I(1/dbh^2), df)
start <- rev(setNames(coef(lm1), c("c", "b", "a")))
# run nls
H <- function(a, b, c, D) 1.3 + D^2 / (a + b * D + c * D^2)
nls1 <- nls(h ~ H(a, b, c, dbh), df, start = start)
nls1 # display result
Graphing the output:
plot(h ~ dbh, df)
lines(fitted(nls1) ~ dbh, df)
You've got a couple of problems. (1) You're missing parentheses for the denominator of form2 (and R has no way to know that you want to add a constant a in the denominator, or where to put any of the parameters, really), and, much more problematic, (2) your second model isn't linear, so lm won't work.
Fixing (1) is easy:
form2 <- h ~ 1.3 + I(dbh^2) / (a + b * dbh + c * I(dbh^2))
Fixing (2), though there are many ways to estimate parameters for a nonlinear model, the nls (nonlinear least squares) is a good place to start:
m2 <- nls(form2, data = df, start = list(a = 1, b = 1, c = 1))
You need to provide starting guesses for the parameters in nls. I just picked 1's, but you should use better guesses that ballpark what the parameters might be.
edit: fixed, no longer incorrectly using offset ...
An answer that complements @shujaa's:
You can transform your problem from
H = 1.3 + D^2/(a+b*D+c*D^2)
to
1/(H-1.3) = a/D^2+b/D+c
This would normally mess up the assumptions of the model (i.e., if H were normally distributed with constant variance, then 1/(H-1.3) wouldn't be). However, let's try it anyway:
data(trees)
df <- transform(trees,
h=Height * 0.3048, #transform to metric system
dbh=Girth * 0.3048 / pi #transform tree girth to diameter
)
lm(1/(h-1.3) ~ poly(I(1/dbh),2,raw=TRUE),data=df)
## Coefficients:
## (Intercept) poly(I(1/dbh), 2, raw = TRUE)1
## 0.043502 -0.006136
## poly(I(1/dbh), 2, raw = TRUE)2
## 0.010792
These results would normally be good enough to get good starting values for the nls fit. However, you can do better than that via glm, which uses a link function to allow for some forms of non-linearity. Specifically,
(fit2 <- glm(h-1.3 ~ poly(I(1/dbh),2,raw=TRUE),
family=gaussian(link="inverse"),data=df))
## Coefficients:
## (Intercept) poly(I(1/dbh), 2, raw = TRUE)1
## 0.041795 -0.002119
## poly(I(1/dbh), 2, raw = TRUE)2
## 0.008175
##
## Degrees of Freedom: 30 Total (i.e. Null); 28 Residual
## Null Deviance: 113.2
## Residual Deviance: 80.05 AIC: 125.4
##
You can see that the results are approximately the same as the linear fit, but not quite.
pframe <- data.frame(dbh=seq(0.8,2,length=51))
We use predict, but need to correct the prediction to account for the fact that we subtracted a constant from the LHS:
pframe$h <- predict(fit2,newdata=pframe,type="response")+1.3
p2 <- predict(fit2,newdata=pframe,se.fit=TRUE) ## predict on link scale
pframe$h_lwr <- with(p2,1/(fit+1.96*se.fit))+1.3
pframe$h_upr <- with(p2,1/(fit-1.96*se.fit))+1.3
png("dbh_tmp1.png",height=4,width=6,units="in",res=150)
par(las=1,bty="l")
plot(h~dbh,data=df)
with(pframe,lines(dbh,h,col=2))
with(pframe,polygon(c(dbh,rev(dbh)),c(h_lwr,rev(h_upr)),
border=NA,col=adjustcolor("black",alpha=0.3)))
dev.off()
Because we have used the constant on the LHS, this doesn't fit perfectly into ggplot's geom_smooth framework. (It almost, but doesn't quite, fit the framework of using an offset: we could only use an offset if our formula were 1/H - 1.3 = a/D^2 + ..., i.e. if the constant adjustment were on the link (inverse) scale rather than the original scale.)
library("ggplot2")
ggplot(df,aes(dbh,h))+geom_point()+theme_bw()+
geom_line(data=pframe,colour="red")+
geom_ribbon(data=pframe,colour=NA,alpha=0.3,
aes(ymin=h_lwr,ymax=h_upr))
ggsave("dbh_tmp2.png",height=4,width=6)
