Fitting a non-linear function to data: singular gradient issue in R

I am trying to fit data to a non-linear model, but I get a "singular gradient" error when I build the model.
Here is the data:
> astrodata
  temperature intensity
1      277.15       121
2      282.15       131
3      287.15       153
4      292.15       202
5      297.15       311
The function:
y = a * exp(-b * temperature) + c
What I did so far:
> temperature <- astrodata$temperature
> temperature
[1] 277.15 282.15 287.15 292.15 297.15
> intensity <- astrodata$intensity
> c.0 <- min(intensity) * 0.5
> model.0 <- lm(log(intensity - c.0) ~ temperature, data = astrodata)
> start <- list(a = exp(coef(model.0)[1]), b = coef(model.0)[2], c = c.0)
>
> model <- nls(intensity ~ a * exp(-b * temperature) + c, data = astrodata, start = start)
Error in nls(intensity ~ a * exp(-b * temperature) + c, data = astrodata,  :
  singular gradient
Does anybody have an idea how to solve this?

The model is linear in a and c and nonlinear only in b, which suggests trying the "plinear" algorithm. It has the advantage that only the non-linear parameters require starting values.
Note that the formula specification for that algorithm is different: the right-hand side is a matrix with one column per linear parameter.
model <- nls(intensity ~ cbind(exp(-b * temperature), 1), data = astrodata,
             start = start["b"], algorithm = "plinear")
giving:
> model
Nonlinear regression model
  model: intensity ~ cbind(exp(-b * temperature), 1)
   data: astrodata
         b      .lin1      .lin2
-1.598e-01  4.728e-19  1.129e+02
residual sum-of-squares: 0.003853

Number of iterations to convergence: 5
Achieved convergence tolerance: 2.594e-07
Also:
plot(intensity ~ temperature, astrodata)
lines(fitted(model) ~ temperature, astrodata)
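If you want the estimates back in the original a, b, c parametrization, the two linear coefficients map directly onto a and c; a minimal sketch using the model object fitted above:

co <- coef(model)
# in cbind(exp(-b * temperature), 1), .lin1 multiplies the exponential term
# and .lin2 is the additive constant, i.e. a = .lin1 and c = .lin2
c(a = co[[".lin1"]], b = co[["b"]], c = co[[".lin2"]])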
Note: based on the comment below, you may not really need an nls model at all; it may be good enough to just use geom_line:
p <- ggplot(astrodata, aes(temperature, intensity)) + geom_point()
p + geom_line()
or splines:
p + geom_line(data = data.frame(spline(temperature, intensity)), aes(x, y))

Your data isn't varied enough.
nls works by least squares: it measures the distance between the model and the data points and minimizes it. If there is no distance, nls doesn't work. Your model fits the data exactly, which is called "zero-residual" data. Hence
  singular gradient matrix at initial parameter estimates.
It's an overly complicated error message that simply means "there is no error to measure."
You only have 5 (x, y) pairs, so this error is almost guaranteed when doing non-linear analysis with so little data. Use different data or more data.

One possibility is to double each data point, adding very tiny variations to the doubled data, like so:

   temperature  intensity
1        277.15        121
2        282.15        131
3        287.15        153
4        292.15        202
5        297.15        311
11 277.15000001 121.000001
12 282.15000001 131.000001
13 287.15000001 153.000001
14 292.15000001 202.000001
15 297.15000001 311.000001
In the original data set each point effectively has a weight of 1.0, and in the "doubled" data set each point effectively has a weight of 2.0, so you get the same fitted parameter values but without the error.
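A sketch of how such a duplicated set could be built in R (assuming the astrodata data frame and start list from the question; the size of the perturbation is an arbitrary choice):

eps <- 1e-8
astrodata2 <- rbind(astrodata, astrodata + eps)   # perturbed copy of each row
model2 <- nls(intensity ~ a * exp(-b * temperature) + c,
              data = astrodata2, start = start)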

Related

Fitting non-linear Langmuir Isotherm in R

I want to fit isotherm models to the following data in R. The simplest isotherm model is the Langmuir model, given at the bottom of the page. My MWE is given below, which throws an error. I wonder if there is any R package for isotherm models.
X <- c(10, 30, 50, 70, 100, 125)
Y <- c(155, 250, 270, 330, 320, 323)
Data <- data.frame(X, Y)
LangIMfm2 <- nls(formula = Y ~ Q*b*X/(1+b*X), data = Data, start = list(Q = 1, b = 0.5), algorithm = "port")

Error in nls(formula = Y ~ Q * b * X/(1 + b * X), data = Data, start = list(Q = 1,  :
  Convergence failure: singular convergence (7)
Edited
Some nonlinear models can be transformed into linear models. My understanding is that there might be a one-to-one relationship between the estimates of a nonlinear model and those of its linear form, but that their corresponding standard errors are not related to each other. Is this assertion true? Are there any pitfalls in fitting nonlinear models by transforming them to linearity?
I am not aware of such packages, and personally I don't think you need one, as the problem can be solved using base R.
nls is sensitive to the starting parameters, so you should begin with a good starting guess. You can easily evaluate Q because it corresponds to the asymptotic limit of the isotherm at x -> Inf, so it is reasonable to begin with Q = 323 (the last value of Y in your sample data set).
Next, you could do plot(Data) and add a line with an isotherm that corresponds to your starting parameters Q and b, then tweak b until you come up with a reasonable guess; see the snippet below.
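For instance, a minimal version of that probe plot, using the guesses discussed next (Q = 323, b = 0.5):

plot(Data)                                                 # measured points
with(Data, lines(X, 323*0.5*X/(1 + 0.5*X), col = 'red'))   # probe isotherm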
The resulting plot shows your data set (points) and a probe isotherm with Q = 323 and b = 0.5 (red line). It seemed a reasonable starting guess to me, and I gave it a try with nls:
LangIMfm2 <- nls(formula = Y ~ Q*b*X/(1+b*X), data = Data, start = list(Q = 300, b = 1), algorithm = "port")

# Nonlinear regression model
#   model: Y ~ Q * b * X/(1 + b * X)
#    data: Data
#        Q        b
# 366.2778   0.0721
# residual sum-of-squares: 920.6
#
# Algorithm "port", convergence message: relative convergence (4)
and plotted the predicted line to make sure that nls found the right solution:
lines(Data$X,predict(LangIMfm2),col='green')
Having said that, I would suggest using a more effective strategy, based on linearizing the model by rewriting the isotherm equation in reciprocal coordinates, 1/Y = 1/Q + (1/(Q*b)) * (1/X), so that the intercept of a straight-line fit estimates 1/Q and the slope estimates 1/(Q*b):
z <- 1/Data                   # reciprocal coordinates: 1/X and 1/Y
plot(Y ~ X, z)
abline(lm(Y ~ X, z))
M <- lm(Y ~ X, z)
Q <- 1/coef(M)[1]             # intercept is 1/Q
# 363.2488
b <- coef(M)[1]/coef(M)[2]    # slope is 1/(Q*b), so b = intercept/slope
# 0.0741759
As you can see, both approaches produce essentially the same result, but the linear model is more robust and doesn't require starting parameters (and, as far as I remember, it is the standard way of doing isotherm analysis in experimental physical chemistry).
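A quick side-by-side comparison of the two sets of estimates (a sketch, assuming the LangIMfm2 and M objects fitted above):

rbind(nls        = coef(LangIMfm2),
      linearized = c(Q = 1/coef(M)[[1]], b = coef(M)[[1]]/coef(M)[[2]]))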
You can use the SSmicmen self-starting function (see Ritz and Streibig, 2008, Nonlinear Regression with R; it ships with R's stats package), which calculates initial parameters from a fit of the linearized form of the Michaelis-Menten (MM) equation. Fortunately, the MM equation possesses a form that can be adapted to the Langmuir equation, S = Smax*x/(KL + x). I've found the nlshelper and tidyverse packages useful for modeling and for exporting the results of the nls call into tables and plots, particularly when modeling sample groups. Here's my code for modeling a single set of sorption data:
library(tidyverse)
library(nlme)
library(nlshelper)

lang.fit <- nls(Y ~ SSmicmen(X, Smax, InvKL), data = Data)
fit.summary <- tidy(lang.fit)   # broom-style coefficient table
fit.coefs <- coef(lang.fit)
For simplicity, the Langmuir affinity constant is modeled here as 1/KL. Applying this code, I get the same parameter estimates as @Marat above.
The simple code below wrangles the data into a form for a ggplot object containing the original points and the fitted line (i.e., geom_point would represent the original X and Y data, geom_line the original X plus YHat):
FitY <- tibble(YHat = predict(lang.fit))   # fitted values
Data2 <- cbind(Data, FitY)
If you want to model multiple groups of data (say, based on a "Sample_name" column), then lang.fit would be calculated as below, this time using the nlsList command (a self-contained sketch follows it):
lang.fit <- nlsList(Y ~ SSmicmen(X,Smax,InvKL) | Sample_name, data=Data)
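A self-contained sketch of that grouped workflow; the Sample_name column and the second sample's values are invented purely for illustration:

GroupedData <- data.frame(
  X = rep(X, 2),
  Y = c(Y, 0.9 * Y),                        # hypothetical second sample
  Sample_name = rep(c("S1", "S2"), each = length(X))
)
grouped.fit <- nlsList(Y ~ SSmicmen(X, Smax, InvKL) | Sample_name,
                       data = GroupedData)
coef(grouped.fit)                           # one row of estimates per sample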
The problem is the starting values. We show two approaches to finding them, as well as an alternative that converges even using the starting values in the question.

1) plinear. The right-hand side is linear in Q*b, so it is better to absorb b into Q; then that parameter enters linearly and is easier to solve for. Also, with the plinear algorithm no starting values are needed for the linear parameter, so only the starting value for b need be specified. With plinear, the right-hand side of the nls formula is specified as the vector that multiplies the linear parameter. The result of running nls, giving fm0 below, will have coefficients named b and .lin, where Q = .lin / b.

We already have our answer from fm0, but if we want a clean run in terms of b and Q rather than b and .lin, we can run the original formula in the question using the starting values implied by the coefficients returned by fm0, as shown:
fm0 <- nls(Y ~ X/(1+b*X), Data, start = list(b = 0.5), alg = "plinear")
st <- with(as.list(coef(fm0)), list(b = b, Q = .lin/b))
fm <- nls(Y ~ Q*b*X/(1+b*X), Data, start = st)
fm
giving
Nonlinear regression model
  model: Y ~ Q * b * X/(1 + b * X)
   data: Data
       b        Q
  0.0721 366.2778
residual sum-of-squares: 920.6

Number of iterations to convergence: 0
Achieved convergence tolerance: 9.611e-07
We can display the result. The points are the data and the red line is the fitted curve.
plot(Data)
lines(fitted(fm) ~ X, Data, col = "red")
2) mean. Alternatively, using a starting value of mean(Data$Y) for Q seems to work well:
nls(Y ~ Q*b*X/(1+b*X), Data, start = list(b = 0.5, Q = mean(Data$Y)))
giving:
Nonlinear regression model
  model: Y ~ Q * b * X/(1 + b * X)
   data: Data
       b        Q
  0.0721 366.2779
residual sum-of-squares: 920.6

Number of iterations to convergence: 6
Achieved convergence tolerance: 5.818e-06
The question already had a reasonable starting value for b, which we used; but if one were needed, we could set Y to Q*b so that they cancel, set X to mean(Data$X), and solve for b, giving b = 1 - 1/mean(Data$X) as a possible starting value. Using this starting value for b with mean(Data$Y) as the starting value for Q also results in convergence, as the sketch below shows.
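A sketch of that derived starting value in use:

b0 <- 1 - 1/mean(Data$X)   # from setting Y = Q*b and X = mean(Data$X)
nls(Y ~ Q*b*X/(1+b*X), Data, start = list(b = b0, Q = mean(Data$Y)))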
3) optim. If we use optim, the algorithm converges even with the initial values used in the question. We form the residual sum of squares and minimize it:
rss <- function(p) {
  Q <- p[1]
  b <- p[2]
  with(Data, sum((Y - b*Q*X/(1+b*X))^2))
}
optim(c(1, 0.5), rss)
giving:
$par
[1] 366.27028219   0.07213613

$value
[1] 920.62

$counts
function gradient
     249       NA

$convergence
[1] 0

$message
NULL
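If standard errors are wanted afterwards, one possibility (a sketch, not part of the original answer) is to hand the optim solution to nls as starting values; starting from a near-optimal point, it should converge immediately:

o <- optim(c(1, 0.5), rss)
fm2 <- nls(Y ~ Q*b*X/(1+b*X), Data,
           start = list(Q = o$par[1], b = o$par[2]))
summary(fm2)   # estimates with standard errors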

How to put a complicated equation into an R formula?

We have the diameter of trees as the predictor and tree height as the dependent variable. A number of different equations exist for this kind of data, and we are trying to model some of them and compare the results.
However, we can't figure out how to correctly put one of the equations into the corresponding R formula format.
The trees data set in R can be used as an example.
data(trees)
df <- trees
df$h <- df$Height * 0.3048 #transform to metric system
df$dbh <- (trees$Girth * 0.3048) / pi #transform tree girth to diameter
First, an example of an equation that seems to work well, h = a + b * dbh^-1 + c * dbh^2:
form1 <- h ~ I(dbh^-1) + I(dbh^2)
m1 <- lm(form1, data = df)
m1
Call:
lm(formula = form1, data = df)

Coefficients:
(Intercept)    I(dbh^-1)     I(dbh^2)
    27.1147      -5.0553       0.1124
Coefficients a, b and c are estimated, which is what we are interested in.
Now the problematic equation, h = 1.3 + dbh^2 / (a + b * dbh + c * dbh^2).
Trying to fit it like this:
form2 <- h ~ I(dbh ^ 2) / dbh + I(dbh ^ 2) + 1.3
gives an error:
m1 <- lm(form2, data = df)
Error in terms.formula(formula, data = data) :
  invalid model formula in ExtractVars
I guess this is because / is interpreted as formula nesting rather than as an arithmetic operator?
This doesn't give an error:
form2 <- h ~ I(I(dbh ^ 2) / dbh + I(dbh ^ 2) + 1.3)
m1 <- lm(form2, data = df)
But the result is not the one we want:
m1
Call:
lm(formula = form2, data = df)

Coefficients:
                     (Intercept)  I(I(dbh^2)/dbh + I(dbh^2) + 1.3)
                         19.3883                            0.8727
Only one coefficient is given for the whole term within the outer I(), which seems logical.
How can we fit the second equation to our data?
Assuming you are using nls, the R formula can use an ordinary R function, H(a, b, c, D), so the formula can be just h ~ H(a, b, c, dbh). This works:
# use lm on the linearized form to get starting values:
# 1/(h - 1.3) = c + b*(1/dbh) + a*(1/dbh^2)
lm1 <- lm(1/(h - 1.3) ~ I(1/dbh) + I(1/dbh^2), df)
start <- rev(setNames(coef(lm1), c("c", "b", "a")))

# run nls
H <- function(a, b, c, D) 1.3 + D^2 / (a + b * D + c * D^2)
nls1 <- nls(h ~ H(a, b, c, dbh), df, start = start)

nls1  # display result
Graphing the output:
plot(h ~ dbh, df)
lines(fitted(nls1) ~ dbh, df)
You've got a couple of problems. (1) You're missing parentheses around the denominator of form2 (and R has no way to know that you want to add a constant a in the denominator, or where to put any of the parameters, really), and, much more problematic, (2) your second model isn't linear, so lm won't work.
Fixing (1) is easy:
form2 <- h ~ 1.3 + I(dbh^2) / (a + b * dbh + c * I(dbh^2))
Fixing (2) is harder. Though there are many ways to estimate parameters for a nonlinear model, nls (nonlinear least squares) is a good place to start:
m2 <- nls(form2, data = df, start = list(a = 1, b = 1, c = 1))
You need to provide starting guesses for the parameters in nls. I just picked 1's, but you should use better guesses that ballpark what the parameters might be.
edit: fixed, no longer incorrectly using offset ...
An answer that complements @shujaa's:
You can transform your problem from

H = 1.3 + D^2 / (a + b*D + c*D^2)

to

1/(H - 1.3) = a/D^2 + b/D + c
This would normally mess up the assumptions of the model (i.e., if H were normally distributed with constant variance, then 1/(H - 1.3) wouldn't be). However, let's try it anyway:
data(trees)
df <- transform(trees,
  h = Height * 0.3048,        # transform to metric system
  dbh = Girth * 0.3048 / pi   # transform tree girth to diameter
)
lm(1/(h-1.3) ~ poly(I(1/dbh), 2, raw=TRUE), data=df)
## Coefficients:
##                    (Intercept)  poly(I(1/dbh), 2, raw = TRUE)1
##                       0.043502                       -0.006136
## poly(I(1/dbh), 2, raw = TRUE)2
##                       0.010792
These results would normally be good enough to provide starting values for the nls fit. However, you can do better than that via glm, which uses a link function to allow for some forms of non-linearity. Specifically,
(fit2 <- glm(h-1.3 ~ poly(I(1/dbh), 2, raw=TRUE),
             family=gaussian(link="inverse"), data=df))
## Coefficients:
##                    (Intercept)  poly(I(1/dbh), 2, raw = TRUE)1
##                       0.041795                       -0.002119
## poly(I(1/dbh), 2, raw = TRUE)2
##                       0.008175
##
## Degrees of Freedom: 30 Total (i.e. Null);  28 Residual
## Null Deviance:     113.2
## Residual Deviance: 80.05   AIC: 125.4
You can see that the results are approximately the same as the linear fit, but not quite.
pframe <- data.frame(dbh=seq(0.8,2,length=51))
We use predict, but need to correct the prediction to account for the fact that we subtracted a constant from the LHS:
pframe$h <- predict(fit2,newdata=pframe,type="response")+1.3
p2 <- predict(fit2,newdata=pframe,se.fit=TRUE) ## predict on link scale
pframe$h_lwr <- with(p2,1/(fit+1.96*se.fit))+1.3
pframe$h_upr <- with(p2,1/(fit-1.96*se.fit))+1.3
png("dbh_tmp1.png",height=4,width=6,units="in",res=150)
par(las=1,bty="l")
plot(h~dbh,data=df)
with(pframe,lines(dbh,h,col=2))
with(pframe,polygon(c(dbh,rev(dbh)),c(h_lwr,rev(h_upr)),
border=NA,col=adjustcolor("black",alpha=0.3)))
dev.off()
Because we have used the constant on the LHS, this doesn't fit perfectly into ggplot's geom_smooth framework. (It almost, but doesn't quite, fit the framework of using an offset: we could only use an offset if our formula were 1/H - 1.3 = a/D^2 + ..., i.e. if the constant adjustment were on the link (inverse) scale rather than the original scale.)
library("ggplot2")
ggplot(df,aes(dbh,h))+geom_point()+theme_bw()+
geom_line(data=pframe,colour="red")+
geom_ribbon(data=pframe,colour=NA,alpha=0.3,
aes(ymin=h_lwr,ymax=h_upr))
ggsave("dbh_tmp2.png",height=4,width=6)

Fitting a function in R

I have a few datapoints (x and y) that seem to have a logarithmic relationship.
> mydata
    x   y
1   0 123
2   2 116
3   4 113
4  15 100
5  48  87
6  75  84
7 122  77
> qplot(x, y, data=mydata, geom="line")
Now I would like to find an underlying function that fits the graph and allows me to infer other data points (e.g., at x = 3 or x = 82). I read about lm and nls, but I'm not really getting anywhere.
At first, I created a function that I thought resembled the plot the most:
f <- function(x, a, b) {
  a * exp(b * -x)
}

x <- seq(0, 100)   # note: seq(0:100) would give 1:101, not 0:100
y <- f(x, 1, 1)
qplot(x, y, geom="line")
Afterwards, I tried to generate a fitting model using nls:
> fit <- nls(y ~ f(x, a, b), data=mydata, start=list(a=1, b=1))
Error in numericDeriv(form[[3]], names(ind), env) :
  Missing value or an Infinity produced when evaluating the model
Can someone point me in the right direction on what to do from here?
Follow up
After reading your comments and googling around a bit further, I adjusted the starting parameters for a, b and c (extending f to take a third, offset parameter, presumably of the form a * exp(b * -x) + c), and then suddenly the model converged.
fit <- nls(y ~ f(x, a, b, c), data=data.frame(mydata), start=list(a=1, b=30, c=-0.3))
x <- seq(0, 120)
fitted.data <- data.frame(x=x, y=predict(fit, list(x=x)))
ggplot(mydata, aes(x, y)) + geom_point(color="red", alpha=.5) + geom_line(alpha=.5) + geom_line(data=fitted.data)
Maybe using a cubic specification for your model and estimating via lm would give you a good fit.
# Importing your data
dataset <- read.table(text='
x y
1 0 123
2 2 116
3 4 113
4 15 100
5 48 87
6 75 84
7 122 77', header=T)
# I think one possible specification would be a cubic linear model
y.hat <- predict(lm(y~x+I(x^2)+I(x^3), data=dataset)) # estimating the model and obtaining the fitted values from the model
qplot(x, y, data=dataset, geom="line") # your plot black lines
last_plot() + geom_line(aes(x=x, y=y.hat), col=2) # the fitted values red lines
# It fits well.
Try taking the log of your response variable and then using lm to fit a linear model:
fit <- lm(log(y) ~ x, data=mydata)
The adjusted R-squared is 0.8486, which at face value isn't bad. You can look at the fit using plot, for example:
plot(fit, which=2)
But perhaps, it's not such a good fit after all:
last_plot() + geom_line(aes(x=x, y=exp(fit$fitted.values)))
Check this document out: http://cran.r-project.org/doc/contrib/Ricci-distributions-en.pdf
In brief, first you need to decide on the model to fit onto your data (e.g., exponential) and then estimate its parameters.
Here are some widely used distributions:
http://www.itl.nist.gov/div898/handbook/eda/section3/eda366.htm

How to find good start values for nls function?

I don't understand why I can't get an nls fit for these data.
I have tried a lot of different start values and I always get the same error.
Here is what I have been doing:
expFct2 <- function(x, a, b, c) {
  a * (1 - exp(-x/b)) + c
}
vec_x <- c(77.87,87.76,68.6,66.29)
vec_y <- c(1,1,0.8,0.6)
dt <- data.frame(vec_x=vec_x,vec_y=vec_y)
ggplot(data = dt, aes(x = vec_x, y = vec_y)) + geom_point() +
  geom_smooth(data=dt, method="nls", formula=y~expFct2(x, a, b, c),
              se=F, start=list(a=1, b=75, c=-5))
I always get this error:
Error in method(formula, data = data, weights = weight, ...) :
  singular gradient
Fitting a three-parameter nonlinear model to four data points is going to be moderately challenging in any case, although in this case the data are well behaved. Point #1 is that your starting value for your c parameter (-5) was way off. Drawing a picture of the curve corresponding to your starting parameters (see below) would help you understand this (so would recognizing that the curve you get will range from c at its minimum to c+a at its maximum, while the range of your data is from 0.6 to 1 ...).
However, even with a better starting guess I found myself fussing with control parameters (i.e. control=nls.control(maxiter=200)), followed by more warnings -- nls is not known for its robustness. So I tried the SSasympOff model, which implements a self-starting version of the curve you want to fit.
start1 <- list(a=1, b=75, c=-5)
start2 <- list(a=0.5, b=75, c=0.5)  ## a better guess
pfun <- function(params) {
  data.frame(vec_x=60:90,
             vec_y=do.call(expFct2, c(list(x=60:90), params)))
}
library(ggplot2)
ggplot(data = dt, aes(x = vec_x, y = vec_y)) + geom_point() +
  geom_line(data=pfun(start1)) +
  geom_line(data=pfun(start2), colour="red") +
  geom_smooth(data=dt, method="nls", formula=y~SSasympOff(x, a, b, c),
              se=FALSE)
My advice in general is that it's easier to figure out what's going on, and to fix problems, if you fit nls outside of geom_smooth and construct the curve you want to add using predict.nls.
More generally, the way to get good starting parameters is to understand the geometry of the function you are fitting and which parameters control which aspects of the curve. As mentioned above, c is the minimum value of the shifted saturating-exponential curve, a is its range, and b is a scale parameter (you can see that when x = b, the curve has risen 1 - exp(-1), or approximately 2/3 of the way, from the minimum toward the maximum). Either a bit of algebra and calculus (i.e. taking limits), or playing around with the curve() function, is a good way to gather this information.
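For instance, a quick curve() exploration (the parameter values are arbitrary, chosen only to show what each one does):

expFct2 <- function(x, a, b, c) a * (1 - exp(-x/b)) + c
curve(expFct2(x, a = 0.5, b = 75, c = 0.5), from = 0, to = 300, ylab = "y")
curve(expFct2(x, a = 0.5, b = 25, c = 0.5), add = TRUE, lty = 2)   # smaller b: faster rise
curve(expFct2(x, a = 1.0, b = 75, c = 0.5), add = TRUE, lty = 3)   # larger a: wider range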
This can be written with two linear parameters (.lin1 and .lin2) and one nonlinear parameter (b) like this:

  a*(1-exp(-x/b)) + c
= (a+c) - a * exp(-x/b)
= .lin1 + .lin2 * exp(-x/b)

where .lin1 = a+c and .lin2 = -a (so a = -.lin2 and c = .lin1 + .lin2). This lets us use "plinear", which only requires the specification of a starting value for the single nonlinear parameter (eliminating the problem of how to set the starting values for the other parameters) and which converges despite the starting value of b=75 being far from that of the solution:
nls(y ~ cbind(1, exp(-x/b)), start = list(b = 75), alg = "plinear")
Here is the result of a run from which we can see from the size of .lin2 that the problem is badly scaled:

> x <- c(77.87, 87.76, 68.6, 66.29)
> y <- c(1, 1, 0.8, 0.6)
> nls(y ~ cbind(1, exp(-x/b)), start = list(b = 75), alg = "plinear")
Nonlinear regression model
  model: y ~ cbind(1, exp(-x/b))
   data: parent.frame()
         b      .lin1      .lin2
 3.351e+00  1.006e+00 -1.589e+08
residual sum-of-squares: 7.909e-05

Number of iterations to convergence: 9
Achieved convergence tolerance: 9.887e-07

> R.version.string
[1] "R version 2.14.2 Patched (2012-02-29 r58660)"
> win.version()
[1] "Windows Vista (build 6002) Service Pack 2"
EDIT: added sample run and comment on scaling.
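Using the relations above (a = -.lin2, c = .lin1 + .lin2), the original parametrization can be recovered from the plinear coefficients:

fm <- nls(y ~ cbind(1, exp(-x/b)), start = list(b = 75), alg = "plinear")
with(as.list(coef(fm)), c(a = -.lin2, b = b, c = .lin1 + .lin2))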
I struggle to find an interpretation of your parameters:
a sets the scale, b the speed of convergence, and a+c the limit,
but c by itself does not seem to mean much.
After reparametrizing your function, the problem disappears.
f <- function(x, a, b, c) a + c * exp(-x/abs(b))
nls(y ~ f(x, a, b, c), data=dt, start=list(a=1, b=75, c=-5), trace=TRUE)
However, the fitted value of c looks very, very high: that is probably why the model initially failed to converge.
Nonlinear regression model
  model: y ~ f(x, a, b, c)
   data: dt
         a          b          c
 1.006e+00  3.351e+00 -1.589e+08
residual sum-of-squares: 7.909e-05

Number of iterations to convergence: 9
Achieved convergence tolerance: 2.232e-06
Here is another, more reasonable parametrization of the same function:
g <- function(x, a, b, c) a * (1 - exp(-(x-c)/abs(b)))
nls(y ~ g(x, a, b, c), data=dt, start=list(a=1, b=75, c=-5), trace=TRUE)
Nonlinear regression model
  model: y ~ g(x, a, b, c)
   data: dt
     a      b      c
 1.006  3.351 63.257
residual sum-of-squares: 7.909e-05

Number of iterations to convergence: 10
Achieved convergence tolerance: 1.782e-06

Need to optimize a specific function in order to have the best fitted curve

I'm looking for the best fitted curve matching a specific formula. The parameters A and B need to be optimized.
# X and Y are my data; Y depends on X
X <- c(73, 33, 201, 90, 1513, 1312, 1044, 929, 836, 657, 104, 22)
Y <- c(2.89, 6.11, 3.57, 4.03, 0.16, 0.00, 1.41, 0.00, 2.13, 6.16, 2.85, 5.08)

# plotting X and Y gives us
plot(X, Y, main="Y function of X", pch=6, xlab="X", ylab="Y", col='black')
# Type of curve that I need
curve(0.07*90*(1-(x/(x+exp(5-0.001*x)))), add = TRUE, col="blue", lwd=3, lty=2)

# I therefore want to optimize the parameters A and B so that the curve of this
# form best fits my data
curve(0.07*90*(1-(x/(x+exp(B+A*x)))), add = TRUE, col="blue", lwd=3, lty=2)
# Please help
You can use nls() to estimate the parameters by non-linear least squares. For example,
nls(Y~0.07*90*(1-(X/(X+exp(B+A*X)))), data.frame(X,Y), start=list(A=.001,B=5))
returns
Nonlinear regression model
  model: Y ~ 0.07 * 90 * (1 - (X/(X + exp(B + A * X))))
   data: data.frame(X, Y)
        A         B
0.0004154 5.1653460
residual sum-of-squares: 30.27

Number of iterations to convergence: 8
Achieved convergence tolerance: 8.472e-06
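To overlay the fitted curve on the data, a sketch using the fitted coefficients:

fit <- nls(Y ~ 0.07*90*(1-(X/(X+exp(B+A*X)))), data.frame(X, Y),
           start = list(A = 0.001, B = 5))
co <- coef(fit)
plot(X, Y, main = "Y function of X", pch = 6, xlab = "X", ylab = "Y")
curve(0.07*90*(1 - (x/(x + exp(co[["B"]] + co[["A"]]*x)))),
      add = TRUE, col = "blue", lwd = 3, lty = 2)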
