R curve fitting (multiple exponential) with NLS2 and NLS

I have some difficulties getting a specific curve to fit in R, while it works perfectly fine in a commercial curve-fitting program.
The formula that the data should fit to is:
y(t) = A * exp(-a*(t)) + B * exp(-b*(t)) - (A+B) * exp(-c*(t))
So for this I want to use the nonlinear regression (nls) built into R. I've been at this on and off for a day now and just can't get it to work. The problem lies entirely with the initial values, so I'm using nls2 to find them by brute force.
library(nls2)

y <- c(0, 0.01377, 0.01400875, 0.0119175, 0.00759375, 0.00512125, 0.004175,
       0.00355375, 0.00308875, 0.0028925, 0.00266375)
t <- c(0, 3, 6, 12, 24, 48, 72, 96, 120, 144, 168)
df <- data.frame(t, y)
plot(t, y)

# Our model:
fo <- y ~ f1*exp(-k1*t) + f2*exp(-k2*t) - (f1+f2)*exp(-k3*t)

# Define the outer boundaries to search for initial values
grd <- data.frame(f1 = c(0, 1),
                  f2 = c(0, 1),
                  k1 = c(0, 2),
                  k2 = c(0, 2),
                  k3 = c(0, 0.7))

# Do the brute-force search
fit <- nls2(fo,
            data = df,
            start = grd,
            algorithm = "brute-force",
            control = list(maxiter = 20000))
fit
coef(fit)
final <- nls(fo, data = df, start = as.list(coef(fit)))
The values it should give are:
f1 0.013866
f2 0.005364
k1 0.063641
k2 0.004297
k3 0.615125
However, even with quite high iteration counts, I'm just getting nonsense back. I'm clearly doing something wrong, but I cannot see it.
Edit based on @Roland's comment:
The method you propose, approximating k1-k3 with a partially linear fit first, seems to work on some datasets but not on all of them. Below is the code I'm using now based on your input.
# Oral example:
y <- c(0, 0.0045375, 0.0066325, 0.00511375, 0.00395875, 0.003265, 0.00276,
       0.002495, 0.00231875)
t <- c(0, 12, 24, 48, 72, 96, 120, 144, 168)

# IV example:
# y <- c(0, 0.01377, 0.01400875, 0.0119175, 0.00759375, 0.00512125, 0.004175,
#        0.00355375, 0.00308875, 0.0028925, 0.00266375)
# t <- c(0, 3, 6, 12, 24, 48, 72, 96, 120, 144, 168)

DF <- data.frame(y, t)

# Partially linear fit to estimate the rate constants
fit1 <- nls(y ~ cbind(exp(-k1*t), exp(-k2*t), exp(-k3*t)),
            algorithm = "plinear", data = DF,
            start = list(k1 = 0.002, k2 = 0.02, k3 = 0.2))
k1_predict <- summary(fit1)$coefficients[1, 1]
k2_predict <- summary(fit1)$coefficients[2, 1]
k3_predict <- summary(fit1)$coefficients[3, 1]

# Full model, using those rate constants as starting values
fo <- y ~ f1*exp(-k1*t) + f2*exp(-k2*t) - (f1+f2)*exp(-k3*t)
fit2 <- nls(fo, data = DF,
            start = list(k1 = k1_predict, k2 = k2_predict, k3 = k3_predict,
                         f1 = 0.01, f2 = 0.01))
summary(fit2)
plot(t, y)
curve(predict(fit2, newdata = data.frame(t = x)), 0, 200, add = TRUE, col = "red")
[plot: oral example fit]
Edit based on @G. Grothendieck's answer:
Like Roland's suggestion, yours also fits some datasets well but struggles with others. The code below is based on your input and exits with a singular-gradient error.
# Oral example:
y <- c(0, 0.0045375, 0.0066325, 0.00511375, 0.00395875, 0.003265, 0.00276,
       0.002495, 0.00231875)
t <- c(0, 12, 24, 48, 72, 96, 120, 144, 168)

# IV example:
# y <- c(0, 0.01377, 0.01400875, 0.0119175, 0.00759375, 0.00512125, 0.004175,
#        0.00355375, 0.00308875, 0.0028925, 0.00266375)
# t <- c(0, 3, 6, 12, 24, 48, 72, 96, 120, 144, 168)

df <- data.frame(y, t)
grd <- data.frame(f1 = c(0, 1),
                  f2 = c(0, 1),
                  k1 = c(0, 2),
                  k2 = c(0, 2),
                  k3 = c(0, 0.7))
set.seed(123)
fit <- nls2(fo,
            data = df,
            start = grd,
            algorithm = "random",
            control = nls.control(maxiter = 100000))
fit2 <- nls(fo, df, start = coef(fit), alg = "port", lower = 0)
plot(t, y)
curve(predict(fit2, newdata = data.frame(t = x)), 0, 200, add = TRUE, col = "red")

I would first do a partially linear fit with no constraints on the linear parameters to get good starting values for the exponential parameters and some idea regarding the magnitude of the linear parameters:
DF <- data.frame(y, t)
fit1 <- nls(y ~ cbind(exp(-k1*t), exp(-k2*t), exp(-k3*t)),
            algorithm = "plinear", data = DF,
            start = list(k1 = 0.002, k2 = 0.02, k3 = 0.2))
summary(fit1)
#Formula: y ~ cbind(exp(-k1 * t), exp(-k2 * t), exp(-k3 * t))
#
#Parameters:
#          Estimate Std. Error t value Pr(>|t|)
#k1       0.0043458  0.0010397   4.180 0.008657 **
#k2       0.0639379  0.0087141   7.337 0.000738 ***
#k3       0.6077646  0.0632586   9.608 0.000207 ***
#.lin1    0.0053968  0.0006637   8.132 0.000457 ***
#.lin2    0.0139231  0.0008694  16.014 1.73e-05 ***
#.lin3   -0.0193145  0.0010631 -18.168 9.29e-06 ***
Then you can fit your actual model:
fit2 <- nls(fo, data = DF,
            start = list(k1 = 0.06, k2 = 0.004, k3 = 0.6, f1 = 0.01, f2 = 0.01))
summary(fit2)
#Formula: y ~ f1 * exp(-k1 * t) + f2 * exp(-k2 * t) - (f1 + f2) * exp(-k3 * t)
#
#Parameters:
#     Estimate Std. Error t value Pr(>|t|)
#k1  0.0639344  0.0079538   8.038 0.000198 ***
#k2  0.0043456  0.0009492   4.578 0.003778 **
#k3  0.6078929  0.0575616  10.561 4.24e-05 ***
#f1  0.0139226  0.0007934  17.548 2.20e-06 ***
#f2  0.0053967  0.0006059   8.907 0.000112 ***
curve(predict(fit2, newdata = data.frame(t = x)), 0, 200, add = TRUE, col = "red")
Note that this model can easily be re-parameterized by switching the exponential terms (i.e., the order of the kn starting values), which could result in different estimates for f1 and f2, but basically the same fit.
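To see that symmetry concretely, here is a small sketch (reusing fo, DF and fit2 from above, not part of the original answer): refitting with the k1/k2 starting values swapped should converge to the permuted solution, with essentially the same fitted curve.
# Hedged sketch: swap the starting values of k1 and k2 and compare
fit2b <- nls(fo, data = DF,
             start = list(k1 = 0.004, k2 = 0.06, k3 = 0.6, f1 = 0.01, f2 = 0.01))
cbind(original = coef(fit2), swapped = coef(fit2b))          # same values, permuted labels
all.equal(fitted(fit2), fitted(fit2b), tolerance = 1e-6)     # fits should agree numerically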

With this many parameters I would use algorithm = "random" rather than "brute". If we do that then the following gives a result close to the one in the question (up to permutation of the arguments due to the symmetry of the model parameters):
set.seed(123)
fit <- nls2(fo,
            data = df,
            start = grd,
            algorithm = "random",
            control = nls.control(maxiter = 20000))
nls(fo, df, start = coef(fit), alg = "port", lower = 0)
giving:
Nonlinear regression model
  model: y ~ f1 * exp(-k1 * t) + f2 * exp(-k2 * t) - (f1 + f2) * exp(-k3 * t)
   data: df
      f1       f2       k1       k2       k3 
0.005397 0.013923 0.004346 0.063934 0.607893 
 residual sum-of-squares: 2.862e-07
Algorithm "port", convergence message: relative convergence (4)
ADDED
A variation of the above is to use nlsLM from the minpack.lm package instead of nls, and to use splines to add more points to the data set. In place of the nls line, try the following; it still converges:
library(minpack.lm)
t_s <- with(df, min(t):max(t))
df_s <- setNames(data.frame(spline(df$t, df$y, xout = t_s)), c("t", "y"))
nlsLM(fo, df_s, start = coef(fit), lower = rep(0,5), control = nls.control(maxiter = 1024))
and it also converges on the oral example:
set.seed(123)
y <- c(0, 0.0045375, 0.0066325, 0.00511375, 0.00395875, 0.003265, 0.00276,
       0.002495, 0.00231875)
t <- c(0, 12, 24, 48, 72, 96, 120, 144, 168)
DF <- data.frame(y, t)
grd <- data.frame(f1 = c(0, 1), f2 = c(0, 1), k1 = c(0, 2), k2 = c(0, 2), k3 = c(0, 0.7))
fit <- nls2(fo,
            data = DF,
            start = grd,
            algorithm = "random",
            control = nls.control(maxiter = 20000))
library(minpack.lm)
t_s <- with(DF, min(t):max(t))
df_s <- setNames(data.frame(spline(DF$t, DF$y, xout = t_s)), c("t", "y"))
nlsLM(fo, df_s, start = coef(fit), lower = rep(0, 5), control = nls.control(maxiter = 1024))

Related

How to fix 'unused arguments' error on nls model

A <- c(100, 200, 300, 400, 500, 600)
B <- c(60, 50, 40, 30, 20, 10)
data <- data.frame(A,B)
sample_plot <- function(A, B)
{
  w * ((x * A)^2) / (y / z)
}
control <- nls.control(maxiter = 1000)
sample_model <- nls(B ~ sample_plot(A, w, x, y, z), data = data, control = control,
                    start = list(w = 62.2060, x = 0.0438, y = 0.9692, z = 0.8693))
plot(A, predict(sample_model), type = "l", col = "blue")
points(A, B, col = "red")
summary(sample_model)
Every time I run this code, it throws an error about unused arguments: "Error in sample_model(B, w, x, y, z) : unused arguments (x, y, z)", even though I provided values for x, y, and z. I also tried changing sample_plot <- function(A,B) to sample_plot <- function(A,B,...), but that doesn't work either. Any help? Thank you.
As @user20650 has pointed out, the model is overparametrized and cannot give any different predictions than the model B ~ w * A^2. This is because all the other parameters can be absorbed into w. Further, as @Dave2e has indicated, such a model can't fit the data in any reasonable way.
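A quick numerical check of that point (not in the original answer), using the starting values from the question purely for illustration: any choice of x, y, z only rescales a single coefficient on A^2.
# Illustration only; values taken from the question's start list
w <- 62.2060; x <- 0.0438; y <- 0.9692; z <- 0.8693
A <- c(100, 200, 300, 400, 500, 600)
f_four <- w * ((x * A)^2) / (y/z)    # four-parameter form
f_one  <- (w * x^2 * z / y) * A^2    # collapses to one coefficient times A^2
all.equal(f_four, f_one)             # TRUE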
Plotting the data, B appears to be linear in A, so lm should do; in fact, it gives a perfect fit, as the residual sum of squares is effectively zero.
fm <- lm(B ~ A, data)
fm
## Call:
## lm(formula = B ~ A, data = data)
##
## Coefficients:
## (Intercept) A
## 70.0 -0.1
deviance(fm) # residual sum of squares
## [1] 1.390367e-28
plot(B ~ A, data)
abline(fm)

`nlme` with crossed random effects

I am trying to fit a crossed nonlinear random-effects model, analogous to the linear random-effects models mentioned in this question and in this mailing list post, using the nlme package. However, I get an error regardless of what I try. Here is an example:
library(nlme)
#####
# simulate data
set.seed(18112003)
na <- 30
nb <- 30
sigma_a <- 1
sigma_b <- .5
sigma_res <- .33
n <- na*nb
a <- gl(na,1,n)
b <- gl(nb,na,n)
u <- gl(1,1,n)
x <- runif(n, -3, 3)
y_no_noise <- x + sin(2 * x)
y <-
x + sin(2 * x) +
rnorm(na, sd = sigma_a)[as.integer(a)] +
rnorm(nb, sd = sigma_b)[as.integer(b)] +
rnorm(n, sd = sigma_res)
#####
# works in the linear model where we know the true parameter
fit <- lme(
  # somehow we found the right values
  y ~ x + sin(2 * x),
  random = list(u = pdBlocked(list(pdIdent(~ a - 1), pdIdent(~ b - 1)))))
vv <- VarCorr(fit)
vv2 <- vv[c("a1", "b1"), ]
storage.mode(vv2) <- "numeric"
print(vv2,digits=4)
#R Variance StdDev
#R a1 1.016 1.0082
#R b1 0.221 0.4701
#####
# now try to do the same with `nlme`
fit <- nlme(
  y ~ c0 + sin(c1),
  fixed = list(c0 ~ x, c1 ~ x - 1),
  random = list(u = pdBlocked(list(pdIdent(~ a - 1), pdIdent(~ b - 1)))),
  start = c(0, 0.5, 1))
#R Error in nlme.formula(y ~ a * x + sin(b * x), fixed = list(a ~ 1, b ~ :
#R 'random' must be a formula or list of formulae
The lme example is similar to the one on pages 163-166 of "Mixed-effects Models in S and S-PLUS", with only 2 random effects instead of 3.
I should have used two-sided formulas, as described in help("nlme"):
fit <- nlme(
  y ~ c0 + c1 + sin(c2),
  fixed = list(c0 ~ 1, c1 ~ x - 1, c2 ~ x - 1),
  random = list(u = pdBlocked(list(pdIdent(c0 ~ a - 1), pdIdent(c1 ~ b - 1)))),
  start = c(0, 0.5, 1))
# fixed effects estimates
fixef(fit)
#R c0.(Intercept) c1.x c2.x
#R -0.1788218 0.9956076 2.0022338
# covariance estimates
vv <- VarCorr(fit)
vv2 <- vv[c("c0.a1", "c1.b1"), ]
storage.mode(vv2) <- "numeric"
print(vv2,digits=4)
#R Variance StdDev
#R c0.a1 0.9884 0.9942
#R c1.b1 0.2197 0.4688

Bootstrapping of multiple values using boot::boot()

I am trying to estimate confidence intervals for several parameters of a nonlinear model using bootstrapping. Right now I bootstrap each parameter individually, so I have to generate the model several times.
Here is an example:
library(boot)
library(dplyr)

# generate some data:
x <- rnorm(300, mean = 5, sd = 2)
y <- x^2 * rnorm(300, mean = 1.5, sd = 1) + rnorm(300, mean = 3, sd = 1)
data <- data.frame(x = x, y = y)

# this is my model: nls(y ~ b1*x^2 + b2, data = data, start = list(b1 = 1.5, b2 = 3))

# functions for bootstrapping:
getParamB1 <- function(x1, idx){
  data <- x1 %>%
    dplyr::slice(idx)
  model <- nls(y ~ b1*x^2 + b2, data = data, start = list(b1 = 1.5, b2 = 3))
  coef(model)[['b1']]
}
getParamB2 <- function(x1, idx){
  data <- x1 %>%
    dplyr::slice(idx)
  model <- nls(y ~ b1*x^2 + b2, data = data, start = list(b1 = 1.5, b2 = 3))
  coef(model)[['b2']]
}

# Calculate bootstrap confidence intervals
btrpB1 <- boot(data, statistic = getParamB1, R = 200)
btrpB2 <- boot(data, statistic = getParamB2, R = 200)
ciB1 <- boot.ci(btrpB1)
ciB2 <- boot.ci(btrpB2)
This is of course not very nice code. Is there a way to estimate confidence intervals for several parameters (here b1 and b2) at once?
How about this?
library(boot)

# generate some data:
x <- rnorm(300, mean = 5, sd = 2)
y <- x^2 * rnorm(300, mean = 1.5, sd = 1) + rnorm(300, mean = 3, sd = 1)
df <- data.frame(x = x, y = y)
m1 <- nls(y ~ b1 * x^2 + b2, data = df, start = list(b1 = 1.5, b2 = 3))

# statistic: re-evaluate the fitted model's call on the resampled data
# and return all coefficients at once
boot.coef <- function(mod, data, indices) {
  assign(deparse(mod$data), data[indices, ])
  m <- eval(mod$call)
  return(coef(m))
}

results <- boot(data = df, statistic = boot.coef,
                R = 1000, mod = m1)
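Because the statistic now returns both coefficients, one boot object is enough; an interval for each parameter can then be pulled out with the index argument of boot.ci. A small usage sketch (the "perc" type is just one choice):
ciB1 <- boot.ci(results, type = "perc", index = 1)  # interval for b1
ciB2 <- boot.ci(results, type = "perc", index = 2)  # interval for b2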

Error with bootMer and confint for glmer

I'm running into an error that I can't find any documentation on when I try to bootstrap a glmer object:
glm2 <- glmer(RT ~ valence + location + first_location + Trial_num +
                (1 + Trial_num | id) + (1 | Trial_num),
              family = inverse.gaussian(log),
              control = glmerControl(optimizer = "nloptwrap",
                                     calc.derivs = FALSE),
              data = df_long)
The error is:
Error in lme4::.simulateFun(object = , :
could not find function "sfun
This is regardless of whether I try bootMer or confint:
bootMer_out <- bootMer(glm2,FUN=fixef, nsim=300)
confint_out <- confint(glm2, method="boot")
When I run it as an lmer object I don't have the issue with bootstrapping, i.e.
lm2 <- glmer(RT ~ valence + location + first_location + Trial_num +
               (1 + Trial_num | id) + (1 | Trial_num),
             family = inverse.gaussian(log),
             control = glmerControl(optimizer = "nloptwrap", calc.derivs = FALSE),
             data = df_long)
Does it have to do with the link function? Is there a workaround? I couldn't find 'sfun' in the simulateFun documentation either. I could always transform the data separately and use lmer instead of glmer, but if anyone has some insight, that would be great (since I'm curious now).
As pointed out by @user20650, you'll need to add a simulation method for the inverse Gaussian family.
For example, I added these to a branch on my lme4 fork under predict.R:
rinvgauss <- function(n, mu, lambda) {
  # transcribed from https://en.wikipedia.org/wiki/Inverse_Gaussian_distribution
  nu <- rnorm(n)
  y <- nu^2
  x <- mu + (mu^2 * y)/(2*lambda) - (mu/(2*lambda)) * sqrt(4*mu*lambda*y + mu^2*y^2)
  z <- runif(n)
  ifelse(z <= mu/(mu + x), x, mu^2/x)
}

inverse.gaussian_simfun <- function(object, nsim, ftd = fitted(object),
                                    wts = weights(object)) {
  if (any(wts != 1)) message("using weights as inverse variances")
  dispersion <- sum((weights(object, 'working') *
                       resid(object, 'working')^2)[weights(object, 'working') > 0]) / df.residual(object)
  rinvgauss(nsim * length(ftd), mu = ftd,
            lambda = wts/dispersion)
}
# ... skip a few
simfunList <- list(gaussian = gaussian_simfun,
                   binomial = binomial_simfun,
                   poisson = poisson_simfun,
                   Gamma = Gamma_simfun,
                   negative.binomial = negative.binomial_simfun,
                   inverse.gaussian = inverse.gaussian_simfun)
Here's an example:
# devtools::install_github('aforren1/lme4', ref = 'add_invgauss_simulate')
library(lme4)
set.seed(1)
dat <- data.frame(y = lme4:::rinvgauss(1000, 3, 4),
                  x = runif(1000),
                  subj = factor(rep(1:10, 100)))
mod <- glmer(y ~ x + (1 | subj),
             data = dat,
             family = inverse.gaussian(link = 'log'))
# ~60 secs on my laptop
(boots <- confint(mod, method = 'boot', nsim = 100, parm = 'beta_'))
                  2.5 %   97.5 %
(Intercept)   1.0044813 1.248774
x            -0.2158155 0.161213
(walds <- confint(mod, method = 'Wald', parm = 'beta_'))
                 2.5 %    97.5 %
(Intercept)   1.000688 1.2289971
x            -0.205546 0.1644621
You can see that the bootstrap method gives (roughly) the same results as the Wald method.
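With the simulation function in place, the bootMer call from the question should also run against this model. A sketch (assuming the patched fork above is installed; nsim kept small):
bootMer_out <- bootMer(mod, FUN = fixef, nsim = 100)
bootMer_out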

Creating function arguments from a named list (with an application to stats4::mle)

I should start by saying what I'm trying to do: I want to use the mle function without having to re-write my log likelihood function each time I want to try a different model specification. Because mle is expecting a named list of starting values, you apparently cannot just write the log-likelihood function as taking a vector of parameters. A simple example:
Suppose I want to fit a linear regression model via maximum likelihood and at first, I'm ignoring one of my predictors:
n <- 100
df <- data.frame(x1 = runif(n), x2 = runif(n), y = runif(n))
Y <- df$y
X <- model.matrix(lm(y ~ x1, data = df))

# define log-likelihood function
ll <- function(beta0, beta1, sigma){
  beta = matrix(NA, nrow = 2, ncol = 1)
  beta[,1] = c(beta0, beta1)
  -sum(log(dnorm(Y - X %*% beta, 0, sigma)))
}
library(stats4)
mle(ll, start = list(beta0 = .1, beta1 = .2, sigma = 1))
Now, if I want to fit a different model, say:
m <- lm(y ~ x1 + x2, data = df)
I cannot re-use my log-likelihood function; I'd have to re-write it to include an extra beta2 parameter. What I'd like to do is write something like:
ll.flex <- function(theta){
# theta is a vector that I can use directly
...
}
This would work if I could then somehow adjust the start argument in mle to accept my now vector-input log-likelihood function; or, barring that, if I had a function that constructs the log-likelihood function at run time, say by building the named list of arguments and then using it to define the function, e.g. something like this:
X <- model.matrix(lm(y ~ x1 + x2, data = df))
arguments <- rep(NA, dim(X)[2])
names(arguments) <- colnames(X)
ll.magic <- function(bring.this.to.life.as.function.arguments(arguments)){...}
Update:
I ended up writing a helper function that can add an arbitrary number of named arguments x1, x2, x3... to a passed function f.
add.arguments <- function(f, n){
  # adds n arguments to a function f; returns that new function
  t = paste("arg <- alist(",
            paste(sapply(1:n, function(i) paste("x", i, "=", sep = "")), collapse = ","),
            ")", sep = "")
  formals(f) <- eval(parse(text = t))
  f
}
It's ugly, but it got the job done, letting me re-factor my log-likelihood function on the fly.
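For what it's worth, a minimal illustration of what the helper does (with a hypothetical empty function f, not part of the original post): it only rewrites the formal argument list, so the body still has to pick the argument values up itself.
f <- function() NULL
g <- add.arguments(f, 3)
names(formals(g))
# [1] "x1" "x2" "x3"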
You can use the mle2 function from the package bbmle which allows you to pass vectors as parameters. Here is some sample code.
library(bbmle)

# REDEFINE LOG LIKELIHOOD
ll2 = function(params){
  beta = matrix(NA, nrow = length(params) - 1, ncol = 1)
  beta[,1] = params[-length(params)]
  sigma = params[[length(params)]]
  minusll = -sum(log(dnorm(Y - X %*% beta, 0, sigma)))
  return(minusll)
}

# REGRESS Y ON X1
X <- model.matrix(lm(y ~ x1, data = df))
mle2(ll2, start = c(beta0 = 0.1, beta1 = 0.2, sigma = 1),
     vecpar = TRUE, parnames = c('beta0', 'beta1', 'sigma'))

# REGRESS Y ON X1 + X2
X <- model.matrix(lm(y ~ x1 + x2, data = df))
mle2(ll2, start = c(beta0 = 0.1, beta1 = 0.2, beta2 = 0.1, sigma = 1),
     vecpar = TRUE, parnames = c('beta0', 'beta1', 'beta2', 'sigma'))
This gives you
Call:
mle2(minuslogl = ll2, start = c(beta0 = 0.1, beta1 = 0.2, beta2 = 0.1,
    sigma = 1), vecpar = TRUE, parnames = c("beta0", "beta1",
    "beta2", "sigma"))

Coefficients:
     beta0      beta1      beta2      sigma 
 0.5526946 -0.2374106  0.1277266  0.2861055 
It might be easier to use optim directly; that's what mle is using anyway.
ll2 <- function(par, X, Y){
  beta <- matrix(c(par[-1]), ncol = 1)
  -sum(log(dnorm(Y - X %*% beta, 0, par[1])))
}
getp <- function(X, sigma = 1, beta = 0.1) {
  p <- c(sigma, rep(beta, ncol(X)))
  names(p) <- c("sigma", paste("beta", 0:(ncol(X) - 1), sep = ""))
  p
}
set.seed(5)
n <- 100
df <- data.frame(x1 = runif(n), x2 = runif(n), y = runif(n))
Y <- df$y
X1 <- model.matrix(y ~ x1, data = df)
X2 <- model.matrix(y ~ x1 + x2, data = df)
optim(getp(X1), ll2, X=X1, Y=Y)$par
optim(getp(X2), ll2, X=X2, Y=Y)$par
With the output of
> optim(getp(X1), ll2, X=X1, Y=Y)$par
sigma beta0 beta1
0.30506139 0.47607747 -0.04478441
> optim(getp(X2), ll2, X=X2, Y=Y)$par
sigma beta0 beta1 beta2
0.30114079 0.39452726 -0.06418481 0.17950760
It might not be what you're looking for, but I would do this as follows:
mle2(y ~ dnorm(mu, sigma), parameters = list(mu ~ x1 + x2), data = df,
     start = list(mu = 1, sigma = 1))
mle2(y ~ dnorm(mu, sigma), parameters = list(mu ~ x1), data = df,
     start = list(mu = 1, sigma = 1))
You might be able to adapt this formulation for a multinomial, although dmultinom might not work; you might need to write a Dmultinom() that takes a matrix of multinomial samples and returns a (log) probability.
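A hedged sketch of what such a Dmultinom() could look like (hypothetical name and signature, not from the original answer): each row of x is one multinomial draw, and the function returns the (log) probability of every row by applying dmultinom() row-wise.
Dmultinom <- function(x, prob, log = FALSE) {
  # density of each row of a matrix of multinomial samples
  ll <- apply(x, 1, function(xi) dmultinom(xi, size = sum(xi), prob = prob, log = TRUE))
  if (log) ll else exp(ll)
}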
The R code that Ramnath provided can also be used with optim, since optim likewise takes a vector of parameters.
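For instance, a quick sketch reusing Y, X and Ramnath's ll2 from above (optim may warn if the search steps into sigma <= 0; bounds via method = "L-BFGS-B" would avoid that):
X <- model.matrix(lm(y ~ x1, data = df))
optim(c(beta0 = 0.1, beta1 = 0.2, sigma = 1), ll2)$par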
