lsoda creating fictional values?

I'm currently developing a model that integrates microbiology and geochemistry and uses lsoda to solve a large number of differential equations. The model is far too large to post here because it is made of several modules, but something very weird is happening.
These are my initial values
[screenshot of the initial values]
I have initialised them as zero because I don't want any microbes present, just to check how the chemistry would change without them. However, after 5 or 6 steps I start seeing non-zero values in some of my microbial variables:
[screenshot of the output]
I wonder if lsoda is doing some kind of rounding, because otherwise I cannot explain where these values are coming from. If that is the case, does anyone know how to prevent this kind of round-off?

Short answer
No, lsoda does not create "fictional" values.
Detailed answer
As @Ben Bolker stated, "floating point arithmetic is intrinsically imprecise". I want to add that all solvers, including lsoda, compute approximations, and that nonlinear models can magnify errors.
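For instance, a tiny illustration of such round-off in R (my example, in the spirit of Ben's):
sqrt(2)^2 - 2   # mathematically zero
[1] 4.440892e-16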
You can check whether all derivatives of your model are exactly zero by calling the model function directly, without the solver. See the following example of a Lorenz system, where all states are set to zero:
library(deSolve)

Lorenz <- function(t, state, parameters) {
  with(as.list(c(state, parameters)), {
    dX <- a * X + Y * Z
    dY <- b * (Y - Z)
    dZ <- -X * Y + c * Y - Z
    list(c(dX, dY, dZ))
  })
}

parameters <- c(a = -8/3, b = -10, c = 28)
state <- c(X = 0, Y = 0, Z = 0)
times <- seq(0, 500, by = 1)

range(Lorenz(0, state, parameters))
that gives
[1] 0 0
Counterexample
If we modify the model so that one derivative differs slightly from zero due to a rounding error, e.g. by adapting Ben's example:
Fancy <- function(t, state, parameters) {
  with(as.list(c(state, parameters)), {
    dX <- a * X + Y * Z + 27 * (sqrt(X + Y + Z + 2)^2 - 2)
    dY <- b * (Y - Z)
    dZ <- -X * Y + c * Y - Z
    list(c(dX, dY, dZ))
  })
}
range(Fancy(0, state, parameters))
then it gives:
[1] 0.000000e+00 1.199041e-14
Depending on tolerances, such a model may then fluctuate or even blow up. Surprisingly, a smaller tolerance may sometimes be worse:
out1 <- ode(y = state, times = times, func = Lorenz, parms = parameters, atol=1e-6)
out2 <- ode(y = state, times = times, func = Fancy, parms = parameters, atol=1e-6)
out3 <- ode(y = state, times = times, func = Fancy, parms = parameters, atol=1e-8)
par(mfrow=c(3,1))
plot(out1[,1:2], type="l", main="Lorenz, derivatives = 0")
plot(out2[,1:2], type="l", main="Fancy, small error in derivatives, atol=1e-6")
plot(out3[,1:2], type="l", main="Fancy, small error in derivatives, atol=1e-8")
Conclusion
lsoda works well within the limitations of floating point arithmetic.
Models should be carefully designed and tested.

Related

Vary parameter through time in ODE

I am using the deSolve package to solve a differential equation describing predator-prey dynamics. As an example, below is a simple L-V predator-prey model. I would like some of the parameters in the model to vary through time. I can vary a state variable (e.g. prey density) with no problem using the events argument in the ode function, but I cannot use the events argument to alter parameters.
Here is the simple L-V model with no events added (works fine):
library(deSolve)

# Lotka-Volterra Predator-Prey model from ?deSolve::ode
# define model
LVmod <- function(Time, State, Pars) {
  with(as.list(c(State, Pars)), {
    Ingestion    <- rIng * Prey * Predator
    GrowthPrey   <- rGrow * Prey * (1 - Prey/K)
    MortPredator <- rMort * Predator
    dPrey     <- GrowthPrey - Ingestion
    dPredator <- Ingestion * assEff - MortPredator
    return(list(c(dPrey, dPredator)))
  })
}

# parameters
pars <- c(rIng   = 0.2,  # rate of ingestion
          rGrow  = 1.0,  # growth rate of prey
          rMort  = 0.2,  # mortality rate of predator
          assEff = 0.5,  # assimilation efficiency
          K      = 10)   # carrying capacity

# initial densities (state variables)
yini <- c(Prey = 1, Predator = 2)

# time steps
times <- seq(0, 200, by = 1)

# run model
out <- ode(yini, times, LVmod, pars)

## plot
plot(out)
Here is the L-V model with the state variable Prey multiplied by some rnorm() every 5 time steps (works fine):
# add prey every 5 time steps using events
add_prey <- function(t, var, parms) {
  with(as.list(var), {
    Prey <- Prey * rnorm(1, 1, 0.05)
    return(c(Prey, Predator))
  })
}

# run ode - works fine
out <- ode(y = yini,
           times = times,
           func = LVmod,
           parms = pars,
           method = "ode45",
           events = list(func = add_prey, time = seq(0, 200, by = 5)))
plot(out)
Here is my attempt to increase K every 5 time steps (does not work):
# vary K through time
add_k <- function(t, var, parms) {
  with(as.list(var), {
    K <- K + 2
    return(c(Prey, Predator))
  })
}

# run ode
out <- ode(y = yini,
           times = times,
           func = LVmod,
           parms = pars,
           method = "ode45",
           events = list(func = add_k, time = seq(0, 200, by = 5)))
Which produces this error:
Error in eval(substitute(expr), data, enclos = parent.frame()) :
object 'K' not found
Based on the error, K is not being passed to add_k; the line with(as.list(var), ...) in add_k only accesses the state variables Prey and Predator. In the ode and events help files I can only find information about altering state variables (Prey and Predator in this case), and no information about altering other parameters. I am new to ODEs, so maybe I am missing something obvious. Any advice would be much appreciated.
Events are used to modify state variables: at each event time, the ODE solver is stopped, the states are modified, and the solver is restarted. In the case of time-dependent parameters, this can be done much more easily without events, using what is called a "forcing function" (or forcing data).
A modified version of the original code is shown below. Here are a few explanations:
approxfun is a function that returns a function. We name the result K_t; it interpolates between the data time steps t and the parameter values K. The argument rule = 2 is important: it describes how interpolation takes place outside the range of t, where 2 means that the closest value is returned. This matters because some solvers overshoot the end of the simulation and then interpolate back. The interpolation method can be "constant" or "linear", whichever is more appropriate.
The interpolation function K_t for the variable model parameter can be added to the model function as an optional argument at the end. It is also possible to define it globally, or to include it in parms if parms is defined as a list rather than a vector.
The LVmod function checks whether this additional argument exists and, if so, overwrites the default value of K.
At the end of the LVmod function we return the actual value of K as an optional (auxiliary) variable, so that it is included in the model output.
library(deSolve)

LVmod <- function(Time, State, Pars, K_t) {
  with(as.list(c(State, Pars)), {
    if (!is.null(K_t)) K <- K_t(Time)
    Ingestion    <- rIng * Prey * Predator
    GrowthPrey   <- rGrow * Prey * (1 - Prey/K)
    MortPredator <- rMort * Predator
    dPrey     <- GrowthPrey - Ingestion
    dPredator <- Ingestion * assEff - MortPredator
    list(c(dPrey, dPredator), K = K)
  })
}

pars <- c(rIng   = 0.2,  # rate of ingestion
          rGrow  = 1.0,  # growth rate of prey
          rMort  = 0.2,  # mortality rate of predator
          assEff = 0.5,  # assimilation efficiency
          K      = 10)   # carrying capacity

yini <- c(Prey = 1, Predator = 2)
times <- seq(0, 200, by = 1)

# run model with constant parameter K
out <- ode(yini, times, LVmod, pars, K_t = NULL)

## make K a function of t with given time steps
t <- seq(0, 200, by = 5)
K <- cumsum(c(10, rep(2, length(t) - 1)))
K_t <- approxfun(t, K, method = "constant", rule = 2)

out2 <- ode(yini, times, LVmod, pars, K_t = K_t)
plot(out, out2, mfrow = c(1, 3))
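As mentioned above, the interpolation function can instead travel inside parms when parms is a list. A minimal sketch of that variant (my adaptation, not part of the original answer):

pars_list <- c(as.list(pars), list(K_t = K_t))

LVmod2 <- function(Time, State, Pars) {
  with(c(as.list(State), Pars), {
    K <- K_t(Time)   # interpolated, time-dependent carrying capacity
    Ingestion    <- rIng * Prey * Predator
    GrowthPrey   <- rGrow * Prey * (1 - Prey/K)
    MortPredator <- rMort * Predator
    list(c(GrowthPrey - Ingestion,
           Ingestion * assEff - MortPredator), K = K)
  })
}

out3 <- ode(yini, times, LVmod2, pars_list)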

Error delay differential equation deSolve (dede)

I am writing a delay differential equation in deSolve (R), and I am getting an error message I am unsure how to solve.
So for some background: I have a system with 12 differential equations, 3 of which have a delay. I was successful in writing the system without deSolve, but I want to use deSolve because it makes it easy to switch to methods other than a fixed-step Euler.
However, now that I have put it in deSolve and am using the general solver for delay differential equations (dede), I get an error.
This is the delay and the differential equation that the error is about:
lag2 <- ifelse(t - tau2 < 0, 0, e2(lagvalue(t - tau2, 3)) * lagvalue(t - tau2, 8))
dn9dt <- lag2 - IP2*ethaP2M*P2 - mu2*sf0B(B2)*P2 - DNB(N2, B2)*P2 + DD*PD
The first and third delay differential equations are done the same as this one and don't seem to have an error. The error is:
Error in lagvalue(t - tau2, 3) : illegal input in lagvalue - lag, 0, too large, at time = 15.945
Importantly, the delay (tau2) is 16 in this case, and the error occurs slightly before time = 16.
I have already tried changing t - tau2 < 0 to t - tau2 <= 0, which doesn't help, and I have tried increasing the size of the history array (control = list(mxhist = 1e6)), which also didn't help. I have also tried rewriting the lag a couple of times, but every time I get the same error.
I have tried searching online, but I can hardly find anything on dede in deSolve, so I hope someone here might be able to help.
The question did not contain a full reproducible example, but the error can be reproduced if a simulation is run with a large number of time steps and the history array is too small. The following example is adapted from the ?dede help page:
library("deSolve")
derivs <- function(t, y, parms) {
#cat(t, "\n") # uncomment this to see when the error occurs
lag1 <- ifelse(t - tau1 < 0, 0.1, lagvalue(t - tau1, 2))
lag2 <- ifelse(t - tau2 < 0, 0.1, lagvalue(t - tau2, 2))
dy1 <- -y[1] * lag1 + lag2
dy2 <- y[1] * lag1 - y[2]
dy3 <- y[2] - lag2
list(c(dy1, dy2, dy3))
}
yinit <- c(x=5, y=0.1, z=1)
times <- seq(0, 40, by = 0.1)
tau1 <- 1
tau2 <- 10
It runs successfully with:
yout <- dede(y = yinit, times = times, func = derivs, parms = NULL)
but if we increase the number of time steps:
times <- seq(0, 40, by = 1e-3)
yout <- dede(y = yinit, times = times, func = derivs, parms = NULL)
we can get an error:
Error in lagvalue(t - tau2, 2) :
illegal input in lagvalue - lag, 0, too large, at time = 9.99986
The error occurs after the threshold time is exceeded (uncomment the cat above to see this), when the algorithm starts to interpolate in the history array. As a solution, increase the history buffer:
yout <- dede(y = yinit, times = times, func = derivs,
             parms = NULL, control = list(mxhist = 1e5))
It may also help to reduce the number of time steps.
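For example (a quick sketch), returning to the coarser output grid used above keeps the history array within the default limit:
times <- seq(0, 40, by = 0.1)
yout <- dede(y = yinit, times = times, func = derivs, parms = NULL)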

MLE of the parameters of a PDF written as an infinite sum of terms

My question relates to the use of R to derive maximum likelihood estimates of parameters when a probability distribution is expressed in the form of an infinite sum, such as the one below due to Rao, Girija et al.
I wanted to see if I could reproduce the maximum likelihood estimates obtained by these authors (who used Matlab rather than R) when the model is applied to a given set of data. My attempt is given below, although it throws up several warnings that "longer object length is not a multiple of shorter object length". I know why I am getting this warning, but I do not know how to remedy it. How can I edit my code to overcome this?
Also, is there a better way to handle infinite sums? Here I'm just truncating at an arbitrarily large number of terms (n = 1000).
library(bbmle)
svec <- list(c=1,lambda=1)
x <- scan(textConnection("0.1396263 0.1570796 0.2268928 0.2268928 0.2443461 0.3141593 0.3839724 0.4712389 0.5235988 0.5934119 0.6632251 0.6632251 0.6981317 0.7679449 0.7853982 0.8203047 0.8377580 0.8377580 0.8377580 0.8377580 0.8726646 0.9250245 0.9773844 0.9948377 1.0122910 1.0122910 1.0646508 1.0995574 1.1170107 1.1170107 1.1170107 1.1344640 1.1344640 1.1868239 1.2217305 1.2740904 1.3613568 1.3613568 1.3613568 1.4486233 1.4486233 1.5358897 1.5358897 1.5358897 1.5707963 1.6057029 1.6057029 1.6231562 1.6580628 1.6755161 1.7104227 1.7453293 1.7976891 1.8500490 1.9722221 2.0594885 2.4085544 2.6703538 2.6703538 2.7052603 3.5604717 3.7524579 3.8920842 3.9444441 4.1364303 4.1538836 4.2411501 4.2586034 4.3633231 4.3807764 4.4854962 4.6774824 4.9741884 5.5676003 5.9864793 6.1086524"))
dL <- function(x, c, lambda, n = 1000, log = TRUE) {
  k <- 0:n
  r <- log(sum(lambda * c * (x + 2*k*pi)^(-c - 1) * (exp(-(x + 2*k*pi)^(-c))^(lambda))))
  if (log) return(r) else return(exp(r))
}
dat <- data.frame(x)
m1 <- mle2(x ~ dL(c, lambda),
           data = dat,
           start = svec,
           control = list(parscale = unlist(svec)),
           method = "L-BFGS-B",
           lower = c(0, 0))
I suggest starting out with that algorithm and making a density function that can be tested for proper behavior by integrating over its range of definition, (0, 2*pi). You are calling it a "probability function", but that is a term I associate with CDFs rather than densities (PDFs):
dL <- function(x, c = 1, lambda = 1, n = 1000, log = FALSE) {
  k <- 0:n
  r <- sum(lambda * c * (x + 2*k*pi)^(-c - 1) * (exp(-(x + 2*k*pi)^(-c))^(lambda)))
  if (log) log(r) else r
}
vdL <- Vectorize(dL)
integrate(vdL, 0,2*pi)
#0.999841 with absolute error < 9.3e-06
LL <- function(x, c, lambda){ -sum( log( vdL(x, c, lambda))) }
(I think you were trying to pack too much into your log-likelihood function, so I decided to break the steps apart.)
When I ran that version I got a warning message from the final mle2 step that I didn't like, and I thought it might be that this density function was occasionally returning negative values, so this was my final version:
dL <- function(x, c = 1, lambda = 1, n = 1000) {
  k <- 0:n
  max(sum(lambda * c * (x + 2*k*pi)^(-c - 1) * (exp(-(x + 2*k*pi)^(-c))^(lambda))), 1e-8)
}
vdL <- Vectorize(dL)
integrate(vdL, 0,2*pi)
#0.999841 with absolute error < 9.3e-06
LL <- function(x, c, lambda){ -sum( log( vdL(x, c, lambda))) }
(m0 <- mle2(LL,start=list(c=0.2,lambda=1),data=list(x=x)))
#------------------------
Call:
mle2(minuslogl = LL, start = list(c = 0.2, lambda = 1), data = list(x = x))
Coefficients:
c lambda
0.9009665 1.1372237
Log-likelihood: -116.96
(The warning and the warning-free LL numbers were the same.)
I think you were attempting to pack too much into your definition of a log-likelihood function and got tripped up somewhere. There should have been two summations: one for the density approximation and a second for the summation of the log-likelihood. The vectors in those summations have different lengths, hence the warning you were seeing. Unpacking the steps allowed success, at least to the extent of not throwing errors. I'm not sure what that density represents and cannot verify correctness.
As far as the question of whether there is a better way to approximate an infinite series, the answer hinges on what is known about the rate of convergence of the partial sums, and whether you can set up a tolerance value to compare successive values and stop calculations after a smaller number of terms.
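As a sketch of that idea (my code, not from the original answer; it assumes the terms are positive and keep decreasing):

dL_adapt <- function(x, c = 1, lambda = 1, rtol = 1e-12, kmax = 1e5) {
  total <- 0
  for (k in 0:kmax) {
    term <- lambda * c * (x + 2*k*pi)^(-c - 1) * exp(-lambda * (x + 2*k*pi)^(-c))
    total <- total + term
    if (term < rtol * total) break   # stop once the next term is negligible
  }
  total
}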
When I look at the density, it makes me wonder if it applies to some scattering process:
curve(vdL(x, c=.9, lambda=1.137), 0.00001, 2*pi)
You can examine the speed of convergence by looking at the ratios of successive terms. Here's a function that does that for the first 10 terms at an arbitrary x:
> ratios <- function(x, c=1, lambda=1) {lambda*c*(x+2*(1:11)*pi)^(-c-1)*(exp(-(x+2*(1:10)*pi)^(-c))^(lambda))/lambda*c*(x+2*(0:10)*pi)^(-c-1)*(exp(-(x+2*(0:10)*pi)^(-c))^(lambda)) }
> ratios(0.5)
[1] 1.015263e-02 1.017560e-04 1.376150e-05 3.712618e-06 1.392658e-06 6.351874e-07 3.299032e-07 1.880054e-07
[9] 1.148694e-07 7.409595e-08 4.369854e-08
Warning message:
In lambda * c * (x + 2 * (1:11) * pi)^(-c - 1) * (exp(-(x + 2 * :
longer object length is not a multiple of shorter object length
> ratios(0.05)
[1] 1.755301e-08 1.235632e-04 1.541082e-05 4.024074e-06 1.482741e-06 6.686497e-07 3.445688e-07 1.952358e-07
[9] 1.187626e-07 7.634088e-08 4.443193e-08
Warning message:
In lambda * c * (x + 2 * (1:11) * pi)^(-c - 1) * (exp(-(x + 2 * :
longer object length is not a multiple of shorter object length
That looks like pretty rapid convergence to me, so I'm guessing that you could use only the first 20 terms and get similar results. With 20 terms the results look like:
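With the truncation argument of dL this presumably looked something like the following (my reconstruction; the exact call is not shown):
vdL <- Vectorize(function(x, c = 1, lambda = 1) dL(x, c, lambda, n = 20))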
> integrate(vdL, 0,2*pi)
0.9924498 with absolute error < 9.3e-06
> (m0 <- mle2(LL,start=list(c=0.2,lambda=1),data=list(x=x)))
Call:
mle2(minuslogl = LL, start = list(c = 0.2, lambda = 1), data = list(x = x))
Coefficients:
c lambda
0.9542066 1.1098169
Log-likelihood: -117.83
Since you never attempt to interpret a LL in isolation but rather look at differences, I'm guessing that the minor difference will not affect your inferences adversely.

Fit Exponentially modified Gaussian to 2D data

I am trying to fit an exponentially modified Gaussian (as in equation (1) of https://en.wikipedia.org/wiki/Exponentially_modified_Gaussian_distribution) to my 2D (x, y) data in R.
My data are:
x <- c(1.13669371604919, 1.14107275009155, 1.14545404911041, 1.14983117580414,
1.15421032905579, 1.15859162807465, 1.16296875476837, 1.16734790802002,
1.17172694206238, 1.17610621452332, 1.18048334121704, 1.18486452102661,
1.18924164772034, 1.19362080097198, 1.19800209999084, 1.20237922668457,
1.20675826072693, 1.21113955974579, 1.21551668643951, 1.21989583969116,
1.22427713871002, 1.22865414619446, 1.2330334186554, 1.23741245269775,
1.24178957939148, 1.24616885185242, 1.25055003166199, 1.25492715835571,
1.25930631160736, 1.26368761062622, 1.26806473731995, 1.2724437713623
)
y <- c(42384.03125, 65262.62890625, 235535.828125, 758616, 1691651.75,
3956937.25, 8939261, 20311304, 41061724, 65143896, 72517440,
96397368, 93956264, 87773568, 82922064, 67289832, 52540768, 50410896,
35995212, 27459486, 14173627, 12645145, 10069048, 4290783.5,
2999174.5, 2759047.5, 1610762.625, 1514802, 958150.6875, 593638.6875,
368925.8125, 172826.921875)
The function I am trying to fit and the value I am trying to minimize for optimization:
EMGCurve <- function(x, par) {
  ta <- 1/par[1]
  mu <- par[2]
  si <- par[3]
  h  <- par[4]
  Fct.V <- (h * si/ta) * (pi/2)^0.5 * exp(0.5 * (si/ta)^2 - (x - mu)/ta)
  Fct.V
}
RMSE <- function(par) {
  Fct.V <- EMGCurve(x, par)
  sqrt(sum((y - Fct.V)^2)/length(y))
}
result <- optim(c(1, x[which.max(y)], unname(quantile(x)[4] - quantile(x)[2]), max(y)),
                RMSE, method = "L-BFGS-B",
                lower = c(1, min(x), 0.0001, 0.1*max(y)),
                upper = c(Inf, max(x), 0.5*(max(x) - min(x)), max(y)),
                control = list(factr = 1e7))
However, when I try to visualize the result, it seems like nothing useful is happening:
plot(x, y, xlab = "RT/min", ylab = "I")
lines(seq(min(x), max(x), length = 1000),
      EMGCurve(seq(min(x), max(x), length = 1000), result$par), col = 2)
For some reason it doesn't work at all, although I managed to do this for a normal distribution with similar code. It would be great if someone has an idea.
If it might be of some use, I got an OK fit to your data using an X-shifted log-normal type peak equation, "y = a * exp(-0.5 * pow((log(x-d)-b) / c, 2.0))", with parameters a = 9.4159743234392539E+07, b = -2.7516932481669185E+00, c = -2.4343893243720971E-01, and d = 1.1251623071481867E+00, yielding R-squared = 0.994 and RMSE = 2.49E06. I personally was unable to get a fit using the equation in your post. There may be value in scaling the dependent data, as the values seem large, but this equation seems to fit the data as is.
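For illustration, here is a small R sketch (my translation of the quoted equation, not code from the original answer) that overlays that curve on the data:

a <- 9.4159743234392539e+07
b <- -2.7516932481669185e+00
cc <- -2.4343893243720971e-01   # the 'c' of the quoted equation, renamed to avoid masking base::c
d <- 1.1251623071481867e+00

lnpeak <- function(x) a * exp(-0.5 * ((log(x - d) - b)/cc)^2)

plot(x, y, xlab = "RT/min", ylab = "I")
curve(lnpeak(x), add = TRUE, col = 2)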

Fitting ODE in R, with the use of the FME package

I'm trying to fit an ODE model to some data and solve for the values of the parameters in the model.
I know there is a package called FME in R that is designed to solve this kind of problem. However, when I tried to write the code following the package manual, the program would not run, giving the following traceback:
Error in lsoda(y, times, func, parms, ...) : illegal input detected before taking any integration steps - see written message
The code is the following:
library(FME)

x <- c(0.1257, 0.2586, 0.5091, 0.7826, 1.311, 1.8636, 2.7898, 3.8773)
y <- c(11.3573, 13.0092, 15.1907, 17.6093, 19.7197, 22.4207, 24.3998, 26.2158)
time <- 0:7

# Initial Values of the Parameters
parms <- c(r = 4, b11 = 1, b12 = 0.2, a111 = 0.5, a112 = 0.1, a122 = 0.1)

# Definition of the Derivative Functions
# Parameters in pars; Initial Values in y
derivs <- function(time, y, pars) {
  with(as.list(c(pars, y)), {
    dx <- r + b11*x + b12*y - a111*x^2 - a122*y^2 - a112*x*y
    dy <- r + b12*x + b11*y - a122*x^2 - a111*y^2 - a112*x*y
    list(c(dx, dy))
  })
}

initial <- c(x = x[1], y = y[1])
data <- data.frame(time = time, x = x, y = y)

# Cost Computation, the Objective Function to be Minimized
model_cost <- function(pars) {
  out <- ode(y = initial, times = time, func = derivs, parms = pars)
  cost <- modCost(model = out, obs = data, x = "time")
  return(cost)
}

# Model Fitting
model_fit <- modFit(f = model_cost, p = parms, lower = c(-Inf, rep(0, 5)))
Is there anyone that knows how to use the FME package and fix the problem?
Your code syntax is right, and it works until the last line.
You can check your code with:
model_cost(parms)
This works fine, and with
model_cost(parms)$model
you can see that your "initial guess" is far away from the observed data (compare "obs" and "mod"). Perhaps that is the failure: the fitting procedure may not be able to reach the observed data from such a distant starting point.
In the meantime, I also checked different fitting methods via the method argument of modFit, but it still does not work.
Best wishes,
Johannes
edit: If you use
model_fit <- modFit(f = model_cost, p = parms)
without any lower bounds, then you do get a result (although with warnings), but then a112 is negative, which you wanted to avoid.
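One common workaround (a sketch of my own, not part of this answer) is to fit log-transformed parameters, so that positivity of the last five parameters is enforced implicitly:

model_cost_log <- function(logpars) {
  pars <- c(logpars[1], exp(logpars[-1]))   # r unconstrained, the others > 0
  names(pars) <- names(parms)
  out <- ode(y = initial, times = time, func = derivs, parms = pars)
  modCost(model = out, obs = data, x = "time")
}

model_fit <- modFit(f = model_cost_log, p = c(parms[1], log(parms[-1])))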
