I have a system of X connected patches, each with a simple predator-prey model like this:
dC_i/dt = r_c*C_i - d*C_i + e*P_i + b*\sum_j(A_ij*C_j)
dP_i/dt = r_p*P_i - e*P_i + b*\sum_j(A_ij*P_j)
where C_i and P_i are consumer and prey abundances in patch i; r_c and r_p are growth rates; d is the consumer death rate; and e is the rate of prey intake. The last term in each equation is the influx of consumers or prey: A_ij is an adjacency matrix indicating whether patches i and j are connected, and b is the rate of migration from patch j. (My real system is more complicated, but this simple example will do.)
Such a system requires as many coupled ODEs as there are patches. Any idea how to implement this kind of system? I know how to implement it for a single patch (no indices and no influx term) with deSolve, so a solution with deSolve is preferred.
The following matrix predator-prey model may serve as a starting point:
library(deSolve)
## generalized Lotka-Volterra in matrix form: dn/dt = r*n + n*(A %*% n)
model <- function(t, n, parms) {
  with(parms, {
    dn <- r * n + n * (A %*% n)
    list(dn)
  })
}

parms <- list(
  r = c(r1 = 0.1, r2 = 0.1, r3 = -0.1, r4 = -0.1),
  A = matrix(c(
    0.0, 0.0, -0.2,  0.0,  # prey 1
    0.0, 0.0,  0.0, -0.1,  # prey 2
    0.2, 0.0,  0.0,  0.0,  # predator 1; eats prey 1
    0.0, 0.1,  0.0,  0.0), # predator 2; eats prey 2
    nrow = 4, ncol = 4, byrow = TRUE)
)

times <- seq(0, 500, 0.1)
n0    <- c(n1 = 1, n2 = 1, n3 = 2, n4 = 2)
out <- ode(n0, times, model, parms)
plot(out)
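Applied to the patch structure from the question, the same matrix approach could look roughly like the sketch below. The patch number, connectivity matrix A and all rate values are placeholders, not taken from the question; it only illustrates how the migration term b * (A %*% C) fits into a deSolve model. Note that, as in the question's equations, only immigration is included; a mass-balanced version would also subtract emigration (compare diag(A) <- -rowSums(A) in the metapopulation example below).
library(deSolve)

## consumers C and prey P in npatch patches, coupled by migration b * (A %*% x)
npatch <- 3
A <- matrix(c(0, 1, 0,
              1, 0, 1,
              0, 1, 0), nrow = npatch, byrow = TRUE)  # patch connectivity

parms_patch <- list(r_c = 0.1, r_p = 0.5, d = 0.2, e = 0.05, b = 0.01, A = A)

patchmodel <- function(t, y, parms) {
  with(parms, {
    C <- y[1:npatch]                  # consumers in each patch
    P <- y[(npatch + 1):(2 * npatch)] # prey in each patch
    dC <- r_c * C - d * C + e * P + b * (A %*% C)
    dP <- r_p * P - e * P + b * (A %*% P)
    list(c(dC, dP))
  })
}

y0_patch <- c(C = rep(1, npatch), P = rep(2, npatch))
out_patch <- ode(y0_patch, times = seq(0, 100, 1), func = patchmodel, parms = parms_patch)
plot(out_patch)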
Here is a more metapopulation-related example, following our off-list discussion. Please note that this is just experimental and comes WITHOUT WARRANTY. Feedback and improvements are welcome.
library(deSolve)
n <- 7 # number of metapopulations
## signs encode the S, I, R blocks: the S block loses beta*S*I, the I block gains it, etc.
beta  <- rep(c(-500, 500, 0), each = n)
gamma <- rep(c(0, -365/13, 365/13), each = n)
## case(1) a "fully connected" system
#mig <- 1e-10 # migration rate
#As <- matrix(mig, nrow=n, ncol=n)
#diag(As) <- 0
## case (2) directed move
mig <- 0.0001 # migration rate
As <- matrix(0, nrow=n, ncol=n)
As[1:(n-1), 2:n] <- diag(mig, n-1)
As[2:n, 1:(n-1)] <- As[2:n, 1:(n-1)] + diag(mig, n-1)
## case (3) enter migration matrix manually ...
## expand movement to full matrix, within respective states S, I, R
## assumes that all states move equally; this can of course be changed
A <- matrix(0, nrow = 3 * n, ncol = 3 * n)
A[1:n, 1:n] <- As
A[(n+1):(2*n), (n+1):(2*n)] <- As
A[(2*n+1):(3*n), (2*n+1):(3*n)] <- As
## balance: what moves to other cells needs to be removed from the cell itself
diag(A) <- -rowSums(A)
## migration matrix A
## - positive values: what moves from the neighbors
## - negative values: what moves to the neighbors
A
S <- rep(0.99, times=n)
I <- c(0.01, rep(0, n-1)) # only first sub-population infected
R <- rep(0, times=n)
Y0 <- c(S, I, R)
sirmodel <- function(t, Y, parameters) {
  S <- Y[1:n]
  I <- Y[(n+1):(2*n)]
  # classical formulation for comparison:
  # dS <- -beta*S*I
  # dI <- beta*S*I - gamma*I
  # dR <- gamma*I
  ## beta and gamma have length 3*n, so S*I and I are recycled across the
  ## S, I and R blocks; Y %*% A adds the migration between subpopulations
  dY <- beta * S * I + gamma * I + Y %*% A
  list(dY)
}
times <- seq(from = 0, to = 0.2, length.out = 100)
out <- ode(y = Y0, times = times, func = sirmodel, parms = NULL)
windows(height = 6, width = 2 * n) # somewhat bigger window; Windows only, use dev.new() on other systems
plot(out, xlab = "time", ylab = "-", mfrow = c(3, n))
TLDR: My main challenge is how to write a root function that checks an arbitrary number of state variables and an event function that then acts on every state variable whose value has dropped below the threshold.
I'm trying to use deSolve for a set of Lotka-Volterra equations, but with many state variables (i.e. not just a predator and prey but 20 interacting organisms).
I want to use a root function and an event function to continuously check whether any state variable dips below a threshold value (e.g. 1.0) and, if so, use the event function to set that particular state variable to 0. I've been messing around with a simple minimal example, but can't quite work out how to extend this to check all the state variables and to act only on the one(s) below the threshold.
The LV example from the deSolve package vignette:
library(deSolve)

LVmod <- function(Time, State, Pars) {
  with(as.list(c(State, Pars)), {
    Ingestion    <- rIng * Prey * Predator
    GrowthPrey   <- rGrow * Prey * (1 - Prey/K)
    MortPredator <- rMort * Predator
    dPrey     <- GrowthPrey - Ingestion
    dPredator <- Ingestion * assEff - MortPredator
    return(list(c(dPrey, dPredator)))
  })
}

pars <- c(rIng   = 0.2,  # /day, rate of ingestion
          rGrow  = 1.0,  # /day, growth rate of prey
          rMort  = 0.2,  # /day, mortality rate of predator
          assEff = 0.5,  # -, assimilation efficiency
          K      = 10)   # mmol/m3, carrying capacity

yini  <- c(Prey = 10, Predator = 2)
times <- seq(0, 50, by = 1)
I can apply my root and event functions to check for just the prey's values:
## event triggered if the state variable is less than 1
rootfun <- function(Time, State, Pars) {
  return(State[1] - 1)
}
## sets the first state variable to 0
eventfun <- function(Time, State, Pars) {
  return(c(State[1] <- 0, State[2]))
}
out <- lsode(yini, times, LVmod, pars,
             rootfunc = rootfun,
             events = list(func = eventfun, root = TRUE))
## User specified plotting
matplot(out[, 1], out[, 2:3], type = "l", xlab = "time", ylab = "Conc",
        main = "Lotka-Volterra", lwd = 2)
legend("topright", c("prey", "predator"), col = 1:2, lty = 1:2)
And the result is this (figure not shown: prey and predator time series):
But now I want to extend this so that it checks all the state variables (here just the two), ideally in a way that is flexible to different numbers of state variables. I have tried doing this in some sort of loop but can't figure it out. My main challenge is how to write a root function that checks an arbitrary number of state variables and then have the event function act on every state variable whose value is below the threshold.
Perhaps useful information: at some point I would also like to implement a separate (not root-based) event function that changes a parameter at pre-set times, so ideally the solution to this problem should work alongside an additional event function.
Help much appreciated as always!!
One can use a vectorized version of the LV model and then write rootfun and eventfun also in vectorized style:
library(deSolve)
model <- function(t, y, parms) {
  with(parms, {
    dy <- r * y + y * (A %*% y)
    list(dy)
  })
}

## interaction matrix
parms <- list(
  r = c(r1 = 0.1, r2 = 0.1, r3 = -0.1, r4 = -0.1),
  A = matrix(c(
    0.0, 0.0, -0.2,  0.0,  # prey 1
    0.0, 0.0,  0.0, -0.1,  # prey 2
    0.2, 0.0,  0.0,  0.0,  # predator 1; eats prey 1
    0.0, 0.1,  0.0,  0.0), # predator 2; eats prey 2
    nrow = 4, ncol = 4, byrow = TRUE)
)

times <- seq(0, 150, 1)
y0    <- c(n1 = 1, n2 = 1, n3 = 2, n4 = 2)
out <- ode(y0, times, model, parms)
plot(out)
## defined as global variables for simplicity, can also be put into parms
threshold <- 0.2 # can be a vector of length(y0)
y_new <- 1.0 # can be a vector of length(y0)
## uncomment the 'cat' lines to see what's going on
rootfun <- function(t, y, p) {
  #cat("root at t=", t, "\n")
  #cat("y old =", y, "\n")
  return(y - threshold)
}

eventfun <- function(t, y, p) {
  #cat("y old =", y, "\n")
  y <- ifelse(y <= threshold, y_new, y)
  #cat("y new =", y, "\n")
  return(y)
}

out <- ode(y0, times, model, parms,
           events = list(func = eventfun, root = TRUE), rootfunc = rootfun)
plot(out)
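As the comments above note, threshold and y_new can also be vectors of length(y0), giving species-specific thresholds and reset values; the numbers below are arbitrary illustrations:
threshold <- c(0.2, 0.2, 0.5, 0.5)  # one threshold per state variable
y_new     <- c(1.0, 1.0, 2.0, 2.0)  # one reset value per state variable
out <- ode(y0, times, model, parms,
           events = list(func = eventfun, root = TRUE), rootfunc = rootfun)
plot(out)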
I'm trying to fit the tetracycline data set from Bates & Watts to a compartment model which forms a system of first order differential equations. The system has an analytic solution but I want to use the dede function to estimate the parameters numerically.
I can get parameter estimates which are close to the ones published in Bates and Watts but I'm wondering if I have coded the problem correctly. Specifically, since Bates & Watts account for dead time in their solution, I'm concerned about whether I have coded the use of lagvalue() in the function called DiffEqns correctly.
My programming question relates to coding of the derivatives with lag time. They are currently coded as:
dy1 <- -theta1*y1lag
dy2 <- theta1*y1lag - theta2*y2lag
However, I wonder if the derivatives should be coded instead as:
dy1 <- -theta1*y1lag*y[1]
dy2 <- theta1*y1lag*y[1] - theta2*y2lag*y[2]
Thanks and regards,
# Analyze the tetracycline data set as a two-compartment model
# (see Bates & Watts, "Nonlinear Regression Analysis and Its Applications")
## Note: the differential equations for the compartment model are:
## dy1/dt = -theta1*y1
## dy2/dt = theta1*y1 - theta2*y2
## (see p. 169 in Bates & Watts)
# Load packages
library(FME)
# Create the tetracycline dataset (see p. 281 in Bates & Watts)
tetra <- structure(list(time = c(1, 2, 3, 4, 6, 8, 10, 12, 16),
                        conc = c(0.7, 1.2, 1.4, 1.4, 1.1, 0.8, 0.6, 0.5, 0.3)),
                   row.names = c(NA, 9L), class = "data.frame")
# Observe that: A) "conc" = data for y2; B) there is no data for y1; C) data start at time = 1 instead of time = 0
# Create a differential equation model with dead time
DiffEqns <- function(t, y, parms) {
  theta1 <- parms[1] # rate constant for y1
  theta2 <- parms[2] # rate constant for y2
  theta3 <- parms[3] # amount of y1 at time = 0
  theta4 <- parms[4] # parameter that accounts for dead time
  y1lag <- ifelse(t - theta4 < 0, 0, lagvalue(t - theta4, 1))
  y2lag <- ifelse(t - theta4 < 0, 0, lagvalue(t - theta4, 2))
  dy1 <- -theta1*y1lag
  dy2 <- theta1*y1lag - theta2*y2lag
  return(list(c(dy1, dy2), y1lag = y1lag, y2lag = y2lag))
}
# Find a numerical solution for the system of delay differential equations using dede() from deSolve
time <- seq(from = 0, to = 16, by = 0.1)
Cost <- function(P) {
  theta1 <- P[1]
  theta2 <- P[2]
  theta3 <- P[3]
  theta4 <- P[4]
  theta <- c(theta1, theta2, theta3, theta4)
  yinit <- c(y1 = theta3, conc = 0)
  out <- dede(y = yinit, times = time, func = DiffEqns, parms = theta)
  modCost(model = out, obs = tetra)
}
theta <- c(theta1 = 0.1, theta2 = 0.2, theta3 = 5, theta4 = 0.2) # starting values for the parameters
yinit <- c(y1 = theta[3], conc = 0)
CompModFit2 <- modFit(f = Cost, p = theta, lower = c(0,0,0,0))
FMEtheta <- coef(CompModFit2)
# Compare data to numerical model solution using parameters from modFit
dedeFitted <- dede(times = time, y = c(y1 = FMEtheta[3], conc = 0), func = DiffEqns, parms = FMEtheta)
plot(dedeFitted, obs=tetra)
# Parameters from FME are:
# theta1 theta2 theta3 theta4
#0.1193617 0.6974401 10.7188251 0.2206997
# Compare FME parameters to the parameter estimates published in Bates & Watts:
# theta1 theta2 theta3 theta4
# 0.1488 0.7158 10.10 0.4123
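As a sanity check (a sketch, not from Bates & Watts' code or the original post): one way to probe whether the lagged coding reproduces the published dead-time model is to compare the dede fit with the closed-form solution of the two-compartment model with dead time. The helper analytic_y2 below assumes the usual Bateman-type expression shifted by theta4:
## closed-form y2 for dy1/dt = -theta1*y1, dy2/dt = theta1*y1 - theta2*y2,
## with y1(0) = theta3, y2(0) = 0 and dead time theta4
analytic_y2 <- function(t, p) {
  tau <- pmax(t - p[["theta4"]], 0)
  p[["theta1"]] * p[["theta3"]] / (p[["theta2"]] - p[["theta1"]]) *
    (exp(-p[["theta1"]] * tau) - exp(-p[["theta2"]] * tau))
}
plot(tetra$time, tetra$conc, xlab = "time", ylab = "conc")
lines(time, dedeFitted[, "conc"], col = "black")
lines(time, analytic_y2(time, FMEtheta), col = "red", lty = 2)
legend("topright", c("dede fit", "closed form"), col = c("black", "red"), lty = 1:2)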
What I am trying to do: I have a simple SIR model with a time-varying transmission rate beta, which I have already implemented in R (thanks to @tpetzoldt). We have a population of N = 10000, and gamma is also fixed.
library(deSolve)
library(FME)

sir_1 <- function(f_beta, S0, I0, R0, times) {
  # the differential equations
  sir_equations <- function(time, variables, parameters) {
    beta <- f_beta(time)
    with(as.list(variables), {
      dS <- -beta * I * S/10000
      dI <- beta * I * S/10000 - 1/5 * I
      dR <- 1/5 * I
      return(list(c(dS, dI, dR), beta = beta))
    })
  }
  # time dependent parameter functions
  parameters_values <- list(
    f_beta = f_beta
  )
  # the initial values of variables
  initial_values <- c(S = S0, I = I0, R = R0)
  out <- ode(initial_values, times, sir_equations, parameters_values)
}
times <- seq(0, 19)
f_beta <- approxfun(x=times, y=seq(0.901, 0.92, by=0.001), rule=2)
out <- as.data.frame(sir_1(f_beta=f_beta, S0 = 9990, I0 = 10, R0 = 0, times = times))
Now I have some "real" data, and with the FME package I want to get the optimal beta parameter at each time step:
datareal <- cbind(time = times, I = c(10, 32, 120, 230, 480, 567, 1040, 1743, 2300, 2619, 3542, 4039,
                                      4231, 6378, 5356, 4987, 3421, 2789, 1789, 1156))
sir_cost <- function(f_beta) {
  outsir <- as.data.frame(sir_1(f_beta = f_beta, S0 = 9990, I0 = 10, R0 = 0, times = times))
  modCost(model = outsir, obs = datareal)
}
p <- rep(0.8, 20)
Fit <- modFit(f = sir_cost, p = p)
Fit
$par
[1] 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8
My issues:
For the initial values I took 0.8 at each time step; however, the fit does nothing and only returns 0.8 for each time step (even if I take a very high value like 800, it reports that as already the best fit). My guess is that time-varying values of the same parameter (beta) have to be approached in another way than in the documentation.
Any help is highly appreciated.
I don't think that estimating beta per time step is a good idea. This is inherent in the problem and not a fault of deSolve or FME. If a dynamic model is to be used to estimate time-dependent parameters, I would recommend a suitable function with fewer knots, e.g. a time-dependent linear, quadratic or spline function with, say, 3-5 knots instead of 20. Then replace approxfun with that function and plug it in. Model fitting is an art, so play with start values and solvers. And read the books.
Note that the following is just a technical demonstration:
library("deSolve")
library("FME")
sir_1 <- function(f_beta, S0, I0, R0, times) {
  # the differential equations
  sir_equations <- function(time, variables, parameters) {
    beta <- parameters$f_beta(time)
    with(as.list(variables), {
      dS <- -beta * I * S/10000
      dI <- beta * I * S/10000 - 1/5 * I
      dR <- 1/5 * I
      return(list(c(dS, dI, dR), beta = beta))
    })
  }
  initial_values <- c(S = S0, I = I0, R = R0)
  parameters <- list(f_beta = f_beta)
  out <- ode(initial_values, times, sir_equations, parameters)
}
times <- seq(0, 19)
# use method "constant" to leave beta constant over time step
f_beta <- approxfun(x=times, y=seq(0.901, 0.92, by=0.001), method="constant", rule=2)
out <- sir_1(f_beta=f_beta, S0 = 9990, I0 = 10, R0 = 0, times = times)
plot(out)
datareal <- cbind(time = times, I = c(10, 32, 120, 230, 480, 567, 1040, 1743, 2300, 2619, 3542, 4039,
                                      4231, 6378, 5356, 4987, 3421, 2789, 1789, 1156))
plot(out, obs=datareal)
sir_cost <- function(p) {
  f_beta <- approxfun(x = times, y = p, method = "constant", rule = 2)
  outsir <- sir_1(f_beta = f_beta, S0 = 9990, I0 = 10, R0 = 0, times = times)
  modCost(model = outsir, obs = datareal)
}
# Play with start values!!!
p <- rep(0.8, 20)
# e.g.: consider random start values
set.seed(123)
p <- runif(20, min=0.8, max=1.2)
# try other solvers, especially such with true box constraints
Fit <- modFit(f = sir_cost, p = p,
              lower = rep(0.2, 20), upper = rep(5, 20), # box constraints
              method = "Port")
summary(Fit) # system is singular (that is what we expected)

# use another solver. Note: it takes a while
Fit <- modFit(f = sir_cost, p = p,
              lower = rep(0.2, 20), upper = rep(5, 20), # box constraints
              method = "L-BFGS-B")
# goes in a surprisingly good direction
Fit$par
f_beta <- approxfun(x=times, y=Fit$par, method="constant", rule=2)
out2 <- sir_1(f_beta=f_beta, S0 = 9990, I0 = 10, R0 = 0, times = times)
# compare with data
plot(out, out2, obs=datareal)
# but see how unstable beta is
plot(out2)
Fitting a model with time-dependent parameters may or may not be a good idea, but if there are reasons to do so, I would suggest restricting the number of parameters and using some kind of smooth function.
The following example shows how to use a spline for this purpose, but it is of course also possible (and may be preferable) to use a function with some mechanistic meaning.
As a side effect, it was also possible to identify gamma instead of fixing it a priori. Nevertheless, this is still a technical demonstration, and I leave open the scientific question of whether a time-dependent beta makes any sense.
library("FME")
sir_1 <- function(f_beta, gamma, S0, I0, R0, times) {
  # the differential equations
  sir_equations <- function(time, variables, parameters) {
    beta  <- parameters$f_beta(time)
    gamma <- parameters$gamma
    with(as.list(variables), {
      dS <- -beta * I * S / 10000
      dI <- beta * I * S / 10000 - gamma * I
      dR <- gamma * I
      # return vector of derivatives, and beta as auxiliary variable
      return(list(c(dS, dI, dR), beta = beta))
    })
  }
  initial_values <- c(S = S0, I = I0, R = R0)
  # pass constant parameter and parameter function together as a list
  parameters <- list(
    f_beta = f_beta,
    gamma = gamma
  )
  ode(initial_values, times, sir_equations, parameters)
}
times <- seq(0, 19)
datareal <- data.frame(
  time = times,
  I = c(10, 32, 120, 230, 480, 567, 1040, 1743, 2300,
        2619, 3542, 4039, 4231, 6378,
        5356, 4987, 3421, 2789, 1789, 1156)
)
## define parameter as a vector: gamma and beta
t_beta <- c(0, 12, 16, 19) # consider more or less knots
n_beta <- length(t_beta)
y_beta <- rep(1, n_beta)
p <- c(gamma = 1/5, y_beta) # combine all parameters in one vector
## a small helper function for parameter selection
select <- function(p, which, exclude = FALSE) {
  parnames <- names(p)
  p[(which == parnames) != exclude]
}
## check the helper function
select(p, "gamma")
select(p, "gamma", exclude = TRUE)
## cost function, see ?modCost help page
sir_cost <- function(p) {
  gamma  <- select(p, "gamma")
  y_beta <- select(p, "gamma", exclude = TRUE)
  f_beta <- splinefun(x = t_beta, y = y_beta)
  outsir <- sir_1(f_beta = f_beta, gamma = gamma,
                  S0 = 9990, I0 = 10, R0 = 0, times = times)
  modCost(model = outsir, obs = datareal)
}
## model calibration, see ?modFit
Fit <- modFit(f = sir_cost, p = p,
              # lower bound to avoid negative values of beta
              lower = c(gamma = 0, rep(0.0, n_beta)),
              # note: high sensitivity wrt. upper bound
              upper = c(gamma = 1, rep(2.0, n_beta)),
              # an algorithm that supports box constraints
              method = "Port")
## all parameters were identifiable
summary(Fit)
## smaller time steps to obtain smooth curves
times <- seq(0, 19, 0.1)
## split components of fitted parameters
gamma <- select(Fit$par, "gamma")
y_beta <- select(Fit$par, "gamma", exclude = TRUE)
out2 <- sir_1(f_beta = splinefun(x = t_beta, y = y_beta), gamma,
              S0 = 9990, I0 = 10, R0 = 0, times = times)
## show fitted curves and compare simulation with data
## see ?plot.deSolve help page
plot(out2, obs = datareal, which = c("S", "R", "I", "beta"),
     las = 1, obspar = list(pch = 16, col = "red"))
I am trying to model a disturbance event in a generalized Lotka-Volterra model, where at time t, 1 is added to the variable e. I keep on running into the following error:
Error in checkevents(events, times, Ynames, dllname) :
unknown state variable in 'event': e
My model is the following:
lvg <- function(t, N, e, param) {
  e <- 0
  dNdt <- N * r + N * (a %*% N) - N * e
  list(c(dNdt))
}
where N is the population size of species i, r is the growth rate, a is the interaction matrix, and e is the event. r and a are specified as parameters beforehand, and the event is specified in a data frame. A simplified version is as follows:
#set parameters
S <- 10 # number of species
r <- rep(1.1, S) # growth rates
a <- matrix(nrow = S, ncol = S) #interaction matrix
a[lower.tri(a)] <- -0.001
a[upper.tri(a)] <- -0.001
diag(a) <- -0.01
parms <- list(r, a) #put parameters in a list
N0 <- rep(100, S) #initial values for species abundances
ts <- seq(0, 100, 1) # time steps for solver
#create data frame for event
eventdat <- data.frame(var = c("e", "e"), time = c(10, 20), value = c(1, 1), method = c("add"))
lvout <- lsoda(N0, ts, lvg, parms, events = list(data = eventdat))
Here is an approach with an event function instead of an event table; it is in general more flexible and in this case simpler. Note also that the number of states and the parameter values were changed to get a more typical Lotka-Volterra model:
library(deSolve)
## multi-species Lotka-Volterra
lvg <- function(t, N, param) {
  with(param, {
    dNdt <- r * N + N * (a %*% N)
    list(c(dNdt))
  })
}
## simplified to 4 species, you can add more
S <- 4
N0 <- c(1,1,1,1)
## parameter list
parms <- list(
  r = c(r1 = 0.5, r2 = 0.5, r3 = -0.5, r4 = -0.5),
  a = matrix(c(
    0.0, 0.0, -0.5,  0.0,  # prey 1
    0.0, 0.0,  0.0, -0.2,  # prey 2
    0.5, 0.0,  0.0,  0.0,  # predator 1; eats prey 1
    0.0, 0.2,  0.0,  0.0), # predator 2; eats prey 2
    nrow = 4, ncol = 4, byrow = TRUE),
  e = rep(0.5, S)
)
ts <- seq(0, 100, 1) # time steps for solver
te <- c(20, 40) # event times
## event function is more flexible than an event table
eventfun <- function(t, N, param) {
  with(as.list(param), {
    N <- N - N * e
    return(c(N))
  })
}

## simulation without events
lvout <- lsoda(N0, ts, lvg, parms)
plot(lvout)

## simulation with events
lvout <- lsoda(N0, ts, lvg, parms, events = list(func = eventfun, time = te))
plot(lvout)
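For completeness, the "unknown state variable" error in the question can also be avoided while keeping an event table: the table can only act on state variables, so e has to become an additional state (with zero derivative) that the events then modify. The following is only a sketch, reusing S, ts and the r and a entries of parms from the code above:
## e is carried as an extra state with derivative 0, so the event table can add to it
lvg2 <- function(t, y, param) {
  with(param, {
    N <- y[1:S]
    e <- y[S + 1]                   # disturbance intensity, changed by the events
    dNdt <- r * N + N * (a %*% N) - N * e
    list(c(dNdt, 0))                # e stays constant between events
  })
}

parms2   <- list(r = parms$r, a = parms$a)
y0       <- c(N = rep(1, S), e = 0)
eventdat <- data.frame(var = "e", time = c(20, 40), value = 1, method = "add")

lvout2 <- lsoda(y0, ts, lvg2, parms2, events = list(data = eventdat))
plot(lvout2)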
I am trying to implement a reaction-diffusion PDE using reacTran in the deSolve package. However, the time-dependent reaction term is not working. Any suggestions on how to implement this would be greatly appreciated!
library(ReacTran)
library(deSolve)
N <- 1000
xgrid <- setup.grid.1D(x.up = 0, x.down = 10, N = N)
x <- xgrid$x.mid
D.coeff <- 1
k <- 1
Diffusion <- function(t, Y, parms) {
  tran <- tran.1D(C = Y, C.up = 0, C.down = 0, D = D.coeff, dx = xgrid) - k*t
  reac <- -kt
  return(list(tran$dC + reac))
}
# Set initial conditions as gaussian distribution
C0 <- 10 #Initial concentration (mg/L)
X0 <- 5 #Location of initial concentration (m)
sig <- .2 #Spread of Gaussian distribution
C <- rep(0, N) # vector of zeros
Yini <- C+C0*exp(-((x-X0)/sig)^2)
parms1 <- list(D=D.coeff, k=k)
times <- seq(from = 0, to = 5, by = 0.01)
print(system.time(
out <- ode.1D(y = Yini, times = times, func = Diffusion,
parms = parms1, dimens = N)))
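A hedged sketch (not an authoritative fix): one possible way to write a time-dependent reaction term is to leave the tran.1D() call untouched, take k from parms, and add the reaction term to the returned rate of change. Note that a zero-order term like -k*t can drive concentrations negative:
Diffusion2 <- function(t, Y, parms) {
  with(parms, {
    tran <- tran.1D(C = Y, C.up = 0, C.down = 0, D = D, dx = xgrid)
    reac <- -k * t                  # time-dependent zero-order consumption
    list(tran$dC + reac)
  })
}

out2 <- ode.1D(y = Yini, times = times, func = Diffusion2,
               parms = parms1, dimens = N)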