Loops don't come to me intuitively. I have a simple consumer-resource model, and I want to loop over values of the resource growth rate g, collect the final state values, and then plot the equilibrium as a function of the parameter values. This is what I have so far:
param.values = seq(from = 1, to = 10, by = 1)
variable = rep(0,length(param.values))
for (i in 1:length(param.values)){
state <- c(r = 1, n = 1)
parameters = c(g = variable[i],# resource growth rate
d = 0.5, # n mortality rate
k = 5, # r carrying capacity
c = 1, # consumption rate of n on r
e = 1, # conversion efficiency for n on r
h = 1 # handling time n on r
)
function1 <- function(times, state, parameters) {
with(as.list(c(state, parameters)),{
# rate of change
dr = variable[i]*r*(1 - (r/k)) - (c*n*r/(1+(h*c*r)))
dn = (e*c*n*r/(1+(h*c*r)))- n*d
# return the rate of change
list(c(dr, dn))
})
}
times <- seq(0, 100, by = 1)
out <- ode(y = state, times = times, func = function1, parms = parameters)
sol <- out[101, 2:3] # trying to get last equilibrium value to plot against param values...
print(sol[i])
}
plot(sol[,1] ~ param.values)
plot(sol[,2] ~ param.values)
I think I have things right up until the call to ode(). Where should I be indexing with i after that? I hope this makes sense.
Your approach had several issues, so I re-organized it so that it runs through. But, as your model shows a stable cycle, it does not reach an equilibrium.
Here are a few hints:
The loop should only contain things that change during the simulation. Fixed code segments should come before the loop. This is easier to maintain and faster.
First, run the model without the loop, to see whether it works.
Then define a data structure (matrix or data frame) to store the results.
Here is one approach to how it can be implemented:
library("deSolve")
## define as much as possible outside the loop
function1 <- function(times, state, parameters) {
  with(as.list(c(state, parameters)), {
    # rate of change
    dr = g*r*(1 - (r/k)) - (c*n*r/(1+(h*c*r)))
    dn = (e*c*n*r/(1+(h*c*r))) - n*d
    # return the rate of change
    list(c(dr, dn))
  })
}
state <- c(r = 1, n = 1)
parameters = c(g = 1,   # resource growth rate
               d = 0.5, # n mortality rate
               k = 5,   # r carrying capacity
               c = 1,   # consumption rate of n on r
               e = 1,   # conversion efficiency for n on r
               h = 1    # handling time n on r
)
times <- seq(0, 100, by = 1)
## first test single run of model
out <- ode(y = state, times = times, func = function1, parms = parameters)
plot(out)
## It runs and we see a cycling model. I suspect it has no equilibrium!
param.values = seq(from = 1, to = 10, by = 1)
## define a matrix where results can be stored
sol <- matrix(0, nrow=length(param.values), ncol=2)
for (i in 1:length(param.values)){
  ## replace single parameter g with new value
  parameters["g"] <- param.values[i]
  out <- ode(y = state, times = times, func = function1, parms = parameters)
  ## store result of last value in row of matrix.
  ## Note that it may not be an equilibrium
  sol[i, ] <- out[101, 2:3] # trying to get last equilibrium value to plot against param values...
  print(sol[i, ])
}
plot(sol[,1] ~ param.values, type="l")
plot(sol[,2] ~ param.values, type="l")
## We see that the model has no equilibrium.
There are several other ways to do this and, as said, the model has no equilibrium. Here is another model example, a so-called chemostat, which does reach an equilibrium.
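A minimal sketch of such a chemostat model (the structure is the standard substrate-consumer chemostat; the parameter values here are my own illustrative choices, not from the original post):

library("deSolve")

## substrate S flows in at dilution rate D and is taken up by organisms X
## following Monod kinetics; both S and X flow out at rate D
chemostat <- function(times, state, parameters) {
  with(as.list(c(state, parameters)), {
    uptake <- vm * S / (km + S) * X
    dS <- D * (S0 - S) - uptake / Y
    dX <- uptake - D * X
    list(c(dS, dX))
  })
}

parms  <- c(vm = 1, km = 2, Y = 0.5, D = 0.1, S0 = 10)
state0 <- c(S = 10, X = 0.1)
out <- ode(y = state0, times = seq(0, 200, by = 1), func = chemostat, parms = parms)
plot(out)  # both state variables level off, i.e. the system reaches an equilibrium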
I would like to write a function that calculates the number of arrivals until time t in n
different trials. I know that the arguments should include the exponential parameter lambda, the time t, and the number of counts n to be sampled. It should return a vector with n elements, corresponding to the counts.
Progress: I have created a function that counts the number of events until time t, and I will need to use the rexp() function.
But how do I write this Poisson function?
The following simulates a Poisson process. The function Nt takes two arguments, the exponential rate and the time limit.
Nt <- function(lambda = 1, t){
  S <- 0   # Total time, sum of X's
  n <- 0L  # Number of events
  repeat{
    X <- rexp(1, lambda)   # New time between events
    if(S + X > t) break    # Above the limit time t?
    S <- S + X             # No, update total time S
    n <- n + 1L            # and the nr. of events counter
  }
  n
}
set.seed(2021)
Rate <- 2
Time <- 10
N <- replicate(1e4, Nt(lambda = Rate, t = Time))
tbl <- table(N)
plot(tbl/sum(tbl), lwd = 10, col = "grey")
lines(0:40, dpois(0:40, lambda = Time*Rate), type = "h", col = "red")
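To get the vector of n counts the question asks for, the simplest route is a thin wrapper around Nt (a sketch; the name Nt_n is my own):

## Simulate n independent Poisson-process counts up to time t
Nt_n <- function(lambda = 1, t, n) {
  replicate(n, Nt(lambda = lambda, t = t))
}
Nt_n(lambda = 2, t = 10, n = 5)  # e.g. five simulated counts up to time 10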
I'm building predator-prey models based on Lotka-Volterra derivatives in R using the package deSolve. I define parameters, initial state and timesteps and the model function. Then I solve everything using ode(), or dede() when using a time lag.
I noticed there's a big difference in output depending on how you refer to the inputs WITHIN the model function, and I really don't understand why. You can extract the parameters either via the argument, parms['r'], or via the previously defined object I passed to that argument, parameters['r']. Same result in both cases.
This is different for the initial state though: using the argument, y[1] or y['N'], gives a totally different result than using the object passed to the argument, init[1] or init['N'].
Also in the DDE there's a difference between time - tau and times - tau, and between ylag <- y and ylag <- init.
Why is there a different result for argument vs object for the initial state and time and not for the parameters? I need to comprehend this well in order to use the FME package in a later stage, so I hope someone can explain this behaviour.
My code:
library(deSolve)
## Parameters
parameters <- c(r = 0.25, K = 200, a = 0.01, c = 0.01, m = 1, tau = 7)
init <- c(N = 20, P = 2)
time <- seq(0, 100, by = 0.01)
## Ordinary DE
PreyPred <- function(times, y, parms){ #chose same argument names as ode()
N <- y['N'] #y[1] works as well
P <- y['P']
#N <- init['N'] #(or init[1]) gives a totally different result!
#P <- init['P']
r <- parms['r'] #growth rate prey parameters['r'] gives same result
K <- parms['K'] #carrying capacity prey
a <- parms['a'] #attack rate predator
c <- parms['c'] #assimilation rate (?) predator
m <- parms['m'] #mortality predator
dN <- r * N * (1-N/K) - a * N * P
dP <- c * N * P - m * P
return(list(c(dN, dP)))
}
oderesult <- ode(func = PreyPred, parms = parameters, y = init, times = time)
plot(oderesult, lwd = 2, mfrow = c(1,2))
## Delayed DE
PreyPredLag <- function(times, y, parms){
N <- y['N']
P <- y['P']
#N <- init['N']
#P <- init['P']
r <- parms['r'] #growth rate prey
K <- parms['K'] #carrying capacity prey
a <- parms['a'] #attack rate predator
c <- parms['c'] #assimilation rate (?) predator
m <- parms['m'] #mortality predator
tau <- parms['tau'] #time lag
tlag <- times - tau
#tlag <- time - tau #different result
if (tlag < 0)
ylag <- y
#ylag <- init
else
ylag <- lagvalue(tlag)
# dede
dN <- r * N * (1-N/K) - a * N * P
dP <- c * ylag[1] * ylag[2] - m * P
return(list(c(dN, dP), lag = ylag))
}
dederesult <- dede(func = PreyPredLag, parms = parameters, y = init, times = time)
plot(dederesult, lwd = 2, mfrow = c(2,2))
The observed behavior is correct. A short explanation:
'parms' is the local variable inside the model function, while 'parameters' is the global variable in the workspace. This is nothing specific to deSolve; it is simply how R works. In most cases the preferred way is to use the local variable.
For the states, this is indeed different. Here the outer object 'init' is the initial value at the beginning, while the local 'y' is the current value at the given time step.
The dede case is analogous: init is the start value and y the instantaneous value, while time is the global vector of all time steps and times (the function argument) is the actual time of the current step.
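One quick way to see the difference is to call the model function by hand, the way the solver does internally at every step (a sketch reusing the objects defined above; the state values used for t = 10 are made up purely for illustration):

PreyPred(times = 0,  y = init, parms = parameters)              # at t = 0 the solver passes y = init
PreyPred(times = 10, y = c(N = 150, P = 5), parms = parameters) # later, y has moved on,
                                                                # while 'init' itself never changes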
I am trying to simulate a model with this differential equation for concentration A:
dA/dt = (a - b) * exp(d * (s - t))
(The equation has parameters a, b, d, and s.) I cannot figure out how to use R to solve differential equations that contain t (the time variable) directly. I tried it with the function radau of the package deSolve (see below), but I did not get the code to work. I also do not understand how to define the index variable, or whether this is solvable with this function at all. (All my other, simpler differential equations, which I have run in the past with the ode function of deSolve, worked fine.)
I hope you can help me!
My try:
#Defining parameters
parameter <- c(a=0.03, b=0.02, d=0.01, s=179)
#Defining Function
Function1 <- function(t, y, parameter) { with (as.list(Y),
list(c(dA = (a-b)*exp^(d*(s-t)))))}
#Initial conditions
yini <- c(A=1)
#Mass matrix
M <- diag(nrow=1)
M[5,5] <- 0
M
#index/times/output
index <- c(1)
times <- seq(from = 0, to = 10, by = 0.01)
out <- radau(y = yini, func = Function1, parms = parameters, times = times, mass = M, nind = index)
plot(out, type = "l", lwd = 2)
I'm not sure what's up with M or index as they don't appear in your model, but here's code that runs and produces results based on your code.
library(deSolve)  # provides ode()

#Defining parameters
parameter <- c(a=0.03, b=0.02, d=0.01, s=179)
#Defining Function
model <- function(t, y, parameter) {
with(as.list(parameter),{
dA <- (a - b) * exp(d * (s - t))
list(dA)
})
}
#Initial conditions
yini <- 1
# Output times
times <- seq(from = 0, to = 10, by = 0.01)
# Solve model
out <- ode(y = yini, func = model, parms = parameter, times = times)
# Plot results
plot(out, type = "l", lwd = 2)
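Because the right-hand side depends only on t, this particular equation can also be integrated in closed form, which gives a convenient check on the numerical result (a sketch; A_exact is my own helper and its defaults simply restate the parameters above):

## A(t) = A0 + (a - b)/d * exp(d*s) * (1 - exp(-d*t)), from integrating dA/dt directly
A_exact <- function(t, a = 0.03, b = 0.02, d = 0.01, s = 179, A0 = 1) {
  A0 + (a - b) / d * exp(d * s) * (1 - exp(-d * t))
}

## compare the last numerical value with the exact solution at t = 10
c(numeric = unname(out[nrow(out), 2]), exact = A_exact(10))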
I've written some code that has a logistic growth component (i.e. as N approaches the 'carrying capacity' it grows at a slower rate, until it stops growing altogether when it reaches the carrying capacity). However, when I run it in R it doesn't seem to be working: some populations end up being larger than the carrying capacity. I've looked at the maths and it's all OK, so I think the problem is that dN/dt is only being calculated once for each population. Does anyone know how to fix this problem?
Any help would be greatly appreciated!
Example code below:
library('optimbase')
library('deSolve')
library('tidyverse')
K = 150000 #carrying capacity
deaths = 0.2567534 #death rate
treesize = 0.23523 #resource size
K_mat = K*ones(10, 1) #matrix of Ks
death_mat = deaths*ones(10, 1) #matrix of deathrates
tree_mat = treesize*ones(11, 1) #matrix of resources
for_mat <- matrix(rbinom(11 * 10, 1, 0.2), ncol = 11, nrow = 10) #connection
#matrix of foraging
parameters <- c(for_mat, tree_mat, death_mat, K_mat) #outline parameters
N <- runif(10,0,K)
state <- N #starting state
nestchange <- function(t, state, parameters){
with(as.list(c(state, parameters)),{
r = for_mat %*% tree_mat - death_mat
dNdt = r*N - r*N*(N/K_mat)
list(c(dNdt))
})
}
times <- seq(0,100)
out <- ode (y = state,
times = times,
func = nestchange,
parms = parameters)
results <- as.data.frame(out)
results <- gather(results, 'nest', 'N', 2:11)
ggplot(data=results,
aes(x=time, y=N, colour=nest)) +
geom_line() +
theme_bw()
Should your function actually be,
nestchange <- function(t, state, parameters){
  with(as.list(c(state, parameters)),{
    r <- for_mat %*% tree_mat - death_mat
    dNdt <- r*state - r*state*(state/K_mat)
    list(c(dNdt))
  })
}
as the ODE solver passes state to the function at each time step, yet the function uses the global variable N for its calculations instead, and N is never updated by the solver.
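A quick way to confirm the fix (a sketch, assuming the rest of the script above is unchanged): rerun the solver with the corrected function and check that no population exceeds the carrying capacity any more.

out <- ode(y = state, times = times, func = nestchange, parms = parameters)
max(out[, -1])  # should no longer noticeably exceed K = 150000 once dNdt uses 'state'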
obj1<-function(monthly.savings,
success,
start.capital,
target.savings,
monthly.mean.return,
monthly.ret.std.dev,
monthly.inflation,
monthly.inf.std.dev,
n.obs,
n.sim=1000){
req = matrix(start.capital, n.obs+1, n.sim) #matrix for storing target weight
monthly.invest.returns = matrix(0, n.obs, n.sim)
monthly.inflation.returns = matrix(0, n.obs, n.sim)
monthly.invest.returns[] = rnorm(n.obs * n.sim, mean = monthly.mean.return, sd = monthly.ret.std.dev)
monthly.inflation.returns[] = rnorm(n.obs * n.sim, mean = monthly.inflation, sd = monthly.inf.std.dev)
#for loop to be
for (a in 1:n.obs){
req[a + 1, ] = req[a, ] * (1 + monthly.invest.returns[a,] - monthly.inflation.returns[a,]) + monthly.savings
}
ending.values=req[nrow(req),]
suc<-sum(ending.values>target.savings)/n.sim
value<-success-suc
return(abs(value))
}
I have the above objective function that I want to minimize. It tries to solve for the monthly savings required to reach a given probability of success. Given the following input assumptions:
success<-0.9
start.capital<-1000000
target.savings<-1749665
monthly.savings=10000
monthly.mean.return<-(5/100)/12
monthly.ret.std.dev<-(3/100)/sqrt(12)
monthly.inflation<-(5/100)/12
monthly.inf.std.dev<-(1.5/100)/sqrt(12)
monthly.withdrawals<-10000
n.obs<-10*12 #years * 12 months in a year
n.sim=1000
I used the following call:
optimize(f=obj1,
success=success,
start.capital=start.capital,
target.savings=target.savings,
monthly.mean.return=monthly.mean.return,
monthly.ret.std.dev=monthly.ret.std.dev,
monthly.inflation=monthly.inflation,
monthly.inf.std.dev=monthly.inf.std.dev,
n.obs = n.obs,
n.sim = n.sim,
lower = 0,
upper = 10000,
tol = 0.000000001,maximum=F)
I get 7875.03
Since I am sampling from a normal distribution, the output will differ each run, but the results should be about the same, give or take a few percentage points. The problem I am having is that I can't specify an upper limit arbitrarily: the upper limit in the example above (10000) was cherry-picked after numerous trials. If, say, I put in an upper limit of 100000 (unreasonable, I know), it will return that number as opposed to finding the savings value that actually minimizes the objective. Any ideas where I am structuring my objective function incorrectly?
Thanks,
The fact that your function does not always return the same output for a given input is likely to pose a few problems (it will create a lot of spurious local minima). You can avoid them by setting the seed of the random number generator inside the function (e.g., set.seed(1)), by storing the random numbers and reusing them each time, or by using a low-discrepancy sequence (e.g., randtoolbox::sobol).
Since it is a function of one variable, you can simply plot it to see what happens: it has a plateau after 10,000, and optimization algorithms cannot distinguish between a plateau and a local optimum.
f <- function(x) {
  set.seed(1)
  obj1(x,
       success = success,
       start.capital = start.capital,
       target.savings = target.savings,
       monthly.mean.return = monthly.mean.return,
       monthly.ret.std.dev = monthly.ret.std.dev,
       monthly.inflation = monthly.inflation,
       monthly.inf.std.dev = monthly.inf.std.dev,
       n.obs = n.obs,
       n.sim = n.sim
  )
}
g <- Vectorize(f)
curve(g(x), xlim=c(0, 20000))
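As a sketch of the second option mentioned above (drawing the random numbers once and reusing them on every call, so the objective becomes deterministic), one can pre-generate the return matrices outside the function; obj1_fixed is a hypothetical variant of obj1 that closes over them:

## draw the returns once, outside the objective (common random numbers)
inv.ret <- matrix(rnorm(n.obs * n.sim, monthly.mean.return, monthly.ret.std.dev), n.obs, n.sim)
inf.ret <- matrix(rnorm(n.obs * n.sim, monthly.inflation, monthly.inf.std.dev), n.obs, n.sim)

obj1_fixed <- function(monthly.savings) {
  req <- matrix(start.capital, n.obs + 1, n.sim)
  for (a in 1:n.obs)
    req[a + 1, ] <- req[a, ] * (1 + inv.ret[a, ] - inf.ret[a, ]) + monthly.savings
  ending.values <- req[nrow(req), ]
  abs(success - sum(ending.values > target.savings) / n.sim)
}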
Your initial problem is actually not a minimization problem, but a root-finding problem, which is much easier.
obj2 <- function(monthly.savings) {
  set.seed(1)
  req <- matrix(start.capital, n.obs+1, n.sim)
  monthly.invest.returns <- matrix(0, n.obs, n.sim)
  monthly.inflation.returns <- matrix(0, n.obs, n.sim)
  monthly.invest.returns[] <- rnorm(n.obs * n.sim, mean = monthly.mean.return, sd = monthly.ret.std.dev)
  monthly.inflation.returns[] <- rnorm(n.obs * n.sim, mean = monthly.inflation, sd = monthly.inf.std.dev)
  for (a in 1:n.obs)
    req[a + 1, ] <- req[a, ] * (1 + monthly.invest.returns[a,] - monthly.inflation.returns[a,]) + monthly.savings
  ending.values <- req[nrow(req),]
  suc <- sum(ending.values > target.savings)/n.sim
  success - suc
}
uniroot( obj2, c(0, 1e6) )
# [1] 7891.187
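As a quick sanity check, uniroot() requires the objective to change sign over the search interval, which obj2 does here:

c(obj2(0), obj2(1e6))  # positive at the lower bound, negative at the upper bound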