deSolve ODE Not Working with Differential Equations (Calculates NA)

I have been using deSolve with method ode45, which had been working until I made a few necessary changes to my equations. Does anyone know why the ODE solver is not working? I have tried running with ode45 as well as the default ode method, and neither works. Please let me know if any further explanation would be helpful.
I have checked over the differential equations and I am confident they are correct.
The equations used are as follows:
CCHFModel = function(t,x,params)
{
  # get SIR values
  SH <- x[1]
  EH <- x[2]
  IA <- x[3]
  IS <- x[4]
  RH <- x[5]
  ST <- x[6]
  IT <- x[7]
  SC <- x[9]
  IC <- x[10]
  RC <- x[11]
  # Load values ----
  # Beta values
  betaHHA = params["betaHHA"]
  betaHHS = params["betaHHS"]
  betaTH = params["betaTH"]
  betaCH = params["betaCH"]
  betaTC = params["betaTC"]
  betaCT = params["betaCT"]
  betaTT = params["betaTT"]
  # Gamma value
  gamma = params["gamma"]
  # death rates
  muH = params["muH"]
  muT = params["muT"]
  muC = params["muC"]
  # birth rates
  piH = params["piH"]
  piT = params["piT"]
  piC = params["piC"]
  # incubation
  deltaHS = params["deltaHS"]
  deltaHA = params["deltaHA"]
  # recovery rate
  alphaA = params["alphaA"]
  alphaS = params["alphaS"]
  alphaC = params["alphaC"]
  # total population
  NH = (SH + IA + IS + EH + RH) + (piH * SH) - (muH * SH)
  NT = (ST + IT) + (piT * ST) - (muT * ST)
  NC = (SC + IC + RC) + (piC * SC) - (muH * SC)
  # tick carrying Capacity
  # KT = NC * 130 # 130 ticks per carrier max
  # computations ----
  dSHdt <- (piH * NH) - (betaHHA * IA + betaHHS * IS + betaCH * IC + betaTH * IT)*(SH/NH) - (muH * SH)
  dEHdt <- (betaHHA * IA + betaHHS * IS + betaCH * IC + betaTH * IT)*(SH/NH) - ((deltaHA + muH)*EH)
  dIAdt <- (deltaHA * EH) - ((alphaA + muH + deltaHS) * IA)
  dISdt <- (deltaHS * IA) - ((alphaS + muH + gamma) * IS)
  dRHdt <- alphaA * IA + alphaS * IS - muH*RH
  dSTdt <- (piT * NT) - (betaTT * IT + betaCT * IC)*(ST/NT) - (muT * ST)
  dITdt <- (betaTT * IT + betaCT * IC)*(ST/NT) - (muT * IT)
  dSCdt <- (piC * NC) - (betaTC * IT)*(SC/NC) - (muC * SC)
  dICdt <- (betaTC * IT)*(SC/NC) - ((alphaC + muC) * IC)
  dRCdt <- (alphaC * IC) - (muC * RC)
  # return results
  list(c(dSHdt, dEHdt, dIAdt, dISdt, dRHdt, dSTdt, dITdt, dSCdt, dICdt, dRCdt))
}
I run the ODE solver using:
defaultParms = c(betaHHA = .0413,
betaHHS = .0413,
betaTH = .2891,
betaCH = .0826,
betaTC = (1/365),
betaCT = 59/365,
betaTT = ((1/(365 * 2)) * .04) * 280,
gamma = 1/10,
muH = (1/(365 * 73)),
muT = (1/(365 * 2)),
muC = (1/(11 * 365)),
piH = 1.25/(73 * 365),
piT = 4.5/730,
piC = 1/(11 * 365),
deltaHS = 1/3,
deltaHA = 1/2,
alphaA = 1/17,
alphaS = 1/17,
alphaC = 1/7)
# times at which to solve the system
t = seq(from = 0, to = 365, by = 0.1)
# initial conditions
initialConditions = c(SH = 10000, EH = 5, IA = 5, IS = 10, RH = 2, ST = 80000, IT = 50, SC = 30000, IC = 5, RC = 1)
library(deSolve)   # for ode()
library(magrittr)  # for the %>% pipe
dataSet = ode(y = initialConditions, times = t, func = CCHFModel, parms = defaultParms) %>%
  as.data.frame()
After running this, all the output following the initial conditions is NA.

This is due to a typo: you misnumbered the translation of input values in the first section of your code (i.e., you skipped x[8]). I will go through two (hopefully) useful exercises, first explaining how I debugged this and then showing how to rewrite your function to make it less error-prone ...
debugging
Try running the gradient function for t=0, x=<initial conditions>:
CCHFModel(0,initialConditions, defaultParms)
## piH betaHHA deltaHA deltaHS alphaA piT
## -15.02882327 12.62349834 0.53902803 0.07805607 0.88227788 385.31052332
## betaTT piC betaTC alphaC
## 0.85526763 NA NA NA
Hmm, we already see we have a problem. Why are the last three elements of the computed gradients NA?
Add browser() near the end of the function (before the dSCdt <- ... line) so we can take a closer look. Redefine the function, and try computing the gradient again.
When we get there and print out some of the quantities involved in the computation we see that both NC and RC are NA ... we can also see that an NA value of RC will cause NC to be NA, so let's check the definition of RC ...
Aha! RC is defined as x[11], but length(initialConditions) is only 10 ... and a closer look shows that we skipped x[8]. Redefining properly gives non-NA values throughout (I don't know if they're correct, but at least they're not NA).
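For concreteness, the corrected unpacking of the last three states (shifting each index down by one) is:
SC <- x[8]
IC <- x[9]
RC <- x[10]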
error-proofing (1)
Although using [] or [[]] to extract elements of a vector usually gives equivalent answers, you should always use [[]] when you want to extract a single element (a scalar) from a vector. Here's why:
initialConditions[11] ## NA
initialConditions[[11]] ## Error in x[[11]] : subscript out of bounds
If you use [], the NA propagates through your code and you have to hunt down the original source. If you use [[]], R fails right away and tells you where the problem is. An additional benefit is that [] propagates the names of the vector elements in a way that doesn't usually make sense (take a look at the names of the output in "debugging/1" above ...)
error-proofing (2)
You can avoid all of the tedious and error-prone unpacking of the parameter and state vectors by replacing the unpacking code (everything before the computation of total populations) with
comb <- c(as.list(x), as.list(params))
attach(comb)
on.exit(detach(comb))
Provided that your parameter and state vectors are properly named (and there are no names that overlap between them), this will create a named list and allow looking up of the elements by name within your function; on.exit(detach(comb)) makes sure that everything gets cleaned up properly at the end. (You will see recommendations to use with() to do this; I prefer the strategy here because it makes debugging within the function [if necessary] easier. But as @tpetzoldt notes in the comments, you should always pair attach(...) with on.exit(detach(...)); otherwise things get very confusing and messy ...)
At the end of the function I would use
g <- c(dSHdt, dEHdt, dIAdt, dISdt, dRHdt, dSTdt, dITdt, dSCdt, dICdt, dRCdt)
names(g) <- names(x)
list(g)
to make sure the gradient vector is properly labeled, which makes troubleshooting easier.
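Putting those pieces together, here is a sketch of the full rewritten gradient function (the equations are copied verbatim from the question, with the unpacking replaced by the attach() idiom; I haven't re-checked the model itself):
CCHFModel = function(t, x, params)
{
  comb <- c(as.list(x), as.list(params))
  attach(comb)
  on.exit(detach(comb))
  # total population
  NH = (SH + IA + IS + EH + RH) + (piH * SH) - (muH * SH)
  NT = (ST + IT) + (piT * ST) - (muT * ST)
  NC = (SC + IC + RC) + (piC * SC) - (muH * SC)
  # computations ----
  dSHdt <- (piH * NH) - (betaHHA * IA + betaHHS * IS + betaCH * IC + betaTH * IT)*(SH/NH) - (muH * SH)
  dEHdt <- (betaHHA * IA + betaHHS * IS + betaCH * IC + betaTH * IT)*(SH/NH) - ((deltaHA + muH)*EH)
  dIAdt <- (deltaHA * EH) - ((alphaA + muH + deltaHS) * IA)
  dISdt <- (deltaHS * IA) - ((alphaS + muH + gamma) * IS)
  dRHdt <- alphaA * IA + alphaS * IS - muH*RH
  dSTdt <- (piT * NT) - (betaTT * IT + betaCT * IC)*(ST/NT) - (muT * ST)
  dITdt <- (betaTT * IT + betaCT * IC)*(ST/NT) - (muT * IT)
  dSCdt <- (piC * NC) - (betaTC * IT)*(SC/NC) - (muC * SC)
  dICdt <- (betaTC * IT)*(SC/NC) - ((alphaC + muC) * IC)
  dRCdt <- (alphaC * IC) - (muC * RC)
  # labeled gradient vector
  g <- c(dSHdt, dEHdt, dIAdt, dISdt, dRHdt, dSTdt, dITdt, dSCdt, dICdt, dRCdt)
  names(g) <- names(x)
  list(g)
}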


Multi-parameter optimization in R

I'm trying to estimate parameters that will maximize the likelihood of a certain event. My objective function looks like that:
event_prob = function(p1, p2) {
  x = ((1-p1-p2)^4)^67 *
    ((1-p1-p2)^3*p2)^5 *
    ((1-p1-p2)^3*p1)^2 *
    ((1-p1-p2)^2*p1*p2)^3 *
    ((1-p1-p2)^2*p1^2) *
    ((1-p1-p2)*p1^2*p2)^2 *
    (p1^3*p2) *
    (p1^4)
  return(x)
}
In this case, I'm looking for p1 and p2 in [0,1] that will maximize this function. I tried using optim() in the following manner:
aaa = optim(c(0,0),event_prob)
but I'm getting an error "Error in fn(par, ...) : argument "p2" is missing, with no default".
Am I using optim() wrong? Or is there a different function (package?) I should be using for multi-parameter optimization?
This problem can in fact be solved analytically.
The objective function simplifies to
F(p1,p2) = (1-p1-p2)^299 * p1^18 * p2^11
which is to be maximised over the region
C = { (p1,p2) | 0<=p1, 0<=p2, p1+p2<=1 }
Note that F is 0 if p1 = 0 or p2 = 0 or p1 + p2 = 1, while if none of those hold then F is positive. Thus the maximum of F occurs in the interior of C.
Taking the log
f(p1,p2) = 299*log(1-p1-p2) + 18*log(p1) + 11*log(p2)
In fact it is just as easy to solve the more general problem: maximise f over C, where
f(p1,..,pN) = b*log(1 - p1 - .. - pN) + Sum_j a[j]*log(p[j])
where b and each a[j] are positive, and
C = { (p1,..,pN) | 0 < p[j], j=1..N, and p1 + p2 + .. + pN < 1 }
The critical point occurs where all the partial derivatives of f are zero, which is at
-b/(1 - p1 - .. - pN) + a[j]/p[j] = 0, j = 1..N
which can be written as
b*p[j] + a[j]*(p1 + .. + pN) = a[j], j = 1..N
or
M*p = a
where M = b*I + a*Ones', and Ones is a vector with each component 1.
The inverse of M is (by the Sherman-Morrison formula)
inv(M) = (1/b)*(I - a*Ones'/(b + Ones'*a))
Thus the unique critical point is
p^ = inv(M)*a
   = a/(b + Sum_i a[i])
Since there is a maximum, and only one critical point, the critical point must be the maximum.
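As a quick numerical cross-check of the closed form (using b = 299 and a = (18, 11) as read off above; f below is just a hypothetical helper for the negative log-objective):
a <- c(18, 11); b <- 299
p_star <- a / (b + sum(a))  # closed-form maximizer: approx (0.0549, 0.0335)
p_star
# numerical check: minimise the negative log-objective, rejecting points outside C
f <- function(p) {
  if (any(p <= 0) || sum(p) >= 1) return(Inf)
  -(b * log(1 - sum(p)) + sum(a * log(p)))
}
optim(c(0.1, 0.1), f)$par   # Nelder-Mead; should agree with p_star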
Based on Erwin Kalvelagen's comment: Redefine your function event_prob:
event_prob = function(p) {
  p1 = p[1]
  p2 = p[2]
  x = ((1-p1-p2)^4)^67 *
    ((1-p1-p2)^3*p2)^5 *
    ((1-p1-p2)^3*p1)^2 *
    ((1-p1-p2)^2*p1*p2)^3 *
    ((1-p1-p2)^2*p1^2) *
    ((1-p1-p2)*p1^2*p2)^2 *
    (p1^3*p2) *
    (p1^4)
  return(x)
}
You may want to set limits to ensure that p1 and p2 fulfill your constraints:
optim(c(0.5,0.5),event_prob,method="L-BFGS-B",lower=0,upper=1)

Renewal Function for Weibull Distribution

The renewal function m(t) for the Weibull distribution with t = 10 is given by the series m(t) = Sum_{k>=1} (-1)^(k-1) * A_k * t^(2k) / Γ(2k+1), with the coefficients A_k defined by the recursion implemented below.
I want to find the value of m(t). I wrote the following R code to compute m(t):
last_term = NULL
gamma_k = NULL
n = 50
for(k in 1:n){
  gamma_k[k] = gamma(2*k + 1)/factorial(k)
}
for(j in 1:(n-1)){
  prev = gamma_k[n-j]
  last_term[j] = gamma(2*j + 1)/factorial(j)*prev
}
final_term = NULL
find_value = function(n){
  for(i in 2:n){
    final_term[i] = gamma_k[i] - sum(last_term[1:(i-1)])
  }
  return(final_term)
}
all_k = find_value(n)
af_sum = NULL
m_t = function(t){
  for(k in 1:n){
    af_sum[k] = (-1)^(k-1) * all_k[k] * t^(2*k)/gamma(2*k + 1)
  }
  return(sum(na.omit(af_sum)))
}
m_t(20)
The output is m(t) = 2.670408e+93. Is my iterative procedure correct? Thanks.
I don't think it will work. First, let's move Γ(2k+1) from the denominator of m(t) into A_k. Thus, A_k will behave roughly as 1/k!.
In the numerator of the m(t) terms there is t^(2k), so roughly speaking you're computing a sum with terms
100^k / k!
From the Stirling formula
k! ~ k^k
making the terms roughly
(100/k)^k
so yes, they will start to decrease and converge to something, but only after the 100th term.
Anyway, here is the code. You could try to improve it, but it breaks at k ~ 70.
N <- 20
A <- rep(0, N)
# compute A_k/gamma(2k+1) terms
ps <- 0.0 # previous sum
A[1] = 1.0
for(k in 2:N) {
  ps <- ps + A[k-1]*gamma(2*(k-1) + 1)/factorial(k-1)
  A[k] <- 1.0/factorial(k) - ps/gamma(2*k+1)
}
print(A)
t <- 10.0
t2 <- t*t
r <- 0.0
for(k in 1:N){
  r <- r + (-t2)^k*A[k]
}
print(-r)
UPDATE
Ok, I calculated A_k as in your question and got the same answer. I want to estimate the terms A_k/Γ(2k+1) from m(t); I believe they will be pretty much dominated by the 1/k! term. To check that, I made another array, k!*A_k/Γ(2k+1), which should be close to one.
Code
N <- 20
A <- rep(0.0, N)
psum <- function( pA, k ) {
  ps <- 0.0
  if (k >= 2) {
    jmax <- k - 1
    for(j in 1:jmax) {
      ps <- ps + (gamma(2*j+1)/factorial(j))*pA[k-j]
    }
  }
  ps
}
# compute the A_k terms
A[1] = gamma(3)
for(k in 2:N) {
  A[k] <- gamma(2*k+1)/factorial(k) - psum(A, k)
}
print(A)
B <- rep(0.0, N)
for(k in 1:N) {
  B[k] <- (A[k]/gamma(2*k+1))*factorial(k)
}
print(B)
shows that:
I got the same A_k values as you did.
B_k is indeed very close to 1.
It means that the term A_k/Γ(2k+1) could be replaced by 1/k! to get a quick estimate of what we might get (with that replacement):
m(t) ~= -Sum{k=1..Infinity} (-1)^k (t^2)^k / k! = 1 - Sum{k=0..Infinity} (-t^2)^k / k!
This is actually a well-known sum, equal to exp() with a negative argument (well, you have to add the term for k=0):
m(t) ~= 1 - exp(-t^2)
Conclusions
The approximate value is positive, and it will probably stay positive after all; A_k/Γ(2k+1) is only a bit different from 1/k!.
We're talking about 1 - exp(-100), which is 1 - 3.72*10^(-44)! And we're trying to compute it precisely by summing and subtracting values on the order of 10^100 or even higher. Even with MPFR I don't think this is possible (see the demonstration below).
Another approach is needed
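To see how hopeless naive summation is in double precision, replace A_k/Γ(2k+1) by 1/k! as above and sum the resulting alternating series for t = 10, building the terms by cumulative products to avoid overflow:
t2 <- 100
kmax <- 400
terms <- cumprod(c(1, -t2 / (1:kmax)))  # term_k = (-t2)^k / k!
max(abs(terms))  # peak term is about 1e42, near k = 100
sum(terms)       # rounding garbage, many orders of magnitude off
exp(-t2)         # target value: about 3.72e-44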
OK, so I ended up going down a pretty different road on this. I have implemented a simple discretization of the integral equation which defines the renewal function:
m(t) = F(t) + integrate (m(t - s)*f(s), s, 0, t)
The integral is approximated with the rectangle rule. Approximating the integral for different values of t gives a system of linear equations. I wrote a function to generate the equations and extract a matrix of coefficients from it. After looking at some examples, I guessed a rule to define the coefficients directly and used that to generate solutions for some examples. In particular I tried shape = 2, t = 10, as in OP's example, with step = 0.1 (so 101 equations).
I found that the result agrees pretty well with an approximate result which I found in a paper (Baxter et al., cited in the code). Since the renewal function is the expected number of events, for large t it is approximately equal to t/mu where mu is the mean time between events; this is a handy way to know if we're anywhere in the neighborhood.
I was working with Maxima (http://maxima.sourceforge.net), which is not efficient for numerical stuff, but which makes it very easy to experiment with different aspects. At this point it would be straightforward to port the final, numerical stuff to another language such as Python.
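As an illustration of such a port, here is a rough R sketch of the same rectangle-rule coefficients (assumptions: shape = 2, scale = 1, n = 100, matching the run below; fw/Fw are just local names for the Weibull pdf/cdf):
shape <- 2; scale <- 1
t_end <- 10; n <- 100
dt <- t_end / n
tt <- seq(0, t_end, by = dt)
fw <- function(x) dweibull(x, shape, scale)  # Weibull density
Fw <- function(x) pweibull(x, shape, scale)  # Weibull cdf
# coefficient matrix for m = F + A %*% m, following generate_rhs_matrix below
A <- matrix(0, n + 1, n + 1)
for (i in 2:(n + 1)) {
  A[i, i] <- fw(0) * dt / 2
  A[i, 1] <- fw(dt * (i - 1)) * dt / 2
  if (i > 2) for (j in 2:(i - 1)) A[i, j] <- fw(dt * (i - j)) * dt
}
m <- solve(diag(n + 1) - A, Fw(tt))  # solve (I - A) m = F
tail(m, 1)  # approximate m(10)
# Baxter et al. large-t approximation, for comparison
mu <- scale * gamma(1 + 1/shape)
sig2 <- scale^2 * gamma(1 + 2/shape) - mu^2
t_end/mu + sig2/(2*mu^2) - 1/2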
Thanks to OP for suggesting the problem, and S. Pappadeux for insightful discussions. Here is the plot I got comparing the discretized approximation (red) with the approximation for large t (blue). Trying some examples with different step sizes, I saw that the values tend to increase a little as step size gets smaller, so I think the red line is probably a little low, and the blue line might be more nearly correct.
Here is my Maxima code:
/* discretize weibull renewal function and formulate system of linear equations
* copyright 2020 by Robert Dodier
* I release this work under terms of the GNU General Public License
*
* This is a program for Maxima, a computer algebra system.
* http://maxima.sourceforge.net/
*/
"Definition of the renewal function m(t):" $
renewal_eq: m(t) = F(t) + 'integrate (m(t - s)*f(s), s, 0, t);
"Approximate integral equation with rectangle rule:" $
discretize_renewal (delta_t, k) :=
  if equal(k, 0)
  then m(0) = F(0)
  else m(k*delta_t) = F(k*delta_t)
         + m(k*delta_t)*f(0)*(delta_t / 2)
         + sum (m((k - j)*delta_t)*f(j*delta_t)*delta_t, j, 1, k - 1)
         + m(0)*f(k*delta_t)*(delta_t / 2);
make_eqs (n, delta_t) :=
  makelist (discretize_renewal (delta_t, k), k, 0, n);
make_vars (n, delta_t) :=
  makelist (m(k*delta_t), k, 0, n);
"Discretized integral equation and variables for n = 4, delta_t = 1/2:" $
make_eqs (4, 1/2);
make_vars (4, 1/2);
make_eqs_vars (n, delta_t) :=
  [make_eqs (n, delta_t), make_vars (n, delta_t)];
load (distrib);
subst_pdf_cdf (shape, scale, e) :=
  subst ([f = lambda ([x], pdf_weibull (x, shape, scale)), F = lambda ([x], cdf_weibull (x, shape, scale))], e);
matrix_from (eqs, vars) :=
  (augcoefmatrix (eqs, vars),
   [submatrix (%%, length(%%) + 1), - col (%%, length(%%) + 1)]);
"Substitute Weibull pdf and cdf for shape = 2 into discretized equation:" $
apply (matrix_from, make_eqs_vars (4, 1/2));
subst_pdf_cdf (2, 1, %);
"Just the right-hand side matrix:" $
rhs_matrix_from (eqs, vars) :=
  (map (rhs, eqs),
   augcoefmatrix (%%, vars),
   [submatrix (%%, length(%%) + 1), col (%%, length(%%) + 1)]);
"Generate the right-hand side matrix, instead of extracting it from equations:" $
generate_rhs_matrix (n, delta_t) :=
  [delta_t * genmatrix (lambda ([i, j], if i = 1 and j = 1 then 0
                                        elseif j > i then 0
                                        elseif j = i then f(0)/2
                                        elseif j = 1 then f(delta_t*(i - 1))/2
                                        else f(delta_t*(i - j))), n + 1, n + 1),
   transpose (makelist (F(k*delta_t), k, 0, n))];
"Generate numerical right-hand side matrix, skipping over formulas:" $
generate_rhs_matrix_numerical (shape, scale, n, delta_t) :=
  block ([f, F, numer: true], local (f, F),
    f: lambda ([x], pdf_weibull (x, shape, scale)),
    F: lambda ([x], cdf_weibull (x, shape, scale)),
    [genmatrix (lambda ([i, j], delta_t * if i = 1 and j = 1 then 0
                                          elseif j > i then 0
                                          elseif j = i then f(0)/2
                                          elseif j = 1 then f(delta_t*(i - 1))/2
                                          else f(delta_t*(i - j))), n + 1, n + 1),
     transpose (makelist (F(k*delta_t), k, 0, n))]);
"Solve approximate integral equation (shape = 3, t = 1) via LU decomposition:" $
fpprintprec: 4 $
n: 20 $
t: 1;
[AA, bb]: generate_rhs_matrix_numerical (3, 1, n, t/n);
xx_by_lu: linsolve_by_lu (ident(n + 1) - AA, bb, floatfield);
"Iterative solution of approximate integral equation (shape = 3, t = 1):" $
xx: bb;
for i thru 10 do xx: AA . xx + bb;
xx - (AA.xx + bb);
xx_iterative: xx;
"Should find iterative and LU give same result:" $
xx_diff: xx_iterative - xx_by_lu[1];
sqrt (transpose(xx_diff) . xx_diff);
"Try shape = 2, t = 10:" $
n: 100 $
t: 10 $
[AA, bb]: generate_rhs_matrix_numerical (2, 1, n, t/n);
xx_by_lu: linsolve_by_lu (ident(n + 1) - AA, bb, floatfield);
"Baxter, et al., Eq. 3 (for large values of t) compared to discretization:" $
/* L.A. Baxter, E.M. Scheuer, D.J. McConalogue, W.R. Blischke.
* "On the Tabulation of the Renewal Function,"
* Technometrics, vol. 24, no. 2 (May 1982).
* H(t) is their notation for the renewal function.
*/
H(t) := t/mu + sigma^2/(2*mu^2) - 1/2;
tx_points: makelist ([float (k/n*t), xx_by_lu[1][k, 1]], k, 1, n);
plot2d ([H(u), [discrete, tx_points]], [u, 0, t]), mu = mean_weibull(2, 1), sigma = std_weibull(2, 1);

Is it possible to use rk4 and rootfun in ode (package deSolve)

I'm trying to model a prey-prey-predator system using differential equations based on the LV model. For the sake of precision, I need to use the Runge-Kutta 4 method.
But given the equations, some of the populations quickly become negative.
So I tried to use the events/root system of ode, but it seems that rk4 and rootfun are not compatible...
eventFunc <- function(t, y, p){
  if (y["N1"] < 0) { y["N1"] = 0 }
  if (y["N2"] < 0) { y["N2"] = 0 }
  if (y["P"] < 0) { y["P"] = 0 }
  return(y)
}
rootFunction <- function(t, y, p){
  if (y["P"] < 0) { y["P"] = 0 }
  if (y["N1"] < 0) { y["N1"] = 0 }
  if (y["N2"] < 0) { y["N2"] = 0 }
  return(y)
}
out <- ode(func=Model_T2.2,
           method="rk4",
           y=state,
           parms=parameters,
           times=times,
           events = list(func = eventFunc,
                         root = TRUE),
           rootfun = rootFunction
)
This code gives me the following error:
Error in checkevents(events, times, Ynames, dllname) :
either 'events$time' should be given and contain the times of the events, if 'events$func' is specified and no root function or your solver does not support root functions
Is there any solution to use rk4 and forbid the functions to go under 0?
Thanks in advance.
For those who might ask, here is what works:
if(!require(ggplot2)) {
  install.packages("ggplot2"); require(ggplot2)}
if(!require(deSolve)) {
  install.packages("deSolve"); require(deSolve)}
if(!require(reshape2)) {  # melt() below comes from reshape2
  install.packages("reshape2"); require(reshape2)}
Model_T2.2 <- function(t, state, par){
  with(as.list(c(state, par)), {
    response1 <- (a1 * N1)/(1+(a1*h1*N1)+(a2*h2*N2))
    response2 <- (a2 * N2)/(1+(a1*h1*N1)+(a2*h2*N2))
    dN1 = r1*N1 * (1 - ((N1 + A12 * N2)/K1)) - response1 * P
    dN2 = r2*N2 * (1 - ((N1 + A21 * N2)/K2)) - response2 * P
    dP = ((E1 * response1) + (E2 * response2)) * P - Mp
    return(list(c(dN1, dN2, dP)))
  })
}
parameters <- c(
  r1=1.42, r2=0.9,
  A12=0.6, A21=0.5,
  K1=50, K2=50,
  a1=0.77, a2=0.77,
  b1=1, b2=1,
  h1=1.04, h2=1.04,
  o1=0, o2=0,
  Mp=0.22,
  E1=0.36, E2=0.36
)
## initial states
state <- c(
  P=10,
  N1=30,
  N2=30
)
times <- seq(0, 30, by=0.5)
out <- ode(func=Model_T2.2,
           method="rk4",
           y=state,
           parms=parameters,
           times=times,
           events = list(func = eventFunc,
                         root = TRUE),
           rootfun = rootFunction
)
md <- melt(as.data.frame(out), id.vars=1, measure.vars = c("N1", "N2", "P"))
pl <- ggplot(md, aes(x=time, y=value, colour=variable))
pl <- pl + geom_line() + geom_point() + scale_color_discrete(name="Population")
pl
And the result in a graph :
Evolution of prey1, prey2 and predator populations
As you can see, the population of predators becomes negative, which is clearly impossible in the real world.
Edit: missing variables, sorry about that.
This is a problem you will have with all explicit solvers like rk4. Reducing the time step will help, up to a point. Better to use a solver with an implicit method; lsoda seems universally available in one form or another.
Another way to explicitly force positive values is to parametrize them as exponentials. Set N1 = exp(U1), N2 = exp(U2); then (since dN = exp(U)*dU = N*dU) the ODE function code translates to
N1 <- exp(U1)
N2 <- exp(U2)
response1 <- (a1)/(1+(a1*h1*N1)+(a2*h2*N2))
response2 <- (a2)/(1+(a1*h1*N1)+(a2*h2*N2))
dU1 = r1 * (1 - ((N1 + A12 * N2)/K1)) - response1 * P
dU2 = r2 * (1 - ((N1 + A21 * N2)/K2)) - response2 * P
dP = ((E1 * response1*N1) + (E2 * response2*N2)) * P - Mp
For the output you have then of course to reconstruct N1, N2 from the solutions U1, U2.
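For example (a sketch; Model_T2.2_log is a hypothetical wrapper containing the transformed code above, with states U1, U2, P):
state_log <- c(U1 = log(30), U2 = log(30), P = 10)  # same initial populations
out <- ode(func = Model_T2.2_log, y = state_log, parms = parameters,
           times = times, method = "lsoda")
N1 <- exp(out[, "U1"])  # reconstruct the populations
N2 <- exp(out[, "U2"])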
Thanks to J_F, I am now able to run my L-V model.
The radau (not randau, as you mentioned) function indeed accepts root functions and events, and it implements an implicit Runge-Kutta method.
Thanks again; I hope this will help someone in the future.
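For reference, a sketch of such a call. Note that deSolve expects rootfun to return numeric values whose zero crossings define the root (not a modified state, as in the question's rootFunction); the event function then does the actual modification:
rootFun <- function(t, y, p) y            # root fires when any state crosses zero
eventFun <- function(t, y, p) pmax(y, 0)  # event: clip the offending state at zero
out <- ode(func = Model_T2.2, y = state, parms = parameters, times = times,
           method = "radau",              # implicit solver that supports roots and events
           events = list(func = eventFun, root = TRUE),
           rootfun = rootFun)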

Solving a system of differential equations in R

I have a simple flux model in R. It boils down to two differential equations that model two state variables within the model, we'll call them A and B. They are calculated as simple difference equations of four component fluxes flux1-flux4, 5 parameters p1-p5, and a 6th parameter, of_interest, that can take on values between 0-1.
parameters<- c(p1=0.028, p2=0.3, p3=0.5, p4=0.0002, p5=0.001, of_interest=0.1)
state <- c(A=28, B=1.4)
model <- function(t, state, parameters){
  with(as.list(c(state, parameters)), {
    # fluxes
    flux1 = (1-of_interest) * p1*(B / (p2 + B))*p3
    flux2 = p4* A # microbial death
    flux3 = of_interest * p1*(B / (p2 + B))*p3
    flux4 = p5* B
    # differential equations of component fluxes
    dAdt <- flux1 - flux2
    dBdt <- flux3 - flux4
    list(c(dAdt, dBdt))
  })
}
I would like to write a function to take the derivative of dAdt with respect to of_interest, set the derived equation to 0, then rearrange and solve for the value of of_interest. This will be the value of the parameter of_interest that maximizes the function dAdt.
So far I have been able to solve the model at steady state, across the possible values of of_interest to demonstrate there should be a maximum.
require(rootSolve)
range <- seq(0, 1, by = 0.01)
out <- NULL  # collect steady-state values of A
for(i in range){
  of_interest = i
  parameters <- c(p1=0.028, p2=0.3, p3=0.5, p4=0.0002, p5=0.001, of_interest=of_interest)
  state <- c(A=28, B=1.4)
  ST <- stode(y = state, func = model, parms = parameters, pos = TRUE)
  out <- c(out, ST$y[1])
}
Then plotting:
plot(out~range, pch=16,col='purple')
lines(smooth.spline(out~range,spar=0.35), lwd=3,lty=1)
How can I analytically solve for the value of of_interest that maximizes dAdt in R? If an analytical solution is not possible, how can I know, and how can I go about solving this numerically?
Update: I think this problem can be solved with the deSolve package in R, linked here, however I am having trouble implementing it using my particular example.
Your equation in B(t) is just-about separable since you can divide out B(t), from which you can get that
B(t) = C * exp{-p5 * t} * (p2 + B(t)) ^ {of_interest * p1 * p3}
This is an implicit solution for B(t) which we'll solve point-wise.
You can solve for C given your initial value of B. I suppose t = 0 initially? In which case
C = B_0 / (p2 + B_0) ^ {of_interest * p1 * p3}
This also gives a somewhat nicer-looking expression for A(t):
dA(t) / dt = B_0 / (p2 + B_0) * p1 * p3 * (1 - of_interest) *
exp{-p5 * t} * ((p2 + B(t)) / (p2 + B_0)) ^
{of_interest * p1 * p3 - 1} - p4 * A(t)
This can be solved by integrating factor (= exp{p4 * t}), via numerical integration of the term involving B(t). We specify the lower limit of the integral as 0 so that we never have to evaluate B outside the range [0, t], which means the integrating constant is simply A_0 and thus:
A(t) = (A_0 + integral_0^t { f(tau; parameters) d tau}) * exp{-p4 * t}
The basic gist is B(t) is driving everything in this system -- the approach will be: solve for the behavior of B(t), then use this to figure out what's going on with A(t), then maximize.
First, the "outer" parameters; we also need nleqslv to get B:
library(nleqslv)
t_min <- 0
t_max <- 10000
t_N <- 10
#we'll only solve the behavior of A & B over t_rng
t_rng <- seq(t_min, t_max, length.out = t_N)
#I'm calling of_interest ttheta
ttheta_min <- 0
ttheta_max <- 1
ttheta_N <- 5
tthetas <- seq(ttheta_min, ttheta_max, length.out = ttheta_N)
B_0 <- 1.4
A_0 <- 28
#No sense storing this as a vector when we'll only ever use it as a list
parameters <- list(p1 = 0.028, p2 = 0.3, p3 = 0.5,
p4 = 0.0002, p5 = 0.001)
From here, the basic outline is:
Given the parameter values (in particular ttheta), solve for BB over t_rng via non-linear equation solving
Given BB and the parameter values, solve for AA over t_rng by numerical integration
Given AA and your expression for dAdt, plug & maximize.
derivs <-
  sapply(tthetas, function(th){
    #append current ttheta
    params <- c(parameters, ttheta = th)
    #declare a function we'll use to solve for B (see above)
    b_slv <- function(b, t)
      with(params, b - B_0 * ((p2 + b)/(p2 + B_0)) ^
                         (ttheta * p1 * p3) * exp(-p5 * t))
    #solving point-wise (this is pretty fast)
    # **See below for a note**
    BB <- sapply(t_rng, function(t) nleqslv(B_0, function(b) b_slv(b, t))$x)
    #this is f(tau; params) that I mentioned above;
    # we have to do linear interpolation since the
    # numerical integrator isn't constrained to the grid.
    # **See below for note**
    a_int <- function(t){
      #approximate t to the grid (t_rng)
      # (assumes B is monotonic, which seems to be true)
      # (also, if t ends up negative, just assign t_rng[1])
      t_n <- max(1L, which.max(t_rng - t >= 0) - 1L)
      idx <- t_n:(t_n+1)
      ts <- t_rng[idx]
      #distance-weighted average of the local B values
      B_app <- sum((-1) ^ (0:1) * (t - ts) / diff(ts) * BB[idx])
      #finally, f(tau; params)
      with(params, (1 - ttheta) * p1 * p3 * B_0 / (p2 + B_0) *
                     ((p2 + B_app)/(p2 + B_0)) ^ (ttheta * p1 * p3 - 1) *
                     exp((p4 - p5) * t))
    }
    #a_int only works on scalars; the numeric integrator
    # requires a version that works on vectors
    a_int_v <- function(t) sapply(t, a_int)
    AA <- exp(-params$p4 * t_rng) *
      sapply(t_rng, function(tt)
        #I found the subdivisions constraint binding in some cases
        # at the default value; no trouble at 1000.
        A_0 + integrate(a_int_v, 0, tt, subdivisions = 1000L)$value)
    #using the explicit version of dAdt given as flux1 - flux2
    max(with(params, (1 - ttheta) * p1 * p3 * BB / (p2 + BB) - p4 * AA))
  })
Finally, simply run `tthetas[which.max(derivs)]` to get the maximizer.
Note:
This code is not optimized for efficiency. There are a few places where there are some potential speed-ups:
Probably faster to run the equation solver recursively, as it'll converge faster with better initial guesses; using the previous value instead of the initial value is surely better.
Will be faster to simply use Riemann sums to integrate; the tradeoff is in accuracy, but it should be fine if you have a dense enough grid. One beauty of Riemann sums is that you won't have to interpolate at all, and numerically they're simple linear algebra (see the toy comparison after these notes). I ran this with t_N == ttheta_N == 1000L and it ran within a few minutes.
Probably possible to vectorize a_int directly instead of just sapplying on it, with a concomitant speed-up by more direct appeal to BLAS.
Loads of other small stuff. Pre-compute ttheta * p1 * p3 since it's re-used so much, etc.
I didn't bother including any of that stuff, though, because you're honestly probably better off porting this to a faster language -- Julia is my own pet favorite, but of course R speaks well with C++, C, Fortran, etc.
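To make the Riemann-sum point concrete, here is a toy self-contained comparison (a generic integrand, not the model's f):
g <- function(x) exp(-x^2)           # stand-in integrand
grid <- seq(0, 2, length.out = 1000)
dx <- diff(grid[1:2])
sum(g(grid[-length(grid)])) * dx     # left Riemann sum on the grid
integrate(g, 0, 2)$value             # adaptive quadrature, for comparison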

translating matlab script to R

I've just been working through converting some MATLAB scripts to work in R; however, having never used MATLAB in my life, and not exactly being an expert on R, I'm having some trouble.
Edit: It's a script I was given, designed to correct temperature measurements for lag generated by insulation mass effects. My understanding is that it looks at the rate of change of the temperature and attempts to adjust for errors generated by the response time of the sensor. Unfortunately there is no literature available to me to give an indication of the numbers I am expecting from the function, and the only way to find out will be to experimentally test it at a later date.
the original script:
function [Tc, dT] = CTD_TempTimelagCorrection(T0,Tau,t)
N1 = Tau/t;
Tc = T0;
N = 3;
for j=ceil(N/2):numel(T0)-ceil(N/2)
    A = nan(N,1);
    % Compute weights
    for k=1:N
        A(k) = (1/N) + N1 * ((12*k - (6*(N+1))) / (N*(N^2 - 1)));
    end
    A = A./sum(A);
    % Verify unity
    if sum(A) ~= 1
        disp('Error: Sum of weights is not unity');
    end
    Comp = nan(N,1);
    % Compute components
    for k=1:N
        Comp(k) = A(k)*T0(j - (ceil(N/2)) + k);
    end
    Tc(j) = sum(Comp);
    dT = Tc - T0;
end
where I've managed to get to:
CTD_TempTimelagCorrection <- function(temp,Tau,t){
  ## Define which equation to use based on duration of lag and frequency
  ## With ESM2 profiler sampling # 2hz: N1>tau/t = TRUE
  N1 = Tau/t
  Tc = temp
  N = 3
  for(i in ceiling(N/2):length(temp)-ceiling(N/2)){
    A = matrix(nrow=N,ncol=1)
    # Compute weights
    for(k in 1:N){
      A[k] = (1/N) + N1 * ((12*k - (6*(N+1))) / (N*(N^2 - 1)))
    }
    A = A/sum(A)
    # Verify unity
    if(sum(A) != 1){
      print("Error: Sum of weights is not unity")
    }
    Comp = matrix(nrow=N,ncol=1)
    # Compute components
    for(k in 1:N){
      Comp[k] = A[k]*temp[i - (ceiling(N/2)) + k]
    }
    Tc[i] = sum(Comp)
    dT = Tc - temp
  }
  return(dT)
}
I think the problem is the Comp[k] line, could someone point out what I've done wrong? I'm not sure I can select the elements of the array in such a way.
by the way, Tau = 1, t = 0.5 and temp (or T0) will be a vector.
Thanks
edit: apparently my description is too brief in explaining my code samples; not really sure what more I could write that would be relevant and not just waste people's time. Is this enough, Mr Filter?
The error is as follows:
Error in Comp[k] = A[k] * temp[i - (ceiling(N/2)) + k] :
replacement has length zero
In addition: Warning message:
In Comp[k] = A[k] * temp[i - (ceiling(N/2)) + k] :
number of items to replace is not a multiple of replacement length
If you write print(i - (ceiling(N/2)) + k) before that line, you will see that you are using incorrect indices for temp[i - (ceiling(N/2)) + k], which means that nothing is returned to be inserted into Comp[k]. I assume this problem is due to MATLAB allowing the use of 0 as an index and not R, and to the way negative indices are handled (they don't work the same in both languages). You need to implement a fix to return the correct indices.
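The specific culprit (my reading; the answer leaves the fix to the reader) is operator precedence: in R, : binds more tightly than -, so the loop header iterates over the wrong range:
N <- 3
temp <- 1:10  # toy vector to show the index ranges
ceiling(N/2):length(temp) - ceiling(N/2)    # parsed as (2:10) - 2, i.e. 0 1 2 ... 8
ceiling(N/2):(length(temp) - ceiling(N/2))  # 2 3 ... 8, matching MATLAB's ceil(N/2):numel(T0)-ceil(N/2)
With the parenthesized upper bound, for(i in ceiling(N/2):(length(temp)-ceiling(N/2))), the indices i - ceiling(N/2) + k stay within 1:length(temp).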
