Time-varying parameter matrix in deSolve (R)

I have been struggling with this for a long time. I have a logistic growth function whose growth parameter
r is a matrix. The model is constructed so that it outputs two state variables, N1 and N2.
I would like to be able to change the r parameter over time. When time < 50 I would like
r = r1, where
r1 = matrix(c(2, 3), nrow = 1, ncol = 2)
When time >= 50 I would like r = r2, where
r2 = matrix(c(1, 2), nrow = 1, ncol = 2)
Here is my function. Any help is highly appreciated.
rm(list = ls())
library(deSolve)

model <- function(time, y, params) {
  with(as.list(c(y, params)), {
    N  <- y[paste("N", 1:2, sep = "")]
    dN <- r * N * (1 - N / K)
    return(list(c(dN)))
  })
}

r <- matrix(c(4, 5), nrow = 1, ncol = 2)
K <- 100
params <- list(r, K)
y <- c(N1 = 0.1, N2 = 0.2)
times <- seq(0, 100, 1)

out <- ode(y, times, model, params)
plot(out)
Ideally I would like something like this, but it does not work:
model <- function(time, y, params) {
  with(as.list(c(y, params)), {
    N <- y[paste("N", 1:2, sep = "")]
    r <- ifelse(times < 10, matrix(c(1, 3), nrow = 1, ncol = 2),
           ifelse(times > 10, matrix(c(1, 4), nrow = 1, ncol = 2),
                              matrix(c(1, 2), nrow = 1, ncol = 2)))
    print(r)
    dN <- r * N * (1 - N / K)
    return(list(c(dN)))
  })
}
Thank you for your time.

Here is a generic approach that uses an extended version of the approx function. Note also some further simplifications of the model function and the additional plot of the parameter values.
Edit: changed according to the suggestion of Lewis Carter to make the parameter change at t = 3, so that the effect can be seen.
library(simecol) # contains approxTime, a vector version of approx

model <- function(time, N, params) {
  r  <- approxTime(params$signal, time, rule = 2, f = 0, method = "constant")[-1]
  K  <- params$K
  dN <- r * N * (1 - N / K)
  return(list(c(dN), r))
}

signal <- matrix(
  # time, r[1], r[2]
  c(  0,    2,    3,
      3,    1,    2,
    100,    1,    2), ncol = 3, byrow = TRUE
)
## test of the interpolation
approxTime(signal, c(1, 2.9, 3, 100), rule = 2, f=0, method="constant")
params <- list(signal = signal, K = 100)
y <- c(N1=0.1, N2=0.2)
times <- seq(0, 10, 0.1)
out <- ode(y, times, model, params)
plot(out)
For a small number of state variables, as in the example, separate signals built with approxfun from package stats will look less generic but may be slightly faster.
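For illustration, a minimal sketch of that variant for the two-species example above, with one step-wise constant approxfun per parameter (the names r1_fun and r2_fun are just illustrative; the values match the signal matrix):
library(deSolve)

## one interpolation function per parameter; constant pieces, clamped at the ends
r1_fun <- approxfun(x = c(0, 3), y = c(2, 1), method = "constant", rule = 2)
r2_fun <- approxfun(x = c(0, 3), y = c(3, 2), method = "constant", rule = 2)

model2 <- function(time, N, params) {
  r  <- c(r1_fun(time), r2_fun(time))   # evaluate the forcings at the current time
  dN <- r * N * (1 - N / params$K)
  list(dN, r)
}

out2 <- ode(c(N1 = 0.1, N2 = 0.2), seq(0, 10, 0.1), model2, list(K = 100))
plot(out2)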
As a further improvement, one may consider replacing the "hard" transitions with smoother ones. These can then be formulated directly as a function, without the need for approx, approxfun or approxTime.
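For example, a rough sketch of such a smooth formulation, using a logistic transition between the two parameter sets of the example (the midpoint t = 3 matches the signal above; the steepness value 5 is arbitrary):
## smooth switch from r_before to r_after around t_mid
r_smooth <- function(time, r_before = c(2, 3), r_after = c(1, 2),
                     t_mid = 3, steepness = 5) {
  w <- 1 / (1 + exp(-steepness * (time - t_mid)))  # rises from 0 to 1 near t_mid
  (1 - w) * r_before + w * r_after
}

model_smooth <- function(time, N, params) {
  r  <- r_smooth(time)
  dN <- r * N * (1 - N / params$K)
  list(dN, r)
}

out3 <- ode(c(N1 = 0.1, N2 = 0.2), seq(0, 10, 0.1), model_smooth, list(K = 100))
plot(out3)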
Edit 2:
Package simecol imports deSolve, and we need only a small function from it. So instead of loading simecol it is also possible to include the approxTime function explicitly in the code. The conversion from data frame to matrix improves performance, but a matrix is preferred anyway in such cases.
approxTime <- function(x, xout, ...) {
  if (is.data.frame(x)) {
    x <- as.matrix(x)
    wasdf <- TRUE
  } else {
    wasdf <- FALSE
  }
  if (!is.matrix(x)) stop("x must be a matrix or data frame")
  m <- ncol(x)
  y <- matrix(0, nrow = length(xout), ncol = m)
  y[, 1] <- xout
  for (i in 2:m) {
    y[, i] <- as.vector(approx(x[, 1], x[, i], xout, ...)$y)
  }
  if (wasdf) y <- as.data.frame(y)
  names(y) <- dimnames(x)[[2]]
  y
}

If you want to pass a matrix parameter, you should pass a list of parameters; you can then modify it inside the model when your time limit is exceeded (in the example below you do not even have to pass the r matrix to the model function).
library(deSolve)
model <- function(time, y, params) {
  with(as.list(c(y, params)), {
    if (time < 3) r <- matrix(c(2, 3), nrow = 1, ncol = 2)
    else          r <- matrix(c(1, 3), nrow = 1, ncol = 2)
    N  <- y[paste("N", 1:2, sep = "")]
    dN <- r * N * (1 - N / K)
    return(list(c(dN)))
  })
}

y <- c(N1 = 0.1, N2 = 0.2)
params <- list(r = matrix(c(0, 0), nrow = 1, ncol = 2), K = 100)
times <- seq(0, 10, 0.1)
out <- ode(y, times, model, params)
plot(out)
You can see examples of this, for instance, with delay differential equations (?dede).

Related

Parallelizing lapply with parLapply does not recognize objects even though I supplied them

I am trying to parallelize something with parLapply. I am exporting all necessary information to the cores, but somehow I get an error saying that it cannot find the object 'market_time' (the first line of the function that is called in parLapply). However, this object is just a column of the data table 'dt' that I export to the cores.
library('data.table')
library('numDeriv')
library('snow')

cores <- detectCores()
cl <- makeCluster(cores[1], type = 'PSOCK')

markets <- unique(dt[, market_time])
R <- 10000
nu_p  <- rnorm(n = R, -2, 0.5)
nu_xr <- rnorm(n = R, 2, 0.5)
nu_xm <- rnorm(n = R, 2, 0.5)
nu_xj <- rnorm(n = R, 2, 0.5)

clusterExport(cl, c('dt', 'nu_p', 'nu_xr', 'nu_xm', 'nu_xj'))
temp <- parLapply(cl, markets, calc_mc_w, dt = dt, nu_p = nu_p, nu_xr = nu_xr,
                  nu_xm = nu_xm, nu_xj = nu_xj)
where the function calc_mc_w is defined as:
calc_mc_w <- function(m, dt, nu_p, nu_xr, nu_xm, nu_xj) {
  dt_mkt <- dt[market_time == m, ]
  market_time <- dt_mkt[, market_time]
  x_m <- dt_mkt[, x_m]
  x_j <- dt_mkt[, x_j]
  x_r <- dt_mkt[, x_r]
  p   <- as.matrix(dt_mkt[, p])
  xi  <- dt_mkt[, xi]
  jacobian <- jacobian(function(x) {
    calc_shares(x, x_m, x_j, x_r, xi, nu_p, nu_xm, nu_xj, nu_xr, market_time)
  }, p)
  output <- dt_mkt[, c('prod', 'market', 'time', 'retailer')]
  # Get a system of equations with as many equations as unknowns
  retailers <- unique(dt_mkt[, retailer])
  temp <- lapply(retailers, calc_mc_w_r, dt_mkt = dt_mkt, jacobian = jacobian)
  temp <- rbindlist(temp)
  output <- merge(output, temp, by.x = c('prod', 'retailer'),
                  by.y = c('prod', 'retailer'), allow.cartesian = TRUE)
  output
}

calc_mc_w_r <- function(r, dt_mkt, jacobian) {
  dt_r <- dt_mkt[retailer == r, ]
  result <- dt_r[, c('prod', 'retailer')]
  rows <- (dt_mkt[, 'retailer'] == r)
  jacobian_r <- jacobian[rows, rows]
  result <- result[, mc_w := solve(jacobian_r, dt_r[, shares] + jacobian_r %*% dt_r[, p])]
  result
}
The error I get is:
Error in checkForRemoteErrors(val) :
2 nodes produced errors; first error: object 'market_time' not found
If instead I do not export the data table dt but rather each of its columns, I get the same error, but for 'jacobian', which is something I calculate inside the function (I do not want to calculate it across the whole dataset, as that is very costly; I only want to calculate it on each subset).

R - fast interpolation between CDF quantiles

Say we have a data.frame where the columns represent the quantiles for a given set of probabilities. Each row represents a different subject, and the quantiles vary by subject. The goal is to take n_draws draws for each subject.
n <- 1e5
alphas <- c(.05, .25, .5, .75, .95)
n_draws <- 100
dt <- data.frame(quantile_05 = runif(n),
                 quantile_25 = runif(n, min = 10, max = 20),
                 quantile_5  = runif(n, min = 30, max = 40),
                 quantile_75 = runif(n, min = 50, max = 60),
                 quantile_95 = runif(n, min = 70, max = 80))
R has stats::approx. The issue is that it can only be applied to one row at a time.
draws <- apply(X = dt, MARGIN = 1, function(q){
  stats::approx(x = alphas, y = q, yleft = 0, rule = 2, method = "linear",
                xout = seq(.0001, .99999, length.out = n_draws))$y
})
Naturally, one way to speed this up is parallelization:
library(parallel)
library(doParallel)  # provides registerDoParallel()
registerDoParallel(cores = 8)
cl <- makeCluster(8)
clusterExport(cl, c('alphas', 'n_draws'))
draws <- parApply(cl = cl, X = dt, MARGIN = 1, function(q){
  stats::approx(x = alphas, y = q, yleft = 0, rule = 2, method = "linear",
                xout = seq(.0001, .99999, length.out = n_draws))$y
})
This parallel code is several times faster on my machine. I am curious if anyone has suggestions for further speed ups.
To be a little clearer, this is just a simplification of what I am using in real life. I have more than 5 quantiles. I want to use this concept to estimate not only other quantiles but also the mean. Further, I would like to use the draws to model other quantities like max(y - 10, 0) (or y minus any arbitrary value).
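For concreteness, with the draws matrix from the apply() call above (one column per subject), the kinds of quantities I mean would be computed column-wise, e.g. (the threshold 10 is just an arbitrary example value):
subject_means  <- colMeans(draws)                 # approximate mean per subject
subject_excess <- colMeans(pmax(draws - 10, 0))   # approximate E[max(y - 10, 0)] per subject
more_quantiles <- apply(draws, 2, quantile, probs = c(0.10, 0.90))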
Your proposed method is rather inefficient for the desired goal. You end up storing 100 points for each desired ECDF, and you will still need to write a function to extract a desired draw. Instead, I suggest you consider using approxfun. It returns a more compact set of values: individual functions whose associated environments contain the knots for later calculation. The calculation is done with a C call accessed through the unexported helper function .approxfun.
Demonstrating the internals (up to a point) that I'm suggesting:
out <- approxfun(y=dt[1,], x=alphas,yleft = 0, rule = 2, method="linear")
# So out is now a single instance using the knots in the first row
out
#function (v)
#.approxfun(x, y, v, method, yleft, yright, f, na.rm)
#<bytecode: 0x558366535968>
#<environment: 0x5583690a04f8>
ls(environment(out))
#[1] "f" "method" "na.rm" "x" "y" "yleft" "yright"
environment(out)$x
#[1] 0.05 0.25 0.50 0.75 0.95
environment(out)$y
#[1] 0.4038727 17.7069735 33.4438595 57.2753257 77.2024894
If you wanted the estimated 55th percentile for the first case, you could get it with:
out(55/100)
#[1] 38.21015
And now that I've suggested a way to speed up your creation of this list of functions, I'm not even sure it's worth it. I think you could just leave that dt dataframe in place and call approxfun when needed. But that's your call.
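If you do decide to precompute them, a rough, untested sketch of building one interpolating function per row and querying it on demand, using the alphas, dt and n_draws objects from the question:
## one approxfun per subject; each closure stores only the 5 knots
cdf_funs <- lapply(seq_len(nrow(dt)), function(i)
  approxfun(x = alphas, y = as.numeric(dt[i, ]), yleft = 0, rule = 2, method = "linear"))

cdf_funs[[1]](0.55)                                       # estimated 55th percentile, subject 1
cdf_funs[[7]](seq(.0001, .99999, length.out = n_draws))   # draws for subject 7, on demand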
Note: This is essentially the method used by the ecdf function:
ecdf
function (x)
{
    x <- sort(x)
    n <- length(x)
    if (n < 1)
        stop("'x' must have 1 or more non-missing values")
    vals <- unique(x)
    rval <- approxfun(vals, cumsum(tabulate(match(x, vals)))/n,
        method = "constant", yleft = 0, yright = 1, f = 0, ties = "ordered")
    class(rval) <- c("ecdf", "stepfun", class(rval))
    assign("nobs", n, envir = environment(rval))
    attr(rval, "call") <- sys.call()
    rval
}
<bytecode: 0x558364a0f360>
<environment: namespace:stats>
And it's possible that you might want to use the ecdf function, because it has some class-associated methods.
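For example, an ecdf object can be evaluated, inverted and plotted through those methods:
e <- ecdf(rnorm(1000))
e(0)               # estimated P(X <= 0)
quantile(e, 0.55)  # inverse lookup via the quantile method
plot(e)            # step-function plot method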

R package submission error concerning set.seed()

I recently submitted a package to CRAN that passed all the automatic checks but failed the manual ones. One of the errors was the following:
Please do not set a seed to a specific number within a function.
Please do not modify the .GlobalEnv. This is not allowed by the CRAN policies.
I believe the lines of code that these comments refer to are the following:
if (simul == TRUE) {
  set.seed(42)
}
w <- matrix(data = rbinom(n = p, size = 1, prob = 0.5), ncol = 1)
beta <- w*beta - (1-w)*beta
s <- round((1 - sparsity) * p)
toReplace <- sample(p, size = s)
beta <- replace(beta, list = toReplace, values = 0)
# Generate the random p-columned matrix of indicator series.
X <- matrix(data = rnorm((n_l*m) * p, mean = mean_X, sd = sd_X), ncol = p, nrow = n_l*m)
if (simul == TRUE) {
  rm(.Random.seed, envir = globalenv())
}
Essentially, I am allowing the function to include a simulation option "simul", such that when it is set to TRUE, a matrix "X" and a vector of coefficients "beta" remain fixed. I remove the seed at the end of this segment (final lines), as the rest of the code contains variables that should change at each iteration of the simulation. However, as noted in the feedback from CRAN, this is not allowed. What is an alternative way to go about this? I cannot hard-code a fixed vector "beta" or matrix "X" when "simul" is TRUE, since their dimensions are inputs to the function and thus vary depending on the preferences of the investigator.
If you really, really want to set the seed inside a function, which I believe neither you nor anyone else should do, save the current seed, do whatever you want, and before exiting the function reset it to the saved value.
old_seed <- .Random.seed
rnorm(1)
#[1] -1.173346
set.seed(42)
rbinom(1, size = 1, prob = 0.5)
#[1] 0
.Random.seed <- old_seed
rnorm(1)
#[1] -1.173346
In a function it could be something like the following (the message() calls are just for illustration). Note that the function does nothing with the RNG: it never calls any pseudo-random generator and always returns TRUE. The point is to save the seed's current value and to reset the seed in on.exit.
f <- function(simul = FALSE) {
  if (simul) {
    message("simul is TRUE")
    old_seed <- .Random.seed
    on.exit(.Random.seed <- old_seed)
    # rest of code
  } else message("simul is FALSE")
  invisible(TRUE)
}
f()
s <- .Random.seed
f(TRUE)
identical(s, .Random.seed)
#[1] TRUE
rm(s)
A similar question has been asked on the Bioc-devel mailing list. The suggestion there was to use the functionality of withr::with_seed. Your code could then become:
library(withr)
if (simul == TRUE) {
  w <- with_seed(42, matrix(data = rbinom(n = p, size = 1, prob = 0.5), ncol = 1))
} else {
  w <- matrix(data = rbinom(n = p, size = 1, prob = 0.5), ncol = 1)
}
beta <- w*beta - (1-w)*beta
s <- round((1 - sparsity) * p)
toReplace <- sample(p, size = s)
beta <- replace(beta, list = toReplace, values = 0)
# Generate the random p-columned matrix of indicator series.
X <- matrix(data = rnorm((n_l*m) * p, mean = mean_X, sd = sd_X), ncol = p, nrow = n_l*m)
Of course, that raises the question of how withr got on CRAN, given that it appears to do the same thing you are being told not to do. The difference may be that your version overwrites (or creates) a seed unconditionally, whereas that code first checks whether a seed already exists.
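For illustration only, a rough sketch of what such a check could look like (a hypothetical helper modelled on that idea, not the actual withr source), so that a missing seed is not silently created and left behind:
with_fixed_seed <- function(seed, code) {
  had_seed <- exists(".Random.seed", envir = globalenv(), inherits = FALSE)
  if (had_seed) old_seed <- get(".Random.seed", envir = globalenv())
  on.exit({
    if (had_seed) assign(".Random.seed", old_seed, envir = globalenv())
    else rm(".Random.seed", envir = globalenv())
  })
  set.seed(seed)
  force(code)  # evaluate the user's expression under the fixed seed
}
# usage: w <- with_fixed_seed(42, rbinom(10, size = 1, prob = 0.5))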
When you fix the seed, a user who runs this code with the same parameters will obtain exactly the same results each time.
Supposing that this chunk of code sits inside a larger chunk related only to the simulation, just get rid of the set.seed() and try something like this:
if (simul == TRUE) {
  w <- matrix(data = rbinom(n = p, size = 1, prob = 0.5), ncol = 1)
  beta <- w*beta - (1-w)*beta
  s <- round((1 - sparsity) * p)
  toReplace <- sample(p, size = s)
  beta <- replace(beta, list = toReplace, values = 0)
  # Generate the random p-columned matrix of indicator series.
  X <- matrix(data = rnorm((n_l*m) * p, mean = mean_X, sd = sd_X), ncol = p, nrow = n_l*m)
}

How do I iterate over several lists and matrices to call a function using parallel processing in R?

I have been trying to use an R function called ipsi, which takes the arguments (a, y, id, time, x.trt, x.out, delta.seq, nsplits). Originally, the components of the arguments were in one dataframe (except for delta.seq and nsplits, which are coded later), but my understanding is that I needed to put them in separate lists and, in the case of x.trt and x.out, matrices. This function is very easy to run on one of each argument, but since I multiply imputed the dataframe 30 times before splitting it up into the different elements to be taken as ipsi arguments, I now want to iterate over the set of elements 30 times, as if there were 30 dataframes. Additionally, I want to parallelize to optimize my computing power.
I have just expanded the npcausal example:
n <- 500
T <- 4
time <- rep(1:T, n)
time <- list(time,time,time,time,time,time,time,time,time,time,time,time,time,time,time,
time,time,time,time,time,time,time,time,time,time,time,time,time,time,time)
id <- rep(1:n, rep(T, n))
id <- list(id,id,id,id,id,id,id,id,id,id,id,id,id,id,id,
id,id,id,id,id,id,id,id,id,id,id,id,id,id,id)
x.trt <- matrix(rnorm(n * T * 5), nrow = n * T)
x.trt <- list(x.trt,x.trt,x.trt,x.trt,x.trt,x.trt,x.trt,x.trt,x.trt,x.trt,x.trt,x.trt,x.trt,x.trt,x.trt,
x.trt,x.trt,x.trt,x.trt,x.trt,x.trt,x.trt,x.trt,x.trt,x.trt,x.trt,x.trt,x.trt,x.trt,x.trt)
x.out <- matrix(rnorm(n * T * 5), nrow = n * T)
x.out <- list(x.out,x.out,x.out,x.out,x.out,x.out,x.out,x.out,x.out,x.out,x.out,x.out,x.out,x.out,x.out,
x.out,x.out,x.out,x.out,x.out,x.out,x.out,x.out,x.out,x.out,x.out,x.out,x.out,x.out,x.out)
a <- rbinom(n * T, 1, .5)
a <- list(a,a,a,a,a,a,a,a,a,a,a,a,a,a,a,
a,a,a,a,a,a,a,a,a,a,a,a,a,a,a)
y <- rnorm(mean=1,n)
y <- list(y,y,y,y,y,y,y,y,y,y,y,y,y,y,y,
y,y,y,y,y,y,y,y,y,y,y,y,y,y,y)
d.seq <- seq(0.1, 5, length.out = 10)
d.seq <- list(d.seq,d.seq,d.seq,d.seq,d.seq,d.seq,d.seq,d.seq,d.seq,d.seq,d.seq,d.seq,d.seq,d.seq,d.seq,
d.seq,d.seq,d.seq,d.seq,d.seq,d.seq,d.seq,d.seq,d.seq,d.seq,d.seq,d.seq,d.seq,d.seq,d.seq)
set.seed(500, kind = "L'Ecuyer-CMRG")
numcores <- future::availableCores()
cl <- parallel::makeCluster(numcores)
parallel::clusterEvalQ(cl, library(dplyr))
parallel::clusterEvalQ(cl, library(npcausal))
parallel::clusterExport(cl, "d.seq", envir = environment())
parallel::clusterEvalQ(cl, d.seq <- d.seq)
new_element <- parallel::parLapply(cl = cl, for(i in 1:30){
npcausal::ipsi(a = a[[i]],
y = y[[i]],
id = id[[i]],
time = time[[i]],
x.out = x.out[[i]],
x.trt = x.trt[[i]],
delta.seq = d.seq[[i]],
nsplits = 10)
})
This actually runs, but at the end of the process it gives me an error saying that the FUN was missing. I knew that already, but I have no FUN to call besides ipsi. Thanks for any help you can provide.
My suggestion is to first figure out how to do it with a regular base-R *apply function, without worrying about parallelization. I suspect you can use mapply() for this, with something like (not confirmed):
res <- mapply(
  a, y, id, time, x.out, x.trt, d.seq,
  FUN = function(a_i, y_i, id_i, time_i, x.out_i, x.trt_i, d.seq_i) {
    npcausal::ipsi(a = a_i, y = y_i, id = id_i, time = time_i,
                   x.out = x.out_i, x.trt = x.trt_i, delta.seq = d.seq_i,
                   nsplits = 10)
  }
)
Once you have figured that part out, you can start thinking about parallelization.
(Disclaimer: I'm the author.) If you get an mapply() solution to work, then the simplest option would be to replace it as-is with future_mapply() from the future.apply package. That will parallelize on your local machine if you set plan(multisession).
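A rough, untested sketch of that replacement, assuming the mapply() version above works (SIMPLIFY = FALSE and future.seed = TRUE are added here on the assumption that ipsi returns list-like results and uses random sample splits):
library(future)
library(future.apply)
plan(multisession)   # parallelize across local background R sessions

res <- future_mapply(
  FUN = function(a_i, y_i, id_i, time_i, x.out_i, x.trt_i, d.seq_i) {
    npcausal::ipsi(a = a_i, y = y_i, id = id_i, time = time_i,
                   x.out = x.out_i, x.trt = x.trt_i, delta.seq = d.seq_i,
                   nsplits = 10)
  },
  a, y, id, time, x.out, x.trt, d.seq,
  SIMPLIFY = FALSE,
  future.seed = TRUE   # parallel-safe random number streams
)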

Changing a variable depending on time in the R ODE solver

I have a model with a differential equation in it that I am solving using the ode solver in R (deSolve). One of the arguments that you give to the ode function is times. Is there a way to change another variable depending on what time it is in the solver (e.g. for seasonal changes)? What I would like to do is something like the code below, where ODETIME stands for the current time inside ode.
function1 <- function(r, y, x) {
  z <- x[which(x$time = ODETIME), 2]
  list(-r * y * z)
}

N <- 10
r <- runif(N, 15, 20)
yini <- runif(N, 1, 40)
vals <- rep(1:5, 22)
times <- 0:10
x <- data.frame(time = rep(times, 10), vals)
out <- ode(y = yini, parms = NULL, times = times, func = function1)
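There is no ODETIME variable, but the first argument of the deSolve model function is the current solver time, so the usual approach (as in the deSolve answers earlier in this thread) is to wrap the tabulated values in an interpolating function and evaluate it at that time. A minimal sketch, simplifying the question's x to one forcing value per output time:
library(deSolve)

N     <- 10
r     <- runif(N, 15, 20)
yini  <- runif(N, 1, 40)
times <- 0:10
vals  <- rep(1:5, length.out = length(times))   # one forcing value per output time

## interpolating function for the forcing variable, evaluable at any solver time
z_fun <- approxfun(x = times, y = vals, rule = 2)

function1 <- function(t, y, parms) {
  z <- z_fun(t)          # 't' is the current time inside the solver
  list(-r * y * z)
}

out <- ode(y = yini, parms = NULL, times = times, func = function1)
plot(out)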
