R - deSolve, parameters

I'm new to this site.
I'm implementing a script in R and I need to use deSolve. To do that, I have to define the parameters that are used when I write the "rate of change" of my equations.
Here's my problem: to use the package I'm obliged to define those parameters, but I don't actually need them. The rate of change of my equations contains no parameters, so I'm looking for a way to avoid defining them.
I partially solved the problem by defining the parameters this way (not very elegant):
parameters <- c(1)
but I don't like this kind of solution.
Thank you in advance for any help. If my message is not clear, please let me know; English is not my first language.
Here's the code:
parameters <- c(1)
e <- 0.2056
ini <- c(q1 = 1 - e, q2 = 0, p1 = 0, p2 = sqrt((1 + e) / (1 - e)))

rhs <- function(t, ini, parameters) {
  with(as.list(c(ini, parameters)), {
    # rate of change
    dq1 <- p1
    dq2 <- p2
    dp1 <- -q1 / ((q1^2 + q2^2)^(3/2))
    dp2 <- -q2 / ((q1^2 + q2^2)^(3/2))
    # return rate of change
    list(c(dq1, dq2, dp1, dp2))
  })
}
###### EULER ######
library(deSolve)
times <- seq(0,40, by = 0.0005)
out <- ode(y = ini, times = times, func = rhs, parms = parameters, method = "euler")
head(out)
ode() needs those parameters; it throws an error if I don't supply parms.

parms is passed to the method that actually does the solving and, in turn, to the function you supply, so it must be given. But it doesn't need to have a value, and your function doesn't need to use it.
# Accept and ignore the third argument
rhs <- function(t, ini, ...) {
  with(as.list(ini), {
    # rate of change
    dq1 <- p1
    dq2 <- p2
    dp1 <- -q1 / ((q1^2 + q2^2)^(3/2))
    dp2 <- -q2 / ((q1^2 + q2^2)^(3/2))
    # return rate of change
    list(c(dq1, dq2, dp1, dp2))
  })
}
# Pass NULL to parms
out2 <- ode(y = ini, times = times, func = rhs, parms = NULL, method = "euler")
identical(out, out2)
## [1] TRUE
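As a side note, the original rhs() with its named (but unused) third argument also runs with parms = NULL, because c(ini, NULL) is simply ini, so strictly speaking you don't even have to change the function. A quick check (a sketch, reusing the objects defined above):
rhs_orig <- function(t, ini, parameters) {
  with(as.list(c(ini, parameters)), {   # c(ini, NULL) is just ini
    list(c(p1, p2,
           -q1 / ((q1^2 + q2^2)^(3/2)),
           -q2 / ((q1^2 + q2^2)^(3/2))))
  })
}
out3 <- ode(y = ini, times = times, func = rhs_orig, parms = NULL, method = "euler")
identical(out, out3)   # should again be TRUE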

Related

Name Seurat function output in R with the name of each experiment/variable

I am using Seurat to analyze some scRNA-seq data. I have managed to put all the one-line SCT integration commands from satijalab into a function, basically:
SCT_normalization <- function(f1, f2) {
  f_merge <- merge(f1, y = f2)
  f.list <- SplitObject(f_merge, split.by = "stim")
  f.list <- lapply(X = f.list, FUN = SCTransform)
  features <- SelectIntegrationFeatures(object.list = f.list, nfeatures = 3000)
  f.list <<- PrepSCTIntegration(object.list = f.list, anchor.features = features)
  return(f.list)
}
so that f.list ends up in the global environment for downstream analysis and plotting. The problem I am running into is that every time I run the function the output is called f.list; I want the name to be specific to the inputs (i.e., f1 and/or f2), so that I know which input values were used to generate a given output. I saw something about using the assign function, but someone warned that this is "evil and wrong...", so I am not sure how to approach it.
From what it sounds like, you don't need the superassignment operator <<-. In my opinion, <<- should be avoided because it can cause unexpected changes to objects, which I assume is what the other person was warning about. For example, say you have the following function:
AverageVector <- function(v) x <<- mean(v, na.rm = TRUE)
Now suppose you're finding the average of a vector you have, as part of a larger analysis:
library(tidyverse)
x <- unique(iris$Species)
avg_sl <- AverageVector(iris$Sepal.Length)
Where x used to hold the unique species values, it is now silently overwritten with a numeric vector of length 1.
So I would remove the <<- (using a plain <- inside the function) and call your function like this:
object_list_1_2 <- SCT_normalization(object1, object2)
If you want a slightly more programmatic way to keep track of the objects, you could do something like this:
SCT_normalization <- function(f1, f2) {
  f_merge <- merge(f1, y = f2)
  f.list <- SplitObject(f_merge, split.by = "stim")
  f.list <- lapply(X = f.list, FUN = SCTransform)
  features <- SelectIntegrationFeatures(object.list = f.list, nfeatures = 3000)
  f.list <- PrepSCTIntegration(object.list = f.list, anchor.features = features)
  to_return <- list(inputs = list(f1, f2), normalized = f.list)
  return(to_return)
}
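A quick usage sketch (object1 and object2 here are placeholders for whatever Seurat objects you actually pass in):
res_1_2 <- SCT_normalization(object1, object2)
res_1_2$inputs       # the two objects that went in
res_1_2$normalized   # the prepared object list for downstream integration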

R BB package - no way to pass parameters to objective function?

I am eager to use the R package BB to solve a system of non-linear equations, but the syntax does not seem to allow parameters to be passed to the system of equations. That is strange, since it would severely limit what otherwise appears to be a very appealing and powerful alternative to nleqslv().
To be clear: normally you expect a solver to have a slot for passing parameters to the underlying objective function. For example, in nleqslv:
out <- nleqslv(in_x, obj_fn, jac = NULL, other_pars1, other_pars2, method = "Broyden")
Where "in_x" is the vector of initial guesses at a solution, and the "other_pars1, other_pars2" are additional fixed parameters (can be scalars, vectors, matrices, whatever) required by "obj_fn".
In BBsolve, on the other hand, you just have
out <- BBsolve(in_x, obj_fn)
with no place to put the other_pars1, other_pars2 required by obj_fn.
Create a function that "attaches" additional parameters to your objective function. The key concept here is that the return value is itself a function:
gen_obj_fn <- function(obj_fn, other_pars1, other_pars2) {
  function(x) obj_fn(x, other_pars1, other_pars2)
}
The output of gen_obj_fn can now be passed directly to BBsolve:
## Previous call using nleqslv():
out <- nleqslv( in_x, myFun, jac = NULL, myParam1, myParam2, ... )
## Equivalent call using BBsolve():
myObjF <- gen_obj_fn( myFun, myParam1, myParam2 )
is.function( myObjF ) ## TRUE
out <- BBsolve( in_x, myObjF )
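Equivalently, the closure can be built inline with an anonymous function (same assumed myFun, myParam1, myParam2 as above):
out <- BBsolve(in_x, function(x) myFun(x, myParam1, myParam2))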
You haven't shown how you are using BBsolve. As I said in my comment, BBsolve certainly does accept additional function arguments, but you must name those arguments.
See this example for how to do what you seem to want:
library(nleqslv)
f <- function(x, p1 = 3, p2 = 2) {
  y <- numeric(2)
  y[1] <- 10 * x[1] + 3 * x[2]^2 - p1
  y[2] <- x[1]^2 - exp(x[2]) - p2
  y
}
xstart <- c(1, 1)
nleqslv(xstart, f)

library(BB)
BBsolve(xstart, f)
Try slightly different values for p1 and p2:
nleqslv(xstart,f,p1=2.7,p2=2.1)
BBsolve(xstart,f,p1=2.7,p2=2.1)
Both functions find the same solution.

Repeating a defined function up to n times

I was wondering how I can modify the code below to repeat the function up to 7 times. I cannot use replicate(7, func(f)), since I need a function which gives me the option to choose the number of repetitions; I mean, a function which asks me whether I want to continue or not.
Suppose
speed <- cars$speed
dist <- cars$dist
level <- c(1:50)
f <- data.frame(speed, dist, level)
plot(speed, dist, main = "Milage vs. Car Weight",
     xlab = "Weight", ylab = "Mileage", pch = 18, col = "blue")
text(speed, dist, row.names(f), cex = 0.6, pos = 4, col = "red")
This is my function:
func <- function(A) {
  id.co1 <- identify(f$speed, f$dist, labels = row.names(f), n = 2, pos = TRUE, plot = TRUE)
  xy <- f[c(id.co1$ind[1], id.co1$ind[2]), ]
  lines(xy, col = "red", lwd = 5)
  lm(dist ~ speed, xy)
  abline(coef(lm(dist ~ speed, xy)), col = "blue")
  x.co1 <- f$speed[id.co1$ind[1]:id.co1$ind[2]]
  y.co1 <- f$dist[id.co1$ind[1]:id.co1$ind[2]]
  m.co1 <- lm(y.co1 ~ x.co1)
}
If I understand correctly, you want to specify interactively, not programmatically, how often to repeat execution of a function. You can do this with readline().
"I need a function which gives me some option to choose the number of repetitions"
# some function
funcA <- function() {
  cat("i am funcA\n")
}
# a function that interactively repeats funcA a specified number of times
doNTimesA <- function() {
  Ntimes <- readline("number of repeats: ")
  for (i in 1:Ntimes) funcA()
}
doNTimesA()
"I mean, a function which asks me whether I want to continue or not"
funcB <- function() {
  while (TRUE) {
    cat("i am funcB\n")
    continue <- readline("again? y/n: ")
    if (tolower(continue) == "n") break
  }
  cat("funcB is done")
}
funcB()
Edit: for your specific case, you can wrap the body of your function in a while loop that asks whether you want to continue, as in funcB above. Here is an updated version that also stores its output:
func <- function() {
  # initiate an iteration counter and an output list
  counter <- 1
  output <- list()
  # start an infinite loop
  while (TRUE) {
    # do your thing
    id.co1 <- identify(f$speed, f$dist, labels = row.names(f), n = 2, pos = TRUE, plot = TRUE)
    xy <- f[c(id.co1$ind[1], id.co1$ind[2]), ]
    lines(xy, col = "red", lwd = 5)
    lm(dist ~ speed, xy)
    abline(coef(lm(dist ~ speed, xy)), col = "blue")
    x.co1 <- f$speed[id.co1$ind[1]:id.co1$ind[2]]
    y.co1 <- f$dist[id.co1$ind[1]:id.co1$ind[2]]
    m.co1 <- lm(y.co1 ~ x.co1)
    # store the result at the counter index in the output list
    output[[counter]] <- list(xco = x.co1, yco = y.co1, lm = m.co1)
    counter <- counter + 1
    # prompt for next iteration
    continue <- readline("again? y/n: ")
    # break the loop if the answer is 'n' or 'N'
    if (tolower(continue) == "n") break
  }
  # return the output
  output
}
What happens now is that after every iteration, the function asks if you want to rerun the function: continue <- readline("again? y/n: ") and checks whether you have answered N or n. You can add more checks on the answer if you like; if you answer anything but N or n now, the loop will run again.
If you run all <- func(), after you're done you can access each iterations' result using all[[1]], all[[2]], etc.
Please note that it's generally frowned upon to manipulate objects outside your function environment, so it would be cleaner to pull the initial plot generation into your function.
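A minimal sketch of what that could look like (assuming the data frame f built above; func2 is a hypothetical name, and the interactive identify()/lm() loop from func() would follow the plotting calls):
func2 <- function(dat = f) {
  # build the base plot inside the function instead of relying on a plot
  # created in the global environment
  plot(dat$speed, dat$dist, main = "Milage vs. Car Weight",
       xlab = "Weight", ylab = "Mileage", pch = 18, col = "blue")
  text(dat$speed, dat$dist, row.names(dat), cex = 0.6, pos = 4, col = "red")
  # ... then run the same identify()/lm() loop as in func(), using dat instead of f
}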

Solving differential equations inside of manipulate() in R

I am trying to analyze how varying the starting conditions and variable values in a set of differential equations (describing the progression of a disease through a population) influences the dynamics of the system (as seen in a graph). I have written the code and it works perfectly well, but it forces me to change a value and then re-run the whole script.
I am therefore trying to put this code inside of manipulate() so I can manipulate the variables and immediately see the effect on the produced graph:
library(deSolve)
library(manipulate)

xyz <- function(time, state, parameters) {
  with(as.list(c(state, parameters)), {
    dt <- 1
    dv <- v0 * v1 * cos(2 * pi * t / 33)
    dX <- v * N - B * X * Y - (mu + N / K) * X
    dY <- B * X * Y - (mu + m + g + N / K) * Y
    dZ <- g * Y - (mu + N / K) * Z
    dN <- (v - mu - N / K) * N - m * Y
    return(list(c(dt, dv, dX, dY, dZ, dN)))
  })
}
times <- seq(0,365*3,by = 1)
init <- c(t=0,v=0.02,X=995,Y=5,Z=0,N=1000)
parameters <- c(B=0.14,mu=.01,m=.075,g=.025,K=10000,v0=.02,v1=.5)
manipulate(
out <- as.data.frame(ode(y = init, times = seq(0,365*3,by=1), func = xyz, parms = parameters)),
matplot(times,out[4:6],type="l",xlab="Time",ylab="Susceptibles and Recovereds",main="SIR Model",lwd=1,lty=1,bty="l",col=2:4),
B=slider(0,1,initial=0.14,step=0.01)
)
I keep getting error messages regardless of whether I put all or only part of the code inside manipulate(), define the variables outside or inside of it, or anything else. Any help would be greatly appreciated!
When I first ran your code, the error I encountered was:
Error in manipulate(out <- as.data.frame(ode(y = init, times = seq(0, :
all controls passed to manipulate must be named
This error occurred because the second argument to manipulate was the matplot() command rather than a named control argument (such as a slider). So I placed the first two lines within curly braces to make them a single expression:
manipulate({
  out <- as.data.frame(ode(y = init, times = seq(0, 365 * 3, by = 1), func = xyz, parms = parameters))
  matplot(times, out[4:6], type = "l", xlab = "Time", ylab = "Susceptibles and Recovereds",
          main = "SIR Model", lwd = 1, lty = 1, bty = "l", col = 2:4)
}, B = slider(0, 1, initial = 0.14, step = 0.01))
This eliminates the error, but moving the slider doesn't do anything to the plot. Why? Because the slider named B doesn't refer to anything within the expression passed to manipulate(). I solved that by moving the parameters <- ... line into the manipulate expression and then changing that line so that there was a variable B (not just a name in the list); in other words, we need B=B instead of B=0.14. Now the plot changes when you move the slider, which I believe is what you wanted:
manipulate({
  parameters <- c(B = B, mu = .01, m = .075, g = .025, K = 10000, v0 = .02, v1 = .5)
  out <- as.data.frame(ode(y = init, times = seq(0, 365 * 3, by = 1), func = xyz, parms = parameters))
  matplot(times, out[4:6], type = "l", xlab = "Time", ylab = "Susceptibles and Recovereds",
          main = "SIR Model", lwd = 1, lty = 1, bty = "l", col = 2:4)
}, B = slider(0, 1, initial = 0.14, step = 0.01))
An alternative that keeps the manipulate() call short is to wrap the solving and plotting in a helper function that takes B as its only argument:
times <- seq(0, 365 * 3, by = 1)
init <- c(t = 0, v = 0.02, X = 995, Y = 5, Z = 0, N = 1000)
plot.ode <- function(B.param) {
  parameters <- c(B = B.param, mu = .01, m = .075, g = .025, K = 10000, v0 = .02, v1 = .5)
  out <- as.data.frame(ode(y = init, times = seq(0, 365 * 3, by = 1), func = xyz, parms = parameters))
  matplot(times, out[4:6], type = "l", xlab = "Time", ylab = "Susceptibles and Recovereds",
          main = "SIR Model", lwd = 1, lty = 1, bty = "l", col = 2:4, ylim = c(0, 200))
}
manipulate(plot.ode(B), B = slider(0, 1, initial = 0.14, step = 0.01))
Seems a little odd that only the red curve is influenced by changing B.

How to add vertical line to posterior density plots using plot.mcmc?

I often run JAGS models on simulated data with known parameters. I like the default plot method for mcmc objects. However, I would like to add an abline(v=TRUE_VALUE) for each parameter that is modelled. This would give me a quick check for whether the posterior is reasonable.
Of course I could do this manually, or presumably reinvent the wheel and write my own function. But I was wondering if there is an elegant way that builds on the existing plot method.
Here's a worked example:
require(rjags)
require(coda)

# simulate data
set.seed(4444)
N <- 100
Mu <- 100
Sigma <- 15
y <- rnorm(n = N, mean = Mu, sd = Sigma)
jagsdata <- list(y = y)

jags.script <- "
model {
  for (i in 1:length(y)) {
    y[i] ~ dnorm(mu, tau)
  }
  mu ~ dnorm(0, 0.001)
  sigma ~ dunif(0, 1000)
  tau <- 1/sigma^2
}"

mod1 <- jags.model(textConnection(jags.script), data = jagsdata, n.chains = 4,
                   n.adapt = 1000)
update(mod1, 200)  # burn in
mod1.samples <- coda.samples(model = mod1,
                             variable.names = c('mu', 'sigma'),
                             n.iter = 1000)
plot(mod1.samples)
I just want to run something like abline(v=100) for mu and abline(v=15) for sigma. Of course in many other examples, I would have 5, 10, 20 or more parameters of interest. Thus, I'm interested in being able to supply a vector of true values for named parameters.
I've had a look at getAnywhere(plot.mcmc). Would modifying that be a good way to go?
Okay. So I modified plot.mcmc to look like this:
my.plot.mcmc <- function(x, trace = TRUE, density = TRUE, smooth = FALSE, bwf,
                         auto.layout = TRUE, ask = FALSE, parameters, ...)
{
  oldpar <- NULL
  on.exit(par(oldpar))
  if (auto.layout) {
    mfrow <- coda:::set.mfrow(Nchains = nchain(x), Nparms = nvar(x),
                              nplots = trace + density)
    oldpar <- par(mfrow = mfrow)
  }
  for (i in 1:nvar(x)) {
    y <- mcmc(as.matrix(x)[, i, drop = FALSE], start(x),
              end(x), thin(x))
    if (trace)
      traceplot(y, smooth = smooth, ...)
    if (density) {
      if (missing(bwf)) {
        densplot(y, ...)
        abline(v = parameters[i])  # vertical line at the true value
      } else densplot(y, bwf = bwf, ...)
    }
    if (i == 1)
      oldpar <- c(oldpar, par(ask = ask))
  }
}
Then running the command
my.plot.mcmc(mod1.samples, parameters=c(Mu, Sigma))
produces trace and density plots with a vertical line at the true value of each parameter.
Note that parameters must be a vector of values in the same order in which JAGS sorts the variables, which appears to be alphabetically, and then numerically for vector-valued parameters.
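If you would rather supply the true values as a named vector and not depend on that sort order, one possibility (a sketch, assuming the names match the monitored variables, e.g. c(mu = 100, sigma = 15)) is to index by varnames():
my.plot.mcmc.named <- function(x, parameters, ...) {
  oldpar <- par(mfrow = coda:::set.mfrow(Nchains = nchain(x),
                                         Nparms = nvar(x), nplots = 2))
  on.exit(par(oldpar))
  for (i in 1:nvar(x)) {
    y <- mcmc(as.matrix(x)[, i, drop = FALSE], start(x), end(x), thin(x))
    traceplot(y, ...)
    densplot(y, ...)
    abline(v = parameters[[varnames(x)[i]]])  # look up the true value by name
  }
}
my.plot.mcmc.named(mod1.samples, parameters = c(mu = 100, sigma = 15))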
Lessons learnt
Simply masking plot.mcmc with my own definition didn't work, presumably because of namespaces, so I created a function with a new name.
I had to change set.mfrow to coda:::set.mfrow presumably also because of namespaces.
I changed ask to ask=FALSE, because RStudio permits browsing through figures.
I'd be happy to hear any suggestions about better ways of overriding or adapting existing S3 methods.
