I am using allFit() in lme4 to automatically scan through possible optimizers, since the default optimizer usually doesn't converge in this situation. My code works fine when I run it line-by-line, but when I run it wrapped in a simple function, it gives different results.
I've looked at the output of the allFit call and it seems that when it's NOT inside the function, it returns a list of lmerModLmerTest objects as desired.
However, inside the function, each element of the returned list is instead a condition object of class simpleError / error / condition. Why is it doing this?
I'm using RStudio, R 3.6, lme4 1.1-21, lmerTest 3.1-0.
UPDATE: The problem is that the update() method used by allFit cannot find the 'tt' data frame when re-fitting the models. I have put breakpoints into the code, and the 'tt' data frame does exist in the function environment, so I don't understand why it can't be found...
UPDATE 2: It appears that if I change the assignment of the test data to <<-, it works. That is dangerous, though, since it breaks functional style, and I suspect it will fail when I try to parallelize. I am testing further... still open to suggestions! (One other idea I am trying is sketched after the second code block below.)
Here is the code that works, not inside the function:
library(lme4)
multi_arm_var_sim <- function(nsub = 20, nclust = 100, narm = 2,
                              iccs = c(.01, .04), betas = c(0, .3)) {
  sig_b2 <- -1 * iccs / (iccs - 1)
  n <- nsub * nclust * narm
  y <- rep_len(NA, n)
  arm <- as.factor(rep(0:(narm - 1), each = nsub * nclust))
  clustid <- rep(1:(nclust * narm), each = nsub)
  clustRElist <- rnorm(narm * nclust, mean = 0,
                       sd = rep(sqrt(sig_b2), each = nclust))
  clustRE <- rep(clustRElist, each = nsub)
  sig_b2 <- rep(sig_b2, each = nclust * nsub)
  error <- rnorm(n, mean = 0, sd = 1)
  beta <- rep(betas, each = nclust * nsub)
  linpred <- beta + clustRE + error
  output <- cbind.data.frame(arm, clustid, sig_b2, clustRE, linpred)
  return(output)
}
set.seed(2)
test_1 <- multi_arm_var_sim()
model_flex_1 <- lmer(linpred ~ arm + (1 + arm | clustid),
                     data = test_1)
diff_optims_1 <- allFit(model_flex_1, verbose = TRUE)
print(class(diff_optims_1[[1]]))
is.OK_1 <- sapply(diff_optims_1, is, "lmerMod")
print(is.OK_1)
And here is the code that doesn't work: the same setup, wrapped in a function.
library(lme4)
multi_arm_var_sim <- function(nsub = 20, nclust = 100, narm = 2,
                              iccs = c(.01, .04), betas = c(0, .3)) {
  sig_b2 <- -1 * iccs / (iccs - 1)
  n <- nsub * nclust * narm
  y <- rep_len(NA, n)
  arm <- as.factor(rep(0:(narm - 1), each = nsub * nclust))
  clustid <- rep(1:(nclust * narm), each = nsub)
  clustRElist <- rnorm(narm * nclust, mean = 0,
                       sd = rep(sqrt(sig_b2), each = nclust))
  clustRE <- rep(clustRElist, each = nsub)
  sig_b2 <- rep(sig_b2, each = nclust * nsub)
  error <- rnorm(n, mean = 0, sd = 1)
  beta <- rep(betas, each = nclust * nsub)
  linpred <- beta + clustRE + error
  output <- cbind.data.frame(arm, clustid, sig_b2, clustRE, linpred)
  return(output)
}
get_pval <- function(){
  tt <- multi_arm_var_sim()
  model_flex <- lme4::lmer(linpred ~ arm + (1 + arm | clustid),
                           data = tt)
  diff_optims <- lme4::allFit(model_flex, data = tt, verbose = TRUE)
  print(class(diff_optims[[1]]))
  is.OK <- sapply(diff_optims, is, "merMod")
  print(is.OK)
}
set.seed(2)
get_pval()
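The other workaround I am experimenting with (just a sketch, based on the idea that update() re-evaluates the stored model call and so must be able to find the data by name) is to splice the actual data frame into the fitting call with bquote(), so the stored call no longer refers to tt at all:
get_pval <- function(){
  tt <- multi_arm_var_sim()
  # splice the data frame itself into the call, so that when allFit()'s
  # internal update() re-evaluates it, no lookup of `tt` is needed
  model_flex <- eval(bquote(
    lme4::lmer(linpred ~ arm + (1 + arm | clustid), data = .(tt))
  ))
  diff_optims <- lme4::allFit(model_flex, verbose = TRUE)
  sapply(diff_optims, is, "merMod")
}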
Thanks!!
I have a function that generates a sine wave:
my.sin <- function(vec, a, f, p) a * sin(f * vec + p)
vec = vector of indices
a = amplitude
f = frequency
p = phase
I also have some data, my_var, that I want to approximate with several sinusoids:
set.seed(22)
my_var <- rnorm(100)
plot(my_var,t="l")
There is also a fitness function that calculates the approximation error for the sum of two sinusoids (though in principle there could be any number of sinusoids):
fit <- function(x, test = FALSE){
  vec <- 1:length(my_var)
  s1 <- my.sin(vec = vec, a = x[1], f = x[2], p = x[3])
  s2 <- my.sin(vec = vec, a = x[4], f = x[5], p = x[6])
  res_sin <- s1 + s2
  err <- sqrt(sum((res_sin - my_var)^2))
  if (test) return(res_sin)
  return(-err)  # negated because ga() maximizes the fitness
}
Next, I use a genetic algorithm to find the best solution.
library(GA)
GA <- ga("real-valued",
         fitness = fit,
         lower = rep(-5, 6),
         upper = rep(5, 6),
         maxiter = 300,
         popSize = 300)
sol <- tail(GA@solution, 1)[1, ]
ga_sin <- fit(sol, test = TRUE)
lines(ga_sin, col = 4, lwd = 2)
best_sin_comb <- matrix(sol, ncol = 3, byrow = TRUE)
colnames(best_sin_comb) <- c("amplitude", "frequency", "phase")
print(best_sin_comb)
Result:
amplitude frequency phase
[1,] -0.3435402 1.5458888 1.8904578
[2,] -0.4326791 -0.4886035 0.5606401
My question is: can the approximation be made more efficient in terms of time spent? Perhaps with a different search algorithm, or something else? I would also like to keep compatibility with the function my.sin.
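One idea I am considering (a sketch, not benchmarked) is to let a local optimizer polish the GA result, reusing the same fit function so compatibility with my.sin is preserved:
## polish the GA solution with a local optimizer; fnscale = -1 makes
## optim maximize, matching the sign convention of fit()
refined <- optim(par = sol, fn = fit, method = "Nelder-Mead",
                 control = list(fnscale = -1, maxit = 1000))
refined_sin <- fit(refined$par, test = TRUE)
lines(refined_sin, col = 2, lwd = 2)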
I recently submitted a package to CRAN that passed all the automatic checks, but failed the manual ones. The errors included the following:
Please do not set a seed to a specific number within a function.
Please do not modify the .GlobalEnv. This is not allowed by the CRAN policies.
I believe the lines of code that these comments refer to are the following:
if (simul == TRUE) {
  set.seed(42)
}
w <- matrix(data = rbinom(n = p, size = 1, prob = 0.5), ncol = 1)
beta <- w * beta - (1 - w) * beta
s <- round((1 - sparsity) * p)
toReplace <- sample(p, size = s)
beta <- replace(beta, list = toReplace, values = 0)
# Generate the random p-column matrix of indicator series.
X <- matrix(data = rnorm((n_l * m) * p, mean = mean_X, sd = sd_X),
            ncol = p, nrow = n_l * m)
if (simul == TRUE) {
  rm(.Random.seed, envir = globalenv())
}
Essentially, I am allowing the function to include a simulations option "simul", such that when set to "TRUE", a matrix "X" and a vector of coefficients "beta" remain fixed. I remove the seed at the end of this segment (final lines), as the rest of the code contains variables that should change at each iteration of the simulation. However, as noted in the feedback from CRAN, this is not allowed. What is an alternative way to go about this? I cannot set a fixed vector "beta" or matrix "X" when "simul" is "TRUE", since the dimension of these are inputs to the function and thus vary depending on the preferences of the investigator.
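One idea I have been considering (sketched below; the function name is purely illustrative, not part of my package) is to let the investigator pass a seed as an argument instead of hard-coding 42, so the function itself never chooses a specific number:
## hypothetical wrapper: `simul_design` is an illustrative name only
simul_design <- function(p, seed = NULL) {
  if (!is.null(seed)) set.seed(seed)  # seed chosen by the user, not the package
  matrix(data = rbinom(n = p, size = 1, prob = 0.5), ncol = 1)
}
w <- simul_design(p = 10, seed = 42)  # reproducible on request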
If you really, really want to set the seed inside a function (which I believe neither you nor anyone else should do), save the current seed, do whatever you want, and before exiting the function reset it to the saved value.
old_seed <- .Random.seed
rnorm(1)
#[1] -1.173346
set.seed(42)
rbinom(1, size = 1, prob = 0.5)
#[1] 0
## assigning at top level works because it writes to the global environment
.Random.seed <- old_seed
rnorm(1)
#[1] -1.173346
In a function it could be something like the following (in real code you would leave out the message() calls). Note that the function below never calls any pseudo-RNG and always returns TRUE invisibly; the point is simply to save the seed's current value and restore it in on.exit().
f <- function(simul = FALSE){
  if (simul) {
    message("simul is TRUE")
    old_seed <- .Random.seed  # assumes a seed already exists in the session
    # restore the global seed on exit; a plain `<-` inside the function
    # would only create a local variable, so assign into the global env
    on.exit(assign(".Random.seed", old_seed, envir = globalenv()))
    # rest of code
  } else message("simul is FALSE")
  invisible(TRUE)
}
f()
s <- .Random.seed
f(TRUE)
identical(s, .Random.seed)
#[1] TRUE
rm(s)
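To see the restore actually doing something, here is a variant of the same pattern (my own sketch, not part of the answer above) that does call the pseudo-RNG and still leaves the global seed exactly as it found it:
g <- function(){
  old_seed <- .Random.seed
  on.exit(assign(".Random.seed", old_seed, envir = globalenv()))
  set.seed(42)
  rnorm(3)  # reproducible draws under the temporary seed
}
s <- .Random.seed
g()
identical(s, .Random.seed)
#[1] TRUE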
A similar question has been asked on the Bioconductor devel mailing list. The suggestion there was to use withr::with_seed. Your code could then become:
library(withr)
if (simul == TRUE) {
  w <- with_seed(42, matrix(data = rbinom(n = p, size = 1, prob = 0.5), ncol = 1))
} else {
  w <- matrix(data = rbinom(n = p, size = 1, prob = 0.5), ncol = 1)
}
beta <- w * beta - (1 - w) * beta
s <- round((1 - sparsity) * p)
toReplace <- sample(p, size = s)
beta <- replace(beta, list = toReplace, values = 0)
# Generate the random p-column matrix of indicator series.
X <- matrix(data = rnorm((n_l * m) * p, mean = mean_X, sd = sd_X),
            ncol = p, nrow = n_l * m)
Of course that raises the question of how withr got on CRAN, given that it appears to do the same thing that you're being told not to do. The difference may be that your version can overwrite an existing seed, whereas withr's code first checks whether a seed already exists.
When you fix the seed, a user running this code with the same parameters will obtain the same results each time. Supposing that this chunk of code sits inside a larger chunk related only to the simulation, just get rid of the set.seed() call and try something like this:
if (simul == TRUE) {
  w <- matrix(data = rbinom(n = p, size = 1, prob = 0.5), ncol = 1)
  beta <- w * beta - (1 - w) * beta
  s <- round((1 - sparsity) * p)
  toReplace <- sample(p, size = s)
  beta <- replace(beta, list = toReplace, values = 0)
  # Generate the random p-column matrix of indicator series.
  X <- matrix(data = rnorm((n_l * m) * p, mean = mean_X, sd = sd_X),
              ncol = p, nrow = n_l * m)
}
I have been struggling with this for a long time. I have a logistic growth model in which the growth parameter r is a matrix, and the model outputs two state variables, N1 and N2. I would like to be able to change the r parameter over time. When time < 50 I would like r = r1, where
r1 <- matrix(c(2, 3), nrow = 1, ncol = 2)
and when time >= 50 I would like r = r2, where
r2 <- matrix(c(1, 2), nrow = 1, ncol = 2)
Here is my function. Any help is highly appreciated.
rm(list = ls())
library(deSolve)

model <- function(time, y, params) {
  with(as.list(c(y, params)), {
    N <- y[paste("N", 1:2, sep = "")]
    dN <- r * N * (1 - N / K)
    return(list(c(dN)))
  })
}

r <- matrix(c(4, 5), nrow = 1, ncol = 2)
K <- 100
params <- list(r = r, K = K)  # name the elements so with() can find them
y <- c(N1 = 0.1, N2 = 0.2)
times <- seq(0, 100, 1)
out <- ode(y, times, model, params)
plot(out)
Ideally I would like something like the following, but it does not work:
model <- function(time, y, params) {
  with(as.list(c(y, params)), {
    N <- y[paste("N", 1:2, sep = "")]
    r <- ifelse(times < 10, matrix(c(1, 3), nrow = 1, ncol = 2),
                ifelse(times > 10, matrix(c(1, 4), nrow = 1, ncol = 2),
                       matrix(c(1, 2), nrow = 1, ncol = 2)))
    print(r)
    dN <- r * N * (1 - N / K)
    return(list(c(dN)))
  })
}
Thank you for your time.
Here is a generic approach that uses an extended version of the approx function. Note also some further simplifications of the model function and the additional plot of the parameter values.
Edit: changed according to the suggestion of Lewis Carter to make the parameter change at t = 3, so that its effect can be seen.
library(simecol) # contains approxTime, a vectorized version of approx

model <- function(time, N, params) {
  r <- approxTime(params$signal, time, rule = 2, f = 0, method = "constant")[-1]
  K <- params$K
  dN <- r * N * (1 - N / K)
  return(list(c(dN), r))
}

signal <- matrix(
  # time, r[1], r[2]
  c(  0,    2,    3,
      3,    1,    2,
    100,    1,    2), ncol = 3, byrow = TRUE
)

## test of the interpolation
approxTime(signal, c(1, 2.9, 3, 100), rule = 2, f = 0, method = "constant")

params <- list(signal = signal, K = 100)
y <- c(N1 = 0.1, N2 = 0.2)
times <- seq(0, 10, 0.1)
out <- ode(y, times, model, params)
plot(out)
For a small number of state variables, as in the example, separate signals built with approxfun from package stats would look less generic but may be slightly faster. As a further improvement, one may consider replacing the "hard" transitions with a smoother one, which can then be formulated directly as a function, without the need for approx, approxfun or approxTime; see the sketch below.
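For example (a sketch of that idea, with an arbitrary steepness k; it reuses y and times from the code above), a logistic weight between the two parameter sets gives such a smooth transition:
## smooth interpolation between r1 and r2 around t0 = 3
r_smooth <- function(time, r1 = c(2, 3), r2 = c(1, 2), t0 = 3, k = 10) {
  w <- 1 / (1 + exp(-k * (time - t0)))  # logistic weight in [0, 1]
  (1 - w) * r1 + w * r2
}

model_smooth <- function(time, N, params) {
  r <- r_smooth(time)
  dN <- r * N * (1 - N / params$K)
  list(c(dN), r)
}

out2 <- ode(y, times, model_smooth, list(K = 100))
plot(out2)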
Edit 2:
Package simecol imports deSolve, and we need only a small function from it. So instead of loading simecol, it is also possible to include the approxTime function explicitly in the code, as below. The conversion from data frame to matrix improves performance, but a matrix input is preferred anyway in such cases.
approxTime <- function(x, xout, ...) {
  if (is.data.frame(x)) {
    x <- as.matrix(x)
    wasdf <- TRUE
  } else {
    wasdf <- FALSE
  }
  if (!is.matrix(x)) stop("x must be a matrix or data frame")
  m <- ncol(x)
  y <- matrix(0, nrow = length(xout), ncol = m)
  y[, 1] <- xout
  for (i in 2:m) {
    y[, i] <- as.vector(approx(x[, 1], x[, i], xout, ...)$y)
  }
  if (wasdf) y <- as.data.frame(y)
  names(y) <- dimnames(x)[[2]]
  y
}
If you want to pass a matrix parameter, you can pass a list of parameters and modify it inside the model function when your time limit is exceeded. (In the example below you don't even have to pass the r matrix to the model function.)
library(deSolve)
model <- function(time, y, params) {
  with(as.list(c(y, params)), {
    # switch the growth-rate matrix at t = 3
    if (time < 3) {
      r <- matrix(c(2, 3), nrow = 1, ncol = 2)
    } else {
      r <- matrix(c(1, 3), nrow = 1, ncol = 2)
    }
    N <- y[paste("N", 1:2, sep = "")]
    dN <- r * N * (1 - N / K)
    return(list(c(dN)))
  })
}
y <- c(N1=0.1, N2=0.2)
params <- list(r = matrix(c(0,0), nrow = 1, ncol = 2), K=100)
times <- seq(0,10,0.1)
out <- ode(y, times, model, params)
plot(out)
You can see examples of this approach, for instance, with delay differential equations (?dede).
So, I have these functions:
funk1 <- function(a, x, l, r) {
  x^2 * exp(-(l * (1 - exp(-r * a)) / r))
}

funk2 <- function(x, l, r) {
  sapply(x, function(s) {
    integrate(funk1, lower = 0, upper = s, x = s, l = l, r = r)$value
  })
}
which I use to generate the data y in:
z <- data.frame(ts = 1:100,
                y = funk2(1:100, l = 1, r = 1) + rpois(100, 1:100))
I wish to use optim to maximise the likelihood, so I defined a likelihood function:
LL_funk <- function(l, r) {
  n <- nrow(z)
  R <- sum((funk2(ts, l, r) - y)^2)
  logl <- -(n / 2) * log(R)
  return(-logl)
}
and I tried to fit it using optim:
fit <- optim(par = c(0.5, 0.5), fn = LL_funk, method = "Nelder-Mead")
But I get an error:
Error in integrate(funk1, lower = 0, upper = s, x = s, l = l, r = r) :
a limit is missing
I am not sure why. I can run nls, fitting funk2(ts, l, r) to y:
nls(y ~ funk2(ts, l, r), data = z, start = list(l = 0.5, r = 0.5))
That means funk2 is working. I guess the problem is with the LL function that I have designed, which I can't figure out. Please help!
Yup! There were two problems with your function. This worked for me:
LL_funk <- function(params) {
  n <- nrow(z)
  l <- params[1]
  r <- params[2]
  R <- sum((funk2(z$ts, l, r) - z$y)^2)
  logl <- -(n / 2) * log(R)
  return(-logl)
}
Previous issues:
LL_funk should take only one argument, the vector of parameters; that is how optim calls the objective function.
In the expression assigned to R, ts and y were not actually referring to columns in your dataset.
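With the revised LL_funk, the original optim call should work unchanged, since optim passes the whole parameter vector as the single first argument of fn:
fit <- optim(par = c(0.5, 0.5), fn = LL_funk, method = "Nelder-Mead")
fit$par  # fitted values of l and r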
I was playing around with the nlsLM function from the minpack.lm package, and encountered some behaviour that I don't understand. The following function produces output when I supply a numeric vector b as input, so I wanted to use it to fit a nonlinear model to my data.
volEquation <- function(DBH, PHt, b) {
  b[1] * DBH^b[2] * PHt^b[3]
}
However, I have become stuck when it comes to correctly specifying the initial parameter values. The R code follows:
library(minpack.lm)
n <- 20
x <- seq(12, 60, length.out = n)
y <- seq(22, 45, length.out = n)
z <- x^2 * y^3 + rnorm(n, 0, 0.1)
Data <- data.frame(DBH = x, PHt = y, TVT = z)
nlsFormula <- "TVT ~ volEquation(DBH, PHt, b)"
nlsInitial <- list(b = c(0.5, 2.25, 3.25))
nlsLMOutput <- nlsLM(formula = nlsFormula, data = Data, start = nlsInitial)
nlsOutput <- nls(formula = nlsFormula, data = Data, start = nlsInitial)
nls was successful at fitting the data, while nlsLM gave me this error message:
Error in `rownames<-`(`*tmp*`, value = "b") :
  length of 'dimnames' [1] not equal to array extent
Can anyone provide insight as to why this problem occurs in the nlsLM function? I've tried sifting through the nlsLM code but I still don't understand what's going on.
Try separating your parameters: nls accepts a vector-valued start parameter such as b = c(...), but nlsLM does not appear to handle one, so give each coefficient its own named scalar.
volEquation <- function(DBH, PHt, x, y, z) {
  x * DBH^y * PHt^z
}

nlsFormula <- "TVT ~ volEquation(DBH, PHt, x, y, z)"
nlsInitial <- c(x = 5e-3, y = 2, z = 1)
nlsOutput <- nlsLM(formula = nlsFormula, data = Data, start = nlsInitial,
                   control = nls.lm.control(maxiter = 100))
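As a quick sanity check (assuming the fit above converges), the recovered coefficients can be compared with the generating model, since the data were simulated as TVT = DBH^2 * PHt^3:
coef(nlsOutput)  # expect x near 1, y near 2, z near 3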