Error in lme4::allFit() -- no applicable method for 'isGLMM'

I'm hitting a confusing error while trying to run lme4::allFit() with its built-in parallelization. I fit an initial model m0 on a fairly large dataframe ckDF (n = 265,623 rows), modelling a binary response as a function of a number of categorical and continuous predictors in a logistic framework with a random intercept for year.
I'm interested in determining whether different optimizers yield different results, following some recommendations I've found online (e.g. by @BenBolker here). The model usually takes ~20 minutes to fit, so I'm hoping to use the parallel and ncpus arguments of allFit() to speed things up a bit. Here's my relevant code:
require(lme4)
require(parallel)

m0 <- glmer(returned ~ 1 + barge + site + barge:site +
              (run + rearType + basin)^2 +
              (tdg + temp + holdingTime)^2 +
              (1 | year),
            data = ckDF, family = 'binomial',
            control = glmerControl(optimizer = 'bobyqa',
                                   optCtrl = list(maxfun = 1e5)))

af1 <- allFit(m0, parallel = 'multicore', ncpus = detectCores())
Upon doing this, I encounter the following error:
Error in checkForRemoteErrors(val) :
7 nodes produced errors; first error: no applicable method for 'isGLMM' applied to an object of class "list"
Any ideas? It seems to me that when the worker nodes are set up, some of them don't load the lme4 package and thus don't recognize isGLMM(); but I don't know why allFit() would behave this way, since it is itself part of lme4. I tried looking under the hood and altering the function in my own copy, but ran into other errors.
Any help would be appreciated. R Version: 3.6.1; lme4 Version: 1.1-21; platform: Windows 10 64-bit

Thanks to @user20650 & @Ben Bolker for the tips in the comments above -- it worked, and I was able to get allFit() to run as expected by using parallel = "snow" in my function call, since I'm running on Windows. Just posting the edited code here for anyone else who finds this useful:
require(lme4); require(snow)

# Define initial model (switched to defaults here)
m0 <- glmer(returned ~ 1 + barge + site + barge:site +
              (run + rearType + basin)^2 +
              (tdg + temp + holdingTime)^2 +
              (1 | year),
            data = ckDF, family = 'binomial')

# Set up cluster for running allFit()
optCls <- makeCluster(detectCores() - 1, type = "SOCK")
clusterEvalQ(optCls, library("lme4"))
clusterExport(optCls, "ckDF")

# Use allFit() to look at differences in optimizers
system.time(af1 <- allFit(m0, parallel = 'snow',
                          ncpus = detectCores() - 1, cl = optCls))
stopCluster(optCls)
Ended up taking ~40 minutes using 11 cores on my machine.
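For anyone comparing the results afterwards, here is a minimal sketch of how the fits can be inspected, assuming af1 is the allFit object created above (lme4 provides a summary method for allFit objects):

summ <- summary(af1)
summ$which.OK   # which optimizers finished without an error
summ$llik       # log-likelihoods across optimizers
summ$fixef      # fixed-effect estimates across optimizers
summ$sdcor      # random-effect standard deviations/correlations across optimizers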

Related

Failing to produce glmmTMB diagnostics plots with package DHARMa

I have tried to produce diagnostics plots for glmmTMB models using package DHARMa without success. Example 1.1 in this vignette gives:
owls_nb1 <- glmmTMB(SiblingNegotiation ~ FoodTreatment * SexParent +
                      (1 | Nest) + offset(log(BroodSize)),
                    contrasts = list(FoodTreatment = "contr.sum",
                                     SexParent = "contr.sum"),
                    family = nbinom1,
                    zi = ~1,
                    data = Owls)
plot(owls_nb1_simres <- simulateResiduals(owls_nb1))
# Error in on.exit(add = TRUE, { : invalid 'add' argument
The same happens with:
if (!require(RCurl)) install.packages('RCurl'); library(RCurl)
unicorns <- read.csv(text = RCurl::getURL("https://raw.githubusercontent.com/marcoplebani85/datasets/master/unicorns.csv"))
# simulated data, obviously
unicorns_glmmTMB <- glmmTMB(Herd_size_n ~ food.quantity
                            + (1 + food.quantity | Locality)
                            + (1 + food.quantity | Year_Month),
                            family = "poisson",
                            data = unicorns)
plot(simulateResiduals(unicorns_glmmTMB))
# Error in on.exit(add = TRUE, { : invalid 'add' argument
If I run the same model in lme4::glmer:
unicorns_glmer <- glmer(Herd_size_n ~ food.quantity
                        + (1 + food.quantity | Locality)
                        + (1 + food.quantity | Year_Month),
                        family = "poisson",
                        data = unicorns)
...and "feed" it to:
plot(simulateResiduals(unicorns_glmer))
I obtain diagnostics plots without issues (by the way I am aware that model unicorns_glmer is suboptimal and can be improved).
I'm using:
glmmTMB version 1.0.2.9000, freshly installed from GitHub;
DHARMa version 0.4.1;
R version 3.6.3;
macOS Sierra version 10.12.6.
Has anyone encountered the same problem? Does anyone know how to solve it?
EDIT: my question was originally about how the packages performance and DHARMa handle glmmTMB objects. For the sake of focus and clarity I removed the references to the performance package, making this question specific to glmmTMB and DHARMa.
It looks like this is a bug that was present in R <= 4.0.1. From the R NEWS file for version 4.0.2:
on.exit() now correctly matches named arguments, thanks to PR#17815 (including patch) by Brodie Gaslam.
I have attempted to fix the glmmTMB code so it works around the bug.
You could try
remotes::install_github("glmmTMB/glmmTMB/glmmTMB#on_exit_order")
and see if that helps (provided nothing goes wrong, this branch should be merged into master shortly ...)
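For context, here is a minimal sketch of the argument-matching problem the NEWS entry refers to; the first form mirrors the call shown in the error message and is expected to fail on R <= 4.0.1, while the second has always worked. This is an illustration of the R bug, not code taken from glmmTMB:

f_fails <- function() {
  # named 'add' before the expression: errors with "invalid 'add' argument" before R 4.0.2
  on.exit(add = TRUE, { message("cleanup") })
  invisible(NULL)
}
f_works <- function() {
  # expression first, then add = TRUE: works on all R versions
  on.exit({ message("cleanup") }, add = TRUE)
  invisible(NULL)
}
f_works()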

"Vector memory exhausted" w/ multiple imputation + multilevel model + robust standard errors

I am attempting to run a multilevel model with robust standard errors on multiple imputation output from the MICE package. I can't provide the data, unfortunately, but here is the code I'm running:
imputed <- mice(impvars, maxit = 5)
barg_imp <- complete(imputed, action = "long", include = TRUE)
fitimp <- with(barg_imp,
               rlmer(success_new ~ conflict_imp + positiondistance_coun +
                       positiondistance_ep + positiondistance_com +
                       population_log + salience_rel + (1 | prnrnmc)))
Note that I'm using rlmer from the robustlmm package. It gives me the error "vector memory exhausted (limit reached?)". I've tried some of the solutions from other threads on this error message, but they haven't worked.
I'm able to run this fine when using lmer. Any suggestions on another way to get robust standard errors here (coeftest doesn't work), on how to run this code more efficiently, or on how to remedy this error? Thank you!
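This question has no accepted answer in the thread. Purely as a hedged sketch of one way to reduce peak memory use, you could fit rlmer to one completed data set at a time instead of the full long-format stack, then pool afterwards (the pooling step assumes a tidy() method for rlmer fits is available, e.g. via broom.mixed):

library(mice)
library(robustlmm)

# fit the model once per imputation, keeping only one completed data set in memory at a time
fits <- lapply(seq_len(imputed$m), function(i) {
  d_i <- complete(imputed, action = i)
  rlmer(success_new ~ conflict_imp + positiondistance_coun +
          positiondistance_ep + positiondistance_com +
          population_log + salience_rel + (1 | prnrnmc),
        data = d_i)
})

# pooling with Rubin's rules; requires a tidier for rlmer objects
pooled <- pool(as.mira(fits))
summary(pooled)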

Extracting path coefficients of piecewise SEM (structural equation model)

I'm constructing a piecewise structural equation model using the piecewiseSEM package in R (Lefcheck - https://cran.r-project.org/web/packages/piecewiseSEM/vignettes/piecewiseSEM.html)
I already created the model set and I could evaluate the model fit, so the model itself works. Also, the data fits the model (p = 0.528).
But I do not succeed in extracting the path coefficients.
This is the error I get: Error in cbind(Xlarge, Xsmall) : number of rows of matrices must match (see arg 2)
I already tried the following (but it did not work):
standardising my data, because of the warning: Some predictor variables are on very different scales: consider rescaling
adapting my data (throwing some NA values away)
This is my modellist:
predatielijst = list(
lmer(plantgrootte ~ gapfraction + olsen_P + (1|plot_ID), data = d),
glmer(piek1 ~ gapfraction + olsen_P + plantgrootte + (1|plot_ID),
family = poisson, data = d),
glmer(predatie ~ piek1 + (1|plot_ID), family = binomial, data = d)
)
with "predatie" being a binary variable (yes or no) and all the rest continuous variables (gapfraction, plantgrootte, olsen_P & piek1)
Thanks in advance!
Try installing the development version:
library(devtools)
install_github("jslefche/piecewiseSEM#2.0")
Replace list with psem and run the coefs() or summary() function. That will likely get rid of your error; if not, open a bug report on GitHub!
WARNING: this will overwrite your current version from CRAN. You will need to reinstall from CRAN to get version 1.4 back.
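A hedged sketch of what that looks like with the model list from the question, assuming the development version of piecewiseSEM is installed and d is the original data frame:

library(piecewiseSEM)
library(lme4)

predatiemodel <- psem(
  lmer(plantgrootte ~ gapfraction + olsen_P + (1 | plot_ID), data = d),
  glmer(piek1 ~ gapfraction + olsen_P + plantgrootte + (1 | plot_ID),
        family = poisson, data = d),
  glmer(predatie ~ piek1 + (1 | plot_ID), family = binomial, data = d)
)

coefs(predatiemodel)     # table of path coefficients
summary(predatiemodel)   # overall fit statistics plus coefficients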
Try using lme (from the nlme library) instead of glmer. As far as I understand, the fact that lmer does not provide p-values (while lme does) seems to be the problem here.
Hope this works.

glmmPQL crashes on inclusion of corSpatial object

Link to data (1170 obs, 9 variables, .Rd file)
Simply read it in using readRDS(file).
I'm trying to set up a GLMM using the glmmPQL function from the MASS package, including a random-effects part and accounting for spatial autocorrelation. However, R (version 3.3.1) crashes upon execution.
library(MASS)   # for glmmPQL
library(nlme)   # for corSpatial

# set up model formula
fo <- hail ~ prec_nov_apr + t_min_nov_apr + srad_nov_apr + age

# set up corSpatial object
correl <- corSpatial(value = c(10000, 0.1), form = ~ ry + rx, nugget = TRUE,
                     fixed = FALSE, type = "exponential")
correl <- Initialize(correl, data = d)

# fit model
fit5 <- glmmPQL(fo, random = ~ 1 | date, data = d,
                correlation = correl, family = binomial)
What I tried so far:
reduce the number of observations
play with the corSpatial parameters (range and nugget)
reduce the number of fixed predictors
execute the code on Windows, Linux (Debian) and Mac R installations
While I get no error message on my local pc (RStudio just crashes), running the script on a server returns the following error message:
R: malloc.c:3540: _int_malloc: Assertion (fwd->size & 0x4) == 0' failed. Aborted
I'd use the INLA package to model this, as it allows you to use spatially correlated random effects. The required code is a bit too long to place here, so I've placed it in a document at http://rpubs.com/INBOstats/spde
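The full SPDE setup is in the linked document. Purely as a hedged, minimal sketch of what an INLA call for this response could look like before adding the spatial term (the iid random intercept for date is my assumption, standing in for the glmmPQL random effect; it is not code from the linked document):

# R-INLA is installed from its own repository, not from CRAN:
# install.packages("INLA", repos = c(getOption("repos"),
#                                    INLA = "https://inla.r-inla-download.org/R/stable"))
library(INLA)

# date may need to be converted to a factor or integer index first
fit_inla <- inla(hail ~ prec_nov_apr + t_min_nov_apr + srad_nov_apr + age +
                   f(date, model = "iid"),
                 family = "binomial", data = d)
summary(fit_inla)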

Get randomForest regression faster in R

I have to run a regression with randomForest in R. My problem is that my dataframe is huge: I have 12 variables and more than 400k entries. When I try to fit a randomForest regression (the code is at the bottom), the system takes many hours to process the data: after 5 or 6 hours of calculation, I am obliged to stop the operation without any output. Can someone suggest how I can make it faster?
Thanks
library(caret)
library(randomForest)

dataset <- read.csv("/home/anonimo/Modelli/total_merge.csv", header = TRUE)
dati <- data.frame(dataset)
attach(dati)
trainSet <- dati[2:107570, ]
testSet <- dati[107570:480343, ]

output.forest <- randomForest(dati$Clip_pm25 ~ dati$e_1 + dati$Clipped_so + dati$Clip_no2 +
                                dati$t2m_1 + dati$tp_1 + dati$Clipped_nh + dati$Clipped_co +
                                dati$Clipped_o3 + dati$ssrd_1 + dati$Clipped_no +
                                dati$Clip_pm10 + dati$sp_1,
                              data = trainSet, ntree = 250)
I don't think parallelizing on a single PC (2-4 cores) is the answer. There are plenty of lower-hanging fruits to pick.
1) RF models increase in complexity with the number of training samples. The average tree depth would be something like log(480,000/5)/log(2) = 16.5 intermediate nodes. In the vast majority of examples, 2000-10000 samples per tree is fine. If you're competing to win on Kaggle, a small extra bit of performance really matters, as winner takes all. In practice, you probably don't need that.
2) Don't clone your data set in your R code, and try to keep only one copy of your data set (passing by reference is of course fine). It's not a big problem for this data set, as it is not that big (~38 MB) even for R.
3) Don't use the formula interface with the randomForest algorithm for large datasets; it will make an extra copy of the data set. But again, memory is not that much of a problem here.
4) Use a faster RF algorithm: extraTrees, ranger or Rborist are available for R. extraTrees is not exactly an RF algorithm, but it is pretty close.
5) Avoid categorical features with more than 10 categories. RF can handle up to 32, but becomes super slow because up to 2^32 possible splits have to be evaluated. extraTrees and Rborist handle more categories by only testing some randomly selected splits (which works fine). Another solution, as in Python's sklearn: assign every category a unique integer and handle the feature as numeric. You can do the same trick by converting your categorical features with as.numeric before running randomForest (a small sketch of this follows the code block below).
6) For much bigger data: split the data set into random blocks and train a few (~10) trees on each, then combine the forests or save them separately. This will slightly increase the tree correlation. There are some nice cluster implementations for training like this, but it won't be necessary for datasets below 1-100 GB, depending on tree complexity etc.
# below I use solutions 1-3) and get a run time of a few minutes
library(randomForest)

# simulate data
dataset <- data.frame(replicate(12, rnorm(400000)))
dataset$Clip_pm25 <- dataset[, 1] + dataset[, 2]^2 + dataset[, 4] * dataset[, 3]

# dati <- data.frame(dataset)  # no need to keep the data set an extra time in memory
# attach(dati)                 # if you attach dati you don't need to write dati$Clip_pm25, just Clip_pm25
# but avoid the formula interface for randomForest with large data sets: it costs extra memory and time

# split data into X and y manually
y <- dataset$Clip_pm25
X <- dataset[, names(dataset) != "Clip_pm25"]
rm(dataset); gc()
object.size(X)  # ~38 MB, no problem

# if you were using the formula interface:
# output.forest <- randomForest(dati$Clip_pm25 ~ dati$e_1 + dati$Clipped_so + dati$Clip_no2 + dati$t2m_1 + dati$tp_1 + dati$Clipped_nh + dati$Clipped_co + dati$Clipped_o3 + dati$ssrd_1 + dati$Clipped_no + dati$Clip_pm10 + dati$sp_1, data = trainSet, ntree = 250)
# output.forest <- randomForest(Clip_pm25 ~ ., data = trainSet, ntree = 250)  # use a dot to include all variables

# start small, and scale up slowly
rf <- randomForest(X, y, sampsize = 1000, ntree = 5)    # runtime ~15 seconds
print(rf)  # ~67% explained variance

# you probably don't need to exceed 5000-10000 samples per tree;
# you could grow 2000 trees to sample most of the training set
rf <- randomForest(X, y, sampsize = 5000, ntree = 500)  # runtime ~5 minutes
print(rf)  # ~87% explained variance

# regarding parallelization:
# here you could implement some parallel looping,
# .... but is it really worth it for a 2-4x speedup?
# coding parallel on a single PC is fun but rarely worth the effort
# If you work at a company or university with a decent computer cluster,
# you can spawn the process across 20-80-200 nodes and get a ~10-60-150x speedup
# I can recommend the BatchJobs package
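As referenced in point 5 above, here is a small hedged sketch of the integer-encoding trick on hypothetical data (the column names are made up, not from the question):

library(randomForest)

# hypothetical data: numeric response plus one high-cardinality categorical feature
set.seed(1)
df <- data.frame(y    = rnorm(10000),
                 city = sample(paste0("city", 1:50), 10000, replace = TRUE))

# encode each category as an integer and treat the feature as numeric
df$city <- as.numeric(as.factor(df$city))

rf_int <- randomForest(y ~ ., data = df, ntree = 50)
print(rf_int)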
Since you are using caret, you could use method = "parRF", which is an implementation of parallel random forest.
For example:
library(caret)
library(randomForest)
library(doParallel)

cores <- 3
cl <- makePSOCKcluster(cores)
registerDoParallel(cl)

dataset <- read.csv("/home/anonimo/Modelli/total_merge.csv", header = TRUE)
dati <- data.frame(dataset)
attach(dati)
trainSet <- dati[2:107570, ]
testSet <- dati[107570:480343, ]

# 3-fold cross validation
my_control <- trainControl(method = "cv", number = 3)

my_forest <- train(Clip_pm25 ~ e_1 + Clipped_so + Clip_no2 + t2m_1 + tp_1 +
                     Clipped_nh + Clipped_co + Clipped_o3 + ssrd_1 +
                     Clipped_no + Clip_pm10 + sp_1,
                   data = trainSet,
                   method = "parRF",
                   ntree = 250,
                   trControl = my_control)
Here is a foreach implementation as well:
foreach_forest <- foreach(ntree = rep(250, cores),
                          .combine = combine,
                          .multicombine = TRUE,
                          .packages = "randomForest") %dopar%
  randomForest(Clip_pm25 ~ e_1 + Clipped_so + Clip_no2 + t2m_1 + tp_1 +
                 Clipped_nh + Clipped_co + Clipped_o3 + ssrd_1 +
                 Clipped_no + Clip_pm10 + sp_1,
               data = trainSet, ntree = ntree)

# don't forget to stop the cluster
stopCluster(cl)
Remember that I didn't set any seeds; you might want to consider that as well. There is also another random forest package that runs in parallel, but I have not tested it.
The other two answers are good. Another option is to use more recent packages that are purpose-built for high-dimensional / high-volume data sets. They run their code in lower-level languages (C++ and/or Java) and in certain cases use parallelization.
I'd recommend taking a look at these three:
ranger (C++)
randomForestSRC (C++)
h2o (Java; needs Java version 8 or higher)
Also, some additional reading here to give you more to go on when choosing a package: https://arxiv.org/pdf/1508.04409.pdf
Page 8 shows benchmarks comparing ranger with randomForest as the data size grows: ranger is much faster because its runtime grows linearly, rather than non-linearly as for randomForest, with rising tree/sample/split/feature sizes.
Good Luck!
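As a hedged illustration, reusing the X and y objects simulated in the first answer (that reuse is my assumption), a ranger fit would look roughly like this:

library(ranger)

# ranger takes a single data frame; num.threads controls parallelism
dat <- cbind(X, Clip_pm25 = y)
rf_ranger <- ranger(Clip_pm25 ~ ., data = dat,
                    num.trees = 250, num.threads = 4)
rf_ranger$r.squared          # out-of-bag R^2
rf_ranger$prediction.error   # out-of-bag mean squared error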
