Capturing convergence messages from the lme4 package in R

I was wondering if there is a way to write a logical test (TRUE/FALSE) that shows whether a model from the lme4 package has converged or not.
An example is shown below; I want to capture whether any model comes with the convergence warning (i.e., "Model failed to converge") message.
library(lme4)
dat <- read.csv('https://raw.githubusercontent.com/rnorouzian/e/master/nc.csv')
m <- lmer(math ~ ses*sector + (ses | sch.id), data = dat)
Warning message:
In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
Model failed to converge with max|grad| = 0.00279 (tol = 0.002, component 1)

> sm <- summary(m)
> sm$optinfo$conv$lme4$messages
[1] "Model failed to converge with max|grad| = 0.0120186 (tol = 0.002, component 1)"
>

We can use tryCatch() together with withCallingHandlers(), taking inspiration from this post.
dat <- read.csv('https://raw.githubusercontent.com/rnorouzian/e/master/nc.csv')
m <- tryCatch({
  withCallingHandlers({
    error <- FALSE
    list(model = lmer(math ~ ses*sector + (ses | sch.id), data = dat),
         error = error)
  }, warning = function(w) {
    if (grepl('failed to converge', w$message)) error <<- TRUE
  })
})
m$model
#Linear mixed model fit by REML ['lmerMod']
#Formula: math ~ ses * sector + (ses | sch.id)
# Data: dat
#REML criterion at convergence: 37509.07
#Random effects:
# Groups   Name        Std.Dev. Corr
# sch.id   (Intercept) 1.9053
#          ses         0.8577   0.46
# Residual             3.1930
#Number of obs: 7185, groups:  sch.id, 160
#Fixed Effects:
#(Intercept)          ses       sector   ses:sector
#     11.902        2.399        1.677       -1.322
#convergence code 0; 0 optimizer warnings; 1 lme4 warnings
m$error
#[1] TRUE
The output m is a list with model and error elements.
If we need to test for the warning after the model has already been created, we can use:
is_warning_generated <- function(m) {
  df <- summary(m)
  !is.null(df$optinfo$conv$lme4$messages) &&
    grepl('failed to converge', df$optinfo$conv$lme4$messages)
}
m <- lmer(math ~ ses*sector + (ses | sch.id), data = dat)
is_warning_generated(m)
#[1] TRUE

We can use safely from purrr. It runs the call and captures any error as a list element; if there is no error, that element is NULL.
library(purrr)
safelmer <- safely(lmer, otherwise = NA)
out <- safelmer(math ~ ses*sector + (ses | sch.id), data = dat)
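As a brief usage sketch beyond the answer above: safely() returns a list with result and error components. Note that lme4 convergence problems are raised as warnings rather than errors, so purrr's quietly(), which captures warnings, may be the closer fit for this particular question.
# Inspect what safely() captured: `error` is NULL when lmer() ran without error.
is.null(out$error)
out$result            # the fitted model, or NA (the `otherwise` value) on error

# Convergence issues are warnings, so quietly() can capture them instead:
quiet_lmer <- quietly(lmer)
qout <- quiet_lmer(math ~ ses*sector + (ses | sch.id), data = dat)
any(grepl('failed to converge', qout$warnings))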

I'm just going to say that @RonakShah's is_warning_generated could be made slightly more compact:
function(m) {
  w <- m@optinfo$conv$lme4$messages
  !is.null(w) && grepl('failed to converge', w)
}

I applied Ronak's solution to my own simulation data and found a problem.
The message may be a vector with multiple entries, in which case grepl() also returns multiple entries. However, the && operator only looks at the first entry, so later occurrences of 'failed to converge' go unnoticed. To avoid this behaviour, I changed && to &.
A different problem then occurred when there was no message at all. In that case the !is.null() part correctly becomes FALSE (i.e., no warning generated), but the grepl() part becomes logical(0), and FALSE & logical(0) is logical(0). (With && it would in fact work, since FALSE && logical(0) is FALSE.)
A solution that worked for me is
if (is.null(mess)) FALSE else grepl('failed to converge', mess)
which, in case of a convergence failure, gives a vector with TRUE at the position of the matching warning. This vector can then be evaluated, for example by taking its numeric (or Boolean) sum, which becomes greater than 0, or by any(), which becomes TRUE.
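Putting those fixes together, here is a minimal sketch of an adjusted helper (the function name is mine, not from the answers above); it handles both a missing message and a vector of several messages by reducing the grepl() result with any():
has_convergence_warning <- function(m) {
  mess <- summary(m)$optinfo$conv$lme4$messages
  if (is.null(mess)) FALSE else any(grepl('failed to converge', mess))
}

m <- lmer(math ~ ses*sector + (ses | sch.id), data = dat)
has_convergence_warning(m)   # TRUE for the model above, FALSE when no message is present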

Related

Error when Bootstrapping a Beta regression model in R with {betareg}

I need to bootstrap a beta regression model to check its robustness (because of a data point with a large Cook's distance) with the boot package (other suggestions welcome).
I get the following error:
Error in t.star[r, ] <- res[[r]] :
incorrect number of subscripts on matrix
Here's a reproducible example:
library(betareg)
library(boot)
fake_data <- data.frame(diet = as.factor(c(rep("A", 10), rep("B", 10))),
                        fat  = c(runif(10, .1, .5), runif(10, .4, .9)))
plot(fat ~ diet, data = fake_data)

my_beta_reg <- function(data, i){
  data_i <- data[i, ]
  mod <- betareg(data_i[, "fat"] ~ data_i[, "diet"])
  return(mod$coef)
}

b <- boot(fake_data, statistic = my_beta_reg, R = 50)
Error in t.star[r, ] <- res[[r]] :
incorrect number of subscripts on matrix
What's the issue?
Thanks in advance.
The issue is that mod$coef is a list:
betareg(fat ~ diet, data = fake_data)$coef
#$mean
#(Intercept) dietB
# -1.275793 2.490126
#
#$precision
# (phi)
#20.59014
You need to unlist() it, or preferably use the extractor function intended for this job, coef():
my_beta_reg <- function(data, i){
  mod <- betareg(fat ~ diet, data = data[i, ])
  # unlist(mod$coef)
  coef(mod)
}

b <- boot(fake_data, statistic = my_beta_reg, R = 50)
print(b)
#ORDINARY NONPARAMETRIC BOOTSTRAP
#
#
#Call:
#boot(data = fake_data, statistic = my_beta_reg, R = 50)
#
#
#Bootstrap Statistics :
# original bias std. error
#t1* -1.275793 -0.019847377 0.2003523
#t2* 2.490126 0.009008892 0.2314521
#t3* 20.590142 8.265394485 17.2271497
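As a follow-up sketch (not part of the answer above), the boot object can then be passed to boot.ci() for per-coefficient confidence intervals; index selects which statistic (t1*, t2*, t3*) to summarise, and a larger R than 50 is advisable for stable intervals.
# Percentile CI for the dietB coefficient (the second statistic, t2*)
boot.ci(b, type = "perc", index = 2)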

Removing completely separated observations from glm()

I'm doing a bit of exploratory data analysis using the HMDA data from the AER package; however, the variables I used to fit the model seem to contain some observations that perfectly determine the outcome, an issue known as "separation." I tried to remedy this using the solution recommended by this thread, yet when I executed the first chunk of source code from glm.fit(), R returned an error message:
Error in family$family : object of type 'closure' is not subsettable
so I could not proceed any further to remove those fully determined observations from my data. Could anyone help me fix this?
My current code is provided at below for your reference.
# load the AER package and HMDA data
library(AER)
data(HMDA)
# fit a 2nd-degree polynomial probit model
probit.fit <- glm(deny ~ poly(hirat, 2), family = binomial, data = HMDA)
# use the revised source code from that stackexchange thread to find the observations that trigger the warning
library(tidyverse)
library(dplyr)
library(broom)
eps <- 10 * .Machine$double.eps
if (family$family == "binomial") {
  if (any(mu > 1 - eps) || any(mu < eps))
    warning("glm.fit: fitted probabilities numerically 0 or 1 occurred",
            call. = FALSE)
}
# this return the following error message
# Error in family$family : object of type 'closure' is not subsettable
probit.resids <- augment(probit.fit) %>%
  mutate(p = 1 / (1 + exp(-.fitted)),
         warning = p > 1 - eps)
arrange(probit.resids, desc(.fitted)) %>%
  select(2:5, p, warning) %>%
  slice(1:10)
HMDA.nwarning <- filter(HMDA, !probit.resids$warning)
# using HMDA.nwarning should solve the problem...
probit.fit <- glm(deny ~ poly(hirat, 2), family = binomial, data = HMDA.nwarning)
This chunk of code
if (family$family == "binomial") {
  if (any(mu > 1 - eps) || any(mu < eps))
    warning("glm.fit: fitted probabilities numerically 0 or 1 occurred",
            call. = FALSE)
}
relies on the family object: binomial() is the function that gets called when you run glm() with family = "binomial". If you look inside glm (just type glm at the console):
if (is.character(family))
  family <- get(family, mode = "function", envir = parent.frame())
if (is.function(family))
  family <- family()
if (is.null(family$family)) {
  print(family)
  stop("'family' not recognized")
}
The glm function checks binomial()$family during the fit, and if any fitted probability is within eps of 0 or 1, it raises that warning.
You don't need to run that part yourself, but you do need to set eps <- 10 * .Machine$double.eps. So let's run the code below; note that if you want a probit model, you need to specify link = "probit" in binomial(), otherwise the default is the logit link:
library(AER)
library(tidyverse)
library(dplyr)
library(broom)
data(HMDA)
probit.fit <- glm(deny ~ poly(hirat, 2), family = binomial(link="probit"), data = HMDA)
eps <- 10 * .Machine$double.eps
probit.resids <- augment(probit.fit) %>%
  mutate(p = 1 / (1 + exp(-.fitted)),   # logistic inverse link; for the probit fit, pnorm(.fitted) would be the exact inverse
         warning = p > 1 - eps)
The warning column indicates whether an observation raises the warning; in this dataset there's one:
table(probit.resids$warning)

FALSE  TRUE
 2379     1
We can then filter that observation out:
HMDA.nwarning <- filter(HMDA, !probit.resids$warning)
dim(HMDA.nwarning)
[1] 2379 14
And rerun the regression:
probit.fit <- glm(deny ~ poly(hirat, 2), family = binomial(link="probit"), data = HMDA.nwarning)
coefficients(probit.fit)
(Intercept) poly(hirat, 2)1 poly(hirat, 2)2
-1.191292 8.708494 6.884404
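For reuse, the check quoted from glm.fit() can also be wrapped in a small helper. This is a sketch of my own (the name flag_separation is not from the answer above); it relies on fitted(), which returns the fitted probabilities of a binomial glm:
flag_separation <- function(fit, eps = 10 * .Machine$double.eps) {
  p <- fitted(fit)          # fitted probabilities on the response scale
  p > 1 - eps | p < eps     # TRUE where a probability is numerically 1 or 0
}

probit.fit <- glm(deny ~ poly(hirat, 2), family = binomial(link = "probit"), data = HMDA)
HMDA.nwarning <- HMDA[!flag_separation(probit.fit), ]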

Binomial glm in `rsq` package: error: object not found

It seems that whenever I use any of the rsq package functions (pcor for partial correlations; rsq and rsq.partial for R-squared) on a binomial glm that uses the two-column response notation, I get an error (see below). The model itself is fine: the fit works perfectly and no data are missing.
Is there something I can do about it?
Reproducible example:
require(rsq)
data(esoph)
model1 <- glm(cbind(ncases, ncontrols) ~ agegp + tobgp * alcgp,
              data = esoph, family = binomial)
pcor(model1)
Error in cbind(ncases, ncontrols) : object 'ncases' not found
rsq(model1)
Error in cbind(ncases, ncontrols) : object 'ncases' not found
rsq.partial(model1)
Error in cbind(ncases, ncontrols) : object 'ncases' not found
You have to attach(esoph) before calling the rsq functions on the fitted model, like this:
data(esoph)
model1 <- glm(cbind(ncases, ncontrols) ~ agegp + tobgp * alcgp,
              data = esoph, family = binomial)
attach(esoph)
pcor(model1)
# $adjustment
#[1] FALSE
#$variable
#[1] "agegp" "tobgp" "alcgp" "tobgp:alcgp"
#$partial.cor
#[1] 0.8092124 0.0000000 0.0000000 0.3815876
#Warning message:
#In (nLevels > 1) & (varcls == "factor") :
#longer object length is not a multiple of shorter object length
rsq(model1)
# [1] 0.826124
rsq.partial(model1)
#$adjustment
#[1] FALSE
#$variable
#[1] "agegp" "tobgp" "alcgp" "tobgp:alcgp"
#$partial.rsq
#[1] 6.548247e-01 -6.661338e-16 0.000000e+00 1.456091e-01
detach(esoph)
Alternatively, cbind-ing the two response columns into the data frame beforehand works:
esoph$ncases.ncontrols <- with(esoph, cbind(ncases, ncontrols))
glm(ncases.ncontrols ~ agegp + tobgp * alcgp, data=esoph, family=binomial)
A warning still comes up in pcor(), though.
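A short usage sketch under that alternative (model2 is my name, not from the answers above); with the combined response stored in the data frame, the rsq functions find it without attach():
esoph$ncases.ncontrols <- with(esoph, cbind(ncases, ncontrols))
model2 <- glm(ncases.ncontrols ~ agegp + tobgp * alcgp,
              data = esoph, family = binomial)
rsq(model2)
pcor(model2)   # still emits the warning mentioned above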

Moderated mediation with lavaan

I was looking at this example:
https://stats.stackexchange.com/questions/163436/r-moderated-mediation-using-the-lavaan-package to build my own moderated mediation model, but I get errors and cannot find the solution.
my model is:
model <- '
  # direct effects
  mem ~ c*var1 + cw*interaction + b*var2
  var2 ~ a*var1 + aw*interaction
  # covariates
  mem ~ age + sex + iq
  var1 ~ age + sex + iq
  var2 ~ age + sex + iq
  # indirect effect
  ab := a*b
  # total effect
  total := c + (a*b)
  # conditional effects
  ab1 := a*b + aw*b
  total1 := a*b + c + cw'
fit <- sem(model, data = mydata, se = "robust.huber.white", test = "bootstrap", bootstrap = 1000)
The error message I get is:
Error in chol.default(S) : the leading minor of order 8 is not positive definite
In addition: Warning message:
In lav_samplestats_from_data(lavdata = NULL, DataX = dataX, DataeXo = dataeXo, :
  lavaan WARNING: sample covariance can not be inverted
I did scale all variables beforehand; I am not sure if that is the issue.
Any thoughts?
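No answer is recorded here, but as a purely illustrative diagnostic sketch (my own, with mydata and the variable names taken from the question): the "sample covariance can not be inverted" warning means the observed covariance matrix is singular, and inspecting its eigenvalues can show which variables are (nearly) linearly dependent.
# Assumed observed-variable names from the model above; adjust to the real data.
vars <- c("mem", "var1", "var2", "interaction", "age", "sex", "iq")
S <- cov(mydata[, vars], use = "pairwise.complete.obs")
round(eigen(S)$values, 6)   # eigenvalues at or near zero indicate redundancy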

R: Bootstrapped binary mixed-model logistic regression using bootMer() of the new lme4 package

I want to use the new bootMer() feature of the lme4 package (currently the development version). I am new to R and don't know which function I should write for its FUN argument. The documentation says it needs to return a numeric vector, but I have no idea what that function is supposed to do. I have a mixed-model formula that is passed to bootMer(), and a number of replicates, but I don't know what that external function does. Is it supposed to be a template for bootstrapping methods? Aren't bootstrapping methods already implemented in bootMer? Why is an external "statistic of interest" needed, and which statistic of interest should I use?
Is the following syntax proper to work with? R keeps raising the error that FUN must return a numeric vector. I don't know how to separate the estimates from the "fit", and should I even do that in the first place? I am simply lost with that "FUN" argument. Also, should I pass the mixed-model glmer() formula using the variable "mixed5", or should I pass some pointers and references? I see in the examples that x (the first argument of bootMer()) is a *lmer() object. I wanted to write mixed5 but it produced an error.
Many thanks.
My code is:
library(lme4)
library(boot)
(mixed5 <- glmer(DV ~ (Demo1 + Demo2 + Demo3 + Demo4 + Trt)^2
                 + (1 | PatientID) + (0 + Trt | PatientID),
                 family = binomial(logit), MixedModelData4))

FUN <- function(formula) {
  fit <- glmer(DV ~ (Demo1 + Demo2 + Demo3 + Demo4 + Trt)^2
               + (1 | PatientID) + (0 + Trt | PatientID),
               family = binomial(logit), MixedModelData4)
  return(coef(fit))
}

result <- bootMer(mixed5, FUN, nsim = 3, seed = NULL, use.u = FALSE,
                  type = c("parametric"),
                  verbose = T, .progress = "none", PBargs = list())
result
FUN
fit
And the error:
Error in bootMer(mixed5, FUN, nsim = 3, seed = NULL, use.u = FALSE, type = c("parametric"), :
bootMer currently only handles functions that return numeric vectors
-------------------------------------------------------- Update -----------------------------------------------------
I edited the code as Ben instructed. The code ran fine, but the SEs and biases were all zero. Also, do you know how to extract p-values from this output (it looks strange to me)? Should I use mixed() from the afex package?
My revised code:
library(lme4)
library(boot)
(mixed5 <- glmer(DV ~ (Demo1 + Demo2 + Demo3 + Demo4 + Trt)^2
                 + (0 + Trt | PatientID),
                 family = binomial(logit), MixedModelData4))

FUN <- function(fit) {
  fit <- glmer(DV ~ (Demo1 + Demo2 + Demo3 + Demo4 + Trt)^2
               + (1 | PatientID) + (0 + Trt | PatientID),
               family = binomial(logit), MixedModelData4)
  return(fixef(fit))
}
result <- bootMer(mixed5, FUN, nsim = 3)
result
-------------------------------------------------------- Update 2 -----------------------------------------------------
I also tried the following but the code generated warnings and didn't give any result.
(mixed5 <- glmer(DV ~ Demo1 + Demo2 + Demo3 + Demo4 + Trt
                 + (1 | PatientID) + (0 + Trt | PatientID),
                 family = binomial(logit), MixedModelData4))

FUN <- function(mixed5) {
  return(fixef(mixed5))
}

result <- bootMer(mixed5, FUN, nsim = 2)
Warning message:
In bootMer(mixed5, FUN, nsim = 2) : some bootstrap runs failed (2/2)
> result
Call:
bootMer(x = mixed5, FUN = FUN, nsim = 2)
Bootstrap Statistics :
WARNING: All values of t1* are NA
WARNING: All values of t2* are NA
WARNING: All values of t3* are NA
WARNING: All values of t4* are NA
WARNING: All values of t5* are NA
WARNING: All values of t6* are NA
-------------------------------------------------------- Update 3 -----------------------------------------------------
This code generated warnings as well:
FUN <- function(fit) {
  return(fixef(fit))
}

result <- bootMer(mixed5, FUN, nsim = 2)
The warnings and results:
Warning message:
In bootMer(mixed5, FUN, nsim = 2) : some bootstrap runs failed (2/2)
> result
Call:
bootMer(x = mixed5, FUN = FUN, nsim = 2)
Bootstrap Statistics :
WARNING: All values of t1* are NA
WARNING: All values of t2* are NA
WARNING: All values of t3* are NA
WARNING: All values of t4* are NA
WARNING: All values of t5* are NA
WARNING: All values of t6* are NA
There are basically two (simple) confusions here.
The first is between coef() (which returns a list of matrices) and fixef() (which returns a vector of the fixed-effect coefficients): I assume fixef() is what you wanted, although you might want something like c(fixef(mixed), unlist(VarCorr(mixed))).
The second is that FUN should take a fitted model object as input ...
For example:
library(lme4)
library(boot)
mixed <- glmer(incidence/size ~ period + (1 | herd),
               weights = size, data = cbpp, family = binomial)

FUN <- function(fit) {
  return(fixef(fit))
}
result <- bootMer(mixed, FUN, nsim = 3)
result
## Call:
## bootMer(x = mixed, FUN = FUN, nsim = 3)
## Bootstrap Statistics :
## original bias std. error
## t1* -1.398343 -0.20084060 0.09157886
## t2* -0.991925 0.02597136 0.18432336
## t3* -1.128216 -0.03456143 0.05967291
## t4* -1.579745 -0.08249495 0.38272580
##
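As a follow-up sketch beyond the answer (a realistic nsim is assumed, unlike the nsim = 3 demo above): since bootMer() returns a "boot"-classed object, boot.ci() from the boot package can summarise individual coefficients, and lme4's confint() offers a bootstrap method that wraps bootMer() internally.
result <- bootMer(mixed, FUN, nsim = 200)
boot.ci(result, index = 2, type = "perc")    # percentile CI for the second fixed effect

confint(mixed, method = "boot", nsim = 200)  # bootstrap CIs via lme4's own wrapper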
This might be the same problem that I reported as an issue here. At least it leads to the same unhelpful error message and took me a while too.
That would mean you have missing values in your data, which lmer ignores but which kill bootMer.
Try:
(mixed5 <- glmer(DV ~ (Demo1 + Demo2 + Demo3 + Demo4 + Trt)^2
                 + (1 | PatientID) + (0 + Trt | PatientID),
                 family = binomial(logit),
                 na.omit(MixedModelData4[, c('DV', 'Demo1', 'Demo2', 'Demo3', 'Demo4', 'Trt', 'PatientID')])))
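A small diagnostic sketch of my own (MixedModelData4 and its columns are the asker's data, so this is hypothetical): counting incomplete rows first confirms whether missing values are really what makes the bootstrap runs fail.
vars <- c('DV', 'Demo1', 'Demo2', 'Demo3', 'Demo4', 'Trt', 'PatientID')
sum(!complete.cases(MixedModelData4[, vars]))   # rows that bootMer would choke on
dat_complete <- na.omit(MixedModelData4[, vars])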
