Increasing number of iterations for lme4 version 1.1-7 - r
I am encountering a problem with iterations while trying to fit a mixed-effects binomial regression using the glmer function from the lme4 package, version 1.1-7.
When I run the model using the code:
model <- glmer(Clinical.signs ~ cloacal +(1|Chicken_ID), family = binomial,
data = viral_load_9)
I get the error:
Error: pwrssUpdate did not converge in (maxit) iterations
When I follow the advice given here and use the code:
model <- glmer(Clinical.signs ~ cloacal +(1|Chicken_ID), family = binomial,
data = viral_load_9,
control=glmerControl(optimizer="bobyqa",
optCtrl = list(maxfun = 100000)))
I still have the exact same error message.
Any suggestions on what might be wrong with my code will be gratefully received.
-----------------------------------------------------------------
Following the advice from aosmith (thanks for the suggestion!), I am including the data and the code so that others can replicate the results I am getting. Note that the code worked fine for the variable "oral" and produced "model_1", but when I ran it with the variable "cloacal", I got the error message noted above.
Chicken_ID <- c(44,44,45,45,46,46,47,47,48,48,49,49,50,50,51,51,52,52,53,55,55)
oral <- c(-0.4827578,-0.1845839,-1.3772797,-0.7809318,-0.4827578,1.6044598,0.1135901,0.411764,-0.1845839,1.6044598,-0.1845839,1.6044598,-1.6754536,0.709938,-1.0791057,0.709938,0.1135901,1.0081119,0.411764,-1.6754536,-0.1845839)
cloacal <- c(-0.9833258,0.450691,-1.1267275,0.7374944,-1.1267275,1.0242977,-1.5569325,1.0242977,0.3072893,1.0242977,-0.1229157,1.1676994,-1.5569325,0.5940927,0.450691,0.3072893,-1.1267275,0.7374944,0.1638876,-1.5569325,1.1676994)
clinical.signs <- c("YES","YES","NO","YES","NO","YES","NO","YES","YES","YES","YES","YES","NO","YES","YES","YES","NO","YES","YES","NO","YES")
clinical.signs <- factor(clinical.signs)
viral_load <- data.frame(Chicken_ID, oral, cloacal, clinical.signs)
library(lme4)
model_1 <- glmer(clinical.signs ~ oral +(1|Chicken_ID),
family = binomial, data = viral_load)
summary(model_1)
model_2 <- glmer(clinical.signs ~ cloacal +(1|Chicken_ID),
family = binomial, data = viral_load)
It may not be a problem with your code. See this Q on Cross-Validated.
Some things you can do to prevent convergence failures (a minimal sketch of these suggestions follows the list):
Rescale and centre continuous predictors
Try a different optimizer via glmerControl()
Check your data for sparseness: if there aren't enough outcomes or observations at certain levels of the predictors, the model may fail to converge.
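A minimal sketch of these suggestions, assuming the reproducible viral_load data posted above; the rescaled column name and the optimizer settings are illustrative, not a guaranteed fix:

library(lme4)

## 1. Rescale/centre the continuous predictor
viral_load$cloacal_z <- as.numeric(scale(viral_load$cloacal))

## 2. Try a different optimizer and a larger evaluation budget
model_2b <- glmer(clinical.signs ~ cloacal_z + (1 | Chicken_ID),
                  family = binomial, data = viral_load,
                  control = glmerControl(optimizer = "bobyqa",
                                         optCtrl = list(maxfun = 2e5)))

## 3. Look for sparse cells, e.g. very few "NO" outcomes overall
table(viral_load$clinical.signs)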
Related
Why do my variables disappear after using feature selection with step()?
I made a multinomial logistic regression model using library(nnet) in R. I notice that, one, I get a warning, and two, after using the step() function, my predictor variables are reduced solely to the variable I'm attempting to predict (Depression).

summary(multinom_model)$call

produces:

multinom(formula = out ~ ., data = train)
Warning message:
In sqrt(diag(vc)) : NaNs produced

BUT

mult_model <- step(multinom_model, trace = FALSE)
summary(mult_model)$call

produces:

multinom(formula = out ~ Depressed, data = train)

Why is this happening? Also, both models predict the same output on the test data. Does it have to do with the warning message? How do I fix that?
Compare regression with robust standard errors to null using Wald's Test in R
I am running a regression model that looks like this:

wwMLR <- lm(contAOMIdiff ~ PHQ9 + KVIQtot, data = wwMeanWide4)

Having used check_heteroscedasticity(wwMLR) from the performance package, I can see that the regression model violates the assumption of homoscedasticity. Because of this I have built a model with robust standard errors, shown below:

library(estimatr)
wwMLR_hc3 <- lm_robust(formula = contAOMIdiff ~ PHQ9 + KVIQtot,
                       data = wwMeanWide4, se_type = "HC3", alpha = 0.0482)

What I would like to do now is compare this regression model to a null model using a Wald test. The null model looks like this:

wwnull_hc3 <- lm_robust(formula = contAOMIdiff ~ 1,
                        data = wwMeanWide4, se_type = "HC3", alpha = 0.0482)

When I try to compare these using a Wald test:

library(lmtest)
waldtest(wwMLR_hc3, wwnull_hc3, vcov = vcovHC)

I get an error:

Error in eval(predvars, data, env) : object 'contAOMIdiff' not found

contAOMIdiff is my response variable in the regression. I am not sure why it can't be found, but I assume this may be a compatibility issue between the lm_robust model type and the waldtest() function. If anyone has any ideas on how I can get this to work, or an alternative way to run a Wald test on these two models, I would be very grateful. I have found a similar question here, which has not been answered: R Wald test for cluster robust se's
Error in brglm model with Backward elimination with Interaction: error in do.call("glm.control", control) : second argument must be a list
After fitting a model with glm I got this as a result:

Warning message:
glm.fit: Adjusted probabilities with numerical value 0 or 1.

After some research on Google, I tried the brglm package. When I try to apply backward elimination to the model, I get the following error:

Error in do.call("glm.control", control) : second argument must be a list

I searched on Google but I didn't find anything. Here is my code with brglm:

library(mlbench)
#require(Amelia)
library(caTools)
library(mlr)
library(ciTools)
library(brglm)

data("BreastCancer")
data_bc <- BreastCancer
head(data_bc)
dim(data_bc)

# Delete the Id column
data_bc <- data_bc[, -1]
dim(data_bc)
str(data_bc)

# Convert all factor columns to numeric, except Class
for (i in 1:9) {
  data_bc[, i] <- as.numeric(as.character(data_bc[, i]))
}
str(data_bc)

# Convert Class (benign / malignant) to binary 0/1, then make it a factor
data_bc$Class <- ifelse(data_bc$Class == "malignant", 1, 0)
data_bc$Class <- factor(data_bc$Class, levels = c(0, 1))
str(data_bc)

model <- brglm(formula = Class ~ .^2, data = data_bc,
               family = "binomial", na.action = na.exclude)
summary(model)

# Backward elimination:
final <- step(model, direction = "backward")
You can work around this by using the brglm2 package, which supersedes the brglm package anyway:

model <- glm(formula = Class ~ .^2, data = na.omit(data_bc),
             family = "binomial", na.action = na.fail,
             method = "brglmFit")
final <- step(model, direction = "backward")

length(coef(model))  ## 46
length(coef(final))  ## 42
setdiff(names(coef(model)), names(coef(final)))
## [1] "Cl.thickness:Epith.c.size" "Cell.size:Marg.adhesion"
## [3] "Cell.shape:Bl.cromatin"    "Bl.cromatin:Mitoses"

Some general concerns about your approach:

Stepwise reduction is one of the worst forms of model reduction (cf. lasso, ridge, elasticnet ...).

In the presence of missing data, model comparison (e.g. by AIC) is questionable, as different models will be fitted to different subsets of the data. Given that you would only lose a small fraction of your data by using na.omit() (compare nrow(data_bc) with sum(complete.cases(data_bc))), I would strongly recommend dropping observations with NA values from the data set before starting.

It's also not clear to me that comparing penalized models via AIC is statistically appropriate (see here).
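As an illustration of the penalized alternatives mentioned above, here is a minimal sketch using the glmnet package, assuming the data_bc object built in the question; the column handling and the lambda choice are illustrative, and this is a sketch rather than a drop-in replacement for the bias-reduced fit:

library(glmnet)

## complete cases only, as recommended above
bc <- na.omit(data_bc)

## design matrix with all pairwise interactions (drop the intercept column)
x <- model.matrix(Class ~ .^2, data = bc)[, -1]
y <- bc$Class

## cross-validated lasso (alpha = 1) for a binomial outcome
cv_fit <- cv.glmnet(x, y, family = "binomial", alpha = 1)
coef(cv_fit, s = "lambda.1se")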
Permutation test error for likelihood ratio test of mixed model in R: permlmer, lmer, lme4, predictmeans
I would like to test the main effect of a categorical variable using a permutation test on a likelihood ratio test. I have a continuous outcome, a dichotomous grouping predictor, and a categorical time predictor (Day, 5 levels). The data are temporarily available in rda format via this Drive link.

library(lme4)
lmer1 <- lmer(outcome ~ Group*Day + (1 | ID), data = data,
              REML = F, na.action = na.exclude)
lmer2 <- lmer(outcome ~ Group + (1 | ID), data = data,
              REML = F, na.action = na.exclude)

library(predictmeans)
permlmer(lmer2, lmer1)

However, this code gives me the following error:

Error in density.default(c(lrtest1, lrtest), kernel = "epanechnikov") :
  need at least 2 points to select a bandwidth automatically

The following code does work, but I believe it does not exactly give me the outcome of a permuted LR test:

library(nlme)
lme1 <- lme(outcome ~ Genotype*Day, random = ~1 | ID, data = data,
            na.action = na.exclude)
library(pgirmess)
PermTest(lme1)

Can anyone point out why I get the "epanechnikov" error when using the permlmer function? Thank you!
The issue is with missing values: remove all NAs/NaNs from your dataset and rerun the models. I had the same problem and that solved it.
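For example, a minimal sketch of that advice, assuming the data object and the model formulas from the question (the selected column names are an assumption):

library(lme4)
library(predictmeans)

## keep only rows with no missing values in the variables the models use
data_cc <- na.omit(data[, c("outcome", "Group", "Day", "ID")])

lmer1 <- lmer(outcome ~ Group*Day + (1 | ID), data = data_cc, REML = F)
lmer2 <- lmer(outcome ~ Group + (1 | ID), data = data_cc, REML = F)

permlmer(lmer2, lmer1)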
stepAIC handling of multinom models
I am seeing some weird behavior with the stepAIC function in the MASS package when dealing with multinomial logistic models. Here is some sample code:

library(nnet)
library(MASS)
example("birthwt")
race.model <- multinom(race ~ smoke, bwt)
race.model2 <- stepAIC(race.model, k = 2)

In this case race.model and race.model2 have identical terms; stepAIC did not prune anything. However, I need to query certain attributes of the models, and I get an error with race.model2:

formula(race.model)[2]

returns race(), but

formula(race.model2)[2]

gives the error:

Error in terms.formula(newformula, specials = names(attr(termobj, "specials"))) :
  invalid model formula in ExtractVars

This behavior only seems to occur when stepAIC does not remove terms from the model. In the following code, terms are removed by stepAIC, and both models can be properly queried:

race.big <- multinom(race ~ ., bwt)
race.big2 <- stepAIC(race.big, k = 2)
formula(race.big)[2]
formula(race.big2)[2]

Any ideas about what is going wrong here?