I have a set of GLMMs fitted with a binary response variable and a set of continuous predictors, and I would like to get confidence intervals for each model. I've been using the confint() function, at 95% and with the profile method, and it works without any problems when applied to a model with no interactions.
However, when I apply confint() to a model with interactions (continuous*continuous), I've been getting this error:
m1CI <- confint(m1, level=0.95, method="profile")
Error in zeta(shiftpar, start = opt[seqpar1][-w]) :
profiling detected new, lower deviance
The model itself fits without any problem (although I specified an optimizer because some of the models had convergence issues), and here is the final form of one of them:
m1 <- glmer(Use~RSr2*W+RSr3*W+RShw*W+RScon*W+
RSmix*W+(1|Pack/Year),
control=glmerControl(optimizer="bobyqa",
optCtrl=list(maxfun=100000)),
data = data0516RS, family=binomial(link="logit"))
Does anyone know why this is happening, and how I can solve it?
I am using R version 3.4.3 and lme4 1.1-17.
The problem was solved by following these instructions:
The error message indicates that during profiling, the optimizer found
a fitted value that was significantly better (as characterized by the
'devtol' parameter) than the supposed minimum-deviance solution returned
in the first place. You can boost the 'devtol' parameter (which is
currently set at a conservative 1e-9 ...) if you want to ignore this --
however, the non-monotonic profiles are also warning you that something
may be wonky with the profile.
From https://stat.ethz.ch/pipermail/r-sig-mixed-models/2014q3/022394.html
I used confint.merMod from the lme4 package and boosted the 'devtol' parameter, first to 1e-8, which didn't work for my models, and then to 1e-7. With this value, it worked.
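In code, that boosting can be done by profiling the model explicitly (devtol is an argument of profile.merMod) and then computing the intervals from the profile object; a minimal sketch for the model above:
# Profile with a looser deviance tolerance, then get the intervals:
m1prof <- profile(m1, devtol = 1e-7)
m1CI <- confint(m1prof, level = 0.95)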
Related
I need to perform a log-binomial regression using GEE. I tried to do this using the gee package with the following code:
gee(y~x,id,data = data, family="binomial",link="log",corstr="exchangeable")
Where y is a binary variable 0/1.
When I run that, I get this error:
Error in gee(y~x,id,data = data, family="binomial",link="log",corstr="exchangeable") :
unused argument (link = "log")
I know that the error is straightforward; however, I don't know how to specify the link function inside the gee function. The problem is that the prevalence of the binary outcome is above 50%, so the odds ratio is not a good approximation of the relative risk.
Please help.
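(Not a definitive answer, but for what it's worth: gee() has no separate link argument; the link is normally specified inside the family object. A sketch reusing the names from the question:)
library(gee)
# Specify the log link through the family object rather than a 'link' argument:
fit <- gee(y ~ x, id = id, data = data,
           family = binomial(link = "log"),
           corstr = "exchangeable")
summary(fit)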
I have run the Cox regression model below
allcause_cox<-coxph(Surv(allcause_time, allcause)~centgroup, data=fulldata)
This has run fine. However, I want robust standard errors, so I have introduced the
cluster(parity)
term into the model, where parity is another variable in my data set. However, this won't run. It returns the error
> allcause_cox<-coxph(Surv(allcause_time, allcause)~centgroup, data=fulldata, cluster(parity))
Error in coxph(Surv(allcause_time, allcause)~centgroup, data=fulldata, :
weights must be finite
Is there a solution to this? I have read about the weights argument which can be added, but I'm not sure what it does.
cluster() is in the wrong place: passed as an unnamed third argument it gets matched to coxph()'s weights argument, which is why you see the "weights must be finite" error. It belongs in the model formula, so the command should read
allcause_cox<-coxph(Surv(allcause_time, allcause)~centgroup+cluster(parity), data=fulldata)
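As a side note, I believe recent versions of the survival package also accept cluster as a regular argument, which should be equivalent:
allcause_cox <- coxph(Surv(allcause_time, allcause) ~ centgroup,
                      data = fulldata, cluster = parity)
summary(allcause_cox)  # the coefficient table should now include a robust se column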
I have a GLMM with binary variables, fitted using R version 3.6.2 and lmerTest 3.1-2, and I would like to get confidence intervals, so I've used the confint() function.
Here:
model71<-glmer(fetale_Schwangerschaftskomplikationen~Infektion_in_Schwangerschaft+(1|Pat_ID),data=CED,family="binomial")
With the summary() function I get results without a warning, but for the confidence intervals I get:
confint(model71)
Computing profile confidence intervals ...
Error in zetafun(np, ns) : profiling detected new, lower deviance
I tried to solve the problem as described here, by boosting the devtol parameter via confint.merMod.
But this boosting isn't working for me. Boosting devtol to 1e-8, 1e-7, or 1e-6 gives me the same output:
confint(model71)
Computing profile confidence intervals ...
Error in zetafun(np, ns) : profiling detected new, lower deviance
Does anyone have another idea of how I can solve this problem?
(I am sorry for my bad English)
Thanks
As proposed in the comments, you can specify the method used for generating confidence intervals with confint.merMod() via the method parameter, e.g. confint.merMod(model, method = "Wald").
Options include parametric bootstrapping ("boot"), Wald ("Wald"), and likelihood profiling ("profile").
As always, your experimental setup will determine what suits your use case best.
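For example, assuming model71 is the fitted glmer model from the question, the alternatives look like this (a sketch):
confint(model71, method = "Wald")             # fast, asymptotic Wald intervals
confint(model71, method = "boot", nsim = 500) # parametric bootstrap; slower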
I've seen that a common error when running a generalized least squares (gls) model from the nlme package in R is "false convergence (8)". I am trying to run gls models to account for the spatial dependence of my residuals, but I got stuck on the same problem. For example:
library(nlme)
set.seed(2)
samp.sz<-400
lat<-runif(samp.sz,-4,4)
lon<-runif(samp.sz,-4,4)
exp1<-rnorm(samp.sz)
exp2<-rnorm(samp.sz)
resp<-1+4*exp1-3*exp2-2*lat+rnorm(samp.sz)
mod.cor<-gls(resp~exp1+exp2,correlation=corGaus(form=~lat,nugget=TRUE))
Error in gls(resp ~ exp1 + exp2, correlation = corGaus(form = ~lat, nugget = TRUE)) :
false convergence (8)
(the above data simulation was copied from here because it yields the same problem I am facing).
Then I read that the function glsControl has some parameters (maxIter, msMaxIter, returnObject) that can be set before running the analysis, which might solve this error. As an attempt to understand what was going on, I set the three parameters above to 500, 2000 and TRUE and ran the same code again, but the error still shows up. I think that glsControl didn't work at all, because no result was shown even though I asked for it.
glsControl(maxIter = 500, msMaxIter=2000, returnObject = TRUE)
mod.cor<-gls(resp~exp1+exp2,correlation=corGaus(form=~lat,nugget=TRUE))
For comparison, if I run different models with the same variables, it works fine and no error is shown.
For example, models containing only one explanatory variable.
mod.cor2<-gls(resp~exp1,correlation=corGaus(form=~lat,nugget=TRUE))
mod.cor3<-gls(resp~exp2,correlation=corGaus(form=~lat,nugget=TRUE))
I really dug into several sites, forums, and books in a desperate search to solve it, and I came to learn that 'false convergence' is a recurrent error that many users have faced. However, none of the previous posts seems to solve it for me. I really thought glsControl could provide an alternative, but it didn't. Do you have a clue about how I can solve this?
I really appreciate any help. Thanks in advance.
The problem is that the nugget effect is very small. Provide better starting values:
mod.cor <- gls(resp ~ exp1 + exp2,
correlation = corGaus(c(200, 0.1), form = ~lat, nugget = TRUE))
summary(mod.cor)
#<snip>
#Correlation Structure: Gaussian spatial correlation
# Formula: ~lat
# Parameter estimate(s):
# range nugget
#2.947163e+02 5.209379e-06
#</snip>
Note that this model may be sensitive to starting values even if there is no error or warning.
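One quick check (a sketch; the alternative starting values are arbitrary and the refit may itself need tuning) is to refit with different starting values and compare the estimated correlation parameters:
mod.cor.b <- gls(resp ~ exp1 + exp2,
                 correlation = corGaus(c(50, 0.3), form = ~lat, nugget = TRUE))
# Compare the range/nugget estimates of the two fits on the natural scale:
coef(mod.cor$modelStruct$corStruct, unconstrained = FALSE)
coef(mod.cor.b$modelStruct$corStruct, unconstrained = FALSE)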
I would like to add a quote from library(lme4); help("convergence"):
The lme4 package uses general-purpose nonlinear optimizers (e.g.
Nelder-Mead or Powell's BOBYQA method) to estimate the
variance-covariance matrices of the random effects. Assessing reliably
whether such algorithms have converged is difficult.
I believe something similar applies here. This model is clearly problematic and you should be grateful for getting this error. You should at least check how the fit changes with different starting values and try increasing the number of iterations or decreasing the tolerance. In the end, I would suggest looking for a model that better fits the data (we know that this would be an OLS model including lat as a linear predictor here).
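If you do want to change the iteration limits or tolerances, note that glsControl() on its own only returns a list of settings; it takes effect only when passed to gls() through its control argument, e.g.:
mod.cor <- gls(resp ~ exp1 + exp2,
               correlation = corGaus(c(200, 0.1), form = ~lat, nugget = TRUE),
               control = glsControl(maxIter = 500, msMaxIter = 2000,
                                    returnObject = TRUE))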
PS: A good coding style uses blanks where appropriate.
I am fitting a model regarding absence-presence data and I would like to check whether the random factor is significant or not.
To do this, one should compare a GLMM with a GLM and check with a likelihood-ratio test which one fits best, if I understand correctly.
But if I run anova(glm, glmm), I get an Analysis of Deviance table and no output that compares the models.
How do I get the output that I desire, thus comparing both models?
Thanks in advance,
Koen
Somewhere you got the wrong impression about using anova() for this. Below, re was fit using glmmPQL() from the MASS package and fe was fit using glm() from base R:
> anova(re,fe)
#Error in anova.glmmPQL(re, fe) : 'anova' is not available for PQL fits
That message appears to be the sole reason anova.glmmPQL() was created.
See this thread for verification and vague explanation:
https://stat.ethz.ch/pipermail/r-help/2002-July/022987.html
Simply put, anova() does not work with glmmPQL fits; you need to use glmer() from the lme4 package to be able to use anova().
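A minimal sketch of such a comparison, done by hand via the log-likelihoods (the data frame d and the variables presence, x1 and site are hypothetical placeholders; depending on your lme4 version, anova(fit_glmm, fit_glm) may give the same test directly):
library(lme4)
# Fit the model with and without the random factor:
fit_glmm <- glmer(presence ~ x1 + (1 | site), data = d, family = binomial)
fit_glm  <- glm(presence ~ x1, data = d, family = binomial)
# Hand-rolled likelihood-ratio test on 1 df (naive: the variance being tested
# lies on the boundary of its parameter space, so this p-value is conservative):
lrt <- as.numeric(2 * (logLik(fit_glmm) - logLik(fit_glm)))
pchisq(lrt, df = 1, lower.tail = FALSE)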