I get the following warning from glmer:
m <- glmer(cbind(Y, N) ~ c1 + c2 + c3 + (1 | g1:Year) + (1 + c1 + c2 | g1) + (1 | g1:Site),
           family = binomial, data = data,
           control = glmerControl(optimizer = 'optimx', optCtrl = list(method = 'nlminb')))
# Warning in optimx.check(par, optcfg$ufn, optcfg$ugr, optcfg$uhess, lower, :
# Parameters or bounds appear to have different scalings.
# This can cause poor performance in optimization.
# It is important for derivative free methods like BOBYQA, UOBYQA, NEWUOA.
This is interesting, since all my covariates are scaled (c1: mean = 5.410769e-16, sd = 1), (c2: mean = -2.411114e-16, sd = 1), (c3: mean = 7.602661e-18, sd = 1).
What does this warning actually mean? All my covariates are already scaled (see above), so scaling them again won't fix it.
Should I be concerned that this warning means my model could return unreliable estimates? I got no other warnings or errors.
Thank you!
PS: the warning seems to be somewhat non-deterministic; on certain data sets I've observed that, across different runs, it is sometimes present and sometimes not.
I am rather late with this, but since you haven't gotten any other answers, maybe someone can still use it. The source of this message is the optimx package, which you are using as the non-linear optimiser.
The message is the result of a scale check (see the scalecheck() function in the optimx manual), which raises suspicion when the parameter space may be scaled too narrowly. Note that the check runs on the parameter vector handed to the optimiser (the variance-component and fixed-effect parameters), not on your covariates, which is why rescaling the covariates does not make it go away. The check can also throw misleading warnings; John Nash himself wrote in the manual: "It is, however, an imperfect and heuristic tool, and could be improved." If you get good results, you are probably okay.
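As a rough illustration, you can run the check yourself on a parameter vector (a minimal sketch with made-up values, assuming scalecheck() is exported by optimx as its manual describes):
library(optimx)

# hypothetical parameter vector with badly mixed magnitudes
par0 <- c(theta1 = 1e-4, theta2 = 1, theta3 = 1e3)

# reports log scale ratios of the parameters; large ratios trigger the warning above
scalecheck(par0)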
Hope this helps,
Jan
I am estimating a mixed model with glmer and am running into the error
Error in zeta(shiftpar, start = opt[seqpar1][-w]) : profiling detected new, lower deviance
I found a suggested solution: "boosting" the devtol parameter. However, I don't know how to do that, and I can't find an explanation anywhere.
Here is my model:
m3.glmer = glmer(binExap ~ (1|id) + Lag1 + Lag2 + Lag5 + BroadQ,
                 data = CLnMD,
                 family = binomial(link = "logit"),
                 nAGQ = 1,
                 control = glmerControl(optimizer = 'bobyqa',
                                        optCtrl = list(maxfun = 100000)))
This is the code I am using for estimating the CIs:
KIsBoot <- confint.merMod(m3.glmer, method = "profile", nsim = 250)
Now where do I boost/how would I boost "devtol"?
This is admittedly a bit obscure. confint.merMod() takes a ... argument that gets passed to profile.merMod. ?profile.merMod says:
devtol: tolerance for fitted deviances less than baseline (supposedly
minimum) deviance.
So, if you want to ignore this check completely,
confint(m3.glmer, devtol = Inf)
should work. (You don't need .merMod, R figures that out automatically; "profile" is the default setting; and nsim is ignored unless method = "boot" [we should add a warning!])
However, I would also say a little bit pessimistically that if you're getting this error your profile CIs might not be very reliable ... try visualizing the profile as well (pp <- profile(m3.glmer, devtol = Inf); lattice::xyplot(pp)) to make sure it looks reasonable (i.e. at least monotonic!)
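Putting it together, a short sketch of the workflow (reusing m3.glmer from the question):
pp <- profile(m3.glmer, devtol = Inf)  # devtol = Inf skips the "new, lower deviance" check
lattice::xyplot(pp)                    # the traces should look smooth and monotonic
confint(pp)                            # CIs computed from the already-built profile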
I am trying to find the best-fitting model for my data using the lme() function from the nlme package in R. Here is my model when the slope is fixed:
FixedRopeLength <- lme(EnergyCost ~ RopeLength,
                       data = data,
                       random = ~ 1 | Subject, method = "ML")
summary(FixedRopeLength)
To see whether a random slope provides a better model than a fixed slope, I let the slope vary across Subject as follows:
RandomRopeLength <- lme(EnergyCost ~ RopeLength,
                        data = data,
                        random = ~ RopeLength | Subject, method = "ML")
summary(RandomRopeLength)
However, I got this error:
Error in lme.formula(EnergyCost ~ RopeLength, data = data, random =
~RopeLength | : nlminb problem, convergence error code = 1
message = iteration limit reached without convergence (10)
Any solution??
Thank you so much for your help. Your code worked; I only needed to adapt it for the lme function. Here is the code that resolves the aforementioned error:
RandomRopeLength <- lme(EnergyCost ~ RopeLength, data = data,
                        random = ~ RopeLength | Subject, method = "ML",
                        control = list(msMaxIter = 1000, msMaxEval = 1000))
summary(RandomRopeLength)
Thanks!
?lme shows that there is a control argument, which redirects you to ?lmeControl, which gives you
msMaxIter: maximum number of iterations for the optimization step
inside the ‘lme’ optimization. Default is ‘50’.
and
msMaxEval: maximum number of evaluations of the objective function
permitted for nlminb. Default is ‘200’.
These correspond to eval.max and iter.max from ?nlminb. Since I'm not sure which of these is the problem, I would re-run the model with
control = lmeControl(msMaxIter = 1000, msMaxEval = 1000)
However, I'll warn you that once you have a problem that experiences numerical problems with the default parameter settings, adjusting the parameter settings may just lead to other problems farther down the line ...
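For a self-contained illustration, here is a runnable sketch using the built-in Orthodont data as a stand-in for the EnergyCost/RopeLength model (the control settings, not the data, are the point):
library(nlme)

# random intercept and slope, ML, with raised iteration/evaluation limits
fm <- lme(distance ~ age, data = Orthodont,
          random = ~ age | Subject, method = "ML",
          control = lmeControl(msMaxIter = 1000, msMaxEval = 1000))
summary(fm)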
I'm trying to carry out covariate balancing using the ebal package. The basic code is:
W1 <- weightit(Conformidad ~ SexoCon + DurPetFiscPrisión1 +
                 Edad + HojaHistPen + NacionCon + AnteVivos +
                 TipoAbog + Reincidencia + Habitualidad + Delitos,
               data = Suspension1,
               method = "ebal", estimand = "ATT")
I then want to check the balance using the summary function:
summary(W1)
This originally worked fine but now I get the error message:
Error in rep(" ", spaces1) : invalid 'times' argument
It's the same dataset and the same code, except that I changed some of the covariates. But now, even when I go back to the original covariates, I get the same error. Any ideas would be much appreciated!
I'm the author of WeightIt. That looks like a bug. I'll take a look at it. Are you using the most updated version of WeightIt?
Also, summary() doesn't assess balance. To do that, you need to use cobalt::bal.tab(). summary() summarizes the distribution of the weights, which is less critical than examining balance. That said, bal.tab() displays the effective sample size as well, which is probably the most important statistic produced by summary().
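For example (a sketch; W1 is the weightit object from above):
library(cobalt)

bal.tab(W1, un = TRUE)  # balance statistics before and after weighting
summary(W1)             # distribution of the weights, incl. effective sample size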
I encountered the same error message. It happens when the treatment variable is coded as a factor or character rather than as numeric in weightit.
To make summary() work, you need to code the treatment as 1s and 0s.
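A minimal sketch of the recoding (it assumes a two-level factor; which level counts as treated is data-specific):
# convert a two-level factor treatment to numeric 0/1 before calling weightit()
Suspension1$Conformidad <- as.integer(Suspension1$Conformidad == levels(Suspension1$Conformidad)[2])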
I am working on a negative binomial model using the glmer.nb function within the lme4 package of R. The actual model itself is somewhat complicated, but should be (at least I believe) statistically sound. My question at the moment arises because the model is having difficulty converging and returns this warning:
In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
Model failed to converge with max|grad| = 0.00753068 (tol = 0.001, component 1)
Most of the time, I work within the standard glmer function, and there, when I get this warning, I add this argument to the glmer function:
glmerControl(optimizer="bobyqa", optCtrl = list(maxfun = 100000))
That usually solves the problem. Now, looking into the help file for glmer.nb, it appears the analogous argument for glmer.nb is nb.control. However, when I just change glmerControl to nb.control, R returns an error that it can't find that function. OK, that's fine. From the syntax given in the help file, it looks like nb.control is supposed to be set equal to a list of whatever your desired control arguments are. I have tried various ways of getting my two desired changes, but R just keeps dropping nb.control with the warning "extra argument(s) ‘nb.control’ disregarded".
I have tried searching the vast resource that is the internet for an example of someone that has used the nb.control argument. Most things that I have found (and I haven't been able to find much, hence this question) seem to just recommend the use of the glmerControl argument from glmer. When I put that argument in, it doesn't seem to solve the problem.
Essentially, I am just wondering how to use the nb.control argument to change the optimizer to 'bobyqa' and increase the number of iterations above the default. What is the syntax for using the nb.control argument when it is not the default value of NULL? Any thoughts would be appreciated. Thanks!
It's a little counterintuitive, but you should use control=glmerControl(...) for this, just as you would for the analogous glmer fit - this will get passed through to the inner loop.
Set up data etc:
library(lme4)
dd <- expand.grid(f1 = factor(1:3),
                  f2 = LETTERS[1:2], g = 1:9, rep = 1:15)
dd$y <- simulate(~ f1 + f2 + (1 | g),
                 newparams = list(beta = rep(1, 4),
                                  theta = 1),
                 newdata = dd,
                 seed = 101,
                 family = MASS::negative.binomial(theta = 1.5))[[1]]
Fit "vanilla":
m.nb <- glmer.nb(y ~ f1+f2 + (1|g), data=dd)
Check optimization info:
m.nb@optinfo[c("optimizer","control")]
## $optimizer
## [1] "Nelder_Mead"
##
## $control
## $control$verbose
## [1] 0
Fit with alternative optimizer/etc.:
m.nb2 <- glmer.nb(y ~ f1 + f2 + (1 | g), data = dd,
                  control = glmerControl(optimizer = "bobyqa",
                                         optCtrl = list(maxfun = 1e5)))
Check that we actually changed something:
m.nb2@optinfo[c("optimizer","control")]
## $optimizer
## [1] "bobyqa"
##
## $control
## $control$maxfun
## [1] 1e+05
##
## $control$iprint
## [1] 0
I am doing a multi-touch attribution problem using the coxph() function. It's a large dataset with around 1 million rows, but currently I am running a subset of ~100,000.
I have removed all the missing values from my data. I am getting an error
Error in if (any(infs)) warning(paste("Loglik converged before variable ",
:missing value where TRUE/FALSE needed
Here is the Cox function:
SurvObj <- Surv(Final_Data$NormalizedStartTime, Final_Data$NormalizedEndTime, event = Final_Data$Converted)
model2 <- coxph(SurvObj ~ Clicks + RFR + Impressions + Other + `Site-ID` + `Creative-ID`, data = Final_Data1)
Thanks in Advance for the help :)
[Screenshot: the error and a summary of Final_Data]
The "Loglik converged before variable" line is meant to give information about a dubious fit, in which the log-likelihood converges before a coefficient estimate does.
The full line, when produced correctly, should be something akin to the following:
"Loglik converged before variable 100; beta may be infinite."
And it is produced by the following code in agreg.Rnw:
https://r-forge.r-project.org/scm/viewvc.php/pkg/survival/noweb/agreg.Rnw?diff_format=c&sortdir=down&sortby=author&revision=11529&root=survival&view=markup
if (any(infs))
    warning(paste("Loglik converged before variable ",
                  paste((1:nvar)[infs], collapse=","),
                  "; beta may be infinite. "))
From here we can see that if() expects any(infs) to evaluate to TRUE or FALSE. If infs is NaN, any(infs) returns NA instead, and the if() fails.
The inner part functions like this:
paste("Loglik converged before variable ",
paste((1:1)[NaN],collapse=","),
"; beta may be infinite. ")
[1] "Loglik converged before variable NA ; beta may be infinite. "
so that part of the function would run if it were ever reached. But it is not reached, since evaluating
infs <- NaN
if (any(infs))
    warning(paste("Loglik converged before variable ",
                  paste((1:nvar)[infs], collapse=","),
                  "; beta may be infinite. "))
Error in if (any(infs)) warning(paste("Loglik converged before variable ", :
missing value where TRUE/FALSE needed
the exact error you had. The infs variable is generated earlier, via infs <- abs(agfit$u %*% var), and agfit is produced via .Call(Cagfit4, ...), so the problem is in the underlying C code for the function.
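To see how NaNs in agfit$u propagate into infs, a minimal sketch with made-up values:
# a score vector full of NaNs, as agfit4 can produce for some data
u   <- c(NaN, NaN)
var <- diag(2)
infs <- abs(u %*% var)   # all NaN
any(infs)                # NA, not TRUE/FALSE
# if (any(infs)) ...     # throws: missing value where TRUE/FALSE needed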
For some of my data, both the agfit$u and agfit$imat are NaNs. The $u and $imat are generated from
u2 = SET_VECTOR_ELT(rlist, 1, allocVector(REALSXP, nvar));
u = REAL(u2);
and
PROTECT(imat2 = allocVector(REALSXP, nvar*nvar));
nprotect = 1;
if (NAMED(covar2) > 0) {
    PROTECT(covar2 = duplicate(covar2));
    nprotect++;
}
covar = dmatrix(REAL(covar2), nused, nvar);
imat  = dmatrix(REAL(imat2), nvar, nvar);
respectively, in the agfit4 C code. I am not that good at C, so I cannot say what the problem is on the C side. It could be a bug, or the Cox function may simply not be usable for your data, or both. Nevertheless, something should be done about this, since I've seen others asking about this error too. Unfortunately I am not skilled enough to fix it; I can only point to the problem and holler "hey! somebody else please take care of this" :-).
My possible solutions would be:
1) check if your data is usable with the Cox function at all (e.g. if you have 2000 cases of 0 and 2 cases of 1, the Cox function may not be suitable anyway, and the error is suggesting you find another way for the analysis :-) )
2) modify the code so that the any(infs) evaluation removes the NAs, which then returns FALSE and skips the error:
if (any(infs, na.rm = TRUE)) (this could break the code in other ways, though)
3) fix the C code so that the agfit4 does not produce NaNs in the output object. (only for the skilled, not me)
I had this problem too. It turned out one of my variables had infinite values in it. The problem was solved once I replaced those values.
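A quick sketch for locating and clearing non-finite values before refitting (Final_Data is the data frame from the question; the column handling is generic):
# count Inf/-Inf/NaN per numeric column
num_cols <- vapply(Final_Data, is.numeric, logical(1))
colSums(!sapply(Final_Data[num_cols], is.finite))

# replace non-finite entries with NA, then drop or impute them before coxph()
Final_Data[num_cols] <- lapply(Final_Data[num_cols], function(x) replace(x, !is.finite(x), NA))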