Convergence Criteria in glmmTMB - what are my options?

When using glmmTMB() from the R package {glmmTMB} (see CRAN, with links to the manual & vignettes), I am aware that I have certain options when dealing with the convergence of models. More specifically, there is the control = argument, to which I can pass glmmTMBControl() parameters, as documented in the manual.
Furthermore, one of the vignettes - i.e. Troubleshooting with glmmTMB - talks explicitly about dealing with convergence problems. My point, however, is that to my knowledge, whenever glmmTMBControl() is mentioned, it is in one of these two ways:
glmmTMBControl(optCtrl=list(iter.max=1e3,eval.max=1e3)) i.e. increase the number of iterations
glmmTMBControl(optimizer=optim, optArgs=list(method="BFGS")) i.e. try a different optimizer
Regarding the second one, I am left with the impression that I have multiple options besides the one shown there, since "The optimizer argument can be any optimization (minimizing) function [...]".
Yet I was not able to find any other options I could actually pass as optimizer=, since only the exact example shown above ever seems to be presented. I would be thankful if someone could provide a list.
P.S.: I am trying to play around with glmmTMB's convergence criteria because it often seems to estimate slightly smaller variance components compared to the same model fit via PROC MIXED in SAS.

From ?glmmTMB:
The ‘optimizer’ argument can be any optimization (minimizing) function, provided that:
• the first three arguments, in order, are the starting values, objective function, and gradient function;
• the function also takes a ‘control’ argument;
• the function returns a list with elements (at least) ‘par’, ‘objective’, ‘convergence’ (0 if convergence is successful) and ‘message’ (‘glmmTMB’ automatically handles output from ‘optim()’, by renaming the ‘value’ component to ‘objective’).
The options built into base R that satisfy these requirements (gradient-based minimizers) are nlminb and optim with method="BFGS", "L-BFGS-B", or "CG". You could also look into optimizers provided by optimx or nloptr, but you'd probably have to write a wrapper function to make sure they satisfied the criteria above ...
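To illustrate what such a wrapper might look like, here is a minimal sketch that adapts nloptr's L-BFGS algorithm to the interface described above. The wrapper name nloptr_lbfgs and the choice of stopping options are my own, not part of glmmTMB; treat this as a starting point rather than a tested recipe.

```r
library(nloptr)

# Hypothetical wrapper: first three arguments are the starting values,
# objective, and gradient; a `control` argument is accepted; and the result
# is a list with par, objective, convergence, and message, as required.
nloptr_lbfgs <- function(start, objective, gradient, control = list(), ...) {
  res <- nloptr(
    x0          = start,
    eval_f      = objective,
    eval_grad_f = gradient,
    opts        = list(algorithm = "NLOPT_LD_LBFGS",
                      xtol_rel  = 1e-8,
                      maxeval   = 1000)
  )
  list(par         = res$solution,
       objective   = res$objective,
       convergence = if (res$status > 0) 0 else res$status,  # >0 = success in nloptr
       message     = res$message)
}

# usage (sketch):
# glmmTMB(y ~ x + (1 | g), data = dat,
#         control = glmmTMBControl(optimizer = nloptr_lbfgs))
```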

Related

Can I use automatic differentiation for non-differentiable functions?

I am testing the performance of different solvers on minimizing an objective function derived from the simulated method of moments. Given that my objective function is not differentiable, I wonder whether automatic differentiation would work in this case. I tried my best to read some introductions to the method, but I couldn't figure it out.
I am actually trying to use Ipopt+JuMP in Julia for this test. Previously, I tested it using BlackBoxOptim in Julia. I would also appreciate it if you could provide some insights on the optimization of non-differentiable functions in Julia.
It seems that I was not clear about "non-differentiable". Let me give you an example. Consider the following objective function: X is the dataset, B are unobserved random errors that will be integrated out, and \theta are the parameters. However, A is discrete and therefore not differentiable.
I'm not exactly an expert on optimization, but: it depends on what you mean by "nondifferentiable".
For many mathematical functions that are used, "nondifferentiable" will just mean "not everywhere differentiable" -- but that's still "differentiable almost everywhere, except on countably many points" (e.g., abs, relu). These functions are not a problem at all -- you can just choose any subgradient and apply any normal gradient method. That's what basically all AD systems for machine learning do. The case of non-singleton subgradients occurs with low probability anyway. An alternative for certain forms of convex objectives are proximal gradient methods, which "smooth" the objective in an efficient way that preserves optima (cf. ProximalOperators.jl).
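As a concrete illustration of the subgradient idea, here is a toy sketch in R (the function and step size are my own choices): f(x) = |x - 2| has a kink at x = 2, but sign(x - 2) is a valid subgradient everywhere (picking 0 at the kink), and plain subgradient descent still works.

```r
# f(x) = |x - 2| is not differentiable at x = 2; sign(x - 2) is a
# subgradient everywhere, so a fixed-step subgradient method still converges
# to within one step size of the minimizer.
f_subgrad <- function(x) sign(x - 2)

x <- 10
for (k in 1:500) x <- x - 0.1 * f_subgrad(x)
x  # within 0.1 of the minimizer at 2
```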
Then there are those functions that seem like they can't be differentiated at all, since they seem "combinatoric" or discrete, but are in fact piecewise differentiable (if seen from the correct point of view). This includes sorting and ranking. But you have to find them, and describing and implementing the derivative is rather complicated. Whether such functions are supported by an AD system depends on how sophisticated its "standard library" is. Some variants of this, like "permute", can just fall out of AD over control structures, while more complex ones require the primitive adjoints to be manually defined.
For certain kinds of problems, though, we just work in an intrinsically discrete space -- like, integer parameters of some probability distributions. In these cases, differentiation makes no sense, and hence AD libraries define their primitives not to work on these parameters. Possible alternatives are to use (mixed) integer programming, approximations, search, and model selection. This case also occurs for problems where the optimized space itself depends on the parameter in question, like the second argument of fill. We also have things like the ℓ0 "norm" or the rank of a matrix, for which there exist well-known continuous relaxations, but that's outside the scope of AD.
(In the specific case of MCMC for discrete or dimensional parameters, there's other ways to deal with that, like combining HMC with other MC methods in a Gibbs sampler, or using a nonparametric model instead. Other tricks are possible for VI.)
That being said, you will rarely encounter complicated nowhere-differentiable continuous functions in optimization. They are complicated even to describe, and unlikely to arise in the kind of math we use for modelling.

constrained optimization without gradient

This is a more general question, somewhat independent of data, so I do not have an MWE.
I often have functions fn(.) that implement algorithms that are not differentiable but that I want to optimize. I usually use optim(.) with its standard method, which works fine for me in terms of speed and results.
However, I now have a problem that requires me to set bounds on one of the several parameters of fn. From what I understand, optim(method="L-BFGS-B",...) allows me to set limits to parameters but also requires a gradient. Because fn(.) is not a mathematical function but an algorithm, I suspect it does not have a gradient that I could derive through differentiation. This leads me to ask whether there is a way of performing constrained optimization in R in a way that does not require me to give a gradient.
I have looked at some sources, e.g. John C. Nash's texts on this topic but as far as I understand them, they concern mostly differentiable functions where gradients can be supplied.
Summarizing the comments so far (which are all things I would have said myself):
you can use method="L-BFGS-B" without providing explicit gradients (the gr argument is optional); in that case, R will approximate the derivatives by finite differences (@G.Grothendieck). This is the simplest solution because it works out of the box: you can try it and see if it works for you. However:
L-BFGS-B is probably the finickiest of the methods provided by optim() (e.g. it can't handle the case where a trial set of parameters evaluates to NA)
finite-difference approximations are relatively slow and numerically unstable (but fine for simple problems)
for simple cases you can fit the parameter on a transformed scale, e.g. if b is a parameter that must be positive, you can use log_b as the parameter (and transform it via b <- exp(log_b) inside your objective function) (@SamMason). But:
there isn't always a simple transformation that will achieve the constraint you want
if the optimal solution is on the boundary, transforming will cause problems
there are a variety of derivative-free optimizers with constraints (typically "box constraints", i.e. independent lower and/or upper bounds on one or more parameters) (@ErwinKalvelagen): dfoptim has a few; I have used the nloptr package (and its BOBYQA optimizer) extensively; minqa has some as well. This is the solution I would recommend.
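A minimal sketch of the recommended approach, using nmkb() from dfoptim (a bounded Nelder-Mead variant); the toy objective and bounds here are illustrative, not from the question:

```r
library(dfoptim)

# a toy objective that is not differentiable at p[1] = 1
fn <- function(p) abs(p[1] - 1) + (p[2] - 2)^2

# box-constrained, derivative-free optimization: no gradient needed at all
# (note: nmkb requires starting values strictly inside the bounds)
res <- nmkb(par = c(0.5, 0.5), fn = fn,
            lower = c(0, 0), upper = c(5, 5))
res$par  # near c(1, 2)
```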

Does R support switching between optimizers like STATA does?

I need to implement the model shown here:
http://www.ssc.upenn.edu/~fdiebold/papers/paper55/DRAfinal.pdf
The model estimation step on p.315 notes that:
"We maximize the likelihood by iterating the Marquardt and Berndt–Hall–Hall–Hausman algorithms, using numerical derivatives, optimal stepsize, and a convergence criterion of 10^-6 for the change in the norm of the parameter vector from one iteration to the next."
Now I know that Stata supports switching between optimizers,
http://www.stata.com/manuals13/rmaximize.pdf
see bottom of p2.
Is there an R package or MATLAB function that can do the same thing?
Specifically I need to be able to switch between BHHH and Levenberg-Marquardt.
Kind Regards
Baz
For R, check out the CRAN Task View on Optimization. Searching that page, it looks like BHHH and Marquardt are available in separate packages (maxLik and minpack.lm, respectively). You could write your own code to handle switching between them.
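One way to mimic Stata's technique() switching is to chain fits, passing one optimizer's estimates as the next one's starting values. A sketch with maxLik (note: minpack.lm's Levenberg-Marquardt targets least-squares problems rather than general likelihoods, so this sketch hands off between maxLik's own methods, BHHH then Newton-Raphson; the toy normal likelihood is my own example):

```r
library(maxLik)

set.seed(1)
x <- rnorm(100, mean = 5, sd = 2)

# per-observation log-likelihood: BHHH needs the observation-level vector
loglik <- function(theta) dnorm(x, mean = theta[1], sd = exp(theta[2]), log = TRUE)

fit1 <- maxLik(loglik, start = c(mu = 0, logsd = 0), method = "BHHH")
fit2 <- maxLik(loglik, start = coef(fit1), method = "NR")  # switch optimizer
coef(fit2)  # mu roughly 5, exp(logsd) roughly 2
```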

Difference between solnp and gosolnp functions in R for Non-Linear optimization problems

What is the main difference between the two functions? The R help manual says that gosolnp helps to set the initial parameters correctly. Is there any difference otherwise? Also, if that is the case, how do we determine the correct distribution type for the parameter space?
In my problem, the initial set of parameters is difficult to determine, which is why the optimization problem is used. However, I have idea about the parameter upper and lower bounds.
gosolnp is an extension of solnp, a wrapper allowing for multiple restarts. Simply put, it runs solnp several times (controllable via n.restarts) to avoid getting stuck in local minima. If your function is known to have no local minima (e.g., it is convex, which can be derived analytically), use solnp to save time. Otherwise, use gosolnp. If you know any additional information (for instance, a region where the global minimum is supposed to lie), you may use it for finer control of the starting-parameter distribution: see the parameters distr and distr.opt.
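A minimal sketch of the multi-start usage, matching the questioner's situation (bounds known, starting values not); the Himmelblau test function is my own choice and has several minima, which is exactly where restarts help:

```r
library(Rsolnp)

# Himmelblau's function: four minima, all with objective value 0
fn <- function(p) (p[1]^2 + p[2] - 11)^2 + (p[1] + p[2]^2 - 7)^2

# no starting values supplied: gosolnp samples them from within [LB, UB]
res <- gosolnp(fun = fn, LB = c(-5, -5), UB = c(5, 5),
               n.restarts = 3, n.sim = 200)
res$pars    # one of Himmelblau's minima
res$values  # objective trace; final value near 0
```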

How to use a glmnet model as a node model for the mob function (of the R package party)?

I am using the mob function of the R package party. My question concerns the model parameter of this function.
How can I define a StatModel object (from the package modeltools) - let's call it glmnetModel - so that the node models of the mob estimation are glmnet models (more precisely, I would like to use the cv.glmnet function as the main estimation function in the fit slot of glmnetModel)?
One difficulty is to correctly extend the reweight function (and maybe also the estfun and deviance functions?), as suggested here (section 2.1).
Does anybody have an idea?
NB: I have seen some extensions (for SVM: here) but I am not able to use them correctly.
Thank you very much!
Dominique
I'm not sure whether the inferential framework of the parameter instability tests in the MOB algorithm holds for either glmnet or svm.
The assumption is that the model's objective function (e.g., residual sum of squares or log-likelihood) is additive in the observations and that the corresponding first order conditions are consequently additive as well. Then, under certain weak regularity conditions, a central limit theorem holds for the parameter estimates. This can be extended to a functional central limit theorem upon which the parameter instability tests in MOB are based.
If these assumptions do not hold, the p-values may not be valid and hence may lead to too many or too few splits or a biased splitting variable selection. Whether or not this happens in practice for the models you are interested in, I don't know. You would have to check this - either theoretically (which may be hard) or in simulation studies.
Technically, the reimplementation of mob() in the partykit package makes it much easier to plug in new models. A lot less glue code (without S4 classes) is necessary now. See vignette("mob", package = "partykit") for details.
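To give a flavor of that plug-in interface, here is a sketch along the lines of vignette("mob", package = "partykit"): mob() takes a user-supplied fit function. A glmnet-based node model would follow the same pattern, but, as noted above, would additionally need estfun() support for the parameter instability tests, which glmnet does not provide out of the box.

```r
library(partykit)

# fit function with the signature mob() expects; returning a glm object
# works because coef(), logLik() and sandwich::estfun() methods exist for it
logit_fit <- function(y, x, start = NULL, weights = NULL, offset = NULL, ...) {
  glm(y ~ 0 + x, family = binomial, start = start, ...)
}

# usage (sketch, assuming a data frame `dat` with binary response y,
# regressor x1, and partitioning variables z1, z2):
# tree <- mob(y ~ x1 | z1 + z2, data = dat, fit = logit_fit)
```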
