nlm: number of function evaluations per iteration? (R)

I use nlm to maximize a likelihood in R. I would like to predict the number of likelihood evaluations and abort if the task is likely to take too long. nlm returns the number of 'iterations' (typically 10-20), and I take it that each iteration involves one numerical evaluation of the Hessian. The time per iteration (presumably dominated by the Hessian) depends on the number of parameters. So I'd like to know: what is the general relationship between the number of parameters and the number of function evaluations per iteration in nlm?

The question is very general, and so is my answer.
From the nlm documentation:
If the function value has an attribute called gradient or both gradient and hessian attributes, these will be used in the calculation of updated parameter values. Otherwise, numerical derivatives are used. deriv returns a function with suitable gradient attribute and optionally a hessian attribute.
If you provide the gradient and the Hessian for the function being minimized, each iteration involves two function evaluations. If you don't, the gradient and the Hessian are calculated numerically. The underlying source code ships with the R distribution. As far as I understand, the arguments of R's nlm function only affect the number of iterations until convergence, but have no influence on the way gradients are evaluated numerically.
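As a rough, hedged illustration (the toy quadratic objective below is mine, not from the question), you can count evaluations yourself by incrementing a counter inside the objective. In general a finite-difference gradient needs roughly one extra evaluation per parameter, so counting directly is the surest way to see how the per-iteration cost scales for your own likelihood:

evals <- 0
f_plain <- function(x) {
  evals <<- evals + 1                    # count every call nlm makes
  sum((x - c(1, 2, 3))^2)
}
res_plain <- nlm(f_plain, p = c(0, 0, 0))
c(evaluations = evals, iterations = res_plain$iterations)

evals <- 0
f_analytic <- function(x) {
  evals <<- evals + 1
  val <- sum((x - c(1, 2, 3))^2)
  attr(val, "gradient") <- 2 * (x - c(1, 2, 3))   # analytic gradient
  attr(val, "hessian")  <- diag(2, 3)             # analytic Hessian (3 x 3)
  val
}
res_analytic <- nlm(f_analytic, p = c(0, 0, 0))
c(evaluations = evals, iterations = res_analytic$iterations)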

Related

Optimizing over 3 dimensional piece-wise constant function

I'm working on a simulation project with a 3-dimensional piece-wise constant function, and I'm trying to find the inputs that maximize the output. Using optim() in R with the Nelder-Mead or SANN algorithms seems best (they don't require the function to be differentiable), but I'm finding that optim() ends up returning my starting value exactly. This starting value was obtained using a grid search, so it's likely reasonably good, but I'd be surprised if it was the exact peak.
I suspect that optim() is not testing points far enough out from the initial guess, leading to a situation where all tested points give the same output.
Is this a reasonable concern?
How can I tweak the breadth of values that optim() is testing as it searches?
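As a hedged sketch of two knobs worth trying (f and start below stand in for your piecewise-constant objective and the grid-search point, and all numeric values are guesses): optim() performs Nelder-Mead in the rescaled space par/parscale, so choosing parscale on the order of the width of your plateaus can help the simplex step over them; for SANN, the breadth of the search is set directly by the candidate-generating function passed as gr.

fit_nm <- optim(start, f, method = "Nelder-Mead",
                control = list(fnscale = -1,                 # maximize
                               parscale = c(100, 100, 100),  # scale of each parameter
                               maxit = 5000))

propose <- function(par) par + rnorm(length(par), sd = 10)   # wide Gaussian proposals
fit_sann <- optim(start, f, gr = propose, method = "SANN",
                  control = list(fnscale = -1, maxit = 20000))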

Why does IPOPT evaluate objective function despite breaching constraints?

I'm using IPOPT within Julia. My objective function will throw an error for certain parameter values (specifically, though I assume this doesn't matter, it involves a Cholesky decomposition of a covariance matrix and so requires that the covariance matrix be positive-definite). As such, I non-linearly constrain the parameters so that they cannot produce an error. Despite this constraint, IPOPT still insists on evaluating the objective function at parameters which cause my objective function to throw an error. This causes my script to crash, resulting in misery and pain.
I'm interested why, in general, IPOPT would evaluate a function at parameters that breach the constraints. (I've ensured that it is indeed checking the constraints before evaluating the function.) If possible, I would like to know how I can stop it doing this.
I have set IPOPT's 'bound_relax_factor' parameter to zero; this doesn't help. I understand I could ask the objective function to return NaN instead of throwing an error, but when I do IPOPT seems to get even more confused and does not end up converging. Poor thing.
I'm happy to provide some example code if it would help.
Many thanks in advance :):)
EDIT:
A commenter suggested I have my objective function return a bad objective value when the constraints are violated. Unfortunately, this is what happens when I do: Ipopt goes from a point evaluating at 2.0016 x 10^2 straight to a point evaluating at 10^10, and I worry there's something quite fundamental about IPOPT I'm not understanding.
Setting 'constr_viol_tol' and 'acceptable_constr_viol_tol' to their minimal values doesn't noticeably affect the optimisation, nor does 'over-constraining' my parameters (i.e. ensuring they cannot be anywhere near an unacceptable value).
The only constraints that Ipopt is guaranteed to satisfy at all intermediate iterations are simple upper and lower bounds on variables. Any other linear or nonlinear equality or inequality constraint will not necessarily be satisfied until the solver has finished converging at the final iteration (if it can get to a point that satisfies termination conditions). Guaranteeing that intermediate iterates are always feasible in the presence of arbitrary non-convex equality and inequality constraints is not tractable. The Newton step direction is based on local first and second order derivative information, so will be an approximation and may leave the space of feasible points if the problem has nontrivial curvature. Think about the space of points where x * y == constant as an example.
You should reformulate your problem to avoid needing to evaluate objective or constraint functions at invalid points. For example, instead of taking the Cholesky factorization of a covariance matrix constructed from your data, introduce a unit lower triangular matrix L and a diagonal matrix D. Impose lower bound constraints D[i, i] >= 0 for all i in 1:size(D,1), and nonlinear equality constraints L * D * L' == A where A is your covariance matrix. Then use L * sqrtm(D) anywhere you need to operate on the Cholesky factorization (this is a possibly semidefinite factorization, so more of a modified Cholesky representation than the classical strictly positive definite L * L' factorization).
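Written out only as a rough sketch (with A the covariance matrix and n its dimension), the reformulation above amounts to solving

\begin{aligned}
\min_{x,\,L,\,D}\quad & f\big(x,\ L D^{1/2}\big) \\
\text{s.t.}\quad & L D L^{\top} = A, \\
& D_{ii} \ge 0, \quad i = 1,\dots,n,
\end{aligned}

with L unit lower triangular and D diagonal, so that L D^{1/2} plays the role of the Cholesky factor wherever the original objective used it.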
Note that if your problem is convex, then there is likely a specialized formulation that a conic solver will be more efficient at solving than a general-purpose nonlinear solver like Ipopt.

How can optimization be used as a solver?

In a question on Cross Validated (How to simulate censored data), I saw that the optim function was used as a kind of solver instead of as an optimizer. Here is an example:
optim(1, fn=function(scl){(pweibull(.88, shape=.5, scale=scl, lower.tail=F)-.15)^2})
# $par
# [1] 0.2445312
# ...
pweibull(.88, shape=.5, scale=0.2445312, lower.tail=F)
# [1] 0.1500135
I have found a tutorial on optim, but I am still not able to figure out how to use optim to work as a solver. I have several questions:
What is the first parameter (i.e., the value 1 being passed in)?
What is the function that is passed in?
I can understand that it is taking the Weibull probability distribution and subtracting 0.15, but why are we squaring the result?
I believe you are referring to my answer. Let's walk through a few points:
The OP (of that question) wanted to generate (pseudo-)random data from a Weibull distribution with specified shape and scale parameters, where censoring would be applied to all data past a certain censoring time, and end up with a prespecified censoring rate. The problem is that once you have specified any three of those, the fourth is necessarily fixed. You cannot specify all four simultaneously unless you are very lucky and the values you specify happen to fit together perfectly. As it happened, the OP was not so lucky: the four preferred values were mutually inconsistent, so it was impossible to have them all. At that point, you can decide to specify any three and solve for the last. The code I presented gave examples of how to do that.
As noted in the documentation for ?optim, the first argument is par "[i]nitial values for the parameters to be optimized over".
Very loosely, the way the optimization routine works is that it calculates an output value given a function and an input value. Then it 'looks around' to see if moving to a different input value would lead to a better output value. If that appears to be the case, it moves in that direction and starts the process again. (It stops when it does not appear that moving in either direction will yield a better output value.)
The point is that it has to start somewhere, and the user is obliged to specify that value. In each case, I started with the OP's preferred value (although really I could have started almost anywhere).
The function that I passed in uses ?pweibull, the cumulative distribution function (CDF) of the Weibull distribution. It takes a quantile (X value) as its input and returns the proportion of the distribution that lies below that point. Because the OP wanted to censor the most extreme 15% of the distribution, I had pweibull return the proportion that lies above that point instead (that is the lower.tail=F part). I then subtracted .15 from the result.
Thus, the ideal output (from my point of view) would be 0. However, it would be possible to get values below zero by finding a scale parameter that makes the output of pweibull less than .15, and since optim (or really most any optimizer) finds the input value that minimizes the output, that is what it would have done. To keep that from happening, I squared the difference. That means that when the optimizer went 'too far' and found a scale parameter that yielded an output of .05 from pweibull, so that the difference was -.10 (i.e., < 0), the squaring turned the ultimate output into +.01 (i.e., > 0, or worse). This pushes the optimizer back towards the scale parameter for which pweibull returns .15, where the squared difference is (.15 - .15)^2 = 0.
In general, the distinction you are making between an "optimizer" and a "solver" is opaque to me. They seem like two different views of the same elephant.
Another possible confusion here involves optimization vs. regression. Optimization is simply about finding the input value(s) that minimize (or maximize) the output of a function. In regression, we conceptualize data as draws from a data-generating process that is a stochastic function. Given a set of realized values and a functional form, we use optimization techniques to estimate the parameters of the function, thus extracting the data-generating process from noisy instances. Part of regression analysis therefore involves optimization, but other aspects of regression are less concerned with optimization, and optimization itself is much larger than regression. For example, the functions optimized in my answer to the other question are deterministic, and there were no "data" being analyzed.
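For what it's worth, in one dimension the "solver" reading can be made literal with a root finder: the same scale parameter comes out of uniroot applied to the un-squared difference (the search interval below is just a guess wide enough to bracket the root):

uniroot(function(scl) pweibull(.88, shape = .5, scale = scl, lower.tail = FALSE) - .15,
        interval = c(1e-6, 10))$root
# about 0.2445, agreeing with the optim result above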

Supplying the Hessian to optim()

I have an objective function (a log likelihood), which I want to maximize using R for a vector of inputs (parameters). I have the gradient of the function (i.e. the score vector), and I also happen to know the Hessian of the function.
In Matlab, I can maximize the function easily, and performance is drastically improved by supplying both the gradient and the Hessian to the minimization routine, using optimset('GradObj', 'on') and optimset('Hessian', 'on'). In particular, the latter makes a huge difference in this case.
However, I want to do this in R. In optim, I can supply the gradient, but as far as I can tell I can only ask for the Hessian to be returned, not provide it.
My question: is there a straightforward way of including the Hessian in optimization problems in R, as there is in Matlab?
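One possible route, as a sketch (negloglik, negscore and neghess below stand in for your negated log likelihood, score and Hessian functions, and start for the initial parameter vector): base R's nlminb() accepts both a gradient and a hessian function directly, and nlm() will likewise use 'gradient' and 'hessian' attributes attached to the value returned by the objective. Both routines minimize, hence the negation.

fit <- nlminb(start, objective = negloglik,
              gradient  = negscore,   # function returning the gradient vector
              hessian   = neghess)    # function returning the Hessian matrix
fit$par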

R optim - log likelihood estimator and gradient in the same function

I have written a function for performing maximum simulated likelihood estimation in R, which works quite well.
However, the problem is that optim does not use a single call to obtain both the likelihood value and the gradient for a given parameter vector, the way MATLAB's fminunc optimizer does. Thus, every time optim wants to update the gradient, the simulation for the given parameter vector has to be repeated. In the end, optim called the loglik function about 100 times to update the parameters and an additional 50 times to calculate the gradient.
I am wondering if there is an elegant way to avoid the 50 additional simulation runs, for example by storing the estimated likelihood value and gradient at each step. Then, before the likelihood function is called the next time, one could check whether the information for a given parameter vector is already available. This could be done by interposing an additional function between optim and the loglik function, but that seems clumsy.
Any good ideas?
Cheers,
Ben
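One common workaround, sketched below (simulate_ll() is a hypothetical wrapper around your simulation that returns a list with $value and $gradient, and start is your starting vector): a small closure caches the result for the most recent parameter vector, so fn and gr share one simulation per point instead of running two.

make_cached <- function(simulate_ll) {
  last_par <- NULL
  last_res <- NULL
  refresh <- function(par) {
    if (is.null(last_par) || !identical(par, last_par)) {
      last_res <<- simulate_ll(par)   # one simulation yields value and gradient
      last_par <<- par
    }
    last_res
  }
  list(fn = function(par) refresh(par)$value,
       gr = function(par) refresh(par)$gradient)
}

cached <- make_cached(simulate_ll)
res <- optim(start, fn = cached$fn, gr = cached$gr, method = "BFGS",
             control = list(fnscale = -1))   # fnscale = -1 to maximize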
