Adjoint method with black boxes - openmdao

If multiple solvers are attached to the model as external code components (complete black boxes), is there a way to implement the adjoint method?
Many of the uses of OpenMDAO that I have seen so far seem to involve black-box flow solvers, structural solvers, etc.
But I don't see how one can implement an adjoint method without touching the source code.
Since implementing a complex-step method is also unlikely, is finite difference the only option if one wants to use a gradient-based optimizer?

You can implement a semi-analytic adjoint method, even if you don't have any analytic derivatives in your model. Assume you have a single black-box code that has an internal solver to converge some implicit equations before computing a set of output values.
You wrap the code in OpenMDAO as an ImplicitComponent, using a custom file wrapper. (Note: you can't use the ExternalCode component because it is a sub-class of ExplicitComponent.)
In your new wrapper, you will implement two methods in your custom ImplicitComponent:
solve_nonlinear
apply_nonlinear
The first method is simply the normal run of your code. The second method, however, takes the input values and given output values and computes a residual.
Then, in order to get partial derivatives, OpenMDAO will finite-difference the apply_nonlinear method to compute dResidual_dInputs and dResidual_dOutputs, which it can then use to compute total derivatives in reverse (i.e. adjoint) mode.
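For reference, the relation being exploited here is the standard adjoint formula: with converged residuals R(x, u) = 0 and an output f(x, u),

df/dx = ∂f/∂x - ψᵀ ∂R/∂x,   where (∂R/∂u)ᵀ ψ = (∂f/∂u)ᵀ

so the finite-differenced ∂R/∂x and ∂R/∂u are all OpenMDAO needs to assemble total derivatives.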
This approach is typically both more efficient and more accurate than wrapping the code as an ExplicitComponent. First of all, the apply_nonlinear method (a residual evaluation) should be very significantly less expensive than the solve_nonlinear method, so the finite difference should be much cheaper. Perhaps more importantly, though, finite-differencing across a residual evaluation is much more accurate than finite-differencing around a full solver convergence loop (see this technical paper for an example of why that is).
There are a few things to note in this approach. First, there may be a large number of residual equations and implicit variables, so you may be doing many more finite-difference calls than if you wrapped your code as an ExplicitComponent. However, since the residual evaluation is much cheaper, this should be acceptable. Second, not all codes have the ability to return residual evaluations, so this might require some modification of the black box. Without the residual evaluation, you can't use an adjoint method.
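A minimal sketch of such a wrapper (run_external_solver and eval_external_residuals are hypothetical stand-ins for calls into your black-box code):

import openmdao.api as om

class BlackBoxImplicit(om.ImplicitComponent):

    def setup(self):
        self.add_input('x', shape=3)       # design inputs
        self.add_output('u', shape=10)     # the code's internal implicit states
        # finite-difference the residual evaluation to get dR/dx and dR/du
        self.declare_partials('u', ['x', 'u'], method='fd')

    def solve_nonlinear(self, inputs, outputs):
        # full run of the external code, including its internal convergence loop
        outputs['u'] = run_external_solver(inputs['x'])  # hypothetical helper

    def apply_nonlinear(self, inputs, outputs, residuals):
        # one residual evaluation at the given inputs/outputs, no convergence loop
        residuals['u'] = eval_external_residuals(inputs['x'], outputs['u'])  # hypothetical helper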
One other general approach is to mix codes with analytic derivatives and finite-difference partial derivatives. Let's say you have an expensive CFD analysis that does have an existing adjoint, and you want to couple in an inexpensive black-box code. You can finite-difference the black-box code to compute its partials, and then OpenMDAO will combine those with the existing adjoint to compute semi-analytic total derivatives.
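In OpenMDAO, that mixing is just a per-component choice in declare_partials. A minimal sketch (the component and variable names are made up):

import openmdao.api as om

class CheapBlackBox(om.ExplicitComponent):

    def setup(self):
        self.add_input('y', shape=5)
        self.add_output('f')
        # finite-difference only this component's partials; the expensive CFD
        # component elsewhere in the model supplies its own adjoint/analytic ones
        self.declare_partials('f', 'y', method='fd')

    def compute(self, inputs, outputs):
        outputs['f'] = (inputs['y'] ** 2).sum()  # stand-in for the real analysis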

Should the linear solver be converging when Coloring is being computed?

I had to add some circular dependencies to my model, and thus added a NonlinearBlockGS and a LinearBlockGS to the Group with the circular dependency. I get messages like this
LN: LNBGSSolver 'LN: LNBGS' on system 'XXX' failed to converge in 10
iterations.
in the phase where it's finding the Coloring of the problem. There is a Dymos trajectory as part of the problem, but the circular dependency is not in the Trajectory group; it's upstream. However, it converges very easily when actually solving the problem. The number of FWD solves is the same as it was before, and everything seems to work fine. Should I be worried about anything?
The way our total derivative coloring works is that we replace partial derivatives with random numbers and then solve the linear system, so the linear solver should be converging. Now, whether or not it should converge with LNBGS in 10 iterations... probably not.
It's hard to speak definitively when putting random numbers into a matrix to invert it, but generally speaking it should remain invertible (though we can't promise). That does not mean that it will remain easily invertible. How close does the linear residual get during the coloring? It is decreasing, but slowly. Would more iterations let it get there?
If your problem is working well, I don't think you need to freak out about this. If you would like it to converge better, it won't hurt anything and might give you better coloring. You can increase the iprint of that solver to get more information on the convergence history.
Another option, if your system is small enough, is to try using the DirectSolver instead of LNBGS. For most models with fewer than 10,000 variables, a DirectSolver will be faster overall than LNBGS. There is a nice symmetry to using LNBGS with NLBGS... but while the nonlinear solver tends to be a good choice (i.e. fast and stable) for cyclic dependencies, the same can't be said for its linear counterpart.
So my go-to combination is NLBGS and the DirectSolver. You can't always use the DirectSolver: if you have distributed components in your model, or components that use the matrix-free derivative APIs (apply_linear, compute_jacvec_product), then LNBGS is a good option. But if everything is explicit components with compute_partials or implicit components that provide partials in the linearize method, then I suggest using the DirectSolver as your first option.
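That go-to combination looks like this in a model (a minimal sketch; 'cycle' is a made-up group holding the coupled components):

import openmdao.api as om

prob = om.Problem()
cycle = prob.model.add_subsystem('cycle', om.Group())
# ... add the mutually dependent components to `cycle` here ...
cycle.nonlinear_solver = om.NonlinearBlockGS(maxiter=50, iprint=2)  # fast, stable for cycles
cycle.linear_solver = om.DirectSolver()  # direct factorization for the derivative solves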
I think you may have discovered a coloring performance issue in OpenMDAO. When we compute coloring, internally we replace the component partials with random arrays matching the declared sparsity. Since we're not trying to find an actual solution when we compute coloring, we probably don't need to iterate more than once in any given system. And we shouldn't be generating convergence warnings when computing the coloring. I don't think you need to be worried in this case. I'll put a story in our bug tracker to look into this.

Can I use automatic differentiation for non-differentiable functions?

I am testing the performance of different solvers on minimizing an objective function derived from the simulated method of moments. Given that my objective function is not differentiable, I wonder whether automatic differentiation would work in this case. I tried my best to read some introductions to this method, but I couldn't figure it out.
I am actually trying to use Ipopt + JuMP in Julia for this test. Previously, I tested it using BlackBoxOptim in Julia. I would also appreciate any insights on optimization of non-differentiable functions in Julia.
It seems that I was not clear about "non-differentiable". Let me give you an example. Consider the following objective function: X is the dataset, B is unobserved random errors which will be integrated out, and θ are the parameters. However, A is discrete and therefore not differentiable.
I'm not exactly an expert on optimization, but: it depends on what you mean by "nondifferentiable".
For many mathematical functions that are used, "nondifferentiable" will just mean "not everywhere differentiable" -- but that's still "differentiable almost everywhere, except on countably many points" (e.g., abs, relu). These functions are not a problem at all -- you can just choose any subgradient and apply any normal gradient method. That's what basically all AD systems for machine learning do. The case of a non-singleton subdifferential occurs with low probability anyway. An alternative for certain forms of convex objectives are proximal gradient methods, which "smooth" the objective in an efficient way that preserves optima (cf. ProximalOperators.jl).
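As a quick illustration (in Python with JAX rather than Julia, purely for convenience), AD at a kink simply hands back one valid subgradient:

import jax
import jax.numpy as jnp

# abs and relu are not differentiable at 0; grad returns one valid
# subgradient there, and which one you get is a library convention.
print(jax.grad(jnp.abs)(0.0))
print(jax.grad(jax.nn.relu)(0.0))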
Then there are those functions that seem like they can't be differentiated at all, since they seem "combinatoric" or discrete, but are in fact piecewise differentiable (if seen from the correct point of view). This includes sorting and ranking. But you have to find them, and describing and implementing the derivative is rather complicated. Whether such functions are supported by an AD system depends on how sophisticated its "standard library" is. Some variants of this, like "permute", can just fall out of AD over control structures, while more complex ones require the primitive adjoints to be manually defined.
For certain kinds of problems, though, we just work in an intrinsically discrete space -- like integer parameters of some probability distributions. In these cases, differentiation makes no sense, and hence AD libraries define their primitives not to work on these parameters. Possible alternatives are (mixed) integer programming, approximations, search, and model selection. This case also occurs for problems where the optimized space itself depends on the parameter in question, like the second argument of fill. We also have things like the ℓ0 "norm" or the rank of a matrix, for which well-known continuous relaxations exist, but those are outside the scope of AD.
(In the specific case of MCMC for discrete or dimensional parameters, there's other ways to deal with that, like combining HMC with other MC methods in a Gibbs sampler, or using a nonparametric model instead. Other tricks are possible for VI.)
That being said, you will rarely encounter complicated, nowhere-differentiable continuous functions in optimization. They are already complicated to describe and just unlikely to arise in the kind of math we use for modelling.

Constrained optimization without a gradient

This is a more general question, somewhat independent of data, so I do not have an MWE.
I often have functions fn(.) that implement algorithms that are not differentiable but that I want to optimize. I usually use optim(.) with its standard method, which works fine for me in terms of speed and results.
However, I now have a problem that requires me to set bounds on one of the several parameters of fn. From what I understand, optim(method="L-BFGS-B",...) allows me to set limits on parameters, but it also requires a gradient. Because fn(.) is not a mathematical function but an algorithm, I suspect it does not have a gradient that I could derive through differentiation. This leads me to ask whether there is a way of performing constrained optimization in R that does not require me to supply a gradient.
I have looked at some sources, e.g. John C. Nash's texts on this topic, but as far as I understand them, they mostly concern differentiable functions for which gradients can be supplied.
Summarizing the comments so far (which are all things I would have said myself):
you can use method="L-BFGS-B" without providing explicit gradients (the gr argument is optional); in that case, R will approximate the derivatives by finite differencing (#G.Grothendieck). This is the simplest solution, because it works "out of the box": you can try it and see if it works for you. However:
L-BFGS-B is probably the finickiest of the methods provided by optim() (e.g. it can't handle the case where a trial set of parameters evaluates to NA)
finite-difference approximations are relatively slow and numerically unstable (but fine for simple problems)
for simple cases you can fit the parameter on a transformed scale, e.g. if b is a parameter that must be positive, you can use log_b as the parameter (and transform it via b <- exp(log_b) inside your objective function; see the sketch after this list). (#SamMason) But:
there isn't always a simple transformation that will achieve the constraint you want
if the optimal solution is on the boundary, transforming will cause problems
there are a variety of derivative-free optimizers with constraints (typically "box constraints", i.e. independent lower and/or upper bounds on one or more parameters) (#ErwinKalvelagen): dfoptim has a few; I have used the nloptr package (and its BOBYQA optimizer) extensively; minqa has some as well. This is the solution I would recommend.
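The transformed-scale trick from above, sketched in Python with scipy purely for illustration (the R version with optim and a derivative-free method is analogous; the toy objective is made up):

import numpy as np
from scipy.optimize import minimize

def objective(params):
    b = np.exp(params[0])    # optimize log_b, so b stays strictly positive
    return (b - 2.0) ** 2    # toy objective with optimum at b = 2

res = minimize(objective, x0=[0.0], method="Nelder-Mead")  # derivative-free
print(np.exp(res.x[0]))  # recovers b ~= 2.0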

Understanding the complex-step in a physical sense

I think I understand what complex step is doing numerically/algorithmically.
But some questions still linger. The first two might have the same answer.
1- I replaced the partial derivative calculations of the 'Betz_limit' example with complex step and removed the analytical gradients. Looking at the recorded design_var evolution, none of the values are complex. Aren't they supposed to show up somehow as a+bi? Or does it always step in the real space?
2- I'm trying to picture 'cs' in a physical context. For example: a design variable of beam length (m), an objective of mass (kg), and a constraint of loads (Nm). I could be using an explicit component to calculate these (pure Python) or an external code component (pure Fortran). Numerically they can all handle complex numbers, but obviously the mass is a real value. So when we say "capable of handling complex numbers", is it just a matter of handling a+bi, where the actual mass is always 'a' and b is always equal to 0?
3- How about the step size? I understand there won't be any subtractive cancellation errors, but what if I have a design variable normalized/scaled to 1 with a range of 0.8 to 1.2? Decreasing the step to 1e-10 does not seem to make sense. I am a bit confused there.
The ability to use complex arithmetic to compute derivative approximations is based on the mathematics of complex arithmetic.
You should read about the theory to get a better understanding of why it works and how the step size issue is resolved with complex-step vs finite-difference.
There is no physical interpretation that you can make for the complex-step method. You are simply taking advantage of the mathematical properties of complex arithmetic to approximate a derivative more accurately than FD can. So the key is that your code is set up to do complex arithmetic correctly.
Sometimes, engineering analyses do actually leverage complex numbers. One aerospace example is the Joukowski transformation; in electrical engineering, complex numbers come up all the time in load-flow analysis of AC circuits. If you have such an analysis, then you cannot easily use complex step to approximate derivatives, since the analysis itself is already complex. In these cases it is technically possible to use a more general class of numbers called hyper-dual numbers, but this is not supported in OpenMDAO.
Also, occasionally there are implementations of methods that are not complex-step safe, which will prevent you from using complex step unless you define a new complex-step-safe version. The simplest example is the np.absolute() method in the numpy library for Python. When passed a complex number, the standard implementation returns the absolute magnitude of the number:
abs(a+bj) = sqrt(a^2 + b^2), e.g. abs(1+1j) = sqrt(1^2 + 1^2) ≈ 1.4142
While not mathematically incorrect, this implementation would mess up the complex-step derivative approximation.
Instead you need an alternate version that preserves the derivative information in the imaginary part, e.g. one that flips the sign of the whole complex number when the real part is negative:
abs(a+bj) = abs(a) + sign(a)*b*j
So in summary, you need to watch out for these kinds of functions that are not implemented correctly for use with complex-step. If you have those functions, you need to use alternate complex-step safe versions of them. Also, if your analysis itself uses complex numbers then you can not use complex-step derivative approximations either.
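A complex-step-safe absolute value is only a few lines; here is a minimal numpy sketch (cs_safe_abs is a made-up name):

import numpy as np

def cs_safe_abs(x):
    # Negate entries with a negative real part, so the imaginary (derivative)
    # part keeps the correct sign: d|x|/dx = sign(x).
    x = np.asarray(x)
    return np.where(x.real < 0, -x, x)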
With regard to your step-size question, again I refer you to this paper for greater detail. The basic idea is that, without subtractive cancellation, you are free to use a very small step size with complex step without fear of lost accuracy due to numerical issues. So typically you will use a step of 1e-20 or smaller. Since the complex-step truncation error scales with step^2, using such a small step gives effectively exact results. You need not worry about scaling issues in most cases; just take a small enough step.
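The method itself fits in a few lines. A minimal sketch, using the classic test function from the complex-step literature: perturb the input along the imaginary axis and read the derivative off the imaginary part of the output.

import numpy as np

def f(x):
    return np.exp(x) / np.sqrt(np.sin(x)**3 + np.cos(x)**3)

x0, h = 1.5, 1e-20
d_cs = np.imag(f(x0 + 1j * h)) / h      # complex step: no subtraction, no cancellation
d_fd = (f(x0 + 1e-7) - f(x0)) / 1e-7    # forward finite difference, for comparison
print(d_cs, d_fd)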

OpenMDAO: When is it needed to define the partial derivative?

I've noticed that defining unnecessary partial derivatives can significantly slow down the optimizer. Therefore I'm trying to understand: how can I know whether I should define the partial derivative for a certain input/output relationship?
When you say "unnecessary" do you mean partial derivatives that are always zero?
Using declare_partials('*', '*') when a component is really more sparse than that will significantly slow down your model. Anywhere a partial derivative is always zero, you should simply not declare it.
Furthermore, if you have a vectorized operation, then your Jacobian is actually a diagonal matrix. In that case, you should declare a sparse partial derivative by giving rows and cols arguments to the declare_partials call. This will often substantially speed up your code.
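A minimal sketch of the diagonal case (the component and variable names are made up):

import numpy as np
import openmdao.api as om

class Doubler(om.ExplicitComponent):

    def setup(self):
        n = 10
        self.add_input('x', shape=n)
        self.add_output('y', shape=n)
        ar = np.arange(n)
        # y[i] depends only on x[i], so declare just the diagonal nonzeros
        self.declare_partials('y', 'x', rows=ar, cols=ar)

    def compute(self, inputs, outputs):
        outputs['y'] = 2.0 * inputs['x']

    def compute_partials(self, inputs, partials):
        partials['y', 'x'] = 2.0  # one value for every declared nonzero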
Technically speaking, if you follow the data path from all of your design variables, through each component, to the objective and constraints, then only the variables along that path need their partials defined. But practically speaking, you should declare and specify all the partials for every output w.r.t. every input (unless they are zero), so that changes to model connectivity don't break your derivatives.
It takes a little more time to declare your partials sparsely, but the performance speed-up is well worth it.
I think they need to be defined if they are ever relevant to a response (constraint or objective) in the optimization, or to a nonlinear solve within a group. My personal practice is to always define them. Should I ever change my optimization problem, which I do often, I don't want to have to go back and make sure I'm defining all the appropriate derivatives.
The master-branch of OpenMDAO contains some jacobian-coloring techniques which can significantly improve performance if your problem is particularly sparse in nature. This method is enabled by setting the following options on the driver:
p.driver.options['dynamic_simul_derivs'] = True
p.driver.options['dynamic_simul_derivs_repeats'] = 5
This method works by filling in the user-declared sparsity pattern (specified with rows and cols in declare_partials) with random numbers and computing the total Jacobian. The repeat option is there to improve confidence in the result, since it is possible (but unlikely) that a single pass will produce an "incidental zero" in the Jacobian that is not truly part of the sparsity structure.
With this technique, and by doing things like vectorizing my calculations instead of using nested for loops, I've been able to get very good performance in a lot of situations. Of course, the effectiveness of these methods will vary from model to model.
