Is it necessary, or recommended, to define partials w.r.t fixed parameters? - openmdao

As per the title - say you have a fixed parameter like air density. Is it worth defining the partial w.r.t this fixed parameter?

If you know the value will be fixed forever (i.e. you'll never want to connect it to something else), then you don't need to declare derivatives for that combination of variables.
However, I consider this to be bad practice. In my experience, at some point in the future you will end up connecting something to that input, and then the total derivatives will be wrong. You could, of course, fix the derivatives at that point, but you might not remember, and it will take you some time to debug the optimization and track down the source of the bad derivatives. So as a best practice, I always differentiate all outputs with respect to all inputs.
Alternatively, you could declare density as an option instead of an input (see the docs on options). If you really want it to be a constant, this is the route I suggest.
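For reference, here is a minimal sketch of the option route on a simple dynamic-pressure style component; the names rho, v, and q are illustrative placeholders, not from your model:

    import openmdao.api as om

    class DynamicPressure(om.ExplicitComponent):

        def initialize(self):
            # Density as an option: a true constant that can never be connected,
            # so no derivative with respect to it is needed.
            self.options.declare('rho', default=1.225, types=float)

        def setup(self):
            self.add_input('v', val=1.0)
            self.add_output('q', val=0.0)
            self.declare_partials('q', 'v')

        def compute(self, inputs, outputs):
            outputs['q'] = 0.5 * self.options['rho'] * inputs['v'] ** 2

        def compute_partials(self, inputs, J):
            J['q', 'v'] = self.options['rho'] * inputs['v']

If you instead kept rho as an add_input, you would also declare_partials('q', 'rho') and provide d(q)/d(rho) = 0.5*v**2, so any future connection to rho still gives correct totals.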

Related

Is there any way to get a specific partial similar to get_val()?

As the title states, the get_val() function allows the user to retrieve the value of an input, output, or residual. Is there anything like a get_partial(of=..., wrt=...) that allows a user to retrieve a derivative? Or what would be the best way to go about retrieving that from the problem or model?
For getting a general derivative in a system, the recommended practice is to use the compute_totals method.
Even if you just want to look at a partial derivative, you can use the of and wrt arguments to point to just the specific partial. You'll get a total, but it should be equal to the partial.
The general debugging practice for looking at partials is to use check_partials. This will give you full values of all the partials to look at. But if you need an algorithmic approach as part of a run script, then use compute_totals.
OpenMDAO stores outputs, so obtaining those is a matter of getting a value that's already there (hence get_val).
For derivatives, depending on the way in which OpenMDAO is used, there's no guarantee that the totals are present in memory, so they must be computed when needed.
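A short sketch of both routes, using a throwaway ExecComp as a stand-in for a real model:

    import openmdao.api as om

    # Tiny stand-in model just to show the calls; swap in your own Problem.
    prob = om.Problem()
    prob.model.add_subsystem('comp', om.ExecComp('y = 2.0 * x'), promotes=['*'])
    prob.setup()
    prob.run_model()

    # Total derivative of 'y' with respect to 'x' -- here equal to the partial (2.0).
    totals = prob.compute_totals(of=['y'], wrt=['x'])
    print(totals['y', 'x'])

    # check_partials prints and returns every partial it verified, component by component.
    data = prob.check_partials(compact_print=True)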

Avoiding singularity in analysis - does OpenMDAO automatically enable 'fully-simultaneous' solution?

Turbulent boundary layer calculations break down at the point of flow separation when solved with a prescribed boundary layer edge velocity, ue, in what is called the direct method.
This can be alleviated by solving the system in a fully-simultaneous or quasi-simultaneous manner. Details about both methods are available here (https://www.rug.nl/research/portal/files/14407586/root.pdf), pages 38 onwards. Essentially, the fully-simultaneous method combines the inviscid and viscous equations into a single large system of equations, and solves them with Newton iteration.
I have currently implemented an inviscid panel solver entirely in ExplicitComponents. I intend to implement the boundary layer solver also entirely with ExplicitComponents. I am unsure whether coupling these two groups would then result in an execution procedure like the direct method, or whether it would work like the fully-simultaneous method. I note that in the OpenMDAO paper, it is stated that the components are solved "as a single nonlinear system of equations", and that the reformulation from explicit components to the implicit system is handled automatically by OpenMDAO.
Does this mean that if I couple my two analyses (again, consisting purely of ExplicitComponents) and set the group to solve with the Newton solver, I'll get a fully-simultaneous solution 'for free'? This seems too good to be true, as ultimately the component that integrates the boundary layer equations will have to take some prescribed ue as an input, and then will run into the singularity in the execution of its compute() method.
If doing the above would instead make it execute like the direct method and lead to the singularity, (briefly) what changes would I need to make to avoid it? Would it require defining the boundary layer components implicitly?
Despite seeming too good to be true, you can in fact change the structure of your system by swapping out the top-level solver.
If you used a NonlinearBlockGS solver at the top, it would solve in the weak form. If you used a NewtonSolver at the top, it would solve as one large monolithic system. This property does indeed derive from the unique structure of how OpenMDAO stores things.
There are some caveats. I would guess that your panel code is implemented as a set of intermediate calculations broken up across several components. If that's the case, then the NewtonSolver will treat each intermediate variable as if it were its own state variable. In other words, you would have more than just delta and u_e as states, but all the intermediate calculations as well.
This might be somewhat unstable (though it might work just fine, so try it!). You might need a hybrid between the weak and strong forms, which can be achieved via the solve_subsystems option on the NewtonSolver. This approach is called the Hierarchical Newton Method in section 5.1.2 of the OpenMDAO paper. It will do a sub-iteration of NLBGS for every top-level Newton iteration. This acts as a form of nonlinear preconditioner, which can help stabilize the strong form. You can limit how many sub-iterations are done, and in your case you may want to use just 2 or 3 because of the risk of singularity.
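A minimal sketch of swapping solvers to move between the weak and strong forms; InviscidPanelGroup and BoundaryLayerGroup are placeholders for your own groups of ExplicitComponents:

    import openmdao.api as om

    coupled = om.Group()
    # Placeholder groups -- replace with your panel and boundary layer models.
    coupled.add_subsystem('inviscid', InviscidPanelGroup(), promotes=['*'])
    coupled.add_subsystem('viscous', BoundaryLayerGroup(), promotes=['*'])

    # Weak form: block Gauss-Seidel passes data back and forth between the groups.
    # coupled.nonlinear_solver = om.NonlinearBlockGS(maxiter=100)

    # Strong (fully-simultaneous) form: Newton treats all outputs in the coupled
    # group as one monolithic nonlinear system.
    newton = coupled.nonlinear_solver = om.NewtonSolver(solve_subsystems=True)
    newton.options['maxiter'] = 20
    newton.options['max_sub_solves'] = 3   # cap how many iterations run the sub-solves
    coupled.linear_solver = om.DirectSolver()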

Is there a way to specify partials for an Exec Comp?

Looking into the class, I see that by default the partials look like they're complex-stepped. Is there a way to specify an analytic partial?
I've got some code that has a lot of essentially one-liner explicit components with analytic partials specified. Is there any real performance benefit to that over an ExecComp? Or with simple functions does it work out to be roughly the same?
There's currently no way to specify analytic partials for ExecComps and you're right that they're complex-stepped.
The short answer to your next question is that for simple functions there's no meaningful performance benefit to using explicit components over ExecComp. This is because complex-step computes derivatives to within machine precision when an adequately small step size is used, which OpenMDAO does. The actual computational cost of performing the complex step, for one-liners, is generally trivial.
The longer answer involves a few considerations, such as the sizes of the component's input and output arrays, the sparsity pattern of the Jacobian, and the cost of the actual compute function. If you want, I can go into more detail about these considerations and suggest which method to use for your problems.
[Edit: I've updated the figure with results for this compute: y = sum(log(x)/x**2 + 3*log(x))]
I've added a figure below showing the cost for computing derivatives of a component as we change the size of the input array to that component. The analytic component is slightly faster across the board, but requires more lines of code.
Basically, whichever method is easier to implement is probably the better choice, as there's not a huge cost difference. For this extremely simple compute function, because it's so inexpensive, the framework overhead probably has a larger impact on cost than the actual derivative computation. Of course, these trends are also problem-dependent.
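For concreteness, here is a hedged sketch of the two implementations of that same one-liner, assuming ExecComp's built-in sum and log functions:

    import numpy as np
    import openmdao.api as om

    n = 10  # arbitrary input size

    # ExecComp version: partials are complex-stepped automatically.
    exec_version = om.ExecComp('y = sum(log(x)/x**2 + 3*log(x))',
                               x=np.ones(n), y=0.0)

    # ExplicitComponent version with hand-coded analytic partials.
    class LogComp(om.ExplicitComponent):

        def setup(self):
            self.add_input('x', val=np.ones(n))
            self.add_output('y', val=0.0)
            self.declare_partials('y', 'x')

        def compute(self, inputs, outputs):
            x = inputs['x']
            outputs['y'] = np.sum(np.log(x)/x**2 + 3*np.log(x))

        def compute_partials(self, inputs, J):
            x = inputs['x']
            # d/dx [log(x)/x**2 + 3*log(x)] = (1 - 2*log(x))/x**3 + 3/x
            J['y', 'x'] = (1.0 - 2.0*np.log(x))/x**3 + 3.0/x

Either one can be dropped into a model with add_subsystem; the analytic version just takes more lines for the same result.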

How to check for missing partials

I implemented a system that is composed of a few groups and multiple components. It is relatively intricate, with many component inputs and outputs, where some outputs depend on a given input and others don't, so not every partial exists.
Gradient-based optimizers seem to get stuck at the initial values and never go further than iteration 0 (they are not stuck at a local optimum). I have encountered this error before when I was missing declare_partials calls for some variables. Is there a way to automatically check which component input/output is missing partials, similar to the way missing connections show up in the N^2 diagram?
There are two tools that you need to use to check for bad derivatives. The first is check_partials. It will go component by component and use either finite difference or complex step to verify the partial derivatives of every component (regardless of whether or not you declared them in the setup of that component). That will catch the problem if you are missing any partials, because the FD/CS check will see them as non-zero and will show you that there is an error.
check_partials should be your first stop, always. If you can, use complex step to verify your derivatives; that way you know they are totally accurate. Also, check_partials does the check around whatever point is currently initialized, so sometimes you might have a degenerate case (e.g. some input is 0) where the check passes but your derivatives are still wrong. For example, if your component represented y = 2*x and you forgot to define derivatives, but you ran check_partials at x = 0, the check would pass. But if you ran it at x = 1, the check would show an error.
If all of your partial derivatives are correct but you're still getting bad results, then you can try check_totals. Depending on the structure of your model, and if there is any coupling in it (i.e. you need to use some kind of nonlinear solver), it's possible that you don't have a linear solver configured correctly for computing total derivatives. In a lot of cases, if you have coupling, you can just put a DirectSolver at the same level as the nonlinear solver in your model.
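A short sketch of that debugging sequence, assuming prob is your already-built Problem and 'obj' and 'x' are placeholder variable names:

    # Allocate complex vectors so method='cs' is available for the checks.
    prob.setup(force_alloc_complex=True)
    prob.run_model()

    # 1) Verify every component's partials, declared or not, against complex step.
    prob.check_partials(method='cs', compact_print=True)

    # 2) If the partials are clean but the optimizer still stalls, check the totals.
    prob.check_totals(of=['obj'], wrt=['x'], compact_print=True)

    # If the totals are wrong and the model has coupling, try adding a DirectSolver
    # at the same level as the nonlinear solver, e.g.:
    # group.linear_solver = om.DirectSolver()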

OpenMDAO: finite difference flag for Component.solve_nonlinear

For some of our components it would be useful to know whether it's being executed as part of a finite difference calculation or not. One example could be a meshing component where we'd want to maintain the same node count and distribution function during FD and allow for remeshing during major iteration steps. In the old OpenMDAO we could detect this from a component's itername. Would it be possible to reintroduce this or is that info already available to the Component class?
I can't think of any current way to figure out if you are inside an FD when solve_nonlinear is being called, but it's a good idea for the reasons that you mention.
We don't currently have that capability, but others have also asked to be informed when solve_nonlinear is being run under complex step.
One way to do this would be to introduce an optional argument to solve_nonlinear, such as call_mode="fd", call_mode="cs", or call_mode="solve". The only problem with this approach is that it's very backwards-incompatible.
Another approach would be to add a regular Python attribute to the component that you could check, like self.call_mode == "solve", etc. This would be a pretty easy change, and I think it would serve the purpose.
One last possible way would be to put a flag into the unknowns/params vector, so you would check params.call_mode to see what mode you are in. This is somewhat sensible, since it's the param values that change when you're going to complex-step.
I think I like the last option the best. Both solve_nonlinear and apply_nonlinear need to know about this information. But none of the other methods do. So making it a component attribute seems a little out of place.
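To make the third option concrete, here is a sketch of how a component might use such a flag for the meshing case; note that call_mode is the proposed attribute from this discussion, not an existing OpenMDAO API, and remesh() is a hypothetical helper:

    def solve_nonlinear(self, params, unknowns, resids):
        # Proposed (hypothetical) flag: 'solve' for a regular execution,
        # 'fd' or 'cs' when called under finite difference or complex step.
        mode = getattr(params, 'call_mode', 'solve')

        if mode == 'solve':
            self.remesh()  # hypothetical: regenerate the mesh on major iterations
        # under 'fd'/'cs', keep the node count and distribution function fixed

        ...  # rest of the analysis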
