Using ExternalCodeComp as the single component and the OpenMDAO concept

I am very much attracted to the idea of using OpenMDAO. However, I am not sure whether it is worthwhile to use OpenMDAO in an optimization scenario where I use an external code as a single component and nothing else.
Is there any difference between an implementation using an optimizer available in SciPy versus the aforementioned OpenMDAO implementation?
Or any difference between that and an implementation of a similar approach in some other language, like the MATLAB Optimization Toolbox?
(Of course the way the optimizers are implemented may differ, but I mean: conceptually, am I taking advantage of OpenMDAO with this approach?)
As far as I have read in the articles, OpenMDAO is powerful in cases where multiple components interact with each other and "global derivatives" are obtained.
Am I taking advantage of OpenMDAO by using a single ExternalCodeComp?

Using just a single ExternalCodeComp would not be using the full potential of OpenMDAO. There would still be some advantages, because the ExternalCodeComp handles a lot of file-wrapping details for you. Additionally, there are often details in an optimization, such as adding constraints, that will commonly require additional components. In that case you might use an ExecComp to add a few additional calculations.
Lastly, using OpenMDAO would allow you to potentially grow your model in the future to include other disciplines.
If you are sure that you'll never do anything other than optimize the one external code, then OpenMDAO does reduce down to similar functionality to using the bare pyoptsparse, SciPy, or MATLAB optimizers. In this corner case, OpenMDAO doesn't bring a whole lot to the table, other than the ease of use of the ExternalCodeComp.
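To make that pattern concrete, here is a minimal sketch (not taken from the OpenMDAO docs) of the single-external-code setup: a hypothetical `paraboloid.exe` binary wrapped in an ExternalCodeComp, an ExecComp supplying a constraint calculation, and a SciPy-based driver on top. The file names, command, and variable names are all placeholders, and the script assumes a recent OpenMDAO version where promoted inputs can be used directly as design variables.

```python
import openmdao.api as om


class ParaboloidExternal(om.ExternalCodeComp):
    """Hypothetical wrapper: writes x and y to a file, runs an external
    binary, then reads f(x, y) back from an output file."""

    def setup(self):
        self.add_input('x', val=0.0)
        self.add_input('y', val=0.0)
        self.add_output('f_xy', val=0.0)

        self.input_file = 'paraboloid_input.dat'
        self.output_file = 'paraboloid_output.dat'
        self.options['external_input_files'] = [self.input_file]
        self.options['external_output_files'] = [self.output_file]
        # 'paraboloid.exe' stands in for whatever external code you are wrapping.
        self.options['command'] = ['paraboloid.exe', self.input_file, self.output_file]

        # No analytic derivatives from the external code, so finite-difference them.
        self.declare_partials(of='*', wrt='*', method='fd')

    def compute(self, inputs, outputs):
        with open(self.input_file, 'w') as f:
            f.write('{}\n{}\n'.format(inputs['x'][0], inputs['y'][0]))

        super().compute(inputs, outputs)  # runs the external command

        with open(self.output_file, 'r') as f:
            outputs['f_xy'] = float(f.read())


prob = om.Problem()
prob.model.add_subsystem('extcode', ParaboloidExternal(), promotes=['*'])
# An ExecComp picks up the small extra calculation needed for a constraint.
prob.model.add_subsystem('con_comp', om.ExecComp('g = x + y'), promotes=['*'])

prob.driver = om.ScipyOptimizeDriver(optimizer='COBYLA')
prob.model.add_design_var('x', lower=-10.0, upper=10.0)
prob.model.add_design_var('y', lower=-10.0, upper=10.0)
prob.model.add_objective('f_xy')
prob.model.add_constraint('g', upper=10.0)

prob.setup()
prob.run_driver()
```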

Related

Understanding the reasons behind OpenMDAO's design

I am reading about MDO and I find OpenMDAO really interesting. However, I have trouble understanding/justifying the reasons behind some basic design choices.
Why gradient-based optimization? Since a gradient-based optimizer can never guarantee a global optimum, why is it preferred? I understand that finding a global minimum is really hard for MDO problems with numerous design variables, and that a local optimum is far better than a human design. But considering that the application is generally expensive systems like aircraft or satellites, why settle for a local minimum? Wouldn't it be better to use meta-heuristics, or meta-heuristics on top of gradient methods, to converge to a global optimum? The computation time would be higher, but now that almost every university and leading industry has access to supercomputers, I would say it is an acceptable trade-off.
Speaking of computation time, why Python? I agree that Python makes scripting convenient and can be interfaced to compiled languages. Does this alone tip the scales in favor of Python? If computation time is one of the primary reasons that makes finding the global minimum really hard, wouldn't it be better to use C++ or some other more efficient language?
To clarify, the only intention of this post is to justify (to myself) using OpenMDAO, as I am just starting to learn about MDO.
No algorithm can guarantee that it finds a global optimum in finite time, but gradient-based methods generally find local optima much faster than gradient-free methods do. OpenMDAO concentrates on gradient-based methods because they can traverse the design space far more rapidly.
Gradient-free methods are generally good for exploring the design space more broadly in search of better local optima, and there's nothing to prevent users from wrapping the gradient-based optimization drivers under a gradient-free caller (see the literature on algorithms like Monotonic Basin Hopping, for instance).
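For illustration, here is a rough sketch of that wrapping idea: a simplified random-restart loop in the spirit of monotonic basin hopping, not OpenMDAO's or any paper's implementation. It assumes a Problem `prob` that has already been set up with a gradient-based driver, a design variable 'x' of size 3, and an objective 'obj'.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
best_f = np.inf
best_x = None
x_center = np.zeros(3)

for _ in range(20):
    # Perturb the current best point, then let the gradient-based driver
    # descend to the nearest local optimum from that starting point.
    prob.set_val('x', x_center + rng.normal(scale=0.5, size=3))
    prob.run_driver()

    f = float(prob.get_val('obj'))
    if f < best_f:                 # monotonic acceptance: only keep improvements
        best_f = f
        best_x = prob.get_val('x').copy()
        x_center = best_x

print('best objective found:', best_f, 'at x =', best_x)
```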
Python was chosen because, while it's not the most efficient in run-time, it considerably reduces the development time. Since using OpenMDAO means writing code, the relatively low learning curve, ease of access, and cross-platform nature of Python made it attractive. There's also a LOT of open-source code out there that's written in Python, which makes it easier to incorporate things like 3rd party solvers and drivers. OpenMDAO is only possible because we stand on a lot of shoulders.
Despite being written in Python, we achieve relatively good performance because the algorithms involved are very efficient and we minimize Python's overhead by, for example, using vectorization via NumPy rather than Python loops.
Also, the calculations that Python handles at the core of OpenMDAO are generally very low cost. For complex engineering calculations like PDE solvers (e.g. CFD or FEA), the expensive parts of the code can be written in C, C++, Fortran, or even Julia. These languages are easy to interface with Python, and many OpenMDAO users do just that.
OpenMDAO is actively used in a number of applications, and the needs of those applications drive its design. While we don't have a built-in monotonic basin hopping capability right now (for instance), if our stakeholders determined it was needed, we would look to add it. As development continues, if we were to hit roadblocks that could be overcome by switching to a different language, we would consider it, but backwards compatibility (the ability of users to keep using their existing Python-based models) would be a requirement.

Does it make sense to use a gradient-free optimizer within the OpenMDAO framework?

Is my understanding correct that using a gradient-free optimizer wraps the whole problem and treats it as a black box (even though the problem has multiple groups/components attached to inner solvers with gradients, etc.)?
If so, then the actual capabilities of OpenMDAO are not exploited well, and the advantage of OpenMDAO boils down to easily tracking your calculations with smaller routines, etc.
Though it is true that OpenMDAO's most unique and powerful feature is its automatic derivatives capability, IMO that does not mean it is only useful in the context of gradient-based optimization. The framework offers several other features that are useful regardless of what optimizer you choose. For example:
support for parallelization
library of powerful nonlinear solvers
modular model construction
You could certainly hand-code a large, complex model without OpenMDAO and then wrap it with a gradient-free optimizer, but I would argue that you would ultimately end up doing a bit more work in the long run. Using a framework provides organization and structure to your model that pays off long term.
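As a small illustration of those points, the sketch below runs a gradient-free driver on top of a model that still uses OpenMDAO's nonlinear solvers internally. It uses the SellarMDA group that ships with OpenMDAO's test suite; treat the bounds and GA settings as placeholder values.

```python
import openmdao.api as om
from openmdao.test_suite.components.sellar_feature import SellarMDA

# SellarMDA is a coupled two-discipline group whose internal cycle is converged
# by a nonlinear solver; the driver on top never needs derivatives.
prob = om.Problem(model=SellarMDA())

prob.driver = om.SimpleGADriver()
prob.driver.options['bits'] = {'x': 8, 'z': 8}
prob.driver.options['max_gen'] = 25

prob.model.add_design_var('x', lower=0.0, upper=10.0)
prob.model.add_design_var('z', lower=0.0, upper=10.0)
prob.model.add_objective('obj')
prob.model.add_constraint('con1', upper=0.0)
prob.model.add_constraint('con2', upper=0.0)

prob.setup()
prob.run_driver()
print(prob.get_val('obj'))
```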

Algorithmic Differentiation vs Multiple Explicit Components with Analytical Derivatives

I have a problem composed of around six mathematical expressions, i.e. f(g(z(y(x)))), where x consists of two independent arrays.
I can divide this expression into multiple explicit components with analytical derivatives, or use an algorithmic differentiation method to get the derivatives, which reduces the system to a single explicit component.
As far as I understand, it is not easy to tell in advance the possible difference in computational performance between these two approaches.
It might depend on the algorithmic differentiation tool's capabilities in the reverse-mode case, but perhaps the system with multiple explicit components would become so large that it would still be fine to use algorithmic differentiation.
My question is: is algorithmic differentiation a common tool used by any of the developers/users?
I found AlgoPy, but I am not sure about other Python tools.
As of OpenMDAO v2.4, the OpenMDAO development team has not made heavy use of AD tools on any pure-Python components. We have experimented with them a bit and found roughly a 2x increase in computational cost compared to hand-differentiated components. While some cost increase is expected, I do not want to suggest that 2x is the final rule of thumb; we simply don't have enough data to provide such an estimate.
Python based AD tools are much less well developed than those for compiled languages. The dynamic typing and general language flexibility both make it much more challenging to write good AD tools.
We have interfaced OpenMDAO with compiled codes that use AD, such as CFD and FEA tools. In these cases you're always using the matrix-free derivative APIs for OpenMDAO (apply_linear and compute_jacvec_product).
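For reference, here is a toy sketch of what that matrix-free API looks like on an ExplicitComponent. The products are hand-coded for a trivial function; in a real AD-backed wrapper they would come from the tool's tangent and adjoint routines.

```python
import openmdao.api as om


class MatrixFreeComp(om.ExplicitComponent):
    """Toy component: y = 3*x**2, with derivatives provided as
    Jacobian-vector products instead of an assembled Jacobian."""

    def setup(self):
        self.add_input('x', shape=5)
        self.add_output('y', shape=5)

    def compute(self, inputs, outputs):
        outputs['y'] = 3.0 * inputs['x'] ** 2

    def compute_jacvec_product(self, inputs, d_inputs, d_outputs, mode):
        # dy/dx is diagonal with entries 6*x, so the products are elementwise.
        if mode == 'fwd':
            if 'x' in d_inputs and 'y' in d_outputs:
                d_outputs['y'] += 6.0 * inputs['x'] * d_inputs['x']
        elif mode == 'rev':
            if 'x' in d_inputs and 'y' in d_outputs:
                d_inputs['x'] += 6.0 * inputs['x'] * d_outputs['y']
```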
If your component is small enough to fit in memory and fast enough to run on a single process, I suggest you hand differentiate your code. That will give you the best overall performance for now.
AD support for small serial components is something we'll look into in the future, but we don't have anything to offer in the near term (as of OpenMDAO v2.4).

Advantage of the components apart from the three main ones

What is the advantage of using the components from OpenMDAO's standard library
(e.g. MatrixVectorProductComp, DotProductComp, LinearSystemComp, etc.)?
As far as I understand, all of them are based on the two base classes ExplicitComponent and ImplicitComponent.
Is there a reason one should use them apart from convenience?
The OpenMDAO standard library of components provides a set of helpful, general-use components, all vectorized and all with verified-to-be-correct analytic derivatives. You're certainly not required or even obligated to use them at all. However, these components are ones that appear again and again inside many different models that have been built.
Their common appearance motivated us to generalize their implementations and include them in the standard library to avoid the need to either re-implement the components each time, or copy/paste the existing implementation into a new project.
Duplicating code is generally a bad idea, so abstracting something to be more general and broadly usable is usually worthwhile.
If you are smart about how you leverage these components, you can implement some very complex calculations without needing to write the nonlinear or linear code yourself. The Dymos and OpenConcept libraries, built on top of OpenMDAO, use these components extensively to reduce their own coding burden.
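As a small example of the convenience, the sketch below uses LinearSystemComp from the standard library to solve A x = b. The matrix and right-hand-side values are arbitrary placeholders, and the analytic derivatives of the solve come for free with the component.

```python
import numpy as np
import openmdao.api as om

prob = om.Problem()
# LinearSystemComp solves A x = b and already provides analytic partials.
prob.model.add_subsystem('lin', om.LinearSystemComp(size=3))

prob.setup()
prob.set_val('lin.A', np.array([[1.0, 2.0, 0.0],
                                [0.0, 1.0, 3.0],
                                [0.0, 0.0, 1.0]]))
prob.set_val('lin.b', np.array([7.0, 11.0, 2.0]))
prob.run_model()

print(prob.get_val('lin.x'))  # solution of A x = b
```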

Constraint handling, integer & parallel optimization

I have recently been assigned to a project where an optimization tool will be developed in Python.
Various online searches point out that there are multiple libraries/platforms, each with pros and cons. As far as I can tell, with the existing OpenMDAO framework we cannot have an optimizer that handles constraints, mixed-integer variables, and parallel optimization all at once. Here, parallel means that each iteration should be parallelized, as in SimpleGADriver. I wanted to ask the developers for advice, considering possible future improvements to OpenMDAO:
Is it a good idea to look into writing a wrapper for an existing optimizer that can handle the aforementioned requirements, or should one opt out of OpenMDAO completely, since OpenMDAO may not be the strongest platform for this specific problem?
If writing a wrapper is a good idea, I assume one should look at the driver routines in the OpenMDAO 2.2.X GitHub repository. Do you have any advice for an optimizer within Python (paid or free) that would be easily compatible with OpenMDAO?
There is an AIAA paper titled "Next generation aircraft design considering airline operations and economics", which describes current state-of-the-art research into mixed-integer programming problems. The approach there used a hybrid method that takes advantage of OpenMDAO's efficient gradient-based capabilities to handle larger numbers of continuous design variables.
In general, there is no fundamental limitation that prevents mixed-integer programming; you just need to write your own driver to handle it. These algorithms are complex, but SimpleGADriver is a decent place to start to see how to run the model in parallel.
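As a rough illustration (a placeholder model and placeholder settings, not the paper's method): SimpleGADriver can run a mixed-integer problem with a constraint and, when launched under MPI, evaluate its population in parallel. The sketch assumes a recent OpenMDAO version, where SimpleGADriver applies a penalty approach to constraints and treats any design variable without an entry in its 'bits' option as an integer.

```python
import openmdao.api as om

prob = om.Problem()
prob.model.add_subsystem('model',
                         om.ExecComp('f = (x - 3.3)**2 + (n - 4)**2'),
                         promotes=['*'])
prob.model.add_subsystem('con', om.ExecComp('g = x + n'), promotes=['*'])

prob.driver = om.SimpleGADriver()
prob.driver.options['bits'] = {'x': 8}       # continuous var encoded with 8 bits
# 'n' has no 'bits' entry, so the GA treats it as an integer design variable.
prob.driver.options['max_gen'] = 50
prob.driver.options['run_parallel'] = True   # needs `mpirun -n <procs> python ...`

prob.model.add_design_var('x', lower=0.0, upper=10.0)
prob.model.add_design_var('n', lower=0, upper=10)
prob.model.add_objective('f')
prob.model.add_constraint('g', upper=12.0)

prob.setup()
prob.run_driver()
```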
