Is there an example of an OpenMDAO optimization where each iteration of the optimization is designed to run in parallel? The examples I saw seemed focused on the design-of-experiment drivers.
Once you get into parallelism within the model, there are potentially many different ways to split up a problem. However, the simplest case, and the one most relevant to something like running multiple directional cases at the same time, is a multi-point-style setup. This is done with a ParallelGroup, as shown in the multi-point doc.
I'm pretty new to OpenMDAO. I would like to set up my problem so that there is a subdiscipline driven by its own optimizer, which hands off its results to the top-level problem, where a separate optimizer will use them.
For a bit more context, the sub-problem is trajectory optimization of a vehicle. I successfully got that problem to converge in a few iterations, without varying the vehicle parameters (mass, thrust, fuel, etc.). So far so good. However, if I let the optimizer also vary some vehicle parameters, it can't seem to reach the global optimum.
So my thought was to let the trajectory optimization subproblem do what it does successfully, incorporate it as a subproblem of the overall problem, and see if that works better.
So my question is:
Can an OpenMDAO problem have multiple drivers?
What's the right way to set that up? Do I wrap my subproblem into its own ExplicitComponent?
While this is possible, solving a problem in this way will not pass accurate analytic derivatives between the system design and the trajectory design.
We've developed another tool, Dymos, specifically for the purpose of doing multidisciplinary optimization that involves trajectory optimization.
It supports pseudospectral methods (like those in GPOPS, PSOPT, and OTIS) as well as shooting methods, and it allows a trajectory to be optimized as part of a larger system optimization problem.
Take a look at some of the example problems and see if it might work for you.
I am very much attracted to the idea of using OpenMDAO. However, I am not sure if it is worthwhile to use OpenMDAO in an optimization scenario where I use an external code as a single component and nothing else.
Is there any difference between an implementation using an optimizer available in SciPy versus the aforementioned OpenMDAO implementation?
Or any difference between that and implementing a similar approach in another language, like the MATLAB Optimization Toolbox?
(Of course the way optimizers are implemented may differ, but conceptually, am I taking advantage of OpenMDAO with this approach?)
As far as I have read in the articles, OpenMDAO is powerful in cases where multiple components "interact" with each other and "global derivatives" are obtained. Am I taking advantage of OpenMDAO by using a single ExternalCodeComp?
Using just a single ExternalCodeComp would not be using the full potential of OpenMDAO. There would still be some advantages, because the ExternalCodeComp handles a lot of file-wrapping details for you. Additionally, there are often details in an optimization, such as adding constraints, that commonly require additional components. In that case you might use an ExecComp to add a few additional calculations.
Lastly, using OpenMDAO would allow you to potentially grow your model in the future to include other disciplines.
If you are sure that you'll never do anything other than optimize the one external code, then OpenMDAO does reduce down to similar functionality to using the bare pyoptsparse, SciPy, or MATLAB optimizers. In this corner case, OpenMDAO doesn't bring a whole lot to the table, other than the ease of use of the ExternalCodeComp.
I know that there is possibility to export/import h2o model, that was previously trained.
My question is - is there a way to transform h2o model to a non-h2o one (that just works in plain R)?
I mean that I don't want to launch the h2o environment (JVM), since I know that predicting with a trained model is simply multiplying matrices, applying activation functions, etc.
Of course it would be possible to extract weights manually etc., but I want to know if there is any better way to do it.
I do not see any previous posts on SO about this problem.
No.
Remember that R is just the client, sending API calls: the algorithms (those matrix multiplications, etc.) are all implemented in Java.
What they do offer is a POJO, which is what you are asking for, but in Java. (POJO stands for Plain Old Java Object.) If you call h2o.download_pojo() on one of your models you will see it is quite straightforward. It may even be possible to write a script to convert it to R code? (Though it might be better, if you were going to go to that trouble, to convert it to C++ code, and then use Rcpp!)
Your other option is to export the weights and biases, in the case of deep learning, implement your own activation function, and use them directly.
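That weight-export route amounts to a plain feed-forward pass, which can be sketched in numpy. The layer layout here is an assumption (one weight matrix and bias vector per layer, `weights[i]` shaped `(n_out, n_in)`, tanh hidden activations, linear output) — check what your exported model actually uses before reusing this:

```python
import numpy as np

def predict_mlp(x, weights, biases, activation=np.tanh):
    """Feed-forward prediction from exported layer weights/biases.
    Assumes weights[i] has shape (n_out, n_in) and a linear output
    layer (regression); verify the orientation h2o exports."""
    a = np.asarray(x, dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):
        a = activation(np.asarray(W) @ a + np.asarray(b))
    return np.asarray(weights[-1]) @ a + np.asarray(biases[-1])

# Toy check with hand-made weights (not from a real h2o model):
ws = [np.array([[1.0, 0.0], [0.0, 1.0]]), np.array([[1.0, 1.0]])]
bs = [np.zeros(2), np.zeros(1)]
y = predict_mlp([0.0, 0.0], ws, bs)        # tanh(0) + tanh(0) = 0
y2 = predict_mlp([1.0, 0.0], ws, bs)       # tanh(1) + tanh(0)
```

The same function would work in R via matrix multiplication; the point is that once the weights are out, no JVM is needed at prediction time.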
But, personally, I've never found the Java side to be a bottleneck, either from the point of view of dev ops (install is easy) or computation (the Java code is well optimized).
I am currently using OpenMDAO 1.7.1. I am trying to have a MetaModel with Kriging train itself at the point of best Expected Improvement. The aim is to find a global optimum on a compact design space with an EGO-like method.
However I am facing the following conundrum:
In order to find the best point, the only way I see is to run an optimization on the Expected Improvement function with a gradient-based optimizer in a nested Problem, with an outer problem running a FixedPointIterator and checking the value of the Expected Improvement.
My questions are the following:
Is there another, more efficient way of doing this? I couldn't find anything about EGO in OpenMDAO 1.x; if there is, where should I look?
If this is the only way:
Will this find the global optimum in my design space?
Thank you in advance for your responses.
I think that you could develop EGO as a stand alone driver. The driver would be responsible for running the underlying model, collecting the cases, building the surrogate and doing its own sub-optimization.
You can use the surrogate models built into OpenMDAO for this. You just wouldn't use the meta-model component. You would just use the surrogate model by itself. For an example of how to do that, look at this test which runs kriging by itself.
So 90% of the EGO process would be wrapped up into a driver. This avoids the need for a sub-problem and I think simplifies the code significantly. The EGO algorithm is fairly simple and is not hard to code into the driver. You won't gain much by using nested-problems to implement it. But by making it a driver, you can still build a more complex model that will get run by EGO.
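To show how little code the loop actually needs, here is an end-to-end sketch of that driver-style EGO process. Everything is hand-rolled and hypothetical: a tiny fixed-hyperparameter Gaussian process stands in for OpenMDAO's Kriging surrogate, and random candidate sampling stands in for the EI sub-optimization.

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(A, B, theta=10.0):
    d2 = (A[:, None, :] - B[None, :, :]) ** 2
    return np.exp(-theta * d2.sum(axis=2))

class TinyKriging:
    """Minimal zero-mean GP regressor (fixed hyperparameters) standing in
    for a real kriging surrogate -- an illustration, not OpenMDAO's API."""
    def fit(self, X, y):
        self.X = X
        K = rbf_kernel(X, X) + 1e-6 * np.eye(len(X))  # jitter for stability
        self.L = np.linalg.cholesky(K)
        self.alpha = np.linalg.solve(self.L.T, np.linalg.solve(self.L, y))
    def predict(self, Xs):
        Ks = rbf_kernel(Xs, self.X)
        mu = Ks @ self.alpha
        v = np.linalg.solve(self.L, Ks.T)
        var = np.clip(1.0 - (v ** 2).sum(axis=0), 1e-12, None)
        return mu, np.sqrt(var)

def expected_improvement(mu, sigma, y_best):
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def ego(f, bounds, n_init=5, n_iter=15, seed=0):
    """EGO driver loop: sample, build surrogate, maximize EI, resample."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_init, 1))
    y = np.array([f(x[0]) for x in X])
    gp = TinyKriging()
    for _ in range(n_iter):
        gp.fit(X, y)
        # Random candidates stand in for the gradient-based EI sub-optimization.
        cand = rng.uniform(lo, hi, size=(256, 1))
        mu, sig = gp.predict(cand)
        x_new = cand[np.argmax(expected_improvement(mu, sig, y.min()))]
        X = np.vstack([X, x_new])
        y = np.append(y, f(x_new[0]))
    return X[np.argmin(y)], y.min()

x_best, y_best = ego(lambda x: (x - 0.3) ** 2, (0.0, 1.0))
```

In a real driver, `f` would be the underlying model evaluation, and the candidate loop would be replaced with a proper sub-optimization of EI.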
In order to find a better minimum, I currently create and run multiple instances of openmdao problem with different initial guesses, then select the solution with the best performance. To make this process faster, I currently use Python's multiprocessing module and solve each openmdao problem in a parallel subprocess.
However, as my problem becomes more complex, I would like to parallelize the optimization process too (by using ParallelGroup and/or distributed components), and I'm unsure if MPI will interact with Python's multiprocessing in strange ways. Are there any OpenMDAO features that will handle both parallelism within an individual problem and across multiple problem instances?
You can run multiple instances of an OpenMDAO problem under MPI (without subprocesses) by splitting the comm and passing a sub-comm to the Problem constructor. See a basic example here:
https://github.com/OpenMDAO/OpenMDAO/blob/master/mpitest/test_mpi.py#L207-237
Your Problem can have ParallelGroup(s) and it will be fine as long as you give it a large enough comm.
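The comm-splitting pattern can be sketched like this (the rank-to-instance mapping and the toy model are made up for illustration; `run_instances` would be called from a `__main__` guard under `mpirun -n 4 python script.py`, giving two instances of two processes each):

```python
def instance_color(rank, size, n_instances):
    """Map a global MPI rank to the index of the problem instance
    it should work on (hypothetical block mapping)."""
    procs_per_instance = max(size // n_instances, 1)
    return min(rank // procs_per_instance, n_instances - 1)

def run_instances(n_instances=2):
    # Imports deferred so the helper above is usable without MPI installed.
    from mpi4py import MPI
    import openmdao.api as om

    world = MPI.COMM_WORLD
    color = instance_color(world.rank, world.size, n_instances)
    sub_comm = world.Split(color, key=world.rank)

    # Each Problem only ever sees its own sub-communicator, so any
    # ParallelGroups inside the model parallelize within that instance.
    prob = om.Problem(comm=sub_comm)
    # Toy per-instance model; a real one would vary the initial guess by color.
    prob.model.add_subsystem('c', om.ExecComp('y = (x - %d)**2' % color))
    prob.setup()
    prob.run_model()
    return prob
```

Because each instance lives entirely on its sub-comm, there is no interaction with Python's multiprocessing at all: MPI handles both levels of parallelism.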