Replacing a Component in an OpenMDAO Group

In OpenMDAO 0.x there was a 'replace' method that let you swap components; it's not clear you can do that easily in 1.x. I have a problem where my outer-loop algorithm has to run multiple times, and in some cases I want to swap the computationally expensive custom MDA component out for a MetaModel component that has the same I/O. Is there a quick and dirty way to do this at runtime?

I would just define a custom group class that takes an argument telling it which one to use. Rather than use a replace, I suggest re-instantiating the whole problem and calling setup again.
If, for some reason, you don't want to call setup more than once (maybe it's a big model and setup is slow), then I suggest you just create two problem instances: one will have the MDA and the other will have the meta-model. In your outer loop you can just call whichever one is appropriate.
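A minimal sketch of that pattern, using the current OpenMDAO API. The option name `use_metamodel` and the ExecComp stand-ins for the real MDA and MetaModel components are illustrative assumptions, not the asker's actual model:

```python
import openmdao.api as om

# Placeholders so the sketch runs; in practice these would be the
# expensive custom MDA component and a trained MetaModel with matching I/O.
def make_mda():
    return om.ExecComp('y = 2.0 * x')

def make_metamodel():
    return om.ExecComp('y = 2.0 * x')

class SwitchableGroup(om.Group):
    def initialize(self):
        # Hypothetical option that selects which component to instantiate.
        self.options.declare('use_metamodel', types=bool, default=False)

    def setup(self):
        comp = make_metamodel() if self.options['use_metamodel'] else make_mda()
        self.add_subsystem('analysis', comp, promotes=['*'])

# Two problems, each set up once; the outer loop calls whichever is appropriate.
p_full = om.Problem(model=SwitchableGroup(use_metamodel=False))
p_full.setup()
p_fast = om.Problem(model=SwitchableGroup(use_metamodel=True))
p_fast.setup()
```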

Sounds confusing to combine MetaModel and MDA problems. Having an MDA model suggests that the problem params and model are explicit, while the MetaModel suggests that they are implicit, which makes for two very different problems. Running two different problems conditionally seems contrary to the MDAO paradigm of a single system of nonlinear equations and could get very messy to implement, IMHO.
If the intent of the MetaModel is to refine the initial guess for the MDA, then perhaps the Component Implicit State concept in v1.7 is the built-in method to employ in the MDA problem, abandoning the MetaModel and thus reducing the problem to one set of equations? Beware that I have not tested the Component Implicit State method.
Otherwise, all OpenMDAO classes are just Python classes and can have their solve_nonlinear methods modified to include conditional logic. Perhaps your project could create a parent problem and a parent group which conditionally control the execution of the solver and the data flows as needed between the MetaModel and the custom MDA?
Your thoughts?

Related

Running a group as optimization in design and simulation in off-design mode

I checked the nested optimization code in the RevHack2020 repository. I want to implement nested optimization for a group. In the subproblem code, I saw that we can call run_driver() in the compute() method of an explicit component (here).
Can I implement run_driver() in group classes? (Since the compute() method is for components, I could not implement it in a group.)
Note: I plan to use nested optimization in a pyCycle "element", which inherits from the Group class. That is why I want to implement it in a group. Otherwise, I could change my model to an ExplicitComponent.
No, you can't implement any custom run methods inside of groups. The only place users are supposed to implement execution is inside components.
In the case you described, you would place the pyCycle models into a problem of their own. Then you embed that problem into a containing component.
I don't follow why you feel the need to implement it in a group.
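A minimal sketch of that subproblem pattern, under stated assumptions: the inner model here is a toy ExecComp rather than a pyCycle element, and the component and variable names are illustrative:

```python
import openmdao.api as om

class SubOptComp(om.ExplicitComponent):
    """Wraps an inner optimization: run_driver() is called from compute()."""

    def setup(self):
        self.add_input('a', val=1.0)
        self.add_output('y_opt', val=0.0)

        # Build and set up the inner problem once, not on every compute().
        self._prob = p = om.Problem()
        p.model.add_subsystem('comp', om.ExecComp('y = (x - a)**2'),
                              promotes=['*'])
        p.model.add_design_var('x', lower=-10.0, upper=10.0)
        p.model.add_objective('y')
        p.driver = om.ScipyOptimizeDriver(optimizer='SLSQP')
        p.setup()

    def compute(self, inputs, outputs):
        # Push the outer input down, run the inner optimization,
        # and pull the optimized objective back up.
        self._prob.set_val('a', inputs['a'])
        self._prob.run_driver()
        outputs['y_opt'] = self._prob.get_val('y')

outer = om.Problem()
outer.model.add_subsystem('subopt', SubOptComp(), promotes=['*'])
outer.setup()
outer.set_val('a', 2.0)
outer.run_model()
print(outer.get_val('y_opt'))  # ~0.0, since the inner opt drives x toward a
```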

Avoiding singularity in analysis - does OpenMDAO automatically enable 'fully-simultaneous' solution?

Turbulent boundary layer calculations break down at the point of flow separation when solved with a prescribed boundary layer edge velocity, ue, in what is called the direct method.
This can be alleviated by solving the system in a fully-simultaneous or quasi-simultaneous manner. Details about both methods are available here (https://www.rug.nl/research/portal/files/14407586/root.pdf), pages 38 onwards. Essentially, the fully-simultaneous method combines the inviscid and viscous equations into a single large system of equations, and solves them with Newton iteration.
I have currently implemented an inviscid panel solver entirely in ExplicitComponents. I intend to implement the boundary layer solver also entirely with ExplicitComponents. I am unsure whether coupling these two groups would then result in an execution procedure like the direct method, or whether it would work like the fully-simultaneous method. I note that in the OpenMDAO paper, it is stated that the components are solved "as a single nonlinear system of equations", and that the reformulation from explicit components to the implicit system is handled automatically by OpenMDAO.
Does this mean that if I couple my two analyses (again, consisting purely of ExplicitComponents) and set the group to solve with the Newton solver, I'll get a fully-simultaneous solution 'for free'? This seems too good to be true, as ultimately the component that integrates the boundary layer equations will have to take some prescribed ue as an input, and then will run into the singularity in the execution of its compute() method.
If doing the above would instead make it execute like the direct method and lead to the singularity, (briefly) what changes would I need to make to avoid it? Would it require defining the boundary layer components implicitly?
Despite seeming too good to be true, you can in fact change the structure of your system by changing out the top-level solver.
If you used a NonlinearBlockGS solver at the top, it would solve in the weak form. If you used a NewtonSolver at the top, it would solve as one large monolithic system. This property does indeed derive from the unique structure of how OpenMDAO stores things.
There are some caveats. I would guess that your panel code is implemented as a set of intermediate calculations broken up across several components. If that's the case, then the NewtonSolver will treat each intermediate variable as if it were its own state variable. In other words, you would have more than just delta and u_e as states, but all the intermediate calculations too.
This might be somewhat unstable (though it might work just fine, so try it!). You might need a hybrid between the weak and strong forms, which can be achieved via the solve_subsystems option on the NewtonSolver. This approach is called the hierarchical Newton method in section 5.1.2 of the OpenMDAO paper. It will do a sub-iteration of NLBGS for every top-level Newton iteration, which acts as a form of nonlinear preconditioner and can help stabilize the strong form. You can limit how many sub-iterations are done, and in your case you may want to use just 2 or 3 because of the risk of singularity.
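A minimal sketch of switching between the two forms. The two ExecComps below are hypothetical stand-ins for the coupled inviscid and viscous groups, and the variable names are illustrative:

```python
import openmdao.api as om

# Hypothetical stand-ins for the inviscid panel solver and the
# boundary-layer integrator, coupled through ue and delta.
model = om.Group()
model.add_subsystem('inviscid', om.ExecComp('ue = 1.0 + 0.1 * delta'),
                    promotes=['*'])
model.add_subsystem('viscous', om.ExecComp('delta = 0.5 * ue'),
                    promotes=['*'])

# Weak form would be: model.nonlinear_solver = om.NonlinearBlockGS()
# Strong (fully-simultaneous) form, with hierarchical sub-iterations:
model.nonlinear_solver = om.NewtonSolver(solve_subsystems=True)
model.nonlinear_solver.options['max_sub_solves'] = 3  # limit sub-iterations
model.linear_solver = om.DirectSolver()

prob = om.Problem(model=model)
prob.setup()
prob.run_model()
```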

Using ExternalCodeComp as the single comp and OpenMDAO concept

I am very much attracted to the idea of using OpenMDAO. However, I am not sure it is worthwhile to use OpenMDAO in an optimization scenario where I use an external code as a single component and nothing else.
Is there any difference between an implementation using an optimizer available in SciPy versus the aforementioned OpenMDAO implementation?
Or any difference between that and a similar approach in some other language, like the MATLAB optimization toolbox?
(Of course the way optimizers are implemented may differ, but I mean conceptually: am I taking advantage of OpenMDAO with this approach?)
As far as I have read in the articles, OpenMDAO is powerful in cases where multiple components 'interact' with each other and "global derivatives" are obtained?
Am I taking advantage of OpenMDAO by using a single ExternalCodeComp?
Using just a single ExternalCodeComp would not be using the full potential of OpenMDAO. There would still be some advantages, because the ExternalCodeComp handles a lot of file-wrapping details for you. Additionally, there are often details in an optimization, such as adding constraints, that will commonly require additional components. In that case you might use an ExecComp to add a few additional calculations.
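For instance, a runnable sketch of that setup; the ExecComp standing in for the external code, and all variable names, are illustrative assumptions:

```python
import openmdao.api as om

prob = om.Problem()
# 'extern' stands in for the real ExternalCodeComp; an ExecComp
# keeps this sketch self-contained and runnable.
prob.model.add_subsystem('extern', om.ExecComp('f = (x - 3.0)**2'),
                         promotes=['*'])
# A small ExecComp adds a derived constraint alongside the external code.
prob.model.add_subsystem('con', om.ExecComp('g = x - 1.0'), promotes=['*'])

prob.model.add_design_var('x', lower=-10.0, upper=10.0)
prob.model.add_objective('f')
prob.model.add_constraint('g', lower=0.0)

prob.driver = om.ScipyOptimizeDriver(optimizer='SLSQP')
prob.setup()
prob.run_driver()
print(prob.get_val('x'))  # -> ~3.0
```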
Lastly, using OpenMDAO would allow you to potentially grow your model in the future to include other disciplines.
If you are sure that you'll never do anything other than optimize the one external code, then OpenMDAO does reduce down to similar functionality to using the bare pyOptSparse, SciPy, or MATLAB optimizers. In this corner case, OpenMDAO doesn't bring a whole lot to the table, other than the ease of use of the ExternalCodeComp.

Best practice in converting from 0.X: variable trees

Is there a recommended best practice in converting variable trees from 0.X to 1.X? My intuition is to make variable trees into components, but I'm curious what the OpenMDAO team thinks.
We moved away from variable trees. Instead we just name the variables hierarchically, like "top:sub:subsub:x, top:sub:subsub:y".
Kilojoules,
I too was really upset with the elimination of variable trees, but I was much more upset with how they failed to integrate with OpenMDAO components and failed silently. Good riddance.
I have been experimenting with numpy.ndarray as a replacement for variable trees. See the Sellar example for details. Creating a multi-dimensional ndarray with field names seems to work well for a name-referenced data structure. Creating multidimensionality seems to require nesting of declarations, which is similar to variable-tree branches.
Note that a plain numpy array is not compatible with OpenMDAO, but a structured numpy.ndarray works well, since a structured array has its size, shape, data types, etc. specified in an internal dictionary. Better than variable trees, the multi-dimensional ndarray provides multiple "views" of the same relationships with one (massive) global declaration, which can be instantiated as a param within a component. Populating the ndarray instance is done by field-name-referenced assignment instead of some iteration. It is more complicated to declare, as ALL the information about the structured array must be provided to work within OpenMDAO. Also, numpy.ndarray is for rigidly fixed array sizes and relationships, just like variable trees.
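A small sketch of the idea; the field names and shapes are illustrative, not from the Sellar example:

```python
import numpy as np

# A structured dtype plays the role of a variable-tree "branch";
# nesting dtypes gives sub-branches.
leaf = np.dtype([('x', float), ('y', float, (3,))])
tree = np.dtype([('aero', leaf), ('struct', leaf)])

data = np.zeros(1, dtype=tree)
data['aero']['x'] = 1.5                # field-name-referenced assignment
data['struct']['y'] = [1.0, 2.0, 3.0]  # no iteration over the "tree" needed
```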
I am not advocating this concept for every application, but do take a look for your situation.
Silvia

Using generic functions of R, when and why?

I'm developing a major upgrade to an R package, and as part of the changes I want to start using S3 methods so I can use the generic plot, summary and print functions. But I'm not totally sure I understand why and when to use generic functions in general.
For example, I currently have a function called logLikSSM, which computes the log-likelihood of a state space model. Instead of using this function, I could make a function logLik.SSM or something like that, as there is a generic function logLik in R. The benefit would be that logLik is shorter to write than logLikSSM, but is there really any other point in this?
A similar case: there is a generic function called simulate in the stats package, so in theory I could use that instead of simulateSSM. But the description of the simulate function says it is used to "Simulate Responses", whereas my function actually simulates the hidden states, so it really doesn't fit the description of the simulate function. So probably in this case I shouldn't use the generic function, right?
I apologize if this question is too vague for here.
The advantages of creating methods for generics from the core of R include:
Ease of Use. Users of your package already familiar with those generics will have less to remember, making your package easier to use. They might even be able to do a certain amount without reading the documentation. If you come up with your own names, they must discover and remember new names, which is an added cognitive burden.
Leverage Existing Functionality. Any other function that makes use of a generic you provide methods for can then automatically use yours as well; otherwise, it would have to be changed. For example, AIC uses logLik.
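For instance, a minimal sketch, assuming a hypothetical "SSM" class whose fitted objects carry a log-likelihood and a parameter count:

```r
# A logLik method for the hypothetical "SSM" class.
logLik.SSM <- function(object, ...) {
  structure(object$loglik, df = object$npar, class = "logLik")
}

fit <- structure(list(loglik = -123.4, npar = 5), class = "SSM")
logLik(fit)  # dispatches to logLik.SSM
AIC(fit)     # works automatically, because AIC calls logLik
```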
A disadvantage is that the generic involves an extra level of dispatch, and if logLik is in the inner loop of an optimization there could be a performance impact (although possibly not a material one). In that case you could compare the performance of calling the generic vs. calling the method directly, and use the latter if it makes a significant difference.
If your function has a completely different purpose than the generic in the core of R, it might be more confusing than helpful, so in that case you might not create a method but keep your own function name.
You might want to read the zoo Design manual (see link to zoo Design under Vignettes near the bottom of that page) which discusses the design ideas that went into the zoo package. These include the idea being discussed here.
EDIT: Added disadvantages.
Good question.
I'll split your Question into two parts; here's the first one:
[I]s there really any other point in [making functions generic]?
Well, this pattern is usually invoked when the developer doesn't know the object class of every object he/she expects a user to pass in to the method under consideration.
Because of this uncertainty, this design pattern (which is called overloading in many other languages) is invoked, which requires R to evaluate the object's class and then dispatch the object to the appropriate method for that type.
The second part of your Question: [i]n this case I shouldn't use [the generic function], right?
To try to give you an answer useful beyond the detail of your Question, consider what happens to the original function when you call setGeneric, passing that function in:
the original function body is replaced with code that performs a top-level dispatch based on the type of object passed in. The original body slides down one level and becomes the default method that the new (generic) function dispatches to.
showMethods() will let you see all of the methods called by the newly created dispatch function (the generic function).
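A small sketch of those mechanics, with a hypothetical function name:

```r
fahrenheit <- function(celsius) celsius * 9/5 + 32

setGeneric("fahrenheit")   # the original body becomes the default method
showMethods("fahrenheit")  # lists the methods the new generic dispatches to
fahrenheit(100)            # still works, via the default method: 212
```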
And now for one huge disadvantage:
Ease of MISUse:
Users of your package already familiar with those generics might do a certain amount without reading the documentation.
And therein lies the fallacy that components, reusable objects, services, etc. are an easy panacea for all software challenges.
And why the overwhelming majority of software is buggy, bloated, and operates inconsistently, with little hope of tech support being able to diagnose your problem.
There WAS a reason for static linking and small executables back in the day. But this generation of "code now, get paid now, debug later if ever, before the layoffs/IPO come" has no memory of the days when code actually worked very reliably and installation/integration didn't require $200/hr Big 4 consultants, or hackers who spend a week trying to get some "simple" open source product installed and productively running.
But if you want to continue the tradition of writing ever shorter function/method names, be my guest.
