OpenMDAO: unexpected behaviour of ParallelGroup

In the example below, two ParallelGroups are set up to require different numbers of procs to compute their serial sub-components. It seems that a group that should not fill up the global comm in fact does, and it executes redundant/repeated cases. In previous versions the unassigned procs would not execute their sub-components. It runs without errors, though.
from mpi4py import MPI
import openmdao.api as om


class Exec(om.ExplicitComponent):
    def __init__(self, val=-10):
        super().__init__()
        self.val = val

    def setup(self):
        self.add_output('y', 0.)

    def compute(self, inputs, outputs):
        print('HELLO from Exec1, %s, global rank %i, val=%f' % (self.name, MPI.COMM_WORLD.rank, self.val))
        outputs['y'] = self.val


class Exec2(om.ExplicitComponent):
    def __init__(self, val):
        super().__init__()
        self.val = val

    def setup(self):
        self.add_input('x', 0.)
        self.add_output('y', 0.)

    def compute(self, inputs, outputs):
        print('HELLO from Exec2, %s, global rank %i, val=%f' % (self.name, MPI.COMM_WORLD.rank, self.val))
        outputs['y'] = self.val * inputs['x']


class Summer(om.ExplicitComponent):
    def __init__(self, ncase):
        super().__init__()
        self.ncase = ncase

    def setup(self):
        for i in range(self.ncase):
            self.add_input('y%i' % i, 0.)
        self.add_output('sum', 0.)

    def compute(self, inputs, outputs):
        for i in range(self.ncase):
            outputs['sum'] += inputs['y%i' % i]


p = om.Problem()

ncase = 3
par1 = p.model.add_subsystem('par1', om.ParallelGroup())
p.model.add_subsystem('summer1', Summer(ncase))
for i in range(ncase):
    par1.add_subsystem('ex_%i' % i, Exec(val=float(i)), min_procs=1)
    p.model.connect('par1.ex_%i.y' % i, 'summer1.y%i' % i)

ncase2 = 4
par2 = p.model.add_subsystem('par2', om.ParallelGroup())
p.model.add_subsystem('summer2', Summer(ncase2))
for i in range(ncase2):
    par2.add_subsystem('ex_%i' % i, Exec2(float(i)), min_procs=1)
    p.model.connect('summer1.sum', 'par2.ex_%i.x' % i)
    p.model.connect('par2.ex_%i.y' % i, 'summer2.y%i' % i)

p.setup()
p.run_model()

According to the docs, the processor allocation is done mainly based on proc_weight, using a round-robin style of allocation.
The algorithm used for the allocation starts, assuming that the number of processes is greater than or equal to the number of subsystems, by assigning the min_procs for each subsystem. It then adds any remaining processes to subsystems based on their weights, being careful not to exceed their specified max_procs, if any.
So the behavior is as expected. OpenMDAO seeks to allocate all the given processors. It is then up to the component author to use them wisely.
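For reference, these allocation hints are all passed as keyword arguments to add_subsystem. Here is a minimal sketch of how they are specified (the subsystem names are just placeholders; min_procs, max_procs, and proc_weight follow the add_subsystem signature):

import openmdao.api as om

p = om.Problem()
par = p.model.add_subsystem('par', om.ParallelGroup())

# min_procs/max_procs bound how many procs each subsystem may receive;
# proc_weight biases the round-robin hand-out of any leftover procs.
par.add_subsystem('heavy', om.ExecComp('y = 2.0*x'), min_procs=2, proc_weight=2.0)
par.add_subsystem('light', om.ExecComp('y = 3.0*x'), min_procs=1, max_procs=1, proc_weight=0.5)

p.setup()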
If you really want to, you could modify the component so that any rank other than rank 0 did nothing, but I would not recommend that.
As an aside, you should not reference COMM_WORLD like that. Each component has a local comm that you should use instead. Here is a modified version of your example:
from mpi4py import MPI
import openmdao.api as om


class Exec(om.ExplicitComponent):
    def __init__(self, val=-10):
        super().__init__()
        self.val = val

    def setup(self):
        self.add_output('y', 0.)

    def compute(self, inputs, outputs):
        print('HELLO from Exec1, %s, global rank %i, comm size %i, val=%f' % (self.pathname, self.comm.rank, self.comm.size, self.val))
        outputs['y'] = self.val


class Exec2(om.ExplicitComponent):
    def __init__(self, val):
        super().__init__()
        self.val = val

    def setup(self):
        self.add_input('x', 0.)
        self.add_output('y', 0.)

    def compute(self, inputs, outputs):
        print('HELLO from Exec2, %s, global rank %i, comm size %i, val=%f' % (self.pathname, self.comm.rank, self.comm.size, self.val))
        outputs['y'] = self.val * inputs['x']


class Summer(om.ExplicitComponent):
    def __init__(self, ncase):
        super().__init__()
        self.ncase = ncase

    def setup(self):
        for i in range(self.ncase):
            self.add_input('y%i' % i, 0.)
        self.add_output('sum', 0.)

    def compute(self, inputs, outputs):
        for i in range(self.ncase):
            outputs['sum'] += inputs['y%i' % i]


p = om.Problem()

ncase = 2
par1 = p.model.add_subsystem('par1', om.ParallelGroup())
p.model.add_subsystem('summer1', Summer(ncase))
for i in range(ncase):
    par1.add_subsystem('ex_%i' % i, Exec(val=float(i)), min_procs=1)
    p.model.connect('par1.ex_%i.y' % i, 'summer1.y%i' % i)

ncase2 = 4
par2 = p.model.add_subsystem('par2', om.ParallelGroup())
p.model.add_subsystem('summer2', Summer(ncase2))
for i in range(ncase2):
    par2.add_subsystem('ex_%i' % i, Exec2(float(i)), min_procs=1, max_procs=1)
    p.model.connect('summer1.sum', 'par2.ex_%i.x' % i)
    p.model.connect('par2.ex_%i.y' % i, 'summer2.y%i' % i)

p.setup()
p.run_model()
Running that on 4 processors gives:
HELLO from Exec1, par1.ex_0, global rank 0, comm size 2, val=0.000000
HELLO from Exec1, par1.ex_0, global rank 1, comm size 2, val=0.000000
HELLO from Exec1, par1.ex_1, global rank 0, comm size 2, val=1.000000
HELLO from Exec1, par1.ex_1, global rank 1, comm size 2, val=1.000000
HELLO from Exec2, par2.ex_3, global rank 0, comm size 1, val=3.000000
HELLO from Exec2, par2.ex_0, global rank 0, comm size 1, val=0.000000
HELLO from Exec2, par2.ex_1, global rank 0, comm size 1, val=1.000000
HELLO from Exec2, par2.ex_2, global rank 0, comm size 1, val=2.000000
So you see that in par1 each component is given a comm of size 2. This is the duplication that you call wasteful. I argue that it is not wasteful though, due to details of how you have set up the components. Both Exec and Exec2 are serial components (i.e. they do not have self.options['distributed'] = True). OpenMDAO always duplicates any serial component across all ranks in the local group comm that owns that component. The value of this duplication is lower MPI communication overhead. Since the value is computed locally, you can do a local transfer to any other serial components on that proc (rather than having to broadcast from the root).
If you prefer not to have the local value used, you could choose to set src_indices=[0] in the connect statement yourself. Then you would force OpenMDAO to broadcast from the root of that comm. The duplicate calculation would still occur, though. It should not waste any time, since the duplicate proc would otherwise have been sitting idle while the root proc did the calculations. You can argue that it wastes some electricity because of the extra calculations. In most cases this cost would be trivially small, but if you are concerned about it, you can change the components to be distributed and set the variable sizes to 0 on all ranks except the root. Then you can set things up with no duplicate calculations.
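For concreteness, that would look like the following change to the connect calls for the first ParallelGroup in the example above (a sketch only; src_indices is a standard argument of Group.connect):

for i in range(ncase):
    par1.add_subsystem('ex_%i' % i, Exec(val=float(i)), min_procs=1)
    # src_indices=[0] forces the transfer to come from the root of the comm
    # that owns each ex_i, instead of using the locally duplicated value.
    p.model.connect('par1.ex_%i.y' % i, 'summer1.y%i' % i, src_indices=[0])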
Our experience is that most of the time, communication overhead is what you want to avoid. This is why we designed it to duplicate, but you do have the freedom to work around it if you like.

Related

openMDAO: Optimization terminates successfully after 1 iteration, not at optimal point

I am trying to make a toy problem to learn a bit about the OpenMDAO software before applying the lessons to a larger problem. I have a problem set up so that the objective function should be minimized when both design variables are at a minimum. However both values stay at their originally assigned values despite receiving an 'Optimization terminated successfully' message.
I started by writing the code based on the Sellar problem examples ( http://openmdao.org/twodocs/versions/latest/basic_guide/sellar.html ). Additionally, I came across a Stack Overflow question that seems to be the same problem, but the solution there doesn't work ( OpenMDAO: Solver converging to non-optimal point ). (When I add the declare_partials line to the IntermediateCycle or ScriptForTest I receive an error saying either that self is not defined, or that the object has no attribute declare_partials.)
This is the script that runs everything:
import openmdao.api as om
from IntermediateForTest import IntermediateCycle
prob = om.Problem()
prob.model = IntermediateCycle()
prob.driver = om.ScipyOptimizeDriver()
#prob.driver.options['optimizer'] = 'SLSQP'
#prob.driver.options['tol'] = 1e-9
prob.model.add_design_var('n_gear', lower=2, upper=6)
prob.model.add_design_var('stroke', lower=0.0254, upper=1)
prob.model.add_objective('objective')
prob.setup()
prob.model.approx_totals()
prob.run_driver()
print(prob['objective'])
print(prob['cycle.f1.total_weight'])
print(prob['cycle.f1.stroke'])
print(prob['cycle.f1.n_gear'])
It calls an intermediate group, as per the Sellar example:
import openmdao.api as om
from FunctionsForTest import FunctionForTest1
from FunctionsForTest import FunctionForTest2


class IntermediateCycle(om.Group):
    def setup(self):
        indeps = self.add_subsystem('indeps', om.IndepVarComp(), promotes=['*'])
        indeps.add_output('n_gear', 3.0)
        indeps.add_output('stroke', 0.2)
        indeps.add_output('total_weight', 26000.0)

        cycle = self.add_subsystem('cycle', om.Group())
        cycle.add_subsystem('f1', FunctionForTest1())
        cycle.add_subsystem('f2', FunctionForTest2())
        cycle.connect('f1.landing_gear_weight', 'f2.landing_gear_weight')
        cycle.connect('f2.total_weight', 'f1.total_weight')

        self.connect('n_gear', 'cycle.f1.n_gear')
        self.connect('stroke', 'cycle.f1.stroke')

        #cycle.nonlinear_solver = om.NonlinearBlockGS()
        self.nonlinear_solver = om.NonlinearBlockGS()

        self.add_subsystem('objective', om.ExecComp('objective = total_weight', objective=26000, total_weight=26000), promotes=['objective', 'total_weight'])
Finally there is a file with the two functions in it:
import openmdao.api as om


class FunctionForTest1(om.ExplicitComponent):
    def setup(self):
        self.add_input('stroke', val=0.2)
        self.add_input('n_gear', val=3.0)
        self.add_input('total_weight', val=26000)
        self.add_output('landing_gear_weight')
        self.declare_partials('*', '*', method='fd')

    def compute(self, inputs, outputs):
        stroke = inputs['stroke']
        n_gear = inputs['n_gear']
        total_weight = inputs['total_weight']
        outputs['landing_gear_weight'] = total_weight * 0.1 + 100*stroke * n_gear ** 2


class FunctionForTest2(om.ExplicitComponent):
    def setup(self):
        self.add_input('landing_gear_weight')
        self.add_output('total_weight')
        self.declare_partials('*', '*', method='fd')

    def compute(self, inputs, outputs):
        landing_gear_weight = inputs['landing_gear_weight']
        outputs['total_weight'] = 26000 + landing_gear_weight
It reports that the optimization terminated successfully:
Optimization terminated successfully. (Exit mode 0)
Current function value: 26000.0
Iterations: 1
Function evaluations: 1
Gradient evaluations: 1
Optimization Complete
-----------------------------------
[26000.]
[29088.88888889]
[0.2]
[3.]
However, the value of the function to optimize hasn't changed. It seems that it converges the loop that estimates the weight, but doesn't vary the design variables to find the optimum.
It arrives at 29088.9, which is correct for a value of n_gear=3 and stroke=0.2, but if both are decreased to the bounds of n_gear=2 and stroke=0.0254, it would arrive at a value of ~28900, ~188 less.
Any advice, links to tutorials, or solutions would be appreciated.
Let's take a look at the N2 diagram of the model as you provided it:
I've highlighted the connection from indeps.total_weight to objective.total_weight. So this means that your computed total_weight value is not being passed to your objective output at all. Instead you have a constant value being set there.
Now, taking a small step back, let's look at the computation of the objective itself:
self.add_subsystem('objective', om.ExecComp('objective = total_weight', objective=26000, total_weight=26000), promotes=['objective', 'total_weight'])
So this is an odd use of the ExecComp, because it just sets the output to exactly the input. It does nothing, and isn't really needed at all.
I believe what you wanted was simply to make the objective be the output f2.total_weight. When I do that (and make a few additional small cleanups to your code, like removing the unnecessary ExecComp), I get the correct answer in 2 major iterations of the optimizer:
import openmdao.api as om


class FunctionForTest1(om.ExplicitComponent):
    def setup(self):
        self.add_input('stroke', val=0.2)
        self.add_input('n_gear', val=3.0)
        self.add_input('total_weight', val=26000)
        self.add_output('landing_gear_weight')
        self.declare_partials('*', '*', method='fd')

    def compute(self, inputs, outputs):
        stroke = inputs['stroke']
        n_gear = inputs['n_gear']
        total_weight = inputs['total_weight']
        outputs['landing_gear_weight'] = total_weight * 0.1 + 100*stroke * n_gear ** 2


class FunctionForTest2(om.ExplicitComponent):
    def setup(self):
        self.add_input('landing_gear_weight')
        self.add_output('total_weight')
        self.declare_partials('*', '*', method='fd')

    def compute(self, inputs, outputs):
        landing_gear_weight = inputs['landing_gear_weight']
        outputs['total_weight'] = 26000 + landing_gear_weight


class IntermediateCycle(om.Group):
    def setup(self):
        indeps = self.add_subsystem('indeps', om.IndepVarComp(), promotes=['*'])
        indeps.add_output('n_gear', 3.0)
        indeps.add_output('stroke', 0.2)

        cycle = self.add_subsystem('cycle', om.Group())
        cycle.add_subsystem('f1', FunctionForTest1())
        cycle.add_subsystem('f2', FunctionForTest2())
        cycle.connect('f1.landing_gear_weight', 'f2.landing_gear_weight')
        cycle.connect('f2.total_weight', 'f1.total_weight')

        self.connect('n_gear', 'cycle.f1.n_gear')
        self.connect('stroke', 'cycle.f1.stroke')

        #cycle.nonlinear_solver = om.NonlinearBlockGS()
        self.nonlinear_solver = om.NonlinearBlockGS()


prob = om.Problem()
prob.model = IntermediateCycle()

prob.driver = om.ScipyOptimizeDriver()
#prob.driver.options['optimizer'] = 'SLSQP'
#prob.driver.options['tol'] = 1e-9

prob.model.add_design_var('n_gear', lower=2, upper=6)
prob.model.add_design_var('stroke', lower=0.0254, upper=1)
prob.model.add_objective('cycle.f2.total_weight')
prob.model.approx_totals()

prob.setup()

prob.model.nl_solver.options['iprint'] = 2

prob.run_driver()

print(prob['cycle.f1.total_weight'])
print(prob['cycle.f2.total_weight'])
print(prob['cycle.f1.stroke'])
print(prob['cycle.f1.n_gear'])
gives:
Optimization terminated successfully. (Exit mode 0)
Current function value: 28900.177777779667
Iterations: 2
Function evaluations: 2
Gradient evaluations: 2
Optimization Complete
-----------------------------------
[28900.1777778]
[28900.17777778]
[0.0254]
[2.]
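As a side note, that converged value can be checked against the analytic fixed point of the two components (a quick sketch, independent of OpenMDAO): the coupling satisfies total_weight = 26000 + 0.1*total_weight + 100*stroke*n_gear**2, so total_weight = (26000 + 100*stroke*n_gear**2) / 0.9.

def converged_total_weight(stroke, n_gear):
    # Fixed point of the f1/f2 coupling:
    #   landing_gear_weight = 0.1*total_weight + 100*stroke*n_gear**2
    #   total_weight = 26000 + landing_gear_weight
    return (26000.0 + 100.0 * stroke * n_gear ** 2) / 0.9

print(converged_total_weight(0.2, 3.0))     # ~29088.9, at the starting design
print(converged_total_weight(0.0254, 2.0))  # ~28900.2, at the lower bounds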

How to update connection sizes in a reconfigurable model in OpenMDAO 2.5.0?

With reconfigurable model execution it is possible to resize inputs and outputs of components. How are the connections updated when reconfigured outputs and inputs are connected?
In the example below, the output c2.y and the input c3.y are resized at each model run. This output and input are supposed to be connected, as shown in the N2 chart. However, after the reconfiguration the connection size does not seem to be updated automatically, and it throws the following error:
ValueError: The source and target shapes do not match or are ambiguous for the connection 'c2.y' to 'c3.y'. Expected (1,) but got (2,).
I included 3 tests below: one with a promoted connection, one with an absolute connection, and a last one with reconfiguration but without the connection (which works).
One remaining option would be to declare the connection in the parent group of the comps, which I have not tried yet.
The tests:
Promoted connection
Absolute connection
No connection
Reconfigurable component classes and tests:
from __future__ import division
import logging
import numpy as np
import unittest

from openmdao.api import Problem, Group, IndepVarComp, ExplicitComponent
from openmdao.utils.assert_utils import assert_rel_error


class ReconfComp(ExplicitComponent):

    def initialize(self):
        self.size = 1
        self.counter = 0

    def reconfigure(self):
        logging.info('reconf started {}'.format(self.pathname))
        self.counter += 1
        logging.info('reconf ended {}'.format(self.pathname))
        if self.counter % 2 == 0:
            self.size += 1
            return True
        else:
            return False

    def setup(self):
        logging.info('setup started {}'.format(self.pathname))
        self.add_input('x', val=1.0)
        self.add_output('y', val=np.zeros(self.size))
        # All derivatives are defined.
        self.declare_partials(of='*', wrt='*')
        logging.info('setup ended {}'.format(self.pathname))

    def compute(self, inputs, outputs):
        logging.info('compute started {}'.format(self.pathname))
        outputs['y'] = 2 * inputs['x']
        logging.info('compute ended {}'.format(self.pathname))

    def compute_partials(self, inputs, jacobian):
        jacobian['y', 'x'] = 2 * np.ones((self.size, 1))


class ReconfComp2(ReconfComp):
    """The size of the y input changes the same way as in ReconfComp."""

    def setup(self):
        logging.info('setup started {}'.format(self.pathname))
        self.add_input('y', val=np.zeros(self.size))
        self.add_output('f', val=np.zeros(self.size))
        # All derivatives are defined.
        self.declare_partials(of='*', wrt='*')
        logging.info('setup ended {}'.format(self.pathname))

    def compute(self, inputs, outputs):
        logging.info('compute started {}'.format(self.pathname))
        outputs['f'] = 2 * inputs['y']
        logging.info('compute ended {}'.format(self.pathname))

    def compute_partials(self, inputs, jacobian):
        jacobian['f', 'y'] = 2 * np.ones((self.size, 1))


class TestReconfConnections(unittest.TestCase):

    def test_reconf_comp_promoted_connections(self):
        p = Problem()
        p.model = Group()
        p.model.add_subsystem('c1', IndepVarComp('x', 1.0), promotes_outputs=['x'])
        p.model.add_subsystem('c2', ReconfComp(), promotes_inputs=['x'], promotes_outputs=['y'])
        p.model.add_subsystem('c3', ReconfComp2(), promotes_inputs=['y'],
                              promotes_outputs=['f'])
        p.setup()
        p['x'] = 3.

        # First run the model once; counter = 1, size of y = 1
        p.run_model()
        totals = p.compute_totals(wrt=['x'], of=['y'])
        assert_rel_error(self, p['x'], 3.0)
        assert_rel_error(self, p['y'], 6.0)
        assert_rel_error(self, totals['y', 'x'], [[2.0]])
        print(p['x'], p['y'], totals['y', 'x'].flatten())

        # Run the model again, which will trigger reconfiguration; counter = 2, size of y = 2
        p.run_model()  # FIXME Fails with ValueError

    def test_reconf_comp_connections(self):
        p = Problem()
        p.model = Group()
        p.model.add_subsystem('c1', IndepVarComp('x', 1.0), promotes_outputs=['x'])
        p.model.add_subsystem('c2', ReconfComp(), promotes_inputs=['x'])
        p.model.add_subsystem('c3', ReconfComp2(), promotes_outputs=['f'])
        p.model.connect('c2.y', 'c3.y')
        p.setup()
        p['x'] = 3.

        # First run the model once; counter = 1, size of y = 1
        p.run_model()

        # Run the model again, which will trigger reconfiguration; counter = 2, size of y = 2
        p.run_model()  # FIXME Fails with ValueError

    def test_reconf_comp_not_connected(self):
        p = Problem()
        p.model = Group()
        p.model.add_subsystem('c1', IndepVarComp('x', 1.0), promotes_outputs=['x'])
        p.model.add_subsystem('c2', ReconfComp(), promotes_inputs=['x'])
        p.model.add_subsystem('c3', ReconfComp2(), promotes_outputs=['f'])
        # c2.y not connected to c3.y
        p.setup()
        p['x'] = 3.

        # First run the model once; counter = 1, size of y = 1
        p.run_model()

        # Run the model again, which will trigger reconfiguration; counter = 2, size of y = 2
        fail, _, _ = p.run_model()
        self.assertFalse(fail)


if __name__ == '__main__':
    unittest.main()
UPDATE:
It seems that in Group._var_abs2meta only the source size is updated, but not the target. The setup of the connections starts before the setup of the parent group or of the other component is called.
UPDATE 2:
This happens with the default NonlinearRunOnce solver; with a NewtonSolver or NonlinearBlockGS there is no error, but the variable sizes also don't change.
As of OpenMDAO V2.5, reconfigurable model variables are not an officially supported feature in the framework. The bare bones of the capability have been in the code since that research was done, but it wasn't high enough priority for us to finalize. A recent major refactor in V2.4 re-worked how some underlying data structures worked and must have broken this functionality.
It is on our development priority list to get this working again, but it's not very high on that list. We focus development mainly on features that have a direct in-house application, and we don't have one of those yet.
If you could provide a decently complete set of tests for it, we could take a look at getting the functionality working.

Splitting up connections between groups

I would like to know the best way to split up the connection command. I have two groups that I want to be modular: an inner group and an outer group. I want the inner group to be a kind of black box, so that I can switch out or change the inner group without changing all the connections in the outer group. I just want the outer group to have to know the inputs and outputs of the inner group. For example:
import numpy as np
from openmdao.api import Group, Problem, Component, IndepVarComp, ExecComp


class C(Component):
    def __init__(self, n):
        super(C, self).__init__()
        self.add_param('array_element', shape=1)
        self.add_output('new_element', shape=1)

    def solve_nonlinear(self, params, unknowns, resids):
        unknowns['new_element'] = params['array_element']*2.0


class MUX(Component):
    def __init__(self, n):
        super(MUX, self).__init__()
        for i in range(n):
            self.add_param('new_element' + str(i), shape=1)
        self.add_output('new_array', shape=n)
        self.n = n

    def solve_nonlinear(self, params, unknowns, resids):
        new_array = np.zeros(n)
        for i in range(n):
            new_array[i] = params['new_element' + str(i)]
        unknowns['new_array'] = new_array


class GroupInner(Group):
    def __init__(self, n):
        super(GroupInner, self).__init__()
        for i in range(n):
            self.add('c'+str(i), C(n))
            self.connect('array', 'c'+str(i) + '.array_element', src_indices=[i])
            self.connect('c'+str(i)+'.new_element', 'new_element'+str(i))
        self.add('mux', MUX(n), promotes=['*'])


class GroupOuter(Group):
    def __init__(self, n):
        super(GroupOuter, self).__init__()
        self.add('array', IndepVarComp('array', np.zeros(n)), promotes=['*'])
        self.add('inner', GroupInner(n), promotes=['new_array'])
        for i in range(n):
            # self.connect('array', 'inner.c'+str(i) + '.array_element', src_indices=[i])
            self.connect('array', 'inner.array', src_indices=[i])


n = 3
p = Problem()
p.root = GroupOuter(n)
p.setup(check=False)

p['array'] = np.ones(n)

p.run()

print p['new_array']
When I run the code I get the error that:
NameError: Source 'array' cannot be connected to target 'c0.array_element': 'array' does not exist.
To try to solve this I made 'array' an IndepVarComp in the GroupInner group. However, when I do this I get the error:
NameError: Source 'array' cannot be connected to target 'inner.array': Target must be a parameter but 'inner.array' is an unknown.
I know that if I just make the full connection, self.connect('array', 'inner.c'+str(i) + '.array_element', src_indices=[i]), then it will work. But as I said, I want GroupInner to be a kind of black box where I don't know what groups or components are in it. I also can't just promote everything, because the array_elements are different. Is it possible to do this, or do you have to do the entire connection in one command?
I have two answers to your question. First I'll get the problem working as you specified it. Second, I'll suggest a modification that I think might be more efficient for some applications of this model structure.
First, the main issue with the problem as you specified it was the following line:
self.connect('array', 'c'+str(i) + '.array_element', src_indices=[i])
There simply isn't an output or state named array anywhere inside the Inner group, so that connection isn't going to work. You did create a variable called 'array' in the Outer group, but you can't issue a connection to it from inside the Inner definition because it's not available in that scope. To make it work the way you've specified, the simplest way would be to do the following:
class GroupInner(Group):
    def __init__(self, n):
        super(GroupInner, self).__init__()
        for i in range(n):
            self.add('c'+str(i), C(n))
            self.connect('c%d.new_element'%i, 'new_element'+str(i))
        self.add('mux', MUX(n), promotes=['*'])


class GroupOuter(Group):
    def __init__(self, n):
        super(GroupOuter, self).__init__()
        self.add('array', IndepVarComp('array', np.zeros(n)), promotes=['*'])
        self.add('inner', GroupInner(n), promotes=['new_array'])
        for i in range(n):
            self.connect('array', 'inner.c%d.array_element'%i, src_indices=[i])
Here is an alternate approach that will reduce the number of variables and components in your model, which will help reduce setup times if n grows large: use an actual distributed component and partition the array using the MPI comm. This has some nice properties besides the setup cost savings, because it also allows you to scale your calculations more flexibly and improves efficiency when you run in serial. This solution works well if your model really would have multiple c instances that all do the same thing and the process can simply be vectorized via numpy.
import numpy as np

from openmdao.api import Group, Problem, Component, IndepVarComp
from openmdao.util.array_util import evenly_distrib_idxs

from openmdao.core.mpi_wrap import MPI

if MPI:
    # if you called this script with 'mpirun', then use the petsc data passing
    from openmdao.core.petsc_impl import PetscImpl as impl
else:
    # if you didn't use `mpirun`, then use the numpy data passing
    from openmdao.api import BasicImpl as impl


class C(Component):
    def __init__(self, n):
        super(C, self).__init__()
        self.add_param('in_array', shape=n)
        self.add_output('new_array', shape=n)
        self.n = n

    def get_req_procs(self):
        """
        min/max number of procs that this component can use
        """
        return (1, self.n)

    #NOTE: This needs to be setup_distrib_idx for <= version 1.5.0
    def setup_distrib(self):
        comm = self.comm
        rank = comm.rank

        # NOTE: evenly_distrib_idxs is a helper function to split the array
        #       up as evenly as possible
        sizes, offsets = evenly_distrib_idxs(comm.size, self.n)
        local_size, local_offset = sizes[rank], offsets[rank]
        self.local_size = int(local_size)

        start = local_offset
        end = local_offset + local_size

        self.set_var_indices('in_array', val=np.zeros(local_size, float),
                             src_indices=np.arange(start, end, dtype=int))
        self.set_var_indices('new_array', val=np.zeros(local_size, float),
                             src_indices=np.arange(start, end, dtype=int))

    def solve_nonlinear(self, params, unknowns, resids):
        unknowns['new_array'] = params['in_array']*2.0
        print "computing new_array: ", unknowns['new_array']


class GroupInner(Group):
    def __init__(self, n):
        super(GroupInner, self).__init__()
        self.add('c', C(n), promotes=['new_array', 'in_array'])


class GroupOuter(Group):
    def __init__(self, n):
        super(GroupOuter, self).__init__()
        self.add('array', IndepVarComp('array', np.zeros(n)), promotes=['*'])
        self.add('inner', GroupInner(n), promotes=['new_array',])
        self.connect('array', 'inner.in_array')


n = 3
p = Problem(impl=impl)
p.root = GroupOuter(n)
p.setup(check=False)

p['array'] = np.ones(n)

p.run()

print p['new_array']

OpenMDAO: unit conversion with pass_by_obj

Is unit conversion with pass_by_obj supported in OpenMDAO 1.4? I have a small repro case:
from openmdao.api import Component, Problem, Group, IndepVarComp

pass_by_obj = True


class PassByObjParaboloid(Component):
    def __init__(self):
        super(PassByObjParaboloid, self).__init__()
        self.fd_options['force_fd'] = True
        self.add_param('x', val=1.0, pass_by_obj=pass_by_obj, units='mm')
        self.add_output('f_xy', val=0.0)

    def solve_nonlinear(self, params, unknowns, resids):
        print params['x']
        assert params['x'] == 1000.0
        unknowns['f_xy'] = params['x']

    def linearize(self, params, unknowns, resids):
        raise Exception()


top = Problem()
root = top.root = Group()
root.add('p1', IndepVarComp('x', 1.0, pass_by_obj=pass_by_obj, units='m'))
root.add('p', PassByObjParaboloid())
root.connect('p1.x', 'p.x')
top.setup()
top.run()
With pass_by_obj=True, the assert fails. top.setup() reports:
Unit Conversions
p1.x -> p.x : m -> mm
So I'd expect the unit conversion to be done.
OpenMDAO currently does not support automatic unit conversions for pass_by_obj variables. When designing OpenMDAO, we didn't intend for floating point data to be transferred using pass_by_obj. We only added pass_by_obj to handle other kinds of variables. We should fix the diagnostic output of setup so that it doesn't list unit conversions that don't actually happen. I'll put a story in for that.
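One possible workaround in the meantime (a sketch only; the class name PassByObjParaboloidManual is just illustrative) is to leave the units metadata off the pass_by_obj param and apply the conversion factor yourself inside the component, or simply to drop pass_by_obj for floating point data so the normal unit conversion machinery applies:

from openmdao.api import Component

class PassByObjParaboloidManual(Component):
    """Variant of the repro component that does the m -> mm scaling itself."""
    def __init__(self):
        super(PassByObjParaboloidManual, self).__init__()
        self.fd_options['force_fd'] = True
        # No units metadata here: pass_by_obj hands the object over unchanged.
        self.add_param('x', val=1.0, pass_by_obj=True)
        self.add_output('f_xy', val=0.0)

    def solve_nonlinear(self, params, unknowns, resids):
        x_mm = params['x'] * 1000.0  # manual m -> mm conversion
        unknowns['f_xy'] = x_mm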

Check Partial Derivatives with pass_by_obj

I have a component with an input that is an int, so I am setting pass_by_obj=True. However, when I check derivatives with check_partial_derivatives(), it throws this error:
data = prob.check_partial_derivatives(out_stream=sys.stdout)
File "/usr/local/lib/python2.7/site-packages/openmdao/core/problem.py", line 1711, in check_partial_derivatives
jac_rev[(u_name, p_name)][idx, :] = dinputs._dat[p_name].val
TypeError: float() argument must be a string or a number
It appears to be trying to take the derivative even though it cannot. Here is a simple example:
import sys

from openmdao.api import IndepVarComp, Problem, Group, Component


class Comp(Component):
    def __init__(self):
        super(Comp, self).__init__()
        self.add_param('x', val=0.0)
        self.add_param('y', val=3, pass_by_obj=True)
        self.add_output('z', val=0.0)

    def solve_nonlinear(self, params, unknowns, resids):
        unknowns['z'] = params['y']*params['x']

    def linearize(self, params, unknowns, resids):
        J = {}
        J['z', 'x'] = params['y']
        return J


prob = Problem()
prob.root = Group()
prob.root.add('comp', Comp(), promotes=['*'])
prob.root.add('p1', IndepVarComp('x', 0.0), promotes=['x'])
prob.root.add('p2', IndepVarComp('y', 3, pass_by_obj=True), promotes=['y'])

prob.setup(check=False)

prob['x'] = 2.0
prob['y'] = 3

prob.run()

print prob['z']
data = prob.check_partial_derivatives(out_stream=sys.stdout)
Is it possible to use the check_partial_derivatives() method with components that have inputs specified as pass_by_obj? I don't care about the derivatives for the inputs that are specified as pass_by_obj, but I do care about the other inputs.
Thanks for the report and test. This was a bug where we weren't excluding the design variables that were declared pass_by_obj. I've got a pull request up on the OpenMDAO repo with a fix. It'll probably be merged to master within a day.
EDIT -- The fix is merged. https://github.com/OpenMDAO/OpenMDAO/commit/b123b284e46aac7e15fa9bce3751f9ad9bb63b95
