I am not sure I understand how to access the Gauss-Seidel (GS) convergence information when running a problem that contains a Group with a cycle.
To illustrate this, consider these two versions of the Sellar problem:
import numpy as np
from openmdao.api import Problem, IndepVarComp, NonlinearBlockGS
import SellarDis1, SellarDis2  # user modules containing the Sellar discipline components

prob = Problem()
model = prob.model
model.add_subsystem('px', IndepVarComp('x', 1.0), promotes=['x'])
model.add_subsystem('pz', IndepVarComp('z', np.array([5.0, 2.0])), promotes=['z'])
model.add_subsystem('d1', SellarDis1.SellarDis1(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2.SellarDis2(), promotes=['z', 'y1', 'y2'])
nlgbs = model.nonlinear_solver = NonlinearBlockGS()
nlgbs.options['maxiter'] = 8
prob.setup()
A = prob.run_model()
In this version, the variable A contains convergence results such as
(False, 1.3188028447075339e-10, 3.6299074030587596e-12)
However, when defining the Sellar problem in the following form:
import numpy as np
from openmdao.api import Problem, Group, IndepVarComp, NonlinearBlockGS
import SellarDis1, SellarDis2  # user modules containing the Sellar discipline components

class SellarMDA(Group):

    def setup(self):
        indeps = self.add_subsystem('indeps', IndepVarComp(), promotes=['*'])
        indeps.add_output('x', 1.0)
        indeps.add_output('z', np.array([5.0, 2.0]))

        cycle = self.add_subsystem('cycle', Group(), promotes=['*'])
        cycle.add_subsystem('d1', SellarDis1.SellarDis1(), promotes_inputs=['x', 'z', 'y2'], promotes_outputs=['y1'])
        cycle.add_subsystem('d2', SellarDis2.SellarDis2(), promotes_inputs=['z', 'y1'], promotes_outputs=['y2'])

        nl = cycle.nonlinear_solver = NonlinearBlockGS()
        nl.options['maxiter'] = 8

prob = Problem()
prob.model = SellarMDA()
prob.setup()

prob['x'] = 2.
prob['z'] = [-1., -1.]

C = prob.run_model()
In the variable C there is no information relevant to the GS convergence; there is only
(False, 0.0, 0.0)
Is it possible to get the GS convergence information in the 2nd version, as in the 1st, without using a recorder?
In your second version the NonlinearBlockGS solver is attached to the cycle subgroup, so the tuple returned by run_model reflects the top-level solver, which runs only a single pass and reports zero error; the GS iteration history never propagates up. If you would like to see what the solvers are doing, I would definitely recommend turning on the residual printing. You can do that individually for each solver, but I find it easier to use the problem method to turn it all on:
prob.set_solver_print(level=2)
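If you want output from just one solver, a minimal sketch of the per-solver route (assuming the SellarMDA class from your second version): raise iprint on the cycle's Gauss-Seidel solver before running.
prob = Problem()
prob.model = SellarMDA()
prob.setup()
# print residuals for just the cycle's NonlinearBlockGS solver
prob.model.cycle.nonlinear_solver.options['iprint'] = 2
prob.run_model()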
I am trying out the example problem for ExternalCodeComp as given in the OpenMDAO docs.
The optimization code is:
import openmdao.api as om
from openmdao.components.tests.test_external_code_comp import ParaboloidExternalCodeCompFD
prob = om.Problem()
model = prob.model
model.add_subsystem('p', ParaboloidExternalCodeCompFD(), promotes_inputs=['x', 'y'])
# find optimal solution with SciPy optimize
# solution (minimum): x = 6.6667; y = -7.3333
prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'
prob.model.add_design_var('p.x', lower=-50, upper=50)
prob.model.add_design_var('p.y', lower=-50, upper=50)
prob.model.add_objective('p.f_xy')
prob.driver.options['tol'] = 1e-9
prob.driver.options['disp'] = True
prob.setup()
# Set input values
prob.set_val('p.x', 3.0)
prob.set_val('p.y', -4.0)
prob.run_driver()
print(prob.get_val('p.x'))
print(prob.get_val('p.y'))
However, I get the following error at prob.setup():
Exception has occurred: RuntimeError
Group (<model>): Output not found for design variable 'p.x'.
What does this mean? I don't know if I am missing something basic. The problem only occurs when I try to optimize; there is no problem when I just use the external code in a model (as given in the docs).
In this line you promoted the inputs x and y:
model.add_subsystem('p', ParaboloidExternalCodeCompFD(), promotes_inputs=['x', 'y'])
Notice that you did not promote the output f_xy. So, from the top level of the model, the correct paths are p.f_xy for the output, but plain x and y for the inputs.
Thus the correct way to add the design variables and set the values is to use x and y:
import openmdao.api as om
from openmdao.components.tests.test_external_code_comp import ParaboloidExternalCodeCompFD
prob = om.Problem()
model = prob.model
model.add_subsystem('p', ParaboloidExternalCodeCompFD(), promotes_inputs=['x', 'y'])
# find optimal solution with SciPy optimize
# solution (minimum): x = 6.6667; y = -7.3333
prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'
prob.model.add_design_var('x', lower=-50, upper=50)
prob.model.add_design_var('y', lower=-50, upper=50)
prob.model.add_objective('p.f_xy')
prob.driver.options['tol'] = 1e-9
prob.driver.options['disp'] = True
prob.setup()
# Set input values
prob.set_val('x', 3.0)
prob.set_val('y', -4.0)
prob.run_driver()
print(prob.get_val('x'))
print(prob.get_val('y'))
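With those changes the driver converges to the documented minimum, x = 6.6667 and y = -7.3333. As a side note, if you are ever unsure which promoted names exist at the top level, list_inputs and list_outputs will show them. A quick sketch (the prom_name flag is available in recent OpenMDAO releases; treat it as an assumption for older versions):
# after run_driver() or run_model():
prob.model.list_inputs(prom_name=True)   # the inputs appear under the promoted names x and y
prob.model.list_outputs(prom_name=True)  # the output appears as p.f_xy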
I am trying to make a toy problem to learn a bit about the OpenMDAO software before applying the lessons to a larger problem. I have a problem set up so that the objective function should be minimized when both design variables are at their minimums. However, both values stay at their originally assigned values despite the driver reporting 'Optimization terminated successfully'.
I started by writing the code based on the Sellar problem examples (http://openmdao.org/twodocs/versions/latest/basic_guide/sellar.html). Additionally, I came across a Stack Overflow question that seems to describe the same problem, but the solution there doesn't work (OpenMDAO: Solver converging to non-optimal point). (When I add the declare_partials line to IntermediateCycle or ScriptForTest, I receive an error saying either that self is not defined, or that the object has no attribute declare_partials.)
This is the script that runs everything
import openmdao.api as om
from IntermediateForTest import IntermediateCycle
prob = om.Problem()
prob.model = IntermediateCycle()
prob.driver = om.ScipyOptimizeDriver()
#prob.driver.options['optimizer'] = 'SLSQP'
#prob.driver.options['tol'] = 1e-9
prob.model.add_design_var('n_gear', lower=2, upper=6)
prob.model.add_design_var('stroke', lower=0.0254, upper=1)
prob.model.add_objective('objective')
prob.setup()
prob.model.approx_totals()
prob.run_driver()
print(prob['objective'])
print(prob['cycle.f1.total_weight'])
print(prob['cycle.f1.stroke'])
print(prob['cycle.f1.n_gear'])
It calls an intermediate group, as per the Sellar example
import openmdao.api as om
from FunctionsForTest import FunctionForTest1
from FunctionsForTest import FunctionForTest2

class IntermediateCycle(om.Group):

    def setup(self):
        indeps = self.add_subsystem('indeps', om.IndepVarComp(), promotes=['*'])
        indeps.add_output('n_gear', 3.0)
        indeps.add_output('stroke', 0.2)
        indeps.add_output('total_weight', 26000.0)

        cycle = self.add_subsystem('cycle', om.Group())
        cycle.add_subsystem('f1', FunctionForTest1())
        cycle.add_subsystem('f2', FunctionForTest2())
        cycle.connect('f1.landing_gear_weight', 'f2.landing_gear_weight')
        cycle.connect('f2.total_weight', 'f1.total_weight')

        self.connect('n_gear', 'cycle.f1.n_gear')
        self.connect('stroke', 'cycle.f1.stroke')

        #cycle.nonlinear_solver = om.NonlinearBlockGS()
        self.nonlinear_solver = om.NonlinearBlockGS()

        self.add_subsystem('objective', om.ExecComp('objective = total_weight', objective=26000, total_weight=26000), promotes=['objective', 'total_weight'])
Finally there is a file with the two functions in it:
import openmdao.api as om

class FunctionForTest1(om.ExplicitComponent):

    def setup(self):
        self.add_input('stroke', val=0.2)
        self.add_input('n_gear', val=3.0)
        self.add_input('total_weight', val=26000)
        self.add_output('landing_gear_weight')
        self.declare_partials('*', '*', method='fd')

    def compute(self, inputs, outputs):
        stroke = inputs['stroke']
        n_gear = inputs['n_gear']
        total_weight = inputs['total_weight']
        outputs['landing_gear_weight'] = total_weight * 0.1 + 100*stroke * n_gear ** 2

class FunctionForTest2(om.ExplicitComponent):

    def setup(self):
        self.add_input('landing_gear_weight')
        self.add_output('total_weight')
        self.declare_partials('*', '*', method='fd')

    def compute(self, inputs, outputs):
        landing_gear_weight = inputs['landing_gear_weight']
        outputs['total_weight'] = 26000 + landing_gear_weight
It reports that the optimization terminated successfully:
Optimization terminated successfully. (Exit mode 0)
Current function value: 26000.0
Iterations: 1
Function evaluations: 1
Gradient evaluations: 1
Optimization Complete
-----------------------------------
[26000.]
[29088.88888889]
[0.2]
[3.]
however, the value of the function to optimize hasn't changed. It seems that the solver converges the loop estimating the weight, but the driver doesn't vary the design variables to find the optimum.
It arrives at 29088.9, which is correct for n_gear=3 and stroke=0.2, but if both are decreased to their bounds of n_gear=2 and stroke=0.0254, it would arrive at a value of ~28900, about 188 less.
Any advice, links to tutorials, or solutions would be appreciated.
Let's take a look at the N2 diagram of the model as you provided it:
I've highlighted the connection from indeps.total_weight to objective.total_weight. This means that your computed total_weight value is not being passed to your objective output at all; instead, a constant value is being fed there.
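(If you want to generate this diagram yourself, the helper below should do it; om.n2 is the name in recent OpenMDAO versions, while older 2.x releases used om.view_model.)
import openmdao.api as om
om.n2(prob)  # writes n2.html and opens the interactive diagram in a browser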
Now, taking a small step back, let's look at the computation of the objective itself:
self.add_subsystem('objective', om.ExecComp('objective = total_weight', objective=26000, total_weight=26000), promotes=['objective', 'total_weight'])
So this is an odd use of ExecComp, because it just sets the output exactly equal to the input. It does nothing and isn't really needed at all.
I believe what you wanted was simply to make the objective the output f2.total_weight. When I do that (and make a few additional small cleanups to your code, like removing the unnecessary ExecComp), I get the correct answer in 2 major iterations of the optimizer:
import openmdao.api as om

class FunctionForTest1(om.ExplicitComponent):

    def setup(self):
        self.add_input('stroke', val=0.2)
        self.add_input('n_gear', val=3.0)
        self.add_input('total_weight', val=26000)
        self.add_output('landing_gear_weight')
        self.declare_partials('*', '*', method='fd')

    def compute(self, inputs, outputs):
        stroke = inputs['stroke']
        n_gear = inputs['n_gear']
        total_weight = inputs['total_weight']
        outputs['landing_gear_weight'] = total_weight * 0.1 + 100*stroke * n_gear ** 2

class FunctionForTest2(om.ExplicitComponent):

    def setup(self):
        self.add_input('landing_gear_weight')
        self.add_output('total_weight')
        self.declare_partials('*', '*', method='fd')

    def compute(self, inputs, outputs):
        landing_gear_weight = inputs['landing_gear_weight']
        outputs['total_weight'] = 26000 + landing_gear_weight

class IntermediateCycle(om.Group):

    def setup(self):
        indeps = self.add_subsystem('indeps', om.IndepVarComp(), promotes=['*'])
        indeps.add_output('n_gear', 3.0)
        indeps.add_output('stroke', 0.2)

        cycle = self.add_subsystem('cycle', om.Group())
        cycle.add_subsystem('f1', FunctionForTest1())
        cycle.add_subsystem('f2', FunctionForTest2())
        cycle.connect('f1.landing_gear_weight', 'f2.landing_gear_weight')
        cycle.connect('f2.total_weight', 'f1.total_weight')

        self.connect('n_gear', 'cycle.f1.n_gear')
        self.connect('stroke', 'cycle.f1.stroke')

        #cycle.nonlinear_solver = om.NonlinearBlockGS()
        self.nonlinear_solver = om.NonlinearBlockGS()

prob = om.Problem()
prob.model = IntermediateCycle()

prob.driver = om.ScipyOptimizeDriver()
#prob.driver.options['optimizer'] = 'SLSQP'
#prob.driver.options['tol'] = 1e-9

prob.model.add_design_var('n_gear', lower=2, upper=6)
prob.model.add_design_var('stroke', lower=0.0254, upper=1)
prob.model.add_objective('cycle.f2.total_weight')
prob.model.approx_totals()

prob.setup()

prob.model.nonlinear_solver.options['iprint'] = 2  # was nl_solver, which is the deprecated 1.x name

prob.run_driver()

print(prob['cycle.f1.total_weight'])
print(prob['cycle.f2.total_weight'])
print(prob['cycle.f1.stroke'])
print(prob['cycle.f1.n_gear'])
gives:
Optimization terminated successfully. (Exit mode 0)
Current function value: 28900.177777779667
Iterations: 2
Function evaluations: 2
Gradient evaluations: 2
Optimization Complete
-----------------------------------
[28900.1777778]
[28900.17777778]
[0.0254]
[2.]
I want to minimize the output of one component while ensuring that it is larger than the output of a second component.
The add_constraint method expects an "Iterable of numeric values, or a scalar numeric value" when I feed it a string with the name of the output.
When given prob['name'], the error is "'NoneType' object is not subscriptable".
This has got to be something simple and documented, but I haven't found it yet.
import openmdao.api as om
prob=om.Problem()
independant = prob.model.add_subsystem('independant', om.IndepVarComp())
independant.add_output('x', val = 3.0)
prob.model.add_subsystem('steep_line', om.ExecComp('f = x'))
prob.model.add_subsystem('shallow_line', om.ExecComp('g = 0.5*x + 1.0'))
prob.model.connect('independant.x', ['steep_line.x', 'shallow_line.x'])
prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'
prob.model.add_design_var('independant.x', lower=0.0, upper=3.0)
#Change which of the next two lines is commented out to see both errors I
#have encountered.
prob.model.add_constraint('steep_line.f', lower='shallow_line.g')
#prob.model.add_constraint('steep_line.f', lower=prob['shallow_line.g'])
prob.model.add_objective('steep_line.f')
prob.setup()
prob.run_driver()
print('x:', prob['independant.x'])
The desired result is an optimization that arrives at independant.x = 2.0
Thank you in advance for any help you can give.
You can't specify a non-constant lower, upper, or equals bound. To make this work, you need to add another ExecComp that subtracts the two values from each other. Then you can give the resulting output of this new comp a lower bound of 0:
import openmdao.api as om
prob=om.Problem()
independant = prob.model.add_subsystem('independant', om.IndepVarComp())
independant.add_output('x', val = 3.0)
prob.model.add_subsystem('steep_line', om.ExecComp('f = x'))
prob.model.add_subsystem('shallow_line', om.ExecComp('f = 0.5*x + 1.0'))
prob.model.add_subsystem('constraint',
                         om.ExecComp('g = f_computed - lower'))
prob.model.connect('independant.x', ['steep_line.x', 'shallow_line.x'])
prob.model.connect('shallow_line.f', 'constraint.lower')
prob.model.connect('steep_line.f', 'constraint.f_computed')
prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'
prob.model.add_design_var('independant.x', lower=0.0, upper=3.0)
prob.model.add_constraint('constraint.g', lower=0)
prob.model.add_objective('steep_line.f')
prob.setup()
prob.run_driver()
print('x:', prob['independant.x'])
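For reference on what to expect: with f = x and g = 0.5*x + 1.0, the constraint is active at the optimum, so SLSQP drives independant.x down until f = g, i.e. x = 0.5*x + 1.0, which gives the desired x = 2.0.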
I am new to OpenMDAO and am trying to solve an optimization problem. When I run the code, I receive the error "'numpy.ndarray' object has no attribute 'log'" and cannot resolve it. Any suggestions?
I have reviewed the OpenMDAO documentation.
Error message: 'numpy.ndarray' object has no attribute 'log'
from __future__ import division, print_function
import openmdao.api as om
import numpy as np
class Objective(om.ExplicitComponent):

    def setup(self):
        self.add_input('mu1', val=3.84)
        self.add_input('mu2', val=3.84)
        self.add_output('f', val=0.00022)

        # Finite difference all partials.
        self.declare_partials('*', '*', method='fd')

    def compute(self, inputs, outputs):
        mu1 = inputs['mu1']
        mu2 = inputs['mu2']
        outputs['f'] = np.log((mu1*(0.86))/(1.0-(mu1*0.14))) + np.log((mu2*(0.86))/(1.0-(mu2*0.14)))
# build the model
prob = om.Problem()
indeps = prob.model.add_subsystem('indeps', om.IndepVarComp())
indeps.add_output('mu1', 3.84)
indeps.add_output('mu2', 3.84)
prob.model.add_subsystem('obj', Objective())
prob.model.add_subsystem('cnst', om.ExecComp('g = 7924.8 - 2943.0*(np.log(mu1)) - 2943.0*(np.log(mu2))'))
prob.model.connect('indeps.mu1', 'obj.mu1')
prob.model.connect('indeps.mu2', 'obj.mu2')
prob.model.connect('indeps.mu1', 'cnst.mu1')
prob.model.connect('indeps.mu2', 'cnst.mu2')
# setup the optimization
prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'COBYLA'
prob.model.add_design_var('indeps.mu1', lower=0.0, upper=5.0)
prob.model.add_design_var('indeps.mu2', lower=0.0, upper=5.0)
prob.model.add_objective('obj.f')
prob.model.add_constraint('cnst.g', lower=0.0, upper=0.0)
prob.setup()
prob.run_driver()
The problem is in the definition of your ExecComp. You have np.log, but given the way the string parsing works for ExecComp, you just want log.
Try this instead:
'g = 7924.8 - 2943.0*(log(mu1)) - 2943.0*(log(mu2))'
With that change I got:
Normal return from subroutine COBYLA
NFVALS = 46 F = 3.935882E+00 MAXCV = 2.892193E-10
X = 3.843492E+00 3.843491E+00
Optimization Complete
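More generally, ExecComp expression strings expose the common math functions (log, exp, sin, and so on) by their bare names. A minimal sketch of the convention (the variable names here are just illustrative):
import openmdao.api as om

# inside an ExecComp expression, call math functions without the np. prefix
comp = om.ExecComp('y = exp(x) + log(x)', x=1.0)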
I am using sample 2D functions for optimization with MetaModelUnStructuredComp.
Below is a code snippet. The computational time spent on training increases considerably as I increase the number of sample points, and I am not sure whether this much of an increase is expected or I am doing something wrong.
The problem is 2D with 1 output; below are some timings:
45 sec for 900 points*
14 sec for 625 points
3.7 sec for 400 points
*points refers to the number of training points for each input
Will decreasing this be a focus of the OpenMDAO development team in the future? (Keep reading for the edited version.)
import numpy as np
from openmdao.api import Problem, IndepVarComp
from openmdao.api import ScipyOptimizeDriver
from openmdao.api import MetaModelUnStructuredComp, FloatKrigingSurrogate
from openmdao.api import CaseReader, SqliteRecorder
import time

t0 = time.time()

class trig(MetaModelUnStructuredComp):

    def setup(self):
        ii = 3
        nx, ny = (10*ii, 10*ii)
        print(nx*ny)
        xx = np.linspace(-3, 3, nx)
        yy = np.linspace(-2, 2, ny)
        x, y = np.meshgrid(xx, yy)

        # z = np.sin(x)**10 + np.cos(10 + y) * np.cos(x)
        # z = 4+4.5*x-4*y+x**2+2*y**2-2*x*y+x**4-2*x**2*y
        term1 = (4-2.1*x**2+(x**4)/3) * x**2
        term2 = x*y
        term3 = (-4+4*y**2) * y**2
        z = term1 + term2 + term3

        self.add_input('x', training_data=x.flatten())
        self.add_input('y', training_data=y.flatten())
        self.add_output('meta_out', surrogate=FloatKrigingSurrogate(),
                        training_data=z.flatten())
prob = Problem()
inputs_comp = IndepVarComp()
inputs_comp.add_output('x', 1.5)
inputs_comp.add_output('y', 1.5)
prob.model.add_subsystem('inputs_comp', inputs_comp)
#triginst=
prob.model.add_subsystem('trig', trig())
prob.model.connect('inputs_comp.x', 'trig.x')
prob.model.connect('inputs_comp.y', 'trig.y')
prob.driver = ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'
prob.driver.options['tol'] = 1e-8
prob.driver.options['disp'] = True
prob.model.add_design_var('inputs_comp.x', lower=-3, upper=3)
prob.model.add_design_var('inputs_comp.y', lower=-2, upper=2)
prob.model.add_objective('trig.meta_out')
prob.setup(check=True)
prob.run_model()
print(prob['inputs_comp.x'])
print(prob['inputs_comp.y'])
print(prob['trig.meta_out'])
t1 = time.time()
total = t1-t0
print(total)
Following the answers below, I am adding a code snippet of an explicit component that uses the SMT toolbox for the surrogate. I guess this is one way to use the toolbox's capabilities.
import numpy as np
from smt.surrogate_models import RBF
from openmdao.api import ExplicitComponent
from openmdao.api import Problem, ScipyOptimizeDriver
from openmdao.api import Group, IndepVarComp
import smt

# Sample problem with the SMT toolbox and an OpenMDAO explicit component:
# optimization of the SIX-HUMP CAMEL FUNCTION, which has 2 global optima.

class MetaCompSMT(ExplicitComponent):

    def initialize(self):
        self.options.declare('sm', types=smt.surrogate_models.rbf.RBF)

    def setup(self):
        self.add_input('x')
        self.add_input('y')
        self.add_output('z')
        # self.declare_partials(of='z', wrt=['x', 'y'], method='fd')
        self.declare_partials(of='*', wrt='*')

    def compute(self, inputs, outputs):
        sm = self.options['sm']  # the trained surrogate passed in as an option
        sta = np.column_stack([inputs[i] for i in inputs])
        outputs['z'] = sm.predict_values(sta).flatten()

    def compute_partials(self, inputs, partials):
        sm = self.options['sm']
        sta = np.column_stack([inputs[i] for i in inputs])
        for i, invar in enumerate(inputs):
            partials['z', invar] = sm.predict_derivatives(sta, i)

# The SMT surrogate is trained in advance and passed to the component as an option.

# Training data
ii = 3  # increases the domain size
nx, ny = (10*ii, 5*ii)
x, y = np.meshgrid(np.linspace(-3, 3, nx), np.linspace(-2, 2, ny))
term1 = (4-2.1*x**2+(x**4)/3) * x**2
term2 = x*y
term3 = (-4+4*y**2) * y**2
z = term1 + term2 + term3

# Surrogate training
xt = np.column_stack([x.flatten(), y.flatten()])
yt = z.flatten()
#sm = KPLSK(theta0=[1e-2])
sm = RBF(d0=-1, poly_degree=-1, reg=1e-13, print_global=False)
sm.set_training_values(xt, yt)
sm.train()
prob = Problem()               # start the OpenMDAO optimization problem
prob.model = model = Group()   # assemble a single top-level group

# Independent component ~ single design variable
inputs_comp = IndepVarComp()   # OpenMDAO approach: design variables as independent component outputs
inputs_comp.add_output('x', 2.5)  # vary the initial value to find the second global optimum
inputs_comp.add_output('y', 1.5)  # vary the initial value to find the second global optimum
model.add_subsystem('inputs_comp', inputs_comp)

# Component 1
comp = MetaCompSMT(sm=sm)
model.add_subsystem('MetaCompSMT', comp)

# Connect the design variables to the component. Easier to follow than promotion.
model.connect('inputs_comp.x', 'MetaCompSMT.x')
model.connect('inputs_comp.y', 'MetaCompSMT.y')

# Lower/upper bounds on the design variables
model.add_design_var('inputs_comp.x', lower=-3, upper=3)
model.add_design_var('inputs_comp.y', lower=-2, upper=2)
model.add_objective('MetaCompSMT.z')

prob.driver = ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'
prob.driver.options['disp'] = True
prob.driver.options['tol'] = 1e-9

prob.setup(check=True, mode='fwd')
prob.run_driver()

print(prob['inputs_comp.x'], prob['inputs_comp.y'], prob['MetaCompSMT.z'])
If you are willing to compile some code yourself, you could write a very lightweight wrapper for the Surrogate Modeling Toolbox (SMT). You could write that wrapper to work with the standard MetaModelUnStructuredComp, or just write your own component wrapper.
Either way, that library has some significantly faster unstructured surrogate models in it. The default OpenMDAO implementations are just basic implementations. We may improve them over time, but for larger data sets or design spaces SMT offers much better algorithms.
We haven't written a general SMT wrapper in OpenMDAO as of Version 2.4, but it's not hard to write your own.
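For the MetaModelUnStructuredComp route, here is an untested sketch of what such a wrapper might look like. It assumes OpenMDAO's SurrogateModel base class (in 2.x importable from openmdao.surrogate_models.surrogate_model) with its train(x, y)/predict(x) hooks; the SMT calls (set_training_values, train, predict_values) are real SMT APIs, but treat the class itself as illustrative:
import numpy as np
from openmdao.surrogate_models.surrogate_model import SurrogateModel
from smt.surrogate_models import RBF

class SMTSurrogate(SurrogateModel):
    """Illustrative adapter: lets an SMT model act as an OpenMDAO surrogate."""

    def __init__(self, **kwargs):
        super(SMTSurrogate, self).__init__()
        self.sm = RBF(print_global=False, **kwargs)

    def train(self, x, y):
        # MetaModelUnStructuredComp passes in the stacked training arrays
        self.sm.set_training_values(np.asarray(x), np.asarray(y))
        self.sm.train()

    def predict(self, x):
        return self.sm.predict_values(np.atleast_2d(x))
An output would then be declared as, e.g., add_output('meta_out', surrogate=SMTSurrogate(), training_data=z.flatten()).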
I'm going to look into the performance of MetaModelUnStructuredComp using your test case a bit more closely. Though I do notice that this test case involves fitting a structured data set. If you were to use MetaModelStructuredComp (http://openmdao.org/twodocs/versions/2.2.0/features/building_blocks/components/metamodelstructured.html), the performance is considerably better:
class trig(MetaModelStructuredComp):

    def setup(self):
        ii = 3
        nx, ny = (10*ii, 10*ii)
        xx = np.linspace(-3, 3, nx)
        yy = np.linspace(-2, 2, ny)
        x, y = np.meshgrid(xx, yy, indexing='ij')

        term1 = (4-2.1*x**2+(x**4)/3) * x**2
        term2 = x*y
        term3 = (-4+4*y**2) * y**2
        z = term1 + term2 + term3

        self.add_input('x', 0.0, xx)
        self.add_input('y', 0.0, yy)
        self.add_output('meta_out', 0.0, z)
The 900 points case goes from 14 seconds on my machine using MetaModelUnStructuredComp to 0.081 when using MetaModelStructuredComp.
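For completeness, a sketch of how the structured version drops into the original script (assuming everything outside the class is unchanged; MetaModelStructuredComp is importable from openmdao.api):
import numpy as np
from openmdao.api import Problem, IndepVarComp, MetaModelStructuredComp

# class trig(MetaModelStructuredComp) defined as above

prob = Problem()
inputs_comp = IndepVarComp()
inputs_comp.add_output('x', 1.5)
inputs_comp.add_output('y', 1.5)
prob.model.add_subsystem('inputs_comp', inputs_comp)
prob.model.add_subsystem('trig', trig())
prob.model.connect('inputs_comp.x', 'trig.x')
prob.model.connect('inputs_comp.y', 'trig.y')
prob.setup(check=True)
prob.run_model()
print(prob['trig.meta_out'])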