In relation to my previous question, Scaled paraboloid and derivatives checking, I see that you fixed the issue related to running the problem once. I wanted to try it out, but I still have a problem with the derivative checking and finite differences, shown in the following code:
""" Unconstrained optimization of the scaled paraboloid component."""
from __future__ import print_function
import sys
import numpy as np
from openmdao.api import IndepVarComp, Component, Problem, Group, ScipyOptimizer
class Paraboloid(Component):
def __init__(self):
super(Paraboloid, self).__init__()
self.add_param('X', val=np.array([0.0, 0.0]))
self.add_output('f_xy', val=0.0)
def solve_nonlinear(self, params, unknowns, resids):
X = params['X']
x = X[0]
y = X[1]
unknowns['f_xy'] = (1000.*x-3.)**2 + (1000.*x)*(0.01*y) + (0.01*y+4.)**2 - 3.
def linearize(self, params, unknowns, resids):
""" Jacobian for our paraboloid."""
X = params['X']
J = {}
x = X[0]
y = X[1]
J['f_xy', 'X'] = np.array([[ 2000000.0*x - 6000.0 + 10.0*y,
0.0002*y + 0.08 + 10.0*x]])
return J
if __name__ == "__main__":
top = Problem()
root = top.root = Group()
#root.fd_options['force_fd'] = True # Error if uncommented
root.add('p1', IndepVarComp('X', np.array([3.0, -4.0])))
root.add('p', Paraboloid())
root.connect('p1.X', 'p.X')
top.driver = ScipyOptimizer()
top.driver.options['optimizer'] = 'SLSQP'
top.driver.add_desvar('p1.X',
lower=np.array([-1000.0, -1000.0]),
upper=np.array([1000.0, 1000.0]),
scaler=np.array([1000., 0.001]))
top.driver.add_objective('p.f_xy')
top.setup()
top.check_partial_derivatives()
top.run()
top.check_partial_derivatives()
print('\n')
print('Minimum of %f found at (%s)' % (top['p.f_xy'], top['p.X']))
The first check works fine, but the second check_partial_derivatives gives weird results for the FD:
[...]
Partial Derivatives Check
----------------
Component: 'p'
----------------
p: 'f_xy' wrt 'X'
Forward Magnitude : 1.771706e-04
Reverse Magnitude : 1.771706e-04
Fd Magnitude : 9.998228e-01
Absolute Error (Jfor - Jfd) : 1.000000e+00
Absolute Error (Jrev - Jfd) : 1.000000e+00
Absolute Error (Jfor - Jrev): 0.000000e+00
Relative Error (Jfor - Jfd) : 1.000177e+00
Relative Error (Jrev - Jfd) : 1.000177e+00
Relative Error (Jfor - Jrev): 0.000000e+00
Raw Forward Derivative (Jfor)
[[ -1.77170624e-04 -8.89040341e-10]]
Raw Reverse Derivative (Jrev)
[[ -1.77170624e-04 -8.89040341e-10]]
Raw FD Derivative (Jfd)
[[ 0.99982282 0. ]]
Minimum of -27.333333 found at ([ 6.66666658e-03 -7.33333333e+02])
And (maybe not related), when I try to set root.fd_options['force_fd'] = True (just to see), I get an error during the first check:
Partial Derivatives Check
----------------
Component: 'p'
----------------
Traceback (most recent call last):
File "C:\Program Files (x86)\Wing IDE 101 5.0\src\debug\tserver\_sandbox.py", line 59, in
File "d:\rlafage\OpenMDAO\OpenMDAO\openmdao\core\problem.py", line 1827, in check_partial_derivatives
u_size = np.size(dunknowns[u_name])
File "d:\rlafage\OpenMDAO\OpenMDAO\openmdao\core\vec_wrapper.py", line 398, in __getitem__
return self._dat[name].get()
File "d:\rlafage\OpenMDAO\OpenMDAO\openmdao\core\vec_wrapper.py", line 223, in _get_scalar
return self.val[0]
IndexError: index 0 is out of bounds for axis 0 with size 0
I work with OpenMDAO HEAD (d1e12d4).
This is just a stepsize problem for that finite difference. The 2nd FD occurs at a different point (the optimum), and it must be more sensitive at that point.
I tried with central difference
top.root.p.fd_options['form'] = 'central'
And got much better results.
----------------
Component: 'p'
----------------
p: 'f_xy' wrt 'X'
Forward Magnitude : 1.771706e-04
Reverse Magnitude : 1.771706e-04
Fd Magnitude : 1.771738e-04
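If you prefer to keep forward differences, you can also tighten or rescale the FD step on that component. A rough sketch (I'm assuming the OpenMDAO 1.x fd_options keys 'step_size' and 'step_type' here; check the docs for your version):

top.root.p.fd_options['step_size'] = 1.0e-8       # assumed key: smaller absolute step
# or scale the step with the magnitude of the value being perturbed:
top.root.p.fd_options['step_type'] = 'relative'   # assumed key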
The exception when you set 'fd' is a real bug related to the scaler on the des_var being an array. Thanks for the report on that; we'll get a story up to fix it.
"maskrcnn_benchmark"s github
Here is the source code for "FrozenBatchNorm2d"
import torch
from torch import nn


class FrozenBatchNorm2d(nn.Module):
    def __init__(self, n):
        super(FrozenBatchNorm2d, self).__init__()
        self.register_buffer("weight", torch.ones(n))
        self.register_buffer("bias", torch.zeros(n))
        self.register_buffer("running_mean", torch.zeros(n))
        self.register_buffer("running_var", torch.ones(n))

    def forward(self, x):
        scale = self.weight * self.running_var.rsqrt()
        bias = self.bias - self.running_mean * scale
        scale = scale.reshape(1, -1, 1, 1)
        bias = bias.reshape(1, -1, 1, 1)
        return x * scale + bias
When I put this module in my script, I found that it had almost no effect. Here is my usage:
import torch.nn as nn
import torch


class FrozenBatchNorm2d(nn.Module):
    """
    BatchNorm2d where the batch statistics and the affine parameters
    are fixed
    """

    def __init__(self, n):
        super(FrozenBatchNorm2d, self).__init__()
        self.register_buffer("weight", torch.ones(n))
        self.register_buffer("bias", torch.zeros(n))
        self.register_buffer("running_mean", torch.zeros(n))
        self.register_buffer("running_var", torch.ones(n))

    def forward(self, x):
        scale = self.weight * self.running_var.rsqrt()
        bias = self.bias - self.running_mean * scale
        scale = scale.reshape(1, -1, 1, 1)
        bias = bias.reshape(1, -1, 1, 1)
        print(scale.shape, bias.shape)
        return x * scale + bias


a = FrozenBatchNorm2d((1, 2))
a(torch.tensor([1, 2, 3]))
The result is different from what I expected. Can someone tell me what this function actually does? I would appreciate any help.
"register_buffer" means open an RAM for some parameters which couldn't be optimized or changed during the tranning process, in another word, the "weight","bias","running_mean","running_var" are consistent values. Hence, that is the reason why this rebuild batchnorm method could be called FrozenBatchnorm2d. It's my explan, hope it can help you.
I am trying to write a minimum-time control code using the Drake toolbox, but partway through I cannot understand the error message.
'''python
from pydrake.all import MathematicalProgram, Solve
import numpy as np

def g(x):
    if abs(x)<1e-7:
        return 0.
    else:
        return 1.

mp = MathematicalProgram()

state_initial = np.asarray([1., 0])
position_goal = np.asarray([0, 0])

N=100
dt=0.01

u_over_time=mp.NewContinuousVariables(1,"u_0")
states_over_time = np.asarray([state_initial])

for k in range(1,N):
    u = mp.NewContinuousVariables(1, "u_%d" % k)
    state =mp.NewContinuousVariables(2,"state_%d" % k)
    u_over_time = np.vstack((u_over_time, u))
    states_over_time = np.vstack((states_over_time,state))

print "Number of decision vars", mp.num_vars()

for i in range(N-1):
    state_next0 = states_over_time[i,0]+ dt*states_over_time[i,1]
    state_next1 = states_over_time[i,1]+ dt*u_over_time[i]
    mp.AddLinearConstraint(states_over_time[i+1,0]>=state_next0)
    mp.AddLinearConstraint(states_over_time[i+1,1]>=state_next1)
    mp.AddLinearConstraint(states_over_time[i+1,0]<=state_next0)
    mp.AddLinearConstraint(states_over_time[i+1,1]<=state_next1)
    mp.AddLinearConstraint(u_over_time[i]<=1.)
    mp.AddLinearConstraint(u_over_time[i]>=-1.)
'''
And the error info is:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-2-be1aa565be42> in <module>()
29 state_next1 = states_over_time[i,1]+ dt*u_over_time[i]
30 mp.AddLinearConstraint(states_over_time[i+1,0]>=state_next0)
---> 31 mp.AddLinearConstraint(states_over_time[i+1,1]>=state_next1)
32 mp.AddLinearConstraint(states_over_time[i+1,0]<=state_next0)
33 mp.AddLinearConstraint(states_over_time[i+1,1]<=state_next1)
RuntimeError: You should not call `__bool__` / `__nonzero__` on `Formula`. If you are trying to make a map with `Variable`, `Expression`, or `Polynomial` as keys (and then access the map in Python), please use pydrake.common.containers.EqualToDict`.
May I know what's happening here? Thanks
----------------update line-----------------
I modified the code as you told me. The code now becomes:
'''python
from pydrake.all import MathematicalProgram, Solve
import numpy as np

def g(x):
    if abs(x)<1e-7:
        return 0.
    else:
        return 1.

mp = MathematicalProgram()

state_initial = np.asarray([1., 0])
position_goal = np.asarray([0, 0])

N=100
dt=0.01

u_over_time=mp.NewContinuousVariables(1,"u_0")
states_over_time = np.asarray([state_initial])

for k in range(1,N):
    u = mp.NewContinuousVariables(1, "u_%d" % k)
    state =mp.NewContinuousVariables(2,"state_%d" % k)
    u_over_time = np.vstack((u_over_time, u))
    states_over_time = np.vstack((states_over_time,state))

print "Number of decision vars", mp.num_vars()

for i in range(N-1):
    state_next0 = states_over_time[i,0]+ dt*states_over_time[i,1]
    state_next1 = states_over_time[i,1]+ dt*u_over_time[i,0]
    mp.AddLinearConstraint(states_over_time[i+1,0]>=state_next0[0])
    mp.AddLinearConstraint(states_over_time[i+1,1]>=state_next1[0])
    mp.AddLinearConstraint(states_over_time[i+1,0]<=state_next0[0])
    mp.AddLinearConstraint(states_over_time[i+1,1]<=state_next1[0])
    mp.AddLinearConstraint(u_over_time[i,0]<=1.)
    mp.AddLinearConstraint(u_over_time[i,0]>=-1.)
'''
And the error info is:
TypeError Traceback (most recent call last)
<ipython-input-7-82e68c2ebfaa> in <module>()
27 state_next0 = states_over_time[i,0]+ dt*states_over_time[i,1]
28 state_next1 = states_over_time[i,1]+ dt*u_over_time[i,0]
---> 29 mp.AddLinearConstraint(states_over_time[i+1,0]>=state_next0[0])
30 mp.AddLinearConstraint(states_over_time[i+1,1]>=state_next1[0])
31 mp.AddLinearConstraint(states_over_time[i+1,0]<=state_next0[0])
TypeError: 'float' object has no attribute '__getitem__'
What's the problem this time? Thanks.
(By the way, one complaint of mine is that the error messages are often not very helpful in hinting where the problem actually is...)
-----------------2nd update line--------------------
Now a similar problem happens with g(x). The code:
'''
from pydrake.all import MathematicalProgram, Solve
import numpy as np

def g(x):
    print 'x=',x
    print 'x[0]=',x[0]
    if x[0]*x[0]+x[1]*x[1]<1e-7: # x.dot(x)
        return 0.
    else:
        return 1.

mp = MathematicalProgram()

state_initial = np.asarray([1., 0])
#position_goal = np.asarray([0, 0]) # already in g(x)

N=100
dt=0.01

u_over_time=mp.NewContinuousVariables(1,"u_0")
states_over_time = np.asarray([state_initial])

for k in range(1,N):
    u = mp.NewContinuousVariables(1, "u_%d" % k)
    state =mp.NewContinuousVariables(2,"state_%d" % k)
    u_over_time = np.vstack((u_over_time, u))
    states_over_time = np.vstack((states_over_time,state))

print "Number of decision vars", mp.num_vars()

for i in range(N-1):
    state_next0 = states_over_time[i,0]+ dt*states_over_time[i,1]
    state_next1 = states_over_time[i,1]+ dt*u_over_time[i,0]
    mp.AddLinearConstraint(states_over_time[i+1,0]>=state_next0)
    mp.AddLinearConstraint(states_over_time[i+1,1]>=state_next1)
    mp.AddLinearConstraint(states_over_time[i+1,0]<=state_next0)
    mp.AddLinearConstraint(states_over_time[i+1,1]<=state_next1)
    mp.AddLinearConstraint(u_over_time[i,0]<=1.)
    mp.AddLinearConstraint(u_over_time[i,0]>=-1.)

reward=np.zeros((N,1))
for i in range(N):
    reward[i]=g(states_over_time[i,:])

mp.AddQuadraticCost(reward.dot(reward))
result=Solve(mp)
'''
This time neither x nor x[0] solves the problem. The output is:
Number of decision vars 298
x= [1.0 0.0]
x[0]= 1.0
x= [Variable('state_1(0)', Continuous) Variable('state_1(1)', Continuous)]
x[0]= state_1(0)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-8-08d1cd75397e> in <module>()
37 reward=np.zeros((N,1))
38 for i in range(N):
---> 39 reward[i]=g(states_over_time[i,:])
40
41 mp.AddQuadraticCost(reward.dot(reward))
<ipython-input-8-08d1cd75397e> in g(x)
5 print 'x=',x
6 print 'x[0]=',x[0]
----> 7 if x[0]*x[0]+x[1]*x[1]<1e-7: # x.dot(x)
8 return 0.
9 else:
RuntimeError: You should not call `__bool__` / `__nonzero__` on `Formula`. If you are trying to make a map with `Variable`, `Expression`, or `Polynomial` as keys (and then access the map in Python), please use pydrake.common.containers.EqualToDict`.
What can I do this time? Thanks
By the way, you can see in the code that I print x and x[0] only once, but I get two different outputs. Funny, isn't it? Why is this?
state_next1 is not a symbolic expression; it is a numpy array of symbolic expressions, so you need to do state_next1[0]. Similarly, you will need to change u_over_time[i] <= 1 to u_over_time[i, 0] <= 1.
The other way to solve the problem is to compute state_next1 using u_over_time[i, 0] instead of u_over_time[i]. After this modification, the for loop in your code should be
for i in range(N-1):
    state_next0 = states_over_time[i,0]+ dt*states_over_time[i,1]
    state_next1 = states_over_time[i,1]+ dt*u_over_time[i, 0]
    mp.AddLinearConstraint(states_over_time[i+1,0]>=state_next0)
    mp.AddLinearConstraint(states_over_time[i+1,1]>=state_next1)
    mp.AddLinearConstraint(states_over_time[i+1,0]<=state_next0)
    mp.AddLinearConstraint(states_over_time[i+1,1]<=state_next1)
    mp.AddLinearConstraint(u_over_time[i, 0]<=1.)
    mp.AddLinearConstraint(u_over_time[i, 0]>=-1.)
I changed u_over_time[i] to u_over_time[i, 0] where you define state_next1.
The error thrown in the lines

    if x[0]*x[0]+x[1]*x[1]<1e-7: # x.dot(x)
        return 0.

is because you called AddQuadraticCost, but your cost is not quadratic. Drake tries to parse the symbolic expression as a quadratic expression and fails. Specifically, Drake fails when you check whether the expression x[0] * x[0] + x[1] * x[1] < 1e-7 holds on symbolic variables. No quadratic cost can have this type of "if" statement.
What is the mathematical formulation of your cost? Do you really want to impose the cost as defined in your g(x) function, namely that if x'*x < 1e-7 then g(x) = 0, otherwise g(x) = 1? This is a pretty bad cost (it is almost constant everywhere, but has a discrete jump from 1 to 0 near the origin).
Since you want to solve a minimum-time optimal control problem, I would suggest changing your formulation and making dt a decision variable in your problem. Namely, you will have the dynamic constraint
x[n+1] = x[n] + f(x[n], u[n]) * dt[n]
The final state constraint
x[N] = x_desired
The initial state constraint
x[0] = x_initial
And your cost function is to minimize the time
min sum_i dt[i]
Then you will have smooth cost and constraint.
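Here is a rough sketch of that reformulation for the double integrator in your code (my own illustration, not tested against a particular Drake version): the bilinear dynamics constraints go through the generic AddConstraint, dt is bounded to stay positive, and the cost is the sum of the time steps.

from pydrake.all import MathematicalProgram, Solve
import numpy as np

N = 50
mp = MathematicalProgram()
x = mp.NewContinuousVariables(N, 2, "x")      # states [position, velocity]
u = mp.NewContinuousVariables(N - 1, "u")     # control
dt = mp.NewContinuousVariables(N - 1, "dt")   # time steps are now decision variables

# initial and final state constraints
mp.AddLinearConstraint(x[0, 0] == 1.)
mp.AddLinearConstraint(x[0, 1] == 0.)
mp.AddLinearConstraint(x[N - 1, 0] == 0.)
mp.AddLinearConstraint(x[N - 1, 1] == 0.)

for i in range(N - 1):
    # x[n+1] = x[n] + f(x[n], u[n]) * dt[n]  (bilinear, hence AddConstraint, not AddLinearConstraint)
    mp.AddConstraint(x[i + 1, 0] == x[i, 0] + x[i, 1] * dt[i])
    mp.AddConstraint(x[i + 1, 1] == x[i, 1] + u[i] * dt[i])
    mp.AddLinearConstraint(u[i] <= 1.)
    mp.AddLinearConstraint(u[i] >= -1.)
    mp.AddLinearConstraint(dt[i] >= 0.)
    mp.AddLinearConstraint(dt[i] <= 0.1)

# minimize the total time
mp.AddCost(sum(dt))

result = Solve(mp)
print(result.is_success())
print(result.GetSolution(dt).sum())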
Going back to your original formulation, here is a piece of code that doesn't throw a syntax error:
from pydrake.all import MathematicalProgram, Solve
import numpy as np

def g(x):
    x_squared_norm = np.power(x.reshape((2, -1)), 2)
    return np.sum(x_squared_norm > 1e-7)

mp = MathematicalProgram()

state_initial = np.asarray([1., 0])
#position_goal = np.asarray([0, 0]) # already in g(x)

N=100
dt=0.01

u_over_time=mp.NewContinuousVariables(1,"u_0")
states_over_time = np.asarray([state_initial])

for k in range(1,N):
    u = mp.NewContinuousVariables(1, "u_%d" % k)
    state =mp.NewContinuousVariables(2,"state_%d" % k)
    u_over_time = np.vstack((u_over_time, u))
    states_over_time = np.vstack((states_over_time,state))

print "Number of decision vars", mp.num_vars()

for i in range(N-1):
    state_next0 = states_over_time[i,0]+ dt*states_over_time[i,1]
    state_next1 = states_over_time[i,1]+ dt*u_over_time[i,0]
    mp.AddLinearConstraint(states_over_time[i+1,0]>=state_next0)
    mp.AddLinearConstraint(states_over_time[i+1,1]>=state_next1)
    mp.AddLinearConstraint(states_over_time[i+1,0]<=state_next0)
    mp.AddLinearConstraint(states_over_time[i+1,1]<=state_next1)
    mp.AddLinearConstraint(u_over_time[i,0]<=1.)
    mp.AddLinearConstraint(u_over_time[i,0]>=-1.)

mp.AddCost(g, vars=states_over_time[1:,:].reshape((1, -1)).squeeze())

result=Solve(mp)
Notice that I changed the definition of g and called mp.AddCost instead of mp.AddQuadraticCost. mp.AddQuadraticCost expects a quadratic symbolic expression. The expression in your code is not quadratic (it has an if statement in the cost, and a quadratic cost doesn't allow if statements).
This code should run without error, but I don't know if it can find a solution. Again, this cost is not differentiable, so any gradient-based nonlinear solver will have trouble.
If you really don't want to solve the problem as a nonlinear optimization problem, you can consider reformulating it as a mixed-integer program. Namely, your cost is the sum of a set of binary variables b[i], where b[i] = 1 if |x[i, 0]| > epsilon or |x[i, 1]| > epsilon, and b[i] = 0 otherwise; you can formulate this with mixed-integer linear constraints.
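A rough sketch of that big-M encoding (my own illustration; it assumes a mixed-integer-capable solver such as Gurobi or Mosek is available to Drake, and M and epsilon are values you would tune):

import numpy as np
from pydrake.all import MathematicalProgram, Solve

N = 100
eps = 1e-3
M = 10.          # big-M bound, must exceed any reachable |x|

mp = MathematicalProgram()
x = mp.NewContinuousVariables(N, 2, "x")
b = mp.NewBinaryVariables(N, "b")

for i in range(N):
    # if either coordinate is farther than eps from 0, b[i] is forced to 1
    mp.AddLinearConstraint(x[i, 0] <= eps + M * b[i])
    mp.AddLinearConstraint(-x[i, 0] <= eps + M * b[i])
    mp.AddLinearConstraint(x[i, 1] <= eps + M * b[i])
    mp.AddLinearConstraint(-x[i, 1] <= eps + M * b[i])

# (add the dynamics, input limits and initial-state constraints as in the code above)

# minimize the number of time steps where the state is still away from the origin
mp.AddCost(sum(b))
result = Solve(mp)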
I wrote an answer using the bisection method, which was also recommended by Tedrake in class, but I don't like this method: too many iterations. I will just put it here for now; when I have a mixed-integer version, I will come back.
'''
from pydrake.all import MathematicalProgram, Solve
import numpy as np
import matplotlib.pyplot as plt

'''
def g(x):
    print 'x=',x
    print 'x[0]=',x[0]
    if x[0]*x[0]+x[1]*x[1]<1e-7: # x.dot(x)
        return 0.
    else:
        return 1.
'''

#mp = MathematicalProgram()
state_initial = np.asarray([1., 0])
#position_goal = np.asarray([0, 0]) # already in g(x)
#N=201
dt=0.01

upper=1000; lower=1;
N=upper

while upper-lower>1:
    print '---------------------'
    print 'N=',N
    mp = MathematicalProgram()

    u_over_time=mp.NewContinuousVariables(1,"u_0")
    states_over_time = mp.NewContinuousVariables(2,"state intial")
    mp.AddLinearConstraint(states_over_time[0]==np.asarray([state_initial[0]]))
    mp.AddLinearConstraint(states_over_time[1]==np.asarray([state_initial[1]]))
    #states_over_time = np.asarray([state_initial])

    for k in range(1,N):
        u = mp.NewContinuousVariables(1, "u_%d" % k)
        state =mp.NewContinuousVariables(2,"state_%d" % k)
        u_over_time = np.vstack((u_over_time, u))
        states_over_time = np.vstack((states_over_time,state))

    print "Number of decision vars", mp.num_vars()

    for i in range(N-1):
        state_next0 = states_over_time[i,0]+ dt*states_over_time[i,1]
        state_next1 = states_over_time[i,1]+ dt*u_over_time[i,0]
        mp.AddLinearConstraint(states_over_time[i+1,0]>=state_next0)
        mp.AddLinearConstraint(states_over_time[i+1,1]>=state_next1)
        mp.AddLinearConstraint(states_over_time[i+1,0]<=state_next0)
        mp.AddLinearConstraint(states_over_time[i+1,1]<=state_next1)
        mp.AddLinearConstraint(u_over_time[i,0]<=1.)
        mp.AddLinearConstraint(u_over_time[i,0]>=-1.)

    '''
    reward=np.zeros((N,1))
    for i in range(N):
        reward[i]=g(states_over_time[i,:])
    '''

    mp.AddLinearConstraint(states_over_time[-1,0]<=1e-7)
    mp.AddLinearConstraint(states_over_time[-1,1]<=1e-7)
    mp.AddLinearConstraint(states_over_time[-1,0]>=1e-7)
    mp.AddLinearConstraint(states_over_time[-1,1]>=1e-7)
    #mp.AddQuadraticCost(reward.dot(reward))

    result=Solve(mp)
    print result.is_success()
    if result.is_success():
        upper=N
    else:
        lower=N
    N=lower+int((upper-lower)/2.0)

N=upper
#print result.is_success()
print 'least time=',dt*N

u_over_time=result.GetSolution(u_over_time)
states_over_time=result.GetSolution(states_over_time)
#print 'u=',u_over_time
#print 'last state=',states_over_time[-1,:]

fig, ax = plt.subplots(2, 1)
plt.subplot(2, 1, 1);plt.plot(np.arange(dt, dt*N, dt),u_over_time);
plt.legend(["u against t"])
plt.subplot(2, 1, 2);plt.plot(states_over_time[:,0],states_over_time[:,1]);
plt.legend(["phase portrait"])
'''
I am trying to make a toy problem to learn a bit about the OpenMDAO software before applying the lessons to a larger problem. I have a problem set up so that the objective function should be minimized when both design variables are at their minimum. However, both values stay at their originally assigned values despite the optimizer reporting 'Optimization terminated successfully'.
I started by writing the code based on the Sellar problem examples ( http://openmdao.org/twodocs/versions/latest/basic_guide/sellar.html ). Additionally, I came across a Stack Overflow question that seems to describe the same problem, but the solution there doesn't work ( OpenMDAO: Solver converging to non-optimal point ). (When I add the declare_partials line to the IntermediateCycle or ScriptForTest, I receive an error saying either that self is not defined, or that the object has no attribute declare_partials.)
This is the script that runs everything
import openmdao.api as om
from IntermediateForTest import IntermediateCycle
prob = om.Problem()
prob.model = IntermediateCycle()
prob.driver = om.ScipyOptimizeDriver()
#prob.driver.options['optimizer'] = 'SLSQP'
#prob.driver.options['tol'] = 1e-9
prob.model.add_design_var('n_gear', lower=2, upper=6)
prob.model.add_design_var('stroke', lower=0.0254, upper=1)
prob.model.add_objective('objective')
prob.setup()
prob.model.approx_totals()
prob.run_driver()
print(prob['objective'])
print(prob['cycle.f1.total_weight'])
print(prob['cycle.f1.stroke'])
print(prob['cycle.f1.n_gear'])
It calls an intermediate group, as per the Sellar example
import openmdao.api as om
from FunctionsForTest import FunctionForTest1
from FunctionsForTest import FunctionForTest2


class IntermediateCycle(om.Group):
    def setup(self):
        indeps = self.add_subsystem('indeps', om.IndepVarComp(), promotes=['*'])
        indeps.add_output('n_gear', 3.0)
        indeps.add_output('stroke', 0.2)
        indeps.add_output('total_weight', 26000.0)

        cycle = self.add_subsystem('cycle', om.Group())
        cycle.add_subsystem('f1', FunctionForTest1())
        cycle.add_subsystem('f2', FunctionForTest2())

        cycle.connect('f1.landing_gear_weight','f2.landing_gear_weight')
        cycle.connect('f2.total_weight','f1.total_weight')

        self.connect('n_gear','cycle.f1.n_gear')
        self.connect('stroke','cycle.f1.stroke')

        #cycle.nonlinear_solver = om.NonlinearBlockGS()
        self.nonlinear_solver = om.NonlinearBlockGS()

        self.add_subsystem('objective', om.ExecComp('objective = total_weight', objective=26000, total_weight=26000), promotes=['objective', 'total_weight'])
Finally there is a file with the two functions in it:
import openmdao.api as om


class FunctionForTest1(om.ExplicitComponent):
    def setup(self):
        self.add_input('stroke', val=0.2)
        self.add_input('n_gear', val=3.0)
        self.add_input('total_weight', val=26000)
        self.add_output('landing_gear_weight')
        self.declare_partials('*', '*', method='fd')

    def compute(self, inputs, outputs):
        stroke = inputs['stroke']
        n_gear = inputs['n_gear']
        total_weight = inputs['total_weight']
        outputs['landing_gear_weight'] = total_weight * 0.1 + 100*stroke * n_gear ** 2


class FunctionForTest2(om.ExplicitComponent):
    def setup(self):
        self.add_input('landing_gear_weight')
        self.add_output('total_weight')
        self.declare_partials('*', '*', method='fd')

    def compute(self, inputs, outputs):
        landing_gear_weight = inputs['landing_gear_weight']
        outputs['total_weight'] = 26000 + landing_gear_weight
It reports that the optimization terminated successfully:
Optimization terminated successfully. (Exit mode 0)
Current function value: 26000.0
Iterations: 1
Function evaluations: 1
Gradient evaluations: 1
Optimization Complete
-----------------------------------
[26000.]
[29088.88888889]
[0.2]
[3.]
However, the value of the objective function hasn't changed. It seems that it converges the loop that estimates the weight, but doesn't vary the design variables to find the optimum.
It arrives at 29088.9, which is correct for n_gear=3 and stroke=0.2, but if both are decreased to their bounds of n_gear=2 and stroke=0.0254, it should arrive at a value of ~28900, about 188 less.
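As a quick sanity check of those numbers (my own back-of-the-envelope arithmetic, outside of OpenMDAO): the coupled loop has the closed-form fixed point total = (26000 + 100*stroke*n_gear**2) / 0.9, which reproduces both values:

def converged_weight(stroke, n_gear):
    # total = 26000 + 0.1*total + 100*stroke*n_gear**2, solved for total
    return (26000 + 100 * stroke * n_gear ** 2) / 0.9

print(converged_weight(0.2, 3))      # ~29088.9 at the initial point
print(converged_weight(0.0254, 2))   # ~28900.2 at the lower bounds, ~188 less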
Any advice, links to tutorials, or solutions would be appreciated.
Let's take a look at the N2 diagram of the model, as you provided it:
I've highlighted the connection from indeps.total_weight to objective.total_weight. This means that your computed total_weight value is not being passed to your objective output at all. Instead, you have a constant value being set there.
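(If you want to reproduce that view yourself, the N2 diagram can be generated with something like the following; I'm assuming the om.n2 helper that ships with recent OpenMDAO releases, or equivalently the openmdao n2 command-line tool.)

import openmdao.api as om

# ... build prob and call prob.setup() exactly as in your script ...
om.n2(prob)   # writes an interactive n2.html next to the script and opens it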
Now, taking a small step back, let's look at the computation of the objective itself:
self.add_subsystem('objective', om.ExecComp('objective = total_weight', objective=26000, total_weight=26000), promotes=['objective', 'total_weight'])
So this is an odd use of ExecComp, because it just sets the output to exactly the input. It does nothing and isn't really needed at all.
I believe what you wanted was simply to make the objective the output f2.total_weight. When I do that (and make a few additional small cleanups to your code, like removing the unnecessary ExecComp), I get the correct answer in 2 major iterations of the optimizer:
import openmdao.api as om


class FunctionForTest1(om.ExplicitComponent):
    def setup(self):
        self.add_input('stroke', val=0.2)
        self.add_input('n_gear', val=3.0)
        self.add_input('total_weight', val=26000)
        self.add_output('landing_gear_weight')
        self.declare_partials('*', '*', method='fd')

    def compute(self, inputs, outputs):
        stroke = inputs['stroke']
        n_gear = inputs['n_gear']
        total_weight = inputs['total_weight']
        outputs['landing_gear_weight'] = total_weight * 0.1 + 100*stroke * n_gear ** 2


class FunctionForTest2(om.ExplicitComponent):
    def setup(self):
        self.add_input('landing_gear_weight')
        self.add_output('total_weight')
        self.declare_partials('*', '*', method='fd')

    def compute(self, inputs, outputs):
        landing_gear_weight = inputs['landing_gear_weight']
        outputs['total_weight'] = 26000 + landing_gear_weight


class IntermediateCycle(om.Group):
    def setup(self):
        indeps = self.add_subsystem('indeps', om.IndepVarComp(), promotes=['*'])
        indeps.add_output('n_gear', 3.0)
        indeps.add_output('stroke', 0.2)

        cycle = self.add_subsystem('cycle', om.Group())
        cycle.add_subsystem('f1', FunctionForTest1())
        cycle.add_subsystem('f2', FunctionForTest2())

        cycle.connect('f1.landing_gear_weight','f2.landing_gear_weight')
        cycle.connect('f2.total_weight','f1.total_weight')

        self.connect('n_gear','cycle.f1.n_gear')
        self.connect('stroke','cycle.f1.stroke')

        #cycle.nonlinear_solver = om.NonlinearBlockGS()
        self.nonlinear_solver = om.NonlinearBlockGS()


prob = om.Problem()
prob.model = IntermediateCycle()

prob.driver = om.ScipyOptimizeDriver()
#prob.driver.options['optimizer'] = 'SLSQP'
#prob.driver.options['tol'] = 1e-9

prob.model.add_design_var('n_gear', lower=2, upper=6)
prob.model.add_design_var('stroke', lower=0.0254, upper=1)
prob.model.add_objective('cycle.f2.total_weight')

prob.model.approx_totals()

prob.setup()

prob.model.nl_solver.options['iprint'] = 2

prob.run_driver()

print(prob['cycle.f1.total_weight'])
print(prob['cycle.f2.total_weight'])
print(prob['cycle.f1.stroke'])
print(prob['cycle.f1.n_gear'])
gives:
Optimization terminated successfully. (Exit mode 0)
Current function value: 28900.177777779667
Iterations: 2
Function evaluations: 2
Gradient evaluations: 2
Optimization Complete
-----------------------------------
[28900.1777778]
[28900.17777778]
[0.0254]
[2.]
I'm trying to understand the OpenMDAO error messages
RuntimeError: Singular entry found in '' for column associated with state/residual 'x'.
and
RuntimeError: Singular entry found in '' for row associated with state/residual 'y'.
Can someone explain these? For example, when running the code
from openmdao.api import Problem, Group, IndepVarComp, ImplicitComponent, ScipyOptimizeDriver, NewtonSolver, DirectSolver, view_model, view_connections


class Test1Comp(ImplicitComponent):
    def setup(self):
        self.add_input('x', 0.5)
        self.add_input('design_x', 1.0)
        self.add_output('z', val=0.0)
        self.add_output('obj')
        self.declare_partials(of='*', wrt='*', method='fd', form='central', step=1.0e-4)

    def apply_nonlinear(self, inputs, outputs, resids):
        x = inputs['x']
        design_x = inputs['design_x']
        z = outputs['z']
        resids['z'] = x*z + z - 4
        resids['obj'] = (z/5.833333 - design_x)**2


if __name__ == "__main__":

    prob = Problem()
    model = prob.model = Group()

    model.add_subsystem('p1', IndepVarComp('x', 0.5))
    model.add_subsystem('d1', IndepVarComp('design_x', 1.0))
    model.add_subsystem('comp', Test1Comp())

    model.connect('p1.x', 'comp.x')
    model.connect('d1.design_x', 'comp.design_x')

    prob.driver = ScipyOptimizeDriver()
    prob.driver.options["optimizer"] = 'SLSQP'
    model.add_design_var("d1.design_x", lower=0.5, upper=1.5)
    model.add_objective('comp.obj')

    model.nonlinear_solver = NewtonSolver()
    model.nonlinear_solver.options['iprint'] = 2
    model.nonlinear_solver.options['maxiter'] = 20
    model.linear_solver = DirectSolver()

    prob.setup()
    prob.run_model()
    print(prob['comp.z'])
I get the error message:
File "C:\Scripts/mockup_component3.py", line 46, in <module>
prob.run_model()
File "C:\Python_32\lib\site-packages\openmdao\core\problem.py", line 315, in run_model
return self.model.run_solve_nonlinear()
File "C:\Python_32\lib\site-packages\openmdao\core\system.py", line 2960, in run_solve_nonlinear
result = self._solve_nonlinear()
File "C:\Python_32\lib\site-packages\openmdao\core\group.py", line 1420, in _solve_nonlinear
result = self._nonlinear_solver.solve()
File "C:\Python_32\lib\site-packages\openmdao\solvers\solver.py", line 602, in solve
fail, abs_err, rel_err = self._run_iterator()
File "C:\Python_32\lib\site-packages\openmdao\solvers\solver.py", line 349, in _run_iterator
self._iter_execute()
File "C:\Python_32\lib\site-packages\openmdao\solvers\nonlinear\newton.py", line 234, in _iter_execute
system._linearize()
File "C:\Python_32\lib\site-packages\openmdao\core\group.py", line 1562, in _linearize
self._linear_solver._linearize()
File "C:\Python_32\lib\site-packages\openmdao\solvers\linear\direct.py", line 199, in _linearize
raise RuntimeError(format_singluar_error(err, system, mtx))
RuntimeError: Singular entry found in '' for column associated with state/residual 'comp.obj'.
I was able to resolve this error by adding - outputs['obj'] to the equation for resids['obj']. But I still have little understanding of what the two error messages mean. Which matrix is singular? And what does it mean to have
1) a singular entry for a column?
2) a singular entry for a row?
I realized that the cause of the singular row was that I had not defined the partial derivatives for the component. I fixed that problem by adding a declare_partials call to the top-level system. The traceback gave me the clue that the matrix was related to linearization.
The case of the singular column seems related to the fact that I had two equations in apply_nonlinear but only one unknown (z): neither residual depended on the state 'obj', so the column of the linearized system associated with 'obj' was all zeros.
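For reference, this is roughly the apply_nonlinear that made the error go away for me; a minimal sketch, assuming the intent is that obj equals (z/5.833333 - design_x)**2 at convergence, so its residual has to involve outputs['obj'] itself:

    def apply_nonlinear(self, inputs, outputs, resids):
        x = inputs['x']
        design_x = inputs['design_x']
        z = outputs['z']
        # each residual now involves its own state, so the matrix that the
        # DirectSolver factorizes has no all-zero row or column for 'obj'
        resids['z'] = x*z + z - 4
        resids['obj'] = (z/5.833333 - design_x)**2 - outputs['obj']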
I'm implementing an RBF network using some beginner examples from the PyTorch website. I have a problem implementing the kernel bandwidth differentiation for the network. Also, I would like to know whether my attempt to implement the idea is fine. Here is a code sample to reproduce the issue. Thanks.
# -*- coding: utf-8 -*-
import torch
from torch.autograd import Variable


def kernel_product(x, y, mode = "gaussian", s = 1.):
    x_i = x.unsqueeze(1)
    y_j = y.unsqueeze(0)
    xmy = ((x_i - y_j)**2).sum(2)
    if   mode == "gaussian" : K = torch.exp( - xmy/s**2)
    elif mode == "laplace"  : K = torch.exp( - torch.sqrt(xmy + (s**2)))
    elif mode == "energy"   : K = torch.pow( xmy + (s**2), -.25 )
    return torch.t(K)


class MyReLU(torch.autograd.Function):
    """
    We can implement our own custom autograd Functions by subclassing
    torch.autograd.Function and implementing the forward and backward passes
    which operate on Tensors.
    """

    @staticmethod
    def forward(ctx, input):
        """
        In the forward pass we receive a Tensor containing the input and return
        a Tensor containing the output. ctx is a context object that can be used
        to stash information for backward computation. You can cache arbitrary
        objects for use in the backward pass using the ctx.save_for_backward method.
        """
        ctx.save_for_backward(input)
        return input.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        """
        In the backward pass we receive a Tensor containing the gradient of the loss
        with respect to the output, and we need to compute the gradient of the loss
        with respect to the input.
        """
        input, = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[input < 0] = 0
        return grad_input


dtype = torch.cuda.FloatTensor

N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold input and outputs, and wrap them in Variables.
x = Variable(torch.randn(N, D_in).type(dtype), requires_grad=False)
y = Variable(torch.randn(N, D_out).type(dtype), requires_grad=False)

# Create random Tensors for weights, and wrap them in Variables.
w1 = Variable(torch.randn(H, D_in).type(dtype), requires_grad=True)
w2 = Variable(torch.randn(H, D_out).type(dtype), requires_grad=True)

# I've created this scalar variable (the kernel bandwidth)
s = Variable(torch.randn(1).type(dtype), requires_grad=True)

learning_rate = 1e-6
for t in range(500):
    # To apply our Function, we use the Function.apply method. We alias this as 'relu'.
    relu = MyReLU.apply

    # Forward pass: compute predicted y using operations on Variables; we compute
    # ReLU using our custom autograd operation.
    # y_pred = relu(x.mm(w1)).mm(w2)
    y_pred = relu(kernel_product(w1, x, s)).mm(w2)

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum()
    print(t, loss.data[0])

    # Use autograd to compute the backward pass.
    loss.backward()

    # Update weights using gradient descent
    w1.data -= learning_rate * w1.grad.data
    w2.data -= learning_rate * w2.grad.data

    # Manually zero the gradients after updating weights
    w1.grad.data.zero_()
    w2.grad.data.zero_()
However, I get this error, which disappears when I simply use a fixed scalar for the default input parameter of kernel_product():
RuntimeError: eq() received an invalid combination of arguments - got (str), but expected one of:
* (float other)
didn't match because some of the arguments have invalid types: (str)
* (Variable other)
didn't match because some of the arguments have invalid types: (str)
Well, you are calling kernel_product(w1, x, s), where w1, x and s are torch Variables, while the definition of the function is kernel_product(x, y, mode = "gaussian", s = 1.). So your third positional argument s gets bound to the mode parameter, which is expected to be a string; comparing that Variable against the string "gaussian" is what raises the eq() error.
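A minimal sketch of the fix, keeping the question's function signature: pass s by keyword (or pass the mode string explicitly) so the bandwidth no longer lands in the mode slot.

# keyword form: mode keeps its default, s is the bandwidth Variable
y_pred = relu(kernel_product(w1, x, s=s)).mm(w2)

# or positional form, spelling the mode out
y_pred = relu(kernel_product(w1, x, "gaussian", s)).mm(w2)

If you also want the bandwidth to be learned, remember to update s with its gradient and to zero s.grad in the training loop, just as you do for w1 and w2.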