Enforcing integers for declared inputs is not possible - openmdao

I am trying to declare an input with integer default values, but it does not seem possible. Am I making a mistake, or is float enforced in the OpenMDAO core?
Here are the code snippets I tried.
Expected output, something like: array([1, 1, 1])
Received output: [1. 1. 1.]
from openmdao.api import ExplicitComponent, Problem, IndepVarComp
import numpy as np
class CompAddWithArrayIndices(ExplicitComponent):
    """Component for tests for declaring with array val and array indices."""

    def setup(self):
        self.add_input('x_a', val=np.ones(6, dtype=int))
        self.add_input('x_b', val=[1]*5)
        self.add_output('y')

p = Problem(model=CompAddWithArrayIndices())
p.setup()
p.run_model()
print(p['x_a'])
print(p['x_b'])
#%%
from openmdao.api import ExplicitComponent, Problem, IndepVarComp
import numpy as np
class CompAddWithArrayIndices(ExplicitComponent):
    """Component for tests for declaring with array val and array indices."""

    def setup(self):
        self.add_input('x_a', val=np.zeros(3, dtype=int))
        self.add_output('y')

prob = Problem()
ivc = IndepVarComp()
prob.model.add_subsystem('ivc', ivc, promotes=['*'])
ivc.add_output('x_a', val=np.ones(3, dtype=int))
prob.model.add_subsystem('comp1', CompAddWithArrayIndices(), promotes=['*'])
prob.setup()
prob.run_model()
print(prob['x_a'])

Variables added via add_input or add_output will be converted to floats or float arrays. If you want a variable to be an int or any other discrete type, you must use add_discrete_input and add_discrete_output. Such variables will be passed between systems based on connection information, but no attempt will be made to compute their derivatives.
Discrete variable support was added in OpenMDAO v2.5 as an experimental feature (it's still being developed). As of commit 709401e535cf6933215abd942d4b4d49dbf61b2b on the master branch, the promotion problem has been fixed, so make sure you're using a version of OpenMDAO from that commit or later.
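As a sketch of what that looks like (the component and variable names here are illustrative, not from your snippet, and the exact access behaviour may vary slightly between versions):

import numpy as np
from openmdao.api import ExplicitComponent, Problem

class CompWithDiscreteArray(ExplicitComponent):
    """Illustrative component using discrete (non-float) variables."""

    def setup(self):
        # discrete variables keep whatever Python object you give them, here integer arrays
        self.add_discrete_input('x_a', val=np.ones(3, dtype=int))
        self.add_discrete_output('y', val=np.zeros(3, dtype=int))

    def compute(self, inputs, outputs, discrete_inputs, discrete_outputs):
        # no derivatives are computed for discrete variables
        discrete_outputs['y'] = discrete_inputs['x_a'] * 2

p = Problem()
p.model.add_subsystem('comp', CompWithDiscreteArray(), promotes=['*'])
p.setup()
p.run_model()
print(p['x_a'])  # should print [1 1 1] with the integer dtype preserved
print(p['y'])    # should print [2 2 2]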

Related

Define variable in problem which is not used as input, and equals a previous input/output

I would like to define a variable which, depending on certain options, will be equal to a previous output (as if the previous output had two names) or will be the output of a new component.
A trivial solution is to just omit the definition of the value when the component which would define it is not implemented, but I would prefer it to be defined for readability/traceability reasons (to simplify if statements in the code, and to provide it as timeseries output).
The problem is that, when using the connect statement, if the chosen option does not lead to the variable being used as an input to another component, OpenMDAO raises an error saying that it attempted to connect but the variable does not exist.
I made a temporary fix with a sort of link component (LinkVarComp below), which creates an explicit component whose output equals its input (plus some extras such as scaling and a shift, which could be useful for linear equations), but I am worried that this adds unnecessary computations/design variables/constraints.
Is there an easier/better workaround (maybe by allowing variables to have multiple names)? What would be the best practice for just having a variable with a different name equal to a previous output/input?
A simple example:
import numpy as np
import openmdao.api as om

model = om.Group()
model.add_subsystem('xcomp', subsys=om.IndepVarComp(name='x', val=np.zeros((3, 2))), promotes_outputs=['*'])
model.connect('x', 'y')

p = om.Problem(model)
p.setup(force_alloc_complex=True)
p.set_val('x', np.array([[1.0, 3], [10, -5], [0, 3.1]]))
p.run_model()
This crashes with the error:
NameError: <model> <class Group>: Attempted to connect from 'x' to 'y', but 'y' doesn't exist.
While this works when using the following LinkVarComp component (though I suppose it adds new variables and computations):
import openmdao.api as om
import numpy as np
from math import prod

class LinkVarComp(om.ExplicitComponent):
    """
    Component linking an output to an input through a linear relation.
    """

    def initialize(self):
        """
        Declare component options.
        """
        self.options.declare('shape', types=(int, tuple), default=1)
        self.options.declare('scale', types=int, default=1)
        self.options.declare('shift', types=float, default=0.)
        self.options.declare('input_default', types=float, default=0.)
        self.options.declare('input_name', types=str, default='x')
        self.options.declare('output_name', types=str, default='y')
        self.options.declare('output_default', types=float, default=0.)
        self.options.declare('input_units', types=(str, None), default=None)
        self.options.declare('output_units', types=(str, None), default=None)

    def setup(self):
        self.add_input(name=self.options['input_name'], val=self.options['input_default'],
                       shape=self.options['shape'], units=self.options['input_units'])
        self.add_output(name=self.options['output_name'], val=self.options['output_default'],
                        shape=self.options['shape'], units=self.options['output_units'])

        if type(self.options['shape']) == int:
            n = self.options['shape']
        else:
            n = prod(self.options['shape'])

        ar = np.arange(n)
        self.declare_partials(of=self.options['output_name'], wrt=self.options['input_name'],
                              rows=ar, cols=ar, val=self.options['scale'])

    def compute(self, inputs, outputs):
        outputs[self.options['output_name']] = self.options['scale'] * inputs[self.options['input_name']] + self.options['shift']

model = om.Group()
model.add_subsystem('xcomp', subsys=om.IndepVarComp(name='x', val=np.zeros((3, 2))), promotes_outputs=['*'])
model.add_subsystem('link', LinkVarComp(shape=(3, 2)),
                    promotes_inputs=['*'],
                    promotes_outputs=['*'])

p = om.Problem(model)
p.setup(force_alloc_complex=True)
p.set_val('x', np.array([[1.0, 3], [10, -5], [0, 3.1]]))
p.run_model()
print(p['y'])
Outputting the expected:
[[ 1.   3. ]
 [10.  -5. ]
 [ 0.   3.1]]
In OpenMDAO, you cannot have the same variable take on two separate names. That's simply not allowed.
The solution you came up with is effectively creating a separate component to hold a copy of the output. That works. You could use an ExecComp to have the same effect with a little less code:
import numpy as np
import openmdao.api as om
model = om.Group()
model.add_subsystem('xcomp', subsys=om.IndepVarComp(name='x', val=np.zeros((3, 2))), promotes_outputs=['*'])
model.add_subsystem('ycomp', om.ExecComp("y=x", shape=(3, 2)), promotes=['*'])

p = om.Problem(model)
p.setup(force_alloc_complex=True)
p.set_val('x', np.array([[1.0, 3], [10, -5], [0, 3.1]]))
p.run_model()
print(p['x'])
print(p['y'])
In general, I probably wouldn't actually do this myself. It seems kind of wasteful. Instead, I would modify my post-processing script to look for y and, if it doesn't find it, to grab x instead.
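If you go the post-processing route, a minimal sketch of that fallback (assuming p is the Problem instance and that looking up a variable that doesn't exist raises a KeyError) could be:

# prefer the optional 'y' output, fall back to 'x' when 'y' was never created
try:
    val = p['y']
except KeyError:
    val = p['x']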

Reference a variable of a dataclass in a field with a default_factory

I want to reference a dataclass variable in a lambda function for a default_factory like:
from typing import List
from dataclasses import dataclass, field
@dataclass
class A:
    a: float = 1
    b: List = field(default_factory=lambda: [a])
but I get an error that the variable is undefined. How can I solve this?
You have a scoping problem. By the time the lambda function is executed, a isn't visible to it any more, so it doesn't know how to resolve it. See also the much simpler examples in the python docs on delayed lambda execution to understand the issue.
You can fix it by binding a to the lambda's local scope during its creation:
@dataclass
class A:
    a: float = 1
    b: List = field(default_factory=(lambda a=a: [a]))
Looks a bit weird, but it does the job.
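A quick check of what the bound default actually does (note that the lambda captures the class-level default of a, not the per-instance value):

a1 = A()
a2 = A(a=2)
print(a1.b)          # [1]
print(a2.b)          # also [1]: the factory saw the class-level default, not a2's value
print(a1.b is a2.b)  # False, each instance still gets its own fresh list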

Query parameters from pydantic model

Is there a way to convert a pydantic model to query parameters in fastapi?
Some of my endpoints pass parameters via the body, but some others pass them directly in the query. All these endpoints share the same data model, for example:
class Model(BaseModel):
    x: str
    y: str
I would like to avoid duplicating my definition of this model in the definition of my "query-parameters endpoints", like for example test_query in this code:
class Model(BaseModel):
    x: str
    y: str

@app.post("/test-body")
def test_body(model: Model): pass

@app.post("/test-query-params")
def test_query(x: str, y: str): pass
What's the cleanest way of doing this?
The documentation gives a shortcut to avoid this kind of repetition. In this case, it would give:
from fastapi import Depends

@app.post("/test-query-params")
def test_query(model: Model = Depends()): pass
This will allow you to request /test-query-params?x=1&y=2 and will also produce the correct OpenAPI description for this endpoint.
Similar solutions can be used for using Pydantic models as form-data descriptors.
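A quick way to check this behaviour (a sketch, assuming the Model and an app = FastAPI() instance with the route above live in the same module):

from fastapi.testclient import TestClient

client = TestClient(app)
resp = client.post("/test-query-params?x=1&y=2")
print(resp.status_code)  # 200; omitting x or y should instead return a 422 validation error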
A special case that isn't mentioned in the documentation is query parameter lists, for example with:
/members?member_ids=1&member_ids=2
The answer provided by @cglacet will unfortunately ignore the array for such a model:
class Model(BaseModel):
    member_ids: List[str]
You need to modify your model like so:
class Model(BaseModel):
    member_ids: List[str] = Field(Query([]))
Answer from @fnep on GitHub here
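For context, a minimal sketch of that model wired into an endpoint (the route path and return value are assumptions for illustration):

from typing import List
from fastapi import Depends, FastAPI, Query
from pydantic import BaseModel, Field

app = FastAPI()

class Model(BaseModel):
    member_ids: List[str] = Field(Query([]))

@app.get("/members")
def get_members(model: Model = Depends()):
    # /members?member_ids=1&member_ids=2 -> model.member_ids == ['1', '2']
    return model.member_ids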
This solution is very apt if your schema is "minimal".
But when it comes to a complicated one, like the one in Set description for query parameter in swagger doc using Pydantic model, it is better to use a "custom dependency class":
from fastapi import Depends, FastAPI, Query

app = FastAPI()

class Model:
    def __init__(
        self,
        y: str,
        x: str = Query(
            default='default for X',
            title='Title for X',
            deprecated=True
        )
    ):
        self.x = x
        self.y = y

@app.post("/test-body")
def test_body(model: Model = Depends()):
    return model
If you are using this method, you will have more control over the OpenAPI doc.
@cglacet's answer is simple and works, but it will raise a pydantic ValidationError when validation fails and will not pass the error to the client.
You can find the reason here.
The following works and passes the message to the client. Code from here.
import inspect
from datetime import datetime

from fastapi import Query, FastAPI, Depends
from fastapi.exceptions import RequestValidationError
from pydantic import BaseModel, ValidationError

class QueryBaseModel(BaseModel):
    def __init_subclass__(cls, *args, **kwargs):
        field_default = Query(...)
        new_params = []

        for field in cls.__fields__.values():
            default = Query(field.default) if not field.required else field_default
            annotation = inspect.Parameter.empty
            new_params.append(
                inspect.Parameter(
                    field.alias,
                    inspect.Parameter.POSITIONAL_ONLY,
                    default=default,
                    annotation=annotation,
                )
            )

        async def _as_query(**data):
            try:
                return cls(**data)
            except ValidationError as e:
                raise RequestValidationError(e.raw_errors)

        sig = inspect.signature(_as_query)
        sig = sig.replace(parameters=new_params)
        _as_query.__signature__ = sig  # type: ignore
        setattr(cls, "as_query", _as_query)

    @staticmethod
    def as_query(parameters: list) -> "QueryBaseModel":
        raise NotImplementedError

class ParamModel(QueryBaseModel):
    start_datetime: datetime

app = FastAPI()

@app.get("/api")
def test(q_param: ParamModel = Depends(ParamModel.as_query)):
    start_datetime = q_param.start_datetime
    ...
    return {}
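For reference, a sketch of how the difference in error handling can be exercised with the test client (the datetime values are illustrative):

from fastapi.testclient import TestClient

client = TestClient(app)
# a well-formed value validates into ParamModel
print(client.get("/api", params={"start_datetime": "2021-01-01T00:00:00"}).status_code)  # 200
# a malformed value is reported back to the client as a standard 422 response
print(client.get("/api", params={"start_datetime": "not-a-date"}).status_code)  # 422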

Vector of registers size can not be parametrized by module parameter

I want to use a module parameter as the size parameter of a Vector that contains registers, so I tried the following code:
package Test;

import Vector :: *;

(* synthesize *)
module mkTest #(
   parameter UInt#(32) qsize
) (Empty);
   Vector#(qsize, Reg#(Bit#(8))) queue <- replicateM (mkReg (0));
endmodule

endpackage
But when compiling this module with bsc, I get the following error message:
Verilog generation
bsc -verilog -remove-dollar Test.bsv
Error: "Test.bsv", line 9, column 11: (T0008)
Unbound type variable `qsize'
bsc version:
Bluespec Compiler (build e55aa23)
If I don't use registers as the type of the Vector elements, everything is OK. The following code produces no errors:
package Test;

import Vector :: *;

(* synthesize *)
module mkTest #(
   parameter UInt#(32) qsize
) (Empty);
   Vector#(qsize, Bit#(8)) queue = replicate(0);
endmodule

endpackage
I cannot understand why qsize is unbound, as it is clearly declared as a parameter. If I did something wrong, could you please explain how to make a parameterized-size Vector of Regs correctly?
I asked this question in one of the Bluespec repositories on GitHub and Rishiyur S. Nikhil gave me a very complete explanation. See https://github.com/BSVLang/Main/issues/4
In short: the first parameter of Vector needs to be a type, not a UInt (or Int or anything else) value. So the right way is to:
1. Make an interface for the module and make it type-polymorphic.
2. Use the type from that interface as the Vector size parameter.
package Test;

import Vector :: *;

interface Queue_IFC #(numeric type qsize_t);
   method Bool done;
endinterface

module mkQueue ( Queue_IFC #(qsize_t) );
   Vector #(qsize_t, Reg #(Bit #(8))) queue <- replicateM (mkReg (0));
endmodule

endpackage

OpenMDAO 1.x: Difficulty accessing variables implicitly linked through multiple Groups

I am having trouble accessing variables that are implicitly linked through multiple layers of groups. According to the documentation:
In new OpenMDAO, Groups are NOT Components and do not have their own
variables. Variables can be promoted to the Group level by passing the
promotes arg to the add call, e.g.,
group = Group()
group.add('comp1', Times2(), promotes=['x'])
This will allow the variable x that belongs to comp1 to be accessed
via group.params['x'].
However, when I try to access variables of sub-sub-groups I am getting errors. Please see example below that shows a working and non-working example:
from openmdao.api import Component, Group, Problem
import numpy as np

class Times2(Component):
    def __init__(self):
        super(Times2, self).__init__()
        self.add_param('x', 1.0, desc='my var x')
        self.add_output('y', 2.0, desc='my var y')

    def solve_nonlinear(self, params, unknowns, resids):
        unknowns['y'] = params['x'] * 2.0

    def linearize(self, params, unknowns, resids):
        J = {}
        J[('y', 'x')] = np.array([2.0])
        return J

class PassGroup1(Group):
    def __init__(self):
        super(PassGroup1, self).__init__()
        self.add('t1', Times2(), promotes=['*'])

class PassGroup2(Group):
    def __init__(self):
        super(PassGroup2, self).__init__()
        self.add('g1', PassGroup1(), promotes=['*'])

prob = Problem(root=Group())
prob.root.add('comp', PassGroup2(), promotes=['*'])
prob.setup()
prob.run()

# this works
print prob.root.comp.g1.t1.params['x']
# this does not
print prob.root.params['x']
Could you explain why this does not work, and how I can make variables available to the top level without a knowledge of the lower level groups?
There are a few answers to your question. First, I'll point out that you have what we call a "hanging parameter". By this I mean a parameter on a component (or linked to multiple components via promotion and/or connection) that has no ultimate src variable associated with it. So, just for a complete understanding, it needs to be stated that, as far as OpenMDAO is concerned, hanging parameters are not its problem. As a convenience to the user, we provide an easy way for you to set its value in the problem instance, but we never do any data passing with it during run time.
In the common case where x is a design variable for an optimizer, you would create an IndepVarComp to provide the src for this value. But since you don't have an optimizer in your example it is not technically wrong to leave out the IndepVarComp.
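In the original example that would look something like this (a sketch; the subsystem name 'x_src' is arbitrary, and it must be added before calling prob.setup()):

from openmdao.api import IndepVarComp

# give the hanging parameter 'x' a source so an optimizer could drive it
prob.root.add('x_src', IndepVarComp('x', 1.0), promotes=['x'])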
For a more direct answer to your question: you shouldn't really be reaching down into the params dictionaries at any sub-level. I can't think of a good reason to do that as a user. If you stick with problem['x'], you should never go wrong.
But since you asked, here are the details of what's really going on for a slightly modified case that allows there to be an actual parameter.
from openmdao.api import Component, Group, Problem
import numpy as np

class Plus1(Component):
    def __init__(self):
        super(Plus1, self).__init__()
        self.add_param('w', 4.0)
        self.add_output('x', 5.0)

    def solve_nonlinear(self, params, unknowns, resids):
        unknowns['x'] = params['w'] + 1

    def linearize(self, params, unknowns, resids):
        J = {}
        J['x', 'w'] = 1
        return J

class Times2(Component):
    def __init__(self):
        super(Times2, self).__init__()
        self.add_param('x', 1.0, desc='my var x')
        self.add_output('y', 2.0, desc='my var y')

    def solve_nonlinear(self, params, unknowns, resids):
        unknowns['y'] = params['x'] * 2.0

    def linearize(self, params, unknowns, resids):
        J = {}
        J[('y', 'x')] = np.array([2.0])
        return J

class PassGroup1(Group):
    def __init__(self):
        super(PassGroup1, self).__init__()
        self.add('t1', Times2(), promotes=['x', 'y'])

class PassGroup2(Group):
    def __init__(self):
        super(PassGroup2, self).__init__()
        self.add('g1', PassGroup1(), promotes=['x', 'y'])
        self.add('p1', Plus1(), promotes=['w', 'x'])

prob = Problem(root=Group())
prob.root.add('comp', PassGroup2(), promotes=['w', 'x', 'y'])
prob.setup()
prob.run()

# this works
print prob.root.comp.g1.t1.params['x']
# this does not
print prob.root.comp.params.keys()
Please note that in my example, 'x' is no longer free for the user to set. It's now computed by 'p1'. Instead, 'w' is now the user-set parameter. This was necessary in order to illustrate how params work.
Now that there is actually some data passing going on that OpenMDAO is responsible for, you can see the actual pattern more clearly. At the root, there are no parameters at all (excluding any hanging params). Everything, from the root's perspective, is an unknown, because everything has a src responsible for it at that level. Go down one level, where there are p1 and g1, and now there is a parameter on g1 that p1 is the src for, so some data passing has to happen at that level of the hierarchy. So g1 has an entry in its parameter dictionary, g1.t1.x. Why is it a full path? All bookkeeping for parameters is done with full path names for a variety of reasons outside the scope of this answer. But that is also another motivation for working through the shortcut in problem, because that will work with relative (or promoted) names.
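So for the modified example above, the recommended access pattern is simply to go through the problem instance (a sketch; the value of 'w' is illustrative):

# set the hanging parameter and read the promoted unknowns through the Problem
prob['w'] = 10.0
prob.run()
print(prob['x'])  # 11.0, computed by p1
print(prob['y'])  # 22.0, computed by t1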

Resources