How do I do a type OR type (string OR int)? - cerberus

I would like to be able to allow a string or an integer in a field. How do I do this?
This is my current schema:
'minSize': {'type': 'any'},

I'm quoting the docs:
A list of types can be used to allow different values
>>> v.schema = {'quotes': {'type': ['string', 'list']}}
>>> v.validate({'quotes': 'Hello world!'})
True
>>> v.validate({'quotes': ['Do not disturb my circles!', 'Heureka!']})
True
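
Applying that to the field above, a minimal sketch (assuming Cerberus 1.x, where 'string' and 'integer' are the built-in type names):

from cerberus import Validator

# A type list lets 'minSize' accept either a string or an integer.
v = Validator({'minSize': {'type': ['string', 'integer']}})
assert v.validate({'minSize': 'auto'})   # string passes
assert v.validate({'minSize': 42})       # integer passes
assert not v.validate({'minSize': 4.2})  # anything else is rejected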

Related

Passing additional arguments to _normalize_coerce methods in cerberus

I have some code (see below); it's by no means final, but it's the best way I've seen or conceived so far for validating multiple date formats in a somewhat performant way.
I'm wondering if there is a way to pass an additional argument to this kind of function (_normalize_coerce). It would be nice if the date format string could be defined in the schema, something like:
{
    "a_date": {
        "type": "datetime",
        "coerce": "to_datetime",
        "coerce_args": "%m/%d/%Y %H:%M"
    }
}
versus making a code change in the function to support an additional date format. I've looked through the docs and haven't found anything striking. There's a fairly good chance I'm looking at this all wrong, but I figured asking the experts was the best approach. I think defining it within the schema is the cleanest solution to the problem, but I'm all eyes and ears for facts, thoughts, and opinions.
Some context:
- Performance is essential, as this could be running against millions of rows in AWS Lambdas (and Cerbie (my nickname for Cerberus) isn't exactly a spring chicken :P).
- None of the schemas will be native Python dicts, as they're all defined in JSON/YAML, so it all needs to be string-friendly.
- I'm not using the built-in coercion, as the Python types cannot be parsed from strings.
- I don't need the datetime object, so regex is a possibility, just less explicit and less future-proof.
If this is all wrong and I'm grossly incompetent, please be gentle (づ。◕‿‿◕。)づ
from datetime import datetime
from typing import Union

def _normalize_coerce_to_datetime(self, value: Union[str, datetime, None]) -> Union[datetime, str, None]:
    '''
    Casts valid datetime strings to the datetime python type.
    :param value: (str, datetime, None): python datetime or datetime string
    :return: datetime, str, or None: the python datetime, the invalid
        datetime string, or None if the value is empty or None
    '''
    datetime_formats = ['%m/%d/%Y %H:%M']
    if isinstance(value, datetime):
        return value
    if value and not value.isspace():
        for fmt in datetime_formats:
            try:
                return datetime.strptime(value, fmt)
            except ValueError:
                # Fall through; an unparseable string is returned unchanged.
                date_time = value
        return date_time
    else:
        return None
I have attempted to do this myself and have not found a way to pass additional arguments to a custom _normalize_coerce rule. If you want to extend the Cerberus library with custom validators, you can include arguments and then access them through the constraints in the custom validator. Below is an example that I have used for a conditional-to-default coercer. Because I needed to specify the condition, the value to check against, and the value to return, I couldn't find a way to do this with normalize_coerce, so I applied it inside a validate rule and edited self.document, as seen in the code.
Schema:
{
    "columns": {
        "Customer ID": {
            "type": "number",
            "conditional_to_default": {
                "condition": "greater_than",
                "value_to_check_against": 100,
                "value_to_return": 22
            }
        }
    }
}
import operator

import cerberus

def _validate_conditional_to_default(self, constraint, field, value):
    """
    Test the values and transform if conditions are met.
    :param constraint: dictionary with the args needed for the conditional check
    :param field: field name
    :param value: field value
    :return: the new document value if applicable, or the existing document value if not
    """
    value_to_check_against = constraint["value_to_check_against"]
    value_to_return = constraint["value_to_return"]
    rule_name = 'conditional_to_default'
    condition_mapping_dict = {
        "greater_than": operator.gt,
        "less_than": operator.lt,
        "equal_to": operator.eq,
        "less_than_or_equal_to": operator.le,
        "greater_than_or_equal_to": operator.ge,
    }
    if constraint["condition"] in condition_mapping_dict:
        # Apply the default only when the condition holds.
        if condition_mapping_dict[constraint["condition"]](value, value_to_check_against):
            self.document[field] = value_to_return
        return self.document
    # Unknown condition: report a validation error.
    custom_error = cerberus.errors.ValidationError(
        document_path=(field,), schema_path=(field, rule_name),
        code=0x03, rule=rule_name,
        constraint="Condition must be one of: {condition_vals}".format(
            condition_vals=list(condition_mapping_dict.keys())),
        value=value, info=())
    self._error([custom_error])
    return self.document
This is probably the wrong way to do it, but I hope the above gives you some inspiration and gets you a bit further. Equally, I'm following this to see if anyone else has found a way to pass arguments to the _normalize_coerce function.
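
One workaround worth sketching (my own assumption, not something confirmed in this thread): since Cerberus looks the coercer up by name on the validator class, you can bake the format string in when you build the validator, keeping the schema itself string-friendly. make_validator below is a hypothetical helper:

from datetime import datetime
from cerberus import Validator

def make_validator(fmt):
    # Hypothetical factory: returns a Validator subclass whose
    # 'to_datetime' coercer uses the supplied format string.
    class DateValidator(Validator):
        def _normalize_coerce_to_datetime(self, value):
            if isinstance(value, datetime) or value is None:
                return value
            try:
                return datetime.strptime(value, fmt)
            except ValueError:
                return value  # leave invalid strings untouched
    return DateValidator

v = make_validator("%m/%d/%Y %H:%M")({"a_date": {"coerce": "to_datetime"}})
print(v.normalized({"a_date": "01/02/2020 13:30"}))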

fastapi+pydantic query parameter checking for complex arguments

Based on this API definition, my API supports queries like:
GET http://my.api.url/posts?sort=["title","ASC"]&range=[0, 24]&filter={"q":"bar"}
where some of the checks needed are:
- sort[1] is either "asc" or "desc" (case should not matter)
- filter has the key "q"; filter can have other keys
- range is a list of two integers, and range[0] is less than or equal to range[1]
In the FastAPI path definition I currently define filter, sort, and range as strings, as in the code below, convert them using json.loads, and do the checks.
@r.get(
    "/users",
    response_model=List[User],
    response_model_exclude_none=True,
)
async def list_users(
    filter: Optional[str] = None,
    sort: Optional[str] = None,
    range: Optional[str] = None,
    ...
):
...
How can I use pydantic definitions for checks and API definition instead of just using str, such that checks are done by pydantic, and openapi schema definitions are more descriptive?
There are some ways to do what you want, but FastAPI only allows sequence structures (list, tuple, set, Sequence) for Query and Header params, so &filter={"q":"bar"} won't work in FastAPI, at least as of version 0.75.0, which I am working with.
If you wish to support a dictionary, you should use the POST method with Body params. You can then wrap your values in a Pydantic model to get validation.
Below are two ways to implement this in FastAPI.
Solution 1:
class FilterModel(BaseModel):
    filter: dict

@router.post(
    "/posts"
)
async def list_users(
    filters: FilterModel,
    sort: Tuple[str, Literal["DESC", "ASC", "desc", "asc"]] = Query(("title", "ASC")),
    ranges: Tuple[int, int] = Query((0, 24))
) -> None:
    print(filters)
    print(sort)
    print(ranges)
This will allow you to call the API like:
POST http://my.api.url/posts?sort=title&sort=ASC&ranges=0&ranges=24
With body:
{
    "filter": {"q": "bar"}
}
Solution 2:
class MyModel(BaseModel):
    sortBy: Optional[str]
    sortOrder: Optional[Literal["DESC", "ASC", "desc", "asc"]]
    min_range: Optional[int]
    max_range: Optional[int]

class FilterModel(BaseModel):
    filter: dict

@router.post(
    "/posts"
)
async def list_users(
    filters: FilterModel,
    model: MyModel = Depends()
) -> None:
    print(filters)
    print(model)
This will allow you to call the API like:
POST http://my.api.url/posts?sortBy=title&sortOrder=ASC&min_range=0&max_range=24
With body:
{
    "filter": {"q": "bar"}
}
If you wish to use the GET method with a dict-like filter, you have to do some hacky things; a sketch follows.
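
For illustration, a minimal sketch of that hacky GET route (my own assumption, not code from this answer): accept filter as a raw string query param, json.loads it in a dependency, and validate the result by hand. parse_filter is a hypothetical helper:

import json
from typing import Optional
from fastapi import Depends, FastAPI, HTTPException, Query

app = FastAPI()

def parse_filter(filter: Optional[str] = Query(None)) -> dict:
    # Parse the raw query string value into a dict and enforce the "q" key.
    if filter is None:
        return {}
    try:
        parsed = json.loads(filter)
    except json.JSONDecodeError:
        raise HTTPException(status_code=422, detail="filter must be valid JSON")
    if not isinstance(parsed, dict) or "q" not in parsed:
        raise HTTPException(status_code=422, detail='filter must be an object with a "q" key')
    return parsed

@app.get("/posts")
async def list_posts(filter: dict = Depends(parse_filter)):
    return {"filter": filter}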

What does the intersection of two function types boil down to?

Can someone point me to a comprehensive guide on the theory behind Flow's function-type intersections? The behavior is confusing to me. I understand that this type:
type FnT = ((string) => string) & ((number) => string);
reduces down to (string | number) => (string & string), but why is it that I can't type the parameter as either string or number?
i.e. const g: FnT = (p: string) => { return "hi"; } gives me
Cannot assign function to g because string [1] is incompatible with number [2] in the first argument.
Why? Isn't string a perfectly valid subtype of string | number?
Is this because it expects a supertype?
If so, why does a union of the same two function types let me type the param as one or the other?
i.e.
type FnT = ((string) => string) | ((number) => string) works with
const g: FnT = (p: string) => ("hi"). Wouldn't we expect a supertype of string | number here too?
With Flow, you need to test all alternative types before casting.
For example, if your type is string | number and you want to cast to a number, you must first test that it is not actually a string.
This is because Flow will not try to modify your values for you; it is only a type checker. You must modify your values yourself, meaning Flow cannot 'convert' a number to a string, it can only check the type.
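
To connect this back to the original question: the intersection case comes down to parameter contravariance (my reading, not stated explicitly in the answer above). To satisfy both call signatures at once, the implementation's parameter must accept the union of both parameter types:

// FnT can be called with a string OR a number, so one implementation
// must handle both: the parameter type becomes the union.
type FnT = ((string) => string) & ((number) => string);

// OK: accepts every argument the intersection can be called with.
const f: FnT = (p: string | number) => "hi";

// Error: a string-only function can't satisfy the (number) => string signature.
// const g: FnT = (p: string) => "hi";

A union of function types only requires the implementation to match one of the signatures, which is why the string-only function is accepted there.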

Kotlin functional strategy pattern doesn't compile

I'm trying to put several functions in a map. The idea is to have: Map<String, [function]>.
The code is as follows:
class UserIdQueriesHandler {
    val strategy: Map<String, KFunction2<@ParameterName(name = "id") String, @ParameterName(name = "options") Array<Options>, Answer>> =
        mapOf( // the compiler complains here
            "d" to ::debt,
            "p" to ::currentProduct
        )

    fun debt(id: String, options: Array<DebtOptions>): UserDebt = UserDebt(isPresent = true, amount = 0.0)
    fun currentProduct(id: String, options: Array<CurrentProductOptions>): UserProducts = UserProducts(products = emptyList())
}

enum class DebtOptions : Options { BOOL, AMOUNT }
enum class CurrentProductOptions : Options { ALL, PRINT, DIGITAL, ENG, HEB, TM }
data class UserDebt(val isPresent: Boolean, val amount: Double) : Answer
data class UserProducts(val products: List<Int>) : Answer
Answer and Options are simple Kotlin interfaces:
interface Answer
interface Options
Compiler output:
Type inference failed. Expected type mismatch:
required:
Map<String, KFunction2<@ParameterName String, @ParameterName Array<Options>, Answer>>
found:
Map<String, KFunction2<@ParameterName String, {[@kotlin.ParameterName] Array<DebtOptions> & [@kotlin.ParameterName] Array<CurrentProductOptions>}, Answer>>
The type of strategy says the functions you put into it must accept any Array<Options> as the second argument, which debt and currentProduct don't.
The simplest workaround would be to change their argument type to Array<Options> (or List<Options>; they probably don't need to mutate it!) and either fail at runtime if the wrong options are passed or ignore them.
The Variance section of the documentation is also relevant:
Since an Array can be both read and written, its type parameter is invariant. This means you can't assign an Array<DebtOptions> to a variable of type Array<Options>. The former isn't a subtype of the latter, because that would allow you to put elements into the array that are Options but not DebtOptions, causing problems for code that still holds a reference to the array as an Array<DebtOptions>, as the sketch below shows.
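
A tiny sketch of the problem invariance prevents (hypothetical, for illustration):

val debts: Array<DebtOptions> = arrayOf(DebtOptions.BOOL)
// val opts: Array<Options> = debts      // rejected: Array is invariant
// opts[0] = CurrentProductOptions.ALL   // ...otherwise this would sneak a
//                                       // CurrentProductOptions into debts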
A solution would be to make your functions accept Array<Options>, if you can.
val strategy: Map<String, KFunction2<String, Array<Options>, Answer>> =
    mapOf(
        "d" to ::debt,
        "p" to ::currentProduct
    )

fun debt(id: String, options: Array<Options>): UserDebt = ...
fun currentProduct(id: String, options: Array<Options>): UserProducts = ...
You could combine this with using the nicer functional type instead of a KFunction2:
val strategy: Map<String, (String, Array<Options>) -> Answer> = ...
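
If the handlers really must keep their specific option types, one alternative (my own sketch, not from the answer above) is to adapt them at the map boundary with lambdas that narrow the options before delegating:

val strategy: Map<String, (String, Array<Options>) -> Answer> = mapOf(
    // filterIsInstance narrows the mixed options to each handler's element type
    "d" to { id, opts -> debt(id, opts.filterIsInstance<DebtOptions>().toTypedArray()) },
    "p" to { id, opts -> currentProduct(id, opts.filterIsInstance<CurrentProductOptions>().toTypedArray()) }
)

Options a handler doesn't recognize are silently dropped, which matches the "ignore them" option mentioned above.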

Ecto association to more than one schema

Let's say I have these schemas:
defmodule Sample.Post do
  use Ecto.Schema

  schema "post" do
    field :title
    has_many :comments, Sample.Comment
  end
end

defmodule Sample.User do
  use Ecto.Schema

  schema "user" do
    field :name
    has_many :comments, Sample.Comment
  end
end

defmodule Sample.Comment do
  use Ecto.Schema

  schema "comment" do
    field :text
    belongs_to :post, Sample.Post
    belongs_to :user, Sample.User
  end
end
My question is: how can I use Ecto.build_assoc to save a comment?
iex> post = Repo.get(Post, 13)
%Post{id: 13, title: "Foo"}
iex> comment = Ecto.build_assoc(post, :comments)
%Comment{id: nil, post_id: 13, user_id: nil}
So far so good; all I need is to use the same function to set the user_id in my Comment struct. However, since the return value of build_assoc is a Comment struct, I cannot use the same function:
iex> user = Repo.get(User, 1)
%User{id: 1, name: "Bar"}
iex> Ecto.build_assoc(user, :comments, comment)
** (UndefinedFunctionError) undefined function: Sample.Comment.delete/2
...
I have two options, but neither of them looks good to me.
The first one is to set user_id manually:
iex> comment = %{comment | user_id: user.id}
%Comment{id: nil, post_id: 13, user_id: 1}
The second one is to convert the struct to a map and ... I don't even want to go there.
Any suggestions?
Why don't you want to convert the struct to a map? It is really easy.
build_assoc expects a map of attributes as the last value. Internally, it tries to delete the key :__meta__. Structs have compile-time guarantees that they will contain all defined fields, so you are getting:
** (UndefinedFunctionError) undefined function: Sample.Comment.delete/2
But you can just write:
comment = Ecto.build_assoc(user, :comments, Map.from_struct(comment))
and everything will work fine.
Just pass it along to build_assoc:
iex> comment = Ecto.build_assoc(post, :comments, user_id: 1)
%Comment{id: nil, post_id: 13, user_id: 1}
Check here for more details.
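
Putting the two answers together, an end-to-end sketch (assuming the Repo and schemas above):

post = Repo.get(Post, 13)
user = Repo.get(User, 1)

# Build the comment off the post and pass the user foreign key directly.
comment = Ecto.build_assoc(post, :comments, user_id: user.id)
Repo.insert(comment)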
