Query parameters from pydantic model - fastapi

Is there a way to convert a pydantic model to query parameters in fastapi?
Some of my endpoints pass parameters via the body, but some others pass them directly in the query string. All these endpoints share the same data model, for example:
class Model(BaseModel):
    x: str
    y: str
I would like to avoid duplicating my definition of this model in the definition of my "query-parameters endpoints", like for example test_query in this code:
class Model(BaseModel):
    x: str
    y: str

@app.post("/test-body")
def test_body(model: Model): pass

@app.post("/test-query-params")
def test_query(x: str, y: str): pass
What's the cleanest way of doing this?

The documentation gives a shortcut to avoid this kind of repetition. In this case, it would give:
from fastapi import Depends

@app.post("/test-query-params")
def test_query(model: Model = Depends()): pass
This will allow you to request /test-query-params?x=1&y=2 and will also produce the correct OpenAPI description for this endpoint.
Similar solutions can be used for Pydantic models as form-data descriptors.
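For illustration, a common way to reuse the same model for form fields is to add a classmethod that declares each field as Form (a sketch, not from the original answer; the as_form name and the /test-form path are made up for this example):

from fastapi import Depends, FastAPI, Form
from pydantic import BaseModel

app = FastAPI()

class Model(BaseModel):
    x: str
    y: str

    @classmethod
    def as_form(cls, x: str = Form(...), y: str = Form(...)) -> "Model":
        # Build the Pydantic model from the individual form fields
        return cls(x=x, y=y)

@app.post("/test-form")
def test_form(model: Model = Depends(Model.as_form)):
    return model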

A special case that isn't mentioned in the documentation is query parameter lists, for example with:
/members?member_ids=1&member_ids=2
The answer provided by @cglacet will unfortunately ignore the array for such a model:
class Model(BaseModel):
    member_ids: List[str]
You need to modify your model like so:
class Model(BaseModel):
    member_ids: List[str] = Field(Query([]))
Answer from @fnep on GitHub here
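Put together, a self-contained version of this pattern might look as follows (a sketch; the /members path and the imports are assumptions made for this example):

from typing import List

from fastapi import Depends, FastAPI, Query
from pydantic import BaseModel, Field

app = FastAPI()

class Model(BaseModel):
    member_ids: List[str] = Field(Query([]))

@app.get("/members")
def get_members(model: Model = Depends()):
    # /members?member_ids=1&member_ids=2 -> {"member_ids": ["1", "2"]}
    return model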

This solution works well if your schema is "minimal".
But when it comes to a complicated one, like the one in Set description for query parameter in swagger doc using Pydantic model, it is better to use a "custom dependency class":
from fastapi import Depends, FastAPI, Query

app = FastAPI()

class Model:
    def __init__(
        self,
        y: str,
        x: str = Query(
            default='default for X',
            title='Title for X',
            deprecated=True
        )
    ):
        self.x = x
        self.y = y

@app.post("/test-body")
def test_body(model: Model = Depends()):
    return model
If you are using this method, you will have more control over the OpenAPI doc.
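For instance, exercising the endpoint with the test client shows the dependency being filled from the query string (a sketch, assuming the app defined above is in scope):

from fastapi.testclient import TestClient

client = TestClient(app)

# y is required; x falls back to its declared default
response = client.post("/test-body?y=hello")
print(response.json())  # e.g. {"x": "default for X", "y": "hello"}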

@cglacet's answer is simple and works, but it will raise a pydantic ValidationError when validation fails and will not pass the error to the client.
You can find the reason here.
This works and passes the message to the client. Code from here.
import inspect
from datetime import datetime

from fastapi import Query, FastAPI, Depends
from pydantic import BaseModel, ValidationError
from fastapi.exceptions import RequestValidationError

class QueryBaseModel(BaseModel):
    def __init_subclass__(cls, *args, **kwargs):
        field_default = Query(...)
        new_params = []
        for field in cls.__fields__.values():
            default = Query(field.default) if not field.required else field_default
            annotation = inspect.Parameter.empty
            new_params.append(
                inspect.Parameter(
                    field.alias,
                    inspect.Parameter.POSITIONAL_ONLY,
                    default=default,
                    annotation=annotation,
                )
            )

        async def _as_query(**data):
            try:
                return cls(**data)
            except ValidationError as e:
                raise RequestValidationError(e.raw_errors)

        sig = inspect.signature(_as_query)
        sig = sig.replace(parameters=new_params)
        _as_query.__signature__ = sig  # type: ignore
        setattr(cls, "as_query", _as_query)

    @staticmethod
    def as_query(parameters: list) -> "QueryBaseModel":
        raise NotImplementedError

class ParamModel(QueryBaseModel):
    start_datetime: datetime

app = FastAPI()

@app.get("/api")
def test(q_param: ParamModel = Depends(ParamModel.as_query)):
    start_datetime = q_param.start_datetime
    ...
    return {}
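A quick check with the test client illustrates the point of this answer: an invalid query value comes back to the client as a normal 422 validation response instead of an unhandled ValidationError (a sketch, assuming the code above is in scope):

from fastapi.testclient import TestClient

client = TestClient(app)

# An invalid datetime is reported back to the client as a 422 response
response = client.get("/api", params={"start_datetime": "not-a-date"})
print(response.status_code)  # 422
print(response.json())       # standard FastAPI validation error body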

Related

fastapi + sqlalchemy + pydantic → how to read data return to schema

I'm trying to use FastAPI, SQLAlchemy and Pydantic.
In the request body I have a schema with an optional list field named files (files: list[schemas.ImageBase]).
I need to read all the entered data one by one, but it doesn't let me loop over the returned list.
This also happens when I get a query result, for example using:
def get_setting(svalue: int, s_name: str):
    db = SessionLocal()
    query = db.query(models.Setting)\
        .filter(
            models.Setting.svalue == svalue,
            models.Setting.appuser == s_name
        ).all()
    return query
which is then used in:

async def get_settings(svalue: int, name: str):
    values = crud.get_setting(svalue=svalue, s_name=name)
    return {"settings": values}
But I can't loop (with a for) over values.
Why? Do I have to set something, or am I using the query or Pydantic incorrectly?
I expect to get a list or dictionary and to be able to read the data.

How to negate Airflow sensor task result?

Is there a built-in facility or some operator that will run a sensor and negate its status? I am writing a workflow that needs to detect that an object does not exist in order to proceed to eventual success. I have a sensor, but it detects when the object does exist.
For instance, I would like my workflow to detect that an object does not exist. I need almost exactly S3KeySensor, except that I need to negate its status.
The use case you are describing is checking for a key in S3: if it exists, wait; otherwise, continue the workflow. As you mentioned, this is a Sensor use case. The S3Hook has a check_for_key function that checks if a key exists, so all that is needed is to wrap it with a Sensor poke function.
A basic implementation would be:
from typing import Optional, Sequence, Union

from airflow.providers.amazon.aws.hooks.s3 import S3Hook
from airflow.sensors.base import BaseSensorOperator

class S3KeyNotPresentSensor(BaseSensorOperator):
    """Waits for a key to not be present in S3."""

    template_fields: Sequence[str] = ('bucket_key', 'bucket_name')

    def __init__(
        self,
        *,
        bucket_key: str,
        bucket_name: Optional[str] = None,
        aws_conn_id: str = 'aws_default',
        verify: Optional[Union[str, bool]] = None,
        **kwargs,
    ):
        super().__init__(**kwargs)
        self.bucket_name = bucket_name
        self.bucket_key = [bucket_key] if isinstance(bucket_key, str) else bucket_key
        self.aws_conn_id = aws_conn_id
        self.verify = verify
        self.hook: Optional[S3Hook] = None

    def poke(self, context: 'Context'):
        # Succeed only when none of the keys is present
        return not any(
            self.get_hook().check_for_key(key, self.bucket_name) for key in self.bucket_key
        )

    def get_hook(self) -> S3Hook:
        """Create and return an S3Hook"""
        if self.hook:
            return self.hook
        self.hook = S3Hook(aws_conn_id=self.aws_conn_id, verify=self.verify)
        return self.hook
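Wired into a DAG, the sensor could be used like this (a sketch; the DAG id, bucket and key names are made up for illustration):

from datetime import datetime

from airflow import DAG

with DAG(
    dag_id="wait_for_key_absence",
    start_date=datetime(2022, 1, 1),
    schedule_interval=None,
) as dag:
    wait_for_absence = S3KeyNotPresentSensor(
        task_id="wait_for_absence",
        bucket_key="path/to/object.csv",  # hypothetical key
        bucket_name="my-bucket",          # hypothetical bucket
        poke_interval=60,
        timeout=60 * 60,
        mode="reschedule",
    )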
I ended up going another way. I can use the trigger_rule argument of (any) Task -- by setting it to one_failed or all_failed on the next task I can play around with the desired status.
For example,
from airflow.operators.smooth import SmoothOperator
from airflow.sensors.filesystem import FileSensor

file_exists = FileSensor(task_id='exists', timeout=3, poke_interval=1, filepath='/tmp/error', mode='reschedule')
sing = SmoothOperator(task_id='sing', trigger_rule='all_failed')
file_exists >> sing
It requires no added code or operator, but has the possible disadvantage of being somewhat surprising.
Replying to myself in the hope that this may be useful to someone else. Thanks!

Airflow 2 Push Xcom with Key Name

In Airflow 2 taskflow API I can, using the following code examples, easily push and pull XCom values between tasks:-
@task(task_id="task_one")
def get_height() -> int:
    response = requests.get("https://swapi.dev/api/people/4")
    data = json.loads(response.text)
    height = int(data["height"])
    return height

@task(task_id="task_two")
def check_height(val):
    # Show val:
    print(f"Value passed in is: {val}")

check_height(get_height())
I can see that the val passed into check_height is 202 and is wrapped in the xcom default key 'return_value' and that's fine for some of the time, but I generally prefer to use specific keys.
My question is how can I push the XCom with a named key? This was really easy previously with ti.xcom_push where you could just supply the key name you wanted the value to be stuffed into, but I can't quite put my finger on how to achieve this in the taskflow api workflow.
Would appreciate any pointers or (simple, please!) examples on how to do this.
You can ask for the task instance (ti) directly in the decorated function's signature and Airflow will inject it:

@task(task_id="task_one")
def get_height(ti=None) -> int:
    response = requests.get("https://swapi.dev/api/people/4")
    data = json.loads(response.text)
    height = int(data["height"])
    # Push a named XCom
    ti.xcom_push(key="my_key", value=height)
    return height
For cases where you need the context in a deeply nested function, you can also use get_current_context. I'll use it in my example below just to show it, but it's not really required in your case.
Here is a working example:
import json
from datetime import datetime

import requests
from airflow.decorators import dag, task
from airflow.operators.python import get_current_context

DEFAULT_ARGS = {"owner": "airflow"}

@dag(dag_id="stackoverflow_dag", default_args=DEFAULT_ARGS, schedule_interval=None, start_date=datetime(2020, 2, 2))
def my_dag():
    @task(task_id="task_one")
    def get_height() -> int:
        response = requests.get("https://swapi.dev/api/people/4")
        data = json.loads(response.text)
        height = int(data["height"])
        # Push a named XCom
        context = get_current_context()
        ti = context["ti"]
        ti.xcom_push("my_key", height)
        return height

    @task(task_id="task_two")
    def check_height(val):
        # Show val:
        print(f"Value passed in is: {val}")
        # Read the named XCom
        context = get_current_context()
        ti = context["ti"]
        pulled = ti.xcom_pull(task_ids="task_one", key="my_key")
        print(f"Value passed from xcom my_key is: {pulled}")

    check_height(get_height())

my_dag = my_dag()
Two XComs are pushed (one for the returned value and one with the key we chose), and both are printed in the downstream task_two.

Unable to pass args to context.job_queue.run_once in Python Telegram bot API

In the following code, how can we pass context.args and context to another function, in this case callback_search_msgs?
from telegram.ext import CommandHandler, Updater

def search_msgs(update, context):
    print('In TG, args', context.args)
    context.job_queue.run_once(callback_search_msgs, 1, context=context, job_kwargs={'keys': context.args})

def callback_search_msgs(context, keys):
    print('Args', keys)
    chat_id = context.job.context
    print('Chat ID ', chat_id)

def main():
    updater = Updater(token, use_context=True)
    dp = updater.dispatcher
    dp.add_handler(CommandHandler("search_msgs", search_msgs, pass_job_queue=True,
                                  pass_user_data=True))
    updater.start_polling()
    updater.idle()

if __name__ == '__main__':
    main()
A few notes:
Job callbacks accept exactly one argument of type CallbackContext, not two.
The job_kwargs parameter is used to pass keyword arguments to the APScheduler backend that JobQueue is built on. The way you're trying to use it doesn't work.
If you only need the chat_id in the job, you don't have to pass the whole context argument of search_msgs. Just do context.job_queue.run_once(..., context=chat_id, ...).
If you want to pass both the chat_id and context.args, you can e.g. pass them as a tuple:
job_queue.run_once(..., context=(chat_id, context.args), ...)
and then retrieve them in the job via chat_id, args = context.job.context (see the sketch after these notes).
Since you're using use_context=True (which is the default in v13+, btw), the pass_* parameters of CommandHandler (or any other handler) have no effect at all.
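Putting those notes together, a corrected version of the two functions might look like this (a sketch, assuming python-telegram-bot v13):

def search_msgs(update, context):
    chat_id = update.effective_chat.id
    # Pass both the chat id and the command arguments to the job as a tuple
    context.job_queue.run_once(callback_search_msgs, 1, context=(chat_id, context.args))

def callback_search_msgs(context):
    # Job callbacks receive exactly one CallbackContext argument
    chat_id, keys = context.job.context
    print('Chat ID', chat_id)
    print('Args', keys)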
I suggest carefully reading:
The tutorial on JobQueue
The example on JobQueue
The docs of JobQueue
Disclaimer: I'm currently the maintainer of python-telegram-bot

Why should I call a BERT module instance rather than the forward method?

I'm trying to extract vector representations of text using BERT in the transformers library, and have stumbled on the following part of the documentation for the BertModel class (describing its forward method):
"Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them."
Can anybody explain this in more detail? A forward pass makes intuitive sense to me (I am trying to get final hidden states, after all), and I can't find any additional information on what "pre and post processing" means in this context.
Thanks up front!
I think this is just general advice concerning working with PyTorch Modules. The transformers modules are nn.Modules, and they require a forward method. However, one should not call model.forward() manually but instead call model(). The reason is that PyTorch does some extra work under the hood when you call the Module itself. You can find it in the source code.
def __call__(self, *input, **kwargs):
    for hook in self._forward_pre_hooks.values():
        result = hook(self, input)
        if result is not None:
            if not isinstance(result, tuple):
                result = (result,)
            input = result
    if torch._C._get_tracing_state():
        result = self._slow_forward(*input, **kwargs)
    else:
        result = self.forward(*input, **kwargs)
    for hook in self._forward_hooks.values():
        hook_result = hook(self, input, result)
        if hook_result is not None:
            result = hook_result
    if len(self._backward_hooks) > 0:
        var = result
        while not isinstance(var, torch.Tensor):
            if isinstance(var, dict):
                var = next((v for v in var.values() if isinstance(v, torch.Tensor)))
            else:
                var = var[0]
        grad_fn = var.grad_fn
        if grad_fn is not None:
            for hook in self._backward_hooks.values():
                wrapper = functools.partial(hook, self)
                functools.update_wrapper(wrapper, hook)
                grad_fn.register_hook(wrapper)
    return result
You'll see that forward is called when necessary.
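In practice, for extracting hidden states this just means calling the model instance instead of forward (a sketch using the standard transformers API; the checkpoint name is only an example):

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hello, world!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)            # preferred: runs the hook machinery
    # outputs = model.forward(**inputs)  # works, but silently skips the hooks

last_hidden_states = outputs.last_hidden_state  # (batch, seq_len, hidden_size)
print(last_hidden_states.shape)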
