I am trying out aiohttp (to compare it against Flask, and just to learn it) and am having an issue passing data via the Application. The examples say I can set a key on the app in order to share static information (e.g., a database connection). But somehow this information is getting lost, and I suspect it happens in the nested applications, though I'm not sure.
app.py:
import asyncio
from aiohttp import web
import logging

from data import data_handler
from data import setup_web_app as data_setup_web_app

logging.basicConfig()
log = logging.getLogger('data')
log.setLevel(logging.DEBUG)


async def my_web_app():
    loop = asyncio.get_event_loop()
    app = web.Application(loop=loop)
    app['test'] = 'here'
    data_setup_web_app(web, app)
    return app
data.py:
from aiohttp import web
import logging

logging.basicConfig()
log = logging.getLogger('data')
log.setLevel(logging.DEBUG)


def setup_web_app(web, app):
    data = web.Application()
    data.add_routes([web.get('/{name}', data_handler, name='data')])
    app.add_subapp('/data/', data)


async def data_handler(request):
    name = request.match_info['name']
    log.debug('test data is {}'.format(request.app['test']))
    return web.json_response({'handler': name})
And I am running it with gunicorn: gunicorn app:my_web_app --bind localhost:8080 --worker-class aiohttp.worker.GunicornWebWorker --workers=2
But when I go to http://127.0.0.1:8080/data/asdf in the browser, the debug statement in data.py raises KeyError: 'test'.
I suspect the app data is not being passed through correctly to the nested applications, but I'm not sure.
Currently, keys from the main app are not visible from the subapp, and vice versa.
Please read the issue for more details.
I'd like to support a kind of chained map for this, but the feature is not implemented yet.
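Until such a chained lookup lands, a common workaround is to copy the shared keys into the subapp before calling add_subapp (e.g. data['test'] = app['test'] inside setup_web_app). The "chained map" behavior described above can be sketched with the stdlib, using plain dicts in place of the Application objects:

```python
from collections import ChainMap

# Plain dicts standing in for the parent app and subapp storage.
parent_app = {'test': 'here'}
subapp = {'local': 'value'}

# A chained view: lookups try the subapp first, then fall back to the parent.
chained = ChainMap(subapp, parent_app)

print(chained['test'])   # resolved via the parent mapping
print(chained['local'])  # resolved in the subapp itself
```

This is only an illustration of the lookup semantics; the aiohttp Application objects themselves do not currently chain this way.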
I am generally new to programming, with a large capacity for self-learning. After taking Harvard's CS50 course, I have found myself unable to use a database in Flask (using Python).
I have created the database with SQLite and have it saved to my work environment, with a copy in external folders.
I would rather not use Flask-SQLAlchemy, as I am 100% comfortable in SQL and don't want to drift away from basic SQL usage.
I am using Visual Studio and have Flask properly installed and already in use for working routes.
The database is named trial.db and it can be assumed to have been properly set up.
import os

from cs50 import SQL
from flask import Flask, flash, redirect, render_template, request, session, jsonify
from flask_session import Session
from tempfile import mkdtemp
from werkzeug.exceptions import default_exceptions, HTTPException, InternalServerError
from werkzeug.security import check_password_hash, generate_password_hash
from helpers import apology, login_required, lookup, usd

app = Flask(__name__)
app.config["TEMPLATES_AUTO_RELOAD"] = True


@app.after_request
def after_request(response):
    response.headers["Cache-Control"] = "no-cache, no-store, must-revalidate"
    response.headers["Expires"] = 0
    response.headers["Pragma"] = "no-cache"
    return response


app.jinja_env.filters["usd"] = usd
app.config["SESSION_FILE_DIR"] = mkdtemp()
app.config["SESSION_PERMANENT"] = False
app.config["SESSION_TYPE"] = "filesystem"
Session(app)

db = SQL("sqlite:///finance.db")
The above code is what I am used to calling, based on CS50's library, which is excessively generous.
SQL, as above, is used like so:
cs50.SQL(url)
Parameters: url – a str that indicates the database dialect and connection arguments
Returns: a cs50.SQL object that represents a connection to a database
Example usage:
db = cs50.SQL("sqlite:///file.db")  # For SQLite, file.db must exist
Please help, and Thank you
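If you want plain SQL without Flask-SQLAlchemy or the cs50 wrapper, Python's built-in sqlite3 module is enough. A minimal sketch, assuming trial.db sits next to the app as described (the helper name get_db is illustrative, not from the original):

```python
import sqlite3

DATABASE = "trial.db"  # path assumed from the question

def get_db():
    # Open a connection; sqlite3.Row lets you access columns by name,
    # similar to the dicts the cs50 library returns.
    conn = sqlite3.connect(DATABASE)
    conn.row_factory = sqlite3.Row
    return conn

# Inside a route, a query would then look like:
# rows = get_db().execute("SELECT * FROM users WHERE id = ?", (user_id,)).fetchall()
```

Note the `?` placeholder style: sqlite3 uses positional parameters rather than the named `:name` style cs50's SQL wrapper accepts.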
I'm using Airflow (GCP Composer) now.
I know it has a GCS hook and I can download GCS files with it.
But I'd like to read a file only partially.
Can I use this Python logic with a PythonOperator in a DAG?
from google.cloud import storage

def my_func():
    client = storage.Client()
    bucket = client.get_bucket("mybucket")
    blob = bucket.get_blob("myfile")
    data = blob.download_as_bytes(end=100)
    return data
In an Airflow task, is it forbidden to call the client API directly rather than going through hooks?
You can, but a more Airflow-idiomatic way to handle functionality missing from the hook is to extend the hook:
from airflow.providers.google.cloud.hooks.gcs import GCSHook


class MyGCSHook(GCSHook):
    def download_bytes(
        self,
        bucket_name: str,
        object_name: str,
        end: int,
    ) -> bytes:
        client = self.get_conn()
        bucket = client.bucket(bucket_name)
        blob = bucket.blob(blob_name=object_name)
        return blob.download_as_bytes(end=end)
Then you can use the hook function in a PythonOperator or in a custom operator.
Note that GCSHook has a download function, as you mention.
What you may have missed is that if you don't provide a filename, it downloads the object as bytes (see the source code). It doesn't allow configuring the end parameter as you expect, but that should be an easy fix to PR to Airflow if you are looking to contribute to open source.
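For completeness, wiring the custom hook into a PythonOperator might look like the following sketch (DAG boilerplate omitted; the bucket/object names and task_id are placeholders, not from the original):

```python
from airflow.operators.python import PythonOperator

def read_first_bytes():
    hook = MyGCSHook()  # uses the default google_cloud_default connection
    return hook.download_bytes(
        bucket_name="mybucket",
        object_name="myfile",
        end=100,  # fetch only the first 100 bytes
    )

read_task = PythonOperator(
    task_id="read_first_bytes",
    python_callable=read_first_bytes,
)
```

Because the callable returns the bytes, the partial content also lands in XCom, so downstream tasks can read it.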
FastAPI's documentation recommends using lru_cache-decorated functions to retrieve the settings. That makes sense for avoiding repeated I/O when reading the env file.
config.py
from pydantic import BaseSettings


class Settings(BaseSettings):
    app_name: str = "Awesome API"
    admin_email: str
    items_per_user: int = 50

    class Config:
        env_file = ".env"
and then, in other modules, the documentation implements a function that gets the settings:
# module_omega.py
from functools import lru_cache

from . import config


@lru_cache()
def get_settings():
    return config.Settings()


settings = get_settings()
print(settings.ENV_VAR_ONE)
I am wondering whether this method is better practice, or otherwise advantageous, compared to just initializing a settings object in the config module and importing it, like below.
# config.py
from pydantic import BaseSettings


class Settings(BaseSettings):
    app_name: str = "Awesome API"
    admin_email: str
    items_per_user: int = 50

    class Config:
        env_file = ".env"


settings = Settings()

# module_omega.py
from .config import settings

print(settings.ENV_VAR_ONE)
I realize it's been a while since you asked, and though I agree with the commenters that these can be functionally equivalent, I can point out another important difference that I think motivates the use of @lru_cache.
What the @lru_cache approach can help with is limiting the amount of code that is executed when the module is imported.
settings = Settings()
By doing this, as you suggested, you are exporting an instance of your settings, which means you are transitively executing any code needed to create your settings immediately when your module is imported.
While module exports are cached similarly to what @lru_cache would do, you don't have as much control over deferring the loading of your settings, since in Python we typically place our imports at the top of a file.
The @lru_cache technique is especially useful when building your settings is expensive, like hitting the filesystem or the network. That way you can defer loading your settings until you actually need them.
from . import get_settings


def do_something_with_deferred_settings():
    print(get_settings().my_setting)


if __name__ == "__main__":
    do_something_with_deferred_settings()
Other things to look into:
@cache in Python 3.9+ instead of @lru_cache
Module __getattr__ doesn't add anything here IMO, but it can be useful when working with dynamism and the import system.
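The defer-then-cache behavior can be demonstrated with a stdlib-only sketch (a plain dict and a counter stand in for the pydantic Settings object and its construction cost):

```python
from functools import lru_cache

construction_count = 0

@lru_cache()
def get_settings():
    # Pretend this is expensive: env parsing, filesystem, or network access.
    global construction_count
    construction_count += 1
    return {"app_name": "Awesome API"}

# Nothing expensive has run yet at "import time".
assert construction_count == 0

# The first call pays the cost; later calls hit the cache.
get_settings()
get_settings()
assert construction_count == 1
```

With the module-level settings = Settings() style, that counter would already be 1 the moment the module was imported, whether or not the settings were ever used.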
I am trying to create my first REST API. I heard that Falcon is good and easy for beginners. I read the official docs, and there is nothing about how to connect to a database.
I have seen the Flask docs as well, where this is well documented:
def get_db():
    """Opens a new database connection if there is none yet for the
    current application context.
    """
    if not hasattr(g, 'sqlite_db'):
        g.sqlite_db = connect_db()
    return g.sqlite_db
Is there any way to connect SQLite with Falcon?
You can use any ORM to ease the process of database connection and data handling.
A simple SQLite connection using SQLAlchemy is:
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
engine = create_engine('sqlite:///dbname.db', echo=True)
session = sessionmaker()
session.configure(bind=engine)
This will create a db connection and the same can be used to perform database operations.
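If you'd rather avoid an ORM entirely, the Flask get_db pattern above translates to a framework-agnostic helper that a Falcon resource can call directly. A minimal stdlib-only sketch, using a module-level cache in place of Flask's g object (the helper and path are illustrative):

```python
import sqlite3

_conn = None

def get_db(path="dbname.db"):
    # Open the SQLite connection lazily, on first use, and reuse it
    # afterwards -- the same idea as Flask's get_db(), minus the app context.
    global _conn
    if _conn is None:
        _conn = sqlite3.connect(path)
    return _conn
```

A Falcon resource's on_get method could then call get_db().execute(...) like any DB-API connection. For multi-threaded servers you would want a connection per thread or a pool rather than a single module-level connection.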
This might be a dumb question, but I'm confused about how SQLAlchemy relates to the actual database used by my Flask application. I have a Python file, models.py, that defines a SQLAlchemy database schema, and then this part of my code creates the database for it:
if __name__ == '__main__':
    from datetime import timedelta
    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker

    engine = create_engine('sqlite://', echo=True)
    Base.metadata.create_all(engine)

    Session = sessionmaker(bind=engine)
    session = Session()

    # Add a sample user
    user = User(name='Philip House', password="test")
    session.add(user)
    session.commit()
I run that file and it works fine, but now I'm confused as to what happens with the database: how can I access it in another application? I've also heard that it might just be in memory; if that is the case, how do I make it a permanent database file that I can use with my application?
Also in my application, this is how I refer to my sqlite database in the config file:
PWD = os.path.abspath(os.curdir)
DEBUG=True
SQLALCHEMY_DATABASE_URI = 'sqlite:///{}/arkaios.db'.format(PWD)
I dunno if that might be of any help.
Thanks!!
Here are the docs for connecting to SQLite with SQLAlchemy.
As you guessed, you are in fact creating a SQLite database in memory when you use sqlite:// as your connection string. If you were to use 'sqlite:///{}/arkaios.db'.format(PWD) instead, you would create a new database file in your current directory. If that is what you intend, so that you can access the database from other applications, then you should import your connection string from your configuration file and use it in place of sqlite://.
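The distinction can be seen with the stdlib sqlite3 driver that SQLAlchemy uses under the hood (the temp directory here is just for illustration):

```python
import os
import sqlite3
import tempfile

# In-memory database: contents vanish when the connection closes.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE users (name TEXT)")
mem.close()  # the users table is gone now

# File-backed database: persists on disk, so other processes can open it.
path = os.path.join(tempfile.mkdtemp(), "arkaios.db")
disk = sqlite3.connect(path)
disk.execute("CREATE TABLE users (name TEXT)")
disk.commit()
disk.close()

assert os.path.exists(path)  # the file survives the connection
```

Re-opening the file later (from this script or any other program) will find the users table still there, which is exactly the behavior the sqlite:///...arkaios.db connection string gives you.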