I'm still developing my project and want to make a lot of UI changes through HTML/CSS. I'm starting the Bokeh server programmatically, and it looks like this:
from tornado.ioloop import IOLoop
from tornado.web import StaticFileHandler
from bokeh.server.server import Server
from bokeh.application import Application
from bokeh.application.handlers.function import FunctionHandler
from os.path import dirname, join
import sys
if not dirname(__file__) in sys.path: sys.path.append(dirname(__file__))
from models.ProjectDataLoading import modify_page1
from models.Empty import modify_page2
page1_app = Application(FunctionHandler(modify_page1))
page2_app = Application(FunctionHandler(modify_page2))
StaticFileHandler.reset()
io_loop = IOLoop.current()
server = Server(applications={'/ProjectDataLoading': page1_app,
                              '/PrimaveraObjects': page2_app,
                              '/SpecializedReports': page2_app,
                              '/StatisticalReports': page2_app,
                              '/Predictions': page2_app},
                extra_patterns=[(r'/static/css/(.*)', StaticFileHandler, {'path': join(dirname(__file__), "static", "css")})],
                io_loop=io_loop, port=5006, static_hash_cache=False, debug=True,
                autoreload=True, compiled_template_cache=False, serve_traceback=True)
server.start()
server.show('/ProjectDataLoading')
io_loop.start()
I'm still working on page1 -> "ProjectDataLoading", and as you can see I'm trying to make the server stop caching the static files with the following code:
StaticFileHandler.reset()
static_hash_cache=False
debug=True
Of course, this is wrong, because these settings belong to the Tornado application, not to the Bokeh server. So how can I do this with the Bokeh server?
Almost all the code to support dev mode, i.e. watching a list of files and auto-reloading, is in the application part of the Bokeh server (the bokeh serve command), not in the library. So you would essentially need to reproduce the code here:
https://github.com/bokeh/bokeh/blob/8f9cf0a85ba51098da2d027150e900bfc073eeba/bokeh/command/subcommands/serve.py#L785-L816
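If all you need while developing is "don't cache my static files and restart when I edit them", a rough programmatic approximation is to lean on Tornado's own tornado.autoreload module to restart the process when a watched file changes. This is only a sketch, not the exact bokeh serve --dev code path, and the watched file names below are placeholders for whatever templates and CSS you actually edit:
from os.path import dirname, join
from tornado import autoreload
from tornado.ioloop import IOLoop
from bokeh.server.server import Server

# ... build page1_app / page2_app exactly as in the script above ...

io_loop = IOLoop.current()
server = Server({'/ProjectDataLoading': page1_app}, io_loop=io_loop, port=5006)

# Restart the whole process whenever a watched file changes, so the next
# page load picks up fresh static files and templates.  Placeholder paths:
for path in (join(dirname(__file__), "static", "css", "styles.css"),
             join(dirname(__file__), "templates", "page1.html")):
    autoreload.watch(path)
autoreload.start()          # hooks the watcher into the current IOLoop

server.start()
server.show('/ProjectDataLoading')
io_loop.start()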
Related
I want to write a minimal FastAPI static file server launched from a script that allows you to specify the directory to share on the command line. Following the example in the FastAPI documentation, I wrote this.
import uvicorn
from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles
server = FastAPI()
if __name__ == "__main__":
    import sys
    directory = sys.argv[1]
    server.mount("/static", StaticFiles(directory=directory), name="static")
    uvicorn.run(app="my_package:server")
If I run this with the argument /my/directory, where this directory contains file.txt, I expect to be able to download file.txt at the URL http://localhost:8000/static/file.txt, but this returns an HTTP 404.
How do I write this minimal static file server script?
The assumption I made, that sys.argv would not be available when uvicorn loads your module, turns out to be wrong, so it works as you expect once you move the static setup outside of the __main__ guard:
import uvicorn
import sys
from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles
server = FastAPI()
directory = sys.argv[1]
server.mount("/static", StaticFiles(directory=directory), name="static")
if __name__ == "__main__":
    uvicorn.run(app="my_package:server")
When you call uvicorn.run(app="my_package:server"), uvicorn re-imports my_package from the import string (in a separate process when reload or workers are used). In that imported copy, __name__ is "my_package" rather than "__main__", so everything inside the if __name__ == "__main__": block never runs in the served module and your directory is never mounted.
One possible solution would be getting the directory from an environment variable, which is set from a small bash script:
import os
from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles
server = FastAPI()
directory = os.getenv("DIRECTORY")
server.mount("/static", StaticFiles(directory=directory), name="static")
start.sh:
#!/usr/bin/env bash
DIRECTORY=$1 uvicorn mypackage:server
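An alternative, not covered in either answer above, is to pass the application object itself to uvicorn.run instead of an import string. Then nothing is re-imported, so a mount done inside the __main__ guard is kept; the trade-off is that reload and multiple workers require the string form. A minimal sketch:
import sys

import uvicorn
from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles

server = FastAPI()

if __name__ == "__main__":
    directory = sys.argv[1]
    server.mount("/static", StaticFiles(directory=directory), name="static")
    # Pass the app object directly; uvicorn serves this exact instance,
    # so the mount above survives.  --reload/workers would need "module:attr".
    uvicorn.run(server, host="127.0.0.1", port=8000)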
I've been developing an API using FastAPI and pipenv to manage my virtual environment. To start my testing setup, I use the following commands on the console:
pipenv shell
uvicorn backend:app --reload
The API starts after a couple of seconds and everything is okay. Recently, though, I decided to install pdfkit to generate some PDF documents that need to be sent back to a webapp, so I ran (using this page as reference):
pipenv install pdfkit
Now, whenever I try to run the same uvicorn command to start my API, it fails with the error "module pdfkit not found". I've checked that the package is properly installed: it appears in my Pipfile and in the output of pip freeze once my environment has been activated.
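(To double-check, this is the kind of snippet one can run with the same interpreter that serves the API, e.g. via pipenv run python; it is purely illustrative and not part of my API code:)
# Illustrative check: confirm which Python is active and whether pdfkit
# is importable from that interpreter.
import sys
import importlib.util

print("interpreter:", sys.executable)
print("pdfkit spec:", importlib.util.find_spec("pdfkit"))  # None means not importable here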
Here's a snippet of the API call that returns the error:
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import FileResponse, StreamingResponse
from scipy.optimize import leastsq
from openpyxl import load_workbook
from openpyxl.utils.dataframe import dataframe_to_rows
from openpyxl.styles import Font
from openpyxl import Workbook
import pandas as pd
import numpy as np
import json
import os
# ==========================
# SETUP
# ==========================
app = FastAPI()
# ==========================
# CORS
# ==========================
origins = [
    "http://127.0.0.1:5500/",
    "http://127.0.0.1:8000/",
    "http://localhost"
]
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # List of origins that are allowed to make cross-origin requests
    allow_credentials=True,  # Indicate that cookies should be supported for cross-origin requests
    allow_methods=["*"],  # List of HTTP methods that should be allowed for cross-origin requests
    allow_headers=["*"],  # List of HTTP headers that should be supported for cross-origin requests
)
# ==========================
# GET: PDF RENDERING
# ==========================
@app.get("/pdf")
def render_pdf(pdf_content: str):
    from jinja2 import Template, Environment, FileSystemLoader
    import pdfkit as pdf
    # Jinja2 template "generator" or "main" object (AKA Environment)
    env = Environment(loader=FileSystemLoader('PDF Rendering'), trim_blocks=True, lstrip_blocks=True)
    # Template parameters
    template_params = {
        "BASE_PATH": os.getcwd(),
        "REPORT_CONTENT": pdf_content,
    }
    # Render template
    template = env.get_template("pdf_template.j2")
    template_text = template.render(template_params)
    with open("PDF Rendering/render.html", "w") as f:
        f.write(template_text)
    # Turn rendered file to PDF
    pdf.from_file("PDF Rendering/render.html", "PDF Rendering/render.pdf")
    return {
        "message": "Success"
    }
(The Jinja section works, mind you, but for some reason the pdfkit one doesn't.)
We are using Apache Airflow 1.9.0. I have written a Snowflake hook plugin. I have placed the hook in the $AIRFLOW_HOME/plugins directory.
$AIRFLOW_HOME
+--plugins
+--snowflake_hook2.py
snowflake_hook2.py
# This is the base class for a plugin
from airflow.plugins_manager import AirflowPlugin
# This is necessary to expose the plugin in the Web interface
from flask import Blueprint
from flask_admin import BaseView, expose
from flask_admin.base import MenuLink
# This is the base hook for connecting to a database
from airflow.hooks.dbapi_hook import DbApiHook
# This is the snowflake provided Connector
import snowflake.connector
# This is the default python logging package
import logging
class SnowflakeHook2(DbApiHook):
    """
    Airflow Hook to communicate with Snowflake
    This is implemented as a Plugin
    """
    def __init__(self, connname_in='snowflake_default', db_in='default', wh_in='default', schema_in='default'):
        logging.info('# Connecting to {0}'.format(connname_in))
        self.conn_name_attr = 'snowflake_conn_id'
        self.connname = connname_in
        self.superconn = super().get_connection(self.connname)  # gets the values from Airflow
        {SNIP - Connection stuff that works}
        self.cur = self.conn.cursor()

    def query(self, q, params=None):
        """From jmoney's db_wrapper allows return of a full list of rows(tuples)"""
        if params == None:  # no Params, so no insertion
            self.cur.execute(q)
        else:  # make the parameter substitution
            self.cur.execute(q, params)
        self.results = self.cur.fetchall()
        self.rowcount = self.cur.rowcount
        self.columnnames = [colspec[0] for colspec in self.cur.description]
        return self.results

    {SNIP - Other class functions}

class SnowflakePluginClass(AirflowPlugin):
    name = "SnowflakePluginModule"
    hooks = [SnowflakeHook2]
    operators = []
So I went ahead and put some print statements in Airflow's plugin_manager to try and get a better handle on what is happening. After restarting the webserver and running airflow list_dags, these lines were showing the "new module name" (and no errors):
SnowflakePluginModule [<class '__home__ubuntu__airflow__plugins_snowflake_hook2.SnowflakeHook2'>]
hook_module - airflow.hooks.snowflakepluginmodule
INTEGRATING airflow.hooks.snowflakepluginmodule
snowflakepluginmodule <module 'airflow.hooks.snowflakepluginmodule'>
As this is consistent with what the documentation says, I should be fine using this in my DAG:
from airflow import DAG
from airflow.hooks.snowflakepluginmodule import SnowflakeHook2
from airflow.operators.python_operator import PythonOperator
But the webserver throws this error:
Broken DAG: [/home/ubuntu/airflow/dags/test_sf2.py] No module named 'airflow.hooks.snowflakepluginmodule'
So the question is: what am I doing wrong, or have I uncovered a bug?
You need to import as below:
from airflow import DAG
from airflow.hooks import SnowflakeHook2
from airflow.operators.python_operator import PythonOperator
OR
from airflow import DAG
from airflow.hooks.SnowflakePluginModule import SnowflakeHook2
from airflow.operators.python_operator import PythonOperator
I don't think that Airflow automatically goes through the folders in your plugins directory and runs everything underneath it. The way I've set it up successfully is to have an __init__.py under the plugins directory which contains each plugin class. Have a look at the Astronomer plugins on GitHub; they provide some really good examples of how to set up your plugins.
In particular, have a look at how they've set up the mysql plugin:
https://github.com/airflow-plugins/mysql_plugin
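Following that layout, a minimal sketch of what plugins/__init__.py could look like for the hook in your question (the names mirror your code; treat the import and registration details as illustrative and adjust to how your Airflow version loads plugins):
# plugins/__init__.py  (illustrative sketch of the layout described above)
from airflow.plugins_manager import AirflowPlugin

# the hook module from the question, sitting next to this __init__.py
from snowflake_hook2 import SnowflakeHook2

class SnowflakePluginModule(AirflowPlugin):
    # this name becomes the airflow.hooks.<name> module your DAG imports from
    name = "SnowflakePluginModule"
    hooks = [SnowflakeHook2]
    operators = []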
Also, someone has incorporated a Snowflake hook in one of the later versions of Airflow, which you might want to leverage:
https://github.com/apache/incubator-airflow/blob/master/airflow/contrib/hooks/snowflake_hook.py
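If you can upgrade, usage is roughly as follows (a sketch that assumes an Airflow version where the contrib hook exists and a Snowflake connection configured under the default conn id; check the hook's signature in your version):
# Sketch: using the contrib hook instead of a custom plugin.
from airflow.contrib.hooks.snowflake_hook import SnowflakeHook

hook = SnowflakeHook(snowflake_conn_id="snowflake_default")
rows = hook.get_records("SELECT CURRENT_VERSION()")  # DbApiHook helper
print(rows)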
I have installed QPython for Android. The problem is that when I install a module with pip in the Python console, I can import it properly, but not when I try to import it in a script.
In the console I type:
>>>import requests
>>>requests
<module 'requests' from '/data/data/com.hipipal.qpyplus/.../__init__.py'>
But in a script saved in the scripts folder, when I execute:
import requests
r = requests.get("http://www.google.com")
print r.text.encode('utf-8')
I get this:
import requests
ImportError: no module named requests
Can anybody help with that?
Thank you!
Add import site.
This works on my tablet:
#-*-coding:utf8;-*-
#qpy:2
#qpy:console
import site
import requests
r = requests.get("http://www.google.com")
print r.text.encode('utf-8')
I have an application that works in development, but when I try to run it with Gunicorn it gives an error that the "sqlalchemy extension was not registered". From what I've read it seems that I need to call app.app_context() somewhere, but I'm not sure where. How do I fix this error?
# run in development, works
python server.py
# try to run with gunicorn, fails
gunicorn --bind localhost:8000 server:app
AssertionError: The sqlalchemy extension was not registered to the current application. Please make sure to call init_app() first.
server.py:
from flask.ext.security import Security
from database import db
from application import app
from models import Studio, user_datastore
security = Security(app, user_datastore)
if __name__ == '__main__':
    # with app.app_context(): ??
    db.init_app(app)
    app.run()
application.py:
from flask import Flask
app = Flask(__name__)
app.config.from_object('config.ProductionConfig')
database.py:
from flask.ext.sqlalchemy import SQLAlchemy
db = SQLAlchemy()
Only when you start your app with python server.py is the if __name__ == '__main__': block hit, where you're registering your database with your app.
You'll need to move that line, db.init_app(app), outside that block.
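In other words, server.py would look roughly like this (a sketch of the rearrangement, keeping the other files as they are):
# server.py (sketch): register the extension at import time so it also
# happens when Gunicorn imports the module, not just under `python server.py`.
from flask.ext.security import Security
from database import db
from application import app
from models import Studio, user_datastore

db.init_app(app)
security = Security(app, user_datastore)

if __name__ == '__main__':
    # only the development server stays behind the guard
    app.run()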