My app is running on OpenShift and I'm not able to load the database. This is my code:
from flask import Flask
from sqlalchemy import Column, Integer, String, create_engine, ForeignKey, Time
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from classes import Team, Match, Channel, Country, Mapping
import json

app = Flask(__name__)
engine = create_engine('sqlite:///../data/euro2012tvguide.sqlite')
Session = sessionmaker(bind=engine)
session = Session()
In the data folder I have the file euro2012tvguide.sqlite, which is the SQLite DB.
In fact, the problem was with the path; it should have been like this (note that os must be imported):
engine = create_engine('sqlite://' + os.path.join(os.environ["OPENSHIFT_DATA_DIR"], 'euro2012tvguide.sqlite'))
I got a lot of help from the OpenShift forum; here is the link: https://openshift.redhat.com/community/forums/openshift/sqlalchemy-not-loading-sqlite-db
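To see why this path form works, here is a minimal sketch of how the URL is assembled. The environment-variable value below is hypothetical (on OpenShift the platform sets it); because OPENSHIFT_DATA_DIR is an absolute path starting with /, concatenating it to 'sqlite://' yields the sqlite:///absolute/path form SQLAlchemy expects:

```python
import os

# Hypothetical value for illustration; on OpenShift this variable is set by the platform.
os.environ.setdefault("OPENSHIFT_DATA_DIR", "/var/lib/openshift/app/data/")

# The data dir is absolute, so 'sqlite://' + '/...' gives 'sqlite:///...'
db_path = os.path.join(os.environ["OPENSHIFT_DATA_DIR"], "euro2012tvguide.sqlite")
db_url = "sqlite://" + db_path

print(db_url)  # e.g. sqlite:///var/lib/openshift/app/data/euro2012tvguide.sqlite
```

Building the URL from the environment variable, rather than a relative path like ../data, keeps it correct regardless of the working directory the app is launched from.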
I am fairly new to programming, though a quick self-learner. After taking Harvard's CS50 course, I find myself unable to use a database in Flask (using Python).
I have created the database with SQLite and have it saved in my working environment, with a copy in external folders.
I would rather not use Flask-SQLAlchemy, as I am completely comfortable with SQL and don't want to drift away from its basic usage.
I am using Visual Studio and have Flask properly installed; it is already being used to define working routes.
The database is named trial.db and it can be assumed to have been properly set up.
import os
from cs50 import SQL
from flask import Flask, flash, redirect, render_template, request, session, jsonify
from flask_session import Session
from tempfile import mkdtemp
from werkzeug.exceptions import default_exceptions, HTTPException, InternalServerError
from werkzeug.security import check_password_hash, generate_password_hash
from helpers import apology, login_required, lookup, usd

app = Flask(__name__)
app.config["TEMPLATES_AUTO_RELOAD"] = True

@app.after_request
def after_request(response):
    response.headers["Cache-Control"] = "no-cache, no-store, must-revalidate"
    response.headers["Expires"] = 0
    response.headers["Pragma"] = "no-cache"
    return response

app.jinja_env.filters["usd"] = usd
app.config["SESSION_FILE_DIR"] = mkdtemp()
app.config["SESSION_PERMANENT"] = False
app.config["SESSION_TYPE"] = "filesystem"
Session(app)

db = SQL("sqlite:///finance.db")
The above code is what I am used to calling, based on CS50's library, which is excessively generous.
SQL, as above, is used like so:
cs50.SQL(url)
Parameters
url – a str that indicates database dialect and connection arguments
Returns
a cs50.SQL object that represents a connection to a database
Example usage:
db = cs50.SQL("sqlite:///file.db") # For SQLite, file.db must exist
Please help, and thank you.
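Since the goal is to keep writing plain SQL rather than learn a new ORM, one option is Python's built-in sqlite3 module. This is a minimal sketch using only the standard library; the trial.db filename and users table are the asker's assumed names, and :memory: is used here only so the example is self-contained:

```python
import sqlite3

def query_db(db_path, sql, params=()):
    """Open a connection, run one plain-SQL statement, and return rows as dicts."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row          # rows support access by column name
    try:
        cur = conn.execute(sql, params)     # '?' placeholders guard against SQL injection
        rows = [dict(r) for r in cur.fetchall()]
        conn.commit()                       # no-op for SELECT, needed for writes
        return rows
    finally:
        conn.close()

# Self-contained demo against an in-memory database:
rows = query_db(":memory:", "SELECT 1 AS one")
print(rows)  # [{'one': 1}]
```

Inside a Flask route this would look like query_db("trial.db", "SELECT * FROM users WHERE id = ?", (user_id,)), which keeps every query in ordinary SQL.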
I am trying out aiohttp (to test against Flask, and just to learn it) and am having an issue with passing data via the Application. The examples say that I can set a key value in the app in order to pass static info (e.g., a database connection). But, somehow this information is getting lost and I suspect it is in the nested applications, though not sure.
app.py:
import asyncio
from aiohttp import web
import logging
from data import data_handler
from data import setup_web_app as data_setup_web_app
logging.basicConfig()
log = logging.getLogger('data')
log.setLevel(logging.DEBUG)
async def my_web_app():
    loop = asyncio.get_event_loop()
    app = web.Application(loop=loop)
    app['test'] = 'here'
    data_setup_web_app(web, app)
    return app
data.py:
from aiohttp import web
import logging
logging.basicConfig()
log = logging.getLogger('data')
log.setLevel(logging.DEBUG)
def setup_web_app(web, app):
    data = web.Application()
    data.add_routes([web.get('/{name}', data_handler, name='data')])
    app.add_subapp('/data/', data)

async def data_handler(request):
    name = request.match_info['name']
    log.debug('test data is {}'.format(request.app['test']))
    return web.json_response({'handler': name})
And I am using gunicorn to run it: gunicorn app:my_web_app --bind localhost:8080 --worker-class aiohttp.worker.GunicornWebWorker --workers=2
But when I go to http://127.0.0.1:8080/data/asdf in the browser I get a KeyError: 'test' in the data.py debug print statement.
I suspect the app data is not being passed through correctly to the nested applications, but not sure.
Currently, keys from the main app are not visible from the subapp, and vice versa.
Please read the issue for more details.
I'd like to support a kind of chained map for this, but the feature is not implemented yet.
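The "chained map" lookup described above can be illustrated with the standard library's collections.ChainMap. This is a conceptual sketch of the desired behavior, not aiohttp's actual implementation; the dicts stand in for the two applications' key/value stores:

```python
from collections import ChainMap

# Plain dicts standing in for the main app and subapp stores.
main_app = {'test': 'here'}
subapp = {'db': 'sub-connection'}

# A chained view: lookups try the subapp first, then fall back to the main app.
merged = ChainMap(subapp, main_app)

print(merged['test'])  # 'here' (found in main_app)
print(merged['db'])    # 'sub-connection' (found in subapp)
```

Until such a feature exists in aiohttp, a practical workaround is to copy the keys the subapp needs into it inside setup_web_app (e.g. data['test'] = app['test']) before registering it with add_subapp.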
I am trying to create my first REST API. I heard that Falcon is good and easy for beginners, but I read the official docs and there is nothing about how to connect to the database.
I have looked at the Flask docs as well, where everything is well documented, for example:
def get_db():
    """Opens a new database connection if there is none yet for the
    current application context.
    """
    if not hasattr(g, 'sqlite_db'):
        g.sqlite_db = connect_db()
    return g.sqlite_db
Is there any way to connect SQLite with Falcon?
You can use any ORM to ease the process of database connection and data handling.
A simple SQLite connection using SQLAlchemy is:
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()
engine = create_engine('sqlite:///dbname.db', echo=True)
Session = sessionmaker()
Session.configure(bind=engine)
session = Session()  # instantiate a session when needed
This creates a DB connection, and the session can then be used to perform database operations.
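For Falcon specifically, the usual pattern is a middleware component that opens a connection per request. The sketch below assumes Falcon's middleware interface (process_request/process_response); the class itself needs only the standard library, with sqlite3 standing in for a SQLAlchemy session, and the database name is hypothetical:

```python
import sqlite3

class SQLiteMiddleware:
    """Attach a SQLite connection to req.context for the duration of a request."""

    def __init__(self, db_path):
        self.db_path = db_path

    def process_request(self, req, resp):
        # Falcon calls this before routing; req.context carries per-request state.
        req.context.db = sqlite3.connect(self.db_path)

    def process_response(self, req, resp, resource, req_succeeded):
        # Falcon calls this after the resource runs; close the connection.
        db = getattr(req.context, 'db', None)
        if db is not None:
            db.close()

# With Falcon installed, this would be registered as:
#   app = falcon.App(middleware=[SQLiteMiddleware('dbname.db')])
# and resources would use req.context.db inside on_get / on_post.
```

The same shape works with SQLAlchemy by creating a Session in process_request and closing it in process_response.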
I need to bulk import, say, 100 records into Cosmos DB.
I found dt.exe, but that doesn't help: it throws an error when importing a CSV into Cosmos DB with the Table API.
I'm not able to find any reliable way to automate this process.
The command-line Azure Cosmos DB Data Migration tool (dt.exe) can be used to import your existing Azure Table storage data to a Table API GA account, or migrate data from a Table API (preview) account into a Table API GA account. Other sources are not currently supported. The UI based Data Migration tool (dtui.exe) is not currently supported for Table API accounts.
According to the official statement above, it seems that other sources (e.g. a CSV file) are not supported for migration into an Azure Table API account. You could adopt a workaround: read the CSV file in a program, then import the data into Azure Table storage.
Please refer to the sample Python code I shared in this thread.
from azure.cosmosdb.table.tableservice import TableService
from azure.cosmosdb.table.models import Entity, EntityProperty, EdmType
import csv
import sys
import codecs

table_service = TableService(connection_string='***')

# Python 2 only: force UTF-8 as the default encoding
reload(sys)
sys.setdefaultencoding('utf-8')

filename = "E:/jay.csv"
with codecs.open(filename, 'rb', encoding="utf-8") as f_input:
    csv_reader = csv.reader(f_input)
    for row in csv_reader:
        task = Entity()
        task.PartitionKey = row[0]
        task.RowKey = row[1]
        task.description = row[2]
        task.priority = EntityProperty(EdmType.INT32, int(row[3]))  # convert from str
        task.logtime = EntityProperty(EdmType.DATETIME, row[4])
        table_service.insert_entity('tasktable', task)
Alternatively, you could submit feedback here.
Hope it helps.
Just a minor update:
If you use Python 3, there is no need for reload(sys) or sys.setdefaultencoding('utf-8'); open the file in 'r' mode and use filename = r"E:/jay.csv".
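Putting that update together, here is a Python 3 sketch of the same loop. The Azure calls are left as comments so the CSV-parsing part stands alone, and the column layout (PartitionKey, RowKey, description, priority, logtime) follows the snippet above:

```python
import csv

def read_rows(path):
    """Yield one dict per CSV row, using the same column layout as the snippet above."""
    with open(path, 'r', encoding='utf-8', newline='') as f_input:
        for row in csv.reader(f_input):
            yield {
                'PartitionKey': row[0],
                'RowKey': row[1],
                'description': row[2],
                'priority': int(row[3]),   # stored as EdmType.INT32 in the Azure entity
                'logtime': row[4],         # stored as EdmType.DATETIME in the Azure entity
            }

# With the Azure SDK installed, each dict would be inserted roughly as:
#   task = Entity()
#   task.update(entity_dict)
#   table_service.insert_entity('tasktable', task)
```

Separating the parsing from the upload also makes the conversion logic easy to test without a live Cosmos DB account.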
This might be a dumb question, but I'm kind of confused as to how SQLAlchemy works with the actual database being used by my Flask application. I have a Python file, models.py, that defines a SQLAlchemy database schema, and then I have this part of my code that creates the database for it:
if __name__ == '__main__':
    from datetime import timedelta
    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker

    engine = create_engine('sqlite://', echo=True)
    Base.metadata.create_all(engine)

    Session = sessionmaker(bind=engine)
    session = Session()

    # Add a sample user
    user = User(name='Philip House', password="test")
    session.add(user)
    session.commit()
I run that file and it works fine, but now I'm confused as to what happens with the database. How can I access it in another application? I've also heard that it might just be in memory; if that is the case, how do I make it a permanent database file I can use with my application?
Also, in my application, this is how I refer to my SQLite database in the config file:
PWD = os.path.abspath(os.curdir)
DEBUG=True
SQLALCHEMY_DATABASE_URI = 'sqlite:///{}/arkaios.db'.format(PWD)
I dunno if that might be of any help.
Thanks!!
Here are the docs for connecting to SQLite with SQLAlchemy.
As you guessed, you are in fact creating a SQLite database in memory when you use sqlite:// as your connection string. If you were to use 'sqlite:///{}/arkaios.db'.format(PWD) you would create a new database file in your current directory. If this is what you intend, so that you can access the database from other applications, then you should import your connection string from your configuration file and use it instead of sqlite://.
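The in-memory vs. file-backed difference can be demonstrated with the standard library's sqlite3 module alone (no SQLAlchemy needed for the illustration; the file name below is hypothetical). A file-backed database survives the connection that created it, which is exactly what lets other applications open it:

```python
import os
import sqlite3
import tempfile

db_file = os.path.join(tempfile.mkdtemp(), 'arkaios_demo.db')  # hypothetical file name

# First "application": create a table, write a row, and disconnect.
conn = sqlite3.connect(db_file)
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('Philip House')")
conn.commit()
conn.close()

# Second "application": a brand-new connection sees the same data,
# because the data lives in the file, not in the process.
conn2 = sqlite3.connect(db_file)
names = [row[0] for row in conn2.execute("SELECT name FROM users")]
conn2.close()
print(names)  # ['Philip House']
```

With sqlite:// (no path), the database exists only inside that one engine's process and vanishes when it is disposed, which is why the sample user could never be seen from another application.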