Retrieving values from a SQLite3 database for Scotty

I'm trying to get information from a SQLite DB (via HDBC.Sqlite3) to feed to a web view using the Scotty framework. At the moment I'm attempting a "grab all": select everything in the table and return it for display on the web page served by Scotty. I've hit an error and I'm having trouble figuring out how to fix it.
Here is my error:
Controllers/Home.hs:42:44:
Couldn't match expected type `Data.Text.Lazy.Internal.Text'
with actual type `IO [[(String, SqlValue)]]'
In the expression: getUsersDB
In the first argument of `mconcat', namely
`["<p>/users/all</p><p>", getUsersDB, "</p>"]'
In the second argument of `($)', namely
`mconcat ["<p>/users/all</p><p>", getUsersDB, "</p>"]'
Here is my code:
import Control.Monad
import Web.Scotty (ScottyM, ActionM, get, html, param, text)
import Data.Monoid (mconcat)
import Controllers.CreateDb (createUserDB)
import Database.HDBC
import Database.HDBC.Sqlite3
import Control.Monad.Trans ( MonadIO(liftIO) )
import Data.Convertible
getAllUsers :: ScottyM ()
getAllUsers = get "/users/all" $ do
    html $ mconcat ["<p>/users/all</p><p>", getUsersDB, "</p>"]

getUsersDB = do
    conn <- connectSqlite3 databaseFilePath
    stmt <- prepare conn "SELECT name FROM users VALUES"
    results <- fetchAllRowsAL stmt
    disconnect conn
    return results

The error says exactly what is wrong: getUsersDB is an IO action producing [[(String, SqlValue)]], while html expects a lazy Text, so you cannot splice the action straight into mconcat. Run it first inside the ActionM block with liftIO (which you already import), e.g. users <- liftIO getUsersDB, and then render the rows to Text before handing them to html. Two smaller problems: the SQL has a stray VALUES (it should be just SELECT name FROM users), and an HDBC statement must be executed before you can fetch rows from it. Also note that run returns the number of rows modified, not a result set:
https://hackage.haskell.org/package/HDBC-2.4.0.0/docs/Database-HDBC.html#v:run
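A minimal sketch of the fix under those points, reusing the asker's databaseFilePath (defined elsewhere) and rendering the rows with plain show; fetchAllRowsAL' is HDBC's strict variant, which is safe to use before disconnecting:

import qualified Data.Text.Lazy as TL

getAllUsers :: ScottyM ()
getAllUsers = get "/users/all" $ do
    users <- liftIO getUsersDB   -- run the IO action inside ActionM
    html $ mconcat ["<p>/users/all</p><p>", TL.pack (show users), "</p>"]

getUsersDB :: IO [[(String, SqlValue)]]
getUsersDB = do
    conn    <- connectSqlite3 databaseFilePath
    stmt    <- prepare conn "SELECT name FROM users"   -- stray VALUES dropped
    _       <- execute stmt []                         -- must execute before fetching
    results <- fetchAllRowsAL' stmt                    -- strict fetch, so disconnecting is safe
    disconnect conn
    return results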

Related

kubeflow pipeline SDK: using dsl.ParallelFor to build a loop

@dsl.pipeline(name='classfier')
def classifiertest():
    make_classification_com_res = make_classification_com()
    rng_res = np_random_random_state()
    uniform_res = rng_uniform(make_classification_com_res.output, rng_res.output)
    all_datas_res = get_all_datas(x_input=uniform_res.output,
                                  y_input=make_classification_com_res.output)
    forlist = list([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    with dsl.ParallelFor(forlist) as item_index:
        for_outter_func(item_index, ds_input=all_datas_res.output)
When I run this pipeline, the following error occurs after I click the run's start button:
{"error":"Failed to create a new run.: InternalServerError: Failed to store run classfier-9xbrk to table: Error 1366: Incorrect string value: '\xE8\xBF\x99\xE4\xB8\x80...' for column 'WorkflowRuntimeManifest' at row 1","code":13,"message":"Failed to create a new run.: InternalServerError: Failed to store run classfier-9xbrk to table: Error 1366: Incorrect string value: '\xE8\xBF\x99\xE4\xB8\x80...' for column 'WorkflowRuntimeManifest' at row 1","details":[{"#type":"type.googleapis.com/api.Error","error_message":"Internal Server Error","error_details":"Failed to create a new run.: InternalServerError: Failed to store run classfier-9xbrk to table: Error 1366: Incorrect string value: '\xE8\xBF\x99\xE4\xB8\x80...' for column 'WorkflowRuntimeManifest' at row 1"}]}
When I delete these two lines of code, the pipeline commits and runs successfully:
with dsl.ParallelFor(forlist) as item_index:
    for_outter_func(item_index, ds_input=all_datas_res.output)
Delete the inline comments. The following writing style is what causes problems in kfp:
from kfp.components import create_component_from_func

# This comment is OK.
def inputdata_outputtable(input_arr, output_path):
    import numpy as np
    # This comment is also OK.
    _input_arr = np.array(input_arr)  # This comment causes an error.
    np.save(output_path, _input_arr)
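Presumably, then, moving the trailing comment onto its own line is enough for the component to serialize cleanly — a sketch under that assumption, reusing the asker's function name:

from kfp.components import create_component_from_func

def inputdata_outputtable(input_arr, output_path):
    import numpy as np
    # Moved off the code line: full-line comments are fine.
    _input_arr = np.array(input_arr)
    np.save(output_path, _input_arr)

# Wrapping the function works as usual once no trailing comments remain.
inputdata_outputtable_op = create_component_from_func(inputdata_outputtable)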

SQLAlchemy and SQLite3: Error if database file does not exist

I would like SQLAlchemy to return an error if the underlying SQLite3 database file does not exist.
I've looked around, and tried:
#!/usr/bin/env python3
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class SomeTable(Base):
    __tablename__ = 'some_table'          # added so the snippet actually maps
    id = Column(Integer, primary_key=True)

DB_SPECIFIER = 'sqlite+pysqlite:////tmp/non-exist.db?mode=rw'
engine = create_engine(DB_SPECIFIER, echo=False, future=True, connect_args={'uri': True})
session = Session(engine)
x = session.query(SomeTable)
I'd like the create_engine call to fail if /tmp/non-exist.db does not exist. I thought using this answer would work, but it did not.
Looks like it's in the docs, though fairly hidden:
https://docs.sqlalchemy.org/en/14/dialects/sqlite.html#uri-connections
So you'd do:
DB_SPECIFIER = 'sqlite:///file:/tmp/non-exist.db?mode=ro&uri=true'
engine = create_engine(DB_SPECIFIER, echo=False, future=True)
SQLAlchemy picks the query string apart, passing the arguments it recognizes to the connection and the rest through to the SQLite URI. (You can also disable locking and the like there if that helps, since the database is read-only.)
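One caveat worth noting: engines connect lazily, so with mode=ro the failure surfaces on the first real connection rather than inside create_engine itself. A minimal sketch of checking for it eagerly (the path is the asker's; forcing a connection up front is the assumption here):

from sqlalchemy import create_engine
from sqlalchemy.exc import OperationalError

engine = create_engine('sqlite:///file:/tmp/non-exist.db?mode=ro&uri=true',
                       echo=False, future=True)
try:
    # Force a real connection; this raises if the file does not exist.
    with engine.connect():
        pass
except OperationalError as exc:
    print('Database file is missing or unreadable:', exc)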

Python Pandas to sqlite

Hello, I am new and I have a question. I am trying to create a simple API using Flask.
The data I have is in CSV and I want to import it into a SQLite file, which I have done; I can access the data.
After the data is loaded and I have confirmed it is there, I try to get Python to reflect the class, as I need to confirm it exists for Flask.
Below is what I type.
Python:
Base = automap_base()
Base.prepare(engine, reflect=True)
Base.classes.keys()
I get nothing. I know why I am getting nothing: there is no class set up before I load the data using pandas.
Below is the code I use to load the data into sqlite:
Base = declarative_base()
engine = create_engine("sqlite:///countrytwo.sqlite")
Base.metadata.create_all(engine)

file_name = 'us.csv'
df = pd.read_csv(file_name)
df.to_sql('us', con=engine, index_label='id', if_exists='replace')

# then to confirm there is data I do:
print(engine.table_names())
So I know I need to set up the class first and then load the data into the sqlite file. Does anyone have a good website for learning how to do this?
I would love a clue that leads me to the answer, rather than the answer itself.
If this is unclear let me know; I can post more code. Thank you.
import sqlite3
import pandas as pd
import os

class readCSVintoDB():
    def __init__(self):
        '''
        self.csvobj = csvOBJ
        self.dbobj = dbOBJ
        '''
        self.importCSVintoDB()

    def importCSVintoDB(self):
        userInput = input("enter the path of the csv file: ")
        csvfile = userInput
        df = pd.read_csv(csvfile, sep=';')
        # print("dataFrame Headers is {0}".format(df.columns))  # display the headers
        dp = df[['date', 'temperaturemin', 'temperaturemax']]
        print(dp)
        '''
        check if a DB file exists;
        if not, create an empty db file
        '''
        if not os.path.exists('./rduDB.db'):
            open('./rduDB.db', 'w').close()
        '''
        connect to the DB and get a connection cursor
        '''
        myConn = sqlite3.connect('./rduDB.db')
        dbCursor = myConn.cursor()
        '''
        create the weather table if it is not already there
        '''
        dbCreateTable = '''CREATE TABLE IF NOT EXISTS rduWeather
                           (id INTEGER PRIMARY KEY,
                            Date varchar(256),
                            TemperatureMin FLOAT,
                            TemperatureMax FLOAT)'''
        dbCursor.execute(dbCreateTable)
        myConn.commit()
        '''
        insert data into the database row by row
        (iterating over the frame itself yields column names,
        so iterate over the rows with itertuples instead)
        '''
        for row in dp.itertuples(index=False):
            print(row)
            # id is the INTEGER PRIMARY KEY, so pass NULL and let SQLite assign it
            dbCursor.execute('INSERT INTO rduWeather VALUES (NULL, ?, ?, ?)', tuple(row))
        myConn.commit()
        mySelect = dbCursor.execute('SELECT * FROM rduWeather WHERE id = 10')
        print(list(mySelect))
        myConn.close()

test1 = readCSVintoDB()
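Back on the reflection question itself: automap only generates classes for tables that have a primary key, and pandas' to_sql does not create one, which is why Base.classes.keys() comes back empty. A sketch of "class first, then load" under that assumption (the non-id column names here are hypothetical; match them to your us.csv):

import pandas as pd
from sqlalchemy import create_engine, Column, Integer, String, Float
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Us(Base):
    __tablename__ = 'us'
    id = Column(Integer, primary_key=True)   # the primary key automap needs
    city = Column(String)                    # hypothetical columns
    population = Column(Float)

engine = create_engine("sqlite:///countrytwo.sqlite")
Base.metadata.create_all(engine)

# Append into the pre-built table instead of letting pandas create it.
df = pd.read_csv('us.csv')
df.to_sql('us', con=engine, index_label='id', if_exists='append')

# Reflection now finds the table because it has a primary key.
AutoBase = automap_base()
AutoBase.prepare(engine, reflect=True)
print(list(AutoBase.classes.keys()))   # expect ['us']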

writing to and reading peewee

I am writing a program that scrapes the tweets of a number of people; if the body of a tweet is unique, it gets stored in that person's sqlite database. I have two files: one writes to the databases, and one reads them, searching for tweets containing a search word. Before writing to the databases I printed the tweets in the terminal, so the tweets are being pulled from Twitter correctly. But when I search for a term, every database reports zero tweets, even when I search for no term at all. So there is a problem with either the writing or the reading of the database. Please help; I appreciate that I am very new to Python.
the writing file:
import requests
import datetime
from bs4 import BeautifulSoup
from peewee import *
from time import sleep

databases = ["femfreq.db", "boris_johnson.db", "barack_obama.db",
             "daily_mail.db", "guardian.db", "times.db", "zac_goldsmith.db",
             "bernie_sanders.db", "george_osborne.db", "john_mcdonnell.db",
             "donald_trump.db", "hillary_clinton.db", "nigel_farage.db"]
urls = ["https://twitter.com/femfreq", "https://twitter.com/BorisJohnson",
        "https://twitter.com/BarackObama",
        "https://twitter.com/MailOnline?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor",
        "https://twitter.com/guardian?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor",
        "https://twitter.com/thetimes",
        "https://twitter.com/ZacGoldsmith?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor",
        "https://twitter.com/berniesanders?lang=en-gb",
        "https://twitter.com/George_Osborne?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor",
        "https://twitter.com/johnmcdonnellMP?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor",
        "https://twitter.com/realDonaldTrump?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor",
        "https://twitter.com/HillaryClinton?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor",  # this comma was missing, silently concatenating two URLs
        "https://twitter.com/Nigel_Farage?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor"]

selection = 0
for database_chosen in databases:
    r = requests.get(urls[selection])
    soup = BeautifulSoup(r.content, "html.parser")
    content = soup.find_all("div", {"class": "content"})
    db = SqliteDatabase(database_chosen)

    class data_input(Model):
        time_position = DateTimeField(default=datetime.datetime.now)
        header = CharField()
        time_posted = CharField()
        tweet_body = CharField(unique=True)

        class Meta:
            database = db

    db.connect()
    db.create_tables([data_input], safe=True)
    for i in content:
        try:
            data_input.create(header=i.contents[1].text,
                              time_posted=i.contents[3].text,
                              tweet_body=i.contents[5].text)
        except IntegrityError:
            pass
    for i in content:
        print("=============")
        print(i.contents[1].text)
        print(i.contents[3].text)
        print(i.contents[5].text)
    selection += 1
    print("database: {} updated".format(database_chosen))
the reading file:
from peewee import *
import datetime

databases = ["femfreq.db", "boris_johnson.db", "barack_obama.db",
             "daily_mail.db", "guardian.db", "times.db", "zac_goldsmith.db",
             "bernie_sanders.db", "george_osborne.db", "john_mcdonnell.db",
             "donald_trump.db", "hillary_clinton.db", "nigel_farage.db"]

search_results = []
search_index = 0
print("")
print("Please enter the number for the database you want to search: ")
for i in databases:
    print("{}:{}".format(i, search_index))
    search_index += 1
select = int(input("please select: "))
database_chosen = databases[select]
db = SqliteDatabase(database_chosen)

class data_input(Model):
    time_position = DateTimeField(default=datetime.datetime.now)
    header = CharField()
    time_posted = CharField()
    tweet_body = CharField(unique=True)

    class Meta:
        database = db

db.connect()
enteries = data_input.select().order_by(data_input.time_position.desc())
print(enteries)
enteries = enteries.where(data_input.tweet_body)
print("")
print("The total number of tweets in {} is: {}".format(database_chosen,
                                                       len(enteries)))
For the reading file I haven't put in a search function yet; I will move on to that once I get past this problem. Many thanks.
What are you intending to accomplish by putting ".where(data_input.tweet_body)" in the query to read entries? Try removing that whole line:
entries = entries.where(data_input.tweet_body)
When you go to add your search, at that time you will want to add a where clause...something like:
entries = entries.where(data_input.tweet_body.contains(search_term))
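A minimal sketch of that search step, reusing the reading file's db setup and model above (search_term is a hypothetical variable name):

search_term = input("search for: ")

# Newest first, filtered to tweet bodies containing the term.
entries = (data_input
           .select()
           .order_by(data_input.time_position.desc())
           .where(data_input.tweet_body.contains(search_term)))

print("Matches in {}: {}".format(database_chosen, entries.count()))
for entry in entries:
    print(entry.header, entry.time_posted)
    print(entry.tweet_body)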

Django Haystack with elasticsearch returning empty queryset while data exists

I am doing a project in Python with Django REST Framework, using Haystack's SearchQuerySet. My code is here:
from haystack import indexes
from Medications.models import Salt

class Salt_Index(indexes.SearchIndex, indexes.Indexable):
    text = indexes.CharField(document=True, use_template=True)
    name = indexes.CharField(model_attr='name', null=True)
    slug = indexes.CharField(model_attr='slug', null=True)
    if_i_forget = indexes.CharField(model_attr='if_i_forget', null=True)
    other_information = indexes.CharField(model_attr='other_information', null=True)
    precautions = indexes.CharField(model_attr='precautions', null=True)
    special_dietary = indexes.CharField(model_attr='special_dietary', null=True)
    brand = indexes.CharField(model_attr='brand', null=True)
    why = indexes.CharField(model_attr='why', null=True)
    storage_conditions = indexes.CharField(model_attr='storage_conditions', null=True)
    side_effects = indexes.CharField(model_attr='side_effects', null=True)

    def get_model(self):
        return Salt

    def index_queryset(self, using=None):
        return self.get_model().objects.all()
and my views.py file is -
from django.views.generic import View
from django.http import HttpResponse       # was missing from the snippet
from haystack.query import SearchQuerySet
from django.core import serializers as ss  # aliased so ss.serialize below works

class Medication_Search_View(View):
    def get(self, request, format=None):
        try:
            get_data = SearchQuerySet().all()
            print get_data
            serialized = ss.serialize("json", [data.object for data in get_data])
            return HttpResponse(serialized)
        except Exception, e:
            print e
My python manage.py rebuild_index works fine (it shows 'Indexing 2959 salts'), but in views.py SearchQuerySet() returns an empty queryset.
I am very worried about this. Please help if you know why I would get an empty queryset while there is data in my Salt model.
You should check the app name; it is case sensitive. Try writing the app name in lowercase letters.
My problem is solved now. The problem was that I had written the app name with capital letters while the database tables were made in small letters (myapp_Student), so it was breaking the database lookup.
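For anyone hitting the same thing, the convention that avoids this class of problem is lowercase app names, e.g. (a hypothetical minimal settings excerpt):

# settings.py (hypothetical minimal excerpt)
INSTALLED_APPS = [
    'django.contrib.contenttypes',
    'django.contrib.auth',
    'haystack',
    'medications',  # lowercase app name; Salt's table becomes medications_salt
]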
