sqlalchemy with sqlite to_sql not creating table nor database

I'm new to SQLAlchemy. When I run this code there is no database. I want it to create the database, create the table defined below, and append the data. According to the documentation for to_sql, this code should create the table if it doesn't exist (it doesn't). When I run it, it throws an error that the table has no column named 'num 1', and it does not create the database. What am I doing wrong, please?
import pandas as pd
import sqlite3
from sqlalchemy import create_engine
date_stuff = [(20171219, 13.71, 28), (20171319, 144.71, 33), (20171919, 99.99, 99)]
labels = ['date', 'num 1', 'num 2']
dev_env = "/home/test/Desktop/mtest/hvdata/"
db_name = "tinydatabase.db"

def new_sql_add(todays_data):
    todays_data.to_sql(name='mcm_trends', con=db, if_exists='append')

if __name__ == '__main__':
    db_path = dev_env + db_name
    db = create_engine('sqlite:///db_path')
    df_for_sql = pd.DataFrame.from_records(date_stuff, columns=labels)
    new_sql_add(df_for_sql)
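The likely culprit is the connection URL: 'sqlite:///db_path' is a literal string, so SQLAlchemy creates (or appends to) a file literally named db_path in the current working directory rather than tinydatabase.db. A minimal sketch of the fix, building the URL from the actual path:

db_path = dev_env + db_name
# Interpolate the real file path instead of the literal text 'db_path'
db = create_engine('sqlite:///' + db_path)
df_for_sql.to_sql(name='mcm_trends', con=db, if_exists='append')

If earlier runs left behind a stray db_path file with a different column layout, that would also explain the "table mcm_trends has no column named num 1" error; deleting that file clears the mismatch.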

Related

Get multiple variables from AspenTech InfoPlus.21

I am writing a script to connect to an AspenTech InfoPlus.21 database server.
When calling a single TAG I don't have any problem:
import pandas as pd
import pyodbc
from datetime import datetime
from datetime import timedelta
#---- Connect to IP21
conn = pyodbc.connect("DRIVER={AspenTech SQLplus};HOST=192.xxx.x.xxx;PORT=10014")
#---- Query string
tag = 'BAN0E10TI110V'
end = datetime.now()
start = end - timedelta(days=2)
end = end.strftime("%Y-%m-%d %H:%M:%S")
start = start.strftime("%Y-%m-%d %H:%M:%S")
sql = "select TS, VALUE from HISTORY " \
      "where NAME = '%s' " \
      "and PERIOD = 300*10 " \
      "and REQUEST = 2 " \
      "and TS between TIMESTAMP'%s' and TIMESTAMP'%s'" % (tag, start, end)
data = pd.read_sql(sql, conn)  # Pandas DataFrame with your data!
When calling multiple tags through a CSV file (following script), I cannot get the required data.
import pandas as pd
import pyodbc
from datetime import datetime
from datetime import timedelta
#---- Connect to IP21
conn = pyodbc.connect("DRIVER={AspenTech SQLplus};HOST=192.xxx.x.xxx;PORT=10014")
tags = pd.read_csv("C:\\Users\\xxx\\TAGcsvIN.csv", decimal=',', sep=';', parse_dates=True)
#---- Query string
end = datetime.now()
start = end - timedelta(days=2)
end = end.strftime("%Y-%m-%d %H:%M:%S")
start = start.strftime("%Y-%m-%d %H:%M:%S")
sql = "select TS, VALUE from HISTORY " \
      "where NAME = '%s' " \
      "and PERIOD = 300*10 " \
      "and REQUEST = 2 " \
      "and TS between TIMESTAMP'%s' and TIMESTAMP'%s'" % (tags['TAGcsv'], start, end)
data = pd.read_sql(sql, conn)  # tags['TAGcsv'] is a whole Series, not a single tag
Does anyone know how to call multiple tags via a CSV file?
I'm not proficient in Python, but if you want to query several tags, you should build a query like this:
"where NAME IN ('tag1', 'tag2', 'tagN')" \

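A minimal sketch of that suggestion in Python, assuming the CSV has a TAGcsv column as in the script above:

# Quote each tag name and join them into: 'tag1', 'tag2', ...
tag_list = ", ".join("'%s'" % t for t in tags['TAGcsv'])
sql = ("select TS, VALUE from HISTORY "
       "where NAME in (%s) "
       "and PERIOD = 300*10 "
       "and REQUEST = 2 "
       "and TS between TIMESTAMP'%s' and TIMESTAMP'%s'" % (tag_list, start, end))
data = pd.read_sql(sql, conn)  # one DataFrame covering every tag in the CSV
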
Select Statement getting overridden in job variables file

I'm running a tdload command using a job variables file with these values:
SelectStmt = 'select * from database.tablename where column1 > 100',
SourceTdpid = 'hostid',
SourceUserName = 'username',
SourceUserPassword = 'password',
SourceTable = 'database.tablename',
FileWriterFileSizeMax = '10M',
TargetTextDelimiter = '|',
TargetFilename = 'file.csv',
FileWriterQuotedData = 'Y'
The filter clause in the select statement should return only 39 rows,
but I'm getting all of the rows from the table in the extracted file.
How do I resolve this?
I had to use ExportSelectStmt instead of SelectStmt.
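For reference, the corrected entry in the job variables file would look like this, with all other values unchanged (presumably the export step only honors ExportSelectStmt, which is why the unfiltered table was written out):

ExportSelectStmt = 'select * from database.tablename where column1 > 100',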

Scan all records in AWS DynamoDB

I have a Python program to scan all the records from a DynamoDB table; however, it's not retrieving all of them. I am using LastEvaluatedKey to page through the results because of the 1 MB limit per scan, but it looks like LastEvaluatedKey is not present in my response. Can someone please help?
import json
import sys
import boto3
from boto3.dynamodb.conditions import Key, Attr
dynamodb = boto3.resource('dynamodb')
def lambda_handler(event, context):
    table = dynamodb.Table('Your_Table_Name')
    queryCount = 1
    response = table.scan()
    print("Total Records:-", response['ScannedCount'])
    # Extract the results
    items = response['Items']
    for item in items:
        print(item)
        queryCount = queryCount + 1
    while 'LastEvaluatedKey' in response:
        print('1---------')
        key = response['LastEvaluatedKey']
        response = table.scan(ExclusiveStartKey=key)
        items = response['Items']
        for item in items:
            queryCount = queryCount + 1
            print("2---------")
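For comparison, here is a minimal sketch of the standard pagination pattern, accumulating every page into one list (the table name is assumed from the snippet above). Note that when an entire scan fits in a single 1 MB page, DynamoDB omits LastEvaluatedKey from the response, so its absence can simply mean the first page already holds every record:

import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('Your_Table_Name')

items = []
response = table.scan()
items.extend(response['Items'])
# LastEvaluatedKey is present only while more pages remain;
# keep scanning from that key until it disappears.
while 'LastEvaluatedKey' in response:
    response = table.scan(ExclusiveStartKey=response['LastEvaluatedKey'])
    items.extend(response['Items'])
print("Total records retrieved:", len(items))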

How to import 11 million row table into Rstudio from Google BigQuery? [code included]

I am trying to do some data exploration on a dataset I have. The table I want to import is 11 million rows. Here is the script and its output:
#Creating a variable for our BQ project space
project_id = 'project space'
#Query
Step1 <-
"
insertquery
"
#Executing the query from the variable above
Step1_df <- query_exec(Step1, project = project_id, use_legacy_sql = FALSE, max_pages = Inf,page_size = 99000)
Error:
Error in curl::curl_fetch_memory(url, handle = handle) :
Operation was aborted by an application callback
Is there a different BigQuery library I can use? I'm also looking to speed up the load time.

How do I solve an SQLite DB IndexError?

I'm working with web2py and an SQLite DB on Ubuntu. In web2py, a user input posts an item such as 'Hello World' into the SQLite DB as follows:
In the default controller, the item is posted into ThisDb as follows:
import sqlite3

consult = db.consult(id) or redirect(URL('index'))
form1 = [consult.body]
form5 = form1  # .split()
name3 = ' '.join(form5)
conn = sqlite3.connect("ThisDb.db")
c = conn.cursor()
conn.execute("INSERT INTO INPUT (NAME) VALUES (?);", (name3,))
conn.commit()
Another piece of code picks up (or should read) the item from ThisDb, in this case 'Hello World', as follows:
location = ""
conn = sqlite3.connect("ThisDb.db")
c = conn.cursor()
c.execute('select * from input')
c.execute("select MAX(rowid) from [input];")
for rowid in c:
    break
for elem in rowid:
    m = elem
c.execute("SELECT * FROM input WHERE rowid = ?", (m,))
for row in c:
    break
location = row[1]
name = location.lower().split()
My DB configuration for the table 'input', where 'Hello World' should be read from, is this:
CREATE TABLE `INPUT` (
`NAME` TEXT
);
This code previously worked well on Windows 7 and 10, but I'm having this problem on Ubuntu 16.04. I keep getting this error:
File "applications/britamintell/modules/xxxxxx/define/yyyy0.py", line 20, in xxxdefinition
location = row[1]
IndexError: tuple index out of range
row[0] is the value in the first column.
row[1] is the value in the second column.
Apparently, your previous database had more than one column.
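Given the CREATE TABLE above, INPUT has a single NAME column, so the fix is to read index 0 rather than index 1. A minimal sketch:

c.execute("SELECT * FROM input WHERE rowid = ?", (m,))
row = c.fetchone()  # equivalent to the for/break idiom, but clearer
location = row[0]   # NAME is the only column, so it sits at index 0
name = location.lower().split()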
