SQLite - Run multi-line SQL script from file?

I have the following SQL in a file, user.sql:
CREATE TABLE user
(
user_id INTEGER PRIMARY KEY,
username varchar(255),
password varchar(255)
);
However, when the following command is executed:
sqlite3 my.db < user.sql
The following error is generated:
Error: near line 1: near ")": syntax error
I would prefer to keep the SQL as-is, as the file will be checked into source control and will be more maintainable and readable as it is now. Can the SQL span multiple lines like this, or do I need to put it all on the same line?

I realize that this is not a direct answer to your question. As Brian mentions, this could be a silly platform issue.
If you interface with SQLite through Python, you will probably avoid most platform-specific issues and you get to have fun things like datetime columns :-)
Something like this should work fine:
import sqlite3

# Read the whole SQL file and run it against the database.
# Note: cursor.execute() runs a single statement; for a file containing
# several statements, use cursor.executescript() instead.
qry = open('create_table_user.sql', 'r').read()
conn = sqlite3.connect('/path/to/db')
c = conn.cursor()
c.execute(qry)
conn.commit()
c.close()
conn.close()
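The datetime columns mentioned above come from sqlite3's optional type detection. A minimal sketch of that feature (the events table and its columns are made up purely for illustration):
import sqlite3
import datetime

# Sketch only: PARSE_DECLTYPES tells sqlite3 to convert columns declared as
# DATE/TIMESTAMP into Python date/datetime objects; 'events' is a hypothetical table.
conn = sqlite3.connect('/path/to/db', detect_types=sqlite3.PARSE_DECLTYPES)
c = conn.cursor()
c.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, created TIMESTAMP)")
c.execute("INSERT INTO events (created) VALUES (?)", (datetime.datetime.now(),))
conn.commit()
row = c.execute("SELECT created FROM events").fetchone()
print(type(row[0]))  # expected: <class 'datetime.datetime'>
conn.close()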

I had exactly the same problem.
Then I noticed that my editor (Notepad++) reported Macintosh format for the end-of-line characters.
Converting the line endings to Unix style turned the script into a format that sqlite3 understood.
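For reference, a minimal sketch of that conversion in Python, assuming user.sql is the script with Macintosh (CR) line endings:
# Rewrite user.sql with Unix (LF) line endings; handles both CRLF and bare CR.
with open('user.sql', 'rb') as f:
    data = f.read()
data = data.replace(b'\r\n', b'\n').replace(b'\r', b'\n')
with open('user.sql', 'wb') as f:
    f.write(data)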

Multiple lines aren't a problem. There might be a platform issue, because I am able to run this example successfully using SQLite3 3.6.22 on OS X 10.5.8.

Here is bernie's Python example, upgraded to handle exceptions in the script instead of failing silently (Windows 7, ActiveState Python 3.x):
import sqlite3
import os
import os.path
import ctypes

databaseFile = '.\\SomeDB.db'
sqlFile = '.\\SomeScripts.sql'

# Delete the old database file
if os.path.isfile(databaseFile):
    os.remove(databaseFile)

# Create the tables
qry = open(sqlFile, 'r').read()
sqlite3.complete_statement(qry)  # only checks for a terminating semicolon; the result is not used here
conn = sqlite3.connect(databaseFile)
cursor = conn.cursor()
try:
    cursor.executescript(qry)
except Exception as e:
    MessageBoxW = ctypes.windll.user32.MessageBoxW
    errorMessage = databaseFile + ': ' + str(e)
    MessageBoxW(None, errorMessage, 'Error', 0)
    cursor.close()
    raise

Related

How to open an SQLite database readonly in Julia?

I'd like to read my Safari history database from a Julia script (Mac OS X).
I have a command line script that works:
sqlite3 -readonly ~/Library/Safari/History.db 'SELECT v.title, i.url FROM history_items i, history_visits v WHERE i.url LIKE "%en.wikipedia.org%" AND i.id=v.history_item AND v.title LIKE "%- Wikipedia%" GROUP BY v.title ORDER BY v.visit_time'
... but trying it in Julia (in Juno / Atom) gives me a permission error
db = SQLite.DB("/Users/grimxn/Library/Safari/History.db")
sql = """
SELECT v.title, i.url, v.visit_time
FROM history_items i, history_visits v
WHERE i.url LIKE "%en.wikipedia.org%"
AND i.id=v.history_item
AND v.title LIKE "%- Wikipedia%"
GROUP BY v.title
ORDER BY v.visit_time
"""
result = DBInterface.execute(db, sql) |> DataFrame
(rows, cols) = size(result)
println("Result has $(rows) rows")
println("Earliest: $(result[1,1])")
println("Latest: $(result[rows,1])")
ERROR: LoadError: SQLite.SQLiteException("unable to open database file")
Now, when I copy the database to my home directory, and swap
db = SQLite.DB("/Users/grimxn/Library/Safari/History.db")
to
db = SQLite.DB("/Users/grimxn/History.db")
everything works, so I guess the Julia / Juno process only has read permission, but is opening the database read/write.
How do I attach to the database as readonly in Julia?
Theoretically, use a URI connection string: file:foo.db?mode=ro.
This is documented in the SQLite manual.
Practically, it appears the current version of the SQLite.jl package does not support URIs, and neither does it support flags that could be passed along to sqlite3_open_v2().
Leaving this answer for reference just in case the Julia package fixes this some day.
Jaen's answer was correct, and also correctly predicted that the mode=ro flag would be supported. It is now supported, and so the following will work (and does as of today):
julia> using SQLite
julia> db = SQLite.DB("file:/path/to/db.sqlite?mode=ro")
SQLite.DB("file:/path/to/db.sqlite?mode=ro")

How to export the query results from a Database Manager to a CSV file in SQLite?

I am using DataGrip or SQLiteStudio (database managers) to run a series of queries in a database which guide me to the information that I require. The queries work well and the results are shown in the console of the database manager. However, I need to export the results that appear in the database manager console into a CSV file.
I have seen that everybody works directly in the shell, but I need (I have to) to use a DB manager to run the queries (so far, the queries that I need to run in one step are about 600 lines).
In the sqlite3 shell I am able to run the following (and it works):
.headers on
.mode csv
.output C:/filename.csv
select * from "6000_1000_Results";
.output stdout
However, running this code in the SQL editor of the DB manager doesn't work at all.
--(.....)
--(around 600 lines before)
--(.....)
"Material ID",
"Material Name",
SUM("Quantity of Material") Quantity
FROM
"6000_1000_Results_Temp"
GROUP BY
"DataCenterID", "Material ID";
------------------------------------------------------------
--(HERE IS WHERE I NEED TO EXPORT THE RESULTS IN A CSV FILE)
------------------------------------------------------------
.headers on
.mode csv
.output C:/NextCloudLuis/TemproDB.git/csvtest.csv
select * from "6000_1000_Results";
.output stdout
.show
DROP TABLE IF EXISTS "6000_1000_Results_Temp";
DROP TABLE IF EXISTS "6000_1000_Results";
DataGrip does not show any error; it runs the queries in a few seconds, but there is no file anywhere. SQLiteStudio gives a syntax error.
Finally, I solved this issue with the following steps:
I run all the queries from Python using the sqlite3 library. The results of all the queries are saved in a pandas DataFrame, and the DataFrame is then exported to CSV and XLSX files.
Here is the python code:
import sqlite3
import queries

conn = sqlite3.connect("tempro.db")  # make the database connection to Python
level4000 = queries.level4000to1000(conn)  # call the function in queries.py
level4000.to_csv('Level4000to1000.csv')  # export result to csv
level4000.to_excel('Level4000to1000.xlsx')  # export result to xlsx
conn.close()
and here is the Python file where I keep all the queries (queries.py):
import sqlite3
from sqlite3 import Error
import pandas as pd

def level4000to1000(conn):
    cur = conn.cursor()
    cur.executescript(
        """
        /* Here I put all the 600 lines of queries */
        DROP TABLE IF EXISTS "4000_1000_Results_Temp";
        DROP TABLE IF EXISTS "4000_1000_Results";
        /* Here more and more lines */
        --To keep the results from all queries
        CREATE TABLE "4000_1000_Results_Temp" (
        "DeviceID" INTEGER,
        "Device Name" TEXT,
        SUM("Quantity of Material") Quantity
        FROM
        "4000_1000_Results_Temp"
        GROUP BY
        "DeviceID", "Material ID";
        """)
    df = pd.read_sql_query('''SELECT * FROM "4000_1000_Results";''', conn)
    cur.executescript("""DROP TABLE IF EXISTS "4000_1000_Results_Temp";
        DROP TABLE IF EXISTS "4000_1000_Results";""")
    return df  # returns a dataframe with the results from the queries
In the end, it seems that there is no way to export results to file formats such as CSV using SQL statements alone.
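If pandas is not available, a similar export can be done with only the standard library. A minimal sketch, reusing the database and table names from the question (results.csv is a placeholder output path):
import csv
import sqlite3

conn = sqlite3.connect("tempro.db")
cur = conn.cursor()
cur.execute('SELECT * FROM "6000_1000_Results"')

with open("results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cur.description])  # header row from cursor metadata
    writer.writerows(cur.fetchall())

conn.close()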

Using Mainframe Datasets in Python 3.6 within Anaconda Spyder

I am trying to read and write mainframe dataset data in Python 3.6. I am using Anaconda's Spyder (version 3.2.4), and the zosftplib package in order to use mainframe features. Below is the code snippet:
import zosftplib
Myzftp = zosftplib.Zftp("ip address-mainframe","username","password")
mf_file = open("mainframe ps file-name", 'r+')
ffa = mf_file.read(16);
print ("Read record is :", ffa)
mf_file.close()
The mainframe PS file contains one record with the data 0010021023457893, but the output I am getting in the Spyder kernel is spaces. I also tried ftplib, but it did not work there either. I believe a conversion is required, as it is not a text file that I am reading. Does anyone have any suggestions? Thanks.
The expected result after reading and printing the file is 0010021023457893.
The zosftplib package will give you FTP access to your datasets on z/OS, meaning you can download them, but you have to open them locally. Also, you need to be aware of the encoding differences between your local machine and the z/OS environment, so you should specify the sbdataconn() argument to provide codepage translation. I was able to do what you want with code like this:
import zosftplib

Myzftp = zosftplib.Zftp('mainframe_ip',
                        'mainframe_userid',
                        'mainframe_password',
                        timeout=500.0,
                        sbdataconn='(ibm-1147,iso8859-1)')
Myzftp.download_text('mainframe_dataset_name', '/tmp/local_filename.txt')
mf_file = open('/tmp/local_filename.txt', 'r+')
ffa = mf_file.read(16)
print("Read record is :", ffa)
mf_file.close()

Insert Blob into VARBINARY(MAX) into column encrypted table on SQL Server using pyodbc

I am currently investigating the use of the Always Encrypted feature of Microsoft SQL Server. I am trying to store a blob object in a column-encrypted table ('randomised') using pyodbc. While the code works perfectly fine for inserting arbitrary binary objects into non-encrypted columns, it fails when running the same code on a column that is encrypted. Even stranger, it works for non-image files, but whenever I try to upload a PDF, JPEG, PNG or similar, it fails.
The code looks like this.
import pyodbc

server = 'tcp:XXXXX-XXXXXX-XXXXX-XXXXX-XXXXX.windows.net,1433'
database = 'db-encryption'
username = 'XXXXXX#dbs-always-encrypted'
password = 'XXXXXXXXX'

connection_string = [
    'DRIVER={ODBC Driver 17 for SQL Server}',
    'Server={}'.format(server),
    'Database={}'.format(database),
    'UID={}'.format(username),
    'PWD={}'.format(password),
    'Encrypt=yes',
    'TrustedConnection=yes',
    'ColumnEncryption=Enabled',
    'KeyStoreAuthentication=KeyVaultClientSecret',
    'KeyStorePrincipalId=XXXXX-XXXXXX-XXXXX-XXXXX-XXXXX',
    'KeyStoreSecret=XXXXX-XXXXXX-XXXXX-XXXXX-XXXXX'
]

cnxn = pyodbc.connect(';'.join(connection_string))
cursor = cnxn.cursor()

insert = 'insert into Blob (Data) values (?)'
files = ['Text.txt', 'SimplePDF.pdf']

for file in files:
    # without hex encode
    bindata = None
    with open(file, 'rb') as f:
        bindata = pyodbc.Binary(f.read())
    # insert binary
    cursor.execute(insert, bindata)
    cnxn.commit()
The error message I receive when trying to run the code on the encrypted 'Data' column (VARBINARY(MAX)) is the following
pyodbc.DataError: ('22018', "[22018] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Operand type clash: image is incompatible with varbinary(max) encrypted with (encryption_type = 'RANDOMIZED', encryption_algorithm_name = 'AEAD_AES_256_CBC_HMAC_SHA_256', column_encryption_key_name = 'CEK_Auto1', column_encryption_key_database_name = 'db-encryption') (206) (SQLExecDirectW)")
It seems like the driver reads the bytes, decides that they are a 'known type', and treats the data as 'image'.
Is there any way I can prevent this from happening? I simply want to store an arbitrary byte object in that column.
It might be late, but the issue is with your driver. You must install the ODBC Driver 17, or use {ODBC Driver 13 for SQL Server}, or you can also try {SQL Server}.
Download the driver from here
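A possible workaround, not from the original thread and assuming a pyodbc version that supports setinputsizes (4.0.24 or later), is to hint the parameter type explicitly so the driver sends varbinary(max) instead of inferring the legacy image type. A sketch, reusing the cursor, insert, bindata and cnxn names from the question:
# Hedged sketch: declare the parameter as varbinary(max) before executing the insert.
cursor.setinputsizes([(pyodbc.SQL_VARBINARY, 0, 0)])
cursor.execute(insert, bindata)
cnxn.commit()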

Multiple query execution in cloudera impala

Is it possible to execute multiple queries at the same time in Impala? If yes, how does Impala handle it?
I would certainly do some tests of your own, but I was not able to get multiple queries to execute:
I was using an Impala connection and reading the query from a .sql file. This works for single commands.
from impala.dbapi import connect

# actual server and port changed for this post for security
conn = connect(host='impala server', port=11111, auth_mechanism="GSSAPI")
cursor = conn.cursor()
cursor.execute(open("sandbox/z_temp.sql").read())
This is the error I received.
HiveServer2Error: AnalysisException: Syntax error in line 2:
This is what the SQL looked like in the .sql file.
Select * FROM database1.table1;
Select * FROM database1.table2;
I was able to run multiple commands by putting the SQL commands in separate .sql files and iterating over all the .sql files in a specified folder.
import glob
import pandas as pd

# Create a list of file names for the recon .sql files; this will be sorted.
# Numbers at the beginning of each filename matter, so that files are executed in the correct order.
file_names = glob.glob('folder/*.sql')
asc_names = sorted(file_names, reverse=False)

# creates an error log dataframe to print, or write to file at end of job
df_log = pd.DataFrame(columns=['test_name', 'test_status'])

# conn is the Impala connection created above
for file_name in asc_names:
    str_filename = str(file_name)
    print(str_filename)
    query = open(str_filename).read()
    cursor = conn.cursor()
    try:
        # Each SQL command must be executed separately
        cursor.execute(query)
        df_id = pd.DataFrame([{'test_name': str_filename[-40:], 'test_status': 'PASS'}])
        df_log = df_log.append(df_id, ignore_index=True)
    except:
        df_id = pd.DataFrame([{'test_name': str_filename[-40:], 'test_status': 'FAIL'}])
        df_log = df_log.append(df_id, ignore_index=True)
        continue
Another way to do this would be to have all of the SQL statements in one .sql file separated by ; and then loop through the .sql file, splitting the statements out on ; and running them one at a time.
from impala.dbapi import connect
from impala.util import as_pandas

conn = connect(host='impalaserver', port=11111, auth_mechanism='GSSAPI')
cursor = conn.cursor()

# split SQL statements from one file separated by ';'. Note: the last command will not have a semicolon at the end.
sql_file = open("sandbox/temp.sql").read()
sql = sql_file.split(';')

for cmd in sql:
    # This gets rid of the non-printing characters you may have
    cmd = cmd.replace('\r', '')
    cmd = cmd.replace('\n', '')
    # This runs your SQL commands one at a time.
    cursor.execute(cmd)
    print(cmd)
Impala can execute multiple queries at the same time as long as it doesn't hit the memory cap.
You can issue a command like impala-shell -f <<file_name>>, where the file has multiple queries, each complete query separated by a semicolon (;).
If you are a Python geek, you can even try the impyla package to create multiple connections and run all your queries at once:
pip install impyla
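A minimal sketch of what that could look like with impyla; the host, port and queries are placeholders, and whether the statements really run concurrently also depends on the cluster's admission control and memory limits:
from concurrent.futures import ThreadPoolExecutor
from impala.dbapi import connect

queries = [
    'SELECT * FROM database1.table1',
    'SELECT * FROM database1.table2',
]

def run(sql):
    # One connection per query; connection details are placeholders.
    conn = connect(host='impala server', port=11111, auth_mechanism='GSSAPI')
    cur = conn.cursor()
    cur.execute(sql)
    rows = cur.fetchall()
    conn.close()
    return rows

with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(run, queries))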
