I have a text file with a .sql extension that contains the code for building a table and populating its columns with values.
How would I generate a SQLite .db file out of that?
Assuming you are using SQLite via the command-line shell, use the .read FILENAME command to run the SQL; you'd then use the .save FILENAME command to save the database (including the .db extension). Alternatively, before using the .read command, you could use .open FILENAME.
e.g. Using the file C:\Users\Mike\mysql.sql as :-
CREATE TABLE IF NOT EXISTS mytable (id INTEGER PRIMARY KEY, mydata TEXT);
INSERT INTO mytable (mydata) VALUES('Fred'),('Mary'),('Sue'),('Tom');
SELECT * FROM mytable;
Starting a command window and then :-
Microsoft Windows [Version 10.0.17134.407]
(c) 2018 Microsoft Corporation. All rights reserved.
C:\Users\Mike>SQLITE3
SQLite version 3.22.0 2018-01-22 18:45:57
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> .read mysql.sql
1|Fred
2|Mary
3|Sue
4|Tom
sqlite>
.read mysql.sql was entered manually after issuing the SQLITE3 command.
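Staying on the transient in-memory database and using .save instead, as described above, should look something like this (a sketch of the same session, assuming the same mysql.sql):
sqlite> .read mysql.sql
1|Fred
2|Mary
3|Sue
4|Tom
sqlite> .save mydb.db
sqlite>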
PS .help results in :-
sqlite> .help
.auth ON|OFF Show authorizer callbacks
.backup ?DB? FILE Backup DB (default "main") to FILE
.bail on|off Stop after hitting an error. Default OFF
.binary on|off Turn binary output on or off. Default OFF
.cd DIRECTORY Change the working directory to DIRECTORY
.changes on|off Show number of rows changed by SQL
.check GLOB Fail if output since .testcase does not match
.clone NEWDB Clone data into NEWDB from the existing database
.databases List names and files of attached databases
.dbinfo ?DB? Show status information about the database
.dump ?TABLE? ... Dump the database in an SQL text format
If TABLE specified, only dump tables matching
LIKE pattern TABLE.
.echo on|off Turn command echo on or off
.eqp on|off|full Enable or disable automatic EXPLAIN QUERY PLAN
.excel Display the output of next command in a spreadsheet
.exit Exit this program
.expert EXPERIMENTAL. Suggest indexes for specified queries
.fullschema ?--indent? Show schema and the content of sqlite_stat tables
.headers on|off Turn display of headers on or off
.help Show this message
.import FILE TABLE Import data from FILE into TABLE
.imposter INDEX TABLE Create imposter table TABLE on index INDEX
.indexes ?TABLE? Show names of all indexes
If TABLE specified, only show indexes for tables
matching LIKE pattern TABLE.
.limit ?LIMIT? ?VAL? Display or change the value of an SQLITE_LIMIT
.lint OPTIONS Report potential schema issues. Options:
fkey-indexes Find missing foreign key indexes
.log FILE|off Turn logging on or off. FILE can be stderr/stdout
.mode MODE ?TABLE? Set output mode where MODE is one of:
ascii Columns/rows delimited by 0x1F and 0x1E
csv Comma-separated values
column Left-aligned columns. (See .width)
html HTML <table> code
insert SQL insert statements for TABLE
line One value per line
list Values delimited by "|"
quote Escape answers as for SQL
tabs Tab-separated values
tcl TCL list elements
.nullvalue STRING Use STRING in place of NULL values
.once (-e|-x|FILE) Output for the next SQL command only to FILE
or invoke system text editor (-e) or spreadsheet (-x)
on the output.
.open ?OPTIONS? ?FILE? Close existing database and reopen FILE
The --new option starts with an empty file
.output ?FILE? Send output to FILE or stdout
.print STRING... Print literal STRING
.prompt MAIN CONTINUE Replace the standard prompts
.quit Exit this program
.read FILENAME Execute SQL in FILENAME
.restore ?DB? FILE Restore content of DB (default "main") from FILE
.save FILE Write in-memory database into FILE
.scanstats on|off Turn sqlite3_stmt_scanstatus() metrics on or off
.schema ?PATTERN? Show the CREATE statements matching PATTERN
Add --indent for pretty-printing
.selftest ?--init? Run tests defined in the SELFTEST table
.separator COL ?ROW? Change the column separator and optionally the row
separator for both the output mode and .import
.sha3sum ?OPTIONS...? Compute a SHA3 hash of database content
.shell CMD ARGS... Run CMD ARGS... in a system shell
.show Show the current values for various settings
.stats ?on|off? Show stats or turn stats on or off
.system CMD ARGS... Run CMD ARGS... in a system shell
.tables ?TABLE? List names of tables
If TABLE specified, only list tables matching
LIKE pattern TABLE.
.testcase NAME Begin redirecting output to 'testcase-out.txt'
.timeout MS Try opening locked tables for MS milliseconds
.timer on|off Turn SQL timer on or off
.trace FILE|off Output each SQL statement as it is run
.vfsinfo ?AUX? Information about the top-level VFS
.vfslist List all available VFSes
.vfsname ?AUX? Print the name of the VFS stack
.width NUM1 NUM2 ... Set column widths for "column" mode
Negative values right-justify
sqlite>
The entire process (less creating the file mysql.sql) :-
C:\Users\Mike>dir
Volume in drive C has no label.
Volume Serial Number is 14E1-AC1D
Directory of C:\Users\Mike
19/11/2018 11:37 AM <DIR> .
19/11/2018 11:37 AM <DIR> ..
14/11/2018 07:48 PM <DIR> Links
14/11/2018 07:48 PM <DIR> Music
19/11/2018 11:26 AM 168 mysql.sql
21/08/2017 06:02 PM <DIR> Nero
10 File(s) 82,031 bytes
34 Dir(s) 149,798,195,200 bytes free
C:\Users\Mike>SQLITE3
SQLite version 3.22.0 2018-01-22 18:45:57
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> .open mydb.db
sqlite> .read mysql.sql
1|Fred
2|Mary
3|Sue
4|Tom
sqlite>
C:\Users\Mike>dir
Volume in drive C has no label.
Volume Serial Number is 14E1-AC1D
Directory of C:\Users\Mike
19/11/2018 11:39 AM <DIR> .
19/11/2018 11:39 AM <DIR> ..
14/11/2018 07:48 PM <DIR> Music
19/11/2018 11:39 AM 12,288 mydb.db
19/11/2018 11:26 AM 168 mysql.sql
21/08/2017 06:02 PM <DIR> Nero
11 File(s) 94,319 bytes
34 Dir(s) 149,797,101,568 bytes free
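A non-interactive alternative (a sketch, assuming sqlite3 is on the PATH): the shell creates the named database file if it does not exist and accepts dot-commands as arguments, so the whole process collapses to one command :-
C:\Users\Mike>sqlite3 mydb.db ".read mysql.sql"
or, equivalently, redirecting the file into the shell :-
C:\Users\Mike>sqlite3 mydb.db < mysql.sql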
I'm working on writing a batch file to pull data from multiple SQLite3 databases. I need to set 2 system environment variables and then use those two environment variables within my SELECT statement, as this will run against multiple databases that share the same schema but belong to different site locations (currently 14 different locations). What I have come up with so far is my main queryrun.bat file:
@echo off
setx /m DateStart '20200101'
setx /m DateEnd '20200103'
sqlite3 SiteID40.db < query40.dat
TIMEOUT /t 3
sqlite3 SiteID41.db < query41.dat
Then the query#.dat files look like this:
.mode list
.separator ,
.output results40.csv
SELECT * FROM FilesActive WHERE CompactDate >= %DateStart% and CompactDate <= %DateEnd%;
.quit
The %DateStart% and %DateEnd% placeholders are where I want to insert the system variables of the same name. I have tried using $DateStart and %DateStart%, both with no luck.
Any constructive help is appreciated. I must reiterate, though, that this is SQLite3, NOT MS SQL Server or MySQL.
Using the .param command in sqlite, something like this should work:
Change the .bat call from sqlite3 SiteID40.db < query40.dat to
sqlite3 SiteID40.db ".param set :DateStart %DateStart%" ".param set :DateEnd %DateEnd%" ".read query.dat"
Change the query#.dat file SELECTs to:
SELECT * FROM FilesActive WHERE CompactDate >= :DateStart and CompactDate <= :DateEnd;
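One caveat: setx writes the variable to the registry for future sessions only, so %DateStart% expands to nothing in the same cmd session that ran it; plain set makes it visible immediately. A minimal sketch of the revised queryrun.bat (assuming sqlite3 3.28.0 or later, where .param is available):
@echo off
rem use set, not setx, so the variables exist in this same session
set DateStart=20200101
set DateEnd=20200103
sqlite3 SiteID40.db ".param set :DateStart %DateStart%" ".param set :DateEnd %DateEnd%" ".read query40.dat"
TIMEOUT /t 3
sqlite3 SiteID41.db ".param set :DateStart %DateStart%" ".param set :DateEnd %DateEnd%" ".read query41.dat"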
I am using DataGrip or SQLiteStudio (database managers) to run a series of queries in a database which guide me to the information that I require. The queries work well, and the results are shown in the console of the database manager. However, I need to export the results that appear in the database manager console into a CSV file.
I have seen that everybody works directly in the shell, but I need (I have to) use a DB manager to run the queries (so far the queries that I need to run in one step are about 600 lines).
In the sqlite3 shell I am able to run (and it works):
.headers on
.mode csv
.output C:/filename.csv
select * from "6000_1000_Results";
.output stdout
However, running this code in the SQL editor of the DB manager doesn't work at all.
--(.....)
--(around 600 lines before)
--(.....)
"Material ID",
"Material Name",
SUM("Quantity of Material") Quantity
FROM
"6000_1000_Results_Temp"
GROUP BY
"DataCenterID", "Material ID";
------------------------------------------------------------
--(HERE IS WHERE I NEED TO EXPORT THE RESULTS IN A CVS FILE)
------------------------------------------------------------
.headers on
.mode csv
.output C:/NextCloudLuis/TemproDB.git/csvtest.csv
select * from "6000_1000_Results";
.output stdout
.show
DROP TABLE IF EXISTS "6000_1000_Results_Temp";
DROP TABLE IF EXISTS "6000_1000_Results";
DataGrip does not show any error; it runs the queries in a few seconds, but there's no file anywhere. SQLiteStudio gives a syntax error.
Finally, I solved this issue with the following steps:
I run all the queries from Python using the sqlite3 library. The results of all the queries are saved in a pandas DataFrame. The DataFrame is then exported to a CSV and an XLSX file.
Here is the python code:
import sqlite3
import queries

conn = sqlite3.connect("tempro.db")  # make the database connection
level4000 = queries.level4000to1000(conn)  # call the function in queries.py
level4000.to_csv('Level4000to1000.csv')  # export result to CSV
level4000.to_excel('Level4000to1000.xlsx')  # export result to XLSX
conn.close()
and here is the python file where I save all the queries (queries.py)
import sqlite3
from sqlite3 import Error
import pandas as pd

def level4000to1000(conn):
    cur = conn.cursor()
    cur.executescript(
        """
        /* Here I put all the 600 lines of queries */
        DROP TABLE IF EXISTS "4000_1000_Results_Temp";
        DROP TABLE IF EXISTS "4000_1000_Results";
        /* Here more and more lines */

        -- To keep the results from all queries
        CREATE TABLE "4000_1000_Results" AS
        SELECT
            "DeviceID",
            "Device Name",
            SUM("Quantity of Material") Quantity
        FROM
            "4000_1000_Results_Temp"
        GROUP BY
            "DeviceID", "Material ID";
        """)
    df = pd.read_sql_query('''SELECT * FROM "4000_1000_Results";''', conn)
    cur.executescript("""DROP TABLE IF EXISTS "4000_1000_Results_Temp";
                         DROP TABLE IF EXISTS "4000_1000_Results";""")
    return df  # returns a DataFrame with the results from the queries
In the end, it seems there is no way to export results to a file format such as CSV using SQL alone: commands like .headers, .mode, and .output are features of the sqlite3 shell, not of SQL, which is why database managers such as DataGrip and SQLiteStudio cannot run them.
I'm loading a CSV into a SQLite db like this:
sqlite3 /path/to/output.db < /path/to/sqlite_commands.sql
The sqlite command file looks like this:
sqlite_commands.sql
CREATE TABLE products (
"c1" TEXT PRIMARY KEY NOT NULL,
"c2" TEXT,
"c3" TEXT
);
.mode csv
.import /tmp/csv_with_dups.csv products
and the CSV looks like this:
/tmp/csv_with_dups.csv
c1,c2,c3
a,b,c
b,c,d
c,d,e
d,e,f
a,a,b
e,f,g
I am getting errors on stderr:
/tmp/csv_with_dups.csv.tmp:6: INSERT failed: UNIQUE constraint failed: products.c1
I want to silence this error, as we know that some CSVs have duplicates (the CSV is generated by a separate mechanism on a very large data set and cannot be validated for duplicates at that stage).
I've tried adding this line per the documentation
.log off
also tried
.log stderr|off
also tried
.log stderr off
sqlite3
.help
...
.log FILE|off Turn logging on or off. FILE can be stderr/stdout
...
The "INSERT failed" message is always printed to stderr.
You could ignore stderr, but that would also suppress all other error messages:
sqlite3 ... 2>/dev/null
Alternatively, generate the SQL commands yourself so that you can use INSERT OR IGNORE:
import sys
import csv

def quote_sql_str(s):
    # double any embedded single quotes, then wrap in quotes
    return "'" + s.replace("'", "''") + "'"

print('BEGIN;')
with open(sys.argv[1], newline='') as file:  # text mode; Python 3's csv module needs newline=''
    for row in csv.reader(file):
        print('INSERT OR IGNORE INTO products VALUES({});'
              .format(','.join([quote_sql_str(s) for s in row])))
print('COMMIT;')
python script.py csv_with_dups.csv | sqlite3 /path/to/output.db
Alternatively, import into a temporary table without constraints, then copy into the real table with INSERT OR IGNORE.
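A sketch of that staging-table route, in the same sqlite_commands.sql style (assuming a reasonably recent sqlite3; the --skip 1 option, which drops the CSV header row, needs 3.32 or later):
CREATE TABLE staging (
  "c1" TEXT,
  "c2" TEXT,
  "c3" TEXT
);
.mode csv
.import --skip 1 /tmp/csv_with_dups.csv staging
INSERT OR IGNORE INTO products SELECT * FROM staging;
DROP TABLE staging;
Because staging has no PRIMARY KEY, every row imports cleanly; INSERT OR IGNORE then discards the duplicates silently.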
New post:
I have already read the tutorial, and I found this script:
.LOGMECH LDAP;
.LOGON xx.xx.xx.xx/username,password;
.LOGTABLE dbname.LOG_tablename;
DATABASE dbname;
.BEGIN EXPORT SESSIONS 2;
.EXPORT OUTFILE D:\test.txt
MODE RECORD format text;
select a.my_date,b.name2,a.value from dbsource.tablesource a
inner join dbname.ANG_tablename b
on a.name1=b.name2
where value=59000
and a.my_date >= 01/12/2015
;
.END EXPORT;
.LOGOFF;
but it does not seem to work:
D:\>bteq < dodol.txt
BTEQ 15.00.00.00 Tue Jan 05 14:40:52 2016 PID: 4452
+---------+---------+---------+---------+---------+---------+---------+----
.LOGMECH LDAP;
+---------+---------+---------+---------+---------+---------+---------+----
.LOGON xx.xx.xx.xx/username,
*** Logon successfully completed.
*** Teradata Database Release is 13.10.07.12
*** Teradata Database Version is 13.10.07.12
*** Transaction Semantics are BTET.
*** Session Character Set Name is 'ASCII'.
*** Total elapsed time was 4 seconds.
+---------+---------+---------+---------+---------+---------+---------+----
.LOGTABLE dbname.LOG_tablename;
*** Error: Unrecognized command 'LOGTABLE'.
+---------+---------+---------+---------+---------+---------+---------+----
DATABASE dbname;
*** New default database accepted.
*** Total elapsed time was 2 seconds.
+---------+---------+---------+---------+---------+---------+---------+----
.BEGIN EXPORT SESSIONS 2;
*** Error: Unrecognized command 'BEGIN'.
+---------+---------+---------+---------+---------+---------+---------+----
.EXPORT OUTFILE D:\test.txt
*** Warning: No data format given. Assuming REPORT carries over.
*** Error: Expected FILE or DDNAME keyword, not 'OUTFILE'.
+---------+---------+---------+---------+---------+---------+---------+----
MODE RECORD format text;
MODE RECORD format text;
$
*** Failure 3706 Syntax error: expected something between the beginning of
the request and the 'MODE' keyword.
Statement# 2, Info =6
*** Total elapsed time was 1 second.
+---------+---------+---------+---------+---------+---------+---------+----
select a.my_date,b.name2,a.value from dbsource.tablesource a
inner join dbname.ANG_tablename b
on a.name1=b.name2
where value=59000
and a.my_date >= 01/12/2015
;
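Those errors point at the script rather than the data: .LOGTABLE, .BEGIN EXPORT SESSIONS, and .EXPORT OUTFILE ... MODE RECORD are FastExport commands, not BTEQ commands, which is why BTEQ rejects them one by one. If FastExport is installed, the same script should be fed to its own utility instead (a sketch, assuming fexp is on the PATH):
D:\>fexp < dodol.txt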
Old post:
I am new to Teradata. I have found mload for uploading big data. Now I have a question: is there an option to use cmd (Win7) to export data from Teradata to xxx.txt?
--- sample
select a.data1,b.data2,a.data3 from room1.REPORT_DAILY a
inner join room1.andaikan_saja b
on a.likeme=b.data2
where revenue=30000
and content_id like '%super%'
and a.trx_date >= 01/12/2015
;
This is my mload script, up.txt:
.LOGMECH LDAP;
.LOGON xx.xx.xx.xx/username,mypassword;
.LOGTABLE mydatabase.LOG_my_table;
SET QUERY_BAND = 'ApplicationName=TD-Subscriber-RechargeLoad; Version=01.00.00.00;' FOR SESSION;
.BEGIN IMPORT MLOAD
TABLES mydatabase.my_table
WORKTABLES mydatabase.WT_my_table
ERRORTABLES mydatabase.ET_my_table mydatabase.UV_my_table;
.LAYOUT LAYOUT_DATA INDICATORS;
.FIELD number * VARCHAR(20);
.DML LABEL DML_INSERT;
INSERT INTO mydatabase.my_table
(
number =:number
);
.IMPORT INFILE "D:\folderdata\data.txt"
LAYOUT LAYOUT_DATA
FORMAT VARTEXT
APPLY DML_INSERT;
.END MLOAD;
.LOGOFF &SYSRC;
I need a solution to export the file to my laptop, just like the query I put under the ---sample title. I use that script from teradasql, and I am searching for a cmd script.
If it's just a few MB and an ad hoc export, you can use SQL Assistant: set the delimiter in Tools-Options-Export/Import, maybe modify the settings in Tools-Options-Export, and then click File-Export Results before submitting your SELECT. (Similar in TD Studio.)
Otherwise the easiest way to extract data in a readable delimited format is TPT, either Export for large amounts of data (GBs) or SQL Selector (MBs). TPT is available for most Operating Systems including Windows.
There's a nice User Guide with lots of example scripts:
Job Example 12: Extracting Rows and Sending Them in Delimited Format
In your case you'll define a generic template file like this:
DEFINE JOB EXPORT_DELIMITED_FILE
DESCRIPTION 'Export rows from a Teradata table to a delimited file'
(
APPLY TO OPERATOR ($FILE_WRITER() ATTR (Format = 'DELIMITED'))
SELECT * FROM OPERATOR ($SELECTOR ATTR (SelectStmt = #ExportSelectStmt));
);
Change $SELECTOR to $EXPORT for larger exports.
Then you just need a job variable file like this:
SourceTdpId = 'your system'
,SourceUserName = 'your user'
,SourceUserPassword = 'your password'
,FileWriterFileName = 'xxx.txt'
,ExportSelectStmt = 'select a.data1,b.data2,a.data3 from room1.REPORT_DAILY a
inner join room1.andaikan_saja b
on a.likeme=b.data2
where revenue=30000
and content_id like ''%super%''
and a.trx_date >= DATE ''2015-12-01'' -- modified this to a valid date literal
;'
The only bad part is that you have to double any single quotes within your select, e.g. '%super%' -> ''%super%''.
Finally you run a cmd:
tbuild -f your_template_file -v your_job_var_file
Depending on the volume of data you wish to extract from Teradata you can use Teradata BTEQ or the Teradata Parallel Transport (TPT) utility with the EXPORT operator from the command line to extract the data.
The TPT utility is the eventual replacement for the legacy Teradata Load and Unload utilities (FastLoad, MultiLoad, FastExport, and TPump) and provides an easier mechanism to produce delimited flat files over FastExport. TPT is fairly flexible and effective for exporting large volumes of data to channel or network attached clients.
Teradata BTEQ can perform lightweight load and unload functions. The BTEQ manual is pretty good at providing you an overview of how to use the various commands to produce a semi-structured report or data extract. It doesn't have a simple command to produce a delimited flat file. If you review the manual's overview of the EXPORT command you should get a good feel for how BTEQ behaves when working with channel or network attached clients.
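For a small ad hoc extract, a minimal BTEQ sketch along those lines (assuming the same logon as in the question; REPORT mode writes a fixed-width report rather than a delimited file):
.LOGMECH LDAP;
.LOGON xx.xx.xx.xx/username,password;
.EXPORT REPORT FILE = D:\test.txt;
SELECT a.my_date, b.name2, a.value
FROM dbsource.tablesource a
INNER JOIN dbname.ANG_tablename b
ON a.name1 = b.name2
WHERE value = 59000
AND a.my_date >= DATE '2015-12-01';
.EXPORT RESET;
.LOGOFF;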
I want to investigate how to access a SQLite DB from SAS. What's the easiest way of doing this? Is there a SAS product we can license to do that? I don't want to use ODBC drivers, as those seem to have been written a long time ago and are not officially part of SQLite.
SAS supports reading data from pipes (in a unix environment). Essentially, you can set up a filename statement to execute an sqlite command in the host environment, then process the command output as if reading it from a text file.
SAS Support page: http://support.sas.com/documentation/cdl/en/hostunx/61879/HTML/default/viewer.htm#pipe.htm
Example:
*----------------------------------------------
* (1) Write a command in place of the file path
* --> important: the 'pipe' option makes this work
*----------------------------------------------;
filename QUERY pipe 'sqlite3 database_file "select * from table_name"';
*----------------------------------------------
* (2) Use a datastep to read the output from sqlite
*----------------------------------------------;
options linesize=max; *to prevent truncation of results;
data table_name;
infile QUERY delimiter='|' missover dsd lrecl=32767;
length
numeric_id 8
numeric_field 8
character_field_1 $40
character_field_2 $20
wide_character_field $500
;
input
numeric_id
numeric_field
character_field_1 $
character_field_2 $
wide_character_field $
;
run;
*----------------------------------------------
* (3) View the results, process data etc.
*----------------------------------------------;
proc contents;
proc means;
proc print;
run;
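If the query needs a different separator or headers, the piped command can set them inline with the shell's standard flags (a sketch; -list, -separator, and -header are sqlite3 command-line options):
filename QUERY pipe 'sqlite3 -list -separator "|" database_file "select * from table_name"';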