Export SQLite empty tables to CSV with headers

When I export tables from an SQLite database to CSV files with headers, the tables that are empty produce an empty CSV file.
I would like to obtain a CSV file with just the header in that case.
Here is an example.
Create a database with tblA and tblB, where tblA has no data.
sqlite3 test.sqlite
CREATE TABLE tblA (
ID LONG,
Col01 TEXT
);
CREATE TABLE tblB (
ID LONG,
Col01 TEXT
);
INSERT INTO tblB (ID, Col01)
VALUES
(1, 'AAA'),
(2, 'BBB');
.quit
Export all tables to CSV:
# obtain all data tables from the database
tables=$(sqlite3 test.sqlite "SELECT tbl_name FROM sqlite_master WHERE type='table' AND tbl_name NOT LIKE 'sqlite_%';")
for i in $tables; do
    sqlite3 -header -csv test.sqlite "select * from $i;" > "$i.csv"
done
Check the content of these CSV files:
[tmp] : head *csv
==> tblA.csv <==
==> tblB.csv <==
ID,Col01
1,AAA
2,BBB
I would like to obtain this instead:
[tmp] : head *csv
==> tblA.csv <==
ID,Col01
==> tblB.csv <==
ID,Col01
1,AAA
2,BBB

One option is to use pragma table_info to get the column names, and then append the rows' content:
for i in $tables; do
    # header: column names from table_info, joined with commas
    sqlite3 test.sqlite "pragma table_info($i)" | cut -d '|' -f 2 | paste -s -d, > "$i.csv"
    # data rows, appended without a header
    sqlite3 -csv test.sqlite "select * from $i;" >> "$i.csv"
done
Result:
$ cat tblA.csv
ID,Col01
$ cat tblB.csv
ID,Col01
1,AAA
2,BBB

Combining @Shawn's comment with https://stackoverflow.com/a/27710284/788700:
# do the processing:
sqlite3 -init script.sql test.sqlite .exit
# if the processing produced an empty csv file, write the header to it:
test -s tblA.csv || sqlite3 test.sqlite "select group_concat(name, ',') from pragma_table_info('tblA')" > tblA.csv
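Putting the two pieces together, here is a minimal sketch (assuming the test.sqlite example above, and SQLite 3.16+ for the pragma_table_info table-valued function) that uses the loop from the question and backfills the header whenever an export comes out empty:
tables=$(sqlite3 test.sqlite "SELECT tbl_name FROM sqlite_master WHERE type='table' AND tbl_name NOT LIKE 'sqlite_%';")
for i in $tables; do
    sqlite3 -header -csv test.sqlite "select * from $i;" > "$i.csv"
    # an empty table yields an empty file; write just the header instead
    test -s "$i.csv" || sqlite3 test.sqlite "select group_concat(name, ',') from pragma_table_info('$i')" > "$i.csv"
done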

Related

How to use a column value as part of the file name created from a table using a unix ksh script

I have a Unix ksh script that creates a file from a Snowflake table. I need to use the value in one of the columns in this table (same value for all rows) in the generated file name.
Instead of this...
filename=My_File_$(date +%Y%m%d).txt
I want this...
filename=My_File_202120.txt
...taken from the "Week_ID" column in this table:
Col_A   Col_B   Week_ID   Col_D
One     Two     202120    Dog
Three   Four    202120    Cat
Seven   Two     202120    Lizard
Two     Ten     202120    Bird
Here is a mocked-up version of my ksh script. What code do I add, and where, to get the desired file name?
=================================
#!/usr/bin/ksh
set +x
. /temp/users/omega/.tdlogon_prd2
TZ=":US/Pacific"
pipeFile=My_File_${WeekID}.pipe
filename=My_File_${WeekID}
rm -f ${pipeFile} ${filename}.txt
echo "COL_A|COL_B|WEEK_ID|COL_D" > ${filename}.txt
np_fexp <<EOF
.LOGTABLE ABC_DB.all_audiences_${Date};
.BEGIN EXPORT SESSIONS 10;
SELECT
''||COALESCE(TRIM(COL_A),'')||'|'
||COALESCE(TRIM(COL_B),'')|| '|'
||COALESCE(TRIM(WEEK_ID),'')|| '|'
||COALESCE(TRIM(COL_D),'')
from ABC_DB.my_table
;
.EXPORT OUTFILE ${pipeFile}
MODE RECORD
FORMAT TEXT;
.END EXPORT;
.LOGOFF;
.QUIT;
EOF
rc=$?
sed 's/\s\+$//g' ${pipeFile} >> ${filename}.txt
rm -f ${pipeFile}
=================================
I think I need to add something like this but I'm not sure if it is correct or where to add it in the above ksh script:
EXPORT REPORT FILE=${WeekID};
SELECT COALESCE(TRIM(WEEK_ID))
FROM ABC_DB.my_table
GROUP BY 1;
.IF ERRORCODE <> 0 THEN .QUIT 1;
.EXPORT RESET;
WeekID='${WeekID}'
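One shell-side way to get there (a sketch only, untested against Teradata; tmpFile and the fixed working names are mine): export to fixed names first, pull WEEK_ID out of the exported data, then rename, since the question says the value is the same on every row:
pipeFile=My_File.pipe
tmpFile=My_File.tmp.txt
echo "COL_A|COL_B|WEEK_ID|COL_D" > ${tmpFile}
# ... run the np_fexp heredoc exactly as above, exporting to ${pipeFile} ...
sed 's/\s\+$//g' ${pipeFile} >> ${tmpFile}
# WEEK_ID is the 3rd pipe-delimited field and identical on every row
WeekID=$(head -1 ${pipeFile} | cut -d'|' -f3)
mv ${tmpFile} My_File_${WeekID}.txt
rm -f ${pipeFile}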

Teradata - query invalid when importing file

I'm trying to import a CSV into Teradata, but I just get the message "query invalid" with no explanation. I tried changing the backslashes to forward slashes, with no difference.
create volatile table temp (
a varchar(255)
);
.IMPORT vartext ',' FILE = 'F:\xyz\abcdef.csv', skip = 1;
delete from temp;
.QUIET ON
.REPEAT *
USING (
a varchar(255)
)
INSERT INTO temp(a)
VALUES (
:a
);
.QUIT
.LOGOFF

grep entire text based on key words

I have a file consisting of Oracle SELECT statements, as given below.
select count(*) into v_cnt from table;
select
max(num) into v_max
from table2;
select numm from table3;
The desired output is given below:
select count(*) into v_cnt from table;
select
max(num) into v_max
from table2;
I need help grepping the SELECT statements (from the select keyword up to the semicolon) that contain the into keyword. A statement may span any number of lines: it begins with select, ends with a semicolon, and if into appears anywhere between them, the whole statement should be captured. I have been trying grep/awk, but multi-line SELECT statements keep breaking. Any ideas/suggestions? Thanks in advance.
Perl to the rescue!
perl -0x3b -ne 'print if /\binto\b/'
-0x3b sets the input record separator to the character 0x3b, i.e. ;
-n reads the input record by record, running the code for each one
\b matches a word boundary, so all records containing "into" that is not part of a longer word are printed
If there are some commands that don't start with select and you want to skip them, change the condition to if /^select\b/m && /\binto\b/ (which can be incorporated into a single regex if /^select\b.*\binto\b/ms). To make the regexes case insensitive, add the /i modifier: /^select\b/mi.
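For example, combining those options against the sample file (the file name is assumed):
perl -0x3b -ne 'print if /^select\b.*\binto\b/msi' file.txt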
Try this:
tr '\n' '~' < <Input file> | sed 's#;#\n#g' | grep -i 'select.*into.*' | tr '~' '\n'
Demo:
$cat file.txt
select count(*) into v_cnt from table;
select
max(num) into v_max
from table2;
select numm from table3;
$tr '\n' '~' < file.txt | sed 's#;#\n#g' | grep -i 'select.*into.*' | tr '~' '\n'
select count(*) into v_cnt from table
select
max(num) into v_max
from table2
$
With GNU awk for multi-char RS:
$ awk 'BEGIN{RS=ORS=";\n"} /into/' file
select count(*) into v_cnt from table;
select
max(num) into v_max
from table2;
With any awk:
$ awk -v RS=';' -v ORS=';\n' '/into/{sub(/^\n/,""); print}' file
select count(*) into v_cnt from table;
select
max(num) into v_max
from table2;
With plain awk, where issel remembers whether an open select statement has not yet seen its closing ; (note: this variant prints every multi-line select statement and does not filter on into):
{
    # print while inside a statement (a line containing "select" opens one)
    if (issel = (issel || $0 ~ /select/)) print $0;
    # a semicolon on this line closes the statement
    issel = !($0 ~ /;/)
}
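If the into filter is also needed with this style, a buffered variant (my sketch, not part of the original answer) collects each statement and tests it before printing; it works with any awk:
awk '
    # accumulate lines until a semicolon closes the statement
    { buf = buf $0 "\n" }
    /;/ {
        # print the whole statement only if it contains both keywords
        if (buf ~ /select/ && buf ~ /into/) printf "%s", buf
        buf = ""
    }' file.txt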

Search entire SQLite database for ID

I'm using SQLiteStudio to view and test an sqlite database which means I don't have access to fts3 or fts4.
I have an id which I need to find from within the database and have no idea which of the 45 tables it belongs to. Is there a query I can run that will return the table name it belongs to?
There's a way to do this in SQLiteStudio. Note that it does a full scan across all tables and all columns of every table (until it finds the match; then it stops), so it can be very slow. Be warned.
Here's how you do it:
Run SQLiteStudio, open "Custom SQL functions" dialog (it's the one with a blue brick icon).
Add a new function, for example "find", and set its implementation language to Tcl (in the top right corner). Paste the following code as the implementation:
set value [string map [list "'" "''"] $0]
foreach table [db eval {select name from sqlite_master where type = "table"}] {
set cols [list]
foreach infoRow [db getTableInfo $table] {
lappend cols "\[[dict get $infoRow name]\] = '$value'"
}
set res [db eval "SELECT rowid FROM \[$table\] WHERE [join $cols { OR }]"]
if {[llength $res] > 0} {
return "found in table $table in rows with following ROWID: [join $res ,\ ]"
}
}
return "not found"
Use it from an SQL query like this:
select find('your-id');
The function will scan table after table looking for your-id. Once it finds a match, it will print the ROWIDs of all matching rows in that table. It will return something like:
found in table Products in rows with following ROWID: 345, 4647, 32546
Then you can query Products table using those ROWIDs:
select * from Products where rowid in (345, 4647, 32546);
If your-id is not found, the result of find will be not found.
Write this shell script into a file named dbSearchString.sh:
#!/usr/bin/env bash
searchFor="$1"
db="$2"
# .tables may print several table names per line, so iterate over individual words
for table in $(sqlite3 "$db" .tables); do
    output=$(sqlite3 -line "$db" "select * from $table" | grep "$searchFor")
    if [ -n "$output" ]; then
        echo "Found in ${table}:"
        echo "$output"
    fi
done
Then use it like this:
$ dbSearchString.sh "text to search for" database.db

How to selectively dump all innodb tables in a mysql database?

I have a database called av2web, which contains 130 MyISAM tables and 20 InnoDB tables. I want to take a mysqldump of these 20 InnoDB tables and export them to another database as MyISAM tables.
Can you tell me a quick way to achieve this?
Thanks
Pedro Alvarez Espinoza.
If this were a one-off operation, I'd do:
use DB;
show table status where engine='innodb';
and do a rectangular copy/paste from the Name column:
+-----------+--------+---------+------------+-
| Name | Engine | Version | Row_format |
+-----------+--------+---------+------------+-
| countries | InnoDB | 10 | Compact |
| foo3 | InnoDB | 10 | Compact |
| foo5 | InnoDB | 10 | Compact |
| lol | InnoDB | 10 | Compact |
| people | InnoDB | 10 | Compact |
+-----------+--------+---------+------------+-
to a text editor and convert it to a command
mysqldump -u USER DB countries foo3 foo5 lol people > DUMP.sql
and then import after replacing all instances of ENGINE=InnoDB with ENGINE=MyISAM in DUMP.sql
If you want to avoid the rectangular copy/paste magic you can do something like:
use information_schema;
select group_concat(table_name separator ' ') from tables
where table_schema='DB' and engine='innodb';
which will return countries foo3 foo5 lol people
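To skip the copy/paste entirely, the same idea can be scripted from the shell; a sketch, keeping the USER and DB placeholders from above (and assuming GNU sed for -i):
tables=$(mysql -u USER -N -B -e "SELECT GROUP_CONCAT(table_name SEPARATOR ' ') FROM information_schema.tables WHERE table_schema='DB' AND engine='innodb';")
mysqldump -u USER DB $tables > DUMP.sql
# convert the dump to MyISAM before importing it into the other database
sed -i 's/ENGINE=InnoDB/ENGINE=MyISAM/g' DUMP.sql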
I know this is an old question. I just want to share this script, which generates the mysqldump command and also shows how to restore the dump.
The following portion of the script generates the command to create a MySQL backup/dump:
SET SESSION group_concat_max_len = 100000000; -- important when you have lots of tables, to make sure they all get included
SET @userName = 'root'; -- the username you will log in with to generate the dump
SET @databaseName = 'my_database_name'; -- the database to look up the tables in
SET @extraOptions = '--compact --compress'; -- any additional mysqldump options https://dev.mysql.com/doc/refman/5.6/en/mysqldump.html
SET @engineName = 'innodb'; -- the engine name to filter the tables by
SET @filename = '"D:/MySQL Backups/my_database_name.sql"'; -- the full path to write the backup to
-- This query generates the mysqldump command that creates the backup
SELECT
CASE WHEN tableNames IS NULL
THEN 'No tables found. Make sure you set the variables correctly.'
ELSE CONCAT_WS(' ','mysqldump -p -u', @userName, @databaseName, tableNames, @extraOptions, '>', @filename)
END AS command
FROM (
SELECT GROUP_CONCAT(table_name SEPARATOR ' ') AS tableNames
FROM INFORMATION_SCHEMA.TABLES
WHERE table_schema = @databaseName AND ENGINE = @engineName
) AS s;
The following portion of the script generates the command to restore the backup/dump into a specific database on the same or a different server:
SET @restoreIntoDatabasename = @databaseName; -- the name of the database you wish to restore into
SET @restoreFromFile = @filename; -- the full path of the file you want to restore from
-- This query generates the command to restore the generated backup into mysql
SELECT CONCAT_WS(' ', 'mysql -p -u root', @restoreIntoDatabasename, '<', @restoreFromFile);
