I have a file consisting of Oracle select statements, as given below.
select count(*) into v_cnt from table;
select
max(num) into v_max
from table2;
select numm from table3;
The desired output is given below:
select count(*) into v_cnt from table;
select
max(num) into v_max
from table2;
I need help grepping the select statements (from the select keyword to the semicolon) that contain the into keyword. A select statement may span any number of rows: the beginning of the statement is select, the end is a semicolon, and if the text into appears in between, we need to capture the whole statement. I am trying grep/awk but not getting it exactly right; the multi-line select statements keep breaking. Any ideas/suggestions from your end? Thanks in advance.
Perl to the rescue!
perl -0x3b -ne 'print if /\binto\b/'
-0x3b sets the input record separator to the character 0x3b, i.e. ;
-n reads the input record by record, running the code for each
\b matches a word boundary, so all records containing "into" that's not part of a longer word should be printed
If there are some commands that don't start with select and you want to skip them, change the condition to if /^select\b/m && /\binto\b/ (which can be incorporated into a single regex if /^select\b.*\binto\b/ms). To make the regexes case insensitive, add the /i modifier: /^select\b/mi.
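For example, applied to the sample input (assuming it is saved as file.sql):
$ perl -0x3b -ne 'print if /\binto\b/' file.sql
select count(*) into v_cnt from table;
select
max(num) into v_max
from table2;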
Try this:
tr '\n' '~' < <Input file> | sed 's#;#\n#g' | grep -i 'select.*into.*' | tr '~' '\n'
Demo:
$ cat file.txt
select count(*) into v_cnt from table;
select
max(num) into v_max
from table2;
select numm from table3;
$ tr '\n' '~' < file.txt | sed 's#;#\n#g' | grep -i 'select.*into.*' | tr '~' '\n'
select count(*) into v_cnt from table
select
max(num) into v_max
from table2
$
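Note that this drops the trailing semicolons, as the demo shows. If you want them back, one option is to re-append a ; to each matched record before restoring the newlines:
tr '\n' '~' < file.txt | sed 's#;#\n#g' | grep -i 'select.*into' | sed 's/$/;/' | tr '~' '\n'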
With GNU awk for multi-char RS:
$ awk 'BEGIN{RS=ORS=";\n"} /into/' file
select count(*) into v_cnt from table;
select
max(num) into v_max
from table2;
With any awk:
$ awk -v RS=';' -v ORS=';\n' '/into/{sub(/^\n/,""); print}' file
select count(*) into v_cnt from table;
select
max(num) into v_max
from table2;
With plain awk, where the flag issel remembers that a select statement has not yet reached its terminating semicolon (note: this prints every multi-line select statement without filtering on into; a variant with that filter is sketched after the code):
{
    # start a statement on "select", or continue one already open
    if (issel = (issel || $0 ~ /select/)) print $0;
    # the statement stays open only until a line containing ";"
    issel = issel && !($0 ~ /;/)
}
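A portable sketch with the into filter from the question, buffering each statement until a line containing its terminating semicolon and matching into as a plain substring:
awk '{ buf = buf $0 "\n" } /;/ { if (buf ~ /into/) printf "%s", buf; buf = "" }' file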
I have a Unix ksh script that creates a file from a Snowflake table. I need to use the value in one of the columns in this table (same value for all rows) in the generated file name.
Instead of this...
filename=My_File_$(date +%Y%m%d).txt
I want this...
filename=My_File_202120.txt
...taken from the "Week_ID" column in this table:
Col_A   Col_B   Week_ID   Col_D
One     Two     202120    Dog
Three   Four    202120    Cat
Seven   Two     202120    Lizard
Two     Ten     202120    Bird
Here is a mocked-up version of my ksh script. What code do I add, and where, to get the desired file name?
=================================
#!/usr/bin/ksh
set +x
. /temp/users/omega/.tdlogon_prd2
TZ=":US/Pacific"
pipeFile=My_File_${WeekID}.pipe
filename=My_File_${WeekID}
rm -f ${pipeFile} ${filename}.txt
echo "COL_A|COL_B|WEEK_ID|COL_D" > ${filename}.txt
np_fexp <<EOF
.LOGTABLE ABC_DB.all_audiences_${Date};
.BEGIN EXPORT SESSIONS 10;
SELECT
''||COALESCE(TRIM(COL_A),'')||'|'
||COALESCE(TRIM(COL_B),'')|| '|'
||COALESCE(TRIM(WEEK_ID),'')|| '|'
||COALESCE(TRIM(COL_D),'')
from ABC_DB.my_table
;
.EXPORT OUTFILE ${pipeFile}
MODE RECORD
FORMAT TEXT;
.END EXPORT;
.LOGOFF;
.QUIT;
EOF
rc=$?
sed 's/\s\+$//g' ${pipeFile} >> ${filename}.txt
rm -f ${pipeFile}
=================================
I think I need to add something like this but I'm not sure if it is correct or where to add it in the above ksh script:
EXPORT REPORT FILE=${WeekID};
SELECT COALESCE(TRIM(WEEK_ID))
FROM ABC_DB.my_table
GROUP BY 1;
.IF ERRORCODE <> 0 THEN .QUIT 1;
.EXPORT RESET;
WeekID='${WeekID}'
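A minimal sketch of one possible approach, assuming WEEK_ID really is identical on every exported row: run the np_fexp export into a temporary pipe name first, read the third |-delimited field of the first line, and only then build the final file name (the temporary name below is illustrative):
pipeFile=My_File_tmp.pipe     # WeekID is not known yet
# ... run the np_fexp heredoc into ${pipeFile} exactly as above ...
WeekID=$(head -1 ${pipeFile} | cut -d'|' -f3)
filename=My_File_${WeekID}
echo "COL_A|COL_B|WEEK_ID|COL_D" > ${filename}.txt
sed 's/\s\+$//g' ${pipeFile} >> ${filename}.txt
rm -f ${pipeFile}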
When I export tables from an SQLite database to csv files with headers, the tables that are empty return an empty csv file.
I would like to obtain a csv file with just the header in that case.
Here is an example.
Create a database with tblA and tblB, where tblA has no data.
sqlite3 test.sqlite
CREATE TABLE tblA (
ID LONG,
Col01 TEXT
);
CREATE TABLE tblB (
ID LONG,
Col01 TEXT
);
INSERT INTO tblB (ID, Col01)
VALUES
(1, "AAA"),
(2, "BBB");
.quit
Export all tables to csv:
# obtain all data tables from database
tables=`sqlite3 test.sqlite "SELECT tbl_name FROM sqlite_master WHERE type='table' and tbl_name not like 'sqlite_%';"`
for i in $tables ; do
sqlite3 -header -csv test.sqlite "select * from $i;" > "$i.csv" ;
done
Check the content of these csv files:
[tmp] : head *csv
==> tblA.csv <==
==> tblB.csv <==
ID,Col01
1,AAA
2,BBB
I would like to obtain this instead :
[tmp] : head *csv
==> tblA.csv <==
ID,Col01
==> tblB.csv <==
ID,Col01
1,AAA
2,BBB
One option is to utilize the table_info pragma to get the column names, and then just append the rows' content:
for i in $tables ; do
sqlite3 test.sqlite "pragma table_info($i)" | cut -d '|' -f 2 | paste -s -d, > "$i.csv"
sqlite3 -csv test.sqlite "select * from $i;" >> "$i.csv"
done
Result:
$ cat tblA.csv
ID,Col01
$ cat tblB.csv
ID,Col01
1,AAA
2,BBB
Combining @Shawn's comment with https://stackoverflow.com/a/27710284/788700:
# do processing:
sqlite3 -init script.sql test.sqlite .exit
# if processing resulted in empty csv file, write header to it:
test -s tblA.csv || sqlite3 test.sqlite "select group_concat(name, ',') from pragma_table_info('tblA')" > tblA.csv
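The same idea generalizes to the whole export loop; a sketch reusing the $tables list from above (pragma functions such as pragma_table_info() need SQLite 3.16 or later):
for i in $tables ; do
    test -s "$i.csv" || sqlite3 test.sqlite "select group_concat(name, ',') from pragma_table_info('$i')" > "$i.csv"
done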
I am trying to generate insert statements in a Unix shell script and execute them using the sqlplus command. I wrote the code below.
awk 'BEGIN { for (i = 1; i < 3; i++)
printf "INSERT INTO EMPLOYEE VALUES(%s,%d,%d);\n","\047TEST-FROM-UNIX\047",$i,$i}'| sqlplus -s username/passwrd
Is there any other good practice for doing the insertion from Unix?
It generates the output
INSERT INTO EMPLOYEE VALUES('TEST-FROM-UNIX',0,0);
INSERT INTO EMPLOYEE VALUES('TEST-FROM-UNIX',0,0);
But the required output is
INSERT INTO EMPLOYEE VALUES('TEST-FROM-UNIX',1,1);
INSERT INTO EMPLOYEE VALUES('TEST-FROM-UNIX',2,2);
Could someone please tell me what change in the above code will achieve this? Thanks for your time.
Drop the $ from i. Inside a BEGIN block there is no input record yet, so $i refers to a field of an empty record and evaluates to an empty string, which %d prints as 0; the plain variable i is what holds the loop counter:
awk 'BEGIN { for (i = 1; i < 3; i++)
printf "INSERT INTO EMPLOYEE VALUES(%s,%d,%d);\n","\047TEST-FROM-UNIX\047",i,i}'
If you want to do it with a variable maximum value, then you have to create a script file with this code:
awk -v rows=$1 'BEGIN { for (i = 1; i < rows; i++)
printf "INSERT INTO EMPLOYEE VALUES(%s,%d,%d);\n","\047TEST-FROM-UNIX\047",i,i}'
I'm using SQLiteStudio to view and test an sqlite database which means I don't have access to fts3 or fts4.
I have an id which I need to find from within the database and have no idea which of the 45 tables it belongs to. Is there a query I can run that will return the table name it belongs to?
There's a solution for this in SQLiteStudio. Note that it does a full scan across all tables and all columns in every table (until it finds the match, at which point it stops), so this can be very slow. Be warned.
Here's how you do it:
Run SQLiteStudio and open the "Custom SQL functions" dialog (it's the one with a blue brick icon).
Add a new function, for example "find", and set its implementation language to Tcl (in the top right corner). Paste the following code as the implementation.
# escape single quotes in the searched value
set value [string map [list "'" "''"] $0]
# scan every table in the database
foreach table [db eval {select name from sqlite_master where type = "table"}] {
    # build one "[column] = 'value'" test per column of this table
    set cols [list]
    foreach infoRow [db getTableInfo $table] {
        lappend cols "\[[dict get $infoRow name]\] = '$value'"
    }
    set res [db eval "SELECT rowid FROM \[$table\] WHERE [join $cols { OR }]"]
    if {[llength $res] > 0} {
        return "found in table $table in rows with following ROWID: [join $res ,\ ]"
    }
}
return "not found"
Use it from an SQL query like this:
select find('your-id');
The function will scan table after table to find your-id. Once it finds a match, it will print the ROWIDs of all rows that matched your-id. It will return something like:
found in table Products in rows with following ROWID: 345, 4647, 32546
Then you can query the Products table using those ROWIDs:
select * from Products where rowid in (345, 4647, 32546);
If your-id is not found, the result of find will be: not found.
Write this shell script into a file named dbSearchString.sh:
#!/bin/sh
searchFor="$1"
db="$2"
# .tables prints several table names per line, so iterate over each word
for table in $(sqlite3 "$db" .tables); do
    # -line mode prints one "column = value" pair per line, so grep sees every value
    output=$(sqlite3 -line "$db" "select * from $table" | grep "$searchFor")
    if [ -n "$output" ]; then
        echo "Found in ${table}:"
        echo "$output"
    fi
done
Then use it like this:
$ dbSearchString.sh "text to search for" database.db
I would like to remove the comma , at the end of each line in my file. How can I do it other than by using the substring function in awk?
Sample Input:
SUPPLIER_PROC_ID BIGINT NOT NULL,
BTCH_NBR INTEGER NOT NULL,
RX_BTCH_SUPPLIER_SEQ_NBR INTEGER NOT NULL,
CORRN_ID INTEGER NOT NULL,
RX_CNT BYTEINT NOT NULL,
DATA_TYP_CD BYTEINT NOT NULL,
DATA_PD_CD BYTEINT NOT NULL,
CYC_DT DATE NOT NULL,
BASE_DT DATE NOT NULL,
DATA_LOAD_DT DATE NOT NULL,
DATA_DT DATE NOT NULL,
SUPPLIER_DATA_SRC_CD BYTEINT NOT NULL,
RX_CHNL_CD BYTEINT NOT NULL,
MP_IMS_ID INTEGER NOT NULL,
MP_LOC_ID NUMERIC(3,0),
MP_IMS_ID_ACTN_CD BYTEINT NOT NULL,
NPI_ID BIGINT,
Try doing this:
awk '{print substr($0, 1, length($0)-1)}' file.txt
This is more generic: it removes not just a final comma, but whatever the last character is.
If you want to remove only a trailing comma with awk:
awk '{gsub(/,$/,""); print}' file.txt
You can use sed:
sed 's/,$//' file > file.nocomma
and to remove whatever the last character is:
sed 's/.$//' file > file.nolast
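With GNU sed you can also edit the file in place (add a suffix, e.g. -i.bak, to keep a backup):
sed -i 's/,$//' file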
An awk approach based on RS:
awk '1' RS=',\n' file
or:
awk 'BEGIN{RS=",\n"}1' file
This last example will be valid for any char before newline:
awk '1' RS='.\n' file
Note: dot . matches any character except line breaks.
Explanation
awk lets us use a different record (line) separator regex; we just need to include the comma before the line break (or a dot for any character) in the separator used for the input, RS. Note that a multi-character or regex RS is a GNU awk extension, though some other implementations (such as mawk) support it as well.
Note: what does that 1 mean?
Short answer: it's just a shortcut to avoid writing the print statement.
In awk, when a condition matches, the default action is to print the input line. Example:
$ echo "test" |awk '1'
test
That's because 1 is always true, so this expression is equivalent to:
$ echo "test"|awk '1==1'
test
$ echo "test"|awk '{if (1==1){print}}'
test
Documentation
Check Record Splitting with Standard awk and Output Separators.
This Perl code removes commas at the end of the line:
perl -pe 's/,$//' file > file.nocomma
This variation still works if there is whitespace after the comma:
perl -lpe 's/,\s*$//' file > file.nocomma
This variation edits the file in-place:
perl -i -lpe 's/,\s*$//' file
This variation edits the file in-place, and makes a backup file.bak:
perl -i.bak -lpe 's/,\s*$//' file