I used the following command:
SELECT json_extract(data,'$.address') FROM data;
and exported the output as a CSV file.
In the CSV file, a single field (column) is saved across two lines. For example:
"71 CHOA CHU KANG LOOP
NORTHVALE"
How can I save the field (column) as a single line? That is, I don't want newline characters inside a field (column). For example:
"71 CHOA CHU KANG LOOP NORTHVALE"
Thanks.
Just replace the newline character:
select replace(json_extract(data,'$.address'), char(10), '') from data;
This will catch the newline character ('\n'). If you want to remove '\r' and '\r\n' too:
select replace(
    replace(json_extract(data,'$.address'), char(10), ''),
    char(13),
    ''
) from data;
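Note that replacing the newline with an empty string runs the two lines together ("...LOOPNORTHVALE"). To get the space-joined result shown above, a variant of the same query replaces the newline with a space instead; \r is removed first, so a \r\n pair collapses to a single space:
select replace(
    replace(json_extract(data,'$.address'), char(13), ''),
    char(10),
    ' '
) from data;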
I am new to Progress 4GL. I have a CSV file with data in the first two rows: the first row is the list of users, and the second row is the users to be deactivated.
In my program, if the flag is set to yes, the program should read the second row of the CSV file and store it in a temp-table. Please take a look at what I have tried; it does not focus only on the second row of the CSV and instead takes all the data, including the first row.
I would really appreciate it if you could tell me how to read just the second row of the CSV file and parse its data using Progress 4GL.
DEFINE TEMP-TABLE tt_sec7Role
    FIELD ttsec_role AS CHARACTER.
DEFINE VARIABLE v_dataline AS CHARACTER NO-UNDO.
DEFINE VARIABLE v_count AS INTEGER NO-UNDO.

EMPTY TEMP-TABLE tt_sec7Role.

input from "C:\Users\ast\Desktop\New folder\cit.csv".
repeat:
    import unformatted v_dataline.
    if v_dataline <> '' then
    do:
        do v_count = 1 to NUM-ENTRIES(v_dataline,','):
            create tt_sec7Role.
            ttsec_role = entry(v_count,v_dataline,',').
        end.
    end. /* if v_dataline <> '' then */
end. /*repeat*/
input close.

v_count = 0.
FOR EACH tt_sec7Role:
    v_count = v_count + 1.
END.
MESSAGE v_count.
If you simply need to count rows, just add an integer and increase it after each import statement:
define variable counter as integer no-undo.

input from "C:\Users\ast\Desktop\New folder\cit.csv".
repeat:
    import unformatted v_dataline.
    counter = counter + 1.
    if v_dataline <> '' then
    do:
        // If you only want to do this on line 2
        if counter = 2 then
        do v_count = 1 to NUM-ENTRIES(v_dataline,','):
            create tt_sec7Role.
            ttsec_role = entry(v_count,v_dataline,',').
        end.
    end. /* if v_dataline <> '' then */
end. /*repeat*/
input close.
Once you determine that you should read that second row, do another import, copy that part of the data to your temp-table, and at the end just cycle through your temp-table and export the fields with a comma as a delimiter, as sketched below.
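A minimal sketch of that export step in Progress 4GL, assuming the temp-table from the question and a hypothetical output file name:
output to "C:\Users\ast\Desktop\New folder\out.csv".
for each tt_sec7Role:
    export delimiter "," ttsec_role.
end.
output close.
With a single field the delimiter hardly matters; with more fields you would list them all after the DELIMITER option.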
I index a file in a SQLite DB.
I create my table with this:
CREATE TABLE Record (RecordID INTEGER,Data TEXT,PRIMARY KEY (RecordID))
I read a file, and for each line I add a row to the table.
Each line can have binary data at the end. That is not a problem for most characters, but the \0 character causes a problem.
Say I have a line like this: "My data \0with binary".
Getting at the data after the \0 does not work (SELECT substr(Data, 11, 5) FROM Record returns an empty string, while SELECT substr(Data, 4, 10) FROM Record returns data).
Searching also fails: SELECT Data FROM Record WHERE Data LIKE '%binar%' returns 0 rows.
How can I solve this problem? I tried replacing \0 with another character sequence, but that is not a good idea because the sequence could also occur in my file.
Thank you
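For illustration, a minimal sketch (assuming the sqlite3 shell and the table above) of how an embedded NUL behaves, and how casting to BLOB sidesteps it, since BLOB functions operate on raw bytes:
-- a sample value with an embedded NUL byte: "My data \0with"
INSERT INTO Record (RecordID, Data)
VALUES (1, CAST(x'4D792064617461200077697468' AS TEXT));

SELECT length(Data) FROM Record;                -- 8: text functions stop at the NUL
SELECT length(CAST(Data AS BLOB)) FROM Record;  -- 13: the blob sees every byte
SELECT hex(CAST(Data AS BLOB)) FROM Record;     -- the 00 byte is visible in the hex
SELECT substr(CAST(Data AS BLOB), 10, 4) FROM Record;  -- bytes after the NUL ("with")
Storing such lines in a BLOB column (or casting on read, as above) avoids the truncation without substituting characters that might collide with the file's own data.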
I have a table called Players with two columns, Name and PlayerID. I am using SQLite under DB Browser for SQLite.
Unfortunately, all my players' names have something like a "\n" (a newline) at the end of the Name.
Ex:
"Mark
"
I tried to use Update & Replace on all the names with the following query (I have about 450 rows in the table):
UPDATE Players
SET Name = REPLACE(Name,CHAR(10),'')
WHERE PlayerID <= 500
When I execute something like:
SELECT * FROM Players
WHERE Players.Name LIKE 'Mark'
it'll return no rows because of the trailing newline: this 'Mark' has no "\n", so it won't be found.
If I execute:
SELECT * FROM Players
WHERE Players.Name LIKE 'Mark
'
it will return my player (after Mark, I pressed Enter).
I want to change all my rows from this format
"Mark
"
to this
"Mark"
and save all the changes.
How can I solve my problem? What's wrong?
Solution
The problem was that I had \r at the end of each string, not \n, so I had to use CHAR(13) instead of CHAR(10).
UPDATE Players
SET Name = REPLACE(Name, CHAR(13), '')
Also, to remove all line feed characters (\n) I used:
UPDATE Players
SET Name = REPLACE(Name, CHAR(10), '')
Moreover, to remove all spaces (' ') I used:
UPDATE Players
SET Name = REPLACE(Name, ' ', '')
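If the stray characters only ever appear at the ends of the name, the three updates can be collapsed into one with TRIM, which strips any of the listed characters from both ends (and, unlike REPLACE(Name, ' ', ''), leaves interior spaces intact). A sketch against the same table:
UPDATE Players
SET Name = TRIM(Name, CHAR(13) || CHAR(10) || ' ');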
I am working on a project with a database. This database is very simple: there is only one table with 2 columns, id (int) and text (string).
To fill this database I want to create a .sql script file.
(This database isn't created inside an Android project, because I want an already-filled database to insert into my Android project.)
I want my script to create the table and then read a .txt file with a string value (for the text column) on each row.
For each row, it should insert the string value into the table.
I am not very familiar with SQLite and SQL in general.
I already found a way to auto-increment the id using an iterator (but I didn't test it yet), but I couldn't find out how to read a .txt file line by line.
So my question is: is it possible to read a .txt file line by line in a SQLite script?
And if it is, could you please tell me how to do it.
Here's a solution in pure SQLite (it uses readfile(), which is provided by the sqlite3 shell):
CREATE TEMP TABLE input (value STRING);
INSERT INTO input VALUES (TRIM(readfile('input.txt'), char(10)));

CREATE TABLE lines (s STRING);

WITH RECURSIVE
    nn (s, rest) AS (
        SELECT
            (SELECT SUBSTR(input.value, 0, INSTR(input.value, char(10))) FROM input),
            (SELECT SUBSTR(input.value, INSTR(input.value, char(10)) + 1) FROM input)
        UNION ALL
        SELECT
            CASE INSTR(nn.rest, char(10))
                WHEN 0 THEN nn.rest
                ELSE SUBSTR(nn.rest, 0, INSTR(nn.rest, char(10)))
            END,
            CASE INSTR(nn.rest, char(10))
                WHEN 0 THEN ''
                ELSE SUBSTR(nn.rest, INSTR(nn.rest, char(10)) + 1)
            END
        FROM nn
        WHERE LENGTH(nn.rest) > 0
    )
INSERT INTO lines (s)
SELECT nn.s FROM nn;

DROP TABLE input;
A few subtleties here:
sqlite does not have a \n escape, so you have to use char(10)
this doesn't work well for mixed newlines or \r\n newlines (though you can adjust some + 1s to + 2s and char(10) to char(13) || char(10))
most of the magic is in the recursive union in the middle, which nibbles off a line at a time
note that I'm using this approach to solve advent of code -- https://github.com/anthonywritescode/aoc2020
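To try it, assuming the script above is saved as split.sql (a hypothetical file name; readfile() requires the sqlite3 shell):
$ printf 'one\ntwo\nthree' > input.txt
$ sqlite3 test.db < split.sql
$ sqlite3 test.db 'SELECT s FROM lines;'
one
two
three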
SQLite is an embedded database; it is designed to be used together with some 'real' programming language.
There are no functions to access and parse text files.
You have to write your own script in whatever language you like, or use some existing tool.
If there is a character that is guaranteed not to occur in the text file, you can use the sqlite3 command-line shell and a temporary, one-column table for importing:
CREATE TEMP TABLE i(txt);
.separator ~
.import MyFile.txt i
INSERT INTO TheRealTable(text) SELECT txt FROM i; -- assumes id is autoincrementing
DROP TABLE i;
I think the simplest way is to work on the txt file and convert it to a CSV file. Then you can import it directly with sqlite3 or through a programming language.
sqlite> .mode csv
sqlite> .import file_name.csv table_name
You can use a BufferedReader for that. The code could look like:
InputStream in = context.getResources().openRawResource( R.raw.your_txt_file );
BufferedReader reader = new BufferedReader( new InputStreamReader( in ) );
String line = null;
while( null != ( line = reader.readLine() ) ){
    doStuffWithLine( line );
}
reader.close();
Yes, reading a .txt file line by line in a SQLite script is possible. But you'll need to use an extension. Specifically, sqlean-fileio can do the job.
Its fileio_scan(path) function reads the file specified by path line by line without loading the whole file into memory.
For example:
$ echo 'one' > data.txt
$ echo 'two' >> data.txt
$ echo 'three' >> data.txt
create table data(id integer primary key, txt text);
insert into data(txt)
select value from fileio_scan('data.txt');
select * from data;
┌────┬───────┐
│ id │ txt │
├────┼───────┤
│ 1 │ one │
│ 2 │ two │
│ 3 │ three │
└────┴───────┘
That's it!
So my question is : Is it possible to read a .txt file line by line in a SQLite script ?
Yes.
And if it is, could you please tell me how to do it.
There we go:
Pseudo-code algorithm:
Open the file.
Read line by line and insert new row in the database.
Close resources and commit transactions.
1) Open the file
InputStream instream = new FileInputStream("myfilename.txt");
InputStreamReader inputreader = new InputStreamReader(instream);
BufferedReader buffreader = new BufferedReader(inputreader);
2) Read line by line and insert new row in database
List<String> nameList = new ArrayList<>();
String line;
do {
    line = buffreader.readLine();
    if (line != null){
        nameList.add(line);
    }
} while (line != null);
Now you should insert all the names into the database:
storeNamesInDB(nameList);
Where
private void storeNamesInDB(List<String> nameList){
    String sql = "INSERT INTO table (col1) VALUES (?)";
    db.beginTransaction();
    SQLiteStatement stmt = db.compileStatement(sql);
    for (int i = 0; i < nameList.size(); i++) {
        stmt.bindString(1, nameList.get(i));
        stmt.execute();
        stmt.clearBindings();
    }
    db.setTransactionSuccessful();
    db.endTransaction();
}
3) Close resources
Don't forget to close resources:
instream.close();
inputreader.close();
DISCLAIMER!
You shouldn't copy & paste this code. Replace each variable name and some of the instructions with ones that make sense in your project. This is just an idea.
I am having trouble importing into SQLite.
I am exporting a table from SQL Server into a flat file encoded in UTF-8, and then trying to import the flat file into an SQLite DB. The DB is UTF-8 encoded.
These lines are troublesome (tab delimited, line ends with CRLF):
ID w posid def
1234 bracket 40 "(" and ")" spec...
1234 bracket 40 Any of the characters "(", ")", "[", "]", "{", "}", and, in the area of computer languages, "<" and ">".
Error:
unescaped " character
I have tried replacing each quote (") with a doubled quote (""), but that still doesn't work.
Import settings: tab separator
.separator "\t"
.import data.txt words
SQLite table schema:
CREATE TABLE words (ID integer NOT NULL, w TEXT NOT NULL, posid integer NOT NULL, def TEXT NOT NULL);
Update:
Somehow, adding a hash at the beginning of the def field in SQL Server worked:
update words set def = '#' + def
Not sure why that is. It worked, but it added an unwanted character to the field.
It turned out that import can mess up when there are newline characters, quotes, or commas.
One solution is to replace these characters with other character sequences or character codes (e.g. char(1), char(2), ...) before you run the import, after making sure the fields don't already contain those sequences or codes. For example, replace quotes with --, import, then replace -- with quotes again. I have another table with some text fields that contain newline characters, and this solution seems to work there.
before import:
update [table] set comment = REPLACE(comment, CHAR(13), '-*-')
update [table] set comment = REPLACE(comment, CHAR(10), '%-$-%')
update [table] set comment = REPLACE(comment, '"', '%-&-%')
after import:
update [table] set comment = REPLACE(comment, '-*-', CHAR(13))
update [table] set comment = REPLACE(comment, '%-$-%', CHAR(10))
update [table] set comment = REPLACE(comment, '%-&-%', '"')
To do that without changing the input data, use ascii mode and set the column separator to tab and the row separator to CRLF. In ascii mode the shell performs no quote processing, so embedded " characters are imported verbatim.
.mode ascii
.separator "\t" "\r\n"
See my answer to this other question for an explanation of why.
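Putting it together for the file and table from the question, the import session would look something like this (a sketch):
sqlite> .mode ascii
sqlite> .separator "\t" "\r\n"
sqlite> .import data.txt words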