What does TEXT (5) in CREATE TABLE SQL mean when I use SQLite? [duplicate] - sqlite

This question already has answers here:
sqlite allows char(p) inputs exceeding length p
(2 answers)
Varchar Length? SQLite in Android
(1 answer)
String data type in sqlite
(2 answers)
No limitation on SQL column data type
(1 answer)
Closed 10 days ago.
Code A creates an SQLite table; it was generated by SQLite Studio 3.4.3.
1: What does TEXT (5) mean in SQLite? Does it mean that the length of the linkTitle field can't exceed 5 letters? What will happen if I add a record with 20 letters in the linkTitle field?
2: The default value "The title of saved links" of the linkTitle field already exceeds 5 letters. What will happen?
Code A
CREATE TABLE myTable (
id INTEGER PRIMARY KEY ASC AUTOINCREMENT,
linkTitle TEXT (5) NOT NULL
DEFAULT [The title of saved links],
linkSaved TEXT (10) NOT NULL
);

It has no effect; the column type is simply treated as TEXT. SQLite accepts much of the SQL used by other databases, hence the length specifier is accepted, but it is ignored other than syntactically (omit either parenthesis, or specify a non-numeric value, and a syntax error will be issued).
No length restriction is imposed by the specifier.
If you were to use:-
INSERT INTO myTable (linkTitle,linkSaved) VALUES
('The quick brown fox jumped over the lazy fence','The slow grey elephant could not jump over the fence so crashed though it'),
(100,zeroblob(10)),
(10.1234567,'10.1234567')
;
SELECT * FROM myTable;
The result would be:-
This also demonstrates that you can store any type of data in any type of column (with the exception of an alias of the rowid, e.g. the id column, or the rowid itself), which other databases typically do not allow.
Furthermore, the column type itself is highly flexible: you could specify virtually any type, e.g. the_column ridiculoustype, and SQLite will assign the type NUMERIC (the default/fall-through type) in this case (see section 3.1 in the link below for the rules for assigning a type affinity).
You should perhaps have a read of https://www.sqlite.org/datatype3.html
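The behaviour described above can be checked with a short sketch using Python's built-in sqlite3 module (illustrative only; the bracketed default from Code A is rewritten here as a standard string literal):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE myTable ("
    " id INTEGER PRIMARY KEY ASC AUTOINCREMENT,"
    " linkTitle TEXT (5) NOT NULL DEFAULT 'The title of saved links',"
    " linkSaved TEXT (10) NOT NULL)")

# A 46-character title is stored untruncated despite TEXT (5)
con.execute("INSERT INTO myTable (linkTitle, linkSaved) VALUES (?, ?)",
            ("The quick brown fox jumped over the lazy fence", "x"))
(title_len,) = con.execute(
    "SELECT length(linkTitle) FROM myTable WHERE id = 1").fetchone()
print(title_len)  # 46

# A blob survives in a TEXT column; a number is coerced to text by TEXT affinity
con.execute("INSERT INTO myTable (linkTitle, linkSaved) "
            "VALUES (100, zeroblob(10))")
print(con.execute("SELECT typeof(linkTitle), typeof(linkSaved) "
                  "FROM myTable WHERE id = 2").fetchone())  # ('text', 'blob')
```

Note that the number 100 does get converted to the text '100' by TEXT affinity, while the blob is stored as a blob regardless of the declared column type.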

Related

How to convert a number with decimal values to a float in PL/SQL?

The issue is that I need to insert this number into json, and because the number contains a comma, json becomes invalid. A float would work because it contains a period not a comma.
I have tried using replace(v_decimalNumber,',','.') and it kind of works, except that the json property is converted to a string. I need it to remain some type of a numerical value.
How can this be achieved?
I am using Oracle 11g.
You just need the to_number() function.
select to_number(replace('1,2', ',', '.')) float_nr from dual;
Result:
FLOAT_NR
1.2
Note that if your number ends in .0, like 1.0, the function will drop the fractional part and return just 1.
The data type of v_decimalNumber is some character format, as it can contain commas (,). Your contention is that it contains a number once the commas are removed. However, there is no such thing until that contention has been validated: being character data, it can hold any character(s) at all, subject only to a length restriction. Consider, as an example, a spreadsheet column that should contain numeric data: when it doesn't apply, users will often type N/A into it, and Oracle will happily load that into your v_decimalNumber. (And that's one of many, many ways non-numeric data can get into your column.) So before attempting to process the value as numeric, you must validate that it is in fact valid numeric data. The following demonstrates one such way.
with some_numbers (n) as
  ( select '123,4456,789.00' from dual union all
    select '987654321.00'    from dual union all
    select '1928374655'      from dual union all
    select '1.2'             from dual union all
    select '.1'              from dual union all
    select '1..1'            from dual union all
    select 'N/A'             from dual
  )
, rx as (select '^[0-9]*\.?[0-9]*$' regexp from dual)
select n
     , case when regexp_like(replace(n,',',null), regexp)
            then to_number(replace(n,',',null))
            else null
       end num_value
     , case when regexp_like(replace(n,',',null), regexp)
            then null
            else 'Not valid number'
       end msg
  from some_numbers, rx;
Takeaway: never trust a character-type column to contain anything more specific than arbitrary characters. Always validate first, then put the data into appropriately typed columns.
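The same validate-before-convert idea can be sketched outside the database, e.g. in Python with the standard re module (illustrative only; the Oracle query above remains the authoritative version, and the empty-string guard is an addition):

```python
import re

# Same pattern as the Oracle rx CTE: optional digits, optional dot, optional digits
NUMERIC = re.compile(r'^[0-9]*\.?[0-9]*$')

def to_number(s):
    """Strip grouping commas, validate, then convert; None when not numeric."""
    stripped = s.replace(',', '')
    # Guard against the empty string, which the bare regex would also match
    if stripped and NUMERIC.match(stripped):
        return float(stripped)
    return None

for raw in ['123,4456,789.00', '1.2', '.1', '1..1', 'N/A']:
    print(raw, '->', to_number(raw))
# '1..1' and 'N/A' yield None; the rest convert cleanly
```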

random unique integers avoid duplicates before insert

Given a need to store 4-digit random unique integers, how would I efficiently insert a large quantity of new numbers?
If the values are created outside SQLite, there's a chance some of the values already exist in the database.
What would be the best method for such a task?
You could make the column where the numbers will be stored UNIQUE and use INSERT OR IGNORE on a SINGLE INSERT with multiple values (for efficiency). e.g. :-
INSERT OR IGNORE INTO rndm_id VALUES
('0001'),('0027'),('9999'),('0412'),('2108'),
('0001'), -- duplicate will be skipped
('3085') -- and so on
;
Note that the values have been enclosed in quotes to preserve all 4 digits (leading zeros included). The table was defined using :-
CREATE TABLE IF NOT EXISTS rndm_id (myid TEXT UNIQUE);
If you are considering a large number of values then you may need to consider :-
Maximum Length Of An SQL Statement
The maximum number of bytes in the text of an SQL statement is limited
to SQLITE_MAX_SQL_LENGTH which defaults to 1000000. You can redefine
this limit to be as large as the smaller of SQLITE_MAX_LENGTH and
1073741824.
If an SQL statement is limited to be a million bytes in length, then
obviously you will not be able to insert multi-million byte strings by
embedding them as literals inside of INSERT statements. But you should
not do that anyway. Use host parameters for your data. Prepare short
SQL statements like this:
INSERT INTO tab1 VALUES(?,?,?); Then use the sqlite3_bind_XXXX()
functions to bind your large string values to the SQL statement. The
use of binding obviates the need to escape quote characters in the
string, reducing the risk of SQL injection attacks. It also runs
faster since the large string does not need to be parsed or copied as
much.
The maximum length of an SQL statement can be lowered at run-time
using the sqlite3_limit(db,SQLITE_LIMIT_SQL_LENGTH,size) interface.
Limits In SQLite
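The host-parameter advice quoted above maps directly onto parameter binding in client libraries. A sketch with Python's sqlite3 module (table name as in the answer; purely illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE IF NOT EXISTS rndm_id (myid TEXT UNIQUE)")

ids = ['0001', '0027', '9999', '0412', '2108', '0001', '3085']  # '0001' repeated
# One short prepared statement, values bound per row:
# no statement-length limit concerns, no quote escaping needed
con.executemany("INSERT OR IGNORE INTO rndm_id VALUES (?)",
                [(i,) for i in ids])

(count,) = con.execute("SELECT count(*) FROM rndm_id").fetchone()
print(count)  # 6 -- the duplicate '0001' was ignored
```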
Considering the comment
Is there a way I can do a select to give a set of new values which I
can then use to do an insert later?
So assuming that you wanted 1000 4-digit unique random values, then the following may suffice :-
DROP TABLE IF EXISTS save_for_later; -- Drop the table
CREATE TEMP TABLE IF NOT EXISTS save_for_later (four_digit_random_value UNIQUE); -- Create a temporary table
-- Create a table with 1500 random rows
WITH RECURSIVE cte1 AS (
SELECT CAST((abs(random() % 10)||abs(random() % 10)||abs(random() % 10)||abs(random() % 10)) AS TEXT)
UNION ALL SELECT CAST((abs(random() % 10)||abs(random() % 10)||abs(random() % 10)||abs(random() % 10)) AS TEXT)
FROM cte1 LIMIT 1500
)
INSERT OR IGNORE INTO save_for_later SELECT * FROM cte1;
-- Later on extract the 1000 required rows.
SELECT * FROM save_for_later LIMIT 1000;
Suspected requirement
If the question were: how can I insert a set number (300 here) of random 4-digit unique values into a table (master) that already has data, where the new values must also be unique with respect to the existing values,
Then the following could do that (see note re limitations) :-
DROP TABLE IF EXISTS master; --
CREATE TABLE IF NOT EXISTS master (random_value TEXT UNIQUE);
-- Master (existing) Table populated with some values
INSERT OR IGNORE INTO master VALUES
('0001'),('0027'),('9999'),('0412'),('2108'),
('0001'), -- duplicate will be skipped
('3085') -- and so on
;
SELECT * FROM master; -- Result 1 show what's in the master table
-- Create a table to save the values for later
DROP TABLE IF EXISTS save_for_later; -- Drop the table
CREATE TEMP TABLE IF NOT EXISTS save_for_later (four_digit_random_value UNIQUE); -- Create a temporary table
-- Populate the values to be saved for later, excluding any values that already exist
-- 1500 rows perhaps excessive but very likely to result in 300 unique values
WITH RECURSIVE cte1(rndm) AS (
SELECT
CAST((abs(random() % 10)||abs(random() % 10)||abs(random() % 10)||abs(random() % 10)) AS TEXT)
UNION ALL
SELECT
CAST((abs(random() % 10)||abs(random() % 10)||abs(random() % 10)||abs(random() % 10)) AS TEXT)
FROM cte1
LIMIT 1500 --<<<<<< LIMIT important otherwise would be infinite
)
INSERT OR IGNORE INTO save_for_later
SELECT * FROM cte1
WHERE rndm NOT IN(SELECT * FROM master)
;
-- Later on extract the required rows (300 here) and insert them.
INSERT INTO master
SELECT * FROM save_for_later
LIMIT 300;
SELECT * FROM master; -- Should be 6 original/existing rows + 300 so 306 rows (although perhaps a chance that 300 weren't generated)
Note with 4 numerics there is a limitation of 10,000 possible values (0000-9999), so the more values that exist in the original table the greater the chance that there will be issues finding values that would be unique.
The above would produce two results: the first shows the master table before generation of the new values; the second shows the table after adding the new values (the original 6 rows + the new 300 rows).
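Because only 10,000 4-digit values exist in total, an alternative sketch (outside SQL, in Python) is to sample without replacement from the full range, excluding values already present in master. This guarantees exactly 300 new unique values, avoiding the "perhaps 300 weren't generated" caveat of the over-generation approach:

```python
import random
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE IF NOT EXISTS master (random_value TEXT UNIQUE)")
con.executemany("INSERT OR IGNORE INTO master VALUES (?)",
                [('0001',), ('0027',), ('9999',), ('0412',), ('2108',), ('3085',)])

existing = {row[0] for row in con.execute("SELECT random_value FROM master")}
# Every 4-digit string not yet used; sample 300 of them without replacement
candidates = [f"{n:04d}" for n in range(10000) if f"{n:04d}" not in existing]
new_values = random.sample(candidates, 300)

con.executemany("INSERT INTO master VALUES (?)", [(v,) for v in new_values])
(count,) = con.execute("SELECT count(*) FROM master").fetchone()
print(count)  # 306 -- 6 existing rows + exactly 300 new ones
```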

Why is the char(10) symbol at the end of a line lost? [duplicate]

This question already has an answer here:
SQLite FireDAC trailing spaces
(1 answer)
Closed 5 years ago.
In my case, the $A character at the end of a line is lost when I read the field in Delphi. I think the problem is in the FireDAC components. I use Delphi 10.1 Berlin and SQLite (I don't know the version). When I run the program below, the message shows 3!=4.
This is code:
FD := TFDQuery.Create(nil);
FD.Connection := FDConnection1;
FD.ExecSQL('create table t2 (f2 text)');
FD.ExecSQL('insert into t2 values(''123''||char(10))');
FD.Open('select f2, length(f2) as l from t2');
ShowMessage(IntToStr(Length(FD.FieldByName('f2').AsString))+'!='+FD.FieldByName('l').AsString);
The last $A character is lost.
Could somebody explain this strange behaviour?
You need to turn off the TFDQuery.FormatOptions.StrsTrim property:
Controls the removing of trailing spaces from string values and zero bytes from binary values
...
For SQLite, this property is applied to all string columns
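It is easy to confirm that SQLite itself stores the trailing newline and that the trimming happens in the client layer. A sketch of the same insert and query using Python's sqlite3 (which does not trim strings):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t2 (f2 text)")
# Same insert as the Delphi code: '123' plus a trailing newline (char(10))
con.execute("insert into t2 values('123'||char(10))")

value, length = con.execute("select f2, length(f2) from t2").fetchone()
print(repr(value), length)  # '123\n' 4 -- client length and SQL length agree
```

Since both lengths are 4 here, the 3!=4 result in the question can only come from the FireDAC client trimming the value after it leaves SQLite, which is exactly what StrsTrim controls.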

SUM totals by FOR ALL ENTRIES itab keys

I want to execute a SELECT query on a database table that has 6 key fields, let's assume they are keyA, keyB, ..., keyF.
As input parameters to my ABAP function module I do receive an internal table with exactly that structure of the key fields, each entry in that internal table therefore corresponds to one tuple in the database table.
Thus I simply need to select all tuples from the database table that correspond to the entries in my internal table.
Furthermore, I want to aggregate an amount column in that database table in exactly the same query.
In pseudo SQL the query would look as follows:
SELECT SUM(amount) FROM table WHERE (keyA, keyB, keyC, keyD, keyE, keyF) IN {internal table}.
However, this representation is not possible in ABAP OpenSQL.
Only a single column (such as keyA) may be stated, not a composite key. Furthermore, I can only use 'selection tables' (those with SIGN, OPTION, LOW, HIGH) after the keyword IN.
Using FOR ALL ENTRIES seems feasible, however in this case I cannot use SUM since aggregation is not allowed in the same query.
Any suggestions?
For selecting records for each entry of an internal table, normally the for all entries idiom in ABAP Open SQL is your friend. In your case, you have the additional requirement to aggregate a sum. Unfortunately, the result set of a SELECT statement that works with for all entries is not allowed to use aggregate functions. In my eyes, the best way in this case is to compute the sum from the result set in the ABAP layer. The following example works in my system (note in passing: using the new ABAP language features that came with 7.40, you could considerably shorten the whole code).
report zz_ztmp_test.

start-of-selection.
  perform test.

* Database table ZTMP_TEST :
* ID    - key field    - type CHAR10
* VALUE - no key field - type INT4
* Content: 'A' 10, 'B' 20, 'C' 30, 'D' 40, 'E' 50
types: ty_entries type standard table of ztmp_test.

* ---
form test.
  data: lv_sum    type i,
        lt_result type ty_entries,
        lt_keys   type ty_entries.
  perform fill_keys changing lt_keys.
  if lt_keys is not initial.
    select * into table lt_result
      from ztmp_test
      for all entries in lt_keys
      where id = lt_keys-id.
  endif.
  perform get_sum using lt_result
                  changing lv_sum.
  write: / lv_sum.
endform.

form fill_keys changing ct_keys type ty_entries.
  append:
    'A' to ct_keys,
    'C' to ct_keys,
    'E' to ct_keys.
endform.

form get_sum using it_entries type ty_entries
             changing value(ev_sum) type i.
  field-symbols: <ls_test> type ztmp_test.
  clear ev_sum.
  loop at it_entries assigning <ls_test>.
    add <ls_test>-value to ev_sum.
  endloop.
endform.
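The fetch-then-aggregate pattern above is language-agnostic. A compact sketch of the same idea in Python with sqlite3 (the table and data mirror the ZTMP_TEST example; purely illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ztmp_test (id TEXT PRIMARY KEY, value INTEGER)")
con.executemany("INSERT INTO ztmp_test VALUES (?, ?)",
                [('A', 10), ('B', 20), ('C', 30), ('D', 40), ('E', 50)])

keys = ['A', 'C', 'E']
# Fetch only rows matching the key set (the FOR ALL ENTRIES analogue)...
placeholders = ','.join('?' * len(keys))
rows = con.execute(
    f"SELECT value FROM ztmp_test WHERE id IN ({placeholders})",
    keys).fetchall()
# ...then aggregate in the application layer (the get_sum analogue)
total = sum(v for (v,) in rows)
print(total)  # 90
```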
I would use FOR ALL ENTRIES to fetch all the related rows, then LOOP round the resulting table and add up the relevant field into a total. If you have ABAP 740 or later, you can use REDUCE operator to avoid having to loop round the table manually:
DATA(total) = REDUCE i( INIT sum = 0
FOR wa IN itab NEXT sum = sum + wa-field ).
One possible approach is to summarize on the fly inside a SELECT loop using the SELECT ... ENDSELECT statement.
Sample with calculating all order lines/quantities for the plant:
TYPES: BEGIN OF ls_collect,
         werks TYPE t001w-werks,
         menge TYPE ekpo-menge,
       END OF ls_collect.
DATA: lt_collect TYPE TABLE OF ls_collect.

SELECT werks UP TO 100 ROWS
  FROM t001w
  INTO TABLE @DATA(lt_werks).

SELECT werks, menge
  FROM ekpo
  INTO @DATA(order)
  FOR ALL ENTRIES IN @lt_werks
  WHERE werks = @lt_werks-werks.
  COLLECT order INTO lt_collect.
ENDSELECT.
The sample has no business meaning and is placed here just for educational purposes.
Another, more robust and modern, approach is CTEs (Common Table Expressions), available since ABAP 7.51. Among other things, this technique is specifically intended for total/subtotal tasks:
WITH
+plants AS (
  SELECT werks UP TO 100 ROWS
    FROM t001w ),
+orders_by_plant AS (
  SELECT e~werks, SUM( menge ) AS menge
    FROM ekpo AS e
    INNER JOIN +plants AS m
      ON e~werks = m~werks
    GROUP BY e~werks )
SELECT werks, menge
  FROM +orders_by_plant
  INTO TABLE @DATA(lt_sums)
  ORDER BY werks.
cl_demo_output=>display( lt_sums ).
The first table expression, +plants, represents your internal table; the second, +orders_by_plant, computes the quantity totals for the selected plants; and the last query is the final output query.
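SQLite also supports CTEs, so the same shape of query can be sketched there. The following Python illustration (invented plant/quantity data, not the real EKPO table) mirrors the +plants / +orders_by_plant structure:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ekpo (werks TEXT, menge INTEGER)")
con.executemany("INSERT INTO ekpo VALUES (?, ?)",
                [('0001', 5), ('0001', 7), ('0002', 3)])

# CTE selects the plants; main query joins and aggregates per plant
sums = con.execute("""
    WITH plants(werks) AS (SELECT DISTINCT werks FROM ekpo)
    SELECT e.werks, SUM(e.menge) AS menge
      FROM ekpo AS e
      INNER JOIN plants AS m ON e.werks = m.werks
     GROUP BY e.werks
     ORDER BY e.werks
""").fetchall()
print(sums)  # [('0001', 12), ('0002', 3)]
```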

How do I find the length (size) of a binary blob?

I have an SQLite table that contains a BLOB I need to do a size/length check on. How do I do that?
According to documentation length(blob) only works on texts and will stop counting after the first NULL. My tests confirmed this. I'm using SQLite 3.4.2.
I haven't had this problem, but you could try length(hex(blob))/2
Update (Aug-2012):
For SQLite 3.7.6 (released April 12, 2011) and later, length(blob_column) works as expected with both text and binary data.
For me, length(blob) works just fine; it gives the same result as the other method.
As an additional answer: a common pitfall is that SQLite effectively ignores the declared column type, so if you store a string in a blob column, it becomes a string value in that row. Since length works differently on strings, it will then only return the number of characters before the first 0 octet. It is easy to store strings in blob columns, because you normally have to cast explicitly to insert a blob:
insert into table values ('xxxx');               -- string insert
insert into table values (cast('xxxx' as blob)); -- blob insert
To get the correct length for values stored as strings, you can cast the length argument to blob:
select length(string-value-from-blob-column); -- treats blob column as string
select length(cast(blob-column as blob));     -- correctly returns blob length
The reason why length(hex(blob-column))/2 works is that hex doesn't stop at internal 0 octets, and the generated hex string doesn't contain 0 octets anymore, so length returns the correct (full) length.
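Both behaviours can be shown side by side on a value containing an embedded NUL byte; a sketch with Python's sqlite3 (a modern SQLite build is assumed, where length() on a blob counts bytes):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (myblob BLOB)")
data = b'ab\x00cd'  # 5 bytes with an embedded NUL
con.execute("INSERT INTO mytable VALUES (?)", (data,))

row = con.execute(
    "SELECT length(myblob),"            # byte count of the blob: 5
    " length(hex(myblob))/2,"           # hex trick, also 5
    " length(CAST(myblob AS TEXT))"     # as text: stops at the NUL, so 2
    " FROM mytable").fetchone()
print(row)
```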
Example of a select query that does this, getting the length of the blob in column myblob, in table mytable, in row 3:
select length(myblob) from mytable where rowid=3;
LENGTH() function in sqlite 3.7.13 on Debian 7 does not work, but LENGTH(HEX())/2 works fine.
# sqlite --version
3.7.13 2012-06-11 02:05:22 f5b5a13f7394dc143aa136f1d4faba6839eaa6dc
# sqlite xxx.db "SELECT docid, LENGTH(doccontent), LENGTH(HEX(doccontent))/2 AS b FROM cr_doc LIMIT 10;"
1|6|77824
2|5|176251
3|5|176251
4|6|39936
5|6|43520
6|494|101447
7|6|41472
8|6|61440
9|6|41984
10|6|41472
