I have a column C of type REAL in table F in SQLite. I want to join on it wherever another table contains the negative value of C (along with some other fields).
However, -C, 0-C, etc. all return the value of C truncated at the comma, e.g. when C contains "123,456", -C returns "-123".
Should I cast this via a string first, or is the syntax different?
It looks like the , in 123,456 is meant to be a decimal separator, but SQLite treats the whole thing as a string (i.e. '123,456' rather than 123.456). Keep in mind that SQLite's type system is a little different from standard SQL's, as values have types but columns don't:
[...] In SQLite, the datatype of a value is associated with the value itself, not with its container. [...]
So you can quietly put a string (that looks like a real number in some locales) into a real column and nothing bad happens until later.
You could fix the import process to interpret the decimal separator as desired before the data gets into SQLite, or you could use replace() to fix the values up as needed:
sqlite> select -'123,45';
-123
sqlite> select -replace('123,45', ',', '.');
-123.45
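To use this in the join itself, here is a minimal sketch; the second table G and its column D are hypothetical names for illustration, and the CAST just makes the text-to-REAL conversion explicit:

-- join F to a (hypothetical) table G on the negated, comma-fixed value of C
SELECT *
FROM F
JOIN G
  ON G.D = -CAST(replace(F.C, ',', '.') AS REAL);

Bear in mind that exact equality on REAL values can be fragile, so it may be worth rounding both sides to a fixed number of decimals before comparing.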
In my database, a table contains two columns, each containing an 8-character ASCII code; usually it's just alphanumeric. For example, a row might contain A123B45C in col1 and PQ2R4680 in col2.
I need to have a query/view that outputs a 4 character string calculated as the 2nd+3rd chars of these, concatenated. So in this example the extra column value would be 12Q2.
This is a cut-down version of the SQL I'd like to use, although it won't work as written because of zero stripping / conversion:
select
*,
(substr(col1, 2, 2) || substr(col2, 2, 2)) AS mode
from (nested SQL source query)
where (conditions)
This fails because if a row contains A00B23B4 in col1 and P32R4680 in col2, the expression evaluates to 0032 and the query output will contain the numeric 32, not 0032. (It's worse if col1 contains P1-2345 or "1.23456" or something like that.)
Other questions on preventing zero stripping and string-to-integer conversion in SQLite all relate to data in tables where you can give a column TEXT affinity, or to static (quotable) data. In this case I can't do either. I can also only create queries, not tables, so I can't write to a temp table.
What is the best way to ensure I get a 4 character output in all cases?
I believe your issue is not with substr stripping characters, as that works as expected. Running the query
SELECT substr(col1,2,2) || substr(col2,2,2) AS mode FROM stripping
returns the expected 4-character text values, leading zeros included.
Rather, your issue is likely in how you subsequently use mode, in which case you may need a CAST expression.
For example, the following reproduces what is possibly happening; the oops column is converted to INTEGER and the leading zeros are lost:
SELECT substr(col1,2,2) || substr(col2,2,2) AS mode,
       CAST(substr(col1,2,2) || substr(col2,2,2) AS INTEGER) AS oops
FROM stripping;
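To reproduce this end to end, a minimal sketch of such a stripping table, using the example values from the question (the schema itself is a guess, for illustration only):

CREATE TABLE stripping (col1 TEXT, col2 TEXT);
INSERT INTO stripping VALUES ('A123B45C', 'PQ2R4680');
INSERT INTO stripping VALUES ('A00B23B4', 'P32R4680');

-- mode stays text ('12Q2', '0032'); oops is cast to INTEGER (12, 32) and loses the zeros
SELECT substr(col1,2,2) || substr(col2,2,2) AS mode,
       CAST(substr(col1,2,2) || substr(col2,2,2) AS INTEGER) AS oops
FROM stripping;

As long as mode is only ever used as text, the leading zeros survive without any cast.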
I need to use the cast function with the length of a column in Teradata.
Say I have a table with the following data:
id | name
1  | dhawal
2  | bhaskar
I need to use a cast operation, something like:
select cast(name as CHAR(<length of column>)) from table
How can I do that?
You have to find the length by looking at the table definition, either manually (SHOW TABLE) or by writing dynamic SQL that queries dbc.ColumnsV.
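A minimal sketch of that dictionary lookup; the database and table names here are placeholders, and as far as I know ColumnLength is reported in bytes, which can differ from the character length for multi-byte character sets:

SELECT ColumnName, ColumnLength
FROM dbc.ColumnsV
WHERE DatabaseName = 'your_db'      -- placeholder database name
  AND TableName    = 'your_table'   -- placeholder table name
  AND ColumnName   = 'name';

You would then splice that length into the CAST when building the dynamic SQL.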
Update:
You can find the maximum length of the actual data using
select max(length(cast(... as varchar(<large enough value>)))) from TABLE
But if this is for FastExport, I think casting as varchar(large-enough-value) and postprocessing to remove the 2-byte length info FastExport includes is a better solution (since exporting a CHAR() will result in a fixed-length output file with lots of spaces in it).
You may know this already, but just in case: Teradata usually recommends switching to TPT instead of the legacy fexp.
I would like to run a query that joins a table to a manually generated list, but I am stuck trying to generate that list. Here is an example of what I am attempting:
SELECT
*
FROM
('29/12/2014', '30/12/2014', '30/12/2014') dates
;
Ideally I would want my output to look like:
29/12/2014
30/12/2014
31/12/2014
What's your Teradata release?
In TD14 there's STRTOK_SPLIT_TO_TABLE:
SELECT *
FROM TABLE (STRTOK_SPLIT_TO_TABLE(1 -- any dummy value
,'29/12/2014,30/12/2014,30/12/2014' -- any delimited string
,',' -- delimiter
)
RETURNS (outkey INTEGER
,tokennum INTEGER
,token VARCHAR(20) CHARACTER SET UNICODE) -- modify to match the actual size
) AS d
You can easily put this in a derived table and then join to it, as sketched below.
inkey (here the dummy value 1) is a numeric or string column, usually a key. Can be used for joining back to the original row.
outkey is the same as inkey.
tokennum is the ordinal position of the token in the input string.
token is the extracted substring.
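For example, a minimal sketch of such a derived-table join; the table mytable and its column date_col are hypothetical and assumed to hold dates as DD/MM/YYYY strings:

SELECT t.*
FROM mytable t
JOIN (
    SELECT token AS dt
    FROM TABLE (STRTOK_SPLIT_TO_TABLE(1
                ,'29/12/2014,30/12/2014,30/12/2014'
                ,','
                )
         RETURNS (outkey INTEGER
                 ,tokennum INTEGER
                 ,token VARCHAR(20) CHARACTER SET UNICODE)
         ) AS d
) AS dates
ON t.date_col = dates.dt;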
Try this:
select '29/12/2014'
union
select '30/12/2014'
union
...
It should work in Teradata as well as in MySQL.
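Joining to that list would then look something like this sketch; again, mytable and date_col are hypothetical names, and UNION ALL is used since the literals are already distinct:

SELECT t.*
FROM mytable t
JOIN (
    SELECT '29/12/2014' AS dt
    UNION ALL SELECT '30/12/2014'
    UNION ALL SELECT '31/12/2014'
) AS dates
ON t.date_col = dates.dt;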
I want to get a subtree from a table by tree path.
The path column stores strings like:
foo/
foo/bar/
foo/bar/baz/
If I try to select all records that start with a certain path:
EXPLAIN QUERY PLAN SELECT * FROM f WHERE path LIKE "foo/%"
it tells me that the table is scanned, even though the path column is indexed :(
Is there any way I could make LIKE use the index and not scan the table?
I found a way to achieve what I want with a closure table, but it's harder to maintain and writes are extremely slow...
To be able to use an index for LIKE in SQLite,
the table column must have TEXT affinity, i.e., have a type of TEXT or VARCHAR or something like that; and
the index must be declared as COLLATE NOCASE (either directly, or because the column has been declared as COLLATE NOCASE):
> CREATE TABLE f(path TEXT);
> CREATE INDEX fi ON f(path COLLATE NOCASE);
> EXPLAIN QUERY PLAN SELECT * FROM f WHERE path LIKE 'foo/%';
0|0|0|SEARCH TABLE f USING COVERING INDEX fi (path>? AND path<?)
The second restriction could be removed with the case_sensitive_like PRAGMA, but this would change the behaviour of LIKE.
Alternatively, one could use a case-sensitive comparison, by replacing LIKE 'foo/%' with GLOB 'foo/*'.
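A sketch of the GLOB variant; note that GLOB can only use an index with the default BINARY collation, so it needs its own index alongside the NOCASE one above, and the plan line shown is what one would expect rather than verbatim output:

> CREATE INDEX fi_bin ON f(path);   -- BINARY collation, usable by GLOB
> EXPLAIN QUERY PLAN SELECT * FROM f WHERE path GLOB 'foo/*';
-- expected: SEARCH TABLE f USING COVERING INDEX fi_bin (path>? AND path<?)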
LIKE has strict requirements to be optimizable with an index (ref).
If you can relax your requirements a little, you can use lexicographic ordering to get indexed lookups, e.g.
SELECT * FROM f WHERE PATH >= 'foo/' AND PATH < 'foo0'
where '0' is the lexicographically next character after '/'.
This is essentially the same optimization the optimizer would do for LIKEs if the requirements for optimization are met.
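A quick sketch of the range form; the index must use the default BINARY collation here, and the plan line is again indicative rather than verbatim:

> CREATE TABLE f(path TEXT);
> CREATE INDEX fp ON f(path);        -- default BINARY collation
> EXPLAIN QUERY PLAN SELECT * FROM f WHERE path >= 'foo/' AND path < 'foo0';
-- expected: SEARCH TABLE f USING COVERING INDEX fp (path>? AND path<?)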
I have an SQLite table that contains a BLOB I need to do a size/length check on. How do I do that?
According to the documentation, length(blob) only works on text and will stop counting after the first NULL. My tests confirmed this. I'm using SQLite 3.4.2.
I haven't had this problem, but you could try length(hex(blob_column))/2
Update (Aug-2012):
For SQLite 3.7.6 (released April 12, 2011) and later, length(blob_column) works as expected with both text and binary data.
For me length(blob) works just fine and gives the same results as the other approach.
As an additional answer: a common problem is that SQLite effectively ignores the declared column type of a table, so if you store a string in a blob column, that value becomes a string for that row. Since length works differently on strings, it will then only return the number of characters before the first 0 octet. It's easy to store strings in blob columns, because you normally have to cast explicitly to insert a blob:
insert into table values ('xxxx');                -- string insert
insert into table values (cast('xxxx' as blob));  -- blob insert
To get the correct length for values stored as strings, you can cast the length argument to blob:
select length(string-value-from-blob-column);   -- treats blob column as string
select length(cast(blob-column as blob));       -- correctly returns blob length
The reason why length(hex(blob-column))/2 works is that hex doesn't stop at internal 0 octets, and the generated hex string doesn't contain 0 octets anymore, so length returns the correct (full) length.
Example of a select query that does this, getting the length of the blob in column myblob, in table mytable, in row 3:
select length(myblob) from mytable where rowid=3;
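A minimal end-to-end sketch, using a throwaway table with the same names as in the example above; the commented results assume SQLite 3.7.6 or later, as noted earlier:

CREATE TABLE mytable (myblob BLOB);
INSERT INTO mytable VALUES (x'61006263');      -- 4-byte blob with an embedded 0 octet

SELECT typeof(myblob),                         -- 'blob'
       length(myblob),                         -- 4 on 3.7.6+ (may misbehave on older builds)
       length(hex(myblob)) / 2                 -- 4, also works on older versions
FROM mytable;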
The LENGTH() function in SQLite 3.7.13 on Debian 7 does not work, but LENGTH(HEX())/2 works fine.
# sqlite --version
3.7.13 2012-06-11 02:05:22 f5b5a13f7394dc143aa136f1d4faba6839eaa6dc
# sqlite xxx.db "SELECT docid, LENGTH(doccontent), LENGTH(HEX(doccontent))/2 AS b FROM cr_doc LIMIT 10;"
1|6|77824
2|5|176251
3|5|176251
4|6|39936
5|6|43520
6|494|101447
7|6|41472
8|6|61440
9|6|41984
10|6|41472