Reading a 10-digit integer from SQLite into a C++ program

I am trying to read a 10-12 digit signed integer value stored in an SQLite database. I want to read it into an int variable in my C++ code. I am trying the following query, but I know I am going wrong somewhere, as the value I retrieve from the database is always a negative number, different from the one in the database.
"SELECT _id FROM Picture where Time<%lld"
I am then appending the integer value to the above string, before sending it to SQLite, by using sprintf. When I print out the query, it shows a negative long int number. What am I doing wrong with the query?
Thanks,
Ab

I figured out where I was going wrong. The field I was trying to read holds a 64-bit integer, and I was using sqlite3_column_int instead of sqlite3_column_int64. I changed it to the latter and got the data back in my C++ signed long long variable.
Thanks for giving me a little bit of direction there.
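For anyone hitting the same issue, here is a minimal sketch of that fix in C++ (table and column names are taken from the question, error handling is trimmed, and the cutoff is bound as a parameter instead of being formatted with sprintf):

#include <sqlite3.h>
#include <cstdio>

// Reads 64-bit _id values from Picture for rows with Time below 'cutoff'.
// sqlite3_bind_int64 replaces the sprintf step, and sqlite3_column_int64
// (not sqlite3_column_int) reads the full 64-bit value back.
void read_ids(sqlite3 *db, long long cutoff) {
    sqlite3_stmt *stmt = nullptr;
    const char *sql = "SELECT _id FROM Picture WHERE Time < ?";
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK)
        return;
    sqlite3_bind_int64(stmt, 1, cutoff);
    while (sqlite3_step(stmt) == SQLITE_ROW) {
        long long id = sqlite3_column_int64(stmt, 0);
        std::printf("%lld\n", id);
    }
    sqlite3_finalize(stmt);
}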

Related

Searching for a BLOB field in SQLite

I have a column in my database that stores a BLOB.
I want to run a query to check whether a specific byte array value is present in the table.
The value is b'\xf4\x8f\xc6{\xc2mH(\x97\x9c\x83hkE\x8b\x95' (python bytes).
I tried to run this query:
SELECT * from received_message
WHERE "EphemeralID"
LIKE HEX('\xf4\x8f\xc6{\xc2mH(\x97\x9c\x83hkE\x8b\x95');
But I get 0 results, though I am 100% sure that I stored this value in the database.
Is there something wrong with my query?
Your search string is a bit odd; you appear to have some complex characters in there like { and (. Maybe you should search for the blob the way it is stored instead?
From the SQLite documentation:
BLOB literals are string literals containing hexadecimal data and
preceded by a single "x" or "X" character. Example: X'53514C697465'
So maybe do a LIKE with the hexadecimal (ASCII) representation of the value you want? Maybe start by looking for just f48f or F48F, depending on whether your SQLite stores it in upper case.
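To illustrate, here is a rough sketch using the SQLite C/C++ API, in line with the other threads here (the hex string is derived from the bytes in the question, the database file name is hypothetical, and error handling is trimmed):

#include <sqlite3.h>
#include <cstdio>

// Looks for an exact BLOB match by comparing against an X'...' BLOB literal.
int main() {
    sqlite3 *db = nullptr;
    sqlite3_open("messages.db", &db);  // hypothetical file name

    const char *sql =
        "SELECT COUNT(*) FROM received_message "
        "WHERE EphemeralID = X'f48fc67bc26d4828979c83686b458b95'";

    sqlite3_stmt *stmt = nullptr;
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) == SQLITE_OK &&
        sqlite3_step(stmt) == SQLITE_ROW) {
        std::printf("matching rows: %d\n", sqlite3_column_int(stmt, 0));
    }
    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}

Binding the raw bytes as a parameter (sqlite3_bind_blob in C/C++, or a ? placeholder with a bytes value in Python's sqlite3 module) avoids having to build the literal at all.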

Using ADO through the MariaDB ODBC Connector, I can't seem to return decimals properly

I recently migrated from MySQL to MariaDB and my prices were off by two decimal places. I've done a check, and the price column has its type set to decimal(19,4) so that I could have four decimal places of accuracy if I needed it.
Logged into MariaDB and running a SELECT statement, the prices are okay.
Logging in with HeidiSQL also shows that the prices are okay.
The linked table that uses the same ODBC connection I'm trying to use also shows the correct values.
So I've concluded that the problem is somewhere in connecting via ADO and the ODBC connector.
I found that if I cast the column in my SQL statement as decimal(6,2) the decimals appear properly, so I recast the table directly within MariaDB.
However, I noticed that all the prices appear correctly, except for the ones that have two zeros in the decimal places.
The value is 5.00 and what I get returned is 0.05.
I'm not sure where I'm going wrong, but this means that any round number or trailing zeros won't keep their place. How do I fix this problem? Is it the way my column is cast, the way VBA/ADO interprets what it receives, or what the ODBC connector returns?
This is the code I am using to try to debug the problem:
Public Sub decimalcheck()
    Dim db As New ADODB.Connection
    Dim rs As New ADODB.Recordset
    Dim constring As String

    constring = "DSN=my_dsn;"
    db.Open constring, "user", "pass"

    rs.ActiveConnection = db
    ' Plain select, superseded by the CAST version on the next line
    'rs.Source = "select Prices FROM my_table"
    rs.Source = "select cast(my_prices as decimal(6,2)) FROM my_table"
    rs.Open
    rs.MoveFirst

    ' Print each price as returned and as converted with CDec
    Do While Not rs.EOF And Not rs.BOF
        Debug.Print rs.Fields(0)
        Debug.Print CDec(rs.Fields(0))
        rs.MoveNext
    Loop

    rs.Close
    db.Close
End Sub
Update
I'm having some results casting as DOUBLE, so now I need to research whether it's worth keeping the column as DECIMAL or going to DOUBLE.
Okay, here are my results:
My assumption is that ADO (or VBA) is more used to working with DOUBLE and isn't really able to interpret DECIMAL correctly. So that leaves us with a question:
What's the difference between DECIMAL and DOUBLE?
I didn't find much. There is a difference in how big the numbers can be, but if you are going to provide a limit such as decimal(19,4) or double(19,4), that doesn't matter unless you go beyond one of the two ranges; then you are stuck using the other, or finding a different solution. So for me this is not a big deal.
It seems that people are saying that DECIMAL is more precise than DOUBLE. I'm not sure what that means; I guess scientific precision. I'm working with money, so maybe I'll settle for DECIMAL in my case, even though it may not be required and is probably overkill. (See Vladislav Vaintroub's comment: DECIMAL is absolutely precise if it fits within the (M,D) range, so use it for figures representing money.)
I see two solutions, but there are probably many:
First, you can just change the column from DECIMAL to DOUBLE and leave it at that.
Second, you can keep the column as a DECIMAL and cast it to DOUBLE in the SELECT statement when using it with ADO. Sample code:
SELECT CAST( column_name AS DOUBLE(19,4) ) FROM my_table
Sorry for asking a question only to figure it out a few moments later. Hopefully it helps others, if not... well delete it.

Difference between these 2 queries in Teradata

SEL * FROM TABLE WHERE a=10
VS
SEL * FROM TABLE WHERE a='10'
Here a is a BIGINT. The explain plan does not show any difference. How does Teradata handle this, and is there any difference between these queries?
Teradata automatically applies a datatype conversion if you compare different datatypes (usually but not always).
Whenever a string is compared to a number the string will be converted to a FLOAT, which is the most flexible numeric format.
In your case this conversion was already done by the parser, so the optimizer didn't know '10' was a string before.
If you do it the other way:
SEL * FROM TABLE WHERE a=10 -- column a is a character
you can spot this cast in explain:
"(table.last_name (FLOAT, FORMAT '-9.99999999999999E-999'))= 1.00000000000000E 001"
Sometimes this automatic conversion is convenient, but in a case like this it's really bad: no index can be used and all existing statistics are lost. So you'd better know your datatypes :-)
This (FLOAT, FORMAT '-9.99999999999999E-999') in the Explain output is one of the first things I check if a query performs badly.

What is the difference between related SQLite data-types like INT, INTEGER, SMALLINT and TINYINT?

When creating a table in SQLite3, I get confused when confronted with all the possible datatypes which imply similar contents, so could anyone tell me the difference between the following data-types?
INT, INTEGER, SMALLINT, TINYINT
DEC, DECIMAL
LONGCHAR, LONGVARCHAR
DATETIME, SMALLDATETIME
Is there some documentation somewhere which lists the min./max. capacities of the various data-types? For example, I guess smallint holds a larger maximum value than tinyint, but a smaller value than integer, but I have no idea of what these capacities are.
SQLite, technically, has no fixed column data types; it has storage classes in a manifest typing system, and yeah, it's confusing if you're used to traditional RDBMSes. Internally, each value is stored in one of a handful of storage classes, and values are coerced/converted into them based on affinities (i.e., the data types assigned to columns).
The best thing that I'd recommend you do is to:
Temporarily forget everything you used to know about standalone database datatypes
Read the datatype documentation on the SQLite site.
Take the types based off of your old schema, and see what they'd map to in SQLite
Migrate all the data to the SQLite database.
Note: The datatype limitations can be cumbersome, especially if you add time durations, or dates, or things of that nature in SQL. SQLite has very few built-in functions for that sort of thing. However, SQLite does provide an easy way for you to make your own built-in functions for adding time durations and things of that nature, through the sqlite3_create_function library function. You would use that facility in place of traditional stored procedures.
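As a rough illustration of that facility (the function name add_seconds, its behaviour, and the table/column names in the usage comment are made up for this example):

#include <sqlite3.h>

// Example scalar function: add_seconds(timestamp, n) returns timestamp + n.
// Registered once per connection, then usable from SQL like a built-in.
static void add_seconds(sqlite3_context *ctx, int argc, sqlite3_value **argv) {
    if (argc != 2) {
        sqlite3_result_null(ctx);
        return;
    }
    sqlite3_int64 t = sqlite3_value_int64(argv[0]);
    sqlite3_int64 n = sqlite3_value_int64(argv[1]);
    sqlite3_result_int64(ctx, t + n);
}

// After opening the connection:
//   sqlite3_create_function(db, "add_seconds", 2, SQLITE_UTF8, nullptr,
//                           add_seconds, nullptr, nullptr);
// Then: SELECT add_seconds(created_at, 3600) FROM some_table;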
The difference is syntactic sugar. Only a few substrings of the type names matter as far as the type affinity is concerned.
INT, INTEGER, SMALLINT, TINYINT → INTEGER affinity, because they all contain "INT".
LONGCHAR, LONGVARCHAR → TEXT affinity, because they contain "CHAR".
DEC, DECIMAL, DATETIME, SMALLDATETIME → NUMERIC, because they don't contain any of the substrings that matter.
The rules for determining affinity are listed at the SQLite site.
If you insist on strict typing, you can implement it with CHECK constraints:
CREATE TABLE T (
N INTEGER CHECK(TYPEOF(N) = 'integer'),
Str TEXT CHECK(TYPEOF(Str) = 'text'),
Dt DATETIME CHECK(JULIANDAY(Dt) IS NOT NULL)
);
But I never bother with it.
As for the capacity of each type:
INTEGER is always signed 64-bit. Note that SQLite optimizes the storage of small integers behind-the-scenes, so TINYINT wouldn't be useful anyway.
REAL is always 64-bit (double).
TEXT and BLOB have a maximum size determined by a preprocessor macro, which defaults to 1,000,000,000 bytes.
Most of those are there for compatibility. You really only have integer, float, text, and blob. Dates can be stored either as a number (Unix time is an integer, Microsoft time is a float) or as text. The small sketch after the storage-class list below shows this in practice.
NULL. The value is a NULL value.
INTEGER. The value is a signed integer, stored in 1, 2, 3, 4, 6, or 8 bytes depending on the magnitude of the value.
REAL. The value is a floating point value, stored as an 8-byte IEEE floating point number.
TEXT. The value is a text string, stored using the database encoding (UTF-8, UTF-16BE or UTF-16LE).
BLOB. The value is a blob of data, stored exactly as it was input.
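To see that manifest typing in action, here is a small self-contained sketch (the table name t and its single TINYINT column are made up) that prints the storage class actually used for each inserted value:

#include <sqlite3.h>
#include <cstdio>

int main() {
    sqlite3 *db = nullptr;
    sqlite3_open(":memory:", &db);
    sqlite3_exec(db,
                 "CREATE TABLE t (x TINYINT);"
                 "INSERT INTO t VALUES (1), (1.5), ('one');",
                 nullptr, nullptr, nullptr);

    sqlite3_stmt *stmt = nullptr;
    sqlite3_prepare_v2(db, "SELECT typeof(x) FROM t", -1, &stmt, nullptr);
    while (sqlite3_step(stmt) == SQLITE_ROW) {
        // Prints integer, real, text: the declared TINYINT only sets an
        // affinity; it does not restrict what each row can actually hold.
        std::printf("%s\n", (const char *)sqlite3_column_text(stmt, 0));
    }
    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}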
As an addition to the answer from dan04: if you want to blindly insert a non-zero NUMERIC value represented as TEXT, but ensure that the text is convertible to a number:
your_numeric_col NUMERIC CHECK(abs(your_numeric_col) <> 0)
A typical use case is a query from a program that treats all data as text (for uniformity and simplicity, since SQLite is flexible about types anyway). The nice thing about this is that it allows constructs like this:
INSERT INTO table (..., your_numeric_column, ...) VALUES (..., some_string, ...)
which is convenient if you're using placeholders, because you don't have to handle such non-zero numeric fields specially. An example using Python's sqlite3 module would be:
conn_or_cursor.execute(
    "INSERT INTO table VALUES (" + ",".join("?" * num_values) + ")",
    str_value_tuple)  # no need to convert some from str to int/float
In the above example, all values in str_value_tuple will be escaped and quoted as strings when passed to SQLite. However, since we're not explicitly checking the type via TYPEOF but only convertibility to a numeric type, it will still work as desired (i.e., SQLite will either store it as a numeric value or fail otherwise).

Can't store a Korean string in a database using LINQ

I'm using this code to store a Korean string in my database:
Dim username As String = Request.QueryString.Get("Some Korean String")
Using dg As New DataContext()
    Dim newfriend As New FriendsTable With {.AskingUser = User.Identity.Name, .BeingAskedUser = username, .Pending = True}
    dg.FriendsTables.InsertOnSubmit(newfriend)
    dg.SubmitChanges()
End Using
Checking my database, the username stored is the string "????"...
Anybody got an idea how this happened, or any workarounds?
What is your database collation? Are you able to store Korean strings with any other data access technology? What is the type of the username column, and is it accurately mapped in LINQ to SQL?
I suspect that something in the database isn't set up correctly to allow full Unicode. I very much doubt that this has anything to do with LINQ itself.
The other thing to check is that you're actually getting the right data in the first place. There are often several places where things can go wrong - you need to validate each place separately to see where the data is being corrupted. I have a short article on this which you may find helpful.
It sounds like you are storing Korean text in a varchar/text column which is not using a Korean collation. The easiest fix is to change the column type to nvarchar/ntext.
The nchar column types store Unicode data, whereas the char and varchar types store single-byte characters in the specified collation.
