I have some experience with MySQL, and am moving to SQLite for the first time.
The SQLite documentation for data types, section 1.2, states that
SQLite does not have a storage class set aside for storing dates
and/or times. Instead, the built-in Date And Time Functions of SQLite
are capable of storing dates and times as TEXT, REAL, or INTEGER
values
I would prefer an auto timestamp, but will live with having to pass it in every time if it will get my code working.
Following this question, I have declared my field as
`time_stamp` TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL,
However, it is not displaying anything in a DB-aware grid.
I added an OnDrawCell() handler and thought that this might help:
procedure TForm1.myGridDrawCell(Sender: TObject; ACol, ARow: Integer;
  Rect: TRect; State: TGridDrawState); // handler signature assumed; match your grid's OnDrawCell event
var
  cellText  : String;
  cellValue : String;
  dateTime  : TDateTime;
begin
  if ARow = 0 then
    Exit;
  cellValue := myGrid.Cells[ACol, ARow];
  case ACol of
    0: ; // invisible column, do nothing
    1: cellText := cellValue;
    2: begin
         dateTime := StrToDateTime(cellValue);
         cellText := DateTimeToStr(dateTime);
       end;
    3: cellText := cellValue;
    4: cellText := cellValue;
  end;
  myGrid.Canvas.FillRect(Rect);
  myGrid.Canvas.TextOut(Rect.Left + 2, Rect.Top + 2, cellText);
end;
where column 2 is my timestamp, which is apparently empty.
So, my question is: can anyone correct this code snippet, or show me a code example of how to declare an SQLite column which defaults to the current timestamp and how to display it in a DB-aware grid? I am happy enough to store a Unix timestamp, if that would help.
Btw, I am using XE7 and FireDAC with a TMS TAdvDbGrid.
[Update] As mentioned in a comment thread below (and as perhaps ought to have been mentioned originally), in this case I am generating some dummy data for testing purposes using a TDateTime and IncSecond(startTime, delay * i). So, effectively, I am writing a TDateTime to that field, then I close/open the datasource and all other fields of the new row are shown, but not that one.
But, that actually digresses from my original, "please provide an example" question and turns it into a "please fix my code" question. An answer to either will make me very happy.
You are looking in the wrong place; your problem is in the dataset. It is a generic problem for all datasets that get their data from an external database.
Your query/dataset has a copy of the data in your database. It gets that copy from the database when it is opened or when you use it to update/insert records into the database. If the data in the database is changed some other way, your dataset will not have those changes until the changed record(s) are re-read. This applies to you, because the timestamp value is being set in the database, not through the dataset. Re-reading can be accomplished by closing and then reopening the dataset.
With FireDAC, try setting the query's UpdateOptions.RefreshMode := rmAll. It has worked for me when there is a single table in the query, i.e. no joins.
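For the declaration half of the question, the column definition from the question already works on the SQLite side. A minimal sketch (the table and the other column names are invented for illustration):

CREATE TABLE demo (
  id         INTEGER PRIMARY KEY,                          -- hypothetical key column
  payload    TEXT,                                         -- hypothetical data column
  time_stamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL  -- SQLite fills this with UTC text 'YYYY-MM-DD HH:MM:SS'
);

-- the default only fires when the INSERT omits the column
INSERT INTO demo (payload) VALUES ('first row');

Closing and reopening the query, as described above, will then pull the database-assigned value back into the dataset.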
I have a question about the structure of a SQLite query. I'm trying to update a user-selected value in the table, referencing the row by the username.
The table is called Data and has these columns: USERNAME, PASSWORD, ADDRESS, NOTES.
I'm using the SQLite driver for Go (_ "github.com/mattn/go-sqlite3"); here's my query:
...
stmt, err := db.Prepare("UPDATE Data SET ?=? WHERE USERNAME=?")
check(err)
res, err := stmt.Exec(splittedQuery[0], splittedQuery[1], splittedQuery[2])
...
From this sequence I can only get a syntax error: near "?": syntax error.
How should I manage this? If it's a trivial question I'm sorry; I'm just new to Go and trying to learn something from it.
Thanks
You cannot do that in SQL. It's not specific to SQLite either. Parameterized placeholders are only for values; you cannot change the structure of the query with them. Here are some documentation links for your reference:
https://jmoiron.github.io/sqlx/#bindvars
https://use-the-index-luke.com/sql/where-clause/bind-parameters
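As a plain-SQL illustration of the difference, using the table from the question:

-- fine: placeholders stand in for values
UPDATE Data SET NOTES = ? WHERE USERNAME = ?;

-- syntax error: a placeholder cannot stand in for a column name
UPDATE Data SET ? = ? WHERE USERNAME = ?;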
What you are trying to do is build a dynamic query. You can do that by building the query string yourself:
query := "UPDATE Data SET " + col_name + "=? WHERE USERNAME=?"
But depending on the source of the column name, you need to be careful about SQL injection; a common mitigation is to check the column name against a fixed whitelist of allowed columns (this is a whole other topic; for fun, have a look at https://imgs.xkcd.com/comics/exploits_of_a_mom.png).
There are also a few libraries available to help you with this. To name one, you can check out https://github.com/Masterminds/squirrel
I'd like to force the Oracle sysdate function to return different values for separate statements, just like it does in Postgres. I've done some digging through the documentation, the net and SO itself, but couldn't find an answer that addresses this.
The documentation seems to be pretty sparse on this one: see for yourself
I'm using Oracle 11g with SQL Developer 18.3
Please see the MCVE below.
After executing this:
create table t(a timestamp);
insert into t values (sysdate);
insert into t values (sysdate);
insert into t values (sysdate);
select * from t;
I get:
A
---------------------------
18/12/25 04:25:59,000000000
18/12/25 04:25:59,000000000
18/12/25 04:25:59,000000000
I would want to get (changed by hand):
A
---------------------------
18/12/25 04:25:59,1234
18/12/25 04:25:59,7281
18/12/25 04:26:00,1928
Real issue is presented within different CALL statements to procedures, but the above sample seems to replicate the issue for me.
UPDATE
One thing I found to be helpful is to put pauses between statements, but this really isn't what I'm looking for:
set pause on;
create table t(a timestamp);
insert into t values (sysdate);
pause
insert into t values (sysdate);
pause
insert into t values (sysdate);
As noted in the documentation, the sysdate function returns a date, which only has precision down to seconds - it does not support fractional seconds. So, multiple calls within the same second will always get the same value, and you can't force it to do anything else.
You're putting that date value into a timestamp column, which causes an implicit conversion from one data type to the other, but that conversion can't set/create a new fractional seconds value - it keeps the implicit fractional seconds from the date, which are of course always zero.
As well as sysdate, Oracle has a systimestamp function, which returns a timestamp with time zone value - and that does have fractional seconds. The precision is limited by the platform you're running on. If you use that to populate your plain timestamp column then an implicit conversion still occurs, but you essentially just throw away the time zone information.
Oracle also supports current_date and current_timestamp, which are very similar - except they return the date/time in the current session time zone, rather than in the server time zone as the sys* versions do.
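So, as a minimal sketch against the same table, using systimestamp instead (the fractional seconds will now differ from row to row, subject to your platform's clock resolution):

insert into t values (systimestamp);
insert into t values (systimestamp);
insert into t values (systimestamp);
select * from t;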
I found that current_timestamp does the job:
drop table t;
create table t(a timestamp);
insert into t values (current_timestamp);
insert into t values (current_timestamp);
insert into t values (current_timestamp);
select * from t;
Outputs:
A
---------------------------
18/12/25 04:48:54,134000000
18/12/25 04:48:54,142000000
18/12/25 04:48:54,149000000
Using Delphi 10.2, SQLite and TeeChart. My SQLite database has a table with two fields, created with:
CREATE TABLE HistoryRuntime ('DayTime' DateTime, Device1 INTEGER DEFAULT (0));
I access the table using a TFDQuery called qryGrpahRuntime with the following SQL:
SELECT DayTime AS TheDate, Sum(Device1) As DeviceTotal
FROM HistoryRuntime
WHERE (DayTime >= "2017-06-01") and (DayTime <= "2017-06-26")
Group by Date(DayTime)
Using the Field Editor in the Delphi IDE, I can add two persistent fields, getting TheDate as a TDateTimeField and DeviceTotal as a TLargeIntField.
I run this query in a program to create a TeeChart, which I created at design time. As long as the query returns some records, all this works. However, if there are no records for the requested dates, I get an EDatabaseError exception with the message:
qryGrpahRuntime: Type mismatch for field 'DeviceTotal', expecting: LargeInt actual: Widestring
I have done plenty of searching on the web for ways to prevent this error on an empty query, but have had no luck with anything I found. From what I can tell, SQLite defaults to the wide string field when no data is returned. I have tried using CAST in the query and it did not seem to make any difference.
If I remove the persistent fields, the query will open without problems on an empty return set. However, in order to use the TeeChart editor in the IDE, it appears I need persistent fields.
Is there a way I can make this work with persistent fields, or am I going to have to throw out the persistent fields and then add the TeeChart Series at runtime?
This behavior is described in Adjusting FireDAC Mapping chapter of the FireDAC's SQLite manual:
For an expression in a SELECT list, SQLite avoids type name
information. When the result set is not empty, FireDAC uses the value
data types from the first record. When empty, FireDAC describes those
columns as dtWideString. To explicitly specify the column data type,
append ::<type name> to the column alias:
SELECT count(*) as "cnt::INT" FROM mytab
So modify your command e.g. this way (I used BIGINT, but you can use any pseudo data type that maps to a 64-bit signed integer data type and is not auto incrementing, which corresponds to your persistent TLargeIntField field):
SELECT
DayTime AS "TheDate",
Sum(Device1) AS "DeviceTotal::BIGINT"
FROM
HistoryRuntime
WHERE
DayTime BETWEEN {d 2017-06-01} AND {d 2017-06-26}
GROUP BY
Date(DayTime)
P.S. I did a small optimization by using the BETWEEN operator (which evaluates the column value only once), and used an escape sequence for the date constants (which, in reality, you would probably replace with parameters, so that part is just for curiosity).
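If you do swap the escape sequences for parameters, the command could look like this (the parameter names are just examples):

SELECT
  DayTime AS "TheDate",
  Sum(Device1) AS "DeviceTotal::BIGINT"
FROM
  HistoryRuntime
WHERE
  DayTime BETWEEN :date_from AND :date_to
GROUP BY
  Date(DayTime)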
This data type hinting is parsed by the FDSQLiteTypeName2ADDataType procedure, which takes and parses a column name in the format <column name>::<type name> in its AColName parameter.
I have created a table with SQLite for my Corona/Lua app. It's a hashtable with roughly 700,000 values. The table has two columns: the hashcode (a string) and the value (another string). During the program I need to fetch data several times by providing the hashcode.
I'm using something like this code to get the data:
for p in db:nrows([[SELECT * FROM test WHERE id=']].."hashcode"..[[';]]) do
print(p)
-- p = returned value --
end
This statement, though, is taking far too much time to execute.
thanks,
Edit:
Success!
The mistake was with the primary key. I set the hashcode as the primary key, as shown below, and the retrieval time went back to normal:
CREATE TABLE IF NOT EXISTS test (id STRING PRIMARY KEY , array);
I also prepared the statements in advance as you said:
stmt = db:prepare("SELECT * FROM test WHERE id = ?;")
[...]
stmt:bind(1,s)
for p in stmt:nrows() do
The only problem was that the DB file size, which was around 18 MB, went up to 29.5 MB.
You should create the table with id as a unique primary key; this will automatically make an index.
create table if not exists test
(
id text primary key,
val text
);
You should not construct statements using string concatenation; this is a security issue so avoid getting in this habit. Also, you should prepare statements in advance, at program initialization, and run the prepared statements.
Something like this... initially:
hashcode_query_stmt = db:prepare("SELECT * FROM test WHERE id = ?;")
then for each use:
hashcode_query_stmt:bind_values(hashcode)
for p in hashcode_query_stmt:urows() do ... end
Ensure that there is an index on the id/hashcode column. Without one, such queries will be slow, slow, slow. This index should probably be unique.
If only selecting the value/hashcode (SELECT value FROM ..), it may be beneficial to have a covering index over (id, value) as that can avoid additional seeking to the row data (see SQLite Query Planning). Try it with and without such a covering index.
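For example, using the column names from the schema above:

-- covering index: both the id lookup and the val fetch are served from the index alone
CREATE INDEX test_id_val ON test (id, val);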
Also, it may be worthwhile to employ caching if the same hashcodes are queried multiple times.
As already stated, make sure you have an index on id.
If you can't change the table schema now, you can add an index ad hoc:
CREATE INDEX test_id ON test (id);
About hashes: if you are computing hashes in your software to speed up searches, don't!
SQLite will treat your supplied hashes like any other string/blob. Also, RDBMSs are optimized for efficient searching, which can be greatly improved with indexes.
Unless you're hashing to save space, you are wasting processor time computing hashes in your application.
I would like to ask how you would improve the performance of the INSERT inside the cursor loop in this code.
I need to use dynamic PL/SQL to fetch the data, but I don't know the best way to improve the INSERT. A bulk insert, maybe?
Please let me know with code example if possible.
-- This is how I use cur_HANDLE:
cur_HANDLE integer;

cur_HANDLE := dbms_sql.open_cursor;
DBMS_SQL.PARSE(cur_HANDLE, W_STMT, DBMS_SQL.NATIVE);
DBMS_SQL.DESCRIBE_COLUMNS2(cur_HANDLE, W_NO_OF_COLS, W_DESC_TAB);

LOOP
  -- Fetch a row
  IF DBMS_SQL.FETCH_ROWS(cur_HANDLE) > 0 THEN
    DBMS_SQL.COLUMN_VALUE(cur_HANDLE, 9, cont_ID);
    DBMS_SQL.COLUMN_VALUE(cur_HANDLE, 3, proj_NR);
  ELSE
    EXIT;
  END IF;

  INSERT INTO w_Contracts VALUES (counter, cont_ID, proj_NR);
  counter := counter + 1;
END LOOP;
You should do database actions in sets whenever possible, rather than row-by-row inserts. You don't tell us what CUR_HANDLE is, so I can't really rewrite this, but you should probably do something like:
INSERT INTO w_contracts
SELECT ROWNUM, cont_id, proj_nr
FROM ( ... some table or joined tables or whatever... )
Though if your first value there is a primary key, it would probably be better to assign it from a sequence.
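For instance, assuming a sequence named contract_seq exists (the name here is made up):

INSERT INTO w_contracts
SELECT contract_seq.NEXTVAL, cont_id, proj_nr
FROM ( ... some table or joined tables or whatever... )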
Solution 1) You can populate a PL/SQL array inside the loop and then, just after the loop, insert the whole array in one step using:
FORALL i in contracts_tab.first .. contracts_tab.last
INSERT INTO w_contracts VALUES contracts_tab(i);
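A rough sketch of how that could look wrapped around your existing DBMS_SQL loop (the column types of w_contracts are assumed, since its definition was not shown):

DECLARE
  TYPE t_contract_rec IS RECORD (
    counter NUMBER,
    cont_id NUMBER,  -- type assumed
    proj_nr NUMBER   -- type assumed
  );
  TYPE t_contract_tab IS TABLE OF t_contract_rec INDEX BY PLS_INTEGER;
  contracts_tab t_contract_tab;
  counter       PLS_INTEGER := 1;
  cont_ID       NUMBER;
  proj_NR       NUMBER;
  cur_HANDLE    INTEGER;
BEGIN
  cur_HANDLE := DBMS_SQL.OPEN_CURSOR;
  -- ... your existing PARSE / DESCRIBE_COLUMNS2 / DEFINE_COLUMN calls go here ...
  LOOP
    EXIT WHEN DBMS_SQL.FETCH_ROWS(cur_HANDLE) = 0;
    DBMS_SQL.COLUMN_VALUE(cur_HANDLE, 9, cont_ID);
    DBMS_SQL.COLUMN_VALUE(cur_HANDLE, 3, proj_NR);
    -- collect the row instead of inserting it immediately
    contracts_tab(counter).counter := counter;
    contracts_tab(counter).cont_id := cont_ID;
    contracts_tab(counter).proj_nr := proj_NR;
    counter := counter + 1;
  END LOOP;
  DBMS_SQL.CLOSE_CURSOR(cur_HANDLE);

  -- one bulk insert instead of one INSERT per fetched row
  IF contracts_tab.COUNT > 0 THEN
    FORALL i IN contracts_tab.FIRST .. contracts_tab.LAST
      INSERT INTO w_contracts VALUES contracts_tab(i);
  END IF;
END;
/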
Solution 2) If v_stmt contains a valid SQL statement, you can insert the data directly into the table using
EXECUTE IMMEDIATE 'INSERT INTO w_contracts (counter, cont_id, proj_nr)
SELECT rownum, 9, 3 FROM ('||v_stmt||')';
"select statement is assembled from a website, ex if user choose to
include more detailed search then the select statement is changed and
the result looks different in the end. The whole application is a web
site build on dinamic plsql code."
This is a dangerous proposition, because it opens your database to SQL injection. This is the scenario in which Bad People subvert your parameters to expand the data they can retrieve or to escalate privileges. At the very least you need to be using DBMS_ASSERT to validate user input. Find out more.
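For illustration only (the parameter and variable names here are made up), DBMS_ASSERT can reject anything that is not a plain identifier before it ever reaches your dynamic SQL:

DECLARE
  p_col_name VARCHAR2(128) := 'PROJ_NR';  -- pretend this came from the web page
  l_col_name VARCHAR2(128);
  w_stmt     VARCHAR2(4000);
BEGIN
  -- raises ORA-44003 "invalid SQL name" if the input is not a simple identifier
  l_col_name := DBMS_ASSERT.SIMPLE_SQL_NAME(p_col_name);
  w_stmt     := 'SELECT ' || l_col_name || ' FROM some_table';
END;
/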
Of course, if you are allowing users to pass whole SQL strings (you haven't provided any information regarding the construction of W_STMT) then all bets are off. DBMS_ASSERT won't help you there.
Anyway, as you have failed to give the additional information we actually need, please let me spell it out for you:
will the SELECT statement always have the same column names from the same table name, or can the user change those two?
will you always be interested in the third and ninth columns?
how is the W_STMT string assembled? How much control do you have over its projection?