I am interested in using transactions for performance.
This is the process I'm currently doing:
* BEGIN TRANSACTION
* Instantiate a command object and set the command text.
* Prepare the command text so parameters can be added.
* In a loop, I set the values of said parameters and execute the command.
* COMMIT
I am doing this because, as far as I understand it, preparing a statement already communicates with the SQLite engine, so maybe wrapping that step in the transaction helps in some way? (I'm just speculating.)
Question: Should I change it to the following process?
* Instantiate a command object and set the command text.
* Prepare the command text so parameters can be added.
* BEGIN TRANSACTION
* In a loop, I set the values of said parameters and execute the command.
* COMMIT
I am asking because making the transaction contain only a homogeneous series of commands seems like it might be better? (Again, I have no idea; I'm just speculating.)
There is no advantage to preparing statements inside the BEGIN (with a caveat below).
There is a small advantage to preparing the statements outside the BEGIN, since the transaction will be slightly shorter in time (with a second caveat below), thereby permitting more concurrency.
In either case, be sure to reset the statement before reusing it, and finalize the statement before closing the database.
Caveat 1: if the database schema changes, the statement will need to be re-prepared. If you use the recommended sqlite3_prepare_v2() then SQLite will do this for you. You can avoid schema changes by preparing inside a transaction, but note that you will need to use BEGIN IMMEDIATE to be sure the database is locked.
Caveat 2: Since you use BEGIN rather than BEGIN IMMEDIATE the database lock is not actually taken until the first statement of the transaction is stepped. So, there isn't really a concurrency advantage unless you are using BEGIN IMMEDIATE.
There are other advantages to preparing statements outside transactions, e.g., you can use them in multiple transactions without preparing them multiple times. However, the logic to maintain their lifetimes becomes more complex and dispersed.
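As a concrete illustration, here is a minimal sketch of the prepare-outside, loop-inside flow using the SQLite C API (error handling trimmed; the table t and its column x are made-up placeholders):

#include <sqlite3.h>

/* Sketch: prepare once outside the transaction, bind/step/reset in a
   loop inside it, finalize before closing. Table "t" is hypothetical. */
int insert_many(sqlite3 *db, const int *values, int n)
{
    sqlite3_stmt *stmt;
    /* sqlite3_prepare_v2 re-prepares automatically on schema change (caveat 1) */
    int rc = sqlite3_prepare_v2(db, "INSERT INTO t(x) VALUES (?)", -1, &stmt, 0);
    if (rc != SQLITE_OK) return rc;

    /* BEGIN IMMEDIATE takes the write lock up front (caveat 2) */
    sqlite3_exec(db, "BEGIN IMMEDIATE", 0, 0, 0);
    for (int i = 0; i < n; i++) {
        sqlite3_bind_int(stmt, 1, values[i]);
        sqlite3_step(stmt);
        sqlite3_reset(stmt);        /* reset before reusing */
    }
    sqlite3_exec(db, "COMMIT", 0, 0, 0);

    sqlite3_finalize(stmt);         /* finalize before closing the database */
    return SQLITE_OK;
}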
Related
When multiple statements are submitted together (separated by semicolons but in the same string) and are NOT wrapped in an explicit transaction, is only a single implicit transaction created, or is an implicit transaction created for each statement separately? Further, if one of the later statements fails and an automatic rollback is performed, are all of the statements rolled back?
This other answer almost satisfies my question, but the wording in the official documentation leaves me puzzled. In fact, this may seem like a duplicate, but I am specifically wondering about implicit transactions for multiple statements, and the other answer does not explicitly address that case.
As an example (borrowing from the other question), the following are submitted as a single string:
INSERT INTO a (x, y) VALUES (0, 0);
INSERT INTO b (x, y) VALUES (1, 2); -- error here: b doesn't have column x
The documentation says
Automatically started transactions are committed when the last query finishes. (emphasis added)
and
An implicit transaction (a transaction that is started automatically, not a transaction started by BEGIN) is committed automatically when the last active statement finishes. A statement finishes when its prepared statement is reset or finalized. (emphasis added)
The keyword last implies to me the possibility of multiple statements. Of course, if an implicit transaction is started for each individual statement, then taken individually each statement is the "last" statement to be executed; but if the context is one single statement at a time, the documentation should just say the statement.
Or is there a difference between prepared statements and unprepared SQL strings? (As I understand it, all statements are prepared even if the calling application doesn't preserve the prepared statement for reuse, so I'm not sure this even matters.)
In the case where all statements succeed, the result of a single commit or of multiple commits is essentially the same; but the docs only mention that the single failing statement is automatically rolled back, and say nothing about the other statements submitted together.
The sqlite3_prepare interface compiles the first SQL statement in a query string. The pzTail parameter to these functions returns a pointer to the beginning of the unused portion of the query string.
For example, if you call sqlite3_prepare with the multi-statement SQL string in your example, the first statement is the only one that is active for the resulting prepared statement. The pzTail pointer, if provided, points to the beginning of the second statement. The second statement is not compiled as a prepared statement until you call sqlite3_prepare again with the pzTail pointer.
So, no, multiple statements are not rolled back. Each implicit transaction created by the SQLite engine encompasses a single prepared statement.
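For illustration, here is a hedged C sketch of how a multi-statement string is consumed one prepared statement at a time via pzTail (essentially what sqlite3_exec does internally). Run outside an explicit BEGIN, each statement gets its own implicit transaction, so a failure in the second statement leaves the first committed:

#include <sqlite3.h>

/* Sketch: walk a multi-statement SQL string with pzTail. */
int exec_multi(sqlite3 *db, const char *sql)
{
    const char *tail = sql;
    int rc = SQLITE_OK;
    while (tail && *tail) {
        sqlite3_stmt *stmt = 0;
        rc = sqlite3_prepare_v2(db, tail, -1, &stmt, &tail);
        if (rc != SQLITE_OK) break;   /* compile error, e.g. bad column name */
        if (stmt == 0) continue;      /* whitespace or comment only */
        while (sqlite3_step(stmt) == SQLITE_ROW)
            ;                         /* step to completion */
        rc = sqlite3_finalize(stmt);  /* finishes this statement's implicit txn */
        if (rc != SQLITE_OK) break;
    }
    return rc;
}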
I’m trying to cause a ‘SELECT’ query to fail if the record it is trying to read is locked.
To simulate this I added an ON UPDATE trigger that sleeps for 20 seconds; then in one thread (a Java application) I update a record (oid=53), and in another thread I run the following query:
SET STATEMENT max_statement_time=1 FOR SELECT * FROM Jobs j WHERE j.oid = 53;
(Note: since my MariaDB server version is 10.2 I cannot use the SELECT ... NOWAIT option and must use SET STATEMENT max_statement_time=1 FOR ... instead.)
I would expect the SELECT to fail since the record is in the middle of an UPDATE and should be read/write locked, but the SELECT succeeds.
Only if I add FOR UPDATE to the SELECT query does the query fail. (But this is not a good option for me.)
I checked the INNODB_LOCKS table during this time and it was empty.
In the INNODB_TRX table I saw the transaction with isolation level REPEATABLE READ, but I don't know if that is relevant here.
Any thoughts on how I can make the SELECT fail without making it FOR UPDATE?
Normally, consistent (and dirty) reads are non-locking; they just read some sort of snapshot, depending on your transaction isolation level. If you want to make the read wait for a concurrent transaction to finish, you need to set the isolation level to SERIALIZABLE and turn off autocommit in the connection that performs the read. Something like
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET autocommit = 0;
SET STATEMENT max_statement_time=1 FOR ...
should do it.
Relevant page in MariaDB KB
Side note: my personal preference would be to use innodb_lock_wait_timeout=1 instead of max_statement_time=1. Both will make the statement fail, but innodb_lock_wait_timeout will cause an error code more suitable for the situation.
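Combined with the query from the question, that would look something like this (a sketch; it assumes your MariaDB version accepts innodb_lock_wait_timeout in SET STATEMENT):

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET autocommit = 0;
SET STATEMENT innodb_lock_wait_timeout=1 FOR
  SELECT * FROM Jobs j WHERE j.oid = 53;
-- expected to fail with error 1205 (lock wait timeout exceeded)
-- while the UPDATE's transaction still holds the row lock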
I have been asked to create a stored procedure which creates a temporary table and inserts some records.
I am preparing some sample code for this, shown below, but the output is not displayed.
create or replace procedure Test
is
  stmt  varchar2(1000);
  stmt2 varchar2(1000);
begin
  stmt := 'create global temporary table temp_1(id number(10))';
  execute immediate stmt;
  insert into temp_1(id) values (10);
  execute immediate 'Select * from temp_1';
  execute immediate 'Drop table temp_1';
  commit;
end;
When I execute the SP with Exec Test, the desired output is not displayed.
I am expecting the output of "Select * from temp_1" to be displayed, but it is not happening. Please suggest where I am going wrong.
But I am interested in knowing why execute immediate 'Select * from temp_1'; does not yield any result.
For two reasons. Firstly, because as @a_horse_with_no_name said, PL/SQL won't display the result of a query. But more importantly here, perhaps, the query is never actually executed. This behaviour is stated in the documentation:
If dynamic_sql_statement is a SELECT statement, and you omit both into_clause and bulk_collect_into_clause, then execute_immediate_statement never executes.
You would have to execute immediate into a variable, or more likely a collection if your real scenario has more than one row, and then process that data - iterating over the collection in the bulk case.
There is not really a reliable way to display anything from PL/SQL; you can use dbms_output but that's more suited for debugging than real output, and you usually have no guarantee that the client will be configured to show whatever you put into its buffer.
This is all rather academic since creating and dropping a GTT on the fly is not a good idea and there are better ways to accomplish whatever it is you're trying to do.
The block you showed shouldn't actually run at all: as you're creating temp_1 dynamically, the static SQL insert into temp_1 will error because that table does not yet exist when the block is compiled. The insert would have to be dynamic too. Any dynamic SQL is a bit of a warning sign that you're maybe doing something wrong, though it is sometimes necessary; having to do everything dynamically suggests the whole approach needs a rethink, as does creating objects at runtime.
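For illustration only, and keeping the advice above in mind, a version of the block that would at least compile and fetch the data might look like this (a sketch: the insert is made dynamic, the select is done into a collection, and dbms_output is used purely as debug output):

create or replace procedure Test
is
  type t_ids is table of number;
  l_ids t_ids;
begin
  execute immediate 'create global temporary table temp_1(id number(10))';
  execute immediate 'insert into temp_1(id) values (:1)' using 10;
  -- a dynamic SELECT only runs when you give it somewhere to put the rows
  execute immediate 'select id from temp_1' bulk collect into l_ids;
  for i in 1 .. l_ids.count loop
    dbms_output.put_line('id = ' || l_ids(i));  -- debug output only
  end loop;
  execute immediate 'drop table temp_1';
end;
/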
I got bit trying to maintain code packages that run on two different Oracle 11gR2 systems when a line of code that needed to be changed slipped by me. We develop on one system with a specific data set and then test on another system with a different data set.
The differences aren't tremendous, but they include needing to change a single field name in two different queries in two different packages to have the packages run. On one system we use one field; on the other system, a different one. The databases have the same schema name, object names, and field names, but the hosting systems' server names are different.
The change is literally as simple as
INSERT INTO PERSON_HISTORY
( RECORD_NUMBER,
UNIQUE_ID,
SERVICE_INDEX,
[... 140 more fields... ]
)
SELECT LOD.ID RECORD_NUMBER ,
-- for Mgt System, use MD5 instead of FAKE_SSN
-- Uncomment below, and comment out Dev system statement
-- MD5 UNIQUE_ID ,
-- for DEV system, use below
'00000000000000000000' || LOD.FAKE_SSN UNIQUE_ID ,
null SERVICE_INDEX ,
[... 140 more fields... ]
FROM LEGACY_DATE LOD
WHERE (conditions follow)
;
I missed one of the field name changes in one of the queries, and our multi-day run is crap.
For stupid reasons I won't go into, I wind up maintaining all of the code, including having to translate and reprocess developer changes manually between versions, then transfer and update the required changes between systems.
I'm trying to reduce the repetitive input I have to provide to swap out code -- I want to automate this step so I don't overlook it again.
I wanted to implement conditional compilation, pulling the name of the database system from Oracle and having the single line swap automatically -- but Oracle conditional compilation requires a package static constant (a boolean in this case). I can't use the sys_context function to populate the value; it doesn't seem to let ME pull data from sys_context, evaluate it conditionally, and assign that to a constant. Oracle isn't having any. DB_DOMAIN, DB_NAME, or SERVER_HOST might work to differentiate the systems, but I can't find a way to USE the information.
An option is to create a global constant that I set manually when I move the code to the other system, but at this point, I have so many steps to do for a transfer that I'm worried that I'd even screw that up. I would like to make this independent of other packages or my own processes.
Is there a good way to do this?
Edit:
I will try the procedure and try to figure out the view over the weekend. Ultimately, the project will be turned over to a customer who expects to "just run it", so they won't understand what any switches are meant to do, or why I have "special" code in a package. And, they won't need to... I don't even know if they'll look at the comments.
Thank you
As Mat says in the comments, for this specific example you can solve it with a view; however, there are other ways for more complex situations.
If you're compiling from a filesystem or using any automatic system you can create a separate PL/SQL block/procedure, which you execute in the same session prior to compilation. I'd do something like this:
declare
  l_db varchar2(30) := sys_context('userenv','instance_name');
begin
  if l_db = 'MY_DB' then
    execute immediate 'alter session set plsql_ccflags = ''my_db:true''';
  end if;
end;
/
One important point: conditional compilation does not involve a "package static constant", but a session one. So, you need to ensure that your compilation flags are identical/unique across packages/sessions.
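The packages can then branch on that flag with the $IF directive. A sketch (the person_load package name is invented; table and column names come from the question's query; wrapping whole statements keeps the directive out of the SQL text, at the cost of repeating the shared columns):

create or replace package person_load as  -- hypothetical package
  procedure load_history;
end person_load;
/
create or replace package body person_load as
  procedure load_history is
  begin
    $IF $$my_db $THEN
      -- Mgt system variant (flag set by the block above)
      insert into person_history (record_number, unique_id /* , ... */)
      select lod.id, lod.md5
      from legacy_date lod;
    $ELSE
      -- Dev system variant (flag unset or false)
      insert into person_history (record_number, unique_id /* , ... */)
      select lod.id, '00000000000000000000' || lod.fake_ssn
      from legacy_date lod;
    $END
  end load_history;
end person_load;
/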
SQLite allows you to define custom functions that can be called from SQL statements. I use this to get notified of trigger activity in my application:
CREATE TRIGGER AfterInsert AFTER INSERT ON T
BEGIN
-- main trigger code involves some extra db modifications
-- ...
-- invoke application callback
SELECT ChangeNotify('T', 'INSERT', NEW.id);
END;
However, user-defined functions are added only to the current database connection. There may be other clients who haven't defined ChangeNotify, and I don't want them to get a "no such function" error.
Is it possible to call a function only if it's defined? Any alternative solution is also appreciated.
SQLite is designed as an embedded database, so it is assumed that your application controls what is done with the database.
SQLite has no SQL function to check for user-defined functions.
If you want to detect changes made only from your own program, use sqlite3_update_hook.
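A hedged sketch of registering that hook (the callback name and the printed output are illustrative):

#include <sqlite3.h>
#include <stdio.h>

/* Sketch: called for every INSERT/UPDATE/DELETE made through this connection. */
static void on_change(void *ctx, int op, const char *db_name,
                      const char *table, sqlite3_int64 rowid)
{
    (void)ctx;  /* unused user data */
    printf("%s on %s.%s rowid=%lld\n",
           op == SQLITE_INSERT ? "INSERT" :
           op == SQLITE_UPDATE ? "UPDATE" : "DELETE",
           db_name, table, (long long)rowid);
}

void register_change_hook(sqlite3 *db)
{
    sqlite3_update_hook(db, on_change, 0);
}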
Prior to calling your user-defined function, you can check whether the function exists by selecting from pragma_function_list:
select exists(select 1 from pragma_function_list where name='ChangeNotify');
1
It would be possible by combining a query against pragma_function_list with a WHEN clause on the trigger --
CREATE TRIGGER AfterInsert AFTER INSERT ON T
WHEN EXISTS (SELECT 1 FROM pragma_function_list WHERE name = 'ChangeNotify')
BEGIN
SELECT ChangeNotify('T', 'INSERT', NEW.id);
END;
except that query preparation attempts to resolve functions prior to execution. So, as far as I know, this isn't possible to do in a trigger.
I need to do the same thing and asked here: https://sqlite.org/forum/forumpost/a997b1a01d
Hopefully they come back with a solution.
Update
The SQLite forum's suggestion is to use CREATE TEMP TRIGGER when your extension loads -- https://sqlite.org/forum/forumpost/96160a6536e33f71
This is actually a great solution, as temp triggers are:
* not visible to other connections
* cleaned up when the connection that created them ends
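Applied to the example above, the connection that registers ChangeNotify would create the trigger like this (same trigger body as before, just temporary):

-- Created only by the connection that defined ChangeNotify;
-- other connections never see it, so they can't hit "no such function".
CREATE TEMP TRIGGER AfterInsert AFTER INSERT ON T
BEGIN
  SELECT ChangeNotify('T', 'INSERT', NEW.id);
END;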