Can an optimistic lock give a deadlock?

Example:
I have a table A (id, version, and a field) and a table B (id, version, and a field).
I need to make a transaction that edits a record in A, and then a record in B.
begin transaction
update tableA set field = 'aaa', version = version + 1 where id = 1 and version = savedVersionA
-- if rows updated = 0 then rollback
update tableB set field = 'bbb', version = version + 1 where id = 1 and version = savedVersionB
-- if rows updated = 0 then rollback
commit
But if another thread needs to update the tables in the reverse order (in a complex environment there is the possibility that a developer doesn't follow the policies), or needs to update table A (not the same record as the first transaction), then table B (the same record as the first transaction), then table A (the same record as the first transaction), can a deadlock occur?
What is the right way to write a transaction under optimistic locking?
Could the solution be to use only stored procedures?
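
To make the concern concrete, here is a minimal sketch (using the tables from the example above) of an interleaving in which two sessions update the same records in opposite order; each update still takes an exclusive row lock until commit or rollback, version check or not:

-- session 1
begin transaction
update tableA set field = 'aaa', version = version + 1 where id = 1 and version = savedVersionA
-- session 2
begin transaction
update tableB set field = 'bbb', version = version + 1 where id = 1 and version = savedVersionB
-- session 1: blocks, waiting for session 2's row lock on tableB id 1
update tableB set field = 'ccc', version = version + 1 where id = 1 and version = savedVersionB
-- session 2: blocks, waiting for session 1's row lock on tableA id 1 -> deadlock
update tableA set field = 'ddd', version = version + 1 where id = 1 and version = savedVersionA

So yes, a deadlock can occur: the optimistic version column detects lost updates after the fact, but it does not change the row-level locking that the update statements themselves perform. The usual mitigations are the ones the question hints at: keep transactions short and always touch tables in a consistent order.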


How does MVCC work when the primary key is changed?

The MVCC documentation says that when a SELECT query finds records, it compares their transaction ids with its own id to judge whether the data is visible and whether history records should be reconstructed from the undo log. My question is: what if it cannot find the original records at all, and how does it maintain consistent reads?
Consider the following example:
create table tb_a (id bigint not null primary key auto_increment, name varchar(100) not null default '');
-- isolation level is RR
-- transaction 1
select * from tb_a where id = 1; -- returns (1, 'a')
-- transaction 2: another trx updates the row's primary key
update tb_a set id = 3 where id = 1;
commit;
-- transaction 1
select * from tb_a where id = 1; -- still returns (1, 'a')
The filter id = 1 cannot find the row, since the history records are in the undo log and updates in InnoDB happen in place. So how does InnoDB handle this kind of thing and still maintain consistency?
Changing the PK column is probably a "delete" of the old record and an "insert" of the new row. I think this implies that something is left in the table, but marked as deleted (until the cleanup after commits from both transactions).
Similarly for UNIQUE key changes. Other transactions need to be able to see the deleted row to check for duplicate keys.
Each version of each row (old/new) carries a transaction id. So...
Repeatable Read:
When a transaction starts, it is assigned a "transaction id", a monotonically increasing sequence number used to decide which row versions the transaction may see. Under transaction isolation = RR, queries can only "see" rows whose trx id is that old or older. This explains why your final SELECT sees what it does. And note that the query is actually (as far as I know) re-executed.
Your other transaction had a higher trx id. It created a 'newer' copy of the row, so there were at least two copies of that row floating around. The isolation mode, plus the trx id, controls which copy each transaction can "see".
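
One hedged way to observe the two copies, assuming MySQL/InnoDB semantics where a plain SELECT under RR reads from the transaction's snapshot while SELECT ... FOR UPDATE reads the latest committed version:

-- still inside transaction 1, after transaction 2 has committed
select * from tb_a where id = 1;            -- snapshot read: still returns (1, 'a')
select * from tb_a where id = 1 for update; -- locking read: finds no row with id = 1
select * from tb_a where id = 3 for update; -- locking read: returns the committed (3, 'a')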

Sqlite: Are updates to two tables within an insert trigger atomic?

I refactored a table that stored both metadata and data into two tables, one for metadata and one for data. This allows metadata to be queried efficiently.
I also created an updatable view with the original table's columns, using SQLite's INSTEAD OF insert, update, and delete triggers. This allows calling code that needs both data and metadata to remain unchanged.
The insert and update triggers write each incoming row as two rows - one in the metadata table and one in the data table, like this:
-- View
CREATE VIEW IF NOT EXISTS Item AS
SELECT n.Id, n.Title, n.Author, c.Content
FROM ItemMetadata n, ItemData c
WHERE n.Id = c.Id;
-- Trigger
CREATE TRIGGER IF NOT EXISTS item_update
INSTEAD OF UPDATE OF Id, Title, Author, Content ON Item
BEGIN
  UPDATE ItemMetadata SET Title = NEW.Title, Author = NEW.Author WHERE Id = OLD.Id;
  UPDATE ItemData SET Content = NEW.Content WHERE Id = OLD.Id;
END;
Questions:
Are the updates to the ItemMetadata and ItemData tables atomic? Is there a chance that a reader can see the result of the first update before the second update has completed?
Originally I had the WHERE clauses as WHERE rowid=old.rowid, but that seemed to cause random problems, so I changed them to WHERE Id=old.Id. The original version was based on tutorial code I found, but after thinking about it I wonder how SQLite even comes up with an old rowid; after all, this is a view across multiple tables. What rowid does SQLite pass to an update trigger, and was the WHERE clause as I first coded it problematic?
The documentation says:
No changes can be made to the database except within a transaction. Any command that changes the database (basically, any SQL command other than SELECT) will automatically start a transaction if one is not already in effect.
Commands in a trigger are considered part of the command that triggered the trigger.
So all commands in a trigger are part of a single transaction, and therefore atomic.
Views do not have a (usable) rowid, which is why the WHERE rowid=old.rowid version misbehaved.
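
For completeness, a sketch of what the matching INSTEAD OF INSERT trigger could look like, assuming ItemMetadata generates the Id and ItemData reuses it; last_insert_rowid() is SQLite's built-in:

CREATE TRIGGER IF NOT EXISTS item_insert
INSTEAD OF INSERT ON Item
BEGIN
  INSERT INTO ItemMetadata (Title, Author) VALUES (NEW.Title, NEW.Author);
  -- reuse the Id just generated for the metadata row
  INSERT INTO ItemData (Id, Content) VALUES (last_insert_rowid(), NEW.Content);
END;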

Does an update statement in Oracle hold a lock even if no rows were updated?

If I run an update statement in Oracle that reports '0 rows updated' because nothing matches the WHERE clause, and I do not commit, does it still hold a lock on any portion of the table? My guess is no, but I cannot prove it.
No row locks are held after an update that didn't update anything (after all, if no row matched, which one would be locked?).
Your transaction will still hold some share locks on the table, but those are only there to prevent other transactions from altering the table's structure. It's basically the same kind of "lock" a SELECT statement acquires on the table.
From the manual:
A row is locked only when modified by a writer.
And further down in the manual:
A row lock, also called a TX lock, is a lock on a single row of a table. A transaction acquires a row lock for each row modified.
So if no row is changed, there can't be a lock.
It does not hold any lock.
Simple test case:
Open two Oracle sessions (SQL*Plus, SQL Developer, or any other client).
In session 1, run the update with a WHERE clause that matches no rows.
In session 2, run the same update.
Commit from session 1 (if there were a table lock, this should hang).
Commit from session 2 (if there were a table lock, this statement would cause a deadlock).
The same holds for row locking (e.g. both sessions deleting the same row, if it exists).
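
A minimal sketch of that test; table1, col1, id, and the predicate are placeholders, chosen so the WHERE clause matches no rows:

-- session 1
update table1 set col1 = 'x' where id = -1;  -- "0 rows updated"
-- session 2: runs immediately, nothing blocks
update table1 set col1 = 'y' where id = -1;  -- "0 rows updated"
-- session 1
commit;
-- session 2
commit;  -- completes normally: neither session held a row lock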
As documented:
The locking characteristics of INSERT, UPDATE, DELETE, and SELECT ... FOR UPDATE statements are as follows:
The transaction that contains a DML statement acquires exclusive row locks on the rows modified by the statement. Other transactions cannot update or delete the locked rows until the locking transaction either commits or rolls back.
The transaction that contains a DML statement does not need to acquire row locks on any rows selected by a subquery or an implicit query, such as a query in a WHERE clause. A subquery or implicit query in a DML statement is guaranteed to be consistent as of the start of the query and does not see the effects of the DML statement it is part of.
A query in a transaction can see the changes made by previous DML statements in the same transaction, but cannot see the changes of other transactions begun after its own transaction.
In addition to the necessary exclusive row locks, a transaction that contains a DML statement acquires at least a row exclusive table lock on the table that contains the affected rows. If the containing transaction already holds a share, share row exclusive, or exclusive table lock for that table, the row exclusive table lock is not acquired. If the containing transaction already holds a row share table lock, Oracle Database automatically converts this lock to a row exclusive table lock.
The table lock is necessary to protect the table from changes while the update is in progress, and if no rows are modified by the update then this is the only lock applied.
If the statement carries out an update on a row that results in no change to that row (e.g. SET DATE_OF_BIRTH = NULL for a row where date_of_birth is already null), the row lock is still taken.
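
A short illustration of that last point (the people table and its columns are hypothetical):

-- date_of_birth is already null for id = 1
update people set date_of_birth = null where id = 1;
-- reports "1 row updated": the row matched the predicate, so its TX row lock
-- is taken even though the stored value did not change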

Not sure about the type of SQL Server lock to use for synchronization

I have an ASP.NET web application that populates the SQL Server 2008 database table like this:
INSERT INTO tblName1 (col1, col2, col3)
VALUES(1, 2, 3)
I also have a separate service application that processes the contents of that table (in the background) by first renaming that table and then creating an empty one in its place, like this:
SET XACT_ABORT ON
BEGIN TRANSACTION
--Rename table
EXEC sp_rename 'tblName1', 'temp_tblName1'
--Create new table
CREATE TABLE tblName1(
id INT NOT NULL IDENTITY(1,1) PRIMARY KEY,
col1 INT,
col2 INT,
col3 INT
)
COMMIT
SET XACT_ABORT OFF
--Begin working with the 'temp_tblName1' table
What I am not sure about is which SQL lock I need to use on the tblName1 table in this situation.
PS. To give you the frequency with which these two code samples run: the first may run several times a second (although usually less often), and the second twice a day.
As some of the comments have suggested, consider doing this differently. You may benefit from using the snapshot isolation level. Using snapshot isolation requires ALLOW_SNAPSHOT_ISOLATION to be set to ON on the database. This setting is off by default, so you'll want to check whether you can turn it on.
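
For example, to check and enable the setting (the database name MyDb is a placeholder):

-- check the current state
SELECT name, snapshot_isolation_state_desc
FROM sys.databases
WHERE name = 'MyDb';

-- enable snapshot isolation if it is off
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;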
Once you are able to use snapshot isolation, you would not need to change your INSERT statement, but your other process could change to something like:
SET XACT_ABORT ON
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRANSACTION
-- Do whatever this process does, but don't rename the table.
-- If you want to get rid of the old records:
DELETE [tblName1] WHERE 1 = 1
-- Then
COMMIT TRANSACTION
In case you really do need to create a new non-temporary table for some reason, you may need to do so before entering the transaction, as there are some limits on what you are allowed to do during snapshot isolation.

How to clean up the INVENTDIM table?

In AX 2009 there is a process to clean up unused inventory dimensions that do not belong to any transaction.
Is there such a process I can run in AX 4, where the INVENTDIM table now holds 20 million+ records?
If there's no such standard process, you can try the following (a hypothetical SQL sketch of steps 2 and 3 follows the list):
1. Write a job to identify all InventDimId (+ReqCovInventDimId, etc.) fields in all the tables.
2. Write a job or a SQL query to fill a temporary table with the InventDimId values from all these fields.
3. Write a job or a SQL query to remove all records from the InventDim table whose InventDimId is not in this temporary table.
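
A hypothetical T-SQL sketch of steps 2 and 3; only InventTrans is shown, and the real job must UNION in every InventDimId-bearing field found in step 1:

-- step 2: collect every dimension id that is still referenced
SELECT DISTINCT InventDimId
INTO #UsedDims
FROM InventTrans;
-- ...UNION in the other tables/fields identified in step 1...

-- step 3: remove the unreferenced dimension records
DELETE d
FROM InventDim d
WHERE NOT EXISTS (SELECT 1 FROM #UsedDims u WHERE u.InventDimId = d.InventDimId);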
There is no such standard process.
The courageous might do:
InventDim.skipDeleteActions(true); // skip delete actions defined on related tables
InventDim.skipDeleteMethod(true);  // skip the overridden delete() method so the delete stays set-based
delete_from InventDim
    notexists join InventTrans
    where InventTrans.inventDimId == InventDim.inventDimId;
This will delete any records not referenced by item transactions.
Unfortunately, there might exist other references.
You could try a downgrade of the AX 2009 process.
