How do I avoid these deadlocks? - asp.net

I have a routine that updates my business entity. The update involves about six different tables, and all the commands are executed within a single transaction.
Recently I needed to add some code to the routine that accesses a lookup table in the database. The lookup code already existed in another business object, so I reused that business object. For example:
Using tr As DbTransaction = myConnection.BeginTransaction()
    ExecuteCommand1(tr)
    ExecuteCommand2(tr)

    ' This lookup goes through another business object and runs outside tr.
    If myLookupTable.GetLookupTable().FindById(id).HasFlagSet Then
        ExecuteCommand3(tr)
    End If

    tr.Commit()
End Using
However, the lookup table business object hangs/deadlocks. I think this is because it doesn't have a reference to the transaction being used by the original routine.
After doing some research, I attempted to put the lookup table logic in its own transaction, setting the IsolationLevel to ReadUncommitted. This gave me the results I desired. However, after further research, I'm now second-guessing if I've implemented this correctly.
Assuming a reference to the active transaction is unavailable to my lookup table object, is what I've described considered best practice? I feel like I might be missing something.

If you're doing a read in the middle of your transaction, then you should do it under the transaction context, not with a separate transaction and dirty reads. Luckily there is an easy solution: instead of using the ADO.NET transaction objects, use the .NET TransactionScope object. ADO.NET is aware of it and will enlist all your operations in that transaction, including the reads done by your other business component. Just make sure your business object does not open a different connection; doing so will attempt to escalate the existing transaction to a distributed transaction in order to enlist the new connection in it.
The alternative is to pass your SqlConnection/SqlTransaction pair down on every call, but that propagates horribly through your code.

If it were me I would rewrite the logic so I do not have to do an uncommitted read.

The golden rule to avoid deadlocks is to always take table locks in the same order in every transaction. So look at the code in the other transactions to see what order they take table locks; then make sure you use the same order in your transaction.
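To make the ordering rule concrete, here is a minimal Kotlin/JDBC sketch (the table and column names are hypothetical): every routine that touches both tables updates OrderHeader before OrderDetail, so no two transactions can each hold a lock the other is waiting for.

import java.sql.DriverManager

fun closeOrder(orderId: Int) {
    DriverManager.getConnection("jdbc:sqlserver://localhost;databaseName=Shop").use { conn ->
        conn.autoCommit = false
        try {
            conn.prepareStatement("UPDATE OrderHeader SET Status = 'closed' WHERE Id = ?").use { st ->
                st.setInt(1, orderId)
                st.executeUpdate()      // first lock: OrderHeader
            }
            conn.prepareStatement("UPDATE OrderDetail SET Archived = 1 WHERE OrderId = ?").use { st ->
                st.setInt(1, orderId)
                st.executeUpdate()      // second lock: OrderDetail
            }
            conn.commit()
        } catch (e: Exception) {
            conn.rollback()
            throw e
        }
    }
}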

Apparently your lookup is attempting to access a row (or rows) that is exclusively locked by transaction tr. If you use a READ UNCOMMITTED transaction, or use WITH (NOLOCK) in your lookup query, you will see uncommitted changes from any transactions that may be in flight and affecting your lookup logic, so I am not so sure how desirable this would be.
I think it is best to find a way to ensure that your lookup query participates in the current transaction, given that you need to do the lookup during that transaction. If all of these operations are executed on the same thread, one thing you can do is store the transaction object in thread-local storage when you create it, and have the GetLookupTable method check thread-local storage for a transaction object; if one is set, it can get its connection from that transaction object, and otherwise it creates a new connection. This way your lookup becomes part of the transaction and runs its logic without being blocked by the current transaction (and without, in turn, blocking that transaction and causing a deadlock).
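A minimal Kotlin sketch of that thread-local pattern (all names are made up; in .NET an ambient transaction can similarly be carried in a [ThreadStatic] field or via System.Transactions.Transaction.Current):

import java.sql.Connection

object AmbientTx {
    private val current = ThreadLocal<Connection?>()

    // The business routine opens its transaction through this wrapper...
    fun <T> run(open: () -> Connection, body: () -> T): T {
        val conn = open()
        conn.autoCommit = false
        current.set(conn)
        try {
            val result = body()
            conn.commit()
            return result
        } catch (e: Exception) {
            conn.rollback()
            throw e
        } finally {
            current.remove()
            conn.close()
        }
    }

    // ...and GetLookupTable-style code asks here before opening its own
    // connection, so its query joins the caller's transaction.
    fun currentConnection(): Connection? = current.get()
}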

Related

What's the best way to generate ledger change Events that include the Transaction Command?

The goal is to generate events on every participating node when a state is changed that includes the business action that caused the change. In our case, Business Action maps to the Transaction command and provides the business intent or what the user is doing in business terms. So in our case, where we are modelling the lifecycle of a loan, an action might be to "Close" the loan.
We model Event at a state level as follows: Each Event encapsulates a Transaction Command and is uniquely identified by a (TxnHash, OutputIndex) and a created/consumed status.
We would prefer a polling mechanism to generate events on demand, but an async approach to generate events on ledger changes would be acceptable. Either way, our challenge is in getting the Command from the Transaction.
We considered querying the states using the Vault Query API vaultQueryBy() for the polling solution (or vaultTrackBy() for the async Observable-stream solution). We were able to create a flow that gets the transaction for a state. This had to be done in a flow, as Corda deprecated the function that would have allowed us to do it in our Spring Boot client. In the client we use vaultQueryBy() to get a list of states. Then we call a flow that iterates over the states, gets the txHash from each StateRef, and then calls serviceHub.validatedTransactions.getTransaction(txHash) to get the SignedTransaction, from which we can ultimately retrieve the Command. Is this the best or recommended approach?
Alternatively, we have also thought of generating events from the Transaction by querying for transactions and then building the Event for each input and output state in the transaction. If we go this route, what's the best way to query transactions from the vault? Is there an Observable stream-based option?
I assume this mapping of states to commands is a common requirement for observers of the ledger, because it is standard to drive contract logic off the transaction command and quite natural to have the command map to the user's intent.
What is the best way to generate events that encapsulate the transaction command for each state created or consumed on the ledger?
If I understand correctly, you're attempting to get notified when certain types of ledger updates occur (open, approved, closed, etc.).
First: Asynchronous notifications are best practice in Corda; polling should be avoided because of the load that constant querying puts on the node and the delays it introduces. Corda provides several mechanisms for Observables which you can use: https://docs.corda.net/api/kotlin/corda/net.corda.core.messaging/-corda-r-p-c-ops/vault-track-by.html
Second: Avoid querying transactions from the database as these are intended to be internal to the node. See this answer for background on why to avoid transaction querying. In general only tables that begin with "VAULT_*" are intended to be queried.
One way to solve your use case would be a "status" field that reflects the command used to produce the current state. For example: if a "Close" command was used to produce the state, its status field could be "closed". This way you could use the above vaultTrackBy to look at each state's status field and infer the action that occurred.
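A sketch of what the observer side could look like, assuming a hypothetical LoanState (a LinearState) that carries the suggested status field; proxy is a connected CordaRPCOps:

import net.corda.core.messaging.CordaRPCOps
import net.corda.core.messaging.vaultTrackBy

fun watchLoans(proxy: CordaRPCOps) {
    val feed = proxy.vaultTrackBy<LoanState>()   // snapshot + Observable of vault updates
    feed.updates.subscribe { update ->
        update.produced.forEach { stateAndRef ->
            val loan = stateAndRef.state.data
            // A state whose status is "closed" was produced by a Close command.
            println("Loan ${loan.linearId} -> '${loan.status}'")
        }
    }
}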
Just to finish up on my comment: while the approach met the requirements, the problem with this solution is that we have to add and maintain our own code across all relevant states to capture transaction-level information that is already tracked by the platform. I would think a better solution would be for the platform to provide consumers access to transaction-level information (selectively, perhaps), just as it does for states. After all, the transaction is, in part, a business/functional construct that is meaningful at the client application level. For example, if I am "transferring" a loan, that may be a complex business transaction involving many input and output states, and it may be an important construct/notion for the client application to manage.

firebase database equivalent of MySQL transaction

I'm looking for a way to thread a single object through multiple updates to multiple firebase.database.References, and then commit everything at the end; if the commit is unsuccessful, no changes should be made to any of my Firebase references.
Does this exist? I thought firebase.database.Transaction would be similar, since it is an atomic update and its callback reports whether the change was committed, but its update function, I believe, operates on only a single location, and it doesn't seem to return a transaction ID or anything I could pass to other firebase.database.Transactions.
UPDATE
This transaction's update seems to return a Transaction, which would lend itself to chaining: https://firebase.google.com/docs/reference/js/firebase.firestore.Transaction
However, this is a different class from the other Transaction:
Firebase Database transactions perform an update to a single location based on the current value of that same location. They explicitly do not work across multiple locations, since that would limit their scalability. Sometimes developers work around this by performing a transaction higher up in their JSON tree (at the first common point of the locations). I'd recommend against that, as that would limit the scalability even further.
The only way to efficiently update multiple locations with one API call is a multi-location update. This does, however, not have reading of the current value built in.
So if you want to update multiple locations based on their current value, you'll have to perform the read operation in your application code, turn the result into a multi-location update, and then use security rules to ensure all of those updates follow your application's rules. This is a quite non-trivial approach, so I rarely see it done in practice. See my answer here for an example: Is the way the Firebase database quickstart handles counts secure?
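For illustration, a sketch of that read-then-fan-out pattern using the Admin SDK in Kotlin (the paths and values are invented); the map is applied atomically, so either every location is written or none are:

import com.google.firebase.database.FirebaseDatabase

fun transfer() {
    val root = FirebaseDatabase.getInstance().reference
    // 1. Read the current values in application code (omitted here), then...
    val updates = mapOf<String, Any>(
        "accounts/alice/balance" to 90,
        "accounts/bob/balance" to 110
    )
    // 2. ...commit every location in a single multi-location update.
    root.updateChildrenAsync(updates)
}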

Firebase web - transaction on query

Can I run a transaction on a query referring to multiple locations?
In the docs I see that, for example, startAt returns a firebase.database.Query, which has a ref property of type firebase.database.Reference, which in turn has the transaction method.
So can I do:
ref.startAt(ver).ref.transaction(transactionUpdate).then(... ?
Would the transaction then operate on multiple locations and update them correctly?
What I'm trying to do is to get all locations since a particular version (key) and then mark them as 'read' so that a writing client will not update them. For that I need a transaction rather than a simple update.
Thx!
The answer is "no" to all questions.
The ref property of a Query gives you the reference of the node on which you set up the query, no matter what query options you added afterwards. In other words, ref.startAt(x).ref is equivalent to ref.
Manipulating a reference (navigating to children, adding query options, etc.) is completely independent of any query results. It's just local, trivial path manipulation, very similar to formatting a URL.
Transactions can only operate on a single node, by definition, using that node's value snapshots for incremental updates. They cannot "operate on multiple locations and update them correctly". These are not SQL transactions; the only thing they have in common is the name, which can unfortunately be confusing.
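To make the single-node behaviour concrete, here is a sketch using the Admin SDK in Kotlin (the web SDK's transaction() behaves the same way; the path is invented): the handler receives the current value of one node and returns its replacement, and it is retried if another client changes that node in the meantime.

import com.google.firebase.database.DataSnapshot
import com.google.firebase.database.DatabaseError
import com.google.firebase.database.FirebaseDatabase
import com.google.firebase.database.MutableData
import com.google.firebase.database.Transaction

fun bumpCounter() {
    val counter = FirebaseDatabase.getInstance().getReference("counters/reads")
    counter.runTransaction(object : Transaction.Handler {
        override fun doTransaction(currentData: MutableData): Transaction.Result {
            // Read the single node's current value and write its replacement.
            currentData.value = (currentData.getValue(Long::class.java) ?: 0L) + 1
            return Transaction.success(currentData)
        }
        override fun onComplete(error: DatabaseError?, committed: Boolean, snapshot: DataSnapshot?) {
            println("committed = $committed")
        }
    })
}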
The starting node doesn't have to be a leaf node. But if you start a transaction on a "parent" node, the client will have to download every child to create a whole snapshot, potentially multiple times if any of them is modified by another client.
This is most certainly a very slow, fragile and expensive operation, both for the user and you, the owner of the database. In general, it's not recommended to run transactions if the node might grow unbounded.
I suggest revising the presented strategy. Updating "all children" just to store a "read" marker simply does not scale.
You could for example store the last read ID of the client in a single node, and write security rules to enforce that no data with an ID less than this may be modified.
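A sketch of that pointer idea in Kotlin with the Admin SDK (all names are invented):

import com.google.firebase.database.FirebaseDatabase

fun markReadUpTo(lastKey: String) {
    val root = FirebaseDatabase.getInstance().reference
    // One small write instead of touching every child we have read:
    root.child("readers/clientA/lastReadKey").setValueAsync(lastKey)
}
// Security rules on the data path (e.g. under a $itemKey wildcard) can then
// compare $itemKey against root.child('readers/clientA/lastReadKey').val()
// and reject writes to items the reader has already consumed.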

Optimizing select with transaction under SQLite 3

I read that wrapping a lot of SELECTs in BEGIN TRANSACTION/COMMIT was an interesting optimization.
But are these commands really necessary if I use "PRAGMA journal_mode = OFF" beforehand? (Which, if I remember correctly, disables the log and, obviously, the transaction system too.)
Note that I don't agree with BigMacAttack.
For SQLITE, wrapping SELECTs in a Transaction does do something:
It reduces the number of SHARED locks that are obtained and then dropped.
Reference:
http://www.mail-archive.com/sqlite-users%40sqlite.org/msg79839.html
So I think the transaction would also be beneficial even if you had journal_mode turned off, because there is still the locking overhead to consider.
Maybe read_uncommitted would be something you could consider; I would guess that it would disable the SHARED locking.
"Use transactions – even if you’re just reading the data. This may yield a few milliseconds."
I'm not sure where the Katashrophos.net blog is getting this information, but wrapping SELECT statements in transactions does nothing. Transactions are always and only used when making changes to the database, and transactions cannot be disabled; they are a requirement. What many don't understand is that unless you manually BEGIN and COMMIT a transaction, each statement is automatically put in its own unique transaction.
Be sure to read the de facto SO question on improving sqlite performance. What the author of the blog might have been trying to say is that if you plan to do an INSERT, then a SELECT, then another INSERT, it would increase performance to manually wrap these statements in a single transaction. Otherwise sqlite automatically puts the two INSERT statements in separate unique transactions.
According to the "SQL as Understood by SQLite" documentation concerning transactions:
"No changes can be made to the database except within a transaction. Any command that changes the database (basically, any SQL command other than SELECT) will automatically start a transaction if one is not already in effect."
Lastly, disabling journaling via PRAGMA journal_mode = OFF does not disable transactions, only logging. But disabling the log is a good way to increase performance as well. Normally after each transaction, sqlite will document the transaction in the journal. When it doesn't have to do this, you get a performance boost.
UPDATE:
So it has been brought to my attention by "elegant dice" that the SQLite documentation statement I quote above is misleading. SELECT statements do in fact use the transaction system, which is used to acquire and release a SHARED lock on the database. As a result, it is indeed more efficient to wrap multiple SELECT statements in a single transaction: the lock is acquired and released only once, rather than for each individual SELECT, and all the SELECT statements are assured of seeing the same version of the database in case something is added or deleted by another program.
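For illustration, a Kotlin sketch using the sqlite-jdbc driver (the table names are invented): both reads run inside one transaction, so the SHARED lock is acquired and released once and both queries see the same snapshot.

import java.sql.DriverManager

fun countThings() {
    DriverManager.getConnection("jdbc:sqlite:app.db").use { conn ->
        conn.autoCommit = false          // BEGIN: take the shared lock once
        conn.createStatement().use { st ->
            st.executeQuery("SELECT COUNT(*) FROM orders").use { rs ->
                rs.next(); println("orders: ${rs.getInt(1)}")
            }
            st.executeQuery("SELECT COUNT(*) FROM customers").use { rs ->
                rs.next(); println("customers: ${rs.getInt(1)}")
            }
        }
        conn.commit()                    // COMMIT: release it once
    }
}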

How to lock a record when two members are trying to access it?

I have a scenario like this:
My environment is .NET 2.0, VS 2008, web application.
I need to lock a record when two members are trying to access it at the same time.
We can do it in two ways:
By front end: putting the session ID and the record's unique number in a dictionary kept as a static or application variable. We release the lock when the response for that page goes out, when the client disconnects, after the post button is clicked, or when the session expires.
By back end: record locking in the DB itself (this needs study; my team member is looking into it).
Are there any other ways to do this, and do I need to look at anything else at each step?
Am I missing any conditions?
You do not lock records for clients, because locking a record for anything more than a few milliseconds is just about the most damaging thing you can do in a database. Instead, use optimistic concurrency: you detect whether the record was changed since the last read and, if so, re-attempt the transaction (e.g., you re-display the screen to the user). How that is actually implemented depends on what DB technology you use (ADO.NET, DataSets, LINQ, EF, etc.).
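A minimal sketch of the version-column flavor of optimistic concurrency, in Kotlin/JDBC with an invented schema: the UPDATE succeeds only if nobody changed the row since we read it.

import java.sql.Connection

fun trySave(conn: Connection, id: Int, newName: String, versionRead: Int): Boolean {
    conn.prepareStatement(
        "UPDATE Customer SET Name = ?, Version = Version + 1 WHERE Id = ? AND Version = ?"
    ).use { st ->
        st.setString(1, newName)
        st.setInt(2, id)
        st.setInt(3, versionRead)
        // Zero rows updated means someone else won the race: re-read and retry.
        return st.executeUpdate() == 1
    }
}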
If the business domain requires lock-like behavior, it is always implemented as reservation logic in the database: when a record is displayed, it is 'reserved' so that no other user can attempt the same transaction. The reservation completes, times out, or is canceled. But a 'reservation' is never done with locks; it is always an explicit update of state from 'available' to 'reserved', or something similar.
This pattern is also described in P of EAA: Optimistic Offline Lock.
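Returning to the reservation idea above, a minimal Kotlin/JDBC sketch (the schema is invented): the 'lock' is ordinary data rather than a database lock, so nothing is held while the user thinks.

import java.sql.Connection

fun tryReserve(conn: Connection, recordId: Int, userId: Int): Boolean {
    conn.prepareStatement(
        "UPDATE Record SET Status = 'reserved', ReservedBy = ? " +
        "WHERE Id = ? AND Status = 'available'"
    ).use { st ->
        st.setInt(1, userId)
        st.setInt(2, recordId)
        // Exactly one caller can flip the row from available to reserved.
        return st.executeUpdate() == 1
    }
}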
If you're talking about only reading data from a record in a SQL Server database, you don't need to do anything! SQL Server will take care of managing concurrent access to records. But if you want to manipulate data, you have to use transactions.
I agree with Ramus. But if you still need it, create a column named something like IsInUse, of bit type, and set it to true while someone is accessing the record. Since other users will need the same data at the same time, you need to protect your app from crashing, so at every place where the data is retrieved you have to check whether IsInUse is false.
