BizTalk WCF-SQL composite operation for calling stored procedure updates: is the sequence honored?

We have a composite operation that invokes stored procedures to update a few tables, but we are now running into some issues, potentially due to the sequence in which the updates are fired. I'm trying to understand how a composite operation works for the WCF-SQL adapter. I know it uses one transaction context to execute the stored procedures, but does it honor the order of the rows when executing them (e.g. run the 1st row, then the 2nd, then the 3rd)? The environment is BizTalk 2013 R2.

Yes, the operations in a Composite Operation are executed in order and I've never had cause to doubt this.
Are you having a specific problem?

Related

Controlling read locks on table for multithreaded plsql execution

I have a driver table with a flag that determines whether each record has been processed. A stored procedure reads the table, picks a record up using a cursor, does some work (inserts into another table) and then updates the flag on the record to say it has been processed. I'd like to be able to execute the SP multiple times concurrently to increase throughput.
The obvious answer seemed to be to use FOR UPDATE SKIP LOCKED in the cursor's SELECT, but it seems this means I cannot commit within the loop (to update the processed flag and commit my inserts) without getting the "fetch out of sequence" error.
Googling tells me Oracle's AQ is the answer but for the time being this option is not available to me.
Other suggestions? This must be a pretty common request but I've been unable to find anything that useful.
TIA!

BizTalk SQL Adapter composite operation, chaining stored procedure calls

We need to call three stored procedures on the same database, and are thinking of using a composite operation to wrap them in the same call and the same transaction.
The question is: we need the result of the first stored procedure to be used as the input for the 2nd and 3rd procedures. Is this doable?
Thanks
No, unfortunately not. The map runs first and creates the complete XML message that the SQL adapter then executes, so the result of one operation cannot feed the next.
You could look at making a two-way send port that only runs the first stored procedure; and another send port that subscribes to the response of the first send port and runs the second and third procedures.
This is not possible I'm afraid.
The input of your composite operation is an XML instance, where every input parameter is supplied beforehand.
If it's really necessary to execute these particular stored procedures, you can try wrapping them into one custom stored procedure, where you are free to do what you want.
One could also try merging the logic from these 3 stored procedures into one new one. Think about scalar functions, table types, table-valued functions and so on; SQL Server has quite the arsenal to let you do what you want.
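A minimal sketch of that wrapper approach, using entirely hypothetical procedure names (usp_GetOrderHeader, usp_UpdateInventory, usp_WriteAudit), where the first call returns an ID that the next two consume:

CREATE PROCEDURE dbo.usp_ProcessOrderWrapper
    @OrderNumber NVARCHAR(50)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    -- 1st procedure returns the key the next two calls need (all names are placeholders)
    DECLARE @OrderId INT;
    EXEC dbo.usp_GetOrderHeader @OrderNumber = @OrderNumber, @OrderId = @OrderId OUTPUT;

    -- 2nd and 3rd procedures consume the result of the 1st
    EXEC dbo.usp_UpdateInventory @OrderId = @OrderId;
    EXEC dbo.usp_WriteAudit @OrderId = @OrderId;

    COMMIT TRANSACTION;
END

BizTalk then only needs a single, non-composite operation against the wrapper procedure.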
Yes, you absolutely can do that, but you would not use a Composite Operation.
You would use an Orchestration that performs the calls in sequence, using the Response of one to create the Request of the next using a Map.
This is actually a very common pattern.

What happens when the 5 second execution time limit is exceeded in Azure DocumentDB stored procedures

I have a read operation that reads a lot of records from a DocumentDB collection, and when executed it runs for a long time. I am writing a stored procedure to move that query to the server side. I understand that DocumentDB stored procedures have an execution cap of 5 seconds. What I want to know is: in a read operation, what happens when the query execution hits that time limit? Can I add some kind of retry logic to continue after some time, or will I have to do the read from the beginning?
This is not a problem if you follow this simple pattern when writing your stored procedures and keep calling the stored procedure until the continuation comes back null.
The key help here is that you are given some buffer beyond the 5 seconds to wrap up your stored procedure before it's forcibly shut down. Whenever the sproc is about to be shut down, the most recent database operation will return false instead of true. DocumentDB gives you enough time to process the last batch returned.
For read/query operations (for example countDocuments), the key element of the recommended pattern is to store the continuation token for your read/query operation in the body that's returned from your stored procedure. You can set the body as many times as you want; only the last one will be returned when the stored procedure either exits gracefully because resource limits are reached or finishes its job.
For write operations (example createVariedDocuments), documentdb-utils still looks at the continuation that's returned to decide if the sproc has finished its work except in this case, it won't be a read/query continuation and its value doesn't matter. It's simply an indicator for whether or not you need to call the sproc again. That's why I set it to "Value does not matter" in my example. Anything other than null would work.
Key off of the continuation that's returned from the stored procedure execution to decide whether or not to call it again. Documentdb-utils will automatically keep calling your stored procedure until continuation comes back null but you can implement this yourself. Documentdb-utils also includes a number of example sprocs that implement this pattern for you to riff off of. Documentdb-lumenize utilizes this pattern to the nth degree to implement an aggregation engine running inside of a sproc.
Disclosure: I'm the author of documentdb-utils and documentdb-lumenize.

SQL Server database hangs on trigger execution

We have implemented 6-7 triggers on a table, 4 of which are update triggers. All 4 of these triggers require long processing because of data manipulation and conditions. Whenever a trigger executes, all the pages on the website stop responding and hang for every other user, even from different systems. Even when we execute an update statement on the table holding the triggers from SQL Server Management Studio, it also hangs. Can we resolve this hanging issue by shifting the trigger code into stored procedures and calling those stored procedures after the table's update statement?
I think the triggers block table access for other users at the time of execution. If not, can anyone provide a solution?
Triggers are dangerous - they get fired whenever things happen, and you have no control over when and how often they fire.
You should definitely NOT do any time-consuming processing in a trigger! A trigger should be super fast, and lean.
If you need processing - let the trigger record the info needed into a separate "command" table, and have another process (e.g. a scheduled SQL Agent job) that checks that table for commands to be executed, and then executes those commands - separately, independently of the main application, in a separate execution path.
Don't block your main app by doing excessive data processing / manipulation in a trigger! That's the wrong place to do this!
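A minimal sketch of that pattern, with hypothetical table and column names; the trigger stays lean and only records what changed, while a scheduled SQL Agent job calls the processing procedure later:

-- Hypothetical queue table the trigger writes to
CREATE TABLE dbo.OrderCommandQueue
(
    QueueId     INT IDENTITY(1,1) PRIMARY KEY,
    OrderId     INT         NOT NULL,
    CommandType VARCHAR(20) NOT NULL,
    QueuedAt    DATETIME2   NOT NULL DEFAULT SYSUTCDATETIME(),
    ProcessedAt DATETIME2   NULL
);
GO

-- The trigger only inserts one queue row per updated record, nothing more
CREATE TRIGGER dbo.trg_Order_Update
ON dbo.[Order]
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.OrderCommandQueue (OrderId, CommandType)
    SELECT i.OrderId, 'RECALC'
    FROM inserted AS i;
END
GO

-- Called from a scheduled SQL Agent job, outside the main application's execution path
CREATE PROCEDURE dbo.usp_ProcessOrderCommands
AS
BEGIN
    SET NOCOUNT ON;
    -- ... heavy data manipulation for the unprocessed queue rows goes here ...
    UPDATE dbo.OrderCommandQueue
    SET ProcessedAt = SYSUTCDATETIME()
    WHERE ProcessedAt IS NULL;
END
GO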
Can we resolve this hanging issue by shifting the trigger code into a stored procedure and calling that stored procedure after the table's update statement?
You have a box that weighs a ton. Does it get lighter when you put it into some nice packaging?
A trigger is already compiled. Putting it into a stored procedure is just dressing it up differently.
Your problem is that you abuse triggers to do heavy processing - something they should not do by design. Change the design.
I think the triggers block table access for other users at the time of execution.
Well, triggers do NO SUCH THING - so that assumption is wrong.
A trigger does what it is told to do, and an empty trigger sets zero locks (any locks come from whatever statement fired it). If something does set up a table-wide lock, fire whoever did that and redesign.
Triggers should be fast, light and be over fast. NO heavy processing in them.
Without actually seeing the triggers it's impossible to diagnose this confidently but here goes...
The TRIGGER won't set up a lock as such but if it sets off other UPDATE statements they'll require locks and if those UPDATE statements fire other triggers then you could have a chain reaction that produces the kind of grief you seem to be experiencing.
If that sounds like what might be happening then removing the triggers and doing the processing explicitly by running a stored procedure at the end may fix it. If the stored procedure is rubbish then you'll still have problems, but at least they'll be easier to fix. Try to ensure that the stored procedure only updates the records that need to be updated.
The main problem with shifting the functionality to a stored procedure that you run after the update is ensuring that it is in fact run every time.
If your asp.net skills are stronger than your T-SQL skills then this should be a far easier problem to solve than untangling a web of SQL triggers.
The other issue is that between the update completing and the stored procedure completing, the records will be in an intermediate state, showing the initial change but not the remaining ones. This may or may not be a problem in your case.
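A rough sketch of that explicit approach, again with hypothetical table and column names; the application issues its UPDATE and then calls this procedure, which touches only the rows related to the record that changed. Running both inside one transaction narrows the intermediate-state window mentioned above:

CREATE PROCEDURE dbo.usp_ApplyOrderSideEffects
    @OrderId INT
AS
BEGIN
    SET NOCOUNT ON;
    -- Only the rows tied to the order that was just updated are touched
    UPDATE s
    SET s.Quantity = s.Quantity - d.Quantity
    FROM dbo.Stock AS s
    INNER JOIN dbo.OrderDetail AS d ON d.StockId = s.StockId
    WHERE d.OrderId = @OrderId;
END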

How do I avoid these deadlocks?

I have a routine which is updating my business entity. The update involves about 6 different tables. All the commands are being executed within a transaction.
Recently, I needed to add some code into the routine which accesses a lookup table from the database. The lookup code already existed in another business object so I used that business object. For example:
Using tr As DbTransaction = myConnection.BeginTransaction()
    ExecuteCommand1(tr)
    ExecuteCommand2(tr)
    If myLookupTable.GetLookupTable().FindById(id).HasFlagSet Then
        ExecuteCommand3(tr)
    End If
End Using
However, the lookup table business object hangs/deadlocks. I think this is because it doesn't have a reference to the transaction being used by the original routine.
After doing some research, I attempted to put the lookup table logic in its own transaction, setting the IsolationLevel to ReadUncommitted. This gave me the results I desired. However, after further research, I'm now second-guessing if I've implemented this correctly.
Assuming a reference to the active transaction is unavailable to my lookup table object, is what I've described considered best practice? I feel like I might be missing something.
If you're doing a read in the middle of your transaction then you should do it under the transaction context, not using a different transaction and dirty reads. Luckily there is an easy solution: instead of using the ADO.NET transaction objects, use the .NET TransactionScope object. The ADO.NET code is aware of it and will enlist all your operations in this transaction, including your other business component reads. Just make sure your business object does not open a different connection, as this would attempt to escalate the existing transaction into a distributed transaction and enlist the new connection into it.
The alternative is to pass your SqlConnection/SqlTransaction pair down on each call, but that propagates horribly through your code.
If it were me I would rewrite the logic so I do not have to do an uncommitted read.
The golden rule to avoid deadlocks is to always take table locks in the same order in every transaction. So look at the code in the other transactions to see what order they take table locks; then make sure you use the same order in your transaction.
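A contrived T-SQL illustration of that rule, with hypothetical tables TableA and TableB:

-- Session 1
BEGIN TRAN;
UPDATE dbo.TableA SET Flag = 1 WHERE Id = 5;
UPDATE dbo.TableB SET Flag = 1 WHERE Id = 7;
COMMIT;

-- Session 2, written with the opposite order: can deadlock against session 1
BEGIN TRAN;
UPDATE dbo.TableB SET Flag = 2 WHERE Id = 7;
UPDATE dbo.TableA SET Flag = 2 WHERE Id = 5;
COMMIT;

-- Fix: rewrite session 2 to update TableA first, then TableB,
-- so every transaction acquires its locks in the same order.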
Apparently your lookup is attempting to access a row (or rows) that is exclusively locked by transaction tr. If you use a read-uncommitted transaction, or alternatively use WITH (NOLOCK) in your lookup query, you will see all uncommitted changes by transactions that might be occurring and affecting your lookup logic, so I am not so sure how desirable this would be.
I think it is best to find a way to ensure that your lookup query participates in the current transaction if you need to do the lookup during that transaction. If all of these operations execute on the same thread, one thing you can do is store the transaction object in thread-local storage when you create it, and have the GetLookupTable method check thread-local storage for a transaction object: if one is set, get the connection from that transaction object; otherwise, create a new connection. This way your lookup becomes part of that transaction and should run its logic without being blocked by the current transaction (and in turn blocking it), which is what leads to the deadlock.
