I need some input on the practical use of BizTalk Atomic and Long-Running transactions. I have read the theory, but I'm not sure how an Atomic transaction works if I am making multiple SQL calls: if one SQL call fails, how will previously committed data be rolled back?
I need a guide/link/pointer to understand these transactions better.
BizTalk version used: 2010
The main difference is that the orchestration is never persisted during an Atomic transaction, even when sending data to the message box; everything is done in one transaction established by the DTC. In fact, a message isn't really sent to the message box if you send it from an Atomic transaction: it's written but not committed.
Another difference is that an Atomic transaction automatically rolls back everything inside it in case of failure. So you can be sure all the actions inside are performed at once or not at all.
In reality, the Atomic transaction has too many limitations and is quite an exotic way to do things in BizTalk. I've implemented a lot of BizTalk solutions but have never used an Atomic transaction so far. However, I use Long-Running transactions a lot, to force the orchestration to persist some intermediate state (this happens at the end of any transaction scope) or to define compensation actions.
See this blog: Indetail About Atomic Scope / Transactions in BizTalk Server.
In particular:
Please note that, the scope of BizTalk with respect to Atomic Transaction is till BizTalk Server Message Box only. Please consider this before you decide to use Atomic scope.
So regarding the multiple SQL calls, I don't think you can do it that way with an Atomic shape unless you aren't dependent on the results of the first SQL call. If you want a single SQL transaction that can be rolled back, you are better off doing that in a stored procedure that BizTalk calls.
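To make the point concrete: a single database transaction wrapping several statements either commits them all or rolls them all back, which is what a stored procedure buys you here. A minimal sketch of that behavior using Python's standard sqlite3 module (the orders table is invented for illustration; in BizTalk the same pattern would live inside the stored procedure):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER NOT NULL)")

def insert_orders(rows):
    """Insert all rows in one transaction; roll back everything on any failure."""
    try:
        with conn:  # commits on success, rolls back on exception
            for row_id, qty in rows:
                conn.execute("INSERT INTO orders VALUES (?, ?)", (row_id, qty))
    except sqlite3.Error:
        pass  # nothing from this batch was committed

insert_orders([(1, 5), (2, 7)])  # both rows commit together
insert_orders([(3, 1), (1, 9)])  # second insert violates the PK -> both rolled back
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # -> 2
```

The second batch leaves no trace at all, even though its first insert succeeded before the failure.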
About the only times I've used Atomic scopes are when having to call a Pipeline from inside an Orchestration, or when calling the BRE.
From the following document: https://cloud.google.com/datastore/docs/concepts/transactions
What would happen if a transaction fails with no explicit rollback defined? For example, if we're performing a put() operation inside the transaction.
The document states that transactions should be idempotent. What does this mean with respect to a put() operation? It is not clear how idempotency applies in this context.
How do we detect failure if, according to the documentation, errors from a commit are not reliable?
We are seeing symptoms where put() sometimes partially saves the data. Note that we do not have an explicit rollback defined.
As you may already know, Datastore transactions are guaranteed to be atomic, which means they follow the all-or-nothing principle: either all operations succeed or they all fail. This ensures that the data in your database remains consistent over time.
Now, regardless of whether you execute put() or any other operation in your transaction, your code should always ensure that the transaction has either successfully committed or been rolled back. This means that if you aren't fully sure whether the commit succeeded, you should explicitly issue a rollback.
However, there may be some exceptions where a commit might fail, and this doesn't necessarily mean that no data was written to your database. The documentation even points out that "you can receive errors in cases where transactions have been committed."
The simple way to detect transaction failures is to add a try/catch block in your code for when an Exception (a failed transactional operation) or a DatastoreException (an error related to Datastore, such as a failed commit) is thrown. I believe this Stack Overflow post may already answer that particular question.
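The commit-or-rollback discipline can be sketched as a generic control-flow pattern. The in-memory FakeTransaction below is only a stand-in for a real Datastore transaction (in the Python client that would be something like client.transaction()); the point is the shape of the code: if you cannot prove the commit succeeded, issue a rollback.

```python
class FakeTransaction:
    """In-memory stand-in for a Datastore transaction, for illustration only."""
    def __init__(self, store, fail_on_commit=False):
        self._store, self._pending = store, {}
        self._fail_on_commit = fail_on_commit

    def put(self, key, value):
        self._pending[key] = value        # buffered, not yet visible

    def commit(self):
        if self._fail_on_commit:
            raise RuntimeError("commit failed")
        self._store.update(self._pending)  # all-or-nothing apply

    def rollback(self):
        self._pending.clear()

def save_all(txn, items):
    """Apply all items, or roll back so nothing is applied."""
    try:
        for key, value in items:
            txn.put(key, value)
        txn.commit()
        return True
    except Exception:
        txn.rollback()  # explicit rollback whenever the commit is in doubt
        return False

store = {}
save_all(FakeTransaction(store), [("a", 1), ("b", 2)])        # both applied
save_all(FakeTransaction(store, fail_on_commit=True), [("c", 3)])  # nothing applied
print(store)  # -> {'a': 1, 'b': 2}
```

The failed transaction leaves no partial write: "c" is never visible in the store.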
A good practice is to make your transactions idempotent whenever possible. In other words, if a transaction includes a write operation (put()) and the operation fails and has to be retried, the end result should remain the same.
A real-world example: you're transferring money to a friend. The transaction consists of withdrawing 20 USD from your bank account and depositing the same amount into your friend's account. If the transaction fails and has to be retried, it should still operate on the same 20 USD, leaving the same final result.
Keep in mind that the Datastore API doesn't retry transactions by default, but you can add your own retry logic to your code, as per the documentation.
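Since the API doesn't retry for you, a hand-rolled retry loop is common. The sketch below is illustrative only (with_retries, transfer_20, and the in-memory balances dict are all made up): the transaction body writes absolute target values rather than applying a relative increment, so retrying it after an ambiguous failure leaves the same final result.

```python
class TransientError(Exception):
    """Stands in for a retryable failure where the commit outcome is unknown."""

def with_retries(operation, attempts=3):
    """Retry a transactional operation a bounded number of times."""
    for attempt in range(attempts):
        try:
            return operation()
        except TransientError:
            if attempt == attempts - 1:
                raise  # give up after the last attempt

balances = {"me": 50, "friend": 0}
calls = {"n": 0}

def transfer_20():
    calls["n"] += 1
    # Idempotent: write the target balances computed from the starting state,
    # instead of doing balances["me"] -= 20, which would subtract again on retry.
    balances["me"], balances["friend"] = 30, 20
    if calls["n"] == 1:
        raise TransientError("commit outcome unknown")  # first attempt "fails"

with_retries(transfer_20)
print(balances)  # -> {'me': 30, 'friend': 20}, even though the body ran twice
```

Had the body used relative updates (-= 20 and += 20), the retry would have moved 40 USD.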
In summary, if a transaction is interrupted and your logic doesn't handle the failure accordingly, you may eventually see inconsistencies in the data of your database.
I have a BizTalk orchestration that is picking up messages from an MSMQ. It processes the message and sends it on to another system.
The thing is, whenever a message is put on the queue, BizTalk dequeues it immediately even if it is still processing the previous message. This is a real pain because if I restart the orchestration then all the unprocessed messages get deleted.
Is there any way to make BizTalk only take one message at a time, so that it completely finishes processing the message before taking the next one?
Sorry if this is an obvious question, I have inherited a BizTalk system and can't find the answer online.
There are three properties of the BizTalk MSMQ adapter you could try to play around with:
batchSize
Specifies the number of messages that the adapter will take off the queue at a time. The default value is 20.
This may or may not help you. Even when set to 1, I suspect BTS will still try to consume the remaining "single" messages concurrently, as it always attempts parallel processing, but I may be wrong about that.
serialProcessing
Specifies that messages are dequeued in the order they were enqueued. The default is false.
This is more likely to help, because guaranteeing ordered processing fundamentally limits you to single-threaded processing. However, I'm not sure whether this will be enough on its own, or whether it only governs the order in which messages are delivered to the message box database. You may also need to enable ordered delivery throughout the BTS application, which can only be done at design time (i.e., it requires code changes).
transactional
Specifies that messages will be sent to the message box database as part of a DTC transaction. The default is false.
This will likely help with your other problem where messages are "getting lost". If the queue is non-transactional, and moreover, not enlisted in a larger transaction scope which reaches down to the message box DB, that will result in message loss if messages are dequeued but not processed. By making the whole process atomic, any messages which are not committed to the message box will be rolled back onto the queue.
Sources:
https://msdn.microsoft.com/en-us/library/aa578644.aspx
While you can process the messages in order by using Ordered Delivery, there is no way to serialize processing the way you're asking.
However, merely stopping the Orchestration should not delete anything, much less 'all the unprocessed messages'. That seems to be your real problem.
You should be able to stop processing without losing anything.
If the Orchestration is going into a Suspended state, then all you need to do is Resume that one orchestration, and any queued messages will remain and be processed. This is the default behavior, even if the app only ended up 'correct' by accident ;).
When you Stop the Application, you're actually Terminating the existing Orchestration instances and everything associated with them, including any queued messages.
Here's your likely problem: if the original developer didn't properly handle the Port error, the Orchestration might get stuck in an unfinishable Loop. Fixing that would require a (very minor) mod to the Orchestration itself.
I want to pull messages off an MQ queue in a C client, and would love to do so asynchronously so I don't have to start (explicit) multithreading. The messages will be forwarded to another system that acts "transactionally" but is completely incompatible with XA. So I'd like a way to explicitly commit (and thereby remove) a message that's been successfully handed off to the other system, and to not commit if the hand-off failed, so that the message is retained for a later, more successful attempt.
I've read about the SYNCPOINT option and understand how I'd use it around a regular GET, but I haven't seen any hints on how to give asynchronous message retrieval this kind of transactional behavior. Any hints, please?
I think you are describing the asynchronous callback capability, i.e., you register a routine to be called when a message arrives, and ask for every get to be under syncpoint. An explanation of how some of it works is in https://share.confex.com/share/117/webprogram/Handout/Session9513/share_advanced_mqi.pdf, page 4 onward.
Effectively you get called with the MQ message under syncpoint, do your processing with another system, then commit or rollback the message before returning.
Be aware that without, e.g., XA two-phase commit, there is always a window where you commit to the external system and then a power outage means the message under the unit of work gets rolled back inside MQ, because you didn't have time to perform the MQ commit.
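Without a real queue manager this is hard to show runnably, so the sketch below fakes the get-under-syncpoint / commit / backout control flow in plain Python (in a real C client these would be MQGET with MQGMO_SYNCPOINT, then MQCMIT or MQBACK): the message only leaves the queue for good once the hand-off to the other system succeeds.

```python
from collections import deque

class SyncpointQueue:
    """Toy model of getting messages under syncpoint; not a real MQ client."""
    def __init__(self, messages):
        self._queue = deque(messages)
        self._in_flight = None

    def get(self):                 # like MQGET with MQGMO_SYNCPOINT
        self._in_flight = self._queue.popleft()  # raises IndexError when empty
        return self._in_flight

    def commit(self):              # like MQCMIT: the message is gone for good
        self._in_flight = None

    def backout(self):             # like MQBACK: the message returns to the queue
        self._queue.appendleft(self._in_flight)
        self._in_flight = None

def forward(msg):
    """Stand-in for the non-XA target system; rejects one message."""
    if msg == "bad":
        raise RuntimeError("target refused message")

q = SyncpointQueue(["ok-1", "bad", "ok-2"])
processed = []
while True:
    try:
        msg = q.get()
    except IndexError:
        break  # queue drained
    try:
        forward(msg)
        processed.append(msg)
        q.commit()
    except RuntimeError:
        q.backout()  # failed hand-off: message stays queued for a later retry
        break
print(processed)  # -> ['ok-1']; 'bad' and 'ok-2' remain on the queue
```

Note the window described above still exists in the real thing: forward() may succeed and the process may die before commit(), in which case the message is redelivered.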
Edit: my misunderstanding, didn't realise that the application was using a callback to retrieve messages, which is indeed fully asynchronous behavior. Disregard the answer below.
Do MQGET with MQGMO_SYNCPOINT, then issue either MQCMIT or MQBACK.
"Asynchronous" and "synchronous" may be misnomers - these are your patterns of using MQ - whether you wait for a reply message or not, these patterns do not affect how MQ processes your calls. Transaction management (unit of work management) works across any MQI calls that use SYNCPOINT, no matter if they are part of a request/reply pattern or not.
In my application, there is a thread that is constantly receiving and writing data to a SQLite database inside a transaction, then committing the transaction when it's done.
At the same time, when the application runs a long running query, the write thread seems to get blocked and no data gets written. Each method uses the same connection object.
Is there way to do an equivalent of a SQL (nolock) query, or some other way to have my reads not lock up any of my tables?
Thanks!
You have a conceptual error. SQLite works this way:
The traditional File/Open operation does an sqlite3_open() and executes a BEGIN TRANSACTION to get exclusive access to the content. File/Save does a COMMIT followed by another BEGIN TRANSACTION. The use of transactions guarantees that updates to the application file are atomic, durable, isolated, and consistent.
So you can't work that way, and there's no real need to. I think you must rethink your algorithm to work with SQLite. That's the reason your connection is blocked.
More information:
When to use it.
FAQ
Using threads on SQLite: avoid them!
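In practice, the usual first step is to stop sharing one connection object between the reader and the writer, and to keep each write transaction short so locks are released promptly. A minimal sketch using the standard sqlite3 module (table and file names are invented for illustration):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")

# One connection per thread, never shared. SQLite still serializes writers,
# so the key is to keep each transaction short and commit promptly.
writer = sqlite3.connect(path, timeout=5.0)  # timeout: wait on a lock, don't fail
reader = sqlite3.connect(path, timeout=5.0)

writer.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, value REAL)")
writer.commit()

# Short write transaction: begin, insert, commit immediately.
with writer:
    writer.execute("INSERT INTO readings (value) VALUES (?)", (42.0,))

# The reader has its own connection, so it sees committed data and holds
# no long-lived transaction of the writer's that would block further writes.
rows = reader.execute("SELECT value FROM readings").fetchall()
print(rows)  # -> [(42.0,)]
```

The original symptom comes from the opposite setup: one shared connection means a long-running read query and the write transaction are fighting over the same transaction state.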
I inherited an application with a lot of stored procedures, and many of them have exception handling code that inserts a row in an error table and sends a DBMail. We have ELMAH on the ASP.NET side, so I'm wondering if exception management in the stored procs is necessary. But before I rip it out, I want to ensure that I'm not making a grave mistake because of ignorance about a best practice.
Only one application uses the stored procedures.
When would one prefer using exception management in a SQL Server 2005 stored procedure over handling the exception on the ASP.NET side?
If there are other applications utilizing these stored procedures then it might make sense to retain the error handling in the stored procedures. In your edit you indicate that this is not the case so removing the exception handling is probably not a bad idea.
In the MSDN article Exception Handling it is outlined when to catch exceptions and when to let them bubble up the stack. It can be argued that it makes sense to handle and log database exceptions that are recoverable from in the stored procedure.
There is a principle sometimes referred to as "First Failure Data Capture", i.e., it's the responsibility of the first "chunk of code" that identifies an error to immediately capture it for future diagnosis. In multi-tier architectures this leads to some interesting questions about who "first" actually is.
I believe it's quite reasonable for the stored procedure to log something to a db (sending an email sounds somewhat overkill for all but the most critical of errors, but that's another issue). It cannot assume that higher layers will be well behaved; you may only have one client now, but you can't predict the future.
The stored procedure can still throw an exception as well as logging. And sometimes in difficult situations being able to correlate errors in the different layers is actually very handy.
I would take a lot of persuading to remove that error logging.
I believe that logging to a table only works for simpler systems where everything is done within a single stored procedure call.
Once the system is complex enough that you implement transactions across database calls, logging to the database within the stored procedure becomes much more of a problem.
The rollbacks undo the logging to the table.
Logic that allows rollbacks and logging, in my opinion, creates too much potential for defects.
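That interaction is easy to reproduce with any transactional database. In the illustrative sketch below (tables invented, using Python's standard sqlite3 module), a log row written inside the failing transaction vanishes with the rollback, while a log row written in its own transaction afterwards survives:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY,
                           balance INTEGER CHECK (balance >= 0));
    CREATE TABLE error_log (message TEXT);
""")

# Phase 1: log to a table from inside the transaction that fails.
try:
    with conn:  # commits on success, rolls back on exception
        conn.execute("INSERT INTO error_log VALUES ('starting account update')")
        conn.execute("INSERT INTO accounts VALUES (1, -5)")  # violates CHECK
except sqlite3.IntegrityError:
    pass

# The rollback undid the business write AND the log write:
print(conn.execute("SELECT COUNT(*) FROM error_log").fetchone()[0])  # -> 0

# Phase 2: log in a separate transaction after the rollback; this row survives.
with conn:
    conn.execute("INSERT INTO error_log VALUES ('account update failed')")

print(conn.execute("SELECT COUNT(*) FROM error_log").fetchone()[0])  # -> 1
```

This is why error logging that must survive a rollback has to happen outside the failing transaction (or via a mechanism the rollback can't touch), which is exactly the complexity the answer above warns about.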