Is context.executeQueryAsync a transactional operation?

Let's say I update multiple items in a loop and then call executeQueryAsync() on the ClientContext class, and this call returns an error (the failed callback is invoked). Can I be sure that not a single one of the items I wanted to update was actually updated? Is there a chance that some of them will be updated and some will not? In other words, is this operation transactional? Thank you, I cannot find a single post about it.
I am asking about the CSOM model, not server-side solutions.

SharePoint handles its internal updates transactionally, so updating a document may actually involve multiple calls to the database, and if one of them fails the other changes are rolled back so nothing is left half updated on a failure.
However, that is not made available to us as external developers. If you create an update that modifies 9 items within your executeQueryAsync call and it fails on #7, the first 6 will not be rolled back. You are going to have to write code to handle the failures, and if rolling back is important, you will have to roll the changes back manually in your own code.
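To illustrate, here is a rough managed-CSOM sketch of that manual failure handling (the question mentions the JavaScript executeQueryAsync, but the batching behaves the same way in C#); the site URL, the "Tasks" list, the "Title" field and the item ids are illustrative assumptions, and authentication is omitted:

// Batch several item updates and, on failure, write the previously captured
// values back by hand - CSOM gives us no rollback across the batch.
using System;
using System.Collections.Generic;
using Microsoft.SharePoint.Client;

class BatchUpdateSample
{
    static void Main()
    {
        // Authentication omitted for brevity.
        using (var ctx = new ClientContext("https://contoso.sharepoint.com/sites/dev"))
        {
            var list = ctx.Web.Lists.GetByTitle("Tasks");

            // 1. Load the current values so we can restore them if the batch fails.
            var items = new List<ListItem>();
            for (int id = 1; id <= 9; id++)
            {
                var item = list.GetItemById(id);
                ctx.Load(item, i => i.Id, i => i["Title"]);
                items.Add(item);
            }
            ctx.ExecuteQuery();

            var originals = new Dictionary<int, string>();
            foreach (var item in items)
                originals[item.Id] = (string)item["Title"];

            // 2. Queue all updates and send them as a single batch.
            foreach (var item in items)
            {
                item["Title"] = "Updated " + DateTime.UtcNow.ToString("o");
                item.Update();
            }

            try
            {
                ctx.ExecuteQuery(); // same batching semantics as executeQueryAsync
            }
            catch (ServerException)
            {
                // 3. No automatic rollback: items processed before the failing one
                //    are already saved, so write the old values back ourselves.
                foreach (var item in items)
                {
                    item["Title"] = originals[item.Id];
                    item.Update();
                }
                ctx.ExecuteQuery(); // best effort - this call can fail as well
            }
        }
    }
}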


Two conflicting long lived process managers

Let's assume we have two long-lived process managers. Both sagas operate over 10 million items, for example. The first saga adds something to each item; the second saga removes it from each item. Given that both process managers need a few minutes to complete their job, if I run them simultaneously I get into trouble.
Some of those items would hold the value while the rest would not. The result is actually close to random and depends on the order of the commands that affect a particular item. I wondered if redispatching the "Remove" command in case of failure would solve the problem; I mean, if you try to remove a non-existing value, you should wait for the first saga to add the value. But while the process managers are working, someone else may dispatch a "Remove" or "Add" command, and in such a case my approach would fail.
How might I solve such a problem? :)
It seems that you would want the second saga not to run while the first saga is running (and presumably not until whatever process depends on the value the first saga added has had a chance to run). So the apparent solution would be to have a component (it could be a microservice, or a record in a strongly consistent datastore like ZooKeeper/etcd/Consul) that grants the sagas permission to start executing. An example protocol might look like:
Saga sends a message to the component identifying the saga and conveying the intention to start
Component validates that no sagas might be running which would prevent this saga from running
Component responds with permission to start running
Subsequent saga attempts result in rejection until the running saga tells the component it's OK to run the other saga
Assuming that this component is reliably durable, the failure mode to worry about is that permission is granted but the component never processes the message that the saga finished (causes of this could include the completion message not getting delivered or processed, or the saga crashing). No amount of acknowledgements or extra messages can solve this (it is essentially the Two Generals' Problem).
A mitigation is to have this component (or something watching it) raise an alert if too much time seems to have passed without the saga completing. Whatever or whoever is responsible for ensuring liveness would then investigate whether the saga is still running and, if none is, inform the component that it's OK to run the other saga. Note that this is not foolproof: it's quite possible for the decider in question to make what turns out to be the wrong decision.
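As a rough illustration of such a component, here is a minimal in-memory C# sketch; in a real system the state would live in a strongly consistent store, and the type and member names here are assumptions, not an established API:

using System;

public sealed class SagaCoordinator
{
    private readonly object _gate = new object();
    private readonly TimeSpan _alertAfter;
    private string _runningSaga;          // null => nothing is running
    private DateTime _startedAtUtc;

    public SagaCoordinator(TimeSpan alertAfter) => _alertAfter = alertAfter;

    // Steps 1-3 of the protocol: a saga asks for permission to start.
    public bool TryStart(string sagaName)
    {
        lock (_gate)
        {
            if (_runningSaga != null) return false;   // step 4: reject while another saga runs
            _runningSaga = sagaName;
            _startedAtUtc = DateTime.UtcNow;
            return true;
        }
    }

    // The running saga reports completion so the other saga may start.
    public void Complete(string sagaName)
    {
        lock (_gate)
        {
            if (_runningSaga == sagaName) _runningSaga = null;
        }
    }

    // Mitigation: surface "saga seems stuck" so whoever is responsible for
    // liveness can investigate and, if appropriate, call ForceRelease().
    public bool LooksStuck()
    {
        lock (_gate)
        {
            return _runningSaga != null && DateTime.UtcNow - _startedAtUtc > _alertAfter;
        }
    }

    public void ForceRelease()
    {
        lock (_gate) { _runningSaga = null; }
    }
}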
I feel like I need more context. Whilst you don't say it explicitly, is the problem that the second saga tries to remove values that haven't been added by the first?
If that was true, a simple solution would be to just use a third state.
What I mean by that is to define and declare item state more explicitly. You currently seem to have two states, with value and without value, but nothing to indicate whether an item is ready to be processed by the second saga because the first saga has already done its work on the item in question.
So all that needs to happen is that the second saga keeps looking for items where:
(with_value == true & ready_for_saga2 == true)
Call it ready_for_saga2 or "Saga 1 processing complete", whatever seems more appropriate in your context.
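A tiny sketch of that selection rule, with the Item type and flag names being illustrative assumptions:

using System.Collections.Generic;
using System.Linq;

public class Item
{
    public int Id { get; set; }
    public bool WithValue { get; set; }
    public bool ReadyForSaga2 { get; set; }   // set by saga 1 when it has finished this item
}

public static class Saga2Selection
{
    // The second saga only picks up items the first saga has explicitly marked as done.
    public static IEnumerable<Item> NextBatch(IEnumerable<Item> items) =>
        items.Where(i => i.WithValue && i.ReadyForSaga2);
}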
I'd say that the solution would vary based on which actual problem we're trying to solve.
Say it's an inventory: "add" means items added to the inventory and "remove" means items requested for delivery. Then the order of commands does not matter that much, because you can simply process the request for delivery when new items are added to the inventory.
This would lead to an aggregate root with two collections: Items and PendingOrders.
One process manager adds new inventory to Items; if any orders are pending, it completes those orders in the same transaction and removes both the item and the order from the collections.
If the other process manager adds an order (tries to remove an item), it will either fulfil it right away if there are any items left, or it will add the order to the pending orders to be processed when new items arrive (and maybe notify someone about the delay, while we're at it).
This way we end up with the same state regardless of the order of commands, but the actual real-world problem has a great influence on the model chosen.
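A rough C# sketch of that aggregate, with all names being illustrative assumptions:

using System.Collections.Generic;

public class Inventory
{
    private readonly Queue<string> _items = new Queue<string>();          // Items
    private readonly Queue<string> _pendingOrders = new Queue<string>();  // PendingOrders

    // Process manager 1: add an item, fulfilling a pending order if one is waiting.
    public void AddItem(string itemId)
    {
        if (_pendingOrders.Count > 0)
            Fulfil(_pendingOrders.Dequeue(), itemId);  // same operation: item never enters stock
        else
            _items.Enqueue(itemId);
    }

    // Process manager 2: request a delivery; fulfil now or park the order.
    public void RequestDelivery(string orderId)
    {
        if (_items.Count > 0)
            Fulfil(orderId, _items.Dequeue());
        else
            _pendingOrders.Enqueue(orderId);           // processed when new items arrive
    }

    private void Fulfil(string orderId, string itemId)
    {
        // Raise a domain event here, notify shipping, etc.
    }
}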
If we have other real-world problems, we can model those too.
Let's say you have two users who each start a process that bulk-updates titles on inventory items. In this case you - and the users - have to decide how best to resolve this conflict: what will lead to the best real-world outcome.
If you want consistency across all the items - all or no items should be updated by a single bulk update - I would embed this knowledge in a new model. Let's call it UpdateTitlesProcesses. We have only one instance of this model in the system, and its state is shared between processes. This model is effectively a command queue: when a user initiates the bulk operation, it adds all the commands to the queue and starts processing each item one at a time.
When the second user initiates another title update, the business logic in our model will reject it, as another update is already in progress. Or, if the experts say that the last write should win, we ditch the remaining commands from the first process and add the new ones (and similarly we should decide what should happen if a user issues a single title update rather than a bulk one - should it be rejected, prioritized or put on hold?).
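A minimal sketch of such a single-instance, command-queue style model (the type name follows the answer; the members and the rejection policy are assumptions):

using System;
using System.Collections.Generic;

public class UpdateTitlesProcesses
{
    private readonly Queue<(int ItemId, string NewTitle)> _queue = new Queue<(int ItemId, string NewTitle)>();
    private string _owner;   // user whose bulk update is currently running, null if idle

    public void StartBulkUpdate(string user, IEnumerable<(int ItemId, string NewTitle)> commands)
    {
        // Policy chosen here: reject a competing bulk update. A last-write-wins
        // policy would instead clear the queue and take over ownership.
        if (_owner != null && _owner != user)
            throw new InvalidOperationException("Another bulk title update is already running.");

        _owner = user;
        foreach (var command in commands)
            _queue.Enqueue(command);
    }

    // Called repeatedly by the process manager; returns false when the queue is drained.
    public bool TryProcessNext(Action<int, string> applyTitle)
    {
        if (_queue.Count == 0) { _owner = null; return false; }
        var (itemId, newTitle) = _queue.Dequeue();
        applyTitle(itemId, newTitle);
        return true;
    }
}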
So in short I'd say:
Make it clear which real-world problem we are solving - and thus which conflict-resolution outcome is best (probably a trade-off, and often something that requires user interaction or notification).
Model this explicitly (where processes, actions and conflict handling are also part of the model).

Cosmos DB ChangeFeed Exception Handling

With Cosmos DB ChangeFeed, can anyone please provide some help with exception handling?
Let's say I have 10 documents in the change feed and a loop that iterates through the documents one by one. Let's assume an exception happens after the 5th document has been processed.
What is going to happen with the change feed?
So far, it looks to me like the entire change feed batch is swallowed, i.e. the remaining documents after the exception are gone.
I am just wondering what the back-out strategy is for this. Is there a way I can completely back out the entire batch so I do not lose any changes?
It is an old question, but hopefully others may find this useful.
To handle the error, the recommended pattern is to wrap your processing code in a try-catch. Catch the error and put that document on a queue (a dead-letter queue). Have a separate program deal with the documents that produced errors. This way, if you have a 100-document batch and just one document fails, you do not have to throw away the whole batch.
The second reason is that if you keep re-reading those documents from the change feed, you may lose the latest snapshot of the document: the change feed keeps only the last version of the document, and in the meantime other processes can come along and change it.
As you keep fixing your code, you will soon find no documents on the dead-letter queue.
An Azure Function is called automatically by the change feed system. If you want to roll the change feed back and control every aspect of it, you should consider using the Change Feed Processor SDK.
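As an illustration of the try-catch plus dead-letter pattern, here is a hedged sketch of a Cosmos DB-triggered Azure Function (the database, collection, queue and connection setting names are assumptions, and ProcessAsync stands in for your real per-document logic):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ChangeFeedFunction
{
    [FunctionName("ChangeFeedProcessor")]
    public static async Task Run(
        [CosmosDBTrigger(
            databaseName: "mydb",
            collectionName: "orders",
            ConnectionStringSetting = "CosmosConnection",
            LeaseCollectionName = "leases",
            CreateLeaseCollectionIfNotExists = true)] IReadOnlyList<Document> changes,
        [Queue("changefeed-deadletter")] IAsyncCollector<string> deadLetter,
        ILogger log)
    {
        foreach (var document in changes)
        {
            try
            {
                await ProcessAsync(document);   // your per-document work
            }
            catch (Exception ex)
            {
                // One bad document must not poison the whole batch: log it,
                // park it on the dead-letter queue and keep going.
                log.LogError(ex, "Failed to process document {Id}", document.Id);
                await deadLetter.AddAsync(document.ToString());
            }
        }
    }

    private static Task ProcessAsync(Document document)
    {
        // Placeholder for the real processing logic.
        return Task.CompletedTask;
    }
}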
The recommendation from Microsoft is to add a try-catch in your Cosmos DB trigger function. If any document throws an exception, you have to store it somewhere.
Once you start storing failed messages in some location, you have to build metrics, alerts and a re-processing strategy.
Below is my strategy for handling this scenario. One function listens to the database change feed and pushes the data into a topic (without any processing). I created multiple subscriptions, so each subscription maintains its own dead-letter queue.
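A sketch of that forwarding function, assuming a Service Bus topic output binding and illustrative names:

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;

public static class ChangeFeedToTopic
{
    // Forward every change, unprocessed, to a Service Bus topic. Each
    // subscription then processes at its own pace and keeps its own
    // dead-letter queue for the messages it cannot handle.
    [FunctionName("ChangeFeedToTopic")]
    public static async Task Run(
        [CosmosDBTrigger(
            databaseName: "mydb",
            collectionName: "orders",
            ConnectionStringSetting = "CosmosConnection",
            LeaseCollectionName = "leases")] IReadOnlyList<Document> changes,
        [ServiceBus("changefeed-topic", Connection = "ServiceBusConnection")] IAsyncCollector<string> topic)
    {
        foreach (var document in changes)
            await topic.AddAsync(document.ToString());   // no processing here
    }
}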

SignalR and SqlDependency refresh issue

I have a table in an MSSQL database and an ASPX page, and I need to push all new rows to the page in descending order. I found this awesome tutorial which uses SignalR and SqlDependency, but it shows only the last row, discarding the previous rows that were added while I'm online. It does that because it has a span element to show the data and overwrites that span every time, so I modified the JavaScript code to append the new data instead, and it works fine.
The problem now is that when I refresh the page for the first time, I get the new rows twice, and if I refresh the page again I get the new rows three times, and so on.
The only solution is to close the application and reopen it again, which is like resetting IIS.
So, what can I do to avoid duplicating data in the live view?
It is not a SignalR issue; it happens because the mentioned tutorial has a series of mistakes, the most evident being the fact that it continuously creates SqlDependency instances and then throws them away without ever unsubscribing from the OnChange event. You should start by adding something like this:
SqlDependency dependency = sender as SqlDependency;
dependency.OnChange -= dependency_OnChange;
before calling SendNotifications inside your event handler. Check this for some inspiration.
UPDATE (previous answer not fully accurate but kept in place for context)
The main problem here is that this technique creates a sort of self-regenerating, infinite chain of SqlDependency instances from inside instances of Web Forms pages, which become unreachable as soon as the page has finished rendering. This means that once your page lifecycle is complete and the page is rendered, the chain of dependencies stays alive and keeps working even though the page instance which created it has finished its cycle. The event handler also keeps the page instance alive even though it is unreachable, causing a memory leak.
The only way you can control this is to generate these chains somewhere else, for example within a static type you can call, passing some unique identifier (maybe a combination of page name and username? That depends on your logic). On the first call it will do what currently happens in your code, but as soon as you make another call with the same parameters it will do nothing, so the previously created chain will remain the only one notifying, with no duplicate calls.
It's just a suggestion; there are many possible solutions, but you need to understand the original problem and the fact that it is practically impossible to remove those chains of self-regenerating dependencies if you don't find a way to keep track of them and create them only when necessary. I hope that part is clear.
PS: this behavior is very similar to what you sometimes get with leaked event handlers keeping alive objects which should have been collected, which is what fooled me in the previous answer. It is in a way a similar problem (leaked objects), but with a totally different cause. The tutorial you followed does not make that clear, and it leads you into this situation where phantom code keeps executing and memory is lost.
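As one possible shape for that static type, here is a hedged sketch keyed by an identifier; the table, query and class names are assumptions, and it presumes SqlDependency.Start has already been called for the connection string:

using System.Collections.Concurrent;
using System.Data.SqlClient;

public static class NotificationChains
{
    private static readonly ConcurrentDictionary<string, bool> Started =
        new ConcurrentDictionary<string, bool>();
    private static string _connectionString;

    // Call from Page_Load (or a hub) with a key such as pageName + userName.
    public static void EnsureStarted(string key, string connectionString)
    {
        _connectionString = connectionString;
        // Only the first call for a given key creates a chain; later calls are
        // no-ops, so page refreshes never spawn duplicate notification chains.
        if (Started.TryAdd(key, true))
            Register();
    }

    private static void Register()
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand("SELECT Id, Title FROM dbo.Messages", connection))
        {
            var dependency = new SqlDependency(command);
            dependency.OnChange += OnDependencyChange;
            connection.Open();
            using (command.ExecuteReader()) { }   // executing the command registers the notification
        }
    }

    private static void OnDependencyChange(object sender, SqlNotificationEventArgs e)
    {
        ((SqlDependency)sender).OnChange -= OnDependencyChange;  // avoid leaking handlers
        Register();                                              // a dependency fires only once, so re-subscribe
        // push the new rows to the connected clients through the SignalR hub here
    }
}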
I got it. Although I don't like this approach at all, I have declared a static member in the Global.asax file, and in the Page_Load event I check its value: if it is true, don't start a new SqlDependency chain; otherwise start it.
if (!Global.PageIsFired)
{
    Global.PageIsFired = true;
    SqlDependency.Stop(ConfigurationManager.ConnectionStrings["db"].ConnectionString);
    SqlDependency.Start(ConfigurationManager.ConnectionStrings["db"].ConnectionString);
    SendNotifications();
}
Dear #Wasp,
Your last update helped me a lot to understand the problem, so thank you so much for your time and prompt support.
Dear #dyatchenko,
Thanks a lot for your comments; they were very useful too.

SQL Server database hangs on trigger execution

We have implemented 6-7 triggers on a table, and 4 of them are update triggers. All 4 triggers require long processing because of data manipulation and conditions. But whenever a trigger executes, all the pages on the website stop responding and hang for every other user, from other systems as well. Even when we execute an update statement in SQL Server Management Studio on the table holding the triggers, it hangs too. Can we resolve this hanging issue by moving the trigger code into stored procedures and calling those stored procedures after the table's update statement?
I think this is because the trigger blocks table access for other users while it executes. If that is not the case, can anyone provide a solution for it?
Triggers are dangerous - they get fired whenever things happen, and you have no control over when and how often they fire.
You should definitely NOT do any time-consuming processing in a trigger! A trigger should be super fast, and lean.
If you need processing, let the trigger record the necessary info into a separate "command" table, and have another process (e.g. a scheduled SQL Server Agent job) check that table for commands to be executed and then execute them - separately, independently of the main application, in a separate execution path.
Don't block your main app by doing excessive data processing / manipulation in a trigger! That's the wrong place to do this!
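For the consuming side of that "command" table, here is a hedged C# sketch of a polling worker (a SQL Agent job or any scheduler could run something equivalent); the PendingCommands table, its columns and the connection string are assumptions:

using System;
using System.Data.SqlClient;
using System.Threading;

class CommandTableWorker
{
    static void Main()
    {
        var connectionString = "Server=.;Database=AppDb;Integrated Security=true";
        while (true)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // Claim one pending command at a time.
                var claim = new SqlCommand(
                    @"UPDATE TOP (1) dbo.PendingCommands
                      SET Status = 'Processing'
                      OUTPUT inserted.Id, inserted.Payload
                      WHERE Status = 'Pending'", connection);

                int? id = null;
                string payload = null;
                using (var reader = claim.ExecuteReader())
                {
                    if (reader.Read())
                    {
                        id = reader.GetInt32(0);
                        payload = reader.GetString(1);
                    }
                }

                if (id == null)
                {
                    Thread.Sleep(TimeSpan.FromSeconds(5));   // nothing to do, wait and poll again
                    continue;
                }

                Process(payload);   // the heavy work that used to live in the triggers

                var done = new SqlCommand(
                    "UPDATE dbo.PendingCommands SET Status = 'Done' WHERE Id = @id", connection);
                done.Parameters.AddWithValue("@id", id.Value);
                done.ExecuteNonQuery();
            }
        }
    }

    static void Process(string payload)
    {
        // Do the data manipulation here, outside the user's transaction.
    }
}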
Can we resolve this hanging issue by shifting this trigger code into the stored procedure
and call this stored procedures after update statement of the table?
You have a box that weighs a ton. Does it get lighter when you put it into some nice packaging?
A trigger is already compiled. Putting it into a stored procedure is just dressing it up differently.
Your problem is that you abuse triggers to do heavy processing - something they should not do by design. Change the design.
Because I think trigger block the table access to the other user at the time of execution.
Well, triggers do NO SUCH THING - so you think wrong.
A trigger does what it is told to do, and an empty trigger takes zero locks (the locks come from whatever statement fired it). If you do set up a table-wide lock - fire whoever did that and redesign.
Triggers should be fast and light and be over quickly. NO heavy processing in them.
Without actually seeing the triggers it's impossible to diagnose this confidently, but here goes...
The trigger won't take a lock as such, but if it sets off other UPDATE statements they will require locks, and if those UPDATE statements fire other triggers, you could have a chain reaction that produces the kind of grief you seem to be experiencing.
If that sounds like what might be happening, then removing the triggers and doing the processing explicitly by running a stored procedure at the end may fix it. If the stored procedure is rubbish you'll still have problems, but at least they'll be easier to fix. Try to ensure that the stored procedure only updates the records that need updating.
The main problem with shifting the functionality to a stored procedure that you run after the update is ensuring that it is in fact run every time.
If your asp.net skills are stronger than your T-SQL skills then this should be a far easier problem to solve than untangling a web of SQL triggers.
The other issue is that, between the update completing and the stored procedure completing, the records will be in an intermediate state, showing the initial change but not the remaining ones. This may or may not be a problem in your case.

When to update index with Lucene.NET? Async or not?

Is it generally fast enough to make simple updates synchronously? For instance, with an ASP.NET web app, if I change a person's name... will I have any issues just updating the index synchronously as part of the "Save" mechanism?
OR is the only safe way to have some other asynchronous process make the index updates?
We do updates both synchronously and asynchronously, depending on the kind of action the user is performing. We have implemented the synchronous indexing in a way where we use the asynchronous code and just wait a short time for it to complete. We only wait for 2 seconds, which means that if it takes longer the user will not see the update, but normally the user will.
We configured logging so that we get notified whenever the "synchronous" indexing takes longer than the wait, to get an idea of how often that happens. We hardly ever go over the 2-second limit.
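A small sketch of that wait-briefly-for-the-async-update idea; the method names and the Person type are illustrative assumptions:

using System;
using System.Threading.Tasks;

public class IndexUpdater
{
    private static readonly TimeSpan SyncWait = TimeSpan.FromSeconds(2);

    public async Task SaveAndIndexAsync(Person person)
    {
        SaveToDatabase(person);

        // Kick off the normal asynchronous index update...
        Task indexing = Task.Run(() => UpdateLuceneIndex(person));

        // ...and wait a short while for it, logging when we give up so we can
        // see how often a user would miss their own update.
        Task finished = await Task.WhenAny(indexing, Task.Delay(SyncWait));
        if (finished != indexing)
            Log("Index update exceeded the 2 second synchronous window.");
    }

    private void SaveToDatabase(Person person) { /* persist the entity */ }
    private void UpdateLuceneIndex(Person person) { /* IndexWriter.UpdateDocument(...) etc. */ }
    private void Log(string message) { /* write to your logging pipeline */ }

    public class Person
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }
}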
If you are using a full-text session, you don't need to update the indexes explicitly; the full-text session takes care of indexing updated entities.
