I am a newbie to MarkLogic. How do I roll back a document I deleted? I used the function xdmp:rollback() from https://docs.marklogic.com/xdmp:rollback, but I could not see the document restored back to the database. Thanks in advance.
xdmp:document-delete("/data/edu/GMU.xml"),
xdmp:rollback()
In your example code the delete was never actually committed (because the full statement never concluded successfully), so you don't need an explicit rollback. You just need to prevent the statement from succeeding, for example by raising fn:error() to fail the entire statement; then none of the pending changes in the statement will ever be committed.
I have a driver table with a flag that determines whether that record has been processed or not. I have a stored procedure that reads the table, picks a record up using a cursor, does some stuff (inserts into another table) and then updates the flag on the record to say it's been processed. I'd like to be able to execute the SP multiple times concurrently to increase throughput.
The obvious answer seemed to be to use 'for update skip locked' in the select for the cursor, but it seems this means I cannot commit within the loop (to update the processed flag and commit my inserts) without getting the 'fetch out of sequence' error.
Googling tells me Oracle's AQ is the answer but for the time being this option is not available to me.
Other suggestions? This must be a pretty common request but I've been unable to find anything that useful.
TIA!
With Cosmos DB ChangeFeed, can anyone please provide some help with exception handling?
Let's say I have 10 documents in the change feed and a loop that iterates through the documents one by one. Let's assume an exception happened after the 5th document was processed.
What is going to happen with the changefeed?
So far, it looks to me like the entire change feed batch is swallowed, i.e. the remaining documents after the exception are gone.
I am just wondering what the backout strategy is for this. Is there a way I can completely back out the entire batch so I do not lose any changes?
It is an old question, but hopefully others may find this useful.
To handle the error, the recommended pattern is to wrap your code in a try-catch. Catch the error and put that document on a queue (dead-letter). Have a separate program deal with the documents which produced the error. This way, if you have a 100-document batch and just one document failed, you do not have to throw away the whole batch.
The second reason is that if you rely on getting those documents from the Change Feed again, you may lose the last snapshot of the document: the Change Feed keeps only the latest version of each document, and in between other processes can come and change it.
As you keep fixing your code, you will soon find no documents on the dead-letter queue.
The Azure Function is called automatically by the Change Feed system. If you want to roll back the Change Feed and control every aspect of it, you should consider using the Change Feed Processor SDK.
The recommendation from Microsoft is to add a try-catch in your Cosmos DB trigger function. If any document throws an exception, you have to store it in some place.
Once you start storing failed messages in some location, you have to build metrics, alerts, and a reprocessing strategy.
Below is my strategy for handling this scenario. One function listens to the DB change feed and pushes the data into a "Topic" (without any processing). I created multiple subscriptions, so each subscription maintains its own dead-letter queue.
I have a process that opens a database using sqlite3_open and sets journal mode to WAL.
Another process uses sqlite3_open to open that same database. Everything seems to work, but the problem is that the second process does not seem to see changes made by the first process. I am trying to fetch a count, or rowids, and they stay the same.
I am sure the database is being updated, because refreshing in SQLiteDatabaseBrowser shows the changes.
I tried multiple ways of opening databases, and multiple ways of querying, but no luck so far. What am I missing? Thanks!
Transactions are used to isolate connections from each other, especially to make changes visible only after a transaction has completed.
So for changes to be visible, the writing connection must end its transaction, and the reading connection must not have started its own transaction before that. (When using automatic transactions, ensure that statements are reset or finalized.)
I figured out what the problem was, and as usual in cases where things make no sense, the mistake was on my side. The problem is subtle, however, so it is worth mentioning.
I was calling sqlite3_reset on cached prepared statements lazily, that is, right before reusing a prepared statement rather than immediately after finishing executing it. The problem is that this pattern means there is always a prepared statement pending reset. Apparently, the reset is necessary to be able to see updates to the database (presumably because an un-reset statement keeps its implicit read transaction open).
Thanks for your help.
EDIT: after sleeping on it, this behavior actually makes sense. Updates should not become visible while a prepared statement is still executing; otherwise its results might never be complete or accurate.
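To make the fix concrete, here is a minimal C sketch of the corrected pattern, assuming a hypothetical items table and database file and a simple polling loop. The key point is that sqlite3_reset is called as soon as stepping is finished, so the connection does not keep an implicit read transaction open and each poll sees the other process's committed changes.

#include <stdio.h>
#include <sqlite3.h>

/* Reader process; the writer process has already set journal_mode=WAL. */
int main(void) {
    sqlite3 *db = NULL;
    sqlite3_stmt *stmt = NULL;

    if (sqlite3_open("test.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }

    /* Hypothetical query against a hypothetical table. */
    if (sqlite3_prepare_v2(db, "SELECT COUNT(*) FROM items;", -1, &stmt, NULL) != SQLITE_OK) {
        fprintf(stderr, "prepare failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 1;
    }

    for (int i = 0; i < 5; i++) {
        if (sqlite3_step(stmt) == SQLITE_ROW)
            printf("count = %d\n", sqlite3_column_int(stmt, 0));

        /* Reset immediately, not lazily before the next reuse: an
           un-reset statement keeps its read snapshot open, and this
           connection would keep seeing stale data. */
        sqlite3_reset(stmt);

        sqlite3_sleep(1000); /* wait a second before polling again */
    }

    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}

Finalizing the statement instead of resetting it has the same effect if the statement will not be reused.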
I'm fairly new to SQLite, and I'm just trying to get my head around running transactions (with the C API). In this case, I just need to run two consistent updates on two tables.
The easiest approach seems to be the one in this tutorial: you sqlite3_exec a single combined sequence of commands that starts with BEGIN and ends with COMMIT. So you never do a ROLLBACK, and you presumably rely on SQLite automatically rolling back if it encounters an error.
The problem is that the section on handling transaction errors in the manual is fairly complex, and it's not obvious to me that this is a good approach. The doc also suggests manually rolling back.
The next approach would be to exec a single BEGIN, and then run each statement individually, checking for errors, and then finally run a COMMIT or ROLLBACK. Is this actually any better, or is it just busywork?
sqlite3_exec() will abort at the first error encountered, and most errors do not result in an automatic rollback.
You should execute the BEGIN first, then do all the stuff inside the transaction, then end the transaction with either COMMIT or ROLLBACK.
In the case of ROLLBACK, you might just ignore any errors: either the transaction was already rolled back, or there is nothing you could do anyway.
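As a rough illustration of that second approach, here is a minimal C sketch; the accounts and ledger tables and the specific UPDATE statements are hypothetical stand-ins for your two updates. Every step is checked, and errors from the final ROLLBACK are deliberately ignored.

#include <stdio.h>
#include <sqlite3.h>

/* Run two updates on two tables inside one explicit transaction.
   On any failure, roll back and return the first error code. */
static int run_two_updates(sqlite3 *db) {
    int rc = sqlite3_exec(db, "BEGIN;", NULL, NULL, NULL);
    if (rc != SQLITE_OK) return rc;

    /* Hypothetical statements standing in for the two real updates. */
    rc = sqlite3_exec(db,
        "UPDATE accounts SET balance = balance - 100 WHERE id = 1;",
        NULL, NULL, NULL);
    if (rc == SQLITE_OK)
        rc = sqlite3_exec(db,
            "UPDATE ledger SET total = total + 100 WHERE account_id = 1;",
            NULL, NULL, NULL);

    /* Commit only if everything above succeeded. */
    if (rc == SQLITE_OK)
        rc = sqlite3_exec(db, "COMMIT;", NULL, NULL, NULL);

    if (rc != SQLITE_OK) {
        fprintf(stderr, "transaction failed: %s\n", sqlite3_errmsg(db));
        /* Ignore the ROLLBACK result: either the transaction was already
           rolled back automatically, or there is nothing more we can do. */
        sqlite3_exec(db, "ROLLBACK;", NULL, NULL, NULL);
    }
    return rc;
}

Running one sqlite3_exec per statement like this is hardly more work than the single combined string, and it lets you decide after each statement whether to continue, commit, or roll back.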
We have implemented 6-7 triggers on a table, and 4 of them are UPDATE triggers. All 4 of these triggers require long processing because of data manipulation and conditions. But whenever a trigger executes, all the pages on the website stop responding and hang for every other user, even users on different systems. Even when we execute an UPDATE statement on the table holding the triggers in SQL Server Management Studio, it also hangs. Can we resolve this hanging issue by moving the trigger code into a stored procedure and calling that stored procedure after the table's UPDATE statement?
I ask because I think the triggers block other users' access to the table while they execute. If not, can anyone suggest a solution?
Triggers are dangerous - they get fired whenever things happen, and you have no control over when and how often they fire.
You should definitely NOT do any time-consuming processing in a trigger! A trigger should be super fast, and lean.
If you need processing - let the trigger record the info needed into a separate "command" table, and have another process (e.g. a scheduled SQL Agent job) that checks that table for commands to be executed, and then executes those commands - separately, independently of the main application, in a separate execution path.
Don't block your main app by doing excessive data processing / manipulation in a trigger! That's the wrong place to do this!
Can we resolve this hanging issue by moving the trigger code into a stored procedure and calling that stored procedure after the table's UPDATE statement?
You have a box that weighs a ton. Does it get lighter when you put it into some nice packaging?
A trigger is already compiled. Putting it into a stored procedure is just dressing it up differently.
Your problem is that you abuse triggers to do heavy processing - something they should not do by design. Change the design.
I think the triggers block other users' access to the table while they execute.
Well, triggers do NO SUCH THING - that assumption is wrong.
A trigger does what it is told to do, and an empty trigger takes zero locks (any locks come from the statement that fired it). If you do set up a table-wide lock - fire whoever did that and redesign.
Triggers should be fast, light, and over quickly. NO heavy processing in them.
Without actually seeing the triggers it's impossible to diagnose this confidently but here goes...
The TRIGGER won't set up a lock as such, but if it sets off other UPDATE statements they'll require locks, and if those UPDATE statements fire other triggers then you could have a chain reaction that produces the kind of grief you seem to be experiencing.
If that sounds like what might be happening, then removing the triggers and doing the processing explicitly by running a stored procedure at the end may fix it. If the stored procedure is rubbish then you'll still have problems, but at least they'll be easier to fix. Try to ensure that the stored procedure only updates the records that need updating.
The main problem with shifting the functionality to a stored procedure that you run after the update is ensuring that it is in fact run every time.
If your ASP.NET skills are stronger than your T-SQL skills, then this should be a far easier problem to solve than untangling a web of SQL triggers.
The other issue is that between the update completing and the stored procedure completing, the records will be in an intermediate state, showing the initial change but not the remaining ones. This may or may not be a problem in your case.