Why can't I recover from a transaction rollback?

I'm using Entity Framework 6 and hitting a situation where I can't recover from a rolled back transaction.
I need to loop through a list, and for each item, add some entries to two tables. My code is roughly this:
Dim db = New Data.Context
For Each item In list
    Try
        Using tx = db.Database.BeginTransaction()
            'add objects to table 1
            'add objects to table 2
            db.SaveChanges()
            tx.Commit()
        End Using
    Catch ex As Exception
        'record the error and continue with the next item
    End Try
Next
I would expect it to loop through the whole list, adding entries whenever SaveChanges succeeds and logging the error whenever it fails.
But whenever the SaveChanges call fails, the transaction rolls back, I move on to the next item in the list, and then SaveChanges fails for that one too, with the same error. It's as if the context still has the new objects in it and tries to re-save them the next time through the loop. So, during the rollback process, how can I tell the context to forget about those objects so I can continue the loop?

SaveChanges synchronizes your in-memory objects with the database. You have added objects to the in-memory model, and they never go away until you remove them.
Adding an object does not queue an insert; it simply adds an object to the context. Until that object has been inserted, every SaveChanges call will try to bring the database up to the latest in-memory state.
EF is not a CRUD helper that you queue writes against. It conceptually mirrors the database in memory, and SaveChanges simply executes the DML necessary to make the database match.
Use one context per row.
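A minimal sketch of that approach, reusing the question's names (the commented lines stand in for the question's elided table code; everything else is illustrative):
For Each item In list
    Try
        'A fresh context per item: a failed SaveChanges leaves no stale
        'Added entities behind for the next iteration.
        Using db = New Data.Context
            'add objects to table 1
            'add objects to table 2
            db.SaveChanges() 'runs its DML in a single transaction by default
        End Using
    Catch ex As Exception
        'record the error and continue with the next item
    End Try
Next
Since EF6 already wraps each SaveChanges call in its own transaction, the explicit BeginTransaction/Commit pair adds nothing here; disposing the context at the end of each iteration is what discards the failed objects, so one bad item can no longer poison the rest of the loop.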


Can't delete from a table inside a trigger

I'm building a DB about the university for one of my courses, and I'm trying to create a trigger that doesn't allow a professor to be under 21 years old.
I have a Person class and then a Professor subclass.
What I want to happen is: you create a Person object, then a Professor object using that Person object's id, but if the Person is under 21, the trigger deletes the Professor object and then the Person object.
Everything works fine up until the "delete the Person object" part, which doesn't happen, and I'm not sure why. Any help?
This is the sqlite code I have:
CREATE TRIGGER ProfessorAgeCheck -- trigger name not in the original post; assumed
AFTER INSERT ON Professor
FOR EACH ROW
WHEN strftime('%J', 'now') - strftime('%J', (SELECT dateOfBirth FROM Person WHERE personId = NEW.personId)) < 7665 -- 21 years in days
BEGIN
    SELECT RAISE(ROLLBACK, 'Professor cant be under 21');
    DELETE FROM Person WHERE (personId = NEW.personId);
END;
One common issue is that there may not be a current transaction to roll back to, which would result in this error:
Error: cannot rollback - no transaction is active
If that occurs, then the trigger execution will be aborted and the delete never executed.
If ROLLBACK does succeed, continuing with the rest of the trigger would create a paradox: the rollback restores the state from before the trigger fired, so in a strictly ACID environment the INSERT never actually occurred and there would be nothing for the remaining statements to act on. To avoid this ambiguity, any call to RAISE() that is not RAISE(IGNORE) aborts processing of the trigger.
From the SQLite documentation on CREATE TRIGGER and the RAISE() function:
When one of RAISE(ROLLBACK,...), RAISE(ABORT,...) or RAISE(FAIL,...) is called during trigger-program execution, the specified ON CONFLICT processing is performed and the current query terminates. An error code of SQLITE_CONSTRAINT is returned to the application, along with the specified error message.
NOTE: This behaviour differs from some other RDBMSs; in MS SQL Server, for instance, execution specifically continues inside the trigger after a ROLLBACK.
As the OP does not provide the calling code, it is worth mentioning how SQLite treats RAISE(ROLLBACK, ...) when no explicit transaction is active:
If no transaction is active (other than the implied transaction that is created on every command) then the ROLLBACK resolution algorithm works the same as the ABORT algorithm.
Generally, if you wanted to create a Person and then a Professor as a single operation, you would validate the inputs first so the original insert is prevented from the start (in an RDBMS with stored procedures that validation would live in an SP; SQLite has no stored procedures, so it belongs in the calling code).
To maintain referential integrity, even if an SP is used, you could still add a check constraint on the Professor record or raise an ABORT from a BEFORE trigger to prevent the INSERT from occurring in the first place:
CREATE TRIGGER ProfessorAgeCheck -- name assumed, as above
BEFORE INSERT ON Professor
FOR EACH ROW
WHEN strftime('%J', 'now') - strftime('%J', (SELECT dateOfBirth FROM Person WHERE personId = NEW.personId)) < 7665 -- 21 years in days
BEGIN
    SELECT RAISE(ABORT, 'Professor can''t be under 21');
END;
This way it is up to the calling process to decide how to handle the error. The ABORT backs out the offending INSERT statement itself and can be caught in the calling logic, which can then roll back its own transaction if needed; the point is that negative side effects belong in the caller. As a general rule, triggers that cascade logic should only perform positive side effects, that is, they should only touch other data when the inserted row succeeds. Here the insert itself is being rolled back, so it is hard to justify the trigger deleting the Person at all.
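To make "handle it in the calling logic" concrete, here is a minimal sketch using Python's sqlite3 module (the database file name and exact column list are assumptions; the BEFORE trigger above is presumed installed). Because both inserts share one transaction, the ABORT leaves no orphaned Person row to clean up:
import sqlite3

conn = sqlite3.connect("university.db")  # file name assumed
try:
    with conn:  # one transaction: commits on success, rolls back on error
        cur = conn.execute("INSERT INTO Person (dateOfBirth) VALUES (?)",
                           ("2010-01-01",))  # a Person under 21
        conn.execute("INSERT INTO Professor (personId) VALUES (?)",
                     (cur.lastrowid,))
except sqlite3.IntegrityError as err:
    # RAISE(ABORT, ...) surfaces as IntegrityError; the Person insert
    # was rolled back along with the Professor insert
    print("rejected:", err)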

Is MLOAD executed in a single transaction?

I have an MLOAD job that inserts data from an Oracle database into a Teradata database. One of the things it does is drop the destination table and recreate it. Our production website populates a dropdown list based on what's in the destination table.
If the MLOAD script is not run as a single transaction, then it's possible that the dropdown list could fail to populate properly if the binding occurs during the MLOAD job. If it is transactional, however, the process would be seamless because the changes would not show until the transaction is committed.
I checked the dbc.DBQLogTbl and dbc.DBQLQryLogsql views after running the MLOAD job and it appears there are several transactions occurring within the job, so it would seem that the entire job is not done in a single transaction. However, I wanted to verify that this is indeed the case before I make assumptions.
A transaction in Teradata cannot include multiple DDL statements; each DDL must be committed separately.
An MLoad is treated logically as a single transaction even though you see multiple transactions in DBQL; those are steps to prepare and clean up.
When your application tries to select from the target table everything will be ok (unless it's doing a dirty read using LOCKING ROW FOR ACCESS).
Btw, there might be another error message "table doesn't exist" when the application tries to select. Why do you drop/recreate the table instead of a simple DELETE?
Another solution would be to load a copy of the table and use view switching:
mload tab2;
replace view v as select * from tab2;
delete from tab1;
The next load will do:
mload tab1;
replace view v as select * from tab1;
delete from tab2;
And so on. Of course your load job needs to implement the switching logic.

Updating multiple rows from a list in entity framework

I fetch a list of rows (Id, field1, f2...), do some computation, and store the result in an IList.
I want to now update all the values in this list to a table T. I am using entity framework and this needs to be a transaction.
Will it be fine if I open a TransactionScope and update using a stored proc, or is there a more efficient way to push multiple updates at once?
The SaveChanges method of ObjectContext is the gateway for persisting all changes made to entities to the database. When you call ObjectContext.SaveChanges(), it performs an insert, update, or delete on the database for each tracked entity based on its EntityState, and it executes all of that work inside a single transaction.
ObjectContext.SaveChanges();
Hope this helps.
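A minimal sketch of the simplest approach (the context, entity, and property names below are illustrative, not from the question): apply the computed values to the tracked entities and call SaveChanges once. All of the resulting UPDATE statements run in one transaction, so an explicit TransactionScope is only needed if other operations must enlist in the same transaction:
using (var context = new MyEntities()) // hypothetical ObjectContext subclass
{
    // ids: the keys of the rows fetched earlier; computed: Id -> new value
    var rows = context.T.Where(r => ids.Contains(r.Id)).ToList();
    foreach (var row in rows)
    {
        row.Field1 = computed[row.Id]; // marks the entity as Modified
    }
    context.SaveChanges(); // one call, one transaction, all UPDATEs
}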

Symfony2 and Doctrine: how to make an atomic operation?

Imagine this scenario:
I have an array of ids for some entities that have to be deleted from the database (i.e. a couple of external keys that identify a record in a third table) and an array of ids for some entities that have to be updated/inserted (based on some criteria that, at this moment, doesn't matter).
What can I do to delete those entities?
Load them from the db (repository way)
Call delete() on the obtained objects
Call flush() on my entity manager
In that scenario I can make the whole operation atomic, since I can update/insert other records before calling flush().
But why should I have to load records from the db just to delete them? So I wrote my own DQL delete query (in the repo) and call it.
The problem is that if I call that function in my repo, the operation is executed immediately, and so my "atomicity" can't be guaranteed.
So, how can I "jump" over this obstacle while following the second "delete option"?
By using flush() you make Doctrine start a transaction implicitly. It is also possible to control transactions explicitly, and that approach should solve your problem.
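A minimal sketch of the explicit approach (assuming $em is your Doctrine EntityManager; the entity alias and parameter names are illustrative):
$em->beginTransaction();
try {
    // The DQL delete executes immediately, but inside the open
    // transaction, so nothing is visible to others until commit()
    $em->createQuery('DELETE FROM AppBundle:MyEntity e WHERE e.id IN (:ids)')
       ->setParameter('ids', $idsToDelete)
       ->execute();

    // ... persist()/update the other entities here ...

    $em->flush();   // queued inserts/updates hit the database
    $em->commit();  // everything becomes visible atomically
} catch (\Exception $e) {
    $em->rollback(); // the DQL delete is undone as well
    throw $e;
}
Doctrine also offers EntityManager#transactional(), which wraps a callable in the same begin/flush/commit/rollback sequence.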

LINQ: Cannot insert duplicate key row in object 'dbo.tblOutstandingCompletions' with unique index

I have an application (ASP.NET 3.5) that allows users to rerun a particular process if required. The process inserts records into an MS SQL table. I have the insert in a Try / Catch and ignore the catch if a record already exists (the error in the title would be valid). This worked perfectly using ADO, but after I converted to LINQ I noticed an interesting thing. If on a re-run of the process there were already records in the table, any new records would be rejected with the same error even though no such record existed.
The code is as follows:
Dim ins = New tblOutstandingCompletion
With ins
    .ControlID = rec.ControlID
    .PersonID = rec.peopleID
    .RequiredDate = rec.NextDue
    .RiskNumber = 0
    .recordType = "PC"
    .TreatmentID = 0
End With
Try
    ldb.tblOutstandingCompletions.InsertOnSubmit(ins)
    ldb.SubmitChanges()
Catch ex As Exception
    ' An attempt to load a duplicate record will fail
End Try
The DataContext for the database was set during Page Load.
I resolved the problem by redefining the DataContext before each insert:
ldb = New CaRMSDataContext(sessionHandler.connection.ToString)
Dim ins = New tblOutstandingCompletion
While I have solved the problem I would like to know if anyone can explain it. Without the DataContext redefinition the application works perfectly if there are no duplicate records.
It sounds like the DataContext thinks the record was inserted the first time, so if you don't redefine the context, it rejects the second insert because it "knows" the record is already there. Redefining the context forces it to actually check the database to see if it's there, which it isn't. That's LINQ trying to save a round trip to the database. Creating a new context as you've done forces it to reset what it "knows" about the database.
I had seen a very similar issue in my code where the identity column wasn't an auto-incrementing int column but a GUID with a default value of newguid(). Basically, LINQ wasn't letting the database create the GUID, but was inserting Guid.Empty instead, and the second (or later) attempts would (correctly) throw this error.
I ended up ensuring that I generated a new GUID myself during the insert. More details can be seen here: http://www.doodle.co.uk/Blogs/2007/09/18/playing-with-linq-in-winforms.aspx
This allowed me to insert multiple records with the same DataContext.
Also, have you tried calling InsertOnSubmit multiple times (once for each new record) but only calling SubmitChanges once?
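That last suggestion would look roughly like this (reusing the question's names; the source collection recs is assumed):
For Each rec In recs
    Dim ins = New tblOutstandingCompletion With {
        .ControlID = rec.ControlID,
        .PersonID = rec.peopleID,
        .RequiredDate = rec.NextDue,
        .RiskNumber = 0,
        .recordType = "PC",
        .TreatmentID = 0
    }
    ldb.tblOutstandingCompletions.InsertOnSubmit(ins)
Next
ldb.SubmitChanges() 'one round trip; all pending inserts run in a single transaction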
gfrizzle seems to be right here...
My code fails with the duplicate key error even though I've just run a stored proc to truncate the table on the database. As far as the data context knows, the new record duplicates the one it inserted earlier with the same key, so an exception is thrown.
The only way that I've found around this is:
db = null;
db = new NNetDataContext();
right after the SubmitChanges() call that executes the previous InsertOnSubmit requests. Seems kind of dumb, but it's the only way that works for me other than redesigning the code.
