I've got a piece of code that looks like this:
public void Foo(int userId)
{
    try
    {
        using (var tran = NHibernateSession.Current.BeginTransaction())
        {
            var user = _userRepository.Get(userId);
            user.Address = "some new fake user address";
            _userRepository.Save(user);
            Validate();
            tran.Commit();
        }
    }
    catch (Exception)
    {
        logger.Error("log error and don't throw");
    }
}

private void Validate()
{
    throw new Exception();
}
And I'd like to unit test whether the validations were made correctly. I use NUnit and an SQLite database for testing. Here is the test code:
protected override void When()
{
    base.When();
    ownerOfFooMethod.Foo(1);
    Session.Flush();
    Session.Clear();
}

[Test]
public void FooTest()
{
    var fakeUser = userRepository.GetUserById(1);
    fakeUser.Address.ShouldNotEqual("some new fake user address");
}
My test fails.
While debugging I can see that the exception is thrown and Commit is never called. But my user still has "some new fake user address" in the Address property, although I was expecting it to be rolled back.
Looking at NHibernate Profiler I can see the begin-transaction statement, but it is followed by neither a commit nor a rollback.
What is more, even if I add a try-catch block and call Rollback explicitly in the catch, my test still fails.
I assume there is some problem in the testing environment, but everything seems fine to me.
Any ideas?
EDIT: I've added the important try-catch block (at the beginning I had simplified the code too much).
If the exception occurs before NHibernate has flushed the change to the database, and you then keep using that session without evicting/clearing the object, the change will still be persisted if a flush happens later for some reason, because the object is still dirty according to NHibernate. When rolling back a transaction you should immediately close the session to avoid this kind of problem.
Another way to put it: A rollback will not rollback in-memory changes you've made to persistent entities.
Also, if the session is a regular session, that call to Save() isn't needed, since the instance is already tracked by NH.
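To make that concrete, here is a minimal sketch reusing the names from the question (the explicit Save() call is dropped, as noted above): after the failed transaction, clear or evict so the dirty in-memory change can never be flushed later by the same session.
public void Foo(int userId)
{
    var session = NHibernateSession.Current;
    try
    {
        using (var tran = session.BeginTransaction())
        {
            var user = _userRepository.Get(userId);
            user.Address = "some new fake user address";
            Validate();                 // throws before Commit(); disposing the transaction rolls it back
            tran.Commit();
        }
    }
    catch (Exception)
    {
        logger.Error("log error and don't throw");
        session.Clear();                // the rollback did not undo the in-memory change;
                                        // Clear()/Evict(user) (or closing the session) discards it
    }
}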
I want some kind of mechanism to get more information about a caught exception (specifically exceptions I throw myself to abort transactions). I've looked around and pretty much the only thing I could find was "use the infolog". That does not seem like a good idea to me: for one, it is cumbersome to access and to find the last message, and it is limited in size, so at some point new messages won't even show up.
So my idea is the following: create a class NuException, pass an instance of it through all methods, and store that instance in the class where the work methods are located. When I need to throw an exception I call a method on it, similar to Global::error(), but this one takes an identifier and a message.
Once I reach my catch block I can access those fields from the object on the class that contains the work methods, similarly to how CLR exceptions work.
class NuException
{
    "public" str identifier;
    "public" str message;

    public Exception error(str _id, str _msg)
    {
        // set the fields so they can be read from the catch block
        identifier = _id;
        message = _msg;
        return Exception::Error;
    }
}

class Worker
{
    "public" NuException exception;

    void foo()
    {
        throw this.exception.error("Foo", "Record Foo already exists");
    }

    void bar()
    {
        this.foo();
    }
}

void Job()
{
    Worker w = new Worker();
    try
    {
        w.bar();
    }
    catch (Exception::Error)
    {
        info(w.exception().message());
    }
}
It works but isn't there a better way? Surely someone must have come up with a solution to work around this shortcoming in AX?
Short answer: yes.
While your "brilliant" scheme "works", it gets boring pretty fast, as you now must transport your NuException object deep down 20 level from the listener (job) to the thrower (foo). Your bar method and other middle men has no interest or knowledge about your exception scheme but must pass it on anyway.
This is no longer the case after the update.
There are several ways to go.
Use an observer pattern such as the event broker, or, in AX 2012 and newer, use delegates.
Stick to the infolog system and use an InfoAction class to piggyback the information to be used later. It can be used to display a stack trace or other interesting information.
Use a dedicated table for logging.
The third way may seem impractical, as any error will roll back the insert into the log. That is the default behavior, but it can be circumvented by inserting the log record on a separate connection:
MyLogTable log;
Connection con = new UserConnection();
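// The separate UserConnection runs in its own transaction scope, so the inserted
// log row survives even if the caller's main transaction is rolled back.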
con.ttsBegin();
log.setConnection(con);
... // Set your fields
log.insert();
con.ttsCommit();
Which way to go depends on circumstances you do not mention.
I've read multiple questions similar to this one, but none exactly matches my situation.
Using LINQ to SQL, I insert a new record and submit changes. Then, in the same web request, I pull that same record, update it, and submit changes again. The changes are not saved. The DataContext is the same across both of these operations.
Insert:
var transaction = _factory.CreateTransaction(siteId, userId, questionId, type, amount, transactionId, processor);

using (IUnitOfWork unitOfWork = UnitOfWork.Begin())
{
    transaction.Amount = amount;
    _transactionRepository.Add(transaction);
    unitOfWork.Commit();
}

Select and Update:

ITransaction transaction = _transactionRepository.FindById(transactionId);
if (transaction == null) throw new Exception(Constants.ErrorCannotFindTransactionWithId.FormatWith(transactionId));

using (IUnitOfWork unitOfWork = UnitOfWork.Begin())
{
    transaction.CrmId = crmId;
    transaction.UpdatedAt = SystemTime.Now();
    unitOfWork.Commit();
}
Here's the unit of work code:
public virtual void Commit()
{
    if (_isDisposed)
    {
        throw new ObjectDisposedException(GetType().Name);
    }

    _database.SubmitChanges();
}
I even went into the designer.cs file and put a breakpoint on the setter of the field that is being set but not updated. I stepped through, and it entered and executed the setter code, so the entity should be getting "notified" of the change to this field:
public string CrmId
{
    get
    {
        return this._CrmId;
    }
    set
    {
        if ((this._CrmId != value))
        {
            this.OnCrmIdChanging(value);
            this.SendPropertyChanging();
            this._CrmId = value;
            this.SendPropertyChanged("CrmId");
            this.OnCrmIdChanged();
        }
    }
}
Other useful information:
ObjectTracking is enabled
No errors or exceptions when the second SubmitChanges is called (the update just silently fails)
SQL Profiler shows the insert and the select but not the subsequent update statement; LINQ to SQL is not generating the update statement
There is only one database and one connection string, so the update is not going to another database
The table has a primary key.
I don't know what would cause LINQ to SQL to not issue the update command and not raise some kind of error. Perhaps the problem stems from using the same DataContext instance? I've even refreshed the object from the database using the DataContext.Refresh method before it is pulled for the update, but that didn't help.
I have found what is likely to be the root cause. I am using Unity. The initial insert is performed in a service class with a per-web-request lifetime, while the select and update happen in a class with a singleton lifetime. So my assumption that the DataContext instances are the same was incorrect.
So, in my class with the singleton lifetime, I now get a fresh instance of the database repository, perform the update, and there is no problem.
I still don't know why the original code didn't work, and my approach could be considered more a workaround than a solution, but it did solve my problem and hopefully will be useful to others.
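For illustration only, a rough sketch of that workaround (the service class name and the per-call resolution are assumptions; IUnityContainer.Resolve is the standard Unity call): the singleton-lifetime class resolves a fresh repository, and therefore a fresh DataContext, for each operation instead of holding one injected instance for its whole lifetime. How the repository and the unit of work share the DataContext depends on registrations not shown in the question.
public class TransactionUpdateService   // registered with a singleton lifetime
{
    private readonly IUnityContainer _container;

    public TransactionUpdateService(IUnityContainer container)
    {
        _container = container;
    }

    public void SetCrmId(int transactionId, string crmId)
    {
        // Resolve per call so this singleton never reuses a stale DataContext.
        var repository = _container.Resolve<ITransactionRepository>();

        ITransaction transaction = repository.FindById(transactionId);
        using (IUnitOfWork unitOfWork = UnitOfWork.Begin())
        {
            transaction.CrmId = crmId;
            transaction.UpdatedAt = SystemTime.Now();
            unitOfWork.Commit();
        }
    }
}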
In my web application, somewhere during the request cycle I call the following method on a repository:
repository.Delete(objectToDelete);
and this is the NHibernate implementation:
public void Delete(T entity)
{
    if (!session.Transaction.IsActive)
    {
        using (ITransaction transaction = session.BeginTransaction())
        {
            session.Delete(entity);
            transaction.Commit();
        }
    }
    else
    {
        session.Delete(entity);
    }
}
And session.Delete(entity) (inside the using statement) fails, which is fine because I have some database constraints and this is what I expected. However, at the end of the request, in Global.asax.cs, I close the session with the following code:
protected void Application_EndRequest(object sender, EventArgs e)
{
    ISession session = ManagedWebSessionContext.Unbind(HttpContext.Current, sessionFactory);
    if (session != null)
    {
        if (session.Transaction != null && session.Transaction.IsActive)
        {
            session.Transaction.Rollback();
        }
        else
        {
            session.Flush();
        }

        session.Close();
    }
}
This is the same session that was used to delete the object. And now, when:
session.Flush();
is called, NHibernate tries to perform the same DELETE operation, which throws an exception and the application crashes. Not exactly what I wanted, as I have already handled the exception before (at the repository level) and I show a pretty UI message box.
How can I prevent NHibernate from trying to perform the DELETE (and, I guess, UPDATE operations in other scenarios) again when session.Flush is called? Basically, I didn't design that Application_EndRequest, so I'm not sure whether it's a good approach to Flush everything there.
Thanks
NHibernate's flush mode is set to 'auto' by default, which means (amongst other things) that committing the transaction causes NHibernate to flush, and that flush is where the delete fails.
At the end of the request you manually flush the session again, telling NHibernate to do the delete again (since there is no active transaction).
The reason this is not working as expected is that your expectations are wrong. An NHibernate session is a unit of work, i.e. everything your application does in one request.
Transactions are a separate concern entirely. The fact that flushing the session fails the first time is also the reason it fails the second time.
If you want to prevent NHibernate from performing the delete a second time, don't flush twice; flush only once, either by committing the transaction or by doing it manually.
On a somewhat unrelated note: you are using NHibernate and transactions wrong. It will give you massive problems later on. There are some good resources online about how to use NHibernate in a web application.
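To make the repository-level handling final, here is a minimal sketch adapted from the Delete method in the question (the already-active-transaction branch is omitted for brevity, and whether you rethrow or swallow depends on your existing error handling): roll back and clear the session so the failed delete is no longer pending when Application_EndRequest flushes.
public void Delete(T entity)
{
    using (ITransaction transaction = session.BeginTransaction())
    {
        try
        {
            session.Delete(entity);
            transaction.Commit();   // FlushMode.Auto: the delete is flushed here
        }
        catch (ADOException)
        {
            transaction.Rollback();
            session.Clear();        // discard the queued delete so a later Flush() won't retry it
            throw;                  // let the caller show its friendly message
        }
    }
}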
I'm just learning NHibernate and have come across what is probably a simple issue to resolve.
Right, so I've figured out so far that you can't/shouldn't nest NHibernate transactions within each other; in my case I figured this out when scope moved to another routine and I started a new transaction.
So should I be doing the following?
using (ITransaction transaction = session.BeginTransaction())
{
    NHibernateMembership mQuery =
        session.QueryOver<NHibernateMembership>()
            .Where(x => x.Username == username)
            .And(x => x.ApplicationName == ApplicationName)
            .SingleOrDefault();

    if (mQuery != null)
    {
        mQuery.PasswordQuestion = newPwdQuestion;
        mQuery.PasswordAnswer = EncodePassword(newPwdAnswer);
        session.Update(mQuery);
        transaction.Commit();
        passwordQuestionUpdated = true;
    }
}

// Assume this is in another routine elsewhere but being
// called right after the first in the same request
using (ITransaction transaction = session.BeginTransaction())
{
    NHibernateMembership mQuery =
        session.QueryOver<NHibernateMembership>()
            .Where(x => x.Username == username)
            .And(x => x.ApplicationName == ApplicationName)
            .SingleOrDefault();

    if (mQuery != null)
    {
        mQuery.PasswordQuestion = newPwdQuestion;
        mQuery.PasswordAnswer = EncodePassword(newPwdAnswer);
        session.Update(mQuery);
        transaction.Commit();
        passwordQuestionUpdated = true;
    }
}
Note: I know they are simply a copy; I'm just demonstrating my question.
First Question
Is this the way it is MEANT to be done? Transaction per operation?
Second Question
Do I need to call transaction.Commit() each time, or only in the last set?
Third Question
Is there a better way, automated or manual, to do this?
Fourth Question
Can I use session.Transaction.IsActive to determine whether the current session is already part of a transaction? In that case I could put the transaction wrapper at the highest level, say the Web Form code, let the routines be called within it, and then commit all the work at the end. Is this a flawed approach?
I really want to hammer this down so I start as I mean to go on; I don't want to end up with thousands of lines of code that I then need to change.
Thanks in advance.
EDIT:
Right so I wrote some code to explain my issue exactly.
private void CallingRoutine()
{
using(ISession session = Helper.GetCurrentSession)
{
using (ITransaction transaction = session.BeginTransaction())
{
// RUN nHIbernate QUERY to get an OBJECT-1
// DO WORK on OBJECT
// Need to CALL an EXTERNAL ROUTINE to finish work
ExternalRoutine();
// DO WORK on OBJECT-1 again
// *** At this point ADO exception triggers
}
}
}
private bool ExternalRoutine()
{
using(ISession session = Helper.GetCurrentSession)
{
using (ITransaction transaction = session.BeginTransaction())
{
// RUN nHIbernate QUERY to get an OBJECT-2
// DO WORK on OBJECT
// Determine result
if(Data)
{
return true;
}
return false;
}
}
}
Hopefully this demonstrates the issue. This is how I understood transactions should be written, but notice where the ADO exception triggers. I'm obviously doing something wrong. How am I meant to write these routines?
Take, for example, writing a helper object for some provider where each exposed routine runs an NHibernate query: how would I write those routines, with regard to transactions, assuming I knew nothing about the calling function or data? My job is to work with NHibernate effectively and efficiently and return a result.
That is what I assumed when writing the transaction the way I did in ExternalRoutine(): treat it as the only use of NHibernate and explicitly create the transaction.
If possible, I would suggest using System.Transactions.TransactionScope:
using (var trans = new TransactionScope())
using (var session = .. create your session...)
{
    ... do your stuff ...

    trans.Complete();
}
The trans.Complete() call will cause the session to flush and commit the transaction; in addition, you can have nested TransactionScopes. The only "downside" is that it will escalate to DTC if you have multiple connections (or enlisted resources such as MSMQ), but that is not necessarily a downside unless you're using something like MySQL, which doesn't play nicely with DTC.
For simple cases I would probably create a transaction scope in BeginRequest and commit it in EndRequest if there were no errors (or use a filter if you're using ASP.NET MVC), but that really depends a lot on what you're doing. As long as your operations are short (which they should be in a web app), this should be fine. The overhead of creating a session and transaction scope is not so big that it should cause problems for accessing static resources (or resources that don't need the session), unless you're looking at a really high-volume / high-performance website.
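To make the nesting point concrete, here is a rough sketch (the routine names are made up): an inner TransactionScope created with the default TransactionScopeOption.Required joins the ambient transaction, so nothing is committed until the outermost scope completes.
using System.Transactions;

void UpdatePasswordQuestion()
{
    using (var outer = new TransactionScope())
    {
        UpdateMembershipRecord();   // hypothetical routine doing NHibernate work
        ExternalRoutine();          // has its own TransactionScope, see below
        outer.Complete();           // only now does the whole unit of work commit
    }
}

void ExternalRoutine()
{
    // Required (the default) enlists in the ambient transaction if one exists,
    // otherwise it starts a new one, so this routine also works standalone.
    using (var inner = new TransactionScope(TransactionScopeOption.Required))
    {
        // ... NHibernate query / update against the current session ...
        inner.Complete();
    }
}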
I am very new to Workflow Foundation development, and am worried that I am opening serious holes in our business process handling by not properly handling application / database exceptions in custom activities.
I would appreciate some steps that I could take to add this resiliency to my custom activities so that I can easily use the designer and other tools to ensure that, as far as I can, I do not create custom activities that are brittle and likely to cause workflow cleanup issues.
Here are some options, at different execution stages, that you can use to handle exceptions.
First option (at activity/workflow execution time):
First of all, in custom activities you should always try to handle exceptions inside the activity's execution. Some activities might fail while the overall workflow can still continue; in such cases, logging the error to persistence, and even showing the user that something didn't work as expected but that the process will continue, are good options.
That being said, there will always be cases where an activity has to (and even should) throw exceptions, and those should be handled at the workflow level. Something like: if this exception occurs on this activity, do this; otherwise, do that.
Let's imagine you have a custom activity which persists something to the database:
public sealed class PersistIntegerToDb : CodeActivity
{
    public InArgument<int> ValueToPersist { get; set; }

    protected override void Execute(CodeActivityContext context)
    {
        try
        {
            // persist ValueToPersist.Get(context)
        }
        catch (SqlException)
        {
            // rethrow the SqlException so it can be handled at workflow level
            throw;
        }
    }
}
Then, in code or through the designer, you have the TryCatch activity available to catch that error and handle it the way you want:
var workflow = new TryCatch
{
    Try = new PersistIntegerToDb
    {
        ValueToPersist = 10
    },
    Catches =
    {
        new Catch<SqlException>
        {
            Action = new ActivityAction<SqlException>
            {
                Handler = new WriteLine
                {
                    Text = "An error occurred and the value wasn't saved! Anyway workflow will continue..."
                }
            }
        }
    }
};
Or you can terminate it using TerminateWorkflow.
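For example, the Catch<SqlException> above could use a TerminateWorkflow activity as its handler instead of the WriteLine (a small sketch; Reason is a real property of TerminateWorkflow):
new Catch<SqlException>
{
    Action = new ActivityAction<SqlException>
    {
        Handler = new TerminateWorkflow
        {
            Reason = "Persisting the value failed, so the workflow cannot continue."
        }
    }
}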
Second option (at design time):
Ok, but you could argue that the client doesn't know they have to handle those cases. In that case, and this is a usability option you might consider, instead of making PersistIntegerToDb available on the designer, you can provide an activity already surrounded by exception catches, through IActivityTemplateFactory:
public sealed class PersistIntegerToDbFactory : IActivityTemplateFactory
{
    public Activity Create(DependencyObject target)
    {
        return new TryCatch
        {
            Try = new PersistIntegerToDb
            {
                ValueToPersist = 10
            },
            Catches =
            {
                new Catch<SqlException>
                {
                }
            }
        };
    }
}
Now you just add PersistIntegerToDbFactory as if it were a regular activity:
new ToolboxItemWrapper(typeof(PersistIntegerToDbFactory), null, "Persist Integer");
Third option (at validation time):
Never forget to validate the workflow before execution!
var validationResults = ActivityValidationServices.Validate(workflow);

foreach (var error in validationResults.Errors)
{
    Console.WriteLine(string.Format(
        "Validation error '{0}', generated on activity '{1}' in the property named {2}",
        error.Message,
        error.Source.DisplayName,
        error.PropertyName));
}
Fourth option (at application execution time):
You can handle any untreated exceptions that might happen during execution using OnUnhandledException:
var wfApp = new WorkflowApplication(activity);
wfApp.OnUnhandledException +=
    delegate(WorkflowApplicationUnhandledExceptionEventArgs e)
    {
        if (e.UnhandledException is SqlException)
        {
            Console.WriteLine("Some data wasn't properly persisted.");
        }
        else
        {
            Console.WriteLine("Unknown error: " + e.UnhandledException.GetType());
            Console.WriteLine("With message: " + e.UnhandledException.Message);
        }

        Console.WriteLine("Ok, the workflow will be aborted.");
        return UnhandledExceptionAction.Abort;
    };
Note that at this stage you can only Abort, Cancel, or Terminate the workflow, which is why you should 1) avoid throwing exceptions or 2) handle exceptions inside your workflow. OnUnhandledException is your last chance to end the workflow execution gracefully and should always be handled, even if only for logging purposes. Something like a DivideByZeroException can occur and is almost impossible to predict and catch at validation time, for example.
As far as custom activities go, you should treat them as any other piece of code. Handle the errors you can, and let the rest bubble up.
At the workflow level you can use the TryCatch activity and workflow persistence to deal with errors. Persistence especially is something people often overlook. Add Persist activities at appropriate steps in your workflow and set the workflow to abort on unhandled errors. Then you can go back in, reload the last good workflow state, and retry the actions that caused the unhandled exception. It is a great way of recovering from failures with resources like databases that might be unavailable for some reason and then come back.
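A rough sketch of that abort-and-reload pattern, assuming a SQL instance store is already provisioned (the connection string and the MyLongRunningWorkflow activity are placeholders):
using System;
using System.Activities;
using System.Activities.DurableInstancing;

var store = new SqlWorkflowInstanceStore("Server=.;Database=WfInstanceStore;Integrated Security=True");

var wfApp = new WorkflowApplication(new MyLongRunningWorkflow())
{
    InstanceStore = store,
    // Abort keeps the last persisted state in the store instead of discarding it.
    OnUnhandledException = e => UnhandledExceptionAction.Abort
};
Guid instanceId = wfApp.Id;
wfApp.Run();

// ... later, e.g. once the database is reachable again, resume from the last
// persisted point (the lock on the aborted instance must have been released) ...
var resumedApp = new WorkflowApplication(new MyLongRunningWorkflow())
{
    InstanceStore = store
};
resumedApp.Load(instanceId);
resumedApp.Run();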