NHibernate tries to flush an operation that already failed - how to avoid it - asp.net

In my web application, somewhere during the request cycle, I call the following method on a repository:
repository.Delete(objectToDelete);
and this is NHibernate implementation:
public void Delete(T entity)
{
    if (!session.Transaction.IsActive)
    {
        using (ITransaction transaction = session.BeginTransaction())
        {
            session.Delete(entity);
            transaction.Commit();
        }
    }
    else
    {
        session.Delete(entity);
    }
}
The session.Delete(entity) call (inside the using statement) fails - which is fine, because I have some database constraints and this is what I expected. However, at the end of the request, in Global.asax.cs, I close the session with the following code:
protected void Application_EndRequest(object sender, EventArgs e)
{
    ISession session = ManagedWebSessionContext.Unbind(HttpContext.Current, sessionFactory);
    if (session != null)
    {
        if (session.Transaction != null && session.Transaction.IsActive)
        {
            session.Transaction.Rollback();
        }
        else
        {
            session.Flush();
        }
        session.Close();
    }
}
This is the same session that was used to delete the object. And now, when:
session.Flush();
is called, NHibernate tries to perform the same DELETE operation - that throws an exception and the application crashes. Not exactly what I wanted, as I have already handled the exception earlier (at the repository level) and I show a pretty UI message box.
How can I prevent NHibernate from trying to perform the DELETE (and, I guess, UPDATE operations in other scenarios) again when that session.Flush() is called? Basically, I didn't design that Application_EndRequest, so I'm not sure whether flushing everything there is a good approach.
Thanks

NHibernate's flush mode is set to 'auto' by default, which means (amongst other things) that committing the transaction causes NHibernate to flush - and the delete fails.
At the end of the request you manually flush the session again, telling NHibernate to do the delete again (since there is no active transaction).
The reason this is not working as expected is that your expectations are wrong. An NHibernate session is a unit of work, i.e. everything your application does in one request.
Transactions are completely unrelated. The fact that flushing the session fails the first time is also the reason it fails the second time.
If you want to prevent NHibernate from performing the delete a second time, don't flush twice - flush only once, either by committing the transaction or by doing it manually.
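For example, a minimal sketch of the Application_EndRequest above with the second flush removed (assuming the repository's transaction.Commit() stays the only place that flushes):
// Sketch: the repository already flushes via transaction.Commit(), so at the end
// of the request we only roll back anything still open and close the session,
// instead of flushing the same failed operation again.
protected void Application_EndRequest(object sender, EventArgs e)
{
    ISession session = ManagedWebSessionContext.Unbind(HttpContext.Current, sessionFactory);
    if (session == null) return;

    if (session.Transaction != null && session.Transaction.IsActive)
    {
        // something was left uncommitted - discard it rather than flush it
        session.Transaction.Rollback();
    }

    session.Close();
}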
On a somewhat unrelated note: you are using NHibernate and transactions incorrectly, and it will give you massive problems later on. There are some good resources online about how to use NHibernate in a web application.

Related

JavaFX - Handle Exceptions in one place

I am working on a JavaFX application and I want to know if there is a way to handle exceptions in one place.
I am doing inserts into a database. When an insert fails, I get an SQLException.
So, is it possible to handle all SQLExceptions (for all inserts) in one place?
I'm aware of:
Thread.setDefaultUncaughtExceptionHandler(...);
But this is probably not the way to go?
It is bad practice to call any code that executes your SQL query (or any other business logic that may take a long time to execute) directly on the JavaFX application thread. (I've observed that under Windows, JavaFX applications crash without even printing a stack trace when an uncaught exception is thrown on the application thread.)
I would suggest calling your SQL-related code using a javafx.concurrent.Task.
Using the setOnFailed() method you can have code invoked whenever an exception is thrown. There you can look at the type of exception and call any method that handles your SQLException.
Task<SOME_TYPE> mySqlTask = new Task<SOME_TYPE>() {
    @Override
    protected SOME_TYPE call() throws Exception {
        ... // do sql stuff
        return mySqlResult; // or null if not needed
    }
};
mySqlTask.setOnFailed(event -> {
    Throwable exception = mySqlTask.getException();
    if (exception instanceof SQLException) {
        // call code that handles the sql exception
    }
});
// start the task in a separate thread (or better, use an Executor)
new Thread(mySqlTask).start();
By the way, I don't think that using Thread.setDefaultUncaughtExceptionHandler(...); is the way to go either.

Linq-To-Sql SubmitChanges Not Updating Database

I've read multiple questions similar to this one, but none of them matches my situation exactly.
Using LINQ to SQL, I insert a new record and submit changes. Then, in the same web request, I pull that same record, update it, and submit changes again. The changes are not saved. The DataContext is the same across both of these operations.
Insert:
var transaction = _factory.CreateTransaction(siteId, userId, questionId, type, amount, transactionId, processor);
using (IUnitOfWork unitOfWork = UnitOfWork.Begin())
{
    transaction.Amount = amount;
    _transactionRepository.Add(transaction);
    unitOfWork.Commit();
}
Select and Update:
ITransaction transaction = _transactionRepository.FindById(transactionId);
if (transaction == null) throw new Exception(Constants.ErrorCannotFindTransactionWithId.FormatWith(transactionId));

using (IUnitOfWork unitOfWork = UnitOfWork.Begin())
{
    transaction.CrmId = crmId;
    transaction.UpdatedAt = SystemTime.Now();
    unitOfWork.Commit();
}
Here's the unit of work code:
public virtual void Commit()
{
    if (_isDisposed)
    {
        throw new ObjectDisposedException(GetType().Name);
    }
    _database.SubmitChanges();
}
I even went into the designer.cs file and put a breakpoint on the field that is being set but not updated. I stepped through, and it entered and executed the setter code, so the entity should be getting "notified" of the change to this field:
public string CrmId
{
    get
    {
        return this._CrmId;
    }
    set
    {
        if ((this._CrmId != value))
        {
            this.OnCrmIdChanging(value);
            this.SendPropertyChanging();
            this._CrmId = value;
            this.SendPropertyChanged("CrmId");
            this.OnCrmIdChanged();
        }
    }
}
Other useful information:
ObjectTracking is enabled
No errors or exceptions when the second SubmitChanges is called (the update just silently fails)
SQL profiler shows insert and select but not the subsequent update statement. Linq-To-Sql is not generating the update statement.
There is only one database and one connection string, so the update is not going to another database
The table has a primary key.
I don't know what would cause LINQ to SQL to not issue the update command and not raise some kind of error. Perhaps the problem stems from using the same DataContext instance? I've even refreshed the object from the database using the DataContext.Refresh method before it is pulled for the update, but that didn't help.
I have found what is likely to be the root cause. I am using Unity. The initial insert is being performed in a service class with a PerWebRequest lifetime. The select and update are happening in a class with a Singleton lifetime. So my assumption that the DataContext instances are the same was incorrect.
So, in my class with the Singleton lifetime, I get a fresh instance of the database repository, perform the update there, and there's no problem.
Now I still don't know why the original code didn't work, and my approach could be considered more of a workaround than a solution, but it did solve my problem and hopefully it will be useful to others.
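For anyone hitting the same thing, here is a rough sketch of that workaround (the _container field is hypothetical and stands for whatever gives access to the Unity container; ITransactionRepository and IUnitOfWork are the abstractions from the code above):
// Inside the Singleton-lifetime class: resolve a fresh repository (and with it a
// fresh DataContext) for this operation instead of relying on one injected at
// construction time, so the context that loaded the entity is the one that
// submits the update.
var repository = _container.Resolve<ITransactionRepository>(); // hypothetical container field

ITransaction transaction = repository.FindById(transactionId);
using (IUnitOfWork unitOfWork = UnitOfWork.Begin())
{
    transaction.CrmId = crmId;
    transaction.UpdatedAt = SystemTime.Now();
    unitOfWork.Commit();
}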

nhibernate transactions and unit testing

I've got a piece of code that looks like this:
public void Foo(int userId)
{
    try
    {
        using (var tran = NHibernateSession.Current.BeginTransaction())
        {
            var user = _userRepository.Get(userId);
            user.Address = "some new fake user address";
            _userRepository.Save(user);
            Validate();
            tran.Commit();
        }
    }
    catch (Exception)
    {
        logger.Error("log error and don't throw");
    }
}

private void Validate()
{
    throw new Exception();
}
And I'd like to unit test whether the validations are applied correctly. I use NUnit and an SQLite database for testing. Here is the test code:
protected override void When()
{
    base.When();
    ownerOfFooMethod.Foo(1);
    Session.Flush();
    Session.Clear();
}

[Test]
public void FooTest()
{
    var fakeUser = userRepository.GetUserById(1);
    fakeUser.Address.ShouldNotEqual("some new fake user address");
}
My test fails.
While debugging I can see that the exception is thrown and Commit has not been called. But my user still has "some new fake user address" in the Address property, although I was expecting it to be rolled back.
Looking at NHibernate Profiler, I can see a begin-transaction statement, but it is followed by neither a commit nor a rollback.
What is more, even if I put a try-catch block there and call Rollback explicitly in the catch, my test still fails.
I assume there is some problem in the testing environment, but everything seems fine to me.
Any ideas?
EDIT: I've added the important try-catch block (at the beginning I had simplified the code too much).
If the exception occurs before NH has flushed the change to the database, and if you then keep using that session without evicting/clearing the object and a flush occurs later for some reason, the change will still be persisted, since the object is still dirty according to NHibernate. When rolling back a transaction you should immediately close the session to avoid this kind of problem.
Another way to put it: a rollback will not roll back in-memory changes you've made to persistent entities.
Also, if the session is a regular session, that call to Save() isn't needed, since the instance is already tracked by NH.
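To illustrate, here is a minimal sketch of that advice applied to the Foo method from the question (assuming NHibernateSession.Current is the same session the test later flushes):
public void Foo(int userId)
{
    try
    {
        using (var tran = NHibernateSession.Current.BeginTransaction())
        {
            var user = _userRepository.Get(userId);
            user.Address = "some new fake user address";
            // Save() is not needed here - the instance is already tracked by the session.
            Validate();
            tran.Commit();
        }
    }
    catch (Exception)
    {
        logger.Error("log error and don't throw");
        // The transaction is gone, but 'user' is still dirty in the session.
        // Clear (or close) the session so a later Session.Flush() in the test
        // doesn't persist the in-memory change anyway.
        NHibernateSession.Current.Clear();
    }
}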

What steps can I take to make a workflow activity resilient to exceptions

I am very new to Workflow Foundation development, and am worried that I am opening serious holes in our business process handling by not properly handling application / database exceptions in custom activities.
I would appreciate some steps that I could take to add this resiliency to my custom activities so that I can easily use the designer and other tools to ensure that, as far as I can, I do not create custom activities that are brittle and likely to cause workflow cleanup issues.
Here are some options, at different execution stages, that are available for you to use to handle exceptions.
First option (at activity/workflow execution time):
First of all, in custom activities you should always try to handle exceptions inside the activity's execution. Some activities might fail while the overall workflow can still continue; in such cases, logging the error to persistence, or even showing the user that something didn't work as expected but that the workflow will continue, are good options.
That being said, there will always be cases where an activity has to (and even should) throw exceptions, and those should be handled at the workflow level. Something like: if this exception occurs on this activity, do this; otherwise, do that.
Let's imagine you have a custom activity which persists something to the database:
public sealed class PersistIntegerToDb : CodeActivity
{
    public InArgument<int> ValueToPersist { get; set; }

    protected override void Execute(CodeActivityContext context)
    {
        try
        {
            // persist
        }
        catch (SqlException)
        {
            // 'ValueToPersist' wasn't persisted - re-throw the original SqlException
            throw;
        }
    }
}
Then, in code or through the designer, you have the TryCatch activity available to catch that error and handle it the way you want:
var workflow = new TryCatch
{
    Try = new PersistIntegerToDb
    {
        ValueToPersist = 10
    },
    Catches =
    {
        new Catch<SqlException>
        {
            Action = new ActivityAction<SqlException>
            {
                Handler = new WriteLine
                {
                    Text = "An error occurred and the value wasn't saved! Anyway, the workflow will continue..."
                }
            }
        }
    }
};
Or you can terminate it using TerminateWorkflow.
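For example, a minimal sketch of that alternative, reusing the PersistIntegerToDb activity above (the reason text here is just an illustration):
// Instead of a WriteLine handler, end the workflow with TerminateWorkflow,
// passing along the caught SqlException and a reason.
var caught = new DelegateInArgument<SqlException>();
var terminatingWorkflow = new TryCatch
{
    Try = new PersistIntegerToDb { ValueToPersist = 10 },
    Catches =
    {
        new Catch<SqlException>
        {
            Action = new ActivityAction<SqlException>
            {
                Argument = caught,
                Handler = new TerminateWorkflow
                {
                    Reason = "The value couldn't be persisted, so the workflow ends here.",
                    Exception = new InArgument<Exception>(ctx => caught.Get(ctx))
                }
            }
        }
    }
};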
Second option (at design time):
OK, but you could argue that the client doesn't know that he has to handle those cases. In that case, and this is a usability option you might consider, instead of making PersistIntegerToDb available on the designer, you can provide an activity already surrounded by the exception catches to handle, through IActivityTemplateFactory:
public sealed class PersistIntegerToDbFactory : IActivityTemplateFactory
{
    public Activity Create(DependencyObject target)
    {
        return new TryCatch
        {
            Try = new PersistIntegerToDb
            {
                ValueToPersist = 10
            },
            Catches =
            {
                new Catch<SqlException>
                {
                }
            }
        };
    }
}
Now you just add PersistIntegerToDbFactory as if it were a regular activity:
new ToolboxItemWrapper(typeof(PersistIntegerToDbFactory), null, "Persist Integer");
Third option (at validation time):
Never forget to validate the workflow before execution!
var validationResults = ActivityValidationServices.Validate(workflow);
foreach (var error in validationResults.Errors)
{
    Console.WriteLine(string.Format(
        "Validation error '{0}', generated on activity '{1}' in the property named {2}",
        error.Message,
        error.Source.DisplayName,
        error.PropertyName));
}
Fourth option (at application execution time):
You can handle any unhandled exception that occurs during execution using the OnUnhandledException event:
var wfApp = new WorkflowApplication(activity);
wfApp.OnUnhandledException +=
    delegate(WorkflowApplicationUnhandledExceptionEventArgs e)
    {
        if (e.UnhandledException is SqlException)
        {
            Console.WriteLine("Some data wasn't properly persisted.");
        }
        else
        {
            Console.WriteLine("Unknown error: " + e.UnhandledException.GetType());
            Console.WriteLine("With message: " + e.UnhandledException.Message);
        }
        Console.WriteLine("OK, the workflow will be aborted");
        return UnhandledExceptionAction.Abort;
    };
Note that, at this stage, you can only Abort, Cancel or Terminate the workflow, and that's the reason why you should 1) avoid throwing exceptions or 2) handle exceptions inside your workflow. OnUnhandledException is your last chance to end the workflow execution gracefully and should always be handled, even if only for logging purposes. Something like a DivideByZeroException can occur and is almost impossible to predict and catch at validation time, for example.
As far as custom activities go, you should treat them as any other piece of code. Handle the errors you can, and let the rest that you can't handle bubble up.
At the workflow level you can use the TryCatch activity and workflow persistence to deal with errors. Persistence in particular is something people often overlook. Add Persist activities at appropriate steps in your workflow and set the workflow to abort on unhandled errors. Then you can go back in, reload the last good workflow state, and retry the actions that caused the unhandled exception. It's a great way of recovering from failures with resources like databases that might be unavailable for some reason and then come back.
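A rough sketch of that pattern (the connection string is a placeholder, and the workflow body just reuses activities from the earlier examples):
// Persist between steps so there is a known-good state to go back to.
var workflow = new Sequence
{
    Activities =
    {
        new WriteLine { Text = "step 1" },
        new Persist(),                                   // checkpoint after step 1
        new PersistIntegerToDb { ValueToPersist = 10 },
        new Persist()                                    // checkpoint after the risky step
    }
};

var store = new SqlWorkflowInstanceStore("connection string here");  // placeholder
var wfApp = new WorkflowApplication(workflow) { InstanceStore = store };
wfApp.OnUnhandledException = e => UnhandledExceptionAction.Abort;     // abort, don't terminate
Guid instanceId = wfApp.Id;
wfApp.Run();

// Later (e.g. once the database is reachable again), reload the last persisted
// state and resume from there.
var resumedApp = new WorkflowApplication(workflow) { InstanceStore = store };
resumedApp.Load(instanceId);
resumedApp.Run();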

ASP.NET lock() doesn't work

I try to put a lock on a static string object to control access to the cache. The lock() block executes fine on my local machine, but whenever I deploy it to the server, it locks forever. I write every single step to the event log to follow the process, and lock(object) just causes a deadlock on the server. The command right after lock() is never executed, as I don't see an entry in the event log.
Below is the code:
public static string CacheSyncObject = "CacheSync";

public static DataView GetUsers()
{
    DataTable dtUsers = null;

    if (HttpContext.Current.Cache["dtUsers"] != null)
    {
        dtUsers = HttpContext.Current.Cache["dtUsers"] as DataTable;
        Global.eventLogger.Write(String.Format("GetUsers() cache hit: {0}", dtUsers.Rows.Count));
        return dtUsers.Copy().DefaultView;
    }

    Global.eventLogger.Write("GetUsers() cache miss");

    lock (CacheSyncObject)
    {
        Global.eventLogger.Write("GetUsers() locked SyncObject"); // ==> this is never written to the log, which means to me that lock() never gets past this point

        if (HttpContext.Current.Cache["dtUsers"] != null)
        {
            Global.eventLogger.Write("GetUsers() opps, another thread filled the cache, release lock");
            return (HttpContext.Current.Cache["dtUsers"] as DataTable).Copy().DefaultView;
        }
You're locking on a string, which is a generally bad idea in .NET due to interning. The .NET runtime actually stores all identical literal strings only once, so you have little control over who sees a specific string.
I'm not sure how the ASP.NET runtime handles this, but the regular .NET runtime actually uses interning for the entire process, which means that interned strings are shared even between different AppDomains. Thus you could be deadlocking between different instances of your method.
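A quick standalone illustration of that interning behaviour (not from the original post):
// Two separate-looking literals are interned to the same object, so two pieces
// of code that each "lock on their own string" can end up locking the same thing.
string a = "CacheSync";
string b = "Cache" + "Sync";                     // folded to the literal "CacheSync" at compile time
Console.WriteLine(object.ReferenceEquals(a, b)); // True - same interned instance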
What happens if you use:
public static object CacheSyncObject = new object();
