I'm just learning NHibernate and have come across what is probably a simple issue to resolve.
So far I've figured out that you can't/shouldn't nest NHibernate transactions within each other; in my case I discovered this when scope moved to another routine and I started a new transaction.
So should I be doing the following?
using (ITransaction transaction = session.BeginTransaction())
{
    NHibernateMembership mQuery =
        session.QueryOver<NHibernateMembership>()
            .Where(x => x.Username == username)
            .And(x => x.ApplicationName == ApplicationName)
            .SingleOrDefault();

    if (mQuery != null)
    {
        mQuery.PasswordQuestion = newPwdQuestion;
        mQuery.PasswordAnswer = EncodePassword(newPwdAnswer);
        session.Update(mQuery);
        transaction.Commit();
        passwordQuestionUpdated = true;
    }
}
// Assume this is in another routine elsewhere but being
// called right after the first in the same request
using (ITransaction transaction = session.BeginTransaction())
{
    NHibernateMembership mQuery =
        session.QueryOver<NHibernateMembership>()
            .Where(x => x.Username == username)
            .And(x => x.ApplicationName == ApplicationName)
            .SingleOrDefault();

    if (mQuery != null)
    {
        mQuery.PasswordQuestion = newPwdQuestion;
        mQuery.PasswordAnswer = EncodePassword(newPwdAnswer);
        session.Update(mQuery);
        transaction.Commit();
        passwordQuestionUpdated = true;
    }
}
Note: I know the two blocks are simply copies; I'm just demonstrating my question.
First Question
Is this the way it is MEANT to be done? Transaction per operation?
Second Question
Do I need to call transaction.Commit() each time, or only in the last set?
Third Question
Is there a better way, automated or manual, to do this?
Fourth Question
Can I use session.Transaction.IsActive to determine whether the current session is already part of a transaction? In that case I could start the transaction at the highest level, say the Web Form code, let routines be called within it, and commit all the work at the end. Is this a flawed method? (A rough sketch of what I mean follows.)
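Roughly, a sketch of the check I mean (the ownsTransaction flag is just illustrative, not code I've actually run):

// Sketch only: each routine checks whether a transaction is already
// active, and only commits/rolls back a transaction it started itself.
ITransaction tx = null;
bool ownsTransaction = !session.Transaction.IsActive;

if (ownsTransaction)
    tx = session.BeginTransaction();

try
{
    // ... NHibernate work here ...

    if (ownsTransaction)
        tx.Commit();
}
catch
{
    if (ownsTransaction && tx != null)
        tx.Rollback();
    throw;
}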
I really want to hammer this down so I start as I mean to go on; I don't want to be thousands of lines of code in and then find I need to change it all.
Thanks in advance.
EDIT:
Right, so I wrote some code to explain my issue exactly.
private void CallingRoutine()
{
    using (ISession session = Helper.GetCurrentSession)
    {
        using (ITransaction transaction = session.BeginTransaction())
        {
            // RUN NHibernate QUERY to get OBJECT-1
            // DO WORK on OBJECT-1
            // Need to CALL an EXTERNAL ROUTINE to finish work
            ExternalRoutine();
            // DO WORK on OBJECT-1 again
            // *** At this point the ADO exception triggers ***
        }
    }
}
private bool ExternalRoutine()
{
    using (ISession session = Helper.GetCurrentSession)
    {
        using (ITransaction transaction = session.BeginTransaction())
        {
            // RUN NHibernate QUERY to get OBJECT-2
            // DO WORK on OBJECT-2
            // Determine result
            if (Data)
            {
                return true;
            }
            return false;
        }
    }
}
Hopefully this demonstrates the issue. This is how I understood transactions should be written, but notice where the ADO exception occurs. I'm obviously doing something wrong. How am I meant to write these routines?
Take, for example, a helper object for some provider where each exposed routine runs an NHibernate query: how would I write those routines, with regard to transactions, assuming I knew nothing about the calling function and its data? My job is to work with NHibernate effectively and efficiently and return a result.
That is what I assumed when writing the transaction the way I did in ExternalRoutine(): treat it as if it were the only use of NHibernate and explicitly create the transaction.
If possible, I would suggest using System.Transactions.TransactionScope:
using (var trans = new TransactionScope())
using (var session = .. create your session...)
{
    ... do your stuff ...
    trans.Complete();
}
Calling trans.Complete() causes the session to flush and the transaction to commit; in addition, you can nest transaction scopes. The only "downside" is that it will escalate to DTC if you have multiple connections (or enlisted resources such as MSMQ), but that is not necessarily a downside unless you're using something like MySQL, which doesn't play nicely with DTC.
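For example, a minimal sketch of nesting (DoInnerWork here is a hypothetical method standing in for a routine called from elsewhere):

using (var outer = new TransactionScope())
{
    using (var inner = new TransactionScope()) // joins the ambient transaction
    {
        DoInnerWork(); // hypothetical nested routine
        inner.Complete();
    }

    // Nothing is committed until the outermost scope completes;
    // if any scope fails to call Complete(), the whole transaction aborts.
    outer.Complete();
}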
For simple cases I would probably open a transaction scope in BeginRequest and commit it in EndRequest if there were no errors (or use a filter if you're using ASP.NET MVC), but that really depends a lot on what you're doing; as long as your operations are short (which they should be in a web app), this should be fine. The overhead of creating a session and transaction scope is not so big that it should cause problems for requests serving static resources (or resources that don't need the session), unless you're looking at a really high-volume / high-performance website.
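A rough sketch of that per-request pattern in classic ASP.NET might look like the following (the Items key is arbitrary, and this is an outline rather than a drop-in implementation; note also that the ambient transaction is thread-affine, so this assumes BeginRequest and EndRequest run on the same thread):

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Open an ambient transaction for the whole request.
        Context.Items["requestTransaction"] = new TransactionScope();
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        var scope = (TransactionScope)Context.Items["requestTransaction"];
        if (scope == null)
            return;

        if (Context.Error == null)
            scope.Complete(); // commit only if the request produced no errors

        scope.Dispose();
    }
}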
Related
I've read multiple questions similar to this one but none are exactly my situation.
Using LINQ to SQL, I insert a new record and submit changes. Then, in the same web request, I pull that same record, update it, and submit changes again. The changes are not saved. The DataContext is the same across both operations.
Insert:
var transaction = _factory.CreateTransaction(siteId, userId, questionId, type, amount, transactionId, processor);

using (IUnitOfWork unitOfWork = UnitOfWork.Begin())
{
    transaction.Amount = amount;
    _transactionRepository.Add(transaction);
    unitOfWork.Commit();
}
Select and Update:
ITransaction transaction = _transactionRepository.FindById(transactionId);

if (transaction == null) throw new Exception(Constants.ErrorCannotFindTransactionWithId.FormatWith(transactionId));

using (IUnitOfWork unitOfWork = UnitOfWork.Begin())
{
    transaction.CrmId = crmId;
    transaction.UpdatedAt = SystemTime.Now();
    unitOfWork.Commit();
}
Here's the unit of work code:
public virtual void Commit()
{
    if (_isDisposed)
    {
        throw new ObjectDisposedException(GetType().Name);
    }

    _database.SubmitChanges();
}
I even went into the designer.cs file and put a breakpoint on the setter of the field that is being set but not updated. Stepping through, it entered and executed the setter code, so the entity should be getting "notified" of the change to this field:
public string CrmId
{
    get
    {
        return this._CrmId;
    }
    set
    {
        if ((this._CrmId != value))
        {
            this.OnCrmIdChanging(value);
            this.SendPropertyChanging();
            this._CrmId = value;
            this.SendPropertyChanged("CrmId");
            this.OnCrmIdChanged();
        }
    }
}
Other useful information:
- ObjectTracking is enabled.
- No errors or exceptions when the second SubmitChanges is called (the update just silently fails).
- SQL Profiler shows the insert and the select but not the subsequent update statement. LINQ to SQL is not generating the update statement.
- There is only one database and one connection string, so the update is not going to another database.
- The table has a primary key.
I don't know what would cause LINQ to SQL to neither issue the update command nor raise some kind of error. Perhaps the problem stems from using the same DataContext instance? I've even refreshed the object from the database using the DataContext.Refresh method before pulling it for the update, but that didn't help.
I have found what is likely the root cause. I am using Unity. The initial insert is performed in a service class with a per-web-request lifetime. The select and update happen in a class with a singleton lifetime. So my assumption that the DataContext instances were the same was incorrect.
So, in my class with the singleton lifetime, I now get a fresh instance of the database repository and perform the update with no problem.
I still don't know why the original code didn't work, and my approach could be considered more a workaround than a solution, but it did solve my problem and hopefully it will be useful to others.
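A sketch of the shape of that fix (the class and interface names here are illustrative; the point is that the singleton resolves a fresh repository per call instead of capturing one DataContext at construction time):

// Hypothetical singleton-lifetime service: instead of taking the repository
// in the constructor (which would pin one DataContext for the app lifetime),
// it resolves a fresh one each time it does work.
public class CrmUpdateService
{
    private readonly IUnityContainer _container;

    public CrmUpdateService(IUnityContainer container)
    {
        _container = container;
    }

    public void SetCrmId(int transactionId, string crmId)
    {
        // Resolved per call, so we get the per-request DataContext,
        // not one captured when the singleton was created.
        var repository = _container.Resolve<ITransactionRepository>();
        var transaction = repository.FindById(transactionId);

        using (IUnitOfWork unitOfWork = UnitOfWork.Begin())
        {
            transaction.CrmId = crmId;
            transaction.UpdatedAt = SystemTime.Now();
            unitOfWork.Commit();
        }
    }
}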
I'm using EF 5.0 to build a website and I have some issues disposing my context. Every time I use a context it is inside a using statement, so the context should be disposed automatically, but at a specific moment I get the following error when I try to attach an entity to a context:
An object with the same key already exists in the ObjectStateManager. The ObjectStateManager cannot track multiple objects with the same key.
It seems that the entity is not disposed. How should I manage this situation? Do I have to dispose the ObjectContext to dispose the entities, or is there any way to check whether the entity is attached?
Regards.
One way to do it is to detach the existing object before attaching. I don't have VS in front of me so I apologize if the code isn't exactly correct.
var existingObject = dbContext.Users.Local
    .FirstOrDefault(x => x.id == newObject.id);

if (existingObject != null)
{
    // remove object from local cache
    dbContext.Entry(existingObject).State = EntityState.Detached;
}

dbContext.Users.Attach(newObject);
In case this doesn't fix the problem, you'll have to fall back to the old way of detaching objects.
// remove object from local cache
ObjectContext objectContext = ((IObjectContextAdapter)dbContext).ObjectContext;
objectContext.Detach(existingObject);
If you do something like this:
User u;
using (Entities ent = new Entities())
{
    u = ent.Users.Single(a => a.ID == 123);
}

using (Entities ent2 = new Entities())
{
    // loading the same user
    User user2 = ent2.Users.Single(a => a.ID == 123);
    // trying to attach the same object with the same key
    ent2.Attach(u);
}
then you will get this error (I haven't tested this code).
EDIT: one of the solutions is to change the object's state:
ent2.Attach(u);
ent2.ObjectStateManager.ChangeObjectState(u, EntityState.Modified);
Another solution is to check whether the entity is already attached:
ObjectStateEntry state = null;
if (!ent2.ObjectStateManager.TryGetObjectStateEntry(((IEntityWithKey)u).EntityKey, out state))
{
    ent2.Attach(u);
}
Dispose doesn't mean "reset to factory settings". It is a way to clean up unmanaged resources like database connections and such.
The problem has nothing to do with disposing a context or not. It even has nothing to do with having multiple contexts somewhere in place. If this would be the problem you would get the "An entity object cannot be referenced by multiple instances of IEntityChangeTracker" exception which is totally different to your exception.
You can simulate your exception quite easily with only a single context:
using (var ctx = new MyContext())
{
    var customer1 = new Customer { Id = 1 };
    var customer2 = new Customer { Id = 1 }; // a second object with the same key

    ctx.Customer.Attach(customer1);
    ctx.Customer.Attach(customer2); // your exception will occur here
}
The problem causing this exception is normally more hidden, especially if you keep in mind that attaching an entity (or setting its state, for example to Modified) also attaches all related entities in its object graph. If that graph contains two objects with the same key, you'll get the exception as well, even though you didn't attach those related entities explicitly.
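For instance, a minimal sketch of the graph case (Order and its two Customer navigation properties are hypothetical entities):

using (var ctx = new MyContext())
{
    var order = new Order { Id = 1 };

    // Two distinct Customer instances that happen to share the same key:
    order.BillingCustomer = new Customer { Id = 1 };
    order.ShippingCustomer = new Customer { Id = 1 };

    // Attaching the order attaches its whole graph, so the two Customer
    // objects collide even though we never attached a Customer directly.
    ctx.Orders.Attach(order);
}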
But it's impossible to find the exact reason without more details about your code.
I've got a piece of code that looks like this:
public void Foo(int userId)
{
    try
    {
        using (var tran = NHibernateSession.Current.BeginTransaction())
        {
            var user = _userRepository.Get(userId);
            user.Address = "some new fake user address";
            _userRepository.Save(user);
            Validate();
            tran.Commit();
        }
    }
    catch (Exception)
    {
        logger.Error("log error and don't throw");
    }
}

private void Validate()
{
    throw new Exception();
}
And I'd like to unit test whether the validations were made correctly. I use NUnit and an SQLite database for testing. Here is the test code:
protected override void When()
{
    base.When();
    ownerOfFooMethod.Foo(1);
    Session.Flush();
    Session.Clear();
}

[Test]
public void FooTest()
{
    var fakeUser = userRepository.GetUserById(1);
    fakeUser.Address.ShouldNotEqual("some new fake user address");
}
My test fails.
While debugging I can see that the exception is thrown and Commit is never called. But my user still has "some new fake user address" in the Address property, although I was expecting it to be rolled back.
Looking in NHibernate Profiler I can see the begin-transaction statement, but it is followed by neither a commit nor a rollback.
What's more, even if I put a try-catch block there and do the rollback explicitly in the catch, my test still fails.
I assume there is some problem in the testing environment, but everything seems fine to me.
Any ideas?
EDIT: I've added the important try-catch block (initially I simplified the code too much).
If the exception occurs before NHibernate has flushed the change to the database, and you then keep using that session without evicting/clearing the object, the change will still be persisted if a flush happens later for some reason, since the object is still dirty according to NHibernate. When rolling back a transaction you should immediately close the session to avoid this kind of problem.
Another way to put it: a rollback will not roll back the in-memory changes you've made to persistent entities.
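A minimal sketch of that pattern, assuming you control the session's lifetime at this point:

ISession session = sessionFactory.OpenSession();
try
{
    using (var tx = session.BeginTransaction())
    {
        // ... modify persistent entities ...
        tx.Commit();
    }
}
catch
{
    // The transaction was rolled back (or never committed), but the entities
    // in memory are still dirty; close the session so they can't be flushed.
    session.Close();
    throw;
}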
Also, if the session is a regular session, that call to Save() isn't needed, since the instance is already tracked by NH.
Apart from blocking other threads reading from the cache, what other problems should I be thinking about when locking the cache insert method on a public-facing website?
The actual data retrieval and insertion into the cache should take no more than a second, which we can live with. More importantly, I don't want multiple threads all potentially hitting the Insert method at the same time.
The sample code looks something like:
public static readonly object _syncRoot = new object();

if (HttpContext.Current.Cache["key"] == null)
{
    lock (_syncRoot)
    {
        HttpContext.Current.Cache.Insert("key", "DATA", null, DateTime.Now.AddMinutes(5), Cache.NoSlidingExpiration, CacheItemPriority.Normal, null);
    }
}

Response.Write(HttpContext.Current.Cache["key"]);
I expect you are doing this to prevent the data retrieval from being done more than once, perhaps because the amount of data is large, which might impact your server when multiple users trigger that retrieval.
A lock like this, just around the Cache.Insert itself, is useless because that method is thread-safe. A lock can be useful to prevent double data retrieval, but in that case you should consider using a double-checked lock:
var data = HttpContext.Current.Cache["key"];

if (data == null)
{
    lock (_syncRoot)
    {
        // Here, check again for null after acquiring the lock.
        data = HttpContext.Current.Cache["key"];

        if (data == null)
        {
            data = [RETRIEVE DATA];
            HttpContext.Current.Cache.Insert("key", data, null, ...);
        }
    }
}

return data;
But to your main question: apart from the risk of holding the lock for too long, causing large delays in your web application, there is nothing to worry about :-). A lock around a Cache.Insert, by itself, will do you no harm.
I have an ASP.NET application with a lot of dynamic content. The content is the same for all users belonging to a particular client. To reduce the number of database hits required per request, I decided to cache client-level data. I created a static class ("ClientCache") to hold the data.
The most-often used method of the class is by far "GetClientData", which brings back a ClientData object containing all stored data for a particular client. ClientData is loaded lazily, though: if the requested client data is already cached, the caller gets the cached data; otherwise, the data is fetched, added to the cache and then returned to the caller.
Eventually I started getting intermittent crashes in the GetClientData method on the line where the ClientData object is added to the cache. Here's the method body:
public static ClientData GetClientData(Guid fk_client)
{
    if (_clients == null)
        _clients = new Dictionary<Guid, ClientData>();

    ClientData client;

    if (_clients.ContainsKey(fk_client))
    {
        client = _clients[fk_client];
    }
    else
    {
        client = new ClientData(fk_client);
        _clients.Add(fk_client, client);
    }

    return client;
}
The exception text is always something like "An object with the same key already exists."
Of course, I tried to write the code so that it just wasn't possible to add a client to the cache if it already existed.
At this point, I suspect I've got a race condition and the method is being executed twice concurrently, which would explain the crash. What I'm confused about, though, is how the method could be executed concurrently at all. As far as I knew, an ASP.NET application only ever fields one request at a time (that's why we can use HttpContext.Current).
So, is this bug likely a race condition that will require putting locks in critical sections? Or am I missing a more obvious bug?
If an ASP.NET application only handled one request at a time, all ASP.NET sites would be in serious trouble. ASP.NET can process dozens of requests concurrently (typically 25 per CPU core).
You should use the ASP.NET Cache instead of your own dictionary to store the objects; operations on the cache are thread-safe.
Note that you still need to be sure that read operations on the object you store in the cache are thread-safe; unfortunately, most .NET classes simply state that instance members aren't thread-safe without pointing out the ones that may be.
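For example, a sketch of the questioner's method rewritten on top of the ASP.NET cache (HttpRuntime.Cache is the same cache as HttpContext.Current.Cache but also works outside a request; the key format is arbitrary):

public static ClientData GetClientData(Guid fk_client)
{
    string key = "ClientData:" + fk_client;

    var client = (ClientData)HttpRuntime.Cache[key];
    if (client == null)
    {
        client = new ClientData(fk_client);

        // Cache.Add is atomic: it returns the existing item if another
        // thread inserted one first, in which case we use that instead.
        var existing = (ClientData)HttpRuntime.Cache.Add(
            key, client, null,
            Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration,
            CacheItemPriority.Normal, null);

        if (existing != null)
            client = existing;
    }

    return client;
}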
Edit:
A comment to this answer states:
Only atomic operations on the cache are thread-safe. If you do something like check if a key exists and then add it, that is NOT thread-safe and can cause the item to be overwritten.
It's worth pointing out that if we feel we need to make such an operation atomic, then the cache is probably not the right place for the resource.
I have quite a bit of code that does exactly what the comment describes. However, the resource being stored will be the same in both places, so if an existing item on rare occasions gets overwritten, the only cost is that one thread unnecessarily generated the resource. The cost of this rare event is much less than the cost of making the operation atomic on every access.
This is very easy to fix:
private static readonly object _clientsLock = new Object();

public static ClientData GetClientData(Guid fk_client)
{
    if (_clients == null)
        lock (_clientsLock)
            // Check again because another thread could have created a new
            // dictionary in between the null check and the lock
            if (_clients == null)
                _clients = new Dictionary<Guid, ClientData>();

    if (_clients.ContainsKey(fk_client))
        // Don't need a lock here UNLESS there are also deletes. If there are
        // deletes, then a lock like the one below (in the else) is necessary
        return _clients[fk_client];
    else
    {
        ClientData client = new ClientData(fk_client);
        lock (_clientsLock)
            // Again, check because another thread could have added this
            // ClientData between the last ContainsKey check and this Add
            if (!_clients.ContainsKey(fk_client))
                _clients.Add(fk_client, client);
        return client;
    }
}
Keep in mind that whenever you mess with static classes, you have the potential for thread-synchronization problems. If there's a static class-level collection of some kind (in this case _clients, the Dictionary object), there are DEFINITELY going to be thread-synchronization issues to deal with.
Your code really does assume only one thread is in the function at a time, and that simply won't be true in ASP.NET.
If you insist on doing it this way, use a static semaphore to lock the area around this class.
You need thread safety while minimizing locking.
See double-checked locking (http://en.wikipedia.org/wiki/Double-checked_locking).
Write it simply with TryGetValue:
public static object lockClientsSingleton = new object();

public static ClientData GetClientData(Guid fk_client)
{
    if (_clients == null)
    {
        lock (lockClientsSingleton)
        {
            if (_clients == null)
            {
                _clients = new Dictionary<Guid, ClientData>();
            }
        }
    }

    ClientData client;
    if (!_clients.TryGetValue(fk_client, out client))
    {
        lock (_clients)
        {
            if (!_clients.TryGetValue(fk_client, out client))
            {
                client = new ClientData(fk_client);
                _clients.Add(fk_client, client);
            }
        }
    }

    return client;
}