OptimisticConcurrencyException with System.Web.Providers - asp.net

I'm using the new Universal Providers from Microsoft for session in SQL Server. The old implementation of session on SQL Server required a job (running every minute) to clear expired sessions. The new one does this check and clear on every request. Since I'm actually running in SQL Azure, I don't have SQL Agent to schedule jobs, so this sounds like a reasonable way to go about it (no, I don't want to pay for Azure Cache for session).
The problem is when multiple users access the site at the same time, they're both trying to clear the same expired sessions at the same time and the second gets an optimistic concurrency exception.
System.Data.OptimisticConcurrencyException: Store update, insert, or delete statement affected an unexpected number of rows (0). Entities may have been modified or deleted since entities were loaded. Refresh ObjectStateManager entries.
at System.Data.Mapping.Update.Internal.UpdateTranslator.Update(IEntityStateManager stateManager, IEntityAdapter adapter)
at System.Data.Objects.ObjectContext.SaveChanges(SaveOptions options)
at System.Web.Providers.DefaultSessionStateProvider.PurgeExpiredSessions()
at System.Web.Providers.DefaultSessionStateProvider.PurgeIfNeeded()
at System.Web.SessionState.SessionStateModule.BeginAcquireState(Object source, EventArgs e, AsyncCallback cb, Object extraData)
at System.Web.HttpApplication.AsyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
I'm using ELMAH for error logging, and this is showing up with some frequency. Before I try to configure ELMAH to ignore the errors, does anyone know of a way to stop the errors from happening in the first place? Is there some configuration with the new providers that I'm missing?
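For reference, if I do end up filtering rather than fixing the root cause, ELMAH's programmatic filtering hook in Global.asax should be able to dismiss just this exception type. A hedged sketch (it assumes Elmah.ErrorFilterModule is registered in web.config):

// Global.asax.cs - sketch only: dismiss only the concurrency error so other errors still get logged
void ErrorLog_Filtering(object sender, Elmah.ExceptionFilterEventArgs e)
{
    if (e.Exception.GetBaseException() is System.Data.OptimisticConcurrencyException)
        e.Dismiss();
}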

Please download the newer version of the providers, which can be found here: http://www.nuget.org/packages/System.Web.Providers. We have updated the way we clean up expired sessions so that it runs asynchronously and handles error conditions. Please let us know if it does not correct your issue.

This package has what I would describe as a bug. The premise of out-of-process session state is that multiple instances might be managing session state cleanup. In this case all instances are running this method:
private void PurgeExpiredSessions()
{
    using (SessionEntities entities = ModelHelper.CreateSessionEntities(this.ConnectionString))
    {
        foreach (SessionEntity entity in QueryHelper.GetExpiredSessions(entities))
        {
            entities.DeleteObject(entity);
        }
        entities.SaveChanges();
        this.LastSessionPurgeTicks = DateTime.UtcNow.Ticks;
    }
}
The problem is that one instance deletes the entities before the other(s), and SaveChanges then throws the error described in the post. I asked the package authors to release the source code or fix this. My untested, text-editor-only fix would be to make the method public virtual so one could override it, or simply to change it to:
private void PurgeExpiredSessions()
{
    using (SessionEntities entities = ModelHelper.CreateSessionEntities(this.ConnectionString))
    {
        var sqlCommand = @"DELETE dbo.Sessions WHERE Expires < GETUTCDATE()";
        entities.ExecuteStoreCommand(sqlCommand);
        this.LastSessionPurgeTicks = DateTime.UtcNow.Ticks;
    }
}
The package authors are really fast to respond (they answered while I was posting this!) and today they stated that they are working on releasing the code, but that they might be able to just fix this. I requested an ETA and will try to follow up here if I get one.
Great package, it just needs a little maintenance.
Proper answer: wait for the source code release or a fix update, or decompile and fix it yourself (if that is consistent with the license!)
*Update: the package owners are considering fixing it this week. Yay!
**Update: SOLVED! They evidently fixed it a while ago and I was installing the wrong package.
I was using http://nuget.org/packages/System.Web.Providers when I should have been using http://nuget.org/packages/Microsoft.AspNet.Providers/. It was not obvious to me which one was the legacy one included in the other package. They wrapped it in an empty catch:
private void PurgeExpiredSessions()
{
    try
    {
        using (SessionEntities entities = ModelHelper.CreateSessionEntities(this.ConnectionString))
        {
            foreach (SessionEntity entity in QueryHelper.GetExpiredSessions(entities))
            {
                entities.DeleteObject(entity);
            }
            entities.SaveChanges();
            this.LastSessionPurgeTicks = DateTime.UtcNow.Ticks;
        }
    }
    catch
    {
    }
}
Thank you to the package team for such quick responses and great support!!!

I posted a question on this at the NuGet page (http://www.nuget.org/packages/System.Web.Providers) and got a very quick response from the owners. After a bit of back and forth, it turns out they do have a fix for this, but it is going out in the next update.
There was a suggestion here that Microsoft isn't too keen on supporting this, but my experience has been otherwise, and I have always received good support.

Related

Doctrine: atomic updates and exceptions in a loop

We are migrating a project from a more basic ORM to using Symfony+Doctrine. In the project we have a lot of cron jobs looking like this:
$rows = $someRepository->getRows();
foreach ($rows as $row) {
    try {
        $db->beginTransaction(); // simple begin transaction in db
        // do some handling of data
        // Maybe load some other entities and update those
        // ...
        $db->commit();
    } catch (Throwable $t) {
        // log error
        // clear entity cache
        $db->rollback(); // simple rollback in db
    }
}
When we did it this way, all changes within the try/catch were atomic, while at the same time it was possible to recover from an error and continue with the next $row.
In Symfony+Doctrine, I simply cannot figure out how to mimic this behaviour. The recommendation from Doctrine for handling an exception is to close the EntityManager, but how do you recover?
The ORM handles transaction demarcation implicitly on flush, so most of the time you can avoid the hassle of doing so on your own.
However, if you want clear demarcation you can still do it explicitly, in a similar manner you did so far.
More reading and examples here: https://www.doctrine-project.org/projects/doctrine-orm/en/2.7/reference/transactions-and-concurrency.html
EDIT related to the comment below:
Instead of injecting the manager, you should inject the registry.
After that, in the catch block you can check $em->isOpen() and call $registry->resetManager() if it is not.
I suspect this will also reset the unit of work, so you might encounter detached entities. In that case you should do $em->merge();
One thing to note here is that an exception is not considered normal in Doctrine, so they are closing the manager because of that. You might think that this is overcomplicated - yes it is, because you are working against the philosophy here. Validate your data if you can. Read this section: https://www.doctrine-project.org/projects/doctrine-orm/en/2.7/reference/transactions-and-concurrency.html#exception-handling
As for the why (this is not official, just based on my knowledge): the manager's internal unit of work is a stateful object. When an exception occurs during a transaction, that state remains the same but couldn't be persisted to the database. If they let this go, the EM would try to apply all the state changes again and would encounter the same exception again. So there is no point in leaving it open in the same state; a reset is needed.

intershop ORMException could not update - refresh ORMObject

In a clustered Intershop environment, we see a lot of error messages. I suspect the communication between the application servers is not reliable.
Caused by: com.intershop.beehive.orm.capi.common.ORMException:
Could not UPDATE object: com.intershop.beehive.bts.internal.orderprocess.basket.BasketPO
Is there a safe way for the local application server to load the latest instance?
BasketPO basket = null;
try {
    BasketPOFactory factory = (BasketPOFactory) NamingMgr.getInstance().lookupFactory(BasketPOFactory.FACTORY_NAME);
    try (ORMObjectCollection<BasketPO> baskets = factory.getObjectsBySQLWhere("uuid=?", new Object[]{basketID}, CacheMode.NO_CACHING);) {
        if (null != baskets && !baskets.isEmpty()) {
            basket = baskets.stream().findFirst().get();
        }
    }
}
catch (Throwable t) {
    Logger.error(this, t.getMessage(), t);
}
Does the ORMObject#refresh method help?
try {
    if (null != basket)
        basket.refresh();
}
catch (Throwable t) {
    Logger.error(this, t.getMessage(), t);
}
You experience that error because an optimistic lock "fails". To understand the problem better I'll try to explain how the optimistic locking works in particular in the Intershop ORM layer.
There is a column named OCA in the PO tables (OCA == optimistic control attribute?). Imagine that two servers (or two different threads/transactions) try to update the same row in a table. For performance reasons there is no DB locking involved by default (e.g. by issuing select for update). Instead the first thread/server increments the OCA by one when it updates the row successfully within its transaction.
The second thread/server knows the value of the OCA from the time that it created its own state. It then tries to update the row by issuing a similar query:
UPDATE ... OCA = OCA + 1 ... WHERE UUID = <uuid> AND OCA = <old_oca>
Since the OCA is already incremented by the first thread/server this update fails (in reality - updates 0 rows) and the exception that you posted above is thrown when the ORM layer detects that no rows were updated.
Your problem is not the inter-server communication but rather the fact that either:
multiple servers/threads try to update the same object;
there are direct updates in the database that bypass the ORM layer (less likely);
To solve this you may:
Avoid that situation altogether (highly recommended by me :-) );
Use the ISH locking framework (very cumbersome imHo);
Use pessimistic locking supported by the ISH ORM layer and Oracle (beware of potential performance issues, deadlocks, bugs);
Use Java locking - but since the servers run in different JVM-s this is rarely an option;
OFFTOPIC remarks: I'm not sure why you use getObjectsBySQLWhere when you know the primary key (uuid). As far as I remember ORMObjectCollection-s should be closed if not iterated completely.
UPDATE: If the cluster is not configured correctly and the multicasts can't be received from the nodes, you won't be able to resolve the problems programmatically.
The "ORMObject.refresh()" marks the cached shared state as invalid. The next access to the object reloads the state from the database. This impacts performance and increases the database server load.
BUT:
The "refresh()" method does not reload the PO instance state if it already assigned to the current transaction.
Would be best to investigate and fix the server communication issues.
Another possibility is that it isn't a communication problem (multicast between nodes in the cluster, I assume), but that there are simply two requests trying to update the basket at the same time, for example two AJAX requests updating something on the basket.
I would avoid trying to "fix" the ORM; it would only cause more harm than good. Rather, investigate further and post back more information.

Oracle Coherence fast empty complete cluster

I'm having problems getting a cache cluster to empty all of its cache data stores.
This cluster has 89 cache stores and takes more than 40 minutes to completely unload the data.
I'm using this function:
public void deleteAll() {
    try {
        getCache().clear();
    } catch (Exception e) {
        log.error("Error unloading cache", getCache().getCacheName());
    }
}
The getCache method retrieves a NamedCache from the CacheFactory.
public NamedCache getCache() {
    if (cache == null) {
        cache = com.tangosol.net.CacheFactory.getCache(this.idCacheFisica);
    }
    return cache;
}
Has anyone found any other way to do this faster?
Thank you in advance,
It's strange it would take so long, though to be honest, it's unusual to call clear.
You could try destroying the cache with NamedCache.destroy or CacheFactory.destroy(NamedCache).
The only problem with this is that it invalidates any references that there might be to the cache, which would need to be re-obtained with another call to CacheFactory.getCache.
Often, a "bulk operation" call like clear() will get transmitted all the way back to the backing maps as a bulk operation, but will at some point (somewhere inside the server code) become an iteration-based implementation, so that each entry being removed can be evaluated against any registered rules, triggers, event listeners, etc., and so that backups can be captured exactly. In other words, to guarantee "correctness" in a distributed environment, it slows down the operation and forces it to occur as some ordered sequence of sub-operations.
Some of the reasons why this would happen:
You've configured read-through, write-through and/or write-behind behavior for the cache;
You have near caches and/or continuous queries (which imply listeners);
You have indexes (which need to be updated as the data disappears);
etc.
I will ask for some suggestions on how to speed this up.
Update (14 July 2014) - Yes, it turns out that this is a known issue (i.e. that it could be optimized), but that to maintain both "correctness" and backwards compatibility, the optimizations (which would be significant changes) have been deferred, and the clear() is still done in an iterative manner on the back end.

ASP.NET cache objects read-write

What happens if one user tries to read HttpContext.Current.Cache[key] while another tries to remove the object with HttpContext.Current.Cache.Remove(key) at the same time?
Just think about hundreds of users reading from the cache and trying to clean up some cache objects at the same time. What happens, and is it thread-safe?
Is it possible to create database-aware business objects in cache?
The built-in ASP.Net Cache object (http://msdn.microsoft.com/en-us/library/system.web.caching.cache.aspx) is thread-safe, so insert/remove actions in multi-threaded environments are inherently safe.
Your primary requirement for putting any object in cache is that it must be serializable. So yes, your db-aware business object can go in the cache.
If the code is unable to get the object, then nothing / null is returned.
Why would you bother to cache an object if there is a chance you will remove it so frequently? It's better to set an expiration time and reload the object if it's no longer in the cache.
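A minimal sketch of that expire-and-reload pattern (the cache key, the five-minute window, and the LoadCurrencyDepotsFromDb helper are illustrative names, not code from this thread):

// Sketch only: read-through caching with an absolute expiration.
public List<CurrencyDepot> GetCurrencyDepots()
{
    const string key = "CurrencyDepotList"; // hypothetical cache key
    var items = HttpContext.Current.Cache[key] as List<CurrencyDepot>;
    if (items == null)
    {
        items = LoadCurrencyDepotsFromDb(); // hypothetical data-access call
        HttpContext.Current.Cache.Insert(key, items, null,
            DateTime.UtcNow.AddMinutes(5), System.Web.Caching.Cache.NoSlidingExpiration);
    }
    return items;
}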
Can you explain "DB aware object"? Do you mean a sql cache dependency, or just an object that has information about a db connection?
EDIT:
Response to comment #3.
I think we are missing something here. Let me explain what I think you mean, and you can tell me if it's right.
UserA checks for an object in cache ("resultA") and does not find it.
UserA runs a query. Results are cached as "resultA" for 5 minutes.
UserB checks for an object in cache ("resultA") and does find it.
UserB uses the cached object "resultA".
If this is the case, then you don't need a SQL cache dependency.
Well, I have this code to populate the cache:
string cacheKey = GetCacheKey(filter, sort);
if (HttpContext.Current.Cache[cacheKey] == null)
{
    reader = base.ExecuteReader(SelectQuery);
    HttpContext.Current.Cache[cacheKey] =
        base.GetListByFilter(reader, filter, sort);
}
return HttpContext.Current.Cache[cacheKey] as List<CurrencyDepot>;
and when the table is updated, the cleanup code below executes:
private void CleanCache()
{
    IDictionaryEnumerator enumerator =
        HttpContext.Current.Cache.GetEnumerator();
    while (enumerator.MoveNext())
    {
        if (enumerator.Key.ToString().Contains(_TableName))
        {
            try {
                HttpContext.Current.Cache.Remove(enumerator.Key.ToString());
            } catch (Exception) {}
        }
    }
}
Does this usage cause any trouble?

Moving ViewState out of the page?

We are trying to lighten our page load as much as possible. Since ViewState can sometimes swell up to 100k of the page, I'd love to completely eliminate it.
I'd love to hear some techniques other people have used to move ViewState to a custom provider.
That said, a few caveats:
We serve on average 2 Million unique visitors per hour.
Because of this, Database reads have been a serious issue in performance, so I don't want to store ViewState in the database.
We also are behind a load balancer, so any solution has to work with the user bouncing from machine to machine per postback.
Ideas?
How do you handle Session State? There is a built-in "store the viewstate in the session state" provider. If you are storing the session state in some fast, out of proc system, that might be the best option for the viewstate.
edit: to do this, add the following code to your Page classes / global page base class
protected override PageStatePersister PageStatePersister {
    get { return new SessionPageStatePersister(this); }
}
Also... this is by no means a perfect (or even good) solution to a large viewstate. As always, minimize the size of the viewstate as much as possible. However, the SessionPageStatePersister is relatively intelligent and avoids storing an unbounded number of viewstates per session as well as avoids storing only a single viewstate per session.
I have tested many ways to remove the load of viewstate from the page, and among all the hacks and some software out there, the only thing that is truly scalable is the StrangeLoops As10000 appliance. It is transparent, with no need to change the underlying application.
As previously stated, I have used the database to store the ViewState in the past. Although this works for us, we don't come close to 2 million unique visitors per hour.
I think a hardware solution is definitely the way to go, whether using the StrangeLoop products or another product.
The following works quite well for me:
string vsid;

protected override object LoadPageStateFromPersistenceMedium()
{
    Pair vs = base.LoadPageStateFromPersistenceMedium() as Pair;
    vsid = vs.First as string;
    object result = Session[vsid];
    Session.Remove(vsid);
    return result;
}

protected override void SavePageStateToPersistenceMedium(object state)
{
    if (vsid == null)
    {
        vsid = Guid.NewGuid().ToString();
    }
    Session[vsid] = state;
    base.SavePageStateToPersistenceMedium(new Pair(vsid, null));
}
You can always compress ViewState so you get the benefits of ViewState without so much bloat:
public partial class _Default : System.Web.UI.Page {

    protected override object LoadPageStateFromPersistenceMedium() {
        string viewState = Request.Form["__VSTATE"];
        byte[] bytes = Convert.FromBase64String(viewState);
        bytes = Compressor.Decompress(bytes);
        LosFormatter formatter = new LosFormatter();
        return formatter.Deserialize(Convert.ToBase64String(bytes));
    }

    protected override void SavePageStateToPersistenceMedium(object viewState) {
        LosFormatter formatter = new LosFormatter();
        StringWriter writer = new StringWriter();
        formatter.Serialize(writer, viewState);
        string viewStateString = writer.ToString();
        byte[] bytes = Convert.FromBase64String(viewStateString);
        bytes = Compressor.Compress(bytes);
        ClientScript.RegisterHiddenField("__VSTATE", Convert.ToBase64String(bytes));
    }

    // ...
}

using System.IO;
using System.IO.Compression;

public static class Compressor {

    public static byte[] Compress(byte[] data) {
        MemoryStream output = new MemoryStream();
        GZipStream gzip = new GZipStream(output,
            CompressionMode.Compress, true);
        gzip.Write(data, 0, data.Length);
        gzip.Close();
        return output.ToArray();
    }

    public static byte[] Decompress(byte[] data) {
        MemoryStream input = new MemoryStream();
        input.Write(data, 0, data.Length);
        input.Position = 0;
        GZipStream gzip = new GZipStream(input,
            CompressionMode.Decompress, true);
        MemoryStream output = new MemoryStream();
        byte[] buff = new byte[64];
        int read = -1;
        read = gzip.Read(buff, 0, buff.Length);
        while (read > 0) {
            output.Write(buff, 0, read);
            read = gzip.Read(buff, 0, buff.Length);
        }
        gzip.Close();
        return output.ToArray();
    }
}
Due to the typical organizational bloat, requesting new hardware takes eons, and requesting hardware that would involve a complete rewire of our current setup would probably get some severe resistance from the engineering department.
I really need to come up with a software solution, because that's the only world I have some control over.
Yay for Enterprise :(
I've tried to find some of the products I had researched in the past that work just like StrangeLoops (but software-based). It looks like they all went out of business; the only one from my list that is still around is ScaleOut, but they specialize in session state caching.
I understand how hard it is to sell hardware solutions to senior management, but it is always a good idea to at least get management to agree to listen to the hardware vendor's sales rep. I would much rather put in some hardware that presents me with an immediate solution, because it allows me (or buys me some time) to get some other real work done.
I understand, it really sucks, but the alternative is to change your code for optimization, and that could cost a lot more than getting an appliance.
Let me know if you find another software based solution.
I'm going to see if I can come up with a way to leverage our current State server to contain the viewstate in memory; I should be able to use the user's session ID to keep things synced up between machines.
If I come up with a good solution, I'll remove any IP protected code and put it out for public use.
Oh no, red tape. Well, this is going to be a tall order to fill. You mentioned here that you use a state server to serve your session state. How do you have this set up? Maybe you can do something similar here also?
Edit
Awh #Jonathan, you posted while I was typing this answer up. I think going that route could be promising. One thing is that it will definitely be memory intensive.
#Mike I don't think storing it in the session information will be a good idea, due to the memory intensiveness of viewstate and also how many times you will need to access the viewstate. Session state is accessed a lot less often than the viewstate. I would keep the two separate.
I think the ultimate solution would be storing the ViewState on the client somehow, and it may be worth looking at. With Google Gears, this could be possible now.
Have you considered whether you really need all that viewstate? For example, if you populate a datagrid from a database, all the data will be saved in viewstate by default. However, if the grid is just for presenting data, you don't need a form at all, and hence no viewstate.
You only need viewstate when there is some interaction with the user through postbacks, and even then the actual form data may be sufficient to recreate the view. You can selectively disable viewstate for controls on the page.
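For instance, a hedged sketch (GridView1 is an illustrative control name; the same setting can also be applied declaratively in the markup):

// Sketch only: turn off viewstate for a control that merely presents data.
protected void Page_Load(object sender, EventArgs e)
{
    GridView1.EnableViewState = false; // hypothetical display-only grid
}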
You have a very special UI if you actually need 100K of viewstate. If you reduce the viewstate to what is absolutely necessary, it might turn out to be the easiest and most scalable to keep the viewstate in the page.
I might have a simple solution for you in another post. It's a simple class to include in your app and a few lines of code in the ASP.NET page itself. If you combine it with a distributed caching system, you could save a lot of dough, as viewstate is large and costly. Microsoft's Velocity might be a good product to attach this method to. If you do use it and save a ton of money, though, I'd love a little mention for that. Also, if you are unsure of anything, let me know and I can talk with you in person.
Here is the link to my code. link text
If you are concerned with scaling, then using the session token as a unique identifier or storing the state in session is more or less guaranteed to work in a web farm scenario.
Store the viewstate in a session object, and use a distributed cache or state service to store session separate from the web servers, such as Microsoft's Velocity.
I know this is a little stale, but I've been working for a couple of days on an open-source "virtual appliance" using Squid and eCAP to:
1.) gzip
2.) handle ssl
3.) replace viewstate with a token on request / response
4.) memcache for object caching
Anyway, it looks pretty promising. Basically it would sit in front of the load balancers and should really help client performance. It doesn't seem to be very hard to set up either.
I blogged on this a while ago - the solution is at http://www.adverseconditionals.com/2008/06/storing-viewstate-in-memcached-ultimate.html
This lets you change the ViewState provider to one of your choice without having to change each of your Page classes, by using a custom PageAdapter. I stored the ViewState in memcached. In retrospect I think storing it in a database or on disk is better - we filled memcached up very quickly. It's a very low-friction solution.
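To illustrate the PageAdapter approach in general terms, here is a hedged sketch (not the code from the blog post). It simply swaps in the built-in SessionPageStatePersister; a memcached-backed persister would be returned in the same place, and the adapter would be registered through a .browser file in App_Browsers:

// Sketch only: a PageAdapter that replaces the state persister for every page,
// so individual Page classes do not need to override PageStatePersister themselves.
using System.Web.UI;

public class SessionStatePageAdapter : System.Web.UI.Adapters.PageAdapter
{
    public override PageStatePersister GetStatePersister()
    {
        // Return whichever PageStatePersister backs your chosen store;
        // SessionPageStatePersister is used here as a stand-in.
        return new SessionPageStatePersister(Page);
    }
}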
No need to buy or sell anything to eliminate viewstate bloat. You just need to extend the HiddenFieldPageStatePersister. The 100-200 KB of ViewState stays on the server, and only a 62-byte token is sent on the page instead.
Here is a detailed article on how this can be done:
http://ashishnangla.com/2011/07/21/reducing-size-of-viewstate-in-asp-net-webforms-by-writing-a-custom-viewstate-provider-pagestatepersister-part-12/
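For a rough idea of the shape of such a persister, here is a hedged sketch rather than the article's code; the hidden-field name, the in-process Cache store, and the 20-minute timeout are all illustrative assumptions, and in a web farm the Cache call would need to be replaced with a shared store:

// Sketch only: keep the page state server-side and emit just a small token to the browser.
using System;
using System.Web;
using System.Web.Caching;
using System.Web.UI;

public class ServerSideStatePersister : PageStatePersister
{
    private const string TokenField = "__VSTOKEN"; // hypothetical hidden-field name

    public ServerSideStatePersister(Page page) : base(page) { }

    public override void Save()
    {
        string token = Guid.NewGuid().ToString("N");
        // Any out-of-process store (distributed cache, database, memcached) could be used here.
        HttpContext.Current.Cache.Insert(token, new Pair(ViewState, ControlState),
            null, DateTime.UtcNow.AddMinutes(20), Cache.NoSlidingExpiration);
        Page.ClientScript.RegisterHiddenField(TokenField, token);
    }

    public override void Load()
    {
        string token = Page.Request.Form[TokenField];
        if (string.IsNullOrEmpty(token)) return;
        var state = HttpContext.Current.Cache[token] as Pair;
        if (state != null)
        {
            ViewState = state.First;
            ControlState = state.Second;
        }
    }
}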
