HttpApplicationState - Why does a race condition exist if it is thread safe?

I just read an article describing how HttpApplicationState has AcquireRead() / AcquireWrite() methods to manage concurrent access. It goes on to explain that in some situations we nevertheless need to use an explicit Lock() and Unlock() on the Application object to avoid a race condition.
I am unable to understand why a race condition should exist for Application state if concurrent access is implicitly handled by the object.
Could someone please explain this to me? Why would I ever need to use Application.Lock() and Application.Unlock()? Thank you!

The AcquireRead and AcquireWrite methods are on the internal HttpApplicationStateLock class, so you don't call them yourself. They synchronise access, but only for a single read or write. From your own code you use the Lock and Unlock methods when you need to synchronise access.
You would typically need to synchronise access when you are changing something that involves more than a single read or write, such as adding two application items that rely on each other, or first checking whether an item exists and then adding it:
Application.Lock();
if (Application["info"] == null) {
    Application.Add("info", FetchInfoFromDatabase());
}
Application.UnLock();

HttpApplicationState holds globally accessible variables that are visible to all users of the application. To avoid a race condition while changing the value of such a variable we need a precaution: acquire the lock with Application.Lock(), and once the job is done release it to the other requests waiting in the queue with Application.UnLock().
Application.Lock()
Application("VisitorCount") = Convert.ToInt32(Application("VisitorCount")) + 1
Application.UnLock()
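For comparison, the same counter in C# might look like this (a minimal sketch; Application here is the HttpApplicationState instance exposed by a page or handler):
Application.Lock();
// Read, increment and write back the counter as one protected unit.
int visitorCount = Convert.ToInt32(Application["VisitorCount"]);
Application["VisitorCount"] = visitorCount + 1;
Application.UnLock();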

Related

MDriven ECO_ID duplicates

We appear to have a problem with MDriven generating the same ECO_ID for multiple objects. For the most part it seems to happen in conjunction with unexpected process shutdowns and/or server shutdowns, but it does also happen during normal activity.
Our system consists of one ASP.NET application and one WinForms application. The ASP.NET app is set up in IIS to use a single worker process. We have a mixture of WebForms and MVC, including ApiControllers. We're using a rather old version of the ECO packages: 7.0.0.10021. We're on VS 2017, and the target framework is 4.7.1.
We have it configured to use 64-bit integers for object ids. The database is Firebird. The SQL configuration is set to use ReadCommitted transaction isolation.
As far as I can tell we have configured EcoSpaceStrategyHandler with EcoSpaceStrategyHandler.SessionStateMode.Never, which should mean that EcoSpaces are not reused at all, right? (Why would I even use EcoSpaceStrategyHandler in this case, instead of just creating EcoSpace normally with the new keyword?)
We have created MasterController : Controller and MasterApiController : ApiController classes that we use for all our controllers. These have an EcoSpace property that simply does this:
if (ecoSpace == null)
{
    if (ecoSpaceStrategyHandler == null)
        ecoSpaceStrategyHandler = new EcoSpaceStrategyHandler(
            EcoSpaceStrategyHandler.SessionStateMode.Never,
            typeof(DiamondsEcoSpace),
            null,
            false
        );
    ecoSpace = (DiamondsEcoSpace)ecoSpaceStrategyHandler.GetEcoSpace();
}
return ecoSpace;
I.e. if no strategy handler has been created, create one specifying no pooling and no session state persisting of eco spaces. Then, if no ecospace has been fetched, fetch one from the strategy handler. Return the ecospace. Is this an acceptable approach? Why would it be better than simply doing this:
if (ecoSpace == null)
    ecoSpace = new DiamondsEcoSpace();
return ecoSpace;
In aspx we have a master page that has an EcoSpaceManager. It has been configured to use a pool but SessionStateMode is Never. It has EnableViewState set to true. Is this acceptable? Does it mean that EcoSpaces will be pooled but inactivated between round trips?
It is possible that we receive multiple incoming API calls in tight succession, so that one API call hasn't been completed before the next one comes in. I assume that this means that multiple instances of MasterApiController can execute simultaneously but in separate threads. There may of course also be MasterController instances executing MVC requests and also the WinForms app may be running some batch job or other.
But as far as I understand id reservation is made at the beginning of any UpdateDatabase call, in this way:
update "ECO_ID" set "BOLD_ID" = "BOLD_ID" + :N;
select "BOLD_ID" from "ECO_ID";
If the returned value is K, this will reserve N new id:s ranging from K - N to K - 1. Using ReadCommitted transactions everywhere should ensure that the update locks the id data row, forcing any concurrent save operations to wait, then fetches the update result without interference from other transactions, then commits. At that point any other pending save operation can proceed with its own id reservation. I fail to see how this could result in the same ID being used for multiple objects.
I should note that it does seem like it sometimes produces id duplicates within one single UpdateDatabase, i.e. when saving a set of new related objects, some of them end up with the same id. I haven't really confirmed this though.
Any ideas what might be going on here? What should I look for?
The issue is most likely that you use ReadCommitted isolation.
That allows two systems to simultaneously start a transaction, read the current value, increase the batch, and then save one after the other.
You must use Serializable isolation for key generation, i.e. only read values that are not currently part of a write operation.
MDriven uses two settings for the isolation level: UpdateIsolationLevel and FetchIsolationLevel.
Set your UpdateIsolationLevel to Serializable.
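To see why the isolation level matters, here is a minimal ADO.NET sketch of the reservation pattern (this is not MDriven's actual code; the provider-neutral DbConnection, the method name and the hard-coded SQL are assumptions). Under Serializable, the second of two concurrent reservations has to wait or fail instead of reading the same BOLD_ID:
using System;
using System.Data;
using System.Data.Common;

static class IdReservation
{
    public static long ReserveIdBlock(DbConnection connection, int blockSize)
    {
        // Serializable prevents two concurrent transactions from both reading the
        // same BOLD_ID before either one has committed its increment.
        using (DbTransaction tx = connection.BeginTransaction(IsolationLevel.Serializable))
        using (DbCommand cmd = connection.CreateCommand())
        {
            cmd.Transaction = tx;
            cmd.CommandText = "update \"ECO_ID\" set \"BOLD_ID\" = \"BOLD_ID\" + " + blockSize;
            cmd.ExecuteNonQuery();

            cmd.CommandText = "select \"BOLD_ID\" from \"ECO_ID\"";
            long upperBound = Convert.ToInt64(cmd.ExecuteScalar());
            tx.Commit();

            // Ids upperBound - blockSize .. upperBound - 1 now belong to this caller.
            return upperBound - blockSize;
        }
    }
}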

Is HttpApplicationState.RemoveAll() thread safe?

In my ASP.NET application, I want to cache some data in HttpApplicationState.
My code to set the data looks like this:
Application.Lock();
Application.Set("Name", "Value");
Application.UnLock();
When I read the documentation, it says that HttpApplicationState is implicitly thread safe. But on many blogs it's written that we should use Application.Lock() and Application.Unlock() while writing data to the HttpApplicationState.
On the other hand, I could not find any documentation saying that we should use a lock while reading data from HttpApplicationState or while clearing it (using the Application.RemoveAll() method).
My questions are:
Shouldn't we take care of thread safety when we are calling RemoveAll? In my application it's possible that one thread is reading data from HttpApplicationState while another thread calls RemoveAll.
Given that reading and clearing HttpApplicationState can happen from two different threads at the same time, shouldn't the read be protected as well?
You only need the lock if you are doing more than one operation against the application state. In your case you are just doing one operation, so it's perfectly safe without the lock:
Application.Set("Name", "Value");
If you do more than one operation, and they rely on each other, you need the lock. For example:
Application.Lock();
string name = (string)Application.Get("Name");
if (name == null) {
    Application.Set("Name", "Value");
}
Application.UnLock();
As far as I can tell, RemoveAll is thread safe, as it calls the Clear method internally.
Clear calls HttpApplicationStateLock.AcquireWrite, then calls base.BaseClear, and finally releases the lock.
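So a single read racing with RemoveAll won't corrupt anything; you only have to take the lock yourself when several operations must behave as one unit, for example clearing and repopulating the state atomically. A minimal sketch (RebuildCache is a hypothetical loader returning key/value pairs):
Application.Lock();
// Holding the lock blocks individual reads and writes from other requests,
// so nobody observes the state half-rebuilt between RemoveAll and the Sets.
Application.RemoveAll();
foreach (var entry in RebuildCache())
{
    Application.Set(entry.Key, entry.Value);
}
Application.UnLock();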
Also have a look at
HttpApplicationState - Why does a race condition exist if it is thread safe?

What is the strategy used by JobLockService To enforce the synchronization?

What is the strategy used by JobLockService to enforce synchronization? Does it lock the whole repository, or is there another technique?
When I write code such as:
String lockString = jobLockService.getLock(QName.createQName(Prefix, LocalName, Resolver));
LockToken lockToken = new LockToken();
lockToken.set(lockString);
// something happens here, such as creating, updating or deleting a node
// more processing here
jobLockService.releaseLock(lockString);
As you can see from that code, I use JobLockService. What happens once the lock is acquired? Does it lock the repository at all and prevent any other processes from accessing it?
I'm asking about the actual techniques used to enforce the synchronization.
Also, what about the LockToken here? What is the benefit of it?
Thanks in advance, your replies are very much appreciated.
JobLockService does not put any lock on the actual content stored in the repository, let alone lock the whole repository. After you have successfully called JobLockService.getLock, any thread is still free to update whichever node it wants to edit. It is your code that must ensure that blocks which have to execute with controlled concurrency first try to obtain the same lock.
That LockToken object you create seems of no use and can be dropped.

create a queue of process in classic asp

Here is the problem:
There is a classic ASP app that calls lame.exe to encode MP3s many times per day, and there is no control over how lame.exe gets called by several users; in other words, there is no queue for that purpose.
So here is what I am thinking about:
//below code is all pseudo-code
//process_flag, mp3 and processID all reside in a database
function addQ(string mp3)
    add a record to the database
    and set process_flag to undone
    then goto checkQ
end function

function checkQ()
    if there are processes in the queue list with process_flag undone
        sort them by processID asc
        for each processID
            processQ(processID)
        end for
    end if
end function

function processQ(int processID)
    run lame.exe with the help of wscript.exe
    after doing the job set the process_flag to done
end function
So I just want to know: is there any better solution, or any other approaches out there?
Regards.
Looks like a reasonable approach for classic ASP.
Just make sure that in your checkQ function you only retrieve queue items that have process_flag set to undone, or you might end up re-processing the same items over and over.
Read this article for another approach using MSMQ - it starts by creating a new public queue, then sending messages to it from your ASP page. It also requires an additional executable to process the queued items.
This is a perfect application for MSMQ. Let proven code handle the reliable messaging, concurrency control etc. so you can just focus on the application logic.
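For the MSMQ route, the separate processing executable could be as small as this sketch (illustrative only; the queue name, the message body format and the lame.exe arguments are assumptions, and a private queue is used here where the linked article uses a public one):
using System.Diagnostics;
using System.Messaging;

class Mp3EncoderWorker
{
    static void Main()
    {
        const string queuePath = @".\Private$\mp3encode";   // assumed queue name
        if (!MessageQueue.Exists(queuePath))
            MessageQueue.Create(queuePath);

        using (var queue = new MessageQueue(queuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            while (true)
            {
                // Receive blocks until the ASP page sends the next file name,
                // so jobs are encoded one at a time in arrival order.
                Message message = queue.Receive();
                string mp3Path = (string)message.Body;
                Process.Start("lame.exe", "\"" + mp3Path + "\"").WaitForExit();
            }
        }
    }
}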

Need suggestion for ASP.Net in-memory queue

I have a requirement to create an HttpHandler that will serve an image file (a simple static file) and also insert a record into a SQL Server table (e.g. http://site/some.img, where some.img is handled by the HttpHandler). I need an in-memory object (like a generic List) that I can add items to on each request (I also have to handle a few hundred or a few thousand requests per second), and I should be able to unload this in-memory object to the SQL table using SqlBulkCopy.
List --> DataTable --> SqlBulkCopy
I thought of using the Cache object: create a generic List, save it in HttpContext.Cache, and add a new item to it on every request. This will NOT work, as the CacheItemRemovedCallback fires right away when the HttpHandler tries to add a new item, so I can't use the Cache object as an in-memory queue.
Can anybody suggest anything? Would I be able to scale in the future if the load increases?
Why would CacheItemRemovedCallback fire when you ADD something to the queue? That doesn't make sense to me... Even if it does fire, there's no requirement to do anything here. Perhaps I am misunderstanding your requirements?
I have quite successfully used the Cache object in precisely this manner. That is what it's designed for and it scales pretty well. I stored a Hashtable which was accessed on every app page request and updated/cleared as needed.
Option two... do you really need the queue? SQL Server will scale pretty well also if you just want to write directly into the DB. Use a shared connection object and/or connection pooling.
How about just using a generic List to store the requests and a different thread to do the SqlBulkCopy?
That way storing requests in the list won't block the response for long, and the background thread can update SQL Server on its own schedule, say every 5 minutes or so.
You can even base the background thread on the Cache mechanism by performing the work in a CacheItemRemovedCallback.
Just insert some object with an expiration of 5 minutes and re-insert it at the end of the processing work.
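A rough sketch of that idea (the class and member names, the target table and its columns are assumptions, and error handling is omitted):
using System;
using System.Collections.Concurrent;
using System.Data;
using System.Data.SqlClient;
using System.Threading;

static class RequestLogger
{
    // Thread-safe queue shared by all request threads.
    static readonly ConcurrentQueue<string> pending = new ConcurrentQueue<string>();

    // Flush every 5 minutes on a background thread.
    static readonly Timer flushTimer =
        new Timer(_ => Flush(), null, TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(5));

    public static void Add(string imageName)
    {
        pending.Enqueue(imageName);   // called from the HttpHandler, never blocks for long
    }

    static void Flush()
    {
        // Drain the queue into a DataTable whose columns match the target table.
        var table = new DataTable();
        table.Columns.Add("ImageName", typeof(string));
        table.Columns.Add("RequestedAt", typeof(DateTime));

        string item;
        while (pending.TryDequeue(out item))
            table.Rows.Add(item, DateTime.UtcNow);
        if (table.Rows.Count == 0)
            return;

        using (var bulk = new SqlBulkCopy("your connection string here"))
        {
            bulk.DestinationTableName = "dbo.ImageRequests";   // assumed table name
            bulk.WriteToServer(table);
        }
    }
}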
Thanks Alex & Bryan for your suggestions.
Bryan: When I try to replace the List object in the Cache for the second request (when the count should be 2), the CacheItemRemovedCallback gets fired because I'm replacing the current Cache entry with the new one. Initially I also thought this was weird behavior, so I'll have to look deeper into it.
Also, for the second suggestion, I will try to insert the record (with a cached SqlConnection object) and see what performance I get when I do the stress test. I doubt I'll get fantastic numbers, as it's an I/O operation.
I'll keep digging on my side for an optimal solution meanwhile with your suggestions.
You can add a conditional check within the callback to ensure you only act on a cache entry that was removed due to expiration rather than an explicit remove/replace (in VB since I had it handy):
Private Shared Sub CacheRemovalCallbackFunction(ByVal cacheKey As String, ByVal cacheObject As Object, ByVal removalReason As Web.Caching.CacheItemRemovedReason)
    Select Case removalReason
        Case Web.Caching.CacheItemRemovedReason.Expired, Web.Caching.CacheItemRemovedReason.DependencyChanged, Web.Caching.CacheItemRemovedReason.Underused
            ' By leaving off Web.Caching.CacheItemRemovedReason.Removed, this excludes items that are replaced or removed explicitly (Cache.Remove) '
    End Select
End Sub
Edit: Here it is in C# if you need it:
private static void CacheRemovalCallbackFunction(string cacheKey, object cacheObject, System.Web.Caching.CacheItemRemovedReason removalReason)
{
    switch (removalReason)
    {
        case System.Web.Caching.CacheItemRemovedReason.DependencyChanged:
        case System.Web.Caching.CacheItemRemovedReason.Expired:
        case System.Web.Caching.CacheItemRemovedReason.Underused:
            // This excludes System.Web.Caching.CacheItemRemovedReason.Removed, which is triggered
            // when you overwrite a cache item or remove it explicitly (e.g., HttpRuntime.Cache.Remove(key)).
            break;
    }
}
To expand on my previous comment... I get the impression you are thinking about the cache incorrectly. If you have an object stored in the Cache, say a Hashtable, any update to that Hashtable is persisted without you explicitly modifying the Cache entry. You only need to add the Hashtable to the Cache once, either at application startup or on the first request.
If you are worried about the bulk copy and the page-request updates happening simultaneously, then I suggest you simply have TWO cached lists: one that is updated as page requests come in, and one for the bulk copy operation. When one bulk copy is finished, swap the lists and repeat, as in the sketch below. This is similar to double-buffering video RAM for video games or video apps.
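A rough sketch of that double-buffering idea (the names are made up; the important point is that the swap happens under a lock so request threads never write to the list currently being bulk-copied):
using System.Collections.Generic;

static class DoubleBufferedLog
{
    static readonly object swapLock = new object();
    static List<string> active = new List<string>();     // written to by page requests
    static List<string> flushing = new List<string>();   // handed to the bulk-copy thread

    public static void Add(string item)
    {
        lock (swapLock)
        {
            active.Add(item);   // the lock is held only for this one append
        }
    }

    // Called by the background thread before each SqlBulkCopy run.
    // The caller bulk-copies the returned list, then clears it once the copy is done.
    public static List<string> SwapBuffers()
    {
        lock (swapLock)
        {
            List<string> full = active;
            active = flushing;   // the other buffer, cleared after its previous bulk copy
            flushing = full;
            return full;
        }
    }
}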

Resources