Single thread per request in ASP.NET

Is anyone aware of any setting in IIS 7 that will force it to use a single thread per request, and not allow it to switch threads during a request? Or go back to a legacy thread model?
Our problem is that the entire request, beginning to end, uses multiple connections to different databases, and we want to guarantee data integrity by using TransactionScope (which starts as a lightweight transaction and is promoted to a distributed transaction once a second connection is established).
The reason we need a single thread per request is that when you attempt to dispose of a transaction on a thread different from the one that started it, it throws an exception stating it must be disposed on the same thread that started it. The transaction then leaks, nothing gets committed, and it slowly brings the machine to a grinding halt.

To limit the number of threads per ASP.NET worker process you can set maxWorkerThreads, which configures the maximum number of worker threads to use for the process on a per-CPU basis. I don't recommend configuring only one thread for each ASP.NET worker process; that obviously hurts performance.
There are several approaches to configuring the worker threads for an application pool. The first is to set the processModel element in the machine.config file:
http://msdn.microsoft.com/en-us/library/7w2sway1.aspx
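For illustration, a minimal processModel snippet in machine.config might look like the following (the values are placeholders, not recommendations; note that autoConfig must be false for explicit thread limits to take effect):

<processModel autoConfig="false"
              maxWorkerThreads="100"
              maxIoThreads="100"
              minWorkerThreads="40" />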
The second is to edit the aspnet.config file (you can find it under either "c:\Windows\Microsoft.NET\Framework64\v2.0.50727" or "C:\Windows\Microsoft.NET\Framework64\v4.0.30319"):
http://msdn.microsoft.com/en-us/library/dd560842.aspx
For the last approach, check the first reference you mentioned in the initial post; the reference below is a useful complement:
http://www.iis.net/ConfigReference/system.applicationHost/applicationPools/add
Credit: http://forums.iis.net/t/1188351.aspx

There was no way to do this in the version of .NET that we supported (4.0 at the time), so we were forced to drop the distributed transaction coverage.
In theory you can do it with the .NET 4.5.1 TransactionScopeAsyncFlowOption.
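For future readers, here is a minimal sketch of how that option is used; the data-access helpers are placeholders, not the poster's code:

using System.Threading.Tasks;
using System.Transactions;

static class DistributedWorkSketch
{
    public static async Task UpdateBothDatabasesAsync()
    {
        // TransactionScopeAsyncFlowOption.Enabled lets the ambient transaction
        // flow across await points, so the scope can be completed and disposed
        // even if the continuation resumes on a different thread-pool thread.
        using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
        {
            await WriteToFirstDatabaseAsync();   // placeholder for real database work
            await WriteToSecondDatabaseAsync();  // second connection promotes the transaction to MSDTC
            scope.Complete();
        }
    }

    // Placeholders standing in for the real data-access calls.
    static Task WriteToFirstDatabaseAsync()  { return Task.FromResult(0); }
    static Task WriteToSecondDatabaseAsync() { return Task.FromResult(0); }
}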

Related

Synchronize webapp clients (IIS concurrent requests)

Task:
I'm using static classes, so everyone shares data that's already been loaded. If someone makes a change, that request will put an item in a list with an incremental ID, and my idea is that every client keeps its own version on the client side and asks the server whether there are any changes.
My solution: I use a $.post with a timeout of 5000 ms, sending the client's version. On the server side I have a 500-cycle for loop which checks whether there is something newer; if so, it breaks the loop and returns the changes. There is a 10 ms Thread.Sleep in every cycle so it won't hog the CPU. If the post times out or fails on the client, I call it again; if it succeeds, I process the returned data and then call it again. This way I should always get the changes almost instantly without an overwhelming number of requests, and if something fails I only need to wait 5 seconds for it to resume.
My problem is that when this loop runs, other requests aren't handled. With the ASP.NET development server that's expected, because it's single-threaded, but it's also the case with IIS 7.5 on Windows 7 Home Premium.
What I tried: setting it in the registry (HKLM\SOFTWARE\Microsoft\ASP.NET\4.0.30319.0\MaxConcurrentRequestsPerCPU), increasing the worker threads for the application pool, and updating the aspnet.config file with maxConcurrentRequestsPerCPU="12" maxConcurrentThreadsPerCPU="0" requestQueueLimit="5000". I also read that my Windows 7 Home Premium machine should be able to use 3 threads. I also wondered whether it was some optimization - because I use the same variables within one request, the others might get queued - so I commented those lines out and left only the for loop with the sleep, but got the same result.
Don't use Thread.Sleep on threads handling requests in ASP.NET. It essentially consumes a thread and prevents more requests from being started. There is a restriction on the number of threads ASP.NET will create to handle requests, which you've tried to change, but a high number of threads will make the process less responsive and can easily cause an OutOfMemoryException for 32-bit processes, so that is not a good route.
There are several other threads discussing implementing long-poll requests with ASP.NET, like this one - Can ASP.NET MVC's AsyncController be used to service large number of concurrent hanging requests (long poll)? - and obviously Comet questions like this - Comet implementation for ASP.NET?
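As a rough sketch of the non-blocking alternative (an async MVC 4+ action using Task.Delay instead of Thread.Sleep; ChangeLog, the action name, and the polling intervals are invented for the example):

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using System.Web.Mvc;

// Stand-in for the statically held, shared change list from the question.
public static class ChangeLog
{
    private static readonly List<object> Changes = new List<object>();

    public static IList<object> GetChangesSince(int version)
    {
        lock (Changes) { return Changes.Skip(version).ToList(); }
    }
}

public class ChangesController : Controller
{
    [HttpPost]
    public async Task<ActionResult> Poll(int clientVersion)
    {
        // Wait up to roughly 5 seconds, checking every 100 ms.
        for (int i = 0; i < 50; i++)
        {
            var changes = ChangeLog.GetChangesSince(clientVersion);
            if (changes.Count > 0)
                return Json(changes);

            // Task.Delay yields the request thread back to the pool,
            // unlike Thread.Sleep, which keeps it blocked.
            await Task.Delay(100);
        }
        return Json(new object[0]); // no changes; the client re-issues the post
    }
}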

Why is the AspNetSessionData stage of page processing delaying my page by 20+ seconds?

I have a web application that uses ASP.NET with "InProc" session handling. Normally, everything works fine, but a few hundred requests each day take significantly longer to run than normal. In the IIS logs, I can see that these pages (which usually require 2-5 seconds to run) are running for 20+ seconds.
I enabled Failed Request Tracing in Verbose mode, and found that the delay is happening in the AspNetSessionData section. In the example shown below, there was a 39-second gap between AspNetSessionDataBegin and AspNetSessionDataEnd.
I'm not sure what to do next. I can't find any reason for this delay, and I can't find any more logging features that could be enabled to tell me what's happening here. Does anyone know why this is happening, or have any suggestions for additional steps I can take to find the problem?
My app usually stores 1-5MB in session for each user, mostly cached data for searches. The server has plenty of available memory, and only runs about 50 users.
It could be caused by lock contention for the session state. Take a look at the last paragraph of MSDN's ASP.NET Session State Overview. See also K. Scott Allen's helpful post on this subject.
If a page is annotated with EnableSessionState="True" (or inherits the web.config default), then all requests for that page will acquire a write lock on the session state. All other requests that use session state -- even if they do not acquire a write lock -- are blocked until that request finishes.
If a page is annotated with EnableSessionState="ReadOnly", then the page will not acquire a write lock and so will not block other requests. (Though it may be blocked by another request holding the write lock.)
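For example, a page that only reads session data can be marked like this (the CodeBehind/Inherits values are placeholders):

<%@ Page Language="C#" EnableSessionState="ReadOnly" CodeBehind="Search.aspx.cs" Inherits="MyApp.Search" %>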
To eliminate this lock contention, you may want to implement your own [finer grained] locking around the HttpContext.Cache object or static WeakReferences. The latter is probably more efficient. (See pp. 118-122 of Ultra-Fast ASP.NET by Richard Kiessig.)
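A minimal sketch of what finer-grained locking around the cache could look like (class and key names are invented for the example; this is not the book's code):

using System;
using System.Web;
using System.Web.Caching;

public static class SearchCache
{
    // One lock guards the cache-miss path only; cache hits never block.
    private static readonly object Sync = new object();

    public static object GetOrAdd(string userId, string query, Func<object> load)
    {
        string key = "search:" + userId + ":" + query;
        object value = HttpRuntime.Cache[key];
        if (value != null) return value;

        lock (Sync)
        {
            value = HttpRuntime.Cache[key];
            if (value == null)
            {
                value = load();
                HttpRuntime.Cache.Insert(key, value, null,
                    Cache.NoAbsoluteExpiration, TimeSpan.FromMinutes(20));
            }
            return value;
        }
    }
}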
There is a chance you are running up against the maximum amount of memory the application pool is allowed to consume, which causes a restart of the application pool (and would account for the delay you are seeing in accessing the session). The amount of memory on the server doesn't determine how much memory ASP.NET can use; that is controlled in machine.config via the memoryLimit property and, in IIS 6.0 and later, in IIS itself via the "Maximum memory used" property. Beyond that, have you considered alternatives to each user using 5 MB of session memory? That will not scale well and can cause a lot of issues under load. Might caching be a more effective solution? Do the searches take so long that you need to do this, or could the SQL/database setup be optimized to speed up your queries?

Using 'Lock' in web applications

A few months ago I was interviewing for a job inside the company I am currently in. I don't have a strong web development background, but one of the questions the interviewer posed was how I could improve this block of code.
I don't remember the code block perfectly, but to sum it up it was a web hit counter, and he used lock on the hit counter.
lock(HitCounter)
{
// Bla...
}
However, after some discussion he said that lock is good, but you should never use it in web applications!
What is the basis for his statement? Why shouldn't I use lock in web applications?
There is no special reason why locks should not be used in web applications. However, they should be used carefully as they are a mechanism to serialize multi-threaded access which can cause blocking if lock blocks are contended. This is not just a concern for web applications though.
What is always worth remembering is that on modern hardware an uncontended lock takes 20 nanoseconds to flip. With this in mind, the usual practice of trying to make code inside of lock blocks as minimal as possible should be followed. If you have minimal code within a block, the overhead is quite small and potential for contention low.
To say that locks should never be used is a bit of a blanket statement really. It really depends on what your requirements are e.g. a thread-safe in-memory cache to be shared between requests will potentially result in less request blocking than on-demand fetching from a database.
Finally, BCL and ASP.Net Framework types certainly use locks internally, so you're indirectly using them anyway.
The application domain might be recycled.
This might result in the old appdomain still finishing serving some requests and the new appdomain also serving new requests.
Static variables are not shared between them, so locking on a static global would not grant exclusivity in this case.
First of all, you never want to lock an object that you actually use in any application. You want to create a lock object and lock that:
private readonly object _hitCounterLock = new object();
lock(_hitCounterLock)
{
//blah
}
As for the web portion of the question: when you lock, you block every thread that attempts to enter that same lock (which for the web could be hundreds or thousands of requests). They will all be waiting until each thread ahead of them releases the lock.
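For something as small as a hit counter, one common improvement (not from the original interview, just an illustration) is to avoid a lock entirely and use Interlocked:

using System.Threading;

public static class HitCounter
{
    private static long _hits;

    public static long Increment()
    {
        // Atomic increment; no lock is taken, so no request ever waits here.
        return Interlocked.Increment(ref _hits);
    }
}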
Late :), but for future readers of this, an additional point:
If the application is run on a web farm, the ASP.NET processes running on multiple machines will not share the lock object.
So this can only work if
1. no web farm has to be supported, AND
2. ASP.NET is configured (non-default) NOT to run parallel instances during recycle until old requests are served (as mentioned by Andras above).
This code will create a bottleneck for your application, since all incoming requests will have to wait at this point until the previous one leaves the lock.
lock is intended for multithreaded code where multiple threads require access to the same shared variable; the lock is exclusively acquired by the requesting thread, and all pending threads block and wait until it is released.
In web applications, user requests are isolated, so there is no need for locking by default.
A couple of reasons...
If you're trying to lock a database read/write operation, there's a really high risk of a race condition happening anyway because the database isn't owned by the process doing the lock, so it could be read from/written to by another process -- perhaps even a hypothetical future version of IIS that runs multiple processes per application.
Locks are typically used in client applications for non-UI threads, i.e. background/worker threads. Web applications don't have as much of a use for multithreaded processing unless you're trying to take advantage of multiple cores (in which case locks on request-associated objects would be acceptable), because each request can be assumed to run on its own thread, and the server can't respond until it's processed the entire output (or at least a sequential chunk) anyway.

Close if there are no active threads; if any are active, wait until they complete and then close

My application overview is as follows (architecture diagram: http://img823.imageshack.us/img823/8975/modelq.jpg).
An ASP.NET web service accepts requests from various applications for digital signing and verification via a client. The web service then routes these requests to a smart card.
When the system date changes, I want the following to happen.
New requests from the clients are made to wait
Current work between the web service and the smart card should be completed
Any prior pending requests should be completed
The reason I need the above to happen is that I need to close the existing sessions between the smart card and the web service. This should happen only when no signing/verification of files is in progress. I cannot just close all the sessions, as that might affect a file being processed by one of the threads. So I need to make sure there are no currently active threads between the web service and the smart card.
I wrote a piece of code which gives the total number of active threads between the web service and the smart card.
// Estimate how many thread-pool worker threads are currently busy by
// comparing the pool's maximum and available worker-thread counts.
int vWorkerThreads, vWorkerThreadsMax;
int vPortThreads, vPortThreadsMax;
System::Threading::ThreadPool::GetAvailableThreads(vWorkerThreads, vPortThreads);
System::Threading::ThreadPool::GetMaxThreads(vWorkerThreadsMax, vPortThreadsMax);
int ActiveThreadCount = vWorkerThreadsMax - vWorkerThreads;
Does this mean I also need to make the client requests wait?
CLEANUP MECHANISM: Close the PKCS#11 API using the C_CloseAllSessions and C_Finalize calls, which free up the library so that it cleans up all the session objects. This should be done once every day.
Any ideas on how I can perform such a task?
UPDATE:
I could have been much clearer in my query. I want to make it clear that my aim is not to shut down the ASP.NET web service. My aim is to reset the smart card. As I am accessing the smart card via the ASP.NET web service, I need a mechanism to perform this task of resetting the smart card.
I am giving the current process below
1. At midnight, the client detects the date change.
2. The client calls the function WebService_Close_SmartCard.
3. The web service receives the WebService_Close_SmartCard request and in turn calls PKCS11_Close_SmartCard. This call will be served by one of the available threads from the thread pool. PKCS11_Close_SmartCard will close all the existing sessions with the smartcard.
4. At this point, I want to make sure that there are no threads with function calls such as PKCS11_DigitalSign_SmartCard / PKCS11_DigitalVerify_SmartCard talking to the smartcard, as PKCS11_Close_SmartCard will abruptly end the other ongoing sessions.
PS: I am new to ASP.NET and Multithreading.
The question was updated in a big way, so bear with me...
Given that no threads are being created directly\indirectly by your web method code:
Question: So you are not explicitly creating any new threads or using ThreadPool threads directly or indirectly; you are simply receiving calls to your web method and executing your code synchronously?
Answer: Yes, you are correct. There is a client API which calls the web service. The web service then manages the threads automatically (creates/allocates them, etc.) in response to the client's demands. The web service talks to a smart card by opening multiple sessions for encryption/decryption.
It is more helpful to rephrase the original question along the lines of "requests" rather than threads, e.g.
When the system date changes I want to re-start my ASP.NET application and ensure that all requests that are currently executing are completed, and that any outstanding\queued requests are completed as well.
This is handled automatically as there is a concept of a request queue and active requests. When your ASP.NET application is restarted, all current and queued requests are completed (unless they do not complete in a timely fashion), and new requests are queued and then serviced when a new worker process comes back up. This process is followed when you recycle the Application Pool that your ASP.NET application belongs to.
You can configure your application pool to recycle at a set time in IIS Manager via the "Recycle" settings for the associated Application Pool. Presumably you want to do this at "00:00".
Update
I think I can glean from your comments that you need to run some cleanup code when all requests have been serviced and then the application is about to shut down. You should place this code in the global "Application_End" event handler.
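For example, a bare sketch of where the cleanup could live in Global.asax.cs (Pkcs11Wrapper and its methods are invented placeholders for whatever wraps your C_CloseAllSessions/C_Finalize calls):

using System.Web;

public class Global : HttpApplication
{
    protected void Application_End()
    {
        // Runs after ASP.NET has finished servicing outstanding requests during
        // a recycle or shutdown, so no request should be mid-signing here.
        Pkcs11Wrapper.CloseAllSessions();  // hypothetical wrapper around C_CloseAllSessions
        Pkcs11Wrapper.FinalizeLibrary();   // hypothetical wrapper around C_Finalize
    }
}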
Update 2
In answer to your updated question. Your requirements are:
When the application is restarted:
New requests from the clients are made to wait
Current work between the web service and the smart card should be completed
Any prior pending requests should be completed
This is supported by the standard recycling pattern that I have described. You do not need to deal with request threads yourself - this is one of the pillars of the ASP.NET framework; it deals with this for you. It is request oriented and abstracts how requests are handled, i.e. serviced on multiple threads. It manages putting requests onto threads and manages the lifecycle of those requests when the application is recycled.
Update 3
OK, I think we have the final piece of the scenario here. You are trying to shut down ASP.NET from your client by issuing a "CLOSED" web service call. Basically, you want to implement your own ASP.NET shutdown behaviour by making sure that all current and queued requests are dealt with before you then execute your clean-up code.
You are trying to re-invent the wheel.
ASP.NET already has this behaviour and it is supported by:
a. Application recycling: it will service outstanding requests cleanly and start up a new process to serve new requests. It will even queue any new requests that are received whilst this process is going on.
b. Application_End: a global application event handler where you can put your clean-up code. It will execute after recycling has cleanly dealt with your outstanding requests.
You do not need your "CLOSED" command.
You should consider letting IIS recycle your application, as it has support for recycling at specified daily time(s). If you cannot configure IIS for deployment reasons, then you can use web.config "touching" to force a recycle outside of IIS:
a. Have a timer running on the server which checks for the date-change condition and then touches the web.config file (see the sketch below).
b. Still have the client call a "CLOSED" web method, but have the "CLOSED" method just touch the web.config file.
IIS, then "a", are the most desirable options, in that order.
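A minimal sketch of the "touch" helper referenced in option "a" (the class name is invented; it assumes the code runs inside the web application so HttpRuntime.AppDomainAppPath resolves correctly):

using System;
using System.IO;
using System.Web;

public static class RecycleHelper
{
    public static void TouchWebConfig()
    {
        // Updating web.config's last-write time is enough for ASP.NET to notice
        // the change and begin a clean shutdown of the current app domain.
        string path = Path.Combine(HttpRuntime.AppDomainAppPath, "web.config");
        File.SetLastWriteTimeUtc(path, DateTime.UtcNow);
    }
}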
Honestly Microsoft have already thought about it. :)
Update 4
@Raj OK, let me try to rephrase this again.
Your conditions are:
You have a requirement to reset your smartcard once a day.
Before resetting your smartcard, all current and queued web service requests must be completed i.e. the outstanding requests.
After outstanding requests are completed, you reset your smartcard.
Any new requests that come in whilst this process is happening should be queued and then serviced after the smartcard has been reset.
These conditions allow you to complete existing requests, queue any new requests, reset your smartcard, and then start processing new requests after the card has been reset.
What I am suggesting is:
1. Place your smartcard reset code in "Application_End".
2. Configure IIS to recycle your application at "00:00". Ensure that in the advanced settings for the associated Application Pool you configure "Disable Overlapped Recycle = True".
3. At "00:00", application recycling ensures that all current and queued requests will be completed.
4. After "00:00", application recycling ensures that all new requests will be queued whilst the requests in "3" are completed and the application performs its shutdown steps.
5. After the requests in "3" are completed, "Application_End" will be called automatically. This ensures that your smartcard is reset after all current requests are completed.
6. Application recycling ensures that your application is re-started in a new process, and that new requests queued in step "4" start to be processed. The important thing here is that your reset code has been called in "5".
Unless there is some detail missing from your question, the above appears to meet your conditions. You wish to do "x,y,z" and ASP.NET has built-in support which can be used to achieve "x,y,z" and gives you mature, guaranteed and well-documented implementations.
I am still struggling to understand why you are talking about threads. I do multi-threaded development, but talking about threads instead of requests when thinking about ASP.NET adds unnecessary complexity to this discussion. Unless your question is still unclear.
Perhaps you are missing the point I'm making here. I am drawing a parallel between the behaviour you require when you call "CLOSED" from your client application, and what happens when you recycle an application. You can use recycling and "Application_End" to achieve the required results.
I am trying to help you out here, as trying to implement this behaviour yourself is unnecessary and non-trivial.

How to set the ASP.NET SessionState read-write LOCK time-out?

I have a WCF web service that uses ASP.NET session state. WCF sets a read-write lock on the session for every request. What this means is that my web service can only process one request at a time per user, which hurts perceived performance of our AJAX application.
So I'm trying to find a way to get around this limitation.
Using a read-only lock (which then allows concurrent access to the session) isn't supported by WCF.
I haven't found a way to release the read-write lock manually during processing of a request
So now I'm thinking that there may be some way to set the read-write lock timeout to some very short interval, so that waiting requests don't need to wait very long. See the relevant part of the quote below.
From MSDN:
http://msdn.microsoft.com/en-us/library/ms178581.aspx
"If two concurrent requests are made for the same session, the first request gets exclusive access to the session information. The second request executes only after the first request is finished. (The second session can also get access if the exclusive lock on the information is freed because the first request exceeds the lock time-out.) If the EnableSessionState value in the @ Page directive is set to ReadOnly, a request for the read-only session information does not result in an exclusive lock on the session data."
...But I haven't found any information on how long this lock time-out is, or how to change it.
I can tell you that the httpRuntime executionTimeout setting controls this lock time; however, the documentation for this field states that the thread should be terminated at that point. I know from experience that the thread is not terminated and will eventually return data, but a new thread is spawned to handle requests in the queue.
By default this value is 110 seconds in ASP.NET 2.0 and later; before that it was 90 seconds. I would be concerned about this behavior changing in the future and being "fixed".
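For reference, the setting in question lives in web.config; the value shown is just the documented default, not a recommendation (and note it is only enforced when compilation debug is false):

<system.web>
  <httpRuntime executionTimeout="110" />
</system.web>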
Has anyone tried using the SQL session state provider and modifying its stored procedures? I did this in dev and it seems to get around the locking issues, but I'm not sure whether it has any side effects. Basically what I did was change the three stored procedures which obtain exclusive locks so that the lock column is always set to 0.
