Scalability issue when using outgoing asynchronous web requests on IIS 7.5 - asp.net

A bit of a long description below, but it is quite a tricky problem. I have tried to cover what we do know about the problem in order to narrow down the search. The question is more of an ongoing investigation than a single question, but I think it may help others as well. Please add information in comments or correct me if you think I am wrong about some of the assumptions below.
UPDATE 19/2, 2013: We have cleared up some question marks, and I have a theory about what the main problem is, which I'll describe below. Not ready to write a "solved" response to it yet though.
UPDATE 24/4, 2013: Things have been stable in production (though I believe it is temporary) for a while now, and I think it is due to two reasons: 1) the port increase, and 2) a reduced number of outgoing (forwarded) requests. I'll continue this update further down in the correct context.
We are currently doing an investigation in our production environment to determine why our IIS web server does not scale when too many outgoing asynchronous web service requests are being done (one incoming request may trigger multiple outgoing requests).
CPU is only at 20%, but we receive HTTP 503 errors on incoming requests, and many outgoing web requests get the following exception: “SocketException: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full”. Clearly there is a scalability bottleneck somewhere, and we need to find out what it is and whether it is possible to solve it by configuration.
Application context:
We are running IIS v7.5 with the integrated managed pipeline, using .NET 4.5 on a Windows 2008 R2 64-bit operating system. We use only 1 worker process in IIS. Hardware varies slightly, but the machine used for examining the error is an Intel Xeon with 8 cores (16 with hyper-threading).
We use both asynchronous and synchronous web requests. The asynchronous ones use the new .NET async support, so that each incoming request makes multiple HTTP requests from the application to other servers over persistent (keep-alive) TCP connections. Synchronous request execution time is low, 0-32 ms (longer times occur due to thread context switching). For the asynchronous requests, execution time can be up to 120 ms before the requests are aborted.
Normally each server serves up to ~1,000 incoming requests/sec. Outgoing requests run at ~300 requests/sec, rising to ~600 requests/sec when the problem starts to arise. The problem only occurs when outgoing async requests are enabled on the server and we go above a certain level of outgoing requests (~600 req/sec).
Possible solutions to the problem:
Searching the Internet for this problem reveals a plethora of possible solution candidates. However, they are very much dependent upon the versions of .NET, IIS and the operating system, so it takes time to find something that fits our context (anno 2013).
Below is a list of solution candidates and the conclusions we have come to so far with regard to our configuration context. I have categorised the detected problem areas so far into the following main categories:
Some queue(s) fill up
Problems with TCP connections and ports (UPDATE 19/2, 2013: This is the problem)
Too slow allocation of resources
Memory problems (UPDATE 19/2, 2013: This is most likely another problem)
1) Some queue(s) fill up
The outgoing asynchronous request exception message does indicate that some queue or buffer has filled up, but it does not say which one. Via the IIS forum (and a blog post referenced there) I have been able to distinguish four of possibly six (or more) different types of queues in the request pipeline, labeled A-F below.
It should be stated that, of all the queues defined below, we see for certain that the 1.B) thread pool performance counter Requests Queued gets very full during the problematic load. So it is likely that the cause of the problem is at the .NET level and not below it (C-F).
1.A) .NET Framework level queue?
We use the .NET Framework class WebClient for issuing the asynchronous calls (async support), as opposed to HttpClient, which in our experience had the same issue but with a far lower req/sec threshold. We do not know if the .NET Framework implementation hides any internal queue(s) above the thread pool. We don’t think this is the case.
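For reference, a minimal sketch (not our actual code; the URL handling and the exact cancellation wiring are illustrative) of how such an outgoing call with a 120 ms abort window can look with WebClient on .NET 4.5:

    using System;
    using System.Net;
    using System.Threading;
    using System.Threading.Tasks;

    public static class OutgoingCall
    {
        public static async Task<string> GetWithTimeoutAsync(Uri uri)
        {
            using (var client = new WebClient())
            using (var cts = new CancellationTokenSource(TimeSpan.FromMilliseconds(120)))
            {
                // WebClient has no CancellationToken overloads, so cancellation
                // is wired up by invoking CancelAsync when the token fires.
                cts.Token.Register(client.CancelAsync);
                try
                {
                    return await client.DownloadStringTaskAsync(uri);
                }
                catch (OperationCanceledException)
                {
                    return null; // aborted after 120 ms, as described above
                }
                catch (WebException ex)
                {
                    if (ex.Status != WebExceptionStatus.RequestCanceled)
                        throw;
                    return null; // some paths surface cancellation as a WebException
                }
            }
        }
    }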
1.B) .NET Thread Pool
The thread pool acts as a natural queue, since the default .NET scheduler executes queued work on threads picked from the thread pool.
Performance counter: [ASP.NET v4.0.30319].[Requests Queued].
Configuration possibilities:
(ApplicationPool) maxConcurrentRequestsPerCPU should be 5000 (instead of the previous default of 12). So in our case it should allow 5000*16 = 80,000 concurrent requests, which should be sufficient in our scenario.
(processModel) autoConfig = true/false, which allows some threadPool-related configuration to be set according to the machine configuration. We use true, which is a potential error candidate, since these values may be set erroneously for our (high) need.
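For reference, a sketch of where these two settings live (the values shown are the ones discussed above, not a recommendation). In IIS integrated mode, maxConcurrentRequestsPerCPU is set in aspnet.config next to the CLR, and processModel in machine.config:

    <!-- aspnet.config -->
    <configuration>
      <system.web>
        <applicationPool maxConcurrentRequestsPerCPU="5000" />
      </system.web>
    </configuration>

    <!-- machine.config -->
    <configuration>
      <system.web>
        <processModel autoConfig="true" />
      </system.web>
    </configuration>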
1.C) Global, process wide, native queue (IIS integrated mode only)
If the thread pool is full, requests start to pile up in this native (unmanaged) queue.
Performance counter:[ASP.NET v4.0.30319].[Requests in Native Queue]
Configuration possibilities: ????
1.D) HTTP.sys queue
This queue is not the same queue as 1.C) above. Here’s an explanation as stated to me “The HTTP.sys kernel queue is essentially a completion port on which user-mode (IIS) receives requests from kernel-mode (HTTP.sys). It has a queue limit, and when that is exceeded you will receive a 503 status code. The HTTPErr log will also indicate that this happened by logging a 503 status and QueueFull“.
Performance counter: I have not been able to find any performance counter for this queue, but by enabling the IIS HTTPErr log, it should be possible to detect if this queue gets flooded.
Configuration possibilities: This is set in IIS on the application pool, advanced setting: Queue Length. The default value is 1,000. I have seen recommendations to increase it to 10,000. Trying this increase has not solved our issue, though.
1.E) Operating System unknown queue(s)?
Although unlikely, I guess the OS could actually have a queue somewhere between the network card buffer and the HTTP.sys queue.
1.F) Network card buffer:
As requests arrive at the network card, it is natural that they are placed in some buffer to be picked up by some OS kernel thread. Since this is kernel-level execution, and thus fast, it is not likely to be the culprit.
Windows Performance Counter: [Network Interface].[Packets Received Discarded] using the network card instance.
Configuration possibilities: ????
2) Problems with TCP connections and ports
This is a candidate that pops up here and there, though our outgoing (async) TCP requests are made over persistent (keep-alive) TCP connections. So as the traffic grows, the number of used ephemeral ports should really only grow due to the incoming requests. And we know for sure that the problem only arises when we have outgoing requests enabled.
However, the problem may still arise because a port stays allocated for the duration of the request. An outgoing request may take as long as 120 ms to execute (before the .NET Task (thread) is canceled), which might mean that ports stay allocated for a longer time period. Analyzing the Windows performance counters supports this assumption, since the number of TCPv4.[Connections Established] goes from a normal 2,000-3,000 to peaks of almost 12,000 in total when the problem occurs.
We have verified that the configured maximum number of TCP connections is set to the default of 16,384. In this case it may not be the problem, although we are dangerously close to the max limit.
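As a side note, 16,384 matches the size of the default dynamic (ephemeral) port range on Windows Server 2008 (49152-65535). It can be inspected and widened with netsh; the start/num values below are just an example:

    netsh int ipv4 show dynamicport tcp
    netsh int ipv4 set dynamicport tcp start=10000 num=55535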
When we try using netstat on the server, it mostly returns without any output at all, and using TCPView shows very few items in the beginning. If we let TCPView run for a while, it soon starts to show new (incoming) connections quite rapidly (say 25 connections/sec). Almost all connections are in the TIME_WAIT state from the beginning, suggesting that they have already completed and are waiting for cleanup. Do those connections use ephemeral ports? The local port is always 80, and the remote port is increasing. We wanted to use TCPView to see the outgoing connections, but we can’t see them listed at all, which is very strange. Can’t these two tools handle the number of connections we are having?
(To be continued.... But please fill in with info if you know it… )
Furthermore, as a side note here: it was suggested in the blog post "ASP.NET Thread Usage on IIS 7.5, IIS 7.0, and IIS 6.0" that ServicePointManager.DefaultConnectionLimit should be set to Int32.MaxValue, as it could otherwise be a problem. But in .NET 4.5 this is already the default.
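For completeness, this is the one-liner in question; it would typically go in Application_Start (not needed on .NET 4.5, where ASP.NET already raises the default):

    System.Net.ServicePointManager.DefaultConnectionLimit = Int32.MaxValue;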
UPDATE 19/2, 2013:
It is reasonable to assume that we did in fact hit the max limit of 16,384 ports. We doubled the number of ports on all but one server, and only the old server ran into problems when we hit the old peak load of outgoing requests. So why did TCPv4.[Connections Established] never show us a higher number than ~12,000 at problem times? My theory: most likely, although not established as fact (yet), the performance counter TCPv4.[Connections Established] is not equivalent to the number of ports that are currently allocated. I have not had time to catch up on my TCP state studying yet, but I am guessing that there are more TCP states than what "Connections Established" shows which would render a port as occupied. Since we cannot use the "Connections Established" performance counter as a way to detect the danger of running out of ports, it is important that we find some other way of detecting when we reach this max port range. And as described in the text above, we are not able to use either netstat or TCPView for this on our production servers. This is a problem! (I'll probably write more about it in an upcoming response to this post.)
The number of ports is restricted on Windows to a maximum of 65,535 (although the first ~1,000 should probably not be used). But it should be possible to avoid the problem of running out of ports by decreasing the time spent in the TCP state TIME_WAIT (240 seconds by default), as described in numerous places. That should free up ports faster. I was at first a bit hesitant about doing this, since we use both long-running database queries and WCF calls over TCP, and I wouldn't like to decrease that time constraint. Although I have not caught up on my TCP state machine reading yet, I think it might not be a problem after all. The TIME_WAIT state, I think, is only there to allow for the handshake of a proper shutdown towards the client. So the actual data transfer on an existing TCP connection should not time out due to this limit. Worst case scenario: the client is not shut down properly and instead needs to time out. I guess all browsers may not implement this correctly, but that would probably be a problem on the client side only. Though I am guessing a bit here...
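For reference, the TIME_WAIT duration is controlled by the TcpTimedWaitDelay registry value (valid range 30-300 seconds; a reboot is required for it to take effect). The 30 seconds below is the commonly suggested value, not something we have verified in production:

    reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 30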
END UPDATE 19/2, 2013
UPDATE 24/4, 2013:
We have increased the number of ports to the maximum value. At the same time we do not get as many forwarded outgoing requests as earlier. These two in combination should be the reason why we have not had any incidents. However, it is only temporary, since the number of outgoing requests is bound to increase again on these servers. The problem, I think, is that the port for an incoming request has to remain open during the time frame of the response for the forwarded requests. In our application, the cancellation limit for these forwarded requests is 120 ms, which can be compared with the normal <1 ms to handle a non-forwarded request. So in essence, I believe the finite number of ports is the major scalability bottleneck on such high-throughput servers (>1,000 requests/sec on ~16-core machines) as we are using. This, in combination with the GC work on cache reload (see below), makes the server especially vulnerable.
END UPDATE 24/4
3) Too slow allocation of resources
Our performance counters show that the number of queued requests in the thread pool (1.B) fluctuates a lot during the time of the problem. So potentially this means that we have a dynamic situation in which the queue length starts to oscillate due to changes in the environment. For instance, this would be the case if there are flood-protection mechanisms that activate when traffic floods in. As it is, we have a number of these mechanisms:
3.A) Web load balancer
When things go really bad and the server responds with an HTTP 503 error, the load balancer will automatically remove the web server from active production for a 15-second period. This means that the other servers will take the increased load during that time frame. During the “cooling period”, the server may finish serving its requests, and it will automatically be reinstated when the load balancer does its next ping. Of course this is only good as long as all servers don’t have a problem at once. Luckily, so far, we have not been in that situation.
3.B) Application specific valve
In the web application, we have our own constructed valve (yes, it is a "valve", not a "value") triggered by a Windows performance counter for queued requests in the thread pool. A thread, started in Application_Start, checks this performance counter value each second. If the value exceeds 2,000, all outgoing traffic ceases to be initiated. The next second, if the queue value is below 2,000, outgoing traffic starts again.
The strange thing here is that this has not prevented us from reaching the error scenario, since we don’t have much logging of it occurring. It may mean that when traffic hits us hard, things go bad really quickly, so that the 1-second check interval is actually too long.
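For illustration, a minimal sketch of such a valve (the class and member names are invented; the counter category/name and the 2,000 threshold are as described above):

    using System.Diagnostics;
    using System.Threading;

    public static class OutgoingTrafficValve
    {
        private static volatile bool _open = true;

        // Checked by the code that initiates outgoing requests.
        public static bool IsOpen { get { return _open; } }

        // Started once from Application_Start.
        public static void StartMonitor()
        {
            var counter = new PerformanceCounter(
                "ASP.NET v4.0.30319", "Requests Queued", readOnly: true);

            var thread = new Thread(() =>
            {
                while (true)
                {
                    // Close the valve while more than 2000 requests are queued.
                    _open = counter.NextValue() <= 2000;
                    Thread.Sleep(1000);
                }
            });
            thread.IsBackground = true;
            thread.Start();
        }
    }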
3.C) Thread pool slow increase (and decrease) of threads
There is another aspect to this as well. When there is a need for more threads in the thread pool, those threads get allocated very slowly: from what I read, 1-2 threads per second. This is because it is expensive to create threads, and since you don’t want too many threads anyway, in order to avoid expensive context switching in the synchronous case, I think this is natural. However, it should also mean that if a sudden large burst of traffic hits us, the number of threads will not be nearly enough to satisfy the need in the asynchronous scenario, and queuing of requests will start. This is a very likely problem candidate, I think. One candidate solution may be to increase the minimum number of created threads in the ThreadPool. But I guess this may also affect performance of the synchronously running requests.
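A sketch of that candidate solution, raising the thread pool floor programmatically (the figure 320 = 16 cores * 20 is purely illustrative; see also the minWorkerThreads discussion in the related question further down):

    // In Global.asax.cs
    protected void Application_Start()
    {
        // Raise the thread pool minimum so a sudden burst does not have to
        // wait for the 1-2 threads/sec ramp-up.
        int workerMin, iocpMin;
        System.Threading.ThreadPool.GetMinThreads(out workerMin, out iocpMin);
        System.Threading.ThreadPool.SetMinThreads(System.Math.Max(workerMin, 320), iocpMin);
    }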
4) Memory problems
(Joey Reyes wrote about this here in a blog post)
Since objects are collected later for asynchronous requests (up to 120 ms later in our case), memory problems can arise, since objects can be promoted to generation 1, and the memory will then not be reclaimed as often as it should be. The increased pressure on the garbage collector may very well cause extended thread context switching and further weaken the capacity of the server.
However, we don’t see increased GC or CPU usage during the time of the problem, so we don’t think the suggested CPU throttling mechanism is a solution for us.
UPDATE 19/2, 2013: We use a cache-swap mechanism at regular intervals, at which an (almost) full in-memory cache is reloaded into memory and the old cache can be garbage collected. At these times, the GC has to work harder and steals resources from the normal request handling. Using the Windows performance counter for thread context switching, we see that the number of context switches decreases significantly from the normal high value at times of high GC usage. I think that during such cache reloads, the server is extra vulnerable to queueing up requests, and it is necessary to reduce the footprint of the GC. One potential fix would be to refill the existing cache objects in place instead of allocating new memory each time. A bit more work, but it should be doable.
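A sketch of that refill-in-place idea (all type and member names here are invented): by mutating the existing entries instead of swapping in a newly built cache, the old object graph never becomes garbage, so a reload does not hand the GC a large generation-2 graph to collect:

    using System.Collections.Concurrent;

    public sealed class PriceEntry
    {
        // Mutable fields so an entry can be refreshed without reallocation.
        public decimal Price;
        public int Stock;
    }

    public static class PriceCache
    {
        private static readonly ConcurrentDictionary<int, PriceEntry> Entries =
            new ConcurrentDictionary<int, PriceEntry>();

        public static void Refresh(int id, decimal price, int stock)
        {
            // Reuse the existing instance when present; allocate only for new ids.
            var entry = Entries.GetOrAdd(id, _ => new PriceEntry());
            entry.Price = price;
            entry.Stock = stock;
        }
    }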
UPDATE 24/4, 2013:
I am still in the middle of the cache reload memory tweak to avoid having the GC run as much. But we normally have some 1,000 queued requests temporarily when the GC runs. Since it runs on all threads, it is natural that it steals resources from the normal request handling. I'll update this status once the tweak has been deployed and we can see a difference.
END UPDATE 24/4

I have implemented a reverse proxy through an async HTTP handler for benchmarking purposes (as a part of my PhD thesis) and ran into the very same problems as you.
In order to scale, it is mandatory to set processModel autoConfig to false and fine-tune the thread pools. I have found that, contrary to what the documentation on the processModel defaults says, many of the thread pools are not properly configured when autoConfig is set to true. The maxconnection setting is also important, as it limits your scalability if it is set too low. See http://support.microsoft.com/default.aspx?scid=kb;en-us;821268
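A sketch of the machine.config/web.config shape that KB article describes (the values are the KB's starting points for one CPU, not tested recommendations for this workload; the thread settings are per CPU, while minFreeThreads, minLocalRequestFreeThreads and maxconnection scale with the core count):

    <configuration>
      <system.web>
        <processModel autoConfig="false"
                      maxWorkerThreads="100"
                      maxIoThreads="100"
                      minWorkerThreads="50" />
        <httpRuntime minFreeThreads="88" minLocalRequestFreeThreads="76" />
      </system.web>
      <system.net>
        <connectionManagement>
          <!-- the KB's rule of thumb is 12 connections per CPU -->
          <add address="*" maxconnection="12" />
        </connectionManagement>
      </system.net>
    </configuration>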
Regarding your app running out of ports because of the TIME_WAIT delay on the socket, I have also faced the same problem because I was injecting traffic from a limited set of machines with more than 64k requests in 240 seconds. I lowered the TIME_WAIT to 30 seconds without any problems.
I also mistakenly reused a proxy object to a web services endpoint in several threads. Although the proxy doesn't have any state, I found that the GC had a lot of problems collecting the memory associated with its internal buffers (string[] instances), and that caused my app to run out of memory.
Some interesting performance counters to monitor are the ones related to queued requests, requests in execution, and request time under the ASP.NET applications category. If you see queued requests, or if execution time is low but clients see long request times, then you have some sort of contention in your server. Also monitor counters under the LocksAndThreads category, looking for contention.

Since asynchronous requests hold on to TCP sockets for longer, maybe you need to look at the maxconnection property within connectionManagement in your web.config?
Please refer to this link: http://support.microsoft.com/default.aspx?scid=kb;en-us;821268
We faced a similar problem and tuned this parameter to fix our issue. Maybe it will help you.
Edit: Also, in my past experience, lots of TIME_WAITs indicate a connection leak in the code. Possible causes: 1) not disposing the connections used, 2) an incorrect implementation of connection pooling.

Related

Odd ASP.NET threadpool sizing behavior

I am load testing a .NET 4.0 MVC application hosted on IIS 7.5 (default config, in particular processModel autoConfig=true), and am observing odd behavior in how .NET manages the threads.
http://msdn.microsoft.com/en-us/library/0ka9477y(v=vs.100).aspx mentions that "When a minimum is reached, the thread pool can create additional threads or wait until some tasks complete".
It seems that the duration threads are blocked for plays a role in whether the pool creates new threads or waits for tasks to complete, not necessarily resulting in optimal throughput.
Question: Is there any way to control that behavior, so threads are generated as needed and request queuing is minimized?
Observation:
I ran two tests on a test controller action that does not do much besides Thread.Sleep for an arbitrary time (see the sketch after the two cases below):
50 requests/second with the page sleeping 1 second
5 requests/second with the page sleeping for 10 seconds
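A minimal sketch of such a test action (the controller and action names are invented):

    using System.Threading;
    using System.Web.Mvc;

    public class LoadTestController : Controller
    {
        // e.g. /LoadTest/Sleep?ms=1000 for case 1, ?ms=10000 for case 2
        public ActionResult Sleep(int ms)
        {
            Thread.Sleep(ms); // synchronously block the request thread
            return new EmptyResult();
        }
    }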
In both cases, .NET would ideally use 50 threads to keep up with the incoming traffic. What I observe is that in the first case it does not do that; instead it chugs along executing some 20-odd requests concurrently, letting the incoming requests queue up. In the second case, threads seem to be added as needed.
Both tests generated traffic for 100 seconds. Here are corresponding perfmon screenshots.
In both cases the Requests Queued counter is highlighted (note the 0.01 scaling)
50/sec Test
For most of the test, 22 requests are handled concurrently (turquoise line). As each takes about a second, that means almost 30 requests/sec queue up until the test stops generating load after 100 seconds, and the queue then gets slowly worked off. Briefly the concurrency jumps to just above 40, but never to 50, the minimum needed to keep up with the traffic at hand.
It is almost as if the threadpool management algorithm determines that it doesn't make sense to create new threads, because it has a history of ~22 tasks completing (i.e. threads becoming available) per second. Completely ignoring the fact that it has a queue of some 2800 requests waiting to be handled.
5/sec Test
Conversely, in the 5/sec test threads are added at a steady rate (red line). The server falls behind initially and requests do queue up, but never more than 52, and eventually enough threads are added for the queue to be worked off, with more than 70 requests executing concurrently, even while load is still being generated.
Of course the workload is higher in the 50/sec test, as 10x the number of HTTP requests is being handled, but the server has no problem at all handling that traffic once the thread pool is primed with enough threads (e.g. by running the 5/sec test first).
It just seems unable to deal with a sudden burst of traffic, because it decides not to add any more threads to deal with the load (it would rather throw 503 errors than add more threads in this scenario, it seems). I find this hard to believe, as a burst of 50 requests/second is surely something IIS should be able to handle on a 16-core machine. Is there some setting that would nudge the thread pool towards erring slightly more on the side of creating new threads, rather than waiting for tasks to complete?
Looks like it's a known issue:
"Microsoft recommends that you tune the minimum number of threads only when there is load on the Web server for only short periods (0 to 10 minutes). In these cases, the ThreadPool does not have enough time to reach the optimal level of threads to handle the load."
Exactly describes the situation at hand.
Solution: slightly increase minWorkerThreads in machine.config to handle the expected traffic bursts (a value of 4 gives 4 x 16 = 64 threads on the 16-core machine, since the setting is per CPU).
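A sketch of that change in machine.config (note that explicit thread settings only take effect with autoConfig off; the value 4 is the one from the text):

    <configuration>
      <system.web>
        <!-- per CPU: 4 x 16 cores = 64 warm worker threads -->
        <processModel autoConfig="false" minWorkerThreads="4" />
      </system.web>
    </configuration>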

Forcing a BizTalk Host to Throttle for Debugging Purposes

We're currently having an issue on our production servers and would like to try to replicate it in our dev environment. I'm currently awaiting access to our performance monitoring tool, and while waiting I would like to play with it a little.
Since I suspect host throttling in prod, I'm thinking of forcing hosts to throttle in dev to see if that recreates the issue.
Is there a way to do this?
As others have mentioned, monitoring the throttling counters and other counters like memory and WIP messages is a must to see what is going on in your production server. I would also recommend setting up a SCOM alert on throttling states of 3+ (publishing and delivery states), if you have SCOM.
Message throughput can grind to a halt, especially in the memory (4, 5) and queue size (6) states. States 1-2 are generally short-lived (e.g. the arrival of a large batch of messages), and BizTalk recovers within a few seconds.
Simulating the memory state in your Dev environment should be straightforward by tweaking the throttling thresholds (obviously not something to be taken lightly in production!)
For example, to trigger the memory threshold states: AFAIK the lowest memory usage threshold you can set is 101 MB. Running a load test in dev should then be able to reproduce the throttle.
There is also apparently a user-based throttling override to set states 10 and 11, although I haven't actually tried this.
Some other experience on avoiding throttling:
(Caveat - I don't have an active BizTalk 2006/R2 setup - this is for 2009 / 2010)
If you do a lot of asynchronous processing (e.g. queue receives), ensure that you have split functionality into separate hosts for receiving, processing and sending. This way you can adjust the throttling for the async receive hosts to trigger much earlier than the processing and send hosts - this should have the effect of constricting new incoming messages into the MessageBox while allowing existing messages to complete processing.
On 64-bit hosts, the default 25% host memory usage throttling threshold is usually an unnecessary liability - we increased it, following Yossi Dahan's recommendation of 50% on a 4 GB server.
Note that suspended messages count towards throttling state 6 - ensure that you have a strategy for dealing with suspended messages (and obviously ensure that the SQL Agent jobs are running!).

ASP.NET: high number of Requests Queued and context switching

We have a fairly popular site with around 4 million users a month. It is hosted on a dedicated box with 16 GB of RAM and 2 processors with 24 cores.
At any given time the CPU is always under 40% and memory under 12 GB, but at the highest traffic we see very poor performance; the site becomes very, very slow. We have 2 app pools, one for our main site and one for our forum. Only the site is slow. We don't have any restrictions on CPU or memory per app pool.
I have looked at the performance counters and saw something very interesting. At our peak time, for some reason, requests are being queued. Overall context-switching numbers are very high, around 30,000-110,000.
As I understand it, high context switching is caused by locks. Can anyone give me an example of code that would cause a high number of context switches?
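As an illustration of the kind of code being asked about (a minimal constructed example, not from the poster's site): many threads contending for a single lock, where every blocked acquisition is a potential context switch.

    using System.Threading.Tasks;

    class LockContention
    {
        private static readonly object Gate = new object();
        private static long _counter;

        static void Main()
        {
            // 64 workers take and release the same lock in a tight loop,
            // so threads constantly block and get rescheduled.
            Parallel.For(0, 64, _ =>
            {
                for (int i = 0; i < 1000000; i++)
                {
                    lock (Gate) { _counter++; }
                }
            });
        }
    }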
I am not too concerned about the context switching, and I don't think the numbers are huge. You have a lot of threads running in IIS (since it's a 24-core machine), and higher context-switching numbers are expected. However, I am definitely concerned about the request queuing.
I would do several things and see how it affects your performance counters:
Your server's CPU is evidently under-utilized, since you run below 40% all the time. You can try setting a higher value of "Threads per processor limit" in IIS until you get to 50-60% utilization. An optimal value of threads per core by the book is 20, but it depends on the scenario, and you can experiment with higher or lower values. I would recommend trying a value >= 30. Low CPU utilization can also be a sign of blocking IO operations.
Adjust the "Queue Length" settings in IIS properties. If you have configured the "Threads per processor limit" to be 20, then you should configure the Queue Length to be 20 x 24 cores = 480. Again, if the requests are getting Queued, that can be a sign that all your threads are blocked serving other requests or blocked waiting for an IO response.
Don't serve your static files from IIS. Move them to a CDN, Amazon S3 or whatever else. This will significantly improve your server performance, because thousands of server requests will go somewhere else! If you MUST serve the files from IIS, then configure IIS file compression. In addition, use expires headers for your static content so it gets cached on the client, which will save a lot of bandwidth.
Use async IO wherever possible (reading/writing from disk, DB, network, etc.) in your ASP.NET controllers, handlers, etc. to make sure you are using your threads optimally (see the sketch after this list). Blocking the available threads with blocking IO (which is done in 95% of the ASP.NET apps I have seen in my life) can easily cause the thread pool to be fully utilized under heavy load, and queuing will occur.
Do a general optimization to reduce the number of requests that hit your server and the processing time of single requests. This can include minification and bundling of your CSS/JS files, refactoring your JavaScript to do fewer roundtrips to the server, refactoring your controller/handler methods to be faster, etc. I have added links below to Google's and Yahoo's recommendations.
Disable ASP.NET debugging in IIS.
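A sketch of the async IO point above (the controller, URL and field names are invented): the request thread is returned to the pool while the backend call is in flight, instead of blocking for its whole duration.

    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Web.Mvc;

    public class ReportsController : Controller
    {
        private static readonly HttpClient Http = new HttpClient();

        public async Task<ActionResult> Index()
        {
            // The worker thread is free to serve other requests during the await.
            string payload = await Http.GetStringAsync("http://backend/api/report");
            return Content(payload, "application/json");
        }
    }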
Google and Yahoo recommendations:
https://developers.google.com/speed/docs/insights/rules
https://developer.yahoo.com/performance/rules.html
If you follow all this advice, I am sure you will get some improvement!

How do I maximize HTTP network throughput?

I was running a benchmark on CouchDB when I noticed that, even with large bulk inserts, running a few of them in parallel is almost twice as fast. I also know that web browsers use a number of parallel connections to speed up page loading.
What is the reason multiple connections are faster than one? They go over the same wire, or even to localhost.
How do I determine the ideal number of parallel requests? Is there a rule of thumb, like "threadpool size = # cores + 1"?
The gating factor is not the wire itself, which, after all, runs pretty quickly (ignoring router delays), but the software overhead at each end. Each physical transfer has to be set up, the data sent and stored, and then completely handled before anything can go the other way. So each connection is effectively synchronous, no matter what it claims to be at the socket level: one socket operating asynchronously is still moving data back and forth in a synchronous way, because the software demands synchronicity.
A second connection can take advantage of the latency -- the dead time on the wire -- that arises from the software doing its thing for the first connection. So, even though each connection is synchronous, multiple connections let things happen much faster. Things seem (but of course only seem) to happen in parallel.
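A small sketch of that effect (the CouchDB URL and the count of 4 are placeholders): several identical requests in flight at once, so each connection's dead time is covered by the others.

    using System.Linq;
    using System.Net.Http;
    using System.Threading.Tasks;

    class ParallelRequests
    {
        static void Main()
        {
            var http = new HttpClient();
            // Four GETs in flight at once; while one response is being
            // processed, the other connections keep the wire busy.
            var tasks = Enumerable.Range(0, 4)
                .Select(_ => http.GetStringAsync("http://localhost:5984/"))
                .ToArray();
            Task.WaitAll(tasks);
        }
    }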
You might want to take a look at RFC 2616, the HTTP spec. It will tell you about the interchanges that happen to get an HTTP connection going.
I can't say anything about optimal number of parallel requests, which is a matter between the browser and the server.
Each connection consumes its own thread, and each thread has a quantum in which to consume CPU, network and other resources - mainly CPU.
When you start parallel calls, the threads compete for CPU time, and things run "at the same time".
That is a high-level overview. I suggest you read about asynchronous calls and thread programming to understand it better.

Is ASP.NET multithreaded (how does it execute requests)

This might be a bit of a silly question, but:
If I have two people logging on to my site at exactly the same time, will the server side code be executed one after the other or will they be executed simultaneously in separate threads?
I'm curious with regard to a denial-of-service attack on a website login. Does the server slow down because it has a massive queue of logins, or is it slow because it has a billion simultaneous logins?
This is not related to ASP.NET per se (I have very little knowledge in that area), but to web servers generally. Most web servers use threads (or processes) to handle requests, so basically whatever snippet of code you have will be executed for both connections in parallel. Of course, if you access a database or some other backend system where a lock is placed, allowing just one session to perform queries, you may have implicitly serialized all requests.
Web servers typically have a minimum and maximum number of workers, which are tuned to the current hardware (CPUs, memory, etc.). If these are exhausted, new requests are queued, waiting for a worker to become available, until a maximum queue length of pending requests is reached, at which point new connections are disregarded, effectively denying service (if done on purpose, this is called a denial-of-service or DoS attack).
So, in your terms, it's a combination: a huge number of simultaneous requests filling up the queue.
It should use a thread pool. Note that they are still in the same application, so application-level items like static variables are still shared between them.
from this article
"Remember ISAPI is multi-threaded so requests will come in on multiple threads through the reference that was returned by ApplicationDomainFactory.Create(). Listing 1 shows the disassembled code from the IsapiRuntime.ProcessRequest method that receives an ISAPI ecb object and server type as parameters. The method is thread safe, so multiple ISAPI threads can safely call this single returned object instance simultaneously."
So yes, in the case of a DoS attack, it would be slow because of the large number of connections.
As others said, most web servers use multiple processes or threads (better) to serve multiple requests at a time. In particular, you can configure each ASP.NET application pool with a max number of queued requests and max worker processes. Each process has multiple threads up to a maximum (not configurable AFAIK, but I may be wrong), and incoming requests are processed on a first-in, first-out basis.
Moreover, ASP.NET processes one single request for each session - but a malicious user can open as many sessions as she wants.
Multiple logins will probably hit the database and bring it to its knees probably before the webserver itself.
As far as I know, there is no built-in way to throttle ASP.NET requests other than setting the max number of queued requests (waiting to be processed). This number should ideally be very small. You can monitor the number of queued ASP.NET requests using performance counters. Say you find that, at peak traffic, this number is 100. You could then update the application so that it refuses login attempts when this number is above 100, so that the database is not hit (I never did that, it's just a thought).
