In IIS, I have the option to change the periodic restart settings which control when the app pool recycles. Most of the attributes make sense to me (memory, private memory, time) except one: "requests". The Microsoft documentation
states the following about the "requests" attribute:
"Specifies that the worker process should be recycled after it processes a specific number of requests. The default value is 0, which disables the attribute."
My question is: since the default value allows unlimited requests (which makes sense to me), why would it be advantageous for a production app to limit them? An app pool recycle would lose the session data for the app, which seems a bit silly to do just because many requests have gone by. Is limiting the number of server requests something that would protect against DDoS attacks, or is there some other concern I'm overlooking? Why would anyone want the app pool to recycle just because the application is being used?
documentation:
https://learn.microsoft.com/en-us/iis/configuration/system.applicationhost/applicationpools/add/recycling/periodicrestart/index
Keep in mind that one of the main reasons for recycling a w3wp.exe process in IIS is to avoid unstable states caused by memory leaks, DB connection leaks, WCF handle leaks, hung IIS requests, or some other unreleased or undisposed resource resulting from poor programming or bad code. You don't want those resource leaks to accumulate over time.
The "Request Limit" is an alternative to the "Regular Time Interval" because there are instances where you know approximately how often your code leaks resources per number of request. For example in Production, I may have a particular pattern of traffic such as 1,000,000 request per hour, after which I know there are 1Gig of memory leak. So "Request Limit" is simply an alternative if you know very specific information about your own application. Whereas a "Time Interval" setting, may accumulate 1,000,000 request or 10,000,000 request in that time interval resulting in a vastly worst resource Leak that may crash the w3wp.exe process. So given the information you know, you would choose to use a static number "Request Limit", instead of a timed interval.
I need to log to the database every call to my Web API.
Now of course I don't want to go to my database on every call.
So let's say I have a dictionary or a hash table object in my cache,
and every 10,000 records I go to the database.
I still don't want every 10,000th user to wait for this operation.
And I can't start a different thread for long operations, since the application pool
can be recycled at basically any time.
What is the best solution for this scenario?
Thanks
I would argue that your view of durability is rather inconsistent. Your cache of 10000 objects could also be lost at any time due to an app pool recycle or server crash.
But to the original question of how to perform a large operation without causing the user to wait:
Put constraints on app pool recycling and deal with the potential data loss.
Periodically dump the cached messages to a Windows service for further processing. This is still not 100% guaranteed to preserve data, e.g. the service/server could crash.
Use a message queue (MSMQ), possibly with WCF. A message queue can persist to disk, so this can be considered reasonably reliable.
Message Queuing (MSMQ) technology enables applications running at
different times to communicate across heterogeneous networks and
systems that may be temporarily offline. Applications send messages to
queues and read messages from queues.
Message Queuing provides guaranteed message delivery, efficient
routing, security, and priority-based messaging. It can be used to
implement solutions to both asynchronous and synchronous scenarios
requiring high performance.
Taking this a step further...
Depending on your requirements and/or environment, you could probably eliminate your cache, and write all messages immediately (and rapidly) to a message queue and not worry about performance loss or a large write operation.
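As a rough illustration of that approach, here is a minimal sketch using System.Messaging; the queue path and the ApiLogEntry type are placeholders made up for the example:

```
using System;
using System.Messaging;

public class ApiLogEntry                     // hypothetical log record
{
    public DateTime Timestamp { get; set; }
    public string Path { get; set; }
}

public static class ApiLogger
{
    private const string QueuePath = @".\private$\apilog";   // hypothetical local private queue

    static ApiLogger()
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);
    }

    public static void Log(string path)
    {
        using (var queue = new MessageQueue(QueuePath))
        using (var message = new Message(new ApiLogEntry { Timestamp = DateTime.UtcNow, Path = path }))
        {
            message.Recoverable = true;   // write the message to disk so it survives restarts/recycles
            queue.Send(message);
        }
    }
}
```

A Windows service (or the WCF MSMQ binding) on the other side can then drain the queue and do the actual database writes in batches.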
A bit of a long description below, but it is quite a tricky problem. I have tried to cover what we do know about the problem in order to narrow down the search. The question is more of an ongoing investigation than a single question, but I think it may help others as well. Please add information in comments, or correct me if you think I am wrong about some of the assumptions below.
UPDATE 19/2, 2013: We have cleared up some of the question marks, and I have a theory about what the main problem is, which I'll describe below. I'm not ready to write a "solved" response to it yet though.
UPDATE 24/4, 2013: Things have been stable in production (though I believe it is temporary) for a while now, and I think it is due to two reasons: 1) the port increase, and 2) the reduced number of outgoing (forwarded) requests. I'll continue this update further down in the correct context.
We are currently doing an investigation in our production environment to determine why our IIS web server does not scale when too many outgoing asynchronous web service requests are being done (one incoming request may trigger multiple outgoing requests).
CPU is only at 20%, but we receive HTTP 503 errors on incoming requests and many outgoing web requests get the following exception: “SocketException: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full” Clearly there is a scalability bottleneck somewhere and we need to find out what it is and if it is possible to solve it by configuration.
Application context:
We are running IIS 7.5 with the integrated managed pipeline, using .NET 4.5 on Windows 2008 R2 (64-bit). We use only 1 worker process in IIS. Hardware varies slightly, but the machine used for examining the error is an Intel Xeon with 8 cores (16 with hyper-threading).
We use both asynchronous and synchronous web requests. Those that are asynchronous use the new .NET async support to make each incoming request issue multiple HTTP requests in the application to other servers on persistent (keep-alive) TCP connections. Synchronous request execution time is low, 0-32 ms (longer times occur due to thread context switching). For the asynchronous requests, execution time can be up to 120 ms before the requests are aborted.
Normally each server serves up to ~1000 incoming requests/sec. Outgoing requests are ~300 requests/sec, rising to ~600 requests/sec when the problem starts to arise. Problems only occur when outgoing async requests are enabled on the server and we go above a certain level of outgoing requests (~600 req/s).
Possible solutions to the problem:
Searching the Internet for this problem reveals a plethora of possible solution candidates. However, they are very much dependent on the versions of .NET, IIS and the operating system, so it takes time to find something that applies to our context (anno 2013).
Below is a list of solution candidates and the conclusions we have come to so far with regard to our configuration context. I have categorised the detected problem areas so far into the following main categories:
Some queue(s) fill up
Problems with TCP connections and ports (UPDATE 19/2, 2013: This is the problem)
Too slow allocation of resources
Memory problems (UPDATE 19/2, 2013: This is most likely another problem)
1) Some queue(s) fill up
The outgoing asynchronous request exception message does indicate that some queue or buffer has filled up, but it does not say which queue/buffer. Via the IIS forum (and the blog post referenced there) I have been able to distinguish 4 of possibly 6 (or more) different types of queues in the request pipeline, labeled A-F below.
It should be stated, though, that of all the queues defined below, we see for certain that the 1.B) thread pool performance counter Requests Queued gets very full during the problematic load. So it is likely that the cause of the problem is at the .NET level and not below it (C-F).
1.A) .NET Framework level queue?
We use the .NET Framework class WebClient for issuing the asynchronous call (async support), as opposed to HttpClient, which in our experience had the same issue but at a far lower req/s threshold. We do not know whether the .NET Framework implementation hides any internal queue(s) above the thread pool. We don't think this is the case.
1.B) .NET Thread Pool
The Thread pool acts as a natural queue since the .NET Thread (default) Scheduler is picking threads from the thread pool to be executed.
Performance counter: [ASP.NET v4.0.30319].[Requests Queued].
Configuration possibilities:
(applicationPool) maxConcurrentRequestsPerCPU should be 5000 (instead of the previous default of 12). So in our case that allows 5000*16 = 80,000 concurrent requests, which should be more than sufficient in our scenario.
(processModel) autoConfig = true/false, which allows some thread-pool related configuration to be set according to the machine configuration. We use true, which is a potential error candidate since those values may be set inappropriately for our (high) needs.
1.C) Global, process wide, native queue (IIS integrated mode only)
If the thread pool is full, requests start to pile up in this native (unmanaged) queue.
Performance counter:[ASP.NET v4.0.30319].[Requests in Native Queue]
Configuration possibilities: ????
1.D) HTTP.sys queue
This queue is not the same queue as 1.C) above. Here’s an explanation as stated to me “The HTTP.sys kernel queue is essentially a completion port on which user-mode (IIS) receives requests from kernel-mode (HTTP.sys). It has a queue limit, and when that is exceeded you will receive a 503 status code. The HTTPErr log will also indicate that this happened by logging a 503 status and QueueFull“.
Performance counter: I have not been able to find any performance counter for this queue, but by enabling the IIS HTTPErr log, it should be possible to detect if this queue gets flooded.
Configuration possibilities: This is set in IIS on the application pool, advanced setting: Queue Length. The default value is 1000. I have seen recommendations to increase it to 10,000, though trying this increase has not solved our issue.
1.E) Operating System unknown queue(s)?
Although unlikely, I guess the OS could actually have a queue somewhere in between the network card buffer and the HTTP.sys queue.
1.F) Network card buffer:
As requests arrive at the network card, it is natural that they are placed in some buffer to be picked up by some OS kernel thread. Since this is kernel-level execution, and thus fast, it is not likely to be the culprit.
Windows Performance Counter: [Network Interface].[Packets Received Discarded] using the network card instance.
Configuration possibilities: ????
2) Problems with TCP connections and ports
This is a candidate that pops up here and there, though our outgoing (async) TCP requests are made on persistent (keep-alive) TCP connections. So as the traffic grows, the demand for ephemeral ports should really only grow because of the incoming requests. And we know for sure that the problem only arises when we have outgoing requests enabled.
However, the problem may still arise because a port is allocated for a longer part of the request's time frame. An outgoing request may take as long as 120 ms to execute (before the .NET Task (thread) is canceled), which might mean that ports stay allocated for a longer period. Analyzing the Windows performance counters verifies this assumption, since the number of TCPv4.[Connections Established] goes from the normal 2-3000 up to peaks of almost 12,000 in total when the problem occurs.
We have verified that the configured maximum number of TCP connections is set to the default of 16384. In that case it may not be the problem, although we are dangerously close to the max limit.
When we try using netstat on the server, it mostly returns without any output at all; using TcpView also shows very few items in the beginning. If we let TcpView run for a while, it soon starts to show new (incoming) connections quite rapidly (say 25 connections/sec). Almost all connections are in the TIME_WAIT state from the beginning, suggesting that they have already completed and are waiting for cleanup. Do those connections use ephemeral ports? The local port is always 80, and the remote port is increasing. We wanted to use TcpView to see the outgoing connections, but we can't see them listed at all, which is very strange. Can't these two tools handle the amount of connections we are having?
(To be continued.... But please fill in with info if you know it… )
Furthermore, as a side note here: it was suggested in the blog post "ASP.NET Thread Usage on IIS 7.5, IIS 7.0, and IIS 6.0" that ServicePointManager.DefaultConnectionLimit should be set to int.MaxValue, which otherwise could be a problem. But in .NET 4.5 this is already the default from the start.
UPDATE 19/2, 2013:
It is reasonable to assume that we did in fact hit the max limit of 16,384 ports. We doubled the number of ports on all but one server, and only the old server ran into problems when we hit the old peak load of outgoing requests. So why did TCPv4.[Connections Established] never show us a higher number than ~12,000 at problem times? My theory: most likely, although not established as fact (yet), the performance counter TCPv4.[Connections Established] is not equivalent to the number of ports currently allocated. I have not had time to catch up on my TCP state reading yet, but I am guessing that there are more TCP states than what "Connections Established" shows, states that would still leave a port occupied. Since we cannot use the "Connections Established" performance counter to detect the danger of running out of ports, it is important that we find some other way of detecting when we approach this max port range. And as described in the text above, we are not able to use either netstat or the TcpView application for this on our production servers. This is a problem! (I'll write more about it in an upcoming response to this post, I think.)
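One possible way to get this information without netstat/TcpView would be to sample the TCP table via System.Net.NetworkInformation and group connections by state; that also counts TIME_WAIT and the other states that "Connections Established" leaves out. A sketch (not something we have in production, just an illustration of the idea, runnable as a small console tool):

```
using System;
using System.Linq;
using System.Net.NetworkInformation;

class TcpStateSnapshot
{
    static void Main()
    {
        // The same information netstat reads, but grouped so it stays readable at 10,000+ connections.
        var connections = IPGlobalProperties.GetIPGlobalProperties().GetActiveTcpConnections();

        foreach (var group in connections.GroupBy(c => c.State).OrderByDescending(g => g.Count()))
        {
            Console.WriteLine("{0,-15} {1}", group.Key, group.Count());
        }

        Console.WriteLine("Total: {0}", connections.Length);
    }
}
```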
The number of ports is restricted on Windows to a maximum of 65,535 (although the first ~1000 should probably not be used). But it should be possible to avoid the problem of running out of ports by decreasing the time spent in the TCP state TIME_WAIT (default 240 seconds), as described in numerous places. It should free up ports faster. I was at first a bit hesitant about doing this since we use both long-running database queries and WCF calls over TCP, and I wouldn't like to decrease the time constraint. Although I haven't caught up on my TCP state machine reading yet, I think it might not be a problem after all. The TIME_WAIT state, I think, is only there in order to allow the handshake of a proper shutdown towards the client. So the actual data transfer on an existing TCP connection should not time out due to this limit. Worst case scenario, the client is not shut down properly and instead needs to time out. I guess not all browsers implement this correctly, and it could possibly be a problem on the client side only. Though I am guessing a bit here...
END UPDATE 19/2, 2013
UPDATE 24/4, 2013:
We have increased the number of ports to the maximum value. At the same time we do not get as many forwarded outgoing requests as earlier. These two in combination should be the reason why we have not had any incidents. However, it is only temporary, since the number of outgoing requests is bound to increase again in the future on these servers. The problem, I think, lies in the fact that the port for an incoming request has to remain open during the time frame of the response to the forwarded requests. In our application, the cancellation limit for these forwarded requests is 120 ms, which can be compared with the normal <1 ms to handle a non-forwarded request. So in essence, I believe the finite number of ports is the major scalability bottleneck on such high-throughput servers (>1000 requests/sec on ~16-core machines) as the ones we are using. This, in combination with the GC work on cache reload (see below), makes the server especially vulnerable.
END UPDATE 24/4
3) Too slow allocation of resources
Our performance counters show that the number of queued requests in the thread pool (1.B) fluctuates a lot during the time of the problem. So potentially this means that we have a dynamic situation in which the queue length starts to oscillate due to changes in the environment. For instance, this would be the case if there are flooding-protection mechanisms that activate when traffic floods in. As it is, we have a number of these mechanisms:
3.A) Web load balancer
When things go really bad and the server responds with an HTTP 503 error, the load balancer will automatically remove the web server from active production for a 15-second period. This means that the other servers take the increased load during that time frame. During the "cooling period", the server may finish serving its requests, and it will automatically be reinstated when the load balancer does its next ping. Of course this is only good as long as all servers don't have a problem at once. Luckily, so far, we have not been in that situation.
3.B) Application specific valve
In the web application, we have our own constructed valve (Yes. It is a "valve". Not a "value") triggered by a Windows Performance Counter for Queued Requests in the thread pool. There is a thread, started in Application_Start, that checks this performance counter value each second. And if the value exceeds 2000, all outgoing traffic ceases to be initiated. The next second, if the queue value is below 2000, outgoing traffic starts again.
The strange thing here is that it has not helped us avoid the error scenario, and we don't have much logging of it triggering. It may mean that when traffic hits us hard, things go bad really quickly, so that the 1-second check interval is actually too long.
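For illustration, here is a minimal sketch of such a valve. This is not the actual implementation from our application; the counter category name ("ASP.NET" vs. a version-specific one like "ASP.NET v4.0.30319") is an assumption, and only the 2000 threshold and the 1-second interval come from the description above:

```
using System.Diagnostics;
using System.Threading;

public static class OutgoingTrafficValve
{
    private static volatile bool _open = true;

    // Checked by the code that initiates outgoing (forwarded) requests.
    public static bool IsOpen { get { return _open; } }

    // Started once from Application_Start.
    public static void StartMonitoring()
    {
        var thread = new Thread(() =>
        {
            // Category name may be version specific, e.g. "ASP.NET v4.0.30319".
            using (var queued = new PerformanceCounter("ASP.NET", "Requests Queued"))
            {
                while (true)
                {
                    _open = queued.NextValue() < 2000;   // close the valve while the queue is long
                    Thread.Sleep(1000);                  // the 1-second interval from the text above
                }
            }
        });
        thread.IsBackground = true;
        thread.Start();
    }
}
```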
3.C) Thread pool slow increase (and decrease) of threads
There is another aspect to this as well. When there is a need for more threads in the application pool, these threads get allocated very slowly; from what I have read, 1-2 threads per second. This is because it is expensive to create threads, and since you don't want too many threads anyway (to avoid expensive context switching in the synchronous case), I think this is natural. However, it also means that if a sudden large burst of traffic hits us, the number of threads is not going to be nearly enough to satisfy the need in the asynchronous scenario, and queuing of requests will start. This is a very likely problem candidate, I think. One candidate solution would be to increase the minimum number of threads created in the ThreadPool, but I guess this may also affect the performance of the synchronously running requests.
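For reference, the knob for that candidate solution is ThreadPool.SetMinThreads (or the minWorkerThreads/minIoThreads attributes in machine.config). A hedged sketch, with the target purely illustrative and not something we have measured:

```
using System;
using System.Threading;

public static class ThreadPoolWarmup
{
    // Call once at application start-up so the pool does not have to grow
    // thread-by-thread (roughly 1-2 per second) when a burst of traffic arrives.
    public static void RaiseMinimums()
    {
        int workerMin, ioMin;
        ThreadPool.GetMinThreads(out workerMin, out ioMin);

        // Illustrative target only; measure before and after changing it.
        int target = Environment.ProcessorCount * 8;

        ThreadPool.SetMinThreads(Math.Max(workerMin, target), Math.Max(ioMin, target));
    }
}
```

The trade-off mentioned above still applies: a higher minimum may cost the synchronous requests some performance through extra context switching.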
4) Memory problems
(Joey Reyes wrote about this here in a blog post)
Since objects are collected later for asynchronous requests (up to 120 ms later in our case), memory problems can arise, because objects can be promoted to generation 1 and the memory will not be reclaimed as often as it should be. The increased pressure on the garbage collector may very well cause extended thread context switching and further weaken the capacity of the server.
However, we see neither increased GC nor CPU usage during the time of the problem, so we don't think the suggested CPU throttling mechanism is a solution for us.
UPDATE 19/2, 2013: We use a cache-swap mechanism at regular intervals, where an (almost) full in-memory cache is reloaded into memory and the old cache can be garbage collected. At these times the GC has to work harder and steals resources from the normal request handling. Using the Windows performance counter for thread context switching, we see that the number of context switches drops significantly from its normal high value at times of high GC usage. I think that during such cache reloads the server is extra vulnerable to queuing up requests, and it is necessary to reduce the footprint of the GC. One potential fix would be to just fill the cache without allocating new memory all the time. A bit more work, but it should be doable.
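A rough sketch of what "fill the cache without allocating" could look like; the SourceRow/CacheRow types are made up for illustration, and the real cache is of course more involved. The point is simply that existing long-lived entries are mutated in place instead of building a whole new dictionary and leaving the old one for the GC:

```
using System.Collections.Concurrent;
using System.Collections.Generic;

public class SourceRow { public string Key; public string Value; }   // hypothetical incoming data
public class CacheRow  { public string Value; }                      // hypothetical cached entry

public class InPlaceCache
{
    private readonly ConcurrentDictionary<string, CacheRow> _cache =
        new ConcurrentDictionary<string, CacheRow>();

    public CacheRow Get(string key)
    {
        CacheRow row;
        return _cache.TryGetValue(key, out row) ? row : null;
    }

    // Reload by updating existing entries in place. Long-lived (gen 2) objects are reused,
    // so a reload does not hand the GC an entire old cache graph to collect at once.
    // (Concurrent readers vs. in-place updates would need more care in a real implementation.)
    public void Reload(IEnumerable<SourceRow> freshRows)
    {
        foreach (var row in freshRows)
        {
            CacheRow entry = _cache.GetOrAdd(row.Key, _ => new CacheRow());
            entry.Value = row.Value;   // mutate rather than replace
        }
    }
}
```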
UPDATE 24/4, 2013:
I am still in the middle of the cache-reload memory tweak to avoid having the GC run as much. We normally have some 1000 queued requests temporarily when the GC runs. Since it runs on all threads, it is natural that it steals resources from the normal request handling. I'll update this status once the tweak has been deployed and we can see a difference.
END UPDATE 24/4
I have implemented a reverse proxy through an async HTTP handler for benchmarking purposes (as part of my PhD thesis) and ran into the very same problems as you.
In order to scale, it is mandatory to set processModel autoConfig to false and fine-tune the thread pools. I have found that, contrary to what the documentation about the processModel defaults says, many of the thread pools are not properly configured when autoConfig is set to true. The maxconnection setting is also important, as it limits your scalability if it is set too low. See http://support.microsoft.com/default.aspx?scid=kb;en-us;821268
Regarding your app running out of ports because of the TIME_WAIT delay on the socket, I have faced the same problem, because I was injecting traffic from a limited set of machines with more than 64k requests in 240 seconds. I lowered TIME_WAIT to 30 seconds without any problems.
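For anyone who wants to do the same: the TIME_WAIT interval is controlled by the TcpTimedWaitDelay registry value. A hedged sketch of setting it to 30 seconds programmatically (it can of course also be done in regedit; it requires administrative rights and takes effect after a reboot):

```
using Microsoft.Win32;

class LowerTimeWait
{
    static void Main()
    {
        Registry.SetValue(
            @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters",
            "TcpTimedWaitDelay",
            30,                      // seconds a closed connection stays in TIME_WAIT (default 240)
            RegistryValueKind.DWord);
    }
}
```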
I also mistakenly reused a proxy object to a Web Services endpoint across several threads. Although the proxy doesn't have any state, I found that the GC had a lot of problems collecting the memory associated with its internal buffers (string[] instances), which caused my app to run out of memory.
Some interesting performance counters to monitor are the ones related to queued requests, requests in execution, and request time under the ASP.NET applications category. If you see queued requests, or if the execution time is low but the clients see long request times, then you have some sort of contention in your server. Also monitor counters under the LocksAndThreads category, looking for contention.
Since asynchronous requests hold TCP sockets open for longer, maybe you need to look at
the maxconnection property within connectionManagement in your web.config?
Please refer to this link: http://support.microsoft.com/default.aspx?scid=kb;en-us;821268
We faced a similar problem and tuned this parameter to fix our issue. Maybe this will help you.
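For illustration, the config setting from that KB article also has a code-level counterpart; a minimal sketch for Application_Start (the value 1000 is just an example, and as noted elsewhere in this thread the ASP.NET default on .NET 4.5 is already very high):

```
using System;
using System.Net;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Equivalent of <connectionManagement><add address="*" maxconnection="..."/></connectionManagement>:
        // caps how many concurrent outbound HTTP connections the process may open per endpoint.
        ServicePointManager.DefaultConnectionLimit = 1000; // illustrative value, tune for your traffic
    }
}
```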
Edit: Also, based on past experience, lots of TIME_WAITs indicate a connection leak in the code. Possible causes: 1) not disposing the connections you use; 2) an incorrect implementation of connection pooling.
I'm having no luck trying to find out how changing the instance count for an ASP.NET web role affects requests that are currently being processed.
Here's the scenario:
An ASP.NET site is deployed with 6 instances
Via the console I reduce the instance count to 4
Is Azure smart enough not to remove instances from the pool while they are currently processing requests, or does it just kill them mid-request?
I've been through the Azure docs, Google, and a number of emails to MS tech support, none of which were able to answer this seemingly simple question. I know about the events that get triggered by a shutdown etc., but that doesn't really help in a web site scenario with a live person waiting for the response to their request.
You cannot choose which instances to kill off. Primarily this is due to Windows Azure's instance allocation scheme, where your instances are split into different fault domains (meaning different areas of the data center - different rack, etc.). If you were to choose the instances to kill, this could leave you in a state where your remaining instances are in the same fault domain, which would void the SLA.
Having said that: You get an event when your role instance is shutting down (the OnStop() event). If you capture this event, you can do instance cleanup in preparation for VM shutdown. I can't recall if you're taken out of the load balancer at this point, but you could always force yourself out with a simple PowerShell command (Set-RoleInstanceStatus -Busy). This way your asp.net instance stops taking requests, and you can more easily shut down in a graceful manner.
EDIT: Sorry - didn't quite address all of your question. Since you get to capture OnStop(), you might have to implement a mechanism to make sure nothing's being processed in that instance. Since you're out of the load balancer, and assuming your requests are processed fairly quickly (2-5 seconds), you shouldn't have to wait long to clear out remaining requests. There's probably a performance counter to check, to see how many active requests are being handled.
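There is such a counter; a hedged sketch of an OnStop() that drains in-flight requests before letting the role shut down ("ASP.NET Applications \ Requests Executing" with the "__Total__" instance is the counter I have in mind, and the 30-second cap is an arbitrary illustration):

```
using System;
using System.Diagnostics;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override void OnStop()
    {
        // At this point the instance is (or can be taken) out of the load balancer,
        // so we only wait for the requests that are already executing to finish.
        using (var executing = new PerformanceCounter(
            "ASP.NET Applications", "Requests Executing", "__Total__"))
        {
            var deadline = DateTime.UtcNow.AddSeconds(30);   // arbitrary safety cap

            while (executing.NextValue() > 0 && DateTime.UtcNow < deadline)
            {
                Thread.Sleep(1000);
            }
        }

        base.OnStop();
    }
}
```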
Just to add to David's answer: the OnStop event happens when you are off the load balancer. For web apps, it is usually sufficient time to bleed out all requests after you are disconnected from the LB until the instance is shutdown. However, for long running or stateful connections (perhaps to a worker role), there would be an abrupt disconnect in some cases. While the OnStop method removes you from the LB, it does not terminate open connections. It simply prevents you from getting new connections. For web apps, this is usually enough time to complete the request (and you can delay the shutdown if necessary in the OnStop as well if you really want to).
In my ASP.NET MVC application I have a number of threads that wait for a certain length of time and then wake up to do some cleanup tasks, over and over. I have not deployed this application to a production server yet, but on my dev machine they seem to work as expected. For these threads to work the same on IIS7, do I need to look out for anything? Will IIS7 keep my threads alive indefinitely? Are there implications to worry about?
Also, I want to queue, let's say, 50 objects that were created through various requests and process them all in one go. I'd like to maintain them inside a list and then process the list, which means the list object has to be kept alive indefinitely. I'd like to avoid serializing my objects to the DB in order to maintain this queue. What is the correct way of achieving this?
Will IIS7 keep my threads alive
indefinitely?
No. If the application pool recycles (after a long period of inactivity, or when some memory threshold is hit), those threads will be stopped as the application is unloaded from memory. If those objects are that precious, I wouldn't recommend keeping them in memory, but rather serializing them to some persistent storage so they can be processed later in case of failure.
The design you describe is fine when you don't mind losing the cached commands in the queue. Otherwise it would be better to go with a different design. ASP.NET isn't suited for this type of processing, because IIS can recycle the process; when that happens you lose your in-memory queue. IIS could also decide to unload the AppDomain because no new requests are coming in. In that case your threads also stop running, which means that pending operations will still not have been processed, even if you use a persisted queue.
You'd probably be better off with some sort of transactional queue, such as MSMQ or a custom table in the database (or look at the open-source NServiceBus). Adding operations to the queue can be done by your web application, and processing the items can be done within a Windows service application that will not be recycled and can process the queue in a transactional way.
Since you're talking about multiple threads: when using a Windows service, you can build it in such a way that it runs multiple threads, or make it single-threaded and run several instances of the service. This is a very flexible design that I have used successfully in the past to distribute CPU- and disk-intensive operations over multiple machines.