I'm reading about AsyncControllers in ASP.NET MVC.
It seems that the sole reason they exist is so that IIS threads can be saved while long-running work is delegated to regular CLR threads, which seem to be cheaper.
I have a couple of questions here:
Why are these IIS threads so expensive that they justify this whole architecture built to support asynchronous controllers?
How do I know/configure how many IIS threads are running in my IIS application pool?
ASP.NET processes requests by using threads from the .NET thread pool. The thread pool maintains a pool of threads that have already incurred the thread initialization costs. Therefore, these threads are easy to reuse. The .NET thread pool is also self-tuning. It monitors CPU and other resource utilization, and it adds new threads or trims the thread pool size as needed. You should generally avoid creating threads manually to perform work. Instead, use threads from the thread pool. At the same time, it is important to ensure that your application does not perform lengthy blocking operations that could quickly lead to thread pool starvation and rejected HTTP requests.
Disk I/O and web service calls are all blocking. These are best optimized by using async calls. When you make an async call, ASP.NET frees your thread, and the request will be assigned to another thread when the callback function is invoked.
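As a rough illustration of that callback model (the controller, action, and service URL below are hypothetical), this is roughly what the classic MVC AsyncController pattern looks like: the Async method starts the I/O and returns immediately, releasing the request thread, and the Completed method runs on a pool thread once the callback fires. Error handling is omitted for brevity.

using System;
using System.Net;
using System.Web.Mvc;

public class DocumentsController : AsyncController
{
    // Runs on a thread-pool thread, which is released as soon as this method returns.
    public void DetailsAsync(int id)
    {
        AsyncManager.OutstandingOperations.Increment();

        var client = new WebClient();
        client.DownloadStringCompleted += (sender, e) =>
        {
            // Hand the result to the Completed method and signal that the async work is done.
            AsyncManager.Parameters["payload"] = e.Result;
            AsyncManager.OutstandingOperations.Decrement();
        };

        // Hypothetical remote service; no thread is blocked while the download is in flight.
        client.DownloadStringAsync(new Uri("http://remote-service.example/documents/" + id));
    }

    // Invoked on a pool thread once OutstandingOperations reaches zero.
    public ActionResult DetailsCompleted(string payload)
    {
        return Content(payload);
    }
}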
To configure the number of concurrent requests and threads, you can set the following (note that for IIS 7+ in integrated mode this element is read from aspnet.config, not web.config):
<system.web>
  <applicationPool maxConcurrentRequestsPerCPU="50" maxConcurrentThreadsPerCPU="0" requestQueueLimit="5000"/>
</system.web>
Refer: ASP.NET Thread Usage on IIS 7.5, IIS 7.0, and IIS 6.0
These are the settings that Microsoft best practices recommend:
Set maxconnection to 12 * # of CPUs. This setting controls the maximum number of outgoing HTTP connections that you can initiate from a client. In this case, ASP.NET is the client.
Set maxIoThreads to 100. This setting controls the maximum number of I/O threads in the .NET thread pool. This number is automatically multiplied by the number of available CPUs.
Set maxWorkerThreads to 100. This setting controls the maximum number of worker threads in the thread pool. This number is then automatically multiplied by the number of available CPUs.
Set minFreeThreads to 88 * # of CPUs. This setting is used by the worker process to queue all the incoming requests if the number of available threads in the thread pool falls below the value for this setting. This setting effectively limits the number of requests that can run concurrently to maxWorkerThreads - minFreeThreads, which works out to 12 concurrent requests per CPU (assuming maxWorkerThreads is 100).
Set minLocalRequestFreeThreads to 76 * # of CPUs. This setting is used by the worker process to queue requests from localhost (where a Web application sends requests to a local Web service) if the number of available threads in the thread pool falls below this number. This setting is similar to minFreeThreads, but it only applies to localhost requests from the local computer.
Note: The recommendations that are provided in this section are not rules. They are a starting point.
You would have to benchmark your application to find what works best for your application.
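One simple, hedged way to sanity-check what these settings actually give you while benchmarking is to log the effective pool limits from inside the application (where and how you log is up to you):

using System.Diagnostics;
using System.Threading;

public static class ThreadPoolDiagnostics
{
    // Logs the worker and I/O completion-port thread limits currently in effect,
    // plus how many of each are free right now.
    public static void LogLimits()
    {
        int maxWorker, maxIo, minWorker, minIo, freeWorker, freeIo;
        ThreadPool.GetMaxThreads(out maxWorker, out maxIo);
        ThreadPool.GetMinThreads(out minWorker, out minIo);
        ThreadPool.GetAvailableThreads(out freeWorker, out freeIo);

        Trace.TraceInformation(
            "Worker threads: {0} min / {1} max / {2} available; IO threads: {3} min / {4} max / {5} available",
            minWorker, maxWorker, freeWorker, minIo, maxIo, freeIo);
    }
}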
IIS threads are taken from the default thread pool, which is limited by default based on number of processor cores. If this thread pool queue becomes backed up, IIS will stop responding to requests. By using async code, the thread pool thread can be returned to the pool while the async operation takes place, allowing IIS to service more requests overall.
On the other hand, spawning a new thread on your own does not use a thread pool thread. Spawning an unchecked number of independent threads can also be a problem, so it's not a cure-all fix to the IIS thread pool issue. Async I/O is generally preferred either way.
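To make the contrast concrete, here is a hedged sketch (the controller and URL are hypothetical, MVC 4+ style): the first action merely moves blocking work onto another pool thread, so the pool gains nothing overall, while the second awaits true async I/O and holds no thread while the remote call is in flight.

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

public class ReportsController : Controller
{
    private static readonly HttpClient Http = new HttpClient();

    // Anti-pattern: the request thread goes back to the pool, but another pool
    // thread is tied up for the whole duration of the blocking download.
    public async Task<ActionResult> SlowOffloaded()
    {
        var body = await Task.Run(() =>
        {
            using (var wc = new WebClient())
                return wc.DownloadString("http://reports.example/slow");
        });
        return Content(body);
    }

    // Preferred: true async I/O - no thread is held while the response is in flight.
    public async Task<ActionResult> SlowAsync()
    {
        var body = await Http.GetStringAsync("http://reports.example/slow");
        return Content(body);
    }
}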
As for changing the number of threads in the thread pool, check here. However, you should probably really avoid doing so.
Our web service needs to serve 100 requests/second from time to time, while the rest of the time it serves 1 request/second. Analyzing the IIS logs, we found that when a burst occurs it took around 28 s to serve such calls.
Applying the Microsoft best practices cited by @nunespascal drastically reduced that time to 1 s in our case.
Below is the PowerShell script we currently use when deploying our production servers. It updates machine.config, taking into account the number of logical processors.
<# Get and backup current machine.config #>
$path = "C:\Windows\Microsoft.Net\Framework\v4.0.30319\Config\machine.config";
$xml = [xml] (get-content($path));
$xml.Save($path + "-" + (Get-Date -Format "yyyyMMdd-HHmm" ) + ".bak");
<# Get number of physical CPUs #>
$physicalCPUs = ([ARRAY](Get-WmiObject Win32_Processor)).Count;
<# Get number of logical processors #>
$logicalProcessors = (([ARRAY](Get-WmiObject Win32_Processor))[0] | Select-Object "numberOfLogicalProcessors").numberOfLogicalProcessors * $physicalCPUs;
<# Set Number of connection in system.net/connectionManagement #>
$systemNet = $xml.configuration["system.net"];
if (-not $systemNet){
$systemNet = $xml.configuration.AppendChild($xml.CreateElement("system.net"));
}
$connectionManagement = $systemNet.connectionManagement;
if (-not $connectionManagement){
$connectionManagement = $systemNet.AppendChild($xml.CreateElement("connectionManagement"));
}
$add = $connectionManagement.add;
if(-not $add){
$add = $connectionManagement.AppendChild($xml.CreateElement("add")) ;
}
$add.SetAttribute("address","*");
$add.SetAttribute("maxconnection", [string]($logicalProcessors * 12) );
<# Set several thread settings in system.web/processModel #>
$systemWeb = $xml.configuration["system.web"];
if (-not $systemWeb){
$systemWeb = $xml.configuration.AppendChild($xml.CreateElement("system.web"));
}
$processModel = $systemWeb.processModel;
if (-not $processModel){
$processModel = $systemWeb.AppendChild($xml.CreateElement("processModel"));
}
$processModel.SetAttribute("autoConfig","true");
$processModel.SetAttribute("maxWorkerThreads","100");
$processModel.SetAttribute("maxIoThreads","100");
$processModel.SetAttribute("minWorkerThreads","50");
$processModel.SetAttribute("minIoThreads","50");
<# Set other thread settings in system.web/httpRuntime #>
$httpRuntime = $systemWeb.httpRuntime;
if(-not $httpRuntime){
$httpRuntime = $systemWeb.AppendChild($xml.CreateElement("httpRuntime"));
}
$httpRuntime.SetAttribute("minFreeThreads",[string]($logicalProcessors * 88));
$httpRuntime.SetAttribute("minLocalRequestFreeThreads",[string]($logicalProcessors * 76));
<#Save modified machine.config#>
$xml.Save($path);
This solution came to us from a blog article written by Stuart Brierley back in 2009. We successfully tested it on Windows Server versions from 2008 R2 to 2016.
Actually, what is written in the article you have linked is not true.
The async pattern isn't there to free "super costly IIS worker threads" and use some other "cheap threads" in the background.
The async pattern is there simply to free threads.
You can benefit from it in scenarios where you do not need your thread (and, better still, not even your local machine).
I can name two example scenarios (both I/O):
First:
BeginRequest
Begin async file read
during the file read you don't need your thread - so other requests can use it.
File read ends - you get a thread from the app pool.
Request finishes.
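A minimal task-based sketch of this first scenario (MVC 4+ style; the path handling is hypothetical and omits validation): while the read is pending the thread goes back to the pool, and a pool thread is picked up again only when the read completes.

using System.IO;
using System.Threading.Tasks;
using System.Web.Mvc;

public class FilesController : Controller
{
    public async Task<ActionResult> Show(string name)
    {
        // Hypothetical path; the file is opened for asynchronous I/O.
        var path = Server.MapPath("~/App_Data/" + name);

        using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read,
                                           FileShare.Read, 4096, useAsync: true))
        using (var reader = new StreamReader(stream))
        {
            // No thread is blocked here while the disk read is in flight.
            var text = await reader.ReadToEndAsync();
            return Content(text);
        }
    }
}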
And an almost identical second scenario:
BeginRequest
Begin async call to a WCF service.
The work leaves our machine and we don't need our thread - so other requests can use it.
We get a response from the remote service - we get some thread from the app pool to continue.
Request finishes.
It's usually safe to read MSDN. You can get information about the async pattern here.
I understand that IIS uses a thread from the thread pool to serve an incoming HTTP request and releases the thread after it completes serving the request.
I want to play around with this to understand how many threads are possible in an IIS server for a specific hardware configuration, how many threads it can handle concurrently, and more related scenarios.
I'm looking for the ways/tools that will help visually monitor thread allocation/usage in IIS.
I'd appreciate any pointers/suggestions.
To see how many threads are possible in an IIS server for specific hardware, click on the server in IIS Manager. Then, in the right-hand pane, double-click on ASP.
The ASP Threads Per Processor Limit property specifies the maximum number of worker threads per processor that IIS creates. The default value of Threads Per Processor Limit is 25. The maximum recommended value for this property is 100.
To increase the value of Threads Per Processor Limit, see this.
I need to process about 250,000 documents per day with an EJB 3.1 asynchronous method in order to cope with an overall long-running task.
I do this to use more threads and process more documents concurrently. Here's an example in pseudocode:
// this returns about 250,000 documents per day
List<Document> documentList = Persistence.listDocumentsToProcess();

for (Document currentDocument : documentList) {
    // this is the asynchronous call
    ejbInstance.processAsynchronously(currentDocument);
}
Suppose I have a thread pool of size 10 and a 4-core processor; my questions are:
how many documents will the application server process SIMULTANEOUSLY?
what happens when all threads in the pool are processing documents and one more asynchronous call comes in? Will this work like a sort of JMS queue?
would I have any improvement adopting a JMS queue solution?
I work with Java EE 6 and WebSphere 8.5.5.2
The default configuration for asynchronous EJB method calls is as follows (from the infocenter):
The EJB container work manager has the following thread pool settings:
Minimum number of threads = 1
Maximum number of threads = 5
Work request queue size = 0 work objects
Work request queue full action = Block
Remote Future object duration = 86400 seconds
So trying to answer your questions:
how many documents will the application server process SIMULTANEOUSLY? (assuming 10 size thread pool)
This thread pool is for all EJB async calls, so first you need to assume that your application is the only one using EJB async calls. Then you will potentially have 10 runnable instances that will be processed in parallel. Whether they will be processed concurrently depends on the number of cores/threads available in the system, so you can't give an accurate number (some cores/threads may be doing web work, for example, or another process may be using the CPU).
what happen when all thread in pool are processing a documents and one more asynchronous call comes?
It depends on the Work request queue size and Work request queue full action settings. If there are no available threads in the pool, requests will be queued until the queue size is reached. Then it depends on the action, which can be Block or Fail.
would I have any improvement adopting a JMS Queue solution
Depends on your needs. Here are some pros/cons of a JMS solution.
Pros:
Persistence - with JMS your asynchronous tasks can be persistent, so in case of a server failure you will not lose them, and they will be processed after a restart or by another cluster member. The EJB async queue is held only in memory, so tasks in the queue are lost in case of failure.
Scalability - if you put tasks on the queue, they can be processed concurrently by many servers in the cluster, rather than being limited to a single JVM.
Expiration and priorities - you can define different expiration time or priorities for your messages.
Cons:
More complex application - you will need to implement an MDB to process your tasks.
More complex infrastructure - you will need a database to store the queues (a file system can be used for a single server, and a shared file system for clusters), or an external messaging solution like WebSphere MQ.
Slightly lower performance for processing a single item and a higher load on the server, as each message has to be serialized/deserialized to persistent storage.
In our ASP.NET application all methods use async/await keywords to improve IO performance.
However, I would like to know what the recommended connection pool size and maxIoThreads value per CPU core are when using asynchronous action methods. The default value for maxIoThreads is 20, and 100 for the connection pool. It's also unclear whether both options define the limit only for running threads, or also for code that is suspended in an 'awaited' state.
On ASP.NET, async and await actually reduce the number of thread pool threads in use. This is true unless you've implemented something improperly (e.g., using Task.Run).
As of .NET 4.5, the default ASP.NET settings are correct for asynchronous servers. The only recommended changes are:
Increase the IIS HTTP.SYS queue limit from 1000 to 5000.
(Only if your asynchronous requests are dependent on other HTTP/network requests) Increase ServicePointManager.DefaultConnectionLimit from its default of 12 times the number of cores.
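For the second point, a hedged sketch of where such a change typically goes (the multiplier is illustrative, not a recommendation; the HTTP.SYS queue limit itself is configured on the application pool, not in code):

using System;
using System.Net;
using System.Web;

public class MvcApplication : HttpApplication
{
    protected void Application_Start()
    {
        // Raise the outgoing connection limit only if your async actions fan out
        // to other HTTP services; the value here is illustrative, not a recommendation.
        ServicePointManager.DefaultConnectionLimit = Environment.ProcessorCount * 50;
    }
}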
For IIS 7:
Does a web application run faster when Maximum Worker Processes is set to more than one?
By increasing the Maximum Worker Processes over 1 you're creating a Web Garden. So the short answer is: likely no... unless:
To quote ex-IIS PM Chris Adams' article "I have flowers... should I get a Web Garden?":
Web gardens were designed for one single reason: offering applications that are not CPU-bound, but that execute long-running requests, the ability to scale and not use up all the threads available in the worker process.
The examples might be things like:
Applications that make long running database requests (e.g. high computational database transaction)
Applications that have threads occupied by long-running synchronous and network intensive transactions
The questions that you must ask yourself:
What is the current CPU usage of the server?
What are the application's threads executing, and what type of requests are they?
Based on the criteria above, you should understand better when to use Web Gardens. In the metabase, the Web Garden setting corresponds to the MaxProcesses metabase property if you are not using the user interface to configure this feature.
cscript adsutil.vbs set w3svc/apppools/defaultapppool/maxprocesses 4
I hope that I get some mileage out of having this blog and more importantly I hope it helps you understand this better…
You may want to look at "What is Web Garden?" from Deploying ASP.NET Websites on IIS 7.0 [codeproject.com] which says:
By default, each Application Pool runs with a single Worker Process (w3wp.exe). We can assign multiple Worker Processes to a single Application Pool. An Application Pool with multiple Worker Processes is called a "Web Garden". Many worker processes with the same Application Pool can sometimes provide better throughput and application response time. Each worker process has its own threads and its own memory space.
A Web Garden is faster than a single worker process in cases where the application contains locks that prevent parallelization within one process; for example, GDI+-based image processing.
See this and this questions for more info.
I have an ASP.NET web application running on an IIS 6 server. The application is making potentially long-running calls to an XML service on a remote machine. Some of the service calls on the remote machine are taking a long time to execute (sometimes up to 4 minutes). The long-term solution would be to make the calls asynchronous, but as a short-term solution we want to increase the timeout for the calls and the overall httpRequest timeout.
My fear with this is that the long-running calls will fill up the request queue and prevent the "normal" page requests from completing. How can the server, IIS, and application settings be tuned to temporarily resolve the issue?
Currently there are approximately 200 page requests/minute and this results in 270 service requests/minute.
The current executionTimeout is 360 (6 minutes)
The current service call time out is 2 minutes
There's an article in Microsoft's Knowledge Base that has pretty much all the information you might need:
* Contention, poor performance, and deadlocks when you make Web service requests from ASP.NET applications
Below is some research I have done regarding some of the specific items handled in the article above. The information applies to IIS 6, with comments for IIS 7 where applicable.
Increase the Processor Worker Thread pool from 25 to at least 100
The default value for the thread pool size is 100, because the default value for autoConfig is true.
The values covered by autoConfig are:
maxWorkerThreads
maxIoThreads
maxConnection
There is one value that is still 25 and must change: AspProcessorThreadMax. This can only be set in the IIS metabase (via the adsutil tool) in IIS 6. [IIS 7's equivalent is the processorThreadMax value.]
So I'm opting not to change the machine.config settings, as they are fine and other parameters would be affected by turning off autoConfig, but rather to change AspProcessorThreadMax from 25 to 100 via the IIS metabase (the only way to change this value).
e.g.
cscript %SYSTEMDRIVE%\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/AspProcessorThreadMax 100
Max connections per server
maxconnection
autoConfig sets this value to 12 * the number of CPUs; that's how many connections can be made to each address you are connecting to at one time.
Debugging
Here are some things you can do:
Monitor if requests are waiting in the queue
Monitor the following counter:
Run perfmon
Add counter: ASP.NET Applications/Requests In Application Queue
This will show us if work items are queued because of a shortage of workers.
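If you would rather pull the same counter from code, for example to log it periodically, here is a hedged sketch using System.Diagnostics (the category, counter, and "__Total__" instance names are the standard ones ASP.NET exposes, but verify them in perfmon on your machine; reading counters may require membership in the Performance Monitor Users group):

using System;
using System.Diagnostics;

class QueueMonitor
{
    static void Main()
    {
        // Reads the same counter perfmon shows: requests waiting in the application queue,
        // aggregated over all ASP.NET applications on the machine.
        using (var counter = new PerformanceCounter(
            "ASP.NET Applications", "Requests In Application Queue", "__Total__", readOnly: true))
        {
            Console.WriteLine("Requests in application queue: {0}", counter.NextValue());
        }
    }
}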
Check Identity Used by Application Pool
Open IIS Manager
Check which application pool is used by your site in IIS manager.
Choose the Application pool being used in the Application Pools list, then Right Click -> properties and see what account identity is being used.
It should be Network Service by default.