Default slsb-strict-max-pool size in WildFly 10 - ejb

On WildFly 10 server startup we see slsb-strict-max-pool reported with a max size of 64:
2017-08-24 12:51:09,164 INFO [org.jboss.as.ejb3] (MSC service thread 1-5) WFLYEJB0481: Strict pool slsb-strict-max-pool is using a max instance size of 64 (per class), which is derived from thread worker pool sizing.
2017-08-24 12:51:09,166 INFO [org.jboss.as.ejb3] (MSC service thread 1-5) WFLYEJB0482: Strict pool mdb-strict-max-pool is using a max instance size of 16 (per class), which is derived from the number of CPUs on this host.
But when I execute the following jboss-cli command, the **max pool size is 20**:
/host=master/server=server-one/subsystem=ejb3/strict-max-bean-instance-pool=slsb-strict-max-pool/:read-resource(recursive=false)
What exactly is the default EJB max pool size in WildFly 10?

The pool size is dynamic by default because it is derived from the worker thread pool size (the "derive-size" attribute). So in your case you have 4 CPU cores and get a derived pool size of 64.
The configured max-pool-size value (20) is ignored in this case, so the value WildFly reports at startup is the correct runtime value.
https://wildscribe.github.io/WildFly/10.1/subsystem/ejb3/strict-max-bean-instance-pool/index.html
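If you want a fixed size instead, the model reference above lists a "derive-size" attribute you can clear. A hedged jboss-cli sketch (standalone syntax, and the value 20 is just an example; in domain mode address the profile, e.g. /profile=full/..., rather than the running server):

/subsystem=ejb3/strict-max-bean-instance-pool=slsb-strict-max-pool:write-attribute(name=derive-size, value=none)
/subsystem=ejb3/strict-max-bean-instance-pool=slsb-strict-max-pool:write-attribute(name=max-pool-size, value=20)
reload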

Related

Increase RAM usage for IIS server

I am running a large-scale ERP system on the following server configuration. The application is developed using AngularJS and ASP.NET 4.5.
Hardware: Dell PowerEdge R730 (quad-core 2.7 GHz, 32 GB RAM, 5 x 500 GB hard disks in RAID 5). Software: the host OS is VMware ESXi 6.0, running two VMs. One VM is Windows Server 2012 R2 with 16 GB of memory allocated; it contains the IIS 8 server with my application code. The other VM is also Windows Server 2012 R2, with SQL Server 2012 and 16 GB of memory allocated; it contains only my application database.
You see, I separated the application server and database server for load balancing purposes.
My application contains a registration module where the load is expected to be very high (around 10,000 visitors over 10 minutes).
To support this volume of requests, I have done the following on my IIS server:
- increased the request queue length of each application pool to 5000
- enabled output caching for .aspx files
- enabled static and dynamic compression in IIS
- set the virtual memory limit and private memory limit of each application pool to 0
- increased the maximum worker processes of each application pool to 6
I then used gatling to run load testing on my application. I injected 500 users at once into my registration module.
However, I see that only 40–45% of my RAM is being used, and each worker process uses at most about 130 MB.
Gatling reports that around 20% of my requests receive a 403 error, and more than 60% of all HTTP requests have a response time greater than 20 seconds.
A single user makes 380 HTTP requests over a span of around 3 minutes. The total data transfer of a single user is 1.5 MB. I have simulated 500 users like this.
Is there anything missing in my server tuning? I have already tuned my application code to minimize memory leaks, increase timeouts, and so on.
There is a known issue with the newest generation of PowerEdge servers that use the Broadcom network chipset. Apparently the "VM" feature of the NIC is broken, which results in horrible network latency on VMs.
Head to Dell and get the most recent firmware and Windows drivers for the Broadcom.
Head to VMware Downloads and get the latest Broadcom driver.
As for the worker process settings: for maximum performance, you should consider running the same number of worker processes as there are NUMA nodes, so that there is a 1:1 affinity between worker processes and NUMA nodes. This can be done by setting the "Maximum Worker Processes" app pool setting to 0 (see the appcmd sketch below); IIS then determines how many NUMA nodes are available on the hardware and starts the same number of worker processes.
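For illustration, the same setting applied from the command line with appcmd ("MyAppPool" is a placeholder pool name):

%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /processModel.maxProcesses:0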
One caveat to the answer above: if your server isn't NUMA-aware (i.e. it uses symmetric multiprocessing), you won't see those IIS options under CPU, though the poster above seems to know a good deal more about this machine than I do. On the IIS side, also make sure your app pool doesn't use the default recycle conditions; pick a fixed time such as midnight for the recycle instead. If you have root-level settings applied, the default app pool recycling at 29 hours may also trigger garbage collection against your child pool, causing delays even with concurrent GC. It sounds like you may benefit a bit from gcServer=true, though that is tough to assess from here.
Has your SQL Server been optimized for this type of workload? If your data isn't paramount, you could squeeze out faster execution times with delayed durability, and then look for queries that return too much data and rack up async I/O wait types. In general there isn't enough information here to really assess SQL optimizations, but if the database isn't configured right (file size/growth options) you could be hitting a lot of timeouts due to file growth, VLF fragmentation, etc.
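If you do experiment with delayed durability, note that it requires SQL Server 2014 or later (the setup above runs SQL Server 2012, so it would mean an upgrade). The switch itself is a single statement; "MyAppDb" is a placeholder name:

ALTER DATABASE MyAppDb SET DELAYED_DURABILITY = FORCED;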

Sessions are killed after short time in 64bit application pool

We have a .net web application hosted on IIS 7.5.
Earlier this application ran in a 32-bit application pool, but some time ago we switched to a 64-bit application pool.
Recently users have started to complain that after 1–2 minutes of idling their session is killed, which we confirmed today.
In the web.config file the session timeout is set to 60 minutes.
We have also noticed in Task Manager that the w3wp process for this application consumes about 2–2.4 GB of memory, so maybe the problem is that the application pool is trying to recycle some memory?
Recycling is set to fixed time periods: 21:00 and 04:00.
What could be the reason for these problems with sessions?
EDIT:
I have inspected some counters and performed a basic memory dump analysis, but I don't see any problems.
In the dump's !eeheap output I see only generation 2 objects of about 10–30 MB per heap, and I have 24 heaps:
Heap 0 (0000000003083a90)
generation 0 starts at 0x00000000fff568b8
generation 1 starts at 0x00000000ffa6acf0
generation 2 starts at 0x00000000ff471000
ephemeral segment allocation context: none
         segment             begin         allocated              size
00000000ff470000  00000000ff471000  00000000ffff8de0  0xb87de0(12090848)
Large object heap starts at 0x00000006ff471000
         segment             begin         allocated              size
00000006ff470000  00000006ff471000  00000006ff7495c8  0x2d85c8(2983368)
Heap Size: Size: 0xe603a8 (15074216) bytes.

Heap 1 (00000000030889c0)
generation 0 starts at 0x000000013fc36ed8
generation 1 starts at 0x000000013f949348
generation 2 starts at 0x000000013f471000
ephemeral segment allocation context: none
         segment             begin         allocated              size
000000013f470000  000000013f471000  000000014035e7b8  0xeed7b8(15652792)
Large object heap starts at 0x0000000703471000
         segment             begin         allocated              size
0000000703470000  0000000703471000  00000007035c5d58  0x154d58(1396056)
Heap Size: Size: 0x1042510 (17048848) bytes.
EDIT: 2015-08-19 09:00
These are the counters for 2015-08-19 09:00.
What worries me is why the memory in Task Manager shows 2.5 GB when # Bytes in all Heaps shows only about 100 MB, and why Private Bytes (216 MB) is bigger than # Bytes in all Heaps.
The load at the moment is about 40 users on this server.
EDIT 2015-08-19 14:09
After some digging I now suspect there could be a problem with assemblies.
How can I check this with WinDbg when I'm on .NET 4.5, where there is no !dda command?
Try copying the running app to a different pool, but in this new one disable all assemblies/references that you don't need, to see which one is doing this.
As you said, I think some assembly is crashing your application pool, maybe because it doesn't support 64 bits.
Try disabling all references that you don't use, updating everything, etc.
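A hedged sketch of how you could inspect loaded assemblies in WinDbg with plain SOS on .NET 4.5, without !dda (these are standard SOS commands; exact output varies):

$$ load SOS for the attached CLR
.loadby sos clr
$$ list every app domain and the assemblies loaded into each
!dumpdomain
$$ show loader-heap memory consumed per app domain and assembly
!eeheap -loader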

Do worker processes ever release memory if recycling is turned off?

On my web site I turned off recycling in the app pool recycling settings. I was wondering whether the worker process still releases its memory even though recycling is turned off, since after I turned recycling off the memory usage of the web site has been increasing without limit. Does the worker process create a new thread for each request? If so, does each thread get killed after it serves the request?
Yes, each request is served by a thread that is newly created or taken from the thread pool. The number of worker threads available per processor is governed by maxWorkerThreads in the processModel section of machine.config. The range for this value is 5 to 100, with the default being 20.
So the answer to your question is that each request gets its own thread, and if none are available, the request is queued and processed once a thread becomes available. The thread is not necessarily killed when the request finishes; it may return to the thread pool.
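For reference, a minimal machine.config sketch with those documented defaults spelled out (illustration only; with autoConfig="true", which is the usual setting, ASP.NET tunes these values itself):

<system.web>
    <processModel autoConfig="false" maxWorkerThreads="20" maxIoThreads="20" />
</system.web>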

Why are IIS threads so precious as compared to regular CLR threads?

I'm reading about AsyncControllers in ASP.NET MVC.
It seems that the sole reason they exist is so that IIS threads can be saved while long-running work is delegated to regular CLR threads, which seem to be cheaper.
I have a couple of questions here:
Why are these IIS threads so expensive that they justify this whole architecture built to support asynchronous controllers?
How do I know/configure how many IIS threads are running in my IIS application pool?
ASP.NET processes requests by using threads from the .NET thread pool. The thread pool maintains a pool of threads that have already incurred the thread initialization costs. Therefore, these threads are easy to reuse. The .NET thread pool is also self-tuning. It monitors CPU and other resource utilization, and it adds new threads or trims the thread pool size as needed. You should generally avoid creating threads manually to perform work. Instead, use threads from the thread pool. At the same time, it is important to ensure that your application does not perform lengthy blocking operations that could quickly lead to thread pool starvation and rejected HTTP requests.
Disk I/O and web service calls are all blocking, and they are best optimized by using async calls. When you make an async call, ASP.NET frees your thread, and the request is assigned to another thread when the callback function is invoked.
To configure the number of threads you can set:
<system.web>
<applicationPool maxConcurrentRequestsPerCPU="50" maxConcurrentThreadsPerCPU="0" requestQueueLimit="5000"/>
</system.web>
Refer: ASP.NET Thread Usage on IIS 7.5, IIS 7.0, and IIS 6.0
These are the settings that Microsoft best practices recommend:
Set maxconnection to 12 * # of CPUs. This setting controls the maximum number of outgoing HTTP connections that you can initiate from a client; in this case, ASP.NET is the client.
Set maxIoThreads to 100. This setting controls the maximum number of I/O threads in the .NET thread pool; the number is automatically multiplied by the number of available CPUs.
Set maxWorkerThreads to 100. This setting controls the maximum number of worker threads in the thread pool; the number is automatically multiplied by the number of available CPUs.
Set minFreeThreads to 88 * # of CPUs. The worker process uses this setting to queue incoming requests when the number of available threads in the thread pool falls below this value. It effectively limits the number of concurrently executing requests to maxWorkerThreads - minFreeThreads, i.e. 12 per CPU when maxWorkerThreads is 100.
Set minLocalRequestFreeThreads to 76 * # of CPUs. The worker process uses this setting to queue requests from localhost (where a web application sends requests to a local web service) when the available threads fall below this number. It is similar to minFreeThreads but applies only to requests from the local computer.
Note: The recommendations that are provided in this section are not rules. They are a starting point.
You would have to benchmark your application to find what works best for your application.
IIS threads are taken from the default thread pool, which is limited by default based on number of processor cores. If this thread pool queue becomes backed up, IIS will stop responding to requests. By using async code, the thread pool thread can be returned to the pool while the async operation takes place, allowing IIS to service more requests overall.
On the other hand, spawning a new thread on your own does not use a thread pool thread. Spawning an unchecked number of independent threads can also be a problem, so it's not a cure-all for the IIS thread pool issue; async I/O is generally preferred either way.
As for changing the number of threads in the thread pool, check here. You should probably avoid doing so, though.
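If you do need to look at or raise the CLR thread pool limits in code, a minimal C# sketch (again, best avoided in most applications):

using System;
using System.Threading;

class ThreadPoolLimits
{
    static void Main()
    {
        // Read the current worker and I/O completion thread ceilings.
        ThreadPool.GetMaxThreads(out int maxWorker, out int maxIo);
        Console.WriteLine($"max worker: {maxWorker}, max I/O: {maxIo}");

        // Doubling the ceilings; rarely a good idea, as noted above.
        ThreadPool.SetMaxThreads(maxWorker * 2, maxIo * 2);
    }
}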
Our web service occasionally needs to serve 100 requests/second, while the rest of the time it serves about 1 request/second. Analyzing IIS logs, we found that when a burst occurred it took around 28 s to serve such calls.
Applying the Microsoft best practices cited by @nunespascal drastically reduced that to 1 s in our case.
Below is the PowerShell script we currently use when we deploy our production servers. It updates machine.config, taking the number of logical processors into account.
<# Get and back up the current machine.config #>
$path = "C:\Windows\Microsoft.Net\Framework\v4.0.30319\Config\machine.config";
$xml = [xml] (Get-Content $path);
$xml.Save($path + "-" + (Get-Date -Format "yyyyMMdd-HHmm") + ".bak");

<# Get the number of physical CPUs #>
$physicalCPUs = ([array](Get-WmiObject Win32_Processor)).Count;

<# Get the number of logical processors #>
$logicalProcessors = ([array](Get-WmiObject Win32_Processor))[0].NumberOfLogicalProcessors * $physicalCPUs;

<# Set the number of connections in system.net/connectionManagement #>
$systemNet = $xml.configuration["system.net"];
if (-not $systemNet) {
    $systemNet = $xml.configuration.AppendChild($xml.CreateElement("system.net"));
}
$connectionManagement = $systemNet.connectionManagement;
if (-not $connectionManagement) {
    $connectionManagement = $systemNet.AppendChild($xml.CreateElement("connectionManagement"));
}
$add = $connectionManagement.add;
if (-not $add) {
    $add = $connectionManagement.AppendChild($xml.CreateElement("add"));
}
$add.SetAttribute("address", "*");
$add.SetAttribute("maxconnection", [string]($logicalProcessors * 12));

<# Set the thread settings in system.web/processModel #>
$systemWeb = $xml.configuration["system.web"];
if (-not $systemWeb) {
    $systemWeb = $xml.configuration.AppendChild($xml.CreateElement("system.web"));
}
$processModel = $systemWeb.processModel;
if (-not $processModel) {
    $processModel = $systemWeb.AppendChild($xml.CreateElement("processModel"));
}
$processModel.SetAttribute("autoConfig", "true");
$processModel.SetAttribute("maxWorkerThreads", "100");
$processModel.SetAttribute("maxIoThreads", "100");
$processModel.SetAttribute("minWorkerThreads", "50");
$processModel.SetAttribute("minIoThreads", "50");

<# Set the remaining thread settings in system.web/httpRuntime #>
$httpRuntime = $systemWeb.httpRuntime;
if (-not $httpRuntime) {
    $httpRuntime = $systemWeb.AppendChild($xml.CreateElement("httpRuntime"));
}
$httpRuntime.SetAttribute("minFreeThreads", [string]($logicalProcessors * 88));
$httpRuntime.SetAttribute("minLocalRequestFreeThreads", [string]($logicalProcessors * 76));

<# Save the modified machine.config #>
$xml.Save($path);
This solution came to us from a blog article written by Stuart Brierley back in 2009. We successfully tested it on Windows Server versions from 2008 R2 through 2016.
Actually, what is written in the article you linked is not true.
The async pattern isn't there to free "super costly IIS worker threads" and run the work in the background on some other "cheap threads".
The async pattern is there simply to free threads.
You benefit from it in scenarios where you do not need your thread (and, at best, not even your local machine).
I can name two example scenarios (both I/O):
First:
BeginRequest.
Begin an async file read.
During the file read you don't need your thread, so other requests can use it.
The file read ends; you get a thread from the pool again.
The request finishes.
And an almost identical second:
BeginRequest.
Begin an async call to a WCF service.
The call leaves your machine entirely, and you don't need your thread, so other requests can use it.
You get the response from the remote service; a thread from the pool continues the request.
The request finishes.
It's usually safe to read MSDN. You can get more information about the async pattern here.
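To make the second scenario concrete, a minimal C# sketch of a task-based async action in ASP.NET MVC 4+ (controller name and URL are hypothetical); while the await is pending, the request thread goes back to the pool:

using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

public class ReportsController : Controller
{
    private static readonly HttpClient Client = new HttpClient();

    public async Task<ActionResult> Summary()
    {
        // The request thread is released here until the remote call completes.
        string body = await Client.GetStringAsync("http://example.org/api/summary");

        // A thread from the pool picks the request back up to finish it.
        return Content(body);
    }
}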

What are the maximum and minimum connection pool sizes ADO.NET supports in the connection string?

What are the maximum and minimum connection pool sizes ADO.NET supports in the connection string? Min Pool Size=[?] Max Pool Size=[?]
The defaults are Max Pool Size = 100 and Min Pool Size = 0.
See: Connection Pooling for the .NET Framework Data Provider for SQL Server
There is no documented limit on Max Pool Size. There is, however, an exact documented limit on the maximum number of concurrent connections to a single SQL Server instance (32,767; see http://msdn.microsoft.com/en-us/library/ms143432(v=SQL.90).aspx).
A single ADO.NET pool can only go to a single instance, so the maximum effective limit is therefore 32,767.
The minimum pool size is zero.
The default value of Max Pool Size is 100, and of Min Pool Size it is 0.
The default connection pool size is 100. You can increase the pool size using the 'Max Pool Size' property in the connection string, for example: Max Pool Size=1000;
If you are using Azure SQL, the number of concurrent connections depends on the SQL Server tier you are using.
Please refer to this link for more information: https://learn.microsoft.com/en-us/azure/sql-database/sql-database-service-tiers
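For illustration, a connection string that overrides both defaults (the server, database, and the values 5/200 are placeholders):

Server=myServer;Database=myDb;Integrated Security=true;Min Pool Size=5;Max Pool Size=200;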
