We have a vanilla ASP.NET application (ASP.NET Web Forms, Entity Framework, SQL Server 2005) without any explicit multithreading in code. It has been deployed to the staging environment (OS - Windows Server 2008 R2 64-bit, CPU - Intel Xeon E5507 @ 2.27 GHz, RAM - 7.5 GB). This environment is composed of a web, a database and a reporting server, each a separate instance in the cloud (Amazon EC2). When testing for concurrency, the observations are as follows:
1 user - CPU usage ~25%, Response time 2-4 seconds
2 users - CPU usage 40-50%, Response time 3-6 seconds
4 users - CPU usage 60-80%, Response time 4-8 seconds
8 users - CPU usage 80-100%, Response time 4-10 seconds
My questions are:
Is CPU usage proportional to the number of concurrent users? And can response time vary as widely as seen in the observations above?
Based on the above observations, the CPU will be maxed out at roughly 10 concurrent users. Shouldn't the CPU handle many more concurrent users without a drastic increase in response time? Ideally, how many concurrent users should a single CPU handle for a basic ASP.NET application?
If yes, what could be causing the high CPU usage and long response times here? How should we go about debugging to find the bottlenecks in the code or the IIS settings?
PS: IIS/ASP.NET thread settings (i.e. in machine.config) that have been changed (a sketch of where these live follows the list):
maxWorkerThreads = 100
minWorkerThreads = 50
maxIOThreads = 100
minIOThreads = 50
minFreeThreads = 176
maxConnections = 100
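For reference, a rough sketch of the machine.config sections these values live in (the processModel thread settings are per CPU and only take effect with autoConfig turned off; the "maxConnections" entry above corresponds to the maxconnection attribute under system.net):

    <system.web>
      <processModel autoConfig="false"
                    maxWorkerThreads="100"
                    minWorkerThreads="50"
                    maxIoThreads="100"
                    minIoThreads="50" />
      <httpRuntime minFreeThreads="176" />
    </system.web>
    <system.net>
      <connectionManagement>
        <add address="*" maxconnection="100" />
      </connectionManagement>
    </system.net>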
The high CPU usage could be caused by a wide variety of things. The easiest way to find out what's going on is to use a profiling tool:
ANTS Memory Profiler:
http://www.red-gate.com/products/dotnet-development/ants-memory-profiler/
ANTS Performance Profiler:
http://www.red-gate.com/products/dotnet-development/ants-performance-profiler/
They're very affordable, but you should be able to work unrestricted with the trial versions. These tools do a fantastic job of identifying bottlenecks and memory leaks.
Related
I am running a large-scale ERP system on the following server configuration. The application is developed using AngularJS and ASP.NET 4.5.
Hardware: Dell PowerEdge R730 (quad-core 2.7 GHz, 32 GB RAM, 5 x 500 GB hard disks, RAID 5 configured).
Software: the host OS is VMware ESXi 6.0, running two VMs. One VM is Windows Server 2012 R2 with 16 GB of memory allocated; it contains the IIS 8 server with my application code. The other VM is also Windows Server 2012 R2 with 16 GB of memory allocated; it contains only my application database on SQL Server 2012.
You see, I separated the application server and database server for load balancing purposes.
My application contains a registration module where the load is expected to be very high (around 10,000 visitors over 10 minutes).
To support this volume of requests, I have done the following on my IIS server (a config sketch follows the list):
- increased the request queue length of the application pool to 5000
- enabled output caching for .aspx files
- enabled static and dynamic compression in IIS
- set the virtual memory limit and private memory limit of each application pool to 0
- increased the maximum worker processes of each application pool to 6
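In case it helps, a rough sketch of where those settings end up (the pool name is hypothetical; values mirror the list above):

    <!-- applicationHost.config -->
    <system.applicationHost>
      <applicationPools>
        <add name="RegistrationPool" queueLength="5000">
          <processModel maxProcesses="6" />
          <recycling>
            <!-- 0 = no virtual/private memory limit -->
            <periodicRestart memory="0" privateMemory="0" />
          </recycling>
        </add>
      </applicationPools>
    </system.applicationHost>

    <!-- web.config: static/dynamic compression and output caching for .aspx -->
    <system.webServer>
      <urlCompression doStaticCompression="true" doDynamicCompression="true" />
      <caching>
        <profiles>
          <add extension=".aspx" policy="CacheForTimePeriod" duration="00:00:30" />
        </profiles>
      </caching>
    </system.webServer>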
I then used gatling to run load testing on my application. I injected 500 users at once into my registration module.
However, I see that only 40-45% of my RAM is being used. Each worker process is using only around 130 MB at most.
And Gatling is reporting that around 20% of my requests get a 403 error, and more than 60% of all HTTP requests have a response time greater than 20 seconds.
A single user makes 380 HTTP requests over a span of around 3 minutes. The total data transfer of a single user is 1.5 MB. I have simulated 500 users like this.
Is there anything missing in my server tuning? I have already tuned my application code to minimize memory leaks, increase timeouts, and so on.
There is a known issue with the newest generation of PowerEdge servers that use the Broadcom network chipset. Apparently, the "VM" feature of the network adapter is broken, which results in horrible network latency on VMs.
Head to Dell and get the most recent firmware and Windows drivers for the Broadcom.
Head to VMWare Downloads and get the latest Broadcom Driver
As for the worker process settings, for maximum performance, you should consider running the same number of worker processes as there are NUMA nodes, so that there is 1:1 affinity between the worker processes and NUMA nodes. This can be done by setting "Maximum Worker Processes" AppPool setting to 0. In this setting, IIS determines how many NUMA nodes are available on the hardware and starts the same number of worker processes.
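A rough sketch of that setting in applicationHost.config (the pool name is just an example; 0 tells IIS on NUMA hardware to start one worker process per NUMA node):

    <applicationPools>
      <add name="RegistrationPool">
        <processModel maxProcesses="0" />
      </add>
    </applicationPools>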
I guess the one caveat to the answer you received is that if your server isn't NUMA-aware (i.e. uses symmetric multiprocessing), you won't see those IIS options under CPU, but the above poster seems to know a good bit more than I do about the machine. Sorry, I don't have enough reputation to add this as a comment. As far as IIS goes, you may also want to make sure your app pool doesn't use the default recycle conditions; pick a time like midnight for the recycle instead. If you have root-level settings applied, the default app pool recycling at 29 hours may also trigger garbage collection against your child pool, causing delays even with concurrent GC. It sounds like you may benefit a bit from gcServer=true, though that's pretty tough to assess from here.
Has your SQL Server been optimized for that type of workload? If your data isn't paramount, you could squeeze out faster execution times with delayed durability, then look at queries that return too much data and produce async I/O wait types. In general there's not enough here to really assess SQL optimizations, but if the database isn't configured right (file size/growth options) you could be hitting a lot of timeouts due to autogrowth, VLF fragmentation, etc.
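As a sketch, delayed durability is a single database-level switch (the database name is hypothetical; it trades away durability of the most recently committed transactions for throughput, and it requires SQL Server 2014 or later, so it would not apply to the SQL Server 2012 instance described above without an upgrade):

    ALTER DATABASE [ErpDb] SET DELAYED_DURABILITY = FORCED;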
My current application comprises three tiers: web tier - app tier - database.
While testing with 100 users, we found that the app tier's CPU is touching almost 90%, whereas the web server and database server are doing fine.
I am not able to figure out what code is causing the high CPU usage. It is mostly CRUD operations: we take input in the form of DTOs, map them to entities (using Entity Framework), and add/update/delete them in the database. For Get operations, we fetch data into EF entities, copy them into DTOs, and then send the DTOs to the client (a rough sketch of that read path is below).
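Roughly, the read path looks like the sketch below (the context, entity, and DTO names are hypothetical; one thing worth checking in a setup like this is whether reads use AsNoTracking, since EF change tracking is a common source of avoidable CPU cost in read-heavy CRUD):

    using System.Collections.Generic;
    using System.Data.Entity;
    using System.Linq;

    // Hypothetical DTO returned to the client
    public class OrderDto
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }

    public class OrderService
    {
        public IList<OrderDto> GetOrders(int customerId)
        {
            using (var db = new AppDbContext())   // assumed DbContext with an Orders DbSet
            {
                // AsNoTracking skips change tracking for read-only queries
                var orders = db.Orders
                               .AsNoTracking()
                               .Where(o => o.CustomerId == customerId)
                               .ToList();

                // Copy entities into DTOs before returning them to the client
                return orders.Select(o => new OrderDto { Id = o.Id, Total = o.Total })
                             .ToList();
            }
        }
    }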
I have tried to use DebugDiag but could not figure out any useful information.
Following are the servers' configurations:
Web Server (quantity = 1)
- Processor: Intel Xeon CPU X5675 @ 3.07 GHz
- Number of cores (virtual): 8
- RAM: 8 GB
- Operating system: Windows Server 2012 Standard (64-bit)
- Software installed: .NET Framework 4.5
App Server (quantity = 1)
- Processor: Intel Xeon CPU X5675 @ 3.07 GHz
- Number of cores (virtual): 8
- RAM: 8 GB
- Operating system: Windows Server 2012 Standard (64-bit)
- Software installed: .NET Framework 4.5
DB Server (quantity = 1)
- Processor: Intel Xeon CPU E7-4830 v2 @ 2.20 GHz
- Number of cores (virtual): 8
- RAM: 8 GB
- Operating system: Windows Server 2012 Standard (64-bit)
- Software installed: Microsoft SQL Server 2014
There is no better solution than installing an APM tool. With one, you'll find the root cause very quickly. AppDynamics or New Relic are easy; Dynatrace is a bit more complex but maybe more powerful.
Otherwise you'll keep shooting in the dark.
The Windows Sysinternals tool Process Explorer (procexp) is a good way to find the high-CPU process and its thread call stacks (method calls).
OR
- Collect multiple full user dumps of the high-CPU process using Task Manager or Process Explorer.
- Also collect a Perfmon log with the Thread counters: Perfmon -> Add Counters -> Thread, then select % Processor Time, ID Thread, and ID Process.
From the Perfmon log you can find the ID of the high-CPU thread. You can then correlate that thread ID with the DebugDiag analysis report and find the thread's call stack.
Hope this helps.
Thanks,
Parthiban
I am running IIS 6 on Windows Server 2003 32-bit. I have read that IIS 6 has a maximum virtual memory limit of 2 GB (3 GB with the /3GB switch flipped).
What I am unclear on is whether this means all ASP.NET sessions have 2 GB between them, or 2 GB each.
So if I have a session variable storing 200 KB and 10,000 active sessions, am I going to hit this 2 GB limit?
In general the advice is to leave these options unticked for ASP.NET applications, since they affect how often the app pool recycles; there is more information here, with a summary below:
Physical and virtual memory: This section is for recycling application pools that consume too much memory. Focusing on physical memory, I typically like to limit app pools to around 800 MB to 1200 MB max for a 32-bit app with very few app pools, depending on the number of pools and the amount of memory. On a server with 2 GB of RAM I'd set it at around 800 MB max; on a server with 4 GB of RAM, around 1 GB, and more if there is more memory, with a max of around 1200 MB. On a 64-bit web front end with 8-16 GB of memory, I've heard of settings of 2 GB, or even letting it ride rather than limiting it.
You really need to profile it, since worker processes can grow substantially as they process requests and cache data. The greater the amount of memory and the greater the load, the larger the worker process will grow. When people ask about configuring the app pool, this is usually the number they're asking about. What you are doing here is explicitly limiting how much memory the app pool can consume.
Notice this setting is on the Recycling tab; there's a reason for that. When an app pool reaches the max, it isn't like the max processor setting: it recycles the worker process, which is like a tiny reboot or a mini iisreset, and sometimes we actually want this to happen so the memory gets released. In an ideal world you really don't want to recycle more than a couple of times per 24-hour period. I've heard of some people recycling right before the morning peak so they have the most memory available, and then again right at the end of the day before backups or crawling begin.
Basically the recommendation is not to set a limit (leave the options unchecked), because once the limit is hit IIS will recycle the application pool, temporarily disconnecting all active users from the site. Your users will likely receive an HTTP 500 while the application pool recycles, and once it's back there will be a delay while the application pool warms up. If you do want predictable recycles, a sketch of a scheduled recycle follows below.
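A minimal sketch of scheduling a recycle at a fixed off-peak time instead of a memory limit (applicationHost.config; the pool name and time are just examples):

    <add name="MyAppPool">
      <recycling>
        <!-- time="00:00:00" disables the default 29-hour periodic restart -->
        <periodicRestart time="00:00:00">
          <schedule>
            <add value="03:00:00" />  <!-- recycle once a day at 03:00 -->
          </schedule>
        </periodicRestart>
      </recycling>
    </add>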
Sessions
For an application of any size, do not use InProc (in-memory) sessions; use State Server or SQL Server to store your sessions (a web.config sketch is below): http://msdn.microsoft.com/en-us/library/ms178586.aspx
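A minimal web.config sketch for moving sessions out of process (the host names and connection strings are placeholders; pick one mode or the other):

    <!-- State Server -->
    <sessionState mode="StateServer"
                  stateConnectionString="tcpip=state-server-host:42424"
                  timeout="20" />

    <!-- or SQL Server -->
    <sessionState mode="SQLServer"
                  sqlConnectionString="Data Source=sql-host;Integrated Security=True"
                  timeout="20" />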
Conclusion
It really depends on the profile of your application, but if you're expecting 10,000 active sessions, don't use InProc, don't use IIS 6, and don't use a 32-bit server. (For reference, 200 KB x 10,000 sessions is roughly 2 GB of session data alone, right at the limit you mention, before the rest of the application's memory is counted.)
We have an ASP.NET 4.0 (integrated mode) web application running on IIS 7.5 x64 (Windows Server 2008) with 12 GB of RAM and 16 cores that has a problem with spikes in queued requests. Normally the queue is zero, but occasionally (probably around 15 times over a 10-minute period) the queue spikes up to about 20-100 requests.
Sometimes these queue spikes also correlate with a higher number of requests/sec, but that isn't always the case.
Requests Current always seems to be between 15-30.
The number of current logical and physical threads is as low as 60-100.
CPU load averages 6%.
Requests/sec is around 150-200.
Connections Active seems to be slowly increasing; it's about 7000.
Connections Established seems fairly consistent, around 130-140.
Since we are running .NET 4.0 in integrated mode, I suppose we should be able to handle up to 5000 simultaneous requests, or at least 1000 (the http.sys kernel queue):
http://blogs.msdn.com/b/tmarq/archive/2007/07/21/asp-net-thread-usage-on-iis-7-0-and-6-0.aspx
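For what it's worth, a sketch of where those limits are configured in .NET 4.0 integrated mode (the values shown are, as I understand it, the 4.0 defaults; the path assumes a 64-bit framework install):

    <!-- %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet.config -->
    <system.web>
      <applicationPool maxConcurrentRequestsPerCPU="5000"
                       maxConcurrentThreadsPerCPU="0"
                       requestQueueLimit="5000" />
    </system.web>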
What could be causing .net to queue the requests even though there are threads left and requests/sec is low?
Just a guess: Garbage collection suspends all threads, so perhaps the period immediately following the garbage collection would look like a request spike, since IIS would be piling up requests during the GC. Can you correlate the spikes with garbage collections? If your application is I/O-bound, it may not be possible to drive the CPU load very high, since the threads will spend most of their time blocked.
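One cheap way to test that guess is to watch GC time next to the queue counter (a rough sketch; it assumes the standard .NET CLR Memory and ASP.NET performance counters and a worker-process instance name of w3wp, so adjust as needed):

    using System;
    using System.Diagnostics;
    using System.Threading;

    class GcSpikeWatch
    {
        static void Main()
        {
            // Instance name of the IIS worker process may be "w3wp", "w3wp#1", etc.
            using (var gcTime = new PerformanceCounter(".NET CLR Memory", "% Time in GC", "w3wp"))
            using (var queued = new PerformanceCounter("ASP.NET", "Requests Queued"))
            {
                while (true)
                {
                    Console.WriteLine("{0:T}  % Time in GC = {1:F1}  Requests Queued = {2:F0}",
                                      DateTime.Now, gcTime.NextValue(), queued.NextValue());
                    Thread.Sleep(1000);
                }
            }
        }
    }

If the queue spikes line up with jumps in % Time in GC, the GC theory holds.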
The apparent leak of active connections is disturbing though, if it really is ever-increasing.
The problem is with memory management: I keep receiving an "Out of memory" exception.
Here are the scenarios where we face the problem:
Please note:
1. The site/application is developed in ASP.Net and uploaded on a server with the following specs:
- Windows Server 2008 (R2) Standard
- Intel Xeon L5520 @ 2.27 GHz
- RAM = 8GB
- System Type = 64bit
The application is an event-management web application whose requirements include saving huge amounts of data in Session state, etc. (mentioning this in case it is relevant).
The applications/site works fine until we:
Edit a file directly on the server
Update a file from repository
Copy/Paste a file (we don’t usually edit code using this technique)
Please note that all of the above hold true ONLY when traffic to the site is high; that is,
The issue/error “Out of Memory” is not produced when the traffic/visits is low
Details of:
System Properties > Advanced > Performance Settings > Advanced tab
Total paging file size for all drives: 16362 MB
In web.config
Is there any way we can debug this problem down to its root cause and find a solution? Can you please provide links/help for investigating this further?
Best regards,
Farrukh
Out of Memory Exceptions are common with applications that see periodic transaction surges while keeping larger volumes of data in memory. This problem does, however, depend on your application and architecture. Below are a few pointers:
Hardware - you have Xeon 5500 (Intel Nehalem chips). These are very good at handling memory. You should be good here.
OS - Windows Server 2008 R2 - As an OS this system will handle more than enough memory for you (you are good here, see link for capabilities: Memory Limits for Windows)
Physical Memory - Did you say you have 8 GB on the server? Note your app is allowing 16 GB. There is one issue: if your app requests more memory than is physically available, you will see your error. But this is not your only concern...
CLR / GC limitations - Your application has a "paging file size" of 16+ GB. This is probably your issue.
GC is the heart of the problem for you. As to why, it is the same reason Java and the JVM have issues whenever an application exceeds 2-4 GB of heap. That requires a look at the actual GC process.
You have "old generation" and "young generation" garbage collection processes. As your app runs, the CLR tries to keep your memory space organized. These processes force all threads to pause (phase changes) when the GC mark-and-sweep passes occur. The problem is that, depending on how your code is written and how much memory you keep around for long periods, you can run into memory issues.
Any time you press a runtime environment to exceed the 4 GB threshold you will see exponential increases in collection times. When you hit the "stop the world" pause (the old gen GC where everything gets cleaned up) the CLR has to go through the entire heap and de-allocate memory. Based on your app, 16 GB may give you issues even with more physical memory (Windows Server 2008 R2 - Enterprise or DataCenter can support 2 TB). Even if you feed it more physical memory you may see LONG collection times when your full GC hits.
Ideally I would do the following:
Get more physical memory (to avoid out-of-memory errors, you never want your application's allocation to come within about 600 MB of total physical memory, but the buffer you need depends on your load and the application's ability to handle it ... you may want a larger safety net to be safe).
Once you have the physical memory you need, capture GC logs/counters while stressing the app. This will give you an idea of where performance degrades exponentially and what heap size (memory) your app can support. You may want to find a way to get your 16 GB page file down to a smaller size. I do know that with .NET 4.0 Microsoft has made some solid improvements to the GC process, including allowing a background thread to handle GC. This should give you the ability to support larger heaps (in theory) ... but nothing beats real tests on the app. Check out this link for more info:
Garbage Collection Performance (.NET 4.0) - Also, as I am limited on links, navigate to the Fundamentals page for some great explanations of the new GC features in .NET 4.0:
(http://msdn.microsoft.com/en-us/library/ee787088.aspx#concurrent_garbage_collection)
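If you do end up leaning on the .NET 4.0 GC improvements, here is a sketch of the relevant config (for IIS-hosted ASP.NET this goes in Aspnet.config rather than web.config; treat it as something to measure under load, not a guaranteed fix):

    <configuration>
      <runtime>
        <gcServer enabled="true" />       <!-- server GC: one GC heap per core -->
        <gcConcurrent enabled="true" />   <!-- background (concurrent) GC in .NET 4.0 -->
      </runtime>
    </configuration>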
Hope this helps!
PS - Anyone out there on lesser hardware will need to be aware of the ASP.NET use of the GC thread. If you are running something in development like a Core Duo you have to consider that 50% of your compute power will go to GC optimization. This means that Hardware (number of cores) is important to consider. If you have more than you need this process should theoretically help performance. If you are constrained on cores either get better hardware or use an older version of ASP.Net or consider turning the feature off (if possible). Second, if latency is a concern, using "hyper-threading" does have an impact on performance as well. You always get better performance on "physical" cores ... but that will not be a concern for 99.9% of the applications out there.
2 GB by default. If the application is large address space aware (linked with /LARGEADDRESSAWARE), it gets 4 GB (see http://msdn.microsoft.com/en-us/library/aa366778.aspx)
Applications without that flag are still limited to 2 GB, since many applications depend on the top bit of pointers being zero.