IIS Performance - ASP.NET

We have the following setup:
Virtual server, Intel Xeon X5650 @ 2.67 GHz (4 processors)
8 GB RAM
Windows Server 2008 Standard 64-bit
SQL Server Express
IIS 7.5
Our database is only 200 MB. We are running an ASP.NET app. We recently ran into some performance issues: ~200 concurrent connections caused 100% CPU usage (mostly consumed by IIS) and pushed response times to around 20 seconds! After some tweaks to our code we were able to run a load test from loader.io with 1500 concurrent users over 1 minute; at the end our response time was around 5 seconds, CPU was around 95% (again consumed mainly by IIS), and memory usage was sitting at around 4 GB. However, we are expecting bigger spikes than 1500 users - anywhere up to around 4000 users in a short amount of time.
My questions are the following:
1) Is this normal performance for our current setup? Our site is quite intensive on the database and we are using Entity Framework.
2) Would upgrading to SQL Server Web Edition have any benefit, seeing as our database is so small?
3) Do you think that this type of setup could handle 4000 users?
4) Any suggestions on what we could do to handle this load?
I know this is somewhat subjective, but any answers are much appreciated.

Is this normal performance for our current setup?
Depends on your code. Did you profile the code to make sure you don't have anything stupid in there?
Our site is quite intensive on the database and we are using Entity Framework.
Again, did you profile to figure out whether you actually spend a lot of time in Entity Framework? It is slow, but the question is what "intensive" means. This is what profilers are for.
Would upgrading to SQL Server Web Edition have any benefit, seeing as our database is so small?
Help, my pizza arrives too late. Would upgrading to a larger car help? You say yourself that you spend the time in IIS, not SQL Server.
Do you think that this type of setup could handle 4000 users?
Is my car big enough? Note that I don't tell you what I need it for. Without looking at usage patterns and your code - no idea. THAT SAID: the server is pathetic compared to what you can buy today. As such, this is an irrelevant question - just upgrade if you have to.
Any suggestions on what we could do to handle this load?
Load test + profiler, optimize code. Get a bigger server. Realize that we don't have crystal balls to figure out how good / bad / stupid your code is.

The number one question arising here is: did you deploy RELEASE or DEBUG compiled binaries of your project?
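If you are not sure, check web.config on the deployed site; in production it should look like this sketch (the targetFramework value here is just an example):

    <!-- web.config: a site deployed with debug="true" disables batch
         compilation and request timeouts and uses far more memory. -->
    <configuration>
      <system.web>
        <compilation debug="false" targetFramework="4.0" />
      </system.web>
    </configuration>

Setting <deployment retail="true" /> in machine.config forces this server-wide, overriding any debug="true" that slips through a deployment.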
Upgrading to Web Edition will not solve any problem here, since the difference between the editions is very simple: Web Edition is just throttled in the internal scheduler and the like - so you will be just fine with the Standard edition.

My experience is that the most crucial factor for concurrent requests is the amount of server memory and how much of it your code consumes.
As physical memory is exhausted, the server starts swapping to disk, which slows processing down dramatically and leads to the symptoms you describe.
I would start by putting another 8 GB of RAM into the server. In the meantime, try to optimize your code so that requests process less data or use less memory. Also, move SQL Server to a separate machine so that IIS and SQL Server don't compete for memory.
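Until the move happens, you can at least cap how much memory SQL Server claims so the two processes don't fight over it. A T-SQL sketch (the 2048 MB figure is only an example; note that Express edition already limits its buffer pool to roughly 1 GB, so the gain there may be small):

    -- Cap SQL Server's buffer pool so it cannot starve w3wp.exe.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory', 2048;  -- value in MB
    RECONFIGURE;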

With your current machine, I doubt the problem is IIS itself; it is more likely related to the way your app is designed and/or utilizes frameworks. I personally learned just recently that IIS requests including multiple round trips to the database can be measured in hundreds of microseconds, not hundreds of milliseconds... A single locking bug or unbalanced queuing can limit your application's scalability regardless of your hardware specs [https://twitter.com/michaelzino/status/454512110165184512].
Entity Framework is known for validating your models against the database schema on the first calls. I would suggest profiling your app layers, starting from the data access layer or the intrinsic database calls, and going up.
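A minimal way to start that profiling, assuming EF6 (the entity, context, and connection-string names below are placeholders, not taken from the question):

    // Log every SQL statement EF generates so slow queries and N+1
    // patterns show up immediately in the trace output.
    using System.Data.Entity;
    using System.Diagnostics;
    using System.Linq;

    public class Order
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }

    public class ShopContext : DbContext
    {
        public ShopContext() : base("name=ShopDb") { }  // hypothetical connection string
        public DbSet<Order> Orders { get; set; }
    }

    public static class EfProfilingDemo
    {
        public static void Run()
        {
            using (var db = new ShopContext())
            {
                db.Database.Log = s => Debug.Write(s);

                var sw = Stopwatch.StartNew();
                var big = db.Orders.Where(o => o.Total > 100m).ToList();
                sw.Stop();
                Debug.WriteLine("Round trip: " + sw.ElapsedMilliseconds + " ms");
            }
        }
    }

The first query will be noticeably slower than the rest because of the model validation mentioned above; everything after that is your real baseline.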

Related

High App Pool Memory Usage Causes ASP.net Site To Have Slow Response Time

We have an ASP.NET MVC web application that uses a lot of memory (4 GB) after it has run for a few days. The server we are running it on has plenty of power to spare (CPU around 10% and 7% memory usage), but for some reason, as the App Pool's memory use increases, the site's response time gets worse.
Even pages that are cached are taking a very long time to load. Cached pages should be served from memory, which should be near-instant. We cache a lot of pages and a lot of database calls, so it doesn't surprise me that the App Pool is that big, but it makes no sense to me that the more we cache, the longer the site takes to respond.
If I recycle the App Pool, the site is super fast again. We are using Windows Server 2012, IIS 8, and SQL Server 2012.
Does anyone have any ideas why this might be happening?
Thanks so much!
This is a pretty broad question, but I would suggest installing something like New Relic on the server to try to find bottlenecks or code that is leaking memory.
It will give you a report of intensive functions.
You can get a pretty decent understanding of what is going wrong during the free premium trial (no card or anything needed).
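Beyond profiling, consider putting a hard ceiling on the cache itself. A sketch assuming the app caches through System.Runtime.Caching (the question doesn't say which cache is in use, and the limits are example values):

    using System;
    using System.Collections.Specialized;
    using System.Runtime.Caching;

    public static class BoundedCache
    {
        // Hard 512 MB cap: the cache trims itself on each polling interval
        // instead of growing with the App Pool until response times suffer.
        private static readonly MemoryCache Cache = new MemoryCache(
            "bounded",
            new NameValueCollection
            {
                { "CacheMemoryLimitMegabytes", "512" },
                { "PollingInterval", "00:02:00" }
            });

        public static void Put(string key, object value)
        {
            // Absolute expiration keeps stale entries from pinning memory forever.
            Cache.Set(key, value, DateTimeOffset.UtcNow.AddMinutes(10));
        }

        public static object Get(string key)
        {
            return Cache.Get(key);
        }
    }

A cache that large also makes Gen 2 garbage collections expensive, which would explain pages slowing down even when they are served from memory.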

Worker process taking high CPU%

All of my websites are hosted in IIS and configured with one application pool. This application pool has 10 websites running in it.
It was working fine until today, but all of a sudden I am seeing the CPU usage spike up and down. I am unable to trace the problem.
Is there any way to check which website is generating the most load in the application pool?
Performance counters, Task Manager and native code analysis tools only tell part of the story. To gain a deeper understanding of what is happening inside your ASP.NET application you need to use WinDbg, SOS and ADPlus.
Tess Ferrandez has a great series of articles on tracking down what is to blame here:
.NET Debugging Demos Lab 4: High CPU hang
.NET Debugging Demos Lab 4: High CPU Hang - Review
This is a real world example:
High CPU in .NET app using a static Generic.Dictionary
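Once you have a dump or a live attach, the rough sequence those labs walk through looks like this (SOS commands; the thread number is only an example):

    $$ Load the SOS extension (for .NET 4.x; use mscorwks for 2.x):
    .loadby sos clr
    $$ CPU utilization and queued work items:
    !threadpool
    $$ User-mode CPU time per thread - find the hot one:
    !runaway
    $$ Switch to the busiest thread (7 here is an example):
    ~7s
    $$ Managed call stack of that thread:
    !clrstack
    $$ Or dump managed stacks for every thread at once:
    ~*e !clrstack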
You will probably want to separate your sites into individual application pools so you can identify and isolate the site that is causing the high CPU (though it already looks like you have a suspect, so I'd isolate that one). From there you can follow Tess's advice and guidance to track down the cause.
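appcmd can map worker processes to application pools, and splitting a suspect site out takes two commands. A sketch (the pool and site names are examples):

    REM Map worker processes (PIDs) to application pools, then watch
    REM those PIDs in Task Manager to see which one burns the CPU:
    %windir%\system32\inetsrv\appcmd list wp
    REM List requests that have been executing for more than 5 seconds:
    %windir%\system32\inetsrv\appcmd list requests /elapsed:5000

    REM Give the suspect site its own pool so its CPU shows up alone:
    %windir%\system32\inetsrv\appcmd add apppool /name:SuspectPool
    %windir%\system32\inetsrv\appcmd set app "SuspectSite/" /applicationPool:SuspectPool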
You should also take a look at the logs to see if you're experiencing an unexpected spike or increase in traffic. Perhaps there's a badly behaved search engine indexer nailing the site. If that's the case then maybe you need to (if you haven't already done so) create a robots.txt to prevent crawlers from indexing parts of the site that don't need to be indexed. On top of that, if certain crawlers are being overly promiscuous then just ban them. Perhaps consider a sitemap for Google to tame and tune its activities.
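For example (paths and the bot name are placeholders; base yours on what the logs show):

    # robots.txt at the site root
    User-agent: *
    Disallow: /search/
    Disallow: /admin/
    # Crawl-delay is honoured by some crawlers (e.g. Bing), not all.
    Crawl-delay: 10

    # Ban one badly behaved crawler outright:
    User-agent: BadBot
    Disallow: /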
If your server has reached its maximum capacity, you will see the CPU go up and down erratically because the GC will start trying to reclaim resources (cache, etc.), which in turn makes your sites work even harder. It's an endless cycle.
Have you been monitoring your performance counters? Do you have any idea what normal capacity is for your site? If you cannot answer these questions, I suggest you gather some perf numbers as soon as possible.
My rule of thumb is to always measure first, then make necessary changes.
Most of the time performance bottlenecks aren't where you think they would be.
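If you want a quick baseline without installing anything, the stock counters can be read from a few lines of C# (this is a sketch; the ASP.NET counter category requires ASP.NET to be registered, and the w3wp instance name can differ when several worker processes run):

    using System;
    using System.Diagnostics;
    using System.Threading;

    public static class PerfBaseline
    {
        public static void Main()
        {
            var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
            var queued = new PerformanceCounter("ASP.NET", "Requests Queued");
            var gc = new PerformanceCounter(".NET CLR Memory", "% Time in GC", "w3wp");

            // The first read of a rate counter is always 0, so sample in a loop.
            for (int i = 0; i < 30; i++)
            {
                Console.WriteLine("CPU {0,5:F1}%  queued {1,4:F0}  GC {2,5:F1}%",
                    cpu.NextValue(), queued.NextValue(), gc.NextValue());
                Thread.Sleep(1000);
            }
        }
    }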
There is really no performance-counter way to tell, because the CPU counters are at the process level. Your best bet would be to do a time correlation with other events in the event log and the .NET/ASP.NET counters for garbage collection, requests, etc.
If you really want to go hardcore, you could use the Sysinternals toolset to take snapshots of your app pool over time and then do a post-analysis to figure out what code was executing when the spike happened. Here is a related example from Mark Russinovich's blog: http://blogs.technet.com/b/markrussinovich/archive/2008/04/07/3031251.aspx.
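ProcDump automates exactly that snapshot-on-spike pattern. A sketch (thresholds are examples; if several w3wp.exe processes run you must target a PID instead, and procdump -? lists the current switches):

    REM Take three full dumps of w3wp when CPU stays above 80% for 5 seconds:
    procdump -ma -c 80 -s 5 -n 3 w3wp.exe

Each dump can then be opened in WinDbg for the post-analysis described above.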

Weird memory increase on application pool recycle

I'd like to describe a strange issue I've noticed while analyzing my ASP.NET application in production, and ask for advice or opinions on the following matter.
The application usually runs with a memory footprint of some 80-90 MB. This seems stable, since no memory leaks have been detected so far - no gradual increase in memory usage over time. Yet a problem occurs when the application pool recycles (I'm on shared hosting, and judging by the logs this happens either when the app is idle for 20 minutes or every ~30 hours - something like that). The issue is that used memory almost doubles for some period after the recycle - it goes to some 160-170 MB without any explanation. This is confusing, since the common claim is that recycling should purge memory and all other resources - at least that's how I understand it.
The system holds this amount of memory for some 7-8 hours, and then memory usage drops to its usual level of 90-100 MB, again with no apparent reason (at least none known to me). All the while the application seems to work well - no significant delays or trouble with site availability - to the users everything seems OK, no complaints so far. On a graph of memory consumption over time, it looks almost like a step function.
The important thing is that I haven't been able to reproduce this behavior in my testing environment. Occasionally I get notes from the provider's administrators that my app is using more resources than allowed, and this really bugs me.
So, what I would like to know: is there any possible scenario in which application pool recycling does not release all memory resources? Is there any advice or guideline on what I should focus on? I'm not an expert in this area, but I've been reading about things like overlapping recycling, serialization problems on recycle, and a couple more issues... Any ideas? Similar experiences?
Thanks
This post provides a pretty good overview of what happens when your site's app pool is recycled: http://blogs.msdn.com/b/tess/archive/2006/08/02/asp-net-case-study-lost-session-variables-and-appdomain-recycles.aspx
My speculation is that your memory usage increases due to the JIT compilation that follows every recycle of the app pool. My guess is that your shared host has different configuration and environment settings than your development server.
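If post-recycle compilation is indeed the culprit, precompiling the site before deployment removes most of that work. A sketch (paths are examples; aspnet_compiler ships with the .NET Framework):

    REM Precompile so a recycle doesn't trigger a compilation storm:
    C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_compiler.exe -v /MyApp -p C:\src\MyApp C:\build\MyApp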
IMHO, if you're using ~100 megs of memory on a shared host, you are asking for trouble if it's a host like DiscountASP.NET or GoDaddy. If you care at all about that website, go get a VPS or some more configurable hosting where you can pay a premium for a higher memory limit.

IIS memory leak detection techniques

I have a server running 100+ WordPress sites of varying complexity and traffic volume. The OS is Windows Server 2003 running IIS 6, with the domains managed via HELM. There are times when sites stop responding due to insufficient memory, but it has been difficult to track down the particular site(s) or other culprit. What makes it even more complicated is that the problem will disappear for weeks and then show up again. The most recent solution was to migrate the sites to a higher-capacity server, and this seemed to work for some time.
What tools/techniques can I use to track down the problem while keeping in mind that this is a production server?
Tess Ferrandez has a number of great articles about tracking down memory pressure and process hangs in IIS using WinDbg and DebugDiag:
If broken it is, fix it you should
Whilst the articles often focus on ASP.NET, many of the techniques can be applied to other languages. The only problem is that because PHP is written in native code, your WinDbg-fu will probably need to be fairly good.

Replicating load related crashes in non-production environments

We're running a custom application on our intranet and we have found a problem after upgrading it recently where IIS hangs with 100% CPU usage, requiring a reset.
Rather than subject users to the hangs, we've rolled back to the previous release while we determine a solution. The first step is to reproduce the problem -- but we can't.
Here's some background:
Prod has a single virtualized (VMware) web server with two CPUs and 2 GB of RAM. The database server has 4 GB and 2 CPUs as well. It's also on VMware, but on separate physical hardware.
During normal usage the application runs fine. The w3wp.exe process normally uses between 5-20% CPU and around 200 MB of RAM. CPU and RAM fluctuate slightly under normal use, but nothing unusual.
However, when we start running into problems, RAM climbs dramatically and the CPU pegs at 98% (or as much as it can get). The site becomes unresponsive, necessitating an IIS restart. Recycling the app pool does nothing in this situation; a full IIS restart is required.
It does not happen during the night (no usage). It happens more when the site is under load, but it has also happened during off-peak periods.
The first step to solving this problem is reproducing it. To simulate the load, we started using JMeter, with a load script based on actual usage around the time of the crash. Using JMeter we can ramp the usage up quite high (2-3 times the load during the crash), but the site behaves fine: CPU goes high and the site does become sluggish, but memory usage stays reasonable and nothing hangs.
Does anyone have any tips on how to reproduce a problem like this in a non-production environment? We'd really like to reproduce the error, determine a solution, then test again to make sure we've resolved it. During the process we've found a number of small improvements that might solve the problem, but I'd feel a lot more confident if we could reproduce the issue and test the improved version.
Any tools, techniques or theories much appreciated!
You can find some information about troubleshooting this kind of problem at this blog entry. Her blog is generally a good debugging resource.
I have an article about debugging ASP.NET in production which may provide some pointers.
Is your test environment really the same as live?
i.e.
Two separate VM instances on two physical servers, with the same network connection and account types?
Are there any other instances on the database server?
Are there any other web applications in IIS?
Is the .NET config right?
Is the app pool config right for service accounts?
Try looking at this - MS Article on IIS 6 Optimising for Performance.
Lots of tricks.
