High Performance ASP.NET Site (> 1000 Requests/Second)

I am writing a high-performance ASP.NET JSON API that will soon need to serve more than 1,000 requests/second. All my logic and processing is done in an IHttpHandler. I measured with the Stopwatch class, and the handler finishes a request in roughly 0.1-0.5 milliseconds.
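My measurement looks roughly like this (a simplified sketch - the handler name and the JSON-building call are placeholders, not the real API code):
// Simplified sketch of the in-handler measurement - JsonApiHandler and BuildJson are placeholders.
using System.Diagnostics;
using System.Web;

public class JsonApiHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        Stopwatch sw = Stopwatch.StartNew();

        string json = BuildJson(context);              // placeholder for the real processing
        context.Response.ContentType = "application/json";
        context.Response.Write(json);

        sw.Stop();
        // This only covers the handler body, not the IIS/ASP.NET pipeline around it.
        context.Response.AddHeader("X-Handler-Ms", sw.Elapsed.TotalMilliseconds.ToString("0.000"));
    }

    private static string BuildJson(HttpContext context)
    {
        return "{\"ok\":true}";                        // stand-in result
    }
}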
But it seems IIS and/or other HTTP handlers (modules?) are taking away a lot of performance. Can I measure that somehow? How much overhead does a request incur in IIS when it is configured for best performance?
Will removing all those HTTP handlers help, or are there other tricks to speed things up? I don't need much of the ASP.NET feature set besides Session (I could even work around that if it gives a significant performance boost).

Measuring performance of a web server is no trivial task. A few things to consider:
Find the actual bottleneck. It can be memory, disk access, caching, database access, network latency, etc. Use a memory profiler or another performance profiler to find out.
Use Wireshark to measure the difference between how long the request spends on the machine overall and how long your own code actually runs.
Try other configurations. Give ASP.NET more memory. Upgrade the test system. For example, going from 8 GB / 2.5 GHz with 600 requests/sec to 16 GB / 3.0 GHz can yield 6,500 requests/sec; performance growth is often not linear. See this document from Microsoft.
Consider adding an extra machine. Depending on how you configure it, this can yield a 50% or even greater performance improvement. See that same Microsoft document again.
Check these hints by Jon Skeet. The comment thread reveals some non-obvious potential bottlenecks as well.
NOTE 1: know your tools. ASP.NET runs each request on its own thread. Thread switching is faster than process switching, but it still takes time. If other handlers or modules cost you time because they sit in the request chain, it's beneficial to disable them.
NOTE 2: one of the original side goals of Stack Overflow was to build an ASP.NET site that performed well on at most two servers and could handle more than 1 million visitors per hour. They managed to do that. I believe they wrote some blog posts about it, but I don't remember where they are.

This is a very good question. I have noticed the same thing: once you get into the single-millisecond range of response times, ASP.NET overhead starts to be noticeable. I can confirm your observation.
What I have done successfully is to find out which HttpModules are registered (using IIS Manager) and disable every one I could possibly get rid of. The standard ASP.NET pipeline has a lot of modules and functionality configured by default.
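In configuration terms that boils down to something like the following (the module names are only examples from the default pipeline - check the <httpModules> section of machine.config for the actual list on your box, and keep Session since you need it):
<!-- web.config (sketch): strip pipeline modules the JSON handler does not use.
     Module names vary by framework version - these are only examples. -->
<system.web>
  <httpModules>
    <remove name="WindowsAuthentication" />
    <remove name="FormsAuthentication" />
    <remove name="PassportAuthentication" />
    <remove name="RoleManager" />
    <remove name="AnonymousIdentification" />
    <remove name="Profile" />
    <!-- "Session" is deliberately left in place because you still need session state -->
  </httpModules>
</system.web>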
If you need ultimate performance, you could of course use a tiny HTTP server library and get rid of almost all of that overhead. That would be extremely fast.

Related

ASP.NET Web Services + IIS performance diagnostics [closed]

I need to find a performance bottleneck in a server application under heavy load. The application consists of a single web services instance (.asmx) and some files that are requested over HTTP from time to time. My plan is to 1) drive the server into an exceptional situation where it starts failing somehow, and 2) analyze the performance counters and logs from that moment to deduce what kind of calls caused it.
To get there, I've implemented a special client that issues both types of requests and repeats the respective cycles indefinitely, hoping that at some point I'll get errors on the WebMethod/GET URL requests (NB: existing off-the-shelf tools like JMeter and WAPT can't be used due to the complexity of the services' usage scenario). So far I am observing increased response times on the service calls and some network timeout exceptions while loading the files (using HttpClient, which throws OperationCanceledException; that is considered a timeout according to this thread). That strikes me as strange, because the files are only a few KB in size while the service methods return 5-10 MB of data per request; I'd have thought the "larger" requests would be more likely to fail first.
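The core of the test client loop looks roughly like this (heavily simplified - the URL handling and logging here are illustrative, not the actual test code):
// Simplified sketch of the test client loop described above (not the real code).
using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class LoadClient
{
    static readonly HttpClient Http = new HttpClient { Timeout = TimeSpan.FromSeconds(30) };

    static async Task HammerAsync(string url)
    {
        while (true)
        {
            try
            {
                using (var response = await Http.GetAsync(url))
                using (var body = await response.Content.ReadAsStreamAsync())
                {
                    // Discard the payload immediately so the client does not accumulate memory.
                    await body.CopyToAsync(Stream.Null);
                }
            }
            catch (OperationCanceledException)
            {
                // HttpClient reports its own timeout as a cancellation.
                Console.WriteLine("Timeout: " + url);
            }
            catch (HttpRequestException ex)
            {
                Console.WriteLine("HTTP error: " + ex.Message);
            }
        }
    }
}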
Perfmon shows increased CPU load and absolutely no memory spikes/leaks. The Request Execution Time counters are pretty random and look irrelevant, and the Queue Lengths are always 0.
That said, it looks like IIS handles my improvised DDoS well, which at the same time makes my testing approach ineffective (the increased response times mean more active requests held in memory on the test client, which eventually causes it to run out of memory, and I'm already discarding the data right after I receive it without doing anything with it).
More details: the server machine has 4 cores at 3 GHz and 4 GB of RAM. I generate a load of 50-100 requests per second, which results in 10-20 MB/sec of bandwidth (the test clients run on a VM inside the server's datacenter, with a 4 Gbps NIC). A 30-minute testing session transfers roughly 10-30 GB of data between server and client.
How can I actually make Web Service/IIS go down?
Firstly, I wouldn't write my own load testing tool; there are plenty available. I've used JMeter (open source). You can use JMeter (and other similar tools) to send both POST and GET parameters, cookies and other HTTP headers - though admittedly, this does become challenging for complex cases.
Next, make sure your problem really is the server, and not the other infrastructure - network, routers, firewalls etc. all have maximum capabilities, and may be the root cause of the problem. Most of them have logging and reporting tools. For instance, I've seen tests report a throughput issue when they reached the maximum capacity of the firewall; the servers were not even close to breaking point. This happened because we had included a rather large binary file in the test cases, which normally would be served from a CDN.
Next, on the whole it's unlikely that serving static HTTP requests is the problem - IIS is really, really good at that. On the kind of hardware you mention, I'd expect it to handle many thousands of requests per second for static files.
In most situations, it's the dynamic pages that cause the problem - your .asmx. So I'd ignore all the static files in the load testing and focus on the .asmx. On the kind of hardware you mention, you will probably need to generate many hundreds of requests per second if the .asmx services are working properly.
Working on the assumption that your web server is tuned correctly and the .asmx code is reasonably performant, I'd expect to need at least twice the (CPU and memory) capacity in the test system as your server has in order to bring it to breaking point (this is based on my experience with JMeter, which is not as efficient as my web applications, but does make it easy to deploy multiple test clients). So in your case, I'd look for two machines matching your server specification.
With JMeter (and pretty much all the other load testing tools I've worked with), you can fairly easily use multiple machines as load test clients; I've also used Cloud-based load generation using JMeter.
I'm not totally sure why this rule of thumb is true - but I've observed it over multiple projects.

ASP.NET Requests Queued causes website to crumble. SQL backend, IIS6

I have inherited a somewhat complex system (and problem) that I need help with.
I have a web server with the following specs:
Hardware and software:
Windows Server 2003, 32-bit
IIS 6
8 cores (16 with hyperthreading)
12 GB RAM
ASP.NET site
3 app pools, so 3 instances of w3wp.exe running.
This system serves a large number of people, and bandwidth is fairly constant during business hours, reaching roughly 68,000 kbit/s (about 68 Mbit/s).
There are moments when the system "comes down": the site gets very slow, which generates a lot of phone calls. Things usually slow down for about 60 seconds, but the length varies greatly - sometimes it's only a few seconds and sometimes 3 minutes or more.
I have my app pools set to recycle at roughly 600 MB of consumed memory. That's not exact, but they recycle on their own successfully. At times I recycle the "main" pool manually to clear the problem I'm describing.
This is what I know is going on when things are running slowly:
Network bandwidth takes a considerable dip.
Requests Queued in the ASP.NET performance counters goes up.
In tandem with Requests Queued rising, page latency increases. (I employ a simple ASP page that makes a SQL call and just says "The system is live"; this page is monitored for latency.)
Overall CPU usage rises.
Overall memory consumption of w3wp.exe rises.
In my mind, here is what I imagine is happening:
Someone asks the system to generate a report or a large glob of data. This spins up work that consumes a large number of threads (i.e., all available threads). That causes all other requests to the system to wait in the ASP.NET request queue, which essentially kills the site. The lack of activity causes the network traffic to dip.
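A rough sketch of logging those counters once a second so they can be lined up with the slowdowns (standard ASP.NET and Processor counter names; adjust as needed):
// Rough sketch: log Requests Queued and CPU once a second to correlate with the slowdowns.
using System;
using System.Diagnostics;
using System.Threading;

class CounterLogger
{
    static void Main()
    {
        var queued = new PerformanceCounter("ASP.NET", "Requests Queued");
        var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");

        while (true)
        {
            Console.WriteLine("{0:HH:mm:ss}  queued={1}  cpu={2:0}%",
                DateTime.Now, queued.NextValue(), cpu.NextValue());
            Thread.Sleep(1000);
        }
    }
}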
I have read many articles about thread queues, thread pools, etc. This is a good example: http://williablog.net/williablog/post/2008/12/02/Increase-ASPNET-Scalability-Instantly.aspx - it contains what I believe is a clue to solving my problem, but I'm not sure. The machine.config file for the version of ASP.NET we are using does not specify any of the thread values listed in the article, so we are on the defaults for everything, which I believe is wrong given our situation.
If you were me, what would you do next? Where do you think the problem is?
Edit: here is a screenshot; it should be obvious when the problem is happening.
http://i.imgur.com/5BJlq.png
Edit: I want to change these values for our setup. A few questions first:
1) After making the changes, what needs to be restarted for them to take effect?
2) How do these settings look for a system with 8 physical cores?
maxconnection = 96
maxIoThreads = 100
maxWorkerThreads = 100
minFreeThreads = 704
minLocalRequestFreeThreads = 608
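For context, here is roughly where those values live in machine.config (a sketch based on the classic per-CPU tuning formula - 12, 88, and 76 times the 8 cores - not a verified drop-in configuration):
<!-- machine.config (sketch): where each value above lives; verify against your environment. -->
<system.net>
  <connectionManagement>
    <add address="*" maxconnection="96" />
  </connectionManagement>
</system.net>
<system.web>
  <processModel autoConfig="false"
                maxWorkerThreads="100"
                maxIoThreads="100" />
  <httpRuntime minFreeThreads="704"
               minLocalRequestFreeThreads="608" />
</system.web>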
Not fun.
Many root causes share common symptoms, which makes this difficult to diagnose without getting your hands dirty with the application. :) Pardon me if some of these steps were already implied.
Some next steps might be:
Review the IIS logs of each site looking for things like:
HTTP response codes (5xx,4xx,3xx)
Request response times
Review Windows Event Logs
How often are the application pools recycling?
Application errors, etc.
Verify the processModel settings, as suggested by @vinayc, to make sure your predecessor didn't get 'tricky'.
Install DebugDiag; it's a surprisingly good tool for basic analysis of memory- and crash-related problems.
It can also help you capture memory dumps to diagnose later.
Tess Ferrandez's blog can help you make heads or tails of memory dump analysis.
Understand how many web applications are running in each AppPool.
Investigate using a 'web garden' to possibly minimize the number of users impacted by the 'slow down'.
Is a virus scanner enabled? Is it running? If so, verify exclusions.
Are application teams available to help troubleshoot? Identify if they have any custom application instrumentation that might help diagnose problem.
Is the behavior 'new'? Or has it always been there? If 'new', can you track down which deployment might have caused the new behavior?
Could the described 'slow down' behavior be attributed to an app pool recycle and the resulting re-JITting of the application - i.e., the classic first-request syndrome?
Reviewing the logs helps you understand how the sites/applications are being used, which can be especially important if you don't own the codebase. Log Parser is an excellent tool for doing some IIS log analysis (as well as other formats).
Good luck!
Z
The settings you are talking about are part of the processModel element under the system.web element in machine.config. For IIS 6, the following are applicable:
autoConfig
maxIoThreads
maxWorkerThreads
minIoThreads
minWorkerThreads
requestQueueLimit
responseDeadlockInterval
Typically, you will only find autoConfig="true" and none of the other attributes. Auto-config sets the values according to your machine configuration; the tuning follows the recommended values (see the "Threading Explained" section of this article), which are the same as those cited by the link you provided.
The article, although dated, is an excellent resource if you want to tune these settings manually.
On the other hand, at the load you are serving, I would recommend two things (if you haven't tried them already):
Use output caching aggressively - even if the data is dynamic, caching it for, say, 30-60 seconds can give a definite boost at your load (see the sketch after this list).
If you suspect certain requests are hogging too many threads, try moving those resources into a different app pool (you can use a different web site on a different sub-domain, or a different virtual directory/application with a different app pool assigned).
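A minimal sketch of the output-caching idea for an expensive page (60 seconds, server-side only; adapt the cache policy calls to your actual pages):
// Code-behind sketch for an expensive page: cache its rendered output on the server for 60 seconds.
// Roughly equivalent in intent to an OutputCache duration of 60 with no parameter variation.
protected void Page_Load(object sender, EventArgs e)
{
    Response.Cache.SetCacheability(HttpCacheability.Server);
    Response.Cache.SetExpires(DateTime.UtcNow.AddSeconds(60));
    Response.Cache.SetValidUntilExpires(true);

    // ... the expensive report / data generation below now runs at most once per minute
}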

How many requests per second should my ASP (classic) app handle?

I'm profiling a classic ASP web service. The web service makes database calls, reads/writes files, and processes XML. On a Windows Server 2003 box (2.7 GHz, 4 cores, 4 GB RAM), how many requests per second should I be able to handle before things start to fail?
I'm building a tool to test this, but I'm looking for a number of requests per second to shoot for.
I know this is fairly vague, but please give the best estimate you can. If you need more information, please ask.
95% of the performance of any data-driven app is dependent on the database: 1) the way you do your calls, 2) the indexes, 3) the hardware under the database (disk subsystem in particular).
I have seen a machine like the one you describe handle 40 requests per second (about 2,400/minute), but numbers like 10 per second (600/minute) are more common. I would expect even less if you are running your DB on the same machine, and less still if that DB is SQL Server Express or MS Access.
Also, at capacity your app will probably not fail outright; once IIS is saturated it will queue requests, and it may time out some of those requests if it can't service them before the timeout expires.
By the way, instead of building a tool to test your app, you may want to look into using an existing test tool such as Microsoft WCAT. It is pretty smooth and easy to use.
How fast should it be? Fast enough.
How fast is fast enough? That's a question that only you and your users can answer. If your service is horrifically inefficient and keeps up with demand, it's fast enough. If your service is assembly-optimized, lightning-fast, and overwhelmed with requests, it's not fast enough.
If the server is handling its actual workload, then don't worry about how fast it "should" be. When the server is having trouble, or when you anticipate that it soon will, then you should look at improving the code or upgrading the hardware. Remember Knuth's law: premature optimization is the root of all evil. Any work you do now to make it faster may never pay off, and you may be forced to compromise on flexibility or maintainability. Remember, too, an older adage: if it ain't broke, don't fix it.
Yes, I would also say 10 per second is a good benchmark. For a high-performance app you would want more than that, but if you have no specific goal, you should generally be able to get at least 10 requests per second for a typical web page with a bunch of database queries.

Polling webservice performance - Will this work?

Our app needs instant notification, so obviously I should use WCF duplex or socket communication. The problem is that the app is a partial-trust XBAP, and thus I'm not allowed to use anything but BasicHttpBinding. Therefore I need to poll for changes.
Now comes the question: my PM says the update interval should be around 2 seconds, running on an intranet with 500 users against a single web server.
Do any of you have experience with how such polling would affect the web server?
The service is fairly simple: it takes a GUID as an argument and returns a list of GUIDs. All data access is cached, so I guess the load on the server is minimal for a single call, but for 500...
Apart from the polling, the web server has little work to do.
So, based on this little bit of information (assume standard server hardware, whatever that is), is it possible to make a qualified guess?
Is it feasible to implement this - will it work?
Yes, I know estimating this is difficult, but I'd be really glad if some of you could share some thoughts on this.
Regards
Larsi
Don't estimate, benchmark.
Polling... bad, but if you have no other choice, then it's ideal. :)
Bear in mind that you will no doubt have keep-alives on, so you will have 500 permanently connected users. The memory usage of that will probably be more significant than the processor usage. I can't imagine the requests themselves (even in a relatively bloaty web service) would use much network capacity, but your network latency might become an issue - especially as we've all seen web applications 'pause' for a little while.
In the end, though, you'll probably be OK, but you'll have to check it yourself. There are plenty of web service stress testers; you can use Microsoft's WAS tool, for one, and here are a few links to others.
Try using SoapUI, a web service testing tool, to check the performance of your web service. There is a paid version and an open-source version that is free.
cheers
I don't think it will be a particular problem. I'd imagine the response time for each request would be pretty low, unless you're pulling back a hell of a lot of data, so 500 connections spread over 2 seconds shouldn't hit it too hard.
You can use a stress testing tool to verify your webserver can handle the load though, before you commit to this design.
250 requests per second (500 users polling every 2 seconds) is probably doable with quite modest hardware and network bandwidth, provided you minimize the data sent back and forth. I assume you're caching these GUID lists on the client so you can just send a small "no updates" response in the normal case.
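For what it's worth, the shape of that contract would be something like this (the interface and member names are made up for illustration):
// Hypothetical contract shape for the "send a token, get back only changes" polling idea.
using System;
using System.Collections.Generic;
using System.ServiceModel;

[ServiceContract]
public interface IChangePollingService
{
    // The client passes the GUID it already knows about; an empty list means "no updates",
    // which keeps the typical 2-second poll response very small over BasicHttpBinding.
    [OperationContract]
    List<Guid> GetChangesSince(Guid knownToken);
}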
It should be pretty easy to measure with a simple prototype, though, if you want more confidence.

Best way to determine the number of servers needed

How much traffic can one web server handle? What's the best way to see if we're beyond that?
I have an ASP.Net application that has a couple hundred users. Aspects of it are fairly processor intensive, but thus far we have done fine with only one server to run both SqlServer and the site. It's running Windows Server 2003, 3.4 GHz with 3.5 GB of RAM.
But lately I've started to notice slowdowns at various times, and I was wondering what the best way is to determine whether the server is overloaded by the application's usage or whether I need to fix the application itself (I don't really want to spend a lot of time hunting down little optimizations if I'm just expecting too much from the box).
What you need is some information on capacity planning.
Capacity planning is the process of planning for growth and forecasting peak usage periods in order to meet system and application capacity requirements. It involves extensive performance testing to establish the application's resource utilization and transaction throughput under load. First, you measure the number of visitors the site currently receives and how much demand each user places on the server, and then you calculate the computing resources (CPU, RAM, disk space, and network bandwidth) that are necessary to support current and future usage levels.
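As a purely illustrative back-of-the-envelope example (the numbers are made up, not measured from your site): if 200 users each issue one request every 10 seconds, that is about 20 requests per second; if each request costs roughly 50 ms of CPU, that is about one core's worth of work, leaving the rest of the box for SQL Server and for headroom.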
If you have access to some profiling tools (such as those in the Team Suite edition of Visual Studio), you can try setting up a test server and running some synthetic requests against it to see whether any specific part of the code takes unreasonably long to run.
You should probably check some graphs of CPU and memory usage over time before doing this, to see whether that is even the issue. (A number akin to the UNIX "load average" could be a useful metric; I don't know whether Windows has anything like it. Basically, it's the average number of threads that want CPU time in each time slice.)
Also check the obvious, that you aren't running out of bandwidth.
Measure, measure, measure. Rico Mariani always says this, and he's right.
Measure req/sec, RAM, CPU, Sessions, etc.
You may come up with a caching strategy (output caching, data caching, cache dependencies, and so on); see the sketch below.
Also look at how your SQL Server is doing; indexes are a good place to start, but they're not the only thing to look at.
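As one concrete sketch of the data-caching idea (the cache key and the query call are placeholders, not your actual code):
// Sketch: keep an expensive query result in the ASP.NET cache for 60 seconds.
using System;
using System.Data;
using System.Web;
using System.Web.Caching;

public static class ReportCache
{
    public static DataTable GetReport()
    {
        DataTable table = HttpRuntime.Cache["report"] as DataTable;
        if (table == null)
        {
            table = LoadReportFromDatabase();          // placeholder for the expensive SQL call
            HttpRuntime.Cache.Insert("report", table, null,
                DateTime.UtcNow.AddSeconds(60), Cache.NoSlidingExpiration);
        }
        return table;
    }

    private static DataTable LoadReportFromDatabase()
    {
        return new DataTable();                        // stand-in; the real version hits SQL Server
    }
}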
On that hardware, a .NET application should be able to serve about 200-400 requests per second. If you have only a few hundred users, I doubt you are seeing even 2 requests per second, so I think you have a lot of spare capacity on that box, even with SQL Server running.
Without knowing all of the details, I would say no, you will not see any performance improvement by adding servers.
By the way, if you're not using the Output Cache, I would start there.
