Increase request queue limit of ASP.NET

Not sure if my title is technically correct, but I have a problem.
I have an ASP.NET 4.5 site on IIS 8 and use ASP.NET to control/limit file downloads.
Instead of letting IIS serve large (10-20 MB) static files such as zips, I serve them through ASP.NET.
It works fine until around 500 to 700 users start downloading. After that, ASP.NET starts to queue all requests to the domain until the active request count drops below some predetermined number.
Static content such as HTML isn't affected. If I enable more than one worker process it handles more requests, but that introduces the issue of managing session state.
There is no queue issue if I let IIS serve the files.
Is there any way to increase the queue length of ASP.NET?

You can modify the request queue limit and the maximum concurrent requests and threads allowed per CPU. In IIS integrated mode, the <applicationPool> element below goes in the machine-wide aspnet.config file rather than web.config:
<system.web>
  <applicationPool
    maxConcurrentRequestsPerCPU="12"
    maxConcurrentThreadsPerCPU="0"
    requestQueueLimit="5000"/>
</system.web>
(For reference, maxConcurrentRequestsPerCPU defaults to 12 on .NET 2.0/3.5 in integrated mode and to 5000 from .NET 4 onward.)
http://blogs.msdn.com/b/rakkimk/archive/2009/07/08/iis7-improving-asp-net-performance-concurrent-requests-while-on-integrated-mode.aspx
However, I would suggest serving larger static files via a CDN if that is possible in your situation. The default limits are in place for a reason. If you lift the configured values too far above the defaults, you may start to experience performance issues and runtime errors.
Amazon's S3 (and probably other CDNs) provides a range of options to control access to files. For example, time-limited pre-signed URLs let you leverage the fine-grained access control that IAM user policies provide, while also reducing your exposure by restricting each user's request to a predefined time window:
http://aws.amazon.com/articles/5050/
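For example, with the AWS SDK for .NET you can mint a time-limited pre-signed URL for a protected object. A minimal sketch; the bucket name and object key are placeholders, and credentials are assumed to come from your environment or config:
using System;
using Amazon.S3;
using Amazon.S3.Model;

class PresignedUrlExample
{
    static void Main()
    {
        var client = new AmazonS3Client(); // assumes AWS credentials are configured
        var request = new GetPreSignedUrlRequest
        {
            BucketName = "my-downloads-bucket",      // placeholder bucket
            Key = "files/archive.zip",               // placeholder object key
            Expires = DateTime.UtcNow.AddMinutes(15) // URL stops working after 15 minutes
        };
        string url = client.GetPreSignedURL(request);
        Console.WriteLine(url); // hand this URL only to the authorized user
    }
}
This way your ASP.NET app only does the authorization check and hands out the URL; the actual bytes never flow through your worker process.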

Related

What is the maximum number of threads allowed to run on a web page

I have developed a page using ExtJS and ASP.NET. The page has multiple widgets, and each widget sends multiple AJAX requests at load time. I assume each AJAX request runs on a new thread. I just want to understand:
How many such threads are allowed to run from a single window? How many requests can a browser send to the server without them getting queued at the browser itself? (I know requests can be queued on the server side, depending on various parameters.)
Does this behavior change from browser to browser?
Is there any way to trace the number of active requests?
I am not talking about the number of threads that can run on the server side, or the number of requests a server can process.
How many such threads are allowed to run from a single window?
It depends on your server's hardware capabilities and configuration.
Does this behavior change from browser to browser?
No.
Is there any way to trace the number of active requests?
Microsoft
http://msdn.microsoft.com/en-us/library/bb386420(v=vs.100).aspx
Examples & Tips
http://www.dotnetperls.com/trace
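On the server side, writing to the ASP.NET trace log looks roughly like this. A sketch: the page class name is hypothetical, and tracing must be enabled via <trace enabled="true"/> in web.config or Trace="true" in the @Page directive:
using System;
using System.Web.UI;

public partial class Dashboard : Page // hypothetical page class
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // These messages show up on the page (or in trace.axd) with timestamps,
        // which helps correlate the widgets' AJAX calls with server-side work.
        Trace.Write("Widgets", "Begin loading widget data");
        // ... load widget data here ...
        Trace.Write("Widgets", "Finished loading widget data");
    }
}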
A server typically can handle several simultaneous requests, even with a small number of threads.
On IIS you configure this using:
<system.web>
  <applicationPool maxConcurrentRequestsPerCPU="50" maxConcurrentThreadsPerCPU="0" requestQueueLimit="5000"/>
</system.web>
Refer: Microsoft best practices
In the browser, when using JavaScript, you should know that JavaScript doesn't have any multi-threading capabilities.
The number of simultaneous AJAX (or other) requests to a server is browser dependent.
Usually 2, 6, or 8 concurrent connections are allowed per domain name. This should not bother you too much, because the rest will simply be queued.
Refer: BrowserScope for the latest network data

Securing large static files without hogging ASP.Net threads using IIS

While my ASP.NET code is streaming a large file, is it tying down a thread completely? In other words, if 8 people are downloading large files and I only have 8 threads available, will no further requests be processed?
In any case, I need to find an alternative way of securing large static files, preferably by letting IIS serve them directly after the user has been authorized, in order to free the application server from having to deal with something that IIS, Nginx, etc. can do better without hitting any managed code.
I believe Nginx allows this if your app puts the "X-Accel-Redirect" header in its response: http://kovyrin.net/2006/11/01/nginx-x-accel-redirect-php-rails/.
Apache and Lighttpd have the same feature.
Any advice?
Returning the URL of the file is an appropriate solution.
You can prevent unauthorised users who have that URL from downloading the file by using the standard authentication providers in ASP.NET. If you turn on runAllManagedModulesForAllRequests (see http://www.iis.net/configreference/system.webserver/modules), the user's authentication will be verified when they hit the URL; if they are authorized, they will be allowed to access the file.
In either case, downloading doesn't lock threads, just execution. This is why the maxConnections setting has a default of 4294967295. (See http://www.iis.net/configreference/system.applicationhost/sites/site/limits)
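If you do want the authorization check in managed code but the transfer handled by IIS, one option is a small handler that authorizes the user and then hands the file off with TransmitFile, which lets IIS stream it without buffering the whole file in ASP.NET memory. A rough sketch; the folder layout and query parameter are assumptions:
using System.IO;
using System.Web;

public class SecureDownloadHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        if (!context.Request.IsAuthenticated)
        {
            context.Response.StatusCode = 401; // not logged in
            return;
        }

        // Assumption: files live under ~/protected/ and the name arrives as
        // ?file=...; GetFileName strips any path segments to block traversal.
        string fileName = Path.GetFileName(context.Request.QueryString["file"] ?? "");
        string fullPath = context.Server.MapPath("~/protected/" + fileName);

        if (fileName.Length == 0 || !File.Exists(fullPath))
        {
            context.Response.StatusCode = 404;
            return;
        }

        context.Response.ContentType = "application/octet-stream";
        context.Response.AddHeader("Content-Disposition", "attachment; filename=\"" + fileName + "\"");
        context.Response.TransmitFile(fullPath); // IIS streams the file to the client
    }
}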

Is the ASP.NET Cache independent for each host header set in IIS7

I have a site that dynamically loads website content based on the domain host name, served from IIS7. All of the domains share a cached collection of settings. The settings are being flushed from the cache on almost every page request, it seems. This is verified by logging the times at which the Cache value is null and reloaded from SQL. This code works as expected on other servers and sites. Is it possible that the ASP.NET Cache is stored separately for each domain host name?
Having different host headers for your site will not affect the cache.
There are a few reasons why your Cache might be getting flushed. Off the top of my head I would say either your AppDomain is getting dumped, your web.config file is getting updated, or some piece of code is explicitly expiring/clearing out your cache.
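To narrow down which of those is happening, you can log the reason entries leave the cache. A minimal diagnostic sketch, assuming the settings live under a single key:
using System.Web;
using System.Web.Caching;

public static class SettingsCache
{
    public static void Store(object settings)
    {
        // NotRemovable resists eviction under memory pressure; the callback
        // logs why the entry was removed when it does go.
        HttpRuntime.Cache.Insert(
            "SiteSettings", settings,
            null,                           // no file/key dependency
            Cache.NoAbsoluteExpiration,
            Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable,
            OnRemoved);
    }

    private static void OnRemoved(string key, object value, CacheItemRemovedReason reason)
    {
        // 'Underused' points to memory pressure; if entries vanish with no
        // callback firing at all, the AppDomain itself probably recycled.
        System.Diagnostics.Trace.WriteLine("Cache entry '" + key + "' removed: " + reason);
    }
}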
The cache is per application; I would look at a few other items.
Is your application pool recycling? (timeout, memory limit, file changes, other)
Do you have web gardening enabled? Each worker process in the garden keeps its own separate cache.
One other thing to check: how much memory is available? The ASP.NET cache will start ejecting stuff left and right once it senses a memory crunch. Remember, RAM is expensive and valuable storage...

Using a remote, external web service instead of a database

I am building an ASP.NET web application that will be deployed to a 4-node web farm.
My web application's farm is located in California.
Instead of a database for back-end data, I plan to use a set of web services served from a data center in New York.
I have a page /show-web-service-result.aspx that works like this:
1) User requests page /show-web-service-result.aspx?s=foo
2) Page's codebehind queries a web service that is hosted by the third party in New York.
3) When web service returns, the returned data is formatted and displayed to user in page response.
Does this architecture have potential scalability problems? Suppose I am getting hundreds of unique hits per second, e.g.
/show-web-service-result.aspx?s=foo1
/show-web-service-result.aspx?s=foo2
/show-web-service-result.aspx?s=foo3
etc...
Is it typical for web servers in a farm to use web services for data instead of a database? Any personal experience?
What change should I make to the architecture to improve scalability?
You most definitely have a scalability problem: the third-party web service. Unless you have a service-level agreement with that service (agreeing on the number of requests you can submit per second), chances are real that you will overload it with your anticipated load. Having four nodes yourself doesn't help you there.
So you should (a) come up with an agreement with the third party, and (b) test what load they can actually take.
In addition, you need to make sure that your framework can use parallel connections for accessing the remote service. Suppose you have a round-trip time of 20 ms from California to New York (which would be fairly good); you cannot make more than 50 sequential requests per second over a single TCP connection. Likewise, starting a new TCP connection for every request will also kill performance, so you want pooling on these parallel connections.
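As a sketch of the parallel-connections point: the .NET HTTP stack caps concurrent outbound connections per host (historically 2, per the HTTP/1.1 client guidance), and keep-alive pooling is handled by ServicePoint automatically once the cap is raised. The limit value here is an assumption to be tuned by load testing:
using System;
using System.Net;

class OutboundConnectionSetup
{
    static void Main()
    {
        // Raise the per-host outbound connection cap so many web service
        // calls can run in parallel; set this once at application startup.
        ServicePointManager.DefaultConnectionLimit = 100; // assumed value
        Console.WriteLine("Connection limit: " + ServicePointManager.DefaultConnectionLimit);
    }
}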
I don't see a problem with this approach, we use it quite a bit where I work. However, here are some things to consider:
Is your page rendering going to be blocked while waiting for the web service to respond?
What if the response never comes, i.e. the service is down?
For the first problem, I would look into using AJAX to update the page after you get a response back from the web service. You'll also want to consider how to handle the no-response or timeout condition.
Finally, you should really think about how you could cache the web service data locally. For example, if you are calling a stock-quoting service, then unless you have a real-time feed there is no reason to call the web service on every request you get. Store the data locally for a period of time and return that until it becomes stale.
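A minimal sketch of that local-caching idea, with hypothetical names: the fetch delegate stands in for the actual web service call, and the 30-second window is an assumption about acceptable staleness:
using System;
using System.Web;
using System.Web.Caching;

public static class QuoteCache
{
    public static string GetQuote(string symbol, Func<string, string> fetchFromService)
    {
        string cacheKey = "quote:" + symbol;
        string cached = HttpRuntime.Cache[cacheKey] as string;
        if (cached != null)
            return cached; // serve the local copy; no trip to New York

        string fresh = fetchFromService(symbol); // the expensive remote call
        HttpRuntime.Cache.Insert(
            cacheKey, fresh, null,
            DateTime.UtcNow.AddSeconds(30),   // assumed acceptable staleness
            Cache.NoSlidingExpiration);
        return fresh;
    }
}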
You may have scalability problems but most of these can be carefully engineered around.
I recommend you use ASP.NET's asynchronous tasks so that the web service call is queued up, the thread is released while the request waits for the web service to respond, and then another thread picks up when the web service is done, to finish off the request.
MSDN Magazine - Wicked Code - Asynchronous Pages in ASP.NET 2.0
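The pattern from that article looks roughly like this. A sketch with placeholder names (ShowResult, ResultLabel, the service URL); the page's @Page directive must include Async="true":
using System;
using System.IO;
using System.Net;
using System.Web.UI;

public partial class ShowResult : Page
{
    private HttpWebRequest _request;

    protected void Page_Load(object sender, EventArgs e)
    {
        string s = Request.QueryString["s"] ?? "";
        _request = (HttpWebRequest)WebRequest.Create(
            "http://example.com/service?s=" + Server.UrlEncode(s)); // placeholder URL
        // Between Begin and End the worker thread goes back to the pool;
        // a thread is only needed again once the service replies.
        RegisterAsyncTask(new PageAsyncTask(BeginCall, EndCall, TimeoutCall, null));
    }

    private IAsyncResult BeginCall(object sender, EventArgs e, AsyncCallback cb, object state)
    {
        return _request.BeginGetResponse(cb, state);
    }

    private void EndCall(IAsyncResult ar)
    {
        using (WebResponse response = _request.EndGetResponse(ar))
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            ResultLabel.Text = Server.HtmlEncode(reader.ReadToEnd()); // ResultLabel is a placeholder control
        }
    }

    private void TimeoutCall(IAsyncResult ar)
    {
        ResultLabel.Text = "The data service did not respond in time.";
    }
}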
Local caching is an absolute must. The fewer times you have to go from California to New York, the better. You might want to look into Microsoft's Velocity (although that's still in CTP) or NCache, or another distributed cache, so that your 4 web servers don't each have to fetch and cache the same data from the web service - once one server gets it, it should be available to all.
Microsoft Project Code Named "Velocity"
NCache
Other things that can go wrong that you should engineer around:
The web service is down (obviously) and data falls out of cache, and you can't get it back. Try to make it so that the data is not actually dropped from cache until you're sure you have an update available. Then the only risk is if the service is down and your application pool is reset, so don't reset it as a first-line troubleshooting maneuver!
There are two different timeouts on web requests, a connect timeout and an overall timeout. Make sure both are set extremely low and that you handle both of them timing out. If the service's DNS goes down, this can look like quite a different failure; see the sketch after this list.
Watch perfmon for ASP.NET Queued Requests. This number will rise rapidly if the service goes down and you're not covering it properly.
Research and adjust ASP.NET performance registry settings so you have a highly optimized ASP.NET thread pool. I don't remember the specifics, but I seem to remember that there's a limit on IO Completion Ports and something else of that nature that are absurdly low for the powerful hardware I'm assuming you have on hand.
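A sketch of the timeout advice from the list above; the URL and values are placeholders to tune against your own latency budget:
using System;
using System.Net;

class TimeoutExample
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/service");
        request.Timeout = 2000;          // overall timeout in ms, including connect
        request.ReadWriteTimeout = 2000; // per-read/write timeout on the stream, in ms
        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine(response.StatusCode);
            }
        }
        catch (WebException ex)
        {
            // A hang surfaces as WebExceptionStatus.Timeout; dead DNS surfaces
            // as NameResolutionFailure, which is the "different failure" above.
            Console.WriteLine(ex.Status);
        }
    }
}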
The trendy answer is REST. Any GET request can be HTTP response-cached (with lots of options on how that is configured), and it will be cached by the internet itself (your ISP, essentially).
Your project has an architecture that reflects the direction that Microsoft and many others in the SOA world want to take us. That said, many people try to avoid this type of real-time risk introduced by the web service.
Your system will have a huge dependency on the web service working in an efficient manner. If it doesn't work, or is slow, people will just see that your page isn't working properly.
At the very least, I would get a web stress tool and performance-test your web service to at least the traffic levels you expect at peak, and likely beyond. When does it break (if ever)? When does it start to slow down? These are good metrics to know.
Other options to look at: perhaps you can get daily batches of data from the web service to a local database and hit the database for your web site. Then, if for some reason the web service is down or slow, you could use the most recently obtained data (if this is feasible for your data).
Overall, it should be doable, but you want to understand and measure the risks, and explore any potential options to minimize those risks.
It's fine. There are some scalability issues, primarily the number of calls you are allowed to make to the external web service per second. Some web services (Yahoo Shopping, for example) limit how often you can call their service and will lock out your account if you call too often. If you have a large farm and lots of traffic, you might have to throttle your requests.
Also, it's typical in these situations to use an interstitial page that forks off a worker thread to go and do the web service call and redirects to the results page when the call returns. (Think a travel site when you do search, you get an interstitial page while they call out to an external source for the flight data and then you get redirected to a results page when the call completes). This may be unnecessary if your web service call returns quickly.
I recommend you make certain to use WCF, and not the legacy ASMX web services technology, as the client. Use "Add Service Reference" instead of "Add Web Reference".
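With a generated WCF client, the usual close/abort discipline looks like this. A sketch: QuoteServiceClient and GetQuote are hypothetical names that "Add Service Reference" would generate from your service's metadata:
using System;

class WcfClientUsage
{
    static void Main()
    {
        var client = new QuoteServiceClient(); // hypothetical generated proxy
        try
        {
            Console.WriteLine(client.GetQuote("foo"));
            client.Close(); // clean shutdown of the channel
        }
        catch
        {
            client.Abort(); // a faulted channel must be aborted, not closed
            throw;
        }
    }
}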
One other issue you need to consider, depending on the type of application and/or data you're pulling down: security.
Specifically, I'm referring to authentication and authorization, both of your end users, and the web application itself. Where are these things handled? All in the web app? by the WS? Or maybe the front-end app is authenticating the users, and flowing the user's identity to the back end WS, allowing that to verify that the user is allowed? How do you verify this? Since many other responders here mention a local data cache on the front end app (an EXCELLENT idea, BTW), this gets even MORE complicated: do you cache data that is allowed to userA, but not for userB? if so, how do you verify that userB cannot access data from the cache? What if the authorization is checked by the WS, how do you cache the permissions then?
On the other hand, how are you verifying that only your web app is allowed to access the WS (and an attacker doesn't directly access your WS data over the Internet, for instance)? For that matter, how do you ensure that your web app contacts the CORRECT WS server, and not a bogus one? And of course I assume that all the connection to the WS is only over TLS/SSL... (but of course also programmatically verify the cert applies to the accessed server...)
In short, it's complicated, and there are many elements to consider here... but it is NOT insurmountable.
(As far as input validation goes, that's actually NOT an issue, since it should be done by BOTH the front-end app AND the back-end WS...)
Another aspect here, as mentioned by @Martin, is the need for an SLA with whatever provider/hosting service you have for the NY WS, covering not just performance but also availability. I.e., if the server is inaccessible, how quickly do they commit to getting it back up? What happens if it's down for extended periods of time? That's the only way to legitimately transfer the risk of your availability being controlled by an externality.

How to create an ASP.NET web farm?

I am looking for information on how to create an ASP.NET web farm - that is, how to make an ASP.NET application (initially designed to work on a single web server) work on 2, 3, 10, etc. servers?
We created a web application which works fine when, say, there are 500 users at the same time. But now we need to make it work for 10 000 users (working with the web app at the same time).
So we need to set up 20 web servers and arrange things so that 10,000 users can work with the web app by typing "www.MyWebApp.ru" in their web browsers, while their requests are actually handled by 20 web servers, without their knowing it.
1) Is there special standard software to create an ASP.NET web farm?
2) Or should we create a web farm ourselves, by transferring requests between different web servers manually (using ASP.NET / C#)?
I found very little information on ASP.NET web farms and scalability on the web: in most cases, articles on scalability explain how to optimize an ASP.NET app and make it run faster. But I found no example of a "Hello world"-like ASP.NET web app running on 2 web servers.
Would be great if someone could post a link to an article or, better, tell about one's own experience in ASP.NET "web farming" and addressing scalability issues.
Thank you,
Mikhail.
1) Is there special standard software to create an ASP.NET web farm?
No.
2) Or should we create a web farm ourselves, by transferring requests between different web servers manually (using ASP.NET / C#)?
No.
To build a web farm, you will need some form of load balancing. For up to 8 servers or so, you can use Network Load Balancing (NLB), which is built into Windows. For more than 8 servers, you should use a hardware load balancer.
However, load balancing is really just the tip of the iceberg. There are many other issues that you should address, including things like:
State management (cookies, ViewState, session state, etc)
Caching and cache invalidation
Database loading (managing round-trips, partitioning, disk subsystem, etc)
Application pool management (WSRM, pool resets, partitioning)
Deployment
Monitoring
In case it might be helpful, I cover many of these issues in my book: Ultra-Fast ASP.NET: Build Ultra-Fast and Ultra-Scalable web sites using ASP.NET and SQL Server.
I'd say you should configure an NLB (Network Load Balancing) cluster, which basically splits all requests between the cluster nodes (and, as an added benefit, detects if a node is down and stops sending it requests). There are features built into Windows for this, but they don't compare to a hardware device for performance or scalability. If you're using Windows 2008 it really is simple to set one up. If you do this, make sure you have a shared machine key, or you'll start getting exceptions about invalid ViewState (when one server renders the form and the post goes to another server that uses a different key to encode the data); a key-generation sketch follows below.
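A small sketch for generating that shared key material; paste the output into an identical <machineKey validationKey="..." decryptionKey="..."/> element in web.config on every server (the key sizes here are common choices, not requirements):
using System;
using System.Security.Cryptography;

class MachineKeyGenerator
{
    static void Main()
    {
        Console.WriteLine("validationKey: " + RandomHex(64)); // e.g. for HMACSHA256 validation
        Console.WriteLine("decryptionKey: " + RandomHex(32)); // e.g. for AES-256 decryption
    }

    static string RandomHex(int byteCount)
    {
        var bytes = new byte[byteCount];
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(bytes); // cryptographically strong random bytes
        }
        return BitConverter.ToString(bytes).Replace("-", "");
    }
}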
You can also use DNS round-robin, but with 20 servers presumably in one datacenter, I wouldn't see a point in going to such lengths. If you've got multiple datacenters, though, this is definitely worth considering (as NLB won't really work well between datacenters).
You'll also want to be sure that if a user swaps servers, they don't lose their session. The simplest way is to use a session state database (configurable in web.config, or server-wide in IIS's configs). If you don't use sessions, just turn them off in the pages directive of web.config and call it a day. You could also use a session state server, but I don't have any experience with it.
It may also be worth considering spending some time optimizing the code or adding caching directives to static content - it can be very cost-effective even if you only trim the need for a few of those servers.
Hope that helps.
If you keep your server stateless, it is easy with a good router that implements some round-robin protocol (sending each call to the single published server IP to a different web server).
If it is not stateless (e.g. if a login is required, or SSL), then you need to keep each session on the same server.
Here is some info about MS Application Request Routing - you will get everything there:
IIS Load balancing
I would not recommend #2. You will do much better off with a load balancer.
Pay attention to session state management. Unless you configure the load balancer to keep each user on the same web server, you will have to use the session state server or database.
Also, check your code's usage of Application and Cache variables. These will be different on every web server. If those values are static, you may not have a problem; but if they can change, you can end up with different values on each web server.
There used to be a problem with ViewState in 1.x, as explained here. I'm not sure if this problem still exists.
Then, there are some changes that you need to make to the Machine Key in web.config, as explained here.
