How to create an ASP.NET web farm?

I am looking for information on how to create an ASP.NET web farm - that is, how to make an ASP.NET application (initially designed to work on a single web server) work on 2, 3, 10, etc. servers?
We created a web application which works fine when, say, there are 500 users at the same time. But now we need to make it work for 10,000 users working with the web app at the same time.
So we need to set up 20 web servers and make it so that 10,000 users can work with the web app by typing "www.MyWebApp.ru" in their web browsers, while their requests are actually handled by 20 web servers, without their knowing it.
1) Is there special standard software to create an ASP.NET web farm?
2) Or should we create a web farm ourselves, by transferring requests between different web servers manually (using ASP.NET / C#)?
I found very little information on ASP.NET web farms and scalability on the web: in most cases, articles on scalability explain how to optimize an ASP.NET app and make it run faster, but I found no example of a "Hello world"-like ASP.NET web app running on 2 web servers.
It would be great if someone could post a link to an article or, better, tell about their own experience with ASP.NET "web farming" and addressing scalability issues.
Thank you,
Mikhail.

1) Is there special standard software to create an ASP.NET web farm?
No.
2) Or should we create a web farm ourselves, by transferring requests between different web servers manually (using ASP.NET / C#)?
No.
To build a web farm, you will need some form of load balancing. For up to 8 servers or so, you can use Network Load Balancing (NLB), which is built into Windows. For more than 8 servers, you should use a hardware load balancer.
However, load balancing is really just the tip of the iceberg. There are many other issues that you should address, including things like:
State management (cookies, ViewState, session state, etc)
Caching and cache invalidation
Database loading (managing round-trips, partitioning, disk subsystem, etc)
Application pool management (WSRM, pool resets, partitioning)
Deployment
Monitoring
In case it might be helpful, I cover many of these issues in my book: Ultra-Fast ASP.NET: Build Ultra-Fast and Ultra-Scalable web sites using ASP.NET and SQL Server.

I'd say you should configure an NLB cluster (Network Load Balancing), which basically splits all requests between cluster nodes (and, as an added benefit, detects when nodes are down and stops sending them requests). There are features built into Windows for this, but they don't compare to a hardware device for performance or scalability. If you're using Windows 2008 it really is simple to set one up. If you do this, make sure you have a shared machine key, or you'll start getting exceptions about invalid viewstate (when one server renders the form and the postback lands on another server that uses a different key to encode the data).
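For reference, the shared key lives in web.config and must be identical on every server in the farm. A minimal sketch, with placeholder values (generate your own keys; never copy ones from examples on the web):

    <system.web>
      <machineKey
        validationKey="...64-byte-hex-value-PLACEHOLDER..."
        decryptionKey="...32-byte-hex-value-PLACEHOLDER..."
        validation="SHA1"
        decryption="AES" />
    </system.web>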
You can also use DNS round-robin, but with 20 servers presumably in one datacenter, I wouldn't see a point in going to such crazy lengths. If you've got multiple datacenters, though, this is definitely worth considering (as NLB won't really work well between datacenters).
You'll also want to be sure that if a user swaps servers they don't lose their session. The simplest way would be to use a session state database (configurable in the web.config, or you can do it server-wide in IIS's configs). If you don't use sessions, though, just turn them off in the pages directive of the web.config and call it a day. You could also use a session state server, but I don't have any experience with this.
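For reference, the database-backed option is a few lines in web.config. A sketch, assuming the ASPState database has already been created with the aspnet_regsql.exe tool (the connection string is a placeholder):

    <system.web>
      <sessionState mode="SQLServer"
                    sqlConnectionString="Data Source=MySessionDbServer;Integrated Security=SSPI;"
                    timeout="20" />
    </system.web>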
It may also be worth considering spending some time optimizing the code or adding caching directives to static content - it can be very cost-effective even if you only trim the need for a few of those servers.
Hope that helps.

If you keep your server stateless, it is easy with a good router that implements a round-robin protocol (sending each call to the single published server IP on to a different web server).
If it is not stateless (for example, if a login is required, or SSL), then you need to keep each session on the same server.
Here is some info about MS Application Request Routing - you will get everything there:
IIS Load balancing

I would not recommend #2. You will do much better off with a load balancer.
Pay attention to session state management. Unless you configure the load balancer to keep each user on the same web server, you will have to use the session state server or database.
Also, check your code's usage of Application and Cache variables. These will be different on every web server. If those values are static, you may not have a problem. But if they can change, you can end up with different values on each web server.
There used to be a problem with ViewState in 1.x, as explained here. I'm not sure if this problem still exists.
Then, there are some changes that you need to make to the Machine Key in web.config, as explained here.

Related

Avoiding invalid viewstate when deploying on a load balanced website without downtime

Here is the scenario:
We have 3 web servers A, B, C.
We want to release a new version of the application without taking the application down
(e.g. not using the "Down for maintenance page").
Server A goes live with latest code.
Server B gets taken off-line. Users on Server B get routed to A and C.
Page1.aspx was updated with a new control. Anyone who came from Server B to Server A while on this page will get a ViewState error when they perform an action on the page. This is what we want to prevent.
How do some of you resolve this issue?
Here are some thoughts we had (whether it's possible or not using our load balancer, I don't know... I am not familiar with load balancer configuration [it's an F5]):
The more naive approach:
Take down servers A and B and update. C retains the old code. All traffic will be directed to C, and that's ok since it's the old code. When A and B go live with the update, if possible tell the load balancer to only keep people with active sessions on C and all new sessions get initiated on A and B. The problem with this approach is that in theory sessions can stick around for a long time if the user keeps using the application.
The less naive approach:
Similar to the naive approach, except (if possible) we tell the load balancer about "safe" pages, which are pages that were not changed. When the user eventually ends up on a "safe" page, he or she gets routed to server A or B. In theory the user may never land on one of these pages, but this approach is a little less risky (but requires more work).
I assume that your load balancer is directing individual users back to the same server in the web farm during normal operations, which is why you do not normally experience this issue, but only when you start redirecting users between servers.
If that assumption is correct, then it is likely the issue is an inconsistent machine key across the server farm.
ViewState is hashed against the machine key of the server to prevent tampering by the user on the client side. The machine key is generated automatically by IIS, and will change every time the server restarts or is reset, as well as being unique to each server.
In order to ensure that you don't hit viewstate validation issues when users move between servers there are two possible courses of action.
Disable the anti-tampering protection on the individual page or globally in the pages element of the web.config file using the enableViewStateMac attribute with a false value. I mention this purely for the sake of completeness - you should never do this on a production website.
Manually generate a machine key and share that same value across each application (you could use the same key for all your applications, but it is sensible to use one key per application to maximise security), on each of your servers. To do this you need to generate keys (do not use any you may see in demos on the internet, as this defeats the purpose of a unique machine key); this can be done programmatically or in IIS Manager (see http://www.codeproject.com/Articles/221889/How-to-Generate-Machine-Key-in-IIS7). Use the same machine key when deploying the website to all of your servers.
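If you go the programmatic route, here is a minimal sketch (my own illustration, not the linked article's code) that emits random hex values of the conventional lengths for the machineKey element:

    // Illustrative console utility: generates random hex strings suitable for
    // the validationKey (64 bytes) and decryptionKey (32 bytes) attributes.
    using System;
    using System.Security.Cryptography;
    using System.Text;

    class MachineKeyGenerator
    {
        static string RandomHex(int byteCount)
        {
            byte[] bytes = new byte[byteCount];
            new RNGCryptoServiceProvider().GetBytes(bytes); // cryptographically strong randomness
            StringBuilder sb = new StringBuilder(byteCount * 2);
            foreach (byte b in bytes)
                sb.AppendFormat("{0:X2}", b);
            return sb.ToString();
        }

        static void Main()
        {
            Console.WriteLine("validationKey=\"{0}\"", RandomHex(64));
            Console.WriteLine("decryptionKey=\"{0}\"", RandomHex(32));
        }
    }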
I can't answer on the best practice for upgrading applications that require 100% uptime.

How to handle ASP.NET Application variables in a load-balanced web farm

I am moving a site from a single server to a server farm consisting of three web servers behind a load balancer. It seems easy enough to handle session management - just make sessions "sticky" at the Load Balancer (we evaluated SQL-based session management but have decided to continue using InProc session management for efficiency).
However, we also use a sizable configuration object that we keep in the Application space (e.g. Application[ObjName]). Since the config object is loaded from memory, we have no problem until someone makes a change to the configuration. At that point, the application on the hosting server will have the change and the database will have the change. However, the other two servers won't have the change. We've debated having a "once a minute" polling rule (e.g. on new sessions), keeping information in the session instead (not very efficient), etc. All have serious drawbacks. I am wondering what other people do. Is it possible to keep the Application space on SQL Server but the Session space inproc? Any help or insight about how to handle this would be appreciated!
Application[] is always going to be local-memory based, so no matter what, you're going to have some code changes to make. So put the data somewhere else, like a distributed cache: AppFabric, NCache, memcached.net, etc. When someone makes a change to the configuration, update the cache; when you need to read the settings, read from the cache. Propagation/sync is taken care of by the cache itself.
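A minimal sketch of that idea; the ISettingsCache interface and the names here are hypothetical, with the implementation backed by whichever distributed cache you pick:

    // Hypothetical abstraction: every server reads settings through a shared
    // cache instead of the per-process Application[] collection.
    public interface ISettingsCache
    {
        string Get(string key);
        void Set(string key, string value);
    }

    // Call sites change from:
    //     string siteName = (string)Application["SiteName"];
    // to:
    //     string siteName = settingsCache.Get("SiteName");
    // and the admin page that saves a configuration change calls:
    //     settingsCache.Set("SiteName", newValue);
    // Propagation to the other nodes is then the cache's problem, not yours.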
We decided to use NCache as we have four web servers in our web farm. This third-party caching tool works perfectly with a load balancer and is easy to configure (only the Express version of it is free; for the Professional and Enterprise versions, only the developer machine is free of charge). It is also really fast and stable. You must set up NCache on each server and configure the load balancer to work with all of them. Hope it helps.

Using a remote, external web service instead of a database

I am building an ASP.NET web application that will be deployed to a 4-node web farm.
My web application's farm is located in California.
Instead of a database for back-end data, I plan to use a set of web services served from a data center in New York.
I have a page /show-web-service-result.aspx that works like this:
1) User requests page /show-web-service-result.aspx?s=foo
2) Page's codebehind queries a web service that is hosted by the third party in New York.
3) When web service returns, the returned data is formatted and displayed to user in page response.
Does this architecture have potential scalability problems? Suppose I am getting hundreds of unique hits per second, e.g.
/show-web-service-result.aspx?s=foo1
/show-web-service-result.aspx?s=foo2
/show-web-service-result.aspx?s=foo3
etc...
Is it typical for web servers in a farm to be using web services for data instead of database? Any personal experience?
What change should I make to the architecture to improve scalability?
You most definitely have a scalability problem: the third-party web service. Unless you have a service-level agreement with that service (agreeing on the number of requests that you can submit per second), chances are real that you will overload that service with your anticipated load. That you have four nodes yourself doesn't help you then.
So you should a) come up with an agreement with the third party, and b) test what the actual load is that they can take.
In addition, you need to make sure that your framework can use parallel connections for accessing the remote service. Suppose you have a round-trip time of 20 ms from California to New York (which would be fairly good); you cannot make more than 50 requests per second over a single TCP connection. Likewise, starting a new TCP connection for every request will also kill performance, so you want pooling on these parallel connections.
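In .NET specifically, parallel outbound connections to a given host are capped by default (historically 2 for client code; ASP.NET's machine-level autoConfig raises it), so you will likely need to lift the limit yourself. A sketch, with an illustrative value:

    using System;
    using System.Net;

    public class Global : System.Web.HttpApplication
    {
        protected void Application_Start(object sender, EventArgs e)
        {
            // Allow more simultaneous connections to the remote service's host.
            // 48 is illustrative; tune it against your measured throughput.
            ServicePointManager.DefaultConnectionLimit = 48;
        }
    }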
I don't see a problem with this approach, we use it quite a bit where I work. However, here are some things to consider:
Is your page rendering going to be blocked while waiting for the web service to respond?
What if the response never comes, i.e. the service is down?
For the first problem I would look into using AJAX to update the page after you get a response back from the web service. You'll also want to consider how to handle the no response or timeout condition.
Finally, you should really think about how you could cache the web service data locally. For example if you are calling a stock quoting service then unless you have a real-time feed, there is no reason to call the web service with every request you get. Store the data locally for a period of time and return that until it becomes stale.
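A sketch of that stock-quote example against the local ASP.NET cache; GetQuoteFromService stands in for the real remote call, and the 30-second lifetime is an arbitrary staleness budget:

    using System;
    using System.Web;
    using System.Web.Caching;

    public static class QuoteCache
    {
        public static string GetQuote(string symbol)
        {
            string cacheKey = "quote:" + symbol;
            string quote = (string)HttpRuntime.Cache[cacheKey];
            if (quote == null)
            {
                quote = GetQuoteFromService(symbol); // the expensive cross-country call
                HttpRuntime.Cache.Insert(cacheKey, quote, null,
                    DateTime.UtcNow.AddSeconds(30),  // serve cached data for up to 30s
                    Cache.NoSlidingExpiration);
            }
            return quote;
        }

        static string GetQuoteFromService(string symbol)
        {
            // Placeholder for the actual web service proxy call.
            throw new NotImplementedException();
        }
    }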
You may have scalability problems but most of these can be carefully engineered around.
I recommend you use ASP.NET's asynchronous tasks so that the web service call is queued up, the thread is released while the request waits for the web service to respond, and then another thread picks up when the web service is done to finish off the request.
MSDN Magazine - Wicked Code - Asynchronous Pages in ASP.NET 2.0
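A sketch of that pattern; Async="true" must be set in the page's @ Page directive, and _proxy, BeginGetData/EndGetData and ResultLabel are assumed names standing in for your generated service proxy and markup:

    // Code-behind sketch for an ASP.NET 2.0 async page.
    protected void Page_Load(object sender, EventArgs e)
    {
        RegisterAsyncTask(new PageAsyncTask(BeginCall, EndCall, TimeoutCall, null));
    }

    IAsyncResult BeginCall(object sender, EventArgs e, AsyncCallback cb, object state)
    {
        // The worker thread returns to the pool while this call is in flight.
        return _proxy.BeginGetData(Request.QueryString["s"], cb, state);
    }

    void EndCall(IAsyncResult ar)
    {
        ResultLabel.Text = _proxy.EndGetData(ar); // a thread is re-acquired only here
    }

    void TimeoutCall(IAsyncResult ar)
    {
        ResultLabel.Text = "The data service timed out.";
    }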
Local caching is an absolute must. The fewer times you have to go from California to New York, the better. You might want to look into Microsoft's Velocity (although that's still in CTP) or NCache, or another distributed cache, so that your four web servers don't all have to fetch and cache the same data from the web service - once one server gets it, it should be available to all.
Microsoft Project Code Named "Velocity"
NCache
Other things that can go wrong that you should engineer around:
The web service is down (obviously) and data falls out of cache, and you can't get it back. Try to make it so that the data is not actually dropped from cache until you're sure you have an update available. Then the only risk is if the service is down and your application pool is reset, so don't reset it as a first-line troubleshooting maneuver!
There are two different timeouts on web requests, a connect and an overall timeout. Make sure both are set extremely low and that you handle both of them timing out (there's a sketch of setting both after this list). If the service's DNS goes down, this can look like quite a different failure.
Watch perfmon for ASP.NET Queued Requests. This number will rise rapidly if the service goes down and you're not covering it properly.
Research and adjust ASP.NET performance registry settings so you have a highly optimized ASP.NET thread pool. I don't remember the specifics, but I seem to remember that there's a limit on IO Completion Ports and something else of that nature that are absurdly low for the powerful hardware I'm assuming you have on hand.
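On the timeout point above: with HttpWebRequest the two knobs are Timeout and ReadWriteTimeout. A sketch with illustrative values and a placeholder URL:

    using System;
    using System.IO;
    using System.Net;

    static string FetchWithTimeouts(string url)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.Timeout = 2000;          // ms: connect + wait for response headers
        request.ReadWriteTimeout = 2000; // ms: reading the response body
        try
        {
            using (WebResponse response = request.GetResponse())
            using (StreamReader reader = new StreamReader(response.GetResponseStream()))
            {
                return reader.ReadToEnd();
            }
        }
        catch (WebException)
        {
            return null; // timeouts and DNS failures both land here; fall back to cache
        }
    }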
The trendy answer is REST. Any GET request can be HTTP response cached (with lots of options on how that is configured), and it will be cached by the internet itself (your ISP, essentially).
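In WebForms terms, the cheapest version of that is the page-level OutputCache directive. A sketch matching the s query parameter from the question (the duration is illustrative):

    <%@ OutputCache Duration="60" VaryByParam="s" Location="Any" %>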
Your project has an architecture that reflects the direction that Microsoft and many others in the SOA world want to take us. That said, many people try to avoid this type of real-time risk introduced by the web service.
Your system will have a huge dependency on the web service working in an efficient manner. If it doesn't work, or is slow, people will just see that your page isn't working properly.
At the very least, I would get a web stress tool and performance test your web service to at least the traffic levels you expect to get at peaks, and likely beyond this. When does it break (if ever?), when does it start to slow down? These are good metrics to know.
Other options to look at: perhaps you can get daily batches of data from the web service to a local database and hit the database for your web site. Then, if for some reason the web service is down or slow, you could use the most recently obtained data (if this is feasible for your data).
Overall, it should be doable, but you want to understand and measure the risks, and explore any potential options to minimize those risks.
It's fine. There are some scalability issues, primarily with the number of calls you are allowed to make to the external web service per second. Some web services (Yahoo Shopping, for example) limit how often you can call their service and will lock out your account if you call too often. If you have a large farm and lots of traffic, you might have to throttle your requests.
Also, it's typical in these situations to use an interstitial page that forks off a worker thread to go and do the web service call and redirects to the results page when the call returns. (Think a travel site when you do search, you get an interstitial page while they call out to an external source for the flight data and then you get redirected to a results page when the call completes). This may be unnecessary if your web service call returns quickly.
I recommend you be certain to use WCF, and not the legacy ASMX web services technology as the client. Use "Add Service Reference" instead of "Add Web Reference".
One other issue you need to consider, depending on the type of application and/or data you're pulling down: security.
Specifically, I'm referring to authentication and authorization, both of your end users, and the web application itself. Where are these things handled? All in the web app? By the WS? Or maybe the front-end app is authenticating the users, and flowing the user's identity to the back-end WS, allowing that to verify that the user is allowed? How do you verify this? Since many other responders here mention a local data cache on the front-end app (an EXCELLENT idea, BTW), this gets even MORE complicated: do you cache data that is allowed for userA, but not for userB? If so, how do you verify that userB cannot access data from the cache? What if the authorization is checked by the WS? How do you cache the permissions then?
On the other hand, how are you verifying that only your web app is allowed to access the WS (and an attacker doesn't directly access your WS data over the Internet, for instance)? For that matter, how do you ensure that your web app contacts the CORRECT WS server, and not a bogus one? And of course I assume that all connections to the WS are only over TLS/SSL... (but of course also programmatically verify the cert applies to the accessed server...)
In short, it's complicated, and there are many elements to consider here... but it is NOT insurmountable.
(as far as input validation goes, that's actually NOT an issue, since this should be done by BOTH the front end app AND the back end WS...)
Another aspect here, as mentioned by @Martin, is the need for an SLA with whatever provider/hosting service you have for the NY WS, not just for performance, but also to cover availability. I.e., what happens if the server is inaccessible, how quickly they commit to getting it back up, what happens if it's down for extended periods of time, etc. That's the only way to legitimately transfer the risk of your availability being controlled by an externality.

Suggestions and guidelines for load balancing an ASP.NET application

I have an ASP.NET 2.0 web application which is an online survey system. At times it has a huge number of users and the application slows down.
I wanted to do load balancing for the web application, which runs on a single server now.
Can anyone advise me on the following?
In what scenarios should I consider load balancing for my application?
What types of applications need load balancing?
What are the pros and cons of load balancing?
What are the guidelines for developing applications that target load balancing?
At most, how many concurrent users can access a web application without load balancing and without affecting performance much?
In my case the application is already developed. What areas should I change to prepare it for load balancing?
Thanks in advance.
First you need to ensure that you know where the bottleneck to performance is.
If you can focus first on getting the total round trip time for each user's page load down then you will be able to handle more users.
In a case where you are bottlenecked on database calls, you could set up more servers for load balancing your web application and get very little benefit.
Is there a bandwidth issue with your webserver? Are requests for images, css and javascript files slowing down just as much as web application page_load requests?
Ensure that you aren't storing too much data in session state. If you are storing lists or other large objects in memory, you have to remember that you will be multiplying that memory usage by the number of users, causing things to get out of control pretty quickly if you have 10,000 users with active sessions. In some of these cases it may be preferable to move state information out to cookies that are stored by the user.
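A sketch of pushing a small, non-sensitive value into a cookie instead of session state:

    // Writing: a small preference travels with the user instead of living in
    // server memory. Keep cookies small; they ride along on every request.
    HttpCookie cookie = new HttpCookie("pageSize", "25");
    cookie.HttpOnly = true;
    cookie.Expires = DateTime.Now.AddDays(7);
    Response.Cookies.Add(cookie);

    // Reading it back on a later request:
    HttpCookie stored = Request.Cookies["pageSize"];
    string pageSize = (stored != null) ? stored.Value : "25";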
From my experience, the software load balancing options are limited, inconsistent and inflexible.
Of course, when developing the application, you need to ensure that your application can scale out for a web farm scenario - i.e., things like ensuring you are using a distributed cache provider, a state server, etc.
The hardware based solutions will provide multiple methods of distributing loads and are very consistent. Consider options like using NLB/F5 Big-IP load balancers.
Have a look at this post from Scott as well - Old, but useful. http://www.hanselman.com/blog/LoadBalancingAndASPNET.aspx
Thanks, Anoop
In what scenarios should I consider load balancing for my application?
In any scenario when your current server can no longer handle the load by itself. There are two kinds of scaling, vertical (buy a better server) or horizontal (add more servers). Load balancing is a form of horizontal scaling, and typically gives you more bang for your buck, but is also much harder to set up.
What types of applications need load balancing?
Pretty much the same answer here as for the previous question.
What are the pros and cons of load balancing?
Pro is that it lets you scale. Cons are that it introduces unique issues.
What are the guidelines for developing applications that target load balancing?
The big one is session. Session state is stored in-proc by default, which means it only exists on the one server. If the user can be bounced across different servers, they will lose their session. I would recommend reading up on it here (the way you probably want to go is either don't use session, or use a state server).
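If you go the state server route, the switch is a few lines in web.config. A sketch (the host name is a placeholder; 42424 is the state service's default port, and the ASP.NET State Service must be running on that machine):

    <system.web>
      <sessionState mode="StateServer"
                    stateConnectionString="tcpip=stateserver01:42424"
                    timeout="20" />
    </system.web>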
At most, how many concurrent users can access a web application without load balancing and without affecting performance much?
Completely depends on your server hardware and your application requirements. I believe there are 5 asp worker processes per core, so that is one consideration right off the bat. There is also RAM usage to consider.

Allowing Session in a Web Farm? Is StateServer Good Enough?

First of all to give you a bit of background on the current environment. We have a number of ASP.NET applications, all of which use session for certain aspects. We are "Load Balanced" over multiple servers due to traffic levels, however, our load balancing is set to use "Sticky Sessions" as currently all web applications are set to use "InProc" for session state.
We are looking at being able to remove the "Sticky Sessions" configuration on our load balancer, as due to our traffic loads servers can and do get overloaded. We want to go with a more balanced approach, but must be able to use session.
I know that SqlServer for session state will work, but for reasons beyond our control, we cannot use SqlServer to store our state. In researching it seems that StateServer is our best bet. We have an additional server, with loads of memory sitting around. This server could be our StateServer for the entire Web Cluster. We just want to know the following things.
1.) Besides any potential serialization issues with the switch from InProc to StateServer, are there any major known issues with losing session objects or generating errors with the above listed environment?
2.) Aside from the single point of failure and slightly slower performance, are there any other gotchas that we need to be aware of with using StateServer?
3.) Are there any metrics that show the performance differences between the three types of state storage?
Here is a decent FAQ on asp.net state: http://www.eggheadcafe.com/articles/20021016.asp
From that Article, here is some information on StateServer:
In a web farm, make sure you have the same MachineKey in all your web servers. See KB 313091 on how to do it.
Also, make sure your objects are serializable. See KB 312112 for details.
For session state to be maintained across different web servers in the web farm, the Application Path of the website (For example \LM\W3SVC\2) in the IIS Metabase should be identical in all the web servers in the web farm. See KB 325056 for details
I have only used SQL and in-proc, but these three points that apply when using SQL Server apply here as well:
Avoid storing too much information in the session, as it adds cost both in serialization and in data transmitted over the network.
Make sure you don't have anything that depends on Session_OnEnd. It is just not available for out-of-process sessions.
Turn off session state on pages that don't use it. This doesn't make a difference for in-process session state, but for out-of-process it will save you a lot.
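Per page that looks like this; ReadOnly is the middle ground for pages that read session state but never write it (it also avoids the per-request session lock):

    <%@ Page Language="C#" EnableSessionState="False" %>

    <%-- or, for pages that read but never write session state: --%>
    <%@ Page Language="C#" EnableSessionState="ReadOnly" %>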
Make sure your servers' ETag IDs are synchronized across the web farm, otherwise caching at client browsers will be upset.
Have you reviewed your code in detail to make sure everything can be serialized out of process and across a LAN efficiently?
Are you solving the main performance problem within your system? I ask because the database is the typical source of contention.
My main motivation for moving away from sticky sessions was operational flexibility, i.e. being able to cycle down a problematic server or deploy a software upgrade. So having implemented a central session state service, make sure you take full advantage of it from an operational standpoint.
In my experience we've found that the native state server, or even using SQL Server for sessions, is a very scary scenario as both have issues (mainly performance). By the way, we are also using sticky sessions.
I think you can explore other products for this to achieve the absolute best. A free option would be Velocity, but it is still not released.
Another comprehensive but proven product is NCache (very expensive, actually). It will even help with your serializations at less cost; if you use their APIs, you will get even better results.
Take a look and see which looks best for you.
About SQL Server: your server will die very soon if you have a large enough number of hits coming in (I believe you already have traffic that led you to a web farm, or you do it just for the sake of redundancy).
Bottom line: we are evaluating Velocity because NCache is really expensive; however, the advantages are huge.
We are using StateServer for a very small web farm with only two nodes for a few hundred users.
I'm not responsible for its operation but I remember only two issues in two years where the service had to be restarted because it crashed.
I would like to add one more point to the accepted answer:
Make sure the version of the framework DLLs is the same on every server.
In my case the System.Web dll versions were different because a few Windows updates had been skipped on one of the servers of the farm.
