ASP.NET in a Web Farm

What issues do I need to be aware of when I am deploying an ASP.NET application as a web farm?

All session state information needs to be replicated across servers. The simplest way is to use the SQL Server session state provider, as noted.
Any disk access, such as dynamic files stored by users, needs to be in a location available to all servers, for example some form of network-attached storage. Static content such as script files, images, and HTML can simply be replicated onto each server.
Storing information in the Application object, or loading information on application startup, needs to be reviewed: those events fire independently on each machine, so they will run again each time a user's request lands on a different server in the farm.
Keeping the machine key consistent across servers is a very big one, as others have suggested. You may also have problems if you are using SSL against an IP address rather than a domain name.
You'll also have to consider which load-balancing strategy you're going to use, as this could change your approach.

Sessions are a big one: make sure you use SQL Server for managing sessions, and that all servers point to the same SQL Server instance.
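A minimal web.config sketch of that setup (the connection string is a placeholder; the ASPState database is created by running aspnet_regsql.exe with the -ssadd option against that instance):

```xml
<!-- web.config, identical on every server in the farm -->
<system.web>
  <!-- Out-of-process session in SQL Server: any server can pick up any request -->
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=SQLBOX01;Integrated Security=SSPI;"
                timeout="20" />
</system.web>
```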

One of the big ones I've run across is issues with different machineKeys spread across the different servers. ASP.NET uses the machineKey for various encryption operations, such as ViewState and FormsAuthentication tickets. If you have different machineKeys, you can end up with servers not understanding postbacks from other servers. Take a look here if you want more information: http://msdn.microsoft.com/en-us/library/ms998288.aspx
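To make that concrete, here's a sketch of the fix: generate one pair of keys and copy the same element into every server's web.config. The values below are placeholders, not real keys:

```xml
<system.web>
  <!-- Must be identical on every server so ViewState and FormsAuth tickets
       issued by one machine validate on the others -->
  <machineKey validationKey="GENERATE-A-128-HEX-CHAR-KEY-AND-COPY-IT-TO-EVERY-SERVER"
              decryptionKey="GENERATE-A-48-HEX-CHAR-KEY-AND-COPY-IT-TO-EVERY-SERVER"
              validation="SHA1" decryption="AES" />
</system.web>
```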

1. Don't use sessions; use profiles instead. You can configure a SQL cluster to serve them. Sessions will query your session database far too often, while profiles just load themselves once, and that's it.
2. Use a distributed caching store like memcached for caching data, and the ASP.NET cache for things you need a lot (see the sketch after this list).
3. Use a SAN or an EMC array to serve your static content.
4. Use S3 or something similar as a fallback for point 3.
5. Have a decent load balancer, so you can update server by server without ever needing to shut down the site.
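A rough sketch of point 2's two-tier lookup, assuming the Enyim.Caching memcached client; the key name and the loader method are invented for illustration:

```csharp
using System.Web;
using Enyim.Caching;             // a commonly used .NET memcached client
using Enyim.Caching.Memcached;

// Per-server ASP.NET cache in front, memcached as the shared farm-wide tier.
public class CachedCatalog
{
    // Reads its server list from the enyim.com/memcached section of web.config.
    static readonly MemcachedClient Memcached = new MemcachedClient();

    public object GetProducts()
    {
        // 1) Per-server ASP.NET cache: fastest, but private to this machine.
        var local = HttpRuntime.Cache["products"];
        if (local != null)
            return local;

        // 2) Shared memcached tier: one copy for the whole farm.
        //    Stored values must be serializable.
        var shared = Memcached.Get("products");
        if (shared == null)
        {
            shared = LoadProductsFromDb();                      // hypothetical loader
            Memcached.Store(StoreMode.Set, "products", shared);
        }

        HttpRuntime.Cache.Insert("products", shared);           // warm the local tier
        return shared;
    }

    object LoadProductsFromDb()
    {
        return new string[0];                                   // placeholder
    }
}
```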

HOW TO: Set Up Multi-Server ASP.NET Web Applications and Web Services
Log aggregation is easily overlooked: before processing HTTP logs, you may need to combine them to create a single log that includes the requests sent across all the servers.
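As a rough illustration, merging per-server W3C logs into one chronologically ordered file can be as simple as the following; the paths are placeholders, and it assumes the default layout where date and time are the first two fields of each entry:

```csharp
using System.IO;
using System.Linq;

class MergeLogs
{
    static void Main()
    {
        // One IIS log file per farm node; adjust paths for your environment.
        var logFiles = new[] { @"\\web1\logs\ex090101.log", @"\\web2\logs\ex090101.log" };

        var merged = logFiles
            .SelectMany(File.ReadLines)
            .Where(line => !line.StartsWith("#") && line.Length >= 19) // skip W3C header directives
            .OrderBy(line => line.Substring(0, 19));                   // "yyyy-MM-dd HH:mm:ss" prefix

        File.WriteAllLines(@"C:\logs\combined.log", merged);
    }
}
```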

Related

How to implement SignalR scale-out without using existing backplane options

I am using SignalR hosted on multiple servers behind a load balancer. I am storing the connection id and the user id in a custom database table in SQL Server. Each time, I need to send notifications to the selected users. It works fine in a single-server environment. How do I scale the SignalR implementation with a custom database table, without using the existing backplane options?
I am not sure what your current implementation is, because your explanation seems a bit mixed. If you have multiple servers behind a load balancer, it means you have already applied some techniques (I think so!). But you said it works fine in a single-server environment and not on multiple servers, so let's review what is mandatory for multiple servers (scale-out):
Communication between instances: any message on one instance must be available on all the other instances. The classic implementation is some type of queue; SignalR supports Redis, and you can use SQL Server, though the limitations of any SQL-based solution are clear. Azure offers Redis Cache as a PaaS.
In-memory storage: you normally use this on a single server, but in a farm shared memory is mandatory. Again, Redis offers a shared-memory solution if you have the server available; there is no realistic way of implementing this without a solution like Redis.
Again, a lower-performance solution would be a MemStorage implementation in SQL.
Authentication: the out-of-the-box security implementation uses a cookie to store the encrypted ticket, but once you have multiple servers, each server has its own unique key. To solve the problem you have to implement your own DataProtector, if this is the method you are using.
Full examples are well beyond the scope of this explanation; even skeleton code without the actual methods implemented would run to several pages. I suggest you look at the three items above, which are mandatory for scaling out your application.
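That said, a very rough sketch of the first item using Redis pub/sub via StackExchange.Redis, instead of the built-in backplane: each server subscribes to a channel, and a message published by any server is delivered on every server, which then forwards it to its locally connected users. The hub name, channel name, message format, and connection-lookup method are all assumptions based on your description of the custom table:

```csharp
using System;
using Microsoft.AspNet.SignalR;
using StackExchange.Redis;

public static class NotificationBus
{
    static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("your-redis-host:6379");   // placeholder host

    public static void Start()
    {
        Redis.GetSubscriber().Subscribe("notifications", (channel, message) =>
        {
            // "userId|payload" message format is an assumption for this sketch.
            var parts = ((string)message).Split(new[] { '|' }, 2);
            var hub = GlobalHost.ConnectionManager.GetHubContext<NotificationHub>();

            // Look up this user's connection ids in YOUR custom SQL table, but only
            // the ones connected to THIS server, then push to them.
            foreach (var connectionId in GetLocalConnectionIds(parts[0]))
                hub.Clients.Client(connectionId).notify(parts[1]);
        });
    }

    public static void SendToUser(string userId, string payload)
    {
        // Publish once; every server's subscription fires.
        Redis.GetSubscriber().Publish("notifications", userId + "|" + payload);
    }

    static string[] GetLocalConnectionIds(string userId)
    {
        // Hypothetical: query your custom table for connections on this machine.
        throw new NotImplementedException();
    }
}

public class NotificationHub : Hub { }
```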

how to sync data between company's internal database and externally hosted application's database

My organisation (a small non-profit) currently has an internal production .NET system with SQL Server database. The customers (all local to our area) submit requests manually that our office staff then input into the system.
We are now gearing up towards online public access, so that customers will be able to see the status of their existing requests online, and in future also be able to create new requests online. A new ASP.NET application will be developed for this.
We are trying to decide whether to host this application on-site on our servers (with direct access to the existing database) or to use an external hosting service provider.
Hosting externally would mean keeping a copy of the Requests database on the hosting provider's server. What would be the recommended way to keep the requests data synced in real time between the hosted database and our existing production database?
Trying to sync back and forth between two in-use databases will be a constant headache. The question that I would have to ask you is if you have the means to host the application on-site, why wouldn't you go that route?
If you have a good reason not to host on-site but you do have some web infrastructure available to you, you may want to consider creating a web service that provides access to your database via a set of well-defined methods. Or, on the flip side, you could make the remotely hosted database behind your website your production database, and use a web service to access it from your office system.
In either case, providing access to a single database will be much easier than trying to keep two different ones constantly and flawlessly in sync.
If a web service is not practical (or you have concerns about availability), you may want to consider a queuing system for synchronization: any change to the database (local or hosted) is also added to a message queue. Each side monitors the queue for changes that need to be made and then applies them. This accounts for one of the databases being unavailable at any given time.
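A very rough sketch of that queue idea, with the queue kept as a plain table; the table, column, and connection names are all invented for illustration:

```csharp
using System.Data.SqlClient;
using System.Threading;

// Worker that drains a SyncQueue table on one database and applies each
// change to the other side, oldest first, marking rows as it goes.
class SyncWorker
{
    const string QueueCs  = "...";   // database holding the SyncQueue table
    const string TargetCs = "...";   // database the changes get applied to

    static void Main()
    {
        while (true)
        {
            using (var queue = new SqlConnection(QueueCs))
            using (var target = new SqlConnection(TargetCs))
            {
                queue.Open();
                target.Open();

                long id = 0;
                string requestId = null, status = null;

                // Pull the oldest unprocessed change (Id assumed bigint identity).
                using (var read = new SqlCommand(
                    "SELECT TOP 1 Id, RequestId, Status FROM SyncQueue " +
                    "WHERE Processed = 0 ORDER BY Id", queue))
                using (var row = read.ExecuteReader())
                {
                    if (row.Read())
                    {
                        id = row.GetInt64(0);
                        requestId = row.GetString(1);
                        status = row.GetString(2);
                    }
                }

                if (requestId != null)
                {
                    // Apply the change, then mark it processed.
                    var apply = new SqlCommand(
                        "UPDATE Requests SET Status = @s WHERE RequestId = @r", target);
                    apply.Parameters.AddWithValue("@s", status);
                    apply.Parameters.AddWithValue("@r", requestId);
                    apply.ExecuteNonQuery();

                    var done = new SqlCommand(
                        "UPDATE SyncQueue SET Processed = 1 WHERE Id = @id", queue);
                    done.Parameters.AddWithValue("@id", id);
                    done.ExecuteNonQuery();
                }
            }

            Thread.Sleep(5000);   // poll every few seconds
        }
    }
}
```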
That being said, I agree with @LeviBotelho: syncing two databases is a nightmare and should probably be avoided if you can. If you must, you can also look into SQL Server replication.
Ultimately the data is the same: customer-submitted data. Currently it is entered by customers through you; eventually it will be entered directly by them. I see no need for two different databases holding the same data. The replication errors alone, when they pop up (and they will), will be a headache for your team for nothing.

system.web.caching - At what level is the cache maintained?

I am looking at implementing caching in a .NET web app. Basically, I want to cache some data that is pulled in on every page but never changes in the database.
Is my Cache Element unique to each:
Session?
App Pool?
Server?
If it is per session, this could get out of hand if thousands of people are hitting my site and each cache is ~5 KB.
If it is per app pool, and I had several instances of one site running (say, each with a different DB backend, but all on one server), then I'd need an individual app pool for each instance.
Any help would be appreciated... I think this data is probably out there I just don't have the right google combination to pull it up.
By default it is stored in memory on the server. This means that it will be shared among all users of the web site. It also means that if you are running your site in a web farm, you will have to use an out-of-process cache storage to ensure that all nodes of the farm share the same cache. Here's an article on MSDN which discusses this.
"One instance of this class is created per application domain, and it remains valid as long as the application domain remains active" - MSDN

Application variable across load balanced servers (ASP.Net)

We have a website that runs on two load-balanced servers. We use ASP.NET Application variables to hold application state ("online/offline") and some important messages shared across the application.
So when I update an Application variable, the change is available on one server but not on the other.
How can I manage an Application variable across load-balanced servers?
What should I use? Of course, keeping it as simple as possible.
Are you using sticky sessions? How often does the data change? Is the application cache even necessary?
One option: have each web server store (and manage, refresh, invalidate) its own application cache. But then you run the risk of the servers holding different copies.
Another option: a distributed cache such as memcached or NCache or something else.
Another option: read/write the data to a shared disk.
Store that information in a database that all servers have access to, and read it from there.
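A common pattern for this is a tiny settings table plus a short local cache, so a flipped flag propagates within a minute without a database hit on every request. The table, column, and key names here are invented for the sketch:

```csharp
using System;
using System.Data.SqlClient;
using System.Web;
using System.Web.Caching;

// Reads the shared "SiteOnline" flag from a settings table all servers can reach.
public static class SiteStatus
{
    const string Cs = "...";   // connection string to the shared database

    public static bool IsOnline()
    {
        var cached = HttpRuntime.Cache["SiteOnline"];
        if (cached != null)
            return (bool)cached;

        bool online;
        using (var conn = new SqlConnection(Cs))
        using (var cmd = new SqlCommand(
            "SELECT Value FROM AppSettings WHERE [Key] = 'SiteOnline'", conn))
        {
            conn.Open();
            online = Convert.ToBoolean(cmd.ExecuteScalar());
        }

        // Cache locally for one minute so every request doesn't hit the database.
        HttpRuntime.Cache.Insert("SiteOnline", online, null,
            DateTime.UtcNow.AddMinutes(1), Cache.NoSlidingExpiration);
        return online;
    }
}
```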

How to deploy website to production with minimal impact to users

I'm trying to find the best server architecture for deploying monthly updates to an ASP.NET external public-facing website. What I'm looking for are ways to release a new version of a website with minimal impact to users. Besides deploying the standard way (i.e., stop IIS, copy the new website over the existing one, start IIS), what are some better solutions for deployment out there? It would be nice if users kept their sessions and didn't have to see a "Website under maintenance" message during the update.
My server configuration
We have two IIS web servers (2003) and are trying to figure out the best way to use them for deployments. My first thought was to update the non-active web server with the latest release, then gracefully point the web traffic to that server with minimal impact to users (best case, the user doesn't lose their session). How would you go about "repointing" the web traffic from server 1 to server 2? Changing the firewall NAT? Changing DNS records? Some other way? We also need to be able to test the live site immediately after we release the new changes (duh).
BTW, we are using NAnt and CruiseControl to automate the builds, and a custom web service to deploy a build to production, so it's all automated with the click of a button.
Could a better solution be achieved using a 3rd server? If so how?
The way we do it:
We have a NetScaler load balancer. We take one web server out of the load balancer, do all the deployments, do an iisreset, and then put it back into the load balancer.
We do the same thing for server 2.
Finally, we invalidate the load balancer cache.
Well, there are a couple of things here:
First, consider using a load-balancing solution. Windows Server 2003 ships with Windows Load Balancing (WLBS); it's not the greatest product, but it is free. With it, you can point all traffic to one server, update the other, and then do the opposite.
Secondly, you may want to reconsider how you're working with sessions. HTTP is stateless, which means that as long as you can reconstruct a user's session on any page hit, you should be fine. One ideal step towards this is using ASP.NET Forms Authentication: the cookie it writes isn't tied to an ASP.NET session (a minimal config sketch follows this answer). Of course, this approach carries more risk: there is a chance users will get an error screen if they hit the site just as you're copying files, and then there will be a delay while the app pool refreshes.
Overall, your best option is load balancing. Even with it, though, consider trying the second option as well: sessions that can be regenerated work well if users fail to stay sticky to one of the servers in the pool.
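For reference, the Forms Authentication piece mentioned above is just configuration (the login path and timeout are examples). The ticket is encrypted with the machineKey, so as long as that key matches across servers, either machine can authenticate the cookie:

```xml
<system.web>
  <!-- The .ASPXAUTH cookie is self-contained and not tied to an ASP.NET session -->
  <authentication mode="Forms">
    <forms name=".ASPXAUTH" loginUrl="~/Login.aspx" timeout="30" slidingExpiration="true" />
  </authentication>
</system.web>
```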
Just wanted to add this for completeness. At my previous job, we achieved seamless deployments by using the following setup:
A load balancer pointed to the production ASP.NET web servers (two in your case; we had three), and the web servers had their session state configured to pull from a third server dedicated to hosting OutOfProc ASP.NET session.
To deploy a site, we'd pull one of the servers out of the load balancer, update the files, fire it back up, and place it back into the load balancer pool. Repeat for the rest of the web servers.
Because each web server got its session data from the one central server, taking a web server out did not log out the users on that server.
If we had code changes that were incompatible with the existing session data, we'd wait until a scheduled maintenance window to deploy; otherwise, users with that session data would get errors until they logged out.
Additionally, since this setup relies on the state server being up, if you wanted to increase reliability you could change from OutOfProc to SQL-based session servers. You would need several servers replicating the same session database, and point the web servers at them. More complicated, but it would reduce site downtime.
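For reference, pointing each web server at the dedicated session box looks roughly like this (the host name is a placeholder); swapping the mode to SQLServer with a sqlConnectionString gives the more resilient variant described above:

```xml
<!-- web.config on each web server in the pool -->
<system.web>
  <!-- All servers read/write session on the dedicated state box, so pulling
       one web server out of the balancer doesn't log anyone out -->
  <sessionState mode="StateServer"
                stateConnectionString="tcpip=statebox01:42424"
                timeout="20" />
</system.web>
```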
