ASP.NET: how to test that load balancing is working

Our infrastructure team has configured load balancing using Radware. Basically we have 3 web servers that are load balanced.
Before we go live I would like to test and make sure that load balancing is working. How do I test the following:
The 3 servers are load balanced and requests are evenly distributed. (Does any automated tool exist for this?)
ASP.NET InProc sessions are working.

You can test by first generating an artificial load on your site (with any one of a number of load generators). Then have a look at the Windows Performance Counters on each server: things like HTTP requests per second and CPU use would be reasonable high-level metrics.
Yes, there are automated tools, but they usually require quite a bit of setup, and the better ones charge a fee. Perf counters are fast, easy and free.
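For example, here is a rough console sketch of sampling those counters across the farm while the load test runs. The server names are placeholders; it assumes the standard Windows/ASP.NET counter categories and that your account has permission to read counters remotely:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

// Sample CPU and ASP.NET request-rate counters on each server in the
// farm. WEB01-WEB03 are placeholder names for the three load-balanced
// boxes; roughly even numbers across them is what you want to see.
class CounterSampler
{
    static void Main()
    {
        string[] servers = { "WEB01", "WEB02", "WEB03" };

        foreach (string server in servers)
        {
            var cpu = new PerformanceCounter(
                "Processor", "% Processor Time", "_Total", server);
            var reqs = new PerformanceCounter(
                "ASP.NET Applications", "Requests/Sec", "__Total__", server);

            cpu.NextValue();  // the first read of a rate counter is always 0
            reqs.NextValue();
            Thread.Sleep(1000);

            Console.WriteLine("{0}: CPU {1:F0}%  Requests/sec {2:F0}",
                server, cpu.NextValue(), reqs.NextValue());
        }
    }
}
```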
As swannee said, InProc sessions won't work in a load-balanced scenario unless your load balancer is configured to use sticky sessions. It's better to use SQL Server sessions with load balancing.
FWIW, you can test your software in a "mini" load balanced scenario on a single server by enabling IIS web gardens (multiple worker processes), from the AppPool config dialog.

Can you look in the IIS server logs to see how many hits each server is getting?
http://msdn.microsoft.com/en-us/library/ms953324.aspx
Also, unless you are using sticky sessions, you are going to have problems with InProc session state: it simply won't work on a server farm. Without sticky sessions, you'll be able to tell real quickly that your session is being lost between requests with just some manual testing.

Our organization maintains a series of ping and advanced status pages. These pages are monitored by our load balancers so they can take unhealthy nodes out of rotation if a node loses its connection to a database server or is having issues itself.
Our ping pages spit out the name of the server you're connecting to and its status. They are available at the individual server names, like server01.application.com/ping and server02.application.com/ping, but more importantly, they all answer on application.com/ping.
Refreshing the page will show us a new connection (you can see the server name change).
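A minimal version of such a ping page might look like this hypothetical handler; the health logic is stubbed out, and a real one would also check the DB connection and return a non-200 status so the balancer pulls the node:

```csharp
using System;
using System.Web;

// Reports which node answered, so refreshing through the balanced name
// (application.com/ping) shows the rotation as the server name changes.
public class PingHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        // Make sure the balancer/browser never caches the ping result.
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.Write(Environment.MachineName + " OK");
    }
}
```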
To test load you could use WCAT; it's not the easiest tool to set up and script, but it works.
To test sessions, you'll need to build out some pages that read and write session state and run your load tests against those (a sketch follows).
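For example, a hypothetical session test handler along these lines: it bumps a per-session counter and names the node that answered, so a lost session (the counter resetting) or a node switch is immediately visible when you refresh:

```csharp
using System;
using System.Web;
using System.Web.SessionState;

// IRequiresSessionState gives the handler read/write session access.
// Without sticky sessions or an out-of-process store, the counter will
// visibly reset as the balancer moves you between nodes.
public class SessionTestHandler : IHttpHandler, IRequiresSessionState
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        int hits = (context.Session["hits"] as int?) ?? 0;
        context.Session["hits"] = ++hits;

        context.Response.ContentType = "text/plain";
        context.Response.Write(string.Format(
            "server={0} session-hits={1}", Environment.MachineName, hits));
    }
}
```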

Related

DDoS Attack in ASP.NET with State Server Session

Can't find this issue anywhere...
Using ASP.NET 3.5, I have 3 web servers in a web farm, using ASP.NET State Server (on a different server).
All pages use session state (they both read and update the session).
Issue: my pages are prone to DDoS attack, and it is so easy to attack: just go to any page and hold down the F5 key for 30-60 seconds, and the requests will pile up on all web servers.
I have read that every request that uses the session takes a LOCK on it, hence other requests have to wait to get the same user's session; this waiting is ultimately what makes the DDoS work.
Our solutions have been pretty primitive, from preventing master pages and custom controls from calling session (so that only the page does) to adding JavaScript that disables the F5 key.
I just realized ASP.NET with session state is prone to such DDoS attacks!
Has anyone faced a similar issue? Is there any global/elegant solution? Please do share.
Thanks
Check this:
Dynamic IP Restrictions:
The Dynamic IP Restrictions Extension for IIS provides IT Professionals and Hosters a configurable module that helps mitigate or block Denial of Service Attacks or cracking of passwords through Brute-force by temporarily blocking Internet Protocol (IP) addresses of HTTP clients who follow a pattern that could be conducive to one of such attacks. This module can be configured such that the analysis and blocking could be done at the Web Server or the Web Site level.
Also, Check this:
DoS Attack:
Most sites/datacenters will control (D)DoS attacks via hardware, not software: firewalls, routers, load balancers, etc. It is not efficient or desirable to have this at the application level of IIS. I don't want bloat like this slowing down IIS.
Also, DDoS prevention is a complex setup; there are dedicated hardware boxes just to deal with it, with different rules and analysis that take a lot of processing power.
Look at your web environment infrastructure, see how it is set up and what protection your hardware provides, and if there is still a problem, look at dedicated hardware solutions. You should block DDoS attacks as early as possible in the chain, not at the end at the web server level.
Well, the most elegant solution has to be done at the network level.
Since it is "nearly" impossible to differentiate a DDoS attack from valid session traffic, you need a learning algorithm running on the network traffic; most enterprise-level web applications need a DDoS defender at the network level. Those are quite expensive and more stable solutions for DDoS. Ask your datacenter if they have DDoS defender hardware; if they do, they can put your server traffic behind the device.
Two of the main competitors in this market:
http://www.arbornetworks.com/
http://www.riorey.com/
We had the same issue at work. It's not solved yet, but two workarounds we were looking at were:
Changing the session state handling so that it doesn't lock the session, if your application logic would allow this (see the sketch after this list).
Upgrading the session state server so that it is faster (SQL Server 2016 in-memory session state, for example). This makes it a little harder for users to cause issues and means your app should recover faster.
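For the first workaround, the built-in lever in ASP.NET is marking requests as read-only with respect to session: pages can declare EnableSessionState="ReadOnly", and a handler can implement the IReadOnlySessionState marker interface instead of IRequiresSessionState, so the request never takes the exclusive per-session lock. A minimal sketch; the handler name and session key are made up:

```csharp
using System.Web;
using System.Web.SessionState;

// Implementing IReadOnlySessionState tells ASP.NET this request only
// reads the session, so it does not acquire the exclusive per-session
// lock and cannot be stalled behind an F5 flood of sibling requests.
public class ReadOnlySessionHandler : IHttpHandler, IReadOnlySessionState
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // Reads are safe here; writing to context.Session would be
        // unsafe because no write lock is held for this request.
        object userName = context.Session["UserName"];
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello, " + (userName ?? "anonymous"));
    }
}
```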

ASP.Net load balancing

I am working on ASP.NET (newbie) and I am trying to understand what it means to do "load balancing" for a web site. The website will be used by multiple users and resources (database, web service, ...).
If anyone could help me understand the concept of load balancing for an ASP.NET web site, I would really appreciate it.
Thanks.
One load-balancing-related issue you may want to be aware of at development time: where you store your session state. This MSDN article gives a good overview of your options.
If you implement your ASP.NET system using "out-of-process" or "sql-server-mode" session state management, that will give you some additional flexibility later, if you decide to introduce a load balancer to your deployed system:
Your load balancer needn't handle session affinity. As one poster mentioned above, all modern load-balancers handle it anyway, so this is a minor consideration in any case.
Web gardens (a sort of IIS/server-implemented load balancer) REQUIRE the use of "out-of-process" or "sql-server-mode" session state management. So if your system is already configured that way, you'll be one step closer to being able to use web gardens.
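As a cheap guard, you can have the application fail fast if someone forgets and leaves it running InProc. A sketch, assuming a Global.asax code-behind:

```csharp
using System;
using System.Web;
using System.Web.SessionState;

// Before moving to a web garden or farm, verify the app is not still
// running InProc session state, which is per-process and would break.
public class Global : HttpApplication
{
    protected void Session_Start(object sender, EventArgs e)
    {
        if (Session.Mode == SessionStateMode.InProc)
        {
            // Log or fail fast; InProc state is invisible to other
            // worker processes and to other servers in the farm.
            throw new InvalidOperationException(
                "InProc session state is not farm/garden safe.");
        }
    }
}
```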
What is it?
Load balancing simply refers to distributing a workload between two or more computers. As a concept, it's not unique to ASP.NET. Although having separate machines for your database and web server could be called "load balancing", it more commonly refers to using multiple machines to serve a single role, such as having multiple web servers.
Should you worry about it? Probably not. Do you already have a performance problem? Are your database and web server on their own machines? If you do find that your server resources are strained, it would probably be easier to scale up (a more powerful single machine) than out (load balancing). These days, a dedicated box can handle a LOT of traffic if your code is decent.
Load balancing, in the programming sense, is not specific to ASP.NET; it refers to a technique for distributing server load across two or more machines rather than having it all land on one. Unless you will have many thousands (millions?) of users, you probably do not need to worry about it.
Check the Wikipedia article for more information.
Load balancing is not specific to any one technology stack, be it ASP.NET, JSP, etc. To load balance is to spread the incoming requests to a web site over more than one server. This is typically done with a software or hardware load balancer that sits in front of two or more web servers and delegates the incoming traffic, although the technique is not limited to web servers. See: Load Balancing
Enjoy!
I've never used it, but an option is IIS Application Request Routing.
IIS Application Request Routing (ARR) 2.0 enables Web server administrators, hosting providers, and Content Delivery Networks (CDNs) to increase Web application scalability and reliability through rule-based routing, client and host name affinity, load balancing of HTTP server requests, and distributed disk caching.
In a typical web server/database scenario, the DB is almost always the first to load up its machine, because dealing with data storage requires more resources. Before you even start looking at load balancing your web server, you need to think about how to load balance the database.
Spreading one database across multiple servers is a lot harder than load balancing a web server. One technique that can be used is sharding (or horizontal partitioning), where some records are stored on one server and other records on another. For example, records with IDs 1-900000 live on server 1 and records 900001 and up on server 2.
In comparison to DB load balancing, spreading the load across multiple ASP.NET servers is not overly complicated. Most of the session issues can be easily mitigated by using out-of-process session state and/or never talking to Application.Cache directly. Data load balancing, on the other hand, is hard and requires a lot of planning and trial and error. In most cases, talking to a load-balanced DB requires using an ORM which supports it (e.g. NHibernate) or your own Data Access Layer. The reason is that you need to pull connection establishment out of the code that uses the database, so that the decision of which DB to talk to is handled in one place (sketched below).
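A minimal sketch of that "one place" for the ID-range example above. The server names, connection strings, and the 900000 boundary are just the illustrative values from this answer:

```csharp
// All data access code asks the router for a connection string;
// nothing else in the app knows how records map to servers.
public static class ShardRouter
{
    private const long Shard1UpperBound = 900000;

    private const string Shard1 =
        "Server=db1;Database=App;Integrated Security=true";
    private const string Shard2 =
        "Server=db2;Database=App;Integrated Security=true";

    public static string GetConnectionString(long recordId)
    {
        // Records 1-900000 live on server 1, the rest on server 2.
        return recordId <= Shard1UpperBound ? Shard1 : Shard2;
    }
}
```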
The exact solution is to save the session into SQL Server with a stored procedure. To read the session, call the 'SessionCheck' stored procedure.
I'd add that it really isn't something to worry about. By the time you need a load balancer, you can probably afford one of the neato newfangled ones with sticky sessions so you don't even have to deal with the session boogeyman.

ASP.NET Deployment to Multiple Web Servers

I'm putting together my deployment plan for a major deployment next week (basically taking over a site).
I've never had to deploy to multiple web servers before.
Do I need to copy the files to each web server, or is there a tool which will do this for me?
I have to supply the IP address to some 3rd party vendors; which IP do I give them, since there are four separate servers?
Please check this thread, hope this will help you: What method do you use to deploy ASP.Net applications to the wild?
I would have expected that there would be a load balancer which would spread the traffic between the servers, in which case you would give out the IP address of the external interface of the load balancer.
For updates in this scenario I would typically take one server out of the load balancer's loop, then update that server and test that it works OK. Since you have 4 servers, take another out and update/test that one. Then switch the load balancer so that the 2 updated servers are live and the other 2 are offline, update/test those servers, and then put them back into the loop so they're live; your update is complete with no downtime. Of course, I'd typically do this during a period of low traffic where possible.
Whether you do this using some sort of automatic script or manually would depend on what systems you have in place and how often you would expect to make updates.
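If you do script it, the piece worth automating first is the health check before a node goes back into the loop. A rough sketch, borrowing the /ping page convention from the first thread on this page (the URL and retry numbers are assumptions):

```csharp
using System.Net;
using System.Threading;

// Poll a node's ping page until it answers, so a just-updated server
// is only returned to the load balancer's loop once it is healthy.
public static class NodeHealthCheck
{
    public static bool WaitUntilHealthy(string server, int attempts = 10)
    {
        string url = "http://" + server + "/ping";
        for (int i = 0; i < attempts; i++)
        {
            try
            {
                using (var client = new WebClient())
                {
                    // Any 200 response counts as healthy here; a real
                    // check might also inspect the body.
                    client.DownloadString(url);
                    return true;
                }
            }
            catch (WebException)
            {
                Thread.Sleep(3000); // wait and retry
            }
        }
        return false;
    }
}
```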
It's worth saying that Microsoft have since released a couple of tools to help with this:
http://www.iis.net/download/webdeploy
http://www.iis.net/download/WebFarmFramework

Harvesting Dynamic HTTP Content to produce Replicating HTTP Static Content

I have a slowly evolving dynamic website served from J2EE. The response time and load capacity of the server are inadequate for client needs. Moreover, ad hoc requests can unexpectedly affect other services running on the same application server/database. I know the reasons and can't address them in the short term. I understand HTTP caching hints (expiry, etags....) and for the purpose of this question, please assume that I have maxed out the opportunities to reduce load.
I am thinking of doing a brute force traversal of all URLs in the system to prime a cache and then copying the cache contents to geodispersed cache servers near the clients. I'm thinking of Squid or Apache HTTPD mod_disk_cache. I want to prime one copy and (manually) replicate the cache contents. I don't need a federation or intelligence amongst the slaves. When the data changes, invalidating the cache, I will refresh my master cache and update the slave versions, probably once a night.
Has anyone done this? Is it a good idea? Are there other technologies that I should investigate? I can program this, but I would prefer a solution built from a configuration of open-source technologies.
Thanks
I've used Squid before to reduce load on dynamically-created RSS feeds, and it worked quite well. It just takes some careful configuration and tuning to get it working the way you want.
Using a primed cache server is an excellent idea (I've done the same thing using wget and Squid). However, it is probably unnecessary in this scenario.
It sounds like your data is fairly static and the problem is server load, not network bandwidth. Generally, the problem exists in one of two areas:
Database query load on your DB server.
Business logic load on your web/application server.
Here is a JSP-specific overview of caching options.
I have seen huge performance increases by simply caching query results. Even adding a cache with a duration of 60 seconds can dramatically reduce load on a database server. JSP has several options for in-memory cache.
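In ASP.NET terms (the subject of this page), that 60-second query cache is only a few lines. A sketch; the cache key and the LoadProductsFromDb helper below are made up for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

// Cache-aside over a query result: even 60 seconds of caching
// collapses repeated identical queries into one DB round-trip
// per minute per server.
public static class ProductCache
{
    public static List<string> GetProducts()
    {
        const string key = "products-v1";
        var cached = (List<string>)HttpRuntime.Cache[key];
        if (cached != null)
            return cached;

        List<string> fresh = LoadProductsFromDb(); // hits the database

        HttpRuntime.Cache.Insert(key, fresh, null,
            DateTime.UtcNow.AddSeconds(60), Cache.NoSlidingExpiration);
        return fresh;
    }

    private static List<string> LoadProductsFromDb()
    {
        // Placeholder for the real data access code.
        return new List<string> { "example" };
    }
}
```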
Another area available to you is output caching. This means that the content of a page is created once, but the output is used multiple times. This reduces the CPU load of a web server dramatically.
My experience is with ASP, but the exact same mechanisms are available on JSP pages. In my experience, with even a small amount of caching you can expect a 5-10x increase in max requests per sec.
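Since the rest of this page is ASP.NET-centric, here is the ASP.NET spelling of that output-caching idea, done programmatically (the declarative @ OutputCache page directive is the more common form; the 60-second duration is just the figure used above):

```csharp
using System;
using System.Web;

// The rendered HTML is stored once on the server and replayed for
// subsequent requests, skipping page execution entirely.
public partial class ProductListPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        Response.Cache.SetExpires(DateTime.UtcNow.AddSeconds(60));
        Response.Cache.SetCacheability(HttpCacheability.Server);
        Response.Cache.SetValidUntilExpires(true);
    }
}
```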
I would use tiered caching here; deploy Squid as a reverse proxy server in front of your app server as you suggest, but then deploy a Squid at each client site that points to your origin cache.
If geographic latency isn't a big deal, then you can probably get away with just priming the origin cache like you were planning to do and then letting the remote caches prime themselves off that one based on client requests. In other words, just deploying caches out at the clients might be all you need to do beyond priming the origin cache.
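If you want to script the priming rather than use wget, a bare-bones version is just a loop that fetches every URL through the proxy. A sketch; the proxy address and urls.txt list are assumptions:

```csharp
using System.IO;
using System.Net;

// Walk a URL list through the reverse proxy so the cache is primed
// before clients arrive; any crawler does the same job.
public static class CachePrimer
{
    public static void Prime(string proxyAddress, string urlListPath)
    {
        var proxy = new WebProxy(proxyAddress); // e.g. the Squid box
        foreach (string url in File.ReadAllLines(urlListPath))
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Proxy = proxy; // route through the cache so it stores the response
            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                reader.ReadToEnd(); // drain the body so it is fully cached
            }
        }
    }
}
```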

Anyone using Memcached with ASP.NET on a distributed farm?

We have 22 HTTP servers each running their own individual ASP.NET Caches. They read from a read only DB that is only updated off peak hours.
We use a file dependency to invalidate the cache, prompting the servers to "new up" their caches... If this is accidentally done during peak hours, it risks bringing down our DB cluster due to the sudden deluge of open connections.
Has anyone used memcached with ASP.NET in this distributed form? It seems to me that it would offer the huge advantage of only having to build the cache up once (and hit the DB 21 times less), while memcached would handle distributing it on each box.
If you have, do you place it on the same box as the HTTP boxes, or do you run a separate cache tier? How well does it scale, can we expect it to need powerful servers? Our working dataset is not huge (We fit it into 4 gigs of memory on each HTTP box just fine).
How do you handle invalidation?
Looking for experiences and war stories.
EDIT: Win2k3, IIS6, 64-bit servers...4 gigs per box (I believe, we may have upped it to 16 gigs when we changed to 64-bit servers).
"memcached would handle distributing it on each box"
memcached does not distribute or replicate a cache to each box in a memcached farm. The memcached client basically hashes the key and chooses a cache server based on that hash. When one of the memcached servers fails you will lose whatever cached items existed on that server; however, the client will recognize the failure and begin writing values to a different server. This being the case, your code needs to account for missing items in the cache and reset them if necessary.
This article discusses the memcached architecture in more detail: How memcached works.
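In practice that "account for missing items" logic is plain cache-aside. A sketch against the Enyim .NET memcached client (the API names as I recall them; GetReportFromDb is a made-up stand-in for your real query):

```csharp
using Enyim.Caching;
using Enyim.Caching.Memcached;

// On a miss (never cached, evicted, or the owning node went down),
// rebuild the value from the database and write it back.
public class ReportCache
{
    private static readonly MemcachedClient Client = new MemcachedClient();

    public string GetReport(string reportId)
    {
        string key = "report:" + reportId;

        // The client hashes the key to pick one server in the farm.
        var cached = Client.Get<string>(key);
        if (cached != null)
            return cached;

        string fresh = GetReportFromDb(reportId);
        Client.Store(StoreMode.Set, key, fresh);
        return fresh;
    }

    private string GetReportFromDb(string reportId)
    {
        return "..."; // placeholder for the real query
    }
}
```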
Best practice (according to the memcached site) is to run memcached on the same box as your web server app, or else you're making network calls to another box (which isn't all that bad, but it's not optimal). If you're running a 64-bit app server (which you probably should be if you're going to be running memcached), then you can load up each of the servers with plenty of memory and it will be available to memcached. There's not much in the way of CPU resources used by memcached, so if your current app server isn't very taxed, it will remain that way.
Haven't used them together, but I've used them both on separate projects.
Last I saw the documentation explicitly said that sharing with the web server was ok.
Memcached really only needs RAM, and if you take your ASP.NET cache out of the equation, how much RAM is your web server actually using? Probably not much. It won't compete much with your web server for CPU and it doesn't need disk at all. You might consider segmenting the memcached traffic off from the incoming web requests (if you don't already).
It worked well and was fast; I didn't have any problems with it.
Oh, invalidation was explicit on the project I used it on. Not sure what other modes there are for that.
If you want to get replication across your memcached servers, it may be worth a look at repcached. It's a patch for memcached that handles the replication part.
Worth checking out Velocity, which is a distributed cache provided by Microsoft. I cannot give you a point-by-point comparison to memcached, but Velocity is integrated with ASP.NET and will continue to get more development and integration.
