Can a load balancer recognize when an ASP.NET worker process is restarting and divert traffic? - asp.net

Say I have a web farm of six IIS 7 web servers, each running an identical ASP.NET application.
They are behind a hardware load balancer (say F5).
Can the load balancer detect when the ASP.NET application worker process is restarting and divert traffic away from that server until after the worker process has restarted?

What happens during an IIS worker process restart (an application pool recycle) is actually a roll-over process. A new worker process starts and accepts new connections, while the old worker process continues to serve its existing connections.
This means that if you configure your load balancer to use a balancing algorithm other than simple round-robin, it will tend to "naturally" divert some, but not all, connections away from the machine that's recycling. For example, with a "least connections" algorithm, connections tend to hang around longer on a server that is recycling, so it receives fewer new ones. Or, with a performance or latency algorithm, the recycling server will suddenly appear slower to the load balancer.
However, unless you write some code or scripts that explicitly detect or recognize a recycle, that's about the best you can do--and in most environments, it's also really all you need.

We use a Cisco CSS 11501 series load balancer. The keepalive "content" we are checking on each server is actually a PHP script.
service ac11-192.168.1.11
  ip address 192.168.1.11
  keepalive type http
  keepalive method get
  keepalive uri "/LoadBalancer.php"
  keepalive hash "b026324c6904b2a9cb4b88d6d61c81d1"
  active
Because it is a dynamic script, we are able to tell it to check various aspects of the server, and return "1" if all is well, or "0" if not all is well.
In your case, you may be able to implement a similar check script that fails to respond, or returns "0", while the ASP.NET application worker process is restarting (or down).
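A minimal sketch of what such a check could look like in ASP.NET (the class name, the handler path, and the CheckDependencies helper are purely illustrative): a generic handler that returns "1" only when the application can actually do work, which a keepalive like the one above (or an F5 monitor) can match on. Because the handler runs inside the ASP.NET worker process, it simply stops answering while that process is down or still spinning up, which is exactly what makes the monitor fail and the load balancer divert traffic.

// LoadBalancerCheck.ashx code-behind (hypothetical name): writes "1" when
// healthy and "0" otherwise, so the balancer can match on the body.
using System;
using System.Web;

public class LoadBalancerCheck : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        try
        {
            // Replace with whatever "all is well" means for your app:
            // a cheap database ping, a cache lookup, a config sanity check...
            bool healthy = CheckDependencies();
            context.Response.Write(healthy ? "1" : "0");
        }
        catch (Exception)
        {
            // Any failure inside the app counts as "not all is well".
            context.Response.Write("0");
        }
    }

    public bool IsReusable
    {
        get { return true; }
    }

    private static bool CheckDependencies()
    {
        // Hypothetical placeholder for real checks; keep them fast and cheap.
        return true;
    }
}

The load balancer monitor is then pointed at the handler's URL and told to expect "1" in the response body, much like the CSS keepalive hash above.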

It depends a lot on the polling interval of the load balancer: a request from the balancer has to fail before it can decide to divert traffic.

IIS 6 and 7 recycle application pools every 1740 minutes (29 hours) by default. This is done in an overlapped manner so that service is not impacted.
http://technet.microsoft.com/en-us/library/cc770764%28v=ws.10%29.aspx
http://forums.iis.net/t/1153736.aspx/1
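For reference, this is roughly where those defaults live in applicationHost.config (the pool name below is a placeholder; the values shown are the defaults, with 1.05:00:00 being the TimeSpan form of 1740 minutes):

<system.applicationHost>
  <applicationPools>
    <add name="MyAppPool">
      <!-- disallowOverlappingRotation="false" is what gives the overlapped recycle -->
      <recycling disallowOverlappingRotation="false">
        <periodicRestart time="1.05:00:00" />
      </recycling>
    </add>
  </applicationPools>
</system.applicationHost>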
On the other hand, in case of a fault, a good load balancer (and I'm sure the F5 is one) can detect a failure on one of the web servers and send requests to the remaining, healthy web servers. That's a critical part of a high-availability web infrastructure.

Related

Load stressing web applications deployed in OpenStack instances under an autoscaling group

I am testing the auto-scaling feature of OpenStack. In my test set-up, Java servlet applications are deployed on Tomcat web servers behind an HAProxy load balancer. I aim to stress test the application, to see how it scales and what the response times are, using JMeter as the stress tester. However, I observe that HAProxy (or something else) terminates the connection as soon as the onComplete signal is sent by one of the member instances. Consequently, the subsequent responses from the remaining servers are reported as failures (timeouts). I have configured the HAProxy server to use a round-robin algorithm with sticky sessions. See the attached JMeter results tree. I am not sure of the next step to undertake. The web applications are asynchronous, hence my expectation was that the client (HAProxy in this case) should wait until the last thread is submitted before sending the response.
Are there issues with my approach, or some flaws in the set-up?

HTTP connection pools to share among processes

Where I work our main web application is served with nginx+uwsgi+Django. A given production box has 80 uwsgi worker processes running on it. Our Django application makes moderately frequent requests to Amazon S3 but, if each of those 80 workers has to use its own HTTP connection for such requests, they're not frequent enough to take advantage of the (relatively short) HTTP Keep-Alive allowed for by Amazon's servers. So, we frequently have to pay a reconnection penalty after the connection is dropped on Amazon's side.
What I would like is if there were a proxy service running on the same box that could "concentrate" the S3 connections from those 80 processes down into a smaller pool of HTTP connections that would get enough use that they would be kept alive. The Django app would connect to the proxy, and the proxy would use its pool of kept-alive connections to forward the requests to S3. I see that it is possible to use nginx itself as a forward proxy, but it's not clear to me if or how this can take advantage of connection pooling the way I have in mind. An ideal solution would be good at auto-scaling so that a uwsgi worker would never have to wait on the proxy itself for a connection, but would pare back connections as load drops so as to keep the connections as "hot" as possible (perhaps keeping 1 or 2 spare to handle occasional upticks).
I've run across other forward proxies such as Squid but these products seem designed to fulfill the more traditional caching proxy role for use by e.g. ISPs that have many disparate remote clients.
Does anyone know of an existing solution for this type of problem? Many thanks!

Looking for a message bus for HTTP load sharing

I have an HTTP application with standalone workers that perform well. The issue is that sometimes they need to purge and rebuild their caches, so they stop responding for up to 30 seconds.
I have looked into a number of load balancers, but none of them seem to address this issue. I have tried Perlbal and some Apache modules (like fcgid) and they happily send requests to workers that are busy rebuilding their cache.
So my take is this: isn't there some kind of message bus solution where all HTTP requests are queued up, leaving it up to the workers to process messages when they are able to?
Or, alternatively, a load balancer that can take into account that the workers are sometimes unable to respond?
Added later: I am aware that one strategy could be for the workers to use a management protocol to inform the load balancer when they are busy, but that solution seems kludgy and I worry that there will be some edge cases that result in spurious errors.
If you use Amazon Web Services' load balancing you can achieve your desired result. You can mark an EC2 instance behind an Elastic Load Balancer (ELB) as unhealthy while it does this cache purge and rebuild.
What I would do is create an additional endpoint on each instance, called rebuild_cache for example. So if you have 5 instances behind your ELB, you can make a script that hits each individual instance (not through the load balancer) on that rebuild_cache endpoint. This endpoint would do three things (see the sketch after the list):
1. Mark the instance as unhealthy. The load balancer will realize it is unhealthy after a failed health check (the timing and threshold of health checks are configurable from the AWS Web Console).
2. Run your cache purge and rebuild.
3. Mark the instance as healthy. The load balancer will run health checks against the instance and only start sending it traffic again once it has passed the required number of consecutive healthy checks (again, this threshold is defined through the ELB Health configuration).
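A minimal sketch of that endpoint pattern, assuming a self-hosted worker with made-up port and paths: the ELB health check is pointed at /health, and hitting /rebuild_cache makes that check fail for the duration of the rebuild.

using System;
using System.Net;
using System.Text;
using System.Threading;

class Worker
{
    // Flipped by /rebuild_cache; read by /health.
    static volatile bool healthy = true;

    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://+:8080/");   // port is an assumption
        listener.Start();

        while (true)
        {
            HttpListenerContext ctx = listener.GetContext();
            string path = ctx.Request.Url.AbsolutePath;

            if (path == "/health")
            {
                // ELB health check target: 200 while healthy, 503 while rebuilding.
                ctx.Response.StatusCode = healthy ? 200 : 503;
                ctx.Response.Close();
            }
            else if (path == "/rebuild_cache")
            {
                // Acknowledge immediately, then rebuild in the background so the
                // health endpoint keeps answering (with 503) during the rebuild.
                ctx.Response.StatusCode = 202;
                ctx.Response.Close();
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    healthy = false;
                    RebuildCache();
                    healthy = true;
                });
            }
            else
            {
                // Normal request handling goes here.
                byte[] body = Encoding.UTF8.GetBytes("ok");
                ctx.Response.OutputStream.Write(body, 0, body.Length);
                ctx.Response.Close();
            }
        }
    }

    static void RebuildCache()
    {
        // Stand-in for the real purge and rebuild (~30 seconds in the question).
        Thread.Sleep(TimeSpan.FromSeconds(30));
    }
}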
I see two strategies here: take a worker offline for the period, so the balancer will abandon it; or invert control, so that workers pull tasks from the balancer instead of the balancer pushing tasks to the workers. The second strategy is easy to do with a message queue (see the sketch below).
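A toy sketch of that second strategy; BlockingCollection stands in for a real message queue (RabbitMQ, SQS, etc.) and the names are made up, but it shows the key point: a worker that is rebuilding its cache simply stops pulling, so nothing is routed to it.

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class PullWorkersDemo
{
    // In a real deployment this would be a broker queue shared by all workers.
    static readonly BlockingCollection<string> Requests = new BlockingCollection<string>();

    static void Main()
    {
        Task.Run(() => Worker("A"));
        Task.Run(() => Worker("B"));

        for (int i = 0; i < 20; i++) Requests.Add("request " + i);
        Requests.CompleteAdding();

        Thread.Sleep(10000);   // let the workers drain the queue
    }

    static void Worker(string name)
    {
        int handled = 0;
        foreach (var request in Requests.GetConsumingEnumerable())
        {
            Console.WriteLine(name + " handled " + request);

            if (++handled % 5 == 0)
            {
                // While rebuilding, this worker just doesn't pull; the other
                // worker keeps consuming and no request hits a busy worker.
                Console.WriteLine(name + " rebuilding cache...");
                Thread.Sleep(1000);
            }
        }
    }
}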

Can Nginx be used instead of Gunicorn to manage multiple local OpenERP worker servers?

I'm currently using Nginx as a web server for OpenERP. It's used to handle SSL and cache static data.
I'm considering extending its use to also handle failover and load balancing with a second server, using the upstream module.
In the process, it occurred to me that Nginx could also do this with multiple OpenERP servers on the same machine, so I could take advantage of multiple cores. But Gunicorn seems to be the preferred tool for this.
The question is: can Nginx do a good job handling traffic to multiple local OpenERP servers, bypassing completely the need for Gunicorn?
Let's first talk about what they both basically are.
Nginx is a pure web server that's intended for serving up static content and/or redirecting requests to another socket that handles them.
Gunicorn is based on the pre-fork worker model. This means that there is a central master process that manages a set of worker processes. The master never knows anything about individual clients. All requests and responses are handled completely by worker processes.
If you look closely, Gunicorn is designed after Unicorn; follow the link for more detail on the differences. That shows that the same nginx-plus-Unicorn model also applies to Gunicorn.
nginx is not a "pure web server" :) It's rather a web accelerator capable of doing load balancing, caching, SSL termination, request routing AND static content. A "pure web server" would be something like Apache - historically a web server for static content, CGIs and later for mod_something.

Round robin load balancing options for a single client

We have a BizTalk server that makes frequent calls to a web service that we also host.
The web service is hosted on 4 servers with a DNS load balancer sitting in front of them. The theory is that each subsequent call to the service will round-robin across the servers and balance the load.
However this does not work presumably because the result of the DNS lookup is cached for a small amount of time on the client. The result is that we get a flood of requests to each server before it moves on to the next.
Is that presumption correct and what are the alternative options here?
A bit more googling has suggested that I can disable client-side DNS caching: http://support.microsoft.com/kb/318803
...however, this states the default cache time is 1 day, which is not consistent with my experience.
You need to load balance at a lower level, with NLB Clustering on Windows or LVS on Linux (or another equivalent piece of software). If you are letting clients of the web service keep an HTTP connection open for longer than a single request/response, you still might not get the granularity of load balancing you are looking for, so you might have to reconfigure your application servers if that is the case.
The solution we finally decided to go with was Application Request Routing, which is an IIS extension. In tests it has been shown to do what we want, and it is far easier for us (as developers) to get up and running compared to a hardware load balancer.
http://www.iis.net/download/ApplicationRequestRouting
