Is there a way to serve a maintenance page when there are no healthy instances in the associated target group of an Amazon Application Load Balancer?
I was thinking of a way to show a nice HTML maintenance page to my users when the instances within the associated target group are not healthy (i.e. returning 4xx/5xx status codes).
There is no built-in way to do it with the ALB itself, but it should be doable using Route53 health checks and DNS failover.
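To sketch the Route53 approach: one common pattern is a failover record pair, a PRIMARY alias pointing at the ALB guarded by a health check, and a SECONDARY alias pointing at something that serves the static maintenance page (for example an S3 website bucket). All names, IDs and DNS values below are placeholders for illustration, not values to copy; this is roughly the change batch you would pass to `change-resource-record-sets`:

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": "primary-alb",
        "Failover": "PRIMARY",
        "HealthCheckId": "<your-health-check-id>",
        "AliasTarget": {
          "HostedZoneId": "<alb-hosted-zone-id>",
          "DNSName": "my-alb-123456.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": true
        }
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": "secondary-maintenance",
        "Failover": "SECONDARY",
        "AliasTarget": {
          "HostedZoneId": "<s3-website-endpoint-hosted-zone-id>",
          "DNSName": "s3-website-us-east-1.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

When the health check fails, Route53 answers DNS queries with the SECONDARY record, so users land on the maintenance page; bear in mind DNS TTLs mean the switchover is not instantaneous.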
Related
Suggestion needed for server architecture for a single code base with multiple sub-domains (multi-tenant architecture), MVC .NET.
Introduction
Working on an MVC .NET web application that will connect to a different database based on the sub-domain name.
There will be 2k accounts, each with approximately 100 users.
Looking for the best architecture suggestions to handle this.
Description
- The account/sub-domain count will be 2K or more.
- Each account will contain approximately 100 users.
- The database will be MSSQL; each sub-domain will have a different DB connection.
May I request your help/expert feedback on the questions below?
Questions
Which type of URL structure will suit the above requirement best?
Which of Logic 1, Logic 2, or Logic 3 is the best?
How can the server withstand this many requests? Is this related to the application pool in .NET? Please advise.
2,000 sub-domains x 100 users = 200,000 requests. Can one code base handle this many requests?
Which of Logic 1, Logic 2, or Logic 3 is best to implement, and what is the advantage of one over the others?
Google, Facebook, etc. handle a single domain with multiple servers. Is this the industry standard?
As we know, with Logic 1 we can set up a separate application pool for each sub-domain; will this be applicable to Logic 3 as well?
Considering Logic 1 and Logic 3, which is best and why? When there is too much load on the web server we will need to add another web server; which Logic will be most suitable at that point?
Sharing the points we already know:
We can add a different application pool for each sub-domain.
The app server should be a high-end server to manage this many requests.
A load balancer needs to be added when the load/traffic on the server increases.
We need the best approach; expensive infrastructure can also be considered.
URL structure is really irrelevant here (as long as it is not "Logic 3" -- just don't expose your "database name" in query strings). Beware, though, that if you go the "subdomain for each customer" way, you will have to purchase a wildcard SSL certificate, and those are a bit more expensive. Plus, IIS still does not support wildcard subdomains, so there will be quirks in setting things up. Other than that, either option is fine.
Load balancers are pretty much a must these days. On the other hand, I would not invest in "high end" hardware; a couple of middle-of-the-road servers will be more than sufficient. What you should really be worrying about is getting those 200,000 users.
Regarding the database schema, see the Multi-Tenant Data Architecture article on MSDN. Basically, there is no single right answer; each option comes with its own set of benefits and downsides.
A single "code base" (as in, a single instance of an application) can handle as many databases as you wish. The only limit here is SQL Server itself: Maximum Capacity Specifications for SQL Server says that the maximum number of "Databases per instance of SQL Server" is 32,767, though whether that is practical is debatable.
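To make the "different DB per sub-domain" part concrete: one common shape (the `Tenant_<subdomain>` naming convention here is purely illustrative, not a standard) is one connection-string entry per tenant in web.config, with the application picking the entry whose name matches the first label of the request's host name:

```xml
<connectionStrings>
  <!-- One entry per tenant; the app resolves "acme.example.com" to "Tenant_acme". -->
  <add name="Tenant_acme"
       connectionString="Server=db1;Database=Acme_DB;Integrated Security=True"
       providerName="System.Data.SqlClient" />
  <add name="Tenant_globex"
       connectionString="Server=db1;Database=Globex_DB;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```

With 2,000 tenants you would likely generate this section (or move the mapping into a small catalog database), but the lookup-by-host-label idea stays the same.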
What is the difference between single-instance and multiple-instance sites in ASP.NET when using Azure cloud services?
OK, there are a few concepts here that you need to grok to answer your question.
// Arrange.
First, I'll make some assumptions about your question, based mainly on the documentation you linked, so my answers have less ambiguity.
You're dealing with Azure WebSites, not a Cloud Web Role or a custom Windows Virtual Machine with IIS. **
You're trying to remember stuff with the Session object (i.e. state data).
You're not sure what an instance or multiple instances are, with respect to Azure WebSites.
** NOTE: My answer applies to WebSites, Web Roles and Windows VMs running IIS, but I just wanted to be uber clear on the Q.
// Act.
When you create a website (either as a WebSite, a Web Role or on a custom Windows Server with IIS), the website gets some defined memory boundary/space/garden/wall/magic bubble, which is called the App Pool. It means that your website is 100% isolated from the other websites on that single server. If you do something bad, it doesn't mess with anyone else's sites.
So that website which is installed on that single server is called an instance.
Next, we decide that we need to handle lots of people hitting our website, so we need to scale out. This means making copies/clones of the website, which has the effect of splitting the load up. If you scale out to 3 copies, then each web server should (for simplicity) take a third of the work, so each handles about 33% of the load.***
Now, you have 1 website on 3 servers and this is called multiple instances.
So an instance is therefore a term used to describe how many servers the website is installed on.
Ok - so why is this important and what does this have to do with State (as suggested by that article you were reading/referring to) ?
Remember how I said that an instance is a single server, and that with multiple instances you have more than one server? Well, now that the website exists on different servers, they can't share their state data between them unless you do some special stuff. That special stuff is what that document is chatting about, with lots of funky terms like InProc, OutProc, distributed caching, etc.
// Assert.
So the TL;DR is: when you scale out and have multiple copies of your website on separate hardware, that's called multiple instances, and when you do that you need to consider some special code to handle sharing of state across those servers -- if you need to share state at all.
Now - have a pic of a beautiful Mola Mola for reading all of this :)
*** Yes yes yes... there are a number of algorithms to handle load balancing of scaled-out sites, like round robin, etc. Let's just keep it really simple for the purpose of this question. K? thxgoodbai.
"Multiple instances" means what it says: more than a single instance of your website, e.g. hosted on two IIS servers. If you use in-process state tracking you will not be able to balance traffic between the two web servers (typically sticky sessions are used to get around this, but that is not an option with the load-balancing capabilities of Azure).
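To make the "special stuff" concrete: the classic fix is to move session state out of process so every instance reads the same store. A hedged web.config sketch (the connection string and server name are placeholders for your own setup):

```xml
<system.web>
  <!-- The default is mode="InProc", which lives in one server's memory.
       SQLServer (or StateServer, or a distributed cache) is shared by all instances. -->
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=mySessionDbServer;Integrated Security=True"
                cookieless="false"
                timeout="20" />
</system.web>
```

With this in place, which instance serves a given request no longer matters for session data, at the cost of a round trip to the state store.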
We are applying unit tests and integration tests, and we are practicing test-driven and behaviour-driven development.
We are also monitoring our applications and servers from the outside (with dedicated software in our network).
What is missing is some standard for live monitoring inside the application.
I give an example:
There should be a cron-like process inside the application that regularly checks the structural health of our data structures.
We need to monitor that the regular things users do are not endangering the health of the application (there are some actions and inputs that we cannot prevent).
My question is: what is the correct name for this, so I can research it further in the literature? I did a lot of searching, but I almost always find the xUnit and BDD/integration-test material that I already have.
So what is this called, and what is the standard in professional application development? I would like to know if there is some standard structure like xUnit, or whether xUnit libraries could even be used for it. I could not even find appropriate tags for this question, so if you read this and know some better tags, please add them and remove the ones that don't fit.
I need this for applications written in python, erlang or javascript and those are mostly server side applications, web applications or daemons.
What we are already doing is exposing an HTTP gateway from inside the applications that reports some data, and this is monitored by our Nagios infrastructure.
I have no problem rolling some cron-like controlled self health scheme inside the applications, but I am interested about knowing some professional standardized way of doing it.
I found this article, it already comes close: Link
It looks like you are asking about approaches to monitoring your application. In general, one can distinguish between active monitoring and passive monitoring.
In active monitoring, you create artificial user load that mimics real user behavior, and monitor your application based on the responses to this traffic from a non-existent user (active = you actively cause traffic to your application). Imagine you have a web application that returns the weather forecast for a specific city. For active monitoring, you would deploy another application that calls your web application with some predefined request ("get weather for Seattle") every N hours. If your application does not respond within the specified time interval, you trigger an alert.
In passive monitoring, you observe real user behavior over time. You can use log parsing to get the number of (un)successful requests/responses, or inject some code into your application that updates values in a database whenever a successful or unsuccessful response is returned (passive = you only check other users' traffic). Then you can create graphs and check whether there is a significant deviation in user traffic. For example, if during the same time of day one week ago your application served 1,000 requests, and today you get only 200, it may indicate a problem with your software.
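Since the question mentions Python, here is a minimal sketch of the active-monitoring probe idea (the `probe` function and its thresholds are illustrative, not any standard API). It takes the synthetic request as a zero-argument callable, so it can wrap `urllib`, `requests`, or an in-process client, and it flags an alert on either an exception or a slow response:

```python
import time


def probe(fetch, timeout_seconds=5.0):
    """Run one active-monitoring probe.

    `fetch` is any zero-argument callable that performs the synthetic
    request (e.g. "get weather for Seattle") and raises on failure.
    Returns a dict with an ok flag, elapsed seconds, and an error message.
    """
    start = time.monotonic()
    try:
        fetch()
        elapsed = time.monotonic() - start
        return {
            "ok": elapsed <= timeout_seconds,
            "elapsed": elapsed,
            "error": None if elapsed <= timeout_seconds else "too slow",
        }
    except Exception as exc:  # any failure during the probe counts as an alert
        return {"ok": False, "elapsed": time.monotonic() - start, "error": str(exc)}
```

A scheduler (cron, or a loop in a daemon) would call something like `probe(lambda: client.get_weather("Seattle"))` every N hours and push `not result["ok"]` into the existing Nagios gateway.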
Here is the scenario:
We have 3 web servers A, B, C.
We want to release a new version of the application without taking it down
(i.e. without showing a "Down for maintenance" page).
Server A goes live with latest code.
Server B gets taken off-line. Users on Server B get routed to A and C.
Page1.aspx was updated with a new control. Anyone who came from Server B to Server A while on this page will get a viewstate error when they perform an action on it. This is what we want to prevent.
How do some of you resolve this issue?
Here are some thoughts we had (whether they are possible with our load balancer, I don't know; I am not familiar with load balancer configuration [it's an F5]):
The more naive approach:
Take down servers A and B and update. C retains the old code. All traffic will be directed to C, and that's ok since it's the old code. When A and B go live with the update, if possible tell the load balancer to only keep people with active sessions on C and all new sessions get initiated on A and B. The problem with this approach is that in theory sessions can stick around for a long time if the user keeps using the application.
The less naive approach:
Similar to the naive approach, except (if possible) we tell the load balancer about "safe" pages, which are pages that were not changed. When the user eventually ends up on a "safe" page, he or she gets routed to server A or B. In theory the user may never land on one of these pages, but this approach is a little less risky (but requires more work).
I assume that your load balancer is directing individual users back to the same server in the web farm during normal operations, which is why you do not normally experience this issue, but only when you start redirecting users between servers.
If that assumption is correct, then the issue is likely an inconsistent machineKey across the server farm.
ViewState is hashed against the machine key of the server to prevent tampering by the user on the client side. The machine key is generated automatically by IIS, will change every time the server restarts or is reset, and is unique to each server.
To ensure that you don't hit viewstate validation issues when users move between servers, there are two possible courses of action.
Disable the anti-tampering protection, either on the individual page or globally in the pages element of web.config, by setting the enableViewStateMac attribute to false. I mention this purely for the sake of completeness: you should never do this on a production website.
Manually generate a machine key and share the same value across each application on every server (you could use one key for all your applications, but it is sensible to use one key per application to maximise security). To do this you need to generate keys (do not use any you may see in demos on the internet; that defeats the purpose of a unique machine key); this can be done programmatically or in IIS Manager (see http://www.codeproject.com/Articles/221889/How-to-Generate-Machine-Key-in-IIS7). Use the same machine key when deploying the website to all of your servers.
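For the second option, the shared key ends up as a machineKey element deployed identically to every server in the farm. A sketch (the key values below are truncated placeholders, never real keys to reuse; generate your own):

```xml
<system.web>
  <!-- Must be byte-for-byte identical in this application's web.config on
       every server in the farm, so ViewState hashed on one server
       validates on any other. -->
  <machineKey validationKey="A1B2C3...your-generated-validation-key...F9"
              decryptionKey="D4E5F6...your-generated-decryption-key...A0"
              validation="SHA1"
              decryption="AES" />
</system.web>
```

Once this is in place, the rolling-deploy scenario in the question stops producing viewstate errors when the load balancer moves a user between servers, because every server can validate every other server's ViewState.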
I can't answer on the best practice for upgrading applications that require 100% uptime.
I'm making changes to a number of ASP.NET applications on a dedicated server and am wondering about the potential issues that might arise as a result.
Basically, for every page load I will need to make at least one call to a SOAP service on another server to obtain user data, process it and render the page.
I may have to make one or two calls to the remote server in the Page_Load event, depending on the situation.
The site handles large amounts of traffic on a daily basis, and I'm wondering if there are limits on the number of outbound connections the site can make from within ASP.NET.
That is, is it advisable to make as many outbound connections as inbound, and how scalable does this solution sound?
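On the "limits on outbound connections" point: by default, ServicePointManager caps the number of concurrent outbound connections per remote host, and that cap is configurable. A hedged web.config sketch (the host name and limits below are examples to tune against your own load testing, not recommendations):

```xml
<system.net>
  <connectionManagement>
    <!-- Max concurrent outbound connections from this app to the SOAP host. -->
    <add address="soap.internal.example.com" maxconnection="96" />
    <!-- Fallback limit for all other remote hosts. -->
    <add address="*" maxconnection="24" />
  </connectionManagement>
</system.net>
```

If the limit is too low for your traffic, requests queue waiting for a free connection and page latency grows, so this is usually the first knob to check when adding a per-request remote call.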
Thanks,
C