ASP.NET application hosting in Amazon Cloud

Hosting a .NET application on Amazon EC2: what would be the optimum
configuration for a group that has 525 employers and around 85,000 employees? I have been googling this for the past week but could not find a reliable solution.

You might want to consider hosting your application on AppHarbor. We'll seamlessly scale your application, and you won't have to worry about sizing your infrastructure up front.
(Disclaimer: I'm a co-founder of AppHarbor.)

Perhaps you need to provide more information to get better answers - for example, what does your application do? How many users does it have? What is the relevance of "525 employers and around 85,000 employees" - does it indicate the amount of data or the number of users? How many users will be concurrent at a time? What will be the average request time? What will be the usage pattern? How much memory does it need? Is your app CPU-intensive or IO-intensive? If it's IO-intensive, where exactly is your data stored?
That said, you need not worry too much about provisioning and scaling up front. Amazon EC2 offers on-demand resourcing, so you can easily scale up your configuration as your needs grow.
If you really want to find the optimal configuration, the only way is to load test your application (with typical usage patterns/scenarios). Decide on your parameters, such as average response time, and find the user limits served by, say, 1, 4 and 8 ECUs (Elastic Compute Units). You can load test using the standard instance sizes - small, large and extra large - and then easily interpolate to project your actual ECU and memory needs. Based on that you can choose the optimal configuration.
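As a rough illustration of what such a load test might look like, here is a minimal C# sketch; the endpoint URL, user count and request count are placeholder assumptions, and for serious measurements you would use a dedicated load-testing tool against each instance size.

```csharp
// Minimal load-generation sketch. The endpoint URL, user count and request
// count are hypothetical placeholders; point it at a test instance, never at
// production, and compare the sustained request rate across instance sizes.
using System;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class LoadSketch
{
    static async Task Main()
    {
        var url = "http://ec2-test-instance.example.com/";  // hypothetical test endpoint
        int concurrentUsers = 50;                            // simulated concurrent users
        int requestsPerUser = 20;                            // requests each user fires

        using var client = new HttpClient();
        var timer = Stopwatch.StartNew();

        // Each "user" is a task issuing sequential page requests.
        var users = Enumerable.Range(0, concurrentUsers).Select(async _ =>
        {
            for (int i = 0; i < requestsPerUser; i++)
                await client.GetAsync(url);
        });
        await Task.WhenAll(users);

        timer.Stop();
        double total = concurrentUsers * requestsPerUser;
        Console.WriteLine($"{total / timer.Elapsed.TotalSeconds:F1} requests/second sustained");
    }
}
```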
You can also try off-site load testing, bearing in mind that, per Amazon:
EC2 Compute Unit (ECU) – One EC2 Compute Unit (ECU) provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.
You can arrange hardware equivalent to, say, 1, 2 and 4 ECUs and do your load testing there, watching memory consumption with performance counters. That should give you some clue as to what is needed, although in my opinion you will be better off load testing in the actual EC2 environment.
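If you go the performance-counter route, a minimal sampling sketch (assuming Windows and the standard counter names, including the ASP.NET Applications category) could look like this:

```csharp
// Minimal performance-counter sampling sketch for use while a load test runs.
// Counter categories/names are the standard Windows ones; run it on the box
// under test and log the output alongside your load-test results.
using System;
using System.Diagnostics;
using System.Threading;

class CounterSampler
{
    static void Main()
    {
        var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        var freeMemory = new PerformanceCounter("Memory", "Available MBytes");
        var requestsPerSec = new PerformanceCounter(
            "ASP.NET Applications", "Requests/Sec", "__Total__");

        // The first read of a rate counter always returns 0, so prime them.
        cpu.NextValue();
        requestsPerSec.NextValue();

        for (int i = 0; i < 60; i++)                 // sample once a second for a minute
        {
            Thread.Sleep(1000);
            Console.WriteLine(
                $"CPU {cpu.NextValue():F0}%  " +
                $"Free RAM {freeMemory.NextValue():F0} MB  " +
                $"ASP.NET req/s {requestsPerSec.NextValue():F1}");
        }
    }
}
```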

Related

How to handle scaling when requests per minute go from 500 to 5,000 instantly

I have an application that spikes from 500 RPM to 5,000 and stays there for 20-30 minutes. I know that's not a ton of requests, but it's the magnitude of the jump that is killing me. AWS EC2 takes about 5 minutes to scale up, so that's not helpful when things move so fast. Maybe multiple DBs that handle different pieces of the application?
How would you go about analyzing this and thinking about infrastructure if you will always go from 500 to 5,000 RPM or higher in one minute?
This is the graph from my AWS logs:
If you can predict when demand will increase, you can automate provisioning of new instances ahead of time. If you can't, then you need to do proper capacity planning: how many servers/containers do you need running to sustain the load with an acceptable user experience? That will be the key thing to determine.
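For the predictable-demand case, one option is a scheduled scaling action on an Auto Scaling group. Below is a hedged sketch using the AWS SDK for .NET (AWSSDK.AutoScaling); the group name, action name, schedule and sizes are placeholder assumptions, not values from the question.

```csharp
// Hedged sketch: pre-scale an Auto Scaling group ahead of a known daily spike
// using a scheduled action, instead of reacting to the spike after it starts.
// Group name, action name, schedule and sizes are placeholder assumptions.
using System.Threading.Tasks;
using Amazon.AutoScaling;
using Amazon.AutoScaling.Model;

class ScheduledScaling
{
    static async Task Main()
    {
        using var autoScaling = new AmazonAutoScalingClient();

        // Scale out to 10 instances every weekday at 08:50 UTC, ten minutes
        // before the expected 09:00 spike.
        await autoScaling.PutScheduledUpdateGroupActionAsync(
            new PutScheduledUpdateGroupActionRequest
            {
                AutoScalingGroupName = "web-tier-asg",        // placeholder group name
                ScheduledActionName  = "pre-spike-scale-out", // placeholder action name
                Recurrence           = "50 8 * * 1-5",        // cron expression, UTC
                MinSize              = 10,
                MaxSize              = 20,
                DesiredCapacity      = 10
            });
    }
}
```

A reactive scaling policy can still sit alongside this to catch anything the schedule does not anticipate.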
You should also look at implementing asynchronous messaging patterns that offload the spike, although this may come with some performance degradation.
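To make the messaging idea concrete, here is a hedged sketch that pushes the expensive work onto an SQS queue using the AWS SDK for .NET (AWSSDK.SQS); the queue URL and payload shape are placeholder assumptions.

```csharp
// Hedged sketch: instead of doing the expensive work inside the web request,
// enqueue it and let a worker fleet drain the queue at its own pace.
// Queue URL and payload shape are placeholder assumptions.
using System.Text.Json;
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

class SpikeOffload
{
    private static readonly AmazonSQSClient Sqs = new AmazonSQSClient();
    private const string QueueUrl =
        "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"; // placeholder

    // Called from the request path: cheap and fast even during a spike.
    public static async Task EnqueueAsync(object workItem)
    {
        await Sqs.SendMessageAsync(new SendMessageRequest
        {
            QueueUrl = QueueUrl,
            MessageBody = JsonSerializer.Serialize(workItem)
        });
    }
}
```

A separate worker fleet (or a Lambda consumer) then drains the queue, so the web tier only pays the cost of a single SendMessage call per request.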
One additional consideration would be moving to a serverless architecture like AWS Lambda. This likely wouldn't fully solve the problem, but it would give you more ability to quickly provision on-demand infrastructure.

ASP.NET website on AWS EC2 t2.medium

We have an ASP.NET application currently running on a shared hosting provider.
The application specifications are:
.NET 4.5 Web Forms website
One MS SQL database, about 100 MB in size
Web application size including images: ~1.2 GB
Page visits/day: ~2,500
Bandwidth consumed/day: ~200 MB
We are not happy with the hosting provider and have faced a lot of issues recently. Our application requires Full trust for some of its services, and the hosting provider is unwilling to set the application's trust level to Full.
Statistics obtained from the hosting provider over one week: we have around 2,500 page visits a day on weekdays and 500 page visits a day on weekends.
Based on the above information, we are planning to move to AWS. I was thinking of going with a t2.medium reserved Windows instance (2 vCPUs and 4 GB RAM) and EBS storage, which will cost us around 50 USD a month.
Since we have a small database by enterprise standards, I was thinking of hosting it on the same instance. We are also planning to go with SQL Server Express, as this does not require licensing.
I have read a lot of documentation, but I am not able to reach a conclusion about which instance size to go for, and whether a t2.medium EC2 instance will serve my purpose.
1. It would be great if anyone can tell me whether the above t2.medium EC2 infrastructure will suit our needs.
2. Do I need a t2.medium instance, or can I go for a lower configuration?
I am open to suggestions for changing the above infrastructure as well.
Instead of quoting page visits, etc., you are better off showing RAM and CPU utilisation.
A couple of things to consider:
1 - The t series uses burstable CPUs. That means you gain 'credits' while your CPU usage stays below a baseline (roughly 20% per vCPU on a t2.medium) and spend them whenever you burst above it. So if your average workload is either very low or very bursty, the t series is a good choice, but if your workload is consistently high, it is a very bad choice.
2 - How static is the website? You might see benefit in decoupling the database onto an RDS instance that matches your needs, and then running the website on a spot instance to reduce cost, for example (a connection-string sketch follows this list).
3 - With respect to configuration requirements, only you can answer this.
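To make point 2 concrete: decoupling the data tier usually just means pointing the application's connection string at the RDS endpoint instead of the local SQL Server instance. The endpoint, database name and credentials below are hypothetical placeholders.

```csharp
// Hypothetical connection string after moving the database to RDS for SQL Server.
// The endpoint, database name and credentials are placeholders.
static class Db
{
    public const string RdsConnectionString =
        "Server=myapp-db.abc123example.us-east-1.rds.amazonaws.com,1433;" +
        "Database=MyAppDb;User Id=app_user;Password=<secret>;Encrypt=True;";
}
```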

Capacity planning for a service-oriented architecture?

I have a collection of SOA components that can handle a series of business processes. For example one SOA component imports user data, another runs analytics on it.
I'm familiar with business process modeling for manufacturing, i.e. calculating WIP, throughput, cycle times, utilization etc. for each process. Little's Law, theory of constraints, etc.
Can I apply this approach to capacity planning for my SOA architecture, or is there a more rigorous / more widely accepted approach?
This is a bit of a broad question; here are some guidelines, but there is no single perfect answer.
What you are looking for is Business Activity Monitoring used together with performance metrics reported from your servers.
BAM (Business Activity Monitoring) will allow you to measure how many orders per second you are processing, how many sales you have made today, and so on. Alongside that, you monitor and collect information such as CPU usage, network bandwidth, disk I/O performance, memory usage and other technical performance metrics. On Windows you can use performance counters for this; in the Linux world there are various tools and techniques you can use.
Using the number of orders placed, you can then look at the performance statistics of the systems behind the order-placing software to get some indication of what is happening.
For example: we process 10 orders a second on average, using roughly 8 GB of RAM on the ESB server where the order service is hosted, and we are seeing an average increase of 25% of today's volume per month (about 2 GB of extra memory per month). We have noticed several alerts about swapping to disk when orders are at their peak. At that rate demand roughly doubles every four months, so to keep up over the next year we will need about 3 × 8 GB of extra memory, i.e. 32 GB in total. Now you can decide on the implementation: do you build a cluster of four machines with 8 GB of RAM each and load balance across them, or do you scale up a single larger box?
Using this information you can start to get a good idea of where your limits are and what you need to budget for in the future.
Go look at some BAM tools and some monitoring tools and see what suits you.

Host ASP.NET website - what would be an ideal hardware configuration?

I am studying various ASP.NET deployment approaches, and a basic question has come up: is there any rule of thumb for defining the environment? What could be called a 'good' setup if I have to support 1,000 concurrent users (requests)?
I understand that there are many factors, such as how the application is designed. But assuming everything else is great, what configuration should I look for: which processor, how much RAM, and so on?
Also, how many concurrent users should the configuration below be able to support?
CPU: Dual 3.40 GHz Intel Xeon (Hyper-Threaded)
Memory: 3 GB
OS: Windows Server 2003 SP2
Thanks for the help.
Having been on both sides of the equation (web developer and hardware engineer), my current opinion is that the answer involves both of those sides as well.
Your hardware needs to be not only sufficient for general usage, but it also has to cope with reasonable unexpected peaks and failures - which means that it needs to be redundant, and in excess of your capacity planning.
Your software needs to be designed so it's easily made redundant - there's no point in speccing a tiered hardware architecture (now or for future planning) if the software is going to require a significant amount of changes to handle it.
Your software also needs to be designed so that sudden unexpected peaks in resource usage don't become a regular occurrence without an external reason (e.g. a marketing campaign).
I know that you say you understand the non-hardware factors, but the real answer to your question is that there is no real way to answer it without knowing the other factors - each situation and circumstance is unique, and requires a unique solution.
However, in an effort to add generalised recommendations, try these:
CPU - choose something with a lot of cache, and individual cache per core as well. This will do wonders to speed up the system. I typically go for dual-core, dual-processor at a minimum (for a total of 4 cores on two separate physical CPUs). Processor speed ratings don't really matter as much as you might think these days.
Memory - fast memory, and a minimum of 8 GB of it. Use the smallest DIMMs possible for the server.
Hard disk - SAS 15K RPM at a minimum, RAID 6 for the data partition on one controller, RAID 1 or 6 for the system partition on another controller. Choose a good-quality controller backed by a good support or warranty package - your controller is no good if it dies in 3 years' time and you can't get a replacement.
But above all, don't just install the OS and the app and leave it be: profile the setup as much as possible, and don't be afraid of making changes to optimise for your individual setup (within reason). Move your ASP.NET temporary files to a fast disk (or a RAM disk - since they will be rebuilt anyway, there is no need to worry about losing them). Move the database to a second server, with a crossover 1 Gbit link between the two. Turn off disk maintenance in the OS, and turn off services you do not need.
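For the temporary-files suggestion, the ASP.NET compilation directory can be redirected in web.config; a minimal sketch, assuming a faster volume is mounted at the path shown (the path itself is a placeholder):

```xml
<!-- web.config sketch: redirect ASP.NET's temporary compilation files to a
     faster disk. The path is a placeholder; the application pool identity
     needs write access to it. -->
<configuration>
  <system.web>
    <compilation tempDirectory="D:\FastDisk\AspNetTemp" />
  </system.web>
</configuration>
```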
Good luck!

Best way to determine the number of servers needed

How much traffic can one web server handle? What's the best way to see if we're beyond that?
I have an ASP.Net application that has a couple hundred users. Aspects of it are fairly processor intensive, but thus far we have done fine with only one server to run both SqlServer and the site. It's running Windows Server 2003, 3.4 GHz with 3.5 GB of RAM.
But lately I've started to notice slowdowns at various times, and I was wondering what's the best way to determine whether the server is overloaded by the application's usage or whether I need to fix the application itself (I don't really want to spend a lot of time hunting down little optimizations if I'm just expecting too much from the box).
What you need is some info on capacity planning.
Capacity planning is the process of planning for growth and forecasting peak usage periods in order to meet system and application capacity requirements. It involves extensive performance testing to establish the application's resource utilization and transaction throughput under load. First, you measure the number of visitors the site currently receives and how much demand each user places on the server, and then you calculate the computing resources (CPU, RAM, disk space, and network bandwidth) that are necessary to support current and future usage levels.
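As a worked illustration of that calculation, here is a back-of-the-envelope sizing sketch; every input number is an assumption for illustration, not a measurement from this application.

```csharp
// Back-of-the-envelope capacity estimate. Every input here is an assumption
// for illustration; replace them with measured values from your own monitoring.
using System;

class CapacitySketch
{
    static void Main()
    {
        double pageViewsPerDay  = 50_000; // assumed daily traffic
        double peakHourShare    = 0.20;   // assume 20% of traffic lands in the busiest hour
        double cpuMsPerRequest  = 60;     // assumed CPU cost of one request, in milliseconds
        double targetCpuPerCore = 0.70;   // keep each core below 70% busy at peak

        double peakRequestsPerSec = pageViewsPerDay * peakHourShare / 3600;
        double coresNeeded = peakRequestsPerSec * (cpuMsPerRequest / 1000) / targetCpuPerCore;

        Console.WriteLine($"Peak load: {peakRequestsPerSec:F1} requests/second");
        Console.WriteLine($"CPU cores needed at peak: {Math.Ceiling(coresNeeded)}");
    }
}
```

The same pattern applies to memory, disk and bandwidth: measure per-request cost, multiply by the peak rate, and add headroom.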
If you have access to some profiling tools (such as those in the Team Suite edition of Visual Studio), you can set up a testing server, run some synthetic requests against it, and see if there's any specific part of the code taking unreasonably long to run.
You should probably check some graphs of CPU and memory usage over time before doing this, to see whether the server is actually the bottleneck. (A number akin to the UNIX "load average" can be a useful metric - essentially the average number of threads wanting CPU time in each time slice; on Windows, the closest equivalent is the "System\Processor Queue Length" performance counter.)
Also check the obvious, that you aren't running out of bandwidth.
Measure, measure, measure. Rico Mariani always says this, and he's right.
Measure req/sec, RAM, CPU, Sessions, etc.
You may come up with a caching strategy (Output caching, data caching, caching dependencies, and so on.)
Also look at how your SQL Server is doing: indexes are a good place to start, but they're not the only thing to look at.
On that hardware, a .NET application should be able to serve about 200-400 requests per second. If you have only a few hundred users, I doubt you are seeing even 2 requests per second, so I think you have a lot of spare capacity on that box, even with SQL Server running.
Without knowing all of the details, I would say no, you will not see any performance improvement by adding servers.
By the way, if you're not using the Output Cache, I would start there.
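If output caching is new to the application, a minimal Web Forms sketch might look like the following; the page directive, cache key, timings and the DataSummary type are all placeholder assumptions.

```csharp
// Output/data caching sketch for a Web Forms page. The page directive below
// would go at the top of the .aspx file; the key, timings and DataSummary
// type are placeholder assumptions.
//
//   <%@ OutputCache Duration="60" VaryByParam="None" %>
//
// Programmatic data caching in the code-behind, so an expensive query runs at
// most once every five minutes instead of on every request:
using System;
using System.Web;
using System.Web.Caching;

public static class ReportCache
{
    public static DataSummary GetSummary()
    {
        var cache = HttpRuntime.Cache;
        if (cache["report-summary"] is DataSummary cached)
            return cached;

        var summary = LoadSummaryFromDatabase();    // the expensive part
        cache.Insert(
            "report-summary",
            summary,
            null,                                   // no cache dependency
            DateTime.UtcNow.AddMinutes(5),          // absolute expiration
            Cache.NoSlidingExpiration);
        return summary;
    }

    // Placeholder for the real data access; the shape of DataSummary is assumed.
    private static DataSummary LoadSummaryFromDatabase() => new DataSummary();
}

public class DataSummary { }
```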
