We have an ASP.NET application currently running at a shared hosting provider.
The application specifications are:
.NET 4.5 Web Forms website
1 MSSQL database - 100 MB
Web application size including images - 1.2 GB
Page visits/day - ~2,500
Bandwidth consumed/day - ~200 MB
We are not happy with the hosting provider and have faced a lot of issues recently. Our application requires the Full trust level for some of its services, and the hosting provider is unwilling to set the application's trust level to Full.
Statistics obtained from the hosting provider over 1 week (see image below):
(We have around 2,500 page visits a day on weekdays and 500 page visits on weekends.)
Based on the above information, we are planning to move to AWS. I was thinking of going with a t2.medium reserved Windows instance (2 cores and 4 GB RAM) and EBS storage, which will cost us around 50 USD a month.
Since we have a small database compared to enterprise standards, I was thinking of hosting it on the same instance. We are also planning to go with SQL Server Express, as that edition does not require licensing fees.
I have read a lot of documentation but have not been able to reach a conclusion about which instance size to go for. Will a t2.medium EC2 instance serve my purpose?
1. It would be great if anyone can tell me whether the above t2.medium EC2 infrastructure will suit our needs.
2. Do I need a t2.medium instance, or can I go for a lower-level configuration?
I am open to suggestions on changing the above infrastructure as well.
Instead of quoting page visits, etc., you are better off showing RAM and CPU utilisation.
A few things to consider:
1 - The t series uses a burstable CPU. That means you gain 'credits' while your CPU usage sits below a baseline, and you spend them when usage rises above it (I can't remember the exact baseline, but possibly around 10%?). So if your average workload is very low or very bursty, the t series is a good choice; but if your workload is fairly constant and high, it is a very bad choice. You can check this from CloudWatch (see the monitoring sketch after this list).
2 - How static is the website? You might see a benefit in decoupling the database onto an RDS instance that matches your needs, and then running the website on a spot instance to reduce cost, for example.
3 - With respect to configuration requirements, only you can answer this.
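If you do trial a t2, this is the kind of check I mean. A minimal sketch (Python with boto3; the instance ID and region are placeholders, and it assumes AWS credentials are already configured) that pulls a week of CPU utilisation and credit balance from CloudWatch:

```python
# Pull a week of hourly CPU utilisation and t2 credit balance from
# CloudWatch, to judge whether a burstable instance fits the workload.
# The instance ID and region below are placeholders.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
end = datetime.now(timezone.utc)
start = end - timedelta(days=7)

for metric in ("CPUUtilization", "CPUCreditBalance"):
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName=metric,
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        StartTime=start,
        EndTime=end,
        Period=3600,          # one datapoint per hour
        Statistics=["Average"],
    )
    for p in sorted(stats["Datapoints"], key=lambda d: d["Timestamp"]):
        print(f"{metric} {p['Timestamp']:%Y-%m-%d %H:%M} {p['Average']:.1f}")
```

If the credit balance trends steadily downward during business hours, the workload is too constant for a t instance and you should look at a fixed-performance type instead.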
Related
I have a WordPress site hosted on a dedicated server with the configuration below.
CPU (8 cores): Intel Xeon CPU E3-1265L v3 @ 2.50GHz,
Memory: 24GB,
Currently Used Storage: 350GB,
MySQL size: 3GB,
I have a maximum of 20,000 daily visitors, and the maximum number of concurrent users at any point would be 400.
I would like to know which Google Cloud Compute Engine machine type I should choose to serve this many requests without compromising performance. Also, what other resources do I need to buy?
Is AWS better than GCP in this case?
Nobody can give you an answer to this type of question. It depends entirely on the application setup and the usage pattern. The best way to determine this would be to create an automated test system that simulates usage in a pattern similar to how your website will be used, then monitor the system (CPU, RAM usage, etc.) and measure performance.
Alternatively, you can choose to oversize the system and monitor real-life metrics, then scale the system down so that the metrics stay within acceptable ranges. It is relatively cheap to oversize systems in the cloud, given that it might only be for a week.
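As a starting point for the automated test, here is a rough load-test sketch in Python (the URL, concurrency, and request count are placeholders; a realistic test should replay your actual usage pattern) that you can run while watching CPU and RAM on the server:

```python
# Fire a fixed number of concurrent GET requests at a test deployment
# and report latency percentiles. Watch server-side CPU/RAM while it runs.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://test.example.com/"   # hypothetical test endpoint
CONCURRENCY = 50
TOTAL_REQUESTS = 1000

def hit(_):
    t0 = time.perf_counter()
    with urlopen(URL, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - t0

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(hit, range(TOTAL_REQUESTS)))

print(f"median: {latencies[len(latencies) // 2]:.3f}s")
print(f"p95:    {latencies[int(len(latencies) * 0.95)]:.3f}s")
```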
I am looking for general information and opinions on the scenario below. I think the first option is better and offers fewer headaches.
You have an IIS website (www.site.com) that is hit millions of times per day. You have 5 web servers serving the traffic. After a while, the worker processes begin to reach their limit. There are 3 worker processes and one app pool per server.
Option 1: Turn these 5 physical servers into virtual hosts and run 4 VMs from each. That increases the pool of servers to 20. Drop worker processes to 2 and have 1 app pool per VM.
Option 2: Add 5 IPs to each physical server and 5 instances of the same site on each physical server. For example, Server 1 will have 5 IPs, 5 IIS app pools, and 5 IIS websites named something like Site1, Site2, Site3, Site4, Site5. Yet all of these serve www.site.com.
I personally think Option 2 is ridiculous.
Please let me know what you think.
Good that you ask. Both options seem to go in the wrong direction.
Option 1: Turning a physical server into a host and setting up 4 virtual machines on it means each VM gets a quarter of the memory, processor cores, and processor time. The host itself also consumes some resources. This means you end up with less capacity after the change.
Option 2: You're right, it is ridiculous. It will not improve anything, just add useless complexity.
If your management has absurd ideas like the ones you describe, you should really hire a consultant. At least you would get some realistic scenarios to choose from.
Is there something that requires the web server to run on premises? I'd recommend moving to a managed server with a hosting company. They would take care of system administration, which would take a burden off your system administrator (who doesn't seem to be very competent).
Since this question is from a user's (developer's) perspective, I figured it might fit better here than on Server Fault.
I'd like an ASP.NET hosting that meets the following criteria:
The application seemingly runs on a single server (so no need to worry about e.g. session state or even static variables)
There is an option to scale storage, memory, DB size and CPU-power up and down on demand, in an "unlimited" way
I have researched this, but there seems to be no such platform that completely abstracts the underlying architecture away and thus offers the ease of use of simple shared hosting with "unlimited" scalability.
"Single server" and "scalability" are mutually exclusive, I'm afraid. But a good load-balancer will apply affinity to requests so you don't need to needlessly double-cache data on multiple servers.
However, well-designed web applications are easy to port to a multiple-server scenario.
I think your best option is something like Windows Azure Websites (separate from Azure Web Workers), which run on a VM you don't have access to. The VM itself provides as much power as is necessary to run your website, so you don't need to worry about allocating extra CPU power or RAM.
Things like SQL Server are handled separately, but they are very cheap to run, and you can drag a slider to give yourself more storage space.
This can still be accomplished by using a cloud host like www.gearhost.com. Apps live in the cloud and by default get 1 worker node, so session stickiness is maintained. You can then scale that application to larger workers to accomplish what you need, all while maintaining HA and LB. You can even add multiple web workers; each visitor is tied to a particular node to maintain session state even though you might have 10 workers, for example. It's an easy and cheap way to scale a site from 100 visitors to many millions in just a few clicks.
I saw this question:
How many users on one azure instance before I hit performance issues?
It discusses how many users an Azure instance could support for a webpage. I'm wondering whether this would be any different for a webpage vs. a web server that client applications (such as mobile phones) call into to get data. For example, if you have a single Azure web role running that exposes a REST endpoint, how many devices could call into the service before it starts to buckle under the pressure?
How long is a string? :-)
If your app computes one million digits of pi on each web request, it will probably handle fewer concurrent web requests than an app that replies to each web request with "hello world."
(This is another, blunter, version of David's answer.)
A Web Role instance is merely a Windows Server 2008 R2 (or 2008 SP2) virtual machine of a given size (1-8 cores, 1.75-14 GB usable RAM, 100-800 Mbps network). You can run websites, different web servers (Tomcat, for example), WCF services (through IIS or standalone ServiceHosts), etc.
Scaling is going to depend heavily on the app itself: Is it CPU-constrained? Network-constrained? Do you have a queue-based workload whose backlog is growing?
Sometimes it's critical to scale up to larger VMs just to handle one of the constraints mentioned. It's always wise to pick the smallest VM size that can run a baseline load (e.g. 1 or 2 users), then scale out to more instances as needed.
It's important to identify the key performance indicators (KPIs) for your app. You can then automate your scaling with something like the Autoscaling Application Block (WASABi).
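To make that concrete, here is an illustrative threshold rule of the kind a WASABi-style autoscaler automates. The KPIs, thresholds, and instance limits are hypothetical, not WASABi's actual API; a real deployment would feed them from monitoring data:

```python
# Illustrative scale-out/scale-in decision based on two hypothetical KPIs:
# average CPU percentage and queue backlog length.
def desired_instance_count(current: int, cpu_pct: float, queue_backlog: int,
                           min_count: int = 1, max_count: int = 8) -> int:
    if cpu_pct > 75 or queue_backlog > 500:      # scale out under pressure
        return min(current + 1, max_count)
    if cpu_pct < 25 and queue_backlog < 50:      # scale in when idle
        return max(current - 1, min_count)
    return current                               # stay put in between

print(desired_instance_count(current=2, cpu_pct=82.0, queue_backlog=120))  # -> 3
```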
Here's a reference page with all VM sizes, with details about CPU, local disk, network bandwidth, and RAM.
Hosting a .NET application in Amazon EC2: what would be the optimum configuration for a group that has 525 employers and around 85,000 employees? I have been googling this for the past week but could not find a reliable answer.
You might want to consider hosting your application on AppHarbor. We'll seamlessly scale your application, and you won't have to worry about sizing your infrastructure up front.
(disclaimer, I'm co-founder of AppHarbor)
Perhaps you need to provide more information to get better answers. For example: what does your application do? How many users does it have? What is the relevance of "525 employers and around 85,000 employees" - does it indicate the amount of data or the number of users? How many users will be concurrent at a time? What will be the average request time? What will be the usage pattern? How much memory does it need? Is your app CPU-intensive or IO-intensive? If it's IO-intensive, where exactly is your data stored?
Having said all that, you need not worry too much about provisioning/scaling. Amazon EC2 offers on-demand resourcing, so you can easily scale up your configuration as needed.
If you really want to find the optimal configuration, the only way is to load test your application (with typical usage patterns/scenarios). Decide on your parameters, such as average response time, and find out the user limits served by, say, 1, 4, and 8 ECUs (Elastic Compute Units). You can load test using the standard instance sizes - small, large, and extra large - and then easily interpolate to project your actual ECU and memory needs (see the sketch below). Based on that, you can choose the optimal configuration.
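For the interpolation step, a minimal sketch (the measured numbers are made up for illustration; substitute your own load-test results):

```python
# Given the maximum user count sustained at a few ECU sizes, linearly
# interpolate the ECUs needed for a target concurrent-user count.
measured = [(1, 120), (4, 430), (8, 800)]   # (ECUs, max users sustained)

def ecus_needed(target_users: float) -> float:
    for (e0, u0), (e1, u1) in zip(measured, measured[1:]):
        if u0 <= target_users <= u1:
            # linear interpolation between the two surrounding data points
            return e0 + (e1 - e0) * (target_users - u0) / (u1 - u0)
    raise ValueError("target outside the measured range; test larger sizes")

print(f"~{ecus_needed(600):.1f} ECUs for 600 concurrent users")
```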
You can try off-site load testing, considering the fact that, as per Amazon:
EC2 Compute Unit (ECU) – One EC2 Compute Unit (ECU) provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.
You can arrange hardware equivalent to, say, 1, 2, and 4 ECUs and do your load testing, looking at memory consumption with performance counters. That should give you some clue as to what is needed. IMO, you will be better off load testing in the actual EC2 environment.