When to scale up an Azure Standard Instance Size - asp.net

I have 19 websites running on Azure Standard Websites, with the instance size set to Small.
Right now I can't scale out to multiple instances (or use auto scale) because some of these sites are legacy sites that won't play nice across multiple instances.
The sites running now are fairly basic, but 3 of them are growing fast. I don't want them all bogged down because of the small instance, but I also don't want to pay for a large instance if I don't have to.
How do I know when I should scale up to a medium or large instance?
There doesn't seem to be any way to see CPU load in the portal, only CPU time.

Instead of scaling up, you need to scale out. You can set a CPU metric and the number of instances you want for that metric.

Related

DynamoDB DAX metrics

I want to build a serverless app using Lambda and DynamoDB. To create a cluster I must know how a particular configuration will behave with the structure and size of my table. To find that out I was going to use CloudWatch metrics, but as it turns out they do not reflect reality and can't show the "needs" of the cluster at a particular moment in time. Has anyone encountered this problem who can suggest how best to determine the cluster configuration with respect to the table parameters and the number and type of requests?
So much of it depends on your particular workload, your expected hit rate, the distribution of key accesses, etc. There are a few rules of thumb, but these may change over time due to changes in the service, so it's always best to do your own testing with your own workloads:
Within a family (t2, r3, r4) latency is pretty much constant, although bigger node types tend to be more consistent (lower p99).
Throughput scales ~linearly with node size (i.e. a 2xl is ~2x the throughput of an xl)
Throughput scales ~linearly with cluster size
TPS scales ~inversely with response size - if a node handles 50,000 1 kB gets, it'll do about 5,000 10 kB gets.
My recommendation is to figure out your workload, test on a few different cluster sizes to get some baselines, and use the notes above to scale. Do note that DAX doesn't currently allow changing the node type of a cluster, and scaling a cluster out only increases throughput, not cacheable memory.
As for better CloudWatch metrics, it would be helpful to know what you're looking for - it might be better to start a thread in the AWS forums for that discussion.
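To turn those rules of thumb into a rough starting point, here is a minimal sizing sketch. Every number in it (baseline TPS per node, item size, target load) is a made-up placeholder, not a measured DAX figure - substitute baselines from your own tests.

```python
# Rough DAX cluster sizing from the rules of thumb above.
# All baseline figures here are hypothetical placeholders -- replace
# them with numbers measured against your own workload.

def estimate_nodes(target_tps, item_size_kb,
                   baseline_tps_per_node=50_000, baseline_item_kb=1.0):
    """TPS per node scales roughly inversely with item size, and
    cluster throughput scales roughly linearly with node count."""
    tps_per_node = baseline_tps_per_node * (baseline_item_kb / item_size_kb)
    # ceiling division, with at least one node
    return max(-(-int(target_tps) // int(tps_per_node)), 1)

if __name__ == "__main__":
    # Hypothetical workload: 120,000 reads/sec of ~4 kB items
    print(estimate_nodes(target_tps=120_000, item_size_kb=4.0))  # -> 10
```

Keep in mind, as noted above, that adding nodes only adds throughput; if the working set doesn't fit in memory, a larger node type is the lever to pull.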

scaling an azure website

I have a Standard website in Azure with a small instance, (1 core and 1.75 GB memory). It seems to be coping fine and handling the requests smoothly, although I am expecting a lot more within the week.
It is unclear though under what circumstances I should be looking to scale the instance size to the next level ie to Medium. (Besides MemoryWorkingSet of course, rather obvious :))
i.e. will moving up to a Medium instance resolve high CPU time?
What other telltale signs should I be watching for?
I am NOT comfortable scaling the number of instances to more than one at the moment until I resolve some cache issues.
I think the key point I am trying to understand is the link between the metrics provided and the means of scaling available regardless of it being scaled horizontally or vertically.
I am trying to keep the average response time as low as possible as the number of users that interact with the website increase.
Which of the other metrics will alert me that the load on the server is reaching its limits and that I need to scale vertically?
The idea behind scaling in Azure is to scale horizontally, i.e. add more instances, and Azure can do this for you automatically. If you can't add more instances, Azure can't do the scaling for you automatically.
You can move to a Medium instance and overall capacity will increase, but it is impossible to say what your application will require under heavy load. I suggest you run a profiler and a load test to find the weak parts of your app and improve them before you see an actual increase in usage.

Script to automatically grow LVM partition CentOS

I'm looking for a script to check the size of a particular LVM volume on CentOS 6.5 and when it reaches a certain threshold, have it automatically extend the partition and online re-size the file system.
I have this particular machine monitored, and could do it manually, but I saw a script once to do just this.
I have plenty of disk space on the physical volumes but, since it's easier to expand when needed than reduce later, I'd rather expand my logical partitions only when they start to fill up. There are several logical volumes on this machine, but only one that regularly grows.
Any tips are appreciated, and if the overall best thing to do is just to expand the volume manually when the time comes, that advice is welcome as well!
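For what it's worth, here is a minimal sketch of the kind of script you describe. The volume path /dev/vg0/data, the mount point /data, the 85% threshold, and the +5G step are all placeholders for your layout; it uses plain `df -P` to read usage and `lvextend -r`, which resizes the filesystem online along with the LV. It's written for Python 3, so on a stock CentOS 6.5 box you'd adapt it to the system Python or rewrite it as a shell script; run it from cron and test it against a throwaway volume first.

```python
#!/usr/bin/env python3
"""Grow an LVM logical volume when its filesystem passes a usage threshold.

Sketch only: /dev/vg0/data, /data, the threshold, and the growth step
are placeholders. lvextend -r (--resizefs) grows the filesystem online.
"""
import subprocess

MOUNT_POINT = "/data"        # filesystem to watch (placeholder)
LV_PATH = "/dev/vg0/data"    # logical volume behind it (placeholder)
THRESHOLD_PCT = 85           # grow once usage reaches this percentage
GROW_BY = "+5G"              # extend in 5 GiB steps

def used_percent(mount_point):
    # 'df -P' prints one portable line per filesystem;
    # the fifth column of the last line is the Use% figure.
    out = subprocess.check_output(["df", "-P", mount_point]).decode()
    return int(out.splitlines()[-1].split()[4].rstrip("%"))

def main():
    pct = used_percent(MOUNT_POINT)
    if pct >= THRESHOLD_PCT:
        print("%s at %d%%, extending %s by %s" % (MOUNT_POINT, pct, LV_PATH, GROW_BY))
        # -r resizes the filesystem together with the LV, online
        subprocess.check_call(["lvextend", "-r", "-L", GROW_BY, LV_PATH])
    else:
        print("%s at %d%%, nothing to do" % (MOUNT_POINT, pct))

if __name__ == "__main__":
    main()
```

A sanity check on the free space left in the volume group (vgs) before extending would be a sensible addition, so the script stops before it exhausts the physical volumes.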

What sort of scaling can be expected with mpiblast?

On our HPC cluster, one of the users runs mpiblast jobs on upward of 30 cores. These will typically end up on about 10 different nodes, the nodes normally being shared between users. Although these jobs occasionally scale fairly well and can effectively use about 90% of the cores available, often scaling is very bad with jobs only accumulating CPU-time corresponding to around 10% of the cores available.
Should mpiblast scale better in general? Does anyone know what factors might lead to poor scaling?
mpiblast should work faster in general, but there's no guarantee that the scaling will be better. A few factors are at play:
For parallel processing, you need to make sure that the nodes being used are not sitting idle or otherwise under-utilized. That is one of the main reasons for poor scaling!
Also, it depends on the files you are using for BLAST. mpiblast has a number of parameters, for instance, and you should go through them first.
But in general, mpiblast should scale well when the nodes are used evenly, i.e. the work is well load-balanced :)

What is the most accurate method of estimating peak bandwidth requirement for a web application?

I am working on a client proposal and they will need to upgrade their network infrastructure to support hosting an ASP.NET application. Essentially, I need to estimate peak usage for a system with a known quantity of users (currently 250). A simple answer like "you'll need a dedicated T1 line" would probably suffice, but I'd like to have data to back it up.
Another question referenced NetLimiter, which looks pretty slick for getting a sense of what's being used.
My general thought is that I'll fire the web app up and use the system the way I anticipate it being used at the customer, at a fairly leisurely pace, over a certain time span, and then multiply the bandwidth usage by the number of users and divide by the time.
This doesn't seem very scientific. It may be good enough for a proposal, but I'd like to see if there's a better way.
I know there are load tools available for testing web application performance, but it seems like these would not accurately simulate peak user load for bandwidth testing purposes (too much at once).
The platform is Windows/ASP.NET and the application is hosted within SharePoint (MOSS 2007).
In lieu of a good reporting tool for bandwidth usage, you can always do a rough guesstimate.
N = Number of page views in busiest hour
P = Average page size
(N * P) / 3600 = Average traffic per second.
The server itself will have a lot more internal traffic, probably to a DB server/NAS/etc. But outward facing, that should give you a very rough idea of utilization. Obviously you will need to provision well above that value, as you never want to be 100% utilized and you need to allow for other traffic.
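As a worked example of that formula (the inputs are invented placeholders, not figures from your application):

```python
# Back-of-the-envelope estimate using the formula above.
# Both inputs are invented placeholders -- substitute your own figures.

page_views_busiest_hour = 2_000   # N: page views in the busiest hour
avg_page_size_kb = 150            # P: average page weight (HTML + images), in kB

avg_kb_per_sec = page_views_busiest_hour * avg_page_size_kb / 3600.0
avg_kbit_per_sec = avg_kb_per_sec * 8

print("Average outbound traffic: %.0f kB/s (~%.0f kbit/s)"
      % (avg_kb_per_sec, avg_kbit_per_sec))
# -> roughly 83 kB/s, or about 670 kbit/s, for this made-up workload
```

Compare that against the link you're proposing (a T1 is roughly 1.5 Mbit/s) and leave generous headroom, since traffic within the busiest hour is far burstier than its average.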
I would also not suggest using an arbitrary number like 250 users. Use the heaviest production day/hour as a reference. Double or triple it if you like; if you have good log files/user auditing, that will give you the expected distribution of user behavior and help make your guesstimate more accurate.
As another commenter pointed out, a data center is a good idea when redundancy and bandwidth availability become a concern. Your needs may vary, but do not dismiss the suggestion lightly.
There are several additional questions that need to be asked here.
Is it 250 total users, or 250 concurrent users? If concurrent, is that 250 peak, or 250 typically? If it's 250 total users, are they all expected to use it at the same time (eg, an intranet site, where people must use it as part of their job), or is it more of a community site where they may or may not use it? I assume the way you've worded this that it is 250 total users, but that still doesn't tell enough about the site to make an estimate.
If it's a community or "normal" internet site, it will also depend on the usage - eg, are people really going to be using this intensely, or is it something that some users will simply log into once, and then forget? This can be a tough question from your perspective, since you will want to assume the former, but if you spend a lot of money on network infrastructure and no one ends up using it, it can be a very bad thing.
What is the site doing? At the low end of the spectrum, there is a "typical" web application, where you have reasonable size (say, 1-2k) pages and a handful of images. A bit more intense is a site that has a lot of media - eg, flickr style image browsing. At the upper end is a site with a lot of downloads - streaming movies, or just large files or datasets being downloaded.
This is getting a bit outside the threshold of your question, but another thing to look at is the future of the site: is the usage going to possibly double in the next year, or month? Be wary of locking into a long term contract with something like a T1 or fiber connection, without having some way to upgrade.
Another question is reliability - do you need redundancy in connections? It can cost a lot up front, but there are ways to do multi-homed connections where you can balance access across a couple of links, and then just use one (albeit with reduced capacity) in the event of failure.
Another option to consider, which effectively lets you completely avoid this entire question, is to just host the application in a datacenter. You pay a relatively low monthly fee (low compared to the cost of a dedicated high-quality connection), and you get as much bandwidth as you need (eg, most hosting plans will give you something like 500GB transfer a month, to start with - and some will just give you unlimited). The datacenter is also going to be more reliable than anything you can build (short of your own 6+ figure datacenter) because they have redundant internet, power backup, redundant cooling, fire protection, physical security... and they have people that manage all of this for you, so you never have to deal with it.
