How many users per second can a 1 MB page be served to through the 100 Mbps (12.5 MB/s) uplink port of a dedicated server? - networking

How many users per second can a 1 MB page be served to through the 100 Mbps (12.5 MB/s) uplink port of a dedicated server?
I am planning to increase the capacity of my dedicated server, as my current server is not able to manage the load of my application.
Hence, I need to understand the uplink port connection offered by various dedicated server providers.
In Amazon EC2 this is mentioned as Network Performance, which only provisions 10 Gigabit on its largest instances.
Please guide me.

Simply put, a 12.5MB/s connection is going to be able to serve a 1MB page to 12.5 users every second.
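For the arithmetic behind that number, here is a minimal sketch in Python, using the figures from the question and ignoring protocol overhead and traffic bursts:

    # Back-of-the-envelope estimate; figures taken from the question,
    # ignoring TCP/TLS overhead and uneven traffic.
    link_mbps = 100                    # uplink speed in megabits per second
    page_mb = 1.0                      # page size in megabytes

    link_mb_per_s = link_mbps / 8      # 100 Mbps = 12.5 MB/s
    print(link_mb_per_s / page_mb)     # -> 12.5 pages (users) per second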
That said, are you absolutely sure it's the network throughput that's causing the problem, rather than a CPU or memory limit? In my experience, the network link is very rarely the bottleneck.
Bear in mind that a 1MB page will often compress to far less than that, assuming the server's compression is configured correctly. And unless you're genuinely seeing 12.5 new users every second, they will likely have a lot of the static assets (images, scripts, etc) cached either in their browser or by an upstream proxy, so they won't be requested every time.
If you really are just serving a 1MB page to a very high number of users rather than being bound by CPU, then you might have more luck investigating a CDN (like Cloudflare or CloudFront) than simply upgrading to a quicker link.

Related

How to handle 20k concurrent listeners on an Icecast server

I want to know how to handle more than 20k listeners concurrently on an Icecast server. I am using liquidsoap as the audio stream generator (only one audio stream is distributed through the Icecast server). The server is configured on AWS. Further, I want to know whether I need to use a load balancer and a CDN to handle this much traffic.
Your main concern is bandwidth. Nothing else, bandwidth. You always run out of bandwidth first. Really.
You'll likely want to spread the load across multiple servers, e.g. with simple DNS round-robin, not least because multiple servers mean more bandwidth available.
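If you go the round-robin route, a quick way to see what clients get back is to resolve the hostname and look at the address list; a minimal Python sketch, where the hostname is a placeholder for your own stream host:

    # Resolve a hostname and list its A records; with round-robin DNS
    # you should see several addresses here. "stream.example.com" is a
    # placeholder, not a host from the question.
    import socket

    name, aliases, addresses = socket.gethostbyname_ex("stream.example.com")
    print(addresses)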
Feeding in a Primary+Relays (master/slave) topology is typical and documented. For more details I'd recommend the Icecast documentation and searching the Icecast mailing list archives.
There are some minor things like making sure that your ulimit for file descriptors is high enough for the Icecast process.
PS: In theory you can squeeze around 20k concurrent connections out of Icecast, but most of the time you won't have enough actual bandwidth to feed those anyway.
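To put numbers on that, here is a rough estimate in Python; the 128 kbps bitrate is an assumption, so substitute your actual stream bitrate:

    # Rough egress estimate for 20k concurrent listeners.
    listeners = 20_000
    bitrate_kbps = 128                    # assumed per-listener stream bitrate

    total_mbps = listeners * bitrate_kbps / 1_000
    print(f"{total_mbps:.0f} Mbps")       # -> 2560 Mbps, i.e. roughly 2.5 Gbps

That is well beyond what a single typical uplink provides, which is why the answer keeps coming back to bandwidth and multiple servers.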

Why do browsers limit the number of concurrent requests per domain?

I know that the maximum number of concurrent requests per domain varies depending on the browser, and that it is good practice to use CDNs to increase parallelism. But what is the reason for this? I can't find an answer for this anywhere.
Who and in what way would suffer if it were say 50 concurrent requests for a domain?
Each connection takes resources on the server (and other network infrastructure). The limit put in place by browsers is designed to try and avoid hammering the remote server too hard. Handling 1000 concurrent users with a limit of 4 per browser is much easier than with 50 (50k connections open).
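The arithmetic from that last sentence, spelled out in a small Python snippet (the 1000-user figure is just the example from the answer):

    # Total open connections for 1000 concurrent users at two
    # per-browser connection limits.
    users = 1_000
    for per_browser_limit in (4, 50):
        print(per_browser_limit, "per browser ->", users * per_browser_limit, "connections")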

Does an increase in the number of requests to the server make the website slow?

On my office website, each webpage has 3 CSS files, 2 JavaScript files, 11 images and 1 page request, for a total of 17 requests to the server. If 10,000 people visit my office site ...
Could this slow the website down because of the extra requests?
And are there any issues for the server due to the huge traffic?
My tiny office server has:
Intel i3 processor
Nvidia 2 GB graphics card
Microsoft Windows Server 2008
8 GB DDR3 RAM and
500 GB hard disk.
The website is developed in ASP.NET.
Net speed is 10 Mbps download and 2 Mbps upload, using a static IP address.
There are many reasons a website may be slow.
A huge spike in traffic.
Extremely large or non-optimized graphics.
A large number of external calls.
A server issue.
All websites should have optimized images, Flash files, and videos. Large media files slow down the overall loading of each page. Optimize each image; PNG images have improved weighted optimization that can offer better-looking images at smaller file sizes. You could also run a traceroute to your site.
Hope this helps.
This question is impossible to answer because there are so many variables. It sounds like you're hypothesising that you will have 10,000 simultaneous users; do you really expect there to be that many?
The only way to find out if your server and site hold up under that kind of load is to profile it.
There is a tool called Apache Bench (http://httpd.apache.org/docs/2.0/programs/ab.html) which you can run from the command line to simulate a number of requests to your server and benchmark it. The tool comes with an install of Apache; you can then simulate 10,000 requests to your server and see how the request time holds up. At the same time you can run Performance Monitor in Windows to diagnose any bottlenecks.
Example usage, taken from Wikipedia:
ab -n 100 -c 10 http://www.yahoo.com/
This will execute 100 HTTP GET requests, processing up to 10 requests concurrently, to the specified URL, in this example "http://www.yahoo.com".
I don't think it downloads your page dependencies (JS, CSS, images), but there are probably other tools you can use to simulate that.
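If you would rather script it yourself, here is a minimal sketch using only Python's standard library that fires concurrent GET requests and reports timings; the URL and request counts are placeholders, and like ab it only fetches the one URL, not its dependencies:

    # Minimal concurrent load-test sketch (standard library only).
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://www.example.com/"   # placeholder: replace with your own page
    TOTAL_REQUESTS = 100              # like ab's -n
    CONCURRENCY = 10                  # like ab's -c

    def fetch(_):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=30) as resp:
            resp.read()               # download the full response body
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        timings = list(pool.map(fetch, range(TOTAL_REQUESTS)))

    print(f"avg {sum(timings) / len(timings):.3f}s, max {max(timings):.3f}s over {len(timings)} requests")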
I'd recommend that you ensure that you enable compression on your site and set up caching, as this will significantly reduce the load and the number of requests for very little effort.
Rather than the hardware, you should think about your server's upload capacity. If your upload bandwidth is low, that will of course be a problem.
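As a rough sketch of what the 2 Mbps upload link in the question can push out (the ~1 MB per page view is purely an assumed figure; measure your real page weight):

    # What a 2 Mbps upload link can serve, assuming (hypothetically)
    # that each page view transfers about 1 MB across its 17 requests.
    upload_mbps = 2
    page_view_mb = 1.0                     # assumed total transfer per visitor

    upload_mb_per_s = upload_mbps / 8      # 2 Mbps = 0.25 MB/s
    print(upload_mb_per_s / page_view_mb)  # -> 0.25 page views per second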
The most likely reason is that one session locks all the other requests.
If you are not using session state, turn it off and check again.
Related:
Replacing ASP.Net's session entirely
jQuery Ajax calls to web service seem to be synchronous

IIS 7 - Does the number of HTTP connections matters?

I'm optimizing a very popular website and since the user base is constantly growing I'm interested in what matters when it comes to scaling.
Currently I am scaling by adding more CPU power / RAM memory to the server. This works nicely - even though the site is quite popular, currently CPU usage is at 10%.
So, if possible, I'd keep doing that. What I am worried about is whether I could get to the point where CPU usage is low but users have problems connecting because of the number of HTTP connections. Is it better to scale horizontally, by adding more servers to the cluster?
Thanks!
Eventually just adding more memory won't be enough. There are concurrent connection limits for TCP rather than IIS (though both factors come into play; IIS can handle about 3000 connections without strain).
You probably won't encounter the situation you describe, where CPU usage is low but the number of HTTP connections is high, unless it is a largely static site; the more connections are open, the higher the CPU usage.
But regardless of this, what you need for a popular site is redundancy. There is nothing more annoying to a user than the site being down because your sole server has gone offline for some reason. If you have two servers behind a load balancer, you can grow the site and even take a server offline with less fear of the whole site going down.

Monitor active web connections on IIS 7 in real time (perhaps throttle individual IP's)?

We develop a web app that manages files and resources for different users to download throughout the day on a web server with very limited upstream bandwidth.
Is there any way to monitor in real time how much upstream bandwidth is being taken up by individual connections to IIS (7.0)?
Ideally we'd like a way to see a list of each active IIS connection, the KB/s being delivered to each in real time, and the destination IP address.
As a super bonus: Is there any way to individually throttle connections/IP's so that they don't hog all the bandwidth?
Some prosumer-level software firewalls let you do this. If you configure IIS so that each worker process is easily distinguishable from the others, you can accomplish what you want using software like Net Limiter.
Have you looked into the Bit Rate Throttling module? It can be used to throttle media and non-media files at specified bit rates.
