Why do browsers limit the number of concurrent requests per domain?

I know that the maximum number of concurrent requests per domain varies by browser, and that it is good practice to use CDNs to increase parallelism. But what is the reason for this? I can't find an answer anywhere.
Who would suffer, and in what way, if the limit were, say, 50 concurrent requests per domain?

Each connection consumes resources on the server (and on other network infrastructure). The limit put in place by browsers is designed to avoid hammering the remote server too hard. Handling 1,000 concurrent users with a limit of 4 connections per browser (4,000 open connections) is much easier than with a limit of 50 (50,000 open connections).
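You can see the same idea from the client side: HTTP libraries let you cap connections per host, just as browsers do. A minimal Python sketch using aiohttp (the limit of 6 mirrors what many current browsers use; the URLs are made up):

```python
import asyncio
import aiohttp

MAX_PER_HOST = 6   # roughly what modern browsers allow per domain

async def fetch(session: aiohttp.ClientSession, url: str) -> None:
    async with session.get(url) as resp:
        await resp.read()

async def main() -> None:
    # No matter how many resources are requested from one domain,
    # at most MAX_PER_HOST transfers run at the same time.
    connector = aiohttp.TCPConnector(limit_per_host=MAX_PER_HOST)
    async with aiohttp.ClientSession(connector=connector) as session:
        urls = [f"http://example.com/asset{i}.png" for i in range(50)]
        await asyncio.gather(*(fetch(session, u) for u in urls))

if __name__ == "__main__":
    asyncio.run(main())
```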

Related

Detect high traffic peak in ASP.NET MVC

I'm building an ASP.NET MVC application that will be hosted on a relatively small dedicated server, and I'm currently in the process of optimizing its performance (trying to leverage most of the points stated in this post, except for the load balancing part; no scaling).
I expect that most of the time the traffic will be small and consistent, but on rare occasions it could peak.
The question is: can this be handled gracefully?
Is there a way to detect when there are too many requests for the server to handle?
Also, if such a peak is detected, is there a way to serve a static HTML page to new users notifying them that there is too much traffic?
I'm not exactly sure but I believe you're looking for request throttling.
Check this question and the provided answers. In short, you can use Dynamic IP Restrictions to block an unusually high number of concurrent requests, and you can use a throttle implementation; if interested, see the Throttling Pattern for more information.
If you're not worried about concurrent requests but about the volume of requests in general, then the only thing I would suggest is to limit the number of simultaneous executions of your most expensive controller actions, as in the sketch below.
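Here is a minimal sketch of that last idea, not ASP.NET-specific: a semaphore caps how many copies of an expensive handler run at once, and new requests get a static "too much traffic" page once the cap is hit. The limit of 10, the /report route, and the aiohttp server are all illustrative assumptions.

```python
import asyncio
from aiohttp import web

MAX_CONCURRENT = 10   # hypothetical cap for the expensive action
BUSY_PAGE = "<html><body>Too much traffic, please try again shortly.</body></html>"
semaphore = asyncio.Semaphore(MAX_CONCURRENT)

async def expensive_report(request: web.Request) -> web.Response:
    if semaphore.locked():   # every slot is taken: shed load with a static page
        return web.Response(text=BUSY_PAGE, content_type="text/html", status=503)
    async with semaphore:
        await asyncio.sleep(2)          # stand-in for the real expensive work
        return web.Response(text="report done")

app = web.Application()
app.router.add_get("/report", expensive_report)

if __name__ == "__main__":
    web.run_app(app, port=8080)
```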

To how many users per second can a 1 MB page be served through the 100 Mbps (12.5 MB/s) uplink port of a dedicated server?

I am planning to increase the capacity of my dedicated server, as my current server is not able to manage the load of my application.
Hence, I need to understand the uplink port connection offered by various dedicated server providers.
In Amazon EC2 this is described as Network Performance, which only provisions 10 Gigabit on its largest instances.
Please advise.
Simply put, a 12.5MB/s connection is going to be able to serve a 1MB page to 12.5 users every second.
That said, are you absolutely sure it's the network throughput that's causing the problem, rather than a CPU or memory limit? In my experience, the network link is very rarely the bottleneck.
Bear in mind that a 1MB page will often compress to far less than that, assuming the server's compression is configured correctly. And unless you're genuinely seeing 12.5 new users every second, they will likely have a lot of the static assets (images, scripts, etc) cached either in their browser or by an upstream proxy, so they won't be requested every time.
If you really are just serving a 1 MB page to a very high number of users, rather than being bound by CPU, then you might have more luck investigating a CDN (like Cloudflare or CloudFront) than simply upgrading to a faster link.
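A rough back-of-the-envelope version of the arithmetic above, with the compression ratio and cache hit rate as purely illustrative assumptions:

```python
uplink_mbps = 100            # advertised uplink, megabits per second
page_mb = 1.0                # uncompressed page size in megabytes
compression_ratio = 0.3      # assume gzip shrinks the page to ~30% of its size
cache_hit_rate = 0.6         # assume 60% of asset bytes are served from caches

throughput_mb_per_s = uplink_mbps / 8                      # 12.5 MB/s
effective_page_mb = page_mb * compression_ratio * (1 - cache_hit_rate)

print(throughput_mb_per_s / page_mb)            # ~12.5 users/s, worst case
print(throughput_mb_per_s / effective_page_mb)  # far more once compression and caching help
```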

How to avoid the max connection limit?

Dear StackOverflow members,
I was wondering: with WhatsApp, say, you're continuously connected to their servers (using TCP).
And assuming there's a maximum of 65535 connections per port, how do they avoid that limit?
That would seem to mean that once a server hits 65535 connections it stays there and never goes down, as everyone's phone simply stays connected.
I'm not sure if you guys understand my question, but if you have any questions feel free to ask.
Kind regards,
Rene Roosen
Any large website wouldn't rely on one server. They'd usually use a load balancing proxy (commercial, or open-source ones like ATS or HAProxy) and have several servers behind it. Those proxies have mechanisms to scale to a much higher number of connections.
As long as the 4-tuple (source IP, source port, destination IP, destination port) is unique, a proxy can handle the connection, provided other resources (memory, CPU, etc.) are available. Traffic is not restricted to 64k connections per port.
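To make the 4-tuple point concrete, here is a tiny Python sketch: a single listening port accepts many connections, because each accepted socket is identified by the full (source IP, source port, destination IP, destination port) tuple, not by the listening port alone. Port 9000 and the backlog value are arbitrary.

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 9000))     # one destination port shared by every client
server.listen(1024)

open_connections = []
while True:
    conn, peer = server.accept()   # peer's (ip, port) makes each 4-tuple unique
    open_connections.append(conn)  # bounded by memory/fd limits, not by 65535
```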

IIS 7 - Does the number of HTTP connections matter?

I'm optimizing a very popular website and since the user base is constantly growing I'm interested in what matters when it comes to scaling.
Currently I am scaling by adding more CPU power / RAM to the server. This works nicely - even though the site is quite popular, CPU usage is currently at 10%.
So, if possible, I'd keep doing that. What I am worried about is whether I could get to the point where CPU usage is low but users have problems connecting because of the number of HTTP connections. Is it better to scale horizontally, by adding more servers to the cluster?
Thanks!
Eventually just adding more memory won't be enough. There are concurrent connection limits at the TCP level rather than in IIS (though both factors come into play; IIS can handle about 3000 connections without strain).
You probably won't hit the situation you describe, where CPU usage is low but the number of HTTP connections is high, unless it is a largely static site; in general, the more connections are open, the higher the CPU usage.
But regardless of this, what a popular site needs is redundancy, which is essential for a site with a large user base. There is nothing more annoying to the user than the site being down because your sole server went offline for some reason. If you have 2 servers behind a load balancer, you can grow the site, and even take a server offline with less fear of the site going down.
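If it helps to visualise the load-balancer part, here is a toy Python illustration of the round-robin idea: requests rotate across backends, and a dead backend is simply skipped. The backend addresses and health flags are made-up examples.

```python
from itertools import cycle

backends = cycle(["10.0.0.11:80", "10.0.0.12:80"])
healthy = {"10.0.0.11:80": True, "10.0.0.12:80": False}  # pretend one server is offline

def pick_backend() -> str:
    for _ in range(len(healthy)):
        candidate = next(backends)
        if healthy[candidate]:
            return candidate
    raise RuntimeError("no healthy backends")

print(pick_backend())  # always returns the surviving server
```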

How Many Network Connections Can a Computer Support?

When writing a custom server, what are the best practices or techniques to determine maximum number of users that can connect to the server at any given time?
I would assume that the capabilities of the computer hardware, network capacity, and server protocol would all be important factors.
Also, do you think it is a good practice to limit the number of network connections to a certain maximum number of users? Or should the server not limit the number of network connections and let performance degrade until the response time is extremely high?
Dan Kegel put together a summary of techniques for handling large amounts of network connections from a single server, here: http://www.kegel.com/c10k.html
In general modern servers can handle very large numbers of concurrent connections. I've worked on systems having over 8,000 concurrently open TCP/IP sockets.
You will need a high-quality event-servicing layer to handle that kind of load; check out libevent or libev.
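Not libevent/libev themselves, but the same event-driven pattern is easy to sketch with Python's asyncio: one process multiplexes thousands of sockets instead of dedicating a thread to each connection. The port number is an arbitrary choice.

```python
import asyncio

async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    data = await reader.readline()   # waits without blocking other connections
    writer.write(data)               # echo the line back
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main() -> None:
    server = await asyncio.start_server(handle, "0.0.0.0", 8888)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```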
That is a good question and it definitely is situational. What is your computer? Do you have a 4-socket machine filled with quad-core Xeons, 128 GB of RAM, and Fibre Channel connectivity (like the pair of Dell R900s we just bought)? Or are you running on a P3 550 with 256 MB of RAM and a 56K modem? How much load does each connection place on your server? What kind of response time is acceptable?
These are the questions you need to answer. I guess the best way to find the answer is through load testing. Create a unit test of the expected (and maybe some unexpected) paths that your code will perform against your server. Find a load testing framework that will allow you to simulate 10, 100, 1000, 10000 users performing those tasks at the same time.
That will tell you how many connections your computer can support.
The great thing about the load/unit test scenario is that you can put response time expectations in your unit tests and increase the load until you fall outside of that response time. If you have a requirement of supporting X users with a Y second response, you will be able to demonstrate it with your load tests.
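A rough sketch of that ramp-up, written against Python's asyncio and aiohttp rather than any particular load-testing framework; the URL, the concurrency levels, and the 0.5 s threshold are illustrative assumptions.

```python
import asyncio
import time
import aiohttp

URL = "http://localhost:8080/report"   # hypothetical endpoint under test
MAX_ACCEPTABLE_SECONDS = 0.5           # the "Y second response" requirement

async def timed_request(session: aiohttp.ClientSession) -> float:
    start = time.perf_counter()
    async with session.get(URL) as resp:
        await resp.read()
    return time.perf_counter() - start

async def run_level(concurrency: int) -> float:
    async with aiohttp.ClientSession() as session:
        durations = await asyncio.gather(*(timed_request(session) for _ in range(concurrency)))
    return max(durations)

async def main() -> None:
    for concurrency in (10, 100, 1000, 10000):
        worst = await run_level(concurrency)
        print(f"{concurrency} users: worst response {worst:.3f}s")
        if worst > MAX_ACCEPTABLE_SECONDS:
            print("requirement violated; the previous level is roughly your capacity")
            break

if __name__ == "__main__":
    asyncio.run(main())
```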
One of the biggest setbacks with highly concurrent connections is actually the routers involved. Routers aimed at home users usually have a small NAT table, which prevents the router from actually delivering all of those connections to the server.
Be sure to research your router/ network infrastructure setup just as well.
I think you shouldn't limit the number of connections your server will allow - just catch and properly handle any exceptions that might occur when accepting and closing connections and you should be fine. You should leave that kind of lower-level concern to the underlying OS layers - that way you can port your server more easily, etc.
This really depends on your operating system.
Different Unix flavors will support an "unlimited" number of file handles/sockets; others have high limits like 32768.
A typical per-user limit is 8192, but it can usually be set higher.
I think Windows is more limiting, but the server edition may have higher limits.
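On a Unix-like system you can check and raise the per-process file-descriptor limit (each open socket consumes one descriptor), for example with Python's resource module; raising the hard limit beyond what the OS grants requires root or ulimit configuration. A small sketch:

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")

# Bump the soft limit up to the hard limit so the server can hold more sockets.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```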
