How many TCP connections are created by a queuing protocol such as ZeroMQ over TCP?

Excuse me for my basic question, but I didn't find my answer in my Google searches.
I want to develop a server which should respond to hundreds of clients. Each client may send tens to hundreds of messages per second.
I want to know: if I use a queuing protocol such as AMQP (the RabbitMQ implementation) or ZeroMQ, how many TCP connections should the server support?
Is it the total number of clients, or the total number of messages per second?

Nota Bene: ZeroMQ is definitely not a "queuing" protocol. One should rather think of it as a powerful framework of low-level building blocks (primitives) that enables designers to set up very fast and rather abstract, behaviour-oriented designs for advanced use-cases, from messaging per se to robust, non-blocking, asynchronous signalling and content-related transport (plus controls) for distributed, concurrent systems. Indeed a powerful set of tools, believe me.
AMQP is a Broker-based approach
ZeroMQ is a Broker-less solution
The message count per se does not typically create a problem.
The processing associated with those messages typically does.
Limit No.1: operating system TCP-settings constraints
Solution: review your system documentation for the limits to work within, and set up / tweak values at the OS level as appropriate.
Limit No.2: growing endpoint processing delays also grow the risk of RECV/SEND buffer overflows.
Solution: review your code architecture to see whether it can increase transaction performance (be it by distributed pipeline processing or by a load balancer that spreads the flow of incoming connections/transactions across multiple worker units).
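So, to answer the question directly: the number of TCP connections tracks the number of connected clients, not the message rate. A minimal sketch of that with pyzmq (my own illustration, assuming DEALER clients; port 5555 is arbitrary):

import zmq

ctx = zmq.Context()
server = ctx.socket(zmq.ROUTER)   # one listening socket serves all clients
server.bind("tcp://*:5555")       # the OS holds one TCP connection per connected client

while True:
    # with DEALER clients, each message arrives as [client identity, payload]
    identity, payload = server.recv_multipart()
    server.send_multipart([identity, b"ack: " + payload])  # route the reply back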

Related

Load Testing Thousands of SLOW Connections

I would like to test an upload service with hundreds, if not thousands,
of slow HTTPS connections simultaneously.
I would like to have lots of, say, 3G-quality connections,
each throttled with low bandwidth and high latency,
each sending a few megabytes of data up to the server,
resulting in lots of concurrent, long-lived requests being handled by the server.
There are many load generation tools that can generate thousands of simultaneous requests.
(I'm currently using Locust, mostly so that I can take
advantage of my existing client library written in Python.)
Such tools typically run each concurrent request as fast as possible
over the shared network link.
There are various ways to adjust the apparent bandwidth and latency of TCP connections,
such as Linux's TC
and handy wrappers like Comcast.
As far as I can tell, TC and the like shape the shared link,
but they cannot throttle each of the concurrent requests individually.
If you only want to throttle a single request, TC works well.
In theory, with many clients sharing the same throttled network link,
each request could be run serially,
subject to the constrained bandwidth,
rather than having lots of requests executing concurrently,
a few packets at a time.
The former would result in far fewer active requests executing
concurrently on the server.
I suspect that the tool I want has to actively manage each individual client's sending
and receiving to throttle them fairly.
Is there such a tool?
You can take a look at Apache JMeter; it can "throttle" connections to a configurable throughput via the following properties:
httpclient.socket.http.cps=0
httpclient.socket.https.cps=0
The properties can be defined either in the user.properties file or passed to JMeter via the -J command-line argument.
cps stands for characters per second, so you can "slow down" JMeter threads (virtual users) to the given throughput rate; the formula for the cps calculation is:
cps = (target bandwidth in kbps * 1024) / 8
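For example, to emulate a 384 kbps 3G-class link per virtual user: cps = (384 * 1024) / 8 = 49152, which you could pass on the command line (the .jmx file name here is a placeholder):

jmeter -n -t plan.jmx -Jhttpclient.socket.https.cps=49152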
Check out How to Simulate Different Network Speeds in Your JMeter Load Test for more information.
Yes, these are network simulators. A very primitive one comes in the form of WANem; it is not going to cover your testing needs. You will need something akin to Shunra Storm, a hardware device which can manage individual connections and impairment, with models derived from Ookla (think speedtest.net) data on 3G/4G/5G connections in the wild. Well, perhaps I should say "could manage," as this product has been absent since the HP acquisition of Shunra.
There are some other market competitors on the network front from companies such as Ixia, Agilent, PacketStorm, Spirent and the like. None of them are inexpensive, but I see your need. Slow, and particularly dirty, connections like cell phones have a disproportionate impact on the stack and can result in the server running out of resources with far fewer mobile connections than desktop ones.
On a side note, be sure you are including a representative model for think time in your test code. If you collapse the client-server conversation with no (or extremely limited) think time and impair only the network, bad things can happen. This will play particular havoc with both the predictability and the repeatability of your tests. You may also wind up chasing dozens of engineering ghosts related to load in your code that will never occur in production, because of the natural delays and the release of resources which should occur during those windows of activity between client requests.
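Since the asker mentions Locust, here is a minimal sketch of modelling think time there (the /upload endpoint and timings are placeholders of my own):

# locustfile.py - virtual users pause 5-15 s between requests,
# modelling think time instead of hammering the server back-to-back.
from locust import HttpUser, task, between

class SlowUploader(HttpUser):
    wait_time = between(5, 15)   # think time between tasks, in seconds

    @task
    def upload(self):
        # "/upload" is a placeholder endpoint; the 1 MB payload is illustrative
        self.client.post("/upload", data=b"x" * 1024 * 1024)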

Why use more than one endpoint in a Rebus system?

In a Rebus service bus, there is a single message transport queue per endpoint. It is possible for an endpoint to handle more than one message, and it is possible to have only a single endpoint in a system.
Other than the throughput of messages, what reasons are there to use more than a single endpoint in a Rebus service bus system?
Excellent question! :) There can be many reasons why you might want to have several Rebus endpoints active at the same time.
An obvious reason is that you might want to host the endpoints in separate processes so you can update them independently of each other. But since this reason is pretty obvious, I assume you are thinking about reasons one might want to host multiple Rebus endpoints in the same process.
Let me just mention a few(*):
Concurrency requirements
One endpoint might be hosting data that experiences contention and therefore does not benefit from being able to process messages concurrently - this endpoint will probably have only a few threads and low parallelism, possibly 1/1 (worker threads / max parallelism).
Another endpoint might be doing stream-based data processing (e.g. loading blobs from one place into another, downloading data from web services, etc.), which can be done with very high throughput and low resource requirements with one single thread and a high level of parallelism - e.g. 1/20.
Yet another endpoint might be doing a lot of serialization/deserialization, which is usually CPU-bound, and therefore might benefit from running on a many-core box with many worker threads and matching parallelism - e.g. 10/10.
As you can see, the type of tasks performed by an endpoint can call for a configuration that matches the nature of the tasks.
SLAs
One endpoint might be designated for processing low-priority background stuff, like e.g. moving data to cold storage, optimizing storage of historic data, etc.
Another endpoint might be processing messages where low latency is the most important quality attribute.
If these two were using the same queue, the low-priority background stuff could sometimes clog up the queue, hindering low-latency processing of the other messages.
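Rebus is a .NET library, so this is only a toy illustration of the principle in Python (queue names and delays are invented): with separate queues and dedicated workers, a slow batch job can never sit in front of a latency-sensitive message.

import queue, threading, time

fast_q, slow_q = queue.Queue(), queue.Queue()

def worker(q, handler):
    while True:                  # each queue gets its own dedicated consumer
        handler(q.get())

threading.Thread(target=worker, args=(fast_q, lambda m: print("fast:", m)), daemon=True).start()
threading.Thread(target=worker, args=(slow_q, lambda m: (time.sleep(2), print("slow:", m))), daemon=True).start()

slow_q.put("move-data-to-cold-storage")  # heavy background work
fast_q.put("user-request")               # handled immediately, never stuck behind the batch
time.sleep(3)                            # give the daemon threads time to run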
Logical separation
I have many times started out by hosting several Rebus endpoints in the same process because it was easy to deal with during development, while keeping the endpoints separate because they were implementing different business functions.
This way it is easy to physically break them apart some time later on, allowing for a higher degree of separation and independence.
(*) Udi Dahan works with the concepts "business components" and "autonomous components" where the first one is an implementation of a business capability and the second one is what business components are decomposed into, mostly for technical reasons.
I guess you could say that the first two reasons I mentioned are separate endpoints for "autonomous component" reasons, whereas the third is separation because things belong to different business components.
Udi keeps a pretty strict view of these concepts that is completely orthogonal to how the system is physically composed, but I almost always end up with pretty high convergence between logical separation and physical separation.

How many socket connections can a web server handle?

Say I was to get shared, virtual or dedicated hosting. I read somewhere that a server/machine can only handle 64,000 TCP connections at one time; is this true? How many could any type of hosting handle, regardless of bandwidth? I'm assuming HTTP works over TCP.
Would this mean only 64,000 users could connect to the website, and if I wanted to serve more I'd have to move to a web farm?
In short:
You should be able to achieve on the order of millions of simultaneous active TCP connections, and by extension HTTP requests. This tells you the maximum performance you can expect with the right platform and the right configuration.
Today I was worried whether IIS with ASP.NET would support on the order of 100 concurrent connections (see my update; expect ~10k responses per second on older ASP.NET Mono versions). When I saw this question and its answers, I couldn't resist answering myself; many of the answers here are completely incorrect.
Best Case
The answer to this question must only concern itself with the simplest server configuration to decouple from the countless variables and configurations possible downstream.
So consider the following scenario for my answer:
No traffic on the TCP sessions except keep-alive packets (otherwise you would obviously need a corresponding amount of network bandwidth and other computer resources)
Software designed to use asynchronous sockets and programming, rather than a hardware thread per request from a pool (i.e. IIS, Node.js, Nginx... a web server [but not Apache] with async-designed application software)
Good performance-per-dollar CPU/RAM. Today, arbitrarily, let's say an i7 (4 cores) with 8 GB of RAM.
A good firewall/router to match.
No virtual limit/governor - i.e. Linux somaxconn, IIS web.config...
No dependency on other, slower hardware - no reading from hard disk, because that would be the lowest common denominator and the bottleneck, not network IO.
Detailed Answer
Synchronous thread-bound designs tend to be the worst performing relative to Asynchronous IO implementations.
WhatsApp can handle a million connections, WITH traffic, on a single Unix-flavoured OS machine - https://blog.whatsapp.com/index.php/2012/01/1-million-is-so-2011/.
And finally this one, http://highscalability.com/blog/2013/5/13/the-secret-to-10-million-concurrent-connections-the-kernel-i.html, goes into a lot of detail, exploring how even 10 million could be achieved. Servers often have hardware TCP offload engines, ASICs designed for this specific role, handling it more efficiently than a general-purpose CPU.
Good software design choices
Asynchronous IO design differs across operating systems and programming platforms. Node.js was designed with asynchrony in mind: you should use Promises at least and, when ECMAScript 7 comes along, async/await. C#/.NET already has full asynchronous support, like Node.js. Whatever the OS and platform, asynchronous IO should be expected to perform very well, and whatever language you choose, look for the keyword "asynchronous"; most modern languages have some support, even if it's an add-on of some sort.
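As a concrete taste of the asynchronous style, a minimal Python sketch (my own illustration, not tied to any product above): one thread and one event loop service every connection, so an idle connection costs memory rather than a dedicated thread.

import asyncio

async def handle(reader, writer):
    # one lightweight coroutine per connection instead of one thread each
    while data := await reader.read(4096):   # b"" means the peer closed
        writer.write(data)                    # echo the bytes back
        await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 8080)
    async with server:
        await server.serve_forever()

asyncio.run(main())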
To WebFarm?
Whatever the limit is for your particular situation, yes, a web farm is one good solution to scaling. There are many architectures for achieving this. One is using a load balancer (hosting providers can offer these, but even they have a limit, along with a bandwidth ceiling), but I don't favour this option. For single-page applications with long-running connections, I prefer instead to have an open list of servers which the client application chooses from randomly at startup and reuses over the lifetime of the application. This removes the single point of failure (the load balancer) and enables scaling across multiple data centres, and therefore much more bandwidth.
Busting a myth - 64K ports
To address the part of the question regarding "64,000": this is a misconception. A server can connect to many more than 65,535 clients. See https://networkengineering.stackexchange.com/questions/48283/is-a-tcp-server-limited-to-65535-clients/48284
By the way, Http.sys on Windows permits multiple applications to share the same server port under the HTTP URL scheme. Each registers a separate domain binding, but there is ultimately a single server application proxying the requests to the correct applications.
Update 2019-05-30
Here is an up to date comparison of the fastest HTTP libraries - https://www.techempower.com/benchmarks/#section=data-r16&hw=ph&test=plaintext
Test date: 2018-06-06
Hardware used: Dell R440 Xeon Gold + 10 GbE
The leader achieves ~7M plaintext responses per second (responses, not connections).
The second one, Fasthttp for Go, advertises 1.5M concurrent connections - see https://github.com/valyala/fasthttp
The leading languages are Rust, Go, C++, Java, and C; even C# ranks at 11 (6.9M per second). Scala and Clojure rank further down, and Python ranks 29th at 2.7M per second.
At the bottom of the list I note Laravel, CakePHP, Rails, aspnet-mono-ngx, Symfony, and Zend, all below 10k per second. Note that most of these frameworks are built for dynamic pages and are quite old; there may be newer variants that rank higher up the list.
Remember this is HTTP plaintext, not the WebSocket specialty; many people coming here will likely be interested in concurrent connections for WebSocket.
This question is a fairly difficult one. There is no real software limitation on the number of active connections a machine can have, though some OS's are more limited than others. The problem becomes one of resources. For example, let's say a single machine wants to support 64,000 simultaneous connections. If the server uses 1MB of RAM per connection, it would need 64GB of RAM. If each client needs to read a file, the disk or storage array access load becomes much larger than those devices can handle. If a server needs to fork one process per connection then the OS will spend the majority of its time context switching or starving processes for CPU time.
The C10K problem page has a very good discussion of this issue.
To add my two cents to the conversation: on Linux-type systems, a common point of confusion is /proc/sys/net/core/somaxconn. It does not cap the number of sockets a process can have open simultaneously; it caps the backlog of pending connections that a listening socket can queue before the process accept()s them. You can read it with:
cat /proc/sys/net/core/somaxconn
This number can be modified on the fly (only by the root user, of course):
echo 1024 > /proc/sys/net/core/somaxconn
But the number of concurrently open sockets is really capped by the per-process file-descriptor limit, and the real number of connections a machine can sustain before the system degrades depends entirely on the server process, the machine's hardware, and the network.
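As a quick sanity check from code, a minimal Python sketch (my own illustration) reading that per-process descriptor limit on a Unix-like system:

import resource

# the soft limit is what currently applies; the hard limit is the ceiling root can raise
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file soft limit: {soft}, hard limit: {hard}")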
It looks like the answer is at least 12 million, if you have a beefy server, your server software is optimized for it, and you have enough clients. If you test from one client to one server, the number of port numbers on the client will be one of the obvious resource limits (each TCP connection is defined by the unique combination of IP address and port number at the source and destination).
(You need to run multiple clients, as otherwise you hit the ~64K limit on port numbers first.)
When it comes down to it, this is a classic example of the witticism that "the difference between theory and practice is much larger in practice than in theory". In practice, achieving the higher numbers seems to be a cycle of: a. propose specific configuration/architecture/code changes; b. test until you hit a limit; c. am I finished? If not, then d. work out the limiting factor and e. go back to step a (rinse and repeat).
Here is an example with 2 million TCP connections onto a beefy box (128 GB RAM and 40 cores) running Phoenix: http://www.phoenixframework.org/blog/the-road-to-2-million-websocket-connections - they ended up needing 50 or so reasonably significant servers just to provide the client load (their initial, smaller clients maxed out too early, e.g. "maxed our 4core/15gb box # 450k clients").
Here is another reference for go this time at 10 million: http://goroutines.com/10m.
This appears to be java based and 12 million connections: https://mrotaru.wordpress.com/2013/06/20/12-million-concurrent-connections-with-migratorydata-websocket-server/
Note that HTTP doesn't typically keep TCP connections open for any longer than it takes to transmit the page to the client, and it usually takes much more time for a user to read a web page than it takes to download it; while the user is viewing the page, they add no load to the server at all.
So the number of people that can be simultaneously viewing your web site is much larger than the number of TCP connections that it can simultaneously serve.
In the case of the IPv4 protocol, a server with one IP address that listens on one port only can handle 2^32 IP addresses x 2^16 ports, i.e. 2^48 unique remote sockets. If you speak about a server as a physical machine and you are able to utilize all 2^16 ports, then there could be a maximum of 2^48 x 2^16 = 2^64 unique TCP/IP socket pairs for one IP address. Please note that some ports are reserved for the OS, so this number will be lower. To sum up:
1 IP and 1 port --> 2^48 sockets
1 IP and all ports --> 2^64 sockets
all unique IPv4 sockets in the universe --> 2^96 sockets
There are two different discussions here. One is how many people can connect to your server; that has been answered adequately by others, so I won't go into it.
The other is how many ports your server can listen on. I believe this is where the 64K number came from: the TCP protocol uses a 16-bit identifier for a port, which translates to 65,536 possible values. This means you can have that many different "listeners" on the server per IP address.
I think the number of concurrent socket connections one web server can handle largely depends on the amount of resources each connection consumes and the total resources available on the server, barring any other configuration that limits web server resources.
To illustrate: if every socket connection consumed 1 MB of server resources and the server had 16 GB of RAM available, then (theoretically) it could only handle 16 GB / 1 MB = 16,384 concurrent connections. I think it's as simple as that... REALLY!
So regardless of how the web server handles connections, every connection will ultimately consume some resource.

How do I connect a pair of clients together via a server for an online game?

I'm developing a multi-player game and I know nothing about how to connect one client to another via a server. Where do I start? Are there any whizzy open source projects which provide the communication framework into which I can drop my message data, or do I have to write a load of complicated multi-threaded sockety code? Does the picture change at all if the clients are running on phones?
I am language agnostic, although ideally I would have a Flash or Qt front end and a Java server, but that may be being a bit greedy.
I have spent a few hours googling, but the whole topic is new to me and I'm a bit lost. I'd appreciate help of any kind - including how to tag this question.
If latency isn't a huge issue, you could just implement a few web services to do message passing. This would not be as slow as you might think, and it is easy to implement across languages. The downside is that the client has to poll the server to get updates, so you could be looking at a few hundred milliseconds to get from one client to another.
You can also use the built-in Flex messaging interface. There are provisions there to allow client-to-client interactions.
Typically game engines send UDP packets because of latency. The fact is that TCP is just not fast enough and reliability is less of a concern than speed is.
Web services would compound the latency issues inherent in TCP due to additional overhead. Further, they would eat up memory, depending on the number of expected players. Finally, they have a large amount of payload overhead that you just don't need (XML, anyone?).
There are several ways to go about this. One way is centralized messaging (client/server): a Java server listens for UDP packets from the clients and rebroadcasts them to the relevant users.
A second way is decentralized (peer-to-peer). A client registers with the server to state what game/world it is in, and from that it gets a list of the other clients in that world. The server maintains that list and notifies the other clients when people join or drop out.
From that point forward, clients broadcast UDP packets directly to the other users.
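A toy sketch of the first, centralized variant (in Python rather than Java; the port and buffer size are arbitrary): the server learns client addresses from incoming datagrams and rebroadcasts each packet to everyone else.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))
clients = set()

while True:
    data, addr = sock.recvfrom(2048)   # receive a datagram from any client
    clients.add(addr)                  # register the sender's address
    for peer in clients - {addr}:      # relay to every other known client
        sock.sendto(data, peer)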
If you are looking for a high-performance communication framework, take a look at the ACE C++ framework (it has Java bindings).
Official web-site is: http://www.cs.wustl.edu/~schmidt/ACE-overview.html
You could also look into Flash Media Interactive Server or, if you want a Java implementation, Wowza or Red5. Those use AMF and provide native functionality for SharedObjects, including syncing of the SharedObjects among connected clients.
Those aren't peer-to-peer, though (yet; it's coming soon, I hear). They use centralized messaging managed by the server.
Good luck

How Many Network Connections Can a Computer Support?

When writing a custom server, what are the best practices or techniques to determine maximum number of users that can connect to the server at any given time?
I would assume that the capabilities of the computer hardware, network capacity, and server protocol would all be important factors.
Also, do you think it is a good practice to limit the number of network connections to a certain maximum number of users? Or should the server not limit the number of network connections and let performance degrade until the response time is extremely high?
Dan Kegel put together a summary of techniques for handling large amounts of network connections from a single server, here: http://www.kegel.com/c10k.html
In general, modern servers can handle very large numbers of concurrent connections. I've worked on systems with over 8,000 concurrently open TCP/IP sockets.
You will need a high-quality servicing interface to handle that kind of load; check out libevent or libev.
That is a good question, and it is definitely situational. What is your computer? Do you have a 4-socket machine filled with quad-core Xeons, 128 GB of RAM, and Fibre Channel connectivity (like the pair of Dell R900s we just bought)? Or are you running on a P3 550 with 256 MB of RAM and a 56K modem? How much load does each connection place on your server? What kind of response is acceptable?
These are the questions you need to answer. I guess the best way to find out is through load testing. Create a unit test of the expected (and maybe some unexpected) paths that your code will perform against your server. Find a load-testing framework that will allow you to simulate 10, 100, 1,000, or 10,000 users performing those tasks at the same time.
That will tell you how many connections your computer can support.
The great thing about the load/unit-test scenario is that you can put response-time expectations into your unit tests and increase the load until you fall outside your response time. If you have a requirement to support X users with a Y-second response, you will be able to demonstrate it with your load tests.
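A bare-bones sketch of that ramp-until-you-miss-the-target loop in Python (the URL, user counts, and the 1-second target are placeholders):

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL, TARGET_SECONDS = "http://localhost:8080/", 1.0   # placeholders

def timed_get(_):
    start = time.time()
    urllib.request.urlopen(URL).read()   # one request, measured end to end
    return time.time() - start

for users in (10, 100, 1000, 10000):
    with ThreadPoolExecutor(max_workers=users) as pool:
        worst = max(pool.map(timed_get, range(users)))   # worst-case latency at this load
    print(f"{users} users: worst response {worst:.2f}s")
    if worst > TARGET_SECONDS:
        print(f"response-time target exceeded somewhere below {users} users")
        break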
One of the biggest setbacks with high-concurrency connections is actually the routers involved. Home-user-oriented routers usually have a small NAT table, which prevents the router from servicing all the connections to the server.
Be sure to research your router/network infrastructure setup as well.
I think you shouldn't limit the number of connections your server will allow; just catch and properly handle any exceptions that might occur when accepting and closing connections, and you should be fine. You should leave that kind of lower-level programming to the underlying OS layers; that way you can port your server more easily, etc.
This really depends on your operating system.
Different Unix flavors will support an "unlimited" number of file handles/sockets; others have high limits like 32,768.
A typical user limit is 8,192, but it can usually be set higher.
I think Windows is more limiting, but the server version may have higher limits.
