Number of connections per client on MongoDB Realm - realm

According to this comment, there's a minimum of 4 connections per "running application process" on MongoDB Realm; three of these four are constant (monitoring connections) and the rest depend on the number of queries/writes. As I understand it, each synchronized Realm is another connection.
A complex app can have multiple Realms synchronized, meaning it wouldn't be hard to have 10 open connections. Does this mean that even when paying for the highest cluster tier (M700, $33.26/hr) the app couldn't have more than 12,800 (128,000 connections / 10 open connections per client) active users?

Related

Is each observe()/listen equal to one concurrent connection?

Consider a chat system where one user is listening to multiple parent nodes within one chat conversation.
Group title
Group description
Messages
Does observing/listening to the 3 items above mean it counts as 3 connections toward the 200k concurrency limit? I can't seem to understand the proper definition of a concurrent connection.
Each app (strictly speaking: each FirebaseDatabase instance) keeps a single connection to the Firebase Realtime Database server, no matter how many listeners it has open.
Also see the Firebase FAQ, which says this about it:
A simultaneous connection is equivalent to one mobile device, browser tab, or server app connected to the database.
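The same idea can be sketched outside any particular SDK: one shared connection object multiplexes many listeners, so adding listeners does not add sockets. A minimal C# sketch of that pattern; the class and method names here are hypothetical illustrations, not the Firebase API:

using System;
using System.Collections.Generic;

// Hypothetical sketch: one logical connection, many listeners multiplexed over it.
class RealtimeConnection
{
    // All listeners share this single connection (one socket).
    private readonly Dictionary<string, List<Action<string>>> _listeners = new();

    public void Listen(string path, Action<string> onValue)
    {
        if (!_listeners.TryGetValue(path, out var callbacks))
            _listeners[path] = callbacks = new List<Action<string>>();
        callbacks.Add(onValue); // no new socket is opened here
    }

    // Called when a message for 'path' arrives on the shared socket.
    public void Dispatch(string path, string payload)
    {
        if (_listeners.TryGetValue(path, out var callbacks))
            foreach (var cb in callbacks) cb(payload);
    }
}

class Demo
{
    static void Main()
    {
        var conn = new RealtimeConnection();          // counts as ONE concurrent connection
        conn.Listen("chats/42/title",       v => Console.WriteLine($"title: {v}"));
        conn.Listen("chats/42/description", v => Console.WriteLine($"description: {v}"));
        conn.Listen("chats/42/messages",    v => Console.WriteLine($"message: {v}"));
        conn.Dispatch("chats/42/messages", "hello"); // simulated server push
    }
}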

Multiple Azure Redis connections

To reduce latency, in Startup.cs of ASP.NET Core 2.1 I am creating 2 static connections to Azure Redis and reusing those same connection instances for the application's entire life cycle.
Is it good practice to create multiple connections to one Azure Redis instance? What is the maximum number of connections? Will multiple instances have billing implications? Are Azure Redis usage charges based on the number of connections or on the amount of data transferred? Please confirm.
First, it is not good practice to create two static Azure Redis connections in the application.
In many projects Redis is not used constantly; a connection is created when the business logic needs it and released after use. If you need to use it frequently, you can instantiate it once in Startup.cs when the project starts and expose it as a single global instance, so connections are not repeatedly created and destroyed.
As for Azure Redis billing, you can refer to the official documentation: it is based neither on the number of connections nor on the amount of data transferred; it is billed according to the time the cache instance runs.
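A minimal sketch of that single-shared-connection pattern with StackExchange.Redis (the cache host name and password are placeholders); the Lazy wrapper ensures the multiplexer is created once and reused for the application's lifetime:

using System;
using StackExchange.Redis;

public static class RedisStore
{
    // Created once, on first use, and reused for the lifetime of the process.
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect(
                "your-cache.redis.cache.windows.net:6380,password=<secret>,ssl=True,abortConnect=False"));

    public static ConnectionMultiplexer Connection => LazyConnection.Value;
}

// Usage anywhere in the app: one multiplexer, many cheap IDatabase handles.
// IDatabase db = RedisStore.Connection.GetDatabase();
// db.StringSet("greeting", "hello");
// string value = db.StringGet("greeting");

In ASP.NET Core you could alternatively register the multiplexer as a singleton in the DI container (services.AddSingleton<IConnectionMultiplexer>(...)) instead of using a static class.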
It is actually recommended to use different connections to reflect varying payload sizes, i.e. you could set up a connection with a higher timeout for data that is larger, as opposed to data that is small. This is recommended only when the values stored in Redis vary widely in size (e.g. 1 KB to 100 KB) and you cannot reduce the payload size.
Having separate connections ensures that the pipelining that usually happens when fetching data does not result in cascading timeouts. Multiple connections are also recommended in the Microsoft docs; have a look here, scroll to the bottom and see point 3:
https://learn.microsoft.com/en-us/azure/azure-cache-for-redis/cache-troubleshoot-client#large-request-or-response-size
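A hedged sketch of that advice: two multiplexers, one tuned with a larger SyncTimeout for big payloads and one with a short timeout for small, latency-sensitive keys (endpoint and timeout values are illustrative only):

using StackExchange.Redis;

// Connection for small, latency-sensitive values: fail fast.
var smallOptions = ConfigurationOptions.Parse(
    "your-cache.redis.cache.windows.net:6380,password=<secret>,ssl=True,abortConnect=False");
smallOptions.SyncTimeout = 1000;   // ms
var smallConn = ConnectionMultiplexer.Connect(smallOptions);

// Separate connection for large values (e.g. 100 KB blobs): allow more time,
// so slow large transfers cannot cause cascading timeouts on the small-value pipeline.
var largeOptions = ConfigurationOptions.Parse(
    "your-cache.redis.cache.windows.net:6380,password=<secret>,ssl=True,abortConnect=False");
largeOptions.SyncTimeout = 10000;  // ms
var largeConn = ConnectionMultiplexer.Connect(largeOptions);

IDatabase smallDb = smallConn.GetDatabase();
IDatabase largeDb = largeConn.GetDatabase();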

JMS Connection Latency

I am examining an application where a JBOSS Application Server communicates with satellite JBOSS Application Servers (1 main server, hundreds of satellites).
When observing in the Windows Resource Monitor I can view the connections and see the latency per satellite: most are sub-second, but I see 10 over 1 second, of those 4 over 2 seconds, and 1 over 4 seconds. This is a moment-in-time view, so as connections expire and are rebuilt as needed, the trend can shift. I also observe that the same pair of systems shows a ping latency matching what appears in the connection list, so I suspect it's connection related (a slow pipe, congestion, or anything else in the line between points A and B).
My question is what a target latency should be, keeping in mind the satellites are connected by VPN from various field sites. I use 2 seconds as the dividing line for when I need the network team to investigate. I'd like to survey what rule of thumb you use to judge when the latency of a transient connection starts peaking: is it over a second?

How many connections (max) can be kept in a pool?

How does connection pooling work? If I set max pool size = 20, can only 20 users connect to the web app and make a transaction at the same time? What about large websites like Amazon, where thousands of users log in at the same time all over the world, i.e. what pool size do they keep? I'm not getting the core concept. I know that a connection pool keeps connections open and that users reuse those open connections, but I'd like my first question answered.
There is no document that gives a hard maximum size for the pool. The default value of Max Pool Size is 100.
Check out MSDN
There can be at most 32,767 connections to the database at a time. That is, at any single point in time only 32,767 users can make transactions against the database via the web app, not even one more than that. A maximum pool size is not mentioned anywhere; only the default (100) is documented. But SQL Server will only accept 32,767 user connections. Proof: SELECT @@MAX_CONNECTIONS. If I've misunderstood, please correct me.
The user connections option specifies the maximum number of simultaneous user connections that are allowed on an instance of SQL Server. The actual number of user connections allowed also depends on the version of SQL Server that you are using, and also the limits of your application or applications and hardware. SQL Server allows a maximum of 32,767 user connections. Because user connections is a dynamic (self-configuring) option, SQL Server adjusts the maximum number of user connections automatically as needed, up to the maximum value allowable.
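To make the pooling behaviour concrete, here is a hedged ADO.NET sketch (server and database names are placeholders). With Max Pool Size=20, the 21st concurrent Open() does not fail immediately; it waits for a pooled connection to be returned and only throws if none frees up within the connection timeout. So 20 is not the number of users the site can serve, only the number of simultaneously open database connections per pool:

using System.Data.SqlClient;

// The pool is keyed by the exact connection string; at most 20 physical connections here.
const string connStr =
    "Server=.;Database=Shop;Integrated Security=true;" +
    "Max Pool Size=20;Connect Timeout=15;";

// A typical web request: open late, close early. Dispose() returns the
// connection to the pool rather than closing the physical connection.
using (var conn = new SqlConnection(connStr))
{
    conn.Open();   // waits for a free pooled connection if all 20 are busy
    using (var cmd = new SqlCommand("SELECT @@MAX_CONNECTIONS;", conn))
    {
        var maxConnections = (int)cmd.ExecuteScalar();   // 32767 on SQL Server
    }
}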

How many socket connections can a web server handle?

Say I were to get shared, virtual, or dedicated hosting: I read somewhere that a server/machine can only handle 64,000 TCP connections at one time. Is this true? How many could any type of hosting handle, regardless of bandwidth? I'm assuming HTTP works over TCP.
Would this mean only 64,000 users could connect to the website, and if I wanted to serve more I'd have to move to a web farm?
In short:
You should be able to achieve on the order of millions of simultaneous active TCP connections and, by extension, HTTP requests. This tells you the maximum performance you can expect with the right platform and the right configuration.
Today, I was worried whether IIS with ASP.NET would support on the order of 100 concurrent connections (see my update; expect ~10k responses per second on older ASP.NET Mono versions). When I saw this question and its answers, I couldn't resist answering myself; many of the answers here are completely incorrect.
Best Case
The answer to this question must only concern itself with the simplest server configuration to decouple from the countless variables and configurations possible downstream.
So consider the following scenario for my answer:
No traffic on the TCP sessions, except for keep-alive packets (otherwise you would obviously need a corresponding amount of network bandwidth and other computer resources)
Software designed to use asynchronous sockets and programming, rather than a hardware thread per request from a pool. (ie. IIS, Node.js, Nginx... webserver [but not Apache] with async designed application software)
Good performance/dollar CPU / Ram. Today, arbitrarily, let's say i7 (4 core) with 8GB of RAM.
A good firewall/router to match.
No virtual limit/governor - ie. Linux somaxconn, IIS web.config...
No dependency on other slower hardware - no reading from harddisk, because it would be the lowest common denominator and bottleneck, not network IO.
Detailed Answer
Synchronous thread-bound designs tend to be the worst performing relative to Asynchronous IO implementations.
WhatsApp can handle a million connections, WITH traffic, on a single Unix-flavoured OS machine - https://blog.whatsapp.com/index.php/2012/01/1-million-is-so-2011/.
And finally, this one, http://highscalability.com/blog/2013/5/13/the-secret-to-10-million-concurrent-connections-the-kernel-i.html, goes into a lot of detail, exploring how even 10 million could be achieved. Servers often have hardware TCP offload engines, ASICs designed for this specific role more efficiently than a general purpose CPU.
Good software design choices
Asynchronous IO design differs across operating systems and programming platforms. Node.js was designed with asynchrony in mind: you should use Promises at least, and async/await once it is available (it landed in ECMAScript 2017). C#/.NET already has full asynchronous support, like Node.js. Whatever the OS and platform, asynchronous IO should be expected to perform very well. And whatever language you choose, look for the keyword "asynchronous"; most modern languages have some support, even if it's an add-on of some sort.
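As an illustration of the asynchronous-socket style described above (a minimal C# sketch, not a production server): one accept loop, no dedicated thread per connection, each client handled by an async task that yields its thread while waiting on the network.

using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class AsyncEchoServer
{
    static async Task Main()
    {
        var listener = new TcpListener(IPAddress.Any, 8080);
        listener.Start();
        Console.WriteLine("Listening on :8080");

        while (true)
        {
            TcpClient client = await listener.AcceptTcpClientAsync();
            _ = HandleClientAsync(client); // fire and forget: no thread per connection
        }
    }

    static async Task HandleClientAsync(TcpClient client)
    {
        using (client)
        {
            var buffer = new byte[4096];
            NetworkStream stream = client.GetStream();
            int read;
            // The await frees the thread while the socket has nothing to read.
            while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
                await stream.WriteAsync(buffer, 0, read); // echo back
        }
    }
}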
To WebFarm?
Whatever the limit is for your particular situation, yes a web-farm is one good solution to scaling. There are many architectures for achieving this. One is using a load balancer (hosting providers can offer these, but even these have a limit, along with bandwidth ceiling), but I don't favour this option. For Single Page Applications with long-running connections, I prefer to instead have an open list of servers which the client application will choose from randomly at startup and reuse over the lifetime of the application. This removes the single point of failure (load balancer) and enables scaling through multiple data centres and therefore much more bandwidth.
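A hedged sketch of that client-side approach (the hostnames are placeholders): the client embeds or fetches a list of equivalent servers, picks one at random at startup, and reuses it for the session, so no single load balancer sits in the path.

using System;

// Hypothetical list of equivalent front-end servers, possibly spanning data centres.
string[] servers =
{
    "wss://node1.eu.example.com",
    "wss://node2.eu.example.com",
    "wss://node1.us.example.com",
};

// Pick one at random at startup and reuse it for the lifetime of the app.
var rng = new Random();
var chosen = servers[rng.Next(servers.Length)];
Console.WriteLine($"Connecting to {chosen} for this session...");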
Busting a myth - 64K ports
To address the question component regarding "64,000", this is a misconception. A server can connect to many more than 65535 clients. See https://networkengineering.stackexchange.com/questions/48283/is-a-tcp-server-limited-to-65535-clients/48284
By the way, Http.sys on Windows permits multiple applications to share the same server port under the HTTP URL scheme. They each register a separate domain binding, but there is ultimately a single server application proxying the requests to the correct applications.
Update 2019-05-30
Here is an up to date comparison of the fastest HTTP libraries - https://www.techempower.com/benchmarks/#section=data-r16&hw=ph&test=plaintext
Test date: 2018-06-06
Hardware used: Dell R440 Xeon Gold + 10 GbE
The leader has ~7M plaintext responses per second (responses, not connections)
The second, fasthttp for Go, advertises 1.5M concurrent connections - see https://github.com/valyala/fasthttp
The leading languages are Rust, Go, C++, Java, C, and even C# ranks at 11 (6.9M per second). Scala and Clojure rank further down. Python ranks at 29th at 2.7M per second.
At the bottom of the list, I note Laravel and CakePHP, Rails, aspnet-mono-ngx, Symfony, Zend, all below 10k per second. Note that most of these frameworks are built for dynamic pages and are quite old; there may be newer variants that feature higher up the list.
Remember this is HTTP plaintext, not for the Websocket specialty: many people coming here will likely be interested in concurrent connections for websocket.
This question is a fairly difficult one. There is no real software limitation on the number of active connections a machine can have, though some OS's are more limited than others. The problem becomes one of resources. For example, let's say a single machine wants to support 64,000 simultaneous connections. If the server uses 1MB of RAM per connection, it would need 64GB of RAM. If each client needs to read a file, the disk or storage array access load becomes much larger than those devices can handle. If a server needs to fork one process per connection then the OS will spend the majority of its time context switching or starving processes for CPU time.
The C10K problem page has a very good discussion of this issue.
To add my two cents to the conversation: on Linux-type systems, the value in /proc/sys/net/core/somaxconn limits the backlog of pending connections on a listening socket (clients waiting to be accepted), not the total number of sockets a process can hold open at once (that is bounded by its file descriptor limit). You can read it with
cat /proc/sys/net/core/somaxconn
This number can be modified on the fly (only by root user of course)
echo 1024 > /proc/sys/net/core/somaxconn
But the real number of sockets that can be connected before the system falls over depends entirely on the server process, the hardware of the machine, and the network.
It looks like the answer is at least 12 million, if you have a beefy server, your server software is optimized for it, and you have enough clients. If you test from one client to one server, the number of port numbers on the client will be one of the obvious resource limits (each TCP connection is defined by the unique combination of IP address and port number at the source and destination).
(You need to run multiple clients as otherwise you hit the 64K limit on port numbers first)
When it comes down to it, this is a classic example of the witticism that "the difference between theory and practice is much larger in practice than in theory". In practice, achieving the higher numbers seems to be a cycle of: a. propose specific configuration/architecture/code changes; b. test until you hit a limit; c. have I finished? If not, then d. work out what the limiting factor was; e. go back to step a (rinse and repeat).
Here is an example with 2 million TCP connections onto a beefy box (128GB RAM and 40 cores) running Phoenix http://www.phoenixframework.org/blog/the-road-to-2-million-websocket-connections - they ended up needing 50 or so reasonably significant servers just to provide the client load (their initial smaller clients maxed out too early, e.g. "maxed our 4core/15gb box @ 450k clients").
Here is another reference for go this time at 10 million: http://goroutines.com/10m.
This appears to be java based and 12 million connections: https://mrotaru.wordpress.com/2013/06/20/12-million-concurrent-connections-with-migratorydata-websocket-server/
Note that HTTP doesn't typically keep TCP connections open for any longer than it takes to transmit the page to the client; and it usually takes much more time for the user to read a web page than it takes to download the page... while the user is viewing the page, he adds no load to the server at all.
So the number of people that can be simultaneously viewing your web site is much larger than the number of TCP connections that it can simultaneously serve.
In the case of the IPv4 protocol, a server with one IP address that listens on one port only can handle 2^32 IP addresses x 2^16 ports, so 2^48 unique sockets. If you speak about a server as a physical machine, and you are able to utilize all 2^16 ports, then there could be a maximum of 2^48 x 2^16 = 2^64 unique TCP/IP socket pairs for one IP address. Please note that some ports are reserved for the OS, so this number will be lower. To sum up:
1 IP and 1 port --> 2^48 sockets
1 IP and all ports --> 2^64 sockets
all unique IPv4 sockets in the universe --> 2^96 sockets
There are two different discussions here: One is how many people can connect to your server. This one has been answered adequately by others, so I won't go into that.
The other is how many ports your server can listen on. I believe this is where the 64K number came from. Actually, the TCP protocol uses a 16-bit identifier for a port, which translates to 65,536 (a bit more than 64K). This means that you can have that many different "listeners" on the server per IP address.
I think that the number of concurrent socket connections one web server can handle largely depends on the amount of resources each connection consumes and the amount of total resource available on the server barring any other web server resource limiting configuration.
To illustrate, if every socket connection consumed 1 MB of server resources and the server had 16 GB of RAM available, then (theoretically) it would only be able to handle 16 GB / 1 MB = 16,384 concurrent connections. I think it's as simple as that... REALLY!
So regardless of how the web server handles connections, every connection will ultimately consume some resource.
