Number of concurrent requests a server can handle - tcp

In the book "Computer Networking - A Top-Down Approach", I just studied that TCP sockets are uniquely identified by four parameters:
Sender's IP address
Sender's port number
Receiver's IP address
Receiver's port number
In the case of a server, the server's IP address and port number are fixed, while clients' IP addresses and port numbers will obviously vary. For each new request, the server spawns a new thread (and with it a new TCP socket) to handle that request.
My question is: is there any calculation we can perform to estimate the maximum number of clients a single server machine can handle?
Thanks

Hi, it's a good question.
First, if one thread handles one connection, the limit on a 32-bit (x86) machine is quite low - on the order of a thousand connections per process - because the virtual address space is capped at 4 GB and every thread stack eats into it. On x64 the ceiling is higher, but the context switching between that many threads is still not something we want.
So instead we let one thread service many connections; that is I/O multiplexing.
Even this has well-known limits, usually referred to as the C10K and C10M problems. C10K is addressed with non-blocking I/O multiplexing such as epoll or IOCP, and kernel-bypass frameworks such as DPDK help with C10M.
If you want to reach an even higher connection count, you use a distributed system instead of only one server.
The limits of a single server are a long story, and I think you can learn a lot from studying them.
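To make the I/O multiplexing idea concrete, here is a minimal, hedged sketch in Java using java.nio's non-blocking channels and a Selector (epoll-backed on Linux). The port number 9000 and the echo behaviour are placeholders, not anything from the question.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

public class MultiplexedEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));      // placeholder port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();                          // one thread waits on all sockets
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {               // new client: register it, no new thread
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {          // data from an existing client
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    int n = client.read(buf);
                    if (n == -1) { client.close(); continue; }
                    buf.flip();
                    client.write(buf);                  // echo back (demo only)
                }
            }
        }
    }
}
```

With this structure, adding a client costs only a registration with the selector rather than a whole thread, which is what pushes the practical limit past the thread-per-connection ceiling.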

Related

How to deal with many incoming UDP packets when the server has only 1 UDP socket?

When a server has only 1 UDP socket, and many clients are sending UDP packets to it, what would be the best approach to handle all of the incoming packets?
I think this can also be a problem with TCP packets, since there's a limited thread count, which cannot cover all client TCP socket receive events.
But things are better in this situation because there's 1 TCP socket per client, and even if the network buffer is full, packet receiving is blocked until the queue has space (let me know if I'm wrong).
UDP packets, however, are discarded when the buffer is full, and there's only 1 socket, so the chances of that happening are higher.
How can I solve this problem? I've searched for a while, but I couldn't get a clear answer. Should I implement my own queueing system? Or just maximize the network buffer size?
There is no way to guarantee you won't drop UDP messages. No matter what you do, if the rate of packets being sent is too large, you will drop some, either on the receiving host or somewhere in the network.
Some things that can help include:
Implementing an internal queue for messages in your Java app, and handing them over to a thread pool to process (a sketch follows below).
Increasing the kernel's message buffering.
But neither of these can deal with the case where the average message arrival rate is higher than the receiver's ability to process them or the network capacity. That will inevitably lead to lost messages (requests).
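Here is a minimal sketch of the queue-plus-thread-pool suggestion, assuming a Java app as mentioned above. The port, pool size, queue length and the handle() method are placeholder choices, and the bounded queue means overload still drops messages, just as noted above.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.Arrays;
import java.util.concurrent.*;

public class UdpQueueingReceiver {
    public static void main(String[] args) throws Exception {
        // Bounded queue + fixed pool: the receive thread only copies packets and hands them off.
        ExecutorService workers = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(10_000),                // internal message queue
                new ThreadPoolExecutor.DiscardPolicy());         // overload => drop, much like UDP itself

        try (DatagramSocket socket = new DatagramSocket(5000)) { // placeholder port
            socket.setReceiveBufferSize(4 * 1024 * 1024);        // ask the kernel for a bigger buffer
            byte[] buf = new byte[1500];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            while (true) {
                socket.receive(packet);                          // drain the socket as fast as possible
                byte[] copy = Arrays.copyOf(packet.getData(), packet.getLength());
                workers.execute(() -> handle(copy));             // slow work happens off the receive thread
            }
        }
    }

    // Placeholder for the application's real message processing.
    static void handle(byte[] message) {
        System.out.println("got " + message.length + " bytes");
    }
}
```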
I've searched for a while, but I couldn't get a clear answer.
That is because there isn't one! Some problems are fundamentally unsolvable. For others, the best answer depends on factors that are too hard to measure or predict.
(If you want certainty ... don't use networking!)
In the TCP case, what you should do is use a (long-term) socket for each client. Depending on the number of sockets you need to support, you could either:
Dedicate a server-side thread to each socket (and client) - a sketch of this option follows below.
Use java.nio.channels.Selector and a thread pool.
You will still get problems if the rate of requests exceeds your server's ability to process them. However, the TCP connections will ensure that requests are not lost, and that the clients get some "back pressure".
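For illustration, here is a minimal thread-per-client sketch using plain blocking Java I/O (the first option above). The port and the line-based "ack" handling are placeholders for your own protocol.

```java
import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPerClientServer {
    public static void main(String[] args) throws IOException {
        ExecutorService pool = Executors.newCachedThreadPool();  // one thread per connected client
        try (ServerSocket server = new ServerSocket(9090)) {     // placeholder port
            while (true) {
                Socket client = server.accept();                 // long-term socket per client
                pool.execute(() -> serve(client));
            }
        }
    }

    // Each client is handled by its own thread; a slow client only blocks its own thread,
    // and TCP's flow control provides the "back pressure" mentioned above.
    static void serve(Socket client) {
        try (Socket c = client;
             BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println("ack: " + line);                     // placeholder request handling
            }
        } catch (IOException e) {
            // client disconnected or I/O error; nothing more to do
        }
    }
}
```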

Requirements for Repeated TCP connects

I am using Winsock, and I have a need to issue a TCP connect repeatedly to a third-party server. These applications will stay up potentially for days at a time. I am the only client connecting to the server. The time between connects is on the order of seconds, and the connection stays up only long enough to send a single message of a few bytes. I am currently seeing that the connects start to fail (WSAECONNREFUSED) after a few hours. Is there anything I must do (e.g. socket options, etc.) to ensure these frequent repeated connects will succeed for an indefinite amount of time? Thanks!
When you are doing a lot of transaction-based connections and run into trouble with TCP's TIME_WAIT state duration (which lasts 2*MSL = 120 seconds), leaving no more connections available from a given client host toward a particular server host, you should consider UDP and managing the re-sending of lost requests yourself.
I know that sounds odd. But standard services like DNS are required to use UDP to handle a ton of transactions (a request followed by a single answer in one UDP segment) precisely to avoid the issues you are experiencing yourself. Web browsers send their requests to the DNS server using UDP, and a re-request is sent, again over UDP, after a short time - no longer than a few milliseconds, I guess. Sometimes the answer is too long to fit in the UDP packet; in that case the DNS server sends a UDP reply with a dedicated flag raised, asking the client to retry over TCP.
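To show roughly what "managing the re-sending of lost requests yourself" can look like, here is a hedged Java sketch (the question uses Winsock, but the idea is the same): a UDP request with a receive timeout and a few retransmissions. The host, port, payload and timeout values are placeholders; real code would also match replies to requests, e.g. with a request ID, and tune the backoff to the network.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class UdpRequestWithRetry {
    // Sends one small request and waits for a reply, retransmitting on timeout.
    static byte[] request(InetAddress server, int port, byte[] payload, int maxTries)
            throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {     // ephemeral local port, no TIME_WAIT
            socket.setSoTimeout(200);                            // per-attempt timeout in ms (tune this)
            byte[] buf = new byte[1500];
            for (int attempt = 1; attempt <= maxTries; attempt++) {
                socket.send(new DatagramPacket(payload, payload.length, server, port));
                try {
                    DatagramPacket reply = new DatagramPacket(buf, buf.length);
                    socket.receive(reply);                       // success: hand back the reply bytes
                    return Arrays.copyOf(reply.getData(), reply.getLength());
                } catch (SocketTimeoutException lost) {
                    // request or reply was lost: fall through and resend
                }
            }
            throw new Exception("no reply after " + maxTries + " attempts");
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] msg = "a few bytes".getBytes(StandardCharsets.UTF_8);                // placeholder payload
        byte[] reply = request(InetAddress.getByName("192.0.2.10"), 7000, msg, 5);  // placeholder host/port
        System.out.println(new String(reply, StandardCharsets.UTF_8));
    }
}
```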
Moreover, you may also consider the T/TCP extension (Transactional TCP), if it is available on your Windows platform. It provides TCP reliability with a shorter TIME_WAIT state, at nearly no cost in modifications to your client code. As far as I know it may work even if the server does not support that extension. As a side note, it is currently not used on the internet, as it is known to have some flaws...

Can TCP re-transmit a handshake when it’s lost in transport?

I saw a large number of failed connections between two hosts on my intranet (call them client and server).
Using netstat on both machines, I see corresponding port numbers where the server end is in SYN_RECV state and the client is in SYN_SENT.
My interpretation is that the server has responded to the client’s SYN with a SYN,ACK but this packet has been lost. The handshake is disrupted, the socket connection is in an incomplete state, and I see the client time out after 20-45 seconds.
My question is, does TCP offer a way for the server to re-transmit the SYN,ACK after some interval? Is this a good or bad idea?
More system details if relevant: both ends RHEL5, ssh succeeds, ping loses 100%, traceroute succeeds. Client is built on OpenOrb (Java), server is Mico (C++).
SYN and FIN flags are considered part of the sequence space and are transmitted reliably (so, the answer to your immediate question is "yes, it does, by default").
However, I think you really want to dig a bit deeper, because:
If you have a large number of failed connections between hosts on your intranet, that points to a problem in the network - normally you should have few, if any, connections stuck in these states. Retransmissions mean your connection will hiccup for 2, 4, 8, ... seconds (though not necessarily - it depends on the TCP stack; nonetheless, nothing pretty for the users).
I would advise running tcpdump or wireshark on both hosts to trace where the packets are being lost - and fixing that.
On older hardware, a frequent cause is a duplex mismatch on some pair of devices in the path (incorrectly autodetected, or incorrectly hardcoded). Other causes may be a problem with a driver, or a bad cable (not bad enough to cause a complete outage, but bad enough to cause periodic blackouts).

Constantly reading/writing data over a TCP/IP Port. Which one?

Unfortunately I don't know much about networking. I am writing a program that has two versions: a server version and a client version. Let's assume the client version is installed on, say, 20 PCs that are connected to the server over Ethernet. The client version needs to CONSTANTLY get some data from the server. The data is kind of serial. I want a way to broadcast the data, which gets updated every second, and make it available to all the other PCs in the network. Could I use the HTTP port for this, like writing the data to an HTML page or something? Or is there a better port or method for doing this?
Any ideas will be greatly appreciated.
This sounds like a pretty straightforward application of TCP sockets. The server would be set up to "listen" on a particular port (you pick the port number, say 12345), and each client would make a TCP connection to the server on that port.
Whenever the server has data to send, it would send it once to each connected client. This could mean that the server sends the data up to 20 times on different sockets, but that's fine. The client would read the data from its connected socket to the server.
There are other alternatives, such as UDP or even UDP multicast, but these usually end up being a lot more complicated because UDP doesn't guarantee that packets always arrive at the destination (and they may even be duplicated or out of order). TCP ensures that the data you send either arrives complete in the correct order, or doesn't arrive at all (in that case the connection would be dropped).
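Here is a minimal Java sketch of that design, assuming the clients simply read newline-terminated text. Port 12345 comes from the answer above; the one-second tick and the message content are placeholders.

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class BroadcastServer {
    public static void main(String[] args) throws Exception {
        List<PrintWriter> clients = new CopyOnWriteArrayList<>();

        // One thread accepts clients on port 12345 and keeps a writer per connection.
        Thread acceptor = new Thread(() -> {
            try (ServerSocket server = new ServerSocket(12345)) {
                while (true) {
                    Socket s = server.accept();
                    clients.add(new PrintWriter(s.getOutputStream(), true));
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        acceptor.start();

        // The main loop sends the current data once to every connected client.
        while (true) {
            String data = "reading at " + System.currentTimeMillis();   // placeholder payload
            for (PrintWriter out : clients) {
                out.println(data);
                if (out.checkError()) {
                    clients.remove(out);                                // drop disconnected clients
                }
            }
            Thread.sleep(1000);                                         // "updated every second"
        }
    }
}
```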
An example of this sort of multiple TCP connection is VNC:
VNC is widely used in educational contexts, for example to allow a distributed group of students simultaneously to view a computer screen being manipulated by an instructor, or to allow the instructor to take control of the students' computers to provide assistance.
There are many ways; you can choose any of them, but I think the document below will help you a lot.
Multicast over TCP/IP HOWTO:
http://www.ibiblio.org/pub/Linux/docs/howto/other-formats/html_single/Multicast-HOWTO.html#sect-trans-prots

What is the cost of many TIME_WAIT on the server side?

Let's assume there is a client that makes a lot of short-living connections to a server.
If the client closes the connection, there will be many ports in TIME_WAIT state on the client side. Since the client runs out of local ports, it becomes impossible to make a new connection attempt quickly.
If the server closes the connection, I will see many TIME_WAITs on the server side. However, does this do any harm? The client (or other clients) can keep making connection attempts since it never runs out of local ports, and the number of TIME_WAIT state will increase on the server side. What happens eventually? Does something bad happen? (slowdown, crash, dropped connections, etc.)
Please note that my question is not "What is the purpose of TIME_WAIT?" but "What happens if there are so many TIME_WAIT states on the server?" I already know what happens when a connection is closed in TCP/IP and why TIME_WAIT state is required. I'm not trying to trouble-shoot it but just want to know what is the potential issue with it.
To put simply, let's say netstat -nat | grep :8080 | grep TIME_WAIT | wc -l prints 100000. What would happen? Does the OS's network stack slow down? "Too many open files" error? Or, just nothing to worry about?
Each socket in TIME_WAIT consumes some memory in the kernel, usually somewhat less than an ESTABLISHED socket yet still significant. A sufficiently large number could exhaust kernel memory, or at least degrade performance because that memory could be used for other purposes. TIME_WAIT sockets do not hold open file descriptors (assuming they have been closed properly), so you should not need to worry about a "too many open files" error.
The socket also ties up that particular src/dst IP address and port pair, so it cannot be reused for the duration of the TIME_WAIT interval. (This is the intended purpose of the TIME_WAIT state.) Tying up the port is not usually an issue unless you need to reconnect with the same port pair. Most often one side will use an ephemeral port, with only one side anchored to a well-known port. However, a very large number of TIME_WAIT sockets can exhaust the ephemeral port space if you are repeatedly and frequently connecting between the same two IP addresses. Note this only affects this particular IP address pair, and will not affect establishment of connections with other hosts.
Each connection is identified by a tuple (server IP, server port, client IP, client port). Crucially, the TIME_WAIT connections (whether they are on the server side or on the client side) each occupy one of these tuples.
With the TIME_WAITs on the client side, it's easy to see why you can't make any more connections - you have no more local ports. However, the same issue applies on the server side - once it has 64k connections in TIME_WAIT state for a single client, it can't accept any more connections from that client, because it has no way to tell the difference between the old connection and the new connection - both connections are identified by the same tuple. The server should just send back RSTs to new connection attempts from that client in this case.
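As a small illustration of that point, here is a hedged Java sketch in which the server side closes each connection first and therefore accumulates the TIME_WAIT entries; the port 8080 matches the netstat example in the question, and the loop count is arbitrary.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Tiny demo: the side that closes first (the server here) accumulates the TIME_WAIT entries.
// Run it, then check with:  netstat -nat | grep :8080 | grep TIME_WAIT | wc -l
public class TimeWaitDemo {
    public static void main(String[] args) throws Exception {
        Thread server = new Thread(() -> {
            try (ServerSocket ss = new ServerSocket(8080)) {      // port from the question
                while (true) {
                    Socket s = ss.accept();
                    s.close();                                    // server closes first => server-side TIME_WAIT
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        server.start();

        // Client side: many short-lived connections, as described in the question.
        for (int i = 0; i < 5000; i++) {
            try (Socket s = new Socket("127.0.0.1", 8080)) {      // loopback; each uses a fresh client port
                // connection is closed immediately by the server
            }
        }
        System.out.println("done - inspect TIME_WAIT counts with netstat");
    }
}
```

Each iteration leaves one TIME_WAIT entry on the server for the tuple (127.0.0.1:8080, 127.0.0.1:clientport), which is exactly the per-pair exhaustion described above.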
Findings so far:
Even if the server closed the socket with a system call, its file descriptor will not be released while it is in the TIME_WAIT state; the file descriptor will be released later, when the TIME_WAIT state is gone (i.e. after 2*MSL seconds). Therefore, too many TIME_WAITs could possibly lead to a 'too many open files' error in the server process.
I believe the OS TCP/IP stack is implemented with appropriate data structures (e.g. a hash table), so the total number of TIME_WAITs should not affect the performance of the OS TCP/IP stack. Only the process (server) which owns the sockets in TIME_WAIT state will suffer.
If you have a lot of connections from many different client IPs to the server IPs you might run into limitations of the connection tracking table.
Check:
sysctl net.ipv4.netfilter.ip_conntrack_count
sysctl net.ipv4.netfilter.ip_conntrack_max
Across all src IP/port and dst IP/port tuples, you can only have net.ipv4.netfilter.ip_conntrack_max entries in the tracking table. If this limit is hit, you will see the message "nf_conntrack: table full, dropping packet." in your logs, and the server will not accept new incoming connections until there is space in the tracking table again.
This limitation might hit you long before the ephemeral ports run out.
In my scenario, I ran a script which schedules files repeatedly; my product does some computation and sends a response to the client, i.e. the client makes a repeated HTTP call to get the response for each file. When around 150 files are scheduled, the socket ports on my server go into TIME_WAIT state, and an exception is thrown in the client that opens an HTTP connection, i.e.:
Error : [Errno 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted
The result was that my application hung. I do not know whether the threads went into a wait state or what happened, but I need to kill all processes or restart my application to make it work again.
I tried reducing the wait time to 30 seconds (it is 240 seconds by default), but it did not work.
So basically the overall impact was critical, as it made my application non-responsive.
It looks like the server can simply run out of ports to assign to incoming connections (for the duration of the existing TIME_WAITs) - a potential vector for a DoS attack.

Resources