Assuming I have a long-lasting persistent TCP connection from my EC2 instance to a client, and I send a single byte every 30 seconds (application-level keep-alive).
Is that counted as 1 byte of traffic, or are TCP headers or even Ethernet overhead counted as well (which would significantly change the equation)?
Typically, bandwidth is measured at the switch level, and what gets measured is the full size of all your packets.
I would assume that's what Amazon does for EC2.
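As a rough illustration of how much that matters, here is a back-of-the-envelope sketch assuming IPv4, 20-byte IP and 20-byte TCP headers (no options), and metering at the IP level; it ignores Ethernet framing and the peer's ACKs, which would add more still:
    # Hedged estimate: 1 keep-alive byte every 30 s, counted at the IP level.
    ip_header, tcp_header, payload = 20, 20, 1
    packet_bytes = ip_header + tcp_header + payload   # 41 bytes per keep-alive packet
    packets_per_day = 24 * 60 * 60 // 30              # 2880 keep-alives per day
    print(packet_bytes * packets_per_day)             # 118080 bytes/day vs 2880 bytes of payload
So the headers, not the payload, dominate the bill for this kind of traffic.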
I have come to understand that if a machine makes two requests to the same destination IP and the same destination port, the source ports have to be different. But if that is the case, there must be a maximum number of active connections a client can have to a server. Is there a limit on how many such connections there can be?
A port is 16 bits, so the absolute limit would be 2^16 (65,536).
Of course, port 0 is never really used, and ports 1 to 1023 are reserved for well-known server services. Plus, in most cases, you have a limited range of ports you can use to connect as a client. These are called ephemeral ports and are between 49152 and 65535.
The number 49152 is 0xC000, so you get the top quarter of the ports available to clients, or 16,384 ports (2^14). That's your limit as a client.
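Note that the actual range is configurable and differs between systems; Linux, for example, typically defaults to 32768-60999 rather than the IANA 49152-65535. A quick sketch for checking it, assuming a Linux host with the usual /proc interface:
    # Read the configured ephemeral (client) port range and count the ports in it.
    # This file is the net.ipv4.ip_local_port_range sysctl.
    with open("/proc/sys/net/ipv4/ip_local_port_range") as f:
        low, high = map(int, f.read().split())
    print(low, high, high - low + 1)   # e.g. 32768 60999 28232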
Note that memory is also required, more or less depending on your application, and the kernel also needs enough memory to allocate that many ports. So you are more likely to run out of memory before you can allocate that many ports (unless, like me, you have a computer with 512 MB of RAM or more... then you'll have a hard time stressing the memory allocation in most cases).
In practical use, I've never run out of client ports. The main issue I run into is multiple applications trying to listen on the same port (e.g. two servers both trying to listen on port 80).
Let's assume the TCP Reno variant.
I have this situation: a VoIP (UDP) stream and a TCP session on the same host.
Let's say at t=10s the TCP sender opens the session with the TCP receiver (another host); they exchange their maximum windows during the 3-way handshake and then start the stream with the slow-start approach.
At t=25s, a VoIP stream starts. Since it's a UDP stream, the aim is to saturate the receiver. Having no congestion control, it should burst packets as fast as it can.
Since the two share the same channel, and we assume that no router in the network topology goes down etc. (so no anomalies), my question is:
Is there any way packet loss could occur for the VoIP stream?
I was thinking that since VoIP is sensitive to jitter, and TCP's slow-start approach is not really slow, packet loss could occur because the router queues add delay variation once they are "flooded" by the early TCP packets.
Is there any other reason?
A couple of comments first:
VoIP will not usually 'saturate' the receiver (or the network) - it will simply send as many packets as it needs for the particular codec you are using. In other words it won't just keep growing until it fills the network.
VoIP systems are sensitive to jitter as you note. Packet loss is actually related to this as a VoIP system will generally consider a packet lost if it arrives outside the jitter buffer window. So even though the packet may not in fact be lost, and only delayed, if it arrives outside the jitter buffer window it is effectively lost as far as the VoIP system is concerned.
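To illustrate that "effectively lost" rule, here is a toy sketch (not any real VoIP implementation, and the numbers are made up): a packet that arrives after its scheduled playout time is discarded and counted as lost even though the network delivered it.
    # Toy jitter-buffer model: packets are scheduled for playout JITTER_BUFFER_MS
    # after their nominal send time; anything arriving later is treated as lost.
    JITTER_BUFFER_MS = 60        # illustrative buffer depth
    FRAME_INTERVAL_MS = 20       # e.g. one packet per 20 ms of audio

    def is_effectively_lost(seq, arrival_ms, first_send_ms):
        send_ms = first_send_ms + seq * FRAME_INTERVAL_MS
        playout_deadline_ms = send_ms + JITTER_BUFFER_MS
        return arrival_ms > playout_deadline_ms

    # A packet delayed 75 ms by a queue full of TCP traffic misses its slot:
    print(is_effectively_lost(seq=5, arrival_ms=1000 + 5 * 20 + 75, first_send_ms=1000))  # True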
Answering your specific question: yes, other traffic can create delayed packets which may appear lost to the VoIP receiver. It is worth noting that on a link where UDP and TCP are sharing the bandwidth, TCP is better 'behaved' than UDP in that it will try to limit itself to avoid congestion. UDP does not, and hence may actually get more than its fair share of the bandwidth compared to the TCP traffic because of this.
I have a server and a client running on two Unix machines. They can be two machines on a LAN, or far apart and connected over a VLAN. The client only receives packets and the server only sends (UDP or TCP).
How do I measure the latency between them programmatically?
One way of doing this is to add a timestamp to the packet before sending it, but the clocks are not guaranteed to be synchronized. Any suggestions?
If your communications are strictly unidirectional and the clocks aren't synchronised, you can't do it.
You could introduce a new packet sent from the client to the server that asks "what time is it?" The server would respond with its time, and the client would divide the round-trip time by two to estimate the one-way latency. As a side benefit, the client can find out what time the server thinks it is.
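A minimal sketch of that idea over UDP; the address, port, and message format below are made up for illustration, and the server is assumed to echo back its current time immediately:
    # Estimate one-way latency as half the measured round-trip time.
    import socket, time

    HOST, PORT = "192.0.2.10", 9999        # hypothetical server address

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)

    t0 = time.monotonic()
    sock.sendto(b"what time is it?", (HOST, PORT))
    server_time, _ = sock.recvfrom(1024)   # blocks until the server's reply arrives
    one_way = (time.monotonic() - t0) / 2  # half the round-trip time
    print(f"estimated one-way latency: {one_way * 1000:.2f} ms")
Note this assumes the path is roughly symmetric; if the forward and return paths differ a lot, RTT/2 is only an approximation.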
If we have a connection of some Mbps between source and destination with a known latency, and the source has two processes sending data via TCP and UDP respectively, which of the two processes will have higher throughput, and how do I calculate it?
I am not a computer science student and I don't know networks.
TCP and UDP both use the IP layer and will both have the same network available to them. Depending on the protocol you use, you could get more throughput via UDP. This would require you to write a data-transfer protocol that is more aggressive than TCP, or one that discards lost data without having to resend it.
If you did write a protocol more aggressive than TCP, it would likely be banned by anyone managing a network that came into contact with it, since it would degrade the TCP sessions on that network.
If you could afford to simply discard any data that doesn't come through, then you wouldn't waste the bandwidth TCP spends resending lost packets, and UDP would be a more natural choice. But since you care about bandwidth, I'm guessing that's not the case?
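For the "how to calculate it" part, a common rough upper bound on a single TCP flow is its window size divided by the round-trip time; UDP has no such cap, but also no guarantee that what you send arrives. A sketch with purely illustrative numbers:
    # Back-of-the-envelope TCP bound: throughput <= window / RTT.
    # The 64 KiB window and 50 ms RTT are example values, not measurements.
    window_bytes = 64 * 1024
    rtt_s = 0.050
    print(window_bytes * 8 / rtt_s / 1e6, "Mbit/s")   # ~10.5 Mbit/s, regardless of link speed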
Assuming infinite performance from hardware, can a Linux box support >65536 open TCP connections?
I understand that the number of ephemeral ports (<65536) limits the number of connections from one local IP to one port on one remote IP.
The tuple (local ip, local port, remote ip, remote port) is what uniquely defines a TCP connection; does this imply that more than 65K connections can be supported if more than one of these parameters is free, e.g. connections to a single port number on multiple remote hosts from multiple local IPs?
Is there another 16 bit limit in the system? Number of file descriptors perhaps?
A single listening port can accept more than one connection simultaneously.
There is a '64K' limit that is often cited, but that is per client per server port, and needs clarifying.
Each TCP/IP packet has basically four fields for addressing. These are:
source_ip source_port destination_ip destination_port
<----- client ------> <--------- server ------------>
Inside the TCP stack, these four fields are used as a compound key to match up packets to connections (e.g. file descriptors).
If a client has many connections to the same port on the same destination, then three of those fields will be the same - only source_port varies to differentiate the different connections. Ports are 16-bit numbers, therefore the maximum number of connections any given client can have to any given host port is 64K.
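You can see this from the client side with a couple of sockets; example.com:80 is just an illustrative target:
    # Two connections to the same destination ip:port differ only in their
    # local (source) port, which the OS picks from the ephemeral range.
    import socket

    a = socket.create_connection(("example.com", 80))
    b = socket.create_connection(("example.com", 80))
    print(a.getsockname())   # e.g. ('10.0.0.5', 51234)
    print(b.getsockname())   # same local IP, different ephemeral port
    a.close()
    b.close()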
However, multiple clients can each have up to 64K connections to some server's port, and if the server has multiple ports or either is multi-homed then you can multiply that further.
So the real limit is file descriptors. Each individual socket connection is given a file descriptor, so the limit is really the number of file descriptors that the system has been configured to allow and resources to handle. The maximum limit is typically up over 300K, but is configurable e.g. with sysctl.
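Here is one quick way to see where that ceiling sits on a given Linux box (a sketch; the paths and numbers vary by distribution and configuration):
    # Inspect the per-process and system-wide file-descriptor limits that
    # ultimately bound the number of open sockets (Linux-specific paths).
    import resource

    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print("per-process fd limit:", soft, "/", hard)

    with open("/proc/sys/fs/file-max") as f:   # the fs.file-max sysctl
        print("system-wide fd limit:", f.read().strip())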
The realistic limits boasted about for normal boxes are around 80K, for example for single-threaded Jabber messaging servers.
If you are thinking of running a server and trying to decide how many connections can be served from one machine, you may want to read about the C10k problem and the potential problems involved in serving lots of clients simultaneously.
If you used a raw socket (SOCK_RAW) and re-implemented TCP in userland, I think the answer in this case is limited only by the number of (local address, source port, destination address, destination port) tuples (~2^64 per local address).
It would of course take a lot of memory to keep the state of all those connections, and I think you would have to set up some iptables rules to keep the kernel TCP stack from getting upset and/or responding on your behalf.