How to count packets that are ECN-marked at a switch - networking

Two hosts are connected through a switch by link1 and link2 respectively:
H1 ---- link1 (bw1) ---- Switch [buffer B, ECN threshold K] ---- link2 (bw2) ---- H2
The switch has a buffer of B packets and supports ECN: it marks packets when the queue length exceeds K packets.
Link1's bandwidth bw1 is larger than link2's bandwidth bw2, and host1 sends packets continuously.
Host1 increases its sending rate by growing its congestion window. Because bw1 > bw2, after some time the switch has to buffer packets, and once the queue reaches the K threshold it starts marking packets with ECN.
Assume that at some point host1's congestion window is CW1, so from then on there are at most CW1 packets in flight per RTT.
How many of those packets get ECN-marked at the switch within one RTT? I want to compute that fraction.
Thank you.

Just monitor the output port of the switch and count the packets that have the CE codepoint set.
Use tcpdump and you will capture all of the packets; you can then easily count the CE-marked ones with a capture filter.
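For instance, here is a minimal libpcap sketch that counts IPv4 packets whose ECN field carries CE (binary 11); the interface name "eth0", Ethernet framing and the 10000-packet sample size are assumptions for illustration:

#include <pcap.h>
#include <stdio.h>

/* Sketch: count IPv4 packets whose ECN field is CE.
   "eth0", Ethernet framing and the sample size are assumptions. */
static long total = 0, ce_marked = 0;

static void handler(u_char *user, const struct pcap_pkthdr *h, const u_char *bytes)
{
    (void)user; (void)h;
    if (bytes[12] != 0x08 || bytes[13] != 0x00)   /* only IPv4 over Ethernet */
        return;
    total++;
    if ((bytes[14 + 1] & 0x03) == 0x03)           /* DSCP/ECN byte, ECN bits == CE */
        ce_marked++;
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_open_live("eth0", 64, 1, 1000, errbuf);
    if (!p) { fprintf(stderr, "%s\n", errbuf); return 1; }
    pcap_loop(p, 10000, handler, NULL);           /* sample 10000 packets */
    printf("CE-marked: %ld of %ld packets (%.3f)\n",
           ce_marked, total, total ? (double)ce_marked / total : 0.0);
    pcap_close(p);
    return 0;
}

The equivalent command-line capture filter is tcpdump 'ip[1] & 0x3 == 3', which matches IPv4 packets with both ECN bits set; dividing that count by the total number of packets seen over the same RTT gives the fraction you are after.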

Related

Estimating TCP and UDP delay between two nodes

Suppose we have two nodes, A and B, connected directly over the Internet (we can ignore the underlying network, e.g. routers, ISPs, etc.).
We know the RTT between the nodes (80 ms)
We know the packet loss rate (0.1)
We know the jitter (1 ms)
We know the bandwidth: A = 100/10 Mbps, B = 50/5 Mbps (the first value is download, the second is upload)
A sends a 1 GB file to B using the TCP protocol (with a 64 KB segment size).
How much time do they need to exchange the file?
How much time does it take to do the same thing using the UDP
protocol?
EDIT:
I guess the main difference in the calculation between UDP and TCP is that in TCP we need to wait for every packet to be acknowledged before sending the next one. Or, in other words, we have to add one RTT to the delay calculation for every packet. Moreover, packet loss is not considered at all in UDP. I am not sure about what I'm saying in this edit, so let me know if I'm wrong.
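For what it's worth, here is a back-of-envelope sketch of the stop-and-wait model described in the edit above; treating A's 10 Mbit/s uplink as the bottleneck and folding the 10% loss in as a simple retransmission factor are simplifying assumptions, not how a real TCP stack behaves:

#include <math.h>
#include <stdio.h>

/* Back-of-envelope sketch of the stop-and-wait model from the edit above.
   The 10 Mbit/s bottleneck (A's uplink) and the simple retransmission
   factor for loss are assumptions. */
int main(void)
{
    const double file_bytes = 1024.0 * 1024 * 1024;  /* 1 GB                 */
    const double seg_bytes  = 64.0 * 1024;           /* 64 KB segments       */
    const double rtt        = 0.080;                 /* 80 ms                */
    const double loss       = 0.1;                   /* packet loss rate     */
    const double bottleneck = 10e6;                  /* A's 10 Mbit/s uplink */

    double segments  = ceil(file_bytes / seg_bytes);
    double serialize = seg_bytes * 8 / bottleneck;   /* wire time per segment */

    /* TCP, stop-and-wait: one segment per RTT + serialization, lost ones resent. */
    double tcp_time = segments * (serialize + rtt) / (1.0 - loss);

    /* UDP: segments streamed back to back; lost data is simply never resent. */
    double udp_time = file_bytes * 8 / bottleneck;

    printf("segments: %.0f\n", segments);
    printf("TCP (stop-and-wait): ~%.0f s\n", tcp_time);
    printf("UDP (streaming):     ~%.0f s\n", udp_time);
    return 0;
}

With the numbers from the question this comes out to roughly 2400 s (about 40 minutes) for the stop-and-wait TCP model and about 860 s of pure serialization for UDP; a real TCP stack with a large window would do much better than the stop-and-wait figure.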

Send packets larger than 64K in TCP

As far as we know, the absolute limit on TCP packet size is 64K (65535 bytes), and in practice this is far larger than any packet you will actually see, because the lower layers (e.g. Ethernet) have smaller packet sizes. The MTU (Maximum Transmission Unit) for Ethernet, for instance, is 1500 bytes.
I want to know: is there any way, or any tool, to send packets larger than 64K?
I want to test how a device behaves when faced with packets larger than 64K. I mean, if I send a packet larger than 64K, how does it behave? Does it drop part of it, or something else?
So:
1- How do I send such large packets? What is the proper layer for this?
2- How does the receiver usually behave?
The IP packet format has only 16 bits for the size of the packet, so you will not be able to create a packet larger than 64K. See http://en.wikipedia.org/wiki/IPv4#Total_Length. Since TCP uses IP as its lower layer, this limit applies to TCP too.
There is no such thing as a TCP packet. TCP data is sent and received in segments, which can be as large as you like up to the limits of the API you're using, as they can be comprised of multiple IP packets. At the receiver TCP is indistinguishable from a byte stream.
NB: OSI has nothing to do with this, or anything else.
TCP segments are not inherently size-limited. What imposes the limit is that IPv4 and IPv6 packets have 16-bit length fields, so a larger size simply cannot be expressed.
However, RFC 2675 is a proposed standard for IPv6 which expands the length field to 32 bits, allowing much larger TCP segments.
See here for a talk about why this change could help improve performance and here for a set of (experimental) patches to Linux to enable this RFC.
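To make the byte-stream point concrete, here is a small sketch that hands a 1 MB buffer (far larger than 64K) to a TCP socket; the 127.0.0.1 destination and the port argument are assumptions, and something like nc -l <port> can act as the receiver:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch: push 1 MB of payload through a TCP connection to 127.0.0.1:<port>
   (destination and port are assumptions). send() may accept fewer bytes
   than asked for, so loop until everything is handed to the kernel. */
static int send_all(int fd, const char *buf, size_t len)
{
    size_t done = 0;
    while (done < len) {
        ssize_t n = send(fd, buf + done, len - done, 0);
        if (n < 0)
            return -1;                 /* real code would also handle EINTR */
        done += (size_t)n;
    }
    return 0;
}

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s <port>\n", argv[0]); return 1; }

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in dst = { 0 };
    dst.sin_family = AF_INET;
    dst.sin_port   = htons((unsigned short)atoi(argv[1]));
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

    if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) < 0) {
        perror("connect");
        return 1;
    }

    size_t len = 1024 * 1024;              /* 1 MB, far more than 64K */
    char  *buf = malloc(len);
    memset(buf, 'x', len);

    if (send_all(fd, buf, len) != 0)
        perror("send");

    free(buf);
    close(fd);
    return 0;
}

The kernel slices that buffer into MSS-sized segments on the wire; the receiver never sees a "1 MB packet", only a stream of bytes.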

Buffer queue in router/switch

I'm confused about the buffer queue concept in a router/switch.
Say two hosts are connected to the same switch with the same delay; the link between host1 and the switch has bandwidth BW1, and the link between host2 and the switch has bandwidth BW2.
Host1 sends packets continuously to host2.
If BW1 = BW2, then when a packet arrives at the router it is immediately switched to host2. That means the router doesn't need a buffer queue, right?
If BW1 > BW2, then the sending rate is higher than the receiving rate, and the router has to keep some packets in a buffer queue.
I wonder what a buffer queue really is. Is the queue concept different from the buffer concept?
Please help me out.
Thank you
Even if the bandwidths of both links are the same, the router needs to do some processing on each packet:
It extracts the IP header and looks at the destination IP address.
It looks up the routing table and finds the next hop to which it needs to send the packet.
It reconstructs the packet and sends it to the next hop.
So there is some processing overhead, and if packets arrive faster than the router can process them, it needs to buffer the packets.
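As a toy illustration of why the buffer fills, here is a sketch that models queue occupancy when the arrival rate exceeds the drain rate; the 100/40 Mbit/s rates and 1500-byte packets are made-up numbers, not from the question:

#include <stdio.h>

/* Toy model of queue occupancy at the switch: packets arrive at bw1 and
   drain at bw2. The 100/40 Mbit/s rates and 1500-byte packets are
   illustration values only. */
int main(void)
{
    const double bw1 = 100e6, bw2 = 40e6;   /* bit/s in, bit/s out      */
    const double pkt = 1500.0 * 8;          /* packet size in bits      */
    double queue = 0.0;                     /* packets currently queued */

    for (int ms = 1; ms <= 10; ms++) {      /* simulate 10 ms in 1 ms steps */
        double in  = bw1 * 0.001 / pkt;     /* packets arriving this step   */
        double out = bw2 * 0.001 / pkt;     /* packets the link can drain   */
        queue += in - out;
        if (queue < 0) queue = 0;           /* queue never goes negative    */
        printf("t=%2d ms  queue ~ %.1f packets\n", ms, queue);
    }
    return 0;
}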

Packet loss showing at point of entry onto network - what could cause it?

A traffic source (a server) with a 1-gigabit NIC is attached to a 1-gigabit port of a Cisco switch.
I mirror this traffic (SPAN) to a separate gigabit port on the same switch and then capture this traffic on a high throughput capture device (riverbed shark).
Wireshark analysis of the capture shows that there is a degree of packet loss - around 0.1% of TCP segments are being lost (based on sequence number analysis).
Given that this is the first point on the network for this traffic, what can cause this loss?
The throughput is nowhere near 1 gigabit, and there are no port errors (which might indicate a dodgy patch lead).
In Richard Stevens' TCP/IP Illustrated he mentions 'local congestion' - where the TCP stack is producing data faster than the underlying local queues can be emptied.
Could this be what I am seeing?
If so, is there a way to confirm it on an AIX box?
(Stevens' example used the Linux 'tc' command for a ppp0 device to demonstrate drops at the lower level.)
The loss can occur anywhere along the network path.
If there is loss between two hosts, you should be seeing DUP ACKs. You need to see which side is sending the DUP ACKs; that is the host that isn't receiving all the packets. (When a segment goes missing, the receiver sends a DUP ACK to ask for it again.)
There may be congestion somewhere else along the path. Look for output drops on interfaces, or CRC errors.

Maximum buffer length for sendto?

How do you get the maximum number of bytes that can be passed to a sendto(..) call for a socket opened as a UDP port?
Use getsockopt(). This site has a good breakdown of the usage and options you can retrieve.
In Windows, you can do:
int optval = 0;
int optlen = sizeof(optval);
/* SO_MAX_MSG_SIZE is the largest datagram this socket can send. */
getsockopt(socket, SOL_SOCKET, SO_MAX_MSG_SIZE, (char *)&optval, &optlen);
For Linux, according to the UDP man page, the kernel will use MTU discovery (it will check what the maximum UDP packet size is between here and the destination, and pick that), or if MTU discovery is off, it'll set the maximum size to the interface MTU and fragment anything larger. If you're sending over Ethernet, the typical MTU is 1500 bytes.
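As a sketch of what that looks like on Linux (the destination 192.0.2.1:9 is a documentation/placeholder address): IP_MTU_DISCOVER controls per-socket path MTU discovery, and on a connected UDP socket IP_MTU reports the path MTU the kernel currently assumes:

#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

/* Linux-only sketch: turn off per-socket path MTU discovery, connect the
   UDP socket, and query the path MTU the kernel currently believes in.
   The destination 192.0.2.1:9 is a placeholder (TEST-NET-1). */
int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    int pmtu_mode = IP_PMTUDISC_DONT;   /* fragment locally instead of setting DF */
    setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &pmtu_mode, sizeof(pmtu_mode));

    struct sockaddr_in dst = { 0 };
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(9);
    dst.sin_addr.s_addr = htonl(0xC0000201);   /* 192.0.2.1 */
    connect(fd, (struct sockaddr *)&dst, sizeof(dst));

    int mtu = 0;
    socklen_t len = sizeof(mtu);
    if (getsockopt(fd, IPPROTO_IP, IP_MTU, &mtu, &len) == 0)
        printf("path MTU currently known to the kernel: %d\n", mtu);
    return 0;
}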
On Mac OS X there are different values for sending (SO_SNDBUF) and receiving (SO_RCVBUF).
This is the size of the send buffer (man getsockopt):
int optval = 0; socklen_t optlen = sizeof(optval);
getsockopt(sock, SOL_SOCKET, SO_SNDBUF, &optval, &optlen);
Trying to send a larger message (on Leopard, more than 9216 octets over UDP via the local loopback) results in "Message too long" (EMSGSIZE).
As UDP is not connection-oriented, there's no way to indicate that two packets belong together. As a result you're limited by the maximum size of a single IP packet (65535 bytes). The data you can send is somewhat less than that, because the IP packet size also includes the IP header (usually 20 bytes) and the UDP header (8 bytes).
Note that this IP packet can be fragmented to fit into smaller packets (e.g. ~1500 bytes for Ethernet).
I'm not aware of any OS restricting this further.
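As a sketch of where that limit bites: hand a full 65535-byte buffer to sendto() and it should fail with EMSGSIZE, while 65507 bytes (65535 minus the 20-byte IPv4 header and 8-byte UDP header) can fit in one IPv4 datagram. The loopback destination and port are assumptions, and on systems with a small default send buffer (like the Leopard example above) the second send may fail too:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

/* Sketch: the largest IPv4 UDP payload is 65535 - 20 (IP header) - 8 (UDP
   header) = 65507 bytes. Loopback destination and port are assumptions. */
int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst = { 0 };
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(9999);
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

    static char buf[65535];
    memset(buf, 'x', sizeof(buf));

    if (sendto(fd, buf, 65535, 0, (struct sockaddr *)&dst, sizeof(dst)) < 0)
        printf("65535 bytes: %s\n", strerror(errno));   /* expect EMSGSIZE */

    if (sendto(fd, buf, 65507, 0, (struct sockaddr *)&dst, sizeof(dst)) >= 0)
        printf("65507 bytes: sent\n");
    else
        printf("65507 bytes: %s\n", strerror(errno));   /* may still fail if SO_SNDBUF is small */
    return 0;
}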
Bonus
SO_MAX_MSG_SIZE of UDP packet
IPv4: 65,507 bytes
IPv6: 65,527 bytes
