Systematic TCP retransmission

I have five identical devices connected to the switch. The IPs are static. Using Wireshark, I see systematic TCP retransmissions within 1 or 2 µs on ports 1 and 5 only. After swapping port 1 with port 2, for example, the retransmissions remain the same, so it is probably not due to lost data. What could the issue be?

Related

TCP Dup ACK after reconnection - sequence number problem?

Currently I'm debugging the network traffic between two devices in the same network. The network architecture is quite simple:
Device 1 <-> Switch 1 <-> Switch 2 <-> Device 2
To verify the software on the devices, I check different scenarios.
One of them is correct reconnection after I unplug the network cable between Switch 1 and Switch 2 and plug it in again after a few seconds.
I uploaded a Wireshark capture to my OneDrive: Wireshark capture
Packets 1-9 are correct communication.
Between Packet 9 and Packet 10 the cable is unplugged.
In Packet 10, Device 1 tries to send data to Device 2 without receiving an ACK.
In Packets 11/12 Device 2 sends Keep-Alive messages.
Between Packet 12 and 13 the cable is plugged in again.
Packet 13 ACKs the last Keep-Alive message, which seems fine.
From then on, it gets weird: I only see TCP Dup ACK messages.
I assume that the TCP stacks get confused on the difference in the sequence numbers.
While Device 1 thinks its own sequence number is 49, Device 2 thinks it is 37.
Device 1 does not support Fast Retransmission.
Can someone explain what is happening here? I'm struggling to understand where the problem is.
Is the problem in Device 1, whose TCP stack thinks it is on sequence number 49 while the packet has not yet been acknowledged, or is it in Device 2?
I really appreciate your help.
Kindly,
Philipp

Wireshark shows only three packets, not four, when closing an HTTP connection

When I was a student, I was taught that there are four steps in closing an HTTP connection.
However, today I tested it and see only three. Does Wireshark merge them?
You were likely taught that TCP connections require a four-way close: FIN/ACK -> ACK, FIN/ACK -> ACK. This is true, but it does not have to take four packets to do it.
In the case that you present, the 192.168.0.106 host begins to close with a FIN/ACK. The other end of the connection, rather than simply ACKing this, takes the opportunity to begin closing as well. So, when it responds with a FIN/ACK, it is both ACKing the FIN that it received and beginning its own close. The final packet is the acknowledgement of the FIN from the 211 host.
What this means is that, in this case, only three packets were used, but we still had a FIN from host A that was acknowledged and a FIN from host B that was acknowledged. That is really the only requirement in the protocol.
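If you want to reproduce this yourself, the close sequence can be driven from ordinary socket code. Here is a minimal sketch in C (127.0.0.1:8080 is a placeholder, assuming some TCP listener is there); the shutdown() call emits our FIN, and whether you then capture four packets or three depends on whether the peer's stack piggybacks its FIN on the ACK:

/* Minimal active close (POSIX sockets). Capture with Wireshark
 * while this runs to watch the FIN exchange.
 * 127.0.0.1:8080 is a placeholder; any TCP listener will do. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("connect");
        return 1;
    }

    /* Active close: this sends our FIN. The peer may ACK it
     * separately and FIN later (4 packets), or combine both (3). */
    shutdown(s, SHUT_WR);

    /* Drain until the peer's FIN arrives (read() returns 0). */
    char buf[256];
    while (read(s, buf, sizeof buf) > 0)
        ;
    close(s);
    return 0;
}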

Send bits over physical ethernet cable without any error correction like FCS or CRC

I would like to send some raw bits over an Ethernet cable between two computers. Errors that occur in the data during transmission are caught by the Ethernet Frame Check Sequence (FCS, a CRC: cyclic redundancy check) and by further checks in upper layers like TCP.
But I do not want any error detection or correction techniques to be applied: I want to see the exact bits as received, including any errors that occurred in transmission. I have seen some articles (for example http://hacked10bits.blogspot.in/2011/12/sending-raw-ethernet-frames-in-6-easy.html) on sending raw Ethernet frames, but I think those frames also undergo FCS/CRC checks. Is it possible to send data without any such error checks? Thanks.
Edit 1
I am connecting two computers directly, end to end, using an Ethernet cable (no switches or routers in between).
The ethernet cable is "CAT 5E", labelled as "B Network Patch Cable CAT 5E 24AWG 4PR-ETL TIA/EIA-568B"
The output of lspci -v is (nearly the same for both computers):
Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 0c)
Subsystem: Lenovo RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
Flags: bus master, fast devsel, latency 0, IRQ 28
I/O ports at e000 [size=256]
Memory at f7c04000 (64-bit, non-prefetchable) [size=4K]
Memory at f7c00000 (64-bit, prefetchable) [size=16K]
Capabilities: [40] Power Management version 3
Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit+
Capabilities: [70] Express Endpoint, MSI 01
Capabilities: [b0] MSI-X: Enable- Count=4 Masked-
Capabilities: [d0] Vital Product Data
Capabilities: [100] Advanced Error Reporting
Capabilities: [140] Virtual Channel
Capabilities: [160] Device Serial Number 01-00-00-00-68-4c-e0-00
Capabilities: [170] Latency Tolerance Reporting
Kernel driver in use: r8169
Kernel modules: r8169
I used the following command to show the FCS and not drop bad frames:
sudo ethtool -K eth0 rx-fcs on rx-all on
Still, I am not receiving any error/bad frames. I am sending 1000 bits of zeros in each frame, and none of the received bits were 1s. Do I need to keep sending a lot of such frames in order to receive a bad one? (The bit error rate is probably very low for a CAT 5E cable.)
Also, can I implement my own LAN protocol with the same NIC and Ethernet cable?
Basically, I want to get as many errors as possible during the transmission and detect all of them.
While it's generally not possible to send Ethernet frames without appending a proper FCS, it is often possible to receive frames that don't have a correct FCS. Many network controllers support this, though it would likely require you to modify the native network device driver.
Many Intel NICs, for example, have a mode setting that causes framing errors, FCS errors, and other sorts of error frames to be discarded. The driver usually turns that feature on. This is generally desirable because such frames are unlikely to be useful (since they are known to be corrupted). However, for troubleshooting purposes, the NIC supports receiving all frames, including error frames. It's just that there's usually no reason to expose that feature to users. After all, who wants to receive known-corrupted frames?
NICs typically count such frames even when discarding them, and many expose those counts via diagnostic counters. In Linux, you can often see them with ethtool -S <interface>.
For example, on my machine (note rx_crc_errors):
$ ethtool -S eth0
NIC statistics:
rx_packets: 1629186
tx_packets: 138121
rx_bytes: 747886491
tx_bytes: 12198820
rx_broadcast: 0
tx_broadcast: 0
rx_multicast: 0
tx_multicast: 0
rx_errors: 0
tx_errors: 0
tx_dropped: 0
multicast: 0
collisions: 0
rx_length_errors: 0
rx_over_errors: 0
rx_crc_errors: 0
rx_frame_errors: 0
rx_no_buffer_count: 0
rx_missed_errors: 0
tx_aborted_errors: 0
tx_carrier_errors: 0
tx_fifo_errors: 0
tx_heartbeat_errors: 0
tx_window_errors: 0
tx_abort_late_coll: 0
tx_deferred_ok: 0
tx_single_coll_ok: 0
tx_multi_coll_ok: 0
tx_timeout_count: 0
tx_restart_queue: 0
rx_long_length_errors: 0
rx_short_length_errors: 0
rx_align_errors: 0
tx_tcp_seg_good: 269
tx_tcp_seg_failed: 0
rx_flow_control_xon: 0
rx_flow_control_xoff: 0
tx_flow_control_xon: 0
tx_flow_control_xoff: 0
rx_long_byte_count: 747886491
rx_csum_offload_good: 1590047
rx_csum_offload_errors: 0
alloc_rx_buff_failed: 0
tx_smbus: 0
rx_smbus: 0
dropped_smbus
This is not possible. The FCS is mandatory for Ethernet frames (layer 2); it detects errors, but locating or correcting the erroneous bits isn't possible. FEC is used with faster PHYs (layer 1) and isn't optional either.
When switching off FCS checking on a NIC, keep in mind that any switch you use also checks for FCS errors and drops ingress frames with a bad FCS. Ethernet is explicitly designed not to propagate error frames.
Regarding Edit 1 and the question comments:
With decent cabling, error frames on GbE should be very rare. If you actually want errors(?), use a good length of Cat 3 cable or severely abuse the Cat 5 cable...
An Ethernet NIC speaks Ethernet. If you want your own protocol, you'd need to build your own hardware.
Sending a wrong CRC/FCS is not possible with your (Realtek) NIC. The NIC appends the 4-byte FCS to every packet you send, even to "hand-made" raw packets sent through an AF_PACKET socket.
As far as I know, the only standard NICs that support sending a wrong CRC/FCS are those using the following Intel drivers:
e1001, e1000, e100, ixgbe, i40e and igb.
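For reference, here is what handing a raw frame to the NIC looks like; a minimal sketch in C using an AF_PACKET socket (the interface name eth0, the MAC addresses, and the experimental EtherType are placeholders). Even here, the NIC computes and appends the FCS itself:

/* Send one hand-built Ethernet frame via an AF_PACKET socket (Linux).
 * "eth0" and the MAC addresses below are placeholders; run as root.
 * Note: the NIC still computes and appends the 4-byte FCS. */
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int s = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_ll dev;
    memset(&dev, 0, sizeof dev);
    dev.sll_family = AF_PACKET;
    dev.sll_ifindex = if_nametoindex("eth0");  /* placeholder interface */
    dev.sll_halen = ETH_ALEN;
    memcpy(dev.sll_addr, "\xff\xff\xff\xff\xff\xff", ETH_ALEN);

    /* 14-byte header + 50 zero bytes of payload = 64-byte frame. */
    unsigned char frame[64] = {0};
    memcpy(frame, "\xff\xff\xff\xff\xff\xff", 6);      /* dst: broadcast */
    memcpy(frame + 6, "\x02\x00\x00\x00\x00\x01", 6);  /* src: locally administered */
    frame[12] = 0x88; frame[13] = 0xB5;                /* EtherType: local experimental */

    if (sendto(s, frame, sizeof frame, 0,
               (struct sockaddr *)&dev, sizeof dev) < 0)
        perror("sendto");
    close(s);
    return 0;
}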

Number of tcp connections used by MPI program (MPICH2+nemesis+tcp)

How many TCP connections will be used for sending data by an MPI program if the MPI implementation is MPICH2? If you also know about PMI connections, count them separately.
For example, if I have 4 processes and 2 additional communicators (COMM1 for the 1st and 2nd processes, COMM2 for the 3rd and 4th); data is sent between every possible pair of processes, in every possible communicator.
I use a recent MPICH2 + hydra + the default PMI. The OS is Linux, and the network is switched Ethernet. Every process is on a separate PC.
So, here are the paths of data (in pairs of processes):
1 <-> 2 (in MPI_COMM_WORLD and COMM1)
1 <-> 3 (only in MPI_COMM_WORLD)
1 <-> 4 (only in MPI_COMM_WORLD)
2 <-> 3 (only in MPI_COMM_WORLD)
2 <-> 4 (only in MPI_COMM_WORLD)
3 <-> 4 (in MPI_COMM_WORLD and COMM2)
I think it can be one of:
Case 1:
Only 6 TCP connections are used; data sent in COMM1 and in MPI_COMM_WORLD is mixed in a single TCP connection.
Case 2:
8 TCP connections: 6 in MPI_COMM_WORLD (all-to-all = full mesh) + 1 for 1 <-> 2 in COMM1 + 1 for 3 <-> 4 in COMM2
Or some other variant that I didn't think of.
Which communicators are being used doesn't affect the number of TCP connections that are established. For --with-device=ch3:nemesis:tcp (the default configuration), you will use one bidirectional TCP connection between each pair of processes that directly communicate via point-to-point MPI routines. In your example, this means 6 connections. If you use collectives then under the hood additional connections may be established. Connections will be established lazily, only as needed, but once established they will stay established until MPI_Finalize (and sometimes also MPI_Comm_disconnect) is called.
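To make the pairwise, lazy behaviour concrete, here is a minimal sketch (plain MPI in C, not MPICH2 internals) in which every pair of ranks exchanges one message. Run with 4 processes under ch3:nemesis:tcp, it would drive exactly the full mesh of 6 connections described above:

/* All-pairs ping to exercise one connection per communicating pair.
 * With 4 ranks this touches all 6 pairs; connections come up lazily
 * on first use and persist until MPI_Finalize. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int token = rank;
    for (int peer = 0; peer < size; peer++) {
        if (peer == rank) continue;
        if (rank < peer) {  /* lower rank sends first: no head-on deadlock */
            MPI_Send(&token, 1, MPI_INT, peer, 0, MPI_COMM_WORLD);
            MPI_Recv(&token, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(&token, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&token, 1, MPI_INT, peer, 0, MPI_COMM_WORLD);
        }
    }
    printf("rank %d exchanged with all %d peers\n", rank, size - 1);
    MPI_Finalize();
    return 0;
}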
Off the top of my head I don't know how many connections are used by each process for PMI, although I'm fairly sure it should be one per MPI process connecting to the hydra_pmi_proxy processes, plus some other number (probably logarithmic) of connections among the hydra_pmi_proxy and mpiexec processes.
I can't answer your question completely, but here's something to consider. In MVAPICH2 we developed a tree-based connection mechanism for the PMI, so each node has at most log(n) TCP connections. Since opening a socket subjects you to the open-file-descriptor limit on most OSes, it's probable that the MPI library uses a logical topology over the ranks to limit the number of TCP connections.

Winsock TCP Packets sent but not reaching host

When the server sends 4 or more 25-byte packets to the client, only the first 2 are processed by the client. I am using event select on the client and send() on the server. There are no errors, but only the first 2 packets are displayed. Thanks in advance.
Without looking at your code, I can think of only one issue that you might be overlooking.
Maybe you are missing the point that TCP is a stream-based protocol. If you send data by calling the send function 10 times on the client, it does not follow that the receive function will be called 10 times on the receiving side. All the data may arrive in 1 receive, or in 5, 8, or 12 receives. Don't think of it in terms of packets: you have to do the framing yourself to identify your messages.
When you send 4 packets of 25 bytes each, the total is 100 bytes of data.
On the receiving side you may get 2 chunks of 50 bytes, and you have to identify your messages yourself using start and end markers, length prefixes, etc.
You could also get a single chunk of 100 bytes, or 10 chunks of 10 bytes. Keep that in mind.
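One common way to do that framing (a sketch; the 4-byte big-endian length prefix and the recv_all/recv_message helpers are choices made here, not Winsock APIs) is to put a length in front of every message:

/* Length-prefixed framing over a TCP stream (sketch).
 * Wire format assumed here: 4-byte big-endian length, then payload.
 * recv_all() and recv_message() are local helpers, not Winsock APIs. */
#include <stdint.h>
#include <stdlib.h>
#ifdef _WIN32
#include <winsock2.h>
#else
#include <arpa/inet.h>
#include <sys/socket.h>
typedef int SOCKET;
#endif

/* Loop until exactly n bytes have arrived (recv may return less). */
static int recv_all(SOCKET s, char *buf, size_t n) {
    size_t got = 0;
    while (got < n) {
        int r = recv(s, buf + got, (int)(n - got), 0);
        if (r <= 0) return -1;  /* error or peer closed */
        got += (size_t)r;
    }
    return 0;
}

/* Read one framed message; caller frees the returned buffer. */
char *recv_message(SOCKET s, uint32_t *len_out) {
    uint32_t netlen;
    if (recv_all(s, (char *)&netlen, sizeof netlen) < 0) return NULL;
    uint32_t len = ntohl(netlen);
    char *payload = malloc(len ? len : 1);
    if (!payload || recv_all(s, payload, len) < 0) {
        free(payload);
        return NULL;
    }
    *len_out = len;
    return payload;
}

The sender prepends htonl(length) to each payload; on the client side, with event select, you would append whatever recv() returns to a buffer on each read event and run the same parsing over it.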
