I understood that the TCP checksum is calculated automatically if we pass 0 for the checksum argument of libnet_build_tcp(), so why do we need libnet_do_checksum()?
I get an error when I try to build a new packet. A regular TCP packet (SYN, ACK) works fine, but an HTTP packet doesn't, because of a TCP checksum error.
Do I have to use libnet_do_checksum()?
You use libnet_do_checksum() when you want to manually calculate the checksum, so you can check it before sending, for example.
Are you sure the packet carrying HTTP data has a checksum error? It can happen that the OS is using checksum offloading. Wireshark would report a bad checksum on the origin machine but the network card will compute it before sending the packet on the wire.
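To make the "pass 0 and libnet fills it in" behaviour concrete, here is a minimal sketch using the libnet 1.1-style API. The addresses, ports, ID and payload are placeholders I made up, not taken from the question; the point is only that the checksum arguments are 0, so libnet_do_checksum() is only needed if you want to compute or inspect a checksum yourself.

/* Minimal sketch (libnet 1.1-style API; addresses, ports and payload are
 * placeholders): passing 0 as the checksum argument makes libnet fill in
 * the TCP and IP checksums itself when the packet is written. */
#include <libnet.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    char errbuf[LIBNET_ERRBUF_SIZE];
    libnet_t *l = libnet_init(LIBNET_RAW4, NULL, errbuf);
    if (l == NULL) {
        fprintf(stderr, "libnet_init failed: %s\n", errbuf);
        return 1;
    }

    const char *payload = "GET / HTTP/1.0\r\n\r\n";
    uint32_t payload_s = (uint32_t)strlen(payload);

    uint32_t src = libnet_name2addr4(l, (char *)"10.0.0.1", LIBNET_DONT_RESOLVE);
    uint32_t dst = libnet_name2addr4(l, (char *)"10.0.0.2", LIBNET_DONT_RESOLVE);

    /* checksum argument (7th) is 0 -> libnet computes the TCP checksum */
    libnet_build_tcp(12345, 80, 0x01010101, 0x02020202,
                     TH_PUSH | TH_ACK, 32767, 0, 0,
                     LIBNET_TCP_H + payload_s,
                     (uint8_t *)payload, payload_s, l, 0);

    /* checksum argument is 0 here as well -> libnet computes the IP checksum */
    libnet_build_ipv4(LIBNET_IPV4_H + LIBNET_TCP_H + payload_s, 0,
                      242, 0, 64, IPPROTO_TCP, 0, src, dst,
                      NULL, 0, l, 0);

    if (libnet_write(l) == -1)
        fprintf(stderr, "libnet_write failed: %s\n", libnet_geterror(l));

    libnet_destroy(l);
    return 0;
}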
I'm working on a project where the program can detect when it is being scanned for malicious purposes by checking how many ports are being probed at the same time, and then scan the scanner back using the SYN method. I would like to know whether TCP or UDP is better for such a "counter-scan" if I want to reach the target without getting noticed. I have some ideas, like:
I could send the probes using UDP, so the attacker wouldn't notice them.
Using TCP, I could use the existing 3-way handshake to mask my SYN packets among his responses.
Sorry, I have no source code since I'm still brainstorming.
Yes, a UDP scan can be done by looking for ICMP port unreachables, but these are often filtered.
I guess UDP would not be less "noticed"; if anything, TCP does more harm on the target since it needs state saved (waiting for ACKs).
(nit: please work on your English)
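On the ICMP-port-unreachable point: the UDP-probing side learns that a port is closed only by catching those ICMP replies. The following is just a sketch under the assumption of a Linux host with root privileges; the probe-sending part is omitted, and nothing here makes the scan stealthy.

/* Sketch: watch for ICMP destination-unreachable/port-unreachable replies,
 * which is how a UDP probe learns that a port is closed (Linux raw socket,
 * needs root). Sending the UDP probes themselves is omitted here. */
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/ip.h>
#include <netinet/ip_icmp.h>
#include <sys/socket.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);
    if (s < 0) { perror("socket"); return 1; }

    unsigned char buf[1500];
    for (;;) {
        ssize_t n = recv(s, buf, sizeof(buf), 0);
        if (n <= 0) break;

        struct iphdr *ip = (struct iphdr *)buf;
        size_t hlen = ip->ihl * 4;
        if ((size_t)n < hlen + sizeof(struct icmphdr))
            continue;                                  /* truncated, ignore */

        struct icmphdr *icmp = (struct icmphdr *)(buf + hlen);
        if (icmp->type == ICMP_DEST_UNREACH && icmp->code == ICMP_PORT_UNREACH) {
            struct in_addr src = { .s_addr = ip->saddr };
            printf("port unreachable from %s: probed UDP port is closed\n",
                   inet_ntoa(src));
        }
    }
    close(s);
    return 0;
}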
I'm working on a server/client socket application that uses the Linux TUN interface.
The server gets packets directly from the TUN interface and passes them to clients, and the clients put the received packets directly into the TUN interface.
<Server_TUN---><---Server---><---Clients---><---Client_TUN--->
Sometimes the packets from Server_TUN need to be fragmented at the IP layer before being transmitted to a client.
So at the server I read a packet from TUN, fragment it at the IP layer, and send the fragments via the socket to the clients.
After I implemented the fragmentation logic, the solution did not work.
After starting Wireshark on Client_TUN, I noticed that I get a TCP checksum error for all incoming fragmented packets.
In the given screenshot, frame number 154 is claimed to be reassembled in frame 155.
But the TCP checksum is claimed to be incorrect!
On the server side I keep the TCP data intact, and for the given example (though you see the reverse in Wireshark) I've split the packet into fragments of 1452 bytes (including the IP header) and 30 bytes (including the IP header).
I've also checked the TCP checksum value at the server and it is exactly 0x935e. While I did not think that checksum offloading matters for incoming packets, I checked offloading at the client and it was off:
$ sudo ethtool -k tun0 | grep ": on"
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: on
generic-segmentation-offload: on
generic-receive-offload: on
tx-vlan-offload: on
tx-vlan-stag-hw-insert: on
Despite that, since the solution is not working now, I don't think it's caused by an offloading effect.
Do you have any idea why the TCP checksum could be incorrect for fragmented packets?
Fortunately, I found the issue. It was my mistake: some TCP data was missing when I was copying buffers. I was tracking the indexes and lengths, but because the data had changed, the checksum was computed differently on the client side.
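For anyone hitting the same symptom: fragmenting at the IP layer should not require touching the TCP checksum at all. Below is only a sketch of that idea, not the poster's actual code; the callback type, the 1500-byte buffer, and the assumptions that the input packet is not already a fragment, carries no IP options needing special handling, and that the header length is smaller than the MTU are all mine.

/* Sketch: split one IPv4 packet (as read whole from a TUN device) into
 * fragments no larger than `mtu`. Only the IP header checksum is recomputed
 * per fragment; the TCP checksum inside the payload is copied through
 * untouched, because the receiver's IP layer reassembles the fragments
 * before TCP ever verifies it. Assumes the input is not itself a fragment,
 * has no IP options needing special handling, and hlen < mtu <= 1500. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/ip.h>

typedef void (*send_cb_t)(const uint8_t *frag, size_t frag_len);

/* 16-bit ones' complement sum over the IP header (length is a multiple of 4). */
static uint16_t ip_header_checksum(const void *hdr, size_t len)
{
    const uint16_t *p = hdr;
    uint32_t sum = 0;
    while (len > 1) { sum += *p++; len -= 2; }
    while (sum >> 16) sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}

void fragment_ipv4(const uint8_t *pkt, size_t len, size_t mtu, send_cb_t send_cb)
{
    const struct iphdr *orig = (const struct iphdr *)pkt;
    size_t hlen = orig->ihl * 4;
    size_t data_len = len - hlen;
    size_t max_data = ((mtu - hlen) / 8) * 8;   /* fragment offsets count 8-byte units */

    for (size_t off = 0; off < data_len; off += max_data) {
        size_t chunk = (data_len - off < max_data) ? data_len - off : max_data;
        uint8_t frag[1500];

        memcpy(frag, pkt, hlen);                      /* reuse the original IP header */
        memcpy(frag + hlen, pkt + hlen + off, chunk); /* TCP bytes copied as-is       */

        struct iphdr *ip = (struct iphdr *)frag;
        uint16_t flags = (off + chunk < data_len) ? IP_MF : 0;
        ip->frag_off = htons((uint16_t)(((off / 8) & IP_OFFMASK) | flags));
        ip->tot_len  = htons((uint16_t)(hlen + chunk));
        ip->check    = 0;
        ip->check    = ip_header_checksum(ip, hlen);  /* only this checksum changes   */

        send_cb(frag, hlen + chunk);
    }
}

This also explains the bug described above: if the payload slice copied into a fragment is off by even one byte, the reassembled TCP payload differs from the original and the receiver computes a different checksum.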
From what I have read about UDP, it has no error handling, no checking for things like the sequence of data sent/received, no checking for duplicate packets, no checking for corrupt packets, and obviously no guarantee that the packets sent are even received...
So with that in mind, why on earth is there actually an option to use checksums in UDP? Surely, if you want to make sure the data being sent is received in the correct order (and not corrupt, and so on), then you would use TCP...
UDP packets include a field for a 16-bit checksum which the receiving operating system will use to check for packet corruption. If the checksum is present and fails, then the packet will be silently discarded. It is up to the application to notice that the packet disappeared and take corrective action.
UDP checksums are enabled by default on all modern operating systems. It is possible to disable UDP checksums in IPv4, either at the socket or OS level. Doing so would reduce the CPU overhead of processing each packet at both the sender and receiver. This might be desirable if, for example, the application were calculating its own checksum separately. Without any checksum, there would be no guarantee that the bytes received are the same as the bytes sent.
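As a concrete example of the per-socket switch: on Linux, IPv4 UDP sockets expose the SO_NO_CHECK option. This sketch assumes a Linux host; it only disables the checksum on datagrams sent from this one socket.

/* Sketch (Linux-specific): disable UDP transmit checksums on one IPv4 socket.
 * Outgoing datagrams from this socket are sent with checksum field 0,
 * i.e. "no checksum"; received datagrams are unaffected. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int one = 1;
    if (setsockopt(fd, SOL_SOCKET, SO_NO_CHECK, &one, sizeof(one)) < 0)
        perror("setsockopt(SO_NO_CHECK)");

    /* ... bind()/sendto() as usual; the UDP checksum field now goes out as 0 ... */
    return 0;
}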
The task of UDP is to transport datagrams, which are "network data packets". For UDP, every data packet is a transmission of its own. If you send 3 packets, those are three independent transmissions for UDP. Whether the content of these 3 packets somehow belongs together or whether they are three individual requests (think of DNS requests, where every request is sent as its own UDP packet), UDP doesn't know and doesn't care. All that UDP guarantees is that a packet is transmitted as a whole or not at all; either the entire packet arrives or the entire packet is lost, you will never see "half of a packet" arriving. So if you just want to send a bunch of data packets, you use UDP.
The task of TCP, on the other hand, is to transport a stream of data. It's not about packets; it's about a stream of bytes somehow making it from one host to another. How this happens, e.g. how TCP breaks the data stream into chunks, sends these chunks over the network, and ensures that no data is lost and all data arrives in order, is up to TCP. All that TCP guarantees is that the bytes will arrive correctly and in order at the other side, unless the TCP connection is lost, in which case the stream ends abruptly somewhere in the middle, but all data that arrived up to that point did arrive correctly and in the correct order. So even though TCP also works with packets, the transmission behaves like a stream that has no internal "data units". When sending 80 bytes over TCP, there may be one packet with 80 bytes, or 10 packets with 8 bytes each, or anything in between; you cannot know and you don't have to.
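To make the stream point concrete: a receiver expecting, say, 80 bytes over TCP has to loop, because those bytes may arrive in one read or spread over several. A generic sketch (the function name is mine):

/* Sketch: read exactly `want` bytes from a TCP socket, however the stream
 * happens to be chopped into segments on the wire. */
#include <stddef.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Returns 0 once all bytes have arrived, -1 on error or early close. */
int recv_all(int fd, void *buf, size_t want)
{
    size_t got = 0;
    while (got < want) {
        ssize_t n = recv(fd, (char *)buf + got, want - got, 0);
        if (n <= 0)
            return -1;          /* error, or the peer closed mid-stream */
        got += (size_t)n;
    }
    return 0;
}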
But just because you use UDP doesn't mean you don't care about data corruption in UDP packets. Keep in mind that corruption may not just affect your data, it may also affect the UDP header itself. If only a single bit flips, the UDP packets may have an incorrect destination port. So they added a checksum which ensures that neither the UDP header nor the data payload has been corrupted, but made it optional, so it's up to you whether you want to use it or not. If used, corrupt packets are dropped and thus behave like lost packets. If your code takes care of lost packets, it will automatically take care of corrupt packets, too.
With IPv6 though, the checksum was dropped from the IP header, which means that IP header corruptions are no longer detected. But this was seen as a small problem, as most layer 2 protocols have their own mechanism to detect corrupt data (e.g. Ethernet and WiFi already guarantee that data is not corrupted on its way through the network) and the checksums of UDP/TCP also cover some of the IP header fields, so even without layer 2 error checking, the recipient would notice if the IP addresses in the header have been corrupted along the way and drop the packet. As a consequence, the UDP checksum is no longer optional with IPv6.
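For completeness, this is roughly what the UDP checksum computation looks like: a 16-bit ones' complement sum over an IPv4 pseudo-header (which is where the IP addresses mentioned above come in), the UDP header and the payload. The function names are mine; the algorithm follows RFC 768 / RFC 1071.

/* Sketch of the UDP checksum described above: a 16-bit ones' complement sum
 * over an IPv4 pseudo-header (source/destination address, protocol, UDP
 * length), the UDP header and the payload. Because the data is summed in
 * the byte order it has in the packet, the result can be stored into the
 * checksum field without further byte swapping. */
#include <stdint.h>
#include <stddef.h>
#include <arpa/inet.h>

struct pseudo_hdr {            /* IPv4 pseudo-header; never sent on the wire */
    uint32_t src;              /* network byte order */
    uint32_t dst;              /* network byte order */
    uint8_t  zero;
    uint8_t  protocol;         /* 17 for UDP */
    uint16_t udp_len;          /* UDP header + payload, network byte order */
};

static uint32_t sum16(const void *data, size_t len, uint32_t sum)
{
    const uint16_t *p = data;
    while (len > 1) { sum += *p++; len -= 2; }
    if (len) {                                 /* pad an odd trailing byte with zero */
        uint16_t last = 0;
        *(uint8_t *)&last = *(const uint8_t *)p;
        sum += last;
    }
    return sum;
}

/* `udp` points at the UDP header followed by the payload (udp_len bytes in
 * total), with its checksum field set to zero beforehand. */
uint16_t udp_checksum(uint32_t src, uint32_t dst, const void *udp, uint16_t udp_len)
{
    struct pseudo_hdr ph = { src, dst, 0, 17, htons(udp_len) };

    uint32_t sum = sum16(&ph, sizeof(ph), 0);
    sum = sum16(udp, udp_len, sum);

    while (sum >> 16)                          /* fold the carries back in */
        sum = (sum & 0xffff) + (sum >> 16);

    uint16_t result = (uint16_t)~sum;
    return result ? result : 0xffff;           /* 0 is reserved for "no checksum" */
}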
I have a packet trace that I forge with Scapy and resend with tcpreplay. I recompute the IP and transport-layer checksums with Scapy, save the packets to a pcap file on disk, and call tcpreplay on it.
By running tcpdump in parallel, I noticed that the IP checksums of those outgoing packets have no value at all. It seems that tcpreplay is removing them each time.
Now, does this happen on purpose? Am I missing something?
The checksums should be correct, so I don't think tcpreplay removes them because a check on them failed.
You didn't specify the actual tcpreplay command you are using, but tcpreplay never edits packets. You can use tcpreplay-edit or tcprewrite to edit packets, but not tcpreplay. And even then tcpreplay-edit/tcprewrite will calculate/fix your checksums; not zero them out.
Have you opened up the original pcap generated by scapy in Wireshark and verified there are actually checksums there? Honestly, this sounds like a simple case of garbage in, garbage out.
FWIW, I'm not aware of anything that would zero out your checksums... at least I can't imagine why the kernel would do that for packets sent via the PF_PACKET interface; that would be a bug IMHO.
If you figure it out, let me know.
I'm not really sure what's going on, but I suspect that tcpreplay detects that the interface it is going to use to send out the packet has checksum offloading active, and lets the NIC calculate the right checksum.
Try deactivating checksum offloading with
ethtool -K eth0 rx off tx off
then retry and let us know
You can solve this issue using tcpreplay-edit, which is included in the same package as tcpreplay; in particular, use this option:
-C, --fixcsum Force recalculation of IPv4/TCP/UDP header checksums
Deactivating checksum offloading on the interface makes no sense: when the packet goes out, it would be rejected by the next machine that has checksum checking enabled (99+% of them).
I'm building my own webserver based on a tutorial.
I have found a simple way to initiate a TCP connection and send one segment of HTTP data (the webserver will run on a microcontroller, so it will be very small).
Anyway, the following is the sequence I need to go through:
1. receive SYN
2. send SYN,ACK
3. receive ACK (the connection is now established)
4. receive ACK with HTTP GET command
5. send ACK
6. send FIN,ACK with HTTP data (e.g. 200 OK)
7. receive FIN,ACK <- I don't receive this packet!
8. send ACK
Everything works fine until I send my acknowledgement and HTTP 200 OK message.
The client won't send an acknowledgement for those two packets, and thus no webpage is displayed.
I've added a pcap file of the sequence as I recorded it with Wireshark.
Pcap file: http://cl.ly/5f5/httpdump2.pcap
All sequence and acknowledgement numbers are correct, and the checksums are OK. The flags are also right.
I have no idea what is going wrong.
I think that step 6 should be just FIN, without ACK. What packet from the client are you ACKing at that point? Also, I don't see why step 4 should be an ACK instead of just a normal data packet; the client already ACKed the connection at step 3.
This diagram on TCP states might help.
Wireshark says (of the FIN packet):
Broken TCP: The acknowledge field is nonzero while the ACK flag is not set
I don't know for sure that's what's causing your problem, but if Wireshark doesn't like that packet, maybe the client doesn't either. So it should be FIN+ACK, or you should set the acknowledge field to 0.
If that doesn't solve it, you might also try sending the data first, then a separate FIN packet. It's valid to include data with the FIN, but it's more common to send the FIN by itself (as seen in the other pcap trace you posted earlier).
Also, you should probably be setting the PUSH flag in the packet with the 200 OK.
Finally, I don't see any retransmission attempts for the FIN packet - is that because you stopped the capture right away?
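To illustrate the flag combination being suggested (FIN together with ACK, plus PUSH since the segment carries the 200 OK), here is a sketch using the Linux-style struct tcphdr; on a microcontroller the header would be filled in by hand in the same way. The ports, sequence numbers and window size are placeholders of mine.

/* Sketch: header fields for step 6 (FIN,ACK carrying the HTTP data), using
 * the Linux-style struct tcphdr for illustration. ACK is set because the
 * acknowledgement field is nonzero, PSH because application data is
 * attached, FIN to start closing the connection. */
#include <string.h>
#include <stdint.h>
#include <arpa/inet.h>
#include <netinet/tcp.h>

void fill_fin_ack_push(struct tcphdr *tcp, uint32_t seq, uint32_t ack_no)
{
    memset(tcp, 0, sizeof(*tcp));
    tcp->source  = htons(80);       /* our HTTP port                          */
    tcp->dest    = htons(51000);    /* client port (placeholder)              */
    tcp->seq     = htonl(seq);
    tcp->ack_seq = htonl(ack_no);   /* nonzero, so the ACK flag must be set   */
    tcp->doff    = 5;               /* 20-byte header, no options             */
    tcp->fin     = 1;
    tcp->psh     = 1;               /* the 200 OK data rides in this segment  */
    tcp->ack     = 1;
    tcp->window  = htons(1460);
    /* tcp->check still has to be computed over the pseudo-header + segment. */
}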
I made a mistake in my calculations, and as a result the IP length field was counting 8 bits too much. Everything works like a charm now!