tcpreplay removing IP checksums?

I have a packet trace that I forge with Scapy and resend with tcpreplay. I recompute the IP and transport-layer checksums with Scapy, save the packets to a pcap file on disk, and run tcpreplay on it.
By running tcpdump in parallel, I noticed that the IP checksums of all those outgoing packets are empty. It seems that tcpreplay is removing them every time.
Now, does this happen on purpose? Am I missing something?
The checksums should be correct, so I don't think tcpreplay is removing them because a validation check failed.

You didn't specify the actual tcpreplay command you are using, but tcpreplay never edits packets. You can use tcpreplay-edit or tcprewrite to edit packets, but not tcpreplay. And even then, tcpreplay-edit/tcprewrite will calculate/fix your checksums, not zero them out.
Have you opened up the original pcap generated by scapy in Wireshark and verified there are actually checksums there? Honestly, this sounds like a simple case of garbage in, garbage out.
FWIW, I'm not aware of anything that would zero out your checksums... at least I can't imagine why the kernel would do that for packets sent via the PF_PACKET interface; that would be a bug, IMHO.
If you figure it out, let me know.
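To rule out garbage in, garbage out, you can verify the checksums in the Scapy-generated pcap offline. Below is a minimal Python sketch of the RFC 791/RFC 1071 IPv4 header checksum (the sample header bytes are the well-known textbook example, not from the asker's trace):

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """Ones'-complement sum of 16-bit words over the IPv4 header.
    The checksum field (bytes 10-11) must be zeroed before computing."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total >> 16:                      # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Minimal 20-byte IPv4 header with the checksum field zeroed out.
hdr = bytes.fromhex("4500003c1c46400040060000ac100a63ac100a0c")
csum = ipv4_checksum(hdr)
assert csum == 0xB1E6

# Re-inserting the computed checksum makes the whole header verify to 0.
patched = hdr[:10] + struct.pack("!H", csum) + hdr[12:]
assert ipv4_checksum(patched) == 0
```

A header whose stored checksum is correct sums to 0xFFFF, so the function returns 0 for it; anything nonzero (or a stored field of 0x0000) means the pcap never had valid checksums to begin with.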

I'm not really sure what's going on, but I suspect that tcpreplay detects that the interface it will use to send the packets has checksum offloading enabled, and lets the NIC calculate the correct checksums.
Try disabling checksum offloading with
ethtool -K eth0 rx off tx off
then retry and let us know

You can solve this issue using tcpreplay-edit, which is included in the same package as tcpreplay; in particular, this option:
-C, --fixcsum Force recalculation of IPv4/TCP/UDP header checksums
Deactivating checksum offloading on the interface is nonsense: when the packet goes out with a bad checksum, it will be rejected by the next machine that has checksum checking enabled (99+% of them).

Related

Is the UDP or TCP protocol best for sending back unnoticed packets/datagrams?

I'm working on a project where the program can detect when it's being scanned for malicious purposes, by checking how many ports are being scanned at the same time, and then scans the scanner back using the SYN method. I would like to know whether TCP or UDP is better for such a "counter-scan" of the target without getting noticed. I have some ideas:
- send the probes over UDP, so the attacker wouldn't notice them;
- with TCP, use the existing three-way handshake to mask the SYN packets among the attacker's responses.
Sorry, I have no source code, since I'm still brainstorming.
Yes, a UDP scan can be done by looking for ICMP port-unreachable messages, but these are often filtered.
I suspect UDP would not be any less "noticed". TCP does more harm, since it needs state saved (waiting for ACKs).
(nit: please work on your English)
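On the mechanics: whichever protocol you pick, the simplest probe back is a full TCP connect, which is also the most visible kind, since the completed handshake is trivially logged by the target. A minimal stdlib Python sketch (host, port, and timeout are placeholder values; a true half-open SYN scan would need raw sockets and root privileges):

```python
import socket

def tcp_probe(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a full TCP handshake succeeds (port open).
    Unlike a half-open SYN scan, this completes the handshake,
    so the target can easily log the probe."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `tcp_probe("127.0.0.1", 80)` returns True only if something is listening locally on port 80; a refused or timed-out connection yields False.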

generating network traffic with iperf without a server

I need to exercise some hardware by sending network traffic with it. While it is doing that, I will probe some of the lines with an oscilloscope; I need to verify signaling. The problem is that I won't be able to connect to any server during the test, for many reasons, one of which is that the hardware isn't complete yet.
Does anyone know if there is a way to generate network traffic with iperf without using a server? All I need is to send some data; I don't need to know if it was received. If there isn't, can someone point me to a tool that can do that?
iperf in UDP mode will do it; you just need to make sure there is an ARP entry for the destination (enter it manually), or use a multicast destination, which doesn't require ARP, e.g. iperf -u -c 239.1.1.1 -b 10M
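If iperf isn't available on the device, the same ARP-free trick works with any UDP sender. A stdlib Python sketch, assuming the iperf example's multicast group and an arbitrary port and datagram size:

```python
import socket

def blast_udp(group: str = "239.1.1.1", port: int = 5001,
              payload_size: int = 1400, count: int = 1000) -> int:
    """Fire `count` UDP datagrams at a multicast group and return the
    total bytes handed to the kernel. Multicast destinations need no
    ARP entry, so no receiver ever has to exist."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Keep the traffic on the local segment.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    payload = b"\x00" * payload_size
    sent = 0
    for _ in range(count):
        sent += sock.sendto(payload, (group, port))
    sock.close()
    return sent
```

Since UDP is fire-and-forget, sendto succeeds whether or not anyone is listening, which matches the "don't need to know if it was received" requirement.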

Can we use ping to see packets dropped by traffic control?

I am studying traffic control and want to know how to check for packets dropped by a traffic-control setup that I have configured. Can we use ICMP ping for this or not?
You can use ping to check if there is currently some packet loss, but if you need to see whether any packets were dropped earlier, something like "netstat -s", or regularly checking the data in /proc/net/netstat (on Unix-like systems), might be more useful.
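If you do go the ping route, its summary line can be parsed for the loss figure. A small Python sketch (the sample string mimics Linux iputils ping output; it is illustrative, not captured from a real run):

```python
import re

def packet_loss_pct(ping_output: str) -> float:
    """Extract the '% packet loss' figure from ping's summary line."""
    m = re.search(r"([\d.]+)% packet loss", ping_output)
    if m is None:
        raise ValueError("no packet-loss summary found")
    return float(m.group(1))

summary = "4 packets transmitted, 3 received, 25% packet loss, time 3004ms"
assert packet_loss_pct(summary) == 25.0
```

Keep in mind this only measures loss experienced by the ICMP probes themselves; counters such as those from netstat -s reflect all traffic.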

tcpdump slowed down by... its own filter?

Do long BPF filters slow down tcpdump?
I replay a packet trace where all the packets have ttl=k and wait for ICMP messages back. What I've been noticing is that if I use the following filter (on eth0):
(ip and ip[8]=$k and src host $myAddress) or (icmp and dst host $myAddress and icmp[0]=11)
...I always miss 20-30 packets among the sent packets, whereas if I just do:
ip
... and then do the exact above filtering offline on the capture file, I find all the packets I had sent.
Is this a known behaviour?
If tcpdump is not fast enough to pull captured packets off the queue, the kernel may drop some of them.
Look at the "XXXX packets dropped by kernel" message at the end of the dump to see whether some of them were indeed lost.
Make sure to add the -n option to the command line. This avoids DNS resolution and speeds things up a little (depending on your network).
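That statistic is printed on stderr when tcpdump exits, so it is easy to check programmatically. A small Python sketch (the sample text mirrors tcpdump's Linux summary format; the numbers are illustrative):

```python
import re

def kernel_drops(stats: str) -> int:
    """Return the 'dropped by kernel' count from tcpdump's exit summary,
    or 0 if the line is absent."""
    m = re.search(r"(\d+) packets? dropped by kernel", stats)
    return int(m.group(1)) if m else 0

stats = ("1021 packets captured\n"
         "1050 packets received by filter\n"
         "29 packets dropped by kernel\n")
assert kernel_drops(stats) == 29
```

A nonzero drop count while the long filter is active, and zero with the plain "ip" filter, would confirm that per-packet filter cost is what's losing your 20-30 packets.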

Why do we need libnet_do_checksum? HTTP checksum doesn't work

I understood that the TCP checksum is calculated automatically if we pass 0 in libnet_build_tcp, so why do we need libnet_do_checksum?
I get an error when I try to build a new packet. A regular TCP packet (SYN, ACK) works fine, but an HTTP packet doesn't, because of a TCP checksum error.
Do I have to use libnet_do_checksum?
You use libnet_do_checksum() when you want to calculate the checksum manually, for example so you can check it before sending.
Are you sure the packet carrying HTTP data has a checksum error? It can happen that the OS is using checksum offloading: Wireshark would report a bad checksum on the originating machine, but the network card computes it before sending the packet on the wire.
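For reference, the TCP checksum covers an IPv4 pseudo-header (source/destination addresses, protocol number, TCP length) in addition to the segment itself, which is a common source of "bad checksum" bugs when building packets by hand. A stdlib Python sketch of the RFC 793 computation (the addresses and payload are arbitrary examples, not tied to the question's packets):

```python
import struct
from socket import inet_aton

def ones_complement_sum(data: bytes) -> int:
    """Sum 16-bit big-endian words with end-around carry."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return total

def tcp_checksum(src_ip: str, dst_ip: str, segment: bytes) -> int:
    """Checksum over the IPv4 pseudo-header plus the TCP segment
    (whose checksum field must be zeroed beforehand)."""
    pseudo = (inet_aton(src_ip) + inet_aton(dst_ip)
              + struct.pack("!BBH", 0, 6, len(segment)))  # zero, proto=TCP, length
    return ~ones_complement_sum(pseudo + segment) & 0xFFFF

# 20-byte TCP header (checksum field at offset 16 left zero) plus payload.
seg = struct.pack("!HHIIBBHHH",
                  12345, 80,      # source / destination ports
                  0, 0,           # seq / ack
                  5 << 4, 0x02,   # data offset, SYN flag
                  65535, 0, 0) + b"GET /"
csum = tcp_checksum("10.0.0.1", "10.0.0.2", seg)
# Patching the checksum back in makes the segment verify to zero.
patched = seg[:16] + struct.pack("!H", csum) + seg[18:]
assert tcp_checksum("10.0.0.1", "10.0.0.2", patched) == 0
```

If your hand-built HTTP packets fail this kind of offline verification, the checksum really is wrong; if they pass but Wireshark still flags them on the sending host, checksum offloading is the likely explanation.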
