Malformed DNS Request Packet - TCP

I've been working on a project which involves sending DNS requests with information (not actual domains) in the questions (there are 2 of them). I've been tracking the packets with Wireshark.
Here is the hex dump of the packet I created:
00000000  00 02 01 00 00 02 00 00 00 00 00 00 01 32 03 65   ........ .....2.e
00000010  6e 64 03 63 6f 6d 00 00 01 00 01 01 32 04 73 61   nd.com.. ....2.sa
00000020  76 65 03 63 6f 6d 00 00 01 00 01                  ve.com.. ...
The ID and QDCOUNT should both be 2, recursion is desired, and the domains shown are correct. Wireshark is saying that it is a malformed DNS packet. Any idea what is wrong with the packet?

OK, so:
if you're doing the transport-layer networking yourself, your code determines whether the query goes over UDP or TCP, by specifying, when creating the socket on which to send the packet, whether it's a UDP or TCP socket;
TCP is used if the message won't fit in a maximum-sized UDP packet;
if you're sending it over TCP, you need to precede the DNS message with a two-byte length header, as per section 4.2.2 "TCP usage" in RFC 1035 (see the sketch below).
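For example, here is a minimal sketch of sending a raw DNS message over TCP with that two-byte length prefix (the `server` address and the `message` bytes are assumptions for illustration, not values from the question):

```python
import socket
import struct

def send_dns_query_tcp(message: bytes, server: str) -> bytes:
    """Send a raw DNS message over TCP, prefixed with the two-byte,
    big-endian length field required by RFC 1035 section 4.2.2."""
    with socket.create_connection((server, 53)) as sock:
        # The length field gives the size of the DNS message that follows,
        # excluding the two length bytes themselves.
        sock.sendall(struct.pack("!H", len(message)) + message)
        # The response is likewise preceded by its own two-byte length.
        (resp_len,) = struct.unpack("!H", sock.recv(2))
        response = b""
        while len(response) < resp_len:
            chunk = sock.recv(resp_len - len(response))
            if not chunk:
                break
            response += chunk
        return response
```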
"Maximum-sized" is a bit vague. RFC 791, the IPv4 specification, says, in section 3.1 "Internet Header Format":
Total Length: 16 bits
Total Length is the length of the datagram, measured in octets,
including internet header and data. This field allows the length of
a datagram to be up to 65,535 octets. Such long datagrams are
impractical for most hosts and networks. All hosts must be prepared
to accept datagrams of up to 576 octets (whether they arrive whole
or in fragments). It is recommended that hosts only send datagrams
larger than 576 octets if they have assurance that the destination
is prepared to accept the larger datagrams.
The number 576 is selected to allow a reasonable sized data block to
be transmitted in addition to the required header information. For
example, this size allows a data block of 512 octets plus 64 header
octets to fit in a datagram. The maximal internet header is 60
octets, and a typical internet header is 20 octets, allowing a
margin for headers of higher level protocols.
However, these days, the old networking hardware that would impose a maximum packet size as low as 576 bytes is mostly if not completely gone, and the real-world "maximum packet size" is generally the Ethernet frame size: a total length of 1518 bytes, with 14 bytes of Ethernet header and 4 bytes of FCS, leaving 1500 bytes of payload. For UDP, with a typical IPv4 header length of 20 bytes and a UDP header length of 8 bytes, that leaves 1472 bytes of data, so it's probably good enough to use TCP rather than UDP for DNS messages larger than 1472 bytes. (IP fragmentation and reassembly will happen if any hop in the network route can't handle a 1500-byte IPv4 packet; that increases the chances of the packet not getting through, because if one fragment arrives but another doesn't, the entire packet is lost.)
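As a rough illustration of that arithmetic (a sketch assuming a 1500-byte Ethernet payload, a 20-byte IPv4 header with no options, and an 8-byte UDP header):

```python
ETHERNET_PAYLOAD = 1500   # payload of a standard Ethernet frame (the MTU)
IPV4_HEADER = 20          # typical IPv4 header, no IP options
UDP_HEADER = 8

UDP_BUDGET = ETHERNET_PAYLOAD - IPV4_HEADER - UDP_HEADER   # 1472 bytes

def choose_transport(dns_message: bytes) -> str:
    # Messages that fit in one unfragmented Ethernet-sized UDP datagram can go
    # over UDP; larger messages are safer over TCP (with the length prefix above).
    return "udp" if len(dns_message) <= UDP_BUDGET else "tcp"
```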

Related

Bluetooth LE, once read fails, it returns "Read Characteristic Fail" for subsequent requests, but able to receive notifications

Scenario 1 -
Reading characteristics from the BLE device: one request fails ("Read Timeout"), then all subsequent requests fail ("Read Characteristic Fail"). When the BLE device sends a notification it is still received, but reads and writes no longer work. The device is still connected.
```
12-28 13:25:00.620 25648 25666 D Device : resolve: read|<F9...SERVICE_UUID>|<06...CHARACTERISTIC_UUID> 03 E8 00 64 01 7D 50 00 01 F5 01 2D 00 5B 00 47 00 15 00 09
// FULL 20 BYTES RETURNED at 13:25:00, it was working normally then
12-28 13:25:26.230 25648 25666 D Device : reject: read|<F9...SERVICE_UUID>|<06...CHARACTERISTIC_UUID>
// REJECTED at 13:25:26, errors started from here
```
Scenario 2 -
Reading characteristics from the BLE device: one request returns about half the data (20 bytes were expected, but only 9 bytes were received), then all subsequent requests fail ("Read Characteristic Fail"). Notifications from the BLE device are still received, but reads and writes no longer work. The device is still connected.
Logs
```
12-28 13:25:00.620 25648 25666 D Device : resolve: read|<F9...SERVICE_UUID>|<06...CHARACTERISTIC_UUID> 03 E8 00 64 01 7D 50 00 01 F5 01 2D 00 5B 00 47 00 15 00 09
// FULL 20 BYTES RETURNED at 13:25:00, it was working normally then
12-28 13:25:26.230 25648 25666 D Device : resolve: read|<F9...SERVICE_UUID>|<06...CHARACTERISTIC_UUID> 0F 00 C5 00 00 00 10 00 00
// ONLY 9 BYTES RETURNED at 13:25:26, errors started from here
```
Can't BLE ignore a single read failure and continue reading while it is still connected? Reading the same characteristic again also returns "Read Characteristic Fail".
Most probably the notifications and read requests are conflicting (which can't be avoided, because a notification can arrive at any time), and starting/stopping notifications multiple times causes the same issue.
Environment
Ionic--6.20.6
Angular--15.0.4
Capacitor--4.6.1
capacitor-community/bluetooth-le--2.0.1
Device
Xiaomi Redmi Note 10 Pro (android 12)
Also tested with other Android/iOS phones, with the same result
What I have tried:
Reading the same characteristic again, but I get the same error.
Reading 1 second after a notification arrived (the time gap can only be added after a notification is received, not before it arrives, because notifications can come at any time and in random numbers).
Implementing a queue so that only one request is active at a time (a sketch of that idea follows below).
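The question's stack is Ionic/Capacitor with capacitor-community/bluetooth-le; purely to illustrate the queueing idea (never letting two GATT operations overlap), here is a sketch in Python using the bleak library, with a placeholder characteristic UUID:

```python
import asyncio
from bleak import BleakClient  # cross-platform BLE library, used here only for illustration

CHAR_UUID = "00000000-0000-0000-0000-000000000000"  # placeholder characteristic UUID

class SerializedGatt:
    """Funnel every GATT operation through one lock so that reads and
    writes never overlap on the link, even while notifications arrive."""

    def __init__(self, client: BleakClient):
        self.client = client
        self.lock = asyncio.Lock()

    async def read(self, uuid: str) -> bytes:
        async with self.lock:  # only one operation in flight at a time
            return bytes(await self.client.read_gatt_char(uuid))

    async def write(self, uuid: str, data: bytes) -> None:
        async with self.lock:
            await self.client.write_gatt_char(uuid, data)
```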

Is it possible for a certain sequence of data bytes to drop a TCP connection

I have a simple TCP client/server arrangement (running on Windows 10/11) used to transfer binary data from multiple remote clients to a single server. This works 99% of the time. However, whenever the following hexadecimal sequence appears in the data (being sent from the client to the server) the TCP connection drops and the client generates a 10053 error.
6C 74 01 00 08 00 00 00
Running the server application on a local network has slightly different results... the connection does not drop but the client receives no ACK from the server.
Is it possible for a certain sequence of bytes to drop, or otherwise interfere with, a TCP connection?

How to understand the TCP option field I received on my PIC18?

My goal is to exchange packets between my PIC18F67j60 microcontroller (it has an Ethernet module) and my host computer.
I programmed the PIC18F using the MPLAB X IDE (C language, PICkit 3) and, on the computer side, I wrote a simple application in Code::Blocks (C language). The application running on my computer works (I tested it). The goal is to establish TCP communication between the PIC18 and the computer (I know TCP is not ideal for embedded devices like microcontrollers because it takes memory space).
I have already managed to establish UDP communication and could send and receive data in both directions.
The issue arises with TCP communication, and it is the following: my computer sends a TCP segment to my microcontroller (to start the connection process, so the SYN flag is set) and my microcontroller receives it. I then decided to display the data received by the microcontroller on a screen (using UART).
I noticed that a TCP options field is appended to the "regular" TCP header: the data offset byte is 0x80, meaning a data offset of 8, so the whole TCP header is 8 * 4 bytes = 32 bytes long (256 bits, if you like). Since 32 = 20 + 12, there are 12 more bytes in addition to the 20 regular TCP header bytes.
The last field of the fixed TCP header is the "Urgent Pointer", and right after it begins the TCP options field, which is: TCP options field = "02 04 05 B4 01 03 03 08 01 01 04 02"
What does this options field mean? I understand that "02 04 05 B4" is the MSS option, but beyond that I'm clueless; I don't understand what the other bytes represent. Any help, please?
Thank you for the help provided.
The TCP options field "02 04 05 B4 01 03 03 08 01 01 04 02" is a sequence of TCP options, in the order they appear in the header.
At least as decoded by the pyshark packet analyzer, the option values represent the following encodings:
MSS - 02:04:05:b4,
NOP - 01,
Window Scale - 03:03:08 or 03:03:02,
SACK perm - 04:02.
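As a sketch of how those bytes break down, here is a hypothetical helper that walks a raw TCP options field as a sequence of (kind, length, data) entries per RFC 793 and applies it to the string from the question:

```python
OPTION_NAMES = {0: "End of Option List", 1: "NOP", 2: "MSS",
                3: "Window Scale", 4: "SACK Permitted"}

def parse_tcp_options(options: bytes):
    i = 0
    while i < len(options):
        kind = options[i]
        if kind in (0, 1):              # EOL and NOP are single-byte options
            yield OPTION_NAMES[kind], b""
            i += 1
            continue
        length = options[i + 1]         # length covers the kind and length bytes too
        yield OPTION_NAMES.get(kind, f"Unknown ({kind})"), options[i + 2:i + length]
        i += length

raw = bytes.fromhex("020405b40103030801010402")
for name, value in parse_tcp_options(raw):
    print(name, value.hex())
# MSS 05b4 (1460 bytes), NOP, Window Scale 08, NOP, NOP, SACK Permitted
```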

Implementing OSDP encryption problems

I'm having trouble implementing the encryption part of the OSDP protocol on an Arduino.
http://www.siaonline.org/SiteAssets/SIAStore/Standards/OSDP_V2%201_5_2014.pdf
I've successfully done the negotiation part and have verified the RMAC-I response by decrypting the data and comparing it with the plaintext. The part I'm stuck on is the encryption of the data packets. According to the spec, I use the RMAC-I response as my ICV for the AES-128 CBC and I encrypt the packet using the S-MAC2 key.
My POLL packet (in hex) is as follows:
53 01 0e 00 0c 02 15 60
This gets padded
53 01 0e 00 0c 02 15 60 80 00 00 00 00 00 00 00
This then gets XORed with the ICV and encrypted with S-MAC2 as the key.
The first 4 bytes of the result are stored in the packet, which is then sent:
53 01 0e 00 0c 02 15 60 91 86 b9 3d 4a 29
Unfortunately, the reader rejects the poll command with a NAK 06.
I'm presuming my MAC values have not been computed correctly as I've compared my packet with the HID DTK tool (obviously the MAC and CRC values are the only difference). Can someone validate my process?
It turned out my process was correct but was let down by the implementation (an off-by-one error).
2.1.7 is the current SIA spec; IEC 60839-11-5 (the IEC standards version) should be out soon.
The processing you describe is for the MAC suffix, not the payload encryption. S-MAC2 is used because the message is only one block long (otherwise you'd use S-MAC1 for the earlier blocks and then S-MAC2 for the last one). OSDP uses AES to encrypt a throw-away copy of the entire message and then uses some bytes of the last cipher block as the MAC that is transmitted. OSDP also encrypts the payload, if there is one. In modern AES implementations you pass in an IV, a key, and a buffer, so one would not think of it as XORing the IV with the plaintext.
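A minimal sketch of the MAC computation described in the question, for a single-block message, assuming the pycryptodome library; the S-MAC2 key and ICV below are placeholders, not values from the actual exchange:

```python
from Crypto.Cipher import AES  # pycryptodome

def osdp_single_block_mac(message: bytes, s_mac2: bytes, icv: bytes) -> bytes:
    """Pad with 0x80 then zeros to a 16-byte boundary, AES-128-CBC encrypt
    with S-MAC2 using the ICV (the RMAC-I response) as the IV, and take the
    first 4 bytes of the resulting cipher block. Only valid for messages
    that fit in one block, like the POLL above; longer messages use S-MAC1
    for the earlier blocks."""
    padded = message + b"\x80" + b"\x00" * ((15 - len(message)) % 16)
    cipher = AES.new(s_mac2, AES.MODE_CBC, iv=icv)
    return cipher.encrypt(padded)[:4]

poll = bytes.fromhex("53010e000c021560")
mac = osdp_single_block_mac(poll, b"\x00" * 16, b"\x00" * 16)  # placeholder key/ICV
```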

Packet is fragmented but the flags are on Don't Fragment

I have the following 2 TCP packets I'm picking up on winpcap:
http://pastebin.com/FUAs3UZ7
or in a pcap format https://www.dropbox.com/s/0ss4j0weszy92no/SO.pcap
Those 2 packets are to be reassembled, but their IP flags are "010", meaning "Don't Fragment", and the fragment offset is 0. They do have consecutive identification numbers, but if I understand correctly that alone is not enough to mark them as fragments of one packet.
Wireshark does reassemble those packets, and I can't really understand why.
What am I missing here? How does Wireshark know to reassemble those 2 packets?
First packet:
00 80 f4 09 e6 a5 - Ethernet destination address
00 50 56 26 ab 04 - Ethernet source address
08 00 - Ethernet type, which is IPv4
45 - IP version (4, for IPv4) and header length (5, for 5*4 = 20 bytes)
00 - DSCP/ECN (or TOS, in the old days)
02 40 - total length (576 bytes)
74 ff - identification
40 00 - flags and fragment offset; DF, and a fragment offset of 0
80 - time to live
06 - protocol, which is TCP
When you say "Wireshark does reassemble those packets", are you referring to IP reassembly or TCP reassembly? Those take place at different layers. I suspect what Wireshark is doing is reassembling all or part of the TCP segment in the first packet with the TCP segment in the second packet to make a packet for the protocol running on top of TCP; TCP is a byte-stream protocol, so there is no guarantee that TCP segment boundaries (which turn into link-layer frame boundaries in almost all cases) correspond to packet boundaries for protocols running on top of TCP.
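As a quick check of the field breakdown above, here is a sketch decoding just the first ten IPv4 header bytes shown (the checksum and addresses are omitted):

```python
import struct

# IPv4 header bytes from the first packet's breakdown, through the protocol field.
ip_header = bytes.fromhex("45 00 02 40 74 ff 40 00 80 06")

version_ihl, tos, total_length, identification, flags_frag, ttl, proto = \
    struct.unpack("!BBHHHBB", ip_header)

version = version_ihl >> 4               # 4 -> IPv4
header_len = (version_ihl & 0x0F) * 4    # 5 * 4 = 20 bytes
df = bool(flags_frag & 0x4000)           # Don't Fragment flag
mf = bool(flags_frag & 0x2000)           # More Fragments flag
frag_offset = (flags_frag & 0x1FFF) * 8  # fragment offset in bytes

print(total_length, hex(identification), df, mf, frag_offset, proto)
# 576 0x74ff True False 0 6  -> not an IP fragment; protocol 6 is TCP
```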
