Problems implementing OSDP encryption

I'm having trouble implementing the encryption part of the OSDP protocol on an Arduino.
http://www.siaonline.org/SiteAssets/SIAStore/Standards/OSDP_V2%201_5_2014.pdf
I've successfully done the negotiation part and have verified the RMAC-I response by decrypting the data and comparing it with the plaintext. The part I'm stuck on is the encryption of the data packets. According to the spec, I use the RMAC-I response as my ICV for the AES-128 CBC, and I encrypt the packet using the S-MAC2 key.
My POLL packet (in hex) is as follows:
53 01 0e 00 0c 02 15 60
This gets padded
53 01 0e 00 0c 02 15 60 80 00 00 00 00 00 00 00
This gets XORed with the ICV, then encrypted with S-MAC2 as the key.
The first 4 bytes of the result are stored in the packet and sent:
53 01 0e 00 0c 02 15 60 91 86 b9 3d 4a 29
Unfortunately, the reader rejects the poll command with a NAK 06.
I presume my MAC values have not been computed correctly, as I've compared my packet with the HID DTK tool (obviously the MAC and CRC values are the only differences). Can someone validate my process?

It turns out my process was correct but was let down by the implementation (an off-by-one error).

2.1.7 is the current SIA spec; IEC 60839-11-5 (the IEC standards version) should be out soon.
The processing you describe is for the MAC suffix, not the payload encryption. S-MAC2 is used because the message is only one block long (otherwise you'd encrypt the earlier blocks with S-MAC1 and the final block with S-MAC2). OSDP uses AES to encrypt a throw-away copy of the entire message and then uses some bytes of the last cipher block as the MAC that is transmitted. Separately, OSDP encrypts the payload, if there is one. In modern AES implementations you pass in an IV, a key, and a buffer, so you would not normally think of it as XORing the IV with the plaintext yourself.
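Here is a minimal sketch of that MAC computation in Python, assuming the third-party cryptography package; the function name osdp_mac and the smac1/smac2/icv parameter names are mine, not from the spec, and the padding follows what the question shows (0x80 then zeros up to the block boundary).

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16

def osdp_mac(message: bytes, smac1: bytes, smac2: bytes, icv: bytes) -> bytes:
    """Return the 4 MAC bytes appended to an OSDP secure-channel packet."""
    # Pad a throw-away copy with 0x80, then zeros, up to a 16-byte boundary.
    padded = message + b"\x80"
    if len(padded) % BLOCK:
        padded += b"\x00" * (BLOCK - len(padded) % BLOCK)
    blocks = [padded[i:i + BLOCK] for i in range(0, len(padded), BLOCK)]

    iv = icv  # the last MAC received (RMAC-I right after key negotiation)

    # All blocks except the last are chained through AES-CBC under S-MAC1...
    for block in blocks[:-1]:
        enc = Cipher(algorithms.AES(smac1), modes.CBC(iv)).encryptor()
        iv = enc.update(block) + enc.finalize()

    # ...and the final block is encrypted under S-MAC2.
    enc = Cipher(algorithms.AES(smac2), modes.CBC(iv)).encryptor()
    last = enc.update(blocks[-1]) + enc.finalize()

    # Only the first 4 bytes of the last cipher block go on the wire.
    return last[:4]

For the padded POLL packet above, this collapses to a single AES-CBC encryption of one block under S-MAC2 with RMAC-I as the IV, exactly as the question describes.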

Related

Bluetooth LE: once a read fails, it returns "Read Characteristic Fail" for subsequent requests, but notifications are still received

Scenario 1 -
While reading characteristics from the BLE device, one request fails ("Read Timeout") and then all subsequent requests fail ("Read Characteristic Fail"). When the BLE device sends a notification, it is still received, but I am unable to read or write. The device is still connected.
`12-28 13:25:00.620 25648 25666 D Device : resolve: read|<F9...SERVICE_UUID>|<06...CHARACTERISTIC_UUID> 03 E8 00 64 01 7D 50 00 01 F5 01 2D 00 5B 00 47 00 15 00 09
// FULL 20 BYTES RETURNED at 13:25:00, it was working normally then
12-28 13:25:26.230 25648 25666 D Device : reject: read|<F9...SERVICE_UUID>|<06...CHARACTERISTIC_UUID>
// REJECTED at 13:25:26, errors started from here`
Scenario 2 -
While reading characteristics from the BLE device, one request returns about half the data (I was expecting 20 bytes but only received 9), and then all subsequent requests fail ("Read Characteristic Fail"). When the BLE device sends a notification, it is still received, but I am unable to read or write. The device is still connected.
Logs
`12-28 13:25:00.620 25648 25666 D Device : resolve: read|<F9...SERVICE_UUID>|<06...CHARACTERISTIC_UUID> 03 E8 00 64 01 7D 50 00 01 F5 01 2D 00 5B 00 47 00 15 00 09
// FULL 20 BYTES RETURNED at 13:25:00, it was working normally then
12-28 13:25:26.230 25648 25666 D Device : resolve: read|<F9...SERVICE_UUID>|<06...CHARACTERISTIC_UUID> 0F 00 C5 00 00 00 10 00 00
// ONLY 9 BYTES RETURNED at 13:25:26, errors started from here`
Can't BLE ignore one read failure and continue reading while it is still connected? Reading the same characteristic again also returns "Read Characteristic Fail".
Most probably the notifications and read requests are conflicting (this can't be avoided, because a notification can arrive at any time), and starting/stopping notifications multiple times causes the same issue.
Environment
Ionic: 6.20.6
Angular: 15.0.4
Capacitor: 4.6.1
capacitor-community/bluetooth-le: 2.0.1
Device
Xiaomi Redmi Note 10 Pro (Android 12)
Also tested with other Android/iOS phones, with the same result.
What I have tried:
Reading the same characteristic again, but I get the same error.
Reading 1 second after a notification arrives (a time gap before the notification is not possible, because notifications can arrive at any time and in any number).
Implementing a queue so that only one request is active at a time.

Is it possible for a certain sequence of data bytes to drop a TCP connection

I have a simple TCP client/server arrangement (running on Windows 10/11) used to transfer binary data from multiple remote clients to a single server. This works 99% of the time. However, whenever the following hexadecimal sequence appears in the data (being sent from the client to the server) the TCP connection drops and the client generates a 10053 error.
6C 74 01 00 08 00 00 00
Running the server application on a local network has slightly different results... the connection does not drop but the client receives no ACK from the server.
Is it possible for a certain sequence of bytes to drop, or otherwise interfere with, a TCP connection?

How to understand the TCP option field I received on my PIC18?

My goal is to exchange packets between my PIC18F67J60 microcontroller (it has an Ethernet module) and my host computer.
I programmed the PIC18F using the MPLAB X IDE (C language, PICkit 3), and on the computer side I wrote a simple application in CODE::BLOCKS (C language). The application running on my computer works (I tested it). The goal is to establish TCP communication between the PIC18 and the computer (I know TCP is not ideal for embedded devices like microcontrollers because it takes memory space).
I already managed to establish a UDP communication and I could send and receive any data from both sides.
The issue is with the TCP communication. It is the following: my computer sends a TCP segment to my microcontroller (to start a connection, so the SYN flag is set) and my microcontroller receives it. I then display the data received by the microcontroller on a screen (over UART).
I noticed that a TCP options field is appended to the "regular" TCP header (in this header, the byte containing the data offset is 0x80, so the offset nibble is 8, which means the whole TCP header is 8 * 4 = 32 bytes long, or 256 bits if you prefer; since 32 = 20 + 12, there are 12 more bytes in addition to the 20 regular TCP header bytes).
The last field of the fixed TCP header is the urgent pointer, and right after it begins the options field: 02 04 05 B4 01 03 03 08 01 01 04 02
What does this options field mean? I understand that 02 04 05 B4 is the MSS option, but I'm clueless about what the other bytes represent... Any help please?
Thank you for the help provided.
The option bytes 02 04 05 B4 01 03 03 08 01 01 04 02 are simply a sequence of TCP options in the order they appear. Each option starts with a kind byte; except for NOP and End of Option List, the kind is followed by a length byte and a value. Decoded (for example with the pyshark packet analyzer), the bytes break down as:
MSS - 02 04 05 B4 (kind 2, length 4, MSS = 0x05B4 = 1460 bytes),
NOP - 01 (a one-byte padding option),
Window Scale - 03 03 08 (kind 3, length 3, shift count 8),
NOP - 01,
NOP - 01,
SACK permitted - 04 02 (kind 4, length 2, no value).
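For reference, here is a minimal sketch of decoding those option bytes in Python; the option names come from the standard TCP option kinds, and the input is the exact byte string from the question.

OPTION_NAMES = {1: "NOP", 2: "MSS", 3: "Window Scale", 4: "SACK permitted"}

def parse_tcp_options(data: bytes) -> None:
    i = 0
    while i < len(data):
        kind = data[i]
        if kind == 0:          # End of Option List: nothing follows
            print("EOL")
            break
        if kind == 1:          # NOP is a single padding byte with no length
            print("NOP")
            i += 1
            continue
        length = data[i + 1]   # every other option is kind, length, value
        value = data[i + 2:i + length]
        print(OPTION_NAMES.get(kind, f"kind {kind}"), value.hex())
        i += length

parse_tcp_options(bytes.fromhex("02 04 05 B4 01 03 03 08 01 01 04 02"))

Running this prints MSS 05b4, NOP, Window Scale 08, NOP, NOP, and SACK permitted, matching the breakdown above.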

Malformed DNS Request Packet

I've been working on a project which involves sending DNS requests with information (not actual domains) in the questions (there are 2 of them). I've been tracking the packets with Wireshark.
Here is the hex dump of the packet as created.
00000000 00 02 01 00 00 02 00 00 00 00 00 00 01 32 03 65   ........ .....2.e
00000010 6e 64 03 63 6f 6d 00 00 01 00 01 01 32 04 73 61   nd.com.. ....2.sa
00000020 76 65 03 63 6f 6d 00 00 01 00 01                  ve.com.. ...
The ID and QDCOUNT are both 2, recursion desired is set, and the domains shown are correct. Wireshark says that it is a malformed DNS packet. Any idea what is wrong with it?
OK, so:
If you're doing the transport-layer networking yourself, your code determines whether the message goes over UDP or TCP, by specifying, when creating the socket on which to send the packet, whether it's a UDP or TCP socket;
TCP is used if the message won't fit in a maximum-sized UDP packet;
if you're sending it over TCP, you need to precede it with a two-byte length field, as per section 4.2.2 "TCP usage" in RFC 1035 (a sketch of this appears below).
"Maximum-sized" is a bit vague. RFC 791, the IPv4 specification, says, in section 3.1 "Internet Header Format":
Total Length: 16 bits
Total Length is the length of the datagram, measured in octets,
including internet header and data. This field allows the length of
a datagram to be up to 65,535 octets. Such long datagrams are
impractical for most hosts and networks. All hosts must be prepared
to accept datagrams of up to 576 octets (whether they arrive whole
or in fragments). It is recommended that hosts only send datagrams
larger than 576 octets if they have assurance that the destination
is prepared to accept the larger datagrams.
The number 576 is selected to allow a reasonable sized data block to
be transmitted in addition to the required header information. For
example, this size allows a data block of 512 octets plus 64 header
octets to fit in a datagram. The maximal internet header is 60
octets, and a typical internet header is 20 octets, allowing a
margin for headers of higher level protocols.
However, these days, the old networking hardware that would impose a maximum packet size limit as low as 576 bytes is mostly if not completely gone, and the real-world "maximum packet size" would generally be the Ethernet packet size - a total length of 1518 bytes, with 14 bytes of Ethernet header and 4 bytes of FCS, leaving 1500 bytes of payload. For UDP, with a typical IPv4 header length of 20 bytes and a UDP header length of 8 bytes, that's 1472 bytes of data, so it's probably good enough to use TCP rather than UDP for DNS messages larger than 1472 bytes (IP fragmentation and reassembly will happen if any hop in the network route can't handle a 1500-byte IPv4 packet; that does increase the chances of the packet not getting through, as, if one fragment gets through but the other doesn't, the entire packet doesn't get through).
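As a minimal sketch of the RFC 1035 section 4.2.2 rule mentioned above: over UDP the DNS message is sent as-is, while over TCP it must be preceded by a two-byte, big-endian length field. The query bytes below are the ones from the question; the commented-out socket calls (and the server variable in them) are illustrative placeholders only.

import socket, struct

query = bytes.fromhex(
    "00020100000200000000000001320365"
    "6e6403636f6d00000100010132047361"
    "766503636f6d0000010001"
)

# Over UDP the message goes out unchanged:
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(query, (server, 53))

# Over TCP it is prefixed with its own length:
tcp_message = struct.pack("!H", len(query)) + query
# sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# sock.connect((server, 53))
# sock.sendall(tcp_message)

If the capture is DNS over TCP and that two-byte prefix is missing, that would be consistent with Wireshark flagging the message as malformed even though the DNS payload itself looks well-formed.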

Xbee 64-bit address in API mode

I'm currently working on a project in which I use XBee modules such as the XBee 2mW Wire Antenna - Series 2 (ZigBee Mesh).
How can I get my module's 64-bit address so that my software can set it up automatically?
Can I send a ZigBee message to the module so that it returns a message containing its address, which I can then decode to learn the module's address?
Thanks.
If you want an easy way of doing this, you can send one message from the Router/End-Device to the Coordinator in your ZigBee network. You can use the special 16-bit Network Address 0x0000 to address the Coordinator.
This message should contain the 16-bit network address (or the 64-bit address), so the Coordinator can later use this address to communicate back with that node. That is how you would do it in AT mode. If you work with API mode, the "Receive Packet" frame already contains the address of the sender, so you do not need to explicitly add it to your message.
When you press the commissioning button once, the module sends a node identification broadcast transmission.
So, assuming you are using API mode, from your Coordinator (software side) you can broadcast a Remote AT Command Request that sets CB (the commissioning button) to 1. This is the same as virtually pressing the commissioning button once. Here is the packet:
7E 00 10 17 00 00 00 00 00 00 00 FF FF FF FE 00 43 42 01 67
Then, when your devices receive this packet, they should answer the Coordinator with a Node Identification Indicator, which contains their 16-bit and 64-bit addresses. This way, you can automatically set up your network in software.
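As a minimal sketch of how that frame is assembled (so you can build the same Remote AT Command Request from your own software), here is the framing and checksum in Python; the helper name xbee_api_frame is mine, and the field values are the ones from the packet above.

def xbee_api_frame(frame_data: bytes) -> bytes:
    """Wrap API frame data in the start delimiter, length, and checksum."""
    length = len(frame_data).to_bytes(2, "big")
    checksum = 0xFF - (sum(frame_data) & 0xFF)
    return b"\x7e" + length + frame_data + bytes([checksum])

frame_data = (
    b"\x17"                                # frame type: Remote AT Command Request
    + b"\x00"                              # frame ID (0 = suppress the Remote Command Response)
    + b"\x00\x00\x00\x00\x00\x00\xff\xff"  # 64-bit destination: broadcast
    + b"\xff\xfe"                          # 16-bit destination: unknown/broadcast
    + b"\x00"                              # remote command options
    + b"CB"                                # AT command: commissioning button
    + b"\x01"                              # parameter: one virtual button press
)

print(xbee_api_frame(frame_data).hex(" ").upper())
# -> 7E 00 10 17 00 00 00 00 00 00 00 FF FF FF FE 00 43 42 01 67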
