How is the dot character represented within the bytes of a packet? - networking

If we capture a DNS packet with Wireshark, we can see the domain name among its bytes, in this case tools.kali.org:
The bytes in hexadecimal are 746f6f6c73046b616c69036f726700, and we can check with an ASCII table that 74 corresponds to the character 't', 6f to 'o', and so on. The problem comes when we arrive at the dot character, which should be represented in hexadecimal by 2e but instead appears as 04 in its first occurrence and 03 in its second.
Why does this happen, and how does Wireshark know it has to display a dot if the values are different?

Dots are not represented in the packet. The dot is only a textual convention for separating labels, so it never appears on the wire.
Each label is represented as a one octet length field followed by that
number of octets.
—IETF RFC 1035 DOMAIN NAMES - IMPLEMENTATION AND SPECIFICATION
tools has 5 octets (bytes), kali has 4, org has 3, so the 04 and 03 you see are the length fields for kali and org, not substitutes for the dots. It's just happenstance that those length values land where the dots sit in the text.
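For reference, a minimal Python sketch of that label encoding (the function name is just illustrative); note that on the wire the name also begins with a length octet, 05, for the first label:

def encode_dns_name(name):
    # Each label becomes a one-octet length field followed by the label bytes (RFC 1035).
    out = bytearray()
    for label in name.split('.'):      # the dots only separate labels; they are never written
        out.append(len(label))
        out += label.encode('ascii')
    out.append(0)                      # the empty root label terminates the name
    return bytes(out)

print(encode_dns_name('tools.kali.org').hex())   # 05746f6f6c73046b616c69036f726700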

Related

What does the returned value of a BLE thermometer mean?

I am using a Xiaomi LYWSD03MMC. I have read the temperature of this device via BLE characteristics and it shows: 21 0a 17 7e 0b. I know these are hex values, but unfortunately I can't understand what they mean. I only know the number 17, which is the humidity; it is hex, and when I convert it to decimal it returns 23.
What is the UUID of the characteristic you are reading?
If it is of the format 0000xxxx-0000-1000-8000-00805F9B34FB then it should be documented on https://www.bluetooth.com/
You can find the mapping of Characteristic UUID to name in the 16-bit UUID Numbers Document at:
https://www.bluetooth.com/specifications/assigned-numbers/
The name can then be used to look in GATT Specification Supplement for more detailed description of the fields.
According to this script from GitHub, the first two bytes describe the temperature and need to be read as little endian. This gives the hex value 0a21, which in decimal is 2593. Dividing by 100 gives a temperature of 25.93 degrees.
As you said, the third byte is the humidity; the last two bytes describe the battery voltage. Converted from little endian (0b7e) to decimal (2942) and divided by 1000, this gives a voltage of 2.942 V.
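As a small sketch of that interpretation (assuming the layout above: little-endian int16 temperature, one byte of humidity, little-endian uint16 battery voltage):

import struct

raw = bytes.fromhex('210a177e0b')
temp_raw, humidity, batt_raw = struct.unpack('<hBH', raw)   # int16, uint8, uint16, little endian
print(temp_raw / 100, humidity, batt_raw / 1000)            # 25.93 23 2.942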

Interpret double written as hex

I am working with doubles (64-bit) stored in a file by printing the raw bytes from the C# representation. For example, the decimal number 0.95238095238095233 is stored as the bytes "9e e7 79 9e e7 79 ee 3f" in hex. Everything works as expected; I can write and read, but I would like to be able to understand this representation of the double myself.
According to the C# documentation https://learn.microsoft.com/en-us/dotnet/api/system.double?view=netframework-4.7.2#explicit-interface-implementations and Wikipedia https://en.wikipedia.org/wiki/Double-precision_floating-point_format, the first bit is supposedly the sign, with 0 for positive and 1 for negative numbers. However, no matter the direction I read my bytes, the first bit is 1: either 9 = 1001 or f = 1111. I am puzzled since 0.95... is positive.
As a double check, the following Python code returns the correct decimal number as well.
from struct import unpack
import binascii
unpack('d', binascii.unhexlify("9ee7799ee779ee3f"))   # (0.9523809523809523,)
Can anyone explain how a human can read these bytes and get to 0.95238095238095233?
Figured it out: the eight bytes are stored in little-endian order (least significant byte first), so the most significant byte is the last one. Within each byte, the two hex digits are already in the usual most-significant-bit-first order. So I should read "3F" first: 3F EE 79 E7 9E 79 E7 9E, which starts with the bits 0011 1111 etc. This gives the IEEE 754 encoding as expected: the first bit is the sign, the next 11 bits are the exponent (biased by 1023), and the remaining 52 bits are the fraction.
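A short Python sketch of that manual decoding (the field splitting follows the IEEE 754 layout described above):

raw = bytes.fromhex('9ee7799ee779ee3f')
bits = int.from_bytes(raw, 'little')        # reverse the byte order: 3f ee 79 e7 9e 79 e7 9e
sign = bits >> 63                           # 1 bit, 0 means positive
exponent = (bits >> 52) & 0x7FF             # 11 bits, biased by 1023
fraction = bits & ((1 << 52) - 1)           # 52 bits
value = (-1) ** sign * (1 + fraction / 2**52) * 2.0 ** (exponent - 1023)
print(sign, exponent - 1023, value)         # 0 -1 0.9523809523809523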

How can I encode 0000 to 11110 in the 4B/5B encoding scheme?

In the 4B/5B encoding scheme, the dataword 0000 is encoded to the codeword 11110; similarly, 0001 is encoded to 01001, and so on.
Here the result of the XOR operation between two codewords is another valid codeword.
For example, the XOR of 11110 and 01001 is another codeword, 10111, whose dataword is 1011. Here I have no problem.
Again, to avoid a DC component, the NRZ-I line coding scheme is used. As a result there are never more than three consecutive zeros in the output:
a codeword has no more than one leading zero and no more than two trailing zeros. With the NRZ-I coding scheme we don't have to worry about the number of ones.
But how can I encode 0000 to 11110 or 0001 to 01001, and which
algorithm should I apply for this encoding scheme?
I searched Google and studied books too, but everywhere they say only the same thing and I did not find my answer.
Thanks in advance
Decimal Representation
To understand this mechanism properly we should consider every codeword's decimal value. Observe the table above carefully; I have converted all the binary values in your table to decimal form.
Now, to avoid a DC component during transmission, we should consider only the codewords that have no more than one leading zero and no more than two trailing zeros.
So every pair of consecutive datawords is assigned to a pair of consecutive codewords, like this:
(2,3) to (20,21)
(4,5) to (10,11)
(6,7) to (14,15)
(8,9) to (18,19)
(10,11) to (22,23)
(12,13) to (26,27)
(14,15) to (28,29)
Exception
(0,1) to (30,9)
Dataword 1 is assigned to codeword 9 because all codewords from 0 to 8 (inclusive) are invalid, since they contain too many zeros; so the first valid codeword, 9, is assigned to 1.
If all valid codewords were assigned consecutively, then changing only one bit (a single-bit error) during transmission could turn a codeword into the next or previous codeword, and the error would remain undetected.
We know that in block coding, if a valid codeword is converted into another valid codeword during transmission as a result of an error, the error remains undetected; this is a limitation of block coding. So, to avoid this, the valid codewords are not all assigned to datawords consecutively.
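For illustration, a small Python sketch of the resulting table (datawords and codewords written in decimal, matching the pairs above), which shows that 4B/5B encoding is a fixed lookup rather than a computation:

# The pairs above, plus the exception (0,1) -> (30,9).
FOUR_B_FIVE_B = {
    0: 30, 1: 9,
    2: 20, 3: 21, 4: 10, 5: 11,
    6: 14, 7: 15, 8: 18, 9: 19,
    10: 22, 11: 23, 12: 26, 13: 27,
    14: 28, 15: 29,
}

def encode_4b5b(dataword):
    # Look up the 5-bit codeword for a 4-bit dataword and format it as bits.
    return format(FOUR_B_FIVE_B[dataword], '05b')

print(encode_4b5b(0b0000))   # 11110
print(encode_4b5b(0b0001))   # 01001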

RFID algorithm to get card ID

I receive bytes from an RFID reader when presenting a card, but I'm unable to figure out how to derive the card ID from these bytes.
For example, I have a card that has these numbers printed on it: 0007625328 116,23152. I would expect that this is the ID of that card, right?
For this card, I get the following bytes from the reader (in hexadecimal representation): <42><09><01><74><00><74><5A><70>.
The decimal number 0007625328 translates to 0x00745A70 in hexadecimal representation.
The number 116,23152 is actually a different representation of that same value (0007625328):
116 in decimal is 0x74 in hexadecimal.
23152 in decimal is 0x5A70 in hexadecimal.
Combined, this also gives 0x00745A70.
So the value that you receive (42 09 01 74 00 74 5A 70) seems to be the concatenation of some form of prefix value (0x42090174) and the printed card serial number (0x00745A70).
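A quick Python sketch of that split (treating the last four bytes as a big-endian serial number):

raw = bytes.fromhex('4209017400745A70')
prefix, serial = raw[:4], raw[4:]
print(serial.hex(), int.from_bytes(serial, 'big'))      # 00745a70 7625328 (printed as 0007625328)
print(serial[1], int.from_bytes(serial[2:], 'big'))     # 116 23152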

How to extract the first 20 bits of a hexadecimal address?

I have the following hexadecimal 32-bit virtual address: 0x274201
How can I extract the first 20 bits, then convert them to decimal?
I wanted to know how to do this by hand.
Update:
@Pete855217 pointed out that the address 0x274201 is not 32-bit. Also, 0x is not part of the address; it is used to signify
a hexadecimal value.
This suggests padding with 00 after the 0x, so a true 32-bit address would be 0x00274201. I have updated my answer!
I believe I have answered my own question, and I hope I am correct.
First, convert the hex number 0x00274201 to binary (this is the long way, but I learned something from it):
0x00274201 = 0000 0000 0010 0111 0100 0010 0000 0001
The first 20 bits are 0000 0000 0010 0111 0100, which is 00274 in hex. This makes sense because every hex digit is four binary digits.
So, since I wanted the first 20 bits, I am really asking for the
first five hex digits, because 5 * 4 = 20 bits.
Thus this yields 00274 in hex = 628 in decimal.
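Equivalently, in Python, the top 20 bits can be taken by shifting off the low 12 bits:

address = 0x00274201
top20 = address >> 12               # drop the low 12 bits (the last three hex digits)
print(hex(top20), top20)            # 0x274 628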
