How to calculate the difference between two hex offsets?

I have been searching for how to do this but I haven't found a way. Is there a way to calculate this difference other than counting one by one?
For example:
0x7fffffffe070 - 0x7fffffffe066 = 0x04
0x7fffffffe066 - 0x7fffffffe070 = -0x04
0x7fffffffdbe0 - 0x7fffffffda98 = ????
To understand these results, suppose we open a file with a hex editor and see the following hex bytes: 8A B7 00 00 FF, with the corresponding hex offsets 0x7fffffffe066, 0x7fffffffe067, 0x7fffffffe068, 0x7fffffffe069, 0x7fffffffe070. The difference between the offsets of 8A and FF is 0x04 because they are 4 positions apart.

The difference between 0x7fffffffe070 and 0x7fffffffe066 cannot be 4.
I think 0x7fffffffe070 should be 0x7fffffffe06a in your example.
Other than that I don't understand the question.
Normally you would calculate the difference with a calculator set to programmer/hexadecimal mode.
In a previous answer I explained how to calculate the number by hand, but that answer got deleted.
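If you want to check such a difference programmatically, here is a quick sketch in Python (using the corrected offset from above); hex literals subtract like any other integers:
a = 0x7FFFFFFFE06A  # offset of FF, corrected from the question
b = 0x7FFFFFFFE066  # offset of 8A
print(hex(a - b))   # 0x4
print(hex(b - a))   # -0x4
print(hex(0x7FFFFFFFDBE0 - 0x7FFFFFFFDA98))  # 0x148, the third example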

Related

What does the returned value of a BLE thermometer mean?

I am using a Xiaomi LYWSD03MMC. I read the temperature of this device via BLE characteristics and it shows: 21 0a 17 7e 0b. I know this is a hex value, but unfortunately I can't understand what it means. I only recognize the number 17, which is the humidity value in hex; when I convert it to decimal it returns 23.
What is the UUID of the characteristic you are reading?
If it is of the format 0000xxxx-0000-1000-8000-00805F9B34FB then it should be documented on https://www.bluetooth.com/
You can find the mapping of Characteristic UUID to name in the 16-bit UUID Numbers Document at:
https://www.bluetooth.com/specifications/assigned-numbers/
The name can then be used to look in GATT Specification Supplement for more detailed description of the fields.
According to this script from GitHub, the first two bytes describe the temperature and need to be interpreted as little endian. This results in the hex value 0a21, which is 2593 in decimal; dividing by 100 gives a temperature of 25.93 degrees.
As you said, the third value is the humidity; the last two bytes describe the battery voltage. Interpreted as little endian (0b7e) and converted to decimal (2942), this value needs to be divided by 1000, giving a voltage of 2.942.
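As a sketch, assuming the payload layout described above (little-endian int16 temperature in units of 0.01 degrees, uint8 humidity in percent, little-endian uint16 battery voltage in millivolts), the whole value can be decoded in Python:
import struct
payload = bytes.fromhex("210a177e0b")
temp_raw, humidity, voltage_raw = struct.unpack("<hBH", payload)
print(temp_raw / 100)      # 25.93 (degrees)
print(humidity)            # 23 (percent)
print(voltage_raw / 1000)  # 2.942 (volts)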

Interpret double written as hex

I am working with doubles (64-bit) stored in a file by printing the raw bytes from the C# representation. For example the decimal number 0.95238095238095233 is stored as the bytes "9e e7 79 9e e7 79 ee 3f" in hex. Everything works as expected, I can write and read, but I would like to be able to understand this representation of the double myself.
According to the C# documentation https://learn.microsoft.com/en-us/dotnet/api/system.double?view=netframework-4.7.2#explicit-interface-implementations and wikipedia https://en.wikipedia.org/wiki/Double-precision_floating-point_format the first bit is supposedly the sign with 0 for positive and 1 for negative numbers. However, no matter the direction I read my bytes, the first bit is 1. Either 9 = 1001 or f = 1111. I am puzzled since 0.95... is positive.
As a double check, the following python code returns the correct decimal number as well.
import binascii
from struct import unpack
unpack('d', binascii.unhexlify("9ee7799ee779ee3f"))
Can anyone explain how a human can read these bytes and get to 0.95238095238095233?
Figured it out: the collection of bytes is stored little-endian, so the 64-bit pattern is read starting from the last byte, but within each byte the bits are written most significant bit first. So my bytes should be read "3F" first; 3F reads left to right, so I'm starting with the bits 0011 1111, etc. This gives the IEEE 754 encoding as expected: the first bit is the sign, the next 11 bits are the exponent, and the remaining 52 bits are the fraction.
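To illustrate, here is a sketch that decodes the fields by hand in Python, assuming the little-endian byte string from the question:
raw = bytes.fromhex("9ee7799ee779ee3f")
bits = int.from_bytes(raw, "little")  # the 64-bit pattern 0x3FEE799EE7799E9E
sign = bits >> 63                     # 0, so the number is positive
exponent = (bits >> 52) & 0x7FF       # biased exponent: 0x3FE = 1022
fraction = bits & ((1 << 52) - 1)     # 52-bit mantissa
value = (-1) ** sign * (1 + fraction / 2 ** 52) * 2 ** (exponent - 1023)
print(value)                          # 0.9523809523809523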

How to extract the first 20 bits of a hexadecimal address?

I have the following hexadecimal 32-bit virtual address: 0x274201
How can I extract the first 20 bits, then convert them to decimal?
I wanted to know how to do this by hand.
Update:
@Pete855217 pointed out that the address 0x274201 is not 32 bits, and that 0x is not part of the address; it just signifies hexadecimal notation. This suggests adding 00 after the 0x, so a true 32-bit address would be 0x00274201. I have updated my answer!
I believe I have answered my own question and I hope I am correct?
First, convert the hex number 0x00274201 to binary (this is the long way, but I learned something from it):
0x00274201 = 0000 0000 0010 0111 0100 0010 0000 0001
I noticed the first 20 bits are 0000 0000 0010 0111 0100, which is 00274 in hex. This makes sense because every hex digit is four binary digits.
So, since I want the first 20 bits, I am really asking for the first five hex digits, because 5 * 4 = 20 bits.
Thus this yields 00274 in hex = 628 in decimal.
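The same extraction can be done with a shift, since dropping the low 12 bits of a 32-bit value leaves the top 20; a quick check in Python:
address = 0x00274201
top20 = address >> 12     # discard the low 12 bits, keeping the first 20
print(hex(top20), top20)  # 0x274 628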

Simple SDLC CRC calculation not giving the correct value

I am trying to figure out how to calculate the CRC for very simple SDLC frames.
Using an MLT I am capturing the stream, and I see some simple frames being sent out, like 0x3073F9E3 and 0x3011EDE3.
From my understanding, F9E3 and EDE3 are the 2-byte checksums of 3073 and 3011, since that is all that was in each frame.
Using numerous CRC calculators and calculations I have been able to get the first byte of the checksum, but not the last byte (the F9 and the ED).
Using this calculator (http://www.zorc.breitbandkatze.de/crc.html):
Select CRC-CCITT
Change Final XOR Value to: FFFF
Check Reverse Data Bytes and reverse CRC result before Final XOR
Then type the input: %30%11
Which will give the output B8ED, so the last byte is the ED.
Any ideas?
You are getting the correct CRC-16s (F9 F8 and ED B8). I don't know why your last byte is E3 in both cases; that is perhaps a clue that the packets are not being disassembled correctly.
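For reference, here is a sketch in Python of the reflected CRC-16 that the calculator settings above correspond to (often called CRC-16/X-25: polynomial 0x1021, reflected to 0x8408, initial value 0xFFFF, final XOR 0xFFFF); it reproduces both checksums:
def crc16_x25(data: bytes) -> int:
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # shift right, applying the reflected polynomial when the LSB is set
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF
print(hex(crc16_x25(bytes.fromhex("3073"))))  # 0xf8f9 -> sent LSB first: F9 F8
print(hex(crc16_x25(bytes.fromhex("3011"))))  # 0xb8ed -> sent LSB first: ED B8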

One's complement instead of just a sum of bits

A question in my university homework asks why the one's complement is used instead of just the sum of bits in a TCP checksum. I can't find the answer in my book and Google isn't helping. Any chance someone can point me in the right direction?
Thanks,
Mike
Since this is a homework question, here is a hint:
Suppose you calculated a second checksum over the entire packet, including the first checksum. Is there a mathematical expression that would determine the result?
Probably the most important benefit is that it is endian-independent.
Little-endian computers store the least significant byte first (Intel processors, for example); big-endian computers store it last (IBM mainframes, for example). When the carry is added back into the LSB to form the one's complement sum, it doesn't matter whether we add 03 + 01 or 01 + 03: the result is the same.
Other benefits include the ease of verifying the transmission and the checksum calculation, plus a variety of ways to speed up the calculation by updating only the IP fields that have changed.
Ref: http://www.netfor2.com/checksum.html
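To make the hint above concrete, here is a minimal sketch of an Internet-style one's complement checksum over 16-bit words, assuming an even-length packet; note how summing the packet together with its own checksum folds to 0xFFFF:
def ones_complement_sum(data: bytes) -> int:
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # add the next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)  # end-around carry
    return total
def checksum(data: bytes) -> int:
    return ~ones_complement_sum(data) & 0xFFFF
packet = bytes.fromhex("45000073000040004011")    # arbitrary example bytes
csum = checksum(packet)
print(hex(ones_complement_sum(packet + csum.to_bytes(2, "big"))))  # 0xffff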
