assembly BYTE PTR clarification - pointers

I'm working through some practice problems, and I'm stuck on one particular thing:
What is the decimal value in BYTE PTR value+2?
.data
list DWORD 50 DUP(?)
matrix DWORD 20 DUP(5 DUP(?))
value DWORD 10597059 ; decimalstring
BYTE "Computer Architecture",0
The base address of list is 1000H.
I know the answer is 161, but I'm not sure how to get to that spot. Can anyone help explain that process? (there is extra data info in there from other questions using the same data set, fyi).
Thanks!

The variable value has the decimal value of 10597059 - which is the DWORD value 00A1B2C3 in hexadecimal. Now, because the x86 architecture is of little-endian format, this value is stored in reverse order in memory:
position:   0  1  2  3         0  1  2  3
value:     00 A1 B2 C3   ->   C3 B2 A1 00  (as stored in memory)
Here you can see that the single BYTE pointed to by BYTE PTR value+2 is A1 hexadecimal = 161 decimal: BYTE PTR value points to the BYTE at position 0, and BYTE PTR value+2 points to the BYTE at position 2 relative to value, in the right-hand (in-memory) layout of the illustration above. Each such BYTE PTR access addresses only one quarter of the DWORD value directly.
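To check this concretely, here is a small Python sketch (not part of the original answer) that reproduces the lookup:

```python
# Convert the DWORD to its little-endian byte layout and index byte 2.
data = (10597059).to_bytes(4, "little")  # bytes C3 B2 A1 00
print(data.hex())  # "c3b2a100"
print(data[2])     # 161 (0xA1), the byte at value+2
```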


What do these hex values mean?

I have a table of hex values (I think they are hex bytes?) and I'm trying to figure out what they mean in decimal.
On the website the author states that the highlighted values 43 7A 00 00 mean 250 in decimal, but when I input these into a hex to decimal converter I get 1132068864.
For the life of me I don't understand what's going on. I know that the headings above the highlighted values (1 2 3 4 5 6 7 8 9 A B C D E F) are the hex digits, but I don't understand how to read the values inside the table.
Help would be appreciated!
What's happening here is that the bytes 43 7A 00 00 are not being treated as an integer. They are being treated as an IEEE-format 32-bit floating-point number. This is why the Type column in the Inspector window in the image says Float. When those bytes are interpreted in that way they do indeed represent the value 250.0.
You can read about the details of the format at https://en.wikipedia.org/wiki/Single-precision_floating-point_format
In this particular case the bytes would be decoded as:
a "sign" bit of 0, meaning that the value is a positive number
an "exponent" field containing bits 1000 0110 (or hex 86, decimal 134), meaning that the exponent has the value 7 (calculated by subtracting 127 from the raw value of the field, 134)
a stored "fraction" field containing bits 111 1010 0000 0000 0000 0000 (hex 7A0000); with the implicit leading 1 bit prepended, the full significand is 1111 1010 0000 0000 0000 0000 (hex FA0000, decimal 16384000)
The significand and exponent are combined according to the formula:
value = ( significand * (2 ^ exponent) ) / (2 ^ 23)
where a ^ b means "a raised to the power b" .
In this case that gives:
value = ( 16384000 * 2^7 ) / 2^23
= ( 16384000 * 128 ) / 8388608
= 250.0
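A quick way to verify this interpretation (a Python sketch, not from the original answer) is to reinterpret the same four bytes both ways:

```python
import struct

# The big-endian bytes 43 7A 00 00 as an integer vs. as an IEEE 754 float.
raw = bytes.fromhex("437A0000")
as_int = int.from_bytes(raw, "big")     # 1132068864, what the converter showed
as_float = struct.unpack(">f", raw)[0]  # 250.0
print(as_int, as_float)
```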

Floating point data format sign+exponent

I am receiving data over UART from a heat meter, but I need some help understanding how I should deal with the data.
I have the documentation, but that is not enough for me; I have too little experience with this kind of calculation.
Maybe someone with the right skills could explain how it should be done, with a better example than the one I have from the documentation.
One value consists of the following bytes:
[number of bytes][sign+exponent] (integer)
(integer) is the register data value. The length of the integer value is
specified by [number of bytes]. [sign+exponent] is an 8-bit value that
specifies the sign of the data value and sign and value of the exponent. The
meaning of the individual bits in the [sign+exponent] byte is shown below:
Examples:
-123.45 = 04h, C2h, 00h, 00h, 30h, 39h
87654321*10^3 = 04h, 03h, 05h, 39h, 7Fh, B1h
255*10^3 = 01h, 03h, FFh
And now to one more example with actual data.
This is the information that I have from the documentation about this.
This is some data that I have received from my heat meter
10 00 56 25 04 42 00 00 1B E4
So in my example then 04 is the [number of bytes], 42 is the [sign+exponent] and 00 00 1B E4 is the (integer).
But I do not know how I should make the calculation to receive the actual value.
Any help?
Your data appears to be big-endian, according to your example. So here's how you break those bytes into the fields you need using bit shifting and masking.
n = b[0]
SI = (b[1] & 0x80) >> 7
SE = (b[1] & 0x40) >> 6
exponent = b[1] & 0x3f
integer = 0
for i = 0 to n-1:
    integer = (integer << 8) + b[2+i]
The sign of the mantissa is obtained from the MSb of the Sign+exponent byte, by masking (byte & 80h != 0 => SI = -1).
The sign of the exponent is similarly obtained by byte & 40h != 0 => SE = -1.
The exponent value is EXP = byte & 3Fh.
The mantissa INT is the binary number formed by the following n bytes, which can be read as a single integer (but mind the endianness).
Finally, compute SI * INT * pow(10, SE * EXP).
In your example, SI = 1, SE = -1, EXP = 2, INT = 7140, hence
1 * 7140 * pow(10, -1 * 2) = +71.4
It is not in the scope of this answer to explain how to implement this efficiently.
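As a concrete sketch (in Python, not part of the original answer), the steps above applied to the example record:

```python
# Decode [number of bytes][sign+exponent](integer) per the rules above.
def decode(b):
    n = b[0]
    si = -1 if b[1] & 0x80 else 1   # sign of the mantissa
    se = -1 if b[1] & 0x40 else 1   # sign of the exponent
    exp = b[1] & 0x3F               # exponent magnitude
    integer = 0
    for i in range(n):              # big-endian accumulation
        integer = (integer << 8) + b[2 + i]
    return si * integer * 10 ** (se * exp)

# The record 04 42 00 00 1B E4 from the question:
print(decode(bytes([0x04, 0x42, 0x00, 0x00, 0x1B, 0xE4])))  # ≈ 71.4
```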

how to encode 27 vector3's into a 0-255 value?

I have 27 combinations of 3 values from -1 to 1 of type:
Vector3(0,0,0);
Vector3(-1,0,0);
Vector3(0,-1,0);
Vector3(0,0,-1);
Vector3(-1,-1,0);
... up to
Vector3(0,1,1);
Vector3(1,1,1);
I need to convert them to and from a 8-bit sbyte / byte array.
One solution is to say the first digit, of the 256 = X the second digit is Y and the third is Z...
so
Vector3(-1,1,1) becomes 022,
Vector3(1,-1,-1) becomes 200,
Vector3(1,0,1) becomes 212...
I'd prefer to encode it in a more compact way, perhaps using bytes (which I am clueless about), because the above solution uses a lot of multiplications and round functions to decode. Do you have any suggestions? The other option is to write 27 if conditions to map each Vector3 combination to an array entry, which seems inefficient.
Thanks to Evil Tak for the guidance. I changed the code a bit to add 0-1 values to the first bits and to adapt it for Unity3D:
function Pack4(x:int, y:int, z:int, w:int) : sbyte {
    var b : sbyte = 0;
    b |= (x + 1) << 6;
    b |= (y + 1) << 4;
    b |= (z + 1) << 2;
    b |= (w + 1);
    return b;
}
function unPack4(b:sbyte) : Vector4 {
    var v : Vector4;
    v.x = ((b & 0xC0) >> 6) - 1; // 0xC0 == 1100 0000
    v.y = ((b & 0x30) >> 4) - 1; // 0x30 == 0011 0000
    v.z = ((b & 0xC) >> 2) - 1;  // 0xC  == 0000 1100
    v.w = (b & 0x3) - 1;         // 0x3  == 0000 0011
    return v;
}
I assume your values are float, not integer, so bit operations will not improve speed much in comparison to conversion to an integer type. So my bet is that using the full range will be better. I would do this for the 3D case:
8 bit -> 256 values
3D -> pow(256,1/3) = ~ 6.349 values per dimension
6^3 = 216 < 256
So packing of (x,y,z) looks like this:
BYTE p;
p  = floor((x+1.0)*3.0);
p += floor((y+1.0)*3.0)*6.0;
p += floor((z+1.0)*3.0)*36.0;
The idea is that the +1.0 converts <-1,+1> to <0,2>, so scaling by *3.0 (instead of *6.0) maps it to <0,6>; the floored digit 0..5 is then multiplied into the correct place in the final BYTE.
and unpacking of p looks like this:
x=p%6; x=(x/3.0)-1.0; p/=6;
y=p%6; y=(y/3.0)-1.0; p/=6;
z=p%6; z=(z/3.0)-1.0;
This way you use 216 of the 256 values, which is much better than just 2 bits (4 values) per dimension. Your 4D case would look similar: instead of the constants 3.0, 6.0 use floor(pow(256,1/4)) = 4, i.e. 2.0, 4.0. But beware the edge case where a coordinate is exactly +1 (the floor would overflow into the next digit), or use 2 bits per dimension and the bit approach like the accepted answer does.
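Here is a Python sketch of this base-6 packing (not from the original answer; note the place-value multipliers are applied outside the floor, and min(..., 5) guards the coordinate-exactly-+1 edge case):

```python
import math

# Pack three floats in <-1, +1> into one value 0..215 (base-6 digits).
def pack(x, y, z):
    qx = min(int(math.floor((x + 1.0) * 3.0)), 5)
    qy = min(int(math.floor((y + 1.0) * 3.0)), 5)
    qz = min(int(math.floor((z + 1.0) * 3.0)), 5)
    return qx + qy * 6 + qz * 36

# Unpack back to (quantized) floats in <-1, +1>.
def unpack(p):
    x = (p % 6) / 3.0 - 1.0; p //= 6
    y = (p % 6) / 3.0 - 1.0; p //= 6
    z = (p % 6) / 3.0 - 1.0
    return (x, y, z)

print(pack(0.5, -1.0, 0.9), unpack(pack(0.5, -1.0, 0.9)))
```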
If you need real speed you can optimize this by forcing the float representation holding the packed BYTE to a specific exponent and extracting the mantissa bits as your packed BYTE directly. As the result will be in <0,216> you can add any bigger number to it; see IEEE 754-1985 for details. You want the mantissa to align with your BYTE, so if you add a number like 2^23 to p, then the lowest 8 bits of the float should be your packed value directly (as the MSB 1 is not present in the mantissa), and no expensive conversion is needed.
In case you have just {-1,0,+1} instead of <-1,+1>,
then of course you should use an integer approach, like bit packing with 2 bits per dimension, or use a LUT table of all 3^3 = 27 possibilities and pack the entire vector in 5 bits.
The encoding would look like this:
int enc[3][3][3] = { 0,1,2, ... 24,25,26 };
p=enc[x+1][y+1][z+1];
And decoding:
int dec[27][3] = { {-1,-1,-1},.....,{+1,+1,+1} };
x=dec[p][0];
y=dec[p][1];
z=dec[p][2];
Which should be fast enough, and if you have many vectors you can pack each p into 5 bits ... to save even more memory space.
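The LUT approach can be sketched in Python (not from the original answer) by enumerating all 27 vectors once and building both tables from the same list:

```python
from itertools import product

# dec[p] is the vector for code p; enc[v] is the code for vector v.
dec = list(product((-1, 0, 1), repeat=3))   # (-1,-1,-1) .. (1,1,1)
enc = {v: p for p, v in enumerate(dec)}

p = enc[(1, -1, 0)]
print(p, dec[p])  # 19 (1, -1, 0)
```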
One way is to store the component of each vector in every 2 bits of a byte.
Converting a vector component value to and from the 2 bit stored form is as simple as adding and subtracting one, respectively.
-1 (1111 1111 as a signed byte) <-> 00 (in binary)
0 (0000 0000 in binary) <-> 01 (in binary)
1 (0000 0001 in binary) <-> 10 (in binary)
The packed 2 bit values can be stored in a byte in any order of your preference. I will use the following format: 00XXYYZZ where XX is the converted (packed) value of the X component, and so on. The 0s at the start aren't going to be used.
A vector will then be packed in a byte as follows:
byte Pack(Vector3<int> vector) {
    byte b = 0;
    b |= (vector.x + 1) << 4;
    b |= (vector.y + 1) << 2;
    b |= (vector.z + 1);
    return b;
}
Unpacking a vector from its byte form will be as follows:
Vector3<int> Unpack(byte b) {
    Vector3<int> v = new Vector3<int>();
    v.x = ((b & 0x30) >> 4) - 1; // 0x30 == 0011 0000
    v.y = ((b & 0xC) >> 2) - 1;  // 0xC  == 0000 1100
    v.z = (b & 0x3) - 1;         // 0x3  == 0000 0011
    return v;
}
Both the above methods assume that the input is valid, i.e. All components of vector in Pack are either -1, 0 or 1 and that all two-bit sections of b in Unpack have a (binary) value of either 00, 01 or 10.
Since this method uses bitwise operators, it is fast and efficient. If you wish to compress the data further, you could try using the 2 unused bits too, and convert every 3 two-bit elements processed to a vector.
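For reference, the same 00XXYYZZ scheme round-trips cleanly; a Python sketch (assuming components in {-1, 0, 1}):

```python
# Pack three components into the low 6 bits of a byte: 00XXYYZZ.
def pack(x, y, z):
    return ((x + 1) << 4) | ((y + 1) << 2) | (z + 1)

def unpack(b):
    return (((b & 0x30) >> 4) - 1,  # XX
            ((b & 0x0C) >> 2) - 1,  # YY
            (b & 0x03) - 1)         # ZZ

print(bin(pack(1, -1, 0)), unpack(pack(1, -1, 0)))  # 0b100001 (1, -1, 0)
```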
The most compact way is by writing a 27 digits number in base 3 (using a shift -1 -> 0, 0 -> 1, 1 -> 2).
The value of this number will range from 0 to 3^27-1 = 7625597484986, which takes 43 bits to encode, i.e. 6 bytes (with 5 spare bits).
This is a little saving compared to a packed representation with 4 two-bit numbers packed in a byte (hence 7 bytes/56 bits in total).
An interesting variant is to group the base 3 digits five by five in bytes (hence numbers 0 to 242). You will still require 6 bytes (and no spare bits), but the decoding of the bytes can easily be hard-coded as a table of 243 entries.
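A Python sketch of the base-3 idea (not from the original answer): shift each component by +1 to a trit (0, 1 or 2), then accumulate 27 trits into one integer that fits in 6 bytes:

```python
# Pack a sequence of components in {-1, 0, 1} as base-3 digits.
def pack27(components):
    n = 0
    for c in components:
        n = n * 3 + (c + 1)
    return n

def unpack27(n, count):
    out = []
    for _ in range(count):
        out.append(n % 3 - 1)
        n //= 3
    return out[::-1]

comps = [1, -1, 0] * 9                # 27 components in total
n = pack27(comps)
print(n.bit_length(), n.to_bytes(6, "big").hex())
```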

Arduino: Formula to convert byte

I'm looking for a way to modify a binary byte value on an Arduino.
Because of the hardware, it's necessary to split a two-digit number into two 4-bit halves.
The code to set the output is wire.write(byte, 0xFF), which sets all outputs high.
0xFF = binary 1111 1111
The formula should convert a value like this:
e.g. the number 35 is binary 0010 0011,
but for my use it should be displayed as 0011 0101, which would refer to 53 in reality.
The first 4 bits are for a BCD-input IC which displays the 5 from 35; the second 4 bits are for a BCD-input IC which displays the 3 from 35.
Does anybody have an idea how to convert this in code, or a mathematical formula?
Possible numbers are from 00 to 59.
Thank you for your help
To convert a value n between 0 and 99 to BCD:
((n / 10) * 16) + (n % 10)
assuming n is an integer and thus / is doing integer division; also assumes this will be stored in an unsigned byte.
(If this is not producing the desired result, please either explain how it is incorrect for the example given, or provide a different example for which it is incorrect.)
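As a quick check, a Python sketch of the same formula applied to the example from the question:

```python
# (n // 10) * 16 moves the tens digit into the high nibble;
# n % 10 keeps the units digit in the low nibble.
def to_bcd(n):
    return (n // 10) * 16 + n % 10

print(to_bcd(35), bin(to_bcd(35)))  # 53 0b110101
```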
int num = 35;  // any number from 0 to 59
int tens = num / 10;
int units = num - (tens * 10);
// Build a 4-bit binary string for the tens digit
String tensBinary = "";
int quotient = tens;
for (int i = 0; i < 4; i++)
{
    tensBinary = String(quotient % 2) + tensBinary;
    quotient /= 2;
}
// Repeat the above for the units digit to get unitsBinary
// Now join the two together
String result = tensBinary + unitsBinary;
I don't know if this will work, but still, you might be able to extrapolate based on the available information in my code.
The last thing you need to do is convert the string to binary.

hex offset sector

I'm getting a response from a nameserver which is longer than 512 bytes. In that response are some offsets. An offset from the beginning of the response works fine, but when I get above 512 bytes the offset changes and it doesn't work anymore.
c0 0c = byte 12 from the start (works like a charm)
I have an offset: c1 f0, which means (in my knowledge so far):
c1 = 1 x 512 = 512
f0 = 240
c1 f0 = byte 240 from byte 512 == byte 752
My offset should point to the beginning of a name, which should be located at byte 752,
but at byte 752 the name isn't located.
Question
how does the offset work after 512 bytes?
It is a relative reference. To indicate that it is a relative reference, the first 2 bits are reserved, so the offset itself has a maximum of 14 bits: 2 bytes with the highest 2 bits reserved. C0 01 is a reference to offset 1. The first byte therefore does not always have to be C0; it can also be C1, C2, C3, C4, CF, etc. In practice this will be fairly rare unless you have very complex, long-running queries, which is the case here. I have a query of 3000+ bytes :)
C1 = 11000001
strip 2 highest bits : 000001
number = 1
offset of C1 F0 is 1 x 256 + 240 = 496
offset of C9 9F is 9 x 256 + 159 = 2463
(in one byte there are 256 combinations, not the 512 I had used :S)
The max with C0 is C0 FF, which is offset 255; after that, C1 00 starts.
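The rule can be sketched in Python (not from the original answer): strip the two flag bits from the first byte and keep the remaining 14 bits:

```python
# Decode a DNS compression pointer from its two bytes.
def pointer_offset(b0, b1):
    assert b0 & 0xC0 == 0xC0       # top two bits mark a pointer
    return ((b0 & 0x3F) << 8) | b1

print(pointer_offset(0xC1, 0xF0))  # 496
print(pointer_offset(0xC9, 0x9F))  # 2463
```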
Credits of this explanation go to http://www.helpmij.nl/forum/member.php/215405-wampier
