Hex addition overflow detection

I'm trying to detect whether a hexadecimal addition results in overflow,
using 8-bit two's complement signed arithmetic, e.g.:
0xFF + 0x1
But first, I'm having trouble determining whether a number written in hexadecimal is negative or positive.

In 2's complement, overflow occurs when the result is the wrong sign.
Example:
Two positives yield a negative result:
01111111 (+127)
+ 00000001 (+ 1)
-------------------
10000000 (-128) <-- overflow (wrong sign)
Two negatives yield a positive result:
11111111 ( -1)
+ 10000000 (-128)
-------------------
01111111 (+127) <-- overflow (wrong sign)
Note: overflow cannot occur if adding numbers with opposite signs.
01111111 (+127)
+ 10000000 (-128)
-------------------
11111111 ( -1)
Concerning the sign, the leftmost bit is the sign bit. "0" is positive
and "1" is negative.
Example:
+------- sign bit
|
v
0xFF = 11111111 = -1
0x80 = 10000000 = -128
0x01 = 00000001 = +1
0x7F = 01111111 = +127
If the leftmost hexadecimal digit is 0, 1, 2, 3, 4, 5, 6, or 7,
then it is positive. If the leftmost hexadecimal digit is 8, 9, A, B,
C, D, E, or F, then it is negative.
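
Both checks translate directly to a few lines of C; here is a minimal sketch (the helper names is_negative and add_overflows are mine, not from the question):

#include <stdint.h>
#include <stdio.h>

/* A byte is negative in 8-bit two's complement iff its sign bit
   (bit 7) is set, i.e. iff its leftmost hex digit is 8..F. */
static int is_negative(uint8_t x) {
    return (x & 0x80u) != 0;
}

/* Signed overflow occurs iff the operands have the same sign
   and the (wrapped) sum has the opposite sign. */
static int add_overflows(uint8_t a, uint8_t b) {
    uint8_t sum = (uint8_t)(a + b);  /* wraps modulo 256 */
    return is_negative(a) == is_negative(b)
        && is_negative(sum) != is_negative(a);
}

int main(void) {
    printf("%d\n", add_overflows(0xFF, 0x01)); /* 0: -1 + 1 = 0, no overflow */
    printf("%d\n", add_overflows(0x7F, 0x01)); /* 1: 127 + 1 wraps to -128   */
    return 0;
}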

Related

Error in LLVM 13 documentation for little-endian vectors?

LLVM 13 added a short note on the bit representation of sub-byte elements to its documentation on vector types. I can follow everything it says except for the memory diagram for little endian, which doesn't look right and disagrees with my experiments. I'm wondering if I'm misunderstanding something:
The same example for little endian:
%val = bitcast <4 x i4> <i4 1, i4 2, i4 3, i4 5> to i16
; Bitcasting from a vector to an integral type can be seen as
; concatenating the values:
; %val now has the hexadecimal value 0x5321.
store i16 %val, i16* %ptr
; In memory the content will be (8-bit addressing):
;
; [%ptr + 0]: 01010011 (0x53)
; [%ptr + 1]: 00100001 (0x21)
I agree that %val has value 0x5321, but shouldn't the memory layout be 0x21 0x53 (33 83 in decimal) instead of 0x53 0x21? For example, bitcasting %val to <2 x i8> yields <i8 33, i8 83>.
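For reference, a small C sketch of the kind of experiment mentioned in the question (assuming a little-endian host): store the 16-bit value 0x5321 and print the two bytes.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    uint16_t val = 0x5321;           /* the bitcast result from the IR */
    uint8_t bytes[2];
    memcpy(bytes, &val, sizeof val); /* inspect the in-memory layout */
    /* On a little-endian host this prints 0x21 then 0x53: the
       low-order byte is stored at the lower address. */
    printf("[ptr+0]: 0x%02X\n", bytes[0]);
    printf("[ptr+1]: 0x%02X\n", bytes[1]);
    return 0;
}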

Sum of two two's complement binary numbers

I am familiar with two's complement when performing addition in 4 bits, but I am confused by the question below:
**Find the sum of the two two's complement binary numbers 010111 and 110101, with an 8-bit output**
Below is my attempt, but I am in a dilemma: should I
(1) discard the carry, then prepend two 0s, so the answer is 00001100, which is 12 in decimal, or
(2) just add a 1 at the beginning, so the answer is 11001100, which is 204 in decimal?
Thank you!
For the two's complement in 8 bits you have to invert ALL 8 bits of the number.
As far as I know, the ones' complement and two's complement operate on the absolute value, so:
The binary number 010111 is represented in 8 bits as 00010111
C_1(00010111) = 00010111 xor 11111111 = 11101000
C_2(00010111) = C_1 + 1 = 11101000 + 1 = 11101001
The binary number 110101 is represented in 8 bits as 00110101
C_1(00110101) = 00110101 xor 11111111 = 11001010
C_2(00110101) = C_1 + 1 = 11001010 + 1 = 11001011
Now add the two two's complements:
C_2(00010111) + C_2(00110101) = 11101001 + 11001011 = (1)10110100 -> 10110100 after discarding the carry
Please correct me if I messed something up with the sign bits (I just took the binary numbers as-is in 8 bits)...
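
Note that in the question as asked, the two inputs are already two's complement values, so the usual method is to sign-extend each 6-bit number to 8 bits and add, discarding any carry out of bit 7. A minimal C sketch (the helper name sext6 is mine):

#include <stdint.h>
#include <stdio.h>

/* Sign-extend a 6-bit two's complement value to 8 bits by
   copying bit 5 into bits 6 and 7. */
static uint8_t sext6(uint8_t x) {
    return (x & 0x20u) ? (uint8_t)(x | 0xC0u) : x;
}

int main(void) {
    uint8_t a = 0x17; /* 010111 = +23                           */
    uint8_t b = 0x35; /* 110101 = -11 in 6-bit two's complement */
    /* Add the sign-extended values; the carry out of bit 7
       is discarded by the cast back to 8 bits. */
    uint8_t sum = (uint8_t)(sext6(a) + sext6(b));
    printf("%d\n", (int8_t)sum); /* 12, i.e. 00001100 */
    return 0;
}

This reproduces option (1): the 8-bit sum is 00001100 = 12.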

Converting a number to IEEE 754

Can someone help me with this question:
“Convert the decimal number 10/32 to the 32-bit IEEE 754 floating point and
express your answer in hexadecimal. (Reminder: the 32 bits are used as
follows: Bit 1: sign of mantissa, bits 2-9: 8-bits of exponent in excess 127, bits 10-32: 23 bits for magnitude of mantissa.)”
I understand how to convert a decimal number to IEEE 754. But I am confused about how to answer this, since it only gives me a quotient. I am not allowed to use a calculator, so I am unsure how to work this out. Should I convert both numbers to binary first and then divide?
10/32 = 5/16 = 5·2^−4 = 1.25·2^−2 = 1.01₂·2^−2.
The sign is +, the exponent is −2, and the significand is 1.01₂.
A positive sign is encoded as 0.
Exponent −2 is encoded as −2 + 127 = 125 = 01111101₂.
Significand 1.01₂ is 1.01000000000000000000000₂, and it is encoded using the last 23 bits, 01000000000000000000000₂.
Putting these together, the IEEE-754 encoding is 0 01111101 01000000000000000000000. To convert to hexadecimal, first organize into groups of four bits: 0011 1110 1010 0000 0000 0000 0000 0000. Then the hexadecimal can be easily read: 3EA00000₁₆.
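A quick C check of this result (assuming float is IEEE-754 single precision, which holds on virtually all current hardware):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    uint32_t bits = 0x3EA00000u;
    float f;
    memcpy(&f, &bits, sizeof f); /* reinterpret the bit pattern as a float */
    printf("%f\n", f);           /* 0.312500, i.e. 10/32 */
    return 0;
}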
I see it like this:
10/32 = // input
10/2^5 = // convert division by power of 2 to bitshift
1010b >> 5 =
.01010b // fractional result
--^-------------------------------------------------------------
|
first nonzero bit is the exponent position and start of mantissa
----------------------------------------------------------------
man = (1)010b // first one is implicit
exp = -2 + 127 = 125 // position from decimal point + bias
sign = 0 // non negative
----------------------------------------------------------------
0 01111101 01000000000000000000000 b
^ ^ ^
| | mantissa + zero padding
| exp
sign
----------------------------------------------------------------
0011 1110 1010 0000 0000 0000 0000 0000 b
3 E A 0 0 0 0 0 h
----------------------------------------------------------------
3EA00000h
Yes, the answer of Eric Postpischil uses the same approach (+1 btw), but I didn't like the formatting, as it was not clear at first glance what to do without properly reading the text.
Giving the conversion of 10/32 without a calculator as an exercise is pure sadism.
There is a general method doable without tools, but it may be tedious.
n is the number to code; we assume n < 1.
exp = 0
mantissa = 0
repeat
    n *= 2
    exp++
    if n >= 1
        n = n - 1
        mantissa = mantissa << 1 | 1
    else
        mantissa = mantissa << 1
until mantissa is a 1 followed by 23 bits
Then you just have to code mantissa and (23-exp) in IEEE format.
Note that this kind of computation frequently leads to loops. Whenever you find the same n again, you know that the sequence of bits will repeat.
As an example, assume we have to code 3/14
3/14 -> 6/14 e=1 m=0
6/14 -> 12/14 e=2 m=00
12/14 -> 24/14-14/14=10/14 e=3 m=001
10/14 -> 20/14-14/14=6/14 e=4 m=0011
6/14 -> 12/14 e=5 m=00110
Great we found a loop !
6/14->12/14->10/14->6/14.
So the mantissa will be 110 iterated as required 110110110...
If we fill the mantissa to 24 bits, we need 26 iterations and the exponent is 23−26 = −3 (another way to get it is to note that n became ≥ 1 for the first time at iteration 3, and the exponent is −3 since 1 ≤ 3/14·2^3 < 2).
And we can do the IEEE-754 coding with biased exponent = 127 − 3 = 124 and mantissa = 1.1011011011011...
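
Here is a runnable C rendering of the loop above (my own sketch, not from the answer): it keeps the fraction as an exact integer pair p/q, so no floating point is needed, and it truncates rather than rounds the significand.

#include <stdint.h>
#include <stdio.h>

/* Encode the fraction p/q (with 0 < p/q < 1) as an IEEE-754
   single, truncating the significand (no rounding).  Exact
   integer arithmetic on p and q replaces "n *= 2" etc. */
static uint32_t encode_fraction(uint32_t p, uint32_t q) {
    uint32_t mantissa = 0;
    int exp = 0;
    do {
        p *= 2;                 /* n *= 2   */
        exp++;
        if (p >= q) {           /* n >= 1 ? */
            p -= q;             /* n -= 1   */
            mantissa = mantissa << 1 | 1;
        } else {
            mantissa = mantissa << 1;
        }
    } while (mantissa < (1u << 23)); /* until 1 followed by 23 bits */
    /* The true exponent is 23 - exp; add the bias of 127. */
    uint32_t biased = (uint32_t)(127 + 23 - exp);
    return biased << 23 | (mantissa & 0x7FFFFFu); /* sign bit is 0 */
}

int main(void) {
    printf("0x%08X\n", encode_fraction(10, 32)); /* 0x3EA00000 */
    printf("0x%08X\n", encode_fraction(3, 14));  /* 3/14, significand truncated */
    return 0;
}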

4-byte sequence 0x86 0x65 0x71 0xA5 in a little-endian architecture, interpreted as a 32-bit signed integer, represents what decimal value?

I know how to convert 0x86 0x65 0x71 0xA5 into decimals, I'm just not sure how to approach it past that point.
I assume it is (from least significant to most significant) 134 101 113 165, but what exactly do I do past this point? I'm guessing 134,101,113,165 is not correct. Do I need to convert anything into binary to do this? Kind of lost conceptually.
By converting each octet into decimal, you've essentially converted the number into base 256. You can do it that way, but it's not particularly easy. You'd have to combine the parts as follows:
134 x (256^0) + 101 x (256^1) + 113 x (256^2) + 165 x (256^3)
0x86 0x65 0x71 0xA5 as a 32-bit unsigned integer in little-endian notation would mean that the integer in hex is 0xA5716586. Then just convert from hex to decimal normally.
Either way, you will get 2,775,672,198.
However, this is a signed integer, not an unsigned integer. And because the most significant byte is A5, the most significant bit is 1. Therefore, this is a negative number.
So we need to do some math:
FFFFFFFF - A5716586 = 5A8E9A79
So:
A5716586 + 5A8E9A79 = FFFFFFFF
Also, in 32-bit arithmetic:
FFFFFFFF + 1 = 0
So:
FFFFFFFF => -1
Combining these two:
A5716586 + 5A8E9A79 => -1
A5716586 = -1 -5A8E9A79 = - (5A8E9A79 + 1) = - 5A8E9A7A
Also:
5A8E9A7A => 1,519,295,098 (decimal)
So our final answer is -1,519,295,098
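
A C sketch of the same conversion; the decode assembles the little-endian bytes explicitly, so it works regardless of the host's endianness (the final cast assumes the usual two's complement wrap-around):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* The bytes in memory order, lowest address first. */
    uint8_t bytes[4] = {0x86, 0x65, 0x71, 0xA5};

    /* Little-endian decode: byte k contributes at bit position 8*k. */
    uint32_t u = (uint32_t)bytes[0]
               | (uint32_t)bytes[1] << 8
               | (uint32_t)bytes[2] << 16
               | (uint32_t)bytes[3] << 24;

    printf("unsigned: %u\n", u);          /* 2775672198  */
    printf("signed:   %d\n", (int32_t)u); /* -1519295098 */
    return 0;
}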

2's complement representation of fractions?

I'm a little lost on this. I need to use two fractional bits:
0.a₋₁a₋₂
Like that. Now I can use .00, .01, .10, and .11.
But I need negative numbers (in two's complement) also, so would .10 be −.5, or would it be −.25?
The same with .11: would that be −.75, or would it be −.5?
I'm pretty sure it would be the former in both cases, but I'm not entirely positive.
In two's complement notation, all of the most significant bits of a negative number are set to 1. Let's assume you're storing these numbers as 8 bits, with 2 to the right of the "binary point."
By definition, x + -x = 0, so we can write:
0.5 + -0.5 = 0.10 + 111111.10 = 0 // -0.5 = 111111.10
0.25 + -0.25 = 0.01 + 111111.11 = 0 // -0.25 = 111111.11
0.75 + -0.75 = 0.11 + 111111.01 = 0 // -0.75 = 111111.01
and so on.
Using 8 bits like this, the largest number you can store is
011111.11 = 31.75
the least-positive number is
000000.01 = 0.25
the least-negative number is
111111.11 = -0.25
and the smallest (that is, the most negative) is
100000.00 = -32
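
One way to see this scheme in code: treat the 8-bit pattern as a raw two's complement integer scaled by 4 (Q5.2 format, to borrow the common fixed-point notation) and divide by 4.0 to recover the value. A minimal C sketch (my own):

#include <stdint.h>
#include <stdio.h>

/* Q5.2 fixed point: an int8_t stores value * 4, so its two
   low bits sit to the right of the binary point. */
static double q52_value(int8_t raw) {
    return raw / 4.0;
}

int main(void) {
    printf("%g\n", q52_value((int8_t)0xFE)); /* 111111.10 -> -0.5  */
    printf("%g\n", q52_value((int8_t)0xFF)); /* 111111.11 -> -0.25 */
    printf("%g\n", q52_value((int8_t)0x7F)); /* 011111.11 -> 31.75 */
    printf("%g\n", q52_value((int8_t)0x80)); /* 100000.00 -> -32   */
    return 0;
}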
see it this way:
you have the normal binary representation
let's assume 8-bit words ...
the first bit (MSB) has the value 128, the second 64, and so on ...
in other words, the first bit (MSB) is 2^7 ... the second bit is 2^6 ... and the last bit is 2^0
now we can assume our 8-bit word has 2 fractional (binary) places ...
we now start with the first bit (MSB) being 2^5 and end with the last bit being 2^-2
no magic here ...
now to turn that into two's complement: simply negate the value of the first bit
so instead of 2^5 it would be -2^5
so base-10 -0.75 would be, in two's complement,
111111.01 ...
(1*(-32) + 1*16 + 1*8 + 1*4 + 1*2 + 1*1 + 0*0.5 + 1*0.25)
(1*(-2^5) + 1*2^4 + 1*2^3 + 1*2^2 + 1*2^1 + 1*2^0 + 0*2^(-1) + 1*2^(-2))
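
A tiny C sketch of this weighted-sum view (my own illustration), with the MSB's weight negated as described:

#include <stdio.h>

/* Value of an 8-bit two's complement word with 2 fractional bits:
   bit 7 weighs -2^5; bits 6..0 weigh +2^4 down to +2^-2. */
static double value(unsigned bits) {
    double v = (bits & 0x80) ? -32.0 : 0.0; /* negated MSB weight */
    double w = 16.0;                        /* weight of bit 6: 2^4 */
    for (int i = 6; i >= 0; i--, w /= 2)
        if (bits >> i & 1)
            v += w;
    return v;
}

int main(void) {
    printf("%g\n", value(0xFD)); /* 111111.01 -> -0.75 */
    return 0;
}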
A number stored in two's complement inverts the sign of the uppermost bit's magnitude (so that for e.g. a 16-bit number, the upper bit is -32768 rather than +32768). All other bits behave as normal. Consequently, when performing math on multi-word numbers, the upper word of each number should be regarded as two's-complement (since its uppermost bit will be the uppermost bit of the overall number), but all other words in each number should be regarded as unsigned quantities.
For example, a 16-bit two's complement number has place values (-32768, 16384, 8192, 4096, 2048, 1024, 512, 256, 128, 64, 32, 16, 8, 4, 2, and 1). Split into two 8-bit parts, those parts will have place values (-32768, 16384, 8192, 4096, 2048, 1024, 512, and 256); and (128, 64, 32, 16, 8, 4, 2, and 1). The first set of values is in a two's complement 8-bit number, times 256; the latter set is an unsigned 8-bit number.
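
A C sketch of that split (my own illustration): read the high byte as signed two's complement and the low byte as unsigned, then reconstruct as signed-high * 256 + unsigned-low:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int16_t x = -12345;                      /* 0xCFC7 in two's complement   */
    int8_t  hi = (int8_t)((uint16_t)x >> 8); /* upper byte: two's complement */
    uint8_t lo = (uint8_t)x;                 /* lower byte: unsigned         */
    /* Reconstruct: signed high byte times 256 plus unsigned low byte. */
    int16_t y  = (int16_t)(hi * 256 + lo);
    printf("%d = %d * 256 + %d = %d\n", x, hi, lo, y); /* -12345 = -49*256 + 199 */
    return 0;
}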
