How does the processor know about unsigned and signed numbers?

If I talk about 2's complement, the MSB is used as a sign bit. For example, in 8-bit 2's complement signed notation 01111111 is +127 and 11111111 is -1 (10000000 is -128). But on the contrary, 11111111 is 255 in unsigned notation. How does the processor know whether the number is signed or unsigned? Is there another bit used for this purpose? Does the compiler do something?

The beauty of 2's complement is that the bit operations for addition, subtraction and multiplication are exactly the same as for unsigned arithmetic. So most likely, the processor doesn't give a monkey's.
The same cannot be said for 1's complement arithmetic (it requires, among other things, a complementing subtractor), or signed magnitude arithmetic.
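For illustration (my own sketch, not part of the answer above): the same 8-bit add produces the same bit pattern no matter how you later interpret it, which is exactly why the ALU doesn't need to know. Signedness only shows up in the instructions the compiler picks for comparisons, widening and division.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t a = 0xFF;                             /* 255 unsigned, -1 as two's complement */
    uint8_t b = 0x02;                             /* 2 either way */
    uint8_t sum = a + b;                          /* one and the same add instruction */

    printf("bits:     0x%02X\n", sum);            /* 0x01 */
    printf("unsigned: %u\n", (unsigned)sum);      /* 255 + 2 = 257, wraps to 1 */
    printf("signed:   %d\n", (int8_t)sum);        /* -1 + 2 = 1 */
    return 0;
}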

Related

Is there a more efficient way to divide and conquer a uint 256 log on 64 bit hardware with rust or inline assembly than converging Taylor series?

I am looking to take the log base n (10 would be fine) of a 256-bit unsigned integer as a floating point in Rust, with no loss of precision. It would seem to me that I need to implement an 8xf64 512-bit float type and use a Taylor series to approximate ln and then the log. I know there are assembly methods to obtain the log of an f64. I am wondering if anyone on Stack Overflow can think of a divide and conquer or other method which would be more efficient. I would be amenable to inline assembly operating on the 8xf64 512-bit array.
This might be a useful starting point / outline of an algorithm. IDK if it will get you exact results, like error <= 0.5ulp (i.e. the last bit of the mantissa of your 512-bit float correctly rounded), or even error <= 1 ulp. Perhaps worth looking into what extended-precision calculators like bc / dc / calc do.
I think log converges quickly, so if you're going to do Newton iterations to refine, this bit-scan method might be a fast way to get a good starting point. Even if you only really need about 256 mantissa bits correct, I don't know how big a polynomial it would take to get that, and each multiply / add / fma would be on 512-bit (8x) or 320-bit (5x double precision).
Start by converting integer to binary float
For normal-sized floating-point numbers, the usual method takes advantage of the logarithmic nature of binary floating point. Without 256-bit HW float, you'll want to find the ilog2(int) yourself, i.e. position of the highest set bit (Efficiently find least significant set bit in a large array?).
Then treat your 256-bit integer as the mantissa of a number in the [1..2) or [0.5 .. 1) range, and yes, use a polynomial approximation for log2() that's accurate over that limited range. (Before the actual soft-float stuff, you might want to left-shift the number so it's normalized, i.e. the highest set bit is at the top: x <<= clz(x).)
Then a polynomial approximation over the mantissa
And then add the integer exponent + log_approx(mantissa) => log2(x).
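As a scaled-down sketch of that outline (my own, using a 64-bit integer and a hardware double so it fits in a few lines; __builtin_clzll is a GCC/Clang builtin, and the hardware log2() stands in for the polynomial you would use at 256 bits):

#include <stdint.h>
#include <math.h>

double log2_u64(uint64_t x)                       /* x must be non-zero */
{
    int msb = 63 - __builtin_clzll(x);            /* ilog2(x): position of highest set bit */
    double mantissa = ldexp((double)x, -msb);     /* scale into [1.0, 2.0); the cast drops
                                                     bits below the top 53, fine for a sketch */
    return (double)msb + log2(mantissa);          /* integer exponent + log2(mantissa) */
}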
Efficient implementation of log2(__m256d) in AVX2 has more detail on implementing log2(double) (with SIMD doing 4 at a time, very different from doing one extended precision calculation).
It includes some links to implementations, e.g. Agner Fog's VCL using the ratio of two polynomials instead of one larger polynomial, and various tricks to maintain as much precision as possible: https://github.com/vectorclass/version2/blob/9874e4bfc7a0919fda16596144d393da5f8bf6c0/vectormath_exp.h#L942. Such as further range reduction: if x > SQRT2*0.5, then increment the exponent and double the mantissa. (If 512-bit FP division is really expensive, you might just use more terms in one polynomial.) VCL is currently Apache licensed, so feel free to copy as much as you want from it into anything.
IDK if there are more tricks that might become more valuable for big extended precision, or for soft-float, which that implementation doesn't use. VCL's math functions spend more effort to maintain high precision than some faster approximations, but they're not exact.
Do you really need 512-bit float? Maybe only 320-bit (5x double)?
If you don't need more exponent-range than a double, you might be able to extend the double-double-arithmetic technique to wider floats, taking advantage of hardware FP to get 52 or 53 mantissa bits per 64-bit chunk. (From comments, apparently you're already planning to do that.)
You might not need 512-bit float to have sufficient precision. 256/52 = 4.92, so 5 double chunks already have more precision (mantissa bits) than your input, and could exactly represent any 256-bit integer. (IEEE double does have a large enough exponent range: -1022 .. +1023.) And they have enough precision to spare that log2(int) should map each 256-bit input to a unique, monotonic output, even with some rounding error.
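If you go the double-double route, the basic building block is the classic TwoSum trick, which recovers the rounding error of an addition exactly; a minimal sketch of it (the standard algorithm, not code from the question or VCL):

static void two_sum(double a, double b, double *s, double *err)
{
    *s = a + b;                          /* rounded sum */
    double bb = *s - a;                  /* the part of b that was actually absorbed */
    *err = (a - (*s - bb)) + (b - bb);   /* exactly what rounding threw away */
}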

Radix in hexadecimal

Pierre Terdiman in his article "Radix Sort Revisited" tells us:
For example you’ll need 4 passes to sort standard 32 bits integers,
since in hexadecimal the radix is a byte.
But 0xAB has two digits, namely A and B, each 4 bits wide.
So, what is the radix in hexadecimal? I can't make sense of the article.
From what I understand, the 0xAB was just an example of what a radix is. When approaching radix sort, it's easier to use bytes (no need for shifting - only casting, in C/C++ anyway).
The last part of the sentence is the important thing here:
since in hexadecimal the radix is a byte
After saying that, it doesn't matter what was said earlier...
Just to strengthen the argument, examine his example - the SortedBuffer initialization uses byte as radix (256*sizeof(int)), not nibble:
memset(SortedBuffer, -1, 256*sizeof(int)); // Fill with -1
(again, from what I understand of this article...)
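To make the "byte as radix" point concrete, here's a minimal LSD radix sort sketch (my own code, not Terdiman's): 32-bit keys, 8-bit radix, hence exactly 4 counting passes over a 256-entry histogram.

#include <stdint.h>
#include <string.h>

void radix_sort_u32(uint32_t *a, uint32_t *tmp, size_t n)
{
    for (int pass = 0; pass < 4; pass++) {          /* 4 passes of 8 bits each */
        size_t count[256] = {0};
        int shift = pass * 8;

        for (size_t i = 0; i < n; i++)              /* histogram of this byte */
            count[(a[i] >> shift) & 0xFF]++;

        size_t offset[256], sum = 0;
        for (int b = 0; b < 256; b++) {             /* prefix sums -> start offsets */
            offset[b] = sum;
            sum += count[b];
        }

        for (size_t i = 0; i < n; i++)              /* stable scatter into tmp */
            tmp[offset[(a[i] >> shift) & 0xFF]++] = a[i];

        memcpy(a, tmp, n * sizeof *a);              /* feed the next pass */
    }
}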

Truncating 64-bit IEEE doubles to 61-bits in a safe fashion

I am developing a programming language, September, which uses a tagged variant type as its main value type. 3 bits are used for the type (integer, string, object, exception, etc.), and 61 bits are used for the actual value (the actual integer, pointer to the object, etc.).
Soon, it will be time to add a float type to the language. I almost have the space for a 64-bit double, so I wanted to make use of doubles for calculations internally. Since I'm actually 3 bits short for storage, I would have to round the doubles off after each calculation - essentially resulting in a 61-bit double with a mantissa or exponent shorter by 3 bits.
But! I know floating point is fraught with peril and doing things which sound sensible on paper can produce disastrous results with FP math, so I have an open-ended question to the experts out there:
Is this approach viable at all? Will I run into serious error-accumulation problems in long-running calculations by rounding at each step? Is there some specific way in which I could do the rounding in order to avoid that? Are there any special values that I won't be able to treat that way (subnormals come to mind)?
Ideally, I would like my floats to be as well-behaved as a native 61-bit double would be.
I would recommend borrowing bits from the exponent field of the double-precision format. This is the method described in this article (that you would modify to borrow 3 bits from the exponent instead of 1). With this approach, all computations that do not use very large or very small intermediate results behave exactly as the original double-precision computation would. Even computations that run into the subnormal region of the new format behave exactly as they would if a 1+8+52 61-bit format had been standardized by IEEE.
By contrast, naively borrowing any number of bits from the significand introduces many double-rounding problems, which are all the more frequent when you round from a 52-bit significand to a significand only a few bits shorter. Borrowing one bit from the significand, as you suggest in an edit to your question, would be the worst case, with half the operations statistically producing double-rounded results that differ from what the ideal "native 61-bit double" would have produced. This means that instead of being accurate to 0.5 ULP, the basic operations would be accurate to 3/4 ULP, a dramatic loss of accuracy that would derail many of the existing, finely-designed numerical algorithms that expect 0.5 ULP.
Three is a significant number of bits to borrow from an exponent that only has 11, though, and you could also consider using the single-precision 32-bit format in your language (calling the single-precision operations from the host).
Lastly, I give visibility here to another solution, found by Jakub: borrow the three bits from the significand, and simulate round-to-odd for the intermediate double-precision computation before converting to the nearest number in the 49-explicit-significand-bit, 11-exponent-bit format. If this route is chosen, it may be useful to remark that the rounding itself to 49 bits of significand can be achieved with the following operations:
if ((repr & 7) == 4)
    repr += (repr & 8) >> 1; /* midpoint case: round to even */
else
    repr += 4;
repr &= ~(uint64_t)7; /* round to the nearest */
Despite working on the integer having the same representation as the double being considered, the above snippet works even if the number goes from normal to subnormal, from subnormal to normal, or from normal to infinite. You will of course want to set a tag in the three bits that have been freed as above. To recover a standard double-precision number from its unboxed representation, simply clear the tag with repr &= ~(uint64_t)7;.
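Put together, a hypothetical box/unbox pair could look like this (the function names and the FLOAT_TAG value are mine; only the rounding itself comes from the snippet above):

#include <stdint.h>
#include <string.h>

#define FLOAT_TAG ((uint64_t)5)       /* assumed 3-bit type tag for floats */

static uint64_t box_double(double d)
{
    uint64_t repr;
    memcpy(&repr, &d, sizeof repr);
    if ((repr & 7) == 4)
        repr += (repr & 8) >> 1;      /* midpoint case: round to even */
    else
        repr += 4;
    repr &= ~(uint64_t)7;             /* nearest value with 49 significand bits */
    return repr | FLOAT_TAG;          /* store the tag in the freed bits */
}

static double unbox_double(uint64_t repr)
{
    double d;
    repr &= ~(uint64_t)7;             /* clear the tag */
    memcpy(&d, &repr, sizeof d);
    return d;
}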
This is a summary of my own research and of information from the excellent answer by @Pascal Cuoq.
There are two places where we can truncate the 3-bits we need: the exponent, and the mantissa (significand). Both approaches run into problems which have to be explicitly handled in order for the calculations to behave as if we used a hypothetical native 61-bit IEEE format.
Truncating the mantissa
We shorten the mantissa by 3 bits, resulting in a 1s+11e+49m format. When we do that, performing calculations in double-precision and then rounding after each computation exposes us to double rounding problems. Fortunately, double rounding can be avoided by using a special rounding mode (round-to-odd) for the intermediate computations. There is an academic paper describing the approach and proving its correctness for all doubles - as long as we truncate at least 2 bits.
A portable implementation in C99 is straightforward. Since round-to-odd is not one of the available rounding modes, we emulate it by using fesetround(FE_TOWARDZERO), and then setting the last bit if the FE_INEXACT exception occurs. After computing the final double this way, we simply round to nearest for storage.
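A sketch of that emulation for a single addition (my own code following the description above; add_round_to_odd is a hypothetical helper name, and FENV_ACCESS support varies by compiler):

#include <stdint.h>
#include <string.h>
#include <fenv.h>
#pragma STDC FENV_ACCESS ON

static double add_round_to_odd(double a, double b)
{
    int saved = fegetround();
    fesetround(FE_TOWARDZERO);
    feclearexcept(FE_INEXACT);

    double r = a + b;                  /* truncated intermediate result */

    if (fetestexcept(FE_INEXACT)) {    /* something was rounded away...          */
        uint64_t bits;
        memcpy(&bits, &r, sizeof bits);
        bits |= 1;                     /* ...so force the last bit: round-to-odd */
        memcpy(&r, &bits, sizeof bits);
    }
    fesetround(saved);
    return r;
}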
The format of the resulting float loses about 1 significant (decimal) digit compared to a full 64-bit double (from 15-17 digits to 14-16).
Truncating the exponent
We take 3 bits from the exponent, resulting in a 1s+8e+52m format. This approach (applied to a hypothetical introduction of 63-bit floats in OCaml) is described in an article. Since we reduce the range, we have to handle out-of-range exponents on both the positive side (by simply 'rounding' them to infinity) and the negative side. Doing this correctly on the negative side requires biasing the inputs to any operation in order to ensure that we get subnormals in the 64-bit computation whenever the 61-bit result needs to be subnormal. This has to be done a bit differently for each operation, since what matters is not whether the operands are subnormal, but whether we expect the result to be (in 61-bit).
The resulting format has significantly reduced range since we borrow a whopping 3 out of 11 bits of the exponent. The range goes down from 10^-308...10^308 to about 10^-38...10^38. Seems OK for computation, but we still lose a lot.
Comparison
Both approaches yield a well-behaved 61-bit float. I'm personally leaning towards truncating the mantissa, for three reasons:
the "fix-up" operations for round-to-odd are simpler, do not differ from operation to operation, and can be done after the computation
there is a proof of mathematical correctness of this approach
giving up one significant digit seems less impactful than giving up a big chunk of the double's range
Still, for some uses, truncating the exponent might be more attractive (especially if we care more about precision than range).

Arduino Arithmetic error negative result

I am trying to multiply 52 by 1000 and I am getting a negative result
int getNewSum = 52 * 1000;
but the code above is outputting a negative result: -13536
An explanation of how two's complement representation works is probably better given on Wikipedia and other places than here. What I'll do here is to take you through the workings of your exact example.
The int type on your Arduino is represented using sixteen bits, in a two's complement representation (bear in mind that other Arduinos use 32 bits for it, but yours is using 16.) This means that both positive and negative numbers can be stored in those 16 bits, and if the leftmost bit is set, the number is considered negative.
What's happening is that you're overflowing the number of bits you can use to store a positive number, and (accidentally, as far as you're concerned) setting the sign bit, thus indicating that the number is negative.
In 16 bits on your Arduino, decimal 52 would be represented in binary as:
0000 0000 0011 0100
(2^5 + 2^4 + 2^2 = 52)
However, the result of multiplying 52 by 1,000 -- 52,000 -- will end up overflowing the magnitude bits of an int, putting a '1' in the sign bit on the end:
*----This is the sign bit. It's now 1, so the number is considered negative.
1100 1011 0010 0000
(typically, computer integer arithmetic and associated programming languages don't protect you against doing things like this, for a variety of reasons, mostly related to efficiency, and mostly now historical.)
Because that sign bit on the left-hand end is set, to convert that number back into decimal from its assumed two's complement representation, we assume it's a negative number, and then first take the one's complement (flipping all the bits):
0011 0100 1101 1111
-- which represents 13,535 -- and add one to it, yielding 13,536, and call it negative: -13,536, the value you're seeing.
If you read up on two's complement/integer representations in general, you'll get the hang of it.
In the meantime, this probably means you should be looking for a bigger type to store your number. Arduino has unsigned integers, and a long type, which will use four bytes to store your numbers, giving you a range from -2,147,483,648 to 2,147,483,647. If that's enough for you, you should probably just switch to use long instead of int.
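A small sketch of the fix (my own example), because the detail that trips people up is that the multiplication itself must be done in the wider type, not just the variable receiving it:

void setup() {
    int a = 52, b = 1000;

    long bad  = a * b;            // still -13536 on a 16-bit Arduino: the multiply
                                  // is done in int and overflows before the assignment
    long good = (long)a * b;      // 52000: one operand widened, multiply done in 32 bits

    long getNewSum = 52L * 1000;  // with literal constants, an L suffix does the same job
}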
Matt's answer is already a very good in-depth explanation of what's happening, but for those looking for a more TL;DR practical answer:
Problem:
This happens quite often to Arduino programmers when they assign (with the = sign) the result of an arithmetic expression (usually a multiplication) to a normal integer (int). As mentioned, when the result is bigger than what fits in the type the compiler has assigned to the variable, overflow happens.
Solution 1:
The easiest solution is to replace the int type with a bigger datatype, considering your needs. As this tutorialspoint.com tutorial explains, there are different integer types we can use:
int:
16 bit: from -32,768 to 32,767
32 bit: from -2,147,483,648 to 2,147,483,647
unsigned int: from 0 to 65,535
long: from -2,147,483,648 to 2,147,483,647
unsigned long: from 0 to 4,294,967,295
Solution 2:
This works only if you have some divisions with big enough denominators in your arithmetic. In an expression like a * b / c the multiplication is calculated prior to the division, so if you have divisions in your equation, try wrapping them in parentheses: for example, replace a * b / c with a * (b / c) (bearing in mind that the integer division now truncates first).

Difference between signed and unsigned 16bit BCDs?

How do you tell the difference?
For example, say you have 0110 0101 1001 0011.
The unsigned BCD is 6593, but what is the signed value?
Usually, you tell the difference by explicitly storing the sign.
Radix complement (en.wikipedia.org/wiki/Method_of_complements)
In a normal binary system, signed numbers use the MSB (most significant bit) to determine the sign of the number; the rest of the number is the actual value.
Unlike that, in packed BCD the 4 LSBs (least significant bits, i.e. the last nibble) represent the sign, and the rest of the number (to the left) represents the actual value.
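As a hedged example (assuming the common IBM-style packed-decimal convention, where a sign nibble of 0xB or 0xD means negative and anything else means positive; the function name is mine), decoding a 16-bit signed packed BCD could look like:

#include <stdint.h>

static int decode_signed_packed_bcd16(uint16_t x)
{
    int sign = x & 0xF;                            /* last nibble holds the sign code */
    int value = 0;
    for (int shift = 12; shift >= 4; shift -= 4)   /* the 3 remaining nibbles are digits */
        value = value * 10 + ((x >> shift) & 0xF);
    return (sign == 0xB || sign == 0xD) ? -value : value;
}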
