Can you tell me how best to calculate C.04 (a hexadecimal number) * 16^2 (decimal) in my head? I first converted the hexadecimal number to decimal and then multiplied the result by 16^2, but there should be a faster way, because the 16 suggests staying in hexadecimal.
Thanks :)
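For what it's worth, the shortcut is that multiplying by 16^2 just shifts the hexadecimal point two places to the right, so no decimal conversion is needed. A quick Python check:

x = 0xC + 0x04 / 16**2   # C.04 in hex = 12 + 4/256 = 12.015625
print(x * 16**2)         # 3076.0
print(0xC04)             # 3076 -- C.04 with the hex point shifted two places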
This is maybe more of a math question, but I'm stumped on it:
Let's say I have an 8-digit hex string. That can represent values from 0 to 2^32-1. Now let's say I want an 8-digit string in another base, like base32. Is it possible to construct an alphabet for base32 (or another base) that is a strict superset of hexadecimal, so that any hex string (any value below 2^32) decodes via base32 to the same value, and only larger values (>= 2^32) start incorporating base32 characters outside the hex range?
In other words is it possible to "upgrade" from base 16 to a higher numbered base in a way that is backward compatible with hex identifiers?
You can assign numbers to 8-character strings however you like.
There are 2^32 8-character hex strings, to which you can certainly assign their hex values.
There are 2^40 8-character strings with characters in, say, 0123456789ABCDEFGHJKMNPQRSTUVWXY. 2^32 of them are hex strings, and the remaining 2^40 - 2^32 strings can be assigned any numbers you like.
You won't be able to assign them numbers via a "normal" positional system, however, because hex requires "10" to be 16, not 32. There are ways that aren't that hard, though. For example, given a 40-bit number:
Convert the lower 32 bits to 8 hex characters.
Assign one of the remaining 8 bits to each character, and for each bit that is set, shift the corresponding character 16 places up the alphabet, changing its range from '0'-'F' to 'G'-'Y'.
Now you have a string for each 40-bit number, and the smaller ones have the same strings as their hex representations.
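To make that concrete, here is a small Python sketch of the scheme (the alphabet is the one suggested above; which high bit maps to which character position is an arbitrary choice of this sketch):

ALPHABET = "0123456789ABCDEFGHJKMNPQRSTUVWXY"  # 32 symbols; the hex digits are a prefix

def encode40(n):
    # Encode a 40-bit number as 8 characters; values below 2^32 come out
    # identical to their plain 8-digit hex representation.
    assert 0 <= n < 2**40
    low, high = n & 0xFFFFFFFF, n >> 32
    chars = []
    for i in range(8):                    # most significant character first
        digit = (low >> (4 * (7 - i))) & 0xF
        if (high >> (7 - i)) & 1:         # high bit set: move into 'G'-'Y'
            digit += 16
        chars.append(ALPHABET[digit])
    return "".join(chars)

print(encode40(0xDEADBEEF))   # DEADBEEF -- same as plain hex
print(encode40(2**32))        # 0000000G -- first value needing a non-hex character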
I am not sure if I understand you right; please correct me if I am wrong. Anyway:
A hex digit (base 16) is represented by 4 bits. Its range is 0000 … 1111, representing digits 0 … F.
An 8-digit hex string is thus represented by 32 bits, that can represent values from 0 to 2^32-1. Its range is 00000000 … FFFFFFFF.
Let's consider a base 17 system, called here a 17dec system.
A 17dec digit (base 17) needs 5 bits. Its range is 00000 … 10000, representing digits 0 … G.
An 8-digit 17dec string thus needs 40 bits and can represent values from 0 to 17^8-1. Its range is 00000000 … GGGGGGGG.
Hex and 17dec can both represent every value from 0 to 2^32-1, but the place value of each digit differs, so a positional number system with a higher base cannot be string-compatible with a lower-base system.
Take, e.g., the binary value 10000 (decimal 16).
The hex representation of 16 is 10.
The 17dec representation of 16 is G.
There is no way to make these positional representations compatible.
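A minimal Python sketch of that divergence (to_base is a small helper written here just for illustration):

DIGITS = "0123456789ABCDEFG"

def to_base(n, base):
    s = ""
    while True:
        n, r = divmod(n, base)
        s = DIGITS[r] + s
        if n == 0:
            return s

print(to_base(16, 16))   # '10'
print(to_base(16, 17))   # 'G' -- same value, different string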
I'm doing an assignment for one of my classes and I'm stuck on these two questions:
Express the decimal -412.8 in binary floating-point notation, using 11 fraction bits for the significand and 3 digits for the exponent, without bias.
I think I managed to solve it, but my exponent has 4 bits, not 3. I don't really understand how you can convert -412.8 to floating-point notation with only a 3-bit exponent. Here is how I tried to solve it:
First of all, the floating-point notation has three parts: the sign bit (0 for positive numbers, 1 for negative), the exponent, and the mantissa. The mantissa in this case includes the leading 1. Since the number is negative, the sign bit is going to be 1. For the mantissa, I first converted 412.8 to binary, which gave me 110011100.11 (0.8 is actually 0.110011… repeating in binary, so this is already truncated), and then I shifted the binary point 8 places to the left, which gives me 1.1001110011. The mantissa is therefore 1100 1110 011 (11 bits, as the teacher asked). Finally, the exponent is going to be 8, since I shifted the binary point 8 places to the left (the value is the mantissa times 2^8), and 8 is 1000 in binary. So am I correct to assume that my floating-point notation should be 1 1000 11001110011?
Represent the decimal number 16.1875 × 2^-134 in single-precision IEEE 754 format.
I'm completely stuck on this one. I don't know how to convert that number. When I enter it in Wolfram Alpha, the decimal value is way beyond the limit of the single-precision format. I do know that the sign bit is going to be 0, since the number is positive. I don't know what the mantissa is, though, nor how to find it. I also don't know how to find the exponent. Can someone guide me through this problem? Thanks.
For 1, you appear to be correct -- there's no way to represent that exponent unbiased in 3 bits. Of course, the problem says "3 digits" and doesn't define a base for the digits...
2 is relatively straightforward. Converting the value to binary gives you 10000.0011; normalizing gives 1.00000011 × 2^-130. Now -130 is too small for a single-precision exponent (the minimum is -126), so we have to denormalize (continue shifting the point until the exponent is -126), which gives us 0.000100000011 × 2^-126. That's then our mantissa (with the leading 0 dropped) and an exponent field of 0: 0|00000000|00010000001100000000000 (vertical bars separating the sign/exponent/mantissa fields), or 0x00081800.
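A quick sanity check in Python (16.1875 × 2^-134 is exactly representable as a double, so nothing is lost before the rounding to single precision):

import struct

value = 16.1875 * 2.0**-134
bits, = struct.unpack(">I", struct.pack(">f", value))  # round to single, grab the raw bits
print(f"0x{bits:08X}")                                 # 0x00081800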
How do I represent integers, for example 23647, in two bytes, where one byte contains the last two digits (47) and the other contains the rest of the digits (236)?
There are several ways to do this.
One way is to use Binary-Coded Decimal (BCD). This encodes the decimal digits, rather than the number as a whole, into binary. The packed form puts two decimal digits into a byte. However, your example value 23647 has five decimal digits and will not fit into two bytes in packed BCD; this method fits values only up to 9999.
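For illustration, a packed-BCD sketch in Python (only for values up to 9999, as noted):

def to_packed_bcd(n):
    # Pack a value 0..9999 into two bytes, one decimal digit per nibble.
    assert 0 <= n <= 9999
    d3, d2, d1, d0 = n // 1000, n // 100 % 10, n // 10 % 10, n % 10
    return bytes([(d3 << 4) | d2, (d1 << 4) | d0])

print(to_packed_bcd(1234).hex())   # '1234' -- each nibble is one decimal digit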
Another way is to put each of your two parts in binary and place each part into a byte. You can do integer division by 100 to get the upper part, so in Python you could use
upperbyte = 23647 // 100
Then the lower part can be obtained with the modulus operation:
lowerbyte = 23647 % 100
Python will directly convert the results into binary and store them that way. You can do all this in one step in Python and many other languages:
upperbyte, lowerbyte = divmod(23647, 100)
You are guaranteed that the lowerbyte value fits, but if the given value is too large, the upperbyte value may not actually fit into a byte. All this assumes that the value is positive, since negative values would complicate things.
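Putting the pieces together (a sketch; as noted above, values over 25599 would overflow the upper byte):

value = 23647
upperbyte, lowerbyte = divmod(value, 100)   # 236, 47
packed = bytes([upperbyte, lowerbyte])      # the two-byte representation
restored = packed[0] * 100 + packed[1]      # back to 23647
print(upperbyte, lowerbyte, restored)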
(The following answer was for a previous version of the question, which asked to fit a floating-point number like 36.47 into two bytes: one byte for the integer part and another for the fractional part.)
One way to do that is to "shift" the number so you consider those two bytes to be a single integer.
Take your value (36.47), multiply it by 256 (the number of values that fit into one byte), round it to the nearest integer, convert that to binary. The bottom 8 bits of that value are the "decimal numbers" and the next 8 bits are the "integer value." If there are any other bits still remaining, your number was too large and there is an overflow condition.
This assumes you want to handle only non-negative values. Handling negatives complicates things somewhat. The final result is only an approximation to your starting value, but that is the best you can do.
Doing those calculations on 36.47 gives the binary integer
10010001111000
So the "decimal byte" is 01111000 and the "integer byte" is 100100 or 00100100 when filled out to 8 bits. This represents the float number 36.46875 exactly and your desired value 36.47 approximately.
I'm implementing a math equation in Verilog, in a combinational scheme (assign = ...). So far, the synthesis tool (Quartus II) has been able to add, subtract and multiply 32-bit unsigned numbers easily, using the operators +, - and * respectively.
However, one of the final steps of the equation is to divide two 64-bit unsigned fixed-point variables. The reason for such a large 64-bit width is that I'm devoting 16 bits to the integer part and 48 bits to the fractional part (the computer does everything in binary and doesn't care about fractions, but I can split the number into integer and fraction at the end).
The problem is that the operator / is useless here, since it auto-invokes an "LPM_divide" library whose output gives me only the integer quotient, disregarding the fraction, and places it in the wrong position (the least significant bits).
For example:
b1000111010000001_000000000000000000000000000000000000000000000000 / b1000111010000001_000000000000000000000000000000000000000000000000
should be 1, it gives me
b0000000000000000_000000000000000000000000000000000000000000000001
So, how can I implement this division in synthesizable Verilog? What methods or algorithms should I follow? I'd like it to be fast, maybe fully combinational.
I'd also like to keep the 16-bit-integer / 48-bit-fraction view from the user's point of view. Thanks in advance.
First assume you multiply two fixed-point numbers.
Let's call them X and Y, the first with Xf fractional bits and the second with Yf fractional bits, respectively.
If you multiply those numbers as integers, the least significant Xf+Yf bits of the integer result can be treated as the fractional bits of the resulting fixed-point number (you still multiply them as plain integers).
Similarly, if you divide a number with Sf fractional bits by a number with Df fractional bits, the resulting integer can be treated as a fixed-point number with Sf-Df fractional bits -- hence the integer result of 1 in your example.
Thus, if you need 48 fractional bits from your division of one 16.48 number by another, append 48 more zeroed fractional bits to the dividend, then divide the resulting 64+48 = 112-bit number by the 64-bit divisor, treating both as integers (and using LPM_divide). The least significant 48 bits of the result are then what you need: the resulting fixed-point number's 48 fractional bits.
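An integer-only Python model of that widening trick (just to illustrate the arithmetic; in Verilog the widening is a concatenation of 48 zero bits onto the dividend feeding LPM_divide):

FRAC = 48                           # fractional bits of the 16.48 format

def fixdiv(x, y):
    # x, y and the result are 16.48 fixed-point values stored as integers.
    return (x << FRAC) // y         # widen the dividend, then integer-divide

one = 1 << FRAC                     # 1.0 in 16.48
x = 0b1000111010000001 << FRAC      # the example operand, fraction bits all zero
print(fixdiv(x, x) == one)          # True: x / x == 1.0 in fixed point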
I want to convert hexadecimal values to voltages, as described below:
2 Byte Signed 2s Comp Binary Fraction with Binary Point to the right of the most significant bit. 1:512V scaling.
Example:
0x2A80 → 170.00 V
0xD580 → -170.00 V
But converting 0x2A80 gives me the decimal value 10880. How can I get 170.00 V from 0x2A80?
If 0x2A80 is 170.00, then that means you have 10 bits before the point and 6 bits after the point. Or in other words, you have 10880/64 == 170.
Your question seems to contain a few misconceptions:
The fact that 170.0 is a voltage is irrelevant. Numbers work the same no matter whether they are voltages, distances, or just numbers without a unit.
In most programming languages, you don't have "decimal" or "hexadecimal" values, you just have values. Decimal and hexadecimal only come in when you're dealing with text input and output (strings). 0x2A80 is 10880, and 0xD580, interpreted as a signed 16-bit value, is -10880.
If you happen to be programming in C:
unsigned short raw;                       /* %hx expects an unsigned short */
short fixedPointNumber;
float floatingPointNumber;

scanf("%hx", &raw);
fixedPointNumber = (short)raw;            /* reinterpret as a signed two's-complement value */
floatingPointNumber = fixedPointNumber / 64.0f;   /* 6 fraction bits, so divide by 2^6 */
printf("Converted number: %f\n", floatingPointNumber);