How does a decimal get converted into dotted decimal - math

Is the way to convert the decimal to hexadecimal and then to dotted decimal? And does it always have to be a 10-digit decimal?

Convert the decimal number to binary, split the bits into four octets, and convert each octet back to decimal. The number must be no larger than 223.255.255.255, and it is not necessarily just 10 decimal digits.
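For illustration, here is a minimal R sketch of that procedure (the function name to_dotted_decimal is mine); it peels off the four octets with integer division and modulo:

to_dotted_decimal <- function(n) {
  # Split a 32-bit value into four octets, most significant first.
  octets <- c(n %/% 256^3, (n %/% 256^2) %% 256, (n %/% 256) %% 256, n %% 256)
  paste(octets, collapse = ".")
}
to_dotted_decimal(3221225985)
#[1] "192.0.2.1"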

Related

Multiplication with hexadecimal

Can you tell me how I can best calculate C.04 (which is a hexadecimal number) * 16^2 (decimal) in my head? I first converted the hexadecimal number to decimal and then multiplied the result by 16^2, but there should be a faster way, because the 16 implies hexadecimal.
Thanks :)
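For what it's worth, multiplying by 16^2 in hexadecimal just shifts the hexadecimal point two places to the right, so C.04 * 16^2 = C04 with no base conversion at all. A quick R check of that arithmetic:

c04 <- 0xC + 0x04 / 16^2   # C.04 in hex is 12 + 4/256 = 12.015625
c04 * 16^2                 # shifting the hex point two places right...
#[1] 3076
0xC04                      # ...gives exactly C04
#[1] 3076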

Difference between two hexadecimal numbers

I have two hexadecimal numbers: 12C (300 in decimal) and 78 (120 in decimal). I want to calculate the difference between these two numbers without using their decimal equivalents and without using the minus sign (-). How can I do this?
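One standard pen-and-paper technique is radix-complement subtraction: complement every hex digit of the subtrahend to F, add 1 (giving the 16's complement), add that to the minuend, and discard the final carry. A small R sketch of the idea (the variable names are mine):

a <- 0x12C; b <- 0x078
mask <- 0xFFF                          # work three hex digits wide
fifteens <- bitwXor(b, mask)           # digit-wise complement to F: 078 -> F87
sixteens <- fifteens + 1               # 16's complement: F88
result <- bitwAnd(a + sixteens, mask)  # 12C + F88 = 10B4; drop the carry
sprintf("%X", result)
#[1] "B4"

B4 is 180 in decimal, the expected difference.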

Base32 alphabet that is an overlapping superset of hexadecimal? (math/CS)

This is maybe more of a math question, but I'm stumped on it:
Let's say I have an 8-digit hex string. That can represent values from 0 to 2^32-1. Now let's say I want to have an 8-digit string of another base like base32. Is it possible to construct an alphabet for base32 (or another base) that is a strict superset of hexadecimal so that any hex string below 2^32-1 will decode via base32 to the same value and only larger values >=2^32 start incorporating base32 characters outside the hex range?
In other words is it possible to "upgrade" from base 16 to a higher numbered base in a way that is backward compatible with hex identifiers?
You can assign numbers to 8-character strings however you like.
There are 2^32 8-character hex strings, to which you can certainly assign their hex values.
There are 2^40 8-character strings with characters in, say, 0123456789ABCDEFGHJKMNPQRSTUVWXY. 2^32 of them are hex strings, and the remaining 2^40 - 2^32 strings can be assigned any numbers you like.
You won't be able to assign them numbers via a "normal" positional, decimal-like system, however, because hex requires "10" to be 16, not 32. There are schemes that aren't that hard, though. For example, given a 40-bit number:
Convert the lower 32 bits to an 8-character hex string.
Assign one of the remaining 8 bits to each character, and for each bit that is 1, shift the corresponding character 16 places up the alphabet, changing its range from '0'-'F' to 'G'-'Y'.
Now you have a string for each 40-bit number, and the smaller ones have the same strings as their hex representations.
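For concreteness, here is that scheme sketched in R (the function name encode40 and the flag-to-character mapping are my own choices), using the 32-character alphabet quoted above:

alphabet <- strsplit("0123456789ABCDEFGHJKMNPQRSTUVWXY", "")[[1]]
encode40 <- function(n) {
  low  <- n %% 2^32    # 32 data bits, spelled as plain hex
  high <- n %/% 2^32   # 8 flag bits, one per character
  chars <- character(8)
  for (i in 0:7) {     # i = 0 is the most significant character
    digit <- (low %/% 16^(7 - i)) %% 16
    flag  <- (high %/% 2^(7 - i)) %% 2
    chars[i + 1] <- alphabet[digit + 16 * flag + 1]
  }
  paste(chars, collapse = "")
}
encode40(0xDEADBEEF)          # below 2^32: the plain hex spelling survives
#[1] "DEADBEEF"
encode40(2^32 + 0xDEADBEEF)   # a flag bit shifts one character into G-Y
#[1] "DEADBEEY"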
I am not sure if I understand you right; please correct me if I am wrong. Anyway:
A hex digit (base 16) is represented by 4 bits. Its range is 0000 … 1111, representing digits 0 … F.
An 8-digit hex string is thus represented by 32 bits, that can represent values from 0 to 2^32-1. Its range is 00000000 … FFFFFFFF.
Let's consider a base 17 system, called here a 17dec system.
A 17dec digit (base 17) needs 5 bits, since 4 bits can hold only 16 values. Its digits are 0 … G (using a standard Latin alphabet), occupying the 5-bit patterns 00000 … 10000.
An 8-digit 17dec string is thus represented by 40 bits and can represent values from 0 to 17^8-1. Its range is 00000000 … GGGGGGGG.
Both hex and 17dec can represent every value from 0 to 2^32-1, but the same value gets a different digit string (and a different bit pattern) in each system, because the digits carry different numbers of bits. It is thus not possible to have a positional number system with a higher base that is bit-wise compatible with a lower-base system.
Take, e.g., the value 10000 (binary, i.e. 16 in decimal).
The hex representation of 10000 is 10.
The 17dec representation of 10000 is G.
There is no way to make these compatible.
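The failure of a plain positional "upgrade" is easy to demonstrate in R: the same string decodes to different values as soon as the base changes.

strtoi("10", base = 16)   # "10" means sixteen in hex
#[1] 16
strtoi("10", base = 32)   # but thirty-two in base 32
#[1] 32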

When dealing with hexadecimal numbers how do I use bit offset and length to get a value?

I have the following number
0000C1FF61A40000
The offset or start is 36 or 0x23
The length of the number is 12 or 0xc
Can someone help me understand how to get the resulting value? I thought the offset meant what pair of hex numbers to start with and then length would be how many to grab. There definitely aren't 36 pairs, only 8. Not sure how I'd do a length of 12 with only 8.
Each hex digit represents four binary bits. Therefore your offset of 36 bits (which BTW is 0x24, not 0x23) is equivalent to 9 hex digits. So discard the rightmost 9 digits from your original number, leaving you with 0000C1F.
Then the length of the number you want is 12 bits, which is 3 hex digits. So discard all but the rightmost 3 digits, leaving you with C1F as the answer.
If the numbers of bits had not been nice multiples of 4 then you would have had to convert the original hex number into binary, then discard offset number of bits from the right, retain only the rightmost length bits from the result, and finally convert those length bits back into hex.
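The same procedure as a short R sketch (extract_bits is my own name for it). Because the value here is wider than 32 bits, the sketch uses integer division rather than R's 32-bit bitwShiftR/bitwAnd; doubles remain exact below 2^53:

extract_bits <- function(value, offset, length) {
  # Shift right by `offset` bits, then keep the lowest `length` bits.
  (value %/% 2^offset) %% 2^length
}
v <- 0x0000C1FF61A40000   # parsed as a double, still exact at this size
sprintf("%X", extract_bits(v, 36, 12))
#[1] "C1F"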

Unable to get desired precision of the output from division of two numbers in R

I am dividing two numbers in R. The numerator is a big integer (in the millions), divided by a number like 13.000001.
It seems to be taking 13.000001 as 13, and the output that comes out is limited to only 1 decimal place.
I require the output to have 2 decimal places, which is not happening.
I tried round, format and as.numeric, but it is fruitless:
round is not giving anything (round(division, 1));
format(nsmall = 2) gives 2 decimal places but converts the result into a character string;
as.numeric converts it back from character, but the 2 decimal places are replaced by 1 decimal place.
Is there any way that I can get 2 decimal places when I divide an integer by a number like 13.000001?
Be careful not to confuse output with internal precision:
x <- 13e7/13.000001
sprintf("%10.20f",x)
#[1] "9999999.23076928965747356415"
sprintf("%10.10f",x*13)
#[1] "129999990.0000007600"
sprintf("%10.10f",x*13.000001)
#[1] "129999999.9999999851"
The differences from the expected output are due to limited floating-point precision.
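A minimal follow-up sketch for the question above: the second decimal place is present internally, and it is only the default print width (options(digits = 7)) that hides it:

x <- 13e7/13.000001
round(x, 2)            # the value is rounded, but the default 7
#[1] 9999999           # significant digits hide the decimals when printed
sprintf("%.2f", x)     # format exactly two decimal places (as character)
#[1] "9999999.23"
options(digits = 12)   # widen the printed precision
round(x, 2)
#[1] 9999999.23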
