How to convert hex code to decimal code or vice versa

How can I know what this bunch of hex codes means?
02 00 A0 E3 1E FF 2F E1
Is there any converter of these codes to decimal code like 1, 2, 3, etc., or vice versa, from decimal code to this type of hex code?
Thanks

This is my first response on Stack Overflow. So here goes...
What hex code (a.k.a. hexadecimal) represents depends purely on its context: what it means to the program or machine. It could be a string, machine code (assembly language), flags, pointers to memory, data, part of an image, or anything else. It also depends on the processor on which this code is located.
Each 2-digit hex code is a byte and represents a decimal number (0-255, or 00-FF in hex); half of a byte, or a 1-digit hex code, is called a nibble.
Converting hex code to decimal is trivial; converting from decimal to hex is not quite as trivial.
There are many calculators that have this functionality built in.
0-9 => 0 – 9, A=10, B=11, C=12, D=13, E=14, F=15.
Now, suppose you want to convert a 2-digit number like 12 hex (i.e. 0x12 or 12h). Here is the formula.
(16 x 1) + (1 x 2) = 18 (decimal)
A four-digit hexadecimal number 4A3E =>
(4096 x 4) + (256 x 10) + (16 x 3) + (1 x 14) = 19006 (decimal)
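If you would rather let the standard library do this arithmetic, here is a minimal C sketch (4A3E is the example value from above); strtoul with base 16 applies exactly the positional formula shown:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *hex = "4A3E";
    unsigned long value = strtoul(hex, NULL, 16);    /* hex string -> number */
    printf("%s hex = %lu decimal\n", hex, value);    /* prints 19006 */
    printf("%lu decimal = %lX hex\n", value, value); /* and back again */
    return 0;
}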
An integer in C# is 4 bytes, so your example hex code could also represent 2 integers in C#. Or it could simply be one number of the C# type called long, which is 8 bytes and can represent a number between:
0 to 18,446,744,073,709,551,615 (ulong, unsigned) or
-9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 (long, signed)
Also worth noting: hex codes can also represent characters, in an encoding called ASCII (pronounced a-skee). This mapping is a convention and can differ between systems. 00 is not mapped to a printable character, so it typically marks the end of a string.

Hex codes like that could represent a single binary number. You could paste "0200A0E31EFF2FE1" into a converter like this to find out that the decimal representation of that number is "144292085413916641", for example.
But from the way your hex codes are grouped, it appears that you're looking at binary data rather than a single integer represented in hexadecimal. When hex codes are grouped in pairs, each group of two characters represents one byte. https://en.wikipedia.org/wiki/Hexadecimal#Written_representation
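To make the single-number reading concrete, here is a small C sketch that folds the eight bytes from the question into one 64-bit integer (reading them in the order shown, i.e. big-endian, which is an assumption; the actual data could equally be little-endian):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* the eight bytes from the question, in the order shown */
    uint8_t bytes[8] = { 0x02, 0x00, 0xA0, 0xE3, 0x1E, 0xFF, 0x2F, 0xE1 };
    uint64_t value = 0;
    for (int i = 0; i < 8; i++)
        value = (value << 8) | bytes[i];  /* big-endian fold */
    /* prints 144292085413916641, matching the converter result above */
    printf("as one 64-bit number: %llu\n", (unsigned long long)value);
    return 0;
}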

Related

Base32 alphabet that is an overlapping superset of hexadecimal? (math/CS)

This is maybe more of a math question, but I'm stumped on it:
Let's say I have an 8-digit hex string. That can represent values from 0 to 2^32-1. Now let's say I want to have an 8-digit string of another base like base32. Is it possible to construct an alphabet for base32 (or another base) that is a strict superset of hexadecimal so that any hex string below 2^32-1 will decode via base32 to the same value and only larger values >=2^32 start incorporating base32 characters outside the hex range?
In other words is it possible to "upgrade" from base 16 to a higher numbered base in a way that is backward compatible with hex identifiers?
You can assign numbers to 8-character strings however you like.
There are 2^32 8-character hex strings, to which you can certainly assign their hex values.
There are 2^40 8-character strings with characters in, say, 0123456789ABCDEFGHJKMNPQRSTUVWXY. 2^32 of them are hex strings, and the remaining 2^40 - 2^32 strings can be assigned any numbers you like.
You won't be able to assign them numbers via a "normal" positional system, however, because hex requires "10" to be 16, not 32. There are ways that aren't that hard, though. For example, given a 40-bit number:
Convert the lower 32-bits to 8 character HEX.
Assign one of the remaining bits to each character, and for each one bit, add 'G' to the corresponding character, changing its range from '0-F' to 'G-Y'
Now you have a string for each 40-bit number, and the smaller ones have the same strings as their hex representations.
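Here is a minimal C sketch of that scheme (the alphabet is the one quoted above; the function name and the choice of which extra bit maps to which character are illustrative assumptions):

#include <stdio.h>
#include <stdint.h>

/* 32-character alphabet whose first 16 symbols are the hex digits */
static const char ALPHABET[] = "0123456789ABCDEFGHJKMNPQRSTUVWXY";

void encode40(uint64_t n, char out[9])
{
    for (int i = 0; i < 8; i++) {
        /* hex digit of the low 32 bits, most significant first */
        unsigned digit = (unsigned)(n >> (4 * (7 - i))) & 0xF;
        /* one of the 8 extra bits (bits 32..39) lifts '0'-'F' to 'G'-'Y' */
        unsigned high = (unsigned)(n >> (32 + (7 - i))) & 1;
        out[i] = ALPHABET[digit + 16 * high];
    }
    out[8] = '\0';
}

int main(void)
{
    char buf[9];
    encode40(0x4A3EULL, buf);
    printf("%s\n", buf); /* 00004A3E: same as plain hex */
    encode40((1ULL << 39) | 0x4A3EULL, buf);
    printf("%s\n", buf); /* G0004A3E: a 40-bit value */
    return 0;
}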
I am not sure if I understand you right; please correct me if I am wrong. Anyway:
A hex digit (base 16) is represented by 4 bits. Its range is 0000 … 1111, representing digits 0 … F.
An 8-digit hex string is thus represented by 32 bits, which can represent values from 0 to 2^32-1. Its range is 00000000 … FFFFFFFF.
Let's consider a base-17 system, called here a 17dec system.
A 17dec digit (base 17) needs 5 bits. Its used bit patterns are 00000 … 10000, representing digits 0 … G; the remaining 5-bit patterns are unused.
An 8-digit 17dec string is thus represented by 40 bits. Its range is 00000000 … GGGGGGGG, representing values from 0 to 17^8-1.
Thus, hex and 17dec can both represent the values from 0 to 2^32-1, but with different bit patterns. It is therefore not possible to have a number system with a higher base that is bit-wise compatible with a lower-base system.
Take, e.g., the binary value 10000 (decimal 16).
The hex representation of this value is 10.
The 17dec representation of this value is G.
There is no way to make this compatible.

When dealing with hexadecimal numbers, how do I use bit offset and length to get a value?

I have the following number
0000C1FF61A40000
The offset or start is 36 or 0x23
The length of the number is 12 or 0xc
Can someone help me understand how to get the resulting value? I thought the offset meant what pair of hex numbers to start with and then length would be how many to grab. There definitely aren't 36 pairs, only 8. Not sure how I'd do a length of 12 with only 8.
Each hex digit represents four binary bits. Therefore your offset of 36 bits (which BTW is 0x24, not 0x23) is equivalent to 9 hex digits. So discard the rightmost 9 digits from your original number, leaving you with 0000C1F.
Then the length of the number you want is 12 bits, which is 3 hex digits. So discard all but the rightmost 3 digits, leaving you with C1F as the answer.
If the numbers of bits had not been nice multiples of 4, then you would have had to convert the original hex number into binary, discard offset bits from the right, retain only the rightmost length bits of the result, and finally convert those length bits back into hex.
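In code, the same extraction is a shift followed by a mask; a minimal C sketch (the function name is just illustrative):

#include <stdio.h>
#include <stdint.h>

/* take `length` bits starting `offset` bits from the least significant end
   (valid for length < 64) */
uint64_t extract_bits(uint64_t value, unsigned offset, unsigned length)
{
    return (value >> offset) & ((1ULL << length) - 1);
}

int main(void)
{
    uint64_t v = 0x0000C1FF61A40000ULL;
    printf("%llX\n", (unsigned long long)extract_bits(v, 36, 12)); /* C1F */
    return 0;
}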

When does a hexadecimal number pivot to letters rather than numbers

Assume this number: 173250103518582539668252657343418508842. If I wanted to convert it to a hexadecimal number such that a 10 = F, 11 = E, etc., where are the breaks / how does that work?
I've done a bit of research online and I can't seem to find the answer. It's a really low-level question, I know.
Six characters in there's a 10; would that be flipped to an F, or would that get missed because whatever triggers the flip in the int -> string hexadecimal conversion happens another way?
Hexadecimal is an encoding used to express binary data in base 16, where the ascending digit sequence is 0-9a-f (upper- or lower-case a-f), one character per 4 bits (4 bits have 16 possible values). Thus there are 2 hex characters per byte.
binary bits (msb on left) and hexadecimal:
0000 0
0001 1
0010 2
0011 3
...
1001 9
1010 a
...
1111 f
To say "10 = F, 11 = E" is not hexadecimal.
To encode the decimal number 173250103518582539668252657343418508842, convert it to a big integer and then hexadecimal-encode the underlying bytes.
or
To encode the ASCII string "173250103518582539668252657343418508842" to hexadecimal, convert each character to the underlying ASCII binary code and then encode that into hexadecimal: "313733323530313033353138353832353339363638323532363537333433343138353038383432".
See Hexadecimal and ASCII.
Aside: on my first day as a programmer I had to know hex, binary and ASCII encoding; funny how things change.
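A small C sketch contrasting the two interpretations; since the 39-digit value in the question does not fit in 64 bits, a shorter number is used here for illustration:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* as a number: print the base-16 representation directly */
    uint64_t n = 173250103518582539ULL;
    printf("%llX\n", (unsigned long long)n); /* 267820D1D86570B */

    /* as ASCII text: hex-encode each character's code */
    const char *s = "173250103";
    for (const char *p = s; *p; p++)
        printf("%02X", (unsigned char)*p);   /* 313733323530313033 */
    printf("\n");
    return 0;
}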

Hexadecimal to decimal voltage Conversion

I want to convert hexadecimal values to voltages as described below:
2 Byte Signed 2s Comp Binary Fraction with Binary Point to the right of the most significant bit. 1:512V scaling.
Example:
0x2A80 → 170.00 V
0xD580 → -170.00 V
But converting 0x2A80 gives me the decimal value 10880. How can I get 170.00 V from 0x2A80?
If 0x2A80 is 170.00, then that means you have 10 bits before the point and 6 bits after the point. Or in other words, you have 10880/64 == 170.
Your question seems to contain a few misconceptions:
The fact that 170.0 is a voltage is irrelevant. Numbers work the same no matter whether they are voltages, distances, or just numbers without a unit.
In most programming languages, you don't have "decimal" or "hexadecimal" values, you just have values. Decimal and hexadecimal only come in when you're dealing with text input and output, i.e. strings. 0x2A80 is 10880, and 0xD580, read as a signed 16-bit value, is -10880.
If you happen to be programming in C:
#include <stdio.h>

int main(void) {
    short fixedPointNumber;  /* raw 16-bit two's-complement value */
    float floatingPointNumber;
    scanf("%hx", &fixedPointNumber);                /* e.g. 2A80 or D580 */
    floatingPointNumber = fixedPointNumber / 64.0f; /* 6 fraction bits */
    printf("Converted number: %f\n", floatingPointNumber);
    return 0;
}

Understanding Arduino Binary to Decimal Conversions

I was looking at some code today for integrating a real-time clock with an Arduino, and it had some binary-to-decimal (and vice versa) conversion code that I don't fully understand.
The code in question is below:
byte decToBcd(byte val)
{
  // tens digit goes into the upper nibble, ones digit into the lower nibble
  return ( (val / 10 * 16) + (val % 10) );
}

byte bcdToDec(byte val)
{
  // upper nibble holds the tens digit, lower nibble holds the ones digit
  return ( (val / 16 * 10) + (val % 16) );
}
ex: decToBcd(12);
I really fail to grasp how this works. I am not sure I understand the math, or whether some sort of assumptions are being taken advantage of.
Would someone mind explaining how exactly the math and data types above are supposed to work? If possible, touch on why the value 16 is used in the conversions instead of 8, when we are supposed to be working with a byte value.
For context, the full code can be found here: http://www.codingcolor.com/microcontrollers/an-arduino-lcd-clock-using-a-chronodot-rtc/
The key hint here is BCD - Binary-coded decimal - in the function name. In BCD each decimal digit is represented by four bits (half of a byte). As a result the maximum (decimal) number you can store using BCD notation is 99 - 9 in the upper nibble (half of the byte) and 9 in the lower nibble.
Let's take a look at number 12 as an example. Number 12 looks as follows in binary notation:
12 = %00001100
However in BCD it looks as follows:
12 = %00010010
because
0001 0010
1 2
Now if you look at the decToBcd function, val % 10 is responsible for calculating the value of the ones place (i.e. the last digit). Since this goes into the lower part of the byte, we don't need to do anything special here. val / 10 * 16 first calculates the value of the tens place, val / 10. However, since that value has to go into the upper half of the byte, it needs to be shifted up by four bits, hence the * 16. Another (in my opinion more readable) way of writing this function would be:
((val / 10) << 4) | (val % 10)
The bcdToDec function does the reverse conversion.
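As a quick round-trip check, the two functions can be compiled as plain C (the byte typedef below stands in for Arduino's byte type):

#include <stdio.h>

typedef unsigned char byte; /* Arduino's byte type */

byte decToBcd(byte val) { return (val / 10 * 16) + (val % 10); }
byte bcdToDec(byte val) { return (val / 16 * 10) + (val % 16); }

int main(void)
{
    byte bcd = decToBcd(12);
    printf("decToBcd(12) = 0x%02X\n", bcd);         /* 0x12 */
    printf("bcdToDec(0x12) = %d\n", bcdToDec(bcd)); /* 12 */
    return 0;
}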
RTCs usually store the year in 1 byte as 2 digits only, i.e. 2014 is stored as 14.
And some of them store it as an offset from the year 1970, so 2014 = 44.
So the maximum they can hold is 99 in both cases.
