Binary coded decimal (BCD) to hexadecimal conversion

Can someone explain to me how to convert BCD to hexadecimal? For example, how can I convert 98 (BCD) to hexadecimal?
Thanks.

I don't quite understand your question, but I'm guessing that, e.g., someone gives you the number 98 encoded in BCD, which would be:
1001 1000
and you are supposed to get:
62H
What I would propose:
1) convert BCD-encoded value to decimal value (D)
2) convert D to hexadecimal value.
Depending on which programming language you choose, this task will be easier or harder.
EDIT: In Java it could be:
byte bcd = (byte)0x98; // BCD value: 1001 1000
// low nibble = ones digit; high nibble = tens digit
// (mask after widening to int, since byte is signed in Java)
int decimal = (bcd & 0xF) + (((int)bcd & 0xF0) >> 4) * 10;
System.out.println(Integer.toHexString(decimal)); // prints 62

BCD is a subset of hexadecimal, so there is no conversion necessary -- any given BCD value is identical to the corresponding hexadecimal value. For example, '98' in BCD is 10011000, which is the same as 98 in hexadecimal.
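This is easy to see in code; a minimal C sketch of the same point (printing a packed-BCD byte with hex formatting reproduces its decimal digits unchanged):

#include <stdio.h>

int main(void)
{
    unsigned char bcd = 0x98;  /* packed BCD for decimal 98 */
    printf("%02x\n", bcd);     /* hex formatting prints "98" as-is */
    return 0;
}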

For any BCD-encoded value (that will fit in an unsigned int):
Iterative:
unsigned int bcd2dec(unsigned int bcd)
{
    unsigned int dec = 0;
    unsigned int mult;
    /* consume one BCD digit (nibble) per pass, least significant first */
    for (mult = 1; bcd; bcd = bcd >> 4, mult *= 10)
    {
        dec += (bcd & 0x0f) * mult;
    }
    return dec;
}
Recursive:
unsigned int bcd2dec_r(unsigned int bcd)
{
    /* peel off the low nibble and recurse on the remaining digits */
    return bcd ? (bcd2dec_r(bcd >> 4) * 10) + (bcd & 0x0f) : 0;
}
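For example (assuming both functions above are in scope), either variant maps the packed-BCD value 0x98 to the integer 98:

printf("%u\n", bcd2dec(0x98));    /* prints 98 */
printf("%u\n", bcd2dec_r(0x98));  /* prints 98 */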

I'd create a table of 256 entries, mapping every BCD byte to its binary equivalent; you can then use your programming language's hex printing.
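A minimal sketch of how such a table could be built in C (my construction, not the answerer's; invalid BCD bytes are simply left as 0 here):

#include <stdio.h>

static unsigned char bcd_to_bin[256];

static void init_table(void)
{
    /* enumerate the 100 valid packed-BCD bytes 0x00..0x99 */
    for (int hi = 0; hi < 10; hi++)
        for (int lo = 0; lo < 10; lo++)
            bcd_to_bin[(hi << 4) | lo] = (unsigned char)(hi * 10 + lo);
}

int main(void)
{
    init_table();
    printf("%x\n", bcd_to_bin[0x98]); /* BCD 98 -> decimal 98 -> prints 62 */
    return 0;
}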

You can go back and forth between hex, decimal, and BCD. If you know how binary works, then you should easily be able to use BCD:
Hex  Dec  BCD
 0     0  0000
 1     1  0001
 2     2  0010
 3     3  0011
 4     4  0100
 5     5  0101
 6     6  0110
 7     7  0111
 8     8  1000
 9     9  1001
 A    10  0001 0000  <-- notice that each BCD digit looks like hex, except it can only go up to 9
 B    11  0001 0001
 C    12  0001 0010
 D    13  0001 0011
 E    14  0001 0100
 F    15  0001 0101
Once you've got this part down, you should be able to use division by 10 and %10 to generate the BCD for any value (see the sketch below). Since each BCD digit only uses 10 combinations instead of all 16, some bit patterns go unused.
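The divide-by-10/%10 direction could look like this in C; a sketch in the style of the bcd2dec functions above:

unsigned int dec2bcd(unsigned int dec)
{
    unsigned int bcd = 0;
    unsigned int shift = 0;
    while (dec) {
        bcd |= (dec % 10) << shift; /* pack the lowest decimal digit */
        dec /= 10;                  /* drop that digit */
        shift += 4;                 /* move to the next nibble */
    }
    return bcd;
}
/* dec2bcd(98) returns 0x98 */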

Related

In Unix what does the ^ do when placed in a math expression

I have been searching for the answer to this and was unable to find an exact answer; help will be much appreciated.
echo $[ 2 ^ 2 ]
returns value 0
echo $[ 2 ^ 3 ]
returns 1
echo $[ 2 ^ 4 ]
returns 6
My question is: what math operation is taking place when using the ^ in this context?
I expected it to be a power function. I would really appreciate any clarification; thanks in advance.
It's a bitwise XOR operation.
It compares the bits of the two numbers, and if, for a given position, exactly one of the two bits is 1, the resulting bit will be set to 1. In all other cases, the resulting bit will be 0.
So, for your examples:
  2   010
^ 2   010
---------
  0   000

  2   010
^ 3   011
---------
  1   001

  2   010
^ 4   100
---------
  6   110
I would say your commands are doing a bitwise XOR of the numbers.
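The ^ operator behaves the same way in C, where it is likewise bitwise XOR rather than exponentiation; a quick check (a sketch):

#include <stdio.h>
#include <math.h>   /* link with -lm on some systems */

int main(void)
{
    printf("%d %d %d\n", 2 ^ 2, 2 ^ 3, 2 ^ 4); /* prints 0 1 6 */
    printf("%.0f\n", pow(2, 4));               /* a real power: prints 16 */
    return 0;
}

In bash arithmetic, exponentiation is written **, e.g. echo $(( 2 ** 4 )) prints 16.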

how to convert 1012 (decimal) to 100C (hexadecimal)

When I try to convert the decimal value 1012 to hexadecimal, I get the value 3F4. However, in class the teacher got the value 100C.
Could anyone tell me how that is possible?
Well, 12 in hexadecimal is C...
So 1 = 1, 0 = 0, 12 = C:
1 0 12 = 1 0 0xC
That's the only way I can see it being possible.
Well, 100C, if I am not wrong, is not 1012. 100C is:
1 * (16^3) + 12 * (16^0) = 4096 + 12 = 4108
Are you sure that your teacher gave that result?
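For reference, converting 1012 to hexadecimal by repeated division by 16 confirms 3F4:

1012 / 16 = 63 remainder  4
  63 / 16 =  3 remainder 15 (F)
   3 / 16 =  0 remainder  3

Reading the remainders from last to first gives 3F4.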

What is the correct name of this error correction method (it is similar to Hamming Code)

What is the correct name of this error correction method?
It is quite similar to Hamming code, but much simpler. I also cannot find it in the literature any more. The only internet sources I can still find that describe the method are this one:
http://www.mathcs.emory.edu/~cheung/Courses/455/Syllabus/2-physical/errors-Hamming.html
and the German-language Wikipedia:
http://de.wikipedia.org/w/index.php?title=Fehlerkorrekturverfahren
In the Wikipedia article, the method is called the Hamming-ECC method. But I'm not 100% sure this is correct.
Here is an example which shows the way the method works.
Payload: 10011010
Step 1: Determine the parity bit positions. Bits whose positions are powers of 2 (1, 2, 4, 8, 16, etc.) are parity bits:
Position:                 1  2  3  4  5  6  7  8  9 10 11 12
Data to be transmitted:   ?  ?  1  ?  0  0  1  ?  1  0  1  0
Step 2: Calculate the parity bit values. Each bit position in the transmission is assigned a position number; in this example the position numbers are 4-bit values, because we have 4 parity bits. Calculate the XOR of the position numbers (in 4-bit binary) of all positions whose transmitted data bit is 1:
    0011   Position 3
    0111   Position 7
    1001   Position 9
XOR 1011   Position 11
--------------
    0110   = parity bit values
Step 3: Insert parity bit values into the transmission:
Position:                 1  2  3  4  5  6  7  8  9 10 11 12
Data to be transmitted:   0  1  1  1  0  0  1  0  1  0  1  0
It is quite simple to verify whether a received message was transmitted correctly, and single-bit errors can be corrected. Here is an example. The receiver recalculates the parity bits as the XOR of the position numbers of all received 1 data bits, then XORs that with the received parity bits. If the result is 0, the transmission is error-free. Otherwise the result contains the position of the bit with the wrong value.
Received message: 0001101100101101
Position:        1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16
Received data:   0  0  0  1  1  0  1  1  0  0  1  0  1  1  0  1
Parity bits:     X  X     X           X                       X
    00101   Position 5
    00111   Position 7
    01011   Position 11
    01101   Position 13
XOR 01110   Position 14
---------------
    01010   parity bits calculated
XOR 00111   parity bits received
---------------
    01101   => bit 13 is defective!
I hope somebody here knows the correct name of the method.
Thanks for any help.
This looks like a complicated implementation of the Hamming(15,11) encoding & decoding algorithm.
Interleaving the parity bits with the information bits does not change the behaviour (or performance) of the code. Your description only uses 8 information bits, whereas Hamming(15,11) corrects all single-bit errors even with 11 information bits being transmitted.
Your description does not explain how the transmitted 12-bit message gets extended to a 16-bit message on the receive side.
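For what it's worth, the XOR-of-positions check from the question is easy to express in code. A minimal C sketch, using the 12-bit codeword from step 3 above and the standard convention that the syndrome is the XOR of the 1-based positions of all 1 bits:

#include <stdio.h>

/* XOR of the 1-based positions of all 1 bits. 0 means no single-bit
   error; any other value is the position of the flipped bit. */
unsigned syndrome(const int bits[], int n)
{
    unsigned s = 0;
    for (int pos = 1; pos <= n; pos++)
        if (bits[pos - 1])
            s ^= (unsigned)pos;
    return s;
}

int main(void)
{
    /* codeword from step 3: 0 1 1 1 0 0 1 0 1 0 1 0 */
    int rx[12] = {0,1,1,1,0,0,1,0,1,0,1,0};
    printf("%u\n", syndrome(rx, 12)); /* prints 0: no error */
    rx[4] ^= 1;                       /* flip the bit at position 5 */
    printf("%u\n", syndrome(rx, 12)); /* prints 5: the error position */
    return 0;
}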

Converting base 2 to base 16 [duplicate]

I have seen many people get confused about how to convert base 2 to base 16 directly. In this tutorial I will explain how to convert a binary number to a hexadecimal number in 5 easy steps.
1) When you have a number in base 2, all digits must be either 0 or 1. If you have a digit that isn't 0 or 1, your number is not in base 2 (binary) and this tutorial won't be of use to you.
2) Make sure the length of your number is divisible by 4 (4, 8, 12, 16, etc.). In this tutorial I will use 10001111011 in base 2 as the example number. Notice that there are only 11 digits. To make the length divisible by 4, add a 0 to the left side of the number and check again; keep adding 0s until the length is divisible by 4.
3) Part your base 2 number into groups of four. In our case, 010001111011 becomes 0100 0111 1011.
4) Now use the following table to convert each group of four digits to its matching value in base 16:
0000 = 0
0001 = 1
0010 = 2
0011 = 3
0100 = 4
0101 = 5
0110 = 6
0111 = 7
1000 = 8
1001 = 9
1010 = A
1011 = B
1100 = C
1101 = D
1110 = E
1111 = F
5) As a reminder, our number was 0100 0111 1011. Then 0100 = 4, 0111 = 7, 1011 = B. Therefore, 010001111011 in base 2 is 47B in base 16 (hexadecimal).
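The five steps translate directly into code; here is a minimal C sketch of the tutorial's method (left-pad to a multiple of 4, then map each group of four bits to one hex digit):

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *bin = "10001111011";         /* the tutorial's example */
    size_t len = strlen(bin);
    size_t padded = (len + 3) / 4 * 4;       /* step 2: round up to a multiple of 4 */
    char buf[64] = {0};
    memset(buf, '0', padded - len);          /* left-pad with 0s */
    strcpy(buf + (padded - len), bin);

    for (size_t i = 0; i < padded; i += 4) { /* steps 3-4: groups of four */
        int v = 0;
        for (size_t j = 0; j < 4; j++)
            v = v * 2 + (buf[i + j] - '0');
        printf("%X", v);                     /* the lookup table, via printf */
    }
    printf("\n");                            /* prints 47B */
    return 0;
}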

tricky binary subtraction

So I was practicing my binary subtraction. It's been a long while since my first exam, so I decided to create my own tricky binary subtraction problem, and I came up with this one:
1100
-1101
Of course the "borrowing trick" does not work for this problem at least I could not get it to work. Is my only choice to flip the bits of the second binary number(the bottom one) and then add a one basically doing 2's complement so 1101 becomes 0011. Then add the primary binary number(1100) with the 2's complement representation(0011) which means it would look like this:
  1100 (-4) assume 2's complement
+ 0011 ( 3) assume 2's complement
------
  1111 (-1) sum, assume 2's complement
I just need confirmation on this problem, since it's been a long time since I did binary subtraction.
  1100
- 1101

0 - 1 = 1 (borrow 1)
  1100
- 1101
     1   <- borrows
 =====
     1

0 - 0 - 1 = 1 (borrow 1)
  1100
- 1101
    11   <- borrows
 =====
    11

1 - 1 - 1 = 1 (borrow 1)
  1100
- 1101
   111   <- borrows
 =====
   111

1 - 1 - 1 = 1 (borrow 1)
  1100
- 1101
  1111   <- borrows
 =====
  1111
The result is 1111 with 1 still borrowed. In terms of unsigned arithmetic, this means that the result underflowed: you would need to borrow from the next more significant digit. (In terms of signed arithmetic there is no overflow, because the borrow into the sign bit matches the borrow out of it, and the calculation corresponds to -4 - (-3) = -1.)
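To confirm the arithmetic, a small C sketch of the same subtraction, masking to 4 bits to mimic a 4-bit register:

#include <stdio.h>

int main(void)
{
    unsigned a = 0xC;              /* 1100 = -4 in 4-bit two's complement */
    unsigned b = 0xD;              /* 1101 = -3 */
    unsigned diff = (a - b) & 0xF; /* keep only the low 4 bits */
    printf("%u\n", diff);          /* prints 15, i.e. 1111 = -1 */
    return 0;
}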
