How to find the remainder of a division - math

Q = A/B, where Q is a real number expressed as a pair of 8-bit fields:
most significant 8 bits for the integer part,
least significant 8 bits for the fractional part.
The number is unsigned.
For example:
0010 1101 . 0101 0000 (= 45 + 80/256 = 45.3125)
Can you find the remainder of the division on paper if you know B? How?
I'll give an example, 2/172:
0000 0010 . 0000 0000 / 1010 1100 . 0000 0000 = 0000 0000 . 0001 0010
Multiplying the result back by B:
0000 0000 . 0001 0010 * 1010 1100 . 0000 0000 = 0000 0000 . 1100 0001
(this should be 2, or at least something greater than 1.5)

There are two classic algorithms: restoring and non-restoring division. Both are described very well in "Division Algorithms and Hardware Implementations" by Sherif Galal and Dung Pham, and there are write-ups about implementing them in VHDL as well.
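To make that concrete with the Q8.8 encoding above: the quotient's raw value is floor((a << 8) / b), where a and b are the 16-bit raw values, and whatever that integer division leaves over is exactly the remainder the question asks about (scaled, it is R = A - Q*B). Below is a minimal C sketch of the restoring algorithm; the function name and the Q8.8 pre-shift are my own illustration, not taken from the paper.

#include <stdint.h>
#include <stdio.h>

/* A minimal sketch of the restoring algorithm: shift one dividend bit
   at a time into a partial remainder, subtract the divisor when
   possible (quotient bit 1), otherwise leave it restored (bit 0).
   Assumes divisor != 0. */
static uint32_t restoring_div(uint32_t dividend, uint32_t divisor,
                              uint32_t *remainder)
{
    uint32_t rem = 0, quo = 0;
    for (int i = 31; i >= 0; i--) {
        rem = (rem << 1) | ((dividend >> i) & 1u);  /* bring down next bit */
        if (rem >= divisor) {
            rem -= divisor;                         /* trial subtraction fits */
            quo = (quo << 1) | 1u;
        } else {
            quo <<= 1;                              /* would go negative: restore */
        }
    }
    *remainder = rem;                               /* = dividend - quo * divisor */
    return quo;
}

int main(void)
{
    uint32_t a = 0x0200;   /* 2.0   in Q8.8 (raw = value * 256) */
    uint32_t b = 0xAC00;   /* 172.0 in Q8.8 */
    uint32_t r;
    uint32_t q = restoring_div(a << 8, b, &r);      /* pre-shift keeps Q in Q8.8 */
    printf("Q = 0x%04X (%f), leftover = %u\n", (unsigned)q, q / 256.0, (unsigned)r);
    return 0;
}

For 2/172 this prints Q = 0x0002 (about 0.0078) and leftover 43008, i.e. R = 43008/65536 = 0.65625, which satisfies Q*B + R = 0.0078125 * 172 + 0.65625 = 2 exactly. (Note the quotient differs from the 0000 0000 . 0001 0010 in the question, so the original division result itself looks off.)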

Calculating TCP Header Length?

Can anyone guide me on the following?
I'm trying to figure out the answer to the first question in the blog post malwarejake[.]blogspot.com/2015/05/packet-analysis-practice-part-3.html .
Based on the sample packet below: what is the embedded protocol, the destination port, and the amount of data not including protocol headers?
0x0000: 4500 004c 1986 4000 4006 9cba c0a8 0165
0x0010: c0a8 01b6 0015 bf3c dad0 5039 2a8c 25be
0x0020: 8018 0072 06ec 0000 0101 080a 008a 70ac
The answers given for this question are as follows:
Embedded protocol: TCP
Total packet length: 76
IP Header length: 20
Protocol header length: 32
Data length: 24
Dest Port: 0xbf3c (48956)
I managed to get all the other answers except Protocol Header Length and Data Length.
Isn't the TCP header normally 20 bytes, with options extending it by up to 40 bytes? But how is 32 bytes derived from the above packet? I don't understand.
Thanks!
Here's the TCP header layout, directly from RFC 793:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Source Port | Destination Port |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Sequence Number |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Acknowledgment Number |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Data | |U|A|P|R|S|F| |
| Offset| Reserved |R|C|S|S|Y|I| Window |
| | |G|K|H|T|N|N| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Checksum | Urgent Pointer |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Options | Padding |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| data |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
The values 0015 and bf3c are the ports.
The values dad0 5039 and 2a8c 25be are the sequence/ack numbers.
Now take a look at the next 4 bits: the top half of the byte at offset 0x0020 in the dump. That byte's value is 0x80, which means the topmost 4 bits are 1000. They correspond to the "data offset" field:
Data Offset: 4 bits
The number of 32 bit words in the TCP Header. This indicates where
the data begins. The TCP header (even one including options) is an
integral number of 32 bits long.
So 1000 means that the header consists of 8 x 32-bit words, that is 8 x 4 bytes = 32 bytes. The data length then follows from the IP total length: 76 - 20 (IP header) - 32 (TCP header) = 24 bytes.
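To check the arithmetic mechanically, here is a small C sketch with the relevant bytes hard-coded from the dump above (a verification of these numbers, not a general packet parser):

#include <stdio.h>

int main(void)
{
    /* Values read off the dump above. */
    unsigned total_length = 0x004c;  /* IP "total length" field: 76 bytes */
    unsigned ihl = 0x45 & 0x0f;      /* low nibble of the first byte: 5   */
    unsigned tcp_byte12 = 0x80;      /* byte at offset 0x0020 in the dump */

    unsigned ip_header = ihl * 4;                  /* 5 * 4 = 20 bytes */
    unsigned data_offset = tcp_byte12 >> 4;        /* top 4 bits: 8    */
    unsigned tcp_header = data_offset * 4;         /* 8 * 4 = 32 bytes */
    unsigned payload = total_length - ip_header - tcp_header;

    printf("IP %u, TCP %u, data %u\n", ip_header, tcp_header, payload);
    return 0;                                      /* IP 20, TCP 32, data 24 */
}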

15 Hex in Binary form, explanation required?

I really can't understand how 15 hex converted to binary form gives me 10101 bin.
It should be easy, but I can't get it 😰
0x15 == 1*16 + 5*1 == 21
21 == 1*16 + 0*8 + 1*4 + 0*2 + 1*1 == 10101 (binary)
What's not to love?
Well, it's simple. In decimal, the number 15 means 10 + 5, because the digit 1 stands for 1 * 10 and the digit 5 stands for 5 * 1.
In hex, the number 15 means 1 * 16 + 5 * 1, so it's 21. And 21 in binary is 10101.
How to convert hex to binary
Convert each hex digit to 4 binary digits according to this table:
Hex Binary
0 0000
1 0001
2 0010
3 0011
4 0100
5 0101
6 0110
7 0111
8 1000
9 1001
A 1010
B 1011
C 1100
D 1101
E 1110
F 1111
Example #1
Convert (4E)16 to binary:
(4)16 = (0100)2
(E)16 = (1110)2
So
(4E)16 = (01001110)2
Example #2
Convert (4A01)16 to binary:
(4)16 = (0100)2
(A)16 = (1010)2
(0)16 = (0000)2
(1)16 = (0001)2
So
(4A01)16 = (0100101000000001)2
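To verify such conversions mechanically, here is a small C sketch of exactly this digit-by-digit method (the helper name hex_to_bin is my own):

#include <stdio.h>
#include <ctype.h>

/* Print a hex string as binary, one nibble (4 bits) per hex digit. */
static void hex_to_bin(const char *hex)
{
    for (const char *p = hex; *p; p++) {
        int v = isdigit((unsigned char)*p) ? *p - '0'
                                           : toupper((unsigned char)*p) - 'A' + 10;
        for (int bit = 3; bit >= 0; bit--)      /* most significant bit first */
            putchar((v >> bit) & 1 ? '1' : '0');
        putchar(' ');
    }
    putchar('\n');
}

int main(void)
{
    hex_to_bin("4E");    /* 0100 1110 */
    hex_to_bin("4A01");  /* 0100 1010 0000 0001 */
    return 0;
}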

How do I convert a hexadecimal number in IEEE 754 double precision (64bit) format to a decimal number?

My task is to convert a hexadecimal number in double-precision (IEEE 754) format to a decimal number on paper.
I've converted a hexadecimal number: 0x40790A0000000000 to a binary 64bit format so far and now I have:
0 10000000111 1001000010100000000000000000000000000000000000000000
For the next step I am not totally sure what to do. I have to convert it into a decimal number and I've tried out several ways, but never got the right result.
Hope you can help me and thank you.
Going from https://en.wikipedia.org/wiki/Double-precision_floating-point_format,
4 0 7 9 0 A 0 0 0 0 0 0 0 0 0 0
0100 0000 0111 1001 0000 1010 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
0 - Sign bit (this is a positive number)
100 0000 0111 - Exponent
1001 0000 1010 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 - Fraction
The exponent field's value is 1031, so the actual exponent after subtracting the bias is 1031 - 1023 = 8. Because the field is nonzero (a normal number), the significand is given by the expression 1 + sum from i = 1 to 52 of bit_(52-i) * 2^(-i).
The significand's value is therefore 1 + 1/2 + 0/4 + 0/8 + 1/16 + ... ~= 1.565
From there you should be able to figure the rest out.
(Not solving this completely because this looks like homework.)
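For checking a hand computation afterwards, here is a small C sketch that decodes the three fields and rebuilds the value just as described above (normal numbers only; subnormals, infinities, and NaNs are deliberately not handled):

#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* Decode sign, exponent, and fraction fields of an IEEE 754 double
   and rebuild the value, mirroring the paper method above. */
static double decode(uint64_t bits)
{
    int sign          = (int)(bits >> 63);            /* 1 bit           */
    int exponent      = (int)((bits >> 52) & 0x7FF);  /* 11 bits, biased */
    uint64_t fraction = bits & 0xFFFFFFFFFFFFFULL;    /* low 52 bits     */

    double mantissa = 1.0 + (double)fraction / 4503599627370496.0; /* / 2^52 */
    double value = ldexp(mantissa, exponent - 1023);  /* * 2^(e - 1023)  */
    return sign ? -value : value;
}

int main(void)
{
    printf("%f\n", decode(0x40790A0000000000ULL));    /* compare with your hand result */
    return 0;
}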

Decimal <--> Two's Complement <--> Hex conversion

I'm wondering, if I get a question like:
"Convert a decimal number to two's complement, then give your answer in hex",
is the path below how one would do it?
Decimal number: -23
23 = 00010111 = in hex 17 = -17
-23 = 11101001 = in hex E9
So to convert it to Hex would the answer be -17 or E9?
Thanks
The -17 has no relevance here: according to your task, you have to return the two's complement as hex, and that is E9.
Your conversion path in general looks correct to me.
DEC to BIN without the sign:
23 → 0001 0111
Invert each bit of the BIN string (one's complement):
0001 0111 → 1110 1000
Add 1 to the inverted result:
1110 1000 + 0000 0001 → 1110 1001
Verify the correct two's complement calculation:
-128 + 64 + 32 + 8 + 1 = -23 → correct
Convert final BIN string to HEX:
1110 1001 → 0xE9
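The same path as a small C sketch; the cast in the second half shows that this bit pattern is exactly what the machine already stores for -23:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* The manual path from the answer: invert the bits of +23, add 1. */
    uint8_t manual = (uint8_t)(~23u) + 1u;
    printf("manual two's complement: 0x%02X\n", (unsigned)manual);   /* 0xE9 */

    /* Cross-check: casting -23 to an unsigned byte exposes the
       two's-complement bit pattern the machine already stores. */
    uint8_t raw = (uint8_t)(int8_t)-23;
    printf("stored representation:   0x%02X\n", (unsigned)raw);      /* 0xE9 */
    return 0;
}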

Why does 0xff & abcd truncate half of the abcd?

I can understand the binary operation 11 & x: for example, if x = 1011, the operation masks off the leading 10 and leaves 11. However, when it comes to hexadecimal I am very confused. What is the math and reasoning behind the similar effect of 0xff & x? I can only understand this if I convert them all to binary.
0xFF & 0xABCD = 0xCD ... why?
Because:
A = 1010
B = 1011
C = 1100
D = 1101
F = 1111
So 0xFF = 0x00FF = 0000 0000 1111 1111
and 0xABCD = 1010 1011 1100 1101
-------------------
0xFF & 0xABCD = 0000 0000 1100 1101 = 0x00CD
As with most things, once you work with hex for a while, you'll learn some tricks for remembering the values.
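A short C check of the same thing, together with the usual companion trick of shifting first to pick out a different byte:

#include <stdio.h>

int main(void)
{
    unsigned x = 0xABCD;

    /* 0xFF is eight 1-bits, so ANDing keeps only the low byte. */
    printf("0x%X\n", x & 0xFF);          /* 0xCD */

    /* The same idea extracts the high byte: shift down, then mask. */
    printf("0x%X\n", (x >> 8) & 0xFF);   /* 0xAB */
    return 0;
}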
