How come
E9EF - FDBA
is equal to
FFFF FFFF FFFF EC35?
According to this calculator, the answer is
-13CB
which is what I got when I solved the question myself. I want to know how
FFFF FFFF FFFF EC35
can be the correct answer.
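If the tool that produced FFFF FFFF FFFF EC35 is working with 64-bit values (an assumption on my part), then both answers are the same number: the difference is negative, and a 64-bit register stores -13CB in two's-complement form. A quick Python sketch:

# Sketch: a negative difference stored in a 64-bit register is its
# two's complement, i.e. the value taken modulo 2**64.
diff = 0xE9EF - 0xFDBA
print(hex(diff))                        # -0x13cb
print(hex(diff & 0xFFFFFFFFFFFFFFFF))   # 0xffffffffffffec35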
I'm having trouble understanding the Deflate algorithm (RFC 1951).
TL;DR: How do I parse the Deflate compressed block 4be4 0200?
I created a file containing a letter and a newline (a\n) and ran gzip a.txt. The resulting file a.txt.gz:
1f8b 0808 fe8b eb55 0003 612e 7478 7400
4be4 0200
07a1 eadd 0200 0000
I understand that the first line is a header with additional information, and the last line is the CRC32 plus the size of the input (that part is the gzip wrapper, RFC 1952). These two give me no trouble.
But how do I interpret the compressed block itself (the middle line)?
Here's hexadecimal and binary representation of it:
4be4 0200
0100 1011
1110 0100
0000 0010
0000 0000
As far as I understood, somehow these bits:
Each block of compressed data begins with 3 header bits containing the following data:
first bit BFINAL
next 2 bits BTYPE
...actually ended up at the end of the first byte: 0100 1011. (I'll skip the question of why anyone would call something a "header" when it actually sits at the tail of something else.)
RFC contains something that as far as I understand is supposed to be an explanation to this:
Data elements are packed into bytes in order of
increasing bit number within the byte, i.e., starting
with the least-significant bit of the byte.
Data elements other than Huffman codes are packed
starting with the least-significant bit of the data
element.
Huffman codes are packed starting with the most-
significant bit of the code.
In other words, if one were to print out the compressed data as
a sequence of bytes, starting with the first byte at the
right margin and proceeding to the left, with the most-
significant bit of each byte on the left as usual, one would be
able to parse the result from right to left, with fixed-width
elements in the correct MSB-to-LSB order and Huffman codes in
bit-reversed order (i.e., with the first bit of the code in the
relative LSB position).
But sadly I don't understand that explanation.
Returning to my data. OK, so BFINAL is set, and BTYPE is what? 10 or 01?
How do I interpret the rest of the data in that compressed block?
First, let's look at the hexadecimal representation of the compressed data as a series of bytes (instead of a series of 16-bit big-endian values as in your question):
4b e4 02 00
Now let's convert those hexadecimal bytes to binary:
01001011 11100100 00000010 00000000
According to the RFC, the bits are packed "starting with the least-significant bit of the byte". The least-significant bit of a byte is its right-most bit. So the first bit of the first byte is this one:
01001011 11100100 00000010 00000000
       ^
       first bit
The second bit is this one:
01001011 11100100 00000010 00000000
      ^
      second bit
The third bit:
01001011 11100100 00000010 00000000
     ^
     third bit
And so on. Once you've gone through all the bits in the first byte, you start on the least-significant bit of the second byte. So the ninth bit is this one:
01001011 11100100 00000010 00000000
                ^
                ninth bit
And finally the last bit, the thirty-second bit, is this one:
01001011 11100100 00000010 00000000
                           ^
                           last bit
The BFINAL value is the first bit in the compressed data, and so is contained in the single bit marked "first bit" above. Its value is 1, which indicates that this is the last block in the compressed data.
The BTYPE value is stored in the next two bits of data: the bits marked "second bit" and "third bit" above. The only question is which of the two is the least-significant bit and which is the most-significant bit. According to the RFC, "Data elements other than Huffman codes are packed starting with the least-significant bit of the data element." That means the first of these two bits, the one marked "second bit", is the least-significant bit. So the value of BTYPE is 01 in binary, which indicates that the block is compressed using fixed Huffman codes.
And that's the easy part done. Decoding the rest of the compressed block is more difficult (and, with a more realistic example, much more difficult). Properly explaining how to do that would make this answer far too long (and your question too broad) for this site. I'll give you a hint though: the next three elements in the data are the Huffman codes 10010001 ('a'), 00111010 ('\n') and 0000000 (end of stream). The remaining 6 bits are unused, and aren't part of the compressed data.
Note that to understand how to decode deflate-compressed data, you're going to have to understand what Huffman codes are. The RFC you're following assumes that you do. You should also know how LZ77 compression works, though the document more or less explains what you need to know.
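To make the bit order concrete, here's a minimal Python sketch (my own illustration, not code from the RFC) that reads this block exactly as described above: fixed-width fields LSB-first, Huffman codes MSB-first.

data = bytes.fromhex("4be40200")

# Flatten the bytes into a bit list, least-significant bit of each byte first.
bits = [(byte >> i) & 1 for byte in data for i in range(8)]
pos = 0

def take(n):
    """Read n bits of a fixed-width field (packed LSB-first)."""
    global pos
    value = 0
    for i in range(n):
        value |= bits[pos] << i
        pos += 1
    return value

def take_code(n):
    """Read n bits of a Huffman code (packed MSB-first)."""
    global pos
    code = 0
    for _ in range(n):
        code = (code << 1) | bits[pos]
        pos += 1
    return code

print(take(1))                       # BFINAL = 1: last block
print(format(take(2), '02b'))        # BTYPE = 01: fixed Huffman codes
print(format(take_code(8), '08b'))   # 10010001 -> literal 'a'
print(format(take_code(8), '08b'))   # 00111010 -> literal '\n'
print(format(take_code(7), '07b'))   # 0000000 -> end-of-block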
As an example, take the hex number 0x04D2, equal to decimal 1234.
Can someone think of a process to transition 0x04D2 to 0x1234?
I am writing a program in MIPS and plan on taking each nibble of the new hex number and converting it to ASCII for printing, but I can't figure out how to make the transition to the new hex number. Once I have the new hex number, the ASCII conversion should be a piece of cake. Even though I intend to implement this in MIPS, I'm more interested in a universal bitwise process or algorithm.
Also, I do know that MIPS can print the decimal number by integer_print syscall'ing the register, but I would rather do it the hard way. Plus, I need the ASCII in the register for what I am doing.
So, starting with only 0x04D2, is it possible to make this transition to 0x1234?
Here are the conversions for hex, dec, bin:
0x04D2 = 1234 = 0000 0100 1101 0010
0x1234 = 4660 = 0001 0010 0011 0100
Thanks!
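One universal bitwise approach (not mentioned in the post, so treat this as a suggestion) is the "double dabble" / shift-and-add-3 algorithm: it converts binary to packed BCD using only shifts, masks, and small adds. A Python sketch of the idea:

def to_bcd(value):
    """Binary -> packed BCD via double dabble: 1234 -> 0x1234."""
    bcd = 0
    for i in range(value.bit_length() - 1, -1, -1):
        # Before each left shift, add 3 to every BCD nibble >= 5 so
        # that doubling carries correctly into the next decimal digit.
        shift = 0
        while bcd >> shift:
            if (bcd >> shift) & 0xF >= 5:
                bcd += 3 << shift
            shift += 4
        # Shift in the next binary bit, most significant first.
        bcd = (bcd << 1) | ((value >> i) & 1)
    return bcd

print(hex(to_bcd(0x04D2)))  # 0x1234

The same loop translates to MIPS as shifts (sll/srl), masks (andi), and adds (addiu), one pass per input bit.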
This is my first post and I absolutely <3 this site! So much great content!
So, I have the following tcpdump command, and I want to understand what it is asking (in plain English).
tcpdump 'tcp[12] & 80 !=0'
Is it asking to grab all TCP packets where byte offset 12 (TCP header length and reserved bits) has a value of at least 80? I believe I am wrong.
If the above is true, can someone write out the possible binaries for it?
80 gives 0101 0000. My mentor also wrote down 1111 0000 and 0111 0000, but I don't know why...
If it's at least 80, the binary combinations for that could be countless...
Is it asking to grab all TCP packets where byte offset 12 (TCP header length and reserved bits) has a value of at least 80?
No. 80 in decimal is 50 in hexadecimal, so it's equivalent to tcp[12] & 0x50 != 0, which tests whether either the 0100 0000 bit or the 0001 0000 bit in the 12th byte of the TCP header is set. That's true of 0101 0000, but it's also true of 1111 0000 and 0111 0000, as well as 0100 0000 and 0001 0000 and 0100 1111 and so on.
If you want to test the uppermost bit of that byte, you'd use tcp[12] & 0x80 !=0. That would, in effect, match all values >= 0x80.
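Here's a small sketch (byte values chosen just for illustration) showing which patterns each test matches:

# tcp[12] & 80 (decimal 80 == 0x50) vs. tcp[12] & 0x80
for byte in (0b0101_0000, 0b1111_0000, 0b0111_0000, 0b0100_0000, 0b0000_1111):
    print(f"{byte:08b}  & 0x50 != 0: {byte & 0x50 != 0}"
          f"  & 0x80 != 0: {byte & 0x80 != 0}")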
Question 1. Suppose computers A and B have IP addresses 10.105.1.113 and 10.105.1.91 respectively, and they both use the same netmask N. Which of the values of N given below should not be used if A and B are to belong to the same network?
255.255.255.0
255.255.255.128
255.255.255.192
255.255.255.224
Question 2. While opening a TCP connection, the initial sequence number is to be derived using a time-of-day (ToD) clock that keeps running even when the host is down. The low-order 32 bits of the counter of the ToD clock are to be used for the initial sequence numbers. The clock counter increments once per millisecond. The maximum packet lifetime is given to be 64 s. Which one of the choices given below is closest to the minimum permissible rate at which sequence numbers used for packets of a connection can increase?
0.015/s
0.064/s
0.135/s
0.327/s
The interviewer asked me these questions during a company interview. How do I solve them? Please help me.
Thank you.
Really, you should ask only one question per post...
For question 1, after masking, the IP addresses have to look the same. Masking is a bitwise AND operation, so you need to write the numbers in question in binary. The first three groups don't matter, since 255 == 11111111 and ANDing with it changes nothing. Let's focus on the last number only:
113 = 0111 0001
 91 = 0101 1011
And for the mask:
  0 = 0000 0000
128 = 1000 0000
192 = 1100 0000
224 = 1110 0000
Now for the masking:
Example:
1110 0000
0111 0001
========= AND
0110 0000
Since 0 AND 1 == 0, but 1 AND 1 == 1
Applying each mask to the two addresses, we get:
mask   113         91
  0    0000 0000   0000 0000
128    0000 0000   0000 0000
192    0100 0000   0100 0000
224    0110 0000   0100 0000   **** when this mask is applied to the two IP addresses, the result is different
With 255.255.255.224, the two addresses end up on different subnets. Conclusion: you can't use 255.255.255.224 as the mask if you want these two IP addresses on the same subnet. For more information, see https://en.wikipedia.org/wiki/Subnetwork for example.
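If you want to check this quickly, here's a sketch of the same masking in Python (last octet only):

for mask in (0, 128, 192, 224):
    a, b = 113 & mask, 91 & mask
    same = "same subnet" if a == b else "DIFFERENT subnets"
    print(f"mask {mask:3}: {a:08b} vs {b:08b} -> {same}")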
As for question 2, it is one of those badly phrased questions: is a "minimum permissible rate" the lowest number or the highest? When you say "this is the maximum rate" you typically mean the lowest number, but it's open to interpretation. I think in this case they are asking about the "maximum rate" (the smallest number), since the literal interpretation of the question makes no sense. Still, I'm struggling to understand what they are asking: when two computers communicate, they increase the sequence number on each packet, so what is "permissible"? I don't know. But 0.015/s is close to 1/64 s, i.e. one sequence number per maximum packet lifetime; if I were a betting man, that's where I'd put my money, though I can't fully explain it. I hope the answer to your first question at least is useful... and maybe the rambling on the second spurs some good discussion and an actual answer.
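For what it's worth, the arithmetic behind that bet:

max_packet_lifetime = 64          # seconds
print(1 / max_packet_lifetime)    # 0.015625 per second, closest to 0.015/s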
I'm trying to add CRC16 error detection to a Motorola HCS08 microcontroller application. My checksums don't match, though. One online CRC calculator provides both the result I see in my PC program and the result I see on the micro.
It calls the micro's result "XModem" and the PC's result "Kermit."
What is the difference between the way those two ancient protocols specify the use of CRC16?
you can implement 16 bit IBM, CCITT, XModem, Kermit, and CCITT 1D0F using the same basic code base. see http://www.acooke.org/cute/16bitCRCAl0.html which uses code from http://www.barrgroup.com/Embedded-Systems/How-To/CRC-Calculation-C-Code
the following table shows how they differ:
name        polynomial  initial val  reverse byte?  reverse result?  swap result?
CCITT       1021        ffff         no             no               no
XModem      1021        0000         no             no               no
Kermit      1021        0000         yes            yes              yes
CCITT 1D0F  1021        1d0f         no             no               no
IBM         8005        0000         yes            yes              no
where 'reverse byte' means that each byte is bit-reversed before processing; 'reverse result' means that the 16 bit result is bit-reversed after processing; 'swap result' means that the two bytes in the result are swapped after processing.
all the above was validated with test vectors against http://www.lammertbies.nl/comm/info/crc-calculation.html (if that is wrong, we are all lost...).
so, in your particular case, you can convert code for XModem to Kermit by bit-reversing each byte, bit reversing the final result, and then swapping the two bytes in the result.
[i believe, but haven't checked or worked out the details, that reversing each byte is equivalent to reversing the polynomial (plus some extra details), which is why you'll see very different explanations in different places for what is basically the same algorithm.
also, the approach above is not efficient, but is good for testing. if you want efficient the best thing to do is translate the above to lookup-tables.]
edit: what i have called CCITT above is documented in the RevEng catalogue as CCITT-FALSE. for more info, see the update to my blog post at the link above.
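a minimal python sketch of that parameterization (bit-at-a-time, so slow, but easy to check; expected values are for the usual "123456789" test string, and the kermit value includes the byte swap as shown on the lammertbies page):

def crc16(data, poly=0x1021, init=0x0000,
          reverse_byte=False, reverse_result=False, swap_result=False):
    """bit-at-a-time CRC-16, parameterized as in the table above."""
    rev8 = lambda b: int(f"{b:08b}"[::-1], 2)
    rev16 = lambda v: int(f"{v:016b}"[::-1], 2)
    crc = init
    for byte in data:
        crc ^= (rev8(byte) if reverse_byte else byte) << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly if crc & 0x8000 else crc << 1) & 0xFFFF
    if reverse_result:
        crc = rev16(crc)
    if swap_result:
        crc = ((crc & 0xFF) << 8) | (crc >> 8)
    return crc

msg = b"123456789"
print(hex(crc16(msg)))               # 0x31c3  XModem
print(hex(crc16(msg, init=0xFFFF)))  # 0x29b1  CCITT (i.e. CCITT-FALSE)
print(hex(crc16(msg, reverse_byte=True, reverse_result=True,
                swap_result=True)))  # 0x8921  Kermit (0x2189 without the swap)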
My recollection (I used to do modem stuff way back when) is that Kermit processes the bits in each byte of the data using the least significant bit first.
Most software CRC implementations (Xmodem, probably) run through the data bytes most significant bit first.
When looking at the library source (download it from http://www.lammertbies.nl/comm/software/index.html) used for the CRC Calculation page you linked to, you'll see that XModem uses CRC16-CCITT, the polynomial for which is:
x^16 + x^12 + x^5 + 1 /* the '^' character here represents exponentiation, not xor */
The polynomial is represented by the bitmap (note that bit 16 is implied)
0x1021 == 0001 0000 0010 0001 binary
The Kermit implementation uses:
0x8408 == 1000 0100 0000 1000 binary
which is the same bitmap as XModem's, only reversed.
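You can verify the reversal with a one-liner (Python, just for illustration):

poly = 0x1021
print(hex(int(f"{poly:016b}"[::-1], 2)))   # 0x8408, the bit-reversed polynomial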
The text file that accompanies the library also mentions the following difference for Kermit:
Only for CRC-Kermit and CRC-SICK: After all input processing, the one's complement of the CRC is calculated and the two bytes of the CRC are swapped.
So it should probably be easy to modify your CRC routine to match the PC result. Note that the source in the CRC library seems to have a pretty liberal license - it might make sense to use it more or less as is (at least the portions that apply for your application).
X-Modem 1K CRC16.
Process for bytewise CRC-16 using input data {0x01, 0x02} and polynomial 0x1021
Init crc = 0
Handle first input byte 0x01:
2.1 'Xor-in' first input byte 0x01 into MSB(!) of crc:
0000 0000 0000 0000 (crc)
0000 0001 0000 0000 (input byte 0x01 left-shifted by 8)
0000 0001 0000 0000 = 0x0100
The MSB of this result is our current dividend: MSB(0x0100) = 0x01.
2.2 So 0x01 is the dividend. Get the remainder for the dividend from our table: crctable16[0x01] = 0x1021. (This value is familiar from the manual computation above.)
Remember the current crc value is 0x0000. Shift the MSB out of the current crc (by shifting it left 8 bits) and xor the result with the current remainder to get the new crc:
0001 0000 0010 0001 (0x1021)
0000 0000 0000 0000 (CRC 0x0000 left-shifted by 8 = 0x0000)
0001 0000 0010 0001 = 0x1021 = intermediate crc.
Handle next input byte 0x02:
Currently we have intermediate crc = 0x1021 = 0001 0000 0010 0001.
3.1 'Xor-in' input byte 0x02 into MSB(!) of crc:
0001 0000 0010 0001 (crc 0x1021)
0000 0010 0000 0000 (input byte 0x02 left-shifted by 8)
0001 0010 0010 0001 = 0x1221
The MSB of this result is our current dividend: MSB(0x1221) = 0x12.
3.2 So 0x12 is the dividend. Get the remainder for the dividend from our table: crctable16[0x12] = 0x3273.
Remember the current crc value is 0x1021. Shift the MSB out of the current crc (by shifting it left 8 bits) and xor the result with the current remainder to get the new crc:
0011 0010 0111 0011 (0x3273)
0010 0001 0000 0000 (CRC 0x1021 left-shifted by 8 = 0x2100)
0001 0011 0111 0011 = 0x1373 = final crc.
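A short Python sketch of the same table-driven algorithm (my own illustration, reconstructing crctable16 from the polynomial):

def make_table(poly=0x1021):
    """256-entry remainder table for a left-shifting (MSB-first) CRC-16."""
    table = []
    for dividend in range(256):
        crc = dividend << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly if crc & 0x8000 else crc << 1) & 0xFFFF
        table.append(crc)
    return table

crctable16 = make_table()

def crc16(data, crc=0):
    for byte in data:
        index = ((crc >> 8) ^ byte) & 0xFF       # 'xor-in' byte into the MSB
        crc = ((crc << 8) & 0xFFFF) ^ crctable16[index]
    return crc

print(hex(crctable16[0x01]))            # 0x1021
print(hex(crctable16[0x12]))            # 0x3273
print(hex(crc16(bytes([0x01, 0x02]))))  # 0x1373, the final crc derived above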