I don't understand this question:
Consider a system that has a byte-addressable memory organized in 32-bit words according to the big-endian scheme. A program reads 2 integers into an array and stores them in successive locations, starting at location at address 0x00001000. The 2 integers are 1025 and 287,454,020.
Show the contents of the two memory words at locations 0x00001000 and 0x00001004 after the two integers have been stored.
Can anyone explain how to do this? This is like a new language to me.
Big endian just means that the bytes are ordered from most significant to least significant at increasing memory addresses, so:
0x00001000 00 00 04 01 ; 1025 (decimal) = 00000401 (hex)
0x00001004 11 22 33 44 ; 287454020 (decimal) = 11223344 (hex)
Just for completeness, if this were a little endian system then memory would look like this:
0x00001000 01 04 00 00 ; 1025 (decimal) = 00000401 (hex)
0x00001004 44 33 22 11 ; 287454020 (decimal) = 11223344 (hex)
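The layouts above can be checked with a short sketch, e.g. in Python, using the two integers from the question:

```python
import struct

# Pack the two integers from the question as 32-bit words, both byte orders.
for value in (1025, 287454020):
    big = struct.pack('>I', value)     # big-endian: most significant byte first
    little = struct.pack('<I', value)  # little-endian: least significant byte first
    print(f"{value}: big-endian={big.hex()} little-endian={little.hex()}")
```

The `>` and `<` format prefixes select big- and little-endian packing respectively.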
Related
AES-128 authentication process with Desfire EV1 cards goes like this:
Get Application IDs: 90 6A 00 00 00
Select Application: 90 5A 00 00 03 10 00 00 00 (AID: 0x000010)
Start Authentication with a key: 90 AA 00 00 01 02 00 (Key: 0x02)
The card responds with a random 16-byte array; let's call it RANDOM_B
Now the card needs more data to continue the authentication, so we do the following:
Decrypt RANDOM_B with KEY and an all-zero IV; let's call the result RANDOM_B_DEC
Rotate RANDOM_B_DEC left by one byte (move the first byte to the end); let's call it RANDOM_B_DEC_LS
Create a 16-byte random array; let's call it RANDOM_A
Concatenate RANDOM_A and RANDOM_B_DEC_LS into a single 32-byte array; let's call it ARRAY
Encrypt ARRAY using KEY and RANDOM_B as IV; let's call the result ARRAY_ENC
Send ARRAY_ENC to continue the authentication process: 90 AF 00 00 20 + ARRAY_ENC + 00
Now the card responds with our RANDOM_A, rotated and encrypted (16 bytes), and we need to decrypt it and compare it with the RANDOM_A we created earlier
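The rotation and concatenation steps above can be sketched in Python. The AES-CBC decrypt/encrypt steps themselves need a crypto library (e.g. pycryptodome) and are not shown; the byte values here are placeholders, not real card data:

```python
def rotate_left_one_byte(data: bytes) -> bytes:
    # DESFire's "shift": move the first byte to the end
    # B0 B1 ... B15 -> B1 ... B15 B0
    return data[1:] + data[:1]

# Placeholder values, NOT real card data:
random_b_dec = bytes(range(16))   # stands in for the decrypted RANDOM_B
random_a = bytes(16)              # stands in for the reader's random RANDOM_A

array = random_a + rotate_left_one_byte(random_b_dec)
print(len(array))  # 32 bytes, ready to be CBC-encrypted and sent with 90 AF
```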
Here is my question: while decrypting the received RANDOM_A (the last step), which IV must I use?
After that, how can I read data from a file in the selected and authenticated application? Is this data also received encrypted? If so, what do I use as the IV for decryption?
Thanks
I'm trying to understand the LDAP message structure, particularly the searchResEntry type in order to do some parsing. Using Wireshark as a guide, I have a general understanding but I can't find more specifics on the actual data structure. For example, it appears that each block starts with
0x30 0x84 0x0 0x0
Then from there, there is some variability in the remaining bytes before the actual data for the block. For example, the first 17 bytes of a searchResEntry are
30 84 00 00 0b 8f 02 01 0c 64 84 00 00 0b 86 04 3b
30 84 00 00 - block header
0b 8f - size of entire searchResEntry remaining
02 - I believe represents a type code where the next byte (01) is a length and 0c is the messageId.
64 84 00 00 - No idea
0b 86 - size of entire searchResEntry remaining
04 - some type code
3b - length of block data
But then other blocks that begin with 30 84 00 00 are not 17 bytes long.
I've looked at rfc4511 but they just provide an unhelpful notation that doesn't actually describe what the bytes mean.
searchResultEntry ::= [APPLICATION 4] SEQUENCE {
objectName LDAPDN,
attributes PartialAttributeList }
I've also looked at Wireshark's packet-ldap.c but it is very hard to follow. I wouldn't think it would be this hard to find a good description of the data structure layout and associated flags.
The LDAP protocol is encoded according to the ASN.1 BER (Basic Encoding Rules), a standard defined by the ITU. The full specification is here: https://www.itu.int/ITU-T/studygroups/com17/languages/X.690-0207.pdf
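The `30 84 00 00` prefix that looks like a fixed block header is actually BER in action: tag `0x30` (SEQUENCE) followed by `0x84`, which means "the length is in the next 4 bytes". Many LDAP servers always emit this 4-byte long form, which is why the two following bytes are often `00 00`. A minimal sketch of a BER tag/length parser (assuming single-byte tags, which suffices for LDAP):

```python
def parse_ber_header(buf: bytes, offset: int = 0):
    """Return (tag, length, header_size) for the BER TLV at offset.

    Handles single-byte tags only, which is enough for LDAP messages."""
    tag = buf[offset]
    first = buf[offset + 1]
    if first < 0x80:                 # short form: one length byte
        return tag, first, 2
    n = first & 0x7F                 # long form: next n bytes hold the length
    length = int.from_bytes(buf[offset + 2:offset + 2 + n], 'big')
    return tag, length, 2 + n

# The first bytes of the searchResEntry from the question:
data = bytes.fromhex('308400000b8f02010c648400000b86')
print(parse_ber_header(data))      # tag 0x30 SEQUENCE, length 0x0b8f, 6-byte header
print(parse_ber_header(data, 6))   # tag 0x02 INTEGER, length 1 -> the messageID (0x0c)
print(parse_ber_header(data, 9))   # tag 0x64 = [APPLICATION 4] searchResEntry, length 0x0b86
```

So the "no idea" bytes `64 84 00 00 0b 86` are the `[APPLICATION 4]` tag from the RFC's notation, again with a 4-byte long-form length.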
Today I got this reader from a local shop. I have worked with Wiegand-type readers before with no problem. Anyway, when I try to read an EM-type card with the ID 0009177233 (written on the card), I would expect to get at least 9177233, plus start and stop characters. But instead I get 50008C0891
ASCII 50008C0891
HEX 02 35 30 30 30 38 43 30 38 39 31 0D 0A 03
BIN 00000010 00110101 00110000 00110000 00110000 00111000 01000011 00110000
00111000 00111001 00110001 00001101 00001010 00000011
I use USB-RS232 converter and RealTerm software.
Does anyone have any idea why?
Are there two IDs?
The decimal 9177233 equals hex 8C0891, so the software gives you the serial number in hexadecimal notation. I think the full number 50008C0891 is the 5 bytes (40 bits) of the UID of the EM-type chip.
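The frame can be taken apart with a short sketch (frame bytes copied from the question; the 0x02/0x03 bytes are assumed to be STX/ETX framing, with CR LF before the ETX):

```python
# The raw frame from the question: STX, ASCII payload, CR, LF, ETX
frame = bytes.fromhex('02353030303843303839310d0a03')
assert frame[0] == 0x02 and frame[-1] == 0x03   # STX ... ETX framing

payload = frame[1:-3].decode('ascii')            # drop STX, CR, LF, ETX
print(payload)                                   # '50008C0891'
print(int(payload[-6:], 16))                     # 8C0891 hex = 9177233 decimal
```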
Regards
I'm trying to interpret the communication between an ISO 7816 type card and the card reader. I've connected inline between the card and the reader, but when I dump the output to the console I'm getting data that I'm not expecting, see below:
Action: Card inserted to reader, expect an ATR only
Expected output:
3B 65 00 00 B0 40 40 22 00
Actual Output:
E0 3B 65 00 B0 40 40 22 00 90 00 80 9B 80 E0 E2
The 90 00 is the standard status for OK after the reset, but why am I still logging additional data, both before the ATR (the E0) and after it?
The communication line is documented in ISO 7816-3 (electrical interface and transmission protocols); look for the respective chapters on the T=0 and T=1 protocols. T=1 is a block-oriented protocol with a prologue containing node addresses and an epilogue containing a CRC/LRC.
For the ATR, however, no protocol is running yet, since the ATR itself carries the information about which protocols the card supports, for the terminal to choose from. This early in the exchange, 90 00 is certainly not an SW1/SW2 status.
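For illustration, the T=1 epilogue LRC is just the XOR of the prologue and information-field bytes. A sketch over a hypothetical block (the field values here are made up, not taken from the trace above):

```python
def lrc(block: bytes) -> int:
    # T=1 epilogue LRC: XOR of all prologue and information-field bytes
    value = 0
    for b in block:
        value ^= b
    return value

# Hypothetical T=1 block fields (made up for the example):
nad, pcb, length = 0x00, 0x00, 0x02
inf = bytes([0x90, 0x00])
block = bytes([nad, pcb, length]) + inf
print(f'LRC = {lrc(block):02X}')   # XOR of 00 00 02 90 00 -> 92
```

Bytes in a sniffed trace that don't match this block shape (or the ATR) may simply be line noise or bytes from both directions interleaved by the tap.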
On the same machine R and MATLAB produce different hex representations of doubles, e.g.
R:
x <- 2.28
writeBin(x,raw(0))
gives
3d 0a d7 a3 70 3d 02 40
MATLAB:
x = 2.28;
num2hex(x)
gives
40023d70a3d70a3d
Octave produces the same result as MATLAB. Why is MATLAB's output reversed?
Update: So it's indeed the endianness. It remains to find out why R gets it wrong. Using an example from chappjc's answer below I get following output on a little-endian CPU:
writeBin(1024, raw(), endian='little')
00 00 00 00 00 00 90 40
and
writeBin(1024, raw(), endian='big')
40 90 00 00 00 00 00 00
which is exactly the opposite of what I would have expected.
Is it wrong output from R or misunderstanding on my part?
MATLAB on Intel systems stores floating point values as little endian. Use computer to check:
>> [computerType, ~, endian] = computer
computerType =
PCWIN64
endian =
L
You can use swapbytes to convert between little- and big-endian:
>> num2hex(1024)
ans =
4090000000000000
>> num2hex(swapbytes(1024))
ans =
0000000000009040
In R, specify endian="big" (or endian="little") in writeBin to match your MATLAB:
writeBin(x,raw(0),endian="big")
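For comparison, a Python sketch produces both byte orders of the 2.28 from the question (assuming an IEEE-754 double, which both R and MATLAB use):

```python
import struct

x = 2.28
little = struct.pack('<d', x).hex()   # byte order writeBin shows on an x86 machine
big = struct.pack('>d', x).hex()      # byte order num2hex displays
print(little)   # 3d0ad7a3703d0240
print(big)      # 40023d70a3d70a3d
```

Same bits, two presentations: writeBin dumps the bytes as they sit in (little-endian) memory, while num2hex prints the value most-significant-digit first.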
This is endianness. Few people know that this is an actual word. Now you are one of them.