I'm using a HID Omnikey 5321 reader to communicate with a MIFARE DESFire EV1 tag. I want to write 16 bytes to a standard data file. I'm using the WinSCard DLL (C++) to wrap native DESFire commands in the ISO 7816 APDU message structure.
I managed to write data to an existing file:
File Nb : 00
Offset : 00 00 00
Length : 10 00 00 (LSB first)
Data (16 bytes) : 23 00 00 00 00 00 00 08 12 34 56 78 00 00 00 00
I calculate the CRC over the native command:
Native command : 3D (File Nb) (Offset) (Length) (Data)
CRC = 30 D2 07 00
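For reference, my understanding is that the DESFire CRC32 is the standard reflected CRC-32 (polynomial 0xEDB88320, initial value 0xFFFFFFFF) but without the final complement, transmitted LSB first. A sketch of that calculation with Python's zlib; the variant is an assumption, so check it against the 30 D2 07 00 value above:

```python
import zlib

def desfire_crc32(data: bytes) -> bytes:
    """CRC32 as (assumed) used by DESFire EV1: the standard reflected CRC-32
    with init 0xFFFFFFFF but without the final XOR, returned LSB first."""
    crc = zlib.crc32(data) ^ 0xFFFFFFFF   # undo zlib's final complement
    return crc.to_bytes(4, "little")

# Native command header (3D, file no, offset, length LSB first) plus the data
native = bytes.fromhex("3D00000000100000"
                       "23000000000000081234567800000000")
crc = desfire_crc32(native)   # the example above reports 30 D2 07 00 here
```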
Then I encipher with the session key and an IV set to 00 :
32 bytes data to encipher : (Data) (CRC) 00 00 00 00 00 00 00 00 00 00 00 00
APDU sent:
90 3D 00 00 27 00 00 00 00 10 00 00 (32 bytes enciphered data) 00
But I have a problem when I try to write to a file I created beforehand.
After AES authentication with the Application Master Key, I calculate the two subkeys (K1 and K2) and set the IV to 0x00...00. Then I create two files:
File 0
CMAC calculation :
(CD 00 03 00 21 30 00 00 80 00...00) XOR (K2)
encryption with session key and IV
copy of encryption result into IV
APDU sent : 90 CD 00 00 07 00 03 00 21 30 00 00 00
Creation of file 0 OK
File 1
CMAC calculation :
(CD 01 03 00 11 F0 05 00 80 00...00) XOR (K2)
encryption with session key and IV
copy of encryption result into IV
APDU sent : 90 CD 00 00 07 01 03 00 11 F0 05 00 00
Creation of file 1 OK
Then I write data to file 0, using the IV for encryption, and I get a 0x1E (integrity error) status.
I managed to read data from an already existing file:
CMAC calculation
(BD 00 00 00 00 10 00 00 80 00...00) XOR (K2)
encryption with session key and IV = 0x00...00
copy of encryption result into IV
I use the IV to decrypt the data I receive and get the correct byte values, so I think subkey K2 is good.
I don't know where the problem is in my WriteData command. It may be the IV, but I don't know why.
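One way to sanity-check the subkeys: DESFire EV1 uses the standard CMAC subkey derivation from NIST SP 800-38B / RFC 4493, so K1/K2 can be tested against the RFC's published vectors. A minimal sketch in Python, assuming L is the AES encryption of the all-zero block under the session key:

```python
def cmac_subkeys(L: bytes):
    """Derive CMAC subkeys K1, K2 from L = AES_K(0^128) per RFC 4493:
    each subkey is the previous value doubled in GF(2^128)."""
    def dbl(block: bytes) -> bytes:
        n = int.from_bytes(block, "big") << 1
        if n >> 128:                           # carry out of the MSB
            n = (n & ((1 << 128) - 1)) ^ 0x87  # reduce by Rb
        return n.to_bytes(16, "big")
    K1 = dbl(L)
    K2 = dbl(K1)
    return K1, K2

# RFC 4493 test vector: L for key 2b7e1516 28aed2a6 abf71588 09cf4f3c
L = bytes.fromhex("7df76b0c1ab899b33e42f047b91b546f")
K1, K2 = cmac_subkeys(L)
print(K1.hex())   # fbeed618357133667c85e08f7236a8de
print(K2.hex())   # f7ddac306ae266ccf90bc11ee46d513b
```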
I want to parse an IPv6 packet that contains extension headers. The "Hdr Ext Len" (second) field of extension headers such as Hop-by-Hop Options, Routing, Destination Options, and the Authentication Header is defined as:
8-bit unsigned integer. Length of the header in 8-octet units, not including the first 8 octets.
Consider the following IPv6 packet with an Authentication Header (AH). The second byte of the AH has the value 04, which signifies 24 bytes (I got this information from the hex packet decoder at https://hpd.gasmi.net/).
I would like to know how to decode the value of this "Hdr Ext Len" field to get the actual extension header size.
IPv6 header
6E 00 00 00 00 34 33 01 FE 80 00 00 00 00 00 00 00 00 00 00 00 00 00 02 FE 80 00 00 00 00 00 00 00 00 00 00 00 00 00 01
Authentication extension header
59 04 00 00 00 00 01 00 00 00 00 15 8F 74 57 E3 C2 D4 13 EF 5E F1 FF 13
OSPF packet
03 02 00 1C 02 02 02 02 00 00 00 01 E9 DF 00 00 00 00 00 13 05 DC 00 01 00 00 0B 93
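For what it's worth, AH is the odd one out here: per RFC 4302 its length field counts 4-octet units minus 2, while the other extension headers (RFC 8200) count 8-octet units not including the first 8 octets. That's how the 04 in the AH above decodes to 24 bytes. A small sketch (the function name is mine):

```python
def ext_header_len(next_header: int, hdr_ext_len: int) -> int:
    """Return the extension-header size in bytes.

    AH (protocol 51) counts 4-octet units minus 2 (RFC 4302); the other
    extension headers count 8-octet units not including the first 8 octets.
    """
    AH = 51
    if next_header == AH:
        return (hdr_ext_len + 2) * 4
    return (hdr_ext_len + 1) * 8

print(ext_header_len(51, 4))   # the AH above (Next Header 0x33 = 51): 24 bytes
print(ext_header_len(0, 0))    # minimal Hop-by-Hop header: 8 bytes
```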
I'm working on understanding a mystery protocol in a DLP 3D printer. A Raspberry Pi is talking to a motor/LED controller over a serial bus. The device seems to be proprietary, but I'm guessing it uses some kind of open standard (like G-code). It may help to know the device was probably made and programmed in China; no idea if this factors in, but there may be a programming-culture thing I'm missing. I'm trying to figure out how to control this motor/LED control board via the serial port, so I captured data using interceptty.
This seems to be an idle state sent from the Pi to the mystery device.
55 55 55 55 00 08 00 00 00 00 00 00 00 00 00 00 00 00 00 08 aa aa aa aa
This tends to be how the mystery device acknowledges
55 55 55 55 00 03 00 00 00 00 00 00 00 00 00 00 00 00 01 04 aa aa aa aa
55 55 55 55 00 03 00 00 00 00 00 00 00 00 00 00 00 00 01 04 aa aa aa aa
This seems to mark an idle state, always after an acknowledgement from the mystery device.
55 55 55 55 00 03 00 00 00 00 00 00 00 00 00 00 00 00 55 58 aa aa aa aa
This is the command that started the print, so it begins moving a motor.
55 55 55 55 00 03 e8 03 00 00 40 0d 03 00 01 00 00 00 00 3f aa aa aa aa
For reference, 0x55 is 01010101 and 0xAA is 10101010 in binary. They seem to be a way to clear/sync comms for async serial transmission. It certainly looks like I'm seeing extremely low-level communication, as if I had hooked a logic analyzer up to the circuit.
There are 16 hex bytes between each of these clearing/syncing steps. I'm not sure if I'm just seeing very low-level communication, or if these 16 bytes contain all of the data in a given command, or the data plus check bytes or something.
Finally, I'm seeing lots of repetition. This leads me to think that this isn't G-code, but that the Pi is sending a command every cycle and the slave/mystery device is updating as quickly as possible.
For example, the output below repeats over and over, 1145 times, after starting a print. This would be when the motor has descended fully into a vat and an LED is held on for an extended period of time. > denotes received transmissions, < denotes outgoing transmissions from the Pi.
> 55 55 55 55 00 03 00 00 00 00 | UUUU
> 00 00 00 00 00 00 00 00 01 04 |
> aa aa aa aa 55 55 55 55 00 03 | UUUU
> 00 00 00 00 00 00 00 00 00 00 |
> 00 00 01 04 aa aa aa aa |
< 55 55 55 55 00 03 20 03 00 00 | UUUU
< 78 5d 02 00 00 00 00 00 00 fd | x]
< aa aa aa aa |
I'm hoping to get some direction. None of this hex translates well into ASCII or UTF-8, and I don't think it's passing ints or chars. Maybe it's bit-reversed? I'm not sure; I'm having a lot of trouble making heads or tails of it.
What level are the UUUU and aaaa at? It seems like something you'd see on a logic analyzer, not through a driver.
Anyway, any direction would be much appreciated.
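One pattern worth testing on dumps like this: the last byte of each 16-byte block before the aa aa aa aa trailer appears to be a simple modulo-256 sum of the preceding bytes (e.g. in the print-start frame, 03 + E8 + 03 + 40 + 0D + 03 + 01 = 0x13F, low byte 3F). A sketch that checks this guess against the frames captured above:

```python
def frame_checksum(body: bytes) -> int:
    """Modulo-256 sum of the frame bytes before the final checksum byte."""
    return sum(body) & 0xFF

# Payloads between the 55 55 55 55 / aa aa aa aa markers, from the captures
frames = {
    "ack":         "00 03 00 00 00 00 00 00 00 00 00 00 00 00 01 04",
    "idle":        "00 03 00 00 00 00 00 00 00 00 00 00 00 00 55 58",
    "print start": "00 03 e8 03 00 00 40 0d 03 00 01 00 00 00 00 3f",
}
for name, hexstr in frames.items():
    data = bytes.fromhex(hexstr)
    assert frame_checksum(data[:-1]) == data[-1], name
print("all frame checksums match")
```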
I have a packet that I have manually crafted for a SYN/ACK, but I get no reply from the server.
This is all wireless/GSM stuff so I cannot use a sniffer.
I have calculated the TCP and IP header checksums manually a few times and they seem correct, but I really need a third-party method to be sure.
I had several endian issues but I think I have it right now. But who knows...
I only found an online parser but it does not test/verify the checksums.
Does anyone have an easy idea for me?
Just in case someone has suitable access to a test method, and feels like pasting it in for me, here is the packet:
45 10 00 3C 00 02 00 00 64 06 E8 1F 0A AA 61 43 51 8A B1 13
01 BB 01 BB 00 00 00 0A 00 00 00 00 50 02 00 00 3D D8 00 00
Regards
berntd
I've created a pcap from your hex data using Net::PcapWriter:
use strict;
use warnings;
use Net::PcapWriter;
my $w = Net::PcapWriter->new('test.pcap');
my $ip = pack('H*','4510003C000200006406E81F0AAA6143518AB11301BB01BB0000000A00000000500200003DD80000');
$w->packet($w->layer2prefix('1.1.1.1').$ip);
Loading it into Wireshark shows both the IP checksum and the TCP checksum as correct, so it is probably not a problem of the checksum calculation.
But tcpdump says that the length is wrong:
IP truncated-ip - 20 bytes missing! 10.170.97.67.443 > 81.138.177.19.443: Flags [S], seq 10:30, win 0, length 20
This is because you've set the total length in the IP header to 60 bytes (00 3C), but the IP header plus TCP header is only 40 bytes in total and your packet has no payload, i.e. the total length should be 40, not 60 bytes.
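If you want to double-check the checksums without Wireshark, the Internet checksum (RFC 1071) is easy to recompute: sum the header as big-endian 16-bit words, fold the carries back in, and take the one's complement. Run over a header with its stored checksum left in place, a valid header yields 0. A sketch using the IP header from the question:

```python
def inet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                      # pad odd-length input
    total = sum(int.from_bytes(data[i:i + 2], "big")
                for i in range(0, len(data), 2))
    while total > 0xFFFF:                    # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# The 20-byte IP header from the question, checksum field (E8 1F) included:
ip_header = bytes.fromhex("4510003C000200006406E81F0AAA6143518AB113")
print(hex(inet_checksum(ip_header)))   # 0x0 -> the stored checksum is valid
```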
Here is what I came up with to do it the manual way:
Put packet into a text file like so:
45 10 00 3C 00 02 00 00 64 06 E8 1F 0A AA 61 43 51 8A B1 13
01 BB 01 BB 00 00 00 0A 00 00 00 00 50 02 00 00 3D D8 00 00
Add addressing offsets and group into 16-byte lines as in a hex dump:
000000 45 10 00 3C 00 02 00 00 64 06 E8 1F 0A AA 61 43
000010 51 8A B1 13 01 BB 01 BB 00 00 00 0A 00 00 00 00
000020 50 02 00 00 3D D8 00 00
Save it (source).
Now run text2pcap.exe -e 0x800 source dest
The dest file can now be imported into Wireshark as a pcap file for decoding.
Multiple packets can be converted by starting the address offset for each new packet at 000000 again in the source file.
text2pcap.exe seems to come with Wireshark.
Tedious but works.
Cheers
With a large file (1 GB) created by saving a large data.frame (or data.table), is it possible to load a small subset of rows from that file very quickly?
(Extra for clarity: I mean something as fast as mmap, i.e. the runtime should be approximately proportional to the amount of memory extracted, but constant in the size of the total dataset. "Skipping data" should have essentially zero cost. This can be very easy, or impossible, or something in between, depending on the serialization format.)
I hope that the R serialization format makes it easy to skip forward to the relevant portions of the file.
Am I right in assuming that this would be impossible with a compressed file, simply because gzip requires decompressing everything from the beginning?
saveRDS(object, file = "", ascii = FALSE, version = NULL,
compress = TRUE, refhook = NULL)
But I'm hoping binary (ascii=F), uncompressed (compress=F), might allow something like this: use mmap on the file, then quickly skip to the rows and columns of interest?
I'm hoping it has already been done, or there is another format (reasonably space efficient) that allows this and is well-supported in R.
I've used things like gdbm (from Python) and even implemented a custom system in Rcpp for a specific data structure, but I'm not satisfied with any of this.
After posting this, I worked a bit with the package ff (CRAN) and am very impressed with it (not much support for character vectors though).
Am I right in assuming that this would be impossible with a compressed file, simply because gzip requires decompressing everything from the beginning?
Indeed. For a short explanation, let's take some dummy method as a starting point: for AAAAVVBABBBC, gzip would do something like 4A2VBA3BC.
Obviously you can't extract all the As from the file without reading all of it, as you can't tell whether there's an A at the end or not.
For the other question, "loading part of a saved file", I can't see a solution off the top of my head. write.csv and read.csv (or fwrite and fread from the data.table package) with the skip and nrows parameters could be an alternative.
In any case, running a filter on data already read means loading the whole file into memory first, which is no faster than reading the file and then subsetting in memory.
You could craft something in Rcpp, taking advantage of streams to read the data without loading it all into memory, but reading and parsing each entry before deciding whether to keep it won't give you much better throughput.
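To illustrate the constant-cost skipping the question is after: with an uncompressed, fixed-width binary layout the byte offset of any row can be computed directly, so reading a slice costs only the slice itself. A sketch in Python (the same seek arithmetic would apply from mmap or Rcpp):

```python
import os
import struct
import tempfile

# Write 100,000 records of one little-endian double each.
path = os.path.join(tempfile.mkdtemp(), "table.bin")
with open(path, "wb") as f:
    for i in range(100_000):
        f.write(struct.pack("<d", float(i)))

# Reading rows 50,000..50,009 costs a seek plus 80 bytes of I/O,
# independent of the total file size.
with open(path, "rb") as f:
    f.seek(50_000 * 8)                       # record width is 8 bytes
    rows = struct.unpack("<10d", f.read(80))
print(rows[0], rows[-1])   # 50000.0 50009.0
```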
saveRDS will save a serialized version of the data, for example:
> myvector <- c("1","2","3")
> serialize(myvector,NULL)
[1] 58 0a 00 00 00 02 00 03 02 03 00 02 03 00 00 00 00 10 00 00 00 03 00 04 00 09 00 00 00 01 31 00 04 00 09 00 00 00 01 32 00 04 00 09 00 00
[47] 00 01 33
It is of course parsable, but that means reading it byte by byte according to the format.
On the other hand, you could write as CSV (or with write.table for more complex data) and use an external tool before reading, something along these lines:
z <- tempfile()
write.table(df, z, row.names = FALSE)
shortdf <- read.table(text = system(paste("awk 'NR > 5 && NR < 10 { print }'", z), intern = TRUE))
You'll need a Linux system with awk, which can parse millions of lines in a few milliseconds, or a Windows-compiled version of awk.
The main advantage is that awk can filter each line of data on a regex or other conditions.
As a complement, for the case of a data.frame: a data.frame is more or less a list of vectors (in the simple case), and this list is saved sequentially. So if we have a data frame like:
> str(ex)
'data.frame': 3 obs. of 2 variables:
$ a: chr "one" "five" "Whatever"
$ b: num 1 2 3
Its serialization is:
> serialize(ex,NULL)
[1] 58 0a 00 00 00 02 00 03 02 03 00 02 03 00 00 00 03 13 00 00 00 02 00 00 00 10 00 00 00 03 00 04 00 09 00 00 00 03 6f 6e 65 00 04 00 09 00
[47] 00 00 04 66 69 76 65 00 04 00 09 00 00 00 08 57 68 61 74 65 76 65 72 00 00 00 0e 00 00 00 03 3f f0 00 00 00 00 00 00 40 00 00 00 00 00 00
[93] 00 40 08 00 00 00 00 00 00 00 00 04 02 00 00 00 01 00 04 00 09 00 00 00 05 6e 61 6d 65 73 00 00 00 10 00 00 00 02 00 04 00 09 00 00 00 01
[139] 61 00 04 00 09 00 00 00 01 62 00 00 04 02 00 00 00 01 00 04 00 09 00 00 00 09 72 6f 77 2e 6e 61 6d 65 73 00 00 00 0d 00 00 00 02 80 00 00
[185] 00 ff ff ff fd 00 00 04 02 00 00 00 01 00 04 00 09 00 00 00 05 63 6c 61 73 73 00 00 00 10 00 00 00 01 00 04 00 09 00 00 00 0a 64 61 74 61
[231] 2e 66 72 61 6d 65 00 00 00 fe
Translated to ASCII to give an idea:
X
one five Whatever?ð## names a b row.names
ÿÿÿý class
data.frameþ
We have the header of the file, then the header of the list, then each vector composing the list. As we have no clue how much space a character vector will take, we can't skip to arbitrary data; we have to parse each header (the bytes just before the text data give its length). Even worse, to get to the corresponding integers we have to reach the integer vector header, which can't be located without parsing every character header and summing their lengths.
So in my opinion, crafting something is possible, but it will probably not be much quicker than reading the whole object, and it will be brittle with respect to the save format (R already has 3 formats for saving objects).
Some reference here
The same serialize output in ASCII format (more readable for seeing how it is organized):
> write(rawToChar(serialize(ex,NULL,ascii=TRUE)),"")
A
2
197123
131840
787
2
16
3
262153
3
one
262153
4
five
262153
8
Whatever
14
3
1
2
3
1026
1
262153
5
names
16
2
262153
1
a
262153
1
b
1026
1
262153
9
row.names
13
2
NA
-3
1026
1
262153
5
class
16
1
262153
10
data.frame
254
Edit: It turns out that this is a custom checksum algorithm, not a CRC-32. For the curious, here is a snippet of C code calculating the 21 FF 1D E4 checksum in the example below.
I am working with some hex files that seem to be protected by a CRC-32 of sorts, but recalculating with the standard CRC-32 and other known parameters fails to produce a match.
All I know is that the data spans 116 bytes plus four additional bytes for the checksum. I can produce tons of message-checksum pairs; I just cannot find any relation between them.
I don't want to fill this post with hex dumps so I pasted a couple more here: http://mathb.in/12246.
11 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 43
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 21 FF 1D E4
Could this be a CRC-32 with some strange settings, or is it completely different? What would be a way to determine how the checksum is produced?
Update: I was able to make minute changes of 1 bit at a time in the messages:
00000000000000000000000000000000000000000000000000000000, 32C9A1E6
00000000000000000000000000000000000000000000000000000001, BD25904E
00000000000000000000000000000000000000000000000000000002, B437A286
00000000000000000000000000000000000000000000000000000003, 8AB790EE
00000000000000000000000000000000000000000000000000000004, 2DDC3AEB
00000000000000000000000000000000000000000000000000000005, 208B3859
00000000000000000000000000000000000000000000000000000006, 87E0925C
00000000000000000000000000000000000000000000000000000007, E59391AE
00000000000000000000000000000000000000000000000000000008, B07292FC
00000000000000000000000000000000000000000000000000000009, 830EA655
Changing anything without updating the checksum appropriately produces a message that is rejected.
If this cannot be a CRC-32 due to the superposition principle (the exclusive-or check), is there any way to analyze pairs of messages and checksums to find properties or patterns?
Assuming you got the messages and check values right in your linked examples, it is not a CRC. For a CRC, or any linear function of the message over GF(2), if message 1 exclusive-ored with message 2 equals message 3 exclusive-ored with message 4, then the same must hold for the corresponding check values.
Only one byte differs across your messages. Since 45 ^ 4A == 43 ^ 4C, the associated check values, when exclusive-ored, should also be equal. They are not.
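The same linearity test can be run on the single-bit examples in the update. With the all-zero message as a baseline, the messages ending in 01 and 02 XOR to the one ending in 03 (01 ^ 02 == 00 ^ 03), so for a CRC the check values would have to satisfy the same relation. A quick check in Python using the values quoted above:

```python
# Check values from the question's update (messages differ only in the last byte)
c00 = 0x32C9A1E6   # message ending in ...00
c01 = 0xBD25904E   # message ending in ...01
c02 = 0xB437A286   # message ending in ...02
c03 = 0x8AB790EE   # message ending in ...03

# Since the messages satisfy m01 ^ m02 == m00 ^ m03, any CRC (or other
# linear map over GF(2)) would require c01 ^ c02 == c00 ^ c03.
print(hex(c01 ^ c02), hex(c00 ^ c03))   # they differ, so this is not a CRC
assert c01 ^ c02 != c00 ^ c03
```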
If you did have a CRC, you could use reveng to try to tease out the CRC parameters.