I have been trying for hours to understand what Minecraft packets mean, but they don't seem to adhere to the protocol. I've been using Wireshark to sniff the packets, and according to the protocol they should start with 0x-something, but they never do. I'm really confused right now.
Here is some of the data that came with the packets:
57 9e e9 f7 7f 3c 0b c7 b2 f0 f2 1d 8e 42 6e
9c 14 57 71 74 6b 83
54 ad d7 3a 51 60 55
Any help is really appreciated.
0x is just a way of telling the compiler (in almost all languages) that the following number is a hexadecimal number; 9e would be written as 0x9e in Java. There are also other number prefixes, e.g. 0b means that the following number is binary (0s and 1s).
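For example, in Java (just a tiny illustration of the two literal prefixes, nothing protocol-specific):
public class LiteralDemo {
    public static void main(String[] args) {
        int hex = 0x9e;      // hexadecimal literal, 158 in decimal
        int bin = 0b1010;    // binary literal, 10 in decimal
        System.out.println(hex + " " + bin);  // prints "158 10"
    }
}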
I have been trying to parse this for a long time, but I just can't get it. I tried a lot of types and converters, but nothing worked. I have also googled everywhere I can.
Here is the full block of data:
3D 08 41 CF AC D9 59 64 44 44
where:
3D - type of the data (a length-prefixed datetime),
08 - length of the data that follows (the datetime itself),
41 CF AC D9 59 64 44 44 = 8 October 2020 10:38:46 UTC
All other data was big-endian, and this block is probably the same.
The closest binary formats were Apple Absolute time and OLE Automation date (I encoded the current date in DCode and compared it with my data), but neither of them matched. The back-end is old and written in Java; maybe that is some kind of clue.
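Since Apple Absolute time and OLE Automation dates are both stored as IEEE-754 doubles, and the back-end is Java, one quick check (just a diagnostic sketch, not a confirmed decoding of this format) is to reinterpret the 8 data bytes as a big-endian double and see whether the number looks like seconds, minutes or days since some candidate epoch:
import java.nio.ByteBuffer;

public class DoubleProbe {
    public static void main(String[] args) {
        // the 8 data bytes from the block above, in wire order
        byte[] data = {(byte) 0x41, (byte) 0xCF, (byte) 0xAC, (byte) 0xD9,
                       (byte) 0x59, (byte) 0x64, (byte) 0x44, (byte) 0x44};
        // ByteBuffer reads big-endian by default, like the rest of the protocol
        double value = ByteBuffer.wrap(data).getDouble();
        System.out.println(value); // compare against seconds/minutes/days since candidate epochs
    }
}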
Edit
Thanks to #GiacomoCatenazzi, things got off the ground. I think the start of time is 01.01.1970, but it is not the Unix epoch. And there is a difference between shifting the value by an hour and by seconds (something like a fraction of seconds, but without milliseconds).
The main problem I see at the moment is figuring out whether the datetime is stored with or without seconds.
The first byte, 41, may be a true value, and the last 2 or 3 bytes are often equal.
Times relative to zero:
3C - 3D C2 3F BF -1m
where 3C is the type of data (a 4-byte datetime)
3D 08 - 41 CE E1 1F E0 00 00 00 -1s
3D 08 - 41 CE E1 1F E0 02 22 22 1s
3D 08 - 41 CF AC EB BF A2 A2 A2 Current datetime (Oct 15 2020 02:59:15 GMT+0300)
3C - 3D C2 3F C0 0 - Center
3C - 3D C2 3F C1 1m
3C - 3D C2 3F FC 1 hour
3C - 3D C2 45 60 1 day
3C - 3D C2 F3 C0 1 month
3C - 3D CA 44 E0 1 year
3C - 3E 8A BF E0 25 years (Dec 26 1994 03:00:00 GMT+0300)
3C - 3F 59 D6 CC Current date only (Oct 15 2020) (26711820 minutes since 01.01.1970)
I need to compress data stored in Redis. I write the data from R (with the rredis package) to Redis like this:
redisSet("x","{\"email\":\"master#disaster.com\",\"Ranking\":[{\"Number\":37665,\"rank\":1},{\"Number\":41551,\"rank\":2},{\"Number\":21684,\"rank\":3},{\"Number\":35946,\"rank\":4}]}")
Instead of 4 elements in this value's list there will be 4,000 in the real scenario, and 70,000 keys like that in total. At the moment each of these keys takes ~0.15 MB.
I read that it is possible to significantly reduce the memory usage of those entries in Redis by compressing them, e.g. with algorithms like LZO or Snappy. But I could not find information about a concrete implementation.
Any suggestions on how to solve this? Thanks!
There's no built-in way to do that in Redis.
However, Redis can store binary data, so you can compress your data with any compression algorithm you prefer and store the compressed binary data in Redis. When reading the data back, get the binary data and decompress it with the same algorithm.
Thanks for the hint! I was able to implement it; here are some details:
I used a compression function from base R, as suggested:
memCompress()
I tried two different compression types, gzip and bzip2:
memCompress(type="gzip",...)
memCompress(type="bzip2",...)
Here is the code with a toy example:
#In R:
x<-"{\"email\":\"master#disaster.com\",\"Ranking\":[{\"Number\":37665,\"rank\":1},{\"Number\":41551,\"rank\":2},{\"Number\":21684,\"rank\":3},{\"Number\":35946,\"rank\":4}]}"
y<-memCompress(charToRaw(x),type="gzip")
# [1] 78 9c 4d c9 3d 0a 80 20 00 06 d0 bb 7c b3 04 e6 4f e5 d4 09 1a 5a a3 c1 4a 42 ca 02 ad 29 bc 7b 11 48 6d 0f de 05 e3 b4 5d a1 e0 74 38 8c
# [47] af 27 1b 5e 64 e3 ee 40 d0 ea 6d b1 db 0c d5 5d 68 4e 37 18 0f c5 0a 29 05 81 7f 0a 8a 46 f2 0d a7 42 d0 34 f9 7f 72 2a 4b 9e 86 fd 87 89
# [93] 8a cb 34 3c f6 f1 06 b3 67 2d c4
rawToChar(memDecompress(y,type="gzip"))
# [1] "{\"email\":\"master#disaster.com\",\"Ranking\":[{\"Number\":37665,\"rank\":1},{\"Number\":41551,\"rank\":2},{\"Number\":21684,\"rank\":3},{\"Number\":35946,\"rank\":4}]}"
# R to redis:
redisConnect(host="XXX.XX.XX.XXX", port=XXXX, timeout =10) # Insert your redis information (IP and port)
x<-"{\"email\":\"master#disaster.com\",\"Ranking\":[{\"Number\":37665,\"rank\":1},{\"Number\":41551,\"rank\":2},{\"Number\":21684,\"rank\":3},{\"Number\":35946,\"rank\":4}]}"
redisSet("x",x) #Wrong transfer of character string to redis
# redis-cli # On redis server
# get x # On redis server
# "X\n\x00\x00\x00\x02\x00\x03\x03\x02\x00\x02\x03\x00\x00\x00\x00\x10\x00\x00\x00\x01\x00\x04\x00\t\x00\x00\x00\x93{\"email\":\"master#disaster.com\",\"Ranking\":[{\"Number\":37665,\"rank\":1},{\"Number\":41551,\"rank\":2},{\"Number\":21684,\"rank\":3},{\"Number\":35946,\"rank\":4}]}"
redisSet("x",charToRaw(x)) #Right transfer of character string to redis
# redis-cli # On redis server
# get x # On redis server
# "{\"email\":\"master#disaster.com\",\"Ranking\":[{\"Number\":37665,\"rank\":1},{\"Number\":41551,\"rank\":2},{\"Number\":21684,\"rank\":3},{\"Number\":35946,\"rank\":4}]}"
redisSet("x",memCompress(charToRaw(x),type="gzip")) #Right transfer of compressed string to redis
# redis-cli # On redis server
# get x # On redis server
# "x\x9cM\xc9=\n\x80 \x00\x06\xd0\xbb|\xb3\x04\xe6O\xe5\xd4\t\x1aZ\xa3\xc1JB\xca\x02\xad)\xbc{\x11Hm\x0f\xde\x05\xe3\xb4]\xa1\xe0t8\x8c\xaf'\x1b^d\xe3\xee#\xd0\xeam\xb1\xdb\x0c\xd5]hN7\x18\x0f\xc5\n)\x05\x81\x7f\n\x8aF\xf2\r\xa7B\xd04\xf9\x7fr*K\x9e\x86\xfd\x87\x89\x8a\xcb4<\xf6\xf1\x06\xb3g-\xc4"
# I have not implemented the reimport to R yet. So feel free to add it :)
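# Reimport to R (a sketch: it assumes redisGet() hands back the stored bytes as a raw
# vector; if your rredis version returns something else, coerce it to raw first)
y<-redisGet("x")
rawToChar(memDecompress(y,type="gzip"))
# should print the original JSON string again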
For this short string there is no improvement. But for the long version (4,000 instead of 4 elements in the JSON) there was a huge improvement:
Compression reduced memory usage and writing time by ~85% in this case!
Compressed:
Time to write 500 users with 4,000 elements each to Redis: 12.62 sec
Memory used by 500 users with 4,000 elements each in Redis: 11.782 MB
Uncompressed:
Time to write 500 users with 4,000 elements each to Redis: 88.76 sec
Memory used by 500 users with 4,000 elements each in Redis: 78.192 MB
Out of curiosity, I spent some time looking through TCP dumps of an HTTPS connection I made. I have been able to make sense of most of it, but I am stuck on one particular TLS record. Here is the hex dump:
16 03 01 00 24 ae f5 83 cb 35 db dd 67 f5 bf 4a
c7 52 b5 16 56 59 52 40 fa 7b f8 f6 40 a7 13 74
0a f3 b0 6e 5b 4f 2b 88 a3
The previous record is a ChangeCipherSpec record (i.e. content type 0x14), if that helps. Also, I used wget to make the request.
As far as I can tell, this record belongs to the handshake subprotocol (16), uses TLS 1.0 (03 01), and has a message length of 36 bytes (00 24). And here is where I am stuck: what does the ae mean?! At first I thought it might have something to do with SNI or some other TLS extension, but so far no luck there either.
Any help interpreting this would be appreciated.
There is no HandshakeType with a value of 174 (0xae). The 174 shows up because the TLS connection has just finished negotiating a cipher suite and is now encrypting the record's payload!
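To make the record layout explicit: only the 5-byte record header (content type, version, length) is still readable at this point; the byte right after it (ae) is already ciphertext. A minimal sketch of parsing just that header (a standalone illustration, not taken from any TLS library):
public class TlsRecordHeader {
    public static void main(String[] args) {
        // first five bytes of the record above
        int[] rec = {0x16, 0x03, 0x01, 0x00, 0x24};
        int contentType = rec[0];                 // 0x16 = handshake
        String version  = rec[1] + "." + rec[2];  // 3.1 = TLS 1.0
        int length      = (rec[3] << 8) | rec[4]; // 0x0024 = 36 bytes of (encrypted) payload
        System.out.printf("type=%d version=%s length=%d%n", contentType, version, length);
    }
}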
I'm trying to implement certificate signature verification on a Microchip PIC controller (the certificates are generated and signed using OpenSSL). The Microchip PIC controller doesn't support the OpenSSL libraries, but it does have an encryption/decryption function. I was successful in getting an SSL connection between the PIC controller and a web server. My next step is to set up signature verification on the PIC controller.
After reading the PKCS#1 v2.1 RSA Cryptography Standard (http://www.rsa.com/rsalabs/node.asp?id=2125),
I realized that encryption is essentially the same operation as signature verification, and decryption is the same as signing. More specifically, both encryption and verification use the public key and the following formula:
m = s ^ e mod n
where s is the signature (or the message), e is the public exponent, n is the modulus, and m is the decoded signature (or the encrypted message). Therefore, I'm trying to use the provided encryption algorithm to perform signature verification.
In order to verify the certificate, I generate the SHA-1 hash of the certificate, decode the signature using the CA's public key and the encryption algorithm, and remove the padding from the decoded signature; the resulting hash should be equal to the SHA-1 hash of the certificate.
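As a cross-check of the math itself (not code for the PIC), here is a minimal JVM-side sketch of that exact operation; the class name and the idea of passing the modulus, exponent and signature as big-endian hex command-line arguments are just for illustration:
import java.math.BigInteger;

public class RsaVerifySketch {
    public static void main(String[] args) {
        BigInteger n = new BigInteger(args[0], 16); // public modulus, big-endian hex
        BigInteger e = new BigInteger(args[1], 16); // public exponent, e.g. 010001
        BigInteger s = new BigInteger(args[2], 16); // signature, big-endian hex
        BigInteger m = s.modPow(e, n);              // m = s^e mod n ("encrypting" the signature)
        // For a valid PKCS#1 v1.5 signature the full block is 00 01 ff .. ff 00 || DigestInfo || hash;
        // toString(16) drops the leading zero digits, so expect the output to start with "1ff...".
        System.out.println(m.toString(16));
    }
}
Fed the modulus, the 01 00 01 exponent and the signature from the question, it should reproduce the same block that openssl rsautl -raw -hexdump prints.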
However, I cannot get the two hash values to be equal. I tried to verify my assumptions and the PIC controller's results using the OpenSSL command line.
This is the hash value I got from both the OpenSSL command line and the PIC controller:
openssl rsautl -in signature.txt -verify -asn1parse -inkey pubkey.pem
-pubin
db e8 c6 cb 78 19 3c 0f-fd 96 1c 4f ed bd b2 34 45 60 bf 65
This is what I got from signature verification using OpenSSL with raw output. After removing the ff padding I end up with the ASN.1-encoded form of the certificate hash:
openssl rsautl -verify -in signature.txt -inkey pubkey.pem -pubin
-raw -hexdump
00 01 ff ff ff ff ff ff-ff ff ff ff 00 30 21 30
09 06 05 2b 0e 03 02 1a-05 00 04 14 db e8 c6 cb
78 19 3c 0f fd 96 1c 4f-ed bd b2 34 45 60 bf 65
However, this is what I got from the PIC controller, which is very different from the above:
8e fb 62 0e 09 c8 0b 49 40 1f 4d 2d a7 7d d6 8c
9b bc 95 e6 bc 98 4b 96 aa 74 e5 68 90 40 bf 43
b5 c5 02 6d ab e3 ad 7b e6 98 fd 10 22 af b9 fb
This is my signature
7951 9b3d 244a 37f6 86d7 dc02 dc18 3bb4
0f66 db3a a3c1 a254 5be5 11d3 a691 63ef
0cf2 ec59 c48b 25ad 8881 9ed2 5230 bcd6
This is my public key (I'm using a very small key just for testing, will make it larger once everything works)
96 FE CB 59 37 AE 8C 9C 6C 7A 01 50 0F D6 4F B4
E2 EC 45 D1 88 4E 1F 2D B7 1E 4B AD 76 4D 1F F1
B0 CD 09 6F E5 B7 43 CA F8 14 FE 31 B2 06 F8 7B
Exponent is 01 00 01
I'm wondering: is my assumption wrong, so that I cannot use the encryption algorithm to decode the signature, or am I doing something else wrong?
It turned out that the method I described above is correct. I was able to get matching results from hashing the certificate and from "unsigning" the signature using the encryption function.
The problem that caused my previous failed attempts was the endianness used by the Microchip PIC controller: it uses little-endian instead of big-endian. I did not pay attention to the endianness of the exponent, since 01 00 01 reads the same in either byte order. However, I was wrong; it turns out Microchip treats the exponent as a 4-byte value (RSA standard??), so it pads a 00 at the front, giving 00 01 00 01. Therefore the endianness does matter, since 00 01 00 01 is different from 01 00 01 00, and 01 00 01 00 is the little-endian form that the Microchip PIC uses.
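A tiny JVM-side illustration of that byte-order pitfall (nothing PIC-specific, just showing the two representations of the padded exponent):
public class ExponentEndianness {
    public static void main(String[] args) {
        int e = 0x00010001;                    // 65537 padded to 4 bytes, big-endian
        int swapped = Integer.reverseBytes(e); // 0x01000100, the little-endian byte order
        System.out.printf("%08x -> %08x%n", e, swapped);
    }
}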