gRPC message format

I'd like to know how to properly construct gRPC request and response messages. There are only two resources I found on the "codec" part:
Encoding: https://developers.google.com/protocol-buffers/docs/encoding
HTTP/2: https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md#message-encoding
I believe I understand both documents, and I understand HTTP/2 framing in detail (I implemented my own HTTP/2 server and client in Rust), but the message I send is always somehow invalid.
Let's take for an example this simple message (proto3):
message Test1 {
int32 a = 1;
}
I'm dealing with the RESPONSE part only for now. The wire format for the value 150 (as specified in Google's example) should be Hex[08, 96, 01] ([00001000, 10010110, 00000001]). I pack this into a DATA frame and send it as a response back to the gRPC client. Here's what the exchange looks like:
...
[ 0.010] recv SETTINGS frame <length=0, flags=0x00, stream_id=0>
(niv=0)
00000000 00 00 00 04 01 00 00 00 00 |.........|
00000009
[ 0.010] recv SETTINGS frame <length=0, flags=0x01, stream_id=0>
; ACK
(niv=0)
[ 0.010] send SETTINGS frame <length=0, flags=0x01, stream_id=0>
; ACK
(niv=0)
00000000 00 00 0e 01 04 00 00 00 0d 88 5f 8b 1d 75 d0 62 |.........._..u.b|
00000010 0d 26 3d 4c 4d 65 64 |.&=LMed|
00000017
[ 0.012] recv (stream_id=13) :status: 200
[ 0.012] recv (stream_id=13) content-type: application/grpc
[ 0.012] recv HEADERS frame <length=14, flags=0x04, stream_id=13>
; END_HEADERS
(padlen=0)
; First response header
00000000 00 00 03 00 00 00 00 00 0d 08 96 01 |............|
0000000c
[ 0.016] recv DATA frame <length=3, flags=0x00, stream_id=13>
00000000 00 00 0f 01 05 00 00 00 0d 40 0b 67 72 70 63 2d |.........#.grpc-|
00000010 73 74 61 74 75 73 81 07 |status..|
00000018
[ 0.016] recv (stream_id=13) grpc-status: 0
[ 0.016] recv HEADERS frame <length=15, flags=0x05, stream_id=13>
; END_STREAM | END_HEADERS
(padlen=0)
[ 0.021] send GOAWAY frame <length=8, flags=0x00, stream_id=0>
(last_stream_id=0, error_code=NO_ERROR(0x00), opaque_data(0)=[])
There must be a missing header, or I'm getting the message format wrong. Can someone please post a working example, for instance using curl? Thank you!

I realized that Length-Prefixed-Message really means Length-Prefix, then Message: every message carried in a DATA frame must be prefixed with a 1-byte compressed flag and a 4-byte big-endian message length.
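A minimal sketch of that framing (written in Python purely for illustration), using the encoded Test1 bytes from above; the 5-byte prefix is the 1-byte compressed flag plus the 4-byte big-endian message length described in the PROTOCOL-HTTP2 document:
import struct
# Test1 { a = 150 } encoded by protobuf: key 0x08 (field 1, wire type 0), varint 150 = 0x96 0x01
message = bytes([0x08, 0x96, 0x01])
# gRPC Length-Prefixed-Message: compressed flag (0 = uncompressed) + 4-byte big-endian length + message
payload = struct.pack(">BI", 0, len(message)) + message
print(payload.hex())  # 0000000003089601 -> this is what the DATA frame should carry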

Related

Lua TCP communication

I have a proprietary client application that sends and receives TCP data packets to/from a network device like this:
Sent: [14 bytes]
01 69 80 10 01 0E 0F 00 00 00 1C 0D 64 82 .i..........d.
Received: [42 bytes] [+00:000]
01 69 80 10 01 2A 00 D0 DC CC 0C BB C0 40 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 .i...*.......#..................
00 00 00 00 00 00 1C 0D F6 BE ..........
I need to make the same requests with Lua. I've found some working examples of such communication, but I can't understand what string I should give as an argument to
tcp:send("string");
Should I give it a string of hex, i.e.
'01698010010E0F0000001C0D6482'
or first convert the hex to ASCII? If so, then how (the zeroes don't convert to symbols)?
You should give it the string of bytes you want to send. If you write "016980..." it's a string containing the decimal values 48 (ASCII digit 0), 49 (ASCII digit 1), 54 (ASCII digit 6), and so on, which is not what you want to send. You want to send the decimal values 1, 105 (hex 69), 128 (hex 80) and so on.
Luckily, Lua strings can hold any bytes (unlike e.g. Python strings). So you just have to write a string with those bytes. You can write any byte into a string using \x and then a 2-digit hex code. So you could write your call like this:
tcp:send("\x01\x69\x80\x10\x01\x0E\x0F\x00\x00\x00\x1C\x0D\x64\x82")
If you are using a Lua version older than 5.2, \x is not available, but you can still use \ followed by a 3-digit decimal code, e.g. "\001\105\128" for the first three bytes.

Get consistent md5 checksum of gzipped file using data.table and R.utils

I'm trying to get the same checksum from a file compressed with R.utils::gzip and one written with the compress = "gzip" argument of data.table::fwrite, but I keep getting different results. Here is an example:
library(data.table); library(R.utils); library(digest)
dt <- data.frame(a = c(1, 2, 3))
fwrite(dt, "r-utils.csv")
gzip("r-utils.csv")
fwrite(dt, "datatable-v1.csv.gz", compress = "gzip")
digest(file = "r-utils.csv.gz")
#> [1] "8d4073f4966f94ac5689c6e85de2d92d"
digest(file = "datatable-v1.csv.gz")
#> [1] "5d58f9eeefb166c6d50ac00f3215e689"
Initially I thought that fwrite was storing the filename and timestamp in the output file (per the usual gzip behaviour without the --no-name option), but that doesn't appear to be the case, since I get the same checksum from different calls to fwrite:
fwrite(dt, "datatable-v2.csv.gz", compress = "gzip")
digest(file = "datatable-v2.csv.gz")
#> [1] "5d58f9eeefb166c6d50ac00f3215e689"
Any ideas on what might be causing the difference?
PS. Incidentally, the checksums of the uncompressed files are the same:
$ md5sum datatable-v1.csv
7e034138dc91aa575d929c1ed65aa67c datatable-v1.csv
$ md5sum r-utils.csv
7e034138dc91aa575d929c1ed65aa67c r-utils.csv
If you look at the bytes, you can see they are different:
r-utils.csv.gz
Offset: 00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F
00000000: 1F 8B 08 00 00 00 00 00 00 06 4B E4 E5 32 E4 E5 ..........Kde2de
00000010: 32 E2 E5 32 E6 E5 02 00 21 EB 62 BF 0C 00 00 00 2be2fe..!kb?....
and
datatable-v2.csv.gz
Offset: 00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F
00000000: 1F 8B 08 00 00 00 00 00 00 0A 4B E4 E5 02 00 56 ..........Kde..V
00000010: EF 2F E3 03 00 00 00 1F 8B 08 00 00 00 00 00 00 o/c.............
00000020: 0A 33 E4 E5 32 E2 E5 32 E6 E5 02 00 49 C4 FF 4D .3de2be2fe..ID.M
00000030: 09 00 00 00 ....
So the data.table one is longer. This appears to be because the default compression settings are different. Specifically, it looks like the two methods use a different "window size" parameter. The data.table code uses a windowBits value of 31 (15+16), which includes the trailing checksum in the output, but the R.utils::gzip function uses the base R gzfile() function, which uses a windowBits value of -15 (-MAX_WBITS), and that negative value means a trailing checksum is not used. So I think that accounts for the extra bytes in the data.table output.
Because different compression levels, checksums and gzip headers can be used, it's not necessarily the case that you will get the same checksum for compressed versions of a data file if two different compression pipelines are used. So it's possible for the data inside to be identical but the actual compressed files to be different.
Since these settings are part of the C code of the package and of base R, this is not something you will be able to change from R code. It's not possible for these two different methods to return identical output.
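If you only need to confirm that the payloads match, you can compare the decompressed bytes rather than the compressed files. A quick check, sketched in Python here purely for illustration (any gzip implementation would do):
import gzip
# decompress both files and compare the payloads; they should match even though the .gz bytes differ
with gzip.open("r-utils.csv.gz", "rb") as f1, gzip.open("datatable-v1.csv.gz", "rb") as f2:
    print(f1.read() == f2.read())  # expected: True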

Problem sending HDLC frames using a GSM modem

I have an SL7000 meter and an iRZ GSM modem. When I send over an RS-485 cable, everything works. But when I try to use the GSM modem, I get issues.
When I send an SNRM like this:
7E A0 0A 00 22 00 51 03 93 6A 34 7E
I get a normal UA.
But when I try to send an SNRM like this:
7E A0 21 00 22 00 51 03 93 6B 21 81 80 12 05 01 80 07 04 00 00 00 02 08 04 00 00 00 01 3D 93 7E (it's from DXDLMSDirector)
I get absolutely nothing!
Maybe there is some trick to using HDLC with a GSM modem? Maybe special delays or something?
If both of these frames work via RS-485, but not via GSM, then there are a couple of possible answers:
1) the addressing you are using is not permitted if this is a separate port
2) if it is the same port on the meter, then the GSM modem is not directing traffic to the same RS-485 address

Cannot write CIE's IEEE address to IAS zone device

I am trying to add the following IAS zone devices (from HEIMAN) to my ZCL co-ordinator (CIE) + IoT gateway (from NXP):
emergency button - gets added easily and triggers successfully
door sensor - joins the network but no enrolment process is seen
Q1. Why does one device go through the enrolment process correctly while the other doesn't? My understanding is that the ZCL stack should do all the enrolment activities. Am I correct?
Q2. I tried writing the CIE's IEEE address to the node's IAS Zone cluster (0x0500) attribute (0x0010), which has attribute type 0xF0, but got no response. How can I tackle this issue?
For a CIE device, the enrolment is more complex and the ZCL stack will not perform this for you (although this may depend on the stack, and any add-on features it provides).
A CIE device may perform its own service discovery using the ZDO Match Descriptor functions. It may send a MatchDescriptorRequest looking for an IAS server, and you will need to respond with the MatchDescriptorResponse to report that you support this. Typically the request will be looking for the IAS Zone Server cluster (0x500), but you should inspect the packets and respond appropriately. See 2.4.3.1.7 Match_Desc_req, and 2.4.4.1.7 Match_Desc_rsp of the ZigBee specification. If an IAS device is looking for a zone controller, it may not accept any requests until it receives this response, and in fact some devices may leave the network if they don't find the services they are requesting.
Next, it may enrol with the IAS service by sending the ZoneEnrollRequest command, and your application will need to respond to this with the ZoneEnrollResponse to tell the device that it is now enrolled in your system. Refer to 8.2.2.4.2 Zone Enroll Request Command in the ZCL specification.
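As a rough sketch of that last step (Python, purely for illustration; the command and frame-control values below come from the ZCL IAS Zone cluster definition, and the sequence number must echo the one from the Zone Enroll Request):
import struct
# ZCL header: frame control 0x01 (cluster-specific, client -> server), transaction sequence number,
# command 0x00 (Zone Enroll Response); payload: enroll response code 0x00 (success) + assigned zone ID
seq, zone_id = 0x01, 0x00
zone_enroll_response = struct.pack("<5B", 0x01, seq, 0x00, 0x00, zone_id)
print(zone_enroll_response.hex(" "))  # 01 01 00 00 00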
From your traces, it is hard to say what is happening as the log viewer doesn't provide any information on the contents of the Data Request frames in this view. However, we can see a lot of frames being sent from the device to the coordinator, and it is likely that it is performing one, or both of the discovery services discussed above. You should inspect the requests to find out what they are, and check the appropriate sections of the ZigBee specification, or the ZigBee Cluster Library Specification.
Writing the CIE IEEE address to the IAS zone device worked successfully. Tested using an XBee S2C.
Explicit Addressing Command Frame (API 2)
7E 00 22 7D 31 01 28 6D 97 00 01 04 2B 7D 5D FF FE E8 01 05 00 01 04 00 20 00 01 02 10 00 F0 6B 7A 29 41 00 A2 7D 33 00 FD
Start delimiter: 7E
Length: 00 22 (34)
Frame type: 11 (Explicit Addressing Command Frame)
Frame ID: 01 (1)
64-bit dest. address: 28 6D 97 00 01 04 2B 7D
16-bit dest. address: FF FE
Source endpoint: E8
Dest. endpoint: 01
Cluster ID: 05 00
Profile ID: 01 04
Broadcast radius: 00 (0)
Transmit options: 20
RF data: 00 01 02 10 00 F0 6B 7A 29 41 00 A2 13 00
Checksum: FD
Explicit RX Indicator (API 2)
7E 00 16 91 28 6D 97 00 01 04 2B 7D 5D A3 87 01 E8 05 00 01 04 21 18 01 04 00 3A
Start delimiter: 7E
Length: 00 16 (22)
Frame type: 91 (Explicit RX Indicator)
64-bit source address: 28 6D 97 00 01 04 2B 7D
16-bit source address: A3 87
Source endpoint: 01
Destination endpoint: E8
Cluster ID: 05 00
Profile ID: 01 04
Receive options: 21
RF data: 18 01 04 00
Checksum: 3A
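For reference, the RF data above is a standard ZCL Write Attributes exchange. A small sketch that rebuilds the request payload and annotates the response (Python, for illustration; the IEEE address is the unescaped value from the frame above, sent least-significant byte first as ZCL requires):
import struct
# ZCL header: frame control 0x00 (profile-wide, client -> server), sequence 0x01, command 0x02 (Write Attributes)
# Write record: attribute 0x0010 (IAS_CIE_Address), data type 0xF0 (IEEE address), 8-byte value
cie_ieee = 0x0013A20041297A6B
zcl = struct.pack("<BBBHB", 0x00, 0x01, 0x02, 0x0010, 0xF0) + cie_ieee.to_bytes(8, "little")
print(zcl.hex(" "))  # 00 01 02 10 00 f0 6b 7a 29 41 00 a2 13 00  (the RF data sent)
# Response RF data 18 01 04 00: frame control 0x18, sequence 0x01,
# command 0x04 (Write Attributes Response), status 0x00 = SUCCESS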

IRP_MJ_DEVICE_CONTROL — how to?

While coding an app that uses a serial port, I was compelled during debugging to work with the low-level (link control) protocol.
And here my problems began.
The sniffer gives me these values:
IOCTL_SERIAL_SET_BAUD_RATE 80 25 00 00 means baud rate 9600. Well, 00 c2 01 00 means 115200. How is one supposed to guess that?
IOCTL_SERIAL_SET_TIMEOUTS 32 00 00 00 05 00 00 00 00 00 00 00 60 09 00 00 00 00 00 00 - what does this mean? What is the value? What is the range of admissible values? I have read MSDN ("Setting Read and Write Timeouts for a Serial Device", for example), but it gives no concrete values. What should I read? How do I interpret the sniffer data? And how do I control it?
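Both buffers are little-endian 32-bit integers: the baud-rate buffer is a single DWORD, and the timeouts buffer is the five DWORDs of the SERIAL_TIMEOUTS structure (ReadIntervalTimeout, ReadTotalTimeoutMultiplier, ReadTotalTimeoutConstant, WriteTotalTimeoutMultiplier, WriteTotalTimeoutConstant; constants in milliseconds, multipliers in milliseconds per byte). A quick decode, sketched in Python for illustration:
import struct
# IOCTL_SERIAL_SET_BAUD_RATE input buffer: one little-endian DWORD
print(struct.unpack("<I", bytes.fromhex("80 25 00 00"))[0])   # 9600 ("00 c2 01 00" decodes to 115200)
# IOCTL_SERIAL_SET_TIMEOUTS input buffer: the five DWORDs of SERIAL_TIMEOUTS
data = bytes.fromhex("32 00 00 00 05 00 00 00 00 00 00 00 60 09 00 00 00 00 00 00")
print(struct.unpack("<5I", data))   # (50, 5, 0, 2400, 0)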
