I'm trying to work out why my gRPC call isn't working, but I can't find how to turn on debugging so that I can see the data being sent and received over the gRPC connection.
How do you turn on debugging for gRPC calls?
You can set the GRPC_TRACE environment variable to all to have gRPC dump detailed information about what the connection is doing:
export GRPC_TRACE=all
Edit from a comment: apparently you also need to set:
export GRPC_VERBOSITY=DEBUG
In Go, you need to set the GODEBUG environment variable to see HTTP/2 traces, e.g. the headers sent by gRPC:
GODEBUG=http2debug=1 # enable verbose HTTP/2 debug logs
GODEBUG=http2debug=2 # ... even more verbose, with frame dumps
The output will then be sent to stdout. Here's an example:
{"level":"info","msg":"2017/06/11 08:52:20 http2: Framer 0xc42009c0e0: wrote SETTINGS len=0","time":"2017-06-11T08:52:20Z"}
{"level":"info","msg":"2017/06/11 08:52:20 http2: Framer 0xc42009c0e0: wrote WINDOW_UPDATE len=4 (conn) incr=983025","time":"2017-06-11T08:52:20Z"}
{"level":"info","msg":"2017/06/11 08:52:20 http2: Framer 0xc42009c0e0: read SETTINGS len=18, settings: ENABLE_PUSH=0, MAX_CONCURRENT_STREAMS=0, INITIAL_WINDOW_SIZE=1048576","time":"2017-06-11T08:52:20Z"}
{"level":"info","msg":"2017/06/11 08:52:20 http2: Framer 0xc42009c0e0: read WINDOW_UPDATE len=4 (conn) incr=983041","time":"2017-06-11T08:52:20Z"}
{"level":"info","msg":"2017/06/11 08:52:20 http2: Framer 0xc42009c0e0: wrote SETTINGS flags=ACK len=0","time":"2017-06-11T08:52:20Z"}
{"level":"info","msg":"2017/06/11 08:52:20 http2: Framer 0xc42009c0e0: read SETTINGS flags=ACK len=0","time":"2017-06-11T08:52:20Z"}
{"level":"info","msg":"2017/06/11 08:52:20 http2: Framer 0xc42009c0e0: read HEADERS flags=END_HEADERS|PRIORITY stream=3 len=249","time":"2017-06-11T08:52:20Z"}
{"level":"info","msg":"2017/06/11 08:52:20 http2: decoded hpack field header field \":authority\" = \"\"","time":"2017-06-11T08:52:20Z"}
{"level":"info","msg":"2017/06/11 08:52:20 http2: decoded hpack field header field \":path\" = \"/internal.push.v1.UnifiedDevicePush/SendMessage\"","time":"2017-06-11T08:52:20Z"}
{"level":"info","msg":"2017/06/11 08:52:20 http2: decoded hpack field header field \":method\" = \"POST\"","time":"2017-06-11T08:52:20Z"}
{"level":"info","msg":"2017/06/11 08:52:20 http2: decoded hpack field header field \":scheme\" = \"http\"","time":"2017-06-11T08:52:20Z"}
{"level":"info","msg":"2017/06/11 08:52:20 http2: decoded hpack field header field \"content-type\" = \"application/grpc\"","time":"2017-06-11T08:52:20Z"}
{"level":"info","msg":"2017/06/11 08:52:20 http2: decoded hpack field header field \"te\" = \"trailers\"","time":"2017-06-11T08:52:20Z"}
{"level":"info","msg":"2017/06/11 08:52:20 http2: decoded hpack field header field \"user-agent\" = \"grpc-java-netty/1.0.3\"","time":"2017-06-11T08:52:20Z"}
{"level":"info","msg":"2017/06/11 08:52:20 http2: decoded hpack field header field \"root-common.xirequestid-bin\" = \"ChIJzE6lBfCTCsYRoIIJujc92JY=\"","time":"2017-06-11T08:52:20Z"}
{"level":"info","msg":"2017/06/11 08:52:20 http2: decoded hpack field header field \"te\" = \"trailers\"","time":"2017-06-11T08:52:20Z"}
export GRPC_GO_LOG_VERBOSITY_LEVEL=99
export GRPC_GO_LOG_SEVERITY_LEVEL=info
Try this with the latest grpc-go version.
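If you would rather configure this in code than through the environment, grpc-go's grpclog package can install a logger programmatically. This is only an illustrative sketch, not part of the original answer; the verbosity value mirrors the 99 above:

package main

import (
    "os"

    "google.golang.org/grpc/grpclog"
)

func main() {
    // Install a logger before any other gRPC activity: info and warning logs
    // go to stdout, errors to stderr, with verbosity 99 (roughly equivalent to
    // GRPC_GO_LOG_SEVERITY_LEVEL=info and GRPC_GO_LOG_VERBOSITY_LEVEL=99).
    grpclog.SetLoggerV2(grpclog.NewLoggerV2WithVerbosity(os.Stdout, os.Stdout, os.Stderr, 99))

    // ... create your grpc.Dial / grpc.NewServer and make calls as usual ...
}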
From the docs:
GRPC_VERBOSITY is used to set the minimum level of log messages printed by gRPC (supported values are DEBUG, INFO, and ERROR). If this environment variable is unset, only ERROR logs will be printed.
Also check GRPC_TRACE; there are 15+ gRPC environment variables.
A note for Windows users, quoting from the docs:
Known limitations: GRPC_TRACE=tcp is currently not implemented for Windows (you won't see any TCP traces).
You can use the Mediator Tool to debug and trace gRPC calls.
It is a GUI tool, much like Charles, but if you need to decode the protobuf message body, the gRPC server needs to support Server Reflection.
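If the server is written in Go, enabling Server Reflection is essentially a one-liner with the grpc-go reflection package. A minimal sketch (the port and the elided service registration are placeholders):

package main

import (
    "log"
    "net"

    "google.golang.org/grpc"
    "google.golang.org/grpc/reflection"
)

func main() {
    lis, err := net.Listen("tcp", ":50051")
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }

    s := grpc.NewServer()
    // ... register your application services here ...

    // Expose the reflection service so GUI tools can discover services and
    // decode protobuf message bodies without needing the .proto files.
    reflection.Register(s)

    log.Fatal(s.Serve(lis))
}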
There is another tool, grpcdebug.
grpcdebug is a command line interface focused on simplifying the debugging process of gRPC applications. grpcdebug fetches the internal states of the gRPC library from the application via the gRPC protocol and provides a human-friendly UX to browse them. Currently, it supports Channelz/Health Checking/CSDS (aka admin services). In other words, it can fetch statistics about how many RPCs have been sent or have failed on a given gRPC channel, it can inspect address resolution results, and it can dump the in-effect xDS configuration that directs the routing of RPCs.
Here are some samples
Usage 1: Raw Channelz Output
For all Channelz commands, you can add --json to get the raw Channelz output.
grpcdebug localhost:50051 channelz channels --json
grpcdebug localhost:50051 channelz servers --json
Usage 2: List Client Channels
grpcdebug localhost:50051 channelz channels
# Channel ID Target State Calls(Started/Succeeded/Failed) Created Time
# 7 localhost:10001 READY 5136/4631/505 8 minutes ago
Usage 3: List Servers
grpcdebug localhost:50051 channelz servers
# Server ID Listen Addresses Calls(Started/Succeeded/Failed) Last Call Started
# 1 [:::10001] 2852/2530/322 now
# 2 [:::50051] 29/28/0 now
# 3 [:::50052] 4/4/0 26 seconds ago
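For these commands to return anything, the target application has to expose the admin services that grpcdebug talks to. In a Go server this can be done with the grpc-go admin package; a minimal sketch (the port and the elided service registration are placeholders):

package main

import (
    "log"
    "net"

    "google.golang.org/grpc"
    "google.golang.org/grpc/admin"
)

func main() {
    lis, err := net.Listen("tcp", ":50051")
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }

    s := grpc.NewServer()

    // Register the admin services (Channelz, and CSDS when xDS is in use) so
    // that `grpcdebug localhost:50051 channelz ...` has something to query.
    cleanup, err := admin.Register(s)
    if err != nil {
        log.Fatalf("failed to register admin services: %v", err)
    }
    defer cleanup()

    // ... register your application services here ...

    log.Fatal(s.Serve(lis))
}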
Related
I sent a default HTTP Raw Request with a host and port that I'm certain I can connect to from my Windows 10 device (tested with Test-NetConnection, and I also sent data via the Ubuntu app: echo "DEMO Message" | nc mYHOST 6021).
However, when I send the request in JMeter, I get the errors below.
The sampler result is as follows:
Thread Name:Thread Group 1-1
Sample Start:2021-09-17 15:55:08 IST
Load time:9300
Connect Time:0
Latency:0
Size in bytes:921
Sent bytes:0
Headers size in bytes:0
Body size in bytes:921
Sample Count:1
Error Count:1
Data type ("text"|"bin"|""):text
Response code:500
Response message:java.net.SocketTimeoutException: Timeout exceeded while reading from socket
SampleResult fields:
ContentType:
DataEncoding: null
The response is as follows:
java.net.SocketTimeoutException: Timeout exceeded while reading from socket
java.net.SocketTimeoutException: Timeout exceeded while reading from socket
at kg.apc.io.SocketChannelWithTimeouts.read(SocketChannelWithTimeouts.java:133)
at kg.apc.jmeter.samplers.HTTPRawSampler.readResponse(HTTPRawSampler.java:64)
at kg.apc.jmeter.samplers.HTTPRawSampler.processIO(HTTPRawSampler.java:163)
at kg.apc.jmeter.samplers.AbstractIPSampler.sample(AbstractIPSampler.java:112)
at kg.apc.jmeter.samplers.HTTPRawSampler.sample(HTTPRawSampler.java:42)
at org.apache.jmeter.threads.JMeterThread.doSampling(JMeterThread.java:630)
at org.apache.jmeter.threads.JMeterThread.executeSamplePackage(JMeterThread.java:558)
at org.apache.jmeter.threads.JMeterThread.processSampler(JMeterThread.java:489)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:256)
at java.base/java.lang.Thread.run(Thread.java:830)
This demo host and port are working, though, and I used exactly the same values in my real request.
Similar issues occur when I use a JSR223 Sampler with the code below.
def client = new Socket('DemoHostname', 9000);
client.setSoTimeout(2000);
client << "Hello ";
client.withStreams { input, output ->
    def reader = input.newReader()
    def response = reader.readLine()
    log.info('Response = ' + response)
}
client.close()
The same happens with JMeter's TCP Sampler.
So, based on the suggestion, I sent it as an HTTP request to the websocket input of Logstash.
The good thing is that the record gets added, but the JMeter HTTP request keeps running endlessly. It seems that since Logstash communicates via TCP, it's unable to provide an HTTP response.
What are you trying to achieve by sending HELLO TEST to echo.websocket.org?
If you want to send an HTTP request there, you need to do something like:
GET / HTTP/1.1
Host: echo.websocket.org
Connection: close
If you need to load test a WebSocket server, you're using the wrong plugin: go for JMeter WebSocket Samplers instead, and consider using a more "alive" endpoint, as echo.websocket.org doesn't seem to be working anymore.
I am trying to add HTTP/2 support to a server that already supports HTTP/1.1 with TLS v1.2. I am writing it in Go, where I define the TLS config like this:
tlsConfig := &tls.Config{
    Certificates: []tls.Certificate{cert},
    ServerName:   "mysrvr",
    NextProtos:   []string{"h2", "http/1.1", "http/1.0"},
    Time:         time.Now,
    Rand:         rand.Reader,
}
As is evident, I have used the "h2" string to set up the ALPN handshake.
Now when I make a request via curl, I receive this request:
$ curl -v https://127.0.0.1:8000 -k --http2
When I parse the request, it shows a PRI method being sent first instead of GET:
HTTP/2.0
PRI
I got some idea about the PRI method from https://www.rfc-editor.org/rfc/rfc7540#page-78, which says the following:
This method is never used by an actual client.
This method will appear to be used when an HTTP/1.1 server or
intermediary attempts to parse an HTTP/2 connection preface.
My question now is: why was the PRI request sent when the server clearly supports HTTP/2? Do I need to parse it and respond with an empty SETTINGS frame in accordance with the HTTP/2 spec, or should the Go http2 runtime have taken care of it?
I am using http.ReadRequest to parse client requests, but that doesn't seem to work for HTTP/2 requests even when I ignore the PRI requests (as suggested below).
The first message an HTTP/2 client should send is this PRI message. From the HTTP/2 specification:
In HTTP/2, each endpoint is required to send a connection preface as a final confirmation of the protocol in use and to establish the initial settings for the HTTP/2 connection. The client and server each send a different connection preface.
The client connection preface starts with a sequence of 24 octets, which in hex notation is:
0x505249202a20485454502f322e300d0a0d0a534d0d0a0d0a
That is, the connection preface starts with the string PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n). This sequence MUST be followed by a SETTINGS frame (Section 6.5), which MAY be empty.
...
Note: The client connection preface is selected so that a large proportion of HTTP/1.1 or HTTP/1.0 servers and intermediaries do not attempt to process further frames.
The point of this message is that it is a fake HTTP/1-like message, so any server that is not HTTP/2-aware should respond with an error.
Any HTTP/2 server should expect this message to be sent, should just ignore it, and then carry on speaking HTTP/2.
In fact, if this message is NOT sent, then the server should treat this as an error and not continue:
Clients and servers MUST treat an invalid connection preface as a connection error (Section 5.4.1) of type PROTOCOL_ERROR. A GOAWAY frame (Section 6.8) MAY be omitted in this case, since an invalid preface indicates that the peer is not using HTTP/2.
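In Go specifically, if you let the standard net/http server (or golang.org/x/net/http2) own the connection instead of reading it with http.ReadRequest, the preface and SETTINGS exchange are handled for you. A minimal sketch, assuming a certificate and key on disk (the file names and port are placeholders):

package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // r.Proto reports "HTTP/2.0" on connections negotiated via ALPN.
        fmt.Fprintf(w, "hello over %s\n", r.Proto)
    })

    srv := &http.Server{Addr: ":8000", Handler: handler}

    // ListenAndServeTLS enables HTTP/2 via ALPN automatically; the server-side
    // http2 machinery consumes the client connection preface (the PRI bytes)
    // and replies with its own SETTINGS frame, so no "PRI request" ever
    // reaches your handler.
    log.Fatal(srv.ListenAndServeTLS("server.crt", "server.key"))
}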
How do popular HTTP servers or frameworks use the HTTP protocol to implement asynchronous streams of data from an HTTP server to an HTTP client?
(The client could be a browser or a non-browser.)
[client] ----request for data----> [server]
[client] <-------xxx------[server]
[---delay---]
[client] <-------xxxxxx---[server]
[---delay---]
[client] <-------x--------[server]
[---delay---]
[client] <-------xxx------[server]
[---delay---]
[client] <-------xxxx-----[server]
The delay can be non-deterministic.
Each x is an individual data object that makes sense to the server and the client.
Just to emphasize, I am not looking for implementations of streams (e.g. Reactive Streams, RxJava, etc.); I would like to know the details of how the HTTP protocol is used to implement this asynchronous streaming of data (not video streaming, but, say, JSON streaming).
For example, which HTTP headers are used, what kind of connection is used, etc.
Basically, the HTTP headers of interest here are:
header-name: header-value (comment)
connection: keep-alive (keep the connection open)
transfer-encoding: chunked (data is sent in a series of chunks)
accept: application/stream+json (or other similar streaming media type)
content-type: application/stream+json (or other similar streaming media type)
This information was gathered by observing HTTP traffic between Postman/curl and a simple Spring WebFlux service.
For a complete description of these headers and their values, see:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers
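To make this concrete, here is a minimal Go handler sketch (an illustration, not from the original answer) that streams JSON objects this way: leaving Content-Length unset makes net/http fall back to chunked transfer encoding, and http.Flusher pushes each chunk to the client as soon as it is written.

package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
    "time"
)

func streamHandler(w http.ResponseWriter, r *http.Request) {
    // The media type is an assumption; application/x-ndjson is also common.
    w.Header().Set("Content-Type", "application/stream+json")

    flusher, ok := w.(http.Flusher)
    if !ok {
        http.Error(w, "streaming unsupported", http.StatusInternalServerError)
        return
    }

    enc := json.NewEncoder(w)
    for i := 0; i < 5; i++ {
        // Each write becomes one or more HTTP chunks because no
        // Content-Length header was set.
        enc.Encode(map[string]interface{}{"seq": i, "msg": fmt.Sprintf("event %d", i)})
        flusher.Flush()             // push the chunk to the client now
        time.Sleep(1 * time.Second) // the non-deterministic delay from the diagram
    }
}

func main() {
    http.HandleFunc("/stream", streamHandler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}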
I want to create a server similar to the Twitter Streaming API, so a client can read the response in real time while staying connected. How do I do that in Crystal?
Extracted from this issue:
#MakeNowJust says:
You should append \n to the sent text so that gets works on the client side, and call io.flush.
require "http/server"
port = 5000
server = HTTP::Server.new(port) do |context|
  loop do
    context.response.puts "Something\n"
    context.response.flush
    sleep 1
  end
end
puts "Listening on #{port}"
server.listen
#rx14 says:
Crystal already handles writing chunked responses. Just keep writing to the output response, and call flush when you want to ensure the client receives the message. If there is no Content-Length header, the response will automatically select chunked encoding for you.
I have a question regarding sending MIME attachments over HTTP.
The HTTP spec says the following:
“C.4 No Content-Transfer-Encoding: HTTP does not use the Content-Transfer-Encoding (CTE) field of RFC 1521. Proxies and gateways from MIME-compliant protocols to HTTP must remove any non-identity CTE ("quoted-printable" or "base64") encoding prior to delivering the response message to an HTTP client. Proxies and gateways from HTTP to MIME-compliant protocols are responsible for ensuring that the message is in the correct format and encoding for safe transport on that protocol, where "safe transport" is defined by the limitations of the protocol being used. Such a proxy or gateway should label the data with an appropriate Content-Transfer-Encoding if doing so will improve the likelihood of safe transport over the destination protocol.”
Does this mean that, specifically when sending MIME attachments over HTTP, we shouldn't specify a Content-Transfer-Encoding of quoted-printable or base64?
Also, what is the behavior of Content-Transfer-Encoding when I send such attachments over other transports, like JMS or mail? For example, in a SOAP over JMS message?
I also found the following relevant passage in RFC 4130:
“5.2. Unused MIME Headers and Operations
5.2.1. Content-Transfer-Encoding Not Used in HTTP Transport
HTTP can handle binary data and so there is no need to use the content transfer encodings of MIME [1]. This difference is discussed in [3], Section 19.4.5. However, a content transfer encoding value of binary or 8-bit is permissible but not required. The absence of this header MUST NOT result in transaction failure. Content transfer encoding of MIME body parts within the AS2 message body is also allowed.”
So I am thoroughly confused about the behavior of MIME attachments specific to the HTTP protocol and would like to have it clarified.
HTTP is not MIME; it just borrows from the MIME message format. Payloads in HTTP are binary, and there simply is no Content-Transfer-Encoding header field. You can specify it, but it has zero effect and just distracts people looking at wire traces.