How to keep a gRPC channel open in C#

I0804 13:52:46.505722 0 C:\jenkins\workspace\gRPC_build_artifacts\platform\windows\workspace_csharp_ext_windows_x86\src\core\lib\surface\call.c:1802: grpc_call_start_batch(call=05BEFFD8, ops=05EFED58, nops=6, tag=05BC64E0, reserved=00000000)
I0804 13:52:46.505722 0 C:\jenkins\workspace\gRPC_build_artifacts\platform\windows\workspace_csharp_ext_windows_x86\src\core\lib\surface\call.c:1445: ops[0]: SEND_INITIAL_METADATA(nil)
I0804 13:52:46.505722 0 C:\jenkins\workspace\gRPC_build_artifacts\platform\windows\workspace_csharp_ext_windows_x86\src\core\lib\surface\call.c:1445: ops[1]: SEND_MESSAGE ptr=05BDAFD0
I0804 13:52:46.505722 0 C:\jenkins\workspace\gRPC_build_artifacts\platform\windows\workspace_csharp_ext_windows_x86\src\core\lib\surface\call.c:1445: ops[2]: SEND_CLOSE_FROM_CLIENT
I0804 13:52:46.505722 0 C:\jenkins\workspace\gRPC_build_artifacts\platform\windows\workspace_csharp_ext_windows_x86\src\core\lib\surface\call.c:1445: ops[3]: RECV_INITIAL_METADATA ptr=05BC64FC
I0804 13:52:46.505722 0 C:\jenkins\workspace\gRPC_build_artifacts\platform\windows\workspace_csharp_ext_windows_x86\src\core\lib\surface\call.c:1445: ops[4]: RECV_MESSAGE ptr=05BC6508
I0804 13:52:46.505722 0 C:\jenkins\workspace\gRPC_build_artifacts\platform\windows\workspace_csharp_ext_windows_x86\src\core\lib\surface\call.c:1445: ops[5]: RECV_STATUS_ON_CLIENT metadata=05BC650C status=05BC6518 details=05BC651C
I am confused about why gRPC logs a SEND_CLOSE_FROM_CLIENT message between calls when I thought the TCP connection was kept open.
Essentially I just have a class whose constructor opens a client channel to a gRPC server. A method on this class contains a loop that calls an RPC method on each iteration. However, it's quite slow, and I suspect it is because the connection is being recreated each time it makes the RPC call.
How can I keep the connection open? Is this a case for duplex streaming?

SEND_CLOSE_FROM_CLIENT is a primitive defined by the gRPC C core library and expresses a "half-close" in HTTP/2 terminology: it notifies the server that the client has finished sending requests in a streaming RPC. Every RPC corresponds 1:1 to an HTTP/2 stream, and unary calls are just a special case of streaming calls.
SEND_CLOSE_FROM_CLIENT has nothing to do with closing TCP connections (I think you were too quick to draw conclusions without first reading up on the wire protocol and HTTP/2: https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md).
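In other words, the channel (and its TCP connection) stays open across calls; each RPC only opens and half-closes its own HTTP/2 stream. To rule out per-call connection setup as the cause of the slowness, create the channel and stub once (e.g. in the constructor) and reuse them for every iteration. A minimal sketch in grpc-java, assuming the stock Greeter classes generated from gRPC's hello-world proto and a stand-in address (the C# API follows the same pattern: build one Channel, reuse its client):

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class GreeterClient {
    private final ManagedChannel channel;
    private final GreeterGrpc.GreeterBlockingStub stub;

    public GreeterClient() {
        // Built once; all RPCs made through stubs on this channel are
        // multiplexed as streams over the same HTTP/2 connection.
        channel = ManagedChannelBuilder
                .forAddress("localhost", 50051)  // stand-in address
                .usePlaintext()
                .build();
        stub = GreeterGrpc.newBlockingStub(channel);
    }

    public void run() {
        for (int i = 0; i < 100; i++) {
            // Each iteration is a new HTTP/2 stream, not a new TCP
            // connection; SEND_CLOSE_FROM_CLIENT only half-closes the stream.
            HelloReply reply = stub.sayHello(
                    HelloRequest.newBuilder().setName("call " + i).build());
        }
        channel.shutdown();
    }
}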

Related

TCP PSH flag is not set for packets for which it should be

As far as I understand from "Is it possible to handle TCP flags with TCP socket?" and what I've read so far, a server application does not handle and cannot access TCP flags at all.
And from what I've read in the RFCs, the PSH flag tells the receiving host's kernel to forward data from the receive buffer to the application.
I've found this interesting read (https://flylib.com/books/en/3.223.1.209/1/) and it mentions that "Today, however, most APIs don't provide a way for the application to tell its TCP to set the PUSH flag. Indeed, many implementors feel the need for the PUSH flag is outdated, and a good TCP implementation can determine when to set the flag by itself."
"Most Berkeley-derived implementations automatically set the PUSH flag if the data in the segment being sent empties the send buffer. This means we normally see the PUSH flag set for each application write, because data is usually sent when it's written."
If my understanding is correct and the TCP stack decides by itself, based on various conditions, when to set the PSH flag, then what can I do if the TCP stack doesn't set the PSH flag when it should?
I have a server application written in Java and a client written in C; there are 1000 clients, each on a separate host, and they all connect to the server. A mechanism which acts as a keep-alive involves the server sending each client a request for some info every 60 seconds. The response is always smaller than the MTU (1500 bytes), so the response frames should always have the PSH flag set.
At some point a client sent 50 replies to a single request, all of them with the PSH flag not set. The buffer probably filled up before the client had even sent the same reply a third or fourth time, and the receiving application threw an exception because it read more data from the host's receive buffer than it was expecting.
My question is: what can I do in such a situation if I cannot communicate with the TCP stack at all?
P.S. I know the client should not send more than one reply, but in normal operation all the replies have the PSH flag set, and in this particular situation they didn't, which is not the application's fault.
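The application cannot control PSH, and it should not need to depend on it either: TCP delivers a byte stream with no message boundaries, so a receiver that expects one reply per read will break whenever segments coalesce, PSH or not. The usual remedy is explicit framing, e.g. a length prefix. A minimal Java sketch under that assumption (the framing here is hypothetical, not the protocol from the question):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public final class Framing {
    // Sender: prefix each message with its length so the receiver can
    // reconstruct boundaries no matter how TCP segments the bytes.
    public static void writeMessage(OutputStream out, byte[] msg) throws IOException {
        DataOutputStream dout = new DataOutputStream(out);
        dout.writeInt(msg.length);
        dout.write(msg);
        dout.flush();
    }

    // Receiver: read exactly one message, independent of segment
    // boundaries and of whether PSH was set on any of them.
    public static byte[] readMessage(InputStream in) throws IOException {
        DataInputStream din = new DataInputStream(in);
        int len = din.readInt();
        byte[] msg = new byte[len];
        din.readFully(msg);
        return msg;
    }
}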

How to use an existing established TCP connection?

I want to use an already established TCP connection to send a request and receive a response:
tcp4 0 0 192.168.58.72.50913 17.248.162.6.https ESTABLISHED
As you can see above, a TCP connection is already in the ESTABLISHED state; this connection was created by some other process. As the root user, I want to use the same connection to send a request and receive a response. Is this possible? If yes, can you please tell me how to do it?
If you mean to SHARE the connection (or FD, file descriptor) with the existing process: that's quite dirty and not recommended.
Although there is a way to pass an FD between processes (see: Can I share a file descriptor to another process on linux or are they local to the process?), it requires the target process to send its FD to you; you cannot fetch the FD yourself.

TCP sessions with gRPC

Sorry if this question is naive (gRPC novice here), but I would like to understand this.
Let's say I have a gRPC service definition like this:
service ABC {
  // Update one or more entities.
  rpc Write(WriteRequest) returns (WriteResponse) {}

  // Read one or more entities.
  rpc Read(ReadRequest) returns (stream ReadResponse) {}

  // Represents the bidirectional stream.
  rpc StreamChannel(stream StreamMessageRequest) returns (stream StreamMessageResponse) {}
}
Our potential use case would have the server built using C++ and the client using Java. (Not sure if that matters.)
I would like to understand how the TCP sessions are managed. The StreamChannel would be used for constant telemetry streaming between the client and the server. (Constant data transfer, but with the bulk going from the server to the client.)
Does the StreamChannel get a separate TCP session, while for every Write and Read a new session is established and then terminated after the call is done?
Or is there a single TCP session over which all the communication happens?
Again, please excuse me if this is very naive.
Thanks for your time.
Since gRPC uses HTTP/2, it can multiplex multiple RPCs on the same TCP connection. The Channel abstraction in gRPC lets gRPC make connection decisions without the application needing to be aware of them.
By default, gRPC uses the "pick first" load balancing policy, which will use a single connection to the backend. All new RPCs would go over that connection.
Connections may die (due to I/O failures) or need to be shut down (for various reasons), so gRPC handles reconnecting automatically. Because it can take a very long time to shut down a connection (gRPC waits for the RPCs on that connection to complete), it's still possible for gRPC to have two or more connections to the same backend.
So for your case, all the RPCs would initially exist on the same connection. As time goes on new RPCs may use a newer connection, and an old, long-lived StreamChannel RPC may keep the initial TCP connection alive. If that long-lived StreamChannel is closed and re-created by the application, then it could share the newer connection again.
I also posted the same question on grpc.io, and the response I got was in line with the marked answer.
Summary:
If there is no load-balancing, all the RPCs use the same session. The session remains connected across requests. The session establishment happens the first time a call is attempted on the channel.
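To make the sharing concrete, here is a minimal grpc-java client sketch for the ABC service above: one channel carries the unary Write, the server-streaming Read, and the long-lived StreamChannel, each as its own HTTP/2 stream. It assumes the classes protoc generates from the service definition (ABCGrpc and the request/response messages) and a stand-in address:

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.stub.StreamObserver;

public class AbcClient {
    public static void main(String[] args) {
        // One channel: all three RPCs below are multiplexed as HTTP/2
        // streams over the same TCP connection.
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("server.example.com", 50051)  // stand-in address
                .usePlaintext()
                .build();

        ABCGrpc.ABCBlockingStub blocking = ABCGrpc.newBlockingStub(channel);
        ABCGrpc.ABCStub async = ABCGrpc.newStub(channel);

        // Unary Write: one short-lived stream.
        WriteResponse w = blocking.write(WriteRequest.getDefaultInstance());

        // Server-streaming Read: another stream on the same connection.
        blocking.read(ReadRequest.getDefaultInstance())
                .forEachRemaining(r -> { /* handle each ReadResponse */ });

        // Long-lived bidirectional StreamChannel: yet another stream,
        // still on the same connection.
        StreamObserver<StreamMessageRequest> requests =
                async.streamChannel(new StreamObserver<StreamMessageResponse>() {
                    @Override public void onNext(StreamMessageResponse m) { /* telemetry */ }
                    @Override public void onError(Throwable t) { }
                    @Override public void onCompleted() { }
                });
        requests.onNext(StreamMessageRequest.getDefaultInstance());
    }
}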

Is it possible that a request is sent but the response fails?

I'm new to programming and am writing a hobby Angular 2 app that makes network requests from within the application. The requests are AJAX calls using the Http object.
While writing the application, I started to wonder: is it possible that a request made by the client application is accepted and processed by the server, but the server fails to deliver the response to the client due to a connection error?
If that is possible, how do I avoid the same request being processed multiple times?
In the OSI model, HTTP sits at the application layer and is transported over the transport layer, i.e. TCP, to the peer for sending or receiving messages.
The transport layer is characterized by attributes such as reliable vs. unreliable, connection-oriented vs. connectionless, flow control, congestion control, etc. TCP or UDP is used depending on which of these features are needed.
Since we are transporting an HTTP PDU, even if the PDU gets dropped (for various reasons) it will be retransmitted, as TCP supports a sliding window. The connection is closed only after a FIN (final) is received from the peer; until then you can consider that all your PDUs will be transported faithfully. TCP also has timeouts and retries, which are configurable too.
But if the server is dead or not responding, you will get an appropriate error message while making the HTTP request, because the TCP connection itself will fail. These error conditions are well defined; enable logging or console traces to notice them.
You can check the connection at the time the response comes back.
Please also check the points below:
Check the return type from the server.
Is your Angular program expecting the same type of response that the server returns?
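Yes, the failure mode in the question is real: the server can finish its work and then lose the connection before the response arrives, and TCP alone cannot tell the client whether processing happened. One common mitigation is an idempotency key: the client sends the same key on every retry so the server can deduplicate. A minimal sketch in Java (the endpoint and header name are hypothetical, and the server must implement the deduplication):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.UUID;

public class IdempotentPost {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // One key per logical operation, reused verbatim on every retry.
        String idempotencyKey = UUID.randomUUID().toString();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/orders"))  // hypothetical endpoint
                .header("Idempotency-Key", idempotencyKey)          // server must honor this
                .POST(HttpRequest.BodyPublishers.ofString("{\"item\":42}"))
                .build();

        // If this throws because the response was lost, retrying with the
        // same key lets the server drop the duplicate instead of reprocessing.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}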

connect on "connection less" boost::asio::ip::udp::socket

I've been learning about UDP sockets lately by browsing the net, and all the pages explaining them mention that UDP sockets are "connectionless". This, if I understand correctly, means that one does not have a "connection" between two sockets, but instead shoots datagram packets at specified endpoints without knowing whether the other end is listening.
Then I go and start reading the boost::asio::ip::udp::socket docs and find that they mention APIs like:
async_connect: Start an asynchronous connect.
async_receive: Start an asynchronous receive on a connected socket.
async_send: Start an asynchronous send on a connected socket.
Now this is a bit confusing for a novice. I can find 3 possible causes for my confusion (in order of likelihood :) )
I'm missing something
The asio implementation is doing something behind the scenes to virtualize the connection.
The documentation is wrong
There is also a slight glitch in the docs: when you open the page for basic_datagram_socket::async_connect, the example there instantiates TCP sockets (instead of UDP ones).
Would someone please enlighten me?
The Single UNIX specification has a better explanation of what connect does for connection-less sockets:
If the initiating socket is not connection-mode, then connect() sets the socket's peer address, but no connection is made. For SOCK_DGRAM sockets, the peer address identifies where all datagrams are sent on subsequent send() calls, and limits the remote sender for subsequent recv() calls.
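The same semantics are easy to observe outside of asio. For instance, Java's DatagramSocket also exposes connect() on a UDP socket: no packets are exchanged; it merely fixes the default peer for send() and filters incoming datagrams. A minimal sketch with a stand-in peer address:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class ConnectedUdp {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket();
        // "Connecting" a UDP socket sends nothing on the wire; it only
        // sets the peer address used by send() and accepted by receive().
        socket.connect(InetAddress.getByName("203.0.113.5"), 9999); // stand-in peer

        byte[] payload = "ping".getBytes();
        // No destination on the packet: the connected peer is used.
        socket.send(new DatagramPacket(payload, payload.length));

        // Datagrams from any other source are now silently discarded.
        byte[] buf = new byte[1500];
        DatagramPacket reply = new DatagramPacket(buf, buf.length);
        socket.receive(reply);
        socket.close();
    }
}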
