What is the difference between a gRPC stream error and a TCP error?

The documentation for "google.golang.org/grpc/codes" notes that some error codes can be generated by gRPC itself and some cannot, and a gRPC stream corresponds to a stream in HTTP/2. What I want to know is whether an error on a gRPC stream means the underlying TCP connection is broken. Do I need to reconnect the TCP connection, or is there a way to recover only the stream (for example, by requesting a new stream)?
for {
    request, err3 := stream.Recv()
    if err3 == io.EOF {
        return nil
    }
    if err3 != nil {
        return err3 // how can I handle this error (generated by gRPC)?
    }
    // do something with request
}

gRPC-go handles network level reconnections for you: https://pkg.go.dev/google.golang.org/grpc#ClientConn
A ClientConn encapsulates a range of functionality including name resolution, TCP connection establishment (with retries and backoff) and TLS handshakes. It also handles errors on established connections by re-resolving the name and reconnecting.
If Recv returns an error other than io.EOF, it means that something went wrong (including network errors) and you have to request a new stream: https://github.com/grpc/grpc-go/blob/master/stream.go#L126 but you don't have to worry about creating a new TCP connection.
RecvMsg blocks until it receives a message into m or the stream is done. It returns io.EOF when the stream completes successfully. On any other error, the stream is aborted and the error contains the RPC status.
If you can't get a new stream, it means that the client can't connect to the server or that something is very wrong with it; but if it is only a transient network error, you will eventually be able to get a new stream.
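To make that concrete, here is a minimal sketch of the pattern, assuming a hypothetical server-streaming RPC (the pb package, NewWatcherClient, Watch and WatchRequest are placeholders for your own generated stubs, not real APIs): dial once, and on any error other than io.EOF simply ask the same ClientConn for a new stream.

package main

import (
    "context"
    "io"
    "log"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"

    pb "example.com/yourapp/proto" // hypothetical generated package
)

func watch(ctx context.Context, conn *grpc.ClientConn) {
    client := pb.NewWatcherClient(conn) // hypothetical stub
    for {
        stream, err := client.Watch(ctx, &pb.WatchRequest{}) // hypothetical streaming RPC
        if err != nil {
            log.Printf("could not open stream, retrying: %v", err)
            time.Sleep(time.Second)
            continue
        }
        for {
            msg, err := stream.Recv()
            if err == io.EOF {
                return // the server completed the stream cleanly
            }
            if err != nil {
                log.Printf("stream aborted, requesting a new one: %v", err)
                break // do NOT redial; just request a new stream on the same conn
            }
            _ = msg // handle the message
        }
    }
}

func main() {
    // Dial once; the ClientConn re-resolves and reconnects on its own.
    conn, err := grpc.Dial("localhost:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
    watch(context.Background(), conn)
}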

Related

gRPC Server-side non-streaming request

I have a gRPC service, and I would like to have a message initiated from the server to get order states from the client. I would like this server=>client request to be synchronous, and the client must initiate the connection because of firewall constraints.
I do not see a way to accomplish this with gRPC messages, but I came up with two approaches that may work.
message OrderStates {
    repeated OrderState order_state = 1;
}
Option 1 - Non-streaming request + Streaming response
service < existing service > {
    rpc OrderStatuses(OrderStates) returns (stream google.protobuf.Empty);
}
With this approach, the client sends OrderStates when it starts up. Each time the server wants to get the current states from the client, it sends the streamed Empty response.
Option 2 - Streaming request + Streaming response
service < existing service > {
    rpc OrderStatuses(stream OrderStates) returns (stream google.protobuf.Empty);
}
This is the same as Option 1, but the client sends the initial request as a streaming request.
Any advice would be helpful.
Your approach is the way to accomplish this. Without your constraint, the natural solution would be for the server to act as a gRPC client and initiate a connection to the client acting as a gRPC server; because of the firewall constraint, that is not an option.
Because of the constraint that the client must initiate the connection, the only solution is to hold the connection open (with a stream) so that the server may send messages to the client unbidden.
I would go with option #2, the semantics of the RPC being "Hey server, ping me when you want OrderStates." You must use streaming on the client side so that it can keep sending updates.
An unstated optimization may be that, if the client remains alive but does not send an update in response to the server's ping within some timeframe, then the server assumes that there is no update.
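Here is a hedged client-side sketch of that ping-and-respond pattern in Go, following the answer's description (the server pings with an Empty, the client replies with its current OrderStates on the same stream). All of the generated names (orderpb, NewOrderServiceClient, OrderStatuses) and the collectCurrentStates helper are hypothetical placeholders for whatever protoc generates for your service.

package main

import (
    "context"

    "google.golang.org/grpc"

    orderpb "example.com/yourapp/orderpb" // hypothetical generated package
)

// reportOrderStates keeps one bidirectional stream open for the lifetime of ctx.
func reportOrderStates(ctx context.Context, conn *grpc.ClientConn) error {
    client := orderpb.NewOrderServiceClient(conn) // hypothetical stub
    stream, err := client.OrderStatuses(ctx)      // client streams OrderStates, receives Empty pings
    if err != nil {
        return err
    }
    for {
        // Block until the server pings us.
        if _, err := stream.Recv(); err != nil {
            return err // stream closed or broken; the caller can open a new one
        }
        // Answer the ping on the same stream with the current states.
        states := &orderpb.OrderStates{OrderState: collectCurrentStates()} // hypothetical helper
        if err := stream.Send(states); err != nil {
            return err
        }
    }
}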

Using a KillSwitch in an akka http streaming request

I'm using Akka's HTTP client to make a connection to an infinitely streaming HTTP endpoint. I am having difficulty getting the client to close the upstream to the HTTP server.
Here's my code (StreamRequest().stream returns a Source[T, Any]. It's generated by Http().outgoingConnectionHttps and then a Flow[HttpResponse, T, NotUsed] to convert HttpResponse to a stream of T):
val (killSwitch, tFuture) = StreamRequest()
  .stream
  .takeWithin(timeToStreamFor)
  .take(toPull)
  .viaMat(KillSwitches.single)(Keep.right)
  .toMat(Sink.seq)(Keep.both)
  .run()
Then I have
tFuture.onComplete { _ =>
  info(s"Shutting down the connection")
  killSwitch.shutdown()
}
When I run the code I see the 'Shutting down the connection' log message but the server tells me that I'm still connected. It disconnects only when the JVM exits.
Any ideas what I'm doing wrong or what I should be doing differently here?
Thanks!
I suspect you should invoke Http().shutdownAllConnectionPools() when tFuture completes. The pool does not close connections because they can be reused by different stream materialisations, so when one stream completes it does not close the pool. The disconnect you eventually see on the server is probably the idle timeout triggering for one of the pooled connections.

Apache Camel Netty Socket

I want to use the Apache Camel Netty component in client mode, and this client should not be in synchronous mode. I provided the following configuration to achieve this, but Camel creates two connections to the server: one for receiving messages and one for replying to them. How can the Netty connector be used in this mode?
from("netty4:tcp://localhost:7000?sync=false&allowDefaultCodec=false&encoder=#stringEncoder&decoder=#stringDecoder&clientMode=true&reconnect=true&reconnectInterval=1000")
.process(new Processor() {
public void process(Exchange exchange) throws Exception {
exchange.getOut().setBody("Hello " + exchange.getIn().getBody());
}
})
.to("netty4:tcp://localhost:7000?sync=false&allowDefaultCodec=false&encoder=#stringEncoder&decoder=#stringDecoder&clientMode=true");
In the Hercules utility I see two connections for this request processing:
11:00:51 AM: 127.0.0.1 Client connected
11:00:51 AM: 127.0.0.1 Client connected
So this is what you want, right?
"After receiving a request from the server, I want to push it to an MQ and wait on another MQ for the processed response; when the packet is processed and available in the MQ, I want to use the same connection to transmit the response to the socket."
The first thing is to agree on the requirements. If you need to send a response back, i.e. the client is waiting to hear back about the request it sent, then this is synchronous communication, not asynchronous.
So you can then simply write:
from("netty4:tcp://localhost:7000?sync=true&allowDefaultCodec=false&encoder=#stringEncoder&decoder=#stringDecoder&clientMode=true&reconnect=true&reconnectInterval=1000")
.process(new Processor() {
public void process(Exchange exchange) throws Exception {
exchange.getOut().setBody("Hello " + exchange.getIn().getBody());
}
})
.to("ACTIVE_MQ");
Of course, on the ActiveMQ side you need to set the reply-to destination and a timeout, so that if you don't get a response in time the exchange times out and you can notify the client with a meaningful error message.
What will happen is that the message is received and sent to an ActiveMQ queue with the appropriate reply-to properties. When the reply arrives, the response is sent back over the same connection to the client.
I would advise you to read up on JMS request/reply in Camel, as it will help you set up the ActiveMQ part:
http://camel.apache.org/jms.html

How to query TCP connection state in go?

On the client side of a TCP connection, I am attempting to reuse established connections as much as possible to avoid the overhead of dialing every time I need a connection. Fundamentally, it's connection pooling, although technically, my pool size just happens to be one.
I'm running into a problem in that if a connection sits idle for long enough, the other end disconnects. I've tried using something like the following to keep connections alive:
err = conn.(*net.TCPConn).SetKeepAlive(true)
if err != nil {
    fmt.Println(err)
    return
}
err = conn.(*net.TCPConn).SetKeepAlivePeriod(30 * time.Second)
if err != nil {
    fmt.Println(err)
    return
}
But this isn't helping. In fact, it's causing my connections to close sooner. I'm pretty sure this is because (on a Mac) the connection's health starts being probed after 30 seconds and is then probed 8 times at 30-second intervals. The server side must not support keepalive, so after 4 minutes and 30 seconds the client disconnects.
There might be nothing I can do to keep an idle connection alive indefinitely, and that would be absolutely fine if there were some way for me to at least detect that a connection has been closed so that I can seamlessly replace it with a new one. Alas, even after reading all the docs and scouring the blogosphere for help, I can't find any way at all in Go to query the state of a TCP connection.
There must be a way. Does anyone have any insight into how that can be accomplished? Many thanks in advance to anyone who does!
EDIT:
Ideally, I'd like to learn how to handle this at a low level, in pure Go, without using third-party libraries. Of course, if there is some library that does this, I don't mind being pointed in its direction so I can see how they do it.
The socket API doesn't give you access to the state of the connection. You can query the current state in various ways from the kernel (/proc/net/tcp[6] on Linux, for example), but that doesn't make any guarantee that further sends will succeed.
I'm a little confused on one point here. My client is ONLY sending data. Apart from acking the packets, the server sends nothing back. Reading doesn't seem an appropriate way to determine connection status, as there's nothing TO read.
The socket API is defined such that you detect a closed connection by a read returning 0 bytes. That's the way it works. In Go, this is translated to a Read returning io.EOF. This will usually be the fastest way to detect a broken connection.
So am I supposed to just send and act on whatever errors occur? If so, that's a problem, because I'm observing that I typically do not get any errors at all when attempting to send over a broken pipe, which seems totally wrong.
If you look closely at how TCP works, this is the expected behavior. If the connection is closed on the remote side, then your first send will trigger an RST from the server, fully closing the local connection. You either need to read from the connection to detect the close, or if you try to send again you will get an error (assuming you've waited long enough for the packets to make a round trip), like "broken pipe" on linux.
To clarify... I can dial, unplug an ethernet cable, and STILL send without error. The messages don't get through, obviously, but I receive no error
If the connection is actually broken, or the server is totally unresponsive, then you're sending packets off to nowhere. The TCP stack can't tell the difference between packets that are really slow, packet loss, congestion, or a broken connection. The system needs to wait for the retransmission timeout, and retry the packet a number of times before failing. The standard configuration for retries alone can take between 13 and 30 minutes to trigger an error.
What you can do in your code is
Turn on keepalive. This will notify you of a broken connection more quickly, because the idle connection is always being tested.
Read from the socket. Either have a concurrent Read in progress, or check for something to read first with select/poll/epoll (Go usually uses the first)
Set timeouts (deadlines in Go) for everything.
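For the "set deadlines" point, a minimal sketch (the 5-second value and the sendWithDeadline name are arbitrary examples, not from the answer above): setting a write deadline before each send makes a dead peer surface as a timeout error instead of blocking for the full TCP retransmission window.

package main

import (
    "net"
    "time"
)

// sendWithDeadline writes payload with a bounded wait: if the write cannot
// complete within 5 seconds, it returns a timeout error instead of hanging.
func sendWithDeadline(conn net.Conn, payload []byte) error {
    if err := conn.SetWriteDeadline(time.Now().Add(5 * time.Second)); err != nil {
        return err
    }
    // A timeout or "broken pipe" error here means: discard this connection
    // and dial a new one.
    _, err := conn.Write(payload)
    return err
}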
If you're not expecting any data from the connection, checking for a closed connection is very easy in Go; dispatch a goroutine to read from the connection until there's an error.
notify := make(chan error)
go func() {
    buf := make([]byte, 1024)
    for {
        n, err := conn.Read(buf)
        if err != nil {
            notify <- err
            return
        }
        if n > 0 {
            fmt.Printf("unexpected data: %s\n", buf[:n])
        }
    }
}()
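And here is a hedged sketch of one way the sending side might consume that notify channel, replacing the connection as soon as the reader goroutine reports an error. The dial, watch and nextPayload helpers are hypothetical (watch is assumed to start the reader goroutine shown above and return its notify channel); they are not part of the original answer.

// Assumed signatures: dial() (net.Conn, error), watch(net.Conn) <-chan error,
// nextPayload() []byte.
func sendLoop() error {
    conn, err := dial()
    if err != nil {
        return err
    }
    notify := watch(conn)
    for {
        select {
        case readErr := <-notify:
            // The reader saw io.EOF or another error: the connection is gone.
            log.Printf("connection lost: %v, redialing", readErr)
            if conn, err = dial(); err != nil {
                return err
            }
            notify = watch(conn)
        default:
            // No error reported; keep using the current connection.
        }
        if _, err := conn.Write(nextPayload()); err != nil {
            log.Printf("write failed: %v", err)
        }
    }
}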
There is no such thing as 'TCP connection state', by design. There is only what happens when you send something. There is no TCP API, at any level down to the silicon, that will tell you the current state of a TCP connection. You have to try to use it.
If you're sending keepalive probes, the server doesn't have any choice but to respond appropriately. The server doesn't even know that they are keepalives. They aren't. They are just duplicate ACKs. Supporting keepalive just means supporting sending keepalives.

What happens in an application server (Tomcat etc.) when a client request is cancelled while the server is still working (writing to its output)?

If a client cancels its request, the application server is supposed to throw the following error:
java.net.SocketException: Connection reset by peer: socket write error
But what exactly is happening?
Let's say I'm doing a very expensive operation on the server side, and I'm writing some data to the output stream every time my service gets a new result (a kind of streaming).
In the middle of this operation, the client cancels the request. What happens?
Does the operation stop, because the socket throws this error when the connection is closed? If it doesn't stop, what happens to the data flushed to the output stream after that?
Thanks
I can't tell exactly what Tomcat does, but here is what happens at the socket level, in one of two ways:
the client closed the socket gracefully (the server is then notified about the close and closes its side of the connection too, in which case any buffered data still waiting to be sent is lost);
the client cut the socket brutally (the server is NOT notified and will detect the connection loss only after a timeout or at the first attempt to send data, which will fail).
So, if your streaming is "constant", the server will always be 'protected' against undetected lost connections (the first send attempt will clean things up).
If the streaming is not constant, then you should make room for a timeout, or use TCP keep-alives to make sure that the connection state is tested on a regular basis.
Hope it helps.
