gRPC io.grpc.StatusRuntimeException: UNAVAILABLE: io exception

The code looks like this:
Iterator<?> iterator = grpc.invokeSomeRequest(requestData); // server-streaming call, returns a blocking iterator
while (iterator.hasNext()) {
    Object item = iterator.next(); // hasNext() blocks until the next message (or an error) arrives
    // do some work with item
}
Sometimes the error occurs at iterator.hasNext() with stack trace:
UNAVAILABLE: io exception
io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
at io.grpc.Status.asRuntimeException(Status.java:533)
The gRPC version is 1.21.0.
I figured out that this problem is most likely in gRPC itself, but I could be wrong.
What is the best workaround for this problem?

Related

What is the difference between a gRPC stream error and a TCP error?

The documentation for "google.golang.org/grpc/codes" notes that some error codes are generated by gRPC itself and some are not, and that a gRPC stream corresponds to a stream in HTTP/2. I want to know whether an error on a gRPC stream means the underlying TCP connection has failed: do I only need to reconnect the TCP connection, or is there a way to recover just the stream (for example, by creating a new stream)?
for {
    request, err3 := stream.Recv()
    if err3 == io.EOF {
        return nil
    }
    if err3 != nil {
        return err3 // how can I handle this error (generated by gRPC)?
    }
    // do something with request
}
gRPC-Go handles network-level reconnection for you: https://pkg.go.dev/google.golang.org/grpc#ClientConn
A ClientConn encapsulates a range of functionality including name resolution, TCP connection establishment (with retries and backoff) and TLS handshakes. It also handles errors on established connections by re-resolving the name and reconnecting.
If Recv returns an error that is not io.EOF, it means that something went wrong (including network errors) and that you have to request a new stream: https://github.com/grpc/grpc-go/blob/master/stream.go#L126. But you don't have to worry about creating a new TCP connection.
RecvMsg blocks until it receives a message into m or the stream is done. It returns io.EOF when the stream completes successfully. On any other error, the stream is aborted and the error contains the RPC status.
If it can't get a new stream, that means it can't connect to the server or that something is very wrong with it; but if it was only a transient network error, you will eventually be able to get a new stream.
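To illustrate the pattern described above, here is a minimal sketch in Go (the package, service, method, and message names are placeholders, not from the question): the same client, and therefore the same ClientConn, is reused, and only the stream is re-created when Recv returns a non-EOF error.
package streamretry

import (
    "context"
    "io"
    "log"
    "time"

    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/status"

    pb "example.com/yourapp/proto" // placeholder import path for the generated protobuf package
)

// receiveAll re-requests the stream whenever Recv reports a non-EOF error,
// while the underlying ClientConn takes care of any TCP-level reconnection.
func receiveAll(ctx context.Context, client pb.SomeServiceClient) error {
    for {
        stream, err := client.SomeRequest(ctx, &pb.RequestData{})
        if err != nil {
            if status.Code(err) != codes.Unavailable {
                return err // not a transient transport problem, give up
            }
            time.Sleep(time.Second) // brief backoff, then ask for a new stream
            continue
        }
        for {
            msg, err := stream.Recv()
            if err == io.EOF {
                return nil // the stream completed successfully
            }
            if err != nil {
                log.Printf("stream aborted, requesting a new one: %v", err)
                break // back to the outer loop: new stream, same connection
            }
            _ = msg // process the received message here
        }
    }
}
After a broken stream, the next client.SomeRequest call goes out over the same ClientConn, which re-resolves the name and reconnects on its own if the TCP connection was lost.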

Frequent Kusto Error: Insufficient winsock resources available to complete socket connection initiation

For the last few days we have been trying to hunt down some Kusto query failures. Time and again we see the following error.
We have looked into whether there was any increase in ingestion over the past few days and don't see any.
In a few scenarios we use direct ingestion; could this be the issue?
Does this error mean that the server is out of sockets? If so, what generally causes that, and how can it be alleviated?
We couldn't find any troubleshooting guide for this.
Happy to provide more details as needed.
Failure details: Query execution has resulted in error (0x8013153D): Partial query failure: 0x8013153D (message: 'Insufficient winsock resources available to complete socket connection initiation. ==> ExecuteRemoteSubQuery failure: ', details: 'Source: mscorlib System.InsufficientMemoryException: Insufficient winsock resources available to complete socket connection initiation. ---> System.Net.Sockets.SocketException: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full 11.0.0.82:23107 Server stack trace: Exception rethrown at [0]: --- End of stack trace from previous location where exception was thrown --- [0]Kusto.Data.Exceptions.KustoDataStreamException: Query execution has resulted in error (0x8013153D): Partial query failure: 0x8013153D (message: 'Insufficient winsock resources available to complete socket connection initiation. ==> ExecuteRemoteSubQuery failure: ', details: 'Source: mscorlib System.InsufficientMemoryException: Insufficient winsock resources available to complete socket connection initiation. ---> System.Net.Sockets.SocketException: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full 11.0.0.82:23107 Server stack trace: Exception rethrown at [0]: --- End of stack trace from previous location where exception was thrown --- Timestamp=2020-12-04T17:47:28.3230123Z
I would recommend that you open a support ticket for your resource via the Azure portal (https://portal.azure.com).

How to keep grpc-js client connection open (alive) during inactive times?

I have a grpc server streaming RPC that communicates with the client. The client throws an error when it does not receive communication from the server. The error is:
Error: 13 INTERNAL: Received RST_STREAM with code 2
at Object.callErrorFromStatus (/app/node_modules/@grpc/grpc-js/build/src/call.js:30:26)
at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client.js:328:49)
at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:304:181)
at Http2CallStream.outputStatus (/app/node_modules/@grpc/grpc-js/build/src/call-stream.js:116:74)
at Http2CallStream.maybeOutputStatus (/app/node_modules/@grpc/grpc-js/build/src/call-stream.js:155:22)
at Http2CallStream.endCall (/app/node_modules/@grpc/grpc-js/build/src/call-stream.js:141:18)
at ClientHttp2Stream.stream.on (/app/node_modules/@grpc/grpc-js/build/src/call-stream.js:410:22)
at ClientHttp2Stream.emit (events.js:198:13)
at emitCloseNT (internal/streams/destroy.js:68:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
Emitted 'error' event at:
at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client.js:328:28)
at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:304:181)
[... lines matching original stack trace ...]
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at process._tickCallback (internal/process/next_tick.js:63:19)
I have tried searching for solutions, and the only "solution" that kept the connection open was to manually ping the client every 10 seconds using setInterval. I read in this article that gRPC is supposed to keep connections open for you and even reconnect if the connection is lost.
As mentioned above, KeepAlive provides a valuable benefit: periodically checking the health of the connection by sending an HTTP/2 PING to determine whether the connection is still alive.
The way I set up the client is shown below:
var grpc = require('@grpc/grpc-js');
require('dotenv').config({ path: '/app/.env' });
// packageDefinition is built elsewhere with @grpc/proto-loader (not shown here)
var responderProto = grpc.loadPackageDefinition(packageDefinition).responder;
var client = new responderProto.ResponderService(process.env.GRPC_HOST_AND_PORT,
    grpc.credentials.createInsecure(),
    {
        "grpc.http2.max_pings_without_data": 0,
        "grpc.keepalive_time_ms": 10000,
        "grpc.keepalive_permit_without_calls": 1
    });
My understanding was that "grpc.keepalive_time_ms" is supposed to make the client ping the server every x ms.
Am I doing something wrong, or am I missing or misunderstanding something essential?

'System.InvalidOperationException' occurred in System.dll when sending data over SerialPort

I am trying to send a string to a serial port, but I get "An unhandled exception of type 'System.InvalidOperationException' occurred in System.dll".
My code is simple:
serialport.Write("110");
Please read this article properly.
You seem to have written to a port that is not open.
SerialPort.Write Method
Exceptions
InvalidOperationException
The specified port is not open.

Net tcp binding service: The request was aborted: The request was canceled

I am uploading files to Amazon S3.
1. While uploading large files (up to 50 MB) I am getting this error:
The request was aborted: The request was canceled.
2. The problem is that when I don't use the net.tcp binding service and upload directly from the UI to the business layer, without a service layer in between, I am able to upload without any exceptions.
3. Here are the exception details:
System.Net.WebException: The request was aborted: The request was canceled. ---> System.IO.IOException: Cannot close stream until all bytes are written. at System.Net.ConnectStream.CloseInternal(Boolean internalCall, Boolean aborting) --- End of inner exception stack trace --- at System.Net.ConnectStream.CloseInternal(Boolean internalCall, Boolean aborting) at System.Net.ConnectStream.System.Net.ICloseEx.CloseEx(CloseExState closeState) at System.Net.ConnectStream.Dispose(Boolean disposing) at System.IO.Stream.Close() at Amazon.S3.AmazonS3Client.getRequestStreamCallback[T](IAsyncResult result)
4. In the service's web.config I have increased both the idle and receive timeouts to more than half an hour, but I am still getting the exception.
5. Is this related to any configuration settings for the net.tcp binding service?
Any solution?

Resources