Java gRPC: exception from client to server

Is it possible to throw an exception from the client to the server?
We have an open stream from the server to the client:
rpc addStream(Request) returns (stream StreamMessage) {}
When I try something like this:
throw Status.INTERNAL.withDescription(e.getMessage()).withCause(e.getCause()).asRuntimeException();
I get the exception in StreamObserver.onError on the client, but there is no exception on the server side.

Servers can respond with a "status" that the stub API exposes as a StatusRuntimeException. Clients, however, can only "cancel" the RPC. Servers will not know the source of the cancellation; it could be because the client cancelled or maybe the TCP connection broke.
In a client-streaming or bidi-streaming call, the client can cancel by calling observer.onError() (without ever calling onCompleted()). However, if the client called onCompleted() or the RPC has a unary request, then you need to use ClientCallStreamObserver or Context:
stub.someRpc(request, new ClientResponseObserver<Request, Response>() {
    private ClientCallStreamObserver<Request> requestStream;

    @Override
    public void beforeStart(ClientCallStreamObserver<Request> requestStream) {
        this.requestStream = requestStream;
    }
    ...
});
// And then where you want to cancel
// requestStream is not thread-safe. For unary requests, wait until
// stub.someRpc() returns, since it uses the stream internally.
// The string is not sent to the server. It is just "echoed"
// back to the client's `onError()` to make clear that the
// cancellation was locally caused.
requestStream.cancel("some message for yourself", null);
// For thread-safe cancellation (e.g., for client-streaming)
CancellableContext ctx = Context.current().withCancellation();
StreamObserver<Request> requestObserver = ctx.call(() ->
    stub.someRpc(new StreamObserver<Response>() {
        @Override
        public void onNext(Response response) {
            // handle response
        }

        @Override
        public void onCompleted() {
            // The ctx must be closed when done, to avoid leaks
            ctx.cancel(null);
        }

        @Override
        public void onError(Throwable t) {
            ctx.cancel(null);
        }
    }));

// Where you want to cancel:
ctx.cancel(ex);
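
For completeness: on the server side a cancellation never arrives as an exception in the handler; it can only be observed as a cancellation event, with no message or cause from the client. A minimal sketch of detecting it via ServerCallStreamObserver, assuming addStream is implemented as a server-streaming handler (stopStreamingFor is a hypothetical helper):

@Override
public void addStream(Request request, StreamObserver<StreamMessage> responseObserver) {
    ServerCallStreamObserver<StreamMessage> serverObserver =
        (ServerCallStreamObserver<StreamMessage>) responseObserver;
    // Runs when the client cancels or the connection breaks; gRPC does not tell
    // the server which of the two happened, and no client-supplied string or
    // cause is delivered here.
    serverObserver.setOnCancelHandler(() -> stopStreamingFor(request));
    // ... then push messages with responseObserver.onNext(...)
}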

Related

How to debug exceptions in TCP connection when App is restarted?

I have an application that uses Spring Integration to send messages to a vendor application over TCP and to receive and process responses. The vendor sends messages without a length header or a message-ending token, and the messages contain carriage returns, so I have implemented a custom deserializer. The messages are sent as XML strings, so I have to process the input stream, looking for a specific closing tag to know when a message is complete. The application works as expected until the vendor application is restarted or a port switch occurs on my application, at which point the CPU usage on my application spikes and the application becomes unresponsive. When the socket closes, the application throws a SocketException: o.s.integration.handler.LoggingHandler : org.springframework.messaging.MessagingException: Send Failed; nested exception is java.net.SocketException: Connection or outbound has closed. I have set the SocketTimeout to 1 minute.
Here is the connection factory implementation:
@Bean
public AbstractClientConnectionFactory tcpConnectionFactory() {
    TcpNetClientConnectionFactory factory = new TcpNetClientConnectionFactory(this.serverIp,
            Integer.parseInt(this.port));
    return getAbstractClientConnectionFactory(factory, keyStoreName, trustStoreName,
            keyStorePassword, trustStorePassword, hostVerify);
}

private AbstractClientConnectionFactory getAbstractClientConnectionFactory(
        TcpNetClientConnectionFactory factory, String keyStoreName, String trustStoreName,
        String keyStorePassword, String trustStorePassword, boolean hostVerify) {
    TcpSSLContextSupport sslContextSupport = new DefaultTcpSSLContextSupport(keyStoreName,
            trustStoreName, keyStorePassword, trustStorePassword);
    DefaultTcpNetSSLSocketFactorySupport tcpSocketFactorySupport =
            new DefaultTcpNetSSLSocketFactorySupport(sslContextSupport);
    factory.setTcpSocketFactorySupport(tcpSocketFactorySupport);
    factory.setTcpSocketSupport(new DefaultTcpSocketSupport(hostVerify));
    factory.setDeserializer(new MessageSerializerDeserializer());
    factory.setSerializer(new MessageSerializerDeserializer());
    factory.setSoKeepAlive(true);
    factory.setSoTimeout(60000);
    return factory;
}
Here is the deserialize method:
private String readUntil(InputStream inputStream) throws IOException {
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    String s = "";
    byte[] closingTag = CLOSING_MESSAGE_TAG.getBytes(ASCII);
    try {
        Integer bite;
        while (true) {
            bite = inputStream.read();
            byteArrayOutputStream.write(bite);
            byte[] bytes = byteArrayOutputStream.toByteArray();
            int start = bytes.length - closingTag.length;
            if (start > closingTag.length) {
                byte[] subarray = Arrays.copyOfRange(bytes, start, bytes.length);
                if (Arrays.equals(subarray, closingTag)) {
                    s = new String(bytes, ASCII);
                    break;
                }
            }
        }
    } catch (SocketTimeoutException e) {
        logger.error("Expected SocketTimeoutException thrown");
    } catch (Exception e) {
        logger.error("Exception thrown when deserializing message {}", s);
        throw e;
    }
    return s;
}
Any help in identifying the cause of the CPU spike or a suggested fix would be greatly appreciated.
EDIT #1
Adding serialize method.
@Override
public void serialize(String string, OutputStream outputStream) throws IOException {
    if (StringUtils.isNotEmpty(string) && StringUtils.startsWith(string, OPENING_MESSAGE_TAG) &&
            StringUtils.endsWith(string, CLOSING_MESSAGE_TAG)) {
        outputStream.write(string.getBytes(UTF8));
        outputStream.flush();
    }
}
The inbound-channel-adapter uses the connection factory:
<int-ip:tcp-inbound-channel-adapter id="tcpInboundChannelAdapter"
channel="inboundReceivingChannel"
connection-factory="tcpConnectionFactory"
error-channel="errorChannel"
/>
EDIT #2
Outbound Channel Adapter
<int-ip:tcp-outbound-channel-adapter
id="tcpOutboundChannelAdapter"
channel="sendToTcpChannel"
connection-factory="tcpConnectionFactory"/>
EDIT #3
We have added the throw for the Exception and are still seeing the CPU spike, although it is not as dramatic. Could we still be receiving bytes from the socket in the inputStream.read() method? The metrics seem to indicate that the read method is consuming server resources.
@Artem Bilan Thank you for your continued feedback on this. My server metrics seem to indicate that the deserialize method is what is consuming the CPU. I was thinking that the Send Failed error occurs because of the vendor restarting their application.
Thus far, I have been unable to replicate this issue other than in production. The only exception I can find in production logs is the SocketException mentioned above.
Thank you.
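
A side note on readUntil above: InputStream.read() returns -1 once the remote side closes the socket, and the posted loop never checks for that, so after a vendor restart it can spin at full CPU appending -1 values. A minimal sketch of a guarded loop, reusing the names from the question (throwing Spring Integration's SoftEndOfStreamException here, and matching the closing tag as soon as enough bytes have arrived, are assumptions about how the rest of the pipeline should behave):

private String readUntil(InputStream inputStream) throws IOException {
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    byte[] closingTag = CLOSING_MESSAGE_TAG.getBytes(ASCII);
    while (true) {
        int bite = inputStream.read();
        if (bite < 0) {
            // The peer closed the connection; without this check the loop keeps
            // spinning on -1 forever instead of blocking on the socket again.
            throw new SoftEndOfStreamException("Stream closed before closing tag was received");
        }
        byteArrayOutputStream.write(bite);
        byte[] bytes = byteArrayOutputStream.toByteArray();
        int start = bytes.length - closingTag.length;
        if (start >= 0
                && Arrays.equals(Arrays.copyOfRange(bytes, start, bytes.length), closingTag)) {
            return new String(bytes, ASCII);
        }
    }
}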

Rebus - Bus with sql transport in MemoryCache callback

I have a message handler which accumulates the messages in a MemoryCache for a given time, so that only the last one will be handled.
When the callback happens I want to forward another message to a handler using the SQL transport, but by then the SQL connection has been closed.
The code looks something like this:
public IBus SqlBus { get; set; }

public async Task Handle(ServiceMessage message)
{
    await base.Handle(() =>
    {
        cache.Set(CacheKey, message, new CacheItemPolicy()
        {
            AbsoluteExpiration = DateTimeOffset.Now.AddSeconds(10),
            RemovedCallback = new CacheEntryRemovedCallback(CacheCallback),
        });
        return Task.FromResult(0);
    }, message);
}

private void CacheCallback(CacheEntryRemovedArguments arguments)
{
    if (arguments.RemovedReason == CacheEntryRemovedReason.Expired)
    {
        var message = arguments.CacheItem.Value as ServiceMessage;
        SqlBus.Send(new AnotherMessage()).GetAwaiter().GetResult();
    }
}
Are there any approaches that would let me do this?
When is the CacheCallback method called, and on which thread?
It sounds to me like the problem is that the thread calling CacheCallback has a value in AmbientTransactionContext.Current, which is where Rebus enlists queue operations when it can.
If the transaction context was somehow preserved even though the handler finished executing, then the items cached in it (e.g. the SqlConnection and SqlTransaction associated with the SQL transport) will have been closed.

SignalR Long Running Process

I have setup a SignalR hub which has the following method:
public void SomeFunction(int SomeID)
{
    try
    {
        Thread.Sleep(600000);
        Clients.Caller.sendComplete("Complete");
    }
    catch (Exception ex)
    {
        // Exception Handling
    }
    finally
    {
        // Some Actions
    }
    m_Logger.Trace("*****Trying To Exit*****");
}
The issue I am having is that SignalR negotiates and defaults to Server-Sent Events and then hangs. Even though the method exits minutes later (after the 10-minute sleep), it is invoked again (after more than 3 minutes), even when sendComplete and hub.stop() have already been called on the client. If the user stays on the page, the initial "/send?" request stays open indefinitely. Any assistance is greatly appreciated.
To avoid blocking the method for so long, you could use a Task and call the client method asynchronously.
public void SomeFunction(Int32 id)
{
    var connectionId = this.Context.ConnectionId;
    Task.Delay(600000).ContinueWith(t =>
    {
        var message = String.Format("The operation has completed. The ID was: {0}.", id);
        var context = GlobalHost.ConnectionManager.GetHubContext<SomeHub>();
        context.Clients.Client(connectionId).SendComplete(message);
    });
}
Hubs are created when a request arrives and destroyed after the response is sent down the wire, so in the continuation task you need to obtain a new hub context in order to work with a client by its connection identifier, since the original hub instance will no longer be around to provide you with Clients.
Also note that you can leverage the nicer syntax based on the async and await keywords for describing asynchronous program flow. See the examples in the ASP.NET SignalR Hubs API Guide.

SignalR Silverlight Client keeps "connected", when Hub is disconnected

I have implemented SignalR in my Silverlight 5 application and it's working fine as long as the client stays on-line. But as soon as the network connection drops for more than about 5 seconds, it stops functioning and I can't make it reconnect.
When a client loses the network connection, the Hub's event "OnDisconnected" is triggered.
But on the client side the HubConnection's Closed or StateChanged events are not triggered and the ConnectionState remains Connected. It then tries to make a call to hubproxy.Invoke(), but that will not invoke the client-side method as it would if the network connection stayed alive.
I instantiate the SignalR client in App.xaml.cs:
private void Application_UserLoaded(LoadUserOperation operation)
{
    // Some checks whether the user is logged in
    _signalRClient = new SignalRClient();
    _signalRClient.RunAsync();
}
public class SignalRClient
{
    public async void RunAsync()
    {
        SetHubConnection();
        SetHubProxy();
        await StartHubConnection();
        SendTestSignal();
    }

    private void SetHubConnection()
    {
        try
        {
            _hubConnection = new HubConnection("https://10.1.2.3/HubWeb");
        }
        catch (WebException ex)
        {
            LoggerManager.WriteLog(LogType.ERROR, ex.ToString());
        }
        _hubConnection.Closed += () => TimeoutHelper.SetTimeout(5000, () => _hubConnection.Start());
        _hubConnection.StateChanged += (change) => LoggerManager.WriteLog(LogType.DEBUG,
            String.Format("SignalR Client: Connection State Changed from {0} to {1}", change.OldState, change.NewState));
    }
I tried to implement automatic reconnect, as the documentation suggests, by handling the client-side Closed event and restarting the hub connection there.
But because the ConnectionState is still "Connected", this event is not triggered and I do not see a way to restart the connection from the client.
What could be the cause of the ConnectionState property of the client's HubConnection not changing to "Disconnected", and why is the Closed event not triggered?
Any help appreciated.

Signalr Context Thread Safety

We are using SignalR to push messages from server to client. Some of the things we are using server broadcast for:
Live notifications
Updating changes of shared data
Chat-like functionality
One of our devs started with the StockTicker example, and we expanded it to push all of our different message types. Here is our general scheme:
private void RunJobs()
{
    _jobs = GetAllJobs();
    while (true)
    {
        bool workDone = false;
        for (int i = 0; i < _jobs.Count; i++)
        {
            var j = _jobs.ElementAt(i);
            bool workToDo = j.MessageAvailable();
            workDone = workDone || workToDo;
            if (workToDo)
            {
                var message = j.GetMessage();
                _threadPool.QueueWorkItem(ProcessJob, j, message);
            }
        }
        if (!workDone)
        {
            Thread.Sleep(_sleepTime);
        }
    }
}
/// <summary>
/// Method called by threads to process a queued-up work item (ISignalRJob).
/// </summary>
/// <param name="job">Job to run.</param>
private void ProcessJob(ISignalRJob job, QueueMessage message)
{
    try
    {
        job.ProcessMessage(message);
    }
    catch (Exception e)
    {
        // handle exception
    }
}
As each job processes, it performs an operation like:
protected override void ProcessMessage(QueueMessage message)
{
    var nqm = JsonConvert.DeserializeObject<NotificationQueueMessage>(message.Body);
    var notification = webService.GetNotification(notification.Id);
    foreach (var userConnectionId in GetUserConnectionIds(nqm.UserId))
    {
        _signalRConnectionContext.Clients.Client(userConnectionId).pushNotification(notification);
    }
}
In a thread, monitor a series of queues for messages. If a message turns up, pop the message off the queue, and start a new thread to process the message (ProcessJob). The jobs will then do any service calls / db calls necessary to build the client message, then push the message to the client.
The service seems to work, but periodically the client will stop receiving the messages, although I have verified they are being sent from the server. Is it possible that pushing to a client connection in multiple threads is putting it in a bad state?
Should I perhaps be returning the result of the QueueMessage processing to the main SignalR thread, and return them all synchronously?