gRPC onCompleted for bidirectional streaming

All the gRPC bidirectional-streaming examples I have seen follow a pattern: when the (inbound) request observer receives onCompleted, it invokes the onCompleted method of the (outbound) responseObserver. However, this is not done for onError.
I'm wondering what happens if I don't invoke responseObserver.onCompleted(): does it lead to a memory leak? And why don't we do the same for onError?
public StreamObserver<Point> recordRoute(final StreamObserver<RouteSummary> responseObserver) {
    return new StreamObserver<Point>() {
        @Override
        public void onNext(Point point) {
            // does something here
        }

        @Override
        public void onError(Throwable t) {
            logger.log(Level.WARNING, "recordRoute cancelled");
        }

        @Override
        public void onCompleted() {
            responseObserver.onCompleted();
        }
    };
}

I'm wondering what happens if I don't invoke responseObserver.onCompleted(): does it lead to a memory leak?
An RPC is not complete/done until the response stream is also completed, so yes, there will be a resource leak if you don't eventually call responseObserver.onCompleted(). In this particular example it just so happens that the response stream is terminated when the request stream completes, but there are cases where the response stream is completed only after more processing is done or more data is sent on the response stream.
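For instance, completion can be deferred until the server has sent a final message. A minimal sketch of such a deferred completion, assuming a hypothetical pointCount field accumulated in onNext:
    @Override
    public void onCompleted() {
        // Hypothetical: emit a final summary built up during onNext...
        responseObserver.onNext(RouteSummary.newBuilder()
                .setPointCount(pointCount)
                .build());
        // ...and only then terminate the response stream. Without this call
        // the RPC never completes and its resources are held.
        responseObserver.onCompleted();
    }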
Why don't we do the same for onError?
onError() is a terminating signal on the stream, which means the call is already terminated. Calling onError() on the response stream is therefore not needed and most probably won't do anything.

REST-Endpoint: Async execution without return value

My problem might be very easy to solve, but I don't get it at the moment. In my Quarkus app I have a REST endpoint that should call a method, not wait for the result, and immediately return a 202 HTTP status code.
@POST
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public Response calculateAsync(String input) {
    process();
    return Response.accepted().build();
}
I've read the Quarkus documentation about Vert.x and asynchronous processing. But the point there is: the processing is done asynchronously, yet the client still waits for the result. My clients don't need to wait, because there is no return value; it's something like the invocation of a batch process.
So I need something like a new thread, but with all the Quarkus context.
We've found a solution:
@POST
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public Response calculateAsync(String input) {
    Uni.createFrom().item(input)
            .emitOn(Infrastructure.getDefaultWorkerPool())
            .subscribe().with(
                    item -> process(item),
                    Throwable::printStackTrace
            );
    return Response.accepted().build();
}
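A note on why this works (based on the Mutiny types used above): emitOn(Infrastructure.getDefaultWorkerPool()) shifts the emission, and therefore process(), onto a Mutiny worker thread, so calculateAsync returns the 202 immediately while the work continues in the background.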
Alternatively, you can use a @Suspended AsyncResponse parameter and make the method return void.
Here's an example of a similar method:
@GET
@Produces(MediaType.TEXT_PLAIN)
public void hello(@Suspended AsyncResponse response) throws InterruptedException {
    // Resume immediately so the client gets the response right away...
    response.resume(Response.ok().build());
    // ...then keep working after the response has been sent.
    Thread.sleep(10000);
    System.out.println("Do some work");
}

gRPC service's Context CancellationListener is not fired when the client cancels a service call

I have a streaming service that streams indefinitely from the server to a client until the client cancels.
On the server side, I have a thread that populates an ehcache with data sourced from a database.
Ehcache provides callbacks on cache events, i.e., when an item is added, when an item is removed, etc. I only care about notifying clients when an element is put into the cache, so when a client connects to my gRPC service, I register a notifyElementPut() callback with the cache that holds a reference to the connected client's StreamObserver:
public class GrpcAwareCacheEventListener extends CacheEventListenerAdapter {

    private StreamObserver<FooResponse> responseObserver;

    public GrpcAwareCacheEventListener(StreamObserver<FooResponse> responseObserver) {
        this.responseObserver = responseObserver;
    }

    @Override
    public void notifyElementPut(Ehcache cache, Element element) throws CacheException {
        Foo foo = (Foo) element.getObjectValue();
        if (foo != null) {
            responseObserver.onNext(FooResponse.newBuilder().setFoo(foo).build());
        }
    }
}
My streaming foo service is as follows:
public void streamFooUpdates(Empty request, StreamObserver<FooResponse> responseObserver) {
    final CacheEventListener eventListener = new GrpcAwareCacheEventListener(responseObserver);
    fooCache.getCacheEventNotificationService().registerListener(eventListener);
    Context.current().withCancellation().addListener(new CancellationListener() {
        public void cancelled(Context context) {
            log.info("inside context cancelled callback");
            fooCache.getCacheEventNotificationService().unregisterListener(eventListener);
        }
    }, ForkJoinPool.commonPool());
}
This all works fine; the client is notified of all foo updates as long as it is connected.
However, after the client disconnects or explicitly cancels the call, I expected the server Context's cancellation listener to fire, unregistering the callback from the cache.
This is not the case, regardless of whether the client shuts down the channel or explicitly cancels the call. (I expect the server-side cancelled context to fire for both of these events.) I'm wondering if my cancel semantics on the client side are incorrect; here is my client code, taken from a test case:
Channel channel = ManagedChannelBuilder.forAddress("localhost", 25001)
        .usePlaintext().build();
FooServiceGrpc.FooServiceStub stub = FooServiceGrpc.newStub(channel);
ClientCallStreamObserver<FooResponse> cancellableObserver = new ClientCallStreamObserver<FooResponse>() {
    public void onNext(FooResponse response) {
        log.info("received foo: {}", response.getFoo());
    }
    public void onError(Throwable throwable) {
    }
    public void onCompleted() {
    }
    public boolean isReady() {
        return false;
    }
    public void setOnReadyHandler(Runnable runnable) {
    }
    public void disableAutoInboundFlowControl() {
    }
    public void request(int i) {
    }
    public void setMessageCompression(boolean b) {
    }
    public void cancel(@Nullable String s, @Nullable Throwable throwable) {
    }
};
stub.streamFooUpdates(Empty.newBuilder().build(), cancellableObserver);
Thread.sleep(10000); // sleep 10 seconds while messages are received
cancellableObserver.cancel("cancelling from test", null); // explicit cancel
((ManagedChannel) channel).shutdown().awaitTermination(5, TimeUnit.SECONDS); // shutdown as well, for good measure
Thread.sleep(7000); // channel should be shut down by now
I'm wondering why the server is not firing the "Context cancelled" callback.
Thanks!
You are not cancelling the client call correctly. The StreamObserver passed as the second argument of stub.streamFooUpdates() is your callback; you shouldn't call anything on that StreamObserver.
There are two ways to cancel the call from the client-side.
Option 1: Pass a ClientResponseObserver as the second argument, implement beforeStart(), which gives you a ClientCallStreamObserver, on which you can call cancel().
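A rough sketch of Option 1 (hedged: it assumes the Empty/FooResponse types from the question and uses a java.util.concurrent.atomic.AtomicReference simply as one way to keep the call handle):
    final AtomicReference<ClientCallStreamObserver<Empty>> callRef = new AtomicReference<>();
    stub.streamFooUpdates(Empty.getDefaultInstance(),
            new ClientResponseObserver<Empty, FooResponse>() {
                @Override
                public void beforeStart(ClientCallStreamObserver<Empty> requestStream) {
                    callRef.set(requestStream); // capture the call handle before the call starts
                }

                @Override
                public void onNext(FooResponse response) {
                    log.info("received foo: {}", response.getFoo());
                }

                @Override
                public void onError(Throwable t) {
                }

                @Override
                public void onCompleted() {
                }
            });
    Thread.sleep(10000);
    callRef.get().cancel("cancelling from test", null); // should fire the server-side cancellation listener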
Option 2: Run stub.streamFooUpdates() inside a CancellableContext, and cancel the Context to cancel the call. Note that a CancellableContext must always be cancelled eventually; that's what the finally block is for.
CancellableContext withCancellation = Context.current().withCancellation();
try {
    withCancellation.run(() -> {
        stub.streamFooUpdates(...);
        Thread.sleep(10000);
        withCancellation.cancel(null);
    });
} finally {
    withCancellation.cancel(null);
}
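Cancelling the CancellableContext propagates to RPCs started within it, so either option should cause the server-side cancellation listener in your streamFooUpdates implementation to fire.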

Undertow: use a Hystrix Observable in an HTTP handler

I managed to set up a Hystrix command to be called from an Undertow HTTP handler:
public void handleRequest(HttpServerExchange exchange) throws Exception {
    if (exchange.isInIoThread()) {
        exchange.dispatch(this);
        return;
    }
    RpcClient rpcClient = new RpcClient(/* ... */);
    try {
        byte[] response = new RpcCommand(rpcClient).execute();
        // send the response
    } catch (Exception e) {
        // send an error
    }
}
This works nicely. But now I would like to use the observable feature of Hystrix, calling observe() instead of execute() to make the code non-blocking.
public void handleRequest(HttpServerExchange exchange) throws Exception {
    RpcClient rpcClient = new RpcClient(/* ... */);
    new RpcCommand(rpcClient).observe().subscribe(new Observer<byte[]>() {
        @Override
        public void onCompleted() {
        }

        @Override
        public void onError(Throwable throwable) {
            exchange.setStatusCode(StatusCodes.INTERNAL_SERVER_ERROR);
            exchange.endExchange();
        }

        @Override
        public void onNext(byte[] body) {
            exchange.getResponseHeaders().add(Headers.CONTENT_TYPE, "text/plain");
            exchange.getResponseSender().send(ByteBuffer.wrap(body));
        }
    });
}
As expected (reading the docs), the handler returns immediately and, as a consequence, the exchange is ended; when the onNext callback executes, it fails with an exception:
Caused by: java.lang.IllegalStateException: UT000127: Response has already been sent
at io.undertow.io.AsyncSenderImpl.send(AsyncSenderImpl.java:122)
at io.undertow.io.AsyncSenderImpl.send(AsyncSenderImpl.java:272)
at com.xxx.poc.undertow.DiyServerBootstrap$1$1.onNext(DiyServerBootstrap.java:141)
at com.xxx.poc.undertow.DiyServerBootstrap$1$1.onNext(DiyServerBootstrap.java:115)
at rx.internal.util.ObserverSubscriber.onNext(ObserverSubscriber.java:34)
Is there a way to tell Undertow that the handler is doing IO asynchronously? I expect to use a lot of non-blocking code to access database and other services.
Thanks in advance!
You should dispatch() a Runnable so the exchange is not ended when the handleRequest method returns. Since creating the client and subscribing are pretty cheap tasks, you can do them on the same thread with SameThreadExecutor.INSTANCE like this:
public void handleRequest(HttpServerExchange exchange) throws Exception {
    exchange.dispatch(SameThreadExecutor.INSTANCE, () -> {
        RpcClient rpcClient = new RpcClient(/* ... */);
        new RpcCommand(rpcClient).observe().subscribe(new Observer<byte[]>() {
            // ...
        });
    });
}
(If you do not pass an executor to dispatch(), it will dispatch to the XNIO worker thread pool. If you wish to do the client creation and subscription on your own executor, pass that instead.)
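A hedged illustration of those dispatch variants (myExecutor is a hypothetical java.util.concurrent.ExecutorService):
    exchange.dispatch(() -> { /* runs on the XNIO worker thread pool */ });
    exchange.dispatch(myExecutor, () -> { /* runs on your own executor */ });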

Regulate network calls in SyncAdapter onPerformSync

I'm sending several Retrofit calls via SyncAdapter onPerformSync, and I'm trying to regulate the HTTP calls by spacing them out with a try/catch sleep statement. However, this blocks the UI, which becomes responsive only after all calls are done.
What is a better way to regulate network calls (with a sleep timer) in the background in onPerformSync without blocking the UI?
@Override
public void onPerformSync(Account account, Bundle extras, String authority,
                          ContentProviderClient provider, SyncResult syncResult) {
    String baseUrl = BuildConfig.API_BASE_URL;
    Retrofit retrofit = new Retrofit.Builder()
            .baseUrl(baseUrl)
            .addConverterFactory(GsonConverterFactory.create())
            .build();
    service = retrofit.create(HTTPService.class);

    Call<RetroFitModel> retroFitModelCall = service.getRetroFit(apiKey, sortOrder);
    retroFitModelCall.enqueue(new Callback<RetroFitModel>() {
        @Override
        public void onResponse(Response<RetroFitModel> response) {
            if (!response.isSuccess()) {
            } else {
                List<RetroFitResult> retrofitResultList = response.body().getResults();
                Utility.storeList(getContext(), retrofitResultList);
                for (final RetroFitResult result : retrofitResultList) {
                    RetroFitReview(result.getId(), service);
                    try {
                        // Sleep for SLEEP_TIME before running RetroFitReports & RetroFitTime
                        Thread.sleep(SLEEP_TIME);
                    } catch (InterruptedException e) {
                    }
                    RetroFitReports(result.getId(), service);
                    RetroFitTime(result.getId(), service);
                }
            }
        }

        @Override
        public void onFailure(Throwable t) {
            Log.e(LOG_TAG, "Error: " + t.getMessage());
        }
    });
}
The "onPerformSync" code is executed within the "SyncAdapterThread" thread, not within the Main UI thread. However this could change when making asynchronous calls with callbacks (which is our case here).
Here you are using an asynchronous call of the Retrofit "call.enqueue" method, and this has an impact on thread execution. The question we need to ask at this point:
Where callback methods are going to be executed?
To get the answer to this question, we have to determine which Looper is going to be used by the Handler that will post callbacks.
In case we are playing with handlers ourselves, we can define the looper, the handler and how to process messages/runnables between handlers. But this time it is different because we are using a third party framework (Retrofit). So we have to know which looper used by Retrofit?
Note that if Retrofit hadn't already defined its looper, you would have caught an exception saying that you need a looper to process callbacks. In other words, an asynchronous call needs to be made from a looper thread in order to post callbacks back to the thread from which it was executed.
According to the source code of Retrofit (Platform.java):
static class Android extends Platform {
    @Override CallAdapter.Factory defaultCallAdapterFactory(Executor callbackExecutor) {
        if (callbackExecutor == null) {
            callbackExecutor = new MainThreadExecutor();
        }
        return new ExecutorCallAdapterFactory(callbackExecutor);
    }

    static class MainThreadExecutor implements Executor {
        private final Handler handler = new Handler(Looper.getMainLooper());

        @Override public void execute(Runnable r) {
            handler.post(r);
        }
    }
}
You can notice Looper.getMainLooper(), which means that Retrofit posts its messages/runnables onto the main thread's message queue (you can research this for a more detailed explanation). Thus the posted messages/runnables will be handled by the main thread.
So, that being said, the onResponse/onFailure callbacks will be executed on the main thread. And they are going to block the UI if you are doing too much work (Thread.sleep(SLEEP_TIME);). You can check this yourself: just set a breakpoint in the onResponse callback and look at which thread it is running on.
So how should we handle this situation? (This is the answer to your question about the Retrofit usage.)
Since we are already in a background thread (SyncAdapterThread), there is no need to make asynchronous calls in your case. Just make a synchronous Retrofit call and then process the result, or log a failure; this way you will not block the UI. A sketch follows below.
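A minimal sketch of that synchronous variant (hedged: it reuses the service, apiKey, sortOrder, SLEEP_TIME, and helper methods from the question, and assumes Call.execute() as the blocking counterpart of enqueue()):
    try {
        // execute() blocks the current (sync adapter) thread, never the UI thread
        Response<RetroFitModel> response = service.getRetroFit(apiKey, sortOrder).execute();
        if (response.isSuccess()) {
            List<RetroFitResult> results = response.body().getResults();
            Utility.storeList(getContext(), results);
            for (RetroFitResult result : results) {
                RetroFitReview(result.getId(), service);
                // Sleeping here only delays the sync thread between calls
                Thread.sleep(SLEEP_TIME);
                RetroFitReports(result.getId(), service);
                RetroFitTime(result.getId(), service);
            }
        }
    } catch (IOException e) {
        Log.e(LOG_TAG, "Error: " + e.getMessage());
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }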

Are there any restrictions on writing multiple HTTP responses?

I am building an HTTP proxy with Netty that supports HTTP pipelining. Therefore I receive multiple HttpRequest objects on a single channel and have the matching HttpResponse objects. The HttpResponses are written in the same order in which I received the HttpRequests. Once an HttpResponse has been written, the next one is written when the HttpProxyHandler receives a writeComplete event.
The pipeline setup should be straightforward:
final ChannelPipeline pipeline = Channels.pipeline();
pipeline.addLast("decoder", new HttpRequestDecoder());
pipeline.addLast("encoder", new HttpResponseEncoder());
pipeline.addLast("writer", new HttpResponseWriteDelayHandler());
pipeline.addLast("deflater", new HttpContentCompressor(9));
pipeline.addLast("handler", new HttpProxyHandler());
Regarding this question, only the order of the write calls should matter, but to be sure I built another handler (HttpResponseWriteDelayHandler) which suppresses the writeComplete event until the whole response has been written.
To test this I enabled network.http.proxy.pipelining in Firefox and visited a page with many images and connections (a news page). The problem is that the browser does not receive some responses, even though the proxy's logs consider them sent successfully.
I have some findings:
The problem only occurs if the connection from proxy to server is faster than the connection from proxy to browser.
The problem occurs more often after sending a larger image on that connection, e.g. 20 kB.
The problem does not occur if only 304 Not Modified responses are sent (refreshing the page with a primed browser cache).
Setting bootstrap.setOption("sendBufferSize", 1048576); or above does not help.
Sleeping for a time dependent on the response's body size before sending the writeComplete event in HttpResponseWriteDelayHandler solves the problem, but is a very bad solution.
I found the solution and want to share it in case anyone else has a similar problem:
The content of the HttpResponse was too big. To analyze the content, the whole HTML document was held in the buffer, so it must be split into chunks again to be sent properly. If the HttpResponse is not chunked, I wrote a simple solution to do it. One needs to put a ChunkedWriteHandler next to the logic handler and write this class instead of the response itself:
public class ChunkedHttpResponse implements ChunkedInput {

    private final static int CHUNK_SIZE = 8196;

    private final HttpResponse response;
    private final Queue<HttpChunk> chunks;
    private boolean isResponseWritten;

    public ChunkedHttpResponse(final HttpResponse response) {
        if (response.isChunked())
            throw new IllegalArgumentException("response must not be chunked");

        this.chunks = new LinkedList<HttpChunk>();
        this.response = response;
        this.isResponseWritten = false;

        if (response.getContent().readableBytes() > CHUNK_SIZE) {
            while (CHUNK_SIZE < response.getContent().readableBytes()) {
                chunks.add(new DefaultHttpChunk(response.getContent().readSlice(CHUNK_SIZE)));
            }
            chunks.add(new DefaultHttpChunk(response.getContent().readSlice(response.getContent().readableBytes())));
            chunks.add(HttpChunk.LAST_CHUNK);

            response.setContent(ChannelBuffers.EMPTY_BUFFER);
            response.setChunked(true);
            response.setHeader(HttpHeaders.Names.TRANSFER_ENCODING, HttpHeaders.Values.CHUNKED);
        }
    }

    @Override
    public boolean hasNextChunk() throws Exception {
        return !isResponseWritten || !chunks.isEmpty();
    }

    @Override
    public Object nextChunk() throws Exception {
        if (!isResponseWritten) {
            isResponseWritten = true;
            return response;
        } else {
            return chunks.poll();
        }
    }

    @Override
    public boolean isEndOfInput() throws Exception {
        return isResponseWritten && chunks.isEmpty();
    }

    @Override
    public void close() {
    }
}
Then one can simply call channel.write(new ChunkedHttpResponse(response)) and the chunking is done automatically when needed.
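For completeness, a hedged sketch of where the ChunkedWriteHandler could sit in the pipeline from the question (Netty 3.x assumed; the exact position relative to the other handlers may need adjusting, and the name "chunkedWriter" is arbitrary):
    final ChannelPipeline pipeline = Channels.pipeline();
    pipeline.addLast("decoder", new HttpRequestDecoder());
    pipeline.addLast("encoder", new HttpResponseEncoder());
    // drains ChunkedInput instances such as ChunkedHttpResponse chunk by chunk
    pipeline.addLast("chunkedWriter", new ChunkedWriteHandler());
    pipeline.addLast("writer", new HttpResponseWriteDelayHandler());
    pipeline.addLast("deflater", new HttpContentCompressor(9));
    pipeline.addLast("handler", new HttpProxyHandler());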
