REST endpoint: asynchronous execution without a return value

My problem might be very easy to solve, but I can't figure it out at the moment. In my Quarkus app I have a REST endpoint that should call a method, not wait for the result, and immediately return HTTP status code 202.
@POST
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public Response calculateAsync(String input) {
    process();
    return Response.accepted().build();
}
I've read the Quarkus documentation about Vert.x and asynchronous processing. But there the processing is done asynchronously while the client still waits for the result. My clients don't need to wait, because there is no return value; it's essentially the invocation of a batch job.
So I need something like a new thread, but one that carries the full Quarkus context.

We've found a solution:
@POST
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public Response calculateAsync(String input) {
    // Shift the work onto the Mutiny worker pool and return immediately;
    // the only subscriber just logs failures.
    Uni.createFrom().item(input)
        .emitOn(Infrastructure.getDefaultWorkerPool())
        .subscribe().with(item -> process(item), Throwable::printStackTrace);
    return Response.accepted().build();
}
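An alternative sketch, if you prefer plain fire-and-forget over Mutiny: inject a MicroProfile ManagedExecutor and submit the work to it. This assumes the quarkus-smallrye-context-propagation extension is on the classpath; the field name is ours.
import org.eclipse.microprofile.context.ManagedExecutor;

@Inject
ManagedExecutor managedExecutor; // propagates the request/CDI context to the task

@POST
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public Response calculateAsync(String input) {
    managedExecutor.runAsync(() -> process(input)); // fire and forget
    return Response.accepted().build();
}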

You can use @Suspended AsyncResponse response in the parameters and make the method return void.
Here's an example of a similar method:
@GET
@Produces(MediaType.TEXT_PLAIN)
public void hello(@Suspended AsyncResponse response) throws InterruptedException {
    // Resume (answer) the request right away, then keep working on this thread
    response.resume(Response.ok().build());
    Thread.sleep(10000);
    System.out.println("Do some work");
}

Related

gRPC onCompleted for a bidirectional stream

All the gRPC bidirectional-stream examples that I have seen follow a pattern: when the (inbound) requestObserver receives onCompleted, it invokes the onCompleted method of the (outbound) responseObserver. However, this is not done for onError.
I'm wondering: if I don't invoke responseObserver.onCompleted(), does it lead to a memory leak? And why don't we do the same for onError?
public StreamObserver<Point> recordRoute(final StreamObserver<RouteSummary> responseObserver) {
    return new StreamObserver<Point>() {
        @Override
        public void onNext(Point point) {
            // does something here
        }

        @Override
        public void onError(Throwable t) {
            logger.log(Level.WARNING, "recordRoute cancelled");
        }

        @Override
        public void onCompleted() {
            responseObserver.onCompleted();
        }
    };
}
What happens if I don't invoke responseObserver.onCompleted()? Does it lead to a memory leak?
An RPC is not complete/done until the response stream is also "completed", so yes, there will be a resource leak if you don't eventually call responseObserver.onCompleted(). In this particular example it just so happens that the response stream is terminated when the request stream is complete, but there could be cases where the response stream is completed only after more processing is done or more data is sent on the response stream.
Why don't we do it for onError?
onError() is a terminating error from the stream, which means the call is terminated. Calling onError() on the response stream is not needed and most probably won't do anything.
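For the first point, here is a sketch (not from the original example; buildSummary() and the use of java.util.concurrent.CompletableFuture and io.grpc.Status are assumptions) of a case where the response stream is completed only after extra asynchronous processing:
@Override
public void onCompleted() {
    // The request stream is done, but the RPC stays open until the
    // response stream is completed as well.
    CompletableFuture.supplyAsync(() -> buildSummary())
        .whenComplete((summary, error) -> {
            if (error != null) {
                responseObserver.onError(
                    Status.INTERNAL.withCause(error).asRuntimeException());
            } else {
                responseObserver.onNext(summary);
                responseObserver.onCompleted(); // only now is the call finished
            }
        });
}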

gRPC service's Context CancellationListener is not fired when the client cancels a service call

I have a streaming service that indefinitely streams from the server to a client until the client cancels.
On the server side, I have a thread that populates an ehcache with data sourced from a database.
Ehcache provides callbacks on cache events, i.e. when an item is added, when an item is removed, etc. I only care about notifying clients when an element is put into the cache, so when a client connects to my gRPC service I register a notifyElementPut() callback with the cache that holds a reference to the connected client's StreamObserver:
public class GrpcAwareCacheEventListener extends CacheEventListenerAdapter {

    private StreamObserver<FooResponse> responseObserver;

    public GrpcAwareCacheEventListener(StreamObserver<FooResponse> responseObserver) {
        this.responseObserver = responseObserver;
    }

    @Override
    public void notifyElementPut(Ehcache cache, Element element) throws CacheException {
        Foo foo = (Foo) element.getObjectValue();
        if (foo != null) {
            responseObserver.onNext(FooResponse.newBuilder().setFoo(foo).build());
        }
    }
}
My streaming foo service is as follows:
public void streamFooUpdates(Empty request, StreamObserver<FooResponse> responseObserver) {
    final CacheEventListener eventListener = new GrpcAwareCacheEventListener(responseObserver);
    fooCache.getCacheEventNotificationService().registerListener(eventListener);
    Context.current().withCancellation().addListener(new CancellationListener() {
        public void cancelled(Context context) {
            log.info("inside context cancelled callback");
            fooCache.getCacheEventNotificationService().unregisterListener(eventListener);
        }
    }, ForkJoinPool.commonPool());
}
This all works fine; the client is notified of all foo updates as long as it is connected.
However, after the client disconnects or explicitly cancels the call, I expect the server's Context cancellation listener to fire and unregister the callback from the cache.
This is not the case, regardless of whether the client shuts down the channel or explicitly cancels the call (I expect the server-side cancelled context to fire for both of these events). I'm wondering if my cancel semantics on the client side are incorrect; here is my client code, taken from a test case:
Channel channel = ManagedChannelBuilder.forAddress("localhost", 25001)
    .usePlaintext().build();
FooServiceGrpc.FooServiceStub stub = FooServiceGrpc.newStub(channel);
ClientCallStreamObserver<FooResponse> cancellableObserver = new ClientCallStreamObserver<FooResponse>() {
    public void onNext(FooResponse response) {
        log.info("received foo: {}", response.getFoo());
    }
    public void onError(Throwable throwable) {
    }
    public void onCompleted() {
    }
    public boolean isReady() {
        return false;
    }
    public void setOnReadyHandler(Runnable runnable) {
    }
    public void disableAutoInboundFlowControl() {
    }
    public void request(int i) {
    }
    public void setMessageCompression(boolean b) {
    }
    public void cancel(@Nullable String s, @Nullable Throwable throwable) {
    }
};
stub.streamFooUpdates(Empty.newBuilder().build(), cancellableObserver);
Thread.sleep(10000); // sleep 10 seconds while messages are received
cancellableObserver.cancel("cancelling from test", null); // explicit cancel
((ManagedChannel) channel).shutdown().awaitTermination(5, TimeUnit.SECONDS); // shutdown as well, for good measure
Thread.sleep(7000); // channel should be shut down by now
I'm wondering why the server is not firing the "Context cancelled" callback.
Thanks!
You are not cancelling the client call correctly. The StreamObserver passed as the second argument of stub.streamFooUpdates() is your callback; you shouldn't call anything on that StreamObserver.
There are two ways to cancel the call from the client side.
Option 1: Pass a ClientResponseObserver as the second argument and implement beforeStart(), which gives you a ClientCallStreamObserver on which you can call cancel() (see the sketch below).
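A sketch of Option 1 (not from the original answer; Empty and FooResponse are reused from the question, and the AtomicReference is just one way to keep the handle around):
AtomicReference<ClientCallStreamObserver<Empty>> callRef = new AtomicReference<>();

ClientResponseObserver<Empty, FooResponse> observer =
    new ClientResponseObserver<Empty, FooResponse>() {
        @Override
        public void beforeStart(ClientCallStreamObserver<Empty> requestStream) {
            callRef.set(requestStream); // keep a handle for later cancellation
        }

        @Override
        public void onNext(FooResponse response) {
            log.info("received foo: {}", response.getFoo());
        }

        @Override
        public void onError(Throwable t) {
        }

        @Override
        public void onCompleted() {
        }
    };

stub.streamFooUpdates(Empty.newBuilder().build(), observer);
// ... later, when the client wants to stop:
callRef.get().cancel("cancelling from test", null);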
Option 2: Run stub.streamFooUpdates() inside a CancellableContext and cancel the Context to cancel the call. Note that a CancellableContext must always be cancelled; that's what the finally block is for.
CancellableContext withCancellation = Context.current().withCancellation();
try {
    withCancellation.run(() -> {
        // the call is bound to the cancellable context
        stub.streamFooUpdates(...);
    });
    Thread.sleep(10000);           // let the stream run for a while
    withCancellation.cancel(null); // cancels the call (and the server-side context)
} finally {
    withCancellation.cancel(null); // a CancellableContext must always be cancelled
}

Undertow: use a Hystrix Observable in an HTTP handler

I managed to set up a Hystrix command to be called from an Undertow HTTP handler:
public void handleRequest(HttpServerExchange exchange) throws Exception {
    if (exchange.isInIoThread()) {
        exchange.dispatch(this);
        return;
    }
    RpcClient rpcClient = new RpcClient(/* ... */);
    try {
        byte[] response = new RpcCommand(rpcClient).execute();
        // send the response
    } catch (Exception e) {
        // send an error
    }
}
This works nicely. But now I would like to use the observable feature of Hystrix, calling observe() instead of execute(), to make the code non-blocking.
public void handleRequest(HttpServerExchange exchange) throws Exception {
    RpcClient rpcClient = new RpcClient(/* ... */);
    new RpcCommand(rpcClient).observe().subscribe(new Observer<byte[]>() {
        @Override
        public void onCompleted() {
        }

        @Override
        public void onError(Throwable throwable) {
            exchange.setStatusCode(StatusCodes.INTERNAL_SERVER_ERROR);
            exchange.endExchange();
        }

        @Override
        public void onNext(byte[] body) {
            exchange.getResponseHeaders().add(Headers.CONTENT_TYPE, "text/plain");
            exchange.getResponseSender().send(ByteBuffer.wrap(body));
        }
    });
}
As expected (from reading the docs), the handler returns immediately and, as a consequence, the exchange is ended; when the onNext callback is executed, it fails with an exception:
Caused by: java.lang.IllegalStateException: UT000127: Response has already been sent
at io.undertow.io.AsyncSenderImpl.send(AsyncSenderImpl.java:122)
at io.undertow.io.AsyncSenderImpl.send(AsyncSenderImpl.java:272)
at com.xxx.poc.undertow.DiyServerBootstrap$1$1.onNext(DiyServerBootstrap.java:141)
at com.xxx.poc.undertow.DiyServerBootstrap$1$1.onNext(DiyServerBootstrap.java:115)
at rx.internal.util.ObserverSubscriber.onNext(ObserverSubscriber.java:34)
Is there a way to tell Undertow that the handler is performing IO asynchronously? I expect to use a lot of non-blocking code to access databases and other services.
Thanks in advance!
You should dispatch() a Runnable so that the exchange is not ended when the handleRequest method returns. Since creating the client and subscribing are fairly simple tasks, you can do them on the same thread with SameThreadExecutor.INSTANCE, like this:
public void handleRequest(HttpServerExchange exchange) throws Exception {
    exchange.dispatch(SameThreadExecutor.INSTANCE, () -> {
        RpcClient rpcClient = new RpcClient(/* ... */);
        new RpcCommand(rpcClient).observe().subscribe(new Observer<byte[]>() {
            // ...
        });
    });
}
(If you do not pass an executor to dispatch(), it will dispatch it to the XNIO worker thread pool. If you wish to do the client creation and subscription on your own executor, then you should pass that instead.)

Processing GET Body with Zuul

I am using Zuul to proxy a strange client that sends a body as part of a GET request. There is unfortunately no way I can change the client.
With curl such a request can be sent as:
curl -XGET 'localhost:8765/kibana/index.html' -d '{"key": "value"}'
And the data is really sent in the body. On the Zuul side, however, when I try to read the body it is empty. Here is my prototype Zuul code:
@Configuration
@ComponentScan
@EnableAutoConfiguration
@Controller
@EnableZuulProxy
public class ZuulServerApplication {

    @Bean
    public ZuulFilter myFilter() {
        return new ZuulFilter() {
            @Override
            public Object run() {
                RequestContext ctx = RequestContext.getCurrentContext();
                HttpServletRequest request = (HttpServletRequest) ctx.getRequest();
                try {
                    InputStream is = request.getInputStream();
                    String content = IOUtils.toString(is);
                    System.out.println("Request content:" + content);
                } catch (IOException e) {
                    e.printStackTrace();
                }
                return null;
            }

            @Override
            public boolean shouldFilter() {
                return true;
            }

            @Override
            public int filterOrder() {
                return 10;
            }

            @Override
            public String filterType() {
                return "pre";
            }
        };
    }

    public static void main(String[] args) {
        new SpringApplicationBuilder(ZuulServerApplication.class).web(true).run(args);
    }
}
If I send a POST request, this code prints the request body without problem. However, if I send the above GET request, the body is not printed. Is there anything I can do to actually get the body sent as part of a GET request?
It seems that some underlying machinery[0], e.g. some built-in Zuul filter with a lower filter order, replaces the default "raw" HttpServletRequest with an HttpServletRequestWrapper which, under standard circumstances (i.e. not a GET with a body), can serve the input stream multiple times. In the case of a GET with a body, however, the HttpServletRequestWrapper seems not to proxy the input stream at all.
A solution could therefore be to change filterOrder, e.g. to -10, as in the sketch below.
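A minimal sketch of that change (everything else in the filter above stays the same):
@Override
public int filterOrder() {
    return -10; // run before Zuul's built-in pre filters wrap the request
}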
Then it works for this filter because the plain HttpServletRequest is still in place: the machinery mentioned above hasn't had its turn yet, so it hasn't replaced the HttpServletRequest with the HttpServletRequestWrapper. A potential issue with this approach is that the filter might exhaust the input stream for something else, e.g. a filter with a higher filter order. But since GET with a body is not good practice anyway, it may be a good enough solution after all :)
[0] I debugged into this a long time ago but didn't get to the exact spot, hence the vague description of "the machinery".

How to block an HTTP request until a callback method is called

Are there any ideas on how to block an HTTP request until a callback method is called?
Like this (if using Java):
protected void get(HttpRequest request, HttpResponse response)
{
    dosomething();
    // call some async service
    call_some_service(new Callback() {
        public void callback(String result)
        {
            // continue request
            request.continue();
        }
    });
    // wait for callback
    request.wait();
}
Thank you.
Use a CountDownLatch with a count of 1: the request thread awaits the latch, and the callback counts it down. See the sketch below.
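A minimal sketch of the idea, reusing the names from the question (call_some_service, Callback, and dosomething are assumed to exist):
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

protected void get(HttpRequest request, HttpResponse response) throws InterruptedException {
    dosomething();

    CountDownLatch latch = new CountDownLatch(1);
    AtomicReference<String> resultHolder = new AtomicReference<>();

    // call some async service
    call_some_service(new Callback() {
        public void callback(String result) {
            resultHolder.set(result);
            latch.countDown(); // release the waiting request thread
        }
    });

    // block the request thread until the callback fires (with a timeout for safety)
    if (!latch.await(30, TimeUnit.SECONDS)) {
        // handle the timeout, e.g. send an error response
    }
    // continue handling the request using resultHolder.get()
}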
