Are there any ideas on how to block an HTTP request until a callback method is called?
Like this (if using Java):
protected void get(HttpRequest request, HttpResponse response)
{
    dosomething();
    // call some async service
    call_some_service(new Callback() {
        public void callback(String result)
        {
            // continue request
            request.continue();
        }
    });
    // wait for callback
    request.wait();
}
Thank you.
Use a CountDownLatch with a count of 1.
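A minimal sketch of that idea, reusing the (pseudocode) names from the question:

import java.util.concurrent.CountDownLatch;

protected void get(HttpRequest request, HttpResponse response) throws InterruptedException {
    dosomething();
    final CountDownLatch latch = new CountDownLatch(1);
    // call some async service
    call_some_service(new Callback() {
        public void callback(String result) {
            // release the thread waiting below
            latch.countDown();
        }
    });
    // block until the callback fires (real code should prefer await with a timeout)
    latch.await();
}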
All the gRPC bidi-stream examples that I have seen follow a pattern where, when the (inbound) requestObserver receives onCompleted, it invokes the onCompleted method of the (outbound) responseObserver. However, this is not done for onError.
I'm wondering: what happens if I don't invoke responseObserver.onCompleted()? Does it lead to a memory leak? And why don't we do the same for onError?
public StreamObserver<Point> recordRoute(final StreamObserver<RouteSummary> responseObserver) {
    return new StreamObserver<Point>() {
        @Override
        public void onNext(Point point) {
            // does something here
        }

        @Override
        public void onError(Throwable t) {
            logger.log(Level.WARNING, "recordRoute cancelled");
        }

        @Override
        public void onCompleted() {
            responseObserver.onCompleted();
        }
    };
}
What happens if I don't invoke responseObserver.onCompleted()? Does it lead to a memory leak?
An RPC is not complete/done until the response stream is also "completed", so yes, there will be a resource leak if you don't eventually call responseObserver.onCompleted(). In this particular example the response stream happens to be terminated as soon as the request stream is complete, but there are cases where the response stream is completed only after more processing is done or more data is sent on the response stream.
Why don't we do it for onError?
onError() is a terminating error on the stream, which means the call is already terminated. Calling onError() (or onCompleted()) on the response stream is therefore not needed and most probably won't do anything.
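For instance, building on the recordRoute example above (the pointCount variable is assumed to be tracked in onNext, just for illustration), the response stream might only be completed after a final summary is sent, while onError only cleans up server-side state:

@Override
public void onError(Throwable t) {
    // The call is already terminated; just release any server-side resources.
    logger.log(Level.WARNING, "recordRoute cancelled", t);
}

@Override
public void onCompleted() {
    // The RPC (and its resources) stays open until the response stream is completed too.
    responseObserver.onNext(RouteSummary.newBuilder().setPointCount(pointCount).build());
    responseObserver.onCompleted();
}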
My problem might be very easy to solve, but I don't get it at the moment. In my Quarkus app I have a REST endpoint which should call a method, not wait for the result, and immediately return a 202 HTTP status code.
@POST
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public Response calculateAsync(String input) {
    process();
    return Response.accepted().build();
}
I've read the Quarkus documentation about Vert.x and asynchronous processing. But the point there is: the processing is done asynchronously, yet the client still waits for the result. My clients don't need to wait, because there is no return value. It's something like the invocation of a batch job.
So I need something like a new thread, but with all the Quarkus context.
We've found a solution:
@POST
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public Response calculateAsync(String input) {
    Uni.createFrom().item(input)
        .emitOn(Infrastructure.getDefaultWorkerPool())
        .subscribe().with(
            item -> process(input), Throwable::printStackTrace
        );
    return Response.accepted().build();
}
You can use @Suspended AsyncResponse response in the parameters and make the method return void.
Here's an example of a similar method:
@GET
@Produces(MediaType.TEXT_PLAIN)
public void hello(@Suspended AsyncResponse response) throws InterruptedException {
    response.resume(Response.ok().build());
    Thread.sleep(10000);
    System.out.println("Do some work");
}
I have an HTTP PUT request that is not being called. I have a similar PUT request that manages another tab and it works. Both pages are pretty much identical, and I don't know what I am doing wrong.
I have tried almost everything and don't know what else to try.
controller:
[HttpPut]
[Route("updateAllocations({type})")]
public IHttpActionResult UpdateAllocations(string type, T_LOC entity)
{
    System.Diagnostics.Debug.WriteLine("inside");
    _allocationsService.UpdateAllocations(type, entity);
    return Ok();
}
interface:
using OTPS.Core.Objects;
using System.Collections.Generic;
using OTPS.Core.Models;

namespace OTPS.Core.Interfaces
{
    public interface IAllocationsService
    {
        void UpdateAllocations(string type, T_LOC entity);
    }
}
service:
public void UpdateAllocations(string type, T_LOC entity)
{
    System.Diagnostics.Debug.WriteLine("inside");
}
CLIENT SIDE:
public updateAllocation(type: string, entity) {
    console.log("sdfsdf")
    console.log(`${this.baseUrl}/api/allocations/updateAllocations(${type})`)
    return this.http.put(`${this.baseUrl}/api/allocations/updateAllocations({type})`, entity, { headers: this.headers, withCredentials: true })
        .pipe(catchError((error: Error) => {
            console.log("sdfasd111111sdf")
            return this.errorService.handleError(error);
        }));
}
I am expecting the client side to call the PUT request before doing any further logic, but the print on the server side never gets called.
Make sure that you subscribe to the service method inside the component:
this.myService.updateAllocation(type, entity).subscribe(response => {
    // do something here with response
});
You must call subscribe() or nothing happens. Just calling a
service method does not initiate the PUT/DELETE/POST/GET request.
Always subscribe!
An HttpClient method does not begin its HTTP request until you call subscribe() on the observable returned by that method. This is true for all HttpClient methods.
I have a streaming service that indefinitely streams from the server to a client until the client cancels.
On the server side, I have a thread that populates an ehcache with data sourced from a database.
Ehcache provides callbacks on cache events, i.e., when an item is added, when an item is removed, etc. I only care about notifying clients when an element is put into the cache, so when a client connects to my gRPC service, I register a notifyElementPut() callback with the cache that holds a reference to the connected client's StreamObserver:
public class GrpcAwareCacheEventListener extends CacheEventListenerAdapter {

    private StreamObserver<FooResponse> responseObserver;

    public GrpcAwareCacheEventListener(StreamObserver<FooResponse> responseObserver) {
        this.responseObserver = responseObserver;
    }

    @Override
    public void notifyElementPut(Ehcache cache, Element element) throws CacheException {
        Foo foo = (Foo) element.getObjectValue();
        if (foo != null) {
            responseObserver.onNext(FooResponse.newBuilder().setFoo(foo).build());
        }
    }
}
My streaming foo service is as follows:
public void streamFooUpdates(Empty request, StreamObserver<FooResponse> responseObserver) {
    final CacheEventListener eventListener = new GrpcAwareCacheEventListener(responseObserver);
    fooCache.getCacheEventNotificationService().registerListener(eventListener);
    Context.current().withCancellation().addListener(new CancellationListener() {
        public void cancelled(Context context) {
            log.info("inside context cancelled callback");
            fooCache.getCacheEventNotificationService().unregisterListener(eventListener);
        }
    }, ForkJoinPool.commonPool());
}
This all works fine; the client is notified of all foo updates as long as it is connected.
However, after the client disconnects or explicitly cancels the call, I expect the server's Context cancellation listener to fire and unregister the callback from the cache.
This is not the case, regardless of whether the client shuts down the channel or explicitly cancels the call (I expect the server-side cancelled context to fire for both of these events). I'm wondering if my cancel semantics on the client side are incorrect; here is my client code, taken from a test case:
Channel channel = ManagedChannelBuilder.forAddress("localhost", 25001)
        .usePlaintext().build();
FooServiceGrpc.FooServiceStub stub = FooServiceGrpc.newStub(channel);

ClientCallStreamObserver<FooResponse> cancellableObserver = new ClientCallStreamObserver<FooResponse>() {
    public void onNext(FooResponse response) {
        log.info("received foo: {}", response.getFoo());
    }

    public void onError(Throwable throwable) {
    }

    public void onCompleted() {
    }

    public boolean isReady() {
        return false;
    }

    public void setOnReadyHandler(Runnable runnable) {
    }

    public void disableAutoInboundFlowControl() {
    }

    public void request(int i) {
    }

    public void setMessageCompression(boolean b) {
    }

    public void cancel(@Nullable String s, @Nullable Throwable throwable) {
    }
};

stub.streamFooUpdates(Empty.newBuilder().build(), cancellableObserver);
Thread.sleep(10000); // sleep 10 seconds while messages are received.
cancellableObserver.cancel("cancelling from test", null); // explicit cancel
((ManagedChannel) channel).shutdown().awaitTermination(5, TimeUnit.SECONDS); // shutdown as well, for good measure.
Thread.sleep(7000); // channel should be shut down by now.
I'm wondering why the server is not firing the "Context cancelled" callback.
Thanks!
You are not cancelling the client call correctly. The StreamObserver in the second argument of stub.streamFooUpdates() is your callback; you shouldn't call anything on that StreamObserver.
There are two ways to cancel the call from the client side.
Option 1: Pass a ClientResponseObserver as the second argument, implement beforeStart(), which gives you a ClientCallStreamObserver, on which you can call cancel().
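A minimal sketch of Option 1, reusing the Empty/FooResponse types and the logging from the question (the AtomicReference is only there so the test can reach the call afterwards):

final AtomicReference<ClientCallStreamObserver<Empty>> requestStreamRef = new AtomicReference<>();

stub.streamFooUpdates(Empty.newBuilder().build(),
    new ClientResponseObserver<Empty, FooResponse>() {
        @Override
        public void beforeStart(ClientCallStreamObserver<Empty> requestStream) {
            // Capture the low-level call so it can be cancelled later.
            requestStreamRef.set(requestStream);
        }

        @Override
        public void onNext(FooResponse response) {
            log.info("received foo: {}", response.getFoo());
        }

        @Override
        public void onError(Throwable throwable) {
        }

        @Override
        public void onCompleted() {
        }
    });

Thread.sleep(10000); // receive messages for a while
requestStreamRef.get().cancel("cancelling from test", null); // this triggers the server-side Context cancellation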
Option 2: Run stub.streamFooUpdates() inside a CancellableContext, and cancel the Context to cancel the call. Note that a CancellableContext must always be cancelled; that's what the finally block is for.
CancellableContext withCancellation = Context.current().withCancellation();
try {
    withCancellation.run(() -> {
        stub.streamFooUpdates(...);
        try {
            Thread.sleep(10000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        withCancellation.cancel(null);
    });
} finally {
    withCancellation.cancel(null);
}
I managed to set up a Hystrix command to be called from an Undertow HTTP handler:
public void handleRequest(HttpServerExchange exchange) throws Exception {
    if (exchange.isInIoThread()) {
        exchange.dispatch(this);
        return;
    }
    RpcClient rpcClient = new RpcClient(/* ... */);
    try {
        byte[] response = new RpcCommand(rpcClient).execute();
        // send the response
    } catch (Exception e) {
        // send an error
    }
}
This works nicely. But now I would like to use the observable feature of Hystrix, calling observe() instead of execute(), to make the code non-blocking.
public void handleRequest(HttpServerExchange exchange) throws Exception {
    RpcClient rpcClient = new RpcClient(/* ... */);
    new RpcCommand(rpcClient).observe().subscribe(new Observer<byte[]>() {
        @Override
        public void onCompleted() {
        }

        @Override
        public void onError(Throwable throwable) {
            exchange.setStatusCode(StatusCodes.INTERNAL_SERVER_ERROR);
            exchange.endExchange();
        }

        @Override
        public void onNext(byte[] body) {
            exchange.getResponseHeaders().add(Headers.CONTENT_TYPE, "text/plain");
            exchange.getResponseSender().send(ByteBuffer.wrap(body));
        }
    });
}
As expected (from reading the docs), the handler returns immediately and, as a consequence, the exchange is ended; when the onNext callback is later executed, it fails with an exception:
Caused by: java.lang.IllegalStateException: UT000127: Response has already been sent
at io.undertow.io.AsyncSenderImpl.send(AsyncSenderImpl.java:122)
at io.undertow.io.AsyncSenderImpl.send(AsyncSenderImpl.java:272)
at com.xxx.poc.undertow.DiyServerBootstrap$1$1.onNext(DiyServerBootstrap.java:141)
at com.xxx.poc.undertow.DiyServerBootstrap$1$1.onNext(DiyServerBootstrap.java:115)
at rx.internal.util.ObserverSubscriber.onNext(ObserverSubscriber.java:34)
Is there a way to tell Undertow that the handler is doing I/O asynchronously? I expect to use a lot of non-blocking code to access databases and other services.
Thanks in advance!
You should dispatch() a Runnable so the exchange does not end when the handleRequest method returns. Since the creation of the client and the subscription are pretty simple tasks, you can do them on the same thread with SameThreadExecutor.INSTANCE, like this:
public void handleRequest(HttpServerExchange exchange) throws Exception {
    exchange.dispatch(SameThreadExecutor.INSTANCE, () -> {
        RpcClient rpcClient = new RpcClient(/* ... */);
        new RpcCommand(rpcClient).observe().subscribe(new Observer<byte[]>() {
            // ...
        });
    });
}
(If you do not pass an executor to dispatch(), it will dispatch to the XNIO worker thread pool. If you wish to do the client creation and subscription on your own executor, pass that instead.)
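For example, a sketch with a made-up executor (any java.util.concurrent.Executor works; the pool name and size here are assumptions):

private final ExecutorService rpcPool = Executors.newFixedThreadPool(8);

public void handleRequest(HttpServerExchange exchange) throws Exception {
    // Dispatch to a custom pool instead of SameThreadExecutor.INSTANCE or the XNIO workers.
    exchange.dispatch(rpcPool, () -> {
        RpcClient rpcClient = new RpcClient(/* ... */);
        new RpcCommand(rpcClient).observe().subscribe(new Observer<byte[]>() {
            // ... same Observer as above
        });
    });
}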