How do I make a Vertx handler execute earlier in eventloop? - asynchronous

I'm using Vertx 3.5.0 and very new to it. I'm trying to cancel the code execution when a client cancels their request.
Currently it's set up so that the first thing we do is deploy a verticle to run an HttpServer, and we add all of our Routes to the Router. From there we have a handler function per route. Inside this handler I'm trying this:
routingContext.request().connection().closeHandler({
    // execute logic for ending execution
});
This is the only method I've seen that actually catches the closing of the connection, but the problem is it doesn't execute the handler early enough in the event loop. So if I have any logs in there, it will look like:
...[vert.x-eventloop-thread-0].....
...[vert.x-eventloop-thread-0]..... (Let's say I cancelled the request at this point)
...[vert.x-eventloop-thread-0].....
...[vert.x-eventloop-thread-0]..... (Final log of regular execution before waiting on asynchronous db calls)
...[vert.x-eventloop-thread-0]..... (Execution of closeHandler code)
I'd like the closeHandler code to interrupt the process and execute essentially when the event actually happens.
This seems to always be the case regardless of when I cancel the request, so I figure I'm missing something about how Vert.x handles asynchrony.
I've tried executing the closeHandler code via a worker verticle, inside the blockingHandler from the Router object, and inside the connectionHandler from the HttpServer object. All had the same result.
The main code execution is also not executed by a worker verticle, just a regular one.
Thanks!

It seems you misunderstand what a closeHandler is. It's a callback that Vert.x invokes when the connection is being closed; it is not a way to terminate the request early.
If you would like to terminate the request early, one way is to use response().close() instead.
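If the goal is to stop doing further work once the client has gone away, one pattern that builds on that callback is to have the closeHandler set a flag, and have each later asynchronous step check the flag before continuing. A minimal sketch, assuming a vertx-web Router and a placeholder dbClient standing in for whatever asynchronous call the real handler makes:

router.get("/data").handler(ctx -> {
    // java.util.concurrent.atomic.AtomicBoolean
    AtomicBoolean cancelled = new AtomicBoolean(false);

    // Runs on the event loop as soon as the close event is processed.
    ctx.request().connection().closeHandler(v -> cancelled.set(true));

    dbClient.query("SELECT ...", res -> {
        // The client may have disconnected while we were waiting on the database.
        if (cancelled.get()) {
            return; // skip the remaining work
        }
        ctx.response().end(res.result().toString()); // error handling omitted
    });
});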
As a footnote, I'd like to mention that Vert.x 3.5.0 is 4 years old now, and you should upgrade to 3.9 or, if you can, to 4.0.

Related

Netty -- Performing async work inside PortUnificationServerHandler

I have a Netty ChannelInboundHandler in which I'm making calls to an external service (which is relatively slow). Depending upon the result of that call, I'm rewriting the pipeline to invoke different subsequent handlers. I have this working using a different executor group for the handler, but that's still inefficient, since the handler thread is doing nothing while waiting for the external service to respond.
Complicating the issue is that I'm doing this from a derivative of the PortUnificationServerHandler (itself a derivative of ByteToMessageDecoder), since the external service looks at the SNI hostname to determine whether or not to insert a SslHandler and decode or just to pass the traffic along straight.
I've seen how the HexDumpProxy example makes a call to an external service, but I don't see how this can be done from within something like ByteToMessageDecoder. My rough idea is to create a future for the external request, then have the future call ChannelHandlerContext.fireUserEventTriggered on completion with a custom event that my handler can look for and use to do the pipeline rewrites. But that feels ugly, and my tests didn't make it look like my event would reach my own handler...
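Roughly the shape I have in mind, as a sketch only (ExternalService, peekSniHostName and the TLS decision are placeholders for my real lookup and pipeline changes, and I extend ByteToMessageDecoder directly just to keep it short):

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import io.netty.handler.ssl.SslContext;
import java.util.List;

public class SniRoutingDecoder extends ByteToMessageDecoder {

    // Custom event fired back into the pipeline when the external call completes.
    private static final class LookupComplete {
        final boolean useTls;
        LookupComplete(boolean useTls) { this.useTls = useTls; }
    }

    private final ExternalService externalService; // placeholder async client
    private final SslContext sslCtx;
    private boolean lookupStarted;

    public SniRoutingDecoder(ExternalService externalService, SslContext sslCtx) {
        this.externalService = externalService;
        this.sslCtx = sslCtx;
    }

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        String sniHost = peekSniHostName(in); // returns null until enough bytes arrive
        if (sniHost == null || lookupStarted) {
            return; // wait for more bytes, or for the pending lookup to finish
        }
        lookupStarted = true;
        // Start the slow external call without blocking the event loop, then
        // re-enter the pipeline on this channel's event loop once it completes.
        externalService.lookup(sniHost) // assumed to return CompletableFuture<Boolean>
            .whenCompleteAsync(
                (useTls, err) -> ctx.fireUserEventTriggered(
                    new LookupComplete(Boolean.TRUE.equals(useTls))), // error handling omitted
                ctx.executor());
    }

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof LookupComplete) {
            // Back on the event loop, so rewriting the pipeline is safe here.
            if (((LookupComplete) evt).useTls) {
                ctx.pipeline().addAfter(ctx.name(), "ssl", sslCtx.newHandler(ctx.alloc()));
            }
            // Removing this decoder forwards any bytes it buffered to the next handler.
            ctx.pipeline().remove(this);
        } else {
            super.userEventTriggered(ctx, evt);
        }
    }

    private String peekSniHostName(ByteBuf in) {
        return null; // existing SNI parsing goes here
    }
}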
Suggestions?

With gRPC can I have multiple RPC calls in progress over a single connection?

I'm having trouble getting multiple RPC calls to operate over a single connection. Server and client are both operating asynchronously using a completion queue.
I fire off a streaming call (getData), which sends one reply per second for 10 seconds. I wait a couple of seconds, then try to fire off a getVersion call (a unary call) and it doesn't come back until the getData call completes. Examination of the server shows that the getVersion call never hit the server until getData finished up.
And if I try to start multiple calls while the first getData is running, they all run once the first getData finishes. And in fact, they all run in parallel - for instance, if I fire off multiple getData calls, I can see all of them running in parallel after the first (blocking) getData finishes.
It's like you can queue up all you want, but once something is in progress you can't get a new call started on that channel?
Is this supposed to do this? It doesn't seem like the correct behavior, but my experience with gRPC is somewhat limited.
The problem was a bug in the way I was waiting for the next time to send data. I was blocking things that I shouldn't have been.
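For reference, gRPC multiplexes concurrent calls over a single HTTP/2 connection, so no special setup is needed as long as nothing blocks the thread driving the completion queue. Purely as an illustration, this is roughly what two overlapping calls look like with the Java async API (DataServiceGrpc and the request/reply types are stand-ins for the generated classes):

ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 50051)
        .usePlaintext()
        .build();
DataServiceGrpc.DataServiceStub stub = DataServiceGrpc.newStub(channel);

// Start the streaming call; replies are delivered on gRPC's threads as they arrive.
stub.getData(DataRequest.getDefaultInstance(), new StreamObserver<DataReply>() {
    public void onNext(DataReply reply) { System.out.println("data: " + reply); }
    public void onError(Throwable t)    { t.printStackTrace(); }
    public void onCompleted()           { System.out.println("getData done"); }
});

// Immediately start a unary call on the same channel; it completes while getData
// is still streaming, because nothing on this thread blocks in between.
stub.getVersion(VersionRequest.getDefaultInstance(), new StreamObserver<VersionReply>() {
    public void onNext(VersionReply reply) { System.out.println("version: " + reply); }
    public void onError(Throwable t)       { t.printStackTrace(); }
    public void onCompleted()              { }
});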

Difference between this.unblock and Meteor.defer?

In my methods, I've been using this.unblock and Meteor.defer interchangeably, without any apparent difference.
The idea, as I understand it, is to follow the good practice of letting other method calls from the same client start running, without waiting for a single method to complete.
So is there a difference between them?
Thanks.
this.unblock()
only runs on the server
allows the next method call from the client to run before this call has finished. This is important because if you have a long running method you don't want it to block the UI until it finishes.
Meteor.defer()
can run on either the server or the client
basically says "run this code when there's no other code pending." It's similar to Meteor.setTimeout(func, 0)
If you deferred execution of a function on the server for example, it could still be running when the next method request came in and would block that request.

AWS Flow Framework, .get on Promises waits forever

I'm using the samples for the AWS Flow Framework, specifically helloworld and fileprocessing. I have followed all the setup instructions given here. All the client classes are successfully created with the aspect weaver. It all compiles and runs.
But trying to do .get on a Promise within an asynchronous method doesn't work. It waits forever and a result is never returned.
What am I doing wrong?
In particular, the helloworld sample doesn't have any asynchronous method, nor does it attempt to call .get on a Promise. It therefore works when copied outright, and I can see the "hello world" message printed in the activities client. Yet if I create a stub @Asynchronous method that calls get on the Promise<Void> returned by printHello, the activities client is never called and the workflow waits forever. In fact, the example still works if I just assign the returned promise to a variable; the problem only arises if I try to call .get on the Promise. The fileprocessing example, which does have asynchronous methods, doesn't work.
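The stub looks roughly like this (a sketch only; the class names mirror my copy of the helloworld sample, and printHello is the activity that returns the Promise<Void>):

import com.amazonaws.services.simpleworkflow.flow.annotations.Asynchronous;
import com.amazonaws.services.simpleworkflow.flow.core.Promise;

public class GreeterWorkflowImpl implements GreeterWorkflow {

    private final GreeterActivitiesClient client = new GreeterActivitiesClientImpl();

    @Override
    public void greet() {
        Promise<Void> printed = client.printHello("world");
        waitForPrint(printed);
    }

    @Asynchronous
    private void waitForPrint(Promise<Void> printed) {
        // The framework is supposed to defer this method until 'printed' is ready,
        // so get() here should not block -- but for me it never returns.
        printed.get();
        System.out.println("printHello completed");
    }
}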
I see the workflows and the activity types being registered in my aws console.
I'm using the Java SDK 1.4.1 and Eclipse Juno.
My list of unsuccessful attempts:
Tried it with Eclipse Indigo in case the aspect weaver does different things.
Made all asynchronous methods private as suggested by this question.
If I call .isReady() on the Promise, it is always false, even when I call it after I see the "helloworld" message printed (made certain by having a sleep in between). This leads me to think that Promise.get blocks the caller until Promise.isReady is true, but because this somehow never becomes true, the client is not called and the workflow waits forever.
Tried different endpoints.
My very bad. I had a misconfiguration in the aop.xml file, so the load-time AspectJ weaving for the remote calls was not correct.

handling XMLHttpRequest abort on asp.net

I use asynchronous XMLHttpRequest to call a function in ASP.net web service.
When I call abort on the XMLHttpRequest after the server has received the request and is processing it, the server continues processing the request.
Is there a way to stop the request processing on the server?
Generally speaking, no, you can't stop the request being processed by the server once it has started. After all, how would the server know when a request has been aborted?
It's like if you navigated to a web page but browsed to another one before the first one had loaded. That initial request will, at least to some extent (any client-side work will of course not take place), be fulfilled.
If you do wish to stop a long-running operation on the server, the service that is being invoked will need to be architected such that it can support being interrupted. Some rough pseudo-code in C#:
// LookUpSet holds the ids of operations that have been asked to abort.
// It must be shared state; here it is an in-memory, thread-safe dictionary.
static readonly ConcurrentDictionary<string, bool> LookUpSet =
    new ConcurrentDictionary<string, bool>();

void MyLongRunningMethod(string opId, WorkArgs args)
{
    var work = GetWork(args);
    foreach (var workItem in work)
    {
        DoWork(workItem);

        // Has this invocation been aborted by the client?
        bool aborted;
        if (LookUpSet.TryRemove(opId, out aborted))
        {
            return;
        }

        // Or: stop if the browser has disconnected.
        if (!HttpContext.Current.Response.IsClientConnected)
        {
            HttpContext.Current.Response.End();
            return;
        }
    }
}

void AbortOperation(string opId)
{
    LookUpSet[opId] = true;
}
So the idea here is that MyLongRunningMethod periodically checks to see if it has been aborted, returning if so. It is intended that opId is unique, so you could generate it based on the session Id of the client appended with the current time or something (in Javascript, new Date().getTime() will get you the number of milliseconds since the epoch).
With this sort of approach, the server must maintain state (the LookUpSet in my example), so you will need some way of doing that, such as a database or just storing it in memory. The service will also need to be architected such that calling abort does not leave things in a non-working state, which of course depends very heavily on what it does.
The other really important requirement is that the data can be split up and worked on in chunks. This is what allows the service to be interruptable.
Finally, if some operation is to be aborted, then AbortOperation must be called - simply aborting the XMLHttpRequest invocation won't help, as the operation will continue until completion.
Edit
From this question: ASP.Net: How to stop page execution when browser disconnects?
You could also check the Response.IsClientConnected property to try and determine whether the invocation had been aborted.
Generally speaking, the server isn't going to know that a client has disconnected until it attempts to send data to it. See Best practice to detect a client disconnection in .NET? and Instantly detect client disconnection from server socket.
As nick_w wrote, you can't stop the request being processed by the server once it has started, but it is possible to implement a solution that gives you the ability to cancel the server task. Dino Esposito has several great articles about how such things can be implemented:
Canceling Server Tasks with ASP.NET AJAX
And in the following articles Dino Esposito describes how to use the SignalR library to implement polling to the server:
Build a Progress Bar with SignalR;
Long Polling and SignalR
So if you really need to cancel a task on the server, these articles can be used as a starting point for implementing the required solution.
