Netty -- Performing async work inside PortUnificationServerHandler

I have a Netty ChannelInboundHandler in which I'm making calls to an external service (which is relatively slow). Depending upon the result of that call, I'm rewriting the pipeline to invoke different subsequent handlers. I have this working using a different executor group for the handler, but that's still inefficient, since the handler thread is doing nothing while waiting for the external service to respond.
Complicating the issue is that I'm doing this from a derivative of the PortUnificationServerHandler (itself a derivative of ByteToMessageDecoder), since the external service looks at the SNI hostname to determine whether to insert an SslHandler and decode, or just to pass the traffic along untouched.
I've seen how the HexDumpProxy example makes a call to an external service, but I don't see how this can be done from within something like ByteToMessageDecoder. My rough idea is to create a future for the external request, then have the future call ChannelHandlerContext.fireUserEventTriggered on completion with a custom event that my handler can look for and use to do the pipeline rewrites. But that feels ugly, and my tests didn't make it look like the event would reach my own handler...
Suggestions?
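To make the idea concrete, here is a rough sketch of one way this can look, written directly against ByteToMessageDecoder (ExternalService and peekSniHostname() are hypothetical stand-ins, not real APIs): rather than a custom user event, the future's completion callback hops back onto the channel's event loop and rewrites the pipeline there.

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import io.netty.handler.ssl.SslContext;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Hypothetical async client for the external lookup.
interface ExternalService {
    CompletableFuture<Boolean> shouldTerminateTls(String sniHostname);
}

public class SniRoutingDecoder extends ByteToMessageDecoder {

    private final SslContext sslContext;
    private final ExternalService externalService;
    private boolean lookupStarted;

    public SniRoutingDecoder(SslContext sslContext, ExternalService externalService) {
        this.sslContext = sslContext;
        this.externalService = externalService;
    }

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        if (lookupStarted) {
            return; // ByteToMessageDecoder keeps accumulating bytes until the pipeline is rewritten
        }
        String sniHost = peekSniHostname(in); // hypothetical helper; must not consume bytes
        if (sniHost == null) {
            return; // not enough bytes yet to see the ClientHello
        }
        lookupStarted = true;
        CompletableFuture<Boolean> lookup = externalService.shouldTerminateTls(sniHost);
        lookup.whenComplete((useTls, error) -> {
            // The future may complete on a foreign thread, so hop back onto the
            // channel's event loop before touching the pipeline.
            ctx.channel().eventLoop().execute(() -> {
                if (error != null) {
                    ctx.close();
                    return;
                }
                if (useTls) {
                    ctx.pipeline().addAfter(ctx.name(), "ssl", sslContext.newHandler(ctx.alloc()));
                }
                // else: insert the plain pass-through handlers here instead.
                // Removing the decoder replays the buffered bytes into the rewritten pipeline.
                ctx.pipeline().remove(this);
            });
        });
    }

    private String peekSniHostname(ByteBuf in) {
        return null; // placeholder; real parsing of the TLS ClientHello goes here
    }
}

Because the decoder keeps buffering while the lookup is outstanding, removing it afterwards replays whatever arrived in the meantime through the new pipeline; pausing reads with channel.config().setAutoRead(false) during the lookup is a further refinement if unbounded buffering is a concern.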

Related

How do I make a Vertx handler execute earlier in eventloop?

I'm using Vertx 3.5.0 and very new to it. I'm trying to cancel the code execution when a client cancels their request.
Currently it's set up so that the first thing we do is deploy a verticle to run an HttpServer, and we add all of our Routes to the Router. From here we have a handler function per route. Inside this handler I'm trying this:
routingContext.request().connection().closeHandler(v -> {
    // execute logic for ending execution
});
This is the only method I've seen that actually catches the closing of the connection, but the problem is that it doesn't execute the handler early enough in the event loop. So if I have any logs in there, it will look like:
...[vert.x-eventloop-thread-0].....
...[vert.x-eventloop-thread-0]..... (Let's say I cancelled the request at this point)
...[vert.x-eventloop-thread-0].....
...[vert.x-eventloop-thread-0]..... (Final log of regular execution before waiting on asynchronous db calls)
...[vert.x-eventloop-thread-0]..... (Execution of closeHandler code)
I'd like for the closeHandler code to interrupt the process and execute essentially when the event actually happens.
This seems to always be the case regardless of when I cancel the request, so I figure I'm missing something about how Vertx handles asynchrony.
I've tried executing the closeHandler code via a worker verticle, inside the blockingHandler from the Router object, and inside the connectionHandler from the HttpServer object. All had the same result.
The main code execution is also not executed by a worker verticle, just a regular one.
Thanks!
It seems you misunderstand what a closeHandler is. It's a callback that Vert.x invokes when the request is being closed. It is not a way to terminate the request early.
If you would like to terminate the request early, one way is to use response().close() instead.
As a footnote, I'd like to mention that Vert.x 3.5.0 is 4 years old now, and you should be upgrading to 3.9 or, if you can, to 4.0.
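To make the distinction concrete, here is a minimal illustrative sketch (the database call is a stand-in, not the asker's code): treat closeHandler purely as a notification, and consult a flag before doing further work when the async result comes back.

import io.vertx.core.Handler;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.RoutingContext;
import java.util.concurrent.atomic.AtomicBoolean;

public class RouteSetup {

    public static void register(Router router) {
        router.get("/slow").handler(RouteSetup::handleSlowRequest);
    }

    private static void handleSlowRequest(RoutingContext ctx) {
        AtomicBoolean clientGone = new AtomicBoolean(false);
        // Notification only: Vert.x calls this once the connection has actually closed.
        ctx.request().connection().closeHandler(v -> clientGone.set(true));

        queryDatabase(result -> {          // placeholder for the real async DB call
            if (clientGone.get()) {
                return;                    // client went away; skip the response and any follow-up work
            }
            ctx.response().end(result);
        });
    }

    // Placeholder async operation standing in for the real database client.
    private static void queryDatabase(Handler<String> resultHandler) {
        resultHandler.handle("{\"state\":\"Colorado\"}");
    }
}

Note that this doesn't make the close event run any earlier on the event loop; it only stops you from doing wasted work (or writing to a dead connection) after it has fired.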

What is the correct way to add an EventListener to an AtmosphereResource?

I am using Atmosphere Framework 2.0.8.
I have implemented an AtmosphereHandler in my application and have two way communication occurring correctly over WebSockets and Long Polling.
I am trying to add some handling code for when the client disconnects to clean up some resources specific to that client (i.e. I have an entry in a table I want to remove).
I have read the following wiki entries:
OnDisconnect Tricks: https://github.com/Atmosphere/atmosphere/wiki/onDisconnect-tricks
Configuring Atmosphere Listener: https://github.com/Atmosphere/atmosphere/wiki/Configuring-Atmosphere-Listener
The thing I am not clear on is where I should add the call to
atmosphereResource.addEventListener( new AtmosphereResourceEventListenerAdapter() {} );
I eventually found some example code in the JavaDoc for the AtmosphereHandler that registers the EventListener in the onRequest() method. http://atmosphere.github.io/atmosphere/apidocs/org/atmosphere/cpr/AtmosphereHandler.html
What I would like to know is whether this is the correct way to go about it.
It is my understanding that the AtmosphereResource represents the connection between a client and the server for the life of that connection. The uuid stays consistent for the object on multiple calls through the onRequest() method from the same client. As such, the same AtmosphereResource object will get the EventListener added every time the onRequest method is called.
This seems wrong. Wouldn't this lead to thousands of EventListeners being registered for each AtmosphereResource?
It seems that the EventListener should only be registered once for each AtmosphereResource.
I feel like I am missing something fundamental here. Could someone please explain?
Here's an example using MeteorServlet, so it won't look exactly like what you'll have to do, but it should get you started. I add the listener to a Meteor instance, and you'll add yours to an AtmosphereResource. Each resource gets just one listener.
The overridden onDisconnect() method calls this Grails service method that handles the event. Of course, you'll want to call something that cleans up your database resource.
Note that the servlet is configured using these options. I think you might need the org.atmosphere.interceptor.HeartbeatInterceptor, but it's been so long since I initially set it up, I can't remember if it's necessary.
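Since the answer's original snippets aren't reproduced above, here is a rough, illustrative AtmosphereHandler variant of the same idea, patterned on the linked AtmosphereHandler JavaDoc example (the cleanup method and the GET/POST split are assumptions): the listener is added on the request that suspends the connection, and its onDisconnect() does the per-client cleanup.

import java.io.IOException;
import org.atmosphere.cpr.AtmosphereHandler;
import org.atmosphere.cpr.AtmosphereResource;
import org.atmosphere.cpr.AtmosphereResourceEvent;
import org.atmosphere.cpr.AtmosphereResourceEventListenerAdapter;

public class CleanupAwareHandler implements AtmosphereHandler {

    @Override
    public void onRequest(AtmosphereResource resource) throws IOException {
        if ("GET".equalsIgnoreCase(resource.getRequest().getMethod())) {
            // Register the listener on the request that suspends the connection.
            resource.addEventListener(new AtmosphereResourceEventListenerAdapter() {
                @Override
                public void onDisconnect(AtmosphereResourceEvent event) {
                    removeClientEntry(event.getResource().uuid()); // illustrative cleanup hook
                }
            });
            resource.suspend();
        } else {
            // Subsequent POSTs from an already-connected client just broadcast their payload.
            resource.getBroadcaster().broadcast(resource.getRequest().getReader().readLine());
        }
    }

    @Override
    public void onStateChange(AtmosphereResourceEvent event) throws IOException {
        if (event.isSuspended() && event.getMessage() != null) {
            event.getResource().getResponse().getWriter().write(event.getMessage().toString());
            event.getResource().getResponse().getWriter().flush();
        }
    }

    @Override
    public void destroy() {
    }

    private void removeClientEntry(String uuid) {
        // delete the per-client table entry here
    }
}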

Web API 2 - are all REST requests asynchronous?

Do I need to do anything to make all requests asynchronous or are they automatically handled that way?
I ran some tests and it appears that each request comes in on its own thread, but I figure better to ask as I might have tested wrong.
Update: (I have a bad habit of not explaining fully - sorry) Here's my concern. A client browser makes a REST request to my server, e.g. http://data.domain/com/employee_database/?query=state:Colorado. That comes in to the appropriate method in the controller. That method queries the database and returns an object, which is then turned into a JSON structure and returned to the calling app.
Now let's say 10,000 clients all make a similar query to the same server. So I have 10,000 requests coming in at once. Will my controller method be called simultaneously in 10,000 distinct threads? Or must the first request return before the second request is called?
I'm not asking about the code in my handler method having asynchronous components. For my case the request becomes a single SQL query so the code has nothing that can be handled asynchronously. And until I get the requested data, I can't return from the method.
No, REST is not async by default; the requests are handled synchronously. However, your web server (IIS) has a max-threads setting that controls how many requests can be worked on at the same time, and it maintains a queue of received requests. So a request goes into the queue, and if a thread is available it gets executed; otherwise the request waits in the IIS queue until a thread is available.
I think you should be using async IO for operations such as database calls in your case. Yes, in Web API every request has its own thread, but threads can run out if there are many concurrent requests. Threads also use memory, so if your API gets hit by too many requests it may put pressure on your system.
The benefit of using async over sync is that you use your system resources wisely. Instead of blocking the thread while it waits for the database call to complete, as in the sync implementation, async frees the thread to handle more requests or assigns it to whatever work needs a thread. Once the IO (database) call completes, another thread will take it from there and continue with the implementation. Async will also make your API run faster if your IO operations take longer to complete.
To be honest, your question is not very clear. If you are making an HTTP GET using HttpClient, say with the GetAsync method, the request is fired and you can do whatever you want on your thread until you get the response back. So that request is asynchronous.
If you are asking about the server side that handles this request (assuming it is ASP.NET Web API), then whether it is asynchronous or not is up to how you implemented your Web API. If your action method does three things, say 1, 2, and 3, one after the other synchronously in blocking mode, the same thread is going to service the request.
On the other hand, say #2 above is a call to a web service and it is an HTTP call. Now, if you use HttpClient and you make an asynchronous call, you can get into a situation where one request is serviced by more than one thread. For that to happen, you should have made the HTTP call from your action method asynchronously and used the async keyword. In that case, when you call await inside the action method, your action method execution returns and the thread servicing your request is free to service some other request; ultimately, when the response is available, the same or some other thread will continue from where it left off. Long boring answer, perhaps, but difficult to explain just through words by typing, I guess. Hope you get some clarity.
UPDATE:
Your action method will execute in parallel in 10,000 threads (ideally). The reason I say ideally is that a CLR thread pool with 10,000 threads is not typical and probably impractical as well. There are physical limits as well as limits imposed by the framework, but I guess the answer to your question is that the requests will be serviced in parallel. The correct term here is 'parallel', not 'async'.
Whether it is sync or async is your choice. You choose by the way you write your action. If you return a Task, and also use async IO under the hood, it is async. In other cases it is synchronous.
Don't feel tempted to slap async on your action and use Task.Run. That is async-over-sync (a known anti-pattern). It must be truly async all the way down to the OS kernel.
No framework can make sync IO automatically async, so it cannot happen under the hood. Async IO is callback-based, which is a severe change in programming model.
This does not answer what you should do of course. That would be a new question.

Invoking timed tasks in asynchronous Jax-RS requests

I've joined a project that uses Jax-RS (and originally there was quite a bit of Spring-based Controller code in there too, but all URL handlers use Jax-RS now). Now we want to be able to fill a queue of tasks that should be run with a small delay between each of them. The delay can be specified in ms. I've avoided Thread.sleep, as I've heard you should not manage threads manually in Java EE. Before I came in, there was already a busy-wait loop implemented.
I would like to switch this to an asynchronous background task. I could of course let the client poll the server with the given delay, and just have an AsyncResponse that can be resumed. But can the same AsyncResponse be resumed/suspended multiple times? The resource does have state, so it would be possible to drop the asynchrony completely and just do client polling to handle all of it.
A lot of example code for showing off asynchronous tasks use Thread.sleep. How bad is it to do this in a background task on an ExecutorService or something similar?
The point of the delay is to simulate human interaction, and post a long list of JMS messages to a queue but ensure that two listeners don't pick up and handle messages that depend on one another.
Is it easier/better to handle this on the client side rather than the server side? Writing some JavaScript that handles all the polling would be quite simple, so if this seems like a bad idea for handling on the server side, it's not that big a deal.
The tool is only going to be used by a single user, as it's a developer testing tool. Therefore we went for solving this on the client side, pushing the messages onto the queue through AJAX calls. This works fine for our purposes, but if anyone has a solution that might help someone else, feel free to drop a new answer.
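For anyone who does want to keep this on the server side, one container-friendly sketch (assuming a Java EE 7 container; the JNDI names and payload type are made up) is to let a ManagedScheduledExecutorService chain the sends with the desired delay instead of sleeping:

import java.util.Iterator;
import java.util.List;
import java.util.concurrent.TimeUnit;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.enterprise.concurrent.ManagedScheduledExecutorService;
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;

@Stateless
public class DelayedJmsPoster {

    @Resource
    private ManagedScheduledExecutorService scheduler; // container-managed, no manual threads

    @Resource(lookup = "jms/connectionFactory") // assumed JNDI name
    private ConnectionFactory connectionFactory;

    @Resource(lookup = "jms/testQueue") // assumed JNDI name
    private Queue queue;

    public void postAll(List<String> payloads, long delayMs) {
        Iterator<String> remaining = payloads.iterator();
        scheduler.schedule(new Runnable() {
            @Override
            public void run() {
                if (!remaining.hasNext()) {
                    return; // all messages posted
                }
                try (JMSContext jms = connectionFactory.createContext()) {
                    jms.createProducer().send(queue, remaining.next());
                }
                // Chain the next send instead of blocking with Thread.sleep.
                scheduler.schedule(this, delayMs, TimeUnit.MILLISECONDS);
            }
        }, 0, TimeUnit.MILLISECONDS);
    }
}

The Jax-RS resource method that kicks this off can return (or resume its AsyncResponse) immediately while the container's scheduler paces the sends.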

Which is better in this case - sync or async web service?

I'm setting up a web service in Axis2 whose job it will be to take a bunch of XML and put it on a queue to be processed later. I understand it's possible to set up a client to invoke a synchronous web service asynchronously by using an "invokeNonBlocking" operation on the "Call" instance. (ref http://onjava.com/pub/a/onjava/2005/07/27/axis2.html?page=4)
So, my question is: is there any advantage to using an asynchronous web service in this case? It seems redundant because 1) the client isn't blocked and 2) the service has to accept and write the XML to the queue regardless of whether it's synchronous or asynchronous.
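For reference, a non-blocking client call in more recent Axis2 versions looks roughly like the sketch below, using ServiceClient.sendReceiveNonBlocking() with an AxisCallback instead of the older Call/invokeNonBlocking API from the linked article (the endpoint, namespace, and element names are made up):

import org.apache.axiom.om.OMAbstractFactory;
import org.apache.axiom.om.OMElement;
import org.apache.axiom.om.OMFactory;
import org.apache.axis2.addressing.EndpointReference;
import org.apache.axis2.client.Options;
import org.apache.axis2.client.ServiceClient;
import org.apache.axis2.client.async.AxisCallback;
import org.apache.axis2.context.MessageContext;

public class QueueClient {

    public static void submit(String xmlToEnqueue) throws Exception {
        ServiceClient client = new ServiceClient();
        Options options = new Options();
        options.setTo(new EndpointReference("http://example.com/services/XmlQueueService")); // assumed endpoint
        client.setOptions(options);

        OMFactory factory = OMAbstractFactory.getOMFactory();
        OMElement payload = factory.createOMElement("enqueue", factory.createOMNamespace("urn:example", "ex"));
        payload.setText(xmlToEnqueue);

        // The call returns immediately; the callback fires when the service responds.
        client.sendReceiveNonBlocking(payload, new AxisCallback() {
            public void onMessage(MessageContext msgContext) { /* acknowledgement, if any */ }
            public void onFault(MessageContext msgContext) { /* SOAP fault from the service */ }
            public void onError(Exception e) { /* transport-level failure */ }
            public void onComplete() { /* exchange finished */ }
        });
    }
}

As the answers note, this mainly helps when the round trip is slow; the service still has to accept and enqueue the XML either way.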
In my opinion, asynchronous is the appropriate way to go. A couple of things to consider:
Do you have multiple clients accessing this service at any given moment?
How often is this process occurring?
It does take a little more effort to implement the async methods. But I guarantee, in the end you will be much happier with the result. For one, you don't have to manage threading. Your primary concern might just be the volatility of the data in the queue (i.e. race/deadlock conditions).
A "sync call" seems appropriate, I agree.
If the request from the client isn't time consuming, then I don't see the advantage either in making the call asynchronous. From what I understand of the situation in question here, the web-service will perform its "processing" against the request some time in the future.
If, on the contrary, the request had required a time-consuming process, then an async call would have been appropriate.
After ruminating some more about it, I'm thinking that the service should be asynchronous. The reason is that it would put the task of writing the data to the queue into a separate thread, thus lessening the chances of a timeout. It makes the process more complicated, but if I can avoid a timeout, then it's got to be done.

Resources