How to rewrite synchronous controller to be asynchronous in Play?

I'm using Play Framework 2.2 for one of my upcoming web applications. I have implemented my controllers in a synchronous pattern, with several blocking calls (mainly to the database).
For example,
Synchronous version:
public static Result index(){
    User user = db.getUser(email); // blocking
    User anotherUser = db.getUser(emailTwo); // blocking
    ...
    user.sendEmail(); // call to a webservice, blocking.
    return ok();
}
So, while optimising the code, I decided to make use of Play's asynchronous programming support. I've gone through the documentation, but the idea is still vague to me, as I'm confused about how to properly convert the above synchronous block of code to async.
So, I came up with below code:
Asynchronous version:
public static Promise<Result> index(){
    return Promise.promise(
        new Function0<Result>(){
            public Result apply(){
                User user = db.getUser(email); // blocking
                User anotherUser = db.getUser(emailTwo); // blocking
                ...
                user.sendEmail(); // call to a webservice, blocking.
                return ok();
            }
        }
    );
}
So, I just wrapped the entire control logic inside a promise block.
Is my approach correct?
Should I convert each and every blocking call inside the controller to be asynchronous, or is wrapping several blocking calls inside a single async block enough?

The Play framework is asynchronous by nature and allows the creation of fully non-blocking code. But in order to be non-blocking - with all its benefits - you can't just wrap your blocking code and expect magic to happen...
In an ideal scenario, your complete application is written in a non-blocking manner. If this is not possible (for whatever reason), you might want to abstract your blocking code in Akka actors or behind async interfaces that return scala.concurrent.Future instances. This way you can execute your blocking code (simultaneously) in a dedicated ExecutionContext, without impacting other actions. After all, having all your actions share the same ExecutionContext means they share the same thread pool, so an action that blocks threads might drastically impact other actions that do pure CPU work, while leaving the CPU under-utilized!
In your case, you probably want to start at the lowest level. It looks like the database calls are blocking, so start by refactoring these first. You either need to find an asynchronous driver for whatever database you are using, or, if only a blocking driver is available, wrap the calls in a future that executes on a DB-specific execution context (with a thread pool that's the same size as the DB connection pool).
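For illustration, here's a minimal sketch of such a wrapper. The repository class, the BlockingDb type and the "dbExecutionContext" dispatcher name are assumptions rather than part of the question, and the dispatcher would have to be configured as an Akka dispatcher in application.conf with a fixed pool size matching the connection pool:

import java.util.concurrent.Callable;

import akka.dispatch.Futures;
import play.libs.Akka;
import scala.concurrent.ExecutionContext;
import scala.concurrent.Future;

public class AsyncUserRepository {

    // The question's blocking database accessor (type name is hypothetical).
    private final BlockingDb db;

    // Dedicated dispatcher, assumed to be defined in application.conf.
    private final ExecutionContext dbExecutionContext =
            Akka.system().dispatchers().lookup("dbExecutionContext");

    public AsyncUserRepository(BlockingDb db) {
        this.db = db;
    }

    // Wraps the blocking call so it only ever occupies threads of the DB pool.
    public Future<User> getUserAsync(final String email) {
        return Futures.future(new Callable<User>() {
            public User call() {
                return db.getUser(email); // still blocking, but isolated
            }
        }, dbExecutionContext);
    }
}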
Another advantage of abstracting the DB calls behind an async interface is that, if at some point in the future, you switch to a non-blocking driver, you can just change the implementation of your interface without having to change your controllers!
In your reactive controller, you can then handle these futures and work with them (when they complete). You can find more about working with Futures in the Akka documentation.
Here's a simplified example of your controller method doing non-blocking calls and then combining the results in your view, while sending an email asynchronously:
public static Promise<Result> index(){
    scala.concurrent.Future<User> user = db.getUser(email); // non-blocking
    scala.concurrent.Future<User> anotherUser = db.getUser(emailTwo); // non-blocking

    List<scala.concurrent.Future<User>> listOfUserFutures = new ArrayList<>();
    listOfUserFutures.add(user);
    listOfUserFutures.add(anotherUser);

    final ExecutionContext dbExecutionContext = Akka.system().dispatchers().lookup("dbExecutionContext");
    scala.concurrent.Future<Iterable<User>> futureListOfUsers = akka.dispatch.Futures.sequence(listOfUserFutures, dbExecutionContext);

    final ExecutionContext mailExecutionContext = Akka.system().dispatchers().lookup("mailExecutionContext");
    user.andThen(new OnComplete<User>() {
        public void onComplete(Throwable failure, User user) {
            user.sendEmail(); // call to a webservice, executed on the mail execution context
        }
    }, mailExecutionContext);

    return Promise.wrap(futureListOfUsers.flatMap(new Mapper<Iterable<User>, Future<Result>>() {
        public Future<Result> apply(final Iterable<User> users) {
            return Futures.future(new Callable<Result>() {
                public Result call() {
                    return ok(...);
                }
            }, Akka.system().dispatcher());
        }
    }, Akka.system().dispatcher()));
}

If you don't have anything to not block on, then there may not be a reason to make your controller async. Here is a good blog post about this from one of the creators of Play: http://sadache.tumblr.com/post/42351000773/async-reactive-nonblocking-threads-futures-executioncont


How to immediately stop processing new messages when inside a message handler?

I have a Rebus bus set up with a single worker and max parallelism of 1 that processes messages "sequentially". In case a handler fails, or for specific business reasons, I'd like the bus instance to immediately stop processing messages.
I tried using the Rebus.Event package to detect the exception in the AfterMessageHandled handler and set the number of workers to 0, but it seems other messages are processed before it can actually succeed in stopping the single worker instance.
Where in the event processing pipeline could I call bus.Advanced.Workers.SetNumberOfWorkers(0) in order to prevent further message processing?
I also tried setting the number of workers to 0 inside a catch block in the handler itself, but it doesn't seem like the right place to do it since SetNumberOfWorkers(0) waits for handlers to complete before returning and the caller is the handler... Looks like a some kind of a deadlock to me.
Thank you
This particular situation is a little bit of a dilemma, because – as you've correctly observed – SetNumberOfWorkers is a blocking function, which will wait until the desired number of threads has been reached.
In your case, since you're setting it to zero, it means your message handler needs to finish before the number of threads has reached zero... and then: 💣 ☠🔒
I'm sorry to say this, because I bet your desire to do this is because you're in a pickle somehow – but generally, I must say that wanting to process messages sequentially and in order with message queues is begging for trouble, because there are so many things that can lead to messages being reordered.
But, I think you can solve your problem by installing a transport decorator, which will bypass the real transport when toggled. If the decorator then returns null from the Receive method, it will trigger Rebus' built-in back-off strategy and start chilling (i.e. it will increase the waiting time between polling the transport).
Check this out – first, let's create a simple, thread-safe toggle:
public class MessageHandlingToggle
{
    public volatile bool ProcessMessages = true;
}
(which you'll probably want to wrap up and make pretty somehow, but this should do for now)
and then we'll register it as a singleton in the container (assuming Microsoft DI here):
services.AddSingleton(new MessageHandlingToggle());
We'll use the ProcessMessages flag to signal whether message processing should be enabled.
Now, when you configure Rebus, you decorate the transport and give the decorator access to the toggle instance in the container:
services.AddRebus((configure, provider) =>
    configure
        .Transport(t => {
            t.Use(...);

            // install transport decorator here
            t.Decorate(c => {
                var transport = c.Get<ITransport>();
                var toggle = provider.GetRequiredService<MessageHandlingToggle>();
                return new MessageHandlingToggleTransportDecorator(transport, toggle);
            });
        })
        .(...)
);
So, now you'll just need to build the decorator:
public class MessageHandlingToggleTransportDecorator : ITransport
{
    static readonly Task<TransportMessage> NoMessage = Task.FromResult<TransportMessage>(null);

    readonly ITransport _transport;
    readonly MessageHandlingToggle _toggle;

    public MessageHandlingToggleTransportDecorator(ITransport transport, MessageHandlingToggle toggle)
    {
        _transport = transport;
        _toggle = toggle;
    }

    public string Address => _transport.Address;

    public void CreateQueue(string address) => _transport.CreateQueue(address);

    public Task Send(string destinationAddress, TransportMessage message, ITransactionContext context)
        => _transport.Send(destinationAddress, message, context);

    public Task<TransportMessage> Receive(ITransactionContext context, CancellationToken cancellationToken)
        => _toggle.ProcessMessages
            ? _transport.Receive(context, cancellationToken)
            : NoMessage;
}
As you can see, it'll just return null when ProcessMessages == false. Only thing left is to decide when to resume processing messages again, pull MessageHandlingToggle from the container somehow (probably by having it injected), and then flick the bool back to true.
I hope this can work for you, or at least gives you some inspiration on how you can solve your problem. 🙂

Is fireUserEventTriggered the correct way to "glue" non-Netty callback-providing services with the Netty pipeline?

Good day!
Wondering if using fireUserEventTriggered/userEventTriggered is the Netty way to collaborate with callback-oriented external services while processing messages in channel handlers?
I mean, if there is some "alien" service with non-blocking (callback-based) methods, is it the right way to call ChannelHandlerContext#fireUserEventTriggered (passing some params from the callback closure) and then handle it within an overridden ChannelInboundHandler#userEventTriggered in order to continue communication within the original channel where it all started?
Example for illustration
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    externalServiceWithAsyncApi.doAndRegisterCallback(
        // some call that will finish later and trigger the callback handler
        (callbackParam) ->
            ctx.fireUserEventTriggered(
                new ExternalServiceCallbackEvent(callbackParam)
            )
    );
}

@Override
public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
    // seems it's for us to handle
    if (evt instanceof ExternalServiceCallbackEvent) {
        // some processing and answer in the original channel?
        ctx.channel()
           .writeAndFlush(...)
           .addListener(...);
    } else {
        // let other handlers process the event
        super.userEventTriggered(ctx, evt);
    }
}
It seems the example with HeartbeatHandler in "Netty in Action" (Listing 11.7) is relevant, but this part is a bit ahead of my current point of reading, so I decided to ask for help.
There is a very similar question, but something did not work for the author and there is no answer: Netty, writing to channel from non-Netty thread
UPD
The correct way seems to be to call NOT
ctx.fireUserEventTriggered(...)
but
ctx.channel().pipeline().fireUserEventTriggered(...)
It's definitely something you could use for that. That said, you can also just do the write directly from your callback.
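To illustrate the second option (writing directly from the callback), here's a minimal sketch. externalServiceWithAsyncApi comes from the question; the handler class name, the ExternalServiceWithAsyncApi type and buildResponse(...) are hypothetical. Netty's channel operations are thread-safe, so a write issued from a non-Netty callback thread is simply scheduled on the channel's event loop:

import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class ExternalServiceBridgeHandler extends ChannelInboundHandlerAdapter {

    // the question's async service (type name is hypothetical)
    private final ExternalServiceWithAsyncApi externalServiceWithAsyncApi;

    public ExternalServiceBridgeHandler(ExternalServiceWithAsyncApi service) {
        this.externalServiceWithAsyncApi = service;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        externalServiceWithAsyncApi.doAndRegisterCallback(callbackParam ->
            // Runs later, possibly on a non-Netty thread: Netty hands the
            // actual write over to the channel's event loop, so this is safe.
            ctx.channel()
               .writeAndFlush(buildResponse(callbackParam)) // buildResponse(...) is hypothetical
               .addListener((ChannelFutureListener) f -> {
                   if (!f.isSuccess()) {
                       // log or otherwise handle the failed write
                   }
               })
        );
    }
}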

In ASP.NET, why does DbSet.LastAsync() not exist?

I've written some code implementing a web API.
This API method returns the last record of the Foo table.
public class FooController : ApiController
{
    private FooContext db = new FooContext();

    // GET: api/Foo
    [ResponseType(typeof(Foo))]
    public async Task<IHttpActionResult> GetLastFoo()
    {
        Foo foo = await db.Foo.Last();
        if (foo == null)
        {
            return NotFound();
        }
        return Ok(foo);
    }
}
I want to make this API asynchronous, but there is no LastAsync() method.
Why is that, and how can I solve it?
Thanks in advance.
You should not use Task.Run in ASP.NET:
Async and await on ASP.NET are all about I/O. They really excel at
reading and writing files, database records, and REST APIs. However,
they’re not good for CPU-bound tasks. You can kick off some background
work by awaiting Task.Run, but there’s no point in doing so. In fact,
that will actually hurt your scalability by interfering with the
ASP.NET thread pool heuristics. If you have CPU-bound work to do on
ASP.NET, your best bet is to just execute it directly on the request
thread. As a general rule, don’t queue work to the thread pool on
ASP.NET.
From Async Programming : Introduction to Async/Await on ASP.NET
Still, it does not mean you can't use async and await in ASP.NET for I/O (and that's what you want); just use real async methods and not fake async wrappers over synchronous methods (that would just push the work onto another ThreadPool thread, and you would not gain any performance benefit from it).
Since you don't have a LastAsync method in EF, I suggest you use OrderBy (or OrderByDescending) to order your collection as you want and then use the FirstAsync method (or the FirstAsync overload that takes a lambda predicate) instead of the Last method. FirstAsync is a true async I/O method that is supported in EF.
More info about when it's appropriate to use Task.Run can be found in Task.Run Etiquette Examples: Don't Use Task.Run in the Implementation article in Stephen Cleary's blog.
Your question why there is no LastAsync method in the first place should be directed to the team at Microsoft that designed the EF API, but my guess is that they didn't bother implementing it since it is so easy to achieve the same functionality with the FirstAsync method, as I suggested.
A working version of that code would look like this:
public class FooController : ApiController
{
    private FooContext db = new FooContext();

    // GET: api/Foo
    [ResponseType(typeof(Foo))]
    public async Task<IHttpActionResult> GetLastFoo()
    {
        Foo foo = await db.Foo.OrderByDescending(x => x.Timestamp).FirstOrDefaultAsync();
        if (foo == null)
        {
            return NotFound();
        }
        return Ok(foo);
    }
}
Last etc. don't work against an EF IQueryable because SQL doesn't have an equivalent command.
As for how to solve it, you can use
await Task.Run(() => db.Foo.Last());

Solution for asynchronous notification upon future completion in GridGain needed

We are evaluating GridGain 6.5.5 at the moment as a potential solution for distribution of compute jobs over a grid.
The problem we are facing at the moment is a lack of a suitable asynchronous notification mechanism that will notify the sender asynchronously upon job completion (or future completion).
The prototype architecture is relatively simple and the core issue is presented in the pseudo code below (the full code cannot be published due to an NDA). *** Important: the code represents only the "problem"; the possible solution in question is described in the text at the bottom, together with the question.
//will be used as an entry point to the grid for each client that will submit jobs to the grid
public class GridClient{
    //client node for submission that will be reused
    private static Grid gNode = GridGain.start("config xml file goes here");

    //provides the functionality of submitting multiple jobs to the grid for calculation
    public int sendJobs2Grid(GridJob[] jobs){
        Collection<GridCallable<GridJobOutput>> calls = new ArrayList<>();
        for (final GridJob job : jobs) {
            calls.add(new GridCallable<GridJobOutput>() {
                @Override public GridJobOutput call() throws Exception {
                    GridJobOutput result = job.process();
                    return result;
                }
            });
        }

        GridFuture<Collection<GridJobOutput>> fut = this.gNode.compute().call(calls);
        fut.listenAsync(new GridInClosure<GridFuture<Collection<GridJobOutput>>>(){
            @Override public void apply(GridFuture<Collection<GridJobOutput>> jobsOutputCollection) {
                Collection<GridJobOutput> jobsOutput;
                try {
                    jobsOutput = jobsOutputCollection.get();
                    for(GridJobOutput currResult: jobsOutput){
                        //do something with the current job output BUT CANNOT call jobFinished(GridJobOutput out) method
                        //of sendJobs2Grid class here
                    }
                } catch (GridException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }
        });
        return calls.size();
    }

    //This function should be invoked asynchronously when the GridFuture is completed;
    //it will invoke some processing/aggregation of the result for each submitted job
    public void jobFinished(GridJobOutput out) {}
}

//represents a job type that is to be submitted to the grid
public class GridJob{
    public GridJobOutput process(){}
}
Description:
The idea is that a GridClient instance will be used in order to submit a list/array of jobs to the grid, notify the sender how many jobs were submitted, and, when the jobs are finished (asynchronously), perform some processing of the results. For the results processing part, the "GridClient.jobFinished(GridJobOutput out)" method should be invoked.
Now, getting to the question at hand, we are aware of the GridInClosure interface that can be used with "GridFuture.listenAsync(GridInClosure<? super GridFuture<R>> lsnr)" in order to register a future listener.
The problem (if my understanding is correct) is that this is a good and pretty straightforward solution in case the result of the future is to be "processed" by code that is within the scope of the given GridInClosure. In our case we need to use "GridClient.jobFinished(GridJobOutput out)", which is out of that scope.
Due to the fact that GridInClosure has a single argument R, and it has to be of the same type as the GridFuture result, it seems impossible to use this approach in a straightforward manner.
If I got it right till now, then in order to use the "GridFuture.listenAsync(..)" approach the following has to be done:
GridClient will have to implement an interface granting access to the "jobFinished(..)" method; let's name it GridJobFinishedListener.
GridJob will have to be "wrapped" in a new class in order to have an additional property of the GridJobFinishedListener type.
GridJobOutput will have to be "wrapped" in a new class in order to have an additional property of the GridJobFinishedListener type.
When the GridJob is done, in addition to the "standard" result, GridJobOutput will contain the corresponding GridJobFinishedListener reference.
Given the above modifications, GridInClosure can now be used, and in the apply(GridJobOutput) method it will be possible to call the GridClient.jobFinished(GridJobOutput out) method through the GridJobFinishedListener interface.
So, if I got it all right till now, this seems like a bit of a clumsy workaround, so I hope I have missed something and there is a much better way to handle this relatively simple case of an asynchronous callback.
Looking forward to any helpful feedback, thanks a lot in advance.
Your code looks correct and I don't see any problems in calling the jobFinished method from the future listener closure. You declared it as an anonymous class, which always has a reference to the enclosing class (GridClient in your case), therefore you have access to all variables and methods of the GridClient instance.
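To make that concrete, here's a minimal sketch reusing the question's own types inside sendJobs2Grid: the anonymous GridInClosure simply delegates each completed result to the enclosing instance's jobFinished method.

fut.listenAsync(new GridInClosure<GridFuture<Collection<GridJobOutput>>>() {
    @Override public void apply(GridFuture<Collection<GridJobOutput>> completedFut) {
        try {
            // get() does not block here: the future has already completed
            for (GridJobOutput out : completedFut.get()) {
                // the anonymous class keeps a reference to the enclosing GridClient,
                // so its methods are directly accessible
                jobFinished(out); // same as GridClient.this.jobFinished(out)
            }
        } catch (GridException e) {
            e.printStackTrace(); // replace with real error handling
        }
    }
});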

Suggestions for simple ways to do Asynchronous processing in Grails

Let's say I have a simple controller like this:
class FooController {
    def index = {
        someVeryLongComputation() // e.g. crawl a set of web pages
        render "Long computation was launched."
    }
}
When the index action is invoked, I want the method to return immediately to the user while running the long computation asynchronously.
I understand the most robust way to do this would be to use a message broker in the architecture, but I was wondering if there is a simpler way to do it.
I tried the Executor plugin but that blocks the http request from returning until the long computation is done.
I tried the Quartz plugin, but that seems to be good for periodic tasks (unless there is a way to run a job just once?)
How are you guys handling such requests in Grails?
Where do you want to process veryLongComputation(), on the same Grails server, or a different server?
If the same server, you don't need JMS; another option is to just create a new thread and process the computation asynchronously.
def index = {
    def asyncProcess = new Thread({
        someVeryLongComputation()
    } as Runnable)
    asyncProcess.start()
    render "Long computation was launched."
}
If you use a simple trigger in Grails Quartz and set the repeatCount to 0, the job will only run once. It runs separately from user requests, however, so you'd need to figure out some way to communicate to the user when it has completed.
I know this is a very old question, just wanted to give an updated answer.
Since Grails 2.3, the framework supports async calls using Servlet 3.0 asynchronous request handling (of course, a Servlet 3.0 container must be used and the servlet version should be set to 3.0 in the configuration, which it is by default).
It is documented here : http://grails.org/doc/latest/guide/async.html
In general, there are two ways of achieving what you asked for:
import static grails.async.Promises.*

def index() {
    tasks books: Book.async.list(),
          totalBooks: Book.async.count(),
          otherValue: {
              // do hard work
          }
}
or the Servlet Async way:
def index() {
    def ctx = startAsync()
    ctx.start {
        new Book(title: "The Stand").save()
        render template: "books", model: [books: Book.list()]
        ctx.complete()
    }
}
Small note - the Grails method is using promises, which is a major (async) leap forward. Any Promise can be chained to further promises, have a callback on success and failure, etc.
Have you tried the Grails Promises API? It should be as simple as
import grails.async.Promise
import grails.async.Promises

class FooController {
    def index = {
        Promise p = Promises.task {
            someVeryLongComputation() // e.g. crawl a set of web pages
        }
        render "Long computation was launched."
    }
}
Try the Spring Events plugin - it supports asynchronous event listeners.
If you'd like to use the Quartz plugin (as we always do), you can do it like this. It works well for us:
DEFINE A JOB (with no timed triggers)
static triggers = {
    simple name: 'simpleTrigger', startDelay: 0, repeatInterval: 0, repeatCount: 0
}
CALL <job>.triggerNow() to manually execute the job in asynchronous mode.
MyJob.triggerNow([key1:value1, key2: value2]);
Pro Tip #1
To get the named parameters out the other side...
def execute(context) {
    def value1 = context.mergedJobDataMap.get('key1');
    def value2 = context.mergedJobDataMap.get('key2');
    ...
    if (value1 && value2) {
        // This was called by triggerNow(). We know this because we received the parameters.
    } else {
        // This was called when the application started up. There are no parameters.
    }
}
Pro Tip #2
The execute method always gets called just as the application starts up, but the named parameters come through as null.
Documentation: http://grails.org/version/Quartz%20plugin/24#Dynamic%20Jobs%20Scheduling
The best solution for this kind of problem is to use JMS, via the JMS plugin.
For a simpler implementation that doesn't require an external server/service, you can try the Spring Events plugin.
Grails 2.2.1 solution: I had an additional requirement that the report had to auto-pop a window when it was complete, so I chose the servlet way from above with a twist: I replaced rendering a view with returning a JSON-formatted string. It looked like this.
Additionally, my client side is not a GSP view; it is ExtJS 4.1.1 (an HTML5 product).
def index() {
    def ctx = startAsync()
    ctx.start({
        // reportId and success are produced by the (omitted) report generation code
        Map retVar = [reportId: reportId, success: success];
        String jsonString = retVar as JSON;
        log.info("generateTwoDateParamReport before the render out String is: " + jsonString);
        ctx.getOriginalWebRequest().getCurrentResponse().setContentType("text/html");
        ctx.getOriginalWebRequest().getCurrentResponse().setCharacterEncoding("UTF-8");
        log.info("current contentType is: " + ctx.getOriginalWebRequest().getCurrentResponse().contentType);
        try {
            ctx.getOriginalWebRequest().getCurrentResponse().getWriter().write(jsonString);
            ctx.getOriginalWebRequest().getCurrentResponse().getWriter().flush();
            ctx.getOriginalWebRequest().getCurrentResponse().setStatus(HttpServletResponse.SC_OK);
        }
        catch (IOException ioe)
        {
            log.error("generateTwoDateParamReport flush data to client failed.");
        }
        ctx.complete();
        log.info("generateNoUserParamsReport after complete");
    });
}
