Asynchronous Callback in GWT - why final?

I am developing an application in GWT as my Bachelor's Thesis and I am fairly new to this. I have researched asynchronous callbacks on the internet. What I want to do is this: I want to handle the login of a user and display different data if they are an admin or a plain user.
My call looks like this:
serverCall.isAdmin(new AsyncCallback<Boolean>() {
    public void onFailure(Throwable caught) {
        // display error
    }

    public void onSuccess(Boolean admin) {
        if (!admin) {
            // do something
        } else {
            // do something else
        }
    }
});
Now, the code examples I have seen handle the data directly in the //do something// part. We discussed this with the person who is supervising me, and I had the idea that I could fire an event upon success and, when this event is fired, load the page accordingly. Is this a good idea? Or should I stick with loading everything in the inner function? What confuses me about async callbacks is that I can only access final variables inside the onSuccess function, so I would rather not handle things in there - insight would be appreciated.
Thanks!

Since the anonymous inner class may outlive the method that created it, Java copies the values of the local variables it captures into the inner-class instance. Requiring those variables to be final guarantees that the copy and the original can never diverge, so there is never any confusion about which value is the "real" one. Fields of the enclosing class don't have this restriction, because they are accessed through a reference to the outer instance rather than copied.
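To make that concrete, here is a minimal, plain-Java sketch of the rule (nothing GWT-specific; the Callback interface below is just a stand-in for AsyncCallback):

public class CaptureExample {

    interface Callback {
        void onDone(String result);
    }

    static void fetch(Callback cb) {
        // stand-in for an asynchronous server call; in real code this would
        // return immediately and invoke the callback later
        cb.onDone("data");
    }

    public static void main(String[] args) {
        final String prefix = "user: "; // must be final (or effectively final since Java 8)

        fetch(new Callback() {
            public void onDone(String result) {
                // 'prefix' was copied into this anonymous instance when it was created;
                // by the time an async callback runs, the method that declared the local
                // variable may already have returned, which is why the copy is required
                // and why the variable has to be final.
                System.out.println(prefix + result);
            }
        });
    }
}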

It's just standard Java that you can only use final local variables inside an inner class; it isn't specific to GWT. Here is a great discussion of this topic.
When I use the AsyncCallback I do exactly what you suggested: I fire an event through GWT's EventBus. This allows several different parts of my application to respond when a user logs in.
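A rough sketch of what that can look like, assuming a shared com.google.web.bindery.event.shared.EventBus (e.g. a SimpleEventBus) and a custom GwtEvent; the LoginEvent name and its handler are made up for illustration:

import com.google.gwt.event.shared.EventHandler;
import com.google.gwt.event.shared.GwtEvent;

public class LoginEvent extends GwtEvent<LoginEvent.Handler> {

    public interface Handler extends EventHandler {
        void onLogin(LoginEvent event);
    }

    public static final Type<Handler> TYPE = new Type<Handler>();

    private final boolean admin;

    public LoginEvent(boolean admin) {
        this.admin = admin;
    }

    public boolean isAdmin() {
        return admin;
    }

    @Override
    public Type<Handler> getAssociatedType() {
        return TYPE;
    }

    @Override
    protected void dispatch(Handler handler) {
        handler.onLogin(this);
    }
}

// Inside onSuccess(Boolean admin), instead of building the UI directly:
eventBus.fireEvent(new LoginEvent(admin));

// Wherever the pages are actually assembled:
eventBus.addHandler(LoginEvent.TYPE, new LoginEvent.Handler() {
    public void onLogin(LoginEvent event) {
        if (event.isAdmin()) {
            // load the admin view
        } else {
            // load the plain-user view
        }
    }
});

The callback stays tiny (it only translates the RPC result into an event), and any number of handlers can react to the login without the login code knowing about them.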

Related

How to make a command wait until all events triggered against it are completed successfully

I have come across a requirement where I want Axon to wait until all events fired on the event bus against a particular command have finished executing. I will briefly describe the scenario:
I have a RestController which fires the command below to create an application entity:
@RestController
class myController {

    @Autowired
    private org.axonframework.commandhandling.gateway.CommandGateway commandGateway;

    @PostMapping("/create")
    @ResponseBody
    public String create() {
        commandGateway.sendAndWait(new CreateApplicationCommand());
        System.out.println("in myController:: after sending CreateApplicationCommand");
        return "created";
    }
}
This command is being handled in the Aggregate, The Aggregate class is annotated with org.axonframework.spring.stereotype.Aggregate:
@Aggregate
class MyAggregate {

    @AggregateIdentifier // org.axonframework.modelling.command.AggregateIdentifier
    private String id;
    private String name;

    @CommandHandler // org.axonframework.commandhandling.CommandHandler
    private MyAggregate(CreateApplicationCommand command) {
        // org.axonframework.modelling.command.AggregateLifecycle
        AggregateLifecycle.apply(new AppCreatedEvent());
        System.out.println("in MyAggregate:: after firing AppCreatedEvent");
    }

    @EventSourcingHandler // org.axonframework.eventsourcing.EventSourcingHandler
    private void on(AppCreatedEvent appCreatedEvent) {
        // Updates the state of the aggregate
        this.id = appCreatedEvent.getId();
        this.name = appCreatedEvent.getName();
        System.out.println("in MyAggregate:: after updating state");
    }
}
The AppCreatedEvent is handled at 2 places:
In the Aggregate itself, as we can see above.
In the projection class as below:
@EventHandler // org.axonframework.eventhandling.EventHandler
void on(AppCreatedEvent appCreatedEvent) {
    // persists into database
    System.out.println("in Projection:: after saving into database");
}
The problem here is that after the event is handled in the first place (i.e., inside the aggregate), the call returns to myController.
i.e. The output here is:
in MyAggregate:: after firing AppCreatedEvent
in MyAggregate:: after updating state
in myController:: after sending CreateApplicationCommand
in Projection:: after saving into database
The output which I want is:
in MyAggregate:: after firing AppCreatedEvent
in MyAggregate:: after updating state
in Projection:: after saving into database
in myController:: after sending CreateApplicationCommand
In simple words, I want Axon to wait until all events triggered against a particular command have executed completely and only then return to the class which triggered the command.
After searching on the forum I learned that all sendAndWait does is wait until the handling of the command and the publication of the events is finalized. I then tried the Reactor Extension as well, using the call below, but got the same results: org.axonframework.extensions.reactor.commandhandling.gateway.ReactorCommandGateway.send(new CreateApplicationCommand()).block();
Can someone please help me out.
Thanks in advance.
What would be best in your situation, @rohit, is to embrace the fact that you are using an eventually consistent solution here. Thus, Command Handling is entirely separate from Event Handling, making the Query Models you create eventually consistent with the Command Model (your aggregates). Therefore, you wouldn't necessarily wait for the events themselves but instead react when the Query Model is present.
Embracing this comes down to building your application such that "yeah, I know my response might not be up to date now, but it might be somewhere in the near future." It is thus recommended to subscribe to the result you are interested in either before or after you have dispatched the command.
For example, you could see this as using WebSockets with the STOMP protocol, or you could tap into Project Reactor and use the Flux result type to receive the results as they go.
From your description, I assume you or your business have decided that the UI component should react in the (old-fashioned) synchronous way. There's nothing wrong with that, but it will bite your *ss when it comes to using something inherently eventually consistent like CQRS. You can, however, spoof the fact you are synchronous in your front-end, if you will.
To achieve this, I would recommend using Axon's Subscription Query to subscribe to the query model you know will be updated by the command you will send.
In pseudo-code, that would look a little bit like this:
public Result mySynchronousCall(String identifier) {
    // Subscribe to the updates to come
    SubscriptionQueryResult<Result> result = queryGateway.subscriptionQuery(...);
    // Issue the command that triggers the update
    commandGateway.send(...);
    // Wait on the Flux for the first result, close the subscription, and return the value
    return result.updates()
                 .next()
                 .map(...)
                 .timeout(...)
                 .doFinally(it -> result.close())
                 .block();
}
You could see this being done in this sample WebFluxRest class, by the way.
Note that you are essentially closing the door to the front-end to tap into the asynchronous goodness by doing this. It'll work and allow you to wait for the result to be there as soon as it is there, but you'll lose some flexibility.
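To make the pseudo-code a little more concrete for this particular question, here is a hedged sketch. It assumes Axon 4's QueryGateway, QueryUpdateEmitter and ResponseTypes APIs; FindApplicationQuery and ApplicationView are hypothetical names standing in for your own query and read-model classes, and it assumes the command and event carry the new application's id:

// Projection side: persist the read model, then notify any open subscription queries.
@Component
class ApplicationProjection {

    private final QueryUpdateEmitter updateEmitter; // org.axonframework.queryhandling.QueryUpdateEmitter

    ApplicationProjection(QueryUpdateEmitter updateEmitter) {
        this.updateEmitter = updateEmitter;
    }

    @EventHandler
    void on(AppCreatedEvent event) {
        ApplicationView view = new ApplicationView(event.getId(), event.getName());
        // ... persist 'view' into the database here ...
        // Push the fresh view to every matching, still-open FindApplicationQuery subscription.
        updateEmitter.emit(FindApplicationQuery.class,
                           query -> query.getId().equals(event.getId()),
                           view);
    }

    // A matching @QueryHandler for FindApplicationQuery is assumed to exist as well,
    // so the subscription query also has an initial result to fall back on.
}

// Controller side: subscribe first, then send the command, then wait for the first update.
// (queryGateway and commandGateway are injected, as in the snippets above.)
@PostMapping("/create")
@ResponseBody
public ApplicationView create() {
    String id = UUID.randomUUID().toString();
    SubscriptionQueryResult<ApplicationView, ApplicationView> subscription =
            queryGateway.subscriptionQuery(new FindApplicationQuery(id),
                                           ResponseTypes.instanceOf(ApplicationView.class),  // initial result
                                           ResponseTypes.instanceOf(ApplicationView.class)); // updates
    try {
        commandGateway.sendAndWait(new CreateApplicationCommand(id));
        return subscription.updates()                  // Flux<ApplicationView>
                           .next()                     // first update only
                           .timeout(Duration.ofSeconds(5))
                           .block();                   // keep the endpoint synchronous
    } finally {
        subscription.close();
    }
}

With this shape, the controller only returns once the projection has actually written (and emitted) the view, which is the ordering you were asking for, while the command model itself stays asynchronous.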

Async web calls bottlenecking and running sequentially

I have a web site which makes frequent requests to an external web service, and I'd like these calls to be async and parallel to avoid blocking and to speed up the site a bit. Basically, I have 8 widgets, each of which has to make its own web call(s).
For some reason, only the first 3 or so of them truly load async; after that the threads don't free up in time, and the rest of the widgets load sequentially. If I could get 3 of them to load in parallel, then 3 more in parallel, then 2 more in parallel, I'd be happy. So the issue is really that the threads aren't freeing up in time.
I'm guessing the answer has to do with some IIS configuration. I'm testing on a non-server OS, so maybe that's part of it.
Edit for @Jon Skeet:
I'm using reflection to invoke the web calls like this:
output = methodInfo.Invoke(webservice, parameters);
The widget actions (which eventually call the web service) are called via a jquery $.each() loop and the .load function (maybe this causes a bottleneck?). The widget actions are set up as async methods in an async controller.
Here is the code for one of the async methods (they are all set up like this):
public void MarketTradeWidgetAsync()
{
    AsyncManager.OutstandingOperations.Increment();

    // a bunch of market trade logic
    // this eventually calls the web service
    PlanUISetting uiSettingMarketQuotesConfig = WebSettingsProviderManager.Provider.GetMarketQuotes(
        System.Configuration.ConfigurationManager.AppSettings["Theme"],
        SessionValues<String>.GlobalPlanID,
        SessionValues<String>.ParticipantID,
        "MARKETQUOTES");

    AsyncManager.OutstandingOperations.Decrement();
}
public ActionResult MarketTradeWidgetCompleted(MarketTradeTool markettradetool)
{
    if (Session.IsNewSession)
        return PartialView("../Error/AjaxSessionExpired");
    else
    {
        ViewData["MarketData"] = markettradetool;
        return PartialView(markettradetool);
    }
}
And, like I said, these methods are called via jquery. My thinking is that since the action methods are async, they should give control back to the jquery after they get called, right?
Setting SessionState to "ReadOnly" for the page at hand fixed this issue. Evidently session locking was the problem: requests from the same session were being serialized while each one held the exclusive session lock.

ASP.NET and ThreadStatic as part of TransactionScope's implementation

I was wondering how the TransactionScope class works to keep the transaction shared between different method calls (without the need to pass it as a parameter), and this question came up. I've got two considerations about it:
1
Looking into TransactionScope's implementation through Telerik JustDecompile, I've found that the current transaction is stored in a ThreadStatic member of the System.Transactions.ContextData class (code below).
internal class ContextData
{
    internal TransactionScope CurrentScope;
    internal Transaction CurrentTransaction;
    internal DefaultComContextState DefaultComContextState;

    [ThreadStatic]
    private static ContextData staticData;

    internal WeakReference WeakDefaultComContext;

    internal static ContextData CurrentData
    {
        get
        {
            ContextData contextDatum = ContextData.staticData;
            if (contextDatum == null)
            {
                contextDatum = new ContextData();
                ContextData.staticData = contextDatum;
            }
            return contextDatum;
        }
    }

    public ContextData()
    {
    }
}
The CurrentData property is called by TransactionScope's PushScope() method, and the latter is used by most of the TransactionScope constructors.
private void PushScope()
{
    if (!this.interopModeSpecified)
    {
        this.interopOption = Transaction.InteropMode(this.savedCurrentScope);
    }
    this.SetCurrent(this.expectedCurrent);
    this.threadContextData.CurrentScope = this;
}

public TransactionScope(TransactionScopeOption scopeOption)
{
    // ...
    this.PushScope();
    // ...
}
Ok, I guess I've found how they do that.
2
I've read about how bad it is to use ThreadStatic members to store objects within ASP.NET (http://www.hanselman.com/blog/ATaleOfTwoTechniquesTheThreadStaticAttributeAndSystemWebHttpContextCurrentItems.aspx) due to the ASP.NET thread switching that might occur, which means this data can be lost between worker threads.
So it looks like TransactionScope should not work with ASP.NET, right? But as far as I have used it in my web applications, I don't remember ever running into a problem with transaction data being lost.
My question here is "what's the TransactionScope's trick to deal with ASP.NET's thread switching?".
Did I make a superficial analysis of how TransactionScope stores its transaction objects? Or was the TransactionScope class not made to work with ASP.NET, and can I consider myself lucky never to have had any pain with it?
Could anyone who knows the "very deep buried secrets" of .NET explain that for me?
Thanks
I believe ASP.NET thread switching happens only in specific situations (involving async IO operations) and early in the request life cycle. Typically, once control is passed to the actual HTTP handler (for example, the Page), the thread does not get switched. I believe that in most situations the transaction scope will get initialized only after that (after Page_Init/Load), so this should not be an issue.
Here are few links that might interest you:
http://piers7.blogspot.com/2005/11/threadstatic-callcontext-and_02.html
http://piers7.blogspot.com/2005/12/log4net-context-problems-with-aspnet.html

Actionscript 3: How to do multiple async webservice call requests

I am using Flex and Actionscript 3, along with Webservices, rpc and a callResponder. I want to be able to, for example, say:
loadData1(); // Loads webservice data 1
loadData2(); // Loads webservice data 2
loadData3(); // Loads webservice data 3
However, Actionscript 3 works with async events, so for every call you need to wait for the ResultEvent to trigger when it is done. So, I might want to do the next request every time an event is done. However, I am afraid that threading issues might arise, and some events might not happen at all. I don't think I'm doing a good job of explaining, so I will try to show some code:
private var service:Service1;
var cp:CallResponder = new CallResponder();

public function Webservice()
{
    cp.addEventListener(ResultEvent.RESULT, webcalldone);
    service = new Service1();
}

public function doWebserviceCall()
{
    // Check if already doing call, otherwise do this:
    cp.token = service.WebserviceTest_1("test");
}

protected function webcalldone(event:ResultEvent):void
{
    // Get the result
    var result:String = cp.lastResult as String;
    // Check if other calls need to be done, do those
}
Now, I could of course save the actions in an array list, but who's to say that the add-to-array-list step and the check for whether other calls are available won't mess each other up, or just miss each other, thereby halting execution? Is there something like a volatile ArrayList? Or is there a completely different, but better, solution for this problem?
Use an AsyncToken to keep track of which call the returned data was for: http://flexdiary.blogspot.com/2008/11/more-thoughts-on-remoting.html
When I want to store data in an async manner I put it in an array and make a function that will "pop" the next element off as I send it off.
This function is called on both the complete and the error events, so the queue keeps draining either way.
Yes, I know there could be an issue with the server and data could be lost, but oh well; that can also be handled.
Events will always fire; it may not be a complete event that gets fired, but it will at least be an error event.
Once the array is empty, the function is done. Since ActionScript code runs on a single thread, two handlers can never touch the array at the same time, so there is no need for anything like a volatile ArrayList.

Still having problems understanding ASP.NET events. What's the point of them?

Maybe I'm slow, but I just don't get why you would ever use an event that is not derived from an actual action (like clicking). Why go through the rigamarole of creating delegates and events when you can just call a method? It seems like when you create an event, all you're doing is creating a way for the caller to go through some complicated process to call a simple method. And the caller has to raise the event themselves! I don't get it.
Or maybe I'm just not grasping the concept. I get the need for events like OnClick and interactions with controls, but what about for classes? I tried to implement events for a class of mine, say, when the source of an item changed, but quickly realized that there was no point since I could just call a method whenever I wanted to perform a certain action instead of creating an event, raising an event, and writing an event handler. Plus, I can reuse my method, whereas I can't necessarily reuse my event handler.
Someone set me straight please. I feel like I'm just wrong here, and I want to be corrected. The last question I asked didn't really garner any sort of helpful answer.
Thanks.
I've always liked the Radio Station metaphor.
When a radio station wants to broadcast something, it just sends it out. It doesn't need to know if there is actually anybody out there listening. Your radio is able to register itself with the radio station (by tuning in with the dial), and all radio station broadcasts (events in our little metaphor) are received by the radio who translates them into sound.
Without this registration (or event) mechanism, the radio station would have to contact each and every radio in turn and ask if it wanted the broadcast; if your radio said yes, the station would then send the signal to it directly.
Your code may follow a very similar paradigm, where one class performs an action, but that class may not know, or may not want to know, who will care about or act on that action taking place. So it provides a way for any object to register or unregister itself for notification that the action has taken place.
Events in general can be a good way to decouple the listener/observer from the caller/raiser.
Consider the button. When someone clicks on the button, the click event fires. Does the listener of the button click care what the button looks like, does, or anything of that nature? Most likely not. It only cares that the click action has taken place, and it now goes and does something.
The same thing can be applied to anything that needs the same isolation/decoupling. It doesn't have to be UI driven; it can just be a logical separation that needs to take place.
Using events has the advantage of separating the class(es) which handle events from the classes which raise them (a la the Observer pattern). While this is useful for Model View Controller (the button which raises click events is orthogonal to the class that handles them), it is also useful any time you want to make it easy (whether at runtime or not) to keep the class which handles the events separated from the class which raises them (allowing you to change or replace them).
Basically, the whole point is to keep the classes which handle the events separated from the classes which raise them. Unnecessary coupling is bad, since it makes code maintenance much more difficult (a change in one place in the code will require changes in any pieces of code coupled to it).
Edit:
The majority of the event-handling you will do in ASP.NET will probably be to handle events from classes that are provided to you. Because such classes use event-handling, it makes it easy for you to interface with them. Often, the event will also serve to allow you to interact with the object that raised the event. For example, the databound controls usually raise an event right before they connect to the database, which you can handle in order to use runtime information to alter the arguments being passed to the stored procedure talking to your database, e.g. if the query string provides a page number parameter and the stored procedure has a page number argument.
Events can be used like messages to notify that something has happened, and a consumer can react to them in an appropriate way, so different components are loosely coupled. There are lots of things you can use events for. One example is auditing what is happening in the system: the auditing component can consume various events and write out to a log when they are fired.
There are two things that you seem to be missing:
There are happenings in software that can take an arbitrary amount of time, and not only because they are waiting for user input (async I/O, database queries, and so on). If you launch an async I/O request you would want to subscribe to the event notifying you when the reading is done so you can do something with the data. This idea is generalized in .NET's BackgroundWorker class, which allows you to do heavy tasks in the background (another thread) and receive notifications in the calling thread when it's done.
There are happenings in software that are used by multiple clients, not only a single one. For instance, if you have a plugin architecture where your main code offers hooks to plugin code, you could either do something like
foreach (Plugin p in getAvaliablePlugins()) { p.hook1(); }
in every hook all over your code, which reduces flexibility (p cannot decide what to do, and has to provide the hook1 method publicly), or you can just
raiseEvent(hook1,this)
where all registered plugins can execute their code because they receive the event, letting them do their job as they see fit.
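The same registration idea in a minimal, self-contained sketch (written in Java here since the idea is language-agnostic; the Host and Hook1Listener names are made up, and .NET's event keyword merely does this listener bookkeeping for you):

import java.util.ArrayList;
import java.util.List;

public class HookExample {

    // What a listener (plugin) must implement to be notified.
    interface Hook1Listener {
        void onHook1(Object source);
    }

    // The host only keeps a list of registered listeners; it knows nothing else about them.
    static class Host {
        private final List<Hook1Listener> listeners = new ArrayList<>();

        void register(Hook1Listener listener) { listeners.add(listener); }

        void doWork() {
            // ... the host's own logic ...
            // Raise the "event": every registered plugin decides for itself what to do.
            for (Hook1Listener l : listeners) {
                l.onHook1(this);
            }
        }
    }

    public static void main(String[] args) {
        Host host = new Host();
        host.register(source -> System.out.println("logging plugin saw hook1"));
        host.register(source -> System.out.println("audit plugin saw hook1"));
        host.doWork(); // both plugins run, yet Host never references them directly
    }
}

Host never needs to know which plugins exist or what they do; they opt in themselves, which is exactly the decoupling the event mechanism buys you.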
I think you're confusing ASP.NET's (mis)use of events with plain ol' event handling.
We'll start with plain ol' event handling. Events are (yet another) way of fulfilling the "Open [for extension]/Closed [for modification]" principle. When your class exposes an event, it allows external classes (perhaps classes that aren't even thought of, much less built, yet) to have code run by your class. That's a pretty powerful extension mechanism, and it doesn't require your class to be modified in any way.
As an example, consider a web server that knows how to accept a request, but doesn't know how to process a file (I'll use bi-directional events here, where the handler can pass data back to the event source. Some would argue that's not kosher, but it's the first example that came to mind):
class RequestReceivedEventArgs : EventArgs {
    public string Request { get; set; }
    public string Response { get; set; }
}

class WebServer {
    public event EventHandler<RequestReceivedEventArgs> RequestReceived;

    void ReceiveRequest() {
        // lots of uninteresting network code here
        var e = new RequestReceivedEventArgs();
        e.Request = ReadRequest();
        OnRequestReceived(e);
        WriteResponse(e.Response);
    }

    void OnRequestReceived(RequestReceivedEventArgs e) {
        var h = RequestReceived;
        if (h != null) h(this, e);   // EventHandler<T> takes (sender, args)
    }
}
Without changing the source code of that class - maybe it's in a 3rd party library - I can add a class that knows how to read a file from disk:
class FileRequestProcessor {
    void WebServer_RequestReceived(object sender, RequestReceivedEventArgs e) {
        e.Response = File.ReadAllText(e.Request);
    }
}
Or, maybe an ASP.NET compiler:
class AspNetRequestProcessor {
    void WebServer_RequestReceived(object sender, RequestReceivedEventArgs e) {
        var p = Compile(e.Request);
        e.Response = p.Render();
    }
}
Or, maybe I'm just interested in knowing that an event happened, without affecting it at all. Say, for logging:
class LogRequestProcessor {
    void WebServer_RequestReceived(object sender, RequestReceivedEventArgs e) {
        File.WriteAllText("log.txt", e.Request);
    }
}
All of these classes are basically "injecting" code in the middle of WebServer.OnRequestReceived.
Now, for the ugly part. ASP.NET has this annoying little habit of having you write event handlers to handle your own events. So, the class that you're inheriting from (System.Web.UI.Page) has an event called Load:
abstract class Page {
    public event EventHandler Load;

    protected virtual void OnLoad(EventArgs e) {
        var h = this.Load;
        if (h != null) h(this, e);
    }
}
and you want to run code when the page is loaded. Following the Open/Closed Principle, we can either inherit and override:
class MyPage : Page {
    protected override void OnLoad(EventArgs e) {
        base.OnLoad(e);
        Response.Write("Hello World!");
    }
}
or use eventing:
class MyPage : Page {
    MyPage() {
        this.Load += Page_Load;
    }

    void Page_Load(object sender, EventArgs e) {
        Response.Write("Hello World!");
    }
}
For some reason, Visual Studio and ASP.NET prefer the eventing approach. I suppose you can then have multiple handlers for the Load event, and they would all get run auto-magically - but I never see anyone doing that. Personally, I prefer the override approach - I think it's a bit clearer and you'll never have the question of "why am I subscribed to my own events?".
