How to make a command wait until all events triggered against it have completed successfully - Axon

I have come across a requirement where I want Axon to wait until all events fired on the event bus against a particular command have finished executing. Let me briefly describe the scenario:
I have a RestController which fires the command below to create an application entity:
@RestController
class myController {

    @PostMapping("/create")
    @ResponseBody
    public String create() {
        // commandGateway is an injected org.axonframework.commandhandling.gateway.CommandGateway
        commandGateway.sendAndWait(new CreateApplicationCommand());
        System.out.println("in myController:: after sending CreateApplicationCommand");
        return "created"; // the actual return value is not relevant here
    }
}
This command is handled in the aggregate. The aggregate class is annotated with org.axonframework.spring.stereotype.Aggregate:
@Aggregate
class MyAggregate {

    // Field types are assumed for illustration
    @AggregateIdentifier // org.axonframework.modelling.command.AggregateIdentifier
    private String id;
    private String name;

    protected MyAggregate() {
        // Required by Axon to rebuild the aggregate from its past events
    }

    @CommandHandler // org.axonframework.commandhandling.CommandHandler
    public MyAggregate(CreateApplicationCommand command) {
        org.axonframework.modelling.command.AggregateLifecycle.apply(new AppCreatedEvent());
        System.out.println("in MyAggregate:: after firing AppCreatedEvent");
    }

    @EventSourcingHandler // org.axonframework.eventsourcing.EventSourcingHandler
    private void on(AppCreatedEvent appCreatedEvent) {
        // Updates the state of the aggregate
        this.id = appCreatedEvent.getId();
        this.name = appCreatedEvent.getName();
        System.out.println("in MyAggregate:: after updating state");
    }
}
The AppCreatedEvent is handled in two places:
In the Aggregate itself, as we can see above.
In the projection class as below:
@EventHandler // org.axonframework.eventhandling.EventHandler
void on(AppCreatedEvent appCreatedEvent) {
    // persists into database
    System.out.println("in Projection:: after saving into database");
}
The problem is that after the event is handled at the first place (i.e., inside the aggregate), the call returns to myController.
That is, the output here is:
in MyAggregate:: after firing AppCreatedEvent
in MyAggregate:: after updating state
in myController:: after sending CreateApplicationCommand
in Projection:: after saving into database
The output I want is:
in MyAggregate:: after firing AppCreatedEvent
in MyAggregate:: after updating state
in Projection:: after saving into database
in myController:: after sending CreateApplicationCommand
In simple words, I want Axon to wait until all events triggered against a particular command have been executed completely, and only then return to the class which dispatched the command.
After searching on the forum I learned that all sendAndWait does is wait until the handling of the command and the publication of the events is finalized. I then tried the Reactor Extension as well, using the call below, but got the same results: org.axonframework.extensions.reactor.commandhandling.gateway.ReactorCommandGateway.send(new CreateApplicationCommand()).block();
Can someone please help me out?
Thanks in advance.

What would be best in your situation, @rohit, is to embrace the fact that you are using an eventually consistent solution here. Thus, Command Handling is entirely separate from Event Handling, making the Query Models you create eventually consistent with the Command Model (your aggregates). Therefore, you wouldn't necessarily wait for the events exactly, but react when the Query Model is present.
Embracing this comes down to building your application such that "yeah, I know my response might not be up to date now, but it might be somewhere in the near future." It is thus recommended to subscribe to the result you are interested in after or before the fact you have dispatched a command.
For example, you could see this as using WebSockets with the STOMP protocol, or you could tap into Project Reactor and use the Flux result type to receive the results as they go.
From your description, I assume you or your business have decided that the UI component should react in the (old-fashioned) synchronous way. There's nothing wrong with that, but it will bite your *ss when it comes to using something inherently eventually consistent like CQRS. You can, however, spoof the fact you are synchronous in your front-end, if you will.
To achieve this, I would recommend using Axon's Subscription Query to subscribe to the query model you know will be updated by the command you will send.
In pseudo-code, that would look a little bit like this:
public Result mySynchronousCall(String identifier) {
    // Subscribe to the updates to come
    SubscriptionQueryResult<Result, Result> result = queryGateway.subscriptionQuery(...);
    // Issue the command that will trigger the update
    commandGateway.send(...);
    // Wait on the Flux for the first update, then close the subscription
    return result.updates()
                 .next()
                 .map(...)
                 .timeout(...)
                 .doFinally(it -> result.close())
                 .block();
}
You could see this being done in this sample WebFluxRest class, by the way.
Note that you are essentially closing the door to the front-end to tap into the asynchronous goodness by doing this. It'll work and allow you to wait for the result to be there as soon as it is there, but you'll lose some flexibility.

Related

Chaining Handlers with MediatR

We are using MediatR to implement a "Pipeline" for our dotnet core WebAPI backend, trying to follow the CQRS principle.
I can't decide if I should try to implement an IPipelineBehavior chain, or if it is better to construct a new request and call MediatR.Send from within my Handler method (for the request).
The scenario is essentially this:
User requests an action to be executed, i.e. Delete something
We have to check if that something is being used by someone else
We have to mark that something as deleted in the database
We have to actually delete the files from the file system.
Option 1 is what we have now: A DeleteRequest which is handled by one class, wherein the Handler checks if it is being used, marks it as deleted, and then sends a new TaskStartRequest with the parameters to Delete.
Option 2 is what I'm considering: A DeleteRequest which implements the marker interfaces IRequireCheck, IStartTask, with a pipeline which runs:
IPipelineBehavior<IRequireCheck> first to check if the something is being used,
IPipelineBehavior<DeleteRequest> to mark the something as deleted in database and
IPipelineBehavior<IStartTask> to start the Task.
I haven't fully figured out what Option 2 would look like, but this is the general idea.
I guess I'm mainly wondering if it is code smell to call MediatR.Send(TRequest2) within a Handler for a TRequest1.
If those are the options you're set on going with - I say Option 2. Sending requests from inside existing MediatR handlers can be seen as a code smell. You're hiding side effects and breaking the Single Responsibility Principle. You're also coupling your requests together, and you should try to avoid situations where you can't send one type of request before another.
However, I think there might be an alternative. If a delete request can't happen without the validation and marking beforehand, you may be able to leverage a preprocessor (example here) for your TaskStartRequest. That way you can have a single request that does everything you need. This even mirrors your pipeline example by simply leveraging the existing MediatR patterns.
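A rough sketch of what such a preprocessor could look like, assuming MediatR's IRequestPreProcessor<TRequest> pipeline interface; IThingRepository, ItemInUseException and the shape of TaskStartRequest are placeholders for illustration, not types from your codebase:

using System.Threading;
using System.Threading.Tasks;
using MediatR.Pipeline;

// Runs before the TaskStartRequest handler: validates and marks the item as deleted,
// so the handler itself only has to start the delete task.
public class DeleteTaskPreProcessor : IRequestPreProcessor<TaskStartRequest>
{
    private readonly IThingRepository repo;

    public DeleteTaskPreProcessor(IThingRepository repo)
    {
        this.repo = repo;
    }

    public async Task Process(TaskStartRequest request, CancellationToken cancellationToken)
    {
        var thing = await repo.GetById(request.Id);
        if (thing.IsBeingUsed())
        {
            // Short-circuit the pipeline before the handler runs
            throw new ItemInUseException(request.Id);
        }

        await repo.MarkAsDeleted(request.Id);
    }
}

Registered through the usual MediatR pipeline registration, this keeps the whole delete flow behind a single request while still separating the check/mark steps from the actual file deletion.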
Is there any need to break the tasks into multiple handlers? Maybe I am missing the point of MediatR. Wouldn't this suffice?
public async Task<Result<IFailure, ISuccess>> Handle(DeleteRequest request)
{
    var thing = await this.repo.GetById(request.Id);
    if (thing.IsBeingUsed())
    {
        return Failure.BeingUsed();
    }
    var deleted = await this.repo.Delete(request.Id);
    return deleted ? new Success(request.Id) : Failure.DbError();
}

In Disassembler pipeline component - Send only last message out from GetNext() method

I have a requirement where I will be receiving a batch of records. I have to disassemble the batch and insert the data into a DB, which I have completed. But I don't want any message to come out of the pipeline except the last, custom-made message.
I have extended FFDasm and called Disassemble(); then GetNext() returns every debatched message, and they are failing as there are no subscribers. I want to send nothing out from GetNext() until the last message.
Please help if anyone has already implemented this requirement. Thanks!
If you want to send only one message out of GetNext, you have to call the base Disassemble from your Disassemble method and then drain all the messages there (you can enqueue these messages so you can manage them in GetNext), as:
// _messages is a Queue<IBaseMessage> and messagesCount an int, both class-level fields
public new void Disassemble(IPipelineContext pContext, IBaseMessage pInMsg)
{
    try
    {
        base.Disassemble(pContext, pInMsg);
        IBaseMessage message = base.GetNext(pContext);
        while (message != null)
        {
            // Only store one message
            if (this.messagesCount == 0)
            {
                this._messages.Enqueue(message);
                this.messagesCount++;
            }
            message = base.GetNext(pContext);
        }
    }
    catch (Exception ex)
    {
        // Manage errors
    }
}
Then, in the GetNext method, you have the queue and you can return whatever you want:
public new IBaseMessage GetNext(IPipelineContext pContext)
{
    // Return null once the queue is empty to signal that there are no more messages
    return _messages.Count > 0 ? _messages.Dequeue() : null;
}
The recommended approach is to publish the messages after the disassemble stage to the BizTalk MessageBox DB and use a DB adapter to insert them into the database. Publishing messages to the MessageBox and using an adapter will give you more options for design and performance, and will decouple your DB insert from the receive logic. Also, if in the future you want to reuse the same message for something else, you would be able to do so.
Even then, if for any reason you have to insert from the pipeline component, do the following:
Please note that the GetNext() method of the IDisassemblerComponent interface is not invoked until the Disassemble() method is complete. Based on this, you can use the following approach, assuming you have encapsulated FFDASM within your own custom component:
Insert all disassembled messages in the Disassemble method itself and enqueue only the last message to a Queue class variable. In GetNext(), return the dequeued message; when the queue is empty, return null. You can optimize the DB insert by inserting multiple rows at a time and saving them in batches, depending on volume. Please note that this approach may encounter performance issues depending on the size of the file and the number of rows being inserted into the DB.
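A minimal sketch of that approach, assuming a custom component that derives from FFDasm; ReadRowData and InsertBatchIntoDb are hypothetical helpers standing in for your body parsing and batched DB insert:

private readonly Queue<IBaseMessage> _lastMessage = new Queue<IBaseMessage>();

public new void Disassemble(IPipelineContext pContext, IBaseMessage pInMsg)
{
    base.Disassemble(pContext, pInMsg);

    var rows = new List<string>();
    IBaseMessage current = base.GetNext(pContext);
    IBaseMessage last = null;

    while (current != null)
    {
        rows.Add(ReadRowData(current)); // hypothetical: extract the row from the message body
        last = current;
        current = base.GetNext(pContext);
    }

    InsertBatchIntoDb(rows);            // hypothetical: insert all rows in batches

    if (last != null)
    {
        _lastMessage.Enqueue(last);     // only the last debatched message gets published
    }
}

public new IBaseMessage GetNext(IPipelineContext pContext)
{
    return _lastMessage.Count > 0 ? _lastMessage.Dequeue() : null;
}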
I am calling DBInsert SP from GetNext()
Oh...so...sorry to say, but you're doing it wrong and actually creating a bunch of problems doing this. :(
This is a very basic scenario to cover with BizTalk Server. All you need is:
A Pipeline Component to Promote BTS.InterchangeID
A Sequential Convoy Orchestration Correlating on BTS.InterchangeID and using Ordered Delivery.
In the Orchestration, call the SP, transform to SOAP, call the SOAP endpoint, whatever you need.
As you process the Messages, check for BTS.LastInterchangeMessage, then perform your close-out logic.
To be 100% clear, there are no practical 'performance' issues here. By guessing about 'performance' you've actually created the problem you were trying to solve, and created a bunch of support issues for later on, sorry again. :( There is no reason not to use an Orchestration.
As noted, 25K records isn't a lot. Be sure to have the Receive Location and Orchestration in different Hosts.

Using SignalR to broadcast results from a timer job?

I'm just getting started with SignalR and I'm wondering if it's a good tool for the task I'm working on.
In short, I have objects with properties that change over time. A timer job runs every once in a while to update these properties. For the sake of explanation, let's say I have MilkJugs with a property "isExpired" that changes once a certain DateTime is hit.
When my timerjob hits a MilkJug and flips it to isExpired = true, I want all clients to get a notification instantly. If a client is looking at seven MilkJugs in Chrome, I want them to see all seven MilkJugs turn yellow (or something like that).
Could I use signalR to "broadcast" these notifications to the clients from the timerJob? I just ran through the chat example they have up and it seems super simple to get working... I think I could do something like this serverside:
public class ChatHub : Hub
{
    public void Send(List<MilkJugUpdate> updates)
    {
        // Call the broadcastMessage method to update milkJugs.
        Clients.All.broadcastMessage(updates);
    }
}
And then clientside just iterate over the serialized array, updating the appropriate fields in my JS viewModels.
Does this sound about right?
You have got the basic idea there. However there are probably some improvements you could make.
Here I assume you send the message every time you run the timer job. This isn't necessary. You only really need to send a message to the clients if something changes.
Firstly, you could handle the OnConnected event and send the current state of the milk jugs.
Now, when you run the timer job, you only need to call send if something has changed. Then you send the message to the clients, telling them what has changed. On the client side, a function handles the change, something like the following:
Server
public class ChatHub : Hub
{
    public override Task OnConnected()
    {
        // some code here to fetch the current state of the jugs.
        return base.OnConnected();
    }

    public void JugExpired(MilkJugUpdate update)
    {
        // Call the updateJug method on all clients with the change.
        Clients.All.updateJug(update);
    }
}
Client
// chatHub = $.connection.chatHub
chatHub.client.updateJug = function (update) {
    // code to update the jug here
};
This saves you sending messages to the client if nothing has changed.
Similarly, as pointed out in another answer, you can call the client method directly from your timer job, but again I would recommend sending only the updates rather than the entire state every time.
Absolutely, ShootR (an HTML5 multiplayer game) does this already. This is also done in the Stock Ticker sample on NuGet.
Ultimately, you can grab the hub context outside of the hub and use it to send messages:
public void MyTimerFunction(object state)
{
    // 'updates' here is whatever changed-jug payload your timer job has produced
    GlobalHost.ConnectionManager.GetHubContext<ChatHub>().Clients.All.broadcastMessage(updates);
}
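As a rough sketch of how that might be wired up with a timer, assuming placeholder types for the repository lookup and the MilkJugUpdate payload (only the changed jugs are broadcast):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using Microsoft.AspNet.SignalR;

public class MilkJugTimerJob
{
    private Timer _timer;

    public void Start()
    {
        // Check once a minute; broadcast only when something actually changed.
        _timer = new Timer(CheckForExpiredJugs, null, TimeSpan.Zero, TimeSpan.FromMinutes(1));
    }

    private void CheckForExpiredJugs(object state)
    {
        // Placeholder: find the jugs that flipped to isExpired since the last run.
        List<MilkJugUpdate> updates = MilkJugRepository.FindNewlyExpired(DateTime.UtcNow);

        if (updates.Any())
        {
            GlobalHost.ConnectionManager
                      .GetHubContext<ChatHub>()
                      .Clients.All.updateJug(updates);
        }
    }
}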

ASP.NET Async Tasks - how to use WebClient.DownloadStringAsync with Page.RegisterAsyncTask

A common task I have to do for a site I work on is the following:
Download data from some third-party API
Process the data in some fashion
Display the results on the page
I was initially using WebClient.DownloadStringAsync and doing my processing on the result. However I was finding that DownloadStringAsync was not respecting the AsyncTimeout parameter, which I sort of expected once I did a little reading about how this works.
I ended up adapting the code from the example on how to use PageAsyncTask to use DownloadString() there - please note, it's the synchronous version. This is probably okay, because the task is now asynchronous. The tasks now properly time out and I can get the data by PreRender() time - and I can easily genericize this and put it on any page I need this functionality.
However I'm just worried it's not 'clean'. The page isn't notified when the task is done like the DownloadStringAsync method would do - I just have to scoop the results (stored in a field in the class) up at the end in my PreRender event.
Is there any way to get the Webclient's Async methods to work with RegisterPageTask, or is a helper class the best I can do?
Notes: No MVC - this is vanilla asp.net 4.0.
If you want an event handler on your Page called when the async task completes, you need only hook one up. To expand on the MSDN "how to" article you linked:
Modify the "SlowTask" class to include an event, like - public event EventHandler Finished;
Call that EventHandler in the "OnEnd" method, like - if (Finished != null)
{
Finished(this, EventArgs.Empty);
}
Register an event handler in your page for SlowTask.Finished, like - mytask.Finished += new EventHandler(mytask_Finished);
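Putting those steps together, a rough sketch might look like this; the download URL, field names, and running DownloadString on a worker thread are illustrative, loosely following the MSDN SlowTask example rather than reproducing it exactly:

using System;
using System.Net;
using System.Web.UI;

public class SlowTask
{
    private string _result;
    public string Result { get { return _result; } }

    public event EventHandler Finished;

    public IAsyncResult OnBegin(object sender, EventArgs e, AsyncCallback cb, object extraData)
    {
        // Run the synchronous DownloadString on a worker thread so the page thread is released.
        Func<string> work = () => new WebClient().DownloadString("http://example.com/api"); // placeholder URL
        return work.BeginInvoke(cb, work);
    }

    public void OnEnd(IAsyncResult ar)
    {
        var work = (Func<string>)ar.AsyncState;
        _result = work.EndInvoke(ar);

        // Raise the event so the page knows the task is done.
        if (Finished != null)
        {
            Finished(this, EventArgs.Empty);
        }
    }

    public void OnTimeout(IAsyncResult ar)
    {
        // The page's AsyncTimeout elapsed before OnEnd was called.
    }
}

// In the page:
// var mytask = new SlowTask();
// mytask.Finished += new EventHandler(mytask_Finished);
// RegisterAsyncTask(new PageAsyncTask(mytask.OnBegin, mytask.OnEnd, mytask.OnTimeout, null));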
Regarding ExecuteRegisteredAsyncTasks() being a blocking call, that's based only on my experience. It's not documented explicitly as such in the MSDN - http://msdn.microsoft.com/en-us/library/system.web.ui.page.executeregisteredasynctasks.aspx
That said, it wouldn't be all that practical for it be anything BUT a blocking call, given that it doesn't return a WaitHandle or similar. If it didn't block the pipeline, the Page would render and be returned to the client before the async task(s) completed, making it a little difficult to get the results of the task back to the client.

Actionscript 3: How to do multiple async webservice call requests

I am using Flex and ActionScript 3, along with web services, RPC and a CallResponder. I want to be able to, for example, say:
loadData1(); // Loads webservice data 1
loadData2(); // Loads webservice data 2
loadData3(); // Loads webservice data 3
However, Actionscript 3 works with async events, so for every call you need to wait for the ResultEvent to trigger when it is done. So, I might want to do the next request every time an event is done. However, I am afraid that threading issues might arise, and some events might not happen at all. I don't think I'm doing a good job of explaining, so I will try to show some code:
private var service:Service1;
var cp:CallResponder = new CallResponder();

public function Webservice()
{
    cp.addEventListener(ResultEvent.RESULT, webcalldone);
    service = new Service1();
}

public function doWebserviceCall()
{
    // Check if already doing call, otherwise do this:
    cp.token = service.WebserviceTest_1("test");
}

protected function webcalldone(event:ResultEvent):void
{
    // Get the result
    var result:String = cp.lastResult as String;
    // Check if other calls need to be done, do those
}
Now, I could of course save the actions in an ArrayList, but who's to say that the addToArrayList call and the check for other pending calls won't interfere with each other, or simply miss each other, thereby halting execution? Is there something like a volatile ArrayList? Or is there a completely different, but better, solution for this problem?
Use an AsyncToken to keep track of which call the returned data was for: http://flexdiary.blogspot.com/2008/11/more-thoughts-on-remoting.html
When I want to store data in an async manner, I put it in an array and make a function that "pops" the next element as I send it off.
This function is called on both the complete and the error events.
Yes, I know there could be an issue with the server and data could be lost, but that can also be handled.
An event will always fire; it may not be a complete event, but it could be an error event instead.
Once the array is empty, the function is done.
