Fire another command within the command handler in an Axon Framework application

Is this a good way to fire another command within the command handler in an Axon Framework application?
For example, I want to provide a ROLLBACK function, where the underlying process reads the historical state of the aggregate at a given sequence number and then updates the aggregate according to that historical state. Imagine it as follows:
@CommandHandler
private void on(RollbackCommand command, MetaData metaData) {
    ContractAggregate ca = queryGateway.query(new QueryContractWithGivenSequenceNumber(...), ContractAggregate.class).join();
    commandGateway.sendAndWait(new UpdateContractCommand(ca));
}
Will it work fine?

On "Dispatching commands from command handlers"
Command handlers can roughly exist in two areas in an Axon application:
Within the Aggregate
On a Command Handling Component
In both options, it would be possible to dispatch a command from within the command handler, but I would only advise such an operation from option 2.
The reasoning behind this is that when Axon handles a command from within an Aggregate, that exact Aggregate instance will be locked.
This is done to ensure no concurrent operations are performed on a given aggregate instance.
Knowing this, we can deduce that the subsequent command could also end up in an aggregate instance, which will be locked as well. Additionally, if the command being dispatched from within an aggregate command handler is targeted at the same aggregate instance, you'll effectively be blocking the system. Axon will throw a LockAcquisitionFailedException eventually, but nonetheless you'd have created something undesirable.
Thus, I'd only dispatch commands from within @CommandHandler annotated methods which reside on a Command Handling Component.
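To make option 2 concrete, here is a minimal Kotlin sketch of such an external command handling component for the rollback use case from the question. QueryGateway, CommandGateway and @CommandHandler are real Axon 4 API; all message and view names (RollbackCommand, QueryContractWithGivenSequenceNumber, ContractView, UpdateContractCommand) are assumptions for illustration only:

import org.axonframework.commandhandling.CommandHandler
import org.axonframework.commandhandling.gateway.CommandGateway
import org.axonframework.queryhandling.QueryGateway

// Hypothetical messages and query model, named after the question
data class RollbackCommand(val contractId: String, val sequenceNumber: Long)
data class QueryContractWithGivenSequenceNumber(val contractId: String, val sequenceNumber: Long)
data class ContractView(val contractId: String, val contents: String)
data class UpdateContractCommand(val contractId: String, val targetState: ContractView)

// External command handling component: it is not an aggregate, so no aggregate lock is
// held while the follow-up command is dispatched.
class RollbackCommandHandler(
    private val queryGateway: QueryGateway,
    private val commandGateway: CommandGateway
) {
    @CommandHandler
    fun handle(command: RollbackCommand) {
        // Read the historical state from a dedicated query model
        val historicalState = queryGateway
            .query(
                QueryContractWithGivenSequenceNumber(command.contractId, command.sequenceNumber),
                ContractView::class.java
            )
            .join()

        // Dispatch the follow-up command that targets the aggregate
        commandGateway.sendAndWait<Any>(UpdateContractCommand(command.contractId, historicalState))
    }
}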
On "your use case"
I have some questions on your use case, as the gaps make me slightly concerned whether this is the best approach. Thus, let me ask you some follow-up questions.
If I understand your question correctly, you want to introduce a command handler which queries the aggregate to be able to roll back its state with a single command?
Would you have a command which adjusts the entire state of the aggregate?
Or specific portions of the aggregate instance?
And, I assume the query is targeted towards a dedicated Query Model representing the aggregate, thus not Axon's idea of the @Aggregate annotated class, right?

Related

Elsa workflow trigger and integration with exe

I want to build an Elsa workflow for the requirements below:
It can be triggered by a database table trigger when a new row is inserted.
It can execute an exe file to get some information.
It can read data from the database.
I agree with @fatihyildizhan and @vahidnaderi, but if I interpret the question as "How to do these 3 steps with Elsa from a high-level overview - can someone give me any pointers?" then I can answer as follows:
If all you really need are the 3 steps you mentioned, then don't use Elsa; it is overkill for what you want to do.
Here's why:
Although you can achieve all of this with Elsa, you can't do it with Elsa out of the box; you will have to write custom activities and support services to trigger your workflows, which is a little bit more work than simply doing your thing from your "row inserted" application handler.
If, on the other hand, you are planning on implementing multiple workflows that are potentially more complicated and perhaps even long-running, then it might be worthwhile to consider Elsa after all.
Doing this with Elsa requires the following up-front work:
1. Trigger Workflow When New Row Inserted
To trigger a workflow when a new row is inserted, you need to implement a handler that responds to that event (this you would need to do regardless of whether you use Elsa or not).
Next, you need to implement a custom activity that represents the "row inserted" event as a trigger, e.g. called RowInserted. You can then use that activity as a starting point in any of your workflows, or even as a resumption point (e.g. for workflows that started some work which eventually results in a database row being inserted, which is the event you want to handle). Such workflows would then be triggered whenever a new row is inserted. You probably want to be able to configure which database table triggers this event, so you might add a TableName property to your activity.
Then, in order to make Elsa actually trigger workflows with your custom activity, you will need to do the following:
Implement a bookmark model and provider for Elsa to index and invoke, e.g. NewRowBookmark : IBookmark with a single property called TableName. Bookmarks are Elsa's way of starting & resuming workflows.
Update your "row inserted" handler described earlier to invoke IWorkflowLaunchpad.CollectAndDispatchWorkflowsAsync, passing in the appropriate bookmark/trigger model containing the table name into which the row was inserted, and provide the inserted row as input (assuming you want the workflow to do something with the inserted row).
2. Execute File
To execute a file, you need to create another custom activity that does this. You can make this activity as specific or generic as you need.
3. Read From Database
Same as with #2, you need to create another custom activity that reads from the database. You can make this activity as specific or generic as you need.
The above should give you a rough idea of the work involved to implement this with Elsa. And as mentioned earlier, this might be overkill if all you are looking to do is the 3 steps mentioned.

In Airflow, what do they want you to do instead of passing data between tasks

In the docs, they say that you should avoid passing data between tasks:
This is a subtle but very important point: in general, if two operators need to share information, like a filename or small amount of data, you should consider combining them into a single operator. If it absolutely can’t be avoided, Airflow does have a feature for operator cross-communication called XCom that is described in the section XComs.
I fundamentally don't understand what they mean. If there's no data to pass between two tasks, why are they part of the same DAG?
I've got half a dozen different tasks that take turns editing one file in place, and each send an XML report to a final task that compiles a report of what was done. Airflow wants me to put all of that in one Operator? Then what am I gaining by doing it in Airflow? Or how can I restructure it in an Airflowy way?
Fundamentally, each instance of an operator in a DAG is mapped to a different task.
This is a subtle but very important point: in general if two operators need to share information, like a filename or small amount of data, you should consider combining them into a single operator
The above sentence means that if any information needs to be shared between two different tasks, then it is best to combine them into one task instead of using two different tasks. If, on the other hand, you must use two different tasks and you need to pass some information from one task to another, then you can do it using Airflow's XCom, which is similar to a key-value store.
In a data engineering use case, validating the file schema before processing is important. Imagine two tasks as follows:
Files_Exist_Check: the purpose of this task is to check whether particular files exist in a directory or not before continuing.
Check_Files_Schema: the purpose of this task is to check whether the file schema matches the expected schema or not.
It would only make sense to start your processing if the Files_Exist_Check task succeeds, i.e. you have some files to process.
In this case, in the Files_Exist_Check task you can "push" a key to XCom, like "file_exists", with the value being the count of files present in that particular directory.
Now, you "pull" this value using the same key in the Check_Files_Schema task; if it returns 0, then there are no files for you to process, hence you can raise an exception and fail the task, or handle it gracefully.
Hence, sharing information across tasks using XCom does come in handy in this case.
You can refer to the following links for more info:
https://www.astronomer.io/guides/airflow-datastores/
Airflow - How to pass xcom variable into Python function
What you have to do to avoid having everything in one operator is save the data somewhere. I don't quite understand your flow, but if, for instance, you want to extract data from an API and insert it into a database, you would need to have:
A PythonOperator (or BashOperator, whatever) that takes the data from the API and saves it to S3/local file/Google Drive/Azure Storage...
A SQL-related operator that takes the data from the storage and inserts it into the database
Anyway, if you know which files you are going to edit, you may also use Jinja templates or read info from a text file and make a loop or something in the DAG. I could help you more if you clarify your actual flow a little bit.
I've decided that, as mentioned by @Anand Vidvat, they are making a distinction between Operators and Tasks here. What I think is that they don't want you to write two Operators that inherently need to be paired together and pass data to each other. On the other hand, it's fine to have one task use data from another; you just have to provide filenames etc. in the DAG definition.
For example, many of the built-in Operators have constructor parameters for files, like the S3FileTransformOperator. Confusing documentation, but oh well!

How to run multiple Firestore Functions Sequentially?

We have 20 functions that must run every day. Each of these functions does something different based on inputs from the previous function.
We tried calling all the functions in one function, but it hits the timeout error as these 20 functions take more than 9 minutes to execute.
How can we trigger these multiple functions sequentially, or avoid the timeout error for one function that executes each of these functions?
There is no configuration or easy way to get this done. You will have to set up a fair amount of code and infrastructure to get this done.
The most straightforward solution involves chaining together calls using pubsub type functions. You can send a message to a pubsub topic that will trigger the next function to run. The payload of the message to send can be the parameters that the function should use to determine how it should operate. If the payload is too big, or some more complex sources of data are required to make that decision, you can use a database to store intermediate data that the next function can query and use.
Since we don't have any more specific details about how your functions actually work, nothing more specific can be said. If you run into problems with a specific detail of this scheme, please post again describing specifically what you're trying to do and what's not working the way you expect.
There is a variant of Doug's solution. At the end of the function, instead of publishing a message to Pub/Sub, simply write a specific log entry (for example " end").
Then, go to Stackdriver Logging, search for this specific log trace (turn on advanced filters) and configure a sink to a Pub/Sub topic for this log entry. Thereby, every time the log is detected, a Pub/Sub message is published with the log content.
Finally, plug your next function onto this Pub/Sub topic.
If you need to pass values from one function to another, you can simply add these values to the log trace at the end of the function and parse them at the beginning of the next one.
Chaining functions is not an easy thing to do. Things are coming; maybe Google Cloud Next will announce new products to help you with this task.
If you simply want the functions to execute in order, and you don't need to pass the result of one directly to the next, you could wrap them in a scheduled function (docs) that spaces them out with enough time for each to run.
Sketch below with 3-minute spacing:
exports.myScheduler = functions.pubsub
    .schedule('every 3 minutes from 22:00 to 23:00')
    .onRun(context => {
        let time; // check the current time, e.g. as an 'HH:MM' string
        if (time === '22:00') func1of20();
        else if (time === '22:03') func2of20();
        // etc. through func20of20()
    });
If you do need to pass the results of each function to the next, func1 could store its result in a DB entry, then func2 starts by reading that result and ends by overwriting it with its own, so func3 can read it when fired 3 minutes later, etc. Though perhaps in this case, the other solutions are more tailored to your needs.

Return entity updated by axon command

What is the best way to get the updated representation of an entity after mutating it with a command?
For example, let's say I have a project like digital-restaurant and I want to be able to update a field on the restaurant and return its current state to the client making the update (to retrieve any modifications by different processes).
When a restaurant is created, it is easy to retrieve the current state (i.e. the projection representation) after dispatching the create command by subscribing to a FindRestaurantQuery and waiting until a record is returned (see the Restaurant CommandController).
However, it isn't so simple to detect when the result of an UpdateCommand has been applied to the projection. For example, if we use the same trick and subscribe to the FindRestaurantQuery, we will be notified when the restaurant has been modified, but it may not be our command that triggered the modification (in the case where multiple processes are concurrently issuing update commands).
There seem to be two obvious ways to detect when a given update command has been applied to the projection:
1. Have a unique ID associated with every update command. Subscribe to a query that is updated when the command ID has been applied to the projection, propagate the unique ID to the event that is applied by the aggregate, and when the projection receives the event, it can notify the query listener with the current state.
2. Before dispatching an update command, query the existing state of the projection and calculate the destination state given the contents of the update command.
In the case of (1): is there any situation (e.g. batching / snapshotting) where the event carrying the unique ID may be skipped over somehow, preventing the query listener from being notified?
Is there a more reliable / more idiomatic way to accomplish this use case?
Axon 4 with Spring Boot.
Although fully asynchronous designs may be preferable for a number of reasons, it is a common scenario that back-end teams are forced to provide synchronous REST API on top of asynchronous CQRS+ES back-ends.
The part of the demo application that is trying to solve this problem is located here https://github.com/idugalic/digital-restaurant/tree/master/drestaurant-apps/drestaurant-monolith-rest
The case you are mentioning is totally valid.
I would go with option 1.
My only concern is that you have to introduce a new unique ID attribute, associated with every update command, into the domain (events). In my opinion, this ID attribute does not have any domain/business value. There is already an Audit (who, when) attribute associated with every event, and maybe you can use that to correlate commands and subscriptions. I believe that there is more value in this solution (identity is part of the domain), if this is not too relaxed for your case.
Please note that queries have to be extended with the Audit information in this case (you will know who requested the query).
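For what it's worth, a minimal Kotlin sketch of option 1 using Axon's subscription query API, assuming the projection is extended to expose the ID (or audit information) of the last command it applied. All message and view names below (UpdateRestaurantCommand, FindRestaurantQuery, RestaurantView, lastAppliedCommandId) are illustrative assumptions, not taken from the referenced project:

import java.util.UUID
import org.axonframework.commandhandling.gateway.CommandGateway
import org.axonframework.messaging.responsetypes.ResponseTypes
import org.axonframework.queryhandling.QueryGateway

// Hypothetical messages and query model; the projection is assumed to record the ID of
// the last command whose event it applied.
data class UpdateRestaurantCommand(
    val restaurantId: String,
    val name: String,
    val commandId: String = UUID.randomUUID().toString()
)
data class FindRestaurantQuery(val restaurantId: String)
data class RestaurantView(val restaurantId: String, val name: String, val lastAppliedCommandId: String?)

fun updateAndReturnCurrentState(
    commandGateway: CommandGateway,
    queryGateway: QueryGateway,
    command: UpdateRestaurantCommand
): RestaurantView {
    // Subscribe before dispatching the command, so the matching projection update cannot be missed
    val subscription = queryGateway.subscriptionQuery(
        FindRestaurantQuery(command.restaurantId),
        ResponseTypes.instanceOf(RestaurantView::class.java), // initial result
        ResponseTypes.instanceOf(RestaurantView::class.java)  // incremental updates emitted by the projection
    )
    try {
        commandGateway.sendAndWait<Any>(command)
        // Wait for the projection update that carries our command's ID
        return subscription.updates()
            .filter { view -> view.lastAppliedCommandId == command.commandId }
            .blockFirst()!!
    } finally {
        subscription.cancel()
    }
}

For this to work, the projection would also need to publish its updates through Axon's QueryUpdateEmitter whenever it applies an event.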

Do I need a separate command when using an oracle?

I want to issue something, e.g. a new option. Inside the flow where I'm issuing this new option, I need to get info from two separate oracles that need to provide data for the output state.
How should I do this? Should I have one output and 3 commands: a command with data from Oracle 1, a command with data from Oracle 2, and then the issue command? Or can this be done with one command?
It's entirely up to you - your command can contain whatever data you want, so in theory, you could do the whole thing with one command.
Having said that, I would probably split it out into at least two commands for clarity and privacy. The privacy element is that you can build a filtered transaction for the oracle to sign that only contains the oracle command.
If you don't mind the two oracles seeing the data sent to each for signing, you could encapsulate the data in one command e.g.
class OracleCommand(val spotPrice: SpotPrice, val volatility: Volatility) : CommandData
where one oracle attests to the spotPrice and the other to the volatility.
However, you would find it hard to determine what part of the data they attested to since they will both sign over the entire filtered transaction.
Unless you knew the design of the oracle could specifically pick out the correct data, you're probably better off going with three separate commands.
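As a rough Kotlin sketch of the three-command approach, with a filtered transaction per oracle (the command and data class names are made up for this example; buildFilteredTransaction and CommandData are real Corda API):

import java.util.function.Predicate
import net.corda.core.contracts.Command
import net.corda.core.contracts.CommandData
import net.corda.core.identity.Party
import net.corda.core.transactions.FilteredTransaction
import net.corda.core.transactions.SignedTransaction

// Hypothetical data attested to by each oracle
data class SpotPrice(val value: Double)
data class Volatility(val value: Double)

// One command per oracle attestation, plus the issue command itself
class SpotPriceCommand(val spotPrice: SpotPrice) : CommandData      // signed by oracle 1
class VolatilityCommand(val volatility: Volatility) : CommandData   // signed by oracle 2
class IssueOptionCommand : CommandData                              // signed by the issuing parties

// When requesting an oracle's signature, filter the transaction down to the command(s)
// that oracle is expected to sign, so it only sees its own data.
fun filterForOracle(stx: SignedTransaction, oracle: Party): FilteredTransaction =
    stx.buildFilteredTransaction(Predicate { elem ->
        elem is Command<*> && oracle.owningKey in elem.signers
    })

Each oracle then checks and signs only the command it can attest to.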
