How to handle commands for aggregates with IDs assigned after command? - axon

I know the subject line doesn't make sense given the way Axon works, but here is my problem:
I need to create a new instance of aggregate, "Quote", that is tied to a backend system of record. That is, the aggregate ID must eventually match the ID assigned in the backend system.
So, my uiServer app is calling commandGateway and sending it a CreateQuoteCmd, but I don't know what to pass as the target aggregate ID, since the ID will come from a backend system called by the command handler. The uiServer cannot assign the quoteId. The command handler for CreateQuoteCmd contacts our backend system to get the new quoteId. The backend system also supplies several default values which will be placed in the aggregate.
So, how do I make that quoteId the ID for the aggregate?
What do I pass as the target aggregate ID in the command object?
Is it true that I must pass a target aggregate ID in CreateQuoteCmd instead of allowing the object to set its own ID in the command handler after communication with the backend system?
Thanks for your help.

The Command which will create an Aggregate is not required to have a @TargetAggregateIdentifier annotated field. This holds because the field marking the 'target aggregate identifier' cannot point to an existing aggregate: that command is the starting point of the aggregate.
The Aggregate Identifier can be created at several points in your system, and that choice is really up to you.
The important part here, though, is that the @CommandHandler annotated constructor within an Aggregate has a return value, which is the Aggregate Identifier you have assigned to that Aggregate.
You should thus handle the result given to you by the CommandGateway/CommandBus when dispatching your CreateQuoteCmd. That result will contain the quoteId you have assigned to your (I assume) Quote Aggregate.
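A minimal sketch of that approach, assuming Axon 4 in a Spring application (so a BackendClient bean can be injected into the command handler); the BackendClient, QuoteDefaults and QuoteCreatedEvent names are illustrative, not from the original question:

import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.modelling.command.AggregateLifecycle;
import org.axonframework.spring.stereotype.Aggregate;

@Aggregate
public class Quote {

    @AggregateIdentifier
    private String quoteId;

    protected Quote() {
        // Required by Axon to rebuild the aggregate from its events
    }

    @CommandHandler
    public Quote(CreateQuoteCmd cmd, BackendClient backend) {
        // The backend system of record assigns the quoteId and supplies the defaults
        QuoteDefaults defaults = backend.createQuote(cmd);
        AggregateLifecycle.apply(new QuoteCreatedEvent(defaults.getQuoteId(), defaults));
    }

    @EventSourcingHandler
    public void on(QuoteCreatedEvent event) {
        // The identifier is set when the creation event is applied
        this.quoteId = event.getQuoteId();
    }
}

On the dispatching side, the result of the creation command is the identifier the aggregate assigned to itself:

// requestDetails is illustrative; no id needs to be supplied by the uiServer
String quoteId = commandGateway.sendAndWait(new CreateQuoteCmd(requestDetails));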

You need to get the aggregate ID from the external system before sending the command (at domain or application service layer)
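If you take that route, a minimal sketch of the application-service side (BackendClient, reserveQuoteId and the command shape are assumptions, not from the original post):

import org.axonframework.commandhandling.gateway.CommandGateway;

class QuoteApplicationService {

    private final BackendClient backendClient;
    private final CommandGateway commandGateway;

    QuoteApplicationService(BackendClient backendClient, CommandGateway commandGateway) {
        this.backendClient = backendClient;
        this.commandGateway = commandGateway;
    }

    public String createQuote() {
        // Ask the system of record for the id first, then create the aggregate with it
        String quoteId = backendClient.reserveQuoteId();
        commandGateway.sendAndWait(new CreateQuoteCmd(quoteId));
        return quoteId;
    }
}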

Related

Can you save a result (Given) to a variable in a Gherkin feature file, and then compare the variable with another result (Then)? (Cucumber for Java)

I am new to Cucumber for Java and trying to automate testing of a SpringBoot server backed by a MS SQL Server.
I have an endpoint "Get All Employees".
Writing the traditional feature file, I would have to list all the Employees in the Then clause.
This is not possible with thousands of employees.
So I just want to get a row count of the Employee table in the database, and then compare with the number of objects returned from the "Get All Employees" endpoint.
Compare
SELECT count(*) from EMPLOYEE
with size of the list returned from
List<Employee> getAllEmployees()
But how does one save the rowcount in a variable in the feature file and then pass it into the stepdefs Java method?
I have not found any way that Gherkin allows this.
After writing a few scenarios and feature files, I understood this about Cucumber and fixed the issue.
Gherkin/Cucumber is not a programming language. It is just a specification language. When keywords like Given, Then are reached by the interpreter, the matching methods in Java code are called. So they are just triggers.
These methods are part of a Java glue class. Data is not passed out of the Java class and into the gherkin feature file. The class is instantiated at the beginning and retained until the end. Because of this, it can store state.
So, from my example in the question above, the response from the Spring endpoint call is stored in a member variable in the glue class. The subsequent Then invocation to verify the result calls the corresponding glue method, which accesses the data in that member variable to perform the comparison.
So Gherkin cannot do this, but Java at a lower level in the glue class, can.
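A minimal sketch of such a glue class, assuming cucumber-spring wiring, a Spring JdbcTemplate, and an EmployeeClient/Employee pair for the endpoint (all names are illustrative, not from the original project):

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;

import java.util.List;

import static org.junit.Assert.assertEquals;

public class EmployeeStepDefs {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @Autowired
    private EmployeeClient employeeClient;   // calls the "Get All Employees" endpoint

    // State shared between steps lives in the instance, not in the feature file
    private int rowCount;
    private List<Employee> employees;

    @Given("the employee row count is known")
    public void theEmployeeRowCountIsKnown() {
        rowCount = jdbcTemplate.queryForObject("SELECT count(*) FROM EMPLOYEE", Integer.class);
    }

    @When("I call Get All Employees")
    public void iCallGetAllEmployees() {
        employees = employeeClient.getAllEmployees();
    }

    @Then("the number of employees matches the row count")
    public void theNumberOfEmployeesMatchesTheRowCount() {
        assertEquals(rowCount, employees.size());
    }
}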
You could, for example, create a package named dataRun (with the corresponding classes in it) and store the details there during the test via setters.
During the step "And I get the count of employees from the data base" you set that count via the corresponding setter, and during the step "And I get all employees" you set the number via its dedicated setter.
Then, during the step "And I verify the number of employees is the same as the one in the data base", you read the two numbers via the getters and compare them.
By the way, it is also possible to compare the names of the employees (not just the count) if you put them into lists and compare the lists.
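For example, a small state-holder class along those lines (the package and field names are just illustrations); the step definition methods call the setters, and the verification step compares the two getters:

package dataRun;

// Holds values captured in one step so a later step can read them
public class TestData {

    private int dbEmployeeCount;
    private int endpointEmployeeCount;

    public void setDbEmployeeCount(int dbEmployeeCount) {
        this.dbEmployeeCount = dbEmployeeCount;
    }

    public int getDbEmployeeCount() {
        return dbEmployeeCount;
    }

    public void setEndpointEmployeeCount(int endpointEmployeeCount) {
        this.endpointEmployeeCount = endpointEmployeeCount;
    }

    public int getEndpointEmployeeCount() {
        return endpointEmployeeCount;
    }
}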

DynamoDBMapper batchLoad pass in parameters

There are two ways to call DynamoDBMapper's batchLoad, which differ in the parameters they take.
public Map<String,List<Object>> batchLoad(Iterable<? extends Object> itemsToGet)
public Map<String,List<Object>> batchLoad(Map<Class<?>,List<KeyPair>> itemsToGet)
I understand the second way, which makes more sense to me because it specifies key pairs.
What about the first one? Do I basically just pass in a list? What's the difference then? The second one obviously looks more complicated.
Imagine I have a User object with partition key userId and range key createdDate. I want to batch load 3 Users.
In the second option I have to create 3 key-pairs of userId and createdDate. In the first option I instantiate 3 User objects using userId and createdDate and put them in a List.
The first option might be more appropriate if I have logic in the User constructor. For example maybe createdDate cannot be more than 1 year ago. In this case creating User objects is an advantage as the constructor logic is executed. Alternatively I may have been passed the User object from some other part of the application, in which case creating key-pairs from them is just extra code I shouldn't need to write.
So basically there isn't much difference. I suspect some people will find the first option more pleasing since DynamoDBMapper is an object persistence solution, so it should support passing objects (not undefined key-pairs) around.
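As a rough illustration of both call styles, assuming a User table mapped with userId as the hash key and createdDate as the range key (class, table and attribute names are illustrative):

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBRangeKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;
import com.amazonaws.services.dynamodbv2.datamodeling.KeyPair;

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

@DynamoDBTable(tableName = "User")
public class User {
    private String userId;
    private String createdDate;

    public User() { }

    public User(String userId, String createdDate) {
        this.userId = userId;
        this.createdDate = createdDate;
    }

    @DynamoDBHashKey(attributeName = "userId")
    public String getUserId() { return userId; }
    public void setUserId(String userId) { this.userId = userId; }

    @DynamoDBRangeKey(attributeName = "createdDate")
    public String getCreatedDate() { return createdDate; }
    public void setCreatedDate(String createdDate) { this.createdDate = createdDate; }
}

class BatchLoadExamples {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();
        DynamoDBMapper mapper = new DynamoDBMapper(client);

        // Option 1: pass objects whose key attributes are populated
        List<User> keyObjects = Arrays.asList(
                new User("user-1", "2020-01-01"),
                new User("user-2", "2020-02-01"));
        Map<String, List<Object>> byObjects = mapper.batchLoad(keyObjects);

        // Option 2: pass explicit KeyPairs per mapped class
        Map<Class<?>, List<KeyPair>> keyPairs = new HashMap<>();
        keyPairs.put(User.class, Arrays.asList(
                new KeyPair().withHashKey("user-1").withRangeKey("2020-01-01"),
                new KeyPair().withHashKey("user-2").withRangeKey("2020-02-01")));
        Map<String, List<Object>> byKeyPairs = mapper.batchLoad(keyPairs);

        // Both result maps are keyed by table name and contain the loaded items
        System.out.println(byObjects.get("User").size() + " / " + byKeyPairs.get("User").size());
    }
}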

Fetch the process instances by passing the multiple business key[processBusinessKey] values

I have one process which has a task and form fields, and one field, "location", which I am treating as the business key. For example, I have 2 locations: India and UK. I want to fetch the process instances for these two locations, which means I need to pass multiple business key values. Is it possible to pass multiple business key values and fetch the process instances for all of them?
Thanks & Regards
Shilpa Kulkarni
There's no out-of-the-box functionality for this, but you can always query process instances based on variable values. For example, create a service which takes multiple keys as an argument and queries them separately with runtimeService.createProcessInstanceQuery().variableValueLike("location", "yourKey").list(); this returns all process instances that have location set to yourKey.
An Activiti process instance can only have a single associated business key. However, you can retrieve a list of instances based on a list of business keys and other properties by using a process instance query.
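A minimal sketch of both approaches, assuming an injected Activiti RuntimeService; the class and method names are illustrative, not from the original post:

import org.activiti.engine.RuntimeService;
import org.activiti.engine.runtime.ProcessInstance;

import java.util.ArrayList;
import java.util.List;

public class ProcessInstanceLookup {

    private final RuntimeService runtimeService;

    public ProcessInstanceLookup(RuntimeService runtimeService) {
        this.runtimeService = runtimeService;
    }

    // Query once per business key and collect the results
    public List<ProcessInstance> findByBusinessKeys(List<String> businessKeys) {
        List<ProcessInstance> result = new ArrayList<>();
        for (String key : businessKeys) {
            result.addAll(runtimeService.createProcessInstanceQuery()
                    .processInstanceBusinessKey(key)
                    .list());
        }
        return result;
    }

    // Alternative from the answer above: query on a "location" process variable
    public List<ProcessInstance> findByLocations(List<String> locations) {
        List<ProcessInstance> result = new ArrayList<>();
        for (String location : locations) {
            result.addAll(runtimeService.createProcessInstanceQuery()
                    .variableValueEquals("location", location)
                    .list());
        }
        return result;
    }
}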

to get data from table without using reference of state

I am trying to get values from the database without using the ServiceHub and the vault, but I couldn't. My logic is: when I pass a country name, it should return the ids (primary keys) of that country from one table, and using those ids it should return the related values from another table. This is possible in a flow class, but I am trying to do it in an API class, where the ServiceHub cannot be imported. Please help me out.
Only the node has access to the ServiceHub. The API runs outside of the node in a separate process, so it is limited to interacting with the node via the operations offered by CordaRPCOps.
Either you need to store the data you want to access in a separate database outside of the node, or you need to find some way to programmatically log into the node's database from the API, using JDBC as described here: https://docs.corda.net/node-database.html.
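A minimal sketch of the JDBC route, assuming the node's H2 database is exposed on port 12345 with the default credentials and that the states are persisted in a custom table; the table and column names are illustrative, not from the original post:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

public class CountryLookup {

    // Returns the primary keys of the rows matching the given country name
    public List<Long> findCountryIds(String countryName) throws Exception {
        String url = "jdbc:h2:tcp://localhost:12345/node";   // assumed H2 connection string
        try (Connection conn = DriverManager.getConnection(url, "sa", "");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT id FROM country WHERE name = ?")) {
            stmt.setString(1, countryName);
            List<Long> ids = new ArrayList<>();
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    ids.add(rs.getLong("id"));
                }
            }
            return ids;
        }
    }
}

The returned ids can then be used in a second query against the other table, in the same way.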

SQL Server connection-specific variables

I'm upgrading my application to use stored procedures rather than dynamic SQL as it is now. What I'd like to do is call some procedure, for example setUser(id), and then for that ID to be carried forward for the duration of the current connection. I have a UserVariables table which stores important data related to the current user (I'm not using the session to store this data as the session only lasts for so long; this way the users data is persisted across logins). I want to select data, such as the ID of the client they're currently viewing, without having to pass the user ID into each stored procedure. Call it laziness, if you like!
I've searched for this quite a bit, and looked at various articles, but they're all about either global variables (which we can't change) or something unrelated.
Basically what I want to do is set the user ID at the beginning of the page load (may even move this into the session_start method at some point) and then access this variable during all subsequent stored procedure calls or queries.
Edit: What I'm after is like when you set a variable at the beginning of an asp page (my application is written in good ol' classic asp) and you then use it throughout the page and any includes. Think of the asp page representing the connection and the includes representing the stored procedures, if you like.
I have found a suitable alternative to setting connection-specific variables. I have a function which takes the sproc name, an array of parameter names, an array of parameter values and a variable to return (i.e. a byref variable). I have modified this function so that, when the parameters are refreshed, it checks for a userid parameter and sets it automatically if it exists. It then returns the 'retval' parameter if it exists, otherwise sets the return variable to myCmd.execute.
