Elsa workflow trigger and integration with exe - .net-core

I want to build an Elsa workflow for the requirements below:
It can be triggered by a database table trigger when a new row is inserted.
It can execute an exe file to get some information.
It can read data from the database.

I agree with #fatihyildizhan and #vahidnaderi, but if I interpret the question as "How to do these 3 steps with Elsa from a high-level overview - can someone give me any pointers?" then I can answer as follows:
If all you really need are the 3 steps you mentioned, then don't use Elsa; it is overkill for what you want to do.
Here's why:
Although you can achieve all of this with Elsa, you can't do it with Elsa out of the box; you will have to write custom activities and support services to trigger your workflows, which is a little bit more work than simply doing your thing from your "row inserted" application handler.
If, on the other hand, you are planning on implementing multiple workflows that are potentially more complicated and perhaps even long-running, then it might be worthwhile to consider Elsa after all.
Doing this with Elsa requires the following up-front work:
1. Trigger Workflow When New Row Inserted
To trigger a workflow when a new row is inserted, you need to implement a handler that responds to that event (this you would need to do regardless of whether you use Elsa or not).
Next, you need to implement a custom activity that represents the "row inserted" event as a trigger, e.g. called RowInserted. You can then use that activity as a starting point in any of your workflows, or even as a resumption point (e.g. for workflows that kick off some work that eventually results in a new database row, which is the event you want to handle); such workflows would be triggered whenever a new row is inserted. You probably want to be able to configure which database table triggers this event, so you might add a TableName property to your activity.
Then, in order to make Elsa actually trigger workflows with your custom activity, you will need to do the following:
Implement a bookmark model and provider for Elsa to index and invoke, e.g. NewRowBookmark : IBookmark with a single property called TableName. Bookmarks are Elsa's way of starting & resuming workflows.
Update your "row inserted" handler described earlier to invoke IWorkflowLaunchpad.CollectAndDispatchWorkflowsAsync, passing in the appropriate bookmark/trigger model containing the table name into which the row was inserted, and provide the inserted row as input (assuming you want the workflow to do something with the inserted row).
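Elsa specifics aside, the trigger mechanics described above amount to an index from "row inserted on table X" to the workflows bookmarked on that event; the row-inserted handler looks up matching entries and dispatches each one with the row as input. A framework-free Python sketch of the idea (all names are invented for illustration):

```python
class WorkflowTriggerIndex:
    """Maps a 'RowInserted' bookmark (keyed by table name) to workflow ids."""

    def __init__(self):
        self._bookmarks = {}  # table_name -> list of workflow ids

    def add_bookmark(self, table_name, workflow_id):
        # Called when a workflow starts with (or suspends on) a RowInserted activity.
        self._bookmarks.setdefault(table_name, []).append(workflow_id)

    def collect(self, table_name):
        # Analogous to collecting matching bookmarks/triggers for an event.
        return list(self._bookmarks.get(table_name, []))


def on_row_inserted(index, table_name, row, dispatch):
    # The "row inserted" handler: find workflows bookmarked on this table
    # and start/resume each one with the inserted row as input.
    for workflow_id in index.collect(table_name):
        dispatch(workflow_id, row)
```

In real Elsa code, the lookup and dispatch would be done for you by the workflow launchpad once your bookmark provider is registered; the sketch only shows the shape of the indirection.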
2. Execute File
To execute a file, you need to create another custom activity that does this. You can make this activity as specific or generic as you need.
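Whatever framework hosts it, the core of such an activity boils down to launching the external program and capturing its exit code and output. A minimal, language-agnostic sketch in Python (the executable path and arguments are placeholders; the demo uses the Python interpreter itself so the sketch is runnable anywhere):

```python
import subprocess
import sys

def run_executable(path, args=(), timeout=30):
    """Run an external program and return (exit_code, stdout, stderr)."""
    result = subprocess.run(
        [path, *args],
        capture_output=True,  # collect stdout/stderr instead of inheriting them
        text=True,            # decode bytes to str
        timeout=timeout,      # fail fast if the process hangs
    )
    return result.returncode, result.stdout.strip(), result.stderr.strip()

if __name__ == "__main__":
    # Demo: run the Python interpreter as the "exe" and read its output.
    code, out, err = run_executable(sys.executable, ["-c", "print('some information')"])
    print(code, out)
```

The activity's output property would then be set from the captured stdout so downstream activities can consume it.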
3. Read From Database
Same as with #2, you need to create another custom activity that reads from the database. You can make this activity as specific or generic as you need.
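Again ignoring the workflow plumbing, the activity body is a plain database read. A self-contained sketch using Python's built-in sqlite3 (the table and column names are made up for illustration):

```python
import sqlite3

def read_rows(conn, table_name):
    # NOTE: table names cannot be bound as query parameters; in real code,
    # validate table_name against an allow-list before interpolating it.
    cursor = conn.execute(f"SELECT * FROM {table_name}")
    columns = [c[0] for c in cursor.description]
    return [dict(zip(columns, row)) for row in cursor.fetchall()]

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
    conn.execute("INSERT INTO orders (amount) VALUES (19.99)")
    print(read_rows(conn, "orders"))
```

A generic version of the activity might expose the query and connection string as activity properties; a specific one would hard-code them.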
The above should give you a rough idea of the work involved in implementing this with Elsa. And as mentioned earlier, this might be overkill if all you are looking to do is those 3 steps.

Return entity updated by axon command

What is the best way to get the updated representation of an entity after mutating it with a command?
For example, let's say I have a project like digital-restaurant and I want to be able to update a field on the restaurant and return its current state to the client making the update (to retrieve any modifications made by different processes).
When a restaurant is created, it is easy to retrieve the current state (ie: the projection representation) after dispatching the create command by subscribing to a FindRestaurantQuery and waiting until a record is returned (see Restaurant CommandController)
However, it isn't so simple to detect when the result of an UpdateCommand has been applied to the projection. For example, if we use the same trick and subscribe to the FindRestaurantQuery, we will be notified if the restaurant has been modified, but it may not be our command that triggered the modification (in the case where multiple processes are concurrently issuing update commands).
There seem to be two obvious ways to detect when a given update command has been applied to the projection:
1. Have a unique ID associated with every update command.
   - Subscribe to a query that is updated when the command ID has been applied to the projection.
   - Propagate the unique ID to the event that is applied by the aggregate.
   - When the projection receives the event, it can notify the query listener with the current state.
2. Before dispatching an update command, query the existing state of the projection.
   - Calculate the destination state given the contents of the update command.
In the case of (1): is there any situation (e.g. batching / snapshotting) where the event carrying the unique ID may be skipped over somehow, preventing the query listener from being notified?
Is there a more reliable / more idiomatic way to accomplish this use case?
Axon 4 with Spring Boot.
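The mechanics of option (1) can be sketched independently of Axon: the dispatcher registers a correlation ID before sending the command, and the projection signals the waiting caller when it applies the event carrying that ID. A framework-free Python simulation (all names are invented; in Axon this would typically map onto a subscription query keyed by the ID):

```python
import threading

class UpdateWaiter:
    """Lets a dispatcher block until the projection has applied its command."""

    def __init__(self):
        self._lock = threading.Lock()
        self._pending = {}  # correlation_id -> [Event, projected state slot]

    def register(self, correlation_id):
        # Call this BEFORE dispatching the command, to avoid a race where the
        # event is applied before anyone is listening.
        with self._lock:
            self._pending[correlation_id] = [threading.Event(), None]

    def notify(self, correlation_id, projected_state):
        # Called by the projection's event handler when it applies the event
        # carrying this correlation id.
        with self._lock:
            entry = self._pending.get(correlation_id)
        if entry:
            entry[1] = projected_state
            entry[0].set()

    def wait(self, correlation_id, timeout=5.0):
        entry = self._pending[correlation_id]
        if not entry[0].wait(timeout):
            raise TimeoutError("projection never saw correlation id")
        with self._lock:
            del self._pending[correlation_id]
        return entry[1]
```

The ID could live in explicit event attributes or, as the answer below suggests, in audit metadata; the wait/notify shape stays the same either way.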
Although fully asynchronous designs may be preferable for a number of reasons, it is a common scenario that back-end teams are forced to provide a synchronous REST API on top of asynchronous CQRS+ES back-ends.
The part of the demo application that is trying to solve this problem is located here https://github.com/idugalic/digital-restaurant/tree/master/drestaurant-apps/drestaurant-monolith-rest
The case you are mentioning is totally valid.
I would go with the option 1.
My only concern is that you have to introduce a new unique-ID attribute, associated with every update command, into the domain (events). In my opinion, this ID attribute does not have any domain/business value. There is already an audit (who, when) attribute associated with every event, and maybe you can use that to correlate commands and subscriptions. I believe there is more value in this solution (identity is part of the domain), if it is not too loose for your case.
Please note that queries have to be extended with audit information in this case (you will know who requested the query).

Bring back a workflow task to previous state in Alfresco activiti workflow

I would like to bring back a workflow task to previous state in Alfresco activiti workflow.
For example, there are two reviewers, A and B. The workflow is serial: A is the first reviewer and B is the second. After A accepts the task, the task is assigned to B. At that point, A would like to take the task back from B. Which API method should I use to implement this behavior? (Or is it not possible?)
What you mean is reassigning a task to another user, which in your case is the same user who did the first step.
You can do this as described here: http://forums.activiti.org/content/reassign-task-another-user
Take a look at the Share web component task-edit-header.js. There is a reassign button in Share which does a bit of what you're asking for. Check which calls Alfresco makes and reuse them.
Why not put an exclusiveGateway after B that evaluates a boolean? Depending on its value, the workflow will either complete or return to A.
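In BPMN/Activiti process XML, that gateway suggestion looks roughly like the fragment below (the task IDs, flow IDs, and the `approved` variable are placeholders, and the usual BPMN/Activiti namespace declarations are assumed on the enclosing process definition):

```xml
<userTask id="reviewA" name="Review by A" activiti:assignee="A" />
<userTask id="reviewB" name="Review by B" activiti:assignee="B" />
<sequenceFlow id="flowAtoB" sourceRef="reviewA" targetRef="reviewB" />

<exclusiveGateway id="afterB" />
<sequenceFlow id="flowBtoGw" sourceRef="reviewB" targetRef="afterB" />

<!-- B rejects: the task goes back to A -->
<sequenceFlow id="flowBack" sourceRef="afterB" targetRef="reviewA">
  <conditionExpression xsi:type="tFormalExpression">${approved == false}</conditionExpression>
</sequenceFlow>

<!-- B approves: the workflow completes -->
<sequenceFlow id="flowDone" sourceRef="afterB" targetRef="theEnd">
  <conditionExpression xsi:type="tFormalExpression">${approved == true}</conditionExpression>
</sequenceFlow>
<endEvent id="theEnd" />
```

Task B's form (or task listener) would set the `approved` process variable before the task completes.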

How do I check which values in my Form have changed before saving?

The situation is like this. We have a form with a large number of fields (over 30 spread over several tabs) and what I want to do is find which values have changed before saving with minimum impact on performance. What happens right now is, for editing, single records are queried from several databases. The values are passed over to the client side as value objects. At the moment they are not bound to any fields in the form.
My initial idea was to have a boolean flag for each field, set to true or false whenever that field changed. At save time the program would run through the list of flags to see which fields had changed. This seems more than a bit clunky to me, so I was thinking maybe it could be done on the server side. But then I don't want to go through each field one by one, checking to see which ones don't match the db records.
Any ideas on what to do here?
This is a very common problem for a lot of Flex applications. Because it happens so often there are a number of commercial implementations for Data Management. Query results are stored in entities and those entities are bound to a form on the client side. Whenever a field is updated, it will automatically perform the steps to persist the changes to the db and do rollbacks when requested.
Adobe LCDS Data Management - If you are dealing with a Java environment
WebOrb - If you are dealing with a .net, php, java, rails environment
Of course you can re-invent the wheel and roll your own: set up PropertyChangeEvent listeners on each field, listen for the change events as they are dispatched, and write handlers for each one.
This sounds exactly like what we're doing with one of the projects I'm working on for a client.
What we do is dupe the value objects once they come back to the UI. Then, when calling the update service, I send both the original object and the new object. In the service, I do a field-by-field compare on the server to determine which values should be sent to the database.
If you need to update every field/property conditionally based on whether or not it changed; then I don't see a way to avoid the check with every field/property. Even if you implement your Boolean idea and swap the flag in the UI whenever anything changes; you're still going to have to check those Boolean values when creating your query to determine what should be updated or not.
In my situation, three different databases are queried to create the value object that gets sent back to the UI. Field updates are saved in one of those databases and given first order of preference when doing the select. So, we have an explicit field-by-field comparison happening inside a stored procedure.
If you don't need field-by-field comparisons, but rather a "record by record" comparison, then the Boolean approach to let you know the record/value object has changed is going to save you some time and coding.
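The "dupe the value objects and compare" approach described above is easy to sketch in any language. A small Python illustration (the field names are invented):

```python
def changed_fields(original, edited):
    """Return {field: (old_value, new_value)} for every field whose value differs."""
    return {
        field: (original[field], edited[field])
        for field in original
        if original[field] != edited[field]
    }

if __name__ == "__main__":
    # 'original' is the dupe taken when the record arrived in the UI;
    # 'edited' is the object bound to the form at save time.
    original = {"name": "Alice", "title": "Engineer", "office": "NYC"}
    edited   = {"name": "Alice", "title": "Senior Engineer", "office": "NYC"}
    print(changed_fields(original, edited))  # only 'title' differs
```

Only the keys returned here would need to appear in the UPDATE statement, which is the "minimum impact" the question asks for.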

How to handle concurrency control in ASP.NET Dynamic Data?

I've been quite impressed with dynamic data and how easy and quick it is to get a simple site up and running. I'm planning on using it for a simple internal HR admin site for registering people's skills/degrees/etc.
I've been watching the intro videos at www.asp.net/dynamicdata and one thing they never mention is how to handle concurrency control.
It seems that DD does not handle it right out of the box (unless there is some setting I haven't seen), as I manually generated a change-conflict exception and the app failed without any user-friendly message.
Anybody know if DD handles it out of the box? Or do you have to somehow build it into the site?
Concurrency is not handled out of the box by DD.
One approach would be to implement this on the database side, by adding a "last updated" timestamp column (or other unique stamp, such as a GUID) to each table.
You then create an update trigger for each table. For each row being updated, is the "last updated" stamp passed in the same as the one on the row in the database?
If so, update the row, but give it a new "last updated" stamp.
If not, raise a specific "Data is out of date" exception.
On the client side, for each row you update, you'd need to refresh the "last updated" stamp.
In the client code you watch for the "Data is out of date" exception and display a helpful message to the user, asking them to refresh the data and re-submit their change.
Hope this helps.
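The same stamp check can also be folded into the UPDATE statement itself, with no trigger: include the stamp the client originally read in the WHERE clause, and treat "0 rows affected" as the "data is out of date" signal. A runnable sketch using sqlite3 and an integer version column (table and column names are invented; a SQL Server timestamp/rowversion column would play the same role):

```python
import sqlite3

def update_if_unchanged(conn, row_id, new_name, original_version):
    """Optimistic update: succeeds only if nobody bumped the version since we read it."""
    cur = conn.execute(
        "UPDATE person SET name = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_name, row_id, original_version),
    )
    if cur.rowcount == 0:
        # Someone else updated the row first (or the row is gone).
        raise RuntimeError("Data is out of date - please refresh and retry")
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, version INTEGER)")
    conn.execute("INSERT INTO person VALUES (1, 'Bob', 1)")
    update_if_unchanged(conn, 1, "Robert", 1)      # succeeds; version becomes 2
    try:
        update_if_unchanged(conn, 1, "Bobby", 1)   # stale version -> conflict
    except RuntimeError as e:
        print(e)
```

The client-side handling is the same as with the trigger approach: catch the conflict and ask the user to refresh and re-submit.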
It all depends on the definition: what do you mean by "out of the box"? Of course you have to write a fair amount of code to handle concurrency, but some features help us implement it.
My favorite model is "optimistic concurrency" based on the rowversion datatype of SQL Server. It is like a "last updated" timestamp, but you don't need an update trigger on each table. All updates of the corresponding "timestamp" column in your tables are made automatically by SQL Server on every update of data in the table row. I described it in my old answer, Concurrency handling of Sql transaction. I hope it will be helpful for you.
I was under the impression that Dynamic Data does the update on the underlying data source. Maybe you can specify the concurrency model (pessimistic/optimistic) on the data meta model that gets registered in the App_Init section. But you would probably get an "unable to save changes" error, so the default would be pessimistic: last in loses.
Sorry for the late reply. Yes, DD is very strong when it comes to fast project development. Not only that, DD has been further enhanced and included in .NET 4.0.
DD mostly works on LINQ to SQL, so I suggest you have a look at that part.
In LINQ to SQL, when you open the properties of a table column you will find the UpdateCheck property, which specifies whether to check the old value before updating to the new value. If you set it appropriately (e.g. Always), I think your problem will be handled.
Wishing you the best of luck. Let's learn from each other.
The solution given by Binary Worrier works, and it's widely used on platforms providing a GUI to merge changes (e.g. source-control programs, wiki engines, etc.). That way no user loses their changes. On the other hand, it requires a fair amount of code, or the use of external components or DLLs.
If you are not happy with that, another approach is simply to lock the record being edited. Nobody else will be able to edit that record until the user commits the changes or their session expires. It has pros and cons, but it requires little code compared with the first option.

Mate Framework - Check data before making remote call

Until recently I had been using Cairngorm as a framework for Flex. However, in this latest project I have switched to Mate. It's still confusing me a little, as I had kind of gotten used to leaving data in the model. I have a couple of components which rely on the same dataset (collection).
In the component, the creationComplete handler sends a 'GiveMeMyDataEvent', which is caught by one of the event maps. Now, in Cairngorm, in my command class I would have had a quick peek at the model to decide whether I needed to get the data from the server or not, and then either returned the data from the model or called the db.
How would I do this in Mate? Or is there a better way to go about this? I'm trying to reuse the data that has already been received from the server, but at the same time I'm not sure whether I have loaded the data or not. If a component which uses that same data has already been instantiated, then the answer is yes; otherwise no.
Any help/hints greatly appreciated.
Most things in Mate are indirect. You have managers that manage your data, and you set up injectors (which are bindings) between the managers and your views. The injectors make sure your views are synchronized with your managers. That way the views always have the latest data. Views don't get updated as a direct consequence of dispatching an event, but as an indirect consequence.
When you want to load new data you dispatch an event which is caught by an event map, which in turn calls some service, which loads data and returns it to the event map, and the event map sticks it into the appropriate manager.
When the manager gets updated the injectors make sure that the views are updated.
By using injectors you are guaranteed to always have the latest data in your views, so if the views have data the data is loaded -- unless you need to update periodically, in which case it's up to you to determine if data is stale and dispatch an event that triggers a service call, which triggers an update, which triggers the injectors to push the new data into the views again, and round it goes.
So, in short the answer to your question is that you need to make sure you use injectors properly. If this is a too high-level answer for you I know you can get more help in the Mate forums.
I ran into a similar situation with the app I am working on at the moment, and found that it is easily implemented in Mate once you start thinking in terms of two events.
The first event being something like DataEvent.REFRESH_MY_DATA. This event is handled by some DataManager, which can decide to either ignore it (since data is already present in the client and considered up to date), or the manager can dispatch an event like DataEvent.FETCH_MY_DATA.
The FETCH_MY_DATA event triggers a service call in the event map, which updates a value in the manager. This update is property-injected into the view, happy days :)
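Stripped of the Mate wiring, the two-event idea is just a staleness check inside the manager: every view dispatches the cheap refresh request, and only the manager decides when the expensive fetch actually happens. A small Python sketch (the names and the 5-minute threshold are invented):

```python
import time

class DataManager:
    """Caches one dataset; decides whether a refresh request needs a real fetch."""

    MAX_AGE_SECONDS = 300  # after this, cached data is considered stale

    def __init__(self, fetch):
        self._fetch = fetch        # stands in for the FETCH_MY_DATA service call
        self._data = None
        self._loaded_at = None

    def refresh(self):
        """Handle REFRESH_MY_DATA: fetch only if data is missing or stale."""
        stale = (
            self._data is None
            or time.monotonic() - self._loaded_at > self.MAX_AGE_SECONDS
        )
        if stale:
            self._data = self._fetch()
            self._loaded_at = time.monotonic()
        return self._data
```

In Mate, the returned value would instead be property-injected into every interested view, so all components sharing the dataset stay in sync without knowing whether a fetch occurred.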
