I have an activity and would like to persist the workflow so that its state is saved to the database. Can you show me, or point me to, how this can be done?
There's a Persist activity in the toolbox under Runtime.
I don't think you can call Persist from inside an activity, as the workflow must be in a persistable state before it can persist.
I prefer to set _workflowApplication.PersistableIdle = WorkflowApplicationPersistableIdle;
Then put a Delay activity where you want to trigger the persist. The advantage is that you can do extra work in WorkflowApplicationPersistableIdle(), e.g.:
private PersistableIdleAction WorkflowApplicationPersistableIdle(WorkflowApplicationIdleEventArgs e)
{
    if (_canWeUnloadThisWorkflow)
    {
        _workflowController.RemoveWFManagerFromList(this);
        return PersistableIdleAction.Unload;
    }

    return PersistableIdleAction.None;
}
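For context, here is a rough sketch of how the host side might be wired up - the Delay activity is what makes the workflow go idle so PersistableIdle fires. The SqlWorkflowInstanceStore connection string and the MyWorkflow/StartWorkflow names are hypothetical:

using System.Activities;
using System.Activities.DurableInstancing;

// Inside the host class that owns _workflowApplication (hypothetical wiring)
private WorkflowApplication _workflowApplication;

private void StartWorkflow()
{
    _workflowApplication = new WorkflowApplication(new MyWorkflow());

    // Instance store backed by the standard SqlWorkflowInstanceStore schema
    _workflowApplication.InstanceStore = new SqlWorkflowInstanceStore(
        "Server=.;Initial Catalog=WFPersistence;Integrated Security=True");

    // Invoked whenever the workflow goes idle (e.g. at the Delay) in a persistable state
    _workflowApplication.PersistableIdle = WorkflowApplicationPersistableIdle;

    _workflowApplication.Run();
}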
We have an Axon application that stores new Orders. For each order state change (OrderStateChangedEvent) it schedules a couple of tasks. The tasks are triggered and processed by yet another saga (TaskSaga - out of scope for this question).
When I delete the projection database but leave the event store, then run the application again, the events are replayed (which is correct), but the tasks are duplicated.
I suppose this is because the OrderStateChangedEvent triggers a new set of ScheduleTaskCommands each time.
Since I'm new to Axon, I can't figure out how to avoid this duplication.
The event store runs on Axon Server.
The Spring Boot application autoconfigures the Axon components.
The projection database contains the projection tables and the Axon tables:
token_entry
saga_entry
association_value_entry
I suppose all the events are replayed because, by recreating the database, the Axon tables are gone (hence there is no record of the last applied event).
Am I missing something?
Should the token_entry/saga_entry/association_value_entry tables be part of the same database as the projection tables on each application node?
I thought the event store could be replayed onto a new application node's database at any time without changing the event history, so I could run as many nodes as I wish. Or that I could remove the projection DB at any time and run the application, which would cause the events to be projected into the fresh database again. Is this not true?
In general, my problem is that one event produces a command, which leads to new (duplicated) events being produced. Should I avoid this "chaining" of events to prevent duplication?
THANKS!
Axon configuration:
@Configuration
public class AxonConfig {

    @Bean
    public EventSourcingRepository<ApplicationAggregate> applicationEventSourcingRepository(EventStore eventStore) {
        return EventSourcingRepository.builder(ApplicationAggregate.class)
                .eventStore(eventStore)
                .build();
    }

    @Bean
    public SagaStore sagaStore(EntityManager entityManager) {
        return JpaSagaStore.builder().entityManagerProvider(new SimpleEntityManagerProvider(entityManager)).build();
    }
}
CreateOrderCommand is received by the Order aggregate (the fromCommand method just maps the command 1:1 to an event):
@CommandHandler
public OrderAggregate(CreateOrderCommand cmd) {
    apply(OrderCreatedEvent.fromCommand(cmd))
            .andThenApply(() -> OrderStateChangedEvent.builder()
                    .applicationId(cmd.getOrderId())
                    .newState(OrderState.NEW)
                    .build());
}
The Order aggregate sets its properties:
@EventSourcingHandler
protected void on(OrderCreatedEvent event) {
    id = event.getOrderId();
    // ... additional properties set
}

@EventSourcingHandler
protected void on(OrderStateChangedEvent event) {
    this.state = event.getNewState();
}
OrderStateChangedEvent is handled by a saga that schedules a couple of tasks for the order in that particular state:
private Map<String, TaskStatus> tasks = new HashMap<>();
private OrderState orderState;

@StartSaga
@SagaEventHandler(associationProperty = "orderId")
public void on(OrderStateChangedEvent event) {
    orderState = event.getNewState();
    List<OrderStateAwareTaskDefinition> tasksByState = taskService.getTasksByState(orderState);

    if (tasksByState.isEmpty()) {
        finishSaga(event.getOrderId());
    }

    tasksByState.stream()
            .map(task -> ScheduleTaskCommand.builder()
                    .orderId(event.getOrderId())
                    .taskId(IdentifierFactory.getInstance().generateIdentifier())
                    .targetState(orderState)
                    .taskName(task.getTaskName())
                    .build())
            .peek(command -> tasks.put(command.getTaskId(), SCHEDULED))
            .forEach(command -> commandGateway.send(command));
}
I think I can help you in this situation.
So, this happens because the TrackingToken used by the TrackingEventProcessor, which supplies all the events to your Saga instances, is initialized at the beginning of the event stream. Because of this, the TrackingEventProcessor will start from the beginning of time, thus dispatching all your commands a second time.
There are a couple of things you could do to resolve this.
Option 1: instead of wiping the entire database, only wipe the projection tables and leave the token table intact.
Option 2: configure the initialTrackingToken of the TrackingEventProcessor to start at the head of the event stream instead of the tail.
Option 1 would work out fine, but requires some delegation from an operations perspective. Option 2 leaves it in the hands of the developer, which is potentially a little safer than the other solution.
To adjust the token to start at the head, you can instantiate a TrackingEventProcessor with a TrackingEventProcessorConfiguration:
EventProcessingConfigurer configurer;

TrackingEventProcessorConfiguration trackingProcessorConfig =
        TrackingEventProcessorConfiguration.forSingleThreadedProcessing()
                .andInitialTrackingToken(StreamableMessageSource::createHeadToken);

configurer.registerTrackingEventProcessor("{class-name-of-saga}Processor",
        Configuration::eventStore,
        c -> trackingProcessorConfig);
You'd thus create the desired configuration for your Saga and call the andInitialTrackingToken() function, ensuring that a head token is created if no token is present yet.
I hope this helps you out Tomáš!
Steven's solution works like a charm, but only for Sagas. For those who want to achieve the same effect in a classic @EventHandler (to skip execution on replay), there is a way. First, find out how your tracking event processor is named - I found it in the Axon Dashboard (port 8024 on a running Axon Server); it is usually the location of the component carrying the @EventHandler annotation (the package name, to be precise). Then add the configuration as Steven indicated in his answer.
@Autowired
public void customConfig(EventProcessingConfigurer configurer) {
    // This prevents replaying events into the @EventHandler methods of this processor
    var trackingProcessorConfig = TrackingEventProcessorConfiguration
            .forSingleThreadedProcessing()
            .andInitialTrackingToken(StreamableMessageSource::createHeadToken);

    configurer.registerTrackingEventProcessor("com.domain.notreplayable",
            org.axonframework.config.Configuration::eventStore,
            c -> trackingProcessorConfig);
}
I am investigating the capabilities of the new delegate & event subscription pattern in AX 2012.
At the moment I am looking to detect when a particular field has been modified, for example when SalesTable.SalesStatus is changed to SalesStatus::Invoiced.
I have created the following post-event handler and attached it to the SalesTable.Update method:
public static void SalesTable_UpdatePosteventHandler(XppPrePostArgs _args)
{
    Info("Sales Update Event Handler");
}
Now I know I can get the SalesTable from the _args, but how can I detect a field has changed? I could really use a before & after version, which makes me think I am subscribing to the wrong event here.
If the update method does not itself change the field, you can use a pre-event handler on the update method. If you want to monitor the PriceGroup field on the CustTable table, create a class called CustTableEventHandler containing this method:
public static void preUpdateHandler(XppPrePostArgs _args)
{
    CustTable custTable = _args.getThis();

    if (custTable.PriceGroup != custTable.orig().PriceGroup)
        info(strFmt("Change price group from '%1' to '%2'", custTable.orig().PriceGroup, custTable.PriceGroup));
}
A post-event handler will not work, as orig() will return the changed record.
Also, if the record is updated using doUpdate, your handler is not called.
You could also override aosValidateUpdate on CustTable, which is called even if doUpdate is used. This method always runs on the AOS server.
public boolean aosValidateUpdate()
{
    boolean ret = super();

    if (this.PriceGroup != this.orig().PriceGroup)
        info(strFmt("Change price group from '%1' to '%2'", this.orig().PriceGroup, this.PriceGroup));

    return ret;
}
Yet another option would be a global change to the Application.eventUpdate method.
From the header of the method:
Serves as a callback that is called by the kernel when a record in a table is updated, provided that the kernel has been set up to monitor records in that table.
A developer can set up the kernel to call back on updates for a given table by inserting a record into the DatabaseLog kernel table with all fields set to relevant values, which includes the field logType set to EventUpdate. It is possible to set up that the kernel should call back whenever a record is updated or when a specific field is updated. This is very similar to how logUpdate is called and set up. The call of this method will be in the transaction in which the record is updated.
This method is used by the alert rule notification system. I would recommend against this approach unless you need a global change (like alert rules).
Alert rules can be extended as described here.
In my application, I have a form that users fill out, then gets approved by a manager. I have various types of forms that all use the same process, so the approval buttons are all done via a user control (which includes the functionality to update the data in the database and call the postback).
However, once I click the "Approve" button (which is in the user control), the form information doesn't update (it still says "unapproved"). A postback is definitely happening, but I'm not sure why the page isn't updating properly.
I can confirm that the changes are being made - when I manually reload the page, it is updated - but not on the postback.
What am I missing here?
My page:
protected void Page_Load(object sender, EventArgs e)
{
    int ID;
    // ensure that there's an ID set in the query string
    if (Int32.TryParse(Request.QueryString["ID"], out ID))
        PopulatePage(ID);
    else
        Response.Redirect("~/Default.aspx");
}

protected void PopulatePage(int ID)
{
    using (WOLinqClassesDataContext db = new WOLinqClassesDataContext())
    {
        lblStatus.Text = wo.Workorder.status;
        ....
    }
}
Page_Load happens before the code in the submit button's click handler runs. To check this, just set a couple of breakpoints. So the page loads the old data, since the new data has not been saved yet.
You should call a method that reloads the data from inside the OnClick handling of the Approve button, as sketched below.
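A minimal sketch of one way to do that, assuming the Approve button lives in a user control and the hosting page exposes the PopulatePage method shown above; the ApprovalControl, Approved event, btnApprove_Click and SaveApprovalToDatabase names are all hypothetical:

using System;

// User control code-behind: raise an event once the approval is saved
public partial class ApprovalControl : System.Web.UI.UserControl
{
    public event EventHandler Approved;

    protected void btnApprove_Click(object sender, EventArgs e)
    {
        SaveApprovalToDatabase();                 // the control's existing update logic

        if (Approved != null)
            Approved(this, EventArgs.Empty);      // tell the hosting page the data changed
    }

    private void SaveApprovalToDatabase() { /* existing update code */ }
}

// Page code-behind: reload the labels after the control reports the change
protected void ApprovalControl1_Approved(object sender, EventArgs e)
{
    int ID;
    if (Int32.TryParse(Request.QueryString["ID"], out ID))
        PopulatePage(ID);                         // re-reads the now-approved workorder
}

The Approved handler can be attached in the page markup (OnApproved="ApprovalControl1_Approved") or in code-behind; either way it runs after the control has written the change, so the labels pick up the new status on the same postback.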
After you've submitted the changes to the database, try running db.Refresh(RefreshMode.OverwriteCurrentValues) to force the changes to be reloaded into the data context.
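As a rough sketch of where that call could sit - the Workorders table and property names here are hypothetical stand-ins for the question's data context, and RefreshMode lives in System.Data.Linq:

using System.Data.Linq;
using System.Linq;

private void ApproveWorkorder(int workorderId)
{
    using (WOLinqClassesDataContext db = new WOLinqClassesDataContext())
    {
        var workorder = db.Workorders.Single(w => w.ID == workorderId);   // hypothetical names
        workorder.status = "Approved";
        db.SubmitChanges();

        // Re-read the row so the data context holds the committed values
        db.Refresh(RefreshMode.OverwriteCurrentValues, workorder);
    }
}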
On one PC, I have a data grid whose data provider is an ArrayCollection populated by a collection of records retrieved from a MySQL DB. On another PC, another component retrieves the same collection of records and updates them. Instantly, the data grid on PC one reflects the changes made on PC two, which is nice because the changes are pushed to all affected controls online. However, I want to detect the ArrayCollection changes in order to do some other things, but the CollectionEvent.COLLECTION_CHANGE event is not detected. Can you help please? Here is the code:
protected function doInit():void
{
    acLeave.addEventListener(CollectionEvent.COLLECTION_CHANGE, onAcLeaveChange);
}

protected function onAcLeaveChange(event:CollectionEvent):void
{
    // do something
}
I am using LCDS, and the Data Management Service already handles the data synchronization. That is why acLeave, the data provider of the data grid on the first PC, changed automatically: LCDS knows there is a client (PC one) online, so it pushes the changes to it. My point is that the data grid's data changed, and I want to detect that a change occurred so that I can do some other updates. Normally, to detect a data grid change, I can use the data grid's dataChange event or simply add a listener to the data provider for CollectionEvent.COLLECTION_CHANGE, but in this case, even though I can see that acLeave has changed, the event does not fire. Please help!
Hi, me again, and thanks for your advice. I have added the setter to acLeave, but I am still unable to listen for the collection change event. Here is the modified code:
private var _acLeave:ArrayCollection = new ArrayCollection();

[Bindable]
public function get acLeave():ArrayCollection
{
    return _acLeave;
}

public function set acLeave(value:ArrayCollection):void
{
    _acLeave = value;
}

protected function doInit():void
{
    acLeave.addEventListener(CollectionEvent.COLLECTION_CHANGE, onAcChange);
}

protected function dataGrid_creationCompleteHandler(event:FlexEvent):void
{
    getAllResult.token = leaverequestService.getAll();
    getAllResult.addEventListener(ResultEvent.RESULT, onGotResult);
}

protected function onGotResult(event:ResultEvent):void
{
    acLeave = getAllResult.lastResult;
}

protected function onAcChange(event:CollectionEvent):void
{
    // this never executed because unable to detect a change on acLeave
    Alert.show("acLeave Changed !");
}
If your collection change handler is not firing, I am guessing something like this is happening:
pc1 adds collection change listener
pc2 changes the collection
LCDS sends pc1 a brand new ArrayCollection object (not just the element that changed) to use
The original ArrayCollection you are listening to for the collectionChange event is not necessarily being changed; it is being replaced. So no collection change event occurs.
If you add a setter method for this ArrayCollection (acLeave in your example), then you will know whenever that happens - just make sure the setter also moves your COLLECTION_CHANGE listener from the old collection to the new one, otherwise you are still listening on the instance that was replaced. Technically, you should use both the collection change event and the setter to be able to detect all the cases in which the data could change.
Let's say I have a list of categories for navigation on a web app. Rather than selecting from the database for every user, should I add a function call in Application_OnStart in global.asax to fetch that data into an array or collection that is re-used over and over? If my data does not change at all (edit: very often), would this be the best way?
You can store the list items in the Application object. You are right about Application_OnStart(): simply call a method that reads your database and loads the data into the Application object.
In Global.asax
public class Global : System.Web.HttpApplication
{
    // The key to use in the rest of the web site to retrieve the list
    public const string ListItemKey = "MyListItemKey";

    // A class to hold your actual values. This can be used with databinding.
    public class NameValuePair
    {
        public string Name { get; set; }
        public string Value { get; set; }

        public NameValuePair(string Name, string Value)
        {
            this.Name = Name;
            this.Value = Value;
        }
    }

    protected void Application_Start(object sender, EventArgs e)
    {
        InitializeApplicationVariables();
    }

    protected void InitializeApplicationVariables()
    {
        List<NameValuePair> listItems = new List<NameValuePair>();

        // replace the following code with your data access code and fill in the collection
        listItems.Add(new NameValuePair("Item1", "1"));
        listItems.Add(new NameValuePair("Item2", "2"));
        listItems.Add(new NameValuePair("Item3", "3"));

        // load it in the application object
        Application[ListItemKey] = listItems;
    }
}
Now you can access your list in the rest of the project. For example, in default.aspx to load the values in a DropDownList:
<asp:DropDownList runat="server" ID="ddList" DataTextField="Name" DataValueField="Value"></asp:DropDownList>
And in the code-behind file:
protected override void OnPreInit(EventArgs e)
{
    ddList.DataSource = Application[Global.ListItemKey];
    ddList.DataBind();
    base.OnPreInit(e);
}
Premature optimization is evil. That being a given, if you are having performance problems in your application and you have "static" information that you want to display to your users, you can definitely load that data once into an array and store it in the Application object. You want to be careful to balance memory usage against optimization.
The problem you then run into is that changing the information stored in the database does not update the cached version. You would probably want some kind of last-changed date in the database that you store along with the cached data. That way you can query for the greatest change time and compare it; if it is newer than your cached date, you dump the cache and reload.
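A rough sketch of that idea, assuming a hypothetical Categories table with a LastModified column; the two private methods are placeholders for your real data access code:

using System;
using System.Collections.Generic;
using System.Web;

public static class CategoryCache
{
    private const string DataKey = "NavCategories";
    private const string StampKey = "NavCategoriesLastModified";

    public static List<string> GetCategories(HttpApplicationState app)
    {
        // Newest change recorded in the database, e.g. SELECT MAX(LastModified) FROM Categories
        DateTime lastModified = QueryMaxLastModified();

        object cachedStamp = app[StampKey];
        bool stale = cachedStamp == null || lastModified > (DateTime)cachedStamp;

        if (app[DataKey] == null || stale)
        {
            app.Lock();
            app[DataKey] = LoadCategoriesFromDatabase();   // replace with your data access code
            app[StampKey] = lastModified;
            app.UnLock();
        }

        return (List<string>)app[DataKey];
    }

    // Placeholders standing in for the real queries
    private static DateTime QueryMaxLastModified() { return DateTime.MinValue; }
    private static List<string> LoadCategoriesFromDatabase() { return new List<string>(); }
}

Calling CategoryCache.GetCategories(Application) from a page or control only re-reads the list when the stored LastModified value moves forward.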
If it never changes, it probably doesn't need to be in the database.
If there isn't much data, you might put it in the web.config, or as an Enum in your code.
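As a small hedged sketch of both ideas - the NavCategories appSettings key, the NavConfig class, and the NavCategory values are hypothetical:

using System;
using System.Configuration;

public static class NavConfig
{
    // Option A: a comma-separated list in web.config, e.g.
    //   <appSettings><add key="NavCategories" value="Books,Music,Movies"/></appSettings>
    public static string[] GetCategoriesFromConfig()
    {
        return ConfigurationManager.AppSettings["NavCategories"].Split(',');
    }
}

// Option B: if the set is truly fixed, an enum in code avoids configuration entirely
public enum NavCategory
{
    Books,
    Music,
    Movies
}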
Fetching everything may be expensive. Try lazy initialization: fetch only the requested data and then store it in the cache.
In an application variable.
Remember that an application variable can contain an object in .Net, so you can instantiate the object in the global.asax and then use it directly in the code.
Since application variables are held in memory, they are very quick (versus having to call the database).
For example:
// Create and load the profile object
x_siteprofile thisprofile = new x_siteprofile(Server.MapPath(String.Concat(config.Path, "templates/")));
Application.Add("SiteProfileX", thisprofile);
I would store the data in the Application Cache (Cache object). And I wouldn't preload it, I would load it the first time it is requested. What is nice about the Cache is that ASP.NET will manage it including giving you options for expiring the cache entry after file changes, a time period, etc. And since the items are kept in memory, the objects don't get serialized/deserialized so usage is very fast.
Usage is straightforward. There are Get and Add methods on the Cache object to retrieve and add items to the cache respectively.
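A minimal sketch of that lazy-load pattern - the cache key and the LoadCategoriesFromDatabase placeholder are hypothetical, and Insert is used here (rather than Add) so a refreshed list simply overwrites the old entry:

using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public static class NavCategoryCache
{
    private const string CacheKey = "NavCategories";

    public static List<string> GetCategories()
    {
        Cache cache = HttpRuntime.Cache;

        // Return the cached copy if it is still present
        List<string> categories = cache[CacheKey] as List<string>;
        if (categories != null)
            return categories;

        // First request (or the entry expired): load once and cache for an hour
        categories = LoadCategoriesFromDatabase();   // replace with your data access code
        cache.Insert(CacheKey, categories, null,
                     DateTime.UtcNow.AddHours(1), Cache.NoSlidingExpiration);

        return categories;
    }

    // Placeholder for the real query
    private static List<string> LoadCategoriesFromDatabase() { return new List<string>(); }
}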
I use a private static collection with a public static property that either returns it or loads it from the database.
Additionally, you can add a static DateTime that is set when the collection is loaded; if the data is requested again after a certain amount of time has passed, clear the static collection and requery it.
Caching is the way to go. And if you're into design patterns, take a look at the singleton.
Overall, however, I'm not sure I'd worry about it until you notice performance degradation.