Hello, I'm struggling to understand the distinction between the DeadlineManager approach to scheduling future events and the EventScheduler, and what the best use case for each is.
Say I need to schedule a task to be performed in 24 hours, based on a command that happened today. Between now and then, another event or command could occur that makes the scheduled event obsolete, so I then need to cancel the scheduled event.
Can I use either interchangeably? If not, which is the best choice in this scenario, or is there not enough information? What would inform my decision to use one over the other?
The main difference between scheduling an event or a deadline is what you want to happen when your scheduled time has passed.
When you schedule an event, that event will always be added to the event store after the scheduled time has passed.
When you schedule a deadline, no event will be added directly; instead, a @DeadlineHandler-annotated method will be invoked, in which you can decide, based on the current state of the aggregate or saga, what you want to do (if anything). So unless you apply an event yourself in the deadline handler, there will be no interaction with the event store.
Note that both can also be cancelled before the scheduled time has passed: using the ScheduleToken returned when scheduling an event, or using the deadlineId and the deadline's name in the case of a deadline.
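For illustration, the schedule-then-cancel flow can be sketched in plain Java, with a ScheduledExecutorService standing in for the scheduler and the returned Future playing the role of Axon's ScheduleToken / deadlineId. This is an analogy, not Axon's API:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class ScheduleThenCancel {
    // Schedules a task for "later", then cancels it before it fires, the way a
    // later command would when the scheduled work becomes obsolete. Returns true
    // when the cancellation prevented the task from ever running.
    static boolean scheduleThenCancel() throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        AtomicBoolean fired = new AtomicBoolean(false);
        try {
            // 24 hours in the real scenario; milliseconds here to keep it runnable.
            ScheduledFuture<?> token = scheduler.schedule(
                    () -> fired.set(true), 300, TimeUnit.MILLISECONDS);

            // The obsoleting command arrives: cancel using the retained token.
            boolean cancelled = token.cancel(false);

            // Wait past the original deadline to show the task never ran.
            Thread.sleep(500);
            return cancelled && !fired.get();
        } finally {
            scheduler.shutdown();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("cancelled before firing: " + scheduleThenCancel());
    }
}
```

In Axon the same shape applies: keep the token (or deadline name plus deadlineId) somewhere reachable by the later command handler, and cancel from there.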
Some further information can be found in the reference guide:
https://docs.axoniq.io/reference-guide/configuring-infrastructure-components/deadlines
https://docs.axoniq.io/reference-guide/implementing-domain-logic/complex-business-transactions/deadline-handling
If I have a process looking similar to this, where there is an Approve user task and a multi-instance parallel Review user task. The business rule is: whenever the approver approves, then even if there are more reviewers available to review the (multi-)task, it should cancel all the remaining task instances (e.g. <completionCondition>${approved == true}</completionCondition>). How should I implement this scenario? Thanks.
You could add a signal boundary event on the multi-instance Review user task. After the Approve user task, you can add an intermediate signal throw event that triggers the signal boundary event. That way the multi-instance Review user task will be terminated when the Approve user task is completed.
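A minimal sketch of the BPMN elements involved (the signal, element IDs, and names are illustrative, and sequence flows are omitted):

```xml
<!-- Illustrative fragment: IDs and names are made up, sequence flows omitted -->
<signal id="approvedSignal" name="approved" />

<!-- The multi-instance Review task, with the completion condition from the question -->
<userTask id="reviewTask" name="Review">
  <multiInstanceLoopCharacteristics isSequential="false">
    <completionCondition>${approved == true}</completionCondition>
  </multiInstanceLoopCharacteristics>
</userTask>

<!-- Interrupting signal boundary event: terminates the remaining Review instances -->
<boundaryEvent id="approvedBoundary" attachedToRef="reviewTask" cancelActivity="true">
  <signalEventDefinition signalRef="approvedSignal" />
</boundaryEvent>

<!-- Placed after the Approve user task: throws the signal -->
<intermediateThrowEvent id="throwApproved">
  <signalEventDefinition signalRef="approvedSignal" />
</intermediateThrowEvent>
```

The cancelActivity="true" attribute is what makes the boundary event interrupting, so catching the signal terminates the attached task and its remaining instances.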
One word of warning when using the signal approach (which is, in my opinion, the right answer).
Notice in the below image that I am splitting the flow with a parallel gateway. If I simply used a parallel join, the process instance would never complete, because the parallel join never receives all the tokens it expects. You should use an inclusive join instead (as shown below), which recalculates the number of expected tokens and allows the flow through to the "Done" task.
Is it possible to get the time when an event has been scheduled to a QEventLoop (e.g. the QCoreApplication event loop)?
I have a situation where the main event loop is paused. When it is reactivated the events are fired, and I am interested in the time when the events were added to the queue. The events are not custom events but system (and other) events.
Regards,
It mainly depends on which system events you are interested in, since in some cases you already have the timestamp.
As an example, QInputEvent (base class for events that describe user input, like QMouseEvent, QKeyEvent, and so on) has the member method timestamp that:
Returns the window system's timestamp for this event.
In other words, that is a timestamp close to the time when the event was pushed into the event loop.
I have a box job that is dependent on another job finishing. The first job normally finishes by 11pm and my box job then kicks off and finishes in about 15 minutes. Occasionally, however, the job may not finish until much later. If it finishes later than 4am, I'd like to have it send an alert.
My admin told me that since it is dependent on a prior job, and not set to start at a specific time, it is not possible to set a time-based alert. Is this true? Does anybody have a workaround they can suggest? I'd rather not set the alert on the prior job (suggested by my admin) as that may not always catch those instances when my job runs longer.
Thanks!
You can set a max run alarm time, which will alert if that time is exceeded.
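In JIL this is the max_run_alarm attribute, which is specified in minutes. A sketch with an illustrative job name (300 minutes would roughly cover an 11pm start against a 4am cutoff):

```
update_job: my_box_job
max_run_alarm: 300
```

When the job's run time exceeds this value, AutoSys raises a MAX_RUN_ALARM for the job, which your monitoring can pick up.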
We ended up adding a job to the box with a start time of 4am that checks for the existence of the files the rest of the job creates. We also did this for the job's predecessors, to make sure we are notified if we are at risk of not finishing by 4am.
I'm looking at using Windows WF to implement a system, which includes the need to support workflows that have either transitions or activities that need to be scheduled to execute at either a certain date/time in the future, or after a certain period of time has elapsed.
I found a question along these lines already:
Windows Workflow Foundation - schedule activities to run at certain times
It suggests using the Delay activity and calculating the duration of the delay in order to proceed at the required time. I have a few additional questions, however, and don't have enough reputation to add comments, so I'm posting a secondary question:
How can I implement it such that it can recover from a crash? For example, let's say the task is currently in the delay task and the process hosting the workflow engine crashes. When it is restarted, will it resume waiting in the delay task, and trigger at the time required? Do you have to do something special to get this to happen?
Another question is, again let's say a workflow instance is already mid-way through the delay task. At that point, you suddenly need to alter the time/date at which the workflow progresses to the next activity/task. Is it possible to programmatically update the duration on the in-play delay task to effect this?
If you store started instances in the SQL instance store, you can add a Persist activity each time you want to force a save. After a crash, the activity will continue from the last persistence point.
You can replace the single big Delay with a While loop. In the condition you check whether you still need to delay; in the body you place a Delay activity with a short interval. You can add a Persist after that short Delay, and the instance will not wait the full time again after a crash (it depends on the While condition; you need to store the state of your instances somewhere, for example in a variable).
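As a language-neutral sketch of that While-with-short-Delays pattern (plain Java standing in for WF activities; the "persisted" wake-up time stands in for the SQL instance store): persist an absolute wake-up time and sleep in short slices, so a restart only waits out the remaining interval. Re-reading the persisted time on every iteration also addresses the second question, since updating it mid-delay changes when the workflow proceeds.

```java
import java.time.Duration;
import java.time.Instant;

public class ResumableDelay {
    // Stand-in for the SQL instance store: only the absolute wake-up time is kept.
    static volatile Instant persistedWakeUp;

    // Sleep in short slices, re-reading the persisted wake-up time each time
    // around, so a crash costs at most one slice and the target time can be
    // changed while the delay is in progress.
    static void delayUntil(Instant wakeUp, long sliceMillis) throws InterruptedException {
        persistedWakeUp = wakeUp; // "Persist" before starting the delay
        while (Instant.now().isBefore(persistedWakeUp)) {
            long remaining = Duration.between(Instant.now(), persistedWakeUp).toMillis();
            Thread.sleep(Math.max(1, Math.min(sliceMillis, remaining)));
            // In WF you would place a Persist activity here, after each short Delay.
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Instant wakeUp = Instant.now().plusMillis(200);
        delayUntil(wakeUp, 50);
        // Simulated restart: resuming against the persisted time returns at once,
        // because the wake-up moment has already passed.
        delayUntil(persistedWakeUp, 50);
        System.out.println("past deadline: " + !Instant.now().isBefore(wakeUp));
    }
}
```

The WF equivalent keeps the wake-up time in a workflow variable so that, after recovery from the instance store, the While condition still sees how much delay remains.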
Is it generally fast enough to make simple updates synchronously? For instance, with an ASP.NET web app, if I change the person's name... will I have any issues just updating the index synchronously as part of the "Save" mechanism?
OR is the only safe way to have some other asynchronous process to make the index updates?
We do updates both synchronously and asynchronously, depending on the kind of action the user is doing. We implemented the synchronous indexing in a way where we reuse the asynchronous code and just wait for some time for it to complete. We only wait for 2 seconds, which means that if it takes longer, the user will not see the update immediately, but normally the user will.
We configured logging so that we get notified whenever the "synchronous" indexing took longer than we were willing to wait, to get an idea of how often that happens. We hardly ever go over the 2-second limit.
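A minimal sketch of that bounded wait in plain Java (the executor and indexWithBoundedWait are illustrative, not the actual indexing code): submit the update asynchronously, then block the caller for at most the budget.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedSyncIndexing {
    // Runs the (illustrative) index update asynchronously, but blocks the caller
    // for at most timeoutMillis. Returns true if the update finished in time;
    // false means it keeps running in the background and the user simply will
    // not see it reflected immediately.
    static boolean indexWithBoundedWait(Runnable indexUpdate, long timeoutMillis) {
        ExecutorService indexer = Executors.newSingleThreadExecutor();
        try {
            Future<?> result = indexer.submit(indexUpdate);
            result.get(timeoutMillis, TimeUnit.MILLISECONDS);
            return true;
        } catch (TimeoutException e) {
            // Log here: this is how you learn how often the budget is exceeded.
            return false;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            indexer.shutdown(); // lets an in-flight update finish in the background
        }
    }

    public static void main(String[] args) {
        boolean fast = indexWithBoundedWait(() -> { /* quick update */ }, 2000);
        boolean slow = indexWithBoundedWait(() -> {
            try { Thread.sleep(500); } catch (InterruptedException ignored) { }
        }, 100);
        System.out.println("fast=" + fast + " slow=" + slow);
    }
}
```

The timeout branch is also the natural place for the logging mentioned above, since every false return is one "synchronous" update that missed its budget.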
If you are using a full-text session, you don't need to update the indexes explicitly. The full-text session takes care of indexing the updated entity.